Article

Approximation Results for Variational Inequalities Involving Pseudomonotone Bifunction in Real Hilbert Spaces

by Kanikar Muangchoo 1, Nasser Aedh Alreshidi 2 and Ioannis K. Argyros 3,*
1 Faculty of Science and Technology, Rajamangala University of Technology Phra Nakhon (RMUTP), 1381 Pracharat 1 Road, Wongsawang, Bang Sue, Bangkok 10800, Thailand
2 Department of Mathematics, College of Science, Northern Border University, Arar 73222, Saudi Arabia
3 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
* Author to whom correspondence should be addressed.
Symmetry 2021, 13(2), 182; https://doi.org/10.3390/sym13020182
Submission received: 8 January 2021 / Revised: 17 January 2021 / Accepted: 18 January 2021 / Published: 23 January 2021
(This article belongs to the Section Mathematics)

Abstract:
In this paper, we introduce two novel extragradient-like methods to solve variational inequalities in a real Hilbert space. The variational inequality problem is a general mathematical problem in the sense that it unifies several mathematical models, such as optimization problems, Nash equilibrium models, fixed point problems, and saddle point problems. The designed methods are analogous to the previously established two-step extragradient method for solving variational inequality problems in real Hilbert spaces. The proposed iterative methods use a step size rule based on local operator information rather than on its Lipschitz constant or any other line search procedure. Under mild conditions, such as Lipschitz continuity and (pseudo-)monotonicity of the bi-function, strong convergence of the described methods is established. Finally, we present several numerical experiments that demonstrate the performance and superiority of the designed methods.

1. Introduction

This paper concerns the classic variational inequality problem [1,2]. The variational inequality problem (VIP) for an operator $G : \mathcal{H} \to \mathcal{H}$ is defined in the following way:
Find $u^* \in \mathcal{C}$ such that $\langle G(u^*), y - u^* \rangle \geq 0, \quad \forall y \in \mathcal{C},$
where $\mathcal{C}$ is a non-empty, convex and closed subset of a real Hilbert space $\mathcal{H}$, and $\langle \cdot , \cdot \rangle$ and $\| \cdot \|$ denote the inner product and the induced norm on $\mathcal{H}$, respectively. Moreover, $\mathbb{R}$ and $\mathbb{N}$ are the sets of real numbers and natural numbers, respectively. It is important to note that, for any $\chi > 0$, the problem (VIP) is equivalent to solving the following fixed point problem:
Find $u^* \in \mathcal{C}$ such that $u^* = P_{\mathcal{C}}[\, u^* - \chi G(u^*) \,].$
The theory of variational inequalities has long been used as a powerful mechanism to study a wide range of topics in engineering, physics, optimization theory and economics. It is an important mathematical model that unifies a number of different mathematical problems, such as network equilibrium problems, necessary optimality conditions, systems of non-linear equations and complementarity problems (for further details, see [3,4,5,6,7,8,9]). The problem was introduced by Stampacchia [2] in 1964, who also demonstrated that the problem (VIP) holds a key position in non-linear analysis. Many researchers have studied and proposed projection methods (see [10,11,12,13,14,15,16,17,18,19,20] for more details). Korpelevich [13] and Antipin [21] established the following extragradient method:
$u_0 \in \mathcal{C}, \quad y_n = P_{\mathcal{C}}[\, u_n - \chi G(u_n) \,], \quad u_{n+1} = P_{\mathcal{C}}[\, u_n - \chi G(y_n) \,].$
Recently, the subgradient extragradient method was introduced by Censor et al. [10] for solving the problem (VIP) in a real Hilbert space. It has the following form:
$u_0 \in \mathcal{C}, \quad y_n = P_{\mathcal{C}}[\, u_n - \chi G(u_n) \,], \quad u_{n+1} = P_{H_n}[\, u_n - \chi G(y_n) \,],$
where
$H_n = \{ z \in \mathcal{H} : \langle u_n - \chi G(u_n) - y_n, \, z - y_n \rangle \leq 0 \}.$
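To make the two projection steps concrete, the following is a minimal NumPy sketch of one subgradient extragradient iteration. It assumes, purely for illustration, the box constraint $\mathcal{C} = [-10, 10]^m$ used later in Example 1; the function names are ours, not the paper's.

```python
import numpy as np

def project_box(u, lo=-10.0, hi=10.0):
    # Euclidean projection onto the box C = {u : lo <= u_i <= hi}
    return np.clip(u, lo, hi)

def project_halfspace(z, a, b):
    # Projection onto H = {z : <a, z> <= b}; identity if z already lies in H
    val = float(a @ z) - b
    if val <= 0:
        return z
    return z - (val / float(a @ a)) * a

def subgradient_extragradient_step(u, G, chi):
    # One iteration of the Censor-Gibali-Reich scheme: the second projection
    # is onto the halfspace H_n rather than onto C itself
    y = project_box(u - chi * G(u))
    a = u - chi * G(u) - y          # H_n = {z : <a, z - y> <= 0}
    b = float(a @ y)
    return project_halfspace(u - chi * G(y), a, b), y
```

With the monotone mapping $G(u) = u$ ($L = 1$) and $\chi = 0.5 < 1/L$, repeated application drives the iterate to the solution $u^* = 0$.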
It is important to mention that this well-established method carries two serious shortcomings: the fixed step size requires knowledge, or an approximation, of the Lipschitz constant of the underlying mapping, and the method converges only weakly in Hilbert spaces. From the computational point of view, a fixed step size may be questionable, and hence the convergence rate and usefulness of the method could be affected.
The main objective of this paper is to introduce inertial-type methods, which are used to strengthen the convergence rate of the iterative sequence. Such methods were originally motivated by the oscillator equation with damping and a conservative restoring force. This second-order dynamical system is known as the heavy ball with friction, and it was first studied by Polyak [22]. The main feature of an inertial-type method is that it uses the two previous iterates to compute the next one. Numerical results confirm that the inertial term usually improves the performance of a method in terms of the number of iterations and the elapsed time; inertial-type methods have been broadly studied in [23,24,25].
So a natural question arises:
“Is it possible to introduce a new inertial-type strongly convergent extragradient-like method with a monotone variable step size rule to solve the problem (VIP)?”
In this study, we provide a positive answer to this question; that is, the extragradient-type method still generates a strongly convergent sequence when a fixed or a variable step size rule is used to solve the problem (VIP) associated with pseudo-monotone mappings. Motivated by the works of Censor et al. [10] and Polyak [22], we introduce a new inertial extragradient-type method to solve the problem (VIP) in the setting of an infinite-dimensional real Hilbert space.
In brief, the key points of this paper are set out as follows:
(i)
We propose an inertial subgradient extragradient method with a fixed step size to solve variational inequality problems in real Hilbert spaces and prove that the generated sequence converges strongly.
(ii)
We also propose a second inertial subgradient extragradient method that uses a variable monotone step size rule, independent of the Lipschitz constant, to solve pseudo-monotone variational inequality problems.
(iii)
Numerical experiments corresponding to the proposed methods are presented for the verification of the theoretical findings, and we compare them with Algorithm 3.4 in [23], Algorithm 3.2 in [24] and Algorithm 3.1 in [25]. Our numerical data show that the proposed methods are useful and perform better than the existing ones.
The rest of the article is arranged as follows: Section 2 includes the basic definitions and important lemmas used in the manuscript. Section 3 consists of the inertial-type iterative schemes and the convergence analysis theorems. Section 4 provides the numerical findings that explain the behaviour of the new methods in comparison with other methods.

2. Preliminaries

In this section, we recall a number of important identities, definitions and relevant lemmas. The metric projection $P_{\mathcal{C}}(u_1)$ of $u_1 \in \mathcal{H}$ onto $\mathcal{C}$ is defined by
$P_{\mathcal{C}}(u_1) = \arg\min \{ \| u_1 - u_2 \| : u_2 \in \mathcal{C} \}.$
Next, we list some of the important properties of the projection mapping.
Lemma 1.
[26] Suppose that $P_{\mathcal{C}} : \mathcal{H} \to \mathcal{C}$ is a metric projection. Then, we have:
(i) $u_3 = P_{\mathcal{C}}(u_1)$ if and only if
$\langle u_1 - u_3, u_2 - u_3 \rangle \leq 0, \quad \forall u_2 \in \mathcal{C};$
(ii) $\| u_1 - P_{\mathcal{C}}(u_2) \|^2 + \| P_{\mathcal{C}}(u_2) - u_2 \|^2 \leq \| u_1 - u_2 \|^2, \quad \forall u_1 \in \mathcal{C}, \ u_2 \in \mathcal{H};$
(iii) $\| u_1 - P_{\mathcal{C}}(u_1) \| \leq \| u_1 - u_2 \|, \quad \forall u_2 \in \mathcal{C}, \ u_1 \in \mathcal{H}.$
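The projection properties of Lemma 1 can be checked numerically. The sketch below assumes, for illustration, the box $\mathcal{C} = [-1,1]^3$, for which `np.clip` is exactly the metric projection; it samples points of $\mathcal{C}$ and counts violations of properties (i) and (iii).

```python
import numpy as np

# Metric projection onto the (assumed) box C = [-1, 1]^3
project = lambda u: np.clip(u, -1.0, 1.0)

rng = np.random.default_rng(0)
u1 = rng.normal(scale=3.0, size=3)   # a random test point
u3 = project(u1)                     # u3 = P_C(u1)

violations = 0
for _ in range(1000):
    u2 = rng.uniform(-1.0, 1.0, size=3)      # arbitrary point of C
    # Lemma 1 (i): <u1 - u3, u2 - u3> <= 0 for every u2 in C
    if float((u1 - u3) @ (u2 - u3)) > 1e-12:
        violations += 1
    # Lemma 1 (iii): u3 is the closest point of C to u1
    if np.linalg.norm(u1 - u3) > np.linalg.norm(u1 - u2) + 1e-12:
        violations += 1
```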
Lemma 2.
[27] Let $\{a_n\} \subset [0, +\infty)$ be a sequence satisfying the following inequality:
$a_{n+1} \leq (1 - b_n) a_n + b_n r_n, \quad \forall n \in \mathbb{N}.$
Furthermore, let $\{b_n\} \subset (0,1)$ and $\{r_n\} \subset \mathbb{R}$ be two sequences such that
$\lim_{n \to +\infty} b_n = 0, \quad \sum_{n=1}^{+\infty} b_n = +\infty \quad \text{and} \quad \limsup_{n \to +\infty} r_n \leq 0.$
Then, $\lim_{n \to +\infty} a_n = 0$.
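Lemma 2 can be illustrated numerically. The sketch below uses the assumed choices $b_n = 1/(n+1)$ and $r_n = 1/\sqrt{n}$, which satisfy the hypotheses ($b_n \to 0$, $\sum b_n = +\infty$, $\limsup r_n = 0 \leq 0$), and observes that $a_n \to 0$; with this $b_n$ the recursion reduces to a Cesàro mean of the $r_k$.

```python
import math

a = 10.0                      # a_1, an arbitrary nonnegative start
for n in range(1, 200001):
    b = 1.0 / (n + 1)         # b_n -> 0, sum b_n = infinity
    r = 1.0 / math.sqrt(n)    # limsup r_n = 0
    a = (1.0 - b) * a + b * r # the recursion of Lemma 2
```

After $2 \times 10^5$ steps the iterate is of order $1/\sqrt{n}$, consistent with $a_n \to 0$.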
Lemma 3.
[28] Assume that $\{a_n\} \subset \mathbb{R}$ is a sequence and there exists a subsequence $\{n_i\}$ of $\{n\}$ such that
$a_{n_i} < a_{n_i + 1}, \quad \forall i \in \mathbb{N}.$
Then, there exists a non-decreasing sequence $\{m_k\} \subset \mathbb{N}$ such that $m_k \to +\infty$ as $k \to +\infty$, satisfying the following inequalities for all (sufficiently large) numbers $k \in \mathbb{N}$:
$a_{m_k} \leq a_{m_k + 1} \quad \text{and} \quad a_k \leq a_{m_k + 1}.$
Indeed, $m_k = \max \{ j \leq k : a_j \leq a_{j+1} \}$.
Next, we list some important identities that are used in the convergence analysis.
Lemma 4.
[26] For any $u_1, u_2 \in \mathcal{H}$ and $b \in \mathbb{R}$, the following relations hold:
(i) $\| b u_1 + (1-b) u_2 \|^2 = b \| u_1 \|^2 + (1-b) \| u_2 \|^2 - b(1-b) \| u_1 - u_2 \|^2;$
(ii) $\| u_1 + u_2 \|^2 \leq \| u_1 \|^2 + 2 \langle u_2, u_1 + u_2 \rangle.$
Lemma 5.
[29] Assume that $G : \mathcal{C} \to \mathcal{H}$ is a continuous and pseudo-monotone mapping. Then, $u^*$ solves the problem (VIP) if and only if $u^*$ solves the following (Minty) problem:
Find $u \in \mathcal{C}$ such that $\langle G(y), y - u \rangle \geq 0, \quad \forall y \in \mathcal{C}.$

3. Main Results

Now, we introduce the two inertial-type subgradient extragradient methods, which incorporate a monotone step size rule and an inertial term, and we provide the corresponding strong convergence theorems. The two main methods are outlined as Algorithms 1 and 2:
Algorithm 1 Inertial-type strongly convergent iterative scheme.
• Step 0: Choose arbitrary starting points $u_{-1}, u_0 \in \mathcal{C}$, $\theta > 0$ and $0 < \chi < \frac{1}{L}$. Moreover, choose $\{\phi_n\} \subset (0,1)$ satisfying the following conditions:
$\lim_{n \to +\infty} \phi_n = 0 \quad \text{and} \quad \sum_{n=1}^{+\infty} \phi_n = +\infty.$
• Step 1: Compute
$w_n = u_n + \theta_n (u_n - u_{n-1}) - \phi_n [\, u_n + \theta_n (u_n - u_{n-1}) \,],$
where $\theta_n$ is chosen such that
$0 \leq \theta_n \leq \hat{\theta}_n \quad \text{and} \quad \hat{\theta}_n = \begin{cases} \min \big\{ \frac{\theta}{2}, \frac{\epsilon_n}{\| u_n - u_{n-1} \|} \big\} & \text{if } u_n \neq u_{n-1}, \\ \frac{\theta}{2} & \text{otherwise}, \end{cases}$
where $\epsilon_n = o(\phi_n)$ is a positive sequence, i.e., $\lim_{n \to +\infty} \frac{\epsilon_n}{\phi_n} = 0$.
• Step 2: Compute
$y_n = P_{\mathcal{C}}( w_n - \chi G(w_n) ).$
If $w_n = y_n$, then STOP. Otherwise, go to Step 3.
• Step 3: Compute
$u_{n+1} = P_{H_n}( w_n - \chi G(y_n) ),$
where
$H_n = \{ z \in \mathcal{H} : \langle w_n - \chi G(w_n) - y_n, \, z - y_n \rangle \leq 0 \}.$
Set $n := n + 1$ and go back to Step 1.
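The steps above can be sketched as follows in NumPy. This is a minimal illustration, not the authors' code: it assumes the parameter choices $\phi_n = 1/(n+2)$ and $\epsilon_n = 1/(n+1)^2$ used later in Section 4, takes `proj_C` to be the metric projection onto $\mathcal{C}$, and all function and variable names are ours.

```python
import numpy as np

def algorithm1(G, proj_C, u0, L, theta=0.7, n_iter=500):
    # Sketch of Algorithm 1: inertial subgradient extragradient method
    # with fixed step size chi = 0.7/L < 1/L.
    chi = 0.7 / L
    u_prev, u = u0.copy(), u0.copy()
    for n in range(1, n_iter + 1):
        phi = 1.0 / (n + 2)                # phi_n -> 0, sum phi_n = inf
        eps = 1.0 / (n + 1) ** 2           # eps_n = o(phi_n)
        diff = float(np.linalg.norm(u - u_prev))
        th = theta / 2 if diff == 0 else min(theta / 2, eps / diff)
        v = u + th * (u - u_prev)          # inertial extrapolation
        w = v - phi * v                    # w_n = (1 - phi_n) v_n
        y = proj_C(w - chi * G(w))
        if np.allclose(w, y):
            return y                       # w_n = y_n: solution found
        a = w - chi * G(w) - y             # H_n = {z : <a, z - y> <= 0}
        z = w - chi * G(y)
        viol = float(a @ (z - y))
        if viol > 0:                       # project z onto the halfspace H_n
            z = z - (viol / float(a @ a)) * a
        u_prev, u = u, z
    return u
```

For the monotone mapping $G(u) = u$ on the box $[-10,10]^3$ ($L = 1$, $\Omega = \{0\}$), the iterates converge to $u^* = 0$.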
In order to study the convergence analysis, we assume that the following conditions are satisfied:
(B1) The solution set of the problem (VIP), denoted by $\Omega$, is non-empty;
(B2) The operator $G : \mathcal{H} \to \mathcal{H}$ is pseudo-monotone, i.e.,
$\langle G(y_1), y_2 - y_1 \rangle \geq 0 \ \Longrightarrow \ \langle G(y_2), y_1 - y_2 \rangle \leq 0, \quad \forall y_1, y_2 \in \mathcal{C};$
(B3) The operator $G : \mathcal{H} \to \mathcal{H}$ is Lipschitz continuous with a constant $L > 0$, i.e., there exists $L > 0$ such that
$\| G(y_1) - G(y_2) \| \leq L \| y_1 - y_2 \|, \quad \forall y_1, y_2 \in \mathcal{C};$
(B4) The operator $G : \mathcal{H} \to \mathcal{H}$ is weakly sequentially continuous, i.e., $\{G(u_n)\}$ converges weakly to $G(u)$ for every sequence $\{u_n\}$ that converges weakly to $u$.
Algorithm 2 Explicit Inertial-type strongly convergent iterative scheme.
• Step 0: Choose arbitrary starting points $u_{-1}, u_0 \in \mathcal{C}$, $\theta > 0$, $\mu \in (0,1)$ and $\chi_0 > 0$. Moreover, select $\{\phi_n\} \subset (0,1)$ satisfying the following conditions:
$\lim_{n \to +\infty} \phi_n = 0 \quad \text{and} \quad \sum_{n=1}^{+\infty} \phi_n = +\infty.$
• Step 1: Compute
$w_n = u_n + \theta_n (u_n - u_{n-1}) - \phi_n [\, u_n + \theta_n (u_n - u_{n-1}) \,],$
where $\theta_n$ is chosen such that
$0 \leq \theta_n \leq \hat{\theta}_n \quad \text{and} \quad \hat{\theta}_n = \begin{cases} \min \big\{ \frac{\theta}{2}, \frac{\epsilon_n}{\| u_n - u_{n-1} \|} \big\} & \text{if } u_n \neq u_{n-1}, \\ \frac{\theta}{2} & \text{otherwise}, \end{cases}$
where $\epsilon_n = o(\phi_n)$ is a positive sequence, i.e., $\lim_{n \to +\infty} \frac{\epsilon_n}{\phi_n} = 0$.
• Step 2: Compute
$y_n = P_{\mathcal{C}}( w_n - \chi_n G(w_n) ).$
If $w_n = y_n$, then STOP and $y_n$ is a solution. Otherwise, go to Step 3.
• Step 3: Compute
$u_{n+1} = P_{H_n}( w_n - \chi_n G(y_n) ),$
where
$H_n = \{ z \in \mathcal{H} : \langle w_n - \chi_n G(w_n) - y_n, \, z - y_n \rangle \leq 0 \}.$
• Step 4: Compute
$\chi_{n+1} = \begin{cases} \min \big\{ \chi_n, \frac{\mu \| w_n - y_n \|}{\| G(w_n) - G(y_n) \|} \big\} & \text{if } G(w_n) - G(y_n) \neq 0, \\ \chi_n & \text{otherwise}. \end{cases}$
Set $n := n + 1$ and go back to Step 1.
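Algorithm 2 can be sketched in the same way; the only change is the monotone non-increasing step size update, which removes the need for the Lipschitz constant. As before, this is an illustrative sketch with assumed parameter sequences $\phi_n = 1/(n+2)$ and $\epsilon_n = 1/(n+1)^2$, and the names are ours.

```python
import numpy as np

def algorithm2(G, proj_C, u0, chi0=0.2, mu=0.3, theta=0.7, n_iter=500):
    # Sketch of Algorithm 2: same inertial subgradient extragradient scheme,
    # but chi_{n+1} = min(chi_n, mu * ||w - y|| / ||G(w) - G(y)||).
    chi = chi0
    u_prev, u = u0.copy(), u0.copy()
    for n in range(1, n_iter + 1):
        phi = 1.0 / (n + 2)
        eps = 1.0 / (n + 1) ** 2
        diff = float(np.linalg.norm(u - u_prev))
        th = theta / 2 if diff == 0 else min(theta / 2, eps / diff)
        w = (1.0 - phi) * (u + th * (u - u_prev))
        y = proj_C(w - chi * G(w))
        a = w - chi * G(w) - y             # H_n = {z : <a, z - y> <= 0}
        z = w - chi * G(y)
        viol = float(a @ (z - y))
        if viol > 0 and float(a @ a) > 0:
            z = z - (viol / float(a @ a)) * a
        gdiff = float(np.linalg.norm(G(w) - G(y)))
        if gdiff > 0:                      # monotone step size update
            chi = min(chi, mu * float(np.linalg.norm(w - y)) / gdiff)
        u_prev, u = u, z
    return u
```

On the same test problem as before ($G(u) = u$ on $[-10,10]^3$), the iterates again converge to $u^* = 0$ without $L$ ever being supplied.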
Lemma 6.
Assume that $G : \mathcal{H} \to \mathcal{H}$ satisfies the conditions (B1)–(B4) and let $\{u_n\}$ be the sequence generated by Algorithm 1. Then, for each $u^* \in \Omega$, we have
$\| u_{n+1} - u^* \|^2 \leq \| w_n - u^* \|^2 - (1 - \chi L) \| w_n - y_n \|^2 - (1 - \chi L) \| u_{n+1} - y_n \|^2.$
Proof. 
First, consider the following:
$\| u_{n+1} - u^* \|^2 = \| P_{H_n}[w_n - \chi G(y_n)] - u^* \|^2 = \| P_{H_n}[w_n - \chi G(y_n)] + [w_n - \chi G(y_n)] - [w_n - \chi G(y_n)] - u^* \|^2$
$= \| [w_n - \chi G(y_n)] - u^* \|^2 + \| P_{H_n}[w_n - \chi G(y_n)] - [w_n - \chi G(y_n)] \|^2 + 2 \langle P_{H_n}[w_n - \chi G(y_n)] - [w_n - \chi G(y_n)], \, [w_n - \chi G(y_n)] - u^* \rangle.$
It is given that $u^* \in \Omega \subset \mathcal{C} \subset H_n$, so that
$\| P_{H_n}[w_n - \chi G(y_n)] - [w_n - \chi G(y_n)] \|^2 + \langle P_{H_n}[w_n - \chi G(y_n)] - [w_n - \chi G(y_n)], \, [w_n - \chi G(y_n)] - u^* \rangle = \langle [w_n - \chi G(y_n)] - P_{H_n}[w_n - \chi G(y_n)], \, u^* - P_{H_n}[w_n - \chi G(y_n)] \rangle \leq 0,$
which implies that
$\langle P_{H_n}[w_n - \chi G(y_n)] - [w_n - \chi G(y_n)], \, [w_n - \chi G(y_n)] - u^* \rangle \leq - \| P_{H_n}[w_n - \chi G(y_n)] - [w_n - \chi G(y_n)] \|^2.$
By the use of expressions (4) and (6), we obtain
$\| u_{n+1} - u^* \|^2 \leq \| w_n - \chi G(y_n) - u^* \|^2 - \| P_{H_n}[w_n - \chi G(y_n)] - [w_n - \chi G(y_n)] \|^2 \leq \| w_n - u^* \|^2 - \| w_n - u_{n+1} \|^2 + 2 \chi \langle G(y_n), u^* - u_{n+1} \rangle.$
Since $u^* \in \Omega$, we have
$\langle G(u^*), y - u^* \rangle \geq 0, \quad \text{for all } y \in \mathcal{C}.$
By the use of the pseudo-monotonicity of the mapping $G$ on $\mathcal{C}$, we obtain
$\langle G(y), y - u^* \rangle \geq 0, \quad \text{for all } y \in \mathcal{C}.$
Taking $y = y_n \in \mathcal{C}$, we obtain
$\langle G(y_n), y_n - u^* \rangle \geq 0.$
Thus, we have
$\langle G(y_n), u^* - u_{n+1} \rangle = \langle G(y_n), u^* - y_n \rangle + \langle G(y_n), y_n - u_{n+1} \rangle \leq \langle G(y_n), y_n - u_{n+1} \rangle.$
By the use of expressions (7) and (8), we get
$\| u_{n+1} - u^* \|^2 \leq \| w_n - u^* \|^2 - \| w_n - u_{n+1} \|^2 + 2 \chi \langle G(y_n), y_n - u_{n+1} \rangle = \| w_n - u^* \|^2 - \| w_n - y_n + y_n - u_{n+1} \|^2 + 2 \chi \langle G(y_n), y_n - u_{n+1} \rangle$
$= \| w_n - u^* \|^2 - \| w_n - y_n \|^2 - \| y_n - u_{n+1} \|^2 + 2 \langle w_n - \chi G(y_n) - y_n, \, u_{n+1} - y_n \rangle.$
Since $u_{n+1} = P_{H_n}[w_n - \chi G(y_n)]$, we have
$2 \langle w_n - \chi G(y_n) - y_n, \, u_{n+1} - y_n \rangle = 2 \langle w_n - \chi G(w_n) - y_n, \, u_{n+1} - y_n \rangle + 2 \chi \langle G(w_n) - G(y_n), \, u_{n+1} - y_n \rangle \leq 2 \chi L \| w_n - y_n \| \| u_{n+1} - y_n \| \leq \chi L \| w_n - y_n \|^2 + \chi L \| u_{n+1} - y_n \|^2.$
Combining expressions (9) and (10), we obtain
$\| u_{n+1} - u^* \|^2 \leq \| w_n - u^* \|^2 - (1 - \chi L) \| w_n - y_n \|^2 - (1 - \chi L) \| u_{n+1} - y_n \|^2.$
   □
Theorem 1.
Let $\{u_n\}$ be a sequence generated by Algorithm 1 and let the conditions (B1)–(B4) hold. Then, $\{u_n\}$ converges strongly to $u^* = P_{\Omega}(0) \in \Omega$.
Proof. 
It is given in expression (3) that
$\lim_{n \to +\infty} \frac{\theta_n}{\phi_n} \| u_n - u_{n-1} \| \leq \lim_{n \to +\infty} \frac{\epsilon_n}{\phi_n} = 0.$
By the use of the definition of $\{w_n\}$ and inequality (12), we get
$\| w_n - u^* \| = \| u_n + \theta_n (u_n - u_{n-1}) - \phi_n u_n - \phi_n \theta_n (u_n - u_{n-1}) - u^* \| = \| (1 - \phi_n)(u_n - u^*) + (1 - \phi_n) \theta_n (u_n - u_{n-1}) - \phi_n u^* \| \leq (1 - \phi_n) \| u_n - u^* \| + (1 - \phi_n) \theta_n \| u_n - u_{n-1} \| + \phi_n \| u^* \|$
$\leq (1 - \phi_n) \| u_n - u^* \| + \phi_n M_1,$
where
$(1 - \phi_n) \frac{\theta_n}{\phi_n} \| u_n - u_{n-1} \| + \| u^* \| \leq M_1.$
By the use of Lemma 6, we obtain
$\| u_{n+1} - u^* \|^2 \leq \| w_n - u^* \|^2, \quad \forall n \in \mathbb{N}.$
Combining (14) with (15), we obtain
$\| u_{n+1} - u^* \| \leq (1 - \phi_n) \| u_n - u^* \| + \phi_n M_1 \leq \max \{ \| u_n - u^* \|, M_1 \} \leq \cdots \leq \max \{ \| u_0 - u^* \|, M_1 \}.$
Thus, we conclude that $\{u_n\}$ is a bounded sequence. Indeed, by (14) we have
$\| w_n - u^* \|^2 \leq (1 - \phi_n)^2 \| u_n - u^* \|^2 + \phi_n^2 M_1^2 + 2 M_1 \phi_n (1 - \phi_n) \| u_n - u^* \| \leq \| u_n - u^* \|^2 + \phi_n \big[ \phi_n M_1^2 + 2 M_1 (1 - \phi_n) \| u_n - u^* \| \big] \leq \| u_n - u^* \|^2 + \phi_n M_2,$
for some $M_2 > 0$. Combining expressions (11) and (17), we have
$\| u_{n+1} - u^* \|^2 \leq \| u_n - u^* \|^2 + \phi_n M_2 - (1 - \chi L) \| w_n - y_n \|^2 - (1 - \chi L) \| u_{n+1} - y_n \|^2.$
The Lipschitz continuity and pseudo-monotonicity of $G$ imply that $\Omega$ is a closed and convex set. Since $u^* = P_{\Omega}(0)$, using Lemma 1 (i) we have
$\langle 0 - u^*, \, y - u^* \rangle \leq 0, \quad \forall y \in \Omega.$
The rest of the proof is divided into the following parts:  
Case 1: Suppose there exists a number $N_1 \in \mathbb{N}$ such that
$\| u_{n+1} - u^* \| \leq \| u_n - u^* \|, \quad \forall n \geq N_1.$
Thus, the above implies that $\lim_{n \to +\infty} \| u_n - u^* \|$ exists; let $\lim_{n \to +\infty} \| u_n - u^* \| = l$ for some $l \geq 0$. From expression (18), we have
$(1 - \chi L) \| w_n - y_n \|^2 + (1 - \chi L) \| u_{n+1} - y_n \|^2 \leq \| u_n - u^* \|^2 + \phi_n M_2 - \| u_{n+1} - u^* \|^2.$
Due to the existence of the limit of $\| u_n - u^* \|$ and $\phi_n \to 0$, we infer that
$\| w_n - y_n \| \to 0 \quad \text{and} \quad \| u_{n+1} - y_n \| \to 0 \quad \text{as } n \to +\infty.$
By the use of expression (22), we have
$\lim_{n \to +\infty} \| w_n - u_{n+1} \| \leq \lim_{n \to +\infty} \| w_n - y_n \| + \lim_{n \to +\infty} \| y_n - u_{n+1} \| = 0.$
Next, we evaluate
$\| w_n - u_n \| = \| u_n + \theta_n (u_n - u_{n-1}) - \phi_n [\, u_n + \theta_n (u_n - u_{n-1}) \,] - u_n \| \leq \theta_n \| u_n - u_{n-1} \| + \phi_n \| u_n \| + \phi_n \theta_n \| u_n - u_{n-1} \| = \phi_n \frac{\theta_n}{\phi_n} \| u_n - u_{n-1} \| + \phi_n \| u_n \| + \phi_n^2 \frac{\theta_n}{\phi_n} \| u_n - u_{n-1} \| \to 0.$
Thus, the above implies that
$\lim_{n \to +\infty} \| u_n - u_{n+1} \| \leq \lim_{n \to +\infty} \| u_n - w_n \| + \lim_{n \to +\infty} \| w_n - u_{n+1} \| = 0.$
The above guarantees that the sequences $\{w_n\}$ and $\{y_n\}$ are also bounded. The reflexivity of $\mathcal{H}$ and the boundedness of $\{u_n\}$ guarantee that there exists a subsequence $\{u_{n_k}\}$ such that $u_{n_k} \rightharpoonup \hat{u} \in \mathcal{H}$ as $k \to +\infty$. Next, we prove that $\hat{u} \in \Omega$. Since $y_{n_k} = P_{\mathcal{C}}[w_{n_k} - \chi G(w_{n_k})]$, we equivalently have
$\langle w_{n_k} - \chi G(w_{n_k}) - y_{n_k}, \, y - y_{n_k} \rangle \leq 0, \quad \forall y \in \mathcal{C}.$
The inequality described above implies that
$\langle w_{n_k} - y_{n_k}, \, y - y_{n_k} \rangle \leq \chi \langle G(w_{n_k}), \, y - y_{n_k} \rangle, \quad \forall y \in \mathcal{C}.$
Thus, we obtain
$\frac{1}{\chi} \langle w_{n_k} - y_{n_k}, \, y - y_{n_k} \rangle + \langle G(w_{n_k}), \, y_{n_k} - w_{n_k} \rangle \leq \langle G(w_{n_k}), \, y - w_{n_k} \rangle, \quad \forall y \in \mathcal{C}.$
The boundedness of the sequence $\{w_{n_k}\}$ implies that $\{G(w_{n_k})\}$ is also bounded. Using $\lim_{k \to \infty} \| w_{n_k} - y_{n_k} \| = 0$ and letting $k \to \infty$ in (28), we obtain
$\liminf_{k \to \infty} \langle G(w_{n_k}), \, y - w_{n_k} \rangle \geq 0, \quad \forall y \in \mathcal{C}.$
Moreover, we have
$\langle G(y_{n_k}), \, y - y_{n_k} \rangle = \langle G(y_{n_k}) - G(w_{n_k}), \, y - w_{n_k} \rangle + \langle G(w_{n_k}), \, y - w_{n_k} \rangle + \langle G(y_{n_k}), \, w_{n_k} - y_{n_k} \rangle.$
Since $\lim_{k \to \infty} \| w_{n_k} - y_{n_k} \| = 0$ and $G$ is $L$-Lipschitz continuous on $\mathcal{H}$, we obtain
$\lim_{k \to \infty} \| G(w_{n_k}) - G(y_{n_k}) \| = 0,$
which, together with (30) and (31), gives
$\liminf_{k \to \infty} \langle G(y_{n_k}), \, y - y_{n_k} \rangle \geq 0, \quad \forall y \in \mathcal{C}.$
Let us consider a decreasing sequence of positive numbers $\{\epsilon_k\}$ that converges to zero. For each $k$, we denote by $m_k$ the smallest positive integer such that
$\langle G(w_{n_i}), \, y - w_{n_i} \rangle + \epsilon_k \geq 0, \quad \forall i \geq m_k.$
Since $\{\epsilon_k\}$ is decreasing, $\{m_k\}$ is increasing.
Case I: If there is a subsequence $\{w_{n_{m_{k_j}}}\}$ of $\{w_{n_{m_k}}\}$ such that $G(w_{n_{m_{k_j}}}) = 0$ for all $j$, then letting $j \to \infty$ we obtain
$\langle G(\hat{u}), \, y - \hat{u} \rangle = \lim_{j \to \infty} \langle G(w_{n_{m_{k_j}}}), \, y - \hat{u} \rangle = 0.$
Thus, $\hat{u} \in \mathcal{C}$ and hence $\hat{u} \in \Omega$.
Case II: If there exists $N_0 \in \mathbb{N}$ such that $G(w_{n_{m_k}}) \neq 0$ for all $n_{m_k} \geq N_0$, consider
$\Xi_{n_{m_k}} = \frac{G(w_{n_{m_k}})}{\| G(w_{n_{m_k}}) \|^2}, \quad \forall n_{m_k} \geq N_0.$
Due to the above definition, we obtain
$\langle G(w_{n_{m_k}}), \, \Xi_{n_{m_k}} \rangle = 1, \quad \forall n_{m_k} \geq N_0.$
Moreover, by expressions (33) and (36), for all $n_{m_k} \geq N_0$ we have
$\langle G(w_{n_{m_k}}), \, y + \epsilon_k \Xi_{n_{m_k}} - w_{n_{m_k}} \rangle \geq 0.$
Due to the pseudo-monotonicity of $G$, for $n_{m_k} \geq N_0$ we have
$\langle G(y + \epsilon_k \Xi_{n_{m_k}}), \, y + \epsilon_k \Xi_{n_{m_k}} - w_{n_{m_k}} \rangle \geq 0.$
For all $n_{m_k} \geq N_0$, we have
$\langle G(y), \, y - w_{n_{m_k}} \rangle \geq \langle G(y) - G(y + \epsilon_k \Xi_{n_{m_k}}), \, y + \epsilon_k \Xi_{n_{m_k}} - w_{n_{m_k}} \rangle - \epsilon_k \langle G(y), \, \Xi_{n_{m_k}} \rangle.$
Since $\{w_{n_k}\}$ converges weakly to $\hat{u} \in \mathcal{C}$ and $G$ is sequentially weakly continuous on the set $\mathcal{C}$, the sequence $\{G(w_{n_k})\}$ converges weakly to $G(\hat{u})$. Suppose that $G(\hat{u}) \neq 0$; by the weak lower semicontinuity of the norm, we have
$0 < \| G(\hat{u}) \| \leq \liminf_{k \to \infty} \| G(w_{n_k}) \|.$
Since $\{w_{n_{m_k}}\} \subset \{w_{n_k}\}$ and $\lim_{k \to \infty} \epsilon_k = 0$, we have
$0 \leq \lim_{k \to \infty} \| \epsilon_k \Xi_{n_{m_k}} \| = \lim_{k \to \infty} \frac{\epsilon_k}{\| G(w_{n_{m_k}}) \|} \leq \frac{0}{\| G(\hat{u}) \|} = 0.$
Letting $k \to \infty$ in (39), we obtain
$\langle G(y), \, y - \hat{u} \rangle \geq 0, \quad \forall y \in \mathcal{C}.$
By the use of the Minty Lemma 5, we infer that $\hat{u} \in \Omega$. Next, we have
$\limsup_{n \to +\infty} \langle u^*, \, u^* - u_n \rangle = \lim_{k \to +\infty} \langle u^*, \, u^* - u_{n_k} \rangle = \langle u^*, \, u^* - \hat{u} \rangle \leq 0.$
By the use of $\lim_{n \to +\infty} \| u_{n+1} - u_n \| = 0$, (43) implies that
$\limsup_{n \to +\infty} \langle u^*, \, u^* - u_{n+1} \rangle \leq \limsup_{n \to +\infty} \langle u^*, \, u^* - u_n \rangle + \limsup_{n \to +\infty} \langle u^*, \, u_n - u_{n+1} \rangle \leq 0.$
Consider the expression (13); we have
$\| w_n - u^* \|^2 = \| u_n + \theta_n (u_n - u_{n-1}) - \phi_n u_n - \phi_n \theta_n (u_n - u_{n-1}) - u^* \|^2 = \| (1 - \phi_n)(u_n - u^*) + (1 - \phi_n) \theta_n (u_n - u_{n-1}) - \phi_n u^* \|^2$
$\leq \| (1 - \phi_n)(u_n - u^*) + (1 - \phi_n) \theta_n (u_n - u_{n-1}) \|^2 + 2 \phi_n \langle u^*, \, u^* - w_n \rangle$
$\leq (1 - \phi_n)^2 \| u_n - u^* \|^2 + (1 - \phi_n)^2 \theta_n^2 \| u_n - u_{n-1} \|^2 + 2 \theta_n (1 - \phi_n)^2 \| u_n - u^* \| \| u_n - u_{n-1} \| + 2 \phi_n \langle u^*, \, u_{n+1} - w_n \rangle + 2 \phi_n \langle u^*, \, u^* - u_{n+1} \rangle$
$\leq (1 - \phi_n) \| u_n - u^* \|^2 + \theta_n^2 \| u_n - u_{n-1} \|^2 + 2 \theta_n (1 - \phi_n) \| u_n - u^* \| \| u_n - u_{n-1} \| + 2 \phi_n \| u^* \| \| w_n - u_{n+1} \| + 2 \phi_n \langle u^*, \, u^* - u_{n+1} \rangle$
$= (1 - \phi_n) \| u_n - u^* \|^2 + \phi_n \Big[ \theta_n \| u_n - u_{n-1} \| \frac{\theta_n}{\phi_n} \| u_n - u_{n-1} \| + 2 (1 - \phi_n) \| u_n - u^* \| \frac{\theta_n}{\phi_n} \| u_n - u_{n-1} \| + 2 \| u^* \| \| w_n - u_{n+1} \| + 2 \langle u^*, \, u^* - u_{n+1} \rangle \Big].$
From expressions (15) and (45), we obtain
$\| u_{n+1} - u^* \|^2 \leq (1 - \phi_n) \| u_n - u^* \|^2 + \phi_n \Big[ \theta_n \| u_n - u_{n-1} \| \frac{\theta_n}{\phi_n} \| u_n - u_{n-1} \| + 2 (1 - \phi_n) \| u_n - u^* \| \frac{\theta_n}{\phi_n} \| u_n - u_{n-1} \| + 2 \| u^* \| \| w_n - u_{n+1} \| + 2 \langle u^*, \, u^* - u_{n+1} \rangle \Big].$
By the use of (23), (44) and (46) and applying Lemma 2, we conclude that $\lim_{n \to +\infty} \| u_n - u^* \| = 0$.
Case 2: Suppose there exists a subsequence $\{n_i\}$ of $\{n\}$ such that
$\| u_{n_i} - u^* \| \leq \| u_{n_i + 1} - u^* \|, \quad \forall i \in \mathbb{N}.$
By using Lemma 3, there exists a sequence $\{m_k\} \subset \mathbb{N}$ with $m_k \to +\infty$ such that
$\| u_{m_k} - u^* \| \leq \| u_{m_k + 1} - u^* \| \quad \text{and} \quad \| u_k - u^* \| \leq \| u_{m_k + 1} - u^* \|, \quad \text{for all } k \in \mathbb{N}.$
As in Case 1, the relation (21) gives
$(1 - \chi L) \| w_{m_k} - y_{m_k} \|^2 + (1 - \chi L) \| u_{m_k + 1} - y_{m_k} \|^2 \leq \| u_{m_k} - u^* \|^2 + \phi_{m_k} M_2 - \| u_{m_k + 1} - u^* \|^2.$
Due to $\phi_{m_k} \to 0$, we deduce the following:
$\lim_{k \to +\infty} \| w_{m_k} - y_{m_k} \| = \lim_{k \to +\infty} \| u_{m_k + 1} - y_{m_k} \| = 0.$
It follows that
$\lim_{k \to +\infty} \| u_{m_k + 1} - w_{m_k} \| \leq \lim_{k \to +\infty} \| u_{m_k + 1} - y_{m_k} \| + \lim_{k \to +\infty} \| y_{m_k} - w_{m_k} \| = 0.$
Next, we evaluate
$\| w_{m_k} - u_{m_k} \| \leq \theta_{m_k} \| u_{m_k} - u_{m_k - 1} \| + \phi_{m_k} \| u_{m_k} \| + \phi_{m_k} \theta_{m_k} \| u_{m_k} - u_{m_k - 1} \| = \phi_{m_k} \frac{\theta_{m_k}}{\phi_{m_k}} \| u_{m_k} - u_{m_k - 1} \| + \phi_{m_k} \| u_{m_k} \| + \phi_{m_k}^2 \frac{\theta_{m_k}}{\phi_{m_k}} \| u_{m_k} - u_{m_k - 1} \| \to 0.$
It follows that
$\lim_{k \to +\infty} \| u_{m_k} - u_{m_k + 1} \| \leq \lim_{k \to +\infty} \| u_{m_k} - w_{m_k} \| + \lim_{k \to +\infty} \| w_{m_k} - u_{m_k + 1} \| = 0.$
By using the same argument as in Case 1, we obtain
$\limsup_{k \to +\infty} \langle u^*, \, u^* - u_{m_k + 1} \rangle \leq 0.$
By using the expressions (46) and (47), we obtain
$\| u_{m_k + 1} - u^* \|^2 \leq (1 - \phi_{m_k}) \| u_{m_k} - u^* \|^2 + \phi_{m_k} \Big[ \theta_{m_k} \| u_{m_k} - u_{m_k - 1} \| \frac{\theta_{m_k}}{\phi_{m_k}} \| u_{m_k} - u_{m_k - 1} \| + 2 (1 - \phi_{m_k}) \| u_{m_k} - u^* \| \frac{\theta_{m_k}}{\phi_{m_k}} \| u_{m_k} - u_{m_k - 1} \| + 2 \| u^* \| \| w_{m_k} - u_{m_k + 1} \| + 2 \langle u^*, \, u^* - u_{m_k + 1} \rangle \Big]$
$\leq (1 - \phi_{m_k}) \| u_{m_k + 1} - u^* \|^2 + \phi_{m_k} \Big[ \theta_{m_k} \| u_{m_k} - u_{m_k - 1} \| \frac{\theta_{m_k}}{\phi_{m_k}} \| u_{m_k} - u_{m_k - 1} \| + 2 (1 - \phi_{m_k}) \| u_{m_k} - u^* \| \frac{\theta_{m_k}}{\phi_{m_k}} \| u_{m_k} - u_{m_k - 1} \| + 2 \| u^* \| \| w_{m_k} - u_{m_k + 1} \| + 2 \langle u^*, \, u^* - u_{m_k + 1} \rangle \Big].$
Thus, the above implies that
$\| u_{m_k + 1} - u^* \|^2 \leq \theta_{m_k} \| u_{m_k} - u_{m_k - 1} \| \frac{\theta_{m_k}}{\phi_{m_k}} \| u_{m_k} - u_{m_k - 1} \| + 2 (1 - \phi_{m_k}) \| u_{m_k} - u^* \| \frac{\theta_{m_k}}{\phi_{m_k}} \| u_{m_k} - u_{m_k - 1} \| + 2 \| u^* \| \| w_{m_k} - u_{m_k + 1} \| + 2 \langle u^*, \, u^* - u_{m_k + 1} \rangle.$
Since $\phi_{m_k} \to 0$ and the sequence $\{ \| u_{m_k} - u^* \| \}$ is bounded, expressions (53) and (55) imply that
$\| u_{m_k + 1} - u^* \|^2 \to 0, \quad \text{as } k \to +\infty.$
This implies that
$\lim_{k \to +\infty} \| u_k - u^* \|^2 \leq \lim_{k \to +\infty} \| u_{m_k + 1} - u^* \|^2 \leq 0.$
As a consequence, $u_n \to u^*$. This completes the proof of the theorem.    □
Lemma 7.
Assume that $G : \mathcal{H} \to \mathcal{H}$ satisfies the conditions (B1)–(B4) and let $\{u_n\}$ be the sequence generated by Algorithm 2. Then, for each $u^* \in \Omega$, we have
$\| u_{n+1} - u^* \|^2 \leq \| w_n - u^* \|^2 - \Big( 1 - \frac{\mu \chi_n}{\chi_{n+1}} \Big) \| w_n - y_n \|^2 - \Big( 1 - \frac{\mu \chi_n}{\chi_{n+1}} \Big) \| u_{n+1} - y_n \|^2.$
Proof. 
Consider that
$\| u_{n+1} - u^* \|^2 = \| P_{H_n}[w_n - \chi_n G(y_n)] - u^* \|^2 = \| [w_n - \chi_n G(y_n)] - u^* \|^2 + \| P_{H_n}[w_n - \chi_n G(y_n)] - [w_n - \chi_n G(y_n)] \|^2 + 2 \langle P_{H_n}[w_n - \chi_n G(y_n)] - [w_n - \chi_n G(y_n)], \, [w_n - \chi_n G(y_n)] - u^* \rangle.$
It is given that $u^* \in \Omega \subset \mathcal{C} \subset H_n$, so we obtain
$\| P_{H_n}[w_n - \chi_n G(y_n)] - [w_n - \chi_n G(y_n)] \|^2 + \langle P_{H_n}[w_n - \chi_n G(y_n)] - [w_n - \chi_n G(y_n)], \, [w_n - \chi_n G(y_n)] - u^* \rangle = \langle [w_n - \chi_n G(y_n)] - P_{H_n}[w_n - \chi_n G(y_n)], \, u^* - P_{H_n}[w_n - \chi_n G(y_n)] \rangle \leq 0,$
which implies that
$\langle P_{H_n}[w_n - \chi_n G(y_n)] - [w_n - \chi_n G(y_n)], \, [w_n - \chi_n G(y_n)] - u^* \rangle \leq - \| P_{H_n}[w_n - \chi_n G(y_n)] - [w_n - \chi_n G(y_n)] \|^2.$
By using expressions (59) and (61), we obtain
$\| u_{n+1} - u^* \|^2 \leq \| w_n - \chi_n G(y_n) - u^* \|^2 - \| P_{H_n}[w_n - \chi_n G(y_n)] - [w_n - \chi_n G(y_n)] \|^2 \leq \| w_n - u^* \|^2 - \| w_n - u_{n+1} \|^2 + 2 \chi_n \langle G(y_n), \, u^* - u_{n+1} \rangle.$
Since $u^* \in \Omega$, we have
$\langle G(u^*), \, y - u^* \rangle \geq 0, \quad \text{for all } y \in \mathcal{C}.$
By the use of condition (B2), we have
$\langle G(y), \, y - u^* \rangle \geq 0, \quad \text{for all } y \in \mathcal{C}.$
Taking $y = y_n \in \mathcal{C}$, we obtain
$\langle G(y_n), \, y_n - u^* \rangle \geq 0.$
Thus, we have
$\langle G(y_n), \, u^* - u_{n+1} \rangle = \langle G(y_n), \, u^* - y_n \rangle + \langle G(y_n), \, y_n - u_{n+1} \rangle \leq \langle G(y_n), \, y_n - u_{n+1} \rangle.$
Combining expressions (62) and (63), we obtain
$\| u_{n+1} - u^* \|^2 \leq \| w_n - u^* \|^2 - \| w_n - u_{n+1} \|^2 + 2 \chi_n \langle G(y_n), \, y_n - u_{n+1} \rangle = \| w_n - u^* \|^2 - \| w_n - y_n + y_n - u_{n+1} \|^2 + 2 \chi_n \langle G(y_n), \, y_n - u_{n+1} \rangle = \| w_n - u^* \|^2 - \| w_n - y_n \|^2 - \| y_n - u_{n+1} \|^2 + 2 \langle w_n - \chi_n G(y_n) - y_n, \, u_{n+1} - y_n \rangle.$
Note that $u_{n+1} = P_{H_n}[w_n - \chi_n G(y_n)]$; by the definition of $\chi_{n+1}$, we have
$2 \langle w_n - \chi_n G(y_n) - y_n, \, u_{n+1} - y_n \rangle = 2 \langle w_n - \chi_n G(w_n) - y_n, \, u_{n+1} - y_n \rangle + 2 \chi_n \langle G(w_n) - G(y_n), \, u_{n+1} - y_n \rangle \leq 2 \chi_n \| G(w_n) - G(y_n) \| \| u_{n+1} - y_n \| \leq \frac{2 \mu \chi_n}{\chi_{n+1}} \| w_n - y_n \| \| u_{n+1} - y_n \| \leq \frac{\mu \chi_n}{\chi_{n+1}} \| w_n - y_n \|^2 + \frac{\mu \chi_n}{\chi_{n+1}} \| u_{n+1} - y_n \|^2.$
Combining expressions (64) and (65), we obtain
$\| u_{n+1} - u^* \|^2 \leq \| w_n - u^* \|^2 - \| w_n - y_n \|^2 - \| y_n - u_{n+1} \|^2 + \frac{\mu \chi_n}{\chi_{n+1}} \big[ \| w_n - y_n \|^2 + \| u_{n+1} - y_n \|^2 \big] = \| w_n - u^* \|^2 - \Big( 1 - \frac{\mu \chi_n}{\chi_{n+1}} \Big) \| w_n - y_n \|^2 - \Big( 1 - \frac{\mu \chi_n}{\chi_{n+1}} \Big) \| u_{n+1} - y_n \|^2.$    □
Theorem 2.
Let $\{u_n\}$ be a sequence generated by Algorithm 2 and let the conditions (B1)–(B4) hold. Then, $\{u_n\}$ converges strongly to $u^* = P_{\Omega}(0) \in \Omega$.
Proof. 
From Lemma 7, we have
$\| u_{n+1} - u^* \|^2 \leq \| w_n - u^* \|^2 - \Big( 1 - \frac{\mu \chi_n}{\chi_{n+1}} \Big) \| w_n - y_n \|^2 - \Big( 1 - \frac{\mu \chi_n}{\chi_{n+1}} \Big) \| u_{n+1} - y_n \|^2.$
Since $\chi_n \to \chi$, for any $\epsilon \in (0, 1 - \mu)$ we have
$\lim_{n \to \infty} \Big( 1 - \frac{\mu \chi_n}{\chi_{n+1}} \Big) = 1 - \mu > \epsilon > 0.$
Therefore, there exists $N_1^* \in \mathbb{N}$ such that
$1 - \frac{\mu \chi_n}{\chi_{n+1}} > \epsilon > 0, \quad \forall n \geq N_1^*.$
This implies that
$\| u_{n+1} - u^* \|^2 \leq \| w_n - u^* \|^2, \quad \forall n \geq N_1^*.$
Next, we follow the same steps as in the proof of Theorem 1. □

4. Numerical Illustrations

This section examines four numerical experiments that show the efficacy of the proposed algorithms. Some of these experiments give a detailed picture of how suitable control parameters can be chosen, while others show the advantages of the proposed methods over existing ones in the literature.
Example 1.
Firstly, consider the HpHard problem taken from [30]; this example has been studied by many authors in numerical experiments (see [31,32,33] for details). Let $G : \mathbb{R}^m \to \mathbb{R}^m$ be a mapping defined by
$G(u) = M u + q,$
where $q \in \mathbb{R}^m$ and
$M = N N^{T} + B + D,$
where $B$ is an $m \times m$ skew-symmetric matrix, $N$ is an $m \times m$ matrix and $D$ is an $m \times m$ positive definite diagonal matrix. The set $\mathcal{C}$ is taken in the following way:
$\mathcal{C} = \{ u \in \mathbb{R}^m : -10 \leq u_i \leq 10 \}.$
It is clear that $G$ is monotone and Lipschitz continuous with $L = \| M \|$. For $q = 0$, the solution set of the corresponding variational inequality problem is $\Omega = \{ 0 \}$. In this experiment, the initial points are $u_{-1} = u_0 = (1, 1, \dots, 1)$ and the stopping criterion is $D_n = \| w_n - y_n \| \leq 10^{-4}$. The numerical findings of these methods are shown in Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6 and Table 1. The control parameters are taken as follows:
(i) Algorithm 3.4 in [23] (shortly, MT-EgM): $\chi_0 = 0.20$, $\theta = 0.70$, $\mu = 0.30$, $\phi_n = \frac{1}{n+2}$, $\tau_n = \frac{1}{(n+1)^2}$, $\theta_n = \frac{5}{10}(1 - \phi_n)$;
(ii) Algorithm 3.2 in [24] (shortly, VT1-EgM): $\tau_0 = 0.20$, $\theta = 0.50$, $\mu = 0.50$, $\phi_n = \frac{1}{n+1}$, $\epsilon_n = \frac{1}{(n+1)^2}$, $f(u) = \frac{u}{3}$;
(iii) Algorithm 3.1 in [25] (shortly, VT2-EgM): $\chi = \frac{0.7}{L}$, $\theta = 0.70$, $\phi_n = \frac{1}{n+2}$, $\tau_n = \frac{1}{(n+1)^2}$, $f(u) = \frac{u}{3}$;
(iv) Algorithm 1 (shortly, I1-EgA): $\chi = \frac{0.7}{L}$, $\theta = 0.70$, $\phi_n = \frac{1}{n+2}$, $\epsilon_n = \frac{1}{(n+1)^2}$;
(v) Algorithm 2 (shortly, I2-EgA): $\chi_0 = 0.20$, $\mu = 0.30$, $\theta = 0.70$, $\phi_n = \frac{1}{n+2}$, $\epsilon_n = \frac{1}{(n+1)^2}$.
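The HpHard data of Example 1 can be generated as in the following sketch (a small $m = 5$ instance; the sampling ranges for the random entries are our assumption, since the source does not fix them). The monotonicity of $G(u) = Mu$ follows because $x^{T} B x = 0$ for skew-symmetric $B$, so the quadratic form of $M$ reduces to that of the positive definite symmetric part $N N^{T} + D$.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 5
N = rng.uniform(-2.0, 2.0, size=(m, m))
A = rng.uniform(-2.0, 2.0, size=(m, m))
B = A - A.T                                  # skew-symmetric: B^T = -B
D = np.diag(rng.uniform(0.1, 2.0, size=m))   # positive definite diagonal
M = N @ N.T + B + D
G = lambda u: M @ u                          # q = 0, so Omega = {0}

L = float(np.linalg.norm(M, 2))              # Lipschitz constant L = ||M||
# smallest eigenvalue of the symmetric part of M (positive => G monotone)
min_eig = float(np.min(np.linalg.eigvalsh(0.5 * (M + M.T))))
```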
Example 2.
Assume that $\mathcal{H} = L^2([0,1])$ is a Hilbert space with the inner product
$\langle u, y \rangle = \int_0^1 u(t) y(t) \, dt, \quad \forall u, y \in \mathcal{H},$
and the induced norm
$\| u \| = \Big( \int_0^1 | u(t) |^2 \, dt \Big)^{1/2}.$
Consider the unit ball $\mathcal{C} := \{ u \in L^2([0,1]) : \| u \| \leq 1 \}$. Let $G : \mathcal{C} \to \mathcal{H}$ be defined by
$G(u)(t) = \int_0^1 \big( u(t) - H(t,s) f(u(s)) \big) \, ds + g(t),$
where
$H(t,s) = \frac{2 t s e^{t+s}}{e \sqrt{e^2 - 1}}, \quad f(u) = \cos u, \quad g(t) = \frac{2 t e^t}{e \sqrt{e^2 - 1}}.$
It can be seen that $G$ is Lipschitz continuous with Lipschitz constant $L = 2$ and monotone. Figure 7, Figure 8 and Figure 9 and Table 2 show the numerical results for different choices of $u_0$. The control parameters are taken as follows:
(i) Algorithm 3.4 in [23] (shortly, MT-EgM): $\chi_0 = 0.25$, $\theta = 0.75$, $\mu = 0.35$, $\tau_n = \frac{1}{(n+1)^2}$, $\phi_n = \frac{1}{2(n+2)}$, $\theta_n = \frac{6}{10}(1 - \phi_n)$;
(ii) Algorithm 3.2 in [24] (shortly, VT1-EgM): $\tau_0 = 0.25$, $\theta = 0.75$, $\mu = 0.35$, $\epsilon_n = \frac{1}{(n+1)^2}$, $\phi_n = \frac{1}{2(n+2)}$, $f(u) = \frac{u}{4}$;
(iii) Algorithm 3.1 in [25] (shortly, VT2-EgM): $\chi = \frac{0.75}{L}$, $\theta = 0.75$, $\tau_n = \frac{1}{(n+1)^2}$, $\phi_n = \frac{1}{2(n+2)}$, $f(u) = \frac{u}{4}$;
(iv) Algorithm 1 (shortly, I1-EgA): $\chi = \frac{0.75}{L}$, $\theta = 0.75$, $\epsilon_n = \frac{1}{(n+1)^2}$, $\phi_n = \frac{1}{2(n+2)}$;
(v) Algorithm 2 (shortly, I2-EgA): $\chi_0 = 0.25$, $\mu = 0.35$, $\theta = 0.75$, $\epsilon_n = \frac{1}{(n+1)^2}$, $\phi_n = \frac{1}{2(n+2)}$.
Example 3.
Consider the Kojima–Shindo problem, where the constraint set $\mathcal{C}$ is
$\mathcal{C} = \{ u \in \mathbb{R}^4 : 1 \leq u_i \leq 5, \ i = 1, 2, 3, 4 \},$
and the mapping $G : \mathbb{R}^4 \to \mathbb{R}^4$ is defined by
G ( u ) = u 1 + u 2 + u 3 + u 4 4 u 2 u 3 u 4 u 1 + u 2 + u 3 + u 4 4 u 1 u 3 u 4 u 1 + u 2 + u 3 + u 4 4 u 1 u 2 u 4 u 1 + u 2 + u 3 + u 4 4 u 1 u 2 u 3 .
It is easy to see that $G$ is not monotone on the set $\mathcal{C}$. By using the Monte Carlo approach [34], it can be shown that $G$ is pseudo-monotone on $\mathcal{C}$. This problem has the unique solution $u^* = (5, 5, 5, 5)^T$. In general, it is a very difficult task to verify the pseudo-monotonicity of a mapping $G$ in practice. Here, we employ the Monte Carlo approach based on the definition of pseudo-monotonicity: generate a large number of pairs of points $u$ and $y$ uniformly in $\mathcal{C}$ satisfying $G(u)^T (y - u) \geq 0$, and then check whether $G(y)^T (y - u) \geq 0$. Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8 show the numerical results for different values of $u_0$. The control parameters are taken as follows:
(i) Algorithm 3.4 in [23] (shortly, MT-EgM): $\chi_0 = 0.05$, $\theta = 0.70$, $\mu = 0.33$, $\tau_n = \frac{1}{(n+1)^2}$, $\phi_n = \frac{1}{50(n+2)}$, $\theta_n = \frac{6}{10}(1 - \phi_n)$;
(ii) Algorithm 3.2 in [24] (shortly, VT1-EgM): $\tau_0 = 0.05$, $\theta = 0.70$, $\mu = 0.33$, $\epsilon_n = \frac{1}{(n+1)^2}$, $\phi_n = \frac{1}{50(n+2)}$, $f(u) = \frac{u}{3}$;
(iii) Algorithm 3.1 in [25] (shortly, VT2-EgM): $\chi = \frac{0.7}{L}$, $\theta = 0.70$, $\tau_n = \frac{1}{(n+1)^2}$, $\phi_n = \frac{1}{50(n+2)}$, $f(u) = \frac{u}{3}$;
(iv) Algorithm 1 (shortly, I1-EgA): $\chi = \frac{0.7}{L}$, $\theta = 0.70$, $\epsilon_n = \frac{1}{(n+1)^2}$, $\phi_n = \frac{1}{50(n+2)}$;
(v) Algorithm 2 (shortly, I2-EgA): $\chi_0 = 0.05$, $\mu = 0.33$, $\theta = 0.70$, $\epsilon_n = \frac{1}{(n+1)^2}$, $\phi_n = \frac{1}{50(n+2)}$.
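The Monte Carlo check described above can be sketched as a generic routine. For the demonstration we use, instead of the Kojima–Shindo mapping itself, a simple mapping known to be pseudo-monotone but not monotone, $G(u) = u / (1 + \|u\|^2)$ (a positive pointwise rescaling of the identity); the function names and the sampling box are our illustrative assumptions.

```python
import numpy as np

def is_pseudomonotone_mc(G, sample_C, trials=20000, tol=1e-10, seed=0):
    # Monte Carlo check from the definition: draw pairs (u, y) in C; whenever
    # <G(u), y - u> >= 0, pseudo-monotonicity demands <G(y), y - u> >= 0.
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        u, y = sample_C(rng), sample_C(rng)
        if G(u) @ (y - u) >= 0 and G(y) @ (y - u) < -tol:
            return False          # counterexample found
    return True                   # no violation observed (evidence, not proof)

# Demonstration mapping: pseudo-monotone but not monotone
G_demo = lambda u: u / (1.0 + u @ u)
sample_box = lambda rng: rng.uniform(1.0, 5.0, size=4)   # points of [1,5]^4
```

Note that a `True` result is only statistical evidence: the test can refute pseudo-monotonicity by exhibiting a violating pair, but it cannot prove it.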
Example 4.
The last example is taken from [35], where $G : \mathbb{R}^2 \to \mathbb{R}^2$ is defined by
$G(u) = \begin{pmatrix} 0.5 u_1 u_2 - 2 u_2 - 10^7 \\ -4 u_1 + 0.1 u_2^2 - 10^7 \end{pmatrix},$
where $\mathcal{C} = \{ u \in \mathbb{R}^2 : (u_1 - 2)^2 + (u_2 - 2)^2 \leq 1 \}$. It can easily be seen that $G$ is Lipschitz continuous with $L = 5$ and that $G$ is not monotone on $\mathcal{C}$ but is pseudo-monotone. This problem has the unique solution $u^* = (2.707, 2.707)^T$. Figure 10, Figure 11, Figure 12 and Figure 13 and Table 9 show the numerical findings for different values of $u_0$. The control parameters are taken as follows:
(i) Algorithm 3.4 in [23] (shortly, MT-EgM): $\chi_0 = 0.35$, $\theta = 0.80$, $\mu = 0.55$, $\tau_n = \frac{1}{(n+1)^2}$, $\phi_n = \frac{1}{100(n+2)}$, $\theta_n = \frac{6}{10}(1 - \phi_n)$;
(ii) Algorithm 3.2 in [24] (shortly, VT1-EgM): $\tau_0 = 0.35$, $\theta = 0.80$, $\mu = 0.55$, $\epsilon_n = \frac{1}{(n+1)^2}$, $\phi_n = \frac{1}{100(n+1)}$, $f(u) = \frac{u}{5}$;
(iii) Algorithm 3.1 in [25] (shortly, VT2-EgM): $\chi = \frac{0.8}{L}$, $\theta = 0.80$, $\tau_n = \frac{1}{(n+1)^2}$, $\phi_n = \frac{1}{100(n+2)}$, $f(u) = \frac{u}{5}$;
(iv) Algorithm 1 (shortly, I1-EgA): $\chi = \frac{0.8}{L}$, $\theta = 0.80$, $\epsilon_n = \frac{1}{(n+1)^2}$, $\phi_n = \frac{1}{100(n+2)}$;
(v) Algorithm 2 (shortly, I2-EgA): $\chi_0 = 0.35$, $\mu = 0.55$, $\theta = 0.80$, $\epsilon_n = \frac{1}{(n+1)^2}$, $\phi_n = \frac{1}{100(n+2)}$.
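Because the solution of this example is known, it makes a convenient sanity check. The sketch below runs a plain (non-inertial) extragradient iteration on the disc constraint; note that the signs in the reconstructed mapping are our assumption, and the step size is taken extremely small because $\|G\|$ is of order $10^7$ on $\mathcal{C}$.

```python
import numpy as np

def proj_disc(u, c=np.array([2.0, 2.0]), r=1.0):
    # Euclidean projection onto C = {u : ||u - c|| <= r}
    d = u - c
    n = float(np.linalg.norm(d))
    return u if n <= r else c + (r / n) * d

def G(u):
    # Mapping of Example 4 (signs assumed; see the hedge above)
    return np.array([0.5 * u[0] * u[1] - 2.0 * u[1] - 1e7,
                     -4.0 * u[0] + 0.1 * u[1] ** 2 - 1e7])

u = np.array([2.0, 2.0])
chi = 1e-8                       # tiny fixed step, since ||G|| ~ 1e7 on C
for _ in range(100):
    y = proj_disc(u - chi * G(u))        # extragradient prediction step
    u = proj_disc(u - chi * G(y))        # correction step
```

The iterates settle at the boundary point of the disc in the direction $(1,1)$ from the center, i.e., approximately $(2.707, 2.707)^T$, the solution reported above.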

5. Conclusions

In this study, we have introduced two new methods for finding a solution of the variational inequality problem in a Hilbert space. The results were established on the basis of two previous methods: the subgradient extragradient method and the inertial method. New variants of the inertial framework and of the step size rule were set up. The strong convergence of the proposed methods was established under the pseudo-monotonicity and Lipschitz continuity of the mapping. Numerical results were presented to compare the convergence of the methods with that of others. Finally, the numerical experiments indicate that the inertial approach normally enhances the performance of the proposed methods.

Author Contributions

Conceptualization, K.M., N.A.A. and I.K.A.; methodology, K.M. and N.A.A.; software, K.M., N.A.A. and I.K.A.; validation, N.A.A. and I.K.A.; formal analysis, K.M. and N.A.A.; investigation, K.M., N.A.A. and I.K.A.; writing—original draft preparation, K.M., N.A.A. and I.K.A.; writing—review and editing, K.M., N.A.A. and I.K.A.; visualization, K.M., N.A.A. and I.K.A.; supervision and funding, K.M. and I.K.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The first author was supported by Rajamangala University of Technology Phra Nakhon (RMUTP).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Konnov, I.V. On systems of variational inequalities. Russ. Math. (Izv. Vyssh. Uchebn. Zaved. Mat.) 1997, 41, 77–86.
  2. Stampacchia, G. Formes bilinéaires coercitives sur les ensembles convexes. Comptes Rendus Hebd. Séances Acad. Sci. 1964, 258, 4413.
  3. Elliott, C.M. Review of Variational and Quasivariational Inequalities: Applications to Free-Boundary Problems, by Claudio Baiocchi and António Capelo. SIAM Rev. 1987, 29, 314–315.
  4. Kassay, G.; Kolumbán, J.; Páles, Z. On Nash stationary points. Publ. Math. 1999, 54, 267–279.
  5. Kassay, G.; Kolumbán, J.; Páles, Z. Factorization of Minty and Stampacchia variational inequality systems. Eur. J. Oper. Res. 2002, 143, 377–389.
  6. Kinderlehrer, D.; Stampacchia, G. An Introduction to Variational Inequalities and Their Applications; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2000.
  7. Konnov, I. Equilibrium Models and Variational Inequalities; Elsevier: Amsterdam, The Netherlands, 2007; Volume 210.
  8. Nagurney, A. Network Economics: A Variational Inequality Approach; Springer: Boston, MA, USA, 1999.
  9. Takahashi, W. Introduction to Nonlinear and Convex Analysis; Yokohama Publishers: Yokohama, Japan, 2009.
  10. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2010, 148, 318–335.
  11. Censor, Y.; Gibali, A.; Reich, S. Extensions of Korpelevich's extragradient method for the variational inequality problem in Euclidean space. Optimization 2012, 61, 1119–1132.
  12. Iusem, A.N.; Svaiter, B.F. A variant of Korpelevich's method for variational inequalities with a new search strategy. Optimization 1997, 42, 309–321.
  13. Korpelevich, G. The extragradient method for finding saddle points and other problems. Matecon 1976, 12, 747–756.
  14. Malitsky, Y.V.; Semenov, V.V. An extragradient algorithm for monotone variational inequalities. Cybern. Syst. Anal. 2014, 50, 271–277.
  15. Moudafi, A. Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 2000, 241, 46–55.
  16. Noor, M.A. Some iterative methods for nonconvex variational inequalities. Comput. Math. Model. 2010, 21, 97–108.
  17. Thong, D.V.; Hieu, D.V. Modified subgradient extragradient method for variational inequality problems. Numer. Algorithms 2017, 79, 597–610.
  18. Thong, D.V.; Hieu, D.V. Weak and strong convergence theorems for variational inequality problems. Numer. Algorithms 2017, 78, 1045–1060.
  19. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446.
  20. Zhang, L.; Fang, C.; Chen, S. An inertial subgradient-type method for solving single-valued variational inequalities and fixed point problems. Numer. Algorithms 2018, 79, 941–956.
  21. Antipin, A.S. On a method for convex programs using a symmetrical modification of the Lagrange function. Ekon. Mat. Metod. 1976, 12, 1164–1173.
  22. Polyak, B. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17.
  23. Anh, P.K.; Thong, D.V.; Vinh, N.T. Improved inertial extragradient methods for solving pseudo-monotone variational inequalities. Optimization 2020, 1–24.
  24. Thong, D.V.; Hieu, D.V.; Rassias, T.M. Self adaptive inertial subgradient extragradient algorithms for solving pseudomonotone variational inequality problems. Optim. Lett. 2019, 14, 115–144.
  25. Thong, D.V.; Vinh, N.T.; Cho, Y.J. A strong convergence theorem for Tseng's extragradient method for solving variational inequality problems. Optim. Lett. 2019, 14, 1157–1175.
  26. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: Berlin/Heidelberg, Germany, 2011; Volume 408.
  27. Xu, H.-K. Another control condition in an iterative method for nonexpansive mappings. Bull. Aust. Math. Soc. 2002, 65, 109–113.
  28. Maingé, P.-E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912.
  29. Takahashi, W. Nonlinear Functional Analysis; Yokohama Publishers: Yokohama, Japan, 2000.
  30. Harker, P.T.; Pang, J.-S. A damped-Newton method for the linear complementarity problem. Comput. Solut. Nonlinear Syst. Equ. 1990, 26, 265.
  31. Dong, Q.L.; Cho, Y.J.; Zhong, L.L.; Rassias, T.M. Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. 2017, 70, 687–704.
  32. Hieu, D.V.; Anh, P.K.; Muu, L.D. Modified hybrid projection methods for finding common solutions to variational inequality problems. Comput. Optim. Appl. 2016, 66, 75–96.
  33. Solodov, M.V.; Svaiter, B.F. A new projection method for variational inequality problems. SIAM J. Control Optim. 1999, 37, 765–776.
  34. Hu, X.; Wang, J. Solving pseudomonotone variational inequalities and pseudoconvex optimization problems using the projection neural network. IEEE Trans. Neural Netw. 2006, 17, 1487–1499.
  35. Shehu, Y.; Dong, Q.-L.; Jiang, D. Single projection method for pseudo-monotone variational inequality in Hilbert spaces. Optimization 2018, 68, 385–409.
Figure 1. Numerical comparison of Algorithms 1 and 2 with Algorithm 3.4 in [23], Algorithm 3.2 in [24], and Algorithm 3.1 in [25] when m = 5.
Figure 2. Numerical comparison of Algorithms 1 and 2 with Algorithm 3.4 in [23], Algorithm 3.2 in [24], and Algorithm 3.1 in [25] when m = 20.
Figure 3. Numerical comparison of Algorithms 1 and 2 with Algorithm 3.4 in [23], Algorithm 3.2 in [24], and Algorithm 3.1 in [25] when m = 50.
Figure 4. Numerical comparison of Algorithms 1 and 2 with Algorithm 3.4 in [23], Algorithm 3.2 in [24], and Algorithm 3.1 in [25] when m = 50.
Figure 5. Numerical comparison of Algorithms 1 and 2 with Algorithm 3.4 in [23], Algorithm 3.2 in [24], and Algorithm 3.1 in [25] when m = 100.
Figure 6. Numerical comparison of Algorithms 1 and 2 with Algorithm 3.4 in [23], Algorithm 3.2 in [24], and Algorithm 3.1 in [25] when m = 100.
Figure 7. Numerical comparison of Algorithms 1 and 2 with Algorithm 3.4 in [23], Algorithm 3.2 in [24], and Algorithm 3.1 in [25] when u_0 = u_1 = t^2 + 1.
Figure 8. Numerical comparison of Algorithms 1 and 2 with Algorithm 3.4 in [23], Algorithm 3.2 in [24], and Algorithm 3.1 in [25] when u_0 = u_1 = 3t^2 + 2 sin(t).
Figure 9. Numerical comparison of Algorithms 1 and 2 with Algorithm 3.4 in [23], Algorithm 3.2 in [24], and Algorithm 3.1 in [25] when u_0 = u_1 = 5t^2 + e^t.
Figure 10. Numerical comparison of Algorithms 1 and 2 with Algorithm 3.4 in [23], Algorithm 3.2 in [24], and Algorithm 3.1 in [25] when u_0 = u_1 = (1.5, 1.7)^T.
Figure 11. Numerical comparison of Algorithms 1 and 2 with Algorithm 3.4 in [23], Algorithm 3.2 in [24], and Algorithm 3.1 in [25] when u_0 = u_1 = (2.0, 3.0)^T.
Figure 12. Numerical comparison of Algorithms 1 and 2 with Algorithm 3.4 in [23], Algorithm 3.2 in [24], and Algorithm 3.1 in [25] when u_0 = u_1 = (1.0, 2.0)^T.
Figure 13. Numerical comparison of Algorithms 1 and 2 with Algorithm 3.4 in [23], Algorithm 3.2 in [24], and Algorithm 3.1 in [25] when u_0 = u_1 = (2.7, 2.6)^T.
Table 1. Numerical data for Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6 (Iter. = number of iterations; Time in seconds).

Algorithm Name           m = 5              m = 20             m = 50             m = 100
                         Iter.  Time        Iter.  Time        Iter.  Time        Iter.  Time
Algorithm 3.4 in [23]    48     0.203351    264    1.440695    294    1.514870    313    1.817463
Algorithm 3.2 in [24]    30     0.150336    321    1.766826    357    1.966747    306    2.782575
Algorithm 3.1 in [25]    27     0.131781    42     0.182407    41     0.286670    40     0.234242
Algorithm 1              10     0.074348    10     0.041680    10     0.055002    9      0.064665
Algorithm 2              14     0.064953    142    0.672565    89     0.437610    65     0.380447
Table 2. Numerical data for Figure 7, Figure 8 and Figure 9 (Iter. = number of iterations; Time in seconds).

Algorithm Name           u_0 = u_1 = t^2 + 1    u_0 = u_1 = 3t^2 + 2 sin(t)    u_0 = u_1 = 5t^2 + e^t
                         Iter.  Time            Iter.  Time                    Iter.  Time
Algorithm 3.4 in [23]    57     0.037861        71     0.168874                81     0.207324
Algorithm 3.2 in [24]    27     0.021260        32     0.086875                34     0.109731
Algorithm 3.1 in [25]    19     0.012435        23     0.042838                26     0.049123
Algorithm 1              14     0.014493        14     0.032441                12     0.031265
Algorithm 2              11     0.017906        15     0.042816                21     0.076447
Table 3. Example 3: Numerical findings of Algorithm 3.4 in [23] with u_0 = u_1 = (1, 2, 3, 4)^T.

Iter (n)   u_1                 u_2                 u_3                 u_4
1          7.88110105259549    11.1335921052147    11.6608026315329    11.4916973684078
2          2.72517069971347    5.52196307009909    5.91062617486984    5.78231681677249
3          2.74650358169779    5.23892462205654    5.43540972965547    5.37058363048288
4          2.79589652551700    5.10560097798145    5.20481731038818    5.17209179402342
5          2.84815300477526    5.04321208318383    5.09330195427214    5.07678246381620
6          2.90242325540275    5.01444727730850    5.03972755805987    5.03139070422338
7          2.95876979857588    5.00154113107122    5.01429541422714    5.01008946137975
8          3.01614532087206    4.99604650580771    5.00248193413128    5.00035976548024
...
194        4.99949159554096    4.99949159554096    4.99949159554096    4.99949159554096
195        4.99949420145518    4.99949420145518    4.99949420145518    4.99949420145518
196        4.99949678078979    4.99949678078979    4.99949678078979    4.99949678078979
197        4.99949933394941    4.99949933394941    4.99949933394941    4.99949933394941
198        4.99950186133047    4.99950186133047    4.99950186133047    4.99950186133047
CPU time in seconds: 1.011633
Table 4. Example 3: Numerical findings of Algorithm 3.2 in [24] with u_0 = u_1 = (1, 2, 3, 4)^T.

Iter (n)   u_1                 u_2                 u_3                 u_4
1          4.94814814707507    20.1659074073104    20.2289444443514    18.9075370362809
2          −8.04670389554580   12.4597307883609    12.5054021946827    11.3787988702116
3          1.03972935440247    4.90132095088790    4.89452802770933    4.98533018397129
4          1.11973537635742    4.89218078676053    4.88537889965809    4.95475431398367
5          1.20128523014071    4.89169908830941    4.88493057181313    4.94635475329384
6          1.27774539110779    4.89625929375273    4.88953484996449    4.94790178020963
7          1.34983811312425    4.90378394656115    4.89710336213719    4.95376528318611
8          1.41896397454886    4.91334870977828    4.90670981629961    4.96199173340817
...
144        4.99948911782781    4.99948911782781    4.99948911782781    4.99948911782781
145        4.99949261719944    4.99949261719944    4.99949261719944    4.99949261719944
146        4.99949606895811    4.99949606895811    4.99949606895811    4.99949606895811
147        4.99949947406902    4.99949947406902    4.99949947406902    4.99949947406902
148        4.99950283347142    4.99950283347142    4.99950283347142    4.99950283347142
CPU time in seconds: 0.7115419
Table 5. Example 3: Numerical findings of Algorithm 2 with u_0 = u_1 = (1, 2, 3, 4)^T.

Iter (n)   u_1                 u_2                 u_3                 u_4
1          4.99999999934819    20.0658752056891    20.0955845156778    18.7728355231621
2          −7.83144812920198   11.9265613339038    11.9437095513756    10.8524606746456
3          1.01490159056644    4.96724347328928    4.96512000980275    5.10532223071200
4          1.10642915068494    4.93819144878954    4.93599467724991    4.99999999995773
5          1.18463430195965    4.93374030970576    4.93153342235750    4.97372522603788
6          1.26412238398338    4.93785658351241    4.93565678779788    4.96968839262467
7          1.33805311564430    4.94627563468593    4.94408827533979    4.97519539293950
8          1.40748289276225    4.95692798081047    4.95475304777657    4.98440717807182
...
96         4.99999999962290    4.99999999962290    4.99999999962290    4.99999999962290
97         4.99999999962318    4.99999999962318    4.99999999962318    4.99999999962318
98         4.99999999962346    4.99999999962346    4.99999999962346    4.99999999962346
99         4.99999999962373    4.99999999962373    4.99999999962373    4.99999999962373
100        4.99999999962399    4.99999999962399    4.99999999962399    4.99999999962399
CPU time in seconds: 0.503420
Table 6. Example 3: Numerical findings of Algorithm 3.4 in [23] with u_0 = u_1 = (1, 0, 1, 2)^T.

Iter (n)   u_1                 u_2                 u_3                 u_4
1          0.116881581449987   0.813197370923916   1.11161842825326    1.96709210640194
2          0.544096187738739   0.963936802772123   1.15066880224072    1.94902465158251
3          0.767743928389032   1.01063135569287    1.17422202753691    1.93884243796376
4          0.883925792457061   1.04508825238764    1.19706742465652    1.93285090332439
5          0.943732024663154   1.07700290591513    1.21999679143733    1.92972670723471
6          0.985076884925896   1.10717191339312    1.24263089418537    1.92912929557214
7          1.02327142457484    1.13769463060810    1.26640373232346    1.93117221434202
8          1.06041942145622    1.16848656060979    1.29115424126752    1.93571856968895
...
178        4.99949304114715    4.99949304114715    4.99949304114715    4.99949304114715
179        4.99949586970068    4.99949586970068    4.99949586970068    4.99949586970068
180        4.99949866686358    4.99949866686358    4.99949866686358    4.99949866686358
181        4.99950143315555    4.99950143315555    4.99950143315555    4.99950143315555
182        4.99950416908485    4.99950416908485    4.99950416908485    4.99950416908485
CPU time in seconds: 0.957781
Table 7. Example 3: Numerical findings of Algorithm 3.2 in [24] with u_0 = u_1 = (1, 0, 1, 2)^T.

Iter (n)   u_1                 u_2                 u_3                 u_4
1          0.760053582775300   1.29003298382748    1.20319480217964    1.94068518744861
2          1.21804298615978    1.59414241359208    1.42464988939697    2.01820655416591
3          1.42659480361034    1.73943108707035    1.57161284553192    2.09415611599989
4          1.57886167340713    1.85447528443790    1.69967500052918    2.17426416753376
5          1.71808153256788    1.96701984456950    1.82539532502208    2.26236423411214
6          1.85632702927712    2.08392415770031    1.95375834077497    2.35895786218053
7          1.99842296843380    2.20786382779888    2.08766201331954    2.46482556588447
8          2.14689895343515    2.34039691864025    2.22903433081793    2.58085673886984
...
144        4.99948910579703    4.99948910579703    4.99948910579703    4.99948910579703
145        4.99949260517052    4.99949260517052    4.99949260517052    4.99949260517052
146        4.99949605693103    4.99949605693103    4.99949605693103    4.99949605693103
147        4.99949946204375    4.99949946204375    4.99949946204375    4.99949946204375
148        4.99950282144794    4.99950282144794    4.99950282144794    4.99950282144794
CPU time in seconds: 0.8748252
Table 8. Example 3: Numerical findings of Algorithm 2 with u_0 = u_1 = (1, 0, 1, 2)^T.

Iter (n)   u_1                 u_2                 u_3                 u_4
1          0.775171247844478   1.30058646543549    1.21022901009941    1.94547500162994
2          1.21446821311615    1.59315820520707    1.41802847051195    2.01854575677085
3          1.40930154934255    1.72830536082867    1.55238769919704    2.08704132135424
4          1.55005042100928    1.83358202294312    1.66945202714427    2.15901099064876
5          1.67652335037437    1.93461531040966    1.78314662252036    2.23718181897121
6          1.80020428317844    2.03794015805606    1.89767214585452    2.32166725738566
7          1.92573577123647    2.14610904279505    2.01564105118587    2.41306525933486
8          2.05546783590764    2.26053026824905    2.13879627759795    2.51211478239818
...
96         4.99999999998368    4.99999999998368    4.99999999998368    4.99999999998368
97         4.99999999998368    4.99999999998368    4.99999999998368    4.99999999998368
98         4.99999999998368    4.99999999998368    4.99999999998368    4.99999999998368
99         4.99999999998369    4.99999999998369    4.99999999998369    4.99999999998369
100        4.99999999998369    4.99999999998369    4.99999999998369    4.99999999998369
CPU time in seconds: 0.544268
Table 9. Numerical data for Figure 10, Figure 11, Figure 12 and Figure 13 (Iter. = number of iterations; Time in seconds).

Algorithm Name           u_0 = u_1 = (1.5, 1.7)^T   (2.0, 3.0)^T       (1.0, 2.0)^T       (2.7, 2.6)^T
                         Iter.  Time                Iter.  Time        Iter.  Time        Iter.  Time
Algorithm 3.4 in [23]    61     3.083492            59     4.127714    60     2.882394    59     3.111729
Algorithm 3.2 in [24]    48     2.189625            49     2.674055    49     2.448063    49     2.306584
Algorithm 3.1 in [25]    38     1.440188            38     1.684040    38     1.784568    38     1.645227
Algorithm 1              23     0.933457            23     1.021092    24     1.139583    23     0.922199
Algorithm 2              19     0.899018            19     0.969045    19     0.907344    19     0.953694
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Muangchoo, K.; Alreshidi, N.A.; Argyros, I.K. Approximation Results for Variational Inequalities Involving Pseudomonotone Bifunction in Real Hilbert Spaces. Symmetry 2021, 13, 182. https://doi.org/10.3390/sym13020182
