Article

Inertial Modification Using Self-Adaptive Subgradient Extragradient Techniques for Equilibrium Programming Applied to Variational Inequalities and Fixed-Point Problems

1. Department of Mathematics, King Mongkut’s University of Technology Thonburi (KMUTT), 126 Pracha-Uthit Road, Bang Mod, Thung Khru, Bangkok 10140, Thailand
2. Applied Mathematics for Science and Engineering Research Unit (AMSERU), Program in Applied Statistics, Department of Mathematics and Computer Science, Faculty of Science and Technology, Rajamangala University of Technology Thanyaburi (RMUTT), Pathum Thani 12110, Thailand
3. Applied Mathematics for Science and Engineering Research Unit (AMSERU), Department of Mathematics and Computer Science, Faculty of Science and Technology, Rajamangala University of Technology Thanyaburi (RMUTT), Pathum Thani 12110, Thailand
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(10), 1751; https://doi.org/10.3390/math10101751
Submission received: 4 April 2022 / Revised: 13 May 2022 / Accepted: 16 May 2022 / Published: 20 May 2022
(This article belongs to the Special Issue Applied Functional Analysis and Applications)

Abstract: Equilibrium problems arise in a variety of mathematical computing applications, including minimax and numerical programming, saddle-point problems, fixed-point problems, and variational inequalities. In this paper, we introduce improved iterative techniques for computing the numerical solution of an equilibrium problem in a Hilbert space with a pseudomonotone and Lipschitz-type bifunction. These techniques are based on two computing steps of a proximal-like mapping with inertial terms. We investigate two simplified stepsize rules that do not require a line search, allowing the techniques to be carried out without prior knowledge of the Lipschitz-type constants of the cost bifunction. Under suitable constraints on the control parameters, the iterative sequences converge strongly to a particular solution of the problem, and we prove the corresponding strong convergence theorems without knowledge of the Lipschitz-type constants. A sequence of numerical tests was performed, and the results confirmed the effectiveness and faster convergence of the new techniques compared with the traditional ones.

1. Introduction

Let Δ be a nonempty, closed, and convex subset of a real Hilbert space Σ. This study focuses on new iterative techniques for solving an equilibrium problem. Assume that a bifunction ϝ : Δ × Δ → ℝ satisfies ϝ(q₁, q₁) = 0 for each q₁ ∈ Δ. The equilibrium problem for the bifunction ϝ on Δ is described in the following manner: Find ν* ∈ Δ such that
ϝ(ν*, q₁) ≥ 0, ∀ q₁ ∈ Δ. (1)
Let us denote the solution set of problem (1) by Sol(ϝ, Δ); we will assume in the following that this solution set is nonempty. This study investigates the numerical solution of the equilibrium problem under the following conditions:
(ϝ1)
The solution set Sol(ϝ, Δ) of problem (1) is nonempty;
(ϝ2)
ϝ is pseudomonotone [1,2], i.e.,
ϝ(q₁, q₂) ≥ 0 ⟹ ϝ(q₂, q₁) ≤ 0, ∀ q₁, q₂ ∈ Δ;
(ϝ3)
ϝ is Lipschitz-type continuous [3] on the set Δ, i.e., there exist two constants c₁, c₂ > 0 such that
ϝ(q₁, q₃) ≤ ϝ(q₁, q₂) + ϝ(q₂, q₃) + c₁‖q₁ − q₂‖² + c₂‖q₂ − q₃‖², ∀ q₁, q₂, q₃ ∈ Δ;
(ϝ4)
For any sequence {q_k} ⊂ Δ satisfying q_k ⇀ q* (weak convergence), the following inequality holds:
lim sup_{k→+∞} ϝ(q_k, q₁) ≤ ϝ(q*, q₁), ∀ q₁ ∈ Δ;
(ϝ5)
ϝ(q₁, ·) is convex and subdifferentiable on Σ for each fixed q₁ ∈ Σ.
Researchers have concentrated on the equilibrium problem because it encompasses numerous other mathematical problems in the literature (see [1,4,5,6,7] for more details). The term “equilibrium problem” was first proposed in the literature in 1992 by Muu and Oettli [4] and was further examined by Blum and Oettli [1]. More precisely, we consider two applications of problem (1). (i) A variational inequality problem for E : Δ → Σ is stated in the following manner: Find ν* ∈ Δ such that
⟨E(ν*), q₁ − ν*⟩ ≥ 0, ∀ q₁ ∈ Δ. (2)
Let us define a bifunction ϝ as
ϝ(q₁, q₂) := ⟨E(q₁), q₂ − q₁⟩, ∀ q₁, q₂ ∈ Δ. (3)
The equilibrium problem then reduces to the variational inequality problem (2), and the Lipschitz constant L of the mapping E satisfies L = 2c₁ = 2c₂. (ii) A mapping D : Δ → Δ is said to be a κ-strict pseudocontraction [8] if there is a constant κ ∈ (0, 1) such that
‖Dq₁ − Dq₂‖² ≤ ‖q₁ − q₂‖² + κ‖(q₁ − Dq₁) − (q₂ − Dq₂)‖², ∀ q₁, q₂ ∈ Δ. (4)
A fixed-point problem (FPP) for D : Δ → Δ is to find ν* ∈ Δ such that D(ν*) = ν*. Let us define a bifunction ϝ as follows:
ϝ(q₁, q₂) = ⟨q₁ − Dq₁, q₂ − q₁⟩, ∀ q₁, q₂ ∈ Δ. (5)
It is easily seen [9] that expression (5) satisfies conditions (ϝ1)–(ϝ5) with Lipschitz constants c₁ = c₂ = (3 − 2κ)/(2 − 2κ).
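As a quick numerical sanity check of this reduction (an illustration we add here; the map D(x) = −x and the dimension are our own choices, not data from the paper), the Lipschitz-type condition (ϝ3) with c₁ = c₂ = 3/2 (the case κ = 0) can be verified on random points:

```python
import numpy as np

rng = np.random.default_rng(0)

def D(x):
    # D(x) = -x is nonexpansive, i.e., a 0-strict pseudocontraction
    return -x

def F(p, q):
    # Bifunction of the reduction: F(p, q) = <p - D p, q - p>
    return np.dot(p - D(p), q - p)

kappa = 0.0
c1 = c2 = (3 - 2 * kappa) / (2 - 2 * kappa)    # = 3/2 for kappa = 0

ok = True
for _ in range(1000):
    q1, q2, q3 = rng.standard_normal((3, 5))
    lhs = F(q1, q3)
    rhs = (F(q1, q2) + F(q2, q3)
           + c1 * np.sum((q1 - q2) ** 2) + c2 * np.sum((q2 - q3) ** 2))
    ok &= bool(lhs <= rhs + 1e-12)
print(ok)  # the Lipschitz-type inequality holds for these constants
```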
The extragradient method developed by Tran et al. [10] is one useful method, built as follows. Take an arbitrary starting point s₀ ∈ Σ; given the current iterate s_k, compute the next iterate by
s₀ ∈ Δ,
q_k = arg min_{q∈Δ} {σϝ(s_k, q) + ½‖s_k − q‖²},
s_{k+1} = arg min_{q∈Δ} {σϝ(q_k, q) + ½‖s_k − q‖²},
where 0 < σ < min{1/(2c₁), 1/(2c₂)} and c₁, c₂ are the two Lipschitz-type constants. The main objective here is to design an inertial-type variant of the technique in [10] that increases the convergence rate of the iterative sequence. Such techniques were originally motivated by the oscillator equation with damping and a conservative restoring force. This second-order dynamical system is known as the “heavy ball with friction”, and it was first proposed by Polyak in [11]. The important characteristic of these methods is that the next iterate is built from the two previous iterates. In this context, numerical findings show that inertial terms improve the performance of the techniques in terms of the number of iterations and the elapsed time. Inertial-type iterative methods have been intensively investigated in recent years for certain classes of equilibrium problems [12,13,14,15,16,17,18], and others in [19,20,21,22,23,24].
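For the special bifunction ϝ(p, q) = ⟨E(p), q − p⟩ with an affine monotone operator E, both proximal steps of the two-step scheme above reduce to projections, so the method can be sketched in a few lines (the operator, box constraint, and starting point below are illustrative choices we make, not data from the paper):

```python
import numpy as np

# Illustrative data (our choice): F(p, q) = <A p + b, q - p> on a box.
A = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric positive definite => monotone
b = np.array([-1.0, -1.0])
lo, hi = -5.0, 5.0

def proj(x):
    # Projection onto the box [lo, hi]^2
    return np.clip(x, lo, hi)

# For this F, arg min_q { sigma*F(s, q) + 0.5*||s - q||^2 } = proj(s - sigma*(A s + b)),
# so both proximal steps become projected gradient steps.
L = np.linalg.norm(A, 2)                  # Lipschitz constant of E(p) = A p + b
sigma = 0.9 / L                           # 0 < sigma < min{1/(2 c1), 1/(2 c2)} = 1/L here
s = np.array([4.0, -3.0])
for _ in range(200):
    q = proj(s - sigma * (A @ s + b))     # first proximal step
    s = proj(s - sigma * (A @ q + b))     # second (extragradient) step

residual = np.linalg.norm(s - proj(s - sigma * (A @ s + b)))
print(residual < 1e-6)  # a fixed point of the projected step solves the problem
```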
This naturally raises the following question:
Is it possible to construct new inertial-type, strongly convergent, extragradient-type techniques for solving equilibrium problems using monotone and nonmonotone stepsize rules?
In this work, we give a positive answer to this question: when solving equilibrium problems involving pseudomonotone bifunctions with the new monotone and nonmonotone variable stepsize rules, the extragradient technique still yields a strongly convergent sequence. Motivated by the work of Censor et al. [25] and Tran et al. [10], we present a new inertial extragradient-type technique for solving problem (1) in the setting of an infinite-dimensional real Hilbert space. The main contributions of this work are as follows:
(i)
To solve equilibrium problems in a real Hilbert space, we develop an inertial subgradient extragradient approach with a new monotone variable stepsize rule and show that the resulting sequence is strongly convergent.
(ii)
To solve equilibrium problems, we develop another inertial subgradient extragradient technique based on a novel nonmonotone variable stepsize rule that is independent of the Lipschitz constants.
(iii)
Several corollaries are derived for solving related classes of problems, namely variational inequalities and fixed-point problems, in a real Hilbert space.
(iv)
We present illustrative computations of the proposed methods to confirm the theoretical conclusions and to compare them with earlier results [26,27,28]. Our numerical findings show that the new methods are useful and outperform the existing ones. A variety of effective techniques were also evaluated in the recent work by Rehman et al. [29].
The rest of this paper is organized as follows: Section 2 reports preliminary results. Section 3 presents the new methods and their convergence analysis. Section 4 applies these results to fixed-point problems and variational inequalities. Finally, Section 5 provides numerical findings that demonstrate the practical efficacy of the proposed methodologies.

2. Preliminaries

In this section, we recall some elementary identities as well as key definitions and lemmas. The metric projection P_Δ(q₁) of q₁ ∈ Σ onto Δ is defined by
P_Δ(q₁) = arg min{‖q₁ − q₂‖ : q₂ ∈ Δ}.
The key characteristics of the projection mapping are described below.
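For a box constraint, the metric projection is simply componentwise clipping; the characterization in Lemma 1 (ii) and property (iii) below can then be checked numerically (an illustration we add here; the box bounds are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
lo = np.array([-1.0, 0.0, 2.0])
hi = np.array([1.0, 3.0, 4.0])

def P(x):
    # Metric projection onto the box {q : lo <= q <= hi}
    return np.minimum(np.maximum(x, lo), hi)

ok = True
for _ in range(500):
    q1 = 10 * rng.standard_normal(3)       # arbitrary point of the space
    q2 = lo + (hi - lo) * rng.random(3)    # arbitrary point of the box
    q3 = P(q1)
    # Variational characterization: <q1 - P(q1), q2 - P(q1)> <= 0 for all q2 in the set
    ok &= bool(np.dot(q1 - q3, q2 - q3) <= 1e-10)
    # P(q1) is the nearest point: ||q1 - P(q1)|| <= ||q1 - q2||
    ok &= bool(np.linalg.norm(q1 - q3) <= np.linalg.norm(q1 - q2) + 1e-10)
print(ok)
```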
Lemma 1
([30]). Let P_Δ : Σ → Δ be the metric projection onto Δ. Then, the following characteristics hold:
(i)
‖q₁ − P_Δ(q₂)‖² + ‖P_Δ(q₂) − q₂‖² ≤ ‖q₁ − q₂‖², ∀ q₁ ∈ Δ, q₂ ∈ Σ;
(ii) q₃ = P_Δ(q₁) if and only if
⟨q₁ − q₃, q₂ − q₃⟩ ≤ 0, ∀ q₂ ∈ Δ;
(iii)
‖q₁ − P_Δ(q₁)‖ ≤ ‖q₁ − q₂‖, ∀ q₂ ∈ Δ, q₁ ∈ Σ.
Lemma 2
([30]). For any q₁, q₂ ∈ Σ and λ ∈ ℝ, the following conditions are satisfied:
(i)
‖λq₁ + (1 − λ)q₂‖² = λ‖q₁‖² + (1 − λ)‖q₂‖² − λ(1 − λ)‖q₁ − q₂‖²;
(ii)
‖q₁ + q₂‖² ≤ ‖q₁‖² + 2⟨q₂, q₁ + q₂⟩.
The normal cone of Δ at q₁ ∈ Δ is defined by
N_Δ(q₁) = {q₃ ∈ Σ : ⟨q₃, q₂ − q₁⟩ ≤ 0, ∀ q₂ ∈ Δ}.
Suppose that ℧ : Δ → ℝ is a convex function; the subdifferential of ℧ at q₁ ∈ Δ is defined by
∂℧(q₁) = {q₃ ∈ Σ : ℧(q₂) − ℧(q₁) ≥ ⟨q₃, q₂ − q₁⟩, ∀ q₂ ∈ Δ}.
Lemma 3
([31]). Suppose that ℧ : Δ → ℝ is a convex, subdifferentiable, and lower semicontinuous function. An element p ∈ Δ is a minimizer of ℧ if and only if
0 ∈ ∂℧(p) + N_Δ(p),
where ∂℧(p) denotes the subdifferential of ℧ at p ∈ Δ and N_Δ(p) the normal cone of Δ at p.
Lemma 4
([32]). Let {l_k} ⊂ [0, +∞), {m_k} ⊂ (0, 1), and {n_k} ⊂ ℝ be three sequences satisfying the following conditions:
l_{k+1} ≤ (1 − m_k)l_k + m_k n_k, ∀ k ∈ ℕ, and ∑_{k=1}^{+∞} m_k = +∞.
If lim sup_{j→+∞} n_{k_j} ≤ 0 for every subsequence {l_{k_j}} of {l_k} satisfying
lim inf_{j→+∞} (l_{k_j+1} − l_{k_j}) ≥ 0,
then lim_{k→+∞} l_k = 0.

3. Main Results

In this section, we describe numerical iterative methods that increase the rate of convergence of the iterative sequence by combining two strongly convex optimization subproblems with an inertial term. We provide the following techniques for solving equilibrium problems.
Lemma 5.
The sequence {σ_k} is convergent to some σ with min{μ(2 − √2 − ρ)/max{2c₁, 2c₂}, σ₁} ≤ σ ≤ σ₁.
Proof. 
Due to the Lipschitz-type continuity of the bifunction, there exist two constants c₁ > 0 and c₂ > 0. Suppose that ϝ(r_k, s_{k+1}) − ϝ(r_k, q_k) − ϝ(q_k, s_{k+1}) > 0. Then,
μ(2 − √2 − ρ)(‖r_k − q_k‖² + ‖s_{k+1} − q_k‖²) / (2[ϝ(r_k, s_{k+1}) − ϝ(r_k, q_k) − ϝ(q_k, s_{k+1})]) ≥ μ(2 − √2 − ρ)(‖r_k − q_k‖² + ‖s_{k+1} − q_k‖²) / (2[c₁‖r_k − q_k‖² + c₂‖s_{k+1} − q_k‖²]) ≥ μ(2 − √2 − ρ)/(2 max{c₁, c₂}).
Hence {σ_k} is nonincreasing and bounded below by min{μ(2 − √2 − ρ)/max{2c₁, 2c₂}, σ₁}, so lim_{k→+∞} σ_k = σ exists. This completes the proof of the lemma.    □
Lemma 6.
The sequence {σ_k} is convergent to some σ with min{μ(2 − √2 − ρ)/max{2c₁, 2c₂}, σ₁} ≤ σ ≤ σ₁ + P, where P = ∑_{k=1}^{+∞} φ_k.
Proof. 
Due to the Lipschitz-type continuity of the bifunction, there exist two constants c₁ > 0 and c₂ > 0. Suppose that ϝ(r_k, s_{k+1}) − ϝ(r_k, q_k) − ϝ(q_k, s_{k+1}) > 0. Then,
μ(2 − √2 − ρ)(‖r_k − q_k‖² + ‖s_{k+1} − q_k‖²) / (2[ϝ(r_k, s_{k+1}) − ϝ(r_k, q_k) − ϝ(q_k, s_{k+1})]) ≥ μ(2 − √2 − ρ)(‖r_k − q_k‖² + ‖s_{k+1} − q_k‖²) / (2[c₁‖r_k − q_k‖² + c₂‖s_{k+1} − q_k‖²]) ≥ μ(2 − √2 − ρ)/(2 max{c₁, c₂}).
By the definition of σ_{k+1}, we have
min{μ(2 − √2 − ρ)/max{2c₁, 2c₂}, σ₁} ≤ σ_k ≤ σ₁ + P.
Let [σ_{k+1} − σ_k]₊ := max{0, σ_{k+1} − σ_k} and [σ_{k+1} − σ_k]₋ := max{0, −(σ_{k+1} − σ_k)}.
Due to the definition of {σ_k}, we obtain
∑_{k=1}^{+∞} [σ_{k+1} − σ_k]₊ ≤ ∑_{k=1}^{+∞} φ_k = P < +∞.
That is to say, the series ∑_{k=1}^{+∞} [σ_{k+1} − σ_k]₊ is convergent. It remains to establish the convergence of ∑_{k=1}^{+∞} [σ_{k+1} − σ_k]₋. Suppose that ∑_{k=1}^{+∞} [σ_{k+1} − σ_k]₋ = +∞. Due to the fact that
σ_{k+1} − σ_k = [σ_{k+1} − σ_k]₊ − [σ_{k+1} − σ_k]₋,
we obtain
σ_{k+1} − σ₁ = ∑_{i=1}^{k} (σ_{i+1} − σ_i) = ∑_{i=1}^{k} [σ_{i+1} − σ_i]₊ − ∑_{i=1}^{k} [σ_{i+1} − σ_i]₋. (10)
Letting k → +∞ in expression (10), we obtain σ_k → −∞ as k → +∞, which is an absurdity. As a result of the convergence of the series ∑_{i=1}^{k} [σ_{i+1} − σ_i]₊ and ∑_{i=1}^{k} [σ_{i+1} − σ_i]₋, taking k → +∞ in expression (10), we obtain lim_{k→+∞} σ_k = σ. This completes the proof of the lemma.    □
Lemma 7.
From Algorithm 1, we have the following useful inequality:
σ_k ϝ(q_k, q) − σ_k ϝ(q_k, s_{k+1}) ≥ ⟨r_k − s_{k+1}, q − s_{k+1}⟩, ∀ q ∈ Σ_k.
Algorithm 1: Explicit Subgradient Extragradient Method With Monotone Stepsize Rule
STEP 0: Choose σ₁ > 0, s₋₁, s₀ ∈ Σ, ϰ ∈ (0, 1), μ ∈ (0, 1), ρ ∈ (0, 2 − √2), and {χ_k} ⊂ (0, 1) satisfying the following conditions:
lim_{k→+∞} χ_k = 0 and ∑_{k=1}^{+∞} χ_k = +∞.
STEP 1: Compute
r_k = s_k + ϰ_k(s_k − s_{k−1}) − χ_k[s_k + ϰ_k(s_k − s_{k−1})],
where ϰ_k is chosen such that
0 ≤ ϰ_k ≤ ϰ̂_k, with ϰ̂_k = min{ϰ, ϵ_k/‖s_k − s_{k−1}‖} if s_k ≠ s_{k−1}, and ϰ̂_k = ϰ otherwise,
where ϵ_k = o(χ_k) is a positive sequence such that lim_{k→+∞} ϵ_k/χ_k = 0.
STEP 2: Compute
q_k = arg min_{q∈Δ} {σ_k ϝ(r_k, q) + ½‖r_k − q‖²}.
STEP 3: Given the current iterates s_{k−1}, s_k, q_k, first choose ω_k ∈ ∂₂ϝ(r_k, q_k) satisfying r_k − σ_k ω_k − q_k ∈ N_Δ(q_k), and generate the half-space
Σ_k = {z ∈ Σ : ⟨r_k − σ_k ω_k − q_k, z − q_k⟩ ≤ 0}.
Compute
s_{k+1} = arg min_{q∈Σ_k} {σ_k ϝ(q_k, q) + ½‖r_k − q‖²}.
STEP 4: Compute
σ_{k+1} = min{σ_k, [(2 − √2 − ρ)μ‖r_k − q_k‖² + (2 − √2 − ρ)μ‖s_{k+1} − q_k‖²] / (2[ϝ(r_k, s_{k+1}) − ϝ(r_k, q_k) − ϝ(q_k, s_{k+1})])} if ϝ(r_k, s_{k+1}) − ϝ(r_k, q_k) − ϝ(q_k, s_{k+1}) > 0, and σ_{k+1} = σ_k otherwise. (12)
STEP 5: If q_k = r_k, then stop the computation. Otherwise, set k := k + 1 and go back to STEP 1.
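For illustration only, the monotone stepsize update of STEP 4 can be isolated as a small function; the sketch below is our own rendering (the toy bifunction ϝ(p, q) = ⟨p, q − p⟩ and the sample iterates are assumptions made purely for the demonstration), and it exhibits the nonincreasing behavior of {σ_k} used in Lemma 5:

```python
import math

def update_stepsize(sigma_k, mu, rho, r, q, s_next, F):
    # Monotone rule of STEP 4: shrink sigma only when the denominator is positive.
    denom = F(r, s_next) - F(r, q) - F(q, s_next)
    if denom > 0:
        coef = (2 - math.sqrt(2) - rho) * mu
        num = (coef * sum((a - b) ** 2 for a, b in zip(r, q))
               + coef * sum((a - b) ** 2 for a, b in zip(s_next, q)))
        return min(sigma_k, num / (2 * denom))
    return sigma_k

# Toy bifunction (our assumption): F(p, q) = <p, q - p>.
def F(p, q):
    return sum(pi * (qi - pi) for pi, qi in zip(p, q))

sigma = 0.5
sigmas = [sigma]
iterates = [([1.0, 2.0], [0.5, 1.0], [0.8, 1.5]),
            ([0.5, 1.0], [0.4, 0.9], [0.45, 0.95])]
for r, q, s_next in iterates:
    sigma = update_stepsize(sigma, mu=0.55, rho=0.05, r=r, q=q, s_next=s_next, F=F)
    sigmas.append(sigma)

print(all(a >= b for a, b in zip(sigmas, sigmas[1:])))  # nonincreasing, as in Lemma 5
```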
Proof. 
By the use of Lemma 3, we have
0 ∈ ∂₂{σ_k ϝ(q_k, ·) + ½‖r_k − ·‖²}(s_{k+1}) + N_{Σ_k}(s_{k+1}).
Thus, there exist υ ∈ ∂₂ϝ(q_k, s_{k+1}) and a vector ῡ ∈ N_{Σ_k}(s_{k+1}) such that
σ_k υ + s_{k+1} − r_k + ῡ = 0.
As a result of this, we now have
⟨r_k − s_{k+1}, q − s_{k+1}⟩ = σ_k⟨υ, q − s_{k+1}⟩ + ⟨ῡ, q − s_{k+1}⟩, ∀ q ∈ Σ_k.
Because ῡ ∈ N_{Σ_k}(s_{k+1}), we have ⟨ῡ, q − s_{k+1}⟩ ≤ 0 for all q ∈ Σ_k. It implies that
⟨r_k − s_{k+1}, q − s_{k+1}⟩ ≤ σ_k⟨υ, q − s_{k+1}⟩, ∀ q ∈ Σ_k. (13)
Because υ ∈ ∂₂ϝ(q_k, s_{k+1}), we have
ϝ(q_k, q) − ϝ(q_k, s_{k+1}) ≥ ⟨υ, q − s_{k+1}⟩, ∀ q ∈ Σ. (14)
We obtain, by combining Formulas (13) and (14),
σ_k ϝ(q_k, q) − σ_k ϝ(q_k, s_{k+1}) ≥ ⟨r_k − s_{k+1}, q − s_{k+1}⟩, ∀ q ∈ Σ_k.
   □
Lemma 8.
The following important inequality is obtained from Algorithm 1:
σ_k ϝ(r_k, q) − σ_k ϝ(r_k, q_k) ≥ ⟨r_k − q_k, q − q_k⟩, ∀ q ∈ Δ. (15)
Proof. 
The proof is analogous to the proof of Lemma 7. Next, substituting q = s_{k+1}, we have
σ_k[ϝ(r_k, s_{k+1}) − ϝ(r_k, q_k)] ≥ ⟨r_k − q_k, s_{k+1} − q_k⟩. (16)
   □
Theorem 1.
Let {s_k} be a sequence generated by Algorithm 1 under conditions (ϝ1)–(ϝ5). Then, the sequence {s_k} converges strongly to ν* = P_{Sol(ϝ,Δ)}(0).
Proof. 
By using q = ν* in Lemma 7, we obtain
σ_k ϝ(q_k, ν*) − σ_k ϝ(q_k, s_{k+1}) ≥ ⟨r_k − s_{k+1}, ν* − s_{k+1}⟩.
Since ν* ∈ Sol(ϝ, Δ), condition (ϝ2) gives ϝ(q_k, ν*) ≤ 0, and therefore
⟨r_k − s_{k+1}, s_{k+1} − ν*⟩ ≥ σ_k ϝ(q_k, s_{k+1}). (17)
From the stepsize rule (12), we obtain
ϝ(r_k, s_{k+1}) − ϝ(r_k, q_k) − ϝ(q_k, s_{k+1}) ≤ (2 − √2 − ρ)μ[‖r_k − q_k‖² + ‖s_{k+1} − q_k‖²] / (2σ_{k+1}).
After multiplying both sides by σ_k > 0, we have
σ_k ϝ(q_k, s_{k+1}) ≥ σ_k ϝ(r_k, s_{k+1}) − σ_k ϝ(r_k, q_k) − (2 − √2 − ρ)μσ_k[‖r_k − q_k‖² + ‖s_{k+1} − q_k‖²] / (2σ_{k+1}). (18)
Combining Equations (17) and (18) yields
⟨r_k − s_{k+1}, s_{k+1} − ν*⟩ ≥ σ_k{ϝ(r_k, s_{k+1}) − ϝ(r_k, q_k)} − (2 − √2 − ρ)μσ_k[‖r_k − q_k‖² + ‖s_{k+1} − q_k‖²] / (2σ_{k+1}). (19)
By using expression (15) with q = s_{k+1}, we have
σ_k[ϝ(r_k, s_{k+1}) − ϝ(r_k, q_k)] ≥ ⟨r_k − q_k, s_{k+1} − q_k⟩. (20)
Combining expressions (19) and (20), we have
⟨r_k − s_{k+1}, s_{k+1} − ν*⟩ ≥ ⟨r_k − q_k, s_{k+1} − q_k⟩ − (2 − √2 − ρ)μσ_k[‖r_k − q_k‖² + ‖s_{k+1} − q_k‖²] / (2σ_{k+1}). (21)
The following identities are available to us:
2⟨r_k − s_{k+1}, s_{k+1} − ν*⟩ = ‖r_k − ν*‖² − ‖s_{k+1} − r_k‖² − ‖s_{k+1} − ν*‖²,
2⟨q_k − r_k, q_k − s_{k+1}⟩ = ‖r_k − q_k‖² + ‖s_{k+1} − q_k‖² − ‖r_k − s_{k+1}‖².
As a result of this, we now have
‖s_{k+1} − ν*‖² ≤ ‖r_k − ν*‖² − ‖r_k − q_k‖² − ‖s_{k+1} − q_k‖² + (2 − √2 − ρ)μσ_k[‖r_k − q_k‖² + ‖s_{k+1} − q_k‖²] / σ_{k+1}. (22)
Since σ_k → σ, we have lim_{k→+∞} μσ_k/σ_{k+1} = μ < 1, so there is a specific natural number N₁ ∈ ℕ such that μσ_k/σ_{k+1} ≤ 1 for all k ≥ N₁. (23)
Thus, for all k ≥ N₁, we have
‖s_{k+1} − ν*‖² ≤ ‖r_k − ν*‖² − ‖r_k − q_k‖² − ‖s_{k+1} − q_k‖² + (2 − √2 − ρ)[‖r_k − q_k‖² + ‖s_{k+1} − q_k‖²].
Furthermore, it implies that
‖s_{k+1} − ν*‖² ≤ ‖r_k − ν*‖² − (√2 − 1)‖r_k − q_k‖² − (√2 − 1)‖s_{k+1} − q_k‖² − ρ[‖r_k − q_k‖² + ‖s_{k+1} − q_k‖²]. (24)
From expression (24), we obtain
‖s_{k+1} − ν*‖² ≤ ‖r_k − ν*‖², ∀ k ≥ N₁. (25)
Using the formula for {r_k}, we obtain
‖r_k − ν*‖ = ‖s_k + ϰ_k(s_k − s_{k−1}) − χ_k[s_k + ϰ_k(s_k − s_{k−1})] − ν*‖
= ‖(1 − χ_k)(s_k − ν*) + (1 − χ_k)ϰ_k(s_k − s_{k−1}) − χ_kν*‖ (26)
≤ (1 − χ_k)‖s_k − ν*‖ + (1 − χ_k)ϰ_k‖s_k − s_{k−1}‖ + χ_k‖ν*‖
≤ (1 − χ_k)‖s_k − ν*‖ + χ_k K₁, (27)
where
K₁ ≥ (1 − χ_k)(ϰ_k/χ_k)‖s_k − s_{k−1}‖ + ‖ν*‖.
Combining expressions (25) and (27), we obtain
‖s_{k+1} − ν*‖ ≤ (1 − χ_k)‖s_k − ν*‖ + χ_k K₁ ≤ max{‖s_k − ν*‖, K₁} ≤ ⋯ ≤ max{‖s_{N₁} − ν*‖, K₁}. (28)
As a result, we may conclude that {s_k} is a bounded sequence. By using expression (22), we have
‖s_{k+1} − ν*‖² ≤ ‖r_k − ν*‖² − ‖r_k − q_k‖² − ‖s_{k+1} − q_k‖² + (2 − √2 − ρ)μσ_k[‖r_k − q_k‖² + ‖s_{k+1} − q_k‖²] / σ_{k+1}. (29)
Indeed, by expression (27), we have
‖r_k − ν*‖² ≤ (1 − χ_k)²‖s_k − ν*‖² + χ_k²K₁² + 2K₁χ_k(1 − χ_k)‖s_k − ν*‖
≤ ‖s_k − ν*‖² + χ_k[χ_kK₁² + 2K₁(1 − χ_k)‖s_k − ν*‖]
≤ ‖s_k − ν*‖² + χ_kK₂, (30)
for some K₂ > 0. By using expressions (29) and (30), we have
‖s_{k+1} − ν*‖² ≤ ‖s_k − ν*‖² + χ_kK₂ − [1 − (2 − √2 − ρ)μσ_k/σ_{k+1}]‖r_k − q_k‖² − [1 − (2 − √2 − ρ)μσ_k/σ_{k+1}]‖q_k − s_{k+1}‖². (31)
Consequently, this implies that
[1 − (2 − √2 − ρ)μσ_k/σ_{k+1}]‖r_k − q_k‖² + [1 − (2 − √2 − ρ)μσ_k/σ_{k+1}]‖q_k − s_{k+1}‖² ≤ ‖s_k − ν*‖² + χ_kK₂ − ‖s_{k+1} − ν*‖². (32)
From the definition of {r_k}, we can rewrite
‖r_k − ν*‖² = ‖s_k + ϰ_k(s_k − s_{k−1}) − χ_k[s_k + ϰ_k(s_k − s_{k−1})] − ν*‖²
= ‖(1 − χ_k)(s_k − ν*) + (1 − χ_k)ϰ_k(s_k − s_{k−1}) − χ_kν*‖²
≤ ‖(1 − χ_k)(s_k − ν*) + (1 − χ_k)ϰ_k(s_k − s_{k−1})‖² + 2χ_k⟨ν*, ν* − r_k⟩
= (1 − χ_k)²‖s_k − ν*‖² + (1 − χ_k)²ϰ_k²‖s_k − s_{k−1}‖² + 2ϰ_k(1 − χ_k)²‖s_k − ν*‖‖s_k − s_{k−1}‖ + 2χ_k⟨ν*, s_{k+1} − r_k⟩ + 2χ_k⟨ν*, ν* − s_{k+1}⟩
≤ (1 − χ_k)‖s_k − ν*‖² + ϰ_k²‖s_k − s_{k−1}‖² + 2ϰ_k(1 − χ_k)‖s_k − ν*‖‖s_k − s_{k−1}‖ + 2χ_k‖ν*‖‖r_k − s_{k+1}‖ + 2χ_k⟨ν*, ν* − s_{k+1}⟩
= (1 − χ_k)‖s_k − ν*‖² + χ_k[ϰ_k‖s_k − s_{k−1}‖(ϰ_k/χ_k)‖s_k − s_{k−1}‖ + 2(1 − χ_k)‖s_k − ν*‖(ϰ_k/χ_k)‖s_k − s_{k−1}‖ + 2‖ν*‖‖r_k − s_{k+1}‖ + 2⟨ν*, ν* − s_{k+1}⟩]. (33)
Combining expressions (25) and (33), we obtain
‖s_{k+1} − ν*‖² ≤ (1 − χ_k)‖s_k − ν*‖² + χ_k[ϰ_k‖s_k − s_{k−1}‖(ϰ_k/χ_k)‖s_k − s_{k−1}‖ + 2(1 − χ_k)‖s_k − ν*‖(ϰ_k/χ_k)‖s_k − s_{k−1}‖ + 2‖ν*‖‖r_k − s_{k+1}‖ + 2⟨ν*, ν* − s_{k+1}⟩]. (34)
Next, we need to prove that the sequence ‖s_k − ν*‖² converges to zero. Set
l_k := ‖s_k − ν*‖²
and
m_k := ϰ_k‖s_k − s_{k−1}‖(ϰ_k/χ_k)‖s_k − s_{k−1}‖ + 2(1 − χ_k)‖s_k − ν*‖(ϰ_k/χ_k)‖s_k − s_{k−1}‖ + 2‖ν*‖‖r_k − s_{k+1}‖ + 2⟨ν*, ν* − s_{k+1}⟩.
Then, expression (34) can be rewritten as follows:
l_{k+1} ≤ (1 − χ_k)l_k + χ_k m_k.
Indeed, from Lemma 4, it is adequate to demonstrate that lim sup_{j→+∞} m_{k_j} ≤ 0 for every subsequence {l_{k_j}} of {l_k} satisfying
lim inf_{j→+∞} (l_{k_j+1} − l_{k_j}) ≥ 0.
This proportionately requires us to show that
lim sup_{j→+∞} ⟨ν*, ν* − s_{k_j+1}⟩ ≤ 0
and
lim_{j→+∞} ‖r_{k_j} − s_{k_j+1}‖ = 0,
for every subsequence {‖s_{k_j} − ν*‖} of {‖s_k − ν*‖} satisfying
lim inf_{j→+∞} (‖s_{k_j+1} − ν*‖ − ‖s_{k_j} − ν*‖) ≥ 0.
Consider that {‖s_{k_j} − ν*‖} is such a subsequence of {‖s_k − ν*‖}. Then, we have
lim inf_{j→+∞} (‖s_{k_j+1} − ν*‖² − ‖s_{k_j} − ν*‖²) = lim inf_{j→+∞} [(‖s_{k_j+1} − ν*‖ − ‖s_{k_j} − ν*‖)(‖s_{k_j+1} − ν*‖ + ‖s_{k_j} − ν*‖)] ≥ 0.
It follows from expression (32) that
lim sup_{j→+∞} {[1 − (2 − √2 − ρ)μσ_{k_j}/σ_{k_j+1}]‖r_{k_j} − q_{k_j}‖² + [1 − (2 − √2 − ρ)μσ_{k_j}/σ_{k_j+1}]‖q_{k_j} − s_{k_j+1}‖²}
≤ lim sup_{j→+∞} [‖s_{k_j} − ν*‖² − ‖s_{k_j+1} − ν*‖²] + lim sup_{j→+∞} χ_{k_j}K₂
= −lim inf_{j→+∞} [‖s_{k_j+1} − ν*‖² − ‖s_{k_j} − ν*‖²] ≤ 0.
The preceding relationship suggests that
lim_{j→+∞} ‖r_{k_j} − q_{k_j}‖ = 0 and lim_{j→+∞} ‖s_{k_j+1} − q_{k_j}‖ = 0. (36)
Therefore, we obtain
lim_{j→+∞} ‖s_{k_j+1} − r_{k_j}‖ = 0. (37)
Next, we compute
‖r_{k_j} − s_{k_j}‖ = ‖s_{k_j} + ϰ_{k_j}(s_{k_j} − s_{k_j−1}) − χ_{k_j}[s_{k_j} + ϰ_{k_j}(s_{k_j} − s_{k_j−1})] − s_{k_j}‖
≤ ϰ_{k_j}‖s_{k_j} − s_{k_j−1}‖ + χ_{k_j}‖s_{k_j}‖ + ϰ_{k_j}χ_{k_j}‖s_{k_j} − s_{k_j−1}‖
= χ_{k_j}(ϰ_{k_j}/χ_{k_j})‖s_{k_j} − s_{k_j−1}‖ + χ_{k_j}‖s_{k_j}‖ + χ_{k_j}²(ϰ_{k_j}/χ_{k_j})‖s_{k_j} − s_{k_j−1}‖ → 0.
This, together with lim_{j→+∞} ‖s_{k_j+1} − r_{k_j}‖ = 0, yields
lim_{j→+∞} ‖s_{k_j+1} − s_{k_j}‖ = 0.
Because {s_{k_j}} is a bounded sequence, without loss of generality we can consider that {s_{k_j}} converges weakly to some p̂ ∈ Σ. Since ν* = P_{Sol(ϝ,Δ)}(0), Lemma 1 (ii) gives
⟨0 − ν*, q − ν*⟩ ≤ 0, ∀ q ∈ Sol(ϝ, Δ).
Next, let us demonstrate that p̂ ∈ Sol(ϝ, Δ). Combining Lemma 7 with expressions (18) and (15), we obtain
σ_{k_j}ϝ(q_{k_j}, q) ≥ σ_{k_j}ϝ(q_{k_j}, s_{k_j+1}) + ⟨r_{k_j} − s_{k_j+1}, q − s_{k_j+1}⟩
≥ σ_{k_j}ϝ(r_{k_j}, s_{k_j+1}) − σ_{k_j}ϝ(r_{k_j}, q_{k_j}) − (2 − √2 − ρ)μσ_{k_j}/(2σ_{k_j+1})‖q_{k_j} − r_{k_j}‖² − (2 − √2 − ρ)μσ_{k_j}/(2σ_{k_j+1})‖q_{k_j} − s_{k_j+1}‖² + ⟨r_{k_j} − s_{k_j+1}, q − s_{k_j+1}⟩
≥ ⟨r_{k_j} − q_{k_j}, s_{k_j+1} − q_{k_j}⟩ − (2 − √2 − ρ)μσ_{k_j}/(2σ_{k_j+1})‖q_{k_j} − r_{k_j}‖² − (2 − √2 − ρ)μσ_{k_j}/(2σ_{k_j+1})‖q_{k_j} − s_{k_j+1}‖² + ⟨r_{k_j} − s_{k_j+1}, q − s_{k_j+1}⟩,
where q is an arbitrary member of Σ_k. The right-hand side of the last inequality tends to zero due to Formula (37) and the boundedness of the sequence {s_k}. By making use of condition (ϝ4), q_{k_j} ⇀ p̂, and σ_{k_j} → σ > 0, we have
0 ≤ lim sup_{j→+∞} ϝ(q_{k_j}, q) ≤ ϝ(p̂, q), ∀ q ∈ Σ_k.
Because Δ is a subset of Σ_k, it follows that ϝ(p̂, q) ≥ 0 for all q ∈ Δ. This demonstrates that p̂ ∈ Sol(ϝ, Δ). Thus, we have
lim sup_{k→+∞} ⟨ν*, ν* − s_k⟩ = lim_{j→+∞} ⟨ν*, ν* − s_{k_j}⟩ = ⟨ν*, ν* − p̂⟩ ≤ 0.
By using the fact that lim_{j→+∞} ‖s_{k_j+1} − s_{k_j}‖ = 0, we have
lim sup_{j→+∞} ⟨ν*, ν* − s_{k_j+1}⟩ ≤ lim sup_{j→+∞} ⟨ν*, ν* − s_{k_j}⟩ + lim sup_{j→+∞} ⟨ν*, s_{k_j} − s_{k_j+1}⟩ ≤ 0.
Applying Lemma 4 to expression (34), we infer that s_k → ν* as k → +∞. This concludes the proof of Theorem 1.    □

4. Results to Solve Fixed-Point Problem and Variational Inequalities

In this section, we apply our primary results to solve fixed-point problems and variational inequalities. Formulas (3) and (5) are used to derive the relevant corollaries. All the techniques are based on our primary results, which are discussed in further detail below.
Corollary 2.
Assume that E : Δ → Σ is a weakly continuous, L-Lipschitz continuous, and pseudomonotone operator with solution set Sol(E, Δ). Choose σ₁ > 0, s₋₁, s₀ ∈ Σ, ϰ ∈ (0, 1), μ ∈ (0, 1), ρ ∈ (0, 2 − √2), and a sequence {χ_k} ⊂ (0, 1) satisfying the following conditions:
lim_{k→+∞} χ_k = 0 and ∑_{k=1}^{+∞} χ_k = +∞.
Compute
r_k = s_k + ϰ_k(s_k − s_{k−1}) − χ_k[s_k + ϰ_k(s_k − s_{k−1})],
and choose ϰ_k such that 0 ≤ ϰ_k ≤ ϰ̂_k, where
ϰ̂_k = min{ϰ, ϵ_k/‖s_k − s_{k−1}‖} if s_k ≠ s_{k−1}, and ϰ̂_k = ϰ otherwise,
with ϵ_k = o(χ_k) a positive sequence such that lim_{k→+∞} ϵ_k/χ_k = 0. First, compute
q_k = P_Δ(r_k − σ_k E(r_k)).
Given the current iterates s_{k−1}, s_k, q_k, construct the half-space
Σ_k = {z ∈ Σ : ⟨r_k − σ_k E(r_k) − q_k, z − q_k⟩ ≤ 0} for each k ≥ 0,
and compute
s_{k+1} = P_{Σ_k}(r_k − σ_k E(q_k)).
The stepsize should be updated as follows:
σ_{k+1} = min{σ_k, [(2 − √2 − ρ)μ‖r_k − q_k‖² + (2 − √2 − ρ)μ‖s_{k+1} − q_k‖²] / (2⟨E(r_k) − E(q_k), s_{k+1} − q_k⟩)} if ⟨E(r_k) − E(q_k), s_{k+1} − q_k⟩ > 0, and σ_{k+1} = σ_k otherwise.
Then, the sequence {s_k} converges strongly to ν* ∈ Sol(E, Δ).
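A compact numerical sketch of the scheme in Corollary 2 follows (our own illustration; the operator E(p) = Ap + c, the box Δ, the starting points, and all parameter values are choices made purely for the demonstration, and strong monotonicity of this E makes the solution unique):

```python
import numpy as np

# Illustrative data (our choice): E(p) = A p + c, strongly monotone on a box.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
c = np.array([-1.0, -1.0])
E = lambda p: A @ p + c
proj_box = lambda x: np.clip(x, -5.0, 5.0)        # P_Delta for the box [-5, 5]^2

def proj_halfspace(z, a, beta):
    # Projection onto {z : <a, z> <= beta}
    v = a @ z - beta
    return z if v <= 0 else z - (v / (a @ a)) * a

sigma, kappa_bar, mu, rho = 0.5, 0.5, 0.55, 0.05
s_prev, s = np.array([4.0, -3.0]), np.array([3.0, 2.0])
for k in range(1, 5000):
    chi, eps = 1.0 / (k + 1), 1.0 / k ** 2         # chi_k -> 0, sum chi_k = +inf
    diff = np.linalg.norm(s - s_prev)
    kap = kappa_bar if diff == 0 else min(kappa_bar, eps / diff)
    t = s + kap * (s - s_prev)                     # inertial extrapolation
    r = t - chi * t                                # r_k
    q = proj_box(r - sigma * E(r))                 # q_k = P_Delta(r_k - sigma_k E(r_k))
    a = r - sigma * E(r) - q                       # normal vector of Sigma_k
    z = r - sigma * E(q)
    s_next = proj_halfspace(z, a, a @ q) if a @ a > 0 else z
    den = (E(r) - E(q)) @ (s_next - q)
    if den > 0:                                    # monotone stepsize update
        coef = (2 - np.sqrt(2) - rho) * mu
        sigma = min(sigma, coef * (np.sum((r - q) ** 2)
                                   + np.sum((s_next - q) ** 2)) / (2 * den))
    s_prev, s = s, s_next

residual = np.linalg.norm(s - proj_box(s - E(s)))  # natural residual of the problem
print(residual < 1e-2)
```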
Corollary 3.
Assume that E : Δ → Σ is a weakly continuous, L-Lipschitz continuous, and pseudomonotone operator with solution set Sol(E, Δ). Choose σ₁ > 0, s₋₁, s₀ ∈ Σ, ϰ ∈ (0, 1), μ ∈ (0, 1), ρ ∈ (0, 2 − √2), and a sequence {χ_k} ⊂ (0, 1) satisfying the following conditions:
lim_{k→+∞} χ_k = 0 and ∑_{k=1}^{+∞} χ_k = +∞.
Compute
r_k = s_k + ϰ_k(s_k − s_{k−1}) − χ_k[s_k + ϰ_k(s_k − s_{k−1})],
and choose ϰ_k such that 0 ≤ ϰ_k ≤ ϰ̂_k, where
ϰ̂_k = min{ϰ, ϵ_k/‖s_k − s_{k−1}‖} if s_k ≠ s_{k−1}, and ϰ̂_k = ϰ otherwise,
with ϵ_k = o(χ_k) a positive sequence such that lim_{k→+∞} ϵ_k/χ_k = 0. First, compute
q_k = P_Δ(r_k − σ_k E(r_k)).
Given the current iterates s_{k−1}, s_k, q_k, construct the half-space
Σ_k = {z ∈ Σ : ⟨r_k − σ_k E(r_k) − q_k, z − q_k⟩ ≤ 0} for each k ≥ 0,
and compute
s_{k+1} = P_{Σ_k}(r_k − σ_k E(q_k)).
Select a non-negative real sequence {φ_k} such that ∑_{k=1}^{+∞} φ_k < +∞, and update
σ_{k+1} = min{σ_k + φ_k, [(2 − √2 − ρ)μ‖r_k − q_k‖² + (2 − √2 − ρ)μ‖s_{k+1} − q_k‖²] / (2⟨E(r_k) − E(q_k), s_{k+1} − q_k⟩)} if ⟨E(r_k) − E(q_k), s_{k+1} − q_k⟩ > 0, and σ_{k+1} = σ_k + φ_k otherwise.
Then, the sequence {s_k} converges strongly to ν* ∈ Sol(E, Δ).
Corollary 4.
Assume that E : Δ → Σ is a weakly continuous, L-Lipschitz continuous, and pseudomonotone operator with solution set Sol(E, Δ). Choose σ₁ > 0, s₋₁, s₀ ∈ Σ, ϰ ∈ (0, 1), μ ∈ (0, 1), ρ ∈ (0, 2 − √2), and a sequence {χ_k} ⊂ (0, 1) satisfying the following conditions:
lim_{k→+∞} χ_k = 0 and ∑_{k=1}^{+∞} χ_k = +∞.
Compute
r_k = s_k + ϰ_k(s_k − s_{k−1}) − χ_k[s_k + ϰ_k(s_k − s_{k−1})],
and choose ϰ_k such that 0 ≤ ϰ_k ≤ ϰ̂_k, where
ϰ̂_k = min{ϰ, ϵ_k/‖s_k − s_{k−1}‖} if s_k ≠ s_{k−1}, and ϰ̂_k = ϰ otherwise,
with ϵ_k = o(χ_k) a positive sequence such that lim_{k→+∞} ϵ_k/χ_k = 0. First, compute
q_k = P_Δ(r_k − σ_k E(r_k)), s_{k+1} = P_Δ(r_k − σ_k E(q_k)).
Update the stepsize in the following way:
σ_{k+1} = min{σ_k, [(2 − √2 − ρ)μ‖r_k − q_k‖² + (2 − √2 − ρ)μ‖s_{k+1} − q_k‖²] / (2⟨E(r_k) − E(q_k), s_{k+1} − q_k⟩)} if ⟨E(r_k) − E(q_k), s_{k+1} − q_k⟩ > 0, and σ_{k+1} = σ_k otherwise.
Then, the sequence {s_k} converges strongly to ν* ∈ Sol(E, Δ).
Corollary 5.
Assume that E : Δ → Σ is a weakly continuous, L-Lipschitz continuous, and pseudomonotone operator with solution set Sol(E, Δ). Choose σ₁ > 0, s₋₁, s₀ ∈ Σ, ϰ ∈ (0, 1), μ ∈ (0, 1), ρ ∈ (0, 2 − √2), and a sequence {χ_k} ⊂ (0, 1) satisfying the following conditions:
lim_{k→+∞} χ_k = 0 and ∑_{k=1}^{+∞} χ_k = +∞.
Compute
r_k = s_k + ϰ_k(s_k − s_{k−1}) − χ_k[s_k + ϰ_k(s_k − s_{k−1})],
and choose ϰ_k such that 0 ≤ ϰ_k ≤ ϰ̂_k, where
ϰ̂_k = min{ϰ, ϵ_k/‖s_k − s_{k−1}‖} if s_k ≠ s_{k−1}, and ϰ̂_k = ϰ otherwise,
with ϵ_k = o(χ_k) a positive sequence such that lim_{k→+∞} ϵ_k/χ_k = 0. First, compute
q_k = P_Δ(r_k − σ_k E(r_k)), s_{k+1} = P_Δ(r_k − σ_k E(q_k)).
Furthermore, select a non-negative real sequence {φ_k} such that ∑_{k=1}^{+∞} φ_k < +∞, and update
σ_{k+1} = min{σ_k + φ_k, [(2 − √2 − ρ)μ‖r_k − q_k‖² + (2 − √2 − ρ)μ‖s_{k+1} − q_k‖²] / (2⟨E(r_k) − E(q_k), s_{k+1} − q_k⟩)} if ⟨E(r_k) − E(q_k), s_{k+1} − q_k⟩ > 0, and σ_{k+1} = σ_k + φ_k otherwise.
Then, the sequence {s_k} converges strongly to ν* ∈ Sol(E, Δ).
Corollary 6.
Suppose that D : Δ → Σ is a weakly continuous, L-Lipschitz continuous, κ-strict pseudocontraction operator with solution set Sol(D, Δ). Select σ₁ > 0, s₋₁, s₀ ∈ Σ, ϰ ∈ (0, 1), μ ∈ (0, 1), ρ ∈ (0, 2 − √2), and a sequence {χ_k} ⊂ (0, 1) satisfying the conditions:
lim_{k→+∞} χ_k = 0 and ∑_{k=1}^{+∞} χ_k = +∞.
Compute
r_k = s_k + ϰ_k(s_k − s_{k−1}) − χ_k[s_k + ϰ_k(s_k − s_{k−1})],
and choose ϰ_k such that 0 ≤ ϰ_k ≤ ϰ̂_k, where
ϰ̂_k = min{ϰ, ϵ_k/‖s_k − s_{k−1}‖} if s_k ≠ s_{k−1}, and ϰ̂_k = ϰ otherwise,
with ϵ_k = o(χ_k) a positive sequence such that lim_{k→+∞} ϵ_k/χ_k = 0.
Compute
q_k = P_Δ[r_k − σ_k(r_k − D(r_k))].
Given s_{k−1}, s_k, q_k, construct the half-space
Σ_k = {z ∈ Σ : ⟨(1 − σ_k)r_k + σ_k D(r_k) − q_k, z − q_k⟩ ≤ 0},
and compute
s_{k+1} = P_{Σ_k}[r_k − σ_k(q_k − D(q_k))].
The stepsize rule for the next iteration is evaluated as follows:
σ_{k+1} = min{σ_k, [(2 − √2 − ρ)μ‖r_k − q_k‖² + (2 − √2 − ρ)μ‖s_{k+1} − q_k‖²] / (2⟨(r_k − q_k) − [D(r_k) − D(q_k)], s_{k+1} − q_k⟩)} if ⟨(r_k − q_k) − [D(r_k) − D(q_k)], s_{k+1} − q_k⟩ > 0, and σ_{k+1} = σ_k otherwise.
Then, the sequence {s_k} converges strongly to ν* ∈ Sol(D, Δ).
Corollary 7.
Suppose that D : Δ → Σ is a weakly continuous, L-Lipschitz continuous, κ-strict pseudocontraction operator with solution set Sol(D, Δ). Select σ₁ > 0, s₋₁, s₀ ∈ Σ, ϰ ∈ (0, 1), μ ∈ (0, 1), ρ ∈ (0, 2 − √2), and a sequence {χ_k} ⊂ (0, 1) satisfying the conditions:
lim_{k→+∞} χ_k = 0 and ∑_{k=1}^{+∞} χ_k = +∞.
Compute
r_k = s_k + ϰ_k(s_k − s_{k−1}) − χ_k[s_k + ϰ_k(s_k − s_{k−1})],
and choose ϰ_k such that 0 ≤ ϰ_k ≤ ϰ̂_k, where
ϰ̂_k = min{ϰ, ϵ_k/‖s_k − s_{k−1}‖} if s_k ≠ s_{k−1}, and ϰ̂_k = ϰ otherwise,
with ϵ_k = o(χ_k) a positive sequence such that lim_{k→+∞} ϵ_k/χ_k = 0.
Compute
q_k = P_Δ[r_k − σ_k(r_k − D(r_k))].
Given s_{k−1}, s_k, q_k, construct the half-space
Σ_k = {z ∈ Σ : ⟨(1 − σ_k)r_k + σ_k D(r_k) − q_k, z − q_k⟩ ≤ 0},
and compute
s_{k+1} = P_{Σ_k}[r_k − σ_k(q_k − D(q_k))].
Select a non-negative real sequence {φ_k} such that ∑_{k=1}^{+∞} φ_k < +∞, and update
σ_{k+1} = min{σ_k + φ_k, [(2 − √2 − ρ)μ‖r_k − q_k‖² + (2 − √2 − ρ)μ‖s_{k+1} − q_k‖²] / (2⟨(r_k − q_k) − [D(r_k) − D(q_k)], s_{k+1} − q_k⟩)} if ⟨(r_k − q_k) − [D(r_k) − D(q_k)], s_{k+1} − q_k⟩ > 0, and σ_{k+1} = σ_k + φ_k otherwise.
Then, the sequence {s_k} converges strongly to ν* ∈ Sol(D, Δ).
Corollary 8.
Suppose that D : Δ → Σ is a weakly continuous, L-Lipschitz continuous, κ-strict pseudocontraction operator with solution set Sol(D, Δ). Select σ₁ > 0, s₋₁, s₀ ∈ Σ, ϰ ∈ (0, 1), μ ∈ (0, 1), ρ ∈ (0, 2 − √2), and a sequence {χ_k} ⊂ (0, 1) satisfying the conditions:
lim_{k→+∞} χ_k = 0 and ∑_{k=1}^{+∞} χ_k = +∞.
Compute
r_k = s_k + ϰ_k(s_k − s_{k−1}) − χ_k[s_k + ϰ_k(s_k − s_{k−1})],
and choose ϰ_k such that 0 ≤ ϰ_k ≤ ϰ̂_k, where
ϰ̂_k = min{ϰ, ϵ_k/‖s_k − s_{k−1}‖} if s_k ≠ s_{k−1}, and ϰ̂_k = ϰ otherwise,
with ϵ_k = o(χ_k) a positive sequence such that lim_{k→+∞} ϵ_k/χ_k = 0. Compute
q_k = P_Δ[r_k − σ_k(r_k − D(r_k))], s_{k+1} = P_Δ[r_k − σ_k(q_k − D(q_k))].
The stepsize rule for the next iteration is evaluated as follows:
σ_{k+1} = min{σ_k, [(2 − √2 − ρ)μ‖r_k − q_k‖² + (2 − √2 − ρ)μ‖s_{k+1} − q_k‖²] / (2⟨(r_k − q_k) − [D(r_k) − D(q_k)], s_{k+1} − q_k⟩)} if ⟨(r_k − q_k) − [D(r_k) − D(q_k)], s_{k+1} − q_k⟩ > 0, and σ_{k+1} = σ_k otherwise.
Then, the sequence {s_k} converges strongly to ν* ∈ Sol(D, Δ).
Corollary 9.
Suppose that D : Δ → Σ is a weakly continuous, L-Lipschitz continuous, κ-strict pseudocontraction operator with solution set Sol(D, Δ). Select σ₁ > 0, s₋₁, s₀ ∈ Σ, ϰ ∈ (0, 1), μ ∈ (0, 1), ρ ∈ (0, 2 − √2), and a sequence {χ_k} ⊂ (0, 1) satisfying the conditions:
lim_{k→+∞} χ_k = 0 and ∑_{k=1}^{+∞} χ_k = +∞.
Compute
r_k = s_k + ϰ_k(s_k − s_{k−1}) − χ_k[s_k + ϰ_k(s_k − s_{k−1})],
and choose ϰ_k such that 0 ≤ ϰ_k ≤ ϰ̂_k, where
ϰ̂_k = min{ϰ, ϵ_k/‖s_k − s_{k−1}‖} if s_k ≠ s_{k−1}, and ϰ̂_k = ϰ otherwise,
with ϵ_k = o(χ_k) a positive sequence such that lim_{k→+∞} ϵ_k/χ_k = 0.
Compute
q_k = P_Δ[r_k − σ_k(r_k − D(r_k))], s_{k+1} = P_Δ[r_k − σ_k(q_k − D(q_k))].
Furthermore, select a non-negative real sequence {φ_k} such that ∑_{k=1}^{+∞} φ_k < +∞, and update
σ_{k+1} = min{σ_k + φ_k, [(2 − √2 − ρ)μ‖r_k − q_k‖² + (2 − √2 − ρ)μ‖s_{k+1} − q_k‖²] / (2⟨(r_k − q_k) − [D(r_k) − D(q_k)], s_{k+1} − q_k⟩)} if ⟨(r_k − q_k) − [D(r_k) − D(q_k)], s_{k+1} − q_k⟩ > 0, and σ_{k+1} = σ_k + φ_k otherwise.
Then, the sequence {s_k} converges strongly to ν* ∈ Sol(D, Δ).

5. Numerical Illustrations

This section describes a number of numerical experiments conducted to demonstrate the efficacy of the suggested techniques. Some of these numerical experiments provide a thorough understanding of how to select optimal control parameters. Some of them show how the suggested strategies outperform current ones in the literature. All MATLAB codes were executed in MATLAB 9.5 (R2018b) on an Intel(R) Core(TM) i5-6200 Processor CPU @ 2.30–2.40 GHz, with 8.00 GB RAM.
Example 1.
The first test problem is taken from the Nash–Cournot oligopolistic equilibrium model in [10]. In this case, the bifunction ϝ can be written as
ϝ(p, q) = ⟨Pp + Qq + c, q − p⟩,
where the matrices P, Q and the vector c are defined by
P = [3.1 2 0 0 0; 2 3.6 0 0 0; 0 0 3.5 2 0; 0 0 2 3.3 0; 0 0 0 0 3],
Q = [1.6 1 0 0 0; 1 1.6 0 0 0; 0 0 1.5 1 0; 0 0 1 1.5 0; 0 0 0 0 2],
c = (1, 2, 1, 2, 1)ᵀ.
The matrix Q − P has eigenvalues −2.9050, −2.7808, −1.0000, −0.8950, −0.7192. As a result, the matrices Q − P and Q are symmetric negative semidefinite and symmetric positive semidefinite, respectively. In addition, the Lipschitz-type constants are c₁ = c₂ = ½‖P − Q‖ = 1.4525. The constraint set Δ ⊂ ℝ⁵ is defined by
Δ := {p ∈ ℝ⁵ : −2 ≤ pᵢ ≤ 5, i = 1, 2, 3, 4, 5}.
Consider the following control settings: (1) Algorithm 3.2 in [27] (Iter.Mthd.1 for short): $\sigma = \frac{1}{\max\{3 c_1, 3 c_2\}}$, $\varkappa_k = \frac{1}{5(k+2)}$, error term $= \| s_k - q_k \|^2$; (2) Algorithm 4.1 in [26] (Iter.Mthd.2): $\sigma_1 = 0.25$, $\mu = 0.33$, $\varkappa_k = \frac{1}{(k+1)^{0.5}}$, error term $= \max \{ \| s_{k+1} - q_k \|^2, \| s_k - q_k \|^2 \}$; (3) Algorithm 3 in [28] (Iter.Mthd.3): $\sigma = \frac{1}{\max\{3 c_1, 3 c_2\}}$, $\rho = 0.50$, $\epsilon_k = \frac{1}{(k+1)^2}$, $\gamma_k = \frac{1}{5(k+2)}$, $\chi_k = \frac{5}{10}(1 - \gamma_k)$, error term $= \| r_k - q_k \|^2$; (4) Algorithm 1 (Iter.Mthd.4): $\sigma_1 = 0.50$, $\varkappa = 0.50$, $\mu = 0.55$, $\rho = 0.05$, $\epsilon_k = \frac{1}{k^2}$, error term $= \| r_k - q_k \|^2$; (5) Algorithm 2 (Iter.Mthd.5): $\sigma_1 = 0.50$, $\varkappa = 0.50$, $\mu = 0.55$, $\rho = 0.05$, $\epsilon_k = \frac{1}{k^2}$, $\varphi_k = \frac{100}{(1+k)^2}$, error term $= \| r_k - q_k \|^2$. Figures 1–8 and Tables 1 and 2 report the numerical results for Example 1.
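The spectral claims of Example 1 are easy to verify by hand: $Q - P$ is block diagonal (two $2 \times 2$ blocks plus one scalar entry), so its eigenvalues follow from the quadratic formula. A short pure-Python check, with the block entries read off the matrices above (no linear-algebra library needed):

```python
import math

def eig2(a, b, c, d):
    """Eigenvalues of the 2x2 matrix [[a, b], [c, d]]."""
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4.0 * det)
    return (tr - disc) / 2.0, (tr + disc) / 2.0

# Blocks of Q - P, read off the matrices P and Q above.
eigs = [*eig2(-1.5, -1.0, -1.0, -2.0),   # rows/columns 1-2
        *eig2(-2.0, -1.0, -1.0, -1.8),   # rows/columns 3-4
        -1.0]                            # entry (5, 5)

# Q - P is negative definite, hence negative semidefinite as claimed.
assert all(e < 0 for e in eigs)

# Lipschitz-type constants: c1 = c2 = ||P - Q|| / 2; for a symmetric matrix
# the spectral norm equals the largest absolute eigenvalue.
c1 = max(abs(e) for e in eigs) / 2.0
print(round(c1, 4))  # 1.4525
```

The computed eigenvalues match the set quoted above, and the largest magnitude, $2.9050$, halves to the stated constant $c_1 = c_2 = 1.4525$.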
We also present two iterative methods, Algorithms 3 and 4, based on monotone and non-monotone variable stepsize rules, respectively; both solve two strongly convex minimization problems over $\Delta$ directly and do not require a subgradient step. These methods are described below.
Example 2.
Let $ϝ : \Delta \times \Delta \to \mathbb{R}$ be a bifunction defined by
$$ϝ(p, q) = \sum_{i=2}^{5} (q_i - p_i) \, \| p \|, \qquad \forall \, p, q \in \mathbb{R}^5.$$
Furthermore, the set $\Delta$ is taken as
$$\Delta = \big\{ (s_1, \ldots, s_5) : s_1 \geq -1, \ s_i \geq 1, \ i = 2, \ldots, 5 \big\}.$$
As a result, the bifunction $ϝ$ is Lipschitz-type continuous with $c_1 = c_2 = 2$ and satisfies the conditions ($ϝ$1)–($ϝ$5). The solution set of the equilibrium problem is $Sol(ϝ, \Delta) = \{ (s_1, 1, 1, 1, 1) : s_1 > -1 \}$; for details, see [9]. Consider the following control settings: (1) Algorithm 3.2 in [27] (Iter.Mthd.1 for short): $\sigma = \frac{1}{\max\{4 c_1, 4 c_2\}}$, $\varkappa_k = \frac{1}{4(k+2)}$, error term $= \| s_k - q_k \|^2$; (2) Algorithm 4.1 in [26] (Iter.Mthd.2): $\sigma_1 = 0.33$, $\mu = 0.53$, $\varkappa_k = \frac{1}{(k+1)^{0.4}}$, error term $= \max \{ \| s_{k+1} - q_k \|^2, \| s_k - q_k \|^2 \}$; (3) Algorithm 3 in [28] (Iter.Mthd.3): $\sigma = \frac{1}{\max\{4 c_1, 4 c_2\}}$, $\rho = 0.55$, $\epsilon_k = \frac{1}{(k+1)^2}$, $\gamma_k = \frac{1}{4(k+2)}$, $\chi_k = \frac{7}{10}(1 - \gamma_k)$, error term $= \| r_k - q_k \|^2$; (4) Algorithm 1 (Iter.Mthd.4): $\sigma_1 = 0.33$, $\varkappa = 0.76$, $\mu = 0.53$, $\rho = 0.045$, $\epsilon_k = \frac{1}{k^2}$, error term $= \| r_k - q_k \|^2$; (5) Algorithm 2 (Iter.Mthd.5): $\sigma_1 = 0.33$, $\varkappa = 0.76$, $\mu = 0.53$, $\rho = 0.045$, $\epsilon_k = \frac{1}{k^2}$, $\varphi_k = \frac{100}{(1+k)^2}$, error term $= \| r_k - q_k \|^2$. Figures 9–16 and Tables 3 and 4 report the numerical results for Example 2.
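The Lipschitz-type continuity claimed for this bifunction with $c_1 = c_2 = 2$ can be spot-checked numerically. This is a sanity check on random points, not a proof; the sample count, sampling range, and seed are arbitrary choices.

```python
import math, random

# Numerical spot check (not a proof) that the bifunction of Example 2
# satisfies the Lipschitz-type inequality
#     F(p, q) + F(q, r) >= F(p, r) - c1*||p - q||^2 - c2*||q - r||^2
# with c1 = c2 = 2.

def F(p, q):
    """F(p, q) = sum_{i=2}^{5} (q_i - p_i) * ||p|| on R^5 (0-based indices 1..4)."""
    return sum(q[i] - p[i] for i in range(1, 5)) * math.sqrt(sum(x * x for x in p))

def dist2(u, v):
    """Squared Euclidean distance ||u - v||^2."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

random.seed(0)
ok = True
for _ in range(1000):
    p, q, r = ([random.uniform(-5.0, 5.0) for _ in range(5)] for _ in range(3))
    lhs = F(p, q) + F(q, r) + 2.0 * dist2(p, q) + 2.0 * dist2(q, r)
    ok = ok and lhs >= F(p, r) - 1e-9
print(ok)  # True
```

In fact, $ϝ(p,r) - ϝ(p,q) - ϝ(q,r) = \sum_{i=2}^{5}(r_i - q_i)(\|p\| - \|q\|) \leq 2\|p - q\| \|q - r\| \leq \|p - q\|^2 + \|q - r\|^2$, so the inequality even holds with constants $1$; the check above uses the constants stated in the text.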
Algorithm 2: Explicit Subgradient Extragradient Method with Non-Monotone Stepsize Rule
STEP 0: Choose $\sigma_1 > 0$, $s_{-1}, s_0 \in \Sigma$, $\varkappa \in (0, 1)$, $\mu \in (0, 1)$, $\rho \in (0, 2 - \sqrt{2})$, and a sequence $\{\chi_k\} \subset (0, 1)$ satisfying the following conditions:
$$\lim_{k \to +\infty} \chi_k = 0 \quad \text{and} \quad \sum_{k=1}^{+\infty} \chi_k = +\infty.$$
STEP 1: Compute
$$r_k = s_k + \varkappa_k (s_k - s_{k-1}) - \chi_k \big[ s_k + \varkappa_k (s_k - s_{k-1}) \big],$$
where $\varkappa_k$ is chosen such that $0 \leq \varkappa_k \leq \hat{\varkappa}_k$ with
$$\hat{\varkappa}_k = \begin{cases} \min \big\{ \varkappa, \frac{\epsilon_k}{\| s_k - s_{k-1} \|} \big\} & \text{if } s_k \neq s_{k-1}, \\ \varkappa & \text{otherwise}, \end{cases}$$
and $\epsilon_k = o(\chi_k)$ is a positive sequence such that $\lim_{k \to +\infty} \frac{\epsilon_k}{\chi_k} = 0$.
STEP 2: Compute
$$q_k = \arg\min_{q \in \Delta} \big\{ \sigma_k ϝ(r_k, q) + \tfrac{1}{2} \| r_k - q \|^2 \big\}.$$
STEP 3: Given the current iterates $s_{k-1}, s_k, q_k$, first choose $\omega_k \in \partial_2 ϝ(r_k, q_k)$ satisfying $r_k - \sigma_k \omega_k - q_k \in N_\Delta(q_k)$ and generate the half-space
$$\Sigma_k = \{ z \in \Sigma : \langle r_k - \sigma_k \omega_k - q_k, \ z - q_k \rangle \leq 0 \}.$$
Then compute
$$s_{k+1} = \arg\min_{q \in \Sigma_k} \big\{ \sigma_k ϝ(q_k, q) + \tfrac{1}{2} \| r_k - q \|^2 \big\}.$$
STEP 4: Select a non-negative real sequence $\{\varphi_k\}$ such that $\sum_{k=1}^{+\infty} \varphi_k < +\infty$. Compute
$$\sigma_{k+1} = \begin{cases} \min \Big\{ \sigma_k + \varphi_k, \ \frac{(2 - \sqrt{2} - \rho) \mu \| r_k - q_k \|^2 + (2 - \sqrt{2} - \rho) \mu \| s_{k+1} - q_k \|^2}{2 [ ϝ(r_k, s_{k+1}) - ϝ(r_k, q_k) - ϝ(q_k, s_{k+1}) ]} \Big\} & \text{if } ϝ(r_k, s_{k+1}) - ϝ(r_k, q_k) - ϝ(q_k, s_{k+1}) > 0, \\ \sigma_k + \varphi_k & \text{otherwise}. \end{cases}$$
STEP 5: If $q_k = r_k$, then stop the computation. Otherwise, set $k := k + 1$ and return to STEP 1.
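STEP 3 is what makes Algorithm 2 a *subgradient* extragradient method: the second minimization runs over the half-space $\Sigma_k$ rather than over $\Delta$, and projecting onto a half-space $\{x : \langle a, x - q \rangle \leq 0\}$ has a cheap closed form. A small sketch of that projection; the vectors in the usage example are illustrative only:

```python
def proj_halfspace(z, a, q):
    """Project z onto the half-space { x : <a, x - q> <= 0 }, assuming a != 0."""
    viol = sum(ai * (zi - qi) for ai, zi, qi in zip(a, z, q))
    if viol <= 0.0:                # z already lies in the half-space
        return list(z)
    na2 = sum(ai * ai for ai in a)
    # Shift z back along the normal a just far enough to reach the boundary.
    return [zi - (viol / na2) * ai for zi, ai in zip(z, a)]

print(proj_halfspace([3.0, 1.0], [1.0, 0.0], [1.0, 0.0]))  # [1.0, 1.0]
print(proj_halfspace([0.0, 2.0], [1.0, 0.0], [1.0, 0.0]))  # [0.0, 2.0]
```

Because $\Delta \subset \Sigma_k$ by construction of the normal cone condition in STEP 3, replacing $\Delta$ by $\Sigma_k$ keeps the analysis valid while making the second subproblem much cheaper when $\Delta$ itself is complicated.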
Algorithm 3: Explicit Extragradient Method with Monotone Stepsize Rule
STEP 0: Choose $\sigma_1 > 0$, $s_{-1}, s_0 \in \Sigma$, $\varkappa \in (0, 1)$, $\mu \in (0, 1)$, $\rho \in (0, 2 - \sqrt{2})$, and a sequence $\{\chi_k\} \subset (0, 1)$ satisfying the following conditions:
$$\lim_{k \to +\infty} \chi_k = 0 \quad \text{and} \quad \sum_{k=1}^{+\infty} \chi_k = +\infty.$$
STEP 1: Compute
$$r_k = s_k + \varkappa_k (s_k - s_{k-1}) - \chi_k \big[ s_k + \varkappa_k (s_k - s_{k-1}) \big],$$
where $\varkappa_k$ is chosen such that $0 \leq \varkappa_k \leq \hat{\varkappa}_k$ with
$$\hat{\varkappa}_k = \begin{cases} \min \big\{ \varkappa, \frac{\epsilon_k}{\| s_k - s_{k-1} \|} \big\} & \text{if } s_k \neq s_{k-1}, \\ \varkappa & \text{otherwise}, \end{cases}$$
and $\epsilon_k = o(\chi_k)$ is a positive sequence such that $\lim_{k \to +\infty} \frac{\epsilon_k}{\chi_k} = 0$.
STEP 2: Compute
$$q_k = \arg\min_{q \in \Delta} \big\{ \sigma_k ϝ(r_k, q) + \tfrac{1}{2} \| r_k - q \|^2 \big\}.$$
STEP 3: Compute
$$s_{k+1} = \arg\min_{q \in \Delta} \big\{ \sigma_k ϝ(q_k, q) + \tfrac{1}{2} \| r_k - q \|^2 \big\}.$$
STEP 4: Compute
$$\sigma_{k+1} = \begin{cases} \min \Big\{ \sigma_k, \ \frac{(2 - \sqrt{2} - \rho) \mu \| r_k - q_k \|^2 + (2 - \sqrt{2} - \rho) \mu \| s_{k+1} - q_k \|^2}{2 [ ϝ(r_k, s_{k+1}) - ϝ(r_k, q_k) - ϝ(q_k, s_{k+1}) ]} \Big\} & \text{if } ϝ(r_k, s_{k+1}) - ϝ(r_k, q_k) - ϝ(q_k, s_{k+1}) > 0, \\ \sigma_k & \text{otherwise}. \end{cases}$$
STEP 5: If $q_k = r_k$, then stop the computation. Otherwise, set $k := k + 1$ and return to STEP 1.
Algorithm 4: Explicit Extragradient Method with Non-Monotone Stepsize Rule
STEP 0: Choose $\sigma_1 > 0$, $s_{-1}, s_0 \in \Sigma$, $\varkappa \in (0, 1)$, $\mu \in (0, 1)$, $\rho \in (0, 2 - \sqrt{2})$, and a sequence $\{\chi_k\} \subset (0, 1)$ satisfying the following conditions:
$$\lim_{k \to +\infty} \chi_k = 0 \quad \text{and} \quad \sum_{k=1}^{+\infty} \chi_k = +\infty.$$
STEP 1: Compute
$$r_k = s_k + \varkappa_k (s_k - s_{k-1}) - \chi_k \big[ s_k + \varkappa_k (s_k - s_{k-1}) \big],$$
where $\varkappa_k$ is chosen such that $0 \leq \varkappa_k \leq \hat{\varkappa}_k$ with
$$\hat{\varkappa}_k = \begin{cases} \min \big\{ \varkappa, \frac{\epsilon_k}{\| s_k - s_{k-1} \|} \big\} & \text{if } s_k \neq s_{k-1}, \\ \varkappa & \text{otherwise}, \end{cases}$$
and $\epsilon_k = o(\chi_k)$ is a positive sequence such that $\lim_{k \to +\infty} \frac{\epsilon_k}{\chi_k} = 0$.
STEP 2: Compute
$$q_k = \arg\min_{q \in \Delta} \big\{ \sigma_k ϝ(r_k, q) + \tfrac{1}{2} \| r_k - q \|^2 \big\}.$$
STEP 3: Compute
$$s_{k+1} = \arg\min_{q \in \Delta} \big\{ \sigma_k ϝ(q_k, q) + \tfrac{1}{2} \| r_k - q \|^2 \big\}.$$
STEP 4: Select a non-negative real sequence $\{\varphi_k\}$ such that $\sum_{k=1}^{+\infty} \varphi_k < +\infty$. Compute
$$\sigma_{k+1} = \begin{cases} \min \Big\{ \sigma_k + \varphi_k, \ \frac{(2 - \sqrt{2} - \rho) \mu \| r_k - q_k \|^2 + (2 - \sqrt{2} - \rho) \mu \| s_{k+1} - q_k \|^2}{2 [ ϝ(r_k, s_{k+1}) - ϝ(r_k, q_k) - ϝ(q_k, s_{k+1}) ]} \Big\} & \text{if } ϝ(r_k, s_{k+1}) - ϝ(r_k, q_k) - ϝ(q_k, s_{k+1}) > 0, \\ \sigma_k + \varphi_k & \text{otherwise}. \end{cases}$$
STEP 5: If $q_k = r_k$, then stop the computation. Otherwise, set $k := k + 1$ and return to STEP 1.
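The monotone rule (Algorithm 3) and the non-monotone rule (Algorithms 2 and 4) differ only in the summable relaxation $\varphi_k$ added to $\sigma_k$ before the minimum is taken. The toy trace below contrasts the two mechanisms; the candidate bounds are made-up values standing in for the bifunction-dependent quotient, with `None` marking iterations where its denominator is not positive.

```python
def update(sigma, bound, phi):
    """One stepsize update; phi = 0 recovers the monotone rule of Algorithm 3."""
    return sigma + phi if bound is None else min(sigma + phi, bound)

bounds = [0.9, 0.4, 0.8, None, 0.6]     # illustrative candidate bounds only
mono, nonmono = [0.5], [0.5]
for k, b in enumerate(bounds, start=1):
    mono.append(update(mono[-1], b, 0.0))
    nonmono.append(update(nonmono[-1], b, 1.0 / (1 + k) ** 2))  # summable phi_k

print(mono)     # [0.5, 0.5, 0.4, 0.4, 0.4, 0.4] -- can only decrease
print(nonmono)  # recovers after the drop to 0.4, since phi_k lets sigma grow
```

The monotone sequence can only shrink, so one overly pessimistic bound caps the stepsize forever; the non-monotone sequence may drift back up, while $\sum \varphi_k < +\infty$ keeps it bounded and preserves convergence.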
Some Observations from the Numerical Illustrations: The following conclusions can be drawn from the numerical examples above: (1) Examples 1 and 2 present numerical findings for several methods in finite-dimensional settings. The proposed techniques are efficient in terms of both the number of iterations and the elapsed time in practically all scenarios, and all trials show that they outperform the previously available techniques. (2) In both examples, an inadequate stepsize produces a hump in the error plots of the methods; this has no effect on the overall effectiveness of the techniques. (3) Examples 1 and 2 also indicate that the computational effort grows with the complexity of the problem and the tolerance value employed. (4) Adopting an adaptive formula for the stepsize enhances the efficiency and the speed of convergence of the algorithms; in other words, a variable stepsize improves algorithm efficiency compared with a fixed one. (5) Examples 1 and 2 further show that the choice of starting point and the complexity of the operators influence the effectiveness of the techniques, as reflected in the number of iterations and the execution time in seconds.

6. Conclusions

The paper proposes four explicit extragradient-like techniques for solving equilibrium problems with a pseudomonotone, Lipschitz-type bifunction in a real Hilbert space. Innovative stepsize rules were presented that do not rely on knowledge of the Lipschitz-type constants, and strong convergence of the algorithms was established. Several experiments were constructed to demonstrate the numerical behavior of the proposed algorithms and to compare them with others widely known in the literature.

Author Contributions

Conceptualization, H.u.R., W.K. and K.S.; methodology, H.u.R. and K.S.; software, H.u.R. and K.S.; validation, H.u.R., W.K. and K.S.; formal analysis, H.u.R. and W.K.; investigation, H.u.R., W.K. and K.S.; writing—original draft preparation, H.u.R., W.K. and K.S.; writing—review and editing, H.u.R., W.K. and K.S.; visualization, H.u.R., W.K. and K.S.; supervision and funding, W.K. and K.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research project is supported by the Thailand Science Research and Innovation (TSRI) and Rajamangala University of Technology Thanyaburi (RMUTT) under National Science, Research and Innovation Fund (NSRF); Basic Research Fund: Fiscal year 2022 (Contract No. FRB650070/0168 and under project number FRB65E0633M.2).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This research project is supported by the Thailand Science Research and Innovation (TSRI) and Rajamangala University of Technology Thanyaburi (RMUTT) under National Science, Research and Innovation Fund (NSRF); Basic Research Fund: Fiscal year 2022 (Contract No. FRB650070/0168 and under project number FRB65E0633M.2).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Blum, E. From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63, 123–145.
  2. Bianchi, M.; Schaible, S. Generalized monotone bifunctions and equilibrium problems. J. Optim. Theory Appl. 1996, 90, 31–43.
  3. Mastroeni, G. On Auxiliary Principle for Equilibrium Problems. In Nonconvex Optimization and Its Applications; Springer: Boston, MA, USA, 2003; pp. 289–298.
  4. Muu, L.; Oettli, W. Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal. Theory Methods Appl. 1992, 18, 1159–1166.
  5. Facchinei, F.; Pang, J.S. Finite-Dimensional Variational Inequalities and Complementarity Problems; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2002.
  6. Konnov, I. Equilibrium Models and Variational Inequalities; Elsevier: Amsterdam, The Netherlands, 2007; Volume 210.
  7. Bigi, G.; Castellani, M.; Pappalardo, M.; Passacantando, M. Existence and solution methods for equilibria. Eur. J. Oper. Res. 2013, 227, 1–11.
  8. Browder, F.; Petryshyn, W. Construction of fixed points of nonlinear mappings in Hilbert space. J. Math. Anal. Appl. 1967, 20, 197–228.
  9. Wang, S.; Zhang, Y.; Ping, P.; Cho, Y.; Guo, H. New extragradient methods with non-convex combination for pseudomonotone equilibrium problems with applications in Hilbert spaces. Filomat 2019, 33, 1677–1693.
  10. Tran, D.Q.; Dung, M.L.; Nguyen, V.H. Extragradient algorithms extended to equilibrium problems. Optimization 2008, 57, 749–776.
  11. Polyak, B. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17.
  12. Alvarez, F.; Attouch, H. An Inertial Proximal Method for Maximal Monotone Operators via Discretization of a Nonlinear Oscillator with Damping. Set-Valued Anal. 2001, 9, 3–11.
  13. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202.
  14. ur Rehman, H.; Kumam, P.; Cho, Y.J.; Suleiman, Y.I.; Kumam, W. Modified Popov's explicit iterative algorithms for solving pseudomonotone equilibrium problems. Optim. Methods Softw. 2020, 36, 82–113.
  15. Hieu, D.V. An inertial-like proximal algorithm for equilibrium problems. Math. Methods Oper. Res. 2018, 88, 399–415.
  16. Hieu, D.V.; Cho, Y.J.; bin Xiao, Y. Modified extragradient algorithms for solving equilibrium problems. Optimization 2018, 67, 2003–2029.
  17. Dong, Q.L.; Kazmi, K.R.; Ali, R.; Li, X.H. Inertial Krasnosel'skiǐ–Mann type hybrid algorithms for solving hierarchical fixed point problems. J. Fixed Point Theory Appl. 2019, 21, 57.
  18. Alansari, M.; Ali, R.; Farid, M. Strong convergence of an inertial iterative algorithm for variational inequality problem, generalized equilibrium problem, and fixed point problem in a Banach space. J. Inequal. Appl. 2020, 2020, 42.
  19. Farid, M.; Cholamjiak, W.; Ali, R.; Kazmi, K.R. A new shrinking projection algorithm for a generalized mixed variational-like inequality problem and asymptotically quasi-ϕ-nonexpansive mapping in a Banach space. Rev. Real Acad. Cienc. Exactas Fis. Nat. Ser. A Mat. 2021, 115, 114.
  20. Suantai, S.; Kesornprom, S.; Cholamjiak, W.; Cholamjiak, P. Modified Projection Method with Inertial Technique and Hybrid Stepsize for the Split Feasibility Problem. Mathematics 2022, 10, 933.
  21. Muangchoo, K.; ur Rehman, H.; Kumam, P. Weak convergence and strong convergence of nonmonotonic explicit iterative methods for solving equilibrium problems. J. Nonlinear Convex Anal. 2021, 22, 663–681.
  22. ur Rehman, H.; Kumam, P.; Özdemir, M.; Karahan, I. Two generalized non-monotone explicit strongly convergent extragradient methods for solving pseudomonotone equilibrium problems and applications. Math. Comput. Simul. 2021.
  23. ur Rehman, H.; Alreshidi, N.A.; Muangchoo, K. A New Modified Subgradient Extragradient Algorithm Extended for Equilibrium Problems With Application in Fixed Point Problems. J. Nonlinear Convex Anal. 2021, 22, 421–439.
  24. ur Rehman, H.; Kumam, P.; Gibali, A.; Kumam, W. Convergence analysis of a general inertial projection-type method for solving pseudomonotone equilibrium problems with applications. J. Inequal. Appl. 2021, 2021, 63.
  25. Censor, Y.; Gibali, A.; Reich, S. The Subgradient Extragradient Method for Solving Variational Inequalities in Hilbert Space. J. Optim. Theory Appl. 2010, 148, 318–335.
  26. Hieu, D.V.; Strodiot, J.J.; Muu, L.D. Strongly convergent algorithms by using new adaptive regularization parameter for equilibrium problems. J. Comput. Appl. Math. 2020, 376, 112844.
  27. Hieu, D.V. Halpern subgradient extragradient method extended to equilibrium problems. Rev. Real Acad. Cienc. Exactas Fis. Nat. Ser. A Mat. 2016, 111, 823–840.
  28. Vinh, N.T.; Muu, L.D. Inertial Extragradient Algorithms for Solving Equilibrium Problems. Acta Math. Vietnam. 2019, 44, 639–663.
  29. Kumam, P.; Argyros, I.K.; Kumam, W.; Shutaywi, M.; Rehman, H.u. The inertial iterative extragradient methods for solving pseudomonotone equilibrium programming in Hilbert spaces. J. Inequal. Appl. 2022, 2022, 58.
  30. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd ed.; CMS Books in Mathematics; Springer International Publishing: Berlin/Heidelberg, Germany, 2017.
  31. Tiel, J.V. Convex Analysis: An Introductory Text, 1st ed.; Wiley: New York, NY, USA, 1984.
  32. Saejung, S.; Yotkaew, P. Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal. Theory Methods Appl. 2012, 75, 742–750.
Figure 1. All techniques are computationally examined while $s_0 = (1, 1, 1, 1, 1)^T$.
Figure 2. All techniques are computationally examined while $s_0 = (1, 1, 1, 1, 1)^T$.
Figure 3. All techniques are computationally examined while $s_0 = (1, 2, 1, 2, 3)^T$.
Figure 4. All techniques are computationally examined while $s_0 = (1, 2, 1, 2, 3)^T$.
Figure 5. All techniques are computationally examined while $s_0 = (2, 2, 3, 4, 4)^T$.
Figure 6. All techniques are computationally examined while $s_0 = (2, 2, 3, 4, 4)^T$.
Figure 7. All techniques are computationally examined while $s_0 = (2, 2, 3, 4, 6)^T$.
Figure 8. All techniques are computationally examined while $s_0 = (2, 2, 3, 4, 6)^T$.
Figure 9. All techniques are computationally examined while $s_0 = (1, 1, 2, 0, 3)^T$.
Figure 10. All techniques are computationally examined while $s_0 = (1, 1, 2, 0, 3)^T$.
Figure 11. All techniques are computationally examined while $s_0 = (2, 1, 3, 2, 5)^T$.
Figure 12. All techniques are computationally examined while $s_0 = (2, 1, 3, 2, 5)^T$.
Figure 13. All techniques are computationally examined while $s_0 = (1, 6, 3, 4, 4)^T$.
Figure 14. All techniques are computationally examined while $s_0 = (1, 6, 3, 4, 4)^T$.
Figure 15. All techniques are computationally examined while $s_0 = (6, 1, 5, 4, 1)^T$.
Figure 16. All techniques are computationally examined while $s_0 = (6, 1, 5, 4, 1)^T$.
Table 1. Numerical data of the first three techniques for Figures 1–8 (Example 1).

| $s_0$ | Iter.Mthd.1 iterations | Iter.Mthd.2 iterations | Iter.Mthd.3 iterations | Iter.Mthd.1 time (s) | Iter.Mthd.2 time (s) | Iter.Mthd.3 time (s) |
|---|---|---|---|---|---|---|
| $(1, 1, 1, 1, 1)^T$ | 40 | 28 | 24 | 0.344801 | 0.26301 | 0.24393 |
| $(1, 2, 1, 2, 3)^T$ | 36 | 30 | 25 | 0.358533 | 0.39007 | 0.25731 |
| $(2, 2, 3, 4, 4)^T$ | 50 | 35 | 27 | 0.467655 | 0.38310 | 0.29807 |
| $(2, 2, 3, 4, 6)^T$ | 44 | 41 | 27 | 0.448825 | 0.40952 | 0.27013 |
Table 2. Numerical data of the proposed two techniques for Figures 1–8 (Example 1).

| $s_0$ | Iter.Mthd.4 iterations | Iter.Mthd.5 iterations | Iter.Mthd.4 time (s) | Iter.Mthd.5 time (s) |
|---|---|---|---|---|
| $(1, 1, 1, 1, 1)^T$ | 13 | 8 | 0.1261769 | 0.0854048 |
| $(1, 2, 1, 2, 3)^T$ | 18 | 8 | 0.1857996 | 0.0932000 |
| $(2, 2, 3, 4, 4)^T$ | 19 | 11 | 0.2098914 | 0.1436989 |
| $(2, 2, 3, 4, 6)^T$ | 20 | 10 | 0.2199371 | 0.1250085 |
Table 3. Numerical data of the first three techniques for Figures 9–16 (Example 2).

| $s_0$ | Iter.Mthd.1 iterations | Iter.Mthd.2 iterations | Iter.Mthd.3 iterations | Iter.Mthd.1 time (s) | Iter.Mthd.2 time (s) | Iter.Mthd.3 time (s) |
|---|---|---|---|---|---|---|
| $(1, 1, 2, 0, 3)^T$ | 44 | 33 | 22 | 0.3408147 | 0.3129066 | 0.236492066 |
| $(2, 1, 3, 2, 5)^T$ | 54 | 35 | 23 | 0.6523779 | 0.3518180 | 0.256393849 |
| $(1, 6, 3, 4, 4)^T$ | 56 | 35 | 25 | 0.5266949 | 0.3325744 | 0.259483922 |
| $(6, 1, 5, 4, 1)^T$ | 57 | 40 | 25 | 0.4948373 | 0.3590396 | 0.258739392 |
Table 4. Numerical data of the proposed two techniques for Figures 9–16 (Example 2).

| $s_0$ | Iter.Mthd.4 iterations | Iter.Mthd.5 iterations | Iter.Mthd.4 time (s) | Iter.Mthd.5 time (s) |
|---|---|---|---|---|
| $(1, 1, 2, 0, 3)^T$ | 14 | 10 | 0.1276095 | 0.1807799 |
| $(2, 1, 3, 2, 5)^T$ | 16 | 11 | 0.1522214 | 0.1913611 |
| $(1, 6, 3, 4, 4)^T$ | 16 | 13 | 0.1542963 | 0.1918947 |
| $(6, 1, 5, 4, 1)^T$ | 16 | 13 | 0.1445121 | 0.1881485 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
