Article

A Method with Double Inertial Type and Golden Rule Line Search for Solving Variational Inequalities

by
Uzoamaka Azuka Ezeafulukwe
1,
Besheng George Akuchu
1,
Godwin Chidi Ugwunnadi
2,3,* and
Maggie Aphane
3
1
Department of Mathematics, University of Nigeria, Nsukka 410105, Nigeria
2
Department of Mathematics, University of Eswatini, Kwaluseni M201, Eswatini
3
Department of Mathematics and Applied Mathematics, Sefako Makgatho Health Sciences University, Medunsa, P.O. Box 94, Pretoria 0204, South Africa
*
Author to whom correspondence should be addressed.
Mathematics 2024, 12(14), 2203; https://doi.org/10.3390/math12142203
Submission received: 14 May 2024 / Revised: 21 June 2024 / Accepted: 12 July 2024 / Published: 13 July 2024
(This article belongs to the Section Computational and Applied Mathematics)

Abstract:
In this work, we study a new line-search rule for solving the pseudomonotone variational inequality problem with a non-Lipschitz mapping in real Hilbert spaces, and we provide a strong convergence analysis of the sequence generated by our suggested algorithm with double inertial extrapolation steps. To speed up the convergence of projection and contraction methods with inertial steps for solving variational inequalities, we propose a new approach that combines double inertial extrapolation steps, the modified Mann-type projection and contraction method, and a line-search rule based on the golden ratio (√5 + 1)/2. We demonstrate the efficiency, robustness, and stability of the suggested algorithm with numerical examples.

1. Introduction

Let H be a real Hilbert space, C a nonempty closed convex subset of H, and ⟨·,·⟩ the inner product on H, with induced norm ∥·∥. The variational inequality problem (VIP) with respect to G is the problem of finding x* ∈ C such that
⟨Gx*, y − x*⟩ ≥ 0, ∀ y ∈ C,  (1)
where G is an operator from C into H. We denote by VI(C,G) the set of solutions of (1). Inequality (1) frequently arises in equilibrium situations in which no party can unilaterally improve its position. Applications of VIPs can be found in many domains, including engineering, economics, and optimization, and they are closely associated with equilibrium analysis, game theory, and convex optimization. The VIP framework offers a more comprehensive foundation for gradient-based algorithms than gradient descent alone, and such algorithms play a key role in solving large-scale, high-dimensional machine learning challenges. Additionally, decisions made by patients and providers can be informed by machine learning techniques applied to real healthcare data. According to Hess et al. [1], VIPs can aid in the optimization of treatment plans, resource allocation, and customized treatments. Decision making in multi-agent systems involves participant interactions and equilibria; VIPs assist in modeling these interactions and identifying solutions that meet equilibrium requirements. VIPs thus serve as a strong lens through which decision making can be improved across a variety of areas, bridging the gap between optimization, equilibrium, and decision making. Understanding VIPs is crucial for effectively addressing real-world difficulties as machine learning advances (see [1,2] for more information). Moreover, under the equilibrium interpretation, VIPs seek solutions in situations in which none of the participants can strengthen their position. This is similar to the idea of fixed points, which are points that do not change under a transformation. The computational methods and mathematical underpinnings of fixed-point problems and VIPs are similar, and a better understanding of their relationship improves our capacity to tackle challenging problems in a variety of fields. In fact, the set of solutions of the VIP can be expressed in terms of fixed points:
find x* ∈ C such that x* = P_C(I − τG)x*,
where τ is a positive real number, I is the identity mapping, and P_C is the metric projection onto C. Recently, many methods have been applied for solving VI(C,G); see [3,4,5,6,7,8,9,10,11]. In finite-dimensional Euclidean spaces, Korpelevich [12] and Antipin [13] proposed the extragradient method (EM), one of the simplest approaches for solving the VIP with a monotone and L-Lipschitz continuous mapping G:
x_0 ∈ C, τ > 0,
y_n = P_C(x_n − τGx_n),
x_{n+1} = P_C(x_n − τGy_n), n ≥ 1,  (2)
where L > 0 and τ ∈ (0, 1/L). If VI(C,G) is not empty, the sequence {x_n} generated by EM (2) converges to an element of VI(C,G). However, it should be noted that EM requires two projections onto the feasible set C in each iteration; if C is not simple, EM becomes very difficult and expensive to implement. In addition, the convergence of method (2) requires a prior estimate of the Lipschitz constant, which is often difficult to obtain, and the resulting step size is often too small, which reduces the convergence rate of the method.
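For intuition, the EM iteration (2) can be sketched numerically. The sketch below is illustrative only: it uses a toy monotone affine operator G(x) = Ax over the closed unit ball of ℝ² (so both projections are cheap) and a step τ chosen below 1/L; the operator, feasible set, and parameter values are our own choices, not from the paper.

```python
import numpy as np

def project_ball(x, r=1.0):
    """Metric projection onto the closed Euclidean ball of radius r."""
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

def extragradient(G, x0, tau, n_iter=500):
    """Korpelevich's extragradient method (EM): two projections per iteration,
    with a fixed step tau in (0, 1/L) for an L-Lipschitz monotone G."""
    x = x0
    for _ in range(n_iter):
        y = project_ball(x - tau * G(x))   # predictor step
        x = project_ball(x - tau * G(y))   # corrector step
    return x

# Toy example: G(x) = Ax, monotone with L = ||A|| = sqrt(2);
# the unique solution of VI(C, G) is the origin.
A = np.array([[1.0, 1.0], [-1.0, 1.0]])
x_sol = extragradient(lambda x: A @ x, np.array([0.5, 0.5]), tau=0.3)
```

On this strongly monotone toy problem the iterates contract linearly toward the origin, which illustrates why EM works but also why its fixed step, tied to 1/L, can be conservative.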
One method that overcomes these drawbacks is the Projection and Contraction Method (PCM) proposed by He [14] for solving the VIP. The PCM can be summarized as follows:
x_0 ∈ H,
y_n = P_C(x_n − τGx_n),
d(x_n, y_n) := (x_n − y_n) − τ(Gx_n − Gy_n),
x_{n+1} = x_n − ϵη_n d(x_n, y_n), n ≥ 0,  (3)
where ϵ ∈ (0, 2), τ ∈ (0, 1/L) and
η_n := ϕ(x_n, y_n)/∥d(x_n, y_n)∥², ϕ(x_n, y_n) := ⟨x_n − y_n, d(x_n, y_n)⟩, n ≥ 0,
and G is a monotone and L-Lipschitz continuous mapping on H. Under the monotonicity assumption on G, it was proved that the sequence {x_n} generated by (3) converges weakly to an element of VI(C,G). The projection and contraction approach has drawn a lot of attention, and many improvements have been made to it (see [15,16,17]). To tackle the VIP specifically, Dong et al. [17] combined the projection and contraction approach with the inertial method to create an inertial PCM algorithm:
x_0, x_1 ∈ H,
w_n = x_n + θ_n(x_n − x_{n−1}),
y_n = P_C(w_n − τGw_n),
d(w_n, y_n) = (w_n − y_n) − τ(Gw_n − Gy_n),
x_{n+1} = w_n − ϵη_n d(w_n, y_n), n ≥ 1,  (4)
where ϵ ∈ (0, 2), τ ∈ (0, 1/L) and
η_n := ϕ(w_n, y_n)/∥d(w_n, y_n)∥² if d(w_n, y_n) ≠ 0, and η_n := 0 if d(w_n, y_n) = 0,  (5)
where ϕ(w_n, y_n) := ⟨w_n − y_n, d(w_n, y_n)⟩ and {θ_n} is a sequence in (0, 1) that controls the inertial term. They proved that, under some conditions on the control parameters, the sequence {x_n} generated by (4) converges weakly to an element of VI(C,G). Since the step size depends on the Lipschitz constant L, the iterative techniques (2) to (4) may not work when the mapping G is not L-Lipschitz continuous or when its Lipschitz constant is unknown. Yet the convergence behavior of iterative algorithms is known to be strongly influenced by the step-size selection. The step size is often determined using the operator's Lipschitz constant (see [13,14,15,16,17] and references therein), but this can be limiting, particularly when working with mappings that are not Lipschitz. Instead of relying exclusively on the Lipschitz constant, researchers have put forth alternative step-size rules. In 2021, Tian and Xu [18] proposed the following inertial projection and contraction method to avoid this obstacle:
x_0, x_1 ∈ H,
w_n = x_n + θ_n(x_n − x_{n−1}),
y_n = P_C(w_n − τ_nGw_n),
d(w_n, y_n) = (w_n − y_n) − τ_n(Gw_n − Gy_n),
z_n = w_n − ϵη_n d(w_n, y_n),
x_{n+1} = (1 − α_n − β_n)w_n + β_n z_n, n ≥ 1,  (6)
where {α_n}, {β_n} are control sequences in (0, 1), ϵ ∈ (0, 2), and η_n is defined in (5). The step size τ_n is chosen as the largest τ ∈ {γ, γl, γl², …} such that
τ∥Gw_n − Gy_n∥ ≤ κ∥w_n − y_n∥, γ > 0, l ∈ (0, 1), κ ∈ (0, 1).
They proved that the sequence {x_n} generated by (6) converges strongly to a point in VI(C,G).
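The step-size rule in (6) is a standard Armijo-type backtracking: starting from γ, the trial step is shrunk by the factor l until the Lipschitz-like test passes. A minimal sketch of one such step (our own illustrative code, with a toy affine operator and the identity in place of P_C) might look like:

```python
import numpy as np

def backtracking_step(G, proj, w, gamma=1.0, l=0.5, kappa=0.9, max_trials=60):
    """Choose the largest tau in {gamma, gamma*l, gamma*l**2, ...} with
    tau*||G(w) - G(y)|| <= kappa*||w - y||, where y = proj(w - tau*G(w))."""
    tau = gamma
    for _ in range(max_trials):
        y = proj(w - tau * G(w))
        if tau * np.linalg.norm(G(w) - G(y)) <= kappa * np.linalg.norm(w - y):
            return tau, y
        tau *= l
    return tau, y

# Toy check: G(x) = Ax with Lipschitz constant sqrt(2), feasible set = whole space,
# so the test amounts to tau*sqrt(2) <= kappa; gamma = 1 fails, tau = 0.5 passes.
A = np.array([[1.0, 1.0], [-1.0, 1.0]])
G = lambda x: A @ x
tau, y = backtracking_step(G, lambda z: z, np.array([1.0, 0.0]))
```

The point of the rule is that no Lipschitz constant is needed in advance; the backtracking loop discovers an admissible step adaptively at each iteration.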
Scheme (6) assumes that G is uniformly continuous on C and pseudomonotone. For solving variational inequalities and associated optimization problems, numerous numerical techniques with inertial extrapolation steps have been developed; see [3,4,6,17,19,20,21,22,23,24] and the references therein. Furthermore, in 2022, Yao et al. [25] introduced double inertial extrapolation steps into subgradient extragradient methods to improve the effectiveness and speed up the convergence of techniques for solving variational inequality problems; these steps add more momentum and enhance the stability and performance of the method. They proposed the following double inertial steps subgradient extragradient method:
x_0, x_1 ∈ H,
w_n = x_n + θ(x_n − x_{n−1}),
u_n = x_n + δ(x_n − x_{n−1}),
y_n = P_C(u_n − τ_nGu_n),
x_{n+1} = (1 − α)w_n + αP_C(u_n − τ_nGy_n), n ≥ 1,  (7)
where G : H → H is L-Lipschitz continuous and pseudomonotone. They showed that, under suitable conditions, the sequence {x_n} produced by (7) converges weakly to an element of VI(C,G). Since then, the study of double inertial-type algorithms for variational inequality problems has attracted increasing attention (see [26,27,28,29,30]).
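A compact sketch of scheme (7) follows, again with illustrative choices of our own (a fixed step τ, a toy monotone operator, and a ball constraint); the paper's admissible ranges for θ, δ, α and τ_n are more delicate than these hard-coded values suggest.

```python
import numpy as np

def project_ball(x, r=1.0):
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

def double_inertial_seg(G, x0, x1, tau, theta=0.1, delta=0.1, alpha=0.5,
                        n_iter=1000):
    """Double inertial subgradient extragradient scheme (7): two extrapolated
    points w_n and u_n supply the extra momentum."""
    x_prev, x = x0, x1
    for _ in range(n_iter):
        w = x + theta * (x - x_prev)        # first inertial extrapolation
        u = x + delta * (x - x_prev)        # second inertial extrapolation
        y = project_ball(u - tau * G(u))
        x_prev, x = x, (1 - alpha) * w + alpha * project_ball(u - tau * G(y))
    return x

A = np.array([[1.0, 1.0], [-1.0, 1.0]])     # monotone, hence pseudomonotone
x_sol = double_inertial_seg(lambda x: A @ x, np.array([0.5, 0.5]),
                            np.array([0.5, 0.5]), tau=0.3)
```

On the toy problem the iterates converge to the unique solution (the origin), and in practice the inertial terms tend to reduce the iteration count relative to the plain extragradient sketch above.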
With this in mind, we develop a modified double inertial projection and contraction approach that converges under weaker conditions. Specifically, we take the operator to be uniformly continuous and pseudomonotone. We propose a new method that combines the modified Mann-type projection and contraction method, a line-search rule based on the golden ratio (√5 + 1)/2, and double inertial extrapolation steps to speed up the convergence of projection and contraction methods with inertial terms for solving variational inequalities. We provide numerical examples to illustrate the behavior of the proposed method.

2. Preliminaries

We state some known and useful results which will be needed in the proof of our main theorem. In the sequel, we denote strong and weak convergence by “→” and “⇀”, respectively.
Let C be a closed convex subset of a real Hilbert space H. Then, for each x ∈ H, there exists a unique point z = P_C(x) such that
∥x − z∥ = inf_{y∈C} ∥x − y∥.
The operator P_C : H → C is called the metric projection from H onto C. The following lemma highlights some important characteristics of the projection operator.
Lemma 1
([20]). Let x ∈ H and z ∈ C be any point. Then, we have that z = P_C(x) if and only if the following relation holds:
⟨x − z, y − z⟩ ≤ 0, ∀ y ∈ C.
Definition 1.
An operator G : H H is said to be
(a) 
L-Lipschitz continuous with L > 0 if
∥Gx − Gy∥ ≤ L∥x − y∥, ∀ x, y ∈ H;
(b) 
monotone if
⟨Gx − Gy, x − y⟩ ≥ 0, ∀ x, y ∈ H;
(c) 
pseudomonotone if
⟨Gx, y − x⟩ ≥ 0 ⟹ ⟨Gy, y − x⟩ ≥ 0, ∀ x, y ∈ H.
Lemma 2
([20,31]). Let H be a real Hilbert space. Then, for all x, y ∈ H and α ∈ ℝ, the following hold:
(i) 
∥x − y∥² = ∥x∥² + ∥y∥² − 2⟨x, y⟩,
(ii) 
∥x + y∥² ≤ ∥x∥² + 2⟨y, x + y⟩,
(iii) 
∥αx + (1 − α)y∥² = α∥x∥² + (1 − α)∥y∥² − α(1 − α)∥x − y∥².
Proof. 
For (iii), let x, y ∈ H and α ∈ ℝ; then, using Lemma 2(i), we obtain
∥αx + (1 − α)y∥² = α²∥x∥² + 2α(1 − α)⟨x, y⟩ + (1 − α)²∥y∥²
= α²∥x∥² + α(1 − α)[∥x∥² + ∥y∥² − ∥x − y∥²] + (1 − α)²∥y∥²
= [α² + α(1 − α)]∥x∥² + (1 − α)[α + 1 − α]∥y∥² − α(1 − α)∥x − y∥²
= α∥x∥² + (1 − α)∥y∥² − α(1 − α)∥x − y∥².
   □
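Identity (iii) is easy to sanity-check numerically; the snippet below verifies it on random vectors (a check we add for illustration, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
x, y = rng.standard_normal(8), rng.standard_normal(8)
alpha = 0.37
lhs = np.linalg.norm(alpha * x + (1 - alpha) * y) ** 2
rhs = (alpha * np.linalg.norm(x) ** 2 + (1 - alpha) * np.linalg.norm(y) ** 2
       - alpha * (1 - alpha) * np.linalg.norm(x - y) ** 2)
assert abs(lhs - rhs) < 1e-10   # identity (iii) holds up to rounding
```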
Lemma 3
([32]). Let {p_n} be a sequence of nonnegative real numbers and {α_n} a sequence of real numbers in (0, 1) with the condition
∑_{n=1}^∞ α_n = ∞,
and let {b_n} be a sequence of real numbers. Assume that
p_{n+1} ≤ (1 − α_n)p_n + α_n b_n, ∀ n ≥ 1.
If lim sup_{k→∞} b_{n_k} ≤ 0 for every subsequence {p_{n_k}} of {p_n} satisfying the condition
lim inf_{k→∞} (p_{n_k+1} − p_{n_k}) ≥ 0,
then lim_{n→∞} p_n = 0.

3. Main Result

To solve the pseudomonotone VIP in real Hilbert spaces, we present in this section novel iterative techniques based on the double inertial PCM. With the help of the new line-search method and a Mann-type step, these algorithms ensure strong convergence. Our methods have the benefit of not requiring prior knowledge of the mapping's Lipschitz constant. In fact, the variational inequality mapping need not be Lipschitz continuous at all; it only has to be uniformly continuous. To examine the convergence of the algorithms, the mapping and parameters used in our methods must satisfy the following assumptions.
Assumption 1.
(C1) 
H is a Hilbert space and C is a nonempty, closed and convex subset of H.
(C2) 
G : H H is pseudomonotone and uniformly continuous on H.
(C3) 
G is weakly sequentially continuous; that is, for any {x_n} ⊂ H, x_n ⇀ x* implies Gx_n ⇀ Gx*.
We assume that {β_n} is a sequence such that β_n ∈ [a, b] ⊂ (0, 1), and {α_n} ⊂ (0, 1) satisfies the conditions
lim_{n→∞} α_n = 0 and ∑_{n=1}^∞ α_n = ∞.
We present in (Algorithm 1) a double inertial extrapolation with the Mann-type projection and contraction methods using golden rule line search for an approximate solution to the pseudomonotone variational inequality problem.
Algorithm 1: Double inertial PCM-type method for solving pseudomonotone VIP
Initialization: Given λ_1 > 0, γ > 0, ν ∈ (0, 1), ϵ ∈ (0, 2), θ_n, δ_n ∈ (0, 1), φ = (1 + √5)/2, μ ∈ (0, 2φ), θ = φ − 1, α > 0. Let x_0, x_1 ∈ H be given starting points. Set n = 1.
Iterative Steps: Calculate x_{n+1} as follows:
1: 
Compute
u_n = x_n + δ_n(x_n − x_{n−1}),
w_n = (1 − α_n)[x_n + θ_n(x_n − x_{n−1})],
v_n = P_C(w_n − λ_nGw_n).  (10)
If x_n = w_n = v_n or Gw_n = 0, STOP: v_n is a solution of VI(C,G). Otherwise, go to Step 2.
2: 
Compute
y_n = w_n − ϵτ_n d_n(w_n, v_n), d_n(w_n, v_n) := w_n − v_n − λ_n(Gw_n − Gv_n),
where
τ_n = (1 − μ/(2φ)) ∥w_n − v_n∥²/∥d_n(w_n, v_n)∥²
and λ_n = γν^{m_n}, where m_n is the smallest nonnegative integer m satisfying
γν^m ∥G(w_n) − G(v_n)∥ ≤ (μ/(2φ)) ∥w_n − v_n∥.  (11)
3: 
Compute
t_n = (1 − θ)u_n + θx_n,
x_{n+1} = (1 − β_n)t_n + β_n y_n, n ≥ 1.
Set n : = n + 1 and return to Step 1.
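To make the steps concrete, here is an illustrative finite-dimensional sketch of Algorithm 1 (our own code, run on a toy monotone operator over the unit ball of ℝ²; the parameter sequences mirror the choices used later in the experiments and are not the only admissible ones):

```python
import numpy as np

PHI = (1 + np.sqrt(5)) / 2                       # golden ratio

def project_ball(x, r=1.0):
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

def algorithm1(G, proj, x0, x1, n_iter=300, gamma=0.5, nu=0.5, mu=1.2, eps=1.9):
    theta = PHI - 1                              # fixed Mann weight theta = phi - 1
    x_prev, x = x0, x1
    for n in range(1, n_iter + 1):
        alpha_n = 1.0 / (100 * n + 1)            # alpha_n -> 0, sum alpha_n = inf
        beta_n = 0.5 * (1 - alpha_n)
        theta_n = delta_n = alpha_n / n ** 0.01  # inertial weights
        u = x + delta_n * (x - x_prev)           # first inertial point
        w = (1 - alpha_n) * (x + theta_n * (x - x_prev))
        lam = gamma                              # golden-rule line search (11)
        v = proj(w - lam * G(w))
        while lam * np.linalg.norm(G(w) - G(v)) > (mu / (2 * PHI)) * np.linalg.norm(w - v):
            lam *= nu
            v = proj(w - lam * G(w))
        d = (w - v) - lam * (G(w) - G(v))
        dn2 = float(np.dot(d, d))
        if dn2 < 1e-28:                          # w = v: v already solves the VIP
            return v
        tau_n = (1 - mu / (2 * PHI)) * float(np.dot(w - v, w - v)) / dn2
        y = w - eps * tau_n * d                  # projection-and-contraction step
        t = (1 - theta) * u + theta * x          # Mann-type combination
        x_prev, x = x, (1 - beta_n) * t + beta_n * y
    return x

A = np.array([[1.0, 1.0], [-1.0, 1.0]])          # monotone, hence pseudomonotone
x_sol = algorithm1(lambda z: A @ z, project_ball,
                   np.array([1.0, 0.0]), np.array([0.5, 0.5]))
```

On this toy problem the minimum-norm solution is the origin, and the iterates converge to it; the line search settles on an admissible λ_n without any Lipschitz constant being supplied.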
Remark 1.
(a) 
The new line-search rule (11) differs from the rules considered in earlier studies [18,33]. Notably, more efficient numerical solutions are obtained when the step size is selected using the golden ratio.
(b) 
The modified projection and contraction method with the new line-search rule (11) provides an alternative strategy for solving variational inequalities. This algorithm uses the golden ratio (√5 + 1)/2 to efficiently estimate the step size, in contrast to earlier methods (see [34,35,36]), which employed different line-search rules. Its efficacy is shown by numerical experiments, which makes it a promising alternative for solving variational inequalities with non-Lipschitz mappings in real Hilbert spaces.
(c) 
Our initial computational findings demonstrate that, in comparison with the inertial PCM approaches in [17,18], our suggested double inertial extrapolation method is more effective and converges faster (both in CPU time and in number of iterations).
The following lemmas are very helpful in analyzing the convergence of Algorithm 1.
Lemma 4
([37]). Assume that (C1)–(C3) hold; then the line-search rule (11) is well defined. In addition, we have λ_n ≤ γ.
Lemma 5
([37]). Suppose that Assumptions (C1)–(C3) hold. Let {w_n} and {v_n} be two sequences generated by Algorithm 1. If there exists a subsequence {w_{n_k}} of {w_n} such that {w_{n_k}} converges weakly to z ∈ H and lim_{k→∞} ∥w_{n_k} − v_{n_k}∥ = 0, then z ∈ VI(C,G).
The conclusions of Lemmas 4 and 5 follow by simple modifications of Lemmas 3.1 and 3.2 in [37], respectively. To avoid repetition, we omit their proofs here.
Lemma 6.
Suppose that Assumptions (C1)–(C3) hold. Let {w_n}, {v_n} and {y_n} be three sequences generated by Algorithm 1. Then, for all z* ∈ VI(C,G),
∥y_n − z*∥² ≤ ∥w_n − z*∥² − ((2 − ϵ)/ϵ)∥y_n − w_n∥².  (13)
Proof. 
For z* ∈ VI(C,G), we obtain
⟨w_n − z*, d_n(w_n, v_n)⟩ = ⟨w_n − v_n, d_n(w_n, v_n)⟩ + ⟨v_n − z*, d_n(w_n, v_n)⟩
= ⟨w_n − v_n, d_n(w_n, v_n)⟩ + ⟨v_n − z*, w_n − v_n − λ_n(Gw_n − Gv_n)⟩.  (14)
From Lemma 1 and the definition of v_n in (10), it follows that
⟨w_n − v_n − λ_nGw_n, v_n − z*⟩ ≥ 0;  (15)
since v_n ∈ C and z* ∈ VI(C,G), we have ⟨Gz*, v_n − z*⟩ ≥ 0, and by the pseudomonotonicity of G we obtain ⟨Gv_n, v_n − z*⟩ ≥ 0, which together with (15) implies that
⟨w_n − v_n − λ_n(G(w_n) − G(v_n)), v_n − z*⟩ ≥ 0.  (16)
Combining (14) and (16), we obtain
⟨w_n − z*, d_n(w_n, v_n)⟩ = ⟨w_n − v_n, d_n(w_n, v_n)⟩ + ⟨v_n − z*, d_n(w_n, v_n)⟩
≥ ⟨w_n − v_n, d_n(w_n, v_n)⟩
= ⟨w_n − v_n, w_n − v_n − λ_n(Gw_n − Gv_n)⟩
= ∥w_n − v_n∥² − λ_n⟨w_n − v_n, Gw_n − Gv_n⟩
≥ ∥w_n − v_n∥² − λ_n∥w_n − v_n∥∥Gw_n − Gv_n∥
≥ ∥w_n − v_n∥² − (μ/(2φ))∥w_n − v_n∥²
= (1 − μ/(2φ))∥w_n − v_n∥².  (17)
On the other hand, from (17) and the definition of y_n, we have
∥y_n − z*∥² = ∥w_n − z* − ϵτ_n d_n(w_n, v_n)∥²
= ∥w_n − z*∥² − 2ϵτ_n⟨d_n(w_n, v_n), w_n − z*⟩ + ϵ²τ_n²∥d_n(w_n, v_n)∥²
≤ ∥w_n − z*∥² − 2ϵτ_n(1 − μ/(2φ))∥w_n − v_n∥² + ϵ²τ_n²∥d_n(w_n, v_n)∥²
= ∥w_n − z*∥² − 2ϵτ_n²∥d_n(w_n, v_n)∥² + ϵ²τ_n²∥d_n(w_n, v_n)∥²
= ∥w_n − z*∥² − ((2 − ϵ)/ϵ)(ϵτ_n)²∥d_n(w_n, v_n)∥²
= ∥w_n − z*∥² − ((2 − ϵ)/ϵ)∥y_n − w_n∥². □
Theorem 1.
Suppose that Assumptions (C1)–(C3) hold and θ_n, δ_n ∈ (0, 1) are chosen such that
lim_{n→∞} (θ_n/α_n)∥x_n − x_{n−1}∥ = 0 and lim_{n→∞} (δ_n/α_n)∥x_n − x_{n−1}∥ = 0,
where θ_n = o(α_n) and δ_n = o(α_n). Then, the sequence {x_n} generated by Algorithm 1 converges strongly to z* ∈ VI(C,G), where ∥z*∥ = min{∥z∥ : z ∈ VI(C,G)}.
Proof. 
Let z* ∈ VI(C,G). The proof is divided into the following four stages.
Stage 1. We prove that the sequence {x_n} is bounded. Indeed, since ϵ ∈ (0, 2), it follows from Lemma 6 that
∥y_n − z*∥ ≤ ∥w_n − z*∥.
Moreover, from the definition of u_n and w_n, we obtain
∥u_n − z*∥ = ∥x_n + δ_n(x_n − x_{n−1}) − z*∥ ≤ ∥x_n − z*∥ + δ_n∥x_n − x_{n−1}∥ = ∥x_n − z*∥ + α_n · (δ_n/α_n)∥x_n − x_{n−1}∥,  (19)
and, letting h_n = x_n + θ_n(x_n − x_{n−1}), we have w_n = (1 − α_n)h_n. Therefore, following the same argument as in (19), we have
∥h_n − z*∥ ≤ ∥x_n − z*∥ + α_n · (θ_n/α_n)∥x_n − x_{n−1}∥.  (20)
Thus,
∥w_n − z*∥ = ∥(1 − α_n)h_n − z*∥ ≤ (1 − α_n)∥h_n − z*∥ + α_n∥z*∥.  (21)
Combining (20) and (21), we obtain
∥w_n − z*∥ ≤ (1 − α_n)[∥x_n − z*∥ + α_n · (θ_n/α_n)∥x_n − x_{n−1}∥] + α_n∥z*∥ ≤ (1 − α_n)∥x_n − z*∥ + α_n[(θ_n/α_n)∥x_n − x_{n−1}∥ + ∥z*∥].  (22)
Since {(θ_n/α_n)∥x_n − x_{n−1}∥} and {(δ_n/α_n)∥x_n − x_{n−1}∥} converge to 0 as n → ∞, there exist positive numbers M_1, M_2 such that
(θ_n/α_n)∥x_n − x_{n−1}∥ ≤ M_1 and (δ_n/α_n)∥x_n − x_{n−1}∥ ≤ M_2
for all n ≥ 1. Therefore, letting M* = max{M_1 + ∥z*∥, M_2}, we respectively obtain from (19) and (22) that
∥u_n − z*∥ ≤ ∥x_n − z*∥ + α_nM*  (23)
and
∥w_n − z*∥ ≤ (1 − α_n)∥x_n − z*∥ + α_nM*.  (24)
On the other hand, from the definition of {t_n} and (23),
∥t_n − z*∥ = ∥(1 − θ)u_n + θx_n − z*∥ ≤ (1 − θ)∥u_n − z*∥ + θ∥x_n − z*∥ ≤ (1 − θ)[∥x_n − z*∥ + α_nM*] + θ∥x_n − z*∥ = ∥x_n − z*∥ + α_n(1 − θ)M*,
and using the fact that β_n ∈ [a, b] ⊂ (0, 1) for all n ≥ 1, we obtain
∥x_{n+1} − z*∥ = ∥(1 − β_n)t_n + β_n y_n − z*∥
≤ (1 − β_n)∥t_n − z*∥ + β_n∥y_n − z*∥
≤ (1 − β_n)∥t_n − z*∥ + β_n∥w_n − z*∥
≤ (1 − β_n)[∥x_n − z*∥ + α_nM*] + β_n[(1 − α_n)∥x_n − z*∥ + α_nM*]
= (1 − α_nβ_n)∥x_n − z*∥ + α_nM*
≤ (1 − α_na)∥x_n − z*∥ + α_na(M*/a)
≤ max{∥x_n − z*∥, M*/a} ≤ ⋯ ≤ max{∥x_1 − z*∥, M*/a}.
Therefore, { x n } is bounded and so { t n } , { y n } , { w n } and { u n } are also bounded.
Stage 2. We show, for some K > 0, that
((2 − ϵ)/ϵ)a∥v_n − w_n∥² + a(1 − b)∥t_n − y_n∥² ≤ ∥x_n − z*∥² − ∥x_{n+1} − z*∥² + α_nK.  (25)
Indeed, from the definition of x_{n+1} and Lemma 2, we obtain
∥x_{n+1} − z*∥² = ∥(1 − β_n)t_n + β_n y_n − z*∥²
= ∥(1 − β_n)(t_n − z*) + β_n(y_n − z*)∥²
= (1 − β_n)∥t_n − z*∥² + β_n∥y_n − z*∥² − β_n(1 − β_n)∥t_n − y_n∥²
≤ (1 − β_n)[(1 − θ)∥u_n − z*∥² + θ∥x_n − z*∥²] + β_n∥y_n − z*∥² − β_n(1 − β_n)∥t_n − y_n∥²
≤ (1 − β_n)∥u_n − z*∥² + (1 − β_n)∥x_n − z*∥² + β_n∥y_n − z*∥² − β_n(1 − β_n)∥t_n − y_n∥².  (26)
It follows from (13) and (26) that
∥x_{n+1} − z*∥² ≤ (1 − β_n)∥u_n − z*∥² + (1 − β_n)∥x_n − z*∥² + β_n∥w_n − z*∥² − β_n((2 − ϵ)/ϵ)∥v_n − w_n∥² − β_n(1 − β_n)∥t_n − y_n∥²  (27)
and
∥u_n − z*∥² = ∥x_n + δ_n(x_n − x_{n−1}) − z*∥² ≤ ∥x_n − z*∥² + 2δ_n⟨x_n − x_{n−1}, u_n − z*⟩ ≤ ∥x_n − z*∥² + 2δ_n∥x_n − x_{n−1}∥∥u_n − z*∥.  (28)
Since {u_n} is bounded and {(δ_n/α_n)∥x_n − x_{n−1}∥} converges, there exists M_3 > 0 such that for all n ≥ 1,
(δ_n/α_n)∥x_n − x_{n−1}∥∥u_n − z*∥ ≤ M_3;
thus,
∥u_n − z*∥² ≤ ∥x_n − z*∥² + 2α_nM_3.  (29)
It follows from (24), (27) and (29) that
∥x_{n+1} − z*∥² ≤ (1 − β_n)[∥x_n − z*∥² + 2α_nM_3] + β_n[(1 − α_n)∥x_n − z*∥ + α_nM*]² − β_n((2 − ϵ)/ϵ)∥v_n − w_n∥² − β_n(1 − β_n)∥t_n − y_n∥²
≤ (1 − β_n)∥x_n − z*∥² + 2α_n(1 − β_n)M_3 + β_n(1 − α_n)∥x_n − z*∥² + 2α_nβ_n(1 − α_n)M*∥x_n − z*∥ + (α_nM*)² − β_n((2 − ϵ)/ϵ)∥v_n − w_n∥² − β_n(1 − β_n)∥t_n − y_n∥²
≤ (1 − α_nβ_n)∥x_n − z*∥² + α_n[2M_3 + 2M*∥x_n − z*∥ + α_nM*²] − β_n((2 − ϵ)/ϵ)∥v_n − w_n∥² − β_n(1 − β_n)∥t_n − y_n∥².
Thus, since β_n ∈ [a, b] and {x_n} is bounded, for some K > 0 we obtain
((2 − ϵ)/ϵ)a∥v_n − w_n∥² + a(1 − b)∥t_n − y_n∥² ≤ ((2 − ϵ)/ϵ)β_n∥v_n − w_n∥² + β_n(1 − β_n)∥t_n − y_n∥²
≤ ∥x_n − z*∥² − ∥x_{n+1} − z*∥² + α_n[2M_3 + 2M*∥x_n − z*∥ + α_nM*² − β_n∥x_n − z*∥²]
≤ ∥x_n − z*∥² − ∥x_{n+1} − z*∥² + α_nK.
Stage 3. Next, we show that
∥x_{n+1} − z*∥² ≤ (1 − α_nβ_n)∥x_n − z*∥² + α_nβ_nΨ_n,  (30)
where
Ψ_n = α_n∥z*∥² + 2(1 − α_n)⟨z* − h_n, z*⟩ + (2δ_n/α_n)∥x_n − x_{n−1}∥∥u_n − z*∥ + (2θ_n/α_n)∥x_n − x_{n−1}∥∥h_n − z*∥.
From the definition of {x_{n+1}} and (28), we obtain
∥x_{n+1} − z*∥² = ∥(1 − β_n)t_n + β_n y_n − z*∥²
≤ (1 − β_n)∥t_n − z*∥² + β_n∥y_n − z*∥²
= (1 − β_n)∥(1 − θ)(u_n − z*) + θ(x_n − z*)∥² + β_n∥y_n − z*∥²
≤ (1 − β_n)[(1 − θ)∥u_n − z*∥² + θ∥x_n − z*∥²] + β_n∥w_n − z*∥²
≤ (1 − β_n)[(1 − θ)(∥x_n − z*∥² + 2δ_n∥x_n − x_{n−1}∥∥u_n − z*∥) + θ∥x_n − z*∥²] + β_n∥(1 − α_n)h_n − z*∥²
≤ (1 − β_n)∥x_n − z*∥² + 2δ_n∥x_n − x_{n−1}∥∥u_n − z*∥ + β_n∥(1 − α_n)(h_n − z*) − α_nz*∥²
≤ (1 − β_n)∥x_n − z*∥² + 2δ_n∥x_n − x_{n−1}∥∥u_n − z*∥ + β_n[(1 − α_n)∥h_n − z*∥² + α_n²∥z*∥² + 2α_n(1 − α_n)⟨z* − h_n, z*⟩]
≤ (1 − β_n)∥x_n − z*∥² + 2δ_n∥x_n − x_{n−1}∥∥u_n − z*∥ + β_n(1 − α_n)∥x_n − z*∥² + 2β_nθ_n∥x_n − x_{n−1}∥∥h_n − z*∥ + β_nα_n²∥z*∥² + 2β_nα_n(1 − α_n)⟨z* − h_n, z*⟩
≤ (1 − α_nβ_n)∥x_n − z*∥² + α_nβ_n[α_n∥z*∥² + 2(1 − α_n)⟨z* − h_n, z*⟩ + (2δ_n/α_n)∥x_n − x_{n−1}∥∥u_n − z*∥ + (2θ_n/α_n)∥x_n − x_{n−1}∥∥h_n − z*∥]
= (1 − α_nβ_n)∥x_n − z*∥² + α_nβ_nΨ_n.
Stage 4. Finally, we prove that the sequence { | | x n z * | | } converges to zero.
Let {∥x_{n_k} − z*∥} be a subsequence of {∥x_n − z*∥} such that
lim inf_{k→∞} (∥x_{n_k+1} − z*∥ − ∥x_{n_k} − z*∥) ≥ 0.
Then,
lim inf_{k→∞} (∥x_{n_k+1} − z*∥² − ∥x_{n_k} − z*∥²) = lim inf_{k→∞} [(∥x_{n_k+1} − z*∥ − ∥x_{n_k} − z*∥)(∥x_{n_k+1} − z*∥ + ∥x_{n_k} − z*∥)] ≥ 0.
From (25) and α_n → 0 as n → ∞, we obtain
lim sup_{k→∞} [((2 − ϵ)/ϵ)a∥v_{n_k} − w_{n_k}∥² + a(1 − b)∥t_{n_k} − y_{n_k}∥²] ≤ lim sup_{k→∞} [∥x_{n_k} − z*∥² − ∥x_{n_k+1} − z*∥²] + lim sup_{k→∞} α_{n_k}K = −lim inf_{k→∞} [∥x_{n_k+1} − z*∥² − ∥x_{n_k} − z*∥²] ≤ 0.
This implies that
lim_{k→∞} ∥v_{n_k} − w_{n_k}∥ = 0 and lim_{k→∞} ∥t_{n_k} − y_{n_k}∥ = 0.  (31)
From (11) and the definitions of d_n and τ_n, we obtain
∥d_n(w_n, v_n)∥ ≥ ∥w_n − v_n∥ − (μ/(2φ))∥w_n − v_n∥ = (1 − μ/(2φ))∥w_n − v_n∥,
and then
τ_n∥d_n(w_n, v_n)∥ = (1 − μ/(2φ))∥w_n − v_n∥²/∥d_n(w_n, v_n)∥ ≤ ∥w_n − v_n∥.
It follows from the definition of y_n that
∥y_n − w_n∥ = ϵτ_n∥d_n(w_n, v_n)∥ ≤ ϵ∥w_n − v_n∥.  (32)
Using (31) in (32), we obtain
lim_{k→∞} ∥y_{n_k} − w_{n_k}∥ = 0.  (33)
Combining (31) and (33), we obtain
lim_{k→∞} ∥t_{n_k} − w_{n_k}∥ = 0.  (34)
Also, from w_n = (1 − α_n)h_n, where h_n = x_n + θ_n(x_n − x_{n−1}), we obtain
∥h_{n_k} − x_{n_k}∥ ≤ α_{n_k} · (θ_{n_k}/α_{n_k})∥x_{n_k} − x_{n_k−1}∥ → 0  (35)
as k → ∞, and since α_n → 0 as n → ∞ and {h_n} is bounded,
lim_{k→∞} ∥w_{n_k} − h_{n_k}∥ = 0.  (36)
Combining (35) and (36), we obtain
lim_{k→∞} ∥w_{n_k} − x_{n_k}∥ = 0;  (37)
from (34) and (37), we obtain
lim_{k→∞} ∥t_{n_k} − x_{n_k}∥ = 0.  (38)
Also, since ∥x_{n+1} − t_n∥ ≤ β_n∥t_n − y_n∥, from (31) we obtain
lim_{k→∞} ∥x_{n_k+1} − t_{n_k}∥ = 0,
which together with (38) gives
lim_{k→∞} ∥x_{n_k+1} − x_{n_k}∥ = 0.
Since {x_{n_k}} is bounded, there exists a subsequence {x_{n_{k_j}}} of {x_{n_k}} such that x_{n_{k_j}} ⇀ p ∈ H. From (37), we obtain w_{n_{k_j}} ⇀ p, and it follows from (31) and Lemma 5 that p ∈ VI(C,G). Furthermore, since p ∈ VI(C,G) and ∥z*∥ = min{∥z∥ : z ∈ VI(C,G)}, from Lemma 1 we obtain
lim sup_{k→∞} ⟨z* − x_{n_k}, z*⟩ = lim_{j→∞} ⟨z* − x_{n_{k_j}}, z*⟩ = ⟨z* − p, z*⟩ ≤ 0;
it follows from (35) that
lim sup_{k→∞} ⟨z* − h_{n_k}, z*⟩ = lim sup_{k→∞} [⟨x_{n_k} − h_{n_k}, z*⟩ + ⟨z* − x_{n_k}, z*⟩] = ⟨z* − p, z*⟩ ≤ 0,
which implies that lim sup_{k→∞} Ψ_{n_k} ≤ 0. Therefore, it follows from (30) and Lemma 3 that x_n → z* as n → ∞. □

4. Numerical Example

In this section, we first present Example 1 to demonstrate how the stability and convergence rate of Mann-type inertial projection and contraction methods for solving variational inequality problems can be improved through the additional momentum of double inertial extrapolation terms. Second, we compare Algorithm 1 with the inertial projection and contraction techniques of Dong et al. [17] (method (4)) and Tian and Xu [18] (method (6)) in a computational experiment.
Example 1.
Let H = L²([0,1]) and C = {x ∈ L²([0,1]) : ⟨a, x⟩ ≤ b}, where a = t² + 1 and b = 1, with norm ∥x∥ = (∫₀¹ |x(t)|² dt)^{1/2} and inner product ⟨x, y⟩ = ∫₀¹ x(t)y(t) dt, for all x, y ∈ L²([0,1]), t ∈ [0,1]. Define the metric projection P_C as follows:
P_C(x) = x if x ∈ C, and P_C(x) = ((b − ⟨a, x⟩)/∥a∥²_{L²}) a + x otherwise.
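The same half-space projection formula can be written directly in finite dimensions (illustrative code of ours, with ℝᴺ in place of L²([0,1])):

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Projection onto C = {x : <a, x> <= b}: when the constraint is
    violated, move back along a by the violation divided by ||a||^2."""
    s = float(np.dot(a, x))
    if s <= b:
        return x
    return x + ((b - s) / float(np.dot(a, a))) * a

a, b = np.array([1.0, 2.0]), 1.0
p = proj_halfspace(np.array([3.0, 3.0]), a, b)   # lands on the boundary <a, p> = b
```

The projected point satisfies the constraint with equality whenever the input was infeasible, which is exactly what the closed-form expression above encodes.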
Let G : L²([0,1]) → L²([0,1]) be defined by G(x)(t) = e^{−∥x∥} ∫₀ᵗ x(s) ds, for all x ∈ L²([0,1]), t, s ∈ [0,1]; then G is a pseudomonotone and uniformly continuous mapping (see [38]). We choose the following parameters for the algorithms: α_n = 1/(1 + n), β_n = (n/(2(n + 1)))(1 − α_n), δ_n = θ_n = 1/(1 + n)^{0.1}, μ = 1.1, ν = 0.5, γ = 0.5, κ = 0.5 and ϵ = 1.9. Because the exact solution of the problem is unknown, we define the sequence TOL_n := ∥x_{n+1} − x_n∥² and use the stopping criterion TOL_n < ε for the iterative processes, where ε is a predetermined error tolerance; here, ε = 10⁻⁵. For the numerical experiments illustrated in Figure 1 and Table 1 below, we consider the following cases.
Case 1: x_0 = t³ and x_1 = t² + t.
Case 2: x_0 = 2t³ + t and x_1 = t²/2.
Case 3: x_0 = 6t³ + t² + t and x_1 = t⁵/2.
Case 4: x_0 = t⁷ + 4t and x_1 = t³/6.
From the numerical results, it is clear that our Algorithm 1 solves the problem with fewer iterations and less CPU time (in seconds). This shows the advantage of using double inertial terms and the golden ratio in Algorithm 1.
Example 2.
Let H = ℝᴺ. Define G : ℝᴺ → ℝᴺ by G(x) = Mx + q, where the matrix M is formed as M = VΛV, with V = I − 2vvᵀ/∥v∥² a Householder matrix and Λ = diag(σ_{11}, σ_{12}, …, σ_{1N}) a diagonal matrix, and
σ_{1j} = cos(jπ/(N + 1)) + 1 + (cos(π/(N + 1)) + 1 − Ĉ(cos(Nπ/(N + 1)) + 1))/(Ĉ − 1), j = 1, 2, …, N,
with Ĉ the prescribed condition number of M ([39], Example 5.2). In the numerical computation, we choose Ĉ = 10⁴ and q = 0, and we draw the entries of the vector v ∈ ℝᴺ uniformly from (−1, 1). Thus, G is pseudomonotone and Lipschitz continuous with L = ∥M∥ (see [39]). Setting C = {x ∈ ℝᴺ : ∥x∥ ≤ 1}, the projection onto C is computed efficiently in Matlab. Moreover, we examine various instances of the problem's dimension, namely N = 20, 40, 60, 80, with starting points x_1 = (1, 1, …, 1)ᵀ and x_0 = (0, 0, …, 0)ᵀ. We choose the following parameters for Algorithm 1: γ = 0.5, ν = 0.5, μ = 1.2, ϵ = 1.9, α_n = 1/(100n + 1), θ_n = δ_n = α_n/n^{0.01}, β_n = 0.5(1 − α_n). We take l = 0.11, κ = 0.5, ϵ = 0.5, θ_n = α_n/n^{0.01} and β_n = 0.5(1 − α_n) for methods (4) and (6). In this example, we take the stopping tolerance to be ε = 10⁻⁵ and obtain the numerical results shown in Table 2 and Figure 2.
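The construction of M can be reproduced directly; the snippet below (our own, following the description above, with NumPy in place of Matlab) builds M and confirms that its condition number equals Ĉ:

```python
import numpy as np

def make_M(N, C_hat=1e4, seed=0):
    """M = V @ diag(sigma) @ V, with V = I - 2vv^T/||v||^2 a Householder
    reflection; the additive shift in sigma enforces cond(M) = C_hat."""
    rng = np.random.default_rng(seed)
    v = rng.uniform(-1.0, 1.0, N)
    V = np.eye(N) - 2.0 * np.outer(v, v) / np.dot(v, v)
    c = np.cos(np.arange(1, N + 1) * np.pi / (N + 1)) + 1.0   # cos(j*pi/(N+1)) + 1
    sigma = c + (c[0] - C_hat * c[-1]) / (C_hat - 1.0)
    return V @ np.diag(sigma) @ V

M = make_M(20)
```

Since V is orthogonal and symmetric, M is symmetric with eigenvalues σ_{1j}, and a short computation shows σ_{11}/σ_{1N} = Ĉ, which is why this family is a convenient ill-conditioned test problem.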
The numerical results, shown in Table 2 and Figure 2, indicate that Algorithm 1 performs better than both methods (4) and (6) in terms of the number of iterations and the CPU time needed for computation. This improvement highlights the advantages of double inertial extrapolation steps, which add momentum, accelerate the rate of convergence, and significantly improve the overall performance and stability of the algorithm.

5. Conclusions

In this paper, the pseudomonotone variational inequality problem in real Hilbert spaces is addressed using a Mann-type projection and contraction technique. Our approach uses double inertial extrapolation steps together with a novel line-search step size based on the golden rule to accelerate the rate of convergence. To demonstrate how well our suggested approach works, we provided numerical examples.

Author Contributions

All authors contributed equally to the writing of this paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Informed Consent Statement

Not applicable. The authors affirm that this work has not yet been published anywhere and is not currently being considered for publication elsewhere.

Data Availability Statement

Data and source code will be made available on request.

Acknowledgments

The authors are grateful to the Department of Mathematics and Applied Mathematics, Sefako Makgatho Health Sciences University, South Africa, for supporting this research work.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. (Top Left): Case 1; (Top Right): Case 2; (Bottom Left): Case 3; (Bottom Right): Case 4: error plots comparing Algorithm 1 and Equation (6) for Example 1.
Figure 2. The behavior of T O L n with ε = 10 5 for Example 2: (Top Left): N = 20 ; (Top Right): N = 40 ; (Bottom Left): N = 60 ; (Bottom Right): N = 80 .
Table 1. Comparison of Algorithm 1 and Equation (6).
Case    Metric        Algorithm 1    Equation (6)
1       Iter.         6              10
        CPU (time)    0.6209         0.6549
2       Iter.         5              10
        CPU (time)    0.5492         0.5824
3       Iter.         6              10
        CPU (time)    0.6075         0.6115
4       Iter.         6              10
        CPU (time)    0.5463         0.6069
Table 2. Numerical results for Example 2 with ε = 10 5 .
N       Metric        Algorithm 1    Equation (6)    Equation (4)
20      Iter.         7              585             4512
        CPU (time)    0.0161         0.0223          0.0331
40      Iter.         7              855             27,541
        CPU (time)    0.0267         0.0278          0.4378
60      Iter.         7              647             45,469
        CPU (time)    0.0200         0.0303          1.0728
80      Iter.         7              479             81,719
        CPU (time)    0.0216         0.0238          2.8491