Article

An Adapted Proximal Point Algorithm Utilizing the Golden Ratio Technique for Solving Equilibrium Problems in Banach Spaces

by Hammed Anuoluwapo Abass 1, Olawale Kazeem Oyewole 2,*, Seithuti Philemon Moshokoa 2 and Abubakar Adamu 3,4

1 Department of Mathematics and Applied Mathematics, Sefako Makgato Health Science University, Pretoria 0204, South Africa
2 Department of Mathematics and Statistics, Tshwane University of Technology, Arcadia, Pretoria 0007, South Africa
3 Operational Research Center in Healthcare, Near East University, TRNC Mersin 10, Nicosia 99138, Turkey
4 Charles Chidume Mathematics Institute, African University of Science and Technology, Abuja 900107, Nigeria
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(23), 3773; https://doi.org/10.3390/math12233773
Submission received: 26 October 2024 / Revised: 26 November 2024 / Accepted: 27 November 2024 / Published: 29 November 2024
(This article belongs to the Section B: Geometry and Topology)

Abstract: This paper explores the iterative approximation of solutions to equilibrium problems and proposes a simple proximal point method for addressing them. We incorporate the golden ratio technique as an extrapolation method, resulting in a two-step iterative process. This method is self-adaptive and does not require any Lipschitz-type conditions for implementation. We present and prove a weak convergence theorem along with a sublinear convergence rate for our method. The results extend some previously published findings from Hilbert spaces to 2-uniformly convex Banach spaces. To demonstrate the effectiveness of the method, we provide several numerical illustrations and compare the results with those from other methods available in the literature.

1. Introduction

In this paper, we focus on studying an iterative approximation of the Equilibrium Problem (EP), which is defined by the following condition:
Find $s^* \in K$ such that
$$f(s^*, s) \geq 0, \quad \forall s \in K,$$
where $f : K \times K \to \mathbb{R}$ is a bifunction and $K$ is a closed and convex subset of a real Banach space $E$. We denote the solution set of the EP by $\Omega$ whenever the problem is consistent. The introduction of the EP is attributed to Blum and Oettli [1], with additional contributions from [2] in the early 1990s. Both of these works are based on the minimax inequality considered by Ky Fan [3]. As presented in this form, the EP serves as a unified framework for studying several optimization and fixed-point problems, including variational inequalities, complementarity problems, saddle point problems, and Nash equilibrium problems, among others.
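For instance, the classical variational inequality of finding $s^* \in K$ with $\langle F(s^*), s - s^* \rangle \geq 0$ for all $s \in K$ fits this template with the standard choice of bifunction
$$f(s, t) = \langle F(s), t - s \rangle, \qquad s, t \in K,$$
so that $s^*$ solves the EP exactly when it solves the variational inequality.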
In recent years, the EP has attracted significant interest from various researchers, leading to the development of several prominent solution methods. Notable among these are gap function methods [4], auxiliary problem principle methods [5], and proximal point methods [2,6]. The proximal method, in particular, has garnered considerable attention as a means of obtaining approximate solutions to the EP [6,7]. This method is based on the auxiliary problem principle initially introduced by Cohen [8] and later adapted to the EP in real Banach spaces by Mastroeni [5]. It is important to note that the proximal method is particularly effective when the bifunction is strongly monotone or monotone. In 2008, Tran et al. [9] extended the proximal point method to solve the EP under the assumption that the bifunction is pseudomonotone and satisfies a Lipschitz-type condition. The proposed method, also known as the extragradient method, is a two-step proximal point method that builds on Korpelevich’s extragradient method (see [10]) within the EP framework.
It is worth mentioning that the construction of the extragradient method by Tran et al. [9] requires a Lipschitz-type condition to be satisfied, particularly regarding the selection of the scaling parameter $\alpha$, which must fall in the interval $\left(0, \min\left\{\frac{1}{2c_1}, \frac{1}{2c_2}\right\}\right)$, where $c_1$ and $c_2$ are the Lipschitz-type constants of the bifunction. Since then, numerous extensions and modifications of this method have emerged in various directions (see [11,12,13,14,15,16]).
Hieu [13], drawing on the subgradient technique developed by Censor et al. [17,18], introduced a subgradient extragradient method to solve the EP associated with a pseudomonotone operator. In this approach, the second optimization subproblem is altered by replacing the feasible set with a constructible half-space. Similar to the method proposed by Tran et al. [9], Hieu’s method [13] also relies on knowledge of the Lipschitz condition of the cost function. In the context of 2-uniformly convex Banach spaces, Ogbuisi [19] proposed a Popov subgradient extragradient method for approximating solutions to the EP. He established a weak convergence theorem under the condition that the scaling parameter satisfies specific requirements related to the Lipschitz-type constants $c_1$ and $c_2$. However, the reliance of these methods on the Lipschitz constants poses significant limitations, especially due to the nonlinear nature of the bifunction, thus restricting their applicability.
To address this limitation, Yang and Liu [15,16] introduced a self-adaptive subgradient extragradient method. In their approach, the scaling parameter is updated using a formula that begins with a known initial point, thereby enhancing the scope of the subgradient extragradient method. Jolaoso and Aphane [20] proposed two subgradient extragradient methods for solving the EP. The first method, similar to Ogbuisi’s, relies on the Lipschitz-type condition. However, in their second method, they utilized the self-adaptive technique, extending the work of Yang and Liu [15,16] to the framework of 2-uniformly convex Banach spaces. It is important to note that both the extragradient method and its subgradient versions require solving a strongly convex optimization problem twice per iteration, which can be time-consuming and memory-intensive.
Recently, Hoai [21] introduced a modified proximal point algorithm based on the golden ratio technique within the framework of a real Hilbert space. A noteworthy feature of this method is that it requires solving only one strongly convex optimization problem per iteration. This improvement over the original proximal point method is attributed to the incorporation of the golden ratio. The golden ratio method was initially proposed by Malitsky [22] for solving mixed variational inequalities, and it has recently been studied as an extrapolation technique to enhance the convergence properties of iterative algorithms (see [22,23] and the references therein). To generate a sequence $\{s_n\}$ with this method, an extra iterate $r_n$ is introduced as a weighted average of the current $s_n$ and the previous $r_{n-1}$, with weights governed by the golden ratio $\phi = \frac{\sqrt{5} + 1}{2}$. Owing to the improvements observed in the convergence properties of iterative methods, researchers have increasingly considered applications of this technique (see [21,24,25,26]).
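The specific constant is not incidental: writing $\delta = 1/\phi$ for the averaging weight used later in Algorithm 1 (our notation), the defining identity of the golden ratio gives
$$\phi^2 = \phi + 1 \quad \Longleftrightarrow \quad \delta^2 + \delta = 1,$$
so every $\delta \in \left[\frac{\sqrt{5} - 1}{2}, 1\right)$ satisfies $\delta^2 \geq 1 - \delta$, which is exactly the threshold exploited in the convergence analysis of Section 3.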
Building upon the works of Hoai [21], Ogbuisi [19], and Jolaoso and Aphane [20], we introduce a simple proximal point method based on the golden ratio technique for solving the EP in the context of 2-uniformly convex Banach spaces. This method, which combines the proximal point and golden ratio approaches, is self-adaptive and eliminates the need for prior knowledge of the Lipschitz-type constants of the associated pseudomonotone bifunction. For the proposed method, we establish and prove a weak convergence theorem, along with a sublinear rate of convergence.
The manuscript is structured as follows: In Section 2, we review important definitions and results, and we derive analogs of two well-known results from Hilbert spaces. The main result is presented in Section 3, which begins with key assumptions and an introduction to the methodology. Section 4 contains numerical illustrations that demonstrate the performance of the method, along with comparisons to previously published results. Finally, we conclude the manuscript with a summary in the last section.

2. Preliminary Results

In this section, we revisit several foundational definitions and relevant lemmas. Let $K$ be a closed and convex subset of a real Banach space $E$ with norm $\|\cdot\|$. We denote the dual space of $E$ by $E^*$ and the duality pairing between elements of $E$ and $E^*$ by $\langle \cdot, \cdot \rangle$. The weak convergence of a sequence $\{s_n\}$ to a point $s^* \in E$ is expressed as $s_n \rightharpoonup s^*$.
Let $J : E \to 2^{E^*}$ be the normalized duality mapping defined by
$$Js = \{ s^* \in E^* : \langle s, s^* \rangle = \|s\|^2 = \|s^*\|^2 \}, \quad s \in E.$$
We also define the Lyapunov functional $\phi : E \times E \to \mathbb{R}_+$ as follows (see [27]):
$$\phi(s, t) = \|s\|^2 - 2\langle s, Jt \rangle + \|t\|^2, \quad \forall s, t \in E.$$
From the definition of $\phi$, it is evident that
$$(\|s\| - \|t\|)^2 \leq \phi(s, t) \leq (\|s\| + \|t\|)^2.$$
We will also require the following three-point property of the functional $\phi$ (see [28,29]):
$$\phi(s, u) = \phi(s, t) + \phi(t, u) + 2\langle s - t, Jt - Ju \rangle. \quad (1)$$
Also in [27], Alber introduced the generalized projection operator $\Pi_K : E \to K$ defined by
$$\Pi_K s = \arg\min_{t \in K} \phi(t, s), \quad s \in E.$$
The generalized projection operator satisfies the following property as given in the next result.
Lemma 1 ([27]). Let $K$ be a closed and convex subset of a smooth, strictly convex, and reflexive real Banach space $E$. If $s \in E$ and $t \in K$, then
$$t = \Pi_K s \iff \langle u - t, Js - Jt \rangle \leq 0, \quad \forall u \in K,$$
and
$$\phi(u, \Pi_K s) + \phi(\Pi_K s, s) \leq \phi(u, s), \quad \forall u \in K, \ s \in E.$$
In the case where $E = H$ is a real Hilbert space, $\phi(s, t) = \|s - t\|^2$ and $\Pi_K$ reduces to the metric projection (see [30]).
Definition 1. Let $K$ be a closed and convex subset of $E$ and let $f : K \times K \to \mathbb{R}$ be a bifunction. Then, $f$ is
(1) strongly monotone if there exists a positive constant $\gamma$ such that
$$f(s, t) + f(t, s) + \gamma \|s - t\|^2 \leq 0, \quad \forall s, t \in K;$$
(2) monotone if
$$f(s, t) + f(t, s) \leq 0, \quad \forall s, t \in K;$$
(3) strongly pseudomonotone if there exists a positive constant $\gamma$ such that
$$f(s, t) \geq 0 \implies f(t, s) + \gamma \|s - t\|^2 \leq 0, \quad \forall s, t \in K;$$
(4) pseudomonotone if
$$f(s, t) \geq 0 \implies f(t, s) \leq 0, \quad \forall s, t \in K;$$
(5) (see Theorem 3.1(v), [4]) Lipschitz-type on $K$ if there exists a positive constant $L > 0$ such that
$$f(p, r) - f(p, q) - f(q, r) \leq L \|p - q\| \|q - r\|, \quad \forall p, q, r \in K.$$
We note that the above inequality corresponds to
$$f(p, r) - f(p, q) - f(q, r) \leq c_1 \|p - q\|^2 + c_2 \|q - r\|^2, \quad \forall p, q, r \in K, \quad (2)$$
where $2c_1 = 2c_2 = L$.
Clearly, $(1) \implies (2) \implies (4)$.
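As a concrete illustration of (5) (a standard computation we add here; it is not spelled out in the text): for an affine bifunction of the form $f(p, q) = \langle Ap + Bq + c, q - p \rangle$ on $\mathbb{R}^m$, as used in Example 1 below, the constant terms cancel and
$$f(p, r) - f(p, q) - f(q, r) = \langle (A - B^{\top})(p - q), r - q \rangle \leq \|A - B^{\top}\| \, \|p - q\| \, \|q - r\|,$$
so the Lipschitz-type condition holds with $L = \|A - B^{\top}\|$ (equal to $\|B - A\|$ when $B$ is symmetric).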
Denote the unit sphere and unit ball of a real Banach space $E$ by $S_E$ and $B_E$, respectively. The modulus of convexity of $E$ is the function $\delta_E : (0, 2] \to [0, 1]$ defined by
$$\delta_E(\epsilon) = \inf\left\{ 1 - \frac{1}{2}\|s + t\| : s, t \in B_E, \ \|s - t\| \geq \epsilon \right\}.$$
A Banach space $E$ is said to be uniformly convex if $\delta_E(\epsilon) > 0$ for any $\epsilon \in (0, 2]$, and 2-uniformly convex if there exists a constant $\kappa > 0$ such that $\delta_E(\epsilon) \geq \kappa \epsilon^2$ for any $\epsilon \in (0, 2]$. It is obvious that every 2-uniformly convex Banach space is uniformly convex. The Banach space $E$ is said to be strictly convex if $\|s + t\| < 2$ for all $s, t \in S_E$ with $s \neq t$. The modulus of smoothness of $E$ is the function $\rho_E : [0, +\infty) \to [0, +\infty)$ defined by
$$\rho_E(\lambda) = \sup\left\{ \frac{1}{2}\left( \|s + \lambda t\| + \|s - \lambda t\| \right) - 1 : s \in S_E, \ \|t\| \leq 1 \right\}.$$
The Banach space $E$ is said to be uniformly smooth if $\lim_{\lambda \to 0} \frac{\rho_E(\lambda)}{\lambda} = 0$, and 2-uniformly smooth if there exists a constant $c > 0$ such that $\rho_E(\lambda) \leq c\lambda^2$. Also, $E$ is said to be smooth if the limit
$$\lim_{\lambda \to 0} \frac{\|s + \lambda t\| - \|s\|}{\lambda}$$
exists for all $s, t \in S_E$. It is well known that every 2-uniformly smooth Banach space is uniformly smooth. See [30,31] for more on the geometry of Banach spaces.
Lemma 2 ([32]). Let $E$ be a smooth and uniformly convex real Banach space and let $\{x_n\}$ and $\{y_n\}$ be sequences in $E$. If either $\{x_n\}$ or $\{y_n\}$ is bounded and $\phi(x_n, y_n) \to 0$ as $n \to \infty$, then $\|x_n - y_n\| \to 0$ as $n \to \infty$.
Lemma 3 ([33]). Let $E$ be a 2-uniformly convex Banach space. Then, there exists a constant $\kappa > 0$ such that
$$\kappa \|x - y\|^2 \leq \phi(x, y), \quad \forall x, y \in E.$$
Lemma 4 ([34]). Let $\frac{1}{p} + \frac{1}{q} = 1$ with $p, q > 1$. The space $E$ is $q$-uniformly smooth if and only if its dual $E^*$ is $p$-uniformly convex.
We use the following definitions in the sequel. The normal cone $N_K$ to a set $K$ at a point $s \in K$ is given by
$$N_K(s) = \{ s^* \in E^* : \langle s - t, s^* \rangle \geq 0, \ \forall t \in K \}.$$
Let $h : K \to \mathbb{R}$ be a function. The subdifferential of $h$ at $s$ is defined by
$$\partial h(s) = \{ s^* \in E^* : h(t) - h(s) \geq \langle s^*, t - s \rangle, \ \forall t \in K \}.$$
Remark 1. Observe from the definitions of the subdifferential and $\phi(s, t)$ that, since $\partial \|\cdot\|^2(s) = \{2Js\}$ and the term $-2\langle \cdot, Jt \rangle$ is linear,
$$\partial \phi(\cdot, t)(s) = \{ 2(Js - Jt) \}.$$
Lemma 5 (Chapter 7, Section 7.2, [35]). Let $K$ be a convex subset of a Banach space $E$ and let $h : K \to \mathbb{R} \cup \{+\infty\}$ be a convex, subdifferentiable, and lower semicontinuous function satisfying the following regularity condition: either $\mathrm{int}(K) \neq \emptyset$ or $h$ is continuous at a point of $K$. Then, $s^*$ is a solution to the convex optimization problem
$$\min_{s \in K} h(s)$$
if and only if $0 \in \partial h(s^*) + N_K(s^*)$.
Let $K$ be a closed and convex subset of a real Banach space $E$ and let $h : K \to \mathbb{R}$ be a convex function. Let $\alpha > 0$ be a scaling parameter. The proximal mapping of $h$ is defined by
$$\mathrm{prox}_{\alpha h}(s) = \arg\min_{t \in K} \left\{ \alpha h(t) + \frac{1}{2} \phi(t, s) \right\}, \quad s \in E.$$
Remark 2. From (Chapter 3, Section 3.2, [36]), it is known that $\mathrm{prox}_{\alpha h}(s)$ is single-valued. By the definition of $\mathrm{prox}_{\alpha h}(s)$ and Lemma 5, we obtain
$$\alpha \left( h(t) - h(\mathrm{prox}_{\alpha h}(s)) \right) \geq \langle Js - J\mathrm{prox}_{\alpha h}(s), t - \mathrm{prox}_{\alpha h}(s) \rangle, \quad \forall t \in K.$$
Indeed, setting $u = \mathrm{prox}_{\alpha h}(s)$, Remark 1 and Lemma 5 give
$$0 \in \partial \left( \alpha h(\cdot) + \frac{1}{2} \phi(\cdot, s) \right)(u) + N_K(u).$$
Then, there exist $w \in \partial h(u)$ and $v \in N_K(u)$ such that
$$\alpha w + Ju - Js + v = 0. \quad (3)$$
As $v \in N_K(u)$, we have $\langle t - u, v \rangle \leq 0$ for all $t \in K$. It follows from (3) that
$$\alpha \langle t - u, w \rangle \geq \langle t - u, Js - Ju \rangle, \quad \forall t \in K. \quad (4)$$
Then, from $w \in \partial h(u)$, we obtain
$$h(t) - h(u) \geq \langle w, t - u \rangle. \quad (5)$$
Combining (4) and (5), one has
$$\alpha \left( h(t) - h(\mathrm{prox}_{\alpha h}(s)) \right) \geq \langle t - \mathrm{prox}_{\alpha h}(s), Js - J\mathrm{prox}_{\alpha h}(s) \rangle. \quad (6)$$
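In the Euclidean setting ($E = \mathbb{R}^n$, $J = I$, $\phi(s, t) = \|s - t\|^2$), the proximal mapping above reduces to the familiar constrained prox, and the inequality of Remark 2 can be checked numerically. The following is a minimal Python sketch under those assumptions; the quadratic $h$ is a hypothetical test function of ours, not one taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def prox(h, s, alpha, constraints=()):
    """Euclidean prox: argmin_{t in K} alpha*h(t) + 0.5*||t - s||^2."""
    obj = lambda t: alpha * h(t) + 0.5 * np.sum((t - s) ** 2)
    return minimize(obj, s, constraints=list(constraints)).x

# Hypothetical h and K = R^n (no constraints):
h = lambda t: np.sum(t ** 2)               # h(t) = ||t||^2
s, alpha = np.array([2.0, -4.0]), 0.5
u = prox(h, s, alpha)                      # closed form here: s / (1 + 2*alpha)
assert np.allclose(u, s / (1 + 2 * alpha), atol=1e-5)
# Check the Remark 2 inequality alpha*(h(t) - h(u)) >= <s - u, t - u> at a test point
t = np.array([1.0, 1.0])
assert alpha * (h(t) - h(u)) >= np.dot(s - u, t - u) - 1e-8
```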
To establish the weak convergence, we need the following lemmas, which are analogs of results obtained previously in Hilbert spaces [37]. First, we obtain the analog of the renowned Opial lemma for the Lyapunov functional.
Lemma 6. Let $\{s_n\}$ be a sequence in $E$ such that $s_n \rightharpoonup q$. Then, for all $p \neq q$,
$$\liminf_{n \to \infty} \phi(q, s_n) < \liminf_{n \to \infty} \phi(p, s_n).$$
Proof. From (1), we have
$$\phi(p, s_n) = \phi(p, q) + \phi(q, s_n) + 2\langle p - q, Jq - Js_n \rangle.$$
Since $\phi(p, q) > 0$ for all $p \neq q$ and, by the weak-to-weak continuity of $J$, $Js_n \rightharpoonup Jq$, so that $\langle p - q, Jq - Js_n \rangle \to 0$, it follows that
$$\liminf_{n \to \infty} \phi(q, s_n) < \liminf_{n \to \infty} \phi(p, s_n),$$
as required. □
Lemma 7. Let $K$ be a nonempty, closed, and convex subset of a 2-uniformly convex Banach space $E$ and let $\{s_n\}$ be a sequence in $E$ such that
(i) $\lim_{n \to \infty} \phi(p, s_n)$ exists for each $p \in K$;
(ii) every sequential weak limit point of $\{s_n\}$ belongs to $K$.
Then, $\{s_n\}$ converges weakly to a point in $K$.
Proof. We proceed by contradiction. Assume that there exist at least two weak accumulation points $p$ and $q$ with $p \neq q$. Let $\{s_{n_k}\}$ be a subsequence of $\{s_n\}$ such that $s_{n_k} \rightharpoonup p$. Then, by Lemma 6, we have
$$\lim_{n \to \infty} \phi(p, s_n) = \lim_{k \to \infty} \phi(p, s_{n_k}) = \liminf_{k \to \infty} \phi(p, s_{n_k}) < \liminf_{k \to \infty} \phi(q, s_{n_k}) = \lim_{k \to \infty} \phi(q, s_{n_k}) = \lim_{n \to \infty} \phi(q, s_n).$$
In a comparable manner, we can show that $\lim_{n \to \infty} \phi(q, s_n) < \lim_{n \to \infty} \phi(p, s_n)$, which is a contradiction. Hence, $p = q$, and we conclude that $\{s_n\}$ converges weakly to a point of $K$. □
Lemma 8 ([23]). Let $\{a_n\}$ and $\{b_n\}$ be nonnegative sequences satisfying
$$a_{n+1} \leq a_n - b_n, \quad \forall n > m,$$
where $m$ is some nonnegative integer. Then, $b_n \to 0$ as $n \to \infty$ and the limit of $\{a_n\}$ exists.

3. Main Result

In this section, we present our main results. First, we prove the following crucial lemma.
Lemma 9. Let $s, t, u, v \in E$ and $\beta \in \mathbb{R}$, and suppose $Jt = \beta Ju + (1 - \beta) Jv$. Then,
$$\phi(s, t) = \beta \left[ \phi(s, u) - \phi(t, u) \right] + (1 - \beta) \left[ \phi(s, v) - \phi(t, v) \right].$$
Proof. From the definition of $\phi$ and $Jt = \beta Ju + (1 - \beta) Jv$, we have
$$\begin{aligned}
\phi(s, t) &= \|s\|^2 - 2\langle s, Jt \rangle + \|t\|^2 \\
&= \|s\|^2 - 2\langle s, \beta Ju + (1 - \beta) Jv \rangle + \|t\|^2 \\
&= \|s\|^2 - 2\beta \langle s, Ju \rangle - 2(1 - \beta) \langle s, Jv \rangle + \|t\|^2 \\
&= \beta \left( \|s\|^2 - 2\langle s, Ju \rangle + \|t\|^2 \right) + (1 - \beta) \left( \|s\|^2 - 2\langle s, Jv \rangle + \|t\|^2 \right). \quad (7)
\end{aligned}$$
Now observe that
$$\begin{aligned}
\|s\|^2 - 2\langle s, Ju \rangle + \|t\|^2 &= \left( \|s\|^2 - 2\langle s, Ju \rangle + \|u\|^2 \right) - \left( \|t\|^2 - 2\langle t, Ju \rangle + \|u\|^2 \right) + 2\|t\|^2 - 2\langle t, Ju \rangle \\
&= \phi(s, u) - \phi(t, u) + 2\|t\|^2 - 2\langle t, Ju \rangle. \quad (8)
\end{aligned}$$
Similarly,
$$\|s\|^2 - 2\langle s, Jv \rangle + \|t\|^2 = \phi(s, v) - \phi(t, v) + 2\|t\|^2 - 2\langle t, Jv \rangle. \quad (9)$$
Substituting (8) and (9) into (7), we obtain
$$\begin{aligned}
\phi(s, t) &= \beta \left[ \phi(s, u) - \phi(t, u) + 2\|t\|^2 - 2\langle t, Ju \rangle \right] + (1 - \beta) \left[ \phi(s, v) - \phi(t, v) + 2\|t\|^2 - 2\langle t, Jv \rangle \right] \\
&= \beta \left[ \phi(s, u) - \phi(t, u) \right] + (1 - \beta) \left[ \phi(s, v) - \phi(t, v) \right] + 2\|t\|^2 - 2\langle t, \beta Ju + (1 - \beta) Jv \rangle \\
&= \beta \left[ \phi(s, u) - \phi(t, u) \right] + (1 - \beta) \left[ \phi(s, v) - \phi(t, v) \right] + 2\|t\|^2 - 2\langle t, Jt \rangle \\
&= \beta \left[ \phi(s, u) - \phi(t, u) \right] + (1 - \beta) \left[ \phi(s, v) - \phi(t, v) \right] + 2\|t\|^2 - 2\|t\|^2.
\end{aligned}$$
The conclusion follows. □
Remark 3. If $E = H$ is a real Hilbert space, so that $\phi(s, t) = \|s - t\|^2$ and $J$ is the identity, then the conclusion of Lemma 9 simplifies to the Euclidean identity
$$\|\beta s + (1 - \beta) t\|^2 = \beta \|s\|^2 + (1 - \beta) \|t\|^2 - \beta(1 - \beta) \|s - t\|^2. \quad (10)$$
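Indeed, in the Hilbert case $t = \beta u + (1 - \beta) v$ directly, and the reduction is a one-line check (supplied here for completeness):
$$\|t - u\|^2 = (1 - \beta)^2 \|u - v\|^2, \qquad \|t - v\|^2 = \beta^2 \|u - v\|^2,$$
so, since $\beta (1 - \beta)^2 + (1 - \beta) \beta^2 = \beta(1 - \beta)$, Lemma 9 reads
$$\|s - t\|^2 = \beta \|s - u\|^2 + (1 - \beta) \|s - v\|^2 - \beta(1 - \beta) \|u - v\|^2,$$
which is (10) with $u$ and $v$ in the roles of $s$ and $t$.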
Next, we state the following iterative method (see Algorithm 1) and assumptions.
Algorithm 1: Self-adaptive inertial GRAAL for the EP
Step 1: Choose $\alpha_0, \bar{\alpha} > 0$ and $s_0, s_1 \in K$. Let $\mu_0 = \mu_1 = 1$, $r_0 = s_1$, $\sigma = \delta(1 + \delta)$ with $\delta \in \left[\frac{\sqrt{5} - 1}{2}, 1\right)$, and $\kappa > 0$. Start the counter at $n = 1$.
Step 2: Compute $\{s_n\}$ via
$$r_n = J^{-1}\left( (1 - \delta) Js_n + \delta Jr_{n-1} \right), \qquad s_{n+1} = \mathrm{prox}_{\alpha_n f(s_n, \cdot)}(r_n) = \arg\min_{z \in K} \left\{ \alpha_n f(s_n, z) + \frac{1}{2} \phi(z, r_n) \right\}, \quad (11)$$
and update $\alpha_n$ as
$$\alpha_{n+1} = \begin{cases} \min\left\{ \alpha_n, \ \dfrac{\kappa \mu_n \mu_{n-1} \|s_n - s_{n-1}\| \|s_{n+1} - s_n\|}{2 \left[ f(s_{n-1}, s_{n+1}) - f(s_n, s_{n+1}) - f(s_{n-1}, s_n) \right]}, \ \bar{\alpha} \right\}, & \text{if } f(s_{n-1}, s_{n+1}) - f(s_n, s_{n+1}) - f(s_{n-1}, s_n) > 0, \\[2mm] \alpha_n, & \text{otherwise}, \end{cases} \quad (12)$$
where
$$\mu_n = \frac{\alpha_n}{\alpha_{n-1} \delta}.$$
Step 3: Set $n \leftarrow n + 1$ and go to Step 2.
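Before stating the standing assumptions, we include a brief Python sketch of Algorithm 1 specialized to $E = \mathbb{R}^n$, where $J$ is the identity and $\phi(s, t) = \|s - t\|^2$. This is our illustration only (the paper's experiments use MATLAB), and the generic scipy solver for the prox subproblem is a placeholder for any strongly convex solver.

```python
import numpy as np
from scipy.optimize import minimize

def graal_ep(f, constraints, s0, s1, r0=None, alpha0=0.3, alpha_bar=0.01,
             delta=0.67, kappa=1e-5, mu0=1.0, mu1=1.0, tol=1e-6, max_iter=3000):
    """Sketch of Algorithm 1 on R^n, where J = I and phi(s, t) = ||s - t||^2."""
    mu_prev, mu = mu0, mu1
    alpha = alpha0
    s_prev, s = np.asarray(s0, float), np.asarray(s1, float)
    r = s.copy() if r0 is None else np.asarray(r0, float)   # Step 1: r_0 = s_1 by default
    for n in range(1, max_iter + 1):
        # golden-ratio extrapolation: r_n = (1 - delta) s_n + delta r_{n-1}
        r = (1.0 - delta) * s + delta * r
        # prox step: s_{n+1} = argmin_{z in K} alpha_n f(s_n, z) + 0.5 ||z - r_n||^2
        s_next = minimize(lambda z: alpha * f(s, z) + 0.5 * np.sum((z - r) ** 2),
                          s, constraints=constraints).x
        # self-adaptive stepsize update (12)
        gap = f(s_prev, s_next) - f(s, s_next) - f(s_prev, s)
        if gap > 0:
            num = kappa * mu * mu_prev * np.linalg.norm(s - s_prev) \
                  * np.linalg.norm(s_next - s)
            alpha_next = min(alpha, num / (2.0 * gap), alpha_bar)
        else:
            alpha_next = alpha
        mu_prev, mu = mu, alpha_next / (alpha * delta)   # mu_{n+1} = alpha_{n+1}/(alpha_n delta)
        alpha = alpha_next
        if np.linalg.norm(s_next - s) < tol:             # stopping rule E_n < tol
            return s_next, n
        s_prev, s = s, s_next
    return s, max_iter
```

In a genuine Banach-space setting such as Example 2's $\ell_3$, the extrapolation step instead averages $Js_n$ and $Jr_{n-1}$ in the dual space and applies $J^{-1}$; a sketch of those mappings is given after Example 2.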
Assumption 1. Suppose that the following conditions hold:
(A1) The feasible set $K$ is a closed and convex subset of $E$.
(A2) The bifunction $f : K \times K \to \mathbb{R}$ is pseudomonotone with $f(s, s) = 0$ and satisfies the Lipschitz-type condition (2).
(A3) $f(s, \cdot)$ is convex, lower semicontinuous, and subdifferentiable on $K$ for each $s \in K$.
(A4) The solution set $\Omega \neq \emptyset$.
Lemma 10. Let $\{s_n\}$ be the sequence of iterates generated by Algorithm 1 and suppose condition (A2) holds. Then, $\{\alpha_n\}$ and $\{\mu_n\}$ are bounded and bounded away from 0.
Proof. It is clear from (12) that $\alpha_{n+1} \leq \bar{\alpha}$ for all $n \in \mathbb{N}$. From condition (A2), there exists $L > 0$ such that
$$f(s_{n-1}, s_{n+1}) - f(s_n, s_{n+1}) - f(s_{n-1}, s_n) \leq L \|s_n - s_{n-1}\| \|s_{n+1} - s_n\|, \quad \forall n \in \mathbb{N}.$$
Let $\alpha_0$ and $\alpha_1$ be such that $\alpha_i \geq \frac{\kappa}{2\delta L}$, and assume inductively that this holds for $i = 0, 1, \ldots, n$. It follows from $\delta \in \left[\frac{\sqrt{5} - 1}{2}, 1\right)$ that either
$$\alpha_{n+1} = \bar{\alpha} \geq \frac{\kappa}{2\delta L} \quad \text{or} \quad \alpha_{n+1} = \alpha_n \geq \frac{\kappa}{2\delta L},$$
or
$$\alpha_{n+1} = \frac{\kappa \mu_n \mu_{n-1} \|s_n - s_{n-1}\| \|s_{n+1} - s_n\|}{2\left[ f(s_{n-1}, s_{n+1}) - f(s_n, s_{n+1}) - f(s_{n-1}, s_n) \right]} \geq \frac{\kappa}{2\delta L}.$$
We infer from these cases that $\{\alpha_n\}$ is bounded above by $\bar{\alpha}$, bounded below by $\frac{\kappa}{2\delta L}$, and hence separated from zero. Clearly, $\mu_n \geq \frac{\kappa}{2\delta^2 \bar{\alpha} L}$. □
Lemma 11. Suppose the conditions of Assumption 1 hold. Then,
(i) the limit $\lim_{n \to \infty} \phi(p, s_n)$ exists for each $p \in \Omega$;
(ii) the sequences $\{s_n\}$ and $\{r_n\}$ are bounded.
Proof. From Remark 2 and (11), we have
$$2\alpha_n \left( f(s_n, s_{n+1}) - f(s_n, t) \right) \leq 2\langle Jr_n - Js_{n+1}, s_{n+1} - t \rangle, \quad \forall t \in K. \quad (13)$$
Similarly, we have
$$2\alpha_{n-1} \left( f(s_{n-1}, s_n) - f(s_{n-1}, t) \right) \leq 2\langle Jr_{n-1} - Js_n, s_n - t \rangle, \quad \forall t \in K. \quad (14)$$
Substituting $t = p \in \Omega$ in (13) and $t = s_{n+1}$ in (14) gives
$$2\alpha_n \left( f(s_n, s_{n+1}) - f(s_n, p) \right) \leq 2\langle Jr_n - Js_{n+1}, s_{n+1} - p \rangle \quad (15)$$
and
$$2\alpha_{n-1} \left( f(s_{n-1}, s_n) - f(s_{n-1}, s_{n+1}) \right) \leq 2\langle Jr_{n-1} - Js_n, s_n - s_{n+1} \rangle. \quad (16)$$
Again from (11), we have $Jr_n = (1 - \delta) Js_n + \delta Jr_{n-1}$, which implies
$$Js_n = \frac{1}{1 - \delta} Jr_n - \frac{\delta}{1 - \delta} Jr_{n-1}.$$
Thus,
$$Js_n - Jr_n = \delta \left( Js_n - Jr_{n-1} \right). \quad (17)$$
Substituting (17) into (16), one has
$$2\alpha_{n-1} \left( f(s_{n-1}, s_n) - f(s_{n-1}, s_{n+1}) \right) \leq \frac{2}{\delta} \langle Jr_n - Js_n, s_n - s_{n+1} \rangle. \quad (18)$$
Multiplying this through by $\frac{\alpha_n}{\alpha_{n-1}}$ and adding to (15), we obtain
$$2\alpha_n \left[ f(s_n, s_{n+1}) - f(s_n, p) + f(s_{n-1}, s_n) - f(s_{n-1}, s_{n+1}) \right] \leq 2\langle Jr_n - Js_{n+1}, s_{n+1} - p \rangle + \frac{2\alpha_n}{\alpha_{n-1} \delta} \langle Jr_n - Js_n, s_n - s_{n+1} \rangle. \quad (19)$$
Employing property (1), we have
$$2\langle Jr_n - Js_{n+1}, s_{n+1} - p \rangle = \phi(p, r_n) - \phi(p, s_{n+1}) - \phi(s_{n+1}, r_n) \quad (20)$$
and
$$2\langle Jr_n - Js_n, s_n - s_{n+1} \rangle = \phi(s_{n+1}, r_n) - \phi(s_{n+1}, s_n) - \phi(s_n, r_n). \quad (21)$$
Using these estimates in (19), we obtain
$$2\alpha_n \left[ f(s_n, s_{n+1}) - f(s_n, p) + f(s_{n-1}, s_n) - f(s_{n-1}, s_{n+1}) \right] \leq \phi(p, r_n) - \phi(p, s_{n+1}) - \phi(s_{n+1}, r_n) + \frac{\alpha_n}{\alpha_{n-1} \delta} \left[ \phi(s_{n+1}, r_n) - \phi(s_{n+1}, s_n) - \phi(s_n, r_n) \right].$$
As $p \in \Omega$ and $s_n \in K$, we have $f(p, s_n) \geq 0$, which, by the pseudomonotonicity of $f$, implies $f(s_n, p) \leq 0$. Hence, we obtain
$$2\alpha_n \left[ f(s_n, s_{n+1}) + f(s_{n-1}, s_n) - f(s_{n-1}, s_{n+1}) \right] \leq \phi(p, r_n) - \phi(p, s_{n+1}) - \phi(s_{n+1}, r_n) + \frac{\alpha_n}{\alpha_{n-1} \delta} \left[ \phi(s_{n+1}, r_n) - \phi(s_{n+1}, s_n) - \phi(s_n, r_n) \right].$$
Hence,
$$\phi(p, s_{n+1}) \leq \phi(p, r_n) - \phi(s_{n+1}, r_n) + 2\alpha_n \left( f(s_{n-1}, s_{n+1}) - f(s_{n-1}, s_n) - f(s_n, s_{n+1}) \right) + \mu_n \left[ \phi(s_{n+1}, r_n) - \phi(s_{n+1}, s_n) - \phi(s_n, r_n) \right]. \quad (22)$$
Using the update (12), one has
$$2\alpha_n \left( f(s_{n-1}, s_{n+1}) - f(s_{n-1}, s_n) - f(s_n, s_{n+1}) \right) \leq \kappa \mu_n \mu_{n-1} \|s_n - s_{n-1}\| \|s_{n+1} - s_n\| \leq \frac{\kappa \mu_{n-1}}{2} \|s_n - s_{n-1}\|^2 + \frac{\kappa \mu_n}{2} \|s_{n+1} - s_n\|^2 \leq \frac{\mu_{n-1}}{2} \phi(s_n, s_{n-1}) + \frac{\mu_n}{2} \phi(s_{n+1}, s_n). \quad (23)$$
Combining (22) and (23), we obtain
$$\phi(p, s_{n+1}) \leq \phi(p, r_n) - \phi(s_{n+1}, r_n) + \mu_n \left( \phi(s_{n+1}, r_n) - \phi(s_{n+1}, s_n) - \phi(s_n, r_n) \right) + \frac{\mu_{n-1}}{2} \phi(s_n, s_{n-1}) + \frac{\mu_n}{2} \phi(s_{n+1}, s_n),$$
which implies
$$\phi(p, s_{n+1}) \leq \phi(p, r_n) + (\mu_n - 1) \phi(s_{n+1}, r_n) - \frac{\mu_n}{2} \phi(s_{n+1}, s_n) - \mu_n \phi(s_n, r_n) + \frac{\mu_{n-1}}{2} \phi(s_n, s_{n-1}). \quad (24)$$
We appeal again to (11), which, after shifting the index, gives
$$Js_{n+1} = \frac{1}{1 - \delta} Jr_{n+1} - \frac{\delta}{1 - \delta} Jr_n.$$
It follows from Lemma 9 that
$$\phi(p, s_{n+1}) = \frac{1}{1 - \delta} \left( \phi(p, r_{n+1}) - \phi(s_{n+1}, r_{n+1}) \right) - \frac{\delta}{1 - \delta} \left( \phi(p, r_n) - \phi(s_{n+1}, r_n) \right),$$
and thus
$$\frac{1}{1 - \delta} \phi(p, r_{n+1}) = \phi(p, s_{n+1}) + \frac{1}{1 - \delta} \phi(s_{n+1}, r_{n+1}) + \frac{\delta}{1 - \delta} \left[ \phi(p, r_n) - \phi(s_{n+1}, r_n) \right]. \quad (25)$$
By combining this with (24), we obtain
$$\begin{aligned}
\frac{1}{1 - \delta} \phi(p, r_{n+1}) &\leq \phi(p, r_n) + (\mu_n - 1) \phi(s_{n+1}, r_n) - \frac{\mu_n}{2} \phi(s_{n+1}, s_n) - \mu_n \phi(s_n, r_n) + \frac{\mu_{n-1}}{2} \phi(s_n, s_{n-1}) \\
&\quad + \frac{1}{1 - \delta} \phi(s_{n+1}, r_{n+1}) + \frac{\delta}{1 - \delta} \left[ \phi(p, r_n) - \phi(s_{n+1}, r_n) \right] \\
&= \frac{1}{1 - \delta} \phi(p, r_n) + \left( \mu_n - \frac{1}{1 - \delta} \right) \phi(s_{n+1}, r_n) - \mu_n \phi(s_n, r_n) - \frac{\mu_n}{2} \phi(s_{n+1}, s_n) \\
&\quad + \frac{\mu_{n-1}}{2} \phi(s_n, s_{n-1}) + \frac{1}{1 - \delta} \phi(s_{n+1}, r_{n+1}).
\end{aligned}$$
Rearranging the above, we have
$$\frac{1}{1 - \delta} \phi(p, r_{n+1}) + \frac{\mu_n}{2} \phi(s_{n+1}, s_n) - \phi(s_{n+1}, r_{n+1}) \leq \frac{1}{1 - \delta} \phi(p, r_n) + \frac{\mu_{n-1}}{2} \phi(s_n, s_{n-1}) - \mu_n \phi(s_n, r_n) + \left( \mu_n - \frac{1}{1 - \delta} \right) \phi(s_{n+1}, r_n) + \frac{\delta}{1 - \delta} \phi(s_{n+1}, r_{n+1}). \quad (26)$$
Again from Lemma 9 and $Jr_{n+1} = (1 - \delta) Js_{n+1} + \delta Jr_n$, we have
$$\phi(s_{n+1}, r_{n+1}) = (1 - \delta) \left( \phi(s_{n+1}, s_{n+1}) - \phi(r_{n+1}, s_{n+1}) \right) + \delta \left( \phi(s_{n+1}, r_n) - \phi(r_{n+1}, r_n) \right) \leq \delta \phi(s_{n+1}, r_n). \quad (27)$$
Applying this to the last two terms of (26), one obtains
$$\left( \mu_n - \frac{1}{1 - \delta} \right) \phi(s_{n+1}, r_n) + \frac{\delta}{1 - \delta} \phi(s_{n+1}, r_{n+1}) \leq \left( \mu_n - \frac{1}{1 - \delta} + \frac{\delta^2}{1 - \delta} \right) \phi(s_{n+1}, r_n) = (\mu_n - 1 - \delta) \phi(s_{n+1}, r_n).$$
Thus, we obtain from (26) that
$$\frac{1}{1 - \delta} \phi(p, r_{n+1}) + \frac{\mu_n}{2} \phi(s_{n+1}, s_n) - \phi(s_{n+1}, r_{n+1}) \leq \frac{1}{1 - \delta} \phi(p, r_n) + \frac{\mu_{n-1}}{2} \phi(s_n, s_{n-1}) - \mu_n \phi(s_n, r_n) + (\mu_n - 1 - \delta) \phi(s_{n+1}, r_n). \quad (28)$$
It follows from Lemma 10 that $\mu_n \geq \frac{\kappa}{2\delta^2 \bar{\alpha} L}$ for all $n \in \mathbb{N}$. Now, since $\lim_{n \to \infty} \mu_n = \lim_{n \to \infty} \frac{\alpha_n}{\alpha_{n-1} \delta} = \frac{1}{\delta}$ and $\delta \in \left[\frac{\sqrt{5} - 1}{2}, 1\right)$, there exists $N_1 \in \mathbb{N}$ such that $\mu_n - 1 - \delta \leq 0$ for all $n \geq N_1$; thus, there exists $a > 0$ such that $\mu_n - 1 - \delta \leq -a$. Now, set
$$\rho_n = \frac{1}{1 - \delta} \phi(p, r_n) + \frac{\mu_{n-1}}{2} \phi(s_n, s_{n-1}) - \phi(s_n, r_n). \quad (29)$$
We obtain easily that (28) becomes
$$\rho_{n+1} \leq \rho_n - a \phi(s_{n+1}, r_n). \quad (30)$$
Next, we claim that $\{s_n\}$ is bounded. To see this, we obtain from (1) that
$$\begin{aligned}
\phi(p, s_{n+1}) + \phi(s_{n+1}, r_{n+1}) &= \phi(p, r_{n+1}) + 2\langle Jr_{n+1} - Js_{n+1}, p - s_{n+1} \rangle \\
&= \phi(p, r_{n+1}) + 2\delta \langle Jr_n - Js_{n+1}, p - s_{n+1} \rangle \\
&\leq \phi(p, r_{n+1}) + 2\delta \alpha_n \left( f(s_n, p) - f(s_n, s_{n+1}) \right).
\end{aligned}$$
Observe that
$$\begin{aligned}
-2\alpha_n f(s_n, s_{n+1}) &= 2\alpha_n \left( f(s_{n-1}, s_{n+1}) - f(s_n, s_{n+1}) - f(s_{n-1}, s_n) \right) + 2\alpha_n \left( f(s_{n-1}, s_n) - f(s_{n-1}, s_{n+1}) \right) \\
&\leq \frac{\mu_n}{2} \phi(s_{n+1}, s_n) + \frac{\mu_{n-1}}{2} \phi(s_n, s_{n-1}) + \mu_n \left( \phi(s_{n+1}, r_n) - \phi(s_{n+1}, s_n) - \phi(s_n, r_n) \right) \\
&\leq \frac{\mu_{n-1}}{2} \phi(s_n, s_{n-1}) + \mu_n \phi(s_{n+1}, r_n) \leq M
\end{aligned}$$
for some $M > 0$. Hence, using $f(s_n, p) \leq 0$,
$$\phi(s_{n+1}, r_{n+1}) \leq \phi(p, r_{n+1}) + \delta M \leq \frac{1}{1 - \delta} \phi(p, r_{n+1}) + M,$$
which implies that $\rho_{n+1} \geq -M$. Therefore, the sequence $\{\rho_n\}$ is bounded and, by (30) and Lemma 8, $\phi(s_{n+1}, r_n) \to 0$ as $n \to \infty$. It follows from this and (27) that $\phi(s_{n+1}, r_{n+1}) \to 0$ as $n \to \infty$, and we obtain by Lemma 3 that $\|s_{n+1} - r_n\| \to 0$ as $n \to \infty$. From Lemma 9 and $Js_n = \frac{1}{1 - \delta} Jr_n - \frac{\delta}{1 - \delta} Jr_{n-1}$, we have
$$\begin{aligned}
\phi(s_{n+1}, s_n) &= \frac{1}{1 - \delta} \left[ \phi(s_{n+1}, r_n) - \phi(s_n, r_n) \right] - \frac{\delta}{1 - \delta} \left[ \phi(s_{n+1}, r_{n-1}) - \phi(s_n, r_{n-1}) \right] \\
&\leq \frac{1}{1 - \delta} \phi(s_{n+1}, r_n) + \frac{\delta}{1 - \delta} \phi(s_n, r_{n-1}) \to 0.
\end{aligned}$$
Again by Lemma 3, we obtain
$$\lim_{n \to \infty} \|s_{n+1} - s_n\| = 0 = \lim_{n \to \infty} \|s_n - r_n\|. \quad (31)$$
As $\{\mu_n\}$ is bounded, we infer that
$$\lim_{n \to \infty} \rho_n = \lim_{n \to \infty} \frac{1}{1 - \delta} \phi(p, r_n). \quad (32)$$
Thus, the limit of $\phi(p, s_n)$ exists for each $p \in \Omega$; in particular, $\{\phi(p, s_n)\}$ is bounded. Consequently, the sequences $\{s_n\}$ and $\{r_n\}$ are bounded. □
In the next result, we show that each weak sequential limit point of $\{s_n\}$ lies in $\Omega$.
Lemma 12. Let $\{r_n\}$ and $\{s_n\}$ be the sequences generated by Algorithm 1 under Assumption 1. If $\{r_{n_k}\}$ is a subsequence of $\{r_n\}$ such that $r_{n_k} \rightharpoonup q \in K$, then $q \in \Omega$.
Proof. By Lemma 11, $\{s_n\}$ is bounded, and since $\|s_n - r_n\| \to 0$ by (31), there exists a subsequence $\{s_{n_k}\}$ of $\{s_n\}$ such that $s_{n_k} \rightharpoonup q \in K$. Recall from (13) that
$$\alpha_{n_k} \left( f(s_{n_k}, s_{n_k+1}) - f(s_{n_k}, t) \right) \leq \langle Jr_{n_k} - Js_{n_k+1}, s_{n_k+1} - t \rangle, \quad \forall t \in K. \quad (33)$$
As $f$ has the Lipschitz-type property, one has
$$\alpha_{n_k} f(s_{n_k}, s_{n_k+1}) \geq \alpha_{n_k} f(s_{n_k-1}, s_{n_k+1}) - \alpha_{n_k} f(s_{n_k-1}, s_{n_k}) - \alpha_{n_k} c_1 \phi(s_{n_k}, s_{n_k-1}) - \alpha_{n_k} c_2 \phi(s_{n_k+1}, s_{n_k}). \quad (34)$$
From this and multiplying (18) by $\frac{\alpha_{n_k}}{\alpha_{n_k-1}}$, we obtain
$$\alpha_{n_k} f(s_{n_k}, s_{n_k+1}) \geq \frac{\alpha_{n_k}}{\alpha_{n_k-1} \delta} \langle Js_{n_k} - Jr_{n_k}, s_{n_k} - s_{n_k+1} \rangle - \alpha_{n_k} c_1 \phi(s_{n_k}, s_{n_k-1}) - \alpha_{n_k} c_2 \phi(s_{n_k+1}, s_{n_k}). \quad (35)$$
Combining (33) and (35), we obtain
$$f(s_{n_k}, t) \geq \frac{1}{\alpha_{n_k-1} \delta} \langle Js_{n_k} - Jr_{n_k}, s_{n_k} - s_{n_k+1} \rangle - c_1 \phi(s_{n_k}, s_{n_k-1}) - c_2 \phi(s_{n_k+1}, s_{n_k}) + \frac{1}{\alpha_{n_k}} \langle t - s_{n_k+1}, Jr_{n_k} - Js_{n_k+1} \rangle. \quad (36)$$
As $\{\alpha_{n_k}\}$ is positive and bounded away from zero, passing to the limit as $k \to \infty$ in (36) yields $f(q, t) \geq 0$ for all $t \in K$. Hence, $q \in \Omega$. □
Theorem 1. Let $\{r_n\}$ and $\{s_n\}$ be the sequences generated by Algorithm 1. Then, $\{r_n\}$ and $\{s_n\}$ converge weakly to a point in $\Omega$.
Proof. From Lemma 12, the set of weak accumulation points of $\{s_n\}$ lies in $\Omega$. We also established in Lemma 11 that the limit $\lim_{n \to \infty} \phi(p, s_n)$ exists for each $p \in \Omega$. We conclude by Lemma 7 and (31) that $\{s_n\}$ and $\{r_n\}$ both converge weakly to the same point $p \in \Omega$. □

Sublinear Rate

As part of our convergence analysis, we show that our method converges sublinearly, using $\|s_{n+1} - r_n\|$ as the measure of the rate of convergence.
Theorem 2. Under the conditions of Assumption 1, and with $a > 0$ as in the proof of Lemma 11,
$$\min_{1 \leq m \leq n} \|s_{m+1} - r_m\| = O(1/\sqrt{n}).$$
Proof. It follows from $\rho_{n+1} \leq \rho_n - a\phi(s_{n+1}, r_n)$ that
$$a\phi(s_{n+1}, r_n) \leq \rho_n - \rho_{n+1} = \left[ \frac{1}{1 - \delta} \phi(p, r_n) + \frac{\mu_{n-1}}{2} \phi(s_n, s_{n-1}) - \phi(s_n, r_n) \right] - \left[ \frac{1}{1 - \delta} \phi(p, r_{n+1}) + \frac{\mu_n}{2} \phi(s_{n+1}, s_n) - \phi(s_{n+1}, r_{n+1}) \right].$$
Summing and telescoping, this implies
$$\sum_{m=1}^{n} a\phi(s_{m+1}, r_m) \leq \rho_1 - \rho_{n+1} = \frac{1}{1 - \delta} \phi(p, r_1) + \frac{\mu_0}{2} \phi(s_1, s_0) - \phi(s_1, r_1) - b,$$
with
$$b = \rho_{n+1} = \frac{1}{1 - \delta} \phi(p, r_{n+1}) + \frac{\mu_n}{2} \phi(s_{n+1}, s_n) - \phi(s_{n+1}, r_{n+1}).$$
Since $b \geq 0$ and, by Lemma 3, $\kappa \|s_{m+1} - r_m\|^2 \leq \phi(s_{m+1}, r_m)$, we obtain
$$\sum_{m=1}^{n} \|s_{m+1} - r_m\|^2 \leq \frac{1}{a\kappa} \left[ \frac{1}{1 - \delta} \phi(p, r_1) + \frac{\mu_0}{2} \phi(s_1, s_0) - \phi(s_1, r_1) \right].$$
Hence,
$$\min_{1 \leq m \leq n} \|s_{m+1} - r_m\|^2 \leq \frac{1}{n a \kappa} \left[ \frac{1}{1 - \delta} \phi(p, r_1) + \frac{\mu_0}{2} \phi(s_1, s_0) - \phi(s_1, r_1) \right],$$
and the conclusion follows. □

4. Numerical Examples

In this section, we report some numerical illustrations of our proposed method in comparison with some previously announced methods in the literature. All codes were written in MATLAB R2024a and run on a personal Dell laptop with 8 GB of RAM and 256 GB of storage.
Example 1. We consider the following Nash equilibrium problem, initially solved in [9], with the bifunction $f : K \times K \to \mathbb{R}$ defined by $f(p, q) = \langle Ap + Bq + c, q - p \rangle$ on the feasible set
$$K := \left\{ s \in \mathbb{R}^5 : \sum_{j=1}^{5} s_j \geq -5, \ -5 \leq s_j \leq 5, \ j = 1, 2, \ldots, 5 \right\}.$$
Let the matrices $A$, $B$ and the vector $c$ be defined, respectively, by
$$A = \begin{pmatrix} 3.1 & 2 & 0 & 0 & 0 \\ 2 & 3.6 & 0 & 0 & 0 \\ 0 & 0 & 3.5 & 2 & 0 \\ 0 & 0 & 0 & 3.3 & 0 \\ 0 & 0 & 0 & 0 & 3.1 \end{pmatrix}, \quad B = \begin{pmatrix} 1.6 & 1 & 0 & 0 & 0 \\ 1 & 1.6 & 0 & 0 & 0 \\ 0 & 0 & 1.5 & 1 & 0 \\ 0 & 0 & 0 & 1.5 & 0 \\ 0 & 0 & 0 & 0 & 2 \end{pmatrix} \quad \text{and} \quad c = \begin{pmatrix} 1 \\ -2 \\ -1 \\ 2 \\ -1 \end{pmatrix}.$$
Then, one can easily see that the matrix $B$ is positive semidefinite and $B - A$ is negative semidefinite. Hence, $f$ is monotone and thus pseudomonotone, with Lipschitz constant $L = \|B - A\|$. We compare the performance of our proposed Algorithm 1 with that of Hoai et al. [21] (Algorithm 3.1, HTV Alg. 3.1), Jolaoso [38] (Algorithm 3.1, J Alg. 3.1), Vinh and Muu [39] (Algorithm 1, VM Alg. 1), and Xie et al. [40] (Algorithm 3, XCT Alg. 3). We consider the following sets of initial parameters and points:
IP 1. First set of initial parameters and points for each algorithm.
• Algorithm 1: $\alpha_0 = 0.3$, $\bar{\alpha} = 0.01$, $\delta = 0.67$, $\mu_0 = 0.8$, $\mu_1 = 0.001$, $\kappa = 0.00001$, $r_0 = (1, 1, 1, 1, 1)^T$, and $s_0 = s_1 = (0.5, 0.5, 0.5, 0.5, 0.5)^T$.
• HTV Alg. 3.1: $\lambda = \frac{1}{n+1}$, $x_0 = (1, 1, 1, 1, 1)^T$, and $y_1 = (0.5, 0.5, 0.5, 0.5, 0.5)^T$.
• J Alg. 3.1: $\mu = 0.01$, $\lambda_1 = \frac{1}{4L}$, $\alpha_n = \frac{1}{(n+1)^{0.7}}$, $\beta_n = \frac{2n}{5n+7}$, $x_0 = (0, 0, 0, 0, 0)^T$, and $x_1 = (0.5, 0.5, 0.5, 0.5, 0.5)^T$.
• VM Alg. 1: $\lambda = 0.1$, $\theta = 0.1$, $\epsilon_n = \frac{1}{(n+1)^2}$, $\alpha_n = \bar{\alpha}_n$, and $x_0 = x_1 = (0.5, 0.5, 0.5, 0.5, 0.5)^T$.
• XCT Alg. 3: $\tau = 0.01$, $\mu = 0.01$, $\alpha_n = \frac{1}{(n+1)^{0.7}}$, $\beta_n = 0.01$, $\theta = 0.9$, $\kappa = 0.8$, and $t_0 = t_1 = (0.5, 0.5, 0.5, 0.5, 0.5)^T$.
IP 2. Second set of initial parameters and points for each algorithm.
• Algorithm 1: $\alpha_0, \bar{\alpha}, \delta, \mu_0, \mu_1, \kappa$ were the same as in IP 1, with $r_0 = (0.5, 0.5, 0.5, 0.5, 0.5)^T$ and $s_0 = s_1 = (1, 2, 1, 3, 2)^T$.
• HTV Alg. 3.1: $\lambda = \frac{1}{n+1}$, $x_0 = (0.5, 0.5, 0.5, 0.5, 0.5)^T$, and $y_1 = (1, 2, 1, 3, 2)^T$.
• J Alg. 3.1: $\mu, \lambda_1, \alpha_n, \beta_n$, and $x_0 = (0, 0, 0, 0, 0)^T$ were the same as in IP 1, with $x_1 = (1, 2, 1, 3, 2)^T$.
• VM Alg. 1: $\lambda, \theta, \epsilon_n, \alpha_n = \bar{\alpha}_n$ were the same as in IP 1, with $x_0 = x_1 = (1, 2, 1, 3, 2)^T$.
• XCT Alg. 3: $\tau, \mu, \alpha_n, \beta_n, \theta, \kappa$ were the same as in IP 1, with $t_0 = t_1 = (1, 2, 1, 3, 2)^T$.
Setting a maximum of 3000 iterations for each algorithm, the simulation is continued until $n = 3000$ is reached or the stopping criterion $E_n = \|x_{n+1} - x_n\| < 10^{-6}$ is satisfied. The results of the numerical experiment are presented in Table 1 and Figure 1.
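For concreteness, the data of this example can be fed to the Python sketch given after Algorithm 1 (again our illustration, not the authors' MATLAB code; the constraint reading $\sum_j s_j \geq -5$ and the signs in $c$ are our assumptions about the problem data):

```python
import numpy as np

# Data of Example 1
A = np.array([[3.1, 2.0, 0.0, 0.0, 0.0],
              [2.0, 3.6, 0.0, 0.0, 0.0],
              [0.0, 0.0, 3.5, 2.0, 0.0],
              [0.0, 0.0, 0.0, 3.3, 0.0],
              [0.0, 0.0, 0.0, 0.0, 3.1]])
B = np.array([[1.6, 1.0, 0.0, 0.0, 0.0],
              [1.0, 1.6, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.5, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.5, 0.0],
              [0.0, 0.0, 0.0, 0.0, 2.0]])
c = np.array([1.0, -2.0, -1.0, 2.0, -1.0])

def f(p, q):
    # Nash equilibrium bifunction f(p, q) = <Ap + Bq + c, q - p>
    return np.dot(A @ p + B @ q + c, q - p)

# K = { s in R^5 : sum(s) >= -5, -5 <= s_j <= 5 } as SLSQP-style constraints
constraints = ({"type": "ineq", "fun": lambda s: np.sum(s) + 5.0},
               {"type": "ineq", "fun": lambda s: 5.0 - s},
               {"type": "ineq", "fun": lambda s: s + 5.0})

L = np.linalg.norm(B - A, 2)   # Lipschitz-type constant, cf. the remark after Definition 1

# IP 1 data for Algorithm 1, run with graal_ep from the earlier sketch
sol, iters = graal_ep(f, constraints, s0=np.full(5, 0.5), s1=np.full(5, 0.5),
                      r0=np.ones(5), alpha0=0.3, alpha_bar=0.01, delta=0.67,
                      kappa=1e-5, mu0=0.8, mu1=0.001)
print(iters, sol)
```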
Next, we consider an example in the setting of the classical Banach space $\ell_3(\mathbb{R})$ to support our main theorem.
Example 2. Let $E = \ell_3(\mathbb{R})$ be the space of cubically summable sequences, given by
$$\ell_3(\mathbb{R}) = \left\{ s = \{s_i\}_{i=1}^{\infty}, \ s_i \in \mathbb{R} : \sum_{i=1}^{\infty} |s_i|^3 < \infty \right\},$$
with norm $\|s\|_3 = \left( \sum_{i=1}^{\infty} |s_i|^3 \right)^{1/3}$ for all $s \in E$. It is known that $E$ is a Banach space which is not a Hilbert space. Define the feasible set $K := \{ s \in E : \|s\|_3 \leq 3 \}$ and let the bifunction $f : K \times K \to \mathbb{R}$ be defined by
$$f(p, q) = (3 - \|p\|) \langle p, q - p \rangle, \quad \forall p, q \in K.$$
It is clear that $f$ is pseudomonotone but not monotone, and $f$ satisfies the Lipschitz-type condition with $L = 5$. In this example, we only compare the performance of our proposed Algorithm 1 with that of Jolaoso [38] (Algorithm 3.1), because the other algorithms were established in the setting of Hilbert spaces. For Algorithm 1 and J Alg. 3.1, we use the same sets of parameters considered in Example 1. We consider the following initial points:
IP 1. $r_0 = x_0 = x_1 = (\underbrace{1, 1, \ldots, 1}_{15}, 0, 0, \ldots)$.
IP 2. $r_0 = x_0 = x_1 = (1, 0.1, 1, 0.1, 0, 2, 3, 2, 3, 0, 0.5, 0.8, 0.3, 0.9, 0.25, 0, 0, 0, \ldots)$ (see Table 2 and Figure 2).

5. Conclusions

In this manuscript, we address the problem of finding approximate solutions to equilibrium problems in Banach spaces. We propose a modified proximal point algorithm that utilizes the golden ratio technique when the cost mapping is pseudomonotone. This method is self-adaptive, meaning that it does not require prior knowledge of the Lipschitz property of the operator for its control parameter. The construction of our method involves solving a single convex optimization problem in each iteration. We present and prove two convergence results: one weak convergence result and one sublinear convergence result. Lastly, we provide two numerical examples, including one conducted in a Banach space, to demonstrate the advantages of our proposed method compared to previous approaches in the literature.

Author Contributions

Conceptualization, O.K.O.; Investigation, H.A.A. and S.P.M.; Methodology, O.K.O.; Software, O.K.O. and A.A.; Validation, H.A.A. and S.P.M.; Writing—review & editing, O.K.O. and A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no competing interests.

References

  1. Blum, E.; Oettli, W. From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63, 123–145. [Google Scholar]
  2. Muu, L.; Oettli, W. Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal. 1992, 18, 1159–1166. [Google Scholar] [CrossRef]
  3. Fan, K. A Minimax Inequality and Its Application; In Inequalities; Shisha, O., Ed.; Academic: New York, NY, USA, 1972; Volume 3, pp. 103–113. [Google Scholar]
  4. Mastroeni, G. Gap function for equilibrium problems. J. Glob. Optim. 2003, 27, 411–426. [Google Scholar] [CrossRef]
  5. Mastroeni, G. On Auxiliary Principle for Equilibrium Problems; Series Nonconvex Optimization and Its Applications; Springer: Berlin/Heidelberg, Germany, 2003; Volume 68, pp. 289–298. [Google Scholar]
  6. Moudafi, A. Proximal point algorithm extended to equilibrium problem. J. Nat. Geometry. 1999, 15, 91–100. [Google Scholar]
  7. Flam, S.D.; Antipin, A.S. Equilibrium programming and proximal-like algorithms. Math. Program. 1997, 78, 29–41. [Google Scholar] [CrossRef]
  8. Cohen, G. Auxiliary problem principle and decomposition of optimization problems. J. Optim. Theory Appl. 1980, 32, 277–305. [Google Scholar] [CrossRef]
  9. Tran, D.Q.; Muu, L.D.; Hien, N.V. Extragradient algorithms extended to equilibrium problems. Optimization 2008, 57, 749–776. [Google Scholar] [CrossRef]
  10. Korpelevich, G. An extragradient method for finding saddle points and for other problems. Ekon. Mat. Metody. 1976, 12, 747–756. [Google Scholar]
  11. Anh, P.N.; An, L.T.H. New subgradient extragradient methods for solving monotone bilevel equilibrium problems. Optimization 2019, 68, 2099–2124. [Google Scholar] [CrossRef]
  12. Anh, P.N.; An, L.T.H. The subgradient extragradient method extended to equilibrium problems. Optimization 2015, 64, 225–248. [Google Scholar] [CrossRef]
  13. Hieu, D.V. Halpern subgradient extragradient method extended to equilibrium problems. Rev. Real Acad. Cienc. Exactas FíSicas Nat. Ser. MatemáTicas 2017, 111, 823–840. [Google Scholar]
  14. Khanh, P.Q.; Thong, D.V.; Vinh, N.T. Versions of the subgradient extragradient method for pseudomonotone variational inequalities. Acta Appl. Math. 2020, 170, 319–345. [Google Scholar] [CrossRef]
  15. Yang, J.; Liu, H.; Liu, Z.X. Modified subgradient extragradient algorithms for solving monotone variational inequalities. Optimization 2018, 67, 2247–2258. [Google Scholar] [CrossRef]
  16. Yang, J.; Liu, H. The subgradient extragradient method extended to pseudomonotone equilibrium problems and fixed point problems in Hilbert space. Optim. Lett. 2019, 14, 1803–1816. [Google Scholar] [CrossRef]
  17. Censor, Y.; Gibali, A.; Reich, S. Extensions of Korpelevich’s extragradient method for the variational inequality problem in Euclidean space. Optimization 2012, 61, 1119–1132. [Google Scholar] [CrossRef]
  18. Censor, Y.; Gibali, A.; Reich, S. Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space. Optim. Methods Softw. 2011, 26, 827–845. [Google Scholar] [CrossRef]
  19. Ogbuisi, F. Popov subgradient extragradient algorithm for pseudomonotone equilibrium problem in Banach spaces. J. Nonlinear Funct. Anal. 2019, 44. [Google Scholar]
  20. Jolaoso, L.O.; Aphane, M. An explicit subgradient extragradient with self-adaptive stepsize for pseudomonotone equilibrium problems in Banach spaces. Numer. Algorithms 2022, 89, 583–610. [Google Scholar] [CrossRef]
  21. Pham Thi, H.A.; Ngo Thi, T.; Nguyen, V. The Golden ratio algorithms for solving equilibrium problems in Hilbert spaces. J. Nonlinear Var. Anal. 2021, 5, 493–518. [Google Scholar]
  22. Malitsky, Y. Golden ratio algorithms for variational inequalities. Math. Program. 2020, 184, 383–410. [Google Scholar] [CrossRef]
  23. Chang, X.K.; Yang, J.F. A golden ratio primal-dual algorithm for structured convex optimization. J. Sci. Comput. 2021, 87, 47. [Google Scholar] [CrossRef]
  24. Chang, X.K.; Yang, J.F.; Zhang, H.C. Golden ratio primal-dual algorithm with linesearch. SIAM J. Optim. 2022, 32, 1584–1613. [Google Scholar] [CrossRef]
  25. Oyewole, O.K.; Reich, S. Two subgradient extragradient methods based on the golden ratio technique for solving variational inequality problems. Numer. Algorithms 2024, 97, 1215–1236. [Google Scholar] [CrossRef]
  26. Zhang, C.; Chu, Z. New extrapolation projection contraction algorithms based on the golden ratio for pseudo-monotone variational inequalities. AIM Math. 2023, 8, 23291–23312. [Google Scholar] [CrossRef]
  27. Alber, Y.I. Metric and Generalized Projection Operators in Banach Spaces: Properties and Applications; Kartsatos, A.G., Ed.; Dekker: New York, NY, USA, 1996. [Google Scholar]
  28. Reem, D.; Reich, S.; De Pierro, A. Re-examination of Bregman functions and new properties of their divergences. Optimization 2019, 68, 279–348. [Google Scholar] [CrossRef]
  29. Reich, S. A weak convergence theorem for alternating method with Bregman distance. In Theory and Applications and Nonlinear Operators of Accretive and Monotone Type; Kartsatos, A.G., Ed.; Marcel Dekker: New York, NY, USA, 1996; pp. 313–318. [Google Scholar]
  30. Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings; Marcel Dekker: New York, NY, USA; Basel, Switzerland, 1984. [Google Scholar]
  31. Cioranescu, I. Geometry of Banach Spaces, Duality Mappings, and Nonlinear Problems; Kluwer: Dordrecht, The Netherlands, 1990. [Google Scholar]
  32. Kamimura, S.; Takahashi, W. Strong convergence of a proximal-type algorithm in Banach space. SIAM J. Optim. 2002, 13, 938–945. [Google Scholar] [CrossRef]
  33. Nakajo, K. Strong convergence for gradient projection method and relatively nonexpansive mappings in Banach spaces. Appl. Math. Comput. 2015, 271, 251–258. [Google Scholar] [CrossRef]
  34. Avetisyan, K.; Djordjević, K.; Pavlović, M. Littlewood–Paley inequalities in uniformly convex and uniformly smooth Banach spaces. J. Math. Anal. Appl. 2007, 336, 31–43. [Google Scholar] [CrossRef]
  35. Tiel, J.V. Convex Analysis: An Introductory Text; Wiley: New York, NY, USA, 1984. [Google Scholar]
  36. Parikh, N.; Boyd, S. Proximal Algorithms. Found. Trends Optim. 2013, 1, 123–231. [Google Scholar]
  37. Acedo, G.L.; Xu, H.K. Iterative methods for strict pseudo-contractions in Hilbert spaces. Nonlinear Anal. 2007, 67, 2258–2271. [Google Scholar] [CrossRef]
  38. Jolaoso, L.O. The subgradient extragradient method for solving pseudomonotone equilibrium and fixed point problems in Banach spaces. Optimization 2022, 71, 4051–4081. [Google Scholar] [CrossRef]
  39. Vinh, N.T.; Muu, L.D. Inertial extragradient algorithms for solving equilibrium problems. Acta Math. Vietnam. 2019, 44, 639–663. [Google Scholar] [CrossRef]
  40. Xie, Z.; Cai, G.; Tan, B. Inertial subgradient extragradient method for solving pseudomonotone equilibrium problems and fixed point problems in Hilbert spaces. Optimization 2022, 73, 1329–1354. [Google Scholar] [CrossRef]
Figure 1. Norm of successive iterates ($\|x_n - x_{n+1}\|$) for the algorithms and the data in Table 1. Top: IP 1; Bottom: IP 2.
Figure 2. Norm of successive iterates ($\|x_n - x_{n+1}\|$) for the algorithms and the data in Table 2. Top: IP 1; Bottom: IP 2.
Table 1. Numerical performance of all algorithms in Example 1.

Algorithm       IP 1 Iter.   IP 1 Time (s)   IP 2 Iter.   IP 2 Time (s)
Algorithm 1     47           0.5526          48           0.6061
HTV Alg. 3.1    79           2.9859          95           2.3807
J Alg. 3.1      101          1.2839          133          1.5794
VM Alg. 1       63           1.1043          59           0.8783
XCT Alg. 3      98           1.2782          100          1.3238
Table 2. Numerical performance of all algorithms in Example 2.

Algorithm       IP 1 Iter.   IP 1 Time (s)   IP 2 Iter.   IP 2 Time (s)
Algorithm 1     61           0.4822          72           0.9022
J Alg. 3.1      180          142.2872        151          104.39


