Article

Mann-Type Inertial Accelerated Subgradient Extragradient Algorithm for Minimum-Norm Solution of Split Equilibrium Problems Induced by Fixed Point Problems in Hilbert Spaces

by Manatchanok Khonchaliew 1, Kunlanan Khamdam 2 and Narin Petrot 2,3,*
1 Department of Mathematics, Faculty of Science, Lampang Rajabhat University, Lampang 52100, Thailand
2 Department of Mathematics, Faculty of Science, Naresuan University, Phitsanulok 65000, Thailand
3 Centre of Excellence in Nonlinear Analysis and Optimization, Faculty of Science, Naresuan University, Phitsanulok 65000, Thailand
* Author to whom correspondence should be addressed.
Symmetry 2024, 16(9), 1099; https://doi.org/10.3390/sym16091099
Submission received: 16 July 2024 / Revised: 14 August 2024 / Accepted: 17 August 2024 / Published: 23 August 2024
(This article belongs to the Section Mathematics)

Abstract
This paper presents a Mann-type inertial accelerated subgradient extragradient algorithm with non-monotonic step sizes for solving split equilibrium and fixed point problems involving pseudomonotone and Lipschitz-type continuous bifunctions and nonexpansive mappings in the framework of real Hilbert spaces. Under sufficient conditions on the control sequences of the parameters, a strong convergence theorem for the proposed algorithm is established; the algorithm requires neither prior knowledge of the Lipschitz constants of the bifunctions nor the operator norm of the bounded linear operator. Some numerical experiments are performed to show the efficacy of the proposed algorithm.

1. Introduction

The fixed point problem is a powerful instrument in mathematical models arising in chemistry, physics, engineering, and economics (see [1,2,3,4]). The fixed point problem is expressed as follows:
Find $p^* \in H$ such that $T p^* = p^*$,  (1)
where $H$ is a real Hilbert space and $T : H \to H$ is a mapping. The set of fixed points of the mapping $T$ is denoted by $Fix(T)$. One of the most widely used methods for finding fixed points of a nonexpansive mapping $T$ is the iteration proposed by Mann [5]:
$x_0 \in C, \qquad x_{k+1} = (1 - \alpha_k) x_k + \alpha_k T x_k$,  (2)
where $C$ is a nonempty closed convex subset of $H$ and $\{\alpha_k\} \subset (0,1)$. In [6], the author showed that if $T$ has a fixed point and $\sum_{k=0}^{\infty} \alpha_k (1 - \alpha_k) = \infty$, then the sequence $\{x_k\}$ generated by Equation (2) converges weakly to a fixed point of $T$.
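To make the scheme concrete, the following is a minimal Python sketch of the Mann iteration (2) (the experiments reported later in this paper were run in Matlab; Python is used here only for illustration). The nonexpansive mapping $T$ (a projection onto a box) and the choice $\alpha_k = 1/(k+2)$ are our own illustrative assumptions, not taken from the references.

```python
import numpy as np

def mann_iteration(T, x0, alpha, max_iters=1000, tol=1e-10):
    """Mann iteration: x_{k+1} = (1 - alpha_k) x_k + alpha_k T(x_k)."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iters):
        x_new = (1 - alpha(k)) * x + alpha(k) * T(x)
        if np.linalg.norm(x_new - x) < tol:   # simple stopping criterion
            return x_new, k + 1
        x = x_new
    return x, max_iters

# Illustrative nonexpansive mapping: the metric projection onto [-1, 1]^3.
T = lambda x: np.clip(x, -1.0, 1.0)
# alpha_k = 1/(k+2) satisfies sum alpha_k (1 - alpha_k) = infinity.
x_star, iters = mann_iteration(T, [3.0, -2.5, 0.4], lambda k: 1.0 / (k + 2))
```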
On the other hand, following the paper by Blum and Oettli [7], the equilibrium problem began to garner attention and has been utilized for studying a range of mathematical problems, such as variational inequality problems, optimization problems, minimax problems, saddle point problems, and Nash equilibrium problems (see [7,8,9,10]). The equilibrium problem is formulated as follows:
Find $p^* \in C$ such that $f(p^*, z) \ge 0$, $\forall z \in C$,  (3)
where $f : H \times H \to \mathbb{R}$ is a bifunction. The solution set of the equilibrium problem (3) is represented by $EP(f, C)$. An acknowledged approach to solving the equilibrium problem (3) is the proximal point method, which applies when $f$ is a monotone bifunction (see [11]). Unfortunately, if $f$ is only a pseudomonotone bifunction, which is a weaker property than a monotone bifunction, the convergence of the proximal point method can no longer be assured. To overcome this obstacle, Tran et al. [12] presented the following extragradient method for solving the equilibrium problem when $f$ is a pseudomonotone and Lipschitz-type continuous bifunction:
$x_0 \in C$,
$y_k = \arg\min\left\{\lambda f(x_k, z) + \frac{1}{2}\|z - x_k\|^2 : z \in C\right\}$,
$x_{k+1} = \arg\min\left\{\lambda f(y_k, z) + \frac{1}{2}\|z - x_k\|^2 : z \in C\right\}$,  (4)
where $c_1$ and $c_2$ are Lipschitz constants of $f$, and $0 < \lambda < \min\left\{\frac{1}{2c_1}, \frac{1}{2c_2}\right\}$. They showed that the sequence $\{x_k\}$ generated by Equation (4) converges weakly to a solution of the equilibrium problem. It is important to note that the extragradient method is a two-step iteration technique: each iteration requires solving the optimization problem over the feasible set $C$ twice in order to find $y_k$ and $x_{k+1}$, which degrades computational performance when $C$ has a complicated structure. To overcome this drawback, Hieu [13] proposed the following subgradient extragradient method for solving the equilibrium problem when $f$ is a pseudomonotone and Lipschitz-type continuous bifunction:
$x_0 \in H$,
$y_k = \arg\min\left\{\lambda_k f(x_k, z) + \frac{1}{2}\|z - x_k\|^2 : z \in C\right\}$,
$B_k = \left\{y \in H : \langle x_k - \lambda_k s_k - y_k, y - y_k\rangle \le 0\right\}$, where $s_k \in \partial_2 f(x_k, y_k)$,
$z_k = \arg\min\left\{\lambda_k f(y_k, z) + \frac{1}{2}\|z - x_k\|^2 : z \in B_k\right\}$,
$x_{k+1} = \alpha_k x_0 + (1 - \alpha_k) z_k$,  (5)
where $c_1$ and $c_2$ are Lipschitz constants of $f$, $0 < \lambda_k < \min\left\{\frac{1}{2c_1}, \frac{1}{2c_2}\right\}$, and $\{\alpha_k\} \subset (0,1)$ with $\sum_{k=0}^{\infty}\alpha_k = +\infty$ and $\lim_{k\to\infty}\alpha_k = 0$. The author showed that the sequence $\{x_k\}$ generated by Equation (5) converges strongly to $P_{EP(f,C)}(x_0)$. It is highlighted that, in the second step, the subgradient extragradient method transforms the optimization problem over the feasible set $C$ into one over the half-space $B_k$ in order to obtain $z_k$ at each iteration. As a result, the optimization problem over the feasible set $C$ is solved only once per iteration, to obtain $y_k$, so the subgradient extragradient method improves on the computational performance of the extragradient method. Additionally, the step sizes of the previously stated algorithms depend on the values $c_1$ and $c_2$. This indicates that prior knowledge of the values $c_1$ and $c_2$ is necessary for these algorithms. In practical applications, such data may be difficult to acquire.
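To see the structure of (5) concretely, consider the common special case $f(x, y) = \langle F(x), y - x\rangle$, for which the first argmin becomes a projection onto $C$, the subgradient $s_k$ equals $F(x_k)$, and the second argmin becomes a projection onto the half-space $B_k$, which has a closed form. The following Python sketch is ours, with illustrative choices of $F$ and $C$:

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Explicit projection onto {y : <a, y> <= b}."""
    viol = a @ x - b
    return x - (viol / (a @ a)) * a if viol > 0 else x

def subgradient_extragradient(F, proj_C, x0, lam, alpha, max_iters=500):
    """Halpern-type subgradient extragradient method (5) for the special
    bifunction f(x, y) = <F(x), y - x>; one projection onto C per iteration."""
    x0 = np.asarray(x0, dtype=float)
    x = x0.copy()
    for k in range(max_iters):
        s = F(x)                                # s_k in d2 f(x_k, y_k)
        y = proj_C(x - lam * s)
        a_vec = x - lam * s - y                 # B_k = {v : <a_vec, v> <= <a_vec, y>}
        z = proj_halfspace(x - lam * F(y), a_vec, a_vec @ y)
        x = alpha(k) * x0 + (1 - alpha(k)) * z  # Halpern anchoring step
    return x

M = np.array([[0.0, 1.0], [-1.0, 0.0]])        # skew-symmetric: monotone operator
F = lambda x: M @ x
proj_C = lambda x: np.clip(x, -1.0, 1.0)
x_star = subgradient_extragradient(F, proj_C, [0.7, -0.3],
                                   lam=0.4, alpha=lambda k: 1.0 / (k + 2))
```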
At this point, the inertial method, which originates from a discrete version of a second-order dissipative dynamical system [14,15], has been extensively studied as a means of accelerating the convergence of algorithms; see, for instance, [16,17,18,19,20] and the references therein. The key characteristic of this method is that the previous two iterates are used to construct the next iterate. In 2023, by combining the subgradient extragradient and inertial techniques with the Mann-type method, Panyanak et al. [21] presented the following algorithm for solving the equilibrium and fixed point problems, where $T$ is a $\rho$-demicontractive mapping and $f$ is a pseudomonotone and Lipschitz-type continuous bifunction:
$x_0, x_1 \in C$,
$w_k = x_k + \theta_k (x_k - x_{k-1})$,
$y_k = \arg\min\left\{\lambda_k f(w_k, z) + \frac{1}{2}\|z - w_k\|^2 : z \in C\right\}$,
$B_k = \left\{y \in H : \langle w_k - \lambda_k s_k - y_k, y - y_k\rangle \le 0\right\}$, where $s_k \in \partial_2 f(w_k, y_k)$,
$z_k = \arg\min\left\{\lambda_k f(y_k, z) + \frac{1}{2}\|z - w_k\|^2 : z \in B_k\right\}$,
$x_{k+1} = (1 - \alpha_k - \beta_k) z_k + \alpha_k T z_k$,  (6)
where the parameter $\theta_k$ is chosen in $[0, \bar{\theta}_k]$ with
$\bar{\theta}_k = \begin{cases} \min\left\{\frac{\gamma}{2}, \frac{\epsilon_k}{\|x_k - x_{k-1}\|}\right\}, & \text{if } x_k \ne x_{k-1}, \\ \frac{\gamma}{2}, & \text{otherwise}, \end{cases}$
and the step size $\lambda_k$ is given as
$\lambda_{k+1} = \begin{cases} \min\left\{\lambda_k, \frac{\mu(\|w_k - y_k\|^2 + \|z_k - y_k\|^2)}{2\left[f(w_k, z_k) - f(w_k, y_k) - f(y_k, z_k)\right]}\right\}, & \text{if } f(w_k, z_k) - f(w_k, y_k) - f(y_k, z_k) > 0, \\ \lambda_k, & \text{otherwise}, \end{cases}$
where $\lambda_1 > 0$, $\gamma \in (0,1)$, $\mu \in (0,1)$, $\{\beta_k\} \subset (0,1)$ such that $\sum_{k=1}^{\infty}\beta_k = +\infty$ and $\lim_{k\to\infty}\beta_k = 0$, $\{\epsilon_k\} \subset [0, \infty)$, $\{\alpha_k\} \subset [a, b]$ with $0 < a \le b < 1 - \rho$ and $0 < a \le b < 1 - \beta_k$, and $\lim_{k\to\infty}\frac{\epsilon_k}{\alpha_k}\|x_k - x_{k-1}\| = 0$. They showed that the sequence $\{x_k\}$ generated by Equation (6) converges strongly to $P_{Fix(T) \cap EP(f,C)}(0)$. It is apparent that Equation (6) effectively handles the unavailability of the Lipschitz constants of $f$ through the use of the adaptive step size. Furthermore, the adaptive step size criteria depend only on previously computed information, which makes updating the step size at each iteration uncomplicated.
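For the reader's convenience, the adaptive rule for $\lambda_{k+1}$ above admits a direct implementation; the following short Python sketch (the function name and signature are our own) shows the update in isolation.

```python
import numpy as np

def update_lambda(lam, mu, f, w, y, z):
    """Adaptive step size of Equation (6): the next lambda never increases and
    needs no Lipschitz constants; f(a, b) evaluates the bifunction, mu in (0, 1)."""
    gap = f(w, z) - f(w, y) - f(y, z)
    if gap > 0:
        bound = mu * (np.linalg.norm(w - y) ** 2
                      + np.linalg.norm(z - y) ** 2) / (2.0 * gap)
        return min(lam, bound)
    return lam
```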
In 2016, Dinh et al. [22] introduced the following split equilibrium and fixed point problems:
Find $p^* \in Fix(T)$ such that $f_1(p^*, z) \ge 0$, $\forall z \in C_1$, and such that $A p^* \in Fix(S)$ solves $f_2(A p^*, v) \ge 0$, $\forall v \in C_2$,  (7)
where $H_1$ and $H_2$ are real Hilbert spaces; $C_1$ and $C_2$ are nonempty closed convex subsets of $H_1$ and $H_2$, respectively; $T : C_1 \to C_1$ and $S : C_2 \to C_2$ are mappings; $f_1 : C_1 \times C_1 \to \mathbb{R}$ and $f_2 : C_2 \times C_2 \to \mathbb{R}$ are bifunctions; and $A : H_1 \to H_2$ is a bounded linear operator. Dinh et al. [22] presented the following algorithm, which utilizes the extragradient and proximal point methods, for solving the split equilibrium and fixed point problems (7), where $S$ and $T$ are nonexpansive mappings, $f_1$ is a pseudomonotone and Lipschitz-type continuous bifunction, and $f_2$ is a monotone bifunction:
$x_1 \in C_1$,
$y_k = \arg\min\left\{\lambda_k f_1(x_k, z) + \frac{1}{2}\|x_k - z\|^2 : z \in C_1\right\}$,
$z_k = \arg\min\left\{\lambda_k f_1(y_k, z) + \frac{1}{2}\|x_k - z\|^2 : z \in C_1\right\}$,
$s_k = (1 - \alpha) z_k + \alpha T z_k$,
$u_k = T_{t_k}^{f_2}(A s_k)$,
$x_{k+1} = P_{C_1}\left(s_k + \eta A^*(S u_k - A s_k)\right)$,  (8)
where $c_1$ and $c_2$ are Lipschitz constants of $f_1$, $\alpha \in (0,1)$, $\eta \in \left(0, \frac{1}{\|A\|^2}\right)$, $\{\lambda_k\} \subset [\underline{\lambda}, \bar{\lambda}]$ with $0 < \underline{\lambda} \le \bar{\lambda} < \min\left\{\frac{1}{2c_1}, \frac{1}{2c_2}\right\}$, $\{t_k\} \subset (0, +\infty)$ such that $\liminf_{k\to\infty} t_k > 0$, $T_{t_k}^{f_2}(A s_k) := \{u \in C_2 \mid f_2(u, v) + \frac{1}{t_k}\langle v - u, u - A s_k\rangle \ge 0, \ \forall v \in C_2\}$, and $A^*$ is the adjoint operator of $A$. They showed that the sequence $\{x_k\}$ generated by Equation (8) converges weakly to a solution of the split equilibrium and fixed point problems (7). This kind of problem has attracted a lot of attention due to its broad applications in intensity-modulated radiation therapy, image restoration, network resource allocation, signal processing, phase retrieval, and other real-world applications (see, e.g., [23,24,25]).
Inspired by the above results, Petrot et al. [26] presented the following algorithm, which applies the concept of the extragradient method, for solving the split equilibrium and fixed point problems (7), where $S$ and $T$ are nonexpansive mappings and $f_1$ and $f_2$ are pseudomonotone and Lipschitz-type continuous bifunctions:
$x_1 \in H_1$,
$u_k = \arg\min\left\{\mu_k f_2(P_{C_2}(A x_k), v) + \frac{1}{2}\|P_{C_2}(A x_k) - v\|^2 : v \in C_2\right\}$,
$v_k = \arg\min\left\{\mu_k f_2(u_k, v) + \frac{1}{2}\|P_{C_2}(A x_k) - v\|^2 : v \in C_2\right\}$,
$y_k = P_{C_1}\left(x_k + \eta_k A^*(S v_k - A x_k)\right)$,
$t_k = \arg\min\left\{\lambda_k f_1(y_k, z) + \frac{1}{2}\|y_k - z\|^2 : z \in C_1\right\}$,
$z_k = \arg\min\left\{\lambda_k f_1(t_k, z) + \frac{1}{2}\|y_k - z\|^2 : z \in C_1\right\}$,
$x_{k+1} = \alpha_k h(x_k) + (1 - \alpha_k)\left[\beta_k x_k + (1 - \beta_k) T z_k\right]$,  (9)
where $c_1$ and $c_2$ are Lipschitz constants of $f_1$; $d_1$ and $d_2$ are Lipschitz constants of $f_2$; $\{\lambda_k\} \subset [\underline{\lambda}, \bar{\lambda}]$ with $0 < \underline{\lambda} \le \bar{\lambda} < \min\left\{\frac{1}{2c_1}, \frac{1}{2c_2}\right\}$; $\{\mu_k\} \subset [\underline{\mu}, \bar{\mu}]$ with $0 < \underline{\mu} \le \bar{\mu} < \min\left\{\frac{1}{2d_1}, \frac{1}{2d_2}\right\}$; $\{\eta_k\} \subset [\underline{\eta}, \bar{\eta}]$ with $0 < \underline{\eta} \le \bar{\eta} < \frac{1}{\|A\|^2}$; $\{\beta_k\} \subset (0,1)$ with $0 < \liminf_{k\to\infty}\beta_k \le \limsup_{k\to\infty}\beta_k < 1$; $\{\alpha_k\} \subset \left(0, \frac{1}{2\rho}\right)$ with $\sum_{k=1}^{\infty}\alpha_k = \infty$ and $\lim_{k\to\infty}\alpha_k = 0$; and $h$ is a $\rho$-contraction mapping. They showed that the sequence $\{x_k\}$ generated by Equation (9) converges strongly to a solution of the split equilibrium and fixed point problems (7). It is noteworthy that the step size $\eta_k$ of this algorithm is determined by the operator norm of the bounded linear operator $A$. In order to overcome this drawback, Ezeora et al. [27] presented the following algorithm, which employs the ideas of the extragradient and inertial methods, for solving the split equilibrium and fixed point problems (7), where $S$ and $T$ are nonexpansive mappings and $f_1$ and $f_2$ are pseudomonotone and Lipschitz-type continuous bifunctions:
$x_0, x_1 \in H_1$,
$w_k = x_k + \theta_k (x_k - x_{k-1})$,
$u_k = \arg\min\left\{\mu_k f_2(P_{C_2}(A w_k), v) + \frac{1}{2}\|v - P_{C_2}(A w_k)\|^2 : v \in C_2\right\}$,
$v_k = \arg\min\left\{\mu_k f_2(u_k, v) + \frac{1}{2}\|v - P_{C_2}(A w_k)\|^2 : v \in C_2\right\}$,
$y_k = P_{C_1}\left(w_k + \eta_k A^*(S v_k - A w_k)\right)$,
$t_k = \arg\min\left\{\lambda_k f_1(y_k, z) + \frac{1}{2}\|z - y_k\|^2 : z \in C_1\right\}$,
$z_k = \arg\min\left\{\lambda_k f_1(t_k, z) + \frac{1}{2}\|z - y_k\|^2 : z \in C_1\right\}$,
$x_{k+1} = \alpha_k h(w_k) + (1 - \alpha_k)\left[\beta_k w_k + (1 - \beta_k) T z_k\right]$,  (10)
where the parameter $\theta_k$ is chosen in $[0, \bar{\theta}_k]$ with
$\bar{\theta}_k = \begin{cases} \min\left\{\gamma, \frac{\epsilon_k}{\|x_k - x_{k-1}\|}\right\}, & \text{if } x_k \ne x_{k-1}, \\ \gamma, & \text{otherwise}, \end{cases}$
and, for $\delta > 0$ small enough, the step size $\eta_k$ is given as
$\eta_k \in \left[\delta, \frac{\|A w_k - S v_k\|^2}{\|A^*(A w_k - S v_k)\|^2} - \delta\right]$ if $A w_k \ne S v_k$, and $\eta_k = n$ otherwise,
where $c_1$ and $c_2$ are Lipschitz constants of $f_1$; $d_1$ and $d_2$ are Lipschitz constants of $f_2$; $n > 0$; $\gamma \in [0, 1)$; $\sum_{k=1}^{\infty}\epsilon_k < \infty$; $\{\lambda_k\} \subset [\underline{\lambda}, \bar{\lambda}]$ with $0 < \underline{\lambda} \le \bar{\lambda} < \min\left\{\frac{1}{2c_1}, \frac{1}{2c_2}\right\}$; $\{\mu_k\} \subset [\underline{\mu}, \bar{\mu}]$ with $0 < \underline{\mu} \le \bar{\mu} < \min\left\{\frac{1}{2d_1}, \frac{1}{2d_2}\right\}$; $\{\beta_k\} \subset (0,1)$ with $0 < \liminf_{k\to\infty}\beta_k \le \limsup_{k\to\infty}\beta_k < 1$; $\{\alpha_k\} \subset \left(0, \frac{1}{2(1-\rho)}\right)$ such that $\sum_{k=1}^{\infty}\alpha_k = \infty$ and $\lim_{k\to\infty}\alpha_k = 0$; and $h$ is a $\rho$-contraction mapping. They showed that the sequence $\{x_k\}$ generated by Equation (10) converges strongly to a solution of the split equilibrium and fixed point problems (7). We observe that, in order to guarantee the convergence of Equation (10), the maximum values of the scalars $\bar{\lambda}$ and $\bar{\mu}$ depend on the Lipschitz constants of $f_1$ and $f_2$, respectively.
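The step size rule above is the key device for avoiding $\|A\|$: the ratio $\|A w_k - S v_k\|^2 / \|A^*(A w_k - S v_k)\|^2$ is always at least $1/\|A\|^2$, so it can stand in for a bound that would otherwise require the operator norm. A minimal Python sketch of this rule (with our own choice of returning the upper endpoint of the admissible interval) reads:

```python
import numpy as np

def stepsize_eta(A, w, Sv, delta=1e-2, n_fallback=1.0):
    """Operator-norm-free step size: pick eta in
    [delta, ||A w - S v||^2 / ||A*(A w - S v)||^2 - delta] when A w != S v,
    and eta = n otherwise; here A is a matrix, so A* is A.T."""
    d = A @ w - Sv
    denom = np.linalg.norm(A.T @ d) ** 2
    if np.linalg.norm(d) > 0 and denom > 0:
        return max(delta, np.linalg.norm(d) ** 2 / denom - delta)
    return n_fallback
```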
Motivated by the advantageous method presented in Equation (10) by Ezeora et al. [27], this paper continues to focus on methods that solve the split equilibrium and fixed point problems (7). We consider an updated iterative algorithm that does not require prior knowledge of either the Lipschitz constants of the bifunctions or the operator norm of the bounded linear operator. This approach aims to find the minimum-norm solution for the split equilibrium and fixed point problems (7). Numerical experiments and comparisons with some recently developed algorithms are conducted to evaluate the performance of the introduced algorithm.
This paper is arranged as follows: Section 2 reviews some preliminary definitions and properties for use later on. Section 3 presents the Mann-type inertial accelerated subgradient extragradient algorithm and the corresponding strong convergence result. In Section 4, we discuss the performance of the proposed algorithm in comparison to some recently developed algorithms through numerical experiments. In Section 5, we close this paper with some conclusions.

2. Preliminaries

We provide in this section some important definitions and facts that are applied throughout the work. Let $H$ be a real Hilbert space endowed with inner product $\langle\cdot,\cdot\rangle$ and its corresponding norm $\|\cdot\|$. For a sequence $\{x_k\}$ in $H$, we denote the strong convergence and the weak convergence of $\{x_k\}$ to a point $p^* \in H$ by $x_k \to p^*$ and $x_k \rightharpoonup p^*$, respectively. The sets of the real numbers and the natural numbers are represented by $\mathbb{R}$ and $\mathbb{N}$, respectively.
We begin with some definitions and results concerning the nonlinear mappings.
Definition 1.
Let $T : H \to H$ be a mapping. The mapping $T$ is called nonexpansive if
$\|T x - T y\| \le \|x - y\|, \quad \forall x, y \in H.$
Remark 1.
We notice that if $T$ is a nonexpansive mapping, then $Fix(T)$ is closed and convex; see [28].
Definition 2.
Let $T : H \to H$ be a mapping. The mapping $T$ is called demiclosed at $z \in H$ if, for each sequence $\{x_k\} \subset H$ with $x_k \rightharpoonup p^* \in H$ and $T x_k \to z$, we have $T p^* = z$.
Lemma 1
([28]). Let $T : H \to H$ be a nonexpansive mapping having a fixed point. Then, $I - T$ is demiclosed at 0.
In what follows, we collect some fundamental facts that are needed in the sequel. For each $x, y, z \in H$, it is known that
$\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y\rangle,$  (11)
and
$\|\alpha x + \beta y + \eta z\|^2 = \alpha\|x\|^2 + \beta\|y\|^2 + \eta\|z\|^2 - \alpha\beta\|x - y\|^2 - \beta\eta\|y - z\|^2 - \alpha\eta\|x - z\|^2,$  (12)
for each $\alpha, \beta, \eta \in [0, 1]$ such that $\alpha + \beta + \eta = 1$; see [17].
For a point $x \in H$, the metric projection of $x$ onto $C$, denoted $P_C(x)$, is characterized by
$\|x - P_C(x)\| \le \|x - z\|, \quad \forall z \in C,$
where C is a nonempty closed convex subset of H.
Lemma 2
([28,29]). Let C be a nonempty closed convex subset of H. Then, the following results hold:
(i) 
For each $x \in H$, $P_C(x)$ exists and is unique;
(ii) 
$y = P_C(x)$ if and only if $\langle x - y, z - y\rangle \le 0$, $\forall z \in C$;
(iii) 
$P_C$ is a nonexpansive mapping.
Let $f : H \to \mathbb{R}$ be a function. The subdifferential of $f$ at $y \in H$ is given by
$\partial f(y) = \{v \in H \mid f(z) - f(y) \ge \langle v, z - y\rangle, \ \forall z \in H\}.$
The function $f$ is called subdifferentiable at $y$ if $\partial f(y) \ne \emptyset$.
Lemma 3
([29]). For each $y \in H$, the subdifferential $\partial f(y)$ of a continuous convex function $f$ is a weakly closed and bounded convex set.
Lemma 4
([9]). Let $C$ be a convex subset of $H$. Suppose that $f : C \to \mathbb{R}$ is subdifferentiable on $C$. Then, $p^*$ is a solution to the following convex problem:
$\min\{f(x) : x \in C\}$
if and only if $0 \in \partial f(p^*) + N_C(p^*)$, where $N_C(p^*) := \{v \in H \mid \langle v, z - p^*\rangle \le 0, \ \forall z \in C\}$ is the normal cone of $C$ at $p^*$.
The following technical lemmas are necessary to obtain the convergence results.
Lemma 5
([30]). Let $\{a_k\}$ and $\{c_k\}$ be sequences of non-negative real numbers and $\{b_k\}$ be a sequence of real numbers satisfying the following relation:
$a_{k+1} \le (1 - \beta_k) a_k + \beta_k b_k + c_k, \quad \forall k \in \mathbb{N} \cup \{0\},$
where $\{\beta_k\} \subset (0,1)$ with $\sum_{k=0}^{\infty}\beta_k = \infty$. If $\sum_{k=0}^{\infty} c_k < \infty$ and $\limsup_{k\to\infty} b_k \le 0$, then $\lim_{k\to\infty} a_k = 0$.
Lemma 6
([31]). Let $\{a_k\}$ be a sequence of real numbers for which there exists a subsequence $\{a_{k_i}\}$ of $\{a_k\}$ with $a_{k_i} < a_{k_i + 1}$, for all $i \in \mathbb{N}$. Then, there exists a non-decreasing sequence $\{m_n\}$ of positive integers with $\lim_{n\to\infty} m_n = \infty$, and the following relations hold:
$a_{m_n} \le a_{m_n + 1} \quad \text{and} \quad a_n \le a_{m_n + 1},$
for all (sufficiently large) $n \in \mathbb{N}$. Indeed, $m_n$ is the largest number $k$ in the set $\{1, 2, \ldots, n\}$ satisfying the following relation:
$a_k < a_{k+1}.$
We end this section by providing some definitions and properties relating to the equilibrium problems.
Definition 3.
Let $C$ be a nonempty closed convex subset of $H$ and $f : H \times H \to \mathbb{R}$ be a bifunction. The bifunction $f$ is called
(i) 
Monotone on $C$ if
$f(x, y) + f(y, x) \le 0, \quad \forall x, y \in C;$
(ii) 
Pseudomonotone on $C$ if
$f(x, y) \ge 0 \implies f(y, x) \le 0, \quad \forall x, y \in C;$
(iii) 
Lipschitz-type continuous on $H$ if there exist constants $c_1 > 0$ and $c_2 > 0$ satisfying
$f(x, y) + f(y, z) \ge f(x, z) - c_1\|x - y\|^2 - c_2\|y - z\|^2, \quad \forall x, y, z \in H.$
Remark 2.
Note that a monotone bifunction is a pseudomonotone bifunction. However, in general, the converse is not true; see, e.g., [32].
Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ and $f : H \times H \to \mathbb{R}$ be a bifunction. In this paper, we take into account the following assumptions:
(A1) 
For each fixed $z \in C$, $f(\cdot, z)$ is sequentially weakly upper semicontinuous on $C$; that is, if $\{x_k\}$ is a sequence in $C$ with $x_k \rightharpoonup p^* \in C$, then $\limsup_{k\to\infty} f(x_k, z) \le f(p^*, z)$;
(A2) 
For each fixed $x \in H$, $f(x, \cdot)$ is convex, subdifferentiable, and lower semicontinuous on $H$;
(A3) 
$f$ is pseudomonotone on $C$;
(A4) 
f is Lipschitz-type continuous on H.
Remark 3.
(i) 
It is well known that the solution set $EP(f, C)$ is closed and convex when the bifunction $f$ satisfies assumptions (A1)–(A3); see [12,33,34].
(ii) 
We observe that $f(x, x) = 0$ for each $x \in C$ when the bifunction $f$ satisfies assumptions (A3) and (A4); see [17].
(iii) 
For each fixed $x \in H$, the subdifferential of the bifunction $f(x, \cdot)$ at $y \in H$ is given by
$\partial_2 f(x, y) = \{v \in H \mid f(x, z) - f(x, y) \ge \langle v, z - y\rangle, \ \forall z \in H\}.$

3. Main Results

Let $H_1$ and $H_2$ be real Hilbert spaces and $C_1$ and $C_2$ be nonempty closed convex subsets of $H_1$ and $H_2$, respectively. We start by recalling the split equilibrium and fixed point problems:
Find $p^* \in C_1 \cap Fix(T)$ such that $f_1(p^*, z) \ge 0$, $\forall z \in C_1$, and such that $A p^* \in C_2 \cap Fix(S)$ solves $f_2(A p^*, v) \ge 0$, $\forall v \in C_2$,  (13)
where $f_1 : H_1 \times H_1 \to \mathbb{R}$ and $f_2 : H_2 \times H_2 \to \mathbb{R}$ are bifunctions, $T : H_1 \to H_1$ and $S : H_2 \to H_2$ are mappings, $A : H_1 \to H_2$ is a bounded linear operator, and $A^* : H_2 \to H_1$ is the adjoint operator of $A$. We observe that the setting of problem (13) differs from that of problem (7). Specifically, in problem (13), the domain of each operator is the entire space, whereas in problem (7), the domain of each considered operator is a closed convex subset of the corresponding space. Consequently, the fixed point sets of the operators $T$ and $S$ in problem (13) are independent of the sets $C_1$ and $C_2$, respectively. This distinction can lead to different applications and interpretations for problems (7) and (13). The solution set of the split equilibrium and fixed point problems (13) is henceforth represented by $\Omega$, in the form of
$\Omega := \left\{p^* \in EP(f_1, C_1) \cap Fix(T) \mid A p^* \in EP(f_2, C_2) \cap Fix(S)\right\}.$
It is noteworthy that, by using Remark 1, Remark 3(i), and the linearity of the operator $A$, the solution set $\Omega$ is closed and convex; see [35].
The following algorithm (Algorithm 1) is presented to solve the split equilibrium and fixed point problems (13).
Algorithm 1. Mann-type inertial accelerated subgradient extragradient algorithm
      Initialization. Select $\lambda_1 > 0$, $\mu_1 > 0$, $\eta_1 > 0$, $\omega \in (0,1)$, $\tau \in (0,1)$, $\varphi \in (0,1)$, $\{\gamma_k\} \subset [0,1)$ with $\lim_{k\to\infty}\gamma_k = 0$, $\{\xi_k\} \subset [1, \infty)$ with $\lim_{k\to\infty}\xi_k = 1$, $\{\sigma_k\} \subset [1, \infty)$ with $\lim_{k\to\infty}\sigma_k = 1$, $\{\rho_k\} \subset [0, \infty)$ with $\sum_{k=1}^{\infty}\rho_k < +\infty$, $\{\delta_k\} \subset [0, \infty)$ with $\sum_{k=1}^{\infty}\delta_k < +\infty$, $\{\zeta_k\} \subset [0, \infty)$ with $\sum_{k=1}^{\infty}\zeta_k < +\infty$, $\{\epsilon_k\} \subset [0, \infty)$, $\{\beta_k\} \subset (0,1)$ such that $\sum_{k=1}^{\infty}\beta_k = \infty$, $\lim_{k\to\infty}\beta_k = 0$, and $\lim_{k\to\infty}\frac{\epsilon_k}{\beta_k} = 0$, and $\{\alpha_k\} \subset [a, b] \subset (0, 1 - \beta_k)$, for some positive constants $a$ and $b$. Put $k = 1$ and choose $x_0, x_1 \in H_1$ arbitrarily.
      Step 1. Select $\theta_k$ satisfying $0 \le \theta_k \le \bar{\theta}_k$, where
$\bar{\theta}_k = \begin{cases} \min\left\{\gamma_k, \frac{\epsilon_k}{\|x_k - x_{k-1}\|}\right\}, & \text{if } x_k \ne x_{k-1}, \\ \gamma_k, & \text{otherwise}, \end{cases}$
and calculate
$w_k = x_k + \theta_k (x_k - x_{k-1}).$
      Step 2. Compute
$y_k = \arg\min\left\{\xi_k \lambda_k f_1(w_k, z) + \frac{1}{2}\|z - w_k\|^2 : z \in C_1\right\}.$
      Step 3. Select $s_k \in \partial_k f_1$ and create a half-space
$B_k = \left\{y \in H_1 \mid \langle w_k - \xi_k \lambda_k s_k - y_k, y - y_k\rangle \le 0\right\},$
where
$\partial_k f_1 = \left\{s \in \partial_2 f_1(w_k, y_k) \mid \xi_k \lambda_k s + y_k = w_k - q, \ q \in N_{C_1}(y_k)\right\}.$
      Step 4. Compute
$z_k = \arg\min\left\{\lambda_k f_1(y_k, z) + \frac{1}{2}\|z - w_k\|^2 : z \in B_k\right\}.$
      Step 5. Calculate
$t_k = (1 - \beta_k - \alpha_k) z_k + \alpha_k T z_k.$
      Step 6. Compute
$u_k = \arg\min\left\{\sigma_k \mu_k f_2(A t_k, v) + \frac{1}{2}\|v - A t_k\|^2 : v \in C_2\right\}.$
      Step 7. Select $r_k \in \partial_k f_2$ and create a half-space
$D_k = \left\{u \in H_2 \mid \langle A t_k - \sigma_k \mu_k r_k - u_k, u - u_k\rangle \le 0\right\},$
where
$\partial_k f_2 = \left\{r \in \partial_2 f_2(A t_k, u_k) \mid \sigma_k \mu_k r + u_k = A t_k - l, \ l \in N_{C_2}(u_k)\right\}.$
      Step 8. Compute
$v_k = \arg\min\left\{\mu_k f_2(u_k, v) + \frac{1}{2}\|v - A t_k\|^2 : v \in D_k\right\}.$
      Step 9. Construct $x_{k+1}$ by using the following expression:
$x_{k+1} = P_{B_k}\left(t_k + \eta_k A^*(S v_k - A t_k)\right).$
      Step 10. Calculate
$\lambda_{k+1} = \begin{cases} \min\left\{\lambda_k + \rho_k, \frac{\omega(\|w_k - y_k\|^2 + \|y_k - z_k\|^2)}{2\left[f_1(w_k, z_k) - f_1(w_k, y_k) - f_1(y_k, z_k)\right]}\right\}, & \text{if } f_1(w_k, z_k) - f_1(w_k, y_k) - f_1(y_k, z_k) > 0, \\ \lambda_k + \rho_k, & \text{otherwise}, \end{cases}$  (14)
$\mu_{k+1} = \begin{cases} \min\left\{\mu_k + \delta_k, \frac{\tau(\|A t_k - u_k\|^2 + \|u_k - v_k\|^2)}{2\left[f_2(A t_k, v_k) - f_2(A t_k, u_k) - f_2(u_k, v_k)\right]}\right\}, & \text{if } f_2(A t_k, v_k) - f_2(A t_k, u_k) - f_2(u_k, v_k) > 0, \\ \mu_k + \delta_k, & \text{otherwise}, \end{cases}$  (15)
and
$\eta_{k+1} = \begin{cases} \min\left\{\eta_k + \zeta_k, \frac{\varphi\|S v_k - A t_k\|^2}{\|A^*(S v_k - A t_k)\|^2}\right\}, & \text{if } S v_k \ne A t_k, \\ \eta_k + \zeta_k, & \text{otherwise}. \end{cases}$  (16)
      Step 11. Set k : = k + 1 and return to Step 1.
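To summarize the eleven steps, the following Python sketch traces Algorithm 1 for the special case $f_i(x, y) = \langle F_i(x), y - x\rangle$, in which every argmin step reduces to a projection, the subgradient $s_k$ is simply $F_1(w_k)$ (the function $f_1(w_k, \cdot)$ is affine, and $y_k = P_{C_1}(w_k - \xi_k\lambda_k F_1(w_k))$ automatically satisfies the defining condition of $\partial_k f_1$), and the half-space projections have closed forms. The data and parameter schedules are placeholders to be supplied by the user; this is an illustration of the structure under the stated assumptions, not the authors' Matlab implementation.

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Explicit projection onto the half-space {y : <a, y> <= b}."""
    viol = a @ x - b
    return x - (viol / (a @ a)) * a if viol > 0 else x

def algorithm1(F1, F2, T, S, A, proj_C1, proj_C2, x0, x1, p, max_iters=2000):
    """Sketch of Algorithm 1 for f_i(x, y) = <F_i(x), y - x>.
    p is a dict: scalars p['omega'], p['tau'], p['phi'], p['lam1'], p['mu1'],
    p['eta1'], and schedules p['gamma'](k), p['eps'](k), p['xi'](k),
    p['sigma'](k), p['beta'](k), p['alpha'](k), p['rho'](k), p['delta'](k),
    p['zeta'](k)."""
    lam, mu, eta = p['lam1'], p['mu1'], p['eta1']
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    f1 = lambda a_, b_: F1(a_) @ (b_ - a_)
    f2 = lambda a_, b_: F2(a_) @ (b_ - a_)
    for k in range(1, max_iters + 1):
        # Step 1: inertial extrapolation.
        d = np.linalg.norm(x - x_prev)
        theta = min(p['gamma'](k), p['eps'](k) / d) if d > 0 else p['gamma'](k)
        w = x + theta * (x - x_prev)
        # Steps 2-4: accelerated subgradient extragradient step in H1.
        xi, beta, alpha = p['xi'](k), p['beta'](k), p['alpha'](k)
        s = F1(w)                                  # s_k in d2 f1(w_k, y_k)
        y = proj_C1(w - xi * lam * s)
        a_vec = w - xi * lam * s - y               # normal vector of B_k
        b_val = a_vec @ y
        z = proj_halfspace(w - lam * F1(y), a_vec, b_val)
        # Step 5: Mann-type step.
        t = (1 - beta - alpha) * z + alpha * T(z)
        # Steps 6-8: subgradient extragradient step in H2.
        sig = p['sigma'](k)
        At = A @ t
        u = proj_C2(At - sig * mu * F2(At))
        c_vec = At - sig * mu * F2(At) - u         # normal vector of D_k
        v = proj_halfspace(At - mu * F2(u), c_vec, c_vec @ u)
        # Step 9: projection of the operator step onto B_k.
        x_next = proj_halfspace(t + eta * A.T @ (S(v) - At), a_vec, b_val)
        # Step 10: non-monotonic step-size updates (14)-(16).
        g1 = f1(w, z) - f1(w, y) - f1(y, z)
        lam_new = lam + p['rho'](k)
        if g1 > 0:
            lam_new = min(lam_new, p['omega'] * (np.linalg.norm(w - y)**2
                          + np.linalg.norm(y - z)**2) / (2 * g1))
        g2 = f2(At, v) - f2(At, u) - f2(u, v)
        mu_new = mu + p['delta'](k)
        if g2 > 0:
            mu_new = min(mu_new, p['tau'] * (np.linalg.norm(At - u)**2
                         + np.linalg.norm(u - v)**2) / (2 * g2))
        diff = S(v) - At
        dn = np.linalg.norm(A.T @ diff)
        eta_new = eta + p['zeta'](k)
        if np.linalg.norm(diff) > 0 and dn > 0:
            eta_new = min(eta_new, p['phi'] * np.linalg.norm(diff)**2 / dn**2)
        lam, mu, eta = lam_new, mu_new, eta_new
        x_prev, x = x, x_next                      # Step 11
    return x
```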
Remark 4.
(i) 
The auxiliary sequences $\{\xi_k\}$ and $\{\sigma_k\}$ in Algorithm 1 are introduced to allow for bias in the bifunctions $f_1$ and $f_2$ in Steps 2 and 6, respectively. These auxiliary biases contribute to the acceleration of Algorithm 1. We highlight that the numerical behavior of Algorithm 1 can be strongly influenced by the choice of the sequences $\{\xi_k\}$ and $\{\sigma_k\}$. Note that the accelerated subgradient extragradient method contained in Algorithm 1 reduces to the situation presented in [13,21] if $\xi_k = 1$ and $\sigma_k = 1$ for each $k \in \mathbb{N}$.
(ii) 
The self-adaptive step sizes $\lambda_k$, $\mu_k$, and $\eta_k$ are implemented without requiring prior knowledge of the Lipschitz constants of the bifunctions $f_1$ and $f_2$ or of the operator norm of the bounded linear operator $A$. This demonstrates that Algorithm 1 automatically updates the iteration step sizes $\lambda_k$, $\mu_k$, and $\eta_k$ by utilizing previously computed data. We emphasize that the step sizes $\lambda_k$, $\mu_k$, and $\eta_k$ in Algorithm 1 reduce to the non-increasing step sizes presented in [21,36] in the case $\rho_k = 0$, $\delta_k = 0$, and $\zeta_k = 0$ for each $k \in \mathbb{N}$. These choices may have a significant impact on the numerical results; see Section 4 for further discussion.
(iii) 
It is important to note that Step 9 in Algorithm 1 improves on [22,26,27] by using the metric projection onto the half-space $B_k$ instead of the metric projection onto the nonempty closed convex subset $C_1$. This is an advantage of Algorithm 1, given that the metric projection onto the half-space $B_k$ has an explicit formula (recorded immediately after this remark), whereas the metric projection onto $C_1$ may be difficult to compute if $C_1$ has a complicated structure.
(iv) 
In Step 3, the nonemptiness of the set $\partial_k f_1$ is always guaranteed. Indeed, by the definition of $y_k$ and Lemma 4, one sees that
$0 \in \partial_2\left\{\xi_k \lambda_k f_1(w_k, y_k) + \frac{1}{2}\|y_k - w_k\|^2\right\} + N_{C_1}(y_k).$
Thus, there exist $s \in \partial_2 f_1(w_k, y_k)$ and $q \in N_{C_1}(y_k)$ such that
$\xi_k \lambda_k s + y_k - w_k + q = 0.$
This ensures that there exists a solution within $\partial_k f_1$, confirming its nonemptiness. Similarly, in Step 7, the nonemptiness of the set $\partial_k f_2$ is guaranteed by utilizing the definition of $u_k$ and Lemma 4.
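For completeness, writing $B_k = \{y \in H_1 : \langle a_k, y\rangle \le b_k\}$ with $a_k := w_k - \xi_k\lambda_k s_k - y_k$ and $b_k := \langle a_k, y_k\rangle$, the explicit formula referred to in Remark 4(iii) is the standard half-space projection:

$$P_{B_k}(x) = x - \frac{\max\{0, \langle a_k, x\rangle - b_k\}}{\|a_k\|^2}\, a_k \qquad (a_k \ne 0),$$

and $P_{B_k} = I$ when $a_k = 0$, in which case $B_k = H_1$. This is exactly the `proj_halfspace` routine in the sketch above.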
The following lemmas are very useful for analyzing the convergence of Algorithm 1.
Lemma 7.
Let $f_1 : H_1 \times H_1 \to \mathbb{R}$ and $f_2 : H_2 \times H_2 \to \mathbb{R}$ be bifunctions that satisfy (A1)–(A4), $T : H_1 \to H_1$ and $S : H_2 \to H_2$ be nonexpansive mappings, $A : H_1 \to H_2$ be a bounded linear operator, and $A^* : H_2 \to H_1$ be the adjoint operator of $A$. Let $w_k, y_k, z_k \in H_1$ and $A t_k, u_k, v_k \in H_2$. Assume that the sequences $\{\lambda_k\}$, $\{\mu_k\}$, and $\{\eta_k\}$ are generated by (14), (15), and (16), respectively. Then, the following results hold:
$\lim_{k\to\infty}\lambda_k = \lambda \in \left[\min\left\{\lambda_1, \frac{\omega}{2\max\{c_1, c_2\}}\right\}, \lambda_1 + \rho\right]$, where $\rho = \sum_{k=1}^{\infty}\rho_k$,
$\lim_{k\to\infty}\mu_k = \mu \in \left[\min\left\{\mu_1, \frac{\tau}{2\max\{d_1, d_2\}}\right\}, \mu_1 + \delta\right]$, where $\delta = \sum_{k=1}^{\infty}\delta_k$,
and
$\lim_{k\to\infty}\eta_k = \eta \in \left[\min\left\{\eta_1, \frac{\varphi}{\|A\|^2}\right\}, \eta_1 + \zeta\right]$, where $\zeta = \sum_{k=1}^{\infty}\zeta_k$.
Proof. 
The proof of this lemma follows the technique of Lemma 3.1 in [37]. Firstly, we note that
$\frac{\varphi\|S v_k - A t_k\|^2}{\|A^*(S v_k - A t_k)\|^2} \ge \frac{\varphi\|S v_k - A t_k\|^2}{\|A\|^2\|S v_k - A t_k\|^2} = \frac{\varphi}{\|A\|^2}.$
This, together with the definition of $\eta_{k+1}$ and the assumptions on the parameter $\zeta_k$, yields
$\eta_{k+1} \ge \min\left\{\eta_k + \zeta_k, \frac{\varphi}{\|A\|^2}\right\} \ge \min\left\{\eta_k, \frac{\varphi}{\|A\|^2}\right\}.$
Thus, by induction, the sequence $\{\eta_k\}$ is bounded below by $\min\left\{\eta_1, \frac{\varphi}{\|A\|^2}\right\}$.
Additionally, in view of the definition of $\eta_{k+1}$, we obtain $\eta_{k+1} \le \eta_k + \zeta_k$ for each $k \in \mathbb{N}$. Then, by induction, the sequence $\{\eta_k\}$ is bounded above by $\eta_1 + \zeta$, where $\zeta = \sum_{k=1}^{\infty}\zeta_k$. Consequently, we conclude that $\{\eta_k\}$ is a bounded sequence and $\eta_k \in \left[\min\left\{\eta_1, \frac{\varphi}{\|A\|^2}\right\}, \eta_1 + \zeta\right]$.
In what follows, we set
$(\eta_{k+1} - \eta_k)^- := \max\{0, -(\eta_{k+1} - \eta_k)\} \quad \text{and} \quad (\eta_{k+1} - \eta_k)^+ := \max\{0, \eta_{k+1} - \eta_k\}.$
Combining these definitions with the definition of the sequence $\{\eta_k\}$, we have
$\sum_{k=1}^{\infty}(\eta_{k+1} - \eta_k)^+ \le \sum_{k=1}^{\infty}\zeta_k < +\infty.$
Thus, the series $\sum_{k=1}^{\infty}(\eta_{k+1} - \eta_k)^+$ is convergent. Next, we assert the convergence of the series $\sum_{k=1}^{\infty}(\eta_{k+1} - \eta_k)^-$. Suppose that $\sum_{k=1}^{\infty}(\eta_{k+1} - \eta_k)^- = +\infty$. We observe that
$\eta_{k+1} - \eta_k = (\eta_{k+1} - \eta_k)^+ - (\eta_{k+1} - \eta_k)^-.$
This implies that
$\eta_{n+1} - \eta_1 = \sum_{k=1}^{n}(\eta_{k+1} - \eta_k) = \sum_{k=1}^{n}(\eta_{k+1} - \eta_k)^+ - \sum_{k=1}^{n}(\eta_{k+1} - \eta_k)^-.$  (17)
Then, by taking $n \to +\infty$ in (17), we have $\eta_n \to -\infty$ as $n \to \infty$. This contradicts the boundedness of $\{\eta_k\}$. Owing to the convergence of the series $\sum_{k=1}^{\infty}(\eta_{k+1} - \eta_k)^+$ and $\sum_{k=1}^{\infty}(\eta_{k+1} - \eta_k)^-$, by taking $n \to +\infty$ in (17), we deduce that $\lim_{k\to\infty}\eta_k = \eta \in \left[\min\left\{\eta_1, \frac{\varphi}{\|A\|^2}\right\}, \eta_1 + \zeta\right]$.
On the other hand, from the Lipschitz-type continuity of $f_1$ on $H_1$ and of $f_2$ on $H_2$, there exist positive constants $\{c_1, c_2\}$ and $\{d_1, d_2\}$, respectively, such that
$f_1(w_k, z_k) - f_1(w_k, y_k) - f_1(y_k, z_k) \le \max\{c_1, c_2\}\left(\|w_k - y_k\|^2 + \|y_k - z_k\|^2\right),$
and
$f_2(A t_k, v_k) - f_2(A t_k, u_k) - f_2(u_k, v_k) \le \max\{d_1, d_2\}\left(\|A t_k - u_k\|^2 + \|u_k - v_k\|^2\right).$
These, together with the definitions of $\lambda_{k+1}$ and $\mu_{k+1}$ and the conditions on the sequences $\{\rho_k\}$ and $\{\delta_k\}$, yield
$\lambda_{k+1} \ge \min\left\{\lambda_k + \rho_k, \frac{\omega}{2\max\{c_1, c_2\}}\right\} \ge \min\left\{\lambda_k, \frac{\omega}{2\max\{c_1, c_2\}}\right\},$
and
$\mu_{k+1} \ge \min\left\{\mu_k + \delta_k, \frac{\tau}{2\max\{d_1, d_2\}}\right\} \ge \min\left\{\mu_k, \frac{\tau}{2\max\{d_1, d_2\}}\right\}.$
Hence, by induction, the sequences $\{\lambda_k\}$ and $\{\mu_k\}$ are bounded below by $\min\left\{\lambda_1, \frac{\omega}{2\max\{c_1, c_2\}}\right\}$ and $\min\left\{\mu_1, \frac{\tau}{2\max\{d_1, d_2\}}\right\}$, respectively. Similar to the above technique, we can show that
$\lim_{k\to\infty}\lambda_k = \lambda \in \left[\min\left\{\lambda_1, \frac{\omega}{2\max\{c_1, c_2\}}\right\}, \lambda_1 + \rho\right]$, where $\rho = \sum_{k=1}^{\infty}\rho_k$,
and
$\lim_{k\to\infty}\mu_k = \mu \in \left[\min\left\{\mu_1, \frac{\tau}{2\max\{d_1, d_2\}}\right\}, \mu_1 + \delta\right]$, where $\delta = \sum_{k=1}^{\infty}\delta_k$.
This completes the proof. □
Lemma 8.
Let $f_1 : H_1 \times H_1 \to \mathbb{R}$ and $f_2 : H_2 \times H_2 \to \mathbb{R}$ be bifunctions that satisfy (A1)–(A4), $T : H_1 \to H_1$ and $S : H_2 \to H_2$ be nonexpansive mappings, $A : H_1 \to H_2$ be a bounded linear operator, and $A^* : H_2 \to H_1$ be the adjoint operator of $A$. Assume that the solution set $\Omega$ is nonempty. Let $w_k \in H_1$ and $A t_k \in H_2$. If $y_k$, $z_k$, $u_k$, $v_k$, $\lambda_{k+1}$, and $\mu_{k+1}$ are created by the process of Algorithm 1, then the following results hold:
$\|z_k - p^*\|^2 \le \|w_k - p^*\|^2 - \left(\frac{1}{\xi_k} - \frac{\omega\lambda_k}{\lambda_{k+1}}\right)\|w_k - y_k\|^2 - \left(\frac{1}{\xi_k} - \frac{\omega\lambda_k}{\lambda_{k+1}}\right)\|y_k - z_k\|^2,$  (18)
and
$\|v_k - A p^*\|^2 \le \|A t_k - A p^*\|^2 - \left(\frac{1}{\sigma_k} - \frac{\tau\mu_k}{\mu_{k+1}}\right)\|A t_k - u_k\|^2 - \left(\frac{1}{\sigma_k} - \frac{\tau\mu_k}{\mu_{k+1}}\right)\|u_k - v_k\|^2,$  (19)
for all $p^* \in \Omega$.
Proof. 
First, we show that $C_1 \subseteq B_k$ for every $k \in \mathbb{N}$. Let $k \in \mathbb{N}$ be fixed and $z \in C_1$. Since $s_k \in \partial_k f_1$, it follows that there exists $q_k \in N_{C_1}(y_k)$ such that
$\xi_k \lambda_k s_k + y_k = w_k - q_k.$
So, from $q_k \in N_{C_1}(y_k)$, we obtain
$\langle w_k - \xi_k \lambda_k s_k - y_k, z - y_k\rangle = \langle q_k, z - y_k\rangle \le 0.$
This implies that $z \in B_k$. Hence, we conclude that $C_1 \subseteq B_k$ for every $k \in \mathbb{N}$. Similarly, since $r_k \in \partial_k f_2$, we can show that $C_2 \subseteq D_k$ for every $k \in \mathbb{N}$. As a result, we ensure that Algorithm 1 is well-defined.
Afterwards, by utilizing the above facts, we establish the results of the lemma. Let $p^* \in \Omega$. That is, $p^* \in EP(f_1, C_1)$, $p^* \in Fix(T)$, $A p^* \in EP(f_2, C_2)$, and $A p^* \in Fix(S)$. Due to $s_k \in \partial_2 f_1(w_k, y_k)$ and the subdifferentiability of $f_1$, we obtain
$f_1(w_k, z) - f_1(w_k, y_k) \ge \langle s_k, z - y_k\rangle, \quad \forall z \in H_1.$
It follows from $z_k \in B_k \subseteq H_1$ that
$f_1(w_k, z_k) - f_1(w_k, y_k) \ge \langle s_k, z_k - y_k\rangle.$  (20)
Likewise, by using the definition of $B_k$ and $z_k \in B_k$, we obtain
$\langle w_k - \xi_k \lambda_k s_k - y_k, z_k - y_k\rangle \le 0.$
Combined with relation (20), one sees that
$\xi_k \lambda_k\left[f_1(w_k, z_k) - f_1(w_k, y_k)\right] \ge \langle y_k - w_k, y_k - z_k\rangle.$  (21)
Furthermore, from the definition of $z_k$ and Lemma 4, we obtain
$0 \in \partial_2\left\{\lambda_k f_1(y_k, z_k) + \frac{1}{2}\|z_k - w_k\|^2\right\} + N_{B_k}(z_k).$
Then, there exist $s^* \in \partial_2 f_1(y_k, z_k)$ and $q^* \in N_{B_k}(z_k)$ such that
$\lambda_k s^* + z_k - w_k + q^* = 0.$  (22)
This, together with the subdifferentiability of $f_1$, yields
$f_1(y_k, z) - f_1(y_k, z_k) \ge \langle s^*, z - z_k\rangle, \quad \forall z \in H_1.$  (23)
Additionally, from $q^* \in N_{B_k}(z_k)$, we obtain
$\langle q^*, z_k - z\rangle \ge 0, \quad \forall z \in B_k.$  (24)
It follows from equality (22) that
$\langle w_k - z_k, z_k - z\rangle \ge \lambda_k\langle s^*, z_k - z\rangle, \quad \forall z \in B_k.$
Due to relation (23), we obtain
$\langle w_k - z_k, z_k - z\rangle \ge \lambda_k\left[f_1(y_k, z_k) - f_1(y_k, z)\right], \quad \forall z \in B_k.$  (25)
In particular, from $p^* \in C_1 \subseteq B_k$, one sees that
$\langle w_k - z_k, z_k - p^*\rangle \ge \lambda_k\left[f_1(y_k, z_k) - f_1(y_k, p^*)\right].$
Combined with the pseudomonotonicity of $f_1$ (note that $f_1(p^*, y_k) \ge 0$ implies $f_1(y_k, p^*) \le 0$), we have
$\langle w_k - z_k, z_k - p^*\rangle \ge \lambda_k f_1(y_k, z_k).$  (26)
So, relations (21) and (26) imply that
$\xi_k \lambda_k\left[f_1(w_k, z_k) - f_1(w_k, y_k) - f_1(y_k, z_k)\right] \ge \xi_k\langle z_k - w_k, z_k - p^*\rangle + \langle y_k - w_k, y_k - z_k\rangle.$  (27)
On the other hand, we observe from the definition of $\lambda_{k+1}$ that
$f_1(w_k, z_k) - f_1(w_k, y_k) - f_1(y_k, z_k) \le \frac{\omega\left(\|w_k - y_k\|^2 + \|y_k - z_k\|^2\right)}{2\lambda_{k+1}}.$  (28)
Using this together with relation (27), we obtain
$\xi_k\langle w_k - z_k, z_k - p^*\rangle \ge \langle y_k - w_k, y_k - z_k\rangle - \frac{\omega\xi_k\lambda_k\left(\|w_k - y_k\|^2 + \|y_k - z_k\|^2\right)}{2\lambda_{k+1}}.$  (29)
Owing to the above relation, we have the following facts:
$\xi_k\left(\|w_k - p^*\|^2 - \|w_k - z_k\|^2 - \|z_k - p^*\|^2\right) = 2\xi_k\langle w_k - z_k, z_k - p^*\rangle \ge 2\langle y_k - w_k, y_k - z_k\rangle - \frac{\omega\xi_k\lambda_k\left(\|w_k - y_k\|^2 + \|y_k - z_k\|^2\right)}{\lambda_{k+1}}.$
This implies that
$\|z_k - p^*\|^2 \le \|w_k - p^*\|^2 - \|w_k - z_k\|^2 - \frac{2}{\xi_k}\langle y_k - w_k, y_k - z_k\rangle + \frac{\omega\lambda_k\left(\|w_k - y_k\|^2 + \|y_k - z_k\|^2\right)}{\lambda_{k+1}}$
$= \|w_k - p^*\|^2 - \|w_k - z_k\|^2 + \frac{1}{\xi_k}\|w_k - z_k\|^2 - \frac{1}{\xi_k}\|w_k - y_k\|^2 - \frac{1}{\xi_k}\|y_k - z_k\|^2 + \frac{\omega\lambda_k\left(\|w_k - y_k\|^2 + \|y_k - z_k\|^2\right)}{\lambda_{k+1}}$
$= \|w_k - p^*\|^2 - \left(\frac{1}{\xi_k} - \frac{\omega\lambda_k}{\lambda_{k+1}}\right)\|w_k - y_k\|^2 - \left(\frac{1}{\xi_k} - \frac{\omega\lambda_k}{\lambda_{k+1}}\right)\|y_k - z_k\|^2 - \left(1 - \frac{1}{\xi_k}\right)\|w_k - z_k\|^2.$
Hence, by applying the choice of the parameter $\xi_k \in [1, \infty)$, we deduce that
$\|z_k - p^*\|^2 \le \|w_k - p^*\|^2 - \left(\frac{1}{\xi_k} - \frac{\omega\lambda_k}{\lambda_{k+1}}\right)\|w_k - y_k\|^2 - \left(\frac{1}{\xi_k} - \frac{\omega\lambda_k}{\lambda_{k+1}}\right)\|y_k - z_k\|^2.$
Similarly, we can show that
$\|v_k - A p^*\|^2 \le \|A t_k - A p^*\|^2 - \left(\frac{1}{\sigma_k} - \frac{\tau\mu_k}{\mu_{k+1}}\right)\|A t_k - u_k\|^2 - \left(\frac{1}{\sigma_k} - \frac{\tau\mu_k}{\mu_{k+1}}\right)\|u_k - v_k\|^2.$
This completes the proof. □
We are now in a position to establish the strong convergence of Algorithm 1.
Theorem 1.
Let $f_1 : H_1 \times H_1 \to \mathbb{R}$ and $f_2 : H_2 \times H_2 \to \mathbb{R}$ be bifunctions that satisfy (A1)–(A4), $T : H_1 \to H_1$ and $S : H_2 \to H_2$ be nonexpansive mappings, $A : H_1 \to H_2$ be a bounded linear operator, and $A^* : H_2 \to H_1$ be the adjoint operator of $A$. Assume that the solution set $\Omega$ is nonempty. Then, the sequence $\{x_k\}$ generated by Algorithm 1 converges strongly to the minimum-norm element of $\Omega$.
Proof. 
Let $p^* \in \Omega$. That is, $p^* \in EP(f_1, C_1)$, $p^* \in Fix(T)$, $A p^* \in EP(f_2, C_2)$, and $A p^* \in Fix(S)$. We start from the definition of $x_{k+1}$ and use the nonexpansivity of $P_{B_k}$, together with $p^* \in C_1 \subseteq B_k$, as follows:
$\|x_{k+1} - p^*\|^2 \le \|(t_k - p^*) + \eta_k A^*(S v_k - A t_k)\|^2 = \|t_k - p^*\|^2 + \eta_k^2\|A^*(S v_k - A t_k)\|^2 + 2\eta_k\langle A t_k - A p^*, S v_k - A t_k\rangle.$  (30)
Consider
$2\langle A t_k - A p^*, S v_k - A t_k\rangle = 2\langle S v_k - A p^*, S v_k - A t_k\rangle - 2\|S v_k - A t_k\|^2 = \|S v_k - A p^*\|^2 - \|S v_k - A t_k\|^2 - \|A t_k - A p^*\|^2.$
Using this together with relation (30), we have
$\|x_{k+1} - p^*\|^2 \le \|t_k - p^*\|^2 + \eta_k^2\|A^*(S v_k - A t_k)\|^2 - \eta_k\|S v_k - A t_k\|^2 + \eta_k\left(\|S v_k - A p^*\|^2 - \|A t_k - A p^*\|^2\right).$  (31)
It follows from the nonexpansivity of $S$ and the definition of $\eta_{k+1}$ that
$\|x_{k+1} - p^*\|^2 \le \|t_k - p^*\|^2 + \frac{\varphi\eta_k^2}{\eta_{k+1}}\|S v_k - A t_k\|^2 - \eta_k\|S v_k - A t_k\|^2 + \eta_k\left(\|S v_k - A p^*\|^2 - \|A t_k - A p^*\|^2\right) \le \|t_k - p^*\|^2 - \eta_k\left(1 - \frac{\varphi\eta_k}{\eta_{k+1}}\right)\|S v_k - A t_k\|^2 + \eta_k\left(\|v_k - A p^*\|^2 - \|A t_k - A p^*\|^2\right).$
Combined with the condition on the parameter $\varphi \in (0,1)$ and Lemma 7, we obtain
$\lim_{k\to\infty}\left(1 - \frac{\varphi\eta_k}{\eta_{k+1}}\right) = 1 - \varphi > 0.$  (32)
So, there exists $k_1 \in \mathbb{N}$ such that
$1 - \frac{\varphi\eta_k}{\eta_{k+1}} > 0, \quad \forall k \ge k_1.$
On the other hand, from the assumption $\omega \in (0,1)$, the fact that $\lim_{k\to\infty}\xi_k = 1$, and Lemma 7, we have
$\lim_{k\to\infty}\left(\frac{1}{\xi_k} - \frac{\omega\lambda_k}{\lambda_{k+1}}\right) = 1 - \omega > 0.$
Thus, there exists $k_2 \in \mathbb{N}$ such that
$\frac{1}{\xi_k} - \frac{\omega\lambda_k}{\lambda_{k+1}} > 0, \quad \forall k \ge k_2.$  (33)
Moreover, considering the choice of the parameter $\tau \in (0,1)$, the fact that $\lim_{k\to\infty}\sigma_k = 1$, and Lemma 7, we obtain
$\lim_{k\to\infty}\left(\frac{1}{\sigma_k} - \frac{\tau\mu_k}{\mu_{k+1}}\right) = 1 - \tau > 0.$
Then, there exists $k_3 \in \mathbb{N}$ such that
$\frac{1}{\sigma_k} - \frac{\tau\mu_k}{\mu_{k+1}} > 0, \quad \forall k \ge k_3.$  (34)
Choose $k_0 = \max\{k_1, k_2, k_3\}$. Let us now consider the case $k \ge k_0$ for $k \in \mathbb{N}$. So, by applying relations (33) and (34) to Lemma 8, we have the following relations:
$\|z_k - p^*\| \le \|w_k - p^*\|,$  (35)
and
$\|v_k - A p^*\| \le \|A t_k - A p^*\|.$  (36)
Using this together with relation (31), we have
$\|x_{k+1} - p^*\|^2 \le \|t_k - p^*\|^2 - \eta_k\left(1 - \frac{\varphi\eta_k}{\eta_{k+1}}\right)\|S v_k - A t_k\|^2.$  (37)
This, together with the assumption on the parameter $\eta_k$ and the fact (32), yields
$\|x_{k+1} - p^*\|^2 \le \|t_k - p^*\|^2.$  (38)
On the other hand, we note from the definition of $t_k$, the nonexpansivity of $T$, and relation (35) that
$\|t_k - p^*\| = \|(1 - \beta_k - \alpha_k)(z_k - p^*) + \alpha_k(T z_k - p^*) - \beta_k p^*\| \le (1 - \beta_k - \alpha_k)\|z_k - p^*\| + \alpha_k\|T z_k - p^*\| + \beta_k\|p^*\| \le (1 - \beta_k)\|w_k - p^*\| + \beta_k\|p^*\|.$
It follows from the definition of $w_k$ that
$\|t_k - p^*\| \le (1 - \beta_k)\|x_k - p^*\| + (1 - \beta_k)\theta_k\|x_k - x_{k-1}\| + \beta_k\|p^*\| = (1 - \beta_k)\|x_k - p^*\| + \beta_k\left[(1 - \beta_k)\frac{\theta_k}{\beta_k}\|x_k - x_{k-1}\| + \|p^*\|\right].$
Owing to the condition on the sequence $\{\theta_k\}$, we obtain
$(1 - \beta_k)\frac{\theta_k}{\beta_k}\|x_k - x_{k-1}\| \le (1 - \beta_k)\frac{\epsilon_k}{\beta_k}.$  (39)
This, together with the facts $\lim_{k\to\infty}\frac{\epsilon_k}{\beta_k} = 0$ and $\lim_{k\to\infty}\beta_k = 0$, yields
$\lim_{k\to\infty}(1 - \beta_k)\frac{\theta_k}{\beta_k}\|x_k - x_{k-1}\| = 0.$  (40)
Then, there exists a constant $M_1 > 0$ such that
$(1 - \beta_k)\frac{\theta_k}{\beta_k}\|x_k - x_{k-1}\| \le M_1.$  (41)
Combined with relations (38) and (39), we have
$\|x_{k+1} - p^*\| \le (1 - \beta_k)\|x_k - p^*\| + \beta_k\left(M_1 + \|p^*\|\right) \le \max\left\{\|x_k - p^*\|, M_1 + \|p^*\|\right\} \le \cdots \le \max\left\{\|x_{k_0} - p^*\|, M_1 + \|p^*\|\right\}.$
It follows that $\{\|x_k - p^*\|\}$ is a bounded sequence. As such, the sequence $\{x_k\}$ is bounded.
On the other hand, in view of the definition of $w_k$, one sees that
$\|w_k - p^*\|^2 = \|(1 + \theta_k)(x_k - p^*) - \theta_k(x_{k-1} - p^*)\|^2 = (1 + \theta_k)\|x_k - p^*\|^2 - \theta_k\|x_{k-1} - p^*\|^2 + \theta_k(1 + \theta_k)\|x_k - x_{k-1}\|^2 \le (1 + \theta_k)\|x_k - p^*\|^2 - \theta_k\|x_{k-1} - p^*\|^2 + 2\theta_k\|x_k - x_{k-1}\|^2 = \|x_k - p^*\|^2 + \theta_k\left(\|x_k - p^*\|^2 - \|x_{k-1} - p^*\|^2\right) + 2\theta_k\|x_k - x_{k-1}\|^2.$  (42)
Furthermore, from the definition of $t_k$ and (12), we obtain
$\|t_k - p^*\|^2 = \|(1 - \beta_k - \alpha_k)(z_k - p^*) + \alpha_k(T z_k - p^*) + \beta_k(-p^*)\|^2 \le (1 - \beta_k - \alpha_k)\|z_k - p^*\|^2 + \alpha_k\|T z_k - p^*\|^2 + \beta_k\|p^*\|^2 - \alpha_k(1 - \beta_k - \alpha_k)\|z_k - T z_k\|^2.$  (43)
Thus, by utilizing relations (35), (38), (42), and (43), the nonexpansivity of $T$, and Lemma 8, we have
$\|x_{k+1} - p^*\|^2 \le (1 - \beta_k - \alpha_k)\|w_k - p^*\|^2 + \alpha_k\|z_k - p^*\|^2 + \beta_k\|p^*\|^2 - \alpha_k(1 - \beta_k - \alpha_k)\|z_k - T z_k\|^2$
$\le (1 - \beta_k)\|w_k - p^*\|^2 + \beta_k\|p^*\|^2 - \alpha_k(1 - \beta_k - \alpha_k)\|z_k - T z_k\|^2 - \alpha_k\left(\frac{1}{\xi_k} - \frac{\omega\lambda_k}{\lambda_{k+1}}\right)\|w_k - y_k\|^2 - \alpha_k\left(\frac{1}{\xi_k} - \frac{\omega\lambda_k}{\lambda_{k+1}}\right)\|y_k - z_k\|^2$
$\le (1 - \beta_k)\|x_k - p^*\|^2 + \theta_k(1 - \beta_k)\left(\|x_k - p^*\|^2 - \|x_{k-1} - p^*\|^2\right) + 2\theta_k(1 - \beta_k)\|x_k - x_{k-1}\|^2 + \beta_k\|p^*\|^2 - \alpha_k(1 - \beta_k - \alpha_k)\|z_k - T z_k\|^2 - \alpha_k\left(\frac{1}{\xi_k} - \frac{\omega\lambda_k}{\lambda_{k+1}}\right)\|w_k - y_k\|^2 - \alpha_k\left(\frac{1}{\xi_k} - \frac{\omega\lambda_k}{\lambda_{k+1}}\right)\|y_k - z_k\|^2.$  (44)
This implies that
$\alpha_k(1 - \beta_k - \alpha_k)\|z_k - T z_k\|^2 + \alpha_k\left(\frac{1}{\xi_k} - \frac{\omega\lambda_k}{\lambda_{k+1}}\right)\|w_k - y_k\|^2 + \alpha_k\left(\frac{1}{\xi_k} - \frac{\omega\lambda_k}{\lambda_{k+1}}\right)\|y_k - z_k\|^2 \le \|x_k - p^*\|^2 - \|x_{k+1} - p^*\|^2 + \theta_k(1 - \beta_k)\left(\|x_k - p^*\|^2 - \|x_{k-1} - p^*\|^2\right) + 2\theta_k(1 - \beta_k)\|x_k - x_{k-1}\|^2 + \beta_k\|p^*\|^2.$  (45)
Now, we are ready to show that the sequence $\{x_k\}$ converges strongly to $\tilde{p} := P_{\Omega}(0)$ by considering two cases.
Case 1. Suppose that $\|x_{k+1} - \tilde{p}\| \le \|x_k - \tilde{p}\|$ for every $k \ge k_0$. This means that the sequence $\{\|x_k - \tilde{p}\|\}_{k \ge k_0}$ is non-increasing. Since the sequence $\{\|x_k - \tilde{p}\|\}$ is bounded, the limit of $\|x_k - \tilde{p}\|$ exists. Combining relation (45) with the facts that $\lim_{k\to\infty}\beta_k = 0$ and $\lim_{k\to\infty}\theta_k\|x_k - x_{k-1}\|^2 = 0$, and with the properties of the sequences $\{\alpha_k\}$ and $\{\beta_k\}$, we have
$\lim_{k\to\infty}\|w_k - y_k\| = 0,$  (46)
$\lim_{k\to\infty}\|y_k - z_k\| = 0,$  (47)
and
$\lim_{k\to\infty}\|T z_k - z_k\| = 0.$  (48)
These imply that
$\lim_{k\to\infty}\|w_k - z_k\| = 0.$  (49)
In addition, from $\lim_{k\to\infty}\theta_k\|x_k - x_{k-1}\| = 0$, we obtain
$\lim_{k\to\infty}\|w_k - x_k\| = 0.$  (50)
This, together with (49), yields
$\lim_{k\to\infty}\|x_k - z_k\| = 0.$  (51)
It follows from (47) that
$\lim_{k\to\infty}\|x_k - y_k\| = 0.$  (52)
Consider
$\|t_k - z_k\| = \|\alpha_k(T z_k - z_k) - \beta_k z_k\| \le \alpha_k\|T z_k - z_k\| + \beta_k\|z_k\|.$
So, by using (48) and the fact that $\lim_{k\to\infty}\beta_k = 0$, we have
$\lim_{k\to\infty}\|t_k - z_k\| = 0.$  (53)
Combining it with (51), we obtain
$\lim_{k\to\infty}\|x_k - t_k\| = 0.$  (54)
On the other hand, in view of relation (37), one sees that
$\eta_k\left(1 - \frac{\varphi\eta_k}{\eta_{k+1}}\right)\|S v_k - A t_k\|^2 \le \|t_k - \tilde{p}\|^2 - \|x_{k+1} - \tilde{p}\|^2 \le \left(\|t_k - z_k\| + \|z_k - x_k\| + \|x_k - \tilde{p}\| - \|x_{k+1} - \tilde{p}\|\right)\left(\|t_k - \tilde{p}\| + \|x_{k+1} - \tilde{p}\|\right).$
Thus, by utilizing (51) and (53), and the existence of $\lim_{k\to\infty}\|x_k - \tilde{p}\|$, we have
$\lim_{k\to\infty}\|S v_k - A t_k\| = 0.$  (55)
Furthermore, from Lemma 8 and the nonexpansivity of $S$, we obtain
$\left(\frac{1}{\sigma_k} - \frac{\tau\mu_k}{\mu_{k+1}}\right)\|A t_k - u_k\|^2 + \left(\frac{1}{\sigma_k} - \frac{\tau\mu_k}{\mu_{k+1}}\right)\|u_k - v_k\|^2 \le \|A t_k - A\tilde{p}\|^2 - \|v_k - A\tilde{p}\|^2 \le \left(\|A t_k - S v_k\| + \|S v_k - A\tilde{p}\| - \|v_k - A\tilde{p}\|\right)\left(\|A t_k - A\tilde{p}\| + \|v_k - A\tilde{p}\|\right) \le \|A t_k - S v_k\|\left(\|A t_k - A\tilde{p}\| + \|v_k - A\tilde{p}\|\right).$
So, by using (55) and the above relation, we obtain
$\lim_{k\to\infty}\|A t_k - u_k\| = 0,$  (56)
and
$\lim_{k\to\infty}\|u_k - v_k\| = 0.$  (57)
These imply that
$\lim_{k\to\infty}\|A t_k - v_k\| = 0.$  (58)
Since $\|S v_k - v_k\| \le \|S v_k - A t_k\| + \|A t_k - v_k\|$, it follows from (55) and (58) that
$\lim_{k\to\infty}\|S v_k - v_k\| = 0.$  (59)
Now, let $x^* \in \omega_w(x_k)$ (the set of weak cluster points of $\{x_k\}$) and let $\{x_{k_n}\}$ be a subsequence of $\{x_k\}$ with $x_{k_n} \rightharpoonup x^*$ as $n \to \infty$. By using (50)–(52) and (54), we have $w_{k_n} \rightharpoonup x^*$, $y_{k_n} \rightharpoonup x^*$, $z_{k_n} \rightharpoonup x^*$, and $t_{k_n} \rightharpoonup x^*$ as $n \to \infty$. It follows that $A t_{k_n} \rightharpoonup A x^*$ as $n \to \infty$. Combining (56) and (58), we have $u_{k_n} \rightharpoonup A x^*$ and $v_{k_n} \rightharpoonup A x^*$ as $n \to \infty$. Since $C_1$ and $C_2$ are closed convex subsets, $C_1$ and $C_2$ are weakly closed. Hence, $x^* \in C_1$ and $A x^* \in C_2$.
Next, due to relations (21), (25), and (28), we obtain
$\lambda_{k_n} f_1(y_{k_n}, z) \ge \lambda_{k_n} f_1(y_{k_n}, z_{k_n}) + \langle w_{k_n} - z_{k_n}, z - z_{k_n}\rangle$
$\ge \lambda_{k_n} f_1(w_{k_n}, z_{k_n}) - \lambda_{k_n} f_1(w_{k_n}, y_{k_n}) - \frac{\omega\lambda_{k_n}}{2\lambda_{k_n + 1}}\|w_{k_n} - y_{k_n}\|^2 - \frac{\omega\lambda_{k_n}}{2\lambda_{k_n + 1}}\|y_{k_n} - z_{k_n}\|^2 + \langle w_{k_n} - z_{k_n}, z - z_{k_n}\rangle$
$\ge \frac{1}{\xi_{k_n}}\langle y_{k_n} - w_{k_n}, y_{k_n} - z_{k_n}\rangle - \frac{\omega\lambda_{k_n}}{2\lambda_{k_n + 1}}\|w_{k_n} - y_{k_n}\|^2 - \frac{\omega\lambda_{k_n}}{2\lambda_{k_n + 1}}\|y_{k_n} - z_{k_n}\|^2 + \langle w_{k_n} - z_{k_n}, z - z_{k_n}\rangle,$  (60)
for each $z \in C_1$. So, by utilizing the facts (46), (47), and (49) and the boundedness of $\{z_k\}$, we find that the right-hand side of inequality (60) tends to zero. It follows from the sequential weak upper semicontinuity of $f_1$ and $\lambda_{k_n} > 0$ that
$0 \le \limsup_{n\to\infty} f_1(y_{k_n}, z) \le f_1(x^*, z), \quad \forall z \in C_1.$
Similarly, we can show that
$\mu_{k_n} f_2(u_{k_n}, v) \ge \frac{1}{\sigma_{k_n}}\langle u_{k_n} - A t_{k_n}, u_{k_n} - v_{k_n}\rangle - \frac{\tau\mu_{k_n}}{2\mu_{k_n + 1}}\|A t_{k_n} - u_{k_n}\|^2 - \frac{\tau\mu_{k_n}}{2\mu_{k_n + 1}}\|u_{k_n} - v_{k_n}\|^2 + \langle A t_{k_n} - v_{k_n}, v - v_{k_n}\rangle,$  (61)
for each $v \in C_2$. Using this together with (56)–(58) and the boundedness of $\{v_k\}$, we find that the right-hand side of inequality (61) tends to zero. Thus, by using the sequential weak upper semicontinuity of $f_2$ and $\mu_{k_n} > 0$, we have
$0 \le \limsup_{n\to\infty} f_2(u_{k_n}, v) \le f_2(A x^*, v), \quad \forall v \in C_2.$
On the other hand, since $z_{k_n} \rightharpoonup x^*$ as $n \to \infty$, and considering (48), by the demiclosedness at zero of $I - T$, we have $x^* \in Fix(T)$. Similarly, since $v_{k_n} \rightharpoonup A x^*$ as $n \to \infty$, and considering (59), it follows from the demiclosedness at zero of $I - S$ that $A x^* \in Fix(S)$. Hence, we can conclude that $x^* \in \Omega$.
Put $h_k = (1 - \alpha_k) z_k + \alpha_k T z_k$. Relation (35) and the nonexpansivity of $T$ imply that
$\|h_k - \tilde{p}\| \le (1 - \alpha_k)\|z_k - \tilde{p}\| + \alpha_k\|T z_k - \tilde{p}\| \le \|w_k - \tilde{p}\|.$  (62)
From the definition of $t_k$, one sees that
$t_k = h_k - \beta_k z_k = (1 - \beta_k) h_k - \beta_k(z_k - h_k) = (1 - \beta_k) h_k - \beta_k\alpha_k(z_k - T z_k).$
It follows from (11) that
$\|t_k - \tilde{p}\|^2 = \|(1 - \beta_k)(h_k - \tilde{p}) - \beta_k\alpha_k(z_k - T z_k) - \beta_k\tilde{p}\|^2 \le (1 - \beta_k)\|h_k - \tilde{p}\|^2 - 2\beta_k\alpha_k\langle z_k - T z_k, t_k - \tilde{p}\rangle - 2\beta_k\langle\tilde{p}, t_k - \tilde{p}\rangle.$  (63)
Using this together with relations (38) and (62), we obtain
$\|x_{k+1} - \tilde{p}\|^2 \le (1 - \beta_k)\|w_k - \tilde{p}\|^2 + \beta_k\left[2\alpha_k\langle T z_k - z_k, t_k - \tilde{p}\rangle + 2\langle t_k - \tilde{p}, -\tilde{p}\rangle\right].$  (64)
Consider
$\|w_k - \tilde{p}\|^2 \le \left(\|x_k - \tilde{p}\| + \theta_k\|x_k - x_{k-1}\|\right)^2 \le \|x_k - \tilde{p}\|^2 + 2\theta_k\|x_k - \tilde{p}\|\|x_k - x_{k-1}\| + \theta_k\|x_k - x_{k-1}\|^2 \le \|x_k - \tilde{p}\|^2 + 3 M_2\theta_k\|x_k - x_{k-1}\|,$  (65)
where $M_2 = \sup_{k \ge k_0}\left\{\|x_k - \tilde{p}\|, \|x_k - x_{k-1}\|\right\}$. It follows from relation (64) that
$\|x_{k+1} - \tilde{p}\|^2 \le (1 - \beta_k)\|x_k - \tilde{p}\|^2 + 3 M_2\theta_k(1 - \beta_k)\|x_k - x_{k-1}\| + \beta_k\left[2\alpha_k\langle T z_k - z_k, t_k - \tilde{p}\rangle + 2\langle t_k - \tilde{p}, -\tilde{p}\rangle\right] = (1 - \beta_k)\|x_k - \tilde{p}\|^2 + \beta_k\left[3 M_2(1 - \beta_k)\frac{\theta_k}{\beta_k}\|x_k - x_{k-1}\| + 2\alpha_k\langle T z_k - z_k, t_k - \tilde{p}\rangle + 2\langle t_k - \tilde{p}, -\tilde{p}\rangle\right].$  (66)
Indeed, from $x^* \in \omega_w(x_k) \subseteq \Omega$ and the properties of $\tilde{p} := P_{\Omega}(0)$, we have
$\limsup_{k\to\infty}\langle t_k - \tilde{p}, -\tilde{p}\rangle = \lim_{n\to\infty}\langle t_{k_n} - \tilde{p}, -\tilde{p}\rangle = \langle x^* - \tilde{p}, -\tilde{p}\rangle \le 0.$  (67)
Therefore, by using (40), (48), (66), (67), and Lemma 5, we obtain
$\lim_{k\to\infty}\|x_k - \tilde{p}\| = 0.$
Case 2. Suppose that there exists a subsequence $\{\|x_{k_i} - \tilde{p}\|\}$ of $\{\|x_k - \tilde{p}\|\}$ satisfying
$\|x_{k_i} - \tilde{p}\| < \|x_{k_i + 1} - \tilde{p}\|, \quad \forall i \in \mathbb{N}.$  (68)
According to Lemma 6, there exists a non-decreasing sequence $\{m_n\} \subset \mathbb{N}$ with $\lim_{n\to\infty} m_n = \infty$ such that
$\|x_{m_n} - \tilde{p}\| \le \|x_{m_n + 1} - \tilde{p}\| \quad \text{and} \quad \|x_n - \tilde{p}\| \le \|x_{m_n + 1} - \tilde{p}\|, \quad \forall n \in \mathbb{N}.$  (69)
Using this together with relation (45), we have
$\alpha_{m_n}(1 - \beta_{m_n} - \alpha_{m_n})\|z_{m_n} - T z_{m_n}\|^2 + \alpha_{m_n}\left(\frac{1}{\xi_{m_n}} - \frac{\omega\lambda_{m_n}}{\lambda_{m_n + 1}}\right)\|w_{m_n} - y_{m_n}\|^2 + \alpha_{m_n}\left(\frac{1}{\xi_{m_n}} - \frac{\omega\lambda_{m_n}}{\lambda_{m_n + 1}}\right)\|y_{m_n} - z_{m_n}\|^2$
$\le \|x_{m_n} - \tilde{p}\|^2 - \|x_{m_n + 1} - \tilde{p}\|^2 + \theta_{m_n}(1 - \beta_{m_n})\left(\|x_{m_n} - \tilde{p}\|^2 - \|x_{m_n - 1} - \tilde{p}\|^2\right) + 2\theta_{m_n}(1 - \beta_{m_n})\|x_{m_n} - x_{m_n - 1}\|^2 + \beta_{m_n}\|\tilde{p}\|^2$
$\le \theta_{m_n}(1 - \beta_{m_n})\|x_{m_n} - x_{m_n - 1}\|\left(\|x_{m_n} - \tilde{p}\| + \|x_{m_n - 1} - \tilde{p}\|\right) + 2\theta_{m_n}(1 - \beta_{m_n})\|x_{m_n} - x_{m_n - 1}\|^2 + \beta_{m_n}\|\tilde{p}\|^2.$  (70)
Following the lines of the proof of Case 1, we can check that
$\lim_{n\to\infty}\|w_{m_n} - y_{m_n}\| = 0, \quad \lim_{n\to\infty}\|y_{m_n} - z_{m_n}\| = 0,$
$\lim_{n\to\infty}\|x_{m_n} - z_{m_n}\| = 0, \quad \lim_{n\to\infty}\|x_{m_n} - t_{m_n}\| = 0,$
$\lim_{n\to\infty}\|T z_{m_n} - z_{m_n}\| = 0, \quad \lim_{n\to\infty}\|S v_{m_n} - v_{m_n}\| = 0,$
$\lim_{n\to\infty}\|A t_{m_n} - v_{m_n}\| = 0, \quad \lim_{n\to\infty}\|A t_{m_n} - u_{m_n}\| = 0,$  (71)
$\limsup_{n\to\infty}\langle t_{m_n} - \tilde{p}, -\tilde{p}\rangle \le 0,$  (72)
and
$\|x_{m_n + 1} - \tilde{p}\|^2 \le (1 - \beta_{m_n})\|x_{m_n} - \tilde{p}\|^2 + \beta_{m_n}\left[3 M_2(1 - \beta_{m_n})\frac{\theta_{m_n}}{\beta_{m_n}}\|x_{m_n} - x_{m_n - 1}\| + 2\alpha_{m_n}\langle T z_{m_n} - z_{m_n}, t_{m_n} - \tilde{p}\rangle + 2\langle t_{m_n} - \tilde{p}, -\tilde{p}\rangle\right].$
This, together with relation (69), yields
$\|x_{m_n + 1} - \tilde{p}\|^2 \le (1 - \beta_{m_n})\|x_{m_n + 1} - \tilde{p}\|^2 + \beta_{m_n}\left[3 M_2(1 - \beta_{m_n})\frac{\theta_{m_n}}{\beta_{m_n}}\|x_{m_n} - x_{m_n - 1}\| + 2\alpha_{m_n}\langle T z_{m_n} - z_{m_n}, t_{m_n} - \tilde{p}\rangle + 2\langle t_{m_n} - \tilde{p}, -\tilde{p}\rangle\right].$  (73)
It follows from relation (69), again, that
$\|x_n - \tilde{p}\|^2 \le 3 M_2(1 - \beta_{m_n})\frac{\theta_{m_n}}{\beta_{m_n}}\|x_{m_n} - x_{m_n - 1}\| + 2\alpha_{m_n}\langle T z_{m_n} - z_{m_n}, t_{m_n} - \tilde{p}\rangle + 2\langle t_{m_n} - \tilde{p}, -\tilde{p}\rangle.$  (74)
Then, by using (40), (72), and (74), we have
$\limsup_{n\to\infty}\|x_n - \tilde{p}\|^2 \le 0.$
Therefore, the sequence $\{x_n\}$ converges strongly to $\tilde{p} = P_{\Omega}(0)$. This completes the proof. □

4. Numerical Experiments

This section presents some numerical experiments in finite- and infinite-dimensional Hilbert spaces to illustrate the effectiveness of the introduced Algorithm 1 and to compare it with Equation (10). We focus on the effectiveness of two groups of auxiliary parameters, $\{\xi_k, \sigma_k\}$ and $\{\rho_k, \delta_k, \zeta_k\}$, based on different choices of these parameters. All the numerical computations were implemented in Matlab R2021b and performed on an Apple M1 with 8.00 GB RAM.
Example 1.
Let $H_1 = \mathbb{R}^n$ and $H_2 = \mathbb{R}^m$ be equipped with the Euclidean norm. We consider the functions $g_1 : \mathbb{R}^n \to \mathbb{R}$ and $g_2 : \mathbb{R}^m \to \mathbb{R}$ defined by $g_1(x) = \frac{1}{2} x^T V x$ and $g_2(x) = \|x\|$, where $V \in \mathbb{R}^{n \times n}$ is an invertible symmetric positive semidefinite matrix. The mappings $T : \mathbb{R}^n \to \mathbb{R}^n$ and $S : \mathbb{R}^m \to \mathbb{R}^m$ associated with the functions $g_1$ and $g_2$, respectively, are given as follows:
$T x = (I_n + V)^{-1} x,$
and
$S x = \begin{cases} \left(1 - \frac{1}{\|x\|}\right) x, & \text{if } \|x\| \ge 1, \\ 0, & \text{otherwise}, \end{cases}$
where $I_n$ is the identity matrix of dimension $n$. Observe that $T$ and $S$ are nonexpansive mappings such that $Fix(T) = \arg\min g_1$ and $Fix(S) = \arg\min g_2$; see [38].
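A hedged Python rendering of these two mappings (our own transcription of the formulas above, with numpy standing in for the Matlab environment used in the experiments) is:

```python
import numpy as np

def make_T(V):
    """T(x) = (I_n + V)^{-1} x; its fixed points are the minimizers of
    g1(x) = 0.5 x^T V x."""
    M = np.linalg.inv(np.eye(V.shape[0]) + V)
    return lambda x: M @ x

def S(x):
    """Proximal-type map of g2(x) = ||x||: shrink toward 0; Fix(S) = {0}."""
    nx = np.linalg.norm(x)
    return (1.0 - 1.0 / nx) * x if nx >= 1.0 else np.zeros_like(x)
```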
On the other hand, we consider the bifunctions $\tilde{f}_1$ and $\tilde{f}_2$, which arise from the Nash–Cournot oligopolistic equilibrium models of electricity markets; see [33,39]:
$\tilde{f}_1(x, y) = \langle P_1 x + P_2 y, y - x\rangle, \quad \forall x, y \in \mathbb{R}^n, \qquad \tilde{f}_2(u, v) = \langle Q_1 u + Q_2 v, v - u\rangle, \quad \forall u, v \in \mathbb{R}^m,$
where $P_1, P_2 \in \mathbb{R}^{n \times n}$ and $Q_1, Q_2 \in \mathbb{R}^{m \times m}$ are matrices such that $P_2$, $Q_2$ are symmetric positive semidefinite and $P_2 - P_1$, $Q_2 - Q_1$ are negative semidefinite. We notice that $\tilde{f}_1(x, y) + \tilde{f}_1(y, x) = (x - y)^T (P_2 - P_1)(x - y)$, $\forall x, y \in \mathbb{R}^n$. So, by the negative semidefiniteness of $P_2 - P_1$, we find that $\tilde{f}_1$ is monotone. Likewise, we find that $\tilde{f}_2$ is monotone.
Afterwards, the bifunctions $f_1$ and $f_2$ are defined as follows:
$f_1(x, y) = \begin{cases} \tilde{f}_1(x, y), & \text{if } (x, y) \in C_1 \times C_1, \\ 0, & \text{otherwise}, \end{cases}$
and
$f_2(u, v) = \begin{cases} \tilde{f}_2(u, v), & \text{if } (u, v) \in C_2 \times C_2, \\ 0, & \text{otherwise}, \end{cases}$
where $C_1 = \prod_{i=1}^{n}[-5, 5]$ and $C_2 = \prod_{j=1}^{m}[-20, 20]$ are the constraint boxes. Notice that $f_1$ and $f_2$ are Lipschitz-type continuous; see [12,19].
Throughout this numerical experiment, the matrices $P_1$, $P_2$, $Q_1$, and $Q_2$ were generated randomly with entries in the interval $[-5, 5]$ such that they satisfy the qualifications mentioned above. The linear operator $A : \mathbb{R}^n \to \mathbb{R}^m$ is an $m \times n$ matrix, each of whose entries is generated randomly in the interval $[-2, 2]$. Notice that the solution set $\Omega$ is nonempty, since $0 \in \Omega$. Algorithm 1 was tested against Equation (10) using the stopping criterion $\frac{\|x_{k+1} - x_k\|}{\|x_{k+1}\|} < 10^{-6}$. The starting points $x_0 = x_1 \in \mathbb{R}^n$ were generated randomly in the interval $[-5, 5]$. We randomly selected 10 starting points, and the average results are reported, where $n = 10$ and $m = 20$. We took into account the control parameters of Algorithm 1 and Equation (10) as follows.
  • In Algorithm 1, we chose $\omega = \tau = \varphi = 0.6$, $\lambda_1 = \mu_1 = \eta_1 = 0.9$, $\gamma_k = \frac{1}{k+1}$, $\epsilon_k = \frac{1}{(k+1)^2}$, $\beta_k = \frac{1}{k+1}$, $\alpha_k = 0.8(1 - \beta_k)$, and $\theta_k = \bar{\theta}_k$.
  • In Equation (10), we set $\gamma = 0.5$, $\eta_k = \frac{1}{2\|A\|^2}$, $\epsilon_k = \frac{10}{k^2}$, $\beta_k = 10^{-10} + \frac{1}{k+1}$, $\alpha_k = \frac{1}{k+3}$, and $\theta_k = \bar{\theta}_k$, which are the appropriate values as presented in [27]. Indeed, $c_1 = c_2 = \frac{1}{2}\|P_1 - P_2\|$ and $d_1 = d_2 = \frac{1}{2}\|Q_1 - Q_2\|$ are Lipschitz constants of $f_1$ and $f_2$, respectively. Observe that, if $b_1 = \max\{c_1, d_1\}$ and $b_2 = \max\{c_2, d_2\}$, then $b_1$ and $b_2$ are Lipschitz constants of both $f_1$ and $f_2$. Thus, we set $\mu_k = \lambda_k = \frac{1}{\max\{b_1, b_2\}}$.
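One possible way to generate test data with the stated properties (this construction is ours; the paper only states that the entries were drawn from $[-5, 5]$ subject to the qualifications) is to build $P_2$ and $Q_2$ as Gram matrices and to obtain $P_1$ by adding another positive semidefinite matrix, which forces $P_2 - P_1$ to be negative semidefinite:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_pair(n, scale=5.0):
    """Return (P1, P2) with P2 symmetric positive semidefinite and
    P2 - P1 = -D negative semidefinite (D is also a Gram matrix)."""
    B = rng.uniform(-1.0, 1.0, (n, n))
    P2 = scale * (B @ B.T) / n
    C = rng.uniform(-1.0, 1.0, (n, n))
    D = scale * (C @ C.T) / n
    return P2 + D, P2                     # P1 = P2 + D

n, m = 10, 20
P1, P2 = random_pair(n)
Q1, Q2 = random_pair(m)
A = rng.uniform(-2.0, 2.0, (m, n))        # bounded linear operator R^n -> R^m

f1 = lambda x, y: (P1 @ x + P2 @ y) @ (y - x)   # Nash-Cournot bifunction
proj_C1 = lambda x: np.clip(x, -5.0, 5.0)        # box projection onto C1
proj_C2 = lambda u: np.clip(u, -20.0, 20.0)      # box projection onto C2
```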
In order to evaluate the optimal values of the control parameters, for the first experiment, the numerical results were obtained by varying the parameters $\rho_k$, $\delta_k$, and $\zeta_k$. The results of the numerical comparison are presented in Table 1 for all combinations of the parameters $\rho_k \in \left\{0, \frac{1}{(k+1)^{1.2}}\right\}$, $\delta_k \in \left\{0, \frac{1}{(k+1)^{1.2}}\right\}$, and $\zeta_k \in \left\{0, \frac{1}{(k+1)^{1.2}}\right\}$, where the parameters $\xi_k = \sigma_k = 1$ are fixed. We observe that, when $\rho_k = 0$, $\delta_k = 0$, and $\zeta_k = 0$, the step sizes considered in Algorithm 1 reduce to forms equivalent to the step sizes provided in [21,36,40].
In Table 1, the number of iterations (Iter) and the CPU time (Time) in seconds are displayed. The results suggest that the parameters $\rho_k = \frac{1}{(k+1)^{1.2}}$, $\delta_k = \frac{1}{(k+1)^{1.2}}$, and $\zeta_k = \frac{1}{(k+1)^{1.2}}$ yield a better number of iterations and CPU time than the parameters $\rho_k = 0$, $\delta_k = 0$, and $\zeta_k = 0$, respectively. This illustrates that Algorithm 1 can be made more efficient by a suitable selection of the parameters $\rho_k$, $\delta_k$, and $\zeta_k$. Additionally, the results indicate that Algorithm 1 performs most effectively with the suggested parameters $\rho_k = \frac{1}{(k+1)^{1.2}}$, $\delta_k = \frac{1}{(k+1)^{1.2}}$, and $\zeta_k = \frac{1}{(k+1)^{1.2}}$. Finally, when the parameters $\xi_k = 1$ and $\sigma_k = 1$, Algorithm 1 is superior to Equation (10) in terms of both the number of iterations and the CPU time.
In the ensuing experiment, we consider the numerical behavior of the control parameters $\xi_k$ and $\sigma_k$, which are included to allow for bias in the bifunctions $f_1$ and $f_2$, respectively. The numerical computations are presented in Table 2 for different choices of the control parameters $\xi_k \in \left\{1 + \frac{1}{k^{100}}, 1 + \frac{1}{k}, 1 + \frac{1}{\sqrt{k}}, 1 + \frac{1}{\log(k+1)}\right\}$ and $\sigma_k \in \left\{1 + \frac{1}{k^{100}}, 1 + \frac{1}{k}, 1 + \frac{1}{\sqrt{k}}, 1 + \frac{1}{\log(k+1)}\right\}$, while preserving the fixed values of the control parameters $\rho_k = \delta_k = \zeta_k = \frac{1}{(k+1)^{1.2}}$.
Table 2 points out that the choices of the pertinent parameters, especially the values of the parameters $\xi_k$ and $\sigma_k$, are important. The parameters $\xi_k$ and $\sigma_k$ that converge to 1 the fastest, such as $\xi_k = 1 + \frac{1}{k^{100}}$ and $\sigma_k = 1 + \frac{1}{k^{100}}$, exhibit better performance than the other cases when considering the number of iterations. Nevertheless, we point out that the fastest sequence converging to 1 is the constant sequence 1 itself, and yet the corresponding numerical results turn out to be slow (see Table 1). This demonstrates that the proposed parameters $\xi_k$ and $\sigma_k$ in Algorithm 1 lead to the improved performance of Algorithm 1.
Example 2.
Let $H_1 = H_2 = L^2([0,1])$ be infinite-dimensional Hilbert spaces endowed with the inner product
$\langle x, y\rangle = \int_0^1 x(t) y(t)\,dt, \quad \forall x, y \in L^2([0,1]),$
and the corresponding norm
$\|x\| = \left(\int_0^1 x(t)^2\,dt\right)^{\frac{1}{2}}, \quad \forall x \in L^2([0,1]).$
Assume that $R_1, R_2$ and $r_1, r_2$ are positive real numbers such that $\frac{R_1}{k_1 + 1} < \frac{r_1}{k_1} < r_1 < R_1$ for some $k_1 > 1$, and $\frac{R_2}{k_2 + 1} < \frac{r_2}{k_2} < r_2 < R_2$ for some $k_2 > 1$. Take the feasible sets as $C_1 = \left\{x \in L^2([0,1]) \mid \|x\| \le r_1\right\}$ and $C_2 = \left\{x \in L^2([0,1]) \mid \|x\| \le r_2\right\}$. We consider the operators $F_1 : L^2([0,1]) \to L^2([0,1])$ and $F_2 : L^2([0,1]) \to L^2([0,1])$ defined by
$F_1 x = (R_1 - \|x\|) x \quad \text{and} \quad F_2 x = (R_2 - \|x\|) x, \quad \forall x \in L^2([0,1]).$
Set
$f_1(x, y) = \langle F_1 x, y - x\rangle \quad \text{and} \quad f_2(x, y) = \langle F_2 x, y - x\rangle, \quad \forall x, y \in L^2([0,1]).$
Observe that the operators $F_1$ and $F_2$ are pseudomonotone rather than monotone and satisfy Lipschitz continuity; see [41]. These facts imply that $f_1$ and $f_2$ are pseudomonotone and Lipschitz-type continuous bifunctions. Here, we consider the nonexpansive mappings $T : L^2([0,1]) \to L^2([0,1])$ and $S : L^2([0,1]) \to L^2([0,1])$ defined by $T x = \frac{x}{3}$ and $S x = \frac{x}{5}$, $\forall x \in L^2([0,1])$. Also, the linear operator $A : L^2([0,1]) \to L^2([0,1])$ is given by $A x = x$, $\forall x \in L^2([0,1])$.
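In practice, the $L^2([0,1])$ setting can be simulated by discretizing functions on a uniform grid with trapezoidal quadrature weights; the sketch below (the grid size and the quadrature rule are our own choices) sets up the inner product, the ball projections for $C_1$ and $C_2$, and the operators of Example 2.

```python
import numpy as np

m = 200                                        # number of grid points on [0, 1]
t = np.linspace(0.0, 1.0, m)
w = np.full(m, 1.0 / (m - 1))                  # trapezoidal quadrature weights
w[0] = w[-1] = 0.5 / (m - 1)

l2_ip = lambda x, y: np.sum(w * x * y)         # <x, y> = int_0^1 x(t) y(t) dt
l2_norm = lambda x: np.sqrt(l2_ip(x, x))

def proj_ball(x, r):
    """Projection onto the ball {x in L2([0,1]) : ||x|| <= r}."""
    nx = l2_norm(x)
    return x if nx <= r else (r / nx) * x

R1, r1 = 1.5, 1.0
F1 = lambda x: (R1 - l2_norm(x)) * x           # pseudomonotone operator
T = lambda x: x / 3.0                          # nonexpansive, Fix(T) = {0}
S = lambda x: x / 5.0
A = lambda x: x                                # identity operator

x0 = 3.0 * np.exp(t)                           # starting point x_0(t) = 3 exp(t)
```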
For this numerical experiment, the control parameters of Algorithm 1 are chosen as in Example 1. Meanwhile, for Equation (10), we are required to know the Lipschitz constants of $f_1$ and $f_2$. Indeed, $c_1 = c_2 = \frac{1}{2}(2 R_1 + r_1)$ and $d_1 = d_2 = \frac{1}{2}(2 R_2 + r_2)$ are Lipschitz constants of $f_1$ and $f_2$, respectively. Similar to Example 1, if $b_1 = \max\{c_1, d_1\}$ and $b_2 = \max\{c_2, d_2\}$, then $b_1$ and $b_2$ are Lipschitz constants of both $f_1$ and $f_2$. So, we choose $\mu_k = \lambda_k = \frac{1}{\max\{b_1, b_2\}}$ for Equation (10). The remaining parameters of Equation (10) are given as in Example 1. Here, we take $R_1 = 1.5$, $r_1 = 1$, $k_1 = 1.1$, and $R_2 = 1.7$, $r_2 = 1$, $k_2 = 1.2$. In this setting, the solution set is $\Omega = \{x^*\}$ with $x^*(t) = 0$; see [41]. Algorithm 1 was tested along with Equation (10) by utilizing the stopping criterion $\frac{\|x_{k+1} - x_k\|}{\|x_{k+1}\|} < 10^{-6}$ with the starting points $x_0(t) = x_1(t) = 3\exp(t)$. The numerical results are reported in Table 3 for all combinations of the parameters $\rho_k \in \left\{0, \frac{1}{(k+1)^{1.2}}\right\}$, $\delta_k \in \left\{0, \frac{1}{(k+1)^{1.2}}\right\}$, and $\zeta_k \in \left\{0, \frac{1}{(k+1)^{1.2}}\right\}$, with the parameters $\xi_k = \sigma_k = 1$ fixed as in Example 1.
Table 3 indicates that the number of iterations and the CPU time for the concerned parameters $\rho_k$, $\delta_k$, and $\zeta_k$ are equal in all cases. Ultimately, when the parameters $\xi_k = 1$ and $\sigma_k = 1$, Algorithm 1 outperforms Equation (10) in terms of both the number of iterations and the CPU time.
To find the optimal values of the control parameters, the following numerical comparison results were obtained by accounting for the different choices of the control parameters $\xi_k \in \left\{1 + \frac{1}{k^{100}}, 1 + \frac{1}{k}, 1 + \frac{1}{\sqrt{k}}, 1 + \frac{1}{\log(k+1)}\right\}$ and $\sigma_k \in \left\{1 + \frac{1}{k^{100}}, 1 + \frac{1}{k}, 1 + \frac{1}{\sqrt{k}}, 1 + \frac{1}{\log(k+1)}\right\}$, with the values of the parameters $\rho_k = \delta_k = \zeta_k = \frac{1}{(k+1)^{1.2}}$ chosen as in Example 1.
From Table 4, the number of iterations in the case of the optimal parameters $\xi_k = 1 + \frac{1}{k^{100}}$ and $\sigma_k = 1 + \frac{1}{k^{100}}$, which converge to 1 the fastest, illustrates better behavior than the other cases.
Based on these observations, we deduce that suitable choices of the auxiliary parameters $\xi_k$, $\sigma_k$ and $\rho_k$, $\delta_k$, $\zeta_k$, as allowed in Algorithm 1, can improve the effectiveness of Algorithm 1 in both finite- and infinite-dimensional Hilbert spaces.

5. Conclusions

This paper introduces a Mann-type inertial accelerated subgradient extragradient algorithm with non-monotonic step sizes for finding the minimum-norm solution of split equilibrium and fixed point problems involving pseudomonotone and Lipschitz-type continuous bifunctions and nonexpansive mappings within the context of real Hilbert spaces. By combining the Mann-type and inertial methods with the accelerated subgradient extragradient method, we propose a sequence that converges strongly to the minimum-norm solution of the split equilibrium and fixed point problems under sufficient conditions on the control sequences of the parameters, without requiring prior knowledge of the Lipschitz constants of the bifunctions or of the operator norm of the bounded linear operator. Numerical experiments were conducted to demonstrate the efficacy of the proposed algorithm in both finite- and infinite-dimensional Hilbert spaces. The results confirm that the fine-tuning of auxiliary control parameters within algorithms is a promising and important research direction, as it tends to lead to significant improvements in performance.

Author Contributions

Conceptualization, M.K., K.K. and N.P.; methodology, M.K., K.K. and N.P.; software, M.K., K.K. and N.P.; formal analysis, M.K., K.K. and N.P.; investigation, M.K., K.K. and N.P.; writing—original draft preparation, M.K., K.K. and N.P.; writing—review and editing, M.K., K.K. and N.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Naresuan University (NU) and the National Science, Research and Innovation Fund (NSRF), Grant No. R2567B011. Narin Petrot received funding support from the NSRF via the Program Management Unit for Human Resources and Institutional Development, Research and Innovation, Grant No. B41G670027.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Table 1. Numerical performance for Example 1 with $\xi_k = \sigma_k = 1$ for all $k \in \mathbb{N}$, under different choices of the parameters $\rho_k$, $\delta_k$, and $\zeta_k$. Each cell reports Iter / Time.

| Algorithm 1 | $\rho_k = 0$, $\delta_k = 0$ | $\rho_k = 0$, $\delta_k = \frac{1}{(k+1)^{1.2}}$ | $\rho_k = \frac{1}{(k+1)^{1.2}}$, $\delta_k = 0$ | $\rho_k = \delta_k = \frac{1}{(k+1)^{1.2}}$ | $\rho_k$ by Equation (10) |
|---|---|---|---|---|---|
| $\zeta_k = 0$ | 16.8 / 0.07 | 16.6 / 0.06 | 16.4 / 0.06 | 16.3 / 0.05 | 22.5 / 0.08 |
| $\zeta_k = \frac{1}{(k+1)^{1.2}}$ | 15.8 / 0.05 | 15.7 / 0.05 | 15.6 / 0.04 | 15.4 / 0.04 | – |
Table 2. Numerical performance for Example 1 with $\rho_k = \delta_k = \zeta_k = \frac{1}{(k+1)^{1.2}}$ for all $k \in \mathbb{N}$, under different choices of the parameters $\xi_k$ and $\sigma_k$. Each cell reports the number of iterations.

| Algorithm 1 | $\xi_k = 1 + \frac{1}{k^{100}}$ | $\xi_k = 1 + \frac{1}{k}$ | $\xi_k = 1 + \frac{1}{\sqrt{k}}$ | $\xi_k = 1 + \frac{1}{\log(k+1)}$ |
|---|---|---|---|---|
| $\sigma_k = 1 + \frac{1}{k^{100}}$ | 13.1 | 13.9 | 13.9 | 15.3 |
| $\sigma_k = 1 + \frac{1}{k}$ | 13.1 | 13.8 | 14.0 | 15.3 |
| $\sigma_k = 1 + \frac{1}{\sqrt{k}}$ | 13.4 | 13.8 | 14.0 | 15.2 |
| $\sigma_k = 1 + \frac{1}{\log(k+1)}$ | 13.9 | 13.9 | 14.3 | 15.3 |
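Since Tables 1 and 2 (and, below, Tables 3 and 4 for Example 2) compare the algorithm's behavior across decaying and nearly constant parameter schedules, it may help to see how such schedules can be encoded. The snippet below is a hypothetical helper of our own, not the authors' experimental code; the names are invented, and the formulas simply mirror the choices appearing in the table headers.

```python
import math

# Hypothetical encodings of the parameter schedules compared in the tables;
# each maps an iteration index k >= 1 to the parameter value at step k.
schedules = {
    "rho_power":   lambda k: 1.0 / (k + 1) ** 1.2,        # rho_k, delta_k, zeta_k
    "xi_flat":     lambda k: 1.0 + 1.0 / k ** 100,        # essentially 1 for k >= 2
    "xi_inv_k":    lambda k: 1.0 + 1.0 / k,
    "xi_inv_sqrt": lambda k: 1.0 + 1.0 / math.sqrt(k),
    "xi_inv_log":  lambda k: 1.0 + 1.0 / math.log(k + 1), # slowest decay toward 1
}

for name, s in schedules.items():
    print(name, [round(s(k), 4) for k in (1, 10, 100)])
```

Slower decay toward the limiting value (for instance, the $1/\log(k+1)$ choices) corresponds to the larger iteration counts reported in Tables 2 and 4.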
Table 3. Numerical behavior for Example 2 with $\xi_k = \sigma_k = 1$ for all $k \in \mathbb{N}$, under different choices of the parameters $\rho_k$, $\delta_k$, and $\zeta_k$. Each cell reports Iter / Time.

| Algorithm 1 | $\rho_k = 0$, $\delta_k = 0$ | $\rho_k = 0$, $\delta_k = \frac{1}{(k+1)^{1.2}}$ | $\rho_k = \frac{1}{(k+1)^{1.2}}$, $\delta_k = 0$ | $\rho_k = \delta_k = \frac{1}{(k+1)^{1.2}}$ | $\rho_k$ by Equation (10) |
|---|---|---|---|---|---|
| $\zeta_k = 0$ | 9 / 0.05 | 9 / 0.05 | 9 / 0.05 | 9 / 0.05 | 17 / 0.11 |
| $\zeta_k = \frac{1}{(k+1)^{1.2}}$ | 9 / 0.05 | 9 / 0.05 | 9 / 0.05 | 9 / 0.05 | – |
Table 4. Numerical behavior for Example 2 with $\rho_k = \delta_k = \zeta_k = \frac{1}{(k+1)^{1.2}}$ for all $k \in \mathbb{N}$, under different choices of the parameters $\xi_k$ and $\sigma_k$. Each cell reports the number of iterations.

| Algorithm 1 | $\xi_k = 1 + \frac{1}{k^{100}}$ | $\xi_k = 1 + \frac{1}{k}$ | $\xi_k = 1 + \frac{1}{\sqrt{k}}$ | $\xi_k = 1 + \frac{1}{\log(k+1)}$ |
|---|---|---|---|---|
| $\sigma_k = 1 + \frac{1}{k^{100}}$ | 7 | 9 | 8 | 10 |
| $\sigma_k = 1 + \frac{1}{k}$ | 7 | 9 | 8 | 10 |
| $\sigma_k = 1 + \frac{1}{\sqrt{k}}$ | 9 | 8 | 8 | 10 |
| $\sigma_k = 1 + \frac{1}{\log(k+1)}$ | 8 | 9 | 10 | 11 |