Article

Novel Accelerated Cyclic Iterative Approximation for Hierarchical Variational Inequalities Constrained by Multiple-Set Split Common Fixed-Point Problems

Yao Ye and Heng-you Lan *
1 College of Mathematics and Statistics, Sichuan University of Science and Engineering, Zigong 643000, China
2 South Sichuan Center for Applied Mathematics, Zigong 643000, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(18), 2935; https://doi.org/10.3390/math12182935
Submission received: 23 August 2024 / Revised: 14 September 2024 / Accepted: 16 September 2024 / Published: 21 September 2024
(This article belongs to the Special Issue Fixed Point, Optimization, and Applications II)

Abstract

In this paper, we investigate a class of hierarchical variational inequality problems (HVIPs), i.e., strongly monotone variational inequality problems defined on the solution set of multiple-set split common fixed-point problems, with quasi-pseudocontractive mappings in real Hilbert spaces; special cases of such problems arise in many important engineering applications, such as image recognition, signal processing, and machine learning. In order to solve HVIPs of potential application value, inspired by the primal-dual algorithm, we propose a novel accelerated cyclic iterative algorithm that combines the inertial method with a correction term and a self-adaptive step-size technique. Our approach eliminates the need for prior knowledge of the norm of the bounded linear operator. Under appropriate assumptions, we establish strong convergence of the algorithm. Finally, we apply our novel iterative approximation to solve multiple-set split feasibility problems and verify the effectiveness of the proposed iterative algorithm through numerical results.

1. Introduction

Let $F : H \to H$ be a mapping that is Lipschitz continuous and strongly monotone, and for a mapping $S$, let $\operatorname{Fix}(S)$ denote its fixed-point set. For $i \in \{1, 2, \ldots, p\}$, suppose that $S_i : H \to H$ are nonlinear mappings such that $\Gamma = \bigcap_{i=1}^{p} \operatorname{Fix}(S_i) \neq \emptyset$. The variational inequality problem defined on the solution set of a common fixed-point problem, also known as the hierarchical variational inequality problem (HVIP; see [1]), is defined as follows:
$$\text{Find } \vartheta^* \in \bigcap_{i=1}^{p} \operatorname{Fix}(S_i) \text{ such that } \langle F\vartheta^*, \varsigma - \vartheta^* \rangle \geq 0, \quad \forall \varsigma \in \Gamma,$$
where $\langle \cdot, \cdot \rangle$ denotes the inner product in the Hilbert space $H$ and $\Gamma$ represents the common fixed-point set. This type of HVIP is pivotal in various real-world applications, such as power control [2], optimal control [3], network resource allocation [4], machine learning [5], signal recovery, beamforming, bandwidth allocation, and so on [1,6,7]. The broad applicability of the HVIP has sparked significant research interest in recent years, leading to numerous studies exploring its various facets. For more detail, one can refer to [1,3,6] and the references cited therein.
As is well known, the inverse problem is a fundamental challenge in computational mathematics with a wide range of applications. Over recent decades, the study of inverse problems has developed rapidly, finding applications in diverse fields such as computer vision, tomography, machine learning, physics, medical imaging, remote sensing, statistics, ocean acoustics, aviation, and geography. In 1994, Censor and Elfving [8] introduced the split feasibility problem (SFP) as a model to address certain types of inverse problems. Specifically, let $H_1$ and $H_2$ be two real Hilbert spaces. Then, the SFP can be formulated as
$$\text{Find } \vartheta^* \in C \text{ such that } A\vartheta^* \in Q,$$
where $C \subseteq H_1$ and $Q \subseteq H_2$ are nonempty closed convex subsets, and $A : H_1 \to H_2$ is a bounded linear operator. The SFP has become a crucial tool in a range of applications, such as computed tomography, image restoration, signal processing, and intensity-modulated radiation therapy (IMRT) [9,10,11,12,13,14,15,16,17,18,19,20,21]. Later, motivated by challenges in inverse problems related to IMRT, Censor et al. [10] generalized the SFP to multiple sets and proposed the following multiple-set split feasibility problem (MSFP): Find a point $\vartheta^*$ such that
$$\vartheta^* \in \bigcap_{i=1}^{p} C_i \quad \text{such that} \quad A\vartheta^* \in \bigcap_{j=1}^{r} Q_j, \tag{1}$$
where $p$ and $r$ are integers with $p, r \geq 1$, and $\{C_i\}_{i=1}^{p}$ and $\{Q_j\}_{j=1}^{r}$ are nonempty, closed, convex subsets of the Hilbert spaces $H_1$ and $H_2$, respectively. When $p = r = 1$, the MSFP (1) reduces to the SFP. As an extension of the convex feasibility problem, the SFP, and the MSFP, split common fixed-point problems (SCFPs) were later introduced by Censor and Segal [12] in 2009. SCFPs seek to find $\vartheta^* \in H_1$ such that
$$\vartheta^* \in \operatorname{Fix}(U) \quad \text{and} \quad B\vartheta^* \in \operatorname{Fix}(T),$$
where $B : H_1 \to H_2$ is a bounded linear operator, and $U : H_1 \to H_1$ and $T : H_2 \to H_2$ are two nonlinear mappings. The SCFP has attracted significant research interest owing to its wide array of applications, such as signal processing, image reconstruction, IMRT, inverse problem modeling, and electron microscopy [12,19].
Next, we turn our attention to a class of multiple-set split common fixed-point problems (MSCFPs), which generalize SCFPs and are recognized for their extensive applications in fields such as image reconstruction, computed tomography, and radiotherapy treatment planning [8,9,10,11]. In recent years, they have attracted considerable interest from researchers because of their broad applicability (see [1,22,23]). Formally, the MSCFP is to find $\vartheta^* \in H_1$ such that
$$\vartheta^* \in \bigcap_{i=1}^{p} \operatorname{Fix}(S_i), \qquad A\vartheta^* \in \bigcap_{j=1}^{r} \operatorname{Fix}(T_j), \tag{2}$$
where $A : H_1 \to H_2$ is a bounded linear operator, and, for $1 \leq i \leq p$ and $1 \leq j \leq r$, $\operatorname{Fix}(S_i)$ and $\operatorname{Fix}(T_j)$ denote the fixed-point sets of the nonlinear mappings $S_i : H_1 \to H_1$ and $T_j : H_2 \to H_2$, respectively. Notably, when $p = r = 1$, the MSCFP reduces to an SCFP. Furthermore, if $S_i$ and $T_j$ are, respectively, the projection operators onto nonempty closed convex subsets $C_i$ and $Q_j$, the MSCFP (2) simplifies to the MSFP (1).
On the other hand, in order to address the MSCFP (2), Wang and Xu [24] introduced a cyclic iterative algorithm designed to solve MSCFPs for directed operators, as follows:
$$\vartheta_{n+1} = U_{[n]_1}\big(\vartheta_n + \gamma A^*(T_{[n]_2} - I)A\vartheta_n\big), \tag{3}$$
where $0 < \gamma < 2/\rho(A^*A)$, $[n]_1 := n \ (\mathrm{mod}\ p)$, and $[n]_2 := n \ (\mathrm{mod}\ r)$. They established the weak convergence of the sequence $\{\vartheta_n\}$.
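To make the cyclic structure of (3) concrete, the following is a minimal Python sketch, assuming the directed operators $U_i$ and $T_j$ are metric projections onto half-spaces; the data layout, helper names, and step size are illustrative assumptions, not code from the paper.

```python
# Sketch of the cyclic iteration (3) for directed operators taken to be
# half-space projections (illustrative assumption); gamma should satisfy
# 0 < gamma < 2 / rho(A^T A).
import numpy as np

def proj_halfspace(x, a, b):
    """Projection of x onto the half-space {y : <a, y> <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def cyclic_iteration(x0, A, C, Q, gamma, iters=200):
    """x_{n+1} = U_{[n]_1}( x_n + gamma * A^T (T_{[n]_2} - I) A x_n )."""
    p, r = len(C), len(Q)
    x = x0.copy()
    for n in range(iters):
        aC, bC = C[n % p]          # cyclically chosen set C_{[n]_1}
        aQ, bQ = Q[n % r]          # cyclically chosen set Q_{[n]_2}
        Ax = A @ x
        y = x + gamma * (A.T @ (proj_halfspace(Ax, aQ, bQ) - Ax))
        x = proj_halfspace(y, aC, bC)
    return x
```

The point of the sketch is the index bookkeeping: at each step only one pair of operators is applied, chosen cyclically via `n % p` and `n % r`.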
It is important to note that many existing algorithms rely on the calculation of operator norms to determine an appropriate step size. However, in practical applications, obtaining the operator norm is often challenging due to its complexity and high computational cost. While selecting such a step size may be theoretically optimal, it poses significant difficulties in practice and complicates the implementation of these algorithms. To overcome this difficulty, Gupta et al. [22] and Zhao et al. [23] introduced cyclic iterative algorithms whose step sizes do not depend on the operator norm. In particular, the algorithm proposed by Zhao et al. [23] is defined as follows:
$$\begin{cases} \varsigma_n = \vartheta_n + a_n(\vartheta_n - \vartheta_{n-1}), \\ v_n = \varsigma_n - \gamma_n A^*(I - T_n)A\varsigma_n, \\ \omega_{n+1} = (I - U_n)v_n + (1 - \lambda)\omega_n, \\ \vartheta_{n+1} = v_n - \lambda\omega_{n+1}, \end{cases}$$
where the step size $\gamma_n$ is determined by
$$\gamma_n := \begin{cases} \dfrac{\rho_n\|(I - T_n)A\varsigma_n\|^2}{\|A^*(I - T_n)A\varsigma_n\|^2}, & (I - T_n)A\varsigma_n \neq 0, \\ \gamma, & (I - T_n)A\varsigma_n = 0, \end{cases}$$
with $\gamma > 0$ and with $U$ and $T$ being quasi-nonexpansive operators. Under suitable conditions, the sequence $\{\vartheta_n\}$ produced by this algorithm converges weakly to a solution of the MSCFP, offering a practical advantage by eliminating the need to compute the operator norm.
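The self-adaptive rule above only needs the current residual, not $\|A\|$. Below is a hedged Python sketch of that rule; the function name and the extra zero-denominator guard are my own illustrative additions.

```python
# Sketch of the norm-free step-size rule: gamma_n is computed from the
# residual resid = (I - T_n) A varsigma_n; rho_n and gamma_fallback are
# user-chosen parameters (names are illustrative).
import numpy as np

def adaptive_step(A, resid, rho_n, gamma_fallback):
    if np.linalg.norm(resid) == 0.0:
        return gamma_fallback
    At_resid = A.T @ resid
    if np.linalg.norm(At_resid) == 0.0:     # guard added to avoid division by zero
        return gamma_fallback
    return rho_n * np.linalg.norm(resid) ** 2 / np.linalg.norm(At_resid) ** 2
```

The design point is that the quotient adapts automatically: when the residual is large relative to its back-projected image, the step grows, and no spectral information about $A$ is ever required.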
Moreover, over recent years, there has been significant activity and progress in the field of MSCFPs, driven by the need to broaden their applicability to more general operators. Researchers have extended the framework to include various types of operators, such as quasi-nonexpansive mappings, firmly quasi-nonexpansive mappings [23], demicontractive mappings [22], and directed mappings [25]. These advancements have allowed for more robust and versatile solutions to MSCFPs across different application domains. In this paper, we focus on quasi-pseudocontractive operators, which further expand the scope of MSCFPs to encompass a broader class of nonlinear mappings, including demicontractive mappings, directed mappings, and firmly quasi-nonexpansive mappings (see [26]). By examining quasi-pseudocontractive operators, we aim to provide new insights and extend the applicability of MSCFPs to a wider range of practical problems.
Recently, there has been an increasing focus on constructing iterative schemes that achieve faster convergence rates, particularly in the context of solving fixed-point and optimization problems. Inertial techniques, which discretely simulate second-order dissipative dynamical systems, have gained recognition for their ability to accelerate the convergence of iterative methods. The most prevalent method in this category is single-step inertial extrapolation, given by $\vartheta_n + \Theta_n(\vartheta_n - \vartheta_{n-1})$. Since the introduction of inertial-type algorithms, many researchers have incorporated the inertial term $\Theta_n(\vartheta_n - \vartheta_{n-1})$ into various iterative schemes, such as the Mann, Krasnoselskii, Halpern, and viscosity methods, to approximate solutions of fixed-point and optimization problems. While most studies have established weak convergence results, achieving strong convergence remains challenging and relatively rare. Recently, Kim [27] and Maingé [28] explored the inertial extrapolation technique with an extra correction term in the context of optimization problems. This correction term plays a crucial role in enhancing the acceleration of the algorithm, and their studies have demonstrated promising weak convergence results, paving the way for further exploration in this direction.
Inspired by recent advancements in iterative methods and motivated by the need for more efficient algorithms, the purpose of this paper is to explore a class of novel self-adaptive cyclic iterative algorithms for approximating solutions of HVIPs for quasi-pseudocontractive mappings. Our iterative approximation integrates the inertial approach with correction terms and the primal-dual approach, enhancing the rate of convergence of the iterative process. A key feature of the iterative approximation algorithms presented in this paper is their use of a self-adaptive step-size strategy, which can be implemented without prior knowledge of the operator norm, thus making them more practical for real-world applications. By imposing appropriate control conditions on the relevant parameters, we establish that the iterates converge strongly to the unique solution of the hierarchical variational inequality problem under consideration. We further apply our theoretical results to the multiple-set split feasibility problem, demonstrating the broad applicability of our approach. To confirm the effectiveness of the proposed algorithm, we provide numerical examples that illustrate its superior performance in comparison with existing methods.

2. Preliminaries

In this paper, the inner product is denoted by $\langle\cdot,\cdot\rangle$ and the norm by $\|\cdot\|$. The identity operator on the Hilbert space $H$ is denoted by $I$. We represent the fixed-point set of an operator $T$ as $\operatorname{Fix}(T)$. Strong convergence is indicated by $\to$, while weak convergence is represented by $\rightharpoonup$. The weak $\omega$-limit set of the sequence $\{\vartheta_n\}$ is denoted by $\omega_w(\vartheta_n)$.
Definition 1. 
Let $H$ be a real Hilbert space. Then, for all $\vartheta, \varsigma \in H$ and $\alpha \in (0,1)$, one has
(i) $2\langle\vartheta, \varsigma\rangle = \|\vartheta\|^2 + \|\varsigma\|^2 - \|\vartheta - \varsigma\|^2 = \|\vartheta + \varsigma\|^2 - \|\vartheta\|^2 - \|\varsigma\|^2$;
(ii) $\|\alpha\vartheta + (1-\alpha)\varsigma\|^2 = \alpha\|\vartheta\|^2 + (1-\alpha)\|\varsigma\|^2 - \alpha(1-\alpha)\|\vartheta - \varsigma\|^2$;
(iii) $\|\vartheta + \varsigma\|^2 \leq \|\vartheta\|^2 + 2\langle\varsigma, \vartheta + \varsigma\rangle$.
Definition 2. 
A mapping $F : H \to H$ is termed $l$-Lipschitz continuous provided that there exists a constant $l > 0$ such that
$$\|F\vartheta - F\varsigma\| \leq l\|\vartheta - \varsigma\|, \quad \forall \vartheta, \varsigma \in H.$$
The mapping $F$ is referred to as $\tau$-strongly monotone if a constant $\tau > 0$ can be found such that
$$\langle F\vartheta - F\varsigma, \vartheta - \varsigma\rangle \geq \tau\|\vartheta - \varsigma\|^2, \quad \forall \vartheta, \varsigma \in H.$$
Lemma 1 
([29]). Let $T : H \to H$ be an $l$-Lipschitzian mapping with $l \geq 1$. Denote
$$K := (1-\xi)I + \xi T\big((1-\eta)I + \eta T\big).$$
If $0 < \xi < \eta < \frac{1}{1+\sqrt{1+l^2}}$, then the following conclusions hold:
(i) $\operatorname{Fix}(T) = \operatorname{Fix}\big(T((1-\eta)I + \eta T)\big) = \operatorname{Fix}(K)$;
(ii) If $I - T$ is demiclosed at 0, then $I - K$ is also demiclosed at 0;
(iii) In addition, if $T : H \to H$ is quasi-pseudocontractive, then the mapping $K$ is quasi-nonexpansive, that is,
$$\|K\vartheta - \vartheta^*\| \leq \|\vartheta - \vartheta^*\|, \quad \forall \vartheta \in H,\ \vartheta^* \in \operatorname{Fix}(T) = \operatorname{Fix}(K).$$
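As a small numerical illustration of Lemma 1 (not from the paper), the Python snippet below builds $K$ from a concrete $2$-Lipschitz quasi-pseudocontractive map $T(x) = -2x$ with $\operatorname{Fix}(T) = \{0\}$; the parameter values are assumptions chosen to satisfy $0 < \xi < \eta < 1/(1+\sqrt{1+l^2})$.

```python
# Illustration of Lemma 1: K = (1 - xi) I + xi T((1 - eta) I + eta T) is
# quasi-nonexpansive for the quasi-pseudocontractive map T(x) = -2x.
import numpy as np

l = 2.0
T = lambda x: -2.0 * x
bound = 1.0 / (1.0 + np.sqrt(1.0 + l ** 2))      # ~0.309
xi, eta = 0.1, 0.2                                # 0 < xi < eta < bound

def K(x):
    return (1 - xi) * x + xi * T((1 - eta) * x + eta * T(x))

rng = np.random.default_rng(0)
for x in rng.standard_normal(5):
    # quasi-nonexpansiveness with respect to the fixed point 0: |K(x)| <= |x|
    assert abs(K(x)) <= abs(x) + 1e-12
print("K(1.0) =", K(1.0))   # here K(x) = 0.82 x, so the fixed-point set is preserved
```

Even though $T$ itself expands distances, the two-layer composition shrinks them toward the fixed point, which is exactly the property the convergence analysis relies on.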
Lemma 2. 
Let $H$ be a real Hilbert space, and let $T : H \to H$ be both $l$-Lipschitzian and quasi-pseudocontractive with coefficient $l \geq 1$. Let $K := (1-\xi)I + \xi T((1-\eta)I + \eta T)$, where $0 < \xi < \eta < \frac{1}{1+\sqrt{1+l^2}}$, and set $K_\mu = (1-\mu)I + \mu K$ for $\mu \in (0,1)$. Then, for all $\vartheta \in H$ and $\vartheta^* \in \operatorname{Fix}(T)$, we have the following results:
(i) $\langle\vartheta - K\vartheta, \vartheta - \vartheta^*\rangle \geq \frac{1}{2}\|\vartheta - K\vartheta\|^2$ and $\langle\vartheta - K\vartheta, \vartheta^* - K\vartheta\rangle \leq \frac{1}{2}\|\vartheta - K\vartheta\|^2$;
(ii) $\|K_\mu\vartheta - \vartheta^*\|^2 \leq \|\vartheta - \vartheta^*\|^2 - \mu(1-\mu)\|K\vartheta - \vartheta\|^2$;
(iii) $\langle\vartheta - K_\mu\vartheta, \vartheta - \vartheta^*\rangle \geq \frac{\mu}{2}\|K\vartheta - \vartheta\|^2$.
Proof. 
By Lemma 1, it is known that $K$ is quasi-nonexpansive and satisfies $\|K\vartheta - \vartheta^*\| \leq \|\vartheta - \vartheta^*\|$ for all $\vartheta \in H$ and $\vartheta^* \in \operatorname{Fix}(T) = \operatorname{Fix}(K)$.
(i) Combining the classic identity $\langle\vartheta, \varsigma\rangle = -\frac{1}{2}\|\vartheta - \varsigma\|^2 + \frac{1}{2}\|\vartheta\|^2 + \frac{1}{2}\|\varsigma\|^2$, we have
$$\langle\vartheta - K\vartheta, \vartheta - \vartheta^*\rangle = -\tfrac{1}{2}\|K\vartheta - \vartheta^*\|^2 + \tfrac{1}{2}\|\vartheta - K\vartheta\|^2 + \tfrac{1}{2}\|\vartheta - \vartheta^*\|^2 \geq -\tfrac{1}{2}\|\vartheta - \vartheta^*\|^2 + \tfrac{1}{2}\|\vartheta - K\vartheta\|^2 + \tfrac{1}{2}\|\vartheta - \vartheta^*\|^2 = \tfrac{1}{2}\|\vartheta - K\vartheta\|^2.$$
Similarly, we can obtain $\langle\vartheta - K\vartheta, \vartheta^* - K\vartheta\rangle \leq \frac{1}{2}\|\vartheta - K\vartheta\|^2$.
(ii) From (i), we know that
$$\|K_\mu\vartheta - \vartheta^*\|^2 = \|[(1-\mu)I + \mu K]\vartheta - \vartheta^*\|^2 = \|\vartheta - \vartheta^*\|^2 - 2\mu\langle\vartheta - \vartheta^*, \vartheta - K\vartheta\rangle + \mu^2\|K\vartheta - \vartheta\|^2 \leq \|\vartheta - \vartheta^*\|^2 - \mu(1-\mu)\|K\vartheta - \vartheta\|^2.$$
(iii) With (i), we have
$$\langle\vartheta - K_\mu\vartheta, \vartheta - \vartheta^*\rangle = \mu\langle\vartheta - K\vartheta, \vartheta - \vartheta^*\rangle \geq \tfrac{\mu}{2}\|K\vartheta - \vartheta\|^2.$$
   □
Remark 1. 
Let $K_\mu = (1-\mu)I + \mu K$ for $\mu \in (0,1)$, where $K := (1-\xi)I + \xi T((1-\eta)I + \eta T)$ with $0 < \xi < \eta < \frac{1}{1+\sqrt{1+l^2}}$, and let $T : H \to H$ be both $l$-Lipschitzian and quasi-pseudocontractive with $l \geq 1$. We have $\operatorname{Fix}(K_\mu) = \operatorname{Fix}(K)$ and $\|K_\mu\vartheta - \vartheta\|^2 = \mu^2\|K\vartheta - \vartheta\|^2$. It follows from (ii) of Lemma 2 that $\|K_\mu\vartheta - \vartheta^*\|^2 \leq \|\vartheta - \vartheta^*\|^2 - \frac{1-\mu}{\mu}\|K_\mu\vartheta - \vartheta\|^2$, which implies that $K_\mu$ is firmly quasi-nonexpansive when $\mu = \frac{1}{2}$. On the other hand, if $\hat{K}$ is a firmly quasi-nonexpansive operator, we can easily obtain $\hat{K} = \frac{1}{2}I + \frac{1}{2}K$, where $K$ is a quasi-nonexpansive operator.
Lemma 3. 
Let $T : H \to H$ be both $l$-Lipschitzian and quasi-pseudocontractive with $l \geq 1$, and let $\mu \in (0,1)$. If $K := (1-\xi)I + \xi T((1-\eta)I + \eta T)$ with $0 < \xi < \eta < \frac{1}{1+\sqrt{1+l^2}}$ and $K_\mu = (1-\mu)I + \mu K$, then $\|(I - K_\mu)\vartheta\|^2 \leq 2\mu\langle\vartheta - \vartheta^*, (I - K_\mu)\vartheta\rangle$ for all $\vartheta \in H$, $\vartheta^* \in \operatorname{Fix}(T)$.
Proof. 
By Lemma 1, it is known that $K$ is quasi-nonexpansive and that $\vartheta^* \in \operatorname{Fix}(T) = \operatorname{Fix}(K)$. The conclusion then follows easily from (iii) of Lemma 2.    □
Lemma 4 
([30]). Let the operator $F : H \to H$ be $l$-Lipschitz continuous and $\delta$-strongly monotone with constants $l > 0$, $\delta > 0$. Assume that $\epsilon \in \left(0, \frac{2\delta}{l^2}\right)$. Define $G_\mu = I - \mu\epsilon F$ for $\mu \in (0,1)$. Then, for all $\vartheta, \varsigma \in H$,
$$\|G_\mu\vartheta - G_\mu\varsigma\| \leq (1 - \mu\chi)\|\vartheta - \varsigma\|$$
holds, where $\chi = 1 - \sqrt{1 - \epsilon(2\delta - \epsilon l^2)} \in (0,1)$.
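The contraction factor of Lemma 4 can be checked numerically; the sketch below does so for a linear, strongly monotone $F$. The matrix, $\epsilon$, and $\mu$ are assumptions made for the example only.

```python
# Numerical check of Lemma 4 for F(x) = diag(1, 2) x, which is
# delta = 1 strongly monotone and l = 2 Lipschitz (illustrative choice).
import numpy as np

Fmat = np.diag([1.0, 2.0])
delta, l = 1.0, 2.0
eps = 0.4                          # eps in (0, 2*delta/l**2) = (0, 0.5)
chi = 1.0 - np.sqrt(1.0 - eps * (2.0 * delta - eps * l ** 2))
mu = 0.7

G = lambda v: v - mu * eps * (Fmat @ v)     # G_mu = I - mu*eps*F
rng = np.random.default_rng(1)
x, y = rng.standard_normal(2), rng.standard_normal(2)
lhs = np.linalg.norm(G(x) - G(y))
rhs = (1.0 - mu * chi) * np.linalg.norm(x - y)
assert lhs <= rhs + 1e-12
print(f"observed factor {lhs / np.linalg.norm(x - y):.4f} <= guaranteed {1 - mu * chi:.4f}")
```

This contraction is exactly what drives the strong convergence of the hybrid steepest-descent step $\vartheta_{n+1} = (I_1 - \sigma_n F)\varsigma_n$ used later in Algorithm 1.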
Lemma 5 
([1]). Suppose that $\{\vartheta_n\}$ is a sequence of non-negative real numbers satisfying
$$\vartheta_{n+1} \leq (1 - \tau_n)\vartheta_n + \tau_n\psi_n, \quad n \geq 0, \qquad \vartheta_{n+1} \leq \vartheta_n - \varrho_n + \theta_n, \quad n \geq 0,$$
where $\tau_n \in (0,1)$, $\varrho_n \geq 0$, and $\{\psi_n\}$ and $\{\theta_n\}$ are two sequences in $\mathbb{R}$ such that
(i) $\sum_{n=1}^{\infty}\tau_n = \infty$; (ii) $\lim_{n\to\infty}\theta_n = 0$;
(iii) $\lim_{l\to\infty}\varrho_{n_l} = 0$ implies $\limsup_{l\to\infty}\psi_{n_l} \leq 0$ for any subsequence $\{n_l\} \subset \{n\}$.
Then, $\lim_{n\to\infty}\vartheta_n = 0$.

3. Main Results

In this section, we present our novel accelerated cyclic iterative approximation algorithm and convergence analysis. We begin by outlining the assumptions necessary for achieving strong convergence.
Assumption 1. 
Let H 1 and H 2 be real Hilbert spaces. Additionally, we assume that the following conditions are satisfied:
(i) $A : H_1 \to H_2$ is a nonzero bounded linear operator with adjoint operator $A^*$; $S_i : H_1 \to H_1$ is an $l_{1i}$-Lipschitzian quasi-pseudocontractive operator with $1 < l_{1i} \leq l_1$, and $T_j : H_2 \to H_2$ is an $l_{2j}$-Lipschitzian quasi-pseudocontractive operator with $1 < l_{2j} \leq l_2$, such that $I_1 - S_i$ and $I_2 - T_j$ are demiclosed at 0 for all $1 \leq i \leq p$ and $1 \leq j \leq r$. Here, $I_1$ and $I_2$ are the identity operators on $H_1$ and $H_2$, respectively, and $F : H_1 \to H_1$ is $l$-Lipschitz continuous and $\delta$-strongly monotone.
(ii) $\Omega := \big\{\upsilon \in H_1 : \upsilon \in \bigcap_{i=1}^{p}\operatorname{Fix}(S_i) \ \text{and} \ A\upsilon \in \bigcap_{j=1}^{r}\operatorname{Fix}(T_j)\big\} \neq \emptyset$.
(iii) For all $n \geq 1$, $J_n := (1 - \kappa_n)I_1 + \kappa_n J_{n,[n]_1}$ and $D_n := (1 - \iota_n)I_2 + \iota_n D_{n,[n]_2}$, where $\{\kappa_n\} \subset (0, \frac{1}{2}]$ with $\kappa = \sup_{n\geq 1}\{\kappa_n\}$, $\{\iota_n\} \subset (0,1)$ with $\iota = \sup_{n\geq 1}\{\iota_n\}$, $[n]_1 := n\ (\mathrm{mod}\ p) + 1$, $[n]_2 := n\ (\mathrm{mod}\ r) + 1$, and, for $i \in \{1,2,\ldots,p\}$ and $j \in \{1,2,\ldots,r\}$, $J_{n,i} := (1 - \xi_n)I_1 + \xi_n S_i((1 - \eta_n)I_1 + \eta_n S_i)$ and $D_{n,j} := (1 - \xi_n)I_2 + \xi_n T_j((1 - \eta_n)I_2 + \eta_n T_j)$ with
$$0 < \xi_n < \eta_n < \frac{1}{1 + \sqrt{1 + l_3^2}}$$
for $l_3 = \max\{l_1, l_2\}$.
(iv) $\{\varepsilon_n\}$ and $\{\rho_n\}$ are two positive sequences such that $\lim_{n\to\infty}\frac{\phi_n}{\sigma_n} = 0$, $\lim_{n\to\infty}\frac{\varepsilon_n}{\sigma_n} = 0$, and $\lim_{n\to\infty}\frac{\rho_n}{\sigma_n} = 0$, where $\{\phi_n\} \subset [0,1]$ and $\{\sigma_n\} \subset (0,1)$ satisfy $\lim_{n\to\infty}\sigma_n = 0$ and $\sum_{n=0}^{\infty}\sigma_n = \infty$, and $0 \leq \liminf_{n\to\infty}\rho_n \leq \limsup_{n\to\infty}\rho_n < \frac{1}{\iota}$.
Next, we propose the following novel iterative approximation (i.e., Algorithm 1) to solve an HVIP constrained by the MSCFP (2).
Remark 2. 
Since $\Omega$ in Assumption 1 (ii) is a nonempty closed convex set, the variational inequality
$$\langle F\vartheta^*, z - \vartheta^*\rangle \geq 0, \quad \forall z \in \Omega, \tag{4}$$
has a unique solution by Assumption 1 (i).
Remark 3. 
It can be seen that the newly proposed adaptive cyclic iterative algorithm (i.e., Algorithm 1), distinct from that of Zhao et al. [23], combines the inertial approach with a correction term and the primal-dual idea, without requiring prior knowledge of the operator norm. This makes the algorithm easier to implement in practice and helps to improve the convergence speed of the iterative process. In addition, we have conducted our research using the more general class of quasi-pseudocontractive operators, which expands the scope to various categories of nonlinear mappings, including demicontractive, directed, and firmly quasi-nonexpansive mappings [26].
Theorem 1. 
Suppose that $\{\vartheta_n\}$ is a sequence generated by Algorithm 1 under Assumption 1. Then, the sequence $\{\vartheta_n\}$ converges strongly to $\vartheta^* \in \Omega$, which is the unique solution to the HVIP
$$\langle F\vartheta^*, z - \vartheta^*\rangle \geq 0, \quad \forall z \in \Omega,$$
where $\Omega$ is defined by Assumption 1 (ii). This implies that $\vartheta^*$ is also a solution of the MSCFP (2).
Algorithm 1 
Initialization: Let $\alpha, \beta, \gamma > 0$. Choose sequences $\{\varepsilon_n\}$, $\{\rho_n\}$, $\{\phi_n\}$, and $\{\sigma_n\}$ such that Assumption 1 is satisfied, and give the initial points $\vartheta_0, \vartheta_1, \nu_0, \omega_0 \in H_1$.
Iterative Steps: For $n \geq 1$, based on the iterates $\vartheta_{n-1}$ and $\vartheta_n$, compute $\vartheta_{n+1}$ as follows:
Step 1: Calculate
$$\nu_n = \vartheta_n + \alpha_n(\vartheta_n - \vartheta_{n-1}) + \beta_n(\nu_{n-1} - \vartheta_{n-1}),$$
where $\alpha_n \in (0, \bar{\alpha}_n]$ and $\beta_n \in [0, \bar{\beta}_n]$ with
$$\bar{\alpha}_n = \begin{cases} \min\left\{\dfrac{\varepsilon_n}{\|\vartheta_n - \vartheta_{n-1}\| + \|\omega_n\|},\ \alpha\right\}, & \text{if } \vartheta_n \neq \vartheta_{n-1} \text{ or } \omega_n \neq 0, \\ \alpha, & \text{otherwise}, \end{cases}$$
and
$$\bar{\beta}_n = \begin{cases} \min\left\{\dfrac{\varepsilon_n}{\|\nu_{n-1} - \vartheta_{n-1}\| + \|\omega_n\|},\ \beta\right\}, & \text{if } \nu_{n-1} \neq \vartheta_{n-1} \text{ or } \omega_n \neq 0, \\ \beta, & \text{otherwise}. \end{cases}$$
Step 2: Evaluate
$$z_n = \nu_n - \gamma_n A^*(I_2 - D_n)A\nu_n, \qquad \omega_{n+1} = (I_1 - J_n)(z_n + \lambda_n\omega_n),$$
where $\lambda_n = \alpha_n + \beta_n$ and the step size $\gamma_n$ is selected as
$$\gamma_n := \begin{cases} \dfrac{\rho_n\|(I_2 - D_n)A\nu_n\|^2}{\|A^*(I_2 - D_n)A\nu_n\|^2}, & \text{if } (I_2 - D_n)A\nu_n \neq 0, \\ \gamma, & \text{otherwise}. \end{cases}$$
Step 3: Compute
$$\varsigma_n = (1 - \phi_n)z_n + \phi_n\omega_{n+1}.$$
Step 4: Compute
$$\vartheta_{n+1} = (I_1 - \sigma_n F)\varsigma_n.$$
Set $n := n + 1$ and go to Step 1.
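Before turning to the proof, the following is a condensed Python sketch of Algorithm 1 under stated assumptions: the cyclically selected operators $J_n$ and $D_n$ are supplied as callables (already including the $\kappa_n$, $\iota_n$ relaxation), $F$ is a user-supplied strongly monotone map, $\alpha_n$ is taken equal to $\bar{\alpha}_n$, and the default parameter sequences are illustrative, not prescribed by the paper.

```python
# Hedged sketch of Algorithm 1 (not the authors' code).
import numpy as np

def algorithm1(x0, x1, w0, A, J_ops, D_ops, F,
               alpha=0.9, beta=0.0, gamma=1.0,
               eps_n=lambda n: 1.0 / n**2,
               rho_n=lambda n: 1.0 / (n + 1)**0.7,
               sigma_n=lambda n: 1.0 / np.log(n + 2),
               phi_n=lambda n: 1.0 / np.log(n + 2)**1.1,
               iters=100):
    p, r = len(J_ops), len(D_ops)
    x_prev, x, nu_prev, w = x0.copy(), x1.copy(), x0.copy(), w0.copy()
    for n in range(1, iters + 1):
        Jn = J_ops[n % p]                     # cyclic choice of J_{n,[n]_1}-type operator
        Dn = D_ops[n % r]                     # cyclic choice of D_{n,[n]_2}-type operator
        # Step 1: double inertial extrapolation with correction term w
        d1 = np.linalg.norm(x - x_prev) + np.linalg.norm(w)
        d2 = np.linalg.norm(nu_prev - x_prev) + np.linalg.norm(w)
        a_n = min(eps_n(n) / d1, alpha) if d1 > 0 else alpha   # alpha_n = bar{alpha}_n
        b_n = min(eps_n(n) / d2, beta) if d2 > 0 else beta     # beta_n = bar{beta}_n
        nu = x + a_n * (x - x_prev) + b_n * (nu_prev - x_prev)
        # Step 2: self-adaptive step size, no operator norm required
        resid = A @ nu - Dn(A @ nu)           # (I_2 - D_n) A nu_n
        Ares = A.T @ resid
        if np.linalg.norm(resid) > 0 and np.linalg.norm(Ares) > 0:
            g_n = rho_n(n) * np.linalg.norm(resid)**2 / np.linalg.norm(Ares)**2
        else:
            g_n = gamma
        z = nu - g_n * Ares
        lam = a_n + b_n
        w_next = (z + lam * w) - Jn(z + lam * w)   # (I_1 - J_n)(z_n + lambda_n * w_n)
        # Steps 3-4: averaging, then hybrid steepest-descent step
        s = (1 - phi_n(n)) * z + phi_n(n) * w_next
        x_next = s - sigma_n(n) * F(s)             # (I_1 - sigma_n F) varsigma_n
        x_prev, x, nu_prev, w = x, x_next, nu, w_next
    return x
```

The sketch mirrors the four steps of the algorithm: all quantities needed for the step size come from the current residual, so $\|A\|$ never appears.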
Proof. 
First, we show that the sequence $\{\vartheta_n\}$ is bounded.
Take $\vartheta^* \in \Omega$; then $\vartheta^* \in \bigcap_{i=1}^{p}\operatorname{Fix}(S_i)$ and $A\vartheta^* \in \bigcap_{j=1}^{r}\operatorname{Fix}(T_j)$. For each $i \in \{1,2,\ldots,p\}$ and $j \in \{1,2,\ldots,r\}$, it follows from the definitions of $J_n$ and $D_n$, Assumption 1, Lemma 1, and Remark 1 that $\vartheta^* \in \bigcap_{n=1}^{\infty}\operatorname{Fix}(J_n)$ and $A\vartheta^* \in \bigcap_{n=1}^{\infty}\operatorname{Fix}(D_n)$.
According to Lemmas 1 and 3, we have
$$\langle \nu_n - \vartheta^*, A^*(I_2 - D_n)A\nu_n \rangle = \langle A\nu_n - A\vartheta^*, (I_2 - D_n)A\nu_n \rangle \geq \frac{1}{2\iota_n}\|(I_2 - D_n)A\nu_n\|^2 \geq \frac{1}{2\iota}\|(I_2 - D_n)A\nu_n\|^2.$$
From Algorithm 1 and Equation (8), it follows that
$$\|z_n - \vartheta^*\|^2 = \|\nu_n - \gamma_n A^*(I_2 - D_n)A\nu_n - \vartheta^*\|^2 = \|\nu_n - \vartheta^*\|^2 - 2\gamma_n\langle \nu_n - \vartheta^*, A^*(I_2 - D_n)A\nu_n \rangle + \gamma_n^2\|A^*(I_2 - D_n)A\nu_n\|^2 \leq \|\nu_n - \vartheta^*\|^2 - \frac{\gamma_n}{\iota}\|(I_2 - D_n)A\nu_n\|^2 + \gamma_n^2\|A^*(I_2 - D_n)A\nu_n\|^2.$$
For the case $(I_2 - D_n)A\nu_n = 0$, one obtains
$$\|z_n - \vartheta^*\|^2 \leq \|\nu_n - \vartheta^*\|^2 - \gamma_n\left[\frac{1}{\iota}\|(I_2 - D_n)A\nu_n\|^2 - \gamma_n\|A^*(I_2 - D_n)A\nu_n\|^2\right] = \|\nu_n - \vartheta^*\|^2.$$
Otherwise, we deduce from (7) and (9) that
$$\|z_n - \vartheta^*\|^2 \leq \|\nu_n - \vartheta^*\|^2 - \rho_n\left(\frac{1}{\iota} - \rho_n\right)\frac{\|(I_2 - D_n)A\nu_n\|^4}{\|A^*(I_2 - D_n)A\nu_n\|^2}.$$
By Assumption 1 (iv), (10), and (11), we see that
$$\|z_n - \vartheta^*\| \leq \|\nu_n - \vartheta^*\|.$$
By Lemma 3, we have
$$\|\omega_{n+1}\|^2 = \|(I_1 - J_n)(z_n + \lambda_n\omega_n) - (I_1 - J_n)\vartheta^*\|^2 \leq 2\kappa_n\langle \omega_{n+1}, z_n - \vartheta^* + \lambda_n\omega_n \rangle \leq \langle \omega_{n+1}, z_n - \vartheta^* + \lambda_n\omega_n \rangle \leq \|\omega_{n+1}\|\,\|z_n - \vartheta^* + \lambda_n\omega_n\|.$$
Then, we obtain
$$\|\omega_{n+1}\| \leq \|z_n - \vartheta^* + \lambda_n\omega_n\| \leq \|z_n - \vartheta^*\| + \lambda_n\|\omega_n\|.$$
It follows from Algorithm 1 and the triangle inequality that
$$\|\nu_n - \vartheta^*\| = \|\vartheta_n + \alpha_n(\vartheta_n - \vartheta_{n-1}) + \beta_n(\nu_{n-1} - \vartheta_{n-1}) - \vartheta^*\| \leq \|\vartheta_n - \vartheta^*\| + \alpha_n\|\vartheta_n - \vartheta_{n-1}\| + \beta_n\|\nu_{n-1} - \vartheta_{n-1}\|.$$
From (5), we have $\alpha_n(\|\vartheta_n - \vartheta_{n-1}\| + \|\omega_n\|) \leq \varepsilon_n$ for all $n$. This, combined with Assumption 1 (iv), leads to
$$\lim_{n\to\infty}\frac{\alpha_n}{\sigma_n}\big(\|\vartheta_n - \vartheta_{n-1}\| + \|\omega_n\|\big) = 0.$$
Following a similar reasoning from (6), we determine that
$$\lim_{n\to\infty}\frac{\beta_n}{\sigma_n}\big(\|\nu_{n-1} - \vartheta_{n-1}\| + \|\omega_n\|\big) = 0.$$
By (12)–(14), one knows that
$$\begin{aligned}\|\varsigma_n - \vartheta^*\| &= \|(1 - \phi_n)z_n + \phi_n\omega_{n+1} - \vartheta^*\| \leq (1 - \phi_n)\|z_n - \vartheta^*\| + \phi_n\|\omega_{n+1}\| + \phi_n\|\vartheta^*\| \\ &\leq (1 - \phi_n)\|z_n - \vartheta^*\| + \phi_n\big(\|z_n - \vartheta^*\| + \lambda_n\|\omega_n\|\big) + \phi_n\|\vartheta^*\| \leq \|\nu_n - \vartheta^*\| + \lambda_n\|\omega_n\| + \phi_n\|\vartheta^*\| \\ &\leq \|\vartheta_n - \vartheta^*\| + \alpha_n\|\vartheta_n - \vartheta_{n-1}\| + \beta_n\|\nu_{n-1} - \vartheta_{n-1}\| + \lambda_n\|\omega_n\| + \phi_n\|\vartheta^*\| \\ &= \|\vartheta_n - \vartheta^*\| + \sigma_n\frac{\alpha_n}{\sigma_n}\big(\|\vartheta_n - \vartheta_{n-1}\| + \|\omega_n\|\big) + \sigma_n\frac{\beta_n}{\sigma_n}\big(\|\nu_{n-1} - \vartheta_{n-1}\| + \|\omega_n\|\big) + \sigma_n\frac{\phi_n}{\sigma_n}\|\vartheta^*\| \\ &\leq \|\vartheta_n - \vartheta^*\| + 3\sigma_n M_1,\end{aligned}$$
where $M_1 = \sup_{n\in\mathbb{N}}\left\{\frac{\alpha_n}{\sigma_n}\big(\|\vartheta_n - \vartheta_{n-1}\| + \|\omega_n\|\big),\ \frac{\beta_n}{\sigma_n}\big(\|\nu_{n-1} - \vartheta_{n-1}\| + \|\omega_n\|\big),\ \frac{\phi_n}{\sigma_n}\|\vartheta^*\|\right\} < \infty$.
Consider $\epsilon \in \left(0, \frac{2\delta}{l^2}\right)$. Given that $\lim_{n\to\infty}\sigma_n = 0$, there exists $n_0 \in \mathbb{N}$ such that $\sigma_n < \epsilon$ for all $n > n_0$. Consequently, $\frac{\sigma_n}{\epsilon} \in (0,1)$. According to Lemma 4, for all $n > n_0$, one can deduce that
$$\|(I_1 - \sigma_n F)\varsigma_n - (I_1 - \sigma_n F)\vartheta^*\| = \left\|\left(I_1 - \frac{\sigma_n}{\epsilon}\cdot\epsilon F\right)\varsigma_n - \left(I_1 - \frac{\sigma_n}{\epsilon}\cdot\epsilon F\right)\vartheta^*\right\| \leq \left(1 - \frac{\sigma_n}{\epsilon}\chi\right)\|\varsigma_n - \vartheta^*\|,$$
where $\chi = 1 - \sqrt{1 - \epsilon(2\delta - \epsilon l^2)} \in (0,1)$. By applying Inequalities (15) and (16), we know that
$$\begin{aligned}\|\vartheta_{n+1} - \vartheta^*\| &= \|(I_1 - \sigma_n F)\varsigma_n - \vartheta^*\| = \|(I_1 - \sigma_n F)\varsigma_n - (I_1 - \sigma_n F)\vartheta^* - \sigma_n F\vartheta^*\| \\ &\leq \|(I_1 - \sigma_n F)\varsigma_n - (I_1 - \sigma_n F)\vartheta^*\| + \sigma_n\|F\vartheta^*\| \leq \left(1 - \frac{\sigma_n}{\epsilon}\chi\right)\|\varsigma_n - \vartheta^*\| + \sigma_n\|F\vartheta^*\| \\ &\leq \left(1 - \frac{\sigma_n}{\epsilon}\chi\right)\|\vartheta_n - \vartheta^*\| + 3\sigma_n M_1 + \sigma_n\|F\vartheta^*\| = \left(1 - \frac{\sigma_n}{\epsilon}\chi\right)\|\vartheta_n - \vartheta^*\| + \frac{\sigma_n\chi}{\epsilon}\cdot\frac{\epsilon(3M_1 + \|F\vartheta^*\|)}{\chi} \\ &\leq \max\left\{\|\vartheta_n - \vartheta^*\|,\ \frac{\epsilon(3M_1 + \|F\vartheta^*\|)}{\chi}\right\} \leq \cdots \leq \max\left\{\|\vartheta_{n_0} - \vartheta^*\|,\ \frac{\epsilon(3M_1 + \|F\vartheta^*\|)}{\chi}\right\}.\end{aligned}$$
Thus, the sequence $\{\vartheta_n\}$ is bounded. As a result, the sequences $\{\varsigma_n\}$, $\{\omega_{n+1}\}$, and $\{\nu_n\}$ are also bounded.
According to the definition of $\nu_n$, along with the Cauchy–Schwarz inequality, we obtain
$$\begin{aligned}\|\nu_n - \vartheta^*\|^2 &= \|\vartheta_n + \alpha_n(\vartheta_n - \vartheta_{n-1}) + \beta_n(\nu_{n-1} - \vartheta_{n-1}) - \vartheta^*\|^2 \\ &= \|\vartheta_n - \vartheta^*\|^2 + \alpha_n^2\|\vartheta_n - \vartheta_{n-1}\|^2 + \beta_n^2\|\nu_{n-1} - \vartheta_{n-1}\|^2 + 2\alpha_n\beta_n\langle \vartheta_n - \vartheta_{n-1}, \nu_{n-1} - \vartheta_{n-1}\rangle \\ &\quad + 2\alpha_n\langle \vartheta_n - \vartheta^*, \vartheta_n - \vartheta_{n-1}\rangle + 2\beta_n\langle \vartheta_n - \vartheta^*, \nu_{n-1} - \vartheta_{n-1}\rangle \\ &\leq \|\vartheta_n - \vartheta^*\|^2 + \alpha_n^2\|\vartheta_n - \vartheta_{n-1}\|^2 + \beta_n^2\|\nu_{n-1} - \vartheta_{n-1}\|^2 + 2\alpha_n\beta_n\|\vartheta_n - \vartheta_{n-1}\|\,\|\nu_{n-1} - \vartheta_{n-1}\| \\ &\quad + 2\alpha_n\|\vartheta_n - \vartheta^*\|\,\|\vartheta_n - \vartheta_{n-1}\| + 2\beta_n\|\vartheta_n - \vartheta^*\|\,\|\nu_{n-1} - \vartheta_{n-1}\| \\ &\leq \|\vartheta_n - \vartheta^*\|^2 + \beta_n\|\nu_{n-1} - \vartheta_{n-1}\|\big(2\|\vartheta_n - \vartheta^*\| + \beta_n\|\nu_{n-1} - \vartheta_{n-1}\| + \alpha_n\|\vartheta_n - \vartheta_{n-1}\|\big) \\ &\quad + \alpha_n\|\vartheta_n - \vartheta_{n-1}\|\big(\alpha_n\|\vartheta_n - \vartheta_{n-1}\| + 2\beta_n\|\nu_{n-1} - \vartheta_{n-1}\| + 2\|\vartheta_n - \vartheta^*\|\big).\end{aligned}$$
Since $\{\vartheta_n\}$, $\{\alpha_n\}$, $\{\beta_n\}$, and $\{\nu_n\}$ are bounded, there exists a constant $M_2 > 0$ such that
$$\|\nu_n - \vartheta^*\|^2 \leq \|\vartheta_n - \vartheta^*\|^2 + M_2\alpha_n\|\vartheta_n - \vartheta_{n-1}\| + M_2\beta_n\|\nu_{n-1} - \vartheta_{n-1}\|.$$
By Algorithm 1 and (12), we have
$$\|\varsigma_n - \vartheta^*\|^2 = \|(1 - \phi_n)z_n + \phi_n\omega_{n+1} - \vartheta^*\|^2 \leq \phi_n\|\omega_{n+1} - \vartheta^*\|^2 + (1 - \phi_n)\|z_n - \vartheta^*\|^2 \leq \|z_n - \vartheta^*\|^2 + \phi_n\|\omega_{n+1} - \vartheta^*\|^2 \leq \|\nu_n - \vartheta^*\|^2 + \phi_n\|\omega_{n+1} - \vartheta^*\|^2.$$
Utilizing (16) and (17), one arrives at the following for all $n > n_0$:
$$\begin{aligned}\|\vartheta_{n+1} - \vartheta^*\|^2 &= \|(I_1 - \sigma_n F)\varsigma_n - (I_1 - \sigma_n F)\vartheta^* - \sigma_n F\vartheta^*\|^2 \\ &\leq \|(I_1 - \sigma_n F)\varsigma_n - (I_1 - \sigma_n F)\vartheta^*\|^2 - 2\sigma_n\langle F\vartheta^*, \vartheta_{n+1} - \vartheta^*\rangle \\ &\leq \left(1 - \frac{\sigma_n}{\epsilon}\chi\right)^2\|\varsigma_n - \vartheta^*\|^2 + 2\sigma_n\langle F\vartheta^*, \vartheta^* - \vartheta_{n+1}\rangle \\ &\leq \left(1 - \frac{\sigma_n}{\epsilon}\chi\right)\|\nu_n - \vartheta^*\|^2 + 2\sigma_n\langle F\vartheta^*, \vartheta^* - \vartheta_{n+1}\rangle + \phi_n\|\omega_{n+1} - \vartheta^*\|^2 \\ &\leq \left(1 - \frac{\sigma_n}{\epsilon}\chi\right)\|\vartheta_n - \vartheta^*\|^2 + 2\sigma_n\langle F\vartheta^*, \vartheta^* - \vartheta_{n+1}\rangle + M_2\alpha_n\|\vartheta_n - \vartheta_{n-1}\| + M_2\beta_n\|\nu_{n-1} - \vartheta_{n-1}\| + \phi_n\|\omega_{n+1} - \vartheta^*\|^2 \\ &\leq \left(1 - \frac{\sigma_n}{\epsilon}\chi\right)\|\vartheta_n - \vartheta^*\|^2 + \frac{\sigma_n\chi}{\epsilon}\left[\frac{2\epsilon}{\chi}\langle F\vartheta^*, \vartheta^* - \vartheta_{n+1}\rangle + \frac{\epsilon\alpha_n M_2}{\sigma_n\chi}\|\vartheta_n - \vartheta_{n-1}\| + \frac{\epsilon\beta_n M_2}{\sigma_n\chi}\|\nu_{n-1} - \vartheta_{n-1}\| + \frac{\epsilon\phi_n}{\sigma_n\chi}\|\omega_{n+1} - \vartheta^*\|^2\right] \\ &= (1 - \tau_n)\|\vartheta_n - \vartheta^*\|^2 + \tau_n\psi_n,\end{aligned}$$
where
$$\tau_n = \frac{\sigma_n\chi}{\epsilon}, \qquad \psi_n = \frac{2\epsilon}{\chi}\langle F\vartheta^*, \vartheta^* - \vartheta_{n+1}\rangle + \frac{\epsilon\alpha_n M_2}{\sigma_n\chi}\|\vartheta_n - \vartheta_{n-1}\| + \frac{\epsilon\beta_n M_2}{\sigma_n\chi}\|\nu_{n-1} - \vartheta_{n-1}\| + \frac{\epsilon\phi_n}{\sigma_n\chi}\|\omega_{n+1} - \vartheta^*\|^2.$$
It is evident that $\tau_n \geq 0$ and $\sum_{n=1}^{\infty}\tau_n = \infty$. Given that the sequence $\{\vartheta_n\}$ is bounded, there exists a constant $M_3 > 0$ such that
$$2\langle F\vartheta^*, \vartheta^* - \vartheta_{n+1}\rangle \leq M_3.$$
Hence, by (16), we obtain
$$\begin{aligned}\|\vartheta_{n+1} - \vartheta^*\|^2 &= \|(I_1 - \sigma_n F)\varsigma_n - \vartheta^*\|^2 = \|(I_1 - \sigma_n F)\varsigma_n - (I_1 - \sigma_n F)\vartheta^* - \sigma_n F\vartheta^*\|^2 \\ &\leq \|(I_1 - \sigma_n F)\varsigma_n - (I_1 - \sigma_n F)\vartheta^*\|^2 - 2\sigma_n\langle F\vartheta^*, \vartheta_{n+1} - \vartheta^*\rangle \\ &\leq \left(1 - \frac{\sigma_n}{\epsilon}\chi\right)^2\|\varsigma_n - \vartheta^*\|^2 + 2\sigma_n\langle F\vartheta^*, \vartheta^* - \vartheta_{n+1}\rangle \leq \|\varsigma_n - \vartheta^*\|^2 + \sigma_n M_3, \quad n > n_0.\end{aligned}$$
From the preceding inequality, as well as (9) and (18), we have the following for all $n > n_0$:
$$\|\vartheta_{n+1} - \vartheta^*\|^2 \leq \|z_n - \vartheta^*\|^2 + \phi_n\|\omega_{n+1} - \vartheta^*\|^2 + \sigma_n M_3 \leq \|\nu_n - \vartheta^*\|^2 - \frac{\gamma_n}{\iota}\|(I_2 - D_n)A\nu_n\|^2 + \gamma_n^2\|A^*(I_2 - D_n)A\nu_n\|^2 + \phi_n\|\omega_{n+1} - \vartheta^*\|^2 + \sigma_n M_3.$$
We know from (14) that
$$\begin{aligned}\|\nu_n - \vartheta^*\| &\leq \|\vartheta_n - \vartheta^*\| + \alpha_n\|\vartheta_n - \vartheta_{n-1}\| + \beta_n\|\nu_{n-1} - \vartheta_{n-1}\| \\ &= \|\vartheta_n - \vartheta^*\| + \alpha_n\big(\|\vartheta_n - \vartheta_{n-1}\| + \|\omega_n\|\big) + \beta_n\big(\|\nu_{n-1} - \vartheta_{n-1}\| + \|\omega_n\|\big) - \lambda_n\|\omega_n\| \\ &\leq \|\vartheta_n - \vartheta^*\| + \sigma_n\frac{\alpha_n}{\sigma_n}\big(\|\vartheta_n - \vartheta_{n-1}\| + \|\omega_n\|\big) + \sigma_n\frac{\beta_n}{\sigma_n}\big(\|\nu_{n-1} - \vartheta_{n-1}\| + \|\omega_n\|\big) \leq \|\vartheta_n - \vartheta^*\| + 2\sigma_n M_1.\end{aligned}$$
Then, for some constant $M_4 > 0$,
$$\|\nu_n - \vartheta^*\|^2 \leq \big(\|\vartheta_n - \vartheta^*\| + 2\sigma_n M_1\big)^2 = \|\vartheta_n - \vartheta^*\|^2 + \sigma_n\big(4M_1\|\vartheta_n - \vartheta^*\| + 4\sigma_n M_1^2\big) \leq \|\vartheta_n - \vartheta^*\|^2 + \sigma_n M_4.$$
Combining (20) and (21), we get the following for all $n > n_0$:
$$\begin{aligned}\|\vartheta_{n+1} - \vartheta^*\|^2 &\leq \|\vartheta_n - \vartheta^*\|^2 + \sigma_n M_4 + \gamma_n^2\|A^*(I_2 - D_n)A\nu_n\|^2 + \sigma_n M_3 + \phi_n\|\omega_{n+1} - \vartheta^*\|^2 - \frac{\gamma_n}{\iota}\|(I_2 - D_n)A\nu_n\|^2 \\ &= \|\vartheta_n - \vartheta^*\|^2 + \sigma_n M_4 + \sigma_n\frac{\gamma_n^2}{\sigma_n}\|A^*(I_2 - D_n)A\nu_n\|^2 + \sigma_n M_3 + \sigma_n\frac{\phi_n}{\sigma_n}\|\omega_{n+1} - \vartheta^*\|^2 - \frac{\gamma_n}{\iota}\|(I_2 - D_n)A\nu_n\|^2 \\ &\leq \|\vartheta_n - \vartheta^*\|^2 + \sigma_n(M_3 + M_4 + M_5) - \frac{\gamma_n}{\iota}\|(I_2 - D_n)A\nu_n\|^2,\end{aligned}$$
where $M_5 = \sup_{n\in\mathbb{N}}\left\{\frac{\gamma_n^2}{\sigma_n}\|A^*(I_2 - D_n)A\nu_n\|^2 + \frac{\phi_n}{\sigma_n}\|\omega_{n+1} - \vartheta^*\|^2\right\} < \infty$.
Now we set
$$\varrho_n = \frac{\gamma_n}{\iota}\|(I_2 - D_n)A\nu_n\|^2$$
and
$$\theta_n = \sigma_n(M_3 + M_4 + M_5), \qquad \Gamma_n = \|\vartheta_n - \vartheta^*\|^2.$$
Thus, (22) can be reformulated as follows:
$$\Gamma_{n+1} \leq \Gamma_n - \varrho_n + \theta_n.$$
It is easy to see that $\lim_{n\to\infty}\theta_n = 0$. To establish that $\Gamma_n \to 0$, according to Lemma 5 (and taking into account (19) and (23)), it suffices to demonstrate that, for any subsequence $\{n_k\} \subset \{n\}$, $\lim_{k\to\infty}\varrho_{n_k} = 0$ implies
$$\limsup_{k\to\infty}\psi_{n_k} \leq 0.$$
We suppose that $\lim_{k\to\infty}\varrho_{n_k} = 0$. When $(I_2 - D_{n_k})A\nu_{n_k} = 0$, it is clear that
$$\lim_{k\to\infty}\frac{\gamma}{\iota}\|(I_2 - D_{n_k})A\nu_{n_k}\|^2 = 0.$$
Otherwise, as indicated by (7), it follows that
$$\lim_{k\to\infty}\frac{\gamma_{n_k}}{\iota}\|(I_2 - D_{n_k})A\nu_{n_k}\|^2 = 0.$$
By our assumption, we get
$$\lim_{k\to\infty}\frac{\gamma_{n_k}}{\iota}\|(I_2 - D_{n_k})A\nu_{n_k}\|^2 = \lim_{k\to\infty}\frac{\rho_{n_k}}{\iota}\cdot\frac{\|(I_2 - D_{n_k})A\nu_{n_k}\|^4}{\|A^*(I_2 - D_{n_k})A\nu_{n_k}\|^2},$$
which implies that
$$\lim_{k\to\infty}\frac{\|(I_2 - D_{n_k})A\nu_{n_k}\|^4}{\|A^*(I_2 - D_{n_k})A\nu_{n_k}\|^2} = 0.$$
Further, we obtain
$$\lim_{k\to\infty}\frac{\|(I_2 - D_{n_k})A\nu_{n_k}\|^2}{\|A^*(I_2 - D_{n_k})A\nu_{n_k}\|} = 0.$$
Since $A$ is a bounded linear operator, we can conclude that
$$\|A^*(I_2 - D_{n_k})A\nu_{n_k}\| \leq \|A\|\,\|(I_2 - D_{n_k})A\nu_{n_k}\|.$$
Hence, we have
$$\frac{1}{\|A\|}\|(I_2 - D_{n_k})A\nu_{n_k}\| = \frac{\|(I_2 - D_{n_k})A\nu_{n_k}\|^2}{\|A\|\,\|(I_2 - D_{n_k})A\nu_{n_k}\|} \leq \frac{\|(I_2 - D_{n_k})A\nu_{n_k}\|^2}{\|A^*(I_2 - D_{n_k})A\nu_{n_k}\|},$$
since $A \neq 0$. Using (24)–(26), it follows that
$$\lim_{k\to\infty}\|(I_2 - D_{n_k})A\nu_{n_k}\| = 0.$$
Since $\lim_{n\to\infty}\sigma_n = 0$ and the sequence $\{F\varsigma_n\}$ is bounded, it follows that
$$\|\vartheta_{n_k+1} - \varsigma_{n_k}\| = \sigma_{n_k}\|F\varsigma_{n_k}\| \to 0 \quad (k\to\infty).$$
It is evident that, as $n\to\infty$, we have
$$\|\nu_n - \vartheta_n\| \leq \alpha_n\big(\|\vartheta_n - \vartheta_{n-1}\| + \|\omega_n\|\big) + \beta_n\big(\|\nu_{n-1} - \vartheta_{n-1}\| + \|\omega_n\|\big) = \sigma_n\frac{\alpha_n}{\sigma_n}\big(\|\vartheta_n - \vartheta_{n-1}\| + \|\omega_n\|\big) + \sigma_n\frac{\beta_n}{\sigma_n}\big(\|\nu_{n-1} - \vartheta_{n-1}\| + \|\omega_n\|\big) \to 0.$$
Further, according to (25) and (7), we get
$$\lim_{k\to\infty}\|\nu_{n_k} - z_{n_k}\| = \lim_{k\to\infty}\gamma_{n_k}\|A^*(I_2 - D_{n_k})A\nu_{n_k}\| = \lim_{k\to\infty}\rho_{n_k}\frac{\|(I_2 - D_{n_k})A\nu_{n_k}\|^2}{\|A^*(I_2 - D_{n_k})A\nu_{n_k}\|} = 0.$$
From the above inequalities, we arrive at
$$\|\vartheta_{n_k+1} - \vartheta_{n_k}\| \leq \|\vartheta_{n_k+1} - \varsigma_{n_k}\| + \|\varsigma_{n_k} - \nu_{n_k}\| + \|\nu_{n_k} - \vartheta_{n_k}\| \to 0 \quad (k\to\infty).$$
According to the arbitrariness of $n_k$, we have
$$\|(I_2 - D_n)A\nu_n\| \to 0 \quad \text{and} \quad \|\vartheta_{n+1} - \vartheta_n\| \to 0 \quad (n\to\infty).$$
By Algorithm 1, we obtain
$$\begin{aligned}\|\nu_n - \vartheta_n\| &\leq \alpha_n\|\vartheta_n - \vartheta_{n-1}\| + \beta_n\|\nu_{n-1} - \vartheta_{n-1}\| \\ &= \alpha_n\big(\|\vartheta_n - \vartheta_{n-1}\| + \|\omega_n\|\big) + \beta_n\big(\|\nu_{n-1} - \vartheta_{n-1}\| + \|\omega_n\|\big) - \lambda_n\|\omega_n\| \\ &\leq \sigma_n\frac{\alpha_n}{\sigma_n}\big(\|\vartheta_n - \vartheta_{n-1}\| + \|\omega_n\|\big) + \sigma_n\frac{\beta_n}{\sigma_n}\big(\|\nu_{n-1} - \vartheta_{n-1}\| + \|\omega_n\|\big) - \lambda_n\|\omega_n\| \leq 2\sigma_n M_1 - \lambda_n\|\omega_n\|.\end{aligned}$$
Then, we get
$$\lambda_n\|\omega_n\| \leq 2\sigma_n M_1 - \|\nu_n - \vartheta_n\|,$$
which means that
$$\lim_{n\to\infty}\lambda_n\|\omega_n\| \leq \lim_{n\to\infty}\big(2\sigma_n M_1 - \|\nu_n - \vartheta_n\|\big) = 0,$$
and
$$\lim_{n\to\infty}\|\omega_n\| = 0.$$
Given that $\{\vartheta_{n_k}\}$ is bounded, there exists a subsequence $\{\vartheta_{n_{k_l}}\}$ that converges weakly to $\hat{\vartheta}$. Without loss of generality, we assume that the entire sequence $\{\vartheta_{n_k}\}$ converges weakly to $\hat{\vartheta}$.
At the same time, it follows from (27) that $\nu_{n_k} \rightharpoonup \hat{\vartheta}$ and $A\nu_{n_k} \rightharpoonup A\hat{\vartheta}$ as $k\to\infty$. By (27)–(30), we have $z_{n_k} + \lambda_{n_k}\omega_{n_k} \rightharpoonup \hat{\vartheta}$ as $k\to\infty$. Noting that the pool of indexes is finite and $\{\vartheta_n\}$ is asymptotically regular, for any $1 \leq i \leq p$, we can select a subsequence $\{n_{i_m}\} \subset \{n\}$ such that $\vartheta_{n_{i_m}} \rightharpoonup \hat{\vartheta}$ and $z_{n_{i_m}} + \lambda_{n_{i_m}}\omega_{n_{i_m}} \rightharpoonup \hat{\vartheta}$ as $m\to\infty$, with $[n_{i_m}]_1 = i$ for all $m$. Consequently, we find that
$$\lim_{m\to\infty}\big\|(I_1 - J_{n_{i_m}, i})(z_{n_{i_m}} + \lambda_{n_{i_m}}\omega_{n_{i_m}})\big\| = \lim_{m\to\infty}\big\|(I_1 - J_{n_{i_m}, [n_{i_m}]_1})(z_{n_{i_m}} + \lambda_{n_{i_m}}\omega_{n_{i_m}})\big\| = \lim_{m\to\infty}\frac{1}{\kappa_{n_{i_m}}}\big\|(I_1 - J_{n_{i_m}})(z_{n_{i_m}} + \lambda_{n_{i_m}}\omega_{n_{i_m}})\big\| = \lim_{m\to\infty}\frac{1}{\kappa_{n_{i_m}}}\|\omega_{n_{i_m}+1}\| = 0.$$
Similarly, for any $1 \leq j \leq r$, we can select a subsequence $\{n_{j_s}\} \subset \{n\}$ such that $A\nu_{n_{j_s}} \rightharpoonup A\hat{\vartheta}$ as $s\to\infty$ and $[n_{j_s}]_2 = j$ for all $s$. It turns out that
$$\lim_{s\to\infty}\big\|(I_2 - D_{n_{j_s}, j})A\nu_{n_{j_s}}\big\| = \lim_{s\to\infty}\big\|(I_2 - D_{n_{j_s}, [n_{j_s}]_2})A\nu_{n_{j_s}}\big\| = \lim_{s\to\infty}\frac{1}{\iota_{n_{j_s}}}\big\|(I_2 - D_{n_{j_s}})A\nu_{n_{j_s}}\big\| = 0.$$
Since $I_1 - S_i$ ($1 \leq i \leq p$) and $I_2 - T_j$ ($1 \leq j \leq r$) are demiclosed at 0, by employing Lemma 1, the operators $I_1 - J_{n,i}$ ($1 \leq i \leq p$) and $I_2 - D_{n,j}$ ($1 \leq j \leq r$) are also demiclosed at 0; moreover, it follows from (31) and (32) that $\hat{\vartheta} \in \bigcap_{i=1}^{p}\operatorname{Fix}(J_{n,i})$ and $A\hat{\vartheta} \in \bigcap_{j=1}^{r}\operatorname{Fix}(D_{n,j})$. Additionally, by Lemma 1, we know that $\hat{\vartheta} \in \bigcap_{i=1}^{p}\operatorname{Fix}(S_i)$ and $A\hat{\vartheta} \in \bigcap_{j=1}^{r}\operatorname{Fix}(T_j)$, that is, $\hat{\vartheta} \in \Omega$.
Subsequently, we demonstrate that
$$\limsup_{k\to\infty}\langle F\vartheta^*, \vartheta^* - \vartheta_{n_k}\rangle \leq 0.$$
To demonstrate this inequality, we select a subsequence $\{\vartheta_{n_{k_l}}\}$ of $\{\vartheta_{n_k}\}$ such that
$$\lim_{l\to\infty}\langle F\vartheta^*, \vartheta^* - \vartheta_{n_{k_l}}\rangle = \limsup_{k\to\infty}\langle F\vartheta^*, \vartheta^* - \vartheta_{n_k}\rangle.$$
Given that $\vartheta^*$ is the unique solution to the variational inequality (4) and $\vartheta_{n_{k_l}}$ converges weakly to $\hat{\vartheta} \in \Omega$, we can deduce that
$$\limsup_{k\to\infty}\langle F\vartheta^*, \vartheta^* - \vartheta_{n_k}\rangle = \lim_{l\to\infty}\langle F\vartheta^*, \vartheta^* - \vartheta_{n_{k_l}}\rangle = \langle F\vartheta^*, \vartheta^* - \hat{\vartheta}\rangle \leq 0.$$
From (29), we get
$$\limsup_{k\to\infty}\langle F\vartheta^*, \vartheta^* - \vartheta_{n_k+1}\rangle = \limsup_{k\to\infty}\langle F\vartheta^*, \vartheta^* - \vartheta_{n_k}\rangle \leq 0$$
and
$$\limsup_{k\to\infty}\psi_{n_k} \leq 0.$$
Therefore, all the conditions specified in Lemma 5 are met. As a result, we can directly conclude that $\lim_{n\to\infty}\Gamma_n = \lim_{n\to\infty}\|\vartheta_n - \vartheta^*\|^2 = 0$. This implies that the sequence $\{\vartheta_n\}$ converges strongly to $\vartheta^*$, which is the unique solution to the variational inequality (4). □

4. Application to the MSFP and a Numerical Example

It is widely recognized that the projection operator $P_Q$ onto a nonempty closed convex subset $Q$ is firmly nonexpansive, implying that $I - P_Q$ is demiclosed at 0. Now, as an application, we will solve the MSFP (1).
Theorem 2. 
Let $H_1$ and $H_2$ be real Hilbert spaces, and let $C_i \subseteq H_1$ for all $i = 1, 2, \ldots, p$ and $Q_j \subseteq H_2$ for each $j = 1, 2, \ldots, r$ be nonempty closed convex subsets. Suppose that $A$, $F$, $[n]_1$, $[n]_2$, $J_n$, $D_n$, $\{\phi_n\}$, $\{\varepsilon_n\}$, and $\{\sigma_n\}$ are the same as in Assumption 1. Additionally, assume that $\bar{\Omega} = \{\bar{\upsilon} \in H_1 : \bar{\upsilon} \in \bigcap_{i=1}^{p}C_i \ \text{and} \ A\bar{\upsilon} \in \bigcap_{j=1}^{r}Q_j\} \neq \emptyset$, $0 \leq \liminf_{n\to\infty}\rho_n \leq \limsup_{n\to\infty}\rho_n < 2$, $\lim_{n\to\infty}\frac{\rho_n}{\sigma_n} = 0$, and, for $i \in \{1,2,\ldots,p\}$ and $j \in \{1,2,\ldots,r\}$, $J_{n,i} = (1-\xi_n)I_1 + \xi_n P_{C_i}((1-\eta_n)I_1 + \eta_n P_{C_i})$ and $D_{n,j} = (1-\xi_n)I_2 + \xi_n P_{Q_j}((1-\eta_n)I_2 + \eta_n P_{Q_j})$ with $0 < \xi_n < \eta_n < \frac{1}{\sqrt{2}+1}$. Then the sequence $\{\vartheta_n\}$ constructed by Algorithm 1 converges strongly to a point $\bar{\upsilon} \in \bar{\Omega}$; that is to say, $\bar{\upsilon}$ is a solution of the MSFP (1), which is also the unique solution of the following HVIP:
$$\langle F\bar{\upsilon}, z - \bar{\upsilon}\rangle \geq 0, \quad \forall z \in \bar{\Omega}.$$
Proof. 
Let $l_1 = l_2 = 1$, $S_i = P_{C_i}$ for all $i = 1, 2, \ldots, p$, and $T_j = P_{Q_j}$ for each $j = 1, 2, \ldots, r$ in Assumption 1 (i). The conclusion then follows immediately from Theorem 1. □
In this section, we showcase the effectiveness of the proposed Algorithm 1 by using it to solve the MSFP (1). The algorithm was implemented in MATLAB R2020a and executed on a laptop with an Intel(R) Core(TM) i5-8300H CPU @ 2.30 GHz and 8.00 GB of RAM.
Example 1. 
For $1 \leq i \leq p$ and $1 \leq j \leq r$, we select subsets $C_i \subseteq \mathbb{R}^M$ and $Q_j \subseteq \mathbb{R}^N$ in the MSFP (1), defined by $C_i := \{\vartheta \in \mathbb{R}^M : \langle a_i^C, \vartheta\rangle \leq b_i^C\}$ and $Q_j := \{\vartheta \in \mathbb{R}^N : \langle a_j^Q, \vartheta\rangle \leq b_j^Q\}$, respectively. Here, $a_i^C \in \mathbb{R}^M$, $a_j^Q \in \mathbb{R}^N$, and $b_i^C, b_j^Q \in \mathbb{R}$. The components of $a_i^C$ and $a_j^Q$ are randomly selected from the closed interval $[1,3]$, and $b_i^C$ and $b_j^Q$ are randomly chosen from the closed interval $[2,4]$. Additionally, we define $A = (\hat{a}_{kl})_{N\times M}$ as a bounded linear operator with entries $\hat{a}_{kl}$ randomly generated within the closed interval $[20,120]$. Further, we define the function $\Phi : \mathbb{R}^M \to \mathbb{R}$ by
$$\Phi(\vartheta) = \sum_{i=1}^{p}\frac{1}{p}\|\vartheta - P_{C_i}(\vartheta)\|^2 + \sum_{j=1}^{r}\frac{1}{r}\|A\vartheta - P_{Q_j}(A\vartheta)\|^2,$$
and use the stopping rule $\Phi(\vartheta) < \varepsilon = 10^{-20}$. Set $p = r = 10$, $\alpha = 0.9$, $\beta = 0$, $F = I$, $\kappa_n = \iota_n \equiv \frac{1}{2}$, $\varepsilon_n = \frac{1}{n^2}$, $\rho_n = \frac{1}{(n+1)^{0.7}}$, $\sigma_n = \frac{1}{\log(n+2)}$, $\phi_n = \frac{1}{(\log(n+2))^{1.1}}$, and $\rho_n = 1.95$ for each $n \geq 1$.
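Continuing the hedged Python sketches above, the snippet below assembles the Example 1 test data and the merit function $\Phi$; the random seed, dimensions, and helper names (`proj_halfspace`, `make_op`) are illustrative assumptions.

```python
# Sketch of the Example 1 problem data and the operators of Theorem 2.
import numpy as np

rng = np.random.default_rng(0)
p = r = 10
M, N = 15, 10
aC = rng.uniform(1, 3, (p, M)); bC = rng.uniform(2, 4, p)
aQ = rng.uniform(1, 3, (r, N)); bQ = rng.uniform(2, 4, r)
A = rng.uniform(20, 120, (N, M))

def proj_halfspace(x, a, b):
    v = a @ x - b
    return x if v <= 0 else x - (v / (a @ a)) * a

def Phi(x):
    """Phi(x) = (1/p) sum ||x - P_Ci x||^2 + (1/r) sum ||Ax - P_Qj Ax||^2."""
    Ax = A @ x
    return (sum(np.linalg.norm(x - proj_halfspace(x, aC[i], bC[i]))**2 for i in range(p)) / p
            + sum(np.linalg.norm(Ax - proj_halfspace(Ax, aQ[j], bQ[j]))**2 for j in range(r)) / r)

# J_{n,i}, D_{n,j} from Theorem 2 with P = projection, then the kappa/iota = 1/2 relaxation.
xi, eta, kappa = 0.2, 0.3, 0.5          # xi < eta < 1/(1 + sqrt(2)) ~ 0.414
def make_op(P):
    K = lambda x: (1 - xi) * x + xi * P((1 - eta) * x + eta * P(x))
    return lambda x: (1 - kappa) * x + kappa * K(x)

J_ops = [make_op(lambda x, i=i: proj_halfspace(x, aC[i], bC[i])) for i in range(p)]
D_ops = [make_op(lambda x, j=j: proj_halfspace(x, aQ[j], bQ[j])) for j in range(r)]
# These callables can be passed to the algorithm1 sketch given after Algorithm 1,
# e.g. with F = identity: algorithm1(np.full(M, 5.0), np.full(M, 10.0),
#                                    np.full(M, 10.0), A, J_ops, D_ops, F=lambda x: x)
```

This mirrors how the half-space data of Example 1 feed the cyclically selected operators, with $\Phi$ serving as the stopping criterion.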
When applying Algorithm 1 to solve Example 1, we can select the inertial extrapolation factor $\alpha_n$ within the interval $(0, \bar{\alpha}_n]$. Specifically, by setting $\alpha_n = \varpi\bar{\alpha}_n$, we can vary the inertial extrapolation factor by adjusting the parameter $\varpi$ within the range $(0,1]$. Setting $\vartheta_0 = 5e_1$, $\vartheta_1 = 10e_1$, and $\omega_0 = 10e_1$, where $e_1 = (1, 1, \ldots, 1)^T$, we compare different inertial extrapolation factors of Algorithm 1 across various dimensional spaces. Table 1 and Table 2 present the iteration numbers and CPU times of Algorithm 1 for $\varpi = 0.1, 0.2, 0.3, 0.4$, and $1$ when the dimensions are $(N, M) = (10, 15)$ and $(N, M) = (50, 50)$, respectively.
Based on the data presented in Table 1 and Table 2, it is evident that Algorithm 1 outperforms Equation (3) in terms of convergence speed across the tested dimensions. The computational results also indicate that Algorithm 1 with the adjustment parameter $\varpi = 1$ demonstrates the best performance.

5. Conclusions

In this paper, we introduced a novel algorithm to approximate solutions of strongly monotone variational inequality problems over the solution set of the multiple-set split common fixed-point problem with quasi-pseudocontractive mappings. Our method integrates a self-adaptive step size, which removes the need for prior knowledge of the operator norm, and incorporates an inertial method with a correction term to accelerate convergence. We proved a strong convergence theorem under reasonable conditions and applied our results to solve a multiple-set split feasibility problem. Numerical experiments further validated the effectiveness of the proposed algorithm. The problem addressed in this paper holds significant potential for practical applications in real-world scenarios, including image recognition, signal processing, and machine learning. Our findings contribute to the development of more efficient and practical iterative methods, potentially influencing future research and applications in these fields.

Author Contributions

Conceptualization, Y.Y. and H.-y.L.; methodology, Y.Y.; software, Y.Y.; validation, Y.Y. and H.-y.L.; formal analysis, Y.Y.; investigation, Y.Y.; resources, Y.Y.; data curation, Y.Y.; writing—original draft preparation, Y.Y.; writing—review and editing, H.-y.L.; visualization, H.-y.L.; supervision, H.-y.L.; project administration, H.-y.L.; funding acquisition, H.-y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Innovation Fund of Postgraduates, Sichuan University of Science & Engineering (Y2023331) and the Scientific Research and Innovation Team Program of Sichuan University of Science and Engineering (SUSE652B002).

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
HVIPs   Hierarchical variational inequality problems
SFP     Split feasibility problem
IMRT    Intensity-modulated radiation therapy
MSFP    Multiple-set split feasibility problem
SCFPs   Split common fixed-point problems
MSCFPs  Multiple-set split common fixed-point problems

References

  1. Eslamian, M.; Kamandi, A. A novel method for hierarchical variational inequality with split common fixed point constraint. J. Appl. Math. Comput. 2024, 70, 1837–1857. [Google Scholar] [CrossRef]
  2. Iiduka, H. Fixed point optimization algorithm and its application to power control in CDMA data networks. Math. Program. 2012, 133, 227–242. [Google Scholar] [CrossRef]
  3. Eslamian, M.; Kamandi, A. Hierarchical variational inequality problem and split common fixed point of averaged operators. J. Comput. Appl. Math. 2024, 437, 115490. [Google Scholar] [CrossRef]
  4. Iiduka, H. Distributed optimization for network resource allocation with nonsmooth utility functions. IEEE Trans. Control Netw. Syst. 2019, 6, 1354–1365. [Google Scholar] [CrossRef]
  5. Iiduka, H. Stochastic fixed point optimization algorithm for classifier ensemble. IEEE Trans. Cybern. 2020, 50, 4370–4380. [Google Scholar] [CrossRef]
  6. Jiang, B.N.; Wang, Y.H.; Yao, J.C. Two new multi-step inertial regularized algorithms for the hierarchical variational inequality problem with a generalized Lipschitzian mapping. J. Nonlinear Convex Anal. 2024, 25, 99–121. [Google Scholar]
  7. Iiduka, H.; Yamada, I. A use of conjugate gradient direction for the convex optimization problem over the fixed point set of a nonexpansive mapping. SIAM J. Optim. 2009, 70, 1881–1893. [Google Scholar] [CrossRef]
  8. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239. [Google Scholar] [CrossRef]
  9. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120. [Google Scholar] [CrossRef]
  10. Censor, Y.; Elfving, T.; Kopf, N.; Bortfeld, T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21, 2071–2084. [Google Scholar] [CrossRef]
  11. Censor, Y.; Bortfeld, T.; Martin, B.; Trofimov, A. A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51, 2353–2365. [Google Scholar] [CrossRef] [PubMed]
  12. Censor, Y.; Segal, A. The split common fixed point problem for directed operators. J. Convex Anal. 2009, 16, 587–600. [Google Scholar]
  13. Moudafi, A. Split monotone variational inclusions. J. Optim. Theory Appl. 2011, 150, 275–283. [Google Scholar] [CrossRef]
  14. Byrne, C.; Censor, Y.; Gibali, A.; Reich, S. Weak and strong convergence of algorithms for the split common null point problem. J. Nonlinear Convex Anal. 2012, 13, 759–775. [Google Scholar]
  15. Censor, Y.; Gibali, A.; Reich, S. Algorithms for the split variational inequality problem. Numer. Algorithms 2012, 59, 301–323. [Google Scholar] [CrossRef]
  16. He, Z. The split equilibrium problem and its convergence algorithms. J. Inequalities Appl. 2012, 2012, 162. [Google Scholar] [CrossRef]
  17. Lorenz, D.A.; Schöpfer, F.; Wenger, S. The linearized Bregman method via split feasibility problems: Analysis and generalizations. SIAM J. Imag. Sci. 2014, 7, 1237–1262. [Google Scholar] [CrossRef]
  18. He, H.J.; Ling, C.; Xu, H.K. An implementable splitting algorithm for the 1-norm regularized split feasibility problem. J. Sci. Comput. 2016, 67, 281–298. [Google Scholar] [CrossRef]
  19. Jirakitpuwapat, W.; Kumam, P.; Cho, Y.J.; Sitthithakerngkiet, K. A general algorithm for the split common fixed point problem with its applications to signal processing. Mathematics 2019, 7, 226. [Google Scholar] [CrossRef]
  20. Sahu, D.R.; Pitea, A.; Verma, M. A new iteration technique for nonlinear operators as concerns convex programming and feasibility problems. Numer. Algor. 2020, 83, 421–449. [Google Scholar] [CrossRef]
  21. Usurelu, G.I. Split feasibility handled by a single-projection three-step iteration with comparative analysis. J. Nonlinear Convex Anal. 2021, 22, 543–557. [Google Scholar]
  22. Gupta, N.; Postolache, M.; Nandal, A.; Chugh, R. A cyclic iterative algorithm for multiple-sets split common fixed point problem of demicontractive mappings without prior knowledge of operator norm. Mathematics 2021, 9, 372. [Google Scholar] [CrossRef]
  23. Zhao, J.; Wang, H.; Zhao, N. Accelerated cyclic iterative algorithms for the multiple-set split common fixed-point problem of quasi-nonexpansive operators. J. Nonlinear Var. Anal. 2023, 7, 1–22. [Google Scholar]
  24. Wang, F.H.; Xu, H.K. Cyclic algorithms for split feasibility problems in Hilbert spaces. Nonlinear Anal. 2011, 74, 4105–4111. [Google Scholar] [CrossRef]
  25. Zhao, J.; Zhao, N.; Hou, D. Inertial accelerated algorithms for the split common fixed-point problem of directed operators. Optimization 2021, 70, 1375–1407. [Google Scholar] [CrossRef]
  26. Chang, S.S.; Wang, L.; Zhao, Y.H.; Wang, G.; Ma, Z.L. Split common fixed point problem for quasi-pseudocontractive mapping in Hilbert spaces. Bull. Malays. Math. Sci. Soc. 2021, 44, 1155–1166. [Google Scholar] [CrossRef]
  27. Kim, D. Accelerated proximal point method for maximally monotone operators. Math. Program. 2021, 190, 57–87. [Google Scholar] [CrossRef]
  28. Maingé, P.E. Accelerated proximal algorithms with a correction term for monotone inclusions. Appl. Math. Optim. 2021, 84, 2027–2061. [Google Scholar] [CrossRef]
  29. Taiwo, A.; Jolaoso, L.O.; Mewomo, O.T. Viscosity approximation method for solving the multiple-set split equality common fixed-point problems for quasi-pseudocontractive mappings in Hilbert spaces. J. Ind. Manag. Optim. 2021, 17, 2733–2759. [Google Scholar] [CrossRef]
  30. Yamada, I. The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Application; Studies in Computational Mathematics; Butnariu, D., Censor, Y., Reich, S., Eds.; Elsevier: Amsterdam, The Netherlands, 2001; Volume 8, pp. 473–504. [Google Scholar]
Table 1. Numerical results with different $\alpha_n$, where $\alpha_n = \varpi\bar{\alpha}_n$, $N = 10$, $M = 15$.

|  |  | $\varpi = 0.1$ | $\varpi = 0.2$ | $\varpi = 0.3$ | $\varpi = 0.4$ | $\varpi = 1$ |
| --- | --- | --- | --- | --- | --- | --- |
| Equation (3) ($\lambda = 0.93$) | Iter | 292 | 252 | 212 | 171 | 16 |
|  | CPU(s) | 0.0116 | 0.0109 | 0.0179 | 0.0069 | 0.0010 |
| Algorithm 1 | Iter | 6 | 6 | 6 | 6 | 5 |
|  | CPU(s) | 0.0012 | 0.0012 | 0.0008 | 0.0006 | 0.0005 |
Table 2. Numerical results with different $\alpha_n$, where $\alpha_n = \varpi\bar{\alpha}_n$, $N = 50$, $M = 50$.

|  |  | $\varpi = 0.1$ | $\varpi = 0.2$ | $\varpi = 0.3$ | $\varpi = 0.4$ | $\varpi = 1$ |
| --- | --- | --- | --- | --- | --- | --- |
| Equation (3) ($\lambda = 0.98$) | Iter | 267 | 227 | 187 | 139 | 8 |
|  | CPU(s) | 0.0226 | 0.0131 | 0.0115 | 0.0070 | 0.0007 |
| Algorithm 1 | Iter | 8 | 7 | 6 | 6 | 6 |
|  | CPU(s) | 0.0023 | 0.0013 | 0.0012 | 0.0004 | 0.0004 |