Article

Halpern-Type Inertial Iteration Methods with Self-Adaptive Step Size for Split Common Null Point Problem

Department of Mathematics, Faculty of Science, University of Tabuk, P.O. Box 741, Tabuk 71491, Saudi Arabia
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(5), 747; https://doi.org/10.3390/math12050747
Submission received: 5 January 2024 / Revised: 23 February 2024 / Accepted: 23 February 2024 / Published: 1 March 2024

Abstract

In this paper, two Halpern-type inertial iteration methods with self-adaptive step size are proposed for estimating the solution of the split common null point problem (SpCNPP), in such a way that the Halpern iteration and the inertial extrapolation are computed simultaneously at the beginning of each iteration. We prove the strong convergence of the sequences generated by the suggested methods, without estimating the norm of the bounded linear operator, under certain appropriate assumptions. We demonstrate the efficiency of our iterative methods and compare them with some related and well-known results using relevant numerical examples.

1. Introduction

The split feasibility problem (SpFP), presented by Censor and Elfving [1], is the first split problem, and it has been studied intensively by researchers in the applied sciences; see, e.g., [2,3,4,5]. The split inverse problem (SpIP), due to Censor et al. [6], is the most general split problem. Recently, various models of inverse problems have been developed and studied. Owing to their application-oriented nature, split variational inequality and inclusion problems have attracted growing interest in recent years. Moudafi [7] studied a particular case of (SpIP) called the split monotone variational inclusion problem (SpMVIP):
Find $\phi \in H_1$ such that $\phi \in \mathrm{VI_sP}(E_1, V_1; H_1)$ and $A(\phi) \in \mathrm{VI_sP}(E_2, V_2; H_2)$,
where $V_1 : H_1 \to 2^{H_1}$ and $V_2 : H_2 \to 2^{H_2}$ are set-valued mappings, $E_1 : H_1 \to H_1$ and $E_2 : H_2 \to H_2$ are single-valued mappings on the Hilbert spaces $H_1$ and $H_2$, respectively, $\mathrm{VI_sP}(E_1, V_1; H_1) = \{\phi \in H_1 : 0 \in E_1(\phi) + V_1(\phi)\}$ and $\mathrm{VI_sP}(E_2, V_2; H_2) = \{A\phi \in H_2 : 0 \in E_2(A\phi) + V_2(A\phi)\}$. Moudafi [7] proposed the following scheme for (SpMVIP). Let $\mu > 0$; for arbitrary $z_0 \in H_1$, compute
$z_{n+1} = X[z_n + \kappa A^*(Y - I)Az_n],$
where $A^*$ is the adjoint operator of $A$, $\kappa \in (0, 1/L)$ with $L$ being the spectral radius of the operator $A^*A$, $X = R_{\mu V_1}(I - \mu E_1) = (I + \mu V_1)^{-1}(I - \mu E_1)$ and $Y = R_{\mu V_2}(I - \mu E_2) = (I + \mu V_2)^{-1}(I - \mu E_2)$. If $E_1 = E_2 = 0$, then (SpMVIP) reduces to the split common null point problem (SpCNPP), suggested by Byrne et al. [8]:
Find $\phi \in H_1$ such that $\phi \in \mathrm{VI_sP}(V_1; H_1)$ and $A(\phi) \in \mathrm{VI_sP}(V_2; H_2)$,
where $\mathrm{VI_sP}(V_1; H_1) = \{z \in H_1 : 0 \in V_1(z)\}$ and $\mathrm{VI_sP}(V_2; H_2) = \{Az \in H_2 : 0 \in V_2(Az)\}$. Byrne et al. [8] also proposed the following scheme for (SpCNPP). For an arbitrary point $z_0 \in H_1$, compute
$z_{n+1} = R_{\mu V_1}[z_n + \kappa A^*(R_{\mu V_2} - I)Az_n],$
where $L = \|A^*A\| = \|A\|^2$ and $\kappa \in (0, 2/L)$. It is easy to see that $\phi$ solves (SpCNPP) if and only if $\phi = R_{\mu V_1}[\phi + \kappa A^*(R_{\mu V_2} - I)A\phi]$. Kazmi and Rizvi [9] investigated the solutions of the split variational inclusion problem (SpVIsP) and the fixed point problem (FPP) of a nonexpansive mapping $T$ by using the following scheme. For arbitrary $z_0 \in H_1$, compute
$y_n = R_{\mu V_1}[z_n + \kappa A^*(R_{\mu V_2} - I)Az_n], \quad z_{n+1} = \zeta_n h(z_n) + (1 - \zeta_n)Ty_n,$
where $h$ is a contraction and $\kappa \in (0, 2/\|A\|^2)$. Later, Dilshad et al. [10] investigated the common solution of (SpVIsP) and the fixed points of a finite collection of nonexpansive mappings. Recently, an alternative method was suggested by Akram et al. [11] to find the common solution of (SpVIsP) and (FPP) as follows:
$y_n = z_n - \kappa\left[(I - R_{\mu_1 V_1})z_n + A^*(I - R_{\mu_2 V_2})Az_n\right], \quad z_{n+1} = \zeta_n h(z_n) + (1 - \zeta_n)T(y_n),$
where $\kappa = \frac{1}{1 + \|A\|^2}$. Since then, many authors have shown interest in solving (SpVIsP) and related problems using innovative methods; some interesting results can be found in [10,12,13,14,15,16] and the references therein.
In the above-mentioned work and the related literature, we observe that a step size governed by the norm $\|A^*A\|$ is required for the convergence of the iterative schemes. To overcome this restriction, new forms of iterative schemes have been proposed; see, e.g., [16,17,18,19]. López et al. [20] proposed a relaxed iterative scheme for (SpFP):
$z_{n+1} = P_{Q_1}\left[I - \tau_n A^*(I - P_{Q_2})A\right]z_n,$
where $P_{Q_1}$ and $P_{Q_2}$ are the orthogonal projections onto $Q_1$ and $Q_2$, respectively, and $\tau_n$ is calculated by
$\tau_n := \dfrac{\epsilon_n g(z_n)}{\|\nabla g(z_n)\|^2},$
with $g(z) = \frac{1}{2}\|(I - P_{Q_2})Az\|^2$ and $\nabla g(z_n) = A^*(I - P_{Q_2})Az_n$ for $n \geq 0$, where $0 < \epsilon_n < 1$ and $\inf_n \epsilon_n(4 - \epsilon_n) > 0$. Dilshad et al. [21] investigated the solution of (SpCNPP) without using the pre-calculated norm $\|A\|$. For arbitrary $\upsilon \in H_1$, compute
$x_n = (I - R_{\mu_1 V_1})z_n + A^*(I - R_{\mu_2 V_2})Az_n, \quad z_{n+1} = \zeta_n \upsilon + (1 - \zeta_n)(z_n - \tau_n x_n),$
where $\zeta_n \in (0, 1)$ and $\tau_n$ is calculated by
$\tau_n = \dfrac{\|(I - R_{\mu_1 V_1})(z_n)\|^2 + \|(I - R_{\mu_2 V_2})(Az_n)\|^2}{\|(I - R_{\mu_1 V_1})(z_n) + A^*(I - R_{\mu_2 V_2})(Az_n)\|^2}.$
The slow convergence of the suggested algorithms became a new problem for researchers, and many efforts have therefore been made to accelerate convergence. Several researchers have implemented an inertial term as one of the speed-up approaches. Recall that Alvarez and Attouch [22] established the inertial proximal point approach for a monotone mapping $V$, utilising the notion of implicit discretization of derivatives:
$z_{n+1} = [I + \mu V]^{-1}[z_n + \alpha_n(z_n - z_{n-1})],$
where $\mu > 0$, $\alpha_n \in [0, 1)$ is the extrapolation coefficient and $\alpha_n(z_n - z_{n-1})$ constitutes the inertial term. Schemes of this kind have been found to have improved convergence rates, and they have therefore been adopted, modified and implemented to solve various nonlinear problems; see, e.g., [23,24,25,26,27,28]. Very recently, Reich and Taiwo [29] studied some fast iterative methods for estimating the solution of the variational inclusion problem, in which the viscosity approximation and the inertial extrapolation are computed jointly in the first step of each iteration.
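As a concrete illustration (ours, not part of the paper), the following Python sketch applies the inertial proximal point iteration to the simple monotone mapping $V(x) = x$, whose resolvent is $(I + \mu V)^{-1}(x) = x/(1+\mu)$; the constant extrapolation coefficient, tolerance and starting points are chosen only for demonstration.

```python
def resolvent_identity(x, mu):
    # Resolvent of V(x) = x:  (I + mu*V)^{-1}(x) = x / (1 + mu)
    return x / (1.0 + mu)

def inertial_proximal_point(z0, z1, mu=1.0, alpha=0.3, tol=1e-10, max_iter=200):
    """Inertial proximal point iteration of Alvarez and Attouch:
       z_{n+1} = (I + mu*V)^{-1}[ z_n + alpha*(z_n - z_{n-1}) ]."""
    z_prev, z = float(z0), float(z1)
    for n in range(1, max_iter + 1):
        w = z + alpha * (z - z_prev)        # inertial extrapolation
        z_next = resolvent_identity(w, mu)  # proximal (resolvent) step
        if abs(z_next - z) < tol:
            return z_next, n
        z_prev, z = z, z_next
    return z, max_iter

zero, iters = inertial_proximal_point(z0=5.0, z1=4.0)
print(f"approximate zero of V: {zero:.3e} after {iters} iterations")
```

The inertial term reuses the previous displacement $z_n - z_{n-1}$, which typically reduces the number of iterations compared with the plain proximal point method.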
Inspired by the above-mentioned work, we suggest two Halpern-type inertial iteration methods with self-adaptive step size for approximating the solution of (SpCNPP) in the setting of Hilbert spaces. Our methods compute the Halpern iteration and the inertial extrapolation simultaneously at the beginning of each iteration. We use a self-adaptive step size so that the iteration process does not require the prior calculation of the norm of the bounded linear operator. Our work can be seen as a simple and accelerated modification of existing methods for solving (SpCNPP).
We arrange this paper as follows. The following section recalls some definitions and results that are useful in the convergence analysis of the proposed methods. In Section 3, we propose our two Halpern-type, inertial and self-adaptive iteration methods, and then we state and prove the strong convergence results. Section 4 presents numerical examples in finite- and infinite-dimensional Hilbert spaces showing the behaviour and advantages of the suggested iterative methods. We conclude our study and numerical experiments in Section 5.

2. Preliminaries

Assume that $H$ is a real Hilbert space and $D$ is a closed and convex subset of $H$. If $\{z_n\}$ is a sequence in $H$, then $z_n \to z$ denotes the strong convergence of $\{z_n\}$ to $z$ and $z_n \rightharpoonup z$ denotes weak convergence. The weak $\omega$-limit set of $\{z_n\}$ is defined by
$\omega_w(z_n) = \{z \in H : z_{n_j} \rightharpoonup z \text{ as } j \to \infty, \text{ where } \{z_{n_j}\} \text{ is a subsequence of } \{z_n\}\}.$
For every $\vartheta \in H$, there exists a unique closest point in $D$, denoted by $P_D\vartheta$ and called the projection of $\vartheta$ onto $D \subseteq H$, such that
$\|\vartheta - P_D\vartheta\| \leq \|\vartheta - \omega\|, \quad \text{for all } \omega \in D.$
The projection $P_D$ also satisfies
$\langle \omega - \vartheta, P_D\omega - P_D\vartheta \rangle \geq \|P_D\omega - P_D\vartheta\|^2, \quad \text{for all } \omega, \vartheta \in H.$
Moreover, $P_D\omega$ is also characterized by the fact that
$P_D\omega = \eta \iff \langle \omega - \eta, \vartheta - \eta \rangle \leq 0, \quad \text{for all } \vartheta \in D.$
For all $\eta, \vartheta$ in a Hilbert space $H$, we have the following identity:
$\|\eta \pm \vartheta\|^2 = \|\eta\|^2 \pm 2\langle \eta, \vartheta \rangle + \|\vartheta\|^2.$ (1)
Definition 1.
A mapping $h : H \to H$ is called
(i) a contraction, if $\|h(\eta) - h(\omega)\| \leq \tau\|\eta - \omega\|$ for all $\eta, \omega \in H$, where $\tau \in (0, 1)$;
(ii) nonexpansive, if $\|h(\eta) - h(\omega)\| \leq \|\eta - \omega\|$ for all $\eta, \omega \in H$;
(iii) firmly nonexpansive, if $\|h(\eta) - h(\omega)\|^2 \leq \langle \eta - \omega, h(\eta) - h(\omega) \rangle$ for all $\eta, \omega \in H$.
Definition 2. 
Let $V : H \to 2^H$ be a set-valued mapping. Then
(i) $V$ is called monotone, if $\langle p - q, \eta - \omega \rangle \geq 0$ for all $\eta, \omega \in H$, $p \in V(\eta)$, $q \in V(\omega)$;
(ii) the graph of $V$ is $\mathrm{Graph}(V) = \{(p, \omega) \in H \times H : \omega \in V(p)\}$;
(iii) $V$ is called maximal monotone, if $V$ is monotone and $(I + \mu V)(H) = H$ for every $\mu > 0$, where $I$ is the identity mapping.
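For a simple illustration (ours, not taken from the paper): the single-valued mapping $V(x) = x$ on $H = \mathbb{R}$ is maximal monotone, and its resolvent is obtained by solving $x = y + \mu y$ for $y$:
$R_{\mu V}(x) = (I + \mu V)^{-1}(x) = \dfrac{x}{1 + \mu}, \quad \mu > 0,$
which is firmly nonexpansive and has the unique zero of $V$, namely $0$, as its only fixed point.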
Lemma 1
([30]). Let $\{z_n\}$ be a sequence of nonnegative real numbers such that
$z_{n+1} \leq (1 - \zeta_n)z_n + \beta_n, \quad n \geq 0,$
where { ζ n } is a sequence in ( 0 , 1 ) and { β n } is a sequence in R such that
(i) 
$\sum_{n=1}^{\infty} \zeta_n = \infty$;
(ii) 
$\limsup_{n\to\infty} \beta_n/\zeta_n \leq 0$ or $\sum_{n=1}^{\infty} |\beta_n| < \infty$.
Then lim n z n = 0 .
Lemma 2
([31]). In a Hilbert space H,
(i) 
if $V : H \to 2^H$ is monotone and $R_{\mu V}$ is the resolvent of $V$, then $R_{\mu V}$ and $I - R_{\mu V}$ are firmly nonexpansive for $\mu > 0$;
(ii) 
if $V : H \to 2^H$ is nonexpansive, then $I - V$ is demiclosed at zero, and if $V$ is firmly nonexpansive, then $I - V$ is firmly nonexpansive.
Lemma 3
([32]). Let $\{\psi_n\}$ be a bounded sequence in a Hilbert space $H$. If there exists a subset $Q \subseteq H$ satisfying
(i) $\lim_{n\to\infty} \|\psi_n - \omega\|$ exists for every $\omega \in Q$;
(ii) $\omega_w(\psi_n) \subseteq Q$;
then there exists $\phi \in Q$ such that $\psi_n \rightharpoonup \phi$.
Lemma 4
([33]). Let $\{z_n\}$ be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence $\{z_{n_k}\}$ of $\{z_n\}$ such that $z_{n_k} < z_{n_k+1}$ for all $k \geq 0$. Consider the sequence of integers $\{\rho(n)\}_{n \geq n_0}$ defined by
$\rho(n) = \max\{k \leq n : z_k \leq z_{k+1}\}.$
Then $\{\rho(n)\}_{n \geq n_0}$ is a nondecreasing sequence satisfying $\lim_{n\to\infty} \rho(n) = \infty$, and for all $n \geq n_0$, the following inequality holds:
$\max\{z_{\rho(n)}, z_n\} \leq z_{\rho(n)+1}.$

3. Main Results

In this part, we describe our Halpern-type inertial iteration methods with self-adaptive step size for ( S p CNPP ) . We adopt the following assumptions to ensure the convergence of our methods:
(S1) $V_1 : H_1 \to 2^{H_1}$ and $V_2 : H_2 \to 2^{H_2}$ are maximal monotone operators;
(S2) $A : H_1 \to H_2$ is a bounded linear operator with adjoint $A^*$;
(S3) $\{\zeta_n\}$ is a sequence in $(0, 1)$ such that $\lim_{n\to\infty} \zeta_n = 0$ and $\sum_{n=1}^{\infty} \zeta_n = \infty$;
(S4) $\{\delta_n\}$ is a positive sequence such that $\sum_{n=1}^{\infty} \delta_n < \infty$ and $\lim_{n\to\infty} \delta_n/\zeta_n = 0$;
(S5) the solution set of (SpCNPP) is denoted by $\Sigma$, and $\Sigma \neq \emptyset$.
Now, we can present our Halpern-type inertial iteration method for solving ( S p CNPP ) .
Remark 1.
From (3), we have $\alpha_n\|z_n - z_{n-1}\| \leq \delta_n$. By the choice of $\alpha_n > 0$ and of $\delta_n$ satisfying $\sum_{n=1}^{\infty} \delta_n < \infty$, we obtain $\lim_{n\to\infty} \alpha_n\|z_n - z_{n-1}\| = 0$, and by assumption (S4), we obtain $\lim_{n\to\infty} \frac{\alpha_n\|z_n - z_{n-1}\|}{\zeta_n} \leq \lim_{n\to\infty} \frac{\delta_n}{\zeta_n} = 0$.
Remark 2.
By using the definitions of the resolvents of $V_1$ and $V_2$, we can easily show that $\phi \in \Sigma$ if and only if $R_{\mu_1 V_1}(\phi) = \phi$ and $R_{\mu_2 V_2}(A\phi) = A\phi$.
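Indeed, the first equivalence follows from a one-line verification (added here for completeness):
$0 \in V_1(\phi) \iff \phi \in (I + \mu_1 V_1)(\phi) \iff \phi = (I + \mu_1 V_1)^{-1}(\phi) = R_{\mu_1 V_1}(\phi),$
since the resolvent of a maximal monotone operator is single-valued; the same argument applied to $V_2$ at the point $A\phi$ gives the second equivalence.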
Lemma 5
([21]). If $\{w_n\}$ satisfies
$\lim_{n\to\infty} \dfrac{\left(\|(I - R_{\mu_1 V_1})(w_n)\|^2 + \|(I - R_{\mu_2 V_2})(Aw_n)\|^2\right)^2}{\|(I - R_{\mu_1 V_1})(w_n) + A^*(I - R_{\mu_2 V_2})(Aw_n)\|^2} = 0,$
then
$\lim_{n\to\infty} \|(I - R_{\mu_1 V_1})(w_n)\| = \lim_{n\to\infty} \|(I - R_{\mu_2 V_2})(Aw_n)\| = 0.$
Remark 3.
If $z_{n+1} = w_n$, then from (5) we obtain
$\tau_n\|(I - R_{\mu_1 V_1})w_n + A^*(I - R_{\mu_2 V_2})Aw_n\| = 0.$
If $\|(I - R_{\mu_1 V_1})w_n + A^*(I - R_{\mu_2 V_2})Aw_n\| = 0$, we obtain that $w_n \in \Sigma$; otherwise, substituting the value of $\tau_n$ and taking the limit on both sides, we obtain
$\lim_{n\to\infty} \dfrac{\|(I - R_{\mu_1 V_1})(w_n)\|^2 + \|(I - R_{\mu_2 V_2})(Aw_n)\|^2}{\|(I - R_{\mu_1 V_1})(w_n) + A^*(I - R_{\mu_2 V_2})(Aw_n)\|} = 0.$
By using Lemma 5, we conclude that
$\lim_{n\to\infty} \|(I - R_{\mu_1 V_1})(w_n)\| = \lim_{n\to\infty} \|(I - R_{\mu_2 V_2})(Aw_n)\| = 0,$
which implies that $w_n \in \Sigma$.
Theorem 1.
Suppose that assumptions (S1)–(S5) are satisfied. Then the sequence $\{z_n\}$ generated by Algorithm 1 converges strongly to a solution $\phi$ of (SpCNPP), where $P_{\Sigma}(\upsilon) = \phi$.
Algorithm 1. Choose $\alpha \in [0, 1)$ and $\mu_1 > 0$, $\mu_2 > 0$. Choose arbitrary points $z_0$ and $z_1$, and set $n = 1$.
Iterative Step: Given the iterates $z_{n-1}$ and $z_n$, $n \geq 1$, select $0 < \alpha_n < \bar{\alpha}_n$, where
$\bar{\alpha}_n = \min\left\{\dfrac{\delta_n}{\|z_n - z_{n-1}\|},\, \alpha\right\}$ if $z_n \neq z_{n-1}$, and $\bar{\alpha}_n = \alpha$ otherwise. (3)
Compute
$w_n = \zeta_n \upsilon + (1 - \zeta_n)z_n + \alpha_n(z_n - z_{n-1}),$ (4)
$z_{n+1} = w_n - \tau_n\left[(I - R_{\mu_1 V_1})w_n + A^*(I - R_{\mu_2 V_2})Aw_n\right],$ (5)
for some fixed element $\upsilon \in H_1$, where $\tau_n$ is defined as
$\tau_n = \dfrac{\|(I - R_{\mu_1 V_1})(w_n)\|^2 + \|(I - R_{\mu_2 V_2})(Aw_n)\|^2}{\|(I - R_{\mu_1 V_1})(w_n) + A^*(I - R_{\mu_2 V_2})(Aw_n)\|^2}$ if $\|(I - R_{\mu_1 V_1})(w_n) + A^*(I - R_{\mu_2 V_2})(Aw_n)\| \neq 0$, and $\tau_n = 0$ otherwise.
Stopping Criterion: If $z_{n+1} = w_n$, then stop; otherwise, set $n = n + 1$ and return to the Iterative Step.
Proof. 
Let $\phi \in \Sigma$. Using (1) and (5), we obtain
z n + 1 ϕ 2 = w n τ n [ ( I R μ 1 V 1 ) ( w n ) + A * ( I R μ 2 V 2 ) ( A w n ) ] ϕ 2 w n ϕ 2 +   τ n 2 ( I R μ 1 V 1 ) ( w n ) + A * ( I R μ 2 V 2 ) ( A w n ) 2 2 τ n ( I R μ 1 V 1 ) ( w n ) + A * ( I R μ 2 V 2 ) ( A w n ) , w n ϕ .
For ϕ Σ , by Remark 2, we have R μ 1 V 1 ( ϕ ) = ϕ and R μ 2 V 2 ( A ϕ ) = A ϕ . Since ( I R μ 1 V 1 ) and ( I R μ 2 V 2 ) are firmly nonexpansive (Lemma 2), we have
( I R μ 1 V 1 ) ( w n ) + A * ( I R μ 1 V 2 ) ( A w n ) , w n ϕ = ( I R μ 1 V 1 ) ( w n ) , w n ϕ + A * ( I R μ 2 V 2 ) ( A w n ) , w n ϕ = ( I R μ 1 V 1 ) ( w n ) ( I R μ 1 V 1 ) ( ϕ ) , w n ϕ + A * ( I R μ 2 V 2 ) ( A w n ) A * ( I R μ 2 V 2 ) ( A ϕ ) , w n ϕ = ( I R μ 1 V 1 ) ( w n ) ( I R μ 1 V 1 ) ( ϕ ) , w n ϕ + ( I R μ 2 V 2 ) ( A w n ) ( I R μ 2 V 2 ) ( A ϕ ) , A ( w n ) A ( ϕ ) ( I R μ 1 V 1 ) ( w n ) 2 +   ( I R μ 2 V 2 ) ( A w n ) 2
and
τ n 2 ( I R μ 1 V 1 ) ( w n ) + A * ( I R μ 2 V 2 ) ( B w n ) 2 2 ( I R μ 1 V 1 ) ( w n ) + A * ( I R μ 2 V 2 ) ( A w n ) , w n ϕ ( ( I R μ 1 V 1 ) ( w n ) 2 +   ( I R μ 2 V 2 ) ( A w n ) 2 ) 2 ( I R μ 1 V 1 ) ( w n ) + A * ( I R μ 2 V 2 ) ( A w n ) 2 2 ( ( I R μ 1 V 1 ) ( w n ) 2 +   ( I R μ 2 V 2 ) ( A w n ) 2 ) 2 ( I R μ 1 V 1 ) ( w n ) + A * ( I R μ 2 V 2 ) ( A w n ) 2 = ( ( I R μ 1 V 1 ) ( w n ) 2 +   ( I R μ 2 V 2 ) ( A w n ) 2 ) 2 ( I R μ 1 V 1 ) ( w n ) + A * ( I R μ 2 V 2 ) ( A w n ) 2 .
From (7)–(8), we achieve
$\|z_{n+1} - \phi\|^2 \leq \|w_n - \phi\|^2 - \dfrac{\left(\|(I - R_{\mu_1 V_1})(w_n)\|^2 + \|(I - R_{\mu_2 V_2})(Aw_n)\|^2\right)^2}{\|(I - R_{\mu_1 V_1})(w_n) + A^*(I - R_{\mu_2 V_2})(Aw_n)\|^2},$ (9)
or
$\|z_{n+1} - \phi\| \leq \|w_n - \phi\|.$ (10)
Since $\lim_{n\to\infty} \alpha_n\|z_n - z_{n-1}\| = 0$, the sequence $\{\alpha_n\|z_n - z_{n-1}\|\}$ is bounded; hence there exists a number $K_1$ such that $\alpha_n\|z_n - z_{n-1}\| \leq K_1$. From (4), it follows that
w n ϕ = ζ n υ + ( 1 ζ n ) z n + α n ( z n z n 1 ϕ = ζ n υ ϕ + ( 1 ζ n ) ( z n ϕ ) + α n ( z n z n 1 ) ζ n υ ϕ   + ( 1 ζ n ) z n ϕ + α n z n z n 1 ζ n υ ϕ + ( 1 ζ n ) z n ϕ   +   K 1 max υ ϕ , z n ϕ   +   K 1 max υ ϕ , w n 1 ϕ   +   K 1 max υ ϕ , w 0 ϕ   +   K 1 ,
which implies that { w n } is bounded. By using (10), we conclude that { z n } is also bounded.
Let v n = z n + α n ( z n z n 1 ) . The boundedness of z n implies that v n is also bounded. By using (4), we obtain
w n ϕ 2 = ζ n υ + ( 1 ζ n ) v n ϕ 2 = ζ n 2 υ ϕ 2 +   ( 1 ζ n ) 2 v n ϕ 2 + 2 ζ n ( 1 ζ n ) υ ϕ , v n ϕ ,
and
v n ϕ 2 = z n + α n ( z n z n 1 ) ϕ 2 = z n ϕ 2 + 2 α n z n z n 1 , v n ϕ z n ϕ 2 + 2 α n z n z n 1 v n ϕ z n ϕ 2 + 2 ψ n v n ϕ ,
where ψ n = α n z n z n 1 . Therefore by using (11) and (12), we obtain
w n ϕ 2 ζ n 2 υ ϕ 2 + ( 1 ζ n ) 2 z n ϕ 2 +   2 ψ n v n ϕ + 2 ζ n ( 1 ζ n ) υ ϕ , v n ϕ , ( 1 ζ n ) 2 z n ϕ 2 +   2 ζ n ( 1 ζ n ) υ ϕ , v n ϕ + ζ n 2 υ ϕ 2 + 2 ψ n v n ϕ .
From (9) and (13), we get
z n + 1 ϕ 2 ( 1 ζ n ) 2 z n ϕ 2 +   2 ζ n ( 1 ζ n ) υ ϕ , v n ϕ + ζ n 2 υ ϕ 2 + 2 ψ n v n ϕ ( ( I R μ 1 V 1 ) ( w n ) 2 +   ( I R μ 2 V 2 ) ( A w n ) 2 ) 2 ( I R μ 1 V 1 ) ( w n ) + A * ( I R μ 2 V 2 ) ( A w n ) 2 , ( 1 ζ n ) z n ϕ 2 + ζ n { 2 ( 1 ζ n ) υ ϕ , v n ϕ + ζ n υ ϕ 2 + 2 ψ n ζ n v n ϕ } ( ( I R μ 1 V 1 ) ( w n ) 2 +   ( I R μ 2 V 2 ) ( A w n ) 2 ) 2 ( I R μ 1 V 1 ) ( w n ) + A * ( I R μ 2 V 2 ) ( A w n ) 2 .
Case I: Suppose that the sequence $\{\|z_n - \phi\|\}$ is monotonically decreasing. Then there exists $m_0$ such that $\|z_{n+1} - \phi\| \leq \|z_n - \phi\|$ for all $n \geq m_0$. Hence, the boundedness of $\{\|z_n - \phi\|\}$ implies that $\{\|z_n - \phi\|\}$ is convergent. Therefore, using (14), we have
( ( I R μ 1 V 1 ) ( w n ) 2 +   ( I R μ 2 V 2 ) ( A w n ) 2 ) 2 ( I R μ 1 V 1 ) ( w n ) + A * ( I R μ 2 V 2 ) ( A w n ) 2     z n ϕ 2   z n + 1 ϕ 2   ζ n z n ϕ 2 + ζ n 2 ( 1 ζ n ) υ ϕ , v n ϕ + ζ n υ     ϕ 2 +   2 ψ n ζ n v n ϕ .
Taking the limit as $n \to \infty$, we obtain
lim n ( ( I R μ 1 V 1 ) ( w n ) 2 +   ( I R μ 2 V 2 ) ( A w n ) 2 ) 2 ( I R μ 1 V 1 ) ( w n ) + A * ( I R μ 2 V 2 ) ( A w n ) 2 = 0 .
By using Lemma 5, we obtain
lim n ( I R μ 1 V 1 ) ( w n )   = lim n ( I R μ 2 V 2 ) ( A w n ) = 0 .
By using (5) and (16), we see that
lim n z n + 1 w n = 0 .
Since v n = z n + α n ( z n z n 1 ) and using Remark 1, we obtain
lim n z n v n   = lim n α n z n z n 1 = 0 .
From (4), we have
w n z n = ζ n ( υ z n ) + ( 1 ζ n ) α n ( z n z n 1 ) , w n z n ζ n υ z n   +   α n z n z n 1 ,
taking the limit as $n \to \infty$ on both sides, using the boundedness of $\{z_n\}$, Remark 1 and Assumption (S3), we get
lim n w n z n = 0 .
Hence using (18) and (20), we conclude
z n + 1 z n     z n + 1 w n   +   w n z n 0 .
Since $\{z_n\}$ is bounded, there exists a subsequence $\{z_{n_k}\}$ of $\{z_n\}$ such that $z_{n_k} \rightharpoonup \bar{\phi}$ as $k \to \infty$. It follows from (19) and (20) that $v_{n_k} \rightharpoonup \bar{\phi}$ and $w_{n_k} \rightharpoonup \bar{\phi}$ as $k \to \infty$. We claim that $\bar{\phi} \in \Sigma$. From (5), it follows that
z n k + 1 w n k 2   = τ n k 2 [ ( I R μ 1 V 1 ) w n k + A * ( I R μ 2 V 2 ) A w n k ] 2 = ( I R μ 1 V 1 ) ( w n k ) 2 +   ( I R μ 2 V 2 ) ( A w n k ) 2 2 ( I R μ 1 V 1 ) ( w n k ) + A * ( I R μ 2 V 2 ) ( A w n k ) 4 ( I R μ 1 V 1 ) w n k + A * ( I R μ 2 V 2 ) A w n k 2 = ( I R μ 1 V 1 ) ( w n k ) 2 +   ( I R μ 2 V 2 ) ( A w n k ) 2 2 ( I R μ 1 V 1 ) ( w n k ) + A * ( I R μ 2 V 2 ) ( A w n k ) 2 .
Taking k , and using Lemma 5, we obtain
lim n ( I R μ 1 V 1 ) ( w n k ) = lim n ( I R μ 2 V 2 ) ( A w n k ) = 0 .
By the demiclosedness principle, we obtain
( I R μ 1 V 1 ) ( ϕ ¯ ) = 0 , ( I R μ 2 V 2 ) ( A ϕ ¯ ) = 0 .
Remark 2 implies that $\bar{\phi} \in \Sigma$. Now, we show that $\{z_n\}$ converges strongly to $\phi$. From (14), we immediately see that
z n + 1 ϕ 2     ( 1 ζ n ) z n ϕ 2 + ζ n 2 ( 1 ζ n ) υ ϕ , v n ϕ + ζ n υ ϕ 2 +   2 ψ n ζ n v n ϕ .
Moreover, using ( S 2 ) and Remark 1, we get
lim n 2 ( 1 ζ n ) υ ϕ , v n ϕ + ζ n υ ϕ 2 +   2 ψ n ζ n v n ϕ = lim k 2 ( 1 ζ n k ) υ ϕ , v n k ϕ + ζ n k υ ϕ 2 +   2 ψ n k ζ n k v n k ϕ = 2 υ ϕ , ϕ ¯ ϕ 0 .
By applying Lemma 1 to (23), we deduce that $z_n \to \phi \in \Sigma$, where $\phi = P_\Sigma \upsilon$.
• Case II: If Case I does not hold, then there exists a subsequence $\{z_{n_k}\}$ of $\{z_n\}$ such that $\|z_{n_k} - \phi\|^2 \leq \|z_{n_k+1} - \phi\|^2$; the sequence defined by $\rho(n) = \max\{m \leq n : \|z_m - \phi\| \leq \|z_{m+1} - \phi\|\}$ is increasing, $\rho(n) \to \infty$ as $n \to \infty$, and
    0   z ρ ( n ) ϕ     z ρ ( n ) + 1 ϕ , n m .
Following the corresponding arguments as in the proof of Case I, we get
lim n ( I R μ 1 V 1 ) ( w ρ ( n ) )   = lim n ( I R μ 2 V 2 ) ( A w ρ ( n ) ) = 0 ,
and
lim n w ρ ( n ) v ρ ( n )   = lim n z ρ ( n ) v ρ ( n ) = 0 .
From (23) and (16), we have
0   z ρ ( n ) ϕ 2 2 ( 1 ζ ρ ( n ) ) υ ϕ , v ρ ( n ) ϕ + ζ ρ ( n ) υ ϕ 2 + 2 ψ ρ ( n ) ζ ρ ( n ) v ρ ( n ) ϕ 0 .
By taking limit n , we obtain z ρ ( n ) ϕ 0 as n . Invoking Lemma 4, we have
0   z n ϕ   max z n ϕ , z ρ ( n ) ϕ z ρ ( n ) + 1 ϕ .
Therefore, from (28), it follows that $\|z_n - \phi\| \to 0$ as $n \to \infty$. Hence, $z_n \to \phi$ as $n \to \infty$, which is the desired result.    □
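For readers who wish to experiment with Algorithm 1, the following Python sketch (ours, not part of the paper) implements the iteration with the resolvents, the operator and its adjoint supplied as functions; the routine name, the NumPy dependency, the choice $\alpha_n = \bar{\alpha}_n/2$ and the tolerance-based stopping rule are our own assumptions.

```python
import numpy as np

def algorithm1(res1, res2, A, A_adj, upsilon, z0, z1,
               zeta, delta, alpha=0.3, max_iter=100, tol=1e-12):
    """Halpern-type inertial iteration with self-adaptive step size (Algorithm 1).
    res1, res2 : resolvents R_{mu1 V1} and R_{mu2 V2} (callables)
    A, A_adj   : bounded linear operator and its adjoint (callables)
    zeta, delta: callables n -> zeta_n and n -> delta_n (assumptions (S3), (S4))."""
    z_prev, z = np.asarray(z0, dtype=float), np.asarray(z1, dtype=float)
    for n in range(1, max_iter + 1):
        diff = np.linalg.norm(z - z_prev)
        alpha_bar = alpha if diff == 0 else min(delta(n) / diff, alpha)
        alpha_n = 0.5 * alpha_bar                       # any value in (0, alpha_bar), cf. (3)
        # Halpern term and inertial extrapolation computed together, as in (4)
        w = zeta(n) * upsilon + (1 - zeta(n)) * z + alpha_n * (z - z_prev)
        u = w - res1(w)                                 # (I - R_{mu1 V1}) w
        v = A(w) - res2(A(w))                           # (I - R_{mu2 V2}) A w
        d = u + A_adj(v)
        denom = np.linalg.norm(d) ** 2
        tau = 0.0 if denom == 0 else (np.linalg.norm(u)**2 + np.linalg.norm(v)**2) / denom
        z_next = w - tau * d                            # self-adaptive step, as in (5)
        if np.linalg.norm(z_next - w) < tol:            # numerical proxy for z_{n+1} = w_n
            return z_next, n
        z_prev, z = z, z_next
    return z, max_iter
```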
Theorem 2.
Suppose that assumptions (S1)–(S5) are satisfied. Then the sequence $\{z_n\}$ generated by Algorithm 2 converges strongly to a solution $\phi$ of (SpCNPP), where $P_{\Sigma}(\upsilon) = \phi$.
Algorithm 2. Choose $\alpha \in [0, 1)$ and $\mu_1 > 0$, $\mu_2 > 0$. Choose arbitrary points $z_0$ and $z_1$, and set $n = 1$.
Iterative Step: Given the iterates $z_{n-1}$ and $z_n$, $n \geq 1$, choose $0 < \alpha_n < \bar{\alpha}_n$, where
$\bar{\alpha}_n = \min\left\{\dfrac{\delta_n}{\|z_n - z_{n-1}\|},\, \alpha\right\}$ if $z_n \neq z_{n-1}$, and $\bar{\alpha}_n = \alpha$ otherwise.
Compute
$w_n = \zeta_n \upsilon + (1 - \zeta_n)z_n + \alpha_n(z_n - z_{n-1}),$ (30)
$z_{n+1} = w_n - \tau_n\left[(I - R_{\mu_1 V_1})w_n + A^*(I - R_{\mu_2 V_2})Aw_n\right],$
for some fixed $\upsilon \in H_1$, where $\tau_n$ is defined as
$\tau_n = \dfrac{\|(I - R_{\mu_1 V_1})(w_n)\|^2 + \|(I - R_{\mu_2 V_2})(Aw_n)\|^2}{\|(I - R_{\mu_1 V_1})(w_n) + A^*(I - R_{\mu_2 V_2})(Aw_n)\|^2}$ if $\|(I - R_{\mu_1 V_1})(w_n) + A^*(I - R_{\mu_2 V_2})(Aw_n)\| \neq 0$, and $\tau_n = 0$ otherwise.
Stopping Criterion: If $z_{n+1} = w_n$, then stop; otherwise, set $n = n + 1$ and return to the Iterative Step.
Proof. 
Let $\phi \in \Sigma$. Since $\lim_{n\to\infty} \frac{\alpha_n\|z_n - z_{n-1}\|}{\zeta_n} = 0$, the sequence $\left\{\frac{\alpha_n\|z_n - z_{n-1}\|}{\zeta_n}\right\}$ is bounded, so there exists a number $K_2$ such that $\frac{\alpha_n\|z_n - z_{n-1}\|}{\zeta_n} \leq K_2$. Then, by using (30), we obtain
w n ϕ = ζ n υ + ( 1 ζ n ) z n + α n ( z n z n 1 ) ϕ ζ n υ     ϕ   + ( 1 ζ n ) z n ϕ   +   α n z n z n 1 = ζ n υ     ϕ   +   α n ζ n z n z n 1 + ( 1 ζ n ) z n ϕ = ζ n υ     ϕ   +   K 2 + ( 1 ζ n ) z n ϕ = ζ n υ     ϕ   +   K 2 + ( 1 ζ n ) w n 1 ϕ max υ     ϕ   +   K 2 , w n 1 ϕ max υ     ϕ   +   K 2 , w 0 ϕ .
By using (10), we obtain
z n + 1 ϕ max υ ϕ   +   K 2 , w 0 ϕ ,
which implies that { z n } is bounded and so is { w n } . Let y n = ζ n υ + ( 1 ζ n ) z n , then by using (1), we obtain
w n ϕ 2 = y n + α n ( z n z n 1 ) ϕ 2 = y n ϕ 2 +   2 y n ϕ , α n ( z n z n 1 ) + α n 2 z n z n 1 2 .
Now, we estimate
y n ϕ 2 = ζ n υ + ( 1 ζ n ) z n ϕ 2 = ζ n 2 υ ϕ 2 +   2 ζ n ( 1 ζ n ) υ ϕ , z n ϕ + ( 1 ζ n ) 2 z n ϕ 2
and
y n ϕ , α n ( z n z n 1 ) = ζ n υ + ( 1 ζ n ) z n ϕ , α n ( z n z n 1 ) = ζ n α n υ ϕ , ( z n z n 1 ) + ( 1 ζ n ) α n z n ϕ , z n z n 1 α n υ ϕ z n z n 1   +   α n z n ϕ z n z n 1 α n z n z n 1 υ ϕ   +   z n ϕ .
From (33), (34), and (35), we obtain
w n ϕ 2 ( 1 ζ n ) z n ϕ 2 +   ζ n { ζ n υ     ϕ 2 +   2 α n ζ n z n z n 1 { υ ϕ + z n ϕ } 2 α n 2 ζ n z n z n 1 2 + 2 υ ϕ , z n ϕ } .
Putting the value of w n ϕ in (9), we have
z n + 1 ϕ 2 ( 1 ζ n ) z n ϕ 2 + ζ n { ζ n υ ϕ 2 +   2 α n ζ n z n z n 1 υ ϕ   +   z n ϕ + 2 α n 2 ζ n z n z n 1 2 + 2 υ ϕ , z n ϕ } ( ( I R μ 1 V 1 ) ( w n ) 2 +   ( I R μ 2 V 2 ) ( A w n ) 2 ) 2 ( I R μ 1 V 1 ) ( w n ) + A * ( I R μ 2 V 2 ) ( A w n ) 2 .
Now, the desired result can be obtained by following steps corresponding to those in the proof of Theorem 1. □
Remark 4.
Let $\Upsilon_1 : H_1 \to H_1$ and $\Upsilon_2 : H_2 \to H_2$ be nonexpansive mappings and let $A : H_1 \to H_2$ be a bounded linear operator. Then the split common fixed point problem (SpCFPP) is defined as:
Find $s^* \in H_1$ such that $s^* \in \mathrm{Fix}(\Upsilon_1)$ and $As^* \in \mathrm{Fix}(\Upsilon_2)$,
where $\mathrm{Fix}(\Upsilon_1)$ and $\mathrm{Fix}(\Upsilon_2)$ denote the fixed point sets of the mappings $\Upsilon_1$ and $\Upsilon_2$, respectively. By replacing $R_{\mu_1 V_1}$ and $R_{\mu_2 V_2}$ with the nonexpansive mappings $\Upsilon_1$ and $\Upsilon_2$, respectively, in Algorithms 1 and 2, we can obtain strong convergence theorems for (SpCFPP); the resulting update is sketched below.
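Under this substitution (our transcription, not stated explicitly in the remark), the main step of Algorithm 1 would read
$z_{n+1} = w_n - \tau_n\left[(I - \Upsilon_1)w_n + A^*(I - \Upsilon_2)Aw_n\right], \quad \tau_n = \dfrac{\|(I - \Upsilon_1)w_n\|^2 + \|(I - \Upsilon_2)Aw_n\|^2}{\|(I - \Upsilon_1)w_n + A^*(I - \Upsilon_2)Aw_n\|^2},$
with $w_n$ computed exactly as in (4) and $\tau_n = 0$ whenever the denominator vanishes.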

4. Numerical Experiments

Example 1. 
Suppose $H_1 = \mathbb{R} = H_2$. We define the monotone mappings $V_1$ and $V_2$ by $V_1(x) = \frac{x}{3} + 1$ and $V_2(x) = x + 1$, and let $A(x) = \frac{x}{3}$, which is a bounded linear operator. It is easy to see that $V_1$ and $V_2$ are monotone operators. The resolvents of $V_1$ and $V_2$ are
$R_{\mu_1 V_1}(x) = [I + \mu_1 V_1]^{-1}(x) = \dfrac{3(x - \mu_1)}{3 + \mu_1}, \quad \mu_1 > 0;$
$R_{\mu_2 V_2}(x) = [I + \mu_2 V_2]^{-1}(x) = \dfrac{x - \mu_2}{1 + \mu_2}, \quad \mu_2 > 0.$
We choose $\zeta_n = \frac{1}{n+1}$ and $\delta_n = \frac{1}{1+n^2}$, which satisfy conditions (S3) and (S4). As a stopping condition, we set the maximum number of iterations to 50. The parameter $\alpha_n$ is generated at random in the range $(0, \bar{\alpha}_n)$, where $\bar{\alpha}_n$ is computed by (3). It is easily seen that $\Sigma = \{-3\}$, and we select the fixed element $\upsilon = 3$. Figure 1 depicts the behaviour of the sequences generated by Algorithms 1 and 2 for the three distinct cases listed below:
Case (I): $z_0 = 0$, $z_1 = 2$, $\mu_1 = 0.5$, $\mu_2 = 0.8$.
Case (II): $z_0 = 3$, $z_1 = 1$, $\mu_1 = 1.2$, $\mu_2 = 0.9$.
Case (III): $z_0 = 2$, $z_1 = 4$, $\mu_1 = 0.05$, $\mu_2 = 1$.
Comparison: Furthermore, we compare our proposed methods with the methods of Byrne et al. [8], Kazmi and Rizvi [9], Dilshad et al. [21] and Akram et al. [11]. We select $\kappa = 0.15$ for Byrne et al. [8] and Kazmi and Rizvi [9]; $\kappa = 0.5$ for Akram et al. [11]; $\mu = \mu_1 = 1$, $\mu_2 = 2$ for Byrne et al. [8], Kazmi and Rizvi [9], Dilshad et al. [21] and Akram et al. [11]; and $h(x) = x/2$, $T(x) = x$ for Kazmi and Rizvi [9] and Akram et al. [11]. We consider the following cases:
Case (A): $z_0 = 2$, $z_1 = 0$, $\zeta_n = \frac{1}{1+n}$;
Case (B): $z_0 = 2$, $z_1 = 11$, $\zeta_n = \frac{1}{(1+n)^{0.7}}$;
Case (C): $z_0 = 5$, $z_1 = 4$, $\zeta_n = \frac{1}{(1+n)^{0.7}}$;
Case (D): $z_0 = 2/3$, $z_1 = 3$, $\zeta_n = \frac{1}{1+n}$.
It can be seen that our schemes are easy to implement and that the chosen step size is free from the pre-calculation of $\|A\|$. The experimental results are presented in Table 1 and Figure 2.
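As a usage illustration only, the sketch below plugs the resolvents of Example 1 into the `algorithm1` routine sketched after the proof of Theorem 1; the parameter values mirror Case (I), and since the Halpern weights $\zeta_n = 1/(n+1)$ decay slowly, a larger iteration budget is used here so that the printed iterate visibly approaches the solution $-3$.

```python
mu1, mu2 = 0.5, 0.8
res1 = lambda x: 3 * (x - mu1) / (3 + mu1)     # resolvent of V1(x) = x/3 + 1
res2 = lambda x: (x - mu2) / (1 + mu2)         # resolvent of V2(x) = x + 1
A = lambda x: x / 3                            # bounded linear operator A(x) = x/3
A_adj = lambda y: y / 3                        # adjoint of A on the reals

z_star, iters = algorithm1(res1, res2, A, A_adj, upsilon=3.0,
                           z0=0.0, z1=2.0,
                           zeta=lambda n: 1.0 / (n + 1),
                           delta=lambda n: 1.0 / (1 + n**2),
                           max_iter=5000)
print(f"approximate solution: {float(z_star):.4f} after {iters} iterations")
```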
Example 2. 
Let $H_1 = H_2 = \ell_2 := \{x = (x_1, x_2, x_3, \ldots, x_n, \ldots),\ x_n \in \mathbb{R} : \sum_{n=1}^{\infty} |x_n|^2 < \infty\}$, the real Hilbert space with inner product $\langle x, y \rangle = \sum_{n=1}^{\infty} x_n y_n$ and norm $\|x\| = \left(\sum_{n=1}^{\infty} |x_n|^2\right)^{1/2}$. We define the monotone mappings by $V_1(x) = x = (x_1, x_2, x_3, \ldots, x_n, \ldots)$ and $V_2(x) = x^3 = (x_1^3, x_2^3, x_3^3, \ldots, x_n^3, \ldots)$. Let $A = I$, the identity operator, and hence so is its adjoint $A^*$. The stopping criterion for our computation is $D_n < 10^{-15}$, where $D_n = \|z_{n+1} - z_n\|$. We compare our proposed methods with the methods of Byrne et al. [8], Kazmi and Rizvi [9], Dilshad et al. [21] and Akram et al. [11].
We select $\kappa = 0.25$ for Byrne et al. [8] and Kazmi and Rizvi [9]; $\kappa = 0.5$ for Akram et al. [11]; $\mu_1 = 1.5$ and $\mu_2 = 0.5$ for Algorithms 1 and 2 and Akram et al. [11]; $\mu = 1.5$ for Byrne et al. [8], Kazmi and Rizvi [9] and Dilshad et al. [21]; $h(x) = x/2 = (x_1/2, x_2/2, \ldots, x_n/2, \ldots)$ and $T(x) = x = (x_1, x_2, \ldots, x_n, \ldots)$ for Kazmi and Rizvi [9] and Akram et al. [11]; $\upsilon = 0$ for Algorithms 1 and 2 and Dilshad et al. [21]; and $\delta_n = \frac{1}{1+n^2}$, $\alpha = 0.3$, with $\alpha_n$ selected randomly in $(0, \bar{\alpha}_n)$, where $\bar{\alpha}_n$ is computed by (3). We consider the following cases:
Case (A′): $z_0 = (1, 1/4, 1/9, 1/16, \ldots)$, $z_1 = (1, 1/8, 1/27, 1/256, \ldots)$ and $\zeta_n = \frac{1}{100+n}$;
Case (B′): $z_0 = (1, 1/2, 1/3, 1/4, \ldots)$, $z_1 = (1, 0, 1/3, 0, \ldots)$ and $\zeta_n = \frac{1}{(100+n)^{0.25}}$;
Case (C′): $z_0 = (1, 0, 0, 0, \ldots)$, $z_1 = (2/3, 2/9, 2/27, 2/54, \ldots)$ and $\zeta_n = \frac{1}{100+n}$;
Case (D′): $z_0 = (1/4, 1/16, 1/64, 1/256, \ldots)$, $z_1 = (0, 1/2, 0, 1/4, \ldots)$ and $\zeta_n = \frac{1}{(100+n)^{0.25}}$.
The experiment results we have obtained are shown in Table 2 and Figure 3.
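In Example 2, the resolvent of $V_2(x) = x^3$ has no convenient closed form, but it can be evaluated componentwise; the following sketch (our own illustration, not from the paper) uses a few Newton steps to solve $y + \mu_2 y^3 = x$ in each coordinate, while the resolvent of $V_1(x) = x$ is simply $x/(1 + \mu_1)$.

```python
import numpy as np

def resolvent_V1(x, mu1):
    # (I + mu1*V1)^{-1} with V1(x) = x:  y + mu1*y = x  =>  y = x / (1 + mu1)
    return x / (1.0 + mu1)

def resolvent_V2(x, mu2, newton_steps=30):
    # (I + mu2*V2)^{-1} with V2(x) = x^3 componentwise: solve y + mu2*y^3 = x.
    # The map y -> y + mu2*y^3 is strictly increasing, so the real root is unique.
    y = np.array(x, dtype=float, copy=True)
    for _ in range(newton_steps):
        f = y + mu2 * y**3 - x
        y -= f / (1.0 + 3.0 * mu2 * y**2)   # derivative is >= 1, so the update is well defined
    return y

x = np.array([1.0, -0.5, 0.25, 0.0])        # a truncated l2 vector, for illustration only
print(resolvent_V1(x, mu1=1.5))
print(resolvent_V2(x, mu2=0.5))
```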

5. Conclusions

We have presented two Halpern-type inertial iteration methods with a self-adaptive step size to approximate the solution of (SpCNPP), in which the Halpern iteration and the inertial term are computed simultaneously. We established the strong convergence of the suggested methods under appropriate assumptions, so that the calculation of $\|A\|$ is not necessary for the step size. Finally, we illustrated the proposed methods with suitable numerical examples using different choices of parameters. We showed that our suggested schemes perform well in terms of both the number of iterations and the CPU time. Note that the viscosity approximation method is more general than the Halpern approximation method; applying a viscosity-type inertial approximation to estimate the solution of (SpCNPP) or (SpMVIP) together with (FPP) will be an interesting direction for future work.

Author Contributions

Conceptualization, M.D.; methodology, M.D.; software, M.D.; validation, A.A.; formal analysis, A.A.; investigation, A.A.; resources, A.A.; data curation, A.A.; writing—original draft preparation, M.D.; writing—review and editing, M.D.; visualization, A.A.; supervision, A.A.; funding acquisition, A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Informed Consent Statement

This article does not contain any studies with human participants or animals performed by any of the authors.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors are thankful to the unknown reviewers and editor for their valuable remarks and suggestions which enhanced the quality and contents of this research article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. Censor, Y.; Elfving, T.; Kopf, N.; Bortfeld, T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21, 2071–2084.
  2. Combettes, P.L. The convex feasibility problem in image recovery. In Advances in Imaging and Electron Physics; Hawkes, P., Ed.; Academic Press: New York, NY, USA, 1996; Volume 95, pp. 155–270.
  3. Censor, Y.; Bortfeld, T.; Martin, B.; Trofimov, A. A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51, 2353–2365.
  4. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algor. 1994, 8, 221–239.
  5. Xu, H.K. Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26, 105018.
  6. Censor, Y.; Gibali, A.; Reich, S. Algorithms for the split variational inequality problem. Numer. Algor. 2012, 59, 301–323.
  7. Moudafi, A. Split monotone variational inclusions. J. Optim. Theory Appl. 2011, 150, 275–283.
  8. Byrne, C.; Censor, Y.; Gibali, A.; Reich, S. Weak and strong convergence of algorithms for the split common null point problem. J. Nonlinear Convex Anal. 2012, 13, 759–775.
  9. Kazmi, K.R.; Rizvi, S.H. An iterative method for split variational inclusion problem and fixed point problem for a nonexpansive mapping. Optim. Lett. 2014, 8, 1113–1124.
  10. Dilshad, M.; Aljohani, A.F.; Akram, M. Iterative scheme for split variational inclusion and a fixed-point problem of a finite collection of nonexpansive mappings. J. Funct. Spaces 2020, 2020, 3567648.
  11. Akram, M.; Dilshad, M.; Rajpoot, A.K.; Babu, F.; Ahmad, R.; Yao, J.-C. Modified iterative schemes for a fixed point problem and a split variational inclusion problem. Mathematics 2022, 10, 2098.
  12. Arfat, Y.; Kumam, P.; Khan, M.A.A.; Ngiamsunthorn, P.S. Shrinking approximants for fixed point problem and generalized split null point problem in Hilbert spaces. Optim. Lett. 2022, 16, 1895–1913.
  13. Sitthithakerngkiet, K.; Deepho, J.; Kumam, P. A hybrid viscosity algorithm via modify the hybrid steepest descent method for solving the split variational inclusion in image reconstruction and fixed point problems. Appl. Math. Comput. 2015, 250, 986–1001.
  14. Suantai, S.; Shehu, Y.; Cholamjiak, P. Nonlinear iterative methods for solving the split common null point problem in Banach spaces. Optim. Methods Softw. 2019, 34, 853–874.
  15. Chugh, R.; Gupta, N. Strong convergence of new split general system of monotone variational inclusion problem. Appl. Anal. 2024, 103, 138–165.
  16. Tang, Y. New algorithms for split common null point problems. Optimization 2020, 70, 1141–1160.
  17. Moudafi, A.; Thakur, B.S. Solving proximal split feasibility problems without prior knowledge of operator norms. Optim. Lett. 2014, 8, 2099–2110.
  18. Ngwepe, M.D.; Jolaoso, L.O.; Aphane, M.; Adenekan, I.O. An algorithm that adjusts the stepsize to be self-adaptive with an inertial term aimed for solving split variational inclusion and common fixed point problems. Mathematics 2023, 11, 4708.
  19. Zhu, L.-J.; Yao, Y. Algorithms for approximating solutions of split variational inclusion and fixed-point problems. Mathematics 2023, 11, 641.
  20. López, G.; Martín-Márquez, V.; Xu, H.K. Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28, 085004.
  21. Dilshad, M.; Akram, M.; Ahmad, I. Algorithms for split common null point problem without pre-existing estimation of operator norm. J. Math. Inequal. 2020, 14, 1151–1163.
  22. Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001, 9, 3–11.
  23. Arfat, Y.; Kumam, P.; Ngiamsunthorn, P.S.; Khan, M.A.A. An accelerated projection-based parallel hybrid algorithm for fixed point and split null point problems in Hilbert spaces. Math. Meth. Appl. Sci. 2021.
  24. Dilshad, M.; Akram, M.; Nsiruzzaman, M.; Filali, D.; Khidir, A.A. Adaptive inertial Yosida approximation iterative algorithms for split variational inclusion and fixed point problems. AIMS Math. 2023, 8, 12922–12942.
  25. Filali, D.; Dilshad, M.; Alyasi, L.S.M.; Akram, M. Inertial iterative algorithms for split variational inclusion and fixed point problems. Axioms 2023, 12, 848.
  26. Tang, Y.; Lin, H.; Gibali, A.; Cho, Y.-J. Convergence analysis and applications of the inertial algorithm solving inclusion problems. Appl. Numer. Math. 2022, 175, 1–17.
  27. Tang, Y.; Zhang, Y.; Gibali, A. New self-adaptive inertial-like proximal point methods for the split common null point problem. Symmetry 2021, 13, 2316.
  28. Tang, Y.; Gibali, A. New self-adaptive step size algorithms for solving split variational inclusion problems and its applications. Numer. Algor. 2019, 83, 305–331.
  29. Reich, S.; Taiwo, A. Fast iterative schemes for solving variational inclusion problems. Math. Meth. Appl. Sci. 2023, 46, 17177–17198.
  30. Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256.
  31. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Space; Springer: Berlin, Germany, 2011.
  32. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73, 591–597.
  33. Maingé, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912.
Figure 1. Graphical behaviour of $z_n$ and $\|z_{n+1} - w_n\|$ for Algorithm 1, shown in panels (a,b), and of $\|z_{n+1} - z_n\|$ and $\|z_{n+1} - w_n\|$ for Algorithm 2, shown in panels (c,d), for the three distinct cases of parameters.
Figure 2. Graphical comparison of Algorithms 1 and 2 with Byrne et al. [8], Kazmi and Rizvi [9], Dilshad et al. [21] and Akram et al. [11] for Example 1 using Cases (A)–(D): (a) Case (A); (b) Case (B); (c) Case (C); (d) Case (D).
Figure 3. Graphical comparison of Algorithms 1 and 2 with Byrne et al. [8], Kazmi and Rizvi [9], Dilshad et al. [21] and Akram et al. [11] for Example 2 using Cases (A′)–(D′): (a) Case (A′); (b) Case (B′); (c) Case (C′); (d) Case (D′).
Table 1. Numerical comparison of Algorithms 1 and 2 with the work studied in [8,9,11,21] for Example 1.

| Methods | Iter (n)/Time (s) | Case (A) | Case (B) | Case (C) | Case (D) |
| Algorithm 1 | Iterations | 48 | 50 | 52 | 59 |
| | Time (s) | 5.10 × 10^{-6} | 4.40 × 10^{-6} | 4.90 × 10^{-6} | 1.10 × 10^{-5} |
| Algorithm 2 | Iterations | 41 | 50 | 41 | 50 |
| | Time (s) | 4.70 × 10^{-6} | 3.00 × 10^{-6} | 1.16 × 10^{-5} | 3.00 × 10^{-6} |
| Byrne et al. [8] | Iterations | 102 | 108 | 100 | 101 |
| | Time (s) | 1.93 × 10^{-4} | 1.63 × 10^{-4} | 2.17 × 10^{-4} | 1.73 × 10^{-4} |
| Kazmi and Rizvi [9] | Iterations | 109 | 114 | 107 | 111 |
| | Time (s) | 2.20 × 10^{-6} | 3.20 × 10^{-6} | 2.00 × 10^{-6} | 2.00 × 10^{-6} |
| Dilshad et al. [21] | Iterations | 126 | 128 | 109 | 120 |
| | Time (s) | 5.20 × 10^{-6} | 3.10 × 10^{-6} | 2.30 × 10^{-6} | 3.00 × 10^{-6} |
| Akram et al. [11] | Iterations | 105 | 110 | 108 | 104 |
| | Time (s) | 1.28 × 10^{-4} | 1.10 × 10^{-4} | 1.14 × 10^{-4} | 9.44 × 10^{-5} |
Table 2. Numerical comparison of Algorithms 1 and 2 with the work studied in [8,9,11,21] for Example 2.

| Methods | Iter (n)/Time (s) | Case (A′) | Case (B′) | Case (C′) | Case (D′) |
| Algorithm 1 | Iterations | 34 | 28 | 30 | 30 |
| | Time (s) | 1.36 × 10^{-5} | 1.93 × 10^{-5} | 1.36 × 10^{-5} | 1.39 × 10^{-5} |
| Algorithm 2 | Iterations | 29 | 28 | 33 | 26 |
| | Time (s) | 1.34 × 10^{-5} | 1.45 × 10^{-5} | 1.3 × 10^{-5} | 1.24 × 10^{-5} |
| Byrne et al. [8] | Iterations | 60 | 60 | 59 | 57 |
| | Time (s) | 1.54 × 10^{-5} | 1.06 × 10^{-5} | 1.78 × 10^{-5} | 1.03 × 10^{-5} |
| Kazmi and Rizvi [9] | Iterations | 80 | 77 | 79 | 76 |
| | Time (s) | 1.04 × 10^{-5} | 1.02 × 10^{-5} | 9.6 × 10^{-6} | 1.16 × 10^{-5} |
| Dilshad et al. [21] | Iterations | 41 | 37 | 41 | 37 |
| | Time (s) | 3.23 × 10^{-5} | 4.49 × 10^{-5} | 3.14 × 10^{-5} | 3.97 × 10^{-5} |
| Akram et al. [11] | Iterations | 44 | 40 | 39 | 38 |
| | Time (s) | 1.53 × 10^{-5} | 2.33 × 10^{-5} | 1.56 × 10^{-5} | 2.05 × 10^{-5} |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
