
Generalized Convergence for Multi-Step Schemes under Weak Conditions

1
Mathematical Modelling and Applied Computation Research Group (MMAC), Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
2
Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
3
Department of Mathematics, University of Houston, Houston, TX 77205, USA
*
Author to whom correspondence should be addressed.
Mathematics 2024, 12(2), 220; https://doi.org/10.3390/math12020220
Submission received: 10 December 2023 / Revised: 3 January 2024 / Accepted: 8 January 2024 / Published: 9 January 2024

Abstract

We develop a local convergence analysis for a general scheme of high-order convergence, aiming to solve equations in Banach spaces. A priori estimates are developed based on the error distances. This way, we know in advance the number of iterations required to reach a predetermined error tolerance. Moreover, a radius of convergence is determined, allowing for a selection of initial points assuring the convergence of the scheme. Furthermore, a neighborhood that contains only one solution to the equation is specified. Notably, we present the generalized convergence of these schemes under weak conditions. Our findings rest on generalized continuity requirements and include a new semi-local convergence analysis (with a majorizing sequence) not found in earlier studies, which rely on Taylor series and on derivatives that do not appear in the scheme. We conclude with a collection of numerical results derived from applied science problems.
MSC:
65H10; 65Y20; 65G99; 41A58

1. Introduction

Nowadays, systems of nonlinear equations are an essential tool in the scientific community for solving challenging real-world problems [1,2,3,4,5]. These equations, characterized by their nonlinearity in the variables, find applications in a wide range of disciplines, from physics and biology to economics and engineering [6,7,8,9]. For instance, nonlinear equations are used in biology to characterize population dynamics [10,11,12,13]. In economics, they model the dynamics of supply and demand to explain market behavior. Additionally, nonlinear equations play a pivotal role in engineering in the design of robust structures and the optimization of highly complicated processes. For more details, please see the available literature [14,15,16,17].
In the course of this study, we undertake the challenge of estimating a solution, denoted by x*, to a nonlinear equation of the following form:
F ( x ) = 0 ,
where F : Ω ⊆ B₁ → B₂ is a continuous operator mapping between two complete normed spaces B₁ and B₂ [6,14,15]. In addition, Ω ≠ ∅ is an open set. In complicated problems, analytical solutions are most often out of reach. In such cases, our best course of action is to turn to iterative techniques. Iterative techniques are very useful in many fields, such as computer science, engineering, and mathematics, particularly when working with nonlinear systems of equations. Step by step, these schemes improve the solution from a first approximation until the desired accuracy is reached. They perform best on complex problems that are difficult to solve analytically.
In addition, these schemes improve the required solution with each iteration, gradually bringing us closer to precise answers that might otherwise be difficult to find. Therefore, researchers moved their focus to iterative methods. In the recent and distant past, many scholars [7,8,9,10,11,12,13,16,17,18,19,20,21,22,23,24], proposed higher-order iterative methods for the solutions to nonlinear problems.
We study the local convergence of the multi-step scheme defined for all τ = 0 , 1 , 2 , as
$$
\begin{aligned}
y_\tau &= \varphi_0(x_\tau),\\
z_\tau &= z_\tau^{(0)} = \varphi(x_\tau, y_\tau),\\
z_\tau^{(1)} &= z_\tau^{(0)} - \varphi_1(x_\tau, y_\tau)\,F\bigl(z_\tau^{(0)}\bigr),\\
z_\tau^{(2)} &= z_\tau^{(1)} - \varphi_1(x_\tau, y_\tau)\,F\bigl(z_\tau^{(1)}\bigr),\\
&\;\;\vdots\\
z_\tau^{(i-1)} &= z_\tau^{(i-2)} - \varphi_1(x_\tau, y_\tau)\,F\bigl(z_\tau^{(i-2)}\bigr),\\
x_{\tau+1} = z_\tau^{(i)} &= z_\tau^{(i-1)} - \varphi_1(x_\tau, y_\tau)\,F\bigl(z_\tau^{(i-1)}\bigr),
\end{aligned}
$$
where x₀ ∈ Ω is an initial point; i is a fixed natural number; φ₀ : Ω → B₁ is an iteration function of order l ≥ 1; φ : Ω × Ω → B₁ is an iteration function of order μ ≥ l; and φ₁ : Ω × Ω → δ(B₁, B₂), where δ(B₁, B₂) stands for the space of linear mappings sending B₁ into B₂. A plethora of multi-step schemes are special cases of scheme (2) (see the special cases in Remark 1).
Motivation:
(P1)
The convergence order is mostly achieved using Taylor series. There are a number of problems with this methodology.
Let us consider two specializations for μ = l = 2 .
(i)
Choose:
$$\varphi_0(x) = x - F'(x)^{-1}F(x), \qquad \varphi(x, y) = y - F'(y)^{-1}F(y), \qquad \text{and} \qquad \varphi_1(x, y) = F'(y)^{-1}.$$
(ii)
Consider:
$$\varphi(x, y) = \varphi_0(x) = x - F'(x)^{-1}F(x), \qquad \text{and} \qquad \varphi_1(x, y) = F'(x)^{-1}.$$
These schemes are considered to be Newton-type schemes [20,21,22,23]. In addition, see also the special cases in Remark 1.
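To make the structure of scheme (2) concrete, the following is a minimal scalar sketch of specialization (ii), a frozen-Jacobian multi-step Newton method. The function and parameter names are illustrative, not from the paper.

```python
# Sketch of scheme (2) with specialization (ii): phi0 = phi is the Newton
# step at x_tau, and phi1 freezes F'(x_tau)^{-1} for the i inner corrections.
def multistep_newton(F, dF, x0, i=2, tol=1e-12, max_iter=50):
    """Solve F(x) = 0 for scalar F using i frozen-Jacobian substeps."""
    x = x0
    for _ in range(max_iter):
        inv = 1.0 / dF(x)          # phi1(x, y) = F'(x)^{-1}, computed once
        z = x - inv * F(x)         # y = z^(0): the Newton step
        for _ in range(i):         # z^(j) = z^(j-1) - F'(x)^{-1} F(z^(j-1))
            z = z - inv * F(z)
        if abs(z - x) < tol:
            return z
        x = z                      # x_{tau+1} = z^(i)
    return x

# Example: F(x) = x**3 - 8 has the root x* = 2.
root = multistep_newton(lambda x: x**3 - 8, lambda x: 3 * x**2, x0=3.0)
print(root)
```

Note that each outer iteration evaluates F′ only once, which is the practical appeal of the multi-step construction.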
As demonstrated in the local analysis of convergence (LAC) in [18], the convergence order necessitates derivatives up to the ninth order (e.g., in case (i)), which are absent from the scheme. These restrictions limit its use, even in the scenario where B₁ = B₂ = ℝᵖ, where p is a natural number. Here, we choose a simple yet instructive example, represented by a function F on B₁ = B₂ = ℝ, Ω = [−1.3, 1.5], which is given by
$$F(\chi) = \begin{cases} 2\chi^5 - 2\chi^4 + 6\chi^3 \log \chi^2, & \chi \neq 0,\\ 0, & \chi = 0. \end{cases}$$
Next, it is determined that the first three derivatives are
$$F'(\chi) = 2\chi^2\bigl(5\chi^2 + 9\log\chi^2 - 4\chi + 6\bigr), \qquad F''(\chi) = 4\chi\bigl(10\chi^2 + 9\log\chi^2 - 6\chi + 15\bigr), \qquad \text{and} \qquad F'''(\chi) = 12\bigl(10\chi^2 + 3\log\chi^2 - 4\chi + 11\bigr).$$
The third derivative F'''(χ) is unbounded on Ω at χ = 0, whereas the solution x* = 1 ∈ Ω. Therefore, the guaranteed convergence of scheme (2) cannot be established from the local convergence results reported in [18].
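A quick numerical check illustrates both claims, assuming the piecewise definition of F reconstructed above (F(χ) = 2χ⁵ − 2χ⁴ + 6χ³ log χ² for χ ≠ 0):

```python
import math

# F(1) = 0, so x* = 1 is a solution; F''' blows up as chi -> 0.
def F(chi):
    return 0.0 if chi == 0 else 2*chi**5 - 2*chi**4 + 6*chi**3*math.log(chi**2)

def F3(chi):
    # Third derivative: F'''(chi) = 12*(10*chi^2 + 3*log(chi^2) - 4*chi + 11).
    return 12*(10*chi**2 + 3*math.log(chi**2) - 4*chi + 11)

print(F(1.0))     # 0.0: x* = 1 solves F(x) = 0
print(F3(1e-8))   # large negative: |F'''| is unbounded near chi = 0
```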
(P2)
A priori estimates on ‖x* − x_τ‖ are not developed in [18]. Therefore, the number of iterations required to reach an error tolerance is unknown in advance.
(P3)
The uniqueness of the solution in a neighborhood containing it is unknown.
(P4)
The study in [18] is restricted to ℝᵖ.
(P5)
The choice of the initial point x₀ is difficult, since the radius of convergence is not given.
(P6)
We introduce a crucial aspect that was not explored, namely, the semi-local analysis of convergence (SLAC), which forms the central focus of this study.
Novelty: Our approach not only solves the abovementioned problems (P1)–(P6) in the more general setting of a Banach space [6,11,14,15] but also demonstrates exceptional adaptability in extending the reach of current methods. This study takes a methodical and detailed approach to its subject matter for easy understanding. The approach is demonstrated using only the operators appearing in the scheme. We have selected the scheme from [18] to demonstrate our approach. However, the same approach can be utilized to extend the applicability of other methods based on Taylor series [16,19,22,24,25] along the same lines.
This article approaches the subject in a methodical manner. We lay the fundamental foundation in Section 2 by introducing the local studies and fundamental theorems. We continue our research on semi-local convergence in Section 3. We demonstrate the applicability of our results on numerical problems in Section 4. The conclusion and a summary of potential areas for future work are provided in Section 5.

2. Local Convergence

The following assumptions, (A1)–(A5), are used in the local analysis of convergence (LAC).
(A1)
F : Ω → B₂ is a continuous operator, and there exists x* ∈ Ω such that F(x*) = 0.
(A2)
There exist a continuous iteration operator of order l ≥ 1, φ₀ : Ω → B₁, and a non-decreasing and continuous function (NDCF) g₀ : [0, ∞) → [0, ∞) such that for all x ∈ Ω,
$$\|\varphi_0(x) - x^*\| \le g_0(\|x - x^*\|)\,\|x - x^*\|,$$
and the equation g₀(ζ) − 1 = 0 has a minimal positive solution, denoted by ρ₀.
Set Ω₀ = Ω ∩ U(x*, ρ₀) and I₀ = [0, ρ₀].
(A3)
There exist an iteration operator of order μ ≥ l, φ : Ω₀ × Ω₀ → B₁, NDCFs g : I₀ → [0, ∞) and g₁ : I₀ → [0, ∞), and an operator
$$\varphi_1 : \Omega_0 \times \Omega_0 \to \delta(B_1, B_2),$$
such that for all x ∈ Ω₀,
$$\|\varphi(x, y) - x^*\| \le g(\|x - x^*\|)\,\|y - x^*\|$$
and
$$\|z - x^* - \varphi_1(x, y)F(z)\| \le g_1(\|x - x^*\|)\,\|z - x^*\|,$$
where y = φ₀(x) and z = φ(x, y).
Moreover, the equations
$$g(\zeta) - 1 = 0$$
and
$$g_1(\zeta) - 1 = 0$$
have minimal solutions in (0, ρ₀), denoted by ξ and ξ₁, respectively.
(A4)
Suppose that the equation
$$g_1^k(\zeta)\,g(\zeta)\,g_0(\zeta) - 1 = 0$$
has a minimal positive solution ξ_k for each k = 1, 2, 3, …, i.
Set
$$\rho = \min\{\xi, \xi_1, \xi_k\}$$
and
$$C = g_1^k(\rho)\,g(\rho)\,g_0(\rho).$$
(A5)
Ū(x*, ρ) ⊂ Ω.
Next, we present the LAC of scheme (2) under these assumptions, ( A 1 ) ( A 5 ) , and the developed notation.
Theorem 1.
Assume that conditions (A1)–(A5) are valid. Then, sequence {x_τ}, generated for x₀ ∈ U(x*, ρ) \ {x*} by scheme (2), is well defined, remains in U(x*, ρ) for all τ = 0, 1, 2, …, and is convergent to x*.
Proof. 
We employ mathematical induction. First, for m = 0 , we have in turn, by scheme (2), the definition of ρ and conditions ( A 1 ) and ( A 2 ) :
$$\|y_0 - x^*\| = \|\varphi_0(x_0) - x^*\| \le g_0(\|x_0 - x^*\|)\,\|x_0 - x^*\| \le \|x_0 - x^*\| < \rho.$$
Moreover, it follows, by condition ( A 3 ) ,
$$\|z_0 - x^*\| = \|z_0^{(0)} - x^*\| \le g(\|x_0 - x^*\|)\,\|y_0 - x^*\| \le g(\|x_0 - x^*\|)\,g_0(\|x_0 - x^*\|)\,\|x_0 - x^*\| \le \|x_0 - x^*\| < \rho,$$
$$\|z_0^{(1)} - x^*\| \le g_1(\|x_0 - x^*\|)\,\|z_0^{(0)} - x^*\| \le g_1(\|x_0 - x^*\|)\,g(\|x_0 - x^*\|)\,\|y_0 - x^*\| < \rho, \qquad \|z_0^{(2)} - x^*\| \le g_1(\|x_0 - x^*\|)\,\|z_0^{(1)} - x^*\| \le g_1^2(\|x_0 - x^*\|)\,g(\|x_0 - x^*\|)\,g_0(\|x_0 - x^*\|)\,\|x_0 - x^*\| < \rho.$$
Similarly by induction, we obtain
$$\|x_{m+1} - x^*\| \le C\,\|x_m - x^*\| \le C^{m+1}\,\|x_0 - x^*\| < \rho,$$
where $C := g_1^k(\|x_0 - x^*\|)\,g(\|x_0 - x^*\|)\,g_0(\|x_0 - x^*\|) \in [0, 1)$.
Thus, all the iterates remain in U ( x * , ρ ) , and lim m x m = x * .
Remark 1
(Special cases). The single-step (k = 1) or multi-step (k > 1) schemes given in [18] are special cases of scheme (2), obtained by simply choosing φ₀, φ, and φ₁ properly. Let us illustrate two cases. Notice that in both cases, the iteration functions φ₀ and φ are of order 2, i.e., μ = l = 2.
(I)
Multi-step Newton’s scheme I: Choose φ₀(x) = x − F′(x)⁻¹F(x), φ(x, y) = y − F′(y)⁻¹F(y), and φ₁(x, y) = F′(y)⁻¹. Then, we can take
$$
\begin{aligned}
g_0(\zeta) &= \frac{\int_0^1 \eta\bigl((1-\kappa)\zeta\bigr)\,d\kappa}{1 - \eta_0(\zeta)}, \qquad
g(\zeta) = \frac{\int_0^1 \eta\bigl((1-\kappa)g_0(\zeta)\zeta\bigr)\,d\kappa}{1 - \eta_0\bigl(g_0(\zeta)\zeta\bigr)}, \quad \text{and}\\
g_1(\zeta) &= \frac{\int_0^1 \eta\bigl((1-\kappa)g(\zeta)\zeta\bigr)\,d\kappa}{1 - \eta_0\bigl(g(\zeta)\zeta\bigr)}
+ \frac{\Bigl(\eta_0\bigl(g(\zeta)\zeta\bigr) + \eta_0\bigl(g_0(\zeta)\zeta\bigr)\Bigr)\int_0^1 \eta_1\bigl(\kappa g(\zeta)\zeta\bigr)\,d\kappa}{\Bigl(1 - \eta_0\bigl(g(\zeta)\zeta\bigr)\Bigr)\Bigl(1 - \eta_0\bigl(g_0(\zeta)\zeta\bigr)\Bigr)}.
\end{aligned}
$$
These functions are related if we assume the following additional conditions:
(A6) There exists an invertible linear operator Q such that
$$\|Q^{-1}(F'(x) - Q)\| \le \eta_0(\|x - x^*\|)$$
for all x ∈ Ω, and $\|Q^{-1}(F'(x) - F'(y))\| \le \eta(\|x - y\|)$ for all x, y ∈ Ω₀ = Ω ∩ U(x*, ρ₁), where ρ₁ is the minimal positive solution of the equation η₀(ζ) − 1 = 0, and
$$\|Q^{-1}F'(x)\| \le \eta_1(\|x - x^*\|) \quad \text{for all } x \in \Omega_0,$$
where η₀ : [0, ∞) → [0, ∞), η₁ : [0, ρ₀] → [0, ∞), and η : [0, ρ₀] → [0, ∞) are NDCFs. Then, we also notice the estimates
$$
\begin{aligned}
\|F'(x)^{-1}Q\| &\le \frac{1}{1 - \eta_0(\|x - x^*\|)}, \qquad
\|Q^{-1}F(x)\| \le \int_0^1 \eta_1(\kappa\|x - x^*\|)\,d\kappa\,\|x - x^*\|,\\
\|y_\tau - x^*\| &= \bigl\|x_\tau - x^* - F'(x_\tau)^{-1}F(x_\tau)\bigr\|
\le \|F'(x_\tau)^{-1}Q\|\,\Bigl\|\int_0^1 Q^{-1}\bigl[F'\bigl(x^* + \kappa(x_\tau - x^*)\bigr) - F'(x_\tau)\bigr]\,d\kappa\,(x_\tau - x^*)\Bigr\|\\
&\le \frac{\int_0^1 \eta\bigl((1 - \kappa)\|x_\tau - x^*\|\bigr)\,d\kappa}{1 - \eta_0(\|x_\tau - x^*\|)}\,\|x_\tau - x^*\|
= g_0(\|x_\tau - x^*\|)\,\|x_\tau - x^*\| \le \|x_\tau - x^*\| < \rho.
\end{aligned}
$$
Similarly, it follows that
$$
\begin{aligned}
\|z_\tau - x^*\| &\le \frac{\int_0^1 \eta\bigl((1 - \kappa)\|y_\tau - x^*\|\bigr)\,d\kappa}{1 - \eta_0(\|y_\tau - x^*\|)}\,\|y_\tau - x^*\|
\le \frac{\int_0^1 \eta\bigl((1 - \kappa)g_0(\|x_\tau - x^*\|)\|x_\tau - x^*\|\bigr)\,d\kappa}{1 - \eta_0\bigl(g_0(\|x_\tau - x^*\|)\|x_\tau - x^*\|\bigr)}\,\|y_\tau - x^*\|\\
&= g(\|x_\tau - x^*\|)\,\|y_\tau - x^*\|
\le g(\|x_\tau - x^*\|)\,g_0(\|x_\tau - x^*\|)\,\|x_\tau - x^*\| \le \|x_\tau - x^*\|,
\end{aligned}
$$
and
$$
\begin{aligned}
\bigl\|z_\tau - x^* - F'(y_\tau)^{-1}F(z_\tau)\bigr\|
&= \bigl\|z_\tau - x^* - F'(z_\tau)^{-1}F(z_\tau) + \bigl(F'(z_\tau)^{-1} - F'(y_\tau)^{-1}\bigr)F(z_\tau)\bigr\|\\
&\le \Biggl(\frac{\int_0^1 \eta\bigl((1 - \kappa)\|z_\tau - x^*\|\bigr)\,d\kappa}{1 - \eta_0(\|z_\tau - x^*\|)}
+ \frac{\bigl(\eta_0(\|z_\tau - x^*\|) + \eta_0(\|y_\tau - x^*\|)\bigr)\int_0^1 \eta_1\bigl(\kappa\|z_\tau - x^*\|\bigr)\,d\kappa}{\bigl(1 - \eta_0(\|z_\tau - x^*\|)\bigr)\bigl(1 - \eta_0(\|y_\tau - x^*\|)\bigr)}\Biggr)\|z_\tau - x^*\|\\
&\le g_1(\|x_\tau - x^*\|)\,\|z_\tau - x^*\| \le C\,\|x_\tau - x^*\| \le \|x_\tau - x^*\| < \rho.
\end{aligned}
$$
(II)
Multi-step Newton’s scheme II:
Take φ 0 ( x ) = φ ( x , y ) = x F ( x ) 1 F ( x ) and φ 1 ( x , y ) = F ( x ) 1 .
Then, as before, we can choose
$$g_0(\zeta) = g(\zeta) = \frac{\int_0^1 \eta\bigl((1-\kappa)\zeta\bigr)\,d\kappa}{1 - \eta_0(\zeta)}$$
and
$$g_1(\zeta) = \frac{\int_0^1 \eta\bigl((1-\kappa)g(\zeta)\zeta\bigr)\,d\kappa}{1 - \eta_0\bigl(g(\zeta)\zeta\bigr)}
+ \frac{\Bigl(\eta_0(\zeta) + \eta_0\bigl(g(\zeta)\zeta\bigr)\Bigr)\int_0^1 \eta_1\bigl(\kappa g(\zeta)\zeta\bigr)\,d\kappa}{\bigl(1 - \eta_0(\zeta)\bigr)\Bigl(1 - \eta_0\bigl(g(\zeta)\zeta\bigr)\Bigr)}.$$
A plethora of other special cases of these operators are possible [6,8,10,14].
Remark 2.
Some of the important remarks are given below:
(1)
If assumption (A6) is valid, a popular choice (but not the most flexible one) is Q = F′(x*). Then, x* is a simple solution. However, if Q ≠ F′(x*), our results allow for using scheme (2) for the determination of a solution x* of multiplicity greater than one.
(2)
In order to introduce isolation of the solution results, say, under the first assumption in ( A 6 ) , consider the following conditions:
(A7) There exists ρ̄ ≥ ρ₀ such that
$$\int_0^1 \eta_0(\kappa\bar{\rho})\,d\kappa < 1.$$
Set D = Ω₀ ∩ U[x*, ρ̄].
Then, the only solution to (1) in set D is x * .
Indeed, let x̄ ∈ D be such that F(x̄) = 0. Define $T = \int_0^1 F'\bigl(x^* + \kappa(\bar{x} - x^*)\bigr)\,d\kappa$.
By applying condition ( A 7 ) , we have
$$\|Q^{-1}(T - Q)\| \le \int_0^1 \eta_0\bigl(\kappa\|\bar{x} - x^*\|\bigr)\,d\kappa < 1.$$
Hence, the perturbation lemma on invertible linear operators attributed to Banach [11] implies the invertibility of T. It then follows, by the identity
$$\bar{x} - x^* = T^{-1}\bigl(F(\bar{x}) - F(x^*)\bigr) = T^{-1}(0) = 0,$$
that x̄ = x*.
(3)
The last condition in (A6) can be dropped if we take η₁(ζ) = 1 + η₀(ζ). This is due to the following calculation:
$$\|Q^{-1}F'(x)\| \le \|Q^{-1}(F'(x) - Q)\| + \|Q^{-1}Q\| \le 1 + \eta_0(\|x - x^*\|).$$
Moreover, the expression
$$\eta_0(\zeta) + \eta_0\bigl(g(\zeta)\zeta\bigr) \quad \text{or} \quad \eta_0\bigl(g(\zeta)\zeta\bigr) + \eta_0\bigl(g_0(\zeta)\zeta\bigr)$$
can be replaced by $\eta\bigl((1 + g(\zeta))\zeta\bigr)$ and $\eta\bigl((g(\zeta) + g_0(\zeta))\zeta\bigr)$, respectively.

3. Semi-Local Convergence

Majorizing sequences are introduced to show the semi-local analysis of convergence (SLAC) of scheme (2).
Let r₀^(0) = 0 and r₀^(1) ≥ 0. Assume that a_τ, q_τ, s_{τ+1}, and p_τ^(j) are non-negative sequences for all j = 1, 2, … and τ = 0, 1, 2, ….
Moreover, define sequence { r τ } for r τ = r τ ( 0 ) as
$$
\begin{aligned}
r_\tau^{(2)} &= r_\tau^{(1)} + a_\tau,\\
r_\tau^{(3)} &= r_\tau^{(2)} + q_\tau p_\tau^{(1)},\\
r_\tau^{(4)} &= r_\tau^{(3)} + q_\tau p_\tau^{(2)},\\
&\;\;\vdots\\
r_\tau^{(j)} &= r_\tau^{(j-1)} + q_\tau p_\tau^{(j-2)}, \quad j = 2, \ldots, i - 1,\\
r_{\tau+1} = r_\tau^{(i)} &= r_\tau^{(i-1)} + q_\tau p_\tau^{(i-2)},\\
r_{\tau+1}^{(1)} &= r_{\tau+1} + s_{\tau+1}.
\end{aligned}
$$
The given sequences are later connected to the operators appearing in scheme (2). However, first, some convergence conditions are developed for sequence { r τ } .
Lemma 1.
Suppose that there exists b ≥ 0 such that for each τ = 0, 1, 2, …,
$$r_\tau \le r_{\tau+1} \le b.$$
Then, there exists b * [ 0 , b ] such that lim τ r τ = b * .
Proof. 
The sequence is non-decreasing and bounded above by b. Therefore, it is convergent to some parameter b * [ 0 , b ] . Limit point b * is the least upper bound of sequence { r τ } .
Next, we relate the scalar functions appearing in Formula (3) to the operators in scheme ( 2 ) .
Suppose:
(H1)
There exist x₀ ∈ Ω, r₀^(1) ≥ 0, and an invertible linear operator Q such that ‖y₀ − x₀‖ = ‖φ₀(x₀) − x₀‖ ≤ r₀^(1).
(H2)
The iterations in scheme (2) exist and satisfy for all τ = 0 , 1 , 2 ,
$$\|\varphi(x_\tau, y_\tau) - y_\tau\| \le r_\tau^{(2)} - r_\tau^{(1)}, \qquad \|\varphi_1(x_\tau, y_\tau)\,F'(x_0)\| \le q_\tau,$$
and
$$\bigl\|Q^{-1}F\bigl(z_\tau^{(j)}\bigr)\bigr\| \le p_\tau^{(j+1)}, \quad j = 0, \ldots, i - 1.$$
(H3)
Condition (4) is valid.
(H4)
U[x₀, b*] ⊂ Ω.
Next, the result on the SLAC follows for scheme (2).
Theorem 2.
Suppose that conditions (H1)–(H4) hold and lim_{τ→∞} ‖Q⁻¹F(x_τ)‖ = 0. Then, there exists a solution x* ∈ U[x₀, b*] of the equation F(x) = 0 such that lim_{τ→∞} x_τ = x*.
Proof. 
Induction is needed to show that
$$\|z_\tau - y_\tau\| \le r_\tau^{(2)} - r_\tau^{(1)}, \qquad \bigl\|z_\tau^{(1)} - z_\tau^{(0)}\bigr\| \le r_\tau^{(3)} - r_\tau^{(2)}, \qquad \text{and} \qquad \bigl\|z_\tau^{(i)} - z_\tau^{(i-1)}\bigr\| \le r_\tau^{(i+2)} - r_\tau^{(i+1)}, \quad i = 2, 3, \ldots, k.$$
Condition ( H 1 ) implies that iterate y 0 U ( x 0 , b * ) , and assertion (5) holds if τ = 0 . Then, condition ( H 2 ) and Formula (3) imply, for m = 0 , 1 , 2 , , that
$$\|z_\tau - y_\tau\| = \|\varphi(x_\tau, y_\tau) - y_\tau\| \le r_\tau^{(2)} - r_\tau^{(1)} < b^*, \qquad \|z_\tau - x_0\| \le \|z_\tau - y_\tau\| + \|y_\tau - x_0\| \le r_\tau^{(2)} - r_\tau^{(1)} + r_\tau^{(1)} - r_0 = r_\tau^{(2)} < b^*,$$
so the first assertion in (5) holds, and iterate z_m ∈ U(x₀, b*). Then, by the third substep of scheme (2), condition (H2), and Formula (3), we have, in turn, by the triangle inequality,
$$\bigl\|z_\tau^{(1)} - z_\tau^{(0)}\bigr\| \le \|\varphi_1(x_\tau, y_\tau)Q\|\,\bigl\|Q^{-1}F\bigl(z_\tau^{(0)}\bigr)\bigr\| \le q_\tau p_\tau^{(1)} = r_\tau^{(3)} - r_\tau^{(2)}, \qquad \bigl\|z_\tau^{(1)} - x_0\bigr\| \le \bigl\|z_\tau^{(1)} - z_\tau^{(0)}\bigr\| + \bigl\|z_\tau^{(0)} - x_0\bigr\| \le r_\tau^{(3)} - r_\tau^{(2)} + r_\tau^{(2)} - r_0 = r_\tau^{(3)} < b^*.$$
Thus, assertions (5) hold, and all the iterates of scheme (2) belong in U ( x 0 , b * ) . In particular, the last substep of scheme (2) gives
$$\bigl\|x_{\tau+1} - z_\tau^{(k-1)}\bigr\| = \bigl\|z_\tau^{(k)} - z_\tau^{(k-1)}\bigr\| \le \|\varphi_1(x_\tau, y_\tau)Q\|\,\bigl\|Q^{-1}F\bigl(z_\tau^{(k-1)}\bigr)\bigr\| \le q_\tau p_\tau^{(k)} = r_\tau^{(k+2)} - r_\tau^{(k+1)},$$
and
$$\|x_{\tau+1} - x_0\| \le \bigl\|z_\tau^{(k)} - z_\tau^{(k-1)}\bigr\| + \bigl\|z_\tau^{(k-1)} - x_0\bigr\| \le r_\tau^{(k+2)} - r_\tau^{(k+1)} + r_\tau^{(k+1)} - r_0 = r_\tau^{(k+2)} < b^*.$$
Hence, the induction is completed. It follows that sequence { r τ } is majorizing for { x τ } . Therefore, { x τ } is Cauchy in a complete normed space B 1 . Therefore, there exists x * U [ x 0 , b * ] such that lim τ x τ = x * . Then, by the condition lim τ Q 1 F ( x τ ) = 0 and the continuity of F, we conclude that F ( x * ) = 0 .
Next, we shall determine the scalar sequences satisfying Formula (3) by specifying the iteration functions φ₀, φ, and φ₁. Define, for each τ = 0, 1, 2, …,
$$\varphi_0(x_\tau) = x_\tau - F'(x_\tau)^{-1}F(x_\tau), \qquad \varphi(x_\tau, y_\tau) = x_\tau - F'(x_\tau)^{-1}F(y_\tau), \qquad \text{and} \qquad \varphi_1(x_\tau, y_\tau) = F'(x_\tau)^{-1}.$$
Suppose:
(H2′): There exists an NDCF θ₀ : [0, ∞) → [0, ∞) such that
$$\|Q^{-1}(F'(v) - Q)\| \le \theta_0(\|v - x_0\|) \quad \text{for each } v \in \Omega,$$
and the equation θ₀(ζ) − 1 = 0 has a smallest positive solution, denoted by s.
Set Ω₀ = Ω ∩ U(x₀, s), and suppose
$$\|Q^{-1}(F'(v_2) - F'(v_1))\| \le \theta(\|v_2 - v_1\|) \quad \text{for each } v_1, v_2 \in \Omega_0.$$
Then, condition (H2′) can replace (H2) in Theorem 2. Under these selections and condition (H2′), we obtain, in turn,
$$
\begin{aligned}
F\bigl(z_\tau^{(0)}\bigr) &= F\bigl(z_\tau^{(0)}\bigr) - F(x_\tau) + F(x_\tau) = F\bigl(z_\tau^{(0)}\bigr) - F(x_\tau) - F'(x_\tau)(y_\tau - x_\tau),\\
\bigl\|Q^{-1}F\bigl(z_\tau^{(0)}\bigr)\bigr\| &\le \Bigl(1 + \int_0^1 \theta_0\bigl(\|x_\tau - x_0\| + \kappa\|z_\tau^{(0)} - x_\tau\|\bigr)\,d\kappa\Bigr)\|y_\tau - x_\tau\| + \bigl(1 + \theta_0(\|x_\tau - x_0\|)\bigr)\|y_\tau - x_\tau\|\\
&\le \Bigl(1 + \int_0^1 \theta_0\bigl(r_\tau^{(0)} + \kappa(r_\tau^{(1)} - r_\tau^{(0)})\bigr)\,d\kappa\Bigr)\bigl(r_\tau^{(1)} - r_\tau^{(0)}\bigr) + \bigl(1 + \theta_0(r_\tau^{(0)})\bigr)\bigl(r_\tau^{(1)} - r_\tau^{(0)}\bigr) = p_\tau^{(1)},\\
\bigl\|z_\tau^{(1)} - z_\tau^{(0)}\bigr\| &= \bigl\|F'(x_\tau)^{-1}F\bigl(z_\tau^{(0)}\bigr)\bigr\| \le \bigl\|F'(x_\tau)^{-1}Q\bigr\|\,\bigl\|Q^{-1}F\bigl(z_\tau^{(0)}\bigr)\bigr\| \le q_\tau p_\tau^{(1)} = r_\tau^{(3)} - r_\tau^{(2)},\\
&\;\;\vdots\\
\bigl\|z_\tau^{(k-1)} - z_\tau^{(k-2)}\bigr\| &\le q_\tau p_\tau^{(k-1)} = r_\tau^{(k+1)} - r_\tau^{(k)},\\
\bigl\|x_{\tau+1} - z_\tau^{(k-1)}\bigr\| &= \bigl\|z_\tau^{(k)} - z_\tau^{(k-1)}\bigr\| \le q_\tau p_\tau^{(k)} = r_\tau^{(k+2)} - r_\tau^{(k+1)} = r_{\tau+1} - r_\tau^{(k+1)},\\
F(x_{\tau+1}) &= F(x_{\tau+1}) - F(x_\tau) - F'(x_\tau)(y_\tau - x_\tau)\\
&= F(x_{\tau+1}) - F(x_\tau) - F'(x_\tau)(x_{\tau+1} - x_\tau) + F'(x_\tau)(x_{\tau+1} - x_\tau) - F'(x_\tau)(y_\tau - x_\tau)\\
&= F(x_{\tau+1}) - F(x_\tau) - F'(x_\tau)(x_{\tau+1} - x_\tau) + F'(x_\tau)(x_{\tau+1} - y_\tau),
\end{aligned}
$$
leading to
$$
\begin{aligned}
\|Q^{-1}F(x_{\tau+1})\| &\le \int_0^1 \theta\bigl((1 - \kappa)(r_{\tau+1} - r_\tau)\bigr)\,d\kappa\,(r_{\tau+1} - r_\tau) + \bigl(1 + \theta_0(r_\tau)\bigr)\bigl(r_\tau^{(1)} - r_\tau^{(0)}\bigr) = s_{\tau+1}^{(0)},\\
\|y_{\tau+1} - x_{\tau+1}\| &\le \bigl\|F'(x_{\tau+1})^{-1}Q\bigr\|\,\bigl\|Q^{-1}F(x_{\tau+1})\bigr\| \le \frac{s_{\tau+1}^{(0)}}{1 - \theta_0(r_{\tau+1})} = s_{\tau+1} = r_{\tau+1}^{(1)} - r_{\tau+1},
\end{aligned}
$$
justifying the choices of sequences p τ ( j ) , a τ , q τ , and s τ + 1 .
Hence, we arrive at the following specification of Theorem 2.
Corollary 1.
Suppose that conditions (H1), (H2′), (H3), and (H4) hold. Then, the conclusions of Theorem 2 hold for sequence {x_τ}, provided that s = b.
Remark 3.
A possible choice for operator Q is F′(x₀).

4. Numerical Problems

Here, we demonstrate the computational significance of our theoretical predictions. Our analysis involves five examples, with the initial two demonstrating local convergence (LAC) and the subsequent ones illustrating semi-local convergence (SLAC). The computed outcomes for local convergence are detailed in Table 1 and Table 2, whereas Table 3, Table 4 and Table 5 present the results for semi-local convergence. Furthermore, we determine the computational order of convergence (COC) [4,5] using the equation
$$\vartheta = \frac{\ln\bigl(\|x_{\tau+1} - x^*\| / \|x_\tau - x^*\|\bigr)}{\ln\bigl(\|x_\tau - x^*\| / \|x_{\tau-1} - x^*\|\bigr)}, \quad \text{for } \tau = 1, 2, \ldots,$$
or the approximate computational order of convergence (ACOC) [8,19] as
$$\vartheta^* = \frac{\ln\bigl(\|x_{\tau+1} - x_\tau\| / \|x_\tau - x_{\tau-1}\|\bigr)}{\ln\bigl(\|x_\tau - x_{\tau-1}\| / \|x_{\tau-1} - x_{\tau-2}\|\bigr)}, \quad \text{for } \tau = 2, 3, \ldots.$$
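The ACOC uses only successive iterate differences, so it needs no knowledge of x*. The following sketch computes it for Newton's method on a scalar equation (the function and root are illustrative, not from the paper):

```python
import math

def acoc(xs):
    """Approximate computational order of convergence from a list of iterates."""
    d = [abs(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
    return [math.log(d[k + 1] / d[k]) / math.log(d[k] / d[k - 1])
            for k in range(1, len(d) - 1)]

# Newton iterates for F(x) = x^2 - 2 (second-order convergence expected).
xs = [2.0]
for _ in range(5):
    x = xs[-1]
    xs.append(x - (x * x - 2) / (2 * x))

print(acoc(xs)[-1])   # approaches 2, confirming quadratic convergence
```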
Further, we choose ϵ = 10 300 as the error tolerance. The programming termination criteria are given below:
(i)
‖x_{τ+1} − x_τ‖ < ϵ and
(ii)
‖F(x_τ)‖ < ϵ.
All calculations were carried out using the Mathematica v.11 software program with multiple-precision arithmetic. The configuration of the computer used is given below:
Processor: Intel(R) Core(TM) i7-4790 CPU @ 3.60 GHz;
Manufacturer: HP (Palo Alto, CA, USA);
RAM: 8.00 GB (7.89 GB usable);
System type: 64-bit Operating System, x64-based processor;
Windows edition: Windows 10 Enterprise;
Windows version: 22H2.

4.1. Examples for LAC

In order to emphasize the theoretical concepts related to local convergence outlined in Section 2, we choose two specific examples, denoted as Examples 1 and 2. These examples are selected to demonstrate the applicability of scheme (2) to real-life problems. In addition, they advance our comprehension of local convergence.
Example 1.
In the field of science and engineering, a nonlinear system of order 3 × 3 with polynomial and exponential terms presents a difficult mathematical problem. Addressing such complexity is required for the application of advanced mathematical tools and numerical schemes. As a demonstration, we focus on the following nonlinear equation system:
$$F(v) = \bigl(10x + \sin(x + y) - 1,\; 8y - \cos^2(z - y) - 1,\; 12z + \sin z - 1\bigr)^{tr},$$
where Ω = U [ 0 , 1 ] , B 1 = B 2 = R 3 , and v = ( x , y , z ) t r . Then, the Fréchet derivative is given by
$$F'(v) = \begin{pmatrix} 10 + \cos(x + y) & \cos(x + y) & 0\\ 0 & 8 - \sin 2(z - y) & \sin 2(z - y)\\ 0 & 0 & 12 + \cos z \end{pmatrix}.$$
Notice also that x* ≈ (0.068978, 0.246442, 0.076929)^{tr}, with F(x*) = 0. By plugging the values of F into the conditions of Theorem 1, we see that they are validated if we choose the following values:
$$\eta_0(\chi) = \eta(\chi) = \chi, \quad \text{and} \quad \eta_1(\chi) = 1 + \chi.$$
In Table 1, we present radii for Example 1.
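As a sanity check on the tabulated radii, the smallest positive root of g₀(ζ) − 1 = 0 from (A2) can be computed numerically. With the choices above (η₀(χ) = η(χ) = χ) and the g₀ of scheme II in Remark 1 as reconstructed here, g₀(ζ) = (ζ/2)/(1 − ζ); the code below is an illustrative sketch, not the paper's implementation:

```python
# rho_0 is the smallest positive root of g0(t) - 1 = 0, where
# g0(t) = (int_0^1 eta((1-k)t) dk) / (1 - eta_0(t)) = (t/2)/(1 - t)
# for eta_0(t) = eta(t) = t.
def g0(t):
    return (t / 2.0) / (1.0 - t)

def bisect(f, a, b, tol=1e-12):
    """Smallest root of f in (a, b) by bisection, assuming f(a) < 0 < f(b)."""
    while b - a > tol:
        m = 0.5 * (a + b)
        if f(m) < 0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

rho0 = bisect(lambda t: g0(t) - 1.0, 0.0, 0.999)
print(rho0)   # 2/3, the classical Newton radius when the Lipschitz constants equal 1
```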
Example 2.
We choose a well-known nonlinear integral equation with a Hammerstein operator, a mathematical problem that arises in many disciplines, from engineering to physics. This problem combines integral and nonlinear components, and analytical solutions to such problems are either extremely difficult to obtain or nonexistent. As an illustration, we consider Ω = U[0, 1] and B₁ = B₂ = C[0, 1]. In this context, we encounter a nonlinear integral equation involving the Hammerstein operator, which is given by
$$F(v)(x) = v(x) - 7\int_0^1 x\,\Delta\, v(\Delta)^3\,d\Delta.$$
The derivative of operator F is given below:
$$F'(v)(q)(x) = q(x) - 21\int_0^1 x\,\Delta\,v(\Delta)^2 q(\Delta)\,d\Delta,$$
for q C [ 0 , 1 ] . The values of operator F satisfy the hypotheses of Theorem 1 if we choose
$$\eta_0(\chi) = 21\chi, \quad \eta(\chi) = 42\chi, \quad \text{and} \quad \eta_1(\chi) = 1 + 21\chi.$$
In Table 2, we present radii for Example 2.

4.2. Examples for SLAC

For semi-local convergence, we choose three different problems, designated as Examples 3–5, to illustrate the theoretical results. Examples 3 and 4 are the well-known 2D Bratu problem and the Van der Pol equation, applied science problems that comprise nonlinear systems of orders 256 × 256 and 100 × 100, respectively. In the last one, Example 5, we adopt an academic exercise that comprises a very large nonlinear system of order 500 × 500.
Example 3.
The 2D Bratu problem [2,3] is defined as
$$u_{xx} + u_{tt} + C e^{u} = 0 \quad \text{on } \Omega: \{(x, t) : 0 \le x \le 1,\ 0 \le t \le 1\}, \quad \text{with boundary conditions } u = 0 \text{ on } \partial\Omega.$$
We examine the mesh grid points, where η i , j = u ( x i , t j ) represents the approximate solution. Let M and N denote the number of steps in the x and t directions and h and k represent their respective step sizes. To address PDE (9), central differences are employed for approximating u x x and u t t , given by
$$u_{xx}(x_i, t_j) = \frac{\eta_{i+1,j} - 2\eta_{i,j} + \eta_{i-1,j}}{h^2}, \quad \text{with } C = 0.1,$$
for t in the interval [ 0 , 1 ] . For M = 17 and N = 17 , expression (10) leads to a nonlinear system comprising 256 × 256 equations. For this system, the initial guess is chosen as x 0 = 0.1 sin ( π h ) sin ( π k ) , , ; sin ( 6 π h ) sin ( 6 π k ) T , which converges to the following solution:
x * = 0.000562 , 0.000951 , 0.00123 , 0.00144 , 0.00160 , 0.00171 , 0.00178 , 0.00181 , 0.00181 , 0.00178 , 0.00171 , 0.00160 , 0.00144 , 0.00123 , 0.000951 , 0.000562 , 0.000951 , 0.00166 , 0.00219 , 0.00260 , 0.00290 , 0.00311 , 0.00325 , 0.00332 , 0.00332 , 0.00325 , 0.00311 , 0.00290 , 0.00260 , 0.00219 , 0.00166 , 0.000951 , 0.00123 , 0.00219 , 0.00294 , 0.00351 , 0.00394 , 0.00425 , 0.00445 , 0.00454 , 0.00454 , 0.00445 , 0.00425 , 0.00394 , 0.00351 , 0.00294 , 0.00219 , 0.00123 , 0.00144 , 0.00260 , 0.00351 , 0.00423 , 0.00476 , 0.00515 , 0.00540 , 0.00552 , 0.00552 , 0.00540 , 0.00515 , 0.00476 , 0.00423 , 0.00351 , 0.00260 , 0.00144 , 0.00160 , 0.00290 , 0.00394 , 0.00476 , 0.00539 , 0.00583 , 0.00613 , 0.00627 , 0.00627 , 0.00613 , 0.00583 , 0.00539 , 0.00476 , 0.00394 , 0.00290 , 0.00160 , 0.00171 , 0.00311 , 0.00425 , 0.00515 , 0.00583 , 0.00633 , 0.00665 , 0.00681 , 0.00681 , 0.00665 , 0.00633 , 0.00583 , 0.00515 , 0.00425 , 0.00311 , 0.00171 , 0.00178 , 0.00325 , 0.00445 , 0.00540 , 0.00613 , 0.00665 , 0.00700 , 0.00717 , 0.00717 , 0.00700 , 0.00665 , 0.00613 , 0.00540 , 0.00445 , 0.00325 , 0.00178 , 0.00181 , 0.00332 , 0.00454 , 0.00552 , 0.00627 , 0.00681 , 0.00717 , 0.00734 , 0.00734 , 0.00717 , 0.00681 , 0.00627 , 0.00552 , 0.00454 , 0.00332 , 0.00181 , 0.00181 , 0.00332 , 0.00454 , 0.00552 , 0.00627 , 0.00681 , 0.00717 , 0.00734 , 0.00734 , 0.00717 , 0.00681 , 0.00627 , 0.00552 , 0.00454 , 0.00332 , 0.00181 , 0.00178 , 0.00325 , 0.00445 , 0.00540 , 0.00613 , 0.00665 , 0.00700 , 0.00717 , 0.00717 , 0.00700 , 0.00665 , 0.00613 , 0.00540 , 0.00445 , 0.00325 , 0.00178 , 0.00171 , 0.00311 , 0.00425 , 0.00515 , 0.00583 , 0.00633 , 0.00665 , 0.00681 , 0.00681 , 0.00665 , 0.00633 , 0.00583 , 0.00515 , 0.00425 , 0.00311 , 0.00171 , 0.00160 , 0.00290 , 0.003948 , 0.00476 , 0.00539 , 0.00583 , 0.00613 , 0.00627 , 0.00627 , 0.00613 , 0.00583 , 0.00539 , 0.00476 , 0.00394 , 0.00290 , 0.00160 , 0.00144 , 0.00260 , 0.00351 , 0.00423 , 0.00476 , 0.00515 , 
0.00540 , 0.00552 , 0.00552 , 0.00540 , 0.005153 , 0.00476 , 0.00423 , 0.00351 , 0.00260 , 0.00144 , 0.00123 , 0.00219 , 0.00294 , 0.00351 , 0.00394 , 0.00425 , 0.00445 , 0.00454 , 0.00454 , 0.00445 , 0.00425 , 0.00394 , 0.00351 , 0.00294 , 0.00219 , 0.00123 , 0.000951 , 0.00166 , 0.00219 , 0.00260 , 0.00290 , 0.00311 , 0.00325 , 0.00332 , 0.00332 , 0.00325 , 0.00311 , 0.00290 , 0.00260 , 0.00219 , 0.00166 , 0.000951 , 0.000562 , 0.000951 , 0.00123 , 0.00144 , 0.00160 , 0.00171 , 0.00178 , 0.00181 , 0.00181 , 0.00178 , 0.00171 , 0.00160 , 0.00144 , 0.00123 , 0.000951 , 0.000562 t r .
Furthermore, in Table 3, we present key details regarding this example, including the number of iterations, residual error, error between two consecutive iterations, convergence order, and CPU timing, all of which provide valuable insights into the performance of the method applied to Example 3.
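For readers wishing to reproduce the setup, the following sketch assembles the discretized 2D Bratu residual with the 5-point Laplacian and solves it with a plain Newton iteration. The grid size, Newton variant, and finite-difference Jacobian here are illustrative simplifications, not the paper's scheme or grid (the paper uses a 256 × 256 system).

```python
import numpy as np

def bratu_residual(u_flat, m, C=0.1):
    """Residual of u_xx + u_tt + C*exp(u) = 0 with u = 0 on the boundary."""
    h = 1.0 / (m + 1)
    u = np.zeros((m + 2, m + 2))          # boundary values stay 0
    u[1:-1, 1:-1] = u_flat.reshape(m, m)
    lap = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
           - 4.0 * u[1:-1, 1:-1]) / h**2  # 5-point Laplacian
    return (lap + C * np.exp(u[1:-1, 1:-1])).ravel()

def newton(F, x0, tol=1e-10, max_iter=30):
    """Newton's method with a forward-difference Jacobian (for small systems)."""
    x = x0.copy()
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        J = np.zeros((x.size, x.size))
        eps = 1e-7
        for j in range(x.size):
            e = np.zeros_like(x); e[j] = eps
            J[:, j] = (F(x + e) - r) / eps
        x -= np.linalg.solve(J, r)
    return x

m = 8                                      # 8 x 8 interior grid for illustration
sol = newton(lambda v: bratu_residual(v, m), np.zeros(m * m))
print(np.linalg.norm(bratu_residual(sol, m)))   # residual near zero at the solution
```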
Example 4.
Consider the Van der Pol equation [1], which is given below:
$$y'' - \mu\bigl(y^2 - 1\bigr)y' + y = 0, \quad \mu > 0.$$
This determines the flow of current in a vacuum tube with boundary conditions y ( 0 ) = 0 and y ( 2 ) = 1 . Furthermore, we consider the following partition of the provided interval [ 0 , 2 ] :
$$x_0 = 0 < x_1 < x_2 < x_3 < \cdots < x_\eta, \quad \text{where } x_i = x_0 + ih, \quad h = \frac{2}{\eta}.$$
Furthermore, we suppose that
y 0 = y ( x 0 ) = 0 , y 1 = y ( x 1 ) , , y η 1 = y ( x η 1 ) , y η = y ( x η ) = 1 .
If we discretize the preceding problem (11) using the second-order division difference for the first and second derivatives, we obtain
$$y_\tau' = \frac{y_{\tau+1} - y_{\tau-1}}{2h}, \qquad y_\tau'' = \frac{y_{\tau-1} - 2y_\tau + y_{\tau+1}}{h^2}, \quad \tau = 1, 2, \ldots, \eta - 1.$$
The result is an ( η 1 ) × ( η 1 ) system of nonlinear equations, which is defined by
$$2h^2 x_\tau - h\mu\bigl(x_\tau^2 - 1\bigr)\bigl(x_{\tau+1} - x_{\tau-1}\bigr) + 2\bigl(x_{\tau-1} + x_{\tau+1} - 2x_\tau\bigr) = 0, \quad \tau = 1, 2, \ldots, \eta - 1.$$
Let μ = 1/2, and for the initial approximation, set y_τ^(0) = log(1 + τ²), where τ ranges from 1 to η − 1. In this scenario, we are dealing with a 100 × 100 system of nonlinear equations for η = 101, and the solution is given as a column vector (not a matrix):
x * = 0.0142 , 0.0283 , 0.0423 , 0.0563 , 0.0702 , 0.0841 , 0.0978 , 0.111 , 0.125 , 0.138 , 0.152 , 0.165 , 0.178 , 0.191 , 0.205 , 0.218 , 0.231 , 0.243 , 0.256 , 0.269 , 0.282 , 0.294 , 0.307 , 0.319 , 0.331 , 0.343 , 0.356 , 0.368 , 0.379 , 0.391 , 0.403 , 0.415 , 0.426 , 0.438 , 0.449 , 0.461 , 0.472 , 0.483 , 0.494 , 0.505 , 0.516 , 0.527 , 0.537 , 0.548 , 0.558 , 0.569 , 0.579 , 0.589 , 0.599 , 0.610 , 0.619 , 0.629 , 0.639 , 0.649 , 0.658 , 0.668 , 0.677 , 0.687 , 0.696 , 0.705 , 0.714 , 0.723 , 0.732 , 0.741 , 0.749 , 0.758 , 0.766 , 0.775 , 0.783 , 0.791 , 0.800 , 0.808 , 0.815 , 0.823 , 0.831 , 0.839 , 0.846 , 0.854 , 0.861 , 0.868 , 0.876 , 0.883 , 0.890 , 0.897 , 0.903 , 0.910 , 0.917 , 0.923 , 0.930 , 0.936 , 0.9428 , 0.949 , 0.955 , 0.961 , 0.966 , 0.972 , 0.978 , 0.983 , 0.989 , 0.994 t r .
Table 4 shows the data on the computational order of convergence (COC), CPU timing, the number of iterations, residual errors, and the difference in errors between consecutive iterations for Example 4.
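A small-scale sketch of this boundary-value setup is given below, assuming the discretized residual in the form 2h²y_τ − hμ(y_τ² − 1)(y_{τ+1} − y_{τ−1}) + 2(y_{τ−1} + y_{τ+1} − 2y_τ). The grid size, initial ramp, and Newton solver with a finite-difference Jacobian are illustrative choices, not the paper's (the paper takes η = 101).

```python
import numpy as np

def vdp_residual(y_int, eta=20, mu=0.5):
    """Discretized Van der Pol BVP residual with y(0) = 0, y(2) = 1."""
    h = 2.0 / eta
    y = np.concatenate(([0.0], y_int, [1.0]))   # impose boundary conditions
    t = y[1:-1]                                  # interior values y_1 .. y_{eta-1}
    return (2 * h**2 * t - h * mu * (t**2 - 1) * (y[2:] - y[:-2])
            + 2 * (y[:-2] + y[2:] - 2 * t))

def newton(F, x0, tol=1e-10, max_iter=50):
    """Newton's method with a forward-difference Jacobian (for small systems)."""
    x = x0.copy()
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        J = np.zeros((x.size, x.size))
        eps = 1e-7
        for j in range(x.size):
            e = np.zeros_like(x); e[j] = eps
            J[:, j] = (F(x + e) - r) / eps
        x -= np.linalg.solve(J, r)
    return x

eta = 20
y = newton(vdp_residual, np.linspace(0.0, 1.0, eta + 1)[1:-1])  # linear ramp start
print(np.linalg.norm(vdp_residual(y)))   # residual near zero at the solution
```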
Example 5.
We examine the subsequent nonlinear system:
$$F(X) = \begin{cases} x_j^2\,x_{j+1} - 1 = 0, & 1 \le j \le n - 1,\\ x_n^2\,x_1 - 1 = 0, & j = n. \end{cases}$$
To obtain a large system of nonlinear equations with a size of 500 × 500 , we set n equal to 500 and initialize the solution with x ( 0 ) = ( 1.14 , 1.14 , 1.14 , , 1.14 ) T . The desired solution for this problem is ξ * = ( 1 , 1 , 1 , , 1 ) T . The results obtained are presented in Table 5, which includes data on the computational order of convergence (COC), CPU timing, the number of iterations, residual errors, and the difference in errors between consecutive iterations for Example 5.
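Because this system is cyclic, its Jacobian has only two nonzero entries per row, so a plain Newton iteration is easy to write down. The following sketch uses a small n for illustration (the paper takes n = 500):

```python
import numpy as np

# Cyclic system: row j is x_j^2 * x_{j+1} - 1 = 0, with x_{n+1} = x_1.
def F(x):
    return x**2 * np.roll(x, -1) - 1.0

def J(x):
    """Analytic Jacobian: two nonzero entries per row."""
    n = x.size
    Jm = np.zeros((n, n))
    for j in range(n):
        Jm[j, j] = 2.0 * x[j] * x[(j + 1) % n]   # d/dx_j
        Jm[j, (j + 1) % n] = x[j]**2             # d/dx_{j+1}
    return Jm

n = 50
x = np.full(n, 1.14)                 # initial guess (1.14, ..., 1.14)^T
for _ in range(20):                  # Newton's method
    x -= np.linalg.solve(J(x), F(x))
print(np.max(np.abs(x - 1.0)))       # converges to the vector of ones
```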

5. Conclusions

In concluding remarks, our method offers a significant advance in the field of high-order convergence strategies for solving equations in Banach spaces, unlike prior approaches that depend on high-order derivatives which do not appear in the scheme. We introduce computable error bounds, which not only measure convergence but also establish the uniqueness of the solution. Such issues were not discussed in the earlier study [18]. This is one of the main achievements of our work.
It is important to note that our study has a solid practical foundation and is not restricted to theoretical innovations. We have provided five numerical examples to support our work. The first two are used for local convergence along with radii of convergence, while the other three illustrate semi-local convergence. The addition of a new semi-local convergence study using a majorizing sequence deepens our findings and broadens the range of research in this field. The sizes of the nonlinear systems in the last three problems are 256 × 256, 100 × 100, and 500 × 500, respectively, which are large from a computational point of view. This highlights the effectiveness and reliability of our technique for solving complex and large systems of nonlinear equations. The methodology can be applied to other methods [4,5,8,11,16] in order to extend their applicability along the same lines. This is the direction of future research.

Author Contributions

Conceptualization, R.B., I.K.A. and S.R.; methodology, R.B., I.K.A. and S.R.; software, R.B., I.K.A. and S.R.; validation, R.B., I.K.A. and S.R.; formal analysis, R.B., I.K.A. and S.R.; investigation, R.B., I.K.A. and S.R.; resources, R.B., I.K.A. and S.R.; data curation, R.B., I.K.A. and S.R.; writing—original draft preparation, R.B., I.K.A. and S.R.; writing—review and editing, R.B., I.K.A., H.A. and S.R.; visualization, R.B., I.K.A., H.A. and S.R.; supervision, R.B., I.K.A. and S.R.; project administration, R.B., I.K.A. and S.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

All relevant data are within the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Burden, R.L.; Faires, J.D. Numerical Analysis; PWS Publishing Company: Boston, MA, USA, 2001. [Google Scholar]
  2. Simpson, R.B. A method for the numerical determination of bifurcation states of nonlinear systems of equations. SIAM J. Numer. Anal. 1975, 12, 439–451. [Google Scholar] [CrossRef]
  3. Kapania, R.K. A pseudo-spectral solution of 2-parameter Bratu’s equation. Comput. Mech. 1990, 6, 55–63. [Google Scholar] [CrossRef]
  4. Abdul-Hassan, N.Y.; Ali, A.H.; Park, C. A new fifth-order iterative method free from second derivative for solving nonlinear equations. J. Appl. Math. Comput. 2022, 68, 2877–2886. [Google Scholar] [CrossRef]
  5. Saeed, H.J.; Ali, A.H.; Menzer, R.; Poțclean, A.D.; Arora, H. New Family of Multi-Step Iterative Methods Based on Homotopy Perturbation Technique for Solving Nonlinear Equations. Mathematics 2023, 11, 2603. [Google Scholar] [CrossRef]
  6. Argyros, C.; Argyros, I.K.; George, S.; Regmi, S. Contemporary Algorithms: Theory and Applications; Nova Publication Inc.: Boca Raton, FL, USA, 2023; Volume III. [Google Scholar]
  7. Ezquerro, J.A.; Grau-Sánchez, M.; Grau, A.; Hernández, M.A. Construction of derivative-free iterative methods from Chebyshev’s method. Anal. Appl. 2013, 11, 1350009. [Google Scholar] [CrossRef]
  8. Ezquerro, J.A.; Grau-Sánchez, M.; Grau, A.; Hernández, M.A.; Noguera, M.; Romero, N. On iterative methods with accelerated convergence for solving systems of nonlinear equations. J. Optim. Theory Appl. 2011, 151, 163–174. [Google Scholar] [CrossRef]
  9. Hernández, M.A.; Rubio, M.J. Semilocal convergence of the secant method under mild convergence conditions of differentiability. Comput. Math. Appl. 2002, 44, 277–285. [Google Scholar] [CrossRef]
  10. Magreñán, A.A.; Argyros, I.K. A Contemporary Study of Iterative Methods: Convergence, Dynamics and Applications; Academic Press: Cambridge, MA, USA; Elsevier: Amsterdam, The Netherlands, 2018. [Google Scholar]
  11. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  12. Shakhno, S.M. On an iterative method of order 1.839… for solving nonlinear least squares problems. Appl. Math. Appl. 2005, 261, 253–264. [Google Scholar]
  13. Shakhno, S.M. On an iterative algorithm with superquadratic convergence for solving nonlinear equations. J. Comput. Appl. Math. 2009, 231, 222–235. [Google Scholar] [CrossRef]
  14. Argyros, I.K.; Gutiérrez, J.M. A unifying local and semilocal convergence analysis of Newton-like methods. Adv. Nonlinear Var. Inequal. 2017, 10, 1–11. [Google Scholar]
  15. Argyros, I.K.; Magreñán, A.A. Iterative Methods and Their Dynamics with Applications; CRC Press: New York, NY, USA; Taylor & Francis: Abingdon, UK, 2017. [Google Scholar]
  16. Grau-Sánchez, M.; Noguera, M. A technique to choose the most efficient method between secant method and some variants. Appl. Math. Comput. 2012, 218, 6415–6426. [Google Scholar] [CrossRef]
  17. Kurchatov, V.A. On a method of linear interpolation for the solution of functional equations. Dokl. Akad. Nauk SSSR 1971, 198, 524–526, Translation in Sov. Math. Dokl. 1971, 12, 835–838. [Google Scholar]
  18. Grau-Sánchez, M.; Grau, A.; Noguera, M. Frozen divided difference scheme for solving systems of nonlinear equations. J. Comput. Appl. Math. 2011, 235, 1739–1743. [Google Scholar] [CrossRef]
  19. Genocchi, A. Relation entre la différence et la dérivée d'un même ordre quelconque. Arch. Math. Phys. 1869, 49, 342–345. [Google Scholar]
  20. Potra, F.A.; Pták, V. A generalization of Regula Falsi. Numer. Math. 1981, 36, 333–346. [Google Scholar] [CrossRef]
  21. Potra, F.A.; Pták, V. Nondiscrete Induction and Iterative Processes; Pitman Publishing: Boston, MA, USA, 1984; Volume 250. [Google Scholar]
  22. Grau-Sánchez, M.; Noguera, M.; Gutiérrez, J.M. Frozen iterative methods using divided differences "à la Schmidt–Schwetlick". J. Optim. Theory Appl. 2014, 10, 931–948. [Google Scholar] [CrossRef]
  23. Shakhno, S.M. Convergence of the two-step combined methods and uniqueness of the solution of nonlinear equations. J. Comput. Appl. Math. 2014, 261, 378–386. [Google Scholar] [CrossRef]
  24. Schmidt, J.W.; Schwetlick, H. Ableitungsfreie Verfahren mit höherer Konvergenzgeschwindigkeit. Computing 1968, 3, 215–226. [Google Scholar] [CrossRef]
  25. Ulm, S. Printzip majorant i metod chord. IANESSR Ser Fiz-Matem I Tehn. 1964, 3, 217–227. [Google Scholar]
Table 1. Radii for Example 1.

| Cases | i | ρ_0 | ξ_0 | ξ_1 | ξ_3 | ρ |
|---|---|---|---|---|---|---|
| Case-I | 3 | 0.66667 | 0.66667 | 0.55235 | 0.58519 | 0.55235 |
| Case-II | 3 | 0.66667 | 0.66667 | 0.37627 | 0.45921 | 0.37627 |
Table 2. Radii for Example 2.

| Cases | i | ρ_0 | ξ_0 | ξ_1 | ξ_3 | ρ |
|---|---|---|---|---|---|---|
| Case-I | 3 | 0.023809 | 0.023809 | 0.019504 | 0.020561 | 0.019504 |
| Case-II | 3 | 0.023809 | 0.023809 | 0.014934 | 0.01714 | 0.014934 |
Table 3. Numerical results of Example 3 based on (2).

| Cases | ‖F(x_τ)‖ | ‖x_{τ+1} − x_τ‖ | τ | ϑ | CPU Timing |
|---|---|---|---|---|---|
| Case-I | 5.1 × 10^−795 | 7.0 × 10^−794 | 3 | 6.0000 | 345.443 |
| Case-II | 2.5 × 10^−872 | 3.4 × 10^−871 | 5 | 3.0000 | 508.047 |
Table 4. Computational results of Example 4.

| Cases | ‖F(x_τ)‖ | ‖x_{τ+1} − x_τ‖ | τ | ϑ | CPU Timing |
|---|---|---|---|---|---|
| Case-I | 4.4 × 10^−1142 | 8.8 × 10^−1141 | 5 | 6.0669 | 73.0858 |
| Case-II | 3.2 × 10^−319 | 1.2 × 10^−317 | 7 | 3.0535 | 91.3202 |
Table 5. Computational results of Example 5.

| Cases | ‖F(x_τ)‖ | ‖x_{τ+1} − x_τ‖ | τ | ϑ | CPU Timing |
|---|---|---|---|---|---|
| Case-I | 2.8 × 10^−1081 | 9.2 × 10^−1082 | 4 | 6.0453 | 774.099 |
| Case-II | 5.6 × 10^−557 | 1.9 × 10^−557 | 6 | 3.0220 | 1081.68 |