
Extended Comparative Study between Newton’s and Steffensen-like Methods with Applications

by Ioannis K. Argyros 1, Christopher Argyros 2, Johan Ceballos 3 and Daniel González 3,*

1 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
2 Department of Computing and Technology, Cameron University, Lawton, OK 73505, USA
3 Facultad de Ingeniería y Ciencias Aplicadas, Universidad de Las Américas, Quito 170124, Ecuador
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(16), 2851; https://doi.org/10.3390/math10162851
Submission received: 8 July 2022 / Revised: 27 July 2022 / Accepted: 28 July 2022 / Published: 10 August 2022
(This article belongs to the Section Computational and Applied Mathematics)

Abstract: Comparisons between Newton's and Steffensen-like methods are given for solving systems of equations as well as Banach space valued equations. Our idea of the restricted convergence domain is used to compare the sufficient convergence criteria of these methods under the same conditions as in previous papers. The following advantages are shown: an enlarged convergence domain, tighter error estimates, and more precise information on the location of the solution. These advantages are obtained under the same or at least as tight Lipschitz constants, which are specializations of earlier ones. Hence, the applicability of these methods is extended. Numerical experiments complete this study.

1. Introduction

Many problems in computational sciences and other areas are related to the problem of approximating a locally unique solution $x_*$, using mathematical modeling [1], of a general nonlinear equation or a system of equations of the form
$$G(x) = 0, \qquad (1)$$
with $G$ being a continuous operator mapping a convex subset $D$ of a Banach space $T_1$ into a Banach space $T_2$.
Solutions of such equations can rarely be found in closed form, so most solution methods for these equations are iterative. The study of the convergence of iterative procedures is usually based on two types of analysis: semi-local and local. The semi-local convergence (SL) analysis uses data around an initial point to give criteria assuring the convergence of the iterative procedure, whereas the local one (LC) uses information around a solution to find the radii of convergence balls. Note that in computational sciences, the practice of numerical analysis for finding such solutions is connected to Newton's method (NM),
$$x_0 \text{ given in } D, \qquad x_{j+1} = x_j - G'(x_j)^{-1}G(x_j) \quad \text{for } j = 0, 1, 2, \ldots \qquad (2)$$
NM is undoubtedly the most popular method for generating a sequence $\{x_j\}$ quadratically (under certain hypotheses [1]) converging to $x_*$. Here, $G'(x)^{-1} \in B(T_2, T_1)$, the space of bounded linear operators from $T_2$ into $T_1$. There is extensive literature on local as well as semi-local convergence results for NM under Lipschitz-type conditions.
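To fix notation, a minimal sketch of NM for a system $G : \mathbb{R}^m \to \mathbb{R}^m$ could look as follows; the names (newton, G, dG) are illustrative and not taken from the references above.

```python
import numpy as np

def newton(G, dG, x0, tol=1e-12, max_iter=50):
    """Newton's method (2): x_{j+1} = x_j - G'(x_j)^{-1} G(x_j)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # Solve G'(x) s = G(x) instead of forming the inverse explicitly.
        s = np.linalg.solve(dG(x), G(x))
        x = x - s
        if np.linalg.norm(s) < tol:
            break
    return x

# Scalar instance of Example 1 below: g(x) = x^3 - 0.49, x0 = 1.
root = newton(lambda x: np.array([x[0]**3 - 0.49]),
              lambda x: np.array([[3 * x[0]**2]]),
              [1.0])
```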
There are several iterative processes that use divided differences instead of derivatives, because the operator $G$ may not be differentiable. The operator $[x; y; G] \in B(T_1, T_2)$, for $x, y \in D \subseteq T_1$ with $x \ne y$, is a first divided difference of $G$ when
$$[x; y; G](x - y) = G(x) - G(y).$$
We can consider the approximation $G'(x_j) \approx [h_1(x_j); h_2(x_j); G]$, where $h_1(x_j)$ and $h_2(x_j)$ are known data at the point $x_j$. Depending on the choice of this data, the approximation improves, leading to the Secant-like method [2,3,4,5]
$$x_0 \text{ given in } D, \qquad x_{j+1} = x_j - [h_1(x_j); h_2(x_j); G]^{-1}G(x_j) \quad \text{for } j = 0, 1, 2, \ldots \qquad (3)$$
In general, symmetric divided differences perform better. We can see this in the Center–Steffensen method ($h_1(x_j) = x_j - G(x_j)$ and $h_2(x_j) = x_j + G(x_j)$) [2,6] and the Kurchatov method ($h_1(x_{j-1}, x_j) = x_{j-1}$ and $h_2(x_{j-1}, x_j) = 2x_j - x_{j-1}$) [7]; they both maintain the quadratic convergence of Newton's method by approximating the derivative through divided differences that are symmetric with respect to $x_j$ [1,3,8,9,10,11]. Following this idea, in this work we consider the derivative-free point-to-point iterative process of the Steffensen-like method given by
$$x_0 \text{ given in } D, \qquad x_{j+1} = x_j - [x_j - \tilde\delta\|G'(x_0)^{-1}G(x_j)\|;\ x_j + \tilde\delta\|G'(x_0)^{-1}G(x_j)\|;\ G]^{-1}G(x_j) \quad \text{for } j = 0, 1, 2, \ldots, \qquad (4)$$
where $\tilde\delta = (\delta, \delta, \ldots, \delta) \in \mathbb{R}^m$ for a real number $\delta > 0$. Thus, we are considering a symmetric divided difference to approximate the derivative in NM. Furthermore, by varying the parameter $\delta$, we can approach the value $G'(x_j)$.
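A minimal sketch of method (4) in the scalar case ($m = 1$), under the reconstruction above, might read as follows; $G'(x_0)$ enters only through the fixed scaling constant $c_0 = G'(x_0)^{-1}$, and all names are illustrative.

```python
def steffensen_like(G, dG0, x0, delta=0.01, tol=1e-12, max_iter=50):
    """Steffensen-like method (4), scalar case: the derivative is replaced
    by the symmetric divided difference [x - s; x + s; G] with
    s = delta * |G'(x_0)^{-1} G(x)|."""
    x, c0 = float(x0), 1.0 / dG0
    for _ in range(max_iter):
        gx = G(x)
        s = delta * abs(c0 * gx)        # half-width of the symmetric difference
        if s == 0.0:                    # G(x) = 0, so x is already a solution
            break
        dd = (G(x + s) - G(x - s)) / (2 * s)   # [x - s; x + s; G]
        x_new = x - gx / dd
        if abs(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    return x

# Example 1's equation: g(x) = x^3 - 0.49 with x0 = 1 and g'(1) = 3.
root = steffensen_like(lambda x: x**3 - 0.49, 3.0, 1.0)
```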
In a similar way, we can consider the Inexact Steffensen-like method
$$x_0 \text{ given in } D, \qquad x_{j+1} = x_j - A_j^{-1}G(x_j) \quad \text{for } j = 0, 1, 2, \ldots, \qquad (5)$$
$$A_j = [x_j - \alpha G(x_j);\ x_j + \beta G(x_j);\ G], \qquad \alpha = \|G'(x_0)^{-1}\|\,a, \quad \beta = \|G'(x_0)^{-1}\|\,b, \quad a, b \in \mathbb{R}.$$
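A scalar sketch of method (5), analogous to the previous one, is given below; here only the norm $\|G'(x_0)^{-1}\|$ is needed, and $\alpha$, $\beta$ stay fixed for all iterations (names illustrative).

```python
def inexact_steffensen(G, norm_inv_dG0, x0, a=0.01, b=0.01,
                       tol=1e-12, max_iter=50):
    """Inexact Steffensen-like method (5), scalar case:
    A_j = [x_j - alpha*G(x_j); x_j + beta*G(x_j); G] with
    alpha = ||G'(x_0)^{-1}|| * a and beta = ||G'(x_0)^{-1}|| * b."""
    alpha, beta = norm_inv_dG0 * a, norm_inv_dG0 * b
    x = float(x0)
    for _ in range(max_iter):
        gx = G(x)
        u, v = x - alpha * gx, x + beta * gx   # the two nodes of A_j
        if u == v:                             # G(x) = 0: x solves the equation
            break
        A = (G(v) - G(u)) / (v - u)            # divided difference A_j
        x_new = x - gx / A
        if abs(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    return x
```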
Our objective in this work is to verify that the iterative processes (4) and (5) behave like Newton's method in differentiable situations and maintain this behavior in non-differentiable situations, where Newton's method is not applicable.
The rest of the work is organized as follows: the SL analysis of methods (4) and (5) is given in Section 2 and Section 3, respectively, whereas the numerical examples are given in Section 4. A discussion is presented in Section 5, and the conclusions can be found in Section 6.

2. Convergence for Steffensen-like Method

Throughout this work, we denote by $U(x, \rho) = \{y \in T_1 : \|y - x\| < \rho\}$ and $\bar U(x, \rho) = \{y \in T_1 : \|y - x\| \le \rho\}$, respectively, the open and closed balls with center $x \in T_1$ and radius $\rho > 0$.
We start by presenting our extension of the celebrated Newton–Kantorovich theorem for solving nonlinear equations given in [1] under the following conditions:
($H_1$) There exist $x_0 \in D$ and $d \ge 0$ such that $G'(x_0)^{-1} \in B(T_2, T_1)$ and
$$\|G'(x_0)^{-1}G(x_0)\| \le d.$$
($H_2'$) There exists $M_0 > 0$ such that for all $x \in D$
$$\|G'(x_0)^{-1}(G'(x) - G'(x_0))\| \le M_0\|x - x_0\|.$$
Set $S = D \cap U(x_0, 1/M_0)$.
($H_2$) There exists $M > 0$ such that for all $x, y \in S$
$$\|G'(x_0)^{-1}(G'(y) - G'(x))\| \le M\|y - x\|.$$
Theorem 1
(Extended Newton–Kantorovich theorem). Let $G : D \subseteq T_1 \to T_2$ be a continuously Fréchet differentiable operator. Assume that conditions ($H_1$)–($H_2$) and $\bar U(x_0, s_*) \subseteq D$ are satisfied. Further, assume
$$Md \le \frac{1}{2}. \qquad (6)$$
Then, the sequence $\{x_j\}$ generated by Newton's method (2) is well defined in $U(x_0, s_*)$, remains in $U(x_0, s_*)$ and converges to a solution $x_* \in \bar U(x_0, s_*)$ of the equation $G(x) = 0$, where $s_* = \frac{1 - \sqrt{1 - 2Md}}{M}$ is the smallest positive zero of the polynomial $p(r) = \frac{M}{2}r^2 - r + d$. Moreover, the following estimates hold
$$\|x_{j+1} - x_j\| \le \frac{\bar M\,(r_{j+1} - r_j)^2}{2(1 - M_0 r_{j+1})} \le \frac{M\,(s_{j+1} - s_j)^2}{2(1 - M s_{j+1})}$$
and
$$\|x_j - x_*\| \le s_* - s_j,$$
where
$$\bar M = \begin{cases} M_0, & j = 1,\\ M, & j = 2, 3, \ldots, \end{cases}$$
$$r_0 = 0, \quad r_1 = d, \quad r_{j+1} = r_j + \frac{M\,(r_j - r_{j-1})^2}{2(1 - M_0 r_j)},$$
$$s_0 = 0, \quad s_1 = d, \quad s_{j+1} = s_j + \frac{M\,(s_j - s_{j-1})^2}{2(1 - M s_j)} = s_j - \frac{p(s_j)}{p'(s_j)}$$
and
$$r_* \le s_* = \lim_{j \to \infty} s_j.$$
Furthermore, if there exists $\bar s \ge s_*$ such that
$$M_0(s_* + \bar s) < 2,$$
the limit point $x_*$ is the only solution of the equation $G(x) = 0$ in the set $S_0 = D \cap \bar U(x_0, \bar s)$.
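The sequences $\{r_j\}$ and $\{s_j\}$ can be generated directly from their recursions; a sketch with illustrative names follows. With the data of Example 1 below ($d = 0.17$, $M_0 = 2.51$, $M = 2(1 + 1/M_0)$), its output reproduces, up to rounding, the first two columns of Table 1.

```python
def majorizing_sequences(M0, M, d, n=10):
    """Majorizing sequences of Theorem 1 (assumes M*d <= 1/2):
    r_j uses the tighter center constant M0 in its denominator,
    while s_j is the classical Kantorovich sequence driven by M alone."""
    r, s = [0.0, d], [0.0, d]
    for j in range(1, n):
        r.append(r[j] + M * (r[j] - r[j-1])**2 / (2 * (1 - M0 * r[j])))
        s.append(s[j] + M * (s[j] - s[j-1])**2 / (2 * (1 - M * s[j])))
    return r, s

# Data of Example 1: d = 0.17, M0 = 2.51, M = 2*(1 + 1/2.51).
r, s = majorizing_sequences(2.51, 2 * (1 + 1 / 2.51), 0.17)
```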
Remark 1.
The following Lipschitz condition has been used for some $M_1 > 0$ and all $x, y \in D$:
$$\|G'(x_0)^{-1}(G'(y) - G'(x))\| \le M_1\|y - x\|.$$
However, then
$$M_0 \le M_1 \qquad (7)$$
and
$$M \le M_1, \qquad (8)$$
since
$$S \subseteq D. \qquad (9)$$
The sufficient SL convergence condition given by Kantorovich [1] (see also [3,6,8,9,10,11]) under ($H_1$) and this Lipschitz condition (in non-affine invariant form) is
$$M_1 d \le \frac{1}{2}. \qquad (10)$$
Then, by (6)–(10), we have
$$M_1 d \le \frac{1}{2} \implies Md \le \frac{1}{2},$$
but not vice versa, unless $M_0 = M = M_1$. The error estimates under (10) are less precise, as are the uniqueness results, since $M_1$ replaces $M$ (and $M_0$).
Similar extensions hold in the case of the Steffensen-like method (4). Indeed, let us consider $T_1 = T_2 = \mathbb{R}^m$ and present the semi-local convergence result of Theorem 1 in [2], but given in affine invariant form.
Theorem 2.
Let $G : D \subseteq \mathbb{R}^m \to \mathbb{R}^m$ be a continuously differentiable operator. Assume that condition ($H_1$) and the Lipschitz condition of Remark 1 with
$$\delta < \frac{2}{M_1 d}$$
are satisfied. Moreover, assume
$$K_1 \beta_1 d \le \frac{1}{2}$$
and
$$\bar U(x_0, \bar r_* + \delta d) \subseteq D,$$
where $\bar r_* = \frac{1 - \sqrt{1 - 2K_1\beta_1 d}}{\beta_1 K_1}$ is the smallest positive root of the polynomial
$$q_1(t) = \frac{K_1}{2}t^2 - \frac{t}{\beta_1} + d,$$
$$K_1 = M_1(1 + \delta\beta_1) \quad \text{and} \quad \beta_1 = \frac{2}{2 - \delta M_1 d}.$$
Then, the sequence $\{x_j\}$ generated by the Steffensen-like method (4) is well defined in $U(x_0, \bar r_* + \delta d)$, remains in $U(x_0, \bar r_* + \delta d)$ and converges to a solution $x_*$ of the equation $G(x) = 0$. Moreover, the following error estimates hold
$$\|x_{j+1} - x_j\| \le \bar r_{j+1} - \bar r_j,$$
$$\|x_j - x_*\| \le \bar r_* - \bar r_j,$$
$$\|x_j - x_*\| \le \frac{\tau^{2^j}(\bar r_{**} - \bar r_*)}{1 - \tau^{2^j}}, \quad \tau = \frac{\bar r_*}{\bar r_{**}} < 1, \quad \text{if } \bar r_* < \bar r_{**}$$
and
$$\|x_j - x_*\| \le \frac{\bar r_*}{2^j}, \quad \text{if } \bar r_* = \bar r_{**},$$
where
$$\bar r_{**} = \frac{1 + \sqrt{1 - 2K_1\beta_1 d}}{\beta_1 K_1}$$
and
$$\bar r_0 = 0, \quad \bar r_{j+1} = \bar r_j + \frac{K_1(\bar r_j - \bar r_{j-1})^2}{2(1 - K_1\beta_1\bar r_j)} = \bar r_j - \frac{q_1(\bar r_j)}{q_1'(\bar r_j)}.$$
Furthermore, the limit point $x_*$ is unique in $U(x_0, \rho) \cap D$, where
$$\rho = \frac{2}{M_1} - (\bar r_* + \delta d).$$
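For completeness, a sketch of the corresponding computation for Theorem 2 follows. The starting value $\bar r_1 = \beta_1 d$ (the first Newton step on $q_1$ from $0$) and the error branch are our assumptions, not statements from [2].

```python
import math

def theorem2_sequence(M1, d, delta, n=10):
    """Majorizing sequence of Theorem 2 (sketch); requires
    delta < 2/(M1*d) and K1*beta1*d <= 1/2."""
    beta1 = 2 / (2 - delta * M1 * d)
    K1 = M1 * (1 + delta * beta1)
    if K1 * beta1 * d > 0.5:
        raise ValueError("sufficient convergence condition fails")
    rbar_star = (1 - math.sqrt(1 - 2 * K1 * beta1 * d)) / (beta1 * K1)
    r = [0.0, beta1 * d]   # assumed first step: Newton on q1 from 0
    for j in range(1, n):
        r.append(r[j] + K1 * (r[j] - r[j-1])**2
                 / (2 * (1 - K1 * beta1 * r[j])))
    return r, rbar_star
```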
Remark 2.
In order to compare Theorem 2 with its extension that follows as in [2], we introduce the divided difference of order one $[\cdot\,;\cdot\,; G] : D \times D \to B(T_1, T_2)$ [2,3,7] given by
$$[x; y; G] = \int_0^1 G'(\theta x + (1 - \theta)y)\,d\theta, \qquad (19)$$
with
$$[x; x; G] = G'(x) \quad \text{if } G \text{ is differentiable}.$$
Using ($H_2'$) (instead of the $M_1$-Lipschitz condition used in [2]) and (19), we get the following extension of Theorem 2.
Assume
$$\delta M_0 d < 2. \qquad (20)$$
Then, we have
$$\|G'(x_0)^{-1}(A_0 - G'(x_0))\| \le \frac{\delta M_0 d}{2} < 1,$$
from which it follows, by the Banach lemma on invertible operators [1,5,8], that
$$A_0^{-1} \in B(T_2, T_1)$$
and
$$\|A_0^{-1}G'(x_0)\| \le \frac{1}{1 - \frac{\delta M_0 d}{2}} =: \beta.$$
Set
$$K = M(1 + \delta\beta), \quad K_0 = M_0(1 + \delta\beta),$$
$$q(t) = \frac{K}{2}t^2 - \frac{t}{\beta} + d, \quad \bar s_* = \frac{1 - \sqrt{1 - 2\beta K d}}{\beta K}, \quad \bar s_{**} = \frac{1 + \sqrt{1 - 2\beta K d}}{\beta K},$$
$$\bar s_0 = 0, \quad \bar s_{j+1} = \bar s_j + \frac{K(\bar s_j - \bar s_{j-1})^2}{2(1 - K\beta\bar s_j)} = \bar s_j - \frac{q(\bar s_j)}{q'(\bar s_j)},$$
$$\bar{\bar s}_0 = 0, \quad \bar{\bar s}_{j+1} = \bar{\bar s}_j + \frac{\bar K(\bar{\bar s}_j - \bar{\bar s}_{j-1})^2}{2(1 - K_0\beta\bar{\bar s}_j)},$$
$$\bar K = \begin{cases} K_0, & j = 1,\\ K, & j = 2, 3, \ldots, \end{cases} \qquad \rho_0 = \frac{2}{M_0} - (\bar{\bar s}_* + \delta d)$$
and
$$\bar{\bar s}_* = \lim_{j \to \infty} \bar{\bar s}_j.$$
Hence, we get:
Theorem 3.
Let $G : D \subseteq \mathbb{R}^m \to \mathbb{R}^m$ be a continuously differentiable operator. Assume that conditions ($H_1$)–($H_2$) and (20) are satisfied. Moreover, assume
$$K\beta d \le \frac{1}{2}$$
and
$$\bar U(x_0, \bar{\bar s}_* + \delta d) \subseteq D.$$
Then, the sequence $\{x_j\}$ generated by the Steffensen-like method (4) is well defined in $U(x_0, \bar{\bar s}_* + \delta d)$, remains in $U(x_0, \bar{\bar s}_* + \delta d)$ and converges to a solution $x_*$ of the equation $G(x) = 0$. Moreover, the following error estimates hold
$$\|x_{j+1} - x_j\| \le \bar{\bar s}_{j+1} - \bar{\bar s}_j \le \bar s_{j+1} - \bar s_j,$$
$$\|x_j - x_*\| \le \bar{\bar s}_* - \bar{\bar s}_j \le \bar s_* - \bar s_j,$$
$$\|x_j - x_*\| \le \frac{\tau_0^{2^j}(\bar s_{**} - \bar s_*)}{1 - \tau_0^{2^j}}, \quad \tau_0 = \frac{\bar s_*}{\bar s_{**}} < 1, \quad \text{if } \bar s_* < \bar s_{**}$$
and
$$\|x_j - x_*\| \le \frac{\bar s_*}{2^j}, \quad \text{if } \bar s_* = \bar s_{**}.$$
Furthermore, the limit point $x_*$ is unique in $U(x_0, \rho_0) \cap D$.
Proof. 
Simply repeat the proof of Theorem 2 in [2], but in affine invariant form, and use $M_0$ instead of $M_1$ for the upper bounds on the inverses of the operators involved.  □
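The two sequences of Theorem 3 can be compared numerically with a sketch like the following; the starting value $\beta d$ (the first Newton step on $q$ from $0$) is our assumption.

```python
def theorem3_sequences(M0, M, d, delta, n=10):
    """Sequences of Theorem 3 (sketch): the doubly barred sequence
    uses the tighter K0 in its denominator, so its terms never
    exceed those of the singly barred one (Remark 3)."""
    beta = 1 / (1 - delta * M0 * d / 2)          # requires delta*M0*d < 2
    K0, K = M0 * (1 + delta * beta), M * (1 + delta * beta)
    sb, sbb = [0.0, beta * d], [0.0, beta * d]   # assumed first steps
    for j in range(1, n):
        Kbar = K0 if j == 1 else K
        sb.append(sb[j] + K * (sb[j] - sb[j-1])**2
                  / (2 * (1 - K * beta * sb[j])))
        sbb.append(sbb[j] + Kbar * (sbb[j] - sbb[j-1])**2
                   / (2 * (1 - K0 * beta * sbb[j])))
    return sb, sbb
```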
Remark 3.
In view of (7)–(9), we have
$$\beta \le \beta_1, \quad M_0 \le M_1, \quad M \le M_1,$$
$$\tau_0 \le \tau, \quad \rho < \rho_0,$$
$$\bar{\bar s}_j \le \bar s_j \le \bar r_j,$$
$$\bar{\bar s}_{j+1} - \bar{\bar s}_j \le \bar s_{j+1} - \bar s_j \le \bar r_{j+1} - \bar r_j,$$
$$\bar s_* \le \bar r_*,$$
$$K_1\beta_1 d \le \frac{1}{2} \implies K\beta d \le \frac{1}{2}$$
and
$$\delta M_1 d < 2 \implies \delta M_0 d < 2,$$
which justify the advantages stated in the Introduction. The computation of the parameter $M_1$ requires that of $M_0$ and $M$. Hence, these advantages are obtained under the same computational cost as before. A further improvement can be obtained if $S$ in ($H_2$) is replaced by $S_1 = U(x_1, 1/M_0 - \|x_1 - x_0\|)$, since in this case a tighter $\bar M$ can replace $M$ in all previous results, with $\bar M \le M$, since $S_1 \subseteq S$.

3. Convergence of Inexact Steffensen-like Method

We shall first develop an auxiliary result concerning a majorizing sequence for method (5). Let $a$ and $b$ be real numbers. Define the parameters and functions
$$\gamma = \int_0^1 |(1 - \theta)b - \theta a|\,d\theta,$$
$$\beta_0 = \frac{1}{1 - M_0\gamma d},$$
$$N_0 = \frac{M_0}{2}(1 + 2\gamma\beta_0),$$
$$N = \frac{M}{2}(1 + 2\gamma\beta_0),$$
$$\alpha = \frac{-\left(\frac{N}{2} - M_0\gamma\right) + \sqrt{\left(\frac{N}{2} - M_0\gamma\right)^2 + 2NM_0(1 + \gamma)}}{2M_0(1 + \gamma)},$$
$$\varphi_j(t) = M_0\gamma t^j d + M_0(1 + t + \cdots + t^j)d + \frac{N}{2}t^{j-1}d - 1,$$
$$g(t) = M_0(1 + \gamma)t^2 + \left(\frac{N}{2} - M_0\gamma\right)t - \frac{N}{2},$$
$$u_0 = 0, \quad u_1 = d, \quad u_{j+2} = u_{j+1} + \frac{\bar N(u_{j+1} - u_j)^2}{2\big(1 - M_0(u_{j+1} + \gamma(u_{j+1} - u_j))\big)}, \qquad (23)$$
where
$$\bar N = \begin{cases} N_0, & j = 0,\\ N, & j = 1, 2, \ldots \end{cases}$$
Notice that $\alpha$ is the unique positive root of the equation $g(t) = 0$.
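These parameters are straightforward to evaluate; a sketch follows, in which the crude Riemann sum for $\gamma$ and the function name are our choices. (For $a = b$, the integral reduces to $\gamma = a/2$.)

```python
import math

def method5_parameters(M0, M, d, a, b, n_quad=10000):
    """Parameters for method (5): gamma, beta_0, N_0, N and alpha,
    where alpha is the positive root of g, obtained here by the
    quadratic formula."""
    # gamma = int_0^1 |(1 - t)*b - t*a| dt, via a midpoint Riemann sum
    gamma = sum(abs((1 - (k + 0.5) / n_quad) * b - ((k + 0.5) / n_quad) * a)
                for k in range(n_quad)) / n_quad
    beta0 = 1 / (1 - M0 * gamma * d)
    N0 = (M0 / 2) * (1 + 2 * gamma * beta0)
    N = (M / 2) * (1 + 2 * gamma * beta0)
    A, B, C = M0 * (1 + gamma), N / 2 - M0 * gamma, -N / 2
    alpha = (-B + math.sqrt(B * B - 4 * A * C)) / (2 * A)
    return gamma, beta0, N0, N, alpha
```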
The sequence $\{u_j\}$ will be shown to be majorizing for the sequence $\{x_j\}$ in Theorem 4. First, however, two convergence results are presented for the sequence $\{u_j\}$.
Lemma 1.
Assume that for $j = 0, 1, 2, \ldots$
$$M_0(u_{j+1} + \gamma(u_{j+1} - u_j)) < 1. \qquad (24)$$
Then, the following items hold
$$u_j \le u_{j+1} < \mu := \frac{1}{(1 + \gamma)M_0}$$
and
$$\lim_{j \to +\infty} u_j = u_* \le \mu.$$
Proof. 
It follows immediately from the definition of the sequence $\{u_j\}$ and condition (24).  □
Next, a second convergence result follows for the sequence $\{u_j\}$.
Lemma 2.
Let $M_0 > 0$, $M > 0$, $d \ge 0$, and let $a, b$ be real numbers. Assume that
$$0 \le \frac{Nd}{2(1 - M_0(1 + \gamma)d)} \le \alpha \le 1 - M_0 d. \qquad (25)$$
Then, the sequence $\{u_j\}$ generated by (23) is well defined, non-decreasing, bounded from above by $u_{**} = \frac{1}{1-\alpha}d$ and converges to its unique least upper bound $u_*$, which satisfies
$$d \le u_* \le u_{**}. \qquad (26)$$
Proof. 
We shall show, using mathematical induction, that for $m = 0, 1, 2, \ldots$
$$0 \le \frac{N(u_{m+1} - u_m)}{2\big(1 - M_0(u_{m+1} + \gamma(u_{m+1} - u_m))\big)} \le \alpha. \qquad (27)$$
Estimate (27) is satisfied for $m = 0$ by condition (25). Then, by (23), we have
$$0 \le u_2 - u_1 \le \alpha(u_1 - u_0) \quad \text{and} \quad u_2 \le \frac{1 - \alpha^2}{1 - \alpha}d < u_{**}.$$
Assume that
$$0 \le u_j - u_{j-1} \le \alpha(u_{j-1} - u_{j-2}) \quad \text{and} \quad u_{j+1} \le \frac{1 - \alpha^{j+1}}{1 - \alpha}d < u_{**}, \quad j = 1, 2, \ldots, m. \qquad (28)$$
Then, by the induction hypothesis (28), instead of showing (27), it suffices to show
$$0 \le \frac{N\alpha^m d}{2\left(1 - M_0\left(\frac{1 - \alpha^{m+1}}{1 - \alpha}d + \gamma\alpha^m d\right)\right)} \le \alpha \qquad (29)$$
or
$$M_0\gamma\alpha^m d + M_0(1 + \alpha + \cdots + \alpha^m)d + \frac{N}{2}\alpha^{m-1}d - 1 \le 0.$$
Estimate (29) motivates us to introduce the sequence of functions $\{\varphi_j\}$.
We need a relationship between consecutive functions $\varphi_m$. We can write
$$\varphi_{m+1}(t) = M_0\gamma t^{m+1}d + M_0(1 + t + \cdots + t^{m+1})d + \frac{N}{2}t^m d - 1 - M_0\gamma t^m d - M_0(1 + t + \cdots + t^m)d - \frac{N}{2}t^{m-1}d + 1 + \varphi_m(t) = g(t)t^{m-1}d + \varphi_m(t). \qquad (30)$$
In particular, by the definition of $\alpha$ and Equation (30), we get $\varphi_{m+1}(\alpha) = \varphi_m(\alpha)$. However, then, (29) holds if
$$\varphi(\alpha) \le 0, \qquad (31)$$
where
$$\varphi(\alpha) = \lim_{m \to \infty}\varphi_m(\alpha) = \frac{M_0 d}{1 - \alpha} - 1.$$
So, we must show, instead of (31), that $\frac{M_0 d}{1 - \alpha} - 1 \le 0$, which is true by condition (25). Hence, the induction for (27) is complete.
It follows that the sequence $\{u_j\}$ is non-decreasing, bounded from above by $u_{**}$, and as such it converges to its unique least upper bound $u_*$, which satisfies (26).  □
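Generating the sequence (23) is then immediate; a sketch, assuming condition (24) holds so that the denominators stay positive:

```python
def u_sequence(M0, N0, N, gamma, d, n=10):
    """Sequence (23): u_0 = 0, u_1 = d, with Nbar = N0 for j = 0 and
    Nbar = N for j = 1, 2, ..."""
    u = [0.0, d]
    for j in range(n):
        Nbar = N0 if j == 0 else N
        denom = 2 * (1 - M0 * (u[j+1] + gamma * (u[j+1] - u[j])))
        u.append(u[j+1] + Nbar * (u[j+1] - u[j])**2 / denom)
    return u
```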
Next, we present the semi-local convergence analysis of method (5) using conditions ($H_1$), ($H_2'$), ($H_2$), the preceding lemmas, and the above notation.
Theorem 4.
Assume that conditions ($H_1$), ($H_2'$), ($H_2$) with $S$ replaced by $S_2 = D \cap U\left(x_0, \frac{1}{(1+\gamma)M_0}\right)$, $\bar U(x_0, u_* + \delta d) \subseteq D$, and (24) or (25) are satisfied. Then, the sequence $\{x_j\}$ generated by method (5) is well defined in $U(x_0, u_* + \delta d)$, remains in $U(x_0, u_* + \delta d)$ for $j = 0, 1, 2, \ldots$ and converges to a solution $x_* \in \bar U(x_0, u_* + \delta d)$ of the equation $G(x) = 0$. Moreover, the following estimates hold
$$\|x_{j+1} - x_j\| \le u_{j+1} - u_j$$
and
$$\|x_j - x_*\| \le u_* - u_j.$$
Furthermore, the limit point $x_*$ is the only solution of the equation $G(x) = 0$ in the set $S_3 = U(x_0, \rho_1) \cap D$, where $\rho_1 = \frac{2}{M_0} - (u_* + \delta d)$.
Proof. 
It follows from the estimates
$$\begin{aligned}\|G'(x_0)^{-1}(A_m - G'(x_0))\| &= \left\|\int_0^1 G'(x_0)^{-1}\big(G'(x_m + \beta G(x_m) + \theta((x_m - \alpha G(x_m)) - (x_m + \beta G(x_m)))) - G'(x_0)\big)\,d\theta\right\|\\ &= \left\|\int_0^1 G'(x_0)^{-1}\big(G'(x_m + \beta G(x_m) - \theta(\alpha + \beta)G(x_m)) - G'(x_0)\big)\,d\theta\right\|\\ &\le M_0\int_0^1\|x_m - x_0 + (\beta - \theta(\alpha + \beta))G(x_m)\|\,d\theta\\ &\le M_0\big(\|x_m - x_0\| + \gamma\|G'(x_0)^{-1}G(x_m)\|\big) \le M_0\big(u_m + \gamma(u_m - u_{m-1})\big) < M_0(1 + \gamma)\mu = 1.\end{aligned}$$
Therefore,
$$\|A_m^{-1}G'(x_0)\| \le \frac{1}{1 - M_0\big(\|x_m - x_0\| + \gamma\|G'(x_0)^{-1}G(x_m)\|\big)},$$
$$\begin{aligned}G(x_{m+1}) &= G(x_{m+1}) - G(x_m) - [x_m - \alpha G(x_m); x_m + \beta G(x_m); G](x_{m+1} - x_m)\\ &= \big([x_{m+1}; x_m; G] - [x_m - \alpha G(x_m); x_m + \beta G(x_m); G]\big)(x_{m+1} - x_m)\\ &= \big([x_{m+1}; x_m; G] - G'(x_m) + G'(x_m) - [x_m - \alpha G(x_m); x_m + \beta G(x_m); G]\big)(x_{m+1} - x_m)\\ &= \int_0^1\big(G'(x_m + \theta(x_{m+1} - x_m)) - G'(x_m)\big)\,d\theta\,(x_{m+1} - x_m)\\ &\quad + \int_0^1\big(G'(x_m) - G'(x_m + \beta G(x_m) - \theta(\alpha + \beta)G(x_m))\big)\,d\theta\,(x_{m+1} - x_m),\end{aligned}$$
leading to
$$\|G'(x_0)^{-1}G(x_{m+1})\| \le \frac{\bar M}{2}\|x_{m+1} - x_m\|^2 + \bar M\gamma\|G'(x_0)^{-1}G(x_m)\|\,\|x_{m+1} - x_m\| \le \bar N(u_{m+1} - u_m)^2,$$
so we conclude
$$\|x_{m+1} - x_m\| = \|A_m^{-1}G'(x_0)\,G'(x_0)^{-1}G(x_m)\| \le \|A_m^{-1}G'(x_0)\|\,\|G'(x_0)^{-1}G(x_m)\| \le u_{m+1} - u_m.$$
The rest follows as in Theorem 2 in [2].  □
Remark 4.
(a) Comments similar to those given in Remark 3 can follow.
(b) As in the case of Lemma 1, the convergence criterion (6) can be replaced by
$$M_0 r_1 < 1 \quad \text{and} \quad M r_j < 1, \quad j = 2, 3, \ldots \qquad (32)$$
in the case of NM. Finally, it is worth noticing that conditions (24) and (32) are weaker than (25) and (6) (or (10)), respectively.

4. Numerical Examples

Example 1.
Let $T_1 = T_2 = \mathbb{R}$, $x_0 = 1$ and $D = U(x_0, 1 - \lambda)$, $\lambda = 0.49$, $\delta = 0.01$. Define the function $g$ on $D$ by
$$g(x) = x^3 - \lambda.$$
Then, we have
$$d = \frac{1}{3}(1 - \lambda), \quad M_0 = 3 - \lambda < M = 2\left(1 + \frac{1}{M_0}\right) < M_1 = 2(2 - \lambda),$$
$$Md = 2\left(1 + \frac{1}{M_0}\right)\frac{1}{3}(1 - \lambda) = 0.20454183\ldots < 1.$$
We can see that condition (10) is not verified, so there is no SL convergence guarantee for Newton's method under this Kantorovich condition. In contrast, the advantage of the new method (5) is that there is convergence, as we can see in Table 1 and Table 2, obtaining the limits
$$r_* = 0.259251801670772\ldots, \quad s_* = 0.278335281448159\ldots,$$
$$\bar s_* = 0.286717679267024\ldots, \quad u_* = 0.149730996934000\ldots$$
Three sequences appear in Table 1. They are constructed using method (4) with different constants, and we can see that they converge to their corresponding limit points.
In a similar way, in Table 2, we can see the convergence of the sequence $\{u_n\}$ built using method (5) with the parameters defined above.
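The constants of this example can be checked directly; the small script below (names are ours) confirms that the Kantorovich condition (10) fails while the weaker condition (6) holds.

```python
lam = 0.49
d = (1 - lam) / 3            # = 0.17
M0 = 3 - lam                 # = 2.51
M = 2 * (1 + 1 / M0)         # ≈ 2.7968
M1 = 2 * (2 - lam)           # = 3.02
print("Kantorovich (10):", M1 * d <= 0.5)   # False: 0.5134... > 1/2
print("restricted  (6):", M * d <= 0.5)     # True:  0.4755... <= 1/2
```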
Example 2.
Let $T_1 = T_2 = C[0, 1]$ be the space of continuous real functions defined on the interval $[0, 1]$, equipped with the max-norm. Suppose that $D = \bar U(x_0, 3)$, and that $G$ on $D$ is defined as
$$G(v)(v_1) = v(v_1) - y(v_1) - \int_0^1 M(v_1, t)v^3(t)\,dt, \qquad (33)$$
where $v, y \in C[0, 1]$, $v_1 \in [0, 1]$, and $M$ represents a kernel defined as a Green's function
$$M(v_1, t) = \begin{cases}(1 - v_1)t, & \text{if } t \le v_1,\\ v_1(1 - t), & \text{if } v_1 < t.\end{cases} \qquad (34)$$
From Equation (33), for $z \in C[0, 1]$ and $v_1 \in [0, 1]$, the derivative is given by
$$[G'(v)(z)](v_1) = z(v_1) - 3\int_0^1 M(v_1, t)v^2(t)z(t)\,dt. \qquad (35)$$
We choose $x_0(v_1) = y(v_1) = 1$, and we conclude from Equations (33)–(35) that $G'(x_0)^{-1} \in B(T_2, T_1)$, $\|I - G'(x_0)\| < 0.375$, $\|G'(x_0)^{-1}\| \le 1.6$, $d = 0.2$, $M_0 = 2.4$, $M_1 = 3.6$ and $D_0 = U(x_0, 3) \cap U(x_0, 0.4167) = U(x_0, 0.4167)$, so $M = 1.5$. Observe that $M_0 < M_1$ and $M < M_1$. Since $2M_1 d = 1.44 > 1$, the condition given in Equation (10) is not satisfied. Therefore, convergence is not assured by the Kantorovich criterion. In contrast, the advantage of the new method (5) is that there is convergence, as we can see in Table 3, obtaining the limit and verifying $u_* < \mu$:
$$u_* = 0.163932655620047\ldots < 0.208333333333333\ldots = \mu.$$
Analogous to Example 1, we can construct the sequence $\{u_n\}$, which converges to its limit point.
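As in Example 1, the quoted constants can be checked directly:

```python
d, M0, M, M1 = 0.2, 2.4, 1.5, 3.6           # constants quoted above
print("Kantorovich (10):", M1 * d <= 0.5)   # False: 0.72 > 1/2
print("restricted  (6):", M * d <= 0.5)     # True:  0.30 <= 1/2
```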

5. Discussion

Note that if all methods are convergent, the new error bounds are at least as tight, since the Lipschitz constants are at least as small. For instance, in Example 1, the Lipschitz constant used before (see Remark 1) is $M_1 = 2(2 - \lambda)$, so $M_0 = 3 - \lambda < M = 2(1 + \frac{1}{M_0}) < M_1$, where $M_0$ and $M$ are the new constants. Then, the new majorizing sequence (see Theorem 1) is tighter than the one used by Kantorovich, where $M_0 = M = M_1$ (for Newton's method). The same is true for the Steffensen-like methods (4) and (5) (see Theorem 2 and Remark 3). Moreover, if all the Newton and Steffensen methods are compared at the same time, and the error estimates are obtained using majorizing sequences which in turn depend on the "$M$" and "$K$" constants, respectively, then the tightest error bounds are given by those with the smallest constants. Notice also that such a comparison was the main topic of the motivational paper [2]. Observe that methods (4) and (5) are derivative-free. They should be used when the derivative is hard to find or does not exist. It is clear that, for sufficiently small $\delta$, these methods behave similarly to Newton's (see also [2]).

6. Conclusions

More precise majorizing sequences have been used to expand the convergence domain for methods (4) and (5) under the same or weaker conditions than in earlier works [1,2,4,5,10,11]. Further benefits include improved error estimates and an improved uniqueness ball. The technique can be applied to extend the usability of other methods in a similar way [5,6,7,8].

Author Contributions

Conceptualization, I.K.A., C.A., J.C. and D.G.; data curation, I.K.A., C.A., J.C. and D.G.; methodology, I.K.A., C.A., J.C. and D.G.; project administration, D.G.; formal analysis, I.K.A., C.A., J.C. and D.G.; investigation, I.K.A., C.A., J.C. and D.G.; resources, I.K.A., C.A., J.C. and D.G.; writing—original draft preparation, I.K.A., C.A., J.C. and D.G.; writing—review and editing, I.K.A., C.A., J.C. and D.G.; visualization, I.K.A., C.A., J.C. and D.G.; supervision, I.K.A., C.A., J.C. and D.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Universidad de Las Américas, Quito, Ecuador, grant number FGE.DGS.21.04.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Pergamon Press: Oxford, UK, 1982.
2. Amat, S.; Ezquerro, J.A.; Hernández-Verón, M.A. On a Steffensen-like method for solving nonlinear equations. Calcolo 2016, 53, 171–188.
3. Argyros, I.K. On the secant method. Publ. Math. Debrecen 1993, 43, 223–238.
4. Hernández-Verón, M.Á.; Magreñán, Á.A.; Rubio, M.J. Dynamics and local convergence of a family of derivative-free iterative processes. J. Comput. Appl. Math. 2019, 354, 414–430.
5. Magreñán, Á.A.; Gutiérrez, J.M. Real dynamics for damped Newton's method applied to cubic polynomials. J. Comput. Appl. Math. 2015, 275, 527–538.
6. Argyros, I.K. Unified convergence criteria for iterative Banach space valued methods with applications. Mathematics 2021, 9, 1942.
7. Shakhno, S.M. On a Kurchatov's method of linear interpolation for solving nonlinear equations. Proc. Appl. Math. Mech. 2004, 4, 650–651.
8. Argyros, I.K. The Convergence Theory and Applications of Iterative Methods, 2nd ed.; CRC Press/Taylor and Francis: Boca Raton, FL, USA, 2022.
9. Argyros, I.K.; Magreñán, Á.A. Iterative Methods and Their Dynamics with Applications; CRC Press: New York, NY, USA, 2017.
10. Ezquerro, J.A.; Hernández-Verón, M.Á. Newton's Method: An Updated Approach of Kantorovich's Theory; Frontiers in Mathematics; Springer: Cham, Switzerland, 2017.
11. Ezquerro, J.A.; Hernández-Verón, M.Á. Mild Differentiability Conditions for Newton's Method in Banach Spaces; Frontiers in Mathematics; Springer: Cham, Switzerland, 2020.
Table 1. Sequences for method (4).

n     r_n                      s_n                      s̄_n
0     0                        0                        0
1     0.170000000000000…      0.170000000000000…      0.170363470464235…
2     0.240493536059842…      0.247046179553395…      0.249707250020066…
3     0.258025914059003…      0.273905528904482…      0.280103918739904…
4     0.259245843358240…      0.278217982953951…      0.286418724851963…
5     0.259251801528641…      0.278335194730576…      0.286717010734407…
6     0.259251801670772…      0.278335281448111…      0.286717679263666…
7     0.259251801670772…      0.278335281448159…      0.286717679267024…
8     0.259251801670772…      0.278335281448159…      0.286717679267024…
9     0.259251801670772…      0.278335281448159…      0.286717679267024…
10    0.259251801670772…      0.278335281448159…      0.286717679267024…
Table 2. Sequence for method (5).

n     u_n
0     0
1     0.170000000000000…
2     0.149793027888446…
3     0.149730997565728…
4     0.149730996934000…
5     0.149730996934000…
6     0.149730996934000…
7     0.149730996934000…
8     0.149730996934000…
9     0.149730996934000…
10    0.149730996934000…
Table 3. Sequence for method (5).

n     u_n
0     0
1     0.200000000000000…
2     0.164000000000000…
3     0.163932655889145…
4     0.163932655620047…
5     0.163932655620047…
6     0.163932655620047…
7     0.163932655620047…
8     0.163932655620047…
9     0.163932655620047…
10    0.163932655620047…
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
