Article

Direct Comparison between Two Third Convergence Order Schemes for Solving Equations

Samundra Regmi 1, Ioannis K. Argyros 1,* and Santhosh George 2

1 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
2 Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, Karnataka 575 025, India
* Author to whom correspondence should be addressed.
Symmetry 2020, 12(7), 1080; https://doi.org/10.3390/sym12071080
Submission received: 24 May 2020 / Revised: 29 May 2020 / Accepted: 30 May 2020 / Published: 1 July 2020
(This article belongs to the Special Issue Iterative Numerical Functional Analysis with Applications)

Abstract

We provide a comparison between two schemes for solving equations on a Banach space. Comparisons between schemes of the same convergence order have so far been given using numerical examples, which can go in favor of either scheme; they do not tell us in advance, under the same set of conditions, which scheme has the largest ball of convergence, the tighter error bounds, or the best information on the location of the solution. We present a technique that allows us to achieve this objective. Numerical examples are also given to further justify the theoretical results. Our technique can be used to compare other schemes of the same convergence order.

1. Introduction

In this study we compare two third convergence order schemes for solving the nonlinear equation

G(x) = 0,   (1)

where G : D ⊆ B_1 → B_2 is a continuously differentiable nonlinear operator, D stands for an open nonempty subset of B_1, and B_1, B_2 denote Banach spaces. It is desirable to obtain a unique solution p of (1). However, this can rarely be achieved in closed form, so researchers develop iterative schemes which converge to p. Some popular schemes are:
Chebyshev-Type Scheme:

y_n = x_n + α G′(x_n)^{−1} G(x_n),
x_{n+1} = x_n − (1/(2α)) G′(x_n)^{−1} A_n G′(x_n)^{−1} G(x_n),   (2)

Simplified Chebyshev-Type Scheme:

y_n = x_n − G′(x_n)^{−1} G(x_n),
x_{n+1} = y_n + (1/2) G′(x_n)^{−1} (G′(y_n) − G′(x_n)) G′(x_n)^{−1} G(x_n),   (3)

Two-Step-Newton-Type Scheme:

y_n = x_n − G′(x_n)^{−1} G(x_n),
x_{n+1} = y_n − G′(x_n)^{−1} G(y_n),   (4)

where A_n = (2α − 1) G′(x_n) + G′(y_n) and α ∈ ℝ \ {0} or α ∈ ℂ \ {0}. Notice that (2) specializes to (3) for α = −1.
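For concreteness, the following is a minimal scalar (B_1 = B_2 = ℝ) sketch of schemes (2) and (4); the test equation G(t) = e^t − 1 and the starting point are illustrative choices of ours, not taken from the paper.

```python
import math

def chebyshev_type(G, dG, x0, alpha=-1.0, tol=1e-12, max_iter=50):
    """Scheme (2): y_n = x_n + alpha*G(x_n)/G'(x_n) and
    x_{n+1} = x_n - A_n*G(x_n)/(2*alpha*G'(x_n)**2),
    where A_n = (2*alpha - 1)*G'(x_n) + G'(y_n)."""
    x = x0
    for _ in range(max_iter):
        gx, dgx = G(x), dG(x)
        step = gx / dgx
        y = x + alpha * step
        A = (2.0 * alpha - 1.0) * dgx + dG(y)
        x_new = x - A * step / (2.0 * alpha * dgx)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

def two_step_newton(G, dG, x0, tol=1e-12, max_iter=50):
    """Scheme (4): a Newton step followed by a correction that reuses
    (freezes) the derivative G'(x_n)."""
    x = x0
    for _ in range(max_iter):
        dgx = dG(x)
        y = x - G(x) / dgx
        x_new = y - G(y) / dgx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative test equation: G(t) = e^t - 1 with solution p = 0.
G, dG = lambda t: math.exp(t) - 1.0, math.exp
print(chebyshev_type(G, dG, 0.5))   # scheme (2); alpha = -1 reproduces (3)
print(two_step_newton(G, dG, 0.5))  # scheme (4)
```

With α = −1 the first routine carries out scheme (3) exactly, mirroring the specialization noted above.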
The existing analysis of these schemes uses assumptions on the fourth order derivative of G, which does not appear in the schemes themselves. Such assumptions reduce the applicability of these schemes. For example, let B_1 = B_2 = ℝ and D = [−1/2, 3/2]. Define G on D by

G(t) = t^3 log t^2 + t^5 − t^4 if t ≠ 0, and G(0) = 0.

Then, we get

G′(t) = 3t^2 log t^2 + 5t^4 − 4t^3 + 2t^2,
G″(t) = 6t log t^2 + 20t^3 − 12t^2 + 10t,
G‴(t) = 6 log t^2 + 60t^2 − 24t + 22,

and the solution is p = 1. Obviously, G‴(t) is not bounded on D. Hence, the convergence of the above schemes is not guaranteed by the earlier studies. In this study we use only assumptions on the first derivative to prove our results. The advantages of our approach include a larger radius of convergence (i.e., more initial points) and tighter upper bounds on ‖x_k − p‖ (i.e., fewer iterates to achieve a desired error tolerance). It is worth noting that these advantages are obtained without any additional conditions [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35].
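A quick numerical check of this claim (a sketch; the function below is just G‴(t) from the display above) shows the logarithmic term driving G‴ to −∞ as t → 0:

```python
import math

# G'''(t) = 6 log(t^2) + 60 t^2 - 24 t + 22 from the example above;
# the term 6 log(t^2) diverges as t -> 0, so G''' is unbounded on D.
def G3(t):
    return 6.0 * math.log(t**2) + 60.0 * t**2 - 24.0 * t + 22.0

for t in (1e-1, 1e-3, 1e-6):
    print(t, G3(t))   # values grow without bound in magnitude as t shrinks
```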
So far, comparisons between iterative schemes of the same order have been given using numerical examples [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35]. However, no direct theoretical comparison has been given, so that we would know in advance, under the same set of convergence conditions, which scheme has the largest radius, the tighter error bounds, and the better results on the uniqueness of the solution. The novelty of our paper is that we introduce a technique under which we can show that scheme (4) is the best when compared to scheme (3). The same technique can be used to draw conclusions about other schemes of the same order.
Notice also that scheme (3) requires two derivative evaluations, one inverse and one operator evaluation per iteration, whereas scheme (4) is less expensive, requiring two operator evaluations and one inverse. Both schemes have been studied in the literature under assumptions involving the fourth derivative, which does not appear in these schemes. In contrast, we use only conditions on the first derivative, which does appear in the schemes.
Throughout this paper, U(x, r) stands for the open ball with center x and radius r > 0, and U̅(x, r) denotes the closure of U(x, r).
The rest of the paper is structured as follows. The convergence analysis of the schemes is given in Section 2, and the examples are given in Section 3.

2. Ball Convergence

We present the ball convergence of scheme (2), scheme (3) and scheme (4), respectively, in this section.
To achieve this, we introduce certain functions and parameters. Suppose that there exists a continuous and increasing function ω_0 defined on the interval [0, ∞) with values in itself such that the equation

ω_0(t) − 1 = 0   (5)

has a real positive zero denoted by R_0. Suppose also that there exist functions ω and ω_1 defined on [0, R_0) with values in [0, ∞). Define the functions g_α^1 and h_α^1 on [0, R_0) by

g_α^1(s) = (∫_0^1 ω((1 − τ)s) dτ + |1 + α| ∫_0^1 ω_1(τ s) dτ) / (1 − ω_0(s))

and

h_α^1(s) = g_α^1(s) − 1.

Suppose that the equation

h_α^1(s) = 0   (6)

has a least zero, denoted by R_α^1, in (0, R_0). Moreover, define the functions g_α^2 and h_α^2 on [0, R_0) by

g_α^2(s) = g_α^1(s) + ((ω_0(s) + ω_0(g_α^1(s)s)) ∫_0^1 ω_1(τ s) dτ) / (2|α|(1 − ω_0(s))^2)

and

h_α^2(s) = g_α^2(s) − 1.

Suppose that the equation

h_α^2(s) = 0   (7)

has a least zero, denoted by R_α^2, in (0, R_0). Define a radius of convergence R_α as

R_α = min{R_α^1, R_α^2}.   (8)

It follows that

0 ≤ ω_0(s) < 1   (9)

and

0 ≤ g_α^i(s) < 1, i = 1, 2,   (10)

for all s ∈ [0, R_α). Set e_n = ‖x_n − p‖. We introduce the set of conditions (A) under which the ball convergence of all the schemes will be obtained.
(A1) G : D → B_2 is differentiable, and there exists a simple zero p of the equation G(x) = 0.

(A2) There exists a continuous and increasing function ω_0 defined on [0, ∞) with values in [0, ∞) such that for all x ∈ D

‖G′(p)^{−1}(G′(x) − G′(p))‖ ≤ ω_0(‖x − p‖),

provided that R_0 exists and is defined by (5). Set D_0 = D ∩ U(p, R_0).

(A3) There exist continuous and increasing functions ω and ω_1 on the interval [0, R_0) with values in [0, ∞) such that for all x, y ∈ D_0

‖G′(p)^{−1}(G′(y) − G′(x))‖ ≤ ω(‖y − x‖)

and

‖G′(p)^{−1} G′(x)‖ ≤ ω_1(‖x − p‖).

(A4) U̅(p, R_α) ⊆ D, where R_α^1 and R_α^2 exist and are given by (6) and (7), respectively, and R_α is defined by (8).

(A5) There exists S_α ≥ R_α such that

∫_0^1 ω_0(τ S_α) dτ < 1.

Set D_1 = D ∩ U(p, S_α).
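Given concrete choices of ω_0, ω and ω_1 (as in Section 3), the radii R_α^1 and R_α^2 can be computed numerically as the least zeros of the corresponding h functions. The following is a hedged sketch; the grid-plus-bisection search and its resolution parameters are ad hoc choices of ours, not part of the paper.

```python
def least_root(h, upper, n_grid=10_000, n_bisect=100):
    """Least s in (0, upper) with h(s) = 0, assuming h is continuous,
    h(0+) < 0 and the first sign change occurs strictly below `upper`."""
    prev = 0.0
    for k in range(1, n_grid + 1):
        s = upper * k / n_grid
        if h(s) >= 0.0:              # sign change in (prev, s]: bisect it
            lo, hi = prev, s
            for _ in range(n_bisect):
                mid = 0.5 * (lo + hi)
                if h(mid) < 0.0:
                    lo = mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)
        prev = s
    return None                      # no zero located below `upper`
```

For instance, with the functions of Example 2 below (ω_0(t) = 7.5t, ω(t) = 15t, α = −1), calling least_root(lambda s: 7.5*s/(1.0 - 7.5*s) - 1.0, 1.0/7.5) returns approximately 0.0667 = R_α^1, in agreement with Table 2.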
Next, the main ball convergence result for scheme (2) is displayed.
Theorem 1.
Under the conditions (A), choose x_0 ∈ U(p, R_α) \ {p}. Then, the following assertions hold:

{x_n} ⊂ U(p, R_α), lim_{n→∞} x_n = p,   (11)

‖y_n − p‖ ≤ g_α^1(e_n) e_n ≤ e_n < R_α,   (12)

and

‖x_{n+1} − p‖ ≤ g_α^2(e_n) e_n ≤ e_n,   (13)

where the functions g_α^1, g_α^2 were introduced earlier and R_α is defined in (8). The vector p is the only zero of Equation (1) in the set D_1 introduced in condition (A5).
Proof. 
The proof is based on induction, which assists us in showing (12) and (13). If z ∈ U(p, R_α) \ {p}, we can use (A1), (A2), (8) and (9) to see that

‖G′(p)^{−1}(G′(z) − G′(p))‖ ≤ ω_0(‖z − p‖) ≤ ω_0(R_α) < 1,

so, by the Banach perturbation lemma for invertible operators [30], G′(z)^{−1} ∈ L(B_2, B_1) with

‖G′(z)^{−1} G′(p)‖ ≤ 1/(1 − ω_0(‖z − p‖)),   (14)

so y_0 and x_1 exist by scheme (2) for n = 0. We use the identity

G(z) = G(z) − G(p) = ∫_0^1 G′(p + τ(z − p)) dτ (z − p)   (15)

and the second condition in (A3) to obtain

‖G′(p)^{−1} G(z)‖ ≤ ∫_0^1 ω_1(τ‖z − p‖) dτ ‖z − p‖.   (16)
Next, in view of (8), (9), (10) (for i = 1), (A1), (A3), (14) (for z = x_0), scheme (2) and (16), the following are obtained in sequence:

y_0 − p = (x_0 − p − G′(x_0)^{−1} G(x_0)) + (1 + α) G′(x_0)^{−1} G(x_0)
        = (G′(x_0)^{−1} G′(p)) (G′(p)^{−1} ∫_0^1 (G′(p + τ(x_0 − p)) − G′(x_0)) dτ (x_0 − p)) + (1 + α) G′(x_0)^{−1} G(x_0),

so

‖y_0 − p‖ ≤ (∫_0^1 ω((1 − τ)e_0) dτ e_0 + |1 + α| ∫_0^1 ω_1(τ e_0) dτ e_0) / (1 − ω_0(e_0)) = g_α^1(e_0) e_0 ≤ e_0 < R_α,   (17)

leading to y_0 ∈ U(p, R_α) and the verification of (12) for n = 0. Moreover, using (14), (16) and (A3), the following are obtained in sequence:

x_1 − p = (x_0 − p − G′(x_0)^{−1} G(x_0)) + (1/(2α)) G′(x_0)^{−1}(G′(x_0) − G′(y_0)) G′(x_0)^{−1} G(x_0),

so

‖x_1 − p‖ ≤ (g_α^1(e_0) + ((ω_0(e_0) + ω_0(‖y_0 − p‖)) ∫_0^1 ω_1(τ e_0) dτ) / (2|α|(1 − ω_0(e_0))^2)) e_0 ≤ g_α^2(e_0) e_0 ≤ e_0,   (18)

leading to x_1 ∈ U(p, R_α) and the verification of (13). Thus, estimates (12) and (13) hold true for n = 0. Suppose they hold true for all j ≤ n − 1. Then, by exchanging x_0, y_0, x_1 with x_j, y_j, x_{j+1} in the preceding calculations, we complete the induction for (12) and (13). Next, from the estimate

‖x_{n+1} − p‖ ≤ b e_n < R_α,

where b = g_α^2(e_0) ∈ [0, 1), we deduce lim_{n→∞} x_n = p and x_{n+1} ∈ U(p, R_α). Furthermore, let v ∈ D_1 with G(v) = 0 and set E = ∫_0^1 G′(v + τ(p − v)) dτ. By (A2) and (A5) we get

‖G′(p)^{−1}(E − G′(p))‖ ≤ ∫_0^1 ω_0((1 − τ)‖p − v‖) dτ ≤ ∫_0^1 ω_0(τ S_α) dτ < 1,

so E is invertible, which, together with the identity 0 = G(p) − G(v) = E(p − v), leads to the conclusion that p = v.  □
Remark 1.
 1. In view of (A3) and the estimate

‖G′(p)^{−1} G′(x)‖ = ‖G′(p)^{−1}(G′(x) − G′(p)) + I‖ ≤ 1 + ‖G′(p)^{−1}(G′(x) − G′(p))‖ ≤ 1 + ω_0(‖x − p‖),

the second condition in (A3) can be dropped and ω_1 can be replaced by

ω_1(t) = 1 + ω_0(t) or ω_1(t) = 1 + ω_0(R_0).
 2. The results obtained here can be used for operators G satisfying autonomous differential equations [3] of the form

G′(x) = P(G(x)),

where P is a continuous operator. Then, since G′(p) = P(G(p)) = P(0), we can apply the results without actually knowing p. For example, let G(x) = e^x − 1. Then, we can choose P(x) = x + 1.
 3. If ω_0 and ω are linear, say ω_0(t) = L_0 t and ω(t) = L t for some L_0 > 0 and L > 0, then the radius r_1 = 2/(2L_0 + L) was shown by us to be the convergence radius of Newton's method [5,6]

x_{n+1} = x_n − G′(x_n)^{−1} G(x_n) for each n = 0, 1, 2, …,   (19)

under the conditions (A1)–(A4). It follows from the definition (8) that the convergence radius R_α of scheme (2) cannot be larger than the convergence radius r_1 of the second order Newton's method (19). As already noted in [5,6], r_1 is at least as large as the convergence ball given by Rheinboldt [30]

r_R = 2/(3L_1),   (20)

where L_1 is the Lipschitz constant on D. In particular, for L_0 < L_1 we have

r_R < r_1

and

r_R/r_1 → 1/3 as L_0/L_1 → 0.

That is, our convergence ball r_1 is at most three times larger than Rheinboldt's. The same value of r_R was given by Traub [35].
 4. It is worth noticing that scheme (2) does not change when we use the conditions of Theorem 1 instead of the stronger conditions used in [10]. Moreover, we can compute the computational order of convergence (COC), defined by

ξ = ln(‖x_{n+1} − p‖ / ‖x_n − p‖) / ln(‖x_n − p‖ / ‖x_{n−1} − p‖),

or the approximate computational order of convergence (ACOC),

ξ_1 = ln(‖x_{n+1} − x_n‖ / ‖x_n − x_{n−1}‖) / ln(‖x_n − x_{n−1}‖ / ‖x_{n−1} − x_{n−2}‖).

This way we obtain in practice the order of convergence without estimates involving derivatives higher than the first Fréchet derivative of the operator G (see the sketch following this remark).
 5. Scheme (2) can be generalized as

y_n = T(x_n), x_{n+1} = H(y_n),   (21)

where T : D → X and H : X → X are continuous operators such that:

 (i) for all x ∈ U(x^*, R) we have y = T(x) ∈ U(x^*, R), where R > 0 and x^* ∈ D are such that U(x^*, R) ⊆ D and G(x^*) = 0;

 (ii) ‖H(y) − x^*‖ ≤ d‖x − x^*‖ for some d ∈ [0, 1).

Then, it follows that lim_{n→∞} x_n = x^* provided that x_0 ∈ U(x^*, R). Indeed, we have

‖x_{n+1} − x^*‖ = ‖H(y_n) − x^*‖ ≤ d‖x_n − x^*‖ ≤ d^{n+1}‖x_0 − x^*‖ < R,

so x_{n+1} ∈ U(x^*, R) and lim_{n→∞} x_n = x^*. Notice that if T(x) = x − A(x)^{−1} G(x) and H(y) = y − B(x)^{−1} G(y), then (21) specializes to scheme (2).
 6. The ball convergence result for scheme (3) is clearly obtained from Theorem 1 for α = −1.

 7. In our earlier works with other schemes we assumed ω_0(0) = ω_1(0) = 0 in order to show the existence of R_α^1 and R_α^2 using the intermediate value theorem. These initial conditions on ω_0 and ω_1 are not necessary with our approach. This way we further expand the applicability of schemes (2) and (3). The same is true for scheme (4), whose ball convergence follows.
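Returning to item 4, a small sketch (assuming a stored scalar iterate history xs and, for the COC, a known solution p; both are placeholders of ours) that evaluates ξ and ξ_1:

```python
import math

def coc(xs, p):
    """Computational order of convergence from the last three errors."""
    e = [abs(x - p) for x in xs[-3:]]
    return math.log(e[2] / e[1]) / math.log(e[1] / e[0])

def acoc(xs):
    """Approximate COC from the last three consecutive differences."""
    d = [abs(b - a) for a, b in zip(xs[-4:], xs[-3:])]
    return math.log(d[2] / d[1]) / math.log(d[1] / d[0])
```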
Theorem 2.
Under the conditions of Theorem 1, its conclusions hold for scheme (3) with α = −1.
To deal with scheme (4), our real functions and parameters are g_1^1, h_1^1 and R_1^1 (the functions and radius above corresponding to α = −1). Moreover, we suppose that the equation

ω_0(g_1^1(s)s) − 1 = 0

has a least solution in (0, R_0), denoted by R_0^1, and set R̄_0 = min{R_0, R_0^1}. Define the functions ḡ_2 and h̄_2 on (0, R̄_0) by

ḡ_2(s) = (1 + ((ω_0(s) + ω_0(g_1^1(s)s)) ∫_0^1 ω_1(τ g_1^1(s)s) dτ) / ((1 − ω_0(s))(1 − ω_0(g_1^1(s)s)))) g_1^1(s)

and

h̄_2(s) = ḡ_2(s) − 1.

Suppose that the equation h̄_2(s) = 0 has a least positive zero in (0, R̄_0), denoted by R̄_2. Define a radius of convergence R̄ by

R̄ = min{R_1^1, R̄_2}.
We also use the identity

x_{n+1} − p = y_n − p + (G′(y_n)^{−1} − G′(x_n)^{−1}) G(y_n) = y_n − p + G′(y_n)^{−1}(G′(x_n) − G′(y_n)) G′(x_n)^{−1} G(y_n)

instead of (18) to obtain

‖x_{n+1} − p‖ ≤ (1 + ((ω_0(e_n) + ω_0(‖y_n − p‖)) ∫_0^1 ω_1(τ‖y_n − p‖) dτ) / ((1 − ω_0(e_n))(1 − ω_0(‖y_n − p‖)))) ‖y_n − p‖ ≤ ḡ_2(e_n) e_n ≤ e_n < R̄.
Hence, we arrive at:

Theorem 3.
Under the conditions (A), but with R̄ and b̄ = ḡ_2(e_0) replacing R_α and b, respectively, the following assertions hold for scheme (4): {x_n} ⊂ U(p, R̄), lim_{n→∞} x_n = p,

‖y_n − p‖ ≤ g_1^1(e_n) e_n ≤ e_n < R̄

and

‖x_{n+1} − p‖ ≤ ḡ_2(e_n) e_n ≤ e_n.

Moreover, p is the unique solution of Equation (1) in the set D_1.
Remark 2.
We have

ḡ_2(s) ≤ g_1^2(s)

and

h̄_2(s) ≤ h_1^2(s)

for all s ∈ (0, R_0), so

R_α ≤ R̄

and

b̄ ≤ b.

Hence, the radius R̄ of scheme (4) is at least as large as that of scheme (3), whereas the ratio of convergence of scheme (4) is at least as small as that of scheme (3) (see also the numerical examples).

3. Numerical Examples

We compute the radii for α = −1.
Example 1.
Let us consider a system of differential equations governing the motion of an object, given by

F_1′(x) = e^x, F_2′(y) = (e − 1)y + 1, F_3′(z) = 1,

with initial conditions F_1(0) = F_2(0) = F_3(0) = 0. Let F = (F_1, F_2, F_3), B_1 = B_2 = ℝ^3, D = U̅(0, 1) and p = (0, 0, 0)^T. Define the function F on D for w = (x, y, z)^T by

F(w) = (e^x − 1, ((e − 1)/2) y^2 + y, z)^T.

The Fréchet derivative is the diagonal matrix

F′(w) = diag(e^x, (e − 1)y + 1, 1),

which is needed to compute the function ω_0 (see (A2)) and the functions ω, ω_1 (see (A3)). Using the (A) conditions, we get ω_0(t) = (e − 1)t, ω(t) = e^{1/(e−1)} t and ω_1(t) = e^{1/(e−1)}. The radii are given in Table 1.
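As a hedged check of the first radius in Table 1: with α = −1 the |1 + α| term in g_α^1 vanishes, so g_α^1(s) = (Ls/2)/(1 − L_0 s) with L_0 = e − 1 and L = e^{1/(e−1)}, and the zero of g_α^1(s) − 1 has the closed form 2/(2L_0 + L):

```python
import math

L0 = math.e - 1.0                    # from omega_0(t) = (e - 1) t
L = math.exp(1.0 / (math.e - 1.0))   # from omega(t) = e^{1/(e-1)} t
print(2.0 / (2.0 * L0 + L))          # 0.38269191223..., matching Table 1
```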
Example 2.
Let B_1 = B_2 = C[0, 1], the space of continuous functions defined on [0, 1], equipped with the max norm, and let D = U̅(0, 1). Define the function F on D by

F(φ)(x) = φ(x) − 5 ∫_0^1 x θ φ(θ)^3 dθ.

We have that

(F′(φ)ξ)(x) = ξ(x) − 15 ∫_0^1 x θ φ(θ)^2 ξ(θ) dθ, for each ξ ∈ D.

Then, p = 0, so ω_0(t) = 7.5t, ω(t) = 15t and ω_1(t) = 2. The radii are given in Table 2.
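The same closed form 2/(2L_0 + L) applies here with L_0 = 7.5 and L = 15 (a one-line check of ours):

```python
print(2.0 / (2.0 * 7.5 + 15.0))   # 1/15 = 0.06666..., matching Table 2
```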
Example 3.
The example in the introduction of this study gives ω_0(t) = ω(t) = 96.6629073t and ω_1(t) = 2. The parameters are given in Table 3.
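And with L_0 = L = 96.6629073 the closed form reduces to 2/(3L) (again a one-line check of ours):

```python
print(2.0 / (3.0 * 96.6629073))   # 0.00689681994..., matching Table 3
```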

4. Conclusions

A new technique is introduced that allows the comparison of schemes of the same convergence order under the same set of conditions. Hence, we know how to choose in advance, among third convergence order schemes, the one providing the larger choice of initial points, the fewer iterates needed for a predetermined error tolerance, and the best information on the location of the solution. This technique can be used on other schemes along the same lines. In particular, we have shown that scheme (4) is better to use than scheme (3) under the conditions (A).

Author Contributions

Conceptualization, S.R., I.K.A. and S.G.; Data curation, S.R., I.K.A. and S.G.; Formal analysis, S.R., I.K.A. and S.G.; Funding acquisition, S.R., I.K.A. and S.G.; Investigation, S.R., I.K.A. and S.G.; Methodology, S.R., I.K.A. and S.G.; Project administration, S.R., I.K.A. and S.G.; Resources, S.R., I.K.A. and S.G.; Software, S.R., I.K.A. and S.G.; Supervision, S.R., I.K.A. and S.G.; Validation, S.R., I.K.A. and S.G.; Visualization, S.R., I.K.A. and S.G.; Writing—original draft, S.R., I.K.A. and S.G.; Writing—review and editing, S.R., I.K.A. and S.G. All authors have equally contributed to this work. All authors read and approved this final manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Amat, S.; Busquier, S.; Negra, M. Adaptive approximation of nonlinear operators. Numer. Funct. Anal. Optim. 2004, 25, 397–405. [Google Scholar] [CrossRef]
  2. Amat, S.; Argyros, I.K.; Busquier, S.; Magreñán, A.A. Local convergence and the dynamics of a two-point four parameter Jarratt-like method under weak conditions. Numer. Alg. 2017. [Google Scholar] [CrossRef]
  3. Argyros, I.K. Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics; Chui, C.K., Wuytack, L., Eds.; Elsevier Publ. Company: New York, NY, USA, 2007. [Google Scholar]
  4. Argyros, I.K.; George, S. Mathematical Modeling for the Solution of Equations and Systems of Equations with Applications; Volume III: Newton's Method Defined on Not Necessarily Bounded Domain; Nova Science Publishers: New York, NY, USA, 2019. [Google Scholar]
  5. Argyros, I.K.; George, S. Mathematical Modeling for the Solution of Equations and Systems of Equations with Applications; Volume IV: Local Convergence of a Class of Multi-Point Super-Halley Methods; Nova Science Publishers: New York, NY, USA, 2019. [Google Scholar]
  6. Argyros, I.K.; George, S.; Magreñán, A.A. Local convergence for multi-point-parametric Chebyshev-Halley-type method of higher convergence order. J. Comput. Appl. Math. 2015, 282, 215–224. [Google Scholar] [CrossRef]
  7. Argyros, I.K.; Magreñán, A.A. Iterative Method and Their Dynamics with Applications; CRC Press: New York, NY, USA, 2017. [Google Scholar]
  8. Argyros, I.K.; Magreñán, A.A. A study on the local convergence and the dynamics of Chebyshev-Halley-type methods free from second derivative. Numer. Alg. 2015, 71, 1–23. [Google Scholar] [CrossRef]
  9. Argyros, I.K.; Regmi, S. Undergraduate Research at Cameron University on Iterative Procedures in Banach and Other Spaces; Nova Science Publishers: New York, NY, USA, 2019. [Google Scholar]
  10. Babajee, D.K.R.; Davho, M.E.; Darvishi, M.T.; Karami, A.; Barati, A. Analysis of two Chebyshev-like third order methods free from second derivatives for solving systems of nonlinear equations. J. Comput. Appl. Math. 2010, 233, 2002–2012. [Google Scholar] [CrossRef] [Green Version]
  11. Alzahrani, A.K.H.; Behl, R.; Alshomrani, A.S. Some higher-order iteration functions for solving nonlinear models. Appl. Math. Comput. 2018, 334, 80–93. [Google Scholar] [CrossRef]
  12. Behl, R.; Cordero, A.; Motsa, S.S.; Torregrosa, J.R. Stable high-order iterative methods for solving nonlinear models. Appl. Math. Comput. 2017, 303, 70–88. [Google Scholar] [CrossRef]
  13. Choubey, N.; Panday, B.; Jaiswal, J.P. Several two-point with memory iterative methods for solving nonlinear equations. Afrika Matematika 2018, 29, 435–449. [Google Scholar] [CrossRef]
  14. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method for functions of several variables. Appl. Math. Comput. 2006, 183, 199–208. [Google Scholar] [CrossRef]
  15. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton-Jarratt’s composition. Numer. Alg. 2010, 55, 87–99. [Google Scholar] [CrossRef]
  16. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
  17. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. Increasing the convergence order of an iterative method for nonlinear systems. Appl. Math. Lett. 2012, 25, 2369–2374. [Google Scholar] [CrossRef] [Green Version]
  18. Darvishi, M.T.; Barati, A. Super cubic iterative methods to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 188, 1678–1685. [Google Scholar] [CrossRef]
  19. Esmaeili, H.; Ahmadi, M. An efficient three-step method to solve system of non linear equations. Appl. Math. Comput. 2015, 266, 1093–1101. [Google Scholar]
  20. Fang, X.; Ni, Q.; Zeng, M. A modified quasi-Newton method for nonlinear equations. J. Comput. Appl. Math. 2018, 328, 44–58. [Google Scholar] [CrossRef]
  21. Fousse, L.; Hanrot, G.; Lefèvre, V.; Pélissier, P.; Zimmermann, P. MPFR: A multiple-precision binary floating-point library with correct rounding. ACM Trans. Math. Softw. 2007, 33, 15. [Google Scholar] [CrossRef]
  22. Homeier, H.H.H. A modified Newton method with cubic convergence: The multivariate case. J. Comput. Appl. Math. 2004, 169, 161–169. [Google Scholar] [CrossRef] [Green Version]
  23. Iliev, A.; Kyurkchiev, N. Nontrivial Methods in Numerical Analysis: Selected Topics in Numerical Analysis; LAP LAMBERT Academic Publishing: Saarbrücken, Germany, 2010; ISBN 978-3-8433-6793-6. [Google Scholar]
  24. Lotfi, T.; Bakhtiari, P.; Cordero, A.; Mahdiani, K.; Torregrosa, J.R. Some new efficient multipoint iterative methods for solving nonlinear systems of equations. Int. J. Comput. Math. 2015, 92, 1921–1934. [Google Scholar] [CrossRef] [Green Version]
  25. Magreñán, A.A. Different anomalies in a Jarratt family of iterative root finding methods. Appl. Math. Comput. 2014, 233, 29–38. [Google Scholar]
  26. Magreñán, A.A. A new tool to study real dynamics: The convergence plane. Appl. Math. Comput. 2014, 248, 29–38. [Google Scholar] [CrossRef] [Green Version]
  27. Noor, M.A.; Waseem, M. Some iterative methods for solving a system of nonlinear equations. Comput. Math. Appl. 2009, 57, 101–106. [Google Scholar] [CrossRef] [Green Version]
  28. Ortega, J.M.; Rheinboldt, W.C. Iterative Solutions of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  29. Ostrowski, A.M. Solution of Equation and Systems of Equations; Academic Press: New York, NY, USA, 1960. [Google Scholar]
  30. Rheinboldt, W.C. An adaptive continuation process for solving systems of nonlinear equations. In Mathematical Models and Numerical Solvers; Tikhonov, A.N., Ed.; Banach Center: Warsaw, Poland, 1977; pp. 129–142. [Google Scholar]
  31. Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multi-Point Methods for Solving Nonlinear Equations; Elsevier/Academic Press: Amsterdam, The Netherlands; Boston, MA, USA; Heidelberg, Germany; London, UK; New York, NY, USA, 2013. [Google Scholar]
  32. Sharma, J.R.; Sharma, R.; Bahl, A. An improved Newton-Traub composition for solving systems of nonlinear equations. Appl. Math. Comput. 2016, 290, 98–110. [Google Scholar]
  33. Sharma, J.R.; Arora, H. Improved Newton-like methods for solving systems of nonlinear equations. SeMA 2017, 74, 147–163. [Google Scholar] [CrossRef]
  34. Sharma, J.R.; Arora, H. Efficient derivative-free numerical methods for solving systems of nonlinear equations. Comput. Appl. Math. 2016, 35, 269–284. [Google Scholar] [CrossRef]
  35. Traub, J.F. Iterative Methods for the Solution of Equations; Chelsea Publishing Company: New York, NY, USA, 1982. [Google Scholar]
Table 1. Comparison of the Chebyshev (2) and simplified Chebyshev (3) methods with the two-step Newton method (4).

α = −1        | (2) & (3)                                    | (4)
first radius  | R_α^1 = 0.38269191223238574472986783803208   | R_1^1 = 0.38269191223238574472986783803208
second radius | R_α^2 = 0.21292963191331051864274570561975   | R̄_2 = 1.6195946293252685421748537919484
radius        | R_α = R_α^2                                  | R̄ = R_1^1
Table 2. Comparison of the Chebyshev (2) and simplified Chebyshev (3) methods with the two-step Newton method (4).

α = −1        | (2) & (3)                                      | (4)
first radius  | R_α^1 = 0.066666666666666666666666666666667    | R_1^1 = 0.066666666666666666666666666666667
second radius | R_α^2 = 0.040057008172481922692043099232251    | R̄_2 = 0.014407266709891463490889051968225
radius        | R_α = R_α^2                                    | R̄ = R̄_2
Table 3. Comparison of the Chebyshev (2) and simplified Chebyshev (3) methods with the two-step Newton method (4).

α = −1        | (2) & (3)                                      | (4)
first radius  | R_α^1 = 0.0068968199414654552878434223828208   | R_1^1 = 0.0068968199414654552878434223828208
second radius | R_α^2 = 0.00377575830521247697221798311773     | R̄_2 = 0.000205625336098942375698261919581
radius        | R_α = R_α^2                                    | R̄ = R̄_2
