Local Convergence Analysis of an Eighth Order Scheme Using Hypothesis Only on the First Derivative

Ioannis K. Argyros 1, Ramandeep Behl 2 and Sandile S. Motsa 2
1 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
2 School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Private Bag X01, Pietermaritzburg 3209, South Africa
* Author to whom correspondence should be addressed.
Algorithms 2016, 9(4), 65; https://doi.org/10.3390/a9040065
Submission received: 6 May 2016 / Revised: 20 September 2016 / Accepted: 20 September 2016 / Published: 29 September 2016

Abstract: In this paper, we present a local convergence analysis of an eighth order three-step method for approximating a locally unique solution of a nonlinear equation in a Banach space setting, and we also study the dynamical behaviour of the scheme. In the earlier study of Sharma and Arora (2015), these properties were not discussed; moreover, the order of convergence was shown using Taylor series expansions and hypotheses on derivatives of the function involved up to the fourth order or even higher, which restricts the applicability of the scheme even though only first order derivatives appear in it. To overcome this problem, we present hypotheses involving at most the first order derivative. In this way, we not only expand the applicability of the method but also provide its convergence domain. Finally, we propose a variety of concrete numerical examples where earlier studies cannot be applied to obtain the solutions of nonlinear equations, whereas our study does not exhibit this restriction.

1. Introduction

Numerical analysis is a wide-ranging discipline with close connections to mathematics, computer science, engineering and the applied sciences. One of the most basic and earliest problems of numerical analysis is to efficiently and accurately find an approximate locally unique solution $x^*$ of an equation of the form

$$ F(x) = 0, \qquad (1) $$

where $F$ is a Fréchet differentiable operator defined on a convex subset $D$ of $X$ with values in $Y$, where $X$ and $Y$ are Banach spaces.
Analytical methods yielding the exact values of the required roots of such equations are almost non-existent. Therefore, one has to be satisfied with approximate solutions up to a specified degree of accuracy, obtained by numerical methods based on iterative procedures. Consequently, researchers worldwide resort to iterative methods, and a plethora of such methods has been proposed [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]. While using these iterative methods, researchers face the problems of slow convergence, non-convergence, divergence, inefficiency or failure (for details, please see Traub [14] and Petković et al. [11]).
The convergence analysis of iterative methods is usually divided into two categories: semi-local and local convergence analysis. The semi-local convergence analysis is based on information around an initial point and gives criteria ensuring the convergence of the iterative procedure, whereas the local convergence analysis is based on information around a solution and yields estimates of the radii of the convergence balls. A very important problem in the study of iterative procedures is the convergence domain; therefore, it is very important to propose the radius of convergence of an iterative method.
We study the local convergence analysis of a three-step method defined for each $n = 0, 1, 2, \ldots$ by

$$ \begin{aligned} y_n &= x_n - F'(x_n)^{-1}F(x_n),\\ z_n &= \phi_4(x_n, y_n),\\ x_{n+1} &= \phi_8(x_n, y_n, z_n) = z_n - \big(2[z_n, y_n; F] - [z_n, x_n; F]\big)^{-1}\big(F'(x_n) - [y_n, x_n; F] + [z_n, y_n; F]\big)F'(x_n)^{-1}F(z_n), \end{aligned} \qquad (2) $$

where $x_0 \in D$ is an initial point, $[\cdot,\cdot;F] : D^2 \to L(X, Y)$, $\phi_4$ is any two-point optimal fourth-order scheme and $\phi_8(x, y, z) := z - \big(2[z, y; F] - [z, x; F]\big)^{-1}\big(F'(x) - [y, x; F] + [z, y; F]\big)F'(x)^{-1}F(z)$. The eighth order of convergence of method Equation (2) was shown in [13] when $X = Y = \mathbb{R}$, $[x, y; F] = \frac{F(x) - F(y)}{x - y}$ for $x \ne y$ and $[x, x; F] = F'(x)$, that is, when $[\cdot,\cdot;F]$ is a divided difference of first order of the operator $F$ [4,5]. The local convergence was shown using Taylor series expansions and hypotheses reaching up to the fifth order derivative. The hypotheses on the derivatives of $F$ limit the applicability of method Equation (2). As a motivational example, define the function $F$ on $X = Y = \mathbb{R}$, $D = [-\frac{1}{\pi}, \frac{2}{\pi}]$ by
$$ F(x) = \begin{cases} x^3 \log(\pi^2 x^2) + x^5 \sin\frac{1}{x}, & x \ne 0,\\[2pt] 0, & x = 0. \end{cases} $$

Then, for $x \ne 0$, we have

$$ F'(x) = 2x^2 - x^3\cos\frac{1}{x} + 3x^2\log(\pi^2 x^2) + 5x^4\sin\frac{1}{x}, $$

$$ F''(x) = -8x^2\cos\frac{1}{x} + 2x\big(5 + 3\log(\pi^2 x^2)\big) + x(20x^2 - 1)\sin\frac{1}{x} $$

and

$$ F'''(x) = \frac{1}{x}(1 - 36x^2)\cos\frac{1}{x} + 22 + 6\log(\pi^2 x^2) + (60x^2 - 9)\sin\frac{1}{x}. $$
One can easily see that the function $F'''$ is unbounded on $D$ at the point $x = 0$. Hence, the results in [13] cannot be applied to show the convergence of method Equation (2), or of its special cases, that require hypotheses on the fifth derivative of the function $F$ or higher. Notice that, in particular, there is a plethora of iterative methods for approximating solutions of nonlinear equations [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]. These results show that the initial guess should be close to the required root for the convergence of the corresponding method; however, how close should an initial guess be for convergence to be possible? Such local results give no information on the radius of the convergence ball of the corresponding method. The same technique can be used for other methods.
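To fix ideas, the following is a minimal Python sketch of the scalar ($X = Y = \mathbb{R}$) specialization of method Equation (2), taking as $\phi_4$ the Ostrowski-type substep used later for method M1; the helper names, the test function (the motivational example above) and the initial guess are illustrative only, and the formula for $\phi_8$ follows our reading of the display above.

```python
import math

def dd(f, a, b):
    """First-order divided difference [a, b; f] = (f(a) - f(b)) / (a - b)."""
    return (f(a) - f(b)) / (a - b)

def eighth_order_step(f, fp, x):
    """One step of method (2) for scalar equations, with an Ostrowski-type phi_4."""
    d = fp(x)
    y = x - f(x) / d                            # first substep: Newton
    z = y - f(y) / (2.0 * dd(f, y, x) - d)      # phi_4 (fourth-order substep)
    w = (d - dd(f, y, x) + dd(f, z, y)) / (2.0 * dd(f, z, y) - dd(f, z, x))
    return z - w * f(z) / d                     # phi_8 (eighth-order substep)

# Motivational example; its zero is x* = 1/pi.
def f(x):
    return x**3 * math.log(math.pi**2 * x**2) + x**5 * math.sin(1.0 / x) if x else 0.0

def fp(x):
    return (2*x**2 - x**3 * math.cos(1/x)
            + 3*x**2 * math.log(math.pi**2 * x**2) + 5*x**4 * math.sin(1/x))

x1 = eighth_order_step(f, fp, 0.317)   # the initial guess used later in Table 8
print(x1 - 1.0 / math.pi)              # ~0.0: the true |e_1| ~ 1.6e-17 (cf. Table 8, M1)
```

A single step already reaches machine precision here, so the eighth order itself cannot be observed in double precision; this is why the numerical section below works with at least 1000 significant digits.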
In the present study, we expand the applicability of the method given by Equation (2) using only hypotheses on the first order derivative of the function $F$. We also propose computable radii of convergence and error bounds based on Lipschitz constants. We further present a range of initial guesses that tells us how close to $x^*$ an initial guess must be in order to guarantee the convergence of method Equation (2). This problem was not addressed in [13]. The advantages of our approach are similar to the ones already mentioned for the method described by Equation (2).
The rest of the paper is organised as follows: in Section 2, we present the local convergence analysis of scheme Equation (2); Section 3 is devoted to numerical examples which demonstrate our theoretical results; Section 4 studies the basins of attraction of the considered methods; finally, conclusions are given in Section 5.

2. Local Convergence: One Dimensional Case

In this section, we define some scalar functions and parameters to study the local convergence of method Equation (2).
Let $K_0 > 0$, $K_1 > 0$, $K > 0$, $L_0 > 0$, $L > 0$, $M \ge 1$ and $\lambda > 1$ be given constants, and let $g_2 : [0, \frac{1}{L_0}) \to \mathbb{R}$ be a nondecreasing continuous function. Define the functions $g_1$ and $p$ on the interval $[0, \frac{1}{L_0})$ by

$$ g_1(t) = \frac{Lt}{2(1 - L_0 t)}, \qquad p(t) = \big(K_0 g_2(t) t^{\lambda - 1} + K_1\big) t $$

and the parameter $r_1$ by

$$ r_1 = \frac{2}{2L_0 + L}. $$
Notice that $g_1$ is monotonically increasing on $[0, \frac{1}{L_0})$, with $g_1(r_1) = 1$ and $0 \le g_1(t) < 1$ for each $t \in [0, r_1)$; indeed, $1 - L_0 r_1 = \frac{L}{2L_0 + L}$, so $g_1(r_1) = \frac{L r_1}{2(1 - L_0 r_1)} = 1$. Further, define the function $h_2 : [0, \frac{1}{L_0}) \to \mathbb{R}$ by $h_2(t) = g_2(t) t^{\lambda - 1} - 1$.

Suppose that

$$ g_2(t)\,t^{\lambda - 1} < 1 \ \text{for each } t \in [0, r_1) \quad \text{and} \quad h_2(l) > 0 \ \text{as } t \to l < r_1, \ \text{for some } l > 0. \qquad (3) $$

Then, we have $h_2(0) = -1 < 0$. By Equation (3) and the intermediate value theorem, the function $h_2$ has zeros in the interval $(0, l)$; let $r_2$ be the smallest such zero. Further, define the functions $q$ and $h_q$ on the interval $[0, \frac{1}{L_0})$ by

$$ q(t) = 2\big(K_0 g_2(t) t^{\lambda} + K_1 g_1(t) t\big) + p(t) \quad \text{and} \quad h_q(t) = q(t) - 1. $$

Using $h_q(0) = -1 < 0$ and Equation (3), we deduce that the function $h_q$ has a smallest zero, denoted by $r_q$.
Finally, define the functions $g_3$ and $h_3$ on the interval $[0, r_q)$ by

$$ g_3(t) = \left[1 + \frac{\big(K + (L_0 + K_1) t + K_0 g_1(t) t\big) M}{(1 - L_0 t)\big(1 - q(t)\big)}\right] g_2(t)\, t^{\lambda - 1} $$

and

$$ h_3(t) = g_3(t) - 1. $$
Then, we get $h_3(0) = -1$ and $h_3(t) \to +\infty$ as $t \to r_q^-$. Denote by $r_3$ the smallest zero of the function $h_3$ on the interval $(0, r_q)$. Define

$$ r = \min\{r_1, r_2, r_3\}. \qquad (4) $$

Then, we have that

$$ 0 < r \le r_1 < \frac{1}{L_0} \qquad (5) $$

and, for each $t \in [0, r)$,

$$ 0 \le g_1(t) < 1, \qquad (6) $$

$$ 0 \le p(t) < 1, \qquad (7) $$

$$ 0 \le q(t) < 1, \qquad (8) $$

$$ 0 \le g_2(t)\, t^{\lambda - 1} < 1 \qquad (9) $$

and

$$ 0 \le g_3(t) < 1. \qquad (10) $$
Let $U(\gamma, s)$ and $\bar{U}(\gamma, s)$ stand, respectively, for the open and closed balls in $X$ with center $\gamma \in X$ and radius $s > 0$.
Next, we present the local convergence analysis of scheme Equation (2) using the preceding notations.
Theorem 1.
Let $F : D \subseteq X \to Y$ be a Fréchet differentiable operator, and let $[\cdot,\cdot;F] : D^2 \to L(X, Y)$ be a divided difference of order one. Suppose that there exist $x^* \in D$, a subset $D_0$ of $D$, $L_0 > 0$ and $\lambda > 1$ such that Equation (3) holds and, for each $x \in D_0$,

$$ F(x^*) = 0, \quad F'(x^*)^{-1} \in L(Y, X), \qquad (11) $$

$$ \|z(x) - x^*\| \le g_2(\|x - x^*\|)\,\|x - x^*\|^{\lambda} \qquad (12) $$

and

$$ \|F'(x^*)^{-1}\big(F'(x) - F'(x^*)\big)\| \le L_0 \|x - x^*\|, \qquad (13) $$

where $z(x) = \phi_4\big(x, x - F'(x)^{-1}F(x)\big)$.

Moreover, suppose that there exist $K_0 > 0$, $K_1 > 0$, $K > 0$, $L > 0$ and $M \ge 1$ such that, for each $x, y \in U(x^*, \frac{1}{L_0}) \cap D_0$,

$$ \|F'(x^*)^{-1}\big([x, y; F] - F'(x^*)\big)\| \le K_0\|x - x^*\| + K_1\|y - x^*\|, \qquad (14) $$

$$ \|F'(x^*)^{-1}[x, y; F]\| \le K, \qquad (15) $$

$$ \|F'(x^*)^{-1}\big(F'(x) - F'(y)\big)\| \le L\|x - y\|, \qquad (16) $$

$$ \|F'(x^*)^{-1}F'(x)\| \le M \qquad (17) $$

and

$$ \bar{U}(x^*, r) \subseteq D_0, \qquad (18) $$
where the radius of convergence $r$ is defined by Equation (4). Then, the sequence $\{x_n\}$ generated by method Equation (2) for $x_0 \in U(x^*, r) \setminus \{x^*\}$ is well defined, remains in $U(x^*, r)$ for each $n = 0, 1, 2, \ldots$ and converges to $x^*$. Moreover, the following estimates hold:

$$ \|y_n - x^*\| \le g_1(\|x_n - x^*\|)\,\|x_n - x^*\| \le \|x_n - x^*\| < r, \qquad (19) $$

$$ \|z_n - x^*\| \le g_2(\|x_n - x^*\|)\,\|x_n - x^*\|^{\lambda} \le \|x_n - x^*\| \qquad (20) $$

and

$$ \|x_{n+1} - x^*\| \le g_3(\|x_n - x^*\|)\,\|x_n - x^*\| \le \|x_n - x^*\|, \qquad (21) $$

where the "$g$" functions are defined previously. Furthermore, for $T \in [r, \frac{2}{L_0})$, the limit point $x^*$ is the only solution of the equation $F(x) = 0$ in $\bar{U}(x^*, T) \cap D_0$.
Proof of Theorem 1.
We shall show estimates Equations (19)–(21) with the help of mathematical induction. By the hypothesis $x_0 \in U(x^*, r) \setminus \{x^*\}$ and Equations (5) and (13), we get that

$$ \|F'(x^*)^{-1}\big(F'(x_0) - F'(x^*)\big)\| \le L_0\|x_0 - x^*\| < L_0 r < 1. \qquad (22) $$
It follows from Equation (22) and the Banach lemma on invertible operators [4,12] that $F'(x_0)^{-1} \in L(Y, X)$, $y_0$ is well defined and

$$ \|F'(x_0)^{-1}F'(x^*)\| \le \frac{1}{1 - L_0\|x_0 - x^*\|}. \qquad (23) $$
Using the first substep of method Equation (2) for $n = 0$ together with Equations (4), (6), (11), (16) and (23), we get in turn

$$ \begin{aligned} \|y_0 - x^*\| &= \|x_0 - x^* - F'(x_0)^{-1}F(x_0)\|\\ &\le \|F'(x_0)^{-1}F'(x^*)\| \left\| \int_0^1 F'(x^*)^{-1}\Big(F'\big(x^* + \theta(x_0 - x^*)\big) - F'(x_0)\Big)(x_0 - x^*)\, d\theta \right\|\\ &\le \frac{L\|x_0 - x^*\|^2}{2\big(1 - L_0\|x_0 - x^*\|\big)} = g_1(\|x_0 - x^*\|)\,\|x_0 - x^*\| \le \|x_0 - x^*\| < r, \end{aligned} \qquad (24) $$
which shows Equation (19) and $y_0 \in U(x^*, r)$. Then, from Equations (3) and (12), we see that Equation (20) follows for $n = 0$; hence $z_0 \in U(x^*, r)$. Next, we shall show that $\big(2[z_0, y_0; F] - [z_0, x_0; F]\big)^{-1} \in L(Y, X)$. Using Equations (12) and (14) and the strict monotonicity of $q$, we obtain in turn that

$$ \begin{aligned} \big\|F'(x^*)^{-1}\big(2[z_0, y_0; F] - [z_0, x_0; F] - F'(x^*)\big)\big\| &\le 2\big\|F'(x^*)^{-1}\big([z_0, y_0; F] - F'(x^*)\big)\big\| + \big\|F'(x^*)^{-1}\big([z_0, x_0; F] - F'(x^*)\big)\big\|\\ &\le 2\big(K_0\|z_0 - x^*\| + K_1\|y_0 - x^*\|\big) + p(\|x_0 - x^*\|)\\ &\le 2\big(K_0 g_2(\|x_0 - x^*\|)\|x_0 - x^*\|^{\lambda} + K_1 g_1(\|x_0 - x^*\|)\|x_0 - x^*\|\big) + p(\|x_0 - x^*\|)\\ &= q(\|x_0 - x^*\|) \le q(r) < 1. \end{aligned} \qquad (25) $$
That is,

$$ \big\|\big(2[z_0, y_0; F] - [z_0, x_0; F]\big)^{-1}F'(x^*)\big\| \le \frac{1}{1 - q(\|x_0 - x^*\|)}. \qquad (26) $$
Hence, $x_1$ is well defined by the third substep of method Equation (2) for $n = 0$. By Equation (11), we can write

$$ F(x_0) = F(x_0) - F(x^*) = \int_0^1 F'\big(x^* + \theta(x_0 - x^*)\big)(x_0 - x^*)\, d\theta. \qquad (27) $$
Notice that $\|x^* + \theta(x_0 - x^*) - x^*\| = \theta\|x_0 - x^*\| < r$ for all $\theta \in (0, 1)$; hence $x^* + \theta(x_0 - x^*) \in U(x^*, r)$. Then, by Equations (17) and (27), we get that

$$ \|F'(x^*)^{-1}F(x_0)\| = \left\|\int_0^1 F'(x^*)^{-1}F'\big(x^* + \theta(x_0 - x^*)\big)(x_0 - x^*)\, d\theta\right\| \le M\|x_0 - x^*\|. \qquad (28) $$
Replacing $x_0$ by $z_0$ in Equation (28), we also have that

$$ \|F'(x^*)^{-1}F(z_0)\| \le M\|z_0 - x^*\|, \qquad (29) $$

since $z_0 \in U(x^*, r)$.
Then, using the last substep of method Equation (2) for $n = 0$, Equations (4), (10), (13)–(15), (19), (20) (for $n = 0$), (23), (26) and (29), we obtain that

$$ \begin{aligned} \|x_1 - x^*\| &\le \|z_0 - x^*\| + \|F'(x_0)^{-1}F'(x^*)\| \Big[ \big\|F'(x^*)^{-1}\big([y_0, x_0; F] - F'(x^*)\big)\big\| + \big\|F'(x^*)^{-1}\big(F'(x_0) - F'(x^*)\big)\big\|\\ &\quad + \big\|F'(x^*)^{-1}[z_0, y_0; F]\big\| \Big] \,\big\|\big(2[z_0, y_0; F] - [z_0, x_0; F]\big)^{-1}F'(x^*)\big\| \,\|F'(x^*)^{-1}F(z_0)\|\\ &\le \|z_0 - x^*\| + \frac{\big(K + L_0\|x_0 - x^*\| + K_0\|y_0 - x^*\| + K_1\|x_0 - x^*\|\big) M \|z_0 - x^*\|}{\big(1 - L_0\|x_0 - x^*\|\big)\big(1 - q(\|x_0 - x^*\|)\big)}\\ &\le \left[1 + \frac{\big(K + (L_0 + K_1)\|x_0 - x^*\| + K_0 g_1(\|x_0 - x^*\|)\|x_0 - x^*\|\big) M}{\big(1 - L_0\|x_0 - x^*\|\big)\big(1 - q(\|x_0 - x^*\|)\big)}\right] \|z_0 - x^*\|\\ &\le g_3(\|x_0 - x^*\|)\,\|x_0 - x^*\| < \|x_0 - x^*\| < r, \end{aligned} \qquad (30) $$
which shows Equation (21) for $n = 0$ and $x_1 \in U(x^*, r)$. By simply replacing $x_0, y_0, z_0$ by $x_n, y_n, z_n$ in the preceding estimates, we arrive at Equations (19)–(21) for all $n = 0, 1, 2, \ldots$. Using the monotonicity of $g_3$ on the interval $[0, r]$, $x_0 \in U(x^*, r)$ and Equation (21), we have, with $c = g_3(\|x_0 - x^*\|) \in [0, 1)$,

$$ \|x_{n+1} - x^*\| \le c\,\|x_n - x^*\| \le \cdots \le c^{n+1}\|x_0 - x^*\|. \qquad (31) $$

Since $c < 1$, this yields $\lim_{n \to \infty} x_n = x^*$ and $x_{n+1} \in U(x^*, r)$.
Finally, to show the uniqueness part, let $y^* \in \bar{U}(x^*, T) \cap D_0$ be such that $F(y^*) = 0$. Set $Q = \int_0^1 F'\big(x^* + \theta(y^* - x^*)\big)\, d\theta$. Then, using Equation (13), we get that

$$ \|F'(x^*)^{-1}\big(Q - F'(x^*)\big)\| \le L_0 \int_0^1 \theta\,\|y^* - x^*\|\, d\theta \le \frac{L_0}{2}\, T < 1. $$

Hence, $Q^{-1} \in L(Y, X)$. Then, in view of the identity $0 = F(y^*) - F(x^*) = Q(y^* - x^*)$, we conclude that $x^* = y^*$. It is worth noticing that, in view of Equations (12), (21), (31) and the definition of the functions $g_i$, $i = 1, 2, 3$, the convergence order of method Equation (2) is at least $\lambda$. ☐
Remark 1
(a)
In view of Equation (13) and the estimate

$$ \|F'(x^*)^{-1}F'(x)\| = \big\|F'(x^*)^{-1}\big(F'(x) - F'(x^*)\big) + I\big\| \le 1 + \big\|F'(x^*)^{-1}\big(F'(x) - F'(x^*)\big)\big\| \le 1 + L_0\|x - x^*\|, $$

condition Equation (17) can be dropped and $M$ can be replaced by

$$ M = M(t) = 1 + L_0 t, $$

or simply by $M = 2$, since $t \in [0, \frac{1}{L_0})$.
(b)
The results obtained here can also be used for operators $F$ satisfying the autonomous differential equation [4,5] of the form

$$ F'(x) = P(F(x)), $$

where $P$ is a known continuous operator. Since $F'(x^*) = P(F(x^*)) = P(0)$, we can apply the results without actually knowing the solution $x^*$. As an example, let $F(x) = e^x - 2$. Then, we can choose $P(x) = x + 2$.
(c)
The radius $r_1$ was shown in [4,5] to be the convergence radius for Newton's method under the conditions Equations (13) and (16). It follows from Equation (4) and the definition of $r_1$ that the convergence radius $r$ of method Equation (2) cannot be larger than the convergence radius $r_1$ of the second order Newton's method. As already noted in [4,5], $r_1$ is at least as large as the convergence radius given by Rheinboldt [12],

$$ r_R = \frac{2}{3L}. $$

In particular, for $L_0 < L$, we have that

$$ r_R < r_1 $$

and

$$ \frac{r_R}{r_1} \to \frac{1}{3} \quad \text{as} \quad \frac{L_0}{L} \to 0. $$

That is, our convergence radius $r_1$ is at most three times larger than Rheinboldt's. The same value of $r_R$ was given by Traub [14].
(d)
We shall show how to define the function $g_2$ and the constant $l$ appearing in condition Equation (3) for the method

$$ \begin{aligned} y_n &= x_n - F'(x_n)^{-1}F(x_n),\\ z_n &= \phi_4(x_n, y_n) := y_n - [y_n, x_n; F]^{-1} F'(x_n)\, [y_n, x_n; F]^{-1} F(y_n),\\ x_{n+1} &= \phi_8(x_n, y_n, z_n). \end{aligned} \qquad (33) $$

Clearly, method Equation (33) is a special case of method Equation (2). If $X = Y = \mathbb{R}$, then method Equation (33) reduces to the Kung–Traub method [14]. We shall follow the proof of Theorem 1, but first we need to show that $[y_n, x_n; F]^{-1} \in L(Y, X)$. By assuming the hypotheses of Theorem 1, we get that

$$ \big\|F'(x^*)^{-1}\big([y_n, x_n; F] - F'(x^*)\big)\big\| \le K_0\|y_n - x^*\| + K_1\|x_n - x^*\| \le \big(K_0 g_1(\|x_n - x^*\|) + K_1\big)\|x_n - x^*\| = p_0(\|x_n - x^*\|). $$

The function $h_{p_0}(t) = p_0(t) - 1$, where $p_0(t) = \big(K_0 g_1(t) + K_1\big)t$, has a smallest zero, denoted by $r_{p_0}$, in the interval $(0, \frac{1}{L_0})$. Set $l = r_{p_0}$. Then, we have from the second substep of method Equation (33) that

$$ \begin{aligned} \|z_n - x^*\| &\le \|y_n - x^*\| + \big\|[y_n, x_n; F]^{-1}F'(x^*)\big\|\,\big\|F'(x^*)^{-1}F'(x_n)\big\|\,\big\|[y_n, x_n; F]^{-1}F'(x^*)\big\|\,\big\|F'(x^*)^{-1}F(y_n)\big\|\\ &\le \left(1 + \frac{M^2}{\big(1 - p_0(\|x_n - x^*\|)\big)^2}\right)\|y_n - x^*\|\\ &\le \frac{1}{2}\left(1 + \frac{M^2}{\big(1 - p_0(\|x_n - x^*\|)\big)^2}\right)\frac{L\|x_n - x^*\|^2}{1 - L_0\|x_n - x^*\|}. \end{aligned} \qquad (34) $$

It follows from Equation (34) that $\lambda = 2$ and

$$ g_2(t) = \frac{L}{2(1 - L_0 t)}\left(1 + \frac{M^2}{\big(1 - p_0(t)\big)^2}\right). \qquad (35) $$

Then, the convergence radius is given by

$$ r = \min\{r_1, r_2, r_{p_0}, r_3\}. $$

3. Numerical Examples and Applications

In this section, we verify the validity and effectiveness of the theoretical results proposed in Section 2 for the scheme proposed by Sharma and Arora [13]. For this purpose, we choose a variety of nonlinear equations and systems of nonlinear equations, given in the following examples, including the motivational example. We consider the following methods:

$$ y_n = x_n - F'(x_n)^{-1}F(x_n), \quad z_n = y_n - \big(2[y_n, x_n; F] - F'(x_n)\big)^{-1}F(y_n), \quad x_{n+1} = \phi_8(x_n, y_n, z_n), $$

$$ y_n = x_n - F'(x_n)^{-1}F(x_n), \quad z_n = y_n - \big(2[y_n, x_n; F]^{-1} - F'(x_n)^{-1}\big)F(y_n), \quad x_{n+1} = \phi_8(x_n, y_n, z_n) $$

and

$$ y_n = x_n - F'(x_n)^{-1}F(x_n), \quad z_n = y_n - \big(3I - 2F'(x_n)^{-1}[y_n, x_n; F]\big)F'(x_n)^{-1}F(y_n), \quad x_{n+1} = \phi_8(x_n, y_n, z_n), $$

denoted by M1, M2 and M3, respectively. In the system examples below, $[\cdot,\cdot;F]$ must be realised as a concrete first-order divided difference operator; one standard choice is sketched next.
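The paper does not spell out which divided difference operator was used for the system examples, so the following Python sketch shows one standard componentwise choice (see, e.g., [4,11]); the helper name `divided_difference` is ours. It satisfies the defining property $[y, x; F](y - x) = F(y) - F(x)$ by telescoping.

```python
import numpy as np

def divided_difference(F, y, x):
    """One standard first-order divided difference [y, x; F] for systems:
    column j is a difference quotient of F in the j-th coordinate, with the
    first j-1 coordinates already advanced from x to y.  By telescoping,
    A @ (y - x) = F(y) - F(x)."""
    n = len(x)
    A = np.zeros((n, n))
    u = np.array(x, dtype=float)
    for j in range(n):
        v = u.copy()
        v[j] = y[j]                               # advance the j-th coordinate
        A[:, j] = (F(v) - F(u)) / (y[j] - x[j])   # requires y[j] != x[j]
        u = v
    return A
```

With this operator (or any other valid divided difference), the substeps of M1, M2 and M3 become linear solves; for instance, the second substep of M1 reads `z = y - np.linalg.solve(2*divided_difference(F, y, x) - Fp(x), F(y))`, where `Fp` denotes the Fréchet derivative.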
First of all, we require initial guesses $x_0$ that guarantee the convergence of the iterative methods. Therefore, we calculate the values of $r_R$, $r_1$, $r_2$, $r_q$, $r_3$, $r_{p_0}$ and $r$ to find the convergence domain; they are displayed in Table 1, Table 2, Table 3 and Table 4 up to 5 significant digits, although many more significant digits are available for each value. We then also verify the theoretical order of convergence of these methods for scalar equations on the basis of the computational order of convergence and $\frac{|e_{n}|}{|e_{n-1}|^p}$. In Table 5 and Table 6, we display the iteration index $(n)$, the approximate zeros $(x_n)$, the residual errors of the corresponding function $(|F(x_n)|)$, the errors $|e_n|$ (where $e_n = x_n - x^*$), $\frac{|e_n|}{|e_{n-1}|^p}$ and the asymptotic error constant $\eta = \lim_{n \to \infty} \frac{|e_n|}{|e_{n-1}|^p}$. In addition, we use the formulas given in [7] to calculate the computational order of convergence (COC),

$$ \rho = \frac{\ln\big(\|x_{n+2} - x^*\| \,/\, \|x_{n+1} - x^*\|\big)}{\ln\big(\|x_{n+1} - x^*\| \,/\, \|x_n - x^*\|\big)}, \quad \text{for each } n = 0, 1, 2, \ldots, $$

or the approximate computational order of convergence (ACOC) [7],

$$ \rho^* = \frac{\ln\big(\|x_{n+2} - x_{n+1}\| \,/\, \|x_{n+1} - x_n\|\big)}{\ln\big(\|x_{n+1} - x_n\| \,/\, \|x_n - x_{n-1}\|\big)}, \quad \text{for each } n = 1, 2, \ldots $$
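Both quantities are easy to compute from stored iterates; the following Python sketch (with helper names of our own choosing) mirrors the two formulas above. Note that in ordinary double precision only the first one or two values are meaningful, which is why the paper evaluates them with at least 1000 significant digits of working precision.

```python
import math

def coc(xs, x_star):
    """Computational order of convergence: one value per triple of iterates."""
    e = [abs(x - x_star) for x in xs]
    return [math.log(e[n + 2] / e[n + 1]) / math.log(e[n + 1] / e[n])
            for n in range(len(e) - 2)]

def acoc(xs):
    """Approximate COC: needs no knowledge of the exact solution x*."""
    d = [abs(xs[n + 1] - xs[n]) for n in range(len(xs) - 1)]
    return [math.log(d[n + 1] / d[n]) / math.log(d[n] / d[n - 1])
            for n in range(1, len(d) - 1)]
```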
We calculate the computational order of convergence, the asymptotic error constant and the other constants up to several significant digits (a minimum of 1000 significant digits) to minimise round-off errors.

In the context of systems of nonlinear equations, we also consider the nonlinear systems of Examples 1 and 3 to check the proposed theoretical results. In this regard, we display the iteration index $(n)$, the residual errors of the corresponding function $(\|F(x_n)\|)$, the errors $\|e_n\|$ (where $e_n = x_n - x^*$), $\frac{\|e_n\|}{\|e_{n-1}\|^p}$ and the asymptotic error constant $\eta = \lim_{n \to \infty} \frac{\|e_n\|}{\|e_{n-1}\|^p}$ in Table 5 and Table 7.

As mentioned in the earlier paragraph, we calculate the values of all the constants and functional residuals to many significant digits but, due to limited space, we display the values of $x_n$ up to 15 significant digits, the values of $\xi$ (the COC) up to 5 significant digits, and the values of $\frac{\|e_n\|}{\|e_{n-1}\|^p}$ and $\eta$ up to 10 significant digits. Furthermore, the residual errors of the function or system of nonlinear functions ($|F(x_n)|$ or $\|F(x_n)\|$) and the errors $|e_n|$ or $\|e_n\|$ are displayed up to 2 significant digits with an exponent, as shown in the tables for the corresponding test functions. However, a minimum of 1000 significant digits is available for every value.

Furthermore, when the exact zero of a test function is not available, we use an approximate zero, corrected up to 1000 significant digits, to calculate $\|x_n - x^*\|$. All the numerical experiments have been carried out in the programming language Mathematica (Version 9) with multiple precision arithmetic, which minimises round-off errors.
Further, we use λ = 2 and function g 2 defined by Equation (35) in all the examples.
Example 1.
Let $X = Y = \mathbb{R}^3$, $D = \bar{U}(0, 1)$. Define $F$ on $D$ for $v = (x, y, z)^T$ by

$$ F(v) = \big(10x + \sin(x + y) - 1,\ 8y - \cos^2(z - y) - 1,\ 12z + \sin z - 1\big)^T. $$

Then, the Fréchet derivative is given by

$$ F'(v) = \begin{pmatrix} 10 + \cos(x + y) & \cos(x + y) & 0\\ 0 & 8 - \sin\big(2(z - y)\big) & \sin\big(2(z - y)\big)\\ 0 & 0 & 12 + \cos z \end{pmatrix}. $$
Then, for $x^* = (0.068978\ldots,\ 0.246442\ldots,\ 0.076929\ldots)^T$, we get $L = L_0 = 38.9911$, $K = M = 168.962$ and $K_0 = K_1 = \frac{L_0}{2}$. We calculate the radius of convergence, the residual errors of the corresponding function $(\|F(x_n)\|)$ (taking the initial approximation $(0.072, 0.25, 0.080)$ in the methods M1, M2 and M3), the errors $\|e_n\|$ (where $e_n = x_n - x^*$), $\frac{\|e_n\|}{\|e_{n-1}\|^p}$ and the asymptotic error constant $\eta = \lim_{n \to \infty} \frac{\|e_n\|}{\|e_{n-1}\|^p}$; they are displayed in Table 1 and Table 5.
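As a quick sanity check of the reported zero, one can evaluate $F$ at $x^*$ numerically; the following sketch does so, and the residuals are only at the $10^{-6}$ level because $x^*$ is quoted to six digits.

```python
import numpy as np

def F(v):
    x, y, z = v
    return np.array([10*x + np.sin(x + y) - 1.0,
                     8*y - np.cos(z - y)**2 - 1.0,
                     12*z + np.sin(z) - 1.0])

x_star = np.array([0.068978, 0.246442, 0.076929])
print(F(x_star))   # each component is O(1e-6)
```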
Example 2.
Let $X = Y = \mathbb{R}$, $D = [-1, 1]$ and define the function $F$ on $D$ by

$$ F(x) = e^x - 4x^2. $$

Then, for $x^* = 0.714806\ldots$, we obtain $L_0 = L = 2.91681$, $M = K = 1.82827$ and $K_0 = K_1 = \frac{L_0}{2}$. Hence, we obtain all the values of the constants, the residual errors of the function, etc., described earlier; see Table 2 and Table 6.
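To illustrate how the radii of Table 2 arise, the following Python sketch evaluates our reconstruction of the functions of Section 2 (with $\lambda = 2$ and $g_2$ from Equation (35)) at the constants of this example and locates the smallest zeros by a scan-and-bisect routine. All helper names are ours, and $r_3$ is omitted since it additionally needs $g_3$.

```python
L0 = L = 2.91681            # constants of Example 2
M = K = 1.82827
K0 = K1 = L0 / 2.0
lam = 2

g1 = lambda t: L * t / (2.0 * (1.0 - L0 * t))
p0 = lambda t: (K0 * g1(t) + K1) * t
g2 = lambda t: L / (2.0 * (1.0 - L0 * t)) * (1.0 + M**2 / (1.0 - p0(t))**2)  # Eq. (35)
p  = lambda t: (K0 * g2(t) * t**(lam - 1) + K1) * t
q  = lambda t: 2.0 * (K0 * g2(t) * t**lam + K1 * g1(t) * t) + p(t)

def smallest_zero(h, b, steps=100000):
    """Locate the first sign change of h on (0, b) and refine it by bisection."""
    t0, h0 = 1e-12, h(1e-12)
    for i in range(1, steps + 1):
        t1, h1 = i * b / steps, h(i * b / steps)
        if h0 < 0.0 <= h1:
            lo, hi = t0, t1
            for _ in range(100):
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if h(mid) < 0.0 else (lo, mid)
            return 0.5 * (lo + hi)
        t0, h0 = t1, h1

rR, r1 = 2.0 / (3.0 * L), 2.0 / (2.0 * L0 + L)
b = 1.0 / L0 - 1e-9
r2  = smallest_zero(lambda t: g2(t) * t**(lam - 1) - 1.0, b)
rq  = smallest_zero(lambda t: q(t) - 1.0, b)
rp0 = smallest_zero(lambda t: p0(t) - 1.0, b)
print(rR, r1, r2, rq, rp0)  # ~0.22856, 0.22856, 0.089459, 0.11255, 0.26191
```

The printed values agree with the corresponding entries of Table 2 to the five displayed significant digits.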
Example 3.
Let $X = Y = \mathbb{R}^3$, $D = \bar{U}(0, 1)$. Define $F$ on $D$ for $v = (x, y, z)^T$ by

$$ F(v) = \left(e^x - 1,\ \frac{e - 1}{2}\, y^2 + y,\ z\right)^T. $$

Then, the Fréchet derivative is given by

$$ F'(v) = \begin{pmatrix} e^x & 0 & 0\\ 0 & (e - 1)y + 1 & 0\\ 0 & 0 & 1 \end{pmatrix}. $$

Notice that $x^* = (0, 0, 0)^T$, $F'(x^*) = F'(x^*)^{-1} = \mathrm{diag}\{1, 1, 1\}$, $L_0 = e - 1 < L = 1.78957239$, $K_0 = K_1 = \frac{e - 1}{2}$ and $K = M = 1.7896$. Hence, we calculate the radius of convergence, the residual errors of the corresponding function $(\|F(x_n)\|)$ (taking the initial approximation $(0.094, 0.094, 0.095)$ in the methods M1, M2 and M3), the errors $\|e_n\|$ (where $e_n = x_n - x^*$) and the other constants; they are given in Table 3 and Table 7.
Example 4.
Returning to the motivational example from the introduction of this paper, we have $L = L_0 = \frac{2}{2\pi + 1}\big(80 + 16\pi + (11 + 12\log 2)\pi^2\big) \approx 88.127$, $M = K = 2$ and $K_0 = K_1 = \frac{L_0}{2}$; here $x^* = \frac{1}{\pi}$. We display all the values of the constants and the residual errors of the function described earlier in Table 4 and Table 8.

3.1. Results and Discussion

Sharma and Arora's study was valid only for simple roots of scalar equations. The order of convergence was shown using Taylor series expansions and hypotheses up to the fourth order derivative (or even higher) of the function involved, which restricts the applicability of the scheme, even though only first order derivatives appear in it. In order to overcome this problem, we propose hypotheses only up to the first order derivative. The applicability of the proposed scheme can be seen in Examples 1 and 3 and in the motivational example, where the earlier studies are not applicable. We also provide the radius of convergence for each considered test problem, which guarantees the convergence.

In addition, we have also calculated the residual error of each corresponding test function and the difference between the exact zero and the approximated zero. It is straightforward to see from Table 5, Table 6, Table 7 and Table 8 that the mentioned methods have small residual errors for each corresponding test function and a small error between the exact and the approximated zero, so we can say that these methods converge fast towards the exact root. Moreover, the mentioned methods also have a simple asymptotic error constant for most of the test functions, as can be seen in Table 5, Table 6, Table 7 and Table 8. The dynamical study of the iterative methods via basins of attraction also confirms the fast convergence. However, one can find different behaviour of our methods when considering other nonlinear equations: the behaviour of an iterative method mainly depends on the structure of the method, the considered test function, the initial guess, the programming software, etc.

4. Basins of Attraction

In this section, we further investigate some dynamical properties of the attained simple root finders in the complex plane by analysing the structure of their basins of attraction. It is known that the corresponding fractal of an iterative root-finding method is a boundary set in the complex plane, characterised by applying the iterative method to a fixed polynomial $p(z)$; see, e.g., [6,16,17]. The aim here is to use the basin of attraction as another means of comparing the iterative methods.

From the dynamical point of view, we consider a rectangle $A = [-3, 3] \times [-3, 3] \subset \mathbb{C}$ with a $400 \times 400$ grid, and we assign a colour to each point $z_0 \in A$ according to the simple root at which the corresponding iterative method starting from $z_0$ converges; we mark the point as black if the method does not converge. The stopping criterion for convergence is a tolerance of $10^{-4}$, and the maximum number of full cycles for each method is taken to be 100. In this way, we distinguish the attraction basins of the different iterative methods by their colours.
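The following Python sketch shows the plotting loop just described. Newton's iteration is used as a stand-in `step` (the steps of M1, M2 and M3 plug in the same way), and we read the $10^{-4}$ criterion as the distance of the iterate to the nearest root, which is an assumption on our part; the loop is unoptimised pure Python and takes a while at the full $400 \times 400$ resolution.

```python
import numpy as np
import matplotlib.pyplot as plt

p, dp = lambda z: z**4 + 1, lambda z: 4 * z**3           # test problem 1
roots = np.exp(1j * np.pi * (2 * np.arange(4) + 1) / 4)  # roots of z^4 + 1

def basin(step, n=400, box=3.0, tol=1e-4, itmax=100):
    grid = np.linspace(-box, box, n)
    img = np.zeros((n, n))                 # 0 = black = non-convergent point
    for i, yv in enumerate(grid):
        for j, xv in enumerate(grid):
            z = complex(xv, yv)
            for _ in range(itmax):
                try:
                    z = step(z)
                except ZeroDivisionError:
                    break
                d = np.abs(roots - z)
                if d.min() < tol:
                    img[i, j] = 1 + d.argmin()   # colour index of attracting root
                    break
    return img

newton = lambda z: z - p(z) / dp(z)        # stand-in iteration
plt.imshow(basin(newton), extent=[-3, 3, -3, 3], origin="lower")
plt.show()
```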
Test Problem 1.
Let $p_1(z) = z^4 + 1$, having the simple zeros $\{0.707107 - 0.707107i,\ 0.707107 + 0.707107i,\ -0.707107 - 0.707107i,\ -0.707107 + 0.707107i\}$. It is straightforward to see from Figure 1 that M1 is the best method in terms of less chaotic behaviour in obtaining the solutions. Further, M1 also has the largest basins of attraction and is faster in comparison with the other mentioned methods, namely M2 and M3.
Test Problem 2.
Let $p_2(z) = z^3 + 5z - 1$, having the simple zeros $\{-0.0992186 - 2.24266i,\ -0.0992186 + 2.24266i,\ 0.198437\}$. Based on Figure 2, it is observed that all the methods, namely M1, M2 and M3, have almost no non-convergent points. However, method M1 has a larger and brighter basin of attraction compared with methods M2 and M3. Further, the dynamical behaviour of method M1 on the boundary points is less chaotic than that of M2 and M3.
Test Problem 3.
Let $p_3(z) = z^6 + z$, having the simple zeros $\{0.809017 - 0.587785i,\ 0.809017 + 0.587785i,\ 0,\ -0.309017 - 0.951057i,\ -0.309017 + 0.951057i,\ -1\}$. On the basis of Figure 3, it is concluded that method M1 has a much lower number of non-convergent points compared with M2 and M3 (in fact, method M1 has almost no non-convergent points in this region). Further, the dynamical behaviour of methods M2 and M3 is very chaotic on the boundary points.

5. Conclusions

Commonly, researchers mention that the initial guess should be close to the required root for the guaranteed convergence of their proposed schemes for solving nonlinear equations. However, how close should the initial guess be if it is required to guarantee the convergence of the proposed method? In this paper, we propose a computable radius of convergence and error bounds based on Lipschitz conditions. Further, we reduce the hypotheses from the fourth order derivative of the involved function to only the first order derivative. It is worth noting that method Equation (2) does not change if we use the conditions of Theorem 1 instead of the stronger conditions proposed by Sharma and Arora (2015). Moreover, to obtain the error bounds and the order of convergence in practice, we can use the computational order of convergence defined in Section 3. Therefore, we obtain the order of convergence in practice in a way that avoids bounds involving estimates higher than the first Fréchet derivative.

In their earlier study, Sharma and Arora mentioned that their work is valid only for $\mathbb{R}$. However, in our study we have shown that this scheme works in any Banach space. The order of convergence of the proposed scheme may be unaltered or may be reduced in another space, because it depends upon the space and the function (for details, please see the numerical section). Finally, on account of the results obtained in Section 3, it can be concluded that the proposed study not only expands the applicability of the scheme given by Sharma and Arora (2015) but also provides a computable radius of convergence and error bounds for obtaining simple roots of nonlinear equations as well as of systems of nonlinear equations.

Author Contributions

The selection of the proposed scheme and the local convergence analysis (pages 1–8) are due to Ioannis K. Argyros. The numerical examples and applications, the results and discussion, the basins of attraction and the concluding remarks were presented by Ramandeep Behl and Sandile S. Motsa.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Amat, S.; Busquier, S.; Plaza, S. Dynamics of the King and Jarratt iterations. Aequ. Math. 2005, 69, 212–223.
2. Amat, S.; Busquier, S.; Plaza, S. Chaotic dynamics of a third-order Newton-type method. J. Math. Anal. Appl. 2010, 366, 24–32.
3. Amat, S.; Hernández, M.A.; Romero, N. A modified Chebyshev's iterative method with at least sixth order of convergence. Appl. Math. Comput. 2008, 206, 164–174.
4. Argyros, I.K. Convergence and Application of Newton-Type Iterations; Springer: New York, NY, USA, 2008.
5. Argyros, I.K.; Hilout, S. Numerical Methods in Nonlinear Analysis; World Scientific Publishing Company: Singapore, 2013.
6. Behl, R.; Motsa, S.S. Geometric construction of eighth-order optimal families of Ostrowski's method. Sci. World J. 2015, 2015.
7. Ezquerro, J.A.; Hernández, M.A. New iterations of R-order four with reduced computational cost. BIT Numer. Math. 2009, 49, 325–342.
8. Kanwar, V.; Behl, R.; Sharma, K.K. Simply constructed family of a Ostrowski's method with optimal order of convergence. Comput. Math. Appl. 2011, 62, 4021–4027.
9. Magreñán, Á.A. Different anomalies in a Jarratt family of iterative root-finding methods. Appl. Math. Comput. 2014, 233, 29–38.
10. Magreñán, Á.A. A new tool to study real dynamics: The convergence plane. Appl. Math. Comput. 2014, 248, 215–224.
11. Petković, M.S.; Neta, B.; Petković, L.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations; Academic Press: Cambridge, MA, USA, 2013.
12. Rheinboldt, W.C. An adaptive continuation process for solving systems of nonlinear equations. Banach Center Publ. 1978, 3, 129–142.
13. Sharma, J.R.; Arora, H. An efficient family of weighted-Newton methods with optimal eighth order convergence. Appl. Math. Lett. 2014, 29, 1–6.
14. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall Series in Automatic Computation: Englewood Cliffs, NJ, USA, 1964.
15. Weerakoon, S.; Fernando, T.G.I. A variant of Newton's method with accelerated third order convergence. Appl. Math. Lett. 2000, 13, 87–93.
16. Neta, B.; Scott, M.; Chun, C. Basins of attraction for several methods to find simple roots of nonlinear equations. Appl. Math. Comput. 2012, 218, 10548–10556.
17. Scott, M.; Neta, B.; Chun, C. Basin attractors for various methods. Appl. Math. Comput. 2011, 218, 2584–2599.
Figure 1. The methods M1, M2 and M3, respectively, for test problem 1.
Figure 2. The methods M1, M2 and M3, respectively, for test problem 2.
Figure 3. The methods M1, M2 and M3, respectively, for test problem 3.
Table 1. Different values of parameters which satisfy Theorem 1 (Example 1).

r_R        r_1        r_2        r_3        r_q        r_{p_0}    r
0.017098   0.017098   0.0061188  0.0077833  0.0080573  0.019592   0.0061188
Table 2. Different values of parameters which satisfy Theorem 1 (Example 2).

r_R        r_1        r_2        r_3        r_q        r_{p_0}    r
0.22856    0.22856    0.089459   0.086166   0.11255    0.26191    0.086166
Table 3. Different values of parameters which satisfy Theorem 1 (Example 3).

r_R        r_1        r_2        r_3        r_q        r_{p_0}    r
0.37258    0.38269    0.15120    0.13166    0.19038    0.44149    0.13166
Table 4. Different values of parameters which satisfy Theorem 1 (Example 4).

r_R        r_1        r_2        r_3        r_q        r_{p_0}    r
0.0075648  0.0075648  0.0027072  0.0035090  0.0035649  0.0086685  0.0027072
Table 5. Convergence behavior of different cases on Example 1.

Method  n   ||F(x_n)||   ||e_n||     ξ        ||e_n||/||e_{n−1}||^p   η
M1      0   6.1e−2       5.6e−3
M1      1   1.6e−17      1.8e−18              0.00006065014094        2.153729894e+49
M1      2   1.0e−93      9.1e−95     4.9289   2.314984786e+12
M1      3   1.4e−514     1.3e−515    5.5155   2.153729894e+49
M2      0   6.1e−2       5.6e−3
M2      1   2.3e−13      2.1e−14              0.00002153200773        4.146304252e−6
M2      2   2.3e−59      2.1e−60     4.0264   0.00001076045277
M2      3   8.6e−244     7.9e−245    4.0090   4.146304252e−6
M3      0   6.1e−2       5.6e−3
M3      1   3.6e−13      3.3e−14              0.00003395540062        9.787194795e−6
M3      2   2.2e−58      2.0e−59     4.0278   0.00001654426326
M3      3   1.7e−239     1.5e−240    4.0050   9.787194795e−6
Table 6. Convergence behavior of different cases on Example 2.

Method  n   x_n                 |F(x_n)|   |e_n|      ξ        |e_n|/|e_{n−1}|^p   η
M1      0   0.65                2.3e−1     6.5e−2
M1      1   0.714805912187027   6.5e−10    1.8e−10             0.5649113510        0.3156709071
M1      2   0.714805912362778   1.1e−78    2.9e−79    8.0295   0.3156709076
M1      3   0.714805912362778   1.1e−629   1.5e−629   8.0000   0.3156709071
M2      0   0.65                2.3e−1     6.5e−2
M2      1   0.714805910701679   6.1e−9     1.7e−9              5.339230124         2.311637753
M2      2   0.714805912362778   4.9e−70    1.3e−70    8.0479   2.311637800
M2      3   0.714805912362778   8.8e−559   2.4e−559   8.0000   2.311637753
M3      0   0.65                2.3e−1     6.5e−2
M3      1   0.714805907218054   1.9e−8     5.1e−9              16.53656565         6.144786331
M3      2   0.714805912362778   1.1e−65    3.0e−66    8.0606   6.144786786
M3      3   0.714805912362778   1.5e−523   4.2e−524   8.0000   6.144786331
Table 7. Convergence behavior of different cases on Example 3.

Method  n   ||F(x_n)||   ||e_n||    ξ        ||e_n||/||e_{n−1}||^p   η
M1      0   1.7e−1       1.6e−1
M1      1   7.2e−12      7.2e−12             0.00001412834175        0.001157325063
M1      2   3.0e−93      3.0e−93    7.8569   0.0004283379739
M1      3   7.9e−744     7.9e−744   7.9947   0.001157325063
M2      0   1.7e−1       1.6e−1
M2      1   1.9e−5       1.9e−5              0.02642235158           0.2812313492
M2      2   2.8e−20      2.8e−20    3.7651   0.2223712525
M2      3   1.7e−79      1.7e−79    3.9931   0.2812313492
M3      0   1.7e−1       1.6e−1
M3      1   2.8e−5       2.5e−5              0.03967215686           0.4999499551
M3      2   2.5e−19      2.5e−19    3.7373   0.3860243040
M3      3   1.9e−75      1.9e−75    3.9920   0.4999499551
Table 8. Convergence behavior of different cases on Example 4.

Method  n   x_n                 |F(x_n)|    |e_n|       ξ        |e_n|/|e_{n−1}|^p   η
M1      0   0.317               3.0e−4      1.3e−3
M1      1   0.318309886183791   3.8e−18     1.6e−17              1.881774410e+6      1.746669349e+6
M1      2   0.318309886183791   2.1e−129    8.7e−129    8.0023   1.746669349e+6
M1      3   0.318309886183791   1.4e−1019   6.0e−1019   8.0000   1.746669349e+6
M2      0   0.317               3.0e−4      1.3e−3
M2      1   0.318309886183790   5.5e−17     2.3e−16              2.711175817e+7      2.406103772e+7
M2      2   0.318309886183791   5.3e−119    2.2e−118    8.0041   2.406103772e+7
M2      3   0.318309886183791   3.5e−935    1.5e−934    8.0000   2.406103772e+7
M3      0   0.317               3.0e−4      1.3e−3
M3      1   0.318309886183790   1.7e−16     7.2e−16              8.333410395e+7      7.204280059e+7
M3      2   0.318309886183791   1.3e−114    5.3e−114    8.0052   7.204280059e+7
M3      3   0.318309886183791   1.1e−899    4.7e−899    8.0000   7.204280059e+7
