Article

A Few Iterative Methods by Using [1,n]-Order Padé Approximation of Function and the Improvements

1
Institute of Applied Mathematics, Bengbu University, Bengbu 233030, China
2
School of Computer Engineering, Bengbu University, Bengbu 233030, China
*
Author to whom correspondence should be addressed.
Mathematics 2019, 7(1), 55; https://doi.org/10.3390/math7010055
Submission received: 15 November 2018 / Revised: 28 December 2018 / Accepted: 30 December 2018 / Published: 7 January 2019
(This article belongs to the Special Issue Iterative Methods for Solving Nonlinear Equations and Systems)

Abstract

In this paper, a few single-step iterative methods, including the classical Newton's method and Halley's method, are first derived by applying the $[1,n]$-order Padé approximation of a function to finding the roots of nonlinear equations. In order to avoid the evaluation of high-order derivatives, we then modify the presented fourth-order method by using approximants of the second and third derivatives. In this way, several modified two-step iterative methods are obtained for solving nonlinear equations, and the convergence analysis shows that these variants are of fourth-order convergence. Finally, numerical experiments are given to illustrate the practicability of the suggested variants. The variants with fourth-order convergence can therefore be considered worthwhile improvements for finding the roots of nonlinear equations.

1. Introduction

It is well known that a variety of problems in different fields of science and engineering require finding the solution of the nonlinear equation $f(x) = 0$, where $f : I \to D$, for an interval $I \subseteq \mathbb{R}$ and $D \subseteq \mathbb{R}$, is a scalar function. In general, iterative methods, such as Newton's method, Halley's method, Cauchy's method, and so on, are the most widely used techniques, and iterative algorithms for finding the roots of nonlinear equations have become one of the most important topics in current research. We can see the works, for example, [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22] and the references therein. In the last few years, some iterative methods with high-order convergence have been introduced to solve a single nonlinear equation. These methods can be constructed by various techniques, such as Taylor series, quadrature formulae, decomposition techniques, continued fractions, Padé approximation, homotopy methods, Hermite interpolation, and clipping techniques. For instance, there are many ways of deriving Newton's method; among them, using Taylor polynomials is probably the most widely known technique [1,2]. By considering different quadrature formulae for the computation of the integral, Weerakoon and Fernando derived an implicit iterative scheme with cubic convergence from the trapezoidal quadrature formula [4], while Cordero and Torregrosa developed some variants of Newton's method based on fifth-order quadrature rules [5]. In 2005, Chun [6] presented a sequence of iterative methods improving Newton's method for solving nonlinear equations by applying the Adomian decomposition method. Based on Thiele's continued fraction of the function, Li et al. [7] gave a fourth-order convergent iterative method. Using Padé approximation of the function, Li et al. [8] rederived Halley's method and, by using divided differences to approximate the derivatives, arrived at some modifications with third-order convergence. In [9], Abbasbandy et al. presented an efficient numerical algorithm for solving nonlinear algebraic equations based on the Newton–Raphson method and the homotopy analysis method. Noor and Khan suggested and analyzed a new class of iterative methods by using the homotopy perturbation method in [10]. In 2015, Wang et al. [11] deduced a general family of n-point Newton-type iterative methods for solving nonlinear equations by using direct Hermite interpolation. Moreover, for particular classes of functions, for instance polynomials, there exist efficient univariate root-finding algorithms that compute all solutions of a polynomial equation (see [12,13]). In [13], Bartoň and Jüttler present an algorithm for computing all roots of a univariate polynomial based on degree reduction, which has a higher convergence rate than Newton's method. In this article, we mainly deal with more general nonlinear algebraic equations.
Newton's method is probably the best known and most widely used iterative algorithm for root-finding problems. Let us recall briefly how the Newton iteration is derived from Taylor's formula. Suppose that $f(x) \in C^n[I]$, $n = 1, 2, 3, \ldots$, and that $\eta \in I$ is a simple root of the nonlinear equation $f(x) = 0$. For a given initial guess $x_0 \in I$ and some $\delta \in \mathbb{R}$, assume that $f'(x) \neq 0$ for each $x$ in the neighborhood $(x_0 - \delta, x_0 + \delta)$. For any $x \in (x_0 - \delta, x_0 + \delta)$, we expand $f(x)$ into the following Taylor's formula about $x_0$:

$$f(x) = f(x_0) + f'(x_0)(x - x_0) + \frac{1}{2!} f''(x_0)(x - x_0)^2 + \cdots + \frac{1}{k!} f^{(k)}(x_0)(x - x_0)^k + \cdots,$$

where $k = 0, 1, 2, \ldots$. Let $|\eta - x_0|$ be sufficiently small. Then the terms involving $(\eta - x_0)^k$, $k = 2, 3, \ldots$, are much smaller, so the first Taylor polynomial is a good approximation to the function near the point $x_0$, and we obtain

$$f(x_0) + f'(x_0)(\eta - x_0) \approx 0.$$

Since $f'(x_0) \neq 0$, solving the above equation for $\eta$ yields

$$\eta \approx x_0 - \frac{f(x_0)}{f'(x_0)},$$

from which we can construct the Newton iterative scheme

$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}, \quad k = 0, 1, 2, \ldots.$$

Newton's method is thus a celebrated one-step iterative method. Its order of convergence is quadratic for a simple zero and linear for a multiple root.
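The one-step scheme above translates directly into code. A minimal Python sketch (the function names are ours, not from the paper):

```python
def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Iterate x_{k+1} = x_k - f(x_k)/f'(x_k) until two successive
    iterates agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: the real root of f(x) = x^3 - 11, starting from x0 = 2
root = newton(lambda x: x**3 - 11, lambda x: 3 * x**2, 2.0)
print(root)
```

Starting sufficiently close to a simple root, the number of correct digits roughly doubles at every step, reflecting the quadratic convergence noted above.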
Motivated by the above technique, in this paper we start by using Padé approximation of a function to construct a few one-step iterative schemes, which include the classical Newton's method and Halley's method, for finding roots of nonlinear equations. In order to avoid calculating the high-order derivatives of the function, we then employ approximants of the higher derivatives to improve the presented iterative method. As a result, we build several two-step iterative formulae, some of which do not require the evaluation of high-order derivatives. Furthermore, it is shown that these modified iterative methods are all fourth-order convergent for a simple root of the equation. Finally, we give some numerical experiments and comparisons to illustrate the efficiency and performance of the presented methods.
The rest of this paper is organized as follows. We introduce some basic preliminaries about Padé approximation and iteration theory for root-finding problems in Section 2. In Section 3, we first construct several one-step iterative schemes based on Padé approximation; then, we modify the presented iterative method to obtain a few iterative formulae that avoid calculating the high-order derivatives. In Section 4, we show that the modified methods have at least fourth-order convergence for a simple root of the equation. In Section 5, we give numerical examples to show the performance of the presented methods and compare them with other high-order methods. Finally, we draw conclusions from the experimental results in Section 6.

2. Preliminaries

In this section, we briefly review some basic definitions and results on the Padé approximation of a function and on iteration theory for root-finding problems. Comprehensive treatments of iteration theory and Padé approximation can be found in Quarteroni et al. [1], Burden et al. [2], Wuytack [23], and Xu et al. [24].
Definition 1.
Assume that $f(x)$ is a function whose $(n+1)$-st derivative $f^{(n+1)}(x)$, $n = 0, 1, 2, \ldots$, exists for any $x$ in an interval $I$. Then for each $x \in I$, we have

$$f(x) = f(x_0) + f'(x_0)(x - x_0) + \frac{f''(x_0)}{2!}(x - x_0)^2 + \cdots + \frac{f^{(n)}(x_0)}{n!}(x - x_0)^n + o\!\left[(x - x_0)^n\right],$$

which is called Taylor's formula of order $n$ with Peano remainder, based at $x_0$; the error term $o[(x - x_0)^n]$ is called the Peano remainder or the Peano truncation error.
Definition 2.
If $P(x)$ is a polynomial, its exact degree is denoted by $\partial(P)$, and the order of the polynomial is $\omega(P)$, which is the degree of the first (lowest-degree) non-zero term of the polynomial.
Definition 3.
If there exist two polynomials

$$P(x) = \sum_{i=0}^{m} a_i (x - x_0)^i \quad \text{and} \quad Q(x) = \sum_{i=0}^{n} b_i (x - x_0)^i$$

such that

$$\partial(P(x)) \le m, \qquad \partial(Q(x)) \le n, \qquad \omega\big(f(x)\,Q(x) - P(x)\big) \ge m + n + 1,$$

then the irreducible form of the rational fraction $P(x)/Q(x)$,

$$R_{m,n}(x) = \frac{P_0(x)}{Q_0(x)} = \frac{P(x)}{Q(x)},$$

is called the $[m,n]$-order Padé approximation of the function $f(x)$.
We give the computational formula for the Padé approximation of a function $f(x)$ in terms of determinants, as shown in the following lemma [23,24].
Lemma 1.
Assume that $R_{m,n}(x) = \frac{P_0(x)}{Q_0(x)}$ is the Padé approximation of the function $f(x)$. If the matrix

$$A_{m,n} = \begin{pmatrix} a_m & a_{m-1} & \cdots & a_{m+1-n} \\ a_{m+1} & a_m & \cdots & a_{m+2-n} \\ \vdots & \vdots & & \vdots \\ a_{m+n-1} & a_{m+n-2} & \cdots & a_m \end{pmatrix}$$

is nonsingular, that is, the determinant $|A_{m,n}| = d \neq 0$, then $P_0(x)$ and $Q_0(x)$ can be written as the following determinants:

$$P_0(x) = \frac{1}{d} \begin{vmatrix} T_m(x) & (x - x_0)\,T_{m-1}(x) & \cdots & (x - x_0)^n\,T_{m-n}(x) \\ a_{m+1} & a_m & \cdots & a_{m+1-n} \\ \vdots & \vdots & & \vdots \\ a_{m+n} & a_{m+n-1} & \cdots & a_m \end{vmatrix}$$

and

$$Q_0(x) = \frac{1}{d} \begin{vmatrix} 1 & x - x_0 & \cdots & (x - x_0)^n \\ a_{m+1} & a_m & \cdots & a_{m+1-n} \\ \vdots & \vdots & & \vdots \\ a_{m+n} & a_{m+n-1} & \cdots & a_m \end{vmatrix},$$

where $a_n = \frac{f^{(n)}(x_0)}{n!}$, $n = 0, 1, 2, \ldots$, and we set

$$T_k(x) = \begin{cases} \displaystyle\sum_{i=0}^{k} a_i (x - x_0)^i, & k \ge 0, \\[2mm] 0, & k < 0. \end{cases}$$
Next, we recall how the speed of convergence of an iterative scheme is measured, by means of the following definition and lemma.
Definition 4.
Assume that a sequence $\{x_i\}_{i=0}^{\infty}$ converges to $\eta$, with $x_i \neq \eta$ for all $i = 0, 1, 2, \ldots$, and let the error be $e_i = x_i - \eta$. If there exist two positive constants $\alpha$ and $\beta$ such that

$$\lim_{i \to \infty} \frac{|e_{i+1}|}{|e_i|^{\alpha}} = \beta,$$

then $\{x_i\}_{i=0}^{\infty}$ converges to $\eta$ with order $\alpha$. When $\alpha = 1$, the sequence $\{x_i\}_{i=0}^{\infty}$ is linearly convergent. When $\alpha > 1$, the sequence $\{x_i\}_{i=0}^{\infty}$ is said to be of higher-order convergence.
For a single-step iterative method, sometimes it is convenient to use the following lemma to judge the order of convergence of the iterative method.
Lemma 2.
Assume that the equation $f(x) = 0$, $x \in I$, can be rewritten as $x = \varphi(x)$, where $f(x) \in C[I]$ and $\varphi(x) \in C^{\gamma}[I]$, $\gamma \in \mathbb{N}_+$. Let $\eta$ be a root of the equation $f(x) = 0$. If the iterative function $\varphi(x)$ satisfies

$$\varphi^{(j)}(\eta) = 0, \quad j = 1, 2, \ldots, \gamma - 1, \qquad \varphi^{(\gamma)}(\eta) \neq 0,$$

then the order of convergence of the iterative scheme $x_{i+1} = \varphi(x_i)$, $i = 0, 1, 2, \ldots$, is $\gamma$.
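Definition 4 can also be checked numerically: once the errors are small, the ratio $\log|e_{i+1}| / \log|e_i|$ approaches the order $\alpha$. A short Python sketch, using Newton's method on $f(x) = x^2 - 2$ (whose quadratic convergence is classical) as the test case; the helper name `estimate_order` is ours:

```python
import math

def estimate_order(errors):
    """Estimate the convergence order alpha of Definition 4 via
    alpha ~ log|e_{i+1}| / log|e_i|, valid once the errors are small."""
    return [math.log(errors[i + 1]) / math.log(errors[i])
            for i in range(len(errors) - 1)]

# Newton iterates for f(x) = x^2 - 2 starting from x0 = 1.5
root = math.sqrt(2.0)
x, errors = 1.5, []
for _ in range(3):
    x = x - (x * x - 2.0) / (2.0 * x)
    errors.append(abs(x - root))

orders = estimate_order(errors)
print(orders)  # the estimates approach 2, i.e., quadratic convergence
```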

3. Some Iterative Methods

Let $\eta$ be a simple real root of the equation $f(x) = 0$, where $f : I \to D$, $I \subseteq \mathbb{R}$, $D \subseteq \mathbb{R}$. Suppose that $x_0 \in I$ is an initial guess sufficiently close to $\eta$, and that the function $f(x)$ has the $n$-th derivative $f^{(n)}(x)$, $n = 1, 2, 3, \ldots$, in the interval $I$. According to Lemma 1, the $[m,n]$-order Padé approximation of the function $f(x)$ is the rational fraction

$$f(x) \approx R_{m,n}(x) = \frac{\begin{vmatrix} T_m(x) & (x - x_0)\,T_{m-1}(x) & \cdots & (x - x_0)^n\,T_{m-n}(x) \\ a_{m+1} & a_m & \cdots & a_{m+1-n} \\ \vdots & \vdots & & \vdots \\ a_{m+n} & a_{m+n-1} & \cdots & a_m \end{vmatrix}}{\begin{vmatrix} 1 & x - x_0 & \cdots & (x - x_0)^n \\ a_{m+1} & a_m & \cdots & a_{m+1-n} \\ \vdots & \vdots & & \vdots \\ a_{m+n} & a_{m+n-1} & \cdots & a_m \end{vmatrix}}. \tag{2}$$

Recall the derivation of the Newton iterative method from Taylor's series in Section 1: the first Taylor polynomial is regarded as a good approximation to the function $f(x)$ near the point $x_0$, and solving the linear equation $f(x_0) + f'(x_0)(\eta - x_0) \approx 0$ for $\eta$ sets the stage for Newton's method. It is natural to ask whether a better linear function can be selected to approximate $f(x)$ near $x_0$; Padé approximation offers an answer. In the process of obtaining new iterative methods based on the Padé approximation of the function, on the one hand, we always take the degree of the numerator of Equation (2) to be 1, which guarantees that we obtain a (different) linear function. On the other hand, the equations we discuss are mainly nonlinear algebraic equations, which differ from rational equations and have no poles; clearly, as $n$ grows, the poles of the denominator of Equation (2) do not affect the linear functions that we need. These novel linear functions may set the stage for new methods. Next, let us introduce a few iterative methods by using the $[1,n]$-order Padé approximation of the function.

3.1. Iterative Method Based on [ 1 , 0 ] -Order Padé Approximation

Firstly, when $m = 1$, $n = 0$, we consider the $[1,0]$-order Padé approximation of the function $f(x)$. It follows from the expression (2) that

$$f(x) \approx R_{1,0}(x) = T_1(x) = a_0 + a_1 (x - x_0).$$

Let $R_{1,0}(x) = 0$; then we have

$$a_0 + a_1 (x - x_0) = 0. \tag{3}$$

Since the determinant $|A_{1,0}| \neq 0$, i.e., $f'(x_0) \neq 0$, we obtain the following equation from Equation (3):

$$x = x_0 - \frac{a_0}{a_1}.$$

In view of $a_0 = f(x_0)$ and $a_1 = f'(x_0)$, we recover the Newton iterative method as below.
Method 1.
Assume that the function $f : I \to D$ has its first derivative at the point $x_0 \in I$. Then we obtain the following iterative method based on the $[1,0]$-order Padé approximation of the function $f(x)$:

$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}, \quad k = 0, 1, 2, \ldots. \tag{4}$$
Starting with an initial approximation x 0 that is sufficiently close to the root η and using the above scheme (4), we can get the iterative sequence { x i } i = 0 .
Remark 1.
Method 1 is the well-known Newton's method for solving nonlinear equations [1,2].

3.2. Iterative Method Based on [ 1 , 1 ] -Order Padé Approximation

Secondly, when $m = 1$, $n = 1$, we consider the $[1,1]$-order Padé approximation of the function $f(x)$. Similarly, it follows from the expression (2) that

$$f(x) \approx R_{1,1}(x) = \frac{\begin{vmatrix} T_1(x) & (x - x_0)\,T_0(x) \\ a_2 & a_1 \end{vmatrix}}{\begin{vmatrix} 1 & x - x_0 \\ a_2 & a_1 \end{vmatrix}}.$$

Let $R_{1,1}(x) = 0$; then we get

$$a_0 a_1 + a_1^2 (x - x_0) - a_0 a_2 (x - x_0) = 0. \tag{5}$$

Suppose the determinant $|A_{1,1}| \neq 0$, that is,

$$\begin{vmatrix} a_1 & a_0 \\ a_2 & a_1 \end{vmatrix} = \begin{vmatrix} f'(x_0) & f(x_0) \\ \frac{f''(x_0)}{2} & f'(x_0) \end{vmatrix} = f'^2(x_0) - \frac{f(x_0) f''(x_0)}{2} \neq 0.$$

Then we obtain the following equality from Equation (5):

$$x = x_0 - \frac{a_0 a_1}{a_1^2 - a_0 a_2}.$$

Combining $a_0 = f(x_0)$, $a_1 = f'(x_0)$, and $a_2 = \frac{1}{2} f''(x_0)$ gives the Halley iterative method as follows.
Method 2.
Assume that the function $f : I \to D$ has its second derivative at the point $x_0 \in I$. Then we obtain the following iterative method based on the $[1,1]$-order Padé approximation of the function $f(x)$:

$$x_{k+1} = x_k - \frac{2 f(x_k) f'(x_k)}{2 f'^2(x_k) - f(x_k) f''(x_k)}, \quad k = 0, 1, 2, \ldots. \tag{6}$$
Starting with an initial approximation x 0 that is sufficiently close to the root η and applying the above scheme (6), we can obtain the iterative sequence { x i } i = 0 .
Remark 2.
Method 2 is the classical Halley's method for finding roots of nonlinear equations [1,2], which converges cubically.
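A minimal Python sketch of scheme (6) (the helper name `halley` is ours):

```python
def halley(f, df, d2f, x0, tol=1e-12, max_iter=100):
    """Halley iteration: x_{k+1} = x_k - 2 f f' / (2 f'^2 - f f'')."""
    x = x0
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        x_new = x - 2.0 * fx * dfx / (2.0 * dfx**2 - fx * d2fx)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: the real root of f(x) = x^3 - 11
root = halley(lambda x: x**3 - 11, lambda x: 3 * x**2, lambda x: 6 * x, 2.0)
print(root)
```

Compared with the Newton sketch above, the extra second-derivative evaluation buys cubic instead of quadratic convergence.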

3.3. Iterative Method Based on [ 1 , 2 ] -Order Padé Approximation

Thirdly, when $m = 1$, $n = 2$, we consider the $[1,2]$-order Padé approximation of the function $f(x)$. In the same manner, it follows from the expression (2) that

$$f(x) \approx R_{1,2}(x) = \frac{\begin{vmatrix} T_1(x) & (x - x_0)\,T_0(x) & 0 \\ a_2 & a_1 & a_0 \\ a_3 & a_2 & a_1 \end{vmatrix}}{\begin{vmatrix} 1 & x - x_0 & (x - x_0)^2 \\ a_2 & a_1 & a_0 \\ a_3 & a_2 & a_1 \end{vmatrix}}.$$

Let $R_{1,2}(x) = 0$; then one has

$$a_0 a_1^2 - a_0^2 a_2 + \left(a_1^3 - 2 a_0 a_1 a_2 + a_0^2 a_3\right)(x - x_0) = 0. \tag{7}$$

Suppose the determinant $|A_{1,2}| \neq 0$, that is,

$$\begin{vmatrix} a_1 & a_0 & 0 \\ a_2 & a_1 & a_0 \\ a_3 & a_2 & a_1 \end{vmatrix} = \begin{vmatrix} f'(x_0) & f(x_0) & 0 \\ \frac{f''(x_0)}{2} & f'(x_0) & f(x_0) \\ \frac{f'''(x_0)}{6} & \frac{f''(x_0)}{2} & f'(x_0) \end{vmatrix} = f'^3(x_0) - f(x_0) f'(x_0) f''(x_0) + \frac{f^2(x_0) f'''(x_0)}{6} \neq 0.$$

Then we obtain the following equality from Equation (7):

$$x = x_0 - \frac{a_0 a_1^2 - a_0^2 a_2}{a_1^3 - 2 a_0 a_1 a_2 + a_0^2 a_3}.$$

Substituting $a_0 = f(x_0)$, $a_1 = f'(x_0)$, $a_2 = \frac{1}{2} f''(x_0)$, and $a_3 = \frac{1}{6} f'''(x_0)$ into the above equation gives a single-step iterative method as follows.
Method 3.
Assume that the function $f : I \to D$ has its third derivative at the point $x_0 \in I$. Then we obtain the following iterative method based on the $[1,2]$-order Padé approximation of the function $f(x)$:

$$x_{k+1} = x_k - \frac{3 f(x_k)\left(2 f'^2(x_k) - f(x_k) f''(x_k)\right)}{6 f'^3(x_k) - 6 f(x_k) f'(x_k) f''(x_k) + f^2(x_k) f'''(x_k)}, \quad k = 0, 1, 2, \ldots. \tag{8}$$
Starting with an initial approximation $x_0$ that is sufficiently close to the root $\eta$ and applying the above scheme (8), we obtain the iterative sequence $\{x_i\}_{i=0}^{\infty}$.
Remark 3.
Method 3 can be used to find roots of a nonlinear equation. Clearly, in order to apply this iterative method, we must compute the second and third derivatives of the function $f(x)$, which may be inconvenient. To overcome this drawback, we suggest approximants of the second and third derivatives; this idea plays a significant part in developing iterative methods free from the calculation of higher derivatives.
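Despite the drawback just noted, scheme (8) is easy to try when all three derivatives are available in closed form. A Python sketch under that assumption (all names are illustrative):

```python
def method3(f, df, d2f, d3f, x0, tol=1e-12, max_iter=50):
    """One-step scheme (8) from the [1,2]-order Pade approximant;
    requires f', f'' and f'''."""
    x = x0
    for _ in range(max_iter):
        fx, d1, d2, d3 = f(x), df(x), d2f(x), d3f(x)
        num = 3.0 * fx * (2.0 * d1**2 - fx * d2)
        den = 6.0 * d1**3 - 6.0 * fx * d1 * d2 + fx**2 * d3
        x_new = x - num / den
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: f(x) = x^3 - 11, whose derivatives are 3x^2, 6x and 6
root = method3(lambda x: x**3 - 11, lambda x: 3 * x**2,
               lambda x: 6 * x, lambda x: 6.0, 2.0)
print(root)
```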

3.4. Modified Iterative Method Based on Approximant of the Third Derivative

In fact, let $z_k = x_k - \frac{f(x_k)}{f'(x_k)}$. Expanding $f(z_k)$ into a third-order Taylor series about the point $x_k$ yields

$$f(z_k) \approx f(x_k) + f'(x_k)(z_k - x_k) + \frac{1}{2!} f''(x_k)(z_k - x_k)^2 + \frac{1}{3!} f'''(x_k)(z_k - x_k)^3,$$

from which, using $z_k - x_k = -\frac{f(x_k)}{f'(x_k)}$, it follows that

$$f'''(x_k) \approx \frac{3 f^2(x_k) f'(x_k) f''(x_k) - 6 f(z_k) f'^3(x_k)}{f^3(x_k)}. \tag{9}$$
Substituting (9) into (8), we can have the following iterative method.
Method 4.
Assume that the function $f : I \to D$ has its second derivative at the point $x_0 \in I$. Then we obtain a modified iterative method as below:

$$z_k = x_k - \frac{f(x_k)}{f'(x_k)}, \qquad x_{k+1} = x_k - \frac{x_k - z_k}{1 + 2 f(z_k) f'^2(x_k) L^{-1}(x_k)}, \quad k = 0, 1, 2, \ldots, \tag{10}$$

where $L(x_k) = f(x_k)\left(f(x_k) f''(x_k) - 2 f'^2(x_k)\right)$. Starting with an initial approximation $x_0$ that is sufficiently close to the root $\eta$ and using the above scheme (10), we obtain the iterative sequence $\{x_i\}_{i=0}^{\infty}$.
Remark 4.
Method 4 is a two-step iterative method free from the third derivative of the function.

3.5. Modified Iterative Method Based on Approximant of the Second Derivative

It is obvious that the iterative method (10) requires the operation of the second derivative of the function f ( x ) . In order to avoid computing the second derivative, we introduce an approximant of the second derivative by using Taylor’s series.
Similarly, expanding $f(z_k)$ into a second-order Taylor series about the point $x_k$ yields

$$f(z_k) \approx f(x_k) + (z_k - x_k) f'(x_k) + \frac{1}{2!} (z_k - x_k)^2 f''(x_k),$$

which, with $z_k - x_k = -\frac{f(x_k)}{f'(x_k)}$, means

$$f''(x_k) \approx \frac{2 f(z_k) f'^2(x_k)}{f^2(x_k)}. \tag{11}$$
Using (11) in (10), we can get the following modified iterative method without computing second derivative.
Method 5.
Assume that the function $f : I \to D$ has its first derivative at the point $x_0 \in I$. Then we have a modified iterative method as below:

$$z_k = x_k - \frac{f(x_k)}{f'(x_k)}, \qquad x_{k+1} = x_k - \frac{f(x_k) - f(z_k)}{f(x_k) - 2 f(z_k)} (x_k - z_k), \quad k = 0, 1, 2, \ldots. \tag{12}$$
Starting with an initial approximation x 0 that is sufficiently close to the root η and using the above scheme (12), we can obtain the iterative sequence { x i } i = 0 .
Remark 5.
Method 5 is another two-step iterative method. It is clear that Method 5 does not require calculating any high-order derivative. More importantly, per iteration it requires only two evaluations of the function and one evaluation of its first derivative, so its efficiency is better than that of the well-known methods involving the second-order derivative of the function.
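A Python sketch of scheme (12); note that each pass evaluates $f$ twice and $f'$ once (the helper name `method5` is ours):

```python
def method5(f, df, x0, tol=1e-12, max_iter=50):
    """Two-step scheme (12): a Newton predictor, then a correction that
    reuses only f-values; no second or third derivative is needed."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        step = fx / df(x)
        z = x - step
        fz = f(z)
        denom = fx - 2.0 * fz
        if denom == 0.0:          # f(x) already (numerically) zero
            return z
        x_new = x - (fx - fz) / denom * step   # step equals x - z
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: f(x) = x^3 - 11
root = method5(lambda x: x**3 - 11, lambda x: 3 * x**2, 2.0)
print(root)
```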

4. Convergence Analysis of Iterative Methods

Theorem 1.
Suppose that $f(x)$ is a function whose $n$-th derivative $f^{(n)}(x)$, $n = 1, 2, 3, \ldots$, exists in a neighborhood of its root $\eta$ with $f'(\eta) \neq 0$. If the initial approximation $x_0$ is sufficiently close to $\eta$, then Method 3 defined by (8) is fourth-order convergent.
Proof of Theorem 1.
By the hypotheses $f(\eta) = 0$ and $f'(\eta) \neq 0$, we know that $\eta$ is a unique simple root of the equation $f(x) = 0$, and we assume further that the higher-order derivatives satisfy $f^{(n)}(\eta) \neq 0$ for each positive integer $n \ge 1$. Considering the iterative scheme (8) in Method 3, we denote its corresponding iterative function by

$$\varphi(x) = x - \frac{3 f(x)\left(2 f'^2(x) - f(x) f''(x)\right)}{6 f'^3(x) - 6 f(x) f'(x) f''(x) + f^2(x) f'''(x)}.$$

By calculating the first and higher-order derivatives of the iterative function $\varphi(x)$ with respect to $x$ at the point $\eta$, we verify that

$$\varphi'(\eta) = 0, \qquad \varphi''(\eta) = 0, \qquad \varphi'''(\eta) = 0$$

and

$$\varphi^{(4)}(\eta) = \frac{3 f''^3(\eta) - 4 f'(\eta) f''(\eta) f'''(\eta) + f'^2(\eta) f^{(4)}(\eta)}{f'^3(\eta)} \neq 0.$$

Thus, it follows from Lemma 2 that Method 3 defined by (8) is fourth-order convergent. This completes the proof. □
Theorem 2.
Suppose that $f(x)$ is a function whose $n$-th derivative $f^{(n)}(x)$, $n = 1, 2, 3, \ldots$, exists in a neighborhood of its root $\eta$ with $f'(\eta) \neq 0$. If the initial approximation $x_0$ is sufficiently close to $\eta$, then Method 4 defined by (10) is at least fourth-order convergent, with the error equation

$$e_{k+1} = \left(b_2^3 - 2 b_2 b_3\right) e_k^4 + O(e_k^5),$$

where $e_k = x_k - \eta$, $k = 1, 2, 3, \ldots$, and the constants $b_n = \frac{a_n}{f'(\eta)}$, $a_n = \frac{f^{(n)}(\eta)}{n!}$, $n = 1, 2, 3, \ldots$.
Proof of Theorem 2.
By the hypothesis, it is clear that $\eta$ is a unique simple root of the equation $f(x) = 0$. Expanding $f(x_k)$, $f'(x_k)$, and $f''(x_k)$ into Taylor's series about $\eta$, we obtain

$$f(x_k) = e_k f'(\eta) + \frac{e_k^2}{2!} f''(\eta) + \frac{e_k^3}{3!} f'''(\eta) + \frac{e_k^4}{4!} f^{(4)}(\eta) + \frac{e_k^5}{5!} f^{(5)}(\eta) + \frac{e_k^6}{6!} f^{(6)}(\eta) + O(e_k^7) = f'(\eta)\left[b_1 e_k + b_2 e_k^2 + b_3 e_k^3 + b_4 e_k^4 + b_5 e_k^5 + b_6 e_k^6 + O(e_k^7)\right], \tag{14}$$

$$f'(x_k) = f'(\eta)\left[b_1 + 2 b_2 e_k + 3 b_3 e_k^2 + 4 b_4 e_k^3 + 5 b_5 e_k^4 + 6 b_6 e_k^5 + O(e_k^6)\right] \tag{15}$$

and

$$f''(x_k) = f'(\eta)\left[2 b_2 + 6 b_3 e_k + 12 b_4 e_k^2 + 20 b_5 e_k^3 + 30 b_6 e_k^4 + O(e_k^5)\right], \tag{16}$$

where $b_n = \frac{f^{(n)}(\eta)}{n!\, f'(\eta)}$, $n = 1, 2, \ldots$. Clearly, $b_1 = 1$. Dividing (14) by (15) directly gives

$$\frac{f(x_k)}{f'(x_k)} = x_k - z_k = e_k - b_2 e_k^2 - 2(b_3 - b_2^2) e_k^3 - \left(4 b_2^3 + 3 b_4 - 7 b_2 b_3\right) e_k^4 - 2\left(10 b_2^2 b_3 + 2 b_5 - 5 b_2 b_4 - 4 b_2^4 - 3 b_3^2\right) e_k^5 - \left(16 b_2^5 + 28 b_2^2 b_4 + 33 b_2 b_3^2 + 5 b_6 - 52 b_2^3 b_3 - 17 b_3 b_4 - 13 b_2 b_5\right) e_k^6 + O(e_k^7). \tag{17}$$
By substituting (17) into the first equation of (10) in Method 4, one has

$$z_k = \eta + b_2 e_k^2 + 2(b_3 - b_2^2) e_k^3 + \left(4 b_2^3 + 3 b_4 - 7 b_2 b_3\right) e_k^4 + 2\left(10 b_2^2 b_3 + 2 b_5 - 5 b_2 b_4 - 4 b_2^4 - 3 b_3^2\right) e_k^5 + \left(16 b_2^5 + 28 b_2^2 b_4 + 33 b_2 b_3^2 + 5 b_6 - 52 b_2^3 b_3 - 17 b_3 b_4 - 13 b_2 b_5\right) e_k^6 + O(e_k^7). \tag{18}$$

Again, expanding $f(z_k)$ by Taylor's series about $\eta$, we have

$$f(z_k) = f'(\eta)\Big[b_2 e_k^2 - 2(b_2^2 - b_3) e_k^3 - \left(7 b_2 b_3 - 5 b_2^3 - 3 b_4\right) e_k^4 - 2\left(5 b_2 b_4 + 6 b_2^4 + 3 b_3^2 - 12 b_2^2 b_3 - 2 b_5\right) e_k^5 + \left(28 b_2^5 + 34 b_2^2 b_4 + 37 b_2 b_3^2 + 5 b_6 - 73 b_2^3 b_3 - 17 b_3 b_4 - 13 b_2 b_5\right) e_k^6 + O(e_k^7)\Big]. \tag{19}$$

Hence, from (15) and (19), we have

$$f(z_k) f'^2(x_k) = f'^3(\eta)\Big[b_2 e_k^2 + 2(b_2^2 + b_3) e_k^3 + \left(7 b_2 b_3 + b_2^3 + 3 b_4\right) e_k^4 + 2\left(5 b_2 b_4 + 2 b_2^2 b_3 + 3 b_3^2 + 2 b_5\right) e_k^5 + \left(4 b_2 b_3^2 + 6 b_2^2 b_4 + b_2^3 b_3 + 13 b_2 b_5 + 17 b_3 b_4 + 5 b_6\right) e_k^6 + O(e_k^7)\Big]. \tag{20}$$

Also, from (14), (15), and (16), one has

$$L(x_k) = -2 f'^3(\eta)\Big[e_k + 4 b_2 e_k^2 + 2\left(3 b_2^2 + 2 b_3\right) e_k^3 + \left(14 b_2 b_3 + 3 b_2^3 + 3 b_4\right) e_k^4 + \left(14 b_2 b_4 + 11 b_2^2 b_3 + 9 b_3^2 + b_5\right) e_k^5 + 2\left(7 b_2 b_3^2 + 6 b_2^2 b_4 + 6 b_2 b_5 + 10 b_3 b_4 + b_6\right) e_k^6 + O(e_k^7)\Big]. \tag{21}$$

Therefore, combining (20) and (21), one obtains

$$\frac{2 f(z_k) f'^2(x_k)}{L(x_k)} = -b_2 e_k - 2(b_3 - b_2^2) e_k^2 - \left(3 b_2^3 + 3 b_4 - 5 b_2 b_3\right) e_k^3 - \left(6 b_2^2 b_3 + 4 b_5 - 5 b_2 b_4 - 3 b_2^4 - 2 b_3^2\right) e_k^4 + O(e_k^5). \tag{22}$$
Furthermore, from (17) and (22), we get

$$\frac{x_k - z_k}{1 + 2 f(z_k) f'^2(x_k) L^{-1}(x_k)} = e_k - \left(b_2^3 - 2 b_2 b_3\right) e_k^4 - \left(12 b_2^2 b_3 - 5 b_2 b_4 - 4 b_2^4 - 4 b_3^2\right) e_k^5 + O(e_k^6). \tag{23}$$

So, substituting (23) into (10) in Method 4, one obtains

$$x_{k+1} = \eta + \left(b_2^3 - 2 b_2 b_3\right) e_k^4 + O(e_k^5). \tag{24}$$

Noticing that the $(k+1)$-st error is $e_{k+1} = x_{k+1} - \eta$, from (24) we obtain the error equation

$$e_{k+1} = \left(b_2^3 - 2 b_2 b_3\right) e_k^4 + O(e_k^5), \tag{25}$$

which shows that Method 4 defined by (10) is at least fourth-order convergent according to Definition 4. This proves Theorem 2. □
Theorem 3.
Suppose that $f(x)$ is a function whose $n$-th derivative $f^{(n)}(x)$, $n = 1, 2, 3, \ldots$, exists in a neighborhood of its root $\eta$ with $f'(\eta) \neq 0$. If the initial approximation $x_0$ is sufficiently close to $\eta$, then Method 5 defined by (12) is also at least fourth-order convergent, with the error equation

$$e_{k+1} = \left(b_2^3 - b_2 b_3\right) e_k^4 + O(e_k^5),$$

where $e_k = x_k - \eta$, $k = 1, 2, 3, \ldots$, and the constants $b_n = \frac{a_n}{f'(\eta)}$, $a_n = \frac{f^{(n)}(\eta)}{n!}$, $n = 1, 2, 3, \ldots$.
Proof of Theorem 3.
Referring to (14) and (19) in the proof of Theorem 2 and dividing $f(z_k)$ by $f(x_k) - f(z_k)$, we see that

$$\frac{f(z_k)}{f(x_k) - f(z_k)} = b_2 e_k - 2(b_2^2 - b_3) e_k^2 - 3\left(2 b_2 b_3 - b_2^3 - b_4\right) e_k^3 - \left(3 b_2^4 + 4 b_3^2 + 8 b_2 b_4 - 11 b_2^2 b_3 - 4 b_5\right) e_k^4 - \left(10 b_2^3 b_3 + 10 b_3 b_4 + 10 b_2 b_5 - 11 b_2 b_3^2 - 14 b_2^2 b_4 - 5 b_6\right) e_k^5 - \left(221 b_2^4 b_3 + 16 b_3^3 + 78 b_2 b_3 b_4 + 27 b_2^2 b_5 - 73 b_2^6 - 158 b_2^2 b_3^2 - 91 b_2^3 b_4 - 6 b_4^2 - 10 b_3 b_5 - 4 b_2 b_6\right) e_k^6 + O(e_k^7). \tag{26}$$

From (26), we obtain

$$\frac{f(x_k) - f(z_k)}{f(x_k) - 2 f(z_k)} = \frac{1}{1 - \dfrac{f(z_k)}{f(x_k) - f(z_k)}} = 1 + b_2 e_k + \left(2 b_3 - b_2^2\right) e_k^2 + \left(3 b_4 - 2 b_2 b_3\right) e_k^3 + \left(2 b_2^4 + 4 b_5 - 3 b_2^2 b_3 - 2 b_2 b_4\right) e_k^4 + \left(14 b_2^3 b_3 + 2 b_3 b_4 + 5 b_6 - 5 b_2^5 - 9 b_2 b_3^2 - 5 b_2^2 b_4 - 2 b_2 b_5\right) e_k^5 + \left(77 b_2^6 + 192 b_2^2 b_3^2 + 121 b_2^3 b_4 + 15 b_4^2 + 26 b_3 b_5 + 14 b_2 b_6 - 240 b_2^4 b_3 - 24 b_3^3 - 130 b_2 b_3 b_4 - 51 b_2^2 b_5\right) e_k^6 + O(e_k^7). \tag{27}$$

Multiplying (27) by (17) yields

$$\frac{f(x_k) - f(z_k)}{f(x_k) - 2 f(z_k)} (x_k - z_k) = e_k - \left(b_2^3 - b_2 b_3\right) e_k^4 + 2\left(2 b_2^4 - 4 b_2^2 b_3 + b_3^2 + b_2 b_4\right) e_k^5 + O(e_k^6). \tag{28}$$

Consequently, from (12) and the above Equation (28), and noticing the error $e_{k+1} = x_{k+1} - \eta$, we get the error equation

$$e_{k+1} = \left(b_2^3 - b_2 b_3\right) e_k^4 + O(e_k^5).$$

Thus, according to Definition 4, Method 5 defined by (12) has at least fourth-order convergence. This completes the proof of Theorem 3. □
Remark 6.
Per iteration, Method 5 requires two evaluations of the function and one evaluation of its first derivative. If we adopt the definition of the efficiency index [3] as $\lambda^{1/\tau}$, where $\lambda$ is the order of convergence of the method and $\tau$ is the total number of new function evaluations (i.e., the values of $f$ and its derivatives) per iteration, then Method 5 has the efficiency index $4^{1/3} \approx 1.5874$, which is better than those of the Halley iterative method, $3^{1/3} \approx 1.4423$, and the Newton iterative method, $2^{1/2} \approx 1.4142$.
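These efficiency indices are a one-line computation per method (the dictionary labels below are ours):

```python
# Efficiency index lambda ** (1 / tau): lambda = convergence order,
# tau = function/derivative evaluations per iteration.
indices = {
    "Newton   (lambda=2, tau=2)": 2 ** (1 / 2),
    "Halley   (lambda=3, tau=3)": 3 ** (1 / 3),
    "Method 5 (lambda=4, tau=3)": 4 ** (1 / 3),
}
for name, value in indices.items():
    print(f"{name}: {value:.4f}")
```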

5. Numerical Results

In this section, we present the results of numerical calculations comparing the efficiency of the proposed iterative methods (Methods 3–5) with the Newton iterative method (Method 1, NIM for short), the Halley iterative method (Method 2, HIM for short), and a few classical variants from the literature [19,20,21,22], namely the following iterative schemes with fourth-order convergence:
(i)
Kou iterative method (KIM for short) [19]:

$$x_{k+1} = x_k - \frac{2}{1 + \sqrt{1 - 2 \bar{L}_f(x_k)}} \cdot \frac{f(x_k)}{f'(x_k)}, \quad k = 0, 1, 2, \ldots,$$

where $\bar{L}_f(x_k)$ is defined by

$$\bar{L}_f(x_k) = \frac{f''\!\left(x_k - \dfrac{f(x_k)}{3 f'(x_k)}\right) f(x_k)}{f'^2(x_k)}.$$
(ii)
Double-Newton iterative method (DNIM for short) [20].
$$z_k = x_k - \frac{f(x_k)}{f'(x_k)}, \qquad x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)} - \frac{f(z_k)}{f'(z_k)}, \quad k = 0, 1, 2, \ldots.$$
(iii)
Chun iterative method (CIM for short) [21].
$$z_k = x_k - \frac{f(x_k)}{f'(x_k)}, \qquad x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)} - \left(1 + \frac{2 f(z_k)}{f(x_k)} + \frac{f^2(z_k)}{f^2(x_k)}\right) \frac{f(z_k)}{f'(x_k)}, \quad k = 0, 1, 2, \ldots.$$
(iv)
Jarratt-type iterative method (JIM for short) [22].
$$z_k = x_k - \frac{2}{3} \cdot \frac{f(x_k)}{f'(x_k)}, \qquad x_{k+1} = x_k - \frac{4 f(x_k)}{f'(x_k) + 3 f'(z_k)} \left[1 + \frac{9}{16} \left(\frac{f'(z_k)}{f'(x_k)} - 1\right)^2\right], \quad k = 0, 1, 2, \ldots.$$
In the iterative process, we use the following stopping criteria for the computer programs:

$$|x_{k+1} - x_k| < \varepsilon \quad \text{and} \quad |f(x_{k+1})| < \varepsilon,$$

where the fixed tolerance $\varepsilon$ is taken as $10^{-14}$. When the stopping criteria are satisfied, $x_{k+1}$ is taken as the computed approximation to the exact root $\eta$ of the equation. Numerical experiments are performed in the Mathematica 10 environment with 64-digit floating-point arithmetic (Digits := 64). The test equations $f_i = 0$, $i = 1, 2, \ldots, 5$, the initial guess $x_0$, the number of iterations $k + 1$, the approximate root $x_{k+1}$, and the values of $|x_{k+1} - x_k|$ and $|f(x_{k+1})|$ are given in Table 1. The following test functions are used in the numerical results:

$$f_1(x) = x^3 - 11, \quad f_2(x) = \cos x - x, \quad f_3(x) = x^3 + 4 x^2 - 25, \quad f_4(x) = x^2 - e^x - 3x + 2, \quad f_5(x) = (x + 2) e^x - 1.$$
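As an illustration, the following Python sketch applies Method 5, i.e., scheme (12), to the five test equations with the stated stopping criteria. Since ordinary double precision is used here instead of the paper's 64-digit arithmetic, the tolerance is relaxed to $\varepsilon = 10^{-10}$, and the initial guesses below are our own choices:

```python
import math

def method5(f, df, x0, eps=1e-10, max_iter=50):
    """Scheme (12) with the stopping criteria
    |x_{k+1} - x_k| < eps and |f(x_{k+1})| < eps."""
    x = x0
    for k in range(max_iter):
        fx = f(x)
        step = fx / df(x)
        z = x - step
        fz = f(z)
        denom = fx - 2.0 * fz
        if denom == 0.0:          # f(x) already (numerically) zero
            return z, k + 1
        x_new = x - (fx - fz) / denom * step
        if abs(x_new - x) < eps and abs(f(x_new)) < eps:
            return x_new, k + 1
        x = x_new
    return x, max_iter

tests = [
    (lambda x: x**3 - 11,                      lambda x: 3 * x**2,                2.0),
    (lambda x: math.cos(x) - x,                lambda x: -math.sin(x) - 1.0,      1.0),
    (lambda x: x**3 + 4 * x**2 - 25,           lambda x: 3 * x**2 + 8 * x,        2.0),
    (lambda x: x**2 - math.exp(x) - 3 * x + 2, lambda x: 2 * x - math.exp(x) - 3, 0.5),
    (lambda x: (x + 2) * math.exp(x) - 1,      lambda x: (x + 3) * math.exp(x),  -0.5),
]
for i, (f, df, x0) in enumerate(tests, start=1):
    r, n = method5(f, df, x0)
    print(f"f{i}: root = {r:.15f}, iterations = {n}")
```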

6. Conclusions

In Section 3, we obtained a few single-step iterative methods, including the classical Newton's method and Halley's method, based on the $[1,n]$-order Padé approximation of a function for finding a simple root of nonlinear equations. In order to avoid calculating the higher derivatives of the function, we improved the proposed iterative method by applying approximants of the second and third derivatives, thereby obtaining a few modified two-step iterative methods free from the higher derivatives of the function. In Section 4, we gave theoretical proofs for these methods, showing that each modified iterative method reaches convergence order four. It is worth mentioning that Method 5 is free from the second-order derivative and its efficiency index is $1.5874$. Furthermore, in Section 5, numerical examples were employed to illustrate the practicability of the suggested variants for finding approximate roots of some nonlinear scalar equations. The computational results presented in Table 1 show that in almost all cases the presented variants converge more rapidly than the Newton and Halley iterative methods, so they can compete with them. Finally, for the further nonlinear equations we tested, the presented variants show at least equal performance compared with other existing iterative methods of the same order.

Author Contributions

The contributions of all of the authors have been similar. All of them have worked together to develop the present manuscript.

Funding

This research was funded by the Natural Science Key Foundation of Education Department of Anhui Province (Grant No. KJ2013A183), the Project of Leading Talent Introduction and Cultivation in Colleges and Universities of Education Department of Anhui Province (Grant No. gxfxZD2016270) and the Incubation Project of National Scientific Research Foundation of Bengbu University (Grant No. 2018GJPY04).

Acknowledgments

The authors are thankful to the anonymous reviewers for their valuable comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Alfio, Q.; Riccardo, S.; Fausto, S. Rootfinding for nonlinear equations. In Numerical Mathematics; Springer: New York, NY, USA, 2000; pp. 251–285. ISBN 0-387-98959-5. [Google Scholar]
  2. Burden, A.M.; Faires, J.D.; Burden, R.L. Solutions of equations in one variable. In Numerical Analysis, 10th ed.; Cengage Learning: Boston, MA, USA, 2014; pp. 48–101. ISBN 978-1-305-25366-7. [Google Scholar]
  3. Gautschi, W. Nonlinear equations. In Numerical Analysis, 2nd ed.; Birkhäuser: Boston, MA, USA, 2011; pp. 253–323. ISBN 978-0-817-68258-3. [Google Scholar]
  4. Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
  5. Cordero, A.; Torregrosa, I.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
  6. Chun, C. Iterative methods improving Newton’s method by the decomposition method. Comput. Math. Appl. 2005, 50, 1559–1568. [Google Scholar] [CrossRef]
  7. Li, S.; Tan, J.; Xie, J.; Dong, Y. A new fourth-order convergent iterative method based on Thiele’s continued fraction for solving equations. J. Inform. Comput. Sci. 2011, 8, 139–145. [Google Scholar]
  8. Li, S.; Wang, R.; Zhang, X.; Xie, J.; Dong, Y. Halley’s iterative formula based on Padé approximation and its modifications. J. Inform. Comput. Sci. 2012, 9, 997–1004. [Google Scholar]
  9. Abbasbandy, S.; Tan, Y.; Liao, S.J. Newton-homotopy analysis method for nonlinear equations. Appl. Math. Comput. 2007, 188, 1794–1800. [Google Scholar] [CrossRef]
  10. Noor, M.A.; Khan, W.A. New iterative methods for solving nonlinear equation by using homotopy perturbation method. Appl. Math. Comput. 2012, 219, 3565–3574. [Google Scholar] [CrossRef]
  11. Wang, X.; Qin, Y.; Qian, W.; Zhang, S.; Fan, X. A family of Newton type iterative methods for solving nonlinear equations. Algorithms 2015, 8, 786–798. [Google Scholar] [CrossRef]
  12. Sederberg, T.W.; Nishita, T. Curve intersection using Bézier clipping. Comput.-Aided Des. 1990, 22, 538–549. [Google Scholar] [CrossRef]
  13. Bartoň, M.; Jüttler, B. Computing roots of polynomials by quadratic clipping. Comput. Aided Geom. Des. 2007, 24, 125–141. [Google Scholar] [CrossRef]
  14. Morlando, F. A class of two-step Newton’s methods with accelerated third-order convergence. Gen. Math. Notes 2015, 29, 17–26. [Google Scholar]
  15. Thukral, R. New modification of Newton method with third-order convergence for solving nonlinear equations of type f(0) = 0. Am. J. Comput. Appl. Math. 2016, 6, 14–18. [Google Scholar] [CrossRef]
  16. Rafiq, A.; Rafiullah, M. Some multi-step iterative methods for solving nonlinear equations. Comput. Math. Appl. 2009, 58, 1589–1597. [Google Scholar] [CrossRef] [Green Version]
  17. Ali, F.; Aslam, W.; Ali, K.; Anwar, M.A.; Nadeem, A. New family of iterative methods for solving nonlinear models. Disc. Dyn. Nat. Soc. 2018, 1–12. [Google Scholar] [CrossRef]
  18. Qureshi, U.K. A new accelerated third-order two-step iterative method for solving nonlinear equations. Math. Theory Model. 2018, 8, 64–68. [Google Scholar]
  19. Kou, J.S. Some variants of Cauchy’s method with accelerated fourth-order convergence. J. Comput. Appl. Math. 2008, 213, 71–78. [Google Scholar] [CrossRef]
  20. Traub, J.F. Iterative Methods for the Solution of Equations; Chelsea Publishing Company: New York, NY, USA, 1977; pp. 1–49. [Google Scholar]
  21. Chun, C. Some fourth-order iterative methods for solving nonlinear equations. Appl. Math. Comput. 2008, 195, 454–459. [Google Scholar] [CrossRef]
  22. Sharifi, M.; Babajee, D.K.R.; Soleymani, F. Finding the solution of nonlinear equations by a class of optimal methods. Comput. Math. Appl. 2012, 63, 764–774. [Google Scholar] [CrossRef] [Green Version]
  23. Wuytack, L. Padé approximants and rational functions as tools for finding poles and zeros of analytical functions measured experimentally. In Padé Approximation and its Applications; Springer: New York, NY, USA, 1979; pp. 338–351. ISBN 0-387-09717-1. [Google Scholar]
  24. Xu, X.Y.; Li, J.K.; Xu, G.L. Definitions of Padé approximation. In Introduction to Padé Approximation; Shanghai Science and Technology Press: Shanghai, China, 1990; pp. 1–8. ISBN 978-7-532-31887-2. [Google Scholar]
Table 1. Numerical results and comparison of various iterative methods.
| Method | Equation | x_0 | k+1 | x_{k+1} | abs(x_{k+1} − x_k) | abs(f(x_{k+1})) |
|---|---|---|---|---|---|---|
| NIM | f_1 = 0 | 1.5 | 7 | 2.22398009056931552116536337672215719652 | 1.1 × 10^−25 | 4.1 × 10^−47 |
| HIM | f_1 = 0 | 1.5 | 5 | 2.22398009056931552116536337672215719652 | 1.7 × 10^−41 | 1.0 × 10^−46 |
| Method 3 | f_1 = 0 | 1.5 | 4 | 2.22398009056931552116536337672215719652 | 8.3 × 10^−40 | 1.6 × 10^−48 |
| Method 4 | f_1 = 0 | 1.5 | 4 | 2.22398009056931552116536337672215719652 | 8.3 × 10^−22 | 1.9 × 10^−47 |
| Method 5 | f_1 = 0 | 1.5 | 4 | 2.22398009056931552116536337672215719652 | 7.5 × 10^−30 | 7.4 × 10^−45 |
| KIM | f_1 = 0 | 1.5 | 4 | 2.22398009056931552116536337672215719652 | 8.5 × 10^−38 | 3.9 × 10^−48 |
| DNIM | f_1 = 0 | 1.5 | 4 | 2.22398009056931552116536337672215719652 | 1.1 × 10^−25 | 1.1 × 10^−47 |
| CIM | f_1 = 0 | 1.5 | 5 | 2.22398009056931552116536337672215719652 | 1.5 × 10^−41 | 6.6 × 10^−45 |
| JIM | f_1 = 0 | 1.5 | 5 | 2.22398009056931552116536337672215719652 | 1.2 × 10^−45 | 4.3 × 10^−47 |
| NIM | f_2 = 0 | 1 | 5 | 0.73908513321516064165531208767387340401 | 6.4 × 10^−21 | 1.5 × 10^−41 |
| HIM | f_2 = 0 | 1 | 4 | 0.73908513321516064165531208767387340401 | 3.4 × 10^−29 | 5.1 × 10^−49 |
| Method 3 | f_2 = 0 | 1 | 3 | 0.73908513321516064165531208767387340401 | 8.2 × 10^−19 | 7.5 × 10^−49 |
| Method 4 | f_2 = 0 | 1 | 3 | 0.73908513321516064165531208767387340401 | 1.4 × 10^−17 | 9.4 × 10^−48 |
| Method 5 | f_2 = 0 | 1 | 3 | 0.73908513321516064165531208767387340401 | 1.1 × 10^−18 | 7.5 × 10^−47 |
| KIM | f_2 = 0 | 1 | 3 | 0.73908513321516064165531208767387340401 | 1.5 × 10^−20 | 8.3 × 10^−49 |
| DNIM | f_2 = 0 | 1 | 3 | 0.73908513321516064165531208767387340401 | 6.4 × 10^−21 | 9.5 × 10^−48 |
| CIM | f_2 = 0 | 1 | 3 | 0.73908513321516064165531208767387340401 | 2.2 × 10^−17 | 9.4 × 10^−48 |
| JIM | f_2 = 0 | 1 | 3 | 0.73908513321516064165531208767387340401 | 7.4 × 10^−18 | 8.3 × 10^−49 |
| NIM | f_3 = 0 | 3.5 | 7 | 2.03526848118195915354755041547361249916 | 6.4 × 10^−28 | 2.9 × 10^−47 |
| HIM | f_3 = 0 | 3.5 | 5 | 2.03526848118195915354755041547361249916 | 2.0 × 10^−39 | 5.8 × 10^−47 |
| Method 3 | f_3 = 0 | 3.5 | 4 | 2.03526848118195915354755041547361249916 | 2.0 × 10^−33 | 6.0 × 10^−47 |
| Method 4 | f_3 = 0 | 3.5 | 4 | 2.03526848118195915354755041547361249916 | 2.0 × 10^−33 | 8.0 × 10^−46 |
| Method 5 | f_3 = 0 | 3.5 | 4 | 2.03526848118195915354755041547361249916 | 3.4 × 10^−30 | 3.1 × 10^−45 |
| KIM | f_3 = 0 | 3.5 | 4 | 2.03526848118195915354755041547361249916 | 4.3 × 10^−33 | 8.6 × 10^−47 |
| DNIM | f_3 = 0 | 3.5 | 4 | 2.03526848118195915354755041547361249916 | 6.4 × 10^−28 | 9.9 × 10^−46 |
| CIM | f_3 = 0 | 3.5 | 4 | 2.03526848118195915354755041547361249916 | 1.1 × 10^−20 | 9.6 × 10^−46 |
| JIM | f_3 = 0 | 3.5 | 4 | 2.03526848118195915354755041547361249916 | 1.9 × 10^−22 | 1.1 × 10^−49 |
| NIM | f_4 = 0 | 3.6 | 8 | 0.25753028543986076045536730493724178138 | 6.5 × 10^−29 | 3.5 × 10^−46 |
| HIM | f_4 = 0 | 3.6 | 6 | 0.25753028543986076045536730493724178138 | 4.8 × 10^−37 | 1.6 × 10^−46 |
| Method 3 | f_4 = 0 | 3.6 | 4 | 0.25753028543986076045536730493724178138 | 9.6 × 10^−14 | 2.7 × 10^−46 |
| Method 4 | f_4 = 0 | 3.6 | 5 | 0.25753028543986076045536730493724178138 | 1.1 × 10^−36 | 3.3 × 10^−44 |
| Method 5 | f_4 = 0 | 3.6 | 4 | 0.25753028543986076045536730493724178138 | 2.5 × 10^−19 | 4.9 × 10^−44 |
| KIM | f_4 = 0 | 3.6 | 5 | 0.25753028543986076045536730493724178138 | 2.1 × 10^−14 | 8.9 × 10^−44 |
| DNIM | f_4 = 0 | 3.6 | 4 | 0.25753028543986076045536730493724178138 | 2.6 × 10^−14 | 3.5 × 10^−46 |
| CIM | f_4 = 0 | 3.6 | 4 | 0.25753028543986076045536730493724178138 | 2.8 × 10^−12 | 2.8 × 10^−46 |
| JIM | f_4 = 0 | 3.6 | 5 | 0.25753028543986076045536730493724178138 | 9.7 × 10^−38 | 9.7 × 10^−46 |
| NIM | f_5 = 0 | 3.5 | 11 | 0.44285440100238858314132799999933681972 | 8.2 × 10^−22 | 7.7 × 10^−43 |
| HIM | f_5 = 0 | 3.5 | 7 | 0.44285440100238858314132799999933681972 | 2.2 × 10^−37 | 6.1 × 10^−45 |
| Method 3 | f_5 = 0 | 3.5 | 5 | 0.44285440100238858314132799999933681972 | 1.8 × 10^−24 | 3.4 × 10^−45 |
| Method 4 | f_5 = 0 | 3.5 | 5 | 0.44285440100238858314132799999933681972 | 5.3 × 10^−37 | 7.9 × 10^−44 |
| Method 5 | f_5 = 0 | 3.5 | 6 | 0.44285440100238858314132799999933681972 | 2.0 × 10^−42 | 3.9 × 10^−42 |
| KIM | f_5 = 0 | 3.5 | 7 | 0.44285440100238858314132799999933681972 | 3.6 × 10^−23 | 2.7 × 10^−42 |
| DNIM | f_5 = 0 | 3.5 | 6 | 0.44285440100238858314132799999933681972 | 8.2 × 10^−22 | 4.9 × 10^−45 |
| CIM | f_5 = 0 | 3.5 | 7 | 0.44285440100238858314132799999933681972 | 3.3 × 10^−37 | 8.6 × 10^−44 |
| JIM | f_5 = 0 | 3.5 | 6 | 0.44285440100238858314132799999933681972 | 9.3 × 10^−13 | 6.6 × 10^−46 |
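The qualitative trend in Table 1 (higher-order methods reach the root in fewer iterations) can be reproduced in ordinary double precision, although the 38-digit roots and residuals near 10^−45 above require multiple-precision arithmetic. The sketch below uses the standard Newton and Halley formulas (not the paper's Padé-derived variants) on f(x) = cos x − x, whose root agrees with the value reported for f2; the exact definitions of f1–f5 are not restated in this excerpt, so this choice of test function is an assumption.

```python
import math

def newton(f, df, x0, tol=1e-14, max_iter=50):
    """Classical Newton iteration: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for k in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x, k + 1
    return x, max_iter

def halley(f, df, d2f, x0, tol=1e-14, max_iter=50):
    """Halley iteration: x_{k+1} = x_k - 2 f f' / (2 f'^2 - f f'')."""
    x = x0
    for k in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        step = 2 * fx * dfx / (2 * dfx**2 - fx * d2fx)
        x -= step
        if abs(step) < tol:
            return x, k + 1
    return x, max_iter

f = lambda x: math.cos(x) - x
df = lambda x: -math.sin(x) - 1
d2f = lambda x: -math.cos(x)

xn, kn = newton(f, df, 1.0)    # second-order method
xh, kh = halley(f, df, d2f, 1.0)  # third-order method, fewer iterations
print(xn, kn)
print(xh, kh)
```

Both runs converge to 0.7390851332151607..., with Halley needing fewer iterations than Newton, mirroring the NIM/HIM columns of the table.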

Citation: Li, S.; Liu, X.; Zhang, X. A Few Iterative Methods by Using [1,n]-Order Padé Approximation of Function and the Improvements. Mathematics 2019, 7, 55. https://doi.org/10.3390/math7010055
