Article

A Two-Dimensional Variant of Newton’s Method and a Three-Point Hermite Interpolation: Fourth- and Eighth-Order Optimal Iterative Schemes

Chein-Shan Liu 1, Essam R. El-Zahar 2,3 and Chih-Wen Chang 4,*
1 Center of Excellence for Ocean Engineering, National Taiwan Ocean University, Keelung 202301, Taiwan
2 Department of Mathematics, College of Sciences and Humanities in Al-Kharj, Prince Sattam bin Abdulaziz University, Alkharj 11942, Saudi Arabia
3 Department of Basic Engineering Science, Faculty of Engineering, Menofia University, Shebin El-Kom 32511, Egypt
4 Department of Mechanical Engineering, National United University, Miaoli 36063, Taiwan
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(21), 4529; https://doi.org/10.3390/math11214529
Submission received: 10 October 2023 / Revised: 28 October 2023 / Accepted: 1 November 2023 / Published: 3 November 2023

Abstract

A nonlinear equation $f(x) = 0$ is mathematically transformed to a coupled system of quasi-linear equations in the two-dimensional space. Then, a linearized approximation renders a fractional iterative scheme $x_{n+1} = x_n - f(x_n)/[a + b f(x_n)]$, which requires one evaluation of the given function per iteration. A local convergence analysis is adopted to determine the optimal values of $a$ and $b$. Moreover, upon combining the fractional iterative scheme with the generalized quadrature methods, fourth-order optimal iterative schemes are derived. Finite differences based on three data points are used to estimate the optimal values of $a$ and $b$. We recast the Newton iterative method into two types of derivative-free iterative schemes by using the finite difference technique. A three-point generalized Hermite interpolation technique is developed, which includes weight functions with certain constraints. Inserting the derived interpolation formulas into the triple Newton method, eighth-order optimal iterative schemes are constructed, of which four evaluations of functions per iteration are required.

1. Introduction

In this paper, we develop several novel iterative schemes to solve the nonlinear equation
$$f(x) = 0, \tag{1}$$
which occurs frequently in engineering and scientific applications. For solving Equation (1),
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} \tag{2}$$
is the well-known Newton iterative method. Many methods have been obtained by modifying the Newton iterative method [1,2,3,4,5,6,7,8], and they are effective for solving nonlinear equations. We are going to replace $f'(x_n)$ in the denominator of Equation (2) by a linear function of $f(x_n)$, namely $a + b f(x_n)$ for some constants $a$ and $b$. In doing so, a major drawback of the Newton iterative method, its sensitivity to the initial guess, can be avoided; at the same time, a major advantage is that $f'(x_n)$ is no longer needed.
The iterative schemes in [2,9,10,11,12,13,14] were based on quadratures; these are two-step iterative schemes with third-order convergence, needing a first Newton step to generate a trial solution, followed by a correction at the second step by some quadrature rule. Our fractional iterative scheme is of the one-step type and also of third-order convergence, which saves much computation of the function per iteration.
Weerakoon and Fernando [2] resorted to a trapezoidal quadrature rule to derive an arithmetic-mean Newton method with third-order convergence. After that, third-order iterative schemes based on different quadrature methods were developed in [9,10,11,12,15,16], of which the evaluations of $[f(x_n), f'(x_n), f'(\hat{x}_n)]$ with $\hat{x}_n = x_n - f(x_n)/f'(x_n)$ are required per iteration. They have the same order $p = 3$ and the same efficiency index (E.I.) $= p^{1/m} = 3^{1/3} = 1.44225$. However, the optimal order and efficiency index of an iterative scheme based on $m = 3$ evaluations are $p = 2^{m-1} = 4$ and E.I. $= 4^{1/3} = 1.5874$, according to the conjecture of Kung and Traub [17].
For the fourth-order optimal iterative scheme (FOIS) with $[f(x_n), f'(x_n), f(\hat{x}_n)]$ evaluated at each iteration, there are many methods [1,17,18,19,20,21,22,23,24,25,26,27,28,29]. Recently, Liu and Liu [30] derived a double-weight function technique to obtain the FOIS. Iterative schemes using difference modifications of Potra-Ptak's method with optimal fourth and eighth orders of convergence were given by Cordero et al. [31]. However, a two-step FOIS based on $[f(x_n), f'(x_n), f'(\hat{x}_n)]$ has not yet been developed. We are going to propose a simple method based on generalized quadratures to derive the FOIS, which is an optimal combination of two third-order iterative schemes developed in the paper.
Besides the quadratures and the finite difference approximations used in the Ostrowski method and its modifications [32,33,34,35,36], the function interpolation technique is often used to generate high-order iterative schemes. Data interpolation is a mathematical process to construct an interpolant from given data; when the derivative at each point is also matched, one obtains the Hermite interpolant, which has been used in higher-order iterative schemes [30,37,38,39,40,41,42,43]. We also develop three-point generalized Hermite interpolation techniques, and a new class of three-step eighth-order optimal iterative schemes (EOIS) is constructed in the paper, which involves four evaluations of functions and is optimal in the sense of Kung and Traub.
Previously, Zhanlav et al. [44] used the generating function method for constructing new EOIS iterations, from which our technique is quite different.
The scalar equation obtained in engineering applications is usually an implicit function of the unknown variable. For instance, the target equation used in the shooting method to solve a nonlinear boundary value problem is an implicit equation with $f(x)$ being an implicit function of $x$; in this situation, it is hard to obtain the derivative term $f'(x)$. When the Newton iterative method cannot be applied to solve this sort of problem, the proposed fractional iterative schemes have the advantage of solving this type of problem without using $f'(x)$.
The paper is organized as follows. A two-dimensional variant of Newton's iterative method is developed in Section 2. We derive a fractional iterative scheme, whose convergence criterion is proven. The convergence analysis of the fractional iterative scheme is carried out in Section 3. We verify the performance of the proposed iterative schemes in Section 4 by computing several numerical examples. In Section 5, we combine the fractional iterative scheme with the quadrature methods to generate the FOIS. We reduce the fractional iterative schemes to some derivative-free iterative schemes in Section 6, and the convergence is identified. The Hermite interpolation is introduced in Section 7, and a three-point interpolation formula is derived. The results are used in Section 8 to derive the three-point generalized Hermite interpolations. In Section 9, we construct the EOIS by using the weight functions obtained from the three-point generalized Hermite interpolations, and examples are given. Finally, we draw the conclusions in Section 10. The abbreviations are listed in the Abbreviations.

2. Two-Dimensional Generalization of Newton’s Method

To motivate the present study, we begin with
$$f'(x_n) = f'(r) + f''(r)(x_n - r) + \cdots, \tag{3}$$
$$f(x_n) = f'(r)(x_n - r) + \cdots, \tag{4}$$
where $r$ is a simple solution of $f(x) = 0$ with $f(r) = 0$ and $f'(r) \neq 0$. Inserting Equation (3) for $f'(x_n)$ into Equation (2) and using $x_n - r = f(x_n)/f'(r)$, derived from Equation (4) with the higher-order terms neglected, yields
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(r) + \frac{f''(r)}{f'(r)} f(x_n)}. \tag{5}$$
This iterative scheme is a variant close to the Newton iterative scheme (2). We will prove that the iterative scheme (5), like Equation (2), is quadratically convergent, as stated in Theorem 3 in Section 3. Below, we derive an iterative scheme with a form similar to Equation (5) that is, however, cubically convergent rather than quadratically convergent like Equation (5).
When a new variable is defined by
$$y := f(x) = g(x) x, \tag{6}$$
where we suppose that $f(x)$ can be decomposed as $f(x) = g(x) x$, $x \neq 0$, the following identity holds from Equation (6):
$$y - g(x) x = 0. \tag{7}$$
By Equations (1) and (7), we have
$$a x + y = a x, \tag{8}$$
$$[w(x) - 1] g(x) x + y = w(x) g(x) x, \tag{9}$$
where $a$ is an accelerating parameter and $w(x)$ is a splitting function to be discussed below. While $ax$ is added on both sides of the equation $y = 0$ to obtain Equation (8), we add $w(x) g(x) x$ on both sides of Equation (7) to render Equation (9). Herein, the problem of finding the solution of Equation (1) is mathematically transformed to a coupled system of quasi-linear Equations (8) and (9) in the two-dimensional space $(x, y)$.
The splitting technique $g(x) x = (1 - w + w) g(x) x$ in Equation (9) was used to solve Equation (1) in [45]. Then, Liu et al. [46] proposed a derivative-free iterative scheme using $f(x) = g(x) x + A x - B = 0$. We further carry out a theoretical analysis in the two-dimensional space directly for $f(x) = g(x) x = 0$.
When $(x_n, y_n)$ is obtained at the $n$th step, the linearizations of Equations (8) and (9) around $x_n$ are
$$a x + y = a x_n, \tag{10}$$
$$(w_n - 1) g_n x + y = w_n g_n x_n, \tag{11}$$
which constitute a two-dimensional linear system for $(x, y)$. We take $g_n = g(x_n)$ and $w_n = w(x_n)$ to shorten the notation.
From Equations (10) and (11), we can obtain $(x_{n+1}, y_{n+1})$ at the next step by
$$\begin{bmatrix} x_{n+1} \\ y_{n+1} \end{bmatrix} = \begin{bmatrix} a & 1 \\ (w_n - 1) g_n & 1 \end{bmatrix}^{-1} \begin{bmatrix} a x_n \\ w_n g_n x_n \end{bmatrix},$$
which renders
$$x_{n+1} = \frac{a x_n - w_n g_n x_n}{a + (1 - w_n) g_n}, \tag{12}$$
$$y_{n+1} = \frac{a g_n x_n}{a + (1 - w_n) g_n} = \frac{a y_n}{a + (1 - w_n) g_n}. \tag{13}$$
Both the numerator and denominator on the right-hand side of Equation (12) are multiplied by $x_n$, and using Equation (6), it can be refined to
$$x_{n+1} = x_n - \frac{f(x_n) x_n}{a x_n + (1 - w_n) f(x_n)}. \tag{14}$$
Remark 1. 
Mathematically, after adding $a x - w(x) f(x)$ on both sides, Equation (1) is equivalent to
$$a x + f(x) - w(x) f(x) = a x - w(x) f(x),$$
which is then multiplied by $x$:
$$[a x + f(x) - w(x) f(x)] x = a x^2 - w(x) f(x) x.$$
If $x_n$ is already known, we can seek the next $x_{n+1}$ by
$$x_{n+1} = \frac{a x_n^2 - w(x_n) f(x_n) x_n}{a x_n + f(x_n) - w(x_n) f(x_n)} = x_n - \frac{f(x_n) x_n}{a x_n + f(x_n) - w(x_n) f(x_n)}. \tag{15}$$
Upon taking the same notation $w_n = w(x_n)$, Equation (15) goes back to Equation (14). However, this one-dimensional approach cannot generate Equation (13); without setting the problem $f(x) = 0$ in the two-dimensional space $(x, y)$, as carried out in Equations (8) and (9), it would be hard to determine $a$ and $w(x)$, and it is Equation (13) that is used to prove Theorem 1 given below.
We are going to show that, without resorting to the derivative term, third-order convergence of the iterative scheme can also be realized. Our aim is to reduce the number of function evaluations to one per iteration, without using the derivative term, while maintaining an order of convergence of three.
Letting
$$1 - w_n = b x_n, \tag{16}$$
where $b$ is a constant, we can cancel $x_n$ in the fractional term of Equation (14), and with the aid of Equation (16), we achieve
$$x_{n+1} = x_n - \frac{f(x_n)}{a + b f(x_n)}, \tag{17}$$
which, including the two constant parameters $a$ and $b$, is a novel iterative scheme to solve Equation (1). For later use, the iterative scheme (17) is called a fractional iterative scheme.
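For readers who wish to experiment, the following is a minimal Python sketch of the fractional iterative scheme (17) under the stopping criteria used later in Section 4; the test function and parameter values (those of Equation (35) of Section 4 near r = -2, with a = f'(-2) = 10.5 and b = f''(-2)/(2f'(-2)) = -13/21) are illustrative assumptions, not part of the derivation.

```python
def fractional_iter(f, x0, a, b, eps=1e-15, max_iter=100):
    """Fractional iterative scheme (17): x_{n+1} = x_n - f(x_n)/(a + b*f(x_n))."""
    x = x0
    for n in range(1, max_iter + 1):
        fx = f(x)
        x_new = x - fx / (a + b * fx)
        if abs(x_new - x) < eps and abs(fx) < eps:
            return x_new, n
        x = x_new
    return x, max_iter

# Cubic test equation (Equation (35) of Section 4), root r = -2:
f = lambda x: x**3 - 0.5 * x**2 - 3.5 * x + 3.0
print(fractional_iter(f, x0=-3.0, a=10.5, b=-13.0 / 21.0))
```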
Theorem 1. 
For solving $f(x) = 0$, the iterative scheme (17) is convergent if
$$b^2 f^2(x_n) + 2 a b f(x_n) > 0. \tag{18}$$
A finer criterion is
$$\frac{b f(x_n)}{a} > 0, \quad \text{or} \quad \frac{b f(x_n)}{a} < -2. \tag{19}$$
Proof. 
In Equation (13), $y_{n+1}$ is replaced by $f(x_{n+1})$ and $y_n$ is replaced by $f(x_n)$. In view of Equations (6) and (16), we have
$$f(x_{n+1}) = \frac{a f(x_n)}{a + b f(x_n)}. \tag{20}$$
Let
$$\xi_n := \frac{a}{a + b f(x_n)} \tag{21}$$
be a contraction factor. By Equation (18), we have
$$b^2 f^2(x_n) + 2 a b f(x_n) > 0 \;\Rightarrow\; |\xi_n| < 1,$$
which implies a strictly monotonically decreasing sequence with
$$|f(x_{n+1})| < |f(x_n)|. \tag{22}$$
Hence, the absolute convergence of the iterative scheme (17) is proved. Equation (20) can be written as
$$f(x_{n+1}) = \frac{f(x_n)}{1 + b f(x_n)/a}. \tag{23}$$
If the criterion in Equation (19) is satisfied, then from Equation (23) we can derive the inequality in Equation (22). The proof is complete. □
Remark 2. 
Although we have made the decomposition $f(x) = g(x) x$ in Equation (6), the final results in Equations (17) and (20) are independent of $g(x)$. However, the decomposition technique in Equation (6) helps us to derive the two-dimensional approach to the Newton method and the fractional iterative scheme.

3. Convergence Analysis of Fractional Iterative Scheme

The convergence analysis of Equation (17) is given below.
Theorem 2. 
The iterative scheme (17) for solving $f(x) = 0$ has third-order convergence, with the parameters given by
$$a = f'(r), \quad b = \frac{f''(r)}{2 f'(r)}. \tag{24}$$
Proof. 
For the proof of convergence, let $r$ be a simple solution of $f(x) = 0$, i.e., $f(r) = 0$ and $f'(r) \neq 0$. Thus, by defining
$$e_n = x_n - r,$$
it follows that
$$e_{n+1} = e_n + x_{n+1} - x_n, \tag{25}$$
$$f(x_n) = f'(r)[e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + \cdots], \tag{26}$$
where
$$c_k := \frac{f^{(k)}(r)}{k! \, f'(r)}, \quad k = 2, 3, \ldots.$$
Inserting Equation (26) into Equation (17) yields
$$\frac{f(x_n)}{a + b f(x_n)} = \frac{e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + \cdots}{1 + b e_n + b c_2 e_n^2 + b c_3 e_n^3 + \cdots} = e_n + D_2 e_n^2 + D_3 e_n^3 + D_4 e_n^4 + \cdots, \tag{27}$$
where we have used the first part of Equation (24), and $D_2$, $D_3$ and $D_4$ are given by
$$D_2 = c_2 - b, \quad D_3 = c_3 - 2 c_2 b + b^2, \quad D_4 = 2 b^2 c_2 - b c_3 + c_2 (b^2 - b c_2) - c_3 b + c_4 - b^3. \tag{28}$$
Inserting Equation (27) into Equation (17) and using Equation (25) yields
$$e_{n+1} = e_n - e_n - D_2 e_n^2 - D_3 e_n^3 - D_4 e_n^4 - \cdots = -D_2 e_n^2 - D_3 e_n^3 - D_4 e_n^4 - \cdots. \tag{29}$$
If $b = c_2$ holds, as given in the second part of Equation (24), we have
$$D_2 = 0, \quad D_3 = c_3 - c_2^2, \quad D_4 = c_2^3 - 2 c_2 c_3 + c_4,$$
and at the same time, Equation (29) reduces to
$$e_{n+1} = (c_2^2 - c_3) e_n^3 + O(e_n^4). \tag{30}$$
Equation (30) indicates the third-order convergence. □
Theorem 3. 
The iterative scheme (5) for solving $f(x) = 0$ has second-order convergence.
Proof. 
Upon comparing to Equation (17), the parameters in Equation (5) are given by
$$a = f'(r), \quad b = \frac{f''(r)}{f'(r)}.$$
Inserting this $b$ into $D_2$ in Equation (28) yields
$$D_2 = c_2 - \frac{f''(r)}{f'(r)} = \frac{f''(r)}{2 f'(r)} - \frac{f''(r)}{f'(r)} = -\frac{f''(r)}{2 f'(r)} \neq 0.$$
By Equation (29),
$$e_{n+1} = e_n - e_n - D_2 e_n^2 - D_3 e_n^3 - D_4 e_n^4 - \cdots = \frac{f''(r)}{2 f'(r)} e_n^2 + O(e_n^3),$$
which proves this theorem. □
In practice, the iterative scheme (5) is a variant of the Newton iterative scheme (2). Both orders of convergence are two.
Notice that the Newton method (2) is a single-point second-order optimal iterative scheme, with two function operations, $f(x_n)$ and $f'(x_n)$. Halley [47] derived the following extension of the Newton method to a third-order iterative scheme:
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n) - \frac{f(x_n) f''(x_n)}{2 f'(x_n)}}. \tag{31}$$
However, because it needs three function operations on $f(x_n)$, $f'(x_n)$ and $f''(x_n)$, it is not an optimal iterative scheme. Besides the Halley method, there are many two-point iterative schemes of third-order convergence. Liu and Li [48] generalized many quadrature-type third-order iterative schemes to
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \quad \eta_n = \frac{f'(y_n)}{f'(x_n)} - 1, \quad x_{n+1} = x_n - W(\eta_n) \frac{f(x_n)}{f'(x_n)}, \quad W(0) = 1, \; W'(0) = -\frac{1}{2}, \; |W''(0)| < \infty. \tag{32}$$
Based on three function operations on $f(x_n)$, $f'(x_n)$ and $f'(y_n)$, Equation (32) is not an optimal iterative scheme. It is interesting that, upon comparing Equation (31) to Equation (17), these two iterative schemes are the same if we take $a = f'(x_n)$ and $b = -f''(x_n)/(2 f'(x_n))$. But merely with the constants $a$ and $b$ given by Equation (24), the iterative scheme (17) is of third-order convergence. It must be emphasized that we need neither $f'(x_n)$ and $f''(x_n)$ nor a two-point operation to achieve third-order convergence. Therefore, the key issue of Equations (17) and (24) is that we need to give precise estimations of $a$ and $b$, without using the information of $f'(r)$ and $f''(r)$.

4. Numerical Verifications

Some examples are used to evaluate the iterative scheme (17), which is subjected to the convergence criteria:
$$|x_{n+1} - x_n| < \varepsilon \quad \text{and} \quad |f(x_n)| < \varepsilon.$$
We fix $\varepsilon = 10^{-15}$ for all numerical tests. The numerically computed order of convergence (COC) is defined in [2]:
$$\mathrm{COC} := \frac{\ln |(x_{n+1} - r)/(x_n - r)|}{\ln |(x_n - r)/(x_{n-1} - r)|}, \tag{33}$$
where f ( r ) = 0 .
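In a program, the COC of Equation (33) is a one-line computation from three consecutive iterates once the root r is known; the following Python fragment is a direct transcription of the formula.

```python
import math

def coc(x_prev, x_curr, x_next, r):
    """COC of Equation (33) from the iterates x_{n-1}, x_n, x_{n+1} and the root r."""
    return (math.log(abs((x_next - r) / (x_curr - r)))
            / math.log(abs((x_curr - r) / (x_prev - r))))
```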
The presently computed results are compared to those obtained by the Newton method (NM), the Halley method (HM) [47] in Equation (31), and the method of Li (LM) [18]:
$$\hat{x}_n = x_n - \frac{f(x_n)}{f'(x_n)}, \quad x_{n+1} = x_n - \frac{f(x_n)[f(x_n) - f(\hat{x}_n)]}{f'(x_n)[f(x_n) - 2 f(\hat{x}_n)]}. \tag{34}$$
The orders of convergence for the NM, HM and LM are two, three and four, respectively.
We first use the following example:
$$f(x) = x^3 - \frac{1}{2} x^2 - \frac{7}{2} x + 3 = 0, \tag{35}$$
to present the monotonically decreasing sequence of $f(x_n)$ generated by the method in Equation (17). Here, the parameters $a$ and $b$ are specified, not given by Equation (24).
There are three solutions, $r = -2$, $r = 1$ and $r = 1.5$, of Equation (35). We consider four cases:
$$(a)\; x_0 < 0,\; f(x_0) < 0,\; a > 0,\; b < 0; \quad (b)\; x_0 < 0,\; f(x_0) > 0,\; a > 0,\; b < 0; \quad (c)\; x_0 > 0,\; f(x_0) < 0,\; a < 0,\; b < 0; \quad (d)\; x_0 > 0,\; f(x_0) > 0,\; a > 0,\; b > 0.$$
We take $x_0 = -3$, $a = 10.5$ and $b = -0.619$ for (a); $x_0 = -1$, $a = 10.5$ and $b = -0.619$ for (b); $x_0 = 1.1$, $a = -1.5$ and $b = -1.66$ for (c); and $x_0 = 2.5$, $a = 1.75$ and $b = 1.8$ for (d). Cases (a) and (b) tend to the solution $r = -2$; case (c) tends to $r = 1$; and case (d) tends to $r = 1.5$.
Due to the monotonically decreasing sequence of $f(x_n)$, all the COCs are near three, and the iterations converge very fast. In the last column of Table 1, we list the NIs obtained by using the LM of [18]. Starting from the same initial values, for the first two cases, the scheme (17) converges slightly faster than the LM. As mentioned by Li [18], the iterative scheme (34) requires two evaluations of the function and one first-order derivative per iteration. Therefore, the scheme (17), with one evaluation of the function per iteration, saves much of the computational cost.
Other test examples are given by
$$f_1(x) = \exp(x^2 + 7x - 30) - 1, \tag{36}$$
$$f_2(x) = x^3 + 4 x^2 - 10, \tag{37}$$
$$f_3(x) = x e^{x^2} - \sin^2 x + 3 \cos x + 5, \tag{38}$$
$$f_4(x) = |x^2 - 9|. \tag{39}$$
Table 2 lists $x_0$, $a$ and $b$, and the NIs for the different methods. We can observe that the present iterative scheme converges faster than the NM and HM. The NM and HM are not good for the solution of $f_2(x) = 0$ with a worse initial guess $x_0 = -0.3$, and they cannot be applied to solve $f_4(x) = 0$ in Equation (39).

5. Fourth-Order Optimal Iterative Schemes

Now, we propose some new FOIS by a constant-weight combination of the third-order iterative scheme of Equations (17) and (24) and the following one. Before that, we cite the following result [48].
Lemma 1. 
The following two-step iterative scheme has third-order convergence:
$$x_{n+1} = x_n - W(\eta_n) \frac{f(x_n)}{f'(x_n)}, \tag{40}$$
where $W$ satisfies
$$W(0) = 1, \quad W'(0) = -\frac{1}{2}, \quad |W''(0)| < \infty.$$
Here,
$$\hat{x}_n = x_n - \frac{f(x_n)}{f'(x_n)}, \quad \eta_n := \frac{f'(\hat{x}_n)}{f'(x_n)} - 1.$$
The error equation reads as
$$e_{n+1} = \left( 2 [1 - W''(0)] c_2^2 + \frac{c_3}{2} \right) e_n^3 + O(e_n^4). \tag{41}$$
Proof. 
For the proof, refer to [48]. □
Theorem 4. 
The following iterative scheme, an optimal combination of Equations (17) and (40):
$$x_{n+1} = x_n - w_1 \frac{f(x_n)}{a + b f(x_n)} - w_2 W(\eta_n) \frac{f(x_n)}{f'(x_n)}, \tag{42}$$
is of fourth-order convergence, where $W(0) = 1$, $W'(0) = -1/2$, $W''(0) = \alpha$, and
$$w_1 = \frac{1}{3}, \quad w_2 = \frac{2}{3}, \quad \alpha = \frac{5}{4}. \tag{43}$$
Proof. 
The combination of Equations (17) and (40) is given by Equation (42), whose weighting factors $w_1$ and $w_2$ are subjected to
$$w_1 + w_2 = 1. \tag{44}$$
Then, we consider the weighted combination of the error equations in Equations (30) and (41), such that the combined coefficient preceding $e_n^3$ is zero:
$$w_1 (c_2^2 - c_3) + w_2 \left( 2 [1 - \alpha] c_2^2 + \frac{c_3}{2} \right) = 0 \;\Rightarrow\; w_1 (c_2^2 - c_3) - \frac{w_2}{2} [(4 \alpha - 4) c_2^2 - c_3] = 0. \tag{45}$$
Equation (45) leads to
$$4 \alpha - 4 = 1, \quad w_1 - \frac{w_2}{2} = 0. \tag{46}$$
Solving Equations (44) and (46), we can derive Equation (43). Thus, the error equation of the optimally combined iterative scheme in Equation (42) is
$$e_{n+1} = O(e_n^4).$$
This completes the proof of Theorem 4. □
As an application of Theorem 4, we consider [49]:
$$x_{n+1} = x_n - \frac{f(x_n) [f'(x_n)^{\gamma - 1} + f'(\hat{x}_n)^{\gamma - 1}]}{f'(x_n)^{\gamma} + f'(\hat{x}_n)^{\gamma}}, \quad e_{n+1} = \left( \gamma c_2^2 + \frac{c_3}{2} \right) e_n^3 + O(e_n^4),$$
which in the form of Equation (40) leads to
$$W(\eta_n) = \frac{1 + (\eta_n + 1)^{\gamma - 1}}{1 + (\eta_n + 1)^{\gamma}}.$$
When we take $\gamma = -1/2$, the conditions $W(0) = 1$, $W'(0) = -1/2$ and $W''(0) = 5/4$ in Theorem 4 are satisfied. In Table 3, we solve Equation (35) by using the FOIS (42) and list the results for the three different solutions $r = -2, 1, 1.5$, which show large values of the COC.
Let $\alpha = W''(0)$ be a parameter. With $W(\eta) = 1 - \eta/2 + \alpha \eta^2/2$ in Equation (40), we have
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} \left( \frac{3}{2} - \frac{f'(\hat{x}_n)}{2 f'(x_n)} + \frac{\alpha [f'(\hat{x}_n) - f'(x_n)]^2}{2 f'^2(x_n)} \right), \quad e_{n+1} = \left( 2 [1 - \alpha] c_2^2 + \frac{c_3}{2} \right) e_n^3 + O(e_n^4).$$
The best value $\alpha = 5/4$ is chosen, such that
$$x_{n+1} = x_n - \frac{f(x_n)}{3 [a + b f(x_n)]} - \frac{2 f(x_n)}{3 f'(x_n)} \left( \frac{3}{2} - \frac{f'(\hat{x}_n)}{2 f'(x_n)} + \frac{\alpha [f'(\hat{x}_n) - f'(x_n)]^2}{2 f'^2(x_n)} \right) \tag{47}$$
is an FOIS.
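A possible Python transcription of the FOIS (47) is sketched below; here df denotes the user-supplied derivative f', alpha defaults to the optimal value 5/4, and a and b are the parameters of the fractional scheme (17). The function names are illustrative assumptions only.

```python
def fois47(f, df, x0, a, b, alpha=1.25, eps=1e-15, max_iter=50):
    """FOIS (47): the 1/3-2/3 combination of the fractional scheme (17) and
    the quadrature scheme (40) with W(eta) = 1 - eta/2 + alpha*eta^2/2."""
    x = x0
    for n in range(1, max_iter + 1):
        fx, dfx = f(x), df(x)
        if abs(fx) < eps:
            return x, n - 1
        xhat = x - fx / dfx                 # Newton predictor
        dfh = df(xhat)
        W = 1.5 - dfh / (2.0 * dfx) + alpha * (dfh - dfx)**2 / (2.0 * dfx**2)
        x_new = x - fx / (3.0 * (a + b * fx)) - 2.0 * fx * W / (3.0 * dfx)
        if abs(x_new - x) < eps:
            return x_new, n
        x = x_new
    return x, max_iter
```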
In Table 4, we solve Equation (36) and list the results for the solution r = 3 with four different initial guesses: x 0 = 3.1 , 3.5 , 4 , 5 , which show large values of the COC.

6. Derivative-Free Iterative Schemes

In this section, we approximate $a$ and $b$ in Equation (24). Two initial guesses $x_0$ and $\tilde{x}_2$ satisfying $f(x_0) f(\tilde{x}_2) < 0$, which renders $r \in (x_0, \tilde{x}_2)$, are given, and $\tilde{x}_1 = (x_0 + \tilde{x}_2)/2$ is taken. By a finite difference approximation of $a$ and $b$, we take
$$a = \frac{f(\tilde{x}_2) - f(x_0)}{\tilde{x}_2 - x_0}, \quad b = \frac{1}{2a} \frac{f(\tilde{x}_2) - 2 f(\tilde{x}_1) + f(x_0)}{(\tilde{x}_1 - x_0)^2} = \frac{2 f(\tilde{x}_2) - 4 f(\tilde{x}_1) + 2 f(x_0)}{(\tilde{x}_2 - x_0)[f(\tilde{x}_2) - f(x_0)]}. \tag{48}$$
Inserting a and b in Equation (48) into Equation (17), we solve Equation (35) and the related data are tabulated in Table 5.
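In code, the estimates (48) are two difference quotients; the sketch below is a minimal Python rendering, and the printed values reproduce the first row of Table 5 under exact arithmetic.

```python
def estimate_ab(f, x0, x2):
    """Finite-difference estimates (48) of a ~ f'(r) and b ~ f''(r)/(2 f'(r)),
    for bracketing points x0, x2 with f(x0)*f(x2) < 0 and midpoint x1."""
    x1 = 0.5 * (x0 + x2)
    a = (f(x2) - f(x0)) / (x2 - x0)
    b = (f(x2) - 2.0 * f(x1) + f(x0)) / (2.0 * a * (x1 - x0)**2)
    return a, b

f = lambda x: x**3 - 0.5 * x**2 - 3.5 * x + 3.0   # Equation (35)
print(estimate_ab(f, -2.5, -1.5))                 # (10.75, -0.60465...)
```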
We solve Equations (36)–(39), and the related data are tabulated in Table 6.
Remark 3. 
For the solution of $f_4(x) = 0$, there do not exist $x_0$ and $\tilde{x}_2$ such that $f_4(x_0) f_4(\tilde{x}_2) < 0$, due to $f_4(x) \geq 0$. However, we can place $x_0$ and $\tilde{x}_2$ on the right-hand side of the solution, and the present iterative scheme is applicable to find the solution within 14 iterations, as shown in Table 6.
If the curve of $f(x)$ vs. $x$ is available, we can observe a rough position of the solution $r$, and then the slope $f'(r)$ and the curvature $f''(r)$ can be estimated roughly. Intuitively, we can estimate $a$ and $b$ by the slope and curvature. In order to maintain fast convergence, we must choose $\tilde{x}_0$ and $\tilde{x}_2$ quite close to the solution $r$, such that
$$\tilde{a} = \frac{f(\tilde{x}_2) - f(\tilde{x}_0)}{\tilde{x}_2 - \tilde{x}_0}, \quad \tilde{b} = \frac{1}{2 \tilde{a}} \frac{f(\tilde{x}_2) - 2 f(\tilde{x}_1) + f(\tilde{x}_0)}{(\tilde{x}_1 - \tilde{x}_0)^2} \tag{49}$$
are very close to $a$ and $b$ in Equation (24), where $\tilde{x}_1 = (\tilde{x}_0 + \tilde{x}_2)/2$. For Equation (35) with the solution $r = -2$, if we take $\tilde{x}_0 = r - 0.01$ and $\tilde{x}_2 = r + 0.01$, the NI is greatly reduced from 11 to 5 and COC = 2.904; for $r = 1$, the NI reduces to 6 and COC = 2.862; and for $r = 1.5$, the NI reduces to 5 and COC = 2.936. Here, the NI and COC are improved in comparison to those in Table 5.
A greedy search, such as the 2D golden section search algorithm over a given range, can help us to obtain the optimal values of $a$ and $b$ for fast convergence. However, it would spend much more computation. Instead of a greedy search in the plane $(a, b)$, we discuss the influence of $\tilde{a}$ and $\tilde{b}$ in Equation (49) by giving $\tilde{x}_0 = r - c$ and $\tilde{x}_2 = r + c$. For different $c$, $\tilde{a}$ and $\tilde{b}$ are different. In Table 7, we list the results. Obviously, the COC defined in Equation (33) is sensitive to the values of $\tilde{a}$ and $\tilde{b}$ when they approach the optimal values, but the NI does not have a large variation. When $\tilde{a}$ and $\tilde{b}$ tend to the optimal values $a = 10.5$ and $b = -0.619047619$, COC = 2.9041 tends to the theoretical one, COC = 3, as listed in Table 1.
As performed in Equations (48) and (17), we introduce a derivative-free modification of the Newton variant in Equation (5):
$$x_{n+1} = x_n - \frac{f(x_n)}{A + B f(x_n)}, \tag{50}$$
where
$$A = \frac{f(\tilde{x}_2) - f(x_0)}{\tilde{x}_2 - x_0}, \quad B = \frac{1}{A} \frac{f(\tilde{x}_2) - 2 f(\tilde{x}_1) + f(x_0)}{(\tilde{x}_1 - x_0)^2} = \frac{4 f(\tilde{x}_2) - 8 f(\tilde{x}_1) + 4 f(x_0)}{(\tilde{x}_2 - x_0)[f(\tilde{x}_2) - f(x_0)]}. \tag{51}$$
For Equations (36)–(39) solved by the first derivative-free Newton method (FDFNM), the related data are tabulated in Table 8.
By neglecting the higher-order terms in Equations (2) and (3), we have
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(r) + f''(r)(x_n - r)}. \tag{52}$$
We have
$$f(x_n) = f(r) + f'(r)(x_n - r) + \frac{1}{2} f''(r)(x_n - r)^2 + \cdots. \tag{53}$$
By using $f(r) = 0$ and neglecting the higher-order terms, a quadratic equation for $x_n - r$ follows from Equation (53):
$$f''(r)(x_n - r)^2 + 2 f'(r)(x_n - r) - 2 f(x_n) = 0.$$
Thus, we can derive
$$x_n - r = \frac{\sqrt{f'^2(r) + 2 f(x_n) f''(r)} - f'(r)}{f''(r)}. \tag{54}$$
Inserting Equation (54) into Equation (52), we can derive the second modified Newton method:
$$x_{n+1} = x_n - \frac{f(x_n)}{\sqrt{f'^2(r) + 2 f(x_n) f''(r)}}, \tag{55}$$
which is different from the first modified Newton method (5). Let
$$C = \frac{f(\tilde{x}_2) - f(x_0)}{\tilde{x}_2 - x_0}, \quad D = \frac{f(\tilde{x}_2) - 2 f(\tilde{x}_1) + f(x_0)}{(\tilde{x}_1 - x_0)^2}, \tag{56}$$
and we can obtain the second derivative-free Newton method (SDFNM):
$$x_{n+1} = x_n - \frac{f(x_n)}{\sqrt{C^2 + 2 D f(x_n)}}. \tag{57}$$
We employ the SDFNM to solve Equations (36)–(39), and the related data are tabulated in Table 9.
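For illustration, the SDFNM of Equations (56) and (57) can be sketched in Python as follows; the sketch assumes that C² + 2Df(x_n) stays positive along the iterations so that the square root is real.

```python
import math

def sdfnm(f, x0, x2, x_init, eps=1e-15, max_iter=100):
    """SDFNM (56)-(57): x_{n+1} = x_n - f(x_n)/sqrt(C^2 + 2*D*f(x_n))."""
    x1 = 0.5 * (x0 + x2)
    C = (f(x2) - f(x0)) / (x2 - x0)                      # ~ f'(r)
    D = (f(x2) - 2.0 * f(x1) + f(x0)) / (x1 - x0)**2     # ~ f''(r)
    x = x_init
    for n in range(1, max_iter + 1):
        fx = f(x)
        x_new = x - fx / math.sqrt(C * C + 2.0 * D * fx)
        if abs(x_new - x) < eps and abs(fx) < eps:
            return x_new, n
        x = x_new
    return x, max_iter
```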
Upon comparing Table 6, Table 8 and Table 9, the performances of the presented method in Equations (48) and (17), the FDFNM in Equations (50) and (51), and the SDFNM in Equations (56) and (57) are almost the same.
As a practical application of the proposed iterative schemes, let us consider a nonlinear boundary value problem:
$$u''(y) = \frac{3}{2} u^2(y), \quad y \in (0, 1), \tag{58}$$
$$u(0) = 4, \quad u(1) = 1. \tag{59}$$
An exact solution is
$$u(y) = \frac{4}{(y + 1)^2}.$$
In the conventional shooting method, an unknown initial slope $u'(0) = x$ is assumed, and we integrate Equation (58) with the initial conditions $u(0) = 4$ and $u'(0) = x$, which results in an implicit equation $f(x) = u(1; x) - 1 = 0$ to be solved. The exact solution is $r = -8$.
We apply the fourth-order Runge–Kutta method to integrate Equation (58) with $N = 7500$ steps, and fix $x_0 = -9.5$ and $\tilde{x}_2 = -6$. By using Equations (48) and (17), the NI is 16, the error of $x$ is $2.842 \times 10^{-14}$ and the maximum error of $u$ is $5.33 \times 10^{-15}$. When we use Equations (50) and (51), the NI increases to 18, and the errors are the same.
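The shooting computation can be reproduced along the following lines; the classical RK4 integrator below and its coupling to the earlier estimate_ab/fractional_iter sketches are illustrative assumptions, not the authors' code.

```python
def shoot(x, N=7500):
    """Implicit shooting function f(x) = u(1; x) - 1 for Equation (58),
    integrating u'' = 1.5*u^2 with u(0) = 4, u'(0) = x by classical RK4."""
    h = 1.0 / N
    u, v = 4.0, x                              # v = u'
    rhs = lambda u, v: (v, 1.5 * u * u)        # (u', v')
    for _ in range(N):
        k1u, k1v = rhs(u, v)
        k2u, k2v = rhs(u + 0.5 * h * k1u, v + 0.5 * h * k1v)
        k3u, k3v = rhs(u + 0.5 * h * k2u, v + 0.5 * h * k2v)
        k4u, k4v = rhs(u + h * k3u, v + h * k3v)
        u += h * (k1u + 2.0 * k2u + 2.0 * k3u + k4u) / 6.0
        v += h * (k1v + 2.0 * k2v + 2.0 * k3v + k4v) / 6.0
    return u - 1.0

# a, b = estimate_ab(shoot, -9.5, -6.0)        # Equation (48) on [x0, x_2]
# print(fractional_iter(shoot, -9.5, a, b))    # converges to the slope r = -8
```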

7. Hermite Interpolation

As extensions of the one-step Newton method (2), there are, respectively, the two-step and three-step methods of the double Newton and triple Newton:
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \quad x_{n+1} = y_n - \frac{f(y_n)}{f'(y_n)}; \qquad y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \quad z_n = y_n - \frac{f(y_n)}{f'(y_n)}, \quad x_{n+1} = z_n - \frac{f(z_n)}{f'(z_n)}. \tag{60}$$
However, due to the low efficiency index (E.I.) = 1.414 of these iterative schemes, they are rarely used in the solution of nonlinear equations. Below, we will employ the generalized Hermite interpolation techniques to raise the value of the E.I.
We fix
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \tag{61}$$
$$\xi_n = \frac{f(y_n)}{f(x_n)}, \quad \eta_n = \frac{f(z_n)}{f(y_n)}, \quad \gamma_n = \frac{f(z_n)}{f(x_n)} = \xi_n \eta_n. \tag{62}$$
The Hermite function $H(x)$ for the interpolation of the data of a function $f(x)$ at two points $x_n$ and $y_n$ is such that
$$H(x_n) = f(x_n), \quad H(y_n) = f(y_n), \quad H'(x_n) = f'(x_n), \quad H'(y_n) = f'(y_n). \tag{63}$$
If $H(x)$ is a polynomial matching these four conditions in Equation (63), it is at least a second-order function of $x$, denoted as $H_2(x)$. When $y_n$ is computed from Equation (61), it is not independent of $x_n$; hence, there exists a Hermite interpolation formula to predict $f'(y_n)$ from the data $(f(x_n), f(y_n), f'(x_n))$. The two-point Hermite interpolation formula was generalized in [30], involving a weight function.
The second-order Hermite polynomial is constructed according to the Hermite interpolation conditions [30]:
$$H_2(x_n) = f(x_n), \quad H_2(y_n) = f(y_n), \quad H_2'(x_n) = f'(x_n).$$
Wang and Liu [36] derived
$$H_2(x) = \frac{(x - y_n) f(x_n)}{x_n - y_n} - \frac{x - x_n}{(x_n - y_n)^2} [(x - y_n) f(x_n) - (x - x_n) f(y_n)] + \frac{(x - x_n)(x - y_n)}{x_n - y_n} f'(x_n),$$
$$H_2'(x) = \frac{2 (x - x_n)}{(y_n - x_n)^2} [f(y_n) - f(x_n)] - \frac{2x - x_n - y_n}{y_n - x_n} f'(x_n).$$
Then, it follows from $f'(y_n) = H_2'(y_n)$ and Equations (61) and (62) that
$$f'(y_n) = (1 - 2 \xi_n) f'(x_n), \tag{64}$$
which expresses a certain two-point generalized Hermite interpolation.
Definition 1. 
A two-point generalized Hermite interpolation of $f'(y_n)$ in terms of $f'(x_n)$ and $\xi_n$ is depicted by
$$f'(y_n) = h(\xi_n) f'(x_n), \tag{65}$$
where the weight function $h(\xi_n)$ satisfies
$$h(0) = 1, \quad h'(0) = -2. \tag{66}$$
Equations (65) and (66) include Equation (64) as a special case.
If one replaces $f'(y_n)$ in the first part of Equation (60) by that in Equation (65), it generates an FOIS [17,37,38]:
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \quad x_{n+1} = y_n - \frac{1}{h(\xi_n)} \frac{f(y_n)}{f'(x_n)}. \tag{67}$$
The E.I. of Equation (67) is raised to E.I. = 1.587, which is better than the E.I. = 1.414 of the double Newton method.
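As a concrete instance, taking h(ξ) = 1 − 2ξ of Equation (64) in the scheme (67) yields the following Python sketch, an Ostrowski-type fourth-order method (cf. the LM (34)).

```python
def fois67(f, df, x0, eps=1e-15, max_iter=50):
    """Two-step FOIS (67) with the Hermite weight h(xi) = 1 - 2*xi."""
    x = x0
    for n in range(1, max_iter + 1):
        fx, dfx = f(x), df(x)
        if abs(fx) < eps:
            return x, n - 1
        y = x - fx / dfx                  # Newton step (61)
        fy = f(y)
        xi = fy / fx                      # xi_n of Equation (62)
        x_new = y - fy / ((1.0 - 2.0 * xi) * dfx)   # uses f'(y_n) ~ (1 - 2*xi_n) f'(x_n)
        if abs(x_new - x) < eps:
            return x_new, n
        x = x_new
    return x, max_iter
```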
This fact encourages us to also replace $f'(z_n)$ in Equation (60) by the following three-point interpolation formula:
$$f'(z_n) = g(\xi_n, \eta_n, \gamma_n) f'(x_n), \tag{68}$$
such that the combination of Equations (60), (65) and (68) leads to
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \quad z_n = y_n - \frac{1}{h(\xi_n)} \frac{f(y_n)}{f'(x_n)}, \quad x_{n+1} = z_n - \frac{1}{g(\xi_n, \eta_n, \gamma_n)} \frac{f(z_n)}{f'(x_n)}. \tag{69}$$
With certain conditions on $h(\xi_n)$ and $g(\xi_n, \eta_n, \gamma_n)$, the E.I. of the iterative scheme (69) can be further raised to E.I. = 1.682. These are three-point EOISs.

8. Three-Point Generalized Hermite Interpolations

Using the third-order Hermite polynomial for the three-point $(x_n, y_n, z_n)$ interpolation, Wang and Liu [36] and Petković [37] derived
$$f'(z_n) = 2 f[x_n, z_n] + f[y_n, z_n] - 2 f[x_n, y_n] + (y_n - z_n) f[y_n, x_n, x_n], \tag{70}$$
where
$$f[x_n, y_n] = \frac{f(x_n) - f(y_n)}{x_n - y_n}, \quad f[y_n, x_n, x_n] = \frac{f[y_n, x_n] - f'(x_n)}{y_n - x_n},$$
and $f[x_n, z_n]$ and $f[y_n, z_n]$ are defined in the same fashion.
From the first two parts of Equation (69) and Equation (62), we can derive the following divided differences:
$$f[x_n, y_n] = (1 - \xi_n) f'(x_n), \quad f[y_n, z_n] = h(\xi_n)(1 - \eta_n) f'(x_n), \quad f[x_n, z_n] = \frac{h(\xi_n)(1 - \gamma_n)}{h(\xi_n) + \xi_n} f'(x_n), \quad f[y_n, x_n, x_n] = \frac{\xi_n f'^2(x_n)}{f(x_n)}, \tag{71}$$
$$f[z_n, x_n, x_n] = \frac{f[z_n, x_n] - f'(x_n)}{z_n - x_n} = \frac{[h(\xi_n) \gamma_n + \xi_n] h(\xi_n)}{[h(\xi_n) + \xi_n]^2} \frac{f'^2(x_n)}{f(x_n)}. \tag{72}$$
Using Equations (70)–(72) and through some manipulations, we can derive
$$f'(z_n) = [A(\xi_n) - B(\xi_n) \eta_n - C(\xi_n) \gamma_n] f'(x_n), \tag{73}$$
where
$$A(\xi_n) := 2 \xi_n - 2 + h(\xi_n) + \frac{2 h(\xi_n)}{h(\xi_n) + \xi_n} + \frac{\xi_n^2}{h(\xi_n)}, \quad B(\xi_n) = h(\xi_n), \quad C(\xi_n) := \frac{2 h(\xi_n)}{h(\xi_n) + \xi_n}. \tag{74}$$
Definition 2. 
A three-point generalized Hermite interpolation of $f'(z_n)$ in terms of $f'(x_n)$ and $g(\xi_n, \eta_n, \gamma_n)$ is defined by Equations (73) and (74) as
$$f'(z_n) = g(\xi_n, \eta_n, \gamma_n) f'(x_n) := [A(\xi_n) - B(\xi_n) \eta_n - C(\xi_n) \gamma_n] f'(x_n), \tag{75}$$
where the weight functions $A(\xi_n)$, $B(\xi_n)$ and $C(\xi_n)$ satisfy
$$A(0) = 1, \; A'(0) = -2, \; A''(0) = -2, \; A'''(0) = 0, \quad B(0) = 1, \; B'(0) = -2, \quad C(0) = 2. \tag{76}$$
For the two-point Hermite interpolation function $h(\xi_n) = 1 - 2 \xi_n$ in Equation (64), by Equation (74), we can derive
$$A(\xi_n) = \frac{2 - 4 \xi_n}{1 - \xi_n} + \frac{\xi_n^2}{1 - 2 \xi_n} - 1, \quad B(\xi_n) = 1 - 2 \xi_n, \quad C(\xi_n) = \frac{2 - 4 \xi_n}{1 - \xi_n}. \tag{77}$$
Equation (77) is a special case of Equation (76). Unlike in Equation (74), for the generalized interpolation in Equation (75), $g(\xi_n, \eta_n, \gamma_n) = A(\xi_n) - B(\xi_n) \eta_n - C(\xi_n) \gamma_n$ can be independent of $h(\xi_n)$.
The function $A(\xi)$ is subjected to four conditions, from which it is not easy to obtain $A(\xi)$ directly. We attempt to construct it by a function with merely two conditions.
Lemma 2. 
A function $A(\xi)$ with
$$A(0) = a_0, \quad A'(0) = a_1, \quad A''(0) = a_2, \quad A'''(0) = a_3 \tag{78}$$
can be obtained from
$$A(\xi) = a_0 + a_1 \xi + \xi^2 F(\xi), \tag{79}$$
where
$$F(0) = \frac{a_2}{2}, \quad F'(0) = \frac{a_3}{6}. \tag{80}$$
Proof. 
Inserting $\xi = 0$ into Equation (79), $A(0) = a_0$ follows directly. It follows from Equation (79) that
$$A'(\xi) = a_1 + 2 \xi F(\xi) + \xi^2 F'(\xi), \quad A''(\xi) = 2 F(\xi) + 4 \xi F'(\xi) + \xi^2 F''(\xi), \quad A'''(\xi) = 6 F'(\xi) + 6 \xi F''(\xi) + \xi^2 F'''(\xi),$$
from which, after inserting $\xi = 0$ and using Equation (80), we can derive the last three conditions in Equation (78). □
Taking advantage of Lemma 2, we can replace Equation (76) by
$$A(\xi) = 1 - 2 \xi + \xi^2 F(\xi), \quad F(0) = -1, \; F'(0) = 0, \quad B(0) = 1, \; B'(0) = -2, \quad C(0) = 2.$$
Different interpolation techniques appear in the literature. Using a rational function, Sharma and Sharma [50] derived
$$f'(z_n) = \frac{f[x_n, z_n] f[y_n, z_n]}{f[x_n, y_n]},$$
which in terms of Equation (71) can be written as
$$f'(z_n) = \frac{h^2(\xi_n)(1 - \gamma_n)(1 - \eta_n)}{[h(\xi_n) + \xi_n](1 - \xi_n)} f'(x_n), \quad h(\xi_n) = 1 - 2 \xi_n. \tag{81}$$
Based on the Taylor series expansion,
$$f'(z_n) = f[z_n, y_n] + (z_n - y_n) f[z_n, x_n, x_n] = f[z_n, y_n] + \frac{z_n - y_n}{z_n - x_n} [f[z_n, x_n] - f'(x_n)]$$
was derived by Bi et al. [34], which with the aid of Equations (71) and (72) can be recast to
$$f'(z_n) = \left( h(\xi_n)(1 - \eta_n) - \frac{\xi_n [h(\xi_n) \gamma_n + \xi_n]}{[h(\xi_n) + \xi_n]^2} \right) f'(x_n), \quad h(\xi_n) = \frac{2 - 5 \xi_n}{2 - \xi_n}. \tag{82}$$
Both interpolations (81) and (82) are special cases of the generalized interpolation in Equation (75).
To observe the accuracy of Equation (75) and the other two interpolations (81) and (82), we consider two definite functions:
$$f_1(x) = x^3 + 4 x^2 - 10, \quad f_2(x) = e^{x^2 + 7x - 30} - 1, \tag{83}$$
of which $f_1(r = 1.3652300134) = 0$ and $f_2(r = 3) = 0$. We define
$$\mathrm{RE} := \frac{|f'(z_n) - f'_E(z_n)|}{|f'_E(z_n)|} \tag{84}$$
to be the relative error of the interpolation of $f'(z_n)$, where $f'(z_n)$ and $f'_E(z_n)$ are, respectively, the value calculated from Equation (75) and the exact value.
The following cases with simple weight functions are considered:
$$\begin{aligned}
&(a)\;\; h = 1 - 2 \xi_n, \;\text{and}\; A(\xi_n), B(\xi_n), C(\xi_n) \;\text{by Equation (77)},\\
&(b)\;\; h = 1 - 2 \xi_n - \xi_n^4, \;\text{and}\; A(\xi_n), B(\xi_n), C(\xi_n) \;\text{by Equation (74)},\\
&(c)\;\; h = 1 - 2 \xi_n + \xi_n^3, \;\text{and}\; A(\xi_n), B(\xi_n), C(\xi_n) \;\text{by Equation (74)},\\
&(d)\;\; h = 1 - 2 \xi_n + \xi_n^4, \;\text{and}\; A(\xi_n), B(\xi_n), C(\xi_n) \;\text{by Equation (74)},\\
&(e)\;\; h = 1 - 2 \xi_n + \xi_n^3, \; F(\xi_n) = -1 - \xi_n^2, \; B(\xi_n) = 1 - 2 \xi_n, \; C(\xi_n) = 2 - 2 \xi_n,\\
&(f)\;\; h = 1 - 2 \xi_n, \; F(\xi_n) = -1 + \xi_n^2 + \xi_n^3, \; B(\xi_n) = 1 - 2 \xi_n + \xi_n^2, \; C(\xi_n) = 2.
\end{aligned} \tag{85}$$
Table 10 lists the REs.
For $f_1(x)$, the accuracy of Cases (a)–(d) is much better than that of the others because the interpolant $f_1(x)$ is itself a third-order polynomial; however, for $f_2(x)$, the accuracy of all cases is at the levels of $10^{-2}$ and $10^{-3}$.
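The entries of Table 10 can be checked with a few lines of Python; the sketch below evaluates the RE (84) of Case (a), i.e., h(ξ) = 1 − 2ξ with A, B, C of Equation (77), at x_n = 1.3 for f₁.

```python
def re_case_a(f, df, x):
    """RE (84) of the three-point estimate (75) of f'(z_n) for Case (a)."""
    fx, dfx = f(x), df(x)
    y = x - fx / dfx
    fy = f(y)
    xi = fy / fx
    z = y - fy / ((1.0 - 2.0 * xi) * dfx)       # second step of (69)
    fz = f(z)
    eta, gam = fz / fy, fz / fx                 # eta_n, gamma_n of (62)
    A = (2.0 - 4.0 * xi) / (1.0 - xi) + xi**2 / (1.0 - 2.0 * xi) - 1.0
    B = 1.0 - 2.0 * xi
    C = (2.0 - 4.0 * xi) / (1.0 - xi)           # Equation (77)
    dfz = (A - B * eta - C * gam) * dfx         # Equation (75)
    return abs(dfz - df(z)) / abs(df(z))

f1  = lambda x: x**3 + 4.0 * x**2 - 10.0
df1 = lambda x: 3.0 * x**2 + 8.0 * x
print(re_case_a(f1, df1, 1.3))   # rounding-level error, cf. Table 10
```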

9. Three-Point Eighth-Order Optimal Iterative Schemes

In this section, we combine the two-point and three-point generalized Hermite interpolations to generate some eighth-order optimal iterative schemes (EOIS).
Theorem 5. 
Equation (69) has eighth-order convergence if $h(\xi_n)$ satisfies
$$h(0) = 1, \quad h'(0) = -2, \quad h''(0) = 0, \tag{86}$$
and $A(\xi_n)$, $B(\xi_n)$ and $C(\xi_n)$ in
$$g(\xi_n, \eta_n, \gamma_n) = A(\xi_n) - B(\xi_n) \eta_n - C(\xi_n) \gamma_n \tag{87}$$
satisfy the conditions in Equation (76).
Proof. 
Before giving the proof, we emphasize that for the special case with $h(\xi_n) = 1 - 2 \xi_n$ and $A(\xi_n)$, $B(\xi_n)$ and $C(\xi_n)$ given in Equation (77), Wang and Liu [36] have proven the eighth-order convergence of the iterative scheme (69) and derived the corresponding error equation. To save space, the details are not repeated here, and the error equation is not written out explicitly.
Let $e_n = x_n - r$, $s_n = y_n - r$, $d_n = z_n - r$ and $c_k = f^{(k)}(r)/[k! \, f'(r)]$, $k = 2, \ldots$. As shown in [36],
$$f(x_n) = f'(r)[e_n + c_2 e_n^2 + \cdots + c_8 e_n^8 + O(e_n^9)], \tag{88}$$
$$f(y_n) = f'(r)[s_n + c_2 s_n^2 + c_3 s_n^3 + c_4 s_n^4 + O(e_n^9)], \tag{89}$$
$$f(z_n) = f'(r)[d_n + c_2 d_n^2 + O(e_n^9)], \tag{90}$$
where
$$s_n = c_2 e_n^2 - 2 (c_2^2 - c_3) e_n^3 - (7 c_2 c_3 - 4 c_2^3 - 3 c_4) e_n^4 + \cdots + O(e_n^9), \tag{91}$$
$$d_n = (c_2^3 - c_2 c_3) e_n^4 + \cdots + O(e_n^9). \tag{92}$$
In view of Equations (88)–(92), $(\xi_n, \eta_n, \gamma_n)$ in Equation (62) have the following asymptotic estimations:
$$\xi_n \sim O(e_n), \quad \eta_n \sim O(e_n^2), \quad \gamma_n \sim O(e_n^3). \tag{93}$$
Due to
$$f'(x_n) = f'(r)[1 + 2 c_2 e_n + \cdots + 8 c_8 e_n^7 + O(e_n^8)] \tag{94}$$
and Equation (90), the term $f(z_n)/f'(x_n)$ in Equation (69) has the following asymptotic estimation:
$$\frac{f(z_n)}{f'(x_n)} \sim O(e_n^4).$$
Therefore, we merely need to expand $g$ to the third order by
$$g \approx A(0) + A'(0) \xi_n + \frac{A''(0)}{2} \xi_n^2 + \frac{A'''(0)}{6} \xi_n^3 - [B(0) + B'(0) \xi_n] \eta_n - C(0) \gamma_n + O(e_n^4), \tag{95}$$
where Equation (93) was taken into account.
Inserting $d_n = z_n - r$ and Equations (94) and (95) into the last part of Equation (69) yields
$$e_{n+1} = d_n - \frac{1}{g} \frac{f(z_n)}{f'(x_n)} \approx d_n - \frac{O(e_n^4)}{A(0) + A'(0) \xi_n + \frac{A''(0)}{2} \xi_n^2 + \frac{A'''(0)}{6} \xi_n^3 - [B(0) + B'(0) \xi_n] \eta_n - C(0) \gamma_n} \sim O(e_n^8), \tag{96}$$
since $h(\xi_n)$ satisfies Equation (86); $A(\xi_n)$, $B(\xi_n)$ and $C(\xi_n)$ take the same values at $\xi_n = 0$, listed in Equation (76), as those in Equation (77); and, according to [36], the coefficients preceding $e_n^4$ to $e_n^7$ are zeros for the iterative scheme (69). Equation (96) indicates that Equation (69) has eighth-order convergence. Because we do not derive the error equation in an explicit form, many processes were omitted. □
Although the details of the derivation of the corresponding error equation are not given here, we give two examples to verify the performance of the iterative scheme (69), in which $h(\xi_n)$ and $g(\xi_n, \eta_n, \gamma_n)$ are independent, unlike in [36], as shown by Case (a) in Equation (85), wherein $g$ is related to $h$ by Equations (75) and (74). We abbreviate the method in [36] as the WLM. For the purpose of comparison, the WLM is written as follows:
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \quad z_n = y_n - \frac{f(y_n)}{2 f[x_n, y_n] - f'(x_n)}, \quad x_{n+1} = z_n - \frac{f(z_n)}{2 f[x_n, z_n] + f[y_n, z_n] - 2 f[x_n, y_n] + (y_n - z_n) f[y_n, x_n, x_n]}, \tag{97}$$
$$f[x_n, y_n] = \frac{f(x_n) - f(y_n)}{x_n - y_n}, \quad f[x_n, z_n] = \frac{f(x_n) - f(z_n)}{x_n - z_n}, \quad f[y_n, z_n] = \frac{f(y_n) - f(z_n)}{y_n - z_n}, \quad f[y_n, x_n, x_n] = \frac{f[x_n, y_n] - f'(x_n)}{y_n - x_n}. \tag{98}$$
This iterative scheme was proved in [36] to be of eighth-order convergence, which is optimal because there are four function operations on $[f(x_n), f'(x_n), f(y_n), f(z_n)]$. Compared to Equation (69), which is also of eighth-order convergence with the same function operations on $[f(x_n), f'(x_n), f(y_n), f(z_n)]$, the two methods are different in their construction techniques.
The iterative scheme (69), which involves $h(\xi_n)$ and $g(\xi_n, \eta_n, \gamma_n)$, is definitely more general than that in Equations (97) and (98). The theoretical basis of Equation (69) is the generalization of the two-point and three-point Hermite interpolation methods.
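A generic Python sketch of the three-step EOIS (69) is given below; h, A, B and C are passed as functions, so the weights of Cases (a)–(f), the SSM and the BRWM can all be plugged in. The usage lines show Case (a), i.e., Equation (77).

```python
def eois69(f, df, x0, h, A, B, C, eps=1e-15, max_iter=50):
    """Three-step EOIS (69) with weight functions h(xi) and
    g(xi, eta, gamma) = A(xi) - B(xi)*eta - C(xi)*gamma."""
    x = x0
    for n in range(1, max_iter + 1):
        fx, dfx = f(x), df(x)
        if abs(fx) < eps:
            return x, n - 1
        y = x - fx / dfx
        fy = f(y)
        if fy == 0.0:
            return y, n
        xi = fy / fx
        z = y - fy / (h(xi) * dfx)
        fz = f(z)
        if fz == 0.0:
            return z, n
        eta, gam = fz / fy, fz / fx
        g = A(xi) - B(xi) * eta - C(xi) * gam
        x_new = z - fz / (g * dfx)
        if abs(x_new - x) < eps:
            return x_new, n
        x = x_new
    return x, max_iter

# Case (a): h(xi) = 1 - 2*xi with A, B, C of Equation (77)
h = lambda t: 1.0 - 2.0 * t
A = lambda t: (2.0 - 4.0 * t) / (1.0 - t) + t**2 / (1.0 - 2.0 * t) - 1.0
B = lambda t: 1.0 - 2.0 * t
C = lambda t: (2.0 - 4.0 * t) / (1.0 - t)
print(eois69(lambda x: x**3 + 4.0 * x**2 - 10.0,
             lambda x: 3.0 * x**2 + 8.0 * x, 1.0, h, A, B, C))
```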
Substituting the functions in Cases (a)–(f) into the iterative scheme (69), we obtain different algorithms; in particular, using the functions in Equations (81) and (82), we recover the iterative schemes developed in [50,34], abbreviated as the SSM and BRWM, respectively, in Table 11. The convergence criteria are $|x_{n+1} - x_n| < 10^{-15}$ and $|f(x_n)| < 10^{-15}$. It can be seen that these iterative schemes have the same performance.

10. Conclusions

We have derived a simple iterative scheme to solve nonlinear equations from a two-dimensional approach, with two constant parameters involved. Through the convergence analysis, the parameters were constrained by a derived inequality, which resulted in a third-order convergent iterative scheme. The presented method is derivative-free, and the theoretically and numerically proven order of convergence is three. We proved that the iterative scheme is a variant of Newton's method improved from quadratic convergence to cubic convergence. Numerical results revealed that the new method can be of practical interest for finding the solution quickly; at each iteration, it merely requires one evaluation of the given function. The proposed fractional iterative scheme was combined with the generalized quadrature methods to develop a class of fourth-order optimal iterative schemes. Examples were given to show that the COC is close to four. We employed a finite difference technique on the data at three points near the solution to estimate the optimal values of the two parameters, whose efficiency was confirmed by numerical tests. In terms of weight functions with certain constraints, two new generalized Hermite interpolation techniques were developed for the transformations between the derivatives at two points and at three points. A modification of the triple Newton method with the two generalized Hermite interpolation formulas can realize the eighth-order optimal iterative scheme, whose E.I. = 1.682 is better than the E.I. = 1.414 of the triple Newton method. Two examples were used for testing the accuracy of the generalized Hermite interpolations and the efficiency of the proposed eighth-order optimal iterative schemes. In summary, the novelty points of this paper are as follows:
  • A two-dimensional approach to the modification of the Newton method is developed.
  • The resulting fractional iterative scheme involving two parameters is of the one-step type and also of third-order convergence, which saves much computation of function per iteration.
  • A new fourth-order optimal iterative scheme was constructed.
  • A new three-point generalized Hermite interpolation was constructed, which is quite general.
  • A new eighth-order optimal iterative scheme was constructed.

Author Contributions

Methodology, C.-S.L. and C.-W.C.; Software, C.-S.L., E.R.E.-Z. and C.-W.C.; Validation, C.-S.L. and C.-W.C.; Formal analysis, C.-S.L. and C.-W.C.; Investigation, C.-S.L., E.R.E.-Z. and C.-W.C.; Resources, E.R.E.-Z. and C.-W.C.; Data curation, C.-S.L.; Writing—original draft, C.-S.L.; Writing—review & editing, C.-W.C.; Visualization, C.-S.L., E.R.E.-Z. and C.-W.C.; Supervision, C.-S.L. and C.-W.C.; Project administration, C.-W.C.; Funding acquisition and Resources, E.R.E.-Z. and C.-W.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was financially supported by the National United University [grant number: 111I1206-8] and the National Science and Technology Council [grant number: NSTC 112-2221-E-239-022]. This study was also supported via funding from Prince Sattam bin Abdulaziz University, project number PSAU/2023/R/1445.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BRWM: Bi, Ren and Wu method
COC: computed order of convergence
DFNM: derivative-free Newton method
E.I.: efficiency index
EOIS: eighth-order optimal iterative scheme
FDFNM: first derivative-free Newton method
FOIS: fourth-order optimal iterative scheme
HM: Halley method
LM: Li method
NI: number of iterations
NM: Newton method
RE: relative error
SDFNM: second derivative-free Newton method
SSM: Sharma and Sharma method
WLM: Wang and Liu method

References

  1. Chun, C.; Ham, Y. Some fourth-order modifications of Newton’s method. Appl. Math. Comput. 2008, 197, 654–658. [Google Scholar] [CrossRef]
  2. Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
  3. Noor, M.A.; Noor, K.I.; Al-Said, E.; Waseem, M. Some new iterative methods for nonlinear equations. Math. Probl. Eng. 2010, 2010, 198943. [Google Scholar] [CrossRef]
  4. Morlando, F. A class of two-step Newton’s methods with accelerated third-order convergence. Gen. Math. Notes 2015, 29, 17–26. [Google Scholar]
  5. Thukral, R. New modification of Newton method with third-order convergence for solving nonlinear equations of type f(0) = 0. Am. J. Comput. Appl. Math. 2016, 6, 14–18. [Google Scholar]
  6. Saqib, M.; Iqbal, M. Some multi-step iterative methods for solving nonlinear equations. Open J. Math. Sci. 2017, 1, 25–33. [Google Scholar] [CrossRef]
  7. Qureshi, U.K. A new accelerated third-order two-step iterative method for solving nonlinear equations. Math. Theo. Model. 2018, 8, 64–68. [Google Scholar]
  8. Ali, F.; Aslam, W.; Ali, K.; Anwar, M.A.; Nadeem, A. New family of iterative methods for solving nonlinear models. Discr. Dyn. Nat. Soc. 2018, 2018, 9619680. [Google Scholar] [CrossRef]
  9. Homeier, H.H.H. On Newton-type methods with cubic convergence. J. Comput. Appl. Math. 2005, 176, 425–432. [Google Scholar] [CrossRef]
  10. Ozban, A.Y. Some new variants of Newton’s method. Appl. Math. Lett. 2004, 17, 677–682. [Google Scholar] [CrossRef]
  11. Lukic, T.; Ralevic, N.M. Geometric mean Newton’s method for simple and multiple roots. Appl. Math. Lett. 2008, 21, 30–36. [Google Scholar] [CrossRef]
  12. Ababneh, O.Y. New Newton’s method with third-order convergence for solving nonlinear equations. World Acad. Sci. Eng. Tech. 2012, 6, 1269–1271. [Google Scholar]
  13. Abdul-Hassan, N.Y. Two new predictor-corrector iterative methods with third- and ninth-order convergence for solving nonlinear equations. Math. Theo. Model. 2016, 6, 44–56. [Google Scholar]
  14. Kou, J.; Li, Y.; Wang, X. Third-order modification of Newton's method. J. Comput. Appl. Math. 2007, 205, 1–5. [Google Scholar]
  15. Chun, C. On the construction of iterative methods with at least cubic convergence. Appl. Math. Comput. 2007, 189, 1384–1392. [Google Scholar] [CrossRef]
  16. Verma, K.L. On the centroidal mean Newton’s method for simple and multiple roots of nonlinear equations. Int. J. Comput. Sci. Math. 2016, 7, 126–143. [Google Scholar] [CrossRef]
  17. Liu, C.S.; Li, T.L. A new family of fourth-order optimal iterative schemes and remark on Kung and Traub’s conjecture. J. Math. 2021, 2021, 5516694. [Google Scholar] [CrossRef]
  18. Li, S. Fourth-order iterative method without calculating the higher derivatives for nonlinear equation. J. Algor. Comput. Tech. 2019, 13, 1–8. [Google Scholar] [CrossRef]
  19. Chun, C. Some fourth-order iterative methods for solving nonlinear equations. Appl. Math. Comput. 2008, 195, 454–459. [Google Scholar] [CrossRef]
  20. King, R. A family of fourth-order iterative methods for nonlinear equations. SIAM J. Numer. Anal. 1973, 10, 876–879. [Google Scholar] [CrossRef]
  21. Chun, C. Certain improvements of Chebyshev-Halley methods with accelerated fourth-order convergence. Appl. Math. Comput. 2007, 189, 597–601. [Google Scholar] [CrossRef]
  22. Kou, J.; Li, Y.; Wang, X. Fourth-order iterative methods free from second derivative. Appl. Math. Comput. 2007, 184, 880–885. [Google Scholar]
  23. Chun, C. Some variants of King’s fourth-order family of methods for nonlinear equations. Appl. Math. Comput. 2007, 190, 57–62. [Google Scholar] [CrossRef]
  24. Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1960. [Google Scholar]
  25. Maheshwari, A.K. A fourth order iterative method for solving nonlinear equations. Appl. Math. Comput. 2009, 211, 383–391. [Google Scholar] [CrossRef]
  26. Ghanbari, B. A new general fourth-order family of methods for finding simple roots of nonlinear equations. J. King Saud Univ. Sci. 2011, 23, 395–398. [Google Scholar] [CrossRef]
  27. Khattri, S.K.; Noor, M.A.; Al-Said, E. Unifying fourth-order family of iterative methods. Appl. Math. Lett. 2011, 24, 1295–1300. [Google Scholar] [CrossRef]
  28. Kumar, S.; Kanwar, V.; Singh, S. Modified efficient families of two and three-step predictor-corrector iterative methods for solving nonlinear equations. Appl. Math. 2010, 1, 153–158. [Google Scholar] [CrossRef]
  29. Chicharro, F.I.; Cordero, A.; Garrido, N.; Torregrosa, J.R. Wide stability in a new family of optimal fourth-order iterative methods. Comput. Math. Meth. 2019, 1, e1023. [Google Scholar] [CrossRef]
  30. Liu, D.; Liu, C.S. Two-point generalized Hermite interpolation: Double-weight function and functional recursion methods for solving nonlinear equations. Math. Comput. Simul. 2022, 193, 317–330. [Google Scholar] [CrossRef]
  31. Cordero, A.; Hueso, J.L.; Martinez, E.; Torregrosa, J.R. New modifications of Potra-Ptak’s method with optimal fourth and eighth orders of convergence. J. Comput. Appl. Math. 2010, 234, 2969–2976. [Google Scholar] [CrossRef]
  32. Chun, C.; Ham, Y. Some sixth-order variants of Ostrowski root-finding methods. Appl. Math. Comput. 2007, 193, 389–394. [Google Scholar] [CrossRef]
  33. Kou, J.; Li, Y.; Wang, X. Some variants of Ostrowski's method with seventh-order convergence. J. Comput. Appl. Math. 2007, 209, 153–159. [Google Scholar] [CrossRef]
  34. Bi, W.; Ren, H.; Wu, Q. Three-step iterative methods with eighth-order convergence for solving nonlinear equations. J. Comput. Appl. Math. 2009, 225, 105–112. [Google Scholar] [CrossRef]
  35. Bi, W.; Wu, Q.; Ren, H. A new family of eighth-order iterative methods for solving nonlinear equations. Appl. Math. Comput. 2009, 214, 236–245. [Google Scholar] [CrossRef]
  36. Wang, X.; Liu, L. Modified Ostrowski’s method with eighth-order convergence and high efficiency index. Appl. Math. Lett. 2010, 23, 549–554. [Google Scholar] [CrossRef]
  37. Petković, M.S. On optimal multipoint methods for solving nonlinear equations. Novi Sad J. Math. 2009, 39, 123–130. [Google Scholar]
  38. Petković, M.S.; Petković, L.D. Families of optimal multipoint methods for solving nonlinear equations: A survey. Appl. Anal. Discr. Math. 2010, 4, 1–22. [Google Scholar]
  39. Neta, B.; Petković, M.S. Construction of optimal order nonlinear solvers using inverse interpolation. Appl. Math. Comput. 2010, 217, 2448–2455. [Google Scholar] [CrossRef]
  40. Soleymani, F.; Shateyi, S.; Salmani, H. Computing simple roots by an optimal sixteenth-order class. J. Appl. Math. 2012, 2012, 958020. [Google Scholar] [CrossRef]
  41. Matinfar, M.; Aminzadeh, M. Three-step iterative methods with eighth-order convergence for solving nonlinear equations. J. Interpol. Approx. Sci. Comput. 2013, 2013, jiasc-00013. [Google Scholar] [CrossRef]
  42. Zafar, F.; Yasmin, N.; Akram, S.; Junjua, M.D. A general class of derivative-free optimal root finding methods based on rational interpolation. Sci. World J. 2015, 2015, 935260. [Google Scholar] [CrossRef] [PubMed]
  43. Junjua, M.D.; Zafar, F.; Yasmin, N. Optimal derivative-free root finding methods based on inverse interpolation. Mathematics 2019, 7, 164. [Google Scholar] [CrossRef]
  44. Zhanlav, T.; Chuluunbaatar, O.; Ulziibayar, V. Generating function method for constructing new iterations. Appl. Math. Comput. 2017, 315, 414–423. [Google Scholar] [CrossRef]
  45. Liu, C.S. A new splitting technique for solving nonlinear equations by an iterative scheme. J. Math. Res. 2020, 12, 40–48. [Google Scholar] [CrossRef]
  46. Liu, C.S.; Hong, H.K.; Lee, T.L. A splitting method to solve a single nonlinear equation with derivative-free iterative schemes. Math. Comput. Simul. 2021, 190, 837–847. [Google Scholar] [CrossRef]
  47. Halley, E. A new exact and easy method for finding the roots of equations generally and without any previous reduction. Philos. Trans. Roy. Soc. London 1694, 18, 136–147. [Google Scholar]
  48. Liu, C.S.; Li, T.L. A new family of generalized quadrature methods for solving nonlinear equations. Asian-Eur. J. Math. 2022, 15, 2250044. [Google Scholar] [CrossRef]
  49. Cordero, A.; Franceschi, J.; Torregrosa, J.R.; Zagati, A.C. A convex combination approach for mean-based variants of Newton’s method. Symmetry 2019, 11, 1106. [Google Scholar] [CrossRef]
  50. Sharma, J.R.; Sharma, R. A new family of modified Ostrowski’s methods with accelerated eighth order convergence. Numer. Algorithms 2010, 54, 445–458. [Google Scholar] [CrossRef]
Table 1. For Equation (35), showing the monotonically decreasing sequence of f(x_n) with respect to the number n of steps and the COC for different cases, and listing the number of iterations (NI) of the present method. The last column is the NI of the LM.

Case | f(x₀) | f(x₁) | f(x₂) | f(x₃) | f(x₄) | f(x₅) | COC | NI | NI (LM)
(a) | −18 | −1.9558 | 1.3362 × 10⁻² | 5.4058 × 10⁻⁹ | 0 | × | 3.01 | 4 | 6
(b) | 5 | 2.7588 | 0.1148 | 4.1074 × 10⁻⁶ | 0 | × | 3.03 | 4 | 6
(c) | −0.1240 | 6.2336 × 10⁻³ | 5.5696 × 10⁻⁷ | 1.7764 × 10⁻¹⁵ | 0 | × | 2.94 | 4 | 4
(d) | 6.75 | 2.0947 | 0.3115 | 2.6840 × 10⁻⁴ | 1.9952 × 10⁻⁸ | 0 | 2.98 | 5 | 5
Table 2. For f_i(x) = 0, i = 1, …, 4, listing x₀, a and b and the NIs of the present method. The last two columns are the NIs of the NM and HM.

f = 0 | x₀ | a | b | NI (present) | NI (NM) | NI (HM)
f₁ = 0 | 4 | 13 | 2 | 7 | 20 | 11
f₂ = 0 | −0.3 | 16.5134 | 0.4965 | 5 | 14 | 2
f₃ = 0 | −2 | 20.307 | −1 | 6 | 9 | 6
f₄ = 0 | 3.2 | 6 | 0.2 | 4 | × | ×
Table 3. For Equation (35), listing x₀, the NIs and the COCs of the FOIS (42) for the three solutions.

r | x₀ | NI | COC
−2 | −2.5 | 4 | 3.828
1 | 0.8 | 4 | 3.694
1.5 | 1.4 | 4 | 4.278
Table 4. For Equation (36), listing x₀, the NIs and the COCs of the FOIS (47) for the solution r = 3.

x₀ | 3.1 | 3.5 | 4 | 5
NI | 5 | 8 | 12 | 21
COC | 3.325 | 3.811 | 3.741 | 3.339
Table 5. For Equation (35), listing x₀, x̃₂, a and b, the NIs and the COCs of the present method for the three solutions r = −2, 1, 1.5.

r | x₀ | x̃₂ | a | b | NI | COC
−2 | −2.5 | −1.5 | 10.75 | −0.60465 | 11 | 1.0059
1 | 0.8 | 1.2 | −1.46 | −1.71233 | 11 | 1.01383
1.5 | 1.4 | 1.6 | 1.76 | 2.27273 | 8 | 1.00138
Table 6. For f_i(x) = 0, i = 1, …, 4, listing x₀, x̃₂, a and b, the NIs and the COCs of the present method.

i | x₀ | x̃₂ | a | b | NI | COC
1 | 2.9 | 3.07 | 13.0621 | 5.96248 | 9 | 0.978064
2 | 0.9 | 1.3 | 11.67 | 0.282776 | 9 | 0.9819139
3 | −1.25 | −1.2 | 21.43125 | −1.5275414 | 11 | 1.009
4 | 3.2 | 3.3 | 6.5 | 0.1538462 | 14 | 1.11766
Table 7. For Equation (35) with the solution r = −2 and the initial guess x₀ = −2.5, listing ã, b̃, the NIs and the COCs of the present method for different values of c.

c | ã | b̃ | NI | COC
0.1 | 10.51 | −0.61845861 | 7 | 0.999994
0.05 | 10.5025 | −0.61890026 | 6 | 1.084194
0.02 | 10.5004 | −0.61902404 | 6 | 1.233496
0.011 | 10.50012 | −0.61904049 | 5 | 2.90119
0.01 | 10.5001 | −0.61904172 | 5 | 2.90404
Table 8. For f_i(x) = 0, i = 1, …, 4, listing x₀, x̃₂, A and B, the NIs and the COCs of the FDFNM.

i | x₀ | x̃₂ | A | B | NI | COC
1 | 2.9 | 3.07 | 13.0621 | 11.92497 | 10 | 1.00286
2 | 0.9 | 1.3 | 11.67 | 0.565553 | 9 | 1.000512
3 | −1.25 | −1.2 | 21.43125 | −3.055083 | 12 | 1.05017
4 | 3.2 | 3.3 | 6.5 | 0.3076923 | 14 | 1.00584
Table 9. For f_i(x) = 0, i = 1, …, 4, listing x₀, x̃₂, C and D, the NIs and the COCs of the SDFNM.

i | x₀ | x̃₂ | C | D | NI | COC
1 | 2.9 | 3.07 | 13.0621 | 155.8141 | 8 | 0.9984
2 | 0.9 | 1.3 | 11.67 | 6.6 | 9 | 1.0022
3 | −1.25 | −1.2 | 21.43125 | −65.4743 | 12 | 1.0311
4 | 3.2 | 3.3 | 6.5 | 2 | 14 | 0.98004
Table 10. Listing the REs for the functions f₁(x) and f₂(x) in Equation (83), which are computed at x_n = 1.3 and x_n = 3.05, respectively. Each number e(−k) means 10⁻ᵏ.

| (a) | (b) | (c) | (d) | (e) | (f) | (81) | (82)
f₁ | 3.6e(−14) | 2.4e(−14) | 2.3e(−14) | 1.1e(−14) | 3.3e(−5) | 3.8e(−6) | 2.6e(−5) | 2.4e(−5)
f₂ | 4.3e(−3) | 4.3e(−3) | 4.2e(−3) | 4.2e(−3) | 4.3e(−2) | 1.6e(−2) | 9.1e(−3) | 2.5e(−2)
Table 11. For the functions f₁(x) = x³ + 4x² − 10 with x₀ = 1 and f₂(x) = e^{x² + 7x − 30} − 1 with x₀ = 3.5, comparing the number of iterations.

| WLM | (b) | (c) | (d) | (e) | (f) | SSM | BRWM
f₁ = 0 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3
f₂ = 0 | 6 | 6 | 6 | 6 | 6 | 6 | 6 | 5