Article

New Memory-Updating Methods in Two-Step Newton’s Variants for Solving Nonlinear Equations with High Efficiency Index

1 Center of Excellence for Ocean Engineering, National Taiwan Ocean University, Keelung 202301, Taiwan
2 Department of Mechanical Engineering, National United University, Miaoli 360302, Taiwan
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(4), 581; https://doi.org/10.3390/math12040581
Submission received: 15 January 2024 / Revised: 8 February 2024 / Accepted: 13 February 2024 / Published: 15 February 2024
(This article belongs to the Special Issue Numerical Analysis in Computational Mathematics)

Abstract

In this paper, we iteratively solve a scalar nonlinear equation f(x) = 0, where f ∈ C(I, ℝ), x ∈ I ⊂ ℝ, and I includes at least one real root r. Three novel two-step iterative schemes equipped with memory-updating methods are developed; they are variants of the fixed-point Newton method. A triple-data interpolation is carried out by the second-degree Newton polynomial, which is used to update the values of f'(r) and f''(r). The relaxation factor in the supplementary variable is accelerated by imposing an extra condition on the interpolant. The new memory method (NMM) can raise the efficiency index (E.I.) significantly. We apply the NMM to five existing fourth-order iterative methods, and the computed order of convergence (COC) and E.I. are evaluated by numerical tests. When the relaxation-factor acceleration technique is combined with the modified Džunić memory method, the value of E.I. is much larger than that predicted by Kung and Traub [J. Assoc. Comput. Mach. 1974, 21, 643–651] for iterative methods without memory.

1. Introduction

Three novel two-step iterative schemes with memory will be proposed to solve a given scalar nonlinear equation:
$$f(x)=0,\quad f\in C(I,\mathbb{R}),\quad x\in I\subset\mathbb{R}, \tag{1}$$
where I is an interval that includes a real solution of f(x) = 0.
For iteratively solving Equation (1),
$$x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)},\quad n=0,1,\ldots \tag{2}$$
is the famous Newton method (NM). Obviously, the NM requires f(x) to be differentiable; nevertheless, owing to its simplicity, it remains a popular iterative method for solving Equation (1).
From Equation (2), we can define the following Newton iteration function:
$$N(x)=x-\frac{f(x)}{f'(x)}. \tag{3}$$
It follows that
$$N'(x)=\frac{f(x)f''(x)}{f'(x)^2}. \tag{4}$$
The critical points of the mapping N(x) have two sources: one is a root of f(x) = 0 and the other is a zero of f''(x) = 0, which also causes N'(x) = 0. When the iteration tends to the zero point of f''(x) = 0, the NM no longer converges to the real solution of f(x) = 0. For the function f(x) = x^3 − 2x + 2 = 0, as an example, we have
$$N(x)=x-\frac{f(x)}{f'(x)}=\frac{2x^3-2}{3x^2-2}. \tag{5}$$
It follows that
$$N(0)=1,\quad N(1)=0. \tag{6}$$
Because f''(0) = 0 and N(1) = 0, if the initial point x_0 is located near 0 or 1, the NM does not converge to the true solution r = −1.769292354238631 of x^3 − 2x + 2 = 0; instead, the iterates are trapped by the attracting two-cycle {0, 1}.
In summary, the NM possesses some drawbacks, such as sensitivity to the initial guess, division by a nearly zero denominator, and nonconvergence near critical values that are not roots of f(x) = 0. In order to overcome these difficulties, we propose the following perturbation of the Newton method. Mathematically, Equation (1) can be written as
$$xf'(x)-\alpha xf(x)=xf'(x)-(1+\alpha x)f(x), \tag{7}$$
where α = f''(r)/(2f'(r)) is determined in [1]. By canceling xf'(x) and αxf(x) on both sides of
$$xf'(x)-\alpha xf(x)=xf'(x)-\alpha xf(x)-f(x), \tag{8}$$
we can achieve
$$f(x)=0,$$
which is equivalent to Equation (1).
From Equation (8),
$$[f'(x_n)-\alpha f(x_n)]x_{n+1}=x_nf'(x_n)-\alpha x_nf(x_n)-f(x_n), \tag{9}$$
which, upon dividing both sides by the factor f'(x_n) − αf(x_n) preceding x_{n+1}, yields
$$x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)-\alpha f(x_n)}. \tag{10}$$
The iterative scheme (10) was developed as a one-step continuation Newton-like method [2], and it was used as the first step in the multistep iterative schemes in [3,4,5]. Some dynamical analyses of Equation (10) can be found in [6,7]. When f'(x_n) = 0, Equation (10) is still applicable, whereas Equation (2) fails. As pointed out by Wu [8], Equation (10) has some merits over the NM.
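The following minimal Python sketch (not part of the original paper) illustrates both behaviors numerically; the starting points and the value α = −0.7, close to the optimal f''(r)/(2f'(r)) ≈ −0.72 for the real root, are illustrative choices.

```python
# Minimal sketch (not from the paper): for f(x) = x^3 - 2x + 2 the classical NM (2),
# started near x = 0, is trapped by the attracting two-cycle {0, 1}, whereas
# scheme (10) with an illustrative alpha ~ -0.7 reaches the real root
# r ~ -1.769292354238631.

def f(x):
    return x**3 - 2.0 * x + 2.0

def df(x):
    return 3.0 * x**2 - 2.0

def newton(x0, iters=30):
    x = x0
    for _ in range(iters):
        x = x - f(x) / df(x)
    return x

def perturbed_newton(x0, alpha, iters=30):
    x = x0
    for _ in range(iters):
        x = x - f(x) / (df(x) - alpha * f(x))
    return x

print(newton(0.1))                                 # stays on the cycle {0, 1}, far from the root
print(perturbed_newton((2.0 / 3.0) ** 0.5, -0.7))  # f'(x0) = 0 here, yet scheme (10) gives ~ -1.7693
```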
The following second-order boundary value problem (BVP) demonstrates the usefulness of Equation (1):
$$u''(y)-3u'(y)+2u(y)=0,\quad y\in(0,1), \tag{11}$$
$$u(0)=1,\quad u(1)=0. \tag{12}$$
Upon letting
$$u(y)=v(y)-(x+1)y+1, \tag{13}$$
and using v(0) = 0 and v(1) = x to render u(0) = 1 and u(1) = 0 automatically, we can transform Equations (11) and (12) into an initial value problem for the following ordinary differential equation (ODE):
$$v''(y)-3v'(y)+2v(y)-2(x+1)y+3(x+1)+2=0, \tag{14}$$
$$v(0)=0,\quad v'(0)=0; \tag{15}$$
the solution is endowed with an unknown value x, given by
$$v(y)=(3+x)e^y-(x+2)e^{2y}+(x+1)y-1. \tag{16}$$
If a real value exists for x, then we have a real solution for v(y). Imposing v(1) = x,
$$(3+x)e-(x+2)e^2+x+1-1=x, \tag{17}$$
results in the equation f(x) = (3 + x)e − (x + 2)e^2 = 0 for determining x; then, by Equations (16) and (13), the exact solution u(y) can be obtained.
Sometimes f(x) is obtained from a nonlinear ODE, rather than from the linear ODE in Equation (11). We further consider a nonlinear BVP:
$$u''(y)=\frac{3}{2}u^2(y),\quad y\in(0,1), \tag{18}$$
$$u(0)=4,\quad u(1)=1. \tag{19}$$
One assumes an initial value u'(0) = x, with x unknown, and integrates Equation (18) with the initial conditions u(0) = 4 and u'(0) = x. The nonlinear equation f(x) = u(1, x) − 1 = 0 for satisfying the right-end boundary condition in Equation (19) is then derived. Since u(1, x) is an implicit function of x, the function f(x) cannot be written out explicitly. In this nonlinear problem, when we apply the NM to solve f(x) = u(1, x) − 1 = 0, we encounter a difficulty in calculating f'(x). Recently, Liu et al. [9] proposed a single-step memory-dependent method to solve f(x) = 0 by a Lie-symmetry formulation of Equation (18).
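A brief sketch (not from the paper) of how the shooting residual f(x) = u(1, x) − 1 can be evaluated and solved: the choices of a classical RK4 integrator, a fixed step size, bisection as the derivative-free root finder, and the bracket [−10, −5] are all illustrative assumptions.

```python
# Sketch of the shooting formulation for u'' = 1.5 u^2, u(0) = 4, u(1) = 1.
# Assumptions: fixed-step RK4 integration and bisection on the residual f(x).

def integrate(x, steps=200):
    """Integrate the IVP u(0) = 4, u'(0) = x with classical RK4 and return u(1)."""
    h = 1.0 / steps
    u, v = 4.0, x                      # v = u'

    def rhs(u, v):
        return v, 1.5 * u**2

    for _ in range(steps):
        k1u, k1v = rhs(u, v)
        k2u, k2v = rhs(u + 0.5 * h * k1u, v + 0.5 * h * k1v)
        k3u, k3v = rhs(u + 0.5 * h * k2u, v + 0.5 * h * k2v)
        k4u, k4v = rhs(u + h * k3u, v + h * k3v)
        u += h * (k1u + 2 * k2u + 2 * k3u + k4u) / 6.0
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
    return u

def f(x):
    return integrate(x) - 1.0          # residual of the right-end condition u(1) = 1

# f(x) is only available through the integrator, so a derivative-free solver is used.
# The closed-form solution u(y) = 4/(1 + y)^2 has u'(0) = -8, so [-10, -5] brackets a root.
a, b = -10.0, -5.0
for _ in range(60):
    c = 0.5 * (a + b)
    if f(a) * f(c) <= 0.0:
        b = c
    else:
        a = c
print(0.5 * (a + b))                   # approximate initial slope u'(0), close to -8
```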
Consider the following one [10]:
$$\dot{Q}(t)=k_e\dot{q}(t)-\frac{k_e x}{Q_0}Q(t), \tag{20}$$
$$\sqrt{(Q_0-Q(t))^2+x^2}-(Q_0-Q(t))-x=0, \tag{21}$$
which simulates the time-varying relation between the stress Q and the strain q of an elastic–perfectly plastic material. In the above, k_e is the elastic modulus and Q_0 is the yield stress of the material. We need to solve the nonlinear scalar equation f(x) = √((Q_0 − Q)^2 + x^2) − (Q_0 − Q) − x = 0 to determine x, while Q is governed by a first-order ODE coupled to x in Equation (20). The difficulty of using the NM to solve Equation (21) is that the transition from the elastic phase x = 0 to the plastic phase x > 0 is not smooth.
Many engineering problems and adapted mathematical methods have been proposed for solving nonlinear equations, e.g., a weighted density functional theory for an inhomogeneous 12-6 Lennard–Jones fluid and the Euler–Lagrange equation derived from the density functional theory of inhomogeneous fluids [11], the governing mathematical equations defining the physical features of the first-grade viscoelastic nanofluid flow and heat transfer models [12], and a specialized nonlinear Fredholm integral equation in the turbo-reactors industry [13].
If one attempts to obtain an approximate analytical solution of a nonlinear BVP, the functional iteration method may be a useful tool. A conventional functional iteration method is the Picard iteration method; however, its major disadvantage is slow convergence. In order to improve the convergence property, He [14,15] proposed the variational iteration method, which is a modification of the Picard iteration method for second-order nonlinear initial value problems. Recently, Wang et al. [16] developed an accurate predictor–corrector Picard iteration method for solving nonlinear problems. For the Newton–Kurchatov method for solving nonlinear equations, Argyros et al. [17] carried out a semilocal analysis and derived weaker sufficient semilocal convergence criteria, and Argyros and Shakhno [18] studied the local convergence for Banach space valued equations.
The drawbacks and limitations of the NM mentioned above have motivated many studies, continuing to the present, that modify the Newton method, such as the Adomian decomposition method [19], the decomposition method [20], the arithmetic mean quadrature method [21], the contra-harmonic mean and quadrature method [22], power-means variants [23], the modified homotopy perturbation method [24], the generating function method [25], a perturbation of Newton's method [26], and the variants of Bawazir's iterative methods [27].
To improve the low-order convergence of one-step iterative schemes and to enhance the convergence order, many multistep iterative schemes have been developed; one can refer to [28,29] for extensive discussions of multistep iterative methods. According to the conjecture of Kung and Traub [30], the upper bound of the efficiency index (E.I.) for an optimal iterative scheme with m evaluations of functions is E.I. = 2^{1−1/m} < 2. For m = 2, the NM is an optimal iterative scheme with E.I. = 1.414. With m = 3, the Halley method is not optimal, having the lower value E.I. = 1.44225. The conjecture of Kung and Traub [30] applies only to integer-order convergence schemes. The computed order of convergence (COC) proposed in [21] can be adopted to evaluate the convergence order of an iterative scheme. In this paper, we extend the Newton method by the idea of perturbations, which involve some optimal parameters determined by the convergence analysis. The COC of these two-step iterative schemes is larger than p = 2^{m−1}.
Liu et al. [31] verified that the following iterative scheme:
$$x_{n+1}=x_n-\frac{f(x_n)}{c+df(x_n)} \tag{22}$$
is of third-order convergence if c = f'(r) and d = f''(r)/(2f'(r)). By using the accelerating parameters, we can speed up the convergence. A more detailed analysis of the iterative scheme (22) can be found in [1]. Equation (22) was addressed by Liu [32] from a two-dimensional approach together with a splitting technique. If we take c = f'(r) and d = 0, Equation (22) reduces to the fixed-point Newton method:
$$x_{n+1}=x_n-\frac{f(x_n)}{f'(r)}. \tag{23}$$
Traub [33] developed a simple accelerating method; given x_0 and γ_0,
$$w_n=x_n+\gamma_nf(x_n),\quad x_{n+1}=x_n-\frac{f(x_n)}{f[x_n,w_n]},\quad \gamma_{n+1}=-\frac{1}{f[x_n,x_{n+1}]}. \tag{24}$$
By taking the memory of x_n into account, the order of convergence can be raised. Traub's technique is a typical method with memory, in which data that appeared in the previous iteration are adopted in the current one. For recent progress on memory methods with accelerating parameters in multistep iterative methods, one can refer to [5,27,34,35,36,37,38,39,40]. One major goal of this paper is to develop two-step iterative schemes with a new memory method that determines the accelerating parameters by an updating technique using the information at the current step.
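A small Python sketch (not from the paper) of Traub's accelerating scheme (24), in the sign convention reconstructed above; the test function x^3 + 4x^2 − 10 (the f_4 used later in Section 5) and the starting data are illustrative choices.

```python
# Sketch of Traub's method with memory, Equation (24): the secant-slope parameter
# gamma is refreshed from the data of the previous iteration.

def traub_with_memory(f, x0, gamma0=0.01, tol=1e-12, max_iter=50):
    x, gamma = x0, gamma0
    for n in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, n
        w = x + gamma * fx                        # supplementary point
        x_new = x - fx * (x - w) / (fx - f(w))    # x - f(x)/f[x, w]
        gamma = -(x_new - x) / (f(x_new) - fx)    # gamma_{n+1} = -1/f[x_n, x_{n+1}]
        x = x_new
    return x, max_iter

print(traub_with_memory(lambda x: x**3 + 4.0 * x**2 - 10.0, 1.0))
```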

2. Three New Two-Step Iterative Schemes

In 2010, Wang and Liu [41] proposed
$$y_n=x_n-\frac{f(x_n)}{f'(x_n)-\alpha f(x_n)},\quad x_{n+1}=y_n-\frac{f(y_n)}{f'(x_n)-\alpha f(x_n)}, \tag{25}$$
which is a two-step iterative scheme based on Equation (10). The error equation is
$$e_{n+1}=(\alpha^2-3a_2\alpha+2a_2^2)e_n^3+O(e_n^4), \tag{26}$$
where a_2 := f''(r)/(2f'(r)) and e_n = x_n − r. Equation (25) is a two-step iterative scheme because it involves two variables, x_n and y_n, and two steps for computing x_{n+1}: the first step computes y_n, and then x_n and y_n are inserted into the second step to compute x_{n+1}.
Wang and Zhang [42] developed a family of Newton-type two-step iterative schemes with memory for solving nonlinear equations, whose R-convergence order is increased from 4 to 4.5616, 4.7913, and 5, depending on which updating techniques are used for the accelerating parameters. Nowadays, most memory-accelerating methods do not take the derivative term f'(x) into the iterative schemes.

2.1. First New Two-Step Iterative Scheme

Instead of Equation (25), we consider an extension of Equation (23) to a two-step iterative scheme:
$$y_n=x_n-\frac{f(x_n)}{f'(r)-\beta f(x_n)},\quad x_{n+1}=y_n-\frac{f(y_n)}{f'(r)-\beta f(x_n)}, \tag{27}$$
where β is a parameter whose optimal value is to be determined. The first step is a variant of the so-called fixed-point Newton method y_n = x_n − f(x_n)/f'(r).
Theorem 1.
The function f : I ⊂ ℝ → ℝ is sufficiently differentiable on the domain I, and r ∈ I is a simple root with f(r) = 0 and f'(r) ≠ 0. If x_0 is sufficiently close to r within the radius of convergence, then the iterative scheme (27) for solving f(x) = 0 has fourth-order convergence:
$$e_{n+1}=(a_2^3-a_2a_3)e_n^4+O(e_n^5), \tag{28}$$
where the optimal value of β is given by
$$\beta=-a_2=-\frac{f''(r)}{2f'(r)}. \tag{29}$$
Furthermore, the error of y_n reads as
$$e_{n,y}=y_n-r=(a_2^2-a_3)e_n^3+O(e_n^4), \tag{30}$$
where a_3 = f'''(r)/(6f'(r)).
Proof. 
According to the assumption,
$$e_n=x_n-r \tag{31}$$
is a small quantity; by e_{n+1} = x_{n+1} − r and Equation (31),
$$e_{n+1}=e_n+x_{n+1}-x_n. \tag{32}$$
Define
$$a_n:=\frac{f^{(n)}(r)}{n!\,f'(r)},\quad n=2,\ldots; \tag{33}$$
as usual,
$$f(x_n)=f'(r)[e_n+a_2e_n^2+a_3e_n^3+a_4e_n^4+\cdots]. \tag{34}$$
A straightforward computation renders
$$\beta f(x_n)-f'(r)=f'(r)[-1+\beta e_n+\beta a_2e_n^2+\beta a_3e_n^3+\beta a_4e_n^4], \tag{35}$$
$$d_n=y_n-x_n=\frac{f(x_n)}{\beta f(x_n)-f'(r)}=-e_n+b_2e_n^2+b_3e_n^3+b_4e_n^4, \tag{36}$$
where
$$b_2=-(\beta+a_2),\quad b_3=-(2\beta a_2+\beta^2+a_3),\quad b_4=-(\beta a_2^2+3\beta^2a_2+2\beta a_3+a_4). \tag{37}$$
It follows from Equations (31) and (36) that
$$e_{n,y}=y_n-r=y_n-x_n+x_n-r=d_n+e_n=b_2e_n^2+b_3e_n^3+b_4e_n^4. \tag{38}$$
In terms of e_{n,y}, we can express f(y_n) by
$$f(y_n)=f'(r)[e_{n,y}+a_2e_{n,y}^2+\cdots]=f'(r)[b_2e_n^2+b_3e_n^3+b_4e_n^4+a_2(b_2e_n^2+b_3e_n^3+b_4e_n^4)^2+\cdots]=f'(r)[b_2e_n^2+b_3e_n^3+(b_4+a_2b_2^2)e_n^4+\cdots]. \tag{39}$$
By using the second equation in (27), subtracting r from both sides, and employing Equations (34), (38) and (39), we can derive
$$\begin{aligned} e_{n+1}&=e_{n,y}-\frac{f(y_n)}{f'(r)-\beta f(x_n)}\\ &=b_2e_n^2+b_3e_n^3+b_4e_n^4-\frac{b_2e_n^2+b_3e_n^3+(b_4+a_2b_2^2)e_n^4}{1-\beta e_n-\beta a_2e_n^2-\beta a_3e_n^3-\beta a_4e_n^4}\\ &=b_2e_n^2+b_3e_n^3+b_4e_n^4-[b_2e_n^2+b_3e_n^3+(b_4+a_2b_2^2)e_n^4][1+\beta e_n+\beta a_2e_n^2+(\beta e_n+\beta a_2e_n^2)^2]\\ &=-\beta b_2e_n^3-(\beta b_3+a_2b_2^2+\beta a_2b_2+\beta^2b_2)e_n^4+O(e_n^5)\\ &=\beta(\beta+a_2)e_n^3-(\beta b_3+a_2b_2^2+\beta a_2b_2+\beta^2b_2)e_n^4+O(e_n^5). \end{aligned} \tag{40}$$
If we take β = −a_2, then b_2 = 0 and b_3 = a_2^2 − a_3, and Equation (28) is derived.
If β = −a_2 is taken, Equation (38) reduces to
$$e_{n,y}=b_3e_n^3+b_4e_n^4=(a_2^2-a_3)e_n^3+(2a_2a_3-2a_2^3-a_4)e_n^4. \tag{41}$$
The proof of Theorem 1 is completed. □
Notice that the error Equation (28) is the same as that of Ostrowski's two-step fourth-order optimal iterative scheme [43]. Since Equation (27) is different from Equation (25), the error Equation (28) is different from that in Equation (26); if α = a_2, the iterative scheme (25) also has fourth-order convergence.
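As a quick numerical sanity check (not from the paper), scheme (27) can be run with the optimal β = −f''(r)/(2f'(r)) on a test function whose root is known in closed form; mpmath is used here only to supply enough precision for the order estimate.

```python
# Sketch: numerical check that scheme (27) with the optimal beta = -a2 converges
# with order ~4, using f(x) = x^3 - 2 whose root r = 2**(1/3) is known exactly.
from mpmath import mp, mpf, log

mp.dps = 300                         # enough digits to see several quartic steps

f = lambda x: x**3 - 2
r = mpf(2) ** (mpf(1) / 3)
d1 = 3 * r**2                        # f'(r)
beta = -(6 * r) / (2 * d1)           # beta = -f''(r)/(2 f'(r))

x = mpf("1.4")
errs = []
for _ in range(4):
    y = x - f(x) / (d1 - beta * f(x))      # first step of (27)
    x = y - f(y) / (d1 - beta * f(x))      # second step reuses f(x_n) in the denominator
    errs.append(abs(x - r))

# the ratio log|e_{n+1}| / log|e_n| should settle near 4
for e_prev, e_next in zip(errs, errs[1:]):
    print(float(log(e_next) / log(e_prev)))
```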

2.2. Second New Two-Step Iterative Scheme

Let
$$y_n=x_n-\frac{f(x_n)}{f'(r)-\beta f(x_n)},\quad x_{n+1}=y_n-\frac{f(y_n)}{f'(r)-\alpha f(x_n)}, \tag{42}$$
where β and α are two parameters whose optimal values are to be determined.
Theorem 2.
The function f : I ⊂ ℝ → ℝ is sufficiently differentiable on the domain I, and r ∈ I is a simple root with f(r) = 0 and f'(r) ≠ 0. If x_0 is sufficiently close to r within the radius of convergence, then the iterative scheme (42) with α = 0 for solving f(x) = 0 has fourth-order convergence:
$$e_{n+1}=-a_2(\beta+a_2)^2e_n^4+O(e_n^5). \tag{43}$$
If the optimal value of β is given by
$$\beta=-a_2=-\frac{f''(r)}{2f'(r)}, \tag{44}$$
then the iterative scheme (42) with α = 0 has fifth-order convergence.
Proof. 
By using the second equation in (42), subtracting r from both sides, and employing Equations (34), (38) and (39), we can derive
$$\begin{aligned} e_{n+1}&=e_{n,y}-\frac{f(y_n)}{f'(r)-\alpha f(x_n)}\\ &=b_2e_n^2+b_3e_n^3+b_4e_n^4-\frac{b_2e_n^2+b_3e_n^3+(b_4+a_2b_2^2)e_n^4}{1-\alpha e_n-\alpha a_2e_n^2-\alpha a_3e_n^3-\alpha a_4e_n^4}\\ &=b_2e_n^2+b_3e_n^3+b_4e_n^4-[b_2e_n^2+b_3e_n^3+(b_4+a_2b_2^2)e_n^4][1+\alpha e_n+\alpha a_2e_n^2+(\alpha e_n+\alpha a_2e_n^2)^2]\\ &=-\alpha b_2e_n^3-(\alpha b_3+a_2b_2^2+\alpha a_2b_2+\alpha^2b_2)e_n^4+O(e_n^5)\\ &=\alpha(\beta+a_2)e_n^3-(\alpha b_3+a_2b_2^2+\alpha a_2b_2+\alpha^2b_2)e_n^4+O(e_n^5). \end{aligned} \tag{45}$$
If we take α = 0, then Equation (43) follows after inserting b_2 = −(β + a_2). □
It is interesting that the iterative scheme (42) with α = 0 is simpler than the iterative scheme (27), but its order of convergence is better.

2.3. Third New Two-Step Iterative Scheme

We further consider
$$y_n=x_n-\frac{f(x_n)}{f'(r)-\beta f(x_n)},\quad x_{n+1}=y_n-\frac{f(y_n)}{f'(r)-\beta f(y_n)}. \tag{46}$$
The second step is enhanced by using f'(r) − βf(y_n), rather than f'(r) − βf(x_n) as in Equation (27). Equation (46) is a two-step iterative scheme because it involves two variables, x_n and y_n, and two steps for computing x_{n+1}: the first step computes y_n, and then y_n is inserted into the second step to compute x_{n+1}.
Theorem 3.
The function f : I ⊂ ℝ → ℝ is sufficiently differentiable on the domain I, and r ∈ I is a simple root with f(r) = 0 and f'(r) ≠ 0. If x_0 is sufficiently close to r within the radius of convergence, then the iterative scheme (46) for solving f(x) = 0 is of fourth-order convergence:
$$e_{n+1}=-(\beta+a_2)b_2^2e_n^4+O(e_n^5)=-(\beta+a_2)^3e_n^4+O(e_n^5). \tag{47}$$
If β = −a_2, Equation (47) reduces to e_{n+1} = O(e_n^5); that is, the iterative scheme (46) is of fifth-order convergence.
Proof. 
By using the second equation in (46), subtracting r from both sides, and employing Equations (38) and (39), we can derive
$$\begin{aligned} e_{n+1}&=e_{n,y}-\frac{f(y_n)}{f'(r)-\beta f(y_n)}\\ &=b_2e_n^2+b_3e_n^3+b_4e_n^4-\frac{b_2e_n^2+b_3e_n^3+(b_4+a_2b_2^2)e_n^4}{1-\beta b_2e_n^2-\beta b_3e_n^3-\beta(b_4+a_2b_2^2)e_n^4}\\ &=b_2e_n^2+b_3e_n^3+b_4e_n^4-[b_2e_n^2+b_3e_n^3+(b_4+a_2b_2^2)e_n^4][1+\beta b_2e_n^2+\beta b_3e_n^3]\\ &=-(\beta+a_2)b_2^2e_n^4+O(e_n^5). \end{aligned} \tag{48}$$
The proof of Theorem 3 is completed. □

3. Four New Memory Methods

3.1. The First and Second New Memory Methods

The error Equation (40) is simpler than that in Equation (26). Theorem 1 indicates that the optimal value of β is β = −f''(r)/(2f'(r)). However, because the root r is itself unknown, f'(r) and f''(r) are not available; they are critical parameters for enhancing the performance of the proposed two-step iterative schemes.
The memory-dependent techniques to obtain suitable parameter values can be found in [34,44,45,46,47,48]. Let A = f'(r) and B = −β = f''(r)/(2f'(r)). We develop a new memory method for updating the values of A and B with the current values. In Equation (27), there are only two current values, x_n and y_n, which are insufficient to update f''(r). Therefore, we introduce a supplementary variable obtained by the fixed-point Newton method:
$$w_n=x_n-\frac{f(x_n)}{A}=x_n-\frac{f(x_n)}{f'(r)}. \tag{49}$$
Then, with the three data points (x_n, w_n, y_n), a second-degree Newton polynomial interpolant is given by
$$N_2(x)=f(x_n)+f[x_n,w_n](x-x_n)+f[x_n,w_n,y_n](x-x_n)(x-w_n), \tag{50}$$
where
$$f[x_n,w_n]=\frac{f(x_n)-f(w_n)}{x_n-w_n},\quad f[x_n,w_n,y_n]=\frac{f[x_n,w_n]-f[w_n,y_n]}{x_n-y_n}. \tag{51}$$
It is easy to verify that N_2(y_n) = f(y_n), and
$$N_2'(x_{n+1})=f[x_n,w_n]+f[x_n,w_n,y_n](2x_{n+1}-x_n-w_n),\quad N_2''(x_{n+1})=2f[x_n,w_n,y_n]. \tag{52}$$
In general, N_2(x_{n+1}) ≠ f(x_{n+1}).
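To make the updating formulas concrete, the following small sketch (not from the paper) builds the divided differences of a data triple and evaluates N_2'(t) and N_2''(t) as in Equation (52); on a quadratic the recovered slope and curvature are exact, which is the rationale for using N_2'(x_{n+1}) and f[x_n, w_n, y_n] as estimates of f'(r) and f''(r)/2.

```python
# Sketch: second-degree Newton interpolation over a triple (x, w, y), Eq. (50),
# and its derivatives, Eq. (52).

def divided_differences(f, x, w, y):
    f_xw = (f(x) - f(w)) / (x - w)
    f_wy = (f(w) - f(y)) / (w - y)
    f_xwy = (f_xw - f_wy) / (x - y)
    return f_xw, f_xwy

def n2_derivatives(f, x, w, y, t):
    """Return N2'(t) and N2''(t) for the interpolant through (x, w, y)."""
    f_xw, f_xwy = divided_differences(f, x, w, y)
    return f_xw + f_xwy * (2.0 * t - x - w), 2.0 * f_xwy

# Quadratic test: f(t) = 3t^2 - 5t + 1, so f'(t) = 6t - 5 and f'' = 6 everywhere.
f = lambda t: 3.0 * t**2 - 5.0 * t + 1.0
d1, d2 = n2_derivatives(f, 1.0, 1.2, 1.5, 2.0)
print(d1, d2)   # prints 7.0 and 6.0, matching f'(2) and f''
```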
The new algorithm based on Theorem 1, namely the first new memory-updating method (FNMUM), is depicted by (i) giving x_0, a_0, b_0, and c_0 = (a_0 + b_0)/2 and computing A_0 and B_0 by
$$A_0=f[a_0,b_0],\quad B_0=\frac{f(b_0)-2f(c_0)+f(a_0)}{2A_0(c_0-a_0)^2}, \tag{53}$$
and (ii) for n = 0, 1, …, performing the following computations until convergence:
$$w_n=x_n-\frac{f(x_n)}{A_n}, \tag{54}$$
$$y_n=x_n-\frac{f(x_n)}{A_n+B_nf(x_n)}, \tag{55}$$
$$x_{n+1}=y_n-\frac{f(y_n)}{A_n+B_nf(x_n)}, \tag{56}$$
$$A_{n+1}=f[x_n,w_n]+f[x_n,w_n,y_n](2x_{n+1}-x_n-w_n), \tag{57}$$
$$B_{n+1}=\frac{f[x_n,w_n,y_n]}{A_{n+1}}. \tag{58}$$
There are three evaluations of functions, f(x_n), f(w_n), and f(y_n), so that the optimal order of convergence is p = 2^2 = 4 and E.I. = 1.5874. The role of w_n is different from that of x_n and y_n: x_n and y_n are step variables used in the iterations (55) and (56), whereas w_n, computed from Equation (54), does not engage in the iteration; it provides an extra datum used in Equations (57) and (58) to update the values of A_{n+1} and B_{n+1}.
Therefore, the present parameter-updating technique is different from the memory-dependent accelerating techniques in [34,44,45,46,47,48]. In the FNMUM, no previous iteration values w_{n−1} and y_{n−1} are used; only the initial values a_0 and b_0 are required. Therefore, the new memory method can save much more computational cost than the previous memory-accelerating techniques.
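A minimal Python sketch of the FNMUM, Equations (53)–(58), under the reconstruction given above; the test function and initial data follow the first row of Table 1, while the stopping tolerance is an illustrative choice.

```python
# Minimal sketch of the FNMUM: A_n estimates f'(r) and B_n estimates f''(r)/(2 f'(r)).

def fnmum(f, x0, a0, b0, tol=1e-12, max_iter=50):
    c0 = 0.5 * (a0 + b0)
    A = (f(b0) - f(a0)) / (b0 - a0)                                  # A_0 = f[a_0, b_0]
    B = (f(b0) - 2.0 * f(c0) + f(a0)) / (2.0 * A * (c0 - a0) ** 2)   # B_0

    x = x0
    for n in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, n
        w = x - fx / A                                # supplementary variable (54)
        y = x - fx / (A + B * fx)                     # first step (55)
        fy = f(y)
        x_new = y - fy / (A + B * fx)                 # second step (56)

        # divided differences of the triple (x, w, y) for the Newton polynomial N_2
        fw = f(w)
        f_xw = (fx - fw) / (x - w)
        f_wy = (fw - fy) / (w - y)
        f_xwy = (f_xw - f_wy) / (x - y)

        A = f_xw + f_xwy * (2.0 * x_new - x - w)      # A_{n+1} = N_2'(x_{n+1})  (57)
        B = f_xwy / A                                 # B_{n+1}                  (58)
        x = x_new
    return x, max_iter

# Example: f_1(x) = exp(x^2 + 7x - 30) - 1 with root r = 3 (first row of Table 1).
import math
root, iters = fnmum(lambda x: math.exp(x**2 + 7.0 * x - 30.0) - 1.0, 2.9, 2.9, 3.1)
print(root, iters)
```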
The second new memory-updating method (SNMUM), based on Theorem 2, is depicted by (i) giving x_0, a_0, b_0, and c_0 = (a_0 + b_0)/2 and computing A_0 and B_0 by
$$A_0=f[a_0,b_0],\quad B_0=\frac{f(b_0)-2f(c_0)+f(a_0)}{2A_0(c_0-a_0)^2}, \tag{59}$$
and (ii) for n = 0, 1, …, performing the following computations until convergence:
$$w_n=x_n-\frac{f(x_n)}{A_n}, \tag{60}$$
$$y_n=x_n-\frac{f(x_n)}{A_n+B_nf(x_n)}, \tag{61}$$
$$x_{n+1}=y_n-\frac{f(y_n)}{A_n}, \tag{62}$$
$$A_{n+1}=f[x_n,w_n]+f[x_n,w_n,y_n](2x_{n+1}-x_n-w_n), \tag{63}$$
$$B_{n+1}=\frac{f[x_n,w_n,y_n]}{A_{n+1}}. \tag{64}$$
In the SNMUM, only the initial values a_0 and b_0 are guessed, and no previous iteration values w_{n−1} and y_{n−1} are used, which renders it more computationally cost-effective than the previous memory-accelerating techniques.

3.2. The Third New Memory Method

According to Theorem 3, the third new memory-updating method (TNMUM) is depicted by (i) giving x_0, a_0, b_0, and c_0 = (a_0 + b_0)/2 and computing A_0 and B_0 by
$$A_0=f[a_0,b_0],\quad B_0=\frac{f(b_0)-2f(c_0)+f(a_0)}{2A_0(c_0-a_0)^2}, \tag{65}$$
and (ii) for n = 0, 1, …, performing the following computations until convergence:
$$w_n=x_n-\frac{f(x_n)}{A_n}, \tag{66}$$
$$y_n=x_n-\frac{f(x_n)}{A_n-B_nf(x_n)}, \tag{67}$$
$$x_{n+1}=y_n-\frac{f(y_n)}{A_n-B_nf(y_n)}, \tag{68}$$
$$A_{n+1}=f[x_n,w_n]+f[x_n,w_n,y_n](2x_{n+1}-x_n-w_n), \tag{69}$$
$$B_{n+1}=\frac{f[x_n,w_n,y_n]}{A_{n+1}}. \tag{70}$$
Similarly, in the TNMUM, only the initial values a_0 and b_0 are guessed, and no previous iteration values w_{n−1} and y_{n−1} are used; it is quite computationally cost-effective.

3.3. The Accelerated Third New Memory Method

In the previous three new memory methods, the datum f(x_{n+1}) at x_{n+1} was not used. We modify the supplementary variable by
$$w_n=x_n-\eta\frac{f(x_n)}{f'(r)}, \tag{71}$$
where η is a relaxation factor to be designed.
Then, we set
$$N_2(x_{n+1})=f(x_n)+f[x_n,w_n](x_{n+1}-x_n)+f[x_n,w_n,y_n](x_{n+1}-x_n)(x_{n+1}-w_n)=f(x_{n+1}), \tag{72}$$
where N_2(x) was given by Equation (50). Inserting
$$x_n-w_n=\eta\frac{f(x_n)}{f'(r)} \tag{73}$$
into Equation (72), we can derive
$$f(x_n)+\frac{f'(r)[f(x_n)-f(w_n)]}{\eta f(x_n)}(x_{n+1}-x_n)+\frac{1}{x_n-y_n}\left\{\frac{f'(r)[f(x_n)-f(w_n)]}{\eta f(x_n)}-\frac{f(w_n)-f(y_n)}{w_n-y_n}\right\}(x_{n+1}-x_n)(x_{n+1}-w_n)=f(x_{n+1}). \tag{74}$$
Through some manipulations we can derive
$$\eta=\frac{f'(r)[f(x_n)-f(w_n)]\left[\dfrac{x_{n+1}-x_n}{x_n-y_n}+1\right]}{f(x_n)\left\{\dfrac{f(x_{n+1})-f(x_n)}{x_n-y_n}+f[w_n,y_n]\right\}}, \tag{75}$$
which can be used to update η.
The new algorithm based on Theorem 3 and Equations (71) and (75), namely the accelerated third new memory-updating method (ATNMUM), is depicted by (i) giving x_0, a_0, b_0, c_0 = (a_0 + b_0)/2, and η_0, and computing A_0 and B_0 by
$$A_0=f[a_0,b_0],\quad B_0=\frac{f(b_0)-2f(c_0)+f(a_0)}{2A_0(c_0-a_0)^2}, \tag{76}$$
and (ii) for n = 0, 1, …, performing the following computations until convergence:
$$w_n=x_n-\eta_n\frac{f(x_n)}{A_n}, \tag{77}$$
$$y_n=x_n-\frac{f(x_n)}{A_n+B_nf(x_n)}, \tag{78}$$
$$x_{n+1}=y_n-\frac{f(y_n)}{A_n+B_nf(y_n)}, \tag{79}$$
$$A_{n+1}=f[x_n,w_n]+f[x_n,w_n,y_n](2x_{n+1}-x_n-w_n), \tag{80}$$
$$B_{n+1}=\frac{f[x_n,w_n,y_n]}{A_{n+1}}, \tag{81}$$
$$\eta_{n+1}=\frac{A_{n+1}[f(x_n)-f(w_n)]\left[\dfrac{x_{n+1}-x_n}{x_n-y_n}+1\right]}{f(x_n)\left\{\dfrac{f(x_{n+1})-f(x_n)}{x_n-y_n}+f[w_n,y_n]\right\}}. \tag{82}$$
In Equation (46), we take the free parameter β = B n .
In the ATNMUM, the initial values a_0, b_0, and η_0 are guessed, and no previous iteration values w_{n−1} and y_{n−1} are used; it is quite computationally cost-effective.
In the above four iterative algorithms, FNMUM, SNMUM, TNMUM, and ATNMUM, a function f(x) with f ∈ C(I, ℝ) is sufficient. We suppose that the interval I includes at least one real root. In general, the nonlinear equation f(x) = 0 may possess many roots, which can be determined by the iterative schemes by giving different initial guesses x_0.

4. A Simple Memory Approach to Existent Iterative Schemes

4.1. The WMM Method

Wang [36] modified the Ostrowski-type method with memory by self-accelerating a parameter λ, given by
$$w_n=x_n-\frac{f(x_n)}{f'(x_n)},\quad y_n=w_n-\lambda(w_n-x_n)^2,\quad x_{n+1}=y_n-\frac{f(y_n)}{2f[x_n,y_n]-f'(x_n)}, \tag{83}$$
whose error equation was proven to be
$$e_{n+1}=(a_2-\lambda)(a_2^2-a_3-a_2\lambda)e_n^4+O(e_n^5). \tag{84}$$
Substituting the first equation into the second one in Equation (83), it is indeed a two-step method:
$$y_n=x_n-\frac{f(x_n)}{f'(x_n)}-\lambda\frac{f^2(x_n)}{f'(x_n)^2},\quad x_{n+1}=y_n-\frac{f(y_n)}{2f[x_n,y_n]-f'(x_n)}. \tag{85}$$
Upon taking B = λ = a_2 as an updating parameter, the order can be raised by the Wang memory method (WMM), which is depicted by (i) giving x_0, a_0, b_0, and c_0 = (a_0 + b_0)/2 and computing B_0 by
$$B_0=\frac{f(b_0)-2f(c_0)+f(a_0)}{2f[a_0,b_0](c_0-a_0)^2}, \tag{86}$$
and (ii) performing, for n = 0, 1, …,
$$w_n=x_n-\frac{f(x_n)}{f'(x_n)}, \tag{87}$$
$$y_n=w_n-B_n(w_n-x_n)^2, \tag{88}$$
$$x_{n+1}=y_n-\frac{f(y_n)}{2f[x_n,y_n]-f'(x_n)}, \tag{89}$$
$$B_{n+1}=\frac{f[x_n,w_n,y_n]}{f[x_n,w_n]+f[x_n,w_n,y_n](2x_{n+1}-x_n-w_n)}. \tag{90}$$
There are four evaluations of functions: f(x_n), f(w_n), f(y_n), and f'(x_n).

4.2. The ZLHMM Method

Zheng et al. [49] modified the Steffensen-type method with an accelerating parameter γ:
$$w_n=x_n+\gamma f(x_n),\quad y_n=x_n-\frac{f(x_n)}{f[x_n,w_n]},\quad x_{n+1}=y_n-\frac{f(y_n)}{f[x_n,y_n]+(y_n-x_n)f[x_n,w_n,y_n]}, \tag{91}$$
whose error equation is
$$e_{n+1}=(1+f'(r)\gamma)^2a_2(a_2^2-a_3)e_n^4+O(e_n^5). \tag{92}$$
Let A = f'(r) and γ = −1/A. The ZLH memory method (ZLHMM) reads as (i) giving x_0, a_0, and b_0 and computing A_0 by
$$A_0=f[a_0,b_0], \tag{93}$$
and (ii) performing, for n = 0, 1, …,
$$w_n=x_n-\frac{f(x_n)}{A_n}, \tag{94}$$
$$y_n=x_n-\frac{f(x_n)}{f[x_n,w_n]}, \tag{95}$$
$$x_{n+1}=y_n-\frac{f(y_n)}{f[x_n,y_n]+(y_n-x_n)f[x_n,w_n,y_n]}, \tag{96}$$
$$A_{n+1}=f[x_n,w_n]+f[x_n,w_n,y_n](2x_{n+1}-x_n-w_n). \tag{97}$$
There are three evaluations of functions: f(x_n), f(w_n), and f(y_n).
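A minimal Python sketch of the ZLHMM, Equations (93)–(97), as reconstructed above; the test function f_3 and the starting data follow the last row of Table 8, and the stopping tolerance is an illustrative choice.

```python
# Sketch of the ZLHMM: Steffensen-type two-step method with A_n updated from N_2'(x_{n+1}).

def zlhmm(f, x0, a0, b0, tol=1e-12, max_iter=50):
    A = (f(b0) - f(a0)) / (b0 - a0)             # A_0 = f[a_0, b_0]
    x = x0
    for n in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, n
        w = x - fx / A                           # supplementary point (94)
        fw = f(w)
        f_xw = (fx - fw) / (x - w)
        y = x - fx / f_xw                        # first step (95)
        fy = f(y)
        f_xy = (fx - fy) / (x - y)
        f_wy = (fw - fy) / (w - y)
        f_xwy = (f_xw - f_wy) / (x - y)
        x_new = y - fy / (f_xy + (y - x) * f_xwy)        # second step (96)
        A = f_xw + f_xwy * (2.0 * x_new - x - w)         # A_{n+1} = N_2'(x_{n+1}) (97)
        x = x_new
    return x, max_iter

# Example: f_3(x) = (x - 1)^3 - 1 with root r = 2 (last row of Table 8).
print(zlhmm(lambda x: (x - 1.0) ** 3 - 1.0, 1.5, 0.5, 1.0))
```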

4.3. The CCTMM Method

Chicharro et al. [37] proposed a two-step iterative scheme:
$$y_n=x_n-\frac{f(x_n)}{f[x_n,w_n]},\quad x_{n+1}=y_n-H(t_n)\frac{f(y_n)}{f[y_n,w_n]}, \tag{98}$$
where w_n = x_n + γf(x_n), t_n = f(y_n)/f(x_n), H(0) = H'(0) = 1, and |H''(0)| < ∞. They derived the following error equation:
$$e_{n+1}=\frac{a_2}{2}[1+\gamma f'(r)]^2\left[\{6+\gamma f'(r)(H''(0)-2)+H''(0)\}a_2^2+2a_3\right]e_n^4+O(e_n^5). \tag{99}$$
If we take γ = −1/f'(r), the order of convergence increases by at least one.
Let A = f'(r). The CCT memory method (CCTMM) reads as (i) giving x_0, a_0, and b_0 and computing A_0 by Equation (93), and (ii) performing, for n = 0, 1, …,
$$w_n=x_n-\frac{f(x_n)}{A_n}, \tag{100}$$
$$y_n=x_n-\frac{f(x_n)}{f[x_n,w_n]}, \tag{101}$$
$$t_n=\frac{f(y_n)}{f(x_n)}, \tag{102}$$
$$x_{n+1}=y_n-H(t_n)\frac{f(y_n)}{f[y_n,w_n]}, \tag{103}$$
$$A_{n+1}=f[x_n,w_n]+f[x_n,w_n,y_n](2x_{n+1}-x_n-w_n). \tag{104}$$
In the numerical test, we take H(t) = 1 + t. Three evaluations of functions, f(x_n), f(w_n), and f(y_n), are needed.

4.4. The DMM Method

In 2013, Džunić [34] proposed a two-step iterative scheme:
$$y_n=x_n-\frac{f(x_n)}{f[x_n,w_n]+pf(w_n)},\quad x_{n+1}=y_n-g(t_n)\frac{f(y_n)}{f[y_n,w_n]+pf(w_n)}, \tag{105}$$
where w_n = x_n + γf(x_n), t_n = f(y_n)/f(x_n), g(0) = g'(0) = 1, and |g''(0)| < ∞. Džunić [34] verified that the convergence order is at least seven when the parameters γ and p are accelerated by the memory-dependent technique using the third-degree and fourth-degree Newton polynomials.
Here, we employ the second-degree N_2(x) in Equation (50) to update γ and p. Let A = f'(r) = −1/γ and B = −p = a_2. The new Džunić memory method (DMM) reads as (i) giving x_0, a_0, b_0, and c_0 = (a_0 + b_0)/2 and computing A_0 and B_0 by Equation (53), and (ii) performing, for n = 0, 1, …,
$$w_n=x_n-\frac{f(x_n)}{A_n}, \tag{106}$$
$$y_n=x_n-\frac{f(x_n)}{f[x_n,w_n]-B_nf(w_n)}, \tag{107}$$
$$t_n=\frac{f(y_n)}{f(x_n)}, \tag{108}$$
$$x_{n+1}=y_n-g(t_n)\frac{f(y_n)}{f[y_n,w_n]-B_nf(w_n)}, \tag{109}$$
$$A_{n+1}=f[x_n,w_n]+f[x_n,w_n,y_n](2x_{n+1}-x_n-w_n), \tag{110}$$
$$B_{n+1}=\frac{f[x_n,w_n,y_n]}{A_{n+1}}. \tag{111}$$
In the numerical test, we take g(t) = 1 + t. Three evaluations of functions, f(x_n), f(w_n), and f(y_n), are needed.
Like the accelerated third new memory method in Section 3.3, we propose a modification of the DMM (MDMM). The MDMM reads as (i) giving x_0, a_0, b_0, c_0 = (a_0 + b_0)/2, and η_0 and computing A_0 and B_0 by Equation (53), and (ii) performing, for n = 0, 1, …,
$$w_n=x_n-\eta_n\frac{f(x_n)}{A_n}, \tag{112}$$
$$y_n=x_n-\frac{f(x_n)}{f[x_n,w_n]-B_nf(w_n)}, \tag{113}$$
$$t_n=\frac{f(y_n)}{f(x_n)}, \tag{114}$$
$$x_{n+1}=y_n-g(t_n)\frac{f(y_n)}{f[y_n,w_n]-B_nf(w_n)}, \tag{115}$$
$$A_{n+1}=f[x_n,w_n]+f[x_n,w_n,y_n](2x_{n+1}-x_n-w_n), \tag{116}$$
$$B_{n+1}=\frac{f[x_n,w_n,y_n]}{A_{n+1}}, \tag{117}$$
$$\eta_{n+1}=\frac{A_{n+1}[f(x_n)-f(w_n)]\left[\dfrac{x_{n+1}-x_n}{x_n-y_n}+1\right]}{f(x_n)\left\{\dfrac{f(x_{n+1})-f(x_n)}{x_n-y_n}+f[w_n,y_n]\right\}}. \tag{118}$$

4.5. The CLBTMM Method

In 2015, Cordero et al. [50] proposed a two-step iterative scheme:
$$y_n=x_n-\frac{f(x_n)}{f[x_n,w_n]+\lambda f(w_n)},\quad x_{n+1}=y_n-\frac{f(y_n)}{f[x_n,y_n]+(y_n-x_n)f[x_n,w_n,y_n]}, \tag{119}$$
where w_n = x_n + γf(x_n). They argued that the memory-dependent method possesses at least seventh-order convergence.
Let A = f'(r) = −1/γ and B = −λ = a_2. The new CLBT memory method (CLBTMM) reads as (i) giving x_0, a_0, b_0, and c_0 = (a_0 + b_0)/2 and computing A_0 and B_0 by Equation (53), and (ii) performing, for n = 0, 1, …,
$$w_n=x_n-\frac{f(x_n)}{A_n}, \tag{120}$$
$$y_n=x_n-\frac{f(x_n)}{f[x_n,w_n]-B_nf(w_n)}, \tag{121}$$
$$x_{n+1}=y_n-\frac{f(y_n)}{f[x_n,y_n]+(y_n-x_n)f[x_n,w_n,y_n]}, \tag{122}$$
$$A_{n+1}=f[x_n,w_n]+f[x_n,w_n,y_n](2x_{n+1}-x_n-w_n), \tag{123}$$
$$B_{n+1}=\frac{f[x_n,w_n,y_n]}{A_{n+1}}. \tag{124}$$
Three evaluations of functions, f(x_n), f(w_n), and f(y_n), are needed.

5. Numerical Verifications of the New Memory-Updating Method

We give some examples to assess the performance of the proposed iterative methods by the numerically computed order of convergence (COC), which is approximated by [21,51]
$$\mathrm{COC1}:=\frac{\ln|(x_{n+1}-r)/(x_n-r)|}{\ln|(x_n-r)/(x_{n-1}-r)|},\quad \mathrm{COC2}:=\frac{\ln|f(x_{n+1})/f(x_n)|}{\ln|f(x_n)/f(x_{n-1})|}. \tag{125}$$
The triple (x_{n+1}, x_n, x_{n−1}) consists of the last three iterates. If ln|(x_{n+1} − r)/(x_n − r)| and ln|f(x_{n+1})/f(x_n)| are not computable because x_{n+1} = r and f(x_{n+1}) = 0, we shift the triple to (x_n, x_{n−1}, x_{n−2}).
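A small sketch (not from the paper) of how COC1 and COC2 in Equation (125) are evaluated from the last three iterates; the Newton iteration on f(x) = x^2 − 2 is only an illustrative data source and should report an order close to 2.

```python
# Sketch: the two computed-order-of-convergence estimates of Equation (125).
import math

def coc1(x_np1, x_n, x_nm1, r):
    return math.log(abs((x_np1 - r) / (x_n - r))) / math.log(abs((x_n - r) / (x_nm1 - r)))

def coc2(f, x_np1, x_n, x_nm1):
    return math.log(abs(f(x_np1) / f(x_n))) / math.log(abs(f(x_n) / f(x_nm1)))

# Example data: Newton iterates for f(x) = x^2 - 2 (quadratic convergence expected).
f = lambda x: x**2 - 2.0
xs = [1.5]
for _ in range(3):
    xs.append(xs[-1] - f(xs[-1]) / (2.0 * xs[-1]))
print(coc1(xs[-1], xs[-2], xs[-3], math.sqrt(2.0)), coc2(f, xs[-1], xs[-2], xs[-3]))
```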
To save space, we test three examples:
$$f_1(x)=\exp(x^2+7x-30)-1, \tag{126}$$
$$f_2(x)=(x-1)(x^6+1/x^6+4)\sin(x^2), \tag{127}$$
$$f_3(x)=(x-1)^3-1. \tag{128}$$
The corresponding solutions are, respectively, r_1 = 3, r_2 = 1, and r_3 = 2.
As noticed by Džunić [34], f_2(x) shows nontrivial behavior, since two relatively close roots, r = 1.772453850905516 and r = −1.772453850905516, appear, and a singularity is close to the sought root r = 1. In practice, we first solved f_2(x) = 0 by the Newton method. With x_0 = 0.6, the NM spent 12 iterations to find r = 1; with x_0 = ±2, the NM does not converge to r = ±1.772453850905516 within 500 iterations.
With three evaluations of functions, the optimal order of convergence of a two-step iterative scheme without memory is p = 4 and E.I. = 1.5874. In Table 1, we list the number of iterations (NI) needed to satisfy f_i = 0, i = 1, 2, 3, where, as expected, the value of E.I. is near or larger than 1.5874. The equation f_3 = 0 has three roots, of which only x = 2 is real; the FNMUM can still quickly find this solution. Through the new memory-updating method, the order of convergence is significantly increased.
Compared with the Newton method mentioned above, for f_2(x) = 0 the FNMUM with x_0 = 0.6 spent five iterations to find r = 1; with x_0 = 2, the FNMUM spent eight iterations for r = 1.772453850905516; and with x_0 = −2, it took six iterations for r = −1.772453850905516.
For f_1(x) = 0, the exact values f'(r) = 13 and f''(r) = 171 are available; hence, we can estimate the errors of the parameter values by ERR1 = |A_n − 13| and ERR2 = |B_n − 171/26| = |B_n − 6.576923076923077|. Table 2 demonstrates that A_n and B_n tend to the exact values.
In Table 3, we list the number of iterations (NI) needed to satisfy f_i = 0, i = 1, 2, 3, where the value of E.I. is near or larger than 1.5874. As expected, the second new memory-updating method significantly increases the COC to more than 4.7 for f_1(x) = 0 and f_2(x) = 0. For f_3(x) = 0, however, the SNMUM is weak.
In Table 4, we list the results for the third new memory-updating method (TNMUM). As expected, the COCs are greater than four for f 2 ( x ) = 0 and f 3 ( x ) = 0 .
In Table 5, we list the results for the accelerated third new memory-updating method (ATNMUM). As expected, the COCs are larger than those in Table 3.
We can estimate the errors by ERR1 = |A_n − 13| and ERR2 = |N_2(x_n) − f(x_n)| for the solution of f_1(x) = 0 by using the ATNMUM. Table 6 demonstrates that A_n tends to the exact value, and N_2(x_n) quickly approaches f(x_n) owing to the design of the relaxation factor in Equation (75).
In Table 7, we list the number of iterations (NI) for solving f_i = 0, i = 1, 2, 3 by the WMM, for which four evaluations of functions are required; hence, we take E.I. = (COC1)^{1/4}. The good performance of the new memory method can be seen: it raises COC = 4 to values in the range [5.592, 7.576].
In Table 8, we list the number of iterations (NI) for solving f i = 0 , i = 1 , 2 , 3 by the ZLHMM method.
In Table 9, we list the number of iterations (NI) needed to satisfy f_i = 0, i = 1, 2, 3, where, as expected, the value of E.I. is near or larger than 1.7. Moreover, with the same initial value x_0 = 2.5, the presented COC1 and NI for f_3(x) = 0 are better than those computed by Chicharro et al. [37], where NI = 6 and ACOC = 4.476. The values 1.705, 1.736, and 1.986 are also better than the E.I. = 8^{1/4} = 1.682 obtained by the eighth-order optimal iterative scheme with four evaluations of functions.
In Table 10, the presented COC2 for f_2(x) = 0 is better than that computed by Džunić [34]: the COC = 6.9 there is smaller than the 7.3 here. Even though we do not use the full memory information, the performance is better than that in [34].
In Table 11, the presented COC2 for f_2(x) = 0 is better than that computed by Džunić [34]: the COC = 6.9 there is smaller than the 11.959 here. Even though we do not use the full memory information, the performance is better than that in [34]. Compared with Table 10, the higher performance in Table 11 was gained by introducing a relaxation factor into the modified Džunić memory method. The value COC2 = 21.009 for the solution of f_3(x) = 0 is abnormal.
In Table 12, the presented COC2 = 8.53 for f_4(x) = x^3 + 4x^2 − 10 = 0, with the root r_4 = 1.365230013414097, is better than that computed in [52], where COC = 7 is smaller than 8.53. Even though we do not use the full memory information, the performance is better than that in [52], where γ_n = −1/N_3'(x_{n+1}) and λ_n = −N_3''(w_{n+1})/[2N_3'(w_{n+1})] were used for the memory-dependent function N_3(x; x_n, y_{n−1}, w_{n−1}, x_{n−1}) in Equation (119). N_3(x) is the third-degree Newton interpolation polynomial.
The presented COC2 = 9.579 for f_3(x) = (x − 1)^3 − 1 = 0 is better than that computed in [50], where COC = 6.9041 was obtained using γ_n = −1/N_3'(x_{n+1}), λ_n = −N_4''(w_{n+1})/[2N_4'(w_{n+1})], N_3(x; x_n, y_{n−1}, x_{n−1}, w_{n−1}), and N_4(x; w_n, x_n, y_{n−1}, w_{n−1}, x_{n−1}) in Equation (119). N_3(x) and N_4(x) are, respectively, the third-degree and fourth-degree Newton interpolation polynomials. For f_5(x) = (x − 2)(x^6 + x^3 + 1)e^{−x^2} = 0 with the root r_5 = 2, the presented COC2 = 9.686 is better than the COC = 6.9727 computed in [50].

6. Enhancing the Updating Speed of Parameters

To further accelerate the updating speed of the parameters A_n and B_n, according to [52], we can take A_{n+1} = N_3'(x_{n+1}) and B_{n+1} = N_3''(w_{n+1})/[2N_3'(w_{n+1})]. The memory-dependent method CLBTM reads as (i) giving x_0, a_0, b_0, and c_0 = (a_0 + b_0)/2, giving or computing A_0 and B_0 by Equation (53), and setting w_0 = x_0 − f(x_0)/A_0, and (ii) performing, for n = 0, 1, …,
$$y_n=x_n-\frac{f(x_n)}{f[x_n,w_n]-B_nf(w_n)}, \tag{129}$$
$$x_{n+1}=y_n-\frac{f(y_n)}{f[x_n,y_n]+(y_n-x_n)f[x_n,w_n,y_n]}, \tag{130}$$
$$\begin{aligned} A_{n+1}&=f[x_n,w_n]+f[x_n,w_n,y_n](2x_{n+1}-w_n-x_n)\\ &\quad+f[x_n,w_n,y_n,x_{n+1}]\,[(x_{n+1}-w_n)(x_{n+1}-y_n)+(x_{n+1}-x_n)(x_{n+1}-y_n)+(x_{n+1}-x_n)(x_{n+1}-w_n)],\\ w_{n+1}&=x_{n+1}-\frac{f(x_{n+1})}{A_{n+1}}, \end{aligned} \tag{131}$$
$$\begin{aligned} C_{n+1}&=2f[x_n,w_n,y_n]+f[x_n,w_n,y_n,x_{n+1}](6w_{n+1}-2x_n-2w_n-2y_n),\\ D_{n+1}&=f[x_n,w_n]+f[x_n,w_n,y_n](2w_{n+1}-w_n-x_n)\\ &\quad+f[x_n,w_n,y_n,x_{n+1}]\,[(w_{n+1}-w_n)(w_{n+1}-y_n)+(w_{n+1}-x_n)(w_{n+1}-y_n)+(w_{n+1}-x_n)(w_{n+1}-w_n)],\\ B_{n+1}&=\frac{C_{n+1}}{2D_{n+1}}, \end{aligned} \tag{132}$$
where
$$f[x_n,w_n,y_n,x_{n+1}]=\frac{f[x_n,w_n,y_n]-f[w_n,y_n,x_{n+1}]}{x_n-x_{n+1}}. \tag{133}$$
In Table 13, the presented COC2 = 9.157 for f 4 ( x ) = x 3 + 4 x 2 10 = 0 is better than that computed in [52]. The presented COC2 = 15.634 for f 3 ( x ) = ( x 1 ) 3 1 = 0 is better than that computed in [50], where COC = 6.9041 was obtained. The presented COC2 = 10.159 for f 5 ( x ) = ( x 2 ) ( x 6 + x 3 + 1 ) e x 2 = 0 is better than that computed in [50], where COC = 6.9727 was obtained.
Comparing Table 13 with Table 12, the convergence speed as reflected in the values of COC and E.I. is enhanced by the CLBTM; however, the complexity is increased compared with the iterative scheme CLBTMM in Section 4.5.
On this occasion, we can point out the difference between the new memory method (NMM) and the memory method (MM): in the MM, f(x_{n+1}) is computed, but it is not needed in the NMM; the supplementary variable w_{n+1} is updated by Equation (131) in the MM, but no update of w is required in the NMM; and in the NMM a lower-order polynomial N_2 is sufficient, whereas the MM requires the higher-order polynomials N_3, N_4, and N_5. Consequently, the computational cost of the NMM is much lower than that of the MM.

7. Concluding Remarks

Traub [33] was the first to develop a memory method, from Steffensen's iterative scheme:
$$w_n=x_n-\gamma_nf(x_n),\quad x_{n+1}=x_n-\frac{f(x_n)}{f[x_n,w_n]},\quad \gamma_{n+1}=\frac{1}{f[x_n,x_{n+1}]}. \tag{134}$$
By giving x_0 and γ_0, the above iterative scheme can be initiated. We notice that in the proposed new memory-updating method, we do not need to compute f(x_{n+1}), which saves one more function evaluation compared with Traub's memory method.
In [34], the accelerating technique is based on the memory of (w_n, x_n, y_{n−1}, w_{n−1}, x_{n−1}):
$$\gamma_n=-\frac{1}{N_3'(x_n)},\quad p_n=-\frac{N_4''(w_n)}{2N_4'(w_n)}. \tag{135}$$
In addition to the cost of storing the information from the previous iteration, there is a nontrivial computational cost for evaluating N_3'(x_n), N_4'(w_n), and N_4''(w_n). Since the work in [34], there has been an extensive literature on memory methods using similar techniques. The new memory approach for updating the accelerating parameters proposed in this paper does not use information from the previous iteration; it takes the current values of (x_n, w_n, y_n) into N_2(x) evaluated at x_{n+1} for updating A_n and B_n, and is therefore more computationally cost-effective than the previous techniques. The three new two-step iterative schemes developed here, together with the four updating techniques, worked very well to quickly solve nonlinear equations. Numerical examples revealed that, without computing extra functions, the new memory-updating methods can raise the order of convergence by several units and significantly enhance the values of E.I.
We introduced an accelerating technique in the third new memory method by imposing N_2(x_{n+1}) = f(x_{n+1}) to determine the relaxation factor. The values of COC and E.I. are greatly raised by this accelerated third new memory-updating method. When the relaxation-factor acceleration technique was combined with the modified Džunić memory method, very high values of COC = 13.222 and E.I. = 2.46 were achieved. High performance is achieved by the proposed two-step iterative methods for finding the roots of nonlinear equations.
The novelties involved in this paper are as follows:
  • Developing three novel two-step iterative schemes with simple forms, which are derivative-free.
  • A second-degree Newton polynomial was used to update two critical parameters, f'(r) and f''(r), greatly saving computational cost: only three function evaluations are needed in the new memory method (NMM).
  • The NMM was applied to five existing two-step iterative schemes, WMM, ZLHMM, CCTMM, DMM, and CLBTMM, with the resulting values of E.I. all larger than 1.5874.
  • The new idea of imposing N_2(x_{n+1}) = f(x_{n+1}) to determine the relaxation factor was developed, whose resulting E.I. ∈ [1.78, 2.176] is larger than the E.I. = 1.5874 of the fourth-order optimal iterative scheme.
  • Combining the relaxation-factor acceleration technique and the modified Džunić memory method, very high values of COC and E.I. were achieved.

Author Contributions

Conceptualization, C.-S.L.; Methodology, C.-S.L. and C.-W.C.; Software, C.-S.L. and C.-W.C.; Validation, C.-S.L. and C.-W.C.; Formal analysis, C.-S.L. and C.-W.C.; Investigation, C.-S.L. and C.-W.C.; Resources, C.-W.C.; Data curation, C.-S.L. and C.-W.C.; Writing—original draft, C.-S.L.; Writing—review and editing, C.-W.C.; Visualization, C.-S.L. and C.-W.C.; Supervision, C.-S.L.; Project administration, C.-S.L.; Funding acquisition, C.-W.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy restrictions.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ATNMUM: Accelerated third new memory-updating method
BVP: Boundary value problem
CCTMM: Chicharro, Cordero, and Torregrosa's memory method
CLBTM: Cordero, Lotfi, Bakhtiari, and Torregrosa's method
CLBTMM: Cordero, Lotfi, Bakhtiari, and Torregrosa's memory method
COC: Computed order of convergence
DMM: Džunić's memory method
E.I.: Efficiency index
FNMUM: First new memory-updating method
MDMM: Modification of Džunić's memory method
NI: Number of iterations
NM: Newton method
NMM: New memory method
ODE: Ordinary differential equation
SNMUM: Second new memory-updating method
TNMUM: Third new memory-updating method
WMM: Wang memory method
ZLHMM: Zheng, Li, and Huang's memory method

References

  1. Liu, C.S.; El-Zahar, E.R.; Chang, C.W. A two-dimensional variant of Newton’s method and a three-point Hermite interpolation: Fourth- and eighth-order optimal iterative schemes. Mathematics 2023, 11, 4529. [Google Scholar] [CrossRef]
  2. Wu, X.Y. A new continuation Newton-like method and its deformation. Appl. Math. Comput. 2000, 112, 75–78. [Google Scholar] [CrossRef]
  3. Lee, M.Y.; Kim, Y.I.; Magreñán, Á.A. On the dynamics of tri-parametric family of optimal fourth-order multiple-zero finders with a weight function of the principal mth root of a function-to-function ratio. Appl. Math. Comput. 2017, 315, 564–590. [Google Scholar]
  4. Zafar, F.; Cordero, A.; Torregrosa, J.R. Stability analysis of a family of optimal fourth-order methods for multiple roots. Numer. Algor. 2019, 81, 947–981. [Google Scholar] [CrossRef]
  5. Thangkhenpau, G.; Panday, S.; Mittal, S.K.; Jäntschi, L. Novel parametric families of with and without memory iterative methods for multiple roots of nonlinear equations. Mathematics 2023, 14, 2036. [Google Scholar] [CrossRef]
  6. Singh, M.K.; Singh, A.K. A derivative free globally convergent method and its deformations. Arab. J. Math. 2021, 10, 481–496. [Google Scholar] [CrossRef]
  7. Singh, M.K.; Argyros, I.K. The dynamics of a continuous Newton-like method. Mathematics 2022, 10, 3602. [Google Scholar] [CrossRef]
  8. Wu, X.Y. Newton-like method with some remarks. Appl. Math. Comput. 2007, 118, 433–439. [Google Scholar] [CrossRef]
  9. Liu, C.S.; Chang, C.W.; Kuo, C.L. Memory-accelerating methods for one-step iterative schemes with Lie-symmetry method solving nonlinear boundary value problem. Symmetry 2024, 16, 120. [Google Scholar] [CrossRef]
  10. Liu, C.S. Elastoplastic models and oscillators solved by a Lie-group differential algebraic equations method. Int. J. Non-Linear Mech. 2015, 69, 93–108. [Google Scholar] [CrossRef]
  11. Yu, Y.X. A novel weighted density functional theory for adsorption, fluid-solid interfacial tension, and disjoining properties of simple liquid films on planar solid surfaces. J. Chem. Phys. 2009, 131, 024704. [Google Scholar] [CrossRef] [PubMed]
  12. Alazwari, M.A.; Abu-Hamdeh, N.H.; Goodarzi, M. Entropy optimization of first-grade viscoelastic nanofluid flow over a stretching sheet by using classical Keller-box scheme. Mathematics 2021, 9, 2563. [Google Scholar] [CrossRef]
  13. Khan, F.A.; Aldhabani, M.S.; Alamer, A.; Alshaban, E.; Alamrani, F.M.; Mohammed, H.I.A. Almost nonlinear contractions under locally finitely transitive relations with applications to integral equations. Mathematics 2023, 11, 4749. [Google Scholar] [CrossRef]
  14. He, J.H. Variational iteration method—A kind of non-linear analytical technique: Some examples. Int. J. Non-linear Mech. 1999, 34, 699–708. [Google Scholar] [CrossRef]
  15. He, J.H. Variational iteration method for autonomous ordinary systems. Appl. Math. Comput. 2000, 114, 115–123. [Google Scholar] [CrossRef]
  16. Wang, X.; He, W.; Feng, H.; Atluri, S.N. Fast and accurate predictor-corrector methods using feedback-accelerated Picard iteration for strongly nonlinear problems. Comput. Model. Eng. Sci. 2024, 139, 1263–1294. [Google Scholar] [CrossRef]
  17. Argyros, I.K.; Shakhno, S.M.; Yarmola, H.P. Extended semilocal convergence for the Newton-Kurchatov method. Mat. Stud. 2020, 53, 85–91. [Google Scholar]
  18. Argyros, I.K.; Shakhno, S.M. Extended local convergence for the combined Newton-Kurchatov method under the generalized Lipschitz conditions. Mathematics 2019, 7, 207. [Google Scholar] [CrossRef]
  19. Abbasbandy, S. Improving Newton-Raphson method for nonlinear equations by modified Adomian decomposition method. Appl. Math. Comput. 2003, 145, 887–893. [Google Scholar] [CrossRef]
  20. Chun, C. Iterative methods improving Newton’s method by the decomposition method. Comput. Math. Appl. 2005, 50, 1559–1568. [Google Scholar] [CrossRef]
  21. Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
  22. Morlando, F. A class of two-step Newton’s methods with accelerated third-order convergence. Gen. Math. Notes 2015, 29, 17–26. [Google Scholar]
  23. Ogbereyivwe, O.; Umar, S.S. Behind Weerakoon and Fernando’s scheme: Is Weerakoon and Fernando’s scheme version computationally better than its power-means variants? FUDMA J. Sci. 2023, 7, 368–371. [Google Scholar] [CrossRef]
  24. Saqib, M.; Iqbal, M. Some multi-step iterative methods for solving nonlinear equations. Open J. Math. Sci. 2017, 1, 25–33. [Google Scholar] [CrossRef]
  25. Zhanlav, T.; Chuluunbaatar, O.; Ulziibayar, V. Generating function method for constructing new iterations. Appl. Math. Comput. 2017, 315, 414–423. [Google Scholar] [CrossRef]
  26. Argyros, I.K.; Regmi, S.; Shakhno, S.; Yarmola, H. Perturbed Newton methods for solving nonlinear equations with applications. Symmetry 2022, 14, 2206. [Google Scholar] [CrossRef]
  27. Chanu, W.H.; Panday, S.; Thangkhenpau, G. Development of optimal iterative methods with their applications and basins of attraction. Symmetry 2022, 14, 2020. [Google Scholar] [CrossRef]
  28. Petkovic, M.; Neta, B.; Petkovic, L.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations; Elsevier: Amsterdam, The Netherlands, 2013. [Google Scholar]
  29. Amat, S.; Busquier, S. Advances in Iterative Methods for Nonlinear Equations; SEMA-SEMAI Springer series; Springer: Cham, Switzerland, 2016. [Google Scholar]
  30. Kung, H.T.; Traub, J.F. Optimal order of one-point and multi-point iterations. J. Assoc. Comput. Machinery 1974, 21, 643–651. [Google Scholar] [CrossRef]
  31. Liu, C.S.; Hong, H.K.; Lee, T.L. A splitting method to solve a single nonlinear equation with derivative-free iterative schemes. Math. Comput. Simul. 2021, 190, 837–847. [Google Scholar] [CrossRef]
  32. Liu, C.S. A new splitting technique for solving nonlinear equations by an iterative scheme. J. Math. Res. 2020, 12, 40–48. [Google Scholar] [CrossRef]
  33. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: New York, NY, USA, 1964. [Google Scholar]
  34. Džunić, J. On efficient two-parameter methods for solving nonlinear equations. Numer. Algor. 2013, 63, 549–569. [Google Scholar]
  35. Lotfi, T.; Soleymani, F.; Noori, Z.; Kılıçman, A.; Khaksar Haghani, F. Efficient iterative methods with and without memory possessing high efficiency indices. Discrete Dyn. Nat. Soc. 2014, 2014, 912796. [Google Scholar] [CrossRef]
  36. Wang, X. An Ostrowski-type method with memory using a novel self-accelerating parameter. J. Comput. Appl. Math. 2018, 330, 710–720. [Google Scholar] [CrossRef]
  37. Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Dynamics of iterative families with memory based on weight functions procedure. Appl. Math. Comput. 2019, 354, 286–298. [Google Scholar] [CrossRef]
  38. Torkashvand, V.; Kazemi, M.; Moccari, M. Structure a family of three-step with-memory methods for solving nonlinear equations and their dynamics. Math. Anal. Convex Optim. 2021, 2, 119–137. [Google Scholar]
  39. Sharma, E.; Panday, S.; Mittal, S.K.; Joit, D.M.; Pruteanu, L.L.; Jäntschi, L. Derivative-free families of with- and without-memory iterative methods for solving nonlinear equations and their engineering applications. Mathematics 2023, 14, 4512. [Google Scholar] [CrossRef]
  40. Thangkhenpau, G.; Panday, S.; Bolundut, L.C.; Jäntschi, L. Efficient families of multi-point iterative methods and their self-acceleration with memory for solving nonlinear equations. Symmetry 2023, 15, 1546. [Google Scholar] [CrossRef]
  41. Wang, H.; Liu, H. Note on a cubically convergent Newton-type method under weak conditions. Acta Appl. Math. 2010, 110, 725–735. [Google Scholar] [CrossRef]
  42. Wang, X.; Zhang, T. A new family of Newton-type iterative methods with and without memory for solving nonlinear equations. Calcolo 2014, 51, 1–15. [Google Scholar] [CrossRef]
  43. Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1960. [Google Scholar]
  44. Wang, X. A family of Newton-type iterative methods using some special self-accelerating parameters. Int. J. Comput. Math. 2018, 95, 2112–2127. [Google Scholar] [CrossRef]
  45. Jain, P.; Chand, P.B. Derivative free iterative methods with memory having higher R-order of convergence. Int. J. Nonl. Sci. Numer. Simul. 2020, 21, 641–648. [Google Scholar] [CrossRef]
  46. Zhou, X.; Liu, B. Iterative methods for multiple roots with memory using self-accelerating technique. J. Comput. Appl. Math. 2023, 428, 115181. [Google Scholar] [CrossRef]
  47. Džunić, J.; Petković, M.S.; Petković, L.D. Three-point methods with and without memory for solving nonlinear equations. Appl. Math. Comput. 2012, 218, 4917–4927. [Google Scholar]
  48. Džunić, J.; Petković, M.S. On generalized multipoint root-solvers with memory. J. Comput. Appl. Math. 2012, 236, 2909–2920. [Google Scholar]
  49. Zheng, Q.; Li, J.; Huang, F. Optimal Steffensen-type families for solving nonlinear equations. Appl. Math. Comput. 2011, 217, 9592–9597. [Google Scholar] [CrossRef]
  50. Cordero, A.; Lotfi, T.; Bakhtiari, P.; Torregrosa, J.R. An efficient two-parameter family with memory for nonlinear equations. Numer. Algor. 2015, 68, 323–335. [Google Scholar] [CrossRef]
  51. Petković, M.S. Remarks on “On a general class of multipoint root-finding methods of high computational efficiency”. SIAM J. Numer. Anal. 2011, 49, 1317–1319. [Google Scholar] [CrossRef]
  52. Torkashvand, V.; Kazemi, M. On an efficient family with memory with high order of convergence for solving nonlinear equations. Int. J. Indus. Math. 2020, 12, IJIM-1260. [Google Scholar]
Table 1. The NI, COC1, COC2, and E.I. for the first new memory-updating method (FNMUM).

| Functions | x_0 | [a_0, b_0] | NI | COC1 | COC2 | E.I. = (COC1)^{1/3} |
|---|---|---|---|---|---|---|
| f_1 | 2.9 | [2.9, 3.1] | 5 | 3.973 | 4.095 | 1.584 |
| f_2 | 0.5 | [0.5, 1.5] | 4 | 5.274 | 5.295 | 1.741 |
| f_3 | 1.5 | [0.5, 1] | 5 | 4.566 | 4.621 | 1.659 |
Table 2. The values of A_n and B_n tend to the exact ones.

| n | 1 | 2 | 3 | 4 |
|---|---|---|---|---|
| ERR1 | 4.15 | 3.45 | 0.259 | 1.55 × 10⁻⁴ |
| ERR2 | 0.802 | 2.86 | 1.85 × 10⁻² | 4.69 × 10⁻² |
Table 3. The NI, COC1, COC2, and E.I. for the second new memory-updating method (SNMUM).

| Functions | x_0 | [a_0, b_0] | NI | COC1 | COC2 | E.I. = (COC1)^{1/3} |
|---|---|---|---|---|---|---|
| f_1 | 2.9 | [2.8, 3.1] | 4 | 4.711 | 4.713 | 1.676 |
| f_2 | 0.5 | [0.5, 1.5] | 4 | 5.439 | 5.459 | 1.759 |
| f_3 | 1.5 | [1.5, 2.5] | 5 | 3.627 | 3.692 | 1.536 |
Table 4. The NI, COC1, COC2, and E.I. for the third new memory-updating method (TNMUM).

| Functions | x_0 | [a_0, b_0] | NI | COC1 | COC2 | E.I. = (COC1)^{1/3} |
|---|---|---|---|---|---|---|
| f_1 | 2.9 | [2.95, 3.01] | 5 | 3.677 | 3.794 | 1.544 |
| f_2 | 0.5 | [0.5, 1.4] | 4 | 4.313 | 4.334 | 1.628 |
| f_3 | 1.5 | [0.5, 1.5] | 5 | 4.839 | 5.015 | 1.691 |
Table 5. The NI, COC1, COC2, and E.I. for the accelerated third new memory-updating method (ATNMUM).

| Functions | x_0 | [A_0, B_0] | η_0 | NI | COC1 | COC2 | E.I. = (COC1)^{1/3} |
|---|---|---|---|---|---|---|---|
| f_1 | 2.9 | [13.97, 6.35] | −2 | 4 | 6.913 | 6.959 | 1.905 |
| f_2 | 0.5 | [14.44, 0.33] | 1.5 | 4 | 10.296 | 10.354 | 2.176 |
| f_3 | 2.5 | [1.75, 0.86] | 2 | 5 | 5.635 | 5.625 | 1.780 |
Table 6. The values of A_n and N_2(x_n) tend to the exact ones.

| n | 1 | 2 | 3 |
|---|---|---|---|
| ERR1 | 4.22 | 7.82 × 10⁻² | 1.60 × 10⁻⁴ |
| ERR2 | 5.72 × 10⁻² | 2.92 × 10⁻⁶ | 4.43 × 10⁻¹⁰ |
Table 7. The NI, COC1, COC2, and E.I. for the WMM method.

| Functions | x_0 | [a_0, b_0] | NI | COC1 | COC2 | E.I. = (COC1)^{1/4} |
|---|---|---|---|---|---|---|
| f_1 | 2.9 | [2.9, 3.2] | 5 | 6.447 | 6.369 | 1.594 |
| f_2 | 0.6 | [0.5, 1.5] | 3 | 5.592 | 4.459 | 1.538 |
| f_3 | 1.5 | [0.5, 1] | 5 | 7.576 | 6.796 | 1.659 |
Table 8. The NI, COC1, COC2, and E.I. for the ZLHMM method.

| Functions | x_0 | [a_0, b_0] | NI | COC1 | COC2 | E.I. = (COC1)^{1/3} |
|---|---|---|---|---|---|---|
| f_1 | 2.9 | [2.8, 3.2] | 4 | 5.388 | 7.291 | 1.753 |
| f_2 | 0.6 | [0.6, 2.5] | 6 | 5.854 | 5.754 | 1.802 |
| f_3 | 1.5 | [0.5, 1] | 4 | 5.011 | 4.743 | 1.711 |
Table 9. The NI, COC1, COC2, and E.I. for the CCTMM method.

| Functions | x_0 | [a_0, b_0] | NI | COC1 | COC2 | E.I. = (COC1)^{1/3} |
|---|---|---|---|---|---|---|
| f_1 | 2.9 | [2.5, 3.1] | 3 | 4.959 | 5.695 | 1.705 |
| f_2 | 0.2 | [0.2, 1.5] | 4 | 5.236 | 4.967 | 1.736 |
| f_3 | 2.5 | [0.5, 1] | 4 | 7.833 | 6.299 | 1.986 |
Table 10. The NI, COC1, COC2, and E.I. for the DMM method.

| Functions | x_0 | [a_0, b_0] | NI | COC1 | COC2 | E.I. = (COC1)^{1/3} |
|---|---|---|---|---|---|---|
| f_1 | 2.7 | [2.7, 3.2] | 4 | 11.775 | 8.861 | 2.275 |
| f_2 | 0.6 | [0.5, 2] | 5 | 7.408 | 7.300 | 1.949 |
| f_3 | 2.5 | [0.5, 3] | 3 | 5.755 | 6.479 | 1.792 |
Table 11. The NI, COC1, COC2, and E.I. for the MDMM method.

| Functions | x_0 | [a_0, b_0] | η_0 | NI | COC1 | COC2 | E.I. = (COC1)^{1/3} |
|---|---|---|---|---|---|---|---|
| f_1 | 2.7 | [2.7, 3.2] | −0.2 | 4 | 14.894 | 13.222 | 2.460 |
| f_2 | 0.6 | [0.5, 2] | 0.15 | 5 | 10.658 | 11.959 | 2.200 |
| f_3 | 2.5 | [0.5, 3] | −0.47 | 3 | 6.533 | 21.009 | 1.869 |
Table 12. The NI, COC1, COC2, and E.I. for the CLBTMM method.

| Functions | x_0 | [a_0, b_0] | NI | COC1 | E.I. = (COC1)^{1/3} | COC2 | E.I. = (COC2)^{1/3} |
|---|---|---|---|---|---|---|---|
| f_1 | 2.8 | [2.8, 3.2] | 4 | 5.512 | 1.766 | 8.762 | 2.062 |
| f_3 | 1.5 | [1, 3] | 3 | 7.519 | 1.959 | 9.579 | 2.124 |
| f_4 | −0.5 | [0.5, 2] | 3 | 5.975 | 1.815 | 8.530 | 2.043 |
| f_5 | 1.8 | [0.5, 2.5] | 3 | 10.309 | 2.176 | 9.686 | 2.132 |
Table 13. The NI, COC1, COC2, and E.I. for the CLBTM method.

| Functions | x_0 | [A_0, B_0] | NI | COC1 | E.I. = (COC1)^{1/3} | COC2 | E.I. = (COC2)^{1/3} |
|---|---|---|---|---|---|---|---|
| f_1 | 2.85 | [17.16, 5.78] | 3 | 6.339 | 1.851 | 9.306 | 2.103 |
| f_3 | 1.6 | [4, 0] | 3 | 11.805 | 2.277 | 19.053 | 2.671 |
| f_4 | 1 | [10, 0.1] | 3 | 11.827 | 2.278 | 13.027 | 2.353 |
| f_5 | 2.5 | [1.08, 1.07] | 3 | 9.896 | 2.147 | 18.327 | 2.637 |
