Article

Updating to Optimal Parametric Values by Memory-Dependent Methods: Iterative Schemes of Fractional Type for Solving Nonlinear Equations

1 Center of Excellence for Ocean Engineering, National Taiwan Ocean University, Keelung 202301, Taiwan
2 Department of Mechanical Engineering, National United University, Miaoli 360302, Taiwan
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(7), 1032; https://doi.org/10.3390/math12071032
Submission received: 9 March 2024 / Revised: 25 March 2024 / Accepted: 27 March 2024 / Published: 29 March 2024

Abstract

In this paper, two nonlinear variants of the Newton method are developed for solving nonlinear equations. The derivative-free nonlinear fractional type of the one-step iterative scheme of fourth-order convergence contains three parameters, whose optimal values are obtained by a memory-dependent updating method. Then, as extensions of a one-step linear fractional type method, we explore the fractional types of two- and three-step iterative schemes, which possess sixth- and twelfth-order convergences when the parameters' values are optimal; the efficiency indexes are $\sqrt{6}$ and $\sqrt[3]{12}$, respectively. An extra variable is supplemented into the second-degree Newton polynomial for the data interpolation of the two-step iterative scheme of fractional type, and a relaxation factor is accelerated by the memory-dependent method. Three memory-dependent updating methods are developed in the three-step iterative schemes of linear fractional type, whose performances are greatly strengthened. In the three-step iterative scheme, when the first step involves using the nonlinear fractional type model, the order of convergence is raised to sixteen. The efficiency index also increases to $\sqrt[3]{16}$, and a third-degree Newton polynomial is taken to update the values of the optimal parameters.

1. Introduction

We consider a second-order nonlinear ordinary differential equation subject to boundary values:
u''(y) = F(y, u(y), u'(y)),
u(0) = a, \quad u(1) = b,
where $F(y, u(y), u'(y)): [0, 1] \times \mathbb{R}^2 \to \mathbb{R}$ is a given nonlinear continuous function, and a and b are given constants. We integrate Equation (1), starting from the initial values $u(0) = a$ and $u'(0) = x$, where x is an unknown value determined by $u(1) = b$ in Equation (2), which results in a nonlinear equation:
f(x) = 0,
where $f(x): \mathbb{R} \to \mathbb{R}$ is a given continuous function, not necessarily differentiable. Therefore, the solutions of Equations (1) and (2) can be obtained by solving the implicit nonlinear equation $f(x) = u(1, x) - b = 0$ for the root x, where $u(1, x)$ is the value of $u(y)$ at $y = 1$.
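To make the reduction concrete, the following Python sketch (an illustration added here, not part of the original paper) builds $f(x) = u(1, x) - b$ by integrating Equation (1) with a simple fourth-order Runge-Kutta marcher; the function names and the step count are arbitrary choices.

def shoot(F, a, b, slope, steps=1000):
    """Integrate u'' = F(y, u, u') from y = 0 to y = 1 with RK4,
    starting from u(0) = a, u'(0) = slope, and return f(slope) = u(1) - b."""
    h = 1.0 / steps
    y, u, v = 0.0, a, slope           # v stores u'
    for _ in range(steps):
        k1u, k1v = v, F(y, u, v)
        k2u, k2v = v + 0.5*h*k1v, F(y + 0.5*h, u + 0.5*h*k1u, v + 0.5*h*k1v)
        k3u, k3v = v + 0.5*h*k2v, F(y + 0.5*h, u + 0.5*h*k2u, v + 0.5*h*k2v)
        k4u, k4v = v + h*k3v,     F(y + h,     u + h*k3u,     v + h*k3v)
        u += h*(k1u + 2*k2u + 2*k3u + k4u)/6
        v += h*(k1v + 2*k2v + 2*k3v + k4v)/6
        y += h
    return u - b

# Example from Section 4.1: u'' = 3u^2/2, u(0) = 4, u(1) = 1, exact slope u'(0) = -8
F = lambda y, u, v: 1.5*u**2
print(shoot(F, 4.0, 1.0, -8.0))   # close to zero at the exact slope

A root-finder applied to shoot(.) then supplies the unknown initial slope x.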
The linear fractional one-step iterative scheme below [1],
x_{n+1} = x_n - \frac{f(x_n)}{a_0 + b_0 f(x_n)},
is cubically convergent if $a_0 = f'(r)$ and $b_0 = f''(r)/(2f'(r))$, where $f(r) = 0$ and $f'(r) \neq 0$. The parameters $a_0$ and $b_0$ can be updated to speed up the convergence by the memory-dependent method [2]. One can refer to [3,4,5,6,7,8,9,10] for more memory-dependent iterative methods.
For the Newton method, there exist some weak points, as pointed out in [11]. In addition, for an odd function with $f(-x) = -f(x)$, there exist two-cycle points of the Newton method. Let
N(x) = x - \frac{f(x)}{f'(x)}
be the mapping function of the Newton method. A cyclic point is determined by
N(x) = x - \frac{f(x)}{f'(x)} = -x,
with $f(-x) = -f(x)$ and $f'(-x) = f'(x)$, which becomes a nonlinear equation:
\frac{f(x)}{f'(x)} - 2x = 0.
When $x_c$ is a solution of Equation (7), $-x_c$ is also a solution, because $f(-x_c) = -f(x_c)$ and $f'(-x_c) = f'(x_c)$. The pair $(x_c, -x_c)$ are two-cycle points of the Newton method, with the properties $N(x_c) = -x_c$ and $N(-x_c) = x_c$; hence, $N^2(x_c) = x_c$ and $N^2(-x_c) = -x_c$.
For $f(x) = \arctan x$ as an instance, it follows from Equation (7) that
\tan\frac{2x}{1 + x^2} - x = 0,
whose solutions are $x_c = 1.391745200270735$ and $-x_c = -1.391745200270735$, which are two-cycle points of the Newton method for the function $f(x) = \arctan x$. When the initial guess satisfies $|x_0| < x_c$, the Newton method is convergent; however, when $|x_0| > x_c$, the Newton method is divergent.
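A few lines of Python (an illustrative check added here, not from the paper) reproduce this two-cycle behavior of the Newton map for $\arctan x$:

import math

def newton_step(x):
    # N(x) = x - f(x)/f'(x) for f(x) = arctan(x)
    return x - math.atan(x) * (1.0 + x*x)

xc = 1.391745200270735
print(newton_step(xc))    # approximately -xc: the iterates oscillate between +xc and -xc
x = 1.45                  # |x0| > xc: the Newton iterates grow without bound
for _ in range(5):
    x = newton_step(x)
    print(x)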
He et al. [12] proposed an iterative algorithm for approximating a common element of a set of fixed points of a nonexpansive mapping and the solutions of variational inequality on Hadamard manifolds. They proved that the sequence generated by the suggested algorithm strongly converges to the common solution of the fixed point problem. After that, Zhou et al. [13] built an accurate threshold representation theory, and on that basis, a fast and efficient iterative threshold algorithm of log-sum regularization was exploited. Note that the log-sum regularization possesses an exceptionally strong capability to solve the sparsity problem.
We plan to develop a new iterative scheme to overcome these drawbacks of the Newton method. The idea behind the development of the new iterative scheme is the SOR technique [14] for the following system of linear equations:
\mathbf{A}\mathbf{x} = \mathbf{b},
where
\mathbf{A} = \mathbf{D} - \mathbf{U} - \mathbf{L},
and $\mathbf{D}$, $\mathbf{U}$, and $\mathbf{L}$, respectively, represent the diagonal matrix, a strictly upper triangular matrix, and a strictly lower triangular matrix of $\mathbf{A}$.
An equivalent linear system from Equations (9) and (10) is
\mathbf{D}\mathbf{x} - \mathbf{U}\mathbf{x} - \mathbf{L}\mathbf{x} = \mathbf{b}.
Equation (11) is multiplied by w, and then $\mathbf{D}\mathbf{x}$ is added on both sides,
\mathbf{D}\mathbf{x} - w\mathbf{L}\mathbf{x} = w\mathbf{b} + \mathbf{D}\mathbf{x} - w\mathbf{D}\mathbf{x} + w\mathbf{U}\mathbf{x};
the corresponding iterative form is the SOR [15]:
(\mathbf{D} - w\mathbf{L})\mathbf{x}_{n+1} = w\mathbf{b} + (1 - w)\mathbf{D}\mathbf{x}_n + w\mathbf{U}\mathbf{x}_n.
Traub's technique is a typical method with memory, in which data computed in the previous iteration are adopted in the following iteration [16]:
w_n = x_n + \gamma_n f(x_n), \quad x_{n+1} = x_n - \frac{f(x_n)}{f[x_n, w_n]}, \quad \gamma_{n+1} = -\frac{1}{f[x_n, x_{n+1}]}.
Given $x_0$ and $\gamma_0$ and incorporating the memory of $x_n$, Traub's iterative scheme can proceed to find the solution of $f(x) = 0$ upon convergence. For a recent report on the progress of the memory method with accelerating parameters in the one-step iterative method, one can refer to [2], while for the two-step iterative method, one can refer to [11]. One major goal of the paper is the development of multi-step iterative schemes with a new memory method that determines the accelerating parameters by updating them with information at the current step.
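A minimal Python sketch of Traub's with-memory scheme as written above is given below; it is an illustration under the present reconstruction (the test function, the starting values, and the re-evaluation of $f(x_{n+1})$ at the top of the next pass are choices made here for clarity, not the authors' implementation).

def traub_with_memory(f, x0, gamma0=0.01, tol=1e-14, itmax=50):
    """Traub's one-step method with memory: the accelerating parameter
    gamma_{n+1} is rebuilt from data already computed in the current step."""
    x, gamma = x0, gamma0
    for _ in range(itmax):
        fx = f(x)
        if abs(fx) < tol:
            return x
        w = x + gamma*fx                                   # supplementary node
        dd = (fx - f(w)) / (x - w)                         # divided difference f[x_n, w_n]
        x_new = x - fx/dd
        gamma = -1.0 / ((f(x_new) - fx)/(x_new - x))       # gamma_{n+1} = -1/f[x_n, x_{n+1}]
        x = x_new
    return x

print(traub_with_memory(lambda x: x**3 + 4*x**2 - 10, 1.5))   # root ~ 1.365230013414097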
We have arranged the other contents of the paper as follows. Two types of one-step iterative schemes are introduced in Section 2 as nonlinear perturbations of the Newton method. Section 3 gives a local convergence analysis of them, obtaining the optimal values of the parameters for fourth-order convergence. In Section 4, we evaluate these two one-step iterative schemes using the optimal values; a memory-dependent technique is developed for updating the optimal values of the nonlinear one-step iterative scheme of fractional type. In Section 5, we develop multi-step iterative schemes of fractional type, giving a detailed convergence analysis. Numerical experiments of the fractional type iterative schemes are executed in Section 6. In Section 7, we derive the memory-dependent method for determining the critical parameters in three linear fractional type iterative schemes, and new updating methods are developed for the linear fractional type three-step iterative scheme. An accelerated two-step memory-dependent iterative scheme is developed in Section 8. A nonlinear three-step iterative scheme of fractional type is developed in Section 9, where an accelerated memory-dependent method based on the Newton interpolant is used to update three critical parameters. Finally, the achievements are summarized in Section 10.

2. Nonlinear Perturbations of Newton Method

The idea leading to Equations (12) and (13) from Equation (9) motivates a nonlinear perturbation of the Newton method, which consists of introducing a parameter w, adding the same term to both sides, and generating an iterative form. We realize it for Equation (3) as follows. Adding $x f'(x) + x H(f^2(x))$ to both sides of Equation (3) extends it to
x f'(x) + f(x) + x H(f^2(x)) = x f'(x) + x H(f^2(x)).
Then, $f(x)$ is split into $(1 + b_0 x - b_0 x) f(x)$, such that
x f'(x) + (1 + b_0 x - b_0 x) f(x) + x H(f^2(x)) = x f'(x) + x H(f^2(x)).
Next, we move $(1 + b_0 x) f(x)$ to the right-hand side, which results in
x f'(x) - b_0 x f(x) + x H(f^2(x)) = x f'(x) - (1 + b_0 x) f(x) + x H(f^2(x)),
where $b_0$ is a weight factor and H is a weight function; the corresponding iterative form is
[f'(x_n) - b_0 f(x_n) + H(f^2(x_n))] x_{n+1} = x_n f'(x_n) - b_0 x_n f(x_n) + x_n H(f^2(x_n)) - f(x_n),
which results in
x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n) - b_0 f(x_n) + H(f^2(x_n))}.
It is a new one-step iterative scheme to solve Equation (3), including a parameter $b_0 \in \mathbb{R}$ and a weight function H, to be assigned. If $H(f^2(x_n)) = 0$, Equation (16) is the continuation Newton-like method developed by the author of [17]. If one takes $b_0 = 0$ and $H(f^2(x_n)) = 0$, the Newton method (NM) is recovered.
Compared with a third-order iterative scheme, namely the Halley method (HM) [18],
x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n) - \dfrac{f(x_n) f''(x_n)}{2 f'(x_n)}},
Equation (16) possesses two advantages: the convergence order increases to four, and $f''(x_n)$ is not needed.
As a variant of Equation (16), we consider the following nonlinear one-step iterative scheme of fractional type:
x_{n+1} = x_n - \frac{f(x_n)}{a_0 + b_0 f(x_n) + H(f^2(x_n))},
which includes two parameters $a_0, b_0 \in \mathbb{R}$ and a weight function H, to be assigned. Compared with Equation (17), Equation (18) does not need the differential terms $f'(x_n)$ and $f''(x_n)$. For $a_0 = f'(r)$, $b_0 = f''(r)/(2f'(r))$, and $H(f^2(x_n)) = 0$ in Equation (18), the resulting iterative scheme was proven to have third-order convergence in [19].
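As a small illustration (not the authors' code), the one-step scheme of Equation (18) can be sketched in Python with fixed parameters; the values below are those quoted in Section 4.1 for $f(x) = \arctan x$.

import math

def frac_one_step(f, x0, a0, b0, H, tol=1e-15, itmax=50):
    """One-step derivative-free iterative scheme of fractional type, Equation (18)."""
    x = x0
    for _ in range(itmax):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / (a0 + b0*fx + H(fx*fx))
    return x

# Parameter values quoted in Section 4.1 for f(x) = arctan(x), root r = 0
q0 = -1.0/3.0
H = lambda t: q0*(1.0 - math.exp(-t))
print(frac_one_step(math.atan, 1.4, a0=1.0, b0=0.0, H=H))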
To improve the efficiency of the one-step iterative schemes, a lot of iterative schemes based on two-step and three-step methods were depicted in [20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36], and some were based on the multi-composition of the functions [37,38,39,40].

3. Convergence Analysis of Equations (16) and (18)

Theorem 1. 
The function $f: I \subseteq \mathbb{R} \to \mathbb{R}$ is sufficiently differentiable on the domain I, and $r \in I$ is a simple root with $f(r) = 0$ and $f'(r) \neq 0$. With
b_0 = a_2 = \frac{f''(r)}{2 f'(r)}, \quad H(0) = 0, \quad c_0 = H'(0) = \frac{a_2^2 - 2 a_3}{f'(r)} = \frac{f''(r)^2}{4 f'(r)^3} - \frac{f'''(r)}{3 f'(r)^2},
Equation (16) is of fourth-order convergence.
Proof. 
Let r be a simple solution of $f(x) = 0$ with $f(r) = 0$ and $f'(r) \neq 0$. We suppose that $x_n$ is sufficiently close to r, such that
e_n = x_n - r
is a sufficiently small number. Subtracting Equation (20) from $e_{n+1} = x_{n+1} - r$ renders
e_{n+1} = e_n + x_{n+1} - x_n.
Using the Taylor series yields
f(x_n) = f'(r)[e_n + a_2 e_n^2 + a_3 e_n^3 + a_4 e_n^4 + \cdots], \quad a_n := \frac{f^{(n)}(r)}{n!\, f'(r)}, \ n = 2, \ldots,
f'(x_n) = f'(r)[1 + 2 a_2 e_n + 3 a_3 e_n^2 + 4 a_4 e_n^3 + \cdots],
from which we have
H(f^2(x_n)) = H(0) + H'(0) f^2(x_n) + \cdots = c_0 f'(r)^2 [e_n + a_2 e_n^2 + a_3 e_n^3 + a_4 e_n^4 + \cdots]^2 + \cdots,
due to $H(0) = 0$ and $c_0 = H'(0)$. Then, we have
\frac{f(x_n)}{f'(x_n) - b_0 f(x_n) + H(f^2(x_n))} = \frac{e_n + a_2 e_n^2 + a_3 e_n^3 + a_4 e_n^4 + \cdots}{1 + (2 a_2 - b_0) e_n + [3 a_3 - b_0 a_2 + c_0 f'(r)] e_n^2 + [4 a_4 - b_0 a_3 + 2 a_2 c_0 f'(r)] e_n^3 + \cdots} = e_n + C_4 e_n^4 + \cdots,
where Equation (19) was used, and
C_4 = a_3 (b_0 - 2 a_2) + a_2 [b_0 a_2 - 3 a_3 - c_0 f'(r)] + a_2 (b_0 - 2 a_2)^2 + b_0 a_3 - 3 a_4 - 2 a_2 c_0 f'(r) + 2 (2 a_2 - b_0)[3 a_3 - b_0 a_2 + c_0 f'(r)].
Inserting Equation (25) into Equation (16) and using Equation (21), we can obtain
e_{n+1} = e_n - e_n - C_4 e_n^4 + \cdots = -C_4 e_n^4 + \cdots.
By $b_0 = a_2$ and $c_0 f'(r) = a_2^2 - 2 a_3$, $C_4$ in Equation (26) is reduced to
C_4 = 5 a_2 a_3 - a_2^3 - 3 a_4,
and the error Equation (27) is simplified to
e_{n+1} = (a_2^3 - 5 a_2 a_3 + 3 a_4) e_n^4 + \cdots.
Since $a_2$, $a_3$, and $a_4$ are constants, and the power of $e_n$ is four, the fourth-order convergence of Equation (16) is proven. □
Theorem 2. 
The function $f: I \subseteq \mathbb{R} \to \mathbb{R}$ is sufficiently differentiable on the domain I, and $r \in I$ is a simple root with $f(r) = 0$ and $f'(r) \neq 0$. With
a_0 = f'(r), \quad b_0 = \frac{f''(r)}{2 f'(r)}, \quad H(0) = 0, \quad q_0 = H'(0) = \frac{a_3 - a_2^2}{f'(r)} = \frac{f'''(r)}{6 f'(r)^2} - \frac{f''(r)^2}{4 f'(r)^3},
Equation (18) is of fourth-order convergence.
Proof. 
We take
G(e_n) = e_n + a_2 e_n^2 + a_3 e_n^3 + a_4 e_n^4 + \cdots,
such that by Equation (22),
f(x_n) = G(e_n) f'(r).
Inserting the above and Equation (24) into Equation (18), we have
\frac{f(x_n)}{a_0 + b_0 f(x_n) + H(f^2(x_n))} = \frac{G(e_n)}{1 + b_0 G(e_n) + q_0 f'(r) G^2(e_n) + \cdots} = G(e_n)[1 - b_0 G(e_n) - q_0 f'(r) G^2(e_n) + b_0^2 G^2(e_n) + 2 b_0 q_0 f'(r) G^3(e_n) + \cdots] = G(e_n) - b_0 G^2(e_n) + [b_0^2 - q_0 f'(r)] G^3(e_n) + 2 b_0 q_0 f'(r) G^4(e_n) + \cdots,
where $a_0 = f'(r)$, $H(0) = 0$, and $q_0 = H'(0)$ are used.
Then, we have
e_{n+1} = e_n - G(e_n) + b_0 G^2(e_n) - [b_0^2 - q_0 f'(r)] G^3(e_n) - 2 b_0 q_0 f'(r) G^4(e_n) + \cdots.
By Equation (30) and using $b_0 = a_2$ and $q_0 f'(r) = a_3 - a_2^2$, the error equation is simplified to
e_{n+1} = (3 a_2 a_3 - 3 a_2^3 - a_4) e_n^4 + \cdots.
The fourth-order convergence of Equation (18) is thus proven. □

4. Numerical Results Based on Theorems 1 and 2

4.1. Numerical Results

In [41], the computed order of convergence (COC) of an iterative scheme is evaluated by
\mathrm{COC} := \frac{\ln|(x_{n+1} - r)/(x_n - r)|}{\ln|(x_n - r)/(x_{n-1} - r)|}.
We consider $f(x) = \arctan x$, as discussed in Section 1. For Equation (16), using $b_0 = 0$, $c_0 = 2/3$, and $H(t) = c_0 t$, and for Equation (18), using $a_0 = 1$, $b_0 = 0$, $q_0 = -1/3$, and $H(t) = q_0(1 - e^{-t})$, we compare the number of iterations (NIs) and COC with those computed by the Newton method (NM) for different initial guesses $x_0$, as shown in Table 1. For the initial guess $x_0 = -1.4$, the NM is divergent owing to $|x_0| > x_c$, where $x_c = 1.391745200270735$ is a solution of Equation (8). The many decimal digits guarantee that $x_c$ solves Equation (8) to within the error tolerance $\varepsilon = 10^{-15}$.
Next, we solve Equations (1) and (2) with $F = 3u^2/2$, $a = 4$, and $b = 1$, and compare the results with the exact solution $u(y) = 4/(y+1)^2$. In Equation (18), we take $H(t) = q_0(1 - e^{-t})$; with $a_0 = 2.27615$, $b_0 = 0.149978$, and $q_0 = 0.15$, through eight iterations we obtain the slope $u'(0) = -8.000000000000028$, which is very close to the exact one $r = -8$; the maximum error of $u(y)$ is $5.33 \times 10^{-15}$.

4.2. A Memory-Updating Method for Equation (18)

We update the values of $a_0$, $b_0$, and $q_0$ in Equation (29) for the iterative Scheme (18) by the memory-dependent method. For this purpose, an extra variable is supplemented by
w_n = x_n - \frac{f(x_n)}{f'(r)}.
Now, we have $x_n$ and $w_n$ as the current data in memory and $x_{n+1}$ and $w_{n+1}$ as the updated data. Accordingly, we can construct the second- and third-degree Newton polynomials by
N_2(x; x_n, w_n, x_{n+1}) = f(x_n) + f[x_n, w_n](x - x_n) + f[x_n, w_n, x_{n+1}](x - x_n)(x - w_n),
N_3(x; x_n, w_n, x_{n+1}, w_{n+1}) = N_2(x; x_n, w_n, x_{n+1}) + f[x_n, w_n, x_{n+1}, w_{n+1}](x - x_n)(x - w_n)(x - x_{n+1}),
where $(x_n, w_n, x_{n+1})$ are the data points of $N_2$; $(x_n, w_n, x_{n+1}, w_{n+1})$ are the data points of $N_3$; and
f[x_n, w_n] = \frac{f(x_n) - f(w_n)}{x_n - w_n}, \quad f[x_n, w_n, x_{n+1}] = \frac{f[x_n, w_n] - f[w_n, x_{n+1}]}{x_n - x_{n+1}}, \quad f[x_n, w_n, x_{n+1}, w_{n+1}] = \frac{f[x_n, w_n, x_{n+1}] - f[w_n, x_{n+1}, w_{n+1}]}{x_n - w_{n+1}}.
In Equation (29), we take $A = a_0$, $B = b_0$, and $Q = H'(0)$. By Theorem 2, we have a one-step memory-dependent iterative Scheme (18): (i) giving $A_0$, $B_0$, and $Q_0$, and $w_0 = x_0 - f(x_0)/A_0$, and (ii) calculating for $n = 0, 1, \ldots$,
x_{n+1} = x_n - \frac{f(x_n)}{A_n + B_n f(x_n) + H(f^2(x_n))}, \quad \hat{A}_{n+1} = N_2'(x_{n+1}; x_n, w_n, x_{n+1}),
w_{n+1} = x_{n+1} - \frac{f(x_{n+1})}{\hat{A}_{n+1}}, \quad A_{n+1} = N_3'(w_{n+1}; x_n, w_n, x_{n+1}, w_{n+1}), \quad D_{n+1} = N_3''(w_{n+1}; x_n, w_n, x_{n+1}, w_{n+1}), \quad E_{n+1} = f[x_n, w_n, x_{n+1}, w_{n+1}], \quad B_{n+1} = \frac{D_{n+1}}{2 A_{n+1}},
Q_{n+1} = \frac{E_{n+1}}{A_{n+1}^2} - \frac{D_{n+1}^2}{4 A_{n+1}^3},
where $H(f^2(x_n)) = Q_n f^2(x_n)$ or $H(f^2(x_n)) = Q_n(1 - \exp[-f^2(x_n)])$ can be selected.
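A compact Python sketch of this one-step memory-updating loop is given below. It assumes that the primed symbols above denote derivatives of the Newton polynomials, uses the choice $H(t) = Q t$, and adds simple safeguards against coincident nodes; the test problem and the starting values of A, B follow Table 3, but the code is only an illustration under these assumptions, not the authors' implementation.

def memory_one_step(f, x0, A0, B0, Q0, tol=1e-15, itmax=30):
    """One-step scheme (18) with the memory-dependent parameter updates of Section 4.2.
    A ~ f'(r), B ~ f''(r)/(2 f'(r)), Q ~ H'(0); H(t) = Q*t is used here."""
    A, B, Q = A0, B0, Q0
    x = x0
    w = x - f(x)/A                                  # supplementary node, Equation (36)
    for _ in range(itmax):
        fx = f(x)
        if abs(fx) < tol:
            break
        x1 = x - fx/(A + B*fx + Q*fx*fx)            # Equation (37) with H(t) = Q t
        fw, fx1 = f(w), f(x1)
        if abs(fx1) < tol:
            return x1
        d_xw = (fx - fw)/(x - w)                    # f[x_n, w_n]
        d_wx1 = (fw - fx1)/(w - x1)                 # f[w_n, x_{n+1}]
        d_xwx1 = (d_xw - d_wx1)/(x - x1)            # f[x_n, w_n, x_{n+1}]
        A_hat = d_xw + d_xwx1*(2*x1 - x - w)        # N2'(x_{n+1}; x_n, w_n, x_{n+1})
        w1 = x1 - fx1/A_hat
        if w1 == x1:
            return x1
        fw1 = f(w1)
        d_x1w1 = (fx1 - fw1)/(x1 - w1)
        d_wx1w1 = (d_wx1 - d_x1w1)/(w - w1)
        E = (d_xwx1 - d_wx1w1)/(x - w1)             # f[x_n, w_n, x_{n+1}, w_{n+1}]
        # derivatives of the third-degree Newton polynomial evaluated at w_{n+1}
        A = (d_xw + d_xwx1*((w1 - x) + (w1 - w))
             + E*((w1 - w)*(w1 - x1) + (w1 - x)*(w1 - x1) + (w1 - x)*(w1 - w)))
        D = 2*d_xwx1 + 2*E*((w1 - x) + (w1 - w) + (w1 - x1))
        B = D/(2*A)
        Q = E/A**2 - D**2/(4*A**3)
        x, w = x1, w1
    return x

# x^3 + 4x^2 - 10 = 0 with the starting values of Table 3 (Q_0 = 0 is one of the tested values)
print(memory_one_step(lambda t: t**3 + 4*t**2 - 10, 1.5, A0=15.25, B0=0.5082, Q0=0.0))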
We consider $f(x) = x^3 - x = 0$, whose solutions are $-1$, 0, and 1. Using $x_0 = 5$, $H(f^2(x_n)) = Q_n f^2(x_n)$, and with $A_0 = 2.25$ and $B_0 = 4/3$, Table 2 lists the results for different values of $Q_0$, where $\mathrm{E.I.} = \mathrm{COC}^{1/2}$ signifies the efficiency index for two evaluations of the functions $f(x_n)$ and $f(w_n)$.
We consider $f(x) = x^3 + 4x^2 - 10 = 0$, whose solution is 1.365230013414097. Starting from $x_0 = 1.5$, using $H(f^2(x_n)) = Q_n(1 - \exp[-f^2(x_n)])$, and with $A_0 = 15.25$ and $B_0 = 0.5082$, Table 3 lists the results for different values of $Q_0$.
Table 2 and Table 3 reveal that the values of COC can exceed the theoretical value of four if a good choice of $Q_0$ is taken for the iterative Scheme (18) with the memory-dependent technique.
We solve Equations (1) and (2) with $F = 3u^2/2$, $a = 4$, and $b = 1$ again by using the one-step memory-dependent accelerating technique for the iterative Scheme (18) with $H(f^2(x_n)) = Q_n(1 - \exp[-f^2(x_n)])$. We take $A_0 = 2.301$, $B_0 = 0.149$, and $Q_0 = 0.1$; through three iterations, we obtain the same results as those shown in Section 4.1; the memory-dependent technique converges faster than the iteration without memory.
We consider the Troesch problem [42]:
u''(y) = \lambda \sinh[\lambda u(y)], \quad u(0) = 0, \quad u(1) = 1.
When λ is large, a sharp boundary layer appears near the right end.
We solve Equation (43) with λ = 5 by using the one-step memory-dependent accelerating technique for the iterative Scheme (18) with $H(f^2(x_n)) = Q_n(1 - \exp[-f^2(x_n)])$. We take $A_0 = 26.185$, $B_0 = 0$, and $Q_0 = 0.05$, and through four iterations, we obtain COC = 3.473, $u'(0) = 0.04575046140644138$, and a solution whose error at the right end is $8.88 \times 10^{-16}$. In Table 4, we list the numerical results at some points for the case λ = 5. It is clear that the present method is more accurate than the B-spline method [43].
The Bratu problem is used as a benchmark problem to test numerical methods for solving nonlinear boundary value problems [44]:
u''(y) = -\lambda \exp(u(y)), \quad u(0) = 0, \quad u(1) = 0.
The exact solution is
u(y) = -2 \ln\left[\frac{\cosh\left(\left(y - \frac{1}{2}\right)\frac{\theta}{2}\right)}{\cosh\frac{\theta}{4}}\right],
where θ is solved from the following equation:
\theta = \sqrt{2\lambda}\cosh\frac{\theta}{4}.
For λ = 3, θ = 3.373507764285891 and θ = 6.576569259254376 are obtained by the iterative Scheme (18) with NI = 4 and NI = 3, respectively. When we apply the NM to solve Equation (46), we cannot find the second solution; for the first solution, it requires 22 iterations.
We solve Equation (44) by using the one-step memory-dependent accelerating technique for the iterative Scheme (18) with $H(f^2(x_n)) = Q_n(1 - \exp[-f^2(x_n)])$. For the first solution, we take $A_0 = 0.309$, $B_0 = 0.302$, and $Q_0 = 0.01$, and through three iterations, we obtain the slope $u'(0) = 2.319602258081590$; the maximum error of $u(y)$ is $2.11 \times 10^{-15}$. For the second solution, we take $A_0 = 0.341$, $B_0 = 0.158$, and $Q_0 = 0.01$, and through four iterations, we obtain the slope $u'(0) = 6.103381294149195$; the maximum error of $u(y)$, $y \in [0, 1]$, is $8.3 \times 10^{-8}$.

5. Multi-Step Iterative Schemes of Fractional Type

As the extensions of Equation (4), we propose the following fractional type two-step iterative scheme:
y_n = x_n - \frac{f(x_n)}{a_0 + b_0 f(x_n)}, \quad x_{n+1} = y_n - \frac{f(y_n)}{a_0 + b_0 f(y_n)},
as well as the fractional type three-step iterative scheme:
y_n = x_n - \frac{f(x_n)}{a_0 + b_0 f(x_n)}, \quad z_n = y_n - \frac{f(y_n)}{a_0 + b_0 f(y_n)}, \quad x_{n+1} = z_n - \frac{f(z_n)}{a_0 + b_0 f(z_n)}.
Theorem 3. 
The function $f: I \subseteq \mathbb{R} \to \mathbb{R}$ is sufficiently differentiable on the domain I, and $r \in I$ is a simple root with $f(r) = 0$ and $f'(r) \neq 0$. If
a_0 = c_1 = f'(r), \quad b_0 = \frac{c_2}{c_1} = \frac{f''(r)}{2 f'(r)},
then the iterative Scheme (4) has third-order convergence.
Proof. 
Refer to [19] for a different approach to the proof. By Equation (22),
f(x_n) = c_1 e_n + c_2 e_n^2 + c_3 e_n^3 + \cdots, \quad c_n := \frac{f^{(n)}(r)}{n!}, \ n = 1, \ldots,
where $c_n = f'(r) a_n$, $n = 2, \ldots$.
Let
F(e_n) := \frac{f(x_n)}{a_0 + b_0 f(x_n)}.
In view of Equation (50), $f(x_n)$ is a function of $e_n$; hence, F is deemed to be a function of $e_n$. It follows from Equations (4), (21), and (51) that
e_{n+1} = e_n - F(e_n) := e_n - \frac{q(e_n)}{p(e_n)},
where
q(e_n) = f(x_n) = c_1 e_n + c_2 e_n^2 + c_3 e_n^3 + \cdots,
p(e_n) = a_0 + b_0 f(x_n) = a_0 + b_0 (c_1 e_n + c_2 e_n^2 + c_3 e_n^3 + \cdots).
Invoking the Taylor series for $F(e_n)$ generates the following:
F(e_n) = F(0) + F'(0) e_n + \frac{F''(0)}{2} e_n^2 + \frac{F'''(0)}{6} e_n^3 + \cdots.
It is apparent from Equations (53) and (54) that
q(0) = 0, \quad q'(0) = c_1, \quad q''(0) = 2 c_2,
p(0) = a_0, \quad p'(0) = b_0 c_1, \quad p''(0) = 2 b_0 c_2.
By inserting $e_n = 0$ into the formulas
F(e_n) = \frac{q(e_n)}{p(e_n)}, \quad F'(e_n) = \frac{q'(e_n)}{p(e_n)} - \frac{p'(e_n) q(e_n)}{p^2(e_n)}, \quad F''(e_n) = \frac{q''(e_n)}{p(e_n)} - \frac{2 p'(e_n) q'(e_n)}{p^2(e_n)} - \frac{p''(e_n) q(e_n)}{p^2(e_n)} + \frac{2 p'(e_n)^2 q(e_n)}{p^3(e_n)},
we have
F(0) = \frac{q(0)}{p(0)}, \quad F'(0) = \frac{q'(0)}{p(0)} - \frac{p'(0) q(0)}{p^2(0)}, \quad F''(0) = \frac{q''(0)}{p(0)} - \frac{2 p'(0) q'(0)}{p^2(0)} - \frac{p''(0) q(0)}{p^2(0)} + \frac{2 p'(0)^2 q(0)}{p^3(0)}.
Then, from Equations (56) and (57), it follows that
F(0) = 0, \quad F'(0) = \frac{c_1}{a_0}, \quad F''(0) = \frac{2 c_2}{a_0} - \frac{2 b_0 c_1^2}{a_0^2},
which, with the help of Equations (49) and (60), generates
F(0) = 0, \quad F'(0) = 1, \quad F''(0) = 0.
Hence, Equation (55) becomes
F(e_n) = e_n + \frac{F'''(0)}{6} e_n^3 + \cdots,
and from Equation (52), we can obtain
e_{n+1} = e_n - F(e_n) = e_n - e_n - \frac{F'''(0)}{6} e_n^3 + \cdots = -\frac{F'''(0)}{6} e_n^3 + \cdots.
The third-order convergence is thus proven. □
Theorem 4. 
The function $f: I \subseteq \mathbb{R} \to \mathbb{R}$ is sufficiently differentiable on the domain I, and $r \in I$ is a simple root with $f(r) = 0$ and $f'(r) \neq 0$; the iterative schemes in Equations (47) and (48), with the optimal values $a_0 = c_1$ and $b_0 = c_2/c_1$, have sixth- and twelfth-order convergences, respectively.
Proof. 
When ϵ is a small quantity, we have the following series:
\frac{1}{1 + \epsilon} = 1 - \epsilon + \epsilon^2 - \epsilon^3 + \cdots.
We rewrite Equation (62) as
F(e_n) = e_n + C_3 e_n^3 + C_4 e_n^4 + C_5 e_n^5 + \cdots, \quad C_n := \frac{F^{(n)}(0)}{n!}, \ n = 3, \ldots.
Then, from the first equation in Equation (47) and Equations (51), (65), and (20), it follows that
y_n = x_n - F(e_n) = r + \epsilon,
where
\epsilon := -C_3 e_n^3 - C_4 e_n^4 - C_5 e_n^5.
We expand $f(y_n)$ around r, with $f(r) = 0$, as follows:
f(y_n) = f(r + \epsilon) = f(r) + f'(r)\epsilon + f''(r)\epsilon^2/2 + \cdots = f'(r)\epsilon + O(e_n^6).
Inserting Equations (66) and (68) into the second equation in Equation (47) yields
x_{n+1} = r + \epsilon - \frac{f'(r)\epsilon + O(e_n^6)}{a_0 + b_0[f'(r)\epsilon + O(e_n^6)]};
owing to Equations (20) and (49), it becomes
e_{n+1} = \epsilon - \frac{\epsilon + O(e_n^6)}{1 + b_0\epsilon + O(e_n^6)}.
Upon using Equations (64) and (67), we can obtain
e_{n+1} = \epsilon - \epsilon(1 - b_0\epsilon + b_0^2\epsilon^2 + \cdots) = b_0\epsilon^2 - b_0^2\epsilon^3 + \cdots = b_0(C_3 e_n^3 + C_4 e_n^4 + C_5 e_n^5)^2 + \cdots = b_0 C_3^2 e_n^6 + \cdots.
Thus, we complete the proof that the iterative Scheme (47) has sixth-order convergence.
Inserting Equations (66) and (68) into the second equation in Equation (48) and using Equations (49) and (64) yields
z_n = r + \epsilon - \frac{f'(r)\epsilon + O(e_n^6)}{a_0 + b_0[f'(r)\epsilon + O(e_n^6)]} = r + b_0\epsilon^2 - b_0^2\epsilon^3 + \cdots.
Then, we expand $f(z_n)$ around the root r in terms of the Taylor series:
f(z_n) = f'(r)(b_0\epsilon^2 - b_0^2\epsilon^3) + O(e_n^{12}).
Upon inserting Equations (72) and (73) into the last equation in Equation (48), we have
x_{n+1} = r + b_0\epsilon^2 - b_0^2\epsilon^3 - \frac{f'(r)(b_0\epsilon^2 - b_0^2\epsilon^3) + O(e_n^{12})}{a_0 + b_0[f'(r)(b_0\epsilon^2 - b_0^2\epsilon^3) + O(e_n^{12})]},
which, owing to Equations (20), (49), (64), and (67), becomes
e_{n+1} = b_0\epsilon^2 - b_0^2\epsilon^3 - (b_0\epsilon^2 - b_0^2\epsilon^3)(1 - b_0^2\epsilon^2 + b_0^3\epsilon^3) + \cdots = b_0^3\epsilon^4 - 2 b_0^4\epsilon^5 + b_0^5\epsilon^6 + \cdots = b_0^3(C_3 e_n^3 + C_4 e_n^4 + C_5 e_n^5)^4 + \cdots = b_0^3 C_3^4 e_n^{12} + \cdots.
The twelfth-order convergence of Equation (48) is thus proven. □
It should be noted that the following three-step iterative scheme
y_n = x_n - \frac{2 f(x_n) f'(x_n)}{2 f'(x_n)^2 - f(x_n) f''(x_n)}, \quad z_n = y_n - \frac{f(y_n)}{f'(y_n)}, \quad x_{n+1} = y_n - \frac{f(y_n) + f(z_n)}{f'(y_n)}
has ninth-order convergence [45]. The first step is obtained from the Halley method in Equation (17). In Equation (76), six function evaluations are required, such that $\mathrm{E.I.} = 9^{1/6} = 1.442$, which is the same as that of the Halley method. For Equation (48) with $a_0 = f'(r)$ and $b_0 = f''(r)/(2 f'(r))$, $\mathrm{E.I.} = 12^{1/3} = 2.289$ is much larger than that of Equation (76).
There are two main factors such that the E.I.s of the iterative Schemes (47) and (48) are $\sqrt{6}$ and $\sqrt[3]{12}$, respectively. The first factor is that the optimal parameters $a_0$ and $b_0$ are used in all steps of Equations (47) and (48). The second factor is that only one new function $f(y_n)$ is used in the second step of Equation (47), and only two new functions $f(y_n)$ and $f(z_n)$ are used in the second and third steps of Equation (48). Therefore, when Equation (47) has the optimal parameter values, $\mathrm{E.I.} = 6^{1/2}$; only two function evaluations, $f(x_n)$ and $f(y_n)$, are required. When Equation (48) has the optimal parameter values, $\mathrm{E.I.} = 12^{1/3}$; only three function evaluations, $f(x_n)$, $f(y_n)$, and $f(z_n)$, are required.
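A short Python sketch of the three-step scheme (48) with fixed parameters illustrates the point; the parameter values $a_0 = 13$ and $b_0 = 6.577$ for $f_1$ are those quoted in Section 6, while everything else (the tolerance and the convergence check) is an ad hoc choice made here.

import math

def step(x, f, a0, b0):
    fx = f(x)
    return x - fx/(a0 + b0*fx)

def three_step(f, x0, a0, b0, tol=1e-15, itmax=50):
    """Three-step iterative scheme of fractional type, Equation (48):
    the same fraction is applied to x_n, y_n and z_n in turn."""
    x = x0
    for n in range(itmax):
        if abs(f(x)) < tol:
            return x, n
        y = step(x, f, a0, b0)
        z = step(y, f, a0, b0)
        x = step(z, f, a0, b0)
    return x, itmax

# f1(x) = exp(x^2 + 7x - 30) - 1 with the parameter values quoted in Section 6
f1 = lambda x: math.exp(x*x + 7*x - 30) - 1.0
print(three_step(f1, 4.0, a0=13.0, b0=6.577))   # root r = 3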

6. Numerical Experiments Based on Theorems 3 and 4

Equations (4), (47), and (48) are sequentially labeled as Algorithms 1–3. The requirement $f'(r) \neq 0$ is mandatory for the convergence of most algorithms that include derivative terms, like the Newton method. In order to investigate the applicability of Algorithms 1–3 under this condition, we consider a simple case of $f(x) = (x - 1)^2 = 0$, where $r = 1$ is a double root and $f'(1) = 0$. By taking $a_0 = 10^{-5}$ and $b_0 = 2$ and starting from $x_0 = 1.5$, Algorithm 1 converges within three iterations, while both Algorithms 2 and 3 converge within two iterations. Even for $f(x) = (x + 1)^3 = 0$, where $r = -1$ is a triple root and $f'(-1) = f''(-1) = 0$, with $a_0 = 10^{-15}$ and $b_0 = 2$, and starting from $x_0 = 1.5$, Algorithm 1 converges within six iterations, Algorithm 2 within four iterations, and Algorithm 3 within three iterations.
Then, we consider $f(x) = x^3 - x = 0$ with $x_0 = 3$; the COCs computed by Equation (35) are listed in Table 5 for finding the root $r = 1$, where $a_0 = 2$ and $b_0 = 3/2$ are used in Algorithms 1–3.
Unlike the NM, which is sensitive to the initial value $x_0$, the new methods are insensitive to it. For example, we seek another root $r = 0$ of $f(x) = x^3 - x = 0$ by the NM: starting from $x_0 = 0.5$, it converges to the third root $r = -1$ with two iterations; starting from $x_0 = 0.6$, it converges to $r = 1$ with eleven iterations; and starting from $x_0 = 0.4$, it converges to $r = 0$ with six iterations. The new methods converge to $r = 0$ with three or four iterations, no matter which of the initial values $x_0 = 0.4$, 0.5, or 0.6 is used.
The other test examples are given by
f_1(x) = \exp(x^2 + 7x - 30) - 1, \quad r = 3,
f_2(x) = \sin^2 x - x^2 + 1, \quad r = 1.4044916482,
f_3(x) = x^2 - e^x - 3x + 2, \quad r = 0.2575302854,
f_4(x) = (x - 1)^3 - 1, \quad r = 2,
f_5(x) = x^3 - 10, \quad r = 10^{1/3}.
In Table 6, the NIs are tabulated for solving $f_1(x) = 0$, starting from $x_0 = 4$. We compare the computed results with the Newton method (NM), the Halley method (HM) [18], the method of Soheili et al. (SM) [46], and the method of Bahgat (BM) [47]. The parameter values used are $a_0 = 13$ and $b_0 = 6.577$.
In Table 7, the NIs and COCs obtained by Algorithms 1–3 with different parameters $(a_0, b_0)$ are tabulated. It can be seen that Algorithm 3 is faster than Algorithms 1 and 2. Even when the parameter values are not the best ones, Algorithms 2 and 3 converge very fast.
Table 8 tabulates the NIs obtained by Algorithms 1–3, NM, NNT (the method of Noor et al. [35]), CM (the method of Chun [20]), and NRM (the method of Noor et al. [21]).
We evaluate $\mathrm{E.I.} = p^{1/m}$ as defined by Traub [16], where p is the order of convergence and m is the number of function evaluations per iteration. In Table 9, for different methods, we list the E.I. obtained by Algorithms 1–3, NM, HM (the Halley method [18]), CM (the method of Chun [20]), NRM (the method of Noor et al. [21]), LM (the method of Li [48]), MCM (the method of Milovanovic and Cvetkovic [49]), AM (the method of Abdul-Hassan [50]), and AHHRM (the method of Ahmad et al. [51]).

7. Memory-Dependent Updating Iterative Schemes

In Equations (4), (47), and (48), the values of $a_0$ and $b_0$ are crucial; their optimal values are $a_0 = f'(r)$ and $b_0 = f''(r)/(2f'(r))$. In this section, we approximate $a_0$ and $b_0$ without using the differentials.
Given $\tilde{x}_0$ and $\tilde{x}_2$, take
A_0 = \frac{f(\tilde{x}_2) - f(\tilde{x}_0)}{\tilde{x}_2 - \tilde{x}_0}, \quad B_0 = \frac{1}{2 A_0}\,\frac{f(\tilde{x}_2) - 2 f(\tilde{x}_1) + f(\tilde{x}_0)}{(\tilde{x}_1 - \tilde{x}_0)^2},
close to $a_0$ and $b_0$ in Equation (49), where $\tilde{x}_1 = (\tilde{x}_0 + \tilde{x}_2)/2$.
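In code, Equation (82) is just two finite differences; the following Python sketch (an illustration added here, using the $f_1$ bracketing values appearing later in this section) shows the initialization.

import math

def init_params(f, x0t, x2t):
    """Derivative-free starting values A0 ~ f'(r), B0 ~ f''(r)/(2 f'(r)), Equation (82)."""
    x1t = 0.5*(x0t + x2t)
    A0 = (f(x2t) - f(x0t))/(x2t - x0t)
    B0 = (f(x2t) - 2.0*f(x1t) + f(x0t))/(2.0*A0*(x1t - x0t)**2)
    return A0, B0

f1 = lambda x: math.exp(x*x + 7*x - 30) - 1.0
print(init_params(f1, 2.9, 3.07))    # compare with a0 = 13, b0 = 6.577 at the root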
In Equations (4), (47), and (48), if $a_0$ and $b_0$ are replaced by $A_0$ and $B_0$, we can quickly obtain the solution, as shown in Table 10, for $f_1(x) = 0$, where $\tilde{x}_0 = 2.9$ and $\tilde{x}_2 = 3.07$.
For Algorithm 2, COC = 6.016 is obtained, and for Algorithm 3, COC = 7.077 is obtained. The value COC = 7.077 is much smaller than the theoretical value 12, as demonstrated in Table 9. However, Equation (47) involves two function evaluations, $f(x_n)$ and $f(y_n)$, which, according to the conjecture of Kung and Traub, gives an optimal order of $p = 2$, smaller than COC = 6.016. Similarly, Equation (48) involves three function evaluations, $f(x_n)$, $f(y_n)$, and $f(z_n)$, which gives an optimal order of $p = 4$, smaller than COC = 7.077.
In Table 11, for different functions, we list the NIs and COCs obtained by Algorithms 1–3. For $f_3(x) = 0$, we obtain the root r = 0.1906879703188765.
In Algorithm 3, the value of COC just mentioned was 7.077. However, this value is significantly smaller than the theoretically expected value of 12. The main reason behind this discrepancy is that we computed only two rough values of $A_0$ and $B_0$ by Equation (82) with two ad hoc values of $\tilde{x}_0$ and $\tilde{x}_2$, which are not the optimal values of $a_0$ and $b_0$. Below, we update the values of $A_0$ and $B_0$ by the memory-dependent technique to obtain a better approximation of the optimal parameters $a_0$ and $b_0$. Higher-order data interpolation by a higher-order polynomial can enhance the convergence order; however, at the same time, more algebraic operations are needed. By balancing the number of function evaluations, the algebraic operations, and their impact on the convergence order, we employ polynomial data interpolation up to the second order to approximate $a_0$ and $b_0$.
To raise the COC for Algorithm 3, we can update the values $A_0$ and $B_0$ in Equation (82) after the first iteration by proposing the following Algorithm 4, which is depicted by (i) giving $x_0$, $\tilde{x}_0$, $\tilde{x}_2$, and $\tilde{x}_1 = (\tilde{x}_0 + \tilde{x}_2)/2$, such that $r \in [\tilde{x}_0, \tilde{x}_2]$, and computing $A_0$ and $B_0$ by Equation (82), and (ii) calculating for $n = 0, 1, \ldots$,
y_n = x_n - \frac{f(x_n)}{A_n + B_n f(x_n)},
z_n = y_n - \frac{f(y_n)}{A_n + B_n f(y_n)},
x_{n+1} = z_n - \frac{f(z_n)}{A_n + B_n f(z_n)},
A_{n+1} = \frac{f(z_n) - f(y_n)}{z_n - y_n},
\bar{y}_n = \frac{y_n + z_n}{2},
B_{n+1} = \frac{f(z_n) - 2 f(\bar{y}_n) + f(y_n)}{2 A_{n+1} (\bar{y}_n - y_n)^2}.
Here, $p = 2^3 = 8$ and $\mathrm{E.I.} = 8^{1/4} = 1.682$ are the optimal values. Table 12 lists the results obtained by Algorithm 4. Some E.I.s are larger than 1.682.
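For concreteness, a Python sketch of Algorithm 4 under the above description is given below; the safeguard against coincident nodes and the repeated function evaluations (the actual algorithm reuses $f(y_n)$ and $f(z_n)$) are simplifications made here, and the usage example reuses the $f_5$ setting of Table 12.

def algorithm4(f, x0, x0t, x2t, tol=1e-15, itmax=30):
    """Three-step fractional scheme (83)-(85) with the parameter updates (86)-(88);
    A0, B0 come from the finite differences of Equation (82)."""
    x1t = 0.5*(x0t + x2t)
    A = (f(x2t) - f(x0t))/(x2t - x0t)
    B = (f(x2t) - 2.0*f(x1t) + f(x0t))/(2.0*A*(x1t - x0t)**2)
    x = x0
    for _ in range(itmax):
        if abs(f(x)) < tol:
            return x
        y = x - f(x)/(A + B*f(x))
        z = y - f(y)/(A + B*f(y))
        x = z - f(z)/(A + B*f(z))
        if z == y:                   # nodes collapsed near the root: stop updating
            return x
        ybar = 0.5*(y + z)
        A = (f(z) - f(y))/(z - y)                              # Equation (86)
        B = (f(z) - 2.0*f(ybar) + f(y))/(2.0*A*(ybar - y)**2)  # Equation (88)
    return x

print(algorithm4(lambda x: x**3 - 10, 1.0, 2.0, 3.5))   # f5 setting of Table 12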
There are some self-accelerating iterative methods for simple roots [3,52,53], which are then extended to the self-accelerating technique for the iterative methods for multiple roots [8,54]. In Equations (86)–(88), the self-accelerating technique for A n and B n is quite simple as compared to that in the literature.
The term $f(\bar{y}_n)$ in Equation (88) can be computed by the second-order polynomial interpolation:
f(\bar{y}_n) = f(x_n) + C(\bar{y}_n - x_n) + D(\bar{y}_n - x_n)^2,
C = \frac{(z_n - x_n)^2 [f(y_n) - f(x_n)] - (y_n - x_n)^2 [f(z_n) - f(x_n)]}{(y_n - x_n)(z_n - x_n)^2 - (z_n - x_n)(y_n - x_n)^2},
D = \frac{(x_n - z_n)[f(y_n) - f(x_n)] + (y_n - x_n)[f(z_n) - f(x_n)]}{(y_n - x_n)(z_n - x_n)^2 - (z_n - x_n)(y_n - x_n)^2}.
In doing so, the function evaluations are reduced from $f(x_n)$, $f(y_n)$, $f(\bar{y}_n)$, and $f(z_n)$ to $f(x_n)$, $f(y_n)$, and $f(z_n)$, and the E.I. can be increased. With this modification, the iterative scheme is named Algorithm 5. Table 13 lists the NIs, COCs, and E.I.s obtained by Algorithm 5. For three function evaluations, $p = 2^2 = 4$ and $\mathrm{E.I.} = 4^{1/3} = 1.5874$ are the optimal values. However, the values of E.I. in Table 13 are much larger than 1.5874. In Table 13,
f_6(x) = \sin(x^2)(x - 1)(x^6 + 1/x^6 + 4), \quad r = 1.
The result of COC = 15.517 for $f_6(x) = 0$ is larger than that given in Table 7 of [3] with COC = 6.9. Here, the presented Algorithm 5 requires three function evaluations, but a rough range [0.5, 3.5] needs to be specified that includes the solution $r = 1$ as an interior point.
An iterative method that uses information from both the current and previous iterations is called a method with memory. Besides the given initial values $A_0$ and $B_0$, the parameters $A_{n+1}$ and $B_{n+1}$, calculated by Equations (86)–(88) and Equations (89)–(91), only need the current values of $x_n$, $y_n$, $z_n$, $f(x_n)$, $f(y_n)$, and $f(z_n)$; hence, we point out that both Algorithms 4 and 5 do not use the memory of previous values. Other memory-dependent techniques can be found in [3,52,53,54,55,56].
Moreover, we develop a more advanced updating technique using the information from $x_{n+1}$ and the second-degree Newton polynomial interpolation, namely Algorithm 6, where we replace $f'(r)$ and $f''(r)$ with $f'(x_{n+1})$ and $f''(x_{n+1})$ and approximate them by $N_2'(x_{n+1}; x_n, y_n, z_n)$ and $N_2''(x_{n+1}; x_n, y_n, z_n)$. Algorithm 6 is depicted by (i) giving $x_0$, $\tilde{x}_0$, $\tilde{x}_2$, and $\tilde{x}_1 = (\tilde{x}_0 + \tilde{x}_2)/2$, such that $r \in [\tilde{x}_0, \tilde{x}_2]$, and computing $A_0$ and $B_0$ by Equation (82), and (ii) calculating for $n = 0, 1, \ldots$,
y_n = x_n - \frac{f(x_n)}{A_n + B_n f(x_n)},
z_n = y_n - \frac{f(y_n)}{A_n + B_n f(y_n)},
x_{n+1} = z_n - \frac{f(z_n)}{A_n + B_n f(z_n)},
A_{n+1} = \frac{f(y_n) - f(x_n)}{y_n - x_n} + (2 x_{n+1} - x_n - y_n)\left[\frac{f(z_n) - f(x_n)}{(z_n - x_n)(z_n - y_n)} - \frac{f(y_n) - f(x_n)}{(y_n - x_n)(z_n - y_n)}\right],
B_{n+1} = \frac{1}{A_{n+1}}\left[\frac{f(z_n) - f(x_n)}{(z_n - x_n)(z_n - y_n)} - \frac{f(y_n) - f(x_n)}{(y_n - x_n)(z_n - y_n)}\right].
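The two updates (95) and (96) amount to evaluating the first and second derivatives of the quadratic Newton interpolant at $x_{n+1}$; a small Python sketch (an illustration under this reading, not the authors' code) is:

def update_AB(f, xn, yn, zn, xnp1):
    """Algorithm 6 parameter updates, Equations (95) and (96): A_{n+1} = N2'(x_{n+1})
    and B_{n+1} = N2''(x_{n+1})/(2 A_{n+1}) for the Newton polynomial on (x_n, y_n, z_n)."""
    d_xy  = (f(yn) - f(xn))/(yn - xn)                  # f[x_n, y_n]
    d_xyz = ((f(zn) - f(xn))/((zn - xn)*(zn - yn))
             - (f(yn) - f(xn))/((yn - xn)*(zn - yn)))  # f[x_n, y_n, z_n]
    A = d_xy + (2*xnp1 - xn - yn)*d_xyz
    B = d_xyz/A
    return A, B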
Table 14 lists the NIs, COCs, and E.I.s obtained by Algorithm 6. All E.I.s are larger than 1.5874. We found that the NI is not sensitive to the initial values of $\tilde{x}_0$ and $\tilde{x}_2$; however, we adjust $\tilde{x}_0$ and $\tilde{x}_2$ to make the E.I. as large as possible.
At this point, we have presented three memory-dependent updating techniques for the three-step iterative Scheme (48) of fractional type. Through the numerical tests of six nonlinear equations, Algorithms 5 and 6 are shown to perform better than Algorithm 4.

8. An Accelerated Two-Step Memory-Dependent Method

Instead of Equation (36), we consider
w_n = x_n - \xi f(x_n),
where ξ is to be determined. Equation (98) supplies an extra datum for Equation (47).
Then, we impose an extra condition,
N_2(x_{n+1}; x_n, w_n, y_n) = f(x_{n+1}),
to determine ξ. Inserting
x_n - w_n = \xi f(x_n)
into Equation (99) yields
\frac{f(x_n) - f(w_n)}{\xi f(x_n)}\left[1 + \frac{x_{n+1} - w_n}{x_n - y_n}\right] - \frac{f[w_n, y_n](x_{n+1} - w_n)}{x_n - y_n} = f[x_{n+1}, x_n],
\xi = \frac{(x_{n+1} - w_n + x_n - y_n)[f(x_n) - f(w_n)]}{f(x_n)\{f[x_{n+1}, x_n](x_n - y_n) + f[w_n, y_n](x_{n+1} - w_n)\}},
which can be used to update ξ.
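In code, the update of ξ by Equation (102) is built from two divided differences; the following Python sketch is an illustration of that formula only, not the authors' implementation.

def update_xi(f, xn, wn, yn, xnp1):
    """Relaxation-factor update of Equation (102), obtained by forcing
    N2(x_{n+1}; x_n, w_n, y_n) = f(x_{n+1})."""
    d_xn_xnp1 = (f(xnp1) - f(xn))/(xnp1 - xn)     # f[x_{n+1}, x_n]
    d_wn_yn   = (f(wn) - f(yn))/(wn - yn)         # f[w_n, y_n]
    num = (xnp1 - wn + xn - yn)*(f(xn) - f(wn))
    den = f(xn)*(d_xn_xnp1*(xn - yn) + d_wn_yn*(xnp1 - wn))
    return num/den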
The accelerated two-step memory-updating method (ATSMUM) is coined as (i) giving $x_0$, $\hat{x}_0$, $\hat{x}_2$, $\hat{x}_1 = (\hat{x}_0 + \hat{x}_2)/2$, and $\xi_0$, and computing $A_0$ and $B_0$ by Equation (82), and (ii) calculating for $n = 0, 1, \ldots$,
w_n = x_n - \xi_n f(x_n),
y_n = x_n - \frac{f(x_n)}{A_n + B_n f(x_n)},
x_{n+1} = y_n - \frac{f(y_n)}{A_n + B_n f(y_n)},
A_{n+1} = N_2'(x_{n+1}; x_n, w_n, y_n),
B_{n+1} = \frac{N_2''(x_{n+1}; x_n, w_n, y_n)}{2 A_{n+1}},
\xi_{n+1} = \frac{(x_{n+1} - w_n + x_n - y_n)[f(x_n) - f(w_n)]}{f(x_n)\{f[x_{n+1}, x_n](x_n - y_n) + f[w_n, y_n](x_{n+1} - w_n)\}}.
The ATSMUM reasonably saves the computational cost.
Table 15 lists the results obtained by the ATSMUM. All E.I.s are larger than 1.5874. As designed, $|N_2(x_{n+1}) - f(x_{n+1})|$ tends to be a very small value.
The result of COC = 9.865 for $f_6(x) = 0$ is larger than that given in Table 7 of [3] with COC = 6.9, which requires two extra evaluations of the previous functions $f(w_{n-1})$ and $f(y_{n-1})$.
In order to investigate the effect of ξ on the convergence behavior, we give some test values of ξ in Equation (98) and do not consider the accelerating technique in Equation (108). Table 16 lists the results without using the accelerating technique in Equation (108). When the parameter ξ in Equation (98) is given by trial and error, the resulting iterative scheme is usually not the optimal one. The benefit of the memory-dependent technique in accelerating the relaxation factor of the iterative scheme can be seen in the increased values of the COC and E.I.

9. Nonlinear Fractional Type Three-Step Iterative Scheme

As an extension of Equation (48), we propose the following nonlinear fractional type three-step iterative scheme:
y_n = x_n - \frac{f(x_n)}{a_0 + b_0 f(x_n) + H(f^2(x_n))}, \quad z_n = y_n - \frac{f(y_n)}{a_0 + b_0 f(y_n)}, \quad x_{n+1} = z_n - \frac{f(z_n)}{a_0 + b_0 f(z_n)}.
Theorem 5. 
The function $f: I \subseteq \mathbb{R} \to \mathbb{R}$ is sufficiently differentiable on the domain I, and $r \in I$ is a simple root with $f(r) = 0$ and $f'(r) \neq 0$. If
a_0 = f'(r), \quad b_0 = \frac{f''(r)}{2 f'(r)}, \quad H(0) = 0, \quad q_0 := H'(0) = \frac{f'''(r)}{6 f'(r)^2} - \frac{f''(r)^2}{4 f'(r)^3},
then the iterative Scheme (109) has sixteenth-order convergence.
Proof. 
Let
\epsilon = y_n - r = E_0 e_n^4 + O(e_n^5)
be the error of the iterative Scheme (109) at the first step, where $E_0 = 3 a_2 a_3 - 3 a_2^3 - a_4$ is the coefficient preceding $e_n^4$ in Equation (34). Then, by Equation (75), we have
e_{n+1} = b_0^3 \epsilon^4 - 2 b_0^4 \epsilon^5 + b_0^5 \epsilon^6 + \cdots = b_0^3 (E_0 e_n^4 + O(e_n^5))^4 = b_0^3 E_0^4 e_n^{16} + O(e_n^{17}).
We have proven that the iterative Scheme (109) has sixteenth-order convergence. □
In Equations (109) and (110), we let $A = a_0$, $B = b_0$, and $Q = H'(0)$, which are updated by a third-degree Newton interpolant $N_3(x; x_n, y_n, z_n, x_{n+1})$, so that we have a three-step Algorithm 7 with the memory-updating method: (i) giving $x_0$, $Q_0$, $\tilde{x}_0$, $\tilde{x}_2$, and $\tilde{x}_1 = (\tilde{x}_0 + \tilde{x}_2)/2$, such that $r \in [\tilde{x}_0, \tilde{x}_2]$, and computing $A_0$ and $B_0$ by Equation (82), and (ii) calculating for $n = 0, 1, \ldots$,
y_n = x_n - \frac{f(x_n)}{A_n + B_n f(x_n) + H(f^2(x_n))},
z_n = y_n - \frac{f(y_n)}{A_n + B_n f(y_n)},
x_{n+1} = z_n - \frac{f(z_n)}{A_n + B_n f(z_n)}, \quad A_{n+1} = N_3'(x_{n+1}; x_n, y_n, z_n, x_{n+1}), \quad D_{n+1} = N_3''(x_{n+1}; x_n, y_n, z_n, x_{n+1}), \quad E_{n+1} = f[x_n, y_n, z_n, x_{n+1}], \quad B_{n+1} = \frac{D_{n+1}}{2 A_{n+1}},
Q_{n+1} = \frac{E_{n+1}}{A_{n+1}^2} - \frac{D_{n+1}^2}{4 A_{n+1}^3},
where we can take $H(f^2(x_n)) = Q_n f^2(x_n)$ or $H(f^2(x_n)) = Q_n(1 - \exp[-f^2(x_n)])$.
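A Python sketch of Algorithm 7 under the above reading (primes denoting derivatives of $N_3$, $H(t) = Q t$, and a bracketing interval chosen here for the initialization of Equation (82)) is given below; it is an illustration, not the authors' implementation.

def algorithm7(f, x0, Q0, x0t, x2t, tol=1e-15, itmax=20):
    """Nonlinear fractional three-step scheme (113)-(116) with memory updating;
    H(t) = Q*t is used here (the text also allows Q*(1 - exp(-t)))."""
    x1t = 0.5*(x0t + x2t)
    A = (f(x2t) - f(x0t))/(x2t - x0t)
    B = (f(x2t) - 2.0*f(x1t) + f(x0t))/(2.0*A*(x1t - x0t)**2)
    Q = Q0
    x = x0
    for _ in range(itmax):
        fx = f(x)
        if abs(fx) < tol:
            return x
        y = x - fx/(A + B*fx + Q*fx*fx)
        z = y - f(y)/(A + B*f(y))
        x1 = z - f(z)/(A + B*f(z))
        if len({x, y, z, x1}) < 4:      # nodes collapsed near the root: stop updating
            return x1
        # divided differences on the nodes (x, y, z, x1)
        d_xy  = (f(x) - f(y))/(x - y)
        d_yz  = (f(y) - f(z))/(y - z)
        d_zx1 = (f(z) - f(x1))/(z - x1)
        d_xyz  = (d_xy - d_yz)/(x - z)
        d_yzx1 = (d_yz - d_zx1)/(y - x1)
        E = (d_xyz - d_yzx1)/(x - x1)                       # f[x_n, y_n, z_n, x_{n+1}]
        A = (d_xy + d_xyz*((x1 - x) + (x1 - y))
             + E*((x1 - y)*(x1 - z) + (x1 - x)*(x1 - z) + (x1 - x)*(x1 - y)))   # N3'(x_{n+1})
        D = 2*d_xyz + 2*E*((x1 - x) + (x1 - y) + (x1 - z))                       # N3''(x_{n+1})
        B = D/(2*A)
        Q = E/A**2 - D**2/(4*A**3)
        x = x1
    return x

# the x^3 + 4x^2 - 10 example of Section 9 (the bracketing interval [1, 2] is assumed here)
print(algorithm7(lambda t: t**3 + 4*t**2 - 10, 1.5, Q0=0.0, x0t=1.0, x2t=2.0))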
We consider $f(x) = x^3 + 4x^2 - 10 = 0$ again. Starting from $x_0 = 1.5$, using $H(f^2(x_n)) = Q_n(1 - \exp[-f^2(x_n)])$ and with $A_0 = 9.25$, $B_0 = 0.676$, and $Q_0 = 2.8$, we determine that NI = 3, COC = 19.74, and E.I. = 2.703, which are much larger than the values in Table 3.
Table 17 lists the NIs, COCs, and E.I.s obtained by Algorithm 7. For $f_3(x) = 0$ and $f_4(x) = 0$, two iterations obtain the exact solution, such that the COC and E.I. are not defined, since they need at least three data points.

10. Conclusions

A nonlinear perturbation of the fixed-point Newton method was derived, which included two parameters and a weight function, permitting fourth-order convergence equipped with three critical parameters. We developed a one-step memory-dependent iterative scheme by updating the optimal values of the parameters with a third-degree Newton polynomial; a supplementary variable was computed for the data interpolation. The E.I. was over $\sqrt{4} = 2$, which is better than some two-step fourth-order convergent iterative schemes modified from the Newton method, whose E.I. is $\sqrt[3]{4} = 1.5874$.

We derived a one-step iterative scheme of fractional type in Algorithm 1. Because its order of convergence is three with a single function evaluation, the efficiency index is three. We simply extended Algorithm 1 to two-step and three-step iterative schemes, namely Algorithm 2 and Algorithm 3. It is interesting that the orders of convergence are greatly increased to six and twelve, and the E.I.s become $\sqrt{6}$ and $\sqrt[3]{12}$. For the two-step iterative scheme, the relaxation factor was accelerated, and its performance was very good. Moreover, for the nonlinear three-step iterative scheme of fractional type, the E.I. becomes $\sqrt[3]{16}$. We developed three memory-dependent updating techniques to gradually obtain the optimal values of the critical parameters for the three-step iterative scheme of fractional type.

Through several numerical tests, as listed and compared in the tables presented herein, we revealed that the new methods can find the solution quickly, without using information of the derivatives, and have better convergence efficiency and performance than most methods in the literature. The proposed iterative schemes are cost-saving, with low computational complexity: two function evaluations and some algebraic operations for updating the optimal parameters are required for the two-step iterative scheme of fractional type, and three function evaluations and some algebraic operations are required for the three-step iterative scheme of fractional type. These iterative schemes are especially useful for practical viability owing to their simple structures; they only need the implicit form of the nonlinear equation f(x) = 0, and neither the explicit form of the function f(x) nor the derivative f'(x) is required, which can be a great advantage in many practical engineering applications.

Author Contributions

Conceptualization, C.-S.L. and C.-W.C.; methodology, C.-S.L. and C.-W.C.; software, C.-S.L. and C.-W.C.; validation, C.-S.L. and C.-W.C.; formal analysis, C.-S.L. and C.-W.C.; investigation, C.-S.L. and C.-W.C.; resources, C.-W.C.; data curation, C.-W.C.; writing—original draft preparation, C.-S.L.; writing—review and editing, C.-W.C.; visualization, C.-W.C.; supervision, C.-S.L. and C.-W.C.; project administration, C.-W.C.; funding acquisition, C.-W.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was financially supported by the National Science and Technology Council [grant numbers: NSTC 112-2221-E-239-022].

Data Availability Statement

The data presented in this study are available on request from the corresponding authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Liu, C.S.; Hong, H.K.; Lee, T.L. A splitting method to solve a single nonlinear equation with derivative-free iterative schemes. Math. Comput. Simul. 2021, 190, 837–847. [Google Scholar] [CrossRef]
  2. Liu, C.S.; Chang, C.W.; Kuo, C.L. Memory-accelerating methods for one-step iterative schemes with Lie-symmetry method solving nonlinear boundary value problem. Symmetry 2024, 16, 120. [Google Scholar] [CrossRef]
  3. Džunić, J. On efficient two-parameter methods for solving nonlinear equations. Numer. Algorithms 2012, 63, 549–569. [Google Scholar]
  4. Lotfi, T.; Soleymani, F.; Noori, Z.; Kılıçman, A.; Khaksar Haghani, F. Efficient iterative methods with and without memory possessing high efficiency indices. Discret. Dyn. Nat. Soc. 2014, 2014, 912796. [Google Scholar] [CrossRef]
  5. Wang, X. An Ostrowski-type method with memory using a novel self-accelerating parameter. J. Comput. Appl. Math. 2018, 330, 710–720. [Google Scholar] [CrossRef]
  6. Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Dynamics of iterative families with memory based on weight functions procedure. Appl. Math. Comput. 2019, 354, 286–298. [Google Scholar] [CrossRef]
  7. Torkashvand, V.; Kazemi, M.; Moccari, M. Structure a family of three-step with-memory methods for solving nonlinear equations and their dynamics. Math. Anal. Convex Optim. 2021, 2, 119–137. [Google Scholar]
  8. Thangkhenpau, G.; Panday, S.; Mittal, S.K.; Jäntschi, L. Novel parametric families of with and without memory iterative methods for multiple roots of nonlinear equations. Mathematics 2023, 14, 2036. [Google Scholar] [CrossRef]
  9. Sharma, E.; Panday, S.; Mittal, S.K.; Joit, D.M.; Pruteanu, L.L.; Jäntschi, L. Derivative-free families of with- and without-memory iterative methods for solving nonlinear equations and their engineering applications. Mathematics 2023, 14, 4512. [Google Scholar] [CrossRef]
  10. Thangkhenpau, G.; Panday, S.; Bolundut, L.C.; Jäntschi, L. Efficient families of multi-point iterative methods and their self-acceleration with memory for solving nonlinear equations. Symmetry 2023, 15, 1546. [Google Scholar] [CrossRef]
  11. Liu, C.S.; Chang, C.W. New memory-updating methods in two-step Newton’s variants for solving nonlinear equations with high efficiency index. Mathematics 2024, 12, 581. [Google Scholar] [CrossRef]
  12. He, H.; Peng, J.; Li, H. Iterative approximation of fixed point problems and variational inequality problems on Hadamard manifolds. UPB Sci. Bull. Ser. A 2022, 84, 25–36. [Google Scholar]
  13. Zhou, X.; Liu, X.; Zhang, G.; Jia, L.; Wang, X.; Zhao, Z. An iterative threshold algorithm of log-sum regularization for sparse problem. IEEE Trans. Circuits Syst. Video Technol. 2023, 33, 4728–4740. [Google Scholar] [CrossRef]
  14. Liu, C.S.; El-Zahar, E.R.; Chang, C.W. Dynamical optimal values of parameters in the SSOR, AOR and SAOR testing using the Poisson linear equations. Mathematics 2023, 11, 3828. [Google Scholar] [CrossRef]
  15. Hadjidimos, A. Successive overrelaxation (SOR) and related methods. J. Comput. Appl. Math. 2000, 123, 177–199. [Google Scholar] [CrossRef]
  16. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: New York, NY, USA, 1964. [Google Scholar]
  17. Wu, X.Y. A new continuation Newton-like method and its deformation. Appl. Math. Comput. 2000, 112, 75–78. [Google Scholar] [CrossRef]
  18. Halley, E. A new exact and easy method for finding the roots of equations generally and without any previous reduction. Philos. Trans. R. Soc. Lond. 1694, 8, 136–147. [Google Scholar]
  19. Liu, C.S.; El-Zahar, E.R.; Chang, C.W. A two-dimensional variant of Newton’s method and a three-point Hermite interpolation: Fourth- and eighth-order optimal iterative schemes. Mathematics 2023, 11, 4529. [Google Scholar] [CrossRef]
  20. Chun, C. Iterative methods improving Newton’s method by the decomposition method. Comput. Math. Appl. 2005, 50, 1559–1568. [Google Scholar] [CrossRef]
  21. Noor, M.A.; Noor, K.I.; Al-Said, E.; Waseem, M. Some new iterative methods for nonlinear equations. Math. Probl. Eng. 2010, 2010, 198943. [Google Scholar] [CrossRef]
  22. Morlando, F. A class of two-step Newton’s methods with accelerated third-order convergence. Gen. Math. Notes 2015, 29, 17–26. [Google Scholar]
  23. Saqib, M.; Iqbal, M. Some multi-step iterative methods for solving nonlinear equations. Open J. Math. Sci. 2017, 1, 25–33. [Google Scholar] [CrossRef]
  24. Qureshi, U.K. A new accelerated third-order two-step iterative method for solving nonlinear equations. Math. Theory Model. 2018, 8, 64–68. [Google Scholar]
  25. Ali, F.; Aslam, W.; Ali, K.; Anwar, M.A.; Nadeem, A. New family of iterative methods for solving nonlinear models. Discret. Dyn. Nat. Soc. 2018, 2018, 1–12. [Google Scholar] [CrossRef]
  26. Zhanlav, T.; Chuluunbaatar, O.; Ulziibayar, V. Generating function method for constructing new iterations. Appl. Math. Comput. 2017, 315, 414–423. [Google Scholar] [CrossRef]
  27. Qureshi, S.; Soomro, A.; Shaikh, A.A.; Hincal, E.; Gokbulut, N. A novel multistep iterative technique for models in medical sciences with complex dynamics. Comput. Math. Methods Med. 2022, 2022, 7656451. [Google Scholar] [CrossRef] [PubMed]
  28. Argyros, I.K.; Regmi, S.; John, J.A.; Jayaraman, J. Extended convergence for two sixth order methods under the same weak conditions. Foundations 2023, 3, 127–139. [Google Scholar] [CrossRef]
  29. Chanu, W.H.; Panday, S.; Thangkhenpau, G. Development of optimal iterative methods with their applications and basins of attraction. Symmetry 2020, 14, 2020. [Google Scholar] [CrossRef]
  30. Noor, M.A.; Noor, K.I. Three-step iterative methods for nonlinear equations. Appl. Math. Comput. 2006, 183, 322–327. [Google Scholar]
  31. Noor, M.A.; Noor, K.I. Some iterative schemes for nonlinear equations. Appl. Math. Comput. 2006, 183, 774–779. [Google Scholar]
  32. Noor, M.A. New iterative schemes for nonlinear equations. Appl. Math. Comput. 2007, 187, 937–943. [Google Scholar] [CrossRef]
  33. Noor, K.I. New family of iterative methods for nonlinear equations. Appl. Math. Comput. 2007, 190, 553–558. [Google Scholar] [CrossRef]
  34. Noor, K.I.; Noor, M.A. Predictor-corrector Halley method for nonlinear equations. Appl. Math. Comput. 2007, 188, 1587–1591. [Google Scholar]
  35. Noor, M.A.; Noor, K.I.; Mohyud-Din, S.T.; Shabbir, A. An iterative method with cubic convergence for nonlinear equations. Appl. Math. Comput. 2006, 183, 1249–1255. [Google Scholar] [CrossRef]
  36. Noor, M.A.; Noor, K.I.; Waseem, M. Fourth-order iterative methods for solving nonlinear equations. Int. J. Appl. Math. Eng. Sci. 2010, 4, 43–52. [Google Scholar]
  37. Jain, P. Steffensen type methods for solving nonlinear equations. Appl. Math. Comput. 2007, 194, 527–533. [Google Scholar]
  38. Cordero, A.; Hueso, J.L.; Martinez, E.; Torregrosa, J.R. Steffensen type methods for solving nonlinear equations. J. Comput. Appl. Math. 2012, 236, 3058–3064. [Google Scholar] [CrossRef]
  39. Soleymani, F.; Hosseinabadi, V. New third- and sixth-order derivative-free techniques for nonlinear equations. J. Math. Res. 2011, 3, 107–112. [Google Scholar] [CrossRef]
  40. Hafiz, M.A.; Bahgat, M.S.M. Solving nonsmooth equations using family of derivative-free optimal methods. J. Egypt. Math. Soc. 2013, 21, 38–43. [Google Scholar] [CrossRef]
  41. Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
  42. Roberts, S.M.; Shipman, J.S. On the closed form solution of Troesch’s problem. J. Comput. Phys. 1976, 21, 291. [Google Scholar] [CrossRef]
  43. Khuri, S.A.; Sayfy, A. Troesch’s problem: A B-spline collocation approach. Math. Comput. Model. 2011, 54, 1907–1918. [Google Scholar] [CrossRef]
  44. Abbasbandy, S.; Hashemi, M.S.; Liu, C.S. The Lie-group shooting method for solving the Bratu equation. Commun. Nonlinear Sci. Numer. Simul. 2011, 16, 4238–4249. [Google Scholar] [CrossRef]
  45. Qureshi, S.; Ramos, H.; Soomro, A.K. A new nonlinear ninth-order root-finding method with error analysis and basins of attraction. Mathematics 2021, 9, 1996. [Google Scholar] [CrossRef]
  46. Soheili, A.R.; Ahmadian, S.A.; Naghipoor, J. A Family of Predictor-Corrector Methods Based on Weight Combination of Quadratures for Solving Nonlinear equations. Int. J. Nonlinear Sci. 2008, 6, 29–33. [Google Scholar]
  47. Bahgat, M.S.M. New two-step iterative methods for solving nonlinear equations. J. Math. Res. 2012, 4, 128–131. [Google Scholar] [CrossRef]
  48. Li, S. Fourth-order iterative method without calculating the higher derivatives for nonlinear equation. J. Algorithms Comput. Technol. 2019, 13, 1–8. [Google Scholar] [CrossRef]
  49. Milovanovic, G.V.; Cvetkovic, A.S. A note on three-step iterative methods for nonlinear equations. Stud. Univ. Babes-Bolyai Math. 2007, 3, 137–146. [Google Scholar]
  50. Abdul-Hassan, N.Y. Two new predictor-corrector iterative methods with third- and ninth-order convergence for solving nonlinear equations. Math. Theory Model. 2016, 6, 44–56. [Google Scholar]
  51. Ahmad, F.; Hussain, S.; Hussain, S.; Rafiq, A. New twelfth order J-Halley method for solving nonlinear equations. Open Sci. J. Math. Appl. 2013, 1, 1–4. [Google Scholar]
  52. Wang, X. A family of Newton-type iterative methods using some special self-accelerating parameters. Int. J. Comput. Math. 2018, 95, 2112–2127. [Google Scholar] [CrossRef]
  53. Jain, P.; Chand, P.B. Derivative free iterative methods with memory having higher R-order of convergence. Int. J. Nonlinear Sci. Numer. Simul. 2020, 21, 641–648. [Google Scholar] [CrossRef]
  54. Zhou, X.; Liu, B. Iterative methods for multiple roots with memory using self-accelerating technique. J. Comput. Appl. Math. 2023, 428, 115181. [Google Scholar] [CrossRef]
  55. Džunić, J.; Petković, M.S.; Petković, L.D. Three-point methods with and without memory for solving nonlinear equations. Appl. Math. Comput. 2012, 218, 4917–4927. [Google Scholar]
  56. Džunić, J.; Petković, M.S. On generalized multipoint root-solvers with memory. J. Comput. Appl. Math. 2012, 236, 2909–2920. [Google Scholar]
Table 1. For different methods, the NIs and COCs computed with different initial guesses for f(x) = arctan x = 0.
x_0             −1.4          −1            −0.5          0.4           0.6           1             1.3
NM              divergent     (6, 2.829)    (5, 2.976)    (5, 2.987)    (5, 2.959)    (6, 2.829)    (8, 2.683)
Equation (16)   (5, 4.967)    (4, 4.533)    (4, 4.934)    (4, 4.964)    (4, 4.890)    (4, 4.533)    (4, 4.067)
Equation (18)   (4, 4.202)    (4, 4.637)    (4, 4.938)    (4, 4.965)    (4, 4.901)    (4, 4.637)    (4, 4.323)
Table 2. A one-step memory-dependent accelerating technique for the iterative Scheme (18) to solve f(x) = x^3 − x = 0.
Q_0     1        0        −1
r       0        1        −1
NI      6        6        7
COC     5.563    4.565    3.042
E.I.    2.359    2.137    1.744
Table 3. A one-step memory-dependent accelerating technique for the iterative Scheme (18) to solve f(x) = x^3 + 4x^2 − 10 = 0.
Q_0     −10      −6       −1       0        2        5
NI      4        4        3        3        3        3
COC     3.223    3.089    4.385    3.791    3.508    4.741
E.I.    1.795    1.757    2.094    1.921    1.873    2.177
Table 4. Comparing the numerical results for the Troesch equation with λ = 5.
y      Exact         B-Spline [43]    Present Paper
0.2    0.01075342    0.01002027       0.01075341
0.4    0.03320051    0.03099793       0.03320049
0.8    0.25821664    0.24170496       0.25821649
0.9    0.45506034    0.42461830       0.45506003
Table 5. For f(x) = x^3 − x = 0, the COCs computed by different methods; × denotes not computable.
n              2        3       4       5       6       7      8      9
NM             1.24     1.38    1.59    1.84    1.98    2.0    ×      ×
Algorithm 1    1.50     1.78    2.31    2.86    3.0     ×      ×
Algorithm 2    3.55     7.97    ×       ×
Algorithm 3    14.84    ×       ×
Table 6. For f_1(x) = 0, the NIs for different methods.
NM    HM    SM    BM    Algorithm 1    Algorithm 2    Algorithm 3
20    11    13    8     11             6              5
Table 7. For f_1(x) = 0, a comparison of the number of iterations (NIs) and the COCs for Algorithms 1–3.
(a_0, b_0)      Algorithm 1    Algorithm 2    Algorithm 3
(13, 6.577)     11, 2.97       6, 6.06        5, 15.78
(12, 4)         18, 3.28       10, 4.26       7, 4.94
(11, 6)         26, 2.42       14, 3.32       10, 3.90
Table 8. A comparison of different methods for the number of iterations.
Functions    x_0    NM    NNT    CM    NRM    Algorithm 1    Algorithm 2    Algorithm 3
f_1          3.5    13    7      8     7      7              4              3
f_2          −1     7     5      5     4      8              5              4
f_3          2      6     5      4     4      6              4              3
f_4          3.5    8     5      5     5      6              4              3
f_5          1.5    7     5      5     4      5              3              3
Table 9. A comparison of different methods for the efficiency index (E.I.).
Method         p     m    E.I. = p^{1/m}
NM             2     2    1.41421
HM             3     3    1.44225
CM             4     4    1.41421
NRM            4     8    1.18921
LM             4     3    1.58740
MCM            8     4    1.68179
AM             9     6    1.44225
AHHRM          12    6    1.51309
Algorithm 1    3     1    3
Algorithm 2    6     2    2.44949
Algorithm 3    12    3    2.28943
Table 10. For f_1(x) = 0 with the solution x = 3 and x_0 = 4, a comparison of the number of iterations (NIs) for different methods.
Algorithm 1    Algorithm 2    Algorithm 3
13             7              5
Table 11. A comparison of Algorithms 1–3 for the NIs and COCs of different functions.
Functions    x_0    x̃_0     x̃_2     Algorithm 1    Algorithm 2    Algorithm 3
f_1          3.5    2.9      3.07     10, 1.0042     6, 8.7906      4, 7.1009
f_2          −1     1.3      1.5      12, 0.9999     7, 5.228       5, 9.213
f_3          2      0.18     0.2      7, 1.0001      4, 1.9447      3, 10.8355
f_4          3.5    1.85     2.15     10, 0.9994     6, 3.6027      4, 23.5558
f_5          1.5    2.15     2.16     6, 1.0001      4, 17.2504     3, 14.5535
Table 12. The NIs, COCs, and E.I.s for Algorithm 4.
Functions    x_0    [x̃_0, x̃_2]    NI    COC        E.I.
f_1          3.5    [2.8, 3.1]      5     62.5705    2.813
f_2          1      [1, 2.5]        4     4.0838     1.422
f_3          2      [0.1, 0.5]      3     1.764      1.153
f_4          0.5    [1, 2.5]        5     5.5014     1.532
f_5          1      [2, 3.5]        5     15.6561    1.9897
Table 13. The NIs, COCs, and E.I.s for Algorithm 5.
Functions    x_0     [x̃_0, x̃_2]    NI    COC        E.I.
f_1          2.8     [2.8, 3.1]      4     17.6786    2.605
f_2          1       [1, 2.5]        4     4.108      1.602
f_3          −1      [0.1, 0.5]      4     5.2009     1.733
f_4          0.5     [1, 2.5]        5     5.4992     1.765
f_5          1       [2, 3.5]        5     15.997     2.520
f_6          −1.5    [0.5, 3.5]      9     15.517     2.494
Table 14. The NIs, COCs, and E.I.s for Algorithm 6.
Functions    x_0    [x̃_0, x̃_2]    NI    COC        E.I.
f_1          3.3    [2.9, 3.1]      5     18.0455    2.623
f_2          0.5    [0.1, 3.5]      5     5.6965     1.786
f_3          0.1    [0.1, 1.5]      4     8.8734     2.07
f_4          1.5    [1, 3.5]        5     4.4713     1.648
f_5          0      [1, 3.5]        5     9.6758     2.131
f_6          0.5    [0.5, 2.5]      10    5.1352     1.725
Table 15. The NIs, COCs, and E.I.s for the ATSMUM.
Functions    x_0    [x̃_0, x̃_2]     ξ_0      |N_2(x_{n+1}) − f(x_{n+1})|    NI    COC      E.I.
f_1          2.9    [2.95, 3.04]     −0.05    1.0 × 10^{−17}                 4     8.218    2.018
f_2          1      [0.5, 2.5]       3        2.2 × 10^{−9}                  4     4.985    1.708
f_3          2      [1, 1.5]         −5       2.5 × 10^{−11}                 4     6.159    1.833
f_4          1.5    [1.96, 2.05]     −1       5.0 × 10^{−15}                 4     9.364    2.108
f_5          2.2    [0.5, 3]         −3       3.59 × 10^{−19}                3     6.667    1.882
f_6          0.5    [0.5, 1.5]       0.1      3.6 × 10^{−9}                  4     9.865    2.145
Table 16. The NIs, COCs, and E.I.s without using the accelerating technique in Equation (108).
Functions    x_0    [x̃_0, x̃_2]     ξ       NI    COC      E.I.
f_1          2.9    [2.95, 3.04]     0.25    4     6.389    1.856
f_2          1      [0.5, 2.5]       1       4     3.947    1.580
f_3          2      [1, 1.5]         −0.5    4     5.111    1.723
f_4          1.5    [1.96, 2.05]     −0.5    4     6.869    1.901
f_5          2.2    [0.5, 3]         −1      3     4.858    1.694
f_6          0.5    [0.5, 1.5]       0.2     5     3.296    1.488
Table 17. The NIs, COCs, and E.I.s for Algorithm 7.
Functions    x_0    [x̃_0, x̃_2]     Q_0    NI    COC       E.I.
f_2          1.5    [0.5, 1.5]       −5     3     9.896     2.147
f_3          0.5    [0.5, 1.5]       −1     2     ×         ×
f_4          1.5    [1.95, 2.05]     −1     2     ×         ×
f_6          0.5    [1.5, 2.5]       1      4     14.882    2.460
