Article

A New Adaptive Eleventh-Order Memory Algorithm for Solving Nonlinear Equations

by Sunil Panday 1, Shubham Kumar Mittal 1,*, Carmen Elena Stoenoiu 2 and Lorentz Jäntschi 3

1 Department of Mathematics, National Institute of Technology Manipur, Langol, Imphal 795004, Manipur, India
2 Department of Electric Machines and Drives, Technical University of Cluj-Napoca, 26-28 Baritiu Str., 400027 Cluj-Napoca, Romania
3 Department of Physics and Chemistry, Technical University of Cluj-Napoca, 103-105 Muncii Blvd., 400641 Cluj-Napoca, Romania
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(12), 1809; https://doi.org/10.3390/math12121809
Submission received: 6 May 2024 / Revised: 3 June 2024 / Accepted: 7 June 2024 / Published: 11 June 2024
(This article belongs to the Special Issue New Trends in Nonlinear Analysis)

Abstract

In this article, we introduce a novel three-step iterative algorithm with memory for finding the roots of nonlinear equations. The convergence order of an established eighth-order iterative method is elevated by transforming it into a with-memory variant. The improvement in the convergence order is achieved by introducing two self-accelerating parameters, calculated using the Hermite interpolating polynomial. As a result, the R-order of convergence for the proposed bi-parametric with-memory iterative algorithm is enhanced from 8 to 10.5208. Notably, this enhancement in the convergence order is accomplished without the need for extra function evaluations. Moreover, the efficiency index of the newly proposed with-memory iterative algorithm improves from 1.5157 to 1.6011. Extensive numerical testing across various problems confirms the usefulness and superior performance of the presented algorithm relative to some well-known existing algorithms.

1. Introduction

Addressing nonlinear equations is a critical challenge in science and engineering, particularly in fields such as gas dynamics and elasticity, where problems are often reduced to solving a single-variable nonlinear equation Ω(x) = 0, with Ω : D ⊆ ℝ → ℝ a scalar function on an open interval D. Traditional analytical methods frequently prove inadequate for determining the roots of these complex nonlinear equations, so iterative numerical methods, aided by the ongoing advancement of computational technology, have become indispensable.
The classical one-point Newton’s method [1] for a nonlinear equation is defined by the iterative formula:
$$ x_{n+1} = x_n - \frac{\Omega(x_n)}{\Omega'(x_n)}, \qquad n = 0, 1, 2, \ldots \tag{1} $$
where Ω is the function and Ω′ is its derivative. The Newton–Raphson method, known for its quadratic convergence near a simple root, requires the evaluation of both the function and its derivative in each iteration. Researchers consistently strive to enhance the convergence rate of iterative methods.
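For concreteness, here is a minimal Python sketch of the classical Newton iteration (1); the tolerance, iteration cap, and the use of the function x³ − 10 (Example 6 of Section 3) as a demonstration are illustrative choices, not prescriptions from the paper.

```python
def newton(omega, omega_prime, x0, tol=1e-12, max_iter=50):
    """Newton's method (1): x_{n+1} = x_n - Omega(x_n)/Omega'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = omega(x) / omega_prime(x)
        x -= step
        if abs(step) < tol:   # stop once corrections are negligible
            break
    return x

# Root of x^3 - 10 from x0 = 2.1 (Example 6 in Section 3): ~2.15443
print(newton(lambda x: x**3 - 10, lambda x: 3 * x**2, 2.1))
```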
Multipoint methods for solving nonlinear equations offer significant advantages over one-point methods due to their computational efficiency and higher convergence order. Researchers have shown considerable interest in constructing optimal multipoint methods without memory, following the Kung–Traub conjecture [2], according to which a without-memory method using n + 1 functional evaluations per iteration has at most the optimal convergence order 2^n.
Recent innovations have improved a variety of numerical methods, including the Adomian decomposition Newton–Raphson [1], bisection [3], Chebyshev–Halley [4], Chun–Neta [5], collocation [6], Galerkin [7], and Jarratt methods [8], as well as the Nash–Moser iteration [9], Thukral method [10], Osada method [11], Ostrowski method [12], Picard iteration [13], diverse quadrature formulas [14,15], super-Halley method [16], and Traub–Steffensen method [17].
As the convergence rate increases, the number of required function evaluations per iteration also increases, which can lead to a reduction in the efficiency index. The efficiency index of an iterative method quantifies its performance, and it is defined as [2,18]:
$$ E = \rho^{1/\gamma}, \tag{2} $$
where ρ is the convergence rate of the iterative method and γ is the number of function and derivative evaluations performed per iteration.
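As a quick check of Equation (2) against the efficiency indices quoted in this paper, charging both the eighth-order base method and the proposed with-memory variant γ = 5 evaluations per iteration reproduces the stated values; note that this evaluation count is inferred from the quoted indices rather than stated explicitly at this point.

```python
# Efficiency index E = rho**(1/gamma) from Equation (2).
# gamma = 5 reproduces the indices quoted in the abstract (an inference).
rho_without_memory, rho_with_memory, gamma = 8, 10.5208, 5
print(rho_without_memory ** (1 / gamma))  # ~1.5157
print(rho_with_memory ** (1 / gamma))     # ~1.6011
```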
On the other hand, iterative methods that incorporate memory make use of information from both recent and past iterations to boost both the convergence order and the efficiency index. Recent advancements in the field have seen significant contributions in extending without-memory methods to with-memory methods using self-accelerating parameters. In 2022, Choubey et al. [19] transformed a fourth-order without-memory iterative method into a with-memory method using one self-accelerating parameter and achieved sixth-order convergence. In 2023, Sharma et al. [20] upgraded an eighth-order without-memory iterative method to a with-memory method using two self-accelerating parameters and attained tenth-order convergence. Also in 2023, Abdullah et al. [21] developed a with-memory method by enhancing a without-memory method with one parameter, which improved its convergence order from 6 to 7.2749. Additionally, in the same year, Thangkhenpau et al. [22] developed a derivative-free without-memory iterative method with eighth-order convergence and then expanded it to a with-memory method using four self-accelerating parameters, which resulted in an increase in the convergence order from 8 to 15.5156. In their pursuit of resolving nonlinear equations with multiple roots, Thangkhenpau et al. introduced a novel scheme offering both with- and without-memory-based variants [23]. In recent years, the development of with-memory iterative methods has garnered considerable interest among researchers. For a deeper understanding, one can refer to [23,24,25,26,27,28,29,30,31,32,33,34] and the references cited therein.
In this research paper, a novel bi-parametric three-step with-memory iterative algorithm is introduced, which elevates the R-order of convergence from 8 to 10.5208. The algorithm achieves an efficiency index of 1.6011. The paper is structured to enhance understanding and analysis. Section 2 details the development of this new bi-parametric three-point with-memory iterative algorithm by integrating self-accelerating parameters into the first and third steps of an existing eighth-order without-memory iterative algorithm, accompanied by a thorough convergence analysis. Section 3 presents an extensive evaluation through numerical tests, providing a rigorous comparison of the proposed method with other well-established algorithms. Finally, Section 4 summarizes this study, offering a detailed synthesis of the results and their implications.

2. Analysis of Convergence for With-Memory Algorithm

In this section, the two parameters α, β ∈ ℝ, used in the first and third steps, respectively, of Algorithm 2.1 in [35], proposed by Butsakorn Kong-ied in 2021, are utilized to increase its order of convergence:
$$ \begin{aligned} y_n &= x_n - \frac{\Omega(x_n)}{\Omega'(x_n) + \alpha\,(\Omega(x_n))^2}, \\ z_n &= y_n - \frac{(\Omega(x_n))^2\,\Omega(y_n)}{(\Omega(x_n))^2\,\Omega'(x_n) - 2\,\Omega(x_n)\,\Omega'(x_n)\,\Omega(y_n) + \Omega'(x_n)\,(\Omega(y_n))^2}, \\ x_{n+1} &= z_n - \frac{\Omega(z_n)}{\Omega'(z_n) + \beta\,\Omega(z_n)}. \end{aligned} \tag{3} $$
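Under our reading of the reconstructed formulas, scheme (3) transcribes directly into code. The sketch below freezes the parameters at the starting values used later in Section 3 (α₀ = 0.001, β₀ = 0.1); the variable names and the stopping rule are illustrative.

```python
def scheme3_step(f, df, x, alpha, beta):
    """One pass of the three-step scheme (3) with frozen alpha, beta."""
    fx, dfx = f(x), df(x)
    y = x - fx / (dfx + alpha * fx**2)
    fy = f(y)
    z = y - fx**2 * fy / (fx**2 * dfx - 2 * fx * dfx * fy + dfx * fy**2)
    fz, dfz = f(z), df(z)
    return z - fz / (dfz + beta * fz)

f, df = lambda t: t**3 - 10, lambda t: 3 * t**2
x = 2.1
for _ in range(5):
    if abs(f(x)) < 1e-12:   # stop once the residual is at machine scale
        break
    x = scheme3_step(f, df, x, alpha=0.001, beta=0.1)
print(x)  # ~2.15443
```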
Using a Taylor-series approximation, the expressions for Ω(x_n) and Ω′(x_n) can be written as:
$$ \Omega(x_n) = A\left(e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + c_5 e_n^5 + c_6 e_n^6 + c_7 e_n^7 + c_8 e_n^8\right), \tag{4} $$
$$ \Omega'(x_n) = A\left(1 + 2 c_2 e_n + 3 c_3 e_n^2 + 4 c_4 e_n^3 + 5 c_5 e_n^4 + 6 c_6 e_n^5 + 7 c_7 e_n^6 + 8 c_8 e_n^7 + 9 c_9 e_n^8\right), \tag{5} $$
where A = Ω′(ξ), ξ is the zero of Ω(x), e_n = x_n − ξ, and c_j = Ω^{(j)}(ξ)/(j! Ω′(ξ)) for j = 2, 3, ….
After substituting the values of Equations (4) and (5) into the first step of (3), the expression for the error of y_n is given by:
$$ e_{n,y} = c_2 e_n^2 + \left(A\alpha - 2 c_2^2 + 2 c_3\right) e_n^3 + \left[4 c_2^3 - c_2\left(A\alpha + 7 c_3\right) + 3 c_4\right] e_n^4 + O\!\left(e_n^5\right), \tag{6} $$
where e_{n,y} = y_n − ξ.
Furthermore, the expression for Ω(y_n) can be written as:
$$ \Omega(y_n) = A c_2 e_n^2 + A\left(A\alpha - 2 c_2^2 + 2 c_3\right) e_n^3 + A\left[5 c_2^3 - c_2\left(A\alpha + 7 c_3\right) + 3 c_4\right] e_n^4 + O\!\left(e_n^5\right). \tag{7} $$
After substituting the values of Equations (4)–(7) into the second step of (3), the expression for the error of z_n is given by:
$$ e_{n,z} = -c_2\left(2 A\alpha - 2 c_2^2 + c_3\right) e_n^4 - \left[10 c_2^4 - 7 c_2^2\left(A\alpha + 2 c_3\right) + \left(2 A\alpha + c_3\right)\left(A\alpha + 2 c_3\right) + 2 c_2 c_4\right] e_n^5 + O\!\left(e_n^6\right), \tag{8} $$
where e_{n,z} = z_n − ξ.
Also, the expressions for Ω(z_n) and Ω′(z_n) can be given by:
$$ \Omega(z_n) = -A c_2\left(2 A\alpha - 2 c_2^2 + c_3\right) e_n^4 - A\left[10 c_2^4 - 7 c_2^2\left(A\alpha + 2 c_3\right) + \left(2 A\alpha + c_3\right)\left(A\alpha + 2 c_3\right) + 2 c_2 c_4\right] e_n^5 + O\!\left(e_n^6\right), \tag{9} $$
$$ \Omega'(z_n) = A - 2 A c_2^2\left(2 A\alpha - 2 c_2^2 + c_3\right) e_n^4 - 2 A c_2\left[10 c_2^4 - 7 c_2^2\left(A\alpha + 2 c_3\right) + \left(2 A\alpha + c_3\right)\left(A\alpha + 2 c_3\right) + 2 c_2 c_4\right] e_n^5 + O\!\left(e_n^6\right). \tag{10} $$
Finally, after substituting the values of Equations (8)–(10) into the third step of (3), the expression for the error of x_{n+1} is given by:
$$
\begin{aligned}
e_{n+1} ={}& c_2^2 (\beta + c_2)\left(2 A\alpha - 2 c_2^2 + c_3\right)^2 e_n^8 \\
&+ 2 c_2 (\beta + c_2)\left(2 A\alpha - 2 c_2^2 + c_3\right)\left[10 c_2^4 - 7 c_2^2 (A\alpha + 2 c_3) + (2 A\alpha + c_3)(A\alpha + 2 c_3) + 2 c_2 c_4\right] e_n^9 \\
&+ (\beta + c_2)\Big(224 c_2^8 + (2 A\alpha + c_3)^2 (A\alpha + 2 c_3)^2 - 10 c_2^6 (34 A\alpha + 63 c_3) + 124 c_2^5 c_4 - 18 c_2^3 (8 A\alpha + 7 c_3) c_4 \\
&\qquad + 2 c_2 (2 A\alpha + c_3)(10 A\alpha + 11 c_3) c_4 + c_2^4\left(173 A^2\alpha^2 + 730 A\alpha c_3 + 500 c_3^2 - 12 c_5\right) \\
&\qquad - 2 c_2^2\left(18 A^3\alpha^3 + 119 A^2\alpha^2 c_3 + 171 A\alpha c_3^2 + 58 c_3^3 - 2 c_4^2 - 3 (2 A\alpha + c_3) c_5\right)\Big) e_n^{10} \\
&- 2\Big((\beta + c_2)\Big(458 c_2^9 - c_2^7 (637 A\alpha + 1720 c_3) + 472 c_2^6 c_4 - (2 A\alpha + c_3)(A\alpha + 2 c_3)(8 A\alpha + 7 c_3) c_4 \\
&\qquad + c_2^5\left(311 A^2\alpha^2 + 4 c_3 (479 A\alpha + 498 c_3) - 86 c_5\right) \\
&\qquad + c_2\left(6 A^4\alpha^4 + 79 A^3\alpha^3 c_3 + 258 A^2\alpha^2 c_3^2 + 270 A\alpha c_3^3 + 80 c_3^4 - 28 A\alpha c_4^2 - 20 c_3 c_4^2 - 2 (2 A\alpha + c_3)(7 A\alpha + 8 c_3) c_5\right) \\
&\qquad + c_2^3\left(68 A^3\alpha^3 - 1508 A\alpha c_3^2 - 792 c_3^3 + 54 c_4^2 + 99 A\alpha c_5 + c_3 (665 A^2\alpha^2 + 90 c_5)\right) \\
&\qquad + c_2^4\left(-(541 A\alpha + 784 c_3) c_4 + 8 c_6\right) + 2 c_2^2\left(c_4\left(89 A^2\alpha^2 + 3 c_3 (89 A\alpha + 48 c_3) - 3 c_5\right) - 2 (2 A\alpha + c_3) c_6\right)\Big)\Big) e_n^{11} \\
&+ O\!\left(e_n^{12}\right).
\end{aligned}
\tag{11}
$$
Now, by replacing α and β in Equation (3) with α_n and β_n, calculated using Equations (13) and (14), respectively, the following with-memory iterative scheme is obtained:
$$ \begin{aligned} y_n &= x_n - \frac{\Omega(x_n)}{\Omega'(x_n) + \alpha_n\,(\Omega(x_n))^2}, \\ z_n &= y_n - \frac{(\Omega(x_n))^2\,\Omega(y_n)}{(\Omega(x_n))^2\,\Omega'(x_n) - 2\,\Omega(x_n)\,\Omega'(x_n)\,\Omega(y_n) + \Omega'(x_n)\,(\Omega(y_n))^2}, \\ x_{n+1} &= z_n - \frac{\Omega(z_n)}{\Omega'(z_n) + \beta_n\,\Omega(z_n)}. \end{aligned} \tag{12} $$
The above scheme is denoted by NWM11. From (11), it is clear that the convergence order of Algorithm (3) is 8 when α ≠ (2c₂² − c₃)/(2A) and β ≠ −c₂. To accelerate the order of convergence of Algorithm (3) from 8 to 11, one could take α = (2c₂² − c₃)/(2A) = [3(Ω″(ξ))² − Ω‴(ξ)Ω′(ξ)]/[12(Ω′(ξ))³] and β = −c₂ = −Ω″(ξ)/(2Ω′(ξ)); however, the exact values of Ω′(ξ), Ω″(ξ), and Ω‴(ξ) are not attainable in practice. Let the parameters α and β be replaced by α_n and β_n, respectively. These can be updated iteratively using the available data from the current and previous iterations, aiming for them to satisfy lim_{n→∞} α_n = (2c₂² − c₃)/(2A) and lim_{n→∞} β_n = −c₂ = −Ω″(ξ)/(2Ω′(ξ)), so that the asymptotic convergence constants of the 8th, 9th, and 10th orders in the error expression (11) vanish. α_n and β_n are chosen as follows:
$$ \alpha_n = \frac{3\left(H_5''(x_n)\right)^2 - H_6'''(y_n)\,\Omega'(x_n)}{12\left(\Omega'(x_n)\right)^3}, \tag{13} $$
$$ \beta_n = -\frac{H_5''(x_n)}{2\,\Omega'(x_n)}, \tag{14} $$
where
$$
\begin{aligned}
H_6(x) ={}& \Omega(y_n) + (x - y_n)\,\Omega[y_n, x_n] + (x - y_n)(x - x_n)\,\Omega[y_n, x_n, x_n] + (x - y_n)(x - x_n)^2\,\Omega[y_n, x_n, x_n, z_{n-1}] \\
&+ (x - y_n)(x - x_n)^2(x - z_{n-1})\,\Omega[y_n, x_n, x_n, z_{n-1}, y_{n-1}] \\
&+ (x - y_n)(x - x_n)^2(x - z_{n-1})(x - y_{n-1})\,\Omega[y_n, x_n, x_n, z_{n-1}, y_{n-1}, x_{n-1}] \\
&+ (x - y_n)(x - x_n)^2(x - z_{n-1})(x - y_{n-1})(x - x_{n-1})\,\Omega[y_n, x_n, x_n, z_{n-1}, y_{n-1}, x_{n-1}, x_{n-1}], \\[4pt]
H_5(x) ={}& \Omega(x_n) + (x - x_n)\,\Omega[x_n, x_n] + (x - x_n)^2\,\Omega[x_n, x_n, z_{n-1}] + (x - x_n)^2(x - z_{n-1})\,\Omega[x_n, x_n, z_{n-1}, y_{n-1}] \\
&+ (x - x_n)^2(x - z_{n-1})(x - y_{n-1})\,\Omega[x_n, x_n, z_{n-1}, y_{n-1}, x_{n-1}] \\
&+ (x - x_n)^2(x - z_{n-1})(x - y_{n-1})(x - x_{n-1})\,\Omega[x_n, x_n, z_{n-1}, y_{n-1}, x_{n-1}, x_{n-1}], \\[4pt]
H_6'''(y_n) ={}& 6\,\Omega[y_n, x_n, x_n, z_{n-1}] + 6\left[2(y_n - x_n) + (y_n - z_{n-1})\right]\Omega[y_n, x_n, x_n, z_{n-1}, y_{n-1}] \\
&+ 6\left[(y_n - x_n)^2 + 2(y_n - x_n)(y_n - z_{n-1}) + 2(y_n - x_n)(y_n - y_{n-1}) + (y_n - z_{n-1})(y_n - y_{n-1})\right]\Omega[y_n, x_n, x_n, z_{n-1}, y_{n-1}, x_{n-1}] \\
&+ 6\big[(y_n - x_n)^2(y_n - z_{n-1}) + (y_n - x_n)^2(y_n - y_{n-1}) + 2(y_n - x_n)(y_n - z_{n-1})(y_n - y_{n-1}) + (y_n - x_n)^2(y_n - x_{n-1}) \\
&\qquad + 2(y_n - x_n)(y_n - z_{n-1})(y_n - x_{n-1}) + 2(y_n - x_n)(y_n - y_{n-1})(y_n - x_{n-1}) \\
&\qquad + (y_n - z_{n-1})(y_n - y_{n-1})(y_n - x_{n-1})\big]\Omega[y_n, x_n, x_n, z_{n-1}, y_{n-1}, x_{n-1}, x_{n-1}], \\[4pt]
H_5''(x_n) ={}& 2\,\Omega[x_n, x_n, z_{n-1}] + 2(x_n - z_{n-1})\,\Omega[x_n, x_n, z_{n-1}, y_{n-1}] + 2(x_n - z_{n-1})(x_n - y_{n-1})\,\Omega[x_n, x_n, z_{n-1}, y_{n-1}, x_{n-1}] \\
&+ 2(x_n - z_{n-1})(x_n - y_{n-1})(x_n - x_{n-1})\,\Omega[x_n, x_n, z_{n-1}, y_{n-1}, x_{n-1}, x_{n-1}].
\end{aligned}
$$
It should be noted that the Hermite interpolation polynomial H_m(x) satisfies the condition H_m′(x_n) = Ω′(x_n) for m = 5, 6. Hence, α_n = [3(H₅″(x_n))² − H₆‴(y_n)Ω′(x_n)]/[12(Ω′(x_n))³] and β_n = −H₅″(x_n)/(2Ω′(x_n)) can also be expressed as α_n = [3(H₅″(x_n))² − H₆‴(y_n)H_m′(x_n)]/[12(H_m′(x_n))³] and β_n = −H₅″(x_n)/(2H_m′(x_n)), respectively, for m = 5, 6.
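Since the updates (13) and (14) use only values that the iteration has already produced, they can be realized by building the confluent divided-difference tables of H₅ and H₆ and differentiating the resulting Newton form. The sketch below is one such realization under the reconstructed formulas; the helper names are ours, NumPy's poly1d handles the polynomial algebra, and only doubled nodes (as in the node lists above) are assumed.

```python
import numpy as np

def hermite_poly(nodes, fvals, dvals):
    """Newton-form Hermite interpolant as an np.poly1d.

    `nodes` may contain adjacent repeated points; for such a pair,
    dvals[i] supplies the derivative value Omega'(nodes[i])."""
    n = len(nodes)
    table = [list(fvals)]                       # column 0: function values
    for j in range(1, n):
        col = []
        for i in range(n - j):
            if nodes[i + j] == nodes[i]:        # confluent pair -> derivative
                col.append(dvals[i])
            else:
                col.append((table[j - 1][i + 1] - table[j - 1][i])
                           / (nodes[i + j] - nodes[i]))
        table.append(col)
    poly, basis = np.poly1d([0.0]), np.poly1d([1.0])
    for j in range(n):                          # assemble the Newton form
        poly = poly + table[j][0] * basis
        basis = basis * np.poly1d([1.0, -nodes[j]])
    return poly

def accelerating_parameters(xn, yn, z_prev, y_prev, x_prev, f, df):
    """alpha_n and beta_n of (13)-(14) from H5 and H6 (a sketch)."""
    nodes6 = [yn, xn, xn, z_prev, y_prev, x_prev, x_prev]
    nodes5 = [xn, xn, z_prev, y_prev, x_prev, x_prev]
    d6 = [None, df(xn), None, None, None, df(x_prev), None]
    d5 = [df(xn), None, None, None, df(x_prev), None]
    H6 = hermite_poly(nodes6, [f(t) for t in nodes6], d6)
    H5 = hermite_poly(nodes5, [f(t) for t in nodes5], d5)
    dfx = df(xn)
    alpha_n = (3 * H5.deriv(2)(xn) ** 2 - H6.deriv(3)(yn) * dfx) / (12 * dfx ** 3)
    beta_n = -H5.deriv(2)(xn) / (2 * dfx)
    return alpha_n, beta_n

# Illustrative call with made-up current and previous iterates near the
# root of x^3 - 10 (hypothetical values, for demonstration only):
a, b = accelerating_parameters(2.154435, 2.15444, 2.15445, 2.1546, 2.16,
                               lambda t: t**3 - 10, lambda t: 3 * t**2)
print(a, b)
```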
Theorem 1. 
Let H_m be the Hermite polynomial of degree m, interpolating the function Ω at the interpolation nodes y_n, x_n, x_n, z_{n−1}, y_{n−1}, x_{n−1}, x_{n−1} within an interval I ⊆ D ⊆ ℝ, on which the derivative Ω^{(m+1)} is continuous, with H_m(x_n) = Ω(x_n) and H_m′(x_n) = Ω′(x_n). Suppose that all nodes y_n, x_n, x_n, z_{n−1}, y_{n−1}, x_{n−1}, x_{n−1} are in the neighborhood of the root ξ. Then,
$$ H_6'''(y_n) = 6\,\Omega'(\xi)\left(c_3 - c_7\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2\right), \tag{15} $$
$$ H_5''(x_n) = 2\,\Omega'(\xi)\left(c_2 - c_6\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2\right), \tag{16} $$
and
$$ \alpha_n = \frac{3\left(H_5''(x_n)\right)^2 - H_6'''(y_n)\,\Omega'(x_n)}{12\left(\Omega'(x_n)\right)^3} \simeq \frac{2 c_2^2 - c_3}{2A} - \frac{c_6^2}{2A}\, e_{n-1,z}^2\, e_{n-1,y}^2\, e_{n-1}^4 + \frac{c_7}{2A}\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2, \tag{17} $$
$$ \beta_n = -\frac{H_5''(x_n)}{2\,\Omega'(x_n)} \simeq -c_2 + c_6\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2. \tag{18} $$
Again, after simplification, the result is
$$ \alpha_n - \frac{2 c_2^2 - c_3}{2A} = \frac{2 A\alpha_n - 2 c_2^2 + c_3}{2A} \simeq -\frac{c_6^2}{2A}\, e_{n-1,z}^2\, e_{n-1,y}^2\, e_{n-1}^4 + \frac{c_7}{2A}\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2, \tag{19} $$
$$ \beta_n + c_2 \simeq c_6\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2. \tag{20} $$
Proof. 
The sixth-degree and fifth-degree Hermite interpolation polynomials satisfy the error formulas
$$ \Omega(x) - H_6(x) = \frac{\Omega^{(7)}(\delta)}{7!}\,(x - y_n)(x - x_n)^2(x - z_{n-1})(x - y_{n-1})(x - x_{n-1})^2, \tag{22} $$
$$ \Omega(x) - H_5(x) = \frac{\Omega^{(6)}(\delta)}{6!}\,(x - x_n)^2(x - z_{n-1})(x - y_{n-1})(x - x_{n-1})^2. \tag{23} $$
In order to obtain the following equations, Equation (22) is differentiated three times at x = y_n, and Equation (23) is differentiated two times at x = x_n:
$$ H_6'''(y_n) = \Omega'''(y_n) - 6\,\frac{\Omega^{(7)}(\delta)}{7!}\,(y_n - z_{n-1})(y_n - y_{n-1})(y_n - x_{n-1})^2, \tag{24} $$
$$ H_5''(x_n) = \Omega''(x_n) - 2\,\frac{\Omega^{(6)}(\delta)}{6!}\,(x_n - z_{n-1})(x_n - y_{n-1})(x_n - x_{n-1})^2. \tag{25} $$
The Taylor-series expansion of the derivatives of Ω at the points x_n and y_n in I and at δ ∈ I, about the simple zero ξ of Ω, provides
$$ \Omega'(x_n) = \Omega'(\xi)\left[1 + 2 c_2 e_n + 3 c_3 e_n^2 + O\!\left(e_n^3\right)\right], \tag{26} $$
$$ \Omega''(x_n) = \Omega'(\xi)\left[2 c_2 + 6 c_3 e_n + O\!\left(e_n^2\right)\right]. \tag{27} $$
Similarly,
$$ \Omega'''(y_n) = \Omega'(\xi)\left[6 c_3 + 24 c_4 e_{n,y} + O\!\left(e_{n,y}^2\right)\right], \tag{28} $$
$$ \Omega^{(6)}(\delta) = \Omega'(\xi)\left[6!\, c_6 + 7!\, c_7 e_\delta + O\!\left(e_\delta^2\right)\right], \tag{29} $$
$$ \Omega^{(7)}(\delta) = \Omega'(\xi)\left[7!\, c_7 + 8!\, c_8 e_\delta + O\!\left(e_\delta^2\right)\right], \tag{30} $$
where e_δ = δ − ξ. Putting (28) and (30) into (24), and (27) and (29) into (25), we obtain
$$ H_6'''(y_n) = 6\,\Omega'(\xi)\left(c_3 - c_7\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2\right), \tag{31} $$
and
$$ H_5''(x_n) = 2\,\Omega'(\xi)\left(c_2 - c_6\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2\right). \tag{32} $$
By using Equations (26), (31) and (32), the result is
$$ \frac{3\left(H_5''(x_n)\right)^2 - H_6'''(y_n)\,\Omega'(x_n)}{12\left(\Omega'(x_n)\right)^3} \simeq \frac{2 c_2^2 - c_3}{2A} - \frac{c_6^2}{2A}\, e_{n-1,z}^2\, e_{n-1,y}^2\, e_{n-1}^4 + \frac{c_7}{2A}\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2, \tag{33} $$
and
$$ -\frac{H_5''(x_n)}{2\,\Omega'(x_n)} \simeq -c_2 + c_6\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2. \tag{34} $$
Hence,
$$ \alpha_n \simeq \frac{2 c_2^2 - c_3}{2A} - \frac{c_6^2}{2A}\, e_{n-1,z}^2\, e_{n-1,y}^2\, e_{n-1}^4 + \frac{c_7}{2A}\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2, \tag{35} $$
$$ \beta_n \simeq -c_2 + c_6\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2, \tag{36} $$
or
$$ \alpha_n - \frac{2 c_2^2 - c_3}{2A} = \frac{2 A\alpha_n - 2 c_2^2 + c_3}{2A} \simeq -\frac{c_6^2}{2A}\, e_{n-1,z}^2\, e_{n-1,y}^2\, e_{n-1}^4 + \frac{c_7}{2A}\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2, \tag{37} $$
$$ \beta_n + c_2 \simeq c_6\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2. \tag{38} $$
This completes the proof of Theorem 1. □
R-Order of Convergence: A sequence {x_n} is said to converge to x* with an R-order of convergence of at least τ > 1 if there are constants C ∈ (0, ∞) and θ ∈ (0, 1) such that [36]
$$ \|x^* - x_n\| \le C\,\theta^{\tau^n}, \qquad n = 0, 1, \ldots \tag{39} $$
Using the above definition of the R-order of convergence, together with the statement in [37], provides an estimate of the order of convergence of the iterative scheme (12).
Theorem 2. 
If the errors e_j = x_j − ξ of an iterative root-finding method (IM) satisfy the relation
$$ e_{k+1} \sim \prod_{i=0}^{n}\left(e_{k-i}\right)^{m_i}, \qquad k \ge k\left(\{e_k\}\right), \tag{40} $$
then the R-order of convergence of the IM, denoted as O_R(IM, ξ), satisfies the inequality O_R(IM, ξ) ≥ s*, where s* is the unique positive solution of the equation s^{n+1} − ∑_{i=0}^{n} m_i s^{n−i} = 0 [37].
Going further, the new iterative scheme with memory (12) is regulated by the subsequent convergence theorem.
Theorem 3. 
In the iterative method (12), let α_n and β_n be the varying parameters that are calculated using Equations (13) and (14). If an initial guess x₀ is sufficiently close to a simple zero ξ of Ω(x), then the R-order of convergence of the iterative method (12) with memory is at least 10.5208.
Proof. 
Let the iterative method (IM) generate the sequence {x_n}, which converges to the root ξ of Ω(x). With R-order O_R(IM, ξ) ≥ r, one may write
$$ e_{n+1} \sim D_{n,r}\, e_n^r, \tag{41} $$
and
$$ e_n \sim D_{n-1,r}\, e_{n-1}^r. \tag{42} $$
As n → ∞, D_{n,r} tends to D_r, the asymptotic error constant of the IM; then
$$ e_{n+1} \sim D_{n,r}\left(D_{n-1,r}\, e_{n-1}^r\right)^r = D_{n,r}\, D_{n-1,r}^r\, e_{n-1}^{r^2}. \tag{43} $$
The resulting error expressions of the with-memory scheme (12) can be obtained from Equations (6), (8) and (11) with the varying parameters α_n and β_n:
$$ e_{n,y} = y_n - \xi \sim c_2\, e_n^2, \tag{44} $$
$$ e_{n,z} = z_n - \xi \sim -c_2\left(2 A\alpha_n - 2 c_2^2 + c_3\right) e_n^4, \tag{45} $$
and
$$ e_{n+1} = x_{n+1} - \xi \sim c_2^2\left(\beta_n + c_2\right)\left(2 A\alpha_n - 2 c_2^2 + c_3\right)^2 e_n^8. \tag{46} $$
It should be noted that in Equations (44)–(46), the higher-order terms are excluded.
Furthermore, if the R-orders of convergence of the iterative sequences {y_n} and {z_n} are p and q, respectively, then
$$ e_{n,y} \sim D_{n,p}\, e_n^p \sim D_{n,p}\left(D_{n-1,r}\, e_{n-1}^r\right)^p = D_{n,p}\, D_{n-1,r}^p\, e_{n-1}^{rp}, \tag{47} $$
and
$$ e_{n,z} \sim D_{n,q}\, e_n^q \sim D_{n,q}\left(D_{n-1,r}\, e_{n-1}^r\right)^q = D_{n,q}\, D_{n-1,r}^q\, e_{n-1}^{rq}. \tag{48} $$
Now, from Equations (42) and (44), the obtained result is
$$ e_{n,y} \sim c_2\, e_n^2 \sim c_2\left(D_{n-1,r}\, e_{n-1}^r\right)^2 = c_2\, D_{n-1,r}^2\, e_{n-1}^{2r}. \tag{49} $$
Also, from Equations (37), (42) and (45), the following is obtained
$$ \begin{aligned} e_{n,z} &\sim -c_2\left(2 A\alpha_n - 2 c_2^2 + c_3\right) e_n^4 \sim c_2\, B_n\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2\left(D_{n-1,r}\, e_{n-1}^r\right)^4 \\ &\sim c_2\, B_n\left(D_{n-1,q}\, e_{n-1}^q\right)\left(D_{n-1,p}\, e_{n-1}^p\right) e_{n-1}^2\, D_{n-1,r}^4\, e_{n-1}^{4r} = c_2\, B_n\, D_{n-1,q}\, D_{n-1,p}\, D_{n-1,r}^4\, e_{n-1}^{4r + p + q + 2}. \end{aligned} \tag{50} $$
Again, from Equations (37), (38), (42) and (46), the result is
$$ \begin{aligned} e_{n+1} &\sim c_2^2\left(\beta_n + c_2\right)\left(2 A\alpha_n - 2 c_2^2 + c_3\right)^2 e_n^8 \sim c_2^2\, c_6\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2\left(B_n\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2\right)^2\left(D_{n-1,r}\, e_{n-1}^r\right)^8 \\ &= c_2^2\, c_6\, B_n^2\, e_{n-1,z}^3\, e_{n-1,y}^3\, e_{n-1}^6\left(D_{n-1,r}\, e_{n-1}^r\right)^8 \sim c_2^2\, c_6\, B_n^2\, D_{n-1,q}^3\, D_{n-1,p}^3\, D_{n-1,r}^8\, e_{n-1}^{8r + 3p + 3q + 6}, \end{aligned} \tag{51} $$
where B_n = −(c₆²/(2A)) e_{n−1,z} e_{n−1,y} e_{n−1}² + c₇/(2A).
Since r > q > p, equating the exponents of e_{n−1} in the pairs of relations (47)–(49), (48)–(50) and (43)–(51), the following system of equations is obtained:
$$ rp = 2r, \qquad rq = p + q + 4r + 2, \qquad r^2 = 3p + 3q + 8r + 6. \tag{52} $$
The solution of (52) is r = 10.5208, q = 4.8403, and p = 2. As a result, the R-order of convergence of the with-memory iterative method (12) is at least 10.5208. □
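The unique positive solution of system (52) can also be confirmed numerically; the short check below assumes SciPy is available and uses an initial guess near the reported values.

```python
# Numerical confirmation of the solution of system (52); assumes SciPy.
from scipy.optimize import fsolve

def system(v):
    r, q, p = v
    return [r * p - 2 * r,
            r * q - (p + q + 4 * r + 2),
            r ** 2 - (3 * p + 3 * q + 8 * r + 6)]

r, q, p = fsolve(system, [10.0, 5.0, 2.0])
print(round(r, 4), round(q, 4), round(p, 4))  # 10.5208 4.8403 2.0
```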

3. Numerical Discussion

In this section, the convergence behavior of the newly developed with-memory method (NWM11) presented in (12) is explored. The goal is to assess the effectiveness of the newly developed iterative method by applying it to a range of nonlinear equations. The nonlinear test functions, along with their roots and initial guesses for the numerical analysis, are detailed below:
Example 1. 
$$ \Omega_1(x) = e^x \sin x + \log\!\left(x^4\right) - 3x + 1, \qquad x_0 = 0.3, \qquad \xi \approx 0.9400. $$
Example 2. 
$$ \Omega_2(x) = x^2 - (1 - x)^{25}, \qquad x_0 = 0.2, \qquad \xi \approx 0.1437. $$
Example 3. 
$$ \Omega_3(x) = e^{x^3 - x} - \cos\!\left(x^2 - 1\right) + x^3 + 1, \qquad x_0 = -1.6, \qquad \xi \approx -1.0000. $$
Example 4. 
$$ \Omega_4(x) = x^6 - 10x^3 + x^2 - x + 3, \qquad x_0 = 0.6, \qquad \xi \approx 0.6586. $$
Example 5. 
$$ \Omega_5(x) = (\sin x)^2 - x^2 + 1, \qquad x_0 = 1.3, \qquad \xi \approx 1.4045. $$
Example 6. 
$$ \Omega_6(x) = x^3 - 10, \qquad x_0 = 2.1, \qquad \xi \approx 2.1544. $$
Example 7. 
$$ \Omega_7(x) = x^5 + x^4 + 4x^2 - 15, \qquad x_0 = 1.3, \qquad \xi \approx 1.3474. $$
Example 8. 
$$ \Omega_8(x) = e^{x^2 + x\cos x - 1}\,\sin(\pi x) + x \log(x \sin x + 1), \qquad x_0 = 0.53, \qquad \xi = 0. $$
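Several of the test problems above can be transcribed directly for experimentation; the sketch below collects the polynomial and trigonometric cases as (function, initial guess) pairs, with the dictionary keys being our naming.

```python
import math

# (function, initial guess) pairs; approximate roots in the comments.
problems = {
    "Omega_2": (lambda x: x**2 - (1 - x)**25, 0.2),            # xi ~ 0.1437
    "Omega_4": (lambda x: x**6 - 10*x**3 + x**2 - x + 3, 0.6), # xi ~ 0.6586
    "Omega_5": (lambda x: math.sin(x)**2 - x**2 + 1, 1.3),     # xi ~ 1.4045
    "Omega_6": (lambda x: x**3 - 10, 2.1),                     # xi ~ 2.1544
    "Omega_7": (lambda x: x**5 + x**4 + 4*x**2 - 15, 1.3),     # xi ~ 1.3474
}
```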
The proposed method NWM11 (12) is evaluated against several well-established methods documented in the literature: BK8 (53), KP10 (54), OSO10 (55), NJ10 (56), NAJJ10 (57), and XT10 (58), which are described below.
In 2021, Butsakorn Kong-ied (BK8) [35] developed an eighth-order iterative method, which is defined as:
$$ \begin{aligned} y_n &= x_n - \frac{\Omega(x_n)}{\Omega'(x_n)}, \\ z_n &= y_n - \frac{(\Omega(x_n))^2\,\Omega(y_n)}{(\Omega(x_n))^2\,\Omega'(x_n) - 2\,\Omega(x_n)\,\Omega'(x_n)\,\Omega(y_n) + \Omega'(x_n)\,(\Omega(y_n))^2}, \\ x_{n+1} &= z_n - \frac{\Omega(z_n)}{\Omega'(z_n)}. \end{aligned} \tag{53} $$
In 2024, Devi and Maroju (KP10) [38] developed a tenth-order iterative method, defined as:
$$ \begin{aligned} y_n &= x_n - \frac{\Omega(x_n)}{\Omega'(x_n)}, \\ z_n &= y_n - \frac{5\,\Omega(y_n)}{\Omega'(x_n)}, \\ w_n &= z_n - \frac{\Omega(z_n)}{16\,\Omega'(y_n) - 5\,\Omega'(x_n)}, \\ x_{n+1} &= w_n - \frac{\Omega(w_n)}{\Omega'(w_n)}. \end{aligned} \tag{54} $$
In 2023, Ogbereyivwe et al. (OSO10) [39] developed a tenth-order iterative method, defined as:
$$ \begin{aligned} w_n &= x_n + \alpha\,(\Omega(x_n))^3, \\ y_n &= x_n - \frac{\Omega(x_n)}{\Omega[x_n, w_n]}, \\ z_n &= y_n - \frac{\Omega(y_n)}{\Omega'(y_n)} + \frac{1}{2}\left(\frac{\Omega(y_n)}{\Omega'(y_n)}\right)^{2} \frac{\left(2\,\Omega'(y_n) - \Omega[x_n, w_n]\right)\left(\Omega'(y_n) - \Omega[x_n, w_n]\right)}{\Omega'(x_n)}, \\ x_{n+1} &= z_n - \frac{G(u_n)\,\Omega(z_n)}{2\,\Omega[y_n, z_n] - \Omega'(y_n)}, \end{aligned} \tag{55} $$
where u_n = Ω(z_n)/Ω(x_n), G(u_n) is a weight function, and α ∈ ℝ.
In 2016, Choubey and Jaiswal (NJ10) [32] developed a bi-parametric with-memory iterative method, with tenth-order convergence for solving nonlinear equations, defined as:
$$ \begin{aligned} y_n &= x_n - \frac{\Omega(x_n)}{\Omega'(x_n) - T_n\,\Omega(x_n)}, \\ z_n &= y_n - \frac{\Omega(y_n)\left(\Omega(x_n) + \gamma\,\Omega(y_n)\right)}{\left(\Omega'(x_n) - 2 T_n\,\Omega(x_n)\right)\left(\Omega(x_n) + (\gamma - 2)\,\Omega(y_n)\right)}, \\ x_{n+1} &= z_n - \frac{\Omega(z_n)}{\Omega[z_n, y_n] + \Omega[z_n, y_n, x_n](z_n - y_n) + \Omega[z_n, y_n, x_n, x_n](z_n - y_n)(z_n - x_n)}, \end{aligned} \tag{56} $$
where T, γ ∈ ℝ and T_n is calculated as T_n = H₅″(x_n)/(2Ω′(x_n)).
In 2018, Choubey et al. (NAJJ10) [33] proposed a tenth-order with-memory iterative method using two self-accelerating parameters, defined as:
$$ \begin{aligned} y_n &= x_n - \frac{\Omega(x_n)}{\Omega'(x_n) - \gamma_n\,\Omega(x_n)}, \\ z_n &= y_n - \frac{\Omega(y_n)}{-\Omega'(x_n) + 2\left(\Omega(y_n) - \Omega(x_n)\right)/\left(y_n - x_n\right)}, \\ x_{n+1} &= z_n - \Omega(z_n)\left( -\frac{\Omega(y_n) - \Omega(x_n)}{y_n - x_n} + \frac{\Omega(z_n) - \Omega(y_n)}{z_n - y_n} + \frac{\Omega(z_n) - \Omega(x_n)}{z_n - x_n} + \lambda_n (z_n - x_n)(z_n - y_n) \right)^{-1}, \end{aligned} \tag{57} $$
where γ, λ ∈ ℝ are calculated as γ_n = H₅″(x_n)/(2Ω′(x_n)) and λ_n = H₆‴(y_n)/6.
In 2013, Wang and Zhang (XT10) [34] developed a family of three-step with-memory iterative schemes for nonlinear equations, defined as:
$$ \begin{aligned} y_n &= x_n - \frac{\Omega(x_n)}{\Omega'(x_n) - T_n\,\Omega(x_n)}, \\ z_n &= y_n - \frac{\Omega(y_n)}{2\,\Omega[x_n, y_n] - \Omega'(x_n) + T_n\,\Omega(y_n)}, \\ x_{n+1} &= z_n - \left[G(s_n) + H(t_n)\right] \frac{(\alpha + w)\,\Omega(z_n)}{2 w\,\Omega[y_n, z_n] + (\alpha - w)\left(\Omega'(x_n) + L\,\Omega(z_n)\right)}, \end{aligned} \tag{58} $$
where s_n = Ω(z_n)/Ω(x_n), t_n = Ω(y_n)/Ω(x_n), α = y_n − x_n, w = z_n − x_n, and L ∈ ℝ. Also, T_n is calculated as T_n = H₅″(x_n)/(2Ω′(x_n)).
All the comparative results for these methods are summarized in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8. These tables present the absolute differences between the last two consecutive iterations (|x_n − x_{n−1}|) and the absolute residual error (|Ω(x_n)|) of up to three iterations for each function, along with the computational order of convergence (COC) for the proposed method in comparison to some well-known existing methods. The determination of the COC is achieved using the following equation [40]:
$$ \mathrm{COC} = \frac{\log\left|\Omega(x_n)/\Omega(x_{n-1})\right|}{\log\left|\Omega(x_{n-1})/\Omega(x_{n-2})\right|}. $$
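In code, the COC of a run can be computed from three successive residuals following the formula above; the values in the usage line below are made up so that a decay of order roughly 11 is returned.

```python
import math

def coc(f1, f2, f3):
    """COC from |Omega(x_{n-2})|, |Omega(x_{n-1})|, |Omega(x_n)|."""
    return math.log(f3 / f2) / math.log(f2 / f1)

# Illustrative residuals decaying with order ~11 (hypothetical values):
print(coc(1e-2, 1e-22, 1e-242))  # 11.0
```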
For all numerical calculations, the software Mathematica 12.2 was used. For the newly proposed with-memory algorithm (NWM11), the parameter values α₀ = 0.001 and β₀ = 0.1 were selected to start the initial iteration.
Based on the numerical results in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8 and Figure 1, it can be concluded that the newly proposed with-memory algorithm (NWM11) is competitive and demonstrates fast convergence toward the roots, with minimal absolute residual error and the smallest error between consecutive iterations compared to the aforementioned existing methods. Additionally, the numerical results indicate that the computational order of convergence supports the theoretical convergence order of the newly presented algorithm on the test functions.

4. Conclusions

In this paper, a three-point with-memory iterative algorithm featuring two self-accelerating parameters is presented. By incorporating these parameters, computed using the Hermite interpolating polynomial, into an existing eighth-order method, its R-order of convergence is enhanced from 8 to 10.5208, and its efficiency index is enhanced from EI = 1.5157 to EI = 1.6011, without additional function evaluations. This algorithm not only accelerates convergence but also requires fewer function evaluations compared to other established algorithms, despite its higher convergence order. The findings in this paper demonstrate that the newly developed NWM11 algorithm offers superior performance with faster convergence and lower asymptotic constants, positioning it as a highly efficient alternative for solving nonlinear equations.

Author Contributions

Conceptualization, S.K.M. and S.P.; methodology, S.K.M. and S.P.; software, S.K.M., S.P. and C.E.S.; validation, S.K.M. and S.P.; formal analysis, S.P., S.K.M. and L.J.; resources, S.K.M.; writing—original draft preparation, S.P. and S.K.M.; writing—review and editing, S.K.M., S.P. and C.E.S.; visualization, S.P. and S.K.M.; supervision, S.P. and L.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Technical University of Cluj-Napoca’s open-access publication grant.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Pho, K.H. Improvements of the Newton–Raphson method. J. Comput. Appl. Math. 2022, 408, 114106. [Google Scholar] [CrossRef]
  2. Traub, J.F. Iterative Methods for the Solution of Equations; American Mathematical Soc.: Providence, RI, USA, 1982; Volume 312. [Google Scholar]
  3. Gutierrez, C.; Gutierrez, F.; Rivara, M.C. Complexity of the bisection method. Theor. Comput. Sci. 2007, 382, 131–138. [Google Scholar] [CrossRef]
  4. Sharma, H.; Kansal, M. A modified Chebyshev–Halley-type iterative family with memory for solving nonlinear equations and its stability analysis. Math. Methods Appl. Sci. 2023, 46, 12549–12569. [Google Scholar] [CrossRef]
  5. Petković, I.; Herceg, D. Computers in mathematical research: The study of three-point root-finding methods. Numer. Algorithms 2020, 84, 1179–1198. [Google Scholar] [CrossRef]
  6. Lu, Y.; Tang, Y. Solving Fractional Differential Equations Using Collocation Method Based on Hybrid of Block-pulse Functions and Taylor Polynomials. Turk. J. Math. 2021, 45, 1065–1078. [Google Scholar] [CrossRef]
  7. Assari, P.; Dehghan, M. A meshless local Galerkin method for solving Volterra integral equations deduced from nonlinear fractional differential equations using the moving least squares technique. Appl. Numer. Math. 2019, 143, 276–299. [Google Scholar] [CrossRef]
  8. Argyros, I.K.; Sharma, D.; Argyros, C.I.; Parhi, S.K.; Sunanda, S.K.; Argyros, M.I. Extended three step sixth order Jarratt-like methods under generalized conditions for nonlinear equations. Arab. J. Math. 2022, 11, 443–457. [Google Scholar] [CrossRef]
  9. Temple, B.; Young, R. Inversion of a non-uniform difference operator and a strategy for Nash–Moser. Methods Appl. Anal. 2022, 29, 265–294. [Google Scholar] [CrossRef]
  10. Putri, R.Y.; Wartono, W. Modifikasi metode Schroder tanpa turunan kedua dengan orde konvergensi empat [Modification of Schröder's method without the second derivative, with fourth-order convergence]. Aksioma J. Mat. Dan Pendidik. Mat. 2020, 11, 240–251. [Google Scholar] [CrossRef]
  11. Argyros, I.K.; George, S. Local convergence of Osada's method for finding zeros with multiplicity. In Understanding Banach Spaces; Sánchez, D.G., Ed.; Nova Science Publishers: Hauppauge, NY, USA, 2019; pp. 147–151. [Google Scholar]
  12. Postigo Beleña, C. Ostrowski’s Method for Solving Nonlinear Equations and Systems. J. Mech. Eng. Autom. 2023, 13, 1–6. [Google Scholar] [CrossRef]
  13. Ivanov, S.I. General Local Convergence Theorems about the Picard Iteration in Arbitrary Normed Fields with Applications to Super–Halley Method for Multiple Polynomial Zeros. Mathematics 2020, 8, 1599. [Google Scholar] [CrossRef]
  14. Coclite, G.M.; Fanizzi, A.; Lopez, L.; Maddalena, F.; Pellegrino, S.F. Numerical methods for the nonlocal wave equation of the peridynamics. Appl. Numer. Math. 2020, 155, 119–139. [Google Scholar] [CrossRef]
  15. Darvishi, M.T.; Barati, A. A fourth-order method from quadrature formulae to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 188, 257–261. [Google Scholar] [CrossRef]
  16. Nisha, S.; Parida, P.K. Super-Halley method under majorant conditions in Banach spaces. Cubo (Temuco) 2020, 22, 55–70. [Google Scholar] [CrossRef]
  17. Sharma, J.R.; Kumar, D.; Argyros, I.K. An efficient class of Traub-Steffensen-like seventh order multiple-root solvers with applications. Symmetry 2019, 11, 518. [Google Scholar] [CrossRef]
  18. Ostrowski, A.M. Solution of Equations in Euclidean and Banach Spaces; Academic Press: Cambridge, MA, USA, 1973. [Google Scholar]
  19. Choubey, N.; Jaiswal, J.P.; Choubey, A. Family of multipoint with memory iterative schemes for solving nonlinear equations. Int. J. Appl. Comput. Math. 2022, 8, 83. [Google Scholar] [CrossRef]
  20. Sharma, E.; Mittal, S.K.; Jaiswal, J.P.; Panday, S. An Efficient Bi-Parametric With-Memory Iterative Method for Solving Nonlinear Equations. Appl. Math. 2023, 3, 1019–1033. [Google Scholar] [CrossRef]
  21. Abdullah, S.; Choubey, N.; Dara, S. An efficient two-point iterative method with memory for solving non-linear equations and its dynamics. J. Appl. Math. Comput. 2024, 70, 285–315. [Google Scholar] [CrossRef]
  22. Thangkhenpau, G.; Panday, S.; Mittal, S.K. New Derivative-Free Families of Four-Parametric with and Without Memory Iterative Methods for Nonlinear Equations. In International Conference on Science, Technology and Engineering; Springer Nature: Singapore, 2023; pp. 313–324. [Google Scholar]
  23. Thangkhenpau, G.; Panday, S.; Mittal, S.K.; Jäntschi, L. Novel Parametric Families of with and without Memory Iterative Methods for Multiple Roots of Nonlinear Equations. Mathematics 2023, 11, 2036. [Google Scholar] [CrossRef]
  24. Liu, C.S.; Chang, C.W.; Kuo, C.L. Memory-Accelerating Methods for One-Step Iterative Schemes with Lie Symmetry Method Solving Nonlinear Boundary-Value Problem. Symmetry 2024, 16, 120. [Google Scholar] [CrossRef]
  25. Liu, C.S.; Chang, C.W. New Memory-Updating Methods in Two-Step Newton’s Variants for Solving Nonlinear Equations with High Efficiency Index. Mathematics 2024, 12, 581. [Google Scholar] [CrossRef]
  26. Erfanifar, R. A class of efficient derivative free iterative method with and without memory for solving nonlinear equations. Comput. Math. Comput. Model. Appl. 2022, 1, 20–26. [Google Scholar]
  27. Howk, C.L.; Hueso, J.L.; Martínez, E.; Teruel, C. A class of efficient high-order iterative methods with memory for nonlinear equations and their dynamics. Math. Meth. Appl. Sci. 2018, 41, 7263–7282. [Google Scholar] [CrossRef]
  28. Sharma, H.; Kansal, M.; Behl, R. An Efficient Two-Step Iterative Family Adaptive with Memory for Solving Nonlinear Equations and Their Applications. Math. Comput. Appl. 2022, 27, 97. [Google Scholar] [CrossRef]
  29. Thangkhenpau, G.; Panday, S.; Bolundut, L.C.; Jäntschi, L. Efficient families of multi-point iterative methods and their self-acceleration with memory for solving nonlinear equations. Symmetry 2023, 15, 1546. [Google Scholar] [CrossRef]
  30. Thangkhenpau, G.; Panday, S.; Chanu, W.H. New efficient bi-parametric families of iterative methods with engineering applications and their basins of attraction. Result. Control Opt. 2023, 12, 100243. [Google Scholar] [CrossRef]
  31. Chanu, W.H.; Panday, S.; Thangkhenpau, G. Development of optimal iterative methods with their applications and basins of attraction. Symmetry 2022, 14, 2020. [Google Scholar] [CrossRef]
  32. Choubey, N.; Jaiswal, J.P. Two-and three-point with memory methods for solving nonlinear equations. Num. Anal. Appl. 2017, 10, 74–89. [Google Scholar] [CrossRef]
  33. Choubey, N.; Cordero, A.; Jaiswal, J.P.; Torregrosa, J.R. Dynamical techniques for analyzing iterative schemes with memory. Complexity 2018, 2018, 1232341. [Google Scholar] [CrossRef]
  34. Wang, X.; Zhang, T. Some Newton-type iterative methods with and without memory for solving nonlinear equations. Int. J. Comput. Meth. 2014, 11, 1350078. [Google Scholar] [CrossRef]
  35. Kong-ied, B. Two new eighth and twelfth order iterative methods for solving nonlinear equations. Int. J. Math. Comput. Sci. 2021, 16, 333–344. [Google Scholar]
  36. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2000. [Google Scholar]
  37. Alefeld, G.; Herzberger, J. Introduction to Interval Computation; Academic Press: Berlin, Germany, 2012. [Google Scholar]
  38. Devi, K.; Maroju, P. Local convergence study of tenth-order iterative method in Banach spaces with basin of attraction. AIMS Math. 2024, 9, 6648–6667. [Google Scholar] [CrossRef]
  39. Ogbereyivwe, O.; Izevbizua, O.; Umar, S.S. Some high-order convergence modifications of the Householder method for nonlinear equations. Commun. Nonlinear Anal. 2023, 11, 1–11. [Google Scholar]
  40. Weerakoon, S.; Fernando, T. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
Figure 1. Comparison of the algorithms based on the error in consecutive iterations, |x_n − x_{n−1}|, after the first three iterations.
Table 1. Comparison of without-memory and with-memory algorithms after the first three (n = 3) iterations for Ω₁(x).
Method  |x₁ − x₀|  |x₂ − x₁|  |x₃ − x₂|  |Ω(x₃)|  COC
BK8  0.64003  1.6713 × 10^−6  1.3856 × 10^−50  1.4946 × 10^−402  8.0000
KP10  0.63843  1.6023 × 10^−3  2.4259 × 10^−32  7.2153 × 10^−320  10.0000
OSO10  0.89217  3.7908 × 10^−2  2.5397 × 10^−14  3.3269 × 10^−134  9.9375
NJ10  0.63416  5.8788 × 10^−3  2.3098 × 10^−24  3.3435 × 10^−238  10.0221
NAJJ10  0.60909  3.0943 × 10^−2  1.9071 × 10^−18  1.4560 × 10^−173  9.6100
XT10  0.59184  4.8191 × 10^−2  2.7250 × 10^−15  3.8179 × 10^−147  10.0023
NWM11  0.64003  2.6164 × 10^−6  1.6115 × 10^−57  5.2489 × 10^−609  10.7820
Table 2. Comparison of without-memory and with-memory algorithms after the first three (n = 3) iterations for Ω₂(x).
Method  |x₁ − x₀|  |x₂ − x₁|  |x₃ − x₂|  |Ω(x₃)|  COC
BK8  0.05626  7.2495 × 10^−7  1.3567 × 10^−43  1.8178 × 10^−337  8.0000
KP10  0.05674  4.2508 × 10^−4  4.1551 × 10^−23  9.7770 × 10^−214  9.9988
OSO10  0.05627  1.0119 × 10^−5  1.4540 × 10^−41  4.8595 × 10^−400  10.0000
NJ10  0.05631  4.8255 × 10^−5  9.9239 × 10^−39  7.0418 × 10^−373  9.9177
NAJJ10  0.05620  6.0195 × 10^−5  3.6586 × 10^−36  2.1280 × 10^−347  9.9688
XT10  0.05636  9.6954 × 10^−5  3.1227 × 10^−33  4.5026 × 10^−318  9.9953
NWM11  0.05626  7.3371 × 10^−7  2.9281 × 10^−56  1.0098 × 10^−576  10.5350
Table 3. Comparison of without-memory and with-memory algorithms after the first three (n = 3) iterations for Ω₃(x).
Method  |x₁ − x₀|  |x₂ − x₁|  |x₃ − x₂|  |Ω(x₃)|  COC
BK8  0.60000  6.1229 × 10^−11  1.9861 × 10^−83  1.2169 × 10^−662  8.0000
KP10  0.60000  1.9866 × 10^−12  2.1861 × 10^−119  2.8467 × 10^−1188  10.0000
OSO10  divergent  divergent  divergent  divergent  divergent
NJ10  0.60000  2.5816 × 10^−9  1.1089 × 10^−87  3.5748 × 10^−869  9.9811
NAJJ10  0.60000  2.6174 × 10^−9  9.1514 × 10^−87  4.9636 × 10^−860  9.9923
XT10  0.60000  9.3477 × 10^−9  1.8199 × 10^−81  4.2835 × 10^−807  9.9893
NWM11  0.60000  1.4300 × 10^−13  8.1050 × 10^−139  2.9949 × 10^−1451  10.4880
Table 4. Comparison of without-memory and with-memory algorithms after the first three (n = 3) iterations for Ω₄(x).
Method  |x₁ − x₀|  |x₂ − x₁|  |x₃ − x₂|  |Ω(x₃)|  COC
BK8  0.05860  6.1011 × 10^−9  4.6502 × 10^−65  6.3312 × 10^−513  8.0000
KP10  0.05860  5.6994 × 10^−9  1.0558 × 10^−79  6.0064 × 10^−786  10.0000
OSO10  0.05861  1.0003 × 10^−5  1.2746 × 10^−49  1.7573 × 10^−465  9.9998
NJ10  0.05860  1.6837 × 10^−8  1.3324 × 10^−78  1.7409 × 10^−778  9.9992
NAJJ10  0.05860  4.0950 × 10^−9  6.2510 × 10^−86  6.2244 × 10^−855  10.0252
XT10  0.05860  1.0741 × 10^−8  2.5825 × 10^−80  2.1179 × 10^−795  9.9996
NWM11  0.05860  5.6515 × 10^−9  1.5026 × 10^−88  2.5987 × 10^−922  10.4910
Table 5. Comparison of without-memory and with-memory algorithms after the first three (n = 3) iterations for Ω₅(x).
Method  |x₁ − x₀|  |x₂ − x₁|  |x₃ − x₂|  |Ω(x₃)|  COC
BK8  0.10449  1.7327 × 10^−8  5.0831 × 10^−63  6.9206 × 10^−449  8.0000
KP10  0.10449  1.7514 × 10^−8  6.4198 × 10^−77  6.9790 × 10^−761  10.0000
OSO10  0.10449  1.8070 × 10^−8  3.4911 × 10^−78  6.2812 × 10^−775  10.0000
NJ10  0.10449  4.6902 × 10^−8  2.5380 × 10^−76  9.1343 × 10^−759  10.0035
NAJJ10  0.10449  6.7004 × 10^−9  2.6528 × 10^−85  7.3337 × 10^−849  9.9991
XT10  0.10449  3.1329 × 10^−8  6.5196 × 10^−78  1.9895 × 10^−774  10.0011
NWM11  0.10449  1.5128 × 10^−8  5.4741 × 10^−86  2.6817 × 10^−897  10.4823
Table 6. Comparison of without-memory and with-memory algorithms after the first three (n = 3) iterations for Ω₆(x).
Method  |x₁ − x₀|  |x₂ − x₁|  |x₃ − x₂|  |Ω(x₃)|  COC
BK8  0.05443  1.1772 × 10^−12  4.7565 × 10^−98  4.7039 × 10^−780  8.0000
KP10  0.05443  7.9291 × 10^−14  2.3095 × 10^−132  1.4133 × 10^−1316  10.0000
OSO10  0.05443  6.7134 × 10^−7  3.8971 × 10^−62  2.3485 × 10^−613  10.0000
NJ10  0.05443  3.1474 × 10^−12  4.7101 × 10^−120  3.6954 × 10^−1197  10.0000
NAJJ10  0.05443  3.9472 × 10^−12  4.5328 × 10^−119  2.5181 × 10^−1187  10.0000
XT10  0.05443  1.7397 × 10^−12  3.1353 × 10^−122  1.5779 × 10^−1218  10.0000
NWM11  0.05443  9.0914 × 10^−13  2.8262 × 10^−135  1.0309 × 10^−1481  11.0000
Table 7. Comparison of without-memory and with-memory algorithms after the first three (n = 3) iterations for Ω₇(x).
Method  |x₁ − x₀|  |x₂ − x₁|  |x₃ − x₂|  |Ω(x₃)|  COC
BK8  0.04742  1.0014 × 10^−10  3.1891 × 10^−80  1.2499 × 10^−634  8.0000
KP10  0.04742  4.8045 × 10^−11  2.9840 × 10^−101  9.4424 × 10^−1002  10.0000
OSO10  0.04773  3.0677 × 10^−4  1.5964 × 10^−30  8.7071 × 10^−293  10.0380
NJ10  0.04742  3.3965 × 10^−10  1.4725 × 10^−96  1.2795 × 10^−958  10.0000
NAJJ10  0.04742  2.1575 × 10^−10  2.9855 × 10^−98  2.8465 × 10^−975  10.0000
XT10  0.04742  1.3920 × 10^−10  6.2415 × 10^−100  7.5966 × 10^−992  10.0000
NWM11  0.04742  8.9889 × 10^−11  6.1255 × 10^−111  3.3390 × 10^−1211  11.0000
Table 8. Comparison of without-memory and with-memory algorithms after the first three (n = 3) iterations for Ω₈(x).
Method  |x₁ − x₀|  |x₂ − x₁|  |x₃ − x₂|  |Ω(x₃)|  COC
BK8  0.53000  7.6607 × 10^−8  1.9425 × 10^−57  3.8364 × 10^−454  8.0000
KP10  0.53000  7.5754 × 10^−9  1.7738 × 10^−79  1.0153 × 10^−785  10.0000
OSO10  divergent  divergent  divergent  divergent  divergent
NJ10  0.53000  7.8233 × 10^−8  2.5696 × 10^−72  3.8701 × 10^−717  10.0013
NAJJ10  0.53000  1.6015 × 10^−7  1.2265 × 10^−69  1.7842 × 10^−689  9.9797
XT10  0.53000  1.0726 × 10^−8  1.2785 × 10^−79  4.1929 × 10^−790  10.0180
NWM11  0.53000  3.5063 × 10^−8  5.5401 × 10^−82  3.9536 × 10^−850  10.4091
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
