Article

An Efficient Bi-Parametric With-Memory Iterative Method for Solving Nonlinear Equations

Ekta Sharma, Shubham Kumar Mittal, J. P. Jaiswal and Sunil Panday
1 Department of Mathematics, National Institute of Technology Manipur, Imphal 795004, India
2 Department of Mathematics, Guru Ghasidas Vishwavidyalaya (A Central University), Bilaspur 495009, India
* Author to whom correspondence should be addressed.
AppliedMath 2023, 3(4), 1019-1033; https://doi.org/10.3390/appliedmath3040051
Submission received: 23 September 2023 / Revised: 26 November 2023 / Accepted: 6 December 2023 / Published: 11 December 2023
(This article belongs to the Special Issue Contemporary Iterative Methods with Applications in Applied Sciences)

Abstract: New three-step with-memory iterative methods for solving nonlinear equations are presented. We enhance the convergence order of an existing eighth-order memory-less iterative method by transforming it into a with-memory method. The acceleration of the convergence order is achieved by introducing two self-accelerating parameters, computed using Hermite interpolating polynomials. The R-order of convergence of the proposed uni- and bi-parametric with-memory methods increases from 8 to 9 and 10, respectively. This increase is accomplished without any additional function evaluations, which makes the with-memory methods computationally efficient. The efficiency index of our with-memory methods NWM9 and NWM10 increases from 1.6818 to 1.7320 and 1.7783, respectively. Numerical testing confirms the theoretical findings and highlights the superior efficiency of the suggested methods compared with some well-known methods from the existing literature.

1. Introduction

The pursuit of accurate solutions of nonlinear equations is a continual quest in the field of numerical computation. Analytical methods for finding the exact roots of a nonlinear equation $f(x) = 0$, where $f : I \subseteq \mathbb{R} \to \mathbb{R}$ is a real function defined on an open interval $I$, are either complicated or nonexistent. We must therefore rely on iterative methods to obtain an approximate solution with the desired level of accuracy. The Newton method, one of the most widely used iterative methods for solving nonlinear equations, is given by
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad n = 0, 1, 2, 3, \ldots$$
The Newton method is a single-point, second-order method that requires the evaluation of one function and one derivative at each iteration. It is a method without memory, and it is optimal in the sense of the Kung–Traub conjecture [1], which states that any multipoint iterative method without memory requiring $k$ function evaluations per iteration is optimal when its order of convergence equals $2^{k-1}$. A one-point iterative method based on $k$ function evaluations attains order at most $k$ [1,2]. Multipoint iterative schemes are highly significant because they surpass the theoretical limits of any one-point iterative method; nevertheless, they may also lower the efficiency index of the method. The performance of an iterative method is quantified by its efficiency index, expressed as follows [1,3]:
$$E = \rho^{1/k}.$$
Here, $\rho$ denotes the order of convergence of the method and $k$ the number of function evaluations per iteration. However, the with-memory approach not only exceeds this theoretical limit but also enhances the efficiency indices of the methods. In recent years, there has been considerable interest in extending without-memory methods to with-memory methods by using accelerating parameters [4,5]. The order of convergence of multipoint with-memory iterative methods is greatly increased, without any additional function evaluations, by utilizing information from both the current and previous iterations.
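As a quick illustration (not part of the original article), the efficiency indices quoted in the abstract follow directly from this definition with $k = 4$ evaluations per iteration, which is the cost of the three-step methods considered below:

```python
# Efficiency index E = rho**(1/k); here k = 4 evaluations per iteration
# (Psi(x_n), Psi(y_n), Psi(z_n) and Psi'(x_n)).
for rho in (8, 9, 10):
    print(f"order {rho}: E = {rho ** 0.25:.4f}")
# order 8: E = 1.6818
# order 9: E = 1.7321
# order 10: E = 1.7783
```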
In this manuscript, a novel parametric three-point iterative method with memory is developed, wherein the R-order of convergence is enhanced from 8 to 9 and 10 using one and two parameters, respectively. The remaining part of this work is organized as follows: In Section 2, we develop new parametric three-point iterative methods with memory by introducing self-accelerating parameters using Hermite-interpolating polynomials. In Section 3, we present the results of numerical calculations by comparing the newly proposed methods with other well-known methods on test functions. Finally, concluding remarks are provided in Section 4.

2. Analysis of Convergence for With-Memory Methods

In this section, we first introduce a parameter α in the second step, and then a second parameter β in the third step, of the three-step scheme proposed by Sharma and Arora [6]. We then increase the order of convergence by replacing the constant parameters α and β with the iterative parameters α_n and β_n.

2.1. The Uni-Parametric With-Memory Method and Its Convergence Analysis

Here, we introduce the parameter α into the second sub-step of the eighth-order without-memory scheme presented in [6]:
$$y_n = x_n - \frac{\Psi(x_n)}{\Psi'(x_n)}, \qquad z_n = y_n - \frac{\Psi(y_n)}{2\Psi[x_n, y_n] - \Psi'(x_n) + \alpha\,\Psi(y_n)},$$
$$x_{n+1} = z_n - \frac{\Psi'(x_n) - \Psi[x_n, y_n] + \Psi[y_n, z_n]}{2\Psi[y_n, z_n] - \Psi[z_n, x_n]}\cdot\frac{\Psi(z_n)}{\Psi'(x_n)}.$$
The error expressions for each sub-step of the above scheme are:
$$e_{n,y} = c_2 e_n^2 + (-2c_2^2 + 2c_3)e_n^3 + (4c_2^3 - 7c_2 c_3 + 3c_4)e_n^4 + O(e_n^5),$$
e n , z = c 2 ( c 2 ( α + c 2 ) c 3 ) e n 4 2 ( 2 α c 2 3 + 2 c 2 4 4 c 2 2 c 3 + c 3 2 + c 2 ( 2 α c 3 + c 4 ) ) e n 5 + O ( e n 6 ) ,
and
e n + 1 = c 2 ( c 2 ( α + c 2 ) c 3 ) ( α c 2 3 + c 2 4 c 2 2 c 3 + c 3 2 c 2 c 4 ) e n 8 + ( 20 α c 2 7 + 10 c 2 8 + c 2 6 ( 10 α 2 29 c 3 ) + 2 c 3 4 + 2 c 2 c 3 2 ( 2 α c 3 + c 4 ) 2 c 2 5 ( 19 α c 3 + c 4 ) + c 2 4 ( 9 α 2 c 3 + 30 c 3 2 2 α c 4 + 2 c 5 ) + 2 c 2 3 ( 2 c 3 ( 5 α c 3 + c 4 ) + α c 5 ) c 2 2 ( 15 c 3 3 + 2 c 4 2 + 2 c 3 ( α c 4 + c 5 ) ) ) e n 9 + O ( e n 10 ) ,
where $e_{n,y} = y_n - \xi$, $e_{n,z} = z_n - \xi$, $e_n = x_n - \xi$, $c_j = \frac{\Psi^{(j)}(\xi)}{j!\,\Psi'(\xi)}$ for $j = 2, 3, \ldots$, and $\alpha \in \mathbb{R}$. We obtain the following with-memory iterative scheme by replacing $\alpha$ with $\alpha_n$ in (3):
$$y_n = x_n - \frac{\Psi(x_n)}{\Psi'(x_n)}, \qquad z_n = y_n - \frac{\Psi(y_n)}{2\Psi[x_n, y_n] - \Psi'(x_n) + \alpha_n\,\Psi(y_n)},$$
$$x_{n+1} = z_n - \frac{\Psi'(x_n) - \Psi[x_n, y_n] + \Psi[y_n, z_n]}{2\Psi[y_n, z_n] - \Psi[z_n, x_n]}\cdot\frac{\Psi(z_n)}{\Psi'(x_n)},$$
and the above scheme is denoted by NWM9. From Expression (6), it is clear that the convergence order of Algorithm (3) is eight when $\alpha \neq \frac{c_3}{c_2} - c_2$. To accelerate the order of convergence of the algorithm presented in (3) from eight to nine, we could take $\alpha = \frac{c_3}{c_2} - c_2 = \frac{\Psi'''(\xi)}{3\Psi''(\xi)} - \frac{\Psi''(\xi)}{2\Psi'(\xi)}$, but the exact values of $\Psi'(\xi)$, $\Psi''(\xi)$ and $\Psi'''(\xi)$ are not available in practice. So, we replace the parameter $\alpha$ by $\alpha_n$. The parameter $\alpha_n$ can be calculated from the data available at the current and previous iterations and satisfies $\lim_{n \to \infty} \alpha_n = \frac{c_3}{c_2} - c_2 = \frac{\Psi'''(\xi)}{3\Psi''(\xi)} - \frac{\Psi''(\xi)}{2\Psi'(\xi)}$, so that the eighth-order asymptotic convergence constant in Error Expression (6) vanishes. The formula for $\alpha_n$ is as follows:
$$\alpha_n = \frac{H_6'''(y_n)}{3H_5''(x_n)} - \frac{H_5''(x_n)}{2\Psi'(x_n)},$$
where
$$\begin{aligned}
H_5(x) ={}& \Psi(x_n) + (x - x_n)\Psi[x_n, x_n] + (x - x_n)^2\Psi[x_n, x_n, z_{n-1}] + (x - x_n)^2(x - z_{n-1})\Psi[x_n, x_n, z_{n-1}, y_{n-1}] \\
& + (x - x_n)^2(x - z_{n-1})(x - y_{n-1})\Psi[x_n, x_n, z_{n-1}, y_{n-1}, x_{n-1}] \\
& + (x - x_n)^2(x - z_{n-1})(x - y_{n-1})(x - x_{n-1})\Psi[x_n, x_n, z_{n-1}, y_{n-1}, x_{n-1}, x_{n-1}], \\
H_6(x) ={}& \Psi(y_n) + (x - y_n)\Psi[y_n, x_n] + (x - y_n)(x - x_n)\Psi[y_n, x_n, x_n] + (x - y_n)(x - x_n)^2\Psi[y_n, x_n, x_n, z_{n-1}] \\
& + (x - y_n)(x - x_n)^2(x - z_{n-1})\Psi[y_n, x_n, x_n, z_{n-1}, y_{n-1}] \\
& + (x - y_n)(x - x_n)^2(x - z_{n-1})(x - y_{n-1})\Psi[y_n, x_n, x_n, z_{n-1}, y_{n-1}, x_{n-1}] \\
& + (x - y_n)(x - x_n)^2(x - z_{n-1})(x - y_{n-1})(x - x_{n-1})\Psi[y_n, x_n, x_n, z_{n-1}, y_{n-1}, x_{n-1}, x_{n-1}], \\
H_5''(x_n) ={}& 2\Psi[x_n, x_n, z_{n-1}] + 2(x_n - z_{n-1})\Psi[x_n, x_n, z_{n-1}, y_{n-1}] + 2(x_n - z_{n-1})(x_n - y_{n-1})\Psi[x_n, x_n, z_{n-1}, y_{n-1}, x_{n-1}] \\
& + 2(x_n - z_{n-1})(x_n - y_{n-1})(x_n - x_{n-1})\Psi[x_n, x_n, z_{n-1}, y_{n-1}, x_{n-1}, x_{n-1}], \\
H_6'''(y_n) ={}& 6\Psi[y_n, x_n, x_n, z_{n-1}] + \big(12(y_n - x_n) + 6(y_n - z_{n-1})\big)\Psi[y_n, x_n, x_n, z_{n-1}, y_{n-1}] \\
& + \big(6(y_n - x_n)^2 + 12(y_n - x_n)(y_n - z_{n-1}) + 12(y_n - x_n)(y_n - y_{n-1}) + 6(y_n - z_{n-1})(y_n - y_{n-1})\big)\Psi[y_n, x_n, x_n, z_{n-1}, y_{n-1}, x_{n-1}] \\
& + \big(6(y_n - x_n)^2(y_n - z_{n-1}) + 6(y_n - x_n)^2(y_n - y_{n-1}) + 12(y_n - x_n)(y_n - z_{n-1})(y_n - y_{n-1}) + 6(y_n - x_n)^2(y_n - x_{n-1}) \\
& \quad + 12(y_n - x_n)(y_n - z_{n-1})(y_n - x_{n-1}) + 12(y_n - x_n)(y_n - y_{n-1})(y_n - x_{n-1}) + 6(y_n - z_{n-1})(y_n - y_{n-1})(y_n - x_{n-1})\big)\Psi[y_n, x_n, x_n, z_{n-1}, y_{n-1}, x_{n-1}, x_{n-1}].
\end{aligned}$$
Note: The condition $H_m'(x_n) = \Psi'(x_n)$ is satisfied by the Hermite interpolation polynomial $H_m(x)$ for $m = 5, 6$. So, $\alpha_n = \frac{H_6'''(y_n)}{3H_5''(x_n)} - \frac{H_5''(x_n)}{2\Psi'(x_n)}$ can also be expressed as $\alpha_n = \frac{H_6'''(y_n)}{3H_5''(x_n)} - \frac{H_5''(x_n)}{2H_m'(x_n)}$ for $m = 5, 6$.
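To make the role of the interpolation data concrete, the following sketch (ours, not the authors' code) computes $\alpha_n$ from Equations (8) and (9). It assumes a confluent divided-difference helper in which a doubled node contributes the first derivative; the names Psi, dPsi, divided_difference and alpha_n are illustrative only.

```python
def divided_difference(nodes, f_vals, df_vals):
    """Newton divided-difference table allowing doubled (confluent) nodes.

    nodes   : abscissae; equal nodes are assumed to be adjacent
    f_vals  : Psi evaluated at each node
    df_vals : {node: Psi'(node)} used when two equal nodes meet (Psi[x, x] = Psi'(x))
    Returns dd(i, j) = Psi[nodes[i], ..., nodes[j]].
    """
    table = {}
    def dd(i, j):
        if (i, j) not in table:
            if i == j:
                table[(i, j)] = f_vals[i]
            elif nodes[i] == nodes[j]:      # only first-order confluence occurs here
                table[(i, j)] = df_vals[nodes[i]]
            else:
                table[(i, j)] = (dd(i + 1, j) - dd(i, j - 1)) / (nodes[j] - nodes[i])
        return table[(i, j)]
    return dd


def alpha_n(Psi, dPsi, x_n, y_n, z_prev, y_prev, x_prev):
    """Self-accelerating parameter alpha_n of (8), via H5''(x_n) and H6'''(y_n) of (9)."""
    n5 = [x_n, x_n, z_prev, y_prev, x_prev, x_prev]
    n6 = [y_n] + n5
    derivs = {x_n: dPsi(x_n), x_prev: dPsi(x_prev)}
    d5 = divided_difference(n5, [Psi(t) for t in n5], derivs)
    d6 = divided_difference(n6, [Psi(t) for t in n6], derivs)

    H5pp = 2 * (d5(0, 2) + (x_n - z_prev) * d5(0, 3)
                + (x_n - z_prev) * (x_n - y_prev) * d5(0, 4)
                + (x_n - z_prev) * (x_n - y_prev) * (x_n - x_prev) * d5(0, 5))
    a, b, c, d = y_n - x_n, y_n - z_prev, y_n - y_prev, y_n - x_prev
    H6ppp = (6 * d6(0, 3) + (12 * a + 6 * b) * d6(0, 4)
             + (6 * a**2 + 12 * a * b + 12 * a * c + 6 * b * c) * d6(0, 5)
             + (6 * a**2 * b + 6 * a**2 * c + 12 * a * b * c + 6 * a**2 * d
                + 12 * a * b * d + 12 * a * c * d + 6 * b * c * d) * d6(0, 6))
    return H6ppp / (3 * H5pp) - H5pp / (2 * dPsi(x_n))
```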
Theorem 1. 
Let $H_m$ be the Hermite polynomial of degree $m$ that interpolates the function $\Psi$ at the interpolation nodes $y_n, x_n, x_n, z_{n-1}, y_{n-1}, x_{n-1}, x_{n-1}, t_0, \ldots, t_{m-7}$ belonging to an interval $I$, where the derivative $\Psi^{(m+1)}$ is continuous in $I$ and the Hermite polynomial satisfies $H_m(x_n) = \Psi(x_n)$, $H_m'(x_n) = \Psi'(x_n)$, $H_m(t_j) = \Psi(t_j)$ $(j = 0, 1, \ldots, m-7)$. Denote $e_{t,j} = t_j - \xi$ $(j = 0, 1, \ldots, m-7)$ and suppose that
(1)
All nodes y n , x n , x n , z n 1 , y n 1 , x n 1 , x n 1 , t 0 , , t m 7 are sufficiently near to the root ξ.
(2)
The condition $e_n = O(e_{t,0} \cdots e_{t,m-7})$ holds. Then,
$$H_6'''(y_n) = 6\Psi'(\xi)\big(c_3 - c_7\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2\big),$$
$$H_5''(x_n) = 2\Psi'(\xi)\big(c_2 - c_6\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2\big),$$
$$\alpha_n = \frac{H_6'''(y_n)}{3H_5''(x_n)} - \frac{H_5''(x_n)}{2\Psi'(x_n)} \sim \frac{c_3}{c_2} - c_2 + B_n\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2,$$
and
$$\alpha_n - \frac{c_3}{c_2} + c_2 = \frac{c_2(\alpha_n + c_2) - c_3}{c_2} \sim B_n\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2,$$
where $B_n = -\frac{c_7}{c_2} + \frac{c_3 c_6}{c_2^2} - \frac{c_6 c_7}{c_2^2}\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2 + c_6$.
Proof. 
The error expressions of the sixth-degree and fifth-degree Hermite interpolation polynomials can be written as:
$$\Psi(x) - H_6(x) = \frac{\Psi^{(7)}(\delta)}{7!}(x - y_n)(x - x_n)^2(x - z_{n-1})(x - y_{n-1})(x - x_{n-1})^2,$$
$$\Psi(x) - H_5(x) = \frac{\Psi^{(6)}(\delta)}{6!}(x - x_n)^2(x - z_{n-1})(x - y_{n-1})(x - x_{n-1})^2.$$
Differentiating Equation (14) three times and evaluating at $x = y_n$, and differentiating Equation (15) twice and evaluating at $x = x_n$, we obtain
$$H_6'''(y_n) = \Psi'''(y_n) - \frac{6\,\Psi^{(7)}(\delta)}{7!}(y_n - z_{n-1})(y_n - y_{n-1})(y_n - x_{n-1})^2,$$
$$H_5''(x_n) = \Psi''(x_n) - \frac{2\,\Psi^{(6)}(\delta)}{6!}(x_n - z_{n-1})(x_n - y_{n-1})(x_n - x_{n-1})^2.$$
Next, Taylor series expansions about the zero $\xi$ of $\Psi$, at the points $x_n, y_n \in I$ and $\delta \in I$, provide
$$\Psi'(x_n) = \Psi'(\xi)\big[1 + 2c_2 e_n + 3c_3 e_n^2 + O(e_n^3)\big],$$
$$\Psi''(x_n) = \Psi'(\xi)\big[2c_2 + 6c_3 e_n + O(e_n^2)\big].$$
Similarly,
$$\Psi'''(y_n) = \Psi'(\xi)\big[6c_3 + 24c_4 e_{n,y} + O(e_{n,y}^2)\big],$$
$$\Psi^{(6)}(\delta) = \Psi'(\xi)\big[6!\,c_6 + 7!\,c_7 e_\delta + O(e_\delta^2)\big],$$
$$\Psi^{(7)}(\delta) = \Psi'(\xi)\big[7!\,c_7 + 8!\,c_8 e_\delta + O(e_\delta^2)\big],$$
where $e_\delta = \delta - \xi$. Substituting (20) and (22) into (16), and (19) and (21) into (17), we obtain
$$H_6'''(y_n) = 6\Psi'(\xi)\big(c_3 - c_7\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2\big),$$
and
$$H_5''(x_n) = 2\Psi'(\xi)\big(c_2 - c_6\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2\big),$$
which implies
$$\frac{H_6'''(y_n)}{3H_5''(x_n)} \sim \frac{c_3}{c_2} - \frac{c_7}{c_2}\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2 + \frac{c_3 c_6}{c_2^2}\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2 - \frac{c_6 c_7}{c_2^2}\, e_{n-1,z}^2\, e_{n-1,y}^2\, e_{n-1}^4.$$
Now, by Equations (18) and (24), one can write
$$\frac{H_5''(x_n)}{2\Psi'(x_n)} \sim c_2 - c_6\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2.$$
Furthermore, by virtue of Relations (25) and (26), it may be written as
$$\frac{H_6'''(y_n)}{3H_5''(x_n)} - \frac{H_5''(x_n)}{2\Psi'(x_n)} \sim \frac{c_3}{c_2} - c_2 + \Big({-\frac{c_7}{c_2}} + \frac{c_3 c_6}{c_2^2} - \frac{c_6 c_7}{c_2^2}\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2 + c_6\Big) e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2,$$
and hence
$$\alpha_n \sim \frac{c_3}{c_2} - c_2 + \Big({-\frac{c_7}{c_2}} + \frac{c_3 c_6}{c_2^2} - \frac{c_6 c_7}{c_2^2}\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2 + c_6\Big) e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2,$$
or
$$\alpha_n - \frac{c_3}{c_2} + c_2 = \frac{c_2(\alpha_n + c_2) - c_3}{c_2} \sim B_n\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2,$$
where $B_n = -\frac{c_7}{c_2} + \frac{c_3 c_6}{c_2^2} - \frac{c_6 c_7}{c_2^2}\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2 + c_6$. □
The definition of the R-order of convergence [7] and the following statement [8] can be used to estimate the order of convergence of the iterative scheme (7).
Theorem 2. 
If the errors $e_j = x_j - \xi$ produced by an iterative root-finding method (IM) satisfy
$$e_{k+1} \leq \prod_{i=0}^{n}(e_{k-i})^{m_i}, \qquad k \geq k(\{e_k\}),$$
then the R-order of convergence of the IM, denoted by $O_R(IM, \xi)$, satisfies the inequality $O_R(IM, \xi) \geq s^*$, where $s^*$ is the unique positive solution of the equation $s^{n+1} - \sum_{i=0}^{n} m_i s^{n-i} = 0$.
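As an illustration of how Theorem 2 is used (our example, not from the article), the bound $s^*$ can be obtained numerically as the positive root of the indicator polynomial; the secant method, with $e_{k+1} \lesssim e_k e_{k-1}$, recovers the golden ratio:

```python
import numpy as np

def r_order_bound(m):
    """Positive root of s**(n+1) - m_0*s**n - ... - m_n = 0 (Theorem 2)."""
    coeffs = [1.0] + [-float(mi) for mi in m]          # exponents m_0, ..., m_n
    roots = np.roots(coeffs)
    return max(r.real for r in roots if abs(r.imag) < 1e-12 and r.real > 0)

print(r_order_bound([1, 1]))   # secant method: ~1.618 (golden ratio)
print(r_order_bound([2, 1]))   # e_{k+1} ~ e_k^2 * e_{k-1}: ~2.414
```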
Now, for the new iterative scheme with memory (7), we can state the following convergence theorem.
Theorem 3. 
In the iterative method (7), let $\alpha_n$ be the varying parameter computed using (8). If an initial guess $x_0$ is sufficiently close to a simple zero $\xi$ of $\Psi(x)$, then the R-order of convergence of the with-memory method (7) is at least 9.
Proof. 
Suppose the IM generates a sequence $\{x_n\}$ converging to the root $\xi$ of $\Psi(x) = 0$ with R-order $O_R(IM, \xi) \geq r$; then we may write
$$e_{n+1} \sim D_{n,r}\, e_n^r,$$
and
$$e_n \sim D_{n-1,r}\, e_{n-1}^r.$$
As $n \to \infty$, $D_{n,r}$ tends to the asymptotic error constant $D_r$ of the IM, and then
$$e_{n+1} \sim D_{n,r}\big(D_{n-1,r}\, e_{n-1}^r\big)^r = D_{n,r} D_{n-1,r}^r\, e_{n-1}^{r^2}.$$
The error expression for the with-memory scheme (7) can be obtained using (4)–(6) and the varying parameter α n .
$$e_{n,y} = y_n - \xi \sim c_2\, e_n^2,$$
$$e_{n,z} = z_n - \xi \sim c_2\big(c_2(\alpha + c_2) - c_3\big) e_n^4,$$
and
$$e_{n+1} = x_{n+1} - \xi \sim c_2\big(c_2(\alpha + c_2) - c_3\big)\big(\alpha c_2^3 + c_2^4 - c_2^2 c_3 + c_3^2 - c_2 c_4\big) e_n^8.$$
Here, the higher-order terms in Equations (34)–(36) are omitted. Now, let the R-orders of convergence of the iterative sequences $\{y_n\}$ and $\{z_n\}$ be $p$ and $q$, respectively; then
$$e_{n,y} \sim D_{n,p}\, e_n^p \sim D_{n,p}\big(D_{n-1,r}\, e_{n-1}^r\big)^p = D_{n,p} D_{n-1,r}^p\, e_{n-1}^{rp},$$
and
$$e_{n,z} \sim D_{n,q}\, e_n^q \sim D_{n,q}\big(D_{n-1,r}\, e_{n-1}^r\big)^q = D_{n,q} D_{n-1,r}^q\, e_{n-1}^{rq}.$$
Now, by Equations (32) and (34), we obtain
$$e_{n,y} \sim c_2\, e_n^2 \sim c_2\big(D_{n-1,r}\, e_{n-1}^r\big)^2 \sim c_2 D_{n-1,r}^2\, e_{n-1}^{2r}.$$
Again, by Equations (29), (32) and (35), we get
$$e_{n,z} \sim c_2\big(c_2(\alpha + c_2) - c_3\big) e_n^4 \sim c_2\big(e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2 B_n\big)\big(D_{n-1,r}\, e_{n-1}^r\big)^4 \sim c_2\big(D_{n-1,q}\, e_{n-1}^q\big)\big(D_{n-1,p}\, e_{n-1}^p\big) e_{n-1}^2 B_n\big(D_{n-1,r}\, e_{n-1}^r\big)^4 \sim c_2 B_n D_{n-1,q} D_{n-1,p} D_{n-1,r}^4\, e_{n-1}^{4r + p + q + 2},$$
and
$$e_{n+1} \sim c_2\big(c_2(\alpha + c_2) - c_3\big) C_n\, e_n^8 \sim c_2\big(e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2 B_n\big) C_n\big(D_{n-1,r}\, e_{n-1}^r\big)^8 \sim c_2\big(D_{n-1,q}\, e_{n-1}^q\big)\big(D_{n-1,p}\, e_{n-1}^p\big) e_{n-1}^2 B_n C_n\big(D_{n-1,r}\, e_{n-1}^r\big)^8 \sim c_2 B_n C_n D_{n-1,q} D_{n-1,p} D_{n-1,r}^8\, e_{n-1}^{8r + p + q + 2},$$
where $C_n$ originates from (36). Since $r > q > p$, by equating the exponents of $e_{n-1}$ in the pairs of Relations (37) and (39), (38) and (40), and (33) and (41), we obtain the following system of equations:
$$rp = 2r, \qquad 4r + p + q + 2 = rq, \qquad 8r + p + q + 2 = r^2.$$
The solution of the system of Equations (42) is given by $r = 9$, $q = 5$ and $p = 2$. Consequently, the R-order of convergence of the iterative method with memory (7) is at least 9. □
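A quick symbolic check of this system (illustrative only, using sympy) confirms the unique positive solution:

```python
import sympy as sp

r, p, q = sp.symbols('r p q', positive=True)
system = [sp.Eq(r * p, 2 * r),
          sp.Eq(4 * r + p + q + 2, r * q),
          sp.Eq(8 * r + p + q + 2, r ** 2)]
print(sp.solve(system, (r, p, q)))   # [(9, 2, 5)]  ->  R-order r = 9
```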

2.2. The Bi-Parametric With-Memory Method and Its Convergence Analysis

Now, we introduce a new parameter $\beta$ in the third sub-step of the uni-parametric with-memory method presented in (7):
$$y_n = x_n - \frac{\Psi(x_n)}{\Psi'(x_n)}, \qquad z_n = y_n - \frac{\Psi(y_n)}{2\Psi[x_n, y_n] - \Psi'(x_n) + \alpha_n\,\Psi(y_n)},$$
$$x_{n+1} = z_n - \frac{\Psi'(x_n) - \Psi[x_n, y_n] + \Psi[y_n, z_n]}{2\Psi[y_n, z_n] - \Psi[z_n, x_n] + \beta\,\Psi(y_n)^2}\cdot\frac{\Psi(z_n)}{\Psi'(x_n)}.$$
Now, we get the error expressions for each sub-step of (43) as:
$$e_{n,y} = c_2 e_n^2 + (-2c_2^2 + 2c_3)e_n^3 + (4c_2^3 - 7c_2 c_3 + 3c_4)e_n^4 + O(e_n^5),$$
e n , z = 2 ( c 3 2 c 2 c 4 ) e n 5 + c 2 5 + 2 c 2 3 c 3 + 4 c 3 3 c 2 + 6 c 2 2 c 4 c 3 c 4 c 2 ( 7 c 3 2 + 3 c 5 ) e n 6 + O ( e n 7 ) ,
and
e n + 1 = 2 ( c 3 2 c 2 c 4 ) ( c 3 2 c 2 ( A β c 2 + c 4 ) ) e n 9 + ( A β c 2 7 4 c 3 5 c 2 + c 2 5 c 3 ( 2 A β + c 3 ) c 2 6 c 4 + c 3 3 c 4 + 2 c 2 4 ( 6 A β + c 3 ) c 4 c 2 2 c 4 ( 9 A β c 3 + 13 c 3 2 + 7 c 5 ) c 2 3 ( 13 A β c 3 2 + 2 c 3 3 6 c 4 2 + 3 A β c 5 ) + c 2 c 3 ( 12 A β c 3 2 + 7 c 3 3 + 3 c 4 2 + 7 c 3 c 5 ) ) e n 10 + O ( e n 11 ) ,
where $A = \Psi'(\xi)$, $e_{n,y} = y_n - \xi$, $e_{n,z} = z_n - \xi$, $e_n = x_n - \xi$, $c_j = \frac{\Psi^{(j)}(\xi)}{j!\,\Psi'(\xi)}$ for $j = 2, 3, \ldots$, and $\beta \in \mathbb{R}$. We obtain the following with-memory iterative scheme by replacing $\beta$ with $\beta_n$ in (43):
$$y_n = x_n - \frac{\Psi(x_n)}{\Psi'(x_n)}, \qquad z_n = y_n - \frac{\Psi(y_n)}{2\Psi[x_n, y_n] - \Psi'(x_n) + \alpha_n\,\Psi(y_n)},$$
$$x_{n+1} = z_n - \frac{\Psi'(x_n) - \Psi[x_n, y_n] + \Psi[y_n, z_n]}{2\Psi[y_n, z_n] - \Psi[z_n, x_n] + \beta_n\,\Psi(y_n)^2}\cdot\frac{\Psi(z_n)}{\Psi'(x_n)},$$
and the above scheme is denoted by NWM10. From (46), it is clear that the convergence order of Algorithm (43) is nine when $\beta \neq \frac{c_3^2}{A c_2^2} - \frac{c_4}{A c_2}$. To accelerate the order of convergence of the algorithm presented in (43) from nine to ten, we could take $\beta = \frac{c_3^2}{A c_2^2} - \frac{c_4}{A c_2} = \frac{4(\Psi'''(\xi))^2 - 3\Psi''(\xi)\Psi^{(4)}(\xi)}{36\,\Psi'(\xi)(\Psi''(\xi))^2}$, but the exact values of $\Psi'(\xi)$, $\Psi''(\xi)$, $\Psi'''(\xi)$ and $\Psi^{(4)}(\xi)$ are not available in practice. So, we replace the parameter $\beta$ by $\beta_n$. The parameter $\beta_n$ can be computed using the information from both the current and previous iterations and satisfies $\lim_{n \to \infty} \beta_n = \frac{c_3^2}{A c_2^2} - \frac{c_4}{A c_2} = \frac{4(\Psi'''(\xi))^2 - 3\Psi''(\xi)\Psi^{(4)}(\xi)}{36\,\Psi'(\xi)(\Psi''(\xi))^2}$, so that the ninth-order asymptotic convergence constant in the error expression (46) vanishes. The formula for $\beta_n$ is as follows:
$$\beta_n = \frac{4\big(H_6'''(y_n)\big)^2 - 3H_5''(x_n)\,H_7^{(4)}(z_n)}{36\,\Psi'(x_n)\big(H_5''(x_n)\big)^2},$$
where
H 7 ( x ) = Ψ ( z n ) + ( x z n ) Ψ [ z n , y n ] + ( x z n ) ( x y n ) Ψ [ z n , y n , x n ] + ( x z n ) ( x y n ) ( x x n ) Ψ [ z n , y n , x n , x n ] + ( x z n ) ( x y n ) ( x x n ) 2 Ψ [ z n , y n , x n , x n , z n 1 ] + ( x z n ) ( x y n ) ( x x n ) 2 ( x z n 1 ) Ψ [ z n , y n , x n , x n , z n 1 , y n 1 ] + ( x z n ) ( x y n ) ( x x n ) 2 ( x z n 1 ) ( x y n 1 ) Ψ [ z n , y n , x n , x n , z n 1 , y n 1 , x n 1 ] + ( x z n ) ( x y n ) ( x x n ) 2 ( x z n 1 ) ( x y n 1 ) ( x x n 1 ) Ψ [ z n , y n , x n , x n , z n 1 , y n 1 , x n 1 , x n 1 ] , H 7 ( 4 ) ( z n ) = 24 Ψ [ z n , y n , x n , x n , z n 1 ] + ( 24 ( z n y n ) + 48 ( z n x n ) + 24 ( z n z n 1 ) ) Ψ [ z n , y n , x n , x n , z n 1 , y n 1 ] + ( 48 ( z n y n ) ( z n x n ) + 24 ( z n x n ) 2 + 24 ( z n y n ) ( z n z n 1 ) + 48 ( z n x n ) ( z n z n 1 ) + 24 ( z n y n ) ( z n y n 1 ) + 48 ( z n x n ) ( z n y n 1 ) + 24 ( z n z n 1 ) ( z n y n 1 ) ) Ψ [ z n , y n , x n , x n , z n 1 , y n 1 , x n 1 ] + ( 24 ( z n y n ) ( z n x n ) 2 + 48 ( z n y n ) ( z n x n ) ( z n z n 1 ) + 24 ( z n x n ) 2 ( z n z n 1 ) + 48 ( z n y n ) ( z n x n ) ( z n x n 1 ) + 24 ( z n x n ) 2 ( z n x n 1 ) + 24 ( z n y n ) ( z n z n 1 ) ( z n x n 1 ) + 48 ( z n x n ) ( z n z n 1 ) ( z n x n 1 ) + 24 ( z n y n ) ( z n y n 1 ) ( z n x n 1 ) + 48 ( z n x n ) ( z n y n 1 ) ( z n x n 1 ) + 24 ( z n z n 1 ) ( z n y n 1 ) ( z n x n 1 ) ) Ψ [ z n , y n , x n , x n , z n 1 , y n 1 , x n 1 , x n 1 ] ,
and $H_6'''(y_n)$ and $H_5''(x_n)$ can be calculated from Equation (9).
Note: The condition $H_m'(x_n) = \Psi'(x_n)$ is satisfied by the Hermite interpolation polynomial $H_m(x)$ for $m = 5, 6$. So, $\beta_n = \frac{4(H_6'''(y_n))^2 - 3H_5''(x_n)H_7^{(4)}(z_n)}{36\,\Psi'(x_n)(H_5''(x_n))^2}$ can also be expressed as $\beta_n = \frac{4(H_6'''(y_n))^2 - 3H_5''(x_n)H_7^{(4)}(z_n)}{36\,H_m'(x_n)(H_5''(x_n))^2}$ for $m = 5, 6$.
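For readers who want to experiment, the following sketch shows one NWM10 iteration, assuming the reconstruction of the sub-steps of (47) given above. The callables alpha_fn and beta_fn stand for the Hermite-based accelerators of (8) and (48) (e.g., a closure over the alpha_n helper sketched earlier); all names are illustrative rather than the authors' implementation.

```python
def nwm10_step(Psi, dPsi, x_n, z_prev, y_prev, x_prev, alpha_fn, beta_fn):
    """One iteration of the bi-parametric with-memory scheme (47), as reconstructed above."""
    dd = lambda a, b, fa, fb: (fa - fb) / (a - b)        # first-order divided difference

    fx, dfx = Psi(x_n), dPsi(x_n)
    y_n = x_n - fx / dfx                                  # first sub-step (Newton)

    fy = Psi(y_n)
    alpha = alpha_fn(x_n, y_n, z_prev, y_prev, x_prev)    # self-accelerating parameter (8)
    z_n = y_n - fy / (2 * dd(x_n, y_n, fx, fy) - dfx + alpha * fy)

    fz = Psi(z_n)
    beta = beta_fn(x_n, y_n, z_n, z_prev, y_prev, x_prev)  # self-accelerating parameter (48)
    f_xy, f_yz, f_zx = dd(x_n, y_n, fx, fy), dd(y_n, z_n, fy, fz), dd(z_n, x_n, fz, fx)
    weight = (dfx - f_xy + f_yz) / (2 * f_yz - f_zx + beta * fy ** 2)
    x_next = z_n - weight * fz / dfx                      # third sub-step
    return x_next, y_n, z_n
```

Setting beta to zero and alpha_fn to the constant zero recovers the without-memory scheme SA8, under the same reconstruction.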
Theorem 4. 
Let $H_m$ be the Hermite polynomial of degree $m$ that interpolates the function $\Psi$ at the interpolation nodes $z_n, y_n, x_n, x_n, z_{n-1}, y_{n-1}, x_{n-1}, x_{n-1}, t_0, \ldots, t_{m-8}$ within an interval $I$, where the derivative $\Psi^{(m+1)}$ is continuous in $I$ and the Hermite polynomial satisfies $H_m(x_n) = \Psi(x_n)$, $H_m'(x_n) = \Psi'(x_n)$, $H_m(t_j) = \Psi(t_j)$ $(j = 0, 1, \ldots, m-8)$. Denote $e_{t,j} = t_j - \xi$ $(j = 0, 1, \ldots, m-8)$ and suppose that
(1)
All nodes z n , y n , x n , x n , z n 1 , y n 1 , x n 1 , x n 1 , t 0 , , t m 8 are sufficiently near to the root ξ.
(2)
The condition $e_n = O(e_{t,0} \cdots e_{t,m-8})$ holds. Then,
$$H_7^{(4)}(z_n) = 24\Psi'(\xi)\big(c_4 - c_8\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2\big),$$
$$H_6'''(y_n) = 6\Psi'(\xi)\big(c_3 - c_7\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2\big),$$
$$H_5''(x_n) = 2\Psi'(\xi)\big(c_2 - c_6\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2\big),$$
$$\beta_n = \frac{4\big(H_6'''(y_n)\big)^2 - 3H_5''(x_n)H_7^{(4)}(z_n)}{36\,\Psi'(x_n)\big(H_5''(x_n)\big)^2} \sim \frac{c_3^2}{A c_2^2} - \frac{c_4}{A c_2} + T_n\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2,$$
and
$$\beta_n - \frac{c_3^2}{A c_2^2} + \frac{c_4}{A c_2} = -\frac{c_3^2 - c_2\big(A\beta_n c_2 + c_4\big)}{A c_2^2} \sim T_n\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2,$$
where T n = ( c 7 2 A c 2 2 e n 1 , z e n 1 , y e n 1 2 2 c 3 c 7 A c 2 2 + c 8 A c 2 + 2 c 3 2 c 6 A c 2 3 + 2 c 7 2 c 6 c 2 3 e n 1 , z 2 e n 1 , y 2 e n 1 4 4 c 3 c 6 c 7 A c 2 3 e n 1 , z e n 1 , y e n 1 2 c 4 c 6 A c 2 2 + c 6 c 8 A c 2 2 e n 1 , z e n 1 , y e n 1 2 + 2 c 4 c 6 2 A c 2 3 e n 1 , z e n 1 , y e n 1 2 2 c 6 2 c 8 A c 2 3 e n 1 , z 2 e n 1 , y 2 e n 1 4 ).
Proof. 
The error expression of the seventh-degree Hermite interpolation polynomial can be written as:
$$\Psi(x) - H_7(x) = \frac{\Psi^{(8)}(\delta)}{8!}(x - z_n)(x - y_n)(x - x_n)^2(x - z_{n-1})(x - y_{n-1})(x - x_{n-1})^2.$$
Now, the following equation is derived by taking the fourth derivative of Equation (55) at the point x = z n .
$$H_7^{(4)}(z_n) = \Psi^{(4)}(z_n) - \frac{24\,\Psi^{(8)}(\delta)}{8!}(z_n - z_{n-1})(z_n - y_{n-1})(z_n - x_{n-1})^2.$$
Next, Taylor series expansions about the zero $\xi$ of $\Psi$, at the point $z_n \in I$ and $\delta \in I$, provide
$$\Psi^{(4)}(z_n) = \Psi'(\xi)\big[24c_4 + 120c_5 e_{n,z} + O(e_{n,z}^2)\big],$$
and
$$\Psi^{(8)}(\delta) = \Psi'(\xi)\big[8!\,c_8 + 9!\,c_9 e_\delta + O(e_\delta^2)\big].$$
Putting Equations (57) and (58) in (56), we obtain
$$H_7^{(4)}(z_n) = 24\Psi'(\xi)\big(c_4 - c_8\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2\big).$$
Now, using Equations (18), (23), (24) and (59), we get
$$\frac{4\big(H_6'''(y_n)\big)^2 - 3H_5''(x_n)H_7^{(4)}(z_n)}{36\,\Psi'(x_n)\big(H_5''(x_n)\big)^2} \sim \frac{c_3^2}{A c_2^2} - \frac{c_4}{A c_2} + T_n\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2,$$
and hence
$$\beta_n \sim \frac{c_3^2}{A c_2^2} - \frac{c_4}{A c_2} + T_n\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2,$$
or
$$\beta_n - \frac{c_3^2}{A c_2^2} + \frac{c_4}{A c_2} = -\frac{c_3^2 - c_2\big(A\beta_n c_2 + c_4\big)}{A c_2^2} \sim T_n\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2,$$
where T n = ( c 7 2 A c 2 2 e n 1 , z e n 1 , y e n 1 2 2 c 3 c 7 A c 2 2 + c 8 A c 2 + 2 c 3 2 c 6 A c 2 3 + 2 c 7 2 c 6 c 2 3 e n 1 , z 2 e n 1 , y 2 e n 1 4 4 c 3 c 6 c 7 A c 2 3 e n 1 , z e n 1 , y e n 1 2 c 4 c 6 A c 2 2 + c 6 c 8 A c 2 2 e n 1 , z e n 1 , y e n 1 2 + 2 c 4 c 6 2 A c 2 3 e n 1 , z e n 1 , y e n 1 2 2 c 6 2 c 8 A c 2 3 e n 1 , z 2 e n 1 , y 2 e n 1 4 ). □
Now, for the iterative scheme with memory (47), we can state the following convergence theorem.
Theorem 5. 
In the iterative method (47), let $\beta_n$ be the varying parameter computed using (48). If an initial guess $x_0$ is sufficiently close to a simple zero $\xi$ of $\Psi(x)$, then the R-order of convergence of the with-memory iterative method (47) is at least $9.9083$.
Proof. 
Suppose the IM generates a sequence $\{x_n\}$ converging to the root $\xi$ of $\Psi(x) = 0$ with R-order $O_R(IM, \xi) \geq r$; then we may write
$$e_{n+1} \sim D_{n,r}\, e_n^r,$$
and
$$e_n \sim D_{n-1,r}\, e_{n-1}^r.$$
As $n \to \infty$, $D_{n,r}$ tends to the asymptotic error constant $D_r$ of the IM, and then
$$e_{n+1} \sim D_{n,r}\big(D_{n-1,r}\, e_{n-1}^r\big)^r = D_{n,r} D_{n-1,r}^r\, e_{n-1}^{r^2}.$$
The error expression of the with-memory scheme (47) can be obtained using (44)–(46) and the varying parameter β n .
$$e_{n,y} = y_n - \xi \sim c_2\, e_n^2,$$
$$e_{n,z} = z_n - \xi \sim 2\big(c_3^2 - c_2 c_4\big) e_n^5,$$
and
$$e_{n+1} = x_{n+1} - \xi \sim 2\big(c_3^2 - c_2 c_4\big)\big(c_3^2 - c_2(A\beta c_2 + c_4)\big) e_n^9.$$
Here, the higher-order terms in Equations (66)–(68) are omitted. Now, let the R-orders of convergence of the iterative sequences $\{y_n\}$ and $\{z_n\}$ be $p$ and $q$, respectively; then
$$e_{n,y} \sim D_{n,p}\, e_n^p \sim D_{n,p}\big(D_{n-1,r}\, e_{n-1}^r\big)^p = D_{n,p} D_{n-1,r}^p\, e_{n-1}^{rp},$$
and
$$e_{n,z} \sim D_{n,q}\, e_n^q \sim D_{n,q}\big(D_{n-1,r}\, e_{n-1}^r\big)^q = D_{n,q} D_{n-1,r}^q\, e_{n-1}^{rq}.$$
Now, by Equations (64) and (66), we obtain
$$e_{n,y} \sim c_2\, e_n^2 \sim c_2\big(D_{n-1,r}\, e_{n-1}^r\big)^2 \sim c_2 D_{n-1,r}^2\, e_{n-1}^{2r}.$$
Again, by Equations (64) and (67), we get
$$e_{n,z} \sim 2\big(c_3^2 - c_2 c_4\big) e_n^5 \sim 2\big(c_3^2 - c_2 c_4\big)\big(D_{n-1,r}\, e_{n-1}^r\big)^5 \sim 2\big(c_3^2 - c_2 c_4\big) D_{n-1,r}^5\, e_{n-1}^{5r},$$
and by (62) and (64), we get
$$e_{n+1} \sim 2\big(c_3^2 - c_2 c_4\big)\big(c_3^2 - c_2(A\beta c_2 + c_4)\big) e_n^9 \sim 2\big(c_3^2 - c_2 c_4\big)\big(e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^2 T_n\big)\big(D_{n-1,r}\, e_{n-1}^r\big)^9 \sim 2\big(c_3^2 - c_2 c_4\big)\big(D_{n-1,q}\, e_{n-1}^q\big)\big(D_{n-1,p}\, e_{n-1}^p\big) e_{n-1}^2 T_n\big(D_{n-1,r}\, e_{n-1}^r\big)^9 \sim 2\big(c_3^2 - c_2 c_4\big) T_n D_{n-1,q} D_{n-1,p} D_{n-1,r}^9\, e_{n-1}^{9r + p + q + 2}.$$
Since $r > q > p$, by equating the exponents of $e_{n-1}$ in the pairs of Relations (69) and (71), (70) and (72), and (65) and (73), we obtain the following system of equations:
$$rp = 2r, \qquad rq = 5r, \qquad 9r + p + q + 2 = r^2.$$
The solution of the system of Equations (74) is given by $r = (9 + \sqrt{117})/2 \approx 9.9083$, $q = 5$ and $p = 2$. As a result, the R-order of convergence of the with-memory iterative method (47) is at least $9.9083$. □
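The same kind of symbolic check used for Theorem 3 (again illustrative only) confirms this solution:

```python
import sympy as sp

r, p, q = sp.symbols('r p q', positive=True)
sol = sp.solve([sp.Eq(r * p, 2 * r),
                sp.Eq(r * q, 5 * r),
                sp.Eq(9 * r + p + q + 2, r ** 2)], (r, p, q))
print(sol)                # [(9/2 + sqrt(117)/2, 2, 5)]
print(float(sol[0][0]))   # 9.9083...
```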
Note: A number of optimal-order without-memory methods are available in the literature, such as [6,9], but not every without-memory iterative scheme can be extended to a with-memory version.

3. Numerical Results and Discussion

In this section, we provide numerical examples to illustrate the efficiency and effectiveness of the newly formulated three-step with-memory methods NWM9 (7) and NWM10 (47). These methods are compared with some well-known three-step methods, SA8, NAJJ10, XT10 and NJ10, presented in references [6,10,5,11], respectively, using the test functions listed in Table 1. The aforementioned iterative methods are listed below.
Sharma and Arora [6] proposed a three-step eighth-order without-memory iterative method (SA8), which is defined as:
$$y_n = x_n - \frac{\Psi(x_n)}{\Psi'(x_n)}, \qquad z_n = y_n - \frac{\Psi(y_n)}{2\Psi[x_n, y_n] - \Psi'(x_n)},$$
$$x_{n+1} = z_n - \frac{\Psi'(x_n) - \Psi[x_n, y_n] + \Psi[y_n, z_n]}{2\Psi[y_n, z_n] - \Psi[z_n, x_n]}\cdot\frac{\Psi(z_n)}{\Psi'(x_n)}.$$
In 2018, Choubey et al. [10] proposed a tenth-order with-memory iterative method (NAJJ10) using two self-accelerating parameters, which is defined as:
y n = x n Ψ ( x n ) Ψ ( x n ) γ n Ψ ( x n ) , z n = y n Ψ ( y n ) Ψ ( x n ) + 2 ( ( Ψ ( y n ) Ψ ( x n ) ) / ( y n x n ) ) , x n + 1 = z n Ψ ( z n ) ( Ψ ( y n ) Ψ ( x n ) y n x n + Ψ ( z n ) Ψ ( y n ) z n y n + Ψ ( z n ) Ψ ( x n ) z n x n + λ n ( z n x n ) ( z n y n ) ) ,
where γ n , λ n R and are calculated as γ n = H 5 ( x n ) 2 Ψ ( x n ) and λ n = H 6 ( y n ) 6 .
In 2013, Wang and Zhang [5] developed a family of three-step with-memory iterative schemes (XT10) for nonlinear equations given by:
y n = x n Ψ ( x n ) Ψ ( x n ) T n Ψ ( x n ) , z n = y n Ψ ( y n ) 2 Ψ [ x n , y n ] Ψ ( x n ) + T n Ψ ( y n ) , x n + 1 = z n [ G ( s n ) + H ( t n ) ] ( α + w ) Ψ ( z n ) 2 w Ψ [ y n , z n ] + ( α w ) ( Ψ ( x n ) + L Ψ ( z n ) ) ,
where s n = Ψ ( z n ) Ψ ( x n ) , t n = Ψ ( y n ) Ψ ( x n ) , α = y n x n , w = z n x n and L R . Furthermore, T n is calculated as T n = H 5 ( x n ) 2 Ψ ( x n ) .
In 2016, Choubey and Jaiswal [11] developed a bi-parametric with-memory iterative method (NJ10) with tenth-order convergence, the sub-steps of which are:
y n = x n Ψ ( x n ) Ψ ( x n ) T n Ψ ( x n ) , z n = y n Ψ ( y n ) ( Ψ ( x n ) + γ Ψ ( y n ) ) ( Ψ ( x n ) 2 T n Ψ ( x n ) ) ( Ψ ( x n ) + ( γ 2 ) Ψ ( y n ) ) , x n + 1 = z n Ψ ( z n ) Ψ [ z n , y n ] + Ψ [ z n , y n , x n ] ( z n y n ) + Ψ [ z n , y n , x n , x n ] ( z n y n ) ( z n x n ) ,
where T, γ ∈ R and T n is calculated as T n = H 5 ( x n ) 2 Ψ ( x n ) .
Planck’s radiation law problem: It calculates the energy density within an isothermal blackbody and is given by [12]:
$$v(\lambda) = \frac{8\pi c P \lambda^{-5}}{e^{\frac{cP}{\lambda B T}} - 1},$$
where λ is the wavelength of the radiation, T is the absolute temperature of the blackbody, B is the Boltzmann constant, P is the Planck constant and c is the speed of light. We are interested in determining the wavelength λ that corresponds to the maximum energy density v ( λ ) . From (79), we obtain
$$v'(\lambda) = \frac{8\pi c P \lambda^{-6}}{e^{\frac{cP}{\lambda B T}} - 1}\left(\frac{\frac{cP}{\lambda B T}\, e^{\frac{cP}{\lambda B T}}}{e^{\frac{cP}{\lambda B T}} - 1} - 5\right),$$
so that the maxima of v occur when
$$\frac{\frac{cP}{\lambda B T}\, e^{\frac{cP}{\lambda B T}}}{e^{\frac{cP}{\lambda B T}} - 1} = 5.$$
After that, if $t = \frac{cP}{\lambda B T}$, then (81) is satisfied if
$$\Psi_6(t) = e^{-t} + \frac{t}{5} - 1; \qquad t_0 = 4.7.$$
Thus, the solutions of the equation $\Psi_6(t) = 0$ provide the wavelength $\lambda$ of maximum energy density, as determined by the following formula:
$$\lambda \approx \frac{cP}{t^{*} B T},$$
where t * is a solution of (82).
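A minimal numerical check of this problem (ours, using Python's mpmath rather than the authors' Mathematica code; the CODATA SI values of c, P and B are assumed):

```python
from mpmath import mp, mpf, exp, findroot

mp.dps = 50                                    # working precision in decimal digits

Psi6 = lambda t: exp(-t) + t / 5 - 1           # Equation (82)
t_star = findroot(Psi6, mpf("4.7"))            # initial guess t_0 = 4.7
print(t_star)                                  # ~ 4.9651142317...

# lambda ~ cP/(t* B T): with SI constants, lambda*T reproduces Wien's constant.
c, P, B = mpf("2.99792458e8"), mpf("6.62607015e-34"), mpf("1.380649e-23")
print(c * P / (t_star * B))                    # ~ 2.8978e-3 m*K  (= lambda * T)
```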
In Table 1, we have considered five distinct nonlinear functions, displaying their roots ( ξ ) and their initial approximations ( t 0 ). The formula of computational order of convergence (COC) is given by [13]:
$$COC = \frac{\log\left|\Psi(t_n)/\Psi(t_{n-1})\right|}{\log\left|\Psi(t_{n-1})/\Psi(t_{n-2})\right|}.$$
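In code, this estimate is a one-liner; the helper below (illustrative, mirroring the formula above) needs very high working precision because the residuals in Table 2 fall below $10^{-600}$:

```python
from mpmath import mp, log, fabs

mp.dps = 1200                                   # enough digits for residuals ~ 1e-600

def coc(Psi, t_nm2, t_nm1, t_n):
    """Computational order of convergence from the last three iterates."""
    return log(fabs(Psi(t_n) / Psi(t_nm1))) / log(fabs(Psi(t_nm1) / Psi(t_nm2)))
```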
All the compared results are given in Table 2. Table 2 contains the absolute differences between the last two consecutive iterations ($|t_n - t_{n-1}|$) and the absolute residual error ($|\Psi(t_n)|$) for up to three iterations of each function, along with the COC, for the proposed methods in comparison with some well-known existing methods. All computations presented here were performed in MATHEMATICA 12.2. The findings showcased in Table 2 validate the theoretical results for the newly proposed methods, highlighting their efficiency in comparison with some well-known iterative methods. The errors in consecutive iterations are also presented in Figure 1.
Table 2 confirms that the accuracy of the results is improved not only with respect to the without-memory scheme but also with respect to some widely used with-memory schemes.

4. Conclusions

In this manuscript, we have introduced three-step with-memory iterative techniques for solving nonlinear equations by introducing single and double self-accelerating parameters. The primary goal is to enhance the convergence order of the optimal eighth-order method without requiring additional computations. This is achieved by introducing self-accelerating parameters and their estimates in the eighth-order method. The estimates of these self-accelerating parameters are calculated using Hermite interpolating polynomials. The inclusion of the parameters increases the R-order of convergence of the with-memory methods NWM9 and NWM10 from 8 to 9 and 10, respectively. The results show that the suggested techniques NWM9 and NWM10 have faster convergence and smaller asymptotic error constants than other current approaches. Furthermore, the overall performance of the newly presented approach is excellent, with a fast convergence speed that makes it a promising alternative for solving nonlinear equations.
By applying the approach discussed here, interested researchers may extend other well-known higher-order optimal without-memory iterative methods to with-memory algorithms with single or multiple self-accelerating parameters, achieving improved efficiency for univariate or multivariate problems.

Author Contributions

Conceptualization, J.P.J.; methodology, S.K.M.; formal analysis, S.K.M.; writing-review and editing, E.S. and S.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No public involvement in any aspect of this research.

Acknowledgments

The authors would like to pay their sincere thanks to the reviewers for their useful suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Traub, J.F. Iterative Methods for the Solution of Equations; American Mathematical Society: Providence, RI, USA, 1982. [Google Scholar]
  2. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. ACM 1974, 21, 643–651. [Google Scholar] [CrossRef]
  3. Ostrowski, A.M. Solution of Equations in Euclidean and Banach Spaces; Academic Press Inc.: Cambridge, MA, USA, 1973. [Google Scholar]
  4. Khan, W.A. Numerical simulation of Chun-Hui He’s iteration method with applications in engineering. Int. J. Numer. Methods Heat Fluid Flow 2022, 32, 944–955. [Google Scholar] [CrossRef]
  5. Wang, X.; Zhang, T. Some Newton-type iterative methods with and without memory for solving nonlinear equations. Int. J. Comput. Methods 2014, 11, 1350078. [Google Scholar] [CrossRef]
  6. Sharma, J.R.; Arora, H. An efficient family of weighted-Newton methods with optimal eighth order convergence. Appl. Math. Lett. 2014, 29, 1–6. [Google Scholar] [CrossRef]
  7. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1970. [Google Scholar]
  8. Alefeld, G.; Herzberger, J. Introduction to Interval Computation; Academic Press: New York, NY, USA, 2012. [Google Scholar]
  9. Babajee, D.K.R.; Cordero, A.; Soleymani, F.; Torregrosa, J.R. On improved three-step schemes with high efficiency index and their dynamics. Numer. Algorithms 2014, 65, 153–169. [Google Scholar] [CrossRef]
  10. Choubey, N.; Cordero, A.; Jaiswal, J.P.; Torregrosa, J.R. Dynamical techniques for analyzing iterative schemes with memory. Complexity 2018, 2018, 1232341. [Google Scholar]
  11. Choubey, N.; Jaiswal, J.P. Two-and three-point with memory methods for solving nonlinear equations. Numer. Anal. Appl. 2017, 10, 74–89. [Google Scholar] [CrossRef]
  12. Bradie, B. A Friendly Introduction to Numerical Analysis; Pearson Education Inc.: New Delhi, India, 2006. [Google Scholar]
  13. Weerakoon, S.; Fernando, T. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
Figure 1. Comparison of the methods based on the error in consecutive iterations, | t n t n 1 | , after the first three full iterations.
Table 1. Test problems, their zeros and initial approximations.
Test Function Ψ(t) | Root (ξ) | Initial Approx. (t_0)
Ψ_1(t) = t^5 + t^4 + 4t^2 - 15 | ξ ≈ 1.3474 | 1.4
Ψ_2(t) = (t - 1)^3 - 1 | ξ ≈ 2.0000 | 2.5
Ψ_3(t) = t e^{t^2} - sin^2(t) + 3cos(t) + 5 | ξ ≈ -1.2076 | -1.3
Ψ_4(t) = t^{10} + t^3 + 2t^2 - 1 | ξ ≈ 0.8420 | 0.5
Ψ_5(t) = e^{t^2}(1 + t^3 + t^6)(t - 2) | ξ ≈ 2.0000 | 2.2
Table 2. Comparisons of test methods after three ( n = 3 ) iterations.
Method | Ψ(t) | |t_1 - t_0| | |t_2 - t_1| | |t_3 - t_2| | |Ψ(t_3)| | COC
SA8 | Ψ_1(t) | 0.052572 | 1.8816 × 10^{-11} | 6.0796 × 10^{-87} | 2.6750 × 10^{-689} | 8.0000
NWM9 | Ψ_1(t) | 0.052572 | 3.7554 × 10^{-11} | 1.2228 × 10^{-95} | 1.8634 × 10^{-854} | 9.0000
NAJJ10 | Ψ_1(t) | 0.052572 | 1.7028 × 10^{-10} | 2.8001 × 10^{-99} | 1.4992 × 10^{-985} | 10.0000
XT10 | Ψ_1(t) | 0.052572 | 6.9169 × 10^{-10} | 5.7293 × 10^{-93} | 3.2266 × 10^{-922} | 10.0000
NJ10 | Ψ_1(t) | 0.052572 | 1.4825 × 10^{-10} | 3.6951 × 10^{-100} | 1.2667 × 10^{-994} | 10.0000
NWM10 | Ψ_1(t) | 0.052572 | 5.4649 × 10^{-11} | 9.9261 × 10^{-104} | 1.4371 × 10^{-1029} | 10.0000
SA8 | Ψ_2(t) | 0.50011 | 1.1417 × 10^{-4} | 1.4980 × 10^{-32} | 3.9438 × 10^{-255} | 8.0000
NWM9 | Ψ_2(t) | 0.50017 | 1.7331 × 10^{-4} | 1.3929 × 10^{-35} | 1.9497 × 10^{-315} | 9.0000
NAJJ10 | Ψ_2(t) | 0.45491 | 4.5088 × 10^{-2} | 1.2500 × 10^{-15} | 1.3803 × 10^{-150} | 10.0000
XT10 | Ψ_2(t) | 0.50099 | 9.9027 × 10^{-4} | 1.1220 × 10^{-31} | 1.1714 × 10^{-310} | 10.0000
NJ10 | Ψ_2(t) | 0.49962 | 3.7910 × 10^{-4} | 3.0109 × 10^{-36} | 9.0709 × 10^{-357} | 10.0000
NWM10 | Ψ_2(t) | 0.50019 | 1.9492 × 10^{-4} | 9.1392 × 10^{-38} | 1.4053 × 10^{-370} | 10.0000
SA8 | Ψ_3(t) | 0.092352 | 3.3415 × 10^{-9} | 7.0947 × 10^{-69} | 5.9503 × 10^{-545} | 8.0000
NWM9 | Ψ_3(t) | 0.092352 | 2.0992 × 10^{-13} | 1.3500 × 10^{-113} | 5.1492 × 10^{-1014} | 9.0000
NAJJ10 | Ψ_3(t) | 0.092352 | 2.1365 × 10^{-7} | 5.7705 × 10^{-68} | 6.2041 × 10^{-591} | 9.9998
XT10 | Ψ_3(t) | 0.092352 | 4.0647 × 10^{-8} | 2.0381 × 10^{-75} | 1.5813 × 10^{-747} | 10.0000
NJ10 | Ψ_3(t) | 0.092352 | 6.0884 × 10^{-7} | 2.3054 × 10^{-63} | 5.5259 × 10^{-625} | 10.0000
NWM10 | Ψ_3(t) | 0.092352 | 3.6090 × 10^{-13} | 8.1902 × 10^{-123} | 1.7585 × 10^{-1195} | 9.8171
SA8 | Ψ_4(t) | 0.11586 | 1.5156 × 10^{-8} | 1.7097 × 10^{-63} | 1.6721 × 10^{-502} | 8.0000
NWM9 | Ψ_4(t) | 0.11586 | 9.0694 × 10^{-9} | 8.1008 × 10^{-71} | 2.5001 × 10^{-630} | 9.0000
NAJJ10 | Ψ_4(t) | 0.11586 | 1.3769 × 10^{-7} | 1.6706 × 10^{-68} | 2.2752 × 10^{-675} | 9.9994
XT10 | Ψ_4(t) | 0.11586 | 4.2295 × 10^{-6} | 1.7971 × 10^{-56} | 8.4556 × 10^{-556} | 10.0000
NJ10 | Ψ_4(t) | 0.11586 | 1.0998 × 10^{-6} | 7.3656 × 10^{-60} | 1.2708 × 10^{-589} | 10.0000
NWM10 | Ψ_4(t) | 0.11586 | 1.2105 × 10^{-8} | 8.4012 × 10^{-76} | 1.2490 × 10^{-736} | 9.8123
SA8 | Ψ_5(t) | 0.19945 | 5.4711 × 10^{-4} | 1.2977 × 10^{-25} | 1.7276 × 10^{-198} | 8.0000
NWM9 | Ψ_5(t) | 0.19960 | 3.9880 × 10^{-4} | 7.3134 × 10^{-30} | 2.2584 × 10^{-261} | 9.0000
NAJJ10 | Ψ_5(t) | 0.19052 | 9.4828 × 10^{-3} | 5.4826 × 10^{-20} | 4.7445 × 10^{-192} | 9.9999
XT10 | Ψ_5(t) | 0.19645 | 3.5515 × 10^{-3} | 1.0600 × 10^{-24} | 2.1623 × 10^{-239} | 10.0000
NJ10 | Ψ_5(t) | 0.21542 | 1.5421 × 10^{-2} | 1.6064 × 10^{-18} | 1.4753 × 10^{-179} | 9.9997
NWM10 | Ψ_5(t) | 0.19960 | 4.0057 × 10^{-4} | 3.0366 × 10^{-32} | 1.3881 × 10^{-311} | 9.8064
SA8 | Ψ_6(t) | 0.26511 | 5.7168 × 10^{-14} | 1.4599 × 10^{-115} | 5.0965 × 10^{-929} | 8.0000
NWM9 | Ψ_6(t) | 0.26511 | 1.1062 × 10^{-13} | 3.0900 × 10^{-126} | 1.0209 × 10^{-1139} | 9.0000
NAJJ10 | Ψ_6(t) | 0.26512 | 9.5231 × 10^{-6} | 2.0821 × 10^{-60} | 1.0163 × 10^{-605} | 10.0060
XT10 | Ψ_6(t) | 0.26511 | 3.9258 × 10^{-9} | 1.9159 × 10^{-94} | 1.6477 × 10^{-946} | 10.0000
NJ10 | Ψ_6(t) | 0.26511 | 7.7470 × 10^{-12} | 1.2437 × 10^{-119} | 2.3307 × 10^{-1198} | 10.0000
NWM10 | Ψ_6(t) | 0.26511 | 1.1664 × 10^{-13} | 2.1891 × 10^{-138} | 7.6955 × 10^{-1361} | 9.8168
