Article

An Efficient Two-Step Iterative Family Adaptive with Memory for Solving Nonlinear Equations and Their Applications

1
School of Mathematics, Thapar Institute of Engineering and Technology, Patiala 147004, India
2
Department of Mathematics, Faculty of Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia
*
Author to whom correspondence should be addressed.
Math. Comput. Appl. 2022, 27(6), 97; https://doi.org/10.3390/mca27060097
Submission received: 5 October 2022 / Revised: 10 November 2022 / Accepted: 14 November 2022 / Published: 18 November 2022

Abstract

We propose a new iterative scheme without memory for solving nonlinear equations. The proposed scheme is based on a cubically convergent Hansen–Patrick-type method. The beauty of our techniques is that they work even when the derivative is very small in the vicinity of the required root or when f′(x) = 0, whereas previous modifications either diverge or fail to work. In addition, we extend the same idea to an iterative method with memory. Numerical examples and comparisons with some existing methods are included to confirm the theoretical results. Furthermore, basins of attraction are included to give a clear picture of the convergence of the proposed method as well as that of some existing methods. Numerical experiments are performed on engineering problems, such as fractional conversion in a chemical reactor, Planck's radiation law, Van der Waal's equation of state and the trajectory of an electron between two parallel plates. The numerical results reveal that the proposed schemes are well suited to various real-life problems. Basins of attraction also support this aspect.

1. Introduction

Determining the zeros of a nonlinear function promptly and accurately is a crucial task in many branches of science and technology. The most used technique in this regard is Newton's method [1], which converges linearly for multiple roots and quadratically for simple roots. Various higher-order schemes have also been presented in [2,3,4,5,6,7,8]. One of them is the Hansen–Patrick family [9] of order 3, given by
$$x_{n+1} = x_n - \frac{\alpha+1}{\alpha \pm \big(1-(\alpha+1)L_f(x_n)\big)^{1/2}}\,\frac{f(x_n)}{f'(x_n)}, \quad n = 0, 1, 2, \ldots,$$
where $L_f(x_n) = \frac{f''(x_n)\, f(x_n)}{f'(x_n)^2}$ and $\alpha \in \mathbb{R} \setminus \{-1\}$. This family comprises Euler's method ($\alpha = 1$), Ostrowski's square-root method ($\alpha = 0$), Laguerre's method ($\alpha = \frac{1}{\nu - 1}$, $\nu \neq 1$) and Newton's method as a limiting case. Despite its cubic convergence, the involvement of the second-order derivative restricts the region of application. This factor has inspired many researchers to concentrate on multipoint methods [10], since they overcome the drawbacks of one-point iterative methods with respect to the convergence order. The main motive in the development of new iterative methods is to achieve an order of convergence that is as high as possible for a given number of functional evaluations per iteration.
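To make the family concrete, here is a minimal Python sketch of the third-order iteration (our own transcription, not code from the paper; the '+' branch of the square root is taken, and the safeguard on the radicand is ours):

```python
import math

def hansen_patrick(f, df, d2f, x, alpha=0.5, tol=1e-12, itmax=50):
    """One member of the Hansen-Patrick family; alpha must differ from -1."""
    for _ in range(itmax):
        fx, dfx = f(x), df(x)
        Lf = d2f(x) * fx / dfx**2                 # L_f(x_n) = f''(x_n) f(x_n) / f'(x_n)^2
        rad = max(1.0 - (alpha + 1.0) * Lf, 0.0)  # guard against a negative radicand
        x_new = x - (alpha + 1.0) / (alpha + math.sqrt(rad)) * fx / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# f(x) = x^2 - e^x - 3x + 2 has a simple zero near 0.2575
f   = lambda x: x*x - math.exp(x) - 3*x + 2
df  = lambda x: 2*x - math.exp(x) - 3
d2f = lambda x: 2 - math.exp(x)
print(hansen_patrick(f, df, d2f, 0.7))   # ~0.2575
```

Note that the scheme requires three pieces of information per iteration, including the second derivative, which is precisely what the multipoint variants below avoid.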
Sharma et al. [11] modified Equation (1) as follows:
$$y_n = x_n - \alpha \frac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = x_n - \frac{\beta+1}{\beta \pm \big(1-(\beta+1)H_f(x_n)\big)^{1/2}}\,\frac{f(x_n)}{f'(x_n)},$$
where $H_f(x_n) = \frac{f''(y_n)\, f(x_n)}{f'(x_n)^2}$ and α, β are free parameters with $\beta \neq -1$. Thus, instead of at $x_n$, the authors evaluated the second-order derivative of f at $y_n$. Moreover, several Hansen–Patrick-type methods have been presented and examined in [12] in order to eliminate the second-order derivative. Using a suitable approximation for $f''(x_n)$, the authors in [12] presented the following method:
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = x_n - \frac{\alpha+1}{\alpha \pm \left(\dfrac{f(x_n)^2 + (\beta - 2\alpha - 2)\, f(x_n) f(y_n) - \beta(\alpha+1) f(y_n)^2}{f(x_n)^2 + \beta f(x_n) f(y_n)}\right)^{1/2}}\,\frac{f(x_n)}{f'(x_n)},$$
where α and β are free parameters. The prominent problem with such methods is that they fail to work in the case $f'(x) = 0$ and diverge or fail when the derivative is very small in the vicinity of the required root. That is why our main goal is to develop a method that is globally convergent.
On the other hand, it is sometimes possible to increase the convergence order without further functional evaluations by making use of a self-accelerating parameter. Traub [1] referred to such schemes as methods with memory, as they use information from previous iterations to calculate the next iterate. Traub was the first to introduce the idea of methods with memory: he made minute alterations to the already existing Steffensen method [13] and presented the first method with memory [1] as follows:
$$\gamma_0,\ x_0 \ \text{suitably given}, \qquad w_n = x_n + \gamma_n f(x_n),\quad 0 \neq \gamma_n \in \mathbb{R}, \qquad x_{n+1} = x_n - \frac{f(x_n)}{f[x_n, w_n]}, \quad n = 0, 1, 2, \ldots,$$
where γ n is a self-accelerating parameter given as
$$\gamma_{n+1} = -\frac{1}{N_1'(x_n)}, \qquad N_1(x) = f(x_n) + (x - x_n) f[x_n, w_n], \quad n = 0, 1, 2, \ldots$$
This method has an order of convergence of $1 + \sqrt{2} \approx 2.414$. However, if a better self-accelerating parameter is used, the order of convergence can be increased further.
Using a secant approach, and by reusing information from the previous iteration, Traub refined a Steffensen-like method and presented the following method:
$$\gamma_0 \ \text{given}, \qquad \gamma_n = -\frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})},\ \ n \in \mathbb{N}, \qquad x_{n+1} = x_n - \frac{\gamma_n f(x_n)^2}{f\big(x_n + \gamma_n f(x_n)\big) - f(x_n)},$$
having an R-order of convergence [14] of at least $1 + \sqrt{2} \approx 2.414$.
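Traub's Steffensen-like scheme with memory above can be sketched in a few lines of Python (our own illustration; the initial γ₀ and the stopping rule are assumptions):

```python
import math

def traub_memory(f, x0, gamma0=0.01, itmax=50, tol=1e-12):
    """Traub's Steffensen-like method with memory: after the first step,
    gamma_n is recomputed as the negated inverse secant slope."""
    x_prev, x, gamma = None, x0, gamma0
    for _ in range(itmax):
        fx = f(x)
        if abs(fx) < tol:
            return x
        if x_prev is not None and fx != f(x_prev):
            gamma = -(x - x_prev) / (fx - f(x_prev))   # self-accelerating parameter
        denom = f(x + gamma * fx) - fx
        if denom == 0.0:
            return x
        x_prev, x = x, x - gamma * fx * fx / denom
    return x

f = lambda x: x*x - math.exp(x) - 3*x + 2   # simple zero near 0.2575
print(traub_memory(f, 0.7))
```

The method is derivative-free: only values of f at the current iterate and one auxiliary point are needed, while the memory of the previous iterate accelerates plain Steffensen iteration.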
In addition, a new approach of hybrid methods is being adopted for solving nonlinear problems which can be seen in [15,16].
For finding the R-order of convergence of our proposed method with memory, we make use of Theorem 1, given by Traub.
The rest of the paper is organized as follows. Section 2 contains the development of a new iterative method without memory and the proof of its order of convergence. Section 3 covers the inclusion of memory to develop a new iterative method with memory and its error analysis. Numerical results for the proposed methods and comparisons with some of the existing methods to illustrate our theoretical results are given in Section 4. Section 5 depicts the convergence of the methods using basins of attraction. Lastly, Section 6 presents conclusions.
Theorem 1.
Suppose that $(IM)$ is an iterative method with memory that generates a sequence $\{x_m\}$ of approximations converging to the root ξ. If there exist a nonzero constant ζ and nonnegative numbers $s_j$, $0 \leq j \leq k$, such that the inequality
$$\epsilon_{m+1} \leq \zeta \prod_{j=0}^{k} \epsilon_{m-j}^{\,s_j}$$
holds, then the R-order of convergence of the iterative method ( I M ) satisfies the inequality
$$O_R\big((IM), \xi\big) \geq t^*,$$
where t * is the unique positive root of the equation
$$t^{k+1} - \sum_{j=0}^{k} s_j\, t^{k-j} = 0.$$
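Theorem 1 reduces the R-order computation to finding the positive root of a small polynomial. A short Python sketch (ours, using NumPy's companion-matrix root finder) evaluates it for the error relations that appear in this paper:

```python
import numpy as np

def r_order(s):
    """Theorem 1: unique positive root t* of t^{k+1} - sum_j s_j t^{k-j} = 0."""
    roots = np.roots([1.0] + [-sj for sj in s])
    return max(r.real for r in roots if abs(r.imag) < 1e-10 and r.real > 0)

# e_{m+1} <= zeta * e_m^2 * e_{m-1} (Traub's method above): t^2 - 2t - 1 = 0
print(round(r_order([2, 1]), 4))   # 1 + sqrt(2) ~ 2.4142
# e_{m+1} <= zeta * e_m^3 * e_{m-1}: t^2 - 3t - 1 = 0
print(round(r_order([3, 1]), 4))   # (3 + sqrt(13))/2 ~ 3.3028
```

The second call corresponds to the with-memory scheme developed in Section 3.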

2. Iterative Method without Memory and Its Convergence Analysis

We aim to construct a new two-point Hansen–Patrick-type method without memory in this section.
Let $y_n = x_n - \frac{f(x_n)}{f'(x_n)}$ be the Newton iterate. Expanding $f(y_n)$ about the point $x = x_n$ by Taylor series, we get
$$f(y_n) \approx f(x_n) + f'(x_n)(y_n - x_n) + \frac{1}{2} f''(x_n)(y_n - x_n)^2, \qquad \text{so that} \qquad f''(x_n) \approx \frac{2 f'(x_n)^2 f(y_n)}{f(x_n)^2}.$$
Further, if we expand the function $f'(y_n) = f'\!\left(x_n - \frac{f(x_n)}{f'(x_n)}\right)$ about $x = x_n$ by Taylor series, we have
$$f'(y_n) \approx f'(x_n) + f''(x_n)(y_n - x_n),$$
$$f''(x_n) \approx \frac{f'(x_n)\big(f'(x_n) - f'(y_n)\big)}{f(x_n)}.$$
Using previous developments, we have
$$f''(x_n) \approx \frac{1}{2}\cdot\frac{2 f'(x_n)^2 f(y_n)}{f(x_n)^2} + \frac{1}{2}\cdot\frac{f'(x_n)\big(f'(x_n) - f'(y_n)\big)}{f(x_n)} = \frac{f'(x_n)^2 f(y_n)}{f(x_n)^2} + \frac{f'(x_n)\big(f'(x_n) - f'(y_n)\big)}{2 f(x_n)}.$$
As we can see, this estimation for $f''(x_n)$ uses four functional evaluations per iteration [8]: $f(x_n)$, $f(y_n)$, $f'(x_n)$ and $f'(y_n)$. To decrease the number of functional evaluations, King's approximation [17] may be used, which is
$$f'(y_n) \approx \frac{f'(x_n)\big(f(x_n) + \gamma f(y_n)\big)}{f(x_n) + \beta f(y_n)},$$
with $\gamma = \beta - 2$, where β is a free parameter.
Now, using this new approximation for $f''(x_n)$ in Equation (1), the authors in [12] presented the following scheme:
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = x_n - \frac{\alpha+1}{\alpha \pm \left(\dfrac{f(x_n)^2 + (\beta - 2\alpha - 2)\, f(x_n) f(y_n) - \beta(\alpha+1) f(y_n)^2}{f(x_n)^2 + \beta f(x_n) f(y_n)}\right)^{1/2}}\,\frac{f(x_n)}{f'(x_n)},$$
where α and β are free parameters.
Now, in order to extend this to a method with memory, we introduce a parameter b in the scheme given by Equation (7) and present the following modification:
$$y_n = x_n - \frac{f(x_n)}{f'(x_n) + b f(x_n)}, \qquad x_{n+1} = x_n - \frac{\alpha+1}{\alpha \pm \left(\dfrac{f(x_n)^2 + (\beta - 2\alpha - 2)\, f(x_n) f(y_n) - \beta(\alpha+1) f(y_n)^2}{f(x_n)^2 + \beta f(x_n) f(y_n)}\right)^{1/2}}\,\frac{f(x_n)}{f'(x_n) + b f(x_n)}, \quad n = 0, 1, 2, \ldots,$$
where α and β are free parameters.
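As an illustration, a minimal Python transcription of the two-step scheme of Equation (8) follows (ours, not from the paper; the '+' branch of the ± is taken and the guard on the radicand is an assumption):

```python
import math

def pm_without_memory(f, df, x, b=0.01, alpha=0.5, beta=0.5, itmax=50, tol=1e-12):
    """Both steps divide by f'(x_n) + b f(x_n), so a small derivative
    alone does not break the iteration."""
    for _ in range(itmax):
        fx = f(x)
        if abs(fx) < tol:
            return x
        g = df(x) + b * fx                  # f'(x_n) + b f(x_n)
        y = x - fx / g
        fy = f(y)
        num = fx*fx + (beta - 2*alpha - 2) * fx * fy - beta * (alpha + 1) * fy*fy
        den = fx*fx + beta * fx * fy
        rad = max(num / den, 0.0)           # guard against a negative radicand
        x = x - (alpha + 1) / (alpha + math.sqrt(rad)) * fx / g
    return x

f  = lambda x: x*x - math.exp(x) - 3*x + 2   # simple zero near 0.2575
df = lambda x: 2*x - math.exp(x) - 3
print(pm_without_memory(f, df, 0.7))
```

Only three functional evaluations per iteration are needed: $f(x_n)$, $f'(x_n)$ and $f(y_n)$.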
Next, we establish the convergence results for our proposed method without memory given by Equation (8).

Convergence Analysis

Theorem 2.
Suppose that $f : D \subseteq \mathbb{R} \to \mathbb{R}$ is a real function suitably differentiable in a domain D. If $\xi \in D$ is a simple root of $f(x) = 0$ and an initial guess $x_0$ is sufficiently close to ξ, then the iterative method given by Equation (8) converges to ξ with convergence order $p = 3$ and the following error relation,
$$e_{n+1} = \frac{1}{2}(b + a_2)\big(b(1 + \alpha - \beta) + (-1 + \alpha - \beta)\, a_2\big) e_n^3 + O(e_n^4),$$
where $e_n = x_n - \xi$ and $a_n = \frac{f^{(n)}(\xi)}{n!\, f'(\xi)}$, $n = 2, 3, \ldots$
Proof. 
Expanding f ( x n ) about x n = ξ by Taylor series, we have
$$f(x_n) = f'(\xi)\big(e_n + a_2 e_n^2 + a_3 e_n^3 + a_4 e_n^4\big) + O(e_n^5).$$
Then,
$$f'(x_n) = f'(\xi)\big(1 + 2 a_2 e_n + 3 a_3 e_n^2 + 4 a_4 e_n^3\big) + O(e_n^4).$$
Using Equations (9) and (10), we have
$$\frac{f(x_n)}{f'(x_n) + b f(x_n)} = e_n - (b + a_2) e_n^2 + \big(b^2 + 2 b a_2 + 2 a_2^2 - 2 a_3\big) e_n^3 + O(e_n^4).$$
Using Equation (11) in the first step of Equation (8), we have
$$e_{n,y} = y_n - \xi = (b + a_2) e_n^2 - \big(b^2 + 2 b a_2 + 2 a_2^2 - 2 a_3\big) e_n^3 + O(e_n^4).$$
Further, the Taylor’s expansion of f ( y n ) is
$$f(y_n) = f'(\xi)\big(e_{n,y} + a_2 e_{n,y}^2 + a_3 e_{n,y}^3 + a_4 e_{n,y}^4\big) + O(e_{n,y}^5).$$
Using Equations (9)–(13), we have
$$\frac{\alpha+1}{\alpha \pm \left(\dfrac{f(x_n)^2 + (\beta - 2\alpha - 2)\, f(x_n) f(y_n) - \beta(\alpha+1) f(y_n)^2}{f(x_n)^2 + \beta f(x_n) f(y_n)}\right)^{1/2}}\,\frac{f(x_n)}{f'(x_n) + b f(x_n)} = e_n - \frac{1}{2}(b + a_2)\big((-1 + \alpha - \beta)\, a_2 + (1 + \alpha - \beta)\, b\big) e_n^3 + O(e_n^4).$$
Finally, putting Equation (14) in the second step of Equation (8), we get
$$e_{n+1} = \frac{1}{2}(b + a_2)\big(b(1 + \alpha - \beta) + (-1 + \alpha - \beta)\, a_2\big) e_n^3 + O(e_n^4),$$
which is the error equation for the proposed scheme given by Equation (8) giving convergence order three. This completes the proof. □

3. Iterative Method with Memory and Its Convergence Analysis

Now, we present an extension of the method given by Equation (8) by the inclusion of memory, which improves the convergence order without any additional functional evaluation.
If we observe the error relation given in Equation (15) closely, we see that if $b = -a_2 = -\frac{f''(\xi)}{2 f'(\xi)}$, then the order of convergence of the presented scheme given by Equation (8) can possibly be improved; however, this value cannot be computed exactly because $f''(\xi)$ and $f'(\xi)$ are not practically available. Instead, we can use approximations calculated from already available information [18]. So, to improve the convergence order, we give an estimation using first-order divided differences [19], namely $b_n = -\frac{1}{2}\,\frac{f'[x_n, x_{n-1}]}{f[x_n, x_{n-1}]}$, where $f[s,t] = \frac{f(s) - f(t)}{s - t}$ denotes the first-order divided difference of f and $f'[s,t] = \frac{f'(s) - f'(t)}{s - t}$ that of $f'$.
So, by replacing b with $b_n$ in the method given by Equation (8), we obtain a new family with memory that uses the current and previous iterates $x_n$, $x_{n-1}$ (with $x_0$, $x_1$ given) as follows:
$$b_n = -\frac{1}{2}\,\frac{f'[x_n, x_{n-1}]}{f[x_n, x_{n-1}]}, \qquad y_n = x_n - \frac{f(x_n)}{f'(x_n) + b_n f(x_n)},$$
$$x_{n+1} = x_n - \frac{\alpha+1}{\alpha \pm \left(\dfrac{f(x_n)^2 + (\beta - 2\alpha - 2)\, f(x_n) f(y_n) - \beta(\alpha+1) f(y_n)^2}{f(x_n)^2 + \beta f(x_n) f(y_n)}\right)^{1/2}} \times \frac{f(x_n)}{f'(x_n) + b_n f(x_n)}, \quad n \in \mathbb{N},$$
where α and β are free parameters.
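The with-memory variant of Equation (16) differs from the scheme of Equation (8) only in the update of $b_n$ from divided differences of f and f′ at the two latest iterates. A minimal Python sketch (our own transcription; the first iteration uses a fixed $b_0$, an assumption mirroring the numerical section):

```python
import math

def pm_with_memory(f, df, x0, b0=0.01, alpha=0.5, beta=0.5, itmax=50, tol=1e-12):
    """b_n is refreshed each iteration so that b_n -> -a_2 as x_n -> xi."""
    b, x, x_prev = b0, x0, None
    for _ in range(itmax):
        fx = f(x)
        if abs(fx) < tol:
            return x
        if x_prev is not None and x != x_prev:
            dd_f  = (f(x) - f(x_prev)) / (x - x_prev)     # f[x_n, x_{n-1}]
            dd_df = (df(x) - df(x_prev)) / (x - x_prev)   # f'[x_n, x_{n-1}]
            if dd_f != 0.0:
                b = -0.5 * dd_df / dd_f                   # b_n -> -a_2
        g = df(x) + b * fx
        y = x - fx / g
        fy = f(y)
        num = fx*fx + (beta - 2*alpha - 2)*fx*fy - beta*(alpha + 1)*fy*fy
        den = fx*fx + beta*fx*fy
        x_prev, x = x, x - (alpha + 1)/(alpha + math.sqrt(max(num/den, 0.0))) * fx / g
    return x

f  = lambda x: x*x - math.exp(x) - 3*x + 2   # simple zero near 0.2575
df = lambda x: 2*x - math.exp(x) - 3
print(pm_with_memory(f, df, 0.7))
```

No extra functional evaluations are needed: the divided differences reuse values already computed at the previous iterate.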
Next, we establish the convergence results for our proposed method with memory given by Equation (16).

Convergence Analysis

Theorem 3.
Suppose that $f : D \subseteq \mathbb{R} \to \mathbb{R}$ is a real function suitably differentiable in a domain D. If $\xi \in D$ is a simple root of $f(x) = 0$ and an initial guess $x_0$ is sufficiently close to ξ, then the iterative method given by Equation (16) converges to ξ with convergence order at least 3.30.
Proof. 
Using Taylor series expansion about x n = ξ , we get
$$f(x_{n-1}) = f'(\xi)\big(e_{n-1} + a_2 e_{n-1}^2 + a_3 e_{n-1}^3 + a_4 e_{n-1}^4 + a_5 e_{n-1}^5\big) + O(e_{n-1}^6),$$
$$f(x_n) = f'(\xi)\big(e_n + a_2 e_n^2 + a_3 e_n^3 + a_4 e_n^4 + a_5 e_n^5\big) + O(e_n^6).$$
Then,
$$f'(x_{n-1}) = f'(\xi)\big(1 + 2 a_2 e_{n-1} + 3 a_3 e_{n-1}^2 + 4 a_4 e_{n-1}^3 + 5 a_5 e_{n-1}^4\big) + O(e_{n-1}^5),$$
$$f'(x_n) = f'(\xi)\big(1 + 2 a_2 e_n + 3 a_3 e_n^2 + 4 a_4 e_n^3 + 5 a_5 e_n^4\big) + O(e_n^5).$$
Now, using previous developments, we have
$$b_n = -\frac{1}{2}\,\frac{f'[x_n, x_{n-1}]}{f[x_n, x_{n-1}]} = -\Big(a_2 + \Big(a_2^2 - \tfrac{3}{2} a_3\Big) e_{n-1} + \Big({-a_2^3} + \tfrac{5}{2} a_2 a_3 - 2 a_4\Big) e_{n-1}^2 + \Big(a_2^4 - \tfrac{7}{2} a_2^2 a_3 + \tfrac{3}{2} a_3^2 + 3 a_2 a_4 - \tfrac{5}{2} a_5\Big) e_{n-1}^3\Big) - \Big(\Big(a_2^2 - \tfrac{3}{2} a_3\Big) - 2\big(a_2^3 - 2 a_2 a_3 + a_4\big) e_{n-1} + \Big(3 a_2^4 - \tfrac{17}{2} a_2^2 a_3 + 3 a_3^2 + 5 a_2 a_4 - \tfrac{5}{2} a_5\Big) e_{n-1}^2\Big) e_n - \Big(\Big({-a_2^3} + \tfrac{5}{2} a_2 a_3 - 2 a_4\Big) + \Big(3 a_2^4 - \tfrac{17}{2} a_2^2 a_3 + 3 a_3^2 + 5 a_2 a_4 - \tfrac{5}{2} a_5\Big) e_{n-1}\Big) e_n^2 + O_3(e_{n-1} e_n).$$
Using Equations (18), (20) and (21) in the second step of Equation (16), we get
$$y_n - \xi = -\Big(\Big(a_2^2 - \tfrac{3}{2} a_3\Big) e_{n-1} + \Big({-a_2^3} + \tfrac{5}{2} a_2 a_3 - 2 a_4\Big) e_{n-1}^2\Big) e_n^2 + \Big(\Big(\tfrac{3}{2} a_2 a_3 - 2\big(a_2^3 - 2 a_2 a_3 + a_4\big)\Big) e_{n-1} + \tfrac{1}{4}\big(8 a_2^4 - 22 a_2^2 a_3 + 3 a_3^2 + 20 a_2 a_4 - 10 a_5\big) e_{n-1}^2\Big) e_n^3 + O_4(e_{n-1} e_n).$$
Then, using Equation (22) in Equation (18), we get
$$f(y_n) = f'(\xi)\Big({-\Big(\Big(a_2^2 - \tfrac{3}{2} a_3\Big) e_{n-1} + \Big({-a_2^3} + \tfrac{5}{2} a_2 a_3 - 2 a_4\Big) e_{n-1}^2\Big) e_n^2} + \Big(\Big(\tfrac{3}{2} a_2 a_3 - 2\big(a_2^3 - 2 a_2 a_3 + a_4\big)\Big) e_{n-1} + \tfrac{1}{4}\big(8 a_2^4 - 22 a_2^2 a_3 + 3 a_3^2 + 20 a_2 a_4 - 10 a_5\big) e_{n-1}^2\Big) e_n^3\Big) + O_4(e_{n-1} e_n).$$
Using Equations (18), (20)–(23) in the third step of Equation (16), we finally get
$$e_{n+1} = \Big(a_2^3 - \tfrac{3}{2} a_2 a_3\Big) e_{n-1} e_n^3 + \tfrac{1}{2} a_2 a_3\, e_n^4 + O_5(e_{n-1} e_n).$$
Now, we can see that the lowest-order term of the error equation is $\big(a_2^3 - \tfrac{3}{2} a_2 a_3\big) e_{n-1} e_n^3$; therefore, by Theorem 1, the unique positive root of the polynomial $s^2 - 3s - 1$ gives the R-order of the proposed scheme given by Equation (16), which is $s = \frac{3 + \sqrt{13}}{2} \approx 3.30$. This completes our proof. □

4. Numerical Results

Now, we investigate the numerical results of our proposed scheme and compare them with some existing schemes, both with and without memory. All calculations have been performed using Mathematica 11.1 in a multiple-precision arithmetic environment on an Intel(R) Pentium(R) CPU B960 @ 2.20 GHz machine (64-bit Windows 7 Ultimate). The initial value $b_0$ must be selected prior to performing the iterations and a suitable $x_0$ must be given. We have taken $b_0 = 0.01$ (or $b = 0.01$) in our computations. In all the numerical values, $Ae{-h}$ stands for $A \times 10^{-h}$.
We are using the following functions for our computations:
  • $f_1(x) = x^2 - e^x - 3x + 2 = 0$, having a real zero $\approx 0.2575$.
  • $f_2(x) = \sin(\pi x)\, e^{x^2 + x \cos x - 1} + x \log(x \sin x + 1) = 0$, having the real zero 0.
  • $f_3(x) = (x - 2)(x^{10} + x + 2)\, e^{-5x} = 0$, having the real zero 2.
  • $f_4(x) = x^2 - 1 = 0$, having a real zero $-1$.
  • $f_5(x) = \sin x = 0$, having a real zero $2\pi$.
The results are computed by using the initial guesses 0.7 , 0.5 , 2.2 , 0 and 1.69 for functions f 1 , f 2 , f 3 , f 4 and f 5 respectively. To check the theoretical order of convergence, we have calculated the computational order of convergence [20], ρ c (COC) using the following formula,
$$\rho_c = \frac{\log\big(|f(x_k)| / |f(x_{k-1})|\big)}{\log\big(|f(x_{k-1})| / |f(x_{k-2})|\big)}, \quad k = 2, 3, \ldots,$$
considering the last three approximations in the iterative procedure. The errors of approximations to the respective zeros of the test functions, x n ξ and COC are displayed in Table 1 and Table 2.
We have demonstrated results for our special cases without memory by taking $(\alpha, \beta) = \big(\tfrac{1}{2}, \tfrac{1}{2}\big)$, $\big(1, \tfrac{1}{2}\big)$ and $(1, 1)$ in Equation (8), denoted by $PM1$, $PM2$ and $PM3$, respectively. Further, our special cases with memory, obtained by taking the same choices $(\alpha, \beta) = \big(\tfrac{1}{2}, \tfrac{1}{2}\big)$, $\big(1, \tfrac{1}{2}\big)$ and $(1, 1)$ in Equation (16), are denoted by $PMM1$, $PMM2$ and $PMM3$, respectively.
For comparisons, we are considering the methods given below:
Hansen–Patrick’s family ( H P F ) [9]:
$$x_{n+1} = x_n - \frac{\alpha+1}{\alpha \pm \big(1-(\alpha+1)L_f(x_n)\big)^{1/2}}\,\frac{f(x_n)}{f'(x_n)}, \quad n = 0, 1, 2, \ldots,$$
where $L_f(x_n) = \frac{f''(x_n)\, f(x_n)}{f'(x_n)^2}$ and $\alpha \in \mathbb{R} \setminus \{-1\}$.
Sharma et al. method without memory ( S H M ) [11]:
$$y_n = x_n - \alpha \frac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = x_n - \frac{\beta+1}{\beta \pm \big(1-(\beta+1)H_f(x_n)\big)^{1/2}}\,\frac{f(x_n)}{f'(x_n)},$$
where $H_f(x_n) = \frac{f''(y_n)\, f(x_n)}{f'(x_n)^2}$ and α and β ($\neq -1$) are free parameters.
Halley’s method ( H M ) [3]:
$$x_{n+1} = x_n - \left(1 + \frac{1}{2}\,\frac{L_f(x_n)}{1 - \alpha L_f(x_n)}\right)\frac{f(x_n)}{f'(x_n)}, \quad n = 0, 1, 2, \ldots,$$
where $L_f(x_n) = \frac{f''(x_n)\, f(x_n)}{f'(x_n)^2}$ and $\alpha = \frac{1}{2}$.
Traub’s method with memory ( T M 1 ) [1]:
$$\gamma_0,\ x_0 \ \text{suitably given}, \qquad w_n = x_n + \gamma_n f(x_n),\quad 0 \neq \gamma_n \in \mathbb{R}, \qquad x_{n+1} = x_n - \frac{f(x_n)}{f[x_n, w_n]}, \quad n = 0, 1, 2, \ldots,$$
where γ n is a self-accelerating parameter given as
$$\gamma_{n+1} = -\frac{1}{N_1'(x_n)}, \qquad N_1(x) = f(x_n) + (x - x_n) f[x_n, w_n], \quad n = 0, 1, 2, \ldots$$
Traub’s method with memory ( T M 2 ) [1]:
$$\gamma_0 \ \text{given}, \qquad \gamma_n = -\frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})},\ \ n \in \mathbb{N}, \qquad x_{n+1} = x_n - \frac{\gamma_n f(x_n)^2}{f\big(x_n + \gamma_n f(x_n)\big) - f(x_n)}.$$
Remark 1.
In Table 1, for the function $f_4(x)$, the existing methods $HPF$, $SHM$ and $HM$ fail, since the derivative of the function becomes zero. Further, for the function $f_5(x)$, $HPF$, $SHM$ and $HM$ converge to the undesired root π. In Table 2, for the function $f_4(x)$, $TM1$ and $TM2$ converge to the desired root, but only after 11 and 14 iterations, respectively, and the errors of approximation are large in these cases. Further, for the function $f_5(x)$, $TM1$ and $TM2$ both converge to the undesired root $3\pi$.
Further, we consider some real-life problems, as follows:
Example 1.
Firstly, we analyze the well-known Planck’s radiation law problem [21],
$$\psi(\lambda) = \frac{8 \pi c h_p \lambda^{-5}}{e^{c h_p / (\lambda B_k T)} - 1},$$
where λ is the wavelength of radiation, h p is the Planck’s constant, T is the absolute temperature of the blackbody, c is the speed of light and B k is the Boltzmann constant. It computes the energy density within an isothermal blackbody. We intend to obtain wavelength λ corresponding to maximum energy density ψ ( λ ) .
To obtain the maximum value of ψ, we set $\psi'(\lambda) = 0$, which gives
$$\frac{c h_p}{\lambda B_k T} \cdot \frac{e^{c h_p/(\lambda B_k T)}}{e^{c h_p/(\lambda B_k T)} - 1} = 5.$$
Let $x = \frac{c h_p}{\lambda B_k T}$. Then, Equation (31) becomes
$$f_6(x) = e^{-x} + \frac{x}{5} - 1 = 0.$$
By finding the solutions of $f_6(x) = 0$, we obtain the wavelength λ of maximum radiation. As stated in [22], the L.H.S. of Equation (32) is approximately zero when $x = 5$, since $e^{-5} \approx 6.738 \times 10^{-3}$; thus, a root is expected close to $x = 5$. The desired zero is $\xi \approx 4.9651142317442763$.
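As a quick baseline (ours, not from the paper), even plain Newton iteration started a little away from 5 recovers the zero of $f_6$ quoted above:

```python
import math

f6  = lambda x: math.exp(-x) + x / 5 - 1     # Planck's law reduced equation
df6 = lambda x: -math.exp(-x) + 0.2

x = 3.0
for _ in range(20):
    x -= f6(x) / df6(x)
print(x)   # ~4.96511423...
```

Note that $f_6$ also vanishes at $x = 0$, which corresponds to no physical wavelength, so the starting point must be chosen on the right of the origin.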
Example 2.
Van der Waal’s equation of state [4],
$$\left(P + \frac{a n^2}{V^2}\right)(V - n b) = n G T.$$
The following nonlinear equation needs to be solved to obtain the volume V of the gas in terms of the other parameters,
$$P V^3 - (n b P + n G T) V^2 + a n^2 V - a n^2 b = 0.$$
Here, G is the universal gas constant, P is the pressure and T is the absolute temperature. If the parameters a and b of a specific gas are given, the values of n, P and T can be calculated. Using certain values, the following nonlinear equation can be obtained,
$$f_7(x) = 0.986 x^3 - 5.181 x^2 + 9.067 x - 5.289 = 0,$$
having three roots, of which one is real and two are complex. Our required zero is $\xi \approx 1.9298462428478622$.
Example 3.
Fractional conversion in a chemical reactor [23],
$$f_8(x) = \frac{x}{1 - x} - 5 \log\!\left(\frac{0.4\,(1 - x)}{0.4 - 0.5 x}\right) + 4.45977 = 0.$$
Here, x denotes the fractional conversion of quantities in a chemical reactor. If x is less than zero or greater than one, the above fractional conversion has no physical meaning; hence, x is taken to be bounded in the region $0 \leq x \leq 1$. The desired root is $\xi \approx 0.7573962462537538$.
Example 4.
The path traversed by an electron in the air gap between two parallel plates considering the multi-factor effect is given by
$$u(t) = u_0 + \left(\nu_0 + \frac{c_0 E_0}{m \omega} \sin(\omega t_0 + \beta)\right)(t - t_0) + \frac{c_0 E_0}{m \omega^2}\big(\cos(\omega t + \beta) - \cos(\omega t_0 + \beta)\big),$$
where $u_0$ and $\nu_0$ are the position and velocity of the electron at time $t_0$, m and $c_0$ are the mass and the charge of the electron at rest and $E_0 \sin(\omega t + \beta)$ is the RF electric field between the plates. If particular parameters are chosen, Equation (37) can be simplified to
$$f_9(x) = x - \frac{1}{2} \cos x + \frac{\pi}{4} = 0.$$
The desired root of Equation (38) is $\xi \approx -0.3090932715417949$.
Example 5.
The following nonlinear equation results from the embedment x of a sheet-pile wall,
$$f_{10}(x) = \frac{x^3 + 2.87 x^2 - 10.28}{4.62} - x = 0.$$
The required zero of Equation (39) is ξ 2.0021 .
We have also implemented our proposed schemes given by Equations (8) and (16) on the above-mentioned problems. Table 3 and Table 4 demonstrate the corresponding results. Further, Table 1 demonstrates COC for our proposed method without memory given by Equation (8) ( P M 1 , P M 2 and P M 3 ), method given by Equation (25) denoted by H P F , the method given by Equation (26) denoted by S H M and Halley’s method given by Equation (27) denoted by H M , respectively. Table 2 demonstrates COC for our proposed method with memory given by Equation (16) ( P M M 1 , P M M 2 and P M M 3 ), method given by Equation (28) denoted by T M 1 and the method given by Equation (29) denoted by T M 2 , respectively.
Remark 2.
The proposed scheme with memory given by Equation (16) has been compared with some other methods, and it gives better outcomes in terms of COC and errors, as depicted in Table 2 and Table 4. There is an obvious increase in the order of convergence.

5. Basins of Attraction

The basin of attraction of a root $t^*$ of $u(t) = 0$ is the set of all initial points $t_0$ in the complex plane that converge to $t^*$ under the given iterative scheme. Our objective is to use basins of attraction to compare several root-finding iterative methods in the complex plane in terms of convergence and stability.
On this front, we have taken a $512 \times 512$ grid of the rectangle $S = [-2, 2] \times [-2, 2] \subset \mathbb{C}$. A color is assigned to each point $t_0 \in S$ on the basis of the root to which the corresponding method converges when started from $t_0$; if the method diverges, black is assigned to that point. Thus, distinct colors are assigned to the distinct roots of the corresponding problem. An initial point $t_0$ is decided to converge to a root $t^*$ when an iterate $t_k$ satisfies $|t_k - t^*| < 10^{-4}$; the point $t_0$ is then said to belong to the basin of attraction of $t^*$. Likewise, the method beginning from the initial point $t_0$ is said to diverge if no root is located within a maximum of 25 iterations. We have used MATLAB R2021a [24] to draw the presented basins of attraction.
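The grid scan just described can be sketched in a few lines (our Python rendering of the setup; the paper used MATLAB). Newton's step for $p_1(z) = z^2 + 1$ serves as the iterative scheme, and a coarse 64-point grid keeps the run fast:

```python
import numpy as np

def basin_labels(step, roots, n=64, box=2.0, itmax=25, tol=1e-4):
    """Label each grid point of [-box, box]^2 by the index of the root its
    orbit reaches within itmax iterations, or -1 if no root is located."""
    ticks = np.linspace(-box, box, n)
    labels = np.full((n, n), -1, dtype=int)
    for i, re in enumerate(ticks):
        for j, im in enumerate(ticks):
            z = complex(re, im)
            for _ in range(itmax):
                try:
                    z = step(z)
                except ZeroDivisionError:
                    break
                dists = [abs(z - r) for r in roots]
                if min(dists) < tol:
                    labels[i, j] = dists.index(min(dists))
                    break
    return labels

# Newton's step for p1(z) = z^2 + 1, whose roots are +i and -i.
newton_p1 = lambda z: z - (z*z + 1) / (2*z)
lab = basin_labels(newton_p1, [1j, -1j])
print((lab >= 0).mean())   # fraction of converging grid points
```

Coloring the `labels` array (with a reserved color for −1) reproduces pictures of the kind shown in Figures 1–8, and averaging the iteration counts gives the Avg_Iter statistic of Tables 5 and 6.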
Furthermore, Table 5 and Table 6 list the average number of iterations denoted by Avg_Iter, percentage of non-converging points denoted by P N C and the total CPU time taken by the methods to generate the basins of attraction.
To carry out the desired comparisons, we have considered the test problems given below:
Problem 1.
The first function considered is $p_1(z) = z^2 + 1$. The roots of this function are i and $-i$. The basins corresponding to our proposed method and the mentioned existing methods are shown in Figure 1 and Figure 2. It is observed that $PM1$, $PM2$, $PM3$, $PMM1$, $PMM2$ and $PMM3$ converge to a root with no diverging points, but $HPF$, $SHM$, $TM1$ and $TM2$ have some points painted black.
Problem 2.
The second function considered is $p_2(z) = z^3 + 1$, having roots $-1$, $0.5 + 0.866i$ and $0.5 - 0.866i$. Figure 3 and Figure 4 show the basins for $p_2(z)$, in which it can be seen that $SHM$, $TM1$ and $TM2$ have wider regions of divergence.
Problem 3.
The third function considered is $p_3(z) = z^4 - 1$, having roots $\pm 1$, $\pm i$. Figure 5 and Figure 6 show that $SHM$ and $TM2$ have smaller basins. Although $PM1$, $PM2$, $PM3$ and $PMM1$ have some diverging points, they converge faster than the existing methods.
Problem 4.
The fourth function we have taken is p 4 ( z ) = z 5 z whose roots are 0, ± 1 , ± i . Figure 7 and Figure 8 show that P M 1 , P M 2 , P M 3 , H P F , P M M 1 , P M M 2 and P M M 3 depict convergence to the root for any initial point as they have no diverging points.
Remark 3.
One can see from Figure 1, Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8 and Table 5 and Table 6 that there is a marginal increase in the average number of iterations per point for the existing methods, as they have more divergent points than the proposed method. In particular, our proposed method with memory has a negligible number of divergent points in the specified mesh and hence larger basins of attraction. Consequently, our proposed method with memory shows faster convergence in comparison to the existing methods.

6. Conclusions

A new method with memory has been introduced. The proposed method has a higher order of convergence in comparison with the Hansen–Patrick and Traub methods. For verification, we have carried out numerical experiments on a few test functions and some real-life problems. It is clearly visible from our results that the proposed method improves the convergence order, and this increase has been achieved with no additional functional evaluation. Furthermore, we have also presented the basins of attraction for the proposed method, as well as for some existing methods, and the results show that our proposed method converges to the desired zeros over a specified region much faster. Finally, we conclude that the proposed method can be effectively used for solving nonlinear equations.

Author Contributions

M.K.: Conceptualization; Methodology; Validation; H.S.: Writing—Original draft preparation; M.K. and R.B.: Writing—Review and Editing, Supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This project was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, under grant no. (KEP-MSc-58-130-1443). The authors, therefore, acknowledge with thanks DSR for technical and financial support.

Acknowledgments

This project was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, under grant no. (KEP-MSc-58-130-1443). The authors, gratefully acknowledge DSR for technical and financial support. Technical support provided by the Seed Money Project (TU/DORSP/57/7290) by TIET, Punjab, India is also acknowledged with thanks.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  2. Amat, S.; Busquier, S.; Gutiérrez, J.M. Geometric constructions of iterative functions to solve nonlinear equations. J. Comput. Appl. Math. 2003, 157, 197–205. [Google Scholar] [CrossRef] [Green Version]
  3. Argyros, I.K. A note on the Halley method in Banach spaces. Appl. Math. Comput. 1993, 58, 215–224. [Google Scholar]
  4. Argyros, I.K.; Kansal, M.; Kanwar, V.; Bajaj, S. Higher-order derivative-free families of Chebyshev–Halley type methods with or without memory for solving nonlinear equations. Appl. Math. Comput. 2017, 315, 224–245. [Google Scholar] [CrossRef]
  5. Cordero, A.; Moscoso-Martinez, M.; Torregrosa, J.R. Chaos and stability in a new iterative family for solving nonlinear equations. Algorithms 2021, 14, 101. [Google Scholar] [CrossRef]
  6. Gutiérrez, J.M.; Hernández, M.A. A family of Chebyshev–Halley type methods in Banach spaces. Bull. Austral. Math. Soc. 1997, 55, 113–130. [Google Scholar] [CrossRef] [Green Version]
  7. Jain, P.; Chand, P.B. Derivative free iterative methods with memory having R-order of convergence. Int. J. Nonlinear Sci. Numer. Simul. 2020, 21, 641–648. [Google Scholar] [CrossRef]
  8. Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
  9. Hansen, E.; Patrick, M. A family of root finding methods. Numer. Math. 1977, 27, 257–269. [Google Scholar] [CrossRef]
  10. Petković, M.S.; Neta, B.; Petković, L.D.; Dẑunić, J. Multipoint methods for solving nonlinear equations: A survey. Appl. Math. Comput. 2014, 226, 635–660. [Google Scholar] [CrossRef] [Green Version]
  11. Sharma, J.R.; Guha, R.K.; Sharma, R. Some variants of Hansen–Patrick method with third and fourth order convergence. Appl. Math. Comput. 2009, 214, 171–177. [Google Scholar] [CrossRef]
  12. Kansal, M.; Kanwar, V.; Bhatia, S. New modifications of Hansen–Patrick’s family with optimal fourth and eighth orders of convergence. Appl. Math. Comput. 2015, 269, 507–519. [Google Scholar] [CrossRef]
  13. Zheng, Q.; Li, J.; Huang, F. An optimal Steffensen-type family for solving nonlinear equations. Appl. Math. Comput. 2011, 217, 9592–9597. [Google Scholar] [CrossRef]
  14. Ortega, J.M.; Rheinboldt, W.C. Iterative Solutions of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  15. Sihwail, R.; Solaiman, O.S.; Ariffin, K.A.Z. New robust hybrid Jarratt–Butterfly optimization algorithm for nonlinear models. J. King Saud Univ. Comput. Inform. Sci. 2022; in press. [Google Scholar] [CrossRef]
  16. Sihwail, R.; Solaiman, O.S.; Omar, K.; Ariffin, K.A.Z.; Alswaitti, M.; Hashim, I. A hybrid approach for solving systems of nonlinear equations using harris hawks optimization and Newton’s method. IEEE Access 2021, 9, 95791–95807. [Google Scholar] [CrossRef]
  17. King, R.F. A family of fourth order methods for nonlinear equations. SIAM J. Numer. Anal. 1973, 10, 876–879. [Google Scholar] [CrossRef]
  18. Soleymani, F.; Lotfi, T.; Tavakoli, E.; Haghani, F.K. Several iterative methods with memory using self-accelerators. Appl. Math. Comput. 2015, 254, 452–458. [Google Scholar] [CrossRef]
  19. Campos, B.; Cordero, A.; Torregrosa, J.R.; Vindel, P. Stability of King’s family of iterative methods with memory. J. Comput. Appl. Math. 2017, 318, 504–514. [Google Scholar] [CrossRef] [Green Version]
  20. Jay, I.O. A note on Q-order of convergence. BIT Numer. Math. 2011, 41, 422–429. [Google Scholar] [CrossRef]
  21. Jain, D. Families of Newton-like method with fourth-order convergence. Int. J. Comput. Math. 2013, 90, 1072–1082. [Google Scholar] [CrossRef]
  22. Bradie, B. A Friendly Introduction to Numerical Analysis; Pearson Education Inc.: New Delhi, India, 2006. [Google Scholar]
  23. Shacham, M. Numerical solution of constrained nonlinear algebraic equations. Int. J. Numer. Methods Eng. 1986, 23, 1455–1481. [Google Scholar] [CrossRef]
  24. Zachary, J.L. Introduction to Scientific Programming: Computational Problem Solving Using Maple and C; Springer: New York, NY, USA, 2012. [Google Scholar]
Figure 1. Basins of attraction for PM1, PM2, PM3, HPF, and SHM, respectively, for p1(z).
Figure 2. Basins of attraction for PMM1, PMM2, PMM3, TM1, and TM2, respectively, for p1(z).
Figure 3. Basins of attraction for PM1, PM2, PM3, HPF, and SHM, respectively, for p2(z).
Figure 4. Basins of attraction for PMM1, PMM2, PMM3, TM1, and TM2, respectively, for p2(z).
Figure 5. Basins of attraction for PM1, PM2, PM3, HPF, and SHM, respectively, for p3(z).
Figure 6. Basins of attraction for PMM1, PMM2, PMM3, TM1, and TM2, respectively, for p3(z).
Figure 7. Basins of attraction for PM1, PM2, PM3, HPF, and SHM, respectively, for p4(z).
Figure 8. Basins of attraction for PMM1, PMM2, PMM3, TM1, and TM2, respectively, for p4(z).
Table 1. Comparison of different methods without memory.

| Method (without memory) | ∣x1 − ξ∣ | ∣x2 − ξ∣ | ∣x3 − ξ∣ | ρc | CPU time |
|---|---|---|---|---|---|
| f1(x) | | | | | |
| PM1 (α = 1/2, β = 1/2) | 1.1717 × 10^−4 | 6.9442 × 10^−15 | 1.8615 × 10^−35 | 3.0000 | 0.344 |
| PM2 (α = 1, β = 1/2) | 1.0371 × 10^−4 | 2.8730 × 10^−15 | 1.8615 × 10^−35 | 3.0000 | 0.312 |
| PM3 (α = 1, β = 1) | 1.1756 × 10^−4 | 7.0128 × 10^−15 | 1.8615 × 10^−35 | 3.0000 | 0.329 |
| HPF (α = 1/2) | 7.2409 × 10^−3 | 2.0697 × 10^−8 | 4.8653 × 10^−25 | 2.9993 | 0.343 |
| SHM (α = 1, β = 1/2) | 1.1696 × 10^−2 | 1.8718 × 10^−7 | 7.6275 × 10^−22 | 3.0008 | 0.344 |
| HM (α = 1/2) | 7.2407 × 10^−3 | 1.8148 × 10^−8 | 2.8886 × 10^−25 | 2.9990 | 0.281 |
| f2(x) | | | | | |
| PM1 (α = 1/2, β = 1/2) | 9.0357 × 10^−3 | 3.6430 × 10^−7 | 2.4171 × 10^−20 | 2.9961 | 0.578 |
| PM2 (α = 1, β = 1/2) | 8.7353 × 10^−3 | 1.5807 × 10^−7 | 9.6730 × 10^−22 | 2.9946 | 0.656 |
| PM3 (α = 1, β = 1) | 9.0033 × 10^−3 | 3.5624 × 10^−7 | 2.2602 × 10^−20 | 2.9950 | 0.749 |
| HPF (α = 1/2) | 1.4338 × 10^−1 | 5.1439 × 10^−4 | 6.3759 × 10^−11 | 2.7546 | 0.594 |
| SHM (α = 1, β = 1/2) | 5.3590 × 10^−1 | 1.7758 × 10^−1 | 1.2161 × 10^−2 | 2.7518 | 0.843 |
| HM (α = 1/2) | 1.6294 × 10^−1 | 3.4650 × 10^−3 | 1.2247 × 10^−8 | 3.1314 | 0.751 |
| f3(x) | | | | | |
| PM1 (α = 1/2, β = 1/2) | 7.5147 × 10^−5 | 5.2354 × 10^−17 | 1.7335 × 10^−53 | 3.0008 | 0.345 |
| PM2 (α = 1, β = 1/2) | 3.0249 × 10^−4 | 2.6407 × 10^−15 | 1.8932 × 10^−48 | 2.9971 | 0.344 |
| PM3 (α = 1, β = 1) | 1.8359 × 10^−4 | 7.8639 × 10^−16 | 5.8747 × 10^−50 | 3.0019 | 0.359 |
| HPF (α = 1/2) | 6.9350 × 10^−3 | 3.9832 × 10^−7 | 7.5926 × 10^−20 | 2.9994 | 0.358 |
| SHM (α = 1, β = 1/2) | 2.3532 × 10^−2 | 3.1713 × 10^−5 | 7.6632 × 10^−14 | 3.0020 | 0.328 |
| HM (α = 1/2) | 9.4327 × 10^−3 | 1.0017 × 10^−6 | 1.2079 × 10^−18 | 2.9993 | 0.390 |
| f4(x) | | | | | |
| PM1 (α = 1/2, β = 1/2) | 2.1991 × 10^−1 | 1.01412 × 10^−3 | 1.3014 × 10^−10 | 2.8943 | 0.250 |
| PM2 (α = 1, β = 1/2) | 4.0446 × 10^−1 | 2.5365 × 10^−3 | 1.0552 × 10^−9 | 2.7963 | 0.266 |
| PM3 (α = 1, β = 1) | 4.0439 × 10^−1 | 4.4677 × 10^−3 | 1.1047 × 10^−8 | 2.7547 | 0.328 |
| HPF (α = 1/2) | F | F | F | # | – |
| SHM (α = 1, β = 1/2) | F | F | F | # | – |
| HM (α = 1/2) | F | F | F | # | – |
| f5(x) | | | | | |
| PM1 (α = 1/2, β = 1/2) | 2.4220 × 10^−1 | 4.9246 × 10^−6 | 5.9705 × 10^−21 | 3.1821 | 0.422 |
| PM2 (α = 1, β = 1/2) | 4.3566 × 10^−1 | 3.1927 × 10^−4 | 2.4757 × 10^−15 | 3.5597 | 0.406 |
| PM3 (α = 1, β = 1) | 4.7757 × 10^−1 | 1.9651 × 10^−4 | 3.7695 × 10^−16 | 3.4779 | 0.328 |
| HPF (α = 1/2) | C | C | C | 3.0017 * | – |
| SHM (α = 1, β = 1/2) | C | C | C | 3.0000 * | – |
| HM (α = 1/2) | C | C | C | 5.1956 * | – |

F—method fails; #—COC not required in case of failure; C—converging to undesired root; *—COC in case of undesired root.
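Throughout these tables, ρc denotes the computational order of convergence, which is standardly estimated from three consecutive absolute errors e_n = ∣x_n − ξ∣ as ρc ≈ ln(e_{n+1}/e_n) / ln(e_n/e_{n−1}). A minimal sketch of that estimate (the error sequence below is illustrative, not the authors' data):

```python
import math

def coc(errors):
    """Computational order of convergence from the last three absolute errors."""
    e0, e1, e2 = errors[-3:]
    return math.log(e2 / e1) / math.log(e1 / e0)

# Illustrative errors of a third-order scheme: e_{n+1} ≈ 0.5 * e_n**3
errs = [1e-2]
for _ in range(3):
    errs.append(0.5 * errs[-1] ** 3)

print(round(coc(errs), 4))  # → 3.0
```

For a genuinely third-order sequence the estimate recovers ρc = 3 exactly, which is why the PM columns above cluster tightly around 3.0.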
Table 2. Comparison of different methods with memory.

| Method (with memory) | ∣x1 − ξ∣ | ∣x2 − ξ∣ | ∣x3 − ξ∣ | ρc | CPU time |
|---|---|---|---|---|---|
| f1(x) | | | | | |
| PMM1 (α = 1/2, β = 1/2) | 1.1718 × 10^−4 | 4.9506 × 10^−15 | 1.8615 × 10^−35 | 3.3435 | 0.407 |
| PMM2 (α = 1, β = 1/2) | 1.0371 × 10^−4 | 2.9246 × 10^−15 | 1.8615 × 10^−35 | 3.3362 | 0.407 |
| PMM3 (α = 1, β = 1) | 1.1756 × 10^−4 | 4.9995 × 10^−15 | 1.8615 × 10^−35 | 3.3434 | 0.344 |
| TM1 (γ0 = 0.01) | 6.8591 × 10^−3 | 2.0433 × 10^−7 | 2.3938 × 10^−18 | 2.4151 | 0.343 |
| TM2 (γ0 = 0.01) | 6.8591 × 10^−3 | 9.0200 × 10^−6 | 1.5202 × 10^−11 | 2.0037 | 0.312 |
| f2(x) | | | | | |
| PMM1 (α = 1/2, β = 1/2) | 9.0357 × 10^−3 | 3.1838 × 10^−7 | 1.6192 × 10^−23 | 3.6558 | 0.859 |
| PMM2 (α = 1, β = 1/2) | 8.7353 × 10^−3 | 2.2053 × 10^−7 | 5.2810 × 10^−24 | 3.6119 | 0.875 |
| PMM3 (α = 1, β = 1) | 9.0033 × 10^−3 | 3.1393 × 10^−7 | 1.5492 × 10^−23 | 3.6550 | 0.938 |
| TM1 (γ0 = 0.01) | 2.5974 × 10^−2 | 2.2058 × 10^−4 | 1.6454 × 10^−9 | 2.4624 | 0.672 |
| TM2 (γ0 = 0.01) | 2.5974 × 10^−2 | 1.1157 × 10^−3 | 2.4553 × 10^−6 | 1.9291 | 0.657 |
| f3(x) | | | | | |
| PMM1 (α = 1/2, β = 1/2) | 7.5147 × 10^−5 | 2.6944 × 10^−14 | 4.9147 × 10^−47 | 3.4661 | 0.516 |
| PMM2 (α = 1, β = 1/2) | 3.0249 × 10^−4 | 2.5479 × 10^−12 | 1.6300 × 10^−40 | 3.4917 | 0.468 |
| PMM3 (α = 1, β = 1) | 1.8359 × 10^−4 | 3.9306 × 10^−13 | 3.7470 × 10^−43 | 3.4628 | 0.422 |
| TM1 (γ0 = 0.01) | 2.0344 × 10^−2 | 1.4543 × 10^−7 | 4.5676 × 10^−20 | 2.4298 | 0.313 |
| TM2 (γ0 = 0.01) | 2.0344 × 10^−2 | 4.8277 × 10^−5 | 8.6971 × 10^−11 | 2.1886 | 0.297 |
| f4(x) | | | | | |
| PMM1 (α = 1/2, β = 1/2) | 2.1991 × 10^−1 | 2.5047 × 10^−3 | 3.7120 × 10^−10 | 3.4326 | 0.250 |
| PMM2 (α = 1, β = 1/2) | 4.0446 × 10^−1 | 1.2232 × 10^−2 | 6.8347 × 10^−8 | 3.2771 | 0.250 |
| PMM3 (α = 1, β = 1) | 4.0439 × 10^−1 | 1.1104 × 10^−2 | 5.3550 × 10^−8 | 3.2331 | 0.296 |
| TM1 (γ0 = 0.01) | 9.9000 × 10^−1 | 9.9010 × 10^−1 | 4.9012 × 10^−1 | 2.4357 | 0.359 |
| TM2 (γ0 = 0.01) | 9.9000 × 10^−1 | 6.5669 × 10^−1 | 3.7895 × 10^−1 | 1.9990 | 0.282 |
| f5(x) | | | | | |
| PMM1 (α = 1/2, β = 1/2) | 2.4220 × 10^−1 | 3.8151 × 10^−3 | 1.0165 × 10^−10 | 4.2117 | 0.360 |
| PMM2 (α = 1, β = 1/2) | 4.3566 × 10^−1 | 1.1013 × 10^−1 | 1.1077 × 10^−5 | 6.8399 | 0.421 |
| PMM3 (α = 1, β = 1) | 7.8773 × 10^−1 | 8.7201 × 10^−2 | 1.6415 × 10^−5 | 4.0907 | 0.485 |
| TM1 (γ0 = 0.01) | C | C | C | 41.637 * | – |
| TM2 (γ0 = 0.01) | C | C | C | 1.7991 * | – |

C—converging to undesired root; *—COC in case of undesired root.
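The γ0 = 0.01 parameter in TM1 and TM2 points to Traub–Steffensen-type comparison methods, in which the with-memory idea amounts to updating a self-accelerating parameter γ from the two most recent iterates rather than keeping it fixed. A hedged sketch of the classical scheme of this type (not necessarily the exact TM variant compared here; the test equation and starting point are illustrative):

```python
def traub_with_memory(f, x0, gamma0=0.01, tol=1e-12, maxit=50):
    """Derivative-free Traub-Steffensen-type iteration with memory:
    the self-accelerating parameter gamma is recomputed at every step
    from the newest pair of iterates."""
    x, gamma = x0, gamma0
    for _ in range(maxit):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x + gamma * fx                       # auxiliary point
        dd = (f(w) - fx) / (w - x)               # divided difference f[w, x]
        x_new = x - fx / dd                      # Steffensen-type step
        fx_new = f(x_new)
        if fx_new == fx:                         # stagnation guard
            return x_new
        gamma = -(x_new - x) / (fx_new - fx)     # memory: update gamma
        x = x_new
    return x

root = traub_with_memory(lambda x: x**3 + 4 * x**2 - 10, 1.5)
print(root)  # ≈ 1.36523 (root of x^3 + 4x^2 - 10)
```

With the γ update the order rises from 2 to 1 + √2 ≈ 2.414, which matches the ρc ≈ 2.4 values reported for TM1 above; keeping γ fixed gives the ρc ≈ 2.0 behaviour seen for TM2.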
Table 3. Comparison of different methods without memory for real-life problems.

| Method (without memory) | ∣x1 − ξ∣ | ∣x2 − ξ∣ | ∣x3 − ξ∣ | ρc | CPU time |
|---|---|---|---|---|---|
| f6(x) | | | | | |
| PM1 (α = 1/2, β = 1/2) | 2.4307 × 10^−6 | 2.8490 × 10^−16 | 2.8491 × 10^−16 | 3.0000 | 0.392 |
| PM2 (α = 1, β = 1/2) | 2.2284 × 10^−6 | 2.8491 × 10^−16 | 2.8491 × 10^−16 | 3.0000 | 0.390 |
| PM3 (α = 1, β = 1) | 2.5307 × 10^−6 | 2.8491 × 10^−16 | 2.8491 × 10^−16 | 3.0000 | 0.375 |
| HPF (α = 1/2) | 1.3952 × 10^−4 | 1.6301 × 10^−14 | 2.8491 × 10^−16 | 3.0000 | 0.313 |
| SHM (α = 1, β = 1/2) | 2.4927 × 10^−4 | 1.8505 × 10^−13 | 2.8491 × 10^−16 | 3.0000 | 0.328 |
| HM (α = 1/2) | 1.4741 × 10^−4 | 2.0063 × 10^−14 | 2.8491 × 10^−16 | 3.0000 | 0.329 |
| f7(x) | | | | | |
| PM1 (α = 1/2, β = 1/2) | 3.1479 × 10^−3 | 5.5786 × 10^−7 | 7.3448 × 10^−18 | 2.9887 | 0.234 |
| PM2 (α = 1, β = 1/2) | 7.3066 × 10^−4 | 3.5639 × 10^−9 | 4.1118 × 10^−18 | 2.9976 | 0.218 |
| PM3 (α = 1, β = 1) | 2.1500 × 10^−3 | 1.7710 × 10^−7 | 4.2152 × 10^−18 | 2.9912 | 0.250 |
| HPF (α = 1/2) | 4.0615 × 10^−4 | 1.4098 × 10^−10 | 4.1118 × 10^−18 | 3.0008 | 0.282 |
| SHM (α = 1, β = 1/2) | 5.7189 × 10^−3 | 5.6997 × 10^−6 | 5.9525 × 10^−15 | 2.9774 | 0.234 |
| HM (α = 1/2) | 4.5561 × 10^−3 | 2.3326 × 10^−6 | 3.3202 × 10^−16 | 2.9830 | 0.234 |
| f8(x) | | | | | |
| PM1 (α = 1/2, β = 1/2) | 8.0551 × 10^−4 | 6.6259 × 10^−8 | 8.8456 × 10^−17 | 3.0049 | 0.359 |
| PM2 (α = 1, β = 1/2) | 5.9396 × 10^−4 | 1.3586 × 10^−8 | 8.8493 × 10^−17 | 3.0056 | 0.360 |
| PM3 (α = 1, β = 1) | 1.0985 × 10^−3 | 1.7216 × 10^−7 | 8.7851 × 10^−17 | 3.0096 | 0.376 |
| HPF (α = 1/2) | 5.7969 × 10^−4 | 4.0492 × 10^−8 | 8.8506 × 10^−17 | 3.0000 | 0.344 |
| SHM (α = 1, β = 1/2) | 2.8780 × 10^−3 | 1.4099 × 10^−5 | 1.6623 × 10^−12 | 3.0246 | 0.313 |
| HM (α = 1/2) | 1.9432 × 10^−5 | 1.0029 × 10^−13 | 8.8493 × 10^−17 | 3.0000 | 0.296 |
| f9(x) | | | | | |
| PM1 (α = 1/2, β = 1/2) | 2.1183 × 10^−3 | 3.7397 × 10^−10 | 2.3650 × 10^−30 | 2.9998 | 0.390 |
| PM2 (α = 1, β = 1/2) | 8.4057 × 10^−4 | 1.0824 × 10^−11 | 3.0466 × 10^−31 | 2.9999 | 0.375 |
| PM3 (α = 1, β = 1) | 1.8511 × 10^−3 | 2.4942 × 10^−10 | 9.1594 × 10^−31 | 2.9998 | 0.376 |
| HPF (α = 1/2) | 1.2235 × 10^−3 | 1.8468 × 10^−11 | 3.0470 × 10^−31 | 2.9994 | 0.328 |
| SHM (α = 1, β = 1/2) | 2.6486 × 10^−3 | 1.4723 × 10^−9 | 2.5407 × 10^−28 | 2.9996 | 0.406 |
| HM (α = 1/2) | 4.1975 × 10^−3 | 3.6399 × 10^−9 | 2.3630 × 10^−27 | 3.0001 | 0.375 |
| f10(x) | | | | | |
| PM1 (α = 1/2, β = 1/2) | 1.1429 × 10^−3 | 1.8779 × 10^−5 | 1.8779 × 10^−5 | 3.0002 | 0.298 |
| PM2 (α = 1, β = 1/2) | 6.2014 × 10^−4 | 1.8779 × 10^−5 | 1.8779 × 10^−5 | 3.0001 | 0.327 |
| PM3 (α = 1, β = 1) | 1.3290 × 10^−3 | 1.8779 × 10^−5 | 1.8779 × 10^−5 | 3.0003 | 0.328 |
| HPF (α = 1/2) | 2.6168 × 10^−6 | 1.8779 × 10^−5 | 1.8779 × 10^−5 | 3.0000 | 0.313 |
| SHM (α = 1, β = 1/2) | 1.5114 × 10^−3 | 1.8778 × 10^−5 | 1.8779 × 10^−5 | 3.0002 | 0.297 |
| HM (α = 1/2) | 1.6720 × 10^−3 | 1.8778 × 10^−5 | 1.8779 × 10^−5 | 3.0003 | 0.234 |
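The real-life problems f6–f10 named in the abstract (fractional conversion in a chemical reactor, Planck's radiation law, Van der Waals, electron trajectory) each reduce to a single scalar equation. Planck's radiation law, for instance, leads to the standard test equation e^(−x) = 1 − x/5, whose positive root x ≈ 4.9651 determines Wien's displacement constant. A minimal sketch using plain Newton iteration as a stand-in for the PM/PMM schemes defined earlier in the paper:

```python
import math

def newton(f, df, x0, tol=1e-12, maxit=50):
    """Classical Newton iteration, stopped on step size."""
    x = x0
    for _ in range(maxit):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Planck's radiation law problem: e^(-x) = 1 - x/5  =>  f(x) = e^(-x) - 1 + x/5
f  = lambda x: math.exp(-x) - 1 + x / 5
df = lambda x: -math.exp(-x) + 1 / 5

x_star = newton(f, df, 5.0)
print(round(x_star, 4))  # → 4.9651
```

The near-constant ∣x2 − ξ∣ and ∣x3 − ξ∣ columns for f6 and f10 above simply reflect that those methods already reach the attainable precision by the second iterate.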
Table 4. Comparison of different methods with memory for real-life problems.

| Method (with memory) | ∣x1 − ξ∣ | ∣x2 − ξ∣ | ∣x3 − ξ∣ | ρc | CPU time |
|---|---|---|---|---|---|
| f6(x) | | | | | |
| PMM1 (α = 1/2, β = 1/2) | 2.4307 × 10^−6 | 2.8491 × 10^−16 | 2.8491 × 10^−16 | 3.3297 | 0.359 |
| PMM2 (α = 1, β = 1/2) | 2.2284 × 10^−6 | 2.8491 × 10^−16 | 2.8491 × 10^−16 | 3.3318 | 0.344 |
| PMM3 (α = 1, β = 1) | 2.5307 × 10^−6 | 2.8491 × 10^−16 | 2.8491 × 10^−16 | 3.3293 | 0.297 |
| TM1 (γ0 = 0.01) | 1.5389 × 10^−3 | 4.7636 × 10^−10 | 2.8491 × 10^−16 | 2.4006 | 0.282 |
| TM2 (γ0 = 0.01) | 1.5389 × 10^−3 | 8.5688 × 10^−8 | 5.5032 × 10^−16 | 2.0001 | 0.297 |
| f7(x) | | | | | |
| PMM1 (α = 1/2, β = 1/2) | 3.1479 × 10^−3 | 1.9881 × 10^−7 | 4.1148 × 10^−18 | 3.2847 | 0.125 |
| PMM2 (α = 1, β = 1/2) | 7.3066 × 10^−4 | 2.3942 × 10^−9 | 4.1118 × 10^−18 | 3.3337 | 0.187 |
| PMM3 (α = 1, β = 1) | 2.1500 × 10^−3 | 6.3764 × 10^−8 | 4.1119 × 10^−18 | 3.3027 | 0.171 |
| TM1 (γ0 = 0.01) | 1.8745 × 10^−2 | 8.6076 × 10^−4 | 6.4139 × 10^−7 | 2.2611 | 0.157 |
| TM2 (γ0 = 0.01) | 1.8745 × 10^−2 | 2.9838 × 10^−3 | 9.9340 × 10^−5 | 1.7706 | 0.203 |
| f8(x) | | | | | |
| PMM1 (α = 1/2, β = 1/2) | 8.0551 × 10^−4 | 1.3900 × 10^−8 | 8.8493 × 10^−17 | 3.2477 | 0.390 |
| PMM2 (α = 1, β = 1/2) | 5.9396 × 10^−4 | 5.4849 × 10^−9 | 8.8493 × 10^−17 | 3.2580 | 0.374 |
| PMM3 (α = 1, β = 1) | 1.0985 × 10^−3 | 3.4510 × 10^−8 | 8.8493 × 10^−17 | 3.2322 | 0.297 |
| TM1 (γ0 = 0.01) | 1.7662 × 10^−3 | 2.1922 × 10^−5 | 1.2446 × 10^−10 | 2.7340 | 0.296 |
| TM2 (γ0 = 0.01) | 1.7662 × 10^−3 | 1.1307 × 10^−4 | 3.9994 × 10^−7 | 2.0345 | 0.344 |
| f9(x) | | | | | |
| PMM1 (α = 1/2, β = 1/2) | 2.1183 × 10^−3 | 5.2782 × 10^−11 | 3.0463 × 10^−31 | 3.3204 | 0.390 |
| PMM2 (α = 1, β = 1/2) | 8.4057 × 10^−4 | 3.2315 × 10^−12 | 3.0463 × 10^−31 | 3.3363 | 0.391 |
| PMM3 (α = 1, β = 1) | 1.8511 × 10^−3 | 3.5214 × 10^−11 | 3.0463 × 10^−31 | 3.3231 | 0.359 |
| TM1 (γ0 = 0.01) | 3.9978 × 10^−2 | 8.1932 × 10^−5 | 2.4800 × 10^−11 | 2.4205 | 0.328 |
| TM2 (γ0 = 0.01) | 3.9978 × 10^−2 | 8.3312 × 10^−4 | 3.8756 × 10^−7 | 1.9767 | 0.312 |
| f10(x) | | | | | |
| PMM1 (α = 1/2, β = 1/2) | 1.1429 × 10^−3 | 1.8779 × 10^−5 | 1.8779 × 10^−5 | 3.2964 | 0.218 |
| PMM2 (α = 1, β = 1/2) | 6.2014 × 10^−4 | 1.8779 × 10^−5 | 1.8779 × 10^−5 | 3.3102 | 0.187 |
| PMM3 (α = 1, β = 1) | 1.3290 × 10^−3 | 1.8779 × 10^−5 | 1.8779 × 10^−5 | 3.2930 | 0.234 |
| TM1 (γ0 = 0.01) | 2.3352 × 10^−2 | 4.2977 × 10^−5 | 1.8779 × 10^−5 | 2.5636 | 0.235 |
| TM2 (γ0 = 0.01) | 2.3352 × 10^−2 | 5.4260 × 10^−4 | 1.9035 × 10^−5 | 2.0023 | 0.220 |
Table 5. Comparison of different methods without memory in terms of Avg_Iter, PNC, and CPU time.

| Method (without memory) | Avg_Iter | PNC | CPU time |
|---|---|---|---|
| p1(z) | | | |
| PM1 (α = 1/2, β = 1/2) | 2.7165 | 0 | 4.0501 |
| PM2 (α = 1, β = 1/2) | 2.2822 | 0 | 3.5181 |
| PM3 (α = 1, β = 1) | 2.6060 | 0 | 3.8734 |
| HPF (α = 1/2) | 2.5187 | 0.004 × 10^−3 | 2.7963 |
| SHM (α = 1, β = 1/2) | 2.5187 | 0.004 × 10^−3 | 3.0479 |
| p2(z) | | | |
| PM1 (α = 1/2, β = 1/2) | 3.1603 | 0 | 5.6380 |
| PM2 (α = 1, β = 1/2) | 2.7433 | 0 | 4.9187 |
| PM3 (α = 1, β = 1) | 2.8910 | 0 | 5.0015 |
| HPF (α = 1/2) | 2.2958 | 0.004 × 10^−3 | 2.7867 |
| SHM (α = 1, β = 1/2) | 7.1570 | 1.505 × 10^−1 | 7.9437 |
| p3(z) | | | |
| PM1 (α = 1/2, β = 1/2) | 3.9678 | 1.323 × 10^−2 | 7.2858 |
| PM2 (α = 1, β = 1/2) | 3.4784 | 8.185 × 10^−3 | 6.3408 |
| PM3 (α = 1, β = 1) | 3.6042 | 7.816 × 10^−3 | 6.7897 |
| HPF (α = 1/2) | 2.9863 | 0.004 × 10^−3 | 3.4503 |
| SHM (α = 1, β = 1/2) | 8.9566 | 1.135 × 10^−1 | 10.4044 |
| p4(z) | | | |
| PM1 (α = 1/2, β = 1/2) | 3.5381 | 0 | 6.2320 |
| PM2 (α = 1, β = 1/2) | 3.3973 | 0 | 6.0172 |
| PM3 (α = 1, β = 1) | 3.4684 | 0 | 6.0730 |
| HPF (α = 1/2) | 3.4316 | 0 | 3.7895 |
| SHM (α = 1, β = 1/2) | 4.7686 | 0.304 × 10^−3 | 6.3023 |
Table 6. Comparison of different methods with memory in terms of Avg_Iter, PNC, and CPU time.

| Method (with memory) | Avg_Iter | PNC | CPU time |
|---|---|---|---|
| p1(z) | | | |
| PMM1 (α = 1/2, β = 1/2) | 2.7089 | 0 | 5.2390 |
| PMM2 (α = 1, β = 1/2) | 2.4002 | 0 | 4.6904 |
| PMM3 (α = 1, β = 1) | 2.5916 | 0 | 5.1093 |
| TM1 (γ0 = 0.01) | 4.3642 | 1.949 × 10^−3 | 3.2522 |
| TM2 (γ0 = 0.01) | 8.3338 | 1.390 × 10^−1 | 5.6000 |
| p2(z) | | | |
| PMM1 (α = 1/2, β = 1/2) | 3.1132 | 0 | 6.8290 |
| PMM2 (α = 1, β = 1/2) | 2.7755 | 0 | 6.2775 |
| PMM3 (α = 1, β = 1) | 2.8498 | 0 | 6.3416 |
| TM1 (γ0 = 0.01) | 6.0252 | 1.110 × 10^−3 | 5.2045 |
| TM2 (γ0 = 0.01) | 11.9211 | 2.765 × 10^−1 | 9.6673 |
| p3(z) | | | |
| PMM1 (α = 1/2, β = 1/2) | 3.6980 | 0.030 × 10^−3 | 8.2979 |
| PMM2 (α = 1, β = 1/2) | 3.3119 | 0 | 7.5358 |
| PMM3 (α = 1, β = 1) | 3.3200 | 0 | 7.7738 |
| TM1 (γ0 = 0.01) | 8.3110 | 4.734 × 10^−2 | 6.7431 |
| TM2 (γ0 = 0.01) | 15.0794 | 4.084 × 10^−1 | 11.8306 |
| p4(z) | | | |
| PMM1 (α = 1/2, β = 1/2) | 3.7034 | 0 | 8.2241 |
| PMM2 (α = 1, β = 1/2) | 3.6684 | 0 | 8.2776 |
| PMM3 (α = 1, β = 1) | 3.6692 | 0 | 8.2856 |
| TM1 (γ0 = 0.01) | 5.4714 | 1.490 × 10^−3 | 4.5927 |
| TM2 (γ0 = 0.01) | 9.1054 | 7.957 × 10^−2 | 7.0686 |
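Basin-of-attraction figures and the Avg_Iter/PNC statistics are generated in the usual way: every point of a uniform grid over a region of the complex plane seeds the iteration, which runs until a tolerance or an iteration cap is reached; Avg_Iter averages the iteration counts of the converging points, and PNC is the fraction of points that never converge. A minimal sketch with Newton's method on p(z) = z^3 − 1 standing in for the compared schemes (grid size, region, tolerance, and cap are illustrative assumptions, not the settings used for Figures 1–8):

```python
import numpy as np

def basin_stats(p, dp, n=200, box=2.0, tol=1e-6, maxit=100):
    """Average iterations and percentage of non-converging points (PNC)
    for the iteration z -> z - p(z)/dp(z) over an n x n grid on
    [-box, box] x [-box, box]."""
    xs = np.linspace(-box, box, n)
    Z = xs[None, :] + 1j * xs[:, None]          # grid of starting points
    iters = np.full(Z.shape, maxit)             # maxit = "not converged yet"
    for k in range(maxit):
        with np.errstate(divide="ignore", invalid="ignore"):
            Z_new = Z - p(Z) / dp(Z)
        # record the first iteration at which a point's update falls below tol
        done = (iters == maxit) & (np.abs(Z_new - Z) < tol)
        iters[done] = k + 1
        Z = Z_new
    converged = iters < maxit
    avg_iter = iters[converged].mean()
    pnc = 1.0 - converged.mean()                # fraction of non-converging seeds
    return avg_iter, pnc

avg, pnc = basin_stats(lambda z: z**3 - 1, lambda z: 3 * z**2)
```

Colouring each grid point by the root it reaches (and shading by `iters`) produces basin pictures of the kind shown in Figures 1–8.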
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Sharma, H.; Kansal, M.; Behl, R. An Efficient Two-Step Iterative Family Adaptive with Memory for Solving Nonlinear Equations and Their Applications. Math. Comput. Appl. 2022, 27, 97. https://doi.org/10.3390/mca27060097