Article

Novel Parametric Families of with and without Memory Iterative Methods for Multiple Roots of Nonlinear Equations

1. Department of Mathematics, National Institute of Technology Manipur, Langol, Imphal 795004, Manipur, India
2. Department of Physics and Chemistry, Technical University of Cluj-Napoca, B.-dul Muncii nr. 103-105, 400641 Cluj-Napoca, Romania
* Authors to whom correspondence should be addressed.
Mathematics 2023, 11(9), 2036; https://doi.org/10.3390/math11092036
Submission received: 3 April 2023 / Revised: 20 April 2023 / Accepted: 21 April 2023 / Published: 25 April 2023
(This article belongs to the Special Issue Numerical Analysis and Scientific Computing, 3rd Edition)

Abstract:
Iterative methods with memory that use accelerating parameters for computing multiple roots are almost non-existent in the literature. Furthermore, the only paper available in this direction showed an increase in the order of convergence of only 0.5 from the without memory method to its with memory extension. In this paper, we introduce a new fifth-order without memory method, which we subsequently extend to two higher-order with memory methods using a self-accelerating parameter. The proposed with memory extensions demonstrate a significant improvement in the order of convergence, from 5 to 7, making this the first paper to achieve at least a two-order improvement. In addition, our paper is the first to use Hermite interpolating polynomials to approximate the accelerating parameter in with memory methods for multiple roots. We also provide rigorous proofs of the convergence theorems that establish the order of the proposed methods. Finally, we demonstrate the potential impact of the proposed methods through numerical experimentation on a diverse range of problems. Overall, we believe that the proposed methods have significant potential for applications in science and engineering.

1. Introduction

The quest for efficient and accurate methods for finding the roots of nonlinear equations is a continuous endeavour in the field of numerical computation. One of the primary objectives in this pursuit is to develop methods that are both efficient and have a simple structure. Iterative methods are favoured for finding the roots of nonlinear equations because they can be easily implemented using computer software and provide results up to a desired accuracy, despite not always yielding exact roots. As a result, they are practical and versatile tools for solving nonlinear equations of the form:
$$\Omega(s) = 0, \qquad (1)$$
where $\Omega : D \subseteq \mathbb{C} \to \mathbb{C}$ is a function defined in an open region $D$ having a multiple root $\xi$ with multiplicity $m > 1$. These equations are often encountered in various fields of science, engineering, and mathematics, and their solutions are important for understanding complex systems and phenomena. The modified Newton–Raphson method is a widely known iterative method for finding multiple roots of (1), and it is given by
$$s_{n+1} = s_n - m\,\frac{\Omega(s_n)}{\Omega'(s_n)}, \qquad n = 0, 1, 2, \ldots \qquad (2)$$
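As a minimal illustration of the step (2), the following Python sketch (our own; the helper name and the cubic test problem are illustrative choices, not from the paper) applies the modified Newton–Raphson iteration to a root of known multiplicity:

```python
# Modified Newton-Raphson for a root of multiplicity m, Eq. (2):
# s_{n+1} = s_n - m * Omega(s_n) / Omega'(s_n).
def modified_newton(omega, d_omega, s0, m, tol=1e-12, max_iter=100):
    s = s0
    for _ in range(max_iter):
        fs = omega(s)
        if fs == 0.0:            # landed exactly on the root
            break
        step = m * fs / d_omega(s)
        s -= step
        if abs(step) < tol:
            break
    return s

# Example: (s - 2)^3 has a root s = 2 of multiplicity m = 3.
root = modified_newton(lambda s: (s - 2.0)**3,
                       lambda s: 3.0*(s - 2.0)**2,
                       s0=3.0, m=3)
```

With the factor $m$, the step recovers quadratic convergence at the multiple root, which plain Newton loses.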
Equation (2) has a quadratic order of convergence for $m \ge 1$. While there exist numerous with and without memory iterative methods for computing simple roots [1,2,3,4,5,6], the task of developing efficient methods for finding multiple roots remains challenging. Despite the availability of several without memory iterative methods for multiple roots (see [7,8,9,10] and the references therein), there is a paucity of methods that utilise multiple points with memory and accelerating parameters for computing multiple roots.
The authors in [11] recently extended existing optimal-order without memory methods [12,13] for multiple roots to include memory using an accelerating parameter. The with memory extension of the optimal fourth-order method is given below:
$$y_n = s_n - m\,\frac{\Omega(s_n)}{\Omega'(s_n) + \alpha_n\,\Omega(s_n)}, \qquad s_{n+1} = y_n - m\,u_n\,W(u_n)\,\frac{\Omega(s_n)}{\Omega'(s_n) + 2\alpha_n\,\Omega(s_n)}, \qquad (3)$$
where $u_n = \left(\frac{\Omega(y_n)}{\Omega(s_n)}\right)^{\frac{1}{m}}$. This method (3) has the error equations
$$e_{n,y} = \frac{\alpha_n + c_1}{m}\,e_n^2 + O(e_n^3), \qquad (4)$$
$$e_{n+1} = \frac{(\alpha_n + c_1)\,A}{2m^3}\,e_n^4 + O(e_n^5), \qquad (5)$$
where $c_i = \frac{m!}{(m+i)!}\,\frac{\Omega^{(m+i)}(\xi)}{\Omega^{(m)}(\xi)}$, $i = 1, 2, \ldots$, $e_n = s_n - \xi$ is the error at the $n$th iteration, and $A = \alpha_n^2\left(W''(0) - 4\right) + 2\alpha_n c_1\left(W''(0) - 7\right) + \left(W''(0) - 9 - m\right)c_1^2 + 2mc_2$. The accelerating parameter $\alpha_n$ is calculated as
$$\alpha_n = -\frac{N_2^{(m+1)}(s_n)}{(m+1)\,\Omega^{(m)}(s_n)} \approx -c_1, \qquad (6)$$
where $N_2^{(m+1)}(s_n)$ is calculated using a Newton interpolating polynomial.
However, the R-order of convergence of these methods increases only from 2 to 2.4142, from 4 to 4.2361, and from 4 to 4.5616, which is a modest improvement. As far as we know, this is the first and only paper to discuss with memory iterative methods for finding multiple roots using an accelerating parameter.
Motivated by the approach using the self-accelerating technique, we plan to develop new higher-order with memory iterative methods for finding multiple roots of nonlinear equations using an accelerating parameter.
This paper presents new parametric families of two-point with and without memory iterative methods for finding multiple roots of nonlinear equations. The family of without memory methods employs the weight function technique with a real parameter to achieve fifth order of convergence, while the extended family of with memory methods utilises an accelerating parameter to improve the convergence order further. The with memory methods increase the convergence order from 5 to 7 and to 7.2749, respectively. To ensure efficiency, we approximate the accelerating parameter using Hermite interpolating polynomials.
The manuscript is structured as follows. Section 2 outlines the development of methods utilising both weight function techniques and the accelerating parameter, with particular attention paid to analysing the order of convergence of the new methods through theorems and lemmas. In Section 3, we present the results of numerical tests comparing the proposed methods with other known methods. Finally, Section 4 provides concluding remarks on the study.

2. Construction of New Iterative Schemes and Their Convergence Analysis

In this section, we present the new parametric families of two-point with and without memory methods for finding multiple roots having multiplicity m > 1 in two separate subsections.

2.1. Parametric Family of Two-Point without Memory Methods and Its Convergence Analysis

Here, we introduce a new fifth-order parametric family of two-point without memory methods for finding multiple roots with multiplicity m > 1 . The newly proposed parametric family of without memory methods is defined as follows, which we denote as PFM:
$$y_n = s_n - m\,\frac{\Omega(s_n)}{\Omega'(s_n) + \alpha\,\Omega(s_n)}, \qquad \alpha \in \mathbb{R}\setminus\{0\},$$
$$s_{n+1} = y_n - m\,W(t_n)\,\frac{\Omega(y_n)}{\Omega'(y_n) + \alpha\,\Omega(y_n)}, \qquad (7)$$
where $W : \mathbb{C} \to \mathbb{C}$ is an analytic function in the neighbourhood of $0$ with $t_n = \left(\frac{\Omega'(y_n)}{\Omega'(s_n)}\right)^{\frac{1}{m-1}}$. This new family (7) requires two function and two first-derivative evaluations at each iteration.
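A direct transcription of the two-step family (7) can be sketched as follows (an illustrative Python implementation of our own, not from the paper; we plug in the Case-1 weight function $W(t) = 1 + t^2$ from Section 2.1 and a test problem of our own choosing):

```python
import math

def pfm(omega, d_omega, s0, m, alpha=1.0, tol=1e-12, max_iter=100):
    """One possible implementation of the fifth-order family PFM, Eq. (7),
    with the weight function W(t) = 1 + t^2."""
    W = lambda t: 1.0 + t * t
    s = s0
    for _ in range(max_iter):
        fs, dfs = omega(s), d_omega(s)
        if fs == 0.0:
            break
        # First step: y_n = s_n - m*Omega(s_n) / (Omega'(s_n) + alpha*Omega(s_n))
        y = s - m * fs / (dfs + alpha * fs)
        fy, dfy = omega(y), d_omega(y)
        if fy == 0.0:
            return y
        # t_n = (Omega'(y_n)/Omega'(s_n))^(1/(m-1)); abs() keeps the
        # fractional power real in floating-point arithmetic.
        t = abs(dfy / dfs) ** (1.0 / (m - 1))
        # Second step: s_{n+1} = y_n - m*W(t_n)*Omega(y_n) / (Omega'(y_n) + alpha*Omega(y_n))
        s_new = y - m * W(t) * fy / (dfy + alpha * fy)
        if abs(s_new - s) < tol:
            return s_new
        s = s_new
    return s

# Test problem: (s - 2)^3 * exp(s) has the root 2 with multiplicity m = 3.
root = pfm(lambda s: (s - 2.0)**3 * math.exp(s),
           lambda s: (s - 2.0)**2 * (s + 1.0) * math.exp(s),
           s0=2.5, m=3)
```

Note that only $\Omega$ and $\Omega'$ are evaluated, twice each per iteration, matching the count stated above.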
Theorem 1.
Let $s = \xi$ be a multiple zero of multiplicity $m > 1$ of a function $\Omega : \mathbb{C} \to \mathbb{C}$ in a region enclosing $\xi$. Then, the new parametric family of methods (PFM) defined by (7) has fifth order of convergence if the following conditions are fulfilled:
$$W(0) = 1; \quad W'(0) = 0; \quad W''(0) = 2; \quad |W^{(3)}(0)| < \infty. \qquad (8)$$
This satisfies the following error equation:
$$e_{n+1} = \frac{(\alpha + c_1)^2}{6(m-1)m^4}\Big((1-m)\left(12mc_2 + \alpha^2\left(6 + W^{(3)}(0)\right)\right) + 2\alpha c_1\left(m\left(12 - W^{(3)}(0)\right) + W^{(3)}(0)\right) + c_1^2\left(6 + 6m^2 - m\left(12 + W^{(3)}(0)\right) + W^{(3)}(0)\right)\Big)e_n^5 + O(e_n^6), \qquad (9)$$
where $e_n = s_n - \xi$ is the error at the $n$th iteration.
Proof of Theorem 1.
Let $s = \xi$ be a multiple root of $\Omega(s) = 0$ with multiplicity $m > 1$ and let $e_n = s_n - \xi$ be the error at the $n$th iteration. We expand $\Omega(s_n)$ and $\Omega'(s_n)$ in powers of $e_n$ by Taylor's series expansion as follows:
$$\Omega(s_n) = \frac{\Omega^{(m)}(\xi)}{m!}\,e_n^m\left(1 + c_1 e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + c_5 e_n^5 + O(e_n^6)\right) \qquad (10)$$
and
$$\Omega'(s_n) = \frac{\Omega^{(m)}(\xi)}{m!}\,e_n^{m-1}\left(m + (m+1)c_1 e_n + (m+2)c_2 e_n^2 + (m+3)c_3 e_n^3 + (m+4)c_4 e_n^4 + (m+5)c_5 e_n^5 + O(e_n^6)\right), \qquad (11)$$
where $c_i = \frac{m!}{(m+i)!}\,\frac{\Omega^{(m+i)}(\xi)}{\Omega^{(m)}(\xi)}$, $i = 1, 2, 3, \ldots$. Using the above Equations (10) and (11) in the first step of Equation (7), we obtain
$$y_n - \xi = \frac{\alpha + c_1}{m}\,e_n^2 - \frac{1}{m^2}\left(\alpha^2 + 2\alpha c_1 + (1+m)c_1^2 - 2mc_2\right)e_n^3 + \sum_{i=0}^{1} S_i\,e_n^{i+4} + O(e_n^6), \qquad (12)$$
where S i , i = 0 , 1 are given in terms of α , m , c 1 , c 2 , c 3 , c 4 , i.e., S 0 = 1 m 3 α 3 + ( 3 + 2 m ) α c 1 2 + ( 1 + m ) 2 c 1 3 4 m α c 2 + c 1 ( 3 α 2 m ( 4 + 3 m ) c 2 ) + 3 m 2 c 3 , S 1 = 1 m 4 ( α 4 + 2 ( 2 + 3 m + m 2 ) α c 1 3 + ( 1 + m ) 3 c 1 4 6 m α 2 c 2 + 2 m 2 ( 2 + m ) c 2 2 c 1 2 ( 3 ( 2 + m ) α 2 + 2 m ( 3 + 5 m + 2 m 2 ) c 2 ) + 6 m 2 α c 3 + 2 c 1 ( 2 α 3 3 m ( 2 + m ) α c 2 + m 2 ( 3 + 2 m ) c 3 ) 4 m 3 c 4 ) .
Then, using Equation (12), $\Omega(y_n)$ and $\Omega'(y_n)$ can be obtained as follows:
$$\Omega(y_n) = \frac{\Omega^{(m)}(\xi)}{m!}\,e_n^{2m}\left(\frac{\alpha+c_1}{m}\right)^{m}\left[1 - \frac{1}{\alpha+c_1}\left(\alpha^2 + 2\alpha c_1 + (1+m)c_1^2 - 2mc_2\right)e_n + \sum_{i=0}^{3} K_i\,e_n^{i+2} + O(e_n^6)\right], \qquad (13)$$
where K i , i = 0 , 1 , 2 , 3 are given in terms of α , m , c 1 , c 2 , c 3 , c 4 , i.e.,
K 0 = 1 2 m ( α + c 1 ) 2 ( 2 c 1 ( α + c 1 ) 3 + ( 1 + m ) ( α 2 + 2 α c 1 + ( 1 + m ) c 1 2 2 m c 2 ) 2 + 2 ( α + c 1 ) ( α 3 + ( 3 + 2 m ) α c 1 2 + ( 1 + m ) 2 c 1 3 4 m α c 2 + c 1 ( 3 α 2 m ( 4 + 3 m ) c 2 ) + 3 m 2 c 3 ) ) , etc.
In a similar manner, $\Omega'(y_n)$ can be written as follows:
$$\Omega'(y_n) = \frac{\Omega^{(m)}(\xi)}{(m-1)!}\left(\frac{\alpha+c_1}{m}\right)^{m-1} e_n^{2(m-1)}\left[1 - \frac{m-1}{m(\alpha+c_1)}\left(\alpha^2 + 2\alpha c_1 + (1+m)c_1^2 - 2mc_2\right)e_n + \sum_{i=0}^{3} M_i\,e_n^{i+2} + O(e_n^6)\right], \qquad (14)$$
where $M_i$, $i = 0, 1, 2, 3$ are given in terms of $\alpha, m, c_1, c_2, c_3, c_4$.
Applying Equations (11) and (14), we obtain
$$t_n = \left(\frac{\Omega'(y_n)}{\Omega'(s_n)}\right)^{\frac{1}{m-1}} = \frac{\alpha+c_1}{m}\,e_n + \frac{1}{(m-1)m^2}\left((1-3m)\alpha c_1 - m(1+m)c_1^2 + (m-1)\left(2mc_2 - \alpha^2\right)\right)e_n^2 + \sum_{i=0}^{2} N_i\,e_n^{i+3} + O(e_n^6), \qquad (15)$$
where $N_i$, $i = 0, 1, 2$ are given in terms of $\alpha, m, c_1, c_2, c_3, c_4$.
Expanding the weight function $W(t_n)$ in the neighbourhood of the origin by Taylor's series expansion up to second-order terms gives
$$W(t_n) \approx W(0) + t_n\,W'(0) + \frac{t_n^2}{2!}\,W''(0) + O(t_n^3). \qquad (16)$$
Now, substituting the expressions (12)–(16) into the last step of Equation (7), the error equation is obtained as follows:
$$e_{n+1} = \frac{1}{m}\left(1 - W(0)\right)(\alpha + c_1)\,e_n^2 + \frac{1}{m^2}\Big(2m\left(1 - W(0)\right)c_2 + \alpha^2\left(-1 + W(0) - W'(0)\right) + 2\alpha c_1\left(-1 + W(0) - W'(0)\right) + c_1^2\left(-1 + m\left(-1 + W(0)\right) + W(0) - W'(0)\right)\Big)e_n^3 + \cdots + O(e_n^6). \qquad (17)$$
Now, putting the conditions $W(0) = 1$, $W'(0) = 0$, and $W''(0) = 2$ in (17), the error equation becomes
$$e_{n+1} = \frac{(\alpha + c_1)^2}{6(m-1)m^4}\Big((1-m)\left(12mc_2 + \alpha^2\left(6 + W^{(3)}(0)\right)\right) + 2\alpha c_1\left(m\left(12 - W^{(3)}(0)\right) + W^{(3)}(0)\right) + c_1^2\left(6 + 6m^2 - m\left(12 + W^{(3)}(0)\right) + W^{(3)}(0)\right)\Big)e_n^5 + O(e_n^6).$$
From the above error equation, we can conclude that the newly proposed family of iterative methods (PFM) is of fifth order. This completes the proof. □

Some Particular Cases of the Weight Function, W ( t n ) :

Based on the conditions on W ( t n ) as shown in Theorem 1, we can generate numerous methods of the family (7). However, we restricted this to the following simple forms:
Case 1. Let $W(t_n)$ be a polynomial function of degree two of the form:
$$W(t_n) = a_0 + a_1 t_n + a_2 t_n^2.$$
Then, using the conditions (8) of Theorem 1, we obtain $a_0 = a_2 = 1$ and $a_1 = 0$. Thus, $W(t_n)$ becomes
$$W(t_n) = 1 + t_n^2. \qquad (18)$$
Case 2. Consider $W(t_n)$ as a rational function of the form:
$$W(t_n) = \frac{1 + a_0 t_n}{1 + a_1 t_n + a_2 t_n^2}.$$
Then, using the conditions (8) of Theorem 1, we obtain $a_0 = a_1 = \lambda$ and $a_2 = -1$. Thus, $W(t_n)$ becomes
$$W(t_n) = \frac{1 + \lambda t_n}{1 + \lambda t_n - t_n^2}, \qquad \lambda \in \mathbb{R}. \qquad (19)$$
Case 3. Let $W(t_n)$ be another rational function of the form:
$$W(t_n) = \frac{1 + a_0 t_n^2}{1 + a_1 t_n^2}.$$
Then, using the conditions (8) of Theorem 1, we obtain $a_0 = 2$ and $a_1 = 1$. Thus, $W(t_n)$ becomes
$$W(t_n) = \frac{1 + 2t_n^2}{1 + t_n^2}. \qquad (20)$$
Case 4. Let $W(t_n)$ be a function of the form:
$$W(t_n) = (1 - t_n)\,e^{a_0 t_n + a_1 t_n^2}.$$
Then, using the conditions (8) of Theorem 1, we obtain $a_0 = 1$ and $a_1 = \frac{3}{2}$. Thus, $W(t_n)$ becomes
$$W(t_n) = (1 - t_n)\,e^{t_n + \frac{3}{2} t_n^2}. \qquad (21)$$
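The four cases above can be spot-checked numerically. The sketch below (our own illustrative Python check, not from the paper) verifies the conditions $W(0)=1$, $W'(0)=0$, $W''(0)=2$ of (8) for each weight function via central finite differences; `lam` stands for the free parameter of case (19), and the value `lam = 1.0` is an arbitrary choice:

```python
import math

# Finite-difference spot-check of conditions (8) for the weight
# functions (18)-(21).
lam = 1.0
cases = [
    lambda t: 1 + t**2,                          # (18)
    lambda t: (1 + lam*t) / (1 + lam*t - t**2),  # (19)
    lambda t: (1 + 2*t**2) / (1 + t**2),         # (20)
    lambda t: (1 - t) * math.exp(t + 1.5*t**2),  # (21)
]

h = 1e-5
results = []
for W in cases:
    W0   = W(0.0)                                 # ~ W(0)
    dW0  = (W(h) - W(-h)) / (2*h)                 # ~ W'(0), central difference
    d2W0 = (W(h) - 2*W(0.0) + W(-h)) / h**2       # ~ W''(0)
    results.append((W0, dW0, d2W0))
```

All four tuples come out numerically close to $(1, 0, 2)$, as Theorem 1 requires.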

2.2. Parametric Families of Two-Point with Memory Methods and Their Convergence Analysis

Here, we propose new parametric families of two-point with memory methods, which are an extension of the new fifth-order parametric family of without memory methods (PFM) (7).
By analysing the error Equation (9) of Theorem 1, we found that the convergence order of PFM (7) can be increased from 5 to 7 if we set $\alpha = -c_1$, where $c_1 = \frac{m!}{(m+1)!}\,\frac{\Omega^{(m+1)}(\xi)}{\Omega^{(m)}(\xi)}$, $m > 1$. However, since the exact values of $\Omega^{(m+1)}(\xi)$ and $\Omega^{(m)}(\xi)$ are not available, we used some approximations and replaced $\alpha$ by $\alpha_n$, where $\alpha_n$ is an accelerating parameter computed from the information available at the current and previous iterations so as to satisfy the following condition:
$$\lim_{n \to \infty} \alpha_n = -\frac{m!}{(m+1)!}\,\frac{\Omega^{(m+1)}(\xi)}{\Omega^{(m)}(\xi)} = -c_1. \qquad (22)$$
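The effect of this choice of parameter can be seen numerically. In the sketch below (an illustrative Python experiment of our own, not from the paper) we run one member of the family (7) with $W(t) = 1 + t^2$ on $\Omega(s) = (s-2)^3 e^s$, for which $m = 3$ and $c_1 = \Omega^{(4)}(\xi)/\big(4\,\Omega^{(3)}(\xi)\big) = 1$; the ideal parameter value $-c_1 = -1$ reaches the root in no more iterations than a generic choice such as $\alpha = 1$:

```python
import math

# Omega(s) = (s-2)^3 * exp(s): multiple root xi = 2 with m = 3.
f  = lambda s: (s - 2.0)**3 * math.exp(s)
df = lambda s: (s - 2.0)**2 * (s + 1.0) * math.exp(s)
m  = 3

def iters(alpha, s0=2.5, tol=1e-13, max_iter=50):
    """Iterations of scheme (7) with W(t) = 1 + t^2 until |s - 2| < tol."""
    s = s0
    for n in range(1, max_iter + 1):
        fs, dfs = f(s), df(s)
        if fs == 0.0:
            return n
        y = s - m * fs / (dfs + alpha * fs)
        fy, dfy = f(y), df(y)
        if fy == 0.0:
            return n
        t = abs(dfy / dfs) ** (1.0 / (m - 1))     # t_n, kept real via abs()
        s = y - m * (1.0 + t*t) * fy / (dfy + alpha * fy)
        if abs(s - 2.0) < tol:
            return n
    return max_iter

n_ideal   = iters(-1.0)   # alpha = -c_1
n_generic = iters(1.0)    # arbitrary alpha
```

This motivates estimating $-c_1$ adaptively at run time, which is exactly what the Hermite-interpolation forms below do.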
To be precise, we employed Hermite interpolating polynomials for the computation of $\alpha_n$ as follows:
FORM 1. We computed α n as follows:
$$\alpha_n = -\frac{H_3^{(m+1)}(s_n)}{(m+1)\,\Omega^{(m)}(s_n)}, \qquad (23)$$
where
$$H_3(s) = \Omega(s_n) + \Omega[s_n, s_n](s - s_n) + \Omega[s_n, s_n, s_n](s - s_n)^2 + \cdots + \Omega[\underbrace{s_n, \ldots, s_n}_{m+1}, y_{n-1}](s - s_n)^{m+1} + \Omega[\underbrace{s_n, \ldots, s_n}_{m+1}, y_{n-1}, y_{n-1}](s - s_n)^{m+1}(s - y_{n-1}) + \Omega[\underbrace{s_n, \ldots, s_n}_{m+1}, y_{n-1}, y_{n-1}, s_{n-1}](s - s_n)^{m+1}(s - y_{n-1})^2$$
and
$$H_3^{(m+1)}(s_n) = (m+1)!\,\Omega[\underbrace{s_n, \ldots, s_n}_{m+1}, y_{n-1}] + (m+1)!\,\Omega[\underbrace{s_n, \ldots, s_n}_{m+1}, y_{n-1}, y_{n-1}]\,(s_n - y_{n-1}) + (m+1)!\,\Omega[\underbrace{s_n, \ldots, s_n}_{m+1}, y_{n-1}, y_{n-1}, s_{n-1}]\,(s_n - y_{n-1})^2.$$
Remark 1.
The Hermite interpolating polynomial H k ( s ) , k = 3 , 4 satisfies the condition H k ( s n ) = Ω ( s n ) , k = 3 , 4 .
Now, substituting α n from (23) in Equation (7), we obtain the following parametric family of with memory methods, which we denote as PFWM1:
$$y_n = s_n - m\,\frac{\Omega(s_n)}{\Omega'(s_n) + \alpha_n\,\Omega(s_n)}, \qquad \alpha_n = -\frac{H_3^{(m+1)}(s_n)}{(m+1)\,\Omega^{(m)}(s_n)},$$
$$s_{n+1} = y_n - m\,W(t_n)\,\frac{\Omega(y_n)}{\Omega'(y_n) + \alpha_n\,\Omega(y_n)}. \qquad (24)$$
FORM 2. We computed α n as follows:
$$\alpha_n = -\frac{H_4^{(m+1)}(s_n)}{(m+1)\,\Omega^{(m)}(s_n)}, \qquad (25)$$
where
$$H_4(s) = \Omega(s_n) + \Omega[s_n, s_n](s - s_n) + \Omega[s_n, s_n, s_n](s - s_n)^2 + \cdots + \Omega[\underbrace{s_n, \ldots, s_n}_{m+1}, y_{n-1}](s - s_n)^{m+1} + \Omega[\underbrace{s_n, \ldots, s_n}_{m+1}, y_{n-1}, y_{n-1}](s - s_n)^{m+1}(s - y_{n-1}) + \Omega[\underbrace{s_n, \ldots, s_n}_{m+1}, y_{n-1}, y_{n-1}, s_{n-1}](s - s_n)^{m+1}(s - y_{n-1})^2 + \Omega[\underbrace{s_n, \ldots, s_n}_{m+1}, y_{n-1}, y_{n-1}, s_{n-1}, s_{n-1}](s - s_n)^{m+1}(s - y_{n-1})^2(s - s_{n-1})$$
and
$$H_4^{(m+1)}(s_n) = (m+1)!\,\Omega[\underbrace{s_n, \ldots, s_n}_{m+1}, y_{n-1}] + (m+1)!\,\Omega[\underbrace{s_n, \ldots, s_n}_{m+1}, y_{n-1}, y_{n-1}]\,(s_n - y_{n-1}) + (m+1)!\,\Omega[\underbrace{s_n, \ldots, s_n}_{m+1}, y_{n-1}, y_{n-1}, s_{n-1}]\,(s_n - y_{n-1})^2 + (m+1)!\,\Omega[\underbrace{s_n, \ldots, s_n}_{m+1}, y_{n-1}, y_{n-1}, s_{n-1}, s_{n-1}]\,(s_n - y_{n-1})^2(s_n - s_{n-1}).$$
Now, substituting α n from (25) in Equation (7), we obtain the following parametric family of with memory methods, which we denote as PFWM2:
$$y_n = s_n - m\,\frac{\Omega(s_n)}{\Omega'(s_n) + \alpha_n\,\Omega(s_n)}, \qquad \alpha_n = -\frac{H_4^{(m+1)}(s_n)}{(m+1)\,\Omega^{(m)}(s_n)},$$
$$s_{n+1} = y_n - m\,W(t_n)\,\frac{\Omega(y_n)}{\Omega'(y_n) + \alpha_n\,\Omega(y_n)}. \qquad (26)$$
Thus, PFWM1 and PFWM2 preserve the efficiency of the corresponding without memory family (PFM) while increasing the convergence order from 5 to at least 7, i.e., an improvement of at least two orders.
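To see the self-acceleration at work, the sketch below (our own illustrative Python experiment, not from the paper) runs the family with the parameter updated at every step. For simplicity, and as a labelled assumption, we replace the Hermite-interpolant approximation $H_k^{(m+1)}(s_n)$ by the exact derivative ratio it approximates, i.e. $\alpha_n = -\Omega^{(m+1)}(s_n)/\big((m+1)\,\Omega^{(m)}(s_n)\big)$; this idealised accelerator satisfies the limit condition (22):

```python
import math

# Omega(s) = (s-2)^3 * exp(s): root xi = 2 with m = 3.
# By the Leibniz rule, Omega^(3) and Omega^(4) are available in closed form;
# using them directly is an idealisation of the Hermite-based alpha_n.
f   = lambda s: (s - 2.0)**3 * math.exp(s)
df  = lambda s: (s - 2.0)**2 * (s + 1.0) * math.exp(s)
d3f = lambda s: ((s - 2.0)**3 + 9.0*(s - 2.0)**2 + 18.0*(s - 2.0) + 6.0) * math.exp(s)
d4f = lambda s: ((s - 2.0)**3 + 12.0*(s - 2.0)**2 + 36.0*(s - 2.0) + 24.0) * math.exp(s)
m = 3

s = 2.5
for _ in range(4):                         # a few accelerated steps suffice
    fs, dfs = f(s), df(s)
    if fs == 0.0:
        break
    alpha = -d4f(s) / ((m + 1) * d3f(s))   # self-accelerating parameter
    y = s - m * fs / (dfs + alpha * fs)
    fy, dfy = f(y), df(y)
    if fy == 0.0:
        s = y
        break
    t = abs(dfy / dfs) ** (1.0 / (m - 1))
    s = y - m * (1.0 + t*t) * fy / (dfy + alpha * fy)
```

Because $\alpha_n \to -c_1$, the factor $(\alpha_n + c_1)$ in the error relation shrinks with $e_n$, which is the mechanism behind the jump from order 5 to order 7.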
Lemma 1.
If $\alpha_n = -\frac{H_3^{(m+1)}(s_n)}{(m+1)\,\Omega^{(m)}(s_n)}$, $n = 1, 2, 3, \ldots$, then the estimate
$$\alpha_n + c_1 \sim c_4\,e_{n-1,y}^2\,e_{n-1} \qquad (27)$$
holds, where $e_n = s_n - \xi$, $e_{n,y} = y_n - \xi$, and $c_4$ is an asymptotic constant.
Proof of Lemma 1.
The error of the Hermite interpolation can be expressed as follows:
$$\Omega(s) - H_3(s) = \frac{\Omega^{(m+4)}(\delta)}{(m+4)!}\,(s-s_n)^{m+1}(s-y_{n-1})^2(s-s_{n-1}). \qquad (28)$$
Now, after differentiating (28) $(m+1)$ times at the point $s = s_n$ and rearranging, we obtain the expression:
$$H_3^{(m+1)}(s_n) = \Omega^{(m+1)}(s_n) - \frac{\Omega^{(m+4)}(\delta)}{(m+4)!}\,(m+1)!\,(s_n-y_{n-1})^2(s_n-s_{n-1}). \qquad (29)$$
Now, for the multiple root $\xi$, we have the following Taylor series of the function $\Omega$ at the point $s_n$:
$$\Omega(s_n) = \frac{\Omega^{(m)}(\xi)}{m!}\left(e_n^m + c_1 e_n^{m+1} + c_2 e_n^{m+2} + c_3 e_n^{m+3} + O(e_n^{m+4})\right). \qquad (30)$$
Then, after differentiating $(m+1)$ times, we have
$$\Omega^{(m+1)}(s_n) = \frac{\Omega^{(m)}(\xi)}{m!}\left((m+1)!\,c_1 + (m+2)!\,c_2\,e_n + O(e_n^2)\right). \qquad (31)$$
Similarly,
$$\Omega^{(m+4)}(\delta) = \frac{\Omega^{(m)}(\xi)}{m!}\left((m+4)!\,c_4 + (m+5)!\,c_5\,e_\delta + O(e_\delta^2)\right), \qquad (32)$$
where $e_\delta = \delta - \xi$. Thus, using Equations (31) and (32) in (29) and excluding the terms in $e_n$ and $e_\delta$, we can obtain
$$H_3^{(m+1)}(s_n) \approx \frac{\Omega^{(m)}(\xi)}{m!}\,(m+1)!\,c_1 - \frac{\Omega^{(m)}(\xi)}{m!}\,c_4\,(m+1)!\,(s_n-y_{n-1})^2(s_n-s_{n-1}). \qquad (33)$$
After solving this, we have
$$H_3^{(m+1)}(s_n) \approx \Omega^{(m)}(\xi)\,(m+1)\left(c_1 - c_4\,(s_n-y_{n-1})^2(s_n-s_{n-1})\right), \qquad (34)$$
which implies
$$\frac{H_3^{(m+1)}(s_n)}{(m+1)\,\Omega^{(m)}(s_n)} \approx c_1 - c_4\,(s_n-y_{n-1})^2(s_n-s_{n-1}) \qquad (35)$$
or
$$\alpha_n \approx -c_1 + c_4\,(s_n-y_{n-1})^2(s_n-s_{n-1}). \qquad (36)$$
Finally, we have
$$\alpha_n + c_1 \sim c_4\,e_{n-1,y}^2\,e_{n-1}. \qquad (37)$$
This completes the proof for Lemma 1. □
Theorem 2.
Suppose $\Omega : \mathbb{C} \to \mathbb{C}$ is a function defined in a neighbourhood of a multiple root $\xi$ of $\Omega(s) = 0$ with multiplicity $m > 1$. If $s_0$ is sufficiently close to $\xi$, then the R-order of convergence of the iterative method (24) with the parameter $\alpha_n$ calculated by (23) is at least seven.
Proof of Theorem 2.
Let the iterative method (IM) generate the sequence $\{s_n\}$ converging to the root $\xi$ of $\Omega(s)$; by means of the R-order $O_R(IM, \xi) \ge r$, we can write
$$e_{n+1} \sim D_{n,r}\,e_n^{r} \qquad (38)$$
and
$$e_n \sim D_{n-1,r}\,e_{n-1}^{r}. \qquad (39)$$
Next, $D_{n,r}$ tends to the asymptotic error constant $D_r$ of IM as $n \to \infty$. Then,
$$e_{n+1} \sim D_{n,r}\left(D_{n-1,r}\,e_{n-1}^{r}\right)^{r} = D_{n,r}\,D_{n-1,r}^{r}\,e_{n-1}^{r^2}. \qquad (40)$$
The resulting error relations of the with memory scheme (24) can be obtained from (12) and (9) with the varying parameter $\alpha_n$ as follows:
$$e_{n,y} = y_n - \xi \sim \frac{\alpha_n + c_1}{m}\,e_n^2 \qquad (41)$$
and
$$e_{n+1} = s_{n+1} - \xi \sim \frac{(\alpha_n + c_1)^2}{6(m-1)m^4}\Big((1-m)\left(12mc_2 + \alpha_n^2\left(6 + W^{(3)}(0)\right)\right) + 2\alpha_n c_1\left(m\left(12 - W^{(3)}(0)\right) + W^{(3)}(0)\right) + c_1^2\left(6 + 6m^2 - m\left(12 + W^{(3)}(0)\right) + W^{(3)}(0)\right)\Big)e_n^5. \qquad (42)$$
Here, the higher-order terms in Equations (41) and (42) are excluded.
Now, let $p$ be the R-order of convergence of the iterative sequence $\{y_n\}$. Then,
$$e_{n,y} \sim D_{n,p}\,e_n^{p} \sim D_{n,p}\left(D_{n-1,r}\,e_{n-1}^{r}\right)^{p} = D_{n,p}\,D_{n-1,r}^{p}\,e_{n-1}^{rp} \qquad (43)$$
and
$$e_{n-1,y} \sim D_{n-1,p}\,e_{n-1}^{p}. \qquad (44)$$
Now, by Equations (37), (41) and (44), we obtain
$$e_{n,y} \sim \frac{\alpha_n + c_1}{m}\,e_n^2 \sim \frac{c_4\,e_{n-1,y}^2\,e_{n-1}}{m}\left(D_{n-1,r}\,e_{n-1}^{r}\right)^2 \sim \frac{c_4\left(D_{n-1,p}\,e_{n-1}^{p}\right)^2 e_{n-1}}{m}\,D_{n-1,r}^2\,e_{n-1}^{2r} = \frac{c_4\,D_{n-1,p}^2\,D_{n-1,r}^2}{m}\,e_{n-1}^{2p+2r+1}. \qquad (45)$$
Again, by (37), (42) and (44), we have
$$e_{n+1} \sim \frac{(\alpha_n + c_1)^2}{6(m-1)m^4}\,G_n\,e_n^5 \sim \frac{\left(c_4\,e_{n-1,y}^2\,e_{n-1}\right)^2}{6(m-1)m^4}\,G_n\left(D_{n-1,r}\,e_{n-1}^{r}\right)^5 \sim \frac{\left(c_4\left(D_{n-1,p}\,e_{n-1}^{p}\right)^2 e_{n-1}\right)^2}{6(m-1)m^4}\,G_n\left(D_{n-1,r}\,e_{n-1}^{r}\right)^5 = \frac{c_4^2\,D_{n-1,p}^4\,G_n\,D_{n-1,r}^5}{6(m-1)m^4}\,e_{n-1}^{4p+5r+2}, \qquad (46)$$
where $G_n$ originates from (42).
Since $r > p$, by equating the exponents of $e_{n-1}$ in the pairs of relations (43)–(45) and (40)–(46), we attain the resulting system of equations:
$$rp = 2r + 2p + 1, \qquad r^2 = 5r + 4p + 2. \qquad (47)$$
The positive solution of the system of Equation (47) is $r = 7$ and $p = 3$. As a result, the R-order of convergence of the with memory iterative method (24) is at least $r = 7$. □
Lemma 2.
If $\alpha_n = -\frac{H_4^{(m+1)}(s_n)}{(m+1)\,\Omega^{(m)}(s_n)}$, $n = 1, 2, 3, \ldots$, then the estimate
$$\alpha_n + c_1 \sim c_5\,e_{n-1,y}^2\,e_{n-1}^2 \qquad (48)$$
holds, where $e_n = s_n - \xi$, $e_{n,y} = y_n - \xi$, and $c_5$ is an asymptotic constant.
Proof of Lemma 2.
The error of the Hermite interpolation can be expressed as follows:
$$\Omega(s) - H_4(s) = \frac{\Omega^{(m+5)}(\delta)}{(m+5)!}\,(s-s_n)^{m+1}(s-y_{n-1})^2(s-s_{n-1})^2. \qquad (49)$$
Then, after differentiating (49) $(m+1)$ times at the point $s = s_n$ and rearranging, we obtain the expression below:
$$H_4^{(m+1)}(s_n) = \Omega^{(m+1)}(s_n) - \frac{\Omega^{(m+5)}(\delta)}{(m+5)!}\,(m+1)!\,(s_n-y_{n-1})^2(s_n-s_{n-1})^2. \qquad (50)$$
Now, for the multiple root $\xi$, we have the following Taylor series of the function $\Omega$ at the point $s_n$:
$$\Omega(s_n) = \frac{\Omega^{(m)}(\xi)}{m!}\left(e_n^m + c_1 e_n^{m+1} + c_2 e_n^{m+2} + c_3 e_n^{m+3} + O(e_n^{m+4})\right). \qquad (51)$$
Then, after differentiating $(m+1)$ times, we have
$$\Omega^{(m+1)}(s_n) = \frac{\Omega^{(m)}(\xi)}{m!}\left((m+1)!\,c_1 + (m+2)!\,c_2\,e_n + O(e_n^2)\right). \qquad (52)$$
Similarly,
$$\Omega^{(m+5)}(\delta) = \frac{\Omega^{(m)}(\xi)}{m!}\left((m+5)!\,c_5 + (m+6)!\,c_6\,e_\delta + O(e_\delta^2)\right), \qquad (53)$$
where $e_\delta = \delta - \xi$. Thus, using Equations (52) and (53) in (50) and excluding the terms in $e_n$ and $e_\delta$, we can obtain
$$H_4^{(m+1)}(s_n) \approx \frac{\Omega^{(m)}(\xi)}{m!}\,(m+1)!\,c_1 - \frac{\Omega^{(m)}(\xi)}{m!}\,c_5\,(m+1)!\,(s_n-y_{n-1})^2(s_n-s_{n-1})^2. \qquad (54)$$
After solving this, we have
$$H_4^{(m+1)}(s_n) \approx \Omega^{(m)}(\xi)\,(m+1)\left(c_1 - c_5\,(s_n-y_{n-1})^2(s_n-s_{n-1})^2\right), \qquad (55)$$
which implies
$$\frac{H_4^{(m+1)}(s_n)}{(m+1)\,\Omega^{(m)}(s_n)} \approx c_1 - c_5\,(s_n-y_{n-1})^2(s_n-s_{n-1})^2 \qquad (56)$$
or
$$\alpha_n \approx -c_1 + c_5\,(s_n-y_{n-1})^2(s_n-s_{n-1})^2. \qquad (57)$$
Finally, we have
$$\alpha_n + c_1 \sim c_5\,e_{n-1,y}^2\,e_{n-1}^2. \qquad (58)$$
This completes the proof for Lemma 2. □
Theorem 3.
Suppose $\Omega : \mathbb{C} \to \mathbb{C}$ is a function defined in a neighbourhood of a multiple root $\xi$ of $\Omega(s) = 0$ with multiplicity $m > 1$. If $s_0$ is sufficiently close to $\xi$, then the R-order of convergence of the iterative method (26) with the parameter $\alpha_n$ calculated by (25) is at least $(7+\sqrt{57})/2 \approx 7.2749$.
Proof of Theorem 3.
Let the iterative method (IM) generate the sequence $\{s_n\}$ converging to the root $\xi$ of $\Omega(s)$; by means of the R-order $O_R(IM, \xi) \ge r$, we can write
$$e_{n+1} \sim D_{n,r}\,e_n^{r} \qquad (59)$$
and
$$e_n \sim D_{n-1,r}\,e_{n-1}^{r}. \qquad (60)$$
Next, $D_{n,r}$ tends to the asymptotic error constant $D_r$ of IM as $n \to \infty$. Then,
$$e_{n+1} \sim D_{n,r}\left(D_{n-1,r}\,e_{n-1}^{r}\right)^{r} = D_{n,r}\,D_{n-1,r}^{r}\,e_{n-1}^{r^2}. \qquad (61)$$
The resulting error relations of the with memory scheme (26) can be obtained from (12) and (9) with the varying parameter $\alpha_n$ as follows:
$$e_{n,y} = y_n - \xi \sim \frac{\alpha_n + c_1}{m}\,e_n^2 \qquad (62)$$
and
$$e_{n+1} = s_{n+1} - \xi \sim \frac{(\alpha_n + c_1)^2}{6(m-1)m^4}\Big((1-m)\left(12mc_2 + \alpha_n^2\left(6 + W^{(3)}(0)\right)\right) + 2\alpha_n c_1\left(m\left(12 - W^{(3)}(0)\right) + W^{(3)}(0)\right) + c_1^2\left(6 + 6m^2 - m\left(12 + W^{(3)}(0)\right) + W^{(3)}(0)\right)\Big)e_n^5. \qquad (63)$$
Here, the higher-order terms in Equations (62) and (63) are excluded.
Now, let $p$ be the R-order of convergence of the iterative sequence $\{y_n\}$. Then,
$$e_{n,y} \sim D_{n,p}\,e_n^{p} \sim D_{n,p}\left(D_{n-1,r}\,e_{n-1}^{r}\right)^{p} = D_{n,p}\,D_{n-1,r}^{p}\,e_{n-1}^{rp} \qquad (64)$$
and
$$e_{n-1,y} \sim D_{n-1,p}\,e_{n-1}^{p}. \qquad (65)$$
Then, by Equations (58), (62) and (65), we obtain
$$e_{n,y} \sim \frac{\alpha_n + c_1}{m}\,e_n^2 \sim \frac{c_5\,e_{n-1,y}^2\,e_{n-1}^2}{m}\left(D_{n-1,r}\,e_{n-1}^{r}\right)^2 \sim \frac{c_5\left(D_{n-1,p}\,e_{n-1}^{p}\right)^2 e_{n-1}^2}{m}\,D_{n-1,r}^2\,e_{n-1}^{2r} = \frac{c_5\,D_{n-1,p}^2\,D_{n-1,r}^2}{m}\,e_{n-1}^{2p+2r+2}. \qquad (66)$$
Again, by (58), (63) and (65), we have
$$e_{n+1} \sim \frac{(\alpha_n + c_1)^2}{6(m-1)m^4}\,G_n\,e_n^5 \sim \frac{\left(c_5\,e_{n-1,y}^2\,e_{n-1}^2\right)^2}{6(m-1)m^4}\,G_n\left(D_{n-1,r}\,e_{n-1}^{r}\right)^5 \sim \frac{\left(c_5\left(D_{n-1,p}\,e_{n-1}^{p}\right)^2 e_{n-1}^2\right)^2}{6(m-1)m^4}\,G_n\left(D_{n-1,r}\,e_{n-1}^{r}\right)^5 = \frac{c_5^2\,D_{n-1,p}^4\,G_n\,D_{n-1,r}^5}{6(m-1)m^4}\,e_{n-1}^{4p+5r+4}, \qquad (67)$$
where $G_n$ originates from (63).
Since $r > p$, by equating the exponents of $e_{n-1}$ in the pairs of relations (64)–(66) and (61)–(67), we attain the resulting system of equations:
$$rp = 2r + 2p + 2, \qquad r^2 = 5r + 4p + 4. \qquad (68)$$
The positive solution of the system of Equation (68) is $r = (7+\sqrt{57})/2$ and $p = (5+\sqrt{57})/4$. As a result, the R-order of convergence of the with memory iterative method (26) is at least $r = (7+\sqrt{57})/2 \approx 7.2749$. □

3. Numerical Results

In this section, we assess the performance and computational efficiency of the newly developed families of methods PFM, PFWM1 and PFWM2, which were discussed in Section 2. As for the weight function W ( t n ) , we use the particular case given in Equation (18) throughout the whole computation. Additionally, we compare these methods with other similar approaches described in existing literature. The objective of this analysis is to provide a comprehensive evaluation of the proposed families of methods and to validate their theoretical results. By conducting this comparison, we aim to gain a better understanding of the effectiveness and practicality of these new methods and their potential utility for various applications.
Now, let us consider some existing methods for finding multiple roots available in literature for the comparison.
A subcase of the third-order method given by Kanwar et al. [7], which was developed in March 2023. The method is denoted as VKM, and its expression is given below:
$$s_{n+1} = s_n - \frac{2}{2 - L_f}\,\frac{m\,\Omega(s_n)}{\Omega'(s_n) - m\lambda\,\Omega(s_n)},$$
where λ = 1 , L f = m Ω ( s n ) Ω ( s n ) + m λ 2 Ω ( s n ) Ω ( s n ) 2 ( m 1 ) 2 m λ Ω ( s n ) Ω ( s n ) Ω ( s n ) m λ Ω ( s n ) 2 .
A particular case of the with memory methods from Equation (3), which was developed in February 2023 [11]. The method is denoted as XZM, and it is given below:
$$\alpha_n = -\frac{N_2^{(m+1)}(s_n)}{(m+1)\,\Omega^{(m)}(s_n)}, \qquad y_n = s_n - m\,\frac{\Omega(s_n)}{\Omega'(s_n) + \alpha_n\,\Omega(s_n)},$$
$$s_{n+1} = y_n - m\,u_n\,\frac{1 + 4u_n}{(1 + u_n)^2}\,\frac{\Omega(s_n)}{\Omega'(s_n) + 2\alpha_n\,\Omega(s_n)},$$
where $u_n = \left(\frac{\Omega(y_n)}{\Omega(s_n)}\right)^{\frac{1}{m}}$.
A subcase of the fifth-order methods given by Chanu–Panday–Dwivedi [14], which was developed in 2021. The method is denoted as CPDM, and it is given below:
y n = s n + m Ω ( s n ) Ω ( s n ) , z n = s n m 2 m Ω ( y n ) Ω ( s n ) , s n + 1 = z n m 1 2 m + 4 m 2 + 4 m 8 m 2 h + 4 m 2 2 m h 2 Ω ( z n ) Ω ( z n ) ,
where h = 2 m Ω ( s n ) Ω ( y n ) .
A subcase of the fifth-order methods given by Sharma–Arora [10], which was developed in 2021. The method is denoted as SAM, and it is given below:
y n = s n m Ω ( s n ) Ω ( s n ) , s n + 1 = y n m 1 + Ω ( y n ) Ω ( s n ) 1 m + Ω ( y n ) Ω ( s n ) 2 m 1 + Ω ( y n ) Ω ( s n ) 1 m Ω ( y n ) Ω ( y n ) .
A subcase of the fifth-order methods given by Singh–Arora–Jäntschi [15], which was developed in January 2023. The method is denoted as SAJM, and it is given below:
$$y_n = s_n - m\,\frac{\Omega(s_n)}{\Omega'(s_n)}, \qquad s_{n+1} = y_n - m\,(1 + v^2)\,\frac{\Omega(y_n)}{\Omega'(y_n)},$$
where $v = \frac{u}{1 + \lambda u}$, $u = \left(\frac{\Omega(y_n)}{\Omega(s_n)}\right)^{\frac{1}{m}}$, and $\lambda = 1$.
A subcase of the sixth-order methods given by Geum–Kim–Neta [8], which was developed in 2015. The method is denoted as GKNM, and it is given below:
$$y_n = s_n - m\,\frac{\Omega(s_n)}{\Omega'(s_n)}, \qquad s_{n+1} = y_n - \left(m + \frac{a_1 u}{(1 + b_1 u + b_2 u^2)^2}\right)\frac{1}{1 + 2(m-1)t}\,\frac{\Omega(y_n)}{\Omega'(y_n)},$$
where $u = \left(\frac{\Omega(y_n)}{\Omega(s_n)}\right)^{\frac{1}{m}}$, $t = \left(\frac{\Omega'(y_n)}{\Omega'(s_n)}\right)^{\frac{1}{m-1}}$, $a_1 = \frac{2m(4m^4 - 16m^3 + 31m^2 - 30m + 13)}{(m-1)(4m^2 - 8m + 7)}$, $b_1 = \frac{4(2m^2 - 4m + 3)}{(m-1)(4m^2 - 8m + 7)}$, $b_2 = \frac{4m^2 - 8m + 3}{4m^2 - 8m + 7}$.
We performed all numerical tests in Mathematica 12.2, which provides multiple-precision arithmetic. To ensure consistency across all test functions, we set the initial parameter value to α0 = 1 to initiate the iteration.
The test functions consisted of some academic and real-life engineering examples, each with multiple roots ( ξ ) and corresponding initial guesses ( s 0 ). These functions and their associated roots and initial guesses are listed below.
Example 1.
A standard academic test function given by
$$\Omega_1(s) = \left(4 + 3\sin(s-2) - s^2\right)^4.$$
It has a multiple root $\xi = 2$ with multiplicity $m = 4$. We used $s_0 = 2.2$ as the initial guess, and the results are displayed in Table 1.
Example 2.
A standard academic test function given by
$$\Omega_2(s) = \left(\sin^2 s + s\right)^5.$$
It has a multiple root $\xi = 0$ with multiplicity $m = 5$. We used $s_0 = 0.6$ as the initial guess, and the results are displayed in Table 2.
Example 3.
A standard academic test function given by
$$\Omega_3(s) = \left(e^{-s^2+s+3} - s + 2\right)^9.$$
It has a multiple root $\xi \approx 2.4905398276083051$ with multiplicity $m = 9$. We used $s_0 = 2.6$ as the initial guess, and the results are displayed in Table 3.
Example 4
(Manning equation for fluid dynamics [16]). The equation is given below:
Ω 4 ( s ) = tan 1 5 2 tan 1 s 2 1 + 6 tan 1 s 2 1 6 tan 1 1 2 5 6 11 63 7 .
It has the multiple roots ξ 1.8411294068501996 with multiplicity m = 7 . We used s 0 = 1.2 as the initial guess, and the results are displayed in Table 4.
Example 5
(Van der Waals equation of state [17]). The Van der Waals equation of state is a modification of the ideal gas law, which takes into account the forces of attraction between gas molecules and the finite size of the molecules themselves. It is given by the following equation:
$$\left(p + \frac{an^2}{v^2}\right)(v - nb) = nRT,$$
where p is the pressure, v is the volume, R is the gas constant, T is the temperature, n is the number of moles, a is a parameter that represents the strength of the intermolecular forces, and b is a parameter that represents the size of the molecules. This equation is useful for describing the behaviour of real gases, which deviate from the ideal gas law at high pressures and low temperatures.
To solve for the volume v in terms of the other parameters, we can rearrange the Van der Waals equation of state to obtain a cubic equation:
$$p\,v^3 - (nbp + nRT)\,v^2 + an^2\,v = abn^3.$$
Let us consider a gas with n = 0.1807 moles, a = 278.3 atm L^2 mol^-2, and b = 3.2104 L mol^-1, at a pressure of 1 atm and a temperature of 313 K. Using the universal gas constant R = 0.08206 L atm mol^-1 K^-1, we can substitute these values into the cubic equation above and solve for the volume v. Then, we have
$$\Omega_5(s) = s^3 - 5.22\,s^2 + 9.0825\,s - 5.2675,$$
where $s = v$, yielding the multiple root $\xi = 1.75$ with multiplicity $m = 2$. We used $s_0 = 2.5$ as the initial guess, and the results are displayed in Table 5.
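As a quick sanity check (our own illustrative Python sketch, not from the paper), the double root of this cubic can be recovered with the modified Newton–Raphson step (2) using m = 2; note that the cubic also has a nearby simple root at s = 1.72, so the multiplicity-aware step matters here:

```python
# Van der Waals cubic Omega_5(s) = (s - 1.75)^2 (s - 1.72):
# double root at 1.75, simple root at 1.72.
f  = lambda s: s**3 - 5.22*s**2 + 9.0825*s - 5.2675
df = lambda s: 3*s**2 - 10.44*s + 9.0825

s = 2.5                        # initial guess used in the paper
for _ in range(30):
    if f(s) == 0.0:
        break
    s -= 2 * f(s) / df(s)      # modified Newton step with m = 2
```

In double precision, cancellation in evaluating the cubic limits the attainable accuracy near the double root to roughly $10^{-6}$; the multi-precision runs reported in Table 5 do not have this limitation.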
Example 6
(Beam designing model [16]). Here, we deal with a beam that is positioned at an angle towards the edge of a cubic box. The length of the beam is denoted as “r” units, while each side of the box measures one unit. The beam is inclined in a manner such that one end touches the wall, while the other end touches the floor, as depicted in Figure 1.
The objective of this problem is to determine the distance between the base of the wall and the floor along the bottom of the beam. Let us assume that “y” is the distance between the beam and the floor, measured along the edge of the box. Similarly, “ s = x ” is the distance between the bottom of the box and the beam. For a specific value of “r”, the following equation holds true:
$$\Omega_6(s) = s^4 + 4s^3 - 24s^2 + 16s + 16.$$
The equation has a non-negative root, namely ξ = 2 , which has a multiplicity of m = 2 . We used s 0 = 1.2 as the initial guess, and the results are displayed in Table 6.
Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6 summarise the numerical results of the compared methods on the test examples. In these tables, we report the number of iterations (n) needed to converge to the multiple roots with a stopping criterion of
$$|s_n - s_{n-1}| + |\Omega(s_n)| < 10^{-150}, \qquad (80)$$
along with the estimated error in consecutive iterations | s n s n 1 | during the first three iterations. Additionally, we provide the absolute residual error of the function | Ω ( s n ) | and the computational order of convergence (COC), which we calculated using the following formula [18]:
COC = log | Ω ( s_n ) / Ω ( s_{n−1} ) | / log | Ω ( s_{n−1} ) / Ω ( s_{n−2} ) | .
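This formula can be evaluated directly from three consecutive residuals. Below is a minimal helper (the function name `coc` is ours, not from the paper), checked on a synthetic residual sequence that decays with exact order three:

```python
import math

# Computational order of convergence from three consecutive residuals
# |Omega(s_{n-2})|, |Omega(s_{n-1})|, |Omega(s_n)|.
def coc(r_prev2, r_prev1, r_curr):
    return math.log(r_curr / r_prev1) / math.log(r_prev1 / r_prev2)

# Synthetic residuals following r_{n+1} = r_n^3 exactly
# (1e-2 -> 1e-6 -> 1e-18), so the estimate must come out as 3.
print(coc(1e-2, 1e-6, 1e-18))  # ≈ 3.0
```

In practice the estimate is reliable only while the residuals are well above the rounding-error level of the arithmetic used, which is why the tables report COC from high-precision computations.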
The computational results presented in Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6 demonstrate that our proposed methods outperformed the existing methods in terms of both the error in consecutive iterations and the residual error, while requiring the fewest iterations to converge to the root once the stopping criterion (80) is satisfied. In Figure 2, we compare the methods based on the error in consecutive iterations, | s_n − s_{n−1} |, after the first three iterations. While the methods XZM, CPDM, SAM, SAJM, and GKNM produce better results than the method VKM, they do not always retain their theoretical order of convergence. In contrast, our proposed methods PFM, PFWM1, and PFWM2 achieved higher precision and maintained their order of convergence, thereby confirming the theoretical results. Overall, the proposed methods offer improved performance and accuracy compared with the existing methods.

4. Conclusions

We introduced new parametric families of two-point with- and without-memory methods for solving nonlinear equations with multiple zeros. These methods are based on the weight-function approach and a self-accelerating technique. Our proposed family of without-memory methods, PFM, achieves fifth-order convergence with two function and two derivative evaluations per iteration, while the with-memory methods PFWM1 and PFWM2 achieve R-orders of convergence of 7 and 7.2749, respectively, using a self-accelerating parameter. The use of an accelerating parameter not only improved the convergence order, but also increased the efficiency and accuracy. We verified our theoretical results through numerous numerical examples and compared our methods with existing methods. The results demonstrated the robustness and superior performance of the proposed methods, which required fewer iterations to converge to the root and produced minimal residual errors and minimal errors in consecutive iterations.

Author Contributions

Conceptualisation, G.T. and S.P.; methodology, G.T., S.P. and S.K.M.; software, G.T., S.K.M. and L.J.; validation, G.T. and S.P.; formal analysis, G.T., S.P., S.K.M. and L.J.; resources, G.T.; writing—original draft preparation, G.T., S.P. and S.K.M.; writing—review and editing, G.T., S.P., S.K.M. and L.J.; visualisation, G.T.; supervision, S.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the University Grants Commission (UGC) NET-JRF fellowship UGC-Ref. No.: 1187/(CSIR-UGC NET JUNE 2019).

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors gratefully acknowledge the University Grants Commission (UGC), New Delhi, India, for providing financial assistance to carry out this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Behl, R.; Salimi, M.; Ferrara, M.; Sharifi, S.; Alharbi, S.K. Some Real-Life Applications of a Newly Constructed Derivative Free Iterative Scheme. Symmetry 2019, 11, 239.
  2. Chanu, W.H.; Panday, S.; Thangkhenpau, G. Development of Optimal Iterative Methods with Their Applications and Basins of Attraction. Symmetry 2022, 14, 2020.
  3. Naseem, A.; Rehman, M.A.; Abdeljawad, T. A Novel Root-Finding Algorithm with Engineering Applications and Its Dynamics via Computer Technology. IEEE Access 2022, 10, 19677–19684.
  4. Panday, S.; Sharma, A.; Thangkhenpau, G. Optimal fourth and eighth-order iterative methods for non-linear equations. J. Appl. Math. Comput. 2023, 69, 953–971.
  5. Abdul-Hassan, N.Y.; Ali, A.H.; Park, C. A new fifth-order iterative method free from second derivative for solving nonlinear equations. J. Appl. Math. Comput. 2021, 68, 2877–2886.
  6. Thangkhenpau, G.; Panday, S. Optimal Eight Order Derivative-Free Family of Iterative Methods for Solving Nonlinear Equations. IAENG Int. J. Comput. Sci. 2023, 50, 335–341.
  7. Kanwar, V.; Cordero, A.; Torregrosa, J.R.; Rajput, M.; Behl, R. A New Third-Order Family of Multiple Root-Findings Based on Exponential Fitted Curve. Algorithms 2023, 16, 156.
  8. Geum, Y.H.; Kim, Y.I.; Neta, B. A class of two-point sixth-order multiple-zero finders of modified double-Newton type and their dynamics. Appl. Math. Comput. 2015, 270, 387–400.
  9. Behl, R. A Derivative Free Fourth-Order Optimal Scheme for Applied Science Problems. Mathematics 2022, 10, 1372.
  10. Sharma, J.R.; Arora, H. A Family of Fifth-Order Iterative Methods for Finding Multiple Roots of Nonlinear Equations. Numer. Anal. Appl. 2021, 14, 168–199.
  11. Zhou, X.; Liu, B. Iterative methods for multiple roots with memory using self-accelerating technique. J. Comput. Appl. Math. 2023, 428, 115181.
  12. Kanwar, V.; Bhatia, S.; Kansal, M. New optimal class of higher-order methods for multiple roots, permitting f′(xn) = 0. Appl. Math. Comput. 2013, 222, 564–574.
  13. Zafar, F.; Cordero, A.; Torregrosa, J.R. Stability analysis of a family of optimal fourth-order methods for multiple roots. Numer. Algor. 2019, 81, 947–981.
  14. Chanu, W.H.; Panday, S.; Dwivedi, M. New Fifth Order Iterative Method for Finding Multiple Root of Nonlinear Function. Eng. Lett. 2021, 29, 942–947.
  15. Singh, T.; Arora, H.; Jäntschi, L. A Family of Higher Order Scheme for Multiple Roots. Symmetry 2023, 15, 228.
  16. Sharma, J.R.; Kumar, D.; Cattani, C. An Efficient Class of Weighted-Newton Multiple Root Solvers with Seventh Order Convergence. Symmetry 2019, 11, 1054.
  17. Zafar, F.; Cordero, A.; Rizvi, D.E.Z.; Torregrosa, J.R. An optimal eighth order derivative free multiple root finding scheme and its dynamics. AIMS Math. 2023, 8, 8478–8503.
  18. Petković, M.S. Remarks on “On a general class of multipoint root-finding methods of high computational efficiency”. SIAM J. Numer. Anal. 2011, 49, 1317–1319.
Figure 1. Beam designing model [16].
Figure 2. Comparison of the methods based on the error in consecutive iterations, | s n s n 1 | , after the first three iterations.
Table 1. Comparison results of the methods for Ω 1 ( s ) .
Method   n   |s_1 − s_0|   |s_2 − s_1|        |s_3 − s_2|        |Ω(s_3)|            COC
VKM      6   0.33008       1.5207 × 10^−2     9.7604 × 10^−7     1.9584 × 10^−71     3.0000
XZM      6   0.34505       2.3864 × 10^−4     2.6415 × 10^−10    1.4426 × 10^−194    1.9972
CPDM     5   0.34522       6.5995 × 10^−5     1.5798 × 10^−22    1.1084 × 10^−436    5.0000
SAM      5   0.34504       2.5032 × 10^−4     1.5353 × 10^−19    1.4727 × 10^−376    5.0000
SAJM     5   0.34514       1.4499 × 10^−4     4.2323 × 10^−21    3.0144 × 10^−409    5.0000
GKNM     5   0.34517       1.2267 × 10^−4     1.7184 × 10^−24    1.3290 × 10^−568    6.0000
PFM      5   0.34528       1.3324 × 10^−5     5.4270 × 10^−27    6.3786 × 10^−530    5.0000
PFWM1    4   0.34528       1.3324 × 10^−5     5.3785 × 10^−35    4.0310 × 10^−957    7.0000
PFWM2    4   0.34528       1.3324 × 10^−5     4.9818 × 10^−35    3.2264 × 10^−958    7.0000
Table 2. Comparison results of the methods for Ω 2 ( s ) .
Method   n   |s_1 − s_0|   |s_2 − s_1|        |s_3 − s_2|        |Ω(s_3)|            COC
VKM      7   0.45485       1.4360 × 10^−1     1.5518 × 10^−3     2.2780 × 10^−44     3.0000
XZM      6   0.58388       1.6122 × 10^−2     2.3381 × 10^−9     3.8284 × 10^−107    3.0017
CPDM     5   0.59039       9.6112 × 10^−3     8.7442 × 10^−10    8.6888 × 10^−222    5.0000
SAM      5   0.59002       9.9759 × 10^−3     4.5553 × 10^−10    9.0715 × 10^−231    5.0000
SAJM     5   0.59138       8.6160 × 10^−3     9.0956 × 10^−11    2.9921 × 10^−250    5.0000
GKNM     6   0.59236       7.6391 × 10^−3     7.8345 × 10^−12    1.2782 × 10^−325    6.0000
PFM      5   0.59660       3.4028 × 10^−3     1.1488 × 10^−12    3.7027 × 10^−297    5.0000
PFWM1    5   0.59660       3.4028 × 10^−3     1.2404 × 10^−15    1.3157 × 10^−510    7.0000
PFWM2    5   0.59660       3.4028 × 10^−3     1.0346 × 10^−15    1.8921 × 10^−513    7.0000
Table 3. Comparison results of the methods for Ω 3 ( s ) .
Method   n   |s_1 − s_0|   |s_2 − s_1|        |s_3 − s_2|        |Ω(s_3)|            COC
VKM      6   0.10680       2.6650 × 10^−3     3.5814 × 10^−8     4.7372 × 10^−195    3.0000
XZM      7   0.14985       4.1261 × 10^−2     7.2813 × 10^−6     1.8238 × 10^−107    2.9992
CPDM     5   0.10774       1.7225 × 10^−3     2.0111 × 10^−12    9.8289 × 10^−504    5.0000
SAM      5   0.10948       1.6599 × 10^−4     2.2757 × 10^−15    1.6903 × 10^−684    3.9991
SAJM     5   0.10941       3.0955 × 10^−4     2.4209 × 10^−14    4.9884 × 10^−611    3.9991
GKNM     6   0.11076       3.5248 × 10^−3     1.5277 × 10^−7     3.9790 × 10^−178    7.4893
PFM      5   0.10931       1.5165 × 10^−4     5.5816 × 10^−19    2.6128 × 10^−810    5.0000
PFWM1    4   0.10931       1.5165 × 10^−4     2.0972 × 10^−24    1.6827 × 10^−1450   6.9999
PFWM2    4   0.10931       1.5165 × 10^−4     3.4644 × 10^−24    1.3666 × 10^−1444   7.0000
Table 4. Comparison results of the methods for Ω 4 ( s ) .
Method   n   |s_1 − s_0|   |s_2 − s_1|        |s_3 − s_2|        |Ω(s_3)|            COC
VKM      7   0.50271       1.3682 × 10^−1     1.5989 × 10^−3     1.1713 × 10^−62     3.0000
XZM      6   0.70523       6.5936 × 10^−2     2.8688 × 10^−4     1.0602 × 10^−75     2.0024
CPDM     5   0.64191       1.0894 × 10^−3     3.6165 × 10^−17    1.0565 × 10^−589    5.0000
SAM      5   0.63813       3.5058 × 10^−3     2.7647 × 10^−13    2.2144 × 10^−375    3.9978
SAJM     5   0.63791       4.6433 × 10^−3     8.5166 × 10^−13    1.0630 × 10^−361    3.9978
GKNM     6   0.44125       5.4541 × 10^−4     6.4964 × 10^−12    1.5113 × 10^−247    3.0028
PFM      5   0.64075       3.7793 × 10^−4     1.4359 × 10^−19    1.9352 × 10^−674    5.0000
PFWM1    4   0.64075       3.7793 × 10^−4     2.0608 × 10^−25    1.1797 × 10^−1215   7.0000
PFWM2    4   0.64075       3.7793 × 10^−4     1.2603 × 10^−25    3.5869 × 10^−1229   7.0000
Table 5. Comparison results of the methods for Ω 5 ( s ) .
Method   n   |s_1 − s_0|   |s_2 − s_1|        |s_3 − s_2|        |Ω(s_3)|            COC
VKM      9   0.54909       1.7090 × 10^−1     2.8013 × 10^−2     1.2682 × 10^−7      3.0000
XZM      6   0.71596       3.3325 × 10^−2     7.1065 × 10^−4     3.0773 × 10^−23     5.0000
CPDM     6   0.70133       4.8193 × 10^−2     4.7547 × 10^−4     3.0603 × 10^−28     6.0000
SAM      6   0.68022       6.7340 × 10^−2     2.4441 × 10^−3     2.0299 × 10^−17     5.0000
SAJM     6   0.68454       6.3596 × 10^−2     1.8660 × 10^−3     4.8728 × 10^−19     5.0000
GKNM     6   0.68637       6.1983 × 10^−2     1.6473 × 10^−3     1.0720 × 10^−20     6.0000
PFM      6   0.73275       1.7174 × 10^−2     7.3647 × 10^−5     5.5190 × 10^−32     5.0000
PFWM1    5   0.73277       1.7235 × 10^−2     1.2844 × 10^−5     4.6789 × 10^−53     7.0000
PFWM2    5   0.73275       1.7235 × 10^−2     1.2844 × 10^−5     4.6789 × 10^−53     7.0000
Table 6. Comparison results of the methods for Ω 6 ( s ) .
Method   n   |s_1 − s_0|   |s_2 − s_1|        |s_3 − s_2|        |Ω(s_3)|            COC
VKM      7   0.71037       8.9411 × 10^−2     2.1557 × 10^−4     2.5102 × 10^−22     3.0000
XZM      6   0.79736       2.6375 × 10^−3     1.7127 × 10^−8     2.7065 × 10^−82     1.9986
CPDM     5   0.82797       2.7966 × 10^−2     1.8836 × 10^−12    8.5982 × 10^−145    6.0000
SAM      5   0.76907       3.0928 × 10^−2     3.9147 × 10^−10    3.4391 × 10^−97     5.0000
SAJM     6   0.48555       3.1425 × 10^−1     1.9359 × 10^−4     1.0868 × 10^−39     5.0000
GKNM     6   1.5402        7.3961 × 10^−1     5.6854 × 10^−4     2.7009 × 10^−41     6.0000
PFM      5   0.80258       2.5811 × 10^−3     1.6437 × 10^−15    7.0888 × 10^−151    5.0000
PFWM1    5   0.80274       2.5811 × 10^−3     2.7609 × 10^−21    4.6947 × 10^−292    7.0000
PFWM2    5   0.80258       2.5811 × 10^−3     2.7609 × 10^−21    4.6947 × 10^−292    7.0000