
Modified Optimal Class of Newton-Like Fourth-Order Methods for Multiple Roots

by Munish Kansal 1, Ramandeep Behl 2,*, Mohammed Ali A. Mahnashi 2 and Fouad Othman Mallawi 2
1 School of Mathematics, Thapar Institute of Engineering and Technology, Patiala 147004, India
2 Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(4), 526; https://doi.org/10.3390/sym11040526
Submission received: 24 February 2019 / Revised: 31 March 2019 / Accepted: 1 April 2019 / Published: 11 April 2019
(This article belongs to the Special Issue Symmetry with Operator Theory and Equations)

Abstract: Here, we propose optimal fourth-order iterative methods for approximating multiple zeros of univariate functions. The proposed family is composed of two stages and requires three functional values at each iteration. We also present an extensive convergence analysis that establishes the fourth-order convergence of the developed methods. It is interesting to note that some existing schemes are special cases of our proposed scheme. Numerical experiments have been performed on a good number of problems arising from different disciplines, such as the fractional conversion problem of a chemical reactor, the continuous stirred tank reactor problem, and Planck's radiation law problem. The computational results demonstrate that the suggested methods are more efficient than their existing counterparts.

1. Introduction

The importance of solving nonlinear problems is justified by numerous physical and technical applications over the past decades. Such problems arise in many areas of science and engineering, and their analytical solutions are rarely available, so several numerical techniques are used to obtain approximate solutions. When we discuss iterative solvers for obtaining multiple roots with known multiplicity m ≥ 1 of scalar equations of the type g(x) = 0, where g : D ⊆ ℝ → ℝ, the modified Newton technique [1,2] (also known as Rall's method) is the most popular and classical iterative scheme, which is defined by
x_{s+1} = x_s - m \frac{g(x_s)}{g'(x_s)}, \quad s = 0, 1, 2, \ldots   (1)
Given the multiplicity m ≥ 1 in advance, it converges quadratically to multiple roots. However, the modified Newton method may fail miserably if the initial estimate x_0 is far away from the required root or if the value of the first-order derivative is very small in the neighborhood of the needed root. In order to overcome this problem, Kanwar et al. [3] considered the following one-point iterative technique:
x_{s+1} = x_s - \frac{m g(x_s)}{g'(x_s) - \lambda g(x_s)}.   (2)
One can find the classical Newton’s formula for λ = 0 and m = 1 in (2). The method (2) satisfies the following error equation:
e_{s+1} = \frac{c_1 - \lambda}{m} e_s^2 + O(e_s^3),   (3)
where e_s = x_s - \alpha and c_j = \frac{m!}{(m+j)!}\,\frac{g^{(m+j)}(\alpha)}{g^{(m)}(\alpha)}, j = 1, 2, 3, \ldots. Here, α is a multiple root of g(x) = 0 having multiplicity m.
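As a minimal illustration of these one-point schemes, the modified Newton iteration (1) can be sketched in Python. The test function below, with a zero of multiplicity 4 at x = 2, is our own choice (it anticipates Example 1); the stopping criteria are a standard assumption, not part of the original method description.

```python
# Modified Newton (Rall's) iteration: x_{s+1} = x_s - m*g(x_s)/g'(x_s).
# Illustrative sketch; g below has a zero of multiplicity m = 4 at x = 2.
def modified_newton(g, dg, x, m, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        gx, dgx = g(x), dg(x)
        if gx == 0 or dgx == 0:   # landed exactly on the root (or a stationary point)
            break
        step = m * gx / dgx
        x -= step
        if abs(step) < tol:
            break
    return x

g  = lambda x: (x - 2.0)**4 * (x + 1.0)
dg = lambda x: 4.0*(x - 2.0)**3 * (x + 1.0) + (x - 2.0)**4

root = modified_newton(g, dg, x=0.5, m=4)
```

With m = 1 this reduces to classical Newton; omitting the multiplicity factor m would degrade convergence to first order at a multiple zero.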
One-point methods are not of practical interest because of their theoretical limitations regarding convergence order and efficiency index. Therefore, multipoint iterative functions are better candidates for efficient solvers. An attractive feature of multipoint methods without memory for scalar equations is that they are covered by a conjecture on the attainable order of convergence (for more information, please see [2]). A large community of researchers worldwide has therefore turned toward this most prominent class of multipoint iterative methods and proposed various optimal fourth-order methods (requiring three functional values at each iteration) [4,5,6,7,8,9,10] and non-optimal methods [11,12] for approximating multiple zeros of nonlinear functions.
In 2013, Zhou et al. [13] presented a family of fourth-order optimal iterative methods, defined as follows:
w_s = x_s - m \frac{g(x_s)}{g'(x_s)}, \quad x_{s+1} = w_s - m \frac{g(x_s)}{g'(x_s)} Q(u_s),   (4)
where u_s = \left(\frac{g(w_s)}{g(x_s)}\right)^{1/m} and Q : ℂ → ℂ is a weight function. The above family (4) requires two function and one derivative evaluations per full iteration.
Lee et al. [14] suggested an optimal fourth-order scheme, which is given by
w_s = x_s - m \frac{g(x_s)}{g'(x_s) + \lambda g(x_s)}, \quad x_{s+1} = w_s - m H_g(u_s) \frac{g(x_s)}{g'(x_s) + 2\lambda g(x_s)},   (5)
where u_s = \left(\frac{g(w_s)}{g(x_s)}\right)^{1/m}, H_g(u_s) = \frac{u_s\left(1 + (c+2)u_s + r u_s^2\right)}{1 + c u_s}, and λ, c, and r are free disposable parameters.
Very recently, Zafar et al. [15] proposed another class of optimal methods for multiple zeros, defined by
w_s = x_s - m \frac{g(x_s)}{g'(x_s) + a_1 g(x_s)}, \quad x_{s+1} = w_s - m u_s H(u_s) \frac{g(x_s)}{g'(x_s) + 2a_1 g(x_s)},   (6)
where u_s = \left(\frac{g(w_s)}{g(x_s)}\right)^{1/m} and a_1 ∈ ℝ. It can be seen that the family (5) is a particular case of (6).
We are interested in presenting a new optimal class of parametric iterative methods of fourth-order convergence that exploits the weight-function technique for computing multiple zeros. Our proposed scheme requires only three function evaluations, g(x_s), g'(x_s), and g(w_s), at each iteration, which is in accordance with the classical Kung-Traub conjecture. It is also interesting to note that the optimal fourth-order families (5) and (6) can be considered as special cases of our scheme for particular values of the free parameters. Therefore, the new scheme can be treated as a more general family for approximating multiple zeros of nonlinear functions. Furthermore, we show that the proposed scheme agrees well with the numerical results and offers smaller residual errors in the estimation of multiple zeros.
Our presentation unfolds as follows. The new fourth-order scheme and its convergence analysis are presented in Section 2, together with several particular cases based on different choices of the weight function employed at the second step of the designed family. Section 3 is dedicated to numerical experiments, which illustrate the efficiency and accuracy of the scheme in multi-precision arithmetic on some complicated real-life problems. Section 4 presents the conclusions.

2. Construction of the Family

Here, we suggest a new fourth-order optimal scheme for finding multiple roots with known multiplicity m ≥ 1. We present the two-stage scheme as follows:
w_s = x_s - m \frac{g(x_s)}{g'(x_s) + \lambda_1 g(x_s)}, \quad x_{s+1} = w_s - m u_s Q(t_s) \frac{g(x_s)}{g'(x_s) + \lambda_2 g(x_s)},   (7)
where Q : ℂ → ℂ is a weight function, holomorphic in a neighborhood of the origin, with u_s = \left(\frac{g(w_s)}{g(x_s)}\right)^{1/m} and t_s = \frac{u_s}{a_1 + a_2 u_s}, and where λ_1, λ_2, a_1, and a_2 are free parameters.
In the following Theorem 1, we illustrate how to construct the weight function Q so that the scheme attains fourth order without consuming any extra functional values.
Theorem 1.
Let us assume that g : ℂ → ℂ is a holomorphic function in a region containing a multiple zero x = α with multiplicity m ≥ 1. Then, for a given initial guess x_0 sufficiently close to α, the iterative expression (7) reaches fourth-order convergence when it satisfies
Q(0) = 1, \quad Q'(0) = 2a_1, \quad \lambda_2 = 2\lambda_1, \quad \text{and} \quad |Q''(0)| < \infty.   (8)
Proof. 
Let us assume that x = α is a multiple zero of g(x) with known multiplicity m ≥ 1. Adopting Taylor's series expansions of g(x_s) and g'(x_s) about α, we obtain
g(x_s) = \frac{g^{(m)}(\alpha)}{m!} e_s^m \left(1 + c_1 e_s + c_2 e_s^2 + c_3 e_s^3 + c_4 e_s^4 + O(e_s^5)\right)   (9)
and
g'(x_s) = \frac{g^{(m)}(\alpha)}{m!} e_s^{m-1} \left(m + (m+1)c_1 e_s + (m+2)c_2 e_s^2 + (m+3)c_3 e_s^3 + (m+4)c_4 e_s^4 + O(e_s^5)\right),   (10)
respectively. Here, e_s = x_s - \alpha and c_j = \frac{m!}{(m+j)!}\,\frac{g^{(m+j)}(\alpha)}{g^{(m)}(\alpha)}, j = 1, 2, 3, \ldots.
From Equations (9) and (10), we obtain
\frac{g(x_s)}{g'(x_s) + \lambda_1 g(x_s)} = \frac{e_s}{m} - \frac{c_1 + \lambda_1}{m^2} e_s^2 + \frac{(1+m)c_1^2 - 2mc_2 + 2c_1\lambda_1 + \lambda_1^2}{m^3} e_s^3 + \frac{L_1}{m^4} e_s^4 + O(e_s^5),   (11)
where L_1 = -(1+m)^2 c_1^3 + m(4+3m)c_1c_2 - 3m^2c_3 - (3+2m)c_1^2\lambda_1 + 4mc_2\lambda_1 - 3c_1\lambda_1^2 - \lambda_1^3.
Now, substituting (11) in the first substep of scheme (7), we get
w_s - \alpha = \frac{c_1 + \lambda_1}{m} e_s^2 - \frac{(1+m)c_1^2 - 2mc_2 + 2c_1\lambda_1 + \lambda_1^2}{m^2} e_s^3 - \frac{L_1}{m^3} e_s^4 + O(e_s^5).   (12)
Using Taylor's series again, we obtain
g(w_s) = g^{(m)}(\alpha)\, e_s^{2m} \left(\frac{c_1+\lambda_1}{m}\right)^m \left[\frac{1}{m!} - \frac{(1+m)c_1^2 - 2mc_2 + 2c_1\lambda_1 + \lambda_1^2}{m!\,(c_1+\lambda_1)}\, e_s + \frac{1}{m!}\left(\frac{c_1(c_1+\lambda_1)}{m} + \frac{B_1}{2m(c_1+\lambda_1)^2} + \frac{B_2}{m(c_1+\lambda_1)}\right) e_s^2 + O(e_s^3)\right],   (13)
where
B_1 = (m-1)\left((1+m)c_1^2 - 2mc_2 + 2c_1\lambda_1 + \lambda_1^2\right)^2, \quad B_2 = (1+m)^2c_1^3 + 3m^2c_3 + (3+2m)c_1^2\lambda_1 - 4mc_2\lambda_1 + \lambda_1^3 - c_1\left(m(4+3m)c_2 - 3\lambda_1^2\right).   (14)
Moreover,
u_s = \frac{c_1+\lambda_1}{m} e_s - \frac{(2+m)c_1^2 - 2mc_2 + 3c_1\lambda_1 + \lambda_1^2}{m^2} e_s^2 + \frac{\gamma_1}{2m^3} e_s^3 + O(e_s^4),   (15)
where
\gamma_1 = (7 + 7m + 2m^2)c_1^3 + 5(3+m)c_1^2\lambda_1 - 2c_1\left(m(7+3m)c_2 - 5\lambda_1^2\right) + 2\left(3m^2c_3 - 5mc_2\lambda_1 + \lambda_1^3\right).
Now, using the above expression (15), we get
t_s = \frac{c_1+\lambda_1}{m a_1} e_s + \sum_{j=1}^{2} \Theta_j e_s^{j+1} + O(e_s^4),   (16)
where \Theta_j = \Theta_j(a_1, a_2, m, c_1, c_2, c_3, c_4).
Since t_s = \frac{u_s}{a_1 + a_2 u_s} = O(e_s), it suffices to expand the weight function Q(t_s) about the origin by Taylor's series up to the third-order term as follows:
Q(t_s) \approx Q(0) + Q'(0) t_s + \frac{1}{2!} Q''(0) t_s^2 + \frac{1}{3!} Q^{(3)}(0) t_s^3,   (17)
where Q^{(k)} represents the k-th derivative.
Adopting the expressions (9)–(17) in (7), we have
e_{s+1} = \frac{\Omega_1}{m} e_s^2 + \frac{\Omega_2}{m^2 a_1} e_s^3 + \frac{\Omega_3}{2m^3 a_1^2} e_s^4 + O(e_s^5),   (18)
where
\Omega_1 = \left(1 - Q(0)\right)(c_1 + \lambda_1), \quad \Omega_2 = \left(-a_1(1+m) + a_1(3+m)Q(0) - Q'(0)\right)c_1^2 - 2a_1 m\left(Q(0) - 1\right)c_2 + c_1\left(\left(-2a_1 + 4a_1 Q(0) - 2Q'(0)\right)\lambda_1 + a_1 Q(0)\lambda_2\right) + \lambda_1\left(\left(a_1(Q(0) - 1) - Q'(0)\right)\lambda_1 + a_1 Q(0)\lambda_2\right).   (19)
It is clear from the error Equation (18) that, in order to have at least fourth-order convergence, the coefficients of e_s^2 and e_s^3 must vanish simultaneously. The condition Ω_1 = 0 gives Q(0) = 1. Therefore, inserting Q(0) = 1 in (19), we have
\Omega_2 = (c_1 + \lambda_1)\left(\left(2a_1 - Q'(0)\right)c_1 - Q'(0)\lambda_1 + a_1\lambda_2\right).   (20)
Similarly, Ω_2 = 0 implies that Q'(0) = 2a_1 and λ_2 = 2λ_1.
Finally, using Equations (19) and (20) in the proposed scheme (7), we have
e_{s+1} = \frac{(c_1+\lambda_1)\left[\left(4a_1a_2 + a_1^2(9+m) - Q''(0)\right)c_1^2 - 2a_1^2 m c_2 + 2\left(7a_1^2 + 4a_1a_2 - Q''(0)\right)c_1\lambda_1 + \left(4a_1(a_1+a_2) - Q''(0)\right)\lambda_1^2\right]}{2a_1^2 m^3}\, e_s^4 + O(e_s^5).   (21)
The consequence of the above error analysis is that the family (7) acquires fourth-order convergence by consuming only three functional values (viz. g(x_s), g'(x_s), and g(w_s)) per full iteration. Hence, the proof is completed. □

Some Particular Cases of the Suggested Class

We now suggest some interesting particular cases of (7), obtained by choosing different forms of the weight function Q(t_s) that satisfy the constraints of Theorem 1.
Let us first consider the following optimal class of fourth-order methods, obtained by choosing the weight function directly from Theorem 1:
w_s = x_s - m \frac{g(x_s)}{g'(x_s) + \lambda_1 g(x_s)}, \quad x_{s+1} = w_s - m u_s \left(1 + 2a_1 t_s + \frac{1}{2} t_s^2 Q''(0) + \frac{1}{3!} t_s^3 Q^{(3)}(0)\right) \frac{g(x_s)}{g'(x_s) + 2\lambda_1 g(x_s)},   (22)
where t_s = \frac{u_s}{a_1 + a_2 u_s}, and λ_1, a_1, a_2, Q''(0), and Q^{(3)}(0) are free disposable parameters.
Subcases of the given scheme (22):
  • Assuming Q(t_s) = 1 + 2a_1 t_s + \frac{\mu}{2} t_s^2 in expression (22), we obtain
    w_s = x_s - m \frac{g(x_s)}{g'(x_s) + \lambda_1 g(x_s)}, \quad x_{s+1} = w_s - m u_s \left(1 + 2a_1 t_s + \frac{\mu}{2} t_s^2\right) \frac{g(x_s)}{g'(x_s) + 2\lambda_1 g(x_s)},   (23)
    where μ ∈ ℝ.
  • Considering the weight function Q(t_s) = 1 + \alpha_1 t_s + \alpha_2 t_s^2 + \frac{\alpha_3 t_s^2}{1 + \alpha_4 t_s + \alpha_5 t_s^2} in expression (22), one gets
    w_s = x_s - m \frac{g(x_s)}{g'(x_s) + \lambda_1 g(x_s)}, \quad x_{s+1} = w_s - m u_s \left(1 + \alpha_1 t_s + \alpha_2 t_s^2 + \frac{\alpha_3 t_s^2}{1 + \alpha_4 t_s + \alpha_5 t_s^2}\right) \frac{g(x_s)}{g'(x_s) + 2\lambda_1 g(x_s)},   (24)
    where α_1 = 2a_1 and α_2, α_3, α_4, and α_5 are free parameters.
    Case 2A: Substituting α_2 = α_3 = 1, α_4 = 15, and α_5 = 10 in (24), we obtain
    w_s = x_s - m \frac{g(x_s)}{g'(x_s) + \lambda_1 g(x_s)}, \quad x_{s+1} = w_s - m u_s \left(1 + 2a_1 t_s + t_s^2 + \frac{t_s^2}{1 + 15 t_s + 10 t_s^2}\right) \frac{g(x_s)}{g'(x_s) + 2\lambda_1 g(x_s)}.   (25)
    Case 2B: Substituting α_2 = α_3 = 1, α_4 = 2, and α_5 = 1 in (24), we have
    w_s = x_s - m \frac{g(x_s)}{g'(x_s) + \lambda_1 g(x_s)}, \quad x_{s+1} = w_s - m u_s \left(1 + 2a_1 t_s + t_s^2 + \frac{t_s^2}{1 + 2 t_s + t_s^2}\right) \frac{g(x_s)}{g'(x_s) + 2\lambda_1 g(x_s)}.   (26)
Remark 1.
It is worth mentioning here that the family (6) can be recovered as a special case of the proposed scheme (22) for a_1 = 1 and a_2 = 0.
Remark 2.
Furthermore, it is worth recording that the weight function Q(t_s) plays a significant role in the development of fourth-order schemes. Therefore, it is customary to display different choices of weight functions, provided they satisfy all the constraints of Theorem 1. Hence, we have mentioned above some special cases of new fourth-order schemes, (23)–(26), having simple body structures so that they can be easily implemented in numerical experiments.

3. Numerical Experiments

Here, we examine the computational behaviour of the following methods: expression (23) with (a_1 = 1, a_2 = 1, λ_1 = 0, μ = 13) and expression (25) with (a_1 = 1, a_2 = 1, λ_1 = 0), denoted by (MM1) and (MM2), respectively, alongside some existing techniques of the same convergence order.
In this regard, we consider several test functions coming from real-life problems and linear algebra, depicted in Examples 1–5. We contrast them with existing optimal fourth-order methods, namely method (6) given by Zafar et al. [15] with H(u_s) = 1 + 2u_s + \frac{k}{2}u_s^2 for k = 11 and a_1 = 0, denoted by (ZM). Also, family (5) proposed by Lee et al. [14] is compared by taking H_g(u_s) = u_s(1 + u_s)^2 for (c = 0, λ = m/2, r = 1), and H_g(u_s) = \frac{u_s(1 - u_s^2)}{1 - 2u_s} for (c = -2, λ = m/2, r = -1). We denote these methods by (LM1) and (LM2), respectively.
We compare our iterative methods with the existing optimal fourth-order methods on the basis of the approximated roots x_s, the residual error |g(x_s)| of the considered function, the absolute error |x_{s+1} - x_s| between two consecutive iterations, and the estimate of the asymptotic error constant according to the formula \frac{|x_{s+1} - x_s|}{|x_s - x_{s-1}|^4}; the results are depicted in Tables 1–5. In order to minimize round-off errors, we considered 4096 significant digits. The whole numerical work has been carried out with the Mathematica 7 programming package. In Tables 1–5, k_1(\pm k_2) stands for k_1 \times 10^{\pm k_2}.
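The fourth-order behaviour of (MM1) can also be double-checked outside Mathematica. The following sketch (our own, using Python's decimal module for extended precision, not the code used for the tables) runs (MM1) on the function g_1(x) = (x - 2)^4(x + 1) of Example 1 and estimates the computational order of convergence ln(e_{n+1}/e_n)/ln(e_n/e_{n-1}):

```python
import math
from decimal import Decimal, getcontext

getcontext().prec = 250  # extended precision, mimicking the multi-precision runs

# g1(x) = (x - 2)^4 (x + 1) from Example 1: multiple zero at x = 2, m = 4.
g  = lambda x: (x - 2)**4 * (x + 1)
dg = lambda x: 4*(x - 2)**3*(x + 1) + (x - 2)**4

def mm1_step(x, m=4):
    """One step of family (23) with a1 = a2 = 1, lambda1 = 0, mu = 13 (method MM1)."""
    w = x - m * g(x) / dg(x)             # first substep
    u = (g(w) / g(x)).sqrt().sqrt()      # u_s = (g(w)/g(x))^(1/m); here m = 4
    t = u / (1 + u)                      # t_s = u_s/(a1 + a2*u_s)
    q = 1 + 2*t + Decimal("6.5")*t**2    # weight Q(t) = 1 + 2*a1*t + (mu/2)*t^2
    return w - m * u * q * g(x) / dg(x)  # second substep

x = Decimal("0.5")                       # starting guess of Table 1
errs = []
for _ in range(5):
    x = mm1_step(x)
    errs.append(abs(x - 2))

# computational order of convergence from the last three errors
e1, e2, e3 = (float(e) for e in errs[-3:])
rho = math.log(e3 / e2) / math.log(e2 / e1)
```

The estimated order rho comes out close to 4, and the first iterates reproduce the (MM1) column of Table 1.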
Example 1.
We assume a 5 × 5 matrix, which is given by
A = \begin{pmatrix} 29 & 14 & 2 & 6 & 9 \\ 47 & 22 & 1 & 11 & 13 \\ 19 & 10 & 5 & 4 & 8 \\ 19 & 10 & 3 & 2 & 8 \\ 7 & 4 & 3 & 1 & 3 \end{pmatrix}.
Its characteristic polynomial is
g_1(x) = (x-2)^4(x+1).
It is straightforward to see that the function g_1(x) has a multiple zero at x = 2 of multiplicity four.
The computational comparisons depicted in Table 1 illustrate that the new methods (MM1), (MM2), and (ZM) give better results in terms of precision in the calculation of the multiple zero of g_1(x). On the other hand, the methods (LM1) and (LM2) fail to converge.
Example 2.
(Chemical reactor problem):
We assume the following function (for more details, please see [16]):
g_2(x) = -5\ln\frac{0.4(1-x)}{0.4 - 0.5x} + \frac{x}{1-x} + 4.45977.   (27)
The variable x serves as the fractional conversion of the species B in the chemical reactor. The expression (27) has no physical meaning for x < 0 or x > 1. Therefore, we look for a bounded solution in the interval 0 ≤ x ≤ 1, and the approximated zero is α ≈ 0.757396246253753879459641297929.
We can see from Table 2 that the new methods possess the smallest residual errors and the smallest differences between consecutive approximations in comparison to the existing ones. Moreover, the computational order of convergence coincides with the theoretical one in each case.
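The bounded solution of (27) can be reproduced with a few steps of the classical Newton iteration, used here as a simple stand-in for the fourth-order methods of Table 2; the analytic derivative below is our own computation, not taken from [16].

```python
import math

# Fractional-conversion function (27); its zero in (0, 0.8) is the physically
# meaningful solution (both 1 - x and 0.4 - 0.5x must stay positive).
def g2(x):
    return -5.0 * math.log(0.4 * (1.0 - x) / (0.4 - 0.5 * x)) \
           + x / (1.0 - x) + 4.45977

def dg2(x):
    # derivative of (27): log term and x/(1-x) differentiated separately
    return 5.0 / (1.0 - x) - 2.5 / (0.4 - 0.5 * x) + 1.0 / (1.0 - x)**2

x = 0.76                      # starting guess used in Table 2
for _ in range(20):           # Newton iteration x <- x - g2(x)/g2'(x)
    step = g2(x) / dg2(x)
    x -= step
    if abs(step) < 1e-15:
        break
```

The iterates stay inside the physical interval and settle on the zero quoted above to machine accuracy.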
Example 3.
(Continuous stirred tank reactor (CSTR)):
In our third example, we consider a problem of a continuous stirred tank reactor (CSTR). We observe the following reaction scheme that develops in the chemical reactor (see [17] for more information):
K_1 + P \rightarrow K_2, \quad K_2 + P \rightarrow K_3, \quad K_3 + P \rightarrow K_4, \quad K_4 + P \rightarrow K_5,   (28)
where the components P and K_1 are fed at rates q − Q and Q, respectively, to the chemical reactor. The above model was studied in detail by Douglas [18] in order to find a good and simple system that can control the feedback problem. Finally, he transferred the model to the following mathematical expression:
K_H \frac{2.98(t + 2.25)}{(t + 1.45)(t + 4.35)(t + 2.85)^2} = -1,   (29)
where K_H denotes the gain of the proportional controller. The suggested control system is stable for appropriate values of K_H. If we assume K_H = 0, we obtain the poles of the open-loop transfer function as the zeros of the following univariate equation:
g_3(x) = x^4 + 11.50x^3 + 47.49x^2 + 83.06325x + 51.23266875 = 0,   (30)
whose zeros are x = −1.45, −4.35, −2.85, and −2.85. Thus, g_3 has one multiple root, x = −2.85, of known multiplicity 2. The computational results for Example 3 are displayed in Table 3.
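A quick self-contained check (our own, not part of [18]) confirms that the quartic g_3 factors as (x + 1.45)(x + 4.35)(x + 2.85)^2, so x = −2.85 is indeed a double root:

```python
# Expand (x + 1.45)(x + 4.35)(x + 2.85)^2 by repeated polynomial
# multiplication and compare with the coefficients of g3.
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists, lowest degree first."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

product = [1.0]
for root in (1.45, 4.35, 2.85, 2.85):
    product = poly_mul(product, [root, 1.0])   # factor (x + root)

g3_coeffs = [51.23266875, 83.06325, 47.49, 11.50, 1.0]  # lowest degree first
```

Up to floating-point rounding, the expanded product matches the coefficients of (30) exactly.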
Example 4.
We consider another univariate function from [14], defined as follows:
g_4(x) = \left(\sin^{-1}\left(\frac{1}{x} - 1\right) + e^{x^2} - 3\right)^2.
The function g_4 has a multiple zero at x ≈ 1.05655361033535 of known multiplicity m = 2.
Table 4 demonstrates the computational results for the problem g_4. It can be concluded from the numerical tests that the results are good for all the methods, but the smallest residual errors belong to the newly proposed methods.
Example 5.
(Planck’s radiation law problem):
Here, we choose the well-known Planck radiation law problem [19], which addresses the energy density within an isothermal blackbody and is defined as follows:
\Omega(\delta) = \frac{8\pi c h \delta^{-5}}{e^{ch/(\delta BT)} - 1},   (31)
where δ, T, h, and c denote the wavelength of the radiation, the absolute temperature of the blackbody, Planck's constant, and the speed of light, respectively. In order to find the wavelength δ, we have to calculate the maximum energy density of Ω(δ).
In addition, the maximum value of Ω occurs at a critical point (Ω'(δ) = 0), which leads to
\frac{ch}{\delta BT}\,\frac{e^{ch/(\delta BT)}}{e^{ch/(\delta BT)} - 1} = 5,   (32)
where B is the Boltzmann constant. If we set x = \frac{ch}{\delta BT}, then (32) is satisfied when
g_5(x) = \frac{x}{5} - 1 + e^{-x} = 0.   (33)
Therefore, the root of g_5(x) = 0 provides the wavelength δ of maximum radiation via
\delta \approx \frac{ch}{\alpha BT},   (34)
where α is the solution of (33). Our desired root is x ≈ 4.9651142317442, a simple zero (m = 1).
The computational results for g_5(x) = 0 are displayed in Table 5. We conclude that the methods (MM1) and (MM2) produce smaller residual errors in comparison to the other methods.
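As a cross-check, the simple zero of (33) can be reproduced with the classical Newton iteration (a plain stand-in for the fourth-order methods of Table 5); the starting value 5.5 matches Table 5.

```python
import math

# g5(x) = x/5 - 1 + exp(-x); its simple positive zero gives, via
# x = ch/(delta*B*T), the wavelength of maximum energy density.
g5  = lambda x: x / 5.0 - 1.0 + math.exp(-x)
dg5 = lambda x: 1.0 / 5.0 - math.exp(-x)

x = 5.5                       # starting guess of Table 5
for _ in range(30):           # Newton iteration x <- x - g5(x)/g5'(x)
    step = g5(x) / dg5(x)
    x -= step
    if abs(step) < 1e-15:
        break
```

The iteration settles on x ≈ 4.96511423, in agreement with the root quoted above.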

4. Conclusions

In this study, we proposed a wide and general optimal class of iterative methods for numerically approximating multiple zeros of nonlinear functions. Weight functions based on function-to-function ratios and free parameters are employed at the second step of the family, which enables us to achieve the desired convergence order four. In the numerical section, we incorporated a variety of real-life problems to confirm the efficiency of the proposed technique in comparison to existing robust methods. The computational results demonstrate that the new methods show better performance in terms of precision and accuracy for the considered test functions. Finally, we point out that the high convergence order of the proposed class makes it interesting not only from a theoretical point of view but also in practice.

Author Contributions

All authors contributed equally to this paper.

Funding

This research received no external funding.

Acknowledgments

We would like to express our gratitude to the anonymous reviewers for their constructive suggestions which improved the readability of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1960. [Google Scholar]
  2. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  3. Kanwar, V.; Bhatia, S.; Kansal, M. New optimal class of higher-order methods for multiple roots, permitting f′(xn) = 0. Appl. Math. Comput. 2013, 222, 564–574. [Google Scholar] [CrossRef]
  4. Behl, R.; Cordero, A.; Motsa, S.S.; Torregrosa, J.R. On developing fourth-order optimal families of methods for multiple roots and their dynamics. Appl. Math. Comput. 2015, 265, 520–532. [Google Scholar] [CrossRef]
  5. Behl, R.; Cordero, A.; Motsa, S.S.; Torregrosa, J.R.; Kanwar, V. An optimal fourth-order family of methods for multiple roots and its dynamics. Numer. Algor. 2016, 71, 775–796. [Google Scholar] [CrossRef]
  6. Li, S.; Liao, X.; Cheng, L. A new fourth-order iterative method for finding multiple roots of nonlinear equations. Appl. Math. Comput. 2009, 215, 1288–1292. [Google Scholar]
  7. Neta, B.; Chun, C.; Scott, M. On the development of iterative methods for multiple roots. Appl. Math. Comput. 2013, 224, 358–361. [Google Scholar] [CrossRef] [Green Version]
  8. Sharma, J.R.; Sharma, R. Modified Jarratt method for computing multiple roots. Appl. Math. Comput. 2010, 217, 878–881. [Google Scholar] [CrossRef]
  9. Zhou, X.; Chen, X.; Song, Y. Constructing higher-order methods for obtaining the multiple roots of nonlinear equations. Comput. Appl. Math. 2011, 235, 4199–4206. [Google Scholar] [CrossRef] [Green Version]
  10. Kim, Y.I.; Geum, Y.H. A triparametric family of optimal fourth-order multiple-root finders and their Dynamics. Discret. Dyn. Nat. Soc. 2016, 2016, 8436759. [Google Scholar] [CrossRef]
  11. Li, S.; Cheng, L.; Neta, B. Some fourth-order nonlinear solvers with closed formulae for multiple roots. Comput. Math. Appl. 2010, 59, 126–135. [Google Scholar] [CrossRef] [Green Version]
  12. Neta, B. Extension of Murakami’s high-order non-linear solver to multiple roots. Int. J. Comput. Math. 2010, 87, 1023–1031. [Google Scholar] [CrossRef]
  13. Zhou, X.; Chen, X.; Song, Y. Families of third and fourth order methods for multiple roots of nonlinear equations. Appl. Math. Comput. 2013, 219, 6030–6038. [Google Scholar] [CrossRef]
  14. Lee, M.Y.; Kim, Y.I.; Magrenan, A.A. On the dynamics of tri-parametric family of optimal fourth-order multiple-zero finders with a weight function of the principal mth root of a function-function ratio. Appl. Math. Comput. 2017, 315, 564–590. [Google Scholar]
  15. Zafar, F.; Cordero, A.; Torregrosa, J.R. Stability analysis of a family of optimal fourth-order methods for multiple roots. Numer. Algor. 2018. [Google Scholar] [CrossRef]
  16. Shacham, M. Numerical solution of constrained nonlinear algebraic equations. Int. J. Numer. Method Eng. 1986, 23, 1455–1481. [Google Scholar] [CrossRef]
  17. Constantinides, A.; Mostoufi, N. Numerical Methods for Chemical Engineers with MATLAB Applications; Prentice Hall PTR: Upper Saddle River, NJ, USA, 1999. [Google Scholar]
  18. Douglas, J.M. Process Dynamics and Control; Prentice Hall: Englewood Cliffs, NJ, USA, 1972; Volume 2. [Google Scholar]
  19. Jain, D. Families of Newton-like method with fourth-order convergence. Int. J. Comput. Math. 2013, 90, 1072–1082. [Google Scholar] [CrossRef]
Table 1. Convergence study of distinct iterative functions on g_1(x).
Methods  n  x_s  |g(x_s)|  |x_{s+1} − x_s|  |x_{s+1} − x_s|/|x_s − x_{s−1}|^4
LM1  0  0.5  *  *
LM1  1  *  *  *
LM1  2  *  *  *
LM1  3  *  *  *
LM2  0  0.5  *  *
LM2  1  *  *  *
LM2  2  *  *  *
LM2  3  *  *  *
ZM  0  0.5  7.6(+0)  4.3(+0)
ZM  1  4.772004872217151226361127  3.4(+2)  2.8(+0)  4.613629651(−2)
ZM  2  2.012505802018268992557295  7.4(−8)  1.3(−2)  2.156703982(−4)
ZM  3  2.000000000014282551529598  1.2(−43)  1.4(−11)  5.839284100(−4)
MM1  0  0.5  7.6(+0)  3.8(+0)
MM1  1  4.260708441594529218722064  1.4(+2)  2.3(+0)  8.619134726(−2)
MM1  2  2.009364265674733970271046  2.3(−8)  9.4(−3)  3.645072355(−4)
MM1  3  2.000000000008902251900730  1.9(−44)  8.9(−12)  1.157723834(−3)
MM2  0  0.5  7.6(+0)  4.4(+0)
MM2  1  4.907580957752597082443289  4.2(+2)  2.9(+0)  4.04329177(−2)
MM2  2  2.017817158679257202528994  3.0(−7)  1.8(−2)  2.554989145(−4)
MM2  3  2.000000000144841263588776  4.3(−39)  1.4(−10)  1.437270553(−3)
*: denotes the case of failure.
Table 2. Convergence study of distinct iterative functions on g_2(x).
Methods  n  x_s  |g(x_s)|  |x_{s+1} − x_s|  |x_{s+1} − x_s|/|x_s − x_{s−1}|^4
LM1  0  0.76  2.2(−1)  2.6(−3)
LM1  1  0.7573968038178290616303393  4.4(−5)  5.6(−7)  5.769200720(+18)
LM1  2  0.7573962462537538794608754  9.8(−20)  1.2(−21)  1.27693413(+4)
LM1  3  0.7573962462537538794596413  2.4(−78)  3.0(−80)  1.277007736(+4)
LM2  0  0.76  2.2(−1)  2.6(−3)
LM2  1  0.7573964149978655308754320  1.3(−5)  1.7(−7)  2.018201446(+20)
LM2  2  0.7573962462537538794596446  2.6(−22)  3.3(−24)  4.015765605(+3)
LM2  3  0.7573962462537538794596413  3.6(−89)  4.5(−91)  4.15789304(+3)
ZM  0  0.76  2.2(−1)  2.6(−3)
ZM  1  0.757396052854315818682498  1.5(−5)  1.9(−7)  1.382402073(+20)
ZM  2  0.7573962462537538794596326  6.9(−22)  8.7(−24)  6.198672349(+3)
ZM  3  0.7573962462537538794596413  2.8(−87)  3.5(−89)  6.198509111(+3)
MM1  0  0.76  2.2(−1)  2.6(−3)
MM1  1  0.7573962756803076928764181  2.3(−6)  2.9(−8)  3.924476992(+22)
MM1  2  0.7573962462537538794596413  1.3(−25)  1.7(−27)  2.210559498(+3)
MM1  3  0.7573962462537538794596413  1.3(−102)  1.7(−104)  2.210597968(+3)
MM2  0  0.76  2.2(−1)  2.6(−3)
MM2  1  0.7573963165291620208634917  5.6(−6)  7.0(−8)  2.881309229(+21)
MM2  2  0.7573962462537538794596413  4.2(−25)  5.3(−113)  2.165675770(+2)
MM2  3  0.7573962462537538794596413  1.3(−101)  1.7(−103)  2.166423965(+2)
Table 3. Convergence study of distinct iterative functions on g_3(x).
Methods  n  x_s  |g(x_s)|  |x_{s+1} − x_s|  |x_{s+1} − x_s|/|x_s − x_{s−1}|^4
LM1  0  −3.0  4.7(−2)  1.8(−1)
LM1  1  −2.817626610201641938500885  2.2(−3)  3.2(−2)  2.947357688(+4)
LM1  2  −2.849999804254456528880326  8.0(−14)  2.0(−7)  1.782172267(−1)
LM1  3  −2.850000000000000000000000  1.9(−55)  3.0(−28)  2.052962276(−1)
LM2  0  −3.0  4.7(−2)  1.8(−1)
LM2  1  −2.817286067962330455509242  2.2(−3)  3.3(−2)  2.856287648(+4)
LM2  2  −2.850000013787746242734760  4.0(−16)  1.4(−8)  1.203820035(−2)
LM2  3  −2.849999999999999818950521  6.9(−32)  1.8(−16)  5.009843150(+15)
ZM  0  −3.0  4.7(−2)  1.5(−1)
ZM  1  −2.847808068144375821316837  1.0(−5)  2.2(−3)  9.49657081(+7)
ZM  2  −2.850000238882998104304060  1.2(−13)  2.4(−7)  1.034398150(+4)
ZM  3  −2.850000000000000000000000  7.2(−58)  1.8(−29)  5.668907061(−3)
MM1  0  −3.0  4.7(−2)  1.5(−1)
MM1  1  −2.847808129810423347656086  1.0(−5)  2.2(−3)  9.497358601(+7)
MM1  2  −2.850000238869272754660930  1.2(−13)  2.4(−7)  1.034455136(+4)
MM1  3  −2.850000000000000000000000  7.2(−58)  1.9(−29)  5.682404331(−3)
MM2  0  −3.0  4.7(−2)  1.5(−1)
MM2  1  −2.847808157132816544128276  1.0(−5)  2.2(−3)  9.497713760(+7)
MM2  2  −2.850000238863191571180946  1.2(−13)  2.4(−7)  1.034480386(+4)
MM2  3  −2.850000000000000000000000  7.2(−58)  1.9(−29)  5.689152966(−3)
Table 4. Convergence study of distinct iterative functions on g_4(x).
Methods  n  x_s  |g(x_s)|  |x_{s+1} − x_s|  |x_{s+1} − x_s|/|x_s − x_{s−1}|^4
LM1  0  1.3  4.8(+0)  2.2(−1)
LM1  1  1.084514032248007059677119  2.7(−2)  2.8(−2)  4.571366396(+4)
LM1  2  1.056574385341714914084426  1.3(−8)  2.1(−5)  3.409239504(+1)
LM1  3  1.056553610335354902748667  2.0(−33)  8.1(−18)  4.373385687(+1)
LM2  0  1.3  4.8(+0)  2.3(−1)
LM2  1  1.065392954832001332413064  2.5(−3)  8.8(−3)  1.447890326(+6)
LM2  2  1.056553694873544532804184  2.2(−13)  8.5(−8)  1.384807214(+1)
LM2  3  1.056553610335354894601954  1.9(−53)  7.8(−28)  1.519374809(+1)
ZM  0  1.3  4.8(+0)  2.3(−1)
ZM  1  1.067135979311830779677125  3.6(−3)  1.1(−2)  8.438277939(+5)
ZM  2  1.056553548780121516047790  1.2(−13)  6.2(−8)  4.908212761(+0)
ZM  3  1.056553610335369488358073  6.6(−27)  1.5(−14)  1.016498504(+15)
MM1  0  1.3  4.8(+0)  2.3(−1)
MM1  1  1.073753193668000438771431  9.8(−3)  1.7(−2)  1.965340386(+5)
MM1  2  1.056553944712634749549453  3.5(−12)  3.3(−7)  3.821191729(+0)
MM1  3  1.056553610335354894601954  2.7(−54)  3.0(−28)  2.376288351(−2)
MM2  0  1.3  4.8(+0)  2.3(−1)
MM2  1  1.069121742029550523175482  5.1(−3)  1.3(−2)  5.037130656(+5)
MM2  2  1.056553743909213498922959  5.5(−13)  1.3(−7)  5.353737131(+0)
MM2  3  1.056553610335354894601954  3.9(−53)  1.1(−27)  3.547176655(+0)
Table 5. Convergence study of distinct iterative functions on g_5(x).
Methods  n  x_s  |g(x_s)|  |x_{s+1} − x_s|  |x_{s+1} − x_s|/|x_s − x_{s−1}|^4
LM1  0  5.5  1.0(−1)  5.3(−1)
LM1  1  4.970872146931603546368908  1.1(−3)  5.8(−3)  5.238466809(+6)
LM1  2  4.965114231914843999162688  3.3(−11)  1.7(−10)  1.55180113(−1)
LM1  3  4.965114231744276303698759  2.6(−41)  1.3(−40)  1.567247236(−1)
LM2  0  5.5  1.0(−1)  5.4(−1)
LM2  1  4.956468415831016632868463  1.7(−3)  8.6(−3)  1.547326676(+6)
LM2  2  4.965114231063736461886677  1.3(−10)  6.8(−10)  1.217950829(−1)
LM2  3  4.965114231744276303698759  5.0(−39)  2.6(−38)  1.213771079(−1)
ZM  0  5.5  1.0(−1)  5.3(−1)
ZM  1  4.965118934170088855124237  9.1(−7)  4.7(−6)  9.616878784(−15)
ZM  2  4.965114231744276303698759  1.0(−26)  5.2(−26)  1.059300624(−4)
ZM  3  4.965114231744276303698759  1.5(−106)  7.6(−106)  1.059306409(−4)
MM1  0  5.5  1.0(−1)  5.3(−1)
MM1  1  4.965119103136732738326681  9.4(−7)  4.9(−6)  8.650488681(+15)
MM1  2  4.965114231744276303698759  1.2(−26)  6.3(−26)  1.118336091(−4)
MM1  3  4.965114231744276303698759  3.4(−106)  1.8(−105)  1.118342654(−4)
MM2  0  5.5  1.0(−1)  5.3(−1)
MM2  1  4.965119178775304742593802  9.5(−7)  4.9(−6)  8.259734679(+15)
MM2  2  4.965114231744276303698759  1.3(−26)  6.9(−26)  1.147853783(−4)
MM2  3  4.965114231744276303698759  4.9(−106)  2.6(−105)  1.147860777(−4)
