Article

Derivative-Free Families of With- and Without-Memory Iterative Methods for Solving Nonlinear Equations and Their Engineering Applications

by Ekta Sharma 1,*, Sunil Panday 1, Shubham Kumar Mittal 1,*, Dan-Marian Joița 2,3, Lavinia Lorena Pruteanu 4 and Lorentz Jäntschi 3
1 Department of Mathematics, National Institute of Technology Manipur, Langol, Imphal 795004, Manipur, India
2 Chemistry Doctoral School, Babeş-Bolyai University, 400084 Cluj, Romania
3 Department of Physics and Chemistry, Technical University of Cluj-Napoca, B.-dul Muncii nr. 103-105, 400641 Cluj-Napoca, Romania
4 Department of Chemistry and Biology, North University Center at Baia Mare, Technical University of Cluj-Napoca, 430122 Baia Mare, Romania
* Authors to whom correspondence should be addressed.
Mathematics 2023, 11(21), 4512; https://doi.org/10.3390/math11214512
Submission received: 11 October 2023 / Revised: 28 October 2023 / Accepted: 30 October 2023 / Published: 1 November 2023
(This article belongs to the Special Issue Advances in Linear Recurrence System)

Abstract

In this paper, we propose a new fifth-order family of derivative-free iterative methods for solving nonlinear equations. Numerous iterative schemes found in the existing literature either exhibit divergence or fail to work when the function derivative is zero. However, the proposed family of methods successfully works even in such scenarios. We extended this idea to memory-based iterative methods by utilizing self-accelerating parameters derived from the current and previous approximations. As a result, we increased the convergence order from five to ten without requiring additional function evaluations. Analytical proofs of the proposed family of derivative-free methods, both with and without memory, are provided. Furthermore, numerical experimentation on diverse problems reveals the effectiveness and good performance of the proposed methods when compared with well-known existing methods.

1. Introduction

Developing iterative methods to solve nonlinear equations poses an interesting and significant challenge in the fields of applied mathematics and engineering. In practice, analytical techniques often fall short in determining the roots of nonlinear problems. As a result, researchers have developed a variety of iterative methods to solve nonlinear equations.
A variety of numerical strategies can be applied and have recently been updated, e.g., the Adomian decomposition [1], Aitken extrapolation [2], bisection [3], Chebyshev–Halley [4], Chun–Neta [5], collocation [6], Galerkin [7], homotopy perturbation [8] and Jarratt [9] methods, Nash–Moser iteration [10], the Newton–Raphson [11], Osada [12] and Ostrowski [13] methods, Picard iteration [14], quadrature formulas [15,16], and the super-Halley [17], Thukral [18] and Traub–Steffensen [19] methods.
Multi-point iterations surpass the limitations of one-point algorithms, demonstrating superior convergence rates and computational efficiency, thereby emerging as the most powerful technique for root finding. The development of iterative methods for finding the roots of nonlinear equations holds a crucial position in the field of numerical analysis, generating considerable interest and significance. The Newton–Raphson method stands as a widely renowned iterative approach that operates without memory, defined as follows [20]:
s_{n+1} = s_n − Θ(s_n)/Θ′(s_n),  n = 0, 1, 2, …
where Θ is the function and Θ′ is its derivative.
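As a concrete illustration, the classical iteration (1) can be sketched in a few lines of Python; the function names and the square-root test problem below are illustrative choices, not part of the paper:

```python
def newton_raphson(theta, dtheta, s0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration: s_{n+1} = s_n - theta(s_n)/theta'(s_n)."""
    s = s0
    for _ in range(max_iter):
        step = theta(s) / dtheta(s)
        s -= step
        if abs(step) < tol:       # stop once successive iterates agree
            break
    return s

# Example: the root of s^2 - 2 (i.e. sqrt(2)) starting from s0 = 1.5
root = newton_raphson(lambda s: s * s - 2, lambda s: 2 * s, 1.5)
```

Each pass evaluates Θ and Θ′ once, i.e. two function evaluations per iteration.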
The Newton–Raphson method requires the evaluation of two functions in each iteration and exhibits a second-order convergence rate. It aligns with Kung–Traub’s hypothesis [21], which asserts that a memory-less multi-point method can achieve a maximum order of 2^(γ−1) by performing γ function evaluations per iteration. The Chebyshev–Halley [21] and Ostrowski [22] methods are two iterative techniques devised for solving nonlinear equations of third and fourth orders, respectively. Researchers commonly strive to enhance the convergence rate of iterative methods. However, as the convergence rate improves, the associated increase in the number of required function evaluations may lead to a reduction in the efficiency index of these methods. The efficiency index of an iterative method quantifies its performance, as defined by [21,22]:
E = ρ^(1/γ)
where ρ symbolizes the convergence rate of the iterative method and γ denotes the number of function and derivative evaluations performed per iteration. Recent advancements in the field have witnessed remarkable contributions to the development of iterative techniques for solving nonlinear equations. Kumar et al. [23] introduced a derivative-free fifth-order method, while Choubey et al. [24] presented an eighth-order approach that removes the derivatives by employing techniques such as divided differences and weight functions. Sharma et al. [25] proposed an optimal fourth-order iterative method that incorporates derivatives, and Panday et al. [26] formulated both fourth and eighth-order optimal iterative approaches. Singh and Singh [27] devised an optimal eighth-order method in 2021, while Solaiman and Hashim [28] introduced an optimal eighth-order approach employing the modified Halley’s method. Chanu and Panday [29] contributed a non-optimal tenth-order method for solving nonlinear equations. The methods developed in [22,30,31] required evaluation of the first and second-order derivatives, which is a cumbersome task. Also, when the derivative value is zero, the application of these methods is not possible. The exploration of nonlinear equation solving has also led to the formulation of derivative-free methods with memory, as showcased by B. Neta [32], who utilized Traub’s method and Newton’s method. Furthermore, Chanu et al. [33] proposed optimal memory-less techniques of fourth and eighth orders, extending them to incorporate memory. In the pursuit of resolving nonlinear equations with multiple roots, Thangkhenpau et al. [34] introduced a novel scheme that offers both with- and without-memory-based variants.
In this research article, we present a novel fifth-order derivative-free iterative method for finding simple roots of nonlinear equations. Moreover, it excels in scenarios where the derivative is zero at initial or successive iterative approximations. The method exhibits an efficiency index of 5^(1/4) ≈ 1.4953. Building upon this foundation, we extend the technique to a tenth-order method with memory by incorporating self-accelerating parameters without requiring any additional function evaluation. The efficiency index of the tenth-order method with memory is 10^(1/4) ≈ 1.7783. The subsequent sections of this document are organized to provide comprehensive insights. Section 2 delves into the utilization of divided difference and weight function techniques in the formulation of the methods, while also analyzing the convergence rates for both with- and without-memory approaches. Section 3 presents a thorough examination of numerical tests, comparing the proposed method with other well-established techniques. Finally, Section 4 concludes the study, offering a summary of the findings and their implications.
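The two efficiency indices quoted above follow directly from E = ρ^(1/γ) with γ = 4 function evaluations per iteration; a quick check in Python (the helper name is ours):

```python
def efficiency_index(rho, gamma):
    """Efficiency index E = rho**(1/gamma) for an order-rho method
    using gamma function evaluations per iteration."""
    return rho ** (1.0 / gamma)

e_without_memory = efficiency_index(5, 4)   # 5**(1/4),  ~1.4953
e_with_memory = efficiency_index(10, 4)     # 10**(1/4), ~1.7783
```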

2. Construction of New Iterative Schemes and Their Convergence Analysis

In this section, we develop novel iterative techniques of both fifth and tenth order, specifically designed for solving nonlinear equations without the need for derivatives. The new three-step fifth-order without-memory iterative method is outlined below:
y_n = s_n − Θ(s_n)/(Θ[s_n, w_n] + αΘ(w_n)),  w_n = s_n + βΘ(s_n),  α, β ∈ ℝ,
z_n = y_n − [1 + Θ(y_n)/(Θ(s_n) − 2Θ(y_n))] · Θ(y_n)/Θ[s_n, w_n],
s_{n+1} = z_n − (P(t_n) + K(u_n, v_n)) · Θ(z_n)/Θ[s_n, w_n],
where Θ[s_n, w_n] = (Θ(s_n) − Θ(w_n))/(s_n − w_n), P : ℂ → ℂ is an analytic function in the neighbourhood of 0 with t_n = Θ(z_n)/Θ(y_n), and K : ℂ × ℂ → ℂ is another analytic function in the neighbourhood of (0, 0) with u_n = Θ(y_n)/Θ(s_n) and v_n = Θ(z_n)/Θ(s_n). This new family (3) requires four function evaluations at each iteration.
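To make the construction concrete, here is a minimal Python sketch of family (3), assuming the reconstruction of the three steps given above and taking the admissible weight functions P(t) = t + t² + t³ + t⁴ + t⁵ and K(u, v) = 1 + u + 2v adopted later in this section; the test equation s³ + 4s² − 10 = 0 and all identifiers are illustrative:

```python
def ndm1(theta, s0, alpha=0.1, beta=0.01, tol=1e-13, max_iter=50):
    """Three-step derivative-free scheme (3), sketched with the weight
    functions P(t) = t + t^2 + t^3 + t^4 + t^5 and K(u, v) = 1 + u + 2v."""
    P = lambda t: t * (1 + t * (1 + t * (1 + t * (1 + t))))
    K = lambda u, v: 1 + u + 2 * v
    s = s0
    for _ in range(max_iter):
        fs = theta(s)
        if abs(fs) < tol:
            return s
        w = s + beta * fs
        fw = theta(w)
        dd = (fs - fw) / (s - w)          # divided difference Theta[s_n, w_n]
        y = s - fs / (dd + alpha * fw)
        fy = theta(y)
        if abs(fy) < tol:
            return y
        z = y - (1 + fy / (fs - 2 * fy)) * fy / dd
        fz = theta(z)
        if abs(fz) < tol:
            return z
        t, u, v = fz / fy, fy / fs, fz / fs
        s = z - (P(t) + K(u, v)) * fz / dd
    return s

root = ndm1(lambda s: s ** 3 + 4 * s ** 2 - 10, 1.5)
```

Note that no derivative of Θ appears anywhere: the divided difference Θ[s_n, w_n] plays the role of Θ′.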
Theorem 1. 
Suppose that Θ : D ⊆ ℂ → ℂ is an analytic function that is sufficiently differentiable. Let ξ ∈ D be a simple root of Θ, and s₀ be a value that is sufficiently near to ξ. If P(t_n) and K(u_n, v_n) satisfy the conditions outlined below, the iterative process defined in (3) is of fifth-order convergence. The conditions are as follows: P(0) = 0, P′(0) = 1, P″(0) = 2, P‴(0) = 6, P⁽⁴⁾(0) = 24, P⁽⁵⁾(0) = 120, K(0, 0) = 1, K^(1,0)(0, 0) = 1, and K^(0,1)(0, 0) = 2. Furthermore, the iterative process (3) fulfills the error equation below:
e_{n+1} = α(1 + Θ′(ξ)β)³(α + Θ′(ξ)αβ − c₂)(α + c₂)² e_n⁵ + O(e_n⁶)
where c_j = Θ^(j)(ξ)/(j! Θ′(ξ)), j = 2, 3, 4, …
Proof of Theorem 1. 
Let ξ be the simple root of Θ ( s ) = 0 and let e n = s n ξ be the error of n t h iteration. Using Taylor expansion, we obtain
Θ(s_n) = Θ′(ξ)(e_n + c₂e_n² + c₃e_n³ + c₄e_n⁴ + c₅e_n⁵) + O(e_n⁶)
where e_n = s_n − ξ and c_j = Θ^(j)(ξ)/(j! Θ′(ξ)), j = 2, 3, 4, …
Now, using Equation (5) in the first step of method given by (3), we have
w_n − ξ = (1 + Θ′(ξ)β)e_n + Θ′(ξ)βc₂e_n² + Θ′(ξ)βc₃e_n³ + Θ′(ξ)βc₄e_n⁴ + Θ′(ξ)βc₅e_n⁵ + O(e_n⁶)
and
Θ ( w n ) = Θ ( ξ ) ( Θ ( ξ ) β + 1 ) e n + Θ ( ξ ) c 2 ( Θ ( ξ ) β ( Θ ( ξ ) β + 3 ) + 1 ) e n 2 + Θ ( ξ ) c 3 ( Θ ( ξ ) β + 1 ) 3 + 2 Θ ( ξ ) β c 2 2 ( Θ ( ξ ) β + 1 ) + Θ ( ξ ) β c 3 e n 3 + Θ ( ξ ) ( Θ ( ξ ) β ( Θ ( ξ ) β c 2 3 + c 3 c 2 ( Θ ( ξ ) β + 1 ) ( 3 Θ ( ξ ) β + 5 ) + c 4 ( Θ ( ξ ) β ( Θ ( ξ ) β ( Θ ( ξ ) β + 4 ) + 6 ) + 5 ) ) + c 4 ) e n 4 + Θ ( ξ ) ( c 5 ( Θ ( ξ ) β + 1 ) 5 + 4 Θ ( ξ ) β c 2 c 4 ( Θ ( ξ ) β + 1 ) 3 + 3 Θ ( ξ ) β c 3 ( Θ ( ξ ) β + 1 ) Θ ( ξ ) β c 2 2 + c 3 + c 3 + 2 Θ ( ξ ) β c 2 Θ ( ξ ) β c 2 c 3 + c 4 + c 4 + Θ ( ξ ) β c 5 ) e n 5 + O e n 6
Now, using Equations (5)–(7), the divided difference Θ [ s n , w n ] can be expressed as:
Θ [ s n , w n ] = Θ ( ξ ) + Θ ( ξ ) c 2 ( Θ ( ξ ) β + 2 ) e n + Θ ( ξ ) Θ ( ξ ) β c 2 2 + c 3 ( Θ ( ξ ) β ( Θ ( ξ ) β + 3 ) + 3 ) e n 2 + Θ ( ξ ) ( Θ ( ξ ) β + 2 ) 2 Θ ( ξ ) β c 2 c 3 + c 4 ( Θ ( ξ ) β ( Θ ( ξ ) β + 2 ) + 2 ) e n 3 + Θ ( ξ ) ( Θ ( ξ ) β ( Θ ( ξ ) β c 3 c 2 2 + c 4 c 2 ( Θ ( ξ ) β ( 3 Θ ( ξ ) β + 8 ) + 7 ) + c 3 2 ( 2 Θ ( ξ ) β + 3 ) + c 5 ( Θ ( ξ ) β ( Θ ( ξ ) β ( Θ ( ξ ) β + 5 ) + 10 ) + 10 ) ) + 5 c 5 ) e n 4 + Θ ( ξ ) ( Θ ( ξ ) β ( Θ ( ξ ) β c 4 c 2 2 ( 3 Θ ( ξ ) β + 4 ) + c 2 ( 2 Θ ( ξ ) β c 3 2 + c 5 ( Θ ( ξ ) β ( Θ ( ξ ) β ( 4 Θ ( ξ ) β + 15 ) + 20 ) + 11 ) ) + c 3 c 4 ( Θ ( ξ ) β ( 3 Θ ( ξ ) β + 10 ) + 9 ) + c 6 ( Θ ( ξ ) β ( Θ ( ξ ) β ( Θ ( ξ ) β ( Θ ( ξ ) β + 6 ) + 15 ) + 20 ) + 15 ) ) + 6 c 6 ) e n 5 + O e n 6
By using Equations (5)–(8) in the second step of the method (3), we obtain
y_n − ξ = (1 + Θ′(ξ)β)(α + c₂)e_n² + ⋯ + O(e_n⁶)
and
Θ(y_n) = Θ′(ξ)(1 + Θ′(ξ)β)(α + c₂)e_n² + ⋯ + O(e_n⁶)
Now, after using Equations (5) to (9), we obtain the third step z n and Θ ( z n ) as
z_n − ξ = (1 + Θ′(ξ)β)(α + Θ′(ξ)αβ − c₂)(α + c₂)e_n³ + ⋯ + O(e_n⁶)
Θ(z_n) = Θ′(ξ)(1 + Θ′(ξ)β)(α + Θ′(ξ)αβ − c₂)(α + c₂)e_n³ + ⋯ + O(e_n⁶)
Now, we obtain t n = Θ ( z n ) Θ ( y n ) , u n = Θ ( y n ) Θ ( s n ) and v n = Θ ( z n ) Θ ( s n ) by using Equations (5), (10) and (12); thus, we have
t_n = (α + Θ′(ξ)αβ − c₂)e_n + ⋯ + O(e_n⁶)
u_n = (1 + Θ′(ξ)β)(α + c₂)e_n + ⋯ + O(e_n⁶)
and
v_n = (1 + Θ′(ξ)β)(α + Θ′(ξ)αβ − c₂)(α + c₂)e_n² + ⋯ + O(e_n⁶)
Now, by using Equations (5)–(15), we obtain the fourth step as
s_{n+1} − ξ = e_{n+1} = (1 + Θ′(ξ)β)P(0)(α + Θ′(ξ)αβ − c₂)(α + c₂)e_n³ + γ₄e_n⁴ + γ₅e_n⁵ + O(e_n⁶)
where
γ 4 = c 2 ( Θ ( ξ ) β + 1 ) α 2 ( Θ ( ξ ) β 1 ) Θ ( ξ ) β ( Θ ( ξ ) β + 1 ) P ( 0 ) + Θ ( ξ ) β P ( 0 ) + 2 P ( 0 ) + 1 c 3 P ( 0 ) ( Θ ( ξ ) β + 3 ) + α c 2 2 Θ ( ξ ) β Θ ( ξ ) β P ( 0 ) ( Θ ( ξ ) β + 3 ) + 2 P ( 0 ) 1 + 3 P ( 0 ) + P ( 0 ) 1 + P ( 0 ) + 2 P ( 0 ) 1 + α ( Θ ( ξ ) β + 1 ) c 3 P ( 0 ) ( Θ ( ξ ) β ( Θ ( ξ ) β + 3 ) + 1 ) P ( 0 ) 1 ( α + α Θ ( ξ ) β ) 2 + c 2 3 Θ ( ξ ) β P ( 0 ) ( Θ ( ξ ) β ( Θ ( ξ ) β + 5 ) + 9 ) P ( 0 ) + 1 P ( 0 ) + 6 P ( 0 ) + 1
and
γ 5 = 1 2 c 2 2 2 P ( 0 ) α 2 Θ ( ξ ) β ( Θ ( ξ ) β ( Θ ( ξ ) β ( 4 Θ ( ξ ) β + 11 ) + 11 ) + 9 ) + c 3 ( Θ ( ξ ) β + 1 ) ( Θ ( ξ ) β + 4 ) + 2 c 3 ( Θ ( ξ ) β ( Θ ( ξ ) β + P ( 0 ) ( Θ ( ξ ) β ( Θ ( ξ ) β ( 3 Θ ( ξ ) β + 19 ) + 45 ) + 53 ) + 5 ) + 27 P ( 0 ) + 4 ) + 2 α 2 Θ ( ξ ) β ( Θ ( ξ ) β ( Θ ( ξ ) β ( 3 Θ ( ξ ) β ( P ( 0 ) + 1 ) + 4 ( P ( 0 ) + 3 ) ) 7 P ( 0 ) + 17 ) 10 P ( 0 ) + 14 ) 5 P ( 0 ) P ( 0 ) + 6 3 Θ ( ξ ) β P ( 0 ) ( α + α Θ ( ξ ) β ) 2 + c 2 ( Θ ( ξ ) β + 1 ) α 3 ( Θ ( ξ ) β + 1 ) 2 Θ ( ξ ) β Θ ( ξ ) β ( 4 Θ ( ξ ) β + 5 ) P ( 0 ) + 3 Θ ( ξ ) β P ( 0 ) + 3 P ( 0 ) + 3 + ( Θ ( ξ ) β 2 ) ( Θ ( ξ ) β + 1 ) P ( 0 ) + 4 P ( 0 ) 2 2 c 4 P ( 0 ) ( Θ ( ξ ) β ( Θ ( ξ ) β + 3 ) + 4 ) + 4 α c 3 Θ ( ξ ) β ( Θ ( ξ ) β + 3 ) Θ ( ξ ) β P ( 0 ) ( Θ ( ξ ) β + 2 ) + P ( 0 ) 1 + 2 P ( 0 ) + P ( 0 ) 2 + 2 P ( 0 ) + P ( 0 ) 2 + ( Θ ( ξ ) β + 1 ) 2 α 2 c 3 Θ ( ξ ) β Θ ( ξ ) β Θ ( ξ ) β ( Θ ( ξ ) β + 4 ) P ( 0 ) + Θ ( ξ ) β P ( 0 ) + 2 P ( 0 ) + 4 3 P ( 0 ) P ( 0 ) + 3 4 P ( 0 ) + 2 α c 4 P ( 0 ) ( Θ ( ξ ) β ( Θ ( ξ ) β ( Θ ( ξ ) β + 4 ) + 6 ) + 2 ) 2 c 3 2 P ( 0 ) ( Θ ( ξ ) β + 2 ) + α 4 ( Θ ( ξ ) β + 1 ) 3 2 P ( 0 ) + P ( 0 ) + 2 P ( 0 ) 2 + α c 2 3 2 Θ ( ξ ) β Θ ( ξ ) β Θ ( ξ ) β 2 Θ ( ξ ) β 2 ( Θ ( ξ ) β + 3 ) P ( 0 ) 10 P ( 0 ) + 7 10 P ( 0 ) 33 P ( 0 ) + 10 11 P ( 0 ) 37 P ( 0 ) + 9 + ( Θ ( ξ ) β + 1 ) ( 3 Θ ( ξ ) β + 2 ) P ( 0 ) 8 P ( 0 ) 32 P ( 0 ) + 6 c 2 4 Θ ( ξ ) β 2 Θ ( ξ ) β Θ ( ξ ) β P ( 0 ) ( 2 Θ ( ξ ) β + 15 ) 2 P ( 0 ) + 2 8 P ( 0 ) + 39 P ( 0 ) + 8 26 P ( 0 ) + P ( 0 ) + 96 P ( 0 ) + 24 16 P ( 0 ) + P ( 0 ) + 50 P ( 0 ) + 14 .
After substituting the values P(0) = 0, P′(0) = 1 and P″(0) = 2, we obtain the error equation below:
s_{n+1} − ξ = e_{n+1} = α(1 + Θ′(ξ)β)³(α + Θ′(ξ)αβ − c₂)(α + c₂)² e_n⁵ + O(e_n⁶)
By examining Equation (17), we infer that the method described by Equation (3) exhibits fifth-order convergence, which completes the proof.
Based on the conditions for P(t_n) and K(u_n, v_n) presented in Theorem 1, we adopt the particular forms P(t_n) = t_n + t_n² + t_n³ + t_n⁴ + t_n⁵ and K(u_n, v_n) = 1 + u_n + 2v_n for the weight functions within the newly proposed method described by Equation (3). □
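These particular forms can be checked against the conditions of Theorem 1 mechanically: for a polynomial Σ a_k t^k the j-th derivative at 0 is a_j · j!, and K is linear, so all nine conditions are immediate. A small verification snippet (names ours):

```python
from math import factorial

# P(t) = t + t^2 + t^3 + t^4 + t^5, ascending coefficients [a_0, ..., a_5]
P_coeffs = [0, 1, 1, 1, 1, 1]

# j-th derivative of sum_k a_k t^k at t = 0 equals a_j * j!
P_derivs_at_0 = [P_coeffs[j] * factorial(j) for j in range(6)]

# K(u, v) = 1 + u + 2v, so K(0,0) = 1, K_u(0,0) = 1, K_v(0,0) = 2
K = lambda u, v: 1 + u + 2 * v
```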

Parametric Family of Three-Point With-Memory Method and Its Convergence Analysis

We shall now proceed to enhance the method described by Equation (3) by incorporating the with-memory feature, introducing two additional parameters. Upon analyzing Equation (4), it becomes evident that the convergence order of method (3) reaches ten when α = −c₂ and β = −1/Θ′(ξ). By selecting α = −c₂ = −Θ″(ξ)/(2Θ′(ξ)) and β = −1/Θ′(ξ), we can transform the error Equation (4) into the following form:
e_{n+1} = c₂³(5c₂² − c₃)c₃² e_n¹⁰ + O(e_n¹¹)
To derive the method with memory, we introduce the parameters α = α_n and β = β_n, which evolve as the iteration progresses according to the formulas α_n = −Θ̄″(ξ)/(2Θ̄′(ξ)) and β_n = −1/Θ̄′(ξ). In method (3), we make use of the following approximations:
α_n = −Θ̄″(ξ)/(2Θ̄′(ξ)) ≈ −N₅″(w_n)/(2N₅′(w_n)),  β_n = −1/Θ̄′(ξ) ≈ −1/N₄′(s_n)
Let us define Newton’s interpolating polynomials of fourth and fifth degree as N₄(u) = N₄(u; s_n, z_{n−1}, y_{n−1}, s_{n−1}, w_{n−1}) and N₅(u) = N₅(u; w_n, s_n, z_{n−1}, y_{n−1}, s_{n−1}, w_{n−1}).
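For readers who want to reproduce the accelerators, the derivative N′ of a Newton interpolating polynomial can be evaluated by building its divided-difference form and expanding it to monomial coefficients. The sketch below (all names ours) illustrates β_n = −1/N₄′(s_n) on a toy data set; the second derivative needed for α_n can be obtained the same way by differentiating the coefficients once more:

```python
def newton_poly_coeffs(xs, ys):
    """Monomial coefficients (ascending powers) of the Newton
    interpolating polynomial through the points (xs[i], ys[i])."""
    n = len(xs)
    dd = list(ys)                       # divided-difference table, in place
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            dd[i] = (dd[i] - dd[i - 1]) / (xs[i] - xs[i - j])
    coeffs = [0.0] * n
    basis = [1.0]                       # coefficients of prod_{j<i} (u - xs[j])
    for i in range(n):
        for k, c in enumerate(basis):
            coeffs[k] += dd[i] * c
        nb = [0.0] * (len(basis) + 1)   # multiply basis by (u - xs[i])
        for k, c in enumerate(basis):
            nb[k + 1] += c
            nb[k] -= xs[i] * c
        basis = nb
    return coeffs

def poly_deriv_at(coeffs, s):
    """First derivative of a polynomial (ascending coefficients) at s."""
    return sum(k * c * s ** (k - 1) for k, c in enumerate(coeffs) if k > 0)

# Toy example: interpolate f(u) = u^3 at four nodes, then beta = -1/N'(s)
xs = [0.0, 1.0, 2.0, 3.0]
coeffs = newton_poly_coeffs(xs, [x ** 3 for x in xs])
beta = -1.0 / poly_deriv_at(coeffs, 2.0)   # N'(2) = 12 for u^3
```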
Now, we can express the iterative method with memory as follows:
w_n = s_n + β_n Θ(s_n),
y_n = s_n − Θ(s_n)/(Θ[s_n, w_n] + α_n Θ(w_n)),
z_n = y_n − [1 + Θ(y_n)/(Θ(s_n) − 2Θ(y_n))] · Θ(y_n)/Θ[s_n, w_n],
s_{n+1} = z_n − (P(t_n) + K(u_n, v_n)) · Θ(z_n)/Θ[s_n, w_n].
Remark 1. 
It is important to note that the approach of iteratively calculating independent parameters, as employed in this method, is commonly known as a self-accelerating method. Prior to initiating the iterative process, it is crucial to determine the initial values of α 0 and β 0 , as highlighted in [35].
Our objective is to analyze the convergence properties of the method with memory. Specifically, we are interested in investigating the behavior of the sequence s_n as it converges to the root ξ of Θ, as well as determining the rate at which this convergence occurs. To quantify the convergence rate, we define the difference between s_n and ξ as e_n = s_n − ξ. If the sequence s_n approaches the root ξ with an order of p, we can express this as e_{n+1} ∼ e_n^p. In order to establish the convergence order of the method (20), we can make use of the following lemma, which has been presented in [36].
Lemma 1. 
If α_n = −N₅″(w_n)/(2N₅′(w_n)) and β_n = −1/N₄′(s_n), n = 1, 2, 3, …, then the estimates (1 + Θ′(ξ)β_n) ∼ e_{n−1,z} e_{n−1,y} e_{n−1,w} e_{n−1} and (α_n + c₂) ∼ e_{n−1,z} e_{n−1,y} e_{n−1,w} e_{n−1} hold.
Let us consider the following theorem.
Theorem 2. 
If the initial estimate s 0 is in close vicinity to the unique root ξ of the real and suitably smooth function Θ ( s ) = 0 , method (20) will possess a convergence rate of at least 10.8151.
Proof of Theorem 2. 
Let us assume that the iterative process described in (20) produces a sequence of estimations denoted as s n . If this sequence converges to the root ξ of the function Θ with a convergence order of q, we can deduce the following:
e_{n+1} ∼ e_n^q, where e_n = s_n − ξ
e_{n+1} ∼ (e_{n−1}^q)^q = e_{n−1}^(q²)
Let us assume that the iterative sequences w_n, y_n and z_n have orders q₁, q₂ and q₃, respectively. Then, Equations (21) and (22) give the following:
e_{n,w} ∼ e_n^(q₁) = e_{n−1}^(q q₁)
e_{n,y} ∼ e_n^(q₂) = e_{n−1}^(q q₂)
e_{n,z} ∼ e_n^(q₃) = e_{n−1}^(q q₃)
By Theorem 1, we can write
e_{n,w} ∼ (1 + Θ′(ξ)β) e_n
e_{n,y} ∼ (1 + Θ′(ξ)β)(α + c₂) e_n²
e_{n,z} ∼ (1 + Θ′(ξ)β)(α + Θ′(ξ)αβ − c₂)(α + c₂) e_n³
e_{n+1} ∼ α(1 + Θ′(ξ)β)³(α + Θ′(ξ)αβ − c₂)(α + c₂)² e_n⁵
Using Lemma 1, we obtain the following:
e_{n,w} ∼ (1 + Θ′(ξ)β) e_n ∼ (e_{n−1,z} e_{n−1,y} e_{n−1,w} e_{n−1}) e_n ∼ e_{n−1}^(q + q₁ + q₂ + q₃ + 1)
e_{n,y} ∼ (1 + Θ′(ξ)β)(α + c₂) e_n² ∼ (e_{n−1,z} e_{n−1,y} e_{n−1,w} e_{n−1})² e_n² ∼ e_{n−1}^(2q + 2q₁ + 2q₂ + 2q₃ + 2)
e_{n,z} ∼ (1 + Θ′(ξ)β)(α + Θ′(ξ)αβ − c₂)(α + c₂) e_n³ ∼ (e_{n−1,z} e_{n−1,y} e_{n−1,w} e_{n−1})³ e_n³ ∼ e_{n−1}^(3q + 3q₁ + 3q₂ + 3q₃ + 3)
e_{n+1} ∼ α(1 + Θ′(ξ)β)³(α + Θ′(ξ)αβ − c₂)(α + c₂)² e_n⁵ ∼ (e_{n−1,z} e_{n−1,y} e_{n−1,w} e_{n−1})⁴ e_n⁵ ∼ e_{n−1}^(5q + 4q₁ + 4q₂ + 4q₃ + 4)
Comparing the powers of e_{n−1} in Equations (23) and (30), (24) and (31), (25) and (32), and (22) and (33), we obtain the following system of equations:
q q₁ − q − q₁ − q₂ − q₃ − 1 = 0
q q₂ − 2q − 2q₁ − 2q₂ − 2q₃ − 2 = 0
q q₃ − 3q − 3q₁ − 3q₂ − 3q₃ − 3 = 0
q² − 5q − 4q₁ − 4q₂ − 4q₃ − 4 = 0
After solving the above system of equations, we obtain q₁ = 2.4538, q₂ = 4.9075, q₃ = 7.3613 and q = 10.8151. Thus, the proof is complete. □
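The system admits a closed-form reduction: the second and third equations force q₂ = 2q₁ and q₃ = 3q₁, the fourth then gives q₁ = (q − 1)/4, and substituting into the first leaves the quadratic q² − 11q + 2 = 0, whose positive root is the stated R-order. A short numerical check:

```python
from math import sqrt

# Positive root of q^2 - 11q + 2 = 0, obtained by eliminating
# q2 = 2*q1, q3 = 3*q1 and q1 = (q - 1)/4 from the system.
q = (11 + sqrt(113)) / 2
q1 = (q - 1) / 4
q2, q3 = 2 * q1, 3 * q1

# Residuals of the original four equations (all should vanish)
r1 = q * q1 - q - q1 - q2 - q3 - 1
r2 = q * q2 - 2 * (q + q1 + q2 + q3 + 1)
r3 = q * q3 - 3 * (q + q1 + q2 + q3 + 1)
r4 = q * q - 5 * q - 4 * (q1 + q2 + q3 + 1)
```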

3. Numerical Discussion

In this section, our objective is to elucidate the efficacy of recently introduced iterative families of methods through their application to various nonlinear equations. We will compare the results with some well-known existing methods available in the literature. In particular, the following iterative methods, in addition to Newton’s method, are considered for comparison.
The well-known fourth-order multipoint without-memory Ostrowski’s method (OM) is given as [22]:
y_n = s_n − Θ(s_n)/Θ′(s_n),
s_{n+1} = y_n − [Θ(s_n)/(Θ(s_n) − 2Θ(y_n))] · Θ(y_n)/Θ′(s_n).
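A minimal Python sketch of OM, assuming the standard formulation of Ostrowski's method above (the test problem and all names are ours):

```python
def ostrowski(theta, dtheta, s0, tol=1e-12, max_iter=50):
    """Ostrowski's two-step fourth-order method."""
    s = s0
    for _ in range(max_iter):
        fs = theta(s)
        if abs(fs) < tol:
            return s
        d = dtheta(s)
        y = s - fs / d                       # Newton predictor
        fy = theta(y)
        s = y - fs / (fs - 2 * fy) * fy / d  # Ostrowski corrector
    return s

# Example: sqrt(2) as the root of s^2 - 2
r = ostrowski(lambda s: s * s - 2, lambda s: 2 * s, 1.5)
```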
In 2020, Nouri et al. [30] developed the following fifth-order method (NRTM):
y_n = s_n − Θ(s_n)/Θ′(s_n),
z_n = s_n + Θ(s_n)/Θ′(s_n),
s_{n+1} = s_n − (s_n − y_n) · [Θ(s_n)²(Θ(y_n) + Θ(z_n))] / [Θ(s_n)²(Θ(z_n) − Θ(y_n)) − 4Θ(s_n)Θ(y_n)² − 6Θ(y_n)³].
The fifth-order method (GM) developed by Grau et al. [31]:
y_n = s_n − Θ(s_n)/Θ′(s_n),
z_n = s_n − [1 + Θ(s_n)Θ″(s_n)/(2Θ′(s_n)²)] · Θ(s_n)/Θ′(s_n),
s_{n+1} = s_n − [1 + Θ″(s_n)(Θ(s_n) + Θ(z_n))/(2Θ′(s_n)²)] · (Θ(s_n) + Θ(z_n))/Θ′(s_n).
We selected the methods of [30,31] so as to have a fair and uniform comparison: like our proposed methods, they have the same order of convergence and the same number of function evaluations per iteration.
The nonlinear test functions utilized for comparative analysis, along with their corresponding initial approximations, are provided below.
Example 1. 
Θ₁(s) = e^(−s²)(1 + s³ + s⁶)(s − 2), s₀ = 2.1
Example 2. 
Θ₂(s) = sin 2s + s, s₀ = 0.07
Example 3. 
Θ₃(s) = 10s e^(−s²) − 1, s₀ = 1.7
Example 4. 
Θ₄(s) = e^(−s² + s + 2) − 1, s₀ = 2.001
Example 5. 
Θ₅(s) = s² + 1, s₀ = 2i/3
Example 6. 
Θ₆(s) = s² + s + 1, s₀ = 3i/2
Example 7. 
We consider the following Planck’s radiation law problem, which calculates the energy density within an isothermal blackbody and is given by [37]:
v(λ) = (8πcP/λ⁵) · 1/(e^(cP/(λBT)) − 1)
where λ is the wavelength of the radiation, T is the absolute temperature of the blackbody, B is the Boltzmann constant, P is the Planck constant and c is the speed of light. We are interested in determining the wavelength λ that corresponds to maximum energy density v ( λ ) .
From (38), we obtain
v′(λ) = (8πcP/λ⁶) · 1/(e^(cP/(λBT)) − 1) · [(cP/(λBT)) e^(cP/(λBT))/(e^(cP/(λBT)) − 1) − 5]
so that the maxima of v occur when
(cP/(λBT)) e^(cP/(λBT))/(e^(cP/(λBT)) − 1) = 5
After that, if s = cP/(λBT), then (40) is satisfied if
Θ₇(s) = e^(−s) + s/5 − 1 = 0; s₀ = 4.965
Therefore, the solutions of Θ 7 ( s ) = 0 give the maximum wavelength of radiation λ by means of the following formula:
λ ≈ cP/(s* BT)
where s * is a solution of (41).
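Equation (41) is easy to solve numerically; the sketch below applies Newton's method (used here purely as a convenient solver, not as one of the paper's schemes) and then recovers λT = cP/(s*B), i.e. Wien's displacement constant. CODATA values of the physical constants are assumed:

```python
from math import exp

# Theta_7(s) = e^{-s} + s/5 - 1; its nonzero root s* gives Wien's
# displacement law lambda_max * T = c*P / (s* * B).
def theta7(s):
    return exp(-s) + s / 5 - 1

def dtheta7(s):
    return -exp(-s) + 0.2

s = 4.965                       # initial guess from the paper
for _ in range(20):
    s -= theta7(s) / dtheta7(s)

c = 2.99792458e8                # speed of light, m/s
P = 6.62607015e-34              # Planck constant, J*s
B = 1.380649e-23                # Boltzmann constant, J/K
wien = c * P / (s * B)          # lambda_max * T in m*K, ~2.898e-3
```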
Example 8. 
In the study of the multi-factor effect, the trajectory of an electron in the air gap between two parallel plates is given by [37]
s(t) = s₀ + (υ₀ + (eE₀/(mω)) sin(ωt₀ + α))(t − t₀) + (eE₀/(mω²))(cos(ωt + α) + sin(ωt + α))
where e and m are the charge and the mass of the electron at rest, s₀ and υ₀ are the position and velocity of the electron at time t₀, and E₀ sin(ωt + α) is the RF electric field between the plates. We choose particular parameters in the expression in order to deal with a simpler expression, defined as follows:
Θ₈(s) = s − (1/2)cos(s) + π/4
with s₀ = −0.309.
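The simplified equation Θ₈(s) = 0 can likewise be solved with a few Newton steps, since Θ₈′(s) = 1 + (1/2)sin(s) is available in closed form (the solver choice is ours; the paper applies its own methods to this problem):

```python
from math import cos, sin, pi

def theta8(s):
    return s - 0.5 * cos(s) + pi / 4

# Newton iteration with the analytic derivative 1 + 0.5*sin(s)
s = -0.309                      # initial guess from the paper
for _ in range(20):
    s -= theta8(s) / (1 + 0.5 * sin(s))
```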
The comparative results for all the methods, including NM (1), NDM1 (3), NDM2 (20), NRTM (36) and GM (37), are summarized in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8. In these tables, we have presented the following metrics for each of the compared methods after the first three full iterations on every test function: the approximated roots (s_n), the absolute residual error (|Θ(s_n)|), the difference between the last two successive iterations (|s_n − s_{n−1}|) and the computational rate of convergence (COC). Also, in Figure 1, we provide a comparison of methods based on the error in consecutive iterations, |s_n − s_{n−1}|, after the first three iterations. The determination of COC is achieved using the following equation [38]:
COC = log|Θ(s_n)/Θ(s_{n−1})| / log|Θ(s_{n−1})/Θ(s_{n−2})|
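The COC can be computed from the residuals of any three consecutive iterates; a small helper (names ours), demonstrated on Newton's method, where the estimate should come out close to 2:

```python
from math import log

def coc(theta, s_seq):
    """Computational order of convergence from the last three iterates."""
    f1, f2, f3 = (abs(theta(s)) for s in s_seq[-3:])
    return log(f3 / f2) / log(f2 / f1)

# Three Newton iterates for s^2 - 2 starting from 1.5
f = lambda s: s * s - 2
iters = [1.5]
for _ in range(3):
    s = iters[-1]
    iters.append(s - f(s) / (2 * s))

order = coc(f, iters[1:])   # close to 2 for Newton's method
```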
For all numerical calculations, the programming software Mathematica 12.2 was utilized. For the with-memory method NDM2, we have selected the parameter values α 0 = 0.1 and β 0 = 0.01 to start the initial iteration.
From all the numerical results in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8 and from Figure 1, it is concluded that the proposed family of methods NDM1 and NDM2 is highly competitive and converges quickly toward the roots, with minimal absolute residual error and minimal error in consecutive iterations compared to the other existing methods. Additionally, the numerical results indicate that the computational order of convergence supports the theoretical convergence order of the newly presented family of methods on the test functions.

4. Conclusions

In this paper, we have presented two families of iterative methods for solving nonlinear equations. The first is a memoryless, derivative-free family of fifth-order methods, while the second is a three-point family of with-memory methods. We obtained the fifth-order family of methods by employing a composition technique along with a modified Newton's method and a weighted function approach. The memory-based extension of the fifth-order family employs two acceleration parameters calculated using Newton interpolating polynomials, enhancing convergence from fifth to tenth order without requiring additional function evaluations and thereby increasing the efficiency index from 1.495 to 1.778. Analysis of the numerical results has revealed the effectiveness and enhanced capabilities of the newly proposed methods in terms of minimal absolute residual error and minimal error in consecutive iterations. The results demonstrate that the proposed methods NDM1 and NDM2 exhibit faster convergence with smaller asymptotic constant values compared to other existing methods. Overall, the newly proposed work offers a fast convergence speed, making it a promising alternative for solving nonlinear equations.

Author Contributions

Conceptualisation, E.S. and S.P.; methodology, E.S. and S.P.; software, E.S., S.K.M., D.-M.J., L.L.P. and L.J.; validation, E.S., S.K.M. and S.P.; formal analysis, E.S., S.P., S.K.M. and L.J.; resources, E.S.; writing—original draft preparation, E.S., S.P. and S.K.M.; writing—review and editing, E.S., S.K.M., D.-M.J., L.L.P. and L.J.; visualization, E.S. and S.K.M.; supervision, S.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Technical University of Cluj-Napoca open access publication grant.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
GM: Grau fifth-order method
NM: Newton–Raphson method
NDM1: Newly developed method 1 (without memory, Equation (3))
NDM2: Newly developed method 2 (with memory, Equation (19))
NRTM: Nouri fifth-order method (Equation (35))
OM: Ostrowski's method
COC: Computational rate of convergence
The following constants were used in this manuscript:
c: speed of light
P: Planck constant
For the authors, the following abbreviations were used:
E.S.: Ekta Sharma
L.J.: Lorentz Jäntschi
S.K.M.: Shubham Kumar Mittal
S.P.: Sunil Panday
D.-M.J.: Dan-Marian Joița
L.L.P.: Lavinia Lorena Pruteanu

References

  1. Behbahan, A.S.; Alizadeh, A.a.; Mahmoudi, M.; Shamsborhan, M.; Al-Musawi, T.J.; Pasha, P. A new Adomian decomposition technique for a thermal analysis forced non-Newtonian magnetic Reiner-Rivlin viscoelastic fluid flow. Alex. Eng. J. 2023, 80, 48–57. [Google Scholar] [CrossRef]
  2. Fika, P. Approximation of the Tikhonov regularization parameter through Aitken’s extrapolation. Appl. Numer. Math. 2023, 190, 270–282. [Google Scholar] [CrossRef]
  3. Gutierrez, C.; Gutierrez, F.; Rivara, M.C. Complexity of the bisection method. Theor. Comput. Sci. 2007, 382, 131–138. [Google Scholar] [CrossRef]
  4. Sharma, H.; Kansal, M. A modified Chebyshev–Halley-type iterative family with memory for solving nonlinear equations and its stability analysis. Math. Methods Appl. Sci. 2023, 46, 12549–12569. [Google Scholar] [CrossRef]
  5. Petković, I.; Herceg, D. Computers in mathematical research: The study of three-point root-finding methods. Numer. Algorithms 2020, 84, 1179–1198. [Google Scholar] [CrossRef]
  6. Lu, Y.; Tang, Y. Solving Fractional Differential Equations Using Collocation Method Based on Hybrid of Block-pulse Functions and Taylor Polynomials. Turk. J. Math. 2021, 45, 1065–1078. [Google Scholar] [CrossRef]
  7. Assari, P.; Dehghan, M. A meshless local Galerkin method for solving Volterra integral equations deduced from nonlinear fractional differential equations using the moving least squares technique. Appl. Numer. Math. 2019, 143, 276–299. [Google Scholar] [CrossRef]
  8. Farhood, A.K.; Mohammed, O.H. Homotopy perturbation method for solving time-fractional nonlinear Variable-Order Delay Partial Differential Equations. Partial. Differ. Equ. Appl. Math. 2023, 7, 100513. [Google Scholar] [CrossRef]
  9. Argyros, I.K.; Sharma, D.; Argyros, C.I.; Parhi, S.K.; Sunanda, S.K.; Argyros, M.I. Extended three step sixth order Jarratt- like methods under generalized conditions for nonlinear equations. Arab. J. Math. 2022, 11, 443–457. [Google Scholar] [CrossRef]
  10. Temple, B.; Young, R. Inversion of a non-uniform difference operator and a strategy for Nash–Moser. Methods Appl. Anal. 2022, 29, 265–294. [Google Scholar] [CrossRef]
  11. Pho, K.H. Improvements of the Newton–Raphson method. J. Comput. Appl. Math. 2022, 408, 114106. [Google Scholar] [CrossRef]
  12. Argyros, I.K.; George, S. Local convergence of Osada's method for finding zeros with multiplicity. In Understanding Banach Spaces; Sánchez, D.G., Ed.; Nova Science Publishers: Hauppauge, NY, USA, 2019; pp. 147–151. Available online: http://idr.nitk.ac.in/jspui/handle/123456789/14597 (accessed on 11 October 2023).
  13. Postigo Beleña, C. Ostrowski’s Method for Solving Nonlinear Equations and Systems. J. Mech. Eng. Autom. 2023, 13, 1–6. [Google Scholar] [CrossRef]
  14. Ivanov, S.I. General Local Convergence Theorems about the Picard Iteration in Arbitrary Normed Fields with Applications to Super–Halley Method for Multiple Polynomial Zeros. Mathematics 2020, 8, 1599. [Google Scholar] [CrossRef]
  15. Coclite, G.M.; Fanizzi, A.; Lopez, L.; Maddalena, F.; Pellegrino, S.F. Numerical methods for the nonlocal wave equation of the peridynamics. Appl. Numer. Math. 2020, 155, 119–139. [Google Scholar] [CrossRef]
  16. Darvishi, M.T.; Barati, A. A fourth-order method from quadrature formulae to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 188, 257–261. [Google Scholar] [CrossRef]
  17. Nisha, S.; Parida, P.K. Super-Halley method under majorant conditions in Banach spaces. Cubo (Temuco) 2020, 22, 55–70. [Google Scholar] [CrossRef]
  18. Putri, R.Y.; Wartono, W. Modifikasi metode Schroder tanpa turunan kedua dengan orde konvergensi empat. Aksioma J. Mat. Dan Pendidik. Mat. 2020, 11, 240–251. [Google Scholar] [CrossRef]
  19. Sharma, J.R.; Kumar, D.; Argyros, I.K. An efficient class of Traub-Steffensen-like seventh order multiple-root solvers with applications. Symmetry 2019, 11, 518. [Google Scholar] [CrossRef]
  20. Jamaludin, N.A.A.; Nik Long, N.M.A.; Salimi, M.; Sharifi, S. Review of Some Iterative Methods for Solving Nonlinear Equations with Multiple Zeros. Afr. Mat. 2019, 30, 355–369. [Google Scholar] [CrossRef]
  21. Traub, J.F. Iterative Methods for the Solution of Equations; American Mathematical Society: Providence, RI, USA, 1982; Volume 312. [Google Scholar]
  22. Ostrowski, A.M. Solution of Equations in Euclidean and Banach Spaces; Academic Press: Cambridge, MA, USA, 1973; Available online: https://cir.nii.ac.jp/crid/1130282272784494208 (accessed on 11 October 2023).
  23. Kumar, M.; Singh, A.K.; Srivastava, A. A new fifth-order derivative free Newton-type method for solving nonlinear equations. Appl. Math. Inf. Sci. 2015, 9, 1507. [Google Scholar]
  24. Choubey, N.; Jaiswal, J.P. A derivative-free method of eighth-order for finding simple root of nonlinear equations. Commun. Numer. Anal. 2015, 2, 90–103. [Google Scholar] [CrossRef]
  25. Sharma, E.; Panday, S.; Dwivedi, M. New Optimal Fourth Order Iterative Method for Solving Nonlinear Equations. Int. J. Emerging Technol. 2020, 11, 755–758. Available online: https://www.researchtrend.net/ijet/pdf/New%20Optimal%20Fourth%20Order%20Iterative%20Method%20for%20Solving%20Nonlinear%20Equations%20Ekta%20Sharma%202481.pdf (accessed on 11 October 2023).
  26. Panday, S.; Sharma, A.; Thangkhenpau, G. Optimal fourth and eighth-order iterative methods for non-linear equations. J. Appl. Math. Comput. 2022, 69, 953–971. [Google Scholar] [CrossRef]
  27. Singh, M.K.; Singh, A.K. The optimal order Newton’s like methods with dynamics. Mathematics 2021, 9, 527. [Google Scholar] [CrossRef]
  28. Solaiman, O.S.; Hashim, I. Optimal eighth-order solver for nonlinear equations with applications in chemical engineering. Intell. Autom. Soft Comput. 2020, 13, 87–93. [Google Scholar] [CrossRef]
  29. Chanu, W.H.; Panday, S. Excellent Higher Order Iterative Scheme for Solving Non-linear Equations. Iaeng Int. J. Appl. Math. 2022, 52, 1–7. Available online: https://www.iaeng.org/IJAM/issues_v52/issue_1/IJAM_52_1_18.pdf (accessed on 11 October 2023).
  30. Nouri, K.; Ranjbar, H.; Torkzadeh, L. Two High Order Iterative Methods for Roots of Nonlinear Equations. Punjab Univ. J. Math. 2020, 51. Available online: http://journals.pu.edu.pk/journals/index.php/pujm/article/viewFile/3339/1452 (accessed on 11 October 2023).
  31. Grau, M.; Díaz-Barrero, J.L. An improvement of the Euler–Chebyshev iterative method. J. Math. Anal. Appl. 2006, 315, 1–7. [Google Scholar] [CrossRef]
  32. Neta, B. A new derivative-free method to solve nonlinear equations. Mathematics 2021, 9, 583. [Google Scholar] [CrossRef]
  33. Chanu, W.H.; Panday, S.; Thangkhenpau, G. Development of Optimal Iterative Methods with Their Applications and Basins of Attraction. Symmetry 2022, 14, 2020. [Google Scholar] [CrossRef]
  34. Thangkhenpau, G.; Panday, S.; Mittal, S.K.; Jäntschi, L. Novel Parametric Families of with and without Memory Iterative Methods for Multiple Roots of Nonlinear Equations. Mathematics 2023, 11, 2036. [Google Scholar] [CrossRef]
  35. Lotfi, T.; Soleymani, F.; Noori, Z.; Kılıçman, A.; Khaksar Haghani, F. Efficient iterative methods with and without memory possessing high efficiency indices. Discret. Dyn. Nat. Soc. 2014, 2014, 912796. [Google Scholar] [CrossRef]
  36. Džunić, J. On efficient two-parameter methods for solving nonlinear equations. Numer. Algorithms 2013, 63, 549–569. [Google Scholar] [CrossRef]
  37. Maroju, P.; Magreñán, Á.A.; Motsa, S.S.; Sarría, Í. Second derivative free sixth order continuation method for solving nonlinear equations with applications. J. Math. Chem. 2018, 56, 2099–2116. [Google Scholar] [CrossRef]
  38. Weerakoon, S.; Fernando, T. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
Figure 1. Comparison of the methods based on the error in consecutive iterations, |s_n - s_{n-1}|, after the first three iterations.
Table 1. Comparison of without- and with-memory methods after the first three (n = 3) iterations for Θ_1(s).

Method               |s_1 - s_0|  |s_2 - s_1|       |s_3 - s_2|        |Θ(s_3)|            COC
NM                   0.11779      1.7434 × 10^{-1}  3.5679 × 10^{-4}   2.0508 × 10^{-7}    2.0000
OM                   0.10048      4.8190 × 10^{-4}  1.4662 × 10^{-13}  1.6842 × 10^{-51}   4.0000
NRTM                 0.099122     8.7833 × 10^{-4}  8.2482 × 10^{-15}  7.9485 × 10^{-70}   5.0000
GM                   0.098037     1.9631 × 10^{-3}  1.2383 × 10^{-12}  1.6115 × 10^{-58}   5.0000
NDM1 (α = 1, β = 1)  0.10105      1.0533 × 10^{-3}  1.3616 × 10^{-15}  7.0363 × 10^{-75}   5.0000
NDM2                 0.99749      2.5101 × 10^{-4}  6.0861 × 10^{-35}  5.7271 × 10^{-341}  10.0000
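The COC column reported in the tables is the computational order of convergence, estimated from three successive step sizes. A minimal sketch of that estimate follows; the test function Θ(s) = s² − 2, the starting point, and the helper names `coc` and `newton` are illustrative assumptions, not taken from the paper's test problems:

```python
import math

def coc(s):
    """Estimate the computational order of convergence from a list of iterates:
    rho ~ ln(e_n / e_{n-1}) / ln(e_{n-1} / e_{n-2}), with e_k = |s_{k+1} - s_k|."""
    e = [abs(s[i + 1] - s[i]) for i in range(len(s) - 1)]
    return math.log(e[-1] / e[-2]) / math.log(e[-2] / e[-3])

def newton(theta, dtheta, s0, n=4):
    """Plain Newton iteration (the NM rows in the tables); returns the iterates."""
    s = [s0]
    for _ in range(n):
        s.append(s[-1] - theta(s[-1]) / dtheta(s[-1]))
    return s

# Illustrative run: Theta(s) = s^2 - 2 with s0 = 1; Newton's COC should be near 2.
iters = newton(lambda s: s * s - 2, lambda s: 2 * s, 1.0)
print(round(coc(iters), 2))  # → 2.0
```

The same estimator applied to the tabulated |s_{n+1} - s_n| values reproduces the orders 2, 4, 5, and 10 shown in the COC column.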
Table 2. Comparison of without- and with-memory methods after the first three (n = 3) iterations for Θ_2(s).

Method               |s_1 - s_0|  |s_2 - s_1|       |s_3 - s_2|        |Θ(s_3)|            COC
NM                   0.075667     5.6350 × 10^{-3}  3.1750 × 10^{-5}   1.0081 × 10^{-9}    2.0000
OM                   0.070030     3.0385 × 10^{-5}  8.5226 × 10^{-19}  5.2758 × 10^{-73}   4.0000
NRTM                 0.069979     2.1309 × 10^{-5}  2.7830 × 10^{-23}  1.0574 × 10^{-112}  5.0000
GM                   0.069961     3.9227 × 10^{-5}  1.1149 × 10^{-21}  2.0673 × 10^{-104}  5.0000
NDM1 (α = 1, β = 1)  0.070006     6.1200 × 10^{-6}  1.3737 × 10^{-24}  7.8257 × 10^{-118}  5.0000
NDM2                 0.069984     1.5860 × 10^{-5}  3.9187 × 10^{-56}  3.9040 × 10^{-577}  10.0000
Table 3. Comparison of without- and with-memory methods after the first three (n = 3) iterations for Θ_3(s).

Method               |s_1 - s_0|  |s_2 - s_1|        |s_3 - s_2|        |Θ(s_3)|            COC
NM                   0.020781     4.1098 × 10^{-4}   1.6149 × 10^{-7}   6.8908 × 10^{-14}   2.0000
OM                   0.020370     1.4498 × 10^{-7}   3.5630 × 10^{-28}  3.5916 × 10^{-110}  4.0000
NRTM                 0.020369     1.8841 × 10^{-8}   1.0645 × 10^{-38}  1.6942 × 10^{-189}  5.0000
GM                   0.020369     3.7534 × 10^{-8}   6.9025 × 10^{-37}  4.0127 × 10^{-180}  5.0000
NDM1 (α = 1, β = 1)  0.020369     1.0344 × 10^{-10}  1.1052 × 10^{-52}  4.2540 × 10^{-262}  5.0000
NDM2                 0.020369     2.7829 × 10^{-9}   2.4637 × 10^{-84}  2.0132 × 10^{-834}  10.0000
Table 4. Comparison of without- and with-memory methods after the first three (n = 3) iterations for Θ_4(s).

Method               |s_1 - s_0|  |s_2 - s_1|        |s_3 - s_2|         |Θ(s_3)|             COC
NM                   0.0010012    1.1684 × 10^{-6}   1.5927 × 10^{-12}   8.8779 × 10^{-24}    2.0000
OM                   0.0010000    1.0065 × 10^{-12}  1.0311 × 10^{-48}   3.4070 × 10^{-192}   4.0000
NRTM                 0.0010000    9.7700 × 10^{-15}  8.6879 × 10^{-70}   1.4493 × 10^{-344}   5.0000
GM                   0.0010000    1.4655 × 10^{-14}  1.0014 × 10^{-68}   4.4761 × 10^{-339}   5.0000
NDM1 (α = 1, β = 1)  0.0010000    1.2108 × 10^{-16}  3.2123 × 10^{-81}   1.2667 × 10^{-403}   5.0000
NDM2                 0.0010000    1.5040 × 10^{-15}  9.7884 × 10^{-147}  4.0028 × 10^{-1458}  10.0000
Table 5. Comparison of without- and with-memory methods after the first three (n = 3) iterations for Θ_5(s).

Method                   |s_1 - s_0|  |s_2 - s_1|       |s_3 - s_2|        |Θ(s_3)|            COC
NM                       0.41667      8.0128 × 10^{-2}  3.2000 × 10^{-3}   1.0240 × 10^{-4}    1.9990
OM                       0.33654      3.2051 × 10^{-3}  1.3107 × 10^{-11}  7.3787 × 10^{-45}   4.0000
NRTM                     0.31962      1.3711 × 10^{-2}  2.2606 × 10^{-10}  5.1662 × 10^{-49}   5.0000
GM                       0.30867      2.4662 × 10^{-2}  7.7382 × 10^{-9}   4.1619 × 10^{-41}   5.0000
NDM1 (α = 0.1, β = 0.01) 0.33269      1.3872 × 10^{-3}  7.1489 × 10^{-16}  1.3430 × 10^{-78}   5.1293
NDM2                     0.33216      1.8261 × 10^{-3}  4.8530 × 10^{-29}  1.6984 × 10^{-284}  10.0000
Table 6. Comparison of without- and with-memory methods after the first three (n = 3) iterations for Θ_6(s).

Method                   |s_1 - s_0|  |s_2 - s_1|       |s_3 - s_2|       |Θ(s_3)|           COC
NM                       1.1339       5.4470 × 10^{-1}  1.9126 × 10^{-1}  3.6580 × 10^{-2}   2.0935
OM                       1.6641       1.8131 × 10^{-1}  2.6397 × 10^{-4}  1.6190 × 10^{-15}  4.0000
NRTM                     1.6753       1.7655 × 10^{-1}  2.2683 × 10^{-4}  8.0963 × 10^{-19}  5.0000
GM                       1.6385       2.1410 × 10^{-1}  1.0058 × 10^{-3}  2.3909 × 10^{-15}  5.0003
NDM1 (α = 0.1, β = 0.01) 1.7591       1.3677 × 10^{-1}  1.9137 × 10^{-5}  1.0706 × 10^{-24}  5.1009
NDM2                     1.7514       1.3873 × 10^{-1}  1.8773 × 10^{-9}  3.2210 × 10^{-88}  10.0000
Table 7. Comparison of without- and with-memory methods after the first three (n = 3) iterations for Θ_7(s).

Method               |s_1 - s_0|  |s_2 - s_1|        |s_3 - s_2|         |Θ(s_3)|             COC
NM                   0.00011423   2.3586 × 10^{-10}  1.0054 × 10^{-21}   3.5263 × 10^{-45}    2.0000
OM                   0.00011423   2.8491 × 10^{-16}  7.5636 × 10^{-67}   7.2512 × 10^{-270}   4.0000
NRTM                 0.00011423   2.8491 × 10^{-16}  1.1839 × 10^{-82}   2.8305 × 10^{-415}   5.0000
GM                   0.00011423   2.8491 × 10^{-16}  2.5115 × 10^{-82}   2.5802 × 10^{-413}   5.0000
NDM1 (α = 1, β = 1)  0.00011423   1.2212 × 10^{-19}  1.7056 × 10^{-94}   1.7490 × 10^{-469}   5.0000
NDM2                 0.00011423   1.0938 × 10^{-23}  1.4380 × 10^{-233}  2.4917 × 10^{-2341}  10.0000
Table 8. Comparison of without- and with-memory methods after the first three (n = 3) iterations for Θ_8(s).

Method               |s_1 - s_0|  |s_2 - s_1|        |s_3 - s_2|         |Θ(s_3)|             COC
NM                   0.000093269  2.4434 × 10^{-9}   1.6769 × 10^{-18}   6.6965 × 10^{-37}    2.0000
OM                   0.000093272  7.7299 × 10^{-18}  4.9129 × 10^{-71}   6.7973 × 10^{-284}   4.0000
NRTM                 0.000093272  7.7299 × 10^{-18}  5.7906 × 10^{-88}   1.1583 × 10^{-438}   5.0000
GM                   0.000093272  7.7299 × 10^{-18}  1.3540 × 10^{-87}   1.8930 × 10^{-436}   5.0000
NDM1 (α = 1, β = 1)  0.000093272  4.0787 × 10^{-19}  6.5223 × 10^{-91}   5.7827 × 10^{-450}   5.0000
NDM2                 0.000093272  2.7995 × 10^{-22}  5.0470 × 10^{-217}  6.7502 × 10^{-2169}  10.0000
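The derivative-free rows above replace the derivative Θ′(s_n) with a first-order divided difference built from Θ(s_n) and Θ(s_n + βΘ(s_n)). A minimal sketch of that generic Traub–Steffensen building block follows; this is the classical second-order Steffensen-type step shown only to illustrate the divided-difference idea, not the paper's fifth- or tenth-order NDM1/NDM2 schemes, and the test function is an assumption:

```python
def steffensen(theta, s0, beta=1.0, tol=1e-12, max_iter=50):
    """Derivative-free iteration: Theta'(s) is approximated by the divided
    difference [Theta(w) - Theta(s)] / (w - s), where w = s + beta * Theta(s)."""
    s = s0
    for _ in range(max_iter):
        t = theta(s)
        if abs(t) < tol:
            break
        w = s + beta * t
        dd = (theta(w) - t) / (w - s)  # first-order divided difference
        s = s - t / dd                 # Newton-like step, no derivative needed
    return s

# Illustrative run (not one of the paper's test problems): root of s^2 - 2.
root = steffensen(lambda s: s * s - 2, 1.0)
print(abs(root - 2 ** 0.5) < 1e-10)  # → True
```

The with-memory variants in the tables go further by updating β (and a second parameter) from the previous iterate, which is how the order rises without extra function evaluations.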

Sharma, E.; Panday, S.; Mittal, S.K.; Joița, D.-M.; Pruteanu, L.L.; Jäntschi, L. Derivative-Free Families of With- and Without-Memory Iterative Methods for Solving Nonlinear Equations and Their Engineering Applications. Mathematics 2023, 11, 4512. https://doi.org/10.3390/math11214512


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
