1. Introduction
Many problems in the computational sciences and other disciplines can be modelled in the form of a non-linear equation or a system of such equations. In particular, a large number of problems in applied mathematics and engineering are solved by finding the solutions of these equations. In the literature, several iterative methods have been designed by using different procedures to approximate the simple roots of a non-linear equation

f(x) = 0, (1)

where f is a real function defined in an open interval I. To find the roots of Equation (1), we look towards iterative schemes. Many iterative methods of different convergence orders already exist in the literature (see [1,2] and the references therein) to approximate the roots of Equation (1). Among them, the most eminent one-point iterative method without memory is the quadratically convergent Newton–Raphson scheme [3] given by

x_{n+1} = x_n - f(x_n)/f'(x_n), n = 0, 1, 2, ...
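As a concrete illustration, the Newton–Raphson iteration can be sketched in code (a minimal example on the hypothetical test function f(x) = x^2 - 2, which is not one of the paper's test problems):

```python
# Newton-Raphson iteration: x_{n+1} = x_n - f(x_n)/f'(x_n).
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        dfx = fprime(x)
        if dfx == 0:
            # the method fails when f'(x_n) = 0
            raise ZeroDivisionError("f'(x_n) = 0: Newton-Raphson breaks down")
        x_new = x - f(x) / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Approximates the simple root sqrt(2) of f(x) = x^2 - 2.
root = newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0)
```

The explicit guard reflects the drawback discussed in the text: the step is undefined when f'(x_n) = 0.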
One drawback of this method is that when f'(x_n) = 0, the method fails, which confines its applications. The primary objective in designing iterative methods for solving this kind of problem is to obtain the highest possible order of convergence at the least computational cost. Therefore, many researchers are interested in constructing optimal multipoint methods [4] without memory, in the sense of the Kung–Traub conjecture [5], which states that multipoint iterative methods without memory, requiring n functional evaluations per iteration, have a convergence order of at most 2^(n-1)
. Among them, an optimal fourth-order iterative method was developed by Kou et al. [
6] defined by
Further, Kansal et al. proposed an optimal fourth-order iterative method [7], involving two parameters, defined by
Soleymani developed an optimal fourth-order method [
8] given by
Furthermore, an optimal-order method was proposed by Chun et al. [
9] given by
On the other hand, sometimes it is possible to increase the order of convergence without any new function evaluation based on acceleration parameter(s) which appear in the error equation of the multipoint methods without memory. It was Traub [
3], who slightly altered Steffensen’s method [
10] and presented the first method with memory as follows:
This method has an order of convergence of 2.414. Moreover, if a better self-accelerating parameter is used, the order of convergence can be increased further.
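Assuming the standard form of Traub's accelerated Steffensen scheme (w_k = x_k + g_k f(x_k), a divided-difference step, and the self-accelerating update g_{k+1} = -1/f[x_k, x_{k+1}]), a minimal sketch is given below; the test function x^2 - 2 and the initial parameter g_0 are illustrative choices:

```python
# Traub's method with memory (sketch, assuming the standard form):
#   w_k     = x_k + g_k * f(x_k)
#   x_{k+1} = x_k - f(x_k) / f[x_k, w_k]
#   g_{k+1} = -1 / f[x_k, x_{k+1}]        (self-accelerating parameter)
# Its R-order 1 + sqrt(2) ~ 2.414 is the positive root of s^2 - 2s - 1 = 0.
def traub_with_memory(f, x0, g0=0.01, tol=1e-14, max_iter=50):
    x, g = x0, g0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        w = x + g * fx
        dd = (f(w) - fx) / (w - x)            # divided difference f[x_k, w_k]
        x_new = x - fx / dd
        g = -(x_new - x) / (f(x_new) - fx)    # g_{k+1} = -1 / f[x_k, x_{k+1}]
        x = x_new
    return x

root = traub_with_memory(lambda x: x**2 - 2, 1.0)
```

The parameter g is updated from quantities already computed, so no extra function evaluation is needed per iteration.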
Following the steps of Traub, many authors have constructed higher-order methods with and without memory. Among many others, Chicharro et al. [
11] presented a bi-parametric family of order four and then developed a family of methods with memory having a higher order of convergence without further increasing the number of functional evaluations per iteration. In [
12], the authors presented a derivative-free form of King’s family with memory. The authors in [
13] developed a tri-parametric derivative-free family of Hansen–Patrick-type methods which requires only three functional evaluations to achieve optimal fourth-order convergence. They then extended the idea to a family with memory, as a result of which the R-order of convergence increased from four to seven without any additional functional evaluation.
The development of such methods has increased over the years. Some applications of these iterative methods can be seen in [
14,
15,
16,
17]. Thus, by taking into consideration these developments, we further attempt to propose an iterative method without memory and then convert it into a more efficient method with memory such that the order of convergence is increased without any further functional evaluation.
However, another important aspect of an iterative scheme is its stability, that is, how sensitive the scheme is to the choice of initial guesses. In this regard, a comparison of iterative methods by means of their basins of attraction was developed by Ardelean [18]. This motivates us to work on optimal-order methods and their variants with memory, along with their basins of attraction.
The rest of the paper is organized as follows.
Section 2 contains the development of a new iterative method without memory and the proof of its order of convergence.
Section 3 covers the inclusion of memory to develop a new iterative method with memory and its error analysis. Numerical results for the proposed methods and comparisons with some of the existing methods to illustrate our theoretical results are given in
Section 4.
Section 5 depicts the convergence of the methods using basins of attraction. Lastly,
Section 6 collates the conclusions.
R-Order of Convergence
For finding the R-order of convergence [19] of our proposed method with memory, we make use of the following theorem, given by Traub.
Theorem 1. Suppose that (IM) is an iterative method with memory that generates a sequence {x_k} (converging to the root ξ) of approximations to ξ. If there exists a non-zero constant ζ and non-negative numbers t_i, i = 0, 1, ..., m, such that the inequality

|e_{k+1}| ≤ ζ |e_k|^{t_0} |e_{k-1}|^{t_1} ... |e_{k-m}|^{t_m}, where e_k = x_k - ξ,

holds, then the R-order of convergence of the iterative method (IM) satisfies the inequality

O_R(IM, ξ) ≥ s*,

where s* is the unique positive root of the equation

s^{m+1} - t_0 s^m - t_1 s^{m-1} - ... - t_m = 0.

2. Iterative Method without Memory and Its Convergence Analysis
In this section, we aim to construct a new two-point derivative-free optimal scheme without memory and then extend it to a scheme with memory.
If the well-known Steffensen’s method is combined with Newton’s method, we obtain the following fourth-order scheme:
where
. To avoid the computation of
, the authors in [
20] approximated it by the derivative
of the following first-degree Padé approximant:
where
,
and
are real parameters to be determined satisfying the following conditions:
Using these conditions, the derivative of the Padé approximant evaluated at
is given as
Using (
14) in the second step of (
9), they presented the following scheme:
where
. This scheme is optimal in the sense of the Kung–Traub conjecture, having an order of convergence of four with three functional evaluations per iteration.
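Since the Padé-based derivative approximation of scheme (15) is not reproduced here, the following sketch illustrates only the underlying Steffensen–Newton combination (9), assuming its standard form w_n = x_n + f(x_n), y_n = x_n - f(x_n)/f[x_n, w_n], x_{n+1} = y_n - f(y_n)/f'(y_n); the test function is illustrative:

```python
# Steffensen step followed by a Newton step (a sketch of scheme (9),
# assuming the standard form; scheme (15) replaces f'(y_n) by the
# derivative of the first-degree Pade approximant).
def steffensen_newton(f, fprime, x0, tol=1e-14, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        w = x + fx                              # w_n = x_n + f(x_n)
        y = x - fx / ((f(w) - fx) / (w - x))    # Steffensen step via f[x_n, w_n]
        x = y - f(y) / fprime(y)                # Newton correction
    return x

root = steffensen_newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.5)
```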
Now, in order to extend the method to a method with memory, we introduce two parameters in (15) and present a modification of this method as follows:
where
.
This modified scheme yields the optimal order of convergence four with three functional evaluations per iteration.
Next, we establish the convergence results for our proposed family without memory given by Equation (
16).
Theorem 2. Suppose that f is a real function suitably differentiable in a domain D. If ξ is a simple root of f and an initial guess is sufficiently close to ξ, then the iterative method given by Equation (16) converges to ξ with convergence order four, having the following error relation, where , ξ is a simple root of and Proof. Expanding
about
by the Taylor series, we have
Using Equation (
17) in the first step of Equation (
16), we have
In addition, the Taylor’s expansion of
is
Using Equations (
17)–(
19), we have
Finally, putting Equation (
20) into the second step of Equation (
16), we obtain
which is the error equation for the proposed optimal scheme given by Equation (
16) with a convergence order of four. This completes the proof. □
3. Iterative Method with Memory and Its Convergence Analysis
Now, we present an extension to the method given by Equation (
16) by the inclusion of memory to improve the convergence order without the addition of any new functional evaluations.
It can be seen from the error relation given in Equation (
21) that the order of convergence of the proposed family given by Equation (
16) is 4 if
and
. Therefore, if
and
, then the order of convergence of our proposed family can be improved, but this value cannot be reached because the values of
and
are not practically available. Instead, we can use approximations calculated by already available information [
21]. Hence, the main idea in constructing the methods with memory consists of the calculation of parameters
and
as the iteration proceeds by the formulae,
for
Further, it is also assumed that the initial estimates
and
must be chosen before starting the iterations. Thus, we give an estimation for
and
given by
where
and
are Newton’s interpolating polynomials of the third- and fourth-degrees, respectively, which are set through the best available nodal points,
for
and
for
.
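The self-accelerating updates rely on differentiating Newton's interpolating polynomial through the best available nodes. A generic sketch follows (the routine names are illustrative; the paper uses degrees three and four on the nodal points listed above):

```python
# Derivative of the Newton interpolating polynomial through nodes xs
# with values ys, evaluated at t (used to update the self-accelerating
# parameters without extra function evaluations).
def divided_differences(xs, ys):
    coef = list(ys)                # becomes the top row of the DD table
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_poly_derivative(xs, ys, t):
    # N(t) = c0 + c1 (t - x0) + c2 (t - x0)(t - x1) + ...
    # Differentiate each term by the product rule and evaluate at t.
    c = divided_differences(xs, ys)
    deriv = 0.0
    for k in range(1, len(c)):
        s = 0.0
        for j in range(k):
            p = 1.0
            for i in range(k):
                if i != j:
                    p *= t - xs[i]
            s += p
        deriv += c[k] * s
    return deriv
```

For a cubic such as f(x) = x^3 sampled at four nodes, the interpolant is exact, so the routine reproduces f'(t) = 3t^2.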
Thus, by replacing
by
and
by
in the method given by Equation (
16), we obtain a new family with memory as follows:
Next, we establish the convergence results for our proposed family with memory given by Equation (
23).
Theorem 3. Suppose that f is a real function suitably differentiable in a domain D. If ξ is a simple root of f and an initial guess is sufficiently close to ξ, then the iterative method given by Equation (23) converges to ξ with a convergence order of at least 7. Proof. Let
be a sequence of approximations generated by an iterative method
. If this sequence converges to the zero
of
f with the
R-order
of
, then we can write
where
tends to the asymptotic error constant
of
, when
. Thus,
Let the iterative sequences
and
have
R-orders
and
, respectively. Therefore, we obtain
and
Using (
26), (
27) and a lemma stated in [
13], we obtain
In view of our proposed family of methods without memory given by Equation (
16), we have the following error relations,
where
.
According to the error relations given by Equations (
29)–(31) with self-accelerating parameters,
and
, we can write the corresponding error relations for the methods given by Equation (
23) with memory as follows:
where
depends on the iteration index, since
and
are re-calculated in each step. Now using Equations (
28) and (
32)–(34), we obtain the following relations:
Now, comparing the error exponents of
on the right-hand side of the pairs given by Equations (
26) with (
35), (
27) with (36) and (
25) with (37), respectively, we obtain the following system of equations:
Solving this system of equations, we obtain a non-trivial solution as
. Hence, we can conclude that the lower bound of the R-order of our proposed family with memory given by Equation (
23) is seven. This completes our proof. □
4. Numerical Results
In this section, the numerical results of our proposed schemes are examined. Furthermore, we demonstrate the corresponding results in comparison with some existing schemes, both with and without memory. All calculations have been performed using Mathematica 11.1 in a multiple-precision arithmetic environment on an Intel(R) Core(TM) i5-1035G1 CPU @ 1.00 GHz with a 64-bit operating system (Windows 11). We suppose that the initial values of (or ) and (or ) must be selected prior to performing the iterations and that a suitable initial guess be given.
The functions used for our computations are given in
Table 1.
To check the theoretical order of convergence, the computational order of convergence (COC) [22] is calculated using the following formula,

COC ≈ ln(|x_{k+1} − ξ|/|x_k − ξ|) / ln(|x_k − ξ|/|x_{k−1} − ξ|),

considering the last three approximations in the iterative procedure. The errors of the approximations to the respective zeros of the test functions
and COC are displayed in
Table 2 and
Table 3.
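The COC computation can be sketched as follows (assuming the error-based variant built from the last three approximations and the known zero ξ, which is an assumption here; reference [22] may define it slightly differently). Newton iterates for the hypothetical function f(x) = x^2 - 2 serve as a check:

```python
import math

# COC = ln(|x_{k+1} - xi| / |x_k - xi|) / ln(|x_k - xi| / |x_{k-1} - xi|)
def coc(x_km1, x_k, x_kp1, xi):
    e0, e1, e2 = abs(x_km1 - xi), abs(x_k - xi), abs(x_kp1 - xi)
    return math.log(e2 / e1) / math.log(e1 / e0)

# Newton iterates for f(x) = x^2 - 2 should exhibit COC close to 2.
xs = [1.5]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x**2 - 2) / (2 * x))
order = coc(xs[1], xs[2], xs[3], math.sqrt(2))
```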
We consider the following existing methods for the comparisons:
Soleymani et al. method (
) without memory [
23]:
Cordero et al. method (
) without memory [
20]:
Chun method (
) without memory [
24]:
Cordero et al. method (
) with memory [
25]:
where
and
are as defined in
Section 3.
Džunić method (
and
) with memory [
26]:
where
and
are as defined in
Section 3.
Furthermore, we consider some real-life problems, which are as follows:
Example 1. Fractional conversion in a chemical reactor [27],Here, x denotes the fractional conversion of quantities in a chemical reactor. If x is less than zero or greater than one, then the above fractional conversion will be of no physical meaning. Hence, x is taken to be bounded in the region . Moreover, the desired root is . Example 2. The path traversed by an electron in the air gap between two parallel plates considering the multi-factor effect is given bywhere and are the position and velocity of the electron at time , m and are the mass and charge of the electron at rest and is the RF electric field between the plates. If particular parameters are chosen, Equation (45) can be simplified asThe desired root of Equation (46) is . We also implemented our proposed schemes given by Equations (
16) and (
23) on the above-mentioned problems.
Table 4 and
Table 5 demonstrate the corresponding results. Further,
Table 2 demonstrates COC for our proposed method without memory (
) given by Equation (
16), the method given by Equation (
39) denoted as
, the method given by Equation (
40) denoted as
, and the method given by Equation (
41) denoted as
, respectively.
Table 3 demonstrates COC for our proposed method with memory (
) given by Equation (
23), the method given by Equation (
42) denoted as
, and the method given by Equation (
43) by taking
denoted as
and
denoted by
, respectively.
It can be seen from
Table 2 and
Table 3 that for the function
,
fails to provide a solution and
requires more than three iterations to converge to the root. Furthermore,
converges to the desired root with an error of approximations much lower than
and
. For the function
,
,
and
fail to provide a solution and
and
do not converge to the desired solution within three iterations.
has a somewhat complex structure, and as a consequence takes more time than our method
in most cases to converge to the root. Furthermore,
and
converge to the root taking more time than
and
, respectively.
has the drawback of involving a derivative, so it will not work at points where the derivative of the function is zero or close to zero.
Furthermore, for functions , and , the proposed methods and converge to the required root with minimum error compared to the existing methods.
Hence, we can conclude that our methods work on several functions to obtain roots, whereas the existing methods have some limitations.
Remark 1. The proposed schemes given by Equations (16) and (23) have been compared to some already existing methods and it can be seen from the computational results that our proposed schemes give results in many cases where the existing methods fail in terms of COC and errors, as depicted in Table 2, Table 3, Table 4 and Table 5. Our methods display a noticeable decrease in approximation errors, as shown in the above-mentioned tables. Remark 2. From Table 4 and Table 5, one can observe that for the function , the existing method fails to converge. In addition, for the function , an obvious decrease in the order of convergence of the existing methods is noticeable. 5. Basins of Attraction
The basin of attraction of a root ξ of Equation (1) is the set of all initial points in the complex plane that converge to ξ on application of the given iterative scheme. Our objective is to use basins of attraction to compare several root-finding iterative methods in the complex plane in terms of convergence and stability.
On this front, we take a
grid of the rectangle
. A colour is assigned to each point
on the basis of the convergence of the corresponding method starting from
to the simple root and if the method diverges, a black colour is assigned to that point. Thus, distinct colours are assigned to the distinct roots of the corresponding problem. It was decided that an initial point
converges to a root
when
. Then, point
is said to belong to the basins of attraction of
. Likewise, the method beginning from the initial point
is said to diverge if no root is located in a maximum of 25 iterations. We have used MATLAB R2022a software [
28] to draw the presented basins of attraction.
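The construction can be illustrated with a short sketch. The paper draws its basins with MATLAB R2022a; the Python fragment below is purely illustrative, using Newton's method on the hypothetical map f(z) = z^3 - 1, a coarse 50 x 50 grid over [-2, 2] x [-2, 2], and arbitrary tolerances rather than the paper's settings:

```python
# Classify each grid point by the cube root of unity to which Newton's
# iteration for f(z) = z^3 - 1 converges; points not converging within
# 25 iterations are marked -1 (painted black in a basin plot).
def basin_index(z, roots, tol=1e-3, max_iter=25):
    for _ in range(max_iter):
        if abs(z) < 1e-12:                    # derivative 3z^2 vanishes
            return -1
        z = z - (z**3 - 1) / (3 * z**2)       # Newton step for z^3 - 1
        for i, r in enumerate(roots):
            if abs(z - r) < tol:
                return i
    return -1

roots = [1, complex(-0.5, 3**0.5 / 2), complex(-0.5, -3**0.5 / 2)]
n = 50                                        # coarse grid for the sketch
grid = [basin_index(complex(-2 + 4 * i / (n - 1), -2 + 4 * j / (n - 1)), roots)
        for i in range(n) for j in range(n)]
diverging = grid.count(-1)                    # points that would be painted black
```

Colouring each grid point by its index (and black for -1) yields the basin picture; recording the iteration count per point would additionally give Avg_Iter.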
Furthermore,
Table 6 lists the average number of iterations denoted by Avg_Iter and the percentage of non-converging points denoted by
of the methods to generate the basins of attraction.
To carry out the desired comparisons, we considered the test problems given below:
Problem 1. The first function considered is . The roots of this function are 1 and . The basins corresponding to our proposed method and the existing methods are shown in Figure 1 and Figure 2. From Table 6, it can be seen that the proposed methods, and , converge to the root in fewer iterations. Furthermore, from the figures, it is observed that converges to the root with no diverging points, whereas the existing methods have some points painted black; , in particular, has very small basins. Problem 2. The second function taken is with roots , and . Figure 3 and Figure 4 show the basins for , in which it can be seen that , and have wider regions of divergence. Moreover, the average number of iterations taken by the proposed methods is lower in each case than that taken by the existing methods. Problem 3. The third function considered is with roots and . Figure 5 and Figure 6 show that , and have smaller basins. Although and have some diverging points, they converge in fewer iterations than the existing methods. Therefore, from
Figure 1,
Figure 2,
Figure 3,
Figure 4,
Figure 5 and
Figure 6, it can be observed that
has larger basins in comparison to
and
in all cases. The basins for
are very small in comparison to
in all cases. In addition, from
Table 6, we observe that the average number of iterations taken by the methods
is more than
and for
, the iterations required are more than
.
Remark 3. One can see from Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6 and Table 6 that our proposed methods have larger basins of attraction than the existing ones. In addition, there is a marginal increase in the average number of iterations per point for the existing methods. Consequently, with our proposed methods, the chances of non-convergence to the root are lower than with the existing methods.