1. Introduction
The solution of nonlinear equations is a typical problem in engineering and experimental sciences. Problems with gases, liquids, and mechanics require computing the roots of equations by iterative methods. Among the chemistry problems that require solving this kind of equation are chemical equilibrium problems, global reaction rates in packed bed reactors [1], radiative transfer [2], continuous stirred tank reactors (see [3]) and the simulation of flow transport in a pipe [4]. In general, these equations cannot be solved analytically, and we must resort to iterative methods to approximate their solutions. With the increasing speed of computers, numerical techniques have become indispensable for scientists and engineers. The principle of these methods is to approach the solution of a nonlinear equation of the form f(x) = 0 through a sequence of iterates, starting from an initial estimation x_0 of the solution. The best known and most widely used method to solve nonlinear equations is Newton's scheme, whose iterative expression is
x_{k+1} = x_k - f(x_k)/f'(x_k), k = 0, 1, 2, ...,
and it presents quadratic convergence. This method requires two functional evaluations per iteration: one of the function and one of its first derivative. Kung-Traub's conjecture [5] states that an iterative method without memory for finding a simple zero of a scalar equation is optimal if its order of convergence is equal to 2^(d-1), where d is the number of functional evaluations per iteration. Therefore, Newton's method is optimal.
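As a minimal sketch of the scheme above, Newton's iteration can be coded in a few lines; the test function x^2 - 2 and the tolerance are arbitrary choices for the example.

```python
def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Newton's method: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        # one function and one derivative evaluation per iteration
        x = x - fx / df(x)
    return x

# Example: approximate the positive root of f(x) = x^2 - 2
root = newton(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.5)
```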
Many variants of Newton's scheme have been constructed by means of several techniques, setting different multistep schemes. Some of them use Adomian decomposition (see [6], for instance). Other iterative schemes involve higher-order derivatives, approximated in some way, as in Chebyshev-type methods [7]. Moreover, the direct composition of known methods, with a later treatment to reduce the number of functional evaluations, is often used to generate new methods. Indeed, a weight-function procedure has recently been used to increase the convergence order of known methods [8,9] in order to obtain optimal schemes.
Moreover, it is known that the sensitivity of a scheme to the initial seeds increases as the order does [10]. It is widely accepted that the dynamical behavior of the rational function related to an iterative scheme provides us with important information about its stability and reliability [11]. In these terms, Amat et al. [12] described the dynamical performance of some known families of iterative methods. More recently, in [9,13,14,15,16,17], different authors have analyzed the qualitative behavior of several known methods or classes of iterative schemes. Most of these studies reveal elements with very stable behavior, which proves useful in practice, as well as different pathological performances, such as attracting fixed points different from the solution of the problem, periodic orbits, etc. The key tool for understanding the behavior of the different members of a family is the parameter plane.
We aim to design a new multipoint fixed-point class, without memory, that improves or brings together existing ones, such as those appearing in [18,19], in areas such as computational efficiency, stability and convergence order. There exist many optimal iterative methods with different orders of convergence, such as [15]; some of them can be grouped as special cases of our proposed class of iterative procedures.
Let us look for the conditions that the parameter α and the weight function G, from the iterative expression
must meet to reach fourth order of convergence, where the variable of the weight function is .
The rest of the paper is organized as follows: in Section 2, we prove the fourth-order convergence of the proposed class of iterative methods under some conditions. Moreover, a particular subclass is presented, and some known methods are found to be special cases of our proposed family. Section 3 is devoted to the qualitative study of our proposed parametric class of iterative schemes, giving rise to some stable and unstable members, which are numerically checked in Section 4 along with other known methods. These numerical tests show their good performance in both academic and applied problems, such as the Colebrook-White equation.
2. Convergence Analysis
In the next result, we present sufficient conditions on the weight function G and the parameter α that guarantee the convergence of the proposed class.
Theorem 1. Let f be a sufficiently differentiable function in an open interval I and consider a simple root of the equation f(x) = 0. Let G be a real function satisfying , , and . If  and we choose an initial approximation close enough to the root, then the family of iterative methods defined by (3) satisfies the error equation below:
where , , and , .
Therefore, all the members of class (3) converge to the root with order of convergence four.
Proof. Let  be a simple zero of f. As f is a sufficiently differentiable function, using the Taylor expansions of f and f' about the root, we obtain
and
From these expressions, we obtain the first step
Expanding  in Taylor series around , and then combining these expressions,
Let us represent function G by its Taylor polynomial of third order around 1, as its variable tends to 1 when k tends to infinity:
To achieve order of convergence four, it is necessary to force the coefficients of  to be null. Then, we obtain that the following conditions are needed: , , and .
By substituting them in the error equation, we obtain that it is a function of , and  must be satisfied. □
Some known schemes can be found as particular cases of family (3) satisfying all the conditions of Theorem 1. Firstly, the well-known Jarratt's method [20], with iterative expression
where .
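For reference, a sketch of the classical Jarratt iteration (three functional evaluations per step: f and f' at x_k, and f' at an auxiliary point; order four) is given below; the test function x^2 - 2 is an arbitrary choice.

```python
def jarratt(f, df, x0, tol=1e-12, max_iter=50):
    """Classical Jarratt scheme: fourth-order, three evaluations per step."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dfx = df(x)
        y = x - (2.0 / 3.0) * fx / dfx  # auxiliary point
        dfy = df(y)
        # Jarratt correction of the Newton step
        x = x - (3 * dfy + dfx) / (6 * dfy - 2 * dfx) * fx / dfx
    return x

# Example: positive root of f(x) = x^2 - 2
r = jarratt(lambda x: x**2 - 2, lambda x: 2 * x, 1.5)
```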
Moreover, the method designed by Hueso et al. in [21], with iterative expression
where ,
and the scheme from Khattri and Abbasbandy constructed in [22], defined as
are particular members of the most general proposed class of iterative methods (3), with .
On the other hand, by using the conditions deduced in the previous result, we select a particular subclass of iterative methods, depending on a parameter α, whose iterative expression is
Let us remark that all the members of this family of iterative schemes have fourth order of convergence, as they satisfy all the hypotheses of Theorem 1. The differences in their performance can be studied by using the tools of complex discrete dynamics; thus, the size of the sets of converging initial estimations can be deduced, depending on α.
3. Stability Analysis
In order to study the dynamical behavior of the iterative methods described in (5), it is necessary to recall a few concepts. Let us consider a rational function R. The orbit of a point  is defined as the set of its successive images under the operator R [12,23]:
where R^n means R applied n times on the point. In our context, R is obtained by applying the class of iterative methods to a quadratic polynomial.
Thus, a fixed point z_F of R satisfies R(z_F) = z_F. It is worth noticing that it is possible to find fixed points of R that are not roots of the polynomial; in this case, those points are called strange fixed points. The stability of a fixed point z_F is classified as follows:
Attractive if |R'(z_F)| < 1.
Parabolic or neutral if |R'(z_F)| = 1.
Repulsive if |R'(z_F)| > 1.
Superattractive if R'(z_F) = 0.
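The stability of a fixed point can be checked numerically by estimating the modulus of the multiplier |R'(z)| with a finite difference; the sketch below uses Newton's operator for z^2 - 1 as an arbitrary illustrative example.

```python
def classify_fixed_point(R, z, h=1e-6, tol=1e-4):
    """Classify a fixed point z of R from the modulus of the multiplier |R'(z)|,
    estimated with a central finite difference."""
    m = abs((R(z + h) - R(z - h)) / (2 * h))
    if m < tol:
        return "superattractive"
    if m < 1 - tol:
        return "attractive"
    if m > 1 + tol:
        return "repulsive"
    return "parabolic/neutral"

# Newton's operator for p(z) = z^2 - 1; the roots are superattracting fixed points
R = lambda z: z - (z**2 - 1) / (2 * z)
kind = classify_fixed_point(R, 1.0)
```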
On the other hand, the basins of attraction [24] determine the final state of the orbit of any point of the complex plane after successive applications of the operator R. We define the basin of attraction of an attracting fixed point  as the set of its preimages of any order:
Moreover, the roots of the equation R'(z) = 0 are called critical points of the operator R. Their asymptotic performance plays an important role in the stability of the method [25]. Moreover, in the connected component of the basin of attraction holding the attractor, there exists at least one critical point. Indeed, superattracting fixed points are also critical points; furthermore, critical points that are not zeros of the polynomial are called free critical points.
On the other hand, the union of the basins of attraction defines the Fatou set of R. Its complementary set in the Riemann sphere is called the Julia set.
In this section, we analyze the dynamical behavior of the fourth-order parametric family (5) on the quadratic polynomial p(z) = (z - a)(z - b), with a ≠ b. So, a rational function R is obtained, depending on the parameter of the class, α, and also on the roots a and b. To obtain a simpler operator, as the fixed point operator satisfies the Scaling Theorem, we use the Möbius transformation [23]
h(z) = (z - a)/(z - b),
with properties h(a) = 0, h(b) = ∞ and h(∞) = 1, which yields a rational function that, being conjugated to R (and therefore with equivalent dynamical behavior), no longer depends on a and b:
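These properties can be checked numerically; the sketch below assumes the standard Möbius map h(z) = (z - a)/(z - b), with arbitrary sample values for the roots a and b.

```python
def mobius(z, a, b):
    """Moebius map h(z) = (z - a)/(z - b), sending a -> 0, b -> inf, inf -> 1."""
    return (z - a) / (z - b)

a, b = 2.0, 3.0  # sample roots of the quadratic polynomial (arbitrary choice)
h_a = mobius(a, a, b)        # the root a is mapped to 0
h_far = mobius(1e12, a, b)   # a very large |z| approximates h(inf) = 1
```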
By solving the fixed-point equation, the fixed points of the rational function are obtained. Among them are z = 0 and z = ∞, coming from the roots of the polynomial previous to the Möbius map. The asymptotic behavior of all the fixed points plays a key role in the stability of the iterative methods involved, as convergence to fixed points different from the roots is an important drawback for an iterative method; thus, we proceed below with this analysis.
A direct result of the Möbius transformation applied to this rational function is conjugacy by the inverse,
The immediate consequences of this result are:
- (a) If , then .
- (b) Except for some specific values of α simplifying the operator, z = 1 is a strange fixed point of the rational operator.
- (c) The stability functions of two conjugate fixed points coincide,
3.1. Performance of the Strange Fixed Points
The behavior of the fixed points different from z = 0 and z = ∞ depends on the value of α. In the following result, the stability of the strange fixed point z = 1 is established.
Theorem 2. z = 1 is a strange fixed point of the rational operator if . It is attracting if , and parabolic or neutral when .
Proof. The behavior of z = 1 is given by
Thus,
Let us denote . Then,
and
Therefore,
Finally, if α satisfies , then  and z = 1 is a repulsive point. It is clear that it is parabolic on the boundary . □
In Figure 1, the stability function of z = 1 can be observed. It can be noticed that complex values of α inside the region  define fourth-order iterative schemes whose numerical performance does not include divergence.
The proof of the following result is similar to that of Theorem 2. In Figure 2, the stability regions of the strange fixed points  to  appear.
Proposition 1. The roots of  are strange fixed points of the operator, different from z = 1, and are denoted by , . These strange fixed points are reduced to four if , as . For :
 and  are conjugate and repulsive, independently of the value of parameter α.
 and  are attractors for values of α in small regions of the complex plane, inside the complex areas  and . Moreover, both are superattracting for .
 and  are conjugate and attractors for values of α inside the complex area . Moreover, both are superattracting for .
3.2. Critical Points and Parameter Planes
In order to determine the critical points, we calculate the first derivative of the operator,
By definition, the roots of this derivative are called critical points. As the order of convergence of our class of proposed iterative methods is higher than two, the fixed points coming from the original roots of the polynomial, that is, z = 0 and z = ∞, are also critical points. In the next result, the rest of the critical points, called free critical points, are determined.
Proposition 2. The number of free critical points of the operator  is:
One, if  or . In these cases, the reduced rational operator is:
whose only free critical point is , which is a pre-image of .
Three, if  and , and in this case they are defined as:
It is easy to prove that . Therefore, when
As we have said, a classical result states that there is at least one critical point related to each basin of attraction. As z = 0 and z = ∞ are both superattracting fixed points of the operator, they are also critical points and give rise to their respective Fatou components. For the other critical points, we can establish the following remarks:
- (a) If , then , and it is a pre-image of the fixed point z = 1: . As z = 1 is repulsive for , . Thus, the operator has only two invariant Fatou components, the basins of z = 0 and z = ∞.
- (b) If , then , and . As it is not a fixed point when , its orbit will remain in the Julia set until rounding error makes it fall into the basin of attraction of z = 0 or z = ∞.
- (c) For the rest of the values of α, we have three critical points.
As we have previously stated, the dynamical performance of the operator depends on the value of the parameter α. In Figure 3, we can observe the parameter space associated with family (5): each point of the parameter plane is associated with a complex value of α, i.e., with an element of family (5). A free critical point is employed as the starting point and, if for a specific value of α this critical point converges to z = 0 or z = ∞, then the point representing this value of α is painted in red. Those values of α for which the critical point does not converge to z = 0 or z = ∞ are painted in black. Therefore, each connected component of the parameter plane gives us subfamilies of procedures of class (5) with similar performance. The fractal defined as the boundary of these connected components separates regions of stable and unstable performance.
In Figure 3, the parameter plane associated with  is presented. Both critical points have the same parameter plane, as they are conjugate; thus, only one of them is considered as an independent free critical point. It is usual not to plot a parameter plane for , as, although it is also a critical point, it is a pre-image of a fixed point, and this plane would give us information only about the stability of that fixed point. We would like to remark that the outer black area corresponds to the stability of z = 1. Moreover, the right-sided black circle inside the red area corresponds to the values of the parameter where  and  are simultaneously attracting. In the detail in Figure 3b, the end of both inner antennas can be observed and also, in red, the wide area of stable values of α, where only convergence to the roots is possible. This parameter plane has been obtained by using a mesh of  points for complex values of α and a maximum of 200 iterations.
3.3. Dynamical Planes
If one of the values of α (painted red or black in the parameter plane) is selected, a specific member of the class of iterative methods (5) is chosen. Then, a set of initial estimations can be used in order to observe the performance of this iterative method.
The dynamical plane of an element of the family under study is obtained by using each point of a mesh of  points of the complex plane as an initial estimation. Those points whose orbits converge to infinity appear in blue; those that converge to zero are painted in orange (both with a precision of ); and those that converge to other fixed points appear in green, red, etc. All the fixed points appear as white stars in the figures when they are attracting, or as white circles when they are repulsive. Moreover, a point is painted in black if the maximum of 80 iterations is reached without convergence to any of the fixed points. The routines used are slight modifications of those appearing in [24].
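A minimal version of this procedure, using Newton's operator on p(z) = z^3 - 1 as a stand-in for a member of family (5), classifies each starting point by the root its orbit reaches (the iteration cap of 80 mirrors the description above; the tolerance is an arbitrary choice):

```python
import cmath

ROOTS = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]  # cube roots of unity

def newton_op(z):
    """Newton's operator for p(z) = z^3 - 1."""
    return z - (z**3 - 1) / (3 * z**2)

def basin(z0, tol=1e-8, max_iter=80):
    """Index of the root the orbit of z0 converges to, or -1 (painted black)."""
    z = z0
    for _ in range(max_iter):
        if abs(z) < 1e-12:  # the derivative vanishes at the origin
            return -1
        z = newton_op(z)
        for i, r in enumerate(ROOTS):
            if abs(z - r) < tol:
                return i
    return -1

# Each mesh point z0 would be colored by basin(z0); e.g. a real start:
idx = basin(2.0 + 0.0j)
```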
Thus, various stable elements can be chosen: values of α for which no attracting periodic points or strange fixed points appear. Some of them can be observed in the dynamical planes of Figure 4. In particular, Figure 4a shows the performance of the method corresponding to , where the only basins of attraction are those of z = 0 (orange area) and z = ∞ (blue region). Indeed, these basins are wider in this case than in the cases  (Figure 4b) and  (Figure 4c).
Similarly, some unstable elements can be chosen. It can be observed that , which lies in the biggest black area of the parameter plane (see Figure 3), corresponds to an iterative method whose biggest basin of attraction is that of an attracting 2-periodic orbit, shown in yellow in Figure 5a. In Figure 5b, where , the basins of attraction of the attracting strange fixed points  and  appear in red and green, and are wider than that of . Other attracting elements appear for different values of the parameter; for example,  gives rise to a dynamical plane (see Figure 5c) where the black basin corresponds to an attracting periodic orbit of period four, plotted in yellow in the figure. Therefore, it can be observed that the fractal defined by the Julia set, the boundary among the basins of attraction, is much more complicated in the case of unstable elements of the family of iterative methods.
This information will be checked numerically in the next section, where non-polynomial functions are used to see the performance of some of these stable and unstable elements of the proposed family of iterative methods.
4. Numerical Results
In this section, we check the behavior of the new family of iterative methods defined in (5), which is a particular subclass of the class defined in (3). From the stability analysis, we know that the members of this class of iterative schemes corresponding to values of α inside the red area of the parameter plane (see Figure 3) have better performance on quadratic polynomials than those belonging to black areas. We now check their performance on other kinds of functions in order to see whether these stability properties hold.
Numerical computations have been carried out in Matlab R2019a, using variable precision arithmetic with 1000 digits of mantissa, on a PC equipped with an Intel® Core™ i5-5200U CPU at 2.20 GHz. In all the tables, we show the residuals  and  at the last iteration, the estimation of the solution found, the number of iterations needed (if the scheme does not converge, a "d" appears) and the execution time in seconds in the format , calculated with the command cputime, where t is the mean of 100 consecutive executions and d is the standard deviation. The stopping criterion used is , and the ACOC, defined in [26], has been also presented in the tables; its expression is:
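The ACOC of [26] can be computed directly from the usual expression based on three consecutive differences of iterates; in the sketch below, the Newton iterates for x^2 - 2 are only sample data (their ACOC should be close to the theoretical order 2):

```python
import math

def acoc(xs):
    """Approximated computational order of convergence from the last four iterates:
    ln(|x_{k+1}-x_k| / |x_k-x_{k-1}|) / ln(|x_k-x_{k-1}| / |x_{k-1}-x_{k-2}|)."""
    d1 = abs(xs[-3] - xs[-4])
    d2 = abs(xs[-2] - xs[-3])
    d3 = abs(xs[-1] - xs[-2])
    return math.log(d3 / d2) / math.log(d2 / d1)

# Sample data: Newton iterates for f(x) = x^2 - 2 starting at 1.5
xs = [1.5]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x**2 - 2) / (2 * x))
rho = acoc(xs)
```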
The nonlinear test functions used are both academic and real-life functions:
, with two real roots at  and .
, with real roots at ,  and , among others.
The Colebrook-White function [27] , with a real root at , among others.
The Colebrook-White function is one of the most accurate equations for the calculation of the friction factor over a wide range of conditions, but it has the disadvantage of complexity, as it is an implicit function: it must be solved iteratively until an acceptable level of error is reached, with the computational cost and time involved. It was proposed by Colebrook and White in 1939 [4,27] and is the most widely used because it is the most accurate and universal. For each nonlinear function, two initial approximations are used: one near the solution and the other far from it.
Some of the proposed schemes are compared with the classical Newton's and Jarratt's methods, Wang et al.'s scheme KM [28] and two procedures from Chun [29], denoted by CM1 and CM2.
In Table 1 and Table 2, we notice that when values of α inside the stable zone are considered (, ,  or ), the method achieves better approximations to any of the roots with fewer iterations. It is noteworthy that one of the unstable values of the parameter, , does not even achieve convergence. One of the stable members, the one corresponding to , shows the lowest execution time, even compared with the classical Newton's and Jarratt's methods.
In Table 3 and Table 4, the best results are obtained by the stable proposed members of family (5), in terms of the number of iterations and residuals. Two of the unstable members do not reach convergence to any root.
In the case of the Colebrook-White function, as shown in Table 5 and Table 6, it is only possible to converge to the solution with a very close initial estimation. In this case, the best results in terms of the number of iterations are obtained by the stable members of the proposed class as well as by Jarratt's method. In terms of execution time, the best results are given by Jarratt's and  schemes.