1. Introduction
Simple systems often embed a good amount of symmetry. Take, for instance, the characteristic polynomial (ChP) of hydrocarbons [1]. Considering three cases here, propane (ChP is $x^3-2x$), normal butane (ChP is $x^4-3x^2+1$) and isobutane (ChP is $x^4-3x^2$), one easily notices that the highest symmetry is in isobutane. At the same time, isobutane is the one whose characteristic polynomial has a multiple root. The same symmetry is responsible for the presence of multiple roots in the ChP of 2,2,4,4-tetramethylpentane (see a_25 in [2]). One should notice that, in the selected cases, the multiple root is a trivial one ($x=0$); in more complex cases, however, the multiple root is in general no longer trivial.
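For concreteness, the multiple root can be exhibited numerically from the adjacency matrix of the carbon skeleton. A minimal sketch (assuming the hydrogen-suppressed graph convention, with isobutane modeled as the star graph K_{1,3}; this illustration is ours, not from [1]):

```python
import numpy as np

# Hydrogen-suppressed carbon skeleton of isobutane: a central carbon
# bonded to three methyl carbons (the star graph K_{1,3}).
A = np.array([
    [0, 1, 1, 1],
    [1, 0, 0, 0],
    [1, 0, 0, 0],
    [1, 0, 0, 0],
], dtype=float)

# Coefficients of the characteristic polynomial det(xI - A).
coeffs = np.poly(A)            # close to [1, 0, -3, 0, 0], i.e. x^4 - 3x^2
roots = np.roots(coeffs)       # close to -sqrt(3), sqrt(3), 0, 0

print(np.round(coeffs, 8))
print(np.round(np.sort(roots.real), 6))   # x = 0 appears as a double root
```

The double root at the origin is exactly the "trivial" multiple root mentioned above.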
Much research has been conducted on the solution of nonlinear equations and systems of nonlinear equations. There are numerous publications on the topic, including those given in references [3–10], and Traub's book [11] devotes a whole chapter to it. It can be particularly difficult to find multiple roots of a given nonlinear equation; hence, many scholars have proposed iterative algorithms for this purpose (see Refs. [12–19]). Multiple zero, repeated root, and multiple point are other names for a root of multiplicity m. The root is referred to as a simple zero when $m=1$. It is very challenging to solve a nonlinear equation with multiple zeros. The goal of this article is to build iterative algorithms for a given nonlinear equation $f(x)=0$ to find a multiple root $\alpha$ of multiplicity m, i.e., $f(\alpha)=f'(\alpha)=\cdots=f^{(m-1)}(\alpha)=0$ and $f^{(m)}(\alpha)\neq 0$.
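These multiplicity conditions can be checked symbolically. A small sketch using sympy on a hypothetical test function $f(x)=(x-1)^3(x+2)$, whose root $x=1$ has multiplicity $m=3$ (the function is illustrative, not one of the paper's test problems):

```python
import sympy as sp

x = sp.symbols('x')
f = (x - 1)**3 * (x + 2)   # hypothetical test function
alpha = 1                  # candidate multiple root

# Multiplicity m: f and its first m-1 derivatives vanish at alpha,
# while the m-th derivative does not.
m = 0
g = f
while g.subs(x, alpha) == 0:
    m += 1
    g = sp.diff(g, x)

print(m)   # -> 3
```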
The use of multiple steps to improve the solution in iterative algorithms is commonly referred to as multi-point iteration in the literature. Many scholars are now interested in these algorithms because of several attractive features. First, they can overcome the low efficiency index of one-point algorithms; they can also reduce the number of iterations and raise the order of convergence, which lessens the computational burden in numerical work. Many researchers [20–28] have developed higher-order iterative techniques using the first-order derivative to locate the multiple roots of a nonlinear equation. One function and two derivative evaluations are needed per iteration for the optimal fourth-order methods established in the literature [20,21,23–25,27,28]. Li et al. introduced six new fourth-order methods in [22]. The first four require one function and three derivative evaluations per iteration, whereas the last two require one function and two derivative evaluations per iteration. A class of two-point sixth-order multiple-zero finders, using two function and two derivative evaluations per step, has been proposed in [26].
In science and engineering, derivative-free iterative techniques are useful tools for finding multiple roots of complicated equations. These techniques do not rely on the computation of derivatives, which in the case of complex systems can be time-consuming and computationally expensive. For any problem in which the derivative of the function f is difficult or expensive to evaluate, derivative-free algorithms are important [29,30]. Derivative-free iterative approaches are crucial for optimizing complex systems and solving difficult engineering problems [31]. In the literature, researchers [32–39] have also developed derivative-free multiple-root iterative algorithms based on the second-order modified Traub-Steffensen iteration [
11]. The modified Traub-Steffensen method is given by
$$x_{p+1} = x_p - m\,\frac{f(x_p)}{f[w_p,\,x_p]}, \qquad p = 0, 1, 2, \ldots, \qquad (1)$$
where $w_p = x_p + \beta f(x_p)$ with $\beta \in \mathbb{R}\setminus\{0\}$, and $f[w_p,x_p] = \dfrac{f(w_p)-f(x_p)}{w_p-x_p}$. For $\beta = 1$ and $m = 1$, this method is Steffensen's method [40]. Note that this method is obtained from the modified Newton's method
$$x_{p+1} = x_p - m\,\frac{f(x_p)}{f'(x_p)} \qquad (2)$$
by replacing the derivative $f'(x_p)$ with the divided difference $f[w_p,x_p]$.
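The modified Traub-Steffensen iteration described above can be sketched in a few lines; the test equation and parameter values below are illustrative choices, not taken from the paper:

```python
def traub_steffensen(f, x, m, beta=0.01, tol=1e-12, max_iter=50):
    """Modified Traub-Steffensen iteration for a root of multiplicity m.

    The derivative in the modified Newton step is replaced by the
    divided difference f[w, x] with w = x + beta*f(x).
    """
    for _ in range(max_iter):
        fx = f(x)
        w = x + beta * fx
        if w == x:                      # f(x) too small to perturb x: converged
            return x
        dd = (f(w) - fx) / (w - x)      # divided difference f[w, x]
        x_new = x - m * fx / dd
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative equation (not from the paper): double root at x = 2.
root = traub_steffensen(lambda t: (t - 2)**2 * (t + 1), x=2.5, m=2)
print(root)   # approx 2.0
```

Note the guard `w == x`: near convergence the perturbation beta*f(x) can underflow below machine precision, at which point the iterate is accepted.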
Our goal in this study is to develop efficient derivative-free algorithms for multiple roots with known multiplicity. Therefore, we describe a class of derivative-free fourth-order algorithms that require three new pieces of function information per iteration and thus have optimal fourth-order convergence, as defined by the Kung-Traub conjecture [41]. The scheme uses iteration (1) in the first step and, in the second step, a Traub-Steffensen-like iteration with a weight function. The algorithms are tested numerically on many real-life problems, such as the Van der Waals problem, the Manning (isentropic supersonic flow) problem, the Planck law radiation problem, and Kepler's problem. The performance of the proposed methods, in terms of accuracy and CPU time, is compared with that of existing methods that require derivative evaluations.
Let us briefly introduce the practical problems that we will consider for numerical testing; their mathematical forms are given in later sections. The Van der Waals equation [42] corrects the deviations from ideal gas behavior, allowing for more accurate predictions of gas properties under non-ideal conditions. It is used to understand the behavior of gases at high pressures and low temperatures, where intermolecular attractions and molecular volume become significant. The Manning isentropic supersonic flow problem has multiple uses; among them, the gas and particle dynamics in first-generation needle-free drug delivery devices [43] are of medical interest. The Planck law radiation problem relates to the principles of phototherapy and photochemotherapy [44]. The use of the ChP in structure-activity studies is reported in previous works [45–48]. Kepler's problem [49] has significant applications in celestial mechanics, spacecraft navigation, and astrodynamics.
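As a concrete illustration, Kepler's equation in its standard form $E - e\sin E = M$ (eccentric anomaly E, eccentricity e, mean anomaly M) can be solved by simple fixed-point iteration; the values of M and e below are illustrative, not the paper's test data:

```python
import math

def kepler_E(M, e, tol=1e-12, max_iter=200):
    """Solve Kepler's equation E - e*sin(E) = M by fixed-point iteration,
    which converges for eccentricity 0 <= e < 1 (contraction mapping)."""
    E = M                       # starting guess
    for _ in range(max_iter):
        E_new = M + e * math.sin(E)
        if abs(E_new - E) < tol:
            return E_new
        E = E_new
    return E

E = kepler_E(M=0.8, e=0.2)
print(E, E - 0.2 * math.sin(E))   # residual check: second value approx 0.8
```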
2. Development of Method
We consider a fourth-order family with a simple and compact structure for multiple zeros
, which is described by the following:
where
,
and
is a differentiable function in the vicinity of
. The second step is weighted by the factor
; hence,
is called the weight function.
In what follows, we will prove the convergence results of the scheme (3). For a better understanding (as we will point out in Remark 1), a few results are proved separately depending on the multiplicity m. First, we consider the case and prove the following theorem:
Theorem 1. Assume that is an analytic function in a domain surrounding with multiplicity , and let the initial guess be close to . Then, scheme (3) has at least a convergence order of 4, provided that , , , , and , where , for .
Proof.
Assume that
is the error at the
p-th stage. Then, using the Taylor series expansion of
around
and allowing that
,
and
we have
where
for
.
Similarly, the expansion of
about
can be written as follows:
where
Then, Equation (3) becomes
Similarly, expanding
about
gives us
Using (4), (5) and (7), we have
and
Next, we develop
by Taylor series in the neighborhood of origin
:
where
, for
. Inserting (4)–(10) in the second step of (3), we get
where
. The expression of
is very lengthy, so it is not reproduced here.
It is clear from (11) that if we set the coefficients of
,
and
simultaneously equal to zero, then, after some simple calculations, one obtains
Consequently, the error Equation (11) is given by
Hence, the result is proved. □
Next, we consider the case and prove the following theorem:
Theorem 2. Assuming the hypothesis of Theorem 1, the convergence order of (3) for is at least 4, if , , and , wherein .
Proof.
Let
,
,
and
. Then, expanding
about
using Taylor’s expansion,
where
for
.
Similarly, we can expand
about
as
where
Then, the first step of (3) yields
The expansion of
about
is
Then, from (14), (15) and (17), it follows that
and
By using (10) and (14)–(19) in the last step of (3), we have
where
.
Equation (20) above will yield at least fourth-order convergence if the coefficients of
,
and
satisfy the following conditions:
Then, the final form of the error Equation (20) is given by
Thus, the theorem is proved. □
Next, we state the results for the remaining cases in the form of corollaries. The proofs are similar to those of Theorems 1 and 2 proved above.
Corollary 1. Assuming the hypothesis of Theorem 1, the convergence of (3) for is at least 4, provided that , and , where . Moreover, the error equation is given by
where for .

Corollary 2. Assuming the hypothesis of Theorem 1, the convergence of (3) for is at least 4, provided that , and , wherein . Then, the final error equation is given by
where for .

Corollary 3. Assuming the hypothesis of Theorem 1, the convergence order of (3) for is at least 4, provided that , and , where . Moreover, the error equation of the method is
where for .

Remark 1. Notice that the parameter β, which is used in the expression of , appears only in the error equations for , but not for . However, we have noticed that for , this parameter appears in terms involving and higher orders. In general, such terms are difficult to calculate. Moreover, we do not require these terms to demonstrate the required fourth-order convergence. For these reasons, the convergence conditions for are explored separately in the next section.
4. Numerical Results
To check the validity and stability of the new algorithms, we consider some real-life problems that confirm the results shown in the preceding sections. Moreover, the new algorithms are compared with existing fourth-order algorithms that use derivatives in their formulas. The following five schemes are taken for comparison:
Li et al. method [21] (LLC):
Li et al. method [22] (LCN):
where
Sharma-Sharma method [23] (SS):
Zhou et al. method [24] (ZCS):
Soleymani et al. method [25] (SBL):
where
The numerical work of this study is performed in the Mathematica software [50]. In the computations, we use a small value for the parameter β. The rationale for taking a small value is clear from the Traub-Steffensen formula (1), developed by replacing the derivative in the Newton formula (2) with the divided difference, as shown in the Introduction: small values of β give a more accurate approximation of the derivative. The results displayed in Table 1, Table 2, Table 3, Table 4 and Table 5 are reported with respect to the following:
- (i)
The number of iterations required by the algorithms to satisfy the stopping criterion .
- (ii)
The errors at the first three iterations .
- (iii)
The calculated convergence order (CCO).
- (iv)
The total time (in seconds) consumed by the algorithms.
To calculate the convergence order (CCO), we use the formula (see [
51])
where
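One common error-quotient form of the computational order of convergence (which may differ in minor details from the exact formula of [51]) can be sketched as follows; the test sequence is an artificial quadratically convergent one:

```python
import math

def cco(x):
    """Computational convergence order from the last four iterates x[-4:],
    using a common error-quotient formula (one of several variants in
    the literature)."""
    e1 = abs(x[-1] - x[-2])
    e2 = abs(x[-2] - x[-3])
    e3 = abs(x[-3] - x[-4])
    return math.log(e1 / e2) / math.log(e2 / e3)

# Artificial quadratically convergent sequence: x_{p+1} - r = (x_p - r)^2.
r = 1.0
seq = [r + 0.5]
for _ in range(5):
    seq.append(r + (seq[-1] - r)**2)

print(round(cco(seq), 2))   # close to 2, as expected for quadratic convergence
```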
Let us consider the following problems for the testing:
Problem 1: The Van der Waals equation for a gas [42,52] is given by
$$\left(P + \frac{an^2}{V^2}\right)(V - nb) = nRT, \qquad (33)$$
where R is the universal gas constant, T the temperature, P the pressure, V the volume, n the number of moles, and a, b are parameters whose values depend on the gas. To calculate the volume V, we can rewrite (33) as
One can find values of n, P, T, a and b for a particular gas [52] such that Equation (34) has three roots. By using a specific set of values, we have
that has three roots
. Our desired root is
with
. The methods are tested for two initial guesses
. Computed results are given in
Table 6.
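In the same spirit as Problem 1, the multiple-root structure of a cubic can be inspected numerically; the coefficients below are illustrative stand-ins with a built-in double root, not the actual gas data of [52]:

```python
import numpy as np

# Illustrative cubic with a double root at V = 1.75 and a simple root
# at V = 1.5 (stand-in coefficients, not Van der Waals gas data):
# (V - 1.75)^2 (V - 1.5) = V^3 - 5 V^2 + 8.3125 V - 4.59375
coeffs = [1.0, -5.0, 8.3125, -4.59375]

roots = np.roots(coeffs)
print(np.round(np.sort(roots.real), 4))   # 1.5 and the double root 1.75
```

Note that a numerical root finder typically resolves an exact double root only to about the square root of machine precision, which is one reason specialized multiple-root iterations are worthwhile.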
Figure 1 and Figure 2 represent the graphs of the errors committed by the methods as the iteration proceeds for
at
and
, respectively. However, some graphs overlap each other. To make the behavior clearer, we further draw the graphs in Figure 3 and Figure 4 for the methods LLC, LCN, SS, ZCS and SBL, and in Figure 5 and Figure 6 for the methods NM1, NM2, NM3 and NM4. In addition, the bending portion of the lines is shown in sub-figures within Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6. It can be observed from the graphs that the newly proposed methods are good competitors to the existing Newton-like methods.
Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6 are drawn using the Mathematica software. These figures are the geometrical representation of the errors shown in columns 3, 4 and 5 of Table 6 and help us visualize the methods' behavior for different initial guesses.
Table 1.
Numerical problems 2–5.
Problem | Root | Multiplicity | Initial Guess | Results' Table
---|---|---|---|---
2: Isentropic supersonic flow problem [53] | 1.8411… | 3 | 1.5 & 1.7 | Table 2
3: Planck law of radiation problem [54] | 4.9651… | 4 | 2.5 & 5.5 | Table 3
4: Kepler's problem [49] | 0.8093… | 5 | 1 & 1.4 | Table 4
5: Complex root problem | i | 6 | 1.1 i & 1.3 i | Table 5
Table 2.
Numerical results of methods for problem 2.
Methods | Iterations | Error 1 | Error 2 | Error 3 | CCO | CPU-Time
---|---|---|---|---|---|---
|
LLC | 4 | | | | 4 | 2.1224 |
LCN | 4 | | | | 4 | 2.4660 |
SS | 4 | | | | 4 | 2.4653 |
ZCS | 4 | | | | 4 | 2.4492 |
SBL | 4 | | | | 4 | 2.7765 |
NM1 | 4 | | | | 4 | 1.8522 |
NM2 | 4 | | | | 4 | 1.8878 |
NM3 | 4 | | | | 4 | 1.8867 |
NM4 | 4 | | | | 4 | 1.9042 |
|
LLC | 4 | | | | 4 | 2.6054 |
LCN | 4 | | | | 4 | 2.8556 |
SS | 4 | | | | 4 | 2.9173 |
ZCS | 4 | | | | 4 | 2.8868 |
SBL | 4 | | | | 4 | 3.2764 |
NM1 | 4 | | | | 4 | 2.4214 |
NM2 | 4 | | | | 4 | 2.5285 |
NM3 | 4 | | | | 4 | 2.5432 |
NM4 | 4 | | | | 4 | 2.4967 |
Table 3.
Numerical results of methods for problem 3.
Methods | Iterations | Error 1 | Error 2 | Error 3 | CCO | CPU-Time
---|---|---|---|---|---|---
|
LLC | 5 | | | | 4 | 0.8893 |
LCN | 5 | | | | 4 | 1.2483 |
SS | 5 | | | | 4 | 1.2791 |
ZCS | 5 | | | | 4 | 1.2172 |
SBL | 5 | | | | 4 | 1.4986 |
NM1 | 5 | | | | 4 | 0.5946 |
NM2 | 5 | | | | 4 | 0.6385 |
NM3 | 5 | | | | 4 | 0.6843 |
NM4 | 5 | | | | 4 | 0.6682 |
|
LLC | 4 | | | | 4 | 0.6545 |
LCN | 4 | | | | 4 | 1.0772 |
SS | 4 | | | | 4 | 1.0301 |
ZCS | 4 | | | | 4 | 1.0142 |
SBL | 4 | | | | 4 | 1.2646 |
NM1 | 3 | | | 0 | 4 | 0.4534 |
NM2 | 3 | | | 0 | 4 | 0.5173 |
NM3 | 3 | | | 0 | 4 | 0.5238 |
NM4 | 3 | | | 0 | 4 | 0.4923 |
Table 4.
Numerical results of methods for problem 4.
Methods | Iterations | Error 1 | Error 2 | Error 3 | CCO | CPU-Time
---|---|---|---|---|---|---
|
LLC | 4 | | | | 4 | 0.8422 |
LCN | 4 | | | | 4 | 0.9675 |
SS | 4 | | | | 4 | 0.9833 |
ZCS | 4 | | | | 4 | 0.9215 |
SBL | 4 | | | | 4 | 1.1863 |
NM1 | 4 | | | | 4 | 0.5865 |
NM2 | 4 | | | | 4 | 0.6246 |
NM3 | 4 | | | | 4 | 0.5934 |
NM4 | 4 | | | | 4 | 0.6440 |
|
LLC | 4 | | | | 4 | 0.8114 |
LCN | 4 | | | | 4 | 1.0320 |
SS | 4 | | | | 4 | 1.0146 |
ZCS | 4 | | | | 4 | 0.9998 |
SBL | 4 | | | | 4 | 1.1393 |
NM1 | 4 | | | | 4 | 0.6155 |
NM2 | 4 | | | | 4 | 0.6442 |
NM3 | 4 | | | | 4 | 0.6561 |
NM4 | 4 | | | | 4 | 0.6240 |
Table 5.
Numerical results of methods for problem 5.
Methods | Iterations | Error 1 | Error 2 | Error 3 | CCO | CPU-Time
---|---|---|---|---|---|---
|
LLC | 4 | | | | 4 | 1.7604 |
LCN | 4 | | | | 4 | 2.4491 |
SS | 4 | | | | 4 | 2.4653 |
ZCS | 4 | | | | 4 | 2.5426 |
SBL | 4 | | | | 4 | 3.1042 |
NM1 | 4 | | | | 4 | 0.5311 |
NM2 | 4 | | | | 4 | 0.5380 |
NM3 | 4 | | | | 4 | 0.5935 |
NM4 | 4 | | | | 4 | 0.5468 |
|
LLC | 4 | | | | 4 | 1.6234 |
LCN | 4 | | | | 4 | 2.5432 |
SS | 4 | | | | 4 | 2.6216 |
ZCS | 4 | | | | 4 | 2.6217 |
SBL | 4 | | | | 4 | 3.1824 |
NM1 | 4 | | | | 4 | 0.5162 |
NM2 | 4 | | | | 4 | 0.6243 |
NM3 | 4 | | | | 4 | 0.5922 |
NM4 | 4 | | | | 4 | 0.5610 |
Table 6.
Numerical results of methods for problem 1.
Methods | Iterations | Error 1 | Error 2 | Error 3 | CCO | CPU-Time
---|---|---|---|---|---|---
|
LLC | 6 | | | | 4 | 0.0787 |
LCN | 6 | | | | 4 | 0.0786 |
SS | 6 | | | | 4 | 0.1095 |
ZCS | 6 | | | | 4 | 0.0786 |
SBL | 6 | | | | 4 | 0.0938 |
NM1 | 6 | | | | 4 | 0.0753 |
NM2 | 6 | | | | 4 | 0.0821 |
NM3 | 6 | | | | 4 | 0.0778 |
NM4 | 6 | | | | 4 | 0.0782 |
|
LLC | 6 | | | | 4 | 0.0982 |
LCN | 6 | | | | 4 | 0.1102 |
SS | 6 | | | | 4 | 0.1096 |
ZCS | 6 | | | | 4 | 0.1145 |
SBL | 6 | | | | 4 | 0.0944 |
NM1 | 6 | | | | 4 | 0.0682 |
NM2 | 6 | | | | 4 | 0.0924 |
NM3 | 6 | | | | 4 | 0.0947 |
NM4 | 6 | | | | 4 | 0.0936 |
The rest of the problems (2–5) are listed in Table 1. This table has five columns: column 1 contains the considered problem, column 2 the desired root, column 3 the multiplicity of the root, column 4 the initial guesses, and column 5 the table number of the corresponding numerical results.
We can see from the computed results in Table 6 and Table 2, Table 3, Table 4 and Table 5 that the new methods have good convergence behavior. The increase in precision per iteration, as seen in the numerical results, accounts for this good convergence and also indicates the stable nature of the methods. It is also clear from the computed results that the new methods attain better accuracy than the existing methods. We display the value
in
Table 3 at the stage when the stopping criterion has been satisfied. The convergence order calculated for each problem, as shown in the tables, verifies the theoretical convergence order of four. The efficiency of the new methods can also be judged by the fact that the CPU time they require is less than that required by the existing ones. This is shown in Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11, where we draw bar charts of the time consumed by the methods.
Note that, in general, the new methods are more efficient than the existing ones. The new methods have also been applied to many other practical problems to confirm their consistency. We conclude this section with the remark that the new derivative-free iterative schemes are more effective.