1. Introduction
An elementary, yet very important, problem is solving a nonlinear equation $f(x)=0$. Given an initial guess $x_0$, suppose that it is quite close to the real root $r$ with $f(r)=0$; we can approximate the nonlinear equation by
$$f(x_0)+f'(x_0)(x-x_0)=0.$$
When $f'(x_0)\neq 0$, solving the equation for $x$ yields
$$x=x_0-\frac{f(x_0)}{f'(x_0)}.$$
Along this line of thinking,
$$x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)},\quad n=0,1,\ldots,$$
is coined as the Newton method (NM), which exhibits quadratic convergence. Since then, many studies have arisen, and this work continues to now; different fourth-order methods have been used to modify the Newton method, aiming to solve nonlinear equations more quickly and stably [1,2,3,4,5,6,7]. In general, the fourth-order iterative methods are two-step with at least three function evaluations. Kung and Traub conjectured that a multi-step iterative scheme without memory based on $m$ evaluations of functions has an optimal convergence order $2^{m-1}$. When the fourth-order iterative method is optimal, the bound of the efficiency index (E.I.) is $4^{1/3}\approx 1.587$.
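For reference, a minimal sketch of the NM in Python is given below; the test function and starting point are illustrative assumptions, not examples taken from this paper.

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Classical Newton method: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for n in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:          # convergence criterion on |f(x)|
            return x, n
        x -= fx / df(x)            # one Newton step
    return x, max_iter

# Example: the real cube root of 2, starting from x0 = 1.
root, iters = newton(lambda x: x**3 - 2, lambda x: 3 * x**2, x0=1.0)
```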
When a one-step iterative scheme with two function evaluations is considered, like the NM, its optimal order is two, and the E.I. reduces to $2^{1/2}\approx 1.414$. From this aspect, the multi-step iterative scheme is superior to the one-step iterative scheme with multiple function evaluations. In the local convergence analysis of an iterative scheme for solving nonlinear equations near the root r, three critical values — the first three Taylor coefficients of f at r — and their ratios are dominant. In many iterative schemes, the accelerating parameter and the optimal parameter are determined by these critical values. However, the root r is itself an unknown constant, so the precise critical values are not available. The primary goal of many memory methods is to develop a powerful updating technique, based on the memory of the previous values of the variables, to approximate these critical values step-by-step; they will be used in several one-step iterative schemes developed in this paper to achieve high values of the computed order of convergence (COC) and the E.I. with the memory-accelerating technique.
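In the notation customary in this literature (our own labeling, since the paper's symbols did not survive extraction), the critical values can be organized as normalized Taylor coefficients, and the efficiency index is defined from the order $p$ and the number of evaluations $m$ per step:
$$
c_k=\frac{f^{(k)}(r)}{k!\,f'(r)},\qquad e_n=x_n-r,\qquad \mathrm{E.I.}=p^{1/m},
$$
so that, for instance, the optimal fourth-order scheme with $m=3$ has $\mathrm{E.I.}=4^{1/3}\approx 1.587$, the optimal eighth-order scheme with $m=4$ has $8^{1/4}\approx 1.682$, and the NM ($p=2$, $m=2$) has $2^{1/2}\approx 1.414$.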
Traub [8] was the first to develop a memory-dependent accelerating method from Steffensen's iterative scheme, by updating the accelerating parameter from the data of the previous iteration, as given in Equation (4). With this modification, by taking the memory of the previous iterate into account, the computational order of convergence is raised from 2 to at least 2.414. Iterative methods using information from the current and previous iterations are the methods with memory. In Equation (4), one variable is a step variable, while the other is an adjoint variable, which does not have an iteration for itself. The role of the adjoint variable is different from that of the intermediate variable in a two-step iterative scheme. Later, we will introduce a supplementary variable, which just provides an extra datum used in the data interpolation; its role is different from both of these.
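The exact form of Equation (4) was not preserved here, but a common realization of Traub's self-accelerating Steffensen-type scheme, with convergence order $1+\sqrt{2}\approx 2.414$, can be sketched as follows; the initial parameter value, tolerance, and calling convention are illustrative assumptions.

```python
def traub_steffensen(f, x0, gamma0=-0.01, tol=1e-12, max_iter=50):
    """Steffensen-type scheme with Traub's memory acceleration:
    w_n = x_n + gamma_n*f(x_n); x_{n+1} = x_n - f(x_n)/f[x_n, w_n];
    then gamma_{n+1} = -1/f[x_n, x_{n+1}] reuses old data (memory)."""
    x, gamma = x0, gamma0
    for n in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, n
        w = x + gamma * fx
        dd = (f(w) - fx) / (w - x)      # divided difference f[x_n, w_n]
        x_new = x - fx / dd
        fx_new = f(x_new)
        if abs(fx_new) < tol:
            return x_new, n + 1
        gamma = -(x_new - x) / (fx_new - fx)  # memory update of gamma
        x = x_new
    return x, max_iter
```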
In 2013, Džunić [9] proposed a modification of Steffensen's and Traub's iterative schemes by introducing two parameters, one of them denoted p, in Equation (5). The error equation was derived as Equation (6), in which the quantity appearing is the ratio of the second and first Taylor coefficients. A suitable choice of the first parameter is sufficient for the vanishing of the second-order error term; hence, there is freedom to assign the value of the accelerating parameter p.
Only a few papers have been concerned with one-step iterative schemes with memory [9,10,11]. In this paper, we develop several memory-dependent one-step iterative methods for the high-performance solution of nonlinear equations. The methodology introduces a free parameter and a combination function, which are then optimized to raise the order of convergence; this approach is original and highly novel. The strategy of updating the parameter values with the memory-accelerating method can significantly speed up the convergence and raise the E.I. toward its limit bound.
The novelties involved in the paper are as follows:
- Introducing a free parameter into an existing or newly created model of the iterative scheme.
- Inserting a combination function into two iterative schemes; the parameter or combination function is then optimized to raise the order of convergence.
- Several ideas presented here are novel and have not yet appeared in the literature; they can promote the development of fourth-order one-step iterative schemes while saving computational cost.
- For the application of the derivative-free one-step iterative schemes, we develop a powerful Lie symmetry method to solve a second-order nonlinear boundary-value problem.
The rest of the paper proceeds as follows. In Section 2, we introduce two basic one-step iterative schemes, which are the starting point and motivate the development of the accelerating techniques to update the critical parameters appearing in the third-order iterative schemes. Section 3 gives a detailed local convergence analysis of these two basic one-step iterative schemes; a new concept of the optimal combination of these two schemes is given in Theorem 3. In Section 4, some numerical experiments are carried out with the computed order of convergence (COC) to evaluate the performance in comparison to some fourth-order optimal two-step iterative schemes. In Section 5, we introduce the updating techniques of the three critical parameters by using the memory-updating technique and the third-degree Newton interpolation polynomial; the idea of the supplementary variable is introduced, and the result is the first memory-accelerating technique. In Section 6, we first derive a new third-order iterative scheme as a variant of Džunić's method; then, the updating techniques of two and three parameters by the memory methods are developed, yielding the second memory-accelerating technique. In Section 7, we improve Džunić's method and propose new optimal combination methods; three memory-accelerating techniques are developed. In Section 8, we introduce a relaxation factor into Džunić's method and derive its optimal value; the sixth memory-accelerating technique is developed. As a practical application, a Lie symmetry method is developed in Section 9 to solve a second-order boundary-value problem. Finally, we conclude the achievements in Section 10.
2. Preliminaries
Mathematically speaking, $f(x)=0$ is equivalent to a reformulated equation in which a constant is to be determined. If the current iterate $x_n$ is known, we can find the next iterate $x_{n+1}$ by solving this equation. Upon viewing the terms including the unknown as the coefficients on both sides and dividing through, we can obtain the iterative scheme (9), which includes one parameter to be assigned. This iterative scheme was developed in [12] as a one-step continuation Newton-like method; it is referred to as Wu's method [12]. The iterative scheme (9) was used by Lee et al. [13], Zafar et al. [14], and Thangkhenpau et al. [15] as the first step in multi-step iterative schemes for finding multiple zeros. Recently, Singh and Singh [16] and also Singh and Argyros [17] gave a detailed dynamical analysis of the continuous Newton-like method (9).
As a variant of Equation (9), we consider the iterative scheme (10), which is obtained from an equivalent form of $f(x)=0$. The iterative scheme (10) includes two parameters to be assigned; it is referred to as Liu's method [18].
In 2021, Liu et al. [19] verified that the iterative scheme (10) is, in general, of first-order convergence; however, if one parameter is set in terms of the simple root r of $f(x)=0$, the order rises to two; moreover, if both parameters are set at their optimal values, the order further rises to three. This technique is somewhat like the method of using accelerating parameters to speed up the convergence, whose optimal values can be determined by the convergence analysis.
The memory methods reuse the information from the previous iteration; they do not require evaluating the function or its derivative at any new point, but they do require storing this information. The so-called R-order of convergence of a memory method increases, and at the same time, the E.I. may exceed that of the corresponding method without memory. Džunić [9] earlier developed an efficient two-parameter method for solving nonlinear equations by using the memory technique and Newton polynomial interpolation to determine the accelerating parameters. For the progress of memory methods with accelerating parameters in multi-step iterative schemes, one can refer to [20,21,22,23,24,25,26,27,28,29].
3. Convergence Analysis
Compared to the Newton method in Equation (3), Equation (9) is still applicable when $f'(x_n)=0$. In this situation, the Newton method would fail, which restricts its practical application. The iterative scheme (9) has some remarkable advantages over the Newton method [30]. It is interesting to study the convergence behaviors of the iterative scheme (9) and its variant, the iterative scheme (10).
Wang and Liu [31] proposed a two-step iterative scheme as an extension of Equation (9), given in Equation (12), which involves one parameter; its error formula was also proven. The iterative scheme (12) is not an optimal one: with three function evaluations, it leads to third-order, not fourth-order, convergence. This motivated us to further investigate the local convergence property of the iterative schemes (9) and (10). A new idea of the optimal combination of the iterative schemes (9) and (10) is introduced, such that a one-step optimal fourth-order iterative scheme can be achieved, which is better than the two-step iterative scheme (12).
Theorem 1. The iterative scheme (9) for solving $f(x)=0$ has third-order convergence if the parameter is given by Equation (14).
Proof. Let the error quantity be defined by Equation (15), which is small when $x_n$ and r are sufficiently close. Writing Equation (15) for successive indices and subtracting yields Equation (16). Then, using Equation (14), we have Equations (17) and (18), where the coefficient is defined in Equation (19). Inserting Equation (19) into Equation (9) and using Equation (16), we can obtain the error Equations (20) and (21). This ends the proof of Theorem 1. □
Theorem 2. If the parameters are given by Equation (22), the iterative scheme (10) is cubically convergent.
Proof. It can be proven similarly by inserting Equation (17) into Equation (10), where we use Equation (22), for which Equation (23) holds. Inserting Equation (23) into Equation (10) and using Equation (16), we can obtain the error Equation (24). This ends the proof of Theorem 2. □
The combination of two iterative schemes of the same order does not automatically yield a new iterative scheme whose convergence order is raised by one. The condition for the success of the combination is that, in the two error equations of these two iterative schemes, the coefficients preceding the corresponding leading error terms must not be the same.
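Schematically, in our own notation (the paper's symbols for these coefficients are not preserved), the mechanism behind Theorem 3 below is as follows. If the two third-order schemes produce
$$
e_{n+1}^{(1)}=A_1e_n^3+O(e_n^4),\qquad e_{n+1}^{(2)}=A_2e_n^3+O(e_n^4),
$$
then the combined iterate $x_{n+1}=Q\,x_{n+1}^{(1)}+(1-Q)\,x_{n+1}^{(2)}$ has
$$
e_{n+1}=\bigl[QA_1+(1-Q)A_2\bigr]e_n^3+O(e_n^4),
$$
and the cubic term vanishes for $Q=A_2/(A_2-A_1)$, which is well defined only when $A_1\neq A_2$.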
Theorem 3. As a combination of the iterative schemes (9) and (10) with a function Q, the iterative scheme (26) has fourth-order convergence if the conditions in Equations (27) and (28) hold.
Proof. Inserting Equations (17) and (18) into Equation (26) leads to Equation (29), where we use Equations (27) and (28). Inserting Equation (29) into Equation (26) and using Equation (16), we can obtain the error Equation (30). This completes the proof of Theorem 3. □
4. Numerical Experiments
For the purposes of comparison, we list the fourth-order iterative schemes developed by Chun [3] in Equation (31), by King [4] in Equation (32), which involves a free parameter, and by Chun and Ham [1] in Equation (33). Equations (31)–(33) are fourth-order optimal iterative schemes with the E.I. = $4^{1/3}\approx 1.587$. However, the E.I. of the iterative scheme (26) can be larger, as shown in Section 5. Furthermore, the iterative scheme (26) is a single-step scheme, rather than a two-step scheme like the iterative schemes (31)–(33).
The convergence criteria are given by Equation (34), where the tolerance is fixed for all tests. Following [32], the numerically computed order of convergence (COC) is approximated by Equation (35):
$$\mathrm{COC}\approx\frac{\ln\left|(x_{n+1}-r)/(x_n-r)\right|}{\ln\left|(x_n-r)/(x_{n-1}-r)\right|},$$
where r is a solution of $f(x)=0$.
The iterative schemes in Equations (9), (10), and (26) are named, respectively, Algorithm 1, Algorithm 2, and Algorithm 3. We consider a simple case to specify how to calculate the COC. Starting from the given initial point, the NM achieves the root with seven iterations, Algorithms 1 and 2 with five iterations, and Algorithm 3 with four iterations. For each triple of the data of x, we can compute the COC by Equation (35). The triple comprises the last three values of x before the convergence, and we set the convergence value of x to be r. If the COC is not computable because a ratio in Equation (35) degenerates, we can shift the triple one step forward, and so on.
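A minimal sketch of this COC computation in Python follows; the triple-shifting logic mirrors the description above, and the converged value is used in place of the exact root r.

```python
import math

def coc(triple, r):
    """COC from a triple (x_{n-1}, x_n, x_{n+1}) by Equation (35)."""
    e0, e1, e2 = (abs(x - r) for x in triple)
    return math.log(e2 / e1) / math.log(e1 / e0)

def coc_from_history(xs):
    """Take r as the converged value and try the last triple first,
    shifting to earlier triples when a zero error makes it undefined."""
    r = xs[-1]
    for k in range(len(xs) - 3, 0, -1):
        try:
            return coc(xs[k:k + 3], r)
        except (ValueError, ZeroDivisionError):
            continue
    return None
```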
From the data in Table 1, we take the COC of the NM to be 1.999, of Algorithm 1 to be 3.046, of Algorithm 2 to be 2.968, and of Algorithm 3 to be 4.899. As expected, both Algorithms 1 and 2 have near third-order convergence; Algorithm 3, with COC = 4.899, even exceeds the theoretical fourth order.
The test examples are given by the five functions in Equations (36)–(40), each with its corresponding solution.
Algorithm 1 was tested by Wu [12,30], and Algorithm 2 was tested by Liu et al. [19]. Because Algorithm 3 is a new iterative scheme, we tested it on the above examples. In Table 2, for the different functions, we list the numbers of iterations (NI) obtained by the presently developed Algorithm 3, which are compared to the NM, the method of Jarratt [33] (JM), the method of Traub–Ostrowski [8] (TM), the method of King [4] (KM) with a fixed parameter value, and the method of Chun and Ham [1] (CM).
Algorithm 3 theoretically has fourth-order convergence with the optimal values of the parameters. For the first example, Algorithm 3 converges faster than the other fourth-order iterative schemes. For the other examples, Algorithm 3 is much better than the NM and is competitive with the other fourth-order iterative schemes: JM, TM, KM, and CM.
5. Updating Three Parameters by Memory Method
In order to achieve the fourth-order convergence of the iterative scheme in Equation (26), the values of the parameters A, B, and Q must be known a priori; however, these values are unknown because the root r is an unknown constant.
Therefore, we introduce a supplementary variable predicted by the Newton scheme and used in the data interpolation, as given in Equation (41). Based on the updated data of the iterate and the supplementary variable, as well as the previous data, we can construct a third-degree Newton interpolatory polynomial, whose coefficients are the divided differences of the stored data. We then update the values of A, B, and Q in Equation (28) from this polynomial and its derivatives.
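As a sketch of this step (our own minimal implementation; the paper's exact update formulas were not preserved), the third-degree Newton interpolation polynomial through four stored points yields estimates of the first three derivatives at the current iterate:

```python
def newton_poly_derivs(xs, fs, x):
    """Build the third-degree Newton interpolation polynomial through
    four stored points (xs, fs) and return its first three derivatives
    at x, serving as memory-based estimates of f'(r), f''(r), f'''(r)."""
    assert len(xs) == len(fs) == 4
    # Divided-difference coefficients c0..c3.
    dd, coef = list(fs), [fs[0]]
    for k in range(1, 4):
        dd = [(dd[i + 1] - dd[i]) / (xs[i + k] - xs[i]) for i in range(4 - k)]
        coef.append(dd[0])
    c0, c1, c2, c3 = coef
    d0, d1, d2 = x - xs[0], x - xs[1], x - xs[2]
    p1 = c1 + c2 * (d0 + d1) + c3 * (d0 * d1 + d0 * d2 + d1 * d2)  # N3'(x)
    p2 = 2 * c2 + 2 * c3 * (d0 + d1 + d2)                          # N3''(x)
    p3 = 6 * c3                                                    # N3'''(x)
    return p1, p2, p3
```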
Now, we have the first memory-accelerating technique for the iterative scheme (26) in Theorem 3, which reads as follows: (i) give the initial values of A, B, and Q and the initial point; (ii) for each iteration, update A, B, and Q from the memory data and perform one step of the scheme.
Some numerical tests of the first memory-accelerating technique for the iterative scheme are listed in Table 3, for which we can observe that the values of the COC are very large. Because the differential term was included, the E.I. = (COC)^{1/3} was computed. All E.I.s are greater than the E.I. = 1.587 of the optimal fourth-order iterative scheme without memory, which has the same number of function evaluations. The last four E.I.s are also greater than the E.I. = 1.682 of the optimal eighth-order iterative scheme without memory, which uses four function evaluations.
6. A New Fourth-Order Iterative Scheme with Memory Updating
Below, we will develop the memory-accelerating technique without using the differential term. The test examples are changed accordingly, with the corresponding solutions listed alongside. In order to compare the numerical results with those given in [9], a new function is added in Equation (51). For consistency, the other four functions are rewritten to replace those appearing in Equations (36) and (38)–(40).
In this section, a new one-step iterative scheme with the aid of a supplementary variable and a relaxation factor is introduced. The detailed convergence analysis is derived. Then, we accelerate the introduced parameters by the memory-updating technique.
6.1. A New Third-Order Result
We address the iterative scheme in Equation (55), which is a novel one-step iterative scheme involving an adjoint variable. Equation (55) is a variant of Džunić's method [9], differing in the term that appears in the denominator. To derive the convergence order of Equation (55), let the error quantities be defined as in Equation (56).
Theorem 4. The iterative scheme (55) has third-order convergence, with the error equation in Equation (59), where a relaxation factor appears and Equation (60) holds. If we take, furthermore, the value given in Equation (61), then the order of convergence is further increased to four.
Proof. Using the Taylor series yields Equations (62) and (63), with the usual abbreviations. Hence, it follows from Equations (57) and (58) that the expansion in Equation (64) holds. Inserting Equations (56) and (64) into the second formula in Equation (55), we have Equation (65). Through some operations, we can obtain Equation (66), which can be written as Equation (67), where the abbreviations in Equation (68) are used. Then, Equation (65) is further reduced to Equation (69). Letting the second-order coefficient vanish, we can derive a relation between the relaxation factor and p; taking this relation, we prove Equation (60). By using it and Equation (68), the coefficient preceding the third-order error term in Equation (69) can be simplified and further reduced. If Equation (61) is satisfied, this coefficient vanishes and the error equation becomes fourth order. This completes the proof of Theorem 4. □
Theorem 4 gives us a clue to achieving a fourth-order iterative scheme (55), if the relaxation factor and p are given by Equations (60) and (61). However, these involve three unknown parameters. We will apply the memory-accelerating technique to adapt the values of the parameters in Equation (55) at each iteration. The accuracy of this memory-dependent adaptation depends on how much current and previous data are taken into account. Similar to what was performed in [9], we could further estimate the lower bound of the convergence order of such a memory-accelerating method, given the updating technique for the parameters. Instead of deriving the formulas for this kind of estimation, we use the numerical values of the COC to display the numerical performance.
6.2. Updating Two Parameters
To mimic the memory-updating procedure in Section 5, we can obtain a memory method for the iterative scheme in Equation (55) by (i) giving the initial values of A and B and the initial point, and (ii) performing the iteration with both parameters updated from memory at each step. In this iterative scheme, the relaxation factor is a given constant value of the parameter.
To demonstrate the usefulness of the above iterative scheme, we consider the solution of one of the test equations. We fix the initial values and vary the value of the relaxation factor in Table 4. Because only two function evaluations are required, the E.I. = (COC)^{1/2} was computed.
6.3. Updating Three Parameters
Table 4 reveals an optimal value of the relaxation factor, such that the COC and E.I. are the best. Indeed, by setting the coefficient preceding the third-order error term to zero in Theorem 4, we can truly obtain a fourth-order iterative scheme, whose relaxation factor is determined by Equation (61). Thus, we have the second memory-accelerating technique for the iterative scheme (55) in Theorem 4 using the relaxation factor in Equation (61), which reads as (i) giving the initial values of A and B and the initial point, and (ii) performing the iteration with the three parameters updated from memory at each step.
Some numerical tests of the second memory-accelerating technique for the iterative scheme are listed in Table 5. For the second test equation, the COC = 3.535 obtained is slightly larger than the COC = 3.48 obtained in [9].
7. Improvement of Džunić's Method and Optimal Combination
In this section, we improve Džunić's method [9] by deriving the optimal parameters and their accelerating techniques. The idea of two optimal combinations, of Džunić's and Wu's methods and of Džunić's and Liu's methods, is introduced. Then, three new one-step iterative schemes with memory-updating techniques are developed.
7.1. Improvement of Džunić's Memory Method
As was performed in [9], the two parameters in Equation (5) were taken at specific values. To guarantee the third-order convergence of Džunić's method, fixing the first parameter is sufficient, as shown in Equation (6). Therefore, there exists the freedom to choose the value of p.
Theorem 5. For solving $f(x)=0$, the iterative scheme (5) has third-order convergence if the first parameter is fixed at its optimal value. If we take, furthermore, p as given in Equation (83), then Equation (5) has fourth-order convergence.
Proof. By Equations (62) and (63) and Equations (56)–(58), we have the required expansions. At the same time, Equation (65) is modified accordingly. Since we have no interest in the details of the fourth-order error equation, we write it in an abbreviated form, with the quantities defined as before. Then, Equation (86) is reduced to a form whose leading coefficient was derived in [9]. Letting this coefficient vanish, we can derive the optimal value of the first parameter; by using Equation (83), the coefficient preceding the third-order error term is reduced to zero. This completes the proof of Theorem 5. □
The third memory-accelerating technique for the iterative scheme (5) in Theorem 5, using p in Equation (83), reads as (i) giving the initial values of A and p and the initial point, and (ii) performing the iteration with both parameters updated from memory at each step.
Some numerical tests of the third memory-accelerating technique for the iterative scheme are listed in Table 6. We can notice that, for the second test equation, the COC = 4.022 obtained is larger than the COC = 3.48 obtained in [9].
7.2. Optimal Combination of Džunić's and Wu's Iterative Methods
Theorem 6. If the parameters and the combination function Q are given by Equation (97), then the combination of the iterative schemes (5) and (9), given in Equation (96), has fourth-order convergence.
Proof. By Equations (20), (21), and (82), we have Equation (98). If we take the parameters and Q by Equation (97), then Equation (98) becomes a fourth-order error equation. This ends the proof of Theorem 6. □
The fourth memory-accelerating technique for the iterative scheme (96) in Theorem 6 reads as (i) giving the initial values of A, B, and Q and the initial point, and (ii) performing the iteration with the three parameters updated from memory at each step.
Some numerical tests of the fourth memory-accelerating technique for the iterative scheme are listed in Table 7, which shows that the values of the COC are high; this technique, however, needs the differential term. We can notice that, for the second test equation, the COC = 6.643 obtained is much larger than the COC = 3.48 obtained in [9].
7.3. Optimal Combination of Džunić's and Liu's Iterative Methods
Theorem 7. If the parameters and the combination function Q are given by Equation (106), then the combination of the iterative schemes (5) and (10), given in Equation (105), has fourth-order convergence.
Proof. By Equations (24) and (82), we have Equation (107). If we take the parameters and Q by Equation (106), then Equation (107) becomes a fourth-order error equation. This ends the proof of Theorem 7. □
The fifth memory-accelerating technique for the iterative scheme (105) in Theorem 7 is (i) giving the initial values of A, B, and Q and the initial point, and (ii) performing the iteration with the three parameters updated from memory at each step.
Some numerical tests of the fifth memory-accelerating technique for the iterative scheme are listed in Table 8. We can notice that, for the second test equation, the COC = 5.013 obtained is larger than the COC = 3.48 obtained in [9].
8. Modification of Džunić's Method
As was performed in [9], the two parameters in Equation (5) were taken at specific values. To guarantee the third-order convergence of Džunić's method, fixing the first parameter is sufficient, as shown in Equation (6). Therefore, there exists the freedom to choose the value of the second parameter. We propose a modification of Džunić's method in Equation (113); that is, we keep the first parameter at its optimal value and scale the second one by a relaxation factor, which is to be determined for increasing the order of convergence. If the relaxation factor is taken to be one, Džunić's method is recovered. The present modification is different from that analyzed in Section 7.1 and Section 7.2, where p is a free parameter.
Theorem 8. For solving $f(x)=0$, the iterative scheme (113) with a generic relaxation factor has third-order convergence, as in Equation (114). If we take, furthermore, the relaxation factor given by Equation (115), then Equation (113) has fourth-order convergence, as in Equation (116).
Proof. Insert the relaxation factor into the parameters. It then follows from Equations (89) and (90) that the error equation takes the corresponding form, where the optimal parameter values were inserted. For the vanishing of the third-order error term, we need to solve the resulting algebraic equation, whose solution is given by Equation (115). □
The sixth memory-accelerating technique for the iterative scheme (113) in Theorem 8 is (i) giving the initial values of A, B, and the relaxation factor and the initial point, and (ii) performing the iteration with the parameters updated from memory at each step.
Some numerical tests of the sixth memory-accelerating technique for the iterative scheme are listed in Table 9. For the second test equation, the COC = 8.526 is much larger than the COC = 3.48 obtained in [9].
In Equation (113), if we take the relaxation factor to be one, then Džunić's method is recovered. In Table 10, the values of the COC are compared, which shows that the COC obtained by the sixth memory-accelerating technique is larger than that obtained by Džunić's memory method.
Notice that, by using the suggested initial values given in [9] and using the alternative formula in Equation (124) instead of that in Equation (35), we can obtain the COC = 3.538 for the second test equation, which is close to the lower bound of 3.56 derived in [9]. However, using other initial values, the COC = 4.990 is obtained by Equation (35), and the COC = 3.895 is obtained by Equation (124). Most papers have used Equation (35) to compute the COC. In any case, the lower bound derived in [9] may underestimate the true value of the COC. As shown in Table 10, all COCs obtained by Džunić's memory method are greater than 3.56.
9. A Lie Symmetry Method
As a practical application of the proposed iterative schemes, we developed a Lie symmetry method based on the Lie group SL(2, ℝ) to solve a second-order nonlinear boundary-value problem. This Lie symmetry method was first developed in [34] for computing the eigenvalues of the generalized Sturm–Liouville problem.
Let the boundary-value problem be given by Equation (125) with the boundary conditions in Equation (126), whose exact solution is Equation (127). The conventional shooting method assumes an unknown initial slope, integrates Equation (125) with the initial conditions consisting of the given left-end value and the assumed slope, and thereby obtains an implicit equation to be solved.
From Equation (125), a nonlinear system consisting of two first-order ordinary differential equations follows, as given in Equation (128). Let the coefficient matrix of this system be defined accordingly. The Lie symmetry system in Equation (128) permits a Lie group symmetry SL(2, ℝ), a two-dimensional real-valued special linear group, because the coefficient matrix has zero trace.
By using the closure property of the Lie group, there exists a group element such that the mapping in Equation (130) holds, where the quantities are defined in Equations (131) and (132) and x is an unknown weighting factor to be determined. Then, it follows from Equations (130) and (133) that Equation (134) holds. Since the two boundary values are given, we can obtain the unknown initial slope from Equations (132) and (134), as expressed in Equations (135) and (136). It is interesting that the unknown slope can be derived in Equation (136) by using the Lie symmetry method; this is more powerful than the traditional shooting method, in which no such explicit formula for the slope can be obtained.
Now, we apply the fourth-order Runge–Kutta method to integrate Equation (125) with the initial conditions given by the left-end value and the initial slope from Equation (136), in terms of x. The right-end value must satisfy the right boundary condition, which renders an implicit function of x. We take a fixed number of steps in the Runge–Kutta method and fix the initial guess. In Table 11, we compare the NI, the error of the right-end condition, and the maximum error of u obtained by comparing to Equation (127). The weighting factor x is obtained such that the initial slope computed from Equation (136) is very close to the exact one.
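The following is a minimal sketch of this solution loop, under our own illustrative assumptions: a generic right-hand side g for Equation (125), Dirichlet data a and b, and a placeholder slope(x) standing in for the explicit formula (136). The scalar residual in the weighting factor x can then be solved by any of the derivative-free one-step schemes of this paper.

```python
def rk4(g, u0, du0, x0=0.0, x1=1.0, n=100):
    """Integrate u'' = g(t, u, u') by the classical fourth-order
    Runge-Kutta method; returns (u(x1), u'(x1))."""
    h = (x1 - x0) / n
    t, y = x0, [u0, du0]                       # y = (u, u')
    def F(t, y):
        return [y[1], g(t, y[0], y[1])]
    for _ in range(n):
        k1 = F(t, y)
        k2 = F(t + h/2, [y[i] + h/2 * k1[i] for i in range(2)])
        k3 = F(t + h/2, [y[i] + h/2 * k2[i] for i in range(2)])
        k4 = F(t + h, [y[i] + h * k3[i] for i in range(2)])
        y = [y[i] + h/6 * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]
        t += h
    return y

def residual(x, g, a, b, slope):
    """Right-end mismatch as an implicit scalar function of the
    weighting factor x; slope(x) plays the role of Equation (136)."""
    uR, _ = rk4(g, a, slope(x))
    return uR - b
```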
Instead of the Dirichlet boundary conditions in Equation (126), we consider mixed-type boundary conditions, as given in Equation (137). Now, from Equation (135), Equation (138) follows. In Table 12, we compare the NI, the error of the right-end condition, and the maximum error of u. The weighting factor x is obtained such that the quantity computed from Equation (138) is very close to the exact one.
10. Conclusions
In this paper, we addressed five one-step iterative methods: A (Wu's method), B (Liu's method), C (a novel method), D (Džunić's method), and E (a modification of Džunić's method). Without specific values of the parameters and without memory, they all have second-order convergence; when specific optimal values of the parameters are used, they have third-order convergence. Three critical values, the first three Taylor coefficients at the root, determine the parameters that are crucial for achieving good performance of the designed iterative scheme, such that the coefficients preceding the second- and third-order error terms are zeros. We introduced a combination function, which is determined by raising the order of convergence. The optimal combination of A and B can generate a fourth-order one-step iterative scheme. When the values of the parameters and the combination function were obtained by a memory-accelerating method with the third-degree Newton polynomial interpolating the previous and current data, we obtained the first memory-accelerating technique to realize a fourth-order one-step iterative scheme.
In the novel method C, a relaxation factor appeared. By using the memory-accelerating method to update the values of the relaxation factor and the other parameters, we obtained the second memory-accelerating technique to realize the fourth-order convergence of the derived novel one-step iterative scheme.
We mathematically improved Džunić's method to an iterative scheme with fourth-order convergence, and the third memory-accelerating technique was developed to realize a fourth-order one-step iterative scheme based on Džunić's memory method.
The optimal combination of A and D generated the fourth memory-accelerating technique to realize a fourth-order one-step iterative scheme based on an optimal combination function between Džunić's and Wu's methods. The optimal combination of B and D generated the fifth memory-accelerating technique to realize a fourth-order one-step iterative scheme based on an optimal combination function between Džunić's and Liu's methods.
In E, we finally introduced a relaxation factor into Džunić's method, which is optimized to achieve fourth-order convergence by the sixth memory-accelerating technique.
In the first and fourth memory-accelerating techniques, three evaluations of the function and its derivative were required. In contrast, the second, third, fifth, and sixth memory-accelerating techniques needed only two function evaluations. Numerical tests confirmed that these fourth-order one-step iterative schemes performed very well, with high values of the COC and E.I. Among them, the fifth memory-accelerating technique was the best one in terms of the COC and E.I. for all test examples. Recall that the efficiency index of the optimal fourth-order two-step iterative scheme with three function evaluations and without memory is E.I. = $4^{1/3}\approx 1.587$.
As an application of the derivative-free one-step iterative schemes with the second, third, fifth, and sixth memory-accelerating techniques, a second-order nonlinear boundary-value problem was solved by the Lie symmetry method. It is remarkable that the Lie symmetry method can express the unknown initial slope as an explicit formula in the weighting factor x, whose implicit nonlinear equation can then be solved with high efficiency and high accuracy.
The basic iterative schemes in Equations (9) and (10) are applicable to finding the multiple roots of a nonlinear equation, for instance, an equation with a triple root; such a case can be treated well by the proposed accelerated one-step iterative schemes. As for systems of nonlinear equations, more studies are needed to extend the presented accelerating techniques.
Author Contributions
Conceptualization, C.-S.L.; Methodology, C.-S.L. and C.-W.C.; Software, C.-S.L. and C.-W.C.; Validation, C.-S.L. and C.-W.C.; Formal analysis, C.-S.L. and C.-W.C.; Investigation, C.-S.L., C.-W.C. and C.-L.K.; Resources, C.-S.L., C.-W.C. and C.-L.K.; Data curation, C.-S.L., C.-W.C. and C.-L.K.; Writing—original draft, C.-S.L.; Writing—review & editing, C.-W.C.; Visualization, C.-S.L., C.-W.C. and C.-L.K.; Supervision, C.-S.L. and C.-W.C.; Project administration, C.-W.C. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
The data presented in this study are available on request from the corresponding authors. The data are not publicly available due to privacy restrictions.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Chun, C.; Ham, Y. Some fourth-order modifications of Newton’s method. Appl. Math. Comput. 2008, 197, 654–658. [Google Scholar] [CrossRef]
- Noor, M.A.; Noor, K.I.; Waseem, M. Fourth-order iterative methods for solving nonlinear equations. Int. J. Appl. Math. Eng. Sci. 2010, 4, 43–52. [Google Scholar]
- Chun, C. Some fourth-order iterative methods for solving nonlinear equations. Appl. Math. Comput. 2008, 195, 454–459. [Google Scholar] [CrossRef]
- King, R. A family of fourth-order iterative methods for nonlinear equations. SIAM J. Numer. Anal. 1973, 10, 876–879. [Google Scholar] [CrossRef]
- Li, S. Fourth-order iterative method without calculating the higher derivatives for nonlinear equation. J. Algorithms Comput. Technol. 2019, 13. [Google Scholar] [CrossRef]
- Chun, C. Certain improvements of Chebyshev-Halley methods with accelerated fourth-order convergence. Appl. Math. Comput. 2007, 189, 597–601. [Google Scholar] [CrossRef]
- Kou, J.; Li, Y.; Wang, X. Fourth-order iterative methods free from second derivative. Appl. Math. Comput. 2007, 184, 880–885. [Google Scholar]
- Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: New York, NY, USA, 1964. [Google Scholar]
- Džunić, J. On efficient two-parameter methods for solving nonlinear equations. Numer. Algorithms 2013, 63, 549–569. [Google Scholar]
- Haghani, F.K. A modified Steffensen’s method with memory for nonlinear equations. Int. J. Math. Model. Comput. 2015, 5, 41–48. [Google Scholar]
- Khdhr, F.W.; Saeed, R.K.; Soleymani, F. Improving the computational efficiency of a variant of Steffensen’s method for nonlinear equations. Mathematics 2019, 7, 306. [Google Scholar] [CrossRef]
- Wu, X.Y. A new continuation Newton-like method and its deformation. Appl. Math. Comput. 2000, 112, 75–78. [Google Scholar] [CrossRef]
- Lee, M.Y.; Kim, Y.I.; Magreñán, Á.A. On the dynamics of tri-parametric family of optimal fourth-order multiple-zero finders with a weight function of the principal mth root of a function-function ratio. Appl. Math. Comput. 2017, 315, 564–590. [Google Scholar]
- Zafar, F.; Cordero, A.; Torregrosa, J.R. Stability analysis of a family of optimal fourth-order methods for multiple roots. Numer. Algorithms 2019, 81, 947–981. [Google Scholar] [CrossRef]
- Thangkhenpau, G.; Panday, S.; Mittal, S.K.; Jäntschi, L. Novel parametric families of with and without memory iterative methods for multiple roots of nonlinear equations. Mathematics 2023, 14, 2036. [Google Scholar] [CrossRef]
- Singh, M.K.; Singh, A.K. A derivative free globally convergent method and its deformations. Arab. J. Math. 2021, 10, 481–496. [Google Scholar] [CrossRef]
- Singh, M.K.; Argyros, I.K. The dynamics of a continuous Newton-like method. Mathematics 2022, 10, 3602. [Google Scholar] [CrossRef]
- Liu, C.S. A new splitting technique for solving nonlinear equations by an iterative scheme. J. Math. Res. 2020, 12, 40–48. [Google Scholar] [CrossRef]
- Liu, C.S.; Hong, H.K.; Lee, T.L. A splitting method to solve a single nonlinear equation with derivative-free iterative schemes. Math. Comput. Simul. 2021, 190, 837–847. [Google Scholar] [CrossRef]
- Džunić, J.; Petković, M.S. On generalized biparametric multipoint root finding methods with memory. J. Comput. Appl. Math. 2014, 255, 362–375. [Google Scholar]
- Wang, X.; Zhang, T. A new family of Newton-type iterative methods with and without memory for solving nonlinear equations. Calcolo 2014, 51, 1–15. [Google Scholar] [CrossRef]
- Cordero, A.; Lotfi, T.; Bakhtiari, P.; Torregrosa, J.R. An efficient two-parameter family with memory for nonlinear equations. Numer. Algorithms 2015, 68, 323–335. [Google Scholar] [CrossRef]
- Chanu, W.H.; Panday, S.; Thangkhenpau, G. Development of optimal iterative methods with their applications and basins of attraction. Symmetry 2022, 14, 2020. [Google Scholar] [CrossRef]
- Lotfi, T.; Soleymani, F.; Noori, Z.; Kiliçman, A.; Haghani, F.K. Efficient iterative methods with and without memory possessing high efficiency indices. Discret. Dyn. Nat. Soc. 2014, 2014, 912796. [Google Scholar] [CrossRef]
- Wang, X. An Ostrowski-type method with memory using a novel self-accelerating parameter. J. Comput. Appl. Math. 2018, 330, 710–720. [Google Scholar] [CrossRef]
- Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Dynamics of iterative families with memory based on weight functions procedure. Appl. Math. Comput. 2019, 354, 286–298. [Google Scholar] [CrossRef]
- Torkashvand, V.; Kazemi, M.; Moccari, M. Structure a family of three-step with-memory methods for solving nonlinear equations and their dynamics. Math. Anal. Convex Optim. 2021, 2, 119–137. [Google Scholar]
- Sharma, E.; Panday, S.; Mittal, S.K.; Joit, D.M.; Pruteanu, L.L.; Jäntschi, L. Derivative-free families of with- and without-memory iterative methods for solving nonlinear equations and their engineering applications. Mathematics 2023, 14, 4512. [Google Scholar] [CrossRef]
- Thangkhenpau, G.; Panday, S.; Bolundut, L.C.; Jäntschi, L. Efficient families of multi-point iterative methods and their self-acceleration with memory for solving nonlinear equations. Symmetry 2023, 15, 1546. [Google Scholar] [CrossRef]
- Wu, X.Y. Newton-like method with some remarks. Appl. Math. Comput. 2007, 118, 433–439. [Google Scholar] [CrossRef]
- Wang, H.; Liu, H. Note on a cubically convergent Newton-type method under weak conditions. Acta Appl. Math. 2010, 110, 725–735. [Google Scholar] [CrossRef]
- Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
- Argyros, I.K.; Chen, D.; Qian, Q. The Jarratt method in Banach space setting. J. Comput. Appl. Math. 1994, 51, 103–106. [Google Scholar] [CrossRef]
- Liu, C.S. Computing the eigenvalues of the generalized Sturm-Liouville problems based on the Lie-group SL(2,ℝ). J. Comput. Appl. Math. 2012, 236, 4547–4560. [Google Scholar] [CrossRef]
Table 1.
The comparison of different methods for the COCs computed. ×: undefined.

| n | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|
| NM | 1.594 | 1.841 | 1.978 | 1.999 | × | × |
| Algorithm 1 | 2.779 | 3.046 | × | × | | |
| Algorithm 2 | 2.634 | 2.968 | × | × | | |
| Algorithm 3 | 4.899 | × | × | | | |
Table 2.
The comparison of different methods for the number of iterations.

| Functions | x₀ | NM | JM | TM | KM | CM | Algorithm 3 |
|---|---|---|---|---|---|---|---|
| | −0.3 | 55 | 46 | 46 | 49 | 9 | 5 |
| | 0 | 5 | 3 | 3 | 3 | 3 | 3 |
| | 3 | 7 | 4 | 4 | 4 | 4 | 4 |
| | 3.5 | 11 | 6 | 6 | 7 | 7 | 5 |
| | 1 | 7 | 4 | 4 | 8 | 4 | 4 |
Table 3.
The NI, COC, and E.I. for the method by updating three parameters, A, B, and Q, in the first memory-accelerating technique.

| Functions | x₀ | | | NI | COC | E.I. = (COC)^{1/3} |
|---|---|---|---|---|---|---|
| | 1.3 | | −0.1 | 3 | 4.207 | 1.614 |
| | 0.2 | | −0.5 | 3 | 5.433 | 1.758 |
| | 2.1 | | 0 | 3 | 5.090 | 1.720 |
| | −0.5 | | 0.3 | 3 | 6.095 | 1.827 |
| | 1.3 | | 0 | 3 | 6.795 | 1.894 |
Table 4.
The NI, COC, and E.I. for the method by updating two parameters, A and B.

| Relaxation factor | −0.9 | −0.6 | −0.3 | −0.2 | 0.1 | 0.3 | 0.6 | 0.9 |
|---|---|---|---|---|---|---|---|---|
| NI | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 |
| COC | 3.413 | 3.114 | 3.233 | 3.443 | 3.081 | 2.974 | 2.870 | 2.795 |
| E.I. = (COC)^{1/2} | 1.847 | 1.765 | 1.798 | 1.855 | 1.755 | 1.724 | 1.694 | 1.672 |
Table 5.
The NI, COC, and E.I. for the method by updating three parameters, A, B, and the relaxation factor, in the second memory-accelerating technique.

| Functions | x₀ | | | NI | COC | E.I. = (COC)^{1/2} |
|---|---|---|---|---|---|---|
| | 1 | | −0.3 | 4 | 4.680 | 2.163 |
| | 0.9 | | 4 | 4 | 3.535 | 1.880 |
| | 2 | | −0.3 | 4 | 3.992 | 1.998 |
| | −0.5 | | 10 | 4 | 4.237 | 2.058 |
| | 1.5 | | 0.3 | 5 | 3.610 | 1.900 |
Table 6.
The NI, COC, and E.I. for the method by updating two parameters, A and p, in the third memory-accelerating technique.

| Functions | x₀ | | | NI | COC | E.I. = (COC)^{1/2} |
|---|---|---|---|---|---|---|
| | 1 | 5.25 | −0.3 | 3 | 5.169 | 2.274 |
| | 1.3 | 4.44 | −0.1 | 4 | 4.022 | 2.005 |
| | 2 | 5.11 | −3 | 3 | 3.126 | 2.031 |
| | −0.5 | 4.83 | −10 | 3 | 7.227 | 2.688 |
| | 1.3 | −1.926 | −10 | 3 | 6.945 | 2.635 |
Table 7.
The NI, COC, and E.I. for the method by updating three parameters, A, B, and Q, in the fourth memory-accelerating technique.

| Functions | x₀ | | | | NI | COC | E.I. = (COC)^{1/3} |
|---|---|---|---|---|---|---|---|
| | 1.2 | 23.75 | 0.39 | 5 | 3 | 5.156 | 1.728 |
| | 1.4 | −1.97 | 8.34 | 0.5 | 5 | 6.643 | 1.880 |
| | 2.2 | 3.25 | −0.23 | 2 | 3 | 9.127 | 2.090 |
| | −0.5 | 4.83 | 0.61 | 0.5 | 3 | 7.517 | 1.959 |
| | 1.3 | −3.733 | 0.475 | 0.1 | 3 | 7.091 | 1.921 |
Table 8.
The NI, COC, and E.I. for the method by updating three parameters, A, B, and Q, in the fifth memory-accelerating technique.

| Functions | x₀ | | | | NI | COC | E.I. = (COC)^{1/2} |
|---|---|---|---|---|---|---|---|
| | 1.2 | 9.99 | 0.67 | 2 | 3 | 5.028 | 2.242 |
| | 1.3 | 5.06 | 469,683 | 1 | 6 | 5.013 | 2.239 |
| | 2.2 | 3.25 | −0.231 | 2 | 3 | 9.127 | 3.021 |
| | −0.5 | 4.83 | 0.61 | 5 | 3 | 7.343 | 2.742 |
| | 1.3 | −3.733 | 0.475 | 0.1 | 3 | 7.091 | 2.663 |
Table 9.
The NI, COC, and E.I. for the method by updating three parameters, A, B, and the relaxation factor, in the sixth memory-accelerating technique.

| Functions | x₀ | | | | NI | COC | E.I. = (COC)^{1/2} |
|---|---|---|---|---|---|---|---|
| | 1.2 | 17.09 | 0.48 | 2 | 3 | 5.028 | 2.423 |
| | 1.3 | −13.79 | 0.721 | 2 | 5 | 8.526 | 2.920 |
| | 2.2 | 3.25 | −0.23 | 2 | 3 | 7.879 | 2.807 |
| | −0.5 | 22.29 | 0.394 | 1 | 3 | 5.502 | 2.346 |
| | 1.3 | −3.733 | 0.475 | 1 | 3 | 5.081 | 2.254 |
Table 10.
The values of the COC for Džunić's memory method and the sixth memory-accelerating technique.

| Functions | | | | | |
|---|---|---|---|---|---|
| Džunić's method | 4.819 | 4.990 | 5.852 | 4.496 | 4.469 |
| Equation (113) | 5.028 | 8.526 | 7.879 | 5.502 | 5.081 |
Table 11.
For Equations (125) and (126), comparing the performances of the second, third, fifth, and sixth memory-accelerating techniques in the solution of a second-order nonlinear boundary-value problem by the Lie symmetry method.

| Methods | Second | Third | Fifth | Sixth |
|---|---|---|---|---|
| NI | 4 | 3 | 3 | 4 |
| Error of the right-end condition | | | | |
| Maximum error of u | | | | |
Table 12.
For Equations (125) and (137), comparing the performances of the second, third, fifth, and sixth memory-accelerating techniques in the solution of a second-order nonlinear boundary-value problem by the Lie symmetry method.

| Methods | Second | Third | Fifth | Sixth |
|---|---|---|---|---|
| NI | 5 | 3 | 3 | 3 |
| Error of the right-end condition | | | | |
| Maximum error of u | | | | |