1. Introduction
We consider a second-order nonlinear ordinary differential equation subject to boundary values:
where is a given nonlinear continuous function, and a and b are given constants. We integrate Equation (1), starting from the initial values and , where x is an unknown value determined by in Equation (2), which results in a nonlinear equation:
where is a given continuous function, not necessarily a differentiable function. Therefore, the solutions of Equations (1) and (2) can be obtained by solving an implicit nonlinear equation to find the root x, where is the value of at .
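To make this reduction concrete, the following minimal sketch illustrates the shooting idea under illustrative assumptions that are ours rather than the paper's: a sample boundary value problem y'' = f(t, y, y') on [0, 1] with y(0) = a, y(1) = b, a fixed-step RK4 integrator, and a simple choice of f. The boundary mismatch defines the implicit nonlinear function whose root x is sought.

```python
def shoot(x, f, a, b, t0=0.0, t1=1.0, n=1000):
    """Integrate y'' = f(t, y, y') from y(t0) = a, y'(t0) = x with classical RK4
    and return the boundary mismatch F(x) = y(t1; x) - b."""
    h = (t1 - t0) / n
    t, y, yp = t0, a, x
    for _ in range(n):
        def g(t, y, yp):                 # first-order system: (y, y')' = (y', f)
            return yp, f(t, y, yp)
        k1y, k1p = g(t, y, yp)
        k2y, k2p = g(t + h/2, y + h/2*k1y, yp + h/2*k1p)
        k3y, k3p = g(t + h/2, y + h/2*k2y, yp + h/2*k2p)
        k4y, k4p = g(t + h, y + h*k3y, yp + h*k3p)
        y  += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        yp += h/6 * (k1p + 2*k2p + 2*k3p + k4p)
        t  += h
    return y - b

# Illustrative BVP: y'' = -y, y(0) = 0, y(1) = 0.5; the unknown initial slope x solves F(x) = 0.
F = lambda x: shoot(x, lambda t, y, yp: -y, a=0.0, b=0.5)
# F is the implicit nonlinear function; any root finder (bisection, Newton, etc.) can be applied to it.
```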
The linear fractional one-step iterative scheme below [1]
is cubically convergent if and , where and . The parameters and can be updated to speed up the convergence by the memory-dependent method [2]. One can refer to [3,4,5,6,7,8,9,10] for more memory-dependent iterative methods.
For the Newton method, there exist some weak points, as pointed out in [11]. In addition, for an odd function with , there exist two-cycle points of the Newton method. Let be the mapping function of the Newton method. A cyclic point is determined by with and , which becomes a nonlinear equation:
When is a solution of Equation (7), is also a solution, because of and . The pair are two-cycle points of the Newton method, with the properties and ; hence, and .
For as an instance, it follows from Equation (7) that
whose solutions are and , which are two-cycle points of the Newton method for the function . When the initial guess satisfies , the Newton method is convergent; however, when , the Newton method is divergent.
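This two-cycle behavior is easy to check numerically. The sketch below uses f(x) = arctan(x) as an illustrative odd function (our choice, not necessarily the paper's example); its Newton mapping has a two-cycle at x ≈ ±1.3917, so initial guesses of smaller magnitude converge to the root 0, while larger ones diverge.

```python
import math

f  = math.atan                            # illustrative odd function with a Newton two-cycle
df = lambda x: 1.0 / (1.0 + x * x)

def newton_trajectory(x, steps=20):
    """Iterate the Newton mapping g(x) = x - f(x)/f'(x) and record the trajectory."""
    xs = [x]
    for _ in range(steps):
        x = x - f(x) / df(x)
        xs.append(x)
    return xs

# The two-cycle point c ~ 1.3917 satisfies g(c) = -c and g(-c) = c.
print(newton_trajectory(1.0)[-1])         # |x0| < c: converges to the root 0
print(newton_trajectory(1.5, steps=4))    # |x0| > c: the iterates oscillate in sign and grow
```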
He et al. [12] proposed an iterative algorithm for approximating a common element of the set of fixed points of a nonexpansive mapping and the solutions of a variational inequality on Hadamard manifolds. They proved that the sequence generated by the suggested algorithm strongly converges to the common solution of the fixed point problem. After that, Zhou et al. [13] built an accurate threshold representation theory and, on that basis, developed a fast and efficient iterative thresholding algorithm for log-sum regularization. Note that log-sum regularization possesses an exceptionally strong capability to solve the sparsity problem.
We plan to develop a new iterative scheme to overcome these drawbacks of the Newton method. The idea behind the development of the new iterative scheme is the SOR technique [14] for the following system of linear equations:
where and , , and , respectively, represent a diagonal matrix, a strictly upper triangular matrix, and a strictly lower triangular matrix of .
An equivalent linear system from Equations (9) to (10) is
Equation (11) is multiplied by w, and then is added on both sides; the corresponding iterative form is the SOR [15]:
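A minimal sketch of the resulting SOR iteration for A x = b, with A split into diagonal, strictly lower, and strictly upper parts; the test matrix and the relaxation factor w = 1.25 are illustrative choices. For 1 < w < 2 the iteration over-relaxes and, for suitable matrices, converges faster than Gauss–Seidel (w = 1).

```python
import numpy as np

def sor(A, b, w=1.25, x0=None, tol=1e-12, max_iter=500):
    """Successive over-relaxation for A x = b, with A = D + L + U
    (diagonal, strictly lower, and strictly upper parts)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s1 = A[i, :i] @ x[:i]            # already-updated components
            s2 = A[i, i+1:] @ x_old[i+1:]    # previous-iteration components
            x[i] = (1.0 - w) * x_old[i] + w * (b[i] - s1 - s2) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# Illustrative diagonally dominant system
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])
print(sor(A, b))   # should agree with np.linalg.solve(A, b)
```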
Traub’s technique is a typical method with memory, in which data that appeared in the previous iteration are adopted in the following iteration [16]:
By obtaining and and incorporating the memory of , Traub’s iterative scheme can proceed to find the solution of upon convergence. For a recent report on the progress of the memory method with accelerating parameters in one-step iterative methods, one can refer to [2], while for a recent progress report on the memory method with accelerating parameters in two-step iterative methods, one can refer to [11]. One major goal of the paper is the development of multi-step iterative schemes with a new memory method that determines the accelerating parameters by an updating technique using information at the current step.
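Since Formula (14) is not reproduced here, the sketch below shows one common formulation of Traub's derivative-free method with memory (an assumption about the exact variant); the accelerating parameter gamma is updated from data generated in the current step and carried into the next one.

```python
def traub_with_memory(f, x0, gamma0=0.01, tol=1e-14, max_iter=50):
    """One common formulation of Traub's derivative-free method with memory:
    w_k = x_k + gamma_k * f(x_k),  x_{k+1} = x_k - f(x_k) / f[x_k, w_k],
    with the accelerating parameter updated as gamma_{k+1} = -1 / f[x_k, x_{k+1}]."""
    x, gamma = x0, gamma0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x + gamma * fx
        dd = (f(w) - fx) / (w - x)                 # divided difference f[x_k, w_k]
        x_new = x - fx / dd
        gamma = -(x_new - x) / (f(x_new) - fx)     # memory: -1 / f[x_k, x_{k+1}]
        x = x_new                                  # in practice f(x_new) is reused next step
    return x

# Example: root of x^3 - 2 near 1.3
print(traub_with_memory(lambda x: x**3 - 2.0, 1.3))   # ~ 1.259921
```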
We have arranged the other contents of the paper as follows. Two types of one-step iterative schemes are introduced in Section 2 as nonlinear perturbations of the Newton method. Section 3 gives a local convergence analysis of them, obtaining the optimal values of the parameters for fourth-order convergence. In Section 4, we evaluate these two one-step iterative schemes using the optimal values; a memory-dependent technique is developed for updating the optimal values of the nonlinear one-step iterative scheme of fractional type. In Section 5, we develop multi-step iterative schemes of fractional type and give a detailed convergence analysis. Numerical experiments of the fractional type iterative schemes are executed in Section 6. In Section 7, we derive the memory-dependent method for determining the critical parameters in three linear fractional type iterative schemes, and new updating methods are developed for the linear fractional type three-step iterative scheme. An accelerated two-step memory-dependent iterative scheme is developed in Section 8. A nonlinear three-step iterative scheme of fractional type is developed in Section 9, where an accelerated memory-dependent method based on the Newton interpolant is used to update three critical parameters. Finally, the achievements are summarized in Section 10.
2. Nonlinear Perturbations of the Newton Method
The idea leading to Equations (12) and (13) from Equation (9) motivates a nonlinear perturbation of the Newton method, which includes the introduction of a parameter w, the addition of the same term on both sides, and the generation of an iterative form. We realize it for Equation (3) as follows. By adding on both sides of Equation (3), which can be extended to
is split into , such that
Next, we move to the right-hand side, which results in
where is a weight factor and H is a weight function; the iterative form is
which results in
It is a new one-step iterative scheme to solve Equation (3), including a parameter and a weight function H to be assigned. If , Equation (16) is a continuation Newton-like method developed by the author of [17]. If one takes and , the Newton method (NM) is recovered.
Compared with a third-order iterative scheme, namely the Halley method [18] (HM),
Equation (16) possesses two advantages: the convergence order is increased to four, and is not needed.
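For reference, a minimal sketch of the classical Halley iteration, which requires f, f', and f'' at every step:

```python
def halley(f, df, d2f, x0, tol=1e-14, max_iter=50):
    """Classical Halley method: x_{n+1} = x_n - 2 f f' / (2 f'^2 - f f'')."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx, d2fx = df(x), d2f(x)
        x = x - 2.0 * fx * dfx / (2.0 * dfx**2 - fx * d2fx)
    return x

# Example: cube root of 2
print(halley(lambda x: x**3 - 2.0, lambda x: 3*x**2, lambda x: 6*x, 1.5))
```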
As a variant of Equation (16), we consider the following nonlinear one-step iterative scheme of fractional type:
which includes two parameters and a weight function H, to be assigned. Compared with Equation (17), Equation (18) does not need the differential terms and . For , , and in Equation (18), the resulting iterative scheme was proven to have third-order convergence in [19].
To improve the efficiency of one-step iterative schemes, many iterative schemes based on two-step and three-step methods were presented in [20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36], and some were based on the multi-composition of functions [37,38,39,40].
5. Multi-Step Iterative Schemes of Fractional Type
As extensions of Equation (4), we propose the following fractional type two-step iterative scheme:
as well as the fractional type three-step iterative scheme:
Theorem 3. The function is sufficiently differentiable on the domain I, and is a simple root with and . If
then the iterative Scheme (4) has third-order convergence.

Proof. Refer to [19] for a different approach to the proof. By Equation (22),
where .
Let
In view of Equation (50), is a function of ; hence, F is deemed to be a function of . It follows from Equations (4), (21), and (51) that
where
Invoking the Taylor series for generates the following:
It is apparent from Equations (53) and (54) that
By inserting into the formulas
we have
Then, from Equations (56) and (57), it follows that
which, with the help of Equations (49) and (60), generates
Hence, Equation (55) becomes
and from Equation (52), we can obtain
The third-order convergence is thus proven. □
Theorem 4. The function is sufficiently differentiable on the domain I, and is a simple root with and ; the iterative schemes in Equations (47) and (48), with the optimal values and , have sixth- and twelfth-order convergence, respectively.

Proof. When is a small quantity, we have the following series:
We rewrite Equation (62) as
Then, from the first one in Equation (47) and Equations (51), (65), and (20), it follows that
where
We expand around r, with , as follows:
Inserting Equations (66) and (68) into the second one in Equation (47) yields
Owing to Equations (20) and (49), it becomes
Upon using Equations (64) and (67), we can obtain
Thus, we complete the proof that the iterative Scheme (47) has sixth-order convergence.

Inserting Equations (66) and (68) into the second one in Equation (48) and using Equations (49) and (64) yields
Then, we expand around the root r in terms of the Taylor series:
Upon inserting Equations (72) and (73) into the last one in Equation (48), we have
which, owing to Equations (20), (49), (64), and (67), becomes
The twelfth-order convergence of Equation (48) is thus proven. □
It should be noted that the following three-step iterative scheme
has ninth-order convergence [45]. The first step is obtained from the Halley method in Equation (17). In Equation (76), six function evaluations are required, such that E.I. = , which is the same as the Halley method. For Equation (48) with and , E.I. = is much larger than that of Equation (76).
There are two main factors behind the fact that the E.I.s of the iterative Schemes (47) and (48) are and , respectively. The first factor is that the optimal parameters and are used in all steps of Equations (47) and (48). The second factor is that only one new function is used in the second step of Equation (47), and only two new functions and are used in the second and third steps of Equation (48). Therefore, when Equation (47) uses the optimal parameters’ values, its E.I. = ; only two function evaluations, and , are required. When Equation (48) uses the optimal parameters’ values, its E.I. = ; only three function evaluations, , , and , are required.
6. Numerical Experiments Based on Theorems 3 and 4
Equations (4), (47), and (48) are sequentially labeled as Algorithms 1–3. The requirement of is mandatory for the convergence of most algorithms that include derivative terms, like the Newton method. In order to investigate the applicability of Algorithms 1–3 under this condition, we consider a simple case of , where is a double root and . By taking and and starting from , Algorithm 1 converges in three iterations, while both Algorithms 2 and 3 converge in two iterations. Even for , where is a triple root and , with and , and starting from , Algorithm 1 converges in six iterations, Algorithm 2 in four iterations, and Algorithm 3 in three iterations.
Then, we consider with ; we can compute the COC by Equation (35) in Table 5 to find the root , where and are used in Algorithms 1–3.
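Equation (35) is not reproduced above; a commonly used estimate of the computational order of convergence from consecutive iterates (our assumption of the form used) is sketched below.

```python
import math

def coc(x_prev2, x_prev1, x_curr, x_next):
    """Computational order of convergence from four consecutive iterates:
    COC ~ ln(|x_{n+1}-x_n| / |x_n-x_{n-1}|) / ln(|x_n-x_{n-1}| / |x_{n-1}-x_{n-2}|)."""
    num = math.log(abs(x_next - x_curr) / abs(x_curr - x_prev1))
    den = math.log(abs(x_curr - x_prev1) / abs(x_prev1 - x_prev2))
    return num / den
```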
Unlike the NM, which is sensitive to the initial value , the new methods are insensitive to the initial value . For example, we seek another root of by the NM, starting from , which converges to (the third root) in two iterations; to in eleven iterations starting from ; and to in six iterations starting from . The new methods converge to in three or four iterations, no matter which initial value is used among .
The other test examples are given by
In Table 6, the numbers of iterations (NIs) are tabulated for solving , starting from . We compare the computed results with the Newton method (NM), the Halley method [18], the method of Soheili et al. [46], and the method of Bahgat [47]. The values of the parameters used are and .
In Table 7, the NIs and COCs obtained by Algorithms 1–3 with different values of the parameters are tabulated. It can be seen that Algorithm 3 is faster than Algorithms 1 and 2. Even though the values of the parameters are not the best ones, Algorithms 2 and 3 converge very fast.
Table 8 tabulates the NIs obtained by Algorithms 1–3, the NM, NNT (the method of Noor et al. [35]), CM (the method of Chun [20]), and NRM (the method of Noor et al. [21]).
We evaluate the E.I. as defined by Traub [16], where p is the order of convergence and m is the number of function evaluations per iteration. In Table 9, for different methods, we list the E.I. obtained by Algorithms 1–3, NM, HM (the Halley method [18]), CM (the method of Chun [20]), NRM (the method of Noor et al. [21]), LM (the method of Li [48]), MCM (the method of Milovanovic and Cvetkovic [49]), AM (the method of Abdul-Hassan [50]), and AHHRM (the method of Ahmad et al. [51]).
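As a worked instance of this definition (with p and m counted per iteration; these values follow from the stated orders and function-evaluation counts of Theorem 4 and Section 5 rather than being quoted from Table 9):
\[
\mathrm{E.I.} = p^{1/m}:\qquad \text{NM: } 2^{1/2} \approx 1.414,\qquad \text{Scheme (47): } 6^{1/2} \approx 2.449,\qquad \text{Scheme (48): } 12^{1/3} \approx 2.289.
\]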
7. Memory-Dependent Updating Iterative Schemes
In Equations (4), (47), and (48), the values of and are crucial, whose optimal values are and . In this section, we approximate and without using the differentials.
Given and , we take close to and in Equation (49), where .
In Equations (4), (47), and (48), if and are replaced by and , we can quickly obtain the solution, as shown in Table 10, for , where and .
For Algorithm 2, COC = 6.016 is obtained, and for Algorithm 3, COC = 7.077 is obtained. The value COC = 7.077 is much smaller than the theoretical value 12, as demonstrated in Table 9. However, from Equation (47), there are two function evaluations, and , which, according to the conjecture of Kung and Traub, give an optimal order of , which is smaller than COC = 6.016. Similarly, from Equation (48), there are three function evaluations, , , and , which, according to the conjecture of Kung and Traub, give the optimal order of , which is smaller than COC = 7.077.
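For reference, the Kung–Traub conjecture for multipoint iterative methods without memory bounds the order by
\[
p \le 2^{m-1}
\]
for m function evaluations per iteration (e.g., 2 for m = 2 and 4 for m = 3). In our reading, the higher orders observed here rely on the optimal parameter values, which carry extra information about the function at the root.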
In Table 11, for different functions, we list the NIs and COCs obtained by Algorithms 1–3. For , we obtain the root .
In Algorithm 3, the value of the COC, as just mentioned, was 7.077. However, this value is significantly smaller than the theoretically expected value of 12. The main reason behind this discrepancy is that we computed only two rough values of and by Equation (82) with two ad hoc values of and , which are not the optimal values of and . Below, we update the values of and by using the memory-dependent technique to obtain a better approximation of the optimal parameters and . Higher-order data interpolation by a higher-order polynomial can enhance the convergence order; however, at the same time, more algebraic operations are needed. By balancing the number of function evaluations, the algebraic operations, and their impact on the convergence order, we employ polynomial data interpolation up to the second order to approximate and .
To raise the COC of Algorithm 3, we can update the values and in Equation (82) after the first iteration by proposing the following Algorithm 4, which is depicted by (i) giving , , , and , such that , and computing and by Equation (82), and (ii) calculating, for ,
Here, and E.I. = 1.682 are the optimal ones.
Table 12 lists the results obtained by Algorithm 4. Some E.I.s are larger than 1.682.
There are some self-accelerating iterative methods for simple roots [3,52,53], which have been extended to self-accelerating techniques for iterative methods for multiple roots [8,54]. In Equations (86)–(88), the self-accelerating technique for and is quite simple compared to those in the literature.
The term in Equation (88) can be computed by the second-order polynomial interpolation:
In doing so, the function evaluations are reduced from , , , and to , , and , and the E.I. can be increased. With this modification, the iterative scheme is named Algorithm 5.
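As a sketch of the kind of second-order interpolation referred to here (the node and variable names are illustrative, not the paper's exact Formulas (89)–(91)), the second-order Newton interpolating polynomial built from three already-available samples can approximate a derivative value without any new function evaluation:

```python
def newton2_derivative(x0, x1, x2, f0, f1, f2, x):
    """Second-order Newton interpolating polynomial through (x0,f0), (x1,f1), (x2,f2):
    N2(t) = f0 + f[x0,x1](t-x0) + f[x0,x1,x2](t-x0)(t-x1);
    its derivative N2'(x) approximates f'(x) from existing samples."""
    d01  = (f1 - f0) / (x1 - x0)            # first divided difference
    d12  = (f2 - f1) / (x2 - x1)
    d012 = (d12 - d01) / (x2 - x0)          # second divided difference
    return d01 + d012 * ((x - x0) + (x - x1))

# Example: samples of f(x) = x^3 near 1; N2'(1.0) ~ f'(1.0) = 3
print(newton2_derivative(0.9, 1.0, 1.1, 0.9**3, 1.0, 1.1**3, 1.0))
```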
Table 13 lists the NIs, COCs, and E.I.s obtained by Algorithm 5. For three function evaluations, and E.I. = 1.5874 are the optimal ones. However, the values of the E.I. in Table 13 are much larger than E.I. = 1.5874. In Table 13,
The result of COC = 15.517 for is larger than that given in Table 7 in [3] with COC = 6.9. Here, the presented Algorithm 5 requires three function evaluations, but it needs a rough range to be specified that includes the solution as an inner point.
An iterative method that uses information from the current and previous iterations is called a method with memory. In addition to the given initial guesses and , and —calculated by Equations (86)–(88) and Equations (89)–(91)—only need the current values of , , , , , and ; hence, we point out that both Algorithms 4 and 5 are without the use of the memory of previous values. Other memory-dependent techniques can be seen in [3,52,53,54,55,56].
Moreover, we develop a more advanced updating technique using the information from and the second-order Newton polynomial interpolation, namely Algorithm 6, where we replace and with and , and update them with and . Algorithm 6 is depicted by (i) giving , , , and , such that , and computing and by Equation (82), and (ii) calculating, for ,
Table 14 lists the NIs, COCs, and E.I.s obtained by Algorithm 6. All E.I.s are larger than 1.5874. We found that the NI is not sensitive to the initial values of and ; however, we adjust and to make the E.I. as large as possible.
At this point, we have presented three memory-dependent updating techniques for the three-step iterative Scheme (48) of fractional type. Through the numerical tests of six nonlinear equations, Algorithms 5 and 6 are shown to be better than Algorithm 4.
8. An Accelerated Two-Step Memory-Dependent Method
Instead of Equation (36), we consider
where is to be determined. Equation (98) supplies an extra datum for Equation (47). Then, we impose an extra condition
to determine . Inserting into Equation (99) yields
which can be used to update .
The accelerated two-step memory-updating method (ATSMUM) consists of (i) giving , , , and , and computing and by Equation (82), and (ii) calculating, for ,
The ATSMUM saves a reasonable amount of computational cost.
Table 15 lists the results obtained by the ATSMUM. All E.I.s are larger than 1.5874. As designed, tended to very small values. The result of COC = 9.865 for is larger than that given in Table 7 in [3] with COC = 6.9, which requires two extra evaluations of the previous functions and .
In order to investigate the effect of on the convergence behavior, we give some testing values of in Equation (98) and do not consider the accelerating technique in Equation (108). Table 16 lists the results without using the accelerating technique in Equation (108). When the parameter in Equation (98) is given by trial and error, the resulting iterative scheme is usually not the optimal one. The particular benefit of the memory-dependent technique in quickening the relaxation factor of the iterative scheme can be seen in the increased values of the COC and E.I.
10. Conclusions
A nonlinear perturbation of the fixed-point Newton method was derived, which included two parameters and a weight function and permitted fourth-order convergence equipped with three critical parameters. We developed a one-step memory-dependent iterative scheme by updating the optimal values of the parameters with a third-degree Newton polynomial. For the data interpolation, a supplementary variable was computed. The E.I. was over , which is better than some two-step fourth-order convergence iterative schemes modified from the Newton method, whose E.I. was . We derived a one-step iterative scheme of fractional type in Algorithm 1. Because the order of convergence is three and the efficiency index is three, we simply extended Algorithm 1 to two-step and three-step iterative schemes, namely Algorithm 2 and Algorithm 3. It is interesting that the orders of convergence are greatly increased to six and twelve, and the E.I.s became and . For the two-step iterative scheme, the relaxation factor was accelerated, and its performance was very good. Moreover, for the nonlinear three-step iterative scheme of fractional type, the E.I. became . We developed three memory-dependent updating techniques to gradually obtain the optimal values of the critical parameters for the three-step iterative scheme of fractional type.

Through several numerical tests, as listed and compared in the tables presented herein, we revealed that the new methods can find the solution quickly, without using the information of the differentials, and have better convergence efficiency and performance than most methods in the literature. Through the studies performed in the paper, we found that the proposed iterative schemes are cost-saving, with low computational complexity: two function evaluations and some algebraic operations for updating the optimal parameters were required for the two-step iterative scheme of fractional type, and three function evaluations and some algebraic operations for updating the optimal parameters were required for the three-step iterative scheme of fractional type. These iterative schemes are especially useful for practical viability owing to their simple structures; they only need the implicit form of the nonlinear equation , while the explicit form of the function and the differential term are not required, which can be a great advantage in many practical engineering applications.