1. Introduction
The main advantage of derivative-free methods is that they can be used for finding the solution of a non-differentiable nonlinear function F(x) = 0. Compared to Newton's method [1], derivative-free iterative methods do not require the evaluation of derivatives during the iterations. Numerous derivative-free iterative methods have been proposed for finding the solution of nonlinear systems F(x) = 0, where F: D ⊆ ℝ^n → ℝ^n. Traub [2] proposed the well-known derivative-free iterative method in (1), which depends on a constant parameter V, where [x, y; F] is the first-order divided difference operator defined in [3].
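For implementation purposes, the following sketch computes the componentwise first-order divided difference of a system that is commonly used in this literature; whether it coincides exactly with the definition adopted in [3] is an assumption.

```python
import numpy as np

def divided_difference(F, x, y):
    """Componentwise first-order divided difference operator [x, y; F].

    Column j is built from
    (F_i(x_1..x_j, y_{j+1}..y_n) - F_i(x_1..x_{j-1}, y_j..y_n)) / (x_j - y_j),
    so it requires x_j != y_j for every j.
    """
    n = x.size
    dd = np.zeros((n, n))
    for j in range(n):
        upper = np.concatenate((x[:j + 1], y[j + 1:]))  # x_1..x_j, y_{j+1}..y_n
        lower = np.concatenate((x[:j], y[j:]))          # x_1..x_{j-1}, y_j..y_n
        dd[:, j] = (F(upper) - F(lower)) / (x[j] - y[j])
    return dd
```

With this definition the secant-type identity [x, y; F](x − y) = F(x) − F(y) holds by telescoping over the columns.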
The error equation of the method in (1) is given in (3), where η is the solution of F(x) = 0 and e_k = x_k − η.
. By adding one new iterative step to the method in (1), Chicharro et al. [
4] designed the following iterative method with order three:
satisfying the error equation
Let us assume that V = −[F′(η)]^{−1} in (3) and (5); then, the order of the methods in (1) and (4) can be improved. Unfortunately, the solution η is unknown. Generally, this problem is handled by replacing the constant parameter V with a variable parameter V_k. A method with a variable parameter V_k is called a method with memory. The variable parameter V_k is constructed from iterative sequences of the current and previous steps. Substituting the variable parameter V_k for the constant parameter V in the method in (1), Ahmad et al. [5] and Petković et al. [6] designed some efficient multi-step derivative-free iterative methods with memory. In order to design a new iterative method with memory, Kurchatov [7] obtained the one-step method with memory given in (6), where [2x_k − x_{k−1}, x_{k−1}; F] is called Kurchatov's divided difference operator. Argyros et al. [8,9] studied the convergence properties of Kurchatov's method (6) in Banach space and proved that Kurchatov's method is second-order convergent. Wang et al. [10] and Cordero et al. [11] presented some Kurchatov-type methods with memory by using Kurchatov's divided difference operator.
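As a concrete illustration, the following is a minimal sketch of Kurchatov's one-step method with memory (6) in its standard form, x_{k+1} = x_k − [2x_k − x_{k−1}, x_{k−1}; F]^{−1} F(x_k), using the divided difference routine sketched above; the test system, starting points and stopping rule are illustrative assumptions.

```python
import numpy as np

def kurchatov(F, x0, x_prev, tol=1e-12, max_iter=50):
    """Kurchatov's one-step method with memory:
    x_{k+1} = x_k - [2 x_k - x_{k-1}, x_{k-1}; F]^{-1} F(x_k).
    Uses divided_difference from the previous sketch."""
    x_old, x = np.asarray(x_prev, dtype=float), np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        K = divided_difference(F, 2.0 * x - x_old, x_old)  # Kurchatov's divided difference
        x_old, x = x, x - np.linalg.solve(K, F(x))
        if np.linalg.norm(F(x)) < tol:
            break
    return x

# Example: F(x) = (x1^2 + x2 - 2, x1 + x2^2 - 2) has the solution (1, 1).
F = lambda v: np.array([v[0] ** 2 + v[1] - 2.0, v[0] + v[1] ** 2 - 2.0])
print(kurchatov(F, np.array([1.4, 0.6]), np.array([1.5, 0.5])))
```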
Replacing V in (3) with a variable parameter, Chicharro et al. [4] obtained the method with memory in (7), which is called FM3 in Chicharro's paper [4]. Substituting the variable parameter for the constant parameter V in (4), they obtained the scheme with memory in (8), which is called FM5 in Chicharro's paper [4]. They concluded that the convergence orders of the FM3 and FM5 methods were 3 and 5, respectively. However, their conclusions about the convergence order were incorrect, and their numerical results were inconsistent with their theoretical results. The main reason for this mistake is that they wrongly used Equation (9) in the paper [4],
For the iterative FM3 and FM5 methods, the error at the current step is less than the error at the previous step. So, Equation (9) should be written as Equation (10).
Theorem 1. Let the nonlinear function F be sufficiently differentiable and η be a zero of F. Let us assume that the initial value x_0 is close to η. Then, the FM3 and FM5 methods have convergence orders lower than three and equal to four, respectively.
Proof. Let r and p be the convergence orders of the FM3 and FM5 methods, respectively. The error relation of the FM3 method can be written as in (12), where the coefficient is the asymptotic error constant, and hence we obtain (13). By replacing the parameter term of (3) with (11), we obtain the error expression (14) of the FM3 method. The asymptotic error constants in (13) and (14) do not affect the convergence order of the iterative method. By equating the appropriate exponents of e_{k−1} in (13) and (14), we obtain Equation (15). By solving Equation (15), we obtain its positive root, which is the convergence order of the FM3 method.
Similar to Equations (12) and (13), the error relation of the FM5 method can be written as in (17). By replacing the parameter term of (5) with (11), we obtain the error relation (18). From (17) and (18), we obtain Equation (19). By solving Equation (19), we obtain one positive root and one negative root. The convergence order of an iterative method must be positive, so the negative root is discarded. This implies that the convergence order of the FM5 method is four. □
Theorem 1 gives the true orders of the FM3 and FM5 methods.
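The order computations in Theorem 1 follow the standard technique for methods with memory: if the error relation of a method can be written in the generic form

e_{k+1} ∼ C e_k^a e_{k−1}^b,

then the R-order of convergence is the unique positive root of r^2 − a r − b = 0 (the particular exponents a and b for the FM3 and FM5 methods are the ones appearing in (13), (14), (17) and (18)). For instance, a = 2 and b = 1 give r = 1 + √2 ≈ 2.41, while a = 3 and b = 4 give r = 4; these values are generic illustrations rather than the exponents of any specific method above.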
Remark 1. From the error Equation (5) of the iterative method in (4) without memory, we can compare the errors appearing in it; in particular, one of these errors is less than the other. The FM5 method with memory improves the convergence order of the method in (4) by means of the variable parameter. By replacing the corresponding term of (5) with (9), we see that the relevant errors of the FM5 method are smaller than the ones they are compared with. For the FM3 method with memory, we obtain the analogous comparison. Thus, we obtain the error Equation (11). Inspired by the FM3 and FM5 methods, we propose three iterative methods with a novel variable parameter in the next section.
The structure of this paper is as follows. Section 2 presents two novel divided difference operators that serve as the variable parameters of three derivative-free iterative methods with memory for solving nonlinear systems; using the new Kurchatov-type divided differences, the new derivative-free iterative methods reach convergence orders of three, greater than four, and five, respectively. Section 3 presents the application of the proposed methods to ODEs, PDEs, standard nonlinear systems and non-differentiable nonlinear systems. Section 4 studies the dynamical behavior of the presented methods in order to analyze their stability. Section 5 gives a short summary.
2. Two New Kurchatov-Type Accelerating Derivative-Free Iterative Methods with Memory
In this section, two new first-order divided difference operators are used to construct the variable parameter V_k. Similar to Kurchatov's first-order divided difference, we construct the first-order divided differences in (20) and (21), which we call Kurchatov-type first-order divided differences.
Method 1: Using (20), by replacing the corresponding terms in Equations (10) and (11), we obtain the accelerating parameter in (22).
Method 2: Using (21), by replacing the corresponding terms in Equations (10) and (11), we obtain the accelerating parameter in (23).
By substituting (22) for the accelerating parameter of the FM3 method, we obtain the scheme with memory in (24). The convergence order of the proposed scheme in (24) is given by the following theorem.
Theorem 2. Let F be a sufficiently differentiable function in an open neighborhood D and η be a zero of the function F. Let us assume that the initial value x_0 is close enough to η. Then, the convergence order of the iterative scheme in (24) is three.
Proof. From (22) and (24), we obtain (25) and (26). By replacing V of (3) with (22), we obtain (27). By comparing (26) with (27), we obtain the relative sizes of the corresponding error terms. From (22), we obtain (28), and from (27) and (28), we obtain (29).
This implies that the method in (24) has order three. □
Remark 2. We note that the method in (24) and the FM3 method have the same computational cost but different convergence orders. Our method in (24) has a higher convergence order than the FM3 method, so the computational efficiency of the FM3 method is lower than that of the method in (24).
By replacing the accelerating parameter of the FM5 method with (22), we obtain the scheme with memory in (30). By replacing the accelerating parameter of the FM5 method with (23), we obtain the scheme in (31). The convergence orders of the proposed schemes in (30) and (31) are given by the following result.
Theorem 3. Let F be a sufficiently differentiable function in an open neighborhood D and η be a zero of the function F. Let us assume that the initial value x_0 is close enough to η. Then, the iterative methods in (30) and (31) have convergence orders greater than four and equal to five, respectively.
Proof. Using (22), we have (32). Using (5), (30) and (32), we obtain (33). By comparing (26) with (33), we obtain the relative sizes of the corresponding error terms. From (32), we obtain (34). Let us assume that the sequence generated by the iterative method satisfies (35), where the coefficient is an asymptotic error constant. From (33) and (34), we have (36). From (35) and (36), we obtain (37) and (38), and from (37) and (38), we have (39) and (40), which lead to Equation (41). By solving Equation (41), we obtain its positive root, which is the convergence order of the method in (30). From (5) and (23), we obtain the corresponding error relations for the method in (31).
This implies that the order of the method with memory in (31) is five. □
Remark 3. Theorem 3 shows that the convergence order of the method in (4) without memory is improved from three to five by using one accelerating parameter (23). The convergence orders of the methods in (30) and (31) are higher than that of the FM5 method. The computational efficiency index (CEI) [3] is defined by CEI = ρ^{1/c}, where ρ is the convergence order and c is the computational cost of the iterative method. The methods in (30) and (31) and the FM5 method have different convergence orders with the same computational cost. Therefore, the computational efficiency of our methods in (30) and (31) is superior to that of the FM5 method.
Remark 4. The methods in (24) and (30) use the same accelerating parameter (22). In Theorems 2 and 3, we use different error relations for the parameter (22). The main reason is that the corresponding error term of (22) is smaller for the two-step method in (24) and larger for the three-step method in (30).
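As a small numerical illustration of the CEI comparison in Remark 3 (the cost value below is a placeholder, not the actual operation count of any of the methods):

```python
def cei(rho, c):
    """Computational efficiency index: CEI = rho**(1/c)."""
    return rho ** (1.0 / c)

cost = 30.0                 # hypothetical computational cost per iteration (placeholder)
print(cei(4.0, cost))       # order-4 method (FM5-like):  about 1.0473
print(cei(5.0, cost))       # order-5 method ((31)-like): about 1.0551
```

With equal cost, the method of higher order always has the larger CEI, which is the point of the comparison.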
Remark 5. In order to simplify the calculation, the computational schemes of the methods in (30) and (31) can be rewritten in equivalent forms in which each step only requires the solution of linear systems. The linear systems in (37) and (38) were solved by LU decomposition in the numerical experiments.
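The following is a minimal sketch of the implementation detail mentioned in Remark 5: each correction is obtained by LU-factorizing the divided difference matrix and solving a linear system instead of forming its inverse; the matrix and right-hand side are placeholders.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def newton_like_correction(A, Fx):
    """Solve A * d = F(x_k) by LU decomposition instead of computing A^{-1}."""
    lu, piv = lu_factor(A)
    return lu_solve((lu, piv), Fx)

# Placeholder data: A stands for a divided difference matrix and Fx for F(x_k).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
Fx = np.array([1.0, 2.0])
d = newton_like_correction(A, Fx)   # the next iterate would be x_k - d
print(d)
```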
3. Numerical Results
The methods in (24), (30) and (31) were compared with Chicharro's method (4) and the FM3 and FM5 methods for solving ODEs, PDEs and standard nonlinear systems. All the experiments were carried out in the Maple 14 computer algebra system (Digits := 2048). The parameter was initialized as the identity matrix, and the solution was accepted once the stopping criterion was satisfied.
Tables 1–6 show the following information: ACOC [12] is the approximated computational order of convergence, NIT is the number of iterations, EVL is the error at the last step, T is the CPU time and FU is the function value at the last step. The iterative processes of the methods are shown in Figures 1–5.
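For reference, the ACOC reported in the tables is usually computed from consecutive iterates; the sketch below assumes the standard formula of [12], ACOC ≈ ln(‖x_{k+1} − x_k‖/‖x_k − x_{k−1}‖) / ln(‖x_k − x_{k−1}‖/‖x_{k−1} − x_{k−2}‖).

```python
import numpy as np

def acoc(iterates):
    """Approximated computational order of convergence from the last four iterates."""
    d2 = np.linalg.norm(iterates[-1] - iterates[-2])   # ||x_{k+1} - x_k||
    d1 = np.linalg.norm(iterates[-2] - iterates[-3])   # ||x_k - x_{k-1}||
    d0 = np.linalg.norm(iterates[-3] - iterates[-4])   # ||x_{k-1} - x_{k-2}||
    return np.log(d2 / d1) / np.log(d1 / d0)
```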
Problem 1. The solution of problem 1 is and the initial value is .
Figure 1 shows that the method in (31) had higher accuracy than the other methods. The accuracy of the method in (24) was similar to that of the FM3 method for Problem 1.
Problem 2. The solution of Problem 2 is and the initial guess is
Figure 2 shows that the methods in (30) and (31) had higher accuracy than the other methods. The methods in (30) and (31) had similar convergence behaviors for Problem 2. The accuracy of the method in (24) was similar to that of the FM3 method for Problem 2.
Problem 3. Using the discretization method for this problem, we obtain a nonlinear system. We chose the initial value and obtained the solution; the numerical results are shown in Table 3. Figure 3 shows that our method in (31) had higher accuracy than the other methods for the same number of iterations.
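As a generic illustration of how a finite-difference discretization such as the ones used in Problems 3 and 4 turns a differential equation into a nonlinear system F(x) = 0, consider the model boundary-value problem u'' = u^3 on [0, 1] with u(0) = u(1) = 1; this model problem and its grid size are assumptions and not the actual problems of this section.

```python
import numpy as np

def F_bvp(u, n=10):
    """Residual of the model problem u'' = u^3 on [0, 1], u(0) = u(1) = 1,
    discretized with central differences at n interior nodes."""
    h = 1.0 / (n + 1)
    U = np.concatenate(([1.0], u, [1.0]))            # attach the boundary values
    return (U[:-2] - 2.0 * U[1:-1] + U[2:]) / h**2 - U[1:-1] ** 3

u0 = np.ones(10)      # initial guess for the interior unknowns
print(F_bvp(u0))      # residual of the initial guess; a root of F_bvp solves the BVP
```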
Problem 4. The interval was partitioned into n subintervals with a uniform step size. Using the difference method to discretize the derivative, we obtained a nonlinear system. We chose the initial value and obtained the solution; the numerical results are shown in Table 4.
Problem 5. The numerical results are shown in Table 5.
Problem 6. Heat conduction problem [15]: this problem was transformed into a nonlinear system. Step sizes were chosen for t and for x, and, with the chosen parameters, we obtained the discretized nonlinear system. The numerical results are given in Table 6. The exact solution of this problem is shown in Figure 6. The absolute error of the solution and the approximate solution obtained using the method in (31) are shown in Figure 7 and Figure 8.
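A time-dependent problem of this kind is typically advanced by an implicit time step, so that a nonlinear system must be solved at every time level; the sketch below uses the model equation u_t = u_xx + u^2 with zero boundary values, which is an assumed illustration and not the actual heat conduction problem of [15].

```python
import numpy as np

def F_heat_step(u_new, u_old, dt=0.01, dx=0.1):
    """Implicit-Euler residual for the model equation u_t = u_xx + u^2 with
    zero boundary values; a root of this map gives the next time level."""
    U = np.concatenate(([0.0], u_new, [0.0]))
    u_xx = (U[:-2] - 2.0 * U[1:-1] + U[2:]) / dx ** 2
    return (u_new - u_old) / dt - u_xx - u_new ** 2

u_old = np.sin(np.pi * np.linspace(0.1, 0.9, 9))   # previous time level (interior nodes)
print(F_heat_step(u_old.copy(), u_old))            # residual when the guess is u_old
```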
4. Dynamical Analysis
Recently, dynamical analysis has been applied to study the stability of iterative methods. Some basic concepts of complex dynamics can be found in references [16,17,18,19,20,21,22,23,24]; for brevity, we omit these concepts in this section. If an iterative method has good stability, it must have good properties for solving simple nonlinear equations, so we compared our methods with the other methods for solving complex equations. The considered region of the complex plane was divided into a grid of points. If an initial point did not converge to a zero of the function after 25 iterations, it was painted black. A fixed tolerance was used in our programs.
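The following is a minimal sketch of how such a dynamical plane can be generated, here for the scalar Kurchatov iteration (6); the region, grid resolution, tolerance, test polynomial and the initialization of the previous iterate are placeholder assumptions rather than the settings actually used in the experiments.

```python
import numpy as np

def dynamical_plane(p, region=(-3.0, 3.0, -3.0, 3.0), grid=200, tol=1e-3, max_iter=25):
    """Dynamical plane of the scalar Kurchatov iteration for a complex function p.
    Points that fail to converge within max_iter iterations keep the value 0 ("black")."""
    re = np.linspace(region[0], region[1], grid)
    im = np.linspace(region[2], region[3], grid)
    img = np.zeros((grid, grid))
    for i, a in enumerate(re):
        for j, b in enumerate(im):
            z = complex(a, b)
            z_old = z + 0.01                    # previous iterate: small shift (an assumption)
            for it in range(max_iter):
                u, v = 2.0 * z - z_old, z_old
                if u == v or p(u) == p(v):
                    break
                dd = (p(u) - p(v)) / (u - v)    # scalar Kurchatov divided difference
                z_old, z = z, z - p(z) / dd
                if abs(p(z)) < tol:
                    img[j, i] = it + 1          # converged: record the iteration count
                    break
    return img

plane = dynamical_plane(lambda z: z ** 3 - 1.0)  # classic test polynomial (a placeholder)
```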
Tables 7–10 show the computing time (Time) for drawing the dynamical planes and the percentage of points (POPs) that converged to the roots of the complex equations.
Figures 9–12 show that our method in (24) is the most stable method and that the stability of the method in (4) without memory is the worst among the tested methods.
Tables 7–10 show that, compared with the other methods, the method in (24) had the highest percentage of points converging to the roots of the complex equations, and the method in (31) required the least computing time.
5. Conclusions
In this paper, three Kurchatov-type accelerating iterative schemes with memory were obtained by using two novel Kurchatov-type divided difference operators. The new methods avoid the evaluation of derivatives. The orders of convergence of our methods in (24), (30) and (31) are three, greater than four, and five, respectively. We also corrected the order of convergence of Chicharro's FM3 and FM5 methods. In the numerical experiments, our methods were applied to nonlinear ODEs and PDEs, and the results show that, compared with the other methods, our methods in (30) and (31) had higher computational accuracy. Dynamical planes were used to analyze the stability of the presented methods. It is worth noting that our method with memory in (24) had better stability than the other methods in this paper, while the stability of the method without memory in (4) was the worst. We can conclude that introducing memory can effectively improve the stability of iterative methods without memory.