Article

Derivative-Free Kurchatov-Type Accelerating Iterative Method for Solving Nonlinear Systems: Dynamics and Applications

School of Mathematical Sciences, Bohai University, Jinzhou 121013, China
*
Author to whom correspondence should be addressed.
Fractal Fract. 2022, 6(2), 59; https://doi.org/10.3390/fractalfract6020059
Submission received: 21 December 2021 / Revised: 11 January 2022 / Accepted: 18 January 2022 / Published: 24 January 2022
(This article belongs to the Special Issue Convergence and Dynamics of Iterative Methods: Chaos and Fractals)

Abstract

Two novel Kurchatov-type first-order divided difference operators were designed and used to construct the variable parameters of three derivative-free iterative methods. The convergence orders of the new derivative-free methods are 3, $(5+\sqrt{17})/2 \approx 4.56$ and 5. The new derivative-free iterative methods with memory were applied to solve nonlinear ordinary differential equations (ODEs) and partial differential equations (PDEs) in numerical experiments. The dynamical behavior of our new methods with memory was studied by using dynamical planes, which showed that our methods have good stability.

1. Introduction

The main advantage of derivative-free methods is that they can be used to find the solution of a non-differentiable nonlinear function $F(t)$. Compared with Newton's method [1], derivative-free iterative methods require no derivative evaluations during the iterations. Numerous derivative-free iterative methods have been proposed for finding the solution of nonlinear systems $F(t) = 0$, where $F: D \subseteq \mathbb{R}^n \rightarrow \mathbb{R}^n$. Traub [2] proposed the well-known derivative-free iterative method
$$s^{(k)} = t^{(k)} + V F(t^{(k)}),\qquad t^{(k+1)} = t^{(k)} - [s^{(k)}, t^{(k)}; F]^{-1} F(t^{(k)}),\qquad (1)$$
where $V \in \mathbb{R}\setminus\{0\}$ and $[s^{(k)}, t^{(k)}; F]$ is the first-order divided difference operator, defined by [3]
$$[s^{(k)}, t^{(k)}; F]_{ij} = \frac{F_i(s_1^{(k)}, \ldots, s_{j-1}^{(k)}, s_j^{(k)}, t_{j+1}^{(k)}, \ldots, t_m^{(k)}) - F_i(s_1^{(k)}, \ldots, s_{j-1}^{(k)}, t_j^{(k)}, t_{j+1}^{(k)}, \ldots, t_m^{(k)})}{s_j^{(k)} - t_j^{(k)}},\quad 1 \le i, j \le m.\qquad (2)$$
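As an illustration, the componentwise divided difference operator defined above can be sketched in a few lines of code (Python with NumPy; the function name and the test systems are ours, not from the paper):

```python
import numpy as np

def divided_difference(F, s, t):
    """First-order divided difference operator [s, t; F].

    Column j is built from the two argument vectors
    (s_1, ..., s_j, t_{j+1}, ..., t_m) and (s_1, ..., s_{j-1}, t_j, ..., t_m),
    with the difference of F-values divided by s_j - t_j.
    """
    s, t = np.asarray(s, dtype=float), np.asarray(t, dtype=float)
    m = t.size
    M = np.empty((m, m))
    for j in range(m):
        upper = np.concatenate((s[:j + 1], t[j + 1:]))
        lower = np.concatenate((s[:j], t[j:]))
        M[:, j] = (F(upper) - F(lower)) / (s[j] - t[j])
    return M
```

For a linear map $F(t) = At$, the operator reproduces $A$ exactly, which is a convenient sanity check of an implementation.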
The error equation of the method in (1) is
$$\varepsilon^{(k+1)} = D_2 \bigl(I + V F'(\eta)\bigr) (\varepsilon^{(k)})^2 + O\bigl((\varepsilon^{(k)})^3\bigr),\qquad (3)$$
where $\eta \in \mathbb{R}^n$ is the solution of $F(t) = 0$, $\varepsilon^{(k)} = t^{(k)} - \eta$ and $D_k = \frac{1}{k!} F'(\eta)^{-1} F^{(k)}(\eta) \in L_k(\mathbb{R}^n, \mathbb{R}^n)$, $(k = 2, 3, \ldots)$. By adding one new iterative step to the method in (1), Chicharro et al. [4] designed the following iterative method with order three:
$$s^{(k)} = t^{(k)} + V F(t^{(k)}),\qquad z^{(k)} = t^{(k)} - [s^{(k)}, t^{(k)}; F]^{-1} F(t^{(k)}),\qquad t^{(k+1)} = z^{(k)} - [s^{(k)}, z^{(k)}; F]^{-1} F(z^{(k)}),\qquad (4)$$
satisfying the error equation
$$\varepsilon^{(k+1)} = D_2^2 \bigl(I + V F'(\eta)\bigr)^2 (\varepsilon^{(k)})^3 + O\bigl((\varepsilon^{(k)})^4\bigr).\qquad (5)$$
Let us assume that $V = -F'(\eta)^{-1}$ in (3) and (5); then, the orders of the methods in (1) and (4) can be improved. Unfortunately, the solution $\eta \in \mathbb{R}^n$ is unknown. Generally, we handle this problem by replacing the constant parameter $V$ with a variable parameter $V^{(k)}$; a method with a variable parameter $V^{(k)}$ is called a method with memory. The variable parameter $V^{(k)}$ is built from the iterative sequences of the current and previous steps. Substituting the variable parameter $V^{(k)} = -[s^{(k-1)}, t^{(k-1)}; F]^{-1}$ for the constant parameter $V$ in the method in (1), Ahmad et al. [5] and Petković et al. [6] designed some efficient multi-step derivative-free iterative methods with memory. Earlier, Kurchatov [7] obtained the following one-step method with memory:
$$t^{(k+1)} = t^{(k)} - (A^{(k)})^{-1} F(t^{(k)}),\qquad (6)$$
where $A^{(k)} = [2t^{(k)} - t^{(k-1)}, t^{(k-1)}; F]$ is called Kurchatov's divided difference operator. Argyros et al. [8,9] studied the convergence properties of Kurchatov's method (6) in Banach spaces and proved that Kurchatov's method is second-order convergent. Wang et al. [10] and Cordero et al. [11] presented some Kurchatov-type methods with memory by using Kurchatov's divided difference operator.
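In the scalar case, Kurchatov's method (6) takes only a few lines. The following sketch (Python; the starting points, tolerance and iteration cap are our own choices) shows one possible implementation:

```python
def kurchatov(f, t_prev, t, tol=1e-14, max_iter=50):
    """Kurchatov's one-step method with memory, for a scalar function f.

    A_k = [2 t_k - t_{k-1}, t_{k-1}; f] is the slope of f through the
    pair of points placed symmetrically around t_k.
    """
    for _ in range(max_iter):
        a, b = 2 * t - t_prev, t_prev
        A = (f(a) - f(b)) / (a - b)      # Kurchatov divided difference
        t_prev, t = t, t - f(t) / A
        if abs(t - t_prev) < tol:
            break
    return t

# e.g. the positive root of t^2 - 2 from the two starting points 1.0 and 1.5
root = kurchatov(lambda t: t * t - 2.0, 1.0, 1.5)
```

Because the method needs both $t^{(k)}$ and $t^{(k-1)}$, two starting values must be supplied; no derivative of $f$ is ever evaluated.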
Replacing $V$ in (3) with $V^{(k)} = -(A^{(k)})^{-1}$, Chicharro et al. [4] obtained the following method with memory:
$$s^{(k)} = t^{(k)} + V^{(k)} F(t^{(k)}),\qquad t^{(k+1)} = t^{(k)} - [s^{(k)}, t^{(k)}; F]^{-1} F(t^{(k)}),\qquad (7)$$
where $V^{(k)} = -[2t^{(k)} - t^{(k-1)}, t^{(k-1)}; F]^{-1}$. The method in (7) is called FM3 in Chicharro's paper [4]. Substituting $V^{(k)} = -(A^{(k)})^{-1}$ for the parameter $V$ in (4), they obtained the following scheme with memory:
$$s^{(k)} = t^{(k)} + V^{(k)} F(t^{(k)}),\qquad z^{(k)} = t^{(k)} - [s^{(k)}, t^{(k)}; F]^{-1} F(t^{(k)}),\qquad t^{(k+1)} = z^{(k)} - [s^{(k)}, z^{(k)}; F]^{-1} F(z^{(k)}).\qquad (8)$$
The method in (8) is called FM5 in Chicharro's paper [4]. They concluded that the convergence orders of the FM3 and FM5 methods were 3 and 5, respectively. However, their conclusions about the convergence order were incorrect, and the results of their numerical experiments were inconsistent with their theoretical results. The main reason for this mistake is that they wrongly used the following equation in the paper [4]:
$$I + V^{(k)} F'(\eta) = D_3 (\varepsilon^{(k-1)})^2 + 2 D_2 \varepsilon^{(k)} - 2 D_3 \varepsilon^{(k-1)} \varepsilon^{(k)} + O(\varepsilon^{(k-1)}, \varepsilon^{(k)}),\qquad (9)$$
where
$$V^{(k)} \approx -\bigl[I - D_3 (\varepsilon^{(k-1)})^2 + 2 D_3 \varepsilon^{(k-1)} \varepsilon^{(k)} - 2 D_2 \varepsilon^{(k)}\bigr] F'(\eta)^{-1}.\qquad (10)$$
For the iterative FM3 and FM5 methods, the error $\varepsilon^{(k)}$ is smaller than $(\varepsilon^{(k-1)})^2$. So, Equation (9) should be written as
$$I + V^{(k)} F'(\eta) \approx D_3 (\varepsilon^{(k-1)})^2.\qquad (11)$$
Theorem 1.
Let the nonlinear function $F: D \subseteq \mathbb{R}^n \rightarrow \mathbb{R}^n$ be sufficiently differentiable and $\eta \in \mathbb{R}^n$ be a zero of F. Let us assume that the initial value $t^{(0)}$ is close to η. Then, the FM3 and FM5 methods have convergence orders $1 + \sqrt{3} \approx 2.732$ and 4, respectively.
Proof. 
Let r and p be the convergence orders of the FM3 and FM5 methods, respectively. The error relation of FM3 can be written as
$$\varepsilon^{(k+1)} \approx A_{k,r} (\varepsilon^{(k)})^r,\qquad (12)$$
where $A_{k,r}$ is the asymptotic error constant. So,
$$\varepsilon^{(k+1)} \approx A_{k,r} \bigl(A_{k-1,r} (\varepsilon^{(k-1)})^r\bigr)^r = A_{k,r} A_{k-1,r}^r (\varepsilon^{(k-1)})^{r^2}.\qquad (13)$$
By replacing $I + V F'(\eta)$ in (3) with (11), we obtain the error expression of the FM3 method:
$$\varepsilon^{(k+1)} \approx D_2 D_3 A_{k-1,r}^2 (\varepsilon^{(k-1)})^{2r+2}.\qquad (14)$$
The asymptotic error constants $A_{k,r} A_{k-1,r}^r$ in (13) and $D_2 D_3 A_{k-1,r}^2$ in (14) do not affect the convergence order of the iterative method. By equating the exponents of $\varepsilon^{(k-1)}$ in (13) and (14), we have
$$r^2 - 2r - 2 = 0.\qquad (15)$$
By solving Equation (15), we obtain the positive root $r = 1 + \sqrt{3} \approx 2.732$. This implies that the order of the FM3 method is $\approx 2.732$.
Similar to Equations (12) and (13), the error relations of the FM5 method can be given by
$$\varepsilon^{(k+1)} \approx A_{k,p} (\varepsilon^{(k)})^p\qquad (16)$$
and
$$\varepsilon^{(k+1)} \approx A_{k,p} A_{k-1,p}^p (\varepsilon^{(k-1)})^{p^2}.\qquad (17)$$
By replacing $I + V F'(\eta)$ in (5) with (11), we obtain the error relation
$$\varepsilon^{(k+1)} \approx D_2^2 D_3^2 A_{k-1,p}^3 (\varepsilon^{(k-1)})^{3p+4}.\qquad (18)$$
From (17) and (18), we obtain
$$p^2 - 3p - 4 = 0.\qquad (19)$$
By solving Equation (19), we obtain $p = 4$ and $p = -1$. The convergence order of an iterative method must be positive, so $p = -1$ is discarded. This implies that the convergence order of the FM5 method is four. □
Theorem 1 gives the true orders of the FM3 and FM5 methods.
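The corrected order can also be observed experimentally. The scalar sketch below (Python; the second starting point, tolerance and iteration cap are our own choices) implements the FM3 scheme and converges rapidly to the root:

```python
def fm3(f, t, tol=1e-12, max_iter=50):
    """Scalar sketch of the FM3 method: the weight V_k is the negative
    inverse of Kurchatov's divided difference from the previous step."""
    dd = lambda a, b: (f(a) - f(b)) / (a - b)   # scalar [a, b; f]
    t_prev = t + 0.1                            # assumed second starting point
    for _ in range(max_iter):
        if abs(f(t)) < tol:
            break                               # residual small enough
        V = -1.0 / dd(2 * t - t_prev, t_prev)   # V_k with memory
        s = t + V * f(t)
        t_prev, t = t, t - f(t) / dd(s, t)
    return t
```

Measuring the computational order of convergence of such a run in high-precision arithmetic is how the ACOC values of the numerical section are produced; in double precision the iteration simply reaches machine accuracy in a handful of steps.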
Remark 1.
From the error Equation (5) of the iterative method in (4) without memory, we know that $\varepsilon^{(k+1)} \sim (\varepsilon^{(k)})^3$ and $\varepsilon^{(k)} \sim (\varepsilon^{(k-1)})^3$. Thus, the error $\varepsilon^{(k)}$ is smaller than $(\varepsilon^{(k-1)})^2$. The FM5 method with memory improves the convergence order of the method in (4) through the variable parameter $V^{(k)}$. By replacing $I + V F'(\eta)$ in (5) with (9), we obtain $\varepsilon^{(k+1)} = D_2^2 \bigl(D_3 (\varepsilon^{(k-1)})^2 + 2 D_2 \varepsilon^{(k)} - 2 D_3 \varepsilon^{(k-1)} \varepsilon^{(k)}\bigr)^2 (\varepsilon^{(k)})^3 + O\bigl((\varepsilon^{(k)})^4\bigr)$. This means that the error $\varepsilon^{(k+1)}$ of the FM5 method is smaller than $(\varepsilon^{(k)})^3$ and that $\varepsilon^{(k)}$ of the FM5 method is smaller than $(\varepsilon^{(k-1)})^2$. For the FM3 method with memory, we likewise obtain $\varepsilon^{(k+1)} = D_2 \bigl(D_3 (\varepsilon^{(k-1)})^2 + 2 D_2 \varepsilon^{(k)} - 2 D_3 \varepsilon^{(k-1)} \varepsilon^{(k)}\bigr) (\varepsilon^{(k)})^2 + O\bigl((\varepsilon^{(k)})^3\bigr)$, which means that the error $\varepsilon^{(k)}$ of the FM3 method is smaller than $(\varepsilon^{(k-1)})^2$. Thus, we obtain the error Equation (11).
Inspired by the FM3 and FM5 methods, we propose three iterative methods with a novel variable parameter in the next section.
The structure of this paper is as follows: In Section 2, two novel divided difference operators are designed as the variable parameters of three derivative-free iterative methods with memory for solving nonlinear systems. Using the new Kurchatov-type divided differences, the new derivative-free iterative methods reach the orders 3, $(5+\sqrt{17})/2 \approx 4.56$ and 5, respectively. In Section 3, the proposed methods are applied to solve ODEs, PDEs, standard nonlinear systems and non-differentiable nonlinear systems. In Section 4, the dynamical behavior of the presented methods is studied in order to analyze their stability. In Section 5, we give a short summary.

2. Two New Kurchatov-Type Accelerating Derivative-Free Iterative Methods with Memory

In this section, two new first-order divided difference operators are used for constructing the variable parameter $V^{(k)}$. Similar to Kurchatov's first-order divided difference, we construct the following first-order divided differences
$$[2t^{(k)} - s^{(k-1)}, s^{(k-1)}; F]\qquad (20)$$
and
$$[2t^{(k)} - z^{(k-1)}, z^{(k-1)}; F].\qquad (21)$$
We call (20) and (21) Kurchatov-type first-order divided differences.
Method 1: Using (20), we obtain $V^{(k)} = -[2t^{(k)} - s^{(k-1)}, s^{(k-1)}; F]^{-1}$.
By replacing $\varepsilon^{(k-1)}$ with $\varepsilon_s^{(k-1)}$ in Equations (10) and (11), we obtain
$$V^{(k)} \approx -\bigl[I - D_3 (\varepsilon_s^{(k-1)})^2 - 2 D_2 \varepsilon^{(k)} + 2 D_3 \varepsilon_s^{(k-1)} \varepsilon^{(k)}\bigr] F'(\eta)^{-1},\qquad (22)$$
where $\varepsilon_s^{(k-1)} = s^{(k-1)} - \eta$.
Method 2: Using (21), we obtain $V^{(k)} = -[2t^{(k)} - z^{(k-1)}, z^{(k-1)}; F]^{-1}$.
By replacing $\varepsilon^{(k-1)}$ with $\varepsilon_z^{(k-1)}$ in Equations (10) and (11), we obtain
$$V^{(k)} \approx -\bigl[I - D_3 (\varepsilon_z^{(k-1)})^2 - 2 D_2 \varepsilon^{(k)} + 2 D_3 \varepsilon_z^{(k-1)} \varepsilon^{(k)}\bigr] F'(\eta)^{-1},\qquad (23)$$
where $\varepsilon_z^{(k-1)} = z^{(k-1)} - \eta$.
By substituting (22) for $V^{(k)}$ of the FM3 method, we obtain the following scheme with memory:
$$s^{(k)} = t^{(k)} - [2t^{(k)} - s^{(k-1)}, s^{(k-1)}; F]^{-1} F(t^{(k)}),\qquad t^{(k+1)} = t^{(k)} - [s^{(k)}, t^{(k)}; F]^{-1} F(t^{(k)}).\qquad (24)$$
The convergence order of the proposed scheme (24) is given by the following theorem.
Theorem 2.
Let $F: D \subseteq \mathbb{R}^n \rightarrow \mathbb{R}^n$ be a sufficiently differentiable function in an open neighborhood D and $\eta \in \mathbb{R}^n$ be a zero of the function F. Let us assume that the initial value $t^{(0)}$ is close enough to η. Then, the convergence order of the iterative scheme in (24) is three.
Proof. 
From (22) and (24), we have
$$\varepsilon_s^{(k)} = s^{(k)} - \eta = t^{(k)} - \eta - [2t^{(k)} - s^{(k-1)}, s^{(k-1)}; F]^{-1} F(t^{(k)}) = \bigl(D_3 (\varepsilon_s^{(k-1)})^2 + 2 D_2 \varepsilon^{(k)} - 2 D_3 \varepsilon_s^{(k-1)} \varepsilon^{(k)}\bigr) \varepsilon^{(k)}\qquad (25)$$
and
$$(\varepsilon_s^{(k)})^2 = \bigl(D_3 (\varepsilon_s^{(k-1)})^2 + 2 D_2 \varepsilon^{(k)} - 2 D_3 \varepsilon_s^{(k-1)} \varepsilon^{(k)}\bigr)^2 (\varepsilon^{(k)})^2.\qquad (26)$$
By replacing $V$ of (3) with (22),
$$\varepsilon^{(k+1)} = D_2 \bigl(I + V^{(k)} F'(\eta)\bigr) (\varepsilon^{(k)})^2 + O\bigl((\varepsilon^{(k)})^3\bigr) = D_2 \bigl(D_3 (\varepsilon_s^{(k-1)})^2 + 2 D_2 \varepsilon^{(k)} - 2 D_3 \varepsilon_s^{(k-1)} \varepsilon^{(k)}\bigr) (\varepsilon^{(k)})^2 + O\bigl((\varepsilon^{(k)})^3\bigr).\qquad (27)$$
By comparing (26) with (27), we know that $(\varepsilon_s^{(k)})^2$ is smaller than $\varepsilon^{(k+1)}$ and $(\varepsilon_s^{(k-1)})^2$ is smaller than $\varepsilon^{(k)}$. From (22), we obtain
$$I + V^{(k)} F'(\eta) \approx D_3 (\varepsilon_s^{(k-1)})^2 + 2 D_2 \varepsilon^{(k)} - 2 D_3 \varepsilon_s^{(k-1)} \varepsilon^{(k)} \approx 2 D_2 \varepsilon^{(k)}.\qquad (28)$$
From (27) and (28), we obtain
$$\varepsilon^{(k+1)} \approx 2 D_2^2 (\varepsilon^{(k)})^3.\qquad (29)$$
This implies that the method in (24) has order three. □
Remark 2.
We note that the method in (24) and the FM3 method have the same computational cost but different convergence orders. Our method (24) has a higher convergence order than the FM3 method, so the computational efficiency of the FM3 method is lower than that of the method in (24).
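A compact sketch of scheme (24) for systems follows (Python with NumPy; the starting companion point playing the role of $s^{(-1)}$, the test system, the tolerance and the iteration cap are our own choices):

```python
import numpy as np

def divided_difference(F, s, t):
    """First-order divided difference [s, t; F], built column by column."""
    m = t.size
    M = np.empty((m, m))
    for j in range(m):
        upper = np.concatenate((s[:j + 1], t[j + 1:]))
        lower = np.concatenate((s[:j], t[j:]))
        M[:, j] = (F(upper) - F(lower)) / (s[j] - t[j])
    return M

def method24(F, t, s_prev=None, tol=1e-12, max_iter=30):
    """Sketch of the two-step derivative-free scheme (24) with memory."""
    if s_prev is None:
        s_prev = t + 0.05       # assumed companion point for the first step
    for _ in range(max_iter):
        if np.linalg.norm(F(t)) < tol:
            break
        A = divided_difference(F, 2 * t - s_prev, s_prev)   # Kurchatov-type
        s = t - np.linalg.solve(A, F(t))
        t = t - np.linalg.solve(divided_difference(F, s, t), F(t))
        s_prev = s              # memory: s^(k) becomes s^(k-1)
    return t

# a small decoupled test system with root (1, 2)
sol = method24(lambda x: np.array([x[0] ** 2 - 1.0, x[1] ** 2 - 4.0]),
               np.array([1.3, 2.4]))
```

Note that only values of $F$ are used; no Jacobian is ever formed.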
By replacing $V^{(k)}$ of the FM5 method with (22), we obtain the following scheme with memory:
$$s^{(k)} = t^{(k)} - [2t^{(k)} - s^{(k-1)}, s^{(k-1)}; F]^{-1} F(t^{(k)}),\qquad z^{(k)} = t^{(k)} - [s^{(k)}, t^{(k)}; F]^{-1} F(t^{(k)}),\qquad t^{(k+1)} = z^{(k)} - [s^{(k)}, z^{(k)}; F]^{-1} F(z^{(k)}).\qquad (30)$$
By replacing $V^{(k)}$ of the FM5 method with (23), we obtain
$$s^{(k)} = t^{(k)} - [2t^{(k)} - z^{(k-1)}, z^{(k-1)}; F]^{-1} F(t^{(k)}),\qquad z^{(k)} = t^{(k)} - [s^{(k)}, t^{(k)}; F]^{-1} F(t^{(k)}),\qquad t^{(k+1)} = z^{(k)} - [s^{(k)}, z^{(k)}; F]^{-1} F(z^{(k)}).\qquad (31)$$
The convergence orders of the proposed schemes in (30) and (31) are given by the following result.
Theorem 3.
Let $F: D \subseteq \mathbb{R}^n \rightarrow \mathbb{R}^n$ be a sufficiently differentiable function in an open neighborhood D and $\eta \in \mathbb{R}^n$ be a zero of the function F. Let us assume that the initial value $t^{(0)}$ is close enough to η. Then, the iterative methods in (30) and (31) have convergence orders $(5+\sqrt{17})/2 \approx 4.56$ and 5, respectively.
Proof. 
Let $H^{(k)} = I + V^{(k)} F'(\eta)$. Using (22), we have
$$H^{(k)} = D_3 (\varepsilon_s^{(k-1)})^2 + 2 D_2 \varepsilon^{(k)} - 2 D_3 \varepsilon_s^{(k-1)} \varepsilon^{(k)}.\qquad (32)$$
Using (5), (30) and (32), we obtain
$$\varepsilon^{(k+1)} \approx D_2^2 \bigl(I + V^{(k)} F'(\eta)\bigr)^2 (\varepsilon^{(k)})^3 \approx D_2^2 (H^{(k)})^2 (\varepsilon^{(k)})^3 \approx D_2^2 \bigl(D_3 (\varepsilon_s^{(k-1)})^2 + 2 D_2 \varepsilon^{(k)} - 2 D_3 \varepsilon_s^{(k-1)} \varepsilon^{(k)}\bigr)^2 (\varepsilon^{(k)})^3.\qquad (33)$$
By comparing (26) with (33), we know that $(\varepsilon_s^{(k)})^2$ is larger than $\varepsilon^{(k+1)}$ and $(\varepsilon_s^{(k-1)})^2$ is larger than $\varepsilon^{(k)}$. From (32), we obtain
$$H^{(k)} \approx D_3 (\varepsilon_s^{(k-1)})^2 \approx D_3 \bigl(I + V^{(k-1)} F'(\eta)\bigr)^2 (\varepsilon^{(k-1)})^2 \approx D_3^3 (H^{(k-2)})^4 (\varepsilon^{(k-2)})^4 (\varepsilon^{(k-1)})^2 \approx \cdots \approx D_3^{2^k - 1} (H^{(0)})^{2^k} (\varepsilon^{(0)})^{2^k} (\varepsilon^{(1)})^{2^{k-1}} (\varepsilon^{(2)})^{2^{k-2}} \cdots (\varepsilon^{(k-2)})^{2^2} (\varepsilon^{(k-1)})^2,\qquad (34)$$
where $\varepsilon^{(k)} = t^{(k)} - \eta$, $(k = 0, 1, \ldots, n)$. Let us assume that the sequence $t^{(k)}$ generated by the iterative method satisfies
$$\varepsilon^{(k+1)} \approx A_{k+1} (\varepsilon^{(0)})^{r_{k+1}},\quad 0 \le k \le n,\qquad (35)$$
where $\varepsilon^{(k+1)} = t^{(k+1)} - \eta$ and $A_{k+1}$ is an asymptotic error constant.
From (33) and (34), we have
$$\varepsilon^{(k+1)} \approx D_2^2 (H^{(k)})^2 (\varepsilon^{(k)})^3 \approx D_2^2 D_3^{2^{k+1} - 2} (H^{(0)})^{2^{k+1}} (\varepsilon^{(0)})^{2^{k+1}} \bigl(A_1 (\varepsilon^{(0)})^{r_1}\bigr)^{2^k} \bigl(A_2 (\varepsilon^{(0)})^{r_2}\bigr)^{2^{k-1}} \cdots \bigl(A_{k-2} (\varepsilon^{(0)})^{r_{k-2}}\bigr)^{2^3} \bigl(A_{k-1} (\varepsilon^{(0)})^{r_{k-1}}\bigr)^{2^2} \bigl(A_k (\varepsilon^{(0)})^{r_k}\bigr)^3.\qquad (36)$$
From (35) and (36), we obtain
$$r_{k+1} = 2^{k+1} + 2^k r_1 + 2^{k-1} r_2 + \cdots + 2^3 r_{k-2} + 2^2 r_{k-1} + 3 r_k\qquad (37)$$
and
$$r_k = 2^k + 2^{k-1} r_1 + 2^{k-2} r_2 + \cdots + 2^2 r_{k-2} + 3 r_{k-1}.\qquad (38)$$
From (37) and (38), we have
$$r_{k+1} - 2 r_k = -2 r_{k-1} + 3 r_k,\qquad (39)$$
that is,
$$\frac{r_{k+1}}{r_k} = 5 - 2\,\frac{r_{k-1}}{r_k}.\qquad (40)$$
By letting $\lim_{k \to \infty} (r_{k+1}/r_k) = \lim_{k \to \infty} (r_k/r_{k-1}) = O_r$, we obtain
$$O_r = 5 - \frac{2}{O_r},\qquad (41)$$
where $O_r$ is a constant. By solving Equation (41), we obtain $O_r = (5 + \sqrt{17})/2$. This implies that the convergence order of the method in (30) is $O_r = (5 + \sqrt{17})/2 \approx 4.56$.
From (5) and (23), we obtain
$$I + V^{(k)} F'(\eta) \approx 2 D_2 \varepsilon^{(k)}\qquad (42)$$
and
$$\varepsilon^{(k+1)} \approx D_2^2 \bigl(I + V^{(k)} F'(\eta)\bigr)^2 (\varepsilon^{(k)})^3 \approx 4 D_2^4 (\varepsilon^{(k)})^5.\qquad (43)$$
This implies that the order of the method with memory in (31) is five. □
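The limit $O_r$ obtained from Equation (41) can be checked numerically: iterating the underlying recurrence $r_{k+1} = 5 r_k - 2 r_{k-1}$, the ratio of consecutive exponents tends to $(5+\sqrt{17})/2 \approx 4.56$. A short script (Python; the seed values are arbitrary positive numbers) confirms this:

```python
import math

def memory_order(r1=3.0, r2=5.0, n=60):
    """Iterate r_{k+1} = 5 r_k - 2 r_{k-1} and return the ratio of the
    last two terms, which approximates the limiting order O_r."""
    a, b = r1, r2
    for _ in range(n):
        a, b = b, 5 * b - 2 * a
    return b / a

order = memory_order()   # tends to (5 + sqrt(17)) / 2
```

The limit also satisfies the fixed-point relation $O_r = 5 - 2/O_r$, i.e., it is the positive dominant root of $x^2 - 5x + 2 = 0$.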
Remark 3.
Theorem 3 shows that the convergence order of the method in (4) without memory is improved from three to five by using one accelerating parameter (23). The convergence orders of the methods in (30) and (31) are higher than that of the FM5 method. The computational efficiency index (CEI) [3] is defined by $CEI = \rho^{1/c}$, where ρ is the convergence order and c is the computational cost of the iterative method. The methods in (30) and (31) and the FM5 method have different convergence orders with the same computational cost, so the computational efficiency of our methods in (30) and (31) is superior to that of the FM5 method.
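The comparison via the efficiency index is immediate to verify: at any common cost $c$, $CEI = \rho^{1/c}$ is strictly increasing in the order $\rho$ (the cost value below is an arbitrary placeholder, not a count taken from the paper):

```python
def cei(rho, c):
    """Computational efficiency index CEI = rho**(1/c)."""
    return rho ** (1.0 / c)

c = 4.0   # assumed common computational cost per iteration (placeholder)
# indices for order 4 (FM5), (5 + sqrt(17))/2 (method (30)) and 5 (method (31))
indices = [cei(4.0, c), cei((5 + 17 ** 0.5) / 2, c), cei(5.0, c)]
```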
Remark 4.
The methods in (24) and (30) use the same accelerating parameter (22). In Theorems 2 and 3, we use different error relations for the parameter (22). The main reason is that $(\varepsilon_s^{(k-1)})^2$ in (22) is smaller than $\varepsilon^{(k)}$ for the two-step method in (24), whereas $(\varepsilon_s^{(k-1)})^2$ in (22) is larger than $\varepsilon^{(k)}$ for the three-step method in (30).
Remark 5.
In order to simplify the calculation, the computational scheme of the method in (30) can be written as
$$[2t^{(k)} - s^{(k-1)}, s^{(k-1)}; F]\, \beta_1^{(k)} = F(t^{(k)}),\quad s^{(k)} = t^{(k)} - \beta_1^{(k)},\quad [s^{(k)}, t^{(k)}; F]\, \beta_2^{(k)} = F(t^{(k)}),\quad z^{(k)} = t^{(k)} - \beta_2^{(k)},\quad [s^{(k)}, z^{(k)}; F]\, \beta_3^{(k)} = F(z^{(k)}),\quad t^{(k+1)} = z^{(k)} - \beta_3^{(k)}.\qquad (44)$$
The computational scheme of the method in (31) can be written as
$$[2t^{(k)} - z^{(k-1)}, z^{(k-1)}; F]\, \gamma_1^{(k)} = F(t^{(k)}),\quad s^{(k)} = t^{(k)} - \gamma_1^{(k)},\quad [s^{(k)}, t^{(k)}; F]\, \gamma_2^{(k)} = F(t^{(k)}),\quad z^{(k)} = t^{(k)} - \gamma_2^{(k)},\quad [s^{(k)}, z^{(k)}; F]\, \gamma_3^{(k)} = F(z^{(k)}),\quad t^{(k+1)} = z^{(k)} - \gamma_3^{(k)}.\qquad (45)$$
The linear systems in these computational schemes were solved by LU decomposition in the numerical experiments.
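The computational scheme of the method in (31) above can be sketched as follows (Python with NumPy; `np.linalg.solve` performs an LU-based solve internally, mirroring the LU decomposition mentioned above; the companion point playing the role of $z^{(-1)}$, the test system and the tolerance are our own choices):

```python
import numpy as np

def divided_difference(F, s, t):
    """First-order divided difference [s, t; F], built column by column."""
    m = t.size
    M = np.empty((m, m))
    for j in range(m):
        upper = np.concatenate((s[:j + 1], t[j + 1:]))
        lower = np.concatenate((s[:j], t[j:]))
        M[:, j] = (F(upper) - F(lower)) / (s[j] - t[j])
    return M

def method31(F, t, tol=1e-12, max_iter=30):
    """Sketch of the three-step method (31): three linear solves per step."""
    z_prev = t + 0.05          # assumed companion point for the first step
    for _ in range(max_iter):
        if np.linalg.norm(F(t)) < tol:
            break
        g1 = np.linalg.solve(divided_difference(F, 2 * t - z_prev, z_prev), F(t))
        s = t - g1
        g2 = np.linalg.solve(divided_difference(F, s, t), F(t))
        z = t - g2
        g3 = np.linalg.solve(divided_difference(F, s, z), F(z))
        t, z_prev = z - g3, z  # memory: z^(k) becomes z^(k-1)
    return t

# a small decoupled test system with root (1, 2)
sol = method31(lambda x: np.array([x[0] ** 2 - 1.0, x[1] ** 2 - 4.0]),
               np.array([1.3, 2.4]))
```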

3. Numerical Results

The methods in (24), (30) and (31) were compared with Chicharro's methods (4), FM3 and FM5 for solving ODEs, PDEs and standard nonlinear systems. All the experiments were carried out in the Maple 14 computer algebra system (Digits := 2048). The initial parameter $V^{(0)}$ was the identity matrix. The solution was obtained with the stopping criterion $\|t^{(k)} - t^{(k-1)}\| < 10^{-100}$.
Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6 show the following information: ACOC [12] is the approximated computational order of convergence, NIT is the number of iterations, EVL is the error $\|t^{(k)} - t^{(k-1)}\|$ at the last step, T is the CPU time and FU is the function value at the last step. The iterative processes of the iterative methods are given in Figure 1, Figure 2, Figure 3, Figure 4 and Figure 5.
Problem 1.
$$t_i^2 t_{i+1} - 1 = 0,\quad 1 \le i \le 19,\qquad t_{20}^2 t_1 - 1 = 0.$$
The solution of Problem 1 is $\eta = (1, 1, \ldots, 1)^T$ and the initial value is $t^{(0)} = (1.25, 1.25, \ldots, 1.25)^T$.
Figure 1 shows that the method in (31) had higher accuracy than the other methods. The accuracy of the method in (24) was similar to that of the FM3 method for Problem 1.
Problem 2.
$$2 t_i^2 - 2 \sum_{j=1}^{20} t_j^2 + \arctan t_i = -1,\quad i = 1, 2, \ldots, 20.$$
The solution of Problem 2 is $\eta \approx (0.175768, \ldots, 0.175768)^T$ and the initial guess is $t^{(0)} = (0.1, \ldots, 0.1)^T$.
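A quick numerical check (Python with NumPy; this is our reconstruction of the system, with the signs restored from the reported solution) confirms that the value 0.175768 satisfies the system componentwise to the printed accuracy:

```python
import numpy as np

def F_problem2(t):
    """Residual of 2 t_i^2 - 2 * sum_j t_j^2 + arctan(t_i) + 1 = 0."""
    return 2 * t ** 2 - 2 * np.sum(t ** 2) + np.arctan(t) + 1.0

eta = np.full(20, 0.175768)            # solution reported in the text
residual = np.max(np.abs(F_problem2(eta)))
```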
Figure 2 shows that the methods in (30) and (31) had higher accuracy than the other methods. The methods in (30) and (31) had similar convergence behaviors for Problem 2. The accuracy of the method in (24) was similar to that of the FM3 method for Problem 2.
Problem 3.
ODE problem [13]:
$$z''(t) = -\alpha e^{z(t)},\quad t \in [0, 1],\qquad z(0) = 0,\quad z(1) = 1.$$
Using the discretization method in this problem with step size $h = 1/n$, we obtain
$$z_{k-1} - 2 z_k + z_{k+1} + \alpha h^2 e^{z_k} = 0,\quad k = 1, 2, 3, \ldots, n-1.$$
For $n = 101$, we chose the initial value $z^{(0)} = (0.500, 0.500, \ldots, 0.500)^T$ and obtained the solution $(0.00539, 0.02168, 0.01587, \ldots, 0.00539)^T$. The numerical results are shown in Table 3.
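The discretized system can be assembled in a few lines of code (Python with NumPy; the function name, the boundary handling and the sample data are ours):

```python
import numpy as np

def ode_residual(z, alpha=1.0, z_left=0.0, z_right=1.0):
    """Residual of the discretized BVP
    z_{k-1} - 2 z_k + z_{k+1} + alpha * h^2 * exp(z_k) = 0
    for the interior unknowns z = (z_1, ..., z_{n-1}) with h = 1/n."""
    n = z.size + 1
    h = 1.0 / n
    full = np.concatenate(([z_left], z, [z_right]))   # attach boundary values
    return full[:-2] - 2 * full[1:-1] + full[2:] + alpha * h * h * np.exp(z)
```

On a linear ramp the second difference vanishes, so only the exponential source term $\alpha h^2 e^{z_k}$ remains, which makes an easy consistency check of the assembly.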
Figure 3 shows that our method, shown in (31), had higher accuracy than the other methods under the same number of iterations.
Problem 4.
ODE problem [14]:
$$z''(t) = z(t)^3 + \sin\bigl(z'(t)^2\bigr),\quad t \in [0, 1],\qquad z(0) = 0,\quad z(1) = 1.$$
The interval $[0, 1]$ was partitioned into n subintervals with a step size of $h = 1/n$. Using the difference method to discretize the derivatives, we obtain
$$z''_k \approx \frac{z_{k+1} - 2 z_k + z_{k-1}}{h^2},\quad k = 1, 2, 3, \ldots, n-1,$$
and
$$z'_k \approx \frac{z_{k+1} - z_{k-1}}{2h},\quad k = 1, 2, 3, \ldots, n-1.$$
We obtain
$$z_{k-1} - 2 z_k + z_{k+1} - h^2 z_k^3 - h^2 \sin\!\left(\left(\frac{z_{k-1} - z_{k+1}}{2h}\right)^2\right) = 0,\quad k = 1, 2, 3, \ldots, n-1.$$
For $n = 20$, we chose the initial value $t^{(0)} = (0.5, \ldots, 0.5)^T$ and obtained the solution $(0.0314264, 0.063837, 0.0972935, \ldots, 0.926986)^T$. The numerical results are shown in Table 4.
Problem 5.
$$x_1^{3/2} - \frac{x_2^3}{4} + 0.005\,|x_1 - 1| = 0,\qquad \frac{x_2^{3/2}}{2} - \frac{9 x_2^3}{8} + 0.005\,|x_1| = 0.$$
The solution is $\eta \approx (1.0207, 0.27988)^T$ and the initial value is $x^{(0)} = (0.9638, 0.2376)^T$.
Problem 6.
Heat conduction problem [15]:
$$z_{xx} = z_t + z_x - z^2 + f(x, t),\quad x \in [0, 1],\ t \ge 0,\qquad z(0, t) = 0,\quad z(1, t) = 0,$$
$$f(x, t) = \bigl((2 - \pi^2) \sin(\pi x) - \pi \cos(\pi x)\bigr) e^{-t}.$$
This problem was transformed into a nonlinear system. The step size in x was $h = 1/N$ and the step size in t was $k = T/N$. Let $z_{i,j} \approx z(x_i, t_j)$, $z_x(x, t) \approx \frac{z(x+h, t) - z(x-h, t)}{2h}$, $z_t(x, t) \approx \frac{z(x, t) - z(x, t-k)}{k}$ and $z_{xx}(x, t) \approx \frac{z(x+h, t) - 2 z(x, t) + z(x-h, t)}{h^2}$; we obtain
$$(-2 h^2 - 4 k)\, z_{i,j} + (k h + 2 k)\, z_{i-1,j} + (2 k - k h)\, z_{i+1,j} + 2 k h^2 z_{i,j}^2 - 2 k h^2 f(x_i, t_j) + 2 h^2 z_{i,j-1} = 0.$$
The numerical results are given in Table 6. The exact solution $z(x, t) = e^{-t} \sin(\pi x)$ of this problem is shown in Figure 6. The approximate solution obtained using the method in (31) and the absolute error are shown in Figure 7 and Figure 8.

4. Dynamical Analysis

Recently, dynamical analysis has been applied to study the stability of iterative methods. Some basic concepts of complex dynamics can be found in references [16,17,18,19,20,21,22,23,24]. For brevity, we omit these concepts in this section. If an iterative method has good stability, it must have good properties for solving simple nonlinear equations. So, we compared our methods with the other methods for solving the complex equations $z^n - 1 = 0$, $(n = 2, 3, 4, 5)$. The region $D = [-5.0, 5.0] \times [-5.0, 5.0] \subset \mathbb{C}$ was divided into a grid of $500 \times 500$ points. If an initial point $z_0$ did not converge to a zero of the function after 25 iterations, it was painted black. The tolerance $|z - z^*| < 10^{-3}$ was used in our programs. Table 7, Table 8, Table 9 and Table 10 show the computing time (Time) for drawing the dynamical planes and the percentage of points (POPs) which converged to the roots of the complex equations $z^n - 1 = 0$, $(n = 2, 3, 4, 5)$.
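The POP statistic can be reproduced in miniature. The sketch below (Python with NumPy; the grid is much coarser than the paper's 500 × 500, Kurchatov's method (6) is used as the test iteration, and the second starting point is an assumed small perturbation of $z_0$) counts the fraction of starting points attracted to a root of $z^n - 1 = 0$:

```python
import numpy as np

def kurchatov_pop(n=2, grid=60, max_iter=25, tol=1e-3):
    """Fraction of starting points in [-5, 5] x [-5, 5] that reach a
    root of z^n - 1 = 0 under Kurchatov's method."""
    f = lambda z: z ** n - 1.0
    roots = np.exp(2j * np.pi * np.arange(n) / n)   # the n-th roots of unity
    xs = np.linspace(-5.0, 5.0, grid)
    hits = 0
    for re in xs:
        for im in xs:
            z, z_prev = complex(re, im), complex(re, im) + 0.01
            for _ in range(max_iter):
                if abs(z) > 1e8:
                    break                      # diverging orbit
                a, b = 2 * z - z_prev, z_prev
                if abs(a - b) < 1e-14:
                    break                      # stagnated orbit
                A = (f(a) - f(b)) / (a - b)    # Kurchatov divided difference
                if A == 0:
                    break
                z_prev, z = z, z - f(z) / A
                if np.min(np.abs(z - roots)) < tol:
                    hits += 1
                    break
    return hits / grid ** 2

pop = kurchatov_pop()   # for z^2 - 1 = 0, most starting points converge
```

Coloring each grid point by the root it reaches (black when no root is reached within 25 iterations) produces the dynamical planes of Figures 9–12.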
Figure 9, Figure 10, Figure 11 and Figure 12 show that our method in (24) is the most stable method, and that the stability of the method in (4) without memory is the worst among the tested methods.
Table 7, Table 8, Table 9 and Table 10 show that, compared with the other methods, the method in (24) had the highest percentage of points converging to the roots of the complex equations, and the method in (31) cost the least computing time.

5. Conclusions

In this paper, three Kurchatov-type accelerating iterative schemes with memory were obtained by using two novel Kurchatov-type divided difference operators. The new methods avoid the evaluation of derivatives. The orders of convergence of our methods in (24), (30) and (31) are 3, $(5+\sqrt{17})/2 \approx 4.56$ and 5, respectively. We also corrected the order of convergence of Chicharro's methods FM3 and FM5. In the experimental applications, our methods were applied to nonlinear ODEs and PDEs. The numerical results show that, compared with the other methods, our methods in (30) and (31) had higher computational accuracy. Dynamical planes were used to analyze the stability of the presented methods. It is worth noting that our method with memory in (24) had better stability than the other methods in this paper, while the stability of the method without memory in (4) was the worst. We can conclude that memory can effectively improve the stability of iterative methods without memory.

Author Contributions

Methodology, X.W.; writing—original draft preparation, X.W.; writing—review and editing, X.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research study was supported by the National Natural Science Foundation of China (Nos. 61976027), Educational Commission Foundation of Liaoning Province of China (Nos. LJ2019010), National Natural Science Foundation of Liaoning Province (No. 2019-ZD-0502) and LiaoNing Revitalization Talents Program (XLYC2008002).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ortega, J.M.; Rheinbolt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  2. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Hoboken, NJ, USA, 1964. [Google Scholar]
  3. Grau-Sánchez, M.; Noguera, M.; Amat, S. On the approximation of derivatives using divided difference operators preserving the local convergence order of iterative methods. J. Comput. Appl. Math. 2013, 237, 363–372. [Google Scholar] [CrossRef]
  4. Chicharro, F.I.; Cordero, A.; Garrido, N.; Torregrosa, J.R. On the improvement of the order of convergence of iterative methods for solving nonlinear systems by means of memory. Appl. Math. Lett. 2020, 104, 106277. [Google Scholar] [CrossRef]
  5. Ahmad, F.; Soleymani, F.; Haghani, F.K.; Serra-Capizzano, S. Higher order derivative-free iterative methods with and without memory for systems of nonlinear equations. Appl. Math. Comput. 2017, 314, 199–211. [Google Scholar] [CrossRef]
  6. Petković, M.S.; Sharma, J.R. On some efficient derivative-free iterative methods with memory for solving systems of nonlinear equations. Numer. Algorithms 2016, 71, 457–474. [Google Scholar] [CrossRef]
  7. Kurchatov, V.A. On a method of linear interpolation for the solution of functional equations. Dokl. Akad. Nauk SSSR 1971, 198, 524–526. [Google Scholar]
  8. Argyros, I.K. On a two-point Newton-like method of convergent order two. Int. J. Comput. Math. 2005, 88, 219–234. [Google Scholar] [CrossRef]
  9. Argyros, I.K.; Ren, H. On the Kurchatov method for solving equations under weak conditions. Appl. Math. Comput. 2016, 273, 98–113. [Google Scholar] [CrossRef]
  10. Wang, X.; Jin, Y.; Zhao, Y. Derivative-Free Iterative Methods with Some Kurchatov-Type Accelerating Parameters for Solving Nonlinear Systems. Symmetry 2021, 13, 943. [Google Scholar] [CrossRef]
  11. Cordero, A.; Soleymani, F.; Torregrosa, J.R.; Khaksar Haghani, F. A family of Kurchatov-type methods and its stability. Appl. Math. Comput. 2017, 294, 264–279. [Google Scholar] [CrossRef] [Green Version]
  12. Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
  13. Ahmad, F.; Rehman, S.U.; Ullah, M.Z.; Aljahdali, H.M.; Ahmad, S.; Alshomrani, A.S.; Carrasco, J.A.; Ahmad, S.; Sivasankaran, S. Frozen Jacobian multistep iterative method for solving nonlinear IVPs and BVPs. Complexity 2017, 2017, 9407656. [Google Scholar] [CrossRef]
  14. Narang, M.; Bhatia, S.; Kanwar, V. New efficient derivative free family of seventh-order methods for solving systems of nonlinear equations. Numer. Algorithms 2017, 76, 283–307. [Google Scholar] [CrossRef]
  15. Cordero, A.; Gómez, E.; Torregrosa, J.R. Efficient high-order iterative methods for solving nonlinear systems and their application on heat conduction problems. Complexity 2017, 2017, 6457532. [Google Scholar] [CrossRef] [Green Version]
  16. Lee, M.Y.; Kim, Y.I. The dynamical analysis of a uniparametric family of three-point optimal eighth-order multiple-root finders under the Möbius conjugacy map on the Riemann sphere. Numer. Algorithms 2020, 83, 1063–1090. [Google Scholar] [CrossRef]
  17. Geum, Y.H.; Kim, Y.I.; Magreñán, Á.A. A biparametric extension of King’s fourth-order methods and their dynamics. Appl. Math. Comput. 2016, 282, 254–275. [Google Scholar] [CrossRef]
  18. Behl, R.; Cordero, A.; Torregrosa, J.R. High order family of multivariate iterative methods: Convergence and stability. J. Comput. Appl. Math. 2020, 405, 113053. [Google Scholar] [CrossRef]
  19. Neta, B.; Scott, M.; Chun, C. Basin attractors for various methods for multiple roots. Appl. Math. Comput. 2012, 218, 5043–5066. [Google Scholar]
  20. Argyros, I.K.; Magreñán, Á.A. Iterative Methods and Their Dynamics with Applications: A Contemporary Study; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar]
  21. Kotarski, W.; Gdawiec, K.; Lisowska, A. Polynomiography via Ishikawa and Mann iterations. In Proceedings of the International Symposium on Visual Computing; Springer: Berlin/Heidelberg, Germany, 2021; pp. 305–313. [Google Scholar]
  22. Susanto, H.; Karjanto, N. Newton’s method’s basins of attraction revisited. Appl. Math. Comput. 2009, 215, 1084–1090. [Google Scholar] [CrossRef] [Green Version]
  23. Deng, J.J.; Chiang, H.D. Convergence region of Newton iterative power flow method: Numerical studies. J. Appl. Math. 2013, 2013, 509496. [Google Scholar] [CrossRef] [Green Version]
  24. Ardelean, G. A comparison between iterative methods by using the basins of attraction. Appl. Math. Comput. 2011, 218, 88–95. [Google Scholar] [CrossRef]
Figure 1. The iterative processes of iterative methods for Problem 1.
Figure 2. The iterative processes of iterative methods for Problem 2.
Figure 3. The iterative processes of iterative methods for Problem 3.
Figure 4. The iterative processes of iterative methods for Problem 4.
Figure 5. The iterative processes of iterative methods for Problem 5.
Figure 6. The exact solutions of Example 6.
Figure 7. Numerical solution (a) and absolute error (b) of Example 6 (n = 10) obtained using the method in (31).
Figure 8. Numerical solution (a) and absolute error (b) of Example 6 (n = 20) obtained using the method in (31).
Figure 9. Dynamical planes for $z^2 - 1 = 0$. (a) Method (4), (b) Method FM3, (c) Method FM5, (d) Method (24), (e) Method (30), (f) Method (31).
Figure 10. Dynamical planes for $z^3 - 1 = 0$. (a) Method (4), (b) Method FM3, (c) Method FM5, (d) Method (24), (e) Method (30), (f) Method (31).
Figure 11. Dynamical planes for $z^4 - 1 = 0$. (a) Method (4), (b) Method FM3, (c) Method FM5, (d) Method (24), (e) Method (30), (f) Method (31).
Figure 12. Dynamical planes for $z^5 - 1 = 0$. (a) Method (4), (b) Method FM3, (c) Method FM5, (d) Method (24), (e) Method (30), (f) Method (31).
Table 1. Numerical results of different iterative methods for Problem 1.
MethodsNITEVLFUACOC
(4)77.931 × 10 126 2.395 × 10 374 3.00000
FM376.263 × 10 153 2.601 × 10 416 2.73574
(24)75.070 × 10 180 3.909 × 10 538 3.00000
FM566.709 × 10 344 4.721 × 10 1374 3.98043
(30)63.068 × 10 415 1.153 × 10 1892 4.56599
(31)51.667 × 10 135 3.865 × 10 674 5.00108
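The ACOC column reports the approximated computational order of convergence, which in the scalar case is commonly estimated from the last four iterates as ACOC ≈ ln(|x_{k+1} − x_k| / |x_k − x_{k−1}|) / ln(|x_k − x_{k−1}| / |x_{k−1} − x_{k−2}|). A minimal sketch of this estimate (our illustration, not the paper's code; Newton's method on x² − 2 stands in for the methods compared above):

```python
import math

def acoc(xs):
    """Approximated computational order of convergence from the last four iterates."""
    d1 = abs(xs[-1] - xs[-2])  # |x_{k+1} - x_k|
    d2 = abs(xs[-2] - xs[-3])  # |x_k - x_{k-1}|
    d3 = abs(xs[-3] - xs[-4])  # |x_{k-1} - x_{k-2}|
    return math.log(d1 / d2) / math.log(d2 / d3)

# Illustration: Newton iterates for f(x) = x^2 - 2, whose order is 2
xs = [1.5]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2) / (2 * x))
order = acoc(xs)
print(f"ACOC = {order:.2f}")
```

In the vector case the absolute values are replaced by norms of the differences of successive iterates; the estimate stabilizes only once the iterates are close to the root, which is why the tables compute it at the final iterations.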
Table 2. Numerical results of different iterative methods for Problem 2.

| Methods | NIT | EVL | FU | ACOC |
|---|---|---|---|---|
| (4) | 8 | 1.094 × 10^-117 | 1.999 × 10^-347 | 3.00000 |
| FM3 | 7 | 2.310 × 10^-147 | 8.319 × 10^-401 | 2.74317 |
| (24) | 7 | 3.056 × 10^-172 | 3.356 × 10^-513 | 3.00000 |
| FM5 | 6 | 8.076 × 10^-373 | 2.939 × 10^-1486 | 4.08488 |
| (30) | 5 | 1.821 × 10^-108 | 2.669 × 10^-497 | 4.79681 |
| (31) | 5 | 5.055 × 10^-110 | 3.685 × 10^-544 | 4.99999 |
Table 3. Numerical results of different iterative methods for Problem 3.

| Methods | NIT | EVL | FU | T | ACOC |
|---|---|---|---|---|---|
| (4) | 5 | 5.500 × 10^-123 | 5.095 × 10^-373 | 567.141 | 3.00009 |
| FM3 | 6 | 2.718 × 10^-168 | 1.667 × 10^-463 | 405.072 | 2.71295 |
| (24) | 6 | 6.628 × 10^-234 | 8.935 × 10^-706 | 411.702 | 3.00001 |
| FM5 | 5 | 5.078 × 10^-292 | 8.678 × 10^-1171 | 502.058 | 4.04268 |
| (30) | 5 | 9.412 × 10^-376 | 2.431 × 10^-1716 | 484.133 | 4.55973 |
| (31) | 4 | 4.154 × 10^-113 | 1.123 × 10^-570 | 377.491 | 5.01082 |
Table 4. Numerical results of different iterative methods for Problem 4.

| Methods | NIT | EVL | FU | T | ACOC |
|---|---|---|---|---|---|
| (4) | 6 | 4.082 × 10^-122 | 3.486 × 10^-365 | 57.127 | 3.00008 |
| FM3 | 8 | 1.345 × 10^-203 | 7.061 × 10^-491 | 58.625 | 2.41965 |
| (24) | 7 | 1.185 × 10^-114 | 1.976 × 10^-299 | 52.385 | 2.70456 |
| FM5 | 6 | 8.223 × 10^-177 | 1.566 × 10^-629 | 65.614 | 3.54447 |
| (30) | 6 | 1.601 × 10^-229 | 1.857 × 10^-852 | 65.099 | 3.74370 |
| (31) | 5 | 3.550 × 10^-104 | 9.284 × 10^-439 | 52.054 | 4.14892 |
Table 5. Numerical results of different iterative methods for Problem 5.

| Methods | NIT | EVL | FU | ACOC |
|---|---|---|---|---|
| (4) | 5 | 3.143 × 10^-102 | 6.433 × 10^-305 | 2.99114 |
| FM3 | 6 | 2.325 × 10^-168 | 3.396 × 10^-459 | 2.71541 |
| (24) | 6 | 5.172 × 10^-240 | 7.518 × 10^-719 | 2.98298 |
| FM5 | 5 | 8.133 × 10^-292 | 2.547 × 10^-1163 | 4.05479 |
| (30) | 5 | 2.915 × 10^-369 | 2.206 × 10^-1684 | 4.56718 |
| (31) | 4 | 1.441 × 10^-102 | 3.107 × 10^-511 | 4.99898 |
Table 6. Numerical results of different methods for Problem 6 (n = 10).

| Methods | (4) | FM3 | (24) | FM5 | (30) | (31) |
|---|---|---|---|---|---|---|
| NIT | 5 | 6 | 6 | 4 | 4 | 4 |
| e-Time | 47.654 | 48.297 | 47.533 | 48.048 | 47.861 | 47.851 |
Table 7. Numerical results of different methods for z^2 - 1 = 0.

| Methods | (4) | FM3 | FM5 | (24) | (30) | (31) |
|---|---|---|---|---|---|---|
| POP | 99.99% | 100% | 100% | 100% | 100% | 100% |
| Time | 38.87 | 40.86 | 39.34 | 42.42 | 38.66 | 38.05 |
Table 8. Numerical results of different methods for z^3 - 1 = 0.

| Methods | (4) | FM3 | FM5 | (24) | (30) | (31) |
|---|---|---|---|---|---|---|
| POP | 68.53% | 99.45% | 97.25% | 99.96% | 97.96% | 97.70% |
| Time | 91.04 | 57.38 | 61.09 | 59.27 | 59.28 | 56.58 |
Table 9. Numerical results of different methods for z^4 - 1 = 0.

| Methods | (4) | FM3 | FM5 | (24) | (30) | (31) |
|---|---|---|---|---|---|---|
| POP | 21.72% | 92.93% | 91.89% | 99.31% | 92.65% | 89.45% |
| Time | 159.96 | 87.82 | 92.12 | 99.70 | 99.87 | 87.80 |
Table 10. Numerical results of different methods for z^5 - 1 = 0.

| Methods | (4) | FM3 | FM5 | (24) | (30) | (31) |
|---|---|---|---|---|---|---|
| POP | 10.48% | 92.96% | 89.51% | 97.95% | 89.38% | 90.33% |
| Time | 179.48 | 92.63 | 97.92 | 92.10 | 96.96 | 92.08 |
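POP in Tables 7–10 is the percentage of initial points in the sampled region whose iterates reach one of the roots within the iteration cap. A minimal sketch of how such a figure can be computed (our illustration: the 200 × 200 grid on [−3, 3]², the 10⁻⁶ tolerance, the 40-step cap, and the memory seed are all our choices, and the classical Kurchatov iteration stands in for the methods compared above):

```python
import numpy as np

def kurchatov_pop(f, roots, n=200, box=3.0, tol=1e-6, kmax=40):
    """Percentage of grid starting points whose Kurchatov orbit reaches a root."""
    ticks = np.linspace(-box, box, n)
    z = (ticks[None, :] + 1j * ticks[:, None]).ravel()  # grid of initial points
    z_prev = z + 0.01                                   # seed for the memory term
    for _ in range(kmax):
        # Kurchatov divided difference over the nodes 2z_k - z_{k-1} and z_{k-1}
        denom = f(2 * z - z_prev) - f(z_prev)
        safe = np.where(denom == 0, 1, denom)           # avoid division by zero
        step = np.where(denom == 0, 0, 2 * (z - z_prev) * f(z) / safe)
        z_prev, z = z, z - step
    converged = np.zeros(z.shape, dtype=bool)
    for r in roots:
        converged |= np.abs(z - r) < tol
    return 100.0 * converged.mean()

pop = kurchatov_pop(lambda z: z**2 - 1, [1.0, -1.0])
print(f"POP = {pop:.2f}%")
```

Coloring each grid point by the root it reaches (and black when it reaches none) yields the dynamical planes of Figures 9–12; the cited Time values would then be the wall-clock cost of sweeping the whole grid.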
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Wang, X.; Chen, X. Derivative-Free Kurchatov-Type Accelerating Iterative Method for Solving Nonlinear Systems: Dynamics and Applications. Fractal Fract. 2022, 6, 59. https://doi.org/10.3390/fractalfract6020059
