1. Introduction
Finding the solutions of a system of nonlinear equations F(x) = 0 is an active research problem with wide applications in science and engineering, where F: D ⊆ R^m → R^m and D is an open convex domain in R^m. Many efficient methods have been proposed for solving systems of nonlinear equations; see, for example, [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18] and the references therein. The best known method is the Steffensen method [1,2], which is given by
x^(k+1) = x^(k) − [u^(k), x^(k); F]^(−1) F(x^(k)), u^(k) = x^(k) + F(x^(k)), (1)
where [u^(k), x^(k); F]^(−1) is the inverse of the first-order divided difference [u^(k), x^(k); F] of F on D. Equation (1) does not require the derivative of the system F at any step of the iteration.
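As an illustration (the helper names and the test problem are ours, not the paper's), one Steffensen iteration for a system can be sketched in Python, with the componentwise first-order divided difference standing in for the Jacobian:

```python
import numpy as np

def divided_difference(F, y, x):
    """Componentwise first-order divided difference [y, x; F]:
    [y, x; F]_{ij} = (F_i(y_1..y_j, x_{j+1}..x_m)
                      - F_i(y_1..y_{j-1}, x_j..x_m)) / (y_j - x_j)."""
    m = x.size
    M = np.empty((m, m))
    for j in range(m):
        hi = np.concatenate([y[:j + 1], x[j + 1:]])  # y up to index j
        lo = np.concatenate([y[:j], x[j:]])          # y up to index j - 1
        M[:, j] = (F(hi) - F(lo)) / (y[j] - x[j])
    return M

def steffensen_step(F, x):
    """One Steffensen iteration: x+ = x - [x + F(x), x; F]^{-1} F(x)."""
    fx = F(x)
    M = divided_difference(F, x + fx, x)
    return x - np.linalg.solve(M, fx)  # one linear solve, no derivatives
```

Each iteration costs one divided difference and one linear solve; no derivative of F is evaluated.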
To reduce the computational time and improve the efficiency index of the Steffensen method, many modified high-order methods have been proposed in the open literature; see [3,4,5,6,7,8,9,10,11,12,13,14] and the references therein. Liu et al. [3] obtained a fourth-order derivative-free method for solving systems of nonlinear equations, which can be written as
where
Grau-Sánchez
et al. [
4,
5] developed some efficient derivative-free methods. One of the methods is the following sixth-order method
where
and
It should be noted that Equations (2) and (3) each require two LU decompositions per iteration. Some derivative-free methods are also discussed by Ezquerro et al. in [6] and by Wang et al. in [7,8]. The above multi-step derivative-free iterative methods can save computing time in high-precision computing. Therefore, it is meaningful to study multi-step derivative-free iterative methods.
It is well known that we can improve the efficiency index of an iterative method and reduce the computational time of the iterative process by reducing the computational cost per iteration. There are many ways to do so. In this paper, we reduce the computational cost by reducing the number of LU (lower-upper) decompositions per iteration. Two new derivative-free iterative methods for solving systems of nonlinear equations are proposed in
Section 2. We prove the local convergence order of the new methods. The feature of the new methods is that the LU decomposition is computed only once per iteration.
Section 3 compares the efficiency of different methods by the computational efficiency index [
10].
Section 4 illustrates the convergence behavior of our methods with numerical examples.
Section 5 is a short conclusion.
2. The New Methods and Analysis of Convergence
Using the central difference
, we propose the following iterative scheme
where
,
and
I is the identity matrix. Furthermore, if we define
, then the order of convergence of the following method is six.
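Although the exact correction terms of the scheme cannot be recovered from this excerpt, its single-LU structure can be sketched as follows. This is a sketch under that assumption: the substeps form a generic multi-step pattern, the central-difference operator [x + F(x), x − F(x); F] replaces the derivative, and `divided_difference` is the standard componentwise operator.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def divided_difference(F, y, x):
    # Componentwise first-order divided difference [y, x; F].
    m = x.size
    M = np.empty((m, m))
    for j in range(m):
        hi = np.concatenate([y[:j + 1], x[j + 1:]])
        lo = np.concatenate([y[:j], x[j:]])
        M[:, j] = (F(hi) - F(lo)) / (y[j] - x[j])
    return M

def multistep_iteration(F, x):
    """One multi-step iteration built on the central-difference operator
    [x + F(x), x - F(x); F]; its LU factors are computed once and reused
    by every substep, which is the cost saving discussed in the text."""
    fx = F(x)
    M = divided_difference(F, x + fx, x - fx)
    lu_piv = lu_factor(M)            # the only LU decomposition per iteration
    y = x - lu_solve(lu_piv, fx)     # first substep
    z = y - lu_solve(lu_piv, F(y))   # later substeps reuse the same factors
    return z - lu_solve(lu_piv, F(z))
```

Reusing the LU factors turns each additional substep into two triangular solves rather than a fresh O(m^3) factorization.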
Compared with Equation (4), Equation (5) requires one additional function evaluation. To simplify the calculation, the new Equation (4) can be rewritten as
A similar strategy can be used for Equation (5). For Equations (4) and (5), we have the following convergence analysis.
Theorem 1. Let α be a solution of the system F(x) = 0 and let F be sufficiently differentiable in an open neighborhood D of α. Then, for an initial approximation sufficiently close to α, the convergence order of iterative Equation (4) is four, with the following error equation
where and
Iterative Equation (5) is of sixth-order convergence and satisfies the following error equation
where
Proof. The first-order divided difference operator of
F as a mapping
(see [
5,
10,
11]) is given by
Expanding
in Taylor series at the point
x and integrating, we obtain
Developing
in a neighborhood of
α and assuming that
exists, we have
where
The derivatives of
can be given by
Setting
and
, we have
. Replacing the previous expressions Equations (
12)–(
14) into Equation (
10) we get
Noting that
and
, we replace in Equation (
15)
E by
,
e by
, we obtain
where
and
I is the identity matrix. Using Equation (
16), we find
Then, we require the inverse of
to be (see [
12,
13])
such that
and
satisfy
Solving the system Equation (
19), we obtain
then,
Similar to Equation (
11), we have
From Equations (
15) and (
22)–(
24), we get
Taking into account Equation (
4), (
24) and (
25), we obtain
This means that Equation (
4) is of fourth-order convergence.
Therefore, from Equations (
5) and (
24)–(
26), we obtain the error equation:
This means that Equation (
5) is of sixth-order convergence. ☐
3. Computational Efficiency
The classical efficiency index (see [9]) is the most widely used index, but not the only one. We find that iterative methods with the same classical efficiency index may behave differently in actual applications. The reason is that the number of functional evaluations of an iterative method is not the only factor in evaluating its efficiency. The number of matrix products, scalar products, LU decompositions of matrices, and solutions of triangular linear systems also play an important role in evaluating the real efficiency of an iterative method. In this paper, the computational efficiency index (CEI) [10] is used to compare the efficiency of the iterative methods. Some discussions on the CEI can be found in [4,5,6,7]. The CEI of an iterative method is given by
CEI(μ, m) = ρ^(1/C(μ, m)), (28)
where ρ is the order of convergence of the method and C(μ, m) is the computational cost of the method, which is given by
C(μ, m) = a(m) μ + p(m), (29)
where a(m) denotes the number of evaluations of scalar functions used in the evaluations of F and the divided differences, and p(m) represents the operational cost per iteration, expressed in products. To express the value of Equation (29) in terms of products, a ratio μ between products (and divisions) and evaluations of functions is required; see [5,10]. We must add
m products for multiplication of a vector by a scalar and
products for matrix-vector multiplication. To compute an inverse linear operator, we need
products and divisions in the LU decomposition and
products and divisions for solving the two triangular linear systems. If we compute a first-order divided difference, then we need
scalar functional evaluations and
quotients. The first-order divided difference
of
F is given by
where
,
and
(see [
9]). Based on Equations (28) and (29),
Table 1 shows the computational cost of different methods.
Table 1.
Computational cost of the iterative methods.
Methods | ρ | a(m) | p(m) | C(μ, m) |
---|
| 2 | | | |
| 4 | | | |
| 6 | | | |
| 4 | | | |
| 6 | | | |
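Reading Equations (28) and (29) as CEI(μ, m) = ρ^(1/C(μ, m)) with C(μ, m) = a(m)μ + p(m), consistent with the columns of Table 1, the index can be evaluated as below. The cost polynomials here are illustrative placeholders, not the paper's Table 1 entries:

```python
def cei(rho, a_m, p_m, mu):
    """Computational efficiency index: CEI = rho**(1/C), C = a(m)*mu + p(m)."""
    return rho ** (1.0 / (a_m * mu + p_m))

# Illustrative cost polynomials in the size m (NOT the paper's Table 1 values):
# a fourth-order method with one LU decomposition per iteration versus a
# sixth-order method performing two LU decompositions per iteration.
def compare(m, mu=2.0):
    c4 = cei(4, a_m=2 * m * m + 2 * m, p_m=m**3 // 3 + 3 * m * m, mu=mu)
    c6 = cei(6, a_m=3 * m * m + 3 * m, p_m=2 * m**3 // 3 + 6 * m * m, mu=mu)
    return c4, c6  # the larger CEI identifies the more efficient method
```

As m grows, C(μ, m) grows like m^3 and every CEI approaches 1, which is why the comparisons below are stated for ranges of μ and m.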
From
Table 1, we can see that our methods
require fewer LU decompositions than the methods
and
. The computational costs of the fourth-order methods satisfy the following ordering:
We use the following expressions [
10] to compare the CEI of different methods
When this ratio is greater than one, the first iterative method is more efficient than
Using the CEI of the iterative methods, we obtain the following theorem:
Theorem 2. 1. For the fourth-order method, we have for all
2. For the sixth-order method, we have for all
Proof. 1. From
Table 1, we note that the methods
have the same order
Based on Equations (
29) and (
30), we get that
for all
and
2. The methods
have the same order and the same functional evaluations. The relation between
and
can be given by
Subtracting the denominator from the numerator of Equation (
32), we have
Equation (
33) is positive for
Thus, we obtain that
for all
and
☐
Then, we compare the CEI of the iterative methods with different convergence orders by the following theorem:
Theorem 3. We have 1. for all and
2. for all and
Proof. 1. From the expression Equation (
31) and
Table 1, we get the following relation between
and
We consider the boundary
. The boundary is given by the following equation
where
over it (see
Figure 1). The boundary Equation (
35) cuts the axes at the points
and
. Thus, we get that
since
for all
and
2. The relation between
and
is given by
Subtracting the denominator from the numerator of Equation (
36), we have
Equation (
37) is positive for
Thus, we obtain that
for all
and
☐
Figure 1.
The boundary function in the plane.
4. Numerical Examples
In this section, we compare the performance of the related methods through numerical experiments. The experiments have been carried out using the Maple 14 computer algebra system with 2048 digits. The computer specifications are: Microsoft Windows 7, Intel(R) Core(TM) i3-2350M CPU, 1.79 GHz, with 2 GB of RAM.
According to Equation (29), the factor μ is obtained by expressing the cost of the evaluation of elementary functions in terms of products [15].
Table 2 gives an estimation of the cost of the elementary functions in amount of equivalent products, where the running time of one product is measured in milliseconds.
Table 2.
Estimation of the computational cost of elementary functions computed with Maple 14, using a processor Intel® Core(TM) i3-2350M CPU, 1.79 GHz (32-bit machine) under Microsoft Windows 7 Professional.
Digits | x·y | x/y | | | | | | |
---|
2048 | 0.109 ms | 1 | 5 | 53 | 12 | 112 | 110 | 95 |
Table 3,
Table 4,
Table 5,
Table 6,
Table 7 and
Table 8 show the following information for the methods
: the number of iterations
k needed to converge to the solution, the norm of function
at the last step, the value of the stopping factors at the last step, the computational cost
C, the computational time
, the computational efficiency indices
and the computational order of convergence
ρ. Using the command time( ) in Maple 14, we can obtain the computational time of the different methods. The computational order of convergence
ρ is defined by [
16]:
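The stripped definition from [16] is commonly stated, for successive corrections of the iterates, as ρ ≈ ln(‖x_(k+1) − x_k‖ / ‖x_k − x_(k−1)‖) / ln(‖x_k − x_(k−1)‖ / ‖x_(k−1) − x_(k−2)‖); we cannot confirm the paper's exact variant from this excerpt, so the sketch below uses this standard approximated form:

```python
import math

def acoc(iterates):
    """Approximated computational order of convergence from the last four
    iterates (a standard ACOC formula; the paper's variant may differ)."""
    d = [max(abs(a - b) for a, b in zip(u, v))         # infinity norms of
         for u, v in zip(iterates[1:], iterates[:-1])]  # successive corrections
    return math.log(d[-1] / d[-2]) / math.log(d[-2] / d[-3])
```

Applied to the last four iterates of a convergent run, the returned value approximates the theoretical order ρ.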
The following problems are chosen for numerical tests:
Example 1 Consider the following system
where
are the values used in Equation (
29).
is the initial point and
is the solution of Example 1.
is the stopping criterion.
The results shown in
Table 3 confirm the first assertion of Theorem 2 and the first assertion of Theorem 3 for
Namely,
for
. The new sixth-order method
spends the least time in finding the numerical solution. In Table 3, ’nc’ denotes that the method does not converge.
Table 3.
Performance of methods for Example 1.
Method | k | | | ρ | C | CEI | Time(s) |
---|
| 13 | 1.792e−161 | 3.748e−322 | 2.00000 | 838 | 1.0008275 | 1.127 |
| nc | | | | | | |
| 4 | 3.558e−743 | 8.245e−496 | 6.00314 | 1960 | 1.0009148 | 0.780 |
| 5 | 4.086e−211 | 5.330e−421 | 4.00015 | 1686 | 1.0008226 | 0.836 |
| 4 | 6.240e−164 | 2.389e−489 | 6.00420 | 1978 | 1.0009063 | 0.546 |
Example 2 The second system is defined by [
11]
where
. The initial point is
.
is the stopping criterion. The solution is
The results shown in
Table 4 confirm the first assertion of Theorem 2 and assertion 1 of Theorem 3 for
Namely,
and
for
.
Table 4 shows that the sixth-order method
is the most efficient iterative method in both computational time and
.
Table 4.
Performance of methods for Example 2.
Method | k | | | ρ | C | CEI | Time(s) |
---|
| 9 | 2.136e−302 | 5.945e−604 | 2 | 449.6 | 1.00154289 | 0.514 |
| 5 | 2.439e−675 | 2.703e−1350 | 4 | 1032.1 | 1.00134408 | 0.592 |
| 4 | 1.414e−1080 | 7.020e−1620 | 6 | 1023.1 | 1.00175284 | 0.561 |
| 5 | 4.123e−699 | 9.73957e−1397 | 4 | 915.2 | 1.00151589 | 0.561 |
| 4 | 9.097e−550 | 8.57708e−1647 | 6 | 1054.1 | 1.00170125 | 0.483 |
Example 3 Now, consider the following large-scale nonlinear system [17]:
The initial vector is for the solution. The stopping criterion is .
Table 5.
Performance of methods for Example 3, where .
Method | k | | | ρ | C | CEI | Time(s) |
---|
| 10 | 4.993e−150 | 7.480e−299 | 2.00000 | 2,745,802 | 1.000000252 | 95.940 |
| 5 | 3.013e−212 | 1.210e−423 | 4.00000 | 5,649,610 | 1.000000245 | 126.438 |
| 4 | 3.922e−556 | 2.197e−833 | 5.99998 | 5,571,005 | 1.000000322 | 77.111 |
| 5 | 1.404e−269 | 9.850e−538 | 4.00000 | 2,944,404 | 1.000000471 | 81.042 |
| 4 | 5.298e−208 | 2.231e−621 | 5.99976 | 3,063,804 | 1.000000585 | 64.818 |
Table 6.
The computational time (in second) for Example 3 by the methods.
Method | | | | | |
---|
| 20.982 | 29.499 | 16.848 | 19.219 | 15.459 |
| 95.940 | 126.438 | 77.111 | 81.042 | 64.818 |
| 254.234 | 328.896 | 207.340 | 199.930 | 156.094 |
Application in Integral Equations
The Chandrasekhar integral equation [18] comes from radiative transfer theory and is given by
with the operator
F and parameter
c as
We approximate the integrals by the composite midpoint rule:
where
for
The resulting discrete problem is
The initial vector is
,
.
Table 7 and
Table 8 show the numerical results of this problem.
is the stopping criterion of this problem.
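A standard composite-midpoint discretization of the H-equation reads as follows (following the classical presentation of this test problem; the value of c and the plain fixed-point solver below are our illustrative choices, not necessarily the paper's):

```python
import numpy as np

def chandrasekhar_F(x, c):
    """Residual of the discretized Chandrasekhar H-equation with the
    composite midpoint rule (standard form):
    F(x)_i = x_i - 1 / (1 - (c/(2N)) * sum_j mu_i*x_j/(mu_i + mu_j)),
    where mu_i = (i - 1/2)/N are the midpoint nodes."""
    N = x.size
    mu = (np.arange(1, N + 1) - 0.5) / N
    K = mu[:, None] / (mu[:, None] + mu[None, :])  # kernel mu_i/(mu_i + mu_j)
    return x - 1.0 / (1.0 - (c / (2 * N)) * (K @ x))

# Plain fixed-point (Picard) iteration x <- x - F(x); it converges for
# moderate c and serves only to check the discretization.
x = np.ones(100)
for _ in range(100):
    x = x - chandrasekhar_F(x, c=0.5)
```

The dense kernel matrix makes this a good stress test for methods that amortize a single LU decomposition over several substeps.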
Table 7.
The computational time (in second) for solving Chandrasekhar Integral equation.
Method | | | | | |
---|
| 88.468 | 207.200 | 87.937 | 102.055 | 70.309 |
| 422.388 | 904.602 | 435.929 | 488.969 | 400.345 |
Table 8.
The number of iterations for solving Chandrasekhar Integral equation.
Method | | | | | |
---|
| 8 | 6 | 4 | 5 | 4 |
| 8 | 6 | 4 | 5 | 4 |
The results shown in
Table 5 confirm the assertions of Theorem 2 and Theorem 3 for
Namely,
and
. From
Table 6, we remark that the computational time of our fourth-order method
is less than that of the sixth-order method
for
.
Table 5,
Table 6 and
Table 7 show that, as the nonlinear system becomes large, our new methods
remarkably reduce the computational time.
The numerical results shown in
Table 3,
Table 4,
Table 5,
Table 6,
Table 7 and
Table 8 are in concordance with the theory developed in this paper. The new methods require fewer iterations to obtain higher accuracy than the other methods. Most importantly, our methods have a higher
and lower computational time than other methods in this paper. The sixth-order method
is the most efficient iterative method in both
and computational time.