1. Introduction
The design of iterative processes for solving scalar equations, $f(x)=0$, or nonlinear systems, $F(x)=0$, with n unknowns and equations, is an interesting challenge of numerical analysis. Many problems in Science and Engineering require the solution of a nonlinear equation or system at some step of the process. However, in general, both equations and nonlinear systems have no analytical solution, so we must resort to approximating the solution by means of iterative techniques. Different tools, such as quadrature formulas, Adomian polynomials, divided difference operators and weight function procedures, have been used by many researchers for designing iterative schemes to solve nonlinear problems. For a good overview of these procedures and techniques, as well as of the different schemes developed in the last half century, we refer the reader to some standard texts [1,2,3,4,5].
In this paper, we want to design Jacobian-free iterative schemes for approximating the solution $\xi$ of a nonlinear system $F(x)=0$, where $F: D \subseteq \mathbb{R}^n \rightarrow \mathbb{R}^n$ is a nonlinear multivariate function defined on a convex set D. The best known method for finding a solution $\xi \in D$ is Newton's procedure,
$x^{(k+1)} = x^{(k)} - [F'(x^{(k)})]^{-1} F(x^{(k)}), \quad k = 0, 1, \ldots,$
$F'(x^{(k)})$ being the Jacobian of F evaluated at the kth iteration.
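For reference, Newton's procedure can be sketched in a few lines of Python; the test function, starting point and tolerance below are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-10, max_iter=50):
    """Newton's method for F(x) = 0: x_{k+1} = x_k - [J(x_k)]^{-1} F(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # In practice we solve the linear system J(x) dx = F(x)
        # instead of inverting the Jacobian explicitly.
        dx = np.linalg.solve(J(x), F(x))
        x = x - dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Hypothetical test: F(x, y) = (x^2 + y^2 - 1, x - y),
# with solution (1/sqrt(2), 1/sqrt(2)).
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1, v[0] - v[1]])
J = lambda v: np.array([[2*v[0], 2*v[1]], [1.0, -1.0]])
root = newton_system(F, J, [1.0, 0.5])
```

Note that each iteration costs one evaluation of F, one of the Jacobian, and one linear solve; the Jacobian-free schemes discussed below remove the need for J.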
Based on Newton-type schemes and by using different techniques, several methods for approximating a solution of $F(x)=0$ have been published recently. The main objective of all these processes is to speed up convergence or increase computational efficiency. We recall some of them that will be used in the last section for comparison purposes.
From a variant of Steffensen's method for systems, introduced by Samanskii in [6], which replaces the Jacobian matrix $F'(x^{(k)})$ by the divided difference operator $[x^{(k)} + F(x^{(k)}), x^{(k)}; F]$, Wang and Fang in [7] designed a fourth-order scheme, denoted by WF4. In its iterative expression, I denotes the identity matrix of size $n \times n$. Let us observe that this method uses two functional evaluations and two divided difference operators per iteration. Let us also remark that Samanskii in [6] defined a third-order method with the same divided difference operator in both steps.
Sharma and Arora in [8] added a new step to the previous method, obtaining a sixth-order scheme, denoted by SA6, with the same divided difference operators as before. Compared with WF4, one additional functional evaluation per iteration is needed.
By replacing the third step of Equation (2), Narang et al. in [9] proposed a seventh-order scheme, denoted by NM7, that uses two divided difference operators and three functional evaluations per iteration.
In a similar way, Wang et al. (see [10]) designed a scheme of order seven, denoted by S7, by modifying only the third step of expression (3), with the same divided difference operators as in the previous schemes.
Different indices can be used to compare the efficiency of iterative processes. For example, in [11], Ostrowski introduced the efficiency index $I = p^{1/d}$, where p is the order of convergence and d is the number of functional evaluations per iteration. Moreover, the matrix inversions appearing in the iterative expressions are, in practice, calculated by solving linear systems. Therefore, the number of quotients/products, denoted by op, employed in each iteration plays an important role. This is the reason why we presented in [12] the computational efficiency index, CI, combining d and the number of operations per iteration. This index is defined as $CI = p^{1/(d+op)}$.
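Both indices are straightforward to evaluate numerically. In the sketch below, the evaluation and operation counts used for Newton's method are standard textbook figures adopted here as assumptions for illustration, not counts taken from this paper.

```python
# Efficiency indices I = p^(1/d) and CI = p^(1/(d + op)) for a system of size n.
# Assumed counts for one Newton step: d = n^2 + n functional evaluations
# (F plus its Jacobian) and op = (n^3 - n)/3 + n^2 products/quotients
# (LU decomposition plus the two triangular solves).
def efficiency_index(p, d):
    return p ** (1.0 / d)

def computational_efficiency_index(p, d, op):
    return p ** (1.0 / (d + op))

n = 10
d = n**2 + n                     # 110 functional evaluations
op = (n**3 - n) / 3 + n**2       # 430 products/quotients
I = efficiency_index(2, d)       # Newton's method has order p = 2
CI = computational_efficiency_index(2, d, op)
```

As n grows, the $n^3$ term in op dominates, which is why CI penalizes methods requiring many linear solves on large systems.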
The goal of this manuscript is to construct high-order Jacobian-free iterative schemes for solving nonlinear systems with low computational cost on large systems.
In Section 2, we recall some basic concepts used in the rest of the manuscript. Section 3 is devoted to describing our proposed iterative methods for solving nonlinear systems and to analyzing their convergence. The efficiency indices of our methods are studied in Section 4, along with a comparative analysis with the schemes presented in the Introduction. Several numerical tests are shown in Section 5 to illustrate the performance of the new schemes. To this end, we use a nonlinear one-dimensional heat conduction equation, discretized by means of approximations of the derivatives, as well as some systems of academic interest. We finish the manuscript with some conclusions.
2. Basic Concepts
If a sequence $\{x^{(k)}\}_{k \geq 0}$ in $\mathbb{R}^n$ converges to $\xi$, it is said to be of order of convergence p, with $p \geq 1$, if there exist $M > 0$ ($0 < M < 1$ for $p = 1$) and $k_0$ satisfying
$\|x^{(k+1)} - \xi\| \leq M \|x^{(k)} - \xi\|^{p}, \quad \forall k \geq k_0,$
or
$\|e^{(k+1)}\| \leq M \|e^{(k)}\|^{p}, \quad \forall k \geq k_0,$
$e^{(k)} = x^{(k)} - \xi$ being the error of the kth iterate.
Although this notation was presented by the authors in [12], we show it for the sake of completeness. Let $F: D \subseteq \mathbb{R}^n \rightarrow \mathbb{R}^n$ be sufficiently Fréchet differentiable in D. The qth derivative of F at $u \in \mathbb{R}^n$, $q \geq 1$, is the q-linear function $F^{(q)}(u): \mathbb{R}^n \times \cdots \times \mathbb{R}^n \rightarrow \mathbb{R}^n$ such that $F^{(q)}(u)(v_1, \ldots, v_q) \in \mathbb{R}^n$. Let us observe that
- 1. $F^{(q)}(u)(v_1, \ldots, v_{q-1}, \cdot) \in \mathcal{L}(\mathbb{R}^n)$, where $\mathcal{L}(\mathbb{R}^n)$ denotes the set of linear mappings defined from $\mathbb{R}^n$ into $\mathbb{R}^n$.
- 2. $F^{(q)}(u)(v_{\sigma(1)}, \ldots, v_{\sigma(q)}) = F^{(q)}(u)(v_1, \ldots, v_q)$, for all permutations $\sigma$ of $\{1, 2, \ldots, q\}$.
From the above properties, we can use the following notation (let us observe that $v^{p}$ denotes $(v, \ldots, v)$, p times):
- (a) $F^{(q)}(u)(v_1, \ldots, v_q) = F^{(q)}(u)\, v_1 \cdots v_q$
- (b) $F^{(q)}(u)\, v^{q-1} F^{(p)}(u)\, v^{p} = F^{(q)}(u) F^{(p)}(u)\, v^{q+p-1}$
Let us consider $F(x)$ in a neighborhood of $\xi$. By applying Taylor series and considering that $F'(\xi)$ is nonsingular,
$F(x^{(k)}) = F'(\xi) \left[ e^{(k)} + \sum_{q=2}^{p} C_q (e^{(k)})^{q} \right] + O\big((e^{(k)})^{p+1}\big),$
being $C_q = \frac{1}{q!} [F'(\xi)]^{-1} F^{(q)}(\xi)$, $q \geq 2$, and $e^{(k)} = x^{(k)} - \xi$. Let us notice that $C_q (e^{(k)})^{q} \in \mathbb{R}^n$, as $F^{(q)}(\xi) \in \mathcal{L}(\mathbb{R}^n \times \cdots \times \mathbb{R}^n, \mathbb{R}^n)$ and $[F'(\xi)]^{-1} \in \mathcal{L}(\mathbb{R}^n)$.
Moreover, we express $F'(x^{(k)})$ as
$F'(x^{(k)}) = F'(\xi) \left[ I + \sum_{q=2}^{p} q\, C_q (e^{(k)})^{q-1} \right] + O\big((e^{(k)})^{p}\big),$
the identity matrix being denoted by I. Then,
$[F'(x^{(k)})]^{-1} = \left[ I + X_2 e^{(k)} + X_3 (e^{(k)})^{2} + \cdots \right] [F'(\xi)]^{-1} + O\big((e^{(k)})^{p}\big).$
From expression (6), we get $X_2 = -2 C_2$ and $X_3 = 4 C_2^2 - 3 C_3$.
The equation
$e^{(k+1)} = K (e^{(k)})^{p} + O\big((e^{(k)})^{p+1}\big),$
where K is a p-linear operator, $K \in \mathcal{L}(\mathbb{R}^n \times \cdots \times \mathbb{R}^n, \mathbb{R}^n)$, is known as the error equation and p is the order of convergence. In addition, we denote $(e^{(k)}, e^{(k)}, \ldots, e^{(k)})$ by $(e^{(k)})^{p}$.
The divided difference operator of a function F (see, for example, [2]) is defined as a mapping $[\cdot, \cdot; F]: D \times D \subset \mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathcal{L}(\mathbb{R}^n)$ satisfying
$[x, y; F](x - y) = F(x) - F(y), \quad \text{for all } x, y \in D.$
In addition, by using the Genocchi–Hermite formula [13],
$[x + h, x; F] = \int_0^1 F'(x + t h)\, dt,$
and Taylor series expansions around x, the divided difference operator satisfies, for all x and $h \in \mathbb{R}^n$,
$[x + h, x; F] = F'(x) + \frac{1}{2} F''(x)\, h + \frac{1}{6} F'''(x)\, h^{2} + O(h^{3}).$
With $e^{(k)}$ and $e_y^{(k)}$ being the errors of $x^{(k)}$ and $y^{(k)}$, the divided difference operator for the points $x^{(k)}$ and $y^{(k)}$ can be expanded in terms of these errors, where the coefficients are obtained by replacing the Taylor expansion of the different terms that appear in development (9) and doing algebraic manipulations.
For computational purposes, the following expression (see [2]) is used:
$[x, y; F]_{ij} = \frac{F_i(x_1, \ldots, x_j, y_{j+1}, \ldots, y_n) - F_i(x_1, \ldots, x_{j-1}, y_j, \ldots, y_n)}{x_j - y_j}, \quad 1 \leq i, j \leq n,$
where $x = (x_1, \ldots, x_n)$ and $y = (y_1, \ldots, y_n)$.
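The componentwise formula translates directly into code. The following Python sketch (illustrative; the test function is our own hypothetical choice) builds the divided difference matrix column by column and verifies the defining property $[x, y; F](x - y) = F(x) - F(y)$ numerically.

```python
import numpy as np

def divided_difference(F, x, y):
    """First-order divided difference [x, y; F] via the componentwise formula:
    column j mixes the first j components of x with the last n - j of y."""
    n = len(x)
    M = np.empty((n, n))
    for j in range(n):
        zp = np.concatenate((x[:j + 1], y[j + 1:]))  # (x_1..x_j, y_{j+1}..y_n)
        zm = np.concatenate((x[:j], y[j:]))          # (x_1..x_{j-1}, y_j..y_n)
        M[:, j] = (F(zp) - F(zm)) / (x[j] - y[j])
    return M

# Hypothetical test function, just to check the defining property
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 1, np.exp(v[0]) - v[1]])
x = np.array([0.9, 0.4])
y = np.array([0.6, 0.7])
lhs = divided_difference(F, x, y) @ (x - y)
rhs = F(x) - F(y)
```

The property holds exactly (up to rounding) because the column sums telescope: the "plus" point of column j coincides with the "minus" point of column j+1.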
4. Efficiency Indices
As we have mentioned in the Introduction, we use the indices I and CI to compare the different iterative methods. To evaluate the function F, n scalar functions are calculated, and $n(n-1)$ additional scalar evaluations are needed for the first-order divided difference. In addition, to calculate an inverse linear operator, a linear system must be solved; then, we have to do $\frac{n^3 - n}{3} + n^2$ quotients/products for getting the LU decomposition and solving the corresponding triangular linear systems. Moreover, for solving m linear systems with the same matrix of coefficients, we need $\frac{n^3 - n}{3} + m n^2$ products/quotients. In addition, we need $n^2$ products for each matrix-vector multiplication and $n^2$ quotients for evaluating a divided difference operator.
According to these considerations, we calculate the efficiency indices I of the methods CJST5, NM7, S7, SA6 and WF4. In the case of CJST5, at each iteration, we evaluate F three times and one divided difference operator, so $d = 3n + n(n-1) = n^2 + 2n$ functional evaluations are needed. Therefore, $I = 5^{1/(n^2 + 2n)}$. The indices obtained for the other mentioned methods are calculated analogously and shown in Table 1.
In Table 2, we present the indices CI of the schemes NM7, SA6, S7, WF4 and CJST5. In it, we display the number of functional evaluations, the number of linear systems sharing the same matrix of coefficients, and the number of matrix-vector products. In the case of CJST5, at each iteration, $n^2 + 2n$ functional evaluations are needed, since we evaluate the function F three times and one first-order divided difference. In addition, we must solve three linear systems with the same divided difference operator as coefficient matrix. The value of CI for CJST5 follows by adding these counts in the exponent. Analogously, we obtain the indices CI of the other methods. In Figure 1, we show the computational efficiency index of the different methods for systems of size n from 5 to 80. The best index corresponds to our proposed scheme.
5. Numerical Examples
We begin this section by checking the performance of the new method on the system resulting from the discretization of Fisher's partial differential equation. Thereafter, we compare its behavior with that of other known methods on some academic problems. For the computations, we have used MATLAB R2015a (The MathWorks, Natick, Massachusetts, USA) with variable precision arithmetic, with 1000 digits of mantissa. Regarding the computer, the processor is an Intel(R) Core(TM) i7-7700 CPU @ 3.6 GHz, with four processors and 16 GB of RAM.
We use an estimation of the theoretical order of convergence p, called the Computational Order of Convergence (COC), introduced by Jay [14], with the following expression:
$p \approx \frac{\ln\left( \|x^{(k+1)} - \xi\| / \|x^{(k)} - \xi\| \right)}{\ln\left( \|x^{(k)} - \xi\| / \|x^{(k-1)} - \xi\| \right)},$
and the Approximated Computational Order of Convergence (ACOC), defined by Cordero and Torregrosa in [15],
$p \approx \frac{\ln\left( \|x^{(k+1)} - x^{(k)}\| / \|x^{(k)} - x^{(k-1)}\| \right)}{\ln\left( \|x^{(k)} - x^{(k-1)}\| / \|x^{(k-1)} - x^{(k-2)}\| \right)}.$
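The ACOC can be computed directly from the stored iterates, with no knowledge of the exact solution. The following Python sketch (illustrative) implements the formula above and checks it on a synthetic quadratically convergent sequence.

```python
import numpy as np

def acoc(iterates):
    """Approximated Computational Order of Convergence from consecutive iterates:
    p ~ ln(d_{k+1}/d_k) / ln(d_k/d_{k-1}), with d_k = ||x_{k+1} - x_k||."""
    d = [np.linalg.norm(np.subtract(b, a)) for a, b in zip(iterates, iterates[1:])]
    return [np.log(d[k + 1] / d[k]) / np.log(d[k] / d[k - 1])
            for k in range(1, len(d) - 1)]

# Synthetic sequence x_k -> 0 with e_{k+1} = e_k^2 (order 2)
xs = [0.5]
for _ in range(5):
    xs.append(xs[-1] ** 2)
orders = acoc(xs)   # last entries approach 2
```

The early entries of the returned list are unreliable, since the formula is asymptotic; in practice only the last value is reported.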
Example 1. Fisher's equation was proposed in [16] by Fisher to model the diffusion process in population dynamics. In it, the leading coefficient is the diffusion constant, r is the growth rate of the species and p is the carrying capacity. Lately, this formulation has proven to be fruitful for many other problems, such as wave genetics, economy or propagation. Now, we study a particular case of this equation on a bounded spatial interval, with null boundary conditions.
We transform Example 1 into a set of nonlinear systems by applying an implicit finite difference method, providing the estimated solution at each time instant from the estimated one at the previous instant. The spatial step and the temporal step are chosen according to the number of subintervals in x and t, respectively, and to the final instant. Therefore, a grid of the domain with equally spaced points is selected. Our purpose is to estimate the solution of problem (12) at these points, by solving as many nonlinear systems as temporal nodes. To do this, we use finite differences of second order.
By denoting the approximation of the solution at each grid point by its corresponding unknown and replacing it in Example 1, we get a nonlinear system at each time level. The unknowns of this system are the approximations of the solution at each spatial node for the fixed instant. Let us remark that, for solving this system, the knowledge of the solution at the previous instant is required.
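As an illustration of this construction, the following Python sketch assembles and solves, by Newton's method, the nonlinear system of one implicit time step for a Fisher-type equation. The parameters (unit diffusion and growth, a sine initial profile, the grid sizes) are our own assumptions, not the exact setting used in the paper.

```python
import numpy as np

# One backward-Euler time step for u_t = u_xx + u(1 - u) on (0, 1),
# with null boundary conditions. For interior nodes i = 1..n:
#   (u_i - v_i)/dt - (u_{i-1} - 2 u_i + u_{i+1})/h^2 - u_i (1 - u_i) = 0,
# where v holds the (known) previous time level.
def residual(u, v, h, dt):
    up = np.concatenate(([0.0], u, [0.0]))            # pad with boundary zeros
    lap = (up[:-2] - 2 * up[1:-1] + up[2:]) / h**2    # discrete u_xx
    return (u - v) / dt - lap - u * (1 - u)

def jacobian(u, h, dt):
    n = len(u)
    J = np.zeros((n, n))
    np.fill_diagonal(J, 1 / dt + 2 / h**2 - (1 - 2 * u))
    idx = np.arange(n - 1)
    J[idx, idx + 1] = -1 / h**2                       # tridiagonal structure
    J[idx + 1, idx] = -1 / h**2
    return J

n, dt = 20, 0.01
h = 1.0 / (n + 1)
xs = np.linspace(h, 1 - h, n)                         # interior nodes
v = np.sin(np.pi * xs)                                # previous time level
u = v.copy()
for _ in range(10):                                   # Newton iteration
    du = np.linalg.solve(jacobian(u, h, dt), residual(u, v, h, dt))
    u -= du
    if np.linalg.norm(du) < 1e-12:
        break
```

Marching this step forward in time yields the approximation on the whole grid; in the paper, the linear algebra inside each step is replaced by the Jacobian-free schemes under study.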
Let us observe (Table 3) that the results improve when the temporal step is smaller. In this case, the COC is not a good estimation of the theoretical order. In Figure 2, we show the approximate solution of the problem obtained by using method CJST5.
In the rest of the examples, we compare the performance of the proposed method with the schemes presented in the Introduction, as well as with the Newton-type method obtained by replacing the Jacobian matrix with the divided difference operator, that is, Samanskii's scheme (see [6]).
Example 2. Let us define the first academic nonlinear system; we use the corresponding starting estimation, $\xi$ being the solution. Table 4 shows the residuals $\|x^{(k+1)} - x^{(k)}\|$ and $\|F(x^{(k+1)})\|$ for the first iterations, as well as the ACOC and COC. We observe that the COC index is better than the corresponding ACOC for the other methods. In addition, the value obtained is better than or similar to that of the S7 and NM7 methods, both of them of order seven.
The numerical results for the second academic problem are displayed in Table 5, with the corresponding initial estimation, size of the system and solution $\xi$. We show the same information as in the previous example.
Example 4. The third academic example is given by a system with solution $\xi$. By using the corresponding starting guess, we obtain the results appearing in Table 6.
The different methods give us the expected results, according to their order of convergence.
Example 5. Finally, the last example that we consider is a system with solution $\xi$. By using the corresponding initial estimation, we obtain the numerical results displayed in Table 7.
It is observed in
Table 5,
Table 6 and
Table 7 that, for the proposed academic problems, the introduced method (CJST5) shows a good performance comparable with higher-order methods. Of course, the worst results are those obtained by Samanskii’s method, but it has been included because it is the Jacobian-free version of Newton’s scheme and it is also the first step of our proposed scheme. Let us also remark that, when only three iterations are calculated, the index COC gives more reliable information than the ACOC one in all of the examples.