Article

A New Class of Iterative Processes for Solving Nonlinear Systems by Using One Divided Differences Operator

1 Multidisciplinary Institute of Mathematics, Universitat Politècnica de València, 46022 València, Spain
2 Department of Applied Mathematics, Universitat Politècnica de València, 46022 València, Spain
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(9), 776; https://doi.org/10.3390/math7090776
Submission received: 23 July 2019 / Revised: 16 August 2019 / Accepted: 19 August 2019 / Published: 23 August 2019
(This article belongs to the Special Issue Iterative Methods for Solving Nonlinear Equations and Systems)

Abstract

In this manuscript, a new family of Jacobian-free iterative methods for solving nonlinear systems is presented. Fourth-order convergence is established for all the members of the class, proving, in addition, that one element of this family has order five. The proposed methods have four steps and, in all of them, the same divided difference operator appears. Numerical problems, including systems of academic interest and the system resulting from the discretization of the boundary problem described by Fisher's equation, are shown to compare the performance of the proposed schemes with that of other known ones. The numerical tests are in concordance with the theoretical results.

1. Introduction

The design of iterative processes for solving scalar equations, f(x) = 0, or nonlinear systems, F(x) = 0, with n unknowns and equations, is an interesting challenge of numerical analysis. Many problems in science and engineering require the solution of a nonlinear equation or system at some step of the process. However, in general, neither equations nor nonlinear systems have an analytical solution, so we must resort to approximating the solution by iterative techniques. Different tools, such as quadrature formulas, Adomian polynomials, divided difference operators and weight function procedures, have been used by many researchers for designing iterative schemes to solve nonlinear problems. For a good overview of these procedures and techniques, as well as of the different schemes developed in the last half century, we refer the reader to some standard texts [1,2,3,4,5].
In this paper, we want to design Jacobian-free iterative schemes for approximating the solution x̄ = (x̄₁, x̄₂, …, x̄ₙ)ᵀ of a nonlinear system F(x) = 0, where F : D ⊆ Rⁿ → Rⁿ is a nonlinear multivariate function defined on a convex set D. The best-known method for finding a solution x̄ ∈ D is Newton's procedure,
x^{(k+1)} = x^{(k)} − [F′(x^{(k)})]^{−1} F(x^{(k)}),  k = 0, 1, 2, …,
F′(x^{(k)}) being the Jacobian of F evaluated at the kth iteration.
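As a point of reference for the Jacobian-free schemes discussed below, Newton's procedure can be sketched in a few lines. This is a minimal illustration (not from the paper); the test system, tolerance and iteration cap are our own choices, and the inverse is replaced by a linear solve, as is standard practice:

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-12, max_iter=50):
    """Newton's method for F(x) = 0: x_{k+1} = x_k - [F'(x_k)]^{-1} F(x_k).
    The inverse Jacobian is never formed; a linear system is solved instead."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        x = x - np.linalg.solve(J(x), Fx)
    return x

# Illustrative system: x^2 + y^2 = 1, x = y, with root (sqrt(2)/2, sqrt(2)/2)
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
root = newton_system(F, J, [1.0, 0.5])
```

Each iteration requires one Jacobian evaluation and one linear solve; the schemes recalled next trade the Jacobian for divided differences.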
Based on Newton-type schemes and by using different techniques, several methods for approximating a solution of F(x) = 0 have been published recently. The main objective of all these processes is to speed up the convergence or to increase the computational efficiency. We recall some of them, which we will use in the last section for comparison purposes.
From a variant of Steffensen's method for systems introduced by Samanskii in [6], which replaces the Jacobian matrix F′(x) by the divided difference operator defined as
[x, y; F](x − y) = F(x) − F(y),
with x, y ∈ Rⁿ, Wang and Fang in [7] designed a fourth-order scheme, denoted by WF4, whose iterative expression is
r^{(k)} = x^{(k)} − [a^{(k)}, b^{(k)}; F]^{−1} F(x^{(k)}),
x^{(k+1)} = r^{(k)} − (3I − 2[a^{(k)}, b^{(k)}; F]^{−1}[x^{(k)}, r^{(k)}; F]) [a^{(k)}, b^{(k)}; F]^{−1} F(r^{(k)}),
where I is the identity matrix of size n × n, a^{(k)} = x^{(k)} + F(x^{(k)}) and b^{(k)} = x^{(k)} − F(x^{(k)}). Let us observe that this method uses two functional evaluations and two divided difference operators per iteration. Let us remark that Samanskii in [6] also defined a third-order method with the same divided difference operator in both steps.
Sharma and Arora in [8] added a new step to the previous method, obtaining a sixth-order scheme, denoted by SA6, whose expression is
r^{(k)} = x^{(k)} − [a^{(k)}, b^{(k)}; F]^{−1} F(x^{(k)}),
s^{(k)} = r^{(k)} − (3I − 2[a^{(k)}, b^{(k)}; F]^{−1}[x^{(k)}, r^{(k)}; F]) [a^{(k)}, b^{(k)}; F]^{−1} F(r^{(k)}),
x^{(k+1)} = s^{(k)} − (3I − 2[a^{(k)}, b^{(k)}; F]^{−1}[x^{(k)}, r^{(k)}; F]) [a^{(k)}, b^{(k)}; F]^{−1} F(s^{(k)}),
where, as before, a^{(k)} = x^{(k)} + F(x^{(k)}) and b^{(k)} = x^{(k)} − F(x^{(k)}). Compared with WF4, one additional functional evaluation per iteration is needed.
By replacing the third step of Equation (2), Narang et al. in [9] proposed the following seventh-order scheme, which uses two divided difference operators and three functional evaluations per iteration and is denoted by NM7,
r^{(k)} = x^{(k)} − Q^{−1} F(x^{(k)}),
s^{(k)} = r^{(k)} − Q^{−1} F(r^{(k)}),
x^{(k+1)} = s^{(k)} − ( (17/4)I − Q^{−1}P( (27/4)I − Q^{−1}P( (19/4)I − (5/4)Q^{−1}P ) ) ) Q^{−1} F(s^{(k)}),
where Q = [a^{(k)}, b^{(k)}; F], with again a^{(k)} = x^{(k)} + F(x^{(k)}), b^{(k)} = x^{(k)} − F(x^{(k)}), and P = [w^{(k)}, t^{(k)}; F], with w^{(k)} = s^{(k)} + F(x^{(k)}), t^{(k)} = s^{(k)} − F(x^{(k)}).
In a similar way, Wang et al. (see [10]) designed a scheme of order seven that we denote by S7, modifying only the third step of expression (3). Its iterative expression is
r^{(k)} = x^{(k)} − Q^{−1} F(x^{(k)}),
s^{(k)} = r^{(k)} − (3I − 2Q^{−1}[r^{(k)}, x^{(k)}; F]) Q^{−1} F(r^{(k)}),
x^{(k+1)} = s^{(k)} − ( (13/4)I − Q^{−1}[s^{(k)}, r^{(k)}; F]( (7/2)I − (5/4)Q^{−1}[s^{(k)}, r^{(k)}; F] ) ) Q^{−1} F(s^{(k)}),
where, as in the previous schemes, Q = [a^{(k)}, b^{(k)}; F], with a^{(k)} = x^{(k)} + F(x^{(k)}) and b^{(k)} = x^{(k)} − F(x^{(k)}).
Different indices can be used to compare the efficiency of iterative processes. For example, in [11], Ostrowski introduced the efficiency index EI = p^{1/d}, where p is the convergence order and d is the number of functional evaluations per iteration. Moreover, the matrix inversions appearing in the iterative expressions are in practice computed by solving linear systems. Therefore, the number of quotients/products, denoted by op, employed in each iteration also plays an important role. This is the reason why we presented in [12] the computational efficiency index, CEI, combining EI and the number of operations per iteration. This index is defined as CEI = p^{1/(d+op)}.
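Both indices can be computed directly from their definitions. The following small illustration is our own (the choice of n is arbitrary); the evaluation counts for CJST5 and WF4 are those derived later in Section 4:

```python
def EI(p, d):
    """Ostrowski efficiency index: p^(1/d)."""
    return p ** (1.0 / d)

def CEI(p, d, op):
    """Computational efficiency index: p^(1/(d + op))."""
    return p ** (1.0 / (d + op))

n = 10
ei_cjst5 = EI(5, n**2 + 2 * n)   # CJST5: order 5, n^2 + 2n functional evaluations
ei_wf4 = EI(4, 2 * n**2)         # WF4: order 4, 2n^2 functional evaluations
```

For this size, the higher order combined with fewer functional evaluations makes the index of CJST5 the larger of the two.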
The goal of this manuscript is to construct high-order Jacobian-free iterative schemes for solving nonlinear systems with low computational cost on large systems.
We recall, in Section 2, some basic concepts used in the rest of the manuscript. Section 3 is devoted to describing the proposed iterative methods for solving nonlinear systems and to analyzing their convergence. The efficiency indices of our methods are studied in Section 4, together with a comparative analysis with the schemes presented in the Introduction. Several numerical tests are shown in Section 5 to illustrate the performance of the new schemes. To this end, we use the nonlinear systems obtained by discretizing Fisher's equation by means of approximations of the derivatives, as well as some systems of academic interest. We finish the manuscript with some conclusions.

2. Basic Concepts

If a sequence {x^{(k)}}_{k≥0} in Rⁿ converges to x̄, it is said to have order of convergence p, with p ≥ 1, if there exist C > 0 (0 < C < 1 for p = 1) and k₀ such that
‖x^{(k+1)} − x̄‖ ≤ C ‖x^{(k)} − x̄‖^p,  ∀ k ≥ k₀,
or
‖e^{(k+1)}‖ ≤ C ‖e^{(k)}‖^p,  ∀ k ≥ k₀,
being e^{(k)} = x^{(k)} − x̄.
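The exponent p can be estimated numerically from consecutive errors. A quick sanity check with a synthetic quadratically convergent sequence (our own toy data, not from the paper):

```python
import math

# Synthetic errors obeying e_{k+1} = 0.5 * e_k^2 (quadratic convergence, C = 0.5)
e = [0.1]
for _ in range(5):
    e.append(0.5 * e[-1] ** 2)

# p is approximated by log(e_{k+1}/e_k) / log(e_k/e_{k-1})
p_est = math.log(e[-1] / e[-2]) / math.log(e[-2] / e[-3])
```

The estimate approaches p = 2 as the errors shrink, which is exactly the idea behind the COC and ACOC indicators used in Section 5.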
Although the following notation was presented by the authors in [12], we recall it for the sake of completeness. Let Φ : D ⊆ Rⁿ → Rⁿ be sufficiently Fréchet differentiable in D. The qth derivative of Φ at x ∈ Rⁿ, q ≥ 1, is the q-linear function Φ^{(q)}(x) : Rⁿ × ⋯ × Rⁿ → Rⁿ such that Φ^{(q)}(x)(y₁, …, y_q) ∈ Rⁿ. Let us observe that
1. Φ^{(q)}(x)(y₁, …, y_{q−1}, ·) ∈ L(Rⁿ), where L(Rⁿ) denotes the set of linear mappings from Rⁿ into Rⁿ.
2. Φ^{(q)}(x)(y_{σ(1)}, …, y_{σ(q)}) = Φ^{(q)}(x)(y₁, …, y_q), for all permutations σ of {1, 2, …, q}.
From the above properties, we can use the following notation (let us observe that y^q denotes (y, …, y), q times):
(a) Φ^{(q)}(x)(y₁, …, y_q) = Φ^{(q)}(x) y₁ ⋯ y_q,
(b) Φ^{(q)}(x) y^{q−1} Φ^{(p)}(x) y^p = Φ^{(q)}(x) Φ^{(p)}(x) y^{q+p−1}.
Let us consider x̄ + ε ∈ Rⁿ in a neighborhood of x̄. By applying Taylor series and assuming that Φ′(x̄) is nonsingular,
Φ(x̄ + ε) = Φ′(x̄) [ ε + Σ_{q=2}^{p−1} C_q ε^q ] + O(ε^p),
being C_q = (1/q!) [Φ′(x̄)]^{−1} Φ^{(q)}(x̄), q ≥ 2. Let us notice that C_q ε^q ∈ Rⁿ, since Φ^{(q)}(x̄) ∈ L(Rⁿ × ⋯ × Rⁿ, Rⁿ) and [Φ′(x̄)]^{−1} ∈ L(Rⁿ).
Moreover, we can express Φ′ as
Φ′(x̄ + ε) = Φ′(x̄) [ I + Σ_{q=2}^{p−1} q C_q ε^{q−1} ] + O(ε^{p−1}),
the identity matrix being denoted by I. Then, q C_q ε^{q−1} ∈ L(Rⁿ). From expression (6), we get
[Φ′(x̄ + ε)]^{−1} = [ I + Y₂ ε + Y₃ ε² + Y₄ ε³ + ⋯ ] [Φ′(x̄)]^{−1} + O(ε^{p−1}),
where
Y₂ = −2C₂,  Y₃ = 4C₂² − 3C₃,  Y₄ = −8C₂³ + 6C₂C₃ + 6C₃C₂ − 4C₄.
The equation
e^{(k+1)} = K e^{(k)p} + O(e^{(k)(p+1)}),
where K is a p-linear operator, K ∈ L(Rⁿ × ⋯ × Rⁿ, Rⁿ), is known as the error equation, p being the order of convergence. In addition, we denote e^{(k)p} = (e^{(k)}, e^{(k)}, …, e^{(k)}).
The divided difference operator of a function Φ (see, for example, [2]) is defined as a mapping [·, ·; Φ] : D × D ⊂ Rⁿ × Rⁿ → L(Rⁿ) satisfying
[x, y; Φ](x − y) = Φ(x) − Φ(y),  for all x, y ∈ D.
In addition, by using the Genocchi–Hermite formula [13] and Taylor series expansions around x, the divided difference operator is defined for all x, x + ε ∈ Rⁿ as follows:
[x + ε, x; Φ] = ∫₀¹ Φ′(x + tε) dt = Φ′(x) + (1/2) Φ″(x) ε + (1/6) Φ‴(x) ε² + (1/24) Φ^{(iv)}(x) ε³ + O(ε⁴).
Being a^{(k)} = x^{(k)} + Φ(x^{(k)}) and b^{(k)} = x^{(k)} − Φ(x^{(k)}), the divided difference operator at the points a^{(k)} and b^{(k)} is
[a^{(k)}, b^{(k)}; Φ] = Φ′(b^{(k)}) + (1/2) Φ″(b^{(k)})(a^{(k)} − b^{(k)}) + (1/6) Φ‴(b^{(k)})(a^{(k)} − b^{(k)})² + O((a^{(k)} − b^{(k)})³)
= Φ′(x̄) [ I + A₁ e^{(k)} + A₂ e^{(k)2} + A₃ e^{(k)3} ] + O(e^{(k)4}),
where
A₁ = 2C₂,  A₂ = C₃(3I + Φ′(x̄)²),  A₃ = 4C₄(I + Φ′(x̄)²) + C₃Φ′(x̄)²C₂ + C₃Φ′(x̄)C₂Φ′(x̄)
are obtained by replacing the Taylor expansions of the different terms appearing in development (9) and performing algebraic manipulations.
For computational purposes, the following expression (see [2]) is used:
[y, x; F]_{i,j} = ( F_i(y₁, …, y_{j−1}, y_j, x_{j+1}, …, x_n) − F_i(y₁, …, y_{j−1}, x_j, x_{j+1}, …, x_n) ) / (y_j − x_j),
where x = (x₁, …, x_{j−1}, x_j, x_{j+1}, …, x_n), y = (y₁, …, y_{j−1}, y_j, y_{j+1}, …, y_n) and 1 ≤ i, j ≤ n.
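A direct transcription of this componentwise formula is short (an illustrative sketch; the function and points below are our own choices). Note that the secant property [y, x; F](y − x) = F(y) − F(x) holds exactly, since the columns telescope:

```python
import numpy as np

def divided_difference(F, y, x):
    """First-order divided difference [y, x; F]: column j compares F at
    (y_1,...,y_j, x_{j+1},...,x_n) and (y_1,...,y_{j-1}, x_j,...,x_n)."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    n = x.size
    M = np.empty((n, n))
    for j in range(n):
        hi = np.concatenate([y[:j + 1], x[j + 1:]])  # (y_1,...,y_j, x_{j+1},...,x_n)
        lo = np.concatenate([y[:j], x[j:]])          # (y_1,...,y_{j-1}, x_j,...,x_n)
        M[:, j] = (F(hi) - F(lo)) / (y[j] - x[j])
    return M

# Illustrative function and points
F = lambda v: np.array([v[0]**2 + v[1], np.sin(v[1]) + v[0]])
y = np.array([1.2, 0.7])
x = np.array([1.0, 0.5])
M = divided_difference(F, y, x)
```

The cost visible here matches the operation counts of Section 4: the n mixed points require of the order of n² extra scalar function evaluations, plus n² quotients.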

3. Proposed Methods and Their Convergence

From a Samanskii-type method and by using the composition procedure, "freezing" the divided difference operator (we keep the same divided difference operator in all the steps of the method), we propose the following four-step iterative class with the aim of reaching order five:
y^{(k)} = x^{(k)} − μ [a^{(k)}, b^{(k)}; F]^{−1} F(x^{(k)}),
z^{(k)} = y^{(k)} − α [a^{(k)}, b^{(k)}; F]^{−1} F(y^{(k)}),
t^{(k)} = z^{(k)} − β [a^{(k)}, b^{(k)}; F]^{−1} F(z^{(k)}),
x^{(k+1)} = t^{(k)} − γ [a^{(k)}, b^{(k)}; F]^{−1} F(t^{(k)}),
where μ, α, β and γ are real parameters, a^{(k)} = x^{(k)} + F(x^{(k)}) and b^{(k)} = x^{(k)} − F(x^{(k)}).
It is possible to prove that, under some conditions on the parameters, order five can be reached. We have tried different combinations of these steps, aiming to preserve order five while reducing the computational cost. The best result we have been able to achieve is the following:
y^{(k)} = x^{(k)} − [a^{(k)}, b^{(k)}; F]^{−1} F(x^{(k)}),
z^{(k)} = y^{(k)} − α [a^{(k)}, b^{(k)}; F]^{−1} F(y^{(k)}),
t^{(k)} = z^{(k)} − β [a^{(k)}, b^{(k)}; F]^{−1} F(y^{(k)}),
x^{(k+1)} = z^{(k)} − γ [a^{(k)}, b^{(k)}; F]^{−1} F(t^{(k)}),
where α, β and γ are real parameters, a^{(k)} = x^{(k)} + F(x^{(k)}) and b^{(k)} = x^{(k)} − F(x^{(k)}). The convergence of class (11) is presented in the following result.
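Scheme (11) can be sketched compactly, fixing α = 2 − γ and β = (γ − 1)²/γ as required by the convergence analysis, with γ = 1/5 selecting the fifth-order element (CJST5). This is our own illustrative implementation, not the authors' code: the test system and stopping rule are assumptions, and each linear solve refactors the matrix, whereas an efficient version would reuse one LU factorization of the single divided difference:

```python
import numpy as np

def divided_difference(F, y, x):
    """Componentwise first-order divided difference [y, x; F]."""
    n = x.size
    M = np.empty((n, n))
    for j in range(n):
        hi = np.concatenate([y[:j + 1], x[j + 1:]])
        lo = np.concatenate([y[:j], x[j:]])
        M[:, j] = (F(hi) - F(lo)) / (y[j] - x[j])
    return M

def cjst5(F, x0, gamma=0.2, tol=1e-10, max_iter=50):
    """Scheme (11) with alpha = 2 - gamma, beta = (gamma - 1)^2 / gamma.
    A single divided difference operator [a, b; F] per iteration is used
    in all three linear solves."""
    alpha = 2.0 - gamma
    beta = (gamma - 1.0) ** 2 / gamma
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        D = divided_difference(F, x + Fx, x - Fx)   # [a^(k), b^(k); F]
        y = x - np.linalg.solve(D, Fx)
        step_y = np.linalg.solve(D, F(y))
        z = y - alpha * step_y
        t = z - beta * step_y                       # both steps reuse F(y)
        x = z - gamma * np.linalg.solve(D, F(t))
    return x

# Illustrative decoupled system with solution (2, 3)
F = lambda v: np.array([v[0]**2 - 4.0, v[1]**2 - 9.0])
root = cjst5(F, np.array([2.5, 3.5]))
```

Observe that F is evaluated three times per iteration (at x, y and t), matching the functional evaluation count used in Section 4.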
Theorem 1.
Let F : D ⊆ Rⁿ → Rⁿ be a sufficiently differentiable operator at each point of an open neighborhood D of the solution x̄ of the system F(x) = 0. Let us suppose that F′(x) is continuous and nonsingular at x̄ and that the initial estimate x^{(0)} is close enough to x̄. Then, the sequence {x^{(k)}}_{k≥0} calculated from expression (11) converges to x̄ with order four if α = 2 − γ and β = (γ − 1)²/γ, for any γ ∈ R, γ ≠ 0, the error equation being
e^{(k+1)} = ((5γ − 1)/γ) C₂³ e^{(k)4} + O(e^{(k)5}).
In addition, if γ = 1/5, the order of convergence is five and the error equation is
e^{(k+1)} = (14C₂⁴ − 2C₂C₃C₂ + 6C₃C₂² − 2C₂C₃F′(x̄)²C₂ + 2C₃F′(x̄)²C₂²) e^{(k)5} + O(e^{(k)6}),
where C_j = (1/j!) [F′(x̄)]^{−1} F^{(j)}(x̄), j = 2, 3, …
Proof. 
By using the Taylor expansions of F(x^{(k)}) and its derivatives around x̄:
F(x^{(k)}) = F′(x̄)[e^{(k)} + C₂e^{(k)2} + C₃e^{(k)3} + C₄e^{(k)4} + C₅e^{(k)5}] + O(e^{(k)6}),
F′(x^{(k)}) = F′(x̄)[I + 2C₂e^{(k)} + 3C₃e^{(k)2} + 4C₄e^{(k)3} + 5C₅e^{(k)4}] + O(e^{(k)5}),
F″(x^{(k)}) = F′(x̄)[2C₂ + 6C₃e^{(k)} + 12C₄e^{(k)2} + 20C₅e^{(k)3}] + O(e^{(k)4}),
F‴(x^{(k)}) = F′(x̄)[6C₃ + 24C₄e^{(k)} + 60C₅e^{(k)2}] + O(e^{(k)3}),
F^{(iv)}(x^{(k)}) = F′(x̄)[24C₄ + 120C₅e^{(k)}] + O(e^{(k)2}),
F^{(v)}(x^{(k)}) = F′(x̄)[120C₅] + O(e^{(k)}).
From the above expressions, by substituting the values a^{(k)} = x^{(k)} + F(x^{(k)}), b^{(k)} = x^{(k)} − F(x^{(k)}) in the first-order divided difference operator [a^{(k)}, b^{(k)}; F], we obtain:
[a^{(k)}, b^{(k)}; F] = F′(x̄)[I + 2C₂e^{(k)} + (3C₃ + C₃F′(x̄)²)e^{(k)2} + (4C₄ + 4C₄F′(x̄)² + C₃F′(x̄)²C₂ + C₃F′(x̄)C₂F′(x̄))e^{(k)3} + (5C₅ + C₅F′(x̄)⁴ + 10C₅F′(x̄)² + 4C₄F′(x̄)²C₂ + 4C₄F′(x̄)C₂F′(x̄) + C₃F′(x̄)²C₃ + C₃F′(x̄)C₂F′(x̄)C₂ + C₃F′(x̄)C₃F′(x̄))e^{(k)4}] + O(e^{(k)5}).
From the above expression, we have
[a^{(k)}, b^{(k)}; F]^{−1} = [I + X₂e^{(k)} + X₃e^{(k)2} + X₄e^{(k)3}] [F′(x̄)]^{−1} + O(e^{(k)4}),
where
X₂ = −2C₂,
X₃ = −3C₃ + 4C₂² − C₃F′(x̄)²,
X₄ = −4C₄ + 6C₂C₃ + 6C₃C₂ − 8C₂³ + C₃F′(x̄)²C₂ − C₃F′(x̄)C₂F′(x̄) + 2C₂C₃F′(x̄)² − 4C₄F′(x̄)².
Then,
[a^{(k)}, b^{(k)}; F]^{−1} F(x^{(k)}) = e^{(k)} − C₂e^{(k)2} + (−2C₃ + 2C₂² − C₃F′(x̄)²)e^{(k)3} + (−3C₄ + 4C₂C₃ + 3C₃C₂ − 4C₂³ − C₃F′(x̄)C₂F′(x̄) + 2C₂C₃F′(x̄)² − 4C₄F′(x̄)²)e^{(k)4} + O(e^{(k)5}).
Thus,
y^{(k)} − x̄ = C₂e^{(k)2} + (2C₃ − 2C₂² + C₃F′(x̄)²)e^{(k)3} + (3C₄ − 4C₂C₃ − 3C₃C₂ + 4C₂³ + C₃F′(x̄)C₂F′(x̄) − 2C₂C₃F′(x̄)² + 4C₄F′(x̄)²)e^{(k)4} + O(e^{(k)5}),
(y^{(k)} − x̄)² = C₂²e^{(k)4} + O(e^{(k)5}),
and
F(y^{(k)}) = F′(x̄)[(y^{(k)} − x̄) + C₂(y^{(k)} − x̄)²] + O((y^{(k)} − x̄)³)
= F′(x̄)[C₂e^{(k)2} + (2C₃ − 2C₂² + C₃F′(x̄)²)e^{(k)3} + (3C₄ − 4C₂C₃ − 3C₃C₂ + 5C₂³ + C₃F′(x̄)C₂F′(x̄) − 2C₂C₃F′(x̄)² + 4C₄F′(x̄)²)e^{(k)4}] + O(e^{(k)5}).
From the values of z^{(k)} and t^{(k)} in expression (11), we have
t^{(k)} = y^{(k)} − (α + β) [a^{(k)}, b^{(k)}; F]^{−1} F(y^{(k)}).
Then,
[a^{(k)}, b^{(k)}; F]^{−1} F(y^{(k)}) = C₂e^{(k)2} + (2C₃ − 4C₂² + C₃F′(x̄)²)e^{(k)3} + (3C₄ − 8C₂C₃ − 6C₃C₂ + 13C₂³ + C₃F′(x̄)C₂F′(x̄) − 4C₂C₃F′(x̄)² + 4C₄F′(x̄)² − C₃F′(x̄)²C₂)e^{(k)4} + O(e^{(k)5}).
Similarly, we obtain
t^{(k)} − x̄ = (1 − (α + β))C₂e^{(k)2} + [(1 − (α + β))(2C₃ − 2C₂² + C₃F′(x̄)²) + 2(α + β)C₂²]e^{(k)3} + [(1 − (α + β))(3C₄ − 4C₂C₃ − 3C₃C₂ + 4C₂³ + C₃F′(x̄)C₂F′(x̄) − 2C₂C₃F′(x̄)² + 4C₄F′(x̄)²) − (α + β)(−4C₂C₃ − 3C₃C₂ + 9C₂³ − 2C₂C₃F′(x̄)² − C₃F′(x̄)²C₂)]e^{(k)4} + O(e^{(k)5}),
(t^{(k)} − x̄)² = (1 − (α + β))²C₂²e^{(k)4} + O(e^{(k)5}),
and
F(t^{(k)}) = F′(x̄)[(1 − (α + β))C₂e^{(k)2} + [(1 − (α + β))(2C₃ − 2C₂² + C₃F′(x̄)²) + 2(α + β)C₂²]e^{(k)3} + [(1 − (α + β))(3C₄ − 4C₂C₃ − 3C₃C₂ + 4C₂³ + C₃F′(x̄)C₂F′(x̄) − 2C₂C₃F′(x̄)² + 4C₄F′(x̄)²) − (α + β)(−4C₂C₃ − 3C₃C₂ + 9C₂³ − 2C₂C₃F′(x̄)² − C₃F′(x̄)²C₂) + (1 − (α + β))²C₂³]e^{(k)4}] + O(e^{(k)5}).
Thus,
[a^{(k)}, b^{(k)}; F]^{−1} F(t^{(k)}) = (1 − (α + β))C₂e^{(k)2} + [(1 − (α + β))(2C₃ − 4C₂² + C₃F′(x̄)²) + 2(α + β)C₂²]e^{(k)3} + [(1 − (α + β))(3C₄ − 8C₂C₃ − 6C₃C₂ + 12C₂³ + C₃F′(x̄)C₂F′(x̄) − 4C₂C₃F′(x̄)² − C₃F′(x̄)²C₂ + 4C₄F′(x̄)²) + (1 − (α + β))²C₂³ − (α + β)(−4C₂C₃ − 3C₃C₂ + 13C₂³ − C₃F′(x̄)²C₂ − 2C₂C₃F′(x̄)²)]e^{(k)4} + O(e^{(k)5}).
Therefore, we obtain
e^{(k+1)} = e^{(k)} − [a^{(k)}, b^{(k)}; F]^{−1}F(x^{(k)}) − α[a^{(k)}, b^{(k)}; F]^{−1}F(y^{(k)}) − γ[a^{(k)}, b^{(k)}; F]^{−1}F(t^{(k)})
= [1 − α − γ(1 − (α + β))]C₂e^{(k)2} + {[1 − α − γ(1 − (α + β))](2C₃ − 2C₂² + C₃F′(x̄)²) + 2[α + γ(1 − 2(α + β))]C₂²}e^{(k)3} + {[1 − α − γ(1 − (α + β))](3C₄ − 4C₂C₃ − 3C₃C₂ + 4C₂³ + C₃F′(x̄)C₂F′(x̄) − 2C₂C₃F′(x̄)² + 4C₄F′(x̄)²) − [α + γ(1 − 2(α + β))](−4C₂C₃ − 3C₃C₂ + 8C₂³ − 2C₂C₃F′(x̄)² − C₃F′(x̄)²C₂) − [α − 5γ(α + β) + γ(1 − (α + β))²]C₂³}e^{(k)4} + O(e^{(k)5}).
Thus, by requiring that the coefficients of e^{(k)2} and e^{(k)3} vanish, we get α = 2 − γ and β = (γ − 1)²/γ for any γ ∈ R, γ ≠ 0, four being the order, with
e^{(k+1)} = ((5γ − 1)/γ) C₂³ e^{(k)4} + O(e^{(k)5}).
By adding to the above system the requirement that the coefficient of e^{(k)4} vanish, we get γ = 1/5, five being the order of convergence with error equation, in this case:
e^{(k+1)} = (14C₂⁴ − 2C₂C₃C₂ + 6C₃C₂² − 2C₂C₃F′(x̄)²C₂ + 2C₃F′(x̄)²C₂²) e^{(k)5} + O(e^{(k)6}).

4. Efficiency Indices

As we have mentioned in the Introduction, we use the indices EI = p^{1/d} and CEI to compare the different iterative methods.
To evaluate the function F, n scalar functions are computed, and n(n − 1) more for the first-order divided difference [·, ·; F]. In addition, to apply an inverse linear operator, an n × n linear system must be solved; this requires (1/3)n³ + n² − (1/3)n quotients/products for the LU decomposition and the solution of the corresponding triangular linear systems. Moreover, for solving m linear systems with the same coefficient matrix, we need (1/3)n³ + mn² − (1/3)n products/quotients. In addition, we need n² products for each matrix–vector multiplication and n² quotients for evaluating a divided difference operator.
According to these considerations, we calculate the efficiency indices EI of the methods CJST5, NM7, S7, SA6 and WF4. In the case of CJST5, in each iteration we evaluate F three times and [·, ·; F] once, so n² + 2n functional evaluations are needed. Therefore, EI_CJST5 = 5^{1/(n² + 2n)}. The indices obtained for the other methods are calculated in the same way and shown in Table 1.
In Table 2, we present the indices CEI of the schemes NM7, SA6, S7, WF4 and CJST5. In it, the number of functional evaluations is denoted by NFE, the number of linear systems with the same [·, ·; F] as coefficient matrix is NLS1, and M×V represents the number of matrix–vector products. Then, in the case of CJST5, n² + 2n functional evaluations are needed per iteration, since we evaluate the function F three times and one first-order divided difference [a^{(k)}, b^{(k)}; F]. In addition, we must solve three linear systems with [a^{(k)}, b^{(k)}; F] as coefficient matrix (that is, (1/3)n³ + 3n² − (1/3)n products/quotients). Thus, the value of CEI for CJST5 is
CEI_CJST5 = 5^{1/((1/3)n³ + 5n² + (5/3)n)}.
We obtain the CEI indices of the other methods analogously. In Figure 1, we show the computational efficiency index of the different methods for system sizes from 5 to 80. The best index corresponds to our proposed scheme.

5. Numerical Examples

We begin this section by checking the performance of the new method on the system resulting from the discretization of Fisher's partial differential equation. Thereafter, we compare its behavior with that of other known methods on some academic problems. For the computations, we have used Matlab R2015a (MathWorks, Natick, MA, USA) with variable precision arithmetic, with 1000 digits of mantissa. Regarding the computer, the processor is an Intel(R) Core(TM) i7-7700 CPU @ 3.60 GHz, with four cores and 16 GB of RAM.
We use an estimate of the theoretical order of convergence p, called the computational order of convergence (COC), introduced by Jay [14], with the following expression:
p ≈ COC = ln( ‖F(x^{(k+1)})‖₂ / ‖F(x^{(k)})‖₂ ) / ln( ‖F(x^{(k)})‖₂ / ‖F(x^{(k−1)})‖₂ ),  k = 1, 2, …,
and the approximated computational order of convergence (ACOC), defined by Cordero and Torregrosa in [15]:
p ≈ ACOC = ln( ‖x^{(k+1)} − x^{(k)}‖₂ / ‖x^{(k)} − x^{(k−1)}‖₂ ) / ln( ‖x^{(k)} − x^{(k−1)}‖₂ / ‖x^{(k−1)} − x^{(k−2)}‖₂ ).
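Both estimators are straightforward to evaluate from the stored residual and iterate histories. An illustrative helper (our own; the synthetic data in the checks idealize a quadratically convergent run):

```python
import math

def coc(residuals):
    """COC from the last three residual norms
    ||F(x^(k-1))||_2, ||F(x^(k))||_2, ||F(x^(k+1))||_2."""
    r0, r1, r2 = residuals[-3:]
    return math.log(r2 / r1) / math.log(r1 / r0)

def acoc(diffs):
    """ACOC from the last three norms of consecutive iterate differences."""
    d0, d1, d2 = diffs[-3:]
    return math.log(d2 / d1) / math.log(d1 / d0)

# Residuals obeying r_{k+1} = r_k^2, i.e., ideal quadratic convergence
p1 = coc([1e-1, 1e-2, 1e-4, 1e-8])
p2 = acoc([1e-1, 1e-2, 1e-4])
```

COC only needs residual norms, while ACOC needs three consecutive iterate differences (hence four iterates), which is why it can be less reliable when very few iterations are computed.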
Example 1.
Fisher’s equation,
u t = D u x x + r u 1 u p , x [ a , b ] t 0 ,
was proposed in [16] by Fisher to model the diffusion process in population dynamics. In it, D > 0 is the diffusion constant, r is the level of growth of the species and p is the carrying capacity. Lately, this formulation has proven to be fruitful for many other problems as wave genetics, economy or propagation.
Now, we study a particular case of this equation, when r = p = 1 and the spatial interval is [ 0 , 1 ] , u ( x , 0 ) = s e c h 2 ( π x ) and null boundary conditions.
We transform Example 1 in a set of nonlinear systems by applying an implicit method of finite differences, providing the estimated solution in the instant t k from the estimated one in t k 1 . The spacial step h = 1 / n x is selected and the temporal step is k = T m a x / n t , n x and n t being the quantity of subintervals in x and t, respectively, and T m a x is the final instant. Therefore, a grid of domain [ 0 , 1 ] × [ 0 , T m a x ] with points ( x i , t j ) , is selected:
x_i = 0 + i h, i = 0, 1, …, n_x,   t_j = 0 + j k, j = 0, 1, …, n_t.
Our purpose is to estimate the solution of problem (12) at these points by solving as many nonlinear systems as there are temporal nodes t_j. For this, we use the following finite differences of order O(k + h²):
u_t(x, t) ≈ (u(x, t) − u(x, t − k))/k,   u_xx(x, t) ≈ (u(x + h, t) − 2u(x, t) + u(x − h, t))/h².
By denoting the approximation of the solution at (x_i, t_j) as u_{i,j}, and by replacing it in Example 1, we get the system
k u_{i+1,j} + (k h² − 2k − h²) u_{i,j} − k h² u_{i,j}² + k u_{i−1,j} = −h² u_{i,j−1},
for i = 1, 2, …, n_x − 1 and j = 1, 2, …, n_t. The unknowns of this system are u_{1,j}, u_{2,j}, …, u_{n_x−1,j}, that is, the approximations of the solution at each spatial node at the fixed instant t_j. Let us remark that, for solving this system, the solution at t_{j−1} must be known.
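The nonlinear system for one time level can be assembled directly from this expression. The sketch below is our own illustration (the function name and the quick checks are assumptions, not from the paper); it returns the residual whose zero is the vector of interior unknowns, with homogeneous Dirichlet boundary values:

```python
import numpy as np

def fisher_residual(u, u_prev, h, k):
    """Residual of k*u_{i+1,j} + (k h^2 - 2k - h^2) u_{i,j} - k h^2 u_{i,j}^2
       + k*u_{i-1,j} + h^2 u_{i,j-1} = 0 at the interior nodes i = 1..n_x-1.
    u holds the unknowns at level j, u_prev the known values at level j-1."""
    up = np.concatenate([[0.0], u, [0.0]])  # boundary values u_{0,j} = u_{n_x,j} = 0
    return (k * up[2:] + (k * h**2 - 2.0 * k - h**2) * u
            - k * h**2 * u**2 + k * up[:-2] + h**2 * u_prev)

# Structural check: the zero state is stationary (residual vanishes)
r = fisher_residual(np.zeros(9), np.zeros(9), h=0.1, k=0.02)
```

Passing this residual function to a Jacobian-free solver such as the proposed scheme, once per temporal node, reproduces the time-stepping procedure described above.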
Let us observe (Table 3) that the results improve when the temporal step is smaller. In this case, the COC is not a good estimate of the theoretical order. In Figure 2, we show the approximated solution of the problem for T_max = 10, taking n_t = 50, n_x = 10 and using method CJST5.
In the rest of the examples, we compare the performance of the proposed method with the schemes presented in the Introduction, as well as with the Newton-type method obtained by replacing the Jacobian matrix with the divided difference operator, that is, Samanskii's scheme (see [6]).
Example 2.
Let us define the nonlinear system
cos(x₂) − sin(x₁) = 0,
x₃^{x₁} − 1/x₂ = 0,
e^{x₁} − x₃² = 0.
We use, in this example, the starting estimate x^{(0)} = (1.25, 1.25, 1.25)ᵀ, the solution being x̄ ≈ (0.9096, 0.6612, 1.576)ᵀ. Table 4 shows the residuals ‖x^{(k)} − x^{(k−1)}‖ and ‖F(x^{(k)})‖ for k = 1, 2, 3, as well as the ACOC and COC. We observe that the COC index is a more reliable estimate than the corresponding ACOC for all the methods. In addition, the value of ‖x^{(3)} − x^{(2)}‖ for CJST5 is better than or similar to those of the S7 and NM7 methods, both of order seven.
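As a quick check (our own snippet, not part of the original experiments), evaluating the system at the quoted four-digit solution gives a small residual, consistent with the rounding of x̄:

```python
import numpy as np

def F(v):
    """The system of Example 2."""
    x1, x2, x3 = v
    return np.array([np.cos(x2) - np.sin(x1),
                     x3 ** x1 - 1.0 / x2,
                     np.exp(x1) - x3 ** 2])

residual = np.linalg.norm(F(np.array([0.9096, 0.6612, 1.576])))
```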
Example 3.
We now consider
Σ_{j=1}^{n} x_j x_i − e^{x_i} x_i = 0,  i = 1, 2, …, n.
The numerical results are displayed in Table 5. The initial estimate is x^{(0)} = (0.25, 0.25, …, 0.25)ᵀ and the size of the system is n = 10, the solution being x̄ = (0, 0, …, 0)ᵀ. We show the same information as in the previous example.
Example 4.
The third example is given by the system:
x_i + 1 − 2 log(1 + Σ_{j=1}^{n} x_j − x_i) = 0,  i = 1, 2, …, n.
Its solution is x̄ ≈ (9.376, 9.376, …, 9.376)ᵀ. By using the starting guess x^{(0)} = (1, 1, …, 1)ᵀ with n = 10, we obtain the results appearing in Table 6.
The different methods give us the expected results, according to their order of convergence.
Example 5.
Finally, the last example that we consider is:
arctan(x_i) + 1 − 2( Σ_{j=1}^{n} x_j² − x_i² ) = 0,  i = 1, 2, …, n.
Its solution is x̄ ≈ (0.1758, 0.1758, …, 0.1758)ᵀ. By using the initial estimate x^{(0)} = (0.5, 0.5, …, 0.5)ᵀ with n = 20, we obtain the numerical results displayed in Table 7.
It can be observed in Table 5, Table 6 and Table 7 that, for the proposed academic problems, the introduced method (CJST5) shows good performance, comparable with that of higher-order methods. Of course, the worst results are those obtained by Samanskii's method, but it has been included because it is the Jacobian-free version of Newton's scheme and also the first step of our proposed scheme. Let us also remark that, when only three iterations are computed, the COC index gives more reliable information than the ACOC in all the examples.

6. Conclusions

In this paper, we design a family of iterative methods for solving nonlinear systems with fourth-order convergence. This family does not use Jacobian matrices, and one of its elements has order five. The comparison between the proposed method and other known ones in terms of the efficiency index and the computational efficiency index shows that our method is more efficient than the others. In addition, its error bounds are smaller for the same number of iterations in some cases. Thus, our proposal is competitive, mostly for large systems.

Author Contributions

The individual contributions of the authors are as follows: conceptualization, J.R.T.; writing—original draft preparation, C.J. and E.S.; validation, A.C. and J.R.T.; formal analysis, A.C.; numerical experiments, C.J. and E.S.

Funding

This research has been supported partially by Spanish Ministerio de Ciencia, Innovación y Universidades PGC2018-095896-B-C22, PGC2018-094889-B-I00, TEC2016-79884-C2-2-R and also by Spanish grant PROMETEO/2016/089 from Generalitat Valenciana.

Acknowledgments

The authors would like to thank the anonymous reviewers for their useful comments and suggestions that have improved the final version of this manuscript.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

References

  1. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  2. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  3. Kelley, C.T. Iterative Methods for Linear and Nonlinear Equations; SIAM: Philadelphia, PA, USA, 1995. [Google Scholar]
  4. Petković, M.S.; Neta, B.; Petković, L.D.; Dz̆unić, J. Multipoint Methods for Solving Nonlinear Equations; Academic Press: New York, NY, USA, 2013. [Google Scholar]
  5. Amat, S.; Busquier, S. Advances in Iterative Methods for Nonlinear Equations; SEMA SIMAI Springer Series; Springer International Publishing: Cham, Switzerland, 2016; Volume 10. [Google Scholar]
  6. Samanskii, V. On a modification of the Newton method. Ukrain. Mat. 1967, 19, 133–138. [Google Scholar]
  7. Wang, X.; Fang, X. Two Efficient Derivative-Free Iterative Methods for Solving Nonlinear Systems. Algorithms 2016, 9, 14. [Google Scholar] [CrossRef]
  8. Sharma, J.R.; Arora, H. Efficient derivative-free numerical methods for solving systems of nonlinear equations. Comp. Appl. Math. 2016, 35, 269–284. [Google Scholar] [CrossRef]
  9. Narang, M.; Bathia, S.; Kanwar, V. New efficient derivative free family of seventh-order methods for solving systems of nonlinear equations. Numer. Algorithms 2017, 76, 283–307. [Google Scholar] [CrossRef]
  10. Wang, X.; Zhang, T.; Qian, W.; Teng, M. Seventh-order derivative-free iterative method for solving nonlinear systems. Numer. Algorithms 2015, 70, 545–558. [Google Scholar] [CrossRef]
  11. Ostrowski, A.M. Solution of Equations and Systems of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  12. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton-Jarratt’s composition. Numer. Algorithms 2010, 55, 87–99. [Google Scholar] [CrossRef]
  13. Hermite, C. Sur la formule d'interpolation de Lagrange. J. Reine Angew. Math. 1878, 84, 70–79. [Google Scholar] [CrossRef]
  14. Jay, L.O. A note of Q-order of convergence. BIT Numer. Math. 2001, 41, 422–429. [Google Scholar] [CrossRef]
  15. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
  16. Fisher, R.A. The wave of advance of advantageous genes. Ann. Eugen. 1937, 7, 353–369. [Google Scholar] [CrossRef]
Figure 1. CEI for several sizes of the system.
Figure 2. Approximated solution of Example 1.
Table 1. Efficiency indices for different methods.

Method | Order | NFE | EI
NM7 | 7 | 2n² + n | 7^(1/(2n² + n))
S7 | 7 | 3n² | 7^(1/(3n²))
SA6 | 6 | 2n² + n | 6^(1/(2n² + n))
WF4 | 4 | 2n² | 4^(1/(2n²))
CJST5 | 5 | n² + 2n | 5^(1/(n² + 2n))
Table 2. Computational cost of the procedures.

Method | Order | NFE | NLS1 | M×V | CEI
NM7 | 7 | 2n² + 3n | 4 | 3 | 7^(1/((1/3)n³ + 11n² + (2/3)n))
S7 | 7 | 3n² | 6 | 3 | 7^(1/((1/3)n³ + 14n² − (1/3)n))
SA6 | 6 | 2n² + n | 4 | 2 | 6^(1/((1/3)n³ + 11n² + (2/3)n))
WF4 | 4 | 2n² | 3 | 1 | 4^(1/((1/3)n³ + 8n² − (1/3)n))
CJST5 | 5 | n² + 2n | 3 | 0 | 5^(1/((1/3)n³ + 5n² + (5/3)n))
Table 3. Fisher results by CJST5 and different T_max.

T_max | n_x | n_t | ‖F(x^{(1)})‖ | ‖F(x^{(2)})‖ | ‖F(x^{(3)})‖ | COC
0.1 | 10 | 20 | 8.033×10⁻⁹ | 2.356×10⁻³⁵ | 3.243×10⁻⁶⁸ | 1.2385
0.1 | 10 | 200 | 8.679×10⁻¹³ | 3.203×10⁻⁴⁴ | 7.623×10⁻⁷⁷ | 1.0379
1 | 10 | 20 | 4.158×10⁻⁵ | 3.4×10⁻²⁵ | 1.679×10⁻⁵⁶ | 1.5585
1 | 10 | 200 | 8.033×10⁻⁹ | 2.356×10⁻³⁵ | 3.243×10⁻⁶⁸ | 1.2385
10 | 10 | 20 | nc | — | — | —
10 | 10 | 50 | 0.01945 | 2.757×10⁻¹¹ | 1.953×10⁻³⁸ | 3.0683
Table 4. Numerical results for Example 2.

 | Samanskii | CJST5 | WF4 | SA6 | S7 | NM7
‖x^{(1)} − x^{(0)}‖ | 1.415 | 0.8848 | 0.8539 | 0.8934 | 0.9148 | 0.9355
‖x^{(2)} − x^{(1)}‖ | 0.5427 | 0.247 | 0.2039 | 0.2689 | 0.2942 | 0.3098
‖x^{(3)} − x^{(2)}‖ | 0.1738 | 0.01159 | 0.006249 | 0.005301 | 0.01069 | 0.02667
ACOC | 1.1875 | 2.3976 | 2.433 | 3.2705 | 2.9221 | 4.25
‖F(x^{(1)})‖ | 0.1954 | 0.1282 | 0.1038 | 0.1385 | 0.1817 | 0.2098
‖F(x^{(2)})‖ | 0.02369 | 0.009956 | 0.004815 | 0.003584 | 0.006462 | 0.01669
‖F(x^{(3)})‖ | 0.00269 | 2.805×10⁻⁸ | 7.663×10⁻⁸ | 4.009×10⁻¹¹ | 8.074×10⁻¹² | 1.333×10⁻⁹
COC | 1.0313 | 5.0015 | 3.5983 | 5.0104 | 6.1445 | 6.4561
Table 5. Numerical results for Example 3.

 | Samanskii | CJST5 | WF4 | SA6 | S7 | NM7
‖x^{(1)} − x^{(0)}‖ | 1.036 | 0.8249 | 0.9116 | 0.8499 | 0.7847 | 0.7897
‖x^{(2)} − x^{(1)}‖ | 0.2552 | 0.03432 | 0.121 | 0.05932 | 0.00583 | 0.0008995
‖x^{(3)} − x^{(2)}‖ | 0.009667 | 1.487×10⁻¹¹ | 9.264×10⁻⁶ | 5.367×10⁻¹⁰ | 2.529×10⁻²¹ | 1.048×10⁻²⁸
ACOC | 2.3361 | 6.7807 | 4.6937 | 6.9572 | 8.6247 | 4.25
‖F(x^{(1)})‖ | 1.944 | 0.2742 | 0.9634 | 0.4735 | 0.04665 | 0.007196
‖F(x^{(2)})‖ | 0.0773 | 1.19×10⁻¹⁰ | 7.411×10⁻⁵ | 4.293×10⁻⁹ | 2.023×10⁻²⁰ | 8.381×10⁻²⁸
‖F(x^{(3)})‖ | 2.65×10⁻⁵ | 2.839×10⁻⁵⁹ | 9.644×10⁻²² | 1.3×10⁻⁵⁷ | 9.093×10⁻¹⁴⁹ | 2.621×10⁻²⁰²
COC | 2.4743 | 5.1932 | 4.1045 | 6.0328 | 6.9895 | 6.9987
Table 6. Numerical results for Example 4.

 | Samanskii | CJST5 | WF4 | SA6 | S7 | NM7
‖x^{(1)} − x^{(0)}‖ | 6.013 | 40.68 | 67.88 | 73.31 | 34.3 | 39.23
‖x^{(2)} − x^{(1)}‖ | 12.15 | 13.83 | 36.07 | 56.74 | 17.15 | 7.369
‖x^{(3)} − x^{(2)}‖ | 10.11 | 0.0002872 | 0.06643 | 0.02688 | 0.0001485 | 5.422×10⁻⁸
ACOC | - | 9.9941 | 9.9587 | 29.874 | 16.819 | 4.25
‖F(x^{(1)})‖ | 10.84 | 11.03 | 30.43 | 48.74 | 13.42 | 5.842
‖F(x^{(2)})‖ | 6.084 | 0.0002263 | 0.05234 | 0.02117 | 0.000117 | 4.272×10⁻⁸
‖F(x^{(3)})‖ | 1.162 | 9.286×10⁻²⁸ | 6.508×10⁻¹² | 1.807×10⁻²⁰ | 3.105×10⁻⁴⁰ | 7.552×10⁻⁶⁵
COC | 2.8651 | 4.9888 | 3.583 | 5.3744 | 7.0313 | 6.9755
Table 7. Numerical results for Example 5.

 | Samanskii | CJST5 | WF4 | SA6 | S7 | NM7
‖x^{(1)} − x^{(0)}‖ | 0.9503 | 1.323 | 1.272 | 1.368 | 1.394 | 1.393
‖x^{(2)} − x^{(1)}‖ | 0.3912 | 0.1266 | 0.177 | 0.0821 | 0.05639 | 0.05732
‖x^{(3)} − x^{(2)}‖ | 0.1013 | 4.988×10⁻⁵ | 0.0007407 | 6.903×10⁻⁷ | 7.214×10⁻⁹ | 6.655×10⁻⁹
ACOC | 1.5229 | 3.3404 | 2.776 | 4.1543 | 4.9485 | 4.25
‖F(x^{(1)})‖ | 8.324 | 1.706 | 2.471 | 1.075 | 0.7257 | 0.7381
‖F(x^{(2)})‖ | 1.445 | 0.0006179 | 0.009181 | 8.552×10⁻⁶ | 8.937×10⁻⁸ | 8.245×10⁻⁸
‖F(x^{(3)})‖ | 0.0902 | 1.206×10⁻²⁰ | 5.635×10⁻¹² | 5.437×10⁻³⁶ | 8.115×10⁻⁵⁶ | 3.521×10⁻⁵⁶
COC | 1.5839 | 4.8559 | 3.791 | 5.9219 | 6.953 | 6.9577
