Article

Higher-Order Iteration Schemes for Solving Nonlinear Systems of Equations

by Hessah Faihan Alqahtani 1, Ramandeep Behl 2,* and Munish Kansal 3
1 Department of Mathematics, King Abdulaziz University, Campus Rabigh 21911, Saudi Arabia
2 Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
3 School of Mathematics, Thapar Institute of Engineering and Technology, Patiala 147004, India
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(10), 937; https://doi.org/10.3390/math7100937
Submission received: 23 August 2019 / Revised: 18 September 2019 / Accepted: 24 September 2019 / Published: 10 October 2019
(This article belongs to the Special Issue Iterative Methods for Solving Nonlinear Equations and Systems)

Abstract

We present a three-step family of iterative methods to solve systems of nonlinear equations. This family is a generalization of the well-known fourth-order King's family to the multidimensional case. The convergence analysis of the methods is provided under mild conditions. The analytical discussion of the work is upheld by performing numerical experiments on some application-oriented problems. Finally, numerical results demonstrate the validity and reliability of the suggested methods.

1. Introduction

Systems of nonlinear equations (SNEs) arise in numerous phenomena across many areas of science and engineering. Given a nonlinear system $F(X) = 0$, where $F$ is a nonlinear map from $\mathbb{R}^k$ to $\mathbb{R}^k$, we are interested in computing a vector $X^* = (x_1^*, x_2^*, \ldots, x_k^*)^T$ such that $F(X^*) = 0$, where $F(X) = (f_1(X), f_2(X), \ldots, f_k(X))^T$ is a Fréchet-differentiable function and $X = (x_1, x_2, \ldots, x_k)^T \in \mathbb{R}^k$. The classical Newton's method [1] is the most famous procedure for solving SNEs. It is given by
$X^{(k+1)} = X^{(k)} - \left[F'(X^{(k)})\right]^{-1} F(X^{(k)}), \quad k = 0, 1, 2, \ldots$ (1)
It converges quadratically if the function $F$ is continuously differentiable and the initial approximation is close enough. The literature contains a variety of higher-order methods that improve the convergence order of Newton's scheme. For example, several authors have proposed cubically convergent methods [2,3,4,5] requiring two evaluations of the Jacobian $F'$, one evaluation of $F$, and two matrix inversions per step. In [6], the authors developed another family of third-order methods: one member requires one evaluation of $F$ and three of $F'$, whereas the other requires one evaluation of $F$, four of $F'$, and two matrix inversions per iteration. In [7], Darvishi and Barati used two evaluations of $F$, two of $F'$, and two matrix inversions per step to propose a new third-order scheme. Similarly, several third-order methods have been proposed in [8,9] that require two evaluations of $F$, one of $F'$, and one matrix inversion. Babajee et al. [10] presented a fourth-order method that consumes one evaluation of $F$, two of $F'$, and two matrix inversions per iteration. Another fourth-order method was developed in [11] using two evaluations of the function and of the Jacobian and one matrix inversion, whereas the authors of [12] proposed another fourth-order method utilizing three evaluations of $F$, one of $F'$, and one matrix inversion per iteration. A fifth-order method in [13] requires three evaluations of the function and only one Jacobian evaluation, with the solution of three linear systems sharing the same coefficient matrix per iteration.
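Newton's scheme (1) is straightforward to sketch in code. The following minimal illustration is not from the paper: the function names and the 2 × 2 test system are chosen here for demonstration, and a linear system is solved at each step instead of forming the inverse Jacobian explicitly.

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-12, max_iter=50):
    """Classical Newton iteration X_{k+1} = X_k - [F'(X_k)]^{-1} F(X_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), F(x))  # solve F'(x) dx = F(x)
        x = x - dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Hypothetical 2x2 test system: x^2 + y^2 = 4, x*y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])
root = newton_system(F, J, [2.0, 0.5])
```

Starting close enough to a zero, the residual decreases quadratically, as the theory above predicts.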
In pursuit of faster algorithms, researchers have also developed fifth- and sixth-order methods, for example, in [6,14,15,16]. In [15], Narang et al. extended Babajee's existing fourth-order scheme [17] to SNEs and developed a sixth-order convergent family of Chebyshev–Halley-type methods. Their scheme requires two evaluations of $F$, two of $F'$, and the solution of two linear systems per iteration. One can notice that although researchers keep attempting to improve the order of convergence of an iterative method, this mostly increases the computational cost per iteration. The computational cost is especially high if the method involves the second-order Fréchet derivative $F''(X)$. This is a major limitation of higher-order methods. Thus, while developing new iterative methods, we should try to keep the computational cost low. With this intention, we have made an attempt to develop a three-step sixth-order family of methods requiring two evaluations of $F$, two of $F'$, and one matrix inversion per iteration. This family of methods is found to be more efficient than existing methods and has proved effective particularly in solving large-scale nonlinear systems.
The outline of the manuscript is as follows. In Section 2, a new class of sixth-order schemes and its convergence analysis are presented. In Section 3, we present numerous illustrative examples to validate the theoretical results. Finally, Section 4 contains some conclusions.

2. Design of the King’s Family for Multidimensional Case

In this section, we propose a new three-point extension of King's method [18,19,20,21] having sixth-order convergence. For this purpose, we consider the well-known fourth-order King's method, which is given by
$y_k = x_k - \dfrac{f(x_k)}{f'(x_k)}, \qquad x_{k+1} = y_k - \dfrac{1 + \alpha u_k}{1 + (\alpha - 2) u_k} \cdot \dfrac{f(y_k)}{f'(x_k)},$ (2)
where $\alpha$ is a real parameter and $u_k = \dfrac{f(y_k)}{f(x_k)}$. For $\alpha = 0$, one can obtain the well-known Ostrowski's method [22,23,24].
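One step of the scalar family (2) can be sketched as follows; a minimal illustration in which the function name `king_step` and the test equation $x^3 - 2 = 0$ are chosen here, not taken from the paper.

```python
def king_step(f, fp, x, alpha=0.0):
    """One step of King's fourth-order family (2); alpha = 0 gives
    Ostrowski's method."""
    fx = f(x)
    y = x - fx / fp(x)               # Newton substep
    u = f(y) / fx                    # u_k = f(y_k) / f(x_k)
    return y - (1 + alpha * u) / (1 + (alpha - 2) * u) * f(y) / fp(x)

# Hypothetical test problem: f(x) = x^3 - 2, root 2^(1/3)
f = lambda x: x**3 - 2.0
fp = lambda x: 3.0 * x**2
x = 1.5
for _ in range(3):
    x = king_step(f, fp, x)
```

With fourth-order convergence, three steps from a nearby guess already reach roughly machine precision.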
Let us now modify the method (2) for SNEs by rewriting the scheme as follows,
$u_k = \dfrac{f(y_k) - f(x_k) + f(x_k)}{f(x_k)} = \dfrac{f(y_k) - f(x_k)}{f(x_k)} + 1 = 1 - \dfrac{f(y_k) - f(x_k)}{(y_k - x_k)\, f'(x_k)} = 1 - \dfrac{1}{f'(x_k)}\, [y_k, x_k; f],$
where $[y_k, x_k; f] = \dfrac{f(y_k) - f(x_k)}{y_k - x_k}$. Finally, we can rewrite the above scheme (2) for SNEs, with one additional sub-step, in the following manner,
$y^{(k)} = x^{(k)} - F'(x^{(k)})^{-1} F(x^{(k)}),$
$z^{(k)} = y^{(k)} - \left[I + (\alpha - 2)\, U^{(k)}\right]^{-1} \left(I + \alpha\, U^{(k)}\right) F'(x^{(k)})^{-1} F(y^{(k)}),$
$x^{(k+1)} = z^{(k)} - \left[y^{(k)}, z^{(k)}; F\right]^{-1} F(z^{(k)}),$ (3)
where $[\cdot, \cdot; F]$ is a first-order divided difference and $\alpha$ is a free disposable parameter, with $U^{(k)} = I - F'(x^{(k)})^{-1} [x^{(k)}, y^{(k)}; F]$.
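A floating-point sketch of one iteration of the scheme (3) is given below. This is illustrative only: the divided-difference matrix is formed column by column in the standard way, a small-perturbation guard (not part of the scheme itself) is added here for nearly equal components, and all names are choices made for this sketch.

```python
import numpy as np

def divided_difference(F, a, b, eps=1e-8):
    """First-order divided-difference matrix [a, b; F], built column by
    column; column j uses the increment a_j - b_j (guarded when tiny)."""
    n = len(a)
    D = np.zeros((n, n))
    w = np.array(b, dtype=float)
    for j in range(n):
        w_prev = w.copy()
        h = a[j] - b[j]
        if abs(h) < eps:          # guard: (nearly) equal components
            h = eps
        w[j] = w_prev[j] + h
        D[:, j] = (F(w) - F(w_prev)) / h
    return D

def proposed_step(F, J, x, alpha=0.0):
    """One iteration of the three-step scheme (3)."""
    n = len(x)
    I = np.eye(n)
    Jx = J(x)
    y = x - np.linalg.solve(Jx, F(x))
    U = I - np.linalg.solve(Jx, divided_difference(F, x, y))
    rhs = (I + alpha * U) @ np.linalg.solve(Jx, F(y))
    z = y - np.linalg.solve(I + (alpha - 2.0) * U, rhs)
    return z - np.linalg.solve(divided_difference(F, y, z), F(z))

# Hypothetical 2x2 test system: x^2 + y^2 = 4, x*y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])
x = np.array([2.0, 0.5])
for _ in range(3):
    x = proposed_step(F, J, x)
```

Note that each iteration uses two evaluations of $F$ beyond the divided differences, two Jacobian-based solves with the same matrix, and one inversion-equivalent solve with $[y^{(k)}, z^{(k)}; F]$, in line with the cost count stated in the Introduction.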
Now, it is necessary to analyze the convergence conditions of this modified King’s class of methods. In Theorem 1, we demonstrate the convergence order of the above scheme (3). We have used the following procedures [25] to prove the convergence results.
Let $F: \Omega \subseteq \mathbb{R}^k \to \mathbb{R}^k$ be sufficiently differentiable in $\Omega$. Now, we define the qth derivative of $F$ at $\omega \in \Omega$, $q \geq 1$. It can be viewed as a q-linear function $F^{(q)}(\omega): \mathbb{R}^k \times \cdots \times \mathbb{R}^k \to \mathbb{R}^k$, such that $F^{(q)}(\omega)(v_1, \ldots, v_q) \in \mathbb{R}^k$. It is easy to observe that
  • $F^{(q)}(\omega)(v_1, \ldots, v_{q-1}, \cdot) \in \mathcal{L}(\mathbb{R}^k)$;
  • $F^{(q)}(\omega)(v_{\sigma(1)}, \ldots, v_{\sigma(q)}) = F^{(q)}(\omega)(v_1, \ldots, v_q)$, for every permutation $\sigma$ of $\{1, 2, \ldots, q\}$.
Using the above relations, we can introduce the following notation,
(a)
$F^{(q)}(\omega)(v_1, \ldots, v_q) = F^{(q)}(\omega)\, v_1 \cdots v_q;$
(b)
$F^{(q)}(\omega)\, v^{q-1}\, F^{(p)}(\omega)\, v^p = F^{(q)}(\omega) F^{(p)}(\omega)\, v^{q+p-1}.$
Now, applying Taylor's expansion for $\xi^* + h \in \mathbb{R}^k$ in a neighborhood of a solution $\xi^*$ of the given nonlinear system, one can get
$F(\xi^* + h) = F'(\xi^*)\left[h + \sum_{q=2}^{p-1} C_q h^q\right] + O(h^p),$ (4)
where $C_q = (1/q!)\,[F'(\xi^*)]^{-1} F^{(q)}(\xi^*)$, $q \geq 2$. We note that $C_q h^q \in \mathbb{R}^k$, as $F^{(q)}(\xi^*) \in \mathcal{L}(\mathbb{R}^k \times \cdots \times \mathbb{R}^k, \mathbb{R}^k)$ and $[F'(\xi^*)]^{-1} \in \mathcal{L}(\mathbb{R}^k)$.
Similarly, we can express $F'$ as
$F'(\xi^* + h) = F'(\xi^*)\left[I + \sum_{q=2}^{p-1} q\, C_q h^{q-1}\right] + O(h^{p-1}),$ (5)
where $I$ denotes the identity matrix. Therefore, $q\, C_q h^{q-1} \in \mathcal{L}(\mathbb{R}^k)$. From Equation (5), we obtain
$[F'(\xi^* + h)]^{-1} = \left[I + X_2 h + X_3 h^2 + X_4 h^3 + \cdots\right] [F'(\xi^*)]^{-1} + O(h^p),$ (6)
where
$X_2 = -2C_2, \quad X_3 = 4C_2^2 - 3C_3, \quad X_4 = -8C_2^3 + 6C_2C_3 + 6C_3C_2 - 4C_4, \quad \ldots$
Let us denote by $e^{(k)} = x^{(k)} - \xi^*$ the error at the kth iteration. Then, the error equation is given as follows,
$e^{(k+1)} = M\, (e^{(k)})^p + O\!\left((e^{(k)})^{p+1}\right),$
where $M$ is a p-linear function, $M \in \mathcal{L}(\mathbb{R}^k \times \cdots \times \mathbb{R}^k, \mathbb{R}^k)$. Here, $p$ is the order of convergence, and $(e^{(k)})^p$ denotes the argument $(e^{(k)}, e^{(k)}, \ldots, e^{(k)})$ taken $p$ times.
Theorem 1.
Let $F: \Omega \subseteq \mathbb{R}^k \to \mathbb{R}^k$ be a sufficiently differentiable function defined on a convex set $\Omega$ containing the zero $\xi^*$. Let us assume that $F'(x)$ is continuous and nonsingular at $\xi^*$. If the initial guess $x^{(0)}$ is close enough to $\xi^*$, the iterative scheme (3) attains sixth-order convergence for each $\alpha$.
Proof. 
Let $e^{(k)} = x^{(k)} - \xi^*$ be the error at the kth iteration. Now, expanding $F(x^{(k)})$ and $F'(x^{(k)})$ using Taylor's expansion in a neighborhood of $\xi^*$, we get
$F(x^{(k)}) = F'(\xi^*)\left[e^{(k)} + C_2 (e^{(k)})^2 + C_3 (e^{(k)})^3 + C_4 (e^{(k)})^4 + C_5 (e^{(k)})^5 + C_6 (e^{(k)})^6\right] + O\!\left((e^{(k)})^7\right)$ (7)
and
$F'(x^{(k)}) = F'(\xi^*)\left[I + 2C_2 e^{(k)} + 3C_3 (e^{(k)})^2 + 4C_4 (e^{(k)})^3 + 5C_5 (e^{(k)})^4 + 6C_6 (e^{(k)})^5\right] + O\!\left((e^{(k)})^6\right),$ (8)
where $I$ is the $k \times k$ identity matrix and $C_q = \frac{1}{q!} F'(\xi^*)^{-1} F^{(q)}(\xi^*)$, $q \geq 2$.
With the help of the above expression (8), we have
$F'(x^{(k)})^{-1} = \left[I - 2C_2 e^{(k)} + \Delta_0 (e^{(k)})^2 + \Delta_1 (e^{(k)})^3 + \Delta_2 (e^{(k)})^4 + \Delta_3 (e^{(k)})^5 + \Delta_4 (e^{(k)})^6\right] F'(\xi^*)^{-1} + O\!\left((e^{(k)})^7\right),$ (9)
where $\Delta_i = \Delta_i(C_2, C_3, \ldots, C_6)$; for example, $\Delta_0 = 4C_2^2 - 3C_3$, $\Delta_1 = -\left(8C_2^3 - 6C_2C_3 - 6C_3C_2 + 4C_4\right)$, $\Delta_2 = 8C_2C_4 + 9C_3^2 + 8C_4C_2 - 12C_2^2C_3 - 12C_2C_3C_2 - 12C_3C_2^2 + 16C_2^4 - 5C_5$, $\Delta_3 = 10C_2C_5 + 12C_3C_4 + 12C_4C_3 + 10C_5C_2 - 16C_2^2C_4 - 18C_2C_3^2 - 16C_2C_4C_2 - 18C_3C_2C_3 - 18C_3^2C_2 - 16C_4C_2^2 + 24C_2^3C_3 + 24C_2^2C_3C_2 + 24C_2C_3C_2^2 + 24C_3C_2^3 - 32C_2^5 - 6C_6$, etc.
From expressions (7) and (9), we obtain
$F'(x^{(k)})^{-1} F(x^{(k)}) = e^{(k)} + \Theta_0 (e^{(k)})^2 + \Theta_1 (e^{(k)})^3 + \Theta_2 (e^{(k)})^4 + \Theta_3 (e^{(k)})^5 + \Theta_4 (e^{(k)})^6 + O\!\left((e^{(k)})^7\right),$ (10)
where $\Theta_j = \Theta_j(C_2, C_3, \ldots, C_6)$; for example, $\Theta_0 = -C_2$, $\Theta_1 = 2C_2^2 - 2C_3$, $\Theta_2 = -\left(4C_2^3 - 4C_2C_3 - 3C_3C_2 + 3C_4\right)$, $\Theta_3 = 6C_2C_4 + 6C_3^2 + 4C_4C_2 - 8C_2^2C_3 - 6C_2C_3C_2 - 6C_3C_2^2 + 8C_2^4 - 4C_5$, $\Theta_4 = 8C_2C_5 + 9C_3C_4 + 8C_4C_3 + 5C_5C_2 - 12C_2^2C_4 - 12C_2C_3^2 - 8C_2C_4C_2 - 12C_3C_2C_3 - 9C_3^2C_2 - 8C_4C_2^2 + 16C_2^3C_3 + 12C_2^2C_3C_2 + 12C_2C_3C_2^2 + 12C_3C_2^3 - 16C_2^5 - 5C_6$, etc.
By inserting the expression (10) in the first substep of (3), we obtain
$y^{(k)} - \xi^* = -\Theta_0 (e^{(k)})^2 - \Theta_1 (e^{(k)})^3 - \Theta_2 (e^{(k)})^4 - \Theta_3 (e^{(k)})^5 - \Theta_4 (e^{(k)})^6 + O\!\left((e^{(k)})^7\right),$ (11)
which further produces
$F(y^{(k)}) = F'(\xi^*)\Big[-\Theta_0 (e^{(k)})^2 - \Theta_1 (e^{(k)})^3 + \left(C_2 \Theta_0^2 - \Theta_2\right)(e^{(k)})^4 + \left(2C_2\Theta_0\Theta_1 - \Theta_3\right)(e^{(k)})^5 - \left(C_3\Theta_0^3 - C_2(\Theta_1^2 + 2\Theta_0\Theta_2) + \Theta_4\right)(e^{(k)})^6 + O\!\left((e^{(k)})^7\right)\Big]$ (12)
and
$U^{(k)} = I - F'(x^{(k)})^{-1}\left[x^{(k)}, y^{(k)}; F\right] = C_2 e^{(k)} + \left(C_2\Theta_0 - 2C_2^2 + 2C_3\right)(e^{(k)})^2 + \left(-2C_2^2\Theta_0 + C_2(\Theta_1 - 7C_3) + C_3\Theta_0 + 4C_2^3 + 3C_4\right)(e^{(k)})^3 + \left(4C_2^3\Theta_0 + C_2^2(20C_3 - 2\Theta_1) + C_2(-5C_3\Theta_0 - 10C_4 + \Theta_2) + C_4\Theta_0 + C_3(\Theta_1 - \Theta_0^2) - 8C_2^4 - 6C_3^2 + 4C_5\right)(e^{(k)})^4 + \Big[8C_2^4\Theta_0 + C_2^3(4\Theta_1 - 52C_3) + 2C_2^2(8C_3\Theta_0 + 14C_4 - \Theta_2) + C_2\big(C_3(2\Theta_0^2 - 5\Theta_1) - 6C_4\Theta_0 + 33C_3^2 - 13C_5 + \Theta_3\big) - C_4\Theta_0^2 - 3C_3^2\Theta_0 + C_5\Theta_0 + C_4\Theta_1 + C_3(17C_4 - 2\Theta_0\Theta_1 + \Theta_2) + 16C_2^5 + 5C_6\Big](e^{(k)})^5 + O\!\left((e^{(k)})^6\right).$ (13)
By using expressions (9), (12), and (13), we obtain
$\left[I + (\alpha - 2)\, U^{(k)}\right]^{-1}\left(I + \alpha\, U^{(k)}\right) F'(x^{(k)})^{-1} F(y^{(k)}) = -\Theta_0 (e^{(k)})^2 - \Theta_1 (e^{(k)})^3 - \left(2\alpha C_2^2\Theta_0 + C_2\Theta_0^2 + C_3\Theta_0 + \Theta_2\right)(e^{(k)})^4 + O\!\left((e^{(k)})^5\right).$ (14)
By adopting expressions (11)–(14) in the scheme (3), we have
$z^{(k)} - \xi^* = \tau_1 (e^{(k)})^4 + \tau_2 (e^{(k)})^5 + \tau_3 (e^{(k)})^6 + O\!\left((e^{(k)})^7\right),$ (15)
where $\tau_j = \tau_j(\Theta_0, \Theta_1, \ldots, \Theta_4, C_2, C_3, \ldots, C_6, \alpha)$, $j = 1, 2, 3$; for example, $\tau_1 = \Theta_0\left(2\alpha C_2^2 + C_2\Theta_0 + C_3\right)$.
Now, expanding $F(z^{(k)})$ in a neighborhood of $\xi^*$, we have
$F(z^{(k)}) = F'(\xi^*)\left[\tau_1 (e^{(k)})^4 + \tau_2 (e^{(k)})^5 + \tau_3 (e^{(k)})^6\right] + O\!\left((e^{(k)})^7\right),$ (16)
which, with the help of expression (12), further produces
$\left[z^{(k)}, y^{(k)}; F\right]^{-1} F(z^{(k)}) = \tau_1 (e^{(k)})^4 + \tau_2 (e^{(k)})^5 + \left(C_2\Theta_0\tau_1 + \tau_3\right)(e^{(k)})^6 + O\!\left((e^{(k)})^7\right).$ (17)
By using Equation (17) in the final substep of (3), we have
$e^{(k+1)} = \tau_1 (e^{(k)})^4 + \tau_2 (e^{(k)})^5 + \tau_3 (e^{(k)})^6 - \left[\tau_1 (e^{(k)})^4 + \tau_2 (e^{(k)})^5 + \left(C_2\Theta_0\tau_1 + \tau_3\right)(e^{(k)})^6\right] + O\!\left((e^{(k)})^7\right) = -C_2\Theta_0\tau_1 (e^{(k)})^6 + O\!\left((e^{(k)})^7\right) = C_2^3\left[(1 - 2\alpha)\, C_2^2 - C_3\right](e^{(k)})^6 + O\!\left((e^{(k)})^7\right).$ (18)
Therefore, the scheme (3) has sixth-order convergence. □

3. Numerical Experiments

Here, we check the efficiency and effectiveness of our scheme on real-life and standard academic test problems. The details of the five considered problems, together with the starting points and the zeros of the corresponding nonlinear systems, are given in Examples 1–5. We employ our sixth-order scheme (3), denoted (PM), and verify its computational performance against the existing methods considered in the previous section.
Now, we compare (3) with the sixth-order families proposed by Abbasbandy et al. [26] and Hueso et al. [27]. We choose their best expressions, (8) with $t_1 = \frac{9}{4}$ and (14)–(15) with $s_2 = \frac{9}{8}$, denoted (AM6) and (HM6), respectively. Moreover, the newly proposed scheme is compared with the sixth-order iterative families of Sharma and Arora [28] and Wang and Li [29]; out of these works, we choose the methods (13) and (6), called (SM6) and (WM6), respectively. Finally, we compare (3) with the sixth-order methods suggested by Narang et al. [15] and Lotfi et al. [16]; we consider their methods (3.1), with $\lambda = 1$, $\beta = 2$, $p = 1$, and $q = \frac{3}{2}$, and (5), called (MM6) and (LM6), respectively. All the iterative expressions are listed below.
  • Method AM6:
    $y^{(k)} = x^{(k)} - \frac{2}{3} F'(x^{(k)})^{-1} F(x^{(k)}),$
    $z^{(k)} = x^{(k)} - \left[I + \frac{21}{8}\, F'(x^{(k)})^{-1} F'(y^{(k)}) - \frac{9}{2} \left(F'(x^{(k)})^{-1} F'(y^{(k)})\right)^2 + \frac{15}{8} \left(F'(x^{(k)})^{-1} F'(y^{(k)})\right)^3\right] F'(x^{(k)})^{-1} F(x^{(k)}),$
    $x^{(k+1)} = z^{(k)} - \left[3I - \frac{5}{2}\, F'(x^{(k)})^{-1} F'(y^{(k)}) + \frac{1}{2} \left(F'(x^{(k)})^{-1} F'(y^{(k)})\right)^2\right] F'(x^{(k)})^{-1} F(z^{(k)}).$
  • Method HM6:
    $y^{(k)} = x^{(k)} - F'(x^{(k)})^{-1} F(x^{(k)}),$
    $H(x^{(k)}, y^{(k)}) = F'(x^{(k)})^{-1} F'(y^{(k)}),$
    $G_s(x^{(k)}, y^{(k)}) = s_1 I + s_2 H(y^{(k)}, x^{(k)}) + s_3 H(x^{(k)}, y^{(k)}) + s_4 H(y^{(k)}, x^{(k)})^2,$
    $z^{(k)} = x^{(k)} - G_s(x^{(k)}, y^{(k)})\, F'(x^{(k)})^{-1} F(x^{(k)}),$
    $x^{(k+1)} = z^{(k)},$
    where $s_1$, $s_2$, $s_3$, and $s_4$ are real numbers.
  • Method SM6:
    $y^{(k)} = x^{(k)} - \frac{2}{3} F'(x^{(k)})^{-1} F(x^{(k)}),$
    $z^{(k)} = \phi_4^{(4)}(x^{(k)}, y^{(k)}),$
    $x^{(k+1)} = z^{(k)} - \left[s I + t\, F'(x^{(k)})^{-1} F'(y^{(k)})\right] F'(x^{(k)})^{-1} F(z^{(k)}),$
    where s and t are real parameters.
  • Method WM6:
    $y^{(k)} = x^{(k)} - F'(x^{(k)})^{-1} F(x^{(k)}),$
    $z^{(k)} = y^{(k)} - \left[2I - F'(x^{(k)})^{-1} F'(y^{(k)})\right] F'(x^{(k)})^{-1} F(y^{(k)}),$
    $x^{(k+1)} = z^{(k)} - \left[2I - F'(x^{(k)})^{-1} F'(y^{(k)})\right] F'(x^{(k)})^{-1} F(z^{(k)}).$
  • Method MM6:
    $y^{(k)} = x^{(k)} - \frac{2}{3} F'(x^{(k)})^{-1} F(x^{(k)}),$
    $z^{(k)} = x^{(k)} - \left[I + \frac{1}{2}\beta \left(I - \lambda\beta\, G(x^{(k)})\right)^{-1} H(G(x^{(k)}))\right] u(x^{(k)}),$
    $x^{(k+1)} = z^{(k)} - \left[p I + q\, G(x^{(k)})\right] F'(x^{(k)})^{-1} F(z^{(k)}),$
    where $\lambda$, $\beta$, $p$, and $q$ are real numbers.
  • Method LM6:
    $y^{(k)} = x^{(k)} - F'(x^{(k)})^{-1} F(x^{(k)}),$
    $z^{(k)} = x^{(k)} - 2\left(F'(x^{(k)}) + F'(y^{(k)})\right)^{-1} F(x^{(k)}),$
    $x^{(k+1)} = z^{(k)} - \left[\frac{7}{2} I - 4\, F'(x^{(k)})^{-1} F'(y^{(k)}) + \frac{3}{2}\left(F'(x^{(k)})^{-1} F'(y^{(k)})\right)^2\right] F'(x^{(k)})^{-1} F(z^{(k)}).$
The abscissas $t_j$ and the weights $w_j$ are known and are listed in Table 1 for $t = 8$ (for more details, see Example 1). In Table 2, Table 3, Table 4, Table 5 and Table 6, we report the iteration index $(k)$, the residual errors $\|F(x^{(k)})\|$, the errors $\|x^{(k+1)} - x^{(k)}\|$, and the computational convergence order
$\rho^* \approx \dfrac{\log\left(\|x^{(k+1)} - x^{(k)}\| / \|x^{(k)} - x^{(k-1)}\|\right)}{\log\left(\|x^{(k)} - x^{(k-1)}\| / \|x^{(k-1)} - x^{(k-2)}\|\right)}.$
In addition, $\eta$ denotes the last calculated value of $\dfrac{\|x^{(k+1)} - x^{(k)}\|}{\|x^{(k)} - x^{(k-1)}\|^6}$. Finally, a comparison of the number of iterations taken by the different methods on Examples 1–5 is given in Table 7.
All the computations have been done in Mathematica 9 with multiple-precision arithmetic using 1000 digits of mantissa, which minimizes round-off errors. Here, $a(\pm b)$ stands for $a \times 10^{\pm b}$ in all the tables. The stopping criteria for the programs are defined as follows,
(i) $\|F(x^{(k)})\| < 10^{-100}$ and (ii) $\|x^{(k+1)} - x^{(k)}\| < 10^{-100}$.
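The computational order $\rho^*$ can be evaluated directly from three consecutive difference norms. A small self-contained illustration follows; the simulated sixth-order data here are hypothetical and are not taken from the tables.

```python
import math

def computational_order(d2, d1, d0):
    """Approximate convergence order rho* from three consecutive
    difference norms: d0 = ||x^(k-1) - x^(k-2)||,
    d1 = ||x^(k) - x^(k-1)||, d2 = ||x^(k+1) - x^(k)||."""
    return math.log(d2 / d1) / math.log(d1 / d0)

# For an order-p method, d_{next} ~ C * d^p; simulate p = 6:
p, C = 6, 0.9
d0 = 1e-2
d1 = C * d0**p
d2 = C * d1**p
rho = computational_order(d2, d1, d0)
```

The estimate recovers the simulated order to within a small discretization of the asymptotic constant.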
Example 1.
Let us consider the Hammerstein integral equation (see [1], pp. 19–20) given as follows,
$y(s) = 1 + \frac{1}{5} \int_0^1 F(s, t)\, y(t)^3\, dt,$
where $y \in C[0, 1]$, $s, t \in [0, 1]$, and the kernel $F$ is
$F(s, t) = \begin{cases} (1 - s)\, t, & t \leq s, \\ s\, (1 - t), & s \leq t. \end{cases}$
Now, using the Gauss–Legendre quadrature formula $\int_0^1 f(t)\, dt \approx \sum_{j=1}^{8} w_j f(t_j)$, where $t_j$ and $w_j$ denote the abscissas and weights, respectively, we transform the above equation into a finite-dimensional problem. The $t_j$ and $w_j$ are determined for $t = 8$ nodes by the Gauss–Legendre quadrature formula. Denoting $y(t_i)$ by $y_i$ $(i = 1, 2, \ldots, 8)$, we get the following nonlinear system,
$5 y_i - 5 - \sum_{j=1}^{8} a_{ij}\, y_j^3 = 0, \quad i = 1, 2, \ldots, 8,$
where $a_{ij} = \begin{cases} w_j\, t_j (1 - t_i), & j \leq i, \\ w_j\, t_i (1 - t_j), & i < j. \end{cases}$
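For reference, the discretized system and a plain Newton solve of it can be set up as follows. This is a sketch: it uses NumPy's built-in Gauss–Legendre nodes mapped to [0, 1] rather than the tabulated values, plain Newton instead of the compared sixth-order schemes, and names chosen here for illustration.

```python
import numpy as np

# 8-point Gauss-Legendre nodes/weights on [0, 1] (cf. Table 1)
t, w = np.polynomial.legendre.leggauss(8)
t, w = (t + 1) / 2, w / 2

i_idx = np.arange(8)[:, None]
j_idx = np.arange(8)[None, :]
A = np.where(j_idx <= i_idx,
             w * t * (1 - t[:, None]),    # a_ij = w_j t_j (1 - t_i), j <= i
             w * t[:, None] * (1 - t))    # a_ij = w_j t_i (1 - t_j), i <  j

def F(y):
    """Residual of 5 y_i - 5 - sum_j a_ij y_j^3 = 0."""
    return 5 * y - 5 - A @ y**3

def J(y):
    """Jacobian: 5*delta_ij - 3 a_ij y_j^2."""
    return 5 * np.eye(8) - 3 * A * (y**2)[None, :]

y = np.full(8, 0.5)                       # initial guess (1/2, ..., 1/2)
for _ in range(20):
    y = y - np.linalg.solve(J(y), F(y))
```

The converged vector matches the root reported below (first component approximately 1.00209).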
Here, the abscissas $t_j$ and the weights $w_j$ are as listed in Table 1 for $t = 8$.
The convergence of the methods towards the root
$\xi^* = \left(1.00209, 1.00990, 1.01972, 1.02643, 1.02643, 1.01972, 1.00990, 1.00209\right)^T,$
is tested in Table 2 with the initial guess $x^{(0)} = \left(\frac{1}{2}, \frac{1}{2}, \ldots, \frac{1}{2}\right)^T$. The numerical results show that the proposed methods PM1_6 and PM2_6 have better residual errors in comparison with the existing ones. In addition, the smaller asymptotic error constants also belong to our methods PM1_6 and PM2_6.
Example 2.
Let us consider the Van der Pol equation [30], which governs the flow of current in a vacuum tube, defined as follows,
$y'' - \mu\left(y^2 - 1\right) y' + y = 0, \quad \mu > 0.$ (25)
Here, the boundary conditions are given by $y(0) = 0$, $y(2) = 1$. Further, we divide the given interval $[0, 2]$ as follows,
$x_0 = 0 < x_1 < x_2 < x_3 < \cdots < x_n = 2, \quad \text{where } x_i = x_0 + ih, \; h = \frac{2}{n}.$
Moreover, we assume that
$y_i = y(x_i), \quad i = 0, 1, \ldots, n.$
Now, if we discretize the above problem (25) by using the finite-difference formulas for the first and second derivatives, which are given by
$y_k' = \frac{y_{k+1} - y_{k-1}}{2h}, \qquad y_k'' = \frac{y_{k-1} - 2y_k + y_{k+1}}{h^2}, \quad k = 1, 2, \ldots, n-1,$
then we obtain an SNE of size $(n-1) \times (n-1)$:
$2h^2 y_k - h\mu\left(y_k^2 - 1\right)\left(y_{k+1} - y_{k-1}\right) + 2\left(y_{k-1} + y_{k+1} - 2y_k\right) = 0, \quad k = 1, 2, \ldots, n-1.$
Let us consider $\mu = \frac{1}{2}$ and the initial approximation $y_k^{(0)} = \log\frac{1}{k^2}$, $k = 1, 2, \ldots, n-1$. In this example, we solve a $9 \times 9$ SNE by taking $n = 10$. The solution of this problem is
$\xi^* = \left(-0.4795, -0.9050, -1.287, -1.641, -1.990, -2.366, -2.845, -3.673, -6.867\right)^T.$
The numerical results are displayed in Table 3. It is found that the newly proposed methods perform better in all aspects, whereas the existing methods show larger residual errors.
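For concreteness, the residual of the discretized system above can be assembled as follows; a sketch with $n = 10$ and $\mu = \frac{1}{2}$, in which the vectorized assembly and the names are choices made here, with the boundary values $y(0) = 0$ and $y(2) = 1$ appended explicitly.

```python
import numpy as np

n, mu = 10, 0.5
h = 2.0 / n
y0, yn = 0.0, 1.0            # boundary values y(0) = 0, y(2) = 1

def F(y):
    """Residual of the discretized Van der Pol BVP for the interior
    unknowns y_1, ..., y_{n-1}."""
    full = np.concatenate(([y0], y, [yn]))
    ym, yc, yp = full[:-2], full[1:-1], full[2:]
    return (2 * h**2 * yc
            - h * mu * (yc**2 - 1) * (yp - ym)
            + 2 * (ym + yp - 2 * yc))
```

At the all-zero interior state, only the last equation is nonzero (it sees the right boundary value), which gives a quick sanity check of the assembly.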
Example 3.
The 2D Bratu problem [31,32] is defined as
$u_{xx} + u_{tt} + C e^{u} = 0 \quad \text{on } \Omega: \{(x, t) : 0 \leq x \leq 1, \; 0 \leq t \leq 1\}, \quad \text{with boundary condition } u = 0 \text{ on } \partial\Omega.$ (26)
Using finite-difference discretization, the given nonlinear PDE can be reduced to an SNE. Let $\Theta_{i,j} = u(x_i, t_j)$ be the numerical solution at the grid points of the mesh, let $M_1$ and $M_2$ be the numbers of steps in the $x$ and $t$ directions, respectively, and let $h$ and $k$ be the respective step sizes. To solve the given PDE, we apply the central difference formula to $u_{xx}$ (and analogously to $u_{tt}$) in the following way,
$u_{xx}(x_i, t_j) = \frac{\Theta_{i+1,j} - 2\Theta_{i,j} + \Theta_{i-1,j}}{h^2}, \quad C = 0.1, \; t \in [0, 1].$ (27)
By using expression (27) in (26), after some simplification we have
$\Theta_{i,j+1} + \Theta_{i,j-1} - 4\Theta_{i,j} + \Theta_{i+1,j} + \Theta_{i-1,j} + h^2 C \exp\left(\Theta_{i,j}\right) = 0, \quad i = 1, 2, 3, \ldots, M_1, \; j = 1, 2, 3, \ldots, M_2.$
By choosing $M_1 = M_2 = 11$, $C = 0.1$, and $h = \frac{1}{11}$, we obtain a system of nonlinear equations of size $10 \times 10$, with the initial guess $x_0 = 0.1\left(\sin(\pi h)\sin(\pi k), \sin(2\pi h)\sin(2\pi k), \ldots, \sin(10\pi h)\sin(10\pi k)\right)^T$. Numerical estimates are given in Table 4. The results demonstrate that the new methods have much better error estimates and computational order of convergence in comparison with their competitors.
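The Bratu residual on the 10 × 10 interior grid can be assembled compactly by padding the unknowns with the zero Dirichlet boundary; a sketch, with array names chosen here for illustration.

```python
import numpy as np

M, C = 10, 0.1               # 10 x 10 interior unknowns
h = 1.0 / 11                 # step size, h = k = 1/11

def F(theta):
    """Residual of the discretized 2D Bratu equation above:
    4-neighbour sum - 4*Theta + h^2 * C * exp(Theta), with
    homogeneous Dirichlet values padded on the boundary."""
    T = np.pad(np.reshape(theta, (M, M)), 1)   # embed with boundary zeros
    centre = T[1:-1, 1:-1]
    nbrs = T[2:, 1:-1] + T[:-2, 1:-1] + T[1:-1, 2:] + T[1:-1, :-2]
    return (nbrs - 4 * centre + h**2 * C * np.exp(centre)).ravel()
```

At the zero state the residual reduces to the constant source term $h^2 C$, which is a convenient sanity check of the stencil.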
Example 4.
Consider another typical nonlinear problem, namely Fisher's equation [33] with homogeneous Neumann boundary conditions and diffusion coefficient $H$,
$u_t = H u_{xx} + u(1 - u), \quad u(x, 0) = 1.5 + 0.5\cos(\pi x), \; 0 \leq x \leq 1, \quad u_x(0, t) = 0, \; u_x(1, t) = 0, \; t \geq 0.$ (29)
Again using finite-difference discretization, Equation (29) reduces to an SNE. Let $\Theta_{i,j} = u(x_i, t_j)$ be its approximate solution at the grid points of the mesh, let $M_1$ and $M_2$ be the numbers of steps in the $x$ and $t$ directions, and let $h$ and $k$ be the respective step sizes. We apply the central difference formula to $u_{xx}$, the backward difference to $u_t(x_i, t_j)$, and the forward difference to $u_x(x_i, t_j)$, respectively, in the following way,
$u_{xx}(x_i, t_j) = \frac{\Theta_{i+1,j} - 2\Theta_{i,j} + \Theta_{i-1,j}}{h^2}, \quad u_t(x_i, t_j) = \frac{\Theta_{i,j} - \Theta_{i,j-1}}{k}, \quad u_x(x_i, t_j) = \frac{\Theta_{i+1,j} - \Theta_{i,j}}{h},$ (30)
where $h = \frac{1}{M_1}$, $k = \frac{1}{M_2}$, $t \in [0, 1]$.
By adopting expression (30) in (29), after some simplification we get
$\frac{\Theta_{i,j} - \Theta_{i,j-1}}{k} - \Theta_{i,j}\left(1 - \Theta_{i,j}\right) - H\, \frac{\Theta_{i+1,j} - 2\Theta_{i,j} + \Theta_{i-1,j}}{h^2} = 0, \quad i = 1, 2, 3, \ldots, M_1, \; j = 1, 2, 3, \ldots, M_2.$
For the system of nonlinear equations, we considered $M_1 = M_2 = 5$, $h = k = \frac{1}{5}$, and $H = 1$, which gives a nonlinear system of size $5 \times 5$, with the initial guess $x_0 = \left(1 + \frac{i}{25}\right)^T$, $i = 1, 2, \ldots, 25$. All the numerical results are shown in Table 5. They show that our methods have better computational efficiency than the existing schemes in terms of residual errors and the difference between consecutive approximations.
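One time level of the discretization above can be written as a small nonlinear system in the current unknowns. The sketch below is illustrative: it treats a single implicit step, handles the Neumann ends by mirroring the end values (one of several reasonable choices, not necessarily the paper's), and uses names chosen here.

```python
import numpy as np

M, H = 5, 1.0
h = k = 1.0 / M
x = h * np.arange(1, M + 1)
u_prev = 1.5 + 0.5 * np.cos(np.pi * x)    # values at the previous time level

def F(u):
    """Residual of one implicit step of the discretized Fisher equation:
    (u - u_prev)/k - u(1 - u) - H * (second difference)/h^2, with the
    Neumann ends mirrored from their neighbours."""
    um = np.concatenate(([u[0]], u[:-1]))
    up = np.concatenate((u[1:], [u[-1]]))
    return (u - u_prev) / k - u * (1 - u) - H * (up - 2 * u + um) / h**2
```

At the constant state $u \equiv 1$ the reaction term and the mirrored second difference both vanish, leaving only the time-difference term.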
Example 5.
Let us consider the following nonlinear system,
$F(X) = \begin{cases} x_j^2\, x_{j+1} - 1 = 0, & 1 \leq j \leq n - 1, \\ x_n^2\, x_1 - 1 = 0. \end{cases}$
To obtain a large SNE of size $200 \times 200$, we choose $n = 200$ and the initial approximation $x^{(0)} = (1.25, 1.25, \ldots, 1.25)^T$ (200 times) for this problem. The required solution of this problem is $\xi^* = (1, 1, \ldots, 1)^T$ (200 times). The obtained results can be observed in Table 6. It can easily be seen that the proposed scheme performs well in terms of error estimates as compared with the available methods of the same nature.
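This cyclic system is convenient for large-scale experiments because each equation couples only two unknowns, so its Jacobian is sparse. A plain-Newton sketch (the solver choice and names are made here, not taken from the paper):

```python
import numpy as np

n = 200

def F(x):
    """F_j = x_j^2 * x_{j+1} - 1, with x_{n+1} identified with x_1."""
    return x**2 * np.roll(x, -1) - 1.0

def J(x):
    """Jacobian: diagonal 2 x_j x_{j+1}, superdiagonal x_j^2, and a
    wrap-around entry in the last row."""
    xn = np.roll(x, -1)
    Jm = np.diag(2 * x * xn)
    Jm += np.diag(x[:-1]**2, 1)
    Jm[-1, 0] = x[-1]**2
    return Jm

x = np.full(n, 1.25)
for _ in range(20):
    x = x - np.linalg.solve(J(x), F(x))
```

From the symmetric start, the iteration stays on constant vectors and converges to the solution of all ones.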

4. Conclusions

In this work, we have developed a new family of sixth-order iterative methods for solving SNEs numerically. To check its effectiveness, the proposed scheme was applied to some large-scale systems arising from various academic problems. The numerical results show that the proposed methods perform better than the existing schemes in the scientific literature.

Author Contributions

R.B. and M.K.: Conceptualization; Methodology; Validation; Writing–Original Draft Preparation; Writing–Review & Editing. H.F.A.: Review & Editing.

Funding

Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under grant No. G-1349-665-1440.

Acknowledgments

This project was supported by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under grant No. G-1349-665-1440. The authors, therefore, gratefully acknowledge with thanks DSR for technical and financial support.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970.
  2. Cordero, A.; Torregrosa, J.R. Variants of Newton's method for functions of several variables. Appl. Math. Comput. 2006, 183, 199–208.
  3. Frontini, M.; Sormani, E. Third-order methods from quadrature formulae for solving systems of nonlinear equations. Appl. Math. Comput. 2004, 149, 771–782.
  4. Grau-Sánchez, M.; Grau, À.; Noguera, M. On the computational efficiency index and some iterative methods for solving systems of non-linear equations. J. Comput. Appl. Math. 2011, 236, 1259–1266.
  5. Homeier, H.H.H. A modified Newton method with cubic convergence: The multivariable case. J. Comput. Appl. Math. 2004, 169, 161–169.
  6. Cordero, A.; Torregrosa, J.R. Variants of Newton's method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698.
  7. Darvishi, M.T.; Barati, A. Super cubic iterative methods to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 188, 1678–1685.
  8. Darvishi, M.T.; Barati, A. A third-order Newton-type method to solve systems of non-linear equations. Appl. Math. Comput. 2007, 187, 630–635.
  9. Potra, F.A.; Pták, V. Nondiscrete Induction and Iterative Processes; Pitman Publishing: Boston, MA, USA, 1984.
  10. Babajee, D.K.R.; Madhu, K.; Jayaraman, J. On some improved harmonic mean Newton-like methods for solving systems of nonlinear equations. Algorithms 2015, 8, 895–909.
  11. Cordero, A.; Martínez, E.; Torregrosa, J.R. Iterative methods of order four and five for systems of nonlinear equations. J. Comput. Appl. Math. 2009, 231, 541–551.
  12. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. Efficient high-order methods based on golden ratio for nonlinear systems. Appl. Math. Comput. 2011, 217, 4548–4556.
  13. Arroyo, V.; Cordero, A.; Torregrosa, J.R. Approximation of artificial satellites' preliminary orbits: The efficiency challenge. Math. Comput. Modell. 2011, 54, 1802–1807.
  14. Alzahrani, A.K.H.; Behl, R.; Alshomrani, A. Some higher-order iteration functions for solving nonlinear models. Appl. Math. Comput. 2018, 334, 80–93.
  15. Narang, M.; Bhatia, S.; Kanwar, V. New two parameter Chebyshev-Halley like family of fourth and sixth-order methods for systems of nonlinear equations. Appl. Math. Comput. 2016, 275, 394–403.
  16. Lotfi, T.; Bakhtiari, P.; Cordero, A.; Mahdiani, K.; Torregrosa, J.R. Some new efficient multipoint iterative methods for solving nonlinear systems of equations. Int. J. Comput. Math. 2015, 92, 1921–1934.
  17. Babajee, D.K.R. On a two-parameter Chebyshev-Halley like family of optimal two-point fourth order methods free from second derivatives. Afrika Matematika 2015, 26, 689–695.
  18. Campos, B.; Cordero, A.; Torregrosa, J.R.; Vindel, P. Stability of King's family of iterative methods with memory. J. Comput. Appl. Math. 2017, 318, 504–514.
  19. Cordero, A.; Maimó, J.G.; Torregrosa, J.R.; Vassileva, M.P.; Vindel, P. Chaos in King's iterative family. Appl. Math. Lett. 2013, 26, 842–848.
  20. Chicharro, F.; Cordero, A.; Torregrosa, J.R. Dynamics of iterative families with memory based on weight functions procedure. J. Comput. Appl. Math. 2019, 354, 286–298.
  21. King, R.F. A family of fourth-order methods for nonlinear equations. SIAM J. Numer. Anal. 1973, 10, 876–879.
  22. Argyros, I.K. Convergence and Application of Newton-Type Iterations; Springer: Berlin/Heidelberg, Germany, 2008.
  23. Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations; Academic Press: Amsterdam, The Netherlands, 2012.
  24. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964.
  25. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton-Jarratt's composition. Numer. Algor. 2010, 55, 87–99.
  26. Abbasbandy, S.; Bakhtiari, P.; Cordero, A.; Torregrosa, J.R.; Lotfi, T. New efficient methods for solving nonlinear systems of equations with arbitrary even order. Appl. Math. Comput. 2016, 287, 287–288.
  27. Hueso, J.L.; Martínez, E.; Teruel, C. Convergence, efficiency and dynamics of new fourth and sixth order families of iterative methods for nonlinear systems. J. Comput. Appl. Math. 2015, 275, 412–420.
  28. Sharma, J.R.; Arora, H. Efficient Jarratt-like methods for solving systems of nonlinear equations. Calcolo 2014, 51, 193–210.
  29. Wang, X.; Li, Y. An efficient sixth order Newton type method for solving nonlinear systems. Algorithms 2017, 10, 45.
  30. Burden, R.L.; Faires, J.D. Numerical Analysis; PWS Publishing Company: Boston, MA, USA, 2001.
  31. Simpson, R.B. A method for the numerical determination of bifurcation states of nonlinear systems of equations. SIAM J. Numer. Anal. 1975, 12, 439–451.
  32. Kapania, R.K. A pseudo-spectral solution of 2-parameter Bratu's equation. Comput. Mech. 1990, 6, 55–63.
  33. Sauer, T. Numerical Analysis, 2nd ed.; Pearson: Upper Saddle River, NJ, USA, 2012.
Table 1. The $t_j$ and $w_j$ of the Gauss–Legendre formula for $t = 8$.

j   t_j                            w_j
1 0.01985507175123188415821957 0.05061426814518812957626567
2 0.10166676129318663020422303 0.11119051722668723527217800
3 0.23723379504183550709113047 0.15685332293894364366898110
4 0.40828267875217509753026193 0.18134189168918099148257522
5 0.59171732124782490246973807 0.18134189168918099148257522
6 0.76276620495816449290886952 0.15685332293894364366898110
7 0.89833323870681336979577696 0.11119051722668723527217800
8 0.98014492824876811584178043 0.05061426814518812957626567
Table 2. Comparison of methods on the Hammerstein integral equation in Example 1.

Methods   k   ‖F(x^(k))‖   ‖x^(k+1) − x^(k)‖   ρ*       ‖x^(k+1) − x^(k)‖/‖x^(k) − x^(k−1)‖^6
AM6       1   1.1(-5)      2.4(-6)
          2   5.4(-39)     1.2(-39)                     6.596956919(-6)
          3   8.0(-239)    1.7(-239)           5.9991   7.072478176(-6)
HM6       1   3.0(-5)      6.5(-6)
          2   6.9(-31)     1.5(-31)                     1.994799598
          3   4.4(-159)    9.4(-160)           4.9991   9.220736175(+25)
SM6       1   9.4(-6)      2.0(-6)
          2   1.5(-39)     3.2(-40)                     4.964844066(-6)
          3   2.7(-242)    5.7(-243)           5.9991   5.324312398(-6)
WM6       1   1.6(-6)      3.3(-7)
          2   9.0(-45)     1.9(-45)                     1.377597868(-6)
          3   3.3(-274)    7.1(-275)           5.9996   1.431074607(-6)
MM6       1   3.9(-6)      8.2(-7)
          2   5.8(-36)     1.2(-36)                     3.943508142
          3   4.6(-185)    9.9(-186)           4.9992   2.773373608(+30)
LM6       1   8.6(-6)      1.8(-6)
          2   8.4(-37)     1.7(-37)                     4.445532921(-3)
          3   1.4(-189)    3.0(-190)           4.9224   1.232404905(+31)
PM1_6     1   1.7(-7)      1.5(-7)
          2   1.9(-47)     4.1(-48)                     3.291248720(-7)
          3   7.6(-291)    1.6(-291)           5.9998   3.358306951(-7)
PM2_6     1   6.6(-7)      1.4(-7)
          2   1.1(-47)     2.3(-48)                     2.820724919(-7)
          3   2.0(-292)    4.3(-293)           5.9998   2.880288646(-7)
Table 3. Comparison of methods on the Van der Pol equation in Example 2.

| Methods | k | ‖F(x^(k))‖ | ‖x^(k+1) - x^(k)‖ | ρ | ‖x^(k+1) - x^(k)‖ / ‖x^(k) - x^(k-1)‖^6 |
|---|---|---|---|---|---|
| AM6 | 1 | 9.8(+16) | 1.2(+6) | | |
| | 2 | 9.3(+15) | 5.3(+5) | | 2.207185393(-31) |
| | 3 | 8.9(+14) | 2.4(+5) | 1.0000 | 1.115653208(-29) |
| HM6 | 1 | 2.0(+2) | 3.4(+1) | | |
| | 2 | 2.6(+0) | 1.4(+3) | | 8.575063031(-7) |
| | 3 | 6.9(+7) | 8.5(+2) | 0.13787 | 1.044481693(-16) |
| SM6 | 1 | 4.7(+10) | 9.4(+3) | | |
| | 2 | 3.9(+9) | 4.1(+3) | | 6.141757152(-21) |
| | 3 | 3.3(+8) | 1.8(+3) | 0.99993 | 3.757585542(-19) |
| WM6 | 1 | 3.8(+10) | 8.7(+3) | | |
| | 2 | 3.3(+9) | 3.9(+3) | | 8.901990983(-21) |
| | 3 | 2.9(+8) | 1.7(+3) | 0.99992 | 5.216543771(-19) |
| MM6 | 1 | 1.8(+9) | 3.2(+3) | | |
| | 2 | 1.4(+8) | 1.4(+3) | | 1.233926627(-18) |
| | 3 | 1.1(+7) | 6.0(+2) | 0.99947 | 8.435924608(-17) |
| LM6 | 1 | 7.8(+3) | 5.1(+1) | | |
| | 2 | 6.0(+2) | 1.9(+1) | | 1.090110495(-9) |
| | 3 | 3.9(+1) | 5.0(0) | 1.3904 | 9.718708834(-8) |
| PM1_6 | 1 | 2.6(0) | 7.2(-1) | | |
| | 2 | 1.5(-4) | 2.8(-5) | | 2.099986524(-4) |
| | 3 | 6.7(-27) | 1.6(-27) | 5.0546 | 3.049589936 |
| PM2_6 | 1 | 2.6(-1) | 1.2(-1) | | |
| | 2 | 8.3(-9) | 1.5(-9) | | 4.804752944(-4) |
| | 3 | 2.8(-48) | 7.4(-49) | 4.9654 | 7.413843150(+4) |
Table 4. Comparison of different methods on the 2D Bratu problem in Example 3.

| Methods | k | ‖F(x^(k))‖ | ‖x^(k+1) - x^(k)‖ | ρ | ‖x^(k+1) - x^(k)‖ / ‖x^(k) - x^(k-1)‖^6 |
|---|---|---|---|---|---|
| AM6 | 1 | 4.4(-15) | 2.4(-14) | | |
| | 2 | 6.9(-95) | 3.5(-94) | | 1.428095547(-12) |
| | 3 | 7.9(-574) | 3.9(-573) | 5.9994 | 1.973434769(-12) |
| HM6 | 1 | 2.1(-13) | 1.2(-12) | | |
| | 2 | 2.1(-71) | 1.2(-70) | | 7.368055345(-11) |
| | 3 | 1.7(-361) | 9.3(-361) | 4.9997 | 3.495510769(+1) |
| SM6 | 1 | 4.4(-15) | 2.4(-14) | | |
| | 2 | 7.1(-95) | 3.6(-94) | | 1.433541371(-12) |
| | 3 | 9.2(-574) | 4.5(-573) | 5.9994 | 1.433541371(-12) |
| WM6 | 1 | 8.1(-2) | 5.0(-1) | | |
| | 2 | 5.0(-19) | 2.9(-18) | | 1.754949400(-16) |
| | 3 | 1.7(-122) | 1.0(-121) | 5.9999 | 1.666475363(-16) |
| MM6 | 1 | 7.5(-15) | 4.1(-14) | | |
| | 2 | 1.2(-80) | 6.4(-80) | | 1.375781477(+1) |
| | 3 | 1.2(-409) | 6.3(-409) | 4.9997 | 9.148606524(+66) |
| LM6 | 1 | 6.4(-16) | 2.5(-15) | | |
| | 2 | 2.9(-87) | 5.3(-87) | | 1.447342836(-13) |
| | 3 | 9.1(-445) | 3.3(-444) | 4.9853 | 2.808772449(+1) |
| PM1_6 | 1 | 5.7(-18) | 7.0(-18) | | |
| | 2 | 4.9(-117) | 5.9(-117) | | 5.229619528(-14) |
| | 3 | 1.9(-711) | 2.3(-711) | 5.9999 | 5.367043263(-14) |
| PM2_6 | 1 | 5.7(-18) | 6.9(-18) | | |
| | 2 | 4.7(-117) | 5.6(-117) | | 1.698178527(-6) |
| | 3 | 1.4(-711) | 1.6(-711) | 5.9999 | 1.854013522(-6) |
Table 5. Comparison of different methods on Fisher's equation in Example 4.

| Methods | k | ‖F(x^(k))‖ | ‖x^(k+1) - x^(k)‖ | ρ | ‖x^(k+1) - x^(k)‖ / ‖x^(k) - x^(k-1)‖^6 |
|---|---|---|---|---|---|
| AM6 | 1 | 6.9(-3) | 1.5(-3) | | |
| | 2 | 4.2(-21) | 6.5(-22) | | 7.021836209(-5) |
| | 3 | 4.5(-131) | 7.0(-132) | 5.9941 | 9.022505886(-5) |
| HM6 | 1 | 6.0(-3) | 1.3(-3) | | |
| | 2 | 1.3(-18) | 2.0(-19) | | 5.236602433(-2) |
| | 3 | 1.8(-97) | 2.8(-98) | 4.9940 | 4.016099735(+14) |
| SM6 | 1 | 4.8(-3) | 1.0(-3) | | |
| | 2 | 2.9(-22) | 4.5(-23) | | 4.391559871(-5) |
| | 3 | 3.0(-138) | 4.7(-139) | 5.9945 | 5.610610203(-5) |
| WM6 | 1 | 4.8(-3) | 1.0(-3) | | |
| | 2 | 2.9(-22) | 4.5(-23) | | 4.391559871(-5) |
| | 3 | 3.0(-138) | 4.7(-139) | 5.9945 | 5.610610203(-5) |
| MM6 | 1 | 2.7(-3) | 5.5(-4) | | |
| | 2 | 9.5(-21) | 1.5(-21) | | 5.507288873(-2) |
| | 3 | 1.6(-108) | 2.5(-109) | 4.9964 | 2.350947999(+16) |
| LM6 | 1 | 4.4(-3) | 1.0(-3) | | |
| | 2 | 4.6(-20) | 7.4(-21) | | 5.815894917(-3) |
| | 3 | 2.2(-107) | 1.2(-108) | 5.1204 | 7.024389352(+12) |
| PM1_6 | 1 | 5.7(-18) | 7.0(-18) | | |
| | 2 | 4.9(-117) | 5.9(-117) | | 5.229619528(-14) |
| | 3 | 1.9(-711) | 2.3(-711) | 5.9999 | 5.367043263(-14) |
| PM2_6 | 1 | 5.7(-18) | 6.9(-18) | | |
| | 2 | 4.7(-117) | 5.6(-117) | | 5.194329469(-14) |
| | 3 | 1.4(-711) | 1.6(-711) | 5.9999 | 5.331099808(-14) |
Table 6. Comparison of different methods on Example 5.

| Methods | k | ‖F(x^(k))‖ | ‖x^(k+1) - x^(k)‖ | ρ | ‖x^(k+1) - x^(k)‖ / ‖x^(k) - x^(k-1)‖^6 |
|---|---|---|---|---|---|
| AM6 | 1 | 5.5(-2) | 1.8(-2) | | |
| | 2 | 8.2(-15) | 2.7(-15) | | 7.599151595(-5) |
| | 3 | 9.5(-92) | 3.2(-92) | 5.9993 | 7.695242316(-5) |
| HM6 | 1 | 3.0(-2) | 1.0(-2) | | |
| | 2 | 2.0(-14) | 6.7(-15) | | 6.625126396(-3) |
| | 3 | 2.7(-75) | 8.9(-76) | 4.9997 | 9.962407479(+9) |
| SM6 | 1 | 3.9(-2) | 1.3(-2) | | |
| | 2 | 6.4(-16) | 2.1(-16) | | 4.636076578(-5) |
| | 3 | 1.3(-98) | 4.3(-99) | 6.0000 | 4.674761498(-5) |
| WM6 | 1 | 4.2(-2) | 1.4(-2) | | |
| | 2 | 1.2(-15) | 3.9(-16) | | 5.254428593(-5) |
| | 3 | 5.7(-97) | 1.9(-97) | 5.9995 | 5.303300859(-5) |
| MM6 | 1 | 2.7(-2) | 8.9(-3) | | |
| | 2 | 5.3(-15) | 1.8(-15) | | 3.463613165(-3) |
| | 3 | 1.6(-78) | 5.4(-79) | 4.9993 | 1.781568229(+10) |
| LM6 | 1 | 3.4(-2) | 1.1(-2) | | |
| | 2 | 2.3(-16) | 7.7(-17) | | 3.721130070(-5) |
| | 3 | 2.3(-101) | 7.6(-102) | 5.9996 | 3.746683848(-5) |
| PM1_6 | 1 | 5.5(-3) | 1.8(-3) | | |
| | 2 | 3.4(-22) | 1.1(-22) | | 2.944381059(-6) |
| | 3 | 1.8(-137) | 5.9(-138) | 6.0000 | 2.946278255(-6) |
| PM2_6 | 1 | 2.8(-3) | 9.5(-4) | | |
| | 2 | 2.5(-24) | 8.5(-25) | | 1.178221976(-6) |
| | 3 | 1.3(-150) | 4.4(-151) | 6.0000 | 1.178511302(-6) |
Table 7. Number of iterations taken by different methods on Examples 1–5.

| Methods | Ex. 1 | Ex. 2 | Ex. 3 | Ex. 4 | Ex. 5 | Total | Average |
|---|---|---|---|---|---|---|---|
| AM6 | 3 | 20 | 3 | 3 | 4 | 33 | 6.6 |
| HM6 | 3 | D | 3 | 3 | 4 | 13 * | 3.25 * |
| SM6 | 3 | 13 | 3 | 3 | 4 | 26 | 5.2 |
| WM6 | 3 | 13 | 2 | 3 | 4 | 25 | 5 |
| MM6 | 3 | 12 | 3 | 3 | 4 | 25 | 5 |
| LM6 | 3 | 7 | 3 | 3 | 3 | 19 | 3.8 |
| PM1_6 | 3 | 4 | 2 | 2 | 3 | 14 | 2.8 |
| PM2_6 | 3 | 4 | 2 | 2 | 3 | 14 | 2.8 |

* Total and average are computed over all examples except Example 2, on which HM6 diverges (marked D).
