Article

Performance of a New Sixth-Order Class of Iterative Schemes for Solving Non-Linear Systems of Equations

by Marlon Moscoso-Martínez 1,2, Francisco I. Chicharro 1, Alicia Cordero 1,* and Juan R. Torregrosa 1

1 Institute for Multidisciplinary Mathematics, Universitat Politècnica de València, Camino de Vera s/n, 46022 València, Spain
2 Faculty of Sciences, Escuela Superior Politécnica de Chimborazo (ESPOCH), Panamericana Sur km 1 1/2, Riobamba 060106, Ecuador
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(6), 1374; https://doi.org/10.3390/math11061374
Submission received: 9 February 2023 / Revised: 7 March 2023 / Accepted: 10 March 2023 / Published: 12 March 2023
(This article belongs to the Special Issue Numerical Analysis and Modeling)

Abstract: This manuscript is focused on a new parametric class of multi-step iterative procedures for finding the solutions of systems of nonlinear equations. Starting from Ostrowski's scheme, the class is constructed by adding a Newton step with a Jacobian matrix taken from the previous step and employing a divided difference operator, resulting in a triparametric scheme with fourth-order convergence. The convergence order of the family can be accelerated to six by fixing two of the parameters, which yields a uniparametric family. We carried out a dynamical and numerical study to analyze the stability of this sixth-order family. Previous results for scalar functions allow us to isolate those elements of the family with stable performance for solving practical problems. In this regard, we present dynamical planes showing the complexity of the family. In addition, the numerical properties of the class are analyzed with several test problems.

1. Introduction

A large number of problems in Computer Science and related disciplines are mathematically characterized by a nonlinear equation or a nonlinear system of equations $F(x) = 0$, where $F: D \subseteq \mathbb{R}^n \rightarrow \mathbb{R}^n$ is a sufficiently Fréchet-differentiable function on an open convex set $D$. Finding a solution $\xi$ is a problem that has been tackled with multiple strategies in fields such as Numerical Analysis, Applied Mathematics, and Engineering.
Newton's scheme is the best-known method for finding a root $\xi \in D$ of $F$:
$$x^{(k+1)} = x^{(k)} - [F'(x^{(k)})]^{-1} F(x^{(k)}),$$
where $k \geq 0$ and the Jacobian matrix of $F$ at $x^{(k)}$ is denoted by $F'(x^{(k)})$.
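For reference, the following Python sketch (our own illustration, not the authors' Matlab implementation) shows Newton's iteration for a system, applied to the separated-variables test system $F(x_1,x_2)$ used later in Section 3; the function names and tolerances are ours.

```python
import numpy as np

def newton(F, J, x0, tol=1e-12, max_iter=50):
    """Newton's method for F(x) = 0 with Jacobian J."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        step = np.linalg.solve(J(x), F(x))   # solve F'(x) s = F(x)
        x = x - step
        if np.linalg.norm(step) < tol:
            return x, k + 1
    return x, max_iter

# Test system F(x1, x2) = (x1^2 - 1, x2^2 - 1) from Section 3
F = lambda v: np.array([v[0]**2 - 1, v[1]**2 - 1])
J = lambda v: np.array([[2 * v[0], 0.0], [0.0, 2 * v[1]]])

root, iters = newton(F, J, [0.9, 0.9])
print(root, iters)   # converges to (1, 1)
```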
In recent years, this problem has attracted the attention of many researchers, and the following techniques stand out. The extension of scalar iterative methods to vector ones [1,2,3,4] is a common practice, provided the extension is feasible, that affords solutions to n-dimensional problems. To improve the convergence order without compromising the computational cost, new steps are included with only one new evaluation of $F$, keeping the Jacobian $F'$ frozen [5,6,7,8].
We propose in this manuscript a new parametric class of multi-step iterative procedures (1) for solving systems of nonlinear equations. This family is a multidimensional extension of the set of methods defined in [9] for nonlinear equations. The starting point of this family is Ostrowski's scheme, to which a Newton-type step with a "frozen" Jacobian matrix is appended. Thus, it has an iterative expression with three arbitrary parameters and three steps:
$$\begin{aligned} y^{(k)} &= x^{(k)} - [F'(x^{(k)})]^{-1} F(x^{(k)}),\\ z^{(k)} &= y^{(k)} - \left[2[x^{(k)},y^{(k)};F] - F'(x^{(k)})\right]^{-1} F(y^{(k)}),\\ x^{(k+1)} &= z^{(k)} - \left(\alpha I + \beta u^{(k)} + \gamma v^{(k)}\right)[F'(x^{(k)})]^{-1} F(z^{(k)}), \end{aligned}$$
where $\alpha$, $\beta$, and $\gamma$ are arbitrary parameters, $v^{(k)} = [x^{(k)},y^{(k)};F]^{-1} F'(x^{(k)})$, $u^{(k)} = I - [F'(x^{(k)})]^{-1}[x^{(k)},y^{(k)};F]$, and $k = 0,1,2,\ldots$ The divided difference operator, whose definition can be found in [10], is the map $[\cdot,\cdot\,;F]: D \times D \subset \mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathcal{L}(\mathbb{R}^n)$ satisfying
$$[x,y;F](x-y) = F(x) - F(y), \quad \text{for all } x, y \in D.$$
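One standard construction satisfying this identity exactly is the component-wise (Ortega–Rheinboldt-type) first-order divided difference. The following Python sketch is our own illustration under that assumption; the paper does not specify which concrete realization of the operator its implementation uses.

```python
import numpy as np

def divided_difference(F, x, y):
    """Component-wise first-order divided difference [x, y; F].

    Column j is built from F evaluated at two points that differ only in the
    j-th coordinate, so that [x, y; F](x - y) = F(x) - F(y) holds exactly.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = x.size
    DD = np.zeros((n, n))
    for j in range(n):
        a = np.concatenate((x[:j + 1], y[j + 1:]))
        b = np.concatenate((x[:j], y[j:]))
        DD[:, j] = (F(a) - F(b)) / (x[j] - y[j])
    return DD

# quick check of the secant identity on the quadratic system G(x1, x2) used later in Section 3
G = lambda v: np.array([v[0]**2 + v[1]**2 - 1, v[0]**2 - v[1]**2 - 0.5])
x, y = np.array([0.8, 0.4]), np.array([0.7, 0.5])
print(np.allclose(divided_difference(G, x, y) @ (x - y), G(x) - G(y)))  # True
```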
Starting from (1), a uniparametric family is constructed that reaches a convergence order of six, which is corroborated by a supporting convergence analysis. The objective of the new family is to increase the convergence order without significantly increasing the computational cost.
The dynamic behavior of the rational operator obtained from iterative schemes applied to low-degree nonlinear polynomial systems is an effective tool for analyzing the stability and reliability of these numerical methods [6,11]. The stability of the family is analyzed using a real multidimensional discrete dynamical system. Here, we construct dynamical planes that show the complexity of this class. It should be noted that we extend the complex analysis presented in [9] for scalar functions to vector functions in order to choose stable members from the parameter spaces. Several numerical tests are performed to illustrate the efficiency and stability of the iterative schemes.
The outline of this manuscript is as follows: we introduce the proposed class of iterative procedures in Section 1; its convergence is analyzed in Section 2, where we show that an appropriate selection of the parameters yields a uniparametric family with sixth order of convergence. Section 3 is devoted to the dynamical analysis of this family in order to find those members with the best and worst performance in terms of stability. The numerical performance is checked in Section 4, and our conclusions are stated in Section 5.

2. Convergence Analysis of the Family

In this section, we analyze the convergence properties of the new triparametric iterative family. Although the order of the triparametric family is four, in the proof we use higher-order Taylor expansions, as they are useful for proving the order of the uniparametric family.
Theorem 1
(Tri-parametric class). Consider a sufficiently differentiable function $F: D \subseteq \mathbb{R}^n \rightarrow \mathbb{R}^n$ in a convex open set $D$. Let $\xi \in D$ be a solution of the nonlinear system $F(x) = 0$. Assume that $F'(x)$ is continuous and nonsingular at $\xi$ and that $x^{(0)}$ is a seed sufficiently close to $\xi$. Then, the sequence $\{x^{(k)}\}_{k\geq 0}$ obtained using expression (1) converges to the solution $\xi$ with order of convergence four. Under this hypothesis, its error equation is
$$e^{(k+1)} = (1 - \alpha - \gamma)\left(C_2^3 - C_3 C_2\right)(e^{(k)})^4 + O((e^{(k)})^5),$$
where $\alpha$, $\beta$, and $\gamma$ are arbitrary parameters, $C_q = \frac{1}{q!}[F'(\xi)]^{-1}F^{(q)}(\xi)$, $q = 2,3,\ldots$, and $e^{(k)} = x^{(k)} - \xi$.
Proof. 
Let us consider $\xi$ such that $F(\xi) = 0$ and $F'(\xi)$ is nonsingular, and let $x^{(k)} = \xi + e^{(k)}$. Using the Taylor series expansions of $F(x^{(k)})$ and $F'(x^{(k)})$ around $\xi$, we have
$$F(x^{(k)}) = F'(\xi)\left[e^{(k)} + C_2(e^{(k)})^2 + C_3(e^{(k)})^3 + C_4(e^{(k)})^4\right] + O((e^{(k)})^5)$$
and
$$F'(x^{(k)}) = F'(\xi)\left[I + 2C_2 e^{(k)} + 3C_3(e^{(k)})^2 + 4C_4(e^{(k)})^3\right] + O((e^{(k)})^4),$$
where the coefficients are defined as $C_q = \frac{1}{q!}[F'(\xi)]^{-1}F^{(q)}(\xi)$, $q = 2,3,\ldots$
Now, the Taylor expansion of the inverse $[F'(x^{(k)})]^{-1}$ can be stated as follows:
$$[F'(x^{(k)})]^{-1} = \left[I + X_2 e^{(k)} + X_3(e^{(k)})^2 + X_4(e^{(k)})^3 + X_5(e^{(k)})^4 + X_6(e^{(k)})^5\right][F'(\xi)]^{-1} + O((e^{(k)})^6),$$
where $X_2, X_3, \ldots, X_6$ are unknowns such that
$$[F'(x^{(k)})]^{-1} F'(x^{(k)}) = I.$$
Then, we have
$$\begin{aligned} X_2 &= -2C_2,\\ X_3 &= 4C_2^2 - 3C_3,\\ X_4 &= -8C_2^3 + 6C_2C_3 + 6C_3C_2 - 4C_4,\\ X_5 &= 16C_2^4 - 12C_2^2C_3 - 12C_2C_3C_2 + 8C_2C_4 + 9C_3^2 - 12C_3C_2^2 + 8C_4C_2 - 5C_5,\\ X_6 &= -32C_2^5 + 24C_2^3C_3 + 24C_2^2C_3C_2 - 16C_2^2C_4 + 24C_2C_3C_2^2 - 16C_2C_4C_2 - 18C_2C_3^2 + 10C_2C_5\\ &\quad - 18C_3^2C_2 + 24C_3C_2^3 - 18C_3C_2C_3 + 12C_3C_4 - 16C_4C_2^2 + 12C_4C_3 + 10C_5C_2 - 6C_6. \end{aligned}$$
Thus, by multiplying (5) by (3) and replacing them in the first step of (1), we have
$$y^{(k)} = \xi + C_2(e^{(k)})^2 + (2C_3 - 2C_2^2)(e^{(k)})^3 + A_4(e^{(k)})^4 + A_5(e^{(k)})^5 + A_6(e^{(k)})^6 + O((e^{(k)})^7),$$
where
$$\begin{aligned} A_4 &= 3C_4 - 4C_2C_3 + 4C_2^3 - 3C_3C_2,\\ A_5 &= 4C_5 - 6C_2C_4 + 8C_2^2C_3 - 6C_3^2 - 8C_2^4 + 6C_2C_3C_2 + 6C_3C_2^2 - 4C_4C_2,\\ A_6 &= 5C_6 - 8C_2C_5 + 12C_2^2C_4 - 9C_3C_4 - 16C_2^3C_3 + 12C_2C_3^2 + 12C_3C_2C_3 - 8C_4C_3 + 16C_2^5\\ &\quad - 12C_2^2C_3C_2 - 12C_2C_3C_2^2 + 8C_2C_4C_2 + 9C_3^2C_2 - 12C_3C_2^3 + 8C_4C_2^2 - 5C_5C_2. \end{aligned}$$
Again, by means of the Taylor series, we expand $F(y^{(k)})$ around $\xi$, with $e_y^{(k)} = y^{(k)} - \xi$, obtaining
$$F(y^{(k)}) = F'(\xi)\left[C_2(e^{(k)})^2 + (2C_3 - 2C_2^2)(e^{(k)})^3 + B_4(e^{(k)})^4 + B_5(e^{(k)})^5 + B_6(e^{(k)})^6\right] + O((e^{(k)})^7),$$
where
$$B_4 = A_4 + C_2A_2^2, \qquad B_5 = A_5 + C_2A_2A_3 + C_2A_3A_2, \qquad B_6 = A_6 + C_2A_2A_4 + C_2A_3^2 + C_2A_4A_2 + C_3A_2^3,$$
with $A_2 = C_2$ and $A_3 = 2C_3 - 2C_2^2$.
In order to prove the order of convergence of the second step of (1), we can use the Genocchi–Hermite formula (see [12]):
$$[x+h, x; F] = \int_0^1 F'(x + th)\,dt.$$
Expanding $F'(x+th)$ in Taylor series around $x$ and integrating, we have
$$\int_0^1 F'(x + th)\,dt = F'(x) + \frac{1}{2!}F''(x)h + \frac{1}{3!}F'''(x)h^2 + \frac{1}{4!}F^{(iv)}(x)h^3 + \frac{1}{5!}F^{(v)}(x)h^4 + O(h^5).$$
Denoting $e^{(k)} = x^{(k)} - \xi$ and taking into account that $F'(\xi)$ is nonsingular, we have
$$[x^{(k)}, y^{(k)}; F] = F'(\xi)\left[I + P_1 e^{(k)} + P_2 (e^{(k)})^2 + P_3 (e^{(k)})^3 + P_4 (e^{(k)})^4\right] + O((e^{(k)})^5),$$
where the error at the first step is denoted by $e_y^{(k)} = y^{(k)} - \xi$. In this expression,
$$\begin{aligned} P_1 &= C_2,\\ P_2 &= C_2^2 + C_3,\\ P_3 &= C_4 + 2C_2C_3 + C_3C_2 - 2C_2^3,\\ P_4 &= C_5 + 3C_2C_4 - 4C_2^2C_3 + 4C_2^4 - 3C_2C_3C_2 + 2C_3^2 - C_3C_2^2 + C_4C_2. \end{aligned}$$
Now, denoting $M = 2[x^{(k)},y^{(k)};F] - F'(x^{(k)})$, we have
$$M = F'(\xi)\left[I + M_2 (e^{(k)})^2 + M_3 (e^{(k)})^3 + M_4 (e^{(k)})^4\right] + O((e^{(k)})^5),$$
where
$$\begin{aligned} M_2 &= 2C_2^2 - C_3,\\ M_3 &= 2\left(-C_4 + 2C_2C_3 + C_3C_2 - 2C_2^3\right),\\ M_4 &= -3C_5 + 6C_2C_4 - 8C_2^2C_3 + 8C_2^4 - 6C_2C_3C_2 + 4C_3^2 - 2C_3C_2^2 + 2C_4C_2. \end{aligned}$$
The inverse of M must satisfy
$$M^{-1} M = I,$$
where
$$M^{-1} = \left[I + Y_1 e^{(k)} + Y_2 (e^{(k)})^2 + Y_3 (e^{(k)})^3 + Y_4 (e^{(k)})^4\right][F'(\xi)]^{-1} + O((e^{(k)})^5),$$
where $Y_1, \ldots, Y_4$ are unknowns. Then, replacing $M^{-1}$ and $M$ in (18), we have
$$\begin{aligned} Y_1 &= 0,\\ Y_2 &= -2C_2^2 + C_3,\\ Y_3 &= 2C_4 - 4C_2C_3 - 2C_3C_2 + 4C_2^3,\\ Y_4 &= 3C_5 - 6C_2C_4 + 6C_2^2C_3 - 4C_2^4 + 6C_2C_3C_2 - 3C_3^2 - 2C_4C_2. \end{aligned}$$
Next, we denote $L = M^{-1} F(y^{(k)})$ and obtain
$$L = C_2(e^{(k)})^2 + 2\left(C_3 - C_2^2\right)(e^{(k)})^3 + L_4(e^{(k)})^4 + L_5(e^{(k)})^5 + L_6(e^{(k)})^6 + O((e^{(k)})^7),$$
where
$$\begin{aligned} L_4 &= 3C_4 - 4C_2C_3 - 2C_3C_2 + 3C_2^3,\\ L_5 &= 4C_5 - 6C_2C_4 + 6C_2^2C_3 - 4C_3^2 - 4C_2^4 + 4C_2C_3C_2 + 2C_3C_2^2 - 2C_4C_2,\\ L_6 &= 5C_6 - 8C_2C_5 + 9C_2^2C_4 - 6C_3C_4 - 8C_2^3C_3 + 8C_2C_3^2 + 4C_3C_2C_3 - 4C_4C_3 + 6C_2^5\\ &\quad - 7C_2^2C_3C_2 - 5C_2C_3C_2^2 + 5C_2C_4C_2 + 3C_3^2C_2 - 2C_3C_2^3 + 2C_4C_2^2 - 2C_5C_2. \end{aligned}$$
Therefore,
$$z^{(k)} = y^{(k)} - L = \xi - \left(K_4(e^{(k)})^4 + K_5(e^{(k)})^5 + K_6(e^{(k)})^6\right) + O((e^{(k)})^7),$$
where
$$\begin{aligned} K_4 &= -C_2^3 + C_3C_2,\\ K_5 &= -2C_2^2C_3 + 2C_3^2 + 4C_2^4 - 2C_2C_3C_2 - 4C_3C_2^2 + 2C_4C_2,\\ K_6 &= -3C_2^2C_4 + 3C_3C_4 + 8C_2^3C_3 - 4C_2C_3^2 - 8C_3C_2C_3 + 4C_4C_3 - 10C_2^5\\ &\quad + 5C_2^2C_3C_2 + 7C_2C_3C_2^2 - 3C_2C_4C_2 - 6C_3^2C_2 + 10C_3C_2^3 - 6C_4C_2^2 + 3C_5C_2. \end{aligned}$$
Similarly, denoting $e_z^{(k)} = z^{(k)} - \xi$,
$$F(z^{(k)}) = F'(\xi)\left[-K_4(e^{(k)})^4 - K_5(e^{(k)})^5 - K_6(e^{(k)})^6\right] + O((e^{(k)})^7).$$
Using (5) and (25) and denoting $N = [F'(x^{(k)})]^{-1} F(z^{(k)})$, we have
$$N = \left(C_2^3 - C_3C_2\right)(e^{(k)})^4 + N_5(e^{(k)})^5 + N_6(e^{(k)})^6 + O((e^{(k)})^7),$$
where
$$\begin{aligned} N_5 &= 2C_2^2C_3 - 2C_3^2 - 6C_2^4 + 4C_2C_3C_2 + 4C_3C_2^2 - 2C_4C_2,\\ N_6 &= 3C_2^2C_4 - 3C_3C_4 - 12C_2^3C_3 + 8C_2C_3^2 + 8C_3C_2C_3 - 4C_4C_3 + 22C_2^5\\ &\quad - 13C_2^2C_3C_2 - 15C_2C_3C_2^2 + 7C_2C_4C_2 + 9C_3^2C_2 - 13C_3C_2^3 + 6C_4C_2^2 - 3C_5C_2. \end{aligned}$$
Then, replacing (5) and (14) in $u^{(k)}$,
$$u^{(k)} = C_2 e^{(k)} + \left(-3C_2^2 + 2C_3\right)(e^{(k)})^2 + O((e^{(k)})^3).$$
Now, we can find the Taylor series expansion of $[x^{(k)},y^{(k)};F]^{-1}$ as follows:
$$[x^{(k)},y^{(k)};F]^{-1} = \left[I + R_1 e^{(k)} + R_2 (e^{(k)})^2\right][F'(\xi)]^{-1} + O((e^{(k)})^3),$$
where $R_1$ and $R_2$ are unknowns such that
$$[x^{(k)},y^{(k)};F]^{-1}\,[x^{(k)},y^{(k)};F] = I.$$
Thus, we have
$$R_1 = -C_2, \qquad R_2 = -C_3.$$
By substituting (29) and (4) in $v^{(k)}$,
$$v^{(k)} = I + v_1 e^{(k)} + v_2 (e^{(k)})^2 + O((e^{(k)})^3),$$
where
$$v_1 = C_2, \qquad v_2 = 2C_3 - 2C_2^2.$$
Denoting $T = \left(\alpha I + \beta u^{(k)} + \gamma v^{(k)}\right) N$ and using (28) and (32), we have
$$T = (\alpha + \gamma)\left(C_2^3 - C_3C_2\right)(e^{(k)})^4 + T_5(e^{(k)})^5 + T_6(e^{(k)})^6 + O((e^{(k)})^7),$$
where
$$T_5 = (\alpha+\gamma)N_5 + (\beta+\gamma)u_1N_4, \qquad T_6 = (\alpha+\gamma)N_6 + (\beta+\gamma)u_1N_5 + (\beta u_2 + \gamma v_2)N_4,$$
with $N_4 = C_2^3 - C_3C_2$, $u_1 = C_2$, and $u_2 = -3C_2^2 + 2C_3$.
Finally, using (23) and (34),
$$x^{(k+1)} = \xi - \left(W_4(e^{(k)})^4 + W_5(e^{(k)})^5 + W_6(e^{(k)})^6\right) + O((e^{(k)})^7),$$
where
$$\begin{aligned} W_4 &= (\alpha + \gamma - 1)\left(C_2^3 - C_3C_2\right),\\ W_5 &= 2(\alpha + \gamma - 1)\left(C_2^2C_3 - C_3^2 + 2C_3C_2^2 - C_4C_2\right) - (6\alpha - \beta + 5\gamma - 4)\,C_2^4 + (4\alpha - \beta + 3\gamma - 2)\,C_2C_3C_2,\\ W_6 &= K_6 + (\alpha + \gamma)N_6 + (\beta + \gamma)\,C_2N_5 + \left[\beta\left(-3C_2^2 + 2C_3\right) + \gamma\left(2C_3 - 2C_2^2\right)\right]\left(C_2^3 - C_3C_2\right), \end{aligned}$$
and the error equation is
$$e^{(k+1)} = -W_4(e^{(k)})^4 - W_5(e^{(k)})^5 - W_6(e^{(k)})^6 + O((e^{(k)})^7) = (1 - \alpha - \gamma)\left(C_2^3 - C_3C_2\right)(e^{(k)})^4 + O((e^{(k)})^5).$$
This finishes the proof. □
From Theorem 1, the triparametric family is fourth-order convergent for any α , β , and γ . Nevertheless, the order of convergence can be accelerated by reducing the number of parameters, resulting in a uniparametric family.
Theorem 2
(Uni-parametric family). Consider a sufficiently differentiable function $F: D \subseteq \mathbb{R}^n \rightarrow \mathbb{R}^n$ defined in a convex open set $D$, and let $\xi \in D$ be a solution of the nonlinear system $F(x) = 0$. Assuming that $F'(x)$ is nonsingular and continuous at $\xi$ and that $x^{(0)}$ is a seed close enough to $\xi$, the sequence $\{x^{(k)}\}_{k\geq 0}$ obtained using (1) converges to $\xi$ with sixth order of convergence if and only if $\gamma = 1 - \alpha$ and $\beta = 1 + \alpha$. In that case, its error equation is
$$e^{(k+1)} = \left(C_3^2C_2 - C_3C_2^3 + 6C_2^5 - 6C_2^2C_3C_2\right)(e^{(k)})^6 + O((e^{(k)})^7),$$
where $C_q = \frac{1}{q!}[F'(\xi)]^{-1}F^{(q)}(\xi)$, $q = 2,3,\ldots$, and $e^{(k)} = x^{(k)} - \xi$.
Proof. 
Using the results of Theorem 1, in order to cancel $W_4$ and $W_5$, the coefficients of $(e^{(k)})^4$ and $(e^{(k)})^5$ in (38), the conditions $\alpha + \gamma = 1$, $6\alpha - \beta + 5\gamma = 4$, and $4\alpha - \beta + 3\gamma = 2$ must be satisfied. This system has infinitely many solutions, given by
$$\beta = 1 + \alpha \quad \text{and} \quad \gamma = 1 - \alpha,$$
with $\alpha$ being a free parameter. Then, replacing (39) in (37), we have the following:
$$W_4 = 0, \qquad W_5 = 0, \qquad W_6 = -C_3^2C_2 + C_3C_2^3 - 6C_2^5 + 6C_2^2C_3C_2,$$
for which the error equation is
$$e^{(k+1)} = -W_6(e^{(k)})^6 + O((e^{(k)})^7) = \left(C_3^2C_2 - C_3C_2^3 + 6C_2^5 - 6C_2^2C_3C_2\right)(e^{(k)})^6 + O((e^{(k)})^7).$$
This finishes the proof. □
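The linear conditions on the parameters can also be checked symbolically. The following SymPy snippet (our own check, not part of the original paper) confirms that the system $\alpha+\gamma=1$, $6\alpha-\beta+5\gamma=4$, $4\alpha-\beta+3\gamma=2$ has exactly the one-parameter family of solutions used in the proof.

```python
import sympy as sp

alpha, beta, gamma = sp.symbols("alpha beta gamma")
eqs = [sp.Eq(alpha + gamma, 1),
       sp.Eq(6*alpha - beta + 5*gamma, 4),
       sp.Eq(4*alpha - beta + 3*gamma, 2)]
print(sp.solve(eqs, (beta, gamma)))   # {beta: alpha + 1, gamma: 1 - alpha}
```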
As follows from Theorem 2, by replacing $\beta = 1 + \alpha$ and $\gamma = 1 - \alpha$ in (1), the triparametric family becomes a uniparametric family with sixth-order convergence. Thus, the iterative expression of the new three-step family depending on $\alpha$, denoted henceforth as MCCT($\alpha$), is
$$\begin{aligned} y^{(k)} &= x^{(k)} - [F'(x^{(k)})]^{-1} F(x^{(k)}),\\ z^{(k)} &= y^{(k)} - \left[2[x^{(k)},y^{(k)};F] - F'(x^{(k)})\right]^{-1} F(y^{(k)}),\\ x^{(k+1)} &= z^{(k)} - \left(\alpha I + (1+\alpha)\, u^{(k)} + (1-\alpha)\, v^{(k)}\right)[F'(x^{(k)})]^{-1} F(z^{(k)}), \end{aligned}$$
where $\alpha$ is an arbitrary parameter, $v^{(k)} = [x^{(k)},y^{(k)};F]^{-1} F'(x^{(k)})$, $u^{(k)} = I - [F'(x^{(k)})]^{-1}[x^{(k)},y^{(k)};F]$, and $k = 0,1,2,\ldots$
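To illustrate how one MCCT($\alpha$) iteration can be organized, the following Python sketch implements the three steps above with an explicit Jacobian and the component-wise divided difference; it is our own sketch under those assumptions, not the authors' Matlab code, and uses the test system $G(x_1,x_2)$ of Section 3 as an example.

```python
import numpy as np

def divided_difference(F, x, y):
    """Component-wise first-order divided difference [x, y; F]."""
    n = x.size
    DD = np.empty((n, n))
    for j in range(n):
        a = np.concatenate((x[:j + 1], y[j + 1:]))
        b = np.concatenate((x[:j], y[j:]))
        DD[:, j] = (F(a) - F(b)) / (x[j] - y[j])
    return DD

def mcct_step(F, J, x, alpha):
    """One iteration of the sixth-order MCCT(alpha) scheme."""
    Jx = J(x)
    y = x - np.linalg.solve(Jx, F(x))                 # Newton step
    DD = divided_difference(F, x, y)
    z = y - np.linalg.solve(2 * DD - Jx, F(y))        # Ostrowski-type step
    Jx_inv = np.linalg.inv(Jx)
    u = np.eye(x.size) - Jx_inv @ DD
    v = np.linalg.solve(DD, Jx)
    W = alpha * np.eye(x.size) + (1 + alpha) * u + (1 - alpha) * v
    return z - W @ (Jx_inv @ F(z))                    # weighted third step

# Example: system G(x1, x2) from Section 3, starting near (sqrt(3)/2, 1/2)
G = lambda v: np.array([v[0]**2 + v[1]**2 - 1, v[0]**2 - v[1]**2 - 0.5])
JG = lambda v: np.array([[2 * v[0], 2 * v[1]], [2 * v[0], -2 * v[1]]])

x = np.array([0.8, 0.4])
for _ in range(3):
    x = mcct_step(G, JG, x, alpha=0.0)
print(x)   # approaches (0.8660..., 0.5)
```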
Next, we analyze the stability of the MCCT( α ) family in order to select its best members, for which we use the real dynamical tools presented in Section 3.

3. Real Dynamics for Stability

This section is devoted to the analysis of the dynamical behavior of the rational operators associated with the iterative schemes of the MCCT($\alpha$) family. This analysis provides significant information about the reliability and stability of the class. We construct the rational operators and their dynamical planes in order to assess the performance of particular schemes from the different basins of attraction.

3.1. Rational Operator

Rational operators are built on low-degree nonlinear polynomial systems, as the stability of a method applied to these systems can be extrapolated to other multidimensional cases. Thus, we propose two nonlinear systems, one with separated variables, $F(x_1,x_2)$, and another with non-separated variables, $G(x_1,x_2)$, as follows:
$$F(x_1,x_2) = \left(x_1^2 - 1,\; x_2^2 - 1\right) = (0,0),$$
$$G(x_1,x_2) = \left(x_1^2 + x_2^2 - 1,\; x_1^2 - x_2^2 - \tfrac{1}{2}\right) = (0,0).$$
Proposition 1
(Rational operator $R_F$). Consider the polynomial system $F(x_1,x_2)$ provided in (43), with roots $(1,1)$, $(1,-1)$, $(-1,1)$, $(-1,-1) \in \mathbb{R}^2$. The rational operator associated with the MCCT($\alpha$) family applied to $F(x_1,x_2)$, with $\alpha \in \mathbb{R}$ being an arbitrary parameter, is
$$R_F(x_1,x_2,\alpha) = \left(R_{F_{11}}, R_{F_{12}}\right),$$
where
$$\begin{aligned} R_{F_{11}} &= \frac{1}{32}\left[\frac{(x_1^2-1)^4\left(\alpha + (\alpha-19)x_1^4 - 2(\alpha-1)x_1^2 + 1\right)}{4x_1^5(x_1^2+1)^2(3x_1^2+1)} + \frac{8\left(x_1^4 + 6x_1^2 + 1\right)}{x_1^3 + x_1} - \frac{\alpha\left(x_2^2-1\right)^4}{x_2^3(x_2^2+1)^2}\right],\\ R_{F_{12}} &= \frac{1}{32}\left[\frac{(x_2^2-1)^4\left(\alpha + (\alpha-19)x_2^4 - 2(\alpha-1)x_2^2 + 1\right)}{4x_2^5(x_2^2+1)^2(3x_2^2+1)} + \frac{8\left(x_2^4 + 6x_2^2 + 1\right)}{x_2^3 + x_2} - \frac{\alpha\left(x_1^2-1\right)^4}{x_1^3(x_1^2+1)^2}\right]. \end{aligned}$$
In Proposition 1, note that the rational operator R F ( x 1 , x 2 , α ) is obtained by substituting the nonlinear system F ( x 1 , x 2 ) into the iterative scheme of the MCCT( α ) family. To simplify R F , we can select a value of α that cancels terms in the expression in order to reduce it. It is easy to show that the rational operator is simpler for α = 0 and that there are fewer fixed and critical points, which can improve the performance of the associated method. In addition, the components of R F ( x 1 , x 2 , 0 ) are of separate variables, as shown by
$$R_F(x_1,x_2,0) = \left(\frac{77x_1^{12} + 782x_1^{10} + 775x_1^8 + 404x_1^6 + 11x_1^4 - 2x_1^2 + 1}{128x_1^5(x_1^2+1)^2(3x_1^2+1)},\; \frac{77x_2^{12} + 782x_2^{10} + 775x_2^8 + 404x_2^6 + 11x_2^4 - 2x_2^2 + 1}{128x_2^5(x_2^2+1)^2(3x_2^2+1)}\right).$$
Proposition 2
(Rational operator $R_G$). Consider the polynomial system $G(x_1,x_2)$ provided in (44), with roots $\left(\tfrac{\sqrt{3}}{2},\tfrac{1}{2}\right)$, $\left(\tfrac{\sqrt{3}}{2},-\tfrac{1}{2}\right)$, $\left(-\tfrac{\sqrt{3}}{2},\tfrac{1}{2}\right)$, $\left(-\tfrac{\sqrt{3}}{2},-\tfrac{1}{2}\right) \in \mathbb{R}^2$. The rational operator associated with the MCCT($\alpha$) family applied to $G(x_1,x_2)$, with $\alpha \in \mathbb{R}$ being an arbitrary parameter, is
$$R_G(x_1,x_2,\alpha) = \left(R_{G_{11}}, R_{G_{12}}\right),$$
where
$$\begin{aligned} R_{G_{11}} &= \frac{(3-4x_1^2)^4\left(9(\alpha+1) + 16(\alpha-19)x_1^4 - 24(\alpha-1)x_1^2\right)}{24576\,x_1^5(4x_1^2+1)(4x_1^2+3)^2} + \frac{16x_1^4 + 72x_1^2 + 9}{64x_1^3 + 48x_1} - \frac{\alpha\left(1-4x_2^2\right)^4}{512\,x_2^3(4x_2^2+1)^2},\\ R_{G_{12}} &= \frac{(1-4x_2^2)^4\left(\alpha + 16(\alpha-19)x_2^4 - 8(\alpha-1)x_2^2 + 1\right)}{8192\,x_2^5(4x_2^2+1)^2(12x_2^2+1)} + \frac{16x_2^4 + 24x_2^2 + 1}{64x_2^3 + 16x_2} - \frac{\alpha\left(3-4x_1^2\right)^4}{512\,x_1^3(4x_1^2+3)^2}. \end{aligned}$$
From Proposition 2, note that the rational operator $R_G(x_1,x_2,\alpha)$ is obtained by substituting the nonlinear system $G(x_1,x_2)$ into the iterative scheme of the MCCT($\alpha$) family. In the same way as for $R_F$, it is easy to prove that the rational operator $R_G$ is simpler for $\alpha = 0$. Moreover, the components of $R_G(x_1,x_2,0)$ have separated variables, as shown by
$$R_G(x_1,x_2,0) = \left(\frac{(3-4x_1^2)^4\left(-304x_1^4 + 24x_1^2 + 9\right)}{24576\,x_1^5(4x_1^2+1)(4x_1^2+3)^2} + \frac{16x_1^4 + 72x_1^2 + 9}{64x_1^3 + 48x_1},\; \frac{(1-4x_2^2)^4\left(-304x_2^4 + 8x_2^2 + 1\right)}{8192\,x_2^5(4x_2^2+1)^2(12x_2^2+1)} + \frac{16x_2^4 + 24x_2^2 + 1}{64x_2^3 + 16x_2}\right).$$
With these two rational operators R F ( x 1 , x 2 , α ) and R G ( x 1 , x 2 , α ) , we can study the stability of the MCCT( α ) family by means of dynamical planes built for different values of α . These planes show the complexity of the iterative class.

3.2. Fixed Points and Their Stability

The fixed points are calculated from the rational operators R F ( x 1 , x 2 , α ) and R G ( x 1 , x 2 , α ) provided in (45) and (47), respectively. Using these points, we can perform a stability analysis.
Proposition 3
($R_F$ fixed points). The real fixed points of $R_F(x_1,x_2,\alpha)$ are the roots of the equation $R_F(x_1,x_2,\alpha) = (x_1,x_2)$, that is,
$$fp_1 = (1,1), \quad fp_2 = (1,-1), \quad fp_3 = (-1,1), \quad fp_4 = (-1,-1),$$
corresponding to the roots of the polynomial system $F(x_1,x_2)$ provided in (43); moreover, they are superattracting. Other strange fixed points may appear, but their components are roots of polynomials of degree 120.
Proposition 4
($R_G$ fixed points). The real fixed points of $R_G(x_1,x_2,\alpha)$ are the roots of the equation $R_G(x_1,x_2,\alpha) = (x_1,x_2)$, that is,
$$fp_1 = \left(\tfrac{\sqrt{3}}{2},\tfrac{1}{2}\right), \quad fp_2 = \left(\tfrac{\sqrt{3}}{2},-\tfrac{1}{2}\right), \quad fp_3 = \left(-\tfrac{\sqrt{3}}{2},\tfrac{1}{2}\right), \quad fp_4 = \left(-\tfrac{\sqrt{3}}{2},-\tfrac{1}{2}\right),$$
corresponding to the roots of the polynomial system $G(x_1,x_2)$ provided in (44); these are superattracting as well. Again, other strange fixed points may appear, but their components are roots of polynomials of degree 120.
Propositions 3 and 4 establish a minimum of four fixed points for the polynomial systems $F(x_1,x_2)$ and $G(x_1,x_2)$. Of these, $fp_1$ to $fp_4$ correspond to the roots of the original systems and are attracting fixed points as well as critical points.
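As a quick numerical illustration of Propositions 1 and 3 (our own check, not part of the original analysis), the $\alpha = 0$ operator for $F$ given in Section 3.1 can be evaluated at a root: each component fixes $x = 1$ and its derivative vanishes there, which is the superattracting behavior claimed above.

```python
import numpy as np

def r(x):
    """One component of R_F(x1, x2, 0); for alpha = 0 the operator acts component-wise."""
    num = 77*x**12 + 782*x**10 + 775*x**8 + 404*x**6 + 11*x**4 - 2*x**2 + 1
    den = 128 * x**5 * (x**2 + 1)**2 * (3*x**2 + 1)
    return num / den

print(r(1.0))                                   # 1.0 -> (1, 1) is a fixed point
h = 1e-3
print((r(1.0 + h) - r(1.0 - h)) / (2 * h))      # close to zero -> superattracting
```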

3.3. Dynamical Planes

We can perform a stability analysis of the MCCT($\alpha$) family by representing dynamical planes of the rational operators $R_F(x_1,x_2,\alpha)$ and $R_G(x_1,x_2,\alpha)$. Two values of $\alpha$ with different behavior in the parameter space of Figure 1 have been chosen; the value $\alpha = 0$ lies in the red zone, which implies convergence, while the value $\alpha = 200$ lies in the black zone, where convergence is not guaranteed. These parameter spaces were obtained for the MCCT($\alpha$) family in the scalar case [9], and their results are then extrapolated to the vector case.
A dynamical plane is represented by a mesh of $400 \times 400$ points in $\mathbb{R}^2$. Each point of the mesh is a seed of the iterative process. The convergence of the scheme is shown with a maximum of 50 iterations and a stopping criterion of $\|x^{(k+1)} - x^{(k)}\| < 10^{-3}$. Each root is assigned a color. The color of each mesh point indicates the root it converges to, with black marking the points at which the maximum number of iterations is reached and brighter colors indicating a lower number of iterations. Fixed points are represented in white by a circle ('○'), critical points by a square ('□'), and attractors by an asterisk ('*'). The resulting planes were generated using Matlab R2020b.
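The following Python/Matplotlib sketch reproduces the procedure just described for the operator $R_F(x_1,x_2,0)$ of Section 3.1. It is our own simplified analogue of the Matlab routine (smaller mesh, no marking of fixed or critical points), and the mesh size, axis limits, and color handling are assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

np.seterr(all="ignore")   # iterates that blow up are simply left unclassified (black)

def r(x):
    """Scalar component of the rational operator R_F(x1, x2, 0)."""
    num = 77*x**12 + 782*x**10 + 775*x**8 + 404*x**6 + 11*x**4 - 2*x**2 + 1
    den = 128 * x**5 * (x**2 + 1)**2 * (3*x**2 + 1)
    return num / den

roots = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])

n, max_iter, tol = 200, 50, 1e-3      # the paper uses a 400 x 400 mesh
xs = np.linspace(-3.0, 3.0, n)
basin = np.zeros((n, n))              # 0 = no convergence, 1..4 = index of the root reached

for i, x2 in enumerate(xs):
    for j, x1 in enumerate(xs):
        p = np.array([x1, x2])
        for _ in range(max_iter):
            q = np.array([r(p[0]), r(p[1])])   # for alpha = 0 the operator acts component-wise
            if np.linalg.norm(q - p) < tol:
                d = np.linalg.norm(roots - q, axis=1)
                if d.min() < 1e-1:
                    basin[i, j] = 1 + d.argmin()
                break
            p = q

plt.imshow(basin, extent=[-3, 3, -3, 3], origin="lower")
plt.xlabel("$x_1$"); plt.ylabel("$x_2$")
plt.title("Basins of attraction of MCCT(0) applied to $F(x_1,x_2)$")
plt.show()
```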
The dynamical planes corresponding to R F ( x 1 , x 2 , 0 ) and R F ( x 1 , x 2 , 200 ) on the one hand and R G ( x 1 , x 2 , 0 ) and R G ( x 1 , x 2 , 200 ) on the other are shown in Figure 2 and Figure 3, respectively. In both cases, yellow convergence orbits can be observed.
In both cases, the method for α = 0 presents four basins of attraction associated with the roots. No black areas are observed. Consequently, this method shows good dynamical behavior. In contrast, in both R_F and R_G the method for α = 200 presents the same four basins of attraction associated with the roots, but now with reduced size, which lowers the likelihood of convergence to the solution. Black areas of slow convergence are observed. Consequently, this method shows poor dynamical performance.
From Figure 2 and Figure 3, it is apparent that the basins of attraction have similar behavior for the rational operators R F and R G with α = 0 . However, for α = 200 , these basins are reduced and the associated iterative methods do not easily converge to the solution.
If we consider nonlinear systems that involve logarithmic, trigonometric, and exponential as well as polynomial functions, the behavior of the representative members of the MCCT( α ) family for α = 0 and α = 200 is similar to what has already been studied. For example, when analyzing the systems shown in Table 1, we can observe in their dynamical planes (see Figure 4, Figure 5 and Figure 6) that the regions of the basins of attraction for α = 0 are much larger than for α = 200 , increasing the chances of converging to the solution for the first case. In addition, more regions of slow convergence or non-convergence are observed for the MCCT(200) iterative method as compared to the MCCT(0) method.

4. Numerical Results

Several numerical tests were carried out to check the performance of the MCCT($\alpha$) family, with the aim of verifying our theoretical results on convergence and stability. We employed the two members of the class used above as representatives, namely MCCT(0) and MCCT(200). These methods were applied to the same two-by-two nonlinear test systems seen above and to new three-by-three and four-by-four systems. Along with the corresponding roots, they are summarized in Table 2.
MCCT(0) was compared against three methods from the literature: Newton's [10], Ostrowski's [13], and HMT's [14] methods. Table 3 collects the numerical results, using initial guesses $x^{(0)}$ close to the solutions $\xi$.
The computations were performed in Matlab R2020b using variable precision arithmetic with a mantissa of 200 digits. For each scheme, we analyzed the number of iterations (iter) needed to converge to the solution, so that the stopping criteria $\|x^{(k+1)} - x^{(k)}\| < 10^{-100}$ or $\|F(x^{(k+1)})\| < 10^{-100}$ were satisfied.
The approximate computational order of convergence (ACOC) [15] was obtained. The ACOC column is ‘nc’ if the number of iterations reaches 50 or ‘-’ if the ACOC does not stabilize.
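For completeness, the ACOC of [15] can be computed from the iterate history as $p_k \approx \ln\!\left(\|x^{(k+1)}-x^{(k)}\| / \|x^{(k)}-x^{(k-1)}\|\right) / \ln\!\left(\|x^{(k)}-x^{(k-1)}\| / \|x^{(k-1)}-x^{(k-2)}\|\right)$. The following Python sketch (our own illustration) applies this formula to a synthetic sequence converging to the origin with sixth-order decay of the errors.

```python
import numpy as np

def acoc(iterates):
    """Approximate computational order of convergence from the iterate history."""
    xs = [np.asarray(x, dtype=float) for x in iterates]
    d = [np.linalg.norm(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
    return [np.log(d[k + 1] / d[k]) / np.log(d[k] / d[k - 1])
            for k in range(1, len(d) - 1)]

# synthetic errors e_{k+1} ~ e_k^6, with the sequence converging to the origin
errors = [1e-1, 1e-6, 1e-36, 0.0]
iterates = [np.array([e, e]) for e in errors]
print(acoc(iterates))   # value close to 6
```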
Table 3 indicates that MCCT(0) converges to $\xi$ in fewer iterations than the other methods in five of the seven nonlinear systems. The theoretical order of convergence is also confirmed by the ACOC, which is close to six. This method was then analyzed for seeds both near the solution and far from it, i.e., for $x^{(0)} \approx 3\xi$ and $x^{(0)} > 10\xi$, respectively. The results obtained are collected in Table 4 and Table 5.
The results in Table 4 and Table 5 show that MCCT(0) converges to the solution in six of the seven nonlinear test systems, regardless of the initial estimates used. The ACOC does not stabilize its value in several cases; however, when it does its value approaches six.
Our analysis of the MCCT(200) method is shown below. The numerical results for $x^{(0)} \approx \xi$ and $x^{(0)} \approx 3\xi$ are presented in Table 6 and Table 7.
MCCT(200) presents convergence problems for $x^{(0)} \approx 3\xi$, as it does not converge to the solution in three of the seven cases, revealing a dependence on the initial estimate and on the nonlinear test system used. In addition, in those systems in which the solution is reached, the number of iterations increases with respect to the MCCT(0) method under the same conditions.
Consequently, we conclude that the method for α = 0 is robust, being able to converge to the solution in few iterations for nearly all the seeds and systems used. In contrast, the method for α = 200 is unstable, as whether it tends to the solution depends on the seed and the system used. It can be observed that, when they converge, both methods reach the solution with order six. Therefore, the theoretical results on the dynamical behavior and convergence of the MCCT(α) family can be considered verified.

5. Conclusions

In conclusion, the designed class MCCT ( α ) for solving systems of nonlinear equations proves to be a highly efficient class with a convergence order of six.
We analyzed the convergence of the class of iterative schemes, assessed its stability using real multidimensional discrete dynamical systems, and verified its performance numerically using several test problems.
The stable members of the MCCT ( α ) family exhibited outstanding numerical performance. The method for α = 0 proved to be robust (stable) according to the real dynamics analysis performed. The method for α = 200 was shown to be unstable and chaotic, and it may not converge to the sought solution. The theoretical order of convergence was verified by the ACOC, which is close to six. Finally, numerical experiments were conducted to confirm the theoretical results.
Future lines of research consist of introducing a new step with similar characteristics to increase the order of convergence without considerable penalty to its computational cost, then analyzing its effect on the stability of the resulting family of methods.

Author Contributions

Conceptualization, A.C. and J.R.T.; methodology, A.C. and M.M.-M.; software, M.M.-M. and F.I.C.; validation, M.M.-M.; formal analysis, J.R.T.; investigation, A.C.; writing—original draft preparation, M.M.-M.; writing—review and editing, F.I.C. and A.C.; supervision, J.R.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Spanish Ministerio de Ciencia, Innovación y Universidades PGC2018-095896-B-C22.

Data Availability Statement

Data sharing not applicable.

Acknowledgments

The authors would like to thank the anonymous reviewers for their suggestions and comments that have improved the final version of this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Behl, R.; Bhalla, S.; Magreñán, Á.A.; Kumar, S. An efficient high order iterative scheme for large nonlinear systems with dynamics. J. Comput. Appl. Math. 2022, 404, 113249.
  2. Solaiman, O.S.; Hashim, I. An iterative scheme of arbitrary odd order and its basins of attraction for nonlinear systems. Comput. Mater. Contin. 2020, 66, 1427–1444.
  3. Sharma, R.; Sharma, J.; Kalra, N. A modified Newton-Özban composition for solving nonlinear systems. Int. J. Comput. Methods 2020, 17, 1950047.
  4. Yaseen, S.; Zafar, F. A new sixth-order Jarratt-type iterative method for systems of nonlinear equations. Arab. J. Math. 2022, 11, 585–599.
  5. Singh, H.; Sharma, J.R.; Kumar, S. A simple yet efficient two-step fifth-order weighted-Newton method for nonlinear models. Numer. Algorithms 2022.
  6. Kansal, M.; Cordero, A.; Bhalla, S.; Torregrosa, J.R. New fourth- and sixth-order classes of iterative methods for solving systems of nonlinear equations and their stability analysis. Numer. Algorithms 2021, 87, 1017–1060.
  7. Al-Obaidi, R.H.; Darvishi, M.T. Constructing a Class of Frozen Jacobian Multi-Step Iterative Solvers for Systems of Nonlinear Equations. Mathematics 2022, 10, 2952.
  8. Singh, M.K.; Singh, A.K. Study of Frozen-Type Newton-Like Method in a Banach Space with Dynamics. Ukr. Math. J. 2022, 74, 266–288.
  9. Cordero, A.; Moscoso-Martínez, M.; Torregrosa, J.R. Chaos and Stability in a New Iterative Family for Solving Nonlinear Equations. Algorithms 2021, 14, 101.
  10. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970.
  11. Cordero, A.; Soleymani, F.; Torregrosa, J.R. Dynamical analysis of iterative methods for nonlinear systems or how to deal with the dimension? Appl. Math. Comput. 2014, 244, 398–412.
  12. Hermite, C. Sur la formule d’interpolation de Lagrange. Reine Angew. Math. 1878, 84, 70–79.
  13. Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1960.
  14. Hueso, J.L.; Martínez, E.; Teruel, C. Convergence, efficiency and dynamics of new fourth and sixth order families of iterative methods for nonlinear systems. Comput. Appl. Math. 2015, 275, 412–420.
  15. Cordero, A.; Torregrosa, J.R. Variants of Newton’s Method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698.
Figure 1. Parameter spaces of free critical points of the MCCT($\alpha$) family applied to a nonlinear polynomial equation $(x - a)(x - b) = 0$, where $a, b \in \mathbb{C}$.
Figure 2. Dynamical planes for $R_F(x_1,x_2,\alpha)$: (a) convergence to $fp_1 = (1,1)$ for $\alpha = 0$ and an initial estimation $x^{(0)}$ close to the roots; (b) no convergence to any $fp$ for $\alpha = 200$ and an initial estimation $x^{(0)}$ close to the roots.
Figure 3. Dynamical planes for $R_G(x_1,x_2,\alpha)$: (a) convergence to $fp_1 \approx (0.87, 0.5)$ for $\alpha = 0$ and an initial estimation $x^{(0)}$ close to the roots; (b) no convergence to any $fp$ for $\alpha = 200$ and an initial estimation $x^{(0)}$ close to the roots.
Figure 4. Dynamical planes for system $M(x_1,x_2)$: (a) considering the MCCT(0) method and (b) considering the MCCT(200) method.
Figure 5. Dynamical planes for system $N(x_1,x_2)$: (a) considering the MCCT(0) method and (b) considering the MCCT(200) method.
Figure 6. Dynamical planes for system $O(x_1,x_2)$: (a) considering the MCCT(0) method and (b) considering the MCCT(200) method.
Table 1. Tested nonlinear systems for dynamical analysis.

| Nonlinear System | Some Roots |
| $M(x_1,x_2) = \left(e^{x_1}e^{x_2} + x_1\cos(x_2),\; x_1 + x_2 - 1\right) = (0,0)$ | $\xi \approx (-6.4165, 7.4165;\; -4.3816, 5.3816;\; 3.4706, -2.4706;\; 5.1572, -4.1572;\; 9.1554, -8.1554)^T$ |
| $N(x_1,x_2) = \left(\ln(x_1^2) - 2\ln(\cos(x_2)),\; x_1\tan(x_2)\right) = (0,0)$ | $\xi = (1, 0;\; -1, 0)^T$ |
| $O(x_1,x_2) = \left(x_1 + e^{x_2} - \cos(x_2) + 0.5,\; 3x_1 - x_2 - \sin(x_2)\right) = (0,0)$ | $\xi \approx (-0.2535, -0.3851;\; -0.9389, -1.8576;\; -1.0935, -4.0974)^T$ |
Table 2. Test nonlinear systems and their roots.

| Nonlinear Test System | Roots |
| $F(x_1,x_2) = \left(x_1^2 - 1,\; x_2^2 - 1\right) = (0,0)$ | $\xi = (1, 1)^T$ |
| $G(x_1,x_2) = \left(x_1^2 + x_2^2 - 1,\; x_1^2 - x_2^2 - \tfrac{1}{2}\right) = (0,0)$ | $\xi = \left(\tfrac{\sqrt{3}}{2}, \tfrac{1}{2}\right)^T$ |
| $M(x_1,x_2) = \left(e^{x_1}e^{x_2} + x_1\cos(x_2),\; x_1 + x_2 - 1\right) = (0,0)$ | $\xi \approx (3.4706, -2.4706)^T$ |
| $N(x_1,x_2) = \left(\ln(x_1^2) - 2\ln(\cos(x_2)),\; x_1\tan(x_2)\right) = (0,0)$ | $\xi = (1, 0)^T$ |
| $O(x_1,x_2) = \left(x_1 + e^{x_2} - \cos(x_2) + 0.5,\; 3x_1 - x_2 - \sin(x_2)\right) = (0,0)$ | $\xi \approx (-0.2535, -0.3851)^T$ |
| $P(x_1,x_2,x_3) = \left(\cos(x_2) - \sin(x_1),\; x_3^{x_1} - \tfrac{1}{x_2},\; e^{x_1} - x_3^2\right) = (0,0,0)$ | $\xi \approx (0.9096, 0.6612, 1.5758)^T$ |
| $Q(x_1,x_2,x_3,x_4) = \left(x_2x_3 + x_4(x_2+x_3),\; x_1x_3 + x_4(x_1+x_3),\; x_1x_2 + x_4(x_1+x_2),\; x_1x_2 + x_1x_3 + x_2x_3 - 1\right) = (0,0,0,0)$ | $\xi \approx (0.5774, 0.5774, 0.5774, -0.2887)^T$ |
Table 3. Numerical results of MCCT(0) and known schemes on test problems for $x^{(0)} \approx \xi$.

| System (initial guess) | Method | $\|x^{(k+1)} - x^{(k)}\|$ | $\|F(x^{(k+1)})\|$ | Iter | ACOC |
| $F(x_1,x_2)$, $x^{(0)} = (0.90, 0.90)^T$ | MCCT(0) | $4.1590 \times 10^{-41}$ | $1.0578 \times 10^{-162}$ | 3 | 6.0326 |
| | Newton | $4.0862 \times 10^{-82}$ | $1.1806 \times 10^{-163}$ | 7 | 2.0000 |
| | Ostrowski | $2.3572 \times 10^{-61}$ | $6.5488 \times 10^{-183}$ | 4 | - |
| | HMT | $2.1362 \times 10^{-52}$ | $5.5061 \times 10^{-208}$ | 3 | - |
| $G(x_1,x_2)$, $x^{(0)} = (0.80, 0.40)^T$ | MCCT(0) | $1.6140 \times 10^{-29}$ | $3.8389 \times 10^{-115}$ | 3 | 5.9785 |
| | Newton | $8.4816 \times 10^{-62}$ | $1.0174 \times 10^{-122}$ | 7 | 2.0000 |
| | Ostrowski | $3.1433 \times 10^{-46}$ | $8.7844 \times 10^{-137}$ | 4 | - |
| | HMT | $2.5671 \times 10^{-36}$ | $1.9467 \times 10^{-208}$ | 3 | - |
| $M(x_1,x_2)$, $x^{(0)} = (3.40, -2.40)^T$ | MCCT(0) | $1.1444 \times 10^{-49}$ | $1.3224 \times 10^{-131}$ | 3 | 5.5845 |
| | Newton | $2.4421 \times 10^{-57}$ | $2.1989 \times 10^{-114}$ | 6 | 2.0000 |
| | Ostrowski | $3.9750 \times 10^{-66}$ | $3.0486 \times 10^{-148}$ | 4 | - |
| | HMT | $8.1589 \times 10^{-54}$ | $7.7869 \times 10^{-208}$ | 3 | 5.9851 |
| $N(x_1,x_2)$, $x^{(0)} = (0.90, 0.10)^T$ | MCCT(0) | $1.0160 \times 10^{-76}$ | $5.8602 \times 10^{-308}$ | 4 | - |
| | Newton | $1.6691 \times 10^{-73}$ | $1.5673 \times 10^{-146}$ | 7 | 2.0000 |
| | Ostrowski | $6.5957 \times 10^{-87}$ | $1.4347 \times 10^{-259}$ | 5 | - |
| | HMT | $3.0359 \times 10^{-41}$ | $3.8934 \times 10^{-208}$ | 3 | 6.1133 |
| $O(x_1,x_2)$, $x^{(0)} = (-0.20, -0.30)^T$ | MCCT(0) | $1.2709 \times 10^{-37}$ | $6.3818 \times 10^{-107}$ | 3 | 5.9417 |
| | Newton | $2.3676 \times 10^{-73}$ | $3.4799 \times 10^{-146}$ | 7 | 2.0000 |
| | Ostrowski | $7.1310 \times 10^{-54}$ | $3.2193 \times 10^{-123}$ | 4 | - |
| | HMT | $2.3769 \times 10^{-43}$ | $4.0133 \times 10^{-208}$ | 3 | 5.9289 |
| $P(x_1,x_2,x_3)$, $x^{(0)} = (0.80, 0.60, 1.50)^T$ | MCCT(0) | $1.3178 \times 10^{-66}$ | $1.2841 \times 10^{-162}$ | 4 | - |
| | Newton | $1.4817 \times 10^{-63}$ | $2.0520 \times 10^{-126}$ | 7 | 1.9802 |
| | Ostrowski | $2.3811 \times 10^{-82}$ | $2.8239 \times 10^{-179}$ | 5 | - |
| | HMT | $6.1154 \times 10^{-24}$ | $5.9378 \times 10^{-139}$ | 3 | 6.2016 |
| $Q(x_1,x_2,x_3,x_4)$, $x^{(0)} = (0.50, 0.50, 0.50, -0.20)^T$ | MCCT(0) | $2.4839 \times 10^{-22}$ | $2.0342 \times 10^{-128}$ | 3 | 5.6492 |
| | Newton | $3.2002 \times 10^{-72}$ | $7.8134 \times 10^{-145}$ | 7 | 2.0156 |
| | Ostrowski | $4.1778 \times 10^{-49}$ | $3.0779 \times 10^{-157}$ | 4 | 4.0962 |
| | HMT | $2.0321 \times 10^{-44}$ | $1.6859 \times 10^{-208}$ | 3 | - |
Table 4. Numerical performance of MCCT(0) on test problems for $x^{(0)} \approx 3\xi$.

| System | $x^{(0)}$ | $\|x^{(k+1)} - x^{(k)}\|$ | $\|F(x^{(k+1)})\|$ | Iter | ACOC |
| $F(x_1,x_2)$ | $(3.00, 3.00)^T$ | $2.9511 \times 10^{-49}$ | $2.6815 \times 10^{-195}$ | 4 | 5.8233 |
| $G(x_1,x_2)$ | $(2.60, 1.50)^T$ | $1.0829 \times 10^{-49}$ | $6.7038 \times 10^{-196}$ | 4 | 5.8689 |
| $M(x_1,x_2)$ | $(10.41, -7.41)^T$ | $2.3053 \times 10^{-41}$ | $1.0439 \times 10^{-113}$ | 4 | 5.9611 |
| $N(x_1,x_2)$ | $(3.00, 0.00)^T$ | nc | nc | nc | nc |
| $O(x_1,x_2)$ | $(-0.76, -1.16)^T$ | $7.0387 \times 10^{-71}$ | $7.9136 \times 10^{-174}$ | 5 | - |
| $P(x_1,x_2,x_3)$ | $(2.73, 1.98, 4.73)^T$ | $6.8830 \times 10^{-58}$ | $4.1779 \times 10^{-146}$ | 5 | - |
| $Q(x_1,x_2,x_3,x_4)$ | $(1.73, 1.73, 1.73, -0.87)^T$ | $1.2880 \times 10^{-33}$ | $8.1032 \times 10^{-180}$ | 4 | - |
Table 5. Numerical performance of MCCT(0) on test problems for $x^{(0)} > 10\xi$.

| System | $x^{(0)}$ | $\|x^{(k+1)} - x^{(k)}\|$ | $\|F(x^{(k+1)})\|$ | Iter | ACOC |
| $F(x_1,x_2)$ | $(11.00, 11.00)^T$ | $3.4914 \times 10^{-55}$ | 0 | 5 | - |
| $G(x_1,x_2)$ | $(9.53, 5.50)^T$ | $1.2350 \times 10^{-55}$ | 0 | 5 | - |
| $M(x_1,x_2)$ | $(38.18, -27.18)^T$ | $4.9654 \times 10^{-57}$ | $3.4199 \times 10^{-145}$ | 5 | 5.4814 |
| $N(x_1,x_2)$ | $(11.00, 0.00)^T$ | nc | nc | nc | nc |
| $O(x_1,x_2)$ | $(-2.79, -4.24)^T$ | $3.7780 \times 10^{-39}$ | $2.3868 \times 10^{-110}$ | 3 | - |
| $P(x_1,x_2,x_3)$ | $(10.01, 7.27, 17.33)^T$ | $1.6228 \times 10^{-61}$ | $2.8246 \times 10^{-153}$ | 14 | - |
| $Q(x_1,x_2,x_3,x_4)$ | $(6.35, 6.35, 6.35, -3.18)^T$ | $1.0412 \times 10^{-45}$ | $1.9467 \times 10^{-208}$ | 5 | - |
Table 6. Numerical performance of MCCT(200) on test problems for $x^{(0)} \approx \xi$.

| System | $x^{(0)}$ | $\|x^{(k+1)} - x^{(k)}\|$ | $\|F(x^{(k+1)})\|$ | Iter | ACOC |
| $F(x_1,x_2)$ | $(0.90, 0.90)^T$ | $1.9038 \times 10^{-29}$ | $4.6447 \times 10^{-116}$ | 3 | 6.0626 |
| $G(x_1,x_2)$ | $(0.80, 0.40)^T$ | $5.1091 \times 10^{-67}$ | $1.9467 \times 10^{-208}$ | 4 | - |
| $M(x_1,x_2)$ | $(3.40, -2.40)^T$ | $1.6761 \times 10^{-43}$ | $2.8365 \times 10^{-119}$ | 3 | 5.9400 |
| $N(x_1,x_2)$ | $(0.90, 0.10)^T$ | $2.0202 \times 10^{-48}$ | $7.7869 \times 10^{-208}$ | 4 | - |
| $O(x_1,x_2)$ | $(-0.20, -0.30)^T$ | $3.8365 \times 10^{-85}$ | $6.7625 \times 10^{-202}$ | 4 | - |
| $P(x_1,x_2,x_3)$ | $(0.80, 0.60, 1.50)^T$ | $3.2604 \times 10^{-41}$ | $5.4472 \times 10^{-112}$ | 4 | - |
| $Q(x_1,x_2,x_3,x_4)$ | $(0.50, 0.50, 0.50, -0.20)^T$ | $8.7884 \times 10^{-87}$ | $1.9467 \times 10^{-208}$ | 4 | 5.6358 |
Table 7. Numerical performance of MCCT(200) on test problems for $x^{(0)} \approx 3\xi$.

| System | $x^{(0)}$ | $\|x^{(k+1)} - x^{(k)}\|$ | $\|F(x^{(k+1)})\|$ | Iter | ACOC |
| $F(x_1,x_2)$ | $(3.00, 3.00)^T$ | $9.6219 \times 10^{-49}$ | $3.0304 \times 10^{-193}$ | 5 | 5.7669 |
| $G(x_1,x_2)$ | $(2.60, 1.50)^T$ | $3.4103 \times 10^{-49}$ | $7.5761 \times 10^{-194}$ | 5 | 5.8239 |
| $M(x_1,x_2)$ | $(10.41, -7.41)^T$ | $5.0005 \times 10^{-75}$ | $1.0922 \times 10^{-182}$ | 9 | - |
| $N(x_1,x_2)$ | $(3.00, 0.00)^T$ | nc | nc | nc | nc |
| $O(x_1,x_2)$ | $(-0.76, -1.16)^T$ | nc | nc | nc | nc |
| $P(x_1,x_2,x_3)$ | $(2.73, 1.98, 4.73)^T$ | nc | nc | nc | nc |
| $Q(x_1,x_2,x_3,x_4)$ | $(1.73, 1.73, 1.73, -0.87)^T$ | $5.9657 \times 10^{-32}$ | $4.5520 \times 10^{-179}$ | 5 | 5.8489 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
