Article

Constructing a Class of Frozen Jacobian Multi-Step Iterative Solvers for Systems of Nonlinear Equations

Department of Mathematics, Faculty of Science, Razi University, Kermanshah 67149, Iran
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(16), 2952; https://doi.org/10.3390/math10162952
Submission received: 1 July 2022 / Revised: 5 August 2022 / Accepted: 8 August 2022 / Published: 16 August 2022
(This article belongs to the Special Issue Numerical Methods for Solving Nonlinear Equations)

Abstract: In this paper, a new class of frozen Jacobian multi-step iterative methods is presented for solving systems of nonlinear equations. The proposed algorithms are characterized by a high convergence order and an excellent efficiency index. The theoretical analysis is presented in detail. Finally, numerical experiments illustrate the performance of the proposed methods in comparison with known algorithms taken from the literature.

1. Introduction

Approximating a locally unique solution α of the nonlinear system
$$F(x) = 0 \qquad (1)$$
has many applications in engineering and mathematics [1,2,3,4]. In (1), we have n equations in n unknowns; that is, F is a vector-valued function of n variables, $F(x) = (f_1(x), f_2(x), \ldots, f_n(x))^t$, where each $f_k$, $k = 1, 2, \ldots, n$, is a scalar nonlinear function. Several problems arising in different areas of the natural and applied sciences take the form of the nonlinear system (1), and many real-life problems require the solution of such a system along the way; see, for example, [5,6,7,8,9]. It is known that finding an exact solution $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_n)^t$ of the nonlinear system (1) is not an easy task, especially when the equations contain logarithmic, trigonometric or exponential terms, or a combination of transcendental terms. Hence, in general, Equation (1) cannot be solved analytically, and iterative methods must be used. An iterative method starts from an initial approximation and constructs a sequence that converges to a solution of Equation (1) (for more details, see [10]).
The most commonly used iterative method to solve (1) is the classical Newton method, given by
$$x^{(k+1)} = x^{(k)} - J_F(x^{(k)})^{-1} F(x^{(k)}),$$
where $J_F(x)$ (or $F'(x)$) is the Jacobian matrix of F, and $x^{(k)}$ is the k-th approximation to the root of (1), starting from the initial guess $x^{(0)}$. It is well known that Newton's method converges quadratically, with efficiency index $\sqrt{2}$ [11]. Third- and higher-order methods such as the Halley and Chebyshev methods [12] have little practical value because they require the evaluation of the second Fréchet derivative. However, third- and higher-order multi-step methods can be good substitutes, because they only require evaluations of the function and its first derivative at different points.
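For reference, the classical Newton iteration above can be sketched in a few lines of NumPy. This is a minimal illustration with our own example system, tolerances and function names, not code from the paper:

```python
import numpy as np

def newton(F, J, x0, tol=1e-10, max_iter=50):
    """Classical Newton method: solve J_F(x_k) d = F(x_k), then x_{k+1} = x_k - d."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            return x, k
        d = np.linalg.solve(J(x), Fx)  # one linear solve (LU) per iteration
        x = x - d
    return x, max_iter

# Illustrative system: x1^2 + x2^2 = 1, x1 = x2; root (1/sqrt(2), 1/sqrt(2))
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2.0*x[0], 2.0*x[1]], [1.0, -1.0]])
root, iters = newton(F, J, [1.0, 0.5])
```

Note that every iteration re-evaluates and refactors the Jacobian; the frozen Jacobian idea discussed next removes exactly this repeated cost.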
In recent decades, many authors have tried to design iterative procedures with better efficiency and a higher order of convergence than the Newton scheme; see, for example, [13,14,15,16,17,18,19,20,21,22,23,24] and the references therein. The accuracy of the computed solution depends strongly on the efficiency of the chosen algorithm. Furthermore, at each step of any such method, a linear system must be solved exactly, which is expensive in actual applications, especially when the system size n is very large; a higher-order method is of little use if its extra cost outweighs the gain in convergence order. Therefore, the important aim in developing any new algorithm is to achieve a high convergence order while requiring as few function evaluations, derivative evaluations and matrix inversions as possible. Thus, here we focus on the technique of frozen Jacobian multi-step iterative algorithms. This idea is computationally attractive and economical for constructing iterative solvers, because the inversion of the Jacobian matrix (in the sense of its LU decomposition) is performed only once per cycle. Many researchers have reduced the computational cost of such algorithms using frozen Jacobian multi-step techniques [25,26,27,28].
In this work, we construct a new class of frozen Jacobian multi-step iterative methods for solving systems of nonlinear equations. The class delivers high-order convergence with an excellent efficiency index. The theoretical analysis is presented in full, and the ability of the methods is compared with some known algorithms by solving several nonlinear systems.
The rest of this paper is organized as follows. In the following section, we present the new methods and derive their order of convergence; their computational efficiency is also discussed in general. Some numerical examples are considered in Section 3 and Section 4 to show the asymptotic behavior of these methods. Finally, a brief concluding remark is presented in Section 5.

2. Constructing New Methods

In this section, two high-order frozen Jacobian multi-step iterative methods for solving systems of nonlinear equations are presented. They are obtained by increasing the convergence order of Newton's method while simultaneously decreasing its computational cost. The framework of these frozen Jacobian multi-step iterative Algorithms (FJA) can be described as
No. of steps $= m > 1$; order of convergence $= m + 1$; function evaluations $= m$; Jacobian evaluations $= 1$; No. of LU decompositions $= 1$.
FJA:
$y_0 = \text{initial guess}$
$y_1 = y_0 - J_F(y_0)^{-1} F(y_0)$
for $i = 1 : m - 1$
  $\phi_i = J_F(y_0)^{-1}\big(F(y_i) + F(y_{i-1})\big)$
  $y_{i+1} = y_{i-1} - \phi_i$
end
$y_0 = y_m. \qquad (2)$
In (2), for an m-step method ($m > 1$), one needs m function evaluations and only one Jacobian evaluation; furthermore, only one LU decomposition is required. The order of convergence of such an FJA method is $m + 1$. The right-hand side of (2) briefly describes the algorithm.
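Scheme (2) translates directly into code. Below is a hedged NumPy sketch (the function names are ours; for brevity the frozen Jacobian is inverted once with `np.linalg.inv`, standing in for the single LU factorization that would be reused in practice):

```python
import numpy as np

def fja_cycle(F, J, y0, m):
    """One cycle of the m-step frozen-Jacobian scheme (2):
    J_F is evaluated and inverted only at y0."""
    J0_inv = np.linalg.inv(J(y0))            # the single "factorization" per cycle
    y_prev, y = y0, y0 - J0_inv @ F(y0)      # Newton predictor y_1
    for _ in range(m - 1):                   # steps producing y_2, ..., y_m
        y_prev, y = y, y_prev - J0_inv @ (F(y) + F(y_prev))
    return y                                 # y_m, an order-(m+1) update

def fja_solve(F, J, x0, m=2, tol=1e-12, max_cycles=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_cycles):
        if np.linalg.norm(F(x)) < tol:
            break
        x = fja_cycle(F, J, x, m)
    return x

# m = 2 reproduces the two-step method M3 of Section 2.1 (illustrative system)
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2.0*x[0], 2.0*x[1]], [1.0, -1.0]])
sol = fja_solve(F, J, [1.0, 0.5], m=2)
```

The only per-cycle costs beyond m function evaluations are back-substitutions against the frozen factorization, which is precisely why the index analysis below favors this scheme.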
In the following subsections, by choosing two different values for m, a third- and a fourth-order frozen Jacobian multi-step iterative algorithm are presented.

2.1. The Third-Order FJA

First, we investigate the case $m = 2$, that is,
$$y^{(k)} = x^{(k)} - J_F(x^{(k)})^{-1} F(x^{(k)}), \qquad x^{(k+1)} = x^{(k)} - J_F(x^{(k)})^{-1}\big(F(y^{(k)}) + F(x^{(k)})\big), \qquad (3)$$
which we denote by $M_3$.

2.1.1. Convergence Analysis

In this part, we prove that the order of convergence of method (3) is three. First, we need the definition of the Fréchet derivative.
Definition 1
([29]). Let F be an operator which maps a Banach space X into a Banach space Y. If there exists a bounded linear operator T from X into Y such that
$$\lim_{\|y\| \to 0} \frac{\|F(x + y) - F(x) - T(y)\|}{\|y\|} = 0,$$
then F is said to be Fréchet differentiable at x, and we write $F'(x) = T$.
For more details on Fréchet differentiability and the Fréchet derivative, we refer the interested reader to the review article by Emmanuel [30] and the references therein.
Theorem 1.
Let $F : I \subseteq \mathbb{R}^n \to \mathbb{R}^n$ be Fréchet differentiable at each point of an open convex neighborhood I of α, a solution of the system $F(x) = 0$. Suppose that $J_F(x)$ is continuous and nonsingular at α. Then, for an initial guess sufficiently close to α, the sequence $\{x^{(k)}\}$ ($k \ge 0$) obtained using the iterative method (3) converges to α, and its rate of convergence is three.
Proof. 
Suppose that $E_n = x^{(n)} - \alpha$. Using Taylor's expansion [31], we obtain
$$F(x^{(n)}) = F(\alpha) + F'(\alpha)E_n + \frac{1}{2!}F''(\alpha)E_n^2 + \frac{1}{3!}F'''(\alpha)E_n^3 + \frac{1}{4!}F^{(4)}(\alpha)E_n^4 + \cdots,$$
and, as α is a root of F, $F(\alpha) = 0$. One may therefore write the following expansions of $F(x^{(n)})$ and $J_F(x^{(n)})$ in a neighborhood of α by Taylor series [32]:
$$F(x^{(n)}) = F'(\alpha)\big[E_n + C_2E_n^2 + C_3E_n^3 + C_4E_n^4 + C_5E_n^5\big] + O(\|E_n\|^6), \qquad (4)$$
$$J_F(x^{(n)}) = F'(\alpha)\big[I + 2C_2E_n + 3C_3E_n^2 + 4C_4E_n^3 + 5C_5E_n^4 + 6C_6E_n^5\big] + O(\|E_n\|^6), \qquad (5)$$
wherein $C_n = [F'(\alpha)]^{-1}F^{(n)}(\alpha)/n!$ and I is the identity matrix of the same order as the Jacobian matrix. Note that $C_iE_n^{i-1} \in L(\mathbb{R}^n)$. Using (4) and (5), we obtain
$$J_F(x^{(n)})^{-1}F(x^{(n)}) = E_n - C_2E_n^2 + (2C_2^2 - 2C_3)E_n^3 + (-4C_2^3 + 7C_2C_3 - 3C_4)E_n^4 + (32C_2^5 + 8C_2^4 - 20C_2^2C_3 + 10C_2C_4 + 6C_3^2 - 4C_5)E_n^5 + O(\|E_n\|^6).$$
Since $y^{(n)} = x^{(n)} - J_F(x^{(n)})^{-1}F(x^{(n)})$, we find
$$y^{(n)} = \alpha + C_2E_n^2 + (-2C_2^2 + 2C_3)E_n^3 + (4C_2^3 - 7C_2C_3 + 3C_4)E_n^4 + (-32C_2^5 - 8C_2^4 + 20C_2^2C_3 - 10C_2C_4 - 6C_3^2 + 4C_5)E_n^5 + O(\|E_n\|^6). \qquad (6)$$
By the definition of the error term $E_n$, the error of $y^{(n)}$ as an approximation of α, that is, $y^{(n)} - \alpha$, is obtained from the right-hand side of Equation (6). Similarly, the Taylor expansion of $F(y^{(n)})$ is
$$F(y^{(n)}) = F'(\alpha)\big[C_2E_n^2 + (-2C_2^2 + 2C_3)E_n^3 + (5C_2^3 - 7C_2C_3 + 3C_4)E_n^4 + (-32C_2^5 - 12C_2^4 + 24C_2^2C_3 - 10C_2C_4 - 6C_3^2 + 4C_5)E_n^5\big] + O(\|E_n\|^6). \qquad (7)$$
From (4) and (7), we obtain
$$F(x^{(n)}) + F(y^{(n)}) = F'(\alpha)\big[E_n + 2C_2E_n^2 + (-2C_2^2 + 3C_3)E_n^3 + (5C_2^3 - 7C_2C_3 + 4C_4)E_n^4 + (-32C_2^5 - 12C_2^4 + 24C_2^2C_3 - 10C_2C_4 - 6C_3^2 + 5C_5)E_n^5\big] + O(\|E_n\|^6).$$
Thus,
$$J_F(x^{(n)})^{-1}\big(F(x^{(n)}) + F(y^{(n)})\big) = E_n - 2C_2^2E_n^3 + (9C_2^3 - 7C_2C_3)E_n^4 + (-30C_2^4 + 44C_2^2C_3 - 10C_2C_4 - 6C_3^2 + C_5)E_n^5 + O(\|E_n\|^6).$$
Finally, since
$$x^{(n+1)} = x^{(n)} - J_F(x^{(n)})^{-1}\big(F(x^{(n)}) + F(y^{(n)})\big),$$
we have
$$x^{(n+1)} = \alpha + 2C_2^2E_n^3 - (9C_2^3 - 7C_2C_3)E_n^4 + (30C_2^4 - 44C_2^2C_3 + 10C_2C_4 + 6C_3^2 - C_5)E_n^5 + O(\|E_n\|^6). \qquad (8)$$
Clearly, the error Equation (8) shows that the order of convergence of the frozen Jacobian multi-step iterative method (3) is three. This completes the proof. □
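As a sanity check on the error equation (8), for a scalar equation the ratio $|e_{k+1}|/|e_k|^3$ should approach the constant $2C_2^2$: for $f(x) = x^2 - 2$ we have $C_3 = C_4 = 0$ and $2C_2^2 = 1/4$ at the root $\sqrt{2}$. A small numerical verification of our own (not from the paper):

```python
import math

f  = lambda x: x * x - 2.0     # root sqrt(2); there C2 = f''/(2 f') = 1/(2 sqrt(2))
df = lambda x: 2.0 * x
root = math.sqrt(2.0)

x, ratios = 1.7, []
for _ in range(2):             # two cycles keep the errors above machine precision
    y = x - f(x) / df(x)                   # Newton predictor of method (3)
    x_new = x - (f(y) + f(x)) / df(x)      # corrector with the frozen derivative
    ratios.append(abs(x_new - root) / abs(x - root) ** 3)
    x = x_new
# the last ratio is close to 2 * C2**2 = 0.25
```

The ratio settles near 0.25 already after the second cycle, matching the leading coefficient $2C_2^2$ of (8).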

2.1.2. The Computational Efficiency

In this section, we compare the computational efficiency of our third-order scheme (3), denoted by $M_3$, with some existing third-order methods for systems of nonlinear equations, using two well-known efficiency indices. The first is the classical efficiency index [33],
$$IE = p^{1/c},$$
where p is the rate of convergence and c is the total computational cost per iteration in terms of the number of functional evaluations; that is, $c = rn + mn^2$, where r is the number of function evaluations and m is the number of Jacobian matrix evaluations needed per iteration.
It is well known that computing an LU factorization by any of the standard methods normally needs $2n^3/3$ flops in floating point operations, while solving the two resulting triangular systems needs $2n^2$ flops.
The second criterion is the flops-like efficiency index (FLEI), defined by Montazeri et al. [34] as
$$FLEI = p^{1/c},$$
where p is the order of convergence of the method and c denotes the total computational cost per iteration in terms of the number of functional evaluations together with the cost, in flops, of the LU factorization and of solving the two triangular systems.
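Both indices are easy to tabulate. The sketch below evaluates IE and FLEI for $M_3$ and $M_{3,1}$ at $n = 100$, using the per-iteration costs listed in Table 1 (the helper function and the choice of n are ours):

```python
def eff_index(p, c):
    """Efficiency index p**(1/c): order p per unit of computational cost c."""
    return p ** (1.0 / c)

n = 100

# M3: two function evaluations (2n scalars) + one Jacobian (n^2 scalars)
ie_m3  = eff_index(3, 2*n + n**2)
# M3,1: one function evaluation + two Jacobian evaluations
ie_m31 = eff_index(3, n + 2*n**2)

# The flops-like index adds the LU cost (2n^3/3 per factorization)
# and the triangular solves (2n^2 per right-hand side), as in Table 1.
flei_m3  = eff_index(3, 2*n**3/3 + 5*n**2 + 2*n)
flei_m31 = eff_index(3, 4*n**3/3 + 6*n**2 + n)
```

Since both methods share the order $p = 3$, the comparison reduces to the cost exponents, and the single factorization of $M_3$ makes its denominator roughly half that of $M_{3,1}$ for large n.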
As the first comparison, we compare $M_3$ with the third-order method given by Darvishi [35], denoted by $M_{3,1}$:
$$y^{(k)} = x^{(k)} - J_F(x^{(k)})^{-1}F(x^{(k)}), \qquad x^{(k+1)} = x^{(k)} - 2\big(J_F(x^{(k)}) + J_F(y^{(k)})\big)^{-1}F(x^{(k)}).$$
The second method, denoted by $M_{3,2}$, is the following third-order method introduced by Hernández [36]:
$$y^{(k)} = x^{(k)} - \tfrac{1}{2}J_F(x^{(k)})^{-1}F(x^{(k)}), \qquad x^{(k+1)} = x^{(k)} + J_F(x^{(k)})^{-1}\big(J_F(y^{(k)}) - 2J_F(x^{(k)})\big)J_F(x^{(k)})^{-1}F(x^{(k)}).$$
Another method is the following third-order iterative method given by Babajee et al. [37], $M_{3,3}$:
$$y^{(k)} = x^{(k)} - J_F(x^{(k)})^{-1}F(x^{(k)}), \qquad x^{(k+1)} = x^{(k)} + \tfrac{1}{2}J_F(x^{(k)})^{-1}\big(J_F(y^{(k)}) - 3J_F(x^{(k)})\big)J_F(x^{(k)})^{-1}F(x^{(k)}).$$
Finally, the following third-order iterative method, $M_{3,4}$ [38], is considered:
$$y^{(k)} = x^{(k)} - \tfrac{2}{3}J_F(x^{(k)})^{-1}F(x^{(k)}), \qquad x^{(k+1)} = x^{(k)} - 4\big(J_F(x^{(k)}) + 3J_F(y^{(k)})\big)^{-1}F(x^{(k)}).$$
The comparison of computational efficiency reveals that our method $M_3$ outperforms the methods $M_{3,1}$, $M_{3,2}$, $M_{3,3}$ and $M_{3,4}$, as presented in Table 1 and in Figure 1 and Figure 2.

2.2. The Fourth-Order FJA

By setting $m = 3$ in FJA, the following three-step algorithm is deduced:
$$y^{(k)} = x^{(k)} - J_F(x^{(k)})^{-1}F(x^{(k)}),$$
$$z^{(k)} = x^{(k)} - J_F(x^{(k)})^{-1}\big(F(y^{(k)}) + F(x^{(k)})\big),$$
$$x^{(k+1)} = y^{(k)} - J_F(x^{(k)})^{-1}\big(F(z^{(k)}) + F(y^{(k)})\big). \qquad (9)$$
In the following subsections, the order of convergence and the efficiency indices are obtained for the method described in (9).

2.2.1. Convergence Analysis

The frozen Jacobian three-step iterative process (9) has convergence order four, using three evaluations of the function F and one first-order Fréchet derivative $F'$ per full iteration. To avoid repetition, we only sketch the proof. Similarly to the proof of Theorem 1, by setting $z^{(k)} = x^{(k+1)}$ in (8) we obtain
$$F(z^{(k)}) = F'(\alpha)\big[2C_2^2E_n^3 + (-9C_2^3 + 7C_2C_3)E_n^4 + (30C_2^4 - 44C_2^2C_3 + 10C_2C_4 + 6C_3^2 - C_5)E_n^5\big] + O(\|E_n\|^6).$$
Hence,
$$F(z^{(k)}) + F(y^{(k)}) = F'(\alpha)\big[C_2E_n^2 + 2C_3E_n^3 + (-4C_2^3 + 3C_4)E_n^4 + (-32C_2^5 + 18C_2^4 - 20C_2^2C_3 + 3C_5)E_n^5\big] + O(\|E_n\|^6). \qquad (10)$$
Therefore, from (5) and (10), we find
$$J_F(x^{(k)})^{-1}\big(F(z^{(k)}) + F(y^{(k)})\big) = C_2E_n^2 + (-2C_2^2 + 2C_3)E_n^3 + (-7C_2C_3 + 3C_4)E_n^4 + (18C_2^4 - 10C_2C_4 - 6C_3^2 + 3C_5)E_n^5 + O(\|E_n\|^6). \qquad (11)$$
Since $x^{(k+1)} = y^{(k)} - J_F(x^{(k)})^{-1}\big(F(z^{(k)}) + F(y^{(k)})\big)$, from (6) and (11) the following result is obtained:
$$x^{(k+1)} = \alpha + 4C_2^3E_n^4 + (-32C_2^5 - 26C_2^4 + 20C_2^2C_3 + C_5)E_n^5 + O(\|E_n\|^6). \qquad (12)$$
This completes the proof, since error Equation (12) shows that the order of convergence of the frozen Jacobian multi-step iterative method (9) is four.
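The fourth order in (12) can again be checked numerically. Running one cycle of method (9) on a scalar equation from two nearby starting errors and comparing log-error ratios estimates the convergence order p (our own check; $f(x) = x^2 - 2$ and the step sizes are illustrative):

```python
import math

f  = lambda x: x * x - 2.0
df = lambda x: 2.0 * x
root = math.sqrt(2.0)

def m4_cycle(x):
    """One cycle of the three-step frozen-Jacobian method (9), scalar case."""
    d = df(x)                      # derivative frozen at the cycle's start
    y = x - f(x) / d
    z = x - (f(y) + f(x)) / d
    return y - (f(z) + f(y)) / d

e1, e2 = 1e-2, 2e-2
E1 = abs(m4_cycle(root + e1) - root)
E2 = abs(m4_cycle(root + e2) - root)
p = math.log(E2 / E1) / math.log(e2 / e1)   # estimated convergence order
```

With these step sizes the estimate lands close to 4, in agreement with the leading term $4C_2^3 E_n^4$ of the error equation (12).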

2.2.2. The Computational Efficiency

Now, we compare the computational efficiency of our fourth-order scheme (9), denoted by $M_4$, with some existing fourth-order methods. The considered methods are: the fourth-order method $M_{4,1}$ given by Sharma et al. [39],
$$y^{(k)} = x^{(k)} - \tfrac{2}{3}J_F(x^{(k)})^{-1}F(x^{(k)}),$$
$$x^{(k+1)} = x^{(k)} - \tfrac{1}{2}\Big[-I + \tfrac{9}{4}J_F(y^{(k)})^{-1}J_F(x^{(k)}) + \tfrac{3}{4}J_F(x^{(k)})^{-1}J_F(y^{(k)})\Big]J_F(x^{(k)})^{-1}F(x^{(k)}),$$
the fourth-order iterative method $M_{4,2}$ given by Darvishi and Barati [40],
$$y^{(k)} = x^{(k)} - J_F(x^{(k)})^{-1}F(x^{(k)}),$$
$$z^{(k)} = x^{(k)} - J_F(x^{(k)})^{-1}\big(F(y^{(k)}) + F(x^{(k)})\big),$$
$$x^{(k+1)} = x^{(k)} - \Big[\tfrac{1}{6}J_F(x^{(k)}) + \tfrac{2}{3}J_F\Big(\tfrac{x^{(k)} + z^{(k)}}{2}\Big) + \tfrac{1}{6}J_F(z^{(k)})\Big]^{-1}F(x^{(k)}),$$
the fourth-order iterative method $M_{4,3}$ given by Soleymani et al. [34,41],
$$y^{(k)} = x^{(k)} - \tfrac{2}{3}J_F(x^{(k)})^{-1}F(x^{(k)}),$$
$$x^{(k+1)} = x^{(k)} - \Big[I - \tfrac{3}{8}\Big(I - \big(J_F(y^{(k)})^{-1}J_F(x^{(k)})\big)^2\Big)\Big]J_F(x^{(k)})^{-1}F(x^{(k)}),$$
and the following fourth-order Jarratt method $M_{4,4}$ [42]:
$$y^{(k)} = x^{(k)} - \tfrac{2}{3}J_F(x^{(k)})^{-1}F(x^{(k)}),$$
$$x^{(k+1)} = x^{(k)} - \tfrac{1}{2}\big(3J_F(y^{(k)}) - J_F(x^{(k)})\big)^{-1}\big(3J_F(y^{(k)}) + J_F(x^{(k)})\big)J_F(x^{(k)})^{-1}F(x^{(k)}).$$
The computational efficiency comparison shows that our method $M_4$ is better than the methods $M_{4,1}$, $M_{4,2}$, $M_{4,3}$ and $M_{4,4}$; the results are presented in Table 2 and in Figure 3 and Figure 4. As Table 2 shows, the indices of our method $M_4$ are better than the corresponding indices of the other methods; furthermore, Figure 3 and Figure 4 illustrate the superiority of our method with respect to the other schemes.

3. Numerical Results

In order to check the validity and efficiency of the proposed frozen Jacobian multi-step iterative methods, three test problems are considered to illustrate their convergence and computational behavior, including the efficiency index and several other indices. Numerical computations were performed in MATLAB using variable-precision arithmetic with a floating point representation of 100 decimal digits of mantissa. The computer specifications are: Intel(R) Core(TM) i7-1065G7 CPU at 1.30 GHz with 16.00 GB of RAM, running Windows 10 Pro.
Experiment 1. We begin with the following nonlinear system of n equations [43]:
$$f_i(x) = \cos(x_i) - 1, \qquad i = 1, 2, \ldots, n. \qquad (13)$$
The exact zero of $F(x) = (f_1(x), f_2(x), \ldots, f_n(x))^t = 0$ is $(0, 0, \ldots, 0)^t$. To solve (13), we set the initial guess as $(0.78, 0.78, \ldots, 0.78)^t$. The stopping criterion is $\|F(x^{(k)})\| \le 10^{-3}$.
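Experiment 1 is straightforward to reproduce. The sketch below runs $M_3$ in double precision (the paper uses 100-digit arithmetic in MATLAB, so iteration counts here are only indicative, and the code is our own):

```python
import numpy as np

def F(x):
    return np.cos(x) - 1.0          # f_i(x) = cos(x_i) - 1, exact zero at x = 0

def J(x):
    return np.diag(-np.sin(x))      # the Jacobian of this system is diagonal

n = 50
x = np.full(n, 0.78)                # initial guess (0.78, ..., 0.78)^t
it = 0
while np.linalg.norm(F(x)) > 1e-3 and it < 50:
    J0 = J(x)                       # frozen Jacobian: one factorization per cycle
    y = x - np.linalg.solve(J0, F(x))            # Newton predictor
    x = x - np.linalg.solve(J0, F(y) + F(x))     # corrector of method (3)
    it += 1
```

Note that the zero of (13) is a multiple root (the Jacobian is singular at the solution), which is why the comparatively mild tolerance $10^{-3}$ is used for this problem.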
Experiment 2. The next test problem is the following system of nonlinear equations [44]:
$$f_i(x) = (1 - x_i^2) + x_i(1 + x_i x_{n-2} x_{n-1} x_n) - 2, \qquad i = 1, 2, \ldots, n. \qquad (14)$$
The exact root of $F(x) = 0$ is $(1, 1, \ldots, 1)^t$. To solve (14), the initial guess is taken as $(2, 2, \ldots, 2)^t$. The stopping criterion is $\|F(x^{(k)})\| \le 10^{-8}$.
Experiment 3. The last test problem is the following nonlinear system [9]:
$$f_i(x) = x_i^2 x_{i+1} - 1, \quad i = 1, 2, \ldots, n-1, \qquad f_n(x) = x_n^2 x_1 - 1, \qquad (15)$$
with the exact solution $(1, 1, \ldots, 1)^t$. To solve (15), the initial guess and the stopping criterion are taken as $(3, 3, \ldots, 3)^t$ and $\|F(x^{(k)})\| \le 10^{-8}$, respectively.
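Under our reading of (15) as $f_i(x) = x_i^2 x_{i+1} - 1$ (taken cyclically, which is consistent with the all-ones root), the fourth-order method $M_4$ of (9) solves the problem in a handful of cycles. A double-precision sketch of our own, with a small illustrative n:

```python
import numpy as np

def F(x):
    # f_i = x_i^2 * x_{i+1} - 1, cyclically (f_n uses x_1); root (1, ..., 1)^t
    return x**2 * np.roll(x, -1) - 1.0

def J(x):
    xn = np.roll(x, -1)
    # d f_i / d x_i = 2 x_i x_{i+1};  d f_i / d x_{i+1} = x_i^2
    Jm = np.diag(2.0 * x * xn)
    idx = np.arange(len(x))
    Jm[idx, (idx + 1) % len(x)] += x**2
    return Jm

n = 8
x = np.full(n, 3.0)                 # initial guess (3, ..., 3)^t
it = 0
while np.linalg.norm(F(x)) > 1e-8 and it < 30:
    J0 = J(x)                       # frozen Jacobian for the whole cycle
    y = x - np.linalg.solve(J0, F(x))
    z = x - np.linalg.solve(J0, F(y) + F(x))
    x = y - np.linalg.solve(J0, F(z) + F(y))
    it += 1
```

Starting from a uniform vector, the iterates stay uniform by the cyclic symmetry of (15), so the run behaves like the scalar problem $t^3 = 1$ and converges in a few cycles.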
Table 3 shows the comparison between our third-order frozen Jacobian two-step method $M_3$ and the third-order frozen Jacobian methods $M_{3,1}$, $M_{3,2}$, $M_{3,3}$ and $M_{3,4}$. For all test problems, two values of n are considered, namely $n = 50$ and $n = 100$. As this table shows, in all cases our method works better than the others. Similarly, Table 4 presents the CPU time and number of iterations for our fourth-order method $M_4$ and for the methods $M_{4,1}$, $M_{4,2}$, $M_{4,3}$ and $M_{4,4}$. As with $M_3$, the CPU time for $M_4$ is smaller than that of the other methods. These tables show the superiority of our methods with respect to the other ones. In Table 3 and Table 4, "it" denotes the number of iterations.

4. Another Comparison

In the previous sections, we compared our methods $M_3$ and $M_4$ with some other frozen Jacobian multi-step iterative methods of third and fourth order. In this section, we compare our methods with three further methods of fourth and fifth order. As Table 5 and Table 6 and Figure 5 and Figure 6 show, our methods also outperform these methods.
  • First. The fourth-order method given by Qasim et al. [25], $M_A$:
$$J_F(x^{(k)})\theta_1 = F(x^{(k)}), \quad y^{(k)} = x^{(k)} - \theta_1, \quad J_F(x^{(k)})\theta_2 = F(y^{(k)}), \quad J_F(x^{(k)})\theta_3 = J_F(y^{(k)})\theta_2,$$
$$x^{(k+1)} = y^{(k)} - 2\theta_2 + \theta_3.$$
  • Second. The fourth-order Newton-like method by Amat et al. [26], $M_B$:
$$y^{(k)} = x^{(k)} - J_F(x^{(k)})^{-1}F(x^{(k)}), \quad z^{(k)} = y^{(k)} - J_F(x^{(k)})^{-1}F(y^{(k)}), \quad x^{(k+1)} = z^{(k)} - J_F(x^{(k)})^{-1}F(z^{(k)}).$$
  • Third. The fifth-order iterative method by Ahmad et al. [28], $M_C$:
$$J_F(x^{(k)})\theta_1 = F(x^{(k)}), \quad y^{(k)} = x^{(k)} - \theta_1, \quad J_F(x^{(k)})\theta_2 = F(y^{(k)}), \quad z^{(k)} = y^{(k)} - 3\theta_2,$$
$$J_F(x^{(k)})\theta_3 = J_F(z^{(k)})\theta_2, \quad J_F(x^{(k)})\theta_4 = J_F(z^{(k)})\theta_3,$$
$$x^{(k+1)} = y^{(k)} - \tfrac{7}{4}\theta_2 + \tfrac{1}{2}\theta_3 + \tfrac{1}{4}\theta_4.$$
The comparisons of computational efficiency between our methods $M_3$ and $M_4$ and the selected methods $M_A$, $M_B$ and $M_C$ are presented in Table 5. Additionally, Figure 5 and Figure 6 show graphical comparisons between these methods. Finally, Table 6 reports the CPU time and number of iterations needed to solve our test problems by the methods $M_3$, $M_4$, $M_A$, $M_B$ and $M_C$. These numerical and graphical reports show the quality of our algorithms.

5. Conclusions

In this article, two new frozen Jacobian two- and three-step iterative methods for solving systems of nonlinear equations were presented. For the first method, we proved a convergence order of three, while for the second one, fourth-order convergence was proved. Three different examples show that the methods perform well; in particular, the CPU time of our methods is smaller than that of several frozen Jacobian multi-step iterative methods selected from the literature. Moreover, other indices of our methods, such as the number of steps, the functional evaluations and the classical efficiency index, are better than the corresponding indices of the other methods. This class of frozen Jacobian multi-step iterative methods can serve as a pattern for new research on frozen Jacobian iterative algorithms.

Author Contributions

Investigation, R.H.A.-O. and M.T.D.; Project administration, M.T.D.; Resources, R.H.A.-O.; Supervision, M.T.D.; Writing—original draft, M.T.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the editor of the journal and three anonymous reviewers for their generous time in providing detailed comments and suggestions that helped us to improve the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fay, T.H.; Graham, S.D. Coupled spring equations. Int. J. Math. Educ. Sci. Technol. 2003, 34, 65–79. [Google Scholar] [CrossRef]
  2. Petzold, L. Automatic selection of methods for solving stiff and non stiff systems of ordinary differential equations. SIAM J. Sci. Stat. Comput. 1983, 4, 136–148. [Google Scholar] [CrossRef]
  3. Ehle, B.L. High order A-stable methods for the numerical solution of systems of D.E.’s. BIT Numer. Math. 1968, 8, 276–278. [Google Scholar] [CrossRef]
  4. Wambecq, A. Rational Runge-Kutta methods for solving systems of ordinary differential equations. Computing 1978, 20, 333–342. [Google Scholar] [CrossRef]
  5. Liang, H.; Liu, M.; Song, M. Extinction and permanence of the numerical solution of a two-prey one-predator system with impulsive effect. Int. J. Comput. Math. 2011, 88, 1305–1325. [Google Scholar] [CrossRef]
  6. Harko, T.; Lobo, F.S.N.; Mak, M.K. Exact analytical solutions of the Susceptible-Infected-Recovered (SIR) epidemic model and of the SIR model with equal death and birth rates. Appl. Math. Comput. 2014, 236, 184–194. [Google Scholar] [CrossRef]
  7. Zhao, J.; Wang, L.; Han, Z. Stability analysis of two new SIRs models with two viruses. Int. J. Comput. Math. 2018, 95, 2026–2035. [Google Scholar] [CrossRef]
  8. Kröger, M.; Schlickeiser, R. Analytical solution of the SIR-model for the temporal evolution of epidemics, Part A: Time-independent reproduction factor. J. Phys. A Math. Theor. 2020, 53, 505601. [Google Scholar] [CrossRef]
  9. Ullah, M.Z.; Behl, R.; Argyros, I.K. Some high-order iterative methods for nonlinear models originating from real life problems. Mathematics 2020, 8, 1249. [Google Scholar] [CrossRef]
  10. Argyros, I.K. Concerning the “terra incognita” between convergence regions of two Newton methods. Nonlinear Anal. 2005, 62, 179–194. [Google Scholar] [CrossRef]
  11. Drexler, M. Newton Method as a Global Solver for Non-Linear Problems. Ph.D. Thesis, University of Oxford, Oxford, UK, 1997. [Google Scholar]
  12. Gutiérrez, J.M.; Hernández, M.A. A family of Chebyshev-Halley type methods in Banach spaces. Bull. Aust. Math. Soc. 1997, 55, 113–130. [Google Scholar] [CrossRef]
  13. Cordero, A.; Jordán, C.; Sanabria, E.; Torregrosa, J.R. A new class of iterative processes for solving nonlinear systems by using one divided differences operator. Mathematics 2019, 7, 776. [Google Scholar] [CrossRef]
  14. Stefanov, S.M. Numerical solution of systems of non linear equations defined by convex functions. J. Interdiscip. Math. 2022, 25, 951–962. [Google Scholar] [CrossRef]
  15. Lee, M.Y.; Kim, Y.I.K. Development of a family of Jarratt-like sixth-order iterative methods for solving nonlinear systems with their basins of attraction. Algorithms 2020, 13, 303. [Google Scholar] [CrossRef]
  16. Cordero, A.; Jordán, C.; Sanabria-Codesal, E.; Torregrosa, J.R. Design, convergence and stability of a fourth-order class of iterative methods for solving nonlinear vectorial problems. Fractal Fract. 2021, 5, 125. [Google Scholar] [CrossRef]
  17. Amiri, A.; Cordero, A.; Darvishi, M.T.; Torregrosa, J.R. A fast algorithm to solve systems of nonlinear equations. J. Comput. Appl. Math. 2019, 354, 242–258. [Google Scholar] [CrossRef]
  18. Argyros, I.K.; Sharma, D.; Argyros, C.I.; Parhi, S.K.; Sunanda, S.K. A family of fifth and sixth convergence order methods for nonlinear models. Symmetry 2021, 13, 715. [Google Scholar] [CrossRef]
  19. Singh, A. An efficient fifth-order Steffensen-type method for solving systems of nonlinear equations. Int. J. Comput. Sci. Math. 2018, 9, 501–514. [Google Scholar] [CrossRef]
  20. Ullah, M.Z.; Serra-Capizzano, S.; Ahmad, F. An efficient multi-step iterative method for computing the numerical solution of systems of nonlinear equations associated with ODEs. Appl. Math. Comput. 2015, 250, 249–259. [Google Scholar] [CrossRef]
  21. Pacurar, M. Approximating common fixed points of Pres̆ic-Kannan type operators by a multi-step iterative method. An. St. Univ. Ovidius Constanta 2009, 17, 153–168. [Google Scholar]
  22. Rafiq, A.; Rafiullah, M. Some multi-step iterative methods for solving nonlinear equations. Comput. Math. Appl. 2009, 58, 1589–1597. [Google Scholar] [CrossRef]
  23. Aremu, K.O.; Izuchukwu, C.; Ogwo, G.N.; Mewomo, O.T. Multi-step iterative algorithm for minimization and fixed point problems in p-uniformly convex metric spaces. J. Ind. Manag. Optim. 2021, 17, 2161. [Google Scholar] [CrossRef]
  24. Soleymani, F.; Lotfi, T.; Bakhtiari, P. A multi-step class of iterative methods for nonlinear systems. Optim. Lett. 2014, 8, 1001–1015. [Google Scholar] [CrossRef]
  25. Qasim, U.; Ali, Z.; Ahmad, F.; Serra-Capizzano, S.; Ullah, M.Z.; Asma, M. Constructing frozen Jacobian iterative methods for solving systems of nonlinear equations, associated with ODEs and PDEs using the homotopy method. Algorithms 2016, 9, 18. [Google Scholar] [CrossRef]
  26. Amat, S.; Busquier, S.; Grau, À.; Grau-Sánchez, M. Maximum efficiency for a family of Newton-like methods with frozen derivatives and some applications. Appl. Math. Comput. 2013, 219, 7954–7963. [Google Scholar] [CrossRef]
  27. Kouser, S.; Rehman, S.U.; Ahmad, F.; Serra-Capizzano, S.; Ullah, M.Z.; Alshomrani, A.S.; Aljahdali, H.M.; Ahmad, S.; Ahmad, S. Generalized Newton multi-step iterative methods GMNp,m for solving systems of nonlinear equations. Int. J. Comput. Math. 2018, 95, 881–897. [Google Scholar] [CrossRef]
  28. Ahmad, F.; Tohidi, E.; Ullah, M.Z.; Carrasco, J.A. Higher order multi-step Jarratt-like method for solving systems of nonlinear equations: Application to PDEs and ODEs. Comput. Math. Appl. 2015, 70, 624–636. [Google Scholar] [CrossRef]
  29. Kaplan, W. Ordinary Differential Equations; Addison-Wesley Publishing Company: Boston, MA, USA, 1958. [Google Scholar]
  30. Emmanuel, E.C. On the Frechet derivatives with applications to the inverse function theorem of ordinary differential equations. Asian J. Math. Sci. 2020, 4, 1–10. [Google Scholar]
  31. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2000. [Google Scholar]
  32. Behl, R.; Bhalla, S.; Magreñán, Á.A.; Kumar, S. An efficient high order iterative scheme for large nonlinear systems with dynamics. J. Comput. Appl. Math. 2022, 404, 113249. [Google Scholar] [CrossRef]
  33. Ostrowski, A.M. Solution of Equations and Systems of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  34. Montazeri, H.; Soleymani, F.; Shateyi, S.; Motsa, S.S. On a new method for computing the numerical solution of systems of nonlinear equations. J. Appl. Math. 2012, 2012, 751975. [Google Scholar] [CrossRef]
  35. Darvishi, M.T. A two-step high order Newton-like method for solving systems of nonlinear equations. Int. J. Pure Appl. Math. 2009, 57, 543–555. [Google Scholar]
  36. Hernández, M.A. Second-derivative-free variant of the Chebyshev method for nonlinear equations. J. Optim. Theory Appl. 2000, 3, 501–515. [Google Scholar] [CrossRef]
  37. Babajee, D.K.R.; Dauhoo, M.Z.; Darvishi, M.T.; Karami, A.; Barati, A. Analysis of two Chebyshev-like third order methods free from second derivatives for solving systems of nonlinear equations. J. Comput. Appl. Math. 2010, 233, 2002–2012. [Google Scholar] [CrossRef]
  38. Noor, M.S.; Waseem, M. Some iterative methods for solving a system of nonlinear equations. Comput. Math. Appl. 2009, 57, 101–106. [Google Scholar] [CrossRef]
  39. Sharma, J.R.; Guha, R.K.; Sharma, R. An efficient fourth order weighted-Newton method for systems of nonlinear equations. Numer. Algorithms 2013, 62, 307–323. [Google Scholar] [CrossRef]
  40. Darvishi, M.T.; Barati, A. A fourth-order method from quadrature formulae to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 188, 257–261. [Google Scholar] [CrossRef]
  41. Soleymani, F. Regarding the accuracy of optimal eighth-order methods. Math. Comput. Modell. 2011, 53, 1351–1357. [Google Scholar] [CrossRef]
  42. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton-Jarratt’s composition. Numer. Algorithms 2010, 55, 87–99. [Google Scholar] [CrossRef]
  43. Shin, B.C.; Darvishi, M.T.; Kim, C.H. A comparison of the Newton–Krylov method with high order Newton-like methods to solve nonlinear systems. Appl. Math. Comput. 2010, 217, 3190–3198. [Google Scholar] [CrossRef]
  44. Waziri, M.Y.; Aisha, H.; Gambo, A.I. On performance analysis of diagonal variants of Newton’s method for large-scale systems of nonlinear equations. Int. J. Comput. Appl. 2011, 975, 8887. [Google Scholar]
Figure 1. The classical efficiency index for methods M 3 , M 3 , 1 , M 3 , 2 , M 3 , 3 and M 3 , 4 .
Figure 2. The flops-like efficiency index for methods M 3 , M 3 , 1 , M 3 , 2 , M 3 , 3 and M 3 , 4 .
Figure 3. The classical efficiency index for methods M 4 , M 4 , 1 , M 4 , 2 , M 4 , 3 and M 4 , 4 .
Figure 4. The Flops-like efficiency index for methods M 4 , M 4 , 1 , M 4 , 2 , M 4 , 3 and M 4 , 4 .
Figure 5. The classical efficiency index for M_3, M_4, M_A, M_B and M_C.
Figure 6. The flops-like efficiency index for M_3, M_4, M_A, M_B and M_C.
Table 1. Comparison of efficiency indices between M_3 and other third-order methods.

| Methods | M_3 | M_{3,1} | M_{3,2} | M_{3,3} | M_{3,4} |
|---|---|---|---|---|---|
| No. of steps | 2 | 2 | 2 | 2 | 2 |
| Order of convergence | 3 | 3 | 3 | 3 | 3 |
| Functional evaluations | 2n + n^2 | n + 2n^2 | n + 2n^2 | n + 2n^2 | n + 2n^2 |
| Classical efficiency index (IE) | 3^{1/(2n + n^2)} | 3^{1/(n + 2n^2)} | 3^{1/(n + 2n^2)} | 3^{1/(n + 2n^2)} | 3^{1/(n + 2n^2)} |
| No. of LU decompositions | 1 | 2 | 1 | 1 | 2 |
| Cost of LU decompositions | 2n^3/3 | 4n^3/3 | 2n^3/3 | 2n^3/3 | 4n^3/3 |
| Cost of linear systems (flops) | 2n^3/3 + 4n^2 | 4n^3/3 + 4n^2 | 5n^3/3 + 2n^2 | 5n^3/3 + 2n^2 | 4n^3/3 + 4n^2 |
| Flops-like efficiency index (FLEI) | 3^{1/(2n^3/3 + 5n^2 + 2n)} | 3^{1/(4n^3/3 + 6n^2 + n)} | 3^{1/(5n^3/3 + 4n^2 + n)} | 3^{1/(5n^3/3 + 4n^2 + n)} | 3^{1/(4n^3/3 + 6n^2 + n)} |
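Both indices in Table 1 are simple scalar functions of the system size n, so the curves in Figures 1 and 2 are easy to reproduce. The following minimal Python sketch (function names are ours, not taken from the paper's code) evaluates the indices of M_3 and M_{3,1} and checks that M_3 dominates on both measures:

```python
def efficiency_index(order, cost):
    """Generic efficiency index p**(1/C) for order p and cost C."""
    return order ** (1.0 / cost)

def m3_indices(n):
    # Costs taken from the M_3 column of Table 1.
    ie = efficiency_index(3, 2 * n + n**2)
    flei = efficiency_index(3, 2 * n**3 / 3 + 5 * n**2 + 2 * n)
    return ie, flei

def m31_indices(n):
    # Costs taken from the M_{3,1} column of Table 1.
    ie = efficiency_index(3, n + 2 * n**2)
    flei = efficiency_index(3, 4 * n**3 / 3 + 6 * n**2 + n)
    return ie, flei

# For every tested n, M_3 evaluates fewer quantities and performs
# cheaper linear algebra than M_{3,1}, so both indices are larger.
for n in (10, 50, 100):
    ie3, flei3 = m3_indices(n)
    ie31, flei31 = m31_indices(n)
    assert ie3 > ie31 and flei3 > flei31
```

Since every index has the form p^{1/C} with p > 1, a smaller cost C always yields a larger index, which is exactly the ordering visible in Figures 1 and 2.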
Table 2. Comparison of efficiency indices between M_4 and other fourth-order methods.

| Methods | M_4 | M_{4,1} | M_{4,2} | M_{4,3} | M_{4,4} |
|---|---|---|---|---|---|
| No. of steps | 3 | 2 | 3 | 2 | 2 |
| Order of convergence | 4 | 4 | 4 | 4 | 4 |
| Functional evaluations | 3n + n^2 | n + 2n^2 | 2n + 3n^2 | n + 2n^2 | n + 2n^2 |
| Classical efficiency index (IE) | 4^{1/(3n + n^2)} | 4^{1/(n + 2n^2)} | 4^{1/(2n + 3n^2)} | 4^{1/(n + 2n^2)} | 4^{1/(n + 2n^2)} |
| No. of LU decompositions | 1 | 2 | 2 | 2 | 2 |
| Cost of LU decompositions | 2n^3/3 | 4n^3/3 | 4n^3/3 | 4n^3/3 | 4n^3/3 |
| Cost of linear systems (flops) | 2n^3/3 + 6n^2 | 10n^3/3 + 2n^2 | 4n^3/3 + 6n^2 | 7n^3/3 + 2n^2 | 7n^3/3 + 2n^2 |
| Flops-like efficiency index (FLEI) | 4^{1/(2n^3/3 + 7n^2 + 3n)} | 4^{1/(10n^3/3 + 4n^2 + n)} | 4^{1/(4n^3/3 + 9n^2 + 2n)} | 4^{1/(7n^3/3 + 4n^2 + n)} | 4^{1/(7n^3/3 + 4n^2 + n)} |
Table 3. Comparison results between M_3 and other third-order methods.

| Methods | n | it (Exp. 1) | CPU (Exp. 1) | it (Exp. 2) | CPU (Exp. 2) | it (Exp. 3) | CPU (Exp. 3) |
|---|---|---|---|---|---|---|---|
| M_3 | 50 | 4 | 7.7344 | 5 | 10.6250 | 5 | 10.4844 |
| | 100 | 5 | 59.6406 | 5 | 59.8594 | 5 | 60.0313 |
| M_{3,1} | 50 | 4 | 11.0625 | 5 | 13.8125 | 5 | 14.1406 |
| | 100 | 4 | 69.4219 | 5 | 87.3594 | 5 | 87.4063 |
| M_{3,2} | 50 | 4 | 18.7188 | 5 | 24.9375 | 5 | 21.5469 |
| | 100 | 5 | 157.2344 | 5 | 143.7344 | 5 | 146.2656 |
| M_{3,3} | 50 | 4 | 20.7031 | 5 | 23.1563 | 5 | 24.2969 |
| | 100 | 5 | 153.1719 | 5 | 143.2969 | 5 | 145.4063 |
| M_{3,4} | 50 | 4 | 13.1719 | 5 | 13.2500 | 4 | 11.0156 |
| | 100 | 4 | 73.2500 | 5 | 88.2031 | 4 | 70.2500 |
Table 4. Comparison results between M_4 and other fourth-order methods.

| Methods | n | it (Exp. 1) | CPU (Exp. 1) | it (Exp. 2) | CPU (Exp. 2) | it (Exp. 3) | CPU (Exp. 3) |
|---|---|---|---|---|---|---|---|
| M_4 | 50 | 4 | 12.2463 | 4 | 13.3218 | 4 | 11.5781 |
| | 100 | 4 | 78.1563 | 5 | 94.9063 | 4 | 74.2969 |
| M_{4,1} | 50 | 4 | 23.6875 | 4 | 21.9531 | 4 | 21.7969 |
| | 100 | 4 | 151.9844 | 4 | 144.7656 | 4 | 140.8438 |
| M_{4,2} | 50 | 3 | 15.3906 | 4 | 18.9531 | 4 | 18.6875 |
| | 100 | 4 | 121.6563 | 4 | 122.7344 | 4 | 118.5781 |
| M_{4,3} | 50 | 3 | 12.2188 | 4 | 17.8750 | 4 | 15.2656 |
| | 100 | 4 | 97.5469 | 4 | 99.0469 | 4 | 97.1250 |
| M_{4,4} | 50 | 3 | 16.4688 | 4 | 21.7344 | 4 | 20.7188 |
| | 100 | 3 | 109.1719 | 4 | 152.0156 | 4 | 140.2969 |
Table 5. Numerical results for comparing M_3 and M_4 with M_A, M_B and M_C.

| Methods | M_3 | M_4 | M_A | M_B | M_C |
|---|---|---|---|---|---|
| No. of steps | 2 | 3 | 2 | 3 | 3 |
| Order of convergence | 3 | 4 | 4 | 4 | 5 |
| Functional evaluations | 2n + n^2 | 3n + n^2 | 2n + 2n^2 | 3n + n^2 | 2n + 2n^2 |
| Classical efficiency index (IE) | 3^{1/(2n + n^2)} | 4^{1/(3n + n^2)} | 4^{1/(2n + 2n^2)} | 4^{1/(3n + n^2)} | 5^{1/(2n + 2n^2)} |
| No. of LU decompositions | 1 | 1 | 1 | 1 | 1 |
| Cost of LU decompositions | 2n^3/3 | 2n^3/3 | 2n^3/3 | 2n^3/3 | 2n^3/3 |
| Cost of linear systems (flops) | 2n^3/3 + 4n^2 | 2n^3/3 + 6n^2 | 5n^3/3 + 4n^2 | 2n^3/3 + 6n^2 | 5n^3/3 + 4n^2 |
| Flops-like efficiency index (FLEI) | 3^{1/(2n^3/3 + 5n^2 + 2n)} | 4^{1/(2n^3/3 + 7n^2 + 3n)} | 4^{1/(5n^3/3 + 6n^2 + 2n)} | 4^{1/(2n^3/3 + 7n^2 + 3n)} | 5^{1/(5n^3/3 + 6n^2 + 2n)} |
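The FLEI column of Table 5 can likewise be compared programmatically. The sketch below (the dictionary encoding is ours, built directly from the costs in Table 5) reproduces the ordering seen in Figure 6, where M_4 and M_B coincide and both dominate M_A:

```python
# Flops-like efficiency indices from Table 5, written as p**(1/cost(n)).
# Each entry stores (convergence order, flops-like cost as a function of n).
methods = {
    "M3": (3, lambda n: 2 * n**3 / 3 + 5 * n**2 + 2 * n),
    "M4": (4, lambda n: 2 * n**3 / 3 + 7 * n**2 + 3 * n),
    "MA": (4, lambda n: 5 * n**3 / 3 + 6 * n**2 + 2 * n),
    "MB": (4, lambda n: 2 * n**3 / 3 + 7 * n**2 + 3 * n),
    "MC": (5, lambda n: 5 * n**3 / 3 + 6 * n**2 + 2 * n),
}

def flei(method, n):
    """Flops-like efficiency index p**(1/cost) for system size n."""
    order, cost = methods[method]
    return order ** (1.0 / cost(n))

for n in (10, 50, 100):
    # M_4 and M_B share both the order and the cost, so their indices
    # coincide; both beat M_A, whose linear systems are costlier.
    assert flei("M4", n) == flei("MB", n) > flei("MA", n)
```

Because all five methods factorize the Jacobian only once per cycle, the FLEI differences come entirely from the order p and the cost of the remaining triangular solves and function evaluations.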
Table 6. Comparison results between M_3, M_4, M_A, M_B and M_C.

| Methods | n | it (Exp. 1) | CPU (Exp. 1) | it (Exp. 2) | CPU (Exp. 2) | it (Exp. 3) | CPU (Exp. 3) |
|---|---|---|---|---|---|---|---|
| M_3 | 50 | 4 | 7.7344 | 5 | 10.6250 | 5 | 10.4844 |
| | 100 | 5 | 59.6406 | 5 | 59.8594 | 5 | 60.0313 |
| M_4 | 50 | 4 | 12.2463 | 4 | 13.3218 | 4 | 11.5781 |
| | 100 | 4 | 78.1563 | 5 | 94.9063 | 4 | 74.2969 |
| M_A | 50 | 6 | 23.1875 | 7 | 25.0625 | 6 | 25.4063 |
| | 100 | 6 | 139.5625 | 7 | 173.8125 | 6 | 150.8594 |
| M_B | 50 | 4 | 15.2509 | 4 | 12.1563 | 4 | 12.9219 |
| | 100 | 4 | 76.1406 | 5 | 91.1719 | 4 | 71.6406 |
| M_C | 50 | 4 | 23.4688 | 4 | 23.4854 | 4 | 22.1531 |
| | 100 | 4 | 139.9844 | 4 | 185.1406 | 4 | 138.4063 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Al-Obaidi, R.H.; Darvishi, M.T. Constructing a Class of Frozen Jacobian Multi-Step Iterative Solvers for Systems of Nonlinear Equations. Mathematics 2022, 10, 2952. https://doi.org/10.3390/math10162952
