Abstract
In this paper, we first derive a family of fourth-order iterative schemes, using a weight function to maintain optimality. We then transform the family into methods with several self-accelerating parameters to reach the highest possible convergence rate of eight. To this end, we employ the eigenvalues of suitable matrices together with the with-memory technique. Solving several nonlinear test equations shows that the proposed variants attain, in practice, a computational efficiency index of two, the maximum amount possible.
MSC:
65H05; 41A25
1. Introduction
This paper is concerned with the numerical solution of nonlinear problems having the structure f(x) = 0. In fact, we look at iterative approaches to solving such nonlinear problems. It is well known that the celebrated Newton’s scheme attains a local second-order convergence rate for simple roots, while per iteration it requires one evaluation of the function and one of its first derivative. For some applications, one may refer to [1,2].
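As a point of reference, Newton’s scheme x_{k+1} = x_k − f(x_k)/f′(x_k) can be sketched as follows; this is a minimal illustration only (the test function x² − 2 is our own choice, not one of the paper’s test problems):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Classical Newton iteration: one f and one f' evaluation per step."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: the simple root sqrt(2) of f(x) = x^2 - 2.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```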
A wide range of problems which are not related to nonlinear equations at first sight can be expressed as finding the solution of nonlinear equations in special spaces (e.g., in operator form). For example, finding approximate-analytic solutions to nonlinear stochastic differential equations [3] is possible via Chaplygin-type solvers, which are in fact Newton iterations imposed in an appropriate operator environment for solving such equations [4].
Let us recall that the efficiency of an iterative scheme [5] is calculated via E = p^{1/d}, wherein d is the number of functional evaluations per cycle and p is the convergence rate. Besides, for multi-point without-memory iterative schemes, the optimal convergence order is 2^{n−1}, needing n functional evaluations per cycle [6].
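The efficiency index E = p^{1/d} can be tabulated for the classical cases, as a quick check of the figures quoted throughout this paper:

```python
def efficiency_index(p, d):
    """E = p**(1/d): order p achieved with d functional evaluations per cycle."""
    return p ** (1.0 / d)

# Newton: order 2 with 2 evaluations; optimal two-point: order 4 with 3;
# optimal three-point: order 8 with 4 evaluations (Kung-Traub optimal orders).
for name, p, d in [("Newton", 2, 2), ("two-point", 4, 3), ("three-point", 8, 4)]:
    print(name, round(efficiency_index(p, d), 3))
```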
Now some definitions are given which will be used later in this work.
A famous fourth-order two-point method without memory is King’s method, which is given by [7]:

y_k = x_k − f(x_k)/f′(x_k),
x_{k+1} = y_k − [(f(x_k) + βf(y_k)) / (f(x_k) + (β − 2)f(y_k))] · f(y_k)/f′(x_k),  β ∈ ℝ.
The authors of [8] constructed the following three-step scheme based on Ostrowski’s method and the Chun weight-function technique [9]:
Ostrowski’s method, proposed in [5], has fourth order of convergence and reads as follows:

y_k = x_k − f(x_k)/f′(x_k),
x_{k+1} = y_k − f(y_k)/f′(x_k) · f(x_k)/(f(x_k) − 2f(y_k)).
This method can be rewritten as follows:
This method supports the Kung–Traub optimality conjecture for the highest possible convergence order of methods without memory. Accordingly, the efficiency indices of Newton’s and Ostrowski’s methods are 2^{1/2} ≈ 1.414 and 4^{1/3} ≈ 1.587, respectively.
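Ostrowski’s two-point scheme, in its standard form above, can be sketched as follows (a minimal illustration; the cubic test function is our own choice):

```python
def ostrowski(f, df, x0, tol=1e-12, max_iter=25):
    """Ostrowski's two-point method: three evaluations (f(x), f'(x), f(y))
    per iteration, fourth-order convergence to a simple root."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if fx == 0.0:
            return x
        y = x - fx / dfx                              # Newton predictor
        fy = f(y)
        x_new = y - fy / dfx * fx / (fx - 2.0 * fy)   # Ostrowski corrector
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: the real root of x^3 - 2x - 5 (approximately 2.0945515).
root = ostrowski(lambda x: x ** 3 - 2 * x - 5, lambda x: 3 * x ** 2 - 2, 2.0)
```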
In this work, we turn the famous Ostrowski method into a family of Steffensen-like methods, ref. [10]. This technique eliminates the disadvantage of calculating the derivative of the function. A family of optimal two-step methods with three self-accelerating parameters is obtained, which uses the weight-function technique to maintain the optimality of the without-memory methods. In addition, the matrix eigenvalue technique is employed to prove the convergence order of the proposed methods.
To explain the motivation of the current manuscript clearly, we should address why we need to achieve such high-precision results, and for which applications. The answer is that we mostly do not need high precision. The current study is chiefly useful from a theoretical point of view, proposing a general family of methods with memory that possesses a 100% order improvement without any additional functional evaluations. From the application point of view, we employ multiple-precision arithmetic in the numerical simulations only to re-check the order of convergence. In applications, a method with a higher order of convergence clearly enters its convergence region faster and delivers the final solution in reasonable time.
We describe the structure of the modified two-step Ostrowski methods without memory in Section 2. The improvement of the convergence rate of this family is attained by employing several self-accelerating parameters. Such parameters are computed per loop via information from the current and previous loops, which helps to accelerate the convergence without further functional evaluations. The efficiency index of the new method is two (the highest efficiency index available). The theoretical proof is presented in Section 3. Computational evidence is brought forward in Section 4 and upholds the analytical results. Finally, we provide concluding remarks in Section 5.
2. Derivation of Methods and Convergence Analysis
By looking at relation (4), it can be seen that this method uses the derivative of the function in the first and second steps, which shows that the two-point family of schemes (4) achieves the fourth convergence rate employing only three evaluations of functions (viz., f(x_k), f(y_k), and f′(x_k)) per full iteration. To derive new methods, we approximate the derivative f′(x_k) appearing in the first step of (4) as follows:
In what follows, the derivative in the second step will be estimated via a weight-function expression, where the weight function is a differentiable function of a real variable.
Thus, starting from the scheme (4) and the approximation (5), we arrive at the following two-point method:
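The derivative-free idea behind such two-point constructions can be illustrated with the classical fourth-order Kung–Traub derivative-free scheme [6], which replaces f′ by divided differences built on w = x + βf(x). This is a related classical method used only as a sketch here, not the family (6) itself:

```python
def kung_traub4(f, x0, beta=0.01, tol=1e-12, max_iter=25):
    """Optimal fourth-order derivative-free two-point method (Kung-Traub):
    three evaluations f(x), f(w), f(y) per iteration, no derivatives."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        w = x + beta * fx
        fw = f(w)
        if fx == 0.0 or fw == fx:     # converged (or step underflowed)
            return x
        y = x - beta * fx * fx / (fw - fx)   # Steffensen-like first step
        fy = f(y)
        dd = (fx - fy) / (x - y)             # divided difference f[x, y]
        x_new = y - fy * fw / ((fw - fy) * dd)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = kung_traub4(lambda x: x * x - 2.0, 1.0)
```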
The next theorem specifies the weight function and the conditions under which the convergence rate of (6) achieves the optimal order four.
Theorem 1.
Proof.
The proof of this theorem is similar to the proofs of convergence order for similar schemes in the literature; see, e.g., ref. [11]. It is hence omitted, and we state the final error equation, which can be written as follows:
where the constants are obtained using the stated relations. Hence, the fourth-order convergence is established. The proof is ended. □
Some of the functions that satisfy Theorem 1 are as follows:
By considering a new accelerator, the following two-step method can be obtained:
The method (6) can also be extended, by adding two further accelerator parameters, into the following three-parameter without-memory method:
Theorem 2.
Proof.
This is proved as in Theorem 1; hence, it is omitted. □
We also note here that (12) can be rewritten as follows:
3. Further Improvements via the Concept of Methods with Memory
3.1. One-Parametric Method
It is observed from (7) that the convergence order of the presented methods (6) is four for any fixed value of the parameter. We can approximate this parameter recursively as follows:
where are defined as follows:
Combining (6) with (14), one is able to propose a family of two-point Ostrowski–Steffensen-type methods with memory as follows:
Theorem 3.
Proof.
The matrix approach, discussed initially in [12], is now used to obtain the rate of convergence for such an accelerated method. Recall that, for the single-step s-point procedure (14), the lower bound for the rate of convergence is the spectral radius of the associated matrix, so that for the method we have:
Then the spectral radius of the resulting matrix is the lower bound for the s-step method. Each of the estimates can be stated as a function of the information available from the k-th iterate and the past iterates. From the relations (16) and (17), we create the corresponding matrices as follows:
Hence, we obtain
for which the relevant eigenvalue shows that the rate of convergence of the methods with memory (16) is six. The proof is now complete. □
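The self-acceleration mechanism can be seen in miniature in the classical Steffensen method with memory going back to Traub [27]: updating the free parameter β from data already computed raises the order from 2 to about 1 + √2 ≈ 2.414 with no extra function evaluations. This is an illustration of the principle only, not the family (16) itself:

```python
def steffensen_with_memory(f, x0, beta0=0.01, tol=1e-12, max_iter=50):
    """Steffensen's method with a self-accelerating parameter: beta is
    refreshed each cycle as -1/f[x_k, w_k], reusing stored data only."""
    x, beta = x0, beta0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0.0:
            return x
        w = x + beta * fx
        fw = f(w)
        if fw == fx:                   # w collapsed onto x: converged
            return x
        dd = (fw - fx) / (w - x)       # divided difference f[x, w]
        x_new = x - fx / dd            # Steffensen step
        beta = -1.0 / dd               # memory: accelerate the next step
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = steffensen_with_memory(lambda x: x * x - 2.0, 1.0)
```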
3.2. Two-Parametric Method
Now, similarly to the prior case, we build the following derivative-free method with memory from (9):
Theorem 4.
Let the initial approximation be close enough to a simple zero of the function. If the parameters are computed recursively, then the R-order of convergence of (18) is at least 7.
Proof.
Using the appropriate matrices, as in the proof of Theorem 3, and substituting them into the target matrix, we obtain that (18) has seventh order of convergence. In fact, the proof is similar to that of Theorem 3 and is hence omitted. □
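The matrix technique of Herzberger [12] invoked in these proofs can be illustrated on the simplest method with memory, the secant iteration, whose associated matrix [[1, 1], [1, 0]] (the new error depends on the last two errors, each to the first power) has spectral radius (1 + √5)/2 ≈ 1.618, the well-known R-order of the secant method:

```python
import numpy as np

def spectral_radius(m):
    """Largest modulus among the eigenvalues of a matrix."""
    return max(abs(ev) for ev in np.linalg.eigvals(np.asarray(m, dtype=float)))

# Herzberger-type matrix of the secant method.
rho = spectral_radius([[1, 1], [1, 0]])
print(round(rho, 6))   # the golden ratio, about 1.618034
```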
3.3. Tri-Parametric Method
The method (10) with memory can be expressed as follows:
Now, we establish a theorem for determining the convergence rate of (19).
Theorem 5.
Proof.
From the relation (19) and similar to that used in Theorem 3, we construct the corresponding matrix as follows:
and its relevant eigenvalue gives the rate of convergence. □
We will present the process of the work in the following four parts, up to the maximum degree of convergence:
- (I)
- Now, we consider three-parameter iterative methods as follows:
Theorem 6.
Under the same conditions as in Theorems 1 and 5, (20) converges to the sought root with rate of convergence 7.77.
Proof.
In a similar fashion, one obtains that
for which the relevant eigenvalue shows that the order of the with-memory methods (20) is 7.77. □
- (II)
- Now, we study tri-parametric iterative methods as follows:
Theorem 7.
Proof.
Proving this theorem is similar to that of Theorem 3. □
- (III)
- Now, we consider three-parameter iterative methods as follows, with the accelerating parameters defined by:
Theorem 8.
Under the same assumptions as in Theorem 1, the proposed family with memory defined by (22) has R-order 7.94.
Proof.
From the relation (22), and similarly to the previous section, we derive the associated matrices as follows:
So, we obtain
The only positive real eigenvalue of the matrix M is 7.94. It follows that the convergence rate of (22) is 7.94. □
- (IV)
- At the end of this section, we present the most important theorem of this paper, which gives the highest degree of convergence of an Ostrowski-like two-point method, i.e., 7.97:
Theorem 9.
With the hypotheses of Theorem 3, and provided the three parameters, including λ, are calculated recursively, (24) has R-order 7.97 and its efficiency index is 7.97^{1/3} ≈ 1.997.
Proof.
From the relation (24) and similar to that used in the previous section, we construct the corresponding matrices as follows:
Thus, we get
and its relevant eigenvalue is 7.97. This states that 7.97 is the analytical order and the efficiency index is 7.97^{1/3} ≈ 1.997. □
4. Numerical Results
The principal purpose of the numerical examples is to verify the validity of the theoretical developments through a variety of test examples using high-accuracy computations. All computations were performed in Mathematica 11 [13].
In the tables, the abbreviations Div, TNE and Iter are used as follows:
- TNE: Total Number of Evaluations required for a method to do the specified iterations;
- Iter: The number of iterations;
- The errors of the approximations to the sought simple zeros;
- The computational order of convergence (COC) [14] can be calculated via: COC ≈ ln(|x_{k+1} − x_k| / |x_k − x_{k−1}|) / ln(|x_k − x_{k−1}| / |x_{k−1} − x_{k−2}|).
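The computational order of convergence can be evaluated from three consecutive differences of iterates; a minimal sketch using Newton’s method on f(x) = x² − 2 (our own test choice), for which the COC should approach 2:

```python
import math

def coc(xs):
    """Computational order of convergence from the last four iterates."""
    d = [abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
    return math.log(d[-1] / d[-2]) / math.log(d[-2] / d[-3])

# Generate Newton iterates for f(x) = x^2 - 2 starting at x0 = 1.
xs = [1.0]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))

print(round(coc(xs), 2))   # close to 2, the order of Newton's method
```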
We shall check the effectiveness of the new without- and with-memory methods. We employ the presented methods (6), (16), (18), (19) and (24), denoted by TM4, TM6, TM7, TM7.5 and TM8, respectively (for ), to solve some nonlinear equations. We compared our methods with some known methods as follows: Campos et al. (CCTVM) [15], Choubey–Jaiswal (CJM) [16], Chun’s method (CM) [17], Cordero et al. (CLKTM) [18], Cordero et al. (CLTAM) [19], Jaiswal’s method (JM) [20], Jarratt’s method (JM) [21], Kung–Traub’s method (KTM) [6], Maheshwari’s method (MM) [22], Kansal et al.’s method (KKBM) [23], Lalehchini et al.’s method (LLMM) [24], Mohammadi et al.’s method (MLAM) [25], Ostrowski’s method (OM) [5], Soleymani et al.’s method (SLTKM) [11], Torkashvand–Kazemi (TKM) [26], Traub’s method (TM) [27], Wang’s method (WM) [28] and Zafar et al.’s method (ZYKZM) [29].
In Table 1, Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7, we show the numerical results obtained by applying the different methods with memory for approximating the solutions of the test equations, given as follows:
Table 1.
Comparison of various iterative schemes (first part).
Table 2.
Comparison of various iterative schemes (second part).
Table 3.
Comparison of various iterative schemes (third part).
Table 4.
Comparison of various iterative schemes (fourth part).
Table 5.
Comparison of various iterative schemes (fifth part).
Table 6.
Comparison of various iterative schemes (sixth part).
Table 7.
Comparison of various iterative schemes (seventh part).
Here, ≈ stands for an approximation of the solution, written only to provide an overview of the solution. All calculations are performed using 2000-digit floating-point arithmetic in Wolfram Mathematica; that is, we care about numbers of very small magnitude and do not allow the programming package to treat them as zero automatically. In fact, higher orders can only be observed during the convergence phase, and most clearly under high-precision computing. The numerical results shown in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7 confirm the theoretical discussion and the efficiency of the proposed scheme under different choices of the weight functions.
The question may now arise: do we really need such small numbers (e.g., 2.83E-1148 in Table 7)? The answer is ’no’; in applications, results up to at most 100 digits are often enough. However, here we used such high-precision floating-point arithmetic on purpose, to check the computational order of convergence (25). In fact, for higher-order methods, the higher speed can only be observed in the number of meaningful decimal places when the method takes several iterations.
In addition, in Table 8 a comparison among various schemes is given, which again shows that the proposed methods with memory possess a higher computational efficiency index and can be employed in solving nonlinear equations.
Table 8.
Comparison improvement of convergence order of the proposed method with other schemes.
We end this section by pointing out that the extension of our methods to systems of nonlinear equations (see some applications of nonlinear systems in [31,32,33]) requires the computation of a divided difference operator (DDO), which would be a dense matrix. This dense structure of the DDO matrix restricts the usefulness of such methods, so we consider our proposed methods only for the scalar case.
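The divided difference operator mentioned above, in its standard first-order form [x, y; F]_{ij} = (F_i(y_1, …, y_j, x_{j+1}, …, x_n) − F_i(y_1, …, y_{j−1}, x_j, …, x_n)) / (y_j − x_j), is an n × n dense matrix even when F itself is sparse; a sketch:

```python
import numpy as np

def divided_difference_operator(F, x, y):
    """Standard first-order divided difference operator [x, y; F]:
    a dense n-by-n matrix needing O(n^2) component evaluations of F."""
    n = len(x)
    M = np.empty((n, n))
    for j in range(n):
        upper = np.concatenate([y[: j + 1], x[j + 1 :]])
        lower = np.concatenate([y[:j], x[j:]])
        # Columns j: the two argument vectors differ only in position j.
        M[:, j] = (F(upper) - F(lower)) / (y[j] - x[j])
    return M

# Sanity check: for a linear map F(v) = A v the operator reproduces A exactly.
A = np.array([[2.0, 1.0], [0.0, 3.0]])
F = lambda v: A @ v
D = divided_difference_operator(F, np.array([1.0, 2.0]), np.array([1.5, 2.5]))
```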
5. Conclusions
In this paper, we have used the idea of the weight function and turned Ostrowski’s method into a method of optimal order. We constructed derivative-free methods using the same number of evaluations, without requiring the calculation of a function derivative. Then, by means of accelerator parameters and the eigenvalues of matrices, we created with-memory methods of higher orders. Via interpolatory accelerator parameters, the methods with memory reached up to 100% convergence improvement. Numerical tests verified the better performance of the proposed methods over the others. Employing such an efficient numerical scheme for practical problems in solving stochastic differential equations [34] is worth investigating in future work.
Author Contributions
Conceptualization, M.Z.U.; Data curation, V.T.; Formal analysis, M.Z.U., S.S. and M.A.; Funding acquisition, S.S.; Investigation, M.Z.U., V.T. and M.A.; Methodology, M.Z.U. and V.T.; Project administration, V.T. and M.A.; Supervision, V.T. and S.S.; Visualization, M.A. All authors have read and agreed to the published version of the manuscript.
Funding
The Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia has funded this project, under grant no. (KEP-48-130-42).
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
For data availability statement, we state that data sharing is not applicable to this article as no new data were used in this study. All used data has clearly been mentioned in the text.
Acknowledgments
The Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia has funded this project, under grant no. (KEP-48-130-42).
Conflicts of Interest
The authors declare no conflict of interest.
References
- Liu, L.; Cho, S.Y.; Yao, J.C. Convergence analysis of an inertial Tseng’s extragradient algorithm for solving pseudomonotone variational inequalities and applications. J. Nonlinear Var. Anal. 2021, 5, 627–644. [Google Scholar]
- Alsaedi, A.; Broom, A.; Ntouyas, S.K.; Ahmad, B. Existence results and the dimension of the solution set for a nonlocal inclusions problem with mixed fractional derivatives and integrals. J. Nonlinear Funct. Anal. 2020, 2020, 28. [Google Scholar]
- Itkin, A.; Soleymani, F. Four-factor model of quanto CDS with jumps-at-default and stochastic recovery. J. Comput. Sci. 2021, 54, 101434. [Google Scholar] [CrossRef]
- Soheili, A.R.; Amini, M.; Soleymani, F. A family of Chaplygin-type solvers for Itô stochastic differential equations. Appl. Math. Comput. 2019, 340, 296–304. [Google Scholar] [CrossRef]
- Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1960. [Google Scholar]
- Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. ACM 1974, 21, 643–651. [Google Scholar] [CrossRef]
- King, R.F. A family of fourth order methods for nonlinear equations. SIAM J. Numer. Anal. 1973, 10, 876–879. [Google Scholar] [CrossRef]
- Sharma, J.R.; Sharma, R. A new family of modified Ostrowski’s methods with accelerated eighth order convergence. Numer. Algorithms 2010, 54, 445–458. [Google Scholar] [CrossRef]
- Chun, C.; Lee, M.Y. A new optimal eighth-order family of iterative methods for the solution of nonlinear equations. Appl. Math. Comput. 2013, 223, 509–519. [Google Scholar] [CrossRef]
- Torkashvand, V.; Lotfi, T.; Araghi, M.A.F. A new family of adaptive methods with memory for solving nonlinear equations. Math. Sci. 2019, 13, 1–20. [Google Scholar] [CrossRef] [Green Version]
- Soleymani, F.; Lotfi, T.; Tavakoli, E.; Haghani, F.K. Several iterative methods with memory using self-accelerators. Appl. Math. Comput. 2015, 254, 452–458. [Google Scholar] [CrossRef]
- Herzberger, J. Über Matrixdarstellungen für Iterationsverfahren bei nichtlinearen Gleichungen. Computing 1974, 12, 215–222. [Google Scholar] [CrossRef]
- Don, E. Schaum’s Outline of Mathematica; McGraw-Hill Professional: New York, NY, USA, 2000. [Google Scholar]
- Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations; Elsevier: Amsterdam, The Netherlands, 2013. [Google Scholar]
- Campos, B.; Cordero, A.; Torregrosa, J.R.; Vindel, P. Stability of King’s family of iterative methods with memory. J. Comput. Appl. Math. 2017, 318, 504–514. [Google Scholar] [CrossRef] [Green Version]
- Choubey, N.; Jaiswal, J.P. Two- and three-point with memory methods for solving nonlinear equations. Numer. Anal. Appl. 2017, 10, 74–89. [Google Scholar] [CrossRef]
- Chun, C. Some fourth-order iterative methods for solving nonlinear equations. Appl. Math. Comput. 2008, 195, 454–459. [Google Scholar] [CrossRef]
- Cordero, A.; Lotfi, T.; Khoshandi, A.; Torregrosa, J.R. An efficient Steffensen-like iterative method with memory. Bull. Math. Soc. Sci. Math. Roum. 2015, 58, 49–58. [Google Scholar]
- Cordero, A.; Lotfi, T.; Torregrosa, J.R.; Assari, P.; Mahdiani, K. Some new bi-accelarator two-point methods for solving nonlinear equations. Comput. Appl. Math. 2016, 35, 251–267. [Google Scholar] [CrossRef]
- Jaiswal, J.P. Two efficient bi-parametric derivative free with memory methods for finding simple roots nonlinear equations. J. Adv. Appl. Math. 2016, 1, 203–210. [Google Scholar] [CrossRef]
- Jarratt, P. Some fourth order multipoint iterative methods for solving equations. Math. Comput. 1966, 20, 434–437. [Google Scholar] [CrossRef]
- Maheshwari, A.K. A fourth-order iterative method for solving nonlinear equations. Appl. Math. Comput. 2009, 211, 383–391. [Google Scholar] [CrossRef]
- Kansal, M.; Kanwar, V.; Bhatia, S. Efficient derivative-free variants of Hansen-Patrick’s family with memory for solving nonlinear equations. Numer. Algorithms 2016, 73, 1017–1036. [Google Scholar] [CrossRef]
- Lalehchini, M.J.; Lotfi, T.; Mahdiani, K. On developing an adaptive free-derivative Kung and Traub’s method with memory. J. Math. Ext. 2020, 14, 221–241. [Google Scholar]
- Zadeh, M.M.; Lotfi, T.; Amirfakhrian, M. Developing two efficient adaptive Newton-type methods with memory. Math. Methods Appl. Sci. 2019, 42, 5687–5695. [Google Scholar] [CrossRef]
- Torkashvand, V.; Kazemi, M. On an efficient family with memory with high order of convergence for solving nonlinear equations. Int. J. Ind. Math. 2020, 12, 209–224. [Google Scholar]
- Traub, J.F. Iterative Methods for the Solution of Equations; Prentice Hall: New York, NY, USA, 1964. [Google Scholar]
- Wang, X. An Ostrowski-type method with memory using a novel self-accelerating parameter. J. Comput. Appl. Math. 2018, 330, 710–720. [Google Scholar] [CrossRef]
- Zafar, F.; Yasmin, N.; Kutbi, M.A.; Zeshan, M. Construction of tri-parametric derivative free fourth order with and without memory iterative method. J. Nonlinear Sci. Appl. 2016, 9, 1410–1423. [Google Scholar] [CrossRef]
- Wang, X.; Zhu, M. Two iterative methods with memory constructed by the method of inverse interpolation and their dynamics. Mathematics 2020, 8, 1080. [Google Scholar] [CrossRef]
- Zhao, Y.-L.; Zhu, P.-Y.; Gu, X.-M.; Zhao, X.-L.; Jian, H.-Y. A preconditioning technique for all-at-once system from the nonlinear tempered fractional diffusion equation. J. Sci. Comput. 2020, 83, 10. [Google Scholar] [CrossRef] [Green Version]
- Zhao, Y.-L.; Gu, X.-M.; Ostermann, A. A preconditioning technique for an all-at-once system from volterra subdiffusion equations with graded time steps. J. Sci. Comput. 2021, 88, 11. [Google Scholar] [CrossRef]
- Gu, X.M.; Wu, S.L. A parallel-in-time iterative algorithm for Volterra partial integro-differential problems with weakly singular kernel. J. Comput. Phys. 2020, 417, 109576. [Google Scholar] [CrossRef]
- Ernst, P.A.; Soleymani, F. A Legendre-based computational method for solving a class of Itô stochastic delay differential equations. Numer. Algorithms 2019, 80, 1267–1282. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).