1. Introduction
Many problems from biology, chemistry, economics, engineering, mathematics, and physics reduce to a mathematical expression of the following form
Here, is differentiable, is a Banach space, and is nonempty and open. Closed-form solutions are rarely available, so iterative methods [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16] are used, converging to the solution .
In particular, we propose the following new scheme
where is an initial point, is a free parameter, and is a divided difference of order one.
We shall present two convergence analyses, and then demonstrate the advantages over other methods that use similar information.
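The scheme builds on the divided-difference idea behind the classical scalar Steffensen iteration, which methods of this type generalize to Banach spaces. As a point of reference, here is a minimal sketch of that classical iteration (the test function and starting point are purely illustrative, not taken from the text):

```python
def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Classical Steffensen iteration: derivative-free, quadratic order.

    Replaces f'(x) in Newton's method with the first-order divided
    difference [x, x + f(x); f] = (f(x + f(x)) - f(x)) / f(x).
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        denom = f(x + fx) - fx          # divided difference times f(x)
        if denom == 0.0:
            raise ZeroDivisionError("degenerate divided difference")
        x = x - fx * fx / denom         # x - f(x) / [x, x+f(x); f]
    return x

# Illustrative use: the simple zero of g(t) = t^2 - 2 near t = 1.5.
root = steffensen(lambda t: t * t - 2.0, 1.5)
```

No derivative evaluations are needed, which is the practical appeal that the Banach-space schemes compared later share.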
2. Local Convergence Analysis I
We assume that . We study the local convergence of method (2) by using standard Taylor expansions [9].
Theorem 1. Suppose that the mapping F is sufficiently differentiable on Ω, with , a simple zero of F. We also assume that the inverse of F, , exists. Then, provided that is close enough to . Moreover, the convergence order is six.
Proof. Set and , where , . We first use Taylor series expansions for and : and , respectively.
Using expressions (3) and (4) in the first substep of scheme (2), we have where
Secondly, we expand
In view of (3)–(6), the second substep of scheme (2) gives where
Thirdly, we need the expansions for and Hence, by (5) and (8), we get which, together with the third substep of method (2), leads to where ☐
According to Theorem 1, the applicability of method (2) is limited to mappings F with derivatives up to the seventh order.
Now, we choose , and define a function f as follows:
We have the following derivatives of the function f.
However, is not bounded on , so the analysis of Section 2 cannot be used. In this case, a more general alternative is given in the upcoming section.
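As a concrete illustration of this phenomenon (our own choice, not necessarily the function defined above), consider f(t) = t³ ln t² + t⁵ − t⁴ on Ω = [−1/2, 3/2], a standard example in this literature with simple zero t* = 1, whose third derivative f‴(t) = 6 ln t² + 60t² − 24t + 22 is unbounded as t → 0:

```python
import math

# Hypothetical motivating example: f(t) = t^3 ln(t^2) + t^5 - t^4,
# with simple zero t* = 1.  Its third derivative,
#   f'''(t) = 6 ln(t^2) + 60 t^2 - 24 t + 22,
# blows up as t -> 0, so Taylor-based analyses requiring bounded
# higher derivatives fail on any neighborhood of the origin.
def f3(t):
    return 6.0 * math.log(t * t) + 60.0 * t * t - 24.0 * t + 22.0

# |f'''| grows without bound as t approaches 0.
samples = [f3(10.0 ** (-k)) for k in (1, 3, 6, 9)]
```

This is exactly the situation the weaker, first-derivative-only hypotheses of the next section are designed to handle.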
3. Local Convergence Analysis II
Consider and . Let be an increasing continuous map with . Suppose that the equation has as its smallest positive zero. In addition, we assume that is an increasing continuous map with . Consider the functions and defined on the semi-open interval as follows:
By these definitions, we have and as . Subsequently, the intermediate value theorem assures that the function has at least one zero in . Let be the minimal such zero.
The expression has as its smallest positive zero . Set .
We define the functions and on the interval in the following way: We obtain and , since . Let stand for the minimal such zero of the function on .
The equation has as its smallest positive solution. Set . Define the functions and on as
We obtain and as . Let denote the minimal zero of on . Moreover, define
Accordingly, we have and for all .
Here, denotes the open ball centered at and of radius . By , we denote the closure of .
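In practice, the radii defined above are the smallest positive zeros of scalar equations built from the w-functions, and can be located numerically once those functions are chosen. A sketch, assuming the hypothetical Lipschitz-type choice w0(t) = L0·t (so that w0(t) = 1 has the root 1/L0):

```python
def smallest_positive_root(g, hi, samples=10000, tol=1e-12):
    """Locate the smallest positive root of g on (0, hi], assuming
    g(0) < 0: scan for the first sign change, then bisect."""
    step = hi / samples
    a, ga = 0.0, g(0.0)
    for i in range(1, samples + 1):
        b = i * step
        gb = g(b)
        if ga * gb <= 0.0:            # sign change: root in [a, b]
            while b - a > tol:
                m = 0.5 * (a + b)
                if g(a) * g(m) <= 0.0:
                    b = m
                else:
                    a = m
            return 0.5 * (a + b)
        a, ga = b, gb
    raise ValueError("no sign change found on (0, hi]")

# Hypothetical w0(t) = L0 * t with L0 = 2: w0(t) - 1 = 0 at t = 0.5.
rho = smallest_positive_root(lambda t: 2.0 * t - 1.0, 1.0)
```

The same scan-and-bisect routine applies to each of the scalar functions whose minimal zeros define the convergence radius r.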
We use the following conditions in order to study the local convergence:
- (a1) is a Fréchet-differentiable operator and is a divided difference of order one. In addition, we assume that is a simple zero of F. Finally, the inverse of the operator F, , exists.
- (a2) Let be an increasing continuous function with , and parameters and , such that for each Set , where exists and is given by (12).
- (a3) We assume that is an increasing continuous function.
- (a4) , where , r is defined in (15), and exist and are given by (13) and (14), respectively.
- (a5) There exists , such that Set .
Theorem 2. Under the hypotheses , further suppose that . Then, the following assertions hold and In addition, is the unique solution of in the set mentioned in hypothesis .
Proof. We first show items (20)–(24) by mathematical induction. Because hold and by condition , we have and so and belong to . Next, for , and so the Banach lemma on invertible operators [3,4,5,12] gives , and It also follows that is well defined.
Adopting (15), (16), (19) (for ), , (25) and , we get so (for ) and (22) holds for .
so and It also follows that is well defined by the second substep of method (2) for . In particular, we have
Next, by (15), (19) (for ) and (25)–(28), we get, in turn, that so (for ) and (23) holds for .
We have, by (15), (18) and (29), Accordingly, and It also follows that is well defined by (30) and the last substep of method (2) for . Then, as in (25) and (26) (for ) and (30), we obtain, in turn,
so (for ) and (24) holds for . Subsequently, we substitute , , by , , , respectively. Hence, the induction for (30) and (22)–(24) is complete. Using the estimate where , we deduce that and .
Finally, we show that the solution is unique. Let for , so that . Then, by and , we get so . Finally, is deduced from . ☐
Remark 1. Another way of defining functions and radii is as follows:
Let . Subsequently, as in (12)–(18), we shall have instead: Suppose that the equation has a smallest positive solution . Let be an increasing continuous function with . Let the functions and be defined in the interval by Let stand for the smallest positive root of in . Moreover, define the functions and on the closed interval , as follows: Let and be the minimal positive roots of and on the closed interval , respectively. Subsequently, Theorem 2 can be written by using the "bar" conditions and functions, with .
Remark 2. The convergence of method (2) to is established under the conditions of Theorem 1. However, the order of convergence under the conditions of Theorem 2 can be established by using the computational order of convergence (COC) and the approximate computational order of convergence (ACOC) (for details, please see Section 5).
4. Numerical Examples
Here, we verify the convergence conditions on three problems (1)–(3). We choose in the examples. We confirm the hypotheses of Theorem 2 for the given choices of the functions and the parameters a and b.
Example 1. Here, we investigate the application of our results to Hammerstein integral equations (see [9], pp. 19–20) for , as follows: where We use in (34), where and are the abscissas and weights, respectively. Using for leads to The values of and when are illustrated in Table 1. Subsequently, we have Accordingly, we set and . The radii for Example 1 are listed in Table 2 and Table 3.
Example 2. Here, we choose the integral equation [17,18], for , as where Because , is given as Moreover, so , since . Therefore, our results can be utilized even though is not bounded on Ω. The radii for Example 2 are given in Table 4.
Example 3. We consider the following differential equations, which characterize the movement of a molecule in 3D, with for . The required solution is given as It follows from (41) that which yields The radii for Example 3 are given in Table 5 and Table 6.
Example 4. For the example of Section 2, with , we get The radii for Example 4 are given in Table 7 and Table 8.
5. Applications with Large Systems
We choose and in our scheme (2), denoted by and , respectively. Now, we compare our schemes with the sixth-order iterative methods suggested by Abbasbandy et al. [19] and Hueso et al. [20]; among them, we picked methods (8) and (14)–(15), respectively, denoted by and . Moreover, we compare them with the sixth-order iterative method given by Wang and Li [21]; among their methods, we chose expression (6), denoted by . Finally, we contrast (2) with the sixth-order scheme given by Sharma and Arora [22]; we pick expression (13), denoted by . The details of all the iterative expressions are given as follows:
method :
scheme :
where and are real numbers.
scheme :
where and .
The , , , and stand for the iteration index, the absolute residual error of the function F, the error between two successive iterations, and the computational convergence order, respectively. Their values are listed in Table 9, Table 10 and Table 11. Moreover, the quantity is the final obtained value of .
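The computational convergence order reported in the tables can be estimated without knowing the exact solution via the ACOC, which uses only differences of consecutive iterates. A minimal sketch, applied to hypothetical scalar Newton iterates (for which the estimate should come out close to the theoretical order 2):

```python
import math

def acoc(xs):
    """Approximate computational order of convergence (ACOC) from the
    last four iterates x_{k-2}, x_{k-1}, x_k, x_{k+1}:
        p ~ ln(e_{k+1} / e_k) / ln(e_k / e_{k-1}),
    where e_k = |x_k - x_{k-1}| (no exact solution needed)."""
    e1 = abs(xs[-1] - xs[-2])
    e2 = abs(xs[-2] - xs[-3])
    e3 = abs(xs[-3] - xs[-4])
    return math.log(e1 / e2) / math.log(e2 / e3)

# Hypothetical data: Newton's method on f(t) = t^2 - 2 from t0 = 1.5.
xs = [1.5]
for _ in range(5):
    t = xs[-1]
    xs.append(t - (t * t - 2.0) / (2.0 * t))
order = acoc(xs[:5])  # stop before rounding flattens the differences
```

For vector iterates, the absolute values are replaced by norms; the same formula then yields the tabulated order.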
All of the above quantities have been computed with Mathematica 9. To minimize round-off errors, we chose multiple-precision arithmetic with 1000 digits of mantissa. The term symbolizes in all mentioned tables. We adopted the command "AbsoluteTiming[]" to calculate the CPU time. We ran our programs three times and report the average CPU time in
Table 12, where the times for each iterative method can be observed; we point out that, for large problems, the method uses the least time, making it very competitive. The configuration of the computer used is given below:
Processor: Intel(R) Core(TM) i7-4790 CPU @ 3.60 GHz
Make: HP
RAM: 8.00 GB
System type: 64-bit-Operating System, x64-based processor.
Example 5. Here, we deal with a boundary value problem from Ortega and Rheinboldt [9], given by We assume a partition of the interval and . Now, we discretize expression (46) by adopting the following numerical formulas for the derivatives, which leads to a system of nonlinear equations.
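The discretization step can be sketched on a hypothetical BVP (our own illustrative choice, not the problem above): y″ = y³ on [0, 1], y(0) = 0, y(1) = 1. Replacing y″ by the standard central difference turns the BVP into a nonlinear system F(Y) = 0 in the interior unknowns, to which a scheme such as (2) can then be applied:

```python
def bvp_residual(y, n):
    """Residual F(Y) for the hypothetical BVP y'' = y^3 on [0, 1],
    y(0) = 0, y(1) = 1, discretized by the central difference
    (y[i-1] - 2 y[i] + y[i+1]) / h^2 at n interior nodes."""
    h = 1.0 / (n + 1)
    full = [0.0] + list(y) + [1.0]          # attach boundary values
    return [
        (full[i - 1] - 2.0 * full[i] + full[i + 1]) / (h * h)
        - full[i] ** 3
        for i in range(1, n + 1)
    ]

# For the linear profile y_i = i * h, the central difference of y''
# vanishes, so the residual reduces to -y_i^3 at each interior node.
n = 4
h = 1.0 / (n + 1)
res = bvp_residual([i * h for i in range(1, n + 1)], n)
```

Choosing a large n yields the large nonlinear systems on which the CPU-time comparisons below are made.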
For a specific value of , we have a system, and the required solution is The computational estimations are listed in Table 9, on the basis of the initial approximation .
Example 6. The classical two-dimensional (2D) Bratu problem [23,24] is given by By adopting a finite-difference discretization, we can reduce the above PDE (48) to a nonlinear system. For this purpose, we denote by the numerical solution at the grid points of the mesh. In addition, and stand for the numbers of steps in the directions of μ and θ, respectively, and h and k are the respective step sizes in those directions. Applying the following central difference formulas to and leads to For obtaining a large system of , we choose and . The numerical results are listed in Table 10, based on the initial guess .
Example 7. Let us consider the following nonlinear system: For the specific value , we have a system, and we chose the following starting point: The required solution of the system is . Table 11 provides the numerical results.
Remark 3. On the basis of Table 9, Table 10 and Table 11, we conclude that our methods, namely and , perform better than the existing schemes and in terms of residual errors, errors between two consecutive iterations, and the asymptotic error constant. In addition, our methods demonstrate a stable computational order of convergence. Finally, our methods not only outperform the existing methods in the numerical results, but also take half of the CPU time of the other existing methods (the results can be found in Table 12).
6. Conclusions
We presented a new family of Steffensen-type methods with one parameter. The local convergence is studied in Section 2 using Taylor expansions and derivatives up to order seven, when . To extend the applicability of these iterative methods, we use only hypotheses on the first derivative in Section 3, with Banach space valued operators. In this way, we also find computable error bounds on as well as uniqueness results based on generalized Lipschitz-type real functions. Numerical examples and favorable comparisons with other methods can be found in Section 4.
Author Contributions
M.Z.U.: Validation, Review & Editing; R.B. and I.K.A.: Conceptualization, Methodology, Validation, Writing—Original Draft Preparation, Writing—Review & Editing. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the Research and Development Office (RDO) at the Ministry of Education, Kingdom of Saudi Arabia, grant no. HIQI-22-2019.
Acknowledgments
This project was funded by the Research and Development Office (RDO) at the Ministry of Education, Kingdom of Saudi Arabia, Grant No. HIQI-22-2019. The authors also acknowledge with thanks the Research and Development Office (RDO-KAU) at King Abdulaziz University for technical support.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Amat, S.; Bermudez, C.; Hernández-Verón, M.A.; Martínez, E. On an efficient k-step iterative method for nonlinear equations. J. Comput. Appl. Math. 2016, 302, 258–271.
- Argyros, I.K. Convergence and Applications of Newton-Type Iterations; Springer: New York, NY, USA, 2008.
- Argyros, I.K.; George, S. Mathematical Modeling for the Solution of Equations and Systems of Equations with Applications; Nova Publishers: New York, NY, USA, 2019; Volume III.
- Argyros, I.K.; Hilout, S. Weaker conditions for the convergence of Newton's method. J. Complex. 2012, 28, 364–387.
- Argyros, I.K.; Magrenan, A.A. A Contemporary Study of Iterative Methods; Academic Press: Cambridge, MA, USA; Elsevier: Amsterdam, The Netherlands, 2018.
- Cordero, A.; Torregrosa, J.R. Low-complexity root finding iteration functions with no derivatives of any order of convergence. J. Comput. Appl. Math. 2015, 275, 502–515.
- Ezquerro, J.A.; Hernández, M.A. How to improve the domain of starting points for Steffensen's method. Stud. Appl. Math. 2014, 132, 354–380.
- Potra, F.A.; Pták, V. Nondiscrete Induction and Iterative Processes; Pitman Advanced Publishing Program: Boston, MA, USA, 1984; Volume 103.
- Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970.
- Rheinboldt, W.C. An adaptive continuation process for solving systems of equations. Pol. Acad. Sci. Banach Cent. Publ. 1978, 3, 129–142.
- Sharma, J.R.; Guha, R.K.; Sharma, R. An efficient fourth-order weighted Newton method for systems of nonlinear equations. Numer. Algorithms 2013, 62, 307–325.
- Traub, J.F. Iterative Methods for the Solution of Equations; American Mathematical Society: Providence, RI, USA, 1982.
- Džunić, J.; Petković, M.S. A cubically convergent Steffensen-like method for solving nonlinear equations. Appl. Math. Lett. 2012, 25, 1881–1886.
- Alarcón, V.; Amat, S.; Busquier, S.; López, D.J. A Steffensen's type method in Banach spaces with applications on boundary-value problems. J. Comput. Appl. Math. 2008, 216, 243–250.
- Behl, R.; Argyros, I.K.; Machado, J.A.T. Ball comparison between three sixth order methods for Banach space valued operators. Mathematics 2020, 8, 667.
- Iliev, A.; Kyurkchiev, N. Nontrivial Methods in Numerical Analysis: Selected Topics in Numerical Analysis; LAP LAMBERT Academic Publishing: Saarbrücken, Germany, 2010; ISBN 978-3-8433-6793-6.
- Ezquerro, J.A.; Hernández, M.A. New iterations of R-order four with reduced computational cost. BIT Numer. Math. 2009, 49, 325–342.
- Hernández, M.A.; Martinez, E. On the semilocal convergence of a three steps Newton-type process under mild convergence conditions. Numer. Algor. 2015, 70, 377–392.
- Abbasbandy, S.; Bakhtiari, P.; Cordero, A.; Torregrosa, J.R.; Lotfi, T. New efficient methods for solving nonlinear systems of equations with arbitrary even order. Appl. Math. Comput. 2016, 287, 287–288.
- Hueso, J.L.; Martínez, E.; Teruel, C. Convergence, efficiency and dynamics of new fourth and sixth order families of iterative methods for nonlinear systems. J. Comput. Appl. Math. 2015, 275, 412–420.
- Wang, X.; Li, Y. An efficient sixth-order Newton-type method for solving nonlinear systems. Algorithms 2017, 10, 45.
- Sharma, J.R.; Arora, H. Efficient Jarratt-like methods for solving systems of nonlinear equations. Calcolo 2014, 51, 193–210.
- Kapania, R.K. A pseudo-spectral solution of 2-parameter Bratu's equation. Comput. Mech. 1990, 6, 55–63.
- Simpson, R.B. A method for the numerical determination of bifurcation states of nonlinear systems of equations. SIAM J. Numer. Anal. 1975, 12, 439–451.
Table 1. Abscissas and weights for k = 8.
| j | Abscissa | Weight |
|---|---|---|
| 1 | | |
| 2 | | |
| 3 | | |
| 4 | | |
| 5 | | |
| 6 | | |
| 7 | | |
| 8 | | |
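Table 1's abscissas and weights are standard 8-point Gauss–Legendre data; they can be regenerated as follows, assuming (our assumption about the table's convention) the usual affine map from [−1, 1] to [0, 1] used for Hammerstein equations on that interval:

```python
import numpy as np

# 8-point Gauss-Legendre nodes and weights on [-1, 1].
t, w = np.polynomial.legendre.leggauss(8)

# Affine map to [0, 1]: u_j = (t_j + 1) / 2, weights scaled by 1/2.
u = 0.5 * (t + 1.0)
wu = 0.5 * w

# The mapped rule integrates polynomials of degree <= 15 exactly on [0, 1].
```

The weights then sum to the length of the interval, which is a quick sanity check on any tabulated rule.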
Table 2. Convergence radii for Example 1.
| | | | | r |
|---|---|---|---|---|
| 0 | 5.25452 | 3.87208 | 4.09301 | 3.87208 |
| 0.5 | 5.25452 | 4.26006 | 4.42602 | 4.26006 |
| 1 | 5.25452 | 5.25452 | 5.25452 | 5.25452 |
Table 3. Convergence radii for Example 1 with bar functions.
| | | | | r |
|---|---|---|---|---|
| 0 | 5.25452 | 3.67748 | 3.87626 | 3.67748 |
| 0.5 | 5.25452 | 4.07351 | 4.17413 | 4.07351 |
| 1 | 5.25452 | 5.25452 | 4.89162 | 4.89162 |
Table 4. Convergence radii for Example 2 with bar functions.
| | | | | r |
|---|---|---|---|---|
| 0 | 1.03137 | 0.502403 | 0.61211 | 0.502403 |
| 0.5 | 1.03137 | 0.61199 | 0.70738 | 0.61199 |
| 1 | 1.03137 | 1.03137 | 1.03137 | 1.03137 |
Table 5. Convergence radii for Example 3.
| | | | | r |
|---|---|---|---|---|
| 0 | 0.1388596 | 0.921375 | 0.083356 | 0.083356 |
| 0.5 | 0.1388596 | 0.921375 | 0.086297 | 0.086297 |
| 1 | 0.1388596 | 0.1388596 | 0.1388596 | 0.1388596 |
Table 6. Convergence radii for Example 3 with bar functions.
| | | | | r |
|---|---|---|---|---|
| 0 | 0.1388596 | 0.0487471 | 0.1229551 | 0.0487471 |
| 0.5 | 0.1388596 | 0.0487471 | 0.1377815 | 0.0487471 |
| 1 | 0.1388596 | 0.1388596 | 0.1380780 | 0.1380780 |
Table 7. Convergence radii for Example 4.
| | | | | r |
|---|---|---|---|---|
| 0 | 0.00344841 | 0.00239612 | 0.00256623 | 0.00239612 |
| 0.5 | 0.00344841 | 0.00267769 | 0.00280807 | 0.00267769 |
| 1 | 0.00344841 | 0.00344841 | 0.00344841 | 0.00344841 |
Table 8. Convergence radii for Example 4 with bar functions.
| | | | | r |
|---|---|---|---|---|
| 0 | 0.00344841 | 0.00225955 | 0.00246765 | 0.00225955 |
| 0.5 | 0.00344841 | 0.00225955 | 0.00246765 | 0.00225955 |
| 1 | 0.00344841 | 0.00344841 | 0.00334891 | 0.00344841 |
Table 9. Comparisons of different methods on a boundary value problem in Example 5.
| Methods | j | | | | |
|---|---|---|---|---|---|
| 1 | | | | |
2 | | | | |
3 | | | | |
| 1 | | | | |
2 | | | | |
3 | | | | |
| 1 | | | | |
2 | | | | |
3 | | | | |
| 1 | | | | |
2 | | | | |
3 | | | | |
| 1 | | | | |
2 | | | | |
3 | | | | |
| 1 | | | | |
2 | | | | |
3 | | | | |
| 1 | | | | |
2 | | | | |
3 | | | | |
Table 10. Comparisons of different methods on the two-dimensional (2D) Bratu problem in Example 6.
| Methods | j | | | | |
|---|---|---|---|---|---|
| 1 | | | | |
2 | | | | |
3 | | | | |
| 1 | | | | |
2 | | | | |
3 | | | | |
| 1 | | | | |
2 | | | | |
3 | | | | |
| 1 | | | | |
2 | | | | |
3 | | | | |
| 1 | | | | |
2 | | | | |
3 | | | | |
| 1 | | | | |
2 | | | | |
3 | | | | |
| 1 | | | | |
2 | | | | |
3 | | | | |
Table 11. Comparisons of different methods on Example 7.
| Methods | j | | | | |
|---|---|---|---|---|---|
| 1 | | | | |
2 | | | | |
3 | | | | |
| 1 | | | | |
2 | | | | |
3 | | | | |
| 1 | | | | |
2 | | | | |
3 | | | | |
| 1 | | | | |
2 | | | | |
3 | | | | |
| 1 | | | | |
2 | | | | |
3 | | | | |
| 1 | | | | |
2 | | | | |
3 | | | | |
| 1 | | | | |
2 | | | | |
3 | | | | |
Table 12. CPU time (in seconds) of different methods on Examples 5–7.
| Methods | Example 5 | Example 6 | Example 7 | Total Time | Average Time |
|---|---|---|---|---|---|
| 0.465330 | 210.079553 | 356.906591 | 567.451474 | 189.1504913 |
| 0.583412 | 189.541919 | 366.511753 | 556.637084 | 185.5456947 |
| 0.274193 | 128.377322 | 182.956711 | 311.608226 | 103.8694087 |
| 1.130812 | 126.641140 | 401.627979 | 529.399931 | 176.4666437 |
| 0.101071 | 120.094370 | 52.204957 | 172.400398 | 57.46679933 |
| 0.100071 | 117.901198 | 52.146903 | 170.148172 | 56.71605733 |
| 0.100083 | 117.923227 | 51.972773 | 169.996083 | 56.665361 |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).