Article

Estimating the Local Radius of Convergence for Picard Iteration

by
Ştefan Măruşter
Department of Informatics, West University of Timisoara, B-dul V. Parvan No. 4, 300223 Timisoara, Romania
Algorithms 2017, 10(1), 10; https://doi.org/10.3390/a10010010
Submission received: 10 October 2016 / Revised: 28 December 2016 / Accepted: 30 December 2016 / Published: 9 January 2017

Abstract

In this paper, we propose an algorithm to estimate the radius of convergence for the Picard iteration in the setting of a real Hilbert space. Numerical experiments show that the proposed algorithm provides convergence balls close to, or even identical to, the best ones. As the algorithm does not require evaluating the norms of derivatives, the computational effort is relatively low.

1. Introduction

The well-known Ostrowski theorem [1] gives a sufficient condition (the spectral radius of the Jacobian of the iteration mapping at the fixed point is less than 1) for the local convergence of the Picard iteration. “However, no estimate for the size of an attraction ball is known” [2] (2009). The problem of estimating the local radius of convergence for different iterative methods has been considered by numerous authors, and several results have been obtained, particularly for the Newton method and its variants. Nevertheless, “... effective, computable estimates for convergence radii are rarely available” [3] (1975). A similar remark was made in a more recent paper [4] (2015): “The location of starting approximations, from which the iterative methods converge to a solution of the equation, is a difficult problem to solve”. It is worth noticing that an attraction basin can be an unpredictable and complicated set, especially for high-order methods, and therefore finding a good ball of convergence for these methods is indeed a difficult task. Among the oldest known results on this topic we can mention those given by Vertgeim, Rall, Rheinboldt, Traub and Wozniakowski, Deuflhard and Potra, and Smale [3,5,6,7,8,9]. Relatively recent results were communicated by Argyros [10,11,12], Ferreira [13], Hernandez-Veron and Romero [4], Ren [14], and Wang [15].
Deuflhard and Potra [5] proposed the following estimate for the Newton method. Let $F : D \subseteq X \to Y$ be a nonlinear mapping, where $X, Y$ are Banach spaces. Suppose that $F$ is Fréchet differentiable on $D$, that $F'(x)$ is invertible for each $x \in D$, and that
$$\| F'(x)^{-1} \left( F'(x + s(y - x)) - F'(x) \right) (y - x) \| \le s\,\omega \|y - x\|^2,$$
for all $x, y \in D$, $s \in [0, 1]$. Under these conditions the equation $F(x) = 0$ has a solution $y^* \in D$. Let $y_0 \in D$ be such that
$$\bar B(y^*, \|y_0 - y^*\|) = \{ x : \|x - y^*\| \le \|y_0 - y^*\| \} \subseteq D$$
and $y_0 \in B(y^*, 2/\omega)$. Then the Newton method remains in $B(y^*, \|y_0 - y^*\|)$ and converges to $y^*$.
In [2] the author proposes a simple and elegant formula to estimate the radius of convergence for the Picard iteration, and the resulting value is presumptively sharp. More precisely, let $G : D \subseteq \mathbb{R}^m \to D$ be a nonlinear mapping and $x^*$ a fixed point of $G$. Suppose that $G$ is differentiable on some ball centred at $x^*$, $B_{r_1} = \{ x : \|x - x^*\| \le r_1 \}$, and that the derivative of $G$ satisfies
$$\|G'(x^*)\| \le q < 1, \qquad \|G'(x) - G'(y)\| \le k \|x - y\|^p, \quad x, y \in B_{r_1}.$$
Define
$$r_2 = \left( \frac{(1 + p)(1 - q)}{k} \right)^{1/p};$$
then $r = \min\{r_1, r_2\}$ is an estimate of the local convergence radius.
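This estimate is immediate to evaluate once $q$, $k$ and $p$ are known. A minimal sketch (Python; the numerical values below are our own illustration, not taken from [2]):

```python
def catinas_radius(q: float, k: float, p: float, r1: float) -> float:
    """Radius estimate r = min(r1, r2) of [2], where
    r2 = ((1 + p) * (1 - q) / k) ** (1 / p)."""
    r2 = ((1 + p) * (1 - q) / k) ** (1 / p)
    return min(r1, r2)

# Illustrative values: q = 0.5, Lipschitz case (p = 1), k = 2, r1 = 1.
print(catinas_radius(q=0.5, k=2.0, p=1.0, r1=1.0))  # prints 0.5
```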
Hernandez-Veron and Romero [4] gave the following algorithm for a third-order family of multi-point methods for solving nonlinear equations $F(x) = 0$ in Banach spaces [16]. Suppose that $p$ is a solution of the equation, that $F'(p)^{-1}$ exists with $\|F'(p)^{-1}\| \le \beta$, and that $F'$ is $k$-Lipschitz continuous on some $B_{r_0} = \{ x : \|x - p\| \le r_0 \}$. Let $\tilde r = \min\{r_0, r\}$, where $r = \zeta_0 / [(1 + \zeta_0) \beta k]$ and $\zeta_0$ is the positive real root of a certain algebraic equation of degree three. Then $\tilde r$ is a local convergence radius. A particular method of this class, obtained for a particular value of a numerical parameter of the general class and investigated in the present study, is the following one-point method:
$$x_{n+1} = x_n - F'(x_n)^{-1} \left[ F(x_n) + F\!\left( x_n - F'(x_n)^{-1} F(x_n) \right) \right]. \qquad (1)$$
In fact, this is a modified Newton method in which the derivative is re-evaluated only every second step. Note that, in this particular case, the equation giving the value of $\zeta_0$ is $x^3 + 4x^2 - 8 = 0$. We will call (1) the One point Ezquerro-Hernandez method.
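To make the scheme concrete, here is a minimal sketch of iteration (1) for a scalar equation (Python; the test function and the starting point are our own choices, not taken from [4] or [16]):

```python
def one_point_eh_step(f, df, x):
    """One step of iteration (1):
    x - f'(x)^(-1) * [ f(x) + f(x - f'(x)^(-1) * f(x)) ]."""
    d = df(x)
    y = x - f(x) / d              # inner Newton sub-step
    return x - (f(x) + f(y)) / d  # the same derivative is reused

# Illustrative example: f(x) = x^2 - 2, root sqrt(2), started at x = 1.5.
f, df = lambda x: x * x - 2.0, lambda x: 2.0 * x
x = 1.5
for _ in range(5):
    x = one_point_eh_step(f, df, x)
print(x)  # about 1.4142135623730951
```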
Remark 1.
The common way to state that an algorithm (formula) gives the best radius of convergence is to show that, for a particular function, there exists a point on the border of the convergence ball at which the conditions of convergence are not satisfied or at which the considered iteration fails to converge. Several authors who propose an algorithm (formula) and claim that the proposed radius is the best possible proceed in this way. It would be more convincing if the proposed algorithm were tested on some class of mappings, or at least on some test mappings.
Finding a good value for the local convergence radius is a rather difficult task, and finding the best one is far more difficult. We propose an algorithm to estimate the radius of convergence for the Picard iteration. Numerical experiments show that the proposed algorithm provides convergence balls close to, or even identical to, the best ones. It is analogous to (though different in some details from) the algorithm for the Mann iteration proposed in [17].
From the computational point of view, most algorithms involve the evaluation of the Hölder (Lipschitz) constant $k$ of the derivative of the iteration mapping or of the equation mapping. This evaluation entails solving a constrained optimization problem whose optimum must be global on some ball. Usually, solving this problem is not an easy task, whatever norm of the linear mapping is used, and using a rough value for $k$ diminishes to some extent the estimated convergence radius. The computational effort of the proposed algorithm, which uses only the vector norm and the scalar product, is presumptively lower.
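To see why, consider even a crude sampling estimate of the Lipschitz constant of a derivative: it already needs a double loop over points of the ball, and the result is only a lower bound for the global constant (a sketch, with our own sampling density; the user-supplied dG must return the Jacobian matrix):

```python
import numpy as np

def lipschitz_estimate(dG, p, r, n=40):
    """Crude lower estimate of k = sup ||G'(x) - G'(y)|| / ||x - y|| over the
    ball B_r centred at p (two-dimensional case, spectral norm)."""
    pts = [p + np.array([a, b])
           for a in np.linspace(-r, r, n) for b in np.linspace(-r, r, n)
           if a * a + b * b <= r * r]
    k = 0.0
    for i, x in enumerate(pts):
        for y in pts[i + 1:]:
            k = max(k, np.linalg.norm(dG(x) - dG(y), 2) / np.linalg.norm(x - y))
    return k
```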
The paper is organized as follows. Some preliminaries are presented in Section 2. Two theorems concerning the local convergence of the Picard iteration are proved in Section 3. The algorithm is described in Section 4. Section 5 presents a part of the large number of numerical experiments performed by the author, showing that the proposed algorithm gives radii close to the best possible values. The computer programs for the main steps of the proposed algorithm are given in Appendix A.

2. Preliminaries

Let $H$ be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$, and let $C$ be an open subset of $H$.
We recall the following two basic concepts which are essential for our development: demicontractivity and quasi-expansivity.
A mapping $T : C \to H$ is said to be demicontractive if the set of fixed points of $T$ is nonempty, $\mathrm{Fix}(T) \ne \emptyset$, and
$$\|T(x) - p\|^2 \le \|x - p\|^2 + L \|x - T(x)\|^2, \quad (x, p) \in C \times \mathrm{Fix}(T), \qquad (2)$$
where $L < 1$. This condition is equivalent to either of the following two:
$$\langle x - T(x), x - p \rangle \ge \lambda \|x - T(x)\|^2, \quad (x, p) \in C \times \mathrm{Fix}(T), \qquad (3)$$
$$\|T(x) - p\| \le \|x - p\| + L \|T(x) - x\|, \quad (x, p) \in C \times \mathrm{Fix}(T), \qquad (4)$$
where $\lambda = (1 - L)/2$. Note that (3) is often more suitable in Hilbert spaces, allowing easier handling of the scalar products and norms. Condition (4) was considered in [18] to prove the T-stability of the Picard iteration for this class of mappings. Note that the set of fixed points of a demicontractive mapping is closed and convex [19].
We say that the mapping $T$ is quasi-expansive if
$$\|x - p\| \le \beta \|x - T(x)\|, \quad x \in C, \qquad (5)$$
where $\beta > 0$. If $\beta < 1$, then $\|x - p\| \le \frac{\beta}{1 - \beta} \|T(x) - p\|$, which justifies the terminology. It is also obvious that the set of fixed points of a mapping $T$ satisfying (5) consists of a unique element $p$ of $C$.
Condition (5) is similar to the following condition:
$$\|x - T(x)\| \ge \alpha \inf_{p \in \mathrm{Fix}(T)} \|x - p\|, \quad x \in C,$$
where $0 < \alpha < 1$, which was considered in [20,21] as an additional condition for proving the strong convergence of the Mann iteration for nonexpansive (quasi-nonexpansive) mappings in Banach spaces.

3. Local Convergence

The following theorem is the basis of our approach to estimating the local radius of convergence. Its proof is similar to that of Theorem 1 of [22], except for some details; we present the detailed proof here for completeness.
Theorem 1.
Let $T : C \to H$ be a (nonlinear) mapping with a nonempty set of fixed points, where $C$ is an open subset of a real Hilbert space $H$. Let $p_0$ be a fixed point and let $r_0$ be such that $B_{r_0} = \{ x : \|x - p_0\| \le r_0 \} \subseteq C$. We shall suppose that $\mathrm{Fix}(T) \subseteq B_{r_0}$ (if $\mathrm{Fix}(T)$ is bounded and $C = H$, then this condition is satisfied for any $p_0 \in \mathrm{Fix}(T)$ and a suitable $r_0$). Suppose further that
(i) 
$I - T$ is demiclosed at zero on $C$,
(ii) 
$T$ is demicontractive with $\lambda > 0.5$ on $B_{r_0}$;
then the sequence $\{x_n\}$ given by the Picard iteration, $x_{n+1} = T(x_n)$, $x_0 \in B_{r_0}$, remains in $B_{r_0}$ and converges weakly to some fixed point of $T$. If, in addition,
(iii) 
$T$ is quasi-expansive on $B_{r_0}$,
then $p_0$ is the unique fixed point of $T$ in $B_{r_0}$ and the sequence $\{x_n\}$ converges strongly to $p_0$.
Proof. 
Let $p$ be a fixed point of $T$. If $x_n \in B_{r_0}$ then, using (ii), we have
$$\|x_{n+1} - p\|^2 = \|(x_n - p) - (x_n - T(x_n))\|^2 = \|x_n - p\|^2 - 2 \langle x_n - T(x_n), x_n - p \rangle + \|x_n - T(x_n)\|^2 \le \|x_n - p\|^2 - (2\lambda - 1) \|x_n - T(x_n)\|^2.$$
Therefore, for any fixed point $p \in B_{r_0}$, including $p_0$, we have $\|x_{n+1} - p\| \le \|x_n - p\|$ and $x_{n+1} \in B_{r_0}$, i.e., $\{x_n\} \subset B_{r_0}$. Moreover, $\|x_n - p\| \to \varrho_p$ for some $\varrho_p \ge 0$ (the distances being nonincreasing), so that
$$\|x_n - T(x_n)\|^2 \le \frac{1}{2\lambda - 1} \left( \|x_n - p\|^2 - \|x_{n+1} - p\|^2 \right) \to 0.$$
As $B_{r_0}$ is weakly compact and $\{x_n\} \subset B_{r_0}$ is bounded, there exists a subsequence $\{x_{n_j}\}$ which converges weakly to some $q$. Then $x_{n_j} - T(x_{n_j}) \to 0$ and (i) implies $q \in \mathrm{Fix}(T)$. Suppose now that there exist two subsequences, say $\{u_k\}$ and $\{v_k\}$, which converge weakly to $u$ and $v$, respectively. As above, $u, v \in \mathrm{Fix}(T)$ and $\|x_n - u\| \to \varrho_u$, $\|x_n - v\| \to \varrho_v$. Therefore
$$\|u_k - u\| \to \varrho_u, \quad \|v_k - u\| \to \varrho_u, \quad \|u_k - v\| \to \varrho_v, \quad \|v_k - v\| \to \varrho_v.$$
Let $e_k$ be defined by
$$e_k = \|u_k - u\|^2 - \|v_k - u\|^2 - \|u_k - v\|^2 + \|v_k - v\|^2.$$
We have $e_k \to 0$ as $k \to \infty$. On the other hand, a direct computation gives $\frac{1}{2} e_k = \langle v_k - u_k, u - v \rangle \to -\|u - v\|^2$, which entails $u = v$. Therefore $x_n \rightharpoonup q$.
Finally, from (iii) it follows that $p_0$ is the unique fixed point of $T$ in $B_{r_0}$ and
$$\|x_n - p_0\| \le \beta \|x_n - T(x_n)\| \to 0,$$
so that $x_n \to p_0$. ☐
A demicontractive mapping is contractive in a weak sense, and a quasi-expansive mapping is expansive in a weak sense. The next theorem shows that demicontractivity and quasi-expansivity are not contradictory: there exists a relatively large class of mappings which are both demicontractive and quasi-expansive.
Theorem 2.
Let $T : C \to C$ be a (nonlinear) mapping, where $C$ is an open convex subset of $H$. Suppose that $\mathrm{Fix}(T) \ne \emptyset$, that $T$ is differentiable on $C$, and that $\|T'(x)\| \le \eta < \sqrt{5} - 2$ for all $x \in C$. Then $T$ is both demicontractive and quasi-expansive on $C$.
Proof. 
(I) $T$ is quasi-expansive on $C$ (therefore $\mathrm{Fix}(T)$ consists of a unique element).
Let $p$ be a fixed point of $T$. Using the mean value theorem, for any $x \in C$ we have
$$x - T(x) = (x - p) - (T(x) - T(p)) = (x - p) - \int_0^1 T'(p + t(x - p))\,dt\,(x - p).$$
Using the notation $D_{x,p} = \int_0^1 T'(p + t(x - p))\,dt$, we have $x - T(x) = (I - D_{x,p})(x - p)$ and $\|D_{x,p}\| \le \eta$. From the Banach lemma it follows that $(I - D_{x,p})^{-1}$ exists and $\|(I - D_{x,p})^{-1}\| \le 1/(1 - \eta)$. Therefore
$$\|x - p\| = \|(I - D_{x,p})^{-1}(x - T(x))\| \le \frac{1}{1 - \eta} \|x - T(x)\|,$$
that is, $T$ is quasi-expansive with $\beta = 1/(1 - \eta)$.
(II) $T$ is demicontractive on $C$ with $\lambda > 0.5$.
Due to the condition $0 < \eta < \sqrt{5} - 2$, we have $0.5 < (1 - \eta)/(1 + \eta)^2 < 1$ (at $\eta = \sqrt{5} - 2$ the middle expression equals exactly $0.5$). Let $\lambda$ be such that $0.5 < \lambda < (1 - \eta)/(1 + \eta)^2$. Consider the quadratic polynomial
$$P(t, \lambda) = \lambda t^2 + (2\lambda + 1) t - 1 + \lambda.$$
The largest zero of $P$ is
$$t_m = \frac{-2\lambda - 1 + \sqrt{8\lambda + 1}}{2\lambda},$$
and $\|D_{x,p}\| \le \eta < t_m$. As $P(0, \lambda) = -1 + \lambda < 0$ and $P(t_m, \lambda) = 0$, we have $P(\|D_{x,p}\|, \lambda) < 0$, which entails $1 - \|D_{x,p}\| > \lambda (1 + \|D_{x,p}\|)^2$. For any $y$ with $\|y\| = 1$ we have
$$\langle (I - D_{x,p}) y, y \rangle = 1 - \langle D_{x,p} y, y \rangle \ge 1 - \|D_{x,p}\| > \lambda (1 + \|D_{x,p}\|)^2 \ge \lambda \|I - D_{x,p}\|^2 \ge \lambda \|(I - D_{x,p}) y\|^2.$$
Taking $y = (x - p)/\|x - p\|$ and using $(I - D_{x,p})(x - p) = x - T(x)$, we obtain condition (3) of demicontractivity. ☐
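Theorem 2 can be checked numerically on simple examples. A small sketch (the test mapping is our own choice): $T(x) = 0.2 \sin x$ satisfies $\|T'\| \le 0.2 < \sqrt{5} - 2$ with fixed point $p = 0$, so the demicontractivity quotient of (3) should stay above $0.5$:

```python
import numpy as np

T = lambda x: 0.2 * np.sin(x)          # ||T'|| <= 0.2 < sqrt(5) - 2 ~ 0.236
xs = np.linspace(-10.0, 10.0, 100001)
xs = xs[xs != 0.0]                     # exclude the fixed point p = 0
d = xs - T(xs)
print((d * xs / d**2).min())           # observed minimum of the quotient; > 0.5
```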

4. The Algorithm

In finite-dimensional spaces the condition of quasi-expansivity is superfluous, since the first two conditions of Theorem 1 are already sufficient for the convergence of the Picard iteration. Therefore, in finite-dimensional spaces, supposing that condition (i) of Theorem 1 is fulfilled, we can use the following algorithm to estimate the local radius of convergence:
Find the largest value of $r_0$ such that
$$m = \min_{x \in B_{r_0}} \frac{\langle x - T(x), x - p \rangle}{\|x - T(x)\|^2} \qquad (6)$$
satisfies $m > 0.5$.
This procedure involves the following main processing steps:
  • Apply a line search algorithm (for example, of half-step type) on the positive real axis to find the largest value of $r_0$;
  • At every step of the line search, solve the constrained optimization problem (6) and verify the condition $m > 0.5$.
A sketch of both loops is given below.
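The following is a minimal Python implementation of the two loops for mappings of two variables, with the inner problem solved by brute force on a net of points (as in the programs of Appendix A); the grid density, the search interval and the tolerance are our own choices, not values from the paper:

```python
import numpy as np

def cost(T, p, x):
    """The cost function of (6): <x - T(x), x - p> / ||x - T(x)||^2.
    Points indistinguishable from fixed points are skipped (cost = +inf)."""
    d = x - T(x)
    n2 = float(d @ d)
    return float(d @ (x - p)) / n2 if n2 > 1e-24 else np.inf

def inner_min(T, p, r, n=200):
    """Inner iteration: brute-force minimum of the cost over a net in B_r."""
    g = np.linspace(-r, r, n)
    m = np.inf
    for a in g:
        for b in g:
            if a * a + b * b <= r * r:
                m = min(m, cost(T, p, p + np.array([a, b])))
    return m

def radius(T, p, r_max=1.0, tol=1e-3):
    """Outer iteration: half-step search for the largest r with inner_min > 0.5."""
    lo, hi = 0.0, r_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if inner_min(T, p, mid) > 0.5 else (lo, mid)
    return lo

# Sanity check: for the linear contraction T(x) = x/2 the cost is identically 2,
# so every ball qualifies and the search returns r_max (up to the tolerance).
print(radius(lambda x: 0.5 * x, np.array([0.0, 0.0])))
```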
The main processing of this algorithm is the solution of the constrained optimization problem (6). Therefore we need a global constrained optimization method, which generally entails a considerable computational effort. In our experiments we used a population-based metaheuristic in combination with a local search method.
The algorithm involves an outer iteration (the line search procedure) and an inner iteration (solving the constrained optimization problem). In $N$-dimensional spaces, every step of the inner iteration involves $2N$ evaluations of functions of $N$ variables each (the components of $T$), for the computation of $\langle x - T(x), x - p \rangle$ and $\|x - T(x)\|^2$.
Remark 2.
Most of the algorithms for estimating the radius of convergence involve the computation of the optimal Hölder (Lipschitz) constant of the derivative of $T$. This implies $N^2$ evaluations of functions of $N$ variables each (the components of the derivative of $T$), so a step of the inner iteration has polynomial complexity of order two, combined with the complexity of computing the components of $T$. The proposed algorithm involves only $2N$ evaluations of the components of $T$ and thus presumptively has a lower complexity. However, this estimate has only relative worth, because the main influence on the computational effort is exerted by the global constrained optimization method, which, as is well known, is a very expensive computation. A good implementation of these algorithms has some positive impact on the computational process.

5. Numerical Experiments

This section is devoted to numerical experiments intended to evaluate the performance of the proposed algorithm. The obtained radii are compared (numerically or graphically) with the maximum radii of convergence. In our experiments the maximum radius was computed by directly checking the convergence of the iteration process started from every point of a given net of points. The attraction basin (hence the maximum convergence radius) computed in this way has only relative precision. Nevertheless, this method provides significant information about the attraction basins, and from this point of view the performance of the algorithm can be evaluated accurately.
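A minimal sketch of this direct check (Python; the net density, the tolerance and the divergence cutoff are our own choices):

```python
import numpy as np

def converges(T, x0, p, tol=1e-6, max_iter=500):
    """Does the Picard iteration started at x0 reach the fixed point p?"""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x = T(x)
        if not np.all(np.isfinite(x)) or np.linalg.norm(x - p) > 1e6:
            return False                    # treat as divergent
        if np.linalg.norm(x - p) < tol:
            return True
    return False

def max_radius(T, p, r_hi=2.0, n=400):
    """Approximate maximum convergence radius: the distance from p to the
    nearest net point that is not in the attraction basin."""
    g = np.linspace(-r_hi, r_hi, n)
    r = r_hi
    for a in g:
        for b in g:
            d = np.hypot(a, b)
            if 0.0 < d < r and not converges(T, p + np.array([a, b]), p):
                r = d
    return r
```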
Several numerical experiments in one and several dimensions were performed to validate the method. It is worth underlining that the values obtained by the proposed algorithm are, to some extent, larger than those given by other methods and that, in some cases, our procedure gives local radii of convergence very close to the maximum ones.

5.1. Experiment 1

We computed the local radius of convergence with the proposed algorithm for the One point Ezquerro-Hernandez method applied to a number of real functions. In most of these examples the estimated radii are close to (or even coincide with) the maximum radii. For example, in the case of the function $f(x) = x^5 - 2x^2 + x$ and $p = 1$, the estimated and the best radius (computed with 15 decimal digits) are identical, $r = 0.080959069788847$.
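As a one-dimensional illustration, the following sketch applies the cost criterion (6) to the iteration mapping of (1) for this particular $f$; the scan step and the tolerances are our own choices, and a much finer grid (and higher precision) would be needed to reproduce all fifteen digits:

```python
import numpy as np

f  = lambda x: x**5 - 2*x**2 + x          # f(1) = 0
df = lambda x: 5*x**4 - 4*x + 1           # f'(1) = 2

def T(x):
    """Iteration mapping of method (1) for f, viewed as a Picard iteration."""
    d = df(x)
    y = x - f(x) / d
    return x - (f(x) + f(y)) / d

def cost(x, p=1.0):
    d = x - T(x)
    return d * (x - p) / (d * d) if abs(d) > 1e-15 else np.inf

# Coarse scan: grow r until some point of [p - r, p + r] has cost <= 0.5;
# compare the breakdown radius with the reported r = 0.080959...
p = 1.0
for r in np.linspace(0.001, 0.2, 400):
    xs = np.concatenate([np.linspace(p - r, p, 300, endpoint=False),
                         np.linspace(p + r, p, 300, endpoint=False)])
    if min(cost(x) for x in xs) <= 0.5:
        print("cost drops below 0.5 near r =", r)
        break
```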

5.2. Experiment 2

We applied the proposed algorithm to estimate the local radius of convergence of the Picard iteration for a number of mappings in several variables. For the following three test mappings (we will refer to them as Examples 1–3):
$$F_1(x) = \begin{pmatrix} x_2^2 + 0.8\,x_1 \cos(x_1 + 1) \\ x_1^3 + 0.8\,x_2 \end{pmatrix}, \quad F_2(x) = \begin{pmatrix} 0.3 \sin(x_1) + x_1 x_2 \\ x_1^3 - 0.5\,x_2 \end{pmatrix}, \quad F_3(x) = \begin{pmatrix} 0.7\,x_1 - x_1 x_2 + 0.2\,x_2 \\ x_1 x_2 + 0.3\,x_2 \end{pmatrix},$$
the results are given in Figure 1.
The black areas represent the domain of convergence corresponding to the fixed point $p = (0, 0)^T$ (the same for all three examples), and the white circles represent the local convergence balls. It can be seen that the estimates are satisfactorily close to the best possible ones.

5.3. Experiment 3

The iteration (1) can be considered as a Picard iteration, $x_{n+1} = T(x_n)$, where
$$T(x) = x - F'(x)^{-1} \left[ F(x) + F\!\left( x - F'(x)^{-1} F(x) \right) \right]. \qquad (7)$$
Therefore we can apply our algorithm to estimate the local radius of convergence for this method. The results are given in Figure 2.
It can be seen that in this special case the estimates are very close to the best possible radii.
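For completeness, a direct transcription of the mapping (7) in Python/numpy (the example equation $F$, its Jacobian $J$ and the starting point are our own illustration, not one of the test mappings):

```python
import numpy as np

def T(F, J, x):
    """The Picard mapping (7) of method (1); the linear systems are solved
    with the current Jacobian J(x) instead of forming its inverse."""
    Jx = J(x)
    y = x - np.linalg.solve(Jx, F(x))             # inner sub-step
    return x - np.linalg.solve(Jx, F(x) + F(y))   # same Jacobian reused

# Illustrative use: F(x) = (x1^2 + x2 - 1, x2^3 + x1), started near a root.
F = lambda x: np.array([x[0]**2 + x[1] - 1.0, x[1]**3 + x[0]])
J = lambda x: np.array([[2*x[0], 1.0], [1.0, 3*x[1]**2]])
x = np.array([-0.4, 0.8])
for _ in range(6):
    x = T(F, J, x)
print(x, F(x))  # x approaches a solution of F(x) = 0
```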

5.4. Experiment 4

In this experiment we numerically estimate the radii of convergence with the proposed algorithm for the Picard, Newton and One point Ezquerro-Hernandez methods applied to the test mappings. For comparison, we also estimate them using several other algorithms: in the tables below we present the results obtained with the algorithms proposed by Catinas, Deuflhard-Potra and Hernandez-Romero (these algorithms are specific to the considered methods). The last row of each table contains the maximum radii of convergence.
Table 1 contains the results obtained by the Catinas algorithm and the proposed algorithm for the Picard iteration.
Table 2 contains the results obtained by the Catinas, Hernandez-Romero and proposed algorithms for the One point Ezquerro-Hernandez method. The Catinas algorithm requires estimating the Hölder (Lipschitz) constant of the derivative of the iteration mapping (7). Even for mappings in two dimensions (which is the case for our test mappings) the computing effort is very high; therefore, the values for the Catinas algorithm in Table 2 are only approximate (we use the sign ≈ to indicate these values). This experiment confirms the results of Experiment 3: the proposed algorithm gives radii very close to the maximum radii for the One point Ezquerro-Hernandez method.
Table 3 contains the results obtained by the Deuflhard-Potra algorithm and the proposed algorithm for the Newton method. The Deuflhard-Potra algorithm gives satisfactorily good estimates for the Newton method. At the same time, this algorithm does not require the evaluation of the Hölder (Lipschitz) constant of the derivative of the equation mapping, and therefore its computing effort is relatively low. Note that in our experiments the parameter s of the algorithm was chosen as s = 0.9, which seems to be the most adequate value (the algorithm is not very sensitive to this parameter).

6. Conclusions

The proposed algorithm provides radii of convergence close (or very close) to the best possible ones, and in some cases (such as the One point Ezquerro-Hernandez method in one dimension) the estimates seem to coincide with the maximum radii. As this remarkable characteristic of the proposed algorithm is highlighted in our study only by numerical experiments, it would be a challenge to identify the cases in which the two radii are identical.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

Below we give the computer programs for the proposed algorithm; see Figure A1 and Figure A2. To validate the algorithm, numerous numerical experiments were implemented in several ways. To facilitate understanding, we present the programs in MathCad, from which they can easily be translated into any other mathematical software system.
Figure A1. Computer programs for inner/outer iterations.
Figure A2. Computer programs for attraction basin and convergence ball.

Short Computer Programs Description

Min: implements the inner loop of the algorithm. It requires a positive number $r_0$ as input and provides the global minimum of the cost function (defined in (6)) on the ball $B_{r_0}$, corresponding to the iteration function T and to the mapping f. We use here a “brute-force” method: the cost function is computed on a net of points inside the ball $B_{r_0}$ and the minimum value is then chosen. In Figure A1, T is the iteration function for the One point Ezquerro-Hernandez method and f is the mapping of Example 3. Note that this program works only for mappings in two variables; for several variables a similar program can easily be developed.
Radius: implements the outer loop of the algorithm. Using a line search technique, it provides the maximum radius of a ball centred at the fixed point such that the cost function value is greater than 0.5. For the considered example, r = 0.136...
Basin: provides a vector containing the points of a given square that belong to the attraction basin.
Circle: draws a circle centred at the fixed point with the radius computed by the program Radius.
The mapping F, its derivative, the iteration function T and the cost function c must be defined outside of the programs. In Figure A1 these elements are defined at the top of the program Min; they are: the mapping F, as in Example 3; the iteration function, as in the One point Ezquerro-Hernandez method; and the cost function, as proposed in our algorithm. Using a specific graphics function of the mathematical software and the vectors b, c, the attraction basin and the estimated ball of convergence can be plotted (Figure 2, Example 3 in our case).

References

  1. Ostrowski, A.M. Solution of Equations and Systems of Equations, 2nd ed.; Academic Press: New York, NY, USA, 1966.
  2. Catinas, E. Estimating the radius of an attraction ball. Appl. Math. Lett. 2009, 22, 712–714.
  3. Rheinboldt, W.C. An adaptive continuation process for solving systems of nonlinear equations. Pol. Acad. Sci. Banach Center Publ. 1975, 3, 129–142.
  4. Hernandez-Veron, M.A.; Romero, N. On the Local Convergence of a Third Order Family of Iterative Processes. Algorithms 2015, 8, 1121–1128.
  5. Deuflhard, P.; Potra, F.A. Asymptotic mesh independence of Newton-Galerkin methods via a refined Mysovskii theorem. SIAM J. Numer. Anal. 1992, 29, 1395–1412.
  6. Rall, L.B. A note on the convergence of Newton's method. SIAM J. Numer. Anal. 1974, 11, 34–36.
  7. Smale, S. Complexity theory and numerical analysis. Acta Numer. 1997, 6, 523–551.
  8. Traub, J.F.; Wozniakowski, H. Convergence and Complexity of Newton Iteration for Operator Equations. J. Assoc. Comput. Mach. 1979, 26, 250–258.
  9. Vertgeim, B.A. On conditions for the applicability of Newton's method. Dokl. Akad. Nauk SSSR 1956, 110, 719–722.
  10. Argyros, I.K. Concerning the radii of convergence for a certain class of Newton-like methods. J. Korean Soc. Math. Educ. Ser. B Pure Appl. Math. 2008, 15, 47–55.
  11. Argyros, I.K. Concerning the “terra incognita” between convergence regions of two Newton methods. Nonlinear Anal. 2005, 62, 179–194.
  12. Argyros, I.K. A unifying local-semilocal convergence analysis and applications for two-point Newton methods in Banach spaces. J. Math. Anal. Appl. 2004, 298, 374–397.
  13. Ferreira, O.P. Local convergence of Newton's method in Banach space from the viewpoint of the majorant condition. IMA J. Numer. Anal. 2009, 29, 746–759.
  14. Ren, H. On the local convergence of a deformed Newton method under Argyros-type conditions. J. Math. Anal. Appl. 2006, 321, 396–404.
  15. Wang, X. Convergence of Newton's method and uniqueness of the solution of equations in Banach space. IMA J. Numer. Anal. 2000, 20, 123–134.
  16. Ezquerro, J.A.; Hernandez, M.A. An optimization of Chebyshev's method. J. Complex. 2009, 25, 343–361.
  17. Maruster, S. Local convergence radius for the Mann-type iteration. An. Univ. Vest Timiş. Ser. Mat.-Inform. 2015, 53, 109–120.
  18. Qing, Y.; Rhoades, B.E. T-stability of Picard iteration in metric spaces. Fixed Point Theory Appl. 2008, 2008.
  19. Marino, G.; Xu, H.-K. Weak and strong convergence theorems for strict pseudo-contractions in Hilbert spaces. J. Math. Anal. Appl. 2007, 329, 336–346.
  20. Outlaw, C.L. Mean value iteration for nonexpansive mappings in a Banach space. Pac. J. Math. 1969, 30, 747–759.
  21. Senter, H.F.; Dotson, W.G., Jr. Approximating fixed points of nonexpansive mappings. Proc. Am. Math. Soc. 1974, 44, 375–380.
  22. Maruster, S. The solution by iteration of nonlinear equations in Hilbert spaces. Proc. Am. Math. Soc. 1977, 63, 69–73.
Figure 1. The estimates with the proposed algorithm for Picard iteration.
Figure 2. The estimates with the proposed algorithm for One point Ezquerro-Hernandez method.
Table 1. Local radii of convergence for Picard method.

Method               Example 1   Example 2   Example 3
Catinas algorithm    0.200000    0.430017    0.376876
Proposed algorithm   0.307939    0.896639    0.645209
Maximum radius       0.311608    1.240665    0.806039
Table 2. Local radii of convergence for One point Ezquerro-Hernandez method.

Method                       Example 1    Example 2    Example 3
Catinas algorithm            ≈0.194442    ≈0.114921    ≈0.081090
Hernandez-Romero algorithm   0.221114     0.146160     0.111906
Proposed algorithm           0.319185     0.191426     0.136057
Maximum radius               0.319217     0.197430     0.137130
Table 3. Local radii of convergence for Newton method.

Method                      Example 1   Example 2   Example 3
Deuflhard-Potra algorithm   0.346083    0.200051    0.162746
Proposed algorithm          0.363135    0.236905    0.163737
Maximum radius              0.446068    0.254446    0.244121
