Article

Expanding the Applicability of Some High Order Househölder-Like Methods

by Sergio Amat 1,*, Ioannis K. Argyros 2, Miguel A. Hernández-Verón 3 and Natalia Romero 3

1 Departamento de Matemática Aplicada y Estadística, Universidad Politécnica de Cartagena, Cartagena 30203, Spain
2 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
3 Departamento de Matemáticas y Computación, Universidad de La Rioja, Logroño 26004, Spain
* Author to whom correspondence should be addressed.
Algorithms 2017, 10(2), 64; https://doi.org/10.3390/a10020064
Submission received: 3 April 2017 / Revised: 26 May 2017 / Accepted: 26 May 2017 / Published: 31 May 2017
(This article belongs to the Special Issue Numerical Algorithms for Solving Nonlinear Equations and Systems 2017)

Abstract: This paper is devoted to the semilocal convergence of a Househölder-like method for nonlinear equations. The method includes many of the third-order iterative methods that have been studied. In the present study, we use our new idea of restricted convergence domains, which leads to smaller $\gamma$-parameters; these in turn lead to the following advantages over earlier works (under the same computational cost): a larger convergence domain, tighter error bounds on the distances involved, and at least as precise information on the location of the solution.

1. Introduction

Let $B_1$ and $B_2$ be Banach spaces and let $D$ be a non-empty open convex subset of $B_1$. Let also $L(B_1, B_2)$ stand for the space of bounded linear operators from $B_1$ into $B_2$. In this study, we are concerned with the problem of approximating a locally unique solution $x^*$ of the nonlinear equation:
$$F(x) = 0,$$
where $F: D \subseteq B_1 \to B_2$ is a nonlinear Fréchet-differentiable operator.
Beginning from $t_0$, we can consider different third-order iterative methods to solve the scalar equation $F(t) = 0$, with $F: X \to Y$ and $X = Y = \mathbb{R}$ [1,2].
According to Traub's classification [3], these third-order methods are divided into two broad classes. On the one hand, there are the one-point methods, which require the evaluation of $F$, $F'$ and $F''$ at the current point only. For instance:
  • The Halley method:
    $$t_{n+1} = t_n - \frac{1}{1 - \frac{1}{2}L_F(t_n)}\,\frac{F(t_n)}{F'(t_n)},$$
    where $L_F(t) = \dfrac{F''(t)\,F(t)}{F'(t)^2}$.
  • The Chebyshev method:
    $$t_{n+1} = t_n - \left(1 + \frac{1}{2}L_F(t_n)\right)\frac{F(t_n)}{F'(t_n)}.$$
  • The family of Newton-like methods:
    $$t_{n+1} = G(t_n) = t_n - H(L_F(t_n))\frac{F(t_n)}{F'(t_n)}, \quad n \ge 0, \qquad H(y) = \sum_{j \ge 0}\tilde A_j y^j; \quad \tilde A_0 = 1, \ \tilde A_1 = \frac{1}{2}, \ \tilde A_j \in \mathbb{R}_+ \cup \{0\}, \ j \ge 2,$$
which includes the well-known third-order Chebyshev, Halley, Super-Halley, Ostrowski and Euler methods.
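Before continuing, here is a minimal numerical sketch (our own illustration, not code from the paper) of the Halley and Chebyshev iterations written exactly as above; the test equation $t^3 - 2 = 0$, the tolerance and the starting point are arbitrary choices:

    def halley(F, dF, d2F, t0, tol=1e-12, max_iter=50):
        # Halley: t <- t - (1 / (1 - L_F/2)) * F/F', with L_F = F''*F/F'**2.
        t = t0
        for _ in range(max_iter):
            Ft, dFt = F(t), dF(t)
            L = d2F(t) * Ft / dFt**2
            t_new = t - (1.0 / (1.0 - 0.5 * L)) * Ft / dFt
            if abs(t_new - t) < tol:
                return t_new
            t = t_new
        return t

    def chebyshev(F, dF, d2F, t0, tol=1e-12, max_iter=50):
        # Chebyshev: t <- t - (1 + L_F/2) * F/F'.
        t = t0
        for _ in range(max_iter):
            Ft, dFt = F(t), dF(t)
            L = d2F(t) * Ft / dFt**2
            t_new = t - (1.0 + 0.5 * L) * Ft / dFt
            if abs(t_new - t) < tol:
                return t_new
            t = t_new
        return t

    F, dF, d2F = lambda t: t**3 - 2.0, lambda t: 3.0 * t**2, lambda t: 6.0 * t
    print(halley(F, dF, d2F, 1.5), chebyshev(F, dF, d2F, 1.5))  # both ~ 2**(1/3)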
Notice that the family (2) has the inconvenience of attaining order greater than three only for certain equations. For instance, it is known that the family (2) has fourth order of convergence for quadratic equations if $\tilde A_2 = 1/2$.
On the other hand, the second class consists of the multipoint methods, which do not require $F''$. For instance:
The two-step method:
$$s_n = t_n - \frac{F(t_n)}{F'(t_n)}, \qquad t_{n+1} = TS(t_n) = s_n - \frac{F(s_n)}{F'(t_n)}.$$
Notice that this iterative method cannot be included in the family (2). Thus, in order to obtain iterative methods of order higher than three for any function F, and to be able to represent multipoint methods such as (3), we can consider a modification of the family (2) that allows this generalization [4]. Therefore, we consider the following family, which includes all of these methods:
$$s_n = t_n - \frac{F(t_n)}{F'(t_n)}, \qquad t_{n+1} = H(t_n) = s_n - \frac{A_0(F;t_n)}{F'(t_n)} - \sum_{k=2}^{\infty}\frac{A_k(F;t_n)}{F'(t_n)}\left(\frac{F(t_n)}{F'(t_n)}\right)^{k-1}\frac{F(t_n)}{F'(t_n)}.$$
Notice that (4) is well defined if $\left|\frac{A_k(F;t)}{F'(t)}\right| \le k^p$ for some $p > 0$ and each $k \ge 2$, and $\left|\frac{F(t)}{F'(t)}\right| < 1$.
Thus, for instance, for $A_0(F;t) = 0$ and $A_k(F;t) = \tilde A_{k-1}\dfrac{F''(t)^{k-1}}{F'(t)^{k-2}}$, we obtain the previous family (2). On the other hand, if we take $A_k(F;t_n) = 0$ for all $k \ge 2$, and:
$$A_0(F;t_n) = F(s_n),$$
then we get the two-step method (3).
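As a quick illustration (our own sketch, not from the paper), this two-step member of the family, with $A_0(F;t_n) = F(s_n)$ and $A_k = 0$ for $k \ge 2$, can be coded directly:

    def two_step(F, dF, t0, tol=1e-12, max_iter=50):
        # Two-step method (3): a Newton sub-step followed by a
        # frozen-derivative correction, i.e. A_0(F; t_n) = F(s_n).
        t = t0
        for _ in range(max_iter):
            s = t - F(t) / dF(t)
            t_new = s - F(s) / dF(t)
            if abs(t_new - t) < tol:
                return t_new
            t = t_new
        return t

    print(two_step(lambda t: t**3 - 2.0, lambda t: 3.0 * t**2, 1.5))  # ~ 1.2599...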
In general, the methods (2) have a higher operational cost than the method (3) when solving a nonlinear system. A measure that takes this operational cost into account is the computational efficiency of an iterative process $\Phi$, given by $CE(\Phi, F) = o^{1/\theta(F)}$, where $o$ is the order of convergence of $\Phi$ and $\theta(F)$ is the operational cost of applying one step of $\Phi$, that is, the number of products needed per iteration plus the number of products that appear in the evaluation of the corresponding operator.
Therefore, in order to compare the efficiency of the iterative methods of the new family, we consider the computational efficiency $CE(\Phi, F)$. Note that the methods of the family (4) have a lower computational efficiency as more terms in the series are considered.
Notice that, for quadratic equations, the Chebyshev method and the two-step method (the method in (7) with $A_0(F;x_n) = F(y_n)$) have the same algorithm:
$$y_n = x_n - [F'(x_n)]^{-1}F(x_n), \qquad x_{n+1} = y_n - [F'(x_n)]^{-1}F(y_n).$$
On the other hand, we note that the family of iterative processes (7) with $A_2 = 1/2$ has fourth-order convergence for quadratic equations. Therefore, the most efficient iterative process in this family, for quadratic equations, is the known Chebyshev-like method ($A_2 = 1/2$ and $A_k = 0$ for $k \ge 3$ in (7)). We then compare this iterative process with the Chebyshev method.
This fourth-order method for quadratic equations can be written as:
$$y_n = x_n - [F'(x_n)]^{-1}F(x_n), \qquad z_n = y_n - [F'(x_n)]^{-1}F(y_n),$$
$$x_{n+1} = z_n - [F'(x_n)]^{-1}\big(2F(z_n) - C(z_n - y_n)(z_n - x_n)\big).$$
For quadratic equations, the computational efficiency $CE$ for the Chebyshev method (equivalently, the method (5)) and for the method (6) of the family (7) is $CE = 3^{3/(m^3 + 9m^2 + 17m)}$ and $CE = 4^{3/(m^3 + 12m^2 + 24m)}$, respectively, where $m$ is the number of equations of the system. As we can observe in Figure 1, the optimum computational efficiency corresponds to the Chebyshev-like method given in (6) when the system has at least six equations.
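The two efficiency curves can be tabulated directly from the formulas just quoted; the following sketch (ours) prints both values as the dimension m grows:

    def ce_chebyshev(m):
        # CE = 3**(3/(m^3 + 9m^2 + 17m)) for the Chebyshev method / method (5)
        return 3.0 ** (3.0 / (m**3 + 9 * m**2 + 17 * m))

    def ce_chebyshev_like(m):
        # CE = 4**(3/(m^3 + 12m^2 + 24m)) for the Chebyshev-like method (6)
        return 4.0 ** (3.0 / (m**3 + 12 * m**2 + 24 * m))

    # Figure 1 of the paper reports the Chebyshev-like method (6) as the most
    # efficient once the system has at least six equations.
    for m in range(2, 11):
        print(m, ce_chebyshev(m), ce_chebyshev_like(m))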
Recently, in the work by Amat et al. [4], a semilocal convergence analysis was given in a Banach space setting for:
$$y_n = x_n - \Gamma_n F(x_n), \qquad x_{n+1} = y_n - \Gamma_n A_0(F;x_n) - \sum_{k=2}^{\infty}\Gamma_n A_k(F;x_n)\left(\Gamma_n F(x_n)\right)^{k-1}\Gamma_n F(x_n),$$
where $\Gamma_n = [F'(x_n)]^{-1}$ and the operators $A_k(F;\cdot): D \to L(D \times \overset{(k)}{\cdots} \times D, B_2)$ are some particular k-linear operators.
An interesting application related to image denoising was also presented in [4]: the approximation, via this family, of a new nonlinear mathematical model for the denoising of digital images. One can find the best method in the family (different from the two-step method) for this particular problem; indeed, the method found has order four. The denoising model proposed in [4] permits a good reconstruction of the edges, which are the most important visual parts of an image. See also [5,6] for more applications.
On the other hand, the semilocal convergence analysis of the method (7) was based on $\gamma$-type conditions [7], which constitute an alternative to the classical Kantorovich-type conditions [8]. The convergence domain for the method (7) is small in general. In the present study, we use our new idea of restricted convergence domains, leading to smaller $\gamma$-parameters, which in turn lead to the following advantages over the work in [4] (under the same computational cost): a larger convergence domain, tighter error bounds on the distances involved, and at least as precise information on the location of the solution.
The rest of the paper is organized as follows: the semilocal convergence analysis of the method (7) is given in Section 2, whereas the numerical examples are presented in the concluding Section 3.

2. Semilocal Convergence

We consider Equation (1), where F is a nonlinear operator defined on a non-empty open convex subset D of a Banach space $B_1$ with values in another Banach space $B_2$.
We need the definition of the γ -condition given in [7].
Definition 1.
Suppose that $F: D \to B_2$ is five-times Fréchet-differentiable. Let $\beta > 0$ and $x_0 \in D$. We say that F satisfies the γ-condition if $\Gamma_0 = F'(x_0)^{-1} \in L(B_2, B_1)$ and, for each $x \in D$:
$$\|\Gamma_0 F(x_0)\| \le \beta, \qquad \|\Gamma_0 F''(x_0)\| \le \frac{2\gamma}{(1 - \gamma\|x - x_0\|)^2}, \qquad \|\Gamma_0 F'''(x)\| \le \frac{6\gamma^2}{(1 - \gamma\|x - x_0\|)^4} = f'''(\|x - x_0\|),$$
$$\|\Gamma_0 F^{(iv)}(x_0)\| \le 24\gamma^3, \qquad \|\Gamma_0 F^{(v)}(x)\| \le \frac{120\gamma^4}{(1 - \gamma\|x - x_0\|)^6} = f^{(v)}(\|x - x_0\|),$$
where:
$$f(t) = \beta - t + \frac{\gamma t^2}{1 - \gamma t}.$$
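The equalities displayed in Definition 1 can be checked symbolically: the right-hand sides marked with an equality are exactly the corresponding derivatives of the majorizing function f. A small verification sketch (ours), using sympy:

    import sympy as sp

    t, beta, gamma = sp.symbols('t beta gamma', positive=True)
    f = beta - t + gamma * t**2 / (1 - gamma * t)

    # f''' and f^(v) match the gamma-condition right-hand sides identically:
    print(sp.simplify(sp.diff(f, t, 3) - 6 * gamma**2 / (1 - gamma * t)**4))    # 0
    print(sp.simplify(sp.diff(f, t, 5) - 120 * gamma**4 / (1 - gamma * t)**6))  # 0
    # f^(iv)(0) = 24*gamma**3, the bound at x_0:
    print(sp.simplify(sp.diff(f, t, 4).subs(t, 0)))                             # 24*gamma**3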
The following semilocal convergence result for the method defined in (7), under γ-type conditions, was given in [4], Theorem 1:
Theorem 1.
Assume that the operator F satisfies, for each $x \in D$:
$$\|[F'(x)]^{-1}A_k(F;x)\| \le k^p, \ k \ge 2, \ \text{for some } p > 0, \qquad \text{and} \qquad \|[F'(x)]^{-1}F(x)\| < 1,$$
the γ-condition and the hypotheses:
$$\alpha = \beta\gamma \le 3 - 2\sqrt{2}, \qquad A_0(f;t) \ge 0, \qquad \sum_{k=2}^{\infty}\frac{A_k(f;t)}{f'(t)}\left(\frac{f(t)}{f'(t)}\right)^{k-1} \ge 0,$$
with f as given in Definition 1 and $\bar U(x_0, R) \subseteq D$, where $R = \left(1 - \frac{1}{\sqrt{2}}\right)\frac{1}{\gamma}$. Then, the sequences $\{x_n\}$, $\{y_n\}$ generated by the method (7) are well defined in $U(x_0, R)$, remain in $U(x_0, t^*)$ for each $n = 0, 1, 2, \ldots$ and converge to a solution $x^* \in U(x_0, t^*)$ of the equation $F(x) = 0$, which is unique in $U(x_0, R)$. Moreover, the following estimates hold true for each $n = 0, 1, 2, \ldots$:
$$\|y_n - x_n\| \le s_n - t_n, \qquad \|x_{n+1} - y_n\| \le t_{n+1} - s_n, \qquad \|y_n - x^*\| \le t^* - s_n, \qquad \|x_n - x^*\| \le t^* - t_n,$$
where $t_0 = 0$, $s_0 = \beta$,
$$s_n = t_n - \frac{f(t_n)}{f'(t_n)}, \ n \ge 0, \qquad t_{n+1} = s_n - \frac{A_0(f;t_n)}{f'(t_n)} - \sum_{k=2}^{\infty}\frac{A_k(f;t_n)}{f'(t_n)}\left(\frac{f(t_n)}{f'(t_n)}\right)^{k-1}\frac{f(t_n)}{f'(t_n)},$$
and:
$$t^* = \frac{1 + \alpha - \sqrt{(1 + \alpha)^2 - 8\alpha}}{4\gamma}.$$
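To see Theorem 1 at work, the sketch below (our own illustration) computes the majorizing sequences for the two-step instance of (7), that is, $A_0(f;t_n) = f(s_n)$ and $A_k = 0$ for $k \ge 2$; the values $\beta = 0.1$ and $\gamma = 1$ are arbitrary choices satisfying $\alpha \le 3 - 2\sqrt{2}$:

    import math

    beta, gamma = 0.1, 1.0
    alpha = beta * gamma
    assert alpha <= 3 - 2 * math.sqrt(2)       # sufficient semilocal condition

    f  = lambda t: beta - t + gamma * t**2 / (1 - gamma * t)
    df = lambda t: -2.0 + 1.0 / (1 - gamma * t)**2          # f'(t)
    t_star = (1 + alpha - math.sqrt((1 + alpha)**2 - 8 * alpha)) / (4 * gamma)

    t = 0.0
    for n in range(6):
        s = t - f(t) / df(t)       # s_n: Newton step on the majorizing function
        t = s - f(s) / df(t)       # t_{n+1}, with A_0(f; t_n) = f(s_n)
        print(n, t)                # increases monotonically towards t_star
    print("t* =", t_star)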
Next, we show how to achieve the advantages of our approach as stated in the introduction of this study.
Definition 2.
Let $F: D \to B_2$ be a Fréchet-differentiable operator. The operator F satisfies the center $\gamma_0$-Lipschitz condition at $x_0$ if, for each $x \in D$:
$$\|\Gamma_0(F'(x) - F'(x_0))\| \le \frac{1}{(1 - \gamma_0\|x - x_0\|)^2} - 1.$$
Remark 1.
Let $D_0 := U\left(x_0, \left(1 - \frac{1}{\sqrt{2}}\right)\frac{1}{\gamma_0}\right) \cap D$. Then, we have, for each $x \in D_0$, that:
$$\|\Gamma_0(F'(x) - F'(x_0))\| \le \frac{1}{(1 - \gamma_0\|x - x_0\|)^2} - 1 < 1.$$
In view of the Banach lemma on invertible operators [8], $[F'(x)]^{-1} \in L(B_2, B_1)$ and:
$$\|[F'(x)]^{-1}F'(x_0)\| \le \left(2 - \frac{1}{(1 - \gamma_0\|x - x_0\|)^2}\right)^{-1}.$$
The corresponding result using the γ-condition given in [7] is:
$$\|[F'(x)]^{-1}F'(x_0)\| \le \left(2 - \frac{1}{(1 - \gamma\|x - x_0\|)^2}\right)^{-1}$$
for each $x \in D$. However, since $D_0 \subseteq D$, we have $\gamma_0 \le \gamma$, so if $\gamma_0 < \gamma$, the estimate (12) is more precise than (13), leading to more precise majorizing sequences, which in turn lead to the advantages already stated. Indeed, we need the following definition.
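A small numeric illustration (ours) of the gain: for $\gamma_0 < \gamma$, the bound (12) is smaller than (13) at the same distance $r = \|x - x_0\|$; the values of $\gamma_0$, γ and r below are arbitrary choices inside $D_0$:

    def bound(g, r):
        # (2 - 1/(1 - g*r)**2)**(-1), valid while r < (1 - 1/sqrt(2))/g
        return 1.0 / (2.0 - 1.0 / (1.0 - g * r)**2)

    gamma0, gamma = 0.5, 1.0
    for r in (0.05, 0.10, 0.20):
        print(r, bound(gamma0, r), bound(gamma, r))   # the first value is smaller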
Definition 3.
Let $F: D \to B_2$ be five-times Fréchet-differentiable and satisfy the center $\gamma_0$-Lipschitz condition. We say that the operator F satisfies the δ-γ-condition at $x_0$ if, for each $x \in D_0$:
$$\|\Gamma_0 F(x_0)\| \le \beta, \qquad \|\Gamma_0 F''(x_0)\| \le \frac{2\delta}{(1 - \delta\|x - x_0\|)^2}, \qquad \|\Gamma_0 F'''(x)\| \le \frac{6\delta^2}{(1 - \delta\|x - x_0\|)^4} = f_1'''(\|x - x_0\|),$$
$$\|\Gamma_0 F^{(iv)}(x_0)\| \le 24\delta^3, \qquad \|\Gamma_0 F^{(v)}(x)\| \le \frac{120\delta^4}{(1 - \delta\|x - x_0\|)^6} = f_1^{(v)}(\|x - x_0\|).$$
Clearly, we have that $\delta \le \gamma$.
Now, we define the scalar sequences $\{\bar s_n\}$, $\{\bar t_n\}$ by:
$$\bar t_0 = 0, \quad \bar s_0 = \beta, \quad \bar s_n = \bar t_n - \frac{f_1(\bar t_n)}{f_1'(\bar t_n)}, \ n \ge 0, \quad \bar t_{n+1} = \bar s_n - \frac{A_0(f_1;\bar t_n)}{f_1'(\bar t_n)} - \sum_{k=2}^{\infty}\frac{A_k(f_1;\bar t_n)}{f_1'(\bar t_n)}\left(\frac{f_1(\bar t_n)}{f_1'(\bar t_n)}\right)^{k-1}\frac{f_1(\bar t_n)}{f_1'(\bar t_n)},$$
where the function $f_1$ is defined on the interval $[0, 1/\delta)$ by:
$$f_1(t) = \beta - t + \frac{\delta t^2}{1 - \delta t}.$$
Moreover, define the constants $\alpha_0$, $\bar t^*$, $\bar t^{**}$ and $R_0$ by $\alpha_0 = \delta\beta$, $\bar t^* = \frac{1 + \alpha_0 - \sqrt{(1 + \alpha_0)^2 - 8\alpha_0}}{4\delta}$, $\bar t^{**} = \frac{1 + \alpha_0 + \sqrt{(1 + \alpha_0)^2 - 8\alpha_0}}{4\delta}$ and $R_0 = \left(1 - \frac{1}{\sqrt{2}}\right)\frac{1}{\gamma_0}$.
If the condition:
$$\alpha_0 \le 3 - 2\sqrt{2}$$
holds, then the function $f_1$ has two real roots $\bar t^*$ and $\bar t^{**}$ such that:
$$\beta \le \bar t^* \le \left(1 + \frac{1}{\sqrt{2}}\right)\beta \le \left(1 - \frac{1}{\sqrt{2}}\right)\frac{1}{\delta} \le \bar t^{**} \le \frac{1}{2\delta}.$$
Moreover, we have, for each $t \in [0, \bar t^*)$: $f_1(t) > 0$,
$$f_1'(t) = \frac{1 - 2(1 - \delta t)^2}{(1 - \delta t)^2} < 0$$
and:
$$f_1^{(i)}(t) = \frac{i!\,\delta^{i-1}}{(1 - \delta t)^{i+1}}, \quad i = 2, 3, \ldots$$
Notice that:
$$\alpha \le 3 - 2\sqrt{2} \Longrightarrow \alpha_0 \le 3 - 2\sqrt{2},$$
but not necessarily vice versa, unless $\gamma = \delta$.
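The stated properties of $f_1$ are easy to verify numerically; the following sketch (ours, with arbitrary illustrative values of β and δ) checks the chain of inequalities for the roots and the negativity of $f_1'$ on $[0, \bar t^*)$:

    import math

    beta, delta = 0.1, 1.2
    alpha0 = delta * beta
    assert alpha0 <= 3 - 2 * math.sqrt(2)

    disc = math.sqrt((1 + alpha0)**2 - 8 * alpha0)
    t_bar_star  = (1 + alpha0 - disc) / (4 * delta)
    t_bar_star2 = (1 + alpha0 + disc) / (4 * delta)

    # beta <= t* <= (1 + 1/sqrt(2))beta <= (1 - 1/sqrt(2))/delta <= t** <= 1/(2 delta)
    chain = [beta, t_bar_star, (1 + 1 / math.sqrt(2)) * beta,
             (1 - 1 / math.sqrt(2)) / delta, t_bar_star2, 1 / (2 * delta)]
    assert all(a <= b for a, b in zip(chain, chain[1:]))

    d_f1 = lambda t: (1 - 2 * (1 - delta * t)**2) / (1 - delta * t)**2   # f_1'(t)
    assert all(d_f1(t_bar_star * k / 100) < 0 for k in range(100))
    print("all checks passed:", [round(c, 6) for c in chain])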
We need a series of auxiliary results.
Lemma 1.
Suppose that $\alpha_0 \le 3 - 2\sqrt{2}$, $A_0(f_1;t) \ge 0$ and $\sum_{k=2}^{\infty}\frac{A_k(f_1;t)}{f_1'(t)}\left(\frac{f_1(t)}{f_1'(t)}\right)^{k-1} \ge 0$. Then, the sequences $\{\bar s_n\}$, $\{\bar t_n\}$ generated by (14) are well defined for each $n = 0, 1, 2, \ldots$ and converge monotonically to $\bar t^*$, so that:
$$0 \le \bar t_n \le \bar s_n \le \bar t_{n+1} < \bar t^*.$$
Proof. 
Simply replace $f$, $\alpha$, $t^*$, $s_n$, $t_n$ in the proof of Lemma 2 of [4] by $f_1$, $\alpha_0$, $\bar t^*$, $\bar s_n$, $\bar t_n$, respectively. ☐
Lemma 2.
(a) 
Let $f_1$ be the real function given in (15). Then, the following assertion holds:
$$f_1(\bar t_{n+1}) = \left(\frac{1}{2}\,\frac{f_1''(\bar t_n)A_0(f_1;\bar t_n)}{f_1'(\bar t_n)^2} - 1\right)A_0(f_1;\bar t_n) + \frac{f_1''(\bar t_n)A_0(f_1;\bar t_n)}{f_1'(\bar t_n)}\left(1 + \frac{\tilde h(f_1;\bar t_n)}{f_1'(\bar t_n)}\right)\frac{f_1(\bar t_n)}{f_1'(\bar t_n)}$$
$$+\ \frac{1}{2}f_1''(\bar t_n)\left(\frac{f_1(\bar t_n)}{f_1'(\bar t_n)}\right)^2 + \frac{1}{2}f_1''(\bar t_n)\left(\frac{\tilde h(f_1;\bar t_n)f_1(\bar t_n)}{f_1'(\bar t_n)^2}\right)^2$$
$$+\ \left(\frac{f_1''(\bar t_n)f_1(\bar t_n)}{f_1'(\bar t_n)^2} - 1\right)\tilde h(f_1;\bar t_n)\frac{f_1(\bar t_n)}{f_1'(\bar t_n)} + \int_{\bar t_n}^{\bar t_{n+1}}\left(f_1''(t) - f_1''(\bar t_n)\right)(\bar t_{n+1} - t)\,dt,$$
where:
$$\tilde h(f_1;\bar t_n) = \sum_{k=2}^{\infty}A_k(f_1;\bar t_n)\left(\frac{f_1(\bar t_n)}{f_1'(\bar t_n)}\right)^{k-1}$$
and $A_k(f_1;\cdot): \mathbb{R} \to \mathbb{R}$ are real differentiable functions.
(b) 
If F has a continuous third-order Fréchet-derivative on D, then the following assertion holds:
$$F(x_{n+1}) = \left(\frac{1}{2}F''(x_n)\Gamma_n A_0(F;x_n)\Gamma_n - I\right)A_0(F;x_n) + F''(x_n)\Gamma_n A_0(F;x_n)\left(I + \Gamma_n\tilde H(F;x_n)\right)\Gamma_n F(x_n)$$
$$+\ \frac{1}{2}F''(x_n)\,\Gamma_n F(x_n)\,\Gamma_n F(x_n) + \frac{1}{2}F''(x_n)\left(\Gamma_n\tilde H(F;x_n)\Gamma_n F(x_n)\right)^2$$
$$+\ \left(F''(x_n)\Gamma_n F(x_n)\Gamma_n - I\right)\tilde H(F;x_n)\Gamma_n F(x_n) + \int_{x_n}^{x_{n+1}}\left(F''(x) - F''(x_n)\right)(x_{n+1} - x)\,dx,$$
where:
$$\tilde H(F;x_n) = \sum_{k=2}^{\infty}A_k(F;x_n)\left(\Gamma_n F(x_n)\right)^{k-1}$$
and $A_k(F;\cdot): D \to L(D \times \overset{(k)}{\cdots} \times D, B_2)$ are some particular k-linear operators.
(c) 
$$\|\Gamma_0 F(x_{n+1})\| \le f_1(\bar t_{n+1}).$$
Proof. 
(a)
Simply use the function $f_1$ instead of $f$ in the proof of Lemma 3 in [4].
(b)
The same as for Lemma 5 in [4].
(c)
Use $f_1$ instead of $f$ in Lemma 5 of [4].
 ☐
Lemma 3.
Suppose that F satisfies the center $\gamma_0$-Lipschitz condition on D and the δ-γ-condition on $D_0$. Then, the following items hold:
(a) 
$F'(x)^{-1} \in L(B_2, B_1)$ and:
$$\|F'(x)^{-1}F'(x_0)\| \le -\frac{1}{f_0'(\|x - x_0\|)}, \quad \text{for each } x \in D_0,$$
where:
$$f_0(t) = \beta - t + \frac{\gamma_0 t^2}{1 - \gamma_0 t};$$
(b) 
$\|\Gamma_0 F^{(m-1)}(x)\| \le f_1^{(m-1)}(\|x - x_0\|)$, $m = 3$ or $m = 5$.
Proof. 
(a)
See Remark 1.
(b)
Simply replace function f by f 1 in Lemma 4 in [4].
 ☐
Lemma 4.
Suppose that F satisfies the γ-condition on D. Then, the following items hold:
(a) 
F satisfies the center $\gamma_0$-Lipschitz condition on D and the δ-γ-condition on $D_0$;
(b) 
$\gamma_0 \le \gamma$;
(c) 
$\delta \le \gamma$.
Moreover, if $\alpha \le 3 - 2\sqrt{2}$, then $\bar\alpha_0 = \beta\gamma_0 \le 3 - 2\sqrt{2}$ and $\alpha_0 \le 3 - 2\sqrt{2}$,
$$\|F'(x)^{-1}F'(x_0)\| \le -\frac{1}{f_0'(\|x - x_0\|)} \le -\frac{1}{f'(\|x - x_0\|)},$$
and:
$$\|F'(x)^{-1}F'(x_0)\| \le -\frac{1}{f_1'(\|x - x_0\|)} \le -\frac{1}{f'(\|x - x_0\|)}, \quad \text{for each } x \in D_0.$$
Furthermore, suppose that $\gamma_0 \le \delta$; then, from (18), we have:
$$\|F'(x)^{-1}F'(x_0)\| \le -\frac{1}{f_0'(\|x - x_0\|)} \le -\frac{1}{f_1'(\|x - x_0\|)}.$$
Proof. 
The assertions follow from Lemma 3, (18), the definitions of the functions $f_0$, $f_1$, $f$ and the definitions of the $\gamma_0$-, γ- and δ-conditions. ☐
Theorem 2.
Suppose that the operator F satisfies the conditions of Lemmas 1, 2 and 3, (9), (18) and $A_0(f_1;t) \ge 0$. Then, the sequences $\{x_n\}$, $\{y_n\}$ generated by the method (7) are well defined in $U(x_0, \bar t^*)$, remain in $\bar U(x_0, \bar t^*)$ for each $n = 0, 1, 2, \ldots$ and converge to a solution $\bar x^* \in \bar U(x_0, \bar t^*)$ of the equation $F(x) = 0$. The limit point $\bar x^*$ is the only solution of the equation $F(x) = 0$ in $U(x_0, R_0)$. Moreover, the following estimates are true for each $n = 0, 1, 2, \ldots$:
$$\|y_n - x_n\| \le \bar s_n - \bar t_n, \qquad \|x_{n+1} - y_n\| \le \bar t_{n+1} - \bar s_n, \qquad \|y_n - \bar x^*\| \le \bar t^* - \bar s_n, \qquad \|x_n - \bar x^*\| \le \bar t^* - \bar t_n,$$
where $\bar t^* = \lim_{n\to\infty} \bar t_n$.
Proof. 
We shall show the estimates (19) using mathematical induction, as a consequence of the following recurrence relations:
($M_k^1$): $x_k \in \bar U(x_0, \bar t_k)$;
($M_k^2$): $\|F'(x_k)^{-1}F'(x_0)\| \le -\frac{1}{f_0'(\bar t_k)} \le -\frac{1}{f_1'(\bar t_k)}$;
($M_k^3$): $\|y_k - x_k\| \le \bar s_k - \bar t_k$;
($M_k^4$): $y_k \in \bar U(x_0, \bar s_k)$;
($M_k^5$): $\|x_{k+1} - y_k\| \le \bar t_{k+1} - \bar s_k$.
The items ($M_0^i$), $i = 1, 2, 3, 4, 5$, are true by the initial conditions. Suppose that the items ($M_k^i$) are true for $k = 0, 1, \ldots, n$. Then, we shall show that they hold for $n + 1$. We have in turn that:
$$\|x_{k+1} - x_0\| \le \|x_{k+1} - y_k\| + \|y_k - x_k\| + \|x_k - x_0\| \le (\bar t_{k+1} - \bar s_k) + (\bar s_k - \bar t_k) + (\bar t_k - \bar t_0) = \bar t_{k+1},$$
so ($M_{k+1}^1$) is true. By Lemma 4 and the condition $\gamma_0 \le \delta$, we get that $\Gamma_{k+1} \in L(B_2, B_1)$ and:
$$\|\Gamma_{k+1}F'(x_0)\| \le -\frac{1}{f_0'(\|x_{k+1} - x_0\|)} \le -\frac{1}{f_0'(\bar t_{k+1})} \le -\frac{1}{f_1'(\bar t_{k+1})},$$
so ($M_{k+1}^2$) is true. By Lemma 2, we obtain in turn that:
$$\|\Gamma_0 F(x_{k+1})\| \le f_1(\bar t_{k+1}),$$
so:
$$\|y_{k+1} - x_{k+1}\| = \|\Gamma_{k+1}F(x_{k+1})\| \le \|\Gamma_{k+1}F'(x_0)\|\,\|\Gamma_0 F(x_{k+1})\| \le -\frac{f_1(\bar t_{k+1})}{f_1'(\bar t_{k+1})} = \bar s_{k+1} - \bar t_{k+1},$$
$$\|y_{k+1} - x_0\| \le \|y_{k+1} - x_{k+1}\| + \|x_{k+1} - y_k\| + \|y_k - x_k\| + \|x_k - x_0\| \le (\bar s_{k+1} - \bar t_{k+1}) + (\bar t_{k+1} - \bar s_k) + (\bar s_k - \bar t_k) + (\bar t_k - \bar t_0) = \bar s_{k+1},$$
and:
$$\|x_{k+1} - y_k\| \le \left\|\frac{1}{2}L_F(x_k) + \sum_{j=3}^{\infty}\Gamma_k A_j(F;x_k)\left(\Gamma_k F(x_k)\right)^{j-1}\right\|\,\|y_k - x_k\| \le \bar t_{k+1} - \bar s_k,$$
so ( M k + 1 4 ) and ( M k + 1 5 ) are also true, which completes the induction. ☐
Remark 2.
We have so far weakened the sufficient semilocal convergence conditions of the method (7) (see (17)) and have also enlarged the uniqueness ball for the solution from $U(x_0, R)$ to $U(x_0, R_0)$, since for $\gamma_0 < \gamma$ we have that $R < R_0$. It is worth noticing that these advantages are obtained under the same computational cost, since, in practice, the computation of the constant γ requires the computation of the constants $\gamma_0$ and δ as special cases.
Next, it follows by a simple inductive argument and the definition of the majorizing sequences that the error bounds on the distances $\|x_{n+1} - x_n\|$, $\|x_{n+1} - y_n\|$, $\|x_n - y_n\|$, $\|y_{n+1} - y_n\|$, $\|x_n - x^*\|$ and $\|y_n - x^*\|$ can be improved. In particular, we have:
Proposition 1.
Under the hypotheses of Theorem 2 and Lemma 4, suppose further that, for each $u, u_1, u_2, v, v_1, v_2 \in [0, R]$, the following hold:
$$u - \frac{f_1(u)}{f_1'(u)} \le v - \frac{f(v)}{f'(v)}, \quad \text{for } u \le v,$$
$$-\frac{f_1(u)}{f_1'(u)} \le -\frac{f(v)}{f'(v)}, \quad \text{for } u \le v,$$
$$u_1 - g_1(u_2) \le v_1 - g(v_2), \quad \text{for } u_1 \le v_1, \ u_2 \le v_2, \ u_1 \ge u_2, \ v_1 \ge v_2,$$
$$g_1(u) \ge g(v), \quad \text{for } u \le v,$$
where the functions g and $g_1$ are defined by:
$$g(t) = \frac{A_0(f;t)}{f'(t)} + \sum_{k=2}^{\infty}\frac{A_k(f;t)}{f'(t)}\left(\frac{f(t)}{f'(t)}\right)^{k-1}\frac{f(t)}{f'(t)}$$
and:
$$g_1(t) = \frac{A_0(f_1;t)}{f_1'(t)} + \sum_{k=2}^{\infty}\frac{A_k(f_1;t)}{f_1'(t)}\left(\frac{f_1(t)}{f_1'(t)}\right)^{k-1}\frac{f_1(t)}{f_1'(t)}.$$
Then, the following estimates hold:
$$\bar s_n \le s_n,$$
$$\bar s_n - \bar t_n \le s_n - t_n,$$
$$\bar t_{n+1} \le t_{n+1},$$
$$\bar t_{n+1} - \bar s_n \le t_{n+1} - s_n,$$
$$\bar t^* \le t^*.$$
Proof. 
Estimates (21), (22) and (23) follow by a simple inductive argument from the conditions given in (20). Moreover, the estimate (25) follows by letting $n \to \infty$ in the estimate (21). ☐
Remark 3.
Clearly, under the hypotheses of Proposition 1, the error bounds on the distances are at least as tight, and the information on the location of the solution $x^*$ is at least as precise with the new technique.
The results of Theorem 2 and Proposition 1 can be improved further if we use another approach to compute the upper bounds on the norms $\|\Gamma_k F'(x_0)\|$.
Definition 4.
Suppose that there exists $L_0 > 0$ such that the center-Lipschitz condition:
$$\|\Gamma_0(F'(x) - F'(x_0))\| \le L_0\|x - x_0\|$$
holds for each $x \in D$. Define $D_0^* = D \cap U\left(x_0, \frac{1}{L_0}\right)$.
Then, we have, for each $x \in D_0^*$, that:
$$\|\Gamma_0(F'(x) - F'(x_0))\| \le L_0\|x - x_0\| < 1,$$
so $F'(x)^{-1} \in L(B_2, B_1)$ and:
$$\|F'(x)^{-1}F'(x_0)\| \le \frac{1}{1 - L_0\|x - x_0\|} = -\frac{1}{\bar f_0'(\|x - x_0\|)},$$
where $\bar f_0(t) = \frac{L_0}{2}t^2 - t + \beta$. The estimate (27) is more precise than (12) if:
$$\bar f_0'(t) \le f_0'(t), \quad \text{for each } t \in [0, \rho],$$
where $\rho = \min\left\{\frac{1}{L_0}, R_0\right\}$. If:
$$\frac{1}{L_0} < R_0,$$
then $D_0^*$ is a strict subset of $D_0$. This leads to the construction of a function $f_2$ that is at least as tight as $f_1$.
Definition 5.
Let $F: D \to B_2$ be five-times Fréchet-differentiable and satisfy the center-Lipschitz condition (26). We say that the operator F satisfies the λ-γ-condition at $x_0$ if, for each $x \in D_0^*$:
$$\|\Gamma_0 F(x_0)\| \le \beta, \qquad \|\Gamma_0 F''(x_0)\| \le \frac{2\lambda}{(1 - \lambda\|x - x_0\|)^2}, \qquad \|\Gamma_0 F'''(x)\| \le \frac{6\lambda^2}{(1 - \lambda\|x - x_0\|)^4} = f_2'''(\|x - x_0\|),$$
$$\|\Gamma_0 F^{(iv)}(x_0)\| \le 24\lambda^3, \qquad \|\Gamma_0 F^{(v)}(x)\| \le \frac{120\lambda^4}{(1 - \lambda\|x - x_0\|)^6} = f_2^{(v)}(\|x - x_0\|),$$
where the function $f_2$ is defined on the interval $[0, 1/\lambda)$ by:
$$f_2(t) = \beta - t + \frac{\lambda t^2}{1 - \lambda t}.$$
Define the scalar sequences $\{\bar{\bar t}_n\}$, $\{\bar{\bar s}_n\}$ and the points $\bar{\bar t}^*$, $\bar{\bar t}^{**}$ as $\{\bar t_n\}$, $\{\bar s_n\}$ and $\bar t^*$, $\bar t^{**}$, respectively, but with the function $f_2$ replacing $f_1$, and suppose:
$$\alpha_0^* := \lambda\beta \le 3 - 2\sqrt{2}.$$
We have by (29) that:
$$\lambda \le \delta,$$
so:
$$\alpha_0 \le 3 - 2\sqrt{2} \Longrightarrow \alpha_0^* \le 3 - 2\sqrt{2}.$$
Then, with the above changes, with the condition (28) also replacing the condition in Lemma 3(a), and with the center-Lipschitz condition (26) replacing the center $\gamma_0$-Lipschitz condition, we can recover the preceding results, up to Remark 3, in this setting.

3. Concluding Remarks

The results obtained in this paper are based on the idea of restricted convergence domains, where the gamma constants (or Lipschitz constants), and consequently the corresponding sufficient semilocal convergence conditions, majorizing functions and sequences, are determined by considering the set $D_0$ (or $D_0^*$), which can be a strict subset of the set D on which the operator F is defined. This way, one expects (see also the example at the end of this section) that the new constants will be at least as small as the ones in [4], leading to the advantages already stated. The smaller the subset of D containing the iterates $\{x_n\}$ of the method (7), the tighter the constants. When constructing such sets (see, e.g., $D_0$ or $D_0^*$), it is desirable, if possible, not to add hypotheses but to stay with the same information; otherwise, the comparison between the old and the new results would not be fair. Looking in this direction, we can improve our results further if we simply redefine the sets $D_0$ and $D_0^*$, respectively, as follows:
$$\tilde D_0 = D \cap U(y_0, \tilde R_0)$$
and:
$$\tilde D_0^* = D \cap U\left(y_0, \frac{1}{L_0} - \beta\right),$$
where:
$$\tilde R_0 = R_0 - \beta.$$
Notice that $\tilde D_0 \subseteq D_0 \subseteq D$ and $\tilde D_0^* \subseteq D_0^* \subseteq D$. Clearly, the preceding results can be rewritten with $\tilde D_0$ (or $\tilde D_0^*$) replacing $D_0$ (or $D_0^*$). These results will be at least as good as the ones using $D_0$ (or $D_0^*$), since the constants will be at least as tight. Notice:
(a)
We are still using the same information, since $y_0$ is defined by the first sub-step of the method (7) for $n = 0$; i.e., it depends only on the initial data $(x_0, F)$.
(b)
The iterates $\{x_n\}$ lie in $\tilde D_0$ (or $\tilde D_0^*$), which is an at least as precise location as $D_0$ (or $D_0^*$). Moreover, the solution $x^* \in U(y_0, \tilde R_0)$ (or $x^* \in U(y_0, \frac{1}{L_0} - \beta)$), which is a more precise location than $U(x_0, R_0)$ (or $U(x_0, \frac{1}{L_0})$).
Finally, the results can be improved further as follows:
  • Case related to the center $\gamma_0$-condition (see Definition 2): Suppose that there exists $\beta_1 \ge 0$ such that:
    $$\|x_1 - y_0\| \le \beta_1$$
    and:
    $$\beta + \beta_1 < R_0.$$
    Then, there exists $t_1$ such that:
    $$\beta + \beta_1 < t_1 < R_0.$$
    Define the set $\tilde{\tilde D}_0$ by:
    $$\tilde{\tilde D}_0 := D \cap U(x_1, t_1).$$
    Then, we clearly have that:
    $$\tilde{\tilde D}_0 \subseteq \tilde D_0 \subseteq D_0.$$
    Suppose that the conditions of Definition 3 hold, but on the set $\tilde{\tilde D}_0$. Then, these conditions will hold for some parameter $\bar\delta$ and function $\bar f_1$, which shall be at least as tight as δ and $f_1$, respectively. In particular, the sufficient convergence condition shall be:
    $$\bar\alpha_0 = \bar\delta\beta \le 3 - 2\sqrt{2}$$
    and:
    $$\alpha_0 \le 3 - 2\sqrt{2} \Longrightarrow \alpha_0^* \le 3 - 2\sqrt{2} \Longrightarrow \bar\alpha_0 \le 3 - 2\sqrt{2}.$$
    The majorizing sequences $\{\kappa_n\}$, $\{\mu_n\}$ shall be defined by:
    $$\kappa_0 = 0, \quad \mu_0 = \beta, \quad \kappa_1 = \beta + \beta_1, \quad \mu_1 = \beta,$$
    and $\{\kappa_n\}$, $\{\mu_m\}$, $m = 1, 2, \ldots$, $n = 2, 3, \ldots$, as the sequences $\{\bar{\bar s}_n\}$, $\{\bar{\bar t}_n\}$, but with $\bar f_1$ replacing the function $f_1$. Then, the conclusions of Theorem 2 will hold in this setting. Notice that the estimate $\|x_1 - y_0\| \le \beta_1$ still uses only the initial data, as does $\|y_0 - x_0\| \le \beta$.
  • Case related to the center-Lipschitz condition (26): Similarly to the previous case, but with:
    $$\tilde{\tilde D}_0^* := D \cap U(x_1, t_1).$$
    Then, again, we have that:
    $$\tilde{\tilde D}_0^* \subseteq \tilde D_0^* \subseteq D_0^*.$$
  • An example:
    Let us consider $X = Y = \mathbb{R}^3$, $D = U(0, 1)$, $x^* = (0, 0, 0)$ and the nonlinear operator:
    $$F(x, y, z) = \left(e^x - 1, \ \frac{e - 1}{2}y^2 + y, \ z\right).$$
    In this case, we can consider $\gamma_0 = \frac{e - 1}{2} < \gamma = \frac{e}{2}$ [9], as illustrated in the sketch below.
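    The following sketch (ours) illustrates this example: it compares the radii obtained with $\gamma_0$ and γ and runs the two-step method from an arbitrary starting point inside $U(0, 1)$:

        import math

        E = math.e
        gamma0, gamma = (E - 1) / 2, E / 2
        R  = (1 - 1 / math.sqrt(2)) / gamma    # radius using gamma
        R0 = (1 - 1 / math.sqrt(2)) / gamma0   # enlarged radius using gamma_0
        print(gamma0, "<", gamma, "and", R, "<", R0)

        def F(v):
            x, y, z = v
            return [math.exp(x) - 1, (E - 1) / 2 * y**2 + y, z]

        def solve_Fprime(v, b):
            # F'(v) = diag(e^x, (e - 1)y + 1, 1), so the linear solves are componentwise.
            x, y, _ = v
            d = [math.exp(x), (E - 1) * y + 1, 1.0]
            return [bi / di for bi, di in zip(b, d)]

        v = [0.2, 0.2, 0.2]                    # arbitrary starting point in U(0, 1)
        for n in range(5):
            yv = [vi - ui for vi, ui in zip(v, solve_Fprime(v, F(v)))]    # Newton sub-step
            v  = [yi - ui for yi, ui in zip(yv, solve_Fprime(v, F(yv)))]  # two-step correction
            print(n, max(abs(c) for c in v))   # converges to x* = (0, 0, 0)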

Acknowledgements

Research was supported in part by the Programa de Apoyo a la Investigación de la Fundación Séneca-Agencia de Ciencia y Tecnología de la Región de Murcia 19374/PI/14 and by MTM2015-64382-P (MINECO/FEDER). The second and third authors were partly supported by the project MTM2014-52016-C2-1-P of the Spanish Ministry of Economy and Competitiveness.

Author Contributions

All four authors have worked on the paper in a similar way and on all of its aspects. However, Sergio Amat and Natalia Romero worked more on the construction, motivation and efficiency of the methods, while Ioannis K. Argyros and Miguel A. Hernández-Verón worked more on the theoretical analysis of the methods.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chun, C. Construction of third-order modifications of Newton’s method. Appl. Math. Comput. 2007, 189, 662–668. [Google Scholar] [CrossRef]
  2. Ezquerro, J.A.; Gutiérrez, J.M.; Hernández, M.A.; Salanova, M.A. Chebyshev-like methods and quadratic equations. Rev. Anal. Numer. Theor. Approx. 1999, 28, 23–35. [Google Scholar]
  3. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  4. Amat, S.; Hernández, M.A.; Romero, N. On a family of high-order iterative methods under gamma conditions with applications in denoising. Numer. Math. 2014, 127, 201–221. [Google Scholar] [CrossRef]
  5. Amat, S.; Hernández, M.A.; Romero, N. Semilocal convergence of a sixth order iterative method for quadratic equations. Appl. Numer. Math. 2012, 62, 833–841. [Google Scholar] [CrossRef]
  6. Amat, S.; Hernández, M.A.; Romero, N. A modified Chebyshev’s iterative method with at least sixth order of convergence. Appl. Math. Comput. 2008, 206, 164–174. [Google Scholar] [CrossRef]
  7. Argyros, I.K. An inverse free Newton-Jarrat-type iterative method for solving equations under gamma condition. J. Appl. Math. Comput. 2008, 28, 15–28. [Google Scholar] [CrossRef]
  8. Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Pergamon Press: Oxford, UK, 1982. [Google Scholar]
  9. Argyros, I.K.; Hilout, S.; Khattri, S.K. Expanding the applicability of Newton’s method using Smale’s theory. J. Appl. Math. Comput. 2014, 261, 183–200. [Google Scholar] [CrossRef]
Figure 1. Computational efficiency for the Chebyshev method and the methods (5) and (6) in the family (7).
