Article

A Convex Combination Approach for Mean-Based Variants of Newton’s Method

by Alicia Cordero 1, Jonathan Franceschi 1,2,*, Juan R. Torregrosa 1 and Anna C. Zagati 1,2

1 Institute of Multidisciplinary Mathematics, Universitat Politècnica de València, Camino de Vera, s/n, 46022-Valencia, Spain
2 Università di Ferrara, via Ludovico Ariosto, 35, 44121 Ferrara, Italy
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(9), 1106; https://doi.org/10.3390/sym11091106
Submission received: 31 July 2019 / Revised: 13 August 2019 / Accepted: 18 August 2019 / Published: 2 September 2019
(This article belongs to the Special Issue Symmetry with Operator Theory and Equations)

Abstract:
Several authors have designed variants of Newton’s method for solving nonlinear equations by using different means. This technique involves a symmetry in the corresponding fixed-point operator. In this paper, some known results about mean-based variants of Newton’s method (MBN) are re-analyzed from the point of view of convex combinations. A new test is developed to study the order of convergence of general MBN. Furthermore, a generalization of the Lehmer mean is proposed and discussed. Numerical tests are provided to support the theoretical results and to compare the different methods employed. Some dynamical planes of the analyzed methods on several equations are presented, revealing the great differences among the MBN when it comes to determining the set of starting points that ensure convergence, and illustrating their symmetry in the complex plane.

1. Introduction

We consider the problem of finding a simple zero α of a function f : I ⊆ ℝ → ℝ defined on an open interval I. This zero can be determined as a fixed point of some function g by means of the one-point iteration method:

$$x_{n+1} = g(x_n), \quad n = 0, 1, \ldots,$$

where x_0 is the starting point. The most widely used example of these kinds of methods is the classical Newton’s method, given by:

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \quad n = 0, 1, \ldots \qquad (2)$$
It is well known that it converges quadratically to simple zeros and linearly to multiple zeros. In the literature, many modifications of Newton’s scheme have been published in order to improve its order of convergence and stability. Interesting overviews about this area of research can be found in [1,2,3]. The works of Weerakoon and Fernando [4] and, later, Özban [5] have inspired a whole set of variants of Newton’s method, whose main characteristic is the use of different means in the iterative expression.
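As a point of reference for the variants discussed below, the classical Newton iteration of Equation (2) can be sketched in a few lines of Python (a minimal illustration; the function name `newton`, the tolerance, and the chosen test equation are our own assumptions, not code from the paper):

```python
import math

def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Classical Newton iteration: x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:  # quadratic convergence near a simple zero
            break
    return x

# Simple zero of f(x) = cos(x) - x (test function (d) of the numerical section).
root = newton(lambda x: math.cos(x) - x, lambda x: -math.sin(x) - 1.0, 1.0)
print(root)  # ~0.739085
```

Starting from x_0 = 1, the iteration settles quickly on the unique real solution of cos(x) = x.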
It is known that if a sequence {x_n}_{n≥0} tends to a limit α in such a way that there exist a constant C > 0 and a positive integer n_0 such that:

$$|x_{n+1} - \alpha| \le C\,|x_n - \alpha|^p, \quad \forall n \ge n_0,$$

for p ≥ 1, then p is called the order of convergence of the sequence and C is known as the asymptotic error constant. For p = 1, the constant C satisfies 0 < C ≤ 1.
If we denote by e_n = x_n − α the exact error of the nth iterate, then the relation:

$$e_{n+1} = C e_n^p + O(e_n^{p+1})$$
is called the error equation for the method and p is the order of convergence.
Let us suppose that f : I ⊆ ℝ → ℝ is a sufficiently differentiable function and α is a simple zero of f. It is plain that:

$$f(x) = f(x_n) + \int_{x_n}^{x} f'(t)\,dt. \qquad (5)$$
Weerakoon and Fernando in [4] approximated the definite integral (5) by using the trapezoidal rule and taking x = α, getting:

$$0 \approx f(x_n) + \frac{1}{2}(\alpha - x_n)\left(f'(x_n) + f'(\alpha)\right),$$

and therefore, a new approximation x_{n+1} to α is given by:

$$x_{n+1} = x_n - \frac{f(x_n)}{\left(f'(x_n) + f'(z_n)\right)/2}, \quad z_n = x_n - \frac{f(x_n)}{f'(x_n)}, \quad n = 0, 1, \ldots$$

Thus, this variant of Newton’s scheme can be considered to be obtained by replacing the denominator f'(x_n) of Newton’s method (2) with the arithmetic mean of f'(x_n) and f'(z_n). Therefore, it is known as the arithmetic mean Newton method (AN).
In a similar way, the arithmetic mean can be replaced by other means. In particular, let us regard the harmonic mean of two nonnegative real numbers x and y from a different point of view:

$$M_{Ha}(x, y) = \frac{2xy}{x + y} = x \cdot \underbrace{\frac{y}{x+y}}_{\theta} + y \cdot \underbrace{\frac{x}{x+y}}_{1-\theta},$$

where, since 0 ≤ y ≤ x + y, we have 0 ≤ θ ≤ 1; i.e., the harmonic mean can be seen as a convex combination of x and y in which each element is given the relevance of the other one in the sum. Now, let us switch the roles of x and y; we get:

$$x \cdot \frac{x}{x+y} + y \cdot \frac{y}{x+y} = \frac{x^2 + y^2}{x + y} = M_{Ch}(x, y),$$

which is the contraharmonic mean of x and y.
Özban in [5] used the harmonic mean instead of the arithmetic one, which led to a new method:

$$x_{n+1} = x_n - \frac{f(x_n)\left(f'(x_n) + f'(z_n)\right)}{2 f'(x_n) f'(z_n)}, \quad n = 0, 1, \ldots,$$

where z_n is a Newton step; he called this scheme the harmonic mean Newton method (HN).
Ababneh in [6] designed an iterative method associated with this mean, called the contraharmonic mean Newton method (CHN), whose iterative expression is:
$$x_{n+1} = x_n - \frac{\left(f'(x_n) + f'(z_n)\right) f(x_n)}{f'(x_n)^2 + f'(z_n)^2},$$

with third order of convergence for simple roots of f(x) = 0, as is also the case for the methods proposed by Weerakoon and Fernando [4] and Özban [5].
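All of the variants above share one template: take a Newton predictor z_n, then divide f(x_n) by a mean of the two derivative values. A compact sketch in Python (our own illustration; the driver `mbn` and its defaults are assumptions, not code from the paper):

```python
def mbn(f, df, mean, x0, tol=1e-12, max_iter=100):
    """Mean-based Newton variant: x_{n+1} = x_n - f(x_n) / M(f'(x_n), f'(z_n))."""
    x = x0
    for _ in range(max_iter):
        z = x - f(x) / df(x)                   # Newton predictor step z_n
        x_new = x - f(x) / mean(df(x), df(z))  # corrector with the chosen mean
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

arithmetic = lambda a, b: (a + b) / 2                    # AN
harmonic = lambda a, b: 2 * a * b / (a + b)              # HN
contraharmonic = lambda a, b: (a * a + b * b) / (a + b)  # CHN

f = lambda x: (x - 1.0) ** 3 - 1.0   # test function (e); its real zero is x = 2
df = lambda x: 3.0 * (x - 1.0) ** 2
roots = [mbn(f, df, m, 2.5) for m in (arithmetic, harmonic, contraharmonic)]
```

With the arithmetic mean this is exactly AN; swapping in the harmonic or contraharmonic mean gives HN and CHN with no other changes.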
This idea has been used by different authors for designing iterative methods applying other means, generating symmetric fixed-point operators. For example, Xiaojian in [7] employed the generalized mean of order m ∈ ℝ between two values x and y, defined as:

$$M_G(x, y) = \left(\frac{x^m + y^m}{2}\right)^{1/m},$$

to construct a third-order iterative method for solving nonlinear equations. Furthermore, Singh et al. in [8] presented a third-order iterative scheme by using the Heronian mean between two values x and y, defined as:

$$M_{He}(x, y) = \frac{1}{3}\left(x + \sqrt{xy} + y\right).$$

Finally, Verma in [9], following the same procedure, designed a third-order iterative method by using the centroidal mean between two values x and y, defined as:

$$M_{Ce}(x, y) = \frac{2\left(x^2 + xy + y^2\right)}{3(x + y)}.$$
In this paper, we check that all these means are functional convex combinations and develop a simple test with which the third order of convergence of the corresponding iterative methods can be easily proven. Moreover, we introduce a new method based on the Lehmer mean of order m ∈ ℝ, defined as:

$$M_L^m(x, y) = \frac{x^m + y^m}{x^{m-1} + y^{m-1}},$$
and propose a generalization that also satisfies the previous test. Finally, all these schemes are numerically tested, and their dependence on initial estimations is studied by means of their basins of attraction. These basins are shown to be clearly symmetric.
The rest of the paper is organized as follows: Section 2 is devoted to designing a test that allows us to characterize the third-order convergence of the iterative method defined by a mean; this characterization is then used to give an alternative proof of the convergence of mean-based variants of Newton’s (MBN) methods, including some new ones. In Section 3, we generalize the previous methods by introducing a scheme based on the Lehmer mean and the concept of σ-means. Section 4 is devoted to numerical results and to the use of basins of attraction in order to analyze the dependence of the iterative methods on the initial estimations used. The manuscript finishes with some conclusions.

2. Convex Combination

In a similar way as has been stated in the Introduction for the arithmetic, harmonic, and contraharmonic means, the rest of the mentioned means can also be regarded as convex combinations. This is not coincidental: one of the most interesting properties that a mean satisfies is the averaging property:

$$\min(x, y) \le M(x, y) \le \max(x, y),$$

where M(x, y) is any mean function of nonnegative x and y. This implies that every mean satisfying this property is a certain convex combination of its terms.
Indeed, there exists a unique θ(x, y) ∈ [0, 1] such that M(x, y) = θ(x, y)·x + (1 − θ(x, y))·y, namely:

$$\theta(x, y) = \begin{cases} \dfrac{M(x, y) - y}{x - y} & \text{if } x \neq y, \\[4pt] 0 & \text{if } x = y. \end{cases}$$
This approach suggests that it is possible to generalize every mean-based variant of Newton’s method (MBN), by studying their convex combination counterparts. As a matter of fact, every mean-based variant of Newton’s method can be rewritten as:
$$x_{n+1} = x_n - \frac{f(x_n)}{\theta f'(x_n) + (1 - \theta) f'(z_n)}, \qquad (18)$$

where θ = θ(f'(x_n), f'(z_n)). This is a particular case of a family of iterative schemes constructed in [10].
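The weight θ hidden inside a given mean can be recovered directly from the formula above, θ(x, y) = (M(x, y) − y)/(x − y). A small numerical check (our own illustration, not from the paper) confirms the weights derived in the Introduction for the harmonic and contraharmonic means:

```python
def theta(mean, x, y):
    """Convex-combination weight: M(x, y) = theta*x + (1 - theta)*y."""
    # Convention from the text: return 0 when x == y (any value works there,
    # since M(x, x) = x for every mean).
    return 0.0 if x == y else (mean(x, y) - y) / (x - y)

harmonic = lambda a, b: 2 * a * b / (a + b)
contraharmonic = lambda a, b: (a * a + b * b) / (a + b)

x, y = 3.0, 5.0
th_h = theta(harmonic, x, y)        # y/(x + y) = 5/8: "opposite-weighted"
th_c = theta(contraharmonic, x, y)  # x/(x + y) = 3/8: "self-weighted"
```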
We are interested in studying its order of convergence as a function of θ. Thus, we need to compute the Taylor expansion of the convex combination in the denominator, and then of its inverse:

$$\begin{aligned} \theta f'(x_n) + (1-\theta) f'(z_n) &= \theta f'(\alpha)\left[1 + 2c_2 e_n + 3c_3 e_n^2 + 4c_4 e_n^3 + O(e_n^4)\right] + (1-\theta) f'(\alpha)\left[1 + 2c_2^2 e_n^2 + 4c_2\left(c_3 - c_2^2\right) e_n^3 + O(e_n^4)\right] \\ &= f'(\alpha)\left[1 + 2\theta c_2 e_n + \left(2(1-\theta) c_2^2 + 3\theta c_3\right) e_n^2 + \left(4\theta c_4 + 4(1-\theta) c_2\left(c_3 - c_2^2\right)\right) e_n^3 + O(e_n^4)\right], \end{aligned}$$

where $c_j = \frac{1}{j!}\frac{f^{(j)}(\alpha)}{f'(\alpha)}$, j = 2, 3, …. Then, its inverse can be expressed as:

$$\left[\theta f'(x_n) + (1-\theta) f'(z_n)\right]^{-1} = f'(\alpha)^{-1}\left[1 - 2\theta c_2 e_n + \left(4\theta^2 c_2^2 + 2\theta c_2^2 - 2c_2^2 - 3\theta c_3\right) e_n^2 + O(e_n^3)\right].$$
Now,
$$\frac{f(x_n)}{\theta f'(x_n) + (1-\theta) f'(z_n)} = e_n + c_2(1 - 2\theta) e_n^2 + \left(4\theta^2 c_2^2 - 2c_2^2 + c_3 - 3\theta c_3\right) e_n^3 + O(e_n^4),$$

and by replacing it in (18), since $e_{n+1} = e_n - f(x_n)/\left[\theta f'(x_n) + (1-\theta) f'(z_n)\right]$, it leads to the MBN error equation as a function of θ:

$$e_{n+1} = c_2(2\theta - 1) e_n^2 - \left(4\theta^2 c_2^2 - 2c_2^2 + c_3 - 3\theta c_3\right) e_n^3 + O(e_n^4) =: \Phi(\theta). \qquad (22)$$
Equation (22) can be used to re-discover the results of convergence: for example, for the contraharmonic mean, we have:
$$\theta\left(f'(x_n), f'(z_n)\right) = \frac{f'(x_n)}{f'(x_n) + f'(z_n)},$$

where:

$$f'(x_n) + f'(z_n) = 2 f'(\alpha)\left[1 + c_2 e_n + \left(c_2^2 + \tfrac{3}{2} c_3\right) e_n^2 + O(e_n^3)\right],$$

so that:

$$\frac{1}{f'(x_n) + f'(z_n)} = \left(2 f'(\alpha)\right)^{-1}\left[1 - c_2 e_n - \tfrac{3}{2} c_3 e_n^2 + O(e_n^3)\right].$$

Thus, we can obtain the θ associated with the contraharmonic mean:

$$\theta\left(f'(x_n), f'(z_n)\right) = \tfrac{1}{2}\left[1 + 2c_2 e_n + 3c_3 e_n^2 + O(e_n^3)\right]\left[1 - c_2 e_n - \tfrac{3}{2} c_3 e_n^2 + O(e_n^3)\right] = \tfrac{1}{2} + \tfrac{1}{2} c_2 e_n + \left(\tfrac{3}{4} c_3 - c_2^2\right) e_n^2 + O(e_n^3).$$

Finally, by replacing the previous expression in (22):

$$e_{n+1} = \left(2 c_2^2 + \tfrac{1}{2} c_3\right) e_n^3 + O(e_n^4),$$
and we obtain again that the convergence for the contraharmonic mean Newton method is cubic.
Regarding the harmonic mean, it is straightforward that it is a functional convex combination, with:
$$\theta\left(f'(x_n), f'(z_n)\right) = 1 - \frac{f'(x_n)}{f'(x_n) + f'(z_n)} = \frac{1}{2} - \frac{1}{2} c_2 e_n + \left(c_2^2 - \frac{3}{4} c_3\right) e_n^2 + O(e_n^3).$$
Replacing this expression in (22), we find the cubic convergence of the harmonic mean Newton method,
$$e_{n+1} = \frac{1}{2} c_3 e_n^3 + O(e_n^4).$$
In both cases, the independent term of θ(f'(x_n), f'(z_n)) was 1/2; this was not a coincidence, but an instance of the following more general result.
Theorem 1.
Let θ = θ(f'(x_n), f'(z_n)) be the weight associated with the mean-based variant of Newton’s method (MBN):

$$x_{n+1} = x_n - \frac{f(x_n)}{M\left(f'(x_n), f'(z_n)\right)}, \quad z_n = x_n - \frac{f(x_n)}{f'(x_n)},$$

where M is a mean function of the variables f'(x_n) and f'(z_n). Then, the MBN converges at least cubically if and only if the estimate:

$$\theta = \frac{1}{2} + O(e_n)$$
holds.
Proof. 
We replace θ = 1/2 + O(e_n) in the MBN error Equation (22). Since 2θ − 1 = O(e_n), the quadratic term c_2(2θ − 1)e_n² is of order e_n³, and we obtain:

$$e_{n+1} = c_2(2\theta - 1) e_n^2 - \left(4\theta^2 c_2^2 - 2c_2^2 + c_3 - 3\theta c_3\right) e_n^3 + O(e_n^4) = O(e_n^3),$$

so the convergence is at least cubic. Conversely, if the order is at least three, the coefficient of e_n² in (22) must be O(e_n); provided c_2 ≠ 0, this forces θ = 1/2 + O(e_n). ☐
Now, some considerations follow.
Remark 1.
Generally speaking, we can write:

$$\theta = a_0 + a_1 e_n + a_2 e_n^2 + a_3 e_n^3 + O(e_n^4), \qquad (33)$$

where the a_i are real numbers. If we put (33) in (22), we have:

$$e_{n+1} = c_2(2a_0 - 1) e_n^2 + \left(2a_1 c_2 + 2c_2^2 + 3a_0 c_3 - c_3 - 4a_0^2 c_2^2\right) e_n^3 + O(e_n^4);$$

it follows that, in order to attain cubic convergence, the coefficient of e_n² must be zero; therefore, a_0 = 1/2. On the other hand, to achieve a higher order (i.e., at least four), we need to solve the following system:

$$\begin{cases} 2a_0 - 1 = 0, \\ 2a_1 c_2 + 2c_2^2 + 3a_0 c_3 - c_3 - 4a_0^2 c_2^2 = 0. \end{cases}$$

This gives a_0 = 1/2 and a_1 = −(2c_2² + c_3)/(4c_2), which assure at least fourth-order convergence of the method. However, none of the MBN methods under analysis satisfy these conditions simultaneously.
Remark 2.
The only convex combination involving a constant θ that converges cubically is the one with θ = 1/2, i.e., the arithmetic mean.
The most useful aspect of Theorem 1 is synthesized in the following corollary, which we call the “θ-test”.
Corollary 1 (θ-test).
Under the same hypotheses of Theorem 1, an MBN converges at least cubically if and only if the Taylor expansion of the mean satisfies:

$$M\left(f'(x_n), f'(z_n)\right) = f'(\alpha)\left[1 + c_2 e_n + O(e_n^2)\right].$$
Let us notice that Corollary 1 provides a test to analyze the convergence of an MBN without having to find the inherent θ, thereby considerably reducing the overall complexity of the analysis.
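The θ-test can also be checked numerically: for a mean that passes it, the quantity (M(f'(x_n), f'(z_n))/f'(α) − 1)/e_n should approach c_2 as e_n → 0. A quick sketch with f(x) = eˣ − 1, so that α = 0, f'(α) = 1 and c_2 = 1/2 (this worked example is ours, not the paper’s):

```python
import math

f = lambda x: math.exp(x) - 1.0   # simple zero at alpha = 0
df = math.exp                     # f'(x); here c2 = f''(0) / (2 f'(0)) = 1/2

harmonic = lambda a, b: 2 * a * b / (a + b)

e_n = 1e-4                    # small error x_n - alpha
x_n = e_n
z_n = x_n - f(x_n) / df(x_n)  # Newton step
ratio = (harmonic(df(x_n), df(z_n)) - 1.0) / e_n  # should approach c2 = 0.5
```

For the harmonic mean the ratio comes out close to 1/2, as the θ-test predicts for a third-order MBN.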

Re-Proving Known Results for MBN

In this section, we apply Corollary 1 to prove the cubic convergence of known MBN via a convex combination approach.
(i)
Arithmetic mean:
$$M_A\left(f'(x_n), f'(z_n)\right) = \frac{f'(x_n) + f'(z_n)}{2} = \frac{1}{2}\left(f'(\alpha)\left[1 + 2c_2 e_n + O(e_n^2)\right] + f'(\alpha)\left[1 + O(e_n^2)\right]\right) = f'(\alpha)\left[1 + c_2 e_n + O(e_n^2)\right].$$
(ii)
Heronian mean: In this case, the associated θ -test is:
$$M_{He}\left(f'(x_n), f'(z_n)\right) = \frac{1}{3}\left(f'(\alpha)\left[1 + 2c_2 e_n + O(e_n^2)\right] + f'(\alpha)\left[1 + c_2 e_n + O(e_n^2)\right] + f'(\alpha)\left[1 + O(e_n^2)\right]\right) = \frac{f'(\alpha)}{3}\left[3 + 3c_2 e_n + O(e_n^2)\right] = f'(\alpha)\left[1 + c_2 e_n + O(e_n^2)\right],$$

where the middle term comes from $\sqrt{f'(x_n) f'(z_n)} = f'(\alpha)\left[1 + c_2 e_n + O(e_n^2)\right]$.
(iii)
Generalized mean:
$$M_G\left(f'(x_n), f'(z_n)\right) = \left(\frac{f'(x_n)^m + f'(z_n)^m}{2}\right)^{1/m} = f'(\alpha)\left(\frac{\left[1 + 2c_2 e_n + O(e_n^2)\right]^m + \left[1 + O(e_n^2)\right]^m}{2}\right)^{1/m} = f'(\alpha)\left[1 + m c_2 e_n + O(e_n^2)\right]^{1/m} = f'(\alpha)\left[1 + c_2 e_n + O(e_n^2)\right].$$
(iv)
Centroidal mean:
$$M_{Ce}\left(f'(x_n), f'(z_n)\right) = \frac{2\left(f'(x_n)^2 + f'(x_n) f'(z_n) + f'(z_n)^2\right)}{3\left(f'(x_n) + f'(z_n)\right)} = \frac{2 f'(\alpha)^2\left[3 + 6c_2 e_n + O(e_n^2)\right]}{3 f'(\alpha)\left[2 + 2c_2 e_n + O(e_n^2)\right]} = f'(\alpha)\left[1 + 2c_2 e_n + O(e_n^2)\right]\left[1 - c_2 e_n + O(e_n^2)\right] = f'(\alpha)\left[1 + c_2 e_n + O(e_n^2)\right].$$

3. New MBN by Using the Lehmer Mean and Its Generalization

The iterative expression of the scheme based on the Lehmer mean of order m ∈ ℝ is:

$$x_{n+1} = x_n - \frac{f(x_n)}{M_L^m\left(f'(x_n), f'(z_n)\right)},$$

where z_n = x_n − f(x_n)/f'(x_n) and:

$$M_L^m\left(f'(x_n), f'(z_n)\right) = \frac{f'(x_n)^m + f'(z_n)^m}{f'(x_n)^{m-1} + f'(z_n)^{m-1}}.$$

Indeed, there are suitable values of the parameter m for which the associated Lehmer mean equals the arithmetic (m = 1) and geometric (m = 1/2) means, but also the harmonic (m = 0) and contraharmonic (m = 2) ones. In what follows, we will find it again, this time in a more general context.
By analyzing the associated θ-test, we conclude that the iterative scheme designed with this mean has order of convergence three:

$$M_L^m\left(f'(x_n), f'(z_n)\right) = \frac{f'(\alpha)^m\left[1 + 2c_2 e_n + O(e_n^2)\right]^m + f'(\alpha)^m\left[1 + O(e_n^2)\right]^m}{f'(\alpha)^{m-1}\left[1 + 2c_2 e_n + O(e_n^2)\right]^{m-1} + f'(\alpha)^{m-1}\left[1 + O(e_n^2)\right]^{m-1}} = f'(\alpha)\,\frac{2 + 2m c_2 e_n + O(e_n^2)}{2 + 2(m-1) c_2 e_n + O(e_n^2)} = f'(\alpha)\left[1 + m c_2 e_n + O(e_n^2)\right]\left[1 - (m-1) c_2 e_n + O(e_n^2)\right] = f'(\alpha)\left[1 + c_2 e_n + O(e_n^2)\right].$$
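The interpolating behavior of the Lehmer mean is easy to verify numerically (our own check; the listed special orders are standard properties of the two-variable Lehmer mean):

```python
def lehmer(m):
    """Lehmer mean of order m: (x^m + y^m) / (x^(m-1) + y^(m-1))."""
    return lambda x, y: (x**m + y**m) / (x**(m - 1) + y**(m - 1))

x, y = 3.0, 5.0
means = {
    "harmonic": lehmer(0)(x, y),        # 2xy/(x + y) = 3.75
    "geometric": lehmer(0.5)(x, y),     # sqrt(xy), for two arguments
    "arithmetic": lehmer(1)(x, y),      # (x + y)/2 = 4.0
    "contraharmonic": lehmer(2)(x, y),  # (x^2 + y^2)/(x + y) = 4.25
}
```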

σ-Means

Now, we propose a new family of means of n variables, starting again from convex combinations. The core idea in this work is that, in the end, two distinct means only differ in their corresponding weights θ and 1 − θ. In particular, we can regard the harmonic mean as an “opposite-weighted” mean, while the contraharmonic one is a “self-weighted” mean.
This behavior can be generalized to n variables:
$$M_{CH}(x_1, \ldots, x_n) = \frac{\sum_{i=1}^{n} x_i^2}{\sum_{i=1}^{n} x_i} \qquad (43)$$

is the contraharmonic mean of n numbers. Equation (43) is just a particular case of what we call a σ-mean.
Definition 1 (σ-mean).
Given a vector x = (x_1, …, x_n) ∈ ℝⁿ of n real numbers and a bijective map σ : {1, …, n} → {1, …, n} (i.e., σ induces a permutation of x_1, …, x_n), we call the σ-mean of order m ∈ ℝ the real number given by:

$$M_\sigma(x_1, \ldots, x_n) := \frac{\sum_{i=1}^{n} x_i\, x_{\sigma(i)}^m}{\sum_{j=1}^{n} x_j^m}. \qquad (44)$$

Indeed, it is easy to see that, in a σ-mean, the weight assigned to each node x_i is:

$$\frac{x_{\sigma(i)}^m}{\sum_{j=1}^{n} x_{\sigma(j)}^m} = \frac{x_{\sigma(i)}^m}{\sum_{j=1}^{n} x_j^m} \in [0, 1], \qquad (45)$$
where the equality holds because σ is a permutation of the indices. We are, therefore, still dealing with a convex combination, which implies that Definition 1 is well posed.
We remark that if we take σ = id, i.e., the identical permutation, in (44), we recover a Lehmer mean. Actually, the Lehmer mean is a very special case of the σ-mean, as the following result proves.
Proposition 1.
Given m ∈ ℝ, the Lehmer mean is the maximum among the σ-means of order m.
Proof. 
We recall the rearrangement inequality:
$$x_n y_1 + \cdots + x_1 y_n \le x_{\sigma(1)} y_1 + \cdots + x_{\sigma(n)} y_n \le x_1 y_1 + \cdots + x_n y_n,$$

which holds for every choice of x_1, …, x_n and y_1, …, y_n regardless of the signs, assuming that both the x_i and the y_j are sorted in increasing order. In particular, x_1 < x_2 < ⋯ < x_n and y_1 < y_2 < ⋯ < y_n imply that the upper bound is attained only by the identical permutation.
Then, to prove the result, it is enough to replace every y i with the corresponding weight defined in (45). ☐
The Lehmer mean and the σ-mean are deeply related: if n = 2, as is the case for MBN, there are only two possible permutations, the identical one and the one that swaps one and two. We have already observed that the identical permutation leads to the Lehmer mean; however, if we express σ in standard cycle notation as σ̄ = (1 2), we have that:

$$M_{\bar{\sigma}}(x_1, x_2) = \frac{x_1 x_2^m + x_2 x_1^m}{x_1^m + x_2^m} = \frac{x_1 x_2\left(x_1^{m-1} + x_2^{m-1}\right)}{x_1^m + x_2^m} = \frac{x_1 x_2}{M_L^m(x_1, x_2)},$$

which is again a Lehmer mean, now of complementary order, since $x_1 x_2 = M_L^m(x_1, x_2)\, M_L^{1-m}(x_1, x_2)$.
We conclude this section by proving another property of σ-means: the arithmetic mean of all possible σ-means of n numbers equals the arithmetic mean of the numbers themselves.
Proposition 2.
Given n real numbers x_1, …, x_n, and denoting by Σ_n the set of all possible permutations of {1, …, n}, we have:

$$\frac{1}{n!} \sum_{\sigma \in \Sigma_n} M_\sigma(x_1, \ldots, x_n) = \frac{1}{n} \sum_{i=1}^{n} x_i \qquad (48)$$

for all m ∈ ℝ.
Proof. 
Let us rewrite Equation (48); by definition, we have:
$$\frac{1}{n!} \sum_{\sigma \in \Sigma_n} M_\sigma(x_1, \ldots, x_n) = \frac{1}{n!} \sum_{\sigma \in \Sigma_n} \frac{\sum_{i=1}^{n} x_i\, x_{\sigma(i)}^m}{\sum_{j=1}^{n} x_j^m} = \frac{1}{n} \sum_{i=1}^{n} x_i,$$

and we claim that the last equality holds. Indeed, we notice that every σ-mean on the left side of the last equality has the same constant denominator, so we can multiply both sides by it and also by n! to get:

$$\sum_{\sigma \in \Sigma_n} \sum_{i=1}^{n} x_i\, x_{\sigma(i)}^m = (n-1)! \sum_{j=1}^{n} x_j^m \sum_{i=1}^{n} x_i. \qquad (50)$$

Now, it is just a matter of distributing the product on the right in a careful way:

$$(n-1)! \sum_{j=1}^{n} x_j^m \sum_{i=1}^{n} x_i = \sum_{i=1}^{n} x_i \cdot \sum_{k=1}^{n} (n-1)!\, x_k^m.$$

If we fix i ∈ {1, …, n} and a value k, there are in Σ_n exactly (n − 1)! permutations σ such that σ(i) = k. Therefore, the equality in (50) follows straightforwardly. ☐
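Proposition 2 is easy to verify by brute force for small n (our own check; the sample values and the order m are arbitrary):

```python
from itertools import permutations

def sigma_mean(xs, sigma, m):
    """sigma-mean of order m (Definition 1): sum_i x_i * x_sigma(i)^m / sum_j x_j^m."""
    denom = sum(x**m for x in xs)
    return sum(xs[i] * xs[sigma[i]]**m for i in range(len(xs))) / denom

xs = (2.0, 3.0, 7.0)
m = 4
sig_means = [sigma_mean(xs, s, m) for s in permutations(range(len(xs)))]

# Every sigma-mean is a convex combination, so it satisfies the averaging
# property, and the average over all n! of them is the arithmetic mean.
avg = sum(sig_means) / len(sig_means)   # = (2 + 3 + 7)/3 = 4.0
```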

4. Numerical Results and Dependence on Initial Estimations

Now, we present the results of some numerical computations, in which the following test functions have been used:
(a) f_1(x) = x³ + 4x² − 10,
(b) f_2(x) = sin²(x) − x² + 1,
(c) f_3(x) = x² − eˣ − 3x + 2,
(d) f_4(x) = cos(x) − x,
(e) f_5(x) = (x − 1)³ − 1.
The numerical tests were carried out by using MATLAB with double-precision arithmetic on a computer with an i7-8750H processor @2.20 GHz and 16 GB of RAM; the stopping criterion used was |x_{n+1} − x_n| + |f(x_{n+1})| < 10⁻¹⁴.
We used the harmonic mean Newton method (HN), the contraharmonic mean Newton method (CHN), the Lehmer mean Newton method (LN(m)), a variant of Newton’s method in which the mean is a constant convex combination with θ = 1/3 (denoted by 1/3 N), and the classical Newton method (CN). The main goals of these calculations are to confirm the theoretical results stated in the preceding sections and to compare the different methods, with CN as a control benchmark. In Table 1, we show the number of iterations that each method needs to satisfy the stopping criterion, as well as the approximated computational order of convergence (ACOC), defined in [11] with the expression:
$$ACOC = \frac{\ln\left(|x_{n+1} - x_n| \,/\, |x_n - x_{n-1}|\right)}{\ln\left(|x_n - x_{n-1}| \,/\, |x_{n-1} - x_{n-2}|\right)}, \quad n = 2, 3, \ldots,$$
which is considered as a numerical approximation of the theoretical order of convergence p.
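The ACOC is computed from the last four iterates; a minimal sketch (using plain Newton as the example sequence rather than the paper’s MATLAB runs, so the expected value here is 2):

```python
import math

def acoc(xs):
    """Approximated computational order of convergence from the last four iterates."""
    d1 = abs(xs[-1] - xs[-2])
    d2 = abs(xs[-2] - xs[-3])
    d3 = abs(xs[-3] - xs[-4])
    return math.log(d1 / d2) / math.log(d2 / d3)

# Newton iterates for f(x) = x^2 - 2 starting at 1.5: quadratic convergence.
f, df = lambda x: x * x - 2.0, lambda x: 2.0 * x
xs = [1.5]
for _ in range(3):
    xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))
p = acoc(xs)   # close to 2
```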
Regarding the efficiency of the MBN, we used the efficiency index defined by Ostrowski in [12] as EI = p^{1/d}, where p is the order of convergence of the method and d is the number of functional evaluations per iteration. In this sense, all the MBN have the same index, EI_MBN = 3^{1/3} ≈ 1.442; meanwhile, Newton’s scheme has EI_CN = 2^{1/2} ≈ 1.414. Therefore, all MBN are more efficient than the classical Newton method.
The presented numerical tests showed the performance of the different iterative methods on specific problems with fixed initial estimations and a stringent stopping criterion. However, it is also useful to know their dependence on the initial estimation used. Although the convergence of the methods has been proven for real functions, it is usual to analyze the sets of convergent initial guesses in the complex plane (the proof would be analogous, changing the condition of differentiability to being holomorphic). To this aim, we plotted the dynamical planes of each of the iterative methods on the nonlinear functions f_i(x), i = 1, 2, …, 5, used in the numerical tests. In them, a mesh of 400 × 400 initial estimations was employed in the region [−3, 3] × [−3, 3] of the complex plane.
We used the routines appearing in [13] to plot the dynamical planes corresponding to each method. In them, each point of the mesh is an initial estimation for the analyzed method on the specific problem. If the method reaches a root of the nonlinear function in fewer than 40 iterations (closer than 10⁻³), then the point is painted in the color of the basin of that root (orange for the first root, green for the second, etc.); if the process converges to an attractor different from the roots, then the point is painted in black. The zeros of the nonlinear functions are marked in the pictures by white stars.
In Figure 1, we observe that the harmonic and Lehmer (m = −7) means showed the most stable performance: their only basins of attraction were those of the roots (plotted in orange, red, and green). In the rest of the cases, there existed black areas of no convergence to the zeros of the nonlinear function f_1(x). Especially unstable were the cases of the Heronian, convex combination (θ = ±2), and generalized means, with wide black areas and very small basins of the complex roots.
Regarding Figure 2, again the Heronian, convex combination (θ = 2), and generalized means showed convergence to only one of the roots, or very narrow basins of attraction. There existed black areas of no convergence to the roots in all cases, but the widest green and orange basins (corresponding to the zeros of f_2(x)) were those of the harmonic, contraharmonic, centroidal, and Lehmer means.
Function f_3(x) has only one zero, at x ≈ 0.25753, whose basin of attraction is painted in orange in Figure 3. In general, most of the methods presented good performance; however, three methods did not converge to the root within the maximum number of iterations allowed: the Heronian mean and the generalized means with m = ±2. Moreover, the basin of attraction was reduced for the convex combination mean, depending on the parameter θ.
A similar performance is observed in Figure 4, where the Heronian and generalized means with m = ±2 showed no convergence to the only root of f_4(x), while the rest of the methods presented good behavior. Let us remark that in some cases blue areas appear; these correspond to initial estimations whose absolute value exceeded 1000 after 40 consecutive iterations. In these cases, they and the surrounding black areas were identified as regions of divergence of the method. The best methods in this case were those associated with the arithmetic and harmonic means.
In Figure 5, the best results in terms of the wideness of the basins of attraction of the roots were those of the harmonic and Lehmer (m = −7) means. The biggest black areas corresponded to the convex combination with θ = 2, for which the three basins of attraction of the roots were very narrow; for the Heronian and generalized means, there was convergence only to the real root.

5. Conclusions

The proposed θ-test (Corollary 1) has proven to be very useful for reducing the calculations in the analysis of convergence of any MBN. Moreover, though the context of mean-based variants of Newton’s method is probably not the best one in which to appreciate the flexibility of σ-means, their use could still lead to interesting results, due to their much greater capability of interpolating between numbers than already powerful means, such as the Lehmer one.
With regard to the numerical performance, Table 1 confirms that a convex combination with a constant coefficient converges cubically if and only if it is the arithmetic mean; otherwise, as in this case (θ = 1/3), it converges quadratically, even if it may do so with fewer iterations, generally speaking, than CN. Regarding the number of iterations, there were nonlinear functions for which LN(m) converged with fewer iterations than HN. In our calculations, we set m = −7, but similar results were achieved for other values of the parameter. Regarding the dependence on initial estimations, the harmonic and Lehmer methods proved to be very stable, with the widest areas of convergence in most of the nonlinear problems used in the tests.

Author Contributions

The individual contributions of the authors are as follows: conceptualization, J.R.T.; writing, original draft preparation, J.F. and A.C.Z.; validation, A.C. and J.R.T.; formal analysis, A.C.; numerical experiments, J.F. and A.C.Z.

Funding

This research was partially funded by Spanish Ministerio de Ciencia, Innovación y Universidades PGC2018-095896-B-C22 and by Generalitat Valenciana PROMETEO/2016/089 (Spain).

Acknowledgments

The authors would like to thank the anonymous reviewers for their comments and suggestions, which improved the final version of this manuscript.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

References

  1. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  2. Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations; Academic Press: Cambridge, MA, USA, 2013. [Google Scholar]
  3. Amat, S.; Busquier, S. Advances in Iterative Methods for Nonlinear Equations; SEMA SIMAI Springer Series; Springer: Berlin, Germany, 2016; Volume 10. [Google Scholar]
  4. Weerakoon, S.; Fernando, T.A. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
  5. Özban, A. Some new variants of Newton’s method. Appl. Math. Lett. 2004, 17, 677–682. [Google Scholar] [CrossRef]
  6. Ababneh, O.Y. New Newton’s method with third order convergence for solving nonlinear equations. World Acad. Sci. Eng. Technol. 2012, 61, 1071–1073. [Google Scholar]
  7. Xiaojian, Z. A class of Newton’s methods with third-order convergence. Appl. Math. Lett. 2007, 20, 1026–1030. [Google Scholar]
  8. Singh, M.K.; Singh, A.K. A new-mean type variant of Newton’s method for simple and multiple roots. Int. J. Math. Trends Technol. 2017, 49, 174–177. [Google Scholar] [CrossRef]
  9. Verma, K.L. On the centroidal mean Newton’s method for simple and multiple roots of nonlinear equations. Int. J. Comput. Sci. Math. 2016, 7, 126–143. [Google Scholar] [CrossRef]
  10. Zafar, F.; Mir, N.A. A generalized family of quadrature based iterative methods. Gen. Math. 2010, 18, 43–51. [Google Scholar]
  11. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
  12. Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1966. [Google Scholar]
  13. Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Drawing dynamical and parameters planes of iterative families and methods. Sci. World J. 2013, 2013, 780153. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Dynamical planes of the mean-based methods on f_1(x) = x³ + 4x² − 10.
Figure 2. Dynamical planes of the mean-based methods on f_2(x) = sin²(x) − x² + 1.
Figure 3. Dynamical planes of the mean-based methods on f_3(x) = x² − eˣ − 3x + 2.
Figure 4. Dynamical planes of the mean-based methods on f_4(x) = cos(x) − x.
Figure 5. Dynamical planes of the mean-based methods on f_5(x) = (x − 1)³ − 1.
Table 1. Numerical results. HN, the harmonic mean Newton method; CHN, the contraharmonic mean Newton method; LN(−7), the Lehmer mean Newton method with m = −7; 1/3 N, the convex combination with θ = 1/3; CN, the classical Newton method.
| Function | x_0 | HN iter. | CHN iter. | LN(−7) iter. | 1/3 N iter. | CN iter. | HN ACOC | CHN ACOC | LN(−7) ACOC | 1/3 N ACOC | CN ACOC |
|---|---|---|---|---|---|---|---|---|---|---|---|
| (a) | −0.5 | 50 | 18 | 5 | 56 | 132 | 3.10 | 3.03 | 2.97 | 1.99 | 2.00 |
| | 1 | 4 | 5 | 5 | 5 | 6 | 2.94 | 3.01 | 2.96 | 2.02 | 2.00 |
| | 2 | 4 | 5 | 5 | 5 | 6 | 3.10 | 2.99 | 3.02 | 2.00 | 2.00 |
| (b) | 1 | 4 | 5 | 6 | 6 | 7 | 3.06 | 3.16 | 3.01 | 2.01 | 2.00 |
| | 3 | 4 | 5 | 7 | 6 | 7 | 3.01 | 2.95 | 3.02 | 2.01 | 2.00 |
| (c) | 2 | 5 | 5 | 5 | 5 | 6 | 3.01 | 2.99 | 3.11 | 2.01 | 2.00 |
| | 3 | 5 | 6 | 5 | 6 | 7 | 3.10 | 3.00 | 3.10 | 2.01 | 2.00 |
| (d) | −0.3 | 5 | 5 | 5 | 6 | 6 | 2.99 | 3.14 | 3.02 | 2.01 | 1.99 |
| | 1 | 4 | 4 | 4 | 5 | 5 | 2.99 | 2.87 | 2.88 | 2.01 | 2.00 |
| | 1.7 | 4 | 4 | 5 | 5 | 5 | 3.00 | 2.72 | 3.02 | 2.01 | 1.99 |
| (e) | 0 | 6 | >1000 | 7 | 7 | 10 | 3.06 | 3.00 | 3.02 | 2.01 | 2.00 |
| | 1.5 | 5 | 7 | 7 | 7 | 8 | 3.04 | 3.01 | 2.99 | 2.01 | 2.00 |
| | 2.5 | 4 | 5 | 5 | 5 | 7 | 3.07 | 2.96 | 3.01 | 1.99 | 2.00 |
| | 3.0 | 5 | 6 | 6 | 6 | 7 | 3.04 | 2.99 | 2.98 | 2.00 | 2.00 |
| | 3.5 | 5 | 6 | 6 | 6 | 8 | 3.07 | 2.95 | 2.99 | 2.00 | 2.00 |
