Article

Extending the Applicability of the MMN-HSS Method for Solving Systems of Nonlinear Equations under Generalized Conditions

Ioannis K. Argyros 1,*, Janak Raj Sharma 2 and Deepak Kumar 2
1 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
2 Department of Mathematics, Sant Longowal Institute of Engineering and Technology, Longowal, Sangrur 148106, India
* Author to whom correspondence should be addressed.
Algorithms 2017, 10(2), 54; https://doi.org/10.3390/a10020054
Submission received: 18 April 2017 / Accepted: 9 May 2017 / Published: 12 May 2017
(This article belongs to the Special Issue Numerical Algorithms for Solving Nonlinear Equations and Systems 2017)

Abstract: We present the semilocal convergence of a multi-step modified Newton-Hermitian and Skew-Hermitian Splitting (MMN-HSS) method for approximating a solution of a nonlinear equation. Earlier studies have shown convergence under Lipschitz conditions only, which limits the applicability of this method. In this study, convergence is shown under generalized Lipschitz-type conditions and restricted convergence domains; hence, the applicability of the method is extended. Moreover, numerical examples are provided to show that our results can be applied to solve equations in cases where the earlier results cannot. Furthermore, in cases where both the old and the new results apply, the latter provide a larger domain of convergence and tighter error bounds on the distances involved.

1. Introduction

Let $F : D \subset \mathbb{C}^n \to \mathbb{C}^n$ be Gateaux-differentiable and let $D$ be an open set. Let also $x_0 \in D$ be a point at which $F'(x)$ is continuous and positive definite. Suppose that $F'(x) = H(x) + S(x)$, where $H(x) = \frac{1}{2}\big(F'(x) + F'(x)^*\big)$ and $S(x) = \frac{1}{2}\big(F'(x) - F'(x)^*\big)$ are the Hermitian and skew-Hermitian parts of the Jacobian matrix $F'(x)$, respectively. Many problems can be formulated as the equation
$$F(x) = 0, \tag{1}$$
using mathematical modelling [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22]. The solution $x^*$ of Equation (1) can rarely be found in explicit form. This is why most solution methods for Equation (1) are usually iterative. In particular, Hermitian and skew-Hermitian splitting (HSS) methods have been shown to be very efficient for solving large sparse non-Hermitian positive definite systems of linear equations [11,12,17,19,22].
We study the semilocal convergence of the multi-step modified Newton-HSS (MMN-HSS) method defined by
$$x_k^{(0)} = x_k, \qquad x_k^{(i)} = x_k^{(i-1)} - \left(I - T(\alpha; x_k)^{\,l_k^{(i)}}\right) F'(x_k)^{-1} F\big(x_k^{(i-1)}\big), \quad 1 \le i \le m, \qquad x_{k+1} = x_k^{(m)}, \quad k = 0, 1, \ldots, \tag{2}$$
where $x_0 \in D$ is an initial point, $T(\alpha; x) = (\alpha I + S(x))^{-1}(\alpha I - H(x))(\alpha I + H(x))^{-1}(\alpha I - S(x))$, $l_k^{(i)}$ is a sequence of positive integers, and $\alpha$ and $tol$ are positive constants used in the stopping criteria
$$\|F(x_k)\| \le tol\, \|F(x_0)\|$$
and
$$\|F(x_k) + F'(x_k)\, d_{k, l_k}\| \le \eta_k\, \|F(x_k)\|, \qquad \eta_k \in [0, 1), \quad \eta_k \le \eta < 1.$$
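To make the structure of one outer sweep concrete, the following is a minimal dense-matrix sketch in Python/NumPy. It is only an illustration under our own naming (the functions f and jac and the parameters alpha, m and l are hypothetical choices, and a single inner iteration count l is used for all i); it is not the implementation used for the experiments, and a practical code would run the inner HSS iterations on sparse factorizations rather than form $T(\alpha; x)$ explicitly.

```python
import numpy as np

def mmn_hss_step(f, jac, x, alpha, m, l):
    """One outer step x_k -> x_{k+1} of the MMN-HSS method (2).

    f(x) returns F(x); jac(x) returns the Jacobian F'(x).
    """
    J = jac(x)
    H = 0.5 * (J + J.conj().T)   # Hermitian part of F'(x)
    S = 0.5 * (J - J.conj().T)   # skew-Hermitian part of F'(x)
    n = len(x)
    I = np.eye(n)
    # HSS iteration matrix T(alpha; x) = (aI+S)^{-1}(aI-H)(aI+H)^{-1}(aI-S)
    T = np.linalg.solve(alpha * I + S,
                        (alpha * I - H) @ np.linalg.solve(alpha * I + H,
                                                          alpha * I - S))
    Tl = np.linalg.matrix_power(T, l)
    y = x.copy()
    for _ in range(m):
        # (I - T^l) F'(x_k)^{-1} F(y) is the inexact Newton correction;
        # F'(x_k) and T are frozen at x_k for all m inner steps, as in (2)
        y = y - (I - Tl) @ np.linalg.solve(J, f(y))
    return y
```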
The local and semilocal convergence analysis of method (2) was given in [19] using Lipschitz continuity conditions on $F'$. Later, we extended the local convergence of method (2) using generalized Lipschitz continuity conditions [8].
In the present study, we show that the semilocal results in [19] can be extended in the same way as the local ones for the MN-HSS method in [8]. Using generalized Lipschitz-type conditions, we present a new semilocal convergence analysis with the following advantages (A):
(a)
Larger radius of convergence,
(b)
More precise error estimates on $\|x_k - x^*\|$,
(c)
The new results can be used in cases where the old ones in [19] cannot be used to solve Equation (1).
The advantages (A) are obtained under the same computational cost as in [19]. Hence, the applicability of the MMN-HSS method is extended.
The rest of the paper is structured as follows: Section 2 contains the semilocal convergence analysis of the MMN-HSS method. Section 3 contains the numerical examples.

2. Semilocal Convergence

The following hypotheses shall be used in the semilocal convergence analysis (H):
(H1)
Let $x_0 \in \mathbb{C}^n$. There exist $\beta_1 > 0$, $\beta_2 > 0$, $\gamma > 0$ and $\mu > 0$ such that
$$\|H(x_0)\| \le \beta_1, \quad \|S(x_0)\| \le \beta_2, \quad \|F'(x_0)^{-1}\| \le \gamma, \quad \|F(x_0)\| \le \mu.$$
(H2)
There exist continuous and nondecreasing functions $v_1 : [0, +\infty) \to \mathbb{R}$ and $v_2 : [0, +\infty) \to \mathbb{R}$ with $v_1(0) = v_2(0) = 0$ such that, for each $x \in D$,
$$\|H(x) - H(x_0)\| \le v_1(\|x - x_0\|),$$
$$\|S(x) - S(x_0)\| \le v_2(\|x - x_0\|).$$
Define functions $w$ and $v$ by $w(t) = w_1(t) + w_2(t)$ and $v(t) = v_1(t) + v_2(t)$, where $w_1$ and $w_2$ are given in $(H_3)$. Let
$$r_0 = \sup\{t \ge 0 : \gamma v(t) < 1\}$$
and set
$$D_0 = D \cap U(x_0, r_0).$$
(H3)
There exist continuous and nondecreasing functions $w_1 : [0, +\infty) \to \mathbb{R}$ and $w_2 : [0, +\infty) \to \mathbb{R}$ with $w_1(0) = w_2(0) = 0$ such that, for each $x, y \in D_0$,
$$\|H(x) - H(y)\| \le w_1(\|x - y\|),$$
$$\|S(x) - S(y)\| \le w_2(\|x - y\|).$$
We need the following auxiliary results for the semilocal convergence analysis that follows.
Lemma 1.
Under the (H) hypotheses, the following items hold for each $x, y \in D_0$:
$$\|F'(x) - F'(y)\| \le w(\|x - y\|), \tag{3}$$
$$\|F'(x) - F'(x_0)\| \le v(\|x - x_0\|), \tag{4}$$
$$\|F'(x)\| \le v(\|x - x_0\|) + \beta_1 + \beta_2, \tag{5}$$
$$\|F(x) - F(y) - F'(y)(x - y)\| \le \int_0^1 w(\|x - y\|\,\xi)\, d\xi\, \|x - y\|, \tag{6}$$
and
$$\|F'(x)^{-1}\| \le \frac{\gamma}{1 - \gamma v(\|x - x_0\|)}. \tag{7}$$
Proof. 
By hypothesis $(H_3)$ and $F'(x) = H(x) + S(x)$, we have that
$$\|F'(x) - F'(y)\| = \|H(x) - H(y) + S(x) - S(y)\| \le \|H(x) - H(y)\| + \|S(x) - S(y)\| \le w_1(\|x - y\|) + w_2(\|x - y\|) = w(\|x - y\|)$$
and by $(H_2)$
$$\|F'(x) - F'(x_0)\| \le \|H(x) - H(x_0)\| + \|S(x) - S(x_0)\| \le v_1(\|x - x_0\|) + v_2(\|x - x_0\|) = v(\|x - x_0\|),$$
which show inequalities (3) and (4), respectively.
Then, we get, by $(H_1)$ and inequality (4),
$$\|F'(x)\| = \|F'(x) - F'(x_0) + F'(x_0)\| \le \|F'(x) - F'(x_0)\| + \|H(x_0)\| + \|S(x_0)\| \le v(\|x - x_0\|) + \beta_1 + \beta_2,$$
which shows inequality (5). Using $(H_3)$, we obtain that
$$\|F(x) - F(y) - F'(y)(x - y)\| = \left\| \int_0^1 \left[ F'\big(y + \xi(x - y)\big) - F'(y) \right] d\xi\, (x - y) \right\| \le \int_0^1 w(\|x - y\|\,\xi)\, d\xi\, \|x - y\|,$$
which shows inequality (6). By $(H_1)$, $(H_2)$ and inequality (4), we get, in turn, that for $x \in D_0$:
$$\|F'(x_0)^{-1}\|\, \|F'(x) - F'(x_0)\| \le \gamma\, v(\|x - x_0\|) \le \gamma\, v(r_0) < 1. \tag{8}$$
It follows from inequality (8) and the Banach lemma on invertible operators [4] that $F'(x)^{-1}$ exists, so that inequality (7) is satisfied.    ☐
We shall define some scalar functions and parameters to be used in the semilocal convergence analysis. Let $t_0 = 0$ and $s_0^{(1)} = (1 + \eta)\gamma\mu$. Define scalar sequences $\{t_k\}, \{s_k^{(1)}\}, \ldots, \{s_k^{(m-1)}\}$ by the following scheme:
$$t_0 = 0, \qquad s_k^{(0)} = t_k, \qquad t_{k+1} = s_k^{(m)},$$
$$s_k^{(i)} = s_k^{(i-1)} + \frac{\left[ \left( \int_0^1 w\big((s_k^{(i-1)} - s_k^{(i-2)})\xi\big)\, d\xi + w\big(s_k^{(i-2)} - t_k\big) \right)(1 + \eta)\gamma + \eta\big(1 - \gamma v(t_k)\big) \right] \big(s_k^{(i-1)} - s_k^{(i-2)}\big)}{1 - \gamma v(t_k)}, \tag{9}$$
$$i = 1, 2, \ldots, m, \qquad k = 0, 1, 2, \ldots$$
Moreover, define functions $q$ and $h_q$ on the interval $[0, r_0)$ by
$$q(t) = \frac{(1 + \eta)\gamma \int_0^1 w\big((1 + \eta)\gamma\mu\,\xi\big)\, d\xi + (1 + \eta)\gamma\, w(t) + \eta\big(1 - \gamma v(t)\big)}{1 - \gamma v(t)}$$
and
$$h_q(t) = q(t) - 1.$$
We have that $h_q(0) = \eta - 1 < 0$ and $h_q(t) \to \infty$ as $t \to r_0^-$. It follows from the intermediate value theorem that function $h_q$ has zeros in the interval $(0, r_0)$. Denote by $r_q$ the smallest such zero. Then, we have that for each $t \in [0, r_q)$,
$$0 \le q(t) < 1. \tag{10}$$
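Since $q$ is available in closed form once $w$, $v$ and the constants are fixed, $r_q$ can be located numerically. The following is a minimal sketch of such a computation (our own illustration, assuming user-supplied functions $w$, $v$ and using SciPy for quadrature and root-finding); the linear choices $w(t) = Lt$, $v(t) = Kt$ in the demonstration use hypothetical constants, not values from the paper.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def smallest_zero_of_h_q(w, v, eta, gamma, mu, r0):
    """Locate r_q, the smallest zero of h_q(t) = q(t) - 1 on (0, r_0)."""
    c = (1.0 + eta) * gamma * mu
    integral = quad(lambda xi: w(c * xi), 0.0, 1.0)[0]   # ∫_0^1 w(c ξ) dξ
    def h_q(t):
        denom = 1.0 - gamma * v(t)
        return ((1.0 + eta) * gamma * (integral + w(t))
                + eta * denom) / denom - 1.0
    # h_q(0) = eta - 1 < 0 and h_q -> +inf as t -> r0^-, so a sign change exists
    return brentq(h_q, 1e-12, r0 * (1.0 - 1e-9))

# Demonstration with linear w, v (hypothetical constants L = 2, K = 1.5):
L, K = 2.0, 1.5
eta, gamma, mu = 0.001, 1.0, 0.01
r0 = 1.0 / (gamma * K)              # r_0 = sup{t >= 0 : gamma v(t) < 1}
print(smallest_zero_of_h_q(lambda t: L * t, lambda t: K * t,
                           eta, gamma, mu, r0))
```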
Lemma 2.
Suppose that the equation
$$t\,\big(1 - q(t)\big) - \left[ (1 + \eta)\gamma\mu + (1 + \eta)\gamma \int_0^1 w\big((1 + \eta)\gamma\mu\,\xi\big)\, d\xi + \eta \right] = 0 \tag{11}$$
has zeros in the interval $(0, r_q)$. Denote by $r$ the smallest such zero. Then, the sequence $\{t_k\}$ generated by Equation (9) is nondecreasing, bounded from above by $r_q$ and converges to its unique least upper bound $r^*$, which satisfies
$$0 < r^* \le r < r_q. \tag{12}$$
Proof. 
Equation (11) can be written as
$$\frac{t_1 - t_0}{1 - q(r)} = r, \tag{13}$$
since, by Equation (9),
$$t_1 = (1 + \eta)\gamma\mu + (1 + \eta)\gamma \int_0^1 w\big((1 + \eta)\gamma\mu\,\tau\big)\, d\tau + \eta$$
and $r$ solves Equation (11). It follows from the definition of the sequence $\{t_k\}$, the functions $w_1, w_2, v_1, v_2$ and inequality (10) that
$$0 \le t_0 \le s_0^{(1)} \le \cdots \le s_0^{(m-1)} \le t_1 \le \cdots \le t_k \le s_k^{(1)} \le \cdots \le s_k^{(m-1)} \le t_{k+1} < r, \tag{14}$$
$$t_{k+2} - t_{k+1} \le q(r)\,(t_{k+1} - t_k) \le q(r)^{k+1}\,(t_1 - t_0), \tag{15}$$
and
$$t_{k+2} \le t_{k+1} + q(r)^{k+1}(t_1 - t_0) \le t_k + q(r)^k (t_1 - t_0) + q(r)^{k+1}(t_1 - t_0) \le \cdots \le t_1 + q(r)(t_1 - t_0) + \cdots + q(r)^{k+1}(t_1 - t_0) \le \frac{t_1 - t_0}{1 - q(r)}\big(1 - q(r)^{k+2}\big) < \frac{t_1 - t_0}{1 - q(r)} = r. \tag{16}$$
Therefore, the sequence $\{t_k\}$ converges to $r^*$, which satisfies inequality (12).  ☐
Next, we present the semilocal convergence analysis of the MMN-HSS method.
Theorem 1.
Suppose that the hypotheses (H) and the hypotheses of Lemma 2 hold. Define $\bar{r} = \min\{r_1^+, r^*\}$, where $r_1^+$ is defined in ([7], Theorem 2.1) and $r^*$ is given in Lemma 2. Let $u = \min\{m^*, l^*\}$, $m^* = \liminf_{k \to \infty} m_k$, $l^* = \liminf_{k \to \infty} l_k$. Moreover, suppose
$$u > \left\lceil \frac{\ln \eta}{\ln\big((\tau + 1)\theta\big)} \right\rceil,$$
where the symbol $\lceil \cdot \rceil$ denotes the smallest integer not less than the corresponding real number, $\tau \in \left(0, \frac{1 - \theta}{\theta}\right)$ and
$$\theta := \theta(\alpha; x_0) = \|T(\alpha; x_0)\| < 1.$$
Then, the sequence $\{x_k\}$ generated by the MMN-HSS method is well defined, remains in $U(x_0, \bar{r})$ for each $k = 0, 1, 2, \ldots$ and converges to a solution $x^*$ of the equation $F(x) = 0$.
Proof. 
Notice that we showed in ([8], Theorem 2.1) that for each $x \in U(x_0, \bar{r})$,
$$\|T(\alpha; x)\| \le (\tau + 1)\,\theta < 1.$$
The following statements shall be shown using mathematical induction:
$$\begin{aligned} &\|x_k - x_0\| \le t_k - t_0, \qquad \|F(x_k)\| \le \frac{1}{(1 + \eta)\gamma}\big(1 - \gamma v(t_k)\big)\big(s_k^{(1)} - t_k\big), \qquad \|x_k^{(1)} - x_k\| \le s_k^{(1)} - t_k, \\ &\|F(x_k^{(i)})\| \le \frac{1}{(1 + \eta)\gamma}\big(1 - \gamma v(t_k)\big)\big(s_k^{(i+1)} - s_k^{(i)}\big), \qquad \|x_k^{(i+1)} - x_k^{(i)}\| \le s_k^{(i+1)} - s_k^{(i)}, \quad i = 1, 2, \ldots, m - 2, \\ &\|F(x_k^{(m-1)})\| \le \frac{1}{(1 + \eta)\gamma}\big(1 - \gamma v(t_k)\big)\big(t_{k+1} - s_k^{(m-1)}\big), \qquad \|x_{k+1} - x_k^{(m-1)}\| \le t_{k+1} - s_k^{(m-1)}. \end{aligned} \tag{17}$$
We have for $k = 0$:
$$\|x_0 - x_0\| = 0 \le t_0 - t_0,$$
$$\|F(x_0)\| \le \mu = \frac{\big(1 - \gamma v(t_0)\big)\big(s_0^{(1)} - t_0\big)}{(1 + \eta)\gamma},$$
$$\|x_0^{(1)} - x_0\| \le \left\| \left(I - T(\alpha; x_0)^{\,l_0^{(1)}}\right) F'(x_0)^{-1} F(x_0) \right\| \le \big(1 + \theta^{\,l_0^{(1)}}\big)\gamma\mu < (1 + \eta)\gamma\mu = s_0^{(1)} - t_0.$$
Suppose the following items hold for each $i < m - 1$:
$$\|F(x_0^{(i)})\| \le \frac{1}{(1 + \eta)\gamma}\big(1 - \gamma v(t_0)\big)\big(s_0^{(i+1)} - s_0^{(i)}\big), \qquad \|x_0^{(i+1)} - x_0^{(i)}\| \le s_0^{(i+1)} - s_0^{(i)}, \quad i = 1, 2, \ldots, m - 2. \tag{18}$$
We shall prove that inequalities (18) hold for $m - 1$.
Using the (H) conditions, we get in turn that
$$\begin{aligned} \|F(x_0^{(m-1)})\| &\le \left\| F(x_0^{(m-1)}) - F(x_0^{(m-2)}) - F'(x_0)\big(x_0^{(m-1)} - x_0^{(m-2)}\big) \right\| + \left\| F(x_0^{(m-2)}) + F'(x_0)\big(x_0^{(m-1)} - x_0^{(m-2)}\big) \right\| \\ &\le \left\| F(x_0^{(m-1)}) - F(x_0^{(m-2)}) - F'(x_0^{(m-2)})\big(x_0^{(m-1)} - x_0^{(m-2)}\big) \right\| + \left\| \big(F'(x_0^{(m-2)}) - F'(x_0)\big)\big(x_0^{(m-1)} - x_0^{(m-2)}\big) \right\| + \eta\, \|F(x_0^{(m-2)})\| \\ &\le \int_0^1 w\big(\|x_0^{(m-1)} - x_0^{(m-2)}\|\,\xi\big)\, d\xi\, \|x_0^{(m-1)} - x_0^{(m-2)}\| + w\big(\|x_0^{(m-2)} - x_0\|\big)\, \|x_0^{(m-1)} - x_0^{(m-2)}\| + \eta\, \|F(x_0^{(m-2)})\|. \end{aligned} \tag{19}$$
Then, we also obtain that
$$\|x_0^{(m-1)} - x_0^{(m-2)}\| \le s_0^{(m-1)} - s_0^{(m-2)}, \tag{20}$$
$$\|x_0^{(m-2)} - x_0\| \le \|x_0^{(m-2)} - x_0^{(m-3)}\| + \cdots + \|x_0^{(1)} - x_0\| \le \big(s_0^{(m-2)} - s_0^{(m-3)}\big) + \cdots + \big(s_0^{(1)} - t_0\big) = s_0^{(m-2)} - t_0 = s_0^{(m-2)}$$
and
$$\|F(x_0^{(m-2)})\| \le \frac{1}{(1 + \eta)\gamma}\big(1 - \gamma v(t_0)\big)\big(s_0^{(m-1)} - s_0^{(m-2)}\big).$$
Hence, we get from inequality (19) that
$$\|F(x_0^{(m-1)})\| \le \left[ \int_0^1 w\big((s_0^{(m-1)} - s_0^{(m-2)})\xi\big)\, d\xi + w\big(s_0^{(m-2)} - t_0\big) \right]\big(s_0^{(m-1)} - s_0^{(m-2)}\big) + \frac{\eta\big(1 - \gamma v(t_0)\big)}{(1 + \eta)\gamma}\big(s_0^{(m-1)} - s_0^{(m-2)}\big) = \frac{1 - \gamma v(t_0)}{(1 + \eta)\gamma}\big(s_0^{(m)} - s_0^{(m-1)}\big).$$
Then, we have by Equation (9) that
$$\|x_1 - x_0^{(m-1)}\| \le \left\| \left(I - T(\alpha; x_0)^{\,l_0^{(m)}}\right) F'(x_0)^{-1} \right\| \|F(x_0^{(m-1)})\| \le \Big(1 + \big((\tau + 1)\theta\big)^{\,l_0^{(m)}}\Big)\, \frac{\gamma}{1 - \gamma v(t_0)} \cdot \frac{\big(1 - \gamma v(t_0)\big)\big(s_0^{(m)} - s_0^{(m-1)}\big)}{(1 + \eta)\gamma} \le t_1 - s_0^{(m-1)}$$
holds, and the items (17) hold for $k = 0$. Suppose that the items (17) hold for all nonnegative integers less than $k$. Next, we prove that the items (17) hold for $k$.
We get, in turn, by the induction hypotheses:
$$\|x_k - x_0\| \le \|x_k - x_{k-1}^{(m-1)}\| + \|x_{k-1}^{(m-1)} - x_{k-1}^{(m-2)}\| + \cdots + \|x_{k-1}^{(1)} - x_{k-1}^{(0)}\| + \|x_{k-1} - x_0\| \le \big(t_k - s_{k-1}^{(m-1)}\big) + \big(s_{k-1}^{(m-1)} - s_{k-1}^{(m-2)}\big) + \cdots + \big(s_{k-1}^{(1)} - t_{k-1}\big) + \big(t_{k-1} - t_0\big) = t_k - t_0 < r^* \le r.$$
In view of $x_{k-1}, x_{k-1}^{(1)}, \ldots, x_{k-1}^{(m-1)} \in U(x_0, r)$, we have
$$\begin{aligned} \|F(x_k)\| &\le \left\| F(x_k) - F(x_{k-1}^{(m-1)}) - F'(x_{k-1})\big(x_k - x_{k-1}^{(m-1)}\big) \right\| + \left\| F(x_{k-1}^{(m-1)}) + F'(x_{k-1})\big(x_k - x_{k-1}^{(m-1)}\big) \right\| \\ &\le \left\| F(x_k) - F(x_{k-1}^{(m-1)}) - F'(x_{k-1}^{(m-1)})\big(x_k - x_{k-1}^{(m-1)}\big) \right\| + \left\| \big(F'(x_{k-1}^{(m-1)}) - F'(x_{k-1})\big)\big(x_k - x_{k-1}^{(m-1)}\big) \right\| + \eta\, \|F(x_{k-1}^{(m-1)})\| \\ &\le \int_0^1 w\big(\|x_k - x_{k-1}^{(m-1)}\|\,\xi\big)\, d\xi\, \|x_k - x_{k-1}^{(m-1)}\| + w\big(\|x_{k-1}^{(m-1)} - x_{k-1}\|\big)\, \|x_k - x_{k-1}^{(m-1)}\| + \eta\, \frac{\big(1 - \gamma v(t_{k-1})\big)\big(t_k - s_{k-1}^{(m-1)}\big)}{(1 + \eta)\gamma} \\ &\le \frac{\big(1 - \gamma v(t_k)\big)\big(s_k^{(1)} - t_k\big)}{(1 + \eta)\gamma}. \end{aligned} \tag{21}$$
We also get that
$$\|x_k - x_{k-1}^{(m-1)}\| \le t_k - s_{k-1}^{(m-1)},$$
$$\|x_{k-1}^{(m-1)} - x_{k-1}\| \le \|x_{k-1}^{(m-1)} - x_{k-1}^{(m-2)}\| + \cdots + \|x_{k-1}^{(1)} - x_{k-1}\| \le \big(s_{k-1}^{(m-1)} - s_{k-1}^{(m-2)}\big) + \cdots + \big(s_{k-1}^{(1)} - t_{k-1}\big) \le s_{k-1}^{(m-1)} - t_{k-1}$$
and
$$\|F(x_{k-1}^{(m-1)})\| \le \frac{1}{(1 + \eta)\gamma}\big(1 - \gamma v(t_{k-1})\big)\big(t_k - s_{k-1}^{(m-1)}\big).$$
It follows that
$$\|x_k^{(1)} - x_k\| \le \left\| \left(I - T(\alpha; x_k)^{\,l_k^{(1)}}\right) F'(x_k)^{-1} \right\| \|F(x_k)\| \le \big(1 + \theta^{\,l_k^{(1)}}\big)\, \frac{\gamma}{1 - \gamma v(t_k)} \cdot \frac{\big(1 - \gamma v(t_k)\big)\big(s_k^{(1)} - t_k\big)}{(1 + \eta)\gamma} \le s_k^{(1)} - t_k.$$
Suppose that the following items hold for all positive integers less than $m - 1$:
$$\|F(x_k^{(i)})\| \le \frac{1}{(1 + \eta)\gamma}\big(1 - \gamma v(t_k)\big)\big(s_k^{(i+1)} - s_k^{(i)}\big), \qquad \|x_k^{(i+1)} - x_k^{(i)}\| \le s_k^{(i+1)} - s_k^{(i)}, \quad i = 1, 2, \ldots, m - 2. \tag{25}$$
We will prove that items (25) hold for $m - 1$. As in inequality (21), we have that
$$\begin{aligned} \|F(x_k^{(m-1)})\| &\le \left\| F(x_k^{(m-1)}) - F(x_k^{(m-2)}) - F'(x_k)\big(x_k^{(m-1)} - x_k^{(m-2)}\big) \right\| + \left\| F(x_k^{(m-2)}) + F'(x_k)\big(x_k^{(m-1)} - x_k^{(m-2)}\big) \right\| \\ &\le \left\| F(x_k^{(m-1)}) - F(x_k^{(m-2)}) - F'(x_k^{(m-2)})\big(x_k^{(m-1)} - x_k^{(m-2)}\big) \right\| + \left\| \big(F'(x_k^{(m-2)}) - F'(x_k)\big)\big(x_k^{(m-1)} - x_k^{(m-2)}\big) \right\| + \eta\, \|F(x_k^{(m-2)})\| \\ &\le \frac{1}{(1 + \eta)\gamma}\big(1 - \gamma v(t_k)\big)\big(s_k^{(m)} - s_k^{(m-1)}\big). \end{aligned} \tag{26}$$
We also get that
$$\|x_k^{(m-1)} - x_k^{(m-2)}\| \le s_k^{(m-1)} - s_k^{(m-2)},$$
$$\|x_k^{(m-2)} - x_k\| \le \|x_k^{(m-2)} - x_k^{(m-3)}\| + \cdots + \|x_k^{(1)} - x_k\| \le \big(s_k^{(m-2)} - s_k^{(m-3)}\big) + \cdots + \big(s_k^{(1)} - t_k\big) \le s_k^{(m-2)} - t_k,$$
and
$$\|F(x_k^{(m-2)})\| \le \frac{1}{(1 + \eta)\gamma}\big(1 - \gamma v(t_k)\big)\big(s_k^{(m-1)} - s_k^{(m-2)}\big).$$
Therefore,
$$\|x_{k+1} - x_k^{(m-1)}\| \le \left\| \left(I - T(\alpha; x_k)^{\,l_k^{(m)}}\right) F'(x_k)^{-1} \right\| \|F(x_k^{(m-1)})\| \le \big(1 + \theta^{\,l_k^{(m)}}\big)\, \frac{\gamma}{1 - \gamma v(t_k)} \cdot \frac{\big(1 - \gamma v(t_k)\big)\big(s_k^{(m)} - s_k^{(m-1)}\big)}{(1 + \eta)\gamma} \le t_{k+1} - s_k^{(m-1)}$$
holds. The induction for items (17) is completed. The sequences $\{t_k\}, \{s_k^{(1)}\}, \ldots, \{s_k^{(m-1)}\}$ converge to $r^*$, and
$$\|x_{k+1} - x_0\| \le \|x_{k+1} - x_k^{(m-1)}\| + \|x_k^{(m-1)} - x_k^{(m-2)}\| + \cdots + \|x_k^{(1)} - x_k^{(0)}\| + \|x_k - x_0\| \le \big(t_{k+1} - s_k^{(m-1)}\big) + \big(s_k^{(m-1)} - s_k^{(m-2)}\big) + \cdots + \big(s_k^{(1)} - t_k\big) + \big(t_k - t_0\big) = t_{k+1} - t_0 < r^* \le r.$$
Hence, the sequence $\{x_k\}$ is Cauchy, and as such it converges to some $x^* \in \bar{U}(x_0, r)$. By letting $k \to \infty$ in inequality (21), we get that
$$F(x^*) = 0. \qquad ☐$$
Remark 1.
Let us specialize the functions $w_1, w_2, v_1, v_2$ as $w_1(t) = L_1 t$, $w_2(t) = L_2 t$, $v_1(t) = K_1 t$, $v_2(t) = K_2 t$ for some positive constants $K_1, K_2, L_1, L_2$, and set $L = L_1 + L_2$, $K = K_1 + K_2$. Suppose that $D_0 = D$. Then, notice that
$$K \le L, \tag{34}$$
since
$$K_1 \le L_1 \tag{35}$$
and
$$K_2 \le L_2, \tag{36}$$
and that
$$\beta_1 \le \beta \quad \text{and} \quad \beta_2 \le \beta, \tag{37}$$
where $\beta := \max\{\|H(x_0)\|, \|S(x_0)\|\}$.
Notice that in [19], $K_1 = L_1$, $K_2 = L_2$ and $\beta = \beta_1 = \beta_2$. Therefore, if strict inequality holds in any of items (34), (35), (36) or (37), the present results improve the ones in [19] (see also the numerical examples).
Remark 2.
The set $D_0$ in $(H_3)$ can be replaced by $D_1 = D \cap U\big(x_1, r_0 - \|x_1 - x_0\|\big)$, leading to even smaller “w” and “v” functions, since $D_1 \subseteq D_0$.

3. Numerical Examples

Example 1.
Suppose that the motion of an object in three dimensions is governed by the system of differential equations
$$f_1'(x) - f_1(x) - 1 = 0, \qquad f_2'(y) - (e - 1)y - 1 = 0, \qquad f_3'(z) - 1 = 0,$$
with $x, y, z \in D$ for $f_1(0) = f_2(0) = f_3(0) = 0$. Then, the solution of the system is given for $v = (x, y, z)^T$ by the function $F := (f_1, f_2, f_3) : D \to \mathbb{R}^3$ defined by
$$F(v) = \left( e^x - 1,\; \frac{e - 1}{2}\, y^2 + y,\; z \right)^T.$$
Then, the Fréchet-derivative is given by
$$F'(v) = \begin{pmatrix} e^x & 0 & 0 \\ 0 & (e - 1)y + 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
Then, we have that $x^* = (0, 0, 0)^T$, $w(t) = w_1(t) + w_2(t)$, $v(t) = v_1(t) + v_2(t)$, $w_1(t) = L_1 t$, $w_2(t) = L_2 t$, $v_1(t) = K_1 t$, $v_2(t) = K_2 t$, where $L_1 = e - 1$, $L_2 = e$, $K_1 = e - 2$, $K_2 = e$, $\eta = 0.001$, $\gamma = 1$ and $\mu = 0.01$.
After solving the equation $h_q(t) = 0$, we obtain the root $r_q = 0.124067$. Similarly, the roots of Equation (11) are $0.0452196$ and $0.0933513$. So,
$$r = \min\{0.0452196,\; 0.0933513\} = 0.0452196.$$
Therefore,
$$r = 0.0452196 < r_q = 0.124067.$$
In addition, we have that
$$r^* = 0.0452196$$
and (see [7])
$$r_1^+ = 0.020274.$$
So,
$$\bar{r} = \min\{r_1^+, r^*\} = \min\{0.020274,\; 0.0452196\} = 0.020274.$$
It follows that the sequence $\{t_k\}$ converges to $r^*$, the sequence $\{x_k\}$ is Cauchy in $D$, and as such $\{x_k\}$ converges to $x^* \in U(x_0, \bar{r}) = U(0, 0.020274)$.
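As a cross-check, the radii reported above can be recomputed directly from the definitions of $q$, $h_q$ and Equation (11). The following sketch is our own illustration, written under the reconstruction of Equation (11) stated in Lemma 2; it recovers $r_q \approx 0.1240$ and the two roots $\approx 0.0452$ and $\approx 0.0933$ up to small rounding differences.

```python
import numpy as np
from scipy.optimize import brentq

e = np.e
L = (e - 1) + e                     # L = L1 + L2, so w(t) = L t
K = (e - 2) + e                     # K = K1 + K2, so v(t) = K t
eta, gamma, mu = 0.001, 1.0, 0.01

c = (1 + eta) * gamma * mu
integral = L * c / 2                # ∫_0^1 w(c ξ) dξ for w(t) = L t

def q(t):
    denom = 1 - gamma * K * t
    return ((1 + eta) * gamma * (integral + L * t) + eta * denom) / denom

r0 = 1 / (gamma * K)                # r_0 = sup{t >= 0 : gamma v(t) < 1}
r_q = brentq(lambda t: q(t) - 1.0, 1e-12, r0 * (1 - 1e-12))

def eq11(t):                        # left-hand side of Equation (11)
    return t * (1 - q(t)) - (c + (1 + eta) * gamma * integral + eta)

roots = [brentq(eq11, a, b) for (a, b) in [(1e-6, 0.07), (0.07, r_q)]]
print(r_q, roots)                   # ≈ 0.1240 and [≈ 0.0452, ≈ 0.0933]
```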
Example 2.
Consider the system of nonlinear equations $F(X) = 0$, wherein $F = (F_1, \ldots, F_n)^T$ and $X = (x_1, x_2, \ldots, x_n)^T$, with
$$F_i(X) = (3 - 2x_i)\, x_i^{3/2} - x_{i-1} - 2x_{i+1} + 1, \quad i = 1, 2, \ldots, n,$$
where $x_0 = x_{n+1} = 0$ by convention. This system has a complex solution; therefore, we consider the complex initial guess $X_0 = (i, i, \ldots, i)^T$. The derivative $F'(X)$ is given by
$$F'(X) = \begin{pmatrix} \frac{3}{2}(3 - 2x_1)\sqrt{x_1} - 2x_1^{3/2} & -2 & & \\ -1 & \frac{3}{2}(3 - 2x_2)\sqrt{x_2} - 2x_2^{3/2} & -2 & \\ & \ddots & \ddots & \ddots \\ & & -1 & \frac{3}{2}(3 - 2x_n)\sqrt{x_n} - 2x_n^{3/2} \end{pmatrix}.$$
It is clear that $F'(X)$ is sparse and positive definite. Now, we solve this nonlinear problem by the Newton-HSS method (N-HSS) (see [10]), the modified Newton-HSS method (MN-HSS) (see [22]), and the three-step (3MN-HSS) and four-step (4MN-HSS) modified Newton-HSS methods. The methods are compared in terms of error estimates, CPU time and number of iterations. We use experimentally optimal values of the parameter $\alpha$ for each method and for the problem dimensions $n = 100, 200, 500, 1000$; see Table 1. The numerical results are displayed in Table 2. From the numerical results, we observe that MN-HSS outperforms N-HSS in terms of both CPU time and number of iterations. Note that, in this example, the results in [19] cannot be applied, since the operators involved are not Lipschitz. However, our results can be applied by choosing the “w” and “v” functions appropriately, as in Example 1. We leave these details to the interested reader.
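For readers who wish to experiment, a self-contained sketch of this test problem follows. It is our own illustration, not the code used to produce Table 2; it builds $F$ and the tridiagonal Jacobian $F'(X)$ with the complex initial guess, and it can be combined with the mmn_hss_step sketch from Section 1.

```python
import numpy as np

def F(x):
    """F_i(X) = (3 - 2 x_i) x_i^{3/2} - x_{i-1} - 2 x_{i+1} + 1, with x_0 = x_{n+1} = 0."""
    xm = np.concatenate(([0.0], x[:-1]))     # x_{i-1}
    xp = np.concatenate((x[1:], [0.0]))      # x_{i+1}
    return (3.0 - 2.0 * x) * x**1.5 - xm - 2.0 * xp + 1.0

def Fprime(x):
    """Tridiagonal Jacobian of F."""
    n = len(x)
    J = np.zeros((n, n), dtype=complex)
    idx = np.arange(n)
    J[idx, idx] = 1.5 * (3.0 - 2.0 * x) * x**0.5 - 2.0 * x**1.5   # diagonal
    J[idx[1:], idx[:-1]] = -1.0                                   # subdiagonal
    J[idx[:-1], idx[1:]] = -2.0                                   # superdiagonal
    return J

n = 100
X0 = 1j * np.ones(n)                 # complex initial guess (i, ..., i)^T
print(np.linalg.norm(F(X0)))         # residual norm at the starting point
```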

Acknowledgments

We would like to express our gratitude to the anonymous reviewers for their help with the publication of this paper.

Author Contributions

The contribution of all the authors has been equal. All of them worked together to develop the present manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Amat, S.; Busquier, S.; Plaza, S.; Gutiérrez, J.M. Geometric constructions of iterative functions to solve nonlinear equations. J. Comput. Appl. Math. 2003, 157, 197–205. [Google Scholar] [CrossRef]
  2. Amat, S.; Busquier, S.; Plaza, S. Dynamics of the King and Jarratt iterations. Aequ. Math. 2005, 69, 212–223. [Google Scholar] [CrossRef]
  3. Amat, S.; Hernández, M.A.; Romero, N. A modified Chebyshev’s iterative method with at least sixth order of convergence. Appl. Math. Comput. 2008, 206, 164–174. [Google Scholar] [CrossRef]
  4. Argyros, I.K. Convergence and Applications of Newton-Type Iterations; Springer-Verlag: New York, NY, USA, 2008. [Google Scholar]
  5. Argyros, I.K.; Hilout, S. Computational Methods in Nonlinear Analysis; World Scientific Publishing Company: Hackensack, NJ, USA, 2013. [Google Scholar]
  6. Argyros, I.K.; Magreñán, Á.A. Ball convergence theorems and the convergence planes of an iterative methods for nonlinear equations. SeMA 2015, 71, 39–55. [Google Scholar]
  7. Argyros, I.K.; Sharma, J.R.; Kumar, D. Local convergence of Newton-HSS methods with positive definite Jacobian matrices under generalized conditions. SeMA 2017. [Google Scholar] [CrossRef]
  8. Argyros, I.K.; Sharma, J.R.; Kumar, D. Extending the applicability of modified Newton-HSS method for solving systems of nonlinear equations. Stud. Math. 2017, submitted. [Google Scholar]
  9. Axelsson, O. Iterative Solution Methods; Cambridge University Press: Cambridge, UK, 1994. [Google Scholar]
  10. Bai, Z.-Z.; Guo, X.P. The Newton-HSS methods for systems of nonlinear equations with positive definite Jacobian matrices. J. Comput. Math. 2010, 28, 235–260. [Google Scholar]
  11. Bai, Z.-Z.; Golub, G.H.; Pan, J.Y. Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems. Numer. Math. 2004, 98, 1–32. [Google Scholar] [CrossRef]
  12. Bai, Z.-Z.; Golub, G.H.; Ng, M.K. Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems. SIAM J. Matrix Anal. Appl. 2003, 24, 603–626. [Google Scholar] [CrossRef]
  13. Cordero, A.; Ezquerro, J.A.; Hernández-Veron, M.A.; Torregrosa, J.R. On the local convergence of a fifth-order iterative method in Banach spaces. Appl. Math. Comput. 2015, 251, 396–403. [Google Scholar] [CrossRef]
  14. Dembo, R.S.; Eisenstat, S.C.; Steihaug, T. Inexact Newton methods. SIAM J. Numer. Anal. 1982, 19, 400–408. [Google Scholar] [CrossRef]
  15. Ezquerro, J.A.; Hernández, M.A. New iterations of R-order four with reduced computational cost. BIT Numer. Math. 2009, 49, 325–342. [Google Scholar] [CrossRef]
  16. Gutiérrez, J.M.; Hernández, M.A. Recurrence relations for the super Halley Method. Comput. Math. Appl. 1998, 36, 1–8. [Google Scholar] [CrossRef]
  17. Guo, X.-P.; Duff, I.S. Semilocal and global convergence of Newton-HSS method for systems of nonlinear equations. Numer. Linear Algebra Appl. 2011, 18, 299–315. [Google Scholar] [CrossRef]
  18. Hernández, M.A.; Martínez, E. On the semilocal convergence of a three steps Newton-type process under mild convergence conditions. Numer. Algorithms 2015, 70, 377–392. [Google Scholar] [CrossRef]
  19. Li, Y.; Guo, X.-P. Multi-step modified Newton-HSS methods for systems of nonlinear equations with positive definite Jacobian Matrices. Numer. Algorithms 2017, 75, 55–80. [Google Scholar] [CrossRef]
  20. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  21. Shen, W.-P.; Li, C. Convergence criterion of inexact methods for operators with Hölder continuous derivatives. Taiwanese J. Math. 2008, 12, 1865–1882. [Google Scholar]
  22. Wu, Q.-B.; Chen, M.-H. Convergence analysis of modified Newton-HSS method for solving systems of nonlinear equations. Numer. Algorithms 2013, 64, 659–683. [Google Scholar] [CrossRef]
Table 1. Optimal values of α for the N-HSS, MN-HSS and MMN-HSS methods.

n        | 100 | 200 | 500 | 1000
N-HSS    | 4.1 | 4.1 | 4.2 | 4.1
MN-HSS   | 4.4 | 4.4 | 4.3 | 4.3
MMN-HSS  | 4.4 | 4.4 | 4.3 | 4.3
Table 2. Numerical results.

n    | Method  | Error Estimates | CPU-Time | Iterations
100  | N-HSS   | 3.98 × 10⁻⁶     | 1.744    | 5
     | MN-HSS  | 4.16 × 10⁻⁸     | 1.485    | 4
     | 3MN-HSS | 8.28 × 10⁻⁵     | 1.281    | 3
     | 4MN-HSS | 1.12 × 10⁻⁶     | 1.327    | 3
200  | N-HSS   | 3.83 × 10⁻⁶     | 6.162    | 5
     | MN-HSS  | 5.46 × 10⁻⁸     | 4.450    | 4
     | 3MN-HSS | 7.53 × 10⁻⁵     | 4.287    | 3
     | 4MN-HSS | 9.05 × 10⁻⁷     | 4.108    | 3
500  | N-HSS   | 4.65 × 10⁻⁶     | 32.594   | 5
     | MN-HSS  | 4.94 × 10⁻⁸     | 24.968   | 4
     | 3MN-HSS | 7.69 × 10⁻⁵     | 21.250   | 3
     | 4MN-HSS | 9.62 × 10⁻⁷     | 20.406   | 3
1000 | N-HSS   | 4.29 × 10⁻⁶     | 119.937  | 5
     | MN-HSS  | 5.32 × 10⁻⁸     | 98.203   | 4
     | 3MN-HSS | 9.16 × 10⁻⁵     | 89.018   | 3
     | 4MN-HSS | 8.94 × 10⁻⁷     | 91.000   | 3
