
Extended Convergence Analysis of the Newton–Hermitian and Skew–Hermitian Splitting Method

by
Ioannis K Argyros
1,*,
Santhosh George
2,
Chandhini Godavarma
2 and
Alberto A Magreñán
3
1
Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
2
Department of Mathematical and Computational Sciences, National Institute of Technology, Karnataka 575 025, India
3
Departamento de Matemáticas y Computación, Universidad de la Rioja, 26006 Logroño, Spain
*
Author to whom correspondence should be addressed.
Symmetry 2019, 11(8), 981; https://doi.org/10.3390/sym11080981
Submission received: 24 June 2019 / Revised: 17 July 2019 / Accepted: 25 July 2019 / Published: 2 August 2019
(This article belongs to the Special Issue Symmetry with Operator Theory and Equations)

Abstract
Many problems in diverse disciplines, such as applied mathematics, mathematical biology, chemistry, economics and engineering, reduce to solving a nonlinear equation or a system of nonlinear equations. Various iterative methods are then considered to generate a sequence of approximations converging to a solution of such problems. The goal of this article is two-fold: On the one hand, we present a correct convergence criterion for the Newton–Hermitian and skew-Hermitian splitting (NHSS) method under the Kantorovich theory, since the criterion given in Numer. Linear Algebra Appl. 2011, 18, 299–315 is not correct. Indeed, the radius of convergence cannot be defined under the given criterion, since the discriminant of the quadratic polynomial from which this radius is derived is negative (see Remark 1 and the conclusions of the present article for more details). On the other hand, we extend the corrected convergence criterion using our idea of recurrent functions. Numerical examples involving convection–diffusion equations further validate the theoretical results.

1. Introduction

Numerous problems in computational disciplines can be reduced, using mathematical modelling [1,2,3,4,5,6,7,8,9,10,11], to solving a system of n nonlinear equations in n variables,
F(x) = 0.
Here, F is a continuously differentiable nonlinear mapping defined on a convex subset Ω of the n-dimensional complex linear space C^n into C^n. In general, the corresponding Jacobian matrix F′(x) is sparse, non-symmetric and positive definite. The solution methods for the nonlinear problem F(x) = 0 are iterative in nature, since an exact solution x* can be obtained only in a few special cases. In the rest of the article, some well-established standard results and notations are used to establish our results (see [3,4,5,6,10,11,12,13,14] and the references therein). Undoubtedly, among the best known methods for generating a sequence approximating x* are the inexact Newton (IN) methods [1,2,3,5,6,7,8,9,10,11,12,13,14]. The IN algorithm involves the following steps:
Algorithm IN [6]
  • Step 1: Choose an initial guess x_0 and a tolerance tol; set k = 0.
  • Step 2: While ‖F(x_k)‖ > tol · ‖F(x_0)‖, do
    • Choose η_k ∈ [0, 1). Find d_k so that ‖F(x_k) + F′(x_k)d_k‖ ≤ η_k ‖F(x_k)‖.
    • Set x_{k+1} = x_k + d_k; k = k + 1.
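To make the loop structure of Algorithm IN concrete, here is a minimal numerical sketch (an illustration of ours, not the authors' code). For simplicity, the inner solve is exact, which corresponds to η_k = 0; any inner method achieving ‖F(x_k) + F′(x_k)d_k‖ ≤ η_k‖F(x_k)‖ could be substituted.

```python
import numpy as np

def inexact_newton(F, J, x0, tol=1e-10, max_outer=50):
    """Sketch of Algorithm IN: Newton-type corrections until the residual
    norm drops below tol times the initial residual norm."""
    x = np.asarray(x0, dtype=float)
    nF0 = np.linalg.norm(F(x))
    for _ in range(max_outer):
        if np.linalg.norm(F(x)) <= tol * nF0:
            break
        # An exact solve stands in for any inner method achieving
        # ||F(x_k) + F'(x_k) d_k|| <= eta_k ||F(x_k)||.
        d = np.linalg.solve(J(x), -F(x))
        x = x + d
    return x

# Scalar equation F(x) = x^3 - lambda (cf. Example 2), written as a 1x1 system:
lam = 0.8
F = lambda x: np.array([x[0]**3 - lam])
J = lambda x: np.array([[3.0 * x[0]**2]])
root = inexact_newton(F, J, [1.0])
```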
Furthermore, if A is sparse, non-Hermitian and positive definite, the Hermitian and skew-Hermitian splitting (HSS) algorithm [4] for solving the linear system Ax = b is given by:
Algorithm HSS [4]
  • Step 1: Choose an initial guess x_0, a tolerance tol and α > 0; set l = 0.
  • Step 2: Set H = (1/2)(A + A*) and S = (1/2)(A − A*), where H and S are the Hermitian and skew-Hermitian parts of A, respectively.
  • Step 3: While ‖b − Ax_l‖ > tol · ‖b − Ax_0‖, do
    • Solve (αI + H)x_{l+1/2} = (αI − S)x_l + b
    • Solve (αI + S)x_{l+1} = (αI − H)x_{l+1/2} + b
    • Set l = l + 1.
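The two half-step solves above can be sketched with dense NumPy matrices; the 2 × 2 test matrix, α and tolerances below are illustrative choices of ours. For a positive definite A (i.e., a matrix whose Hermitian part is positive definite), the iteration converges for any α > 0 [4].

```python
import numpy as np

def hss(A, b, alpha, tol=1e-12, max_iter=1000):
    """Sketch of Algorithm HSS: alternate the shifted Hermitian and
    shifted skew-Hermitian half-step solves until the residual is small."""
    H = 0.5 * (A + A.conj().T)          # Hermitian part of A
    S = 0.5 * (A - A.conj().T)          # skew-Hermitian part of A
    I = np.eye(A.shape[0])
    x = np.zeros(A.shape[0])
    r0 = np.linalg.norm(b - A @ x)
    for _ in range(max_iter):
        if np.linalg.norm(b - A @ x) <= tol * r0:
            break
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
    return x

A = np.array([[3.0, 1.0], [-1.0, 2.0]])   # non-symmetric, positive definite
b = np.array([1.0, 1.0])
x_sol = hss(A, b, alpha=2.0)
```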
The Newton–HSS algorithm [5] combines the IN and HSS methods appropriately for the solution of large nonlinear systems of equations with positive definite Jacobian matrices. The algorithm is as follows:
Algorithm NHSS (The Newton–HSS method [5])
  • Step 1: Choose an initial guess x_0, positive constants α and tol; set k = 0.
  • Step 2: While ‖F(x_k)‖ > tol · ‖F(x_0)‖
    - Compute the Jacobian J_k = F′(x_k).
    - Set
      H_k(x_k) = (1/2)(J_k + J_k*) and S_k(x_k) = (1/2)(J_k − J_k*),
      where H_k and S_k are the Hermitian and skew-Hermitian parts of J_k, respectively.
    - Set d_{k,0} = 0; l = 0.
    - While ‖F(x_k) + J_k d_{k,l}‖ > η_k ‖F(x_k)‖ (η_k ∈ [0, 1)), do
      {
      • Solve sequentially:
        (αI + H_k) d_{k,l+1/2} = (αI − S_k) d_{k,l} − F(x_k)
        (αI + S_k) d_{k,l+1} = (αI − H_k) d_{k,l+1/2} − F(x_k)
      • Set l = l + 1
      }
    - Set x_{k+1} = x_k + d_{k,l}; k = k + 1.
    - Compute J_k, H_k and S_k for the new x_k.
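The two nested loops can be combined into a short dense-matrix sketch (a hypothetical illustration; the cubic test system, α, forcing term and tolerances are our own choices). The inner right-hand side is taken as −F(x_k), i.e., the HSS iteration is applied to the Newton correction equation J_k d = −F(x_k).

```python
import numpy as np

def newton_hss(F, J, x0, alpha, eta=0.1, tol=1e-8, max_outer=50, max_inner=200):
    """Sketch of Algorithm NHSS: each Newton correction J_k d = -F(x_k) is
    solved approximately by the inner HSS iteration, to forcing tolerance eta."""
    x = np.asarray(x0, dtype=float)
    nF0 = np.linalg.norm(F(x))
    for _ in range(max_outer):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol * nF0:
            break
        Jx = J(x)
        H = 0.5 * (Jx + Jx.conj().T)    # Hermitian part of the Jacobian
        S = 0.5 * (Jx - Jx.conj().T)    # skew-Hermitian part
        I = np.eye(len(x))
        d = np.zeros_like(x)
        for _ in range(max_inner):      # inner HSS loop on J_k d = -F(x_k)
            if np.linalg.norm(Fx + Jx @ d) <= eta * np.linalg.norm(Fx):
                break
            d_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ d - Fx)
            d = np.linalg.solve(alpha * I + S, (alpha * I - H) @ d_half - Fx)
        x = x + d
    return x

# Mildly nonlinear 2D test system with positive definite Jacobian:
A = np.array([[3.0, 1.0], [-1.0, 2.0]])
rhs = np.array([4.5, 0.125])            # chosen so that x* = (1, 0.5)
F = lambda x: A @ x + x**3 - rhs
J = lambda x: A + np.diag(3.0 * x**2)
sol = newton_hss(F, J, np.zeros(2), alpha=2.0)
```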
Please note that η_k varies in each iterative step, unlike the fixed positive constant used in [5]. Further, observe that if d_{k,ℓ_k} in (6) is expressed in terms of d_{k,0}, we get
d_{k,ℓ_k} = −(I − T_k^{ℓ_k})(I − T_k)^{−1} B_k^{−1} F(x_k),
where T_k := T(α; x_k), B_k := B(α; x_k) and
T(α, x) = B(α, x)^{−1} C(α, x),
B(α, x) = (1/(2α))(αI + H(x))(αI + S(x)),
C(α, x) = (1/(2α))(αI − H(x))(αI − S(x)).
Using the above expressions for T_k and d_{k,ℓ_k}, and since B(α, x) − C(α, x) = F′(x), we can write the Newton–HSS iteration in (6) as
x_{k+1} = x_k − (I − T_k^{ℓ_k}) F′(x_k)^{−1} F(x_k).
A Kantorovich-type semi-local convergence analysis for NHSS was presented in [7]. However, there are shortcomings:
(i)
The semi-local sufficient convergence criterion (15) provided in [7] is false. The details are given in Remark 1. Accordingly, Theorem 3.2 in [7], as well as all the following results based on (15) in [7], are inaccurate. Further, the upper bound function g_3 (to be defined later) on the norm of the initial point is not the best that can be used under the conditions given in [7].
(ii)
The convergence domain of NHSS is small in general, even if we use the corrected sufficient convergence criterion (12). That is why, using our technique of recurrent functions, we present a new semi-local convergence criterion for NHSS, which improves the corrected criterion (12) (see also Section 3 and Section 4, Example 4.4).
(iii)
Example 4.5, taken from [7], shows that convergence can be attained even when these criteria are not satisfied (or not checked), since the criteria presented here are sufficient but not necessary.
Moreover, to avoid repetition, we refer the reader to [3,4,5,6,7,8,9,10,11,13,14] and the references therein for the importance of these methods for solving large systems of equations.
The rest of the note is organized as follows. Section 2 contains the semi-local convergence analysis of NHSS under the Kantorovich theory. In Section 3, we present the semi-local convergence analysis using our idea of recurrent functions. Numerical examples are discussed in Section 4. The article ends with a few concluding remarks.

2. Semi-Local Convergence Analysis

To make the paper as self-contained as possible, we present some results from [3] (see also [7]). The semi-local convergence of NHSS is based on the conditions (A). Let x_0 ∈ C^n and let F : Ω ⊂ C^n → C^n be G-differentiable on an open neighborhood Ω_0 ⊂ Ω on which F′(x) is continuous and positive definite. Suppose F′(x) = H(x) + S(x), where H(x) and S(x) are as in (2) with x_k = x.
(𝒜1)
There exist positive constants β , γ and δ such that
max{‖H(x_0)‖, ‖S(x_0)‖} ≤ β, ‖F′(x_0)^{−1}‖ ≤ γ, ‖F(x_0)‖ ≤ δ,
(𝒜2)
There exist nonnegative constants L_h and L_s such that for all x, y ∈ U(x_0, r) ⊆ Ω_0,
‖H(x) − H(y)‖ ≤ L_h‖x − y‖, ‖S(x) − S(y)‖ ≤ L_s‖x − y‖.
Next, we present the corrected version of Theorem 3.2 in [7].
Theorem 1.
Assume that conditions (A) hold with the constants satisfying
δγ²L ≤ ḡ_3(η),
where ḡ_3(t) := (1 − t)²/(2(2 + t + 2t² − t³)), η = max_k{η_k} < 1 and r = max{r_1, r_2} with
r_1 = ((α + β)/L)(√(1 + 2ατθ/((2γ + γτθ)(α + β)²)) − 1),
r_2 = (b − √(b² − 2ac))/a,
a = γL(1 + η)/(1 + 2γ²δLη), b = 1 − η, c = 2γδ,
and with ℓ* = lim inf_{k→∞} ℓ_k satisfying ℓ* > ⌊ln η/ln((τ + 1)θ)⌋ (here ⌊·⌋ denotes the largest integer less than or equal to the corresponding real number), τ ∈ (0, (1 − θ)/θ) and
θ ≡ θ(α, x_0) = ‖T(α; x_0)‖ < 1.
Then, the iteration sequence {x_k}_{k=0}^∞ generated by Algorithm NHSS is well defined and converges to x*, with F(x*) = 0.
Proof. 
We simply follow the proof of Theorem 3.2 in [7], but use the correct function ḡ_3 instead of the incorrect function g_3 defined in the following remark. □
Remark 1.
The corresponding result in [7] used the bound function
g_3(t) = (1 − t)/(2(1 + t²))
instead of ḡ_3 in (12) (see the bottom of the first page of the proof of Theorem 3.2 in [7]); i.e., the inequality considered there is
δγ²L ≤ g_3(η).
However, condition (16) does not necessarily imply b² − 2ac ≥ 0, which means that r_2 does not necessarily exist (see (13), where b² − 2ac ≥ 0 is needed), and the proof of Theorem 3.2 in [7] breaks down. As an example, choose η = 1/2; then g_3(1/2) = 1/5, ḡ_3(1/2) = 1/23, and for ḡ_3(1/2) < δγ²L < g_3(1/2) we have b² − 2ac < 0. Notice that our condition (12) is equivalent to b² − 2ac ≥ 0. Hence, our version of Theorem 3.2 is correct. Notice also that
ḡ_3(t) < g_3(t) for each t ≥ 0,
so (12) implies (16) but not necessarily vice versa.

3. Semi-Local Convergence Analysis II

We need to define some parameters and a sequence that are used in the semi-local convergence analysis of NHSS via recurrent functions.
Let β, γ, δ, L_0, L be positive constants and η ∈ [0, 1). Since L_0 ≤ L (see (43)), there exists μ ≥ 1 such that L = μL_0. Set c = 2γδ. Define the parameters p, q, η_0 and ξ by
p = (1 + η)μγL_0/2, q = (−p + √(p² + 4γL_0 p))/(2γL_0),
η_0 = √(μ/(μ + 2))
and
ξ = (μ/2) min{2(q − η)/((1 + η)μ + 2q), ((1 + η)q − η − q²)/((1 + η)q − η)}.
Moreover, define the scalar sequence {s_k} by
s_0 = 0, s_1 = c = 2γδ, and for each k = 1, 2, …,
s_{k+1} = s_k + [p(s_k − s_{k−1}) + η(1 − γL_0 s_{k−1})](s_k − s_{k−1})/(1 − γL_0 s_k).
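The recursion (21) is straightforward to iterate numerically. The following sketch (with the constants of Example 2 below used as illustrative data) computes p, q and the limit of {s_k}, and checks the closed-form upper bound s** = c/(1 − q) of Lemma 1.

```python
import math

# Illustrative constants (those of Example 2 below):
gamma, delta, L0, L, eta = 1/3, 0.2, 6.6, 7.2, 0.16
mu = L / L0                               # L = mu * L0
c = 2 * gamma * delta

# Parameters (18):
p = (1 + eta) * mu * gamma * L0 / 2
q = (-p + math.sqrt(p**2 + 4 * gamma * L0 * p)) / (2 * gamma * L0)

# Iterate the majorizing sequence (21):
s_prev, s = 0.0, c
for _ in range(60):
    step = (p * (s - s_prev) + eta * (1 - gamma * L0 * s_prev)) \
           * (s - s_prev) / (1 - gamma * L0 * s)
    s_prev, s = s, s + step

s_star_star = c / (1 - q)                 # closed-form bound (24)
print(p, q, s, s_star_star)
```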
We need to show the following auxiliary result of majorizing sequences for NHSS using the aforementioned notation.
Lemma 1.
Let β, γ, δ, L_0, L be positive constants and η ∈ [0, 1). Suppose that
γ²Lδ ≤ ξ
and
η ≤ η_0,
where η_0 and ξ are given by (19) and (20), respectively. Then, the sequence {s_k} defined in (21) is nondecreasing, bounded from above by
s** = c/(1 − q)
and converges to its unique least upper bound s*, which satisfies
c ≤ s* ≤ s**.
Proof. 
Notice that by (18)–(23), q ∈ (0, 1), q > η, η_0 ∈ [√3/3, 1), c > 0, (1 + η)q − η > 0, (1 + η)q − η − q² > 0 and ξ > 0. We shall show, using induction on k, that
0 < s_{k+1} − s_k ≤ q(s_k − s_{k−1}),
or, equivalently by (21),
0 ≤ [p(s_k − s_{k−1}) + η(1 − γL_0 s_{k−1})]/(1 − γL_0 s_k) ≤ q.
Estimate (27) holds true for k = 1 by the initial data, since it reduces to showing δγ²L ≤ μ(q − η)/((1 + η)μ + 2q), which is true by (20). Then, by (21) and (27), we have
0 < s_2 − s_1 ≤ q(s_1 − s_0), γL_0 s_1 < 1
and
s_2 ≤ s_1 + q(s_1 − s_0) = ((1 − q²)/(1 − q))(s_1 − s_0) < (s_1 − s_0)/(1 − q) = s**.
Suppose that (26),
γL_0 s_k < 1
and
s_{k+1} ≤ ((1 − q^{k+1})/(1 − q))(s_1 − s_0) < s**
hold true. Next, we shall show that they remain true with k replaced by k + 1. It suffices to show that
0 ≤ [p(s_{k+1} − s_k) + η(1 − γL_0 s_k)]/(1 − γL_0 s_{k+1}) ≤ q,
or
p(s_{k+1} − s_k) + η(1 − γL_0 s_k) ≤ q(1 − γL_0 s_{k+1}),
or
p(s_{k+1} − s_k) + η(1 − γL_0 s_k) − q(1 − γL_0 s_{k+1}) ≤ 0,
or
p(s_{k+1} − s_k) + η(1 − γL_0 s_1) + γqL_0 s_{k+1} − q ≤ 0
(since s_1 ≤ s_k), or
2γδp q^k + 2γ²qL_0 δ(1 + q + ⋯ + q^k) + η(1 − 2γ²L_0 δ) − q ≤ 0.
Estimate (30) motivates us to introduce recurrent functions f_k, defined on the interval [0, 1) by
f_k(t) = 2γδp t^k + 2γ²L_0 δ(1 + t + ⋯ + t^k)t − t + η(1 − 2γ²L_0 δ).
Then, we must show, instead of (30), that
f_k(q) ≤ 0.
We need a relationship between two consecutive functions f_k:
f_{k+1}(t) = f_{k+1}(t) − f_k(t) + f_k(t)
= 2γδp t^{k+1} + 2γ²L_0 δ(1 + t + ⋯ + t^{k+1})t − t + η(1 − 2γ²L_0 δ)
 − 2γδp t^k − 2γ²L_0 δ(1 + t + ⋯ + t^k)t + t − η(1 − 2γ²L_0 δ) + f_k(t)
= f_k(t) + 2γδ g(t) t^k,
where
g(t) = γL_0 t² + pt − p.
Notice that g(q) = 0, since q is the positive root of g. It follows from (32) and (34) that
f_{k+1}(q) = f_k(q) for each k.
Then, defining
f_∞(q) = lim_{k→∞} f_k(q),
it suffices to show
f_∞(q) ≤ 0
instead of (32). We get by (31) that
f_∞(q) = 2γ²L_0 δ q/(1 − q) − q + η(1 − 2γ²L_0 δ),
so we must show that
2γ²L_0 δ q/(1 − q) − q + η(1 − 2γ²L_0 δ) ≤ 0,
which reduces to showing that
δ ≤ (μ/(2γ²L)) ((1 + η)q − η − q²)/((1 + η)q − η),
which is true by (22). Hence, the induction for (26), (28) and (29) is complete. It follows that the sequence {s_k} is nondecreasing, bounded above by s** and, as such, converges to its unique least upper bound s*, which satisfies (25). □
We need the following result.
Lemma 2
([14]). Suppose that conditions (A) hold. Then, for x, y ∈ U(x_0, r), the following assertions also hold:
(i) 
‖F′(x) − F′(y)‖ ≤ L‖x − y‖;
(ii) 
‖F′(x)‖ ≤ L‖x − x_0‖ + 2β;
(iii) 
if r < 1/(γL), then F′(x) is nonsingular and satisfies
‖F′(x)^{−1}‖ ≤ γ/(1 − γL‖x − x_0‖),
where L = L_h + L_s.
Next, we show how to improve Lemma 2 and the rest of the results in [3,7]. Notice that it follows from (i) in Lemma 2 that there exists L_0 > 0 such that
‖F′(x) − F′(x_0)‖ ≤ L_0‖x − x_0‖ for each x ∈ Ω.
We have that
L_0 ≤ L
holds true, and L/L_0 can be arbitrarily large [2,12]. Then, we have the following improvement of Lemma 2.
Lemma 3.
Suppose that conditions (A) hold. Then, for x, y ∈ U(x_0, r), the following assertions also hold:
(i) 
‖F′(x) − F′(y)‖ ≤ L‖x − y‖;
(ii) 
‖F′(x)‖ ≤ L_0‖x − x_0‖ + 2β;
(iii) 
if r < 1/(γL_0), then F′(x) is nonsingular and satisfies
‖F′(x)^{−1}‖ ≤ γ/(1 − γL_0‖x − x_0‖).
Proof. 
(ii) We have
‖F′(x)‖ = ‖F′(x) − F′(x_0) + F′(x_0)‖ ≤ ‖F′(x) − F′(x_0)‖ + ‖F′(x_0)‖ ≤ L_0‖x − x_0‖ + ‖F′(x_0)‖ ≤ L_0‖x − x_0‖ + 2β,
since ‖F′(x_0)‖ ≤ ‖H(x_0)‖ + ‖S(x_0)‖ ≤ 2β.
(iii) We have
γ‖F′(x) − F′(x_0)‖ ≤ γL_0‖x − x_0‖ < 1.
It follows from the Banach lemma on invertible operators [1] that F′(x) is nonsingular, so that (44) holds. □
Remark 2.
The new estimates (ii) and (iii) are more precise than the corresponding ones in Lemma 2, if L 0 < L .
Next, we present the semi-local convergence of NHSS using the majorizing sequence { s n } introduced in Lemma 1.
Theorem 2.
Assume that conditions (A), (22) and (23) hold. Let η = max_k{η_k} < 1 and r = max{r_1, s*} with
r_1 = ((α + β)/L)(√(1 + 2ατθ/((2γ + γτθ)(α + β)²)) − 1),
where s* is as in Lemma 1, and with ℓ* = lim inf_{k→∞} ℓ_k satisfying ℓ* > ⌊ln η/ln((τ + 1)θ)⌋ (here ⌊·⌋ denotes the largest integer less than or equal to the corresponding real number), τ ∈ (0, (1 − θ)/θ) and
θ ≡ θ(α, x_0) = ‖T(α; x_0)‖ < 1.
Then, the sequence {x_k}_{k=0}^∞ generated by Algorithm NHSS is well defined and converges to x*, with F(x*) = 0.
Proof. 
If we follow the proof of Theorem 3.2 in [3,7], but use (44) instead of (41) for the upper bound on the norms ‖F′(x_k)^{−1}‖, we arrive at
‖x_{k+1} − x_k‖ ≤ ((1 + η)γ/(1 − γL_0 s_k)) ‖F(x_k)‖,
where
‖F(x_k)‖ ≤ (L/2)(s_k − s_{k−1})² + (η(1 − γL_0 s_{k−1})/(γ(1 + η)))(s_k − s_{k−1}),
so, by (21),
‖x_{k+1} − x_k‖ ≤ ((1 + η)γ/(1 − γL_0 s_k)) [(L/2)(s_k − s_{k−1}) + η(1 − γL_0 s_{k−1})/(γ(1 + η))](s_k − s_{k−1}) = s_{k+1} − s_k.
We also have that ‖x_{k+1} − x_0‖ ≤ ‖x_{k+1} − x_k‖ + ‖x_k − x_{k−1}‖ + ⋯ + ‖x_1 − x_0‖ ≤ (s_{k+1} − s_k) + (s_k − s_{k−1}) + ⋯ + (s_1 − s_0) = s_{k+1} − s_0 < s*. It follows from Lemma 1 and (49) that the sequence {x_k} is complete in the Banach space C^n and, as such, converges to some x* ∈ Ū(x_0, r) (since Ū(x_0, r) is a closed set). Moreover, since ‖T(α; x*)‖ < 1 [4] and by the definition of NHSS, we deduce that F(x*) = 0. □
Remark 3.
(a) 
The limit point s* can be replaced by s** (given in closed form by (24)) in Theorem 2.
(b) 
Suppose there exist nonnegative constants L_{h,0} and L_{s,0} such that for all x ∈ U(x_0, r) ⊆ Ω_0,
‖H(x) − H(x_0)‖ ≤ L_{h,0}‖x − x_0‖
and
‖S(x) − S(x_0)‖ ≤ L_{s,0}‖x − x_0‖.
Set L_0 = L_{h,0} + L_{s,0}. Define Ω_0^1 = Ω_0 ∩ U(x_0, 1/(γL_0)), and replace condition (A2) by:
(A2′) There exist nonnegative constants L_h′ and L_s′ such that for all x, y ∈ U(x_0, r) ∩ Ω_0^1,
‖H(x) − H(y)‖ ≤ L_h′‖x − y‖,
‖S(x) − S(y)‖ ≤ L_s′‖x − y‖.
Set L′ = L_h′ + L_s′. Notice that
L_h′ ≤ L_h, L_s′ ≤ L_s and L′ ≤ L,
since Ω_0^1 ⊆ Ω_0. Denote the conditions (A1) and (A2′) by (A′). Then, clearly, the results of Theorem 2 hold with conditions (A′), Ω_0^1 and L′ replacing conditions (A), Ω_0 and L, respectively (since the iterates {x_k} remain in Ω_0^1, which is a more precise location than Ω_0). Moreover, the results can be improved even further if we use the more accurate set Ω_0^2 containing the iterates {x_k}, defined by Ω_0^2 := Ω ∩ U(x_1, 1/(γL_0) − γδ). Denote the constant corresponding to L′ by L″ and the conditions corresponding to (A′) by (A″). Notice that (see also the numerical examples) Ω_0^2 ⊆ Ω_0^1 ⊆ Ω_0. In view of (50), the results of Theorem 2 are improved further, at the same computational cost.
(c) 
The same improvements as in (b) can be made in the case of Theorem 1.
The majorizing sequence {t_n} in [3,7] is defined by
t_0 = 0, t_1 = c = 2γδ,
t_{k+1} = t_k + [p(t_k − t_{k−1}) + η(1 − γL t_{k−1})](t_k − t_{k−1})/(1 − γL t_k).
Next, we show that our sequence { s n } is tighter than { t n } .
Proposition 1.
Under the conditions of Theorems 1 and 2, the following items hold:
(i) 
s_n ≤ t_n;
(ii) 
s_{n+1} − s_n ≤ t_{n+1} − t_n; and
(iii) 
s* ≤ t* = lim_{k→∞} t_k ≤ r_2.
Proof. 
We use a simple inductive argument together with (21), (51) and (43). □
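Proposition 1 can also be illustrated numerically. The sketch below (again with the constants of Example 2 as hypothetical data) iterates both recursions (21) and (51) with the same p, η and c, and confirms that s_k ≤ t_k termwise.

```python
# Hypothetical data: the constants of Example 2. The new sequence {s_k},
# built with the center constant L0, is compared with the old sequence
# {t_k} from [3,7], built with the (larger) constant L.
gamma, delta, L0, L, eta = 1/3, 0.2, 6.6, 7.2, 0.16
mu = L / L0
p = (1 + eta) * mu * gamma * L0 / 2
c = 2 * gamma * delta

def majorize(Lip, n=40):
    """Iterate the common recursion with Lipschitz-type constant Lip."""
    prev, cur = 0.0, c
    seq = [prev, cur]
    for _ in range(n):
        step = (p * (cur - prev) + eta * (1 - gamma * Lip * prev)) \
               * (cur - prev) / (1 - gamma * Lip * cur)
        prev, cur = cur, cur + step
        seq.append(cur)
    return seq

s_seq = majorize(L0)   # new, tighter sequence (21)
t_seq = majorize(L)    # old sequence (51)
print(s_seq[-1], t_seq[-1])
```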
Remark 4.
Majorizing sequences using L′ or L″ are even tighter than the sequence {s_n}.

4. Special Cases and Numerical Examples

Example 1.
The semi-local convergence of inexact Newton methods was presented in [14] under the conditions
‖F′(x_0)^{−1}F(x_0)‖ ≤ β, ‖F′(x_0)^{−1}(F′(x) − F′(y))‖ ≤ γ‖x − y‖, ‖F′(x_0)^{−1}s_n‖ ≤ η_n‖F′(x_0)^{−1}F(x_n)‖
and
βγ ≤ g_1(η),
where
g_1(η) = (√((4η + 5)³) − (2η³ + 14η + 11))/((1 + η)(1 − η)²).
More recently, Shen and Li [11] substituted g_1(η) with g_2(η), where
g_2(η) = (1 − η)²/((1 + η)(2(1 + η) − η(1 − η)²)).
Estimate (22) can be replaced by a stronger one, directly comparable to (20). Indeed, let us define a scalar sequence {u_n} (less tight than {s_n}) by
u_0 = 0, u_1 = 2γδ,
u_{k+1} = u_k + [(ρ/2)(u_k − u_{k−1}) + η](u_k − u_{k−1})/(1 − ρu_k),
where ρ = γL_0(1 + η)μ. Moreover, define recurrent functions f_k on the interval [0, 1) by
f_k(t) = (ρc/2)t^{k−1} + ρc(1 + t + ⋯ + t^{k−1})t + η − t
and the function g(t) = t² + t/2 − 1/2. Set q = 1/2. Moreover, define the function g_4 on the interval [0, 1/2) by
g_4(η) = (1 − 2η)/(4(1 + η)).
Then, following the proof of Lemma 1, we obtain:
Lemma 4.
Let β, γ, δ, L_0, L be positive constants and η ∈ [0, 1/2). Suppose that
γ²Lδ ≤ g_4(η).
Then, the sequence {u_k} defined by (52) is nondecreasing, bounded from above by
u** = c/(1 − q)
and converges to its unique least upper bound u*, which satisfies
c ≤ u* ≤ u**.
Proposition 2.
Suppose that conditions (A) and (54) hold with r = min{r_1, u*}. Then, the sequence {x_n} generated by Algorithm NHSS is well defined and converges to x*, which satisfies F(x*) = 0.
These bound functions are used to obtain semi-local convergence results for the Newton–HSS method as a subclass of these techniques. Figure 1 and Figure 2 show the graphs of the four bound functions g_1, g_2, ḡ_3 and g_4. Clearly, our bound function ḡ_3 improves all the earlier results. Moreover, as noted before, the function g_3 cannot be used, since it is an incorrect bound function.
In the second example we compare the convergence criteria (22) and (12).
Example 2.
Let Ω_0 = Ω = U(x_0, 1 − λ), x_0 = 1 and λ ∈ [0, 1). Define the function F on Ω by
F(x) = x³ − λ.
Then, using (55) and the conditions (A), we get γ = 1/3, δ = 1 − λ, L = 6(2 − λ), L_0 = 3(3 − λ) and μ = 2(2 − λ)/(3 − λ). Choosing λ = 0.8, we get L = 7.2, L_0 = 6.6, δ = 0.2, μ = 1.0909091, η_0 = 0.594088525, p = 1.392, q = 0.539681469 and γ²Lδ = 0.16. Let η = 0.16 < η_0; then ḡ_3(0.16) = 0.159847474 and ξ = min{0.176715533, 0.20456064} = 0.176715533. Hence, the old condition (12) is not satisfied, since γ²Lδ > ḡ_3(0.16); however, the new condition (22) is satisfied, since γ²Lδ < ξ. Hence, the new results expand the applicability of the NHSS method.
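The arithmetic of this example is easy to reproduce. The sketch below recomputes γ²Lδ, ḡ_3(η) and ξ from the formulas (12) and (18)–(20), and confirms that the old criterion fails while the new one holds.

```python
# Reproduce the numbers of Example 2 (lambda = 0.8, eta = 0.16):
lam, eta = 0.8, 0.16
gamma, delta = 1/3, 1 - lam
L, L0 = 6 * (2 - lam), 3 * (3 - lam)
mu = L / L0

g3bar = (1 - eta)**2 / (2 * (2 + eta + 2 * eta**2 - eta**3))   # bound in (12)
p = (1 + eta) * mu * gamma * L0 / 2
q = (-p + (p**2 + 4 * gamma * L0 * p) ** 0.5) / (2 * gamma * L0)
xi = (mu / 2) * min(2 * (q - eta) / ((1 + eta) * mu + 2 * q),
                    ((1 + eta) * q - eta - q**2) / ((1 + eta) * q - eta))

lhs = gamma**2 * L * delta     # = 0.16
print(lhs, g3bar, xi)          # old bound fails, new bound holds
```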
The next example is used for the reason already mentioned in (iii) of the introduction.
Example 3.
Consider the two-dimensional nonlinear convection–diffusion equation [7]
−(u_xx + u_yy) + q(u_x + u_y) = −e^u, (x, y) ∈ Ω,
u(x, y) = 0, (x, y) ∈ ∂Ω,
where Ω = (0, 1) × (0, 1) and ∂Ω is the boundary of Ω. Here, q > 0 is a constant used to control the magnitude of the convection terms (see [7,15,16]). As in [7], we use the classical five-point finite difference scheme with second-order central differences for both the convection and diffusion terms. If N denotes the number of interior nodes along one coordinate direction, then h = 1/(N + 1) and Re = qh/2 denote the equidistant step-size and the mesh Reynolds number, respectively. Applying the above scheme to (56), we obtain the following system of nonlinear equations:
Ā u + h² e^u = 0,
u = (u_1, u_2, …, u_N)^T, u_i = (u_{i1}, u_{i2}, …, u_{iN})^T, i = 1, 2, …, N,
where the coefficient matrix Ā is given by
Ā = T_x ⊗ I + I ⊗ T_y.
Here, ⊗ is the Kronecker product, and T_x and T_y are the tridiagonal matrices
T_x = tridiag(−1 − Re, 4, −1 + Re), T_y = tridiag(−1 − Re, 0, −1 + Re).
In our computations, N is chosen as 99, so that the total number of nodes is 100 × 100. We use α = qh/2 as in [7], and we consider two choices of the forcing term, η_k = 0.1 and η_k = 0.01 for all k.
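The discretized system above can be assembled directly with Kronecker products. The sketch below is illustrative only (a small grid with N = 4 instead of N = 99, and dense matrices instead of sparse storage); it builds Ā, the nonlinear residual F(u) = Āu + h²e^u and its Jacobian F′(u) = Ā + h² diag(e^u).

```python
import numpy as np

# Small illustrative grid (N = 4 here instead of the N = 99 used in the text):
N, q_conv = 4, 600.0
h = 1.0 / (N + 1)
Re = q_conv * h / 2.0                     # mesh Reynolds number

def tridiag(lo, d, up, n):
    """Constant tridiagonal matrix tridiag(lo, d, up) of order n."""
    return (np.diag(np.full(n - 1, lo), -1) + np.diag(np.full(n, d))
            + np.diag(np.full(n - 1, up), 1))

Tx = tridiag(-1 - Re, 4.0, -1 + Re, N)
Ty = tridiag(-1 - Re, 0.0, -1 + Re, N)
I = np.eye(N)
A_bar = np.kron(Tx, I) + np.kron(I, Ty)   # A = Tx (x) I + I (x) Ty

def F(u):
    # Nonlinear residual of the discrete system: A u + h^2 e^u
    return A_bar @ u + h**2 * np.exp(u)

def Jac(u):
    # Jacobian A + h^2 diag(e^u); non-symmetric once Re != 0
    return A_bar + h**2 * np.diag(np.exp(u))

print(A_bar.shape)
```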
The results obtained in our computations are given in Figure 3, Figure 4, Figure 5 and Figure 6. The total number of inner iterations is denoted by IT, the total number of outer iterations by OT, and the total CPU time by t.

5. Conclusions

A major problem with iterative methods is that the convergence domain is small in general, limiting their applicability. This is true, in particular, for the Newton–Hermitian and skew-Hermitian splitting method and its variants, such as NHSS and other related methods [4,5,6,11,13,14]. Motivated by the work in [7] (see also [4,5,6,11,13,14]), we:
(a)
Extended the convergence domain of the NHSS method without additional hypotheses. This is done in Section 3 using our new idea of recurrent functions. Examples where the new sufficient convergence criteria hold (but the previous ones do not) are given in Section 4 (see also the remarks in Section 3).
(b)
Showed that the sufficient convergence criterion (16) given in [7] is false, so the rest of the results based on (16) do not hold. We have revisited the proofs to rectify this problem. Fortunately, the results hold if (16) is replaced with (12), as can be seen in the proof of Theorem 3.2 in [7]. Notice that the issue with criterion (16) does not show up in Example 4.5, where convergence is established even though the validity of (16) is not checked; the convergence criteria obtained here are sufficient but not necessary. Along the same lines, our technique in Section 3 can be used to extend the applicability of other iterative methods discussed in [1,2,3,4,5,6,8,9,12,13,14,15,16].

Author Contributions

Conceptualization: I.K.A., S.G.; Editing: S.G., C.G.; Data curation: C.G. and A.A.M.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Argyros, I.K.; Szidarovszky, F. The Theory and Applications of Iteration Methods; CRC Press: Boca Raton, FL, USA, 1993.
  2. Argyros, I.K.; Magreñán, A.A. A Contemporary Study of Iterative Methods; Elsevier (Academic Press): New York, NY, USA, 2018.
  3. Argyros, I.K.; George, S. Local convergence for an almost sixth order method for solving equations under weak conditions. SeMA J. 2018, 75, 163–171.
  4. Bai, Z.Z.; Golub, G.H.; Ng, M.K. Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems. SIAM J. Matrix Anal. Appl. 2003, 24, 603–626.
  5. Bai, Z.Z.; Guo, X.P. The Newton-HSS methods for systems of nonlinear equations with positive-definite Jacobian matrices. J. Comput. Math. 2010, 28, 235–260.
  6. Dembo, R.S.; Eisenstat, S.C.; Steihaug, T. Inexact Newton methods. SIAM J. Numer. Anal. 1982, 19, 400–408.
  7. Guo, X.P.; Duff, I.S. Semi-local and global convergence of the Newton-HSS method for systems of nonlinear equations. Numer. Linear Algebra Appl. 2011, 18, 299–315.
  8. Magreñán, A.A. Different anomalies in a Jarratt family of iterative root finding methods. Appl. Math. Comput. 2014, 233, 29–38.
  9. Magreñán, A.A. A new tool to study real dynamics: The convergence plane. Appl. Math. Comput. 2014, 248, 29–38.
  10. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970.
  11. Shen, W.P.; Li, C. Kantorovich-type convergence criterion for inexact Newton methods. Appl. Numer. Math. 2009, 59, 1599–1611.
  12. Argyros, I.K. Local convergence of inexact Newton-like-iterative methods and applications. Comput. Math. Appl. 2000, 39, 69–75.
  13. Eisenstat, S.C.; Walker, H.F. Choosing the forcing terms in an inexact Newton method. SIAM J. Sci. Comput. 1996, 17, 16–32.
  14. Guo, X.P. On semilocal convergence of inexact Newton methods. J. Comput. Math. 2007, 25, 231–242.
  15. Axelsson, O.; Carey, G.F. On the numerical solution of two-point singularly perturbed boundary value problems. Comput. Methods Appl. Mech. Eng. 1985, 50, 217–229.
  16. Axelsson, O.; Nikolova, M. Avoiding slave points in an adaptive refinement procedure for convection-diffusion problems in 2D. Computing 1998, 61, 331–357.
Figure 1. Graphs of g_1(t) (violet), g_2(t) (green) and ḡ_3(t) (red).
Figure 2. Graphs of g_1(t) (violet), g_2(t) (green), ḡ_3(t) (red) and g_4(t) (blue).
Figure 3. Plots of (a) inner iterations vs. log(‖F(x_k)‖), (b) outer iterations vs. log(‖F(x_k)‖), (c) CPU time vs. log(‖F(x_k)‖) for q = 600 and x_0 = e.
Figure 4. Plots of (a) inner iterations vs. log(‖F(x_k)‖), (b) outer iterations vs. log(‖F(x_k)‖), (c) CPU time vs. log(‖F(x_k)‖) for q = 2000 and x_0 = e.
Figure 5. Plots of (a) inner iterations vs. log(‖F(x_k)‖), (b) outer iterations vs. log(‖F(x_k)‖), (c) CPU time vs. log(‖F(x_k)‖) for q = 600 and x_0 = 6e.
Figure 6. Plots of (a) inner iterations vs. log(‖F(x_k)‖), (b) outer iterations vs. log(‖F(x_k)‖), (c) CPU time vs. log(‖F(x_k)‖) for q = 2000 and x_0 = 6e.
