Article

Extended Kung–Traub Methods for Solving Equations with Applications

by Samundra Regmi 1,*, Ioannis K. Argyros 2, Santhosh George 3, Ángel Alberto Magreñán 4 and Michael I. Argyros 5

1 Learning Commons, University of North Texas at Dallas, Dallas, TX 75201, USA
2 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
3 Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, Mangalore 575025, India
4 Departamento de Matemáticas y Computación, Universidad de La Rioja, 26006 Logroño, Spain
5 Department of Computer Science, University of Oklahoma, Norman, OK 73701, USA
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(20), 2635; https://doi.org/10.3390/math9202635
Submission received: 8 September 2021 / Revised: 8 October 2021 / Accepted: 13 October 2021 / Published: 19 October 2021
(This article belongs to the Special Issue Application of Iterative Methods for Solving Nonlinear Equations)

Abstract: Kung and Traub (1974) proposed an iterative method for solving equations defined on the real line. Convergence of order four was shown using Taylor expansions that require the existence of the fifth derivative, even though this derivative does not appear in the method. Such hypotheses limit the applicability of the method to functions that are at least five times differentiable, although the method may converge more generally. As far as we know, no semi-local convergence analysis has been given in this setting. Our goal is to extend the applicability of this method in both the local and the semi-local convergence case, and in the more general setting of Banach space valued operators. Moreover, we use our idea of recurrent functions and conditions only on the first derivative and the divided difference, which do appear in the method. This idea can be used to extend other high-order multipoint and multistep methods. Numerical experiments testing the convergence criteria complement this study.

1. Introduction

We consider the problem of approximating a solution $x^*$ of the equation

$$F(x) = 0, \qquad (1)$$

where $F : \Omega \subseteq V_1 \to V_2$ is an operator acting between Banach spaces $V_1$ and $V_2$, and $\Omega \neq \emptyset$. Kung and Traub [1] introduced a fourth-order iterative method for solving nonlinear equations on the real line. In the Banach space setting, this method is defined for $n = 0, 1, 2, \ldots$ by

$$y_n = x_n - F'(x_n)^{-1}F(x_n), \qquad x_{n+1} = y_n - [y_n, x_n; F]^{-1}F'(x_n)\,[y_n, x_n; F]^{-1}F(y_n). \qquad (2)$$

Here, $[\cdot, \cdot; F] : \Omega \times \Omega \to L(V_1, V_2)$ is a divided difference of order one [2]. The convergence order was obtained using Taylor expansions and hypotheses on the derivatives of $F$ up to order five. Note that the method itself involves only the first derivative and a divided difference, so the assumptions on the fifth derivative reduce its applicability [1,3,4,5].
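To make the structure of (2) concrete, the following is a minimal sketch of the method on the real line (ours, not the authors' code), where the divided difference reduces to $[y, x; F] = (F(y) - F(x))/(y - x)$; the function names and tolerance are illustrative assumptions.

```python
# Minimal sketch of method (2) on the real line (illustrative only).
# Here [y, x; F] = (F(y) - F(x)) / (y - x).
def kung_traub(F, dF, x0, tol=1e-12, max_iter=25):
    x = x0
    for n in range(max_iter):
        Fx = F(x)
        if abs(Fx) < tol:
            return x, n
        y = x - Fx / dF(x)                # first substep: Newton step
        dd = (F(y) - F(x)) / (y - x)      # divided difference [y, x; F]
        # second substep: x_{n+1} = y_n - [y,x;F]^{-1} F'(x_n) [y,x;F]^{-1} F(y_n)
        x = y - dF(x) * F(y) / dd**2
    return x, max_iter
```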
For example, let $V_1 = V_2 = \mathbb{R}$ and $\Omega = [-0.5, 1.5]$. Define $\lambda$ on $\Omega$ by

$$\lambda(t) = \begin{cases} t^3\log t^2 + t^5 - t^4 & \text{if } t \neq 0,\\ 0 & \text{if } t = 0. \end{cases}$$

Then, we have $t^* = 1$ and

$$\lambda'''(t) = 6\log t^2 + 60t^2 - 24t + 22.$$

Obviously, $\lambda'''(t)$ is not bounded on $\Omega$. Therefore, the convergence of method (2) is not guaranteed by the analysis in [1]. In order to avoid Taylor series expansions but still obtain the fourth order of convergence for method (2), we use the computational order of convergence and the approximate computational order of convergence, which do not require more than one derivative (see Remark 2(b)).
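The unboundedness is easy to confirm numerically; the following quick check (ours) evaluates $\lambda'''$ as $t \to 0$:

```python
import math

# lambda'''(t) = 6*log(t^2) + 60*t^2 - 24*t + 22 diverges as t -> 0,
# so it admits no bound on Omega = [-0.5, 1.5]:
for t in (1e-1, 1e-3, 1e-6, 1e-9):
    print(t, 6 * math.log(t**2) + 60 * t**2 - 24 * t + 22)
# the printed values tend to -infinity
```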
In this paper, we introduce a majorizing sequence and use our idea of recurrent functions to extend the applicability of method (2). Our analysis includes error bounds and results on the uniqueness of $x^*$ based on computable Lipschitz constants, not given in [1] or in other similar studies using Taylor series [3,4,5,6,7,8,9,10,11,12,13]. The advantages of the extended method include the following: applications to nonlinear Banach space valued equations are no longer limited to systems in finite-dimensional Euclidean space; the local convergence analysis provides computable upper error bounds not given before; and the semi-local convergence, not given before, is proved. The motivation for writing this paper is the extension of the applicability of method (2), as already illustrated by the example. The novelty of the paper includes the extension of the convergence domain in both the local and the semi-local convergence case and the introduction of the recurrent functions proving technique, which can be used on other methods too [14,15,16,17,18,19,20,21,22,23,24,25,26,27].
The rest of the paper is organized as follows: In Section 2, we present results on majorizing sequences. Section 3 and Section 4 contain the semi-local and the local convergence analysis, respectively, while in Section 5 the numerical experiments are presented. Concluding remarks are given in Section 6.

2. Majorizing Sequences

We present results on majorizing sequences.
Definition 1.
Let $\{u_n\}$ be a sequence in a Banach space. Then, a nondecreasing scalar sequence $\{m_n\}$ is called majorizing for $\{u_n\}$ if

$$\|u_{n+1} - u_n\| \leq m_{n+1} - m_n \quad \text{for each } n = 0, 1, 2, \ldots \qquad (3)$$

By this definition, we can use the sequence $\{m_n\}$ to study the convergence of $\{u_n\}$.
Let $\eta > 0$, $\ell > 0$ and $\ell_i > 0$, $i = 0, 1, 2, \ldots, 5$, be given parameters. Define scalar sequences $\{s_n\}$, $\{t_n\}$ for each $n = 0, 1, 2, \ldots$ by $t_0 = 0$, $s_0 = \eta$,

$$t_1 = s_0 + \frac{\ell_0(s_0 - t_0)^2}{2(1 - \ell_1 s_0)^2}, \qquad s_{n+1} = t_{n+1} + \frac{\alpha_{n+1}}{1 - \ell_0 t_{n+1}}, \qquad t_{n+2} = s_{n+1} + \frac{\ell\,\ell_4 t_{n+1}(s_{n+1} - t_{n+1})^2}{2(1 - \ell_1(s_{n+1} + t_{n+1}))^2}, \qquad (4)$$

where

$$\alpha_{n+1} = \left[\ell_3(t_{n+1} - t_n) + \frac{\ell_2(1 + \ell_1(s_n + t_n))(s_n - t_n)}{1 - \ell_0 t_n}\right](t_{n+1} - s_n).$$
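The sequences in (4) are straightforward to evaluate numerically. The following sketch (ours; the parameter values are made up for illustration) iterates them and checks conditions (6) and (7) of Lemma 1 below along the way:

```python
# Sketch: iterate the majorizing sequences (4) and test conditions (6), (7)
# of Lemma 1. The parameter values here are made up for illustration.
eta, l, l0, l1, l2, l3, l4 = 0.1, 1.0, 0.8, 0.4, 0.5, 0.5, 1.0

t, s = [0.0], [eta]
t.append(s[0] + l0 * (s[0] - t[0])**2 / (2 * (1 - l1 * s[0])**2))
for n in range(20):
    alpha = (l3 * (t[n+1] - t[n])
             + l2 * (1 + l1 * (s[n] + t[n])) * (s[n] - t[n]) / (1 - l0 * t[n])
            ) * (t[n+1] - s[n])
    s.append(t[n+1] + alpha / (1 - l0 * t[n+1]))
    t.append(s[n+1] + l * l4 * t[n+1] * (s[n+1] - t[n+1])**2
             / (2 * (1 - l1 * (s[n+1] + t[n+1]))**2))
    assert t[n+1] < 1/l0 and s[n+1] + t[n+1] < 1/l1  # conditions (6) and (7)
print(t[-1])  # approximates the limit t*
```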
Lemma 1.
Suppose:
$$\ell_1\eta < 1 \qquad (5)$$

and, for each $n = 0, 1, 2, \ldots$,

$$t_{n+1} < \frac{1}{\ell_0} \qquad (6)$$

and

$$s_{n+1} + t_{n+1} < \frac{1}{\ell_1}. \qquad (7)$$

Then, the sequences $\{s_n\}$, $\{t_n\}$ are nondecreasing and bounded from above by $\frac{1}{\ell_0}$ and, as such, they converge to their unique least upper bound $t^* \in [\eta, \frac{1}{\ell_0}]$. Moreover, the following holds for each $n = 0, 1, 2, \ldots$:

$$t_n \leq s_n \leq t_{n+1}.$$
Proof. 
This follows from the definition of the sequences $\{s_n\}$, $\{t_n\}$ in (4) and hypotheses (5)–(7). □
Remark 1.
Hypotheses (6) and (7) can be verified only in special cases. That is why we introduce stronger hypotheses that imply those of Lemma 1, but not necessarily vice versa.
It is convenient for us to define, for each $n = 1, 2, \ldots$, the sequences of functions $\{f_n\}$, $\{g_n\}$ and the functions $\bar f$, $f_\infty$, $\bar g$, $g_\infty$ on the interval $M = [0, 1)$ as follows:

$$f_n(t) = \ell_5(t^{2n} + t^{2n-1})\eta + \ell_2 t^{2n-1}\eta + \ell_1\ell_2 t^{2n-1}\big(t^{2n} + 2(1 + t + \cdots + t^{2n-1})\big)\eta^2 + \ell_0\eta\big(t^{2n} + t^{2n+1} + 2(1 + t + \cdots + t^{2n-1})\big) - \ell_0^2\eta^2(1 + t + \cdots + t^{2n-1})(1 + t + \cdots + t^{2n+1}) - 1,$$

$$\bar f(t) = a_7t^7 + a_6t^6 + a_5t^5 + a_4t^4 + a_3t^3 + a_2t^2 + a_1t + a_0, \qquad f_\infty(t) = -\Big(1 - \frac{\ell_0\eta}{1 - t}\Big)^2, \qquad (8)$$

$$g_n(t) = \frac{\ell\,\ell_4\eta^2}{2}t^{2n+2}(1 + t + \cdots + t^{2n+1}) + 2\ell_1\eta\big(t^{2n+2} + 2(1 + t + \cdots + t^{2n+1})\big) - \ell_1^2\eta^2\big(t^{2n+2} + 2(1 + t + \cdots + t^{2n+1})\big)^2 - 1,$$

$$\bar g(t) = \frac{\ell\,\ell_4\eta}{2}(t^7 + t^6 + t^5 + t^4 - t - 1) + 2\ell_1(1 + t)^2 + \ell_1^2\eta(1 + t + t^2)\Big(\frac{4}{1 - t} + 3t^5 + 2t^4 + t^6\Big)$$

and

$$g_\infty(t) = -\Big(1 - \frac{2\ell_1\eta}{1 - t}\Big)^2,$$

where

$$a_0 = -(\ell_2 + \ell_5 + 2\ell_1\ell_2\eta), \quad a_1 = -(\ell_5 + 2\ell_1\ell_2\eta), \quad a_2 = \ell_0 + \ell_2 + \ell_1\ell_2\eta + \ell_5, \quad a_3 = \ell_0 + \ell_5 + 2\ell_1\ell_2\eta,$$
$$a_4 = \ell_0 + 2\ell_1\ell_2\eta, \quad a_5 = \ell_0 + 2\ell_1\ell_2\eta, \quad a_6 = 2\ell_1\ell_2\eta \quad \text{and} \quad a_7 = \ell_1\ell_2\eta.$$

By these definitions, we have

$$\bar f(0) = -(\ell_2 + \ell_5 + 2\ell_1\ell_2\eta) < 0, \qquad \bar f(1) = 2(2\ell_0 + 3\ell_1\ell_2\eta) > 0, \qquad \bar g(0) < 0$$

and

$$\bar g(t) \to +\infty \quad \text{as} \quad t \to 1^-.$$
It then follows by the intermediate value theorem that the functions $\bar f$ and $\bar g$ have zeros in the interval $(0, 1)$. Denote the smallest such zeros by $b_1$ and $b_2$, respectively. Moreover, we have

$$\bar f(t) \leq 0 \quad \text{for each } t \in [0, b_1] \qquad (9)$$

and

$$\bar g(t) \leq 0 \quad \text{for each } t \in [0, b_2]. \qquad (10)$$
Furthermore, define the scalar sequences $\{\gamma_n\}$ and $\{\delta_n\}$ by

$$\gamma_n = \frac{\ell_3(t_{n+1} - t_n)(1 - \ell_0 t_n) + \ell_2(1 + \ell_1(s_n + t_n))(s_n - t_n)}{(1 - \ell_0 t_n)(1 - \ell_0 t_{n+1})}$$

and

$$\delta_n = \frac{\ell\,\ell_4 t_{n+1}(s_{n+1} - t_{n+1})}{2(1 - \ell_1(t_{n+1} + s_{n+1}))^2},$$

and set

$$\mu_0 = \max\{\gamma_0, \delta_0\}, \qquad \mu_1 = \min\{b_1, b_2\}.$$
Next, we present a second auxiliary result on majorizing sequences.
Lemma 2.
Suppose that there exists $\mu$ such that

$$\mu_0 \leq \mu \leq \mu_1 < 1 - 2\ell_1\eta \qquad (11)$$

and that (5) holds. Then, the sequences $\{s_n\}$, $\{t_n\}$ are well defined, nondecreasing and bounded from above by $t^{**} = \frac{\eta}{1 - \mu}$ and, as such, they converge to their unique least upper bound $t^* \in [\eta, t^{**}]$. Moreover, the following estimates hold for each $n = 1, 2, \ldots$:

$$0 \leq t_{n+1} - s_n \leq \mu(s_n - t_n) \leq \mu^{2n+1}\eta, \qquad (12)$$

$$0 \leq s_n - t_n \leq \mu(t_n - s_{n-1}) \leq \mu^{2n}\eta \qquad (13)$$

and

$$0 \leq t_n \leq s_n \leq t_{n+1}. \qquad (14)$$
Proof. 
Estimates (12)–(14) hold if

$$0 \leq \gamma_k \leq \mu, \qquad (15)$$

$$0 \leq \delta_k \leq \mu \qquad (16)$$

and

$$t_k \leq s_k \leq t_{k+1} \qquad (17)$$

are true for $k = 0, 1, 2, \ldots$. Notice that, by the definition of $s_0$, $t_1$ and (5), we have $s_0 \leq t_1$. We also have that (15)–(17) hold for $k = 0$ by (11). Suppose that estimates (15) and (16) hold for $k = 1, 2, \ldots, n$. Then, we obtain
$$s_k \leq t_k + \mu^{2k}\eta \leq s_{k-1} + \mu^{2k-1}\eta + \mu^{2k}\eta \leq \cdots \leq \eta + \mu\eta + \cdots + \mu^{2k}\eta = \frac{1 - \mu^{2k+1}}{1 - \mu}\eta < \frac{\eta}{1 - \mu} = t^{**}$$

and

$$t_{k+1} \leq s_k + \mu^{2k+1}\eta \leq t_k + \mu^{2k}\eta + \mu^{2k+1}\eta \leq \cdots \leq \eta + \mu\eta + \cdots + \mu^{2k+1}\eta = \frac{1 - \mu^{2k+2}}{1 - \mu}\eta < t^{**}.$$
It follows by the induction hypotheses and (17) that the sequences $\{s_k\}$ and $\{t_k\}$ are nondecreasing. Estimate (15) holds if we show instead, with $\ell_5 = \ell_3(1 - \ell_0 t_1)$, that

$$\ell_5(\mu^{2k+1} + \mu^{2k})\eta + \ell_2\eta\mu^{2k} + \ell_1\ell_2\eta^2\mu^{2k-1}\Big(\frac{1 - \mu^{2k+1}}{1 - \mu} + \frac{1 - \mu^{2k}}{1 - \mu}\Big)\mu - \mu\Big[1 - \ell_0\Big(\frac{1 - \mu^{2k}}{1 - \mu} + \frac{1 - \mu^{2k+2}}{1 - \mu}\Big)\eta + \ell_0^2\frac{(1 - \mu^{2k})(1 - \mu^{2k+2})}{(1 - \mu)^2}\eta^2\Big] \leq 0$$

or

$$f_k(t) \leq 0 \quad \text{at} \quad t = \mu. \qquad (18)$$
We need a relationship between two consecutive functions $f_k$. By the definition of the functions $f_k$, adding and subtracting suitable terms, we can write, in turn,

$$f_{k+1}(t) \leq f_k(t) + \ell_5(t^{2k+2} + t^{2k+1} - t^{2k} - t^{2k-1})\eta + \ell_2\eta(t^{2k+1} - t^{2k-1}) + \ell_1\ell_2\eta^2 t^{2k-1}\big[t^2\big((1 + t + \cdots + t^{2k+2}) + (1 + t + \cdots + t^{2k+3})\big) - \big((1 + t + \cdots + t^{2k}) + (1 + t + \cdots + t^{2k+1})\big)\big] + \ell_0\eta\big[(1 + t + \cdots + t^{2k+1}) + (1 + t + \cdots + t^{2k+3}) - (1 + t + \cdots + t^{2k-1}) - (1 + t + \cdots + t^{2k+1})\big]$$
$$\leq f_k(t) + \big[\ell_5(t^3 + t^2 - t - 1) + \ell_2(t^2 - 1) + \ell_1\ell_2\eta(t^7 + 2t^6 + 2t^5 + 2t^4 + 2t^3 + t^2 - 2t - 2) + \ell_0(t^2 + t^3 + t^4 + t^5)\big]t^{2k-1}\eta = f_k(t) + \bar f(t)\,t^{2k-1}\eta,$$

where we used $t^k \leq t$ for $k = 1, 2, \ldots$, since $t \in (0, 1)$. Recall that $f_\infty(t) = \lim_{k\to\infty} f_k(t)$. Then, we can show, instead of (18), that

$$f_\infty(\mu) \leq 0,$$

which is true by (8).
which is true by (8). Set c k = t 2 k + 2 ( 1 + t + + t 2 k + 1 ) and d k = t 2 k + 2 + 2 ( 1 + t + + t 2 k + 1 ) . As in (15), estimate (16) holds if
g k ( t ) 0 f o r t = μ .
Function g k ( t ) can be written as
g k ( t ) = 4 η 2 2 t 2 k + 2 c n + 2 1 η d n 1 2 η 2 d n 2 1 .
Then, we again need a relationship between two consecutive functions $g_k$. Notice that

$$c_{k+1} - c_k = t^{2k+4}(1 + t + \cdots + t^{2k+3}) - t^{2k+2}(1 + t + \cdots + t^{2k+1}) = t^{2k+2}(t^{2k+2} + t^{2k+3} + t^{2k+4} + t^{2k+5} - t - 1),$$

$$d_{k+1} - d_k = (1 + t + \cdots + t^{2k+3}) + (1 + t + \cdots + t^{2k+4}) - (1 + t + \cdots + t^{2k+1}) - (1 + t + \cdots + t^{2k+2}) = t^{2k+2} + 2t^{2k+3} + t^{2k+4}$$

and

$$d_{k+1} + d_k = 4(1 + t + \cdots + t^{2k+1}) + 3t^{2k+2} + 2t^{2k+3} + t^{2k+4}.$$
By adding and subtracting $g_k$ from $g_{k+1}$, we obtain

$$g_{k+1}(t) = g_k(t) + \frac{\ell\,\ell_4\eta^2}{2}t^{2k+2}(t^{2k+5} + t^{2k+4} + t^{2k+3} + t^{2k+2} - t - 1) + 2\ell_1\eta(t^{2k+2} + 2t^{2k+3} + t^{2k+4}) - \ell_1^2\eta^2(d_{k+1}^2 - d_k^2) \leq g_k(t) + \bar g(t)\,t^{2k+2}\eta.$$

Recall that $g_\infty(t) = \lim_{k\to\infty} g_k(t)$. Then, we can show, instead of (21), that

$$g_\infty(\mu) \leq 0,$$

which is true by (11). The induction for estimates (15)–(17) is completed. Hence, the sequences $\{s_n\}$, $\{t_n\}$ are nondecreasing and bounded from above by $t^{**}$, so they converge to $t^*$. □

3. Semi-Local Convergence

Let $U(x_0, r) = \{x \in V_1 : \|x - x_0\| < r\}$ and $U[x_0, r] = \{x \in V_1 : \|x - x_0\| \leq r\}$ for $r > 0$. The semi-local convergence analysis of method (2) uses conditions (H1)–(H4).
Suppose:
(H1)
There exist $x_0 \in \Omega$ and $\eta \geq 0$ such that $F'(x_0)^{-1} \in L(V_2, V_1)$ and

$$\|F'(x_0)^{-1}F(x_0)\| \leq \eta.$$
(H2)
For each $x \in \Omega$,

$$\|F'(x_0)^{-1}(F'(x) - F'(x_0))\| \leq \ell_0\|x - x_0\|.$$

Set $\Omega_0 = U[x_0, \frac{1}{\ell_0}] \cap \Omega$.
(H3)
For each $x, y, z \in \Omega_0$, the following hold:

$$\|F'(x_0)^{-1}(F'(y) - F'(x))\| \leq \ell\|y - x\|,$$
$$\|F'(x_0)^{-1}([y, x; F] - F'(x_0))\| \leq \ell_1(\|y - x_0\| + \|x - x_0\|),$$
$$\|F'(x_0)^{-1}([y, x; F] - F'(x))\| \leq \ell_2\|y - x\|,$$
$$\|F'(x_0)^{-1}([z, y; F] - [y, x; F])\| \leq \ell_3(\|z - y\| + \|y - x\|)$$

and

$$\|F'(x_0)^{-1}F'(x)\| \leq \ell_4\|x - x_0\|.$$
(H4)
$U[x_0, t^*] \subseteq \Omega$.
Then, we can show the main semi-local convergence result for method (2).
Theorem 1.
Suppose that conditions (H1)–(H4) hold. Then, the sequence $\{x_n\}$ generated by method (2) is well defined in $U[x_0, t^*]$, remains in $U[x_0, t^*]$ for each $n = 0, 1, 2, \ldots$ and converges to a solution $x^* \in U[x_0, t^*]$ of the equation $F(x) = 0$ such that

$$\|x^* - x_n\| \leq t^* - t_n.$$
Proof. 
The assertions

$(A_k)$: $\|y_k - x_k\| \leq s_k - t_k$

$(B_k)$: $\|x_{k+1} - y_k\| \leq t_{k+1} - s_k$

shall be proven using induction on $k$. It follows from the first substep of method (2) that

$$\|y_0 - x_0\| = \|F'(x_0)^{-1}F(x_0)\| \leq \eta = s_0 - t_0 = s_0 \leq t^*.$$
Hence, $(A_0)$ is true and $y_0 \in U[x_0, t^*]$. By the first substep of method (2) for $n = 0$, we can write

$$F(y_0) = F(y_0) - F(x_0) - F'(x_0)(y_0 - x_0),$$

so, by (H2),

$$\|F'(x_0)^{-1}F(y_0)\| \leq \frac{\ell_0}{2}\|y_0 - x_0\|^2 \leq \frac{\ell_0}{2}(s_0 - t_0)^2.$$

Next, we show the invertibility of the linear operator $[y_0, x_0; F]$. Indeed, we have by (H3) that

$$\|F'(x_0)^{-1}([y_0, x_0; F] - F'(x_0))\| \leq \ell_1(\|y_0 - x_0\| + \|x_0 - x_0\|) \leq \ell_1(s_0 - t_0) < 1,$$

so, by the Banach lemma on invertible linear operators [20], $[y_0, x_0; F]^{-1}$ exists,

$$\|[y_0, x_0; F]^{-1}F'(x_0)\| \leq \frac{1}{1 - \ell_1\|y_0 - x_0\|} \leq \frac{1}{1 - \ell_1(s_0 - t_0)},$$

and the iterate $x_1$ is well defined by the second substep of method (2) for $n = 0$. We can write

$$x_1 - y_0 = -[y_0, x_0; F]^{-1}F'(x_0)\,[y_0, x_0; F]^{-1}F(y_0),$$

leading to

$$\|x_1 - y_0\| \leq \|[y_0, x_0; F]^{-1}F'(x_0)\|^2\,\|F'(x_0)^{-1}F(y_0)\| \leq \frac{\ell_0(s_0 - t_0)^2}{2(1 - \ell_1 s_0)^2} = t_1 - s_0,$$

showing $(B_0)$. We also obtain

$$\|x_1 - x_0\| \leq \|x_1 - y_0\| + \|y_0 - x_0\| \leq t_1 - s_0 + s_0 - t_0 = t_1 \leq t^*,$$
so $x_1 \in U[x_0, t^*]$. Suppose that $(A_k)$ and $(B_k)$ hold, that $y_k, x_{k+1} \in U[x_0, t^*]$ and that $F'(x_k)^{-1}$, $[y_k, x_k; F]^{-1}$ exist for each $k = 1, 2, \ldots, n$. We shall show that they hold for $k = n + 1$. By the second substep of method (2), we can write, in turn,

$$F(x_{n+1}) = F(x_{n+1}) - F(y_n) - [y_n, x_n; F]F'(x_n)^{-1}[y_n, x_n; F](x_{n+1} - y_n)$$
$$= \big([x_{n+1}, y_n; F] - [y_n, x_n; F]F'(x_n)^{-1}[y_n, x_n; F]\big)(x_{n+1} - y_n)$$
$$= \big[([x_{n+1}, y_n; F] - [y_n, x_n; F]) + [y_n, x_n; F]\big(I - F'(x_n)^{-1}[y_n, x_n; F]\big)\big](x_{n+1} - y_n)$$
$$= \big[([x_{n+1}, y_n; F] - [y_n, x_n; F]) + \big(([y_n, x_n; F] - F'(x_0)) + F'(x_0)\big)F'(x_n)^{-1}\big(F'(x_n) - [y_n, x_n; F]\big)\big](x_{n+1} - y_n).$$

Then, by conditions (H3) and the induction hypotheses, we obtain, in turn, that

$$\|F'(x_0)^{-1}F(x_{n+1})\| \leq \Big[\ell_3(\|x_{n+1} - y_n\| + \|y_n - x_n\|) + \big(1 + \ell_1(\|y_n - x_0\| + \|x_n - x_0\|)\big)\frac{\ell_2\|y_n - x_n\|}{1 - \ell_0\|x_n - x_0\|}\Big]\|x_{n+1} - y_n\|$$
$$\leq \Big[\ell_3(t_{n+1} - t_n) + \big(1 + \ell_1(s_n + t_n)\big)\frac{\ell_2(s_n - t_n)}{1 - \ell_0 t_n}\Big](t_{n+1} - s_n) = \alpha_{n+1}.$$
We must show that $F'(x_{n+1})$ is invertible. Indeed, we have by (H2) and (6) that

$$\|F'(x_0)^{-1}(F'(x_{n+1}) - F'(x_0))\| \leq \ell_0\|x_{n+1} - x_0\| \leq \ell_0 t_{n+1} < 1,$$

so

$$\|F'(x_{n+1})^{-1}F'(x_0)\| \leq \frac{1}{1 - \ell_0 t_{n+1}}.$$

Hence, we obtain by method (2) and the two preceding estimates that

$$\|y_{n+1} - x_{n+1}\| \leq \|F'(x_{n+1})^{-1}F'(x_0)\|\,\|F'(x_0)^{-1}F(x_{n+1})\| \leq \frac{\alpha_{n+1}}{1 - \ell_0 t_{n+1}} = s_{n+1} - t_{n+1},$$
showing $(A_k)$ for $k = n + 1$. We also obtain

$$\|y_{n+1} - x_0\| \leq \|y_{n+1} - x_{n+1}\| + \|x_{n+1} - x_0\| \leq s_{n+1} - t_{n+1} + t_{n+1} - t_0 = s_{n+1} \leq t^*,$$

so $y_{n+1} \in U[x_0, t^*]$. In view of the first substep of method (2), we can write

$$F(y_{n+1}) = F(y_{n+1}) - F(x_{n+1}) - F'(x_{n+1})(y_{n+1} - x_{n+1}),$$

leading to

$$\|F'(x_0)^{-1}F(y_{n+1})\| \leq \frac{\ell}{2}\|y_{n+1} - x_{n+1}\|^2 \leq \frac{\ell}{2}(s_{n+1} - t_{n+1})^2,$$

so

$$\|x_{n+2} - y_{n+1}\| \leq \|[y_{n+1}, x_{n+1}; F]^{-1}F'(x_0)\|^2\,\|F'(x_0)^{-1}F'(x_{n+1})\|\,\|F'(x_0)^{-1}F(y_{n+1})\| \leq \frac{\ell\,\ell_4 t_{n+1}(s_{n+1} - t_{n+1})^2}{2(1 - \ell_1(s_{n+1} + t_{n+1}))^2} = t_{n+2} - s_{n+1},$$

showing $(B_k)$ for $k = n + 1$. Moreover, we obtain

$$\|x_{n+2} - x_0\| \leq \|x_{n+2} - y_{n+1}\| + \|y_{n+1} - x_0\| \leq t_{n+2} - s_{n+1} + s_{n+1} - t_0 = t_{n+2} \leq t^*$$

and

$$\|x_{n+1} - x_n\| \leq \|x_{n+1} - y_n\| + \|y_n - x_n\| \leq t_{n+1} - s_n + s_n - t_n = t_{n+1} - t_n.$$

Hence, we deduce that $x_{n+2} \in U[x_0, t^*]$ and that the sequence $\{x_n\}$ is Cauchy (since $\{t_n\}$ converges) in the Banach space $V_1$; thus, it converges to some $x^* \in U[x_0, t^*]$. By letting $n \to \infty$ in the estimate

$$\|F'(x_0)^{-1}F(x_{n+1})\| \leq \alpha_{n+1}$$

and using the continuity of $F$, we obtain $F(x^*) = 0$. □
Concerning the uniqueness of the solution $x^*$, we have:
Proposition 1.
Suppose that:

(i) there exists a simple solution $x^*$ of the equation $F(x) = 0$;

(ii) there exists $\tilde s \geq t^*$ such that

$$\ell_0(\tilde s + t^*) < 2.$$

Set $\Omega_1 = U[x_0, \tilde s] \cap \Omega$. Then, the only solution of the equation $F(x) = 0$ in the region $\Omega_1$ is $x^*$.
Proof. 
Let $\tilde x \in \Omega_1$ with $F(\tilde x) = 0$. Set $T = \int_0^1 F'(\tilde x + \theta(x^* - \tilde x))\,d\theta$. Then, by (H2) and (ii), we obtain

$$\|F'(x_0)^{-1}(T - F'(x_0))\| \leq \int_0^1 \ell_0\big((1 - \theta)\|\tilde x - x_0\| + \theta\|x^* - x_0\|\big)\,d\theta \leq \frac{\ell_0}{2}(\tilde s + t^*) < 1,$$

so $T$ is invertible, leading to $\tilde x = x^*$, where we also used the identity $T(x^* - \tilde x) = F(x^*) - F(\tilde x) = 0 - 0 = 0$. □

4. Local Convergence

Let $L$, $L_j$, $j = 0, 1, 2, 3, 4$, be positive parameters. Set $S = [0, \frac{1}{L_0})$ and define the function $\psi_1$ on $S$ by

$$\psi_1(t) = \frac{Lt}{2(1 - L_0 t)}.$$

Then, the parameter

$$r_1 = \frac{2}{2L_0 + L}$$

solves the equation $\psi_1(t) - 1 = 0$. Moreover, define the functions $q, p$ on $S$ by

$$q(t) = L_0\psi_1(t)t - 1 \quad \text{and} \quad p(t) = L_1(1 + \psi_1(t))t - 1.$$

Suppose that the equations

$$q(t) = 0, \qquad p(t) = 0$$

have smallest solutions $r_q, r_p \in S \setminus \{0\}$, respectively. Set $S_0 = [0, r_0)$, where $r_0 = \min\{r_q, r_p\}$. Define the function $\psi_2$ on $S_0$ by

$$\psi_2(t) = \frac{L\psi_1^2(t)t}{2(1 - L_0\psi_1(t)t)} + \frac{L_3L_4(1 + \psi_1(t))\psi_1(t)t}{(1 - L_0\psi_1(t)t)(1 - L_1(1 + \psi_1(t))t)} + \frac{L_2L_4(1 + \psi_1(t))\psi_1(t)t}{(1 - L_1(1 + \psi_1(t))t)^2}.$$
Suppose that the equation

$$\psi_2(t) - 1 = 0$$

has a smallest solution $r_2 \in S_0 \setminus \{0\}$. We shall prove that

$$r = \min\{r_1, r_2\} \qquad (24)$$

is a convergence radius for method (2). Set $S_1 = [0, r)$. By these definitions, we have, for each $t \in S_1$,

$$0 \leq L_0 t < 1, \qquad (25)$$

$$0 \leq L_1(1 + \psi_1(t))t < 1, \qquad (26)$$

$$0 \leq L_0\psi_1(t)t < 1 \qquad (27)$$

and

$$0 \leq \psi_i(t) < 1, \quad i = 1, 2. \qquad (28)$$
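For concrete constants, these radii are easy to compute numerically. The following sketch (ours) does so by bisection for the constants of Example 3 below; it relies on our reading of the definition of $\psi_2$, so the output should be checked against the values reported there.

```python
import math

# Sketch (ours): compute the radii of this section by bisection, using the
# constants of Example 3 below. The expression for psi2 follows our reading
# of its definition; compare the output with Example 3's reported values
# (r1 = 0.382692..., r2 = 0.417923...).
L0 = math.e - 1.0
L = L4 = math.exp(1.0 / (math.e - 1.0))
L1 = L0 / 2.0
L2 = L3 = L / 2.0

psi1 = lambda t: L * t / (2.0 * (1.0 - L0 * t))
q = lambda t: L0 * psi1(t) * t - 1.0
p = lambda t: L1 * (1.0 + psi1(t)) * t - 1.0

def bisect(f, a, b, it=100):
    # assumes f(a) < 0 < f(b) and a single sign change on [a, b]
    for _ in range(it):
        m = 0.5 * (a + b)
        a, b = (m, b) if f(m) < 0.0 else (a, m)
    return 0.5 * (a + b)

r1 = 2.0 / (2.0 * L0 + L)              # closed form; solves psi1(t) = 1
rq = bisect(q, 0.0, 1.0 / L0 - 1e-12)  # smallest positive zero of q
rp = bisect(p, 0.0, 1.0 / L0 - 1e-12)  # smallest positive zero of p

def psi2(t):
    a = 1.0 - L0 * psi1(t) * t
    b = 1.0 - L1 * (1.0 + psi1(t)) * t
    return (L * psi1(t)**2 * t / (2.0 * a)
            + L3 * L4 * (1.0 + psi1(t)) * psi1(t) * t / (a * b)
            + L2 * L4 * (1.0 + psi1(t)) * psi1(t) * t / b**2)

r2 = bisect(lambda t: psi2(t) - 1.0, 1e-12, min(rq, rp) - 1e-12)
print(r1, r2, min(r1, r2))             # r = min(r1, r2)
```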
As in the semi-local convergence case, we develop the conditions (C1)–(C4). Suppose that:
(C1)
$x^*$ is a simple solution of the equation $F(x) = 0$.
(C2)
For each $x \in \Omega$,

$$\|F'(x^*)^{-1}(F'(x) - F'(x^*))\| \leq L_0\|x - x^*\|.$$

Set $\Omega_1 = U(x^*, \frac{1}{L_0}) \cap \Omega$.
(C3)
For each $x, y \in \Omega_1$,

$$\|F'(x^*)^{-1}(F'(y) - F'(x))\| \leq L\|y - x\|,$$
$$\|F'(x^*)^{-1}([y, x; F] - F'(x^*))\| \leq L_1(\|y - x^*\| + \|x - x^*\|),$$
$$\|F'(x^*)^{-1}([y, x; F] - F'(x))\| \leq L_2\|y - x\|,$$
$$\|F'(x^*)^{-1}([y, x; F] - F'(y))\| \leq L_3\|y - x\|$$

and

$$\|F'(x^*)^{-1}F(x)\| \leq L_4\|x - x^*\|.$$
(C4)
$U[x^*, r] \subseteq \Omega$.
Then, we can show the local convergence result for method (2).
Theorem 2.
Under conditions (C1)–(C4), further suppose that $x_0 \in U(x^*, r) \setminus \{x^*\}$. Then, the sequence $\{x_n\}$ generated by method (2) is well defined in $U(x^*, r)$, remains in $U(x^*, r)$ for each $n = 0, 1, 2, \ldots$ and converges to $x^*$, so that

$$\|y_n - x^*\| \leq \psi_1(\|x_n - x^*\|)\|x_n - x^*\| \leq \|x_n - x^*\| < r \qquad (29)$$

and

$$\|x_{n+1} - x^*\| \leq \psi_2(\|x_n - x^*\|)\|x_n - x^*\| \leq \|x_n - x^*\|. \qquad (30)$$
Proof. 
Let $z \in U(x^*, r) \setminus \{x^*\}$. Using (C1), (C2), (24) and (25), we obtain

$$\|F'(x^*)^{-1}(F'(z) - F'(x^*))\| \leq L_0\|z - x^*\| \leq L_0 r < 1,$$

so $F'(z)$ is invertible with

$$\|F'(z)^{-1}F'(x^*)\| \leq \frac{1}{1 - L_0\|z - x^*\|}. \qquad (31)$$

The iterate $y_0$ is well defined by (31) for $z = x_0$ and the first substep of method (2). Using (24), (28) (for $i = 1$), (31) (for $z = x_0$) and (C3), we obtain

$$\|y_0 - x^*\| \leq \|F'(x_0)^{-1}F'(x^*)\|\,\Big\|\int_0^1 F'(x^*)^{-1}\big(F'(x^* + \theta(x_0 - x^*)) - F'(x_0)\big)\,d\theta\Big\|\,\|x_0 - x^*\| \leq \frac{L\|x_0 - x^*\|^2}{2(1 - L_0\|x_0 - x^*\|)} = \psi_1(\|x_0 - x^*\|)\|x_0 - x^*\| \leq \|x_0 - x^*\| < r, \qquad (32)$$

showing (29) for $n = 0$ and $y_0 \in U(x^*, r)$. Next, we shall show that $[u, v; F]^{-1} \in L(V_2, V_1)$ for $u, v \in U(x^*, r)$. Indeed, by (24), (26), (C3) and (32), we have

$$\|F'(x^*)^{-1}([y_0, x_0; F] - F'(x^*))\| \leq L_1(\|y_0 - x^*\| + \|x_0 - x^*\|) \leq L_1\big(\psi_1(\|x_0 - x^*\|) + 1\big)\|x_0 - x^*\| = 1 + p(\|x_0 - x^*\|) < 1, \qquad (33)$$

since $p(t) < 0$ on $[0, r_p)$, so

$$\|[y_0, x_0; F]^{-1}F'(x^*)\| \leq \frac{1}{1 - L_1(\|y_0 - x^*\| + \|x_0 - x^*\|)}. \qquad (34)$$
We also have that (31) holds for $z = y_0$. Hence, the iterate $x_1$ is well defined by the second substep of method (2). Then, we can write, in turn,

$$x_1 - x^* = y_0 - x^* - F'(y_0)^{-1}F(y_0) + \big(F'(y_0)^{-1} - [y_0, x_0; F]^{-1}F'(x_0)[y_0, x_0; F]^{-1}\big)F(y_0).$$

However, we obtain

$$F'(y_0)^{-1} - [y_0, x_0; F]^{-1}F'(x_0)[y_0, x_0; F]^{-1} = F'(y_0)^{-1}\big(I - F'(y_0)[y_0, x_0; F]^{-1}F'(x_0)[y_0, x_0; F]^{-1}\big)$$
$$= F'(y_0)^{-1}\big([y_0, x_0; F] - F'(y_0) + F'(y_0) - F'(y_0)[y_0, x_0; F]^{-1}F'(x_0)\big)[y_0, x_0; F]^{-1}$$

and

$$[y_0, x_0; F] - F'(y_0) + F'(y_0) - F'(y_0)[y_0, x_0; F]^{-1}F'(x_0) = \big([y_0, x_0; F] - F'(y_0)\big) + F'(y_0)[y_0, x_0; F]^{-1}\big([y_0, x_0; F] - F'(x_0)\big),$$

so

$$x_1 - x^* = y_0 - x^* - F'(y_0)^{-1}F(y_0) + F'(y_0)^{-1}\big([y_0, x_0; F] - F'(y_0)\big)[y_0, x_0; F]^{-1}F(y_0) + [y_0, x_0; F]^{-1}\big([y_0, x_0; F] - F'(x_0)\big)[y_0, x_0; F]^{-1}F(y_0).$$

In view of (24), (28) (for $i = 2$), (31) (for $z = x_0, y_0$) and (32)–(34), we obtain, in turn,

$$\|x_1 - x^*\| \leq \frac{L\|y_0 - x^*\|^2}{2(1 - L_0\|y_0 - x^*\|)} + \frac{L_3\|y_0 - x_0\|\,L_4\|y_0 - x^*\|}{(1 - L_0\|y_0 - x^*\|)(1 - L_1(\|y_0 - x^*\| + \|x_0 - x^*\|))} + \frac{L_2\|y_0 - x_0\|\,L_4\|y_0 - x^*\|}{(1 - L_1(\|y_0 - x^*\| + \|x_0 - x^*\|))^2} \leq \psi_2(\|x_0 - x^*\|)\|x_0 - x^*\| \leq \|x_0 - x^*\|,$$
showing (30) for $n = 0$ and $x_1 \in U(x^*, r)$. Moreover, replacing $x_0, y_0, x_1$ by $x_j, y_j, x_{j+1}$ in the preceding calculations completes the induction for estimates (29) and (30). Furthermore, from the estimate

$$\|x_{j+1} - x^*\| \leq c\|x_j - x^*\|,$$

where $c = \psi_2(\|x_0 - x^*\|) \in [0, 1)$, we conclude that $\lim_{j\to\infty} x_j = x^*$ and $x_{j+1} \in U(x^*, r)$. □
Remark 2.
(a) The value $r_1$ was given by us in [6] as the radius of convergence for Newton's method. It then follows from (24) that

$$r \leq r_1.$$

Hence, the radius of convergence $r$ for method (2) cannot be larger than that of Newton's method. Notice that the radius of convergence given independently by Rheinboldt [7] and Traub [8] is $\rho = \frac{2}{3K}$, where $K$ is the Lipschitz constant on $\Omega$. We also have $\rho \leq r_1$, since $L_0 \leq K$ and $L \leq K$.
(b) We compute the computational order of convergence (COC), defined by

$$\mathrm{COC} = \ln\frac{\|x_{n+1} - x^*\|}{\|x_n - x^*\|}\Big/\ln\frac{\|x_n - x^*\|}{\|x_{n-1} - x^*\|},$$

or the approximate computational order of convergence (ACOC),

$$\mathrm{ACOC} = \ln\frac{\|x_{n+1} - x_n\|}{\|x_n - x_{n-1}\|}\Big/\ln\frac{\|x_n - x_{n-1}\|}{\|x_{n-1} - x_{n-2}\|}.$$

In this way, we obtain the convergence order in practice and avoid assuming the existence of higher-order Fréchet derivatives of $F$.
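Both quantities are one-liners to evaluate from stored iterates; the following helper functions (ours) implement the two formulas:

```python
import math

def coc(errs):
    # errs[k] = ||x_k - x*||; uses the last three errors
    e2, e1, e0 = errs[-1], errs[-2], errs[-3]
    return math.log(e2 / e1) / math.log(e1 / e0)

def acoc(diffs):
    # diffs[k] = ||x_{k+1} - x_k||; no knowledge of x* is required
    d2, d1, d0 = diffs[-1], diffs[-2], diffs[-3]
    return math.log(d2 / d1) / math.log(d1 / d0)
```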
Next, we present a result on the uniqueness of the solution.
Proposition 2.
Suppose:
(a)
There exists a simple solution $x^*$ of the equation $F(x) = 0$.
(b)
There exists $\bar r \geq r$ such that

$$\bar r < \frac{2}{L_0}. \qquad (38)$$

Set $\Omega_2 = \Omega \cap U[x^*, \bar r]$. Then, the only solution of the equation $F(x) = 0$ in the region $\Omega_2$ is $x^*$.
Proof. 
Let $b \in \Omega_2$ with $F(b) = 0$. Set $Q = \int_0^1 F'(x^* + \theta(b - x^*))\,d\theta$. Then, using (C2) and (38), we obtain

$$\|F'(x^*)^{-1}(Q - F'(x^*))\| \leq \int_0^1 L_0\theta\|b - x^*\|\,d\theta \leq \frac{L_0}{2}\bar r < 1,$$

leading to $b = x^*$, since $Q^{-1} \in L(V_2, V_1)$ and $Q(b - x^*) = F(b) - F(x^*) = 0 - 0 = 0$. □

5. Numerical Experiments

We provide some examples, in which the divided difference is $[x, y; F] = \int_0^1 F'(y + \theta(x - y))\,d\theta$.
Example 1.
Define the function

$$\psi(x) = b_0 x + b_1 + b_2\sin b_3 x, \qquad x_0 = 0,$$

where $b_j$, $j = 0, 1, 2, 3$, are parameters. Then, clearly, for $b_3$ large and $b_2$ small, $\frac{\ell_0}{\ell}$ can be made (arbitrarily) small; that is, $\frac{\ell_0}{\ell} \to 0$.
Example 2.
Consider $V_1 = V_2 = C[0, 1]$, $\Omega = U[0, 1]$ and $Q : \Omega \to V_2$ defined by

$$Q(\psi)(x) = \psi(x) - 5\int_0^1 x\theta\,\psi(\theta)^3\,d\theta.$$

We obtain

$$Q'(\psi)(\xi)(x) = \xi(x) - 15\int_0^1 x\theta\,\psi(\theta)^2\xi(\theta)\,d\theta \quad \text{for each } \xi \in \Omega.$$

Then, since $x^* = 0$, conditions (C1)–(C4) are verified for $L_0 = 7.5$, $L = L_4 = K = 15$, $L_1 = \frac{L_0}{2}$, $L_2 = L_3 = \frac{L}{2}$. The radii are:

$$r = r_1 = 0.066667, \qquad r_2 = 0.109818 \qquad \text{and} \qquad \rho = \frac{2}{3K} = 0.0667.$$
Example 3.
Consider the motion system

$$G_1'(x) = e^x, \qquad G_2'(y) = (e - 1)y + 1, \qquad G_3'(z) = 1$$

with $G_1(0) = G_2(0) = G_3(0) = 0$. Let $G = (G_1, G_2, G_3)$. Let $V_1 = V_2 = \mathbb{R}^3$, $\Omega = U[0, 1]$ and $x^* = (0, 0, 0)^T$. Define the function $G$ on $\Omega$ for $w = (x, y, z)^T$ by

$$G(w) = \Big(e^x - 1,\; \frac{e - 1}{2}y^2 + y,\; z\Big)^T.$$

Then, we obtain

$$G'(w) = \begin{pmatrix} e^x & 0 & 0\\ 0 & (e - 1)y + 1 & 0\\ 0 & 0 & 1 \end{pmatrix}.$$

Hence, conditions (C1)–(C4) are verified for $L_0 = e - 1$, $L = e^{\frac{1}{e - 1}} = L_4$, $L_1 = \frac{L_0}{2}$, $L_2 = L_3 = \frac{L}{2}$ and $K = e$. The radii are:

$$r = r_1 = 0.382692, \qquad r_2 = 0.417923 \qquad \text{and} \qquad \rho = \frac{2}{3K} = 0.2453.$$
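As a check (ours, not part of the paper's reported experiments), method (2) can be run on this system. Since $G$ acts componentwise, the integral divided difference reduces to the diagonal matrix of scalar divided differences:

```python
import numpy as np

E = np.e
G  = lambda w: np.array([np.exp(w[0]) - 1.0,
                         (E - 1.0) / 2.0 * w[1]**2 + w[1],
                         w[2]])
dG = lambda w: np.diag([np.exp(w[0]), (E - 1.0) * w[1] + 1.0, 1.0])

def divdiff(x, y):
    # For this componentwise G, [x, y; G] = diag((G_i(x_i)-G_i(y_i))/(x_i-y_i)),
    # which agrees with the integral definition used in this section.
    gx, gy, d = G(x), G(y), np.diag(dG(x)).copy()
    for i in range(3):
        if abs(x[i] - y[i]) > 1e-14:
            d[i] = (gx[i] - gy[i]) / (x[i] - y[i])
    return np.diag(d)

x = np.array([0.35, 0.35, 0.35])          # x0 inside U(x*, r), r ≈ 0.3827
for n in range(4):
    y = x - np.linalg.solve(dG(x), G(x))  # first substep
    A = divdiff(y, x)
    x = y - np.linalg.solve(A, dG(x) @ np.linalg.solve(A, G(y)))  # second substep
    print(n, np.linalg.norm(x, np.inf))   # error ||x_n - x*||, since x* = 0
```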
Example 4.
Let $V_1$, $V_2$ and $\Omega$ be as in Example 2. It is well known that the boundary value problem [2]

$$\varphi(0) = 0, \qquad \varphi(1) = 1,$$
$$\varphi'' = -\varphi^3 - \sigma\varphi^2$$

can be recast as the Hammerstein-like nonlinear integral equation

$$\varphi(s) = s + \int_0^1 M(s, t)\big(\varphi^3(t) + \sigma\varphi^2(t)\big)\,dt,$$

where $\sigma$ is a parameter and $M(s, t)$ is the Green kernel. Then, define $F : \Omega \to V_2$ by

$$[F(x)](s) = x(s) - s - \int_0^1 M(s, t)\big(x^3(t) + \sigma x^2(t)\big)\,dt.$$

Choose $x_0(s) = s$ and $\Omega = U(x_0, \rho_0)$. Then, clearly, $U(x_0, \rho_0) \subset U(0, \rho_0 + 1)$, since $\|x_0\| = 1$. Suppose that $2\sigma < 5$. Then, conditions (H1)–(H4) are verified for

$$\ell_0 = \frac{2\sigma + 3\rho_0 + 6}{8}, \qquad \ell = \frac{\sigma + 6\rho_0 + 3}{4},$$

$\ell_1 = \frac{\ell_0}{2}$, $\ell_2 = \ell_3 = \frac{\ell}{2}$ and $\eta = \frac{1 + \sigma}{5 - 2\sigma}$. Notice that $\ell_0 < \ell$.
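The following small check (ours; the values of $\sigma$ and $\rho_0$ are made up for illustration) evaluates these constants and tests condition (5):

```python
# Sketch: evaluate the constants of Example 4 for sample parameter values
# (sigma and rho0 below are made up) and test condition (5).
sigma, rho0 = 1.0, 0.5             # requires 2*sigma < 5
l0  = (2 * sigma + 3 * rho0 + 6) / 8.0
l   = (sigma + 6 * rho0 + 3) / 4.0
l1  = l0 / 2.0
eta = (1 + sigma) / (5 - 2 * sigma)
print(l0 < l, l1 * eta < 1.0)      # l0 < l, and condition (5) holds
```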
In general, the radius of convergence decreases when the order of the method increases. However, notice that, in the local convergence Examples 2 and 3, the radii for the fourth-order method (2) compare favorably to those given in [7,8] for Newton's method (see $r$ and $\rho$).

6. Conclusions

The Kung–Traub method was revisited, and its applicability was extended in both the semi-local and the local convergence case, from the real line to the Banach space setting. Our analysis includes error bounds and uniqueness information on $x^*$ not available before, obtained under weak conditions. This idea is very general and can be used to extend the applicability of other methods.

Author Contributions

Conceptualization, S.R., I.K.A., S.G., Á.A.M. and M.I.A.; Data curation, S.R., I.K.A., S.G., Á.A.M. and M.I.A.; Formal analysis, S.R., I.K.A., S.G., Á.A.M. and M.I.A.; Funding acquisition, S.R., I.K.A., S.G., Á.A.M. and M.I.A.; Investigation, S.R., I.K.A., S.G., Á.A.M. and M.I.A.; Methodology, S.R., I.K.A., S.G., Á.A.M. and M.I.A.; Project administration, S.R., I.K.A., S.G., Á.A.M. and M.I.A.; Resources, S.R., I.K.A., S.G., Á.A.M. and M.I.A.; Software, S.R., I.K.A., S.G., Á.A.M. and M.I.A.; Supervision, S.R., I.K.A., S.G., Á.A.M. and M.I.A.; Validation, S.R., I.K.A., S.G., Á.A.M. and M.I.A.; Visualization, S.R., I.K.A., S.G., Á.A.M. and M.I.A.; Writing—original draft, S.R., I.K.A., S.G., Á.A.M. and M.I.A.; Writing—review and editing, S.R., I.K.A., S.G., Á.A.M. and M.I.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. ACM 1974, 21, 643–651. [Google Scholar]
  2. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970; reprinted by SIAM: Philadelphia, PA, USA, 2000. [Google Scholar]
  3. Behl, R.; Maroju, P.; Martinez, E.; Singh, S. A study of the local convergence of a fifth order iterative method. Indian J. Pure Appl. Math. 2020, 51, 439–455. [Google Scholar]
  4. Grau-Sánchez, M.; Grau, À.; Noguera, M. Ostrowski type methods for solving systems of nonlinear equations. Appl. Math. Comput. 2011, 218, 2377–2385. [Google Scholar]
  5. Shakhno, S.M.; Iakymchuk, R.P.; Yarmola, H.P. Convergence analysis of a two step method for the nonlinear squares problem with decomposition of operator. J. Numer. Appl. Math. 2018, 128, 82–95. [Google Scholar]
  6. Potra, F.A.; Pták, V. Nondiscrete Induction and Iterative Processes; Research Notes in Mathematics, 103; Pitman (Advanced Publishing Program): Boston, MA, USA, 1984. [Google Scholar]
  7. Rheinboldt, W.C. An Adaptive Continuation Process of Solving Systems of Nonlinear Equations; Polish Academy of Science, Banach Ctr. Publ. 3: Warsaw, Poland, 1978; pp. 129–142. [Google Scholar]
  8. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice Hall: Hoboken, NJ, USA, 1964. [Google Scholar]
  9. Cătinaş, E. The inexact, inexact perturbed, and quasi-Newton methods are equivalent models. Math. Comp. 2005, 74, 291–301. [Google Scholar] [CrossRef]
  10. Dennis, J.E., Jr.; Schnabel, R.B. Numerical Methods for Unconstrained Optimization and Nonlinear Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1983; SIAM: Philadelphia, PA, USA, 1996. [Google Scholar]
  11. Sharma, J.R.; Guha, R.K.; Sharma, R. An efficient fourth order weighted-Newton method for systems of nonlinear equations. Numer. Algorithms 2013, 62, 307–323. [Google Scholar] [CrossRef]
  12. Traub, J.F.; Werschulz, A.G. Complexity and Information, Lezioni Lince; Lincei Lectures; Cambridge University Press: Cambridge, UK, 1998; p. xii+139. ISBN 0-521-48506-1. [Google Scholar]
  13. Verma, R. New Trends in Fractional Programming; Nova Science Publisher: New York, NY, USA, 2019. [Google Scholar]
  14. Argyros, I.K.; Hilout, S. Weaker conditions for the convergence of Newton’s method. J. Complex. 2012, 28, 364–387. [Google Scholar] [CrossRef] [Green Version]
  15. Argyros, I.K.; Magréñan, A.A. Iterative Methods and Their Dynamics with Applications; CRC Press: New York, NY, USA, 2017. [Google Scholar]
  16. Argyros, I.K.; Magréñan, A.A. A Contemporary Study of Iterative Methods; Academic Press: New York, NY, USA, 2018. [Google Scholar]
  17. Deuflhard, P. Newton Methods for Nonlinear Problems. Affine Invariance and Adaptive Algorithms; Springer Series in Computational Mathematics, 35; Springer: Berlin/Heidelberg, Germany, 2004. [Google Scholar]
  18. Ezquerro, J.A.; Gutiérrez, J.M.; Hernández, M.A.; Romero, N.; Rubio, M.J. The Newton method: From Newton to Kantorovich. Gac. R. Soc. Mat. Esp. 2010, 13, 53–76. (In Spanish) [Google Scholar]
  19. Ezquerro, J.A.; Hernandez, M.A. Newton’s Method: An Updated Approach of Kantorovich’s Theory; Springer: Cham, Switzerland, 2018. [Google Scholar]
  20. Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Pergamon Press: Oxford, UK, 1982. [Google Scholar]
  21. Magréñan, A.A.; Argyros, I.K.; Rainer, J.J.; Sicilia, J.A. Ball convergence of a sixth-order Newton-like method based on means under weak conditions. J. Math. Chem. 2018, 56, 2117–2131. [Google Scholar] [CrossRef]
  22. Magréñan, A.A.; Gutiérrez, J.M. Real dynamics for damped Newton’s method applied to cubic polynomials. J. Comput. Appl. Math. 2015, 275, 527–538. [Google Scholar] [CrossRef]
  23. Nashed, M.Z.; Chen, X. Convergence of Newton-like methods for singular operator equations using outer inverses. Numer. Math. 1993, 66, 235–257. [Google Scholar] [CrossRef]
  24. Proinov, P.D. General local convergence theory for a class of iterative processes and its applications to Newton’s method. J. Complex. 2009, 25, 38–62. [Google Scholar] [CrossRef] [Green Version]
  25. Proinov, P.D. New general convergence theory for iterative processes and its applications to Newton-Kantorovich type theorems. J. Complex. 2010, 26, 3–42. [Google Scholar] [CrossRef] [Green Version]
  26. Shakhno, S.M.; Gnatyshyn, O.P. On an iterative algorithm of order 1.839… for solving nonlinear least squares problems. Appl. Math. Comput. 2005, 161, 253–264. [Google Scholar]
  27. Zabrejko, P.P.; Nguen, D.F. The majorant method in the theory of Newton-Kantorovich approximations and the Pták error estimates. Numer. Funct. Anal. Optim. 1987, 9, 671–684. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
