Article

Approximating Solutions of Nonlinear Equations Using an Extended Traub Method

by Santhosh George 1, Ioannis K. Argyros 2,*, Christopher I. Argyros 3 and Kedarnath Senapati 1
1 Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, Mangaluru 575 025, India
2 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
3 Department of Computing and Technology, Cameron University, Lawton, OK 73505, USA
* Author to whom correspondence should be addressed.
Foundations 2022, 2(3), 617-623; https://doi.org/10.3390/foundations2030042
Submission received: 24 June 2022 / Revised: 24 July 2022 / Accepted: 25 July 2022 / Published: 27 July 2022
(This article belongs to the Special Issue Iterative Methods with Applications in Mathematical Sciences)

Abstract

The Traub iterates generate a sequence that converges to a solution of a nonlinear equation under certain conditions. The order of convergence has been established provided that the fifth Fréchet-derivative exists. Notice that this derivative does not appear in the Traub method. Therefore, according to the earlier results, there is no guarantee that the Traub method converges if the operator is not at least five times Fréchet-differentiable. However, the Traub method can still converge, since these assumptions are only sufficient. The novelty of our new technique is that only the Fréchet-derivative appearing in the method is assumed to exist in order to prove convergence. Moreover, the new results do not depend on the Traub method itself. Consequently, the same technique can be applied to other methods. The dynamics of this method are also studied. Examples further illustrate the theoretical results.

1. Introduction

The article deals with the challenge of approximating a solution x * of the following nonlinear equation:
H ( x ) = 0 ,
where the operator H : Ω ⊂ X → Y acts between Banach spaces X and Y, and Ω ⊂ X is a nonempty subset. Since the analytical form of x* is not easily attainable, iterative methods are considered for solving (1). Throughout the article, U(x_0, ρ) = { z ∈ X : ‖z − x_0‖ < ρ } and U[x_0, ρ] = { z ∈ X : ‖z − x_0‖ ≤ ρ } for some ρ > 0. When one studies an iterative method, the convergence order is an important issue.
Recall [1] that if
‖x_{k+1} − x*‖ ≤ C ‖x_k − x*‖^q
for some C > 0 and q ≥ 1, then q denotes the order of convergence of the sequence {x_k}, whereas C is the rate of convergence.
This order is obtained in general using Taylor expansions and requires assumptions on derivatives of higher order. This reduces the utility of iterative methods (see [1,2,3,4]).
In [5], the Traub method [6] was extended to the following:
y_k = x_k − H′(x_k)^{−1} H(x_k),
z_k = y_k − H′(x_k)^{−1} H(y_k),
x_{k+1} = z_k − H′(x_k)^{−1} H(z_k),
when X = Y = ℝ. The convergence order was shown to be four using Taylor series expansions, and assumptions on the fifth Fréchet-derivative of H were used to obtain this order.
The order can, however, be found without using high-order derivatives, by means of the computational order of convergence (COC):
ξ = ln( ‖x_{k+1} − x*‖ / ‖x_k − x*‖ ) / ln( ‖x_k − x*‖ / ‖x_{k−1} − x*‖ ),
or the approximate computational order of convergence (ACOC):
ξ_1 = ln( ‖x_{k+1} − x_k‖ / ‖x_k − x_{k−1}‖ ) / ln( ‖x_k − x_{k−1}‖ / ‖x_{k−1} − x_{k−2}‖ ).
In this way, the order of convergence can be estimated without involving derivatives of order higher than one.
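For illustration, the following is a minimal Python sketch (not part of the original article) of how the COC and ACOC could be estimated from a stored iterate history; the function names and the Newton-type demo sequence are illustrative choices only.

```python
import numpy as np

def coc(iterates, x_star):
    # COC: needs the last three iterates and the known solution x_star.
    e = [np.linalg.norm(np.atleast_1d(np.asarray(x, dtype=float) - x_star))
         for x in iterates[-3:]]
    return np.log(e[2] / e[1]) / np.log(e[1] / e[0])

def acoc(iterates):
    # ACOC: needs the last four iterates; no knowledge of the solution is required.
    x = [np.atleast_1d(np.asarray(v, dtype=float)) for v in iterates[-4:]]
    d = [np.linalg.norm(x[i + 1] - x[i]) for i in range(3)]
    return np.log(d[2] / d[1]) / np.log(d[1] / d[0])

# Demo: three Newton steps for x**2 - 2 = 0; both estimates come out close to 2.
xs = [1.5]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x ** 2 - 2.0) / (2.0 * x))
print(coc(xs, np.sqrt(2.0)), acoc(xs))
```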
The article extends method (2) as follows:
y_k^1 = x_k − a H′(x_k)^{−1} H(x_k),
y_k^2 = y_k^1 − H′(x_k)^{−1} H(y_k^1),
⋮
x_{k+1} = y_k^m = y_k^{m−1} − H′(x_k)^{−1} H(y_k^{m−1}),
where a ∈ ℝ is a parameter and m is a fixed natural number. If m = 3 and a = 1, we obtain method (2). Method (3) was shown to be of order m + 1 in [5] using Taylor expansions, when X = Y = ℝ and a = 1. However, no estimates on ‖x_k − x*‖ or information concerning the uniqueness ball of the solution were obtained.
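To make the scheme concrete, here is a minimal Python/NumPy sketch of method (3) for systems (the article itself contains no code, so the function name, stopping rule, and demo system below are illustrative assumptions):

```python
import numpy as np

def extended_traub(H, dH, x0, a=1.0, m=3, tol=1e-8, max_iter=50):
    # Method (3): every outer step reuses the Jacobian frozen at x_k for all m substeps.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        J = dH(x)
        y = x - a * np.linalg.solve(J, H(x))      # first substep, damped by the parameter a
        for _ in range(m - 1):                    # remaining m - 1 substeps with the same J
            y = y - np.linalg.solve(J, H(y))
        if np.linalg.norm(y - x) < tol:
            return y
        x = y
    return x

# Demo on the system x**3 - y = 0, y**3 - x = 0 (Example 3 below).
H = lambda v: np.array([v[0] ** 3 - v[1], v[1] ** 3 - v[0]])
dH = lambda v: np.array([[3 * v[0] ** 2, -1.0], [-1.0, 3 * v[1] ** 2]])
print(extended_traub(H, dH, x0=[0.8, 1.2], a=1.0, m=3))   # approaches the solution (1, 1)
```

With m = 3 and a = 1, this reduces to method (2).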
As a motivation, let X = Y = ℝ and Ω = [−1/2, 3/2]. Define the function f on the domain Ω by
f(ς) = ς³ log ς² + ς⁵ − ς⁴ if ς ≠ 0, and f(ς) = 0 if ς = 0.
This definition provides the following third derivative:
f′′′(ς) = 6 log ς² + 60ς² − 24ς + 22.
Thus, the third derivative f′′′ is unbounded on Ω. That is, the convergence of method (2) or method (3) is not assured by the earlier articles [6,7,8,9], which require the existence of higher-order derivatives.
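For completeness, the intermediate derivatives behind this claim can be written out (a routine computation not displayed in the article):

```latex
\begin{aligned}
f'(\varsigma)   &= 3\varsigma^{2}\log\varsigma^{2} + 2\varsigma^{2} + 5\varsigma^{4} - 4\varsigma^{3},\\
f''(\varsigma)  &= 6\varsigma\log\varsigma^{2} + 10\varsigma + 20\varsigma^{3} - 12\varsigma^{2},\\
f'''(\varsigma) &= 6\log\varsigma^{2} + 60\varsigma^{2} - 24\varsigma + 22,
\end{aligned}
\qquad\text{so } f'''(\varsigma)\to-\infty \text{ as } \varsigma\to 0.
```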
The remaining four sections of the article address the local convergence analysis, numerical examples, the dynamics, and the conclusions for method (3), respectively.

2. Convergence

The assumptions (H) are used. Assume the following:
(H1)
x* ∈ Ω solves Equation (1) and is simple.
(H2)
∃ a minimal positive solution ρ of the following equation:
φ_0(t) − 1 = 0,
where φ_0 : [0, ∞) → [0, ∞) is some nondecreasing and continuous function such that the following is the case for all w ∈ Ω:
‖H′(x*)^{−1}(H′(x*) − H′(w))‖ ≤ φ_0(‖x* − w‖).
(H3)
∃ continuous and nondecreasing functions φ : [0, ρ) → [0, ∞) and φ_1 : [0, ρ) → [0, ∞) such that
‖H′(x*)^{−1}(H′(z) − H′(w))‖ ≤ φ(‖z − w‖)
and
‖H′(x*)^{−1} H′(w)‖ ≤ φ_1(‖x* − w‖)
hold for all z, w ∈ Ω_0 = Ω ∩ U(x*, ρ).
Define functions ψ_j : [0, ρ) → [0, ∞), j = 1, 2, …, m, by the following:
ψ_1(t) = [ ∫_0^1 φ((1 − τ)t) dτ + |1 − a| ∫_0^1 φ_1(τ t) dτ ] / (1 − φ_0(t)),
ψ_2(t) = [ 1 + ( ∫_0^1 φ_1(τ ψ_1(t) t) dτ ) / (1 − φ_0(t)) ] ψ_1(t),
ψ_j(t) = [ 1 + ( ∫_0^1 φ_1(τ ψ_{j−1}(t) t) dτ ) / (1 − φ_0(t)) ] ψ_{j−1}(t),
j = 2, 3, …, m. In particular, for j = m, this gives the following:
ψ_m(t) = [ 1 + ( ∫_0^1 φ_1(τ ψ_{m−1}(t) t) dτ ) / (1 − φ_0(t)) ] ψ_{m−1}(t),
and
(H4)
The equations ψ_i(t) − 1 = 0, i = 1, 2, …, m, have minimal solutions r_i ∈ (0, ρ), respectively. Define the following parameter:
R = min{ r_i };
and
(H5)
U(x*, R) ⊂ Ω.
Next, the convergence is shown for method (3).
Theorem 1.
Assume that conditions (H) hold. Then, if x_0 ∈ U(x*, R) \ {x*}, the sequence {x_k} generated by method (3) is well-defined in U(x*, R), remains in U(x*, R) for all k = 0, 1, 2, …, and converges to x*.
Proof. 
Let u ∈ U(x*, R). Then, using (H1) and (H2), one obtains the following:
‖H′(x*)^{−1}(H′(x*) − H′(u))‖ ≤ φ_0(‖x* − u‖) ≤ φ_0(R) < 1;
thus, H′(u)^{−1} ∈ L(Y, X) in view of the Banach perturbation lemma concerning inverses of linear operators [3], and
‖H′(u)^{−1} H′(x*)‖ ≤ 1 / (1 − φ_0(‖u − x*‖)).
In particular, the iterates y_0^1, y_0^2, …, y_0^m are well-defined by method (3). Using the first substep of this method, one can write the following:
y_0^1 − x* = x_0 − x* − H′(x_0)^{−1} H(x_0) + (1 − a) H′(x_0)^{−1} H(x_0)
= (H′(x_0)^{−1} H′(x*)) ∫_0^1 H′(x*)^{−1} (H′(x_0) − H′(x* + τ(x_0 − x*))) dτ (x_0 − x*) + (1 − a)(H′(x_0)^{−1} H′(x*))(H′(x*)^{−1} H(x_0)).
Using (H3), (5) (for u = x 0 ), and (7), one obtains the following:
‖y_0^1 − x*‖ ≤ [ ∫_0^1 φ((1 − τ)‖x_0 − x*‖) dτ + |1 − a| ∫_0^1 φ_1(τ‖x_0 − x*‖) dτ ] ‖x_0 − x*‖ / (1 − φ_0(‖x_0 − x*‖)) ≤ ψ_1(‖x_0 − x*‖) ‖x_0 − x*‖ ≤ ‖x_0 − x*‖ < R,
from the definition of ψ_1, r_1, and R; thus, y_0^1 ∈ U(x*, R). Similarly, from the second, third, …, j-th substeps of method (3), one obtains the following:
y_0^2 − x* = y_0^1 − x* − H′(x_0)^{−1} H(y_0^1),
so
‖y_0^2 − x*‖ ≤ [ 1 + ( ∫_0^1 φ_1(τ ψ_1(‖x_0 − x*‖)‖x_0 − x*‖) dτ ) / (1 − φ_0(‖x_0 − x*‖)) ] ‖y_0^1 − x*‖ ≤ ψ_2(‖x_0 − x*‖) ‖x_0 − x*‖ ≤ ‖x_0 − x*‖,
and, in general,
‖y_0^j − x*‖ ≤ [ 1 + ( ∫_0^1 φ_1(τ ‖y_0^{j−1} − x*‖) dτ ) / (1 − φ_0(‖x_0 − x*‖)) ] ‖y_0^{j−1} − x*‖ ≤ ψ_j(‖x_0 − x*‖) ‖x_0 − x*‖ ≤ ‖x_0 − x*‖,
so y_0^2, …, y_0^m ∈ U(x*, R). In particular, for j = m, one obtains the following:
‖x_1 − x*‖ ≤ c ‖x_0 − x*‖,
where c = ψ_m(‖x_0 − x*‖) ∈ [0, 1). Replacing x_0, y_0^1, …, y_0^m in the preceding estimates by x_k, y_k^1, …, y_k^m, one arrives at the following:
‖x_{k+1} − x*‖ ≤ c ‖x_k − x*‖ ≤ c^{k+1} ‖x_0 − x*‖ < R,
from which we deduce lim_{k→∞} x_k = x* and x_{k+1} ∈ U(x*, R). □
A uniqueness result follows for the solution.
Proposition 1.
Assume that x* solves the equation H(x) = 0 and is simple. Then, x* is the only solution of Equation (1) in the set Ω_1 = Ω ∩ U[x*, R̄], provided that there exists R̄ ≥ R satisfying the following:
∫_0^1 φ_0(τ R̄) dτ < 1.
Proof. 
Consider p ∈ Ω_1 solving the equation H(x) = 0. Define the linear operator M = ∫_0^1 H′(x* + τ(p − x*)) dτ. Then, by using (H2) and (10), the following is the case:
‖H′(x*)^{−1}(H′(x*) − M)‖ ≤ ∫_0^1 φ_0(τ ‖p − x*‖) dτ ≤ ∫_0^1 φ_0(τ R̄) dτ < 1.
Therefore, we conclude p = x*, since M^{−1} exists and 0 = H(p) − H(x*) = M(p − x*). □

3. Numerical Experiments

The radius of convergence can be obtained by using Formula (4) for the examples in this section.
Example 1.
Let X = Y = ℝ³, D = U[0, 1], x* = (0, 0, 0)^T. Define the function H on D, for v = (v_1, v_2, v_3)^T, by the following:
H(v) = ( e^{v_1} − 1, ((e − 1)/2) v_2² + v_2, v_3 )^T.
Then, the Fréchet-derivative is given by the following:
H′(v) = diag( e^{v_1}, (e − 1)v_2 + 1, 1 ),
so conditions (H) hold for φ_0(ς) = (e − 1)ς, φ(ς) = e^{1/(e−1)} ς, and φ_1(ς) = e^{1/(e−1)}. Hence, r_1 = 0.3775, r_2 = 0.2026, r_3 = 0.0976, and R = 0.0976.
ξ = 3.6483 e 07 , ξ 1 = 0.5758 .
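As an illustration of how the radii in (H4) can be evaluated in practice, the following rough Python sketch (not part of the original article) approximates r_1, r_2, r_3 as the smallest positive roots of ψ_i(t) − 1 = 0 for functions of the form used in Example 1, under the assumption a = 1; the helper routines are illustrative, and the digits obtained may differ slightly from the tabulated values depending on the precise φ functions employed.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

e = np.e
phi0 = lambda t: (e - 1.0) * t                 # phi_0 of Example 1
phi  = lambda t: e ** (1.0 / (e - 1.0)) * t    # phi of Example 1
phi1 = lambda t: e ** (1.0 / (e - 1.0))        # phi_1 of Example 1 (constant)

rho = brentq(lambda t: phi0(t) - 1.0, 1e-12, 10.0)   # (H2): smallest positive root of phi0(t) = 1

def psi1(t):
    # With a = 1 the |1 - a| term disappears from psi_1.
    return quad(lambda tau: phi((1.0 - tau) * t), 0.0, 1.0)[0] / (1.0 - phi0(t))

def psi_next(psi_prev, t):
    return (1.0 + quad(lambda tau: phi1(tau * psi_prev(t) * t), 0.0, 1.0)[0]
            / (1.0 - phi0(t))) * psi_prev(t)

psi2 = lambda t: psi_next(psi1, t)
psi3 = lambda t: psi_next(psi2, t)

radii = [brentq(lambda t, f=f: f(t) - 1.0, 1e-12, rho - 1e-12) for f in (psi1, psi2, psi3)]
print("r_i =", radii, " R =", min(radii))
```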
Example 2.
Consider X = Y = C[0, 1], D = U[0, 1], and H : D → Y defined by
H(ϕ)(x) = ϕ(x) − 5 ∫_0^1 x θ ϕ(θ)³ dθ.
We have the following:
(H′(ϕ)ξ)(x) = ξ(x) − 15 ∫_0^1 x θ ϕ(θ)² ξ(θ) dθ, for each ξ ∈ D.
Then, we obtain x * = 0 , so φ 0 ( t ) = 7.5 t , φ ( t ) = 15 t , and φ 1 ( t ) = 2 . Hence, r 1 = 0.0199 , r 2 = 0.0028 , r 3 = 0.0049 , and R = 0.0028 .

4. Basins of Attraction

The Fatou sets or basins of attraction [2,8,9] of method (3), denoted by Fato_s, are defined as Fato_s = { x : x is an initial point from which (3) converges to a solution of a given equation }. The complement of Fato_s is called the Julia set. We consider three problems that are systems of polynomials in two variables and compute Fato_s associated with each root of the corresponding systems, shown in Figure 1:
Example 3.
x³ − y = 0,
y³ − x = 0,
with solutions {(−1, −1), (0, 0), (1, 1)}.
Example 4.
3x²y − y³ = 0,
x³ − 3xy² − 1 = 0,
with solutions {(−1/2, −√3/2), (−1/2, √3/2), (1, 0)}.
Example 5.
x² + y² − 4 = 0,
3x² + 7y² − 16 = 0,
with solutions {(−√3, −1), (−√3, 1), (√3, −1), (√3, 1)}.
For each test problem, we chose a = 0.3939 to compute Fato_s and to study the corresponding dynamics. For this, we consider the region R = { (x, y) ∈ ℝ² : x ∈ [−2, 2], y ∈ [−2, 2] }, which contains all the roots of the test problems considered. An equispaced grid of 401 × 401 points in R is taken as the set of initial guesses X_0 for scheme (3). The scheme is iterated up to a maximum of 50 iterations with a fixed tolerance of 10⁻⁸. If this accuracy is not achieved within 50 iterations, the iteration started from X_0 is declared non-convergent, and the corresponding point X_0 is colored black. In this manner, we distinguish Fato_s by assigning a distinct color to each root of each problem.
Figure 1 demonstrates Fato_s corresponding to each root for method (3). The black region denotes the Julia set.
The figures presented in this work were generated on a 4-core, 64-bit Windows machine with an Intel Core i7-3770 processor, using the MATLAB programming language.
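For readers who wish to experiment, the following Python sketch (the article's own computations were carried out in MATLAB) reproduces the general procedure described above for the system of Example 3; the color-index convention, the convergence test against the known roots, and the plain double loop are illustrative simplifications.

```python
import numpy as np

H  = lambda v: np.array([v[0] ** 3 - v[1], v[1] ** 3 - v[0]])
dH = lambda v: np.array([[3 * v[0] ** 2, -1.0], [-1.0, 3 * v[1] ** 2]])
roots = np.array([[-1.0, -1.0], [0.0, 0.0], [1.0, 1.0]])   # solutions of Example 3

def basin_index(x0, a=0.3939, m=3, tol=1e-8, max_iter=50):
    # Index of the root reached by method (3) from x0, or -1 (Julia set, colored black).
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        try:
            J = dH(x)
            y = x - a * np.linalg.solve(J, H(x))
            for _ in range(m - 1):
                y = y - np.linalg.solve(J, H(y))
        except np.linalg.LinAlgError:
            return -1
        x = y
        dist = np.linalg.norm(roots - x, axis=1)
        if dist.min() < tol:
            return int(dist.argmin())
    return -1

# 401 x 401 equispaced grid on [-2, 2] x [-2, 2], as described above (slow but straightforward);
# the resulting integer image can be rendered with, e.g., matplotlib's imshow.
grid = np.linspace(-2.0, 2.0, 401)
image = np.array([[basin_index((x, y)) for x in grid] for y in grid])
```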

5. Conclusions

The local convergence and the dynamics of the extended Traub method (3) have been studied under weaker conditions than before. The technique used extends the applicability of the Traub method to equations with operators that are fewer than five times Fréchet-differentiable. The new technique does not depend on the particular method. Thus, it can be used on other methods [1,4,7,9].

Author Contributions

All authors have contributed equally. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables, Volume 30 of Classics in Applied Mathematics; Society for Industrial and Applied Mathematics (SIAM): Philadelphia, PA, USA, 2000. [Google Scholar]
  2. Argyros, I.K. Unified Convergence Criteria for Iterative Banach Space Valued Methods with Applications. Mathematics 2021, 9, 1942. [Google Scholar] [CrossRef]
  3. Argyros, I.K. The Theory and Applications of Iteration Methods, 2nd ed.; Engineering Series; CRC Press: Boca Raton, FL, USA; Taylor and Francis Group: Abingdon, UK, 2022. [Google Scholar]
  4. Ostrowski, A.M. Solution of Equations and Systems of Equations, 2nd ed.; Pure and Applied Mathematics; Academic Press: New York, NY, USA; London, UK, 1966; Volume 9. [Google Scholar]
  5. Amat, S.; Busquier, S.; Bermudez, C.; Plaza, S. On two families of high order Newton type methods. Appl. Math. Lett. 2012, 25, 2209–2222. [Google Scholar] [CrossRef]
  6. Magréñan, A.A.; Gutiérrez, J.M. Real dynamics for damped Newton’s method applied to cubic polynomials. J. Comput. Appl. Math. 2015, 275, 527–538. [Google Scholar] [CrossRef]
  7. Behl, R.; Maroju, P.; Martinez, E.; Singh, S. A study of the local convergence of a fifth order iterative method. Indian J. Pure Appl. Math. 2020, 51, 439–455. [Google Scholar]
  8. Petković, M.S.; Neta, B.; Petković, L.D.; Dzunić, J. Multipoint methods for solving nonlinear equations: A survey. Appl. Math. Comput. 2014, 226, 635–660. [Google Scholar] [CrossRef]
  9. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice Hall: Hoboken, NJ, USA, 1964. [Google Scholar]
Figure 1. Fato_s and the Julia set for Examples 3–5.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
