
Extending the Local Convergence of a Seventh Convergence Order Method without Derivatives

by Ioannis K. Argyros 1,*,†, Debasis Sharma 2,†, Christopher I. Argyros 3,† and Sanjaya Kumar Parhi 4,†

1 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
2 Department of Mathematics, Kalinga Institute of Industrial Technology, Bhubaneswar 751024, India
3 Department of Computing and Technology, Cameron University, Lawton, OK 73505, USA
4 Department of Mathematics, Fakir Mohan University, Vyasa Vihar 756020, India
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Foundations 2022, 2(2), 338-347; https://doi.org/10.3390/foundations2020023
Submission received: 3 March 2022 / Revised: 2 April 2022 / Accepted: 6 April 2022 / Published: 12 April 2022
(This article belongs to the Special Issue Iterative Methods with Applications in Mathematical Sciences)

Abstract: For the purpose of obtaining solutions to Banach-space-valued nonlinear models, we offer a new extended analysis of the local convergence result for a seventh-order iterative approach without derivatives. Existing studies have used assumptions up to the eighth derivative to demonstrate its convergence. However, in our convergence theory, we only use the first derivative. Thus, in contrast to previously derived results, we obtain conclusions on calculable error estimates, convergence radius, and uniqueness region for the solution. As a result, we are able to broaden the utility of this efficient method. In addition, the convergence regions of this scheme for solving polynomial equations with complex coefficients are illustrated using the attraction basin approach. This study is concluded with the validation of our convergence result on application problems.

1. Introduction

Numerous very complicated scientific and engineering phenomena may be treated using nonlinear equations of the form:
$$G(y) = 0,$$
where $G : \Omega \subseteq Y_1 \to Y_2$ is Fréchet-differentiable, $Y_1, Y_2$ are complete normed vector spaces, and $\Omega \subseteq Y_1$ is nonempty, convex, and open. Confronting such nonlinearity has remained a major challenge in mathematics. Analytical solutions to these problems are rarely available in closed form. As a result, scientists and researchers often utilize iterative procedures to obtain the desired answer. Among iterative approaches, Newton's method is often employed to solve these nonlinear equations. The Steffensen method [1,2] is well known among derivative-free iterative schemes. Sharma and Arora [3] deduced the following algorithm, which is a variant of the Steffensen method:
$$\begin{aligned} s_k &= y_k - B_k^{-1} G(y_k),\\ y_{k+1} &= s_k - \bigl(3I - B_k^{-1}(G[s_k, y_k] + G[s_k, w_k])\bigr) B_k^{-1} G(s_k), \end{aligned}$$
where $B_k = G[w_k, y_k]$ is a divided difference of order one, $w_k = y_k + G(y_k)$, and $y_0 \in \Omega$ is an initial point. This algorithm uses only one matrix inversion per step. Furthermore, Wang and Zhang [4] extended method (3) to design a seventh-convergence-order method as follows:
$$\begin{aligned} s_k &= y_k - B_k^{-1} G(y_k),\\ z_k &= s_k - \bigl(3I - B_k^{-1}(G[s_k, y_k] + G[s_k, w_k])\bigr) B_k^{-1} G(s_k),\\ y_{k+1} &= z_k - \bigl(G[z_k, y_k] + G[z_k, s_k] - G[y_k, s_k]\bigr)^{-1} G(z_k). \end{aligned}$$
In addition, numerous novel higher-order iterative strategies [3,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23] have been developed and implemented during the past few years. The majority of these research papers establish convergence theorems for iterative schemes by imposing requirements on higher-order derivatives. Furthermore, these investigations provide no estimates of the convergence radius, the error distances, or the existence and uniqueness region for the solution.
In the research of iterative schemes, it is essential to determine the domain of convergence. The convergence domain is often rather narrow in most circumstances. Thus, without making any additional assumptions, the convergence domain must be extended. Additionally, accurate error distances must be approximated in the convergence investigation of iterative methods. Focusing on these points, we consider a method without derivatives, which is as follows:
$$\begin{aligned} s_k &= y_k - A_k^{-1} G(y_k),\\ z_k &= s_k - \bigl(3I - 2A_k^{-1} G[s_k, y_k]\bigr) A_k^{-1} G(s_k),\\ y_{k+1} &= z_k - \Bigl(\tfrac{13}{4} I - A_k^{-1} G[z_k, s_k]\bigl(\tfrac{7}{2} I - \tfrac{5}{4} A_k^{-1} G[z_k, s_k]\bigr)\Bigr) A_k^{-1} G(z_k), \end{aligned}$$
where $G[\cdot, \cdot] : \Omega \times \Omega \to B(Y_1, Y_2)$, $A_k = G[v_k, q_k]$, $v_k = y_k + G(y_k)$ and $q_k = y_k - G(y_k)$.
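For intuition, method (4) can be sketched in a few lines for a scalar equation, where every divided difference reduces to $G[u, v] = (G(u) - G(v))/(u - v)$. The following Python sketch is ours and is only illustrative; it is not part of the paper's experiments:

```python
# Illustrative sketch (ours): method (4) for a scalar equation g(y) = 0,
# where the divided difference is g[u, v] = (g(u) - g(v)) / (u - v).

def divided_difference(g, u, v):
    """First-order divided difference g[u, v] for scalar g."""
    return (g(u) - g(v)) / (u - v)

def method4(g, y, tol=1e-12, max_iter=50):
    """Derivative-free seventh-order iteration (4) for scalar g(y) = 0."""
    for _ in range(max_iter):
        gy = g(y)
        if abs(gy) < tol:
            return y
        A = divided_difference(g, y + gy, y - gy)   # A_k = G[v_k, q_k]
        s = y - gy / A                              # first substep
        gs = g(s)
        if abs(gs) < tol:
            return s
        z = s - (3 - 2 * divided_difference(g, s, y) / A) * gs / A  # second substep
        gz = g(z)
        if abs(gz) < tol:
            return z
        t = divided_difference(g, z, s) / A         # A_k^{-1} G[z_k, s_k]
        y = z - (13/4 - t * (7/2 - 5/4 * t)) * gz / A  # third substep
    return y

# Example: solve y^2 - 2 = 0 from y0 = 1.4.
root = method4(lambda y: y * y - 2.0, 1.4)
```

Note that no derivative of $g$ appears anywhere; each substep reuses the single "inversion" of $A_k$, mirroring the one-matrix-inversion structure of the scheme.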
However, it is crucial to note that the seventh-order convergence of (4) was established in [24] by the use of conditions on the derivative of order eight, although this scheme is derivative-free. Because the convergence of this scheme relies on derivatives of higher order, its usefulness is reduced. Taking the function $G$, defined on $\Omega = [-\tfrac{1}{2}, \tfrac{3}{2}]$ by:
$$G(y) = \begin{cases} y^3 \ln(y^2) + y^5 - y^4, & \text{if } y \neq 0,\\ 0, & \text{if } y = 0, \end{cases}$$
one can observe that the previous conclusion on the convergence of this method [24] fails to hold because of the unboundedness of $G'''$ on $\Omega$. Aside from that, the analytical outcome in [24] is not sufficient for the calculation of the error $\|y_k - y^*\|$ and the convergence radius. No conclusion can be drawn in [24] concerning the location and uniqueness of $y^*$. The local analysis results allow one to estimate the error $\|y_k - y^*\|$, the convergence radius, and the uniqueness zone for $y^*$. The findings of local convergence, in particular, are very useful since they provide information on the critical problem of identifying starting points. For this reason, we propose a new extended local analysis of the derivative-free method (4). Our work is beneficial in determining the convergence radius, the error $\|y_k - y^*\|$, and the uniqueness region of $y^*$. This technique can be used to extend the applicability of other methods and relevant topics along the same lines [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,25]. In addition, using the attraction basin approach, the regions where this method can find solutions of complex polynomial equations are shown.
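This unboundedness is easy to check numerically. The closed form of $G'''$ below is our own computation (not stated in the paper); the sketch verifies it against a finite-difference approximation and then samples it near the origin:

```python
import math

# Quick numerical check (ours) that G'''(y) is unbounded near y = 0 for
# G(y) = y^3 ln(y^2) + y^5 - y^4. Differentiating three times for y != 0
# gives G'''(y) = 6 ln(y^2) + 60 y^2 - 24 y + 22 (our hand computation).

def G(y):
    return y**3 * math.log(y**2) + y**5 - y**4 if y != 0 else 0.0

def G3(y):
    return 6 * math.log(y**2) + 60 * y**2 - 24 * y + 22

# Sanity-check the closed form against a central finite difference at y = 0.5.
h = 1e-3
fd = (G(0.5 + 2*h) - 2*G(0.5 + h) + 2*G(0.5 - h) - G(0.5 - 2*h)) / (2 * h**3)
assert abs(fd - G3(0.5)) < 1e-3

# |G'''| grows without bound as y -> 0 because of the 6 ln(y^2) term.
samples = [abs(G3(10.0**(-k))) for k in (1, 3, 6, 9)]
```

Since $|G'''|$ blows up on $\Omega$, any analysis requiring bounded derivatives of order three or higher (let alone eight) cannot cover this example.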
The remainder of the paper is organized as follows. Section 2 deduces the theoretical results on the local convergence of method (4). In Section 3, attraction basins for this scheme are shown. The suggested local analysis is verified numerically in Section 4. Finally, conclusions are offered.

2. Local Convergence

We introduce scalar parameters and functions to deal with the local convergence analysis of scheme (4). Let $c \ge 0$, $c_0 \ge 0$, $d \ge 0$ and $M = [0, \infty)$.
Suppose the following:
(1) The function $P_0(t) - 1$ has a smallest zero $R_0 \in M \setminus \{0\}$ for some function $P_0 : M \to M$, which is continuous and non-decreasing. Let $M_0 = [0, R_0)$.
(2) The function $Q_1(t) - 1$ has a smallest zero $r_1 \in M_0 \setminus \{0\}$ for some (continuous and non-decreasing) function $P : M_0 \to M$, with $Q_1 : M_0 \to M$ defined by:
$$Q_1(t) = \frac{P(t)}{1 - P_0(t)}.$$
(3) The function $Q_2(t) - 1$ has a smallest zero $r_2 \in M_0 \setminus \{0\}$ for some functions $P_1 : M_0 \to M$, $P_2 : M_0 \to M$, $P_3 : M_0 \to M$, $P_4 : M_0 \to M$, with $Q_2 : M_0 \to M$ defined by:
$$Q_2(t) = \frac{P_1(t)}{1 - P_0(t)} + \frac{c\,P_2(t)}{(1 - P_0(t))^2}.$$
(4) The function $Q_3(t) - 1$ has a smallest zero $r_3 \in M_0 \setminus \{0\}$, with $Q_3 : M_0 \to M$ defined by:
$$Q_3(t) = \left(\frac{P_3(t)}{1 - P_0(t)} + \frac{c\left(9 + \dfrac{5d}{1 - P_0(t)}\right) P_4(t)}{4\,(1 - P_0(t))^2}\right) Q_2(t).$$
The parameter $r^*$ defined by
$$r^* = \min\{r_j\}, \quad j = 1, 2, 3,$$
is shown next to be a convergence radius for method (4).
Let $M_1 = [0, r^*)$. It follows from the definition of the radius $r^*$ that for all $t \in M_1$:
$$0 \le P_0(t) < 1$$
and
$$0 \le Q_j(t) < 1, \quad j = 1, 2, 3.$$
By $U[y^*, \delta]$, we denote the closure of the open ball $U(y^*, \delta)$ with center $y^* \in \Omega$ and radius $\delta > 0$.
The conditions (A) below shall be used, where $y^*$ is a simple solution of the equation $G(y) = 0$ and the functions $Q_j$ are as previously defined.
Suppose the following:
(a1) $\|G'(y^*)^{-1}(G[y + G(y), y - G(y)] - G'(y^*))\| \le P_0(\|y - y^*\|)$, $\|I + G[y, y^*]\| \le a$ and $\|I - G[y, y^*]\| \le b$
for all $y \in \Omega$ and some $a \ge 0$, $b \ge 0$.
Let $\Omega_0 = \Omega \cap U(y^*, R_0)$.
(a2)
$\|G'(y^*)^{-1}(G[y + G(y), y - G(y)] - G[y, y^*])\| \le P(\|y - y^*\|)$,
$\|G'(y^*)^{-1}(G[s, y^*] - G[y + G(y), y - G(y)])\| \le P_1(\|y - y^*\|)$,
$\|G'(y^*)^{-1}(G[y + G(y), y - G(y)] - G[s, y])\| \le P_2(\|y - y^*\|)$,
$\|G'(y^*)^{-1}(G[y + G(y), y - G(y)] - G[z, y^*])\| \le P_3(\|y - y^*\|)$,
$\|G'(y^*)^{-1}(G[y + G(y), y - G(y)] - G[z, s])\| \le P_4(\|y - y^*\|)$,
$\|G'(y^*)^{-1} G[y, y^*]\| \le c$
and
$\|G[z, s]\| \le d$
for all $y \in \Omega_0$, some $c \ge 0$, $d \ge 0$, and $s$, $z$ given by the first two substeps of method (4).
(a3) $U[y^*, r] \subseteq \Omega$, where $r = \max\{r^*, a r^*, b r^*\}$.
Next, we show the following local convergence result for method (4) using the preceding notation and the conditions (A).
Theorem 1.
Suppose that the conditions (A) hold. Then, the sequence $\{y_k\}$ generated by method (4) converges to $y^*$ provided that the starting point $y_0 \in U(y^*, r^*) \setminus \{y^*\}$.
Proof. 
Mathematical induction shall be used to show the items:
$$y_k \in U(y^*, r^*),$$
$$\|s_k - y^*\| \le Q_1(\|y_k - y^*\|)\,\|y_k - y^*\| \le \|y_k - y^*\| < r^*,$$
$$\|z_k - y^*\| \le Q_2(\|y_k - y^*\|)\,\|y_k - y^*\| \le \|y_k - y^*\|$$
and
$$\|y_{k+1} - y^*\| \le Q_3(\|y_k - y^*\|)\,\|y_k - y^*\| \le \|y_k - y^*\|,$$
where the radius $r^*$ is defined by (6).
By hypothesis, $y_0 \in U(y^*, r^*) \setminus \{y^*\} \subset U(y^*, r^*)$, so (9) holds for $k = 0$. Using (6), (7), and (a1), we obtain:
$$\|G'(y^*)^{-1}(A_0 - G'(y^*))\| \le P_0(\|y_0 - y^*\|) \le P_0(r^*) < 1.$$
Estimate (13), combined with the Banach perturbation lemma on linear invertible operators [5,25], implies that $A_0^{-1} \in B(Y_2, Y_1)$ and:
$$\|A_0^{-1} G'(y^*)\| \le \frac{1}{1 - P_0(\|y_0 - y^*\|)}.$$
Hence, the iterates $s_0$, $z_0$, and $y_1$ are well-defined by the three substeps of method (4), respectively. Moreover, we can write:
$$s_0 - y^* = y_0 - y^* - A_0^{-1} G(y_0) = A_0^{-1}(A_0 - G[y_0, y^*])(y_0 - y^*),$$
$$z_0 - y^* = s_0 - y^* - A_0^{-1} G(s_0) - 2A_0^{-1}(A_0 - G[s_0, y_0]) A_0^{-1} G(s_0)$$
and
$$\begin{aligned} y_1 - y^* &= z_0 - y^* - A_0^{-1} G(z_0) - \Bigl(\tfrac{9}{4} I - A_0^{-1} G[z_0, s_0]\bigl(\tfrac{7}{2} I - \tfrac{5}{4} A_0^{-1} G[z_0, s_0]\bigr)\Bigr) A_0^{-1} G(z_0)\\ &= A_0^{-1}(A_0 - G[z_0, y^*])(z_0 - y^*) - \tfrac{1}{4} A_0^{-1}\bigl(9(A_0 - G[z_0, s_0]) - 5\,G[z_0, s_0]\,A_0^{-1}(A_0 - G[z_0, s_0])\bigr) A_0^{-1} G(z_0). \end{aligned}$$
Notice that $\|y_0 + G(y_0) - y^*\| = \|(I + G[y_0, y^*])(y_0 - y^*)\| \le \|I + G[y_0, y^*]\|\,\|y_0 - y^*\| \le a r^*$ and similarly $\|y_0 - G(y_0) - y^*\| \le b r^*$, so $y_0 + G(y_0),\, y_0 - G(y_0) \in U(y^*, r) \subseteq \Omega$.
Using (6), (8) (for $j = 1$), (14), (15), and (a2), we have:
$$\|s_0 - y^*\| \le \frac{P(\|y_0 - y^*\|)\,\|y_0 - y^*\|}{1 - P_0(\|y_0 - y^*\|)} = Q_1(\|y_0 - y^*\|)\,\|y_0 - y^*\| \le \|y_0 - y^*\| < r^*,$$
showing $s_0 \in U(y^*, r^*)$ and (10) for $k = 0$. Moreover, using (6), (8) (for $j = 2$), (14), (16), (a2), and (18), we obtain:
$$\|z_0 - y^*\| \le \left(\frac{P_1(\|y_0 - y^*\|)}{1 - P_0(\|y_0 - y^*\|)} + \frac{c\,P_2(\|y_0 - y^*\|)}{(1 - P_0(\|y_0 - y^*\|))^2}\right)\|y_0 - y^*\| = Q_2(\|y_0 - y^*\|)\,\|y_0 - y^*\| \le \|y_0 - y^*\|,$$
showing $z_0 \in U(y^*, r^*)$ and (11) for $k = 0$.
Furthermore, using (6), (8) (for $j = 3$), (14), (17), (a2), (18), and (19), we obtain:
$$\|y_1 - y^*\| \le \left(\frac{P_3(\|y_0 - y^*\|)}{1 - P_0(\|y_0 - y^*\|)} + \frac{c\left(9 P_4(\|y_0 - y^*\|) + \dfrac{5 d\, P_4(\|y_0 - y^*\|)}{1 - P_0(\|y_0 - y^*\|)}\right)}{4\,(1 - P_0(\|y_0 - y^*\|))^2}\right)\|z_0 - y^*\| \le Q_3(\|y_0 - y^*\|)\,\|y_0 - y^*\| \le \|y_0 - y^*\|,$$
showing (9) for $k = 1$ and (12) for $k = 0$.
So, item (9) holds for $k = 0, 1$, and (10)–(13) hold for $k = 0$. If we exchange $y_0, s_0, z_0, y_1$ with $y_m, s_m, z_m, y_{m+1}$ in the preceding calculations, the induction is completed. It then follows from the estimate:
$$\|y_{m+1} - y^*\| \le \lambda\,\|y_m - y^*\| < r^*,$$
where $\lambda = Q_3(\|y_0 - y^*\|) \in [0, 1)$, that $y_{m+1} \in U(y^*, r^*)$ and $\lim_{m \to \infty} y_m = y^*$. □
Next, concerning the uniqueness of the solution $y^*$, we have:
Proposition 1.
Suppose the following: 
(i) 
The point $y^* \in \Omega$ is a simple solution of the equation $G(y) = 0$.
(ii) 
$P_5(t) - 1$ has a smallest zero $\rho \in M \setminus \{0\}$.
Let $M_2 = [0, \rho)$.
(iii) 
$\|G'(y^*)^{-1}(G[y, y^*] - G'(y^*))\| \le P_5(\|y - y^*\|)$ for all $y \in \Omega$ and some function $P_5 : M_2 \to M$.
Let $\Omega_1 = \Omega \cap U[y^*, \rho]$.
Then, the only solution of the equation $G(y) = 0$ in the set $\Omega_1$ is $y^*$.
Proof. 
Let T = G [ q , y * ] for some q Ω 1 with G ( q ) = 0 . Then, using (ii) and (iii), we have:
$$\|G'(y^*)^{-1}(T - G'(y^*))\| \le P_5(\|q - y^*\|) \le P_5(\rho) < 1,$$
so $T^{-1} \in B(Y_2, Y_1)$. In view of the identity $T(q - y^*) = G(q) - G(y^*) = 0 - 0 = 0$, we conclude $q = y^*$. □
Remark 1. (a) Let us consider the choices $G[y, s] = \frac{1}{2}(G'(y) + G'(s))$ or $G[y, s] = \int_0^1 G'(y + \theta(s - y))\,d\theta$, or the standard definition of the divided difference when $Y_1 = \mathbb{R}^i$ [5,8,9,15,16,22]. Moreover, suppose:
$$\|G'(y^*)^{-1}(G'(y) - G'(y^*))\| \le h_0(\|y - y^*\|)$$
and
$$\|G'(y^*)^{-1}(G'(y) - G'(s))\| \le h(\|y - s\|),$$
where the functions $h_0 : M \to M$ and $h : M \to M$ are continuous and non-decreasing. Then, under the first or second choice above, it can easily be seen that the hypotheses (A) hold with $\|G'(y^*)^{-1} G[y, y^*]\| \le c_0$ and the choices of functions as given in Example 1.
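As a small illustration (ours, with a toy map of our choosing), the second choice above satisfies the secant property $G[y, s](s - y) = G(s) - G(y)$; for a quadratic map on $\mathbb{R}^2$, Simpson's rule integrates the Jacobian along the segment exactly:

```python
import numpy as np

# Check (ours) of the integral divided difference G[y, s] = \int_0^1 G'(y + t(s - y)) dt
# for a toy smooth map on R^2. Simpson's rule is exact here because the Jacobian
# entries are at most linear in t along the segment.

def G(y):
    return np.array([y[0]**2 - y[1], y[0] * y[1]])

def Gprime(y):
    # Jacobian of G
    return np.array([[2 * y[0], -1.0], [y[1], y[0]]])

def divided_difference(y, s):
    mid = Gprime((y + s) / 2)
    return (Gprime(y) + 4 * mid + Gprime(s)) / 6  # Simpson's rule on [0, 1]

y = np.array([1.0, 2.0])
s = np.array([0.5, -1.0])
lhs = divided_difference(y, s) @ (s - y)  # G[y, s](s - y)
rhs = G(s) - G(y)                         # secant property right-hand side
```

The two sides agree to machine precision, which is exactly the property a divided difference of order one must satisfy.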
(b) Hypotheses (A) can be condensed by using instead the classical, but stronger and less precise, condition for studying methods with divided differences [22]:
$$\|G'(y^*)^{-1}(G[u_1, u_2] - G[u_3, u_4])\| \le P_6(\|u_1 - u_3\|, \|u_2 - u_4\|)$$
for all $u_1, u_2, u_3, u_4 \in \Omega$, where the function $P_6 : M \times M \to M$ is continuous and non-decreasing. However, this condition does not give the largest convergence radius, since all the P functions are at least as small as $P_6(t, t)$.

3. Attraction Basins

Attraction basins are an extremely valuable geometrical tool for measuring the convergence zones of various iteration schemes. Using this tool, we can see all of the starting points that converge to any root when an iterative procedure is applied. This allows us to identify visually which locations are excellent selections as starting points and which ones are not. We select the starting point $z_0 \in E = [-2, 2] \times [-2, 2] \subseteq \mathbb{C}$, and algorithm (4) is applied to 10 polynomials with complex coefficients. The point $z_0$ is a member of the basin of a zero $z^*$ of a test polynomial if $\lim_{n \to \infty} z_n = z^*$, and then $z_0$ is displayed using a specific color associated with $z^*$. According to the number of iterations, we employ light to dark colors for each starting guess $z_0$. The point $z_0 \in E$ is painted black if it is not a member of the attraction basin of any zero of the test polynomial. The iteration process ends when $\|z_n - z^*\| < 10^{-6}$ or after a maximum of 100 iterations. The fractal diagrams are constructed with MATLAB 2019a.
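In place of the authors' MATLAB program, the basin computation can be sketched in Python for $W_1(z) = z^2 + 1$. This is our own sketch; the stopping test on $|W_1(z_n)|$ is our simplification of the criterion above:

```python
# Sketch (ours) of the attraction-basin computation for W1(z) = z^2 + 1 with
# method (4), using the scalar complex divided difference
# g[u, v] = (g(u) - g(v)) / (u - v).

def method4_iterate(g, z0, tol=1e-6, max_iter=100):
    """Return (final iterate, iteration count); count == max_iter means failure."""
    dd = lambda u, v: (g(u) - g(v)) / (u - v)
    y = z0
    try:
        for k in range(max_iter):
            gy = g(y)
            if abs(gy) < tol:
                return y, k
            A = dd(y + gy, y - gy)                  # A_k = G[v_k, q_k]
            s = y - gy / A
            gs = g(s)
            if abs(gs) < tol:
                return s, k + 1
            z = s - (3 - 2 * dd(s, y) / A) * gs / A
            gz = g(z)
            if abs(gz) < tol:
                return z, k + 1
            t = dd(z, s) / A
            y = z - (13/4 - t * (7/2 - 5/4 * t)) * gz / A
    except (ZeroDivisionError, OverflowError):     # divergent / degenerate starts
        pass
    return y, max_iter

W1 = lambda z: z * z + 1
roots = [1j, -1j]

def classify(z0):
    """Index of the root whose basin contains z0, or -1 (painted black)."""
    z, k = method4_iterate(W1, z0)
    if k >= 100:
        return -1
    dists = [abs(z - r) for r in roots]
    return dists.index(min(dists))

# Color a coarse grid over E = [-2, 2] x [-2, 2]; a finer grid (e.g. 400 x 400)
# with a per-root colormap reproduces pictures like Figure 1a.
n = 21
grid = [[classify(complex(-2 + 4 * i / (n - 1), -2 + 4 * j / (n - 1)))
         for i in range(n)] for j in range(n)]
```

Points on the real axis never converge for $W_1$ (the iterates stay real), which is why the basin pictures of such polynomials show a black separating set.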
In the first step, we take $W_1(z) = z^2 + 1$ and $W_2(z) = z^2 + z$ to generate the basins related to their zeros. In Figure 1a, yellow and magenta indicate the attraction basins of the zeros $i$ and $-i$, respectively, of $W_1(z)$. Figure 1b displays the attraction basins related to the zeros $-1$ and $0$ of $W_2(z)$ in magenta and green, respectively. Next, we choose the polynomials $W_3(z) = z^3 + 1$ and $W_4(z) = z^3 + z$. Figure 2a shows the attraction basins associated with the zeros $\frac{1}{2} - \frac{\sqrt{3}}{2}i$, $-1$, and $\frac{1}{2} + \frac{\sqrt{3}}{2}i$ of $W_3(z)$ in cyan, yellow, and magenta, respectively. In Figure 2b, the basins of the zeros $0$, $i$, and $-i$ of $W_4(z)$ are illustrated in cyan, yellow, and magenta, respectively. Furthermore, the complex polynomials $W_5(z) = z^4 + 1$ and $W_6(z) = z^4 + z$ of degree four are considered to demonstrate the attraction basins associated with their zeros. In Figure 3a, the basins of the solutions $0.707106 + 0.707106i$, $0.707106 - 0.707106i$, $-0.707106 - 0.707106i$ and $-0.707106 + 0.707106i$ of $W_5(z) = 0$ are respectively displayed in green, blue, red, and yellow zones. In Figure 3b, convergence to the zeros $-1$, $\frac{1}{2} + \frac{\sqrt{3}}{2}i$, $\frac{1}{2} - \frac{\sqrt{3}}{2}i$ and $0$ of the polynomial $W_6(z)$ is presented in yellow, blue, green, and red, respectively. Furthermore, $W_7(z) = z^5 + 1$ and $W_8(z) = z^5 + z$ of degree five are selected. In Figure 4a, magenta, green, yellow, blue, and red stand for the attraction basins of the zeros $0.809016 + 0.587785i$, $0.809016 - 0.587785i$, $-0.309016 - 0.951056i$, $-1$, and $-0.309016 + 0.951056i$, respectively, of $W_7(z)$. Figure 4b provides the basins related to the solutions $-0.707106 + 0.707106i$, $0$, $0.707106 + 0.707106i$, $0.707106 - 0.707106i$, and $-0.707106 - 0.707106i$ of $W_8(z) = 0$ in blue, green, magenta, yellow, and red, respectively. Lastly, $W_9(z) = z^6 + 1$ and $W_{10}(z) = z^6 + z$ of degree six are considered.
In Figure 5a, the basins of the solutions $0.866025 - 0.500000i$, $0.866025 + 0.500000i$, $i$, $-0.866025 - 0.500000i$, $-0.866025 + 0.500000i$, and $-i$ of $W_9(z) = 0$ are painted in yellow, blue, green, magenta, cyan, and red, respectively. Figure 5b gives the basins related to the roots $-1$, $-0.309016 + 0.951056i$, $0$, $-0.309016 - 0.951056i$, $0.809016 + 0.587785i$, and $0.809016 - 0.587785i$ of $W_{10}(z) = 0$ in green, yellow, red, cyan, magenta, and blue, respectively.

4. Numerical Examples

The numerical verification of the new convergence result is conducted in this section. The following examples are considered.
Example 1
([6]). Let $Y_1 = Y_2 = \mathbb{R}^3$ and $\Omega = U[0, 1]$. Consider $G$ on $\Omega$ for $y = (y_1, y_2, y_3)^T$ as:
$$G(y) = \left(e^{y_1} - 1,\ \frac{e - 1}{2}\,y_2^2 + y_2,\ y_3\right)^T.$$
We have $y^* = (0, 0, 0)^T$,
  • $a = b = \frac{1}{2}\left(3 + e^{\frac{1}{e - 1}}\right)$,
  • $c = \frac{1}{2}\left(1 + e^{\frac{1}{e - 1}}\right) = c_0$,
  • $d = e^{\frac{1}{e - 1}}$,
  • $h_0(t) = (e - 1)\,t$,
  • $h(t) = e^{\frac{1}{e - 1}}\,t$,
  • $P_0(t) = \frac{1}{2}(h_0(at) + h_0(bt))$,
  • $P(t) = \frac{1}{2}(h(c_0 t) + h_0(bt))$,
  • $P_1(t) = \frac{1}{2}(h(Q_1(t)t + at) + h_0(bt))$,
  • $P_2(t) = \frac{1}{2}(h(Q_1(t)t + at) + h(c_0 t))$,
  • $P_3(t) = \frac{1}{2}(h(Q_2(t)t + at) + h_0(bt))$
  • and
  • $P_4(t) = \frac{1}{2}(h(Q_2(t)t + at) + h(Q_1(t)t + at))$. Using Equation (6), the value of $r^*$ is calculated and presented in Table 1.
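The radii of Table 1 can be reproduced by a simple bisection on $Q_j(t) = 1$ over $(0, R_0)$, since each $Q_j$ is increasing there. The following sketch is our own, using the functions listed above:

```python
import math

# Reproducing Table 1 (our own bisection, with the functions of Example 1):
# each radius r_j is the smallest positive root of Q_j(t) = 1.

E = math.exp(1.0 / (math.e - 1.0))          # e^(1/(e-1))
a = b = 0.5 * (3.0 + E)
c = c0 = 0.5 * (1.0 + E)
d = E
h0 = lambda t: (math.e - 1.0) * t
h = lambda t: E * t

P0 = lambda t: 0.5 * (h0(a * t) + h0(b * t))
P = lambda t: 0.5 * (h(c0 * t) + h0(b * t))
Q1 = lambda t: P(t) / (1.0 - P0(t))
P1 = lambda t: 0.5 * (h(Q1(t) * t + a * t) + h0(b * t))
P2 = lambda t: 0.5 * (h(Q1(t) * t + a * t) + h(c0 * t))
Q2 = lambda t: P1(t) / (1.0 - P0(t)) + c * P2(t) / (1.0 - P0(t))**2
P3 = lambda t: 0.5 * (h(Q2(t) * t + a * t) + h0(b * t))
P4 = lambda t: 0.5 * (h(Q2(t) * t + a * t) + h(Q1(t) * t + a * t))
Q3 = lambda t: (P3(t) / (1.0 - P0(t))
                + c * (9.0 + 5.0 * d / (1.0 - P0(t))) * P4(t)
                / (4.0 * (1.0 - P0(t))**2)) * Q2(t)

def smallest_root(Q):
    """Bisection for the smallest t in (0, R0) with Q(t) = 1 (Q increasing there)."""
    lo, hi = 1e-12, 0.999 / ((math.e - 1.0) * a)  # stay below R0, where P0 = 1
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if Q(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return lo

r1, r2, r3 = smallest_root(Q1), smallest_root(Q2), smallest_root(Q3)
r_star = min(r1, r2, r3)
```

Running this yields the values of Table 1, with $r^* = r_3$, i.e., the third substep is the one that limits the convergence radius.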
Example 2.
Consider the following nonlinear system of equations:
$$\begin{aligned} 3y_1^2 y_2 + y_2^2 - 1 + |y_1 - 1| &= 0,\\ y_1^4 + y_1 y_2^3 - 1 + |y_2| &= 0. \end{aligned}$$
Let $u = (u_1, u_2)$ and $G(u) = F(u) + Q(u)$, $F = (F_1, F_2)$, $Q = (Q_1, Q_2)$, where $F_1(u) = 3u_1^2 u_2 + u_2^2 - 1$, $F_2(u) = u_1^4 + u_1 u_2^3 - 1$, $Q_1(u) = |u_1 - 1|$ and $Q_2(u) = |u_2|$ (we write $F$ for the smooth part to avoid overloading the symbol $G$). The divided difference is a $2 \times 2$ real matrix defined for $v = (v_1, v_2)$ by
$$[u, v; F]_{j,1} = \frac{F_j(v_1, v_2) - F_j(u_1, v_2)}{v_1 - u_1}, \qquad [u, v; F]_{j,2} = \frac{F_j(u_1, v_2) - F_j(u_1, u_2)}{v_2 - u_2},$$
if $v_1 \neq u_1$ and $v_2 \neq u_2$. However, if $v_1 = u_1$ or $v_2 = u_2$, then $F'$ is used for $[\cdot, \cdot; F]$. Similarly, replacing $F_j$ by $Q_j$ above defines the divided difference $[\cdot, \cdot; Q]$ provided that $v_1 \neq u_1$ and $v_2 \neq u_2$; if $v_1 = u_1$ or $v_2 = u_2$, the zero $2 \times 2$ matrix is used for $[\cdot, \cdot; Q]$. Choosing the initial points $(5, 5)$ and $(1, 0)$, the application of method (4) gives, after three iterations, the solution $(y_1, y_2) = (0.894655373334687, 0.327826521746298)$ of the given system.
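A possible implementation (ours) of method (4) for this example is sketched below; the per-column fallback to the Jacobian of the smooth part $F$ evaluated at the midpoint is our reading of the rule above, and the iteration count is generous rather than the paper's three steps:

```python
import numpy as np

# Sketch (ours) of method (4) on the nonsmooth system of Example 2, with the
# component-wise divided difference defined in the text.

def G(u):
    u1, u2 = u
    return np.array([3*u1**2*u2 + u2**2 - 1 + abs(u1 - 1),
                     u1**4 + u1*u2**3 - 1 + abs(u2)])

def F_prime(u):
    # Jacobian of the smooth part F (the |.| terms are handled separately).
    u1, u2 = u
    return np.array([[6*u1*u2, 3*u1**2 + 2*u2],
                     [4*u1**3 + u2**3, 3*u1*u2**2]])

def dd(u, v, eps=1e-12):
    """Divided difference [u, v; G] (2x2); where a column denominator vanishes
    we fall back on F' at the midpoint (our choice) and zero for the Q part."""
    M = np.empty((2, 2))
    J = F_prime((u + v) / 2)
    for j in range(2):
        M[j, 0] = ((G((v[0], v[1]))[j] - G((u[0], v[1]))[j]) / (v[0] - u[0])
                   if abs(v[0] - u[0]) > eps else J[j, 0])
        M[j, 1] = ((G((u[0], v[1]))[j] - G((u[0], u[1]))[j]) / (v[1] - u[1])
                   if abs(v[1] - u[1]) > eps else J[j, 1])
    return M

def method4(y, iters=15):
    I = np.eye(2)
    for _ in range(iters):
        Gy = G(y)
        Ainv = np.linalg.inv(dd(y + Gy, y - Gy))    # A_k = G[v_k, q_k]
        s = y - Ainv @ Gy
        z = s - (3*I - 2*Ainv @ dd(s, y)) @ Ainv @ G(s)
        T = Ainv @ dd(z, s)
        y = z - (13/4*I - T @ (7/2*I - 5/4*T)) @ Ainv @ G(z)
    return y

sol = method4(np.array([1.0, 0.0]))
```

Note that no derivative of the nonsmooth terms $|u_1 - 1|$, $|u_2|$ is ever needed away from the fallback case, which is precisely why a derivative-free scheme is attractive here.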

5. Conclusions

By eliminating the Taylor series tool from the existing convergence theorem, an extended local convergence analysis of a seventh-order method without derivatives is developed. Unlike earlier studies, our convergence result requires only the first derivative. In addition, the error estimates, convergence radius, and region of uniqueness for the solution are calculated. As a result, the usefulness of this effective algorithm is enhanced. The convergence zones of this algorithm for solving polynomial equations with complex coefficients are also shown; this aids the selection of starting points for obtaining a particular root. Our convergence result is validated by numerical testing.

Author Contributions

Conceptualization, I.K.A. and D.S.; methodology, I.K.A., D.S., C.I.A. and S.K.P.; software, I.K.A., C.I.A. and D.S.; validation, I.K.A., D.S., C.I.A. and S.K.P.; formal analysis, I.K.A., D.S. and C.I.A.; investigation, I.K.A.; resources, I.K.A., D.S., C.I.A. and S.K.P.; data curation, C.I.A. and S.K.P.; writing—original draft preparation, I.K.A., D.S., C.I.A. and S.K.P.; writing—review and editing, I.K.A. and D.S.; visualization, I.K.A., D.S., C.I.A. and S.K.P.; supervision, I.K.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  2. Traub, J.F. Iterative Methods for Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  3. Sharma, J.R.; Arora, H. An efficient derivative free iterative method for solving systems of nonlinear equations. Appl. Anal. Discr. Math. 2013, 7, 390–403. [Google Scholar] [CrossRef] [Green Version]
  4. Wang, X.; Zhang, T. A family of steffensen type methods with seventh-order convergence. Numer. Algor. 2013, 62, 429–444. [Google Scholar]
  5. Argyros, I.K. Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics; Chui, C.K., Wuytack, L., Eds.; Elsevier: New York, NY, USA, 2007. [Google Scholar]
  6. Argyros, I.K.; George, S. On the complexity of extending the convergence region for Traub’s method. J. Complex. 2020, 56, 101423. [Google Scholar] [CrossRef]
  7. Argyros, I.K. Unified Convergence Criteria for Iterative Banach Space Valued Methods with Applications. Mathematics 2021, 9, 1942. [Google Scholar] [CrossRef]
  8. Argyros, I.K.; Magreñán, Á.A. A Contemporary Study of Iterative Methods; Elsevier: New York, NY, USA, 2018. [Google Scholar]
  9. Argyros, I.K. The Theory and Applications of Iterative Methods, 2nd ed; Engineering Series; CRC Press: Boca Raton, FL, USA, 2022. [Google Scholar]
  10. Argyros, I.K.; Sharma, D.; Argyros, C.I.; Parhi, S.K.; Sunanda, S.K. Extended iterative schemes based on decomposition for nonlinear models. J. Appl. Math. Comput. 2021. [Google Scholar] [CrossRef]
  11. Behl, R.; Bhalla, S.; Magreñán, Á.A.; Moysi, A. An optimal derivative free family of Chebyshev-Halley’s method for multiple zeros. Mathematics 2021, 9, 546. [Google Scholar] [CrossRef]
  12. Cordero, A.; Hueso, J.L.; Martinez, E.; Torregrosa, J.R. A modified Newton-Jarratt's composition. Numer. Algor. 2010, 55, 87–99. [Google Scholar] [CrossRef]
  13. Ezquerro, J.A.; Hernandez, M.A.; Romero, N.; Velasco, A.I. On Steffensen’s method on Banach spaces. J. Comput. Appl. Math. 2013, 249, 9–23. [Google Scholar] [CrossRef]
  14. Grau-Sanchez, M.; Grau, A.; Noguera, M. Frozen divided difference scheme for solving systems of nonlinear equations. J. Comput. Appl. Math. 2011, 235, 1739–1743. [Google Scholar] [CrossRef]
  15. Hernandez, M.A.; Rubio, M.J. A uniparametric family of iterative processes for solving nondifferentiable equations. J. Math. Anal. Appl. 2002, 275, 821–834. [Google Scholar] [CrossRef] [Green Version]
  16. Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Silcock, H.L., Translator; Pergamon Press: Oxford, UK, 1982. [Google Scholar]
  17. Liu, Z.; Zheng, Q.; Zhao, P. A variant of Steffensen’s method of fourth-order convergence and its applications. Appl. Math. Comput. 2010, 216, 1978–1983. [Google Scholar] [CrossRef]
  18. Magreñán, Á.A.; Argyros, I.K.; Rainer, J.J.; Sicilia, J.A. Ball convergence of a sixth-order Newton-like method based on means under weak conditions. J. Math. Chem. 2018, 56, 2117–2131. [Google Scholar] [CrossRef]
  19. Magreñán, Á.A.; Gutiérrez, J.M. Real dynamics for damped Newton’s method applied to cubic polynomials. J. Comput. Appl. Math. 2015, 275, 527–538. [Google Scholar] [CrossRef]
  20. Ren, H.; Wu, Q.; Bi, W. A class of two-step Steffensen type methods with fourth-order convergence. Appl. Math. Comput. 2009, 209, 206–210. [Google Scholar] [CrossRef]
  21. Sharma, D.; Parhi, S.K. On the local convergence of higher order methods in Banach spaces. Fixed Point Theory. 2021, 22, 855–870. [Google Scholar] [CrossRef]
  22. Sharma, J.R.; Arora, H. A novel derivative free algorithm with seventh order convergence for solving systems of nonlinear equations. Numer. Algor. 2014, 67, 917–933. [Google Scholar] [CrossRef]
  23. Sharma, J.R.; Arora, H. Efficient derivative-free numerical methods for solving systems of nonlinear equations. Comp. Appl. Math. 2016, 35, 269–284. [Google Scholar] [CrossRef]
  24. Wang, X.; Zhang, T.; Qian, W.; Teng, M. Seventh-order derivative-free iterative method for solving nonlinear systems. Numer. Algor. 2015, 70, 545–558. [Google Scholar] [CrossRef]
  25. Rall, L.B. Computational Solution of Nonlinear Operator Equations; Robert E. Krieger: New York, NY, USA, 1979. [Google Scholar]
Figure 1. Attraction basins associated with polynomials $W_1(z)$ and $W_2(z)$.
Figure 2. Attraction basins associated with polynomials $W_3(z)$ and $W_4(z)$.
Figure 3. Attraction basins associated with polynomials $W_5(z)$ and $W_6(z)$.
Figure 4. Attraction basins associated with polynomials $W_7(z)$ and $W_8(z)$.
Figure 5. Attraction basins associated with polynomials $W_9(z)$ and $W_{10}(z)$.
Table 1. Convergence radius for Example 1.
$r_1 = 0.1347634094$, $r_2 = 0.0644916806$, $r_3 = 0.0393641483$, $r^* = 0.0393641483$.