
The Local Convergence of a Three-Step Sixth-Order Iterative Approach with the Basin of Attraction

Kasmita Devi, Prashanth Maroju, Eulalia Martínez and Ramandeep Behl
1 Department of Mathematics, VIT-AP University, Amaravati 522237, India
2 Instituto Universitario de Matemática Multidisciplinar, Universitat Politècnica de València, 46022 València, Spain
3 Department of Mathematics, Faculty of Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia
* Author to whom correspondence should be addressed.
Symmetry 2024, 16(6), 742; https://doi.org/10.3390/sym16060742
Submission received: 8 May 2024 / Revised: 27 May 2024 / Accepted: 6 June 2024 / Published: 14 June 2024
(This article belongs to the Section Mathematics)

Abstract

In this study, we introduce an iterative approach exhibiting sixth-order convergence for the solution of nonlinear equations. The method attains sixth-order convergence by using three evaluations of the function and two evaluations of the first-order derivative per iteration. We examined the theoretical convergence of our method through the convergence theorem, which substantiates the convergence order. Furthermore, we analyzed the local convergence of our proposed technique by employing a hypothesis that involves the first-order derivative of the function Θ alongside the Lipschitz conditions. To evaluate the performance and efficacy of our iterative method, we provide a comparative analysis against existing methods based on various standard numerical problems. Finally, graphical comparisons employing basins of attraction are presented to illustrate the dynamic behavior of the iterative method in the complex plane.

1. Introduction

Many problems in science and engineering involve nonlinear equations of the form
\Theta(\alpha) = 0, \quad (1)
where \Theta is a differentiable function defined on a convex subset D of S with values in S, and S is \mathbb{R} or \mathbb{C}. Many equations arising from real-world problems are too complicated to be solved analytically; therefore, researchers have focused on iterative methods.
Iterative root-finding techniques are essential for approximating solutions to nonlinear equations in the field of nonlinear analysis. These techniques provide important resources for approximating solutions in a variety of disciplines, including computer science, physics, engineering, and economics. Alternatively, optimization algorithms have been used to find solutions of nonlinear equations or systems of nonlinear equations. Over the past ten years, many new optimization algorithms [1] have been developed. By enhancing the performance of these algorithms, researchers have achieved more accurate solutions. Numerical instability may arise during the computation of certain problems; in such cases, iterative methods can be designed to improve numerical stability and prevent divergence. One of the basic iteration formulas for finding an approximate root of the nonlinear Equation (1) is the well-known Newton–Raphson method, which is given by
\alpha_{n+1} = \alpha_n - \frac{\Theta(\alpha_n)}{\Theta'(\alpha_n)}. \quad (2)
This method is known for its quadratic convergence. To enhance the efficiency and the order of convergence [2], many scholars have introduced iterative techniques with a higher order of convergence and higher efficiency. Several important methods with third-order convergence are given in [3,4,5]; some of the important methods with fourth-order convergence are given in [6,7]; important methods with fifth-order convergence are given in [8,9]. Inspired by these directions, we aim to introduce a sixth-order iterative technique.
In the context of convergence analysis, semilocal and local convergence analyses [10,11] play a crucial role in understanding the behavior of iterative algorithms. Semilocal and local convergence analyses allow for a thorough evaluation of how well an iterative algorithm performs in the vicinity of a solution. Semilocal convergence analysis provides information about the behavior of the iterative sequence and ensures its convergence in a specific domain depending on the initial assumptions. We are interested in local convergence analysis in this work because it gives us the domain of possible initial guesses around the exact solution. In addition, the domain gives us the freedom to choose the initial guess that guarantees convergence. Researchers can choose appropriate initial guesses to obtain convergence radii, optimize convergence parameters, and ensure the algorithm’s reliability in real-world applications across a variety of scientific and engineering domains with the help of these convergence analyses.
In this current research, we derive a new sixth-order iterative method for solving nonlinear equations. This approach requires three function evaluations and two derivative evaluations per iteration. The advantage of this technique is that it is free of second-order derivatives and free of parameters, which helps to reduce the computational cost.
The structure of this paper is outlined as follows: We present our proposed iterative method and examine the convergence analysis of this method in Section 2; we address the local convergence properties of our proposed iterative method in Section 3; we present numerical results in Section 4; we explore the basin of attraction in Section 5; finally, we summarize our findings in the concluding Section 6.

2. Proposed Iterative Method and Convergence Analysis

In this section, we propose a new sixth-order iterative method and present its convergence analysis. The method is given by
\beta_n = \alpha_n - \frac{\Theta(\alpha_n)}{\Theta'(\alpha_n)}, \qquad
\gamma_n = \alpha_n - \frac{\Theta(\alpha_n)\left(\Theta(\alpha_n) - \Theta(\beta_n)\right)}{\Theta'(\alpha_n)\left(\Theta(\alpha_n) - 2\Theta(\beta_n)\right)}, \qquad
\alpha_{n+1} = \gamma_n - \frac{\Theta(\gamma_n)\left(\Theta(\beta_n) - \Theta(\gamma_n)\right)}{\Theta'(\beta_n)\left(\Theta(\beta_n) - 2\Theta(\gamma_n)\right)}. \quad (3)
The new scheme (3) uses a total of five functional evaluations per iteration and attains sixth-order convergence. The efficiency index of a method is p^{1/n}, where p is the order of convergence and n is the number of functional evaluations per iteration; hence, the efficiency index of method (3) is 6^{1/5} ≈ 1.430969. Many researchers have used various approximations and interpolations to develop optimal methods that reduce the number of functional evaluations per iteration. However, even if the number of functional evaluations is minimized, the number of algebraic operations may increase [12]. Although the proposed method does not reach the optimal bound of the Kung–Traub conjecture, it still provides an effective alternative. When approximating the derivative of a function, the symmetrically divided difference operator plays a vital role. The advantage of symmetrically divided differences is that they allow the development of derivative-free iterative methods, which are especially useful when the nonlinear operator is not differentiable or when the computation of the derivative is difficult or expensive.
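To make the structure of scheme (3) concrete, the following is a minimal Python sketch of the iteration for a scalar equation. It is our own illustration (not the code used later for the numerical experiments): the function names, the test equation Θ(α) = α³ − 2, and the stopping rule are assumptions chosen only for demonstration.

```python
def sixth_order_step(theta, theta_prime, alpha):
    """One iteration of scheme (3): a Newton sub-step followed by two corrector
    sub-steps that reuse already computed values.  Per iteration it uses
    Theta(alpha_n), Theta(beta_n), Theta(gamma_n), Theta'(alpha_n), and
    Theta'(beta_n), i.e., five functional evaluations."""
    f_a = theta(alpha)
    df_a = theta_prime(alpha)
    beta = alpha - f_a / df_a                                        # first sub-step
    f_b = theta(beta)
    gamma = alpha - f_a * (f_a - f_b) / (df_a * (f_a - 2.0 * f_b))   # second sub-step
    f_g = theta(gamma)
    df_b = theta_prime(beta)
    return gamma - f_g * (f_b - f_g) / (df_b * (f_b - 2.0 * f_g))    # third sub-step


def solve(theta, theta_prime, alpha0, tol=1e-12, max_iter=25):
    """Iterate scheme (3) until two successive approximations agree to 'tol'."""
    alpha = alpha0
    for k in range(1, max_iter + 1):
        alpha_new = sixth_order_step(theta, theta_prime, alpha)
        if abs(alpha_new - alpha) < tol:
            return alpha_new, k
        alpha = alpha_new
    return alpha, max_iter


# Illustrative run on Theta(alpha) = alpha^3 - 2, whose root is 2^(1/3) = 1.259921...
root, iters = solve(lambda a: a**3 - 2.0, lambda a: 3.0 * a**2, alpha0=1.0)
print(root, iters)
```

Each call to `sixth_order_step` performs exactly the three sub-steps of (3) with the five evaluations counted above; the only extra work in `solve` is the stopping test.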
Theorem 1.
Suppose that the function \Theta : D \subseteq \mathbb{R} \to \mathbb{R} is sufficiently differentiable in an open neighborhood D of its zero \alpha^*. If the initial approximation \alpha_0 is sufficiently close to the root \alpha^*, then the iterative method (3) attains sixth-order convergence and satisfies the following error equation:
e_{n+1} = \left(c_2^5 - c_2 c_3^2\right) e_n^6 + O\!\left(e_n^7\right). \quad (4)
Proof. 
Let us set e_n = \alpha_n - \alpha^* and c_k = \frac{\Theta^{(k)}(\alpha^*)}{k!\,\Theta'(\alpha^*)}, k \geq 2. Expanding \Theta(\alpha_n) about \alpha^* by using the Taylor series, we obtain
\Theta(\alpha_n) = \Theta'(\alpha^*)\left[e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + c_5 e_n^5 + c_6 e_n^6 + O\!\left(e_n^7\right)\right] \quad (5)
and
\Theta'(\alpha_n) = \Theta'(\alpha^*)\left[1 + 2c_2 e_n + 3c_3 e_n^2 + 4c_4 e_n^3 + 5c_5 e_n^4 + 6c_6 e_n^5 + O\!\left(e_n^6\right)\right]. \quad (6)
From (5) and (6), we obtain
\frac{\Theta(\alpha_n)}{\Theta'(\alpha_n)} = e_n - c_2 e_n^2 + 2\left(c_2^2 - c_3\right)e_n^3 + \left(7c_2 c_3 - 4c_2^3 - 3c_4\right)e_n^4 + \left(8c_2^4 - 20c_2^2 c_3 + 10c_2 c_4 + 6c_3^2 - 4c_5\right)e_n^5 + O\!\left(e_n^6\right). \quad (7)
By substituting (7) into (3), we obtain
\beta_n = \alpha^* + c_2 e_n^2 - 2\left(c_2^2 - c_3\right)e_n^3 - \left(7c_2 c_3 - 4c_2^3 - 3c_4\right)e_n^4 - \left(8c_2^4 - 20c_2^2 c_3 + 10c_2 c_4 + 6c_3^2 - 4c_5\right)e_n^5 + O\!\left(e_n^6\right). \quad (8)
Now, using the Taylor series expansion of \Theta(\beta_n) around \alpha^*, we have
\Theta(\beta_n) = \Theta'(\alpha^*)\left[c_2 e_n^2 - 2\left(c_2^2 - c_3\right)e_n^3 - \left(7c_2 c_3 - 5c_2^3 - 3c_4\right)e_n^4 - \left(12c_2^4 - 24c_2^2 c_3 + 10c_2 c_4 + 6c_3^2 - 4c_5\right)e_n^5 + O\!\left(e_n^6\right)\right]. \quad (9)
By adopting expressions (5), (6), and (9), we obtain
\frac{\Theta(\alpha_n)\left(\Theta(\alpha_n) - \Theta(\beta_n)\right)}{\Theta'(\alpha_n)\left(\Theta(\alpha_n) - 2\Theta(\beta_n)\right)} = e_n + \left(c_2 c_3 - c_2^3\right)e_n^4 + 2\left(2c_2^4 - 4c_2^2 c_3 + c_2 c_4 + c_3^2\right)e_n^5 + \left(-10c_2^5 + 30c_2^3 c_3 - 12c_2^2 c_4 + 3\left(c_5 - 6c_3^2\right)c_2 + 7c_3 c_4\right)e_n^6 + O\!\left(e_n^7\right) \quad (10)
and
\gamma_n = \alpha^* + \left(c_2^3 - c_2 c_3\right)e_n^4 - 2\left(2c_2^4 - 4c_2^2 c_3 + c_2 c_4 + c_3^2\right)e_n^5 + \left(10c_2^5 - 30c_2^3 c_3 + 12c_2^2 c_4 - 3\left(c_5 - 6c_3^2\right)c_2 - 7c_3 c_4\right)e_n^6 + O\!\left(e_n^7\right). \quad (11)
Now, we have
\frac{\Theta(\gamma_n)\left(\Theta(\beta_n) - \Theta(\gamma_n)\right)}{\Theta'(\beta_n)\left(\Theta(\beta_n) - 2\Theta(\gamma_n)\right)} = \left(c_2^3 - c_2 c_3\right)e_n^4 - 2\left(2c_2^4 - 4c_2^2 c_3 + c_2 c_4 + c_3^2\right)e_n^5 + \left(9c_2^5 - 30c_2^3 c_3 + 12c_2^2 c_4 + \left(19c_3^2 - 3c_5\right)c_2 - 7c_3 c_4\right)e_n^6 + O\!\left(e_n^7\right). \quad (12)
Using expressions (11) and (12) in the third sub-step of method (3), we have
e_{n+1} = \left(c_2^5 - c_2 c_3^2\right)e_n^6 + O\!\left(e_n^7\right). \quad (13)
Hence, the above error Equation (13) shows the sixth-order convergence. □
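As an independent cross-check of Theorem 1 (our own verification sketch, not part of the original derivation), the error equation can be reproduced symbolically. Normalizing \alpha^* = 0 and \Theta'(\alpha^*) = 1, the polynomial \Theta(\alpha) = \alpha + c_2\alpha^2 + \cdots + c_6\alpha^6 carries exactly the constants c_k used above, and one step of (3) can be expanded in powers of e_n with SymPy:

```python
import sympy as sp

e = sp.symbols('e')                                # e = alpha_n - alpha*
c2, c3, c4, c5, c6 = sp.symbols('c2 c3 c4 c5 c6')
x = sp.symbols('x')

# Normalized model: alpha* = 0 and Theta'(alpha*) = 1.
Theta_x = x + c2*x**2 + c3*x**3 + c4*x**4 + c5*x**5 + c6*x**6
dTheta_x = sp.diff(Theta_x, x)
T = lambda z: Theta_x.subs(x, z)
dT = lambda z: dTheta_x.subs(x, z)

beta = e - T(e) / dT(e)                                                # first sub-step
gamma = e - T(e) * (T(e) - T(beta)) / (dT(e) * (T(e) - 2 * T(beta)))   # second sub-step
new = gamma - T(gamma) * (T(beta) - T(gamma)) / (dT(beta) * (T(beta) - 2 * T(gamma)))

err = sp.series(new, e, 0, 7).removeO()
print([sp.expand(err.coeff(e, k)) for k in range(6)])   # expected: six zeros
print(sp.expand(err.coeff(e, 6)))                       # expected: c2**5 - c2*c3**2, cf. (13)
```

The expansion may take a few seconds; the printed coefficient of e^6 should agree with the error Equation (13).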

3. Local Convergence

In this section, we present the local convergence analysis of method (3). Let U(v, r) and \bar{U}(v, r) denote the open and closed balls, respectively, with center v and radius r > 0. In addition, we choose parameters \lambda_0 > 0, \lambda > 0, and M > 0 with the constraint \lambda_0 \leq \lambda.
Theorem 2.
Let us suppose that \Theta : D \subseteq S \to S is a differentiable function. Assume that there exist \alpha^* \in D and parameters \lambda_0 > 0, \lambda > 0, and M > 0 such that, for each \alpha, \beta \in D, the following conditions hold:
\Theta(\alpha^*) = 0, \quad \Theta'(\alpha^*)^{-1} \in L(S, S), \quad (14)
\left\|\Theta'(\alpha^*)^{-1}\left(\Theta'(\alpha) - \Theta'(\alpha^*)\right)\right\| \leq \lambda_0 \|\alpha - \alpha^*\|, \quad (15)
\left\|\Theta'(\alpha^*)^{-1}\left(\Theta'(\alpha) - \Theta'(\beta)\right)\right\| \leq \lambda \|\alpha - \beta\|, \quad (16)
\left\|\Theta'(\alpha^*)^{-1}\Theta'(\alpha)\right\| \leq M, \quad (17)
and
\bar{U}(\alpha^*, r) \subseteq D. \quad (18)
Then, the sequence generated by method (3) for \alpha_0 \in U(\alpha^*, r) \setminus \{\alpha^*\} is well defined, remains in \bar{U}(\alpha^*, r) for each n = 0, 1, 2, 3, \ldots, and converges to \alpha^*, where r is the radius of convergence given in (36). Moreover, the following estimates hold:
\|\beta_n - \alpha^*\| \leq g_1(\|\alpha_n - \alpha^*\|)\|\alpha_n - \alpha^*\| \leq \|\alpha_n - \alpha^*\| < r, \quad (19)
\|\gamma_n - \alpha^*\| \leq g_3(\|\alpha_n - \alpha^*\|)\|\alpha_n - \alpha^*\| \leq \|\alpha_n - \alpha^*\| < r, \quad (20)
and
\|\alpha_{n+1} - \alpha^*\| \leq g_6(\|\alpha_n - \alpha^*\|)\|\alpha_n - \alpha^*\| \leq \|\alpha_n - \alpha^*\| < r, \quad (21)
where the radius r and the functions g_1, g_3, and g_6 are to be determined below. Furthermore, suppose that there exists R \in [r, \frac{2}{\lambda_0}) such that \bar{U}(\alpha^*, R) \subseteq D. Then, the limit point \alpha^* is the only solution of \Theta(\alpha) = 0 in \bar{U}(\alpha^*, R).
Proof. 
Let \alpha_0 \in U(\alpha^*, r) \setminus \{\alpha^*\} be the initial point in the domain D.
From condition (15), since \alpha_0 \in D, we have
\left\|\Theta'(\alpha^*)^{-1}\left(\Theta'(\alpha_0) - \Theta'(\alpha^*)\right)\right\| \leq \lambda_0\|\alpha_0 - \alpha^*\|.
Assume that \|\alpha_0 - \alpha^*\| < \frac{1}{\lambda_0}; then the right-hand side is less than 1, and the Banach lemma on invertible operators yields that \Theta'(\alpha_0)^{-1} exists with
\left\|\Theta'(\alpha_0)^{-1}\Theta'(\alpha^*)\right\| \leq \frac{1}{1 - \lambda_0\|\alpha_0 - \alpha^*\|}.
Moreover, the first sub-step of method (3) can be written as
\beta_n = \alpha_n - \Theta'(\alpha_n)^{-1}\Theta(\alpha_n). \quad (23)
Using n = 0 in (23), we obtain
\beta_0 = \alpha_0 - \Theta'(\alpha_0)^{-1}\Theta(\alpha_0),
which further yields
\beta_0 - \alpha^* = \alpha_0 - \alpha^* - \Theta'(\alpha_0)^{-1}\Theta(\alpha_0) = \Theta'(\alpha_0)^{-1}\left[\Theta'(\alpha_0)(\alpha_0 - \alpha^*) - \Theta(\alpha_0)\right] = \Theta'(\alpha_0)^{-1}\int_0^1 \left[\Theta'(\alpha_0) - \Theta'\!\left(\alpha^* + \theta(\alpha_0 - \alpha^*)\right)\right](\alpha_0 - \alpha^*)\, d\theta.
Taking the norm on both sides, we have
\|\beta_0 - \alpha^*\| \leq \left\|\Theta'(\alpha_0)^{-1}\Theta'(\alpha^*)\right\| \int_0^1 \left\|\Theta'(\alpha^*)^{-1}\left[\Theta'(\alpha_0) - \Theta'\!\left(\alpha^* + \theta(\alpha_0 - \alpha^*)\right)\right]\right\| d\theta\; \|\alpha_0 - \alpha^*\| \leq \frac{\lambda\|\alpha_0 - \alpha^*\|^2}{2\left(1 - \lambda_0\|\alpha_0 - \alpha^*\|\right)} = g_1(\|\alpha_0 - \alpha^*\|)\|\alpha_0 - \alpha^*\| \leq \|\alpha_0 - \alpha^*\| < r.
Hence, for n = 0, we have \beta_0 \in U(\alpha^*, r), where
g_1(t) = \frac{\lambda t}{2\left(1 - \lambda_0 t\right)}.
We now define the auxiliary function \sigma_1(t) = g_1(t) - 1 and obtain the first condition on the parameter r: solving g_1(t) = 1 gives \lambda t + 2\lambda_0 t = 2, that is, t = \frac{2}{2\lambda_0 + \lambda}, so we set
r_1 = \frac{2}{2\lambda_0 + \lambda} < \frac{1}{\lambda_0}.
Since \sigma_1(0) = g_1(0) - 1 < 0 and \sigma_1(t) \to +\infty as t \to (1/\lambda_0)^-, it follows from the intermediate value theorem that the function \sigma_1 has its smallest root, namely r_1, in the interval (0, 1/\lambda_0). This implies
0 < r_1 < \frac{1}{\lambda_0}
and
0 < g_1(t) < 1 \quad \text{for all } t \in (0, r_1).
From the second step of (3), putting n = 0 , we obtain
\gamma_0 = \alpha_0 - \left(\Theta(\alpha_0) - 2\Theta(\beta_0)\right)^{-1}\left(\Theta(\alpha_0) - \Theta(\beta_0)\right)\Theta'(\alpha_0)^{-1}\Theta(\alpha_0). \quad (28)
Subtracting \alpha^* from both sides of (28), we obtain
\gamma_0 - \alpha^* = \alpha_0 - \alpha^* - \left(\Theta(\alpha_0) - 2\Theta(\beta_0)\right)^{-1}\left(\Theta(\alpha_0) - \Theta(\beta_0)\right)\Theta'(\alpha_0)^{-1}\Theta(\alpha_0).
Let us set P = \left[\Theta'(\alpha^*)(\alpha_0 - \alpha^*)\right]^{-1} and T = \Theta(\alpha_0) - 2\Theta(\beta_0). Then, we have
\|I - PT\| = \left\|\left[\Theta'(\alpha^*)(\alpha_0 - \alpha^*)\right]^{-1}\left[\Theta'(\alpha^*)(\alpha_0 - \alpha^*) - \Theta(\alpha_0) + 2\Theta(\beta_0)\right]\right\| \leq \frac{1}{\|\alpha_0 - \alpha^*\|}\left(\left\|\int_0^1 \Theta'(\alpha^*)^{-1}\left[\Theta'\!\left(\alpha^* + \theta(\alpha_0 - \alpha^*)\right) - \Theta'(\alpha^*)\right](\alpha_0 - \alpha^*)\, d\theta\right\| + 2\left\|\Theta'(\alpha^*)^{-1}\Theta(\beta_0)\right\|\right) \leq \frac{\lambda_0\|\alpha_0 - \alpha^*\|}{2} + 2M g_1(\|\alpha_0 - \alpha^*\|) = \frac{\lambda_0\|\alpha_0 - \alpha^*\|}{2} + \frac{M\lambda\|\alpha_0 - \alpha^*\|}{1 - \lambda_0\|\alpha_0 - \alpha^*\|} = g_2(\|\alpha_0 - \alpha^*\|) \leq g_2(r) < 1
and, by the Banach lemma, \left(\Theta(\alpha_0) - 2\Theta(\beta_0)\right)^{-1} exists with
\left\|\left(\Theta(\alpha_0) - 2\Theta(\beta_0)\right)^{-1}\Theta'(\alpha^*)\right\| \leq \frac{1}{\|\alpha_0 - \alpha^*\|\left(1 - g_2(\|\alpha_0 - \alpha^*\|)\right)},
where
g_2(t) = \frac{\lambda_0 t}{2} + \frac{M \lambda t}{1 - \lambda_0 t}, \qquad \sigma_2(t) = g_2(t) - 1.
Setting \sigma_2(t) = 0 leads to the quadratic \lambda_0^2 t^2 - \left(3\lambda_0 + 2 M \lambda\right)t + 2 = 0. Since \sigma_2(0) = -1 < 0 and \sigma_2(t) \to +\infty as t \to (1/\lambda_0)^-, it follows from the intermediate value theorem that the function \sigma_2 has its smallest root r_2 in the interval (0, 1/\lambda_0). This implies
0 < r_2 < \frac{1}{\lambda_0},
and, rewriting the second sub-step of (3) as \gamma_0 = \beta_0 - \left(\Theta(\alpha_0) - 2\Theta(\beta_0)\right)^{-1}\Theta(\beta_0)\,\Theta'(\alpha_0)^{-1}\Theta(\alpha_0) and taking norms, we obtain
\|\gamma_0 - \alpha^*\| \leq \|\beta_0 - \alpha^*\| + \left\|\left(\Theta(\alpha_0) - 2\Theta(\beta_0)\right)^{-1}\Theta'(\alpha^*)\right\| \left\|\Theta'(\alpha^*)^{-1}\Theta(\beta_0)\right\| \left\|\Theta'(\alpha_0)^{-1}\Theta'(\alpha^*)\right\| \left\|\Theta'(\alpha^*)^{-1}\Theta(\alpha_0)\right\|
\leq g_1(\|\alpha_0 - \alpha^*\|)\|\alpha_0 - \alpha^*\| \left[ 1 + \frac{M^2}{\left(1 - g_2(\|\alpha_0 - \alpha^*\|)\right)\left(1 - \lambda_0\|\alpha_0 - \alpha^*\|\right)} \right]
= g_3(\|\alpha_0 - \alpha^*\|)\|\alpha_0 - \alpha^*\| \leq \|\alpha_0 - \alpha^*\| < r.
Since \|\gamma_0 - \alpha^*\| < r, we have \gamma_0 \in U(\alpha^*, r), where
g_3(t) = g_1(t)\left[1 + \frac{M^2}{\left(1 - g_2(t)\right)\left(1 - \lambda_0 t\right)}\right], \qquad \sigma_3(t) = g_3(t) - 1, \qquad \sigma_3(0) = g_3(0) - 1 < 0,
and \sigma_3(t) \to +\infty as t \to r_2^-. It follows from the intermediate value theorem that the function \sigma_3 has its smallest root r_3 in the interval (0, r_2). This implies
0 < r_3 < r_2
and
0 < g_3(t) < 1 \quad \text{for all } t \in (0, r_3).
Using n = 0 in the third sub-step of (3), we obtain
\alpha_1 = \gamma_0 - \left(\Theta(\beta_0) - 2\Theta(\gamma_0)\right)^{-1}\left(\Theta(\beta_0) - \Theta(\gamma_0)\right)\Theta'(\beta_0)^{-1}\Theta(\gamma_0),
\alpha_1 - \alpha^* = \gamma_0 - \alpha^* - \left(\Theta(\beta_0) - 2\Theta(\gamma_0)\right)^{-1}\left(\Theta(\beta_0) - \Theta(\gamma_0)\right)\Theta'(\beta_0)^{-1}\Theta(\gamma_0).
Let us now take
P = \left[\Theta'(\alpha^*)(\beta_0 - \alpha^*)\right]^{-1}, \qquad T = \Theta(\beta_0) - 2\Theta(\gamma_0).
Then, proceeding exactly as for the second sub-step, we obtain
\|I - PT\| \leq \frac{\lambda_0\|\beta_0 - \alpha^*\|}{2} + \frac{2M\|\gamma_0 - \alpha^*\|}{\|\beta_0 - \alpha^*\|} \leq \frac{\lambda_0 g_1(\|\alpha_0 - \alpha^*\|)\|\alpha_0 - \alpha^*\|}{2} + \frac{2M g_3(\|\alpha_0 - \alpha^*\|)}{g_1(\|\alpha_0 - \alpha^*\|)} = g_4(\|\alpha_0 - \alpha^*\|) \leq g_4(r) < 1,
where
g_4(t) = \frac{\lambda_0 g_1(t)\, t}{2} + \frac{2M g_3(t)}{g_1(t)}, \qquad \sigma_4(t) = g_4(t) - 1, \qquad \sigma_4(0) = g_4(0) - 1 < 0.
Since \sigma_4(t) \to +\infty as t \to (1/\lambda_0)^-, it follows from the intermediate value theorem that the function \sigma_4 has its smallest root r_4 in the interval (0, 1/\lambda_0). This implies 0 < r_4 < 1/\lambda_0, and, by the Banach lemma, \left(\Theta(\beta_0) - 2\Theta(\gamma_0)\right)^{-1} exists with
\left\|\left(\Theta(\beta_0) - 2\Theta(\gamma_0)\right)^{-1}\Theta'(\alpha^*)\right\| \leq \frac{1}{\|\beta_0 - \alpha^*\|\left(1 - g_4(\|\alpha_0 - \alpha^*\|)\right)},
which further provides, for the invertibility of \Theta'(\beta_0),
\left\|\Theta'(\alpha^*)^{-1}\left(\Theta'(\beta_0) - \Theta'(\alpha^*)\right)\right\| \leq \lambda\|\beta_0 - \alpha^*\| \leq \lambda g_1(\|\alpha_0 - \alpha^*\|)\|\alpha_0 - \alpha^*\| < 1,
where
g_5(t) = \lambda t, \qquad \sigma_5(t) = g_5(t) - 1, \qquad \sigma_5(0) = g_5(0) - 1 < 0,
and solving \lambda t - 1 = 0 gives t = 1/\lambda, so that r_5 = 1/\lambda. Hence, \Theta'(\beta_0)^{-1} exists and \left\|\Theta'(\beta_0)^{-1}\Theta'(\alpha^*)\right\| \leq \frac{1}{1 - g_5(\|\beta_0 - \alpha^*\|)},
which further yields
\alpha_1 - \alpha^* = \gamma_0 - \alpha^* - \left(\Theta(\beta_0) - 2\Theta(\gamma_0)\right)^{-1}\left(\Theta(\beta_0) - \Theta(\gamma_0)\right)\Theta'(\beta_0)^{-1}\Theta(\gamma_0),
so that
\|\alpha_1 - \alpha^*\| \leq \|\gamma_0 - \alpha^*\| + \left\|\left(\Theta(\beta_0) - 2\Theta(\gamma_0)\right)^{-1}\Theta'(\alpha^*)\right\| \left\|\Theta'(\alpha^*)^{-1}\left(\Theta(\beta_0) - \Theta(\gamma_0)\right)\right\| \left\|\Theta'(\beta_0)^{-1}\Theta'(\alpha^*)\right\| \left\|\Theta'(\alpha^*)^{-1}\Theta(\gamma_0)\right\|
\leq g_3(\|\alpha_0 - \alpha^*\|)\|\alpha_0 - \alpha^*\| \left[ 1 + \frac{M^2\left(g_1(\|\alpha_0 - \alpha^*\|) + g_3(\|\alpha_0 - \alpha^*\|)\right)}{\left(1 - g_4(\|\alpha_0 - \alpha^*\|)\right)\left(1 - g_5(\|\alpha_0 - \alpha^*\|)\right)} \right]
= g_6(\|\alpha_0 - \alpha^*\|)\|\alpha_0 - \alpha^*\| \leq \|\alpha_0 - \alpha^*\| < r.
Since \|\alpha_1 - \alpha^*\| < r, we have \alpha_1 \in U(\alpha^*, r), where
g_6(t) = g_3(t)\left[1 + \frac{M^2\left(g_1(t) + g_3(t)\right)}{\left(1 - g_4(t)\right)\left(1 - g_5(t)\right)}\right]
and \sigma_6(t) = g_6(t) - 1 with \sigma_6(0) = g_6(0) - 1 < 0. Since \sigma_6(t) \to +\infty as t \to (1/\lambda_0)^-, it follows from the intermediate value theorem that the function \sigma_6 has its smallest root r_6 in the interval (0, 1/\lambda_0). This implies
0 < r_6 < \frac{1}{\lambda_0}.
The radius of convergence is now defined as
r = \min\{r_1, r_2, r_3, r_4, r_5, r_6\} < \frac{1}{\lambda_0}. \quad (36)
The estimates above show that, for n = 0, the iterates \beta_0, \gamma_0, and \alpha_1 remain in U(\alpha^*, r). Repeating the same arguments with \alpha_k, \beta_k, \gamma_k, \alpha_{k+1} in place of \alpha_0, \beta_0, \gamma_0, \alpha_1, we obtain
\|\alpha_{k+1} - \alpha^*\| \leq \|\alpha_k - \alpha^*\| < r \quad \text{for every } k \geq 0. \quad (37)
From (37), we deduce that \lim_{k \to \infty} \alpha_k = \alpha^* and \alpha_{k+1} \in U(\alpha^*, r).

Uniqueness

Suppose that there is another solution \beta^* \in \bar{U}(\alpha^*, R) with \Theta(\beta^*) = 0 and \beta^* \neq \alpha^*.
Let us consider
T = \int_0^1 \Theta'\!\left(\beta^* + \theta(\alpha^* - \beta^*)\right) d\theta.
Then, using (15), we obtain
\left\|\Theta'(\alpha^*)^{-1}\left(T - \Theta'(\alpha^*)\right)\right\| = \left\|\int_0^1 \Theta'(\alpha^*)^{-1}\left[\Theta'\!\left(\beta^* + \theta(\alpha^* - \beta^*)\right) - \Theta'(\alpha^*)\right] d\theta\right\| \leq \int_0^1 \lambda_0 (1 - \theta)\|\alpha^* - \beta^*\|\, d\theta = \frac{\lambda_0}{2}\|\alpha^* - \beta^*\| \leq \frac{\lambda_0}{2} R < 1.
Hence, by the Banach lemma, T^{-1} exists. Since
0 = \Theta(\alpha^*) - \Theta(\beta^*) = T(\alpha^* - \beta^*),
we conclude that \alpha^* = \beta^*, which contradicts the assumption \beta^* \neq \alpha^*. Therefore, \alpha^* is the unique solution of \Theta(\alpha) = 0 in \bar{U}(\alpha^*, R). □

4. Numerical Examples

This section is devoted to checking the efficacy of our proposed method. Therefore, we conducted a comprehensive numerical analysis and compared the performance of our proposed method with existing techniques of the same order [12,13,14]. The numerical results also demonstrate the improvement in the radii of convergence obtained with the proposed iterative method. The calculations were performed with Mathematica 11.3 on a computer with the following configuration:
  • Operating system: Microsoft Windows 11;
  • System manufacturer: HP;
  • Processor: 11th-generation Intel(R) Core i3;
  • RAM: 8 GB;
  • System type: 64-bit operating system.
In our numerical experiments, Tables 1 and 2 present the following numerical findings:
  • the number of iterations n;
  • the approximated root;
  • the functional value at the approximated root;
  • the absolute difference between consecutive iterations;
  • the computational order of convergence (COC), which can be estimated as sketched after this list;
  • the CPU time, i.e., the execution time of the computations in Mathematica 11.3.
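One common way to estimate the COC from consecutive iterates (our own illustration, not the code used to produce the tables) is the approximation ρ ≈ ln(|α_{n+1} − α_n| / |α_n − α_{n−1}|) / ln(|α_n − α_{n−1}| / |α_{n−1} − α_{n−2}|), as in the following sketch:

```python
import math

def coc(xs):
    """Approximate computational order of convergence from the last four iterates:
    rho ~ ln(|x3 - x2| / |x2 - x1|) / ln(|x2 - x1| / |x1 - x0|)."""
    x0, x1, x2, x3 = xs[-4:]
    return math.log(abs(x3 - x2) / abs(x2 - x1)) / math.log(abs(x2 - x1) / abs(x1 - x0))

# Quick illustration with Newton's method on x^2 - 2 = 0: the estimate should be close to 2.
xs = [1.0]
for _ in range(5):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))
print(coc(xs))
```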

Application to Real-Life Problems

Example 1
((Design of a spherical tank) [15]). This type of problem is used in engineering and design when planning water storage solutions for supplying water to small villages in developed countries.
Let us assume the following:
  • v = the volume of the spherical tank.
  • h = the depth of water in the tank.
  • R = the radius of the tank.
The volume of the spherical tank is
v = \frac{\pi h^2 (3R - h)}{3} \quad (38)
Equation (38) gives the volume of the spherical cap (segment) of depth h in a spherical tank of radius R.
For R = 3 meters and v = 30 cubic meters, Equation (38) becomes
\pi h^3 - 9\pi h^2 + 90 = 0. \quad (39)
We need to find the depth h to which the tank must be filled so that it holds 30 cubic meters (i.e., 30 m³).
We observe from Table 1 that the tank must be filled to a depth of h ≈ 2.02691 m to hold 30 m³, starting from the initial guess h_0 = 1.6 m with a stopping criterion of 10^{-4}.
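A hedged sketch of how this computation can be reproduced in Python is given below (our own illustration; the iteration is scheme (3) applied to Equation (39) with the same starting guess h_0 = 1.6 m and stopping criterion 10^{-4}):

```python
import math

def theta(h):                         # Equation (39): pi*h^3 - 9*pi*h^2 + 90 = 0
    return math.pi * h**3 - 9.0 * math.pi * h**2 + 90.0

def theta_prime(h):
    return 3.0 * math.pi * h**2 - 18.0 * math.pi * h

h = 1.6                               # initial guess h_0 (meters)
for n in range(1, 11):
    f_a, df_a = theta(h), theta_prime(h)
    beta = h - f_a / df_a
    f_b = theta(beta)
    gamma = h - f_a * (f_a - f_b) / (df_a * (f_a - 2.0 * f_b))
    f_g, df_b = theta(gamma), theta_prime(beta)
    h_new = gamma - f_g * (f_b - f_g) / (df_b * (f_b - 2.0 * f_g))
    converged = abs(h_new - h) < 1e-4
    h = h_new
    if converged:
        break
print(h, n)                           # approximately 2.02691 m, as in Table 1
```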
Example 2
((Hydrogen-producing problem) [16]). Water electrolysis is a process that uses electrical energy to split water ( H 2 O ) into its constituent elements, hydrogen ( H 2 ) and oxygen ( O 2 ) . This reaction occurs through the use of an electrolyzer, a device that facilitates the electrolysis process. The overall reaction for water electrolysis is as follows:
\mathrm{H_2O} \longrightarrow \mathrm{H_2} + \tfrac{1}{2}\mathrm{O_2}. \quad (40)
Equation (40) represents the dissociation of water vapor into hydrogen gas and oxygen gas. K is the equilibrium constant, defined by the ratio of the concentration of products to reactants at equilibrium, which is given below:
K = \frac{\alpha}{1 - \alpha}\sqrt{\frac{2 p_t}{2 + \alpha}}, \quad (41)
where \alpha is the mole fraction and p_t is the total pressure of the mixture. If p_t = 3 atmospheres (atm) and K = 0.05, then we need to determine the mole fraction \alpha. The advantage of determining the mole fraction is that it quantifies the composition of the mixture.
Water molecules are broken down into hydrogen gas ( H 2 ) at the cathode (negative electrode) and oxygen gas ( O 2 ) at the anode (positive electrode). Hydrogen gas is collected at the cathode, and oxygen gas is collected at the anode.
Equation (41) can then be written as
0.0025\alpha^3 - 6\alpha^2 - 0.0075\alpha + 0.005 = 0. \quad (42)
From Table 2, we obtain the mole fraction \alpha = 0.0282494 with a starting initial guess of \alpha_0 = 0.01 and a stopping criterion of 10^{-4}.
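The reported mole fraction can also be cross-checked directly from the cubic (42); a small sketch using NumPy (our own check, independent of the iterative methods) is shown below:

```python
import numpy as np

# Coefficients of 0.0025*a^3 - 6*a^2 - 0.0075*a + 0.005 = 0, i.e., Equation (42).
roots = np.roots([0.0025, -6.0, -0.0075, 0.005])
physical = [r.real for r in roots if abs(r.imag) < 1e-12 and 0.0 < r.real < 1.0]
print(physical)   # the only root in (0, 1) is approximately 0.0282494
```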
Example 3.
Let D = (-\infty, +\infty). Define the function \Theta on D by \Theta(\alpha) = \sin(\alpha). The exact root of this function is \alpha^* = 0. Then, we have \lambda_0 = \lambda = M = 1. Based on these values, we obtained the radii of convergence reported in Table 3.
Example 4.
Let D = [-1, 1]. Define the function \Theta on D by \Theta(\alpha) = e^{\alpha} - 1. The exact root of this function is \alpha^* = 0. Then, we obtain \lambda_0 = e - 1, \lambda = e, and M = 2. The radii of convergence of Example 4 are reported in Table 4.
Example 5.
Let us consider a function:
\Theta(\alpha) = \alpha^3 \ln(\alpha^2) + \alpha^5 - \alpha^4 \ \text{for } \alpha \neq 0, \qquad \Theta(0) = 0,
whose derivatives are
\Theta'(\alpha) = 3\alpha^2 \ln(\alpha^2) + 5\alpha^4 - 4\alpha^3 + 2\alpha^2, \qquad
\Theta''(\alpha) = 6\alpha \ln(\alpha^2) + 20\alpha^3 - 12\alpha^2 + 10\alpha, \qquad
\Theta'''(\alpha) = 6 \ln(\alpha^2) + 60\alpha^2 - 24\alpha + 22.
We chose \alpha^* = 1. Then, we obtain \Theta(\alpha^*) = 0, \Theta'(\alpha^*) = 3, \lambda_0 = \lambda = 96.662907, and M = 2. By applying the theorem, we obtained the radii of convergence reported in Table 5.
Example 6.
Consider the nonlinear Hammerstein-type integral equation given by
\Theta(\alpha)(s) = x(s) - 5\int_0^1 s\, t\, x(t)^{3/2}\, dt,
where x(s) \in C[0, 1]. By choosing \lambda_0 = \lambda = 15/4 and M = 0.5, we obtained the radii of convergence of Example 6 reported in Table 6.
Example 7.
Consider the nonlinear integral equation given by
\Theta(\alpha)(s) = x(s) - 3\int_0^1 G_1(s, t)\, x(t)^{5/4}\, dt,
where x(s) \in C[0, 1] and G_1(s, t) is Green's function. By choosing \lambda_0 = \lambda = 15/32 and M = 1 + \lambda_0 t and applying the theorem, we obtained the radii of convergence of Example 7 reported in Table 7.
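For the first radius, Theorem 2 gives the closed form r_1 = 2/(2\lambda_0 + \lambda), whereas r_3 and r_6 require the numerical roots of \sigma_3 and \sigma_6. The following short sketch (our own check) evaluates r_1 for the parameter choices of Examples 3–7 and reproduces the r_1 column reported for the proposed method in Tables 3–7:

```python
import math

def r1(lam0, lam):
    """Closed-form first radius from Theorem 2: r1 = 2 / (2*lambda_0 + lambda)."""
    return 2.0 / (2.0 * lam0 + lam)

examples = {
    "Example 3": (1.0, 1.0),
    "Example 4": (math.e - 1.0, math.e),
    "Example 5": (96.662907, 96.662907),
    "Example 6": (15.0 / 4.0, 15.0 / 4.0),
    "Example 7": (15.0 / 32.0, 15.0 / 32.0),
}
for name, (lam0, lam) in examples.items():
    print(name, r1(lam0, lam))
# Expected values: 0.666667, 0.324947, 0.00689682, 0.177778, 1.42222
# (compare with the r1 column for PM in Tables 3-7).
```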

5. Basin of Attraction

The basins of attraction of a function, for a given method, are the sets of initial values from which the iteration converges to each root. Mathematically, for a given function \phi defined on the complex plane \mathbb{C}, with roots c_1, c_2, \ldots, c_k \in \mathbb{C}, and with \Theta denoting the iteration map of the method, the basin of attraction of the root c_k is
B(c_k) = \left\{ c \in \mathbb{C} : \text{the iteration } \gamma_{n+1} = \Theta(\gamma_n) \text{ with } \gamma_0 = c \text{ converges to the root } c_k \right\}.
If the iteration converges to a root, the corresponding region is drawn in color; if the iteration fails to converge, the region is drawn in black (or in the color not assigned to any basin). One more interesting feature is the "lobes" that occur along the diagonal lines separating the main basins of the roots. The region around each root from which the iteration converges directly to that root, without jumping to another basin, appears to be the smallest.
The basin of attraction for the complex Newton method was first considered by, and is attributed to, Cayley [17,18,19]. It is a fusion of mathematics and art that shows the beauty of iterative techniques. Understanding the basins of attraction helps in understanding the overall behavior and convergence properties of an iterative method. The aim of this section is to use this graphical tool to compare the basins of the different methods.
In order to view the basins of attraction of complex functions, we make use of an efficient computer programming package, Mathematica 11.3. We consider a rectangular region \Omega = [-3, 3] \times [-3, 3] \subset \mathbb{C} in the complex plane. This rectangle is divided into a grid with 450 points on each axis. Using our iterative methods, we begin the process from each initial point z^{(0)}. Taking a maximum of 100 iterations and a tolerance criterion of |\phi(z^{(k)})| < 10^{-4}, we determine whether z^{(0)} belongs to the basin of attraction of a specific root or not.
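The figures below were generated in Mathematica 11.3. Purely as an illustration of the procedure just described, the following Python/NumPy sketch (our own, with the same grid size, iteration cap, and tolerance) draws the basins of the proposed method for ϕ(z) = (z − 1)³ − 1 of Problem 2; the color map and the handling of non-convergent points are arbitrary choices:

```python
import numpy as np
import matplotlib.pyplot as plt

phi = lambda z: (z - 1.0)**3 - 1.0
dphi = lambda z: 3.0 * (z - 1.0)**2
roots = np.array([2.0 + 0.0j, 0.5 + 0.866025j, 0.5 - 0.866025j])

n_pts, max_iter, tol = 450, 100, 1e-4
re = np.linspace(-3.0, 3.0, n_pts)
im = np.linspace(-3.0, 3.0, n_pts)
Z = re[None, :] + 1j * im[:, None]          # grid of starting points z^(0)

with np.errstate(divide="ignore", invalid="ignore"):
    for _ in range(max_iter):
        f_a, df_a = phi(Z), dphi(Z)
        beta = Z - f_a / df_a                                        # first sub-step
        f_b = phi(beta)
        gamma = Z - f_a * (f_a - f_b) / (df_a * (f_a - 2.0 * f_b))   # second sub-step
        f_g, df_b = phi(gamma), dphi(beta)
        Z_new = gamma - f_g * (f_b - f_g) / (df_b * (f_b - 2.0 * f_g))
        Z = np.where(np.abs(phi(Z)) < tol, Z, Z_new)                 # freeze converged points

converged = np.abs(phi(Z)) < tol
basin = np.argmin(np.abs(Z[..., None] - roots[None, None, :]), axis=-1).astype(float)
basin[~converged] = np.nan                   # non-convergent starting points are masked
plt.imshow(basin, extent=[-3, 3, -3, 3], origin="lower", cmap="viridis")
plt.title("Basins of attraction of the proposed method for (z-1)^3 - 1")
plt.show()
```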
Problem 1.
We chose the function
\phi(z) = z\left(z^2 + 1\right)\left(z^2 + 4\right)
The five roots of the function are 0, i, -i, 2i, and -2i. In Figure 1, we focus on the rectangular region \Omega = [-3, 3] \times [-3, 3] \subset \mathbb{C} in the complex plane and start the iteration from different points within this region. Figure 1a represents the basin of attraction of the proposed method (PM); Figure 1b [13] and Figure 1c [14] represent the basins of attraction of the existing iterative techniques, denoted IS and DS, respectively. In Figure 1a, the five colors represent the five roots. In Figure 1b,c, the five colors represent the five distinct roots, and the points where the iteration does not converge appear in black. We used a convergence criterion of 10^{-4}. The Julia set consists of the colorful patterns and outlines created by the boundaries of the attraction regions; it helps in understanding the framework of the iterative technique and gives a comprehensive idea of how the methods behave and how the different attraction regions are formed.
Problem 2.
We consider the following function [6]:
\phi(z) = (z - 1)^3 - 1
The equation \phi(z) = 0 has three zeros: 2, 0.5 + 0.866025i, and 0.5 - 0.866025i. In Figure 2, we focus on the rectangular region \Omega = [-3, 3] \times [-3, 3] \subset \mathbb{C} and start the iteration from different points within this region. Figure 2a represents the basin of attraction of the proposed method (PM); Figure 2b,c represent the basins of attraction of the existing iterative techniques, denoted IS and DS, respectively. In Figure 2a, the three colors represent the three roots, and the black region represents the points where the iteration does not converge to a root; the same convention is used in Figure 2b,c. We used a convergence criterion of 10^{-4}. The Julia set consists of the boundaries of the attraction regions and outlines how the different basins are formed.
Problem 3.
We assumed the following polynomial function of order seven:
\phi(z) = z^7 - 1
The equation \phi(z) = 0 has seven zeros: 1, -0.22521 + 0.9749281i, 0.62349 + 0.781831i, -0.900969 + 0.433884i, -0.22521 - 0.9749281i, 0.62349 - 0.781831i, and -0.900969 - 0.433884i. In Figure 3, we focus on the rectangular region \Omega = [-3, 3] \times [-3, 3] \subset \mathbb{C} and start the iteration from different points within this region. Figure 3a represents the basin of attraction of the proposed method (PM); Figure 3b,c represent the basins of attraction of the existing iterative techniques, denoted IS and DS, respectively. In Figure 3a, the seven colors represent the seven roots, and the white region represents the points where the iteration does not converge to a root. In Figure 3b,c, the seven colors represent the seven distinct roots, and the points where the iteration does not converge appear in black. We used a convergence criterion of 10^{-4}. The Julia set consists of the boundaries of the attraction regions and outlines how the different basins are formed.
Problem 4.
Finally, we chose the following polynomial function with complex constants:
\phi(z) = \left(z^3 - i\right)(z + 2i)
(Ref. [19]) The four zeros of the function are -i, -2i, 0.866025 + 0.5i, and -0.866025 + 0.5i. In Figure 4, we focus on the rectangular region \Omega = [-3, 3] \times [-3, 3] \subset \mathbb{C} and start the iteration from different points within this region. Figure 4a represents the basin of attraction of the proposed method (PM); Figure 4b,c represent the basins of attraction of the existing iterative techniques, denoted IS and DS, respectively. In Figure 4a, the four colors represent the four roots. In Figure 4b,c, the four colors represent the four distinct roots, and the points where the iteration does not converge appear in black. We used a convergence criterion of 10^{-4}. The Julia set consists of the boundaries of the attraction regions and outlines how the different basins are formed.

6. Conclusions

This research paper presents a novel sixth-order iterative method with an efficiency index of 1.430969. We established the convergence order of the proposed method through a comprehensive convergence analysis and, through the local convergence analysis, the existence and uniqueness of the solution in a suitable ball. The numerical examples provided in this study validate the superior performance of our method compared with existing techniques of the same order. The basin of attraction figures also illustrate a reduced divergence region with less chaotic behavior compared with existing techniques of the same order. Finally, we conclude that our method can be a good alternative to the existing methods.

Author Contributions

Conceptualization, K.D. and P.M.; methodology, E.M. and R.B.; software, K.D. and P.M.; validation, E.M. and R.B.; formal analysis, P.M.; data curation, P.M.; writing—original draft preparation, K.D.; writing—review and editing, E.M. and R.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Solaiman, S.O.; Sihwail, R.; Shehadeh, H.; Hashim, I.; Alieyan, K. Hybrid Newton–Sperm Swarm Optimization Algorithm for Nonlinear Systems. Mathematics 2023, 11, 1473.
  2. Singh, S.; Gupta, D.K. Iterative methods of higher order for nonlinear equations. Vietnam J. Math. 2016, 44, 387–398.
  3. Grau, M.; Díaz-Barrero, J.L. An improvement of the Euler–Chebyshev iterative method. J. Math. Anal. Appl. 2006, 315, 1–7.
  4. Babajee, D.K.R.; Dauhoo, M.Z.; Darvishi, M.T.; Karami, A.; Barati, A. Analysis of two Chebyshev-like third order methods free from second derivatives for solving systems of nonlinear equations. J. Comput. Appl. Math. 2010, 233, 2002–2012.
  5. Darvishi, M.T.; Barati, A. A third-order Newton-type method to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 187, 630–635.
  6. Behl, R.; Maroju, P.; Motsa, S.S. A family of second derivative free fourth order continuation method for solving nonlinear equations. J. Comput. Appl. Math. 2017, 318, 38–46.
  7. Maheshwari, A.K. A fourth order iterative method for solving nonlinear equations. Appl. Math. Comput. 2009, 211, 383–391.
  8. Khirallah, M.Q.; Alkhomsan, A.M. A new fifth-order iterative method for solving non-linear equations using weight function technique and the basins of attraction. J. Math. Comput. Sci. 2023, 28, 281–293.
  9. Abdul-Hassan, N.Y.; Ali, A.H.; Park, C. A new fifth-order iterative method free from second derivative for solving nonlinear equations. J. Appl. Math. Comput. 2022, 68, 2877–2886.
  10. Cordero, A.; Ezquerro, J.A.; Hernández-Verón, M.A.; Torregrosa, J.R. On the local convergence of a fifth-order iterative method in Banach spaces. Appl. Math. Comput. 2015, 251, 396–403.
  11. Argyros, I.K.; Khattri, S.K. Local convergence for a family of third order methods in Banach spaces. Punjab Univ. J. Math. 2020, 46, 52–63.
  12. Solaiman, S.O.; Hashim, I. Efficacy of optimal methods for nonlinear equations with chemical engineering applications. Math. Probl. Eng. 2019, 2019, 1728965.
  13. Argyros, I.K.; George, S. Ball convergence of a sixth order iterative method with one parameter for solving equations under weak conditions. Calcolo 2016, 53, 585–595.
  14. Sharma, D.; Parhi, S.K. On the local convergence of higher order methods in Banach spaces. Fixed Point Theory 2021, 22, 855–870.
  15. Chapra, S.C. Applied Numerical Methods; McGraw-Hill: Columbus, OH, USA, 2012.
  16. Shih, A.J.; Monteiro, M.C.; Dattila, F.; Pavesi, D.; Philips, M.; da Silva, A.H.; Vos, R.E.; Ojha, K.; Park, S.; van der Heijden, O.; et al. Water electrolysis. Nat. Rev. Methods Prim. 2022, 2, 84.
  17. Wiersma, A.G. The Complex Dynamics of Newton's Method. Doctoral Dissertation, Faculty of Science and Engineering, University of Southampton, Southampton, UK, 2016.
  18. Solaiman, O.S.; Karim, S.A.A.; Hashim, I. Dynamical Comparison of Several Third-Order Iterative Methods for Nonlinear Equations. Comput. Mater. Contin. 2021, 67, 1951–1962.
  19. Sutherland, S. Finding Roots of Complex Polynomials with Newton's Method. Doctoral Dissertation, Boston University, Boston, MA, USA, 1989.
Figure 1. Comparison of basins of attraction. (a) Basin of attraction (PM); (b) basin of attraction (IS); (c) basin of attraction (DS).
Figure 2. Comparison of basins of attraction. (a) Basin of attraction (PM); (b) basin of attraction (IS); (c) basin of attraction (DS).
Figure 3. Comparison of basins of attraction. (a) Basin of attraction (PM); (b) basin of attraction (IS); (c) basin of attraction (DS).
Figure 4. Comparison of basins of attraction. (a) Basin of attraction (PM); (b) basin of attraction (IS); (c) basin of attraction (DS).
Table 1. Numerical results based on Example 1.

Method   n   h_n        Θ(h_n)              |h_(n−1) − h_n|      COC   CPU time
PM       1   2.0269     0.000266839         3.5157400 × 10⁻⁶
         2   2.02691    1.42109 × 10⁻¹⁴     0                    6     0.234
IS       1   1.8949     9.8524              0.129025
         2   2.02392    0.226569            0.0029862            6     0.239
         3   2.02691    2.27719 × 10⁻⁶      3.0003 × 10⁻⁸
DS       1   2.02681    0.00715761          0.0000943061
         2   2.02691    1.42108 × 10⁻¹⁴     1.75689 × 10⁻¹²      6     0.236
Table 2. Numerical results based on Example 2.

Method   n   α_n          Θ(α_n)              |α_(n−1) − α_n|      COC   CPU time
PM       1   0.0283696    0.0000417079        0.000120124
         2   0.0282494    1.73472 × 10⁻¹⁸     0                    6     0.140
IS       1   0.023441     0.00152734          0.0047532
         2   0.0281942    0.0000191226        0.0000552426         6     0.14
         3   0.0282494    2.36397 × 10⁻¹¹     6.82269 × 10⁻¹¹
DS       1   −0.547513    1.78993             0.41036
         2   −0.137153    0.106843            0.0954872            6     0.236
         3   −0.0416658   0.00510393          0.0121293
Table 3. Radii of convergence based on Example 3.

Method   r_1         r_3         r_6         r = min{r_1, r_3, r_6}
PM       0.6666667   0.6666667   0.57616     0.57616
IS       0.6667      0.3776      0.2289      0.2289
DS       0.618034    0.225441    0.117023    0.117023
Table 4. Radii of convergence based on Example 4.

Method   r_1        r_3         r_6         r = min{r_1, r_3, r_6}
PM       0.324947   0.324947    0.26948     0.26948
IS       0.3249     0.09876     0.0360      0.0360
DS       0.257767   0.0903129   0.0410436   0.0410436
Table 5. Radii of convergence based on Example 5.

Method   r_1          r_3          r_6           r = min{r_1, r_3, r_6}
PM       0.00689682   0.00689682   0.00613252    0.00613252
IS       0.0069       0.0025       0.0009        0.0009
DS       0.00518599   0.00206091   0.000298472   0.000298472
Table 6. Radii of convergence based on Example 6.

Method   r_1        r_3        r_6         r = min{r_1, r_3, r_6}
PM       0.177778   0.177778   0.148066    0.148066
IS       0.426667   0.122696   0.0755756   0.0755756
DS       0.025599   0.017992   0.007313    0.007313
Table 7. Radii of convergence based on Example 7.

Method   r_1        r_3        r_6        r = min{r_1, r_3, r_6}
PM       1.42222    1.41432    1.29788    1.29788
IS       1.42222    0.981571   0.604605   0.604605
DS       1.973080   0.879329   0.101198   0.101198
