
A One-Parameter Family of Methods with a Higher Order of Convergence for Equations in a Banach Space

Ramandeep Behl 1, Ioannis K. Argyros 2 and Sattam Alharbi 3

1 Mathematical Modelling and Applied Computation Research Group (MMAC), Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
2 Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
3 Department of Mathematics, College of Science and Humanities in Al-Kharj, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(9), 1278; https://doi.org/10.3390/math12091278
Submission received: 26 March 2024 / Revised: 10 April 2024 / Accepted: 12 April 2024 / Published: 23 April 2024
(This article belongs to the Special Issue Numerical Analysis and Modeling)

Abstract: The conventional approach to the local convergence analysis of an iterative method on $\mathbb{R}^m$, with $m$ a natural number, depends on Taylor series expansion. This technique often requires the calculation of high-order derivatives. However, those derivatives may not be part of the proposed method(s). In this way, the method(s) can face several limitations, particularly the use of higher-order derivatives and a lack of a priori computable error bounds on the distance to the solution or uniqueness information. In this paper, we address these drawbacks by conducting the local convergence analysis in the broader framework of a Banach space. We have selected an important family of methods of high convergence order to demonstrate our technique as an example. However, due to its generality, our technique can be applied to any other iterative method using inverses of linear operators along the same lines. Our analysis not only extends the results obtained in $\mathbb{R}^m$ but also provides convergence conditions based only on the operators appearing in the method, which makes the method applicable in a broader area. Additionally, we introduce a novel semilocal convergence analysis not presented before in such studies. Both forms of convergence analysis depend on the concept of generalized continuity and provide a deeper understanding of the convergence properties. Our methodology not only enhances the applicability of the suggested method(s) but also makes them suitable for applied science problems. The computational results also support the theoretical aspects.

1. Introduction

Iterative methods stand as a foundation of numerical analysis that can handle the tough challenge of solving nonlinear equations. Such equations arise in diverse domains ranging from physics and engineering to economics and finance [1,2,3,4,5,6,7]. In general, analytical solutions to such problems are almost nonexistent. The equations can be written in the form
$$\Delta(x) = 0, \tag{1}$$
where $\Delta : \Phi \subseteq M_1 \to M_2$, with $M_1$ and $M_2$ being Banach spaces.
Iterative methods are numerical strategies that start from one or more initial approximations and refine them until a predefined level of precision is reached. One such iterative method is the Newton–Raphson method, one of the most significant iterative methods, which is given by
$$x_{\sigma+1} = x_\sigma - \Delta'(x_\sigma)^{-1}\Delta(x_\sigma),$$
where $\sigma$ denotes a natural number, including zero. Newton's method is a well-known and frequently applied iterative procedure for solving nonlinear problems. However, this method encounters various challenges when applied to nonlinear equations, such as slow convergence, divergence when the Jacobian matrix approaches singularity, and breakdown when the Jacobian matrix is not invertible. Consequently, researchers have proposed improvements or modifications of this method over time. We have also opted for one such iterative approach:
$$\begin{aligned} y_\sigma &= x_\sigma - \alpha\,\Delta'(x_\sigma)^{-1}\Delta(x_\sigma),\\ z_\sigma^{(1)} &= x_\sigma - A_\sigma^{-1}\Delta(x_\sigma), \qquad A_\sigma = A(\alpha, x_\sigma, y_\sigma),\\ z_\sigma^{(m)} &= z_\sigma^{(m-1)} - B_\sigma\,\Delta(z_\sigma^{(m-1)}), \qquad B_\sigma = B(\alpha, x_\sigma, y_\sigma), \quad m = 2, \ldots, k,\\ x_{\sigma+1} &= z_\sigma^{(k+1)} = z_\sigma^{(k)} - B_\sigma\,\Delta(z_\sigma^{(k)}), \qquad \sigma = 0, 1, 2, \ldots, \end{aligned} \tag{2}$$
where $\alpha \in \mathbb{R} \setminus \{0\}$, $k \in \mathbb{N}$, $A_\sigma = \frac{1}{2\alpha}\Delta'(y_\sigma) + \left(1 - \frac{1}{2\alpha}\right)\Delta'(x_\sigma)$, $B_\sigma = 2A_\sigma^{-1} - \Delta'(x_\sigma)^{-1}$, and $x_0 \in \Phi$.
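To make the structure of the family (2) concrete, the following minimal Python sketch implements one full step for systems. It is our own illustration under the definitions above, not the authors' Mathematica code; `F` and `dF` play the roles of $\Delta$ and $\Delta'$.

```python
import numpy as np

def family_step(F, dF, x, alpha=1.0, k=2):
    """One step x_sigma -> x_{sigma+1} of the family (2).

    F  : callable, F(x) -> residual vector (Delta)
    dF : callable, dF(x) -> Jacobian matrix (Delta')
    """
    Jx = dF(x)
    y = x - alpha * np.linalg.solve(Jx, F(x))          # first substep
    # A_sigma = (1/(2 alpha)) Delta'(y) + (1 - 1/(2 alpha)) Delta'(x)
    A = dF(y) / (2 * alpha) + (1 - 1 / (2 * alpha)) * Jx
    z = x - np.linalg.solve(A, F(x))                   # z^(1)
    B = 2 * np.linalg.inv(A) - np.linalg.inv(Jx)       # B_sigma = 2 A^{-1} - Delta'(x)^{-1}
    for _ in range(k):                                 # z^(2), ..., z^(k+1)
        z = z - B @ F(z)
    return z                                           # x_{sigma+1} = z^(k+1)
```

In practice one would factor $A_\sigma$ and $\Delta'(x_\sigma)$ once and reuse the factorizations instead of forming explicit inverses; the inverses above only mirror the formulas.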
The convergence order $k + 2$ is shown in [8] using a local Taylor series expansion, assuming the existence and boundedness of $\Delta^{(4)}$, for $M_1 = M_2 = \mathbb{R}^m$, where $m$ is a natural number.
The Taylor series expansion technique, which is mostly used to study the convergence of iterative methods, raises the following concerns.

1.1. Motivation

(L1)
The convergence is shown by assuming the existence of derivatives not appearing in the method, such as $\Delta''$, $\Delta'''$, and $\Delta^{(4)}$. Let $\Phi_0 = [-1.3, 1.3]$. Define the function $g(\xi) = s_1 \xi^3 \log \xi^2 + s_2 \xi^5 + s_3 \xi^4$ for $\xi \neq 0$ and $g(\xi) = 0$ for $\xi = 0$, where $s_1 \neq 0$ and $s_2 + s_3 = 0$. It follows from this definition that the functions $g'''$ and $g^{(4)}$ are not continuous at $\xi = 0$. Thus, the results in [8] cannot guarantee the convergence of the method to the solution $\xi^* = 1 \in \Phi_0$. But the method converges to $\xi^*$, say for $k = 1$ and $\xi_0 = 0.9$ (see the numerical check following this list). It follows from this academic example that the convergence conditions can be weakened. Clearly, it is preferable to use conditions only on the operators $\Delta$ and $\Delta'$ that appear in the method.
(L2)
The selection of the initial point constitutes a "shot in the dark", since no computable radius of convergence is available to assist us in choosing possible initial points $x_0$.
(L3)
A priori computable estimates on the error $\|x_\sigma - \xi^*\|$ are not available. Hence, we do not know in advance how many iterations must be carried out to reach a desired error tolerance.
(L4)
There is no computable neighborhood of $\xi^*$ that contains no other solution of the equation $\Delta(x) = 0$.
(L5)
There is no information in [8] about the semilocal analysis of convergence for the method.
(L6)
The study is restricted to $\mathbb{R}^m$.
The concerns (L1)–(L6) constitute our motivation for this article.
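As a numerical check of the claim in (L1), the following hypothetical snippet applies `family_step` from above to $g$ with the admissible choice $s_1 = 1$, $s_2 = 1$, $s_3 = -1$ (so that $g(1) = 0$), using $k = 1$ and $\xi_0 = 0.9$:

```python
import numpy as np

# g(xi) = xi^3 log xi^2 + xi^5 - xi^4 (s1 = 1, s2 = 1, s3 = -1), as a 1-dim system
g  = lambda u: np.array([u[0]**3 * np.log(u[0]**2) + u[0]**5 - u[0]**4])
dg = lambda u: np.array([[3*u[0]**2*np.log(u[0]**2) + 2*u[0]**2
                          + 5*u[0]**4 - 4*u[0]**3]])

x = np.array([0.9])
for _ in range(6):
    x = family_step(g, dg, x, alpha=1.0, k=1)
print(x)   # approaches xi* = 1 even though g''' is unbounded near 0
```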
It is worth noticing that these concerns are present in any other studies using the Taylor series on $\mathbb{R}^m$ [4,5,6,7,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29]. Therefore, the handling of these concerns will also extend the applicability of other methods that use inverses of linear operators along the same lines. Our technique is demonstrated on the method (2) as an example. But since it is so general, it can be used on any other method employing inverses along the same lines. In particular, we deal with these concerns as follows:

1.2. Novelty

(L1)′
The new sufficient convergence conditions involve only the operators appearing in the method (see the conditions (H1)–(H7) or (E1)–(E6)).
(L2)′
A computable radius of convergence is determined. Thus, the selection of initial points that guarantee the convergence of the method becomes possible (see (3)).
(L3)′
The number of iterations required to achieve an error tolerance $\epsilon > 0$ is known in advance, because we provide computable upper error bounds on $\|x_\sigma - \xi^*\|$ (see Theorem 1).
(L4)′
A neighborhood of $\xi^*$ is found that contains no other solution of the equation $\Delta(x) = 0$ (see the Propositions).
(L5)′
A semilocal analysis of convergence is developed by relying on majorizing sequences [1,2,3,4,5] (see Section 3).
(L6)′
The convergence is established for Banach space-valued operators.
Notice also that the convergence in both cases is based on the generalized continuity assumption [1,2,3] on the operator $\Delta'$.
The rest of this article is structured as follows: Section 2 contains the local analysis, followed by the semilocal analysis in Section 3. The numerical examples appear in Section 4. The article ends with concluding remarks in Section 5.

2. Local Analysis

Let $Q = [0, +\infty)$. The convergence is based on the following conditions.
Suppose the following:
(H1)
There exists a function $\ell_0 : Q \to \mathbb{R}_+$, which is continuous as well as nondecreasing, such that the equation $\ell_0(\xi) - 1 = 0$ admits a smallest solution, which is positive. Denote this solution by $\rho_0$.
Take $Q_0 = [0, \rho_0)$.
(H2)
There exists a function $\ell : Q_0 \to \mathbb{R}_+$ that is continuous and nondecreasing such that for $h_0 : Q_0 \to \mathbb{R}$, the equation $h_0(\xi) - 1 = 0$ admits a smallest positive solution, denoted by $r_0$, where
$$h_0(\xi) = \frac{\int_0^1 \ell((1-\omega)\xi)\,d\omega + |1-\alpha|\left(1 + \int_0^1 \ell_0(\omega\xi)\,d\omega\right)}{1 - \ell_0(\xi)}.$$
(H3)
If $\tilde\ell(\xi) = \ell((1 + h_0(\xi))\xi)$ or $\tilde\ell(\xi) = \ell_0(\xi) + \ell_0(h_0(\xi)\xi)$, define
$$P(\xi) = \frac{1}{2|\alpha|}\tilde\ell(\xi) + \ell_0(\xi).$$
The equation $P(\xi) - 1 = 0$ admits a smallest solution, which is positive, denoted by $\rho_1 \in (0, \rho_0)$.
Set $\rho = \min\{\rho_0, \rho_1\}$ and $Q_1 = [0, \rho)$.
(H4)
The equations $h_1(\xi) - 1 = 0$ and $h_i(\xi) - 1 = 0$, $i = 2, 3, \ldots, k+1$, admit smallest positive solutions, denoted by $r_1, r_i \in (0, \rho)$, respectively, where
$$h_1(\xi) = \frac{\int_0^1 \ell((1-\omega)\xi)\,d\omega}{1 - \ell_0(\xi)} + \frac{\tilde\ell(\xi)\left(1 + \int_0^1 \ell_0(\omega\xi)\,d\omega\right)}{2|\alpha|\,(1 - \ell_0(\xi))(1 - P(\xi))}$$
and, for
$$\tilde w_{i-1}(\xi) = \ell((1 + h_{i-1}(\xi))\xi) \quad \text{or} \quad \tilde w_{i-1}(\xi) = \ell_0(\xi) + \ell_0(h_{i-1}(\xi)\xi),$$
$$h_i(\xi) = \left[\frac{\int_0^1 \ell((1-\omega)h_{i-1}(\xi)\xi)\,d\omega}{1 - \ell_0(h_{i-1}(\xi)\xi)} + \frac{\frac{1}{2|\alpha|}\tilde\ell(\xi) + \tilde w_{i-1}(\xi)}{(1 - \ell_0(h_{i-1}(\xi)\xi))(1 - P(\xi))} + \frac{\tilde\ell(\xi)\left(1 + \int_0^1 \ell_0(\omega h_{i-1}(\xi)\xi)\,d\omega\right)}{(1 - \ell_0(\xi))(1 - P(\xi))}\right] h_{i-1}(\xi).$$
Take
$$r = \min\{r_j\}, \quad j = 0, 1, 2, \ldots, k+1, \tag{3}$$
and set $Q^* = [0, r)$.
The definitions imply that for each $\xi \in Q^*$,
$$0 \le \ell_0(\xi) < 1, \tag{4}$$
$$0 \le P(\xi) < 1, \tag{5}$$
$$0 \le \ell_0(h_{i-1}(\xi)\xi) < 1, \tag{6}$$
$$0 \le h_j(\xi) < 1. \tag{7}$$
The developed scalar functions $\ell_0$ and $\ell$ are associated with the operator $\Delta$ in the method as follows.
(H5)
There exists an invertible operator $\gamma \in \mathcal{L}(M_1, M_2)$ such that for each $x \in \Phi$,
$$\|\gamma^{-1}(\Delta'(x) - \gamma)\| \le \ell_0(\|x - \xi^*\|).$$
Take $T_0 = S(\xi^*, \rho_0) \cap \Phi$.
(H6)
$\|\gamma^{-1}(\Delta'(y) - \Delta'(x))\| \le \ell(\|y - x\|)$ for each $x, y \in T_0$,
and
(H7)
$S[\xi^*, r] \subset \Phi$.
Remark 1.
(i) 
A possible and popular choice is $\gamma = I$ or $\gamma = \Delta'(\xi^*)$. In the latter choice, $\xi^*$ is a simple solution of the equation $\Delta(x) = 0$ according to the condition (H5). In our analysis, we do not necessarily assume that $\xi^*$ is simple. Thus, the method can also be used to find solutions $\xi^*$ of multiplicity greater than one. Other choices can be considered as long as (H5) and (H6) hold.
(ii) 
We shall choose the smaller of the two versions of the function $\tilde\ell$ in the examples to obtain tighter error bounds on $\|x_\sigma - \xi^*\|$ as well as a larger radius $r$.
Next, the local analysis follows for the method (2).
Theorem 1.
Suppose that the conditions (H1)–(H7) are satisfied. Then, if $x_0 \in S_0 = S(\xi^*, r) \setminus \{\xi^*\}$, the sequence $\{x_\sigma\}$ generated by the method (2) is convergent to $\xi^*$.
Proof. 
The following items are established by induction:
$$\|y_\sigma - \xi^*\| \le h_0(\|x_\sigma - \xi^*\|)\,\|x_\sigma - \xi^*\| \le \|x_\sigma - \xi^*\| < r, \tag{8}$$
$$\|z_\sigma^{(1)} - \xi^*\| \le h_1(\|x_\sigma - \xi^*\|)\,\|x_\sigma - \xi^*\| \le \|x_\sigma - \xi^*\|, \tag{9}$$
$$\|z_\sigma^{(i)} - \xi^*\| \le h_i(\|x_\sigma - \xi^*\|)\,\|x_\sigma - \xi^*\| \le \|x_\sigma - \xi^*\|, \quad i = 2, 3, \ldots, k+1, \tag{10}$$
where the radius $r$ is determined by Formula (3) and the scalar functions $h_m$, $m = 0, 1, 2, \ldots, k+1$, are as given previously. The condition (H5), (3), and the definitions of $\rho_0$ and $r$ give in turn
$$\|\gamma^{-1}(\Delta'(x_0) - \gamma)\| \le \ell_0(\|x_0 - \xi^*\|) \le \ell_0(r) < 1. \tag{11}$$
So, the linear operator $\Delta'(x_0)$ is invertible by the standard Banach perturbation lemma on invertible operators [1,2,3,4,5]. We also have
$$\|\Delta'(x_0)^{-1}\gamma\| \le \frac{1}{1 - \ell_0(\|x_0 - \xi^*\|)}. \tag{12}$$
Consequently, the iterate $y_0$ exists by the first substep of the method (2), and
$$\begin{aligned} y_0 - \xi^* &= x_0 - \xi^* - \Delta'(x_0)^{-1}\Delta(x_0) + (1-\alpha)\,\Delta'(x_0)^{-1}\Delta(x_0)\\ &= \int_0^1 \Delta'(x_0)^{-1}\big(\Delta'(\xi^* + \omega(x_0 - \xi^*)) - \Delta'(x_0)\big)\,d\omega\,(x_0 - \xi^*)\\ &\quad + (1-\alpha)\,\Delta'(x_0)^{-1}\int_0^1 \big(\Delta'(\xi^* + \omega(x_0 - \xi^*)) - \gamma + \gamma\big)\,d\omega\,(x_0 - \xi^*). \end{aligned} \tag{13}$$
By applying the condition (H6) and using (7) (for $j = 0$), (12), and (13), we obtain in turn that
$$\|y_0 - \xi^*\| \le \frac{\int_0^1 \ell((1-\omega)\|x_0 - \xi^*\|)\,d\omega + |1-\alpha|\left(1 + \int_0^1 \ell_0(\omega\|x_0 - \xi^*\|)\,d\omega\right)}{1 - \ell_0(\|x_0 - \xi^*\|)}\,\|x_0 - \xi^*\| = h_0(\|x_0 - \xi^*\|)\,\|x_0 - \xi^*\| \le \|x_0 - \xi^*\| < r. \tag{14}$$
Thus, the iterate $y_0 \in S(\xi^*, r)$, and the item (8) is satisfied for $\sigma = 0$. We need the estimate
$$\|\gamma^{-1}(A_0 - \gamma)\| \le \frac{1}{2|\alpha|}\|\gamma^{-1}(\Delta'(y_0) - \Delta'(x_0))\| + \|\gamma^{-1}(\Delta'(x_0) - \gamma)\| \le \frac{1}{2|\alpha|}\tilde\ell(\|x_0 - \xi^*\|) + \ell_0(\|x_0 - \xi^*\|) = P(\|x_0 - \xi^*\|) < 1 \tag{15}$$
(by (5)), where the first norm in (15) can be bounded in two different ways:
$$\|\gamma^{-1}(\Delta'(y_0) - \Delta'(x_0))\| \le \ell\big(\|y_0 - \xi^*\| + \|x_0 - \xi^*\|\big) \le \ell\big((1 + h_0(\|x_0 - \xi^*\|))\|x_0 - \xi^*\|\big) = \tilde\ell(\|x_0 - \xi^*\|) \tag{16}$$
or
$$\|\gamma^{-1}(\Delta'(y_0) - \Delta'(x_0))\| \le \|\gamma^{-1}(\Delta'(x_0) - \gamma)\| + \|\gamma^{-1}(\Delta'(y_0) - \gamma)\| \le \ell_0(\|x_0 - \xi^*\|) + \ell_0(h_0(\|x_0 - \xi^*\|)\|x_0 - \xi^*\|) = \tilde\ell(\|x_0 - \xi^*\|). \tag{17}$$
So, the linear operator $A_0$ is invertible and
$$\|A_0^{-1}\gamma\| \le \frac{1}{1 - P(\|x_0 - \xi^*\|)}.$$
It follows that the iterate $z_0^{(1)}$ exists by the second substep of the method (2), and
$$\begin{aligned} z_0^{(1)} - \xi^* &= x_0 - \xi^* - \Delta'(x_0)^{-1}\Delta(x_0) + \big(\Delta'(x_0)^{-1} - A_0^{-1}\big)\Delta(x_0)\\ &= x_0 - \xi^* - \Delta'(x_0)^{-1}\Delta(x_0) - A_0^{-1}\big(\Delta'(x_0) - A_0\big)\Delta'(x_0)^{-1}\Delta(x_0). \end{aligned}$$
Hence, we obtain by (3), (7) (for $j = 1$), (16), and (17) that
$$\|z_0^{(1)} - \xi^*\| \le \left[\frac{\int_0^1 \ell((1-\omega)\|x_0 - \xi^*\|)\,d\omega}{1 - \ell_0(\|x_0 - \xi^*\|)} + \frac{\tilde\ell(\|x_0 - \xi^*\|)\left(1 + \int_0^1 \ell_0(\omega\|x_0 - \xi^*\|)\,d\omega\right)}{2|\alpha|\,(1 - \ell_0(\|x_0 - \xi^*\|))(1 - P(\|x_0 - \xi^*\|))}\right]\|x_0 - \xi^*\| = h_1(\|x_0 - \xi^*\|)\,\|x_0 - \xi^*\| \le \|x_0 - \xi^*\|. \tag{18}$$
It follows by (18) that the iterate $z_0^{(1)} \in S(\xi^*, r)$, and the item (9) holds for $\sigma = 0$. Moreover, the iterate $z_0^{(2)}$ is well defined by the third substep of the method (2), and
$$\begin{aligned} z_0^{(2)} - \xi^* &= z_0^{(1)} - \xi^* - \Delta'(z_0^{(1)})^{-1}\Delta(z_0^{(1)}) + \big(\Delta'(z_0^{(1)})^{-1} + \Delta'(x_0)^{-1} - 2A_0^{-1}\big)\Delta(z_0^{(1)})\\ &= z_0^{(1)} - \xi^* - \Delta'(z_0^{(1)})^{-1}\Delta(z_0^{(1)}) + \Delta'(z_0^{(1)})^{-1}\big(A_0 - \Delta'(z_0^{(1)})\big)A_0^{-1}\Delta(z_0^{(1)})\\ &\quad + \Delta'(x_0)^{-1}\big(A_0 - \Delta'(x_0)\big)A_0^{-1}\Delta(z_0^{(1)}), \end{aligned} \tag{19}$$
leading to
$$\begin{aligned} \|z_0^{(2)} - \xi^*\| &\le \left[\frac{\int_0^1 \ell((1-\omega)\|z_0^{(1)} - \xi^*\|)\,d\omega}{1 - \ell_0(\|z_0^{(1)} - \xi^*\|)} + \frac{\frac{1}{2|\alpha|}\tilde\ell(\|x_0 - \xi^*\|) + \tilde w_1(\|x_0 - \xi^*\|)}{(1 - \ell_0(\|z_0^{(1)} - \xi^*\|))(1 - P(\|x_0 - \xi^*\|))}\right.\\ &\qquad \left. + \frac{\tilde\ell(\|x_0 - \xi^*\|)\left(1 + \int_0^1 \ell_0(\omega\|z_0^{(1)} - \xi^*\|)\,d\omega\right)}{(1 - \ell_0(\|x_0 - \xi^*\|))(1 - P(\|x_0 - \xi^*\|))}\right]\|z_0^{(1)} - \xi^*\|\\ &\le h_2(\|x_0 - \xi^*\|)\,\|x_0 - \xi^*\| \le \|x_0 - \xi^*\|. \end{aligned} \tag{20}$$
Thus, the iterate $z_0^{(2)} \in S(\xi^*, r)$, and the item (10) is satisfied for $i = 2$. The computations leading to (20) can be repeated with $z_0^{(3)}, z_0^{(4)}, \ldots, z_0^{(k+1)}$ in place of $z_0^{(2)}$ in the preceding estimations, which terminates the induction for (10). Hence, the iterate $x_{\sigma+1} = z_\sigma^{(k+1)} \in S(\xi^*, r)$ and
$$\|x_{\sigma+1} - \xi^*\| = \|z_\sigma^{(k+1)} - \xi^*\| \le c\,\|x_\sigma - \xi^*\| < r, \tag{21}$$
where $c = h_{k+1}(\|x_0 - \xi^*\|) \in [0, 1)$. Therefore, we conclude that all the iterates $\{x_\sigma\}$ stay in $S(\xi^*, r)$ and $\lim_{\sigma \to \infty} x_\sigma = \xi^*$. □
Next, a neighborhood of ξ * is determined containing no other solution.
Proposition 1.
Suppose the following:
There exists a solution $\xi^* \in S(\xi^*, \rho_2)$ of the equation $\Delta(x) = 0$ for some $\rho_2 > 0$.
The condition (H5) is satisfied in the ball $S(\xi^*, \rho_2)$, and there exists $\rho_3 \ge \rho_2$ such that
$$\int_0^1 \ell_0(\omega\rho_3)\,d\omega < 1. \tag{22}$$
Define $D_1 = \Phi \cap S[\xi^*, \rho_3]$.
Then, the only solution of the equation Δ ( x ) = 0 in the set D 1 is ξ * .
Proof. 
Let $\tilde x \in D_1$ be such that $\Delta(\tilde x) = 0$.
Consider the linear operator $T = \int_0^1 \Delta'(\xi^* + \omega(\tilde x - \xi^*))\,d\omega$. Then, by applying the condition (H5) and using (22), we obtain in turn that
$$\|\gamma^{-1}(T - \gamma)\| \le \int_0^1 \ell_0(\omega\|\tilde x - \xi^*\|)\,d\omega \le \int_0^1 \ell_0(\omega\rho_3)\,d\omega < 1. \tag{23}$$
It follows by (23) that the operator $T$ is invertible. Finally, by the identity, we obtain
$$\tilde x - \xi^* = T^{-1}\big(\Delta(\tilde x) - \Delta(\xi^*)\big) = T^{-1}(0) = 0.$$
Thus, we deduce that x ˜ = ξ * . □
Remark 2.
Clearly, we can take ρ 2 = r in Proposition 1.
In the next section, similar estimates are used to show the semilocal convergence of the method (2). However, the roles of the functions $\ell_0, \ell$ and of the solution $\xi^*$ are now played by the functions $v_0, v$ and the initial point $x_0$, respectively.

3. Semilocal Analysis

Suppose the following:
(E1)
There exists a function $v_0 : Q \to \mathbb{R}_+$ that is continuous and nondecreasing such that the equation $v_0(\xi) - 1 = 0$ has a smallest solution, which is positive, denoted by $\rho_4$. Set $Q_2 = [0, \rho_4)$.
(E2)
There exists a function $v : Q_2 \to \mathbb{R}_+$ that is continuous and nondecreasing.
Define the scalar sequence $\{a_\sigma^{(m)}\}$ for $a_0^{(0)} = 0$, some $a_0^{(1)} \in [0, \rho_4)$, and each $\sigma = 0, 1, 2, \ldots$, $m = 0, 1, 2, \ldots, k$,
by
$$\begin{aligned} \tilde v_\sigma &= v\big(a_\sigma^{(1)} - a_\sigma^{(0)}\big) \ \text{ or } \ v_0\big(a_\sigma^{(0)}\big) + v_0\big(a_\sigma^{(1)}\big),\\ \lambda_\sigma &= \frac{1}{2|\alpha|}\tilde v_\sigma + \big(1 + v_0(a_\sigma^{(0)})\big),\\ \mu_\sigma^{(m)} &= \left(1 + \int_0^1 v_0\big(a_\sigma^{(0)} + \omega(a_\sigma^{(m+1)} - a_\sigma^{(0)})\big)\,d\omega\right)\big(a_\sigma^{(m+1)} - a_\sigma^{(0)}\big) + \frac{1}{|\alpha|}\big(1 + v_0(a_\sigma^{(0)})\big)\big(a_\sigma^{(1)} - a_\sigma^{(0)}\big),\\ q_\sigma &= \frac{1}{2|\alpha|}\tilde v_\sigma + v_0\big(a_\sigma^{(0)}\big),\\ a_\sigma^{(2)} &= a_\sigma^{(1)} + \frac{\left(|1-\alpha|\big(1 + v_0(a_\sigma^{(0)})\big) + \frac{1}{2}\tilde v_\sigma\right)\big(a_\sigma^{(1)} - a_\sigma^{(0)}\big)}{|\alpha|\,(1 - q_\sigma)},\\ a_\sigma^{(m+2)} &= a_\sigma^{(m+1)} + \frac{\lambda_\sigma\,\mu_\sigma^{(m)}}{\big(1 - v_0(a_\sigma^{(0)})\big)\big(1 - q_\sigma\big)},\\ b_{\sigma+1} &= \int_0^1 v\big((1-\omega)(a_{\sigma+1}^{(0)} - a_\sigma^{(0)})\big)\,d\omega\,\big(a_{\sigma+1}^{(0)} - a_\sigma^{(0)}\big) + \frac{1}{|\alpha|}\big(1 + v_0(a_\sigma^{(0)})\big)\big(a_\sigma^{(1)} - a_\sigma^{(0)}\big),\\ a_{\sigma+1}^{(1)} &= a_{\sigma+1}^{(0)} + \frac{|\alpha|\,b_{\sigma+1}}{1 - v_0\big(a_{\sigma+1}^{(0)}\big)}. \end{aligned} \tag{24}$$
This sequence is shown to be majorizing for the method (2) in Theorem 2. However, let us first give a convergence condition for it.
(E3)
There exists $\rho_5 \in [0, \rho_4)$ such that for each $\sigma = 0, 1, 2, \ldots$ and $m = 1, 2, \ldots, k$,
$$v_0(a_\sigma^{(0)}) < 1, \quad q_\sigma < 1, \quad \text{and} \quad a_\sigma^{(m)} < \rho_5.$$
It follows by the Formula (24), (E1), (E2), and this condition that
$$0 \le a_\sigma^{(m)} \le a_{\sigma+1}^{(m)} < \rho_5,$$
and this sequence, being nondecreasing and bounded above, is convergent to some $a^* \in [0, \rho_5]$.
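A small script can generate the sequence (24) and test (E3) numerically. The sketch below uses the first choice of $\tilde v_\sigma$ and assumes, as the recursion suggests, that $a_{\sigma+1}^{(0)} = a_\sigma^{(k+2)}$ (an assumption on our part):

```python
import numpy as np

def majorizing(alpha, k, v0, v, a01, steps=25):
    """Sketch of the recursion (24); returns an approximation of a*,
    or None if condition (E3) fails along the way."""
    grid = np.linspace(0.0, 1.0, 201)
    quad = lambda f: np.trapz(f(grid), grid)           # the omega-integrals
    a0, a1 = 0.0, a01                                  # a_0^(0) = 0, a_0^(1) given
    for _ in range(steps):
        vt = v(a1 - a0)                                # first choice of v~_sigma
        lam = vt / (2 * abs(alpha)) + 1 + v0(a0)
        q = vt / (2 * abs(alpha)) + v0(a0)
        if v0(a0) >= 1 or q >= 1:
            return None                                # (E3) violated
        a = [a0, a1,
             a1 + (abs(1 - alpha) * (1 + v0(a0)) + vt / 2) * (a1 - a0)
                 / (abs(alpha) * (1 - q))]
        for m in range(1, k + 1):                      # a^(3), ..., a^(k+2)
            mu = (1 + quad(lambda w: v0(a0 + w * (a[m + 1] - a0)))) * (a[m + 1] - a0) \
                 + (1 + v0(a0)) * (a1 - a0) / abs(alpha)
            a.append(a[m + 1] + lam * mu / ((1 - v0(a0)) * (1 - q)))
        nxt = a[k + 2]                                 # assumed a_{sigma+1}^(0)
        if v0(nxt) >= 1:
            return None
        b = quad(lambda w: v((1 - w) * (nxt - a0))) * (nxt - a0) \
            + (1 + v0(a0)) * (a1 - a0) / abs(alpha)
        a0, a1 = nxt, nxt + abs(alpha) * b / (1 - v0(nxt))
    return a0                                          # approximately a*
```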
As in the local analysis, the functions v 0 and v relate to Δ as follows:
(E4)
There exist $x_0 \in \Phi$ and an invertible operator $\gamma$ such that for each $x \in \Phi$,
$$\|\gamma^{-1}(\Delta'(x) - \gamma)\| \le v_0(\|x - x_0\|).$$
Notice that
$$\|\gamma^{-1}(\Delta'(x_0) - \gamma)\| \le v_0(0) < 1.$$
Consequently, the inverse of the linear operator $\Delta'(x_0)$ exists, and we can choose $a_0^{(1)}$ such that $\|\Delta'(x_0)^{-1}\Delta(x_0)\| \le a_0^{(1)}$.
Set $D_2 = \Phi \cap S(x_0, \rho_4)$.
(E5)
$\|\gamma^{-1}(\Delta'(y) - \Delta'(x))\| \le v(\|y - x\|)$ for each $x, y \in D_2$.
(E6)
$S[x_0, a^*] \subset \Phi$.
The main semilocal analysis of convergence follows under these conditions in the next result.
Theorem 2.
Suppose that the conditions (E1)–(E6) are satisfied. Then, there exists a solution $\xi^* \in S[x_0, a^*]$ of the equation $\Delta(x) = 0$ such that for each $\sigma = 0, 1, 2, \ldots$,
$$\|\xi^* - x_\sigma\| \le a^* - a_\sigma^{(k)}. \tag{25}$$
Proof. 
As in the local case, but using the conditions (E) instead of (H), we have the series of calculations
$$\Delta'(x_\sigma) - \alpha A_\sigma = \Delta'(x_\sigma) - \frac{1}{2}\Delta'(y_\sigma) - \alpha\,\Delta'(x_\sigma) + \frac{1}{2}\Delta'(x_\sigma) = (1 - \alpha)\Delta'(x_\sigma) + \frac{1}{2}\big(\Delta'(x_\sigma) - \Delta'(y_\sigma)\big), \tag{26}$$
$$\|\gamma^{-1}\big(\Delta'(x_\sigma) - \alpha A_\sigma\big)\| \le |1 - \alpha|\big(1 + v_0(a_\sigma^{(0)})\big) + \frac{1}{2}\tilde v_\sigma. \tag{27}$$
So, from the first two substeps, we have
$$z_\sigma^{(1)} - y_\sigma = \big[\alpha\,\Delta'(x_\sigma)^{-1} - A_\sigma^{-1}\big]\Delta(x_\sigma) = -A_\sigma^{-1}\big(\Delta'(x_\sigma) - \alpha A_\sigma\big)\Delta'(x_\sigma)^{-1}\Delta(x_\sigma) = \frac{1}{\alpha}A_\sigma^{-1}\big(\Delta'(x_\sigma) - \alpha A_\sigma\big)(y_\sigma - x_\sigma). \tag{28}$$
Thus, we obtain from (26) and (27) that
$$\|z_\sigma^{(1)} - y_\sigma\| \le \frac{1}{|\alpha|}\|A_\sigma^{-1}\gamma\|\,\|\gamma^{-1}\big(\Delta'(x_\sigma) - \alpha A_\sigma\big)\|\,\|y_\sigma - x_\sigma\| \le \frac{\left(|1-\alpha|\big(1 + v_0(a_\sigma^{(0)})\big) + \frac{1}{2}\tilde v_\sigma\right)\big(a_\sigma^{(1)} - a_\sigma^{(0)}\big)}{|\alpha|\,(1 - q_\sigma)} = a_\sigma^{(2)} - a_\sigma^{(1)}$$
and
$$\|z_\sigma^{(1)} - x_0\| \le \|z_\sigma^{(1)} - y_\sigma\| + \|y_\sigma - x_0\| \le a_\sigma^{(2)} - a_\sigma^{(1)} + a_\sigma^{(1)} - a_0^{(0)} = a_\sigma^{(2)} < a^*;$$
hence, the iterate $z_\sigma^{(1)} \in S[x_0, a^*]$.
Regarding the definition of $B_\sigma$, we also need the estimate
$$2\Delta'(x_\sigma) - A_\sigma = 2\Delta'(x_\sigma) - \frac{1}{2\alpha}\big(\Delta'(y_\sigma) - \Delta'(x_\sigma)\big) - \Delta'(x_\sigma) = \Delta'(x_\sigma) - \frac{1}{2\alpha}\big(\Delta'(y_\sigma) - \Delta'(x_\sigma)\big).$$
So, we have
$$\|\gamma^{-1}\big(2\Delta'(x_\sigma) - A_\sigma\big)\| \le \frac{1}{2|\alpha|}\tilde v_\sigma + \big(1 + v_0(a_\sigma^{(0)})\big) = \lambda_\sigma,$$
$$\|B_\sigma\gamma\| \le \|A_\sigma^{-1}\gamma\|\,\|\gamma^{-1}\big(2\Delta'(x_\sigma) - A_\sigma\big)\|\,\|\Delta'(x_\sigma)^{-1}\gamma\| \le \frac{\lambda_\sigma}{\big(1 - v_0(a_\sigma^{(0)})\big)\big(1 - q_\sigma\big)}, \tag{29}$$
$$\Delta(z_\sigma^{(1)}) = \Delta(z_\sigma^{(1)}) - \Delta(x_\sigma) - \frac{1}{\alpha}\Delta'(x_\sigma)(y_\sigma - x_\sigma),$$
and
$$\|\gamma^{-1}\Delta(z_\sigma^{(1)})\| \le \left(1 + \int_0^1 v_0\big(a_\sigma^{(0)} + \omega(a_\sigma^{(2)} - a_\sigma^{(0)})\big)\,d\omega\right)\big(a_\sigma^{(2)} - a_\sigma^{(0)}\big) + \frac{1}{|\alpha|}\big(1 + v_0(a_\sigma^{(0)})\big)\big(a_\sigma^{(1)} - a_\sigma^{(0)}\big) = \mu_\sigma^{(1)}. \tag{30}$$
Thus, (29) and (30) lead to
$$\|z_\sigma^{(2)} - z_\sigma^{(1)}\| \le \|B_\sigma\gamma\|\,\|\gamma^{-1}\Delta(z_\sigma^{(1)})\| \le \frac{\lambda_\sigma\,\mu_\sigma^{(1)}}{\big(1 - v_0(a_\sigma^{(0)})\big)\big(1 - q_\sigma\big)} = a_\sigma^{(3)} - a_\sigma^{(2)} \tag{31}$$
and
$$\|z_\sigma^{(2)} - x_0\| \le \|z_\sigma^{(2)} - z_\sigma^{(1)}\| + \|z_\sigma^{(1)} - x_0\| \le a_\sigma^{(3)} - a_\sigma^{(2)} + a_\sigma^{(2)} - a_0^{(0)} = a_\sigma^{(3)} < a^*.$$
Hence, the iterate $z_\sigma^{(2)} \in S(x_0, a^*)$. Similarly, from
$$z_\sigma^{(m+1)} - z_\sigma^{(m)} = -B_\sigma\,\Delta(z_\sigma^{(m)})$$
we have that
$$\|z_\sigma^{(m+1)} - z_\sigma^{(m)}\| \le \frac{\lambda_\sigma\,\mu_\sigma^{(m)}}{\big(1 - v_0(a_\sigma^{(0)})\big)\big(1 - q_\sigma\big)} = a_\sigma^{(m+2)} - a_\sigma^{(m+1)} \tag{32}$$
and
$$\|z_\sigma^{(m+1)} - x_0\| \le \|z_\sigma^{(m+1)} - z_\sigma^{(m)}\| + \|z_\sigma^{(m)} - x_0\| \le a_\sigma^{(m+2)} - a_\sigma^{(m+1)} + a_\sigma^{(m+1)} - a_0^{(0)} = a_\sigma^{(m+2)} < a^*,$$
so the iterate $z_\sigma^{(m+1)} \in S(x_0, a^*)$.
Then, from the identity
$$\Delta(x_{\sigma+1}) = \Delta(x_{\sigma+1}) - \Delta(x_\sigma) - \frac{1}{\alpha}\Delta'(x_\sigma)(x_{\sigma+1} - x_\sigma) + \frac{1}{\alpha}\Delta'(x_\sigma)(x_{\sigma+1} - y_\sigma), \tag{33}$$
we obtain
$$\|\gamma^{-1}\Delta(x_{\sigma+1})\| \le \int_0^1 v\big((1-\omega)(a_{\sigma+1}^{(0)} - a_\sigma^{(0)})\big)\,d\omega\,\big(a_{\sigma+1}^{(0)} - a_\sigma^{(0)}\big) + \frac{1}{|\alpha|}\big(1 + v_0(a_\sigma^{(0)})\big)\big(a_\sigma^{(1)} - a_\sigma^{(0)}\big) = b_{\sigma+1},$$
leading to
$$\|y_{\sigma+1} - x_{\sigma+1}\| \le |\alpha|\,\|\Delta'(x_{\sigma+1})^{-1}\gamma\|\,\|\gamma^{-1}\Delta(x_{\sigma+1})\| \le \frac{|\alpha|\,b_{\sigma+1}}{1 - v_0\big(a_{\sigma+1}^{(0)}\big)} = a_{\sigma+1}^{(1)} - a_{\sigma+1}^{(0)} \tag{34}$$
and
$$\|y_{\sigma+1} - x_0\| \le \|y_{\sigma+1} - x_{\sigma+1}\| + \|x_{\sigma+1} - x_0\| \le a_{\sigma+1}^{(1)} - a_{\sigma+1}^{(0)} + a_{\sigma+1}^{(0)} - a_0^{(0)} = a_{\sigma+1}^{(1)} < a^*,$$
so the iterate $y_{\sigma+1} \in S(x_0, a^*)$.
Hence, we have shown that all the iterates $\{x_\sigma\}$ belong to $S(x_0, a^*)$ and that the items (31), (32), and (34) are satisfied. By the condition (E3), the majorizing sequence $\{a_\sigma^{(m)}\}$ is convergent and hence Cauchy. Thus, the sequence $\{x_\sigma\}$ is also Cauchy in the Banach space $M_1$ by (31), (32), and (34). Consequently, it is convergent to some $\xi^* \in S[x_0, a^*]$.
By letting $\sigma \to +\infty$ in (33) and using the continuity of the operator $\Delta$, we deduce that $\Delta(\xi^*) = 0$. Then, in the estimate $\|x_{\sigma+i} - x_\sigma\| \le a_{\sigma+i}^{(k)} - a_\sigma^{(k)}$, let $i \to +\infty$ to conclude that (25) is satisfied. □
Next, as in the local case, the uniqueness of the solution is established for the equation Δ ( x ) = 0 in a neighborhood of the initial point x 0 .
Proposition 2.
Suppose the following:
There exists a solution $\tilde x \in S(x_0, \rho_6)$ of the equation $\Delta(x) = 0$ for some $\rho_6 > 0$.
The condition (E4) is satisfied in the ball $S(x_0, \rho_6)$, and there exists $\rho_7 \ge \rho_6$ such that
$$\int_0^1 v_0\big(\omega\rho_6 + (1-\omega)\rho_7\big)\,d\omega < 1. \tag{35}$$
Take $D_3 = \Phi \cap S[x_0, \rho_7]$.
Then, the only solution of the equation Δ ( x ) = 0 in the set D 3 is x ˜ .
Proof. 
Let $u \in D_3$ be such that $\Delta(u) = 0$. Define the linear operator $T_1 = \int_0^1 \Delta'(\tilde x + \omega(u - \tilde x))\,d\omega$. It follows by the condition (E4) and (35) that
$$\|\gamma^{-1}(T_1 - \gamma)\| \le \int_0^1 v_0\big(\omega\|\tilde x - x_0\| + (1-\omega)\|u - x_0\|\big)\,d\omega \le \int_0^1 v_0\big(\omega\rho_6 + (1-\omega)\rho_7\big)\,d\omega < 1, \tag{36}$$
so the operator $T_1$ is invertible. Then, from the identity
$$u - \tilde x = T_1^{-1}\big(\Delta(u) - \Delta(\tilde x)\big) = T_1^{-1}(0) = 0,$$
we conclude that u = x ˜ . □
Remark 3.
(i) 
A popular choice is $\gamma = \Delta'(x_0)$ or $\gamma = I$. Other selections are possible as long as the conditions (E4) and (E5) hold.
(ii) 
The limit point a * can be replaced by ρ 4 in the condition ( E 6 ) .
(iii) 
Under all the conditions (E1)–(E6), one can take $\tilde x = \xi^*$ and $\rho_6 = a^*$ in Proposition 2.

4. Numerical Illustrations

We illustrate numerically the theoretical results proposed in the previous sections. We demonstrate the applicability on real-life and standard nonlinear problems, together with initial guesses, in Examples 1–5. These examples involve a good mixture of scalar equations and nonlinear systems. We display not only the radii of convergence in Table 1, Table 2, Table 3, Table 4 and Table 5 but also the least number of iterations required to reach the solution, the absolute residual error at the corresponding iteration, and the computational order of convergence (COC). The COC is approximated by means of
$$\chi = \frac{\ln\big(\|x_{\sigma+1} - \xi^*\| / \|x_\sigma - \xi^*\|\big)}{\ln\big(\|x_\sigma - \xi^*\| / \|x_{\sigma-1} - \xi^*\|\big)}, \quad \text{for } \sigma = 1, 2, \ldots \tag{37}$$
or the approximate computational order of convergence (ACOC) [7,15,16,17] by
$$\chi^* = \frac{\ln\big(\|x_{\sigma+1} - x_\sigma\| / \|x_\sigma - x_{\sigma-1}\|\big)}{\ln\big(\|x_\sigma - x_{\sigma-1}\| / \|x_{\sigma-1} - x_{\sigma-2}\|\big)}, \quad \text{for } \sigma = 2, 3, \ldots \tag{38}$$
We adopt $\epsilon = 10^{-100}$ as the error tolerance. The stopping criteria adopted in the computer programs for solving the nonlinear systems are (i) $\|x_{\sigma+1} - x_\sigma\| < \epsilon$ and (ii) $\|\Delta(x_\sigma)\| < \epsilon$.
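With stored iterates, (37) and (38) can be evaluated as in the following sketch (our own helper, not part of the authors' code):

```python
import numpy as np

def coc(xs, xstar):
    """COC (37) from iterates xs = [x0, x1, ...] and the known solution."""
    e = [np.linalg.norm(x - xstar) for x in xs]
    return [np.log(e[s + 1] / e[s]) / np.log(e[s] / e[s - 1])
            for s in range(1, len(e) - 1)]

def acoc(xs):
    """ACOC (38); no knowledge of the solution is needed."""
    d = [np.linalg.norm(xs[s + 1] - xs[s]) for s in range(len(xs) - 1)]
    return [np.log(d[s] / d[s - 1]) / np.log(d[s - 1] / d[s - 2])
            for s in range(2, len(d))]
```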
The computational work is implemented with the software Mathematica 9, adopting higher-precision arithmetic. In addition, we have chosen $\alpha = 1$ and $\gamma = \Delta'(\xi^*)$ in the next two examples.
Example 1.
We selected a well-known mathematical problem involving a nonlinear integral equation of the first kind with a Hammerstein operator, which is used in many fields, including physics and engineering. Both integral and nonlinear components are present in this problem. Such problems either have no analytical solution at all or a very complicated one. To provide an example, let us take $\Phi = S[0, 1]$ and $M_1 = M_2 = C[0, 1]$. The Hammerstein operator $\Delta$ [1,2,3,4,6] is involved in this nonlinear integral equation of the first kind, which is defined by
$$\Delta(v)(x) = v(x) - 8\int_0^1 x\,\omega\,v(\omega)^3\,d\omega. \tag{39}$$
The derivative of the operator $\Delta$ is given by
$$\Delta'(v)(q)(x) = q(x) - 24\int_0^1 x\,\omega\,v(\omega)^2 q(\omega)\,d\omega,$$
for $q \in C[0, 1]$. The operator $\Delta$ satisfies the hypotheses of Theorem 1 if, for $\xi^* = 0$, we choose
$$\ell(\xi) = 24\xi, \quad \ell_0(\xi) = 12\xi, \quad k = 2, \quad \tilde\ell(\xi) = \ell((1 + h_0(\xi))\xi), \quad \text{and} \quad \tilde\ell_1(\xi) = \ell((1 + h_1(\xi))\xi).$$
Hence, we present the convergence radii for Example 1 in Table 1.
Table 1. Radii for Example 1 of the method (2).

α | ρ0 | ρ1 | ρ | r0 | r1 | r2 | r
1 | 0.083333 | 0.036537 | 0.036537 | 0.04167 | 0.018854 | 0.013528 | 0.013528
1/2 | 0.083333 | 0.017014 | 0.017014 | 0.018518 | 0.0091788 | 0.0072646 | 0.0072646
1/4 | 0.083333 | 0.0083754 | 0.0083754 | 0.0087719 | 0.0045910 | 0.0039538 | 0.0039538
2/3 | 0.083333 | 0.023074 | 0.023074 | 0.025641 | 0.012293 | 0.0093465 | 0.0093465
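The first two radii columns of Table 1 can be reproduced with a few lines of code. The following sketch solves the scalar equations of (H1) and (H2) with SciPy's brentq; the remaining radii $r_1$, $r_2$ follow analogously from $h_1$, $h_2$:

```python
import numpy as np
from scipy.optimize import brentq

l0 = lambda t: 12 * t          # Example 1: l0(t) = 12 t
l  = lambda t: 24 * t          # Example 1: l(t) = 24 t

w = np.linspace(0.0, 1.0, 201)
quad = lambda f: np.trapz(f(w), w)                    # omega-integrals on [0, 1]

rho0 = brentq(lambda t: l0(t) - 1, 1e-12, 1.0)        # root of (H1)

def h0(t, alpha):                                     # the function of (H2)
    num = quad(lambda s: l((1 - s) * t)) \
        + abs(1 - alpha) * (1 + quad(lambda s: l0(s * t)))
    return num / (1 - l0(t))

for alpha in (1.0, 1/2, 1/4, 2/3):
    r0 = brentq(lambda t: h0(t, alpha) - 1, 1e-12, 0.999 * rho0)
    print(f"alpha = {alpha:.4f}: rho0 = {rho0:.6f}, r0 = {r0:.6f}")
# alpha = 1 yields rho0 = 0.083333 and r0 = 0.041667, as in Table 1
```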
Example 2.
A three-by-three nonlinear system is a complicated mathematical problem found in science and engineering. It becomes even more complicated when it mixes polynomial and exponential terms. That is why we picked one such system to work on, in order to show the applicability of our methods. Let $M_1 = M_2 = \mathbb{R}^3$ and $\Phi = S(0, 1)$, where $\xi^* = u^* = (0, 0, 0)^T$. Consider the operator $\Delta$ on $\Phi$ given by
$$\Delta(u) = \Delta(u_1, u_2, u_3) = \left(e^{u_1} - 1,\ \frac{e - 1}{2}u_2^2 + u_2,\ u_3\right)^T,$$
where $u = (u_1, u_2, u_3)^T$. It has the following Fréchet derivative:
$$\Delta'(u) = \begin{pmatrix} e^{u_1} & 0 & 0\\ 0 & 1 + u_2(e - 1) & 0\\ 0 & 0 & 1 \end{pmatrix}.$$
Then, $\gamma = \Delta'(\xi^*) = \Delta'(\xi^*)^{-1} = \mathrm{diag}\{1, 1, 1\}$; so, we have
$$\ell(\xi) = e^{\frac{1}{e-1}}\xi, \quad \ell_0(\xi) = (e - 1)\xi, \quad k = 2, \quad \tilde\ell(\xi) = \ell((1 + h_0(\xi))\xi), \quad \text{and} \quad \tilde\ell_1(\xi) = \ell((1 + h_1(\xi))\xi).$$
Hence, we present the convergence radii of illustration (Example 2) in Table 2.
Table 2. Radii for Example 2 of the method (2).

α | ρ0 | ρ1 | ρ | r0 | r1 | r2 | r
1 | 0.581977 | 0.380074 | 0.380074 | 0.382692 | 0.20423 | 0.15438 | 0.15438
1/2 | 0.581977 | 0.18349 | 0.18349 | 0.16433 | 0.103706 | 0.084580 | 0.084580
1/4 | 0.581977 | 0.0924820 | 0.0924820 | 0.0767480 | 0.0536320 | 0.0469650 | 0.046965
2/3 | 0.581977 | 0.24531 | 0.24531 | 0.22993 | 0.13639 | 0.10772 | 0.10772

Examples for the Semilocal Analysis of Convergence (SLAC)

We illustrate the theoretical results of the semilocal convergence on three different problems, namely, Examples 3–5. In addition, we select four cases of the family (2): the fifth-order method with $(k = 2, \alpha = 1)$ and $(k = 2, \alpha = \frac{2}{3})$, denoted Case-1 and Case-2, respectively; and the seventh-order method with $(k = 3, \alpha = 1)$ and $(k = 3, \alpha = \frac{2}{3})$, denoted Case-3 and Case-4, respectively. The three examples are the renowned 2D Bratu, BVP, and Fisher problems, applied-science problems that lead to nonlinear systems of size $100 \times 100$, $90 \times 90$, and $100 \times 100$, respectively.
Example 3.
Bratu 2D Problem:
Defined in [18], the well-known 2D Bratu problem on $\phi = \{(x, t) : 0 \le x \le 1,\ 0 \le t \le 1\}$ is given by
$$u_{xx} + u_{tt} + C e^{u} = 0 \ \text{ on } \phi, \quad \text{with boundary conditions } u = 0 \text{ on } \partial\phi. \tag{40}$$
To obtain a nonlinear system, we use the finite difference discretization procedure for the expression (40). The required solution at a mesh's grid points is $u_{i,j} = u(x_i, t_j)$. In addition, the numbers of steps in the $x$ and $t$ directions are $M$ and $N$, and the corresponding step sizes are $h$ and $k$. Applying central differences to the terms $u_{xx}$ and $u_{tt}$ of the PDE (40), we obtain, e.g.,
$$u_{xx}(x_i, t_j) \approx \frac{u_{i+1,j} - 2u_{i,j} + u_{i-1,j}}{h^2}, \quad C = 0.1, \quad t \in [0, 1].$$
To obtain a nonlinear system of size $100 \times 100$, we picked $M = 11$ and $N = 11$. The method converges to the following solution (a column vector, not a matrix):
u * = 0.00111 , 0.00181 , 0.00226 , 0.00253 , 0.00266 , 0.00266 , 0.00253 , 0.00226 , 0.00181 , 0.00111 , 0.00181 , 0.00305 , 0.00387 , 0.00437 , 0.00461 , 0.00461 , 0.00437 , 0.00387 , 0.00305 , 0.00181 , 0.00226 , 0.00387 , 0.00497 , 0.00565 , 0.00598 , 0.00598 , 0.00565 , 0.00497 , 0.00387 , 0.00226 , 0.00253 , 0.00437 , 0.00565 , 0.00645 , 0.00684 , 0.00684 , 0.00645 , 0.00565 , 0.00437 , 0.00253 , 0.00266 , 0.00461 , 0.00598 , 0.00684 , 0.00726 , 0.00726 , 0.00684 , 0.00598 , 0.00461 , 0.00266 , 0.00266 , 0.00461 , 0.00598 , 0.00684 , 0.00726 , 0.00726 , 0.00684 , 0.00598 , 0.00461 , 0.00266 , 0.00253 , 0.00437 , 0.00565 , 0.00645 , 0.00684 , 0.00684 , 0.00645 , 0.00565 , 0.00437 , 0.00253 , 0.00226 , 0.00387 , 0.00497 , 0.00565 , 0.00598 , 0.00598 , 0.00565 , 0.00497 , 0.00387 , 0.00226 , 0.00181 , 0.00305 , 0.00387 , 0.00437 , 0.00461 , 0.00461 , 0.00437 , 0.00387 , 0.00305 , 0.00181 , 0.00111 , 0.00181 , 0.00226 , 0.00253 , 0.00266 , 0.00266 , 0.00253 , 0.00226 , 0.00181 , 0.00111 , T .
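For reference, the discrete Bratu system can be assembled as in the following sketch, a hypothetical implementation of the discretization just described, with the boundary values $u = 0$ attached explicitly:

```python
import numpy as np

def bratu_residual(U_flat, M=11, N=11, C=0.1):
    """Residual of the finite-difference form of (40) on the interior
    (M-1) x (N-1) grid; U_flat holds the 100 interior unknowns."""
    h, k = 1.0 / M, 1.0 / N
    U = np.zeros((M + 1, N + 1))                 # boundary: u = 0
    U[1:M, 1:N] = U_flat.reshape(M - 1, N - 1)   # interior unknowns
    R = ((U[2:, 1:N] - 2 * U[1:M, 1:N] + U[:M - 1, 1:N]) / h**2
         + (U[1:M, 2:] - 2 * U[1:M, 1:N] + U[1:M, :N - 1]) / k**2
         + C * np.exp(U[1:M, 1:N]))
    return R.ravel()
```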
In Table 3, we report the computational results for Example 3.
Table 3. Computational results for Example 3 with the method (2).

Method (2) | x0 | NIT | ‖x_{σ+1} − x_σ‖ | ‖Δ(x_σ)‖ | COC | CPU
Case-1 | 0.1 sin(iπh) sin(jπh) | 3 | $3.8 \times 10^{-390}$ | $6.7 \times 10^{-391}$ | 4.9999 | 42.3046
Case-2 | 0.1 sin(iπh) sin(jπh) | 3 | $2.4 \times 10^{-402}$ | $4.6 \times 10^{-403}$ | 4.9989 | 43.9509
Case-3 | 0.1 sin(iπh) sin(jπh) | 3 | $3.8 \times 10^{-1109}$ | $6.5 \times 10^{-1110}$ | 6.9999 | 44.0704
Case-4 | 0.1 sin(iπh) sin(jπh) | 3 | $2.5 \times 10^{-1043}$ | $4.9 \times 10^{-1044}$ | 6.9997 | 43.1974

(i, j = 1, 2, 3, …, 10.)
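A hypothetical driver combining this residual with `family_step` from Section 1 looks as follows; in ordinary double precision the residual stalls near $10^{-16}$, so the $10^{-390}$ values of Table 3 require the multiprecision arithmetic used in the paper:

```python
import numpy as np

def num_jacobian(F, x, eps=1e-7):
    """Forward-difference Jacobian (the paper works with exact Jacobians)."""
    n, Fx = x.size, F(x)
    J = np.empty((Fx.size, n))
    for p in range(n):
        e = np.zeros(n); e[p] = eps
        J[:, p] = (F(x + e) - Fx) / eps
    return J

s = np.sin(np.pi * np.arange(1, 11) / 11.0)      # h = 1/11, i = 1..10
x = 0.1 * np.outer(s, s).ravel()                 # x0 of Table 3
for it in range(4):
    x = family_step(bratu_residual,
                    lambda v: num_jacobian(bratu_residual, v),
                    x, alpha=1.0, k=2)           # Case-1
    print(it + 1, np.linalg.norm(bratu_residual(x), np.inf))
```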
Example 4.
Boundary value problems (BVPs) are important parts of mathematics and physics. They involve differential equations subject to several, frequently boundary-based, conditions. The modeling of practical phenomena such as heat transfer, fluid flow, and quantum mechanics relies heavily on BVPs, which provide valuable insights into physical systems. Therefore, we chose the following BVP (see [14]):
$$u'' + \mu^2 (u')^2 + 1 = 0, \tag{41}$$
with $u(0) = 0$ and $u(1) = 1$. The partition of the interval $[0, 1]$ into $\ell$ pieces provides us with the grid
$$\gamma_0 = 0 < \gamma_1 < \gamma_2 < \cdots < \gamma_{\ell-1} < \gamma_\ell = 1, \quad \gamma_{j+1} = \gamma_j + h, \quad h = \frac{1}{\ell}.$$
Here, $u_0, u_1, u_2, \ldots, u_\ell$ stand for $u(\gamma_0), u(\gamma_1), u(\gamma_2), \ldots, u(\gamma_\ell)$, respectively; in addition, $u_0 = 0$ and $u_\ell = 1$. Using in the expression (41) the discretization
$$u_\theta' \approx \frac{u_{\theta+1} - u_{\theta-1}}{2h}, \quad u_\theta'' \approx \frac{u_{\theta-1} - 2u_\theta + u_{\theta+1}}{h^2}, \quad \theta = 1, 2, 3, \ldots, \ell-1,$$
we obtain the following nonlinear system of equations of order $(\ell-1) \times (\ell-1)$:
$$u_{\theta-1} - 2u_\theta + u_{\theta+1} + \frac{\mu^2}{4}\big(u_{\theta+1} - u_{\theta-1}\big)^2 + h^2 = 0. \tag{42}$$
We picked $\ell = 91$ and $\mu = \frac{1}{2}$ in order to obtain a larger nonlinear system, of order $90 \times 90$. In Table 4, we report the number of iterations, the residual error, the COC, the CPU time, and the error difference between two successive iterations for Example 4. The system (42) converges to the following estimated zero:
ξ * = 0.01853 , 0.03685 , 0.05497 , 0.07289 , 0.09061 , 0.1081 , 0.1254 , 0.1426 , 0.1595 , 0.1763 , 0.1928 , 0.2092 , 0.2253 , 0.2413 , 0.2571 , 0.2728 , 0.2882 , 0.3035 , 0.3186 , 0.3335 , 0.3482 , 0.3628 , 0.3772 , 0.3914 , 0.4054 , 0.4193 , 0.4330 , 0.4465 , 0.4599 , 0.4731 , 0.4862 , 0.4990 , 0.5118 , 0.5243 , 0.5367 , 0.5489 , 0.5610 , 0.5730 , 0.5847 , 0.5963 , 0.6078 , 0.6191 , 0.6303 , 0.6413 , 0.6521 , 0.6628 , 0.6734 , 0.6838 , 0.6940 , 0.7042 , 0.7141 , 0.7239 , 0.7336 , 0.7432 , 0.7525 , 0.7618 , 0.7709 , 0.7799 , 0.7887 , 0.7974 , 0.8059 , 0.8143 , 0.8226 , 0.8307 , 0.8387 , 0.8466 , 0.8543 , 0.8619 , 0.8693 , 0.8766 , 0.8838 , 0.8909 , 0.8978 , 0.9046 , 0.9112 , 0.9177 , 0.9241 , 0.9304 , 0.9365 , 0.9425 , 0.9484 , 0.9541 , 0.9597 , 0.9652 , 0.9706 , 0.9758 , 0.9809 , 0.9858 , 0.9907 , 0.9954 T .
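The system (42) admits an equally short assembly (a sketch; the boundary data $u_0 = 0$, $u_\ell = 1$ are attached as described in the text):

```python
import numpy as np

def bvp_residual(u, mu=0.5, ell=91):
    """Residual of (42) for the ell-1 = 90 interior unknowns."""
    h = 1.0 / ell
    U = np.concatenate(([0.0], u, [1.0]))        # u_0 = 0, u_ell = 1
    return (U[:-2] - 2 * U[1:-1] + U[2:]
            + (mu**2 / 4) * (U[2:] - U[:-2])**2 + h**2)
```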
Table 4. Computational results for Example 4 with the method (2).

Method (2) | x0 | NIT | ‖x_{σ+1} − x_σ‖ | ‖Δ(x_σ)‖ | COC | CPU
Case-1 | (1.002, …, 1.002)^T | 4 | $3.7 \times 10^{-640}$ | $5.4 \times 10^{-641}$ | 4.9964 | 26.7025
Case-2 | (1.002, …, 1.002)^T | 4 | $3.7 \times 10^{-640}$ | $5.4 \times 10^{-641}$ | 4.9964 | 26.2708
Case-3 | (1.002, …, 1.002)^T | 3 | $1.5 \times 10^{-332}$ | $2.1 \times 10^{-333}$ | 6.9971 | 17.8274
Case-4 | (1.002, …, 1.002)^T | 3 | $1.5 \times 10^{-332}$ | $2.1 \times 10^{-333}$ | 6.9971 | 17.7859
Example 5.
We adopt the renowned Fisher's equation [22]
$$\eta_t = D\,\eta_{xx} + \eta(1 - \eta), \tag{43}$$
along with homogeneous Neumann boundary conditions and initial condition
$$\eta(x, 0) = 1.5 + 0.5\cos(\pi x), \quad 0 \le x \le 1, \qquad \eta_x(0, t) = 0, \quad \eta_x(1, t) = 0, \quad t \ge 0.$$
We refer to $D$ as the diffusion parameter. To obtain a nonlinear system, we use the finite difference discretization procedure for the expression (43). The required solution at a mesh's grid points is $w_{i,j} = \eta(x_i, t_j)$. In addition, the numbers of steps in the $x$ and $t$ directions are $M$ and $N$, respectively, and the corresponding step sizes are $h$ and $l$. Utilizing the central, backward, and forward differences
$$\eta_{xx}(x_i, t_j) \approx \frac{w_{i+1,j} - 2w_{i,j} + w_{i-1,j}}{h^2}, \quad \eta_t(x_i, t_j) \approx \frac{w_{i,j} - w_{i,j-1}}{l}, \quad \eta_x(x_i, t_j) \approx \frac{w_{i+1,j} - w_{i,j}}{h}, \quad t \in [0, 1],$$
leading to
$$\frac{w_{i,j} - w_{i,j-1}}{l} - w_{i,j}(1 - w_{i,j}) - D\,\frac{w_{i+1,j} - 2w_{i,j} + w_{i-1,j}}{h^2} = 0, \quad i = 1, 2, 3, \ldots, M, \quad j = 1, 2, 3, \ldots, N.$$
Here, $h = \frac{1}{M}$ and $l = \frac{1}{N}$ are used. The system of nonlinear equations of dimension $100 \times 100$ is obtained by choosing $M = N = 10$, $h = l = 0.1$, and $D = 1$. The above nonlinear system converges to the following solution (a column vector, not a matrix):
u * = u ( u i , t j ) = 1.6248 , 1.4519 , 1.3549 , 1.2941 , 1.2520 , 1.2201 , 1.1945 , 1.1731 , 1.1547 , 1.1386 , 1.5999 , 1.4411 , 1.3500 , 1.2919 , 1.2509 , 1.2196 , 1.1943 , 1.1730 , 1.1546 , 1.1386 , 1.5541 , 1.4209 , 1.3407 , 1.2876 , 1.2489 , 1.2186 , 1.1938 , 1.1727 , 1.1545 , 1.1386 , 1.4929 , 1.3933 , 1.3280 , 1.2816 , 1.2461 , 1.2173 , 1.1932 , 1.1725 , 1.1544 , 1.1385 , 1.4230 , 1.3612 , 1.3131 , 1.2747 , 1.2428 , 1.2158 , 1.1925 , 1.1721 , 1.1542 , 1.1384 , 1.3514 , 1.3279 , 1.2976 , 1.2674 , 1.2394 , 1.2142 , 1.1917 , 1.1717 , 1.1540 , 1.1383 , 1.2851 , 1.2965 , 1.2828 , 1.2604 , 1.2361 , 1.2126 , 1.1910 , 1.1714 , 1.1539 , 1.1383 , 1.2304 , 1.2702 , 1.2703 , 1.2546 , 1.2334 , 1.2113 , 1.1904 , 1.1711 , 1.1537 , 1.1382 , 1.1921 , 1.2513 , 1.2613 , 1.2503 , 1.2313 , 1.2104 , 1.1899 , 1.1709 , 1.1536 , 1.1381 , 1.1728 , 1.2414 , 1.2566 , 1.2480 , 1.2303 , 1.2099 , 1.1897 , 1.1708 , 1.1536 , 1.1381 T .
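A sketch of the resulting Fisher system follows. The treatment of the Neumann conditions (ghost values $w_{0,j} = w_{1,j}$ and $w_{M+1,j} = w_{M,j}$, consistent with the one-sided difference above) and the grid layout are our own assumptions:

```python
import numpy as np

def fisher_residual(W_flat, M=10, N=10, D=1.0):
    """Residual of the discretized (43): unknowns w_{i,j}, i, j = 1..10."""
    h = l = 0.1
    W = W_flat.reshape(N, M)                                  # row j = time level j
    w0 = 1.5 + 0.5 * np.cos(np.pi * np.arange(1, M + 1) * h)  # w_{i,0}
    R = np.empty_like(W)
    for j in range(N):
        prev = w0 if j == 0 else W[j - 1]
        wj = W[j]
        wm = np.concatenate(([wj[0]], wj[:-1]))               # w_{i-1,j}
        wp = np.concatenate((wj[1:], [wj[-1]]))               # w_{i+1,j}
        R[j] = (wj - prev) / l - wj * (1 - wj) - D * (wp - 2 * wj + wm) / h**2
    return R.ravel()
```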
We present the numerical results in Table 5.
Table 5. Computational results for Example 5 with the method (2).

Method (2) | x0 | NIT | ‖x_{σ+1} − x_σ‖ | ‖Δ(x_σ)‖ | COC | CPU
Case-1 | (1 + i/100) | 4 | $4.6 \times 10^{-508}$ | $5.4 \times 10^{-507}$ | 4.9998 | 12.8955
Case-2 | (1 + i/100) | 4 | $4.6 \times 10^{-508}$ | $5.4 \times 10^{-507}$ | 4.9998 | 12.8211
Case-3 | (1 + i/100) | 4 | $1.4 \times 10^{-1875}$ | $1.7 \times 10^{-1874}$ | 7.0000 | 32.0008
Case-4 | (1 + i/100) | 4 | $1.4 \times 10^{-1875}$ | $1.7 \times 10^{-1874}$ | 7.0000 | 32.3537

(i = 1, 2, 3, …, 100.)

5. Conclusions

This paper highlights the nature of convergence analysis for iterative methods, especially when no explicit assurances about convergence are available. It is shown that the convergence behavior can be investigated without the conventional assumptions on derivatives such as $\Delta''$, $\Delta'''$, and $\Delta^{(4)}$ made with the help of Taylor series expansion. In previous work, there was no computable radius of convergence, and information about the choice of the initial point was missing. We made a significant contribution by offering computable bounds on $\|x_\sigma - \xi^*\|$, which allows one to make well-informed decisions about the number of iterations needed to achieve an acceptable error tolerance. Furthermore, the computation of a convergence radius makes the process of choosing an initial point more confident. The study's relevance to a wide range of problems is highlighted by its emphasis on Banach space-valued operators. In addition, we also introduce a semilocal analysis under carefully considered generalized continuity assumptions, which significantly advances our understanding of the convergence behavior of iterative methods. We verified the semilocal convergence by offering practical examples from real-world applications. In particular, the drawbacks of the Taylor series expansion approach listed in (L1)–(L6) of the introduction have all been positively addressed in (L1)′–(L6)′. The technique developed in this paper can also be used with the same benefits on other iterative methods, whether they require inverses of linear operators or not, in an analogous fashion [5,6,7,8,9,10,11,12,13,14,23]. This will be the direction of our future research, together with the further weakening of the sufficient convergence conditions presented in this article.

Author Contributions

Conceptualization, R.B. and I.K.A.; methodology, R.B. and I.K.A.; software, R.B. and I.K.A.; validation, R.B. and I.K.A.; formal analysis, R.B. and I.K.A.; investigation, R.B. and I.K.A.; resources, R.B. and I.K.A.; data curation, R.B. and I.K.A.; writing—original draft preparation, R.B. and I.K.A.; writing—review and editing, R.B., I.K.A. and S.A.; visualization, R.B., I.K.A. and S.A.; supervision, R.B. and I.K.A. All authors have read and agreed to the published version of this manuscript.

Funding

This study is supported via funding from Prince Sattam bin Abdulaziz University project number (PSAU/2024/R/1445).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The author Sattam Alharbi wishes to thank Prince Sattam bin Abdulaziz University project number (PSAU/2024/R/1445) for funding support.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Argyros, I.K.; Magreñan, A.A. A Contemporary Study of Iterative Methods; Academic Press: Cambridge, MA, USA ; Elsevier: Amsterdam, The Netherlands, 2018. [Google Scholar]
  2. Argyros, I.K.; George, S. Mathematical Modeling for the Solutions with Application; Nova Publisher: New York, NY, USA, 2019; Volume III. [Google Scholar]
  3. Argyros, I.K. Unified Convergence Criteria for iterative Banach space valued methods with applications. Mathematics 2021, 9, 1942. [Google Scholar] [CrossRef]
  4. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  5. Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1966. [Google Scholar]
  6. Gutiérrez, J.M.; Hernández, M.A. A family of Chebyshev-Halley type methods in Banach spaces. Bull. Aust. Math. Soc. 1997, 55, 113–130. [Google Scholar] [CrossRef]
  7. Simpson, R.B. A method for the numerical determination of bifurcation states of nonlinear systems of equations. SIAM J. Numer. Anal. 1975, 12, 439–451. [Google Scholar] [CrossRef]
  8. Xiao, X.; Yin, H. A new class of methods with high-order of convergence for solving systems of nonlinear equations. Appl. Math. Comput. 2015, 264, 300–309. [Google Scholar]
  9. Behl, R.; Sarría, Í.; González, R.; Magreñán, Á.A. Highly efficient family of iterative methods for solving nonlinear models. J. Comput. Appl. Math. 2019, 346, 110–132. [Google Scholar] [CrossRef]
  10. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
  11. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. Increasing the convergence order of an iterative method for nonlinear systems. Appl. Math. Lett. 2012, 25, 2369–2374. [Google Scholar] [CrossRef]
  12. Ezquerro, J.A.; Grau-Sánchez, M.; Grau, Á.; Hernández, M.A.; Noguera, M.; Romero, N. On iterative methods with accelerated convergence for solving systems of nonlinear equations. J. Optim. Theory Appl. 2011, 151, 163–174. [Google Scholar] [CrossRef]
  13. Frontini, M.; Sormani, E. Some variant of Newton’s method with third-order convergence. Appl. Math. Comput. 2003, 140, 419–426. [Google Scholar] [CrossRef]
  14. Frontini, M.; Sormani, E. Third-order methods from quadrature formulae for solving systems of nonlinear equations. Appl. Math. Comput. 2004, 149, 771–782. [Google Scholar] [CrossRef]
  15. Grau-Sánchez, M.; Grau, Á.; Noguera, M. Ostrowski type methods for solving systems of nonlinear equations. Appl. Math. Comput. 2011, 218, 2377–2385. [Google Scholar] [CrossRef]
  16. Grau-Sánchez, M.; Grau, Á.; Noguera, M. On the computational efficiency index and some iterative methods for solving systems of nonlinear equations. J. Comput. Appl. Math. 2011, 236, 1259–1266. [Google Scholar] [CrossRef]
  17. Grau-Sánchez, M.; Grau, Á.; Noguera, M. Frozen divided difference scheme for solving systems of nonlinear equations. J. Comput. Appl. Math. 2011, 235, 1739–1743. [Google Scholar] [CrossRef]
  18. Homeier, H.H.H. A modified Newton method with cubic convergence: The multivariate case. J. Comput. Appl. Math. 2004, 169, 161–169. [Google Scholar] [CrossRef]
  19. Kapania, R.K. A pseudo-spectral solution of 2-parameter Bratu’s equation. Comput. Mech. 1990, 6, 55–63. [Google Scholar] [CrossRef]
  20. Noor, M.A.; Waseem, M. Some iterative methods for solving a system of nonlinear equations. Comput. Math. Appl. 2009, 57, 101–106. [Google Scholar] [CrossRef]
  21. Sharma, J.R.; Guha, R.K.; Sharma, R. An efficient fourth order weighted-Newton method for systems of nonlinear equations. Numer. Algor. 2013, 62, 307–323. [Google Scholar] [CrossRef]
  22. Sharma, J.R.; Gupta, P. An efficient fifth order method for solving systems of nonlinear equations. Comput. Math. Appl. 2014, 67, 591–601. [Google Scholar] [CrossRef]
  23. Shakhno, S.M. On a Kurchatov's method of linear interpolation for solving nonlinear equations. PAMM Proc. Appl. Math. Mech. 2004, 4, 650–651. [Google Scholar] [CrossRef]
  24. Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
  25. Dennis, J.E., Jr.; Schnabel, R.B. Numerical Methods for Unconstrained Optimization and Nonlinear Equations; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1996. [Google Scholar]
  26. Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Nauka: Moscow, Russia, 1984. (In Russian) [Google Scholar]
  27. Potra, F.A. On an iterative algorithm of order 1.839… for solving nonlinear operator equations. Numer. Funct. Anal. Optim. 1985, 7, 75–106. [Google Scholar] [CrossRef]
  28. Fernando, T.G.I.; Weerakoon, S. Improved Newton's method for finding roots of a nonlinear equation. In Proceedings of the 53rd Annual Sessions of Sri Lanka Association for the Advancement of Science (SLAAS), Matara, Sri Lanka, 8–12 December 1997; pp. E1–E22. [Google Scholar]
  29. Darvishi, M.T.; Barati, A. A fourth-order method from quadrature formulae to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 188, 257–261. [Google Scholar] [CrossRef]