Article

Multistep Iterative Methods for Solving Equations in Banach Space

1 Mathematical Modelling and Applied Computation Research Group (MMAC), Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
2 Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
3 Department of Mathematics, College of Science and Humanities in Al-Kharj, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
4 Computing Sciences and Mathematics, Franklin University, 201 S Grant Ave., Columbus, OH 43215, USA
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(13), 2145; https://doi.org/10.3390/math12132145
Submission received: 6 June 2024 / Revised: 2 July 2024 / Accepted: 5 July 2024 / Published: 8 July 2024
(This article belongs to the Section Computational and Applied Mathematics)

Abstract:
The novelty of this article lies in extending the use of a multistep method for developing a sequence whose limit solves a Banach space-valued equation. We provide computable error estimates, local and semi-local convergence results, a radius of convergence, and a uniqueness result for the solution, all under $\omega$-continuity conditions on the first derivative, which is the only derivative appearing in the method. Earlier studies, in contrast, relied on high-order derivatives that do not appear in the body of the method, and they proposed neither computable estimates nor semi-local convergence results. We checked the applicability of our study on three real-life problems for semi-local convergence and on two problems chosen for local convergence. Based on the obtained results, we conclude that our approach improves the applicability of the method and makes it suitable for challenges in applied science.
MSC:
65H10; 65Y20; 65G99; 41A58

1. Introduction

In this study, we consider the problem of estimating a solution $x^*$ of the nonlinear equation presented as:
$$J(x) = 0, \qquad (1)$$
where $J : D \subset T_1 \to T_2$ stands for an operator that is differentiable in the Fréchet sense, and $T_1$ and $T_2$ denote Banach spaces. Equation (1) covers a wide range of problems in several disciplines, including economics, engineering, and physics. Usually, analytical solutions to these kinds of problems are out of reach. Thus, we have no option other than iterative methods. For example, methods like Newton–Raphson [1,2,3,4] or gradient descent are used to find an approximate solution $x^*$. By adopting iterative methods, researchers approach the required solution by updating an initial guess iteratively. Furthermore, in order to guarantee an accurate and effective estimation of the required solution, they examine the stability and convergence characteristics of these techniques under various circumstances. We choose the following iterative method, defined for $x_0 \in D$, a fixed positive integer $\kappa \geq 3$, and each $n = 0, 1, 2, \ldots$, by
$$\begin{aligned} y_n^{(1)} &= x_n - J'(x_n)^{-1} J(x_n), \\ A_n &= J'(x_n) + J'(y_n^{(1)}), \\ y_n^{(2)} &= x_n - 2 A_n^{-1} J(x_n), \\ B_n &= \tfrac{7}{2} I - J'(x_n)^{-1} J'(y_n^{(1)}) \Big[ 4I - \tfrac{3}{2}\, J'(x_n)^{-1} J'(y_n^{(1)}) \Big], \\ y_n^{(3)} &= y_n^{(2)} - B_n J'(x_n)^{-1} J(y_n^{(2)}), \\ &\;\;\vdots \\ x_{n+1} &= y_n^{(\kappa)} = y_n^{(\kappa-1)} - B_n J'(x_n)^{-1} J(y_n^{(\kappa-1)}). \end{aligned} \qquad (2)$$
The method uses $\kappa + 1$ operator evaluations per iteration ($\kappa - 1$ evaluations of $J$ and two of $J'$) and two frozen linear operator inversions. The convergence order $3(\kappa - 1)$ is shown in [5] using Taylor series when $T_1 = T_2 = \mathbb{R}^m$, by assuming the existence of $J^{(6)}$, which does not appear in the method.
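Although the analysis below is carried out in Banach space, the scheme itself is straightforward to realize in floating point. The following is a minimal dense-linear-algebra sketch of (2); the names multistep_solve, J, and Jp are ours, and the two LU factorizations stand in for the frozen inversions. It is an illustration, not the authors' implementation:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def multistep_solve(J, Jp, x0, kappa=3, tol=1e-12, max_iter=100):
    """Sketch of method (2). J is the residual, Jp its Jacobian (callables).
    Only J'(x_n) and A_n are factorized: the two frozen inversions."""
    x = np.asarray(x0, dtype=float)
    I = np.eye(x.size)
    for n in range(max_iter):
        Jpx = Jp(x)
        lu_J = lu_factor(Jpx)                     # J'(x_n)
        y1 = x - lu_solve(lu_J, J(x))             # first substep y_n^(1)
        lu_A = lu_factor(Jpx + Jp(y1))            # A_n = J'(x_n) + J'(y_n^(1))
        y = x - 2.0 * lu_solve(lu_A, J(x))        # y_n^(2)
        M = lu_solve(lu_J, Jp(y1))                # J'(x_n)^{-1} J'(y_n^(1))
        B = 3.5 * I - M @ (4.0 * I - 1.5 * M)     # B_n
        for _ in range(kappa - 2):                # substeps y^(3), ..., y^(kappa)
            y = y - B @ lu_solve(lu_J, J(y))
        if np.linalg.norm(y - x) < tol:           # termination criterion
            return y, n + 1
        x = y
    return x, max_iter
```

For $\kappa = 3$ this reproduces the three-substep scheme; larger $\kappa$ only adds inexpensive substeps that reuse the same two factorizations.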
The convergence order of methods specified on the finite-dimensional Euclidean space has been determined extensively by the application of Taylor series expansions. The standard proofs vary slightly depending on the approach used, but it is crucial to have them available. Even though the methods may converge, these conclusions are typically limited and share problems that restrict their applicability. In particular, the drawbacks are:
(M1)
The local convergence analysis usually requires high-order derivatives, inverses of derivatives, or divided differences that do not appear in the methods. In particular, the local analysis of convergence (LAC) in [5] requires derivatives up to the sixth order, which are absent from the technique. These restrictions limit the use of the method even in the scenario where $T_1 = T_2 = \mathbb{R}^m$. A basic and motivating illustration is given by the function $J$ on $T_1 = T_2 = \mathbb{R}$, $\Omega = \big[-\frac{1}{\pi}, \frac{2}{\pi}\big]$, defined as
$$J(\Lambda) = \begin{cases} \Lambda^6 e^{\Lambda} + \Lambda^5 \sin\dfrac{1}{\Lambda} + 9\Lambda^2, & \Lambda \neq 0, \\ 0, & \Lambda = 0. \end{cases}$$
Next, it is determined that the first three derivatives are:
$$\begin{aligned} J'(\Lambda) &= \Lambda\Big( e^{\Lambda}\Lambda^5 + 6 e^{\Lambda}\Lambda^4 + 5\Lambda^3\sin\frac{1}{\Lambda} - \Lambda^2\cos\frac{1}{\Lambda} + 18 \Big), \\ J''(\Lambda) &= \Lambda^4 e^{\Lambda}\big(\Lambda^2 + 12\Lambda + 30\big) + \big(20\Lambda^2 - 1\big)\Lambda\sin\frac{1}{\Lambda} - 8\Lambda^2\cos\frac{1}{\Lambda} + 18, \\ J'''(\Lambda) &= \Lambda^3 e^{\Lambda}\big(\Lambda^3 + 18\Lambda^2 + 90\Lambda + 120\big) + \big(60\Lambda^2 - 9\big)\sin\frac{1}{\Lambda} + \Big(\frac{1}{\Lambda} - 36\Lambda\Big)\cos\frac{1}{\Lambda}. \end{aligned}$$
On $\Omega$, the third derivative $J'''$ is unbounded at $x^* = 0 \in \Omega$, although $J(0) = 0$. Consequently, the local convergence findings in [5] cannot ensure the convergence of the method (2). However, the iterative scheme (2) converges to $x^*$ if, for example, $x_0 = 0.1 \in \Omega$. This observation suggests that it is possible to weaken these conditions.
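A quick numerical check of this unboundedness is possible by sampling the closed-form $J'''$ above along $\Lambda_k = \frac{1}{2\pi k}$, where $\sin\frac{1}{\Lambda_k} = 0$ and $\cos\frac{1}{\Lambda_k} = 1$; a small illustrative script:

```python
import numpy as np

def J3(L):
    """Third derivative J''' from the closed form above."""
    return (L**3 * np.exp(L) * (L**3 + 18*L**2 + 90*L + 120)
            + (60 * L**2 - 9) * np.sin(1 / L)
            + (1 / L - 36 * L) * np.cos(1 / L))

for k in (1, 10, 100, 1000):
    L = 1.0 / (2 * np.pi * k)
    print(k, J3(L))   # dominated by the (1/L) cos(1/L) = 2*pi*k term as k grows
```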
(M2)
There is no advance knowledge about the integer $K$ such that $\|x_n - x^*\| < \epsilon$ ($\epsilon > 0$) for each $n \geq K$.
(M3)
There is no set containing only $x^*$ as a solution of (1).
(M4)
The results only hold on $\mathbb{R}^m$.
(M5)
The more important semi-local results are not studied in [5].
Notice that (M1)–(M5) are problems shared by studies relying on Taylor series. Consequently, the problems (M1)–(M5) constitute the motivation for writing this study. Our technique deals with these issues.
The key points of this study addressing the mentioned issues are as follows:
(M1′)
The convergence analysis uses only the operators appearing in scheme (2), i.e., $J$ and $J'$.
(M2′)
The integer $K$ is known in advance. Thus, we know a priori how many iterates are to be computed.
(M3′)
A set containing only $x^*$ as a solution is known.
(M4′)
The results hold in the setting of Banach spaces.
(M5′)
The semi-local analysis is studied using scalar majorizing sequences for $\{x_n\}$.
Both the local and semi-local convergence analyses use generalized relaxed continuity assumptions to control $J'$. The generality of the new technique, although demonstrated on method (2), means it can also be applied to extend the applicability of other iterative methods, with or without inverses, along the same lines [6,7,8,9,10,11,12,13,14,15,16,17,18,19].
This article is organized as follows. Basic information about the scheme (2) is presented in Section 2, which introduces the local study and its theorems. The semi-local analysis is covered in Section 3. Section 4 brings theory and practice closer together by exploring real-world applications through numerical problems. Section 5 concludes the article and provides suggestions for future research. This organized layout guides readers through a step-by-step exploration of the topic.

2. Convergence Type 1: Local

The proof of the local analysis depends on some scalar functions. Let $E = [0, +\infty)$. We abbreviate functions that are continuous as well as nondecreasing by FCND, and the smallest positive solution of an equation by PSS.
Suppose the following:
(B1)
There exists an FCND $\omega_0 : E \to \mathbb{R}_+$ such that the equation $\omega_0(t) - 1 = 0$ has a PSS. Denote such a solution by $s_0$. Set $E_0 = [0, s_0)$.
(B2)
There exists an FCND $\omega : E_0 \to \mathbb{R}_+$ such that, for $h_1 : E_0 \to \mathbb{R}_+$ defined by
$$h_1(t) = \frac{\int_0^1 \omega\big((1-\theta)t\big)\,d\theta}{1 - \omega_0(t)},$$
the equation $h_1(t) - 1 = 0$ has a PSS in the interval $(0, s_0)$. Denote such a solution by $R_1$.
(B3)
For $\rho(t) = \frac{1}{2}\big[\omega_0(t) + \omega_0\big(h_1(t)\,t\big)\big]$, the equation $\rho(t) - 1 = 0$ has a PSS in the interval $(0, s_0)$. Denote such a solution by $s_1$. Set $s = \min\{s_0, s_1\}$ and $E_1 = [0, s)$.
(B4)
For
$$b(t) = \omega\big((1 + h_1(t))\,t\big) \quad \text{or} \quad b(t) = \omega_0(t) + \omega_0\big(h_1(t)\,t\big),$$
and
$$h_2(t) = \frac{\int_0^1 \omega\big((1-\theta)t\big)\,d\theta}{1 - \omega_0(t)} + \frac{b(t)\Big(1 + \int_0^1 \omega_0(\theta t)\,d\theta\Big)}{2\big(1 - \omega_0(t)\big)\big(1 - \rho(t)\big)},$$
the equation $h_2(t) - 1 = 0$ has a PSS in $(0, s)$. Denote such a solution by $R_2$.
For $j = 2, 3, \ldots, \kappa - 1$, define the functions on $E_1$ as
$$b^{(j)}(t) = \omega\big((1 + h_j(t))\,t\big) \quad \text{or} \quad b^{(j)}(t) = \omega_0(t) + \omega_0\big(h_j(t)\,t\big),$$
$$a^{(j)}(t) = \frac{\int_0^1 \omega\big((1-\theta)h_j(t)\,t\big)\,d\theta}{1 - \omega_0\big(h_j(t)\,t\big)} + \frac{b^{(j)}(t)\Big(1 + \int_0^1 \omega_0\big(\theta h_j(t)\,t\big)\,d\theta\Big)}{\big(1 - \omega_0(t)\big)\big(1 - \omega_0(h_j(t)\,t)\big)} + \frac{b^{(j)}(t)}{2\big(1 - \omega_0(t)\big)^2}\Big(2 + \frac{3\,b^{(j)}(t)}{1 - \omega_0(t)}\Big)\Big(1 + \int_0^1 \omega_0\big(\theta h_j(t)\,t\big)\,d\theta\Big),$$
and, for $j = 3, 4, \ldots, \kappa$,
$$h_j(t) = a^{(j-1)}(t)\, h_{j-1}(t),$$
where the functions $\omega_0$ and $\omega$ are connected to the operators in (2).
(B5)
The equations $h_j(t) - 1 = 0$, $j = 3, \ldots, \kappa$, have a PSS in the interval $(0, s)$. Denote such solutions by $R_j$, respectively.
Set
$$R = \min\{R_m\}, \quad m = 1, 2, \ldots, \kappa, \qquad (3)$$
and $E_2 = [0, R)$.
It follows by the definition of these functions and of $R$ that, for each $t \in E_2$,
$$0 \leq \omega_0(t) < 1, \qquad (4)$$
$$0 \leq \rho(t) < 1, \qquad (5)$$
$$0 \leq h_m(t) < 1. \qquad (6)$$
The parameter $R$ is shown to be a radius of convergence for the method (2) (see Theorem 1). By $B(x, r)$, we denote an open ball of radius $r > 0$ and center $x \in T_1$. Moreover, by $B[x, r]$, we denote its closure.
(B6)
There exist an invertible linear operator $H$ and a solution $x^* \in D$ of the equation $J(x) = 0$ such that, for each $u \in D$,
$$\big\| H^{-1}\big(J'(u) - H\big) \big\| \leq \omega_0\big(\|u - x^*\|\big).$$
Set $D_0 = D \cap B(x^*, s_0)$.
(B7)
For each $u_1, u_2 \in D_0$,
$$\big\| H^{-1}\big(J'(u_2) - J'(u_1)\big) \big\| \leq \omega\big(\|u_2 - u_1\|\big),$$
and
(B8)
$B[x^*, R] \subset D$.
Next, we develop the local convergence analysis of the method (2), relying on the conditions (B1)–(B8) and the preceding notation.
Theorem 1.
Suppose that the conditions (B1)–(B8) hold and that the starting point $x_0 \in B(x^*, R) \setminus \{x^*\}$. Then, the sequence $\{x_n\}$ generated by the method (2) is convergent to the solution $x^*$ of the equation $J(x) = 0$. Moreover, the following items hold for each $n = 0, 1, 2, \ldots$:
$$\|y_n^{(1)} - x^*\| \leq h_1\big(\|x_n - x^*\|\big)\,\|x_n - x^*\| \leq \|x_n - x^*\| < R, \qquad (7)$$
$$\|y_n^{(2)} - x^*\| \leq h_2\big(\|x_n - x^*\|\big)\,\|x_n - x^*\| \leq \|x_n - x^*\|, \qquad (8)$$
$$\|y_n^{(j)} - x^*\| \leq h_j\big(\|x_n - x^*\|\big)\,\|x_n - x^*\| \leq \|x_n - x^*\|, \quad j = 3, \ldots, \kappa. \qquad (9)$$
In particular, for $j = \kappa$,
$$\|x_{n+1} - x^*\| = \|y_n^{(\kappa)} - x^*\| \leq h_\kappa\big(\|x_n - x^*\|\big)\,\|x_n - x^*\| \leq \|x_n - x^*\|, \qquad (10)$$
where the radius $R$ is given by Formula (3), and the functions $h_1, h_2, \ldots, h_\kappa$ are as previously defined.
Proof. 
We shall show by induction that all iterates belong in the ball $B(x^*, R)$ and that the items (7)–(10) hold. Let $u \in D$. The definition of $\omega_0$, the radius $R$, (4), and the condition (B6) imply in turn
$$\big\|H^{-1}\big(J'(u) - H\big)\big\| \leq \omega_0\big(\|u - x^*\|\big) \leq \omega_0(R) < 1.$$
It follows from the preceding estimate and the standard Banach lemma on invertible operators [6,20,21,22] that $J'(u)^{-1}$ exists and satisfies
$$\|J'(u)^{-1} H\| \leq \frac{1}{1 - \omega_0\big(\|u - x^*\|\big)}. \qquad (11)$$
In particular, if $u = x_0$, the estimate (11) holds, and the iterate $y_0^{(1)}$ is well defined. Then, the first substep of the method (2) allows us to write
$$y_0^{(1)} - x^* = x_0 - x^* - J'(x_0)^{-1} J(x_0) = \int_0^1 J'(x_0)^{-1}\Big[J'\big(x^* + \theta(x_0 - x^*)\big) - J'(x_0)\Big]\,d\theta\,(x_0 - x^*).$$
The preceding identity, the condition (B7), (11) (for $u = x_0$), and (3) imply
$$\|y_0^{(1)} - x^*\| \leq \frac{\int_0^1 \omega\big((1-\theta)\|x_0 - x^*\|\big)\,d\theta\,\|x_0 - x^*\|}{1 - \omega_0\big(\|x_0 - x^*\|\big)} \leq h_1\big(\|x_0 - x^*\|\big)\,\|x_0 - x^*\| \leq \|x_0 - x^*\| < R.$$
Hence, the iterate $y_0^{(1)} \in B(x^*, R)$, and the expression (7) holds for $n = 0$. By the condition (B6) and (5), we have in turn
$$\big\|(2H)^{-1}(A_0 - 2H)\big\| \leq \frac{1}{2}\Big[\big\|H^{-1}\big(J'(x_0) - H\big)\big\| + \big\|H^{-1}\big(J'(y_0^{(1)}) - H\big)\big\|\Big] \leq \frac{1}{2}\Big[\omega_0\big(\|x_0 - x^*\|\big) + \omega_0\big(\|y_0^{(1)} - x^*\|\big)\Big] \leq \rho\big(\|x_0 - x^*\|\big) \leq \rho(R) < 1,$$
so $A_0^{-1}$ exists,
$$\|A_0^{-1} H\| \leq \frac{1}{2\big(1 - \rho(\|x_0 - x^*\|)\big)},$$
the iterate $y_0^{(2)}$ exists, and we can write, by the second substep of the method (2),
$$y_0^{(2)} - x^* = x_0 - x^* - J'(x_0)^{-1}J(x_0) + \big(J'(x_0)^{-1} - 2A_0^{-1}\big)J(x_0) = x_0 - x^* - J'(x_0)^{-1}J(x_0) - A_0^{-1}\big(J'(x_0) - J'(y_0^{(1)})\big)\,J'(x_0)^{-1}J(x_0).$$
Using (3), (5), and the preceding estimates, we obtain
$$\|y_0^{(2)} - x^*\| \leq \bigg[\frac{\int_0^1 \omega\big((1-\theta)\|x_0 - x^*\|\big)\,d\theta}{1 - \omega_0\big(\|x_0 - x^*\|\big)} + \frac{b_0\Big(1 + \int_0^1 \omega_0\big(\theta\|x_0 - x^*\|\big)\,d\theta\Big)}{2\big(1 - \omega_0(\|x_0 - x^*\|)\big)\big(1 - \rho(\|x_0 - x^*\|)\big)}\bigg]\,\|x_0 - x^*\| \leq h_2\big(\|x_0 - x^*\|\big)\,\|x_0 - x^*\| \leq \|x_0 - x^*\|,$$
where $b_0 = b\big(\|x_0 - x^*\|\big)$.
Thus, the iterate $y_0^{(2)} \in B(x^*, R)$, and the item (8) holds for $n = 0$. We can write the third substep of the method (2) in the following way:
$$y_0^{(3)} - x^* = \Big[y_0^{(2)} - x^* - J'(y_0^{(2)})^{-1} J(y_0^{(2)})\Big] + \Big[J'(y_0^{(2)})^{-1} - J'(x_0)^{-1}\Big] J(y_0^{(2)}) + (I - B_0)\,J'(x_0)^{-1} J(y_0^{(2)}).$$
Notice that the iterate $y_0^{(3)}$ is well defined, since $J'(x_0)^{-1}$ exists. The identity above is valid, since $J'(y_0^{(2)})^{-1}$ exists by (11) (for $u = y_0^{(2)}$) and by the preceding estimate. We need an upper bound on the norm $\|I - B_0\|$. By the definition of $B_0$, we can write
$$I - B_0 = \frac{1}{2}\Big[-5I + 8\,J'(x_0)^{-1} J'(y_0^{(1)}) - 3\big(J'(x_0)^{-1} J'(y_0^{(1)})\big)^2\Big] = \frac{1}{2}\Big(J'(x_0)^{-1} J'(y_0^{(1)}) - I\Big)\Big[2I - 3\Big(J'(x_0)^{-1} J'(y_0^{(1)}) - I\Big)\Big],$$
leading to
$$\|I - B_0\| \leq \frac{b_0}{2\big(1 - \omega_0(\|x_0 - x^*\|)\big)}\Big(2 + \frac{3\,b_0}{1 - \omega_0(\|x_0 - x^*\|)}\Big),$$
with
$$b_0 = \omega\big(\|y_0^{(1)} - x_0\|\big) \quad \text{or} \quad b_0 = \omega_0\big(\|x_0 - x^*\|\big) + \omega_0\big(\|y_0^{(1)} - x^*\|\big),$$
according to the two versions of the function $b$.
Also notice that
$$J(y_0^{(2)}) = J(y_0^{(2)}) - J(x^*) = \int_0^1 J'\big(x^* + \theta(y_0^{(2)} - x^*)\big)\,d\theta\;(y_0^{(2)} - x^*),$$
so
$$\|H^{-1} J(y_0^{(2)})\| \leq \Big\|H^{-1}\Big(\int_0^1 \Big[J'\big(x^* + \theta(y_0^{(2)} - x^*)\big) - H\Big]\,d\theta + H\Big)(y_0^{(2)} - x^*)\Big\| \leq \Big(1 + \int_0^1 \omega_0\big(\theta\|y_0^{(2)} - x^*\|\big)\,d\theta\Big)\,\|y_0^{(2)} - x^*\|.$$
In view of (3), (6), and the preceding estimates, we obtain in turn that
$$\|y_0^{(3)} - x^*\| \leq \bigg[\frac{\int_0^1 \omega\big((1-\theta)\|y_0^{(2)} - x^*\|\big)\,d\theta}{1 - \omega_0\big(\|y_0^{(2)} - x^*\|\big)} + \frac{b_0^{(2)}\Big(1 + \int_0^1 \omega_0\big(\theta\|y_0^{(2)} - x^*\|\big)\,d\theta\Big)}{\big(1 - \omega_0(\|x_0 - x^*\|)\big)\big(1 - \omega_0(\|y_0^{(2)} - x^*\|)\big)} + \frac{b_0}{2\big(1 - \omega_0(\|x_0 - x^*\|)\big)^2}\Big(2 + \frac{3\,b_0}{1 - \omega_0(\|x_0 - x^*\|)}\Big)\Big(1 + \int_0^1 \omega_0\big(\theta\|y_0^{(2)} - x^*\|\big)\,d\theta\Big)\bigg]\,\|y_0^{(2)} - x^*\| \leq h_3\big(\|x_0 - x^*\|\big)\,\|x_0 - x^*\| \leq \|x_0 - x^*\|,$$
where $b_0^{(2)} = b^{(2)}\big(\|x_0 - x^*\|\big)$.
Therefore, the iterate $y_0^{(3)} \in B(x^*, R)$, and the item (9) holds for $j = 3$ and $n = 0$. Simply replace $y_0^{(2)}, y_0^{(3)}$ by $y_0^{(i)}, y_0^{(i+1)}$, for $i = 3, \ldots, \kappa - 1$, respectively, in the estimates above to prove the validity of (9) for the rest of the iterates (including (10) for $j = \kappa$), as well as $y_0^{(i)}, y_0^{(i+1)} \in B(x^*, R)$. Moreover, if we exchange the iterates $x_0, y_0^{(1)}, \ldots, y_0^{(\kappa)}$ by $x_m, y_m^{(1)}, \ldots, y_m^{(\kappa)}$, respectively, in the previous calculations, we complete the induction for the items (7)–(10), as well as the fact that all iterates of the method (2) belong in the ball $B(x^*, R)$. Then, from (10), we have
$$\|x_{n+1} - x^*\| \leq C\,\|x_n - x^*\| < R,$$
where $C = h_\kappa\big(\|x_0 - x^*\|\big) \in [0, 1)$. So, we conclude that $x_{n+1} \in B(x^*, R)$ and $\lim_{n \to +\infty} x_n = x^*$. □
Remark 1.
We have the following remarks about this study:
(i) 
The usual, but not the most flexible, choice for the linear operator $H$ is either $H = I$ or $H = J'(x^*)$. In the latter case, it is implied that $x^*$ is a simple solution. However, if $H \neq J'(x^*)$, our conditions do not necessarily imply that $x^*$ is simple. Thus, the method (2) can be used to approximate a solution that is not necessarily simple.
(ii) 
In some cases, it is worth adding one more condition to (B7), given for $u_3 \in D_0$ by $\|H^{-1} J'(u_3)\| \leq \omega_1\big(\|u_3 - x^*\|\big)$, where the function $\omega_1$ is as $\omega$.
There is a relationship between the functions $\omega_0$ and $\omega_1$. Notice that, under the condition (B6),
$$\|H^{-1} J'(u_3)\| = \big\|H^{-1}\big(J'(u_3) - H + H\big)\big\| \leq 1 + \big\|H^{-1}\big(J'(u_3) - H\big)\big\| \leq 1 + \omega_0\big(\|u_3 - x^*\|\big).$$
Hence, we can pick $\bar{\omega}_1(t) = 1 + \omega_0(t)$. Consequently, the results of Theorem 1 can be rewritten with a given $\omega_1$ replacing $\bar{\omega}_1$, say, in the case $J(x) = \sin x$ ($x^* = 0$), since then $\omega_1(t) = 1$, and this function is smaller than $\bar{\omega}_1$.
(iii) 
Notice that the function $b$ is defined in two different ways. In practice, we shall be using the smaller of the two versions (functions).
A set is now specified that includes only $x^*$ as a solution.
Proposition 1.
Suppose that the condition (B6) holds in the ball $B(x^*, \lambda)$ for some $\lambda > 0$, and that there exists $\lambda_1 \geq \lambda$ such that
$$\int_0^1 \omega_0(\theta\lambda_1)\,d\theta < 1. \qquad (22)$$
Set $D_2 = D \cap B[x^*, \lambda_1]$. Then, the only solution of the equation $J(x) = 0$ in the set $D_2$ is $x^*$.
Proof. 
Let $\tilde{x} \in D_2$ be a solution of the equation $J(x) = 0$. Define the linear operator $M = \int_0^1 J'\big(x^* + \theta(\tilde{x} - x^*)\big)\,d\theta$. Then, it follows by the condition (B6) and (22), in turn, that
$$\big\|H^{-1}(M - H)\big\| \leq \int_0^1 \omega_0\big(\theta\|\tilde{x} - x^*\|\big)\,d\theta \leq \int_0^1 \omega_0(\theta\lambda_1)\,d\theta < 1.$$
Consequently, $M^{-1}$ exists. Then, from the identity
$$\tilde{x} - x^* = M^{-1}\big(J(\tilde{x}) - J(x^*)\big) = M^{-1}(0) = 0,$$
we conclude that x ˜ = x * . □
Remark 2.
Clearly, if all the conditions of Theorem 1 hold, then we can pick $\tilde{x} = x^*$ and $\lambda_1 = R$.
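For a concrete instance, take the modulus $\omega_0(t) = 12t$, which is derived for Example 1 in Section 4. Then, the condition (22) reads
$$\int_0^1 \omega_0(\theta\lambda_1)\,d\theta = \int_0^1 12\,\theta\lambda_1\,d\theta = 6\lambda_1 < 1,$$
so uniqueness is guaranteed on $D_2 = D \cap B[x^*, \lambda_1]$ for any $\lambda_1 < \frac{1}{6}$.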

3. Convergence Type 2: Semi-Local

The roles of $x^*$, $\omega_0$, and $\omega$ are played by $x_0$, $\nu_0$, and $\nu$, respectively, in this analysis, but the calculations are similar.
Suppose:
(L1)
There exists an FCND $\nu_0 : E \to \mathbb{R}_+$ such that the equation $\nu_0(t) - 1 = 0$ has a PSS. Denote such a solution by $s_3$. Set $E_3 = [0, s_3)$.
(L2)
There exists an FCND $\nu : E_3 \to \mathbb{R}_+$. Define the majorizing sequence $\{\alpha_n^{(i)}\}$ by $\alpha_0^{(0)} = 0$, $\alpha_0^{(1)} = b$ (for some $b \geq 0$) and, for each $n = 0, 1, 2, \ldots$ and $i = 0, 1, 2, \ldots, \kappa$, by
$$\begin{aligned} c_n &= \nu\big(\alpha_n^{(1)} - \alpha_n^{(0)}\big) \quad \text{or} \quad \nu_0\big(\alpha_n^{(0)}\big) + \nu_0\big(\alpha_n^{(1)}\big), \\ q_n &= \frac{1}{2}\Big[\nu_0\big(\alpha_n^{(0)}\big) + \nu_0\big(\alpha_n^{(1)}\big)\Big], \\ \alpha_n^{(2)} &= \alpha_n^{(1)} + \frac{c_n\big(\alpha_n^{(1)} - \alpha_n^{(0)}\big)}{2(1 - q_n)}, \\ d_n &= 1 + \frac{c_n}{2\big(1 - \nu_0(\alpha_n^{(0)})\big)}\Big(2 + \frac{3\,c_n}{1 - \nu_0(\alpha_n^{(0)})}\Big), \\ g_n^{(j)} &= \int_0^1 \nu\Big(\theta\big(\alpha_n^{(j-1)} - \alpha_n^{(0)}\big)\Big)\,d\theta\,\big(\alpha_n^{(j-1)} - \alpha_n^{(0)}\big) + \big(1 + \nu_0(\alpha_n^{(0)})\big)\big(\alpha_n^{(j-1)} - \alpha_n^{(1)}\big), \quad j = 3, 4, \ldots, \kappa, \\ \alpha_n^{(j)} &= \alpha_n^{(j-1)} + \frac{d_n\,g_n^{(j)}}{1 - \nu_0(\alpha_n^{(0)})}, \quad j = 3, 4, \ldots, \kappa - 1, \\ \alpha_{n+1}^{(0)} &= \alpha_n^{(\kappa - 1)} + \frac{d_n\,g_n^{(\kappa)}}{1 - \nu_0(\alpha_n^{(0)})}, \\ \delta_{n+1} &= \int_0^1 \nu\Big(\theta\big(\alpha_{n+1}^{(0)} - \alpha_n^{(0)}\big)\Big)\,d\theta\,\big(\alpha_{n+1}^{(0)} - \alpha_n^{(0)}\big) + \big(1 + \nu_0(\alpha_n^{(0)})\big)\big(\alpha_{n+1}^{(0)} - \alpha_n^{(1)}\big), \\ \alpha_{n+1}^{(1)} &= \alpha_{n+1}^{(0)} + \frac{\delta_{n+1}}{1 - \nu_0\big(\alpha_{n+1}^{(0)}\big)}. \end{aligned} \qquad (23)$$
We shall show in Theorem 2 that the sequence { α n ( i ) } majorizes the sequence { x n } . However, let us first develop a convergence condition for this sequence.
(L3)
There exists $s_4 \in [0, s_3)$ such that, for each $n = 0, 1, 2, \ldots$ and $i = 0, 1, 2, \ldots, \kappa$,
$$\nu_0\big(\alpha_n^{(0)}\big) < 1, \quad q_n < 1, \quad \text{and} \quad \alpha_n^{(i)} < s_4.$$
It follows from these conditions and (23) that the sequence $\{\alpha_n^{(i)}\}$ is non-negative and nondecreasing and, as such, convergent to some $\alpha^* \in [0, s_4]$. The functions $\nu_0$ and $\nu$ relate to the operators in (2).
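The recursion (23) is easy to iterate numerically. The sketch below does so for $\kappa = 3$ under illustrative linear moduli $\nu_0, \nu$ and an assumed value of $b$; all three are placeholders, since in applications they come from the conditions (L4), (L5) and from $\|J'(x_0)^{-1} J(x_0)\|$:

```python
import numpy as np
from scipy.integrate import quad

# Illustrative moduli and b (assumptions for demonstration only).
nu0 = lambda t: 2.0 * t
nu  = lambda t: 2.2 * t
b   = 0.05

def majorizing(kappa=3, steps=25):
    """Iterate the scalar recursion (23); return the limit if the checks pass."""
    a0, a1 = 0.0, b                              # alpha_n^(0), alpha_n^(1)
    for _ in range(steps):
        if nu0(a0) >= 1:
            return None                          # condition (L3) fails
        c = nu(a1 - a0)                          # first version of c_n
        q = 0.5 * (nu0(a0) + nu0(a1))
        if q >= 1:
            return None
        seq = [a0, a1, a1 + c * (a1 - a0) / (2 * (1 - q))]
        d = 1 + c / (2 * (1 - nu0(a0))) * (2 + 3 * c / (1 - nu0(a0)))
        for j in range(3, kappa + 1):            # alpha^(3), ..., alpha^(kappa)
            prev = seq[-1]
            g = (quad(lambda th: nu(th * (prev - a0)), 0, 1)[0] * (prev - a0)
                 + (1 + nu0(a0)) * (prev - a1))
            seq.append(prev + d * g / (1 - nu0(a0)))
        a0_new = seq[-1]                         # alpha_{n+1}^(0)
        if nu0(a0_new) >= 1:
            return None
        delta = (quad(lambda th: nu(th * (a0_new - a0)), 0, 1)[0] * (a0_new - a0)
                 + (1 + nu0(a0)) * (a0_new - a1))
        a0, a1 = a0_new, a0_new + delta / (1 - nu0(a0_new))
    return a1

print(majorizing())   # stabilizes near 0.08 for these moduli, so alpha* exists
```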
(L4)
There exist $x_0 \in D$ and an invertible linear operator $H$ such that, for each $u \in D$,
$$\big\|H^{-1}\big(J'(u) - H\big)\big\| \leq \nu_0\big(\|u - x_0\|\big).$$
Set $D_3 = D \cap B(x_0, s_3)$.
The definition of $s_3$ and the condition (L4) imply $\|H^{-1}(J'(x_0) - H)\| \leq \nu_0(0) < 1$. Thus, $J'(x_0)^{-1}$ exists. Consequently, we can take $b \geq \|J'(x_0)^{-1} J(x_0)\|$.
(L5)
For each $u_1, u_2 \in D_3$,
$$\big\|H^{-1}\big(J'(u_2) - J'(u_1)\big)\big\| \leq \nu\big(\|u_2 - u_1\|\big),$$
and
(L6)
$B[x_0, \alpha^*] \subset D$.
The conditions (L1)–(L6) are used to show the semi-local convergence analysis for the method (2).
Theorem 2.
Suppose that the conditions (L1)–(L6) hold. Then, the sequence $\{x_n\}$ produced by the method (2) exists in $B(x_0, \alpha^*)$, stays in $B(x_0, \alpha^*)$, and is convergent to a solution $x^* \in B[x_0, \alpha^*]$ of the equation $J(x) = 0$.
Proof. 
The sequence $\{\alpha_n^{(i)}\}$ is shown to be majorizing for $\{x_n\}$. Notice that, by the definition of $b$ and (23),
$$\|y_0^{(1)} - x_0\| = \|J'(x_0)^{-1} J(x_0)\| \leq b = \alpha_0^{(1)} - \alpha_0^{(0)}.$$
In order to achieve this, we first show, using induction, the series of estimates as in the local case, but now using the conditions (L1)–(L5):
$$\begin{aligned} y_m^{(2)} - y_m^{(1)} &= \big(J'(x_m)^{-1} - 2A_m^{-1}\big)J(x_m) = A_m^{-1}\big(J'(x_m) - J'(y_m^{(1)})\big)\big(y_m^{(1)} - x_m\big), \\ \|y_m^{(2)} - y_m^{(1)}\| &\leq \frac{c_m\big(\alpha_m^{(1)} - \alpha_m^{(0)}\big)}{2(1 - q_m)} = \alpha_m^{(2)} - \alpha_m^{(1)}, \\ \|y_m^{(2)} - x_0\| &\leq \|y_m^{(2)} - y_m^{(1)}\| + \|y_m^{(1)} - x_0\| \leq \alpha_m^{(2)} - \alpha_m^{(1)} + \alpha_m^{(1)} - \alpha_0^{(0)} = \alpha_m^{(2)} < \alpha^*, \\ B_m &= I - \frac{1}{2}\Big(J'(x_m)^{-1}J'(y_m^{(1)}) - I\Big)\Big[2I - 3\Big(J'(x_m)^{-1}J'(y_m^{(1)}) - I\Big)\Big], \\ \|B_m\| &\leq 1 + \frac{c_m}{2\big(1 - \nu_0(\alpha_m^{(0)})\big)}\Big(2 + \frac{3\,c_m}{1 - \nu_0(\alpha_m^{(0)})}\Big) = d_m, \\ J(y_m^{(2)}) &= J(y_m^{(2)}) - J(x_m) - J'(x_m)\big(y_m^{(2)} - x_m\big) + J'(x_m)\big(y_m^{(2)} - y_m^{(1)}\big), \\ \|H^{-1}J(y_m^{(2)})\| &\leq \int_0^1 \nu\Big(\theta\big(\alpha_m^{(2)} - \alpha_m^{(0)}\big)\Big)\,d\theta\,\big(\alpha_m^{(2)} - \alpha_m^{(0)}\big) + \big(1 + \nu_0(\alpha_m^{(0)})\big)\big(\alpha_m^{(2)} - \alpha_m^{(1)}\big) = g_m^{(3)}. \end{aligned}$$
Similarly, we obtain
$$\begin{aligned} \|H^{-1}J(y_m^{(j-1)})\| &\leq g_m^{(j)}, \\ \|y_m^{(j)} - y_m^{(j-1)}\| &\leq \frac{d_m\,g_m^{(j)}}{1 - \nu_0\big(\alpha_m^{(0)}\big)} = \alpha_m^{(j)} - \alpha_m^{(j-1)}, \quad j = 3, 4, \ldots, \kappa - 1, \\ \|y_m^{(j)} - x_0\| &\leq \|y_m^{(j)} - y_m^{(j-1)}\| + \|y_m^{(j-1)} - x_0\| \leq \alpha_m^{(j)} - \alpha_m^{(j-1)} + \alpha_m^{(j-1)} - \alpha_0^{(0)} = \alpha_m^{(j)} < \alpha^*, \\ \|x_{m+1} - y_m^{(\kappa-1)}\| &= \|y_m^{(\kappa)} - y_m^{(\kappa-1)}\| \leq \alpha_{m+1}^{(0)} - \alpha_m^{(\kappa-1)}, \\ \|x_{m+1} - x_0\| &\leq \|x_{m+1} - y_m^{(\kappa-1)}\| + \|y_m^{(\kappa-1)} - x_0\| \leq \alpha_{m+1}^{(0)} - \alpha_m^{(\kappa-1)} + \alpha_m^{(\kappa-1)} - \alpha_0^{(0)} = \alpha_{m+1}^{(0)} < \alpha^*, \\ J(x_{m+1}) &= J(x_{m+1}) - J(x_m) - J'(x_m)\big(x_{m+1} - x_m\big) + J'(x_m)\big(x_{m+1} - y_m^{(1)}\big), \\ \|H^{-1}J(x_{m+1})\| &\leq \int_0^1 \nu\Big(\theta\big(\alpha_{m+1}^{(0)} - \alpha_m^{(0)}\big)\Big)\,d\theta\,\big(\alpha_{m+1}^{(0)} - \alpha_m^{(0)}\big) + \big(1 + \nu_0(\alpha_m^{(0)})\big)\big(\alpha_{m+1}^{(0)} - \alpha_m^{(1)}\big) = \delta_{m+1}, \\ \|y_{m+1}^{(1)} - x_{m+1}\| &\leq \|J'(x_{m+1})^{-1}H\|\,\|H^{-1}J(x_{m+1})\| \leq \frac{\delta_{m+1}}{1 - \nu_0\big(\alpha_{m+1}^{(0)}\big)} = \alpha_{m+1}^{(1)} - \alpha_{m+1}^{(0)}, \\ \|y_{m+1}^{(1)} - x_0\| &\leq \|y_{m+1}^{(1)} - x_{m+1}\| + \|x_{m+1} - x_0\| \leq \alpha_{m+1}^{(1)} - \alpha_{m+1}^{(0)} + \alpha_{m+1}^{(0)} - \alpha_0^{(0)} = \alpha_{m+1}^{(1)} < \alpha^*. \end{aligned} \qquad (24)$$
Hence, all iterates $x_m$ belong in $B(x_0, \alpha^*)$, and the sequence $\{\alpha_m^{(i)}\}$ majorizes the sequence $\{x_m\}$. Moreover, the sequence $\{\alpha_m^{(i)}\}$ is convergent and hence Cauchy. Consequently, the sequence $\{x_m\}$ is also Cauchy in the Banach space $T_1$ and, as such, convergent to some $x^* \in B[x_0, \alpha^*]$. By letting $m \to +\infty$ in (24) and using the continuity of the operator $J$, we conclude that $J(x^*) = 0$. So, the limit point $x^*$ solves the equation $J(x) = 0$. □
Next, a set is determined that contains only one solution of the equation J ( x ) = 0 .
Proposition 2.
Suppose that there exists a solution $\tilde{x} \in B(x_0, s_5)$ of the equation $J(x) = 0$ for some $s_5 > 0$; that the condition (L4) holds in the ball $B(x_0, s_5)$; and that there exists $s_6 \geq s_5$ such that
$$\int_0^1 \nu_0\big((1-\theta)s_5 + \theta s_6\big)\,d\theta < 1. \qquad (25)$$
Set D 4 = D B [ x 0 , s 6 ] . Then, the only solution of the equation J ( x ) = 0 in the set D 4 is x ˜ .
Proof. 
Suppose that there exists $\bar{x} \in D_4$ solving the equation $J(x) = 0$. Define the linear operator $L$ by
$$L = \int_0^1 J'\big(\tilde{x} + \theta(\bar{x} - \tilde{x})\big)\,d\theta.$$
Using the condition (L4) and (25), we obtain
$$\big\|H^{-1}(L - H)\big\| \leq \int_0^1 \nu_0\big((1-\theta)\|\tilde{x} - x_0\| + \theta\|\bar{x} - x_0\|\big)\,d\theta \leq \int_0^1 \nu_0\big((1-\theta)s_5 + \theta s_6\big)\,d\theta < 1.$$
Therefore, $L^{-1}$ exists. Then, from the identity
$$\tilde{x} - \bar{x} = L^{-1}\big(J(\tilde{x}) - J(\bar{x})\big) = L^{-1}(0) = 0,$$
Consequently, we conclude that x ˜ = x ¯ . □
Remark 3.
The following remarks are based on the semi-local convergence:
  • The choice for $H$ is either $H = I$ or $H = J'(x_0)$. But these choices are not necessarily the most appropriate.
  • The parameter $s_3$ can replace $\alpha^*$ in the condition (L6).
  • Under all the conditions (L1)–(L6), we can take $\tilde{x} = x^*$ and $s_5 = \alpha^*$ in Proposition 2.

4. Numerical Problems

Based on the theoretical results, we performed a computational analysis to demonstrate their practical significance. Five instances were selected for our computational examination. The first two cases demonstrate local convergence (LAC), while the other examples demonstrate semi-local convergence (SLAC). Focusing on local convergence, we present the computational results in Table 1 and Table 2, based on the Hammerstein operator and an academic problem (details can be seen in Examples 1 and 2, respectively). The numerical results of semi-local convergence are given in Table 3, based on the boundary value problem of Example 3. Another well-known applied science problem, the van der Pol problem of Example 4, was chosen, and the numerical results of semi-local convergence are given in Table 4. In addition, Table 5 provides the values of the abscissas $t_j$ and weights $w_j$ of the Gauss–Legendre quadrature formula. Finally, we chose another well-known Hammerstein integral problem for semi-local convergence (details are given in Example 5), and the numerical results are given in Table 6. Furthermore, we also report the computational order of convergence (COC), calculated using the following formulas:
$$\eta = \frac{\ln\dfrac{\|x_{\kappa+1} - x^*\|}{\|x_\kappa - x^*\|}}{\ln\dfrac{\|x_\kappa - x^*\|}{\|x_{\kappa-1} - x^*\|}}, \quad \text{for } \kappa = 1, 2, \ldots,$$
or the approximate computational order of convergence (ACOC) [11,12] by:
$$\eta^* = \frac{\ln\dfrac{\|x_{\kappa+1} - x_\kappa\|}{\|x_\kappa - x_{\kappa-1}\|}}{\ln\dfrac{\|x_\kappa - x_{\kappa-1}\|}{\|x_{\kappa-1} - x_{\kappa-2}\|}}, \quad \text{for } \kappa = 2, 3, \ldots.$$
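In floating point, the ACOC can be estimated directly from stored iterates; a minimal sketch (the function name is ours):

```python
import numpy as np

def acoc(xs):
    """Approximate computational order of convergence from a list of
    iterates xs (numpy arrays), following the ACOC formula above."""
    e = [np.linalg.norm(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
    return [np.log(e[k + 1] / e[k]) / np.log(e[k] / e[k - 1])
            for k in range(1, len(e) - 1)]
```

The last entry of the returned list, computed from the final iterates, is usually the most reliable estimate.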
The programming termination criteria are given below: (i) $\|x_{\kappa+1} - x_\kappa\| < \epsilon$, and (ii) $\|J(x_\kappa)\| < \epsilon$, where $\epsilon = 10^{-400}$. All computations were performed in Mathematica 11 [23] with multi-precision arithmetic. The configuration of the computer used for programming is given below:
  • Device name: HP
  • Installed RAM: 8.00 GB (7.89 GB usable)
  • Processor: Intel(R) Core(TM) i7-4790 CPU @ 3.60 GHz 3.60 GHz
  • System type: 64-bit operating system, x64-based processor
  • Edition: Windows 10 Enterprise
  • Version: 22H2
  • OS Build: 19045.2006

4.1. Examples for LAC

To illustrate the theoretical findings of local convergence, which are provided in Section 2, we select two examples, namely Examples 1 and 2.
Example 1.
In numerous areas of physics and engineering, a Hammerstein-operator-based nonlinear integral equation of the first kind poses a significant mathematical challenge. An analytical solution is almost never available due to the integral and nonlinear elements. To address such complexities, researchers rely on iterative methods and functional analysis. For instance, let $\Omega = B[0, 1]$ and $T_1 = T_2 = C[0, 1]$. Then, we have the following nonlinear integral equation of the first kind, with Hammerstein operator $J$:
$$J(v)(x) = v(x) - 4\int_0^1 x\,\Delta\,v(\Delta)^3\,d\Delta.$$
The derivative of operator J is given below:
$$\big[J'(v)(q)\big](x) = q(x) - 12\int_0^1 x\,\Delta\,v(\Delta)^2\,q(\Delta)\,d\Delta,$$
for $q \in C[0, 1]$. The values of the operator $J$ satisfy the hypotheses (B1)–(B8). Since $x^* = 0$, we have $H = J'(x^*) = I$, which shows that
$$\omega_0(\Lambda) = 12\Lambda \quad \text{and} \quad \omega(\Lambda) = 24\Lambda.$$
Formula (3) is used to find the value of $R$, after the $R_m$ are first determined as the solutions of the corresponding scalar equations. We list the radii for the special case of method (2) in Table 1, based on Example 1; a script reproducing part of the computation is given after the table.
Table 1. Radii of method (2) for Example 1.
j | $s_0$ | $s_1$ | $s$ | $R_1$ | $R_2$ | $R_3$ | $R$
3 | 0.083333 | 0.055556 | 0.055556 | 0.041667 | 0.026726 | 0.017203 | 0.017202
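The first three entries have closed forms ($s_0 = 1/12$, $s_1 = 1/18$, $R_1 = 1/24$), which the following sketch confirms by root bracketing; $R_2$ and $R_3$ follow the same pattern via $h_2$ and $h_3$:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

# Example 1 moduli as derived above.
w0 = lambda t: 12.0 * t
w  = lambda t: 24.0 * t

s0 = brentq(lambda t: w0(t) - 1, 1e-12, 1.0)          # w0(t) = 1  ->  1/12

def h1(t):
    # h1(t) = (int_0^1 w((1 - theta) t) dtheta) / (1 - w0(t))
    num = quad(lambda th: w((1 - th) * t), 0, 1)[0]
    return num / (1 - w0(t))

R1 = brentq(lambda t: h1(t) - 1, 1e-12, 0.999 * s0)   # -> 1/24
rho = lambda t: 0.5 * (w0(t) + w0(h1(t) * t))
s1 = brentq(lambda t: rho(t) - 1, 1e-12, 0.999 * s0)  # -> 1/18
print(s0, s1, R1)   # 0.083333, 0.055556, 0.041667, as in Table 1
```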
Example 2.
It is quite common in the fields of science and engineering to find an exponential and polynomial term combination that forms a 3 × 3 nonlinear system, which is a sophisticated mathematical model. These systems are frequently characterized by algebraic (polynomial) and exponential variable interactions, which pose challenges in analysis and simulation. We employ advanced mathematical techniques and numerical methods to understand and solve these complexities. Therefore, we pick such a system of nonlinear equations, which is given below:
$$J(\chi) = \Big(\chi_1,\; e^{\chi_2} - 1,\; \frac{e-1}{2}\,\chi_3^2 + \chi_3\Big)^{tr},$$
where $\Omega = B[0, 1]$, $T_1 = T_2 = \mathbb{R}^3$, and $\chi = (\chi_1, \chi_2, \chi_3)^{tr}$. It follows by this definition that the derivative is
$$J'(\chi) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & e^{\chi_2} & 0 \\ 0 & 0 & (e - 1)\chi_3 + 1 \end{pmatrix}.$$
Notice also that $x^* = (0, 0, 0)^{tr}$, so $J'(x^*) = I$. Set $H = I$. By plugging the values of $J$ into the conditions (B1)–(B8), we see that
$$\omega_0(\Lambda) = (e - 1)\Lambda \quad \text{and} \quad \omega(\Lambda) = e^{\frac{1}{e-1}}\Lambda.$$
In order to calculate $R$, we must first find the values of the $R_m$ using Formula (3) and the scalar equation solutions. In Table 2, we present the radii of method (2) for Example 2.
Table 2. Radii of method (2) for Example 2.
j | $s_0$ | $s_1$ | $s$ | $R_1$ | $R_2$ | $R_3$ | $R$
3 | 0.58198 | 0.44149 | 0.44149 | 0.38269 | 0.22497 | 0.13803 | 0.13803
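Because the moduli are again linear, $s_0$ and $R_1$ have closed forms; a quick check (a sketch mirroring the computation above):

```python
import numpy as np

# Example 2 moduli: w0(t) = (e - 1) t and w(t) = e^{1/(e-1)} t.
e = np.e
s0 = 1 / (e - 1)                                # root of w0(t) - 1 = 0
R1 = 1 / (np.exp(1 / (e - 1)) / 2 + (e - 1))    # root of h1(t) - 1 = 0
print(s0, R1)   # 0.58198..., 0.38269..., matching Table 2
```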

4.2. Examples for SLAC

We consider three examples, Examples 3–5, in order to demonstrate the theoretical results of semi-local convergence proposed in Section 3. We chose the values $\kappa = 3$ and $\kappa = 4$, respectively. Then, we have
$$\begin{aligned} y_n &= x_n - J'(x_n)^{-1}J(x_n), \\ A_n &= J'(x_n) + J'(y_n), \\ z_n &= x_n - 2A_n^{-1}J(x_n), \\ B_n &= \tfrac{7}{2}I - 4\,J'(x_n)^{-1}J'(y_n) + \tfrac{3}{2}\big(J'(x_n)^{-1}J'(y_n)\big)^2, \\ x_{n+1} &= z_n - B_n J'(x_n)^{-1}J(z_n), \end{aligned} \qquad (27)$$
and
$$\begin{aligned} y_n &= x_n - J'(x_n)^{-1}J(x_n), \\ A_n &= J'(x_n) + J'(y_n), \\ z_n &= x_n - 2A_n^{-1}J(x_n), \\ B_n &= \tfrac{7}{2}I - 4\,J'(x_n)^{-1}J'(y_n) + \tfrac{3}{2}\big(J'(x_n)^{-1}J'(y_n)\big)^2, \\ w_n &= z_n - B_n J'(x_n)^{-1}J(z_n), \\ x_{n+1} &= w_n - B_n J'(x_n)^{-1}J(w_n). \end{aligned} \qquad (28)$$
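In code, (27) and (28) are simply the $\kappa = 3$ and $\kappa = 4$ instances of the multistep_solve sketch from Section 1 (assuming that function is in scope):

```python
def method_27(J, Jp, x0, **kw):
    """kappa = 3 instance, i.e., scheme (27)."""
    return multistep_solve(J, Jp, x0, kappa=3, **kw)

def method_28(J, Jp, x0, **kw):
    """kappa = 4 instance, i.e., scheme (28)."""
    return multistep_solve(J, Jp, x0, kappa=4, **kw)
```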
Example 3.
Boundary value problems (BVPs) [22] hold essential significance in mathematics, physics, and engineering. One seeks the solution of a differential equation on a given domain, with conditions specified at its boundary points. BVPs are important and popular for modeling real-world phenomena like heat transfer, fluid flow, and quantum mechanics, offering invaluable insights into physical systems. Hence, we opted for the following BVP (details can be found in [15]):
$$u'' + \mu^2\,(u')^2 + 1 = 0,$$
with $u(0) = 0$ and $u(1) = 1$. Divide the interval $[0, 1]$ into $\ell$ parts, which provides
$$\gamma_0 = 0 < \gamma_1 < \gamma_2 < \cdots < \gamma_{\ell-1} < \gamma_\ell = 1, \quad \gamma_{\tau+1} = \gamma_\tau + h, \quad h = \frac{1}{\ell}.$$
Next, let us assume that $u_0 = u(\gamma_0) = 0$, $u_1 = u(\gamma_1), \ldots, u_{\ell-1} = u(\gamma_{\ell-1})$, $u_\ell = u(\gamma_\ell) = 1$. We obtain
$$u'_\tau = \frac{u_{\tau+1} - u_{\tau-1}}{2h}, \quad u''_\tau = \frac{u_{\tau-1} - 2u_\tau + u_{\tau+1}}{h^2}, \quad \tau = 1, 2, 3, \ldots, \ell - 1,$$
by applying a discretization approach. Then, we have the following $(\ell - 1) \times (\ell - 1)$ system:
$$u_{\tau-1} - 2u_\tau + u_{\tau+1} + \frac{\mu^2}{4}\big(u_{\tau+1} - u_{\tau-1}\big)^2 + h^2 = 0.$$
For instance, for $\ell = 51$ and $\mu = \frac{1}{2}$, we obtain a nonlinear system of size $50 \times 50$.
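A sketch of the discretized residual and its tridiagonal Jacobian (function and variable names are ours), which can be passed to the multistep_solve sketch from Section 1:

```python
import numpy as np

ell, mu = 51, 0.5
h = 1.0 / ell

def F(u):
    """Residual of the 50x50 discretized BVP; u holds u_1..u_{ell-1},
    with the boundary values u_0 = 0 and u_ell = 1 appended."""
    v = np.concatenate(([0.0], u, [1.0]))
    return (v[:-2] - 2 * v[1:-1] + v[2:]
            + (mu**2 / 4) * (v[2:] - v[:-2])**2 + h**2)

def Fp(u):
    """Analytic tridiagonal Jacobian of F."""
    v = np.concatenate(([0.0], u, [1.0]))
    n = u.size
    Jm = np.zeros((n, n))
    for i in range(n):
        s = (mu**2 / 2) * (v[i + 2] - v[i])
        if i > 0:
            Jm[i, i - 1] = 1.0 - s
        Jm[i, i] = -2.0
        if i < n - 1:
            Jm[i, i + 1] = 1.0 + s
    return Jm

u0 = np.full(ell - 1, 1.003)           # initial guess as in Table 3
# x_star, iters = multistep_solve(F, Fp, u0, kappa=3)   # scheme (27)
```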
In Table 3, we present the COC, CPU timing, number of iterations, residual errors, and error differences between two iterations for Example 3.
Table 3. Computational results of Example 3.
Methods | $x_0$ | $\|F(x_n)\|$ | $\|x_{n+1} - x_n\|$ | n | η | CPU timing
(27) | $(1.003, 1.003, \ldots, 1.003)^T$ (50 components) | 4.8 × 10^{−704} | 4.8 × 10^{−703} | 4 | 5.0638 | 9.93966
(28) | $(1.003, 1.003, \ldots, 1.003)^T$ (50 components) | 1.8 × 10^{−390} | 1.8 × 10^{−388} | 3 | 7.0729 | 9.65756
Method (2) converges to the required estimated zero, which is given as a column vector (not a matrix):
$$x^* = \big(0.03292, 0.06518, 0.09681, 0.1278, 0.1582, 0.1880, 0.2171, 0.2457, 0.2737, 0.3011, 0.3279, 0.3542, 0.3800, 0.4051, 0.4298, 0.4539, 0.4775, 0.5005, 0.5231, 0.5451, 0.5667, 0.5877, 0.6082, 0.6283, 0.6479, 0.6670, 0.6856, 0.7038, 0.7215, 0.7387, 0.7555, 0.7718, 0.7877, 0.8031, 0.8181, 0.8326, 0.8467, 0.8604, 0.8736, 0.8865, 0.8989, 0.9108, 0.9224, 0.9335, 0.9442, 0.9546, 0.9645, 0.9739, 0.9830, 0.9917\big)^{tr}.$$
Example 4.
Consider the van der Pol equation [24], presented as follows:
$$y'' - \mu\,(y^2 - 1)\,y' + y = 0, \quad \mu > 0. \qquad (30)$$
The above expression describes the current flow in a vacuum tube with the boundary conditions y ( 0 ) = 0 and y ( 2 ) = 1 . Additionally, we consider the following partition of the given interval [ 0 , 2 ] :
$$x_0 = 0 < x_1 < x_2 < x_3 < \cdots < x_\theta = 2, \quad \text{where } x_i = x_0 + i\,h, \quad h = \frac{2}{\theta}.$$
Furthermore, we suppose that
$$y_0 = y(x_0) = 0, \quad y_1 = y(x_1), \ldots, y_{\theta-1} = y(x_{\theta-1}), \quad y_\theta = y(x_\theta) = 1.$$
If we discretize the preceding problem (30) using second-order divided differences for the first and second derivatives, we obtain
$$y'_\tau = \frac{y_{\tau+1} - y_{\tau-1}}{2h}, \quad y''_\tau = \frac{y_{\tau-1} - 2y_\tau + y_{\tau+1}}{h^2}, \quad \tau = 1, 2, \ldots, \theta - 1.$$
The result is a $(\theta - 1) \times (\theta - 1)$ system of nonlinear equations, which is defined by
$$2h^2 x_\tau - h\mu\big(x_\tau^2 - 1\big)\big(x_{\tau+1} - x_{\tau-1}\big) + 2\big(x_{\tau-1} + x_{\tau+1} - 2x_\tau\big) = 0, \quad \tau = 1, 2, \ldots, \theta - 1.$$
Let $\mu = \frac{1}{2}$ and, for the initial approximation, set $y_\tau^{(0)} = \log(1 + \tau^2)$, where $\tau$ ranges from 1 to $\theta - 1$. In this scenario, we are dealing with a $110 \times 110$ system of nonlinear equations for $\theta = 111$, and the obtained solution is given as a column vector (not a matrix):
$$x^* = \big(0.01292, 0.02579, 0.03859, 0.05134, 0.06402, 0.07664, 0.08920, 0.1017, 0.1141, 0.1265, 0.1388, 0.1510, 0.1632, 0.1753, 0.1873, 0.1993, 0.2112, 0.2231, 0.2348, 0.2465, 0.2581, 0.2697, 0.2812, 0.2926, 0.3039, 0.3152, 0.3264, 0.3376, 0.3486, 0.3596, 0.3705, 0.3814, 0.3921, 0.4028, 0.4135, 0.4240, 0.4345, 0.4449, 0.4552, 0.4655, 0.4757, 0.4858, 0.4958, 0.5058, 0.5157, 0.5255, 0.5352, 0.5449, 0.5545, 0.5640, 0.5735, 0.5828, 0.5921, 0.6013, 0.6105, 0.6195, 0.6285, 0.6374, 0.6463, 0.6550, 0.6637, 0.6723, 0.6809, 0.6893, 0.6977, 0.7060, 0.7143, 0.7224, 0.7305, 0.7385, 0.7464, 0.7543, 0.7621, 0.7698, 0.7774, 0.7849, 0.7924, 0.7998, 0.8071, 0.8143, 0.8215, 0.8286, 0.8356, 0.8425, 0.8494, 0.8562, 0.8629, 0.8695, 0.8760, 0.8825, 0.8889, 0.8952, 0.9014, 0.9076, 0.9136, 0.9196, 0.9255, 0.9314, 0.9371, 0.9428, 0.9484, 0.9539, 0.9594, 0.9647, 0.9700, 0.9752, 0.9803, 0.9854, 0.9903, 0.9952\big)^{tr}.$$
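A sketch of the corresponding residual and initial guess (names are ours; a Jacobian, analytic or finite-difference, can then be supplied to the multistep_solve sketch from Section 1):

```python
import numpy as np

theta, mu = 111, 0.5
h = 2.0 / theta

def F_vdp(y):
    """Residual of the 110x110 discretized van der Pol system."""
    v = np.concatenate(([0.0], y, [1.0]))    # boundary values y(0)=0, y(2)=1
    inner = v[1:-1]
    return (2 * h**2 * inner
            - h * mu * (inner**2 - 1) * (v[2:] - v[:-2])
            + 2 * (v[:-2] + v[2:] - 2 * inner))

y0 = np.log(1 + np.arange(1, theta)**2)      # initial guess as stated above
```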
In Table 4, we show the computational order of convergence (COC), CPU timing, the number of iterations, residual errors, and the difference in errors between consecutive iterations for Example 4.
Table 4. Computational results of Example 4.
Methods | $x_0$ | $\|F(x_n)\|$ | $\|x_{n+1} - x_n\|$ | n | η | CPU timing
(27) | $\big(\log(1+1^2), \log(1+2^2), \ldots, \log(1+110^2)\big)^T$ | 4.8 × 10^{−305} | 8.2 × 10^{−303} | 5 | 5.2546 | 1052.37
(28) | $\big(\log(1+1^2), \log(1+2^2), \ldots, \log(1+110^2)\big)^T$ | 3.6 × 10^{−1589} | 5.8 × 10^{−1587} | 5 | 7.1059 | 1866.53
Example 5.
We examine one of the most well-known problems in applied science, the Hammerstein integral equation (details can be found in [22], pp. 19–20). Here, our goal is to compare the effectiveness and applicability of our proposed methods with those previously established. The Hammerstein integral equation, given below, serves as a benchmark for this comparative analysis:
$$x(s) = 1 + \frac{1}{5}\int_0^1 G(s, t)\,x(t)^3\,dt,$$
where $x \in C[0, 1]$, $s, t \in [0, 1]$, and the kernel $G$ is
$$G(s, t) = \begin{cases} (1 - s)\,t, & t \leq s, \\ s\,(1 - t), & s \leq t. \end{cases}$$
To convert the aforementioned equation into a finite-dimensional problem, we use the Gauss–Legendre quadrature formula $\int_0^1 g(t)\,dt \approx \sum_{j=1}^{10} w_j\,g(t_j)$, where the abscissas $t_j$ and the weights $w_j$ correspond to the 10-point rule. Denoting the approximations of $x(t_i)$ by $x_i$ ($i = 1, 2, \ldots, 10$), one obtains the system of nonlinear equations
$$5x_i - 5 - \sum_{j=1}^{10} a_{ij}\,x_j^3 = 0, \quad i = 1, 2, \ldots, 10,$$
where
$$a_{ij} = \begin{cases} w_j\,t_j\,(1 - t_i), & j \leq i, \\ w_j\,t_i\,(1 - t_j), & i < j. \end{cases}$$
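A sketch that assembles this system with NumPy's Gauss–Legendre nodes (names are ours; the residual can be fed to the multistep_solve sketch from Section 1):

```python
import numpy as np

# Gauss-Legendre nodes/weights on [0,1], mapped from the standard [-1,1] rule.
z, wz = np.polynomial.legendre.leggauss(10)
t = 0.5 * (z + 1)
w = 0.5 * wz

# Kernel-weighted coefficients a_ij as defined above.
A = np.where(np.arange(10)[None, :] <= np.arange(10)[:, None],
             w * t * (1 - t[:, None]),         # j <= i: w_j t_j (1 - t_i)
             w * t[:, None] * (1 - t))         # i <  j: w_j t_i (1 - t_j)

def F(x):
    """Residual 5 x_i - 5 - sum_j a_ij x_j^3 of the Hammerstein system."""
    return 5 * x - 5 - A @ x**3

def Fp(x):
    """Jacobian: d/dx_j of a_ij x_j^3 is 3 a_ij x_j^2."""
    return 5 * np.eye(10) - 3 * A * x**2

x0 = np.full(10, 1.1)                          # starting vector from Table 6
```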
For the 10-point rule, the abscissas $t_j$ and weights $w_j$ are known and shown in Table 5.
Table 5. The abscissas $t_j$ and weights $w_j$ of the Gauss–Legendre quadrature formula.
j | $t_j$ | $w_j$
1 0.01304673574141413996101799 0.03333567215434406879678440
2 0.06746831665550774463395165 0.07472567457529029657288816
3 0.16029521585048779688283632 0.10954318125799102199776746
4 0.28330230293537640460036703 0.13463335965499817754561346
5 0.42556283050918439455758700 0.14776211235737643508694649
6 0.57443716949081560544241300 0.14776211235737643508694649
7 0.71669769706462359539963297 0.13463335965499817754561346
8 0.83970478414951220311716368 0.10954318125799102199776746
9 0.93253168334449225536604834 0.07472567457529029657288816
10 0.98695326425858586003898201 0.03333567215434406879678440
The convergence of the methods towards the root
$$x^* = (1.001, 1.006, 1.014, 1.021, 1.026, 1.026, 1.021, 1.014, 1.006, 1.0013)^{tr},$$
given as a column vector (not a matrix), is tested in Table 6.
Table 6. Numerical results for Example 5.
Methods | $x_0$ | $\|F(x_n)\|$ | $\|x_{n+1} - x_n\|$ | n | η | CPU timing
(27) | $(1.1, 1.1, \ldots, 1.1)^T$ (10 components) | 3.3 × 10^{−316} | 6.7 × 10^{−317} | 5 | 5.027 | 0.841798
(28) | $(1.1, 1.1, \ldots, 1.1)^T$ (10 components) | 1.5 × 10^{−860} | 3.1 × 10^{−861} | 3 | 7.0047 | 0.452299

5. Conclusions

In conclusion, this study emphasizes the nature of convergence analysis for iterative techniques, particularly in the absence of explicit convergence guarantees. We provided computable radii of convergence, error estimates, results on the location of $x^*$, and information on how to choose $x_0$ for method (2). In addition, we imposed conditions only on the first derivative, which is the one involved in the method, unlike earlier studies, which used high-order derivatives (not appearing in the method) to establish the order of convergence. We also investigated extended continuity assumptions and presented a semi-local analysis, which further advances our knowledge of the convergence properties of iterative approaches. Finally, we demonstrated the applicability of the approach to real-world problems. Our idea can be used to extend the applicability of other methods [1,2,3,4] using inverses, under the same set of conditions on $J'$.

Author Contributions

Conceptualization, R.B., I.K.A. and M.A.; methodology, R.B. and I.K.A.; software, R.B., I.K.A. and M.A.; validation, R.B., I.K.A. and M.A.; formal analysis, R.B. and I.K.A.; investigation, R.B. and I.K.A.; resources, R.B. and I.K.A.; data curation, R.B. and I.K.A.; writing—original draft preparation, R.B. and I.K.A.; writing—review and editing, R.B., I.K.A., H.A. and S.A.; visualization, R.B., I.K.A., S.A. and H.A.; funding acquisition, S.A.; supervision, R.B. and I.K.A. All authors have read and agreed to the published version of the manuscript.

Funding

This study is supported via funding from Prince Sattam bin Abdulaziz University project number (PSAU/2024/R/1445).

Data Availability Statement

Data are contained within the article.

Acknowledgments

The author Sattam Alharbi wishes to thank the Prince Sattam bin Abdulaziz University project number (PSAU/2024/R/1445) for funding support.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

References

  1. Cordero, A.; Leonardo-Sepúlveda, M.A.; Torregrosa, J.R.; Vassileva, M.P. Increasing in three units the order of convergence of iterative methods for solving nonlinear systems. Math. Comput. Simul. 2024, 223, 509–522. [Google Scholar] [CrossRef]
  2. Candelario, G.; Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. Generalized conformable fractional Newton-type method for solving nonlinear systems. Numer. Algorithms 2023, 93, 1171–1208. [Google Scholar] [CrossRef]
  3. Padilla, J.J.; Chicharro, F.; Cordero, A.; Torregrosa, J.R. Parametric family of root-finding iterative methods: Fractals of the basins of attraction. Fractal Fract. 2022, 6, 572. [Google Scholar] [CrossRef]
  4. Capdevila, R.R.; Cordero, A.; Torregrosa, J.R. Isonormal surfaces: A new tool for the multidimensional dynamical analysis of iterative methods for solving nonlinear systems. Math. Methods Appl. Sci. 2022, 45, 3360–3375. [Google Scholar] [CrossRef]
  5. Lofti, T.; Bakhtiari, P.; Cordero, A.; Mahdian, K.; Torregrosa, J.R. Some new efficient multipoint iterative methods for solving nonlinear systems of equations. Int. J. Comput. Math. 2014, 92, 1921–1934. [Google Scholar] [CrossRef]
  6. Behl, R.; Argyros, I.K. On the solution of generalized Banach space valued equations. Mathematics 2022, 10, 132. [Google Scholar] [CrossRef]
  7. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
  8. Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. Increasing the order of convergence of iterative schemes for solving nonlinear systems. J. Comput. Appl. Math. 2013, 252, 86–94. [Google Scholar] [CrossRef]
  9. Ezquerro, J.A.; Hernández, M.A. On Halley-type iterations with free second derivative. J. Comput. Appl. Math. 2004, 170, 455–459. [Google Scholar] [CrossRef]
  10. Frontini, M.; Sormani, E. Third-order methods from quadrature formulae for solving systems of nonlinear equations. Appl. Math. Comput. 2004, 149, 771–782. [Google Scholar] [CrossRef]
  11. Grau-Sánchez, M. Improvements of the efficiency of some three-step iterative-like Newton methods. Numer. Math. 2007, 107, 131–146. [Google Scholar] [CrossRef]
  12. Gutiérrez, J.M.; Hernández, M.A. A family of Chebyshev-Halley type methods in Banach spaces. Bull. Aust. Math. Soc. 1997, 55, 113–130. [Google Scholar] [CrossRef]
  13. Haijun, W. New third-order method for solving systems of nonlinear equations. Numer. Algorithms 2009, 50, 271–282. [Google Scholar] [CrossRef]
  14. Homeier, H.H.H. A modified Newton method with cubic convergence: The multivariate case. J. Comput. Appl. Math. 2004, 169, 161–169. [Google Scholar] [CrossRef]
  15. Jarratt, P. Some fourth order multipoint iterative methods for solving equations. Math. Comput. 1966, 20, 434–437. [Google Scholar] [CrossRef]
  16. Sharma, J.R.; Guha, R.K.; Sharma, R. An efficient fourth order weighted-Newton method for systems of nonlinear equations. Numer. Algorithms 2013, 62, 307–323. [Google Scholar] [CrossRef]
  17. Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
  18. Kapania, R.K. A pseudo-spectral solution of 2-parameter Bratu’s equation. Comput. Mech. 1990, 6, 55–63. [Google Scholar] [CrossRef]
  19. Simpson, R.B. A method for the numerical determination of bifurcation states of nonlinear systems of equations. SIAM J. Numer. Anal. 1975, 12, 439–451. [Google Scholar] [CrossRef]
  20. Amat, S.; Argyros, I.K.; Busquier, S.; Herńandez-Veŕon, M.A. On two high-order families of frozen Newton-type methods. Numer. Linear Algebra Appl. 2017, 25, e2126. [Google Scholar] [CrossRef]
  21. Argyros, I.K.; Magreñán, Á.A. Iterative Methods and Their Dynamics with Applications; CRC Press: New York, NY, USA; Taylor & Francis: Abingdon, UK, 2017. [Google Scholar]
  22. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  23. Wolfram, S. The Mathematica Book, 5th ed.; Wolfram Media, Inc.: Champaign, IL, USA, 2003; ISBN 1579550223. [Google Scholar]
  24. Burden, R.L.; Faires, J.D. Numerical Analysis; PWS Publishing Company: Boston, MA, USA, 2001. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
