Article

Accelerating the Speed of Convergence for High-Order Methods to Solve Equations

1 Mathematical Modelling and Applied Computation Research Group (MMAC), Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
2 Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
3 Department of Mathematics, College of Science and Humanities in Al-Kharj, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(17), 2785; https://doi.org/10.3390/math12172785 (registering DOI)
Submission received: 23 July 2024 / Revised: 4 September 2024 / Accepted: 5 September 2024 / Published: 9 September 2024
(This article belongs to the Special Issue New Trends and Developments in Numerical Analysis: 2nd Edition)

Abstract:
This article introduces a multistep method for developing sequences that solve Banach space-valued equations. It provides error estimates, a radius of convergence, and uniqueness results. Our approach improves the applicability of the recommended method and addresses challenges in applied science. The theoretical advancements are supported by comprehensive computational results, which demonstrate the practical applicability and robustness of the method. We ensure more reliable and precise solutions to Banach space-valued equations by providing computable error estimates and a clear radius of convergence for the considered method. We conclude that our work significantly improves the practical utility of multistep methods, offering a rigorous and computable approach to solving complex equations in Banach spaces, supported by strong theoretical and computational results.
MSC:
65H10; 65Y20; 65G99; 41A58

1. Introduction

The modeling of complex systems in science, engineering, and nature often involves converting them into either a system of nonlinear equations or a scalar nonlinear equation, both of which are essential to mathematics [1,2,3,4,5,6,7,8,9,10]. Solutions to such nonlinear problems help us in forecasting weather, fluid dynamics, population modeling, and financial markets. Their application to the study of nonlinear real-world processes improves problem solving across a range of disciplines. Therefore, we analyze the following nonlinear equation by approximating a solution x*:
Θ ( x ) = 0
where Θ : E ⊆ E_1 → E_2 stands for a differentiable operator in the Fréchet sense, and E_1 and E_2 denote Banach spaces. Analytical solutions to expressions like (1) are typically unattainable. Thus, the only options available to us are iterative approaches. For instance, one of the most popular iterative methods for obtaining an approximate solution x* is the Newton–Raphson method.
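As a concrete illustration, the Newton–Raphson iteration x_{n+1} = x_n − Θ′(x_n)⁻¹ Θ(x_n) can be sketched for a scalar equation; the test function and starting point below are illustrative choices, not taken from the paper:

```python
import math

def newton(f, df, x0, tol=1e-12, itmax=50):
    """Newton-Raphson: repeatedly subtract f(x)/f'(x) until |f(x)| < tol."""
    x = x0
    for _ in range(itmax):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / df(x)
    return x

# approximate the positive root of f(t) = t^2 - 2
root = newton(lambda t: t*t - 2.0, lambda t: 2.0*t, 1.0)
```

In the Banach-space setting of the paper, the scalar division by f′(x) is replaced by the action of the inverse Fréchet derivative Θ′(x_n)⁻¹.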
Researchers approach the required solution with the help of an iterative scheme by updating an initial guess. Furthermore, they investigate the stability, basin of attraction, extension or modification, and convergence properties of these algorithms under various conditions to provide an accurate and efficient estimation solution.
We consider the following iterative scheme, which is defined for k = 3, 4, ..., x_0 ∈ E, and each n = 0, 1, 2, 3, ... by
y_n = x_n − Θ′(x_n)⁻¹ Θ(x_n),
z_n = T(x_n, y_n),
T_1(x_n, y_n) = (1/2) [ Θ′(y_n)⁻¹ Θ′(x_n) Θ′(y_n)⁻¹ + Θ′(x_n)⁻¹ ],
z_n^(1) = z_n − T_1 Θ(z_n),
⋮
z_n^(k−1) = z_n^(k−2) − T_1 Θ(z_n^(k−2)),
x_{n+1} = z_n^(k) = z_n^(k−1) − T_1 Θ(z_n^(k−1)),
where T is an iteration function of convergence order h. The convergence order of scheme (2) was proven to be h + 3 in [11] by adopting Taylor series and Θ^(4), where Θ^(4) denotes the fourth derivative of the operator Θ.
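For a scalar equation, the structure of scheme (2) can be sketched as below. The second-substep map T used here is the averaged-inverse choice introduced later in Section 4, and the scalar expression T1 = (1/2)[f′(x)/f′(y)² + 1/f′(x)] mirrors the operator T_1 above; treat T, the test equation, and the starting point as illustrative assumptions:

```python
def multistep(f, df, x0, k=3, iters=10, tol=1e-12):
    """Scheme (2), scalar case: Newton predictor, substep z = T(x, y),
    then k corrector substeps reusing the frozen factor T1."""
    x = x0
    for _ in range(iters):
        fx = f(x)
        if abs(fx) < tol:
            break
        y = x - fx / df(x)                          # Newton predictor
        z = x - 0.5 * (1.0/df(y) + 1.0/df(x)) * fx  # T(x, y), Section 4 choice
        T1 = 0.5 * (df(x) / df(y)**2 + 1.0/df(x))   # frozen operator T1
        for _ in range(k):                          # z^(1), ..., z^(k)
            z = z - T1 * f(z)
        x = z                                       # x_{n+1} = z^(k)
    return x

# approximate the real cube root of 2
cube_root = multistep(lambda t: t**3 - 2.0, lambda t: 3.0*t*t, 1.5)
```

The key computational saving is that T1 is evaluated once per outer step and reused in all k corrector substeps.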
The use of Taylor series expansion is significant in proving the convergence order of iterative methods in finite-dimensional Euclidean space. Depending on the chosen method, the standard proofs differ slightly. Although proving their convergence order is not an easy task, the main problem arises when higher-order derivatives of the involved functions are used, even though they do not appear in the structure of the chosen iterative method. Sometimes, these methods may converge, and their conclusions are typically limited and share common issues that restrict their applicability.
The following list of restrictions serves as the inspiration for this study:
(P1)
The local convergence analysis usually requires high-order derivatives, inverses of derivatives, or divided differences that do not appear in the methods themselves. As demonstrated in the local analysis of convergence (LAC) in [11], establishing the convergence order necessitates derivatives up to the sixth order, which are absent from the technique. These restrictions limit the use of such results even in the scenario where E_1 = E_2 = R^m. An inspiring and basic illustration is described by the function Θ on E_1 = E_2 = R, where E = [−1/π, 2/π], defined as
Θ(t) = t⁶ + t⁴ cos(1/t) + (3/2) t² for t ≠ 0, and Θ(0) = 0.
Next, it is determined that the first three derivatives are
Θ ( t ) = t 6 t 4 + 3 + 4 t 2 cos 1 t + t sin 1 t , Θ ( t ) = 6 5 t 4 + t sin 1 t + 3 + 12 t 2 1 cos 1 t , and Θ ( t ) = 120 t 5 + 18 t 2 1 sin 1 t + 6 4 t 2 1 t cos 1 t t 2 .
On E, it is found that the third derivative Θ‴ is unbounded as t approaches x* = 0 ∈ E, although Θ(0) = 0. Consequently, the local convergence findings in [11] cannot ensure the convergence of technique (2). However, the iterative scheme (2) converges to x* if, for example, x_0 = 0.1 ∈ E. This observation suggests that it is possible to weaken these conditions.
(P2)
There is no prior knowledge about the integer j, such that x n x * < ϵ ( ϵ > 0 ) for each n j .
(P3)
There is no set containing only x * as a solution of (1).
(P4)
The results hold only on R m .
(P5)
The more important semi-local results are not studied in [11].
We have to face the problems (P1)–(P5) observed in the earlier study [11]. This is our main motivation behind this study, and these problems are addressed by our technique. The convergence conditions in [11] and in this paper are sufficient but not necessary. This limitation will be addressed in our future work, where necessary conditions are included. Similar work can be found in [2,8,10].
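The point made in (P1) can be checked numerically: Newton's method, which uses only the first derivative, still converges to x* = 0 for the function Θ of (P1), even though Θ‴ blows up near 0. The sketch below uses the (3/2)t² reading of Θ that is consistent with the displayed derivatives; the iteration count is illustrative:

```python
import math

def theta(t):
    # Θ(t) = t^6 + t^4 cos(1/t) + (3/2) t^2 for t != 0, Θ(0) = 0
    return t**6 + t**4 * math.cos(1.0/t) + 1.5*t*t if t != 0.0 else 0.0

def dtheta(t):
    # first derivative for t != 0
    return 6.0*t**5 + 4.0*t**3*math.cos(1.0/t) + t*t*math.sin(1.0/t) + 3.0*t

x = 0.1                              # x0 = 0.1, as in the text
for _ in range(60):
    x -= theta(x) / dtheta(x)        # Newton steps converge toward 0

def d3theta(t):
    # third derivative; the sin(1/t)/t^2 term is unbounded as t -> 0
    return (120.0*t**5 + (18.0*t*t - 1.0)*math.sin(1.0/t)
            + 6.0*(4.0*t*t - 1.0)*t*math.cos(1.0/t)) / (t*t)

t_far  = 1.0 / (20.5 * math.pi)      # points where sin(1/t) = 1
t_near = 1.0 / (200.5 * math.pi)     # much closer to 0
```

Evaluating d3theta at t_near versus t_far shows |Θ‴| growing roughly like 1/t², which is why sixth-derivative-based convergence proofs cannot apply here.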
It is worth noting that although the technique is demonstrated using (2), it can also be used analogously to extend the applicability of other methods [2,3,4,5,6,7,8,9,10,12,13,14,15,16,17,18,19,20]. In particular, this can also be carried out with the Newmark, FEM, and Crank–Nicolson methods, which are used to solve complex problems [21,22].
The rest of this article is structured as follows: Local and semi-local analysis of the method is outlined in Section 2 and Section 3, respectively. The three Section 4, Section 5 and Section 6 that follow include special cases. Section 7 contains numerical examples, and Section 8 encompasses the concluding remarks.

2. Local Analysis of Convergence

Some scalar functions are needed in this analysis. Let us consider the interval M = [ 0 , + ) .
The abbreviation (CONDF) stands for a continuous as well as non-decreasing scalar function, and (SMPS) denotes the smallest positive solution.
Suppose the following:
(H1)
There exists CONDF N 0 : M M , such that N 0 ( t ) 1 = 0 admits SMPS, which is denoted by s 0 . Define the interval M 0 = [ 0 , s 0 ) .
(H2)
There exists CONDF N : M 0 M , such that for Ω 1 : M 0 M , which is denoted by
Ω_1(t) = [ ∫_0^1 N((1 − τ) t) dτ ] / [ 1 − N_0(t) ],
Equation Ω 1 ( t ) 1 = 0 admits SMPS in the interval ( 0 , s 0 ) , which is denoted by s 1 .
Let s 1 ( 0 , s 0 ) and define the interval M 1 = [ 0 , s 1 ] .
(H3)
There exists CONDF Ω 2 : M 1 M , such that equation Ω 2 ( t ) 1 = 0 admits SMPS in the interval ( 0 , s 1 ) , which is denoted by s 2 .
Define the functions N ¯ ( t ) : M 1 M , p : M 1 M , j = 3 , 4 , 5 , , k + 2 and Ω j : M 1 M by
N ¯ ( t ) = N ( 2 t ) , or 2 N 0 ( t ) ,
p(t) = 3 N̄(t) / [ 2 (1 − N_0(t))² ]
and
Ω_j(t) = { [ ∫_0^1 N((1 − τ) Ω_{j−1}(t) t) dτ ] / [ 1 − N_0(Ω_{j−1}(t) t) ] + p(t) [ 1 + ∫_0^1 N_0(τ Ω_{j−1}(t) t) dτ ] } Ω_{j−1}(t).
(H4)
Equation Ω j ( t ) 1 = 0 admits SMPS in the interval ( 0 , s 1 ) , which is denoted by s j .
Define that constant as
s = min { s i } , i = 1 , 2 , 3 , . . . , k + 2 .
In Theorem 1, this constant is proven to be a radius of convergence for the method (2).
Define the interval M 2 = [ 0 , s ) . Then, with formula (3), we understand that for each t M 2 we get
0 N 0 ( t ) < 1 ,
0 p ( t ) ,
and
0 Ω i ( t ) < 1 , i = 1 , 2 , . . . , k + 2 .
It is worth noting that, according to condition (H1), the function N_0 : M → M is CONDF and s_0 is the SMPS of the equation N_0(t) − 1 = 0. By the definition of s_0 and the fact that N_0 is non-decreasing on M = [0, +∞), every number t ∈ [0, s_0) satisfies
0 ≤ N_0(t) < N_0(s_0) = 1.
If, in particular, N_0(t) = 0, then 1 − N_0(t) = 1 ≠ 0, and the function Ω_1 is well defined. Hence, the convergence is assured, provided that the rest of the conditions (H3)–(H7) are satisfied. The scalar functions N_0 and N are connected to the operators appearing in method (2) as follows.
(H5)
There exists L Δ ( E 1 , E 2 ) and a solution x * E for equation Θ ( x ) = 0 , such that L 1 Δ ( E 2 , E 1 ) , and for each u E , we get
∥ L⁻¹ (Θ′(u) − L) ∥ ≤ N_0(∥ u − x* ∥).
Define the domain E 0 = E B ( x * , s 0 ) .
(H6)
For each u_0 ∈ E_0, we get
∥ L⁻¹ (Θ′(u_0) − Θ′(u)) ∥ ≤ N(∥ u_0 − x* ∥),
for u = u_0 − Θ′(u_0)⁻¹ Θ(u_0), and
∥ T(u_0, u) − x* ∥ ≤ Ω_2(∥ u_0 − x* ∥) ∥ u_0 − x* ∥.
(H7)
B ( x * , s ) E .
Remark 1.
(i) 
Selections for L can be either L = I or L = Θ′(x*). Note that, according to condition (H5), if L = Θ′(x*), then the solution x* is simple. If L ≠ Θ′(x*), condition (H5) does not necessarily imply that x* is a simple solution. Hence, method (2) can be used to find a solution of multiplicity greater than one. Consequently, the popular choice L = Θ′(x*) is not necessarily the most appropriate. The choice L = Θ′(x̄) has also been used, where x̄ ∈ E is an auxiliary point.
(ii) 
Conditions (H1)–(H7) are standard in the study of the convergence of iterative methods if L = Θ′(x*) and the functions N_0 and N are constant functions [3,9,10,19]. In this case, conditions (H5) and (H6) reduce to the usual center-Lipschitz and Lipschitz conditions. If one uses generalized continuity to replace the Lipschitz conditions, the results are more general, since they include Hölder and other continuity conditions (see the first two examples in Section 7).
(iii) 
If one of the two versions of function N ¯ is smaller than the other, then we use the smaller one in our calculations. However, if they cross, say N ( 2 t ) 2 N 0 ( t ) for t [ 0 , ρ ¯ 0 ] and 2 N 0 ( t ) N ( 2 t ) for t [ ρ ¯ 0 , ρ 0 ] , then we choose
N̄(t) = N(2t) for t ∈ [0, ρ̄_0], and N̄(t) = 2 N_0(t) for t ∈ [ρ̄_0, ρ_0].
Next, the main local analysis of convergence for the method (2) uses conditions ( H 1 ) ( H 7 ) . Define the domain E 3 = B ( x * , s ) { x * } .
Theorem 1.
Suppose that conditions ( H 1 ) ( H 7 ) hold and select the starting point x 0 E 3 . Then, the following assertions hold for each n = 0 , 1 , 2 , . . . :
{x_n} ⊂ B(x*, s),
∥ y_n − x* ∥ ≤ Ω_1(∥ x_n − x* ∥) ∥ x_n − x* ∥ ≤ ∥ x_n − x* ∥ < s,
∥ z_n^(0) − x* ∥ ≤ Ω_2(∥ x_n − x* ∥) ∥ x_n − x* ∥ ≤ ∥ x_n − x* ∥,
∥ z_n^(j) − x* ∥ ≤ Ω_j(∥ x_n − x* ∥) ∥ x_n − x* ∥ ≤ ∥ x_n − x* ∥,
for j = 3 , 4 , . . . , k + 2 and lim n + x n = x * , where the constant s is given by Formula (3), and functions Ω i , i = 1 , 2 , . . . , k + 2 are as defined previously.
Proof. 
Assertions (7)–(10) are proven by induction. By hypothesis, the starting point x_0 ∈ E_3 ⊂ E. So, assertion (7) holds, provided that n = 0. Select u ∈ E. It follows by condition (H5) that ∥ L⁻¹ (Θ′(u) − L) ∥ ≤ N_0(∥ u − x* ∥) ≤ N_0(s) < 1.
This estimate together with the standard perturbation Lemma due to Banach on linear and invertible operators [1] implies that Θ ( u ) 1 Δ ( E 2 , E 1 ) , and
∥ Θ′(u)⁻¹ L ∥ ≤ 1 / (1 − N_0(∥ u − x* ∥)).
Specialize u = x 0 . Then, by (11) and the first substep of method (2), iterate y 0 is well defined, and
y_0 − x* = x_0 − x* − Θ′(x_0)⁻¹ Θ(x_0) = ∫_0^1 Θ′(x_0)⁻¹ [ Θ′(x* + τ(x_0 − x*)) − Θ′(x_0) ] (x_0 − x*) dτ.
Using (3), (6) (for i = 1 ), ( H 6 ) , (11) (for u = x 0 ) , and (12), we have
∥ y_0 − x* ∥ ≤ [ ∫_0^1 N((1 − τ) ∥ x_0 − x* ∥) dτ ∥ x_0 − x* ∥ ] / [ 1 − N_0(∥ x_0 − x* ∥) ] ≤ Ω_1(∥ x_0 − x* ∥) ∥ x_0 − x* ∥ ≤ ∥ x_0 − x* ∥ < s.
Thus, iterate y 0 belongs in ball B ( x * , s ) , and assertion (8) holds, provided that n = 0 . Notice that iterate z 0 ( 0 ) exists by the second substep of method (2), and with ( H 6 ) (second condition for u 0 = x 0 , u = y ), we have
∥ z_0^(0) − x* ∥ = ∥ T(x_0, y_0) − x* ∥ ≤ Ω_2(∥ x_0 − x* ∥) ∥ x_0 − x* ∥ ≤ ∥ x_0 − x* ∥,
where we also used (6) (for i = 2 ). Thus, iterate z 0 ( 0 ) belongs in ball B ( x * , s ) , and assertion (9) holds, provided that n = 0 . Estimate (11) holds if u = y 0 in (11), since iterate y 0 belongs in ball B ( x * , s ) .
Then, linear operator T 1 ( x 0 , y 0 ) is well defined; moreover, from the estimate, we can get the following:
T_1 − Θ′(z_0)⁻¹ = (1/2) [ Θ′(y_0)⁻¹ Θ′(x_0) Θ′(y_0)⁻¹ + Θ′(x_0)⁻¹ ] − Θ′(z_0)⁻¹
= (1/2) [ Θ′(y_0)⁻¹ (Θ′(x_0) − Θ′(y_0)) Θ′(y_0)⁻¹ + Θ′(y_0)⁻¹ (Θ′(z_0) − Θ′(y_0)) Θ′(z_0)⁻¹ + Θ′(x_0)⁻¹ (Θ′(z_0) − Θ′(x_0)) Θ′(z_0)⁻¹ ].
Thus, (3), (5), ( H 6 ) (first condition), (11) (for u = x 0 , y 0 , z 0 ), and (13)–(15) give
∥ (T_1 − Θ′(z_0)⁻¹) L ∥ ≤ 3 N̄(∥ x_0 − x* ∥) / [ 2 (1 − N_0(∥ x_0 − x* ∥))² ] = p(∥ x_0 − x* ∥) = p_0.
Clearly, iterations z 0 ( j ) , j = 3 , 4 , . . . , k + 2 are well defined, and we can write with the jth substep of method (2) that
z_0^(j) − x* = z_0^(j−1) − x* − Θ′(z_0^(j−1))⁻¹ Θ(z_0^(j−1)) − (T_1 − Θ′(z_0^(j−1))⁻¹) Θ(z_0^(j−1)).
However, with ( H 5 ) , we can write
Θ(z_0^(j−1)) = Θ(z_0^(j−1)) − Θ(x*) = ∫_0^1 Θ′(x* + τ(z_0^(j−1) − x*)) dτ (z_0^(j−1) − x*),
So, with ( H 5 ) , we can get
∥ L⁻¹ Θ(z_0^(j−1)) ∥ ≤ ∫_0^1 ∥ L⁻¹ [ Θ′(x* + τ(z_0^(j−1) − x*)) − L + L ] ∥ dτ ∥ z_0^(j−1) − x* ∥ ≤ [ 1 + ∫_0^1 N_0(τ ∥ z_0^(j−1) − x* ∥) dτ ] ∥ z_0^(j−1) − x* ∥.
Then, (3), (6) (for i = 3 , 4 , . . , k + 2 ), and (16)–(18) get
∥ z_0^(j) − x* ∥ ≤ ∥ z_0^(j−1) − x* − Θ′(z_0^(j−1))⁻¹ Θ(z_0^(j−1)) ∥ + ∥ (T_1 − Θ′(z_0^(j−1))⁻¹) L ∥ ∥ L⁻¹ Θ(z_0^(j−1)) ∥
≤ { [ ∫_0^1 N((1 − τ) ∥ z_0^(j−1) − x* ∥) dτ ] / [ 1 − N_0(∥ z_0^(j−1) − x* ∥) ] + p_0 [ 1 + ∫_0^1 N_0(τ ∥ z_0^(j−1) − x* ∥) dτ ] } ∥ z_0^(j−1) − x* ∥
≤ Ω_j(∥ x_0 − x* ∥) ∥ x_0 − x* ∥ ≤ ∥ x_0 − x* ∥.
Hence, iterates z ( j ) belong in ball B ( x * , s ) , and assertion (10) holds, provided that n = 0 . Notice that for j = k + 2 , (19) and the definition of x 1 imply
∥ x_1 − x* ∥ ≤ Ω_{k+2}(∥ x_0 − x* ∥) ∥ x_0 − x* ∥ = c ∥ x_0 − x* ∥,
where c = Ω_{k+2}(∥ x_0 − x* ∥) ∈ [0, 1). The preceding calculations can be repeated if the iterates x_0, y_0, z_0^(0), z_0^(j), x_1 are simply replaced by x_m, y_m, z_m^(0), z_m^(j), x_{m+1}, respectively.
Thus, the inductions for assertions (7)–(10) are completed. Then, from the estimation
∥ x_{m+1} − x* ∥ ≤ c ∥ x_m − x* ∥ ≤ ⋯ ≤ c^{m+1} ∥ x_0 − x* ∥ < s,
we conclude that iterate x m + 1 belongs in ball B ( x * , s ) , and lim m + x m = x * . □
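The Banach perturbation lemma invoked in the proof states that if ∥I − A∥ < 1, then A is invertible with ∥A⁻¹∥ ≤ 1/(1 − ∥I − A∥). A small numerical check with illustrative 2×2 data and the max-row-sum norm:

```python
def inf_norm(M):
    # infinity norm of a 2x2 matrix: maximum absolute row sum
    return max(abs(row[0]) + abs(row[1]) for row in M)

E = [[0.10, 0.05], [0.02, 0.10]]        # perturbation with ||E||_inf = 0.15 < 1
A = [[1.0 - E[0][0], -E[0][1]],
     [-E[1][0], 1.0 - E[1][1]]]         # A = I - E
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
A_inv = [[ A[1][1]/det, -A[0][1]/det],
         [-A[1][0]/det,  A[0][0]/det]]  # explicit 2x2 inverse
bound = 1.0 / (1.0 - inf_norm(E))       # Banach lemma bound on ||A^{-1}||
```

In the proof, the role of E is played by L⁻¹(Θ′(u) − L), whose norm is kept below 1 by the condition N_0(∥u − x*∥) ≤ N_0(s) < 1.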
Next, a set is determined that contains only x * as a solution of equation Θ ( x ) = 0 .
Proposition 1.
Suppose the following:
Condition ( H 5 ) holds in ball B ( x * , ρ 2 ) for some ρ 2 0 , and there exists ρ 3 ρ 2 , such that
0 1 N 0 ( τ ρ 3 ) d τ < 1 .
Define the set E_4 = E ∩ B[x*, ρ_3]. Then, the element x* is the only solution of the equation Θ(x) = 0 in the set E_4.
Proof. 
Suppose that there exists a solution x̃ ∈ E_4 of the equation Θ(x) = 0. Consider the linear operator L_0 = ∫_0^1 Θ′(x* + τ(x̃ − x*)) dτ. Then, via condition (H5) and the above inequality, we obtain
∥ L⁻¹ (L_0 − L) ∥ ≤ ∫_0^1 N_0(τ ∥ x̃ − x* ∥) dτ ≤ ∫_0^1 N_0(τ ρ_3) dτ < 1,
which implies that the operator L_0⁻¹ ∈ Δ(E_2, E_1). Finally, the identity
x̃ − x* = L_0⁻¹ (Θ(x̃) − Θ(x*)) = L_0⁻¹(0) = 0
implies that x̃ = x*. □
Remark 2.
Clearly, we can select ρ 2 = s in Proposition 1, provided that all the conditions of Theorem 1 hold (i.e., conditions ( H 1 ) ( H 7 ) ).

3. Semi-Local Analysis of Convergence

The estimations are similar to the ones in Section 2. However, the items x*, N_0, and N are replaced by x_0, Φ_0, and Φ, respectively.
Suppose the following:
(B1)
There exists CONDF Φ 0 : M M , such that equation Φ 0 ( t ) 1 = 0 admits SMPS, which is defined by ρ 4 .
Define set M 3 = [ 0 , ρ 4 ) .
(B2)
There exists CONDF Φ : M_3 → M and some α_0^(1) ≥ 0.
Define the sequences {α_n^(i)} for α_0^(0) = 0 and each i = 0, 1, 2, ..., k + 2 and n = 0, 1, 2, ... by
Φ̄_n = Φ(α_n^(1) − α_n^(0)) or Φ_0(α_n^(0)) + Φ_0(α_n^(1)),
q_n = (1/2) [ Φ̄_n / ( (1 − Φ_0(α_n^(0))) (1 − Φ_0(α_n^(1)))² ) + 2 (1 + Φ̄_n / (1 − Φ_0(α_n^(1)))) / (1 − Φ_0(α_n^(0))) ],
λ_n^(j) = ∫_0^1 Φ((1 − τ)(α_n^(j) − α_n^(0))) dτ (α_n^(j) − α_n^(0)) + (1 + Φ_0(α_n^(0))) (α_n^(j) − α_n^(1)), j = 2, 3, 4, ..., k + 2,
α_n^(j+1) = α_n^(j) + q_n λ_n^(j),
δ_{n+1} = ∫_0^1 Φ((1 − τ)(α_{n+1}^(0) − α_n^(0))) dτ (α_{n+1}^(0) − α_n^(0)) + (1 + Φ_0(α_n^(0))) (α_{n+1}^(0) − α_n^(1)),
and, with α_{n+1}^(0) = α_n^(k+2),
α_{n+1}^(1) = α_{n+1}^(0) + δ_{n+1} / (1 − Φ_0(α_{n+1}^(0))).
The sequence {α_n^(i)} is proven to be majorizing for method (2) in Theorem 2. However, let us first develop a convergence condition for it.
(B3)
There exists ρ 5 [ 0 , ρ 4 ) , such that for each n = 0 , 1 , 2 , . . . , and i = 0 , 1 , 2 , . . . , k + 2   Φ 0 ( α n ( i ) ) < 1 , and α n ( i ) < ρ 4 .
It follows by this condition and (24) that sequence { α n ( i ) } is nondecreasing, bounded from above by ρ 4 , and is such that it is convergent to some ρ [ 0 , ρ 4 ] .
(B4)
There exists x 0 E and L Δ ( E 1 , E 2 ) , such that L 1 Δ ( E 2 , E 1 ) and for each u E , we have
∥ L⁻¹ (Θ′(u) − L) ∥ ≤ Φ_0(∥ u − x_0 ∥).
Notice that if u = x 0 , then
∥ L⁻¹ (Θ′(x_0) − L) ∥ ≤ Φ_0(0) < 1.
It follows that
Θ′(x_0)⁻¹ ∈ Δ(E_2, E_1),
and, consequently, the iterate y_0 is well defined by the first substep of method (2). Hence, we can choose α_0^(1) ≥ ∥ Θ′(x_0)⁻¹ Θ(x_0) ∥.
(B5)
Define set E 4 = E B ( x 0 , s 4 ) . For each u 0 E 4 ,
∥ T(x_n, y_n) − y_n ∥ ≤ α_n^(2) − α_n^(1),
provided that iterates { x n } exist.
(B6)
B ( x 0 , ρ ) E .
In Section 4, the operator is specialized, and the iterates are shown to exist.
Remark 3.
These are similar to the ones in Remark 1, with L = Θ′(x_0) replacing L = Θ′(x*).
The result corresponding to Theorem 1 for the semi-local analysis is
Theorem 2.
Suppose that conditions (B1)–(B6) hold. Then, the sequence {x_k} stays in B(x_0, ρ) and is convergent to a solution x* ∈ B[x_0, ρ] of the equation Θ(x) = 0.
Proof. 
As already noted above, similar calculations are used. So, we have
∥ z_k − y_k ∥ = ∥ T(x_k, y_k) − y_k ∥ ≤ α_k^(2) − α_k^(1),
∥ z_k − x_0 ∥ ≤ ∥ z_k − y_k ∥ + ∥ y_k − x_0 ∥ ≤ α_k^(2) − α_k^(1) + α_k^(1) − α_0^(0) = α_k^(2) < ρ,
∥ y_0 − x_0 ∥ = ∥ Θ′(x_0)⁻¹ Θ(x_0) ∥ ≤ α_0^(1) = α_0^(1) − α_0^(0) < ρ,
Θ(z_k) = Θ(z_k) − Θ(x_k) − Θ′(x_k)(y_k − x_k),
∥ L⁻¹ Θ(z_k) ∥ ≤ ∫_0^1 Φ((1 − τ)(α_k^(2) − α_k^(0))) dτ (α_k^(2) − α_k^(0)) + (1 + Φ_0(α_k^(0))) (α_k^(2) − α_k^(1)) = λ_k^(2),
T_1 = (1/2) [ Θ′(y_k)⁻¹ Θ′(x_k) Θ′(y_k)⁻¹ + Θ′(x_k)⁻¹ ] = (1/2) [ Θ′(y_k)⁻¹ (Θ′(x_k) − Θ′(y_k)) Θ′(y_k)⁻¹ + Θ′(y_k)⁻¹ + Θ′(x_k)⁻¹ ],
which further yields
∥ T_1 L ∥ ≤ (1/2) [ Φ̄_k / ( (1 − Φ_0(α_k^(2))) (1 − Φ_0(α_k^(1))) ) + 2 (1 + Φ̄_k / (1 − Φ_0(α_k^(1)))) / (1 − Φ_0(α_k^(0))) ] = q_k.
So, we have
∥ z_k^(1) − z_k ∥ ≤ q_k λ_k^(2) = α_k^(3) − α_k^(2)
and
∥ z_k^(1) − x_0 ∥ ≤ ∥ z_k^(1) − z_k ∥ + ∥ z_k − x_0 ∥ ≤ α_k^(3) − α_k^(2) + α_k^(2) − α_k^(0) = α_k^(3) < ρ.
Similarly, with λ_k^(j) as defined in (24), we have
∥ z_k^(j−1) − z_k^(j−2) ∥ ≤ q_k λ_k^(j) = α_k^(j+1) − α_k^(j), and
∥ z_k^(j−1) − x_0 ∥ ≤ ∥ z_k^(j−1) − z_k^(j−2) ∥ + ∥ z_k^(j−2) − x_0 ∥ ≤ α_k^(j+1) − α_k^(j) + α_k^(j) − α_0^(0) = α_k^(j+1) < ρ.
Moreover, we can write
Θ(x_{k+1}) = Θ(x_{k+1}) − Θ(x_k) − Θ′(x_k)(x_{k+1} − x_k) + Θ′(x_k)(x_{k+1} − y_k), so
∥ L⁻¹ Θ(x_{k+1}) ∥ ≤ ∫_0^1 Φ((1 − τ)(α_{k+1}^(0) − α_k^(0))) dτ (α_{k+1}^(0) − α_k^(0)) + (1 + Φ_0(α_k^(0))) (α_{k+1}^(0) − α_k^(1)) = δ_{k+1}.
Thus, we get
∥ y_{k+1} − x_{k+1} ∥ ≤ ∥ Θ′(x_{k+1})⁻¹ L ∥ ∥ L⁻¹ Θ(x_{k+1}) ∥ ≤ δ_{k+1} / (1 − Φ_0(α_{k+1}^(0))) = α_{k+1}^(1) − α_{k+1}^(0), and
∥ y_{k+1} − x_0 ∥ ≤ ∥ y_{k+1} − x_{k+1} ∥ + ∥ x_{k+1} − x_0 ∥ ≤ α_{k+1}^(1) − α_{k+1}^(0) + α_{k+1}^(0) − α_0^(0) = α_{k+1}^(1) < ρ.
Hence, the sequence {x_k} is Cauchy in the Banach space E_1 (since it is majorized by a scalar sequence that is itself Cauchy, as it converges to ρ). Therefore, there exists x* ∈ B[x_0, ρ] such that lim_{k→+∞} x_k = x*. Finally, if we let k → +∞ in (25) and use the continuity of the operator Θ, we conclude that Θ(x*) = 0. □
A set is specified with only one solution of equation Θ ( x ) = 0 .
Proposition 2.
Suppose there exists a solution x ¯ B ( x 0 , ρ 6 ) of equation Θ ( x ) = 0 for some ρ 6 > 0 ; condition ( B 3 ) holds in ball B ( x 0 , ρ 6 ) , and there exists ρ 7 ρ 6 , such that
∫_0^1 Φ_0((1 − τ) ρ_6 + τ ρ_7) dτ < 1.
Define set E 5 = E B [ x 0 , ρ 7 ] . Then, the only solution of equation Θ ( x ) = 0 in set E 5 is x ¯ .
Proof. 
Suppose that there exists a solution x̃ ∈ E_5 of the equation Θ(x) = 0. Consider the linear operator L_1 = ∫_0^1 Θ′(x̄ + τ(x̃ − x̄)) dτ.
By using condition ( B 3 ) and (26), we get
∥ L⁻¹ (L_1 − L) ∥ ≤ ∫_0^1 Φ_0((1 − τ) ∥ x̄ − x_0 ∥ + τ ∥ x̃ − x_0 ∥) dτ ≤ ∫_0^1 Φ_0((1 − τ) ρ_6 + τ ρ_7) dτ < 1,
So, L 1 1 Δ ( E 2 , E 1 ) . Then, from identity x ¯ x ˜ = L 1 1 Θ ( x ¯ ) Θ ( x ˜ ) = L 1 1 ( 0 ) = 0 , we deduce that x ¯ = x ˜ . □
Remark 4.
(1)
Constant ρ can be switched with ρ 4 in condition ( B 6 ) .
(2)
Under conditions ( B 1 ) ( B 6 ) , take x ¯ = x * and ρ 6 = ρ in Proposition 2.

4. Specializations and Numerical Experiments

Let us consider the interesting specialization of method (2) obtained by taking
T(x_n, y_n) = x_n − (1/2) [ Θ′(y_n)⁻¹ + Θ′(x_n)⁻¹ ] Θ(x_n) and T_1(x_n, y_n) = (1/2) [ Θ′(y_n)⁻¹ Θ′(x_n) Θ′(y_n)⁻¹ + Θ′(x_n)⁻¹ ].
Under these choices, method (2) is reduced to
y_n = x_n − Θ′(x_n)⁻¹ Θ(x_n),
z_n = x_n − (1/2) [ Θ′(y_n)⁻¹ + Θ′(x_n)⁻¹ ] Θ(x_n),
x_{n+1} = z_n − (1/2) [ Θ′(y_n)⁻¹ Θ′(x_n) Θ′(y_n)⁻¹ + Θ′(x_n)⁻¹ ] Θ(z_n).
Method (27) is studied in [11] (see (24) in [11]). The convergence order is shown there to be six using Taylor series, under the assumption that Θ^(4) exists. Other limitations of this technique have already been reported in the introduction of this paper.
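Method (27) can be sketched for a two-dimensional system using explicit 2×2 linear algebra; the test system F(x, y) = (x² + y² − 1, x − y), the starting point, and the iteration count are illustrative assumptions, not taken from the paper:

```python
def inv2(M):
    # inverse of a 2x2 matrix
    a, b, c, d = M[0][0], M[0][1], M[1][0], M[1][1]
    det = a*d - b*c
    return [[d/det, -b/det], [-c/det, a/det]]

def mul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def avg(A, B):
    # (1/2)(A + B)
    return [[0.5*(A[i][j] + B[i][j]) for j in range(2)] for i in range(2)]

def mv(A, v):
    return [A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1]]

def sub(u, v):
    return [u[0] - v[0], u[1] - v[1]]

F  = lambda p: [p[0]**2 + p[1]**2 - 1.0, p[0] - p[1]]   # illustrative system
dF = lambda p: [[2.0*p[0], 2.0*p[1]], [1.0, -1.0]]      # its Jacobian

x = [1.0, 0.5]
for _ in range(8):
    Jx_inv = inv2(dF(x))
    y = sub(x, mv(Jx_inv, F(x)))                         # Newton substep
    Jy_inv = inv2(dF(y))
    z = sub(x, mv(avg(Jy_inv, Jx_inv), F(x)))            # averaged-inverse substep
    T1 = avg(mul(mul(Jy_inv, dF(x)), Jy_inv), Jx_inv)    # (1/2)[F'(y)^-1 F'(x) F'(y)^-1 + F'(x)^-1]
    x = sub(z, mv(T1, F(z)))

residual = max(abs(F(x)[0]), abs(F(x)[1]))
```

The iterates approach the solution (√2/2, √2/2); in a general Banach space the 2×2 inverses become inverse Fréchet derivatives.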

5. Selection of the Majorant Functions for (27)

We can determine the functions Ω̃_2 and Ω̃_3, which we claim to be defined by
N~(t) = N((1 + Ω_1(t)) t) or N_0(t) + N_0(Ω_1(t) t),
N~~(t) = N((1 + Ω_2(t)) t) or N_0(t) + N_0(Ω_2(t) t),
N~~~(t) = N((Ω_1(t) + Ω_2(t)) t) or N_0(Ω_1(t) t) + N_0(Ω_2(t) t),
p̃(t) = (1/2) [ N~(t) / (1 − N_0(Ω_1(t) t))² + N~~(t) / ( (1 − N_0(t)) (1 − N_0(Ω_2(t) t)) ) + N~~~(t) / ( (1 − N_0(Ω_1(t) t)) (1 − N_0(Ω_2(t) t)) ) ],
Ω̃_2(t) = [ ∫_0^1 N((1 − τ) t) dτ ] / [ 1 − N_0(t) ] + N~(t) [ 1 + ∫_0^1 N_0(τ t) dτ ] / [ 2 (1 − N_0(t)) (1 − N_0(Ω_1(t) t)) ] (which can serve as Ω_2(t))
and
Ω̃_3(t) = { [ ∫_0^1 N((1 − τ) Ω̃_2(t) t) dτ ] / [ 1 − N_0(Ω̃_2(t) t) ] + p̃(t) [ 1 + ∫_0^1 N_0(τ Ω̃_2(t) t) dτ ] } Ω̃_2(t) (which can serve as Ω_3(t)).
Justification for the selection of majorant functions. For the choice of Ω ˜ 2 , we have estimates
z_n − x* = x_n − x* − Θ′(x_n)⁻¹ Θ(x_n) + [ Θ′(x_n)⁻¹ − (1/2) Θ′(y_n)⁻¹ − (1/2) Θ′(x_n)⁻¹ ] Θ(x_n)
= x_n − x* − Θ′(x_n)⁻¹ Θ(x_n) + (1/2) Θ′(x_n)⁻¹ (Θ′(y_n) − Θ′(x_n)) Θ′(y_n)⁻¹ Θ(x_n),
so that
∥ z_n − x* ∥ ≤ { [ ∫_0^1 N((1 − τ) ∥ x_n − x* ∥) dτ ] / [ 1 − N_0(∥ x_n − x* ∥) ] + N̄_n [ 1 + ∫_0^1 N_0(τ ∥ x_n − x* ∥) dτ ] / [ 2 (1 − N_0(∥ x_n − x* ∥)) (1 − N_0(∥ y_n − x* ∥)) ] } ∥ x_n − x* ∥ ≤ Ω̃_2(∥ x_n − x* ∥) ∥ x_n − x* ∥ ≤ ∥ x_n − x* ∥,
where we also used the estimates
∥ L⁻¹ (Θ′(x_n) − Θ′(y_n)) ∥ ≤ N(∥ x_n − y_n ∥) ≤ N(∥ x_n − x* ∥ + ∥ y_n − x* ∥) ≤ N((1 + Ω_1(∥ x_n − x* ∥)) ∥ x_n − x* ∥) = N̄_n, or
∥ L⁻¹ (Θ′(x_n) − Θ′(y_n)) ∥ ≤ ∥ L⁻¹ (Θ′(x_n) − L) ∥ + ∥ L⁻¹ (Θ′(y_n) − L) ∥ ≤ N_0(∥ x_n − x* ∥) + N_0(∥ y_n − x* ∥) ≤ N_0(∥ x_n − x* ∥) + N_0(Ω_1(∥ x_n − x* ∥) ∥ x_n − x* ∥) = N~_n.
Proof. 
For the choice of Ω 3 ˜ , the calculations are
x_{n+1} − x* = z_n − x* − Θ′(z_n)⁻¹ Θ(z_n) + (1/2) [ (Θ′(z_n)⁻¹ − Θ′(x_n)⁻¹) + (Θ′(z_n)⁻¹ − Θ′(y_n)⁻¹) − Θ′(y_n)⁻¹ (Θ′(x_n) − Θ′(y_n)) Θ′(y_n)⁻¹ ] Θ(z_n).
Thus, we have
∥ x_{n+1} − x* ∥ ≤ { [ ∫_0^1 N((1 − τ) ∥ z_n − x* ∥) dτ ] / [ 1 − N_0(∥ z_n − x* ∥) ] + (1/2) [ N~~_n / ( (1 − N_0(∥ x_n − x* ∥)) (1 − N_0(∥ z_n − x* ∥)) ) + N~~~_n / ( (1 − N_0(∥ y_n − x* ∥)) (1 − N_0(∥ z_n − x* ∥)) ) + N~_n / (1 − N_0(∥ y_n − x* ∥))² ] [ 1 + ∫_0^1 N_0(τ ∥ z_n − x* ∥) dτ ] } ∥ z_n − x* ∥ ≤ Ω̃_3(∥ x_n − x* ∥) ∥ x_n − x* ∥ ≤ ∥ x_n − x* ∥.
Notice that in view of the computation for the upper bounds of z n x * , in order to satisfy the second condition, i.e., ( H 6 ) , we must choose
Ω̄_2(t) = [ ∫_0^1 N((1 − τ) t) dτ ] / [ 1 − N_0(t) ] + N̄(t) [ 1 + ∫_0^1 N_0(τ t) dτ ] / [ 2 (1 − N_0(t)) (1 − N_0(Ω_1(t) t)) ].
We shall consider the choice of T and Ω ¯ 2 as given above in the first two local examples of Section 7. □

6. Selection of the Majorant Sequence for (27)

We simplify the notation for the steps that follow. The majorizing sequence {a_n} is defined for a_0 = 0, b_0 ≥ ∥ Θ′(x_0)⁻¹ Θ(x_0) ∥, and each n = 0, 1, 2, ... by
Φ̄_n = Φ(b_n − a_n) or Φ_0(a_n) + Φ_0(b_n),
c_n = b_n + (1/2) Φ̄_n (1 + Φ_0(a_n)) (b_n − a_n) / [ (1 − Φ_0(a_n)) (1 − Φ_0(b_n)) ],
q_n = (1/2) [ Φ̄_n / (1 − Φ_0(b_n))² + (2 + Φ̄_n / (1 − Φ_0(b_n))) / (1 − Φ_0(a_n)) ],
λ_n = [ 1 + ∫_0^1 Φ_0(a_n + τ (c_n − a_n)) dτ ] (c_n − a_n) + (1 + Φ_0(a_n)) (c_n − a_n),
a_{n+1} = c_n + q_n λ_n,
δ_{n+1} = ∫_0^1 Φ((1 − τ)(a_{n+1} − a_n)) dτ (a_{n+1} − a_n) + (1 + Φ_0(a_n)) (a_{n+1} − b_n), and
b_{n+1} = a_{n+1} + δ_{n+1} / (1 − Φ_0(a_{n+1})).
The justification for these selections involves the following calculations:
z_n − y_n = [ Θ′(x_n)⁻¹ − (1/2) (Θ′(y_n)⁻¹ + Θ′(x_n)⁻¹) ] Θ(x_n) = (1/2) [ Θ′(x_n)⁻¹ − Θ′(y_n)⁻¹ ] Θ(x_n),
∥ z_n − y_n ∥ ≤ (1/2) Φ̄_n (1 + Φ_0(a_n)) (b_n − a_n) / [ (1 − Φ_0(a_n)) (1 − Φ_0(b_n)) ] = c_n − b_n, and
x_{n+1} − z_n = −(1/2) [ Θ′(y_n)⁻¹ (Θ′(x_n) − Θ′(y_n)) Θ′(y_n)⁻¹ + Θ′(y_n)⁻¹ − Θ′(x_n)⁻¹ + 2 Θ′(x_n)⁻¹ ] Θ(z_n).
However, the bracket is bounded above in norm by
q_n = (1/2) [ Φ̄_n / (1 − Φ_0(b_n))² + (2 + Φ̄_n / (1 − Φ_0(b_n))) / (1 − Φ_0(a_n)) ],
and from Θ(z_n) = Θ(z_n) − Θ(x_n) − Θ′(x_n)(z_n − x_n),
∥ L⁻¹ Θ(z_n) ∥ ≤ [ 1 + ∫_0^1 Φ_0(a_n + τ (c_n − a_n)) dτ ] (c_n − a_n) + (1 + Φ_0(a_n)) (c_n − a_n) = λ_n.
Thus, we get
∥ x_{n+1} − z_n ∥ ≤ q_n λ_n = a_{n+1} − c_n.
Moreover, the estimate for
b n + 1 = a n + 1 + δ n + 1 1 Φ 0 ( a n + 1 )
is as in the proof of Theorem 2.

7. Numerical Problems

Based on the theoretical results, we performed a computational analysis to demonstrate their practical significance. We chose six problems for our computational examinations. Numerical examples were divided on the basis of convergence into two parts: local area convergence (LAC) and semi-local area convergence (SLAC).
Local area convergence: The first two problems we considered involve local area convergence (LAC). We reported the computational findings in Table 1 and Table 2, based on an academic problem and the Hammerstein operator (details can be seen in Examples 1 and 2, respectively), with an emphasis on local convergence.
Semi-local area convergence: The remaining examples illustrate semi-local area convergence (SLAC). Based on the well-known two-dimensional Burgers' equation (Example 3), the numerical results of semi-local convergence are presented in Table 3. Table 4 provides the numerical results of semi-local convergence for the boundary value problem in Example 4, another well-known applied science problem. Furthermore, we consider a system of order 200 × 200 consisting of trigonometric and polynomial functions (specifics are shown in Example 5), with the computational results listed in Table 5. For semi-local convergence, we finally selected a different large system of 200 × 200 nonlinear equations (further information is provided in Example 6), with numerical results displayed in Table 6.
Furthermore, we also mentioned the COC that was calculated using the following formulas:
η = ln( ∥ x_{κ+1} − x* ∥ / ∥ x_κ − x* ∥ ) / ln( ∥ x_κ − x* ∥ / ∥ x_{κ−1} − x* ∥ ), for κ = 1, 2, ...
or A C O C [7,12] by
η* = ln( ∥ x_{κ+1} − x_κ ∥ / ∥ x_κ − x_{κ−1} ∥ ) / ln( ∥ x_κ − x_{κ−1} ∥ / ∥ x_{κ−1} − x_{κ−2} ∥ ), for κ = 2, 3, ...
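Given a stored iterate history, η* can be computed directly from consecutive differences; below, a synthetic quadratically convergent sequence (an illustrative stand-in for real iterates, not data from the paper) yields η* ≈ 2:

```python
import math

def acoc(xs):
    """Approximate computational order of convergence from the last four iterates."""
    d = [abs(xs[i+1] - xs[i]) for i in range(len(xs) - 1)]
    return math.log(d[-1] / d[-2]) / math.log(d[-2] / d[-3])

# model errors obeying e_{k+1} = e_k^2 (quadratic convergence)
xs = [1e-1, 1e-2, 1e-4, 1e-8, 1e-16]
order = acoc(xs)
```

The same routine applies to vector iterates once the absolute values are replaced by norms.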
The termination criteria for the programs are given below: (i) ∥ x_{κ+1} − x_κ ∥ < ϵ, and (ii) ∥ J(x_κ) ∥ < ϵ, where ϵ = 10⁻³⁰⁰. All computations were performed using Mathematica 11 with multiple-precision arithmetic. The configuration of the computer used for programming is given below:
  • Device Name: HP
  • Installed RAM: 8.00 GB (7.89 GB usable)
  • Processor: Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz 3.60 GHz
  • System type: 64-bit operating system, x64-based processor
  • Edition: Windows 10 Enterprise
  • Version: 22H2
  • OS Build: 19045.2006

7.1. Examples for LAC

To illustrate the theoretical findings of local convergence, which are provided in Section 2, we select two examples: (1) and (2). The choices of T and Ω ¯ 2 were made as given in Section 5.
Example 1.
Let E 1 = E 2 = R × R × R and E = B ( 0 , 1 ) . Define the mapping Q on E 1 for v = ( h 1 , h 2 , h 3 ) T , as follows:
Q(v) = ( e^{h_1} − 1, h_2, ((e − 1)/2) h_3² + h_3 )ᵀ.
Then, for L = Q ( x * ) , where x * = ( 0 , 0 , 0 ) T solves equation Q ( v ) = 0 , we have
Q′(v) = diag( e^{h_1}, 1, (e − 1) h_3 + 1 ).
So, Q ( x * ) = I . Consequently, conditions ( H 5 ) and ( H 6 ) hold if we choose
N_0(t) = (e − 1) t and N(t) = e^{1/(e−1)} t.
Then, the radii given by Formula (3) can be found in Table 1, based on Example 1.
Table 1. Radii of method (2) for Example 1.
j | s_0 | s_1 | s_2 | s_3 | s_4 | s
3 | 0.581977 | 0.38269 | 0.202479 | 0.13506 | - | 0.13506
4 | 0.581977 | 0.38269 | 0.202479 | 0.13506 | 0.11881 | 0.11881
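The entries s_0 and s_1 of Table 1 can be reproduced from N_0(t) = (e − 1)t and N(t) = e^{1/(e−1)} t; here s_1 is located by bisection on Ω_1(t) − 1 (a sketch; the bisection tolerances are illustrative):

```python
import math

e = math.e
N0 = lambda t: (e - 1.0) * t
N  = lambda t: math.exp(1.0 / (e - 1.0)) * t

s0 = 1.0 / (e - 1.0)                   # solves N0(t) - 1 = 0
# Omega_1(t) = (int_0^1 N((1 - tau) t) dtau) / (1 - N0(t)) = (N(t)/2) / (1 - N0(t))
Omega1 = lambda t: 0.5 * N(t) / (1.0 - N0(t))

lo, hi = 1e-12, s0 - 1e-12             # Omega1 grows from 0 to +inf on (0, s0)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if Omega1(mid) < 1.0:
        lo = mid
    else:
        hi = mid
s1 = 0.5 * (lo + hi)
```

The computed values agree with the tabulated 0.581977 and 0.38269.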
Example 2.
In numerous areas of physics and engineering, a nonlinear integral equation of the first kind based on a Hammerstein operator poses a significant mathematical challenge. An analytical solution is almost never available due to the integral and nonlinear elements. To address such complexities, researchers can only rely on iterative methods and functional analysis. For instance, let E = B[0, 1] and E_1 = E_2 = C[0, 1]. Then, we have the following nonlinear integral equation of the first kind with Hammerstein operator J:
J ( v ) ( x ) = v ( x ) 4 0 1 x Δ v ( Δ ) 3 d Δ .
The derivative of operator J is given below:
J v ( q ) ( x ) = q ( x ) 12 0 1 x Δ v ( Δ ) 2 q ( Δ ) d Δ ,
for q ∈ C[0, 1]. The values of the operator J satisfy the hypotheses (H1)–(H7). Since x* = 0, we have L = J′(x*) = I, with
N_0(t) = 24 t and N(t) = 48 t.
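The linearization J′ can be validated numerically against a central difference of J, using a simple trapezoidal quadrature for the integral (the test functions v, q, the evaluation point, and the step eps are illustrative):

```python
def quad(g, n=2000):
    # composite trapezoid rule on [0, 1]
    h = 1.0 / n
    return h * (0.5*g(0.0) + 0.5*g(1.0) + sum(g(i*h) for i in range(1, n)))

def J(v, x):
    # J(v)(x) = v(x) - 4x * int_0^1 D v(D)^3 dD
    return v(x) - 4.0 * x * quad(lambda d: d * v(d)**3)

def dJ(v, q, x):
    # J'(v)q(x) = q(x) - 12x * int_0^1 D v(D)^2 q(D) dD
    return q(x) - 12.0 * x * quad(lambda d: d * v(d)**2 * q(d))

v = lambda d: d
q = lambda d: d * d
x, eps = 0.5, 1e-4
fd = (J(lambda d: v(d) + eps*q(d), x) - J(lambda d: v(d) - eps*q(d), x)) / (2.0*eps)
```

The finite-difference directional derivative fd matches dJ(v, q, x) up to O(eps²), confirming the stated Fréchet derivative.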
In Table 2, we present radii for method (2), for Example 2.
Table 2. Radii of method (2) for Example 2.
j | s_0 | s_1 | s_2 | s_3 | s_4 | s
3 | 0.041667 | 0.020833 | 0.00949453 | 0.000568145 | - | 0.000568145
4 | 0.041667 | 0.020833 | 0.00949453 | 0.00055814 | 0.00186203 | 0.000568145
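For these choices the first two radii of Table 2 have closed forms: s_0 solves 24t = 1, and, since ∫_0^1 N((1 − τ)t) dτ = 24t, the equation Ω_1(t) = 1 reduces to 24t = 1 − 24t; a quick check:

```python
N0 = lambda t: 24.0 * t
# Omega_1(t) = 24t / (1 - 24t), since int_0^1 48(1 - tau) t dtau = 24t
Omega1 = lambda t: 24.0 * t / (1.0 - N0(t))

s0 = 1.0 / 24.0          # N0(s0) = 1
s1 = 1.0 / 48.0          # Omega1(s1) = 1
```

Both values agree with the tabulated 0.041667 and 0.020833.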

7.2. Examples for SLAC

We consider Examples 3–6 in order to demonstrate the theoretical results of semi-local convergence, which are proposed in Section 3. We chose the values k = 3 and k = 4, respectively. Thus, we have
y_n = x_n − Θ′(x_n)⁻¹ Θ(x_n),
z_n = x_n − (1/2) [ Θ′(y_n)⁻¹ + Θ′(x_n)⁻¹ ] Θ(x_n),
x_{n+1} = z_n − (1/2) [ Θ′(y_n)⁻¹ Θ′(x_n) Θ′(y_n)⁻¹ + Θ′(x_n)⁻¹ ] Θ(z_n),
and
y_n = x_n − Θ′(x_n)⁻¹ Θ(x_n),
z_n = x_n − (1/2) [ Θ′(y_n)⁻¹ + Θ′(x_n)⁻¹ ] Θ(x_n),
u_n = z_n − (1/2) [ Θ′(y_n)⁻¹ Θ′(x_n) Θ′(y_n)⁻¹ + Θ′(x_n)⁻¹ ] Θ(z_n),
x_{n+1} = u_n − (1/2) [ Θ′(y_n)⁻¹ Θ′(x_n) Θ′(y_n)⁻¹ + Θ′(x_n)⁻¹ ] Θ(u_n).
Example 3.
Burgers’ equation in two dimensions is one of the most famous equations in applied sciences and is used to model the dynamics of fluid flow, particularly shock waves and turbulence. It defines how velocity fields evolve over time, accounting for both convection and diffusion processes. It is significant in fluid mechanics and nonlinear dynamics and provides insights into wave propagation, boundary layers, and complex flow patterns in various physical systems. Therefore, we assume the two–dimensional Burger’s equation [5] to be
$$\frac{\partial^2 p}{\partial u^2} - \frac{\partial p}{\partial t} + p\,\frac{\partial p}{\partial u} + g(u,t) = 0,$$
where $(u,t) \in [0,1] \times [0,1]$ and $g(u,t) = -10e^{-2t}\left[e^{t}(u^2 - u + 2) + 10u(2u^2 - 3u + 1)\right]$. The solution $p = p(u,t)$ fulfills the following boundary conditions:
$$p(0,t) = p(1,t) = 0, \quad p(u,0) = 10(u^2 - u), \quad p(u,1) = 10e^{-1}(u^2 - u).$$
We assume that $p_{i,j} = p(u_i, t_j)$ is the approximate solution at the grid points of the mesh. In addition, $M$ and $N$ denote the numbers of steps in the $u$ and $t$ directions, and $h$ and $k$ are the corresponding step sizes. We can deduce a nonlinear system of equations from this partial differential equation by adopting a finite-difference discretization. Therefore, we apply the following central-difference approximations:
$$\frac{\partial^2 p}{\partial u^2}(u_i, t_j) \approx \frac{p_{i+1,j} - 2p_{i,j} + p_{i-1,j}}{h^2}$$
and
$$\frac{\partial p}{\partial u} \approx \frac{p_{i+1,j} - p_{i-1,j}}{2h}, \qquad \frac{\partial p}{\partial t} \approx \frac{p_{i,j+1} - p_{i,j-1}}{2k}$$
in order to obtain the solution. To deduce the large $225 \times 225$ system of nonlinear equations, we use $M = 16$ and $N = 16$. Moreover, we take $x_0 = (1, 1, \ldots, 1)^T$ (225 components) as the initial vector; the required estimated zero $x^*$ is given as a column vector (not a matrix) in Appendix A.
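As a check on the discretization, the exact solution $p(u,t) = 10e^{-t}(u^2 - u)$ of this problem can be substituted into the finite-difference residual; the Python sketch below (our own code, using the grid sizes from the text) confirms that the residual at the 225 interior grid points is small.

```python
import numpy as np

M = N = 16
h = 1.0 / M
k = 1.0 / N
u = np.linspace(0.0, 1.0, M + 1)
t = np.linspace(0.0, 1.0, N + 1)
U, T = np.meshgrid(u, t, indexing="ij")   # U varies along rows, T along columns

g = -10*np.exp(-2*T) * (np.exp(T)*(U**2 - U + 2) + 10*U*(2*U**2 - 3*U + 1))
p_exact = 10 * np.exp(-T) * (U**2 - U)    # satisfies the PDE and boundary data

def residual(p):
    # central-difference residual at the 15 x 15 interior grid points
    p_uu = (p[2:, 1:-1] - 2*p[1:-1, 1:-1] + p[:-2, 1:-1]) / h**2
    p_u  = (p[2:, 1:-1] - p[:-2, 1:-1]) / (2*h)
    p_t  = (p[1:-1, 2:] - p[1:-1, :-2]) / (2*k)
    return p_uu - p_t + p[1:-1, 1:-1] * p_u + g[1:-1, 1:-1]

# Only the O(k^2) truncation error of the time difference survives,
# since p_exact is quadratic in u.
print(np.max(np.abs(residual(p_exact))))
```

This is the residual function whose zero the iterative methods approximate.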
In Table 3, we present the COC, CPU timing, number of iterations, residual errors, and error differences between consecutive iterations for Example 3.
Table 3. Computational results of Example 3.

| Methods | $x_0$ | $\|F(x_3)\|$ | $\|x_4 - x_3\|$ | $n$ | $\eta$ | CPU timing |
|---|---|---|---|---|---|---|
| Method (32) | $(1, 1, \ldots, 1)^T$ | $4.3 \times 10^{-269}$ | $3.4 \times 10^{-271}$ | 4 | 6.349224 | 39.14 |
| Method (33) | $(1, 1, \ldots, 1)^T$ | $5.6 \times 10^{-619}$ | $5.2 \times 10^{-621}$ | 3 | 8.220816 | 20.13 |
Example 4.
Boundary value problems (BVPs) [3] hold essential significance in mathematics, physics, and engineering. These are differential equations whose solutions must satisfy conditions specified at different points, usually at the boundaries of a given domain. BVPs are important and popular in modeling real-world phenomena like heat transfer, fluid flow, and quantum mechanics, offering invaluable insights into physical systems. Hence, we opted for the following BVP (details can be found in [17]):
$$u'' + \mu^2 (u')^2 + 1 = 0$$
with $u(0) = u(1) = 0$. Divide the interval $[0,1]$ into $\ell$ parts, which further provides
$$\gamma_0 = 0 < \gamma_1 < \gamma_2 < \cdots < \gamma_{\ell-1} < \gamma_\ell, \qquad \gamma_{\tau+1} = \gamma_\tau + h, \qquad h = \frac{1}{\ell}.$$
Next, let us assume $u_0 = u(\gamma_0) = 0$, $u_1 = u(\gamma_1), \ldots, u_{\ell-1} = u(\gamma_{\ell-1})$, $u_\ell = u(\gamma_\ell) = 1$. We obtain
$$u'_\tau = \frac{u_{\tau+1} - u_{\tau-1}}{2h}, \qquad u''_\tau = \frac{u_{\tau-1} - 2u_\tau + u_{\tau+1}}{h^2}, \qquad \tau = 1, 2, 3, \ldots, \ell - 1,$$
by applying the discretization approach. Then, we have the following system of $(\ell-1) \times (\ell-1)$ equations:
$$u_{\tau-1} - 2u_\tau + u_{\tau+1} + \frac{\mu^2}{4}\left(u_{\tau+1} - u_{\tau-1}\right)^2 + h^2 = 0.$$
For instance, with $\ell = 101$ and $\mu = \frac{1}{2}$, we are dealing with a $100 \times 100$ system of nonlinear equations. The required solution, $x^*$, is given as a column vector (not a matrix) in Appendix A.
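A direct way to verify this construction is to assemble the $100 \times 100$ discretized system and solve it with Newton's method; the following Python sketch is our own code (the straight-line initial guess is an illustrative assumption), using the boundary values $u_0 = 0$ and $u_\ell = 1$ stated above.

```python
import numpy as np

ell = 101          # number of subintervals
mu = 0.5
h = 1.0 / ell
n = ell - 1        # interior unknowns u_1, ..., u_{ell-1}

def pad(u):
    # attach the boundary values u_0 = 0 and u_ell = 1
    return np.concatenate(([0.0], u, [1.0]))

def F(u):
    # the discretized equations u_{t-1} - 2u_t + u_{t+1}
    #   + (mu^2/4)(u_{t+1} - u_{t-1})^2 + h^2 = 0
    v = pad(u)
    return v[:-2] - 2*v[1:-1] + v[2:] + (mu**2 / 4)*(v[2:] - v[:-2])**2 + h**2

def Jac(u):
    # tridiagonal Jacobian of F
    v = pad(u)
    d = (mu**2 / 2) * (v[2:] - v[:-2])
    J = np.zeros((n, n))
    idx = np.arange(n)
    J[idx, idx] = -2.0
    J[idx[:-1], idx[:-1] + 1] = 1.0 + d[:-1]
    J[idx[1:], idx[1:] - 1] = 1.0 - d[1:]
    return J

u = np.linspace(h, 1.0 - h, n)   # straight-line initial guess
for _ in range(20):
    u = u - np.linalg.solve(Jac(u), F(u))
print(np.max(np.abs(F(u))))
```

The converged vector agrees closely with the solution $x^*$ listed in Appendix A.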
Table 4 shows the COC (computational order of convergence), CPU timing, the number of iterations, residual errors, and the difference in errors between consecutive iterations for Example 4.
Table 4. Computational results of Example 4.

| Methods | $x_0$ | $\|F(x_3)\|$ | $\|x_4 - x_3\|$ | $n$ | $\eta$ | CPU timing |
|---|---|---|---|---|---|---|
| Method (32) | $(1.003, 1.003, \ldots, 1.003)^T$ | $1.3 \times 10^{-238}$ | $2.8 \times 10^{-237}$ | 4 | 6.0246 | 100.695 |
| Method (33) | $(1.003, 1.003, \ldots, 1.003)^T$ | $3.9 \times 10^{-561}$ | $3.0 \times 10^{-559}$ | 3 | 8.3385 | 41.824 |
Example 5.
We assume a nonlinear system (selected from [7]), which is defined as follows:
$$F_k(x) = x_k - \cos\left(2x_k - \sum_{i=1}^{n} x_i\right) = 0, \qquad 1 \le k \le n.$$
We choose $n = 105$; the required solution is $x^* = (0.3543, 0.3543, \ldots, 0.3543)^T$ (105 components). The obtained results can be observed in Table 5.
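This system is simple enough to solve directly with Newton's method; the Python sketch below (our own code, started from the $x_0 = (0.36, \ldots, 0.36)^T$ vector of Table 5) recovers the solution value $\approx 0.3543$.

```python
import numpy as np

n = 105

def F(x):
    # F_k(x) = x_k - cos(2 x_k - sum_i x_i)
    return x - np.cos(2*x - np.sum(x))

def Jac(x):
    # dF_k/dx_j = delta_kj + sin(2 x_k - S) * (2 delta_kj - 1), S = sum_i x_i
    theta = 2*x - np.sum(x)
    return np.eye(n) + np.sin(theta)[:, None] * (2*np.eye(n) - 1.0)

x = 0.36 * np.ones(n)
for _ in range(20):
    x = x - np.linalg.solve(Jac(x), F(x))
print(x[0])   # ~ 0.3543
```

By symmetry all components coincide, so the iteration effectively solves the scalar equation $x = \cos(103x)$.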
Table 5. Numerical results for Example 5.

| Methods | $x_0$ | $\|\Theta(x_3)\|$ | $\|x_4 - x_3\|$ | $n$ | $\eta$ | CPU timing |
|---|---|---|---|---|---|---|
| Method (32) | $(0.36, 0.36, \ldots, 0.36)^T$ | $8.9 \times 10^{-101}$ | $9.4 \times 10^{-103}$ | 4 | 6.0605 | 1617.33 |
| Method (33) | $(0.36, 0.36, \ldots, 0.36)^T$ | $4.7 \times 10^{-312}$ | $5.0 \times 10^{-314}$ | 3 | 9.2706 | 976.077 |
Example 6.
Consider a large $200 \times 200$ system of polynomial equations:
$$P_i(x) = x_i^2\, x_{i+1} - 1 = 0, \quad 1 \le i \le 199, \qquad P_{200}(x) = x_{200}^2\, x_1 - 1 = 0.$$
The exact zero of the above system is $x^* = (1, 1, \ldots, 1)^T$. We take $x_0 = (2, 2, \ldots, 2)^T$ as the starting point for this problem.
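The cyclic structure of this system makes it easy to assemble; the Python sketch below (our own illustrative code, using Newton's method rather than methods (32) or (33)) confirms convergence from the stated starting point to the exact zero.

```python
import numpy as np

n = 200

def P(x):
    # P_i = x_i^2 * x_{i+1} - 1, cyclically: x_{n+1} = x_1
    xn = np.roll(x, -1)
    return x**2 * xn - 1.0

def Jac(x):
    # dP_i/dx_i = 2 x_i x_{i+1}, dP_i/dx_{i+1} = x_i^2 (cyclic wrap)
    xn = np.roll(x, -1)
    J = np.zeros((n, n))
    idx = np.arange(n)
    J[idx, idx] = 2*x*xn
    J[idx, (idx + 1) % n] = x**2
    return J

x = 2.0 * np.ones(n)
for _ in range(15):
    x = x - np.linalg.solve(Jac(x), P(x))
print(np.max(np.abs(x - 1.0)))
```

On the symmetric ray the iteration reduces to scalar Newton on $x^3 - 1$, which converges from $x = 2$ to the root $1$ in a handful of steps.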
Table 6. Numerical results for Example 6.

| Methods | $x_0$ | $\|\Theta(x_3)\|$ | $\|x_4 - x_3\|$ | $n$ | $\eta$ | CPU timing |
|---|---|---|---|---|---|---|
| Method (32) | $(0.9, 0.9, \ldots, 0.9)^T$ | $5.9 \times 10^{-232}$ | $2.0 \times 10^{-232}$ | 4 | 6.0298 | 203.797 |
| Method (33) | $(0.9, 0.9, \ldots, 0.9)^T$ | $3.4 \times 10^{-709}$ | $1.1 \times 10^{-709}$ | 3 | 9.1334 | 103.344 |
In the last example, we construct the sequence $\{a_n\}$ given by (30), which majorizes the sequence $\{x_n\}$ defined in (27). Moreover, the convergence conditions are verified, as well as the convergence order.
Example 7.
Let $E_1 = E_2 = \mathbb{R}$ and $E = B(t_0, 1-d)$ for $t_0 = 1$, $L = \Theta'(t_0)$, and $d \in (0,1)$. Define the function $\Theta: E \to \mathbb{R}$ by
$$\Theta(t) = t^3 - d.$$
First of all, we need to calculate $b_0 = \|\Theta'(t_0)^{-1}\Theta(t_0)\|$, $\Phi_0$, and $\Phi$. It follows from the definition of $\Theta$ that
$$b_0 = \frac{1}{3}(1 - d).$$
Then, condition ( B 4 ) holds if
$$\Phi_0(t) = (3 - d)\,t.$$
Indeed, we have
$$\Theta'(t) - \Theta'(t_0) = 3(t^2 - t_0^2) = 3(t + t_0)(t - t_0) = 3\big((t - t_0) + 2t_0\big)(t - t_0).$$
So, we get
$$\left\|\Theta'(t_0)^{-1}\big(\Theta'(t) - \Theta'(t_0)\big)\right\| \le (3 - d)\,|t - t_0|,$$
which justifies the choice of the function $\Phi_0$.
Notice that the equation $\Phi_0(t) - 1 = 0$ has the solution $\rho_4 = \frac{1}{3-d} < 1 - d$ for $d \in (0,1)$. Thus, we get
$$E_0 = E \cap B(t_0, \rho_4) = B(t_0, \rho_4).$$
Then, for $x, y \in B(t_0, \rho_4)$, we obtain
$$\Theta'(y) - \Theta'(x) = 3\big[(y - t_0) + (x - t_0) + 2t_0\big](y - x).$$
So, we get
$$\left\|\Theta'(t_0)^{-1}\big(\Theta'(y) - \Theta'(x)\big)\right\| \le 2\left(1 + \frac{1}{3-d}\right)\|y - x\|,$$
leading to the following choice:
$$\Phi(t) = 2\left(1 + \frac{1}{3-d}\right)t.$$
Hence, sequence { a n } given by (30) for a 0 = 0 can be constructed. Let us choose d = 0.9 .
It follows from Table 7 that $\{a_n\}$ majorizes $\{x_n\}$, which is convergent to $x^* = \sqrt[3]{d}$. The convergence order of methods (32) and (33) for function (36) is $6.0000$ and $9.0000$, respectively, computed using Formula (7).
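The scalar function of Example 7 makes it easy to reproduce the convergence of method (32); the minimal Python sketch below is our own code, using our reading of the three-step scheme.

```python
# Scalar version of the three-step scheme applied to
# Theta(t) = t^3 - d with d = 0.9; the root is t* = d**(1/3).
d = 0.9
f = lambda t: t**3 - d
fp = lambda t: 3.0 * t**2

t = 1.0   # t_0 = 1 as in the example
for _ in range(5):
    y = t - f(t) / fp(t)
    z = t - 0.5 * (1.0 / fp(y) + 1.0 / fp(t)) * f(t)
    t = z - 0.5 * (fp(t) / fp(y)**2 + 1.0 / fp(t)) * f(z)

print(abs(t - d ** (1.0 / 3.0)))
```

A single pass already yields about six correct digits, consistent with the sixth-order behavior reported for method (32).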

8. Conclusions

We have the following concluding remarks about this study:
  • This study emphasizes the importance of convergence analysis in iterative techniques, particularly in the absence of explicit convergence guarantees.
  • We re-examine method (2), which was studied in [11] under conditions involving high-order derivatives that do not appear in the method. That analysis provided no computable error estimates, no results on the location of $x^*$, and no information on how to choose $x_0$.
  • We rectify all these problems by using conditions only on the first derivative, which is the only one appearing in the method. This is how we extend its applicability, including to discretizations produced by the Newmark, finite element (FEM), and Crank-Nicolson methods, which are used to solve complex problems.
  • Our idea can be used to extend the applicability of other methods [7,11,12,13,14,15,16,17,18,19,20,23] using inverses under the same set of conditions.
  • We also utilize extended continuity assumptions and present semi-local analysis, which further advances our knowledge of convergence tendencies in iterative approaches. By providing useful examples for real-world applications, we verified the semi-local convergence.
  • With similar advantages, the method created in this study can also be applied to other iterative techniques that do not require the inverses of linear operators.
  • The convergence conditions in [11] and in the present article are only sufficient. Our future work will also include necessary conditions.

Author Contributions

Conceptualization, R.B. and I.K.A.; methodology, R.B. and I.K.A.; software, R.B. and I.K.A.; validation, R.B. and I.K.A.; formal analysis, R.B. and I.K.A.; investigation, R.B. and I.K.A.; resources, R.B. and I.K.A.; data curation, R.B. and I.K.A.; writing—original draft preparation, R.B. and I.K.A.; writing—review and editing, R.B., I.K.A. and S.A.; visualization, R.B., I.K.A. and S.A.; supervision, R.B. and I.K.A. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by the Prince Sattam bin Abdulaziz University, project number PSAU/2024/R/1445.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The author Sattam Alharbi wishes to thank the Prince Sattam bin Abdulaziz University, project number PSAU/2024/R/1445, for providing funding support.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

The required solution x * for Example 3 is given below:
x * = 0.5504 , 0.5171 , 0.4857 , 0.4563 , 0.4286 , 0.4027 , 0.3783 , 0.3554 , 0.3338 , 0.3136 , 0.2946 , 0.2767 , 0.2600 , 0.2442 , 0.2294 , 1.027 , 0.9652 , 0.9067 , 0.8518 , 0.8002 , 0.7517 , 0.7062 , 0.6634 , 0.6232 , 0.5854 , 0.5500 , 0.5166 , 0.4853 , 0.4559 , 0.4283 , 1.431 , 1.344 , 1.263 , 1.186 , 1.114 , 1.047 , 0.9836 , 0.9240 , 0.8680 , 0.8154 , 0.7660 , 0.7196 , 0.6760 , 0.6350 , 0.5966 , 1.761 , 1.654 , 1.554 , 1.460 , 1.371 , 1.288 , 1.210 , 1.137 , 1.068 , 1.003 , 0.9428 , 0.8857 , 0.8320 , 0.7816 , 0.7343 , 2.018 , 1.896 , 1.781 , 1.673 , 1.571 , 1.476 , 1.387 , 1.303 , 1.224 , 1.150 , 1.080 , 1.014 , 0.9534 , 0.8956 , 0.8414 , 2.201 , 2.068 , 1.943 , 1.825 , 1.714 , 1.610 , 1.513 , 1.421 , 1.335 , 1.254 , 1.178 , 1.107 , 1.040 , 0.9770 , 0.9179 , 2.311 , 2.171 , 2.040 , 1.916 , 1.800 , 1.691 , 1.589 , 1.492 , 1.402 , 1.317 , 1.237 , 1.162 , 1.092 , 1.025 , 0.9638 , 2.348 , 2.206 , 2.072 , 1.947 , 1.829 , 1.718 , 1.614 , 1.516 , 1.424 , 1.338 , 1.257 , 1.180 , 1.109 , 1.042 , 0.9791 , 2.311 , 2.171 , 2.040 , 1.916 , 1.800 , 1.691 , 1.589 , 1.492 , 1.402 , 1.317 , 1.237 , 1.162 , 1.092 , 1.025 , 0.9638 , 2.201 , 2.068 , 1.943 , 1.825 , 1.714 , 1.610 , 1.513 , 1.421 , 1.335 , 1.254 , 1.178 , 1.107 , 1.040 , 0.9770 , 0.9179 , 2.018 , 1.896 , 1.781 , 1.673 , 1.571 , 1.476 , 1.387 , 1.303 , 1.224 , 1.150 , 1.080 , 1.014 , 0.9534 , 0.8956 , 0.8414 , 1.761 , 1.654 , 1.554 , 1.460 , 1.371 , 1.288 , 1.210 , 1.137 , 1.068 , 1.003 , 0.9428 , 0.8857 , 0.8321 , 0.7816 , 0.7343 , 1.431 , 1.344 , 1.263 , 1.186 , 1.114 , 1.047 , 0.9836 , 0.9240 , 0.8680 , 0.8155 , 0.7660 , 0.7196 , 0.6760 , 0.6350 , 0.5966 , 1.027 , 0.9652 , 0.9068 , 0.8518 , 0.8002 , 0.7517 , 0.7062 , 0.6634 , 0.6232 , 0.5854 , 0.5500 , 0.5166 , 0.4853 , 0.4559 , 0.4283 , 0.5504 , 0.5171 , 0.4857 , 0.4563 , 0.4287 , 0.4027 , 0.3783 , 0.3554 , 0.3338 , 0.3136 , 0.2946 , 0.2767 , 0.2600 , 0.2442 , 0.2294 T .
The required solution x * for Example 4 is given by
x * = 0.01670 , 0.03324 , 0.04961 , 0.06582 , 0.08186 , 0.09774 , 0.1135 , 0.1290 , 0.1444 , 0.1597 , 0.1748 , 0.1897 , 0.2045 , 0.2191 , 0.2336 , 0.2479 , 0.2621 , 0.2761 , 0.2900 , 0.3038 , 0.3174 , 0.3308 , 0.3441 , 0.3573 , 0.3703 , 0.3832 , 0.3960 , 0.4086 , 0.4211 , 0.4334 , 0.4456 , 0.4577 , 0.4696 , 0.4814 , 0.4931 , 0.5046 , 0.5160 , 0.5273 , 0.5384 , 0.5494 , 0.5603 , 0.5711 , 0.5817 , 0.5922 , 0.6026 , 0.6129 , 0.6230 , 0.6330 , 0.6429 , 0.6527 , 0.6623 , 0.6718 , 0.6812 , 0.6905 , 0.6997 , 0.7087 , 0.7176 , 0.7264 , 0.7351 , 0.7437 , 0.7522 , 0.7605 , 0.7687 , 0.7769 , 0.7849 , 0.7927 , 0.8005 , 0.8082 , 0.8157 , 0.8232 , 0.8305 , 0.8377 , 0.8448 , 0.8518 , 0.8587 , 0.8654 , 0.8721 , 0.8786 , 0.8851 , 0.8914 , 0.8976 , 0.9038 , 0.9098 , 0.9157 , 0.9215 , 0.9272 , 0.9328 , 0.9382 , 0.9436 , 0.9489 , 0.9541 , 0.9591 , 0.9641 , 0.9689 , 0.9737 , 0.9783 , 0.9829 , 0.9873 , 0.9916 , 0.9959 T .

References

  1. Ortega, J.M.; Rheinboldt, W.C. Iterative Solutions of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970.
  2. Romero, A.N.; Ezquerro, J.A.; Hernandez, M.A. Aproximación de Soluciones de Algunas Ecuaciones Integrales de Hammerstein Mediante Métodos Iterativos tipo Newton, XXI Congreso de Ecuaciones Diferenciales y Aplicaciones. Master's Thesis, Universidad de Castilla-La Mancha (UCLM), Ciudad Real, Spain, 2009.
  3. Sharma, J.R.; Gupta, P. An efficient fifth order method for solving systems of nonlinear equations. Comput. Math. Appl. 2014, 67, 591–601.
  4. Xiao, X.Y.; Yin, H.W. A new class of methods with higher order of convergence for solving systems of nonlinear equations. Appl. Math. Comput. 2015, 264, 300–309.
  5. Xiao, X.Y.; Yin, H.W. Increasing the order of convergence for iterative methods to solve nonlinear systems. Calcolo 2016, 53, 285–300.
  6. Xiao, X.Y.; Yin, H.W. Achieving higher order of convergence for solving systems of nonlinear equations. Appl. Math. Comput. 2017, 311, 251–261.
  7. Grau-Sánchez, M.; Grau, À.; Noguera, M. On the computational efficiency index and some iterative methods for solving systems of nonlinear equations. J. Comput. Appl. Math. 2011, 236, 1259–1266.
  8. Ramos, H.; Monteiro, M.T.T. A new approach based on the Newton's method to solve systems of nonlinear equations. J. Comput. Appl. Math. 2017, 318, 3–13.
  9. Argyros, I.K. The Theory and Application of Iterative Methods with Applications, 2nd ed.; Engineering Series; CRC Press/Taylor & Francis: Boca Raton, FL, USA, 2022.
  10. Argyros, I.K.; George, S. On the unified convergence analysis of Newton-type methods for solving generalized equations with the Aubin property. J. Complex. 2024, 81, 101817.
  11. Xiao, X.; Yin, H. Accelerating the convergence speed of iterative methods for solving nonlinear systems. Appl. Math. Comput. 2018, 333, 8–19.
  12. Grau-Sánchez, M.; Grau, À.; Noguera, M. Ostrowski type methods for solving systems of nonlinear equations. Appl. Math. Comput. 2011, 218, 2377–2385.
  13. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton-Jarratt's composition. Numer. Algorithms 2010, 55, 87–99.
  14. Cordero, A.; Torregrosa, J.R. Variants of Newton's method for functions of several variables. Appl. Math. Comput. 2006, 183, 199–208.
  15. Homeier, H.H.H. A modified Newton method with cubic convergence: The multivariable case. J. Comput. Appl. Math. 2004, 169, 161–169.
  16. Homeier, H.H.H. On Newton-type methods with cubic convergence. J. Comput. Appl. Math. 2005, 176, 425–432.
  17. Kou, J.; Li, Y.; Wang, X. Some modifications of Newton's method with fifth-order convergence. J. Comput. Appl. Math. 2007, 209, 146–152.
  18. Noor, M.A.; Waseem, M. Some iterative methods for solving a system of nonlinear equations. Comput. Math. Appl. 2009, 57, 101–106.
  19. Sharma, J.R.; Arora, H. Efficient Jarratt-like methods for solving systems of nonlinear equations. Calcolo 2014, 51, 193–210.
  20. Sharma, J.R.; Guha, R.K.; Sharma, R. An efficient fourth order weighted-Newton method for solving systems of nonlinear equations. Numer. Algorithms 2013, 62, 307–323.
  21. Babaei, M.; Hajmohammad, M.H.; Asemi, K. Natural frequency and dynamic analyses of functionally graded saturated porous annular sector plate and cylindrical panel based on 3D elasticity. Aerosp. Sci. Technol. 2020, 96, 105524.
  22. Babaei, M.; Kiarasi, F.; Asemi, K.; Dimitri, R.; Tornabene, F. Transient Thermal Stresses in FG Porous Rotating Truncated Cones Reinforced by Graphene Platelets. Appl. Sci. 2022, 12, 3932.
  23. Frontini, M.; Sormani, E. Third-order methods from quadrature formulae for solving systems of nonlinear equations. Appl. Math. Comput. 2004, 149, 771–782.
Table 7. Numerical results for Example 7.

| $n$ | $a_n$ | $\Phi_0(a_n)$ |
|---|---|---|
| 0 | 0 |  |
| 1 |  |  |
| 2 |  |  |
| 3 |  |  |