Article

Convergence of Higher Order Jarratt-Type Schemes for Nonlinear Equations from Applied Sciences

by Ramandeep Behl 1, Ioannis K. Argyros 2,*, Fouad Othman Mallawi 1 and Christopher I. Argyros 3
1 Department of Mathematics, Faculty of Science, King Abdulaziz University, P.O. Box 80203, Jeddah 21589, Saudi Arabia
2 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
3 Department of Computer Science, University of Oklahoma, Norman, OK 73071, USA
* Author to whom correspondence should be addressed.
Symmetry 2021, 13(7), 1162; https://doi.org/10.3390/sym13071162
Submission received: 27 May 2021 / Revised: 15 June 2021 / Accepted: 25 June 2021 / Published: 28 June 2021
(This article belongs to the Section Chemistry: Symmetry/Asymmetry)

Abstract: Symmetries are important in studying the dynamics of physical systems, which in turn are converted into equations to be solved. Jarratt's method and its variants have been used extensively for this purpose. In the present study, we therefore develop a unified local convergence analysis of higher order Jarratt-type schemes for equations defined on Banach spaces. Such schemes have previously been studied on the multidimensional Euclidean space under the assumption that high order derivatives (not appearing in the schemes) exist. Moreover, no error estimates or results on the uniqueness of the solution were given. These omissions restrict the applicability of the methods. We address all of these problems using only the first order derivative (the only one appearing in the schemes). Hence, the region of applicability of the existing schemes is enlarged. Owing to its generality, our technique can be applied to other methods as well. Numerical experiments from chemistry and other disciplines of applied sciences complete this study.

1. Introduction

Problems from applied sciences such as mathematics, biology, chemistry, and physics (including symmetries), to mention a few, are converted into nonlinear equations, which are solved by iterative methods since exact solutions are hard to find. Let $T_1$ and $T_2$ denote Banach spaces and $D \subset T_1$ stand for an open and convex set. Moreover, we use the notation $\mathcal{L}(T_1, T_2)$ for the space of continuous linear operators mapping $T_1$ into $T_2$. The task of determining a solution $x_*$ of the equation
$$F(x) = 0, \qquad (1)$$
where $F : D \to T_2$ is Fréchet-differentiable, is of extreme significance in computational disciplines. Finding $x_*$ in closed form is desirable but rarely attainable. That is why one resorts to developing iterative schemes that approximate $x_*$, provided certain convergence criteria hold.
One of the most basic and popular iterative methods is Newton's method [1], defined as follows:
$$x_{\sigma+1} = x_\sigma - F'(x_\sigma)^{-1} F(x_\sigma).$$
It is of second order of convergence. However, it is a one-point method, and one-point methods have several issues concerning order and computational efficiency. For instance, to attain a third order one-point iterative method we need evaluations of the function $f$, the first order derivative $f'$, and the second order derivative $f''$ (more details can be found in [1,2]). Second or higher order derivatives are either time consuming to compute or do not exist. Thus, researchers have focused on the most important class of iterative methods, known as multi-point methods. These multi-point methods are of great practical importance, since they overcome the theoretical limits of one-point methods (more details can be found in [1]).
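The Newton iteration above can be sketched in a few lines; this is a minimal scalar illustration (the function names `newton`, `F`, `dF` are ours, not from the paper):

```python
# Minimal sketch of Newton's method for a scalar equation F(x) = 0,
# assuming F is differentiable and x0 is close enough to a simple root.

def newton(F, dF, x0, tol=1e-12, max_iter=50):
    """Iterate x_{s+1} = x_s - F(x_s) / F'(x_s) until the step is below tol."""
    x = x0
    for _ in range(max_iter):
        x_new = x - F(x) / dF(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: solve x^3 - 2 = 0 starting from x0 = 1.
root = newton(lambda x: x**3 - 2, lambda x: 3 * x**2, 1.0)
```

Each iteration uses one function and one derivative evaluation, which is exactly the cost profile the multi-point methods below try to improve upon.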
Ostrowski [2] was the first to suggest an optimal [3] multi-point scheme of fourth order, which requires only three functional evaluations. Later, Jarratt [4,5] in 1966 and King [6] in 1973 gave many optimal fourth order multi-point methods. King further demonstrated that Ostrowski's method is a special case of his scheme. A plethora of such schemes can be found in [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33] and the references therein.
In particular, we study the (local) convergence of the three-step scheme [33], given for each $\sigma = 0, 1, 2, \ldots$ by
$$y_\sigma = x_\sigma - \alpha F'(x_\sigma)^{-1} F(x_\sigma), \quad z_\sigma = A_\lambda(x_\sigma, y_\sigma), \quad x_{\sigma+1} = z_\sigma - B_\sigma F'(x_\sigma)^{-1} F(z_\sigma), \qquad (2)$$
as well as the $j$-step scheme [33]:
$$z_\sigma^{(1)} = x_\sigma - \alpha F'(x_\sigma)^{-1} F(x_\sigma), \quad z_\sigma^{(2)} = x_\sigma - C_\sigma F'(x_\sigma)^{-1} F(x_\sigma), \quad z_\sigma^{(3)} = z_\sigma^{(2)} - B_\sigma F'(x_\sigma)^{-1} F(z_\sigma^{(2)}), \quad z_\sigma^{(4)} = z_\sigma^{(3)} - B_\sigma F'(x_\sigma)^{-1} F(z_\sigma^{(3)}), \; \ldots, \; z_\sigma^{(j)} = z_\sigma^{(j-1)} - B_\sigma F'(x_\sigma)^{-1} F(z_\sigma^{(j-1)}), \quad j = 2, 3, \ldots, m, \quad x_{\sigma+1} = z_\sigma^{(m)}, \qquad (3)$$
with $\alpha \in \mathbb{R} \setminus \{0\}$, where $A_\lambda : T_1 \times T_1 \to T_1$ is a continuous operator defining a scheme of order $\lambda \ge 2$, $B : D \to \mathcal{L}(T_1, T_2)$ and $C : D \to \mathcal{L}(T_1, T_2)$. We have left $A_\lambda$ as general as possible so as to include numerous special cases. As an example, $A_\lambda$ can be $A_4(x_k, y_k) = y_k - F'(y_k)^{-1} F(y_k)$. Other choices are given by (31), (32), and in the paper [33]. Schemes (2) and (3) were shown in [33] to be of order $(\lambda + 2)$ and $2j$, respectively, when $T_1 = T_2 = \mathbb{R}^j$, $j = 1, 2, 3, \ldots$. However, high order derivatives were used there in order to establish the convergence order.
Moreover, we refer the reader to [33] for a plethora of choices of α , B σ , C σ leading to already studied schemes or new schemes. Some choices are also given by us in the numerical section of this study. The computational efficiencies and other benefits were also presented in [33].
We have some concerns with the aforementioned studies:
(a)
The convergence order was established by utilizing Taylor series expansions requiring higher order derivatives (not appearing in the schemes);
(b)
Lack of computable estimates on the distances $\| x_\sigma - x_* \|$;
(c)
Results related to the uniqueness of the solutions are not given;
(d)
We do not know in advance how many iterates are needed to achieve a prescribed error tolerance;
(e)
Earlier studies have been made only on the multidimensional Euclidean space.
Concerns (a)–(e) limit the applicability of Schemes (2) and (3) and similar ones [1,4,5,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,28,29,30,31,32]. Let us consider a motivational example. Define the function $F$ on $T_1 = T_2 = \mathbb{R}$, $D = [-\frac{1}{2}, \frac{3}{2}]$ by
$$F(\tau) = \begin{cases} \tau^3 \ln \tau^2 + \tau^5 - \tau^4, & \tau \neq 0, \\ 0, & \tau = 0. \end{cases}$$
We obtain
$$F'(\tau) = 3\tau^2 \ln \tau^2 + 5\tau^4 - 4\tau^3 + 2\tau^2,$$
$$F''(\tau) = 6\tau \ln \tau^2 + 20\tau^3 - 12\tau^2 + 10\tau,$$
$$F'''(\tau) = 6 \ln \tau^2 + 60\tau^2 - 24\tau + 22.$$
Thus, we see that $F'''(\tau)$ is not bounded on $D$. Therefore, results requiring the existence of $F'''(\tau)$ or higher derivatives cannot be applied to study the convergence of (2), (3), and the methods in [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32].
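The unboundedness of the third derivative is easy to check numerically; the helper name `F3` below is ours, and the snippet merely evaluates the formula for $F'''$ above near $\tau = 0$:

```python
import math

# F'''(t) = 6 ln(t^2) + 60 t^2 - 24 t + 22 contains ln(t^2),
# so it is unbounded in magnitude as t -> 0 inside D = [-1/2, 3/2].

def F3(t):
    return 6 * math.log(t**2) + 60 * t**2 - 24 * t + 22

for t in (1e-2, 1e-5, 1e-8):
    print(t, F3(t))  # the values blow up (negatively) as t shrinks
```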
The novelty of our study lies in the fact that we address concerns (a)–(e) using only the first derivative (the only one appearing in the schemes), together with very general conditions. In this way, we also provide computable upper bound estimates on $\| x_\sigma - x_* \|$ and results on the uniqueness of solutions. Moreover, our results are obtained in the more general setting of a Banach space. Hence, the region of applicability of these schemes is extended. It is worth noticing that computing the convergence radii reveals how limited the choice of initial points can be. Our technique is general enough to be used on other schemes in a similar fashion [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33]. We suppose from now on that $x_*$ is a simple solution of Equation (1).
It is worth noticing that symmetry principles are foundational in quantum physics and the micro-world. Once such problems are converted into equations of the form (1), their solutions are hard to obtain in closed form or analytically. That is why schemes of this type are important to study.
The rest of the study is organized as follows: the local convergence of these schemes is given in Section 2, and the numerical experiments in Section 3. In particular, work similar to Examples 2 and 4 of Section 3 can be found in [13,14,25]. Finally, Section 4 is devoted to concluding remarks.

2. Analysis in the Sense of Local Convergence

It is convenient for the analysis of scheme (2) to introduce some parameters and functions. Suppose that a scalar equation $f(\tau) = 0$ has real solutions $s_1, s_2, \ldots, s_m \in D \subset \mathbb{R}$ with $s_1 \le s_2 \le \cdots \le s_m$. Then, $s_1$ is called the smallest or minimal solution of the equation $f(\tau) = 0$ in $D$. Set $\Omega = [0, \infty)$.
Suppose the equation(s):
(i)
$$\mu_0(\tau) - 1 = 0$$
has a smallest solution $R_0 \in \Omega \setminus \{0\}$ for some continuous and non-decreasing function $\mu_0 : \Omega \to \Omega$. Set $\Omega_0 = [0, 2R_0)$.
(ii)
$$\psi_1(\tau) - 1 = 0, \qquad \psi_2(\tau) - 1 = 0$$
have smallest solutions $\rho_1, \rho_2 \in \Omega_0 \setminus \{0\}$, respectively, for some continuous and non-decreasing functions $\mu : \Omega_0 \to \Omega$, $\mu_1 : \Omega_0 \to \Omega$ and $\psi_1, \psi_2 : \Omega_0 \to \Omega$, with
$$\psi_1(\tau) = \frac{\int_0^1 \mu((1-\theta)\tau)\, d\theta + |1-\alpha| \int_0^1 \mu_1(\theta\tau)\, d\theta}{1 - \mu_0(\tau)}.$$
(iii)
$$\psi_3(\tau) - 1 = 0$$
has a smallest solution $\rho_3 \in \Omega_0 \setminus \{0\}$ for some continuous and non-decreasing function $p : \Omega_0 \to \Omega$, with
$$\psi_3(\tau) = \left[ \frac{\int_0^1 \mu((1-\theta)\psi_2(\tau)\tau)\, d\theta}{1 - \mu_0(\psi_2(\tau)\tau)} + \frac{p(\tau) \int_0^1 \mu_1(\theta \psi_2(\tau)\tau)\, d\theta}{1 - \mu_0(\tau)} + \frac{\big(\mu_0(\tau) + \mu_0(\psi_2(\tau)\tau)\big) \int_0^1 \mu_1(\theta \psi_2(\tau)\tau)\, d\theta}{\big(1 - \mu_0(\tau)\big)\big(1 - \mu_0(\psi_2(\tau)\tau)\big)} \right] \psi_2(\tau).$$
The parameter $\rho$ defined by
$$\rho = \min\{\rho_k\}, \quad k = 1, 2, 3, \qquad (7)$$
shall be shown to be a convergence radius for scheme (2). Set $\Omega_1 = [0, \rho)$. It follows from this definition that, for all $\tau \in \Omega_1$,
$$0 \le \mu_0(\tau) < 1, \qquad (8)$$
$$0 \le \mu_0(\psi_2(\tau)\tau) < 1, \qquad (9)$$
$$0 \le \psi_k(\tau) < 1. \qquad (10)$$
By $\bar{S}(x_*, \mu)$ we denote the closure of the ball $S(x_*, \mu)$ with center $x_* \in T_1$ and radius $\mu > 0$.
The local convergence analysis of scheme (2) relies on the conditions $(A)$, with the scalar functions as previously defined.
Suppose:
(A1) For each $x \in D$,
$$\| F'(x_*)^{-1} (F'(x) - F'(x_*)) \| \le \mu_0(\| x - x_* \|).$$
Set $S_0 = D \cap S(x_*, R_0)$.
(A2) For each $x \in S_0$,
$$\| F'(x_*)^{-1} (F'(y) - F'(x)) \| \le \mu(\| y - x \|), \quad \| F'(x_*)^{-1} F'(x) \| \le \mu_1(\| x - x_* \|),$$
$$\| A_\lambda(x, y) - x_* \| \le \psi_2(\| x - x_* \|) \| x - x_* \| \quad \text{and} \quad \| I - B(x) \| \le p(\| x - x_* \|),$$
where $y = x - \alpha F'(x)^{-1} F(x)$.
(A3) $\bar{S}(x_*, \rho) \subset D$, and
(A4) there exists $\rho_* \ge \rho$ satisfying
$$\int_0^1 \mu_0(\theta \rho_*)\, d\theta < 1.$$
Set $S_1 = D \cap \bar{S}(x_*, \rho_*)$.
Next, the local convergence analysis of scheme (2) is developed using the conditions $(A)$, with the functions $\psi_k$ as previously defined.
Theorem 1.
Suppose the conditions $(A)$ hold and choose $x_0 \in S(x_*, \rho) \setminus \{x_*\}$. Then, the sequence $\{x_\sigma\}$ starting from $x_0$ and generated by scheme (2) is well defined in $S(x_*, \rho)$, stays in $S(x_*, \rho)$ for all $\sigma = 0, 1, 2, \ldots$, and converges to $x_*$. Moreover, the only solution of Equation (1) in the set $S_1$ is $x_*$.
Proof. 
The following assertions shall be shown using mathematical induction:
$$\{x_\sigma\} \subset S(x_*, \rho), \quad \lim_{\sigma \to \infty} x_\sigma = x_*, \qquad (11)$$
$$\| y_\sigma - x_* \| \le \psi_1(\| x_\sigma - x_* \|) \| x_\sigma - x_* \| \le \| x_\sigma - x_* \| < \rho, \qquad (12)$$
$$\| z_\sigma - x_* \| \le \psi_2(\| x_\sigma - x_* \|) \| x_\sigma - x_* \| \le \| x_\sigma - x_* \|, \qquad (13)$$
and
$$\| x_{\sigma+1} - x_* \| \le \psi_3(\| x_\sigma - x_* \|) \| x_\sigma - x_* \| \le \| x_\sigma - x_* \| < \rho, \qquad (14)$$
where the functions $\psi_k$ are as given previously and the radius $\rho$ is as defined by (7).
Let $v \in S(x_*, \rho) \setminus \{x_*\}$. Using (7), (8), and $(A1)$, we obtain
$$\| F'(x_*)^{-1} (F'(v) - F'(x_*)) \| \le \mu_0(\| v - x_* \|) \le \mu_0(\rho) < 1.$$
Hence, by the Banach lemma on invertible operators [8], $F'(v)^{-1} \in \mathcal{L}(T_2, T_1)$, with
$$\| F'(v)^{-1} F'(x_*) \| \le \frac{1}{1 - \mu_0(\| v - x_* \|)}. \qquad (15)$$
Then, the iterates $y_0$, $z_0$, and $x_1$ are well defined. Thus, by the three steps of scheme (2) we can write, respectively:
$$y_0 - x_* = x_0 - x_* - F'(x_0)^{-1} F(x_0) + (1 - \alpha) F'(x_0)^{-1} F(x_0) = F'(x_0)^{-1} F'(x_*) \int_0^1 F'(x_*)^{-1} \big( F'(x_* + \theta(x_0 - x_*)) - F'(x_0) \big)\, d\theta\, (x_0 - x_*) + (1 - \alpha) F'(x_0)^{-1} F(x_0), \qquad (16)$$
$$z_0 - x_* = A_\lambda(x_0, y_0) - x_*, \qquad (17)$$
and
$$x_1 - x_* = z_0 - x_* - B_0 F'(x_0)^{-1} F(z_0) = \big( z_0 - x_* - F'(z_0)^{-1} F(z_0) \big) + (I - B_0) F'(x_0)^{-1} F(z_0) + \big( F'(z_0)^{-1} - F'(x_0)^{-1} \big) F(z_0). \qquad (18)$$
By (7), (9), (10) (for $k = 1, 2, 3$), (15) (for $v = x_0, z_0$), $(A2)$, $(A3)$, and (16)–(18), we get, respectively:
$$\| y_0 - x_* \| \le \frac{\int_0^1 \mu((1-\theta)\| x_0 - x_* \|)\, d\theta + |1-\alpha| \int_0^1 \mu_1(\theta \| x_0 - x_* \|)\, d\theta}{1 - \mu_0(\| x_0 - x_* \|)} \| x_0 - x_* \| \le \psi_1(\| x_0 - x_* \|) \| x_0 - x_* \| \le \| x_0 - x_* \| < \rho, \qquad (19)$$
$$\| z_0 - x_* \| = \| A_\lambda(x_0, y_0) - x_* \| \le \psi_2(\| x_0 - x_* \|) \| x_0 - x_* \| \le \| x_0 - x_* \|, \qquad (20)$$
and
$$\| x_1 - x_* \| \le \left[ \frac{\int_0^1 \mu((1-\theta)\| z_0 - x_* \|)\, d\theta}{1 - \mu_0(\| z_0 - x_* \|)} + \frac{p(\| x_0 - x_* \|) \int_0^1 \mu_1(\theta \| z_0 - x_* \|)\, d\theta}{1 - \mu_0(\| x_0 - x_* \|)} + \frac{\big(\mu_0(\| z_0 - x_* \|) + \mu_0(\| x_0 - x_* \|)\big) \int_0^1 \mu_1(\theta \| z_0 - x_* \|)\, d\theta}{\big(1 - \mu_0(\| x_0 - x_* \|)\big)\big(1 - \mu_0(\| z_0 - x_* \|)\big)} \right] \| z_0 - x_* \| \le \psi_3(\| x_0 - x_* \|) \| x_0 - x_* \|, \qquad (21)$$
so $y_0, z_0, x_1 \in S(x_*, \rho)$ and the estimates (12)–(14) hold for $\sigma = 0$. The induction for (11)–(14) is completed if $x_0$, $y_0$, $z_0$, and $x_1$ are replaced by $x_\sigma$, $y_\sigma$, $z_\sigma$, and $x_{\sigma+1}$, respectively, in the preceding calculations. The completed induction gives the estimate
$$\| x_{\sigma+1} - x_* \| \le \gamma \| x_\sigma - x_* \| < \rho,$$
with $\gamma = \psi_3(\| x_0 - x_* \|) \in [0, 1)$, from which we conclude $\lim_{\sigma \to \infty} x_\sigma = x_*$ and $x_{\sigma+1} \in S(x_*, \rho)$.
To show the uniqueness part, set $M = \int_0^1 F'(x_* + \theta(u - x_*))\, d\theta$ for some $u \in S_1$ with $F(u) = 0$. Then, in view of $(A1)$ and $(A4)$, we obtain
$$\| F'(x_*)^{-1} (M - F'(x_*)) \| \le \int_0^1 \mu_0(\theta \| u - x_* \|)\, d\theta \le \int_0^1 \mu_0(\theta \rho_*)\, d\theta < 1,$$
leading to $u = x_*$, since $0 = F(u) - F(x_*) = M(u - x_*)$, $M^{-1} \in \mathcal{L}(T_2, T_1)$, and so $u - x_* = M^{-1}(0) = 0$. □
Remark 1.
(a) By $(A1)$ and the estimate
$$\| F'(x_*)^{-1} F'(x) \| = \| F'(x_*)^{-1} (F'(x) - F'(x_*)) + I \| \le 1 + \| F'(x_*)^{-1} (F'(x) - F'(x_*)) \| \le 1 + \mu_0(\| x - x_* \|),$$
the second condition in $(A2)$ can be dropped, with $\mu_1$ replaced by
$$\mu_1(\tau) = 1 + \mu_0(\rho) \quad \text{or} \quad \mu_1(\tau) = 2.$$
(b) The results obtained here can be used for operators $F$ satisfying the autonomous differential equation [9,10] of the form
$$F'(x) = P(F(x)),$$
where $P$ is a known continuous operator. Since $F'(x_*) = P(F(x_*)) = P(0)$, we can apply the results without actually knowing the solution $x_*$. As an example, let $F(x) = e^x - 1$. Then we can choose $P(x) = x + 1$.
(c) If $\mu_0(\tau) = K_0 \tau$ and $\mu(\tau) = K\tau$, then $r_1 = \frac{2}{2K_0 + K}$ was shown in [9,10] to be the convergence radius for Newton's method. It follows from (7) and the definition of $r_1$ that the convergence radius $\rho$ of method (2) cannot be larger than the convergence radius $r_1$ of the second order Newton's method. As already noted in [9,10], $r_1$ is at least as large as the convergence ball given by Rheinboldt [28]:
$$r_R = \frac{2}{3K_1}.$$
In particular, for $K_0 < K_1$ (where $K_1$ is the Lipschitz constant on $D$), we have
$$r_R < r_1$$
and
$$\frac{r_R}{r_1} \to \frac{1}{3} \quad \text{as} \quad \frac{K_0}{K_1} \to 0.$$
Therefore, our convergence ball $r_1$ is at most three times larger than Rheinboldt's. The same value of $r_R$ was given by Traub [1].
(d) Method (2) does not change if we use the conditions of Theorem 1 instead of the stronger conditions given in [33]. Moreover, for the error bounds in practice we can use the Computational Order of Convergence (COC) [32]:
$$\eta = \frac{\ln(s_{\sigma+1}/s_\sigma)}{\ln(s_\sigma/s_{\sigma-1})}, \quad \sigma = 1, 2, \ldots, \quad \text{where } s_\sigma = \| x_\sigma - x_* \|,$$
or the Approximate Computational Order of Convergence (ACOC) [32]:
$$\eta_* = \frac{\ln(v_{\sigma+1}/v_\sigma)}{\ln(v_\sigma/v_{\sigma-1})}, \quad \sigma = 2, 3, \ldots, \quad \text{where } v_\sigma = \| x_\sigma - x_{\sigma-1} \|.$$
In this way, the convergence order is obtained without evaluating derivatives higher than the first Fréchet derivative.
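The ACOC formula above is easy to apply in practice, since it needs only successive iterates and no knowledge of $x_*$. A small sketch (the Newton iterates used as data are illustrative, not from the paper's experiments):

```python
import math

def acoc(xs):
    """eta* = ln(v_{s+1}/v_s) / ln(v_s/v_{s-1}), with v_s = |x_s - x_{s-1}|."""
    v = [abs(xs[k] - xs[k - 1]) for k in range(1, len(xs))]
    return [math.log(v[k + 1] / v[k]) / math.log(v[k] / v[k - 1])
            for k in range(1, len(v) - 1)]

# Newton iterates for x^2 - 2 = 0 converge quadratically, so ACOC ~ 2.
xs = [1.0]
for _ in range(5):
    x = xs[-1]
    xs.append(x - (x * x - 2) / (2 * x))
orders = acoc(xs)
print(orders[-1])  # close to 2
```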
Next, we present the local convergence of scheme (3) in an analogous way. Define the following functions on $\Omega_0$:
$$\bar\psi_2(\tau) = \frac{\int_0^1 \mu((1-\theta)\tau)\, d\theta + q(\tau) \int_0^1 \mu_1(\theta\tau)\, d\theta}{1 - \mu_0(\tau)},$$
$$\bar\psi_3(\tau) = \left[ \frac{\int_0^1 \mu((1-\theta)\bar\psi_2(\tau)\tau)\, d\theta}{1 - \mu_0(\bar\psi_2(\tau)\tau)} + \frac{p(\tau) \int_0^1 \mu_1(\theta \bar\psi_2(\tau)\tau)\, d\theta}{1 - \mu_0(\tau)} + \frac{\int_0^1 \mu_1(\theta \bar\psi_2(\tau)\tau)\, d\theta}{1 - \mu_0(\bar\psi_2(\tau)\tau)} + \frac{\int_0^1 \mu_1(\theta \bar\psi_2(\tau)\tau)\, d\theta}{1 - \mu_0(\tau)} \right] \bar\psi_2(\tau),$$
and
$$\bar\psi_4(\tau) = h(\tau)\bar\psi_3(\tau), \quad \bar\psi_5(\tau) = h(\tau)\bar\psi_4(\tau), \quad \ldots, \quad \bar\psi_m(\tau) = h(\tau)\bar\psi_{m-1}(\tau), \quad m = 4, 5, 6, \ldots, j,$$
where $q : \Omega_0 \to \Omega$ is a continuous and non-decreasing function, and
$$h(\tau) = \frac{\int_0^1 \mu((1-\theta)\tau)\, d\theta}{1 - \mu_0(\tau)} + \frac{p(\tau) \int_0^1 \mu_1(\theta\tau)\, d\theta}{1 - \mu_0(\tau)} + \frac{2 \int_0^1 \mu_1(\theta\tau)\, d\theta}{1 - \mu_0(\tau)}.$$
Suppose the equations
$$\psi_1(\tau) - 1 = 0, \quad \bar\psi_2(\tau) - 1 = 0, \quad \bar\psi_3(\tau) - 1 = 0, \quad \ldots, \quad \bar\psi_m(\tau) - 1 = 0$$
have smallest solutions in $\Omega_0 \setminus \{0\}$, denoted by $\rho_1, \bar\rho_2, \ldots, \bar\rho_m$, respectively. Define the parameter
$$\bar\rho = \min\{\rho_1, \bar\rho_2, \ldots, \bar\rho_m\}.$$
It shall be shown that $\bar\rho$ is a convergence radius for scheme (3).
Consider the conditions $(A1)$, $(A3)$, $(A4)$ with $\bar\rho$ replacing $\rho$. Moreover, replace the second condition in $(A2)$ by
$$\| I - C(x) \| \le q(\| x - x_* \|) \quad \text{for each } x \in S_0.$$
Let us call the resulting conditions $(A)'$. Then, as in Theorem 1, we obtain in turn the following estimates, which also motivate the introduction of the $\bar\psi$ functions:
$$\| z_\sigma^{(1)} - x_* \| \le \psi_1(\| x_\sigma - x_* \|) \| x_\sigma - x_* \| \le \| x_\sigma - x_* \| < \bar\rho, \qquad (26)$$
$$z_\sigma^{(2)} - x_* = x_\sigma - x_* - F'(x_\sigma)^{-1} F(x_\sigma) + (I - C_\sigma) F'(x_\sigma)^{-1} F(x_\sigma), \quad \text{so} \quad \| z_\sigma^{(2)} - x_* \| \le \frac{\int_0^1 \mu((1-\theta)\| x_\sigma - x_* \|)\, d\theta + q(\| x_\sigma - x_* \|) \int_0^1 \mu_1(\theta \| x_\sigma - x_* \|)\, d\theta}{1 - \mu_0(\| x_\sigma - x_* \|)} \| x_\sigma - x_* \| = \bar\psi_2(\| x_\sigma - x_* \|) \| x_\sigma - x_* \| \le \| x_\sigma - x_* \|, \qquad (27)$$
$$\| z_\sigma^{(3)} - x_* \| \le \left[ \frac{\int_0^1 \mu((1-\theta)\| z_\sigma^{(2)} - x_* \|)\, d\theta}{1 - \mu_0(\| z_\sigma^{(2)} - x_* \|)} + \frac{p(\| x_\sigma - x_* \|) \int_0^1 \mu_1(\theta \| z_\sigma^{(2)} - x_* \|)\, d\theta}{1 - \mu_0(\| x_\sigma - x_* \|)} + \frac{\int_0^1 \mu_1(\theta \| z_\sigma^{(2)} - x_* \|)\, d\theta}{1 - \mu_0(\| z_\sigma^{(2)} - x_* \|)} + \frac{\int_0^1 \mu_1(\theta \| z_\sigma^{(2)} - x_* \|)\, d\theta}{1 - \mu_0(\| x_\sigma - x_* \|)} \right] \| z_\sigma^{(2)} - x_* \| \le \bar\psi_3(\| x_\sigma - x_* \|) \| x_\sigma - x_* \| \le \| x_\sigma - x_* \|, \qquad (28)$$
$$\| z_\sigma^{(4)} - x_* \| \le \left[ \frac{\int_0^1 \mu((1-\theta)\| z_\sigma^{(3)} - x_* \|)\, d\theta}{1 - \mu_0(\| z_\sigma^{(3)} - x_* \|)} + \frac{p(\| x_\sigma - x_* \|) \int_0^1 \mu_1(\theta \| z_\sigma^{(3)} - x_* \|)\, d\theta}{1 - \mu_0(\| x_\sigma - x_* \|)} + \frac{\int_0^1 \mu_1(\theta \| z_\sigma^{(3)} - x_* \|)\, d\theta}{1 - \mu_0(\| z_\sigma^{(3)} - x_* \|)} + \frac{\int_0^1 \mu_1(\theta \| z_\sigma^{(3)} - x_* \|)\, d\theta}{1 - \mu_0(\| x_\sigma - x_* \|)} \right] \| z_\sigma^{(3)} - x_* \| \le h(\| x_\sigma - x_* \|) \| z_\sigma^{(3)} - x_* \| \le \bar\psi_4(\| x_\sigma - x_* \|) \| x_\sigma - x_* \| \le \| x_\sigma - x_* \|, \qquad (29)$$
and, continuing in this fashion,
$$\| x_{\sigma+1} - x_* \| = \| z_\sigma^{(j)} - x_* \| \le \bar\psi_j(\| x_\sigma - x_* \|) \| x_\sigma - x_* \| \le \| x_\sigma - x_* \|. \qquad (30)$$
Hence, we get the local convergence result for scheme (3).
Theorem 2.
We suppose that the conditions $(A)'$ hold and choose $x_0 \in S(x_*, \bar\rho) \setminus \{x_*\}$. Then, the sequence $\{x_\sigma\}$ generated by scheme (3) converges to $x_*$, and the estimates (26)–(30) as well as the uniqueness result of Theorem 1 hold.

3. Numerical Applications

We present computational results based on the theoretical results suggested in this paper. We choose $A_3(x_\sigma, y_\sigma) = x_\sigma - T_\sigma F'(x_\sigma)^{-1} F(x_\sigma)$ and $A_4(x_\sigma, y_\sigma) = x_\sigma - H_\sigma F'(x_\sigma)^{-1} F(x_\sigma)$, respectively, in scheme (2) in order to obtain fifth and sixth order iterative procedures. In particular, we have
$$y_\sigma = x_\sigma - \alpha F'(x_\sigma)^{-1} F(x_\sigma), \quad z_\sigma = x_\sigma - T_\sigma F'(x_\sigma)^{-1} F(x_\sigma), \quad x_{\sigma+1} = z_\sigma - B_\sigma F'(x_\sigma)^{-1} F(z_\sigma), \qquad (31)$$
where
$$T_\sigma = \left[ \left(1 - \frac{\beta}{2\alpha}\right) F'(x_\sigma) + \frac{\beta}{2\alpha} F'(y_\sigma) \right]^{-1} \left[ \left(1 + \frac{1-\beta}{2\alpha}\right) F'(x_\sigma) - \frac{1-\beta}{2\alpha} F'(y_\sigma) \right], \quad \beta \in \mathbb{R},$$
$$B_\sigma = \left[ \left(1 + \frac{\delta}{2\alpha}\right) F'(x_\sigma) - \frac{\delta}{2\alpha} F'(y_\sigma) \right]^{-1} \left[ \left(1 + \frac{2+\delta}{2\alpha}\right) F'(x_\sigma) - \frac{2+\delta}{2\alpha} F'(y_\sigma) \right], \quad \delta \in \mathbb{R},$$
and
$$y_\sigma = x_\sigma - \alpha F'(x_\sigma)^{-1} F(x_\sigma), \quad z_\sigma = x_\sigma - H_\sigma F'(x_\sigma)^{-1} F(x_\sigma), \quad x_{\sigma+1} = z_\sigma - B_\sigma F'(x_\sigma)^{-1} F(z_\sigma), \qquad (32)$$
where
$$H_\sigma = I + \frac{I - h_\sigma}{2\alpha} + \frac{(I - h_\sigma)^2}{2\alpha^2} + \left(1 - \frac{3}{2}\alpha\right) \Delta_\sigma, \quad \Delta_\sigma = \frac{1}{6\alpha^2} F'(x_\sigma)^{-1} \big( F'(y_\sigma) - 2F'(x_\sigma) + F'(w_\sigma) \big),$$
$$w_\sigma = 2x_\sigma - y_\sigma, \quad h_\sigma = F'(x_\sigma)^{-1} F'(y_\sigma).$$
For more details on these choices, please see the article by Zhanlav and Otgondorj [33]. Next, we show how to choose the $\psi$ functions for Schemes (31) and (32), respectively.
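In one dimension the linear operators in scheme (31) are numbers, so the inverses become divisions and the scheme can be sketched in a few lines. The reconstruction of $T_\sigma$ and $B_\sigma$ above is assumed, and the parameter values $\alpha = 2/3$, $\beta = 0$, $\delta = 1$ are chosen purely for illustration:

```python
# Scalar sketch of scheme (31): y-step, z-step with T, correction with B.
# The forms of T and B mirror the displayed operators; for scalars the
# bracketed inverses are just divisions.

def scheme31_step(F, dF, x, alpha=2 / 3, beta=0.0, delta=1.0):
    Fx, dFx = F(x), dF(x)
    y = x - alpha * Fx / dFx
    dFy = dF(y)
    c = 1 / (2 * alpha)
    # T = [(1 - b c) F'(x) + b c F'(y)]^{-1} [(1 + (1-b) c) F'(x) - (1-b) c F'(y)]
    T = ((1 + (1 - beta) * c) * dFx - (1 - beta) * c * dFy) / \
        ((1 - beta * c) * dFx + beta * c * dFy)
    z = x - T * Fx / dFx
    # B = [(1 + d c) F'(x) - d c F'(y)]^{-1} [(1 + (2+d) c) F'(x) - (2+d) c F'(y)]
    B = ((1 + (2 + delta) * c) * dFx - (2 + delta) * c * dFy) / \
        ((1 + delta * c) * dFx - delta * c * dFy)
    return z - B * F(z) / dFx

# Example: solve x^3 - 2 = 0 starting near the root.
x = 1.2
for _ in range(10):
    x = scheme31_step(lambda t: t**3 - 2, lambda t: 3 * t * t, x)
```

Note that $T_\sigma = B_\sigma = I$ at $x = x_*$ (where $F'(y) = F'(x)$), so the scheme reduces locally to Newton-like corrections.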
Suppose the equation
$$q_1(\tau) - 1 = 0$$
has a smallest solution $R_{q_1} \in \Omega_0 \setminus \{0\}$, where
$$q_1(\tau) = \left|1 - \frac{\beta}{2\alpha}\right| \mu_0(\tau) + \left|\frac{\beta}{2\alpha}\right| \mu_0(\psi_1(\tau)\tau).$$
Case $\beta = 0$.
Set $\Omega_1 = [0, \min\{R_0, R_{q_1}\})$. Define the function $\psi_2 : \Omega_1 \to \Omega$ by
$$\psi_2(\tau) = \frac{\int_0^1 \mu((1-\theta)\tau)\, d\theta}{1 - \mu_0(\tau)} + \frac{\big(\mu_0(\tau) + \mu_0(\psi_1(\tau)\tau)\big) \int_0^1 \mu_1(\theta\tau)\, d\theta}{2|\alpha| \big(1 - q_1(\tau)\big)\big(1 - \mu_0(\tau)\big)}.$$
The choice of the functions $q_1$ and $\psi_2$ is justified by the estimates
$$z_\sigma - x_* = x_\sigma - x_* - T_\sigma F'(x_\sigma)^{-1} F(x_\sigma) = x_\sigma - x_* - F'(x_\sigma)^{-1} F(x_\sigma) + (I - T_\sigma) F'(x_\sigma)^{-1} F(x_\sigma),$$
with
$$I - T_\sigma = \left[ \left(1 - \frac{\beta}{2\alpha}\right) F'(x_\sigma) + \frac{\beta}{2\alpha} F'(y_\sigma) \right]^{-1} \left[ \left(1 - \frac{\beta}{2\alpha}\right) F'(x_\sigma) + \frac{\beta}{2\alpha} F'(y_\sigma) - \left(1 + \frac{1-\beta}{2\alpha}\right) F'(x_\sigma) + \frac{1-\beta}{2\alpha} F'(y_\sigma) \right] = \frac{1}{2\alpha} \left[ \left(1 - \frac{\beta}{2\alpha}\right) F'(x_\sigma) + \frac{\beta}{2\alpha} F'(y_\sigma) \right]^{-1} \big( F'(y_\sigma) - F'(x_\sigma) \big),$$
so
$$\| z_\sigma - x_* \| \le \left[ \frac{\int_0^1 \mu((1-\theta)\| x_\sigma - x_* \|)\, d\theta}{1 - \mu_0(\| x_\sigma - x_* \|)} + \frac{\big(\mu_0(\| x_\sigma - x_* \|) + \mu_0(\| y_\sigma - x_* \|)\big) \int_0^1 \mu_1(\theta \| x_\sigma - x_* \|)\, d\theta}{2|\alpha| \big(1 - q_1(\| x_\sigma - x_* \|)\big)\big(1 - \mu_0(\| x_\sigma - x_* \|)\big)} \right] \| x_\sigma - x_* \| \le \psi_2(\| x_\sigma - x_* \|) \| x_\sigma - x_* \| \le \| x_\sigma - x_* \| < \rho,$$
where we also used
$$\left\| F'(x_*)^{-1} \left[ \left(1 - \frac{\beta}{2\alpha}\right) F'(x_\sigma) + \frac{\beta}{2\alpha} F'(y_\sigma) - F'(x_*) \right] \right\| \le \left|1 - \frac{\beta}{2\alpha}\right| \mu_0(\| x_\sigma - x_* \|) + \left|\frac{\beta}{2\alpha}\right| \mu_0(\| y_\sigma - x_* \|) \le q_1(\| x_\sigma - x_* \|) \le q_1(\rho) < 1,$$
so
$$\left\| \left[ \left(1 - \frac{\beta}{2\alpha}\right) F'(x_\sigma) + \frac{\beta}{2\alpha} F'(y_\sigma) \right]^{-1} F'(x_*) \right\| \le \frac{1}{1 - q_1(\| x_\sigma - x_* \|)},$$
and
$$\left\| F'(x_*)^{-1} \left[ \left(1 - \frac{\beta}{2\alpha}\right) F'(x_\sigma) + \frac{\beta}{2\alpha} F'(y_\sigma) - \left(1 + \frac{1-\beta}{2\alpha}\right) F'(x_\sigma) + \frac{1-\beta}{2\alpha} F'(y_\sigma) \right] \right\| = \frac{1}{2|\alpha|} \| F'(x_*)^{-1} (F'(y_\sigma) - F'(x_\sigma)) \| \le \frac{1}{2|\alpha|} \big( \mu_0(\| x_\sigma - x_* \|) + \mu_0(\psi_1(\| x_\sigma - x_* \|) \| x_\sigma - x_* \|) \big).$$
Case $\beta \neq 0$.
The preceding calculations for $\psi_2$ suggest that in this case the function can be defined by
$$\psi_2(\tau) = \frac{\int_0^1 \mu((1-\theta)\tau)\, d\theta}{1 - \mu_0(\tau)} + \frac{\big(\mu_1(\tau) + |1 - 2\beta|\, \mu_1(\psi_1(\tau)\tau)\big) \int_0^1 \mu_1(\theta\tau)\, d\theta}{2|\alpha| \big(1 - q_1(\tau)\big)\big(1 - \mu_0(\tau)\big)},$$
since
$$\left\| F'(x_*)^{-1} \left( \frac{\beta}{2\alpha} F'(x_\sigma) - \frac{2\beta - 1}{2\alpha} F'(y_\sigma) \right) \right\| \le \frac{1}{2|\alpha|} \| F'(x_*)^{-1} F'(x_\sigma) \| + \frac{|1 - 2\beta|}{2|\alpha|} \| F'(x_*)^{-1} F'(y_\sigma) \| \le \frac{1}{2|\alpha|} \big( \mu_1(\| x_\sigma - x_* \|) + |1 - 2\beta|\, \mu_1(\psi_1(\| x_\sigma - x_* \|) \| x_\sigma - x_* \|) \big).$$
Moreover, suppose the equation
$$q_2(\tau) - 1 = 0$$
has a smallest solution $R_{q_2} \in \Omega_1 \setminus \{0\}$, for
$$q_2(\tau) = \left|1 + \frac{\delta}{2\alpha}\right| \mu_0(\tau) + \left|\frac{\delta}{2\alpha}\right| \mu_0(\psi_1(\tau)\tau).$$
Define the function $\psi_3 : \Omega_1 \to \Omega$ by
$$\psi_3(\tau) = \left[ \frac{\int_0^1 \mu((1-\theta)\psi_2(\tau)\tau)\, d\theta}{1 - \mu_0(\psi_2(\tau)\tau)} + \frac{\big(\mu_0(\tau) + \mu_0(\psi_2(\tau)\tau)\big) \int_0^1 \mu_1(\theta \psi_2(\tau)\tau)\, d\theta}{\big(1 - \mu_0(\tau)\big)\big(1 - \mu_0(\psi_2(\tau)\tau)\big)} + \frac{\big(\mu_0(\tau) + \mu_0(\psi_1(\tau)\tau)\big) \int_0^1 \mu_1(\theta \psi_2(\tau)\tau)\, d\theta}{|\alpha| \big(1 - q_2(\tau)\big)\big(1 - \mu_0(\tau)\big)} \right] \psi_2(\tau),$$
and
$$p(\tau) = \frac{\mu_0(\tau) + \mu_0(\psi_1(\tau)\tau)}{|\alpha| \big(1 - q_2(\tau)\big)}.$$
The choice of the functions $q_2$ and $\psi_3$ is justified by the estimates
$$x_{\sigma+1} - x_* = \big( z_\sigma - x_* - F'(z_\sigma)^{-1} F(z_\sigma) \big) + \big( F'(z_\sigma)^{-1} - F'(x_\sigma)^{-1} \big) F(z_\sigma) + (I - B_\sigma) F'(x_\sigma)^{-1} F(z_\sigma),$$
with
$$I - B_\sigma = \left[ \left(1 + \frac{\delta}{2\alpha}\right) F'(x_\sigma) - \frac{\delta}{2\alpha} F'(y_\sigma) \right]^{-1} \left[ \left(1 + \frac{\delta}{2\alpha}\right) F'(x_\sigma) - \frac{\delta}{2\alpha} F'(y_\sigma) - \left(1 + \frac{2+\delta}{2\alpha}\right) F'(x_\sigma) + \frac{2+\delta}{2\alpha} F'(y_\sigma) \right] = \frac{1}{\alpha} \left[ \left(1 + \frac{\delta}{2\alpha}\right) F'(x_\sigma) - \frac{\delta}{2\alpha} F'(y_\sigma) \right]^{-1} \big( F'(y_\sigma) - F'(x_\sigma) \big),$$
so
$$\| x_{\sigma+1} - x_* \| \le \left[ \frac{\int_0^1 \mu((1-\theta)\| z_\sigma - x_* \|)\, d\theta}{1 - \mu_0(\| z_\sigma - x_* \|)} + \frac{\big(\mu_0(\| x_\sigma - x_* \|) + \mu_0(\| z_\sigma - x_* \|)\big) \int_0^1 \mu_1(\theta \| z_\sigma - x_* \|)\, d\theta}{\big(1 - \mu_0(\| x_\sigma - x_* \|)\big)\big(1 - \mu_0(\| z_\sigma - x_* \|)\big)} + \frac{\big(\mu_0(\| x_\sigma - x_* \|) + \mu_0(\| y_\sigma - x_* \|)\big) \int_0^1 \mu_1(\theta \| z_\sigma - x_* \|)\, d\theta}{|\alpha| \big(1 - q_2(\| x_\sigma - x_* \|)\big)\big(1 - \mu_0(\| x_\sigma - x_* \|)\big)} \right] \| z_\sigma - x_* \| \le \psi_3(\| x_\sigma - x_* \|) \| x_\sigma - x_* \| \le \| x_\sigma - x_* \| < \rho,$$
where we also adopted the estimates
$$\left\| F'(x_*)^{-1} \left[ \left(1 + \frac{\delta}{2\alpha}\right) F'(x_\sigma) - \frac{\delta}{2\alpha} F'(y_\sigma) - F'(x_*) \right] \right\| \le \left|1 + \frac{\delta}{2\alpha}\right| \mu_0(\| x_\sigma - x_* \|) + \left|\frac{\delta}{2\alpha}\right| \mu_0(\| y_\sigma - x_* \|) \le q_2(\| x_\sigma - x_* \|) \le q_2(\rho) < 1,$$
so
$$\left\| \left[ \left(1 + \frac{\delta}{2\alpha}\right) F'(x_\sigma) - \frac{\delta}{2\alpha} F'(y_\sigma) \right]^{-1} F'(x_*) \right\| \le \frac{1}{1 - q_2(\| x_\sigma - x_* \|)},$$
and
$$\left\| F'(x_*)^{-1} \left[ \left(1 + \frac{\delta}{2\alpha}\right) F'(x_\sigma) - \frac{\delta}{2\alpha} F'(y_\sigma) - \left(1 + \frac{2+\delta}{2\alpha}\right) F'(x_\sigma) + \frac{2+\delta}{2\alpha} F'(y_\sigma) \right] \right\| = \frac{1}{|\alpha|} \| F'(x_*)^{-1} (F'(y_\sigma) - F'(x_\sigma)) \| \le \frac{1}{|\alpha|} \big( \mu_0(\| x_\sigma - x_* \|) + \mu_0(\psi_1(\| x_\sigma - x_* \|) \| x_\sigma - x_* \|) \big).$$
Next, we find the functions $\tilde\psi_2$ and $\tilde\psi_3$ (with $\tilde\psi_1 = \psi_1$) for scheme (32) in an analogous way. We can write
$$z_\sigma - x_* = x_\sigma - x_* - F'(x_\sigma)^{-1} F(x_\sigma) + (I - H_\sigma) F'(x_\sigma)^{-1} F(x_\sigma),$$
so
$$\| z_\sigma - x_* \| \le \left[ \frac{\int_0^1 \mu((1-\theta)\| x_\sigma - x_* \|)\, d\theta}{1 - \mu_0(\| x_\sigma - x_* \|)} + \frac{q_3(\| x_\sigma - x_* \|) \int_0^1 \mu_1(\theta \| x_\sigma - x_* \|)\, d\theta}{1 - \mu_0(\| x_\sigma - x_* \|)} \right] \| x_\sigma - x_* \| \le \tilde\psi_2(\| x_\sigma - x_* \|) \| x_\sigma - x_* \| \le \| x_\sigma - x_* \| < \rho,$$
where we also used the estimate
$$\| I - H_\sigma \| \le \frac{\| I - F'(x_\sigma)^{-1} F'(y_\sigma) \|}{2|\alpha|} + \frac{\| I - F'(x_\sigma)^{-1} F'(y_\sigma) \|^2}{2\alpha^2} + \frac{|2 - 3\alpha|}{12\alpha^2} \left\| F'(x_\sigma)^{-1} \big( (F'(y_\sigma) - F'(x_\sigma)) + (F'(w_\sigma) - F'(x_\sigma)) \big) \right\| \le \frac{\mu_0(\| x_\sigma - x_* \|) + \mu_0(\| y_\sigma - x_* \|)}{2|\alpha| \big(1 - \mu_0(\| x_\sigma - x_* \|)\big)} + \frac{1}{2\alpha^2} \left( \frac{\mu_0(\| x_\sigma - x_* \|) + \mu_0(\| y_\sigma - x_* \|)}{1 - \mu_0(\| x_\sigma - x_* \|)} \right)^2 + \frac{|2 - 3\alpha|}{12\alpha^2} \left[ \frac{\mu_0(\| x_\sigma - x_* \|) + \mu_0(\| y_\sigma - x_* \|)}{1 - \mu_0(\| x_\sigma - x_* \|)} + \frac{\mu_0(\| x_\sigma - x_* \| + \| y_\sigma - x_* \|) + \mu_0(\| x_\sigma - x_* \|)}{1 - \mu_0(\| x_\sigma - x_* \|)} \right] \le q_3(\| x_\sigma - x_* \|),$$
since
$$\| I - F'(x_\sigma)^{-1} F'(y_\sigma) \| \le \| F'(x_\sigma)^{-1} (F'(x_\sigma) - F'(y_\sigma)) \| \le \frac{\mu_0(\| x_\sigma - x_* \|) + \mu_0(\| y_\sigma - x_* \|)}{1 - \mu_0(\| x_\sigma - x_* \|)},$$
where
$$q_3(\tau) = \frac{\mu_0(\tau) + \mu_0(\tilde\psi_1(\tau)\tau)}{2|\alpha| \big(1 - \mu_0(\tau)\big)} + \frac{1}{2\alpha^2} \left( \frac{\mu_0(\tau) + \mu_0(\tilde\psi_1(\tau)\tau)}{1 - \mu_0(\tau)} \right)^2 + \frac{|2 - 3\alpha|}{12\alpha^2} \left[ \frac{\mu_0(\tau) + \mu_0(\tilde\psi_1(\tau)\tau)}{1 - \mu_0(\tau)} + \frac{\mu_0(\tau + \tilde\psi_1(\tau)\tau) + \mu_0(\tau)}{1 - \mu_0(\tau)} \right].$$
Hence, we define the function $\tilde\psi_2$ by
$$\tilde\psi_2(\tau) = \frac{\int_0^1 \mu((1-\theta)\tau)\, d\theta + q_3(\tau) \int_0^1 \mu_1(\theta\tau)\, d\theta}{1 - \mu_0(\tau)}.$$
In view of the previous calculations for $\psi_3$ and the third substep of Schemes (31) and (32), we define $\tilde\psi_3 : \Omega_0 \to \Omega$ by
$$\tilde\psi_3(\tau) = \left[ \frac{\int_0^1 \mu((1-\theta)\tilde\psi_2(\tau)\tau)\, d\theta}{1 - \mu_0(\tilde\psi_2(\tau)\tau)} + \frac{\big(\mu_0(\tau) + \mu_0(\tilde\psi_2(\tau)\tau)\big) \int_0^1 \mu_1(\theta \tilde\psi_2(\tau)\tau)\, d\theta}{\big(1 - \mu_0(\tau)\big)\big(1 - \mu_0(\tilde\psi_2(\tau)\tau)\big)} + \frac{\big(\mu_0(\tau) + \mu_0(\tilde\psi_1(\tau)\tau)\big) \int_0^1 \mu_1(\theta \tilde\psi_2(\tau)\tau)\, d\theta}{|\alpha| \big(1 - q_2(\tau)\big)\big(1 - \mu_0(\tau)\big)} \right] \tilde\psi_2(\tau).$$
Moreover, to find the radius $\rho$ for Schemes (31) and (32), we use the equations involving $\psi_1, \psi_2, \psi_3$ and $\tilde\psi_1, \tilde\psi_2, \tilde\psi_3$ as defined previously in this section. We compare these methods on the basis of their radii of convergence. In addition, we choose $\epsilon = 10^{-100}$ as the error tolerance. The stopping criteria for solving nonlinear systems or scalar equations are: (i) $\| x_{\sigma+1} - x_\sigma \| < \epsilon$, and (ii) $\| F(x_\sigma) \| < \epsilon$.
The computations are performed with the package Mathematica 11 using multiple precision arithmetic.
Example 1.
Consider the example presented in the introduction. For $x_* = 1$, we can set
$$\mu_0(\tau) = \mu(\tau) = 97\tau \quad \text{and} \quad \mu_1(\tau) = 2.$$
In this way the conditions $(A)$ are satisfied. Then, by solving the equations $\psi_k(\tau) - 1 = 0$, we find the solutions $\rho_k$ and, using (7), we determine $\rho$. Hence, the conclusions of Theorem 1 hold. In Table 1, we present the numerical values of the radii $\rho_k$ and $\rho$ for Example 1.
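The radii computation can be sketched concretely. With $\mu_0(\tau) = \mu(\tau) = 97\tau$ and $\mu_1(\tau) = 2$, the integrals in $\psi_1$ evaluate in closed form, giving $\psi_1(\tau) = \frac{97\tau/2}{1 - 97\tau}$ when $\alpha = 1$ (the choice $\alpha = 1$ is ours, for illustration); the root of $\psi_1(\tau) - 1 = 0$ is then $\rho_1 = \frac{2}{3 \cdot 97}$:

```python
# Sketch: solve psi_1(t) = 1 by bisection for mu_0(t) = mu(t) = 97 t,
# mu_1(t) = 2, alpha = 1, where psi_1(t) = (97 t / 2) / (1 - 97 t).

def psi1(t, K=97.0):
    return (K * t / 2) / (1 - K * t)

def bisect(f, a, b, tol=1e-12):
    """f(a) < 0 < f(b); shrink [a, b] until it is shorter than tol."""
    while b - a > tol:
        m = (a + b) / 2
        if f(m) < 0:
            a = m
        else:
            b = m
    return (a + b) / 2

rho1 = bisect(lambda t: psi1(t) - 1, 0.0, 1 / 97.0 - 1e-9)
print(rho1)  # closed form: 2 / (3 * 97) = 0.006872852...
```

This reproduces the Newton-type radius $\frac{2}{3K}$ of Remark 1(c) for $K_0 = K$.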
Example 2.
We chose a well-known nonlinear PDE problem of molecular interaction [12,33], given as follows:
$$v_{xx} + v_{yy} = v^2, \qquad (33)$$
subject to the boundary conditions
$$v(x, 0) = 2x^2 - x + 1, \quad v(x, 1) = 2, \quad v(0, y) = 2y^2 - y + 1, \quad v(1, y) = 2,$$
where $(x, y) \in [0, 1] \times [0, 1]$.
First, we discretize the PDE (33) by adopting the central divided differences
$$v_{xx} = \frac{v_{i+1,j} - 2v_{i,j} + v_{i-1,j}}{a^2}, \qquad v_{yy} = \frac{v_{i,j+1} - 2v_{i,j} + v_{i,j-1}}{a^2},$$
which suggests the following system of nonlinear equations:
$$v_{i+1,j} - 4v_{i,j} + v_{i-1,j} + v_{i,j+1} + v_{i,j-1} - a^2 v_{i,j}^2 = 0,$$
where $i = 1, 2, \ldots, l-1$ and $j = 1, 2, \ldots, l-1$. We choose $l = 13$ and $a = \frac{1}{l}$ in order to obtain a nonlinear system of $12 \times 12 = 144$ equations. The approximate solution $x_* \in \mathbb{R}^{144}$ is considered on $D = \bar{S}(x_*, \frac{1}{10})$.
Then, by the conditions $(A)$, we get
$$\mu_0(\tau) = \mu(\tau) = 0.10\tau \quad \text{and} \quad \mu_1(\tau) = 2.$$
These functions clearly satisfy the conditions $(A)$. Then, by solving the equations $\psi_k(\tau) - 1 = 0$, we find the solutions $\rho_k$ and, using (7), we determine $\rho$. Hence, the conclusions of Theorem 1 hold. We provide the computed radii of convergence for Example 2 in Table 2.
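The discretized system can be assembled and solved as follows. This is a sketch with plain Newton steps and dense Gaussian elimination rather than the paper's schemes, and the boundary formulas are read off the (partly garbled) source, so treat them as assumptions:

```python
# Discretize v_xx + v_yy = v^2 on the unit square (l = 13, 144 unknowns)
# and solve the nonlinear system by Newton's method.

l = 13
a = 1.0 / l
n = l - 1                      # interior points per direction

def idx(i, j):                 # interior (i, j), 1-based -> flat index
    return (j - 1) * n + (i - 1)

def value(v, i, j):            # interior value or boundary datum
    if 1 <= i <= n and 1 <= j <= n:
        return v[idx(i, j)]
    x, y = i * a, j * a
    if j == 0:
        return 2 * x * x - x + 1
    if i == 0:
        return 2 * y * y - y + 1
    return 2.0                 # edges y = 1 and x = 1

def residual(v):
    F = [0.0] * (n * n)
    for j in range(1, n + 1):
        for i in range(1, n + 1):
            F[idx(i, j)] = (value(v, i + 1, j) + value(v, i - 1, j)
                            + value(v, i, j + 1) + value(v, i, j - 1)
                            - 4 * value(v, i, j) - a * a * value(v, i, j) ** 2)
    return F

def jacobian(v):
    N = n * n
    J = [[0.0] * N for _ in range(N)]
    for j in range(1, n + 1):
        for i in range(1, n + 1):
            r = idx(i, j)
            J[r][r] = -4 - 2 * a * a * v[r]
            for (p, q) in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if 1 <= p <= n and 1 <= q <= n:
                    J[r][idx(p, q)] = 1.0
    return J

def solve(J, b):               # Gaussian elimination with partial pivoting
    N = len(b)
    for k in range(N):
        p = max(range(k, N), key=lambda r: abs(J[r][k]))
        J[k], J[p] = J[p], J[k]
        b[k], b[p] = b[p], b[k]
        for r in range(k + 1, N):
            m = J[r][k] / J[k][k]
            for c in range(k, N):
                J[r][c] -= m * J[k][c]
            b[r] -= m * b[k]
    x = [0.0] * N
    for k in range(N - 1, -1, -1):
        x[k] = (b[k] - sum(J[k][c] * x[c] for c in range(k + 1, N))) / J[k][k]
    return x

v = [1.0] * (n * n)            # initial guess
for _ in range(5):             # Newton: J(v) d = -F(v), v <- v + d
    d = solve(jacobian(v), [-f for f in residual(v)])
    v = [vi + di for vi, di in zip(v, d)]
```

Since the quadratic term carries the small factor $a^2$, the system is nearly linear and Newton converges in a handful of steps.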
Example 3.
The kinematic synthesis problem for steering [11,31] is given by the system
$$\Big[ E_i \big( \nu_2 \sin\eta_i - \nu_3 \big) - A_i \big( \nu_2 \sin\varphi_i - \nu_3 \big) \Big]^2 + \Big[ A_i \big( \nu_2 \cos\varphi_i + 1 \big) - A_i \big( \nu_2 \cos\eta_i - 1 \big) \Big]^2 - \Big[ \big( \nu_1 \nu_2 \sin\eta_i - \nu_3 \big)\big( \nu_2 \cos\varphi_i + 1 \big) - \big( \nu_1 \nu_2 \cos\eta_i - \nu_3 \big)\big( \nu_2 \sin\varphi_i - \nu_3 \big) \Big]^2 = 0, \quad i = 1, 2, 3,$$
where
$$E_i = \nu_3 \nu_2 \big( \sin\varphi_i - \sin\varphi_0 \big) - \nu_1 \big( \nu_2 \sin\varphi_i - \nu_3 \big) + \nu_2 \big( \cos\varphi_i - \cos\varphi_0 \big), \quad i = 1, 2, 3,$$
and
$$A_i = \nu_3 \big( \nu_2 \sin\eta_i + \nu_2 \cos\eta_i + \nu_3 \big) - \nu_1 \big( \nu_2 \sin\eta_0 + \nu_2 \cos\eta_0 + \nu_1 \nu_3 \big), \quad i = 1, 2, 3.$$
In Table 3, we present the values of $\eta_i$ and $\varphi_i$ (in radians).
The approximate solution, for $D = \bar{S}(x_*, \frac{1}{4})$, is
$$x_* = (0.9051567, 0.6977417, 0.6508335)^T.$$
Then, we get
$$\mu_0(\tau) = \mu(\tau) = 8\tau \quad \text{and} \quad \mu_1(\tau) = 2.$$
One can easily verify that these functions satisfy the conditions $(A)$. Then, by solving the equations $\psi_k(\tau) - 1 = 0$, we find the solutions $\rho_k$ and, using (7), we determine $\rho$. Hence, the conclusions of Theorem 1 hold. We provide the computed radii of convergence for Example 3 in Table 4.
Example 4.
We choose the prominent 2D Bratu problem [7,30], given by
$$u_{xx} + u_{tt} + C e^u = 0 \quad \text{on } A : \{(x, t) \mid 0 \le x \le 1, \; 0 \le t \le 1\}, \quad \text{with boundary condition } u = 0 \text{ on } \partial A. \qquad (34)$$
Let us assume that $\Theta_{i,j} = u(x_i, t_j)$ is the numerical approximation over the grid points of the mesh. In addition, let $\tau_1$ and $\tau_2$ be the number of steps in the directions of $x$ and $t$, respectively, with $h$ and $k$ the corresponding step sizes. In order to find the solution of PDE (34), we adopt the approach
$$u_{xx}(x_i, t_j) = \frac{\Theta_{i+1,j} - 2\Theta_{i,j} + \Theta_{i-1,j}}{h^2}, \quad C = 0.1, \quad t \in [0, 1],$$
which yields the following system of nonlinear equations (SNE):
$$\Theta_{i,j+1} + \Theta_{i,j-1} - 4\Theta_{i,j} + \Theta_{i+1,j} + \Theta_{i-1,j} + h^2 C \exp(\Theta_{i,j}) = 0, \quad i = 1, 2, \ldots, \tau_1, \; j = 1, 2, \ldots, \tau_2.$$
By choosing $\tau_1 = \tau_2 = 11$, $h = \frac{1}{11}$, and $C = 0.1$, we get a large SNE of 100 equations in 100 unknowns, which converges to the required root (the column vector, listed row by row):
$$x_* = ( 0.0011, 0.0018, 0.0022, 0.0025, 0.0026, 0.0026, 0.0025, 0.0022, 0.0018, 0.0011,$$
$$0.0018, 0.0030, 0.0038, 0.0043, 0.0046, 0.0046, 0.0043, 0.0038, 0.0030, 0.0018,$$
$$0.0022, 0.0038, 0.0049, 0.0056, 0.0059, 0.0059, 0.0056, 0.0049, 0.0038, 0.0022,$$
$$0.0025, 0.0043, 0.0056, 0.0064, 0.0068, 0.0068, 0.0064, 0.0056, 0.0043, 0.0025,$$
$$0.0026, 0.0046, 0.0059, 0.0068, 0.0072, 0.0072, 0.0068, 0.0059, 0.0046, 0.0026,$$
$$0.0026, 0.0046, 0.0059, 0.0068, 0.0072, 0.0072, 0.0068, 0.0059, 0.0046, 0.0026,$$
$$0.0025, 0.0043, 0.0056, 0.0064, 0.0068, 0.0068, 0.0064, 0.0056, 0.0043, 0.0025,$$
$$0.0022, 0.0038, 0.0049, 0.0056, 0.0059, 0.0059, 0.0056, 0.0049, 0.0038, 0.0022,$$
$$0.0018, 0.0030, 0.0038, 0.0043, 0.0046, 0.0046, 0.0043, 0.0038, 0.0030, 0.0018,$$
$$0.0011, 0.0018, 0.0022, 0.0025, 0.0026, 0.0026, 0.0025, 0.0022, 0.0018, 0.0011 )^T.$$
Choose $T_1 = T_2 = \mathbb{R}^{100}$ and $D = \bar{S}(x_*, \frac{1}{5})$. Then we have
$$\mu_0(\tau) = \mu(\tau) = 7\tau \quad \text{and} \quad \mu_1(\tau) = 2.$$
In this way the conditions $(A)$ are satisfied. Then, by solving the equations $\psi_k(\tau) - 1 = 0$, we find the solutions $\rho_k$ and, using (7), we determine $\rho$. Hence, the conclusions of Theorem 1 hold. The computational results are depicted in Table 5.
Remark 2.
We have observed from Examples 1–4 that method (32) (for $\alpha = 1$, $\delta = 1$) has a larger radius of convergence than the other particular cases of (31) and (32). So, we deduce that this particular case is better than the others in terms of the set of convergent starting points and the domain of convergence. It also shows a consistent computational order of convergence.

4. Conclusions

A unified local convergence analysis is presented for a family of higher order Jarratt-type schemes on Banach spaces. Our analysis uses only the first derivative appearing in these schemes, in contrast to other approaches requiring derivatives of order $\lambda + 3$ and $2j + 1$ for Schemes (2) and (3), respectively (derivatives that do not appear in the schemes). Hence, the applicability of these schemes is extended. Moreover, our analysis gives computable error distances and answers on the uniqueness of the solution. This was not done in the earlier work [33]. Our idea provides a new way of looking at iterative schemes, so it can extend the applicability of these and other schemes [3,4,5,6,7,8,9,10,13,14,15,16,17,18,19,20,21,22,23,24,25,26]. Finally, numerical experiments are conducted on problems from chemistry and other disciplines of applied sciences. Notice, in particular, that many problems from the micro-world are symmetric.

Author Contributions

R.B. and I.K.A.: Conceptualization; Methodology; Validation; Writing–Original Draft Preparation; Writing–Review & Editing, F.O.M., C.I.A.: Review & Editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, under grant no. KEP-MSc-49-130-42.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

This project was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, under grant no. (KEP-MSc-49-130-42). The authors, therefore, acknowledge with thanks DSR for their technical and financial support.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall Series in Automatic Computation; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964.
2. Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1960.
3. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. ACM 1974, 21, 643–651.
4. Jarratt, P. Some efficient fourth order multipoint methods for solving equations. Nord. Tidskr. Inf. Behandl. 1969, 9, 119–124.
5. Jarratt, P. Some fourth order multipoint iterative methods for solving equations. Math. Comp. 1966, 20, 434–437.
6. King, R.F. A family of fourth order methods for nonlinear equations. SIAM J. Numer. Anal. 1973, 10, 876–879.
7. Kapania, R.K. A pseudo-spectral solution of 2-parameter Bratu’s equation. Comput. Mech. 1990, 6, 55–63.
8. Amat, S.; Argyros, I.K.; Busquier, S.; Magreñán, A.A. Local convergence and the dynamics of a two-point four parameter Jarratt-like method under weak conditions. Numer. Algorithms 2017, 74, 371–391.
9. Argyros, I.K. Convergence and Applications of Newton-Type Iterations; Springer: Berlin/Heidelberg, Germany, 2008.
10. Argyros, I.K.; Hilout, S. Numerical Methods in Nonlinear Analysis; World Scientific Publ. Comp.: Hoboken, NJ, USA, 2013.
11. Awawdeh, F. On new iterative method for solving systems of nonlinear equations. Numer. Algorithms 2010, 54, 395–409.
12. Bahl, A.; Cordero, A.; Sharma, R.; Torregrosa, J.R. A novel bi-parametric sixth order iterative scheme for solving nonlinear systems and its dynamics. Appl. Math. Comput. 2019, 357, 147–166.
13. Berardi, M.; Difonzo, F.; Vurro, M.; Lopez, L. The 1D Richards’ equation in two layered soils: A Filippov approach to treat discontinuities. Adv. Water Resour. 2018, 115, 264–272.
14. Casulli, V.; Zanolli, P. A nested Newton-type algorithm for finite volume methods solving Richards’ equation in mixed form. SIAM J. Sci. Comput. 2010, 32, 2255–2273.
15. Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Drawing dynamical and parameter planes of iterative families and methods. Sci. World J. 2013, 2013, 506–519.
16. Cordero, A.; García-Maimó, J.; Torregrosa, J.R.; Vassileva, M.P.; Vindel, P. Chaos in King’s iterative family. Appl. Math. Lett. 2013, 26, 842–848.
17. Cordero, A.; García-Maimó, J.; Torregrosa, J.R.; Vassileva, M.P. Multidimensional stability analysis of a family of biparametric iterative methods. J. Math. Chem. 2017, 55, 1461–1480.
18. Cordero, A.; Gómez, E.; Torregrosa, J.R. Efficient high-order iterative methods for solving nonlinear systems and their application on heat conduction problems. Complexity 2017, 2017, 6457532.
19. Cordero, A.; Gutiérrez, J.M.; Magreñán, A.A.; Torregrosa, J.R. Stability analysis of a parametric family of iterative methods for solving nonlinear models. Appl. Math. Comput. 2016, 285, 26–40.
20. Cordero, A.; Soleymani, F.; Torregrosa, J.R. Dynamical analysis of iterative methods for nonlinear systems or how to deal with the dimension. Appl. Math. Comput. 2014, 244, 398–412.
21. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698.
22. Geum, Y.H.; Kim, Y.I.; Neta, B. A sixth-order family of three-point modified Newton-like multiple-root finders and the dynamics behind their extraneous fixed points. Appl. Math. Comput. 2016, 283, 120–140.
23. Gutiérrez, J.M.; Hernández, M.A.; Romero, N. Dynamics of a new family of iterative processes for quadratic polynomials. J. Comput. Appl. Math. 2010, 233, 2688–2695.
24. Gutiérrez, J.M.; Plaza, S.; Romero, N. Dynamics of a fifth-order iterative method. Int. J. Comput. Math. 2012, 89, 822–835.
25. Illiano, D.; Pop, I.S.; Radu, F.A. Iterative schemes for surfactant transport in porous media. Comput. Geosci. 2021, 25, 805–822.
26. Magreñán, A.A. Different anomalies in a Jarratt family of iterative root-finding methods. Appl. Math. Comput. 2014, 233, 29–38.
27. Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations; Academic Press: Cambridge, MA, USA, 2013.
28. Rheinboldt, W.C. An adaptive continuation process for solving systems of nonlinear equations. Pol. Acad. Sci. Banach Ctr. Publ. 1978, 3, 129–142.
29. Sharma, J.R.; Arora, H. Improved Newton-like methods for solving systems of nonlinear equations. SeMA J. 2017, 74, 147–163.
30. Simpson, R.B. A method for the numerical determination of bifurcation states of nonlinear systems of equations. SIAM J. Numer. Anal. 1975, 12, 439–451.
31. Tsoulos, I.G.; Stavrakoudis, A. On locating all roots of systems of nonlinear equations inside bounded domain using global optimization methods. Nonlinear Anal. Real World Appl. 2010, 11, 2465–2471.
32. Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third order convergence. Appl. Math. Lett. 2000, 13, 87–93.
33. Zhanlav, T.; Otgondorj, K. Higher order Jarratt-like iterations for solving system of nonlinear equations. Appl. Math. Comput. 2021, 395, 125849.
Table 1. Radii for Example (1).

| Methods | α | β | δ | ρ_1 | ρ_2 | ρ_3 | ρ |
|---|---|---|---|---|---|---|---|
| (31) | 1 | 0 | 0 | 0.00687 | 0.00316 | 0.00224 | 0.00224 |
| (31) | 2/3 | 0 | 1 | 0.00229 | 0.00201 | 0.00136 | 0.00136 |
| (31) | 1 | 1 | −1 | 0.00687 | 0.0100 | 0.000164 | 0.000164 |
| (31) | 2/3 | 1 | −1 | 0.00229 | 0.00976 | 0.0000732 | 0.0000732 |
| (32) | 1 | – | −1 | 0.00687 | 0.00461 | 0.00322 | 0.00322 |
| (32) | 2/3 | – | −1 | 0.00229 | 0.00362 | 0.00244 | 0.00229 |
On the basis of the above table, method (32) (for α = 1, δ = −1) has a larger radius of convergence than the other methods listed. So, we conclude that it is better than the other methods mentioned.
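For reference, the classical fourth-order Jarratt iteration, to which the members of the families (31) and (32) are related, can be sketched in the scalar case as follows; f(x) = x² − 2 is used purely as a hypothetical test function.

```python
def jarratt(f, df, x, iters=4):
    """Classical fourth-order Jarratt iteration (scalar form)."""
    for _ in range(iters):
        u = f(x) / df(x)
        y = x - (2.0 / 3.0) * u                      # predictor step
        x = x - 0.5 * (3.0 * df(y) + df(x)) / (3.0 * df(y) - df(x)) * u
    return x

# Test function f(x) = x^2 - 2 with root sqrt(2), started inside the radius.
root = jarratt(lambda t: t * t - 2.0, lambda t: 2.0 * t, x=1.5)
print(root)   # ~1.41421356..., i.e. sqrt(2)
```

Only evaluations of f and f′ are needed, which is precisely the feature exploited by the convergence analysis in this paper.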
Table 2. Radii for Example (2).

| Methods | α | β | δ | ρ_1 | ρ_2 | ρ_3 | ρ | x_0 | σ | η |
|---|---|---|---|---|---|---|---|---|---|---|
| (31) | 1 | 0 | 0 | 6.667 | 3.066 | 2.172 | 2.172 | ( 2, 2, 2, …, 2 )^T (144 components) | 4 | 4.9972 |
| (31) | 2/3 | 0 | 1 | 2.222 | 1.949 | 1.321 | 1.321 | ( 1.2, 1.2, 1.2, …, 1.2 )^T (144 components) | 3 | 4.9975 |
| (31) | 1 | 1 | −1 | 6.667 | 9.742 | 0.159 | 0.159 | ( 0.15, 0.15, 0.15, …, 0.15 )^T (144 components) | 3 | 4.9990 |
| (31) | 2/3 | 1 | −1 | 2.222 | 9.463 | 0.0710 | 0.0710 | ( 0.06, 0.06, 0.06, …, 0.06 )^T (144 components) | 3 | 5.0000 |
| (32) | 1 | – | −1 | 6.667 | 4.473 | 3.120 | 3.120 | ( 2.5, 2.5, 2.5, …, 2.5 )^T (144 components) | 3 | 6.0000 |
| (32) | 2/3 | – | −1 | 2.223 | 3.517 | 2.363 | 2.223 | ( 2.1, 2.1, 2.1, …, 2.1 )^T (144 components) | 3 | 6.0000 |
Method (32) (for α = 1, δ = −1) has a larger radius of convergence than the other special cases of (31) and (32). So, we conclude that method (32) admits a larger set of convergent starting points than the other methods listed.
Table 3. Values of η_i and φ_i (in radians) for Example (3).

| i | η_i | φ_i |
|---|---|---|
| 0 | −1.3954170041747090114 | −1.7461756494150842271 |
| 1 | −1.7444828545735749268 | −2.0364691127919609051 |
| 2 | −2.0656234369405315689 | −2.2390977868265978920 |
| 3 | −2.4600678478912500533 | −2.4600678409809344550 |
Table 4. Radii for Example (3).

| Methods | α | β | δ | ρ_1 | ρ_2 | ρ_3 | ρ | x_0 | σ | η |
|---|---|---|---|---|---|---|---|---|---|---|
| (31) | 1 | 0 | 0 | 0.0833 | 0.0383 | 0.0271 | 0.0271 | ( 0.88, 0.67, 0.63 )^T | 4 | 5.0313 |
| (31) | 2/3 | 0 | 1 | 0.0278 | 0.0244 | 0.0165 | 0.0165 | ( 0.89, 0.68, 0.64 )^T | 4 | 5.0322 |
| (31) | 1 | 1 | −1 | 0.0833 | 0.122 | 0.00198 | 0.00198 | ( 0.903, 0.695, 0.653 )^T | 4 | 4.9769 |
| (31) | 2/3 | 1 | −1 | 0.0278 | 0.118 | 0.000888 | 0.000888 | ( 0.9051, 0.6977, 0.6508 )^T | 3 | 5.0395 |
| (32) | 1 | – | −1 | 0.0833 | 0.0559 | 0.0390 | 0.0390 | ( 0.87, 0.66, 0.62 )^T | 4 | 6.0001 |
| (32) | 2/3 | – | −1 | 0.0278 | 0.0440 | 0.0295 | 0.0278 | ( 0.88, 0.67, 0.63 )^T | 4 | 6.0249 |
We notice from the above table that method (32) (for α = 1, δ = −1) admits better choices of starting points than the other sub-cases of (31) and (32), which have a smaller domain of convergence. It also shows a consistent computational order of convergence. The particular case of scheme (31) (for α = 2/3, β = 1, δ = −1) does consume the lowest number of iterations, but this is expected because its initial point is very close to the required solution.
Table 5. Radii of convergence for Example (4).

| Methods | α | β | δ | ρ_1 | ρ_2 | ρ_3 | ρ | x_0 | σ | η |
|---|---|---|---|---|---|---|---|---|---|---|
| (31) | 1 | 0 | 0 | 0.0952 | 0.0438 | 0.0310 | 0.0310 | ( 0.003, 0.003, 0.003, …, 0.003 )^T (100 components) | 2 | 4.9978 |
| (31) | 2/3 | 0 | 1 | 0.0317 | 0.0278 | 0.0189 | 0.0189 | ( 0.002, 0.002, 0.002, …, 0.002 )^T (100 components) | 2 | 4.9913 |
| (31) | 1 | 1 | −1 | 0.0952 | 0.139 | 0.00226 | 0.00226 | ( 0.002, 0.002, 0.002, …, 0.002 )^T (100 components) | 2 | 4.9937 |
| (31) | 2/3 | 1 | −1 | 0.0317 | 0.135 | 0.0010 | 0.0010 | ( 0.0001, 0.0001, 0.0001, …, 0.0001 )^T (100 components) | 2 | 4.9893 |
| (32) | 1 | – | −1 | 0.0952 | 0.0639 | 0.0446 | 0.0446 | ( 0.004, 0.004, 0.004, …, 0.004 )^T (100 components) | 2 | 6.0055 |
| (32) | 2/3 | – | −1 | 0.0317 | 0.0502 | 0.0338 | 0.0317 | ( 0.003, 0.003, 0.003, …, 0.003 )^T (100 components) | 2 | 5.9853 |
On the basis of the above table, method (32) (for α = 1, δ = −1) has a larger domain of convergence than the other particular cases of (31) and (32). It also consumes the same number of iterations as the other methods listed.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
