Article

Convergence of a Family of Methods with Symmetric, Antisymmetric Parameters and Weight Functions

by Ramandeep Behl 1 and Ioannis K. Argyros 2,*
1 Mathematical Modelling and Applied Computation Research Group (MMAC), Department of Mathematics, Faculty of Science, King Abdulaziz University, P.O. Box 80203, Jeddah 21589, Saudi Arabia
2 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
* Author to whom correspondence should be addressed.
Symmetry 2024, 16(9), 1179; https://doi.org/10.3390/sym16091179
Submission received: 31 July 2024 / Revised: 28 August 2024 / Accepted: 2 September 2024 / Published: 9 September 2024
(This article belongs to the Section Mathematics)

Abstract: Many problems in scientific research are reduced, by mathematical modeling, to a nonlinear equation. The solutions of such equations are mostly found iteratively. The convergence order is then routinely shown using Taylor formulas, which in turn make sufficient assumptions on derivatives that do not appear in the iterative method at hand. This technique restricts the applicability of the method, which may converge even if these assumptions, which are not necessary either, do not hold. The applicability of these methods can therefore be extended under less restrictive conditions. This paper contributes in this direction, since the convergence is established by assumptions restricted exclusively to the functions present in the method. Although the technique is demonstrated on a two-step Traub-type method with usually symmetric parameters and weight functions, due to its generality it can be extended to other methods defined on the real line or in more abstract spaces. Numerical experiments complement and further validate the theory.

1. Introduction

A nonlinear equation or system of nonlinear equations is commonly used to model a wide range of applied science problems from several domains, including economics, engineering, applied mathematics, health science, physics, and chemistry. An example of such an equation is given below:
H(x) = 0,   (1)
where H : D ⊆ S → S, with S = ℝ or S = ℂ [1,2,3,4,5]. A closed-form solution of Equation (1) rarely exists. Therefore, we have to obtain the solutions of such equations by relying on iterative methods.
A popular technique for approximating the roots of a nonlinear equation is Newton’s method, which has a quadratic order of convergence for simple roots. Starting from an initial point, the method uses the function’s value and its derivative to iteratively refine the approximation. This process is repeated until the desired level of accuracy is reached. Newton’s method is renowned for its rapid convergence, especially when the initial point is close to the required root. With the advancement of computer algebra, many researchers have proposed modified and extended versions of Newton’s method.
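As a concrete illustration (not taken from the paper), the classical Newton iteration x_{θ+1} = x_θ − H(x_θ)/H′(x_θ) can be sketched as follows; the test equation H(x) = x² − 2 is a hypothetical example:

```python
def newton(H, Hp, x0, tol=1e-12, maxit=50):
    """Classical Newton iteration x_{k+1} = x_k - H(x_k)/H'(x_k)."""
    x = x0
    for _ in range(maxit):
        step = H(x) / Hp(x)
        x -= step
        if abs(step) < tol:   # stop once the correction is negligible
            break
    return x

root = newton(lambda x: x**2 - 2, lambda x: 2*x, x0=1.5)
```

Starting close to the root, the number of correct digits roughly doubles per step, which is the quadratic convergence mentioned above.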
Specifically, we define the following two-step method, for each θ = 0, 1, 2, …, by
y_θ = x_θ − α H_θ/H′_θ,
A_θ = β H_θ + γ H_{y_θ},  B_θ = δ H_θ + ϵ H_{y_θ},  C_θ = λ H_θ² + μ H_θ H_{y_θ} + ξ H_{y_θ}²,
x_{θ+1} = x_θ − (C_θ/(A_θ B_θ)) H_θ/H′_θ,   (2)
where α ≠ 0, β ≠ 0, δ ≠ 0, γ, ϵ, λ, μ and ξ are parameters, A_θ, B_θ are non-vanishing at each point of their domain, and C_θ is any function. In addition, we abbreviate H_θ = H(x_θ), H′_θ = H′(x_θ) and H_{y_θ} = H(y_θ) in the whole study.
There exist specializations of the method (2); we list some of them.
  • Newton’s method (order two) [1,6,7,8]
Set α = 1, C_θ = A_θ B_θ in (2) to obtain
x_{θ+1} = x_θ − H_θ/H′_θ.   (3)
  • Sharma’s method (order three) [9]
Set α = 1, A_θ = H_θ − H_{y_θ}, i.e., β = 1, γ = −1, any B_θ and C_θ = B_θ H_θ to obtain
y_θ = x_θ − H_θ/H′_θ,  x_{θ+1} = x_θ − H_θ²/((H_θ − H_{y_θ}) H′_θ).   (4)
  • Traub–Kou method (order three) [9,10,11,12,13]
Set α = τ, A_θ = (τ² − τ + 1) H_θ − H_{y_θ}, i.e., β = τ² − τ + 1, γ = −1, τ = 1, −1, 1/2, any B_θ and C_θ = τ² B_θ H_θ to obtain
y_θ = x_θ − τ H_θ/H′_θ,  x_{θ+1} = x_θ − τ² H_θ²/(((τ² − τ + 1) H_θ − H_{y_θ}) H′_θ).   (5)
  • Traub–Potra–Pták method (order three) [14,15,16]
Take α = 1, A_θ = H_θ, i.e., β = 1, γ = 0, any B_θ and C_θ = B_θ(H_θ + H_{y_θ}) to obtain
y_θ = x_θ − H_θ/H′_θ,  x_{θ+1} = x_θ − (H_θ + H_{y_θ})/H′_θ.   (6)
  • A New Fourth-Order Method
Set α = β = δ = λ = ξ = 1, γ = −1, and ϵ = μ = −3, to obtain
y_θ = x_θ − H_θ/H′_θ,  x_{θ+1} = x_θ − ((H_θ² − 3 H_θ H_{y_θ} + H_{y_θ}²)/((H_θ − H_{y_θ})(H_θ − 3 H_{y_θ}))) H_θ/H′_θ.   (7)
The fourth order of convergence of this method is established in Section 4.
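To make the family concrete, the following sketch (an illustration, not the authors’ implementation) carries out the general iteration (2) for scalar equations; the parameter names mirror the symbols above, and the demonstration instantiates the fourth-order member (7) on the hypothetical test equation H(x) = eˣ − 1, whose solution is x* = 0:

```python
import math

def traub_family(H, Hp, x0, alpha, beta, gamma, delta, eps, lam, mu, xi,
                 tol=1e-13, maxit=25):
    """Two-step family (2): y = x - alpha*H/H', then
    x_new = x - (C/(A*B)) * H/H' with A, B, C built from H(x), H(y)."""
    x = x0
    for _ in range(maxit):
        Hx = H(x)
        if abs(Hx) < tol:           # residual already negligible
            break
        Hpx = Hp(x)
        y = x - alpha * Hx / Hpx    # first substep
        Hy = H(y)
        A = beta * Hx + gamma * Hy
        B = delta * Hx + eps * Hy
        C = lam * Hx**2 + mu * Hx * Hy + xi * Hy**2
        x = x - (C / (A * B)) * Hx / Hpx
    return x

# Fourth-order member (7): alpha=beta=delta=lam=xi=1, gamma=-1, eps=mu=-3
root = traub_family(lambda t: math.exp(t) - 1.0, math.exp, 0.5,
                    alpha=1, beta=1, gamma=-1, delta=1, eps=-3,
                    lam=1, mu=-3, xi=1)
```

The other specializations (3)–(6) are obtained by passing the corresponding parameter choices listed above.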
Taylor expansion approaches are mostly used to establish the convergence order. Generally, the proofs require the existence of high-order derivatives. However, there are also other problems with the Taylor series approach.
  • Motivation
(P1)
We consider an academic but motivational example that illustrates these drawbacks. We choose D = [−1.2, 1.2] and define the function H on the interval D by H(t) = σ₁ t³ log t² + σ₂ t⁵ + σ₃ t⁴ if t ≠ 0, and H(t) = 0 if t = 0, where σ₁ ≠ 0, σ₂, σ₃ are parameters satisfying σ₂ + σ₃ = 0. It is straightforward to see that t* = 1 solves the equation H(t) = 0. However, the third derivative H⁽³⁾ is not continuous at t = 0 ∈ D. Consequently, results relying on Taylor series, which require the continuity of H⁽³⁾ or derivatives of even higher order, cannot guarantee convergence to t*.
(P2)
There are no a priori computable error estimates for |x* − x_θ|. Therefore, the number of iterations required to achieve a specified error tolerance cannot be determined in advance.
(P3)
No computable neighborhood of x* is determined that contains no other solution of (1).
(P4)
The choice of x₀ assuring convergence of these methods to x* is very difficult or impossible, since the radius of convergence is usually not determined.
(P5)
The convergence of iterative methods is established only for the real line.
(P6)
For the majority of these techniques, the challenging semi-local examination of convergence is not mentioned in the earlier studies.
Hence, the problems (P1)–(P6) constitute the motivation for writing this paper. These issues are addressed positively in this study. In particular, we have, respectively:
  • Novelty
( P 1 )
The local convergence is assured using assumptions only on the derivative appearing in the method, i.e., H′.
( P 2 )
A priori error estimates, which are computable, are provided. Thus, the number of iterations required to achieve a given tolerance is known in advance.
( P 3 )
A computable neighbourhood of x * is given that contains no other solution.
( P 4 )
The radius of convergence can be computed.
( P 5 )
The convergence is assured on S.
( P 6 )
Real majorizing sequences are utilised to establish the convergence [1,2,3]. The derivative H is controlled using generalised continuity [1].
The approach is applied to the method (2). However, it can be used on other methods, not necessarily defined on S but in more general spaces such as Hilbert, Euclidean or Banach spaces, along the same lines [2,3,4,5,6,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24]. Therefore, we can expand the usage of these methods for solving Equation (1) to cases not covered before. That is, the treatment of (P1)–(P6) constitutes the novelty of the paper.
The rest of the paper is organized as follows: Local and semi-local analyses are presented in Section 2 and Section 3, respectively. The convergence order of the method is discussed in Section 4. Examples are provided in Section 5, and conclusions are drawn in Section 6. It is also worth noting that (2) unifies the study of methods. Moreover, their convergence study is carried out under the same conditions. Consequently, a direct comparison between these methods becomes possible.

2. Convergence 1: Local Analysis

Some real functions defined on the interval T = [0, ∞) play a role in the local analysis. Suppose:
(E1)
There is a function Φ₀ : T → T that is continuous and non-decreasing (CD), such that the function Φ₀(t) − 1 has a smallest zero (SZ) in the interval (0, +∞), denoted by ρ₀. Define T₀ = [0, ρ₀).
(E2)
There exists a CD function Φ : T₀ → T such that the function h₁ : T₀ → T, defined by
h₁(t) = (∫₀¹ Φ((1−κ)t) dκ + |1−α| (1 + ∫₀¹ Φ₀(κt) dκ)) / (1 − Φ₀(t)),
is such that h₁(t) − 1 has a SZ, which is denoted by r₁.
(E3)
Functions p : T₀ → T and q : T₀ → T, given by
p(t) = (1/|β|)(|β| ∫₀¹ Φ₀(κt) dκ + |γ| (1 + ∫₀¹ Φ₀(κ h₁(t) t) dκ) h₁(t)), β ≠ 0,
and
q(t) = (1/|δ|)(|δ| ∫₀¹ Φ₀(κt) dκ + |ϵ| (1 + ∫₀¹ Φ₀(κ h₁(t) t) dκ) h₁(t)), δ ≠ 0,
are such that the functions p(t) − 1 and q(t) − 1 have SZ in the interval (0, ρ₀), which are denoted by ρ_p and ρ_q, respectively.
Let ρ = min{ρ_p, ρ_q} and T₁ = [0, ρ).
Let δ₁ = βδ − λ, δ₂ = βϵ + γδ − μ, and δ₃ = γϵ − ξ. Define the functions g : T₁ → T and h₂ : T₁ → T by
g(t) = |δ₁| (1 + ∫₀¹ Φ₀(κt) dκ)² + |δ₂| (1 + ∫₀¹ Φ₀(κt) dκ)(1 + ∫₀¹ Φ₀(κ h₁(t) t) dκ) h₁(t) + |δ₃| (1 + ∫₀¹ Φ₀(κ h₁(t) t) dκ)² h₁²(t),
h₂(t) = (∫₀¹ Φ((1−κ)t) dκ)/(1 − Φ₀(t)) + (g(t)(1 + ∫₀¹ Φ₀(κt) dκ) t)/(|β||δ|(1 − p(t))(1 − q(t))(1 − Φ₀(t))).
(E4)
The function h₂(t) − 1 has a SZ, which is denoted by r₂. Set
r = min{r_j}, j = 1, 2, and T₂ = [0, r).   (8)
Then, we have for each t ∈ T₂
0 ≤ Φ₀(t) < 1,   (9)
0 ≤ p(t) < 1,   (10)
0 ≤ q(t) < 1,   (11)
0 ≤ g(t)   (12)
and
0 ≤ h_j(t) < 1.   (13)
The functions Φ₀ and Φ relate to the derivative H′ appearing in the method (2).
(E5)
There exist a non-zero number L and a solution x* ∈ D of (1) such that, for each u ∈ D, we obtain
|L⁻¹(H′(u) − L)| ≤ Φ₀(|u − x*|).
Let D₀ = B(x*, ρ₀) ∩ D.
The notation B(x, ρ̄) is used to denote an open real interval with center x and radius ρ̄ > 0. Moreover, the symbol B[x, ρ̄] stands for the closure of the interval B(x, ρ̄).
(E6)
For each u₁, u₂ ∈ D₀, we have
|L⁻¹(H′(u₂) − H′(u₁))| ≤ Φ(|u₂ − u₁|)
and
(E7)
B[x*, r] ⊂ D.
Remark 1. 
(i) 
The number r given by the formula (8) is shown in Theorem 1 to be a convergence radius for the method (2).
(ii) 
L = H′(x*) is the usual, but not necessarily the most appropriate, selection. However, in this case, x* is a simple solution of Equation (1). In Theorem 1, we do not assume that x* is a simple solution. Hence, the method (2) is also useful for finding solutions of multiplicity greater than one, provided that L ≠ H′(x*). The preceding notation, together with the conditions (E1)–(E7), is used in the local analysis of convergence for the method (2). Related work on other methods for Banach space valued operators can be found in [1,7].
Theorem 1. 
Suppose that the conditions (E1)–(E7) hold. Then, if x₀ ∈ B(x*, r) ∖ {x*}, the sequence {x_θ} generated by the method (2) is well defined in the ball B(x*, r), and the following items hold:
x_θ ∈ B(x*, r),   (14)
ρ_{y_θ} ≤ h₁(ρ_{x_θ}) ρ_{x_θ} ≤ ρ_{x_θ} < r,   (15)
ρ_{x_{θ+1}} ≤ h₂(ρ_{x_θ}) ρ_{x_θ} ≤ ρ_{x_θ},   (16)
where ρ_{x_θ} = |x_θ − x*|, ρ_{y_θ} = |y_θ − x*| (for θ = 0, 1, 2, …) and x* = lim_{θ→∞} x_θ.
Proof. 
The items (14)–(16) are shown using induction on θ. By the hypothesis x₀ ∈ B(x*, r) ∖ {x*}, the item (14) clearly holds for θ = 0, since B(x*, r) ∖ {x*} ⊂ B(x*, r). Pick u ∈ B(x*, r). It follows from (8), (9) and (E5) in turn that
|L⁻¹(H′(u) − L)| ≤ Φ₀(|u − x*|) ≤ Φ₀(r) < 1.   (17)
So, H′(u) ≠ 0 by the standard Banach perturbation lemma, and
|H′(u)⁻¹ L| ≤ 1/(1 − Φ₀(|u − x*|)).   (18)
Then, notice that the iterate y₀ is well defined by the first substep of the method (2) for θ = 0, since H′(x₀) ≠ 0, and
y₀ − x* = x₀ − x* − H′(x₀)⁻¹ H(x₀) + (1 − α) H′(x₀)⁻¹ H(x₀)
= H′(x₀)⁻¹ ∫₀¹ (H′(x₀) − H′(x* + κ(x₀ − x*))) dκ (x₀ − x*) + (1 − α) H′(x₀)⁻¹ H(x₀).
The application of condition (E6), (8), (17) (for u = x₀), (13) (for j = 1), and (18) gives
ρ_{y₀} ≤ (∫₀¹ Φ((1−κ) ρ_{x₀}) dκ + |1−α| (1 + ∫₀¹ Φ₀(κ ρ_{x₀}) dκ)) ρ_{x₀}/(1 − Φ₀(ρ_{x₀})) = h₁(ρ_{x₀}) ρ_{x₀} ≤ ρ_{x₀} < r.   (19)
Thus, the item (15) holds for θ = 0, and the iterate y₀ ∈ B(x*, r).
Next, we show that A₀ ≠ 0 and B₀ ≠ 0. For x₀ ≠ x*, we obtain in turn
|(βL(x₀ − x*))⁻¹ (A₀ − βL(x₀ − x*))| ≤ |(βL(x₀ − x*))⁻¹| (|β| |H(x₀) − H(x*) − L(x₀ − x*)| + |γ| |H(y₀)|),
where H_{x₀} = H(x₀) and we use
H(x₀) = H(x₀) − H(x*) = ∫₀¹ H′(x* + κ(x₀ − x*)) dκ (x₀ − x*).
Thus, by (E5),
|L⁻¹ H(x₀)| = |∫₀¹ L⁻¹ (H′(x* + κ(x₀ − x*)) − L + L) dκ (x₀ − x*)| ≤ (1 + ∫₀¹ Φ₀(κ ρ_{x₀}) dκ) ρ_{x₀},
so
|(βL(x₀ − x*))⁻¹ (A₀ − βL(x₀ − x*))| ≤ (1/|β|)(|β| ∫₀¹ Φ₀(κ ρ_{x₀}) dκ + |γ| (1 + ∫₀¹ Φ₀(κ ρ_{y₀}) dκ) h₁(ρ_{x₀})) = p₀ < 1,
hence
|A₀⁻¹ L| ≤ 1/(|β| (1 − p₀) ρ_{x₀}).   (20)
Similarly, for
q₀ = (1/|δ|)(|δ| ∫₀¹ Φ₀(κ ρ_{x₀}) dκ + |ϵ| (1 + ∫₀¹ Φ₀(κ ρ_{y₀}) dκ) h₁(ρ_{x₀})),
we have
|B₀⁻¹ L| ≤ 1/(|δ| (1 − q₀) ρ_{x₀}).   (21)
It follows from (20) and (21) that the iterate x₁ is well defined by the second substep of the method (2) for θ = 0, and
x₁ − x* = x₀ − x* − H′(x₀)⁻¹ H(x₀) + (1 − A₀⁻¹ B₀⁻¹ C₀) H′(x₀)⁻¹ H(x₀)
= x₀ − x* − H′(x₀)⁻¹ H(x₀) + A₀⁻¹ B₀⁻¹ (A₀ B₀ − C₀) H′(x₀)⁻¹ H(x₀).   (22)
We need an estimate on |L⁻²(A₀B₀ − C₀)|. Notice that
A₀B₀ − C₀ = (β H_{x₀} + γ H_{y₀})(δ H_{x₀} + ϵ H_{y₀}) − (λ H_{x₀}² + μ H_{x₀} H_{y₀} + ξ H_{y₀}²)
= (βδ − λ) H_{x₀}² + (βϵ + γδ − μ) H_{x₀} H_{y₀} + (γϵ − ξ) H_{y₀}²
= δ₁ H_{x₀}² + δ₂ H_{x₀} H_{y₀} + δ₃ H_{y₀}²,
leading to
|L⁻²(A₀B₀ − C₀)| ≤ (|δ₁| (1 + ∫₀¹ Φ₀(κ ρ_{x₀}) dκ)² + |δ₂| (1 + ∫₀¹ Φ₀(κ ρ_{x₀}) dκ)(1 + ∫₀¹ Φ₀(κ ρ_{y₀}) dκ) h₁(ρ_{x₀}) + |δ₃| (1 + ∫₀¹ Φ₀(κ ρ_{y₀}) dκ)² h₁²(ρ_{x₀})) ρ_{x₀}² = g₀ ρ_{x₀}².   (23)
By adopting (8), (13) (for j = 2), (18) and (20)–(23), it follows that
ρ_{x₁} ≤ (∫₀¹ Φ((1−κ) ρ_{x₀}) dκ/(1 − Φ₀(ρ_{x₀})) + g₀ (1 + ∫₀¹ Φ₀(κ ρ_{x₀}) dκ) ρ_{x₀}/(|β||δ|(1 − p₀)(1 − q₀)(1 − Φ₀(ρ_{x₀})))) ρ_{x₀} ≤ h₂(ρ_{x₀}) ρ_{x₀} ≤ ρ_{x₀}.
Therefore, the item (16) holds for θ = 0, and the iterate x₁ ∈ B(x*, r), i.e., (14) is true for θ = 1. The induction for the items (14)–(16) is simply terminated if the iterates x_k, y_k, x_{k+1} replace x₀, y₀, x₁, respectively, in the preceding computations. Then, from the estimate
|x_{k+1} − x*| ≤ δ* |x_k − x*| ≤ δ*^{k+1} |x₀ − x*| < r,
where δ* = h₂(|x₀ − x*|) ∈ [0, 1), we conclude that the iterate x_{k+1} ∈ B(x*, r) as well as lim_{k→∞} x_k = x*. □
Next, an interval is determined that contains x* as the only solution of Equation (1).
Proposition 1. 
Suppose that there exists ρ₁ > 0 such that the condition (E5) holds on the interval B(x*, ρ₁), and there exists ρ₂ ≥ ρ₁ such that
∫₀¹ Φ₀(κ ρ₂) dκ < 1.   (25)
Set D₁ = B[x*, ρ₂] ∩ D. Then, the only solution of Equation (1) in the set D₁ is x*.
Proof. 
Let x̄ ∈ D₁ with H(x̄) = 0. Set L₁ = ∫₀¹ H′(x* + κ(x̄ − x*)) dκ. The application of the condition (E5) on the interval B(x*, ρ₁) and (25) imply
|L⁻¹(L₁ − L)| ≤ ∫₀¹ Φ₀(κ |x̄ − x*|) dκ ≤ ∫₀¹ Φ₀(κ ρ₂) dκ < 1.
So, L₁ ≠ 0. Then, from the approximation
x̄ − x* = L₁⁻¹ (H(x̄) − H(x*)) = L₁⁻¹ (0) = 0,
it follows x ¯ = x * . □
Remark 2. 
Note that if all the conditions of Theorem 1 are satisfied, we can set ρ₁ = r in Proposition 1.

3. Convergence 2: Semi-Local Analysis

The calculations are similar to those in the previous section, but the roles of the solution x * and the functions Φ 0 and Φ are now taken by the starting point x 0 and the functions Ψ 0 and Ψ , respectively.
Suppose:
(M1)
There exists a CD function Ψ₀ : T → T such that Ψ₀(t) − 1 has a SZ, which is denoted by s₀. Set T₃ = [0, s₀).
(M2)
There exists a CD function Ψ : T₃ → T.
It is convenient to set γ₁ = αβδ − λ, γ₂ = α(βϵ + γδ) − μ and γ₃ = αγϵ − ξ.
Define the sequences {a_θ}, {b_θ}, {ĝ_θ}, {p̂_θ}, {q̂_θ} and {c_{θ+1}}, for a₀ = 0, some b₀ ≥ 0 and each θ = 0, 1, 2, 3, …, by
d_θ = ∫₀¹ Ψ((1−κ)(b_θ − a_θ)) dκ + |1 − 1/α| (1 + Ψ₀(a_θ)),
ĝ_θ = |γ₁| (1 + Ψ₀(a_θ))² + |γ₂| (1 + Ψ₀(a_θ)) d_θ + |γ₃| d_θ²,
p̂_θ = (1/|β|)(|β| ∫₀¹ Ψ₀(a_θ + κ(b_θ − a_θ)) dκ + |β + γ| d_θ),
q̂_θ = (1/|δ|)(|δ| ∫₀¹ Ψ₀(a_θ + κ(b_θ − a_θ)) dκ + |δ + ϵ| d_θ),
a_{θ+1} = b_θ + ĝ_θ (b_θ − a_θ)/(|αβδ| (1 − p̂_θ)(1 − q̂_θ)),
c_{θ+1} = (1 + ∫₀¹ Ψ₀(a_θ + κ(a_{θ+1} − a_θ)) dκ)(a_{θ+1} − a_θ) + (1/|α|)(1 + Ψ₀(a_θ))(b_θ − a_θ)
and
b_{θ+1} = a_{θ+1} + |α| c_{θ+1}/(1 − Ψ₀(a_{θ+1})).   (28)
The following conditions assure the convergence of the sequences {a_θ} and {b_θ}.
(M3)
There exists s ∈ [0, s₀) such that, for each θ = 0, 1, 2, …,
Ψ₀(a_θ) < 1, p̂_θ < 1, q̂_θ < 1 and a_θ ≤ s.
This condition together with (28) implies 0 ≤ a_θ ≤ b_θ ≤ a_{θ+1} < s, and that there exists a* ∈ [0, s] such that lim_{θ→∞} a_θ = lim_{θ→∞} b_θ = a*. It is well known that a* is the unique least upper bound of the sequence {a_θ}. As in the local analysis, the functions Ψ₀ and Ψ are related to H′.
(M4)
There exist x₀ ∈ D and a non-zero number L such that, for each u ∈ D,
|L⁻¹(H′(u) − L)| ≤ Ψ₀(|u − x₀|).
Set D₂ = D ∩ B(x₀, s₀). If we take u = x₀ in this condition, we obtain
|L⁻¹(H′(x₀) − L)| ≤ Ψ₀(0) < 1.
Thus, H′(x₀) ≠ 0, and the iterate y₀ exists. So, we can take b₀ ≥ |α| |H′(x₀)⁻¹ H(x₀)|.
(M5)
For each u₁, u₂ ∈ D
|L⁻¹(H′(u₂) − H′(u₁))| ≤ Ψ(|u₂ − u₁|)
and
(M6)
B[x₀, a*] ⊂ D.
Remark 3. 
A popular choice is L = H′(x₀). (See also the explanations in Remark 1.)
The semi-local analysis of convergence follows, as in the local case, in the next set of results.
Theorem 2. 
Suppose that the conditions (M1)–(M6) hold. Then, the sequence {x_θ} generated by the method (2) is well defined and the following items hold:
x_k ∈ B(x₀, a*),   (29)
|y_k − x_k| ≤ b_k − a_k,   (30)
|x_{k+1} − y_k| ≤ a_{k+1} − b_k,   (31)
and there exists a solution x* ∈ B[x₀, a*] of the equation H(x) = 0, such that
|x* − x_k| ≤ a* − a_k.   (32)
Proof. 
Induction shall establish the items (29)–(31). Clearly, (29) holds for k = 0. The choice of b₀ and the first substep of the method (2) imply
|y₀ − x₀| = |α| |H′(x₀)⁻¹ H(x₀)| ≤ b₀ = b₀ − a₀ < a*.
Thus, the item (30) holds for k = 0, and the iterate y₀ ∈ B(x₀, a*). Next, we need the estimates that correspond to the local analysis:
H(y_k) = H(y_k) − H(x_k) − (1/α) H′(x_k)(y_k − x_k) = H(y_k) − H(x_k) − H′(x_k)(y_k − x_k) + (1 − 1/α) H′(x_k)(y_k − x_k),
so
|L⁻¹ H(y_k)| ≤ ∫₀¹ Ψ((1−κ)|y_k − x_k|) dκ |y_k − x_k| + |1 − 1/α| (1 + Ψ₀(|x_k − x₀|)) |y_k − x_k| ≤ (∫₀¹ Ψ((1−κ)(b_k − a_k)) dκ + |1 − 1/α| (1 + Ψ₀(a_k))) |y_k − x_k| = d_k |y_k − x_k|
and, for y_k ≠ x_k,
|(βL(x_k − y_k))⁻¹ (A_k − βL(x_k − y_k))| ≤ (1/|β|)(|β| ∫₀¹ Ψ₀(a_k + κ(b_k − a_k)) dκ + |β + γ| d_k) = p̂_k < 1,
so
|A_k⁻¹ L| ≤ 1/(|β| (1 − p̂_k) |y_k − x_k|),
and similarly,
|B_k⁻¹ L| ≤ 1/(|δ| (1 − q̂_k) |y_k − x_k|).
Moreover, by subtracting the first substep from the second one, we obtain
x_{k+1} − y_k = α H′(x_k)⁻¹ H(x_k) − A_k⁻¹ B_k⁻¹ C_k H′(x_k)⁻¹ H(x_k) = A_k⁻¹ B_k⁻¹ (α A_k B_k − C_k) H′(x_k)⁻¹ H(x_k) = (1/α) A_k⁻¹ B_k⁻¹ (C_k − α A_k B_k)(y_k − x_k).
The following estimates are also needed:
C_k − α A_k B_k = −((αβδ − λ) H(x_k)² + (α(βϵ + γδ) − μ) H(x_k) H(y_k) + (αγϵ − ξ) H(y_k)²) = −(γ₁ H(x_k)² + γ₂ H(x_k) H(y_k) + γ₃ H(y_k)²),
leading to
|L⁻²(C_k − α A_k B_k)| ≤ (|γ₁| (1 + Ψ₀(a_k))² + |γ₂| (1 + Ψ₀(a_k)) d_k + |γ₃| d_k²) |y_k − x_k|² = ĝ_k |y_k − x_k|².
Hence, (31) follows from
|x_{k+1} − y_k| ≤ ĝ_k |y_k − x_k|/(|αβδ| (1 − p̂_k)(1 − q̂_k)) ≤ ĝ_k (b_k − a_k)/(|αβδ| (1 − p̂_k)(1 − q̂_k)) = a_{k+1} − b_k
and
|x_{k+1} − x₀| ≤ |x_{k+1} − y_k| + |y_k − x₀| ≤ a_{k+1} − b_k + b_k − a₀ = a_{k+1} < a*.
Furthermore, the first substep gives
H(x_{k+1}) = H(x_{k+1}) − H(x_k) − (1/α) H′(x_k)(y_k − x_k).
So, we have
|L⁻¹ H(x_{k+1})| ≤ (1 + ∫₀¹ Ψ₀(a_k + κ(a_{k+1} − a_k)) dκ)(a_{k+1} − a_k) + (1/|α|)(1 + Ψ₀(a_k))(b_k − a_k) = c_{k+1}.   (35)
Consequently, we obtain
|y_{k+1} − x_{k+1}| = |α| |H′(x_{k+1})⁻¹ H(x_{k+1})| ≤ |α| |H′(x_{k+1})⁻¹ L| |L⁻¹ H(x_{k+1})| ≤ |α| c_{k+1}/(1 − Ψ₀(|x_{k+1} − x₀|)) ≤ |α| c_{k+1}/(1 − Ψ₀(a_{k+1})) = b_{k+1} − a_{k+1}
and
|y_{k+1} − x₀| ≤ |y_{k+1} − x_{k+1}| + |x_{k+1} − x₀| ≤ b_{k+1} − a_{k+1} + a_{k+1} − a₀ = b_{k+1} < a*.
The induction for the items (29)–(31) is thus terminated, and all iterates x_k, y_k belong to B(x₀, a*). Therefore, the sequence {x_k} is fundamental, hence convergent, by the condition (M3). Consequently, there exists x* ∈ B[x₀, a*] such that lim_{k→∞} x_k = x*. Since lim_{k→∞} c_{k+1} = 0, it follows from (35) and the continuity of H that H(x*) = 0. Finally, by letting i → +∞ in |x_{k+i} − x_k| ≤ a_{k+i} − a_k, the item (32) follows. □
The uniqueness of the solution of Equation (1) is established in the next set of results.
Proposition 2. 
Suppose that there exists a solution z ∈ B(x₀, s₁) of Equation (1) for some s₁ > 0; that the condition (M4) holds on the interval B(x₀, s₁); and that there exists s₂ ≥ s₁ such that
∫₀¹ Ψ₀((1−κ) s₁ + κ s₂) dκ < 1.   (38)
Set D₃ = D ∩ B[x₀, s₂]. Then, z is the only solution of Equation (1) in D₃.
Proof. 
Let z₁ ∈ D₃ with H(z₁) = 0. Define L₂ = ∫₀¹ H′(z + κ(z₁ − z)) dκ. By the hypothesis (M4) and the expression (38), we obtain in turn
|L⁻¹(L₂ − L)| ≤ ∫₀¹ Ψ₀((1−κ)|z − x₀| + κ|z₁ − x₀|) dκ ≤ ∫₀¹ Ψ₀((1−κ) s₁ + κ s₂) dκ < 1,
so L₂ ≠ 0. Then, from the approximation z₁ − z = L₂⁻¹(H(z₁) − H(z)) = L₂⁻¹(0) = 0, we conclude that z₁ = z. □
Remark 4. 
(i) 
The limit point a* can be replaced by s₀ in the condition (M6).
(ii) 
In Proposition 2, if all the conditions (M1)–(M6) hold, we can take z = x* and s₁ = a*.

4. The Convergence Order of Method (2)

In this section, we show that the convergence order of the method (2) is four, using Taylor series expansions [2,3].
Theorem 3. 
Assume that Equation (1) has a simple solution x* and that the function H is sufficiently differentiable in a neighborhood of x*. Then, the convergence order of the method (7) is four.
Proof. 
Set l_θ = x_θ − x*, l̄_θ = y_θ − x* and Δ_k = H⁽ᵏ⁾(x*)/(k! H′(x*)), k = 2, 3, 4, …. First, we expand H_θ about x* using the Taylor series to obtain in turn
H_θ = H′(x*)(l_θ + Δ₂ l_θ² + Δ₃ l_θ³ + Δ₄ l_θ⁴) + O(l_θ⁵),
so
H′_θ = H′(x*)(1 + 2Δ₂ l_θ + 3Δ₃ l_θ² + 4Δ₄ l_θ³) + O(l_θ⁴).
Consequently, we have
H_θ/H′_θ = l_θ − Δ₂ l_θ² + (2Δ₂² − 2Δ₃) l_θ³ + (7Δ₂Δ₃ − 4Δ₂³ − 3Δ₄) l_θ⁴ + O(l_θ⁵),
and
l̄_θ = l_θ − H_θ/H′_θ = Δ₂ l_θ² + 2(Δ₃ − Δ₂²) l_θ³ + (4Δ₂³ + 3Δ₄ − 7Δ₂Δ₃) l_θ⁴ + O(l_θ⁵).
Similarly, we obtain
H_{y_θ} = H′(x*)(l̄_θ + Δ₂ l̄_θ² + Δ₃ l̄_θ³ + Δ₄ l̄_θ⁴) + O(l̄_θ⁵).
Substituting these expansions into (7), we obtain
l_{θ+1} = O(l_θ⁴).
Hence, this ends the proof. □
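As an independent check (not part of the original proof), the order computation of Theorem 3 can be reproduced with symbolic series expansions; the normalization H′(x*) = 1 is a convenient assumption that does not affect the order:

```python
import sympy as sp

e, d2, d3, d4 = sp.symbols('e d2 d3 d4')

# Taylor model of H about a simple zero x*, normalized so H'(x*) = 1
H  = lambda t: t + d2*t**2 + d3*t**3 + d4*t**4
Hp = lambda t: 1 + 2*d2*t + 3*d3*t**2 + 4*d4*t**3

# Error of the Newton substep y = x - H/H', where e = x - x*
ebar = sp.series(e - H(e)/Hp(e), e, 0, 5).removeO()

# Second substep of method (7)
Hx, Hy = H(e), H(ebar)
num = Hx**2 - 3*Hx*Hy + Hy**2
den = (Hx - Hy)*(Hx - 3*Hy)
enew = e - num/den*Hx/Hp(e)

# If the order is four, every term below e**4 must cancel
low_order = sp.series(enew, e, 0, 4).removeO()
```

In this sketch the low-order terms cancel and the leading error term comes out as −Δ₂Δ₃ l_θ⁴, in agreement with the fourth order claimed in Theorem 3.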
Remark 5. 
Notice that in Theorem 3 the function H must be at least five times differentiable. Therefore, Theorem 3 is not applicable for solving the equation given in the Introduction using the method (7).

5. Numerical Experiments

We performed a computational analysis to demonstrate the practical significance of our theoretical results. To this end, we selected four numerical problems to demonstrate our computational findings. These numerical examples are categorized into two groups based on convergence analysis: semi-local area convergence (SLAC) and local area convergence (LAC). The SLAC examples illustrate how the method behaves in a broader region around the initial guess, while the LAC examples focus on the convergence properties in a more restricted, localized area. This distinction helps us to better understand the method’s effectiveness in different settings.
  • Local area convergence
We investigated local area convergence (LAC) using the first problem as a case study. Detailed information about this problem is provided in Example 1. The radii of convergence are presented in Table 1.
  • Semi-local area convergence
On the other hand, the remaining examples were selected to explore SLAC. The numerical results for semi-local convergence based on the well-known Planck radiation problem (Example 2) are presented in Table 2. Table 3 provides the numerical results for the semi-local convergence of the continuous stirred tank reactor (CSTR) problem, described in Example 3, which is another well-known problem in applied science. Finally, the numerical results for semi-local convergence based on the blood rheology model (Example 4) are shown in Table 4.
Furthermore, we also calculated the computational order of convergence (COC) for each example. The COC provides a measure of how quickly the iterative method approaches the required solution. It can be determined using the following formula:
η = ln(|x_{κ+1} − x*|/|x_κ − x*|) / ln(|x_κ − x*|/|x_{κ−1} − x*|), for κ = 1, 2, …,
or by the approximate computational order of convergence (ACOC) [17,18]:
η* = ln(|x_{κ+1} − x_κ|/|x_κ − x_{κ−1}|) / ln(|x_κ − x_{κ−1}|/|x_{κ−1} − x_{κ−2}|), for κ = 2, 3, …
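The two formulas above can be sketched in code as follows (an illustration, not the authors’ Mathematica program); Newton iterates for the hypothetical test equation x² − 2 = 0 are used to exercise them, so both estimates should come out close to 2:

```python
from math import log, sqrt

def coc(xs, x_star):
    """COC eta from the last three errors |x_k - x*| (x* known)."""
    e = [abs(x - x_star) for x in xs[-3:]]
    return log(e[2] / e[1]) / log(e[1] / e[0])

def acoc(xs):
    """ACOC eta*: replaces the unknown x* by successive differences."""
    d = [abs(b - a) for a, b in zip(xs[-4:], xs[-3:])]
    return log(d[2] / d[1]) / log(d[1] / d[0])

# Newton iterates for x^2 - 2 = 0 (quadratic convergence expected)
xs = [1.5]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x * x - 2) / (2 * x))
```

Only a handful of iterates are needed in double precision; once the error hits machine accuracy, the logarithms in (the COC formula) become unreliable, which is why the paper uses multi-precision arithmetic.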
The termination criteria for the program are defined as follows: (i) |x_{κ+1} − x_κ| < ϵ and (ii) |H(x_κ)| < ϵ, where ϵ = 10⁻³⁰⁰. All computations were carried out using Mathematica 11 with multi-precision arithmetic. The specifications of the computer used for programming are as follows:
  • Device Name: HP
  • Edition: Windows 10 Enterprise
  • Version: 22H2
  • Installed RAM: 8.00 GB (7.89 GB usable)
  • OS Build: 19045.2006
  • Processor: Intel(R) Core(TM) i7-4790 CPU @ 3.60 GHz
  • System type: 64-bit operating system, ×64-based processor

5.1. Examples for LAC

To illustrate the theoretical findings of local convergence provided in Section 2, we select Example 1.
Example 1. 
Let S = ℝ and D = B(0, 1). Define the function H on D for v ∈ S by
H(v) = e^v − 1.
Then, x* = 0 solves the equation H(v) = 0, and we choose L = H′(x*). Moreover, we have
H′(v) = e^v,
so H′(x*) = 1. Consequently, the conditions (E5) and (E6) hold if we choose
Φ₀(t) = (e − 1)t and Φ(t) = e^(1/(e−1)) t.
The radii of convergence are given in Table 1.
Table 1. Radii of the method (7) for Example 1.

Method | ρ₀ | r₁ | ρ_p | ρ_q | ρ | r₂ | r
(7) | 0.581977 | 0.38269 | 0.31685 | 0.19639 | 0.19639 | 0.13829 | 0.13829
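The first two entries of Table 1 can be reproduced directly from the definitions in Section 2; the sketch below (an illustration, not the authors’ code) computes ρ₀ as the smallest zero of Φ₀(t) − 1 and r₁ by bisection on h₁(t) − 1 with α = 1:

```python
import math

E = math.e
phi0 = lambda t: (E - 1.0) * t                  # Phi0(t) = (e-1)t
phi  = lambda t: math.exp(1.0/(E - 1.0)) * t    # Phi(t)  = e^{1/(e-1)} t

rho0 = 1.0 / (E - 1.0)                          # zero of Phi0(t) - 1

def h1(t):
    # h1(t) = (integral of Phi((1-k)t) dk) / (1 - Phi0(t)) for alpha = 1;
    # for the linear Phi the integral evaluates to Phi(t)/2
    return (phi(t) / 2.0) / (1.0 - phi0(t))

def smallest_zero(f, a, b, it=200):
    """Bisection for f(t) = 0 assuming f(a) < 0 < f(b)."""
    for _ in range(it):
        m = 0.5 * (a + b)
        if f(m) < 0.0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

r1 = smallest_zero(lambda t: h1(t) - 1.0, 0.0, 0.99 * rho0)
```

The remaining radii ρ_p, ρ_q and r₂ follow the same pattern from the definitions of p, q and h₂, which involve more bookkeeping and are omitted from this sketch.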

5.2. Examples for SLAC

We consider three examples, Examples 2 through 4, to demonstrate the theoretical results of semi-local convergence proposed in Section 3. The methods under comparison are Newton’s method (3), Sharma’s method (4), the Traub–Potra–Pták method (6), and the method (7), which we refer to as NM, SM, PM, and OM, respectively.
Example 2. Planck’s radiation problem: 
The spectral density of electromagnetic radiation emitted by a black body at a specific temperature and in thermal equilibrium is determined using Planck’s radiation equation [19], which is expressed as follows:
G(y) = 8πch / (y⁵ (e^(ch/(ykT)) − 1)),
where T, y, k, h and c denote the black body’s absolute temperature, the radiation wavelength, the Boltzmann constant, the Planck constant, and the speed of light in a vacuum, respectively. To determine the wavelength y that maximizes the energy density G(y), we set G′(y) = 0. This results in the following equation:
(ch/(ykT)) e^(ch/(ykT)) / (e^(ch/(ykT)) − 1) = 5.
By setting x = ch/(ykT), we have
H₃(x) = e^(−x) − 1 + x/5.
The required value of the root is γ = 4.96511423174428. Using this root, the wavelength y can be determined from the relation x = ch/(ykT). The Planck problem was tested with the initial guess x₀ = 5.4, and the computational results are shown in Table 2.
Table 2. Convergence pattern of various approaches to the Planck radiation problem (Example 2).

Methods | |H(x₃)| | |x₄ − x₃| | n | η | CPU timing
NM | 4.7 × 10⁻¹⁷ | 2.4 × 10⁻¹⁶ | 7 | 2.0000 | 0.0034916
SM | 2.9 × 10⁻⁵⁸ | 1.5 × 10⁻⁵⁷ | 5 | 3.0000 | 0.0106235
PM | 1.2 × 10⁻⁵⁴ | 6.1 × 10⁻⁵⁴ | 5 | 3.0000 | 0.0081417
OM | 2.3 × 10⁻¹¹¹ | 1.2 × 10⁻¹¹⁰ | 4 | 4.0000 | 0.0152859
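For reference, the root quoted above can be reproduced with a few Newton steps (a double-precision sketch, not the authors’ multi-precision Mathematica program):

```python
import math

H3  = lambda x: math.exp(-x) - 1.0 + x / 5.0   # Planck reduction
H3p = lambda x: -math.exp(-x) + 0.2            # derivative of H3

x = 5.4                       # initial guess used in the paper
for _ in range(20):
    x -= H3(x) / H3p(x)       # Newton step
```

Double precision already recovers all the digits quoted for the root 4.96511423174428.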
Example 3. Continuous stirred tank reactor (CSTR): 
We assume an isothermal continuous stirred tank reactor (CSTR); for details, see [20]. Components M₁ and M₂ are fed to the reactor at rates B₁ and B₂ − B₁, respectively. The reaction scheme in the reactor is then as follows:
M₁ + M₂ → B₁,
B₁ + M₂ → C₁,
C₁ + M₂ → D₁,
C₁ + M₂ → E₁.
While creating a basic model for feedback control systems, Douglas [21] also studied the aforementioned model. He then transformed it into the subsequent mathematical expression:
R_C₁ · 2.98(x + 2.25) / ((x + 1.45)(x + 2.85)²(x + 4.35)) = −1,   (42)
where R_C₁ is the gain of the proportional controller. The expression (42) is balanced for negative real values of R_C₁. Specifically, by setting R_C₁ = 0, we obtain
H 4 ( x ) = x 4 + 11.50 x 3 + 47.49 x 2 + 83.06325 x + 51.23266875 .
The zeros of the function H₄ correspond to the poles of the open-loop transfer function. The function H₄ has four zeros: γ = −1.45, −2.85, −2.85, −4.35. We select γ = −1.45 as the desired zero and choose x₀ = −1.6 as the initial point for H₄. The computational results are presented in Table 3.
Table 3. Convergence pattern of various approaches to the CSTR problem (Example 3).

Methods | |H(x₃)| | |x₄ − x₃| | n | η | CPU timing
NM | 3.6 × 10⁻⁴ | 6.4 × 10⁻⁵ | 9 | 2.0000 | 0.0004944
SM | 1.7 × 10⁻¹³ | 2.9 × 10⁻¹⁴ | 6 | 3.0000 | 0.0006272
PM | 1.2 × 10⁻⁶ | 2.0 × 10⁻⁷ | 7 | 3.0000 | 0.000563
OM | 1.4 × 10⁻⁴⁷ | 2.5 × 10⁻⁴⁸ | 5 | 4.0000 | 0.0008959
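The selected zero −1.45 of H₄ can be checked numerically with the sketch below (an illustration in double precision; the factorization comment follows from the reconstruction of (42)):

```python
H4  = lambda x: x**4 + 11.50*x**3 + 47.49*x**2 + 83.06325*x + 51.23266875
H4p = lambda x: 4*x**3 + 34.50*x**2 + 94.98*x + 83.06325

x = -1.6                     # initial point used in the paper
for _ in range(50):
    x -= H4(x) / H4p(x)      # Newton step

# H4 factors as (x + 1.45)(x + 2.85)^2 (x + 4.35),
# so -2.85 (double) and -4.35 are the remaining zeros
```

Note that −2.85 is a double zero, so methods relying on simple-root behavior would converge more slowly toward it; −1.45 is simple and unproblematic.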
Example 4. Blood rheology model: 
Finally, we examine the physical and flow properties of blood using the blood rheology model [22]. Blood is modeled as a Casson fluid because it does not follow the Newtonian fluid rules. According to the Casson fluid model, simple fluids move through tubes in a way that creates a velocity difference between the walls and the center, where there is minimal deformation. We study the plug flow of Casson fluids using the following mathematical expression:
G = 1 − (16/7)√x + (4/3)x − x⁴/21.
We then set G = 0.40 to calculate the flow rate reduction, which simplifies to the following nonlinear equation:
H₅(x) = x⁸/441 − 8x⁵/63 − (2857144357/50000000000)x⁴ + 16x²/9 − (906122449/250000000)x + 3/10.
By adopting the proposed methods, we determined the desired zero of the function H₅(x) to be x = 0.08643356 and chose x₀ = 0.07 as the starting point for H₅. The computed results are shown in Table 4.
Table 4. Convergence pattern of various approaches to the blood rheology model (Example 4).

Methods | |H(x₃)| | |x₄ − x₃| | n | η | CPU timing
NM | 2.1 × 10⁻¹⁶ | 6.2 × 10⁻¹⁷ | 7 | 2.0000 | 0.001264
SM | 1.5 × 10⁻⁵⁵ | 4.6 × 10⁻⁵⁶ | 5 | 3.0000 | 0.0032786
PM | 1.1 × 10⁻⁵¹ | 3.3 × 10⁻⁵² | 5 | 3.0000 | 0.0022388
OM | 2.6 × 10⁻¹⁶⁴ | 8.0 × 10⁻¹⁶⁵ | 4 | 4.0000 | 0.0049327
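As with the previous examples, the quoted zero can be reproduced with a short Newton loop (an illustrative double-precision sketch, unlike the multi-precision runs reported in Table 4):

```python
H5 = lambda x: (x**8/441 - 8*x**5/63 - 2857144357/50000000000*x**4
                + 16*x**2/9 - 906122449/250000000*x + 3/10)
H5p = lambda x: (8*x**7/441 - 40*x**4/63 - 4*2857144357/50000000000*x**3
                 + 32*x/9 - 906122449/250000000)

x = 0.07                     # starting point used in the paper
for _ in range(20):
    x -= H5(x) / H5p(x)      # Newton step
```

The loop settles at x ≈ 0.08643356, matching the zero used for the comparisons in Table 4.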

6. Concluding Remarks

The two-step Traub-type iterative method (2) with parameters and weight functions is used as a sample to demonstrate a new convergence technique which avoids the Taylor series. The method (2) unifies a plethora of other popular iterative methods [2,3,4,5,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24]. The usage of the method (2) is crucial in cases where the analytic form of the solution x* of Equation (1) is hard to find, for example, in the case of a fractional-order pseudo-hyperbolic telegraph equation [1,19,20,21,22]. Other advantages of this technique include weaker sufficient convergence conditions, information on the number of solutions in a certain interval, and a priori upper error estimates on ρ_{x_θ}. Furthermore, the semi-local convergence analysis of the iterative approaches was provided. Moreover, our theoretical findings are also supported by numerical results. The generality of this technique makes it applicable to other methods defined on the real line, the complex plane, or more abstract spaces such as Hilbert or Banach spaces. This is intended to be the direction of our future research [4,5,7,8,12,17,18,23,24]. It is also worth noting that the aforementioned references include methods involving inverses of linear operators, making them suitable for the application of our approach, since they have also used the Taylor series. Consequently, they share the same drawbacks (P1)–(P6).

Author Contributions

Conceptualization, R.B. and I.K.A.; methodology, R.B. and I.K.A.; software, R.B. and I.K.A.; validation, R.B. and I.K.A., formal analysis, R.B. and I.K.A.; investigation, R.B. and I.K.A.; resources, R.B. and I.K.A.; data curation, R.B. and I.K.A.; writing—original draft preparation, R.B. and I.K.A.; writing—review and editing, R.B. and I.K.A.; visualization, R.B. and I.K.A.; supervision, R.B. and I.K.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Argyros, I.K. The Theory and Applications of Iteration Methods, 2nd ed.; Engineering Series; CRC Press: Boca Raton, FL, USA, 2022. [Google Scholar]
  2. Ostrowski, A.M. Solution of Equations and System of Equations; Academic Press: New York, NY, USA, 1960. [Google Scholar]
  3. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice Hall: Hoboken, NJ, USA, 1964. [Google Scholar]
  4. Jain, P. Steffensen type methods for solving non-linear equations. Appl. Math. Comput. 2007, 194, 527–533. [Google Scholar] [CrossRef]
  5. Padilla, J.J.; Chicharro, F.; Cordero, A.; Torregrosa, J.R. Parametric family of root-finding iterative methods: Fractals of the basins of attraction. Fractal Fract. 2022, 6, 572. [Google Scholar] [CrossRef]
  6. Abbasbandy, S. Improving Newton-Raphson method for nonlinear equations by modified decomposition method. Appl. Math. Comput. 2003, 145, 887–893. [Google Scholar] [CrossRef]
  7. Behl, R.; Argyros, I.K.; Alharbi, S. A One-Parameter Family of Methods with a Higher Order of Convergence for Equations in a Banach Space. Mathematics 2024, 12, 1278. [Google Scholar] [CrossRef]
  8. Behl, R.; Argyros, I.K. Extended High Convergence Compositions for Solving Nonlinear Equations in Banach Space. Int. J. Comput. Methods 2024, 21, 2350021. [Google Scholar] [CrossRef]
  9. Sharma, J.R. A composite third order Newton–Steffensen method for solving nonlinear equations. Appl. Math. Comput. 2005, 169, 242–246. [Google Scholar] [CrossRef]
  10. Frontini, M.; Sormani, E. Some variant of Newton’s method with third order convergence. J. Comput. Appl. Math. 2003, 140, 419–426. [Google Scholar] [CrossRef]
  11. Homeier, H.H.H. On Newton type method with cubic convergence. J. Comput. Appl. Math. 2005, 176, 425–432. [Google Scholar] [CrossRef]
  12. King, R. A family of fourth order methods for nonlinear equations. SIAM J. Numer. Anal. 1973, 10, 876–879. [Google Scholar] [CrossRef]
  13. Basto, M.; Semiao, V.; Calheiros, F.L. A new iterative method to compute nonlinear equations. Appl. Math. Comput. 2006, 173, 468–483. [Google Scholar] [CrossRef]
  14. Jarratt, P. Some efficient fourth order multipoint methods for solving equations. BIT 1969, 9, 119–124. [Google Scholar] [CrossRef]
  15. Jarratt, P. Some fourth order methods for non linear equations. Math. Comput. 1966, 20, 434–437. [Google Scholar] [CrossRef]
  16. Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
  17. Candelario, G.; Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. Generalized conformable fractional Newton-type method for solving nonlinear systems. Numer. Algorithms 2023, 93, 1171–1208. [Google Scholar] [CrossRef]
  18. Capdevila, R.R.; Cordero, A.; Torregrosa, J.R. Isonormal surfaces: A new tool for the multidimensional dynamical analysis of iterative methods for solving nonlinear systems. Math. Methods Appl. Sci. 2022, 45, 3360–3375. [Google Scholar] [CrossRef]
  19. Bradie, B. A Friendly Introduction to Numerical Analysis; Pearson Education Inc.: New Delhi, India, 2006. [Google Scholar]
  20. Constantinides, A.; Mostoufi, N. Numerical Methods for Chemical Engineers with MATLAB Applications; Prentice Hall: New Jersey, NJ, USA, 1999. [Google Scholar]
  21. Douglas, J.M. Process Dynamics and Control; Prentice Hall: New Jersey, NJ, USA, 1972; Volume 2. [Google Scholar]
  22. Fournier, R.L. Basic Transport Phenomena in Biomedical Engineering; Taylor & Francis: New York, NY, USA, 2007. [Google Scholar]
  23. Behl, R.; Argyros, I.K. On the solution of generalized Banach space valued equations. Mathematics 2022, 10, 132. [Google Scholar] [CrossRef]
  24. Cordero, A.; Leonardo-Sepúlveda, M.A.; Torregrosa, J.R.; Vassileva, M.P. Increasing in three units the order of convergence of iterative methods for solving nonlinear systems. Math. Comput. Simul. 2024, 223, 509–522. [Google Scholar] [CrossRef]