Article

On Highly Efficient Fractional Numerical Method for Solving Nonlinear Engineering Models

by Mudassir Shams 1,2 and Bruno Carpentieri 1,*
1 Faculty of Engineering, Free University of Bozen-Bolzano (BZ), 39100 Bolzano, Italy
2 Department of Mathematics and Statistics, Riphah International University, I-14, Islamabad 44000, Pakistan
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(24), 4914; https://doi.org/10.3390/math11244914
Submission received: 26 October 2023 / Revised: 4 December 2023 / Accepted: 5 December 2023 / Published: 10 December 2023
(This article belongs to the Special Issue Theory and Applications of Numerical Analysis)

Abstract:
In this research study, we propose and analyze a fractional simultaneous technique for approximating all the roots of nonlinear equations. According to the convergence analysis, the order of convergence of the newly developed fractional Caputo-type simultaneous scheme is 3ς + 5. Engineering-related numerical test problems are considered to demonstrate the efficiency and stability of the fractional numerical scheme in comparison with previously published iterative methods. The newly developed fractional simultaneous approach converges from randomly chosen starting values, demonstrating its global convergence behavior; however, it requires all starting guess values to be distinct and diverges otherwise. The total computational time, number of iterations, error graphs, and maximum residual error all clearly illustrate the stability and consistency of the developed scheme. The rate of convergence increases as the fractional parameter value rises from 0.1 to 1.0.

1. Introduction

When analytical approaches are not available, iterative schemes are the only viable strategy for stably approximating the roots of nonlinear equations
$$ f(r) = 0. \tag{1} $$
They start from an initial approximation and refine it iteratively until a satisfactory approximation of the solution is obtained, and the process is repeated until every root is identified. There are two types of iterative root-finding schemes: simultaneous techniques, which approximate all roots at once, and methods which approximate one root at a time (see, for example, Traub’s method [1], Jarratt’s method [2], King’s method [3], Ostrowski’s method [4], Chun et al.’s method [5], and many others). In recent years, simultaneous techniques have grown in popularity as a result of their global convergence and inherent parallelism (see, for example, the works by Weierstrass [6], Kanno [7], Proinov [8], Mir [9], Farmer [10], Nourein [11], Aberth [12], and Cholakov [13] and the references therein). Because of the intrinsic difficulties of these equations, such as their non-linearity and non-locality, standard analytical, semi-analytical, and classical numerical approaches are often ineffective.
In order to decrease the overall computational time, parallel numerical schemes utilize parallel computing [14] to solve nonlinear equations. This is achieved by decomposing the problem into smaller tasks, which can be executed simultaneously on multiple processors or cores. Therefore, these schemes are particularly useful when dealing with large-scale or computationally intensive engineering problems [15]. A comprehensive understanding of parallel programming techniques, algorithms, and the specific characteristics of the problem at hand is necessary for the effective implementation of parallel numerical schemes. Furthermore, the selection of a parallel scheme is often influenced by the nature of the nonlinear equations being solved, the hardware at hand, and the size of the problem. An overview of parallel numerical methods for solving nonlinear equations can be found in [16,17,18].
The performance of simultaneous root-finding algorithms varies depending on the initial guess and the problem at hand, and convergence is not always guaranteed [19,20,21]. As a result, efforts have been made to develop more robust and efficient procedures. In this research, we propose highly efficient fractional numerical techniques for simultaneously approximating all the roots of nonlinear equations. Fractional simultaneous methods utilize fractional-order derivatives of the function to solve (1). Fractional calculus, which is concerned with non-integer-order derivatives and integrals, is used in many areas, including physics, engineering, and finance [22,23,24]. A comprehensive analysis of the convergence and of the computational complexity of our method is derived. The performance and global convergence behavior of the algorithm is assessed for solving some practical engineering applications by considering various factors, including CPU time, maximum computational time on random initial guess values, maximum residual error, and local computational order of convergence.
The structure of the paper is outlined as follows. After the introduction, we discuss some basic definitions in Section 2. In Section 3, parallel computing schemes are developed and analyzed to solve (1). Section 4 compares the computational aspects of newly proposed simultaneous techniques to existing methods in the literature. In Section 5, we discuss the numerical results of the newly developed scheme. The conclusion of the paper is in Section 6.

2. Some Preliminaries

In this section, we review some fundamental notions of fractional calculus as well as the fractional iterative approach to solving nonlinear equations using Caputo-type derivatives. Note that, apart from the Caputo derivative, most fractional-type derivatives do not fulfill the basic criterion of a fractional calculus that $D^{\varsigma}(1) = 0$ when $\varsigma$ is not a natural number.
The gamma function is defined as follows [25]:
$$ \Gamma(r) = \int_{0}^{+\infty} u^{\,r-1} e^{-u}\, du, $$
where $r > 0$. The gamma function generalizes the factorial, since $\Gamma(1) = 1$ and $\Gamma(n+1) = n!$ for $n \in \mathbb{N}$.
The Caputo fractional derivative of order $\varsigma$ [26,27,28,29], with $\varsigma > 0$ and $\varsigma_1, \varsigma, r \in \mathbb{R}$, is stated as:
$$ {}^{C}D_{\varsigma_1}^{\varsigma} f(r) = \begin{cases} \dfrac{1}{\Gamma(m-\varsigma)} \displaystyle\int_{\varsigma_1}^{r} \dfrac{d^{m}}{dt^{m}} f(t)\, \dfrac{1}{(r-t)^{\varsigma-m+1}}\, dt, & m-1 < \varsigma \le m \in \mathbb{N}, \\[2mm] \dfrac{d^{m}}{dt^{m}} f(r), & \varsigma = m \in \mathbb{N}, \end{cases} $$
where $\Gamma(r)$ is the gamma function with $r > 0$.
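For polynomials, the Caputo derivative above can be applied term by term, which is exactly how the derivatives of the test polynomials in Section 5 are formed: with base point $\varsigma_1 = 0$, each power $r^{k}$ contributes $\frac{\Gamma(k+1)}{\Gamma(k+1-\varsigma)}\, r^{k-\varsigma}$. The following minimal Python sketch (our illustration, not part of the original scheme) evaluates this term-wise rule:

```python
import math

def caputo_power(k, sigma, r):
    """Caputo-type derivative of order sigma (base point 0) of the monomial r**k:
    Gamma(k+1)/Gamma(k+1-sigma) * r**(k-sigma), for 0 < sigma <= 1 and k >= 1."""
    return math.gamma(k + 1) / math.gamma(k + 1 - sigma) * r ** (k - sigma)

# For sigma = 1 the rule reduces to the classical derivative k*r**(k-1):
print(caputo_power(3, 1.0, 2.0))   # 12.0
print(caputo_power(3, 0.5, 2.0))   # half-order derivative of r**3 at r = 2
```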
Theorem 1
(Generalized Taylor formula [30,31]). Suppose ${}^{C}D_{\varsigma_1}^{\,j\varsigma} f(r) \in C(\varsigma_1, \varsigma_2]$ for $j = 1, \ldots, n+1$, where $\varsigma \in (0, 1]$; then
$$ f(r) = \sum_{i=0}^{n} \frac{{}^{C}D_{\varsigma_1}^{\,i\varsigma} f(\varsigma_1)}{\Gamma(i\varsigma+1)}\,(r-\varsigma_1)^{i\varsigma} + \frac{{}^{C}D_{\varsigma_1}^{\,(n+1)\varsigma} f(\xi)}{\Gamma((n+1)\varsigma+1)}\,(r-\varsigma_1)^{(n+1)\varsigma}, $$
with $\varsigma_1 \le \xi \le r$, $r \in (\varsigma_1, \varsigma_2]$, and ${}^{C}D_{\varsigma_1}^{\,n\varsigma} = {}^{C}D_{\varsigma_1}^{\varsigma} \cdot {}^{C}D_{\varsigma_1}^{\varsigma} \cdots {}^{C}D_{\varsigma_1}^{\varsigma}$ ($n$ times).
In terms of the Caputo-type Taylor development of $f(r)$ around $\varsigma_1 = \xi$, we have
$$ f(r) = \frac{{}^{C}D_{\xi}^{\varsigma} f(\xi)}{\Gamma(\varsigma+1)}\,(r-\xi)^{\varsigma} + \frac{{}^{C}D_{\xi}^{2\varsigma} f(\xi)}{\Gamma(2\varsigma+1)}\,(r-\xi)^{2\varsigma} + O\!\big((r-\xi)^{3\varsigma}\big). $$
Factoring out $\frac{{}^{C}D_{\xi}^{\varsigma} f(\xi)}{\Gamma(\varsigma+1)}$, we have:
$$ f(r) = \frac{{}^{C}D_{\xi}^{\varsigma} f(\xi)}{\Gamma(\varsigma+1)}\left[(r-\xi)^{\varsigma} + C_2\,(r-\xi)^{2\varsigma} + O\!\big((r-\xi)^{3\varsigma}\big)\right], $$
where
$$ C_{\gamma} = \frac{\Gamma(\varsigma+1)}{\Gamma(\gamma\varsigma+1)}\,\frac{{}^{C}D_{\xi}^{\gamma\varsigma} f(\xi)}{{}^{C}D_{\xi}^{\varsigma} f(\xi)}, \quad \gamma = 2, 3, \ldots $$
The corresponding Caputo-type derivative of $f(r)$ around $\xi$ is
$$ {}^{C}D_{\xi}^{\varsigma} f(r) = \frac{{}^{C}D_{\xi}^{\varsigma} f(\xi)}{\Gamma(\varsigma+1)}\left[\Gamma(\varsigma+1) + \frac{\Gamma(2\varsigma+1)}{\Gamma(\varsigma+1)}\,C_2\,(r-\xi)^{\varsigma} + O\!\big((r-\xi)^{2\varsigma}\big)\right]. $$
The classic Newton–Raphson technique is the most widely used method for locating a single root:
$$ y^{\vartheta} = r^{\vartheta} - \frac{f(r^{\vartheta})}{f'(r^{\vartheta})}, \quad (\vartheta = 1, 2, \ldots), \quad f'(r^{\vartheta}) \ne 0. $$
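As a point of reference, a minimal implementation of the classical Newton–Raphson iteration (our illustrative sketch) reads:

```python
def newton(f, df, r0, tol=1e-12, max_it=100):
    """Classical Newton-Raphson iteration for a single root of f."""
    r = r0
    for _ in range(max_it):
        step = f(r) / df(r)       # requires df(r) != 0
        r -= step
        if abs(step) < tol:
            break
    return r

# Example: the positive root of r**2 - 2
print(newton(lambda r: r**2 - 2, lambda r: 2*r, 1.5))   # ~1.414213562373
```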
Akgül et al. [31], Torres-Hernandez et al. [32], Cajori [33] and Kumar et al. [34] discuss the fractional Newton method with different types of fractional derivatives. For the Caputo-type version of the classical Newton method (FNN), Candelario et al. [35] propose the following fractional scheme:
$$ z^{[\vartheta]} = r^{\vartheta} - \left[\Gamma(\varsigma+1)\,\frac{f(r^{\vartheta})}{{}^{C}D_{\varsigma_1}^{\varsigma} f(r^{\vartheta})}\right]^{1/\varsigma}, $$
where ${}^{C}D_{\varsigma_1}^{\varsigma} f(r^{\vartheta}),\; {}^{C}D_{\xi}^{\varsigma} f(\xi) \ne 0$ for any $\varsigma \in \mathbb{R}$. The order of convergence of the fractional Newton method is $\varsigma + 1$, satisfying the following error equation:
$$ e^{\vartheta+1} = \frac{\Gamma(2\varsigma+1) - \Gamma^{2}(\varsigma+1)}{\varsigma\,\Gamma^{2}(\varsigma+1)}\,C_2\,\big(e^{\vartheta}\big)^{\varsigma+1} + O\!\big(\big(e^{\vartheta}\big)^{2\varsigma+1}\big), $$
where $e^{\vartheta+1} = z^{[\vartheta]} - \xi$, $e^{\vartheta} = r^{\vartheta} - \xi$, and $C_{\gamma} = \frac{\Gamma(\varsigma+1)}{\Gamma(\gamma\varsigma+1)}\,\frac{{}^{C}D_{\xi}^{\gamma\varsigma} f(\xi)}{{}^{C}D_{\xi}^{\varsigma} f(\xi)}$, $\gamma = 2, 3, \ldots$
Candelario et al. [35] also present the following fractional numerical scheme ( M σ ) for solving simple roots of nonlinear equations:
$$ y^{\vartheta} = r^{\vartheta} - \left[\Gamma(\varsigma+1)\,\frac{f(r^{\vartheta})}{{}^{C}D_{\varsigma_1}^{\varsigma} f(r^{\vartheta})}\right]^{1/\varsigma}, \qquad v^{\vartheta} = y^{\vartheta} - \left[\Gamma(\varsigma+1)\,\frac{f(y^{\vartheta})}{{}^{C}D_{\varsigma_1}^{\varsigma} f(r^{\vartheta})}\right]^{1/\varsigma}. $$
The order of convergence of M$_{\sigma}$ is $2\varsigma + 1$, and the error equation is given as:
$$ e^{\vartheta+1} = \frac{\Gamma(2\varsigma+1) - \Gamma^{2}(\varsigma+1)}{\varsigma^{2}\,\Gamma^{2}(\varsigma+1)}\,\Lambda\,C_2^{2}\,\big(e^{\vartheta}\big)^{2\varsigma+1} + O\!\big(\big(e^{\vartheta}\big)^{\varsigma^{2}+2\varsigma+1}\big), $$
where $\Lambda = \frac{\Gamma(2\varsigma+1) - \Gamma^{2}(\varsigma+1)}{\Gamma^{2}(\varsigma+1)}$ and $e^{\vartheta+1} = v^{\vartheta} - \xi$.
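A small Python sketch of this two-step construction is given below (our illustration, not the authors' code): the Caputo-type derivative is supplied analytically for a constant-free test polynomial with base point 0, and the $1/\varsigma$ power is taken on the principal branch, so the iterates are allowed to become complex.

```python
import math

def frac_two_step(f, dsf, r0, sigma, tol=1e-12, max_it=200):
    """Two-step fractional iteration: each step subtracts
    [Gamma(sigma+1)*f(.)/D^sigma f(r)]**(1/sigma), with the Caputo-type
    derivative frozen at r for both sub-steps (principal-branch power)."""
    gam = math.gamma(sigma + 1.0)
    r = complex(r0)
    for _ in range(max_it):
        d = dsf(r)
        y = r - (gam * f(r) / d) ** (1.0 / sigma)
        v = y - (gam * f(y) / d) ** (1.0 / sigma)
        if abs(v - r) < tol:
            return v
        r = v
    return r

# Example: f(r) = r**3 - r, with Caputo-type derivative (base point 0)
# Gamma(4)/Gamma(4-s)*r**(3-s) - Gamma(2)/Gamma(2-s)*r**(1-s).
s   = 0.8
f   = lambda r: r**3 - r
dsf = lambda r: math.gamma(4)/math.gamma(4-s)*r**(3-s) - math.gamma(2)/math.gamma(2-s)*r**(1-s)
print(frac_two_step(f, dsf, 1.3, s))   # approaches the simple root r = 1
```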

3. Construction of Fractional Parallel Computing Scheme for Estimating All Distinct and Multiple Roots

The Weierstrass–Dochev method [18] provides the following locally quadratically convergent scheme:
$$ u_i^{\vartheta} = r_i^{[\vartheta]} - w\big(r_i^{[\vartheta]}\big), $$
where
$$ w\big(r_i^{[\vartheta]}\big) = \frac{f\big(r_i^{[\vartheta]}\big)}{\prod\limits_{\substack{j=1 \\ j \ne i}}^{n} \big(r_i^{[\vartheta]} - r_j^{[\vartheta]}\big)}, \quad (i, j = 1, 2, \ldots, n), $$
is the Weierstrass correction.
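For reference, a compact Python sketch of one Weierstrass sweep (our illustration; it assumes a monic polynomial and pairwise distinct approximations) is:

```python
import numpy as np

def weierstrass_step(f, r):
    """One Weierstrass-Dochev sweep: r_i <- r_i - f(r_i)/prod_{j != i}(r_i - r_j),
    assuming f is monic and the approximations r_i are pairwise distinct."""
    r = np.asarray(r, dtype=complex)
    new = r.copy()
    for i in range(len(r)):
        w = f(r[i]) / np.prod(r[i] - np.delete(r, i))   # Weierstrass correction
        new[i] = r[i] - w
    return new

# Example: the monic cubic r**3 - 1 (roots: the three cube roots of unity)
f = lambda r: r**3 - 1
r = np.array([0.4 + 0.9j, -1.0 + 0.5j, 0.6 - 0.8j])
for _ in range(20):
    r = weierstrass_step(f, r)
print(np.round(r, 6))
```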
In [19], Nedzhibov presents the following modification of (14):
$$ u_i^{\vartheta} = \frac{\big(r_i^{[\vartheta]}\big)^{2}\,\prod\limits_{\substack{j=1 \\ j \ne i}}^{n} \big(r_i^{[\vartheta]} - r_j^{[\vartheta]}\big)}{r_i^{[\vartheta]}\,\prod\limits_{\substack{j=1 \\ j \ne i}}^{n} \big(r_i^{[\vartheta]} - r_j^{[\vartheta]}\big) + f\big(r_i^{[\vartheta]}\big)}. $$
In order to construct an iterative process for approximating all the multiple roots of a polynomial, let us consider a monic polynomial of degree $n$ with roots $\xi_j$ having known multiplicities $\alpha_j$ such that $\sum_{j=1}^{m} \alpha_j = n$:
$$ f(r) = r^{n} + a_{n-1} r^{n-1} + \cdots + a_2 r^{2} + a_1 r + a_0 = \prod_{j=1}^{m} (r - \xi_j)^{\alpha_j}. $$
Consider the Newton correction $U_i^{\vartheta} = \frac{f(r_i^{[\vartheta]})}{f'(r_i^{[\vartheta]})}$ and its fractional counterpart
$$ \breve{U}_i^{\vartheta} = \left[\Gamma(\varsigma+1)\,\frac{f\big(r_i^{[\vartheta]}\big)}{{}^{C}D_{\varsigma_1}^{\varsigma} f\big(r_i^{[\vartheta]}\big)}\right]^{1/\varsigma}. $$
This implies that
$$ \frac{1}{U_i^{\vartheta}} = \sum_{j=1}^{m} \frac{\alpha_j}{r_i^{[\vartheta]} - \xi_j}, $$
where $\xi_j$ is the exact root and $r_j^{[\vartheta]}$ is its approximation. This gives
$$ \frac{1}{U_i^{\vartheta}} = \frac{f'\big(r_i^{[\vartheta]}\big)}{f\big(r_i^{[\vartheta]}\big)} = \sum_{j=1}^{m} \frac{\alpha_j}{r_i^{[\vartheta]} - \xi_j} = \frac{\alpha_i}{r_i^{[\vartheta]} - \xi_i} + \sum_{\substack{j=1 \\ j \ne i}}^{m} \frac{\alpha_j}{r_i^{[\vartheta]} - \xi_j}, $$
$$ \frac{\alpha_i}{r_i^{[\vartheta]} - \xi_i} = \frac{1}{U_i^{\vartheta}} - \sum_{\substack{j=1 \\ j \ne i}}^{m} \frac{\alpha_j}{r_i^{[\vartheta]} - \xi_j}, $$
$$ \xi_i = r_i^{[\vartheta]} - \frac{\alpha_i}{\dfrac{1}{U_i^{\vartheta}} - \sum\limits_{\substack{j=1 \\ j \ne i}}^{m} \dfrac{\alpha_j}{r_i^{[\vartheta]} - \xi_j}}, \quad (i = 1, 2, \ldots, m). $$
Substituting the roots $\xi_j$ by their approximations $r_j^{[\vartheta]}$ ($j \ne i$) in (19), we obtain the third-order convergent Ehrlich–Aberth method [36] for roots with multiplicities $\alpha_j$:
$$ r_i^{[\vartheta+1]} = r_i^{[\vartheta]} - \frac{\alpha_i}{\dfrac{1}{U_i^{\vartheta}} - \sum\limits_{\substack{j=1 \\ j \ne i}}^{m} \dfrac{\alpha_j}{r_i^{[\vartheta]} - r_j^{[\vartheta]}}}, $$
where $r_i^{[\vartheta+1]}$ is the new approximation to the root $\xi_i$. Instead of the simple approximation $r_j^{[\vartheta]}$, we can use a better approximation of $\xi_j$; the main goal of this acceleration is to improve convergence. This can be achieved by choosing Newton's approximation $r_j^{[\vartheta]} - \alpha_j U_j^{\vartheta}$ instead of $\xi_j$ in (22):
$$ r_i^{[\vartheta+1]} = r_i^{[\vartheta]} - \frac{\alpha_i}{\dfrac{1}{U_i^{\vartheta}} - \sum\limits_{\substack{j=1 \\ j \ne i}}^{m} \dfrac{\alpha_j}{r_i^{[\vartheta]} - r_j^{[\vartheta]} + \alpha_j U_j^{\vartheta}}}. $$
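A short Python sketch of one sweep of (23) (our illustration, assuming known multiplicities and pairwise distinct approximations) is:

```python
import numpy as np

def ehrlich_aberth_step(f, df, r, alpha):
    """One sweep of the Ehrlich-Aberth-type iteration with Newton corrections:
    r_i <- r_i - alpha_i / (1/U_i - sum_{j != i} alpha_j/(r_i - r_j + alpha_j*U_j)),
    where U_i = f(r_i)/f'(r_i)."""
    r = np.asarray(r, dtype=complex)
    U = f(r) / df(r)
    new = r.copy()
    for i in range(len(r)):
        j = np.arange(len(r)) != i
        s = np.sum(alpha[j] / (r[i] - r[j] + alpha[j] * U[j]))
        new[i] = r[i] - alpha[i] / (1.0 / U[i] - s)
    return new

# Example: f(r) = (r - 1)**2 * (r + 2), a double root at 1 and a simple root at -2
f     = lambda r: (r - 1)**2 * (r + 2)
df    = lambda r: 2*(r - 1)*(r + 2) + (r - 1)**2
alpha = np.array([2.0, 1.0])
r     = np.array([1.5 + 0.2j, -2.6 - 0.1j])
for _ in range(15):
    r = ehrlich_aberth_step(f, df, r, alpha)
print(np.round(r, 8))   # approximately [1, -2]
```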
Now, we derive a new $3\varsigma + 5$-order method for the determination of all the roots of (1). Let $r_1^{[\vartheta]}, r_2^{[\vartheta]}, \ldots, r_m^{[\vartheta]}$ be reasonably close approximations to the roots $\xi_1, \xi_2, \ldots, \xi_m$, respectively, of the polynomial $f(r)$, so that $\epsilon_i = \max_{1 \le i \le m} \big|r_i^{[\vartheta]} - \xi_i\big|$ is a sufficiently small quantity. Let us return to the relation (19), replacing $\xi_j$ with $r_j^{[\vartheta]} - \alpha_j U_j^{\vartheta}$. We have:
$$ \frac{1}{r_i^{[\vartheta]} - \xi_j} \approx \frac{1}{r_i^{[\vartheta]} - r_j^{[\vartheta]} + \alpha_j U_j^{\vartheta}} = \frac{1}{\big(r_i^{[\vartheta]} - r_j^{[\vartheta]}\big)\left(1 + \dfrac{\alpha_j U_j^{\vartheta}}{r_i^{[\vartheta]} - r_j^{[\vartheta]}}\right)}. $$
Assuming that $\epsilon_i$ is small enough to provide $\left|\dfrac{\alpha_j U_j^{\vartheta}}{r_i^{[\vartheta]} - r_j^{[\vartheta]}}\right| < 1$, we use the development into a geometric series and obtain:
$$ \frac{1}{r_i^{[\vartheta]} - \xi_j} \approx \frac{1}{r_i^{[\vartheta]} - r_j^{[\vartheta]}}\left[1 - \frac{\alpha_j U_j^{\vartheta}}{r_i^{[\vartheta]} - r_j^{[\vartheta]}} + \left(\frac{\alpha_j U_j^{\vartheta}}{r_i^{[\vartheta]} - r_j^{[\vartheta]}}\right)^{2} - \cdots\right] + O\!\big(\epsilon_i^{3}\big). $$
Neglecting terms of higher order in the last relation, we obtain:
$$ \frac{1}{r_i^{[\vartheta]} - \xi_j} \approx \frac{1}{r_i^{[\vartheta]} - r_j^{[\vartheta]} + \alpha_j U_j^{\vartheta}} = \frac{1}{r_i^{[\vartheta]} - r_j^{[\vartheta]}}\left[1 - \frac{\alpha_j U_j^{\vartheta}}{r_i^{[\vartheta]} - r_j^{[\vartheta]}} + \left(\frac{\alpha_j U_j^{\vartheta}}{r_i^{[\vartheta]} - r_j^{[\vartheta]}}\right)^{2}\right]. $$
Replacing $r_j^{[\vartheta]}$ by $z_j^{[\vartheta]}$ and $U_j^{\vartheta}$ by $\breve{U}_j^{\vartheta}$ in (29) and using this in (23), we have
$$ r_i^{[\vartheta+1]} = r_i^{[\vartheta]} - \frac{\alpha_i}{\dfrac{1}{\breve{U}_i^{\vartheta}} - \sum\limits_{\substack{j=1 \\ j \ne i}}^{m} \dfrac{\alpha_j}{r_i^{[\vartheta]} - z_j^{[\vartheta]}} + \sum\limits_{\substack{j=1 \\ j \ne i}}^{m} \dfrac{\alpha_j^{2}\,\breve{U}_j^{\vartheta}}{\big(r_i^{[\vartheta]} - z_j^{[\vartheta]}\big)^{2}} - \sum\limits_{\substack{j=1 \\ j \ne i}}^{m} \dfrac{\alpha_j^{3}\,\big(\breve{U}_j^{\vartheta}\big)^{2}}{\big(r_i^{[\vartheta]} - z_j^{[\vartheta]}\big)^{3}}}. $$
We name the method introduced in (30) the SFM$_{\sigma}$-method. Now, we calculate the convergence order of the SFM$_{\sigma}$-method. Firstly, we introduce some notations:
$$ d = \min_{i \ne j} |\xi_i - \xi_j|, \qquad g = \frac{2n-1}{d}, $$
and we write $\sum_{j \ne i}$ instead of $\sum_{\substack{j=1 \\ j \ne i}}^{m}$. Now, we suppose the condition
$$ \epsilon_i < \frac{d}{2n-1} = \frac{1}{g^{\varsigma}}, $$
where $i = 1, 2, \ldots, m$. The condition holds for each $i$, where $\epsilon_i = \big|r_i^{[\vartheta]} - \xi_i\big|$.
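Before turning to the convergence analysis, we give a brief computational sketch of one sweep of (30) (our reading of the formula, not the authors' code): $\breve{U}_j$ is evaluated with the principal branch of the complex power, and $z_j^{[\vartheta]}$ is taken as the fractional Newton update $r_j^{[\vartheta]} - \breve{U}_j^{\vartheta}$, which is an assumption on our part.

```python
import math
import numpy as np

def sfm_step(f, dsf, r, alpha, sigma):
    """One sweep in the spirit of (30): with
    U_j = [Gamma(sigma+1)*f(r_j)/D^sigma f(r_j)]**(1/sigma) and z_j = r_j - U_j (assumed),
    update r_i <- r_i - alpha_i / (1/U_i - S1 + S2 - S3), where
    S1 = sum_{j!=i} alpha_j/(r_i - z_j),
    S2 = sum_{j!=i} alpha_j**2 * U_j/(r_i - z_j)**2,
    S3 = sum_{j!=i} alpha_j**3 * U_j**2/(r_i - z_j)**3."""
    r = np.asarray(r, dtype=complex)
    gam = math.gamma(sigma + 1.0)
    U = (gam * f(r) / dsf(r)) ** (1.0 / sigma)
    z = r - U
    new = r.copy()
    for i in range(len(r)):
        j = np.arange(len(r)) != i
        d = r[i] - z[j]
        S = (np.sum(alpha[j] / d)
             - np.sum(alpha[j]**2 * U[j] / d**2)
             + np.sum(alpha[j]**3 * U[j]**2 / d**3))
        new[i] = r[i] - alpha[i] / (1.0 / U[i] - S)
    return new

# Illustration with the cubic of engineering application 3 (roots -1 and +-sqrt(2)),
# using its Caputo-type derivative with sigma = 0.8 and guesses near the roots.
s     = 0.8
f     = lambda r: -0.5*r**3 - 0.5*r**2 + r + 1
dsf   = lambda r: (-0.5*math.gamma(4)/math.gamma(4-s)*r**(3-s)
                   - 0.5*math.gamma(3)/math.gamma(3-s)*r**(2-s)
                   + math.gamma(2)/math.gamma(2-s)*r**(1-s)
                   + 1.0/math.gamma(1-s)*r**(-s))
r     = np.array([-1.05, 1.45, -1.38], dtype=complex)
alpha = np.ones(3)
for _ in range(10):
    r = sfm_step(f, dsf, r, alpha, s)
print(np.round(r, 4))   # the sweeps move the approximations toward -1, 1.4142, -1.4142
```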
Convergence Analysis: Here, we prove the following lemma:
Lemma 1.
Let $r_1, r_2, \ldots, r_m$ be reasonably close approximations to the roots $\xi_1, \xi_2, \ldots, \xi_m$, respectively, let $\epsilon_i = \big|r_i^{[\vartheta]} - \xi_i\big|$ and $\epsilon_i' = \big|r_i^{[\vartheta+1]} - \xi_i\big|$, where $r_1^{[\vartheta+1]}, r_2^{[\vartheta+1]}, \ldots, r_m^{[\vartheta+1]}$ are the new approximations produced by the iterative SFM$_{\sigma}$-method. If (32) is satisfied, then the following estimates hold:
(i)
$$ \epsilon_i' \le \frac{g^{4}}{n-1}\,\epsilon_i^{2} \sum_{j \ne i} \epsilon_j^{3\varsigma+3}, $$
(ii)
$$ \epsilon_i' < \frac{d}{2n-1} = \frac{1}{g^{\varsigma}}, \quad (i = 1, 2, \ldots, m). $$
Proof. 
Taking into account (31), we find:
r i [ ϑ ] ξ j = ( ξ i ξ j ) + ( r i [ ϑ ] ξ i ) | ( ξ i ξ j ) | | ( r i [ ϑ ] ξ i ) | > d d 2 n 1 ,
= d 1 1 2 n 1 = d 2 n 2 2 n 1
Using (34), we obtain:
| r i [ ϑ ] ξ j | = 2 n 2 g
Now, considering (33) and (34), we have:
r i [ ϑ ] z j [ ϑ ] = | ( r i [ ϑ ] ξ j ) + ( ξ j z j [ ϑ ] ) | | r i [ ϑ ] ξ j | | ξ j z j [ ϑ ] | ,
r i [ ϑ ] z j [ ϑ ] = | r i [ ϑ ] ξ j | | ξ j z j [ ϑ ] | > 2 n 2 g 1 g = 2 n 3 g .
Now, we introduce some new notations:
1 , i = j i 1 r i [ ϑ ] ξ j ,
1 , i α j = j i α j r i [ ϑ ] ξ j ,
Thus, using (35), we obtain:
1 , i α j j i α j | r i [ ϑ ] ξ j | < g 2 ( n 1 ) j i α j ,
1 , i α j j i α j | r i [ ϑ ] ξ j | < g 2 ( n 1 ) ( n α j ) ,
as n α j < n 1 for every i, this implies
1 , i α j j i α j | r i [ ϑ ] ξ j | < g ( n 1 ) 2 ( n 1 ) = g 2 ,
1 U j ϑ = f ( r i [ ϑ ] ) f ( r i [ ϑ ] ) = 1 , i α j r i [ ϑ ] ξ j = α i r i [ ϑ ] ξ j + 1 , i α j .
As r i [ ϑ ] ξ i = ϵ i ,
1 U ˘ i ϑ = α i ϵ i ς + 1 + 1 , i α j = α i + ϵ i ς + 1 1 , i α j ϵ i ς + 1 ,
U ˘ i ϑ = ϵ i ς + 1 α i + ϵ i ς + 1 1 , i α j .
Using (32) and (35) in the above result, we have:
U ˘ i ϑ = ϵ i ς + 1 α i + ϵ i ς + 1 1 , i α j < | ϵ i ς + 1 | | α i | + ϵ i ς + 1 1 , i α j < 2 g ς + 1 .
As α i > 1 , | ϵ i | = 1 g ς + 1
U ˘ i ϑ < 1 g ς + 1 1 1 g g 2 = 2 g ς + 1 .
From (34), we have ϵ i = r i [ ϑ ] ξ i ,   ϵ i = r i [ ϑ + 1 ] ξ i . Therefore,
ϵ i = r i [ ϑ + 1 ] ξ i = r i [ ϑ ] α i 1 U ˘ i ϑ j i α j r i [ ϑ ] z j [ ϑ ] + j i α j 2 U ˘ j ϑ r i [ ϑ ] z j [ ϑ ] 2 j i α j 3 U ˘ j ϑ 2 r i [ ϑ ] z j [ ϑ ] 3 ξ i ,
ϵ i = ϵ i α i 1 U ˘ i ϑ j i α j r i [ ϑ ] z j [ ϑ ] + j i α j 2 U ˘ j ϑ r i [ ϑ ] z j [ ϑ ] 2 j i α j 3 U ˘ j ϑ 2 r i [ ϑ ] z j [ ϑ ] 3 ,
and therefore,
ϵ i = ϵ i α i α i ϵ i + j i α j r i [ ϑ ] ξ j j i α j r i [ ϑ ] z j [ ϑ ] + j i α j 2 U ˘ j ϑ r i [ ϑ ] z j [ ϑ ] 2 j i α j 3 U ˘ j ϑ 2 r i [ ϑ ] z j [ ϑ ] 3 ,
ϵ i = ϵ i α i ϵ i α i + ϵ i j i α j 1 r i [ ϑ ] ξ j 1 r i [ ϑ ] z j [ ϑ ] + α j U ˘ j ϑ r i [ ϑ ] z j [ ϑ ] 2 α j 2 U ˘ j ϑ 2 r i [ ϑ ] z j [ ϑ ] 3 ,
ϵ i = ϵ i α i ϵ i α i + ϵ i j i α j r i [ ϑ ] z j [ ϑ ] ( r i [ ϑ ] ξ j ) r i [ ϑ ] ξ j r i [ ϑ ] z j [ ϑ ] + α j U ˘ j ϑ r i [ ϑ ] z j [ ϑ ] 2 α j 2 U ˘ j ϑ 2 r i [ ϑ ] z j [ ϑ ] 3 ,
ϵ i = ϵ i α i ϵ i α i + ϵ i j i α j r i [ ϑ ] z j [ ϑ ] r i [ ϑ ] + ξ j r i [ ϑ ] ξ j r i [ ϑ ] z j [ ϑ ] + α j U ˘ j ϑ ( r i [ ϑ ] z j [ ϑ ] ) 2 α j 2 U ˘ j ϑ 2 r i [ ϑ ] z j [ ϑ ] 3 ,
ϵ i = ϵ i α i ϵ i α i + ϵ i j i α j ξ j z j [ ϑ ] r i [ ϑ ] ξ j r i [ ϑ ] z j [ ϑ ] + α j U ˘ j ϑ r i [ ϑ ] z j [ ϑ ] 2 α j 2 U ˘ j ϑ 2 r i [ ϑ ] z j [ ϑ ] 3 ,
using Newton’s correction, we obtain,
U ˘ j ϑ = ϵ j ς + 1 α j + ϵ j ς + 1 1 , i α i ,
ϵ i = ϵ i α i ϵ i α i + ϵ i j i α j ϵ j ς + 1 r i [ ϑ ] ξ j r i [ ϑ ] z j [ ϑ ] + α j ϵ j ς + 1 ( r i [ ϑ ] z j [ ϑ ] ) 2 α j + ϵ j ς + 1 1 , i α i α j 2 ϵ j ς + 1 2 ( r i [ ϑ ] z j [ ϑ ] ) 3 α j + ϵ j ς + 1 1 , i α i 2 ,
ϵ i = ϵ i α i ϵ i α i + ϵ i j i α j ϵ j ς + 1 1 r i [ ϑ ] ξ j r i [ ϑ ] z j [ ϑ ] + α j ( r i [ ϑ ] z j [ ϑ ] ) 2 α j + ϵ j ς + 1 1 , i α i α j 2 ϵ j ( r i [ ϑ ] z j [ ϑ ] ) 3 α j + ϵ j ς + 1 1 , i α i 2 ,
ϵ i = ϵ i ϵ i 1 + ϵ i α i j i α j ϵ j ς + 1 1 r i [ ϑ ] ξ j r i [ ϑ ] z j [ ϑ ] + α j ( r i [ ϑ ] z j [ ϑ ] ) 2 α j + ϵ j ς + 1 1 , i α i α j 2 ϵ j ( r i [ ϑ ] z j [ ϑ ] ) 3 α j + ϵ j ς + 1 1 , i α i 2 ,
ϵ i = ϵ i ϵ i 1 + ϵ i α i j i α j ϵ j ς + 1 ( r i [ ϑ ] z j [ ϑ ] ) ( α j + ϵ j ς + 1 1 , i α i ) + α j r i [ ϑ ] ξ j r i [ ϑ ] ξ j r i [ ϑ ] z j [ ϑ ] 2 α j + ϵ j ς + 1 1 , i α i α j 2 ϵ j ς + 1 r i [ ϑ ] z j [ ϑ ] 3 α j + ϵ j ς + 1 1 , i α i 2 ,
ϵ i = ϵ i ϵ i 1 + ϵ i α i j i α j ϵ j ς + 1 ( r i [ ϑ ] z j [ ϑ ] ) ϵ j ς + 1 1 , i α i α j ( r i [ ϑ ] z j [ ϑ ] ) + α j r i [ ϑ ] ξ j r i [ ϑ ] ξ j r i [ ϑ ] z j [ ϑ ] 2 α j + ϵ j 1 , i α i α j 2 ϵ j r i [ ϑ ] z j [ ϑ ] 3 α j + ϵ j 1 , i α i 2 ,
ϵ i = ϵ i ϵ i 1 + ϵ i α i j i α j ϵ j ς + 1 ( r i [ ϑ ] z j [ ϑ ] ) ϵ j 1 , i α i + α j r i [ ϑ ] ξ j r i [ ϑ ] z j [ ϑ ] r i [ ϑ ] ξ j r i [ ϑ ] z j [ ϑ ] 2 α j + ϵ j 1 , i α i α j 2 ϵ j r i [ ϑ ] z j [ ϑ ] 3 α j + ϵ j 1 , i α i 2 ,
ϵ i = ϵ i ϵ i 1 + ϵ i α i j i α j ϵ j ς + 1 r i [ ϑ ] z j [ ϑ ] ϵ j ς + 1 1 , i α i α j ϵ j ς + 1 r i [ ϑ ] ξ j r i [ ϑ ] z j [ ϑ ] 2 α j + ϵ j ς + 1 1 , i α i α j 2 ϵ j ς + 1 r i [ ϑ ] z j [ ϑ ] 3 α j + ϵ j ς + 1 1 , i α i 2 ,
ϵ i = ϵ i ϵ i 1 + ϵ i α i j i α j ϵ j 2 ς + 2 α j r i [ ϑ ] z j [ ϑ ] 1 , i α i r i [ ϑ ] ξ j r i [ ϑ ] z j [ ϑ ] 2 α j + ϵ j ς + 1 1 , i α i α j 2 r i [ ϑ ] z j [ ϑ ] 3 α j + ϵ j ς + 1 1 , i α i 2 ,
ϵ i = ϵ i ϵ i 1 + ϵ i α i j i α j ϵ j 2 ς + 2 B i j * ,
where,
B i j * = α j r i [ ϑ ] z j [ ϑ ] 1 , i α i r i [ ϑ ] ξ j r i [ ϑ ] z j [ ϑ ] 2 α j + ϵ j ς + 1 1 , j α i α j 2 r i [ ϑ ] z j [ ϑ ] 3 α j + ϵ j ς + 1 1 , j α i 2 ,
B i j * = 1 r i [ ϑ ] z j [ ϑ ] 2 α j + ϵ j ς + 1 1 , j α i α j r i [ ϑ ] z j [ ϑ ] 1 , i α i r i [ ϑ ] ξ j α j 2 r i [ ϑ ] z j [ ϑ ] α j + ϵ j ς + 1 1 , j α i ,
B i j * = 1 r i [ ϑ ] z j [ ϑ ] 2 α j + ϵ j ς + 1 1 , j α i α j α j + ϵ j ς + 1 1 , j α i r i [ ϑ ] z j [ ϑ ] r i [ ϑ ] z j [ ϑ ] 2 1 , j α i α j + ϵ j ς + 1 1 , j α i α j 2 r i [ ϑ ] ξ j r i [ ϑ ] ξ j r i [ ϑ ] z j [ ϑ ] α j + ϵ j ς + 1 1 , j α i ,
B i j * = α j 2 r i [ ϑ ] z j [ ϑ ] + α j ϵ j ς + 1 1 , j α i r i [ ϑ ] z j [ ϑ ] r i [ ϑ ] z j [ ϑ ] 2 1 , j α i α j + ϵ j ς + 1 1 , j α i α j 2 r i [ ϑ ] ξ j r i [ ϑ ] ξ j r i [ ϑ ] z j [ ϑ ] 3 α j + ϵ j ς + 1 1 , j α i 2 ,
B i j * = α j 2 ϵ j ς + 1 + α j ϵ j ς + 1 1 , j α i r i [ ϑ ] z j [ ϑ ] r i [ ϑ ] z j [ ϑ ] 2 1 , j α i α j + ϵ j ς + 1 1 , j α i r i [ ϑ ] ξ j r i [ ϑ ] z j [ ϑ ] 3 α j + ϵ j ς + 1 1 , j α i 2 .
From (55), we get:
U ˘ j ϑ = ϵ j ς + 1 α j + ϵ j ς + 1 1 , j α i < 2 g ς + 1 ,
U ˘ j ϑ = 1 α j + ϵ j ς + 1 1 , j α i < 2 g ς + 1 ϵ j ς + 1 ,
Now, applying Equations (31), (32) and (39) in the above relation in (70), we obtain:
B i j * | α j | 2 | ϵ j ς + 1 | + | α j | | ϵ j ς + 1 | | 1 , j α i | | r i [ ϑ ] z j [ ϑ ] | + | r i [ ϑ ] z j [ ϑ ] | 2 | 1 , j α i | | α j + ϵ j ς + 1 1 , j α i | | r i [ ϑ ] ξ j | | r i [ ϑ ] z j [ ϑ ] | 3 | α j + ϵ j ς + 1 1 , j α i | 2 ,
B i j * n 2 | ϵ j ς + 1 | + n | ϵ j ς + 1 | g 2 2 n 3 g + 2 n 3 g 2 g 2 g 2 ϵ j ς + 1 2 n 3 g 2 n 3 g 3 g 2 ϵ j ς + 1 2 ,
B i j * | ϵ j ς + 1 | n 2 + n ( 2 n 3 ) 2 + ( 2 n 3 ) 2 4 2 n 1 4 g 2 ( 2 n 3 ) 3 | ϵ j ς + 1 | 2 ,
Therefore, B i j * B i j | ϵ j ς + 1 | , where
B i j n 2 + n ( 2 n 3 ) 2 + ( 2 n 3 ) 2 4 2 n 1 4 g 2 ( 2 n 3 ) 3 | ϵ j ς + 1 | 2 ,
and therefore,
ϵ i = ϵ i ϵ i 1 + ϵ i α i j i α j ϵ j 2 ς + 2 B i j | ϵ j ς + 1 | ,
ϵ i = ϵ i ϵ i 1 + ϵ i α i j i α j ϵ j 3 ς + 3 B i j ,
ϵ i = ϵ i + ϵ i 2 α i j i α j ϵ j 3 ς + 3 B i j ϵ i 1 + ϵ i α i j i α j ϵ j 3 ς + 3 B i j = ϵ i 2 α i j i α j ϵ j 3 ς + 3 B i j 1 + ϵ i α i j i α j ϵ j 3 ς + 3 B i j ,
B i j 4 n 2 + 2 n ( 2 n 3 ) + ( 2 n 3 ) 2 4 2 ( n 1 4 q 2 ) ( 2 n 3 ) 3 | ϵ j ς + 1 | 2 ,
B i j 4 n 2 + 4 n 2 6 n + 4 n 2 + 9 12 n 4 2 ( n 1 4 q 2 ) ( 2 n 3 ) 3 | ϵ j ς + 1 | 2 = g 2 ( 12 n 2 18 n + 9 ) 2 ( n 1 ) | ϵ j ς + 1 | 2 ( 2 n 3 ) 3 ,
B i j g 2 2 ( n 1 ) | ϵ j | . 12 n 2 18 n + 9 ( 2 n 3 ) 3 .
Since ( 12 n 2 18 n + 9 ) ( 2 n 3 ) 3 , n 3 is a monotonically decreasing sequence, let us estimate from the above the absolute values of B i j . We have:
B i j < g 2 2 ( n 1 ) | ϵ j ς + 1 | 2 ,
ϵ i α i j i α j ϵ j 3 ς + 3 B i j | ϵ i α i | j i α j ϵ j 3 ς + 3 B i j .
Since α i 1 1 α i 1 and | ϵ i | α i 1 g for all i ,
| ϵ i α i | j i α j ϵ j 3 ς + 3 B i j < | ϵ i | α i j i α j ϵ j 3 ς + 3 g 2 g 2 2 ( n 1 ) | ϵ j ς + 1 | 2 , = | ϵ i | α i j i α j ϵ j ς + 1 g 2 2 ( n 1 ) < 1 g ς + 1 j i α j 1 g ς + 1 . g 2 2 ( n 1 ) , < 1 2 ( n 1 ) .
Using j i α j = n α i < n 1 for all i, we obtain:
ϵ i α i j i α j ϵ j 3 ς + 3 B i j 1 2 ( n 1 ) ( n 1 ) = 1 2 ,
also from (84)
1 + ϵ i α i j i α j ϵ j 3 ς + 3 B i j 1 ϵ i α i j i α j ϵ j 3 ς + 3 B i j > 1 2 .
Using Equations (81) and (85) in Equation (77), we obtain
ϵ i < ϵ i 2 j i α j ϵ j 3 ς + 3 g 2 2 ( n 1 ) | ϵ j | ς + 1 1 2 = g 2 n 1 ϵ i 2 j i α j ϵ j 2 ς + 2 .
Hence, we have the proof of Lemma 1 (i). Now, from Equation (86), we have ϵ i ~ < g 2 n 1 1 g 2 j i α j 1 g ς = 1 ( n 1 ) g ς ( n α i ) = 1 ( n 1 ) g ς ( n 1 ) = 1 g ς ,
ϵ i < d 2 n 1 = 1 g ς ϵ i < 1 g ς ,
which completes the proof of Lemma 1 (ii).    □
Let $r_1^{[0]}, \ldots, r_n^{[0]}$ be good initial approximations to the roots $\xi_1, \xi_2, \ldots, \xi_n$ of an algebraic polynomial $f$, and let $\epsilon_i^{[\vartheta]} = \big|r_i^{[\vartheta]} - \xi_i\big|$, where $r_1^{[\vartheta]}, \ldots, r_n^{[\vartheta]}$ are the approximations obtained in the $\vartheta$-th iterative step of the simultaneous SFM$_{\sigma}$-method. Using the conditions of Lemma 1, we now state the main convergence theorem for the SFM$_{\sigma}$-method.
Theorem 2.
Under the assumption
$$ \epsilon_i^{[0]} = \big|r_i^{[0]} - \xi_i\big| < \frac{d}{2n-1} = \frac{1}{g^{\varsigma}}, \quad (i = 1, 2, \ldots, m), $$
the iterative formula SFM$_{\sigma}$ is convergent, with convergence order $3\varsigma + 5$.
Proof. 
In Lemma 1 (i), we develop the results (86) under the assumptions (32). Using the same arguments under condition (88) of theorem 1, we have from (86):
| ϵ i 1 | g 4 n 1 | ϵ i 0 | 2 j i α j | ϵ i 0 | 3 ς + 3 < 1 g ς ( i , , m ) .
So according to Lemma 1 (ii), we have:
| ϵ i 0 | < d 2 n 1 = 1 g | ϵ i 1 | < d 2 n 1 = 1 g ς ( i , , m ) .
We prove the theorem by mathematical induction; condition (88) implies
| ϵ i ϑ + 1 | g 4 n 1 | ϵ i ϑ | 2 j i α j | ϵ i ϑ | 3 ς + 3 < 1 g ς ( i , , m ) ,
for every, ϑ = 0 , 1 , 2 , and i = 1 , 2 , , m . Using | ϵ i ϑ | = t i ϑ g ς , (91) becomes
| t i ϑ + 1 | g 4 n 1 | t i ϑ | 2 g 2 j i | t j ϑ | 3 g ς + 3 < 1 g ς ( i , , m ) ,
| t i ϑ + 1 | | t i ϑ | 2 n 1 j i | t j ϑ | 3 ς + 3 < 1 g ς ( i , , m ) .
Let t ϑ = max 1 i m | t i ϑ | , then from assumptions (93), it follows that g ς ϵ i [ 0 ] = t i [ 0 ] t [ 0 ] < 1 . For all i = 1 , 2 , , m and from (93), we obtain t i ϑ < 1 , for each ϑ = 0 , 1 , 2 , and i = 1 , 2 , , m . Therefore, from (93), we obtain:
| t i ϑ + 1 | | t ϑ | 2 n 1 ( n α j ) | t j ϑ | 3 ς + 3 < | t ϑ | 2 n 1 ( n 1 ) | t j ϑ | 3 ς + 3 | t ϑ | 3 ς + 5 ,
which shows that the sequences $\{t_i^{\vartheta}\}$, $i = 1, 2, \ldots, m$, converge to zero. Consequently, the sequences $\{|\epsilon_i^{\vartheta}|\}$ also converge to zero; that is, $r_i^{\vartheta} \to \xi_i$ for all $i$ as $\vartheta$ increases. Finally, from (94), it can be concluded that the SFM$_{\sigma}$-method has convergence order $3\varsigma + 5$.    □

4. Computational Analysis of Simultaneous Methods

Compared with a single-root-finding algorithm, the computational complexity of the simultaneous technique is dominated by its global convergence behavior. This implies that the overall complexity of the parallel scheme for (1) is $O(n^{2})$. As presented in [37], the computational efficiency of an iterative method can be estimated using the efficiency index
$$ EL(m) = \frac{\log r}{w_{as}\,AS_{m} + w_{m}\,M_{m} + w_{d}\,D_{m}}, $$
where $r$ is the convergence order, $AS_m$, $M_m$, $D_m$ are the numbers of additions/subtractions, multiplications, and divisions per cycle, and $w_{as}$, $w_m$, $w_d$ are the corresponding weights. Applying (95) and the data given in Table 1, we compute the efficiency ratio $\varrho^{*}(\mathrm{SFM}_{\sigma1}\text{--}\mathrm{SFM}_{\sigma4}, \mathrm{SFM}_{\sigma})$ [37] as:
$$ \varrho^{*}(\mathrm{SFM}_{\sigma1}\text{--}\mathrm{SFM}_{\sigma4}, \mathrm{SFM}_{\sigma}) = \left(\frac{EL(\mathrm{SFM}_{\sigma1}\text{--}\mathrm{SFM}_{\sigma4})}{EL(\mathrm{SFM}_{\sigma})} - 1\right) \times 100. $$
Figure 1a–e graphically illustrate these percentage ratios.
Here, $\Lambda_{11} = O(m)$, and SFM$_{\sigma}$ denotes the simultaneous method for a fractional parameter value equal to one.
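As a rough numerical illustration of (95) and (96) (a sketch under assumed unit weights $w_{as} = w_m = w_d = 1$ and neglecting the lower-order $\Lambda_{11}$ terms; these choices are ours, not taken from [37]), one can compare the per-cycle counts of Table 1 as follows:

```python
import math

def efficiency_index(order, counts, weights=(1.0, 1.0, 1.0)):
    """EL = log(order) / (w_as*AS + w_m*M + w_d*D), cf. (95); weights are placeholders."""
    return math.log(order) / sum(w * c for w, c in zip(weights, counts))

m, Lam = 10, 0                       # Lambda_11 = O(m) terms neglected for simplicity
sigma  = 0.5                         # SFM_sigma3 corresponds to sigma = 0.5
EL_frac = efficiency_index(3*sigma + 5, (5*m**2 + Lam, 4*m**2 + Lam, 2*m**2 + Lam))
EL_ref  = efficiency_index(3*1.0  + 5, (5*m**2 + Lam, 7*m**2 + Lam, 2*m**2 + Lam))
print((EL_frac / EL_ref - 1.0) * 100)   # percentage ratio in the sense of (96)
```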

5. Numerical Outcomes

To compare our newly developed simultaneous methods SFM$_{\sigma1}$–SFM$_{\sigma4}$ of order $3\varsigma + 5$ with SFM$_{\sigma}$, we consider a few numerical test examples in this section. All computations were carried out in Maple 18 using 64-digit floating-point arithmetic. The parallel algorithm was terminated when the following condition was met:
$$ e_i^{[\vartheta]} = \big| r_i^{[\vartheta+1]} - r_i^{[\vartheta]} \big| < \epsilon = 10^{-30}, $$
where $e_i^{[\vartheta]}$ denotes the absolute error between consecutive iterations. In Tables 2–21, the numerical schemes for the fractional parameter values 0.1, 0.3, 0.5, 0.8, and 1.0 are denoted by SFM$_{\sigma1}$–SFM$_{\sigma4}$ and SFM$_{\sigma}$, respectively, and D** denotes the number of digits used in the floating-point arithmetic. In all tables, we use the following computer terminating criteria (Algorithm 1).
Algorithm 1. The fractional numerical scheme SFM$_{\sigma}$.
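A minimal driver loop in the spirit of Algorithm 1 is sketched below (our illustration; it works in double precision, so the tolerance is loosened with respect to the paper's $\epsilon = 10^{-30}$, which requires 64-digit arithmetic). Here, `step` can be any one-sweep routine, such as the sketch of (30) given in Section 3.

```python
import numpy as np

def run_simultaneous(step, f, dsf, r0, alpha, sigma, tol=1e-13, max_sweeps=100):
    """Repeat the simultaneous sweep until e^[k] = max_i |r_i^[k+1] - r_i^[k]| < tol,
    mirroring the stopping rule used in the numerical experiments."""
    r = np.asarray(r0, dtype=complex)
    for k in range(1, max_sweeps + 1):
        r_new = step(f, dsf, r, alpha, sigma)
        if np.max(np.abs(r_new - r)) < tol:
            return r_new, k
        r = r_new
    return r, max_sweeps
```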
Engineering Applications
This section presents several engineering problems whose solutions are approximated with the newly developed parallel schemes SFM$_{\sigma1}$–SFM$_{\sigma4}$ and SFM$_{\sigma}$.
Engineering Application 1: Emden–Fowler equation
The Emden–Fowler second-order nonlinear differential equation arises in various fields of physics and engineering, such as fluid dynamics, heat transfer, and astrophysics, in particular to model the structure of self-gravitating, spherically symmetric objects such as stars. The equation is named in honor of the Swiss astrophysicist Robert Emden and the British physicist Ralph H. Fowler, who made significant contributions to its formulation. The general form of the Emden–Fowler equation is given by [38,39]:
$$ \frac{d^{2}y}{dr^{2}} + g_1(r)\,\frac{dy}{dr} + g_2(r)\,y(r) + g(r) = y^{n}, \quad y(0) = 0, \quad y'(0) = 0. $$
Because of its nonlinearity, solving the Emden–Fowler equation is often difficult, and closed-form solutions exist only in specific cases. Choosing $n = 2$, $g_1(r) = \frac{5}{r}$, $g_2(r) = \frac{3}{r^{2}}$, and $g(r) = r^{4} - 15$ in (97), we obtain the following nonlinear initial value problem:
$$ \frac{d^{2}y}{dr^{2}} + \frac{5}{r}\,\frac{dy}{dr} + \frac{3}{r^{2}}\,y(r) + g(r) = y^{2}, \quad y(0) = 0, \quad y'(0) = 0. $$
Using the procedure described in [40], the numerical solution of (98) can be obtained by solving the following polynomial:
$$ f(r) = r^{2} - \frac{79}{28991745}\,r^{14} + \frac{1306}{57747104415}\,r^{18}. $$
The Caputo-type derivative of (99) is given as:
$$ {}^{C}D_{\varsigma_1}^{\varsigma} f(r) = \frac{\Gamma(3)}{\Gamma(3-\varsigma)}\,r^{2-\varsigma} - \frac{79}{28991745}\,\frac{\Gamma(15)}{\Gamma(15-\varsigma)}\,r^{14-\varsigma} + \frac{1306}{57747104415}\,\frac{\Gamma(19)}{\Gamma(19-\varsigma)}\,r^{18-\varsigma}. $$
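The polynomial (99) and its Caputo-type derivative (100) can be assembled term by term; the small helper below (our illustrative sketch, assuming base point $\varsigma_1 = 0$) does this, and the same construction applies verbatim to the polynomials of applications 2–4.

```python
import math

def make_poly(terms):
    """terms: list of (coefficient, exponent) pairs; returns r -> sum c*r**k."""
    return lambda r: sum(c * r**k for c, k in terms)

def make_caputo(terms, sigma):
    """Term-wise Caputo-type derivative with base point 0, as in (100):
    c*r**k  ->  c * Gamma(k+1)/Gamma(k+1-sigma) * r**(k-sigma)."""
    return lambda r: sum(c * math.gamma(k + 1) / math.gamma(k + 1 - sigma) * r**(k - sigma)
                         for c, k in terms)

# Polynomial (99) of engineering application 1 and its derivative (100) for sigma = 0.5
terms = [(1.0, 2), (-79/28991745, 14), (1306/57747104415, 18)]
f, dsf = make_poly(terms), make_caputo(terms, 0.5)
print(f(1.2), dsf(1.2))
```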
The exact solution of (99) up to four decimal places is:
$$ \begin{aligned} &\zeta_1 = -3.1792 - 0.2575i, \quad \zeta_2 = -3.1792 + 0.2575i, \quad \zeta_3 = -2.4135 - 1.4797i, \quad \zeta_4 = -2.4135 + 1.4797i, \\ &\zeta_5 = -1.4797 - 2.4135i, \quad \zeta_6 = -1.4797 + 2.4135i, \quad \zeta_7 = -0.2575 - 3.1792i, \quad \zeta_8 = -0.2575 + 3.1792i, \\ &\zeta_{9,10} = 0.0, \quad \zeta_{11} = 0.2575 - 3.1792i, \quad \zeta_{12} = 0.2575 + 3.1792i, \quad \zeta_{13} = 1.4797 - 2.4135i, \\ &\zeta_{14} = 1.4797 + 2.4135i, \quad \zeta_{15} = 2.4135 - 1.4797i, \quad \zeta_{16} = 2.4135 + 1.4797i, \\ &\zeta_{17} = 3.1792 - 0.2575i, \quad \zeta_{18} = 3.1792 + 0.2575i. \end{aligned} $$
In order to assess the global convergence behavior of the parallel scheme, we generate random initial guess vectors $r_1^{*[0]}$ to $r_5^{*[0]}$ using Matlab, as given in Appendix A, Table A1. According to the results presented in Table 2, when an arbitrary starting vector is used, SFM$_{\sigma1}$–SFM$_{\sigma4}$ and SFM$_{\sigma}$ converge to the exact zeros after 19, 17, 13, 10, and 10 iterations for the fractional parameter values 0.1, 0.3, 0.5, 0.8, and 1.0, respectively. The corresponding CPU times are 2.1254, 1.0874, 1.0078, 0.0784 and 0.0078 s, as shown in Table 3. The acceleration of the convergence rate of SFM$_{\sigma1}$–SFM$_{\sigma4}$ and SFM$_{\sigma}$ as the fractional parameter value increases from 0.1 to 1.0 can be clearly seen in Table 4. Global convergence is demonstrated by the fact that the newly developed method converges to the exact roots for randomly generated initial guess values.
Table 2 shows the number of iterations of the fractional simultaneous scheme SFM σ 1 SFM σ 4 , SFM σ for different choices of the random initial vector given in Appendix A, Table A1. Table 2 clearly shows that the number of iterations decreased as the fractional parameter values increased from 0.1 to 1.0.
Table 5 shows the maximum error (Max-Err) computed by the fractional simultaneous scheme SFM σ 1 SFM σ 4 , SFM σ for different selections of the random initial vector given in Appendix A Table A1 to approximate all roots of the polynomial equations used in application 1. Table 5 clearly demonstrates that as the fractional parameter values increased from 0.1 to 1.0, the accuracy computed by the simultaneous scheme increased significantly (Figure 2).
Table 4 shows the approximate local computational order of convergence. The approximate local computational order of convergence increases as the fractional parameter values increase from 0.1 to 1.0.
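For completeness, the local computational order of convergence reported in Table 4 can be estimated from three successive error norms with the standard formula $\rho \approx \ln\!\big(e^{[\vartheta+1]}/e^{[\vartheta]}\big) / \ln\!\big(e^{[\vartheta]}/e^{[\vartheta-1]}\big)$; whether the authors use exactly this estimator is our assumption. A minimal sketch:

```python
import math

def coc(errors):
    """Approximate computational order of convergence from the last three
    successive error norms e_{k-1}, e_k, e_{k+1}."""
    e0, e1, e2 = errors[-3:]
    return math.log(e2 / e1) / math.log(e1 / e0)

print(coc([1e-2, 1e-4, 1e-8]))   # 2.0 for an artificial second-order sequence
```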
Table 3 shows the computational CPU time in seconds to approximate all roots of the polynomial equation used in application 1 employing the fractional simultaneous scheme.
The rate of convergence increases further when the initial guess values are chosen sufficiently close to the exact roots of (99), namely:
$$ \begin{aligned} &r_1^{[0]} = -3.1 - 0.2i, \quad r_2^{[0]} = -3.1 + 0.2i, \quad r_3^{[0]} = -2.4 - 1.4i, \quad r_4^{[0]} = -2.4 + 1.4i, \\ &r_5^{[0]} = -1.4 - 2.4i, \quad r_6^{[0]} = -1.4 + 2.4i, \quad r_7^{[0]} = -0.2 - 3.1i, \quad r_8^{[0]} = -0.2 + 3.1i, \\ &r_9^{[0]} = 0.02, \quad r_{10}^{[0]} = 0.01, \quad r_{11}^{[0]} = 0.2 - 3.1i, \quad r_{12}^{[0]} = 0.2 + 3.1i, \\ &r_{13}^{[0]} = 1.4 - 2.4i, \quad r_{14}^{[0]} = 1.4 + 2.4i, \quad r_{15}^{[0]} = 2.4 - 1.4i, \quad r_{16}^{[0]} = 2.4 + 1.4i, \\ &r_{17}^{[0]} = 3.1 - 0.2i, \quad r_{18}^{[0]} = 3.1 + 0.2i. \end{aligned} $$
If we start with initial guess values that are close to the exact roots, Table 6 demonstrates that the accuracy and convergence order of the fractional simultaneous scheme improve. The residual error computed by the numerical methods also decreased as the fractional parameter value was increased from 0.1 to 1.0.
Engineering Application 2: Mass–Spring System under a Conservative Force
Let us now examine an external force acting on a vibrating mass on a spring. A driving force that causes the spring support to oscillate vertically, for instance, could be represented by $f(r)$. If the mechanical system is conservative, the following nonlinear equation arises [41,42]:
$$ f''(r) + \big(f(r)\big)^{2} + 32 - 160\,r = 0, \quad f(0) = 0, \quad f'(0) = 0. $$
Using the method described in [40], the following polynomial is used to approximate (101):
$$ \begin{aligned} f(r) = {} & \frac{43359567872}{189}\,r^{10} + \frac{425993216}{405}\,r^{9} - \frac{278592512}{63}\,r^{8} + \frac{1024000}{63}\,r^{7} - \frac{4096384}{45}\,r^{6} \\ & + 256\,r^{5} - \frac{6400}{3}\,r^{4} + \frac{16}{3}\,r^{3} - 80\,r^{2}. \end{aligned} $$
The Caputo-type derivative of (102) is given as:
$$ \begin{aligned} {}^{C}D_{\varsigma_1}^{\varsigma} f(r) = {} & \frac{43359567872}{189}\,\frac{\Gamma(11)}{\Gamma(11-\varsigma)}\,r^{10-\varsigma} + \frac{425993216}{405}\,\frac{\Gamma(10)}{\Gamma(10-\varsigma)}\,r^{9-\varsigma} - \frac{278592512}{63}\,\frac{\Gamma(9)}{\Gamma(9-\varsigma)}\,r^{8-\varsigma} \\ & + \frac{1024000}{63}\,\frac{\Gamma(8)}{\Gamma(8-\varsigma)}\,r^{7-\varsigma} - \frac{4096384}{45}\,\frac{\Gamma(7)}{\Gamma(7-\varsigma)}\,r^{6-\varsigma} + 256\,\frac{\Gamma(6)}{\Gamma(6-\varsigma)}\,r^{5-\varsigma} \\ & - \frac{6400}{3}\,\frac{\Gamma(5)}{\Gamma(5-\varsigma)}\,r^{4-\varsigma} + \frac{16}{3}\,\frac{\Gamma(4)}{\Gamma(4-\varsigma)}\,r^{3-\varsigma} - 80\,\frac{\Gamma(3)}{\Gamma(3-\varsigma)}\,r^{2-\varsigma}. \end{aligned} $$
The exact solution up to four decimal places is written as:
$$ \begin{aligned} &\zeta_1 = -0.1290 - 0.0824i, \quad \zeta_2 = -0.1290 + 0.0824i, \quad \zeta_3 = -0.0513 - 0.1495i, \quad \zeta_4 = -0.0513 + 0.1495i, \\ &\zeta_{5,6} = 0.0, \quad \zeta_7 = 0.0525 - 0.1493i, \quad \zeta_8 = 0.0525 + 0.1493i, \quad \zeta_9 = 0.1301 - 0.0822i, \quad \zeta_{10} = 0.1301 + 0.0822i. \end{aligned} $$
To assess the global convergence behavior of the parallel scheme, we use Matlab to generate random initial guess vectors $r_1^{*[0]}$–$r_5^{*[0]}$, as specified in Appendix A, Table A2. With an arbitrary starting vector, SFM$_{\sigma1}$–SFM$_{\sigma4}$ and SFM$_{\sigma}$ converge to the exact zeros after 19, 16, 14, 10 and 10 iterations, as indicated in Table 7, for the fractional parameter values 0.1, 0.3, 0.5, 0.8, and 1.0, respectively. As described in Table 8, the corresponding CPU times are 3.1254, 1.0729, 1.0137, 0.0881 and 0.0141 s, respectively. Table 9 clearly illustrates how the rate of convergence of SFM$_{\sigma1}$–SFM$_{\sigma4}$ and SFM$_{\sigma}$ accelerates as the value of the fractional parameter increases from 0.1 to 1.0. The newly developed method converges to the exact roots for randomly generated initial guess values, demonstrating its global convergence.
Table 7 shows the number of iterations of fractional simultaneous scheme SFM σ 1 SFM σ 4 , SFM σ for different random initial vectors given in Appendix A Table A2. Table 7 clearly shows that the number of iterations decreased as the fractional parameter values increased from 0.1 to 1.0.
Table 9 shows the maximum error (Max-Err) computed by fractional simultaneous scheme SFM σ 1 SFM σ 4 , SFM σ for different random initial vectors given in Appendix A Table A2 to approximate all the roots of polynomial equations used in application 2. Table 9 clearly demonstrates that as the fractional parameter values increased from 0.1 to 1.0, the accuracy computed by simultaneous scheme increased significantly (Figure 3).
The approximate local computational order of convergence is shown in Table 10; it increases as the fractional parameter values increase from 0.1 to 1.0.
Table 10 displays the local computational order of convergence of the fractional simultaneous scheme for approximating all roots of the polynomial equation used in application 2. The convergence rate increases when the initial estimates
$$ \begin{aligned} &r_1^{[0]} = -0.1 - 0.01i, \quad r_2^{[0]} = -0.1 + 0.01i, \quad r_3^{[0]} = -0.01 - 0.1i, \quad r_4^{[0]} = -0.01 + 0.1i, \\ &r_5^{[0]} = 0.01, \quad r_6^{[0]} = 0.1, \quad r_7^{[0]} = 0.05 - 0.14i, \quad r_8^{[0]} = 0.05 + 0.14i, \\ &r_9^{[0]} = 0.1 - 0.08i, \quad r_{10}^{[0]} = 0.1 + 0.08i, \end{aligned} $$
chosen sufficiently close to the exact roots of engineering application 2, are used as the initial guess values.
Table 11 shows that the convergence order and accuracy of the fractional simultaneous scheme increase if we take the initial guess values close to the exact roots. The residual error computed by the numerical methods also decreased as we increased the fractional parameter value from 0.1 to 1.0.
Engineering Application 3: Series Circuit Analogue
Consider a flexible spring that is stretched vertically from a rigid support and has a mass $m$ attached to its free end. Naturally, the mass will determine how much the spring elongates or stretches; different masses will stretch the spring by different amounts. Hooke’s law states that the spring itself generates a restoring force $F$ opposed to the direction of elongation and proportional to the amount of elongation $s$; in short, $F = ks$, where $k$ is the spring constant. For an undamped spring/mass system, the problem is mathematically modeled by the differential equation [40,42]:
$$ \frac{d^{2}y}{dr^{2}} + y^{3} = 0, \quad y(0) = 1, \quad y'(0) = 1. $$
Using the method described in [40], the following polynomial is used to approximate (104):
$$ f(r) = -0.5\,r^{3} - 0.5\,r^{2} + r + 1. $$
The Caputo-type derivative of (105) is given as:
$$ {}^{C}D_{\varsigma_1}^{\varsigma} f(r) = -0.5\,\frac{\Gamma(4)}{\Gamma(4-\varsigma)}\,r^{3-\varsigma} - 0.5\,\frac{\Gamma(3)}{\Gamma(3-\varsigma)}\,r^{2-\varsigma} + \frac{\Gamma(2)}{\Gamma(2-\varsigma)}\,r^{1-\varsigma} + \frac{1}{\Gamma(1-\varsigma)}\,r^{-\varsigma}. $$
The exact solution of (105) is written as follows:
$$ \zeta_1 = -1, \quad \zeta_2 = 1.414213562, \quad \zeta_3 = -1.414213562. $$
To assess the global convergence behavior of the parallel scheme, we use Matlab to generate random initial guess vectors $r_1^{*[0]}$–$r_5^{*[0]}$, as specified in Appendix A, Table A3. With an arbitrary starting vector, SFM$_{\sigma1}$–SFM$_{\sigma4}$ and SFM$_{\sigma}$ converge to the exact zeros after 19, 16, 13, 8, and 8 iterations, as indicated in Table 12, for the fractional parameter values 0.1, 0.3, 0.5, 0.8, and 1.0, respectively. As described in Table 13, the corresponding CPU times are 3.1364, 1.0701, 1.0078, 0.0874 and 0.0975 s, respectively. Table 14 clearly illustrates how the rate of convergence of SFM$_{\sigma1}$–SFM$_{\sigma4}$ and SFM$_{\sigma}$ accelerates as the value of the fractional parameter increases from 0.1 to 1.0. The newly developed method converges to the exact roots for randomly generated initial guess values, demonstrating its global convergence.
Table 12 shows the number of iterations of fractional simultaneous scheme SFM σ 1 SFM σ 4 , SFM σ for different random initial vectors given in Appendix A Table A3. Table 12 clearly shows that the number of iterations decreased as the fractional parameter values increased from 0.1 to 1.0.
Table 14 shows the maximum error (Max-Err) computed by the fractional simultaneous scheme SFM$_{\sigma1}$–SFM$_{\sigma4}$, SFM$_{\sigma}$ for the different random initial vectors given in Appendix A, Table A3 to approximate all roots of the polynomial equation used in application 3. Table 14 clearly demonstrates that as the fractional parameter values increased from 0.1 to 1.0, the accuracy achieved by the simultaneous scheme increased significantly (Figure 4).
The approximate local computational order of convergence is shown in Table 15. As the fractional parameter values increase from 0.1 to 1.0, the approximate local computational order of convergence increases.
Table 13 displays the computational CPU time in seconds required to approximate all roots of the polynomial equation used in application 3 using the fractional simultaneous scheme.
The convergence rate increases when the initial estimates
$$ r_1^{[0]} = -0.1, \quad r_2^{[0]} = 1.4, \quad r_3^{[0]} = -1.4, $$
chosen sufficiently close to the exact roots of engineering application 3, are used as the initial guess values.
Table 16 shows how the convergence order and accuracy of the fractional simultaneous scheme increase when we use initial guess values close to the exact roots. The residual error computed by the numerical schemes decreased as we increased the fractional parameter value from 0.1 to 1.0.
Application 4: Hanging Object
A chain attached to an object on the ground is pulled vertically upward by constant forces against gravity, causing the following nonlinear initial value problem:
$$ \frac{d^{2}y}{dr^{2}} - \left(\frac{dy}{dr}\right)^{2} + 13\,r + 1 = 0, \quad y(0) = 0, \quad y'(0) = \frac{e^{2}-1}{e^{2}+1}. $$
Using the method described in [40], the following polynomial is used to approximate (107):
$$ f(r) = -0.02590111180\,r^{4} - 0.1066166681\,r^{3} - 0.2099871708\,r^{2} + 0.7615941560\,r. $$
The Caputo-type derivative of (108) is given as:
$$ {}^{C}D_{\varsigma_1}^{\varsigma} f(r) = -0.02590111180\,\frac{\Gamma(5)}{\Gamma(5-\varsigma)}\,r^{4-\varsigma} - 0.1066166681\,\frac{\Gamma(4)}{\Gamma(4-\varsigma)}\,r^{3-\varsigma} - 0.2099871708\,\frac{\Gamma(3)}{\Gamma(3-\varsigma)}\,r^{2-\varsigma} + 0.7615941560\,\frac{\Gamma(2)}{\Gamma(2-\varsigma)}\,r^{1-\varsigma}. $$
The exact solution of (108) up to four decimal places is written as follows:
$$ \zeta_1 = 0, \quad \zeta_2 = 1.6609, \quad \zeta_3 = -2.8886 + 3.0592i, \quad \zeta_4 = -2.8886 - 3.0592i. $$
To assess the global convergence behavior of the parallel scheme, we use Matlab to generate random initial guess vectors $r_1^{*[0]}$–$r_5^{*[0]}$, as specified in Appendix A, Table A4. With an arbitrary starting vector, SFM$_{\sigma1}$–SFM$_{\sigma4}$ and SFM$_{\sigma}$ converge to the exact zeros after 19, 16, 14, 8 and 8 iterations, as indicated in Table 17, for the fractional parameter values 0.1, 0.3, 0.5, 0.8, and 1.0, respectively. The newly developed method converges to the exact roots for randomly generated initial guess values, demonstrating its global convergence.
Table 17 shows the number of iterations of the fractional simultaneous scheme SFM$_{\sigma1}$–SFM$_{\sigma4}$, SFM$_{\sigma}$ for the different random initial vectors given in Appendix A, Table A4. Table 18 shows the maximum error (Max-Err) computed by the fractional simultaneous scheme SFM$_{\sigma1}$–SFM$_{\sigma4}$, SFM$_{\sigma}$ for the different random initial vectors given in Appendix A, Table A4 to approximate all roots of the polynomial equation used in application 4. Table 18 clearly demonstrates that as the fractional parameter values increased from 0.1 to 1.0, the accuracy achieved by the simultaneous scheme increased significantly (Figure 5), which indicates the global convergence behavior of our newly developed simultaneous scheme. Table 19 clearly illustrates how the computational order of convergence of SFM$_{\sigma1}$–SFM$_{\sigma4}$, SFM$_{\sigma}$ increases as the value of the fractional parameter increases from 0.1 to 1.0. As described in Table 20, the corresponding CPU times of 2.1254, 1.0874, 1.0078, 0.0874, and 0.0078 s, respectively, are consumed.
The approximate local computational order of convergence is shown in Table 19. As the fractional parameter values increase from 0.1 to 1.0, the approximate local computational order of convergence increases.
Table 20 displays the computational CPU time in seconds required to approximate all roots of the polynomial equation used in application 4 using the fractional simultaneous scheme.
The convergence rate increases when the initial estimates
$$ r_1^{[0]} = 0.2, \quad r_2^{[0]} = 1.4, \quad r_3^{[0]} = -2.8 + 3.0i, \quad r_4^{[0]} = -2.8 - 3.0i, $$
chosen sufficiently close to the exact roots of engineering application 4, are used as the initial guess values.
Table 21 shows how the convergence order and accuracy of the fractional simultaneous scheme increase when we use initial guess values close to the exact roots. The residual error computed by the numerical schemes decreased as we increased the fractional parameter value from 0.1 to 1.0.

6. Conclusions

  • In order to approximate all roots of nonlinear equations, a new fractional parallel approach with convergence orders of 3 ς + 5 is presented. The global convergence behavior of the fractional parallel schemes is demonstrated using a variety of random starting estimates of SFM σ 1 SFM σ 4 , SFM σ .
  • The numerical results of the engineering applications from Tables 1–21 and Figures 1–5 clearly show the efficiency of the newly developed methods in terms of CPU time, computational error, maximum residual error, and local computational order of convergence (LCOC). The acceleration of the convergence rate is observed when initial approximations close to the exact roots are selected, as shown in Tables 6, 11, 16 and 21.
  • In the future, higher-order parallel iterative approaches for solving (1) will be developed to handle more difficult engineering problems using fractional derivatives of Riemann–Liouville and Grünwald–Letnikov types.

Author Contributions

Conceptualization, M.S. and B.C.; methodology, M.S.; software, M.S.; validation, M.S.; formal analysis, B.C.; investigation, M.S.; resources, B.C.; writing—original draft preparation, M.S. and B.C.; writing—review and editing, B.C.; visualization, M.S. and B.C.; supervision, B.C.; project administration, B.C.; funding acquisition, B.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the Provincia Autonoma di Bolzano/Alto Adige – Ripartizione Innovazione, Ricerca, Università e Musei (contract nr. 19/34). Bruno Carpentieri is a member of the Gruppo Nazionale per il Calcolo Scientifico (GNCS) of the Istituto Nazionale di Alta Matematica (INdAM), and this work was partially supported by INdAM-GNCS under Progetti di Ricerca 2022.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

Abbreviations

The following abbreviations are utilized in this study’s article:
SFM$_{\sigma1}$–SFM$_{\sigma4}$, SFM$_{\sigma}$: Fractional parallel schemes
Error-it: Iteration number
Ex-Time: Computer CPU time in seconds
$\rho_{i}^{\varsigma}(\sigma-1)$: Computational local order of convergence
Per-E: Percentage effectiveness
Ini-V: Initial vector
D**: Digits of floating-point arithmetic
CPU-Time: Computational time in seconds

Appendix A

Table A1. Initial random vectors used in fractional simultaneous schemes for approximating all polynomial roots used in engineering application 1.
r [ 0 ] [ r 1 [ 0 ] , r 2 [ 0 ] , r 3 [ 0 ] , r 4 [ 0 ] , r 5 [ 0 ] , r 6 [ 0 ] , r 7 [ 0 ] , r 8 [ 0 ] , r 9 [ 0 ] , r 10 [ 0 ] , r 11 [ 0 ] , r 12 [ 0 ] , r 13 [ 0 ] , r 14 [ 0 ] , r 15 [ 0 ] , r 16 [ 0 ] , r 17 [ 0 ] , r 18 [ 0 ] , ]
r 1 * [ 0 ] [−0.160, 0.643, 0.967, 0.085, 0.967, 0.881, 0.760, 0.643, 0.874, 0.475, 0.876, −0.153, 0.392, 0.615, 0.171, 0.743, 0.643, 0.967]
r 2 * [ 0 ] [0.743, 0.392, 0.655, 0.171, 0.743, 0.392, 0.855, 0.071, 0.145, 0.874, 0.775, 0.076, 0.643, 0.967, 0.085, 0.967, 0.881, 0.076]
r 3 * [ 0 ] [−0.145, 0.874, 0.475, 0.876, −0.153, 0.392, 0.615, 0.171, 0.743, 0.775, 0.076, 0.3456, 0.74125, 0.643, 0.967, 0.874, 0.473, 0.145]
[ , , , , , , , , , , , , , , , , , ]
Table A2. Initial random vectors used in fractional simultaneous schemes for approximating all polynomial roots used in engineering application 2.
r [ 0 ] [ r 1 [ 0 ] , r 2 [ 0 ] , r 3 [ 0 ] , r 4 [ 0 ] , r 5 [ 0 ] , r 6 [ 0 ] , r 7 [ 0 ] , r 8 [ 0 ] , r 9 [ 0 ] , r 10 [ 0 ] ]
r 1 * [ 0 ] [−0.760, 0.643,0.967, 0.881, 0.760, 0.643, 0.967, 0.085, 0.01451, 0.1452]
r 2 * [ 0 ] [−0.153, 0.392, 0.615, 0.171, 0.743, 0.392, 0.855, 0.071, 0.4512, 0.5641]
r 3 * [ 0 ] [−0.905, 0.874, 0.473, 0.076, 0.145, 0.874, 0.775, 0.076, 0.3456, 0.74125]
[ , , , , , , , ,   , ]
Table A3. Initial random vectors used in fractional simultaneous schemes for approximating all polynomial roots used in engineering application 3.
r [ 0 ] [ r 1 [ 0 ] , r 2 [ 0 ] , r 3 [ 0 ] ]
r 1 * [ 0 ] [−0.760, 0.643,0.967]
r 2 * [ 0 ] [0.743, 0.392, 0.855]
r 3 * [ 0 ] [0.076, 0.145, 0.874]
[ , , ]
Table A4. Initial random vectors used in fractional simultaneous schemes for approximating all polynomial roots used in engineering application 4.
r [ 0 ] [ r 1 [ 0 ] , r 2 [ 0 ] , r 3 [ 0 ] , r 4 [ 0 ] ]
r 1 * [ 0 ] [−0.160, 0.643, 0.967, 0.085]
r 2 * [ 0 ] [0.743, 0.392, 0.655, 0.171]
r 3 * [ 0 ] [−0.145, 0.874, 0.475, 0.876]
[ , , , ]

References

  1. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  2. Jarratt, P. Some efficient fourth order multiple methods for solving equations. BIT 1969, 9, 119–124. [Google Scholar] [CrossRef]
  3. King, R. A family of fourth order methods for nonlinear equations. SIAM J. Numer. Anal. 1973, 10, 876–879. [Google Scholar] [CrossRef]
  4. Ostrowski, A.M. Solution of Equation in Euclidean and Banach Space, 3rd ed.; Academic Press: New York, NY, USA, 1973. [Google Scholar]
  5. Chun, C. Some fourth-order iterative methods for solving nonlinear equations. Appl. Math. Lett. 2008, 195, 454–456. [Google Scholar] [CrossRef]
  6. Weierstrass, K. Neuer Beweis des Satzes, dass jede ganze rationale Function einer Veränderlichen dargestellt werden kann als ein Product aus linearen Functionen derselben Veränderlichen. Sitzungsberichte Königlich Preuss. Akad. Wiss. Berl. 1891, 2, 1085–1101. [Google Scholar]
  7. Kanno, S.; Kjurkchiev, N.V.; Yamamoto, T. On some methods for the simultaneous determination of polynomial zeros. Japan J. Appl. Math. 1995, 13, 267–288. [Google Scholar] [CrossRef]
  8. Proinov, P.D.; Cholakov, S.I. Semilocal convergence of Chebyshev-like root-finding method for simultaneous approximation of polynomial zeros. Appl. Math. Comput. 2014, 236, 669–682. [Google Scholar] [CrossRef]
  9. Mir, N.A.; Muneer, R.; Jabeen, I. Some families of two-step simultaneous methods for determining zeros of nonlinear equations. ISRN Appl. Math. 2011, 2011, 817174. [Google Scholar] [CrossRef]
  10. Farmer, M.R. Computing the Zeros of Polynomials Using the Divide and Conquer Approach; Department of Computer Science and Information Systems; Birkbeck: London, UK, 2014. [Google Scholar]
  11. Nourein, A.W. An improvement on Nourein’s method for the simultaneous determination of the zeroes of a polynomial (an algorithm). J. Comput. Appl. Math. 1977, 3, 109–112. [Google Scholar] [CrossRef]
  12. Aberth, O. Iteration methods for finding all zeros of a polynomial simultaneously. Math. Comput. 1973, 27, 339–344. [Google Scholar] [CrossRef]
  13. Cholakov, S.I.; Vasileva, M.T. A convergence analysis of a fourth-order method for computing all zeros of a polynomial simultaneously. J. Comput. Appl. Math. 2017, 321, 270–283. [Google Scholar] [CrossRef]
  14. Cosnard, M.; Fraigniaud, P. Finding the roots of a polynomial on an MIMD multicomputer. Parallel Comput. 1990, 15, 75–85. [Google Scholar] [CrossRef]
  15. Petković, M.S.; Petković, L.D.; Džunić, J. On an efficient method for the simultaneous approximation of polynomial multiple roots. Appl. Anal. Disc. Math. 2014, 8, 73–94. [Google Scholar] [CrossRef]
  16. Rafiq, N.; Shams, M.; Mir, N.A.; Gaba, Y.U. A highly efficient computer method for solving polynomial equations appearing in Engineering Problems. Math. Probl. Eng. 2023, 2021, 9826693. [Google Scholar] [CrossRef]
  17. Shams, M.; Rafiq, N.; Kausar, N.; Agarwal, P.; Park, C.; Mir, N.A. On iterative techniques for estimating all roots of nonlinear equation and its system with application in differential equation. Adv. Differ. Equ. 2021, 2021, 480. [Google Scholar] [CrossRef]
  18. Kyncheva, V.K.; Yotov, V.V.; Ivanov, S.I. Convergence of Newton, Halley and Chebyshev iterative methods as methods for simultaneous determination of multiple polynomial zeros. Appl. Numer. Math. 2017, 112, 146–154. [Google Scholar] [CrossRef]
  19. Nedzhibov, H. Iterative methods for simultaneous computing arbitrary number of multiple zeros of nonlinear equations. Int. J. Comp. Math. 2013, 90, 994–1007. [Google Scholar] [CrossRef]
  20. Sendov, B.L.; Andreev, A.; Kjurkchiev, N. Numerical solution of polynomial equations. Handb. Numer. Anal. 1994, 3, 625–778. [Google Scholar]
  21. Kyurkchiev, N.; Iliev, A. A general approach to methods with a sparse Jacobian for solving nonlinear systems of equations. Serdica Math. J. 2007, 33, 433–448. [Google Scholar]
  22. Shams, M.; Kausar, N.; Agarwal, P.; Oros, G.I. On Efficient Fractional Caputo-type Simultaneous Scheme for Finding all Roots of polynomial equations. Fractals 2023, 6, 2340075. [Google Scholar] [CrossRef]
  23. Dimitrov, Y.; Georgiev, S.; Todorov, V. Approximation of Caputo Fractional Derivative and Numerical Solutions of Fractional Differential Equations. Fractal Fract. 2023, 7, 750. [Google Scholar] [CrossRef]
  24. Shams, M.; Kausar, N.; Agarwal, P.; Shah, M.A. On family of Caputo-Type fractional numerical scheme for solving polynomial. Appl. Math. Sci. Eng. 2023, 31, 2181959. [Google Scholar] [CrossRef]
  25. Oliveira, D.E.C.; Tenreiro Machado, J.A. A review of definitions for fractional derivatives and integral. Math. Probl. Eng. 2014, 2014, 238459. [Google Scholar] [CrossRef]
  26. Oldham, K.; Spanier, J. The Fractional Calculus Theory and Applications of Differentiation and Integration to Arbitrary Order; Elsevier: Amsterdam, The Netherlands, 1974. [Google Scholar]
  27. Kukushkin, M.V. Abstract fractional calculus for m-accretive operators. arXiv 2019, arXiv:1901.06118. [Google Scholar] [CrossRef]
  28. Samko, S.G.; Kilbas, A.A.; Marichev, O.I. Fractional Integrals and Derivatives: Theory and Applications; Gordon and Breach Science Publishers: Philadelphia, PA, USA, 1993. [Google Scholar]
  29. Shams, M.; Carpentieri, B. Efficient Inverse Fractional Neural Network-Based Simultaneous Schemes for Nonlinear Engineering Applications. Fractal. Fract. 2023, 7, 849. [Google Scholar] [CrossRef]
  30. Odibat, Z.M.; Shawagfeh, N.T. Generalized Taylor’s formula. Appl. Math. Comput. 2007, 186, 286–293. [Google Scholar] [CrossRef]
  31. Akgül, A.; Cordero, A.; Torregrosa, J.R. A fractional Newton method with 2αth-order of convergence and its stability. Appl. Math. Lett. 2019, 98, 344–351. [Google Scholar] [CrossRef]
  32. Torres-Hernandez, A.; Brambila-Paz, F. Sets of fractional operators and numerical estimation of the order of convergence of a family of fractional fixed-point methods. Fractal Fract. 2021, 4, 240. [Google Scholar] [CrossRef]
  33. Cajori, F. Historical note on the Newton-Raphson method of approximation. Am. Math. Mon. 1911, 18, 29–32. [Google Scholar] [CrossRef]
  34. Kumar, P.; Agrawal, O.P. An approximate method for numerical solution of fractional differential equations. Signal Process. 2006, 86, 2602–2610. [Google Scholar] [CrossRef]
  35. Candelario, G.; Cordero, A.; Torregrosa, J.R. Multipoint fractional iterative methods with (2α + 1)th-order of convergence for solving nonlinear problems. Mathematics 2020, 8, 452. [Google Scholar] [CrossRef]
  36. Proinov, P.D.; Vasileva, M.T. On the convergence of high-order Ehrlich-type iterative methods for approximating all zeros of a polynomial simultaneously. J. Ineq. Appl. 2015, 2015, 336. [Google Scholar] [CrossRef]
  37. Chu, Y.; Rafiq, N.; Shams, M.; Akram, S.; Mir, N.A.; Kalsoom, H. Computer methodologies for the comparison of some efficient derivative free simultaneous iterative methods for finding roots of non-linear equations. Comput. Mater. Cont. 2020, 66, 275–290. [Google Scholar] [CrossRef]
  38. Naseem, A.; Rehman, M.A.; Abdeljawad, T. Computational methods for non-linear equations with some real-world applications and their graphical analysis. Intell. Autom. Soft Comput. 2021, 30, 1–14. [Google Scholar] [CrossRef]
  39. Akin-Bohner, E.; Hoffacker, J. Oscillation properties of an Emden-Fowler type equation on discrete time scales. J. Diff. Equ. Appl. 2003, 9, 603–612. [Google Scholar] [CrossRef]
  40. Shams, M.; Kausar, N.; Yaqoob, N.; Arif, N.; Addis, G.M. Techniques for finding analytical solution of generalized fuzzy differential equations with applications. Complexity 2023, 2023, 3000653. [Google Scholar] [CrossRef]
  41. Zill, D.G. Differential Equations with Boundary-Value Problems; Cengage Learning: Boston, MA, USA, 2016. [Google Scholar]
  42. Chapra, S. EBOOK: Applied Numerical Methods with MATLAB for Engineers and Scientists; McGraw Hill: New York, NY, USA, 2011. [Google Scholar]
Figure 1. (a–e) The computational efficiency ratios of fractional simultaneous schemes with respect to each other for different fractional parameter values. (a) Computational efficiency ratio of SFM$_{\sigma1}$ with respect to SFM$_{\sigma}$. (b) Computational efficiency ratio of SFM$_{\sigma2}$ with respect to SFM$_{\sigma}$. (c) Computational efficiency ratio of SFM$_{\sigma3}$ with respect to SFM$_{\sigma}$. (d) Computational efficiency ratio of SFM$_{\sigma4}$ with respect to SFM$_{\sigma}$. (e) Computational efficiency ratio of SFM$_{\sigma}$ with respect to SFM$_{\sigma1}$.
Figure 2. Residual error of the SFM$_{\sigma}$ scheme for approximating all polynomial equation roots used in engineering application 1 for various fractional parameter values, namely $\varsigma = 0.1, 0.3, 0.7, 0.8, 1.0$.
Figure 3. The residual error of SFM$_{\sigma}$ for approximating all polynomial equation roots used in engineering application 2 for various fractional parameter values, namely $\varsigma = 0.1, 0.3, 0.7, 0.8, 1.0$.
Figure 4. The residual error of the SFM$_{\sigma}$ for approximating all polynomial equation roots used in engineering application 3 for various fractional parameter values, namely $\varsigma = 0.1, 0.3, 0.7, 0.8, 1.0$.
Figure 5. The residual error of the SFM$_{\sigma}$ for approximating all polynomial equation roots used in engineering application 4 for various fractional parameter values, namely $\varsigma = 0.1, 0.3, 0.7, 0.8, 1.0$.
Table 1. Operations per cycle.
Methods | Additions and Subtractions | Multiplications | Divisions
SFM σ1–SFM σ4 | 5m^2 + Λ_11 | 4m^2 + Λ_11 | 2m^2 + Λ_11
SFM σ | 5m^2 + Λ_11 | 7m^2 + Λ_11 | 2m^2 + Λ_11
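Read literally, Table 1 says that the arithmetic cost of one cycle grows quadratically in the number m of simultaneously approximated roots, with the lumped term Λ_11 covering the remaining work. A minimal sketch of this cost model follows; treating Λ_11 as a plain constant is our simplification, as its definition appears earlier in the paper.

```python
# Rough per-cycle cost model based on the SFM σ row of Table 1; Λ_11 (lam11)
# is treated here as a user-supplied constant.
def ops_per_cycle(m, lam11=0):
    return {
        "add_sub": 5 * m**2 + lam11,  # additions and subtractions
        "mult":    7 * m**2 + lam11,  # multiplications
        "div":     2 * m**2 + lam11,  # divisions
    }

print(ops_per_cycle(10))  # each count grows like O(m^2) in the number of roots m
```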
Table 2. Experiments using random initial approximations for finding all polynomial roots simultaneously.
f(r) = r^2 − (79/28991745) r^14 + (1306/57747104415) r^18
Ini-V | Number of Iterations
r[0] | SFM σ1 | SFM σ2 | SFM σ3 | SFM σ4 | SFM σ
r_1*[0] | 19.0 | 17.0 | 13.0 | 10.0 | 10.0
r_2*[0] | 19.0 | 17.0 | 13.0 | 10.0 | 10.0
r_3*[0] | 19.0 | 17.0 | 13.0 | 10.0 | 10.0
r_4*[0] | 19.0 | 17.0 | 13.0 | 10.0 | 10.0
r_5*[0] | 19.0 | 17.0 | 13.0 | 10.0 | 10.0
Iterations are computed using 64 D**.
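The iteration counts above are for the fractional simultaneous schemes SFM σ1–SFM σ4 and SFM σ constructed earlier in the paper, which are not restated here. Purely to illustrate the experimental protocol behind these tables (random, mutually distinct starting guesses, one simultaneous sweep per iteration, and a residual-based stopping rule), the following minimal Python sketch runs the classical Weierstrass–Durand–Kerner iteration, not the SFM schemes, in double precision on a simple cubic; the reported experiments instead use 64-digit arithmetic and a 10^−30 stopping tolerance.

```python
import numpy as np

def weierstrass(coeffs, z, tol=1e-12, max_it=100):
    """Classical Weierstrass/Durand-Kerner simultaneous iteration.
    coeffs: polynomial coefficients, highest degree first.
    z: distinct starting guesses (one per root). Returns (roots, iterations)."""
    c = np.asarray(coeffs, dtype=complex)
    c = c / c[0]                                     # make the polynomial monic
    z = np.array(z, dtype=complex)
    for it in range(1, max_it + 1):
        for i in range(len(z)):
            denom = np.prod(z[i] - np.delete(z, i))  # prod over j != i of (z_i - z_j)
            z[i] -= np.polyval(c, z[i]) / denom      # Weierstrass correction
        if np.max(np.abs(np.polyval(c, z))) < tol:   # residual stopping rule
            return z, it
    return z, max_it

# Example: z^3 - 1 from random, mutually distinct complex starting guesses.
rng = np.random.default_rng(1)
z0 = rng.uniform(-1, 1, 3) + 1j * rng.uniform(-1, 1, 3)
roots, iters = weierstrass([1, 0, 0, -1], z0)
print(iters, np.sort_complex(roots))
```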
Table 3. CPU time using random initial approximations for finding all polynomial roots.
f(r) = r^2 − (79/28991745) r^14 + (1306/57747104415) r^18
R-Initial | Computational CPU Time in Seconds
r[0] | SFM σ1 | SFM σ2 | SFM σ3 | SFM σ4 | SFM σ
r_1*[0] | 1.0124 | 1.0012 | 1.0014 | 0.0417 | 0.0045
r_2*[0] | 2.0125 | 1.0415 | 1.0045 | 0.0784 | 0.0048
r_3*[0] | 2.1254 | 1.0148 | 1.0047 | 0.0745 | 0.0078
r_4*[0] | 1.5241 | 1.0874 | 1.0078 | 0.0451 | 0.0071
r_5*[0] | 1.7451 | 1.0741 | 1.0078 | 0.0874 | 0.0069
Maximum CPU time is equal to 2.1254 using 64 D**.
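The CPU times in Table 3 (and in Tables 8, 13 and 20 below) are elapsed times for one complete run from a random starting vector. Continuing the Weierstrass sketch above, a wall-clock timing of the same kind could be taken as follows; the timer choice is ours, as the paper does not state which timer was used.

```python
import time

# Continues the previous sketch: `weierstrass` and the starting vector `z0`
# are assumed to be defined there.
t0 = time.perf_counter()
roots, iters = weierstrass([1, 0, 0, -1], z0)
elapsed = time.perf_counter() - t0
print(f"{iters} iterations in {elapsed:.4f} s")
```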
Table 4. Local computational order of convergence using random initial approximations.
f(r) = r^2 − (79/28991745) r^14 + (1306/57747104415) r^18
Ini-V | Local Computational Order of Convergence
r[0] | SFM σ1 | SFM σ2 | SFM σ3 | SFM σ4 | SFM σ
r_1*[0] | 6.0019 | 6.7215 | 7.7124 | 7.9115 | 7.9414
r_2*[0] | 7.0128 | 6.4515 | 7.4236 | 8.0365 | 7.5210
r_3*[0] | 4.9917 | 6.6454 | 7.0148 | 7.9914 | 7.8456
r_4*[0] | 5.8748 | 5.9874 | 7.5154 | 7.7214 | 8.0147
r_5*[0] | 6.1427 | 6.8748 | 7.3174 | 7.6148 | 7.4878
Maximum LCOC is equal to 8.0147 using 64 D**.
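A standard way to estimate the local computational order of convergence reported above is ρ^[ϑ] ≈ log(‖e^[ϑ+1]‖ / ‖e^[ϑ]‖) / log(‖e^[ϑ]‖ / ‖e^[ϑ−1]‖), computed from three consecutive error norms. The exact estimator used by the authors is defined earlier in the paper, so the sketch below only applies this common formula to illustrative, made-up values.

```python
import math

def lcoc(e_prev, e_curr, e_next):
    """Standard estimate of the (local) computational order of convergence
    from three consecutive error norms."""
    return math.log(e_next / e_curr) / math.log(e_curr / e_prev)

# Illustrative values only (not taken from Table 4).
print(lcoc(1e-2, 1e-6, 1e-14))  # -> 2.0: the error exponent roughly doubles per step
```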
Table 5. Maximum error using random initial approximations for finding all polynomial roots.
f(r) = 2949.604 r^4 + 14748.02 r^3 − 62295.63648 r^2 + 2.229900624 × 10^5 r − 2.675880749 × 10^5
R-Initial | Maximum Error
r[0] | SFM σ1 | SFM σ2 | SFM σ3 | SFM σ4 | SFM σ
r_1*[0] | 3.1 × 10^−15 | 0.1 × 10^−18 | 6.3 × 10^−20 | 9.5 × 10^−22 | 0.2 × 10^−29
r_2*[0] | 0.01 × 10^−15 | 1.2 × 10^−18 | 6.5 × 10^−24 | 8.8 × 10^−27 | 3.5 × 10^−20
r_3*[0] | 0.1 × 10^−14 | 0.1 × 10^−18 | 0.1 × 10^−22 | 3.8 × 10^−25 | 1.5 × 10^−25
r_4*[0] | 0.4 × 10^−13 | 9.5 × 10^−18 | 9.0 × 10^−21 | 3.2 × 10^−20 | 1.2 × 10^−25
r_5*[0] | 7.7 × 10^−15 | 7.8 × 10^−18 | 3.9 × 10^−21 | 0.2 × 10^−31 | 0.5 × 10^−31
Residual errors are equal to 10^−30 using 64 D**.
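Assuming the maximum-error entries are the largest error over all approximated roots at the final iteration, and the residual mentioned in the table footers is max over i of |f(r_i^[ϑ])|, both quantities can be computed as in the short sketch below; the precise error measure used in the experiments is specified earlier in the paper.

```python
import numpy as np

def max_abs_error(approx, exact):
    """Largest absolute error over all root approximations."""
    return np.max(np.abs(np.asarray(approx) - np.asarray(exact)))

def max_residual(coeffs, approx):
    """Largest absolute residual |f(r_i)| over all approximations."""
    return np.max(np.abs(np.polyval(coeffs, np.asarray(approx, dtype=complex))))

# Illustrative use with z^3 - 1 and slightly perturbed cube roots of unity.
exact = np.exp(2j * np.pi * np.arange(3) / 3)
approx = exact + 1e-10
print(max_abs_error(approx, exact), max_residual([1, 0, 0, -1], approx))
```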
Table 6. Computation of all polynomial equation roots.
Methods | SFM σ1 | SFM σ2 | SFM σ3 | SFM σ4 | SFM σ
Error at iteration | ϑ = 09 | ϑ = 09 | ϑ = 07 | ϑ = 06 | ϑ = 06
CPU (s) | 0.0141 | 0.016 | 0.054 | 0.067 | 0.065
e_1[ϑ] | 2.1 × 10^−3 | 0.7 × 10^−3 | 0.3 × 10^−12 | 0.7 × 10^−32 | 0.3 × 10^−42
e_2[ϑ] | 5.2 × 10^−2 | 2.6 × 10^−4 | 9.1 × 10^−16 | 2.1 × 10^−26 | 2.8 × 10^−64
e_3[ϑ] | 2.1 × 10^−2 | 3.2 × 10^−3 | 3.2 × 10^−16 | 3.2 × 10^−36 | 3.5 × 10^−46
e_4[ϑ] | 3.1 × 10^−2 | 4.1 × 10^−6 | 4.1 × 10^−16 | 4.1 × 10^−26 | 4.1 × 10^−46
e_5[ϑ] | 2.1 × 10^−3 | 0.7 × 10^−3 | 0.3 × 10^−12 | 0.7 × 10^−32 | 0.3 × 10^−42
e_6[ϑ] | 2.0 × 10^−3 | 0.7 × 10^−3 | 0.3 × 10^−12 | 0.7 × 10^−32 | 0.3 × 10^−42
e_7[ϑ] | 0.2 × 10^−2 | 0.1 × 10^−4 | 9.1 × 10^−16 | 4.1 × 10^−26 | 2.7 × 10^−43
e_8[ϑ] | 5.1 × 10^−2 | 1.2 × 10^−5 | 1.1 × 10^−14 | 0.2 × 10^−31 | 3.0 × 10^−42
e_9[ϑ] | 0.1 × 10^−3 | 0.6 × 10^−6 | 0.1 × 10^−15 | 4.1 × 10^−26 | 6.1 × 10^−36
e_10[ϑ] | 5.2 × 10^−2 | 2.9 × 10^−4 | 9.1 × 10^−16 | 4.1 × 10^−26 | 2.7 × 10^−43
e_11[ϑ] | 0.1 × 10^−1 | 1.2 × 10^−5 | 1.1 × 10^−14 | 0.2 × 10^−31 | 3.0 × 10^−42
e_12[ϑ] | 0.2 × 10^−1 | 3.6 × 10^−2 | 9.8 × 10^−16 | 8.1 × 10^−26 | 2.6 × 10^−42
e_13[ϑ] | 2.8 × 10^−2 | 7.2 × 10^−3 | 3.2 × 10^−16 | 0.7 × 10^−36 | 3.3 × 10^−45
e_14[ϑ] | 1.1 × 10^−2 | 1.2 × 10^−5 | 4.1 × 10^−14 | 4.2 × 10^−31 | 3.0 × 10^−42
e_15[ϑ] | 0.1 × 10^−3 | 0.7 × 10^−3 | 7.3 × 10^−12 | 0.7 × 10^−32 | 1.3 × 10^−41
e_16[ϑ] | 0.2 × 10^−2 | 3.6 × 10^−4 | 9.8 × 10^−16 | 8.1 × 10^−26 | 2.6 × 10^−42
e_17[ϑ] | 2.8 × 10^−2 | 7.2 × 10^−3 | 3.2 × 10^−16 | 0.0 × 10^−36 | 3.3 × 10^−45
e_18[ϑ] | 8.1 × 10^−2 | 0.1 × 10^−6 | 4.1 × 10^−16 | 0.1 × 10^−26 | 4.1 × 10^−46
ρ_i[ϑ−1] | 2.1212 | 2.2312 | 2.0451 | 2.5112 | 3.14212
Table 7. Iteration numbers using random initial approximations for finding all polynomial roots.
f(r) = (43359567872/189) r^10 + (425993216/405) r^9 − (278592512/63) r^8 + (1024000/63) r^7 − (4096384/45) r^6 + 256 r^5 − (6400/3) r^4 + (16/3) r^3 − 80 r^2
Ini-V | Number of Iterations
r[0] | SFM σ1 | SFM σ2 | SFM σ3 | SFM σ4 | SFM σ
r_1*[0] | 19.0 | 16.0 | 14.0 | 10.0 | 10.0
r_2*[0] | 19.0 | 16.0 | 14.0 | 10.0 | 10.0
r_3*[0] | 19.0 | 16.0 | 14.0 | 10.0 | 10.0
r_4*[0] | 19.0 | 16.0 | 14.0 | 10.0 | 10.0
r_5*[0] | 19.0 | 16.0 | 14.0 | 10.0 | 10.0
Residual errors are equal to 10^−30 using 64 D**.
Table 8. CPU time using random initial approximations for finding all polynomial roots.
f(r) = (43359567872/189) r^10 + (425993216/405) r^9 − (278592512/63) r^8 + (1024000/63) r^7 − (4096384/45) r^6 + 256 r^5 − (6400/3) r^4 + (16/3) r^3 − 80 r^2
R-Initial | Computational CPU Time in Seconds
r[0] | SFM σ1 | SFM σ2 | SFM σ3 | SFM σ4 | SFM σ
r_1*[0] | 3.0114 | 1.0012 | 0.0015 | 0.0417 | 0.0031
r_2*[0] | 2.1105 | 1.0317 | 1.0080 | 0.0881 | 0.0141
r_3*[0] | 3.1254 | 1.0159 | 1.0137 | 0.0765 | 0.0039
r_4*[0] | 1.4201 | 1.0678 | 1.0061 | 0.0430 | 0.0067
r_5*[0] | 1.7465 | 1.0729 | 1.0070 | 0.09143 | 0.0073
Residual errors are equal to 10^−30 using 64 D**.
Table 9. Maximum error using random initial approximations for finding all polynomial roots.
f(r) = (43359567872/189) r^10 + (425993216/405) r^9 − (278592512/63) r^8 + (1024000/63) r^7 − (4096384/45) r^6 + 256 r^5 − (6400/3) r^4 + (16/3) r^3 − 80 r^2
R-Initial | Maximum Error
r[0] | SFM σ1 | SFM σ2 | SFM σ3 | SFM σ4 | SFM σ
r_1*[0] | 0.1 × 10^−16 | 1.1 × 10^−18 | 6.3 × 10^−20 | 4.5 × 10^−23 | 6.1 × 10^−25
r_2*[0] | 2.1 × 10^−16 | 6.2 × 10^−19 | 1.5 × 10^−24 | 7.8 × 10^−26 | 3.0 × 10^−26
r_3*[0] | 0.1 × 10^−15 | 6.1 × 10^−18 | 0.4 × 10^−24 | 7.8 × 10^−26 | 2.5 × 10^−27
r_4*[0] | 3.4 × 10^−14 | 6.5 × 10^−18 | 0.1 × 10^−20 | 3.2 × 10^−28 | 6.2 × 10^−25
r_5*[0] | 9.7 × 10^−17 | 1.8 × 10^−19 | 3.2 × 10^−21 | 9.2 × 10^−33 | 3.5 × 10^−32
Residual errors are equal to 10^−30 using 64 D**.
Table 10. Local computational order of convergence using random initial approximations.
f(r) = (43359567872/189) r^10 + (425993216/405) r^9 − (278592512/63) r^8 + (1024000/63) r^7 − (4096384/45) r^6 + 256 r^5 − (6400/3) r^4 + (16/3) r^3 − 80 r^2
Ini-V | Local Computational Order of Convergence
r[0] | SFM σ1 | SFM σ2 | SFM σ3 | SFM σ4 | SFM σ
r_1*[0] | 5.0119 | 6.7005 | 7.7354 | 7.9195 | 7.9654
r_2*[0] | 7.0128 | 6.4515 | 7.4236 | 7.4365 | 8.1210
r_3*[0] | 4.9917 | 6.6454 | 7.0148 | 7.9914 | 7.8456
r_4*[0] | 5.1708 | 4.9104 | 7.5154 | 7.7114 | 8.0101
r_5*[0] | 5.1127 | 5.8018 | 7.0074 | 7.0048 | 7.9018
Residual errors are equal to 10^−30 using 64 D**.
Table 11. Determination of all polynomial equation roots.
Methods | SFM σ1 | SFM σ2 | SFM σ3 | SFM σ4 | SFM σ
Error at iteration | ϑ = 09 | ϑ = 09 | ϑ = 08 | ϑ = 08 | ϑ = 05
CPU (s) | 0.0141 | 0.016 | 0.054 | 0.067 | 0.065
e_1[ϑ] | 0.2 × 10^−3 | 1.7 × 10^−4 | 1.3 × 10^−11 | 5.7 × 10^−33 | 0.3 × 10^−42
e_2[ϑ] | 7.2 × 10^−2 | 2.8 × 10^−5 | 9.9 × 10^−17 | 5.5 × 10^−27 | 2.8 × 10^−46
e_3[ϑ] | 2.5 × 10^−3 | 3.2 × 10^−3 | 3.2 × 10^−16 | 1.2 × 10^−36 | 3.0 × 10^−47
e_4[ϑ] | 3.1 × 10^−2 | 9.9 × 10^−7 | 4.8 × 10^−16 | 4.9 × 10^−23 | 4.1 × 10^−46
e_5[ϑ] | 7.7 × 10^−3 | 6.7 × 10^−6 | 6.5 × 10^−16 | 6.5 × 10^−26 | 6.5 × 10^−46
e_6[ϑ] | 2.5 × 10^−2 | 4.3 × 10^−7 | 4.8 × 10^−17 | 4.0 × 10^−27 | 4.0 × 10^−47
e_7[ϑ] | 1.6 × 10^−2 | 3.0 × 10^−6 | 3.0 × 10^−16 | 7.7 × 10^−36 | 3.0 × 10^−46
e_8[ϑ] | 7.4 × 10^−3 | 4.4 × 10^−7 | 2.9 × 10^−15 | 2.9 × 10^−40 | 2.4 × 10^−46
e_9[ϑ] | 3.0 × 10^−4 | 2.8 × 10^−6 | 2.0 × 10^−16 | 2.7 × 10^−35 | 1.1 × 10^−49
e_10[ϑ] | 3.5 × 10^−2 | 2.1 × 10^−7 | 4.5 × 10^−16 | 2.0 × 10^−39 | 2.7 × 10^−41
ρ_i[ϑ−1] | 2.1012 | 2.2452 | 2.0491 | 3.5112 | 3.18742
Table 12. Using random initial approximations for finding all polynomial roots simultaneously.
f(r) = 0.5 r^3 − 0.5 r^2 + r + 1
Ini-V | Number of Iterations
r[0] | SFM σ1 | SFM σ2 | SFM σ3 | SFM σ4 | SFM σ
r_1*[0] | 19.0 | 16.0 | 13.0 | 8.0 | 8.0
r_2*[0] | 19.0 | 16.0 | 13.0 | 8.0 | 8.0
r_3*[0] | 19.0 | 16.0 | 13.0 | 8.0 | 8.0
r_4*[0] | 19.0 | 16.0 | 13.0 | 8.0 | 8.0
r_5*[0] | 19.0 | 16.0 | 13.0 | 8.0 | 8.0
Residual errors are equal to 10^−30 using 64 D**.
Table 13. CPU time using random initial values for finding all polynomial roots.
f(r) = 0.5 r^3 − 0.5 r^2 + r + 1
Ini-V | Computational CPU Time in Seconds
r[0] | SFM σ1 | SFM σ2 | SFM σ3 | SFM σ4 | SFM σ
r_1*[0] | 0.0159 | 0.0491 | 1.4514 | 0.0319 | 0.0455
r_2*[0] | 2.0136 | 1.0464 | 1.0045 | 0.0694 | 0.0107
r_3*[0] | 3.1364 | 1.0139 | 1.0047 | 0.0646 | 0.0059
r_4*[0] | 1.5209 | 0.0874 | 1.0078 | 0.0456 | 0.0975
r_5*[0] | 1.7451 | 1.0701 | 1.0038 | 0.0874 | 0.0048
Maximum CPU time is equal to 2.1254 using 64 D**.
Table 14. Maximum error using random initial approximations for finding all polynomial roots.
f(r) = 0.5 r^3 − 0.5 r^2 + r + 1
R-Initial | Maximum Error
r[0] | SFM σ1 | SFM σ2 | SFM σ3 | SFM σ4 | SFM σ
r_1*[0] | 3.2 × 10^−19 | 9.1 × 10^−17 | 0.3 × 10^−21 | 9.5 × 10^−22 | 9.1 × 10^−23
r_2*[0] | 2.4 × 10^−16 | 1.2 × 10^−17 | 6.1 × 10^−25 | 8.1 × 10^−29 | 3.5 × 10^−27
r_3*[0] | 0.1 × 10^−14 | 0.5 × 10^−19 | 0.4 × 10^−23 | 0.8 × 10^−20 | 9.5 × 10^−28
r_4*[0] | 6.4 × 10^−15 | 6.5 × 10^−18 | 0.1 × 10^−20 | 3.2 × 10^−28 | 6.2 × 10^−25
r_5*[0] | 7.7 × 10^−15 | 7.8 × 10^−18 | 7.2 × 10^−23 | 6.6 × 10^−36 | 6.7 × 10^−31
Residual errors are equal to 10^−30 using 64 D**.
Table 15. Local computational order of convergence using random initial approximations.
f(r) = 0.5 r^3 − 0.5 r^2 + r + 1
Ini-V | Local Computational Order of Convergence
r[0] | SFM σ1 | SFM σ2 | SFM σ3 | SFM σ4 | SFM σ
r_1*[0] | 5.0429 | 6.4013 | 7.1234 | 7.9011 | 7.9012
r_2*[0] | 7.0128 | 6.4515 | 7.4416 | 8.1301 | 7.9210
r_3*[0] | 5.5927 | 6.6114 | 6.9141 | 7.0914 | 8.8151
r_4*[0] | 5.4501 | 5.9174 | 7.5124 | 7.7014 | 8.0140
r_5*[0] | 5.1121 | 6.5701 | 7.1104 | 7.8108 | 8.0038
Residual errors are equal to 10^−30 using 64 D**.
Table 16. Determination of all polynomial equation roots.
Methods | SFM σ1 | SFM σ2 | SFM σ3 | SFM σ4 | SFM σ
Error at iteration | ϑ = 09 | ϑ = 09 | ϑ = 08 | ϑ = 08 | ϑ = 05
CPU (s) | 0.0124 | 0.017 | 0.039 | 0.077 | 0.015
e_1[ϑ] | 9.9 × 10^−4 | 0.1 × 10^−4 | 5.5 × 10^−16 | 9.1 × 10^−35 | 0.3 × 10^−42
e_2[ϑ] | 5.2 × 10^−2 | 0.6 × 10^−5 | 9.1 × 10^−16 | 5.0 × 10^−29 | 2.8 × 10^−46
e_3[ϑ] | 7.3 × 10^−3 | 8.4 × 10^−2 | 6.6 × 10^−19 | 6.2 × 10^−39 | 3.5 × 10^−46
ρ_i[ϑ−1] | 1.1009 | 3.2546 | 2.0451 | 2.55412 | 6.78921
Table 17. Using random initial approximations for finding all polynomial roots simultaneously.
f(r) = 0.02590111180 r^4 − 0.1066166681 r^3 − 0.2099871708 r^2 + 0.7615941560 r
Ini-V | Number of Iterations
r[0] | SFM σ1 | SFM σ2 | SFM σ3 | SFM σ4 | SFM σ
r_1*[0] | 19.0 | 16.0 | 14.0 | 8.0 | 8.0
r_2*[0] | 19.0 | 16.0 | 14.0 | 8.0 | 8.0
r_3*[0] | 19.0 | 16.0 | 14.0 | 8.0 | 8.0
r_4*[0] | 19.0 | 16.0 | 14.0 | 8.0 | 8.0
r_5*[0] | 19.0 | 16.0 | 14.0 | 8.0 | 8.0
Residual errors are equal to 10^−30 using 64 D**.
Table 18. Maximum error using random initial approximations for finding all polynomial roots.
f(r) = 0.02590111180 r^4 − 0.1066166681 r^3 − 0.2099871708 r^2 + 0.7615941560 r
R-Initial | Maximum Error
r[0] | SFM σ1 | SFM σ2 | SFM σ3 | SFM σ4 | SFM σ
r_1*[0] | 3.1 × 10^−15 | 6.1 × 10^−19 | 6.9 × 10^−21 | 9.9 × 10^−23 | 0.2 × 10^−25
r_2*[0] | 2.6 × 10^−13 | 0.2 × 10^−17 | 6.5 × 10^−24 | 0.8 × 10^−29 | 5.5 × 10^−27
r_3*[0] | 0.1 × 10^−14 | 5.1 × 10^−18 | 0.4 × 10^−26 | 8.8 × 10^−20 | 2.5 × 10^−26
r_4*[0] | 3.4 × 10^−13 | 8.5 × 10^−19 | 2.1 × 10^−20 | 5.2 × 10^−29 | 6.2 × 10^−25
r_5*[0] | 7.7 × 10^−15 | 7.7 × 10^−19 | 3.2 × 10^−22 | 0.2 × 10^−37 | 9.5 × 10^−37
Residual errors are equal to 10^−30 using 64 D**.
Table 19. Local computational order of convergence using random initial approximations.
f(r) = 0.02590111180 r^4 − 0.1066166681 r^3 − 0.2099871708 r^2 + 0.7615941560 r
Ini-V | Local Computational Order of Convergence
r[0] | SFM σ1 | SFM σ2 | SFM σ3 | SFM σ4 | SFM σ
r_1*[0] | 5.1813 | 5.9243 | 7.7124 | 7.0515 | 7.7165
r_2*[0] | 7.0878 | 6.4985 | 7.4236 | 8.1364 | 7.5243
r_3*[0] | 4.9917 | 6.9404 | 7.0148 | 7.3911 | 7.8656
r_4*[0] | 5.8748 | 5.9874 | 7.5154 | 7.6212 | 8.5447
r_5*[0] | 6.5421 | 6.8748 | 7.7177 | 7.0148 | 8.4873
Residual errors are equal to 10^−30 using 64 D**.
Table 20. CPU time using random initial values for finding all polynomial roots.
f(r) = 0.02590111180 r^4 − 0.1066166681 r^3 − 0.2099871708 r^2 + 0.7615941560 r
Ini-V | Computational CPU Time in Seconds
r[0] | SFM σ1 | SFM σ2 | SFM σ3 | SFM σ4 | SFM σ
r_1*[0] | 3.0194 | 2.0514 | 1.0454 | 0.0403 | 0.0049
r_2*[0] | 2.0120 | 1.0314 | 1.0028 | 0.0644 | 0.0070
r_3*[0] | 2.1114 | 1.0141 | 2.0061 | 0.0749 | 0.0031
r_4*[0] | 1.3231 | 1.0804 | 1.0078 | 0.0409 | 0.0090
r_5*[0] | 1.7300 | 1.0761 | 1.0078 | 0.0741 | 0.0093
Maximum CPU time is equal to 2.1254 using 64 D**.
Table 21. Determination of all polynomial equation roots.
Methods | SFM σ1 | SFM σ2 | SFM σ3 | SFM σ4 | SFM σ
Error at iteration | ϑ = 09 | ϑ = 09 | ϑ = 08 | ϑ = 08 | ϑ = 05
CPU (s) | 0.0141 | 0.717 | 0.262 | 0.092 | 0.073
e_1[ϑ] | 9.1 × 10^−4 | 0.5 × 10^−4 | 0.1 × 10^−12 | 5.7 × 10^−39 | 9.3 × 10^−45
e_2[ϑ] | 1.0 × 10^−4 | 2.6 × 10^−4 | 9.1 × 10^−16 | 5.1 × 10^−25 | 0.3 × 10^−41
e_3[ϑ] | 6.2 × 10^−4 | 2.6 × 10^−4 | 9.1 × 10^−16 | 4.5 × 10^−20 | 8.8 × 10^−48
e_4[ϑ] | 2.1 × 10^−2 | 0.2 × 10^−2 | 9.9 × 10^−16 | 7.2 × 10^−37 | 0.5 × 10^−42
ρ_i[ϑ−1] | 2.1864 | 2.5002 | 4.0481 | 5.5872 | 7.19212
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
