Article

On a Stable Multiplicative Calculus-Based Hybrid Parallel Scheme for Nonlinear Equations

1 Faculty of Engineering, Free University of Bozen-Bolzano (BZ), 39100 Bolzano, Italy
2 Department of Mathematics and Statistics, Riphah International University I-14, Islamabad 44000, Pakistan
Mathematics 2024, 12(22), 3501; https://doi.org/10.3390/math12223501
Submission received: 20 September 2024 / Revised: 21 October 2024 / Accepted: 7 November 2024 / Published: 9 November 2024

Abstract:
Fractional-order nonlinear equation-solving methods are crucial in engineering, where complex system modeling requires great precision and accuracy. Engineers may design more reliable mechanisms, enhance performance, and develop more accurate predictions regarding outcomes across a range of applications where these problems are effectively addressed. This research introduces a novel hybrid multiplicative calculus-based parallel method for solving complex nonlinear models in engineering. To speed up the method’s rate of convergence, we utilize a second-order multiplicative root-finding approach as a corrector in the parallel framework. Using rigorous theoretical analysis, we illustrate how the hybrid parallel technique based on multiplicative calculus achieves a remarkable convergence order of 12, indicating its effectiveness and efficiency in solving complex nonlinear equations. The intrinsic stability and consistency of the approach—when applied to nonlinear situations—are clearly indicated by the symmetry seen in the dynamical planes for various parameter values. The method’s symmetrical behavior indicates that it produces accurate findings under a range of scenarios. Using a dynamical system procedure, the ideal parameter values are systematically analyzed in order to further improve the method’s performance. Implementing the aforementioned parameter values using the parallel approach yields very reliable and consistent outcomes. The method’s effectiveness, reliability, and consistency are evaluated through the analysis of numerous nonlinear engineering problems. The analysis provides a detailed comparison with current techniques, emphasizing the benefits and potential improvements of the novel approach.

1. Introduction

Nonlinear equations have a long history in science and engineering, dating back to the 18th and 19th centuries, when they were used to research celestial mechanics and fluid dynamics [1,2,3]. These formulas explain complex phenomena such as chaos [4], bifurcations [5], and solitons in systems with non-linear outputs [6]. In areas such as quantum physics, thermodynamics, and material science, nonlinear equations, e.g.,
$$f(x) = 0,$$
are necessary for describing real-world phenomena.
Many physical processes with memory effects, anomalous diffusion [7], or hereditary features can be better represented using fractional differential equations [8], which are fundamentally nonlinear. When traditional integer-order models fail, fractional-order models enhance the flexibility and precision of system modeling. Both nonlinear equations and fractional differential equations are important in describing the complexities of dynamic systems across a range of scientific and engineering challenges, often leading to the following differential equations [9,10]:
$$\frac{d^{n} f(x)}{dx^{n}} + \frac{d^{\,n-\frac{1}{2}} f(x)}{dx^{\,n-\frac{1}{2}}} + f(x)^{n-1} = x^{n} + 5, \qquad \alpha_{01} \le x \le \alpha_{1},$$
$$f(\alpha_{01}) = \frac{d^{\,n-3} f(\alpha_{01})}{dx^{\,n-3}} = \frac{d^{\,n-2} f(\alpha_{01})}{dx^{\,n-2}} = \alpha_{01}, \qquad \frac{d^{\,n-1} f(\alpha_{01})}{dx^{\,n-1}} = \alpha_{02}.$$
In Equation (2), $\alpha_{01}$ represents the starting initial conditions and $x \in [\alpha_{01}, \alpha_{1}]$. Exact solutions to fractional-order equations are highly desirable because of their precision and ability to avoid approximations. However, finding precise solutions is frequently impossible due to the intrinsic complexity of fractional-order systems, which are often characterized by non-local features and memory effects. Analytical techniques such as integral transformations, series expansions, and special functions are needed to move beyond this obstacle. By using these analytical or semi-analytical iterative techniques, closed or partially closed solutions can be constructed, which provide important details about the underlying processes and behaviors of complex systems. Even in situations where exact solutions are not attainable, analysts can gain a deeper understanding of fractional systems by utilizing these methodologies to capture both the qualitative and quantitative aspects of the system. Furthermore, the results of such analyses serve as benchmarks for evaluating and improving numerical techniques: they contribute to the reliability and precision of numerical methods by offering standards against which numerical results may be judged, particularly in cases where exact solutions are difficult or impossible to achieve.
Multiplicative calculus [11], developed in the twentieth century, builds on classical calculus by focusing on growth rates in multiplicative terms rather than differences. It gained popularity in the 1970s with contributions from researchers such as Grossman and Katz, who formalized its fundamental notions by establishing a new type of derivative and integral, shifting the roles of subtraction and addition to division and multiplication [12]. The theoretical foundation of multiplicative calculus was established by Bashirov et al. [13,14], who also used it to develop multiplicative differential equations [15] and their engineering applications. It works especially well to explain scaling phenomena [16], geometric progressions [17], and exponential growth—all of which are less intuitive concepts in traditional calculus. Its fundamental advantage is that it provides a natural framework for many evolutionary processes, such as population growth, financial returns, and fractal systems. To represent some real-world phenomena more accurately and simply—where ratios and proportionalities are more significant than differences—this calculus makes use of multiplicative derivatives and integrals. In recent years, multiplicative calculus has been utilized in various fields; for example, Ozbay [18] used multiplicative calculus in the backpropagation processes of artificial neural networks (ANNs), Karthikeyan et al. [19] used it to explore the properties of a class of analytic functions, Othman et al. [20] developed a multiplicative calculus-based digital image interpolation technique, and Eyilmaz et al. [21] utilized it to investigate inverse nodal problems. It has also been used in many other works; see [22,23,24] and the references therein. Furthermore, Calogero et al. [25], Sana et al. [26], Mateen et al. [27], Bilgehan et al. [28], and Boruah et al. 
[29] used multiplicative calculus in their numerical analyses to develop numerical schemes for solving nonlinear engineering problems.
Motivated by the aforementioned work, the primary goal of this research is to develop a family of multiplicative calculus-based simple root-finding schemes and then extend these to parallel numerical schemes for approximating all solutions to (2) simultaneously, with global convergence behavior. This approach uses complex dynamical system methodologies to determine optimal parameter values, which are then utilized in parallel schemes to accelerate the rate of convergence.
Some fundamental multiplicative calculus concepts are reviewed, which were used in the design of our numerical techniques to find all solutions to (2) simultaneously.
Definition 1.
A function $\varsigma : D \subseteq \mathbb{R} \to \mathbb{R}$ is said to be multiplicative differentiable at $x$ (or on $D$) if $\varsigma > 0$, $\varsigma$ is differentiable at $x$ (or on $D$), and the following limits exist:
$$\varsigma^{[\ast]}(x) = \frac{d^{[\ast]}\varsigma}{dx} = \lim_{h \to 0}\left(\frac{\varsigma(x+h)}{\varsigma(x)}\right)^{1/h},$$
$$\varsigma^{[\ast]}(x) = \lim_{h \to 0}\left(\frac{\varsigma(x+h)}{\varsigma(x)}\right)^{1/h} = e^{\frac{\varsigma'(x)}{\varsigma(x)}},$$
$$\varsigma^{[\ast]}(x) = e^{(\ln \varsigma)'(x)},$$
where $(\ln \varsigma)'(x) = \left(\ln \varsigma(x)\right)'$. Similarly, the second multiplicative derivative is defined as follows [30]:
$$\varsigma^{[\ast\ast]}(x) = e^{(\ln \varsigma)''(x)}.$$
The higher-order multiplicative derivative is defined similarly, as follows:
$$\varsigma^{[\ast\ast\ast]}(x) = e^{(\ln \varsigma)'''(x)},$$
and, in a more general form, as follows:
$$\varsigma^{[\ast]^{n}}(x) = e^{(\ln \varsigma)^{(n)}(x)}, \qquad n = 0, 1, \ldots,$$
where, for $n = 0$, no multiplicative derivative is taken and the expression reduces to the original function $\varsigma(x)$.
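As a quick numerical illustration of Definition 1, the limit definition and the closed form $e^{(\ln\varsigma)'(x)}$ can be checked against each other. The function names and the test function $\varsigma(x) = e^{x^2}$ below are illustrative choices, not taken from the source:

```python
import math

def mult_derivative(f, x, h=1e-6):
    # Limit definition: (f(x+h)/f(x))**(1/h) with a small h (requires f > 0).
    return (f(x + h) / f(x)) ** (1.0 / h)

def mult_derivative_exact(dln_f, x):
    # Equivalent closed form: exp((ln f)'(x)).
    return math.exp(dln_f(x))

# Example: f(x) = exp(x**2), so (ln f)'(x) = 2x and the multiplicative
# derivative at x = 1 equals exp(2) ~ 7.389.
f = lambda x: math.exp(x ** 2)
print(mult_derivative(f, 1.0))                      # ~ 7.389
print(mult_derivative_exact(lambda t: 2 * t, 1.0))  # ~ 7.389
```

Both values agree up to the finite-difference error in `h`, confirming that the two characterizations of $\varsigma^{[\ast]}$ coincide.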
Definition 2.
Suppose $\varsigma : D \subseteq \mathbb{R} \to \mathbb{R}^{+}$ is a positive nonlinear function; then, the multiplicative nonlinear equation [31] is defined as follows:
$$\varsigma(x) = 1.$$
In multiplicative calculus, the multiplicative function is quite similar to the ordinary function in classical calculus. In classical calculus, changes are assessed additively, whereas in multiplicative calculus, changes are measured relatively or proportionately. Some important results of multiplicative differentiation are defined for multiplicative differentiable functions ς and h as follows:
$$(C)^{[\ast]} = 1,$$
$$(C\varsigma)^{[\ast]}(x) = \varsigma^{[\ast]}(x),$$
$$(\varsigma h)^{[\ast]}(x) = \varsigma^{[\ast]}(x)\, h^{[\ast]}(x),$$
$$\left(\frac{\varsigma}{h}\right)^{[\ast]}(x) = \frac{\varsigma^{[\ast]}(x)}{h^{[\ast]}(x)},$$
$$\left(\varsigma^{\psi}\right)^{[\ast]}(x) = \varsigma^{[\ast]}(x)^{\psi(x)}\,\varsigma(x)^{\psi'(x)},$$
$$(\varsigma \circ \psi)^{[\ast]}(x) = \varsigma^{[\ast]}(\psi(x))^{\psi'(x)}.$$
The multiplicative Taylor theorem [32], which is used in the construction of the new numerical scheme for solving nonlinear problem (2), is stated as follows:
Theorem 1.
Let $\varsigma : D \subseteq \mathbb{R} \to \mathbb{R}^{+}$ be $(n+1)$-times multiplicative differentiable on an open interval $D$. Then, for any $x, x + a \in D$, there exists a number $\eta \in (0, 1)$ such that we have the following:
$$\varsigma(x + a) = \prod_{j=0}^{n}\varsigma^{[\ast]^{j}}(x)^{\frac{a^{j}}{j!}} \cdot \varsigma^{[\ast]^{n+1}}(x + \eta a)^{\frac{a^{n+1}}{(n+1)!}}.$$
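For a concrete check of Theorem 1, take $\varsigma(x) = e^{x^2}$: then $(\ln\varsigma)' = 2x$ and $(\ln\varsigma)'' = 2$, so the expansion terminates at second order and $\varsigma(x+a) = \varsigma(x)\,\varsigma^{[\ast]}(x)^{a}\,\varsigma^{[\ast\ast]}(x)^{a^{2}/2!}$ holds exactly. A minimal numerical sketch, with illustrative names:

```python
import math

x, a = 1.3, 0.4
f = lambda t: math.exp(t ** 2)   # varsigma(t) = exp(t^2)
d1 = math.exp(2 * x)             # first multiplicative derivative e^{(ln f)'(x)}
d2 = math.exp(2.0)               # second multiplicative derivative e^{(ln f)''(x)}

# Multiplicative Taylor expansion up to second order.
taylor = f(x) * d1 ** a * d2 ** (a ** 2 / math.factorial(2))
print(abs(taylor - f(x + a)))  # ~ 0 (the expansion is exact for this f)
```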
The layout of this paper is as follows: In Section 2, we examine the construction, convergence characteristics, and complex dynamical analysis of the simple and parallel multiplicative schemes intended to solve Equation (1). The mathematical and computational aspects of the suggested method are extensively clarified in this section. In Section 3, we develop a new multiplicative parallel technique for solving (1). Following that, Section 4 includes an in-depth evaluation of the method's stability and efficacy using a variety of numerical examples, enhanced by comparisons with other approaches to emphasize its performance advantages. Finally, Section 5 summarizes the findings and discusses prospective areas for future research.

2. Development and Dynamic Analysis of Numerical Methods

The Newton–Raphson iterative method [33,34] is a classical iterative methodology that refines an initial estimate to solve nonlinear problems through a succession of linear approximations. Using derivatives, the method approximates the behavior of a nonlinear function close to a specified point, producing a series of values that converge to the true solution. Since its conceptual creation in the late 17th century, this method—which is renowned for its usually rapid convergence and is frequently quadratic—has served as a fundamental tool in numerical analysis. It is one of the methods used for solving nonlinear equations that is most frequently used in a variety of scientific and technical domains due to its effectiveness and reliability. Using multiplicative analysis, the multiplicative Newton theorem [35] is written as follows:
$$\varsigma(x) = \varsigma\left(x^{(l)}\right)\int_{x^{(l)}}^{x}\varsigma^{[\ast]}(s)^{ds} = \varsigma\left(x^{(l)}\right) e^{\int_{x^{(l)}}^{x}(\ln \varsigma)'(s)\,ds}.$$
Applying the Newton–Cotes quadrature [36] of degree 0 to (10), we have the following:
$$\int_{x^{(l)}}^{x}\varsigma^{[\ast]}(s)^{ds} = e^{\int_{x^{(l)}}^{x}(\ln \varsigma)'(s)\,ds} \approx e^{\left(x - x^{(l)}\right)(\ln \varsigma)'\left(x^{(l)}\right)} = \varsigma^{[\ast]}\left(x^{(l)}\right)^{x - x^{(l)}}.$$
Using $\varsigma(x) = 1$, the multiplicative classical Newton–Raphson technique, an extension of the classical Newton iterative algorithm, is produced as follows:
$$x^{(l+1)} = x^{(l)} - \frac{\ln \varsigma\left(x^{(l)}\right)}{\ln \varsigma^{[\ast]}\left(x^{(l)}\right)}.$$
The multiplicative Newton technique retains the same convergence order as the classical method.
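Since $\ln \varsigma^{[\ast]}(x) = (\ln \varsigma)'(x)$, the multiplicative Newton iteration above only needs $\ln \varsigma$ and its classical derivative. A minimal sketch; the function names and the test problem $\varsigma(x) = e^{x^2 - 4}$, whose multiplicative root is $x = 2$, are illustrative assumptions:

```python
def mult_newton(ln_f, dln_f, x0, tol=1e-12, max_iter=50):
    """Multiplicative Newton-Raphson: x_{l+1} = x_l - ln f(x_l)/(ln f)'(x_l),
    solving f(x) = 1, i.e. ln f(x) = 0."""
    x = x0
    for _ in range(max_iter):
        step = ln_f(x) / dln_f(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# f(x) = exp(x**2 - 4) satisfies f(x) = 1 at x = +/-2.
root = mult_newton(lambda x: x ** 2 - 4, lambda x: 2 * x, x0=3.0)
print(root)  # ~ 2.0
```

The iteration inherits the quadratic convergence of the classical Newton method, as the text notes.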
In multiplicative calculus, numerical schemes make it easier to treat scale-invariant processes, whereas classical calculus sometimes struggles with complicated transformations. Furthermore, multiplicative techniques avoid concerns such as sum divergence, which improves performance in circumstances with large-scale variations. They also improve the precision of error estimation in systems based on ratios rather than absolute values. Multiplicative relationships are especially important in fields such as biology, economics, fractal analysis, and differential equations. Furthermore, Singh et al. [37] proposed the multiplicative version of the Schröder method as follows:
$$u^{(l)} = x^{(l)} - \frac{\ln \varsigma\left(x^{(l)}\right)\ln \varsigma^{[\ast]}\left(x^{(l)}\right)}{\left(\ln \varsigma^{[\ast]}\left(x^{(l)}\right)\right)^{2} - \ln \varsigma^{[\ast\ast]}\left(x^{(l)}\right)\ln \varsigma\left(x^{(l)}\right)}.$$
This method has a convergence order of 2. Similarly, Waseem et al. [38] presented the following iterative approach with quadratic convergence, using multiplicative calculus:
$$u^{(l)} = x^{(l)} - \frac{\ln \varsigma\left(x^{(l)}\right)}{\ln \varsigma^{[\ast]}\left(x^{(l)}\right) - \alpha \ln \varsigma\left(x^{(l)}\right)}.$$
We suggest the following multiplicative technique to obtain simple roots of (1):
$$v^{(l)} = x^{(l)} - \frac{\ln \varsigma\left(x^{(l)}\right)}{\ln \varsigma^{[\ast]}\left(x^{(l)}\right)}\left[\frac{1 + \alpha \ln \varsigma\left(x^{(l)}\right)}{1 + \alpha \ln \varsigma\left(x^{(l)}\right) + \beta \left(\ln \varsigma\left(x^{(l)}\right)\right)^{2}}\right].$$
We abbreviate this method as CM$^{[\ast]}$.
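One CM$^{[\ast]}$ step can be sketched as follows, under my reading of the garbled source formula: the Newton correction is scaled by the damping factor $(1+\alpha L)/(1+\alpha L+\beta L^{2})$ with $L = \ln\varsigma(x)$. Helper names and the test problem are illustrative assumptions:

```python
def cm_step(ln_f, dln_f, x, alpha=0.1, beta=0.1):
    # One CM[*] iteration: multiplicative Newton step scaled by the
    # parameter-dependent factor; near the root L -> 0 and the factor -> 1,
    # so the scheme behaves like multiplicative Newton locally.
    L = ln_f(x)
    factor = (1 + alpha * L) / (1 + alpha * L + beta * L * L)
    return x - (L / dln_f(x)) * factor

# Solve exp(x**2 - 4) = 1, i.e. ln f(x) = x**2 - 4 = 0 (root x = 2).
x = 3.0
for _ in range(8):
    x = cm_step(lambda t: t ** 2 - 4, lambda t: 2 * t, x)
print(x)  # ~ 2.0
```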

2.1. Theoretical Convergence of the Multiplicative Calculus-Based Method

The theoretical convergence of the multiplicative calculus-based simple root-finding approach is essential since it demonstrates the technique's dependability and effectiveness in accurately identifying roots. This convergence analysis assures that the approach moves gradually toward solutions, providing vital insights into its stability and performance over a wide range of applications. Verifying the method's applicability in real-world scenarios, especially those where traditional calculus-based approaches might not be as successful, requires an understanding of this convergence. In order to ascertain the order of convergence of the iterative scheme provided by (14), we establish the subsequent theorem:
Theorem 2.
Let
$$\varsigma : \mathbb{R} \to \mathbb{R}^{+}$$
be a continuous function whose multiplicative derivatives $\ln \varsigma^{[m]}\left(x^{(l)}\right)$ of order $m$ exist in a neighborhood containing the exact root $\xi$ of $\varsigma(x) = 1$. In addition, if the initial estimate $x^{[0]}$ is sufficiently close to $\xi$, the scheme, i.e.,
$$v^{(l)} = x^{(l)} - \frac{\ln \varsigma\left(x^{(l)}\right)}{\ln \varsigma^{[\ast]}\left(x^{(l)}\right)}\left[\frac{1 + \alpha \ln \varsigma\left(x^{(l)}\right)}{1 + \alpha \ln \varsigma\left(x^{(l)}\right) + \beta \left(\ln \varsigma\left(x^{(l)}\right)\right)^{2}}\right],$$
has a convergence order of at least 2, utilizing the error formula described via the following:
$$\epsilon_{v}^{[l]} = \left(\alpha + t_{2}\right)\epsilon_{l}^{2} + O\left(\epsilon_{l}^{3}\right).$$
Proof. 
Let $\xi$ be a simple exact solution of $\varsigma(x) = 1$ and assume $x^{(l)} = \xi + \epsilon_{l}$. Using the multiplicative Taylor formula of $\varsigma(x)$ around $x = \xi$, we obtain the following:
$$\varsigma\left(x^{(l)}\right) = \varsigma\left(\xi + \epsilon_{l}\right) = \varsigma(\xi)\,\varsigma^{[\ast]}(\xi)^{\epsilon_{l}}\,\varsigma^{[\ast\ast]}(\xi)^{\frac{\epsilon_{l}^{2}}{2!}}\,\varsigma^{[\ast\ast\ast]}(\xi)^{\frac{\epsilon_{l}^{3}}{3!}}\, e^{O\left(\epsilon_{l}^{4}\right)}.$$
Taking natural logarithms on both sides of (18), and noting that $\ln \varsigma(\xi) = 0$ at the root, we obtain the following:
$$\ln \varsigma\left(x^{(l)}\right) = \ln \varsigma(\xi) + \ln \varsigma^{[\ast]}(\xi)\,\epsilon_{l} + \ln \varsigma^{[\ast\ast]}(\xi)\,\frac{\epsilon_{l}^{2}}{2!} + \ln \varsigma^{[\ast\ast\ast]}(\xi)\,\frac{\epsilon_{l}^{3}}{3!} + O\left(\epsilon_{l}^{4}\right),$$
$$\ln \varsigma\left(x^{(l)}\right) = \ln \varsigma^{[\ast]}(\xi)\left[\epsilon_{l} + \frac{1}{2!}\,\frac{\ln \varsigma^{[\ast\ast]}(\xi)}{\ln \varsigma^{[\ast]}(\xi)}\,\epsilon_{l}^{2} + \frac{1}{3!}\,\frac{\ln \varsigma^{[\ast\ast\ast]}(\xi)}{\ln \varsigma^{[\ast]}(\xi)}\,\epsilon_{l}^{3} + O\left(\epsilon_{l}^{4}\right)\right],$$
$$\ln \varsigma\left(x^{(l)}\right) = \ln \varsigma^{[\ast]}(\xi)\left[\epsilon_{l} + t_{2}\,\epsilon_{l}^{2} + t_{3}\,\epsilon_{l}^{3} + O\left(\epsilon_{l}^{4}\right)\right],$$
where $t_{j} = \frac{1}{j!}\,\frac{\ln \varsigma^{[j]}(\xi)}{\ln \varsigma^{[\ast]}(\xi)}$, $j \ge 2$. Taking the derivative of (21), we have the following:
$$\ln \varsigma^{[\ast]}\left(x^{(l)}\right) = \ln \varsigma^{[\ast]}(\xi) + \ln \varsigma^{[\ast\ast]}(\xi)\,\epsilon_{l} + \frac{1}{2}\ln \varsigma^{[\ast\ast\ast]}(\xi)\,\epsilon_{l}^{2} + O\left(\epsilon_{l}^{3}\right),$$
$$\ln \varsigma^{[\ast]}\left(x^{(l)}\right) = \ln \varsigma^{[\ast]}(\xi)\left[1 + \frac{2}{2!}\,\frac{\ln \varsigma^{[\ast\ast]}(\xi)}{\ln \varsigma^{[\ast]}(\xi)}\,\epsilon_{l} + \frac{3}{3!}\,\frac{\ln \varsigma^{[\ast\ast\ast]}(\xi)}{\ln \varsigma^{[\ast]}(\xi)}\,\epsilon_{l}^{2} + O\left(\epsilon_{l}^{3}\right)\right],$$
$$\ln \varsigma^{[\ast]}\left(x^{(l)}\right) = \ln \varsigma^{[\ast]}(\xi)\left[1 + 2t_{2}\,\epsilon_{l} + 3t_{3}\,\epsilon_{l}^{2} + O\left(\epsilon_{l}^{3}\right)\right].$$
Dividing ln ς ( x ( l ) ) by ln ς [ ] ( x ( l ) ) , we have the following:
$$\frac{\ln \varsigma\left(x^{(l)}\right)}{\ln \varsigma^{[\ast]}\left(x^{(l)}\right)} = \epsilon_{l} - t_{2}\,\epsilon_{l}^{2} + 2\left(t_{2}^{2} - t_{3}\right)\epsilon_{l}^{3} + O\left(\epsilon_{l}^{4}\right).$$
Thus, normalizing $\ln \varsigma^{[\ast]}(\xi) = 1$ for brevity, we have the following:
$$1 + \alpha \ln \varsigma\left(x^{(l)}\right) = 1 + \alpha\,\epsilon_{l} + \alpha t_{2}\,\epsilon_{l}^{2} + \alpha t_{3}\,\epsilon_{l}^{3} + \cdots$$
Taking the inverse of (26), we have the following:
$$\frac{1}{1 + \alpha \ln \varsigma\left(x^{(l)}\right)} = 1 - \alpha\,\epsilon_{l} + \left(\alpha^{2} - \alpha t_{2}\right)\epsilon_{l}^{2} + \cdots$$
$$\frac{1 + \alpha \ln \varsigma\left(x^{(l)}\right) + \beta \left(\ln \varsigma\left(x^{(l)}\right)\right)^{2}}{1 + \alpha \ln \varsigma\left(x^{(l)}\right)} = 1 + \alpha\,\epsilon_{l} + \left(\alpha\beta + \alpha t_{2} - \alpha^{2}\right)\epsilon_{l}^{2} + \left(\alpha^{3} - 2\alpha^{2}t_{2} + 2\alpha\beta t_{2} + \alpha t_{3}\right)\epsilon_{l}^{3} + \cdots$$
Using (28) in inverse form, we have the following:
$$\frac{1 + \alpha \ln \varsigma\left(x^{(l)}\right)}{1 + \alpha \ln \varsigma\left(x^{(l)}\right) + \beta \left(\ln \varsigma\left(x^{(l)}\right)\right)^{2}} = 1 - \alpha\,\epsilon_{l} + \left(2\alpha^{2} - \alpha\beta - \alpha t_{2}\right)\epsilon_{l}^{2} + \cdots$$
Multiplying (25) and (29) yields the following:
$$\frac{\ln \varsigma\left(x^{(l)}\right)}{\ln \varsigma^{[\ast]}\left(x^{(l)}\right)}\left[\frac{1 + \alpha \ln \varsigma\left(x^{(l)}\right)}{1 + \alpha \ln \varsigma\left(x^{(l)}\right) + \beta \left(\ln \varsigma\left(x^{(l)}\right)\right)^{2}}\right] = \epsilon_{l} - \left(\alpha + t_{2}\right)\epsilon_{l}^{2} + \left(2\alpha^{2} - \alpha\beta + 2t_{2}^{2} - \alpha t_{2}\right)\epsilon_{l}^{3} + \cdots$$
Using (31) in (14), we obtain the following:
$$v^{(l)} - \xi = x^{(l)} - \xi - \frac{\ln \varsigma\left(x^{(l)}\right)}{\ln \varsigma^{[\ast]}\left(x^{(l)}\right)}\left[\frac{1 + \alpha \ln \varsigma\left(x^{(l)}\right)}{1 + \alpha \ln \varsigma\left(x^{(l)}\right) + \beta \left(\ln \varsigma\left(x^{(l)}\right)\right)^{2}}\right],$$
$$\epsilon_{v}^{[l]} = \left(\alpha + t_{2}\right)\epsilon_{l}^{2} + O\left(\epsilon_{l}^{3}\right).$$
Thus, the theorem is proven.    □
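The order stated in Theorem 2 can also be checked empirically: with $\alpha = \beta = 0$ the scheme reduces to multiplicative Newton, and the ratio $p \approx \ln(e_{k+1}/e_{k})/\ln(e_{k}/e_{k-1})$ of successive errors should approach 2. The test problem below is an illustrative choice, not from the source:

```python
import math

# ln f(x) = x**2 - 4 has the multiplicative root x = 2 (f(x) = 1 there).
ln_f = lambda x: x ** 2 - 4
dln_f = lambda x: 2 * x

x, root, errors = 2.5, 2.0, []
for _ in range(5):
    errors.append(abs(x - root))
    x -= ln_f(x) / dln_f(x)  # the scheme with alpha = beta = 0

# Empirical convergence order from three successive errors.
p = math.log(errors[3] / errors[2]) / math.log(errors[2] / errors[1])
print(round(p, 1))  # ~ 2.0, i.e. quadratic convergence
```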

2.2. Dynamical Analysis of the Multiplicative Scheme CM$^{[\ast]}$

An in-depth analysis of the dynamic behavior inherent in solution approaches is necessary to ensure the reliability and resilience of nonlinear equation-solving methods [39,40]. Dynamic analysis seeks to determine how well a multiplicative calculus-based method can converge to the exact solution given an initial approximation, even in the presence of slight errors or instabilities during computation. In general, root-finding algorithms are effective when the initial guess is somewhat close to the real root (this is a phenomenon known as local convergence). The function must be performed effectively for these strategies to be the most effective, and their starting point must be carefully determined. However, the accuracy of the initial estimate and the characteristics of the function being solved have a significant impact on the stability and effectiveness of such algorithms. In circumstances where the function exhibits irregular or complex behavior, or when the initial starting values are very far from the exact solutions, classical parallel iterative techniques may fail to converge or produce a solution that is far from the exact solutions. As a consequence, it is essential to consider these characteristics when choosing or developing root-finding approaches in order to ensure reliable and exact results across a wide range of nonlinear problems.
Complex dynamical systems are used to assess the stability of a root-finding algorithm, which is crucial for its performance. These systems help examine the sensitivity of the solution to small changes in input values. Furthermore, the convergence criterion plays an important role in determining how quickly and consistently the algorithm approaches the true root. Stability often requires a careful balance between the intrinsic convergence features of the approach and the careful selection of important parameters, including the initial guess, the particular attributes of the function being solved, and the conditions under which the algorithm stops. Enhancing these components can improve the method’s resilience and dependability and guarantee that the result will remain consistent even in a fluctuating computing environment. This method is especially crucial in applications where accuracy and efficiency are critical, as minor errors in inputs or estimates can result in considerable differences in the final result.
The rational multiplicative calculus-based iterative map for $f(x) = (x - a)(x - b)$ is obtained as follows:
$$R(x) = \frac{\ln A_{1}^{[\ast]}\left(\ln B_{1}^{[\ast]}\right)^{3}\beta\alpha^{2}x + \ln A_{1}^{[\ast]}\left(\ln B_{1}^{[\ast]}\right)^{2}\beta\alpha x + 2\ln A_{1}^{[\ast]}\ln B_{1}^{[\ast]}\alpha x - \alpha\left(\ln B_{1}^{[\ast]}\right)^{2} + x\ln A_{1}^{[\ast]} - \ln B_{1}^{[\ast]}}{\ln A_{1}^{[\ast]}\left(\ln B_{1}^{[\ast]}\right)^{3}\beta\alpha^{2} + \beta\alpha\left(\ln B_{1}^{[\ast]}\right)^{2} + 2\alpha\ln B_{1}^{[\ast]} + 1},$$
where $A_{1}^{[\ast]} = e^{\frac{2x}{x^{2}-1}}$, $B_{1}^{[\ast]} = x^{2} - 1$, and $a, b \in \mathbb{C}$. Möbius transformations (MSTs) are crucial for conformal mappings in complex analysis and geometry, as they transform the complex plane while preserving angles and arcs. Many disciplines, including physics, engineering, and computer graphics, use their adaptability to simplify and understand intricate patterns and behaviors. The iterative map (34) depends on $a$, $b$, and $x$. The MST [41], $M(x) = \frac{x - a}{x - b}$, is used to demonstrate that the multiplicative calculus-based rational map $R(x)$ is conjugate with the following operator:
$$O(x) = \frac{A^{[\ast]}\beta x + 4\ln A_{2}^{[\ast]}\ln B_{2}^{[\ast]}\alpha x + 4\left(\ln 2\right)^{2}\alpha x + 4\ln B_{2}^{[\ast]}\ln 2\,\alpha x + \left(\ln B_{2}^{[\ast]}\right)^{2}\alpha x - 4\left(\ln 2\right)^{2}\alpha - 4\ln 2\ln B_{2}^{[\ast]}\alpha - \left(\ln B_{2}^{[\ast]}\right)^{2}\alpha - 2\ln A_{2}^{[\ast]}x + 2\ln A_{2}^{[\ast]}x + \ln B_{2}^{[\ast]}x - 2\ln 2 - \ln B_{2}^{[\ast]}}{A^{[\ast]}\beta x + 4\left(\ln 2\right)^{2}\alpha x + 4\ln 2\ln B_{2}^{[\ast]}\alpha x + \left(\ln B_{2}^{[\ast]}\right)^{2}\alpha x + 8\ln A_{2}^{[\ast]}\ln 2\,\alpha + 4\ln A_{2}^{[\ast]}\ln 2\,\alpha - \left(\ln 2\right)^{2}\alpha - 4\ln 2\ln B_{2}^{[\ast]}\alpha - \left(\ln B_{2}^{[\ast]}\right)^{2}\alpha + 2\ln 2\,x + \ln B_{2}^{[\ast]}x + 2\ln A_{2}^{[\ast]} - 2\ln A_{2}^{[\ast]}\ln B_{2}^{[\ast]}},$$
where $A^{[\ast]} = 16\ln A_{2}^{[\ast]}\left(\ln 2\right)^{3} + 24\ln A_{2}^{[\ast]}\left(\ln 2\right)^{2}\ln B_{2}^{[\ast]}\alpha^{2} + 12\ln A_{2}^{[\ast]}\ln 2\left(\ln B_{2}^{[\ast]}\right)^{2}\alpha^{2} + 2\ln A_{2}^{[\ast]}\left(\ln B_{2}^{[\ast]}\right)^{3}\alpha^{2} + 8\ln A_{2}^{[\ast]}\left(\ln 2\right)^{2}\alpha + 8\ln A_{2}^{[\ast]}\ln 2\ln B_{2}^{[\ast]}\alpha + 2\ln A_{2}^{[\ast]}\left(\ln B_{2}^{[\ast]}\right)^{2}\alpha$, $A_{2}^{[\ast]} = e^{\frac{1 - 2x}{2(1 - x)}}$, and $B_{2}^{[\ast]} = x(x - 1)^{2}$, since the operator is independent of $a$ and $b$ and maps the roots to $0$ and infinity ($\infty$). The iterative map $R(x)$, which is based on multiplicative calculus, aligns with the following:
$$R^{[\ast]}(x) = \frac{\sum_{i=0}^{n} a_{i}x^{i}}{\sum_{i=0}^{n} a_{n-i}x^{i}}, \qquad \left\{a_{i}\right\}_{i=0}^{n} \subset \mathbb{R},$$
possessing intriguing characteristics [42].
Proposition 1.
The fixed points of the iterative map (35) are given as follows:
  • $x_{0} = 0$ and $x = \infty$ are super-attracting fixed points of the multiplicative iterative process.
  • $x_{1} = 1$ is a repulsive fixed point of the multiplicative map.
While assessing the behavior of the iterative process, this stability function is vital, especially when examining how resilient the process is to errors and interruptions in the initial guesses. The robustness and effectiveness of the multiplicative calculus-based approach when converging toward a root can be determined by analyzing the stability function. The stability function for the multiplicative single root-finding technique, denoted as CM$^{[\ast]}$, can be expressed as follows for $\beta$ values close to $1.00$:
$$R^{[\ast]}(1, \alpha; \beta) = 4 - 5.0\times 10^{-9}\,\alpha - 25.13274124\,i\alpha + 1.6\times 10^{-8}\,\alpha^{2}\beta + 124.0251068\,i\alpha^{2}\beta + 39.47841762\,\alpha\beta - 2.0\times 10^{-8}\,i\alpha\beta - 9.69604405\times 10^{-8}\,\alpha + 0.5\,i\alpha + 3.141592654\times 10^{-8}\,i.$$
The stability areas are analyzed for different values of the parameter $\beta \in [-10, 10]$, as seen in Figure 1a,b and Figure 2a–d.
In Figure 1a,b, the stability zone of the multiplicative iterative method CM$^{[\ast]}$ for obtaining the root of (1) is displayed; it shows the starting estimates and parameters for which the multiplicative calculus-based iterative technique converges to the exact solution of (2). The multiplicative iterative technique's resilience and efficacy in solving (1) are illustrated in Figure 2a–d for a variety of $\beta$ parameter values. The figures emphasize the significance of the $\beta$ and $\alpha$ parameters in determining the stability and efficacy of the method. The strategy is most stable when $\beta = 1$, and it progressively loses stability as $\beta$ moves closer to zero. The most significant aspects of the iterative mapping are summarized in Figure 3, which visually depicts the various behaviors of the method's iterations over dynamical planes, including regions of convergence, divergence, and chaotic patterns. Stable and unstable behaviors for various combinations of $\beta$ and $\alpha$ values are shown in Figure 4a–e, Figure 5a–e, and Figure 6a–e. The visual representations illustrate several kinds of crucial points in the multiplicative calculus-based iterative process using different markers. Small circles and white asterisks are used to locate fixed points, i.e., points whose values stay constant throughout the iteration. Super-attracting fixed points, indicated by squares with embedded stars, emphasize regions where convergence speeds up because of the strongly attracting nature of these points. The behavior and stability of the iterative process are greatly influenced by essential or critical points, which are marked with plain squares. Analyzing the convergence properties of iterative techniques for solving nonlinear equations requires the use of dynamical planes. These planes assist in visualizing stability and efficiency, aiding in determining the ideal values of the $\beta$ and $\alpha$ parameters that improve accuracy and speed of convergence.
These planes also display divergence zones, enabling the avoidance of factors that can cause instability or failure. Dynamical plane analysis enables one to determine the optimal parameters for reducing iterations and enhancing solution stability. The multiplicative scheme CM$^{[\ast]}$ exhibits consistent behavior for a range of parameter values $\beta$, $\alpha$, as illustrated in Figure 4a–e, Figure 5a–e, and Figure 6a–e, respectively.
These plots also show how sensitive the method is to parameter alterations, helping to determine whether the methodology remains resilient or requires precise parameter tweaks for maximum performance. They also disclose the convergence characteristics (linear, superlinear, or quadratic), which indicate how quickly the multiplicative calculus-based iterative method approaches the exact solution of (1). It is crucial to comprehend these rates of convergence because they aid in evaluating the method's effectiveness and speed in producing exact findings. This knowledge is crucial for selecting and adjusting parameters to ensure that the method performs well across a variety of problem settings, thereby enhancing its effectiveness and reliability.
To efficiently solve nonlinear equations like Equation (2), iterative approaches require a thorough grasp of their dynamics. Dynamic analysis can be used to evaluate the impacts of initial estimations on the iterative approach’s convergence behavior, rate of solution, and overall stability. In addition to shedding light on the method’s reliability, this study enhances its efficiency and lowers the possibility of problems like sluggish convergence or possible divergence. Since dynamic analysis allows practitioners to adapt their methods to the unique characteristics of the problem, it is a fundamental component of applied professions such as engineering and the sciences that use iterative processes to achieve accurate solutions. By carefully analyzing these dynamics, it is possible to enhance convergence rates, modify techniques to fit the properties of the function, and guarantee stable solutions (even in complex or uncertain situations).
Iterative sequences converge to solutions in regions of attraction, whereas they diverge in regions of repulsion, as depicted by the dynamical planes. These planes also show chaotic behaviors, periodic cycles, and fixed points, which shed light on how sensitive iterative techniques are to initial conditions. This kind of study is essential for using iterative algorithms to solve nonlinear equations more robustly and reliably, because it provides a deeper knowledge of how specific factors can affect convergence behavior. Unstable results might result from choosing values of $\beta$ within the divergence zone, namely $\beta \in (-\infty, -5) \cup (5, \infty)$, as shown in Figure 7a,b. Understanding these zones makes it possible to choose parameters more effectively, which encourages reliable convergence in real-world applications.
Utilizing multiplicative calculus, the CM$^{[\ast]}$ method is a productive way to find simple roots. Using dynamical and stability plane representations, this approach identifies the ideal parameter values that speed up convergence. CM$^{[\ast]}$ thus demonstrates a strong and dependable root-finding procedure with increased convergence rates. We propose a hybrid parallel approach based on multiplicative calculus that uses CM$^{[\ast]}$ as a correction factor. This hybrid approach, described in the section that follows, is intended to solve Equation (2) with enhanced accuracy, stability, and computational efficiency. In particular, in complex or large-scale computational scenarios, this novel multiplicative calculus-based parallel technique provides a promising framework for simultaneously finding all solutions to nonlinear engineering models.

3. Construction and Analysis of the Multiplicative Parallel Method

Parallel computing techniques are robust computational algorithms used to simultaneously locate all feasible solutions to nonlinear equations. Their simplicity is what makes them unique: they utilize computer multiprocessing to compute all solutions in parallel, iteratively improving guesses regardless of their proximity to the precise solutions. This approach is especially useful for high-degree nonlinear equations with complex solutions that require rapid convergence. The multi-threading capability of these techniques makes them an effective tool in computational mathematics for solving nonlinear equations. The Weierstrass–Durand–Kerner method [43], also called the Weierstrass–Dochev method, is one of the best derivative-free parallel methodologies. It is defined as follows:
$$Z_{i}^{(l+1)} = x_{i}^{(l)} - \mathcal{W}\left(x_{i}^{(l)}\right),$$
where
$$\mathcal{W}\left(x_{i}^{(l)}\right) = \frac{f\left(x_{i}^{(l)}\right)}{\prod_{\substack{j=1 \\ j \neq i}}^{n}\left(x_{i}^{(l)} - x_{j}^{(l)}\right)}, \qquad (i, j = 1, \ldots, n),$$
is referred to as Weierstrass’ modification. Local quadratic convergence is illustrated using this parallel approach.
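The Weierstrass–Durand–Kerner iteration above can be sketched for a monic polynomial as follows. The coefficient convention, starting guesses, and function names are illustrative choices, not prescribed by the source:

```python
def durand_kerner(coeffs, z0, tol=1e-12, max_iter=200):
    """Weierstrass-Durand-Kerner: refine all root estimates of the monic
    polynomial p(x) = x^n + coeffs[0]*x^(n-1) + ... + coeffs[-1]
    simultaneously, starting from the guesses z0."""
    def p(x):
        acc = 1.0
        for c in coeffs:
            acc = acc * x + c  # Horner evaluation of the monic polynomial
        return acc

    z = list(z0)
    for _ in range(max_iter):
        w = []
        for i, zi in enumerate(z):
            denom = 1.0
            for j, zj in enumerate(z):
                if j != i:
                    denom *= zi - zj
            w.append(p(zi) / denom)  # Weierstrass correction W(x_i)
        z = [zi - wi for zi, wi in zip(z, w)]
        if max(abs(wi) for wi in w) < tol:
            break
    return z

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
roots = sorted(durand_kerner([-6.0, 11.0, -6.0], [0.5, 1.5, 3.5]))
print([round(r, 6) for r in roots])  # ~ [1.0, 2.0, 3.0]
```

All three estimates are refined in the same sweep, which is what makes the scheme naturally parallelizable across roots.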
In 1977, Ehrlich [44] suggested a third-order accurate simultaneous iteration approach with the express purpose of effectively locating multiple roots of nonlinear equations. This novel method represented a major breakthrough in the field, as it enabled the refinement of several root estimates simultaneously within a single iterative framework, as follows:
$$Z_{i}^{(l+1)} = x_{i}^{(l)} - \frac{1}{\dfrac{1}{N_{i}\left(x_{i}^{(l)}\right)} - \sum_{\substack{j=1 \\ j \neq i}}^{n} \dfrac{1}{x_{i}^{(l)} - x_{j}^{(l)}}},$$
where $x_{j}^{(l)} = u_{j}^{(l)}$ is used as a correction in (5), and $N_{i}\left(x_{i}^{(l)}\right) = \frac{f\left(x_{i}^{(l)}\right)}{f'\left(x_{i}^{(l)}\right)}$ denotes the Newton correction. In numerical analysis, this approach has since been used as a paradigm to construct other high-order iterative schemes. For example, Petković et al. [45] enhanced the convergence order of the Ehrlich parallel approach from 3 to 6, as follows:
$$Z_{i}^{(l+1)} = x_{i}^{(l)} - \frac{1}{\dfrac{1}{N_{i}\left(x_{i}^{(l)}\right)} - \sum_{\substack{j=1 \\ j \neq i}}^{n} \dfrac{1}{x_{i}^{(l)} - u_{j}^{(l)}}},$$
with the following corrective factor:
$$u_{j}^{(l)} = x_{j}^{(l)} - \frac{f\left(s_{j}^{(l)}\right) - f\left(x_{j}^{(l)}\right)}{2 f\left(s_{j}^{(l)}\right) - f\left(x_{j}^{(l)}\right)} \cdot \frac{f\left(x_{j}^{(l)}\right)}{f'\left(x_{j}^{(l)}\right)}, \qquad \text{where } s_{j}^{(l)} = x_{j}^{(l)} - \frac{f\left(x_{j}^{(l)}\right)}{f'\left(x_{j}^{(l)}\right)}.$$
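The base third-order Ehrlich step can be sketched as follows, written in the equivalent division-safe form $x_i - N_i / (1 - N_i \sum_{j \neq i} 1/(x_i - x_j))$. The test polynomial and names are illustrative choices:

```python
def ehrlich_step(p, dp, z):
    """One third-order Ehrlich iteration: each estimate is updated using its
    Newton correction N_i and a repulsion sum over the other estimates."""
    new_z = []
    for i, zi in enumerate(z):
        N = p(zi) / dp(zi)  # Newton correction N_i(x_i)
        s = sum(1.0 / (zi - zj) for j, zj in enumerate(z) if j != i)
        new_z.append(zi - N / (1.0 - N * s))
    return new_z

# p(x) = (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6
p = lambda x: x ** 3 - 6 * x ** 2 + 11 * x - 6
dp = lambda x: 3 * x ** 2 - 12 * x + 11
z = [0.5, 1.6, 3.5]
for _ in range(10):
    z = ehrlich_step(p, dp, z)
print([round(x, 6) for x in z])  # ~ [1.0, 2.0, 3.0]
```

Replacing each `zj` in the repulsion sum by a higher-order corrected value (such as `u_j` above) is exactly how the accelerated variants in this section raise the overall convergence order.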
The method's convergence order was further accelerated from 3 to 10 by Petković et al. [46], resulting in the following method (PM$^{[\ast]}$):
$$Z_{i}^{(l+1)} = x_{i}^{(l)} - \frac{1}{\dfrac{1}{N_{i}\left(x_{i}^{(l)}\right)} - \sum_{\substack{j=1 \\ j \neq i}}^{n} \dfrac{1}{x_{i}^{(l)} - T_{j}^{(l)}}},$$
where
$$T_{j}^{(l)} = \gamma_{j}^{(l)} - \frac{\left(\sigma_{j}^{(l)} - \gamma_{j}^{(l)}\right) f\left(\gamma_{j}^{(l)}\right)}{f\left(\sigma_{j}^{(l)}\right) - f\left(\gamma_{j}^{(l)}\right)} \cdot \frac{f\left(x_{j}^{(l)}\right) - f\left(\gamma_{j}^{(l)}\right)}{f\left(x_{j}^{(l)}\right) - 2 f\left(\sigma_{j}^{(l)}\right)},$$
$$\gamma_{j}^{(l)} = \sigma_{j}^{(l)} - \frac{f\left(x_{j}^{(l)}\right) f\left(\sigma_{j}^{(l)}\right)}{f'\left(x_{j}^{(l)}\right)\left(f\left(x_{j}^{(l)}\right) - 2 f\left(\sigma_{j}^{(l)}\right)\right)}, \qquad \sigma_{j}^{(l)} = x_{j}^{(l)} - \frac{f\left(x_{j}^{(l)}\right)}{f'\left(x_{j}^{(l)}\right)}.$$
Consider the well-known single-root-finding scheme of seventh-order convergence presented by Kou et al. [47], given as follows:
$$v^{[l]} = \gamma^{(l)} - \frac{f\left(\gamma^{(l)}\right)}{f'\left(x^{(l)}\right)}\left[\left(\frac{f\left(x^{(l)}\right) - f\left(\sigma^{(l)}\right)}{f\left(x^{(l)}\right) - 2 f\left(\sigma^{(l)}\right)}\right)^{2} + \frac{f\left(\gamma^{(l)}\right)}{f\left(\sigma^{(l)}\right) - \alpha f\left(\gamma^{(l)}\right)}\right],$$
where $\gamma^{(l)} = \sigma^{(l)} - \frac{f\left(\sigma^{(l)}\right)}{f'\left(x^{(l)}\right)} \cdot \frac{f\left(x^{(l)}\right)}{f\left(x^{(l)}\right) - 2 f\left(\sigma^{(l)}\right)}$ and $\sigma^{(l)} = x^{(l)} - \frac{f\left(x^{(l)}\right)}{f'\left(x^{(l)}\right)}$. Using (30) in (6), to find all the solutions to nonlinear Equation (1), we construct a new parallel multiplicative calculus-based iterative scheme (CM$_{1}^{[\ast]}$) as follows:
$$x_{i}^{(l+1)} = \gamma_{i}^{(l)} - \frac{f\left(\gamma_{i}^{(l)}\right)}{\prod_{\substack{j=1 \\ j \neq i}}^{n}\left(\gamma_{i}^{(l)} - \gamma_{j}^{(l)}\right)}\left[\left(\frac{1 - \aleph_{1}^{[\ast]}}{1 - 2\aleph_{1}^{[\ast]}}\right)^{2} + \frac{\aleph_{3}^{[\ast]}}{1 - \alpha \aleph_{3}^{[\ast]}}\right] \prod_{\substack{j=1 \\ j \neq i}}^{n} \frac{\gamma_{i}^{(l)} - \gamma_{j}^{(l)}}{x_{i}^{(l)} - x_{j}^{(l)}},$$
where
$$\gamma_{i}^{(l)} = \sigma_{i}^{(l)} - \frac{f\left(\sigma_{i}^{(l)}\right)}{\prod_{\substack{j=1 \\ j \neq i}}^{n}\left(\sigma_{i}^{(l)} - \sigma_{j}^{(l)}\right)} \cdot \frac{1}{1 - 2\aleph_{1}^{[\ast]}} \prod_{\substack{j=1 \\ j \neq i}}^{n} \frac{\sigma_{i}^{(l)} - \sigma_{j}^{(l)}}{x_{i}^{(l)} - x_{j}^{(l)}}, \qquad \sigma_{i}^{(l)} = x_{i}^{(l)} - \frac{f\left(x_{i}^{(l)}\right)}{\prod_{\substack{j=1 \\ j \neq i}}^{n}\left(x_{i}^{(l)} - v_{j}^{(l)}\right)},$$
$$v_{j}^{(l)} = x_{j}^{(l)} - \frac{\ln \varsigma\left(x_{j}^{(l)}\right)}{\ln \varsigma^{[\ast]}\left(x_{j}^{(l)}\right)}\left[\frac{1 + \alpha \ln \varsigma\left(x_{j}^{(l)}\right)}{1 + \alpha \ln \varsigma\left(x_{j}^{(l)}\right) + \beta \left(\ln \varsigma\left(x_{j}^{(l)}\right)\right)^{2}}\right],$$
and $\aleph_{1}^{[\ast]} = \frac{f\left(\sigma_{i}^{(l)}\right)}{f\left(x_{i}^{(l)}\right)}$, $\aleph_{2}^{[\ast]} = \frac{f\left(\gamma_{i}^{(l)}\right)}{f\left(x_{i}^{(l)}\right)}$, $\aleph_{3}^{[\ast]} = \frac{f\left(\gamma_{i}^{(l)}\right)}{f\left(\sigma_{i}^{(l)}\right)}$.
To comprehend the efficiency of parallel numerical iterative algorithms in resolving nonlinear equations, a theoretical convergence analysis is necessary. Scientists can measure the technique’s efficiency and reliability across different function types by examining convergence properties such as iteration stability and order. Faster and more reliable root-finding techniques in practical applications are the outcome of this research’s contribution to algorithm design. Thus, the theoretical convergence analysis of hybrid parallel schemes based on multiplicative calculus will be discussed in the following analysis:
Theorem 3.
Consider $\zeta_1, \ldots, \zeta_n$ to be the simple roots of (1). Assume that the initial values $x_1^{[0]}, \ldots, x_n^{[0]}$ are sufficiently close to the exact solutions. Then the $CM_1^{[*]}$ parallel approach attains a convergence order of twelve.
Proof. 
Let $\epsilon_i = x_i^{[l]} - \zeta_i$, $\epsilon_\sigma = \sigma_i^{[l]} - \zeta_i$, $\epsilon_\gamma = \gamma_i^{[l]} - \zeta_i$, and $\epsilon_i^{[*]} = x_i^{[l+1]} - \zeta_i$ represent the errors in $x_i^{[l]}$, $\sigma_i^{[l]}$, $\gamma_i^{[l]}$, and $x_i^{[l+1]}$, respectively. From the first step of $CM_1^{[*]}$, we have the following:

$$\sigma_i^{[l]} - \zeta_i = x_i^{[l]} - \zeta_i - \frac{f(x_i^{[l]})}{\prod_{\substack{j=1\\ j\neq i}}^{n}\bigl(x_i^{[l]} - v_j^{[l]}\bigr)},$$

$$\epsilon_\sigma = \epsilon_i - \epsilon_i \prod_{\substack{j=1\\ j\neq i}}^{n} \frac{x_i^{[l]} - \zeta_j}{x_i^{[l]} - v_j^{[l]}},$$

$$\epsilon_\sigma = \epsilon_i \left(1 - \prod_{\substack{j=1\\ j\neq i}}^{n} \frac{x_i^{[l]} - \zeta_j}{x_i^{[l]} - v_j^{[l]}}\right),$$

where $\prod_{\substack{j=1\\ j\neq i}}^{n} \frac{x_i^{[l]} - \zeta_j}{x_i^{[l]} - v_j^{[l]}} = \prod_{\substack{j=1\\ j\neq i}}^{n}\left(1 + \frac{v_j^{[l]} - \zeta_j}{x_i^{[l]} - v_j^{[l]}}\right) = 1 + (n-1)\,O\bigl(\epsilon_j^{2}\bigr)$, since, from (23), $v_j^{[l]} - \zeta_j = O(\epsilon_j^{2})$. Thus, we have the following:

$$\epsilon_\sigma = -\epsilon_i\,(n-1)\,O\bigl(\epsilon_j^{2}\bigr).$$

Assuming $\epsilon_i = \epsilon_j$, we have the following:

$$\epsilon_\sigma = O\bigl(\epsilon_i^{3}\bigr).$$
In the second step, we obtain the following:
$$\gamma_i^{[l]} - \zeta_i = \sigma_i^{[l]} - \zeta_i - \frac{f(\sigma_i^{[l]})}{\prod_{\substack{j=1\\ j\neq i}}^{n}\bigl(\sigma_i^{[l]} - \sigma_j^{[l]}\bigr)}\cdot\frac{1}{1 - 2\Delta_1^{[*]}}\prod_{\substack{j=1\\ j\neq i}}^{n}\frac{\sigma_i^{[l]} - \sigma_j^{[l]}}{x_i^{[l]} - x_j^{[l]}},$$

leading to the following:

$$\epsilon_\gamma = \epsilon_\sigma - \epsilon_\sigma\,\frac{f(\sigma_i^{[l]})}{\epsilon_\sigma \prod_{\substack{j=1\\ j\neq i}}^{n}\bigl(\sigma_i^{[l]} - \sigma_j^{[l]}\bigr)}\cdot\frac{1}{1 - 2\Delta_1^{[*]}}\prod_{\substack{j=1\\ j\neq i}}^{n}\frac{\sigma_i^{[l]} - \sigma_j^{[l]}}{x_i^{[l]} - x_j^{[l]}},$$

where $\prod_{\substack{j=1\\ j\neq i}}^{n}\frac{\sigma_i^{[l]} - \sigma_j^{[l]}}{x_i^{[l]} - x_j^{[l]}} \approx 1$ and $\Delta_1^{[*]} = \frac{\epsilon_\sigma}{\epsilon_i}\left(1 + \epsilon_i \prod_{\substack{j=1\\ j\neq i}}^{n}\frac{1}{x_i^{[l]} - x_j^{[l]}}\right)$; thus, we have the following:

$$\epsilon_\gamma = \epsilon_\sigma\left[1 - \frac{1}{\epsilon_\sigma}\,\frac{f(\sigma_i^{[l]})}{\prod_{\substack{j=1\\ j\neq i}}^{n}\bigl(\sigma_i^{[l]} - \sigma_j^{[l]}\bigr)}\cdot\frac{1}{1 - 2\frac{\epsilon_\sigma}{\epsilon_i}\bigl(1 + \Lambda_1^{[*]}\bigr)}\right],$$

where $\Lambda_1^{[*]} = \epsilon_i \prod_{\substack{j=1\\ j\neq i}}^{n}\bigl(x_i^{[l]} - x_j^{[l]}\bigr)^{-1}$. Hence,

$$\epsilon_\gamma = \epsilon_\sigma\left[1 - \bigl(1 + O(\epsilon_\sigma)\bigr)\left(1 + 2\,\frac{\epsilon_\sigma}{\epsilon_i}\bigl(1 + \Lambda_1^{[*]}\bigr) + \cdots\right)\right],$$

$$\epsilon_\gamma = -\epsilon_\sigma\left[O(\epsilon_\sigma) + 2\,\frac{\epsilon_\sigma}{\epsilon_i}\bigl(1 + \Lambda_1^{[*]}\bigr) + \cdots\right],$$

$$\epsilon_\gamma = \epsilon_\sigma\,O(\epsilon_\sigma) = O\bigl(\epsilon_\sigma^{2}\bigr) = O\bigl((\epsilon_i^{3})^{2}\bigr),$$

$$\epsilon_\gamma = O\bigl(\epsilon_i^{6}\bigr).$$
Consider the third step of the multiplicative parallel scheme $CM_1^{[*]}$:
$$x_i^{[l+1]} - \zeta_i = \gamma_i^{[l]} - \zeta_i - \frac{f(\gamma_i^{[l]})}{\prod_{\substack{j=1\\ j\neq i}}^{n}\bigl(\gamma_i^{[l]} - \gamma_j^{[l]}\bigr)}\left[\bigl(\Lambda_2^{[*]}\bigr)^{2} + \Lambda_3^{[*]}\right]\prod_{\substack{j=1\\ j\neq i}}^{n}\frac{\gamma_i^{[l]} - \gamma_j^{[l]}}{x_i^{[l]} - x_j^{[l]}},$$

where $\Lambda_2^{[*]} = \dfrac{1 - \Delta_1^{[*]}}{1 - 2\Delta_1^{[*]}}$ and $\Lambda_3^{[*]} = \dfrac{\Delta_3^{[*]}}{1 - \alpha \Delta_3^{[*]}}$. Therefore,

$$\epsilon_i^{[*]} = \epsilon_\gamma - \epsilon_\gamma\,\frac{f(\gamma_i^{[l]})}{\epsilon_\gamma \prod_{\substack{j=1\\ j\neq i}}^{n}\bigl(\gamma_i^{[l]} - \gamma_j^{[l]}\bigr)}\left[\bigl(\Lambda_2^{[*]}\bigr)^{2} + \Lambda_3^{[*]}\right]\prod_{\substack{j=1\\ j\neq i}}^{n}\frac{\gamma_i^{[l]} - \gamma_j^{[l]}}{x_i^{[l]} - x_j^{[l]}},$$

where $\prod_{\substack{j=1\\ j\neq i}}^{n}\frac{\gamma_i^{[l]} - \gamma_j^{[l]}}{x_i^{[l]} - x_j^{[l]}} \approx 1$ and

$$\Lambda_2^{[*]} = \frac{1 - \Delta_1^{[*]}}{1 - 2\Delta_1^{[*]}} = \frac{\epsilon_i - \epsilon_\sigma\,\sigma_1^{[*]}}{\epsilon_i - 2\epsilon_\sigma\,\sigma_1^{[*]}}, \qquad \sigma_1^{[*]} = \prod_{\substack{j=1\\ j\neq i}}^{n}\frac{\sigma_i^{[l]} - \zeta_j}{x_i^{[l]} - \zeta_j}.$$

Similarly,

$$\Lambda_3^{[*]} = \frac{\Delta_3^{[*]}}{1 - \alpha \Delta_3^{[*]}} = \frac{\epsilon_\gamma\,\sigma_2^{[*]}}{\epsilon_\sigma - \alpha\,\epsilon_\gamma\,\sigma_2^{[*]}}, \qquad \sigma_2^{[*]} = \prod_{\substack{j=1\\ j\neq i}}^{n}\frac{\gamma_i^{[l]} - \zeta_j}{\sigma_i^{[l]} - \zeta_j}.$$

Using these values in (59), we have the following:

$$\epsilon_i^{[*]} = \epsilon_\gamma - \epsilon_\gamma\,\frac{f(\gamma_i^{[l]})}{\epsilon_\gamma \prod_{\substack{j=1\\ j\neq i}}^{n}\bigl(\gamma_i^{[l]} - \gamma_j^{[l]}\bigr)}\left[\left(\frac{\epsilon_i - \epsilon_\sigma\,\sigma_1^{[*]}}{\epsilon_i - 2\epsilon_\sigma\,\sigma_1^{[*]}}\right)^{2} + \frac{\epsilon_\gamma\,\sigma_2^{[*]}}{\epsilon_\sigma - \alpha\,\epsilon_\gamma\,\sigma_2^{[*]}}\right],$$

$$\epsilon_i^{[*]} = \epsilon_\gamma\left[1 - \frac{1}{\epsilon_\gamma}\,\frac{f(\gamma_i^{[l]})}{\prod_{\substack{j=1\\ j\neq i}}^{n}\bigl(\gamma_i^{[l]} - \gamma_j^{[l]}\bigr)}\left(1 + 2\Lambda_4^{[*]} + \cdots + \frac{\epsilon_\gamma\,\sigma_2^{[*]}}{\epsilon_\sigma - \alpha\,\epsilon_\gamma\,\sigma_2^{[*]}}\right)\right],$$

where $\Lambda_4^{[*]} = \dfrac{\epsilon_\sigma\,\sigma_1^{[*]}}{\epsilon_i - 2\epsilon_\sigma\,\sigma_1^{[*]}}$. Hence,

$$\epsilon_i^{[*]} = \epsilon_\gamma\left[1 - \bigl(1 + O(\epsilon_\gamma)\bigr)\left(1 + 2\Lambda_4^{[*]} + \cdots + \frac{\epsilon_\gamma\,\sigma_2^{[*]}}{\epsilon_\sigma - \alpha\,\epsilon_\gamma\,\sigma_2^{[*]}}\right)\right],$$

$$\epsilon_i^{[*]} = -\epsilon_\gamma\left[O(\epsilon_\gamma) + 2\Lambda_4^{[*]} + \cdots + \frac{\epsilon_\gamma\,\sigma_2^{[*]}}{\epsilon_\sigma} + \cdots\right],$$

$$\epsilon_i^{[*]} = \epsilon_\gamma\,O(\epsilon_\gamma) = O\bigl(\epsilon_\gamma^{2}\bigr) = O\bigl((\epsilon_i^{6})^{2}\bigr),$$

$$\epsilon_i^{[*]} = O\bigl(\epsilon_i^{12}\bigr).$$
This completes the proof.    □

4. Numerical Results

Computational results demonstrate the effectiveness of the suggested methods and their ability to solve (2) using the hybrid multiplicative calculus-based parallel scheme. This validates the accuracy of the algorithm, compares the findings with existing methods, and offers quantitative evidence to support the theoretical analysis. Convergence rates, data-processing efficiency, and error analysis are among the numerical results that are crucial for assessing the overall performance and impact of the parallel schemes. This section uses various types of engineering problems to demonstrate the efficiency and stability of the suggested method. Its performance is assessed in the Maple 18 environment according to a specific termination criterion. This criterion provides accurate results and allows for trustworthy comparisons of convergence behavior across applications, illustrating the method’s stability and suitability for complicated engineering computations. Using the following criterion, these engineering examples further support the ability of the parallel multiplicative calculus-based method to solve nonlinear equations, highlighting its potential as a reliable tool for engineering analysis:
$$(i)\quad e_i^{[l]} = \bigl\| x_i^{[l+1]} - x_i^{[l]} \bigr\|_2 \le 10^{-18},$$
where $e_i^{[l]}$ is the absolute error in the 2-norm. Using the parallel numerical scheme ($SM_1^{[*]}$) described in [48], we compare our newly developed multiplicative calculus-based algorithm with the $PM^{[*]}$ method and the parallel iterative computer algorithm presented in [49]:
$$x_i^{[l+1]} = \gamma_i^{[l]} - \frac{f(\gamma_i^{[l]})}{\prod_{\substack{j=1\\ j\neq i}}^{n}\bigl(\gamma_i^{[l]} - \gamma_j^{[l]}\bigr)},$$

where $\gamma_i^{[l]} = \sigma_i^{[l]} - \dfrac{f(\sigma_i^{[l]})}{\prod_{\substack{j=1\\ j\neq i}}^{n}\bigl(\sigma_i^{[l]} - \sigma_j^{[l]}\bigr)}$, $\sigma_i^{[l]} = x_i^{[l]} - \dfrac{f(x_i^{[l]})}{\prod_{\substack{j=1\\ j\neq i}}^{n}\bigl(x_i^{[l]} - v_j^{[l]}\bigr)}$, and $v_j^{[l]} = x_j^{[l]} - \dfrac{\bigl(f(x_j^{[l]})\bigr)^{2}}{f\bigl(x_j^{[l]} + f(x_j^{[l]})\bigr) - f(x_j^{[l]})}$, and with [50]
$$x_i^{[l+1]} = \sigma_i^{[l]} - \cfrac{\phi_i}{\phi_i\,\cfrac{f'(\sigma_i^{[l]})}{f(\sigma_i^{[l]})} - \sum\limits_{\substack{j=1\\ j\neq i}}^{n}\cfrac{\phi_j}{x_i^{[l]} - x_j^{[l]}}}, \qquad x_i^{[l+1]} = x_i^{[l]} - \cfrac{\phi_i}{\phi_i\,\cfrac{f'(x_i^{[l]})}{f(x_i^{[l]})} - \sum\limits_{\substack{j=1\\ j\neq i}}^{n}\cfrac{\phi_j}{x_i^{[l]} - Z_j^{[l]}}},$$

where $Z_j^{[l]} = \sigma_j^{[l]} - \phi_j\,\cfrac{f(\sigma_j^{[l]})}{f'(\sigma_j^{[l]})}$, $\sigma_j^{[l]} = x_j^{[l]} - \phi_j\,\cfrac{f(x_j^{[l]})}{f'(x_j^{[l]})}$, and $\phi_j$ denotes the multiplicity of the $j$-th root (abbreviated as $SM_2^{[*]}$; cf. [50]). All solutions to the nonlinear Equation (1) are computed simultaneously using Algorithm 1.
Algorithm 1: The fractional numerical scheme $CM_1^{[*]}$.
Mathematics 12 03501 i001
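Since Algorithm 1 appears in the article as an image, a compact sketch of one $CM_1^{[*]}$ sweep is given below. It follows the three steps of the scheme, exploiting the cancellation of the products $\prod(\sigma_i - \sigma_j)$ and $\prod(\gamma_i - \gamma_j)$; as a simplifying assumption, the corrector $v_j^{[l]}$ is computed with an ordinary Newton step (the multiplicative corrector requires the problem-dependent transformation $\varsigma$), and the test polynomial $x^3 - 1$ is our own choice:

```python
import numpy as np

def cm1_sweep(f, df, x, alpha=0.01, tol=1e-13):
    """One pass of the three-step parallel scheme (hedged sketch).
    v_j uses a plain Newton correction as a stand-in for the
    multiplicative corrector; sigma, gamma, and the final update
    follow the Weierstrass-type products of the scheme."""
    n = len(x)
    v = x - f(x) / df(x)                  # stand-in for the correctors v_j
    x_new = np.empty_like(x)
    for i in range(n):
        if abs(f(x[i])) < tol:            # already converged: avoid 0/0
            x_new[i] = x[i]
            continue
        W = np.prod([x[i] - v[j] for j in range(n) if j != i])   # prod(x_i - v_j)
        P = np.prod([x[i] - x[j] for j in range(n) if j != i])   # prod(x_i - x_j)
        sg = x[i] - f(x[i]) / W                                  # sigma_i
        if abs(f(sg)) < tol:
            x_new[i] = sg
            continue
        d1 = f(sg) / f(x[i])                                     # Delta_1
        gm = sg - f(sg) / ((1.0 - 2.0 * d1) * P)                 # gamma_i (products cancel)
        d3 = f(gm) / f(sg)                                       # Delta_3
        bracket = ((1.0 - d1) / (1.0 - 2.0 * d1)) ** 2 + d3 / (1.0 - alpha * d3)
        x_new[i] = gm - f(gm) * bracket / P                      # final update
    return x_new

f  = lambda z: z**3 - 1.0
df = lambda z: 3.0 * z**2
x = np.array([1.2, -0.6 + 0.9j, -0.6 - 0.9j], dtype=complex)
for _ in range(6):
    x = cm1_sweep(f, df, x)
```

With starting values reasonably close to the roots, only a handful of sweeps is needed, consistent with the very high convergence order of the scheme.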

4.1. Example 1: [51]

Modeling complicated engineering systems requires the use of fractional differential equations, particularly in situations such as the suspension design of an automobile, where memory effects and damping significantly impact system performance. In contrast to conventional integer-order differential equations, fractional-order models can represent the subtleties of hereditary traits, which are prevalent in systems and materials that dissipate energy or display viscoelasticity, such as automotive suspension mechanisms. In these applications, fractional differential equations enable the creation of suspension models that improve both performance and stability, providing a more precise framework for addressing difficulties in automotive engineering and control systems. Consequently, these equations are crucial instruments for creating models that account for the time-dependent, non-local dynamics inherent in such systems, resulting in formulations that faithfully capture real-world behavior. The fractional differential equation that results from this modeling technique is as follows:
$$\frac{d^{4} f(x)}{dx^{4}} + \frac{d^{\,n-\frac{1}{2}} f(x)}{dx^{\,n-\frac{1}{2}}} + \bigl(f(x)\bigr)^{n-1} = x^{n+5}, \qquad \alpha_{01}^{[*]} \le x \le \alpha_{1}^{[*]},$$

$$f\bigl(\alpha_{01}^{[*]}\bigr) = \frac{d^{\,n-3} f\bigl(\alpha_{01}^{[*]}\bigr)}{dx^{\,n-3}} = \frac{d^{\,n-2} f\bigl(\alpha_{01}^{[*]}\bigr)}{dx^{\,n-2}} = 0.00, \qquad \frac{d^{\,n-1} f\bigl(\alpha_{01}^{[*]}\bigr)}{dx^{\,n-1}} = 6.00,$$
where $\alpha_{01}^{[*]} = 0$, $\alpha_{1}^{[*]} = 1$, and $n = 4$. Using the analytical approach from [52], we simulate Equation (67) with the following nonlinear multiplicative estimation:
$$\varsigma_5(x) \approx 1 + x - 2x^{2} + \frac{14 x^{3}}{3} - \frac{35 x^{4}}{3} + \frac{91}{3} x^{5}.$$
Equation (52) has five solutions:
$$\zeta_1 = 0.0, \qquad \zeta_{2,3} = 0.152018637 \pm 0.3931364228\,i, \qquad \zeta_{4,5} = 0.314432633 \pm 0.2588352718\,i.$$
Equation (54) yields an exact solution of zero. We performed a detailed dynamical analysis to investigate the efficiency and behavior of the simple solution-finding multiplicative calculus-based technique used to solve (68). This investigation recorded numerous performance measures, such as elapsed computing time, convergence rate (represented as a percentage), total convergent points assessed, and the number of iterations necessary for convergence across varying values of the α and β parameters. This evaluation’s findings are compiled in Table 1, and Figure 8a–c provides graphical representations that demonstrate how these factors affect the algorithm’s convergence properties.
The optimal parameter values found by dynamical analysis and the data shown in Table 1 are used in parallel computational approaches to solve nonlinear equations. The convergence rate of the simultaneous techniques is improved by initializing estimates that are close to the exact solution (within an error range of ϵ = 10 2 ). This method accelerates convergence to exact roots and minimizes the number of iterations required, making the method more efficient and effective for solving nonlinear equations.
The rates of convergence, convergence order, and residual errors for the different approaches are clearly compared in Table 2. All of these metrics make it evident that the proposed strategy outperforms existing approaches when the parameters are set at β = 0.05 and α = 0.01. Specifically, the rate of convergence and the convergence order are greatly improved, and the residual errors for every root are greatly reduced. Since the method provides a more reliable and consistent solution mechanism, this demonstrates its potential to solve nonlinear equations more accurately and with less computing time.
During the simulation, P-convergence represents the percentage convergence, P-divergence represents the percentage divergence, A-iteration represents the average number of iterations, T-function represents the total number of function and derivative evaluations per iteration, Operations-it indicates how many mathematical operations are performed per iteration, Max-Error represents the maximum error per iteration, and Local-COC indicates the local computational order of convergence. Table 3 clearly illustrates that, in terms of consistency and stability, our method outperforms the existing methods $PM^{[*]}$, $SM_1^{[*]}$, and $SM_2^{[*]}$.
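The local computational order of convergence can be estimated directly from three consecutive error norms via the standard formula $\rho \approx \ln\bigl(e^{[l+1]}/e^{[l]}\bigr) / \ln\bigl(e^{[l]}/e^{[l-1]}\bigr)$. A minimal sketch of this estimator follows (the formula is standard; the synthetic quadratically convergent error sequence is our own illustration):

```python
import math

def local_coc(errors):
    """Estimate the local computational order of convergence from the
    last three members of a sequence of error norms."""
    e0, e1, e2 = errors[-3:]
    return math.log(e2 / e1) / math.log(e1 / e0)

# Synthetic quadratically convergent errors: e_{k+1} = e_k^2
errors = [1e-1, 1e-2, 1e-4, 1e-8]
rho = local_coc(errors)
```

For the synthetic sequence above, the estimator recovers an order of 2, matching the quadratic recurrence used to generate it.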
Table 4 clearly illustrates that, by using the parameter values α, β obtained by formalizing the concept of the dynamical system and selecting random initial values (Table A1), the newly developed scheme $CM_1^{[*]}$ shows consistent behavior and is more stable. Owing to its global convergence property, the scheme $CM_1^{[*]}$ outperforms the other schemes ($SM_1^{[*]}$, $SM_2^{[*]}$, and $PM^{[*]}$) in terms of efficiency, computation time, iterations, and residual error (Figure 9).

4.2. Example 2: [53]

Fractional calculus was established to improve the modeling capabilities of classical differential equations and to depict complicated physical systems more accurately. By extending traditional calculus with derivatives and integrals of arbitrary, non-integer order, this approach captures the complex dynamics seen in systems such as fluid mechanics, electrical circuits, and elastic materials. Fractional calculus provides a more flexible framework for explaining phenomena such as anomalous wave spreading and diffusion, which frequently arise in disciplines such as microbiology and materials science. By allowing for these improved descriptions, fractional calculus enhances prediction accuracy in these domains, resulting in the formulation of the fractional initial value problem shown below [40]:
$$\frac{d^{\frac{5}{2}} f(x)}{dx^{\frac{5}{2}}} + x\,\frac{d^{\frac{19}{10}} f(x)}{dx^{\frac{19}{10}}} + \bigl(f(x)\bigr)^{3} = h(x), \qquad 0 \le x \le 1, \qquad f(0) = \frac{df(0)}{dx} = \frac{d^{2} f(0)}{dx^{2}} = 0,$$

where $h(x) = 6.770275002\,x^{0.5} + 5.733474578\,x^{2.1} + x^{3} + x$. The following nonlinear simulation, obtained using the technique from [54], is used to simulate (69):
$$\varsigma_3(x) \approx x^{3} + 3x^{2} + 4x + 0.0000105.$$
Equation (70) has the following solutions:
$$\zeta_1 = -0.000002625005168, \qquad \zeta_{2,3} = -1.49998687 \pm 1.322874167\,i.$$
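Two quick consistency checks on this example are possible in standard double precision (both are our own sketches; the tolerance is relaxed from the paper’s $10^{-18}$, which requires extended precision). First, if the smooth part of the solution behaves like $x^3$ (an assumption made only for this check), the fractional coefficients of $h(x)$ follow from the Caputo power rule $D^{\nu} x^{p} = \frac{\Gamma(p+1)}{\Gamma(p+1-\nu)}\, x^{p-\nu}$. Second, the zeros $\zeta_{1,2,3}$ of the simulation polynomial can be recovered with a classical Weierstrass (Durand–Kerner) sweep, the $\sigma$-type building block of the parallel schemes above:

```python
import math
import numpy as np

# --- Check 1: Caputo power-rule coefficients of h(x), assuming f(x) ~ x^3 ---
c1 = math.gamma(4) / math.gamma(4 - 2.5)    # coefficient of x^0.5
c2 = math.gamma(4) / math.gamma(4 - 1.9)    # coefficient of x^2.1 (after the factor x)

# --- Check 2: recover the zeros of varsigma_3 with a Weierstrass sweep ---
def weierstrass(coeffs, x, iters=80):
    """Durand-Kerner iteration for a monic polynomial given by coeffs."""
    p = np.poly1d(coeffs)
    x = np.array(x, dtype=complex)
    n = len(x)
    for _ in range(iters):
        for i in range(n):
            denom = np.prod([x[i] - x[j] for j in range(n) if j != i])
            x[i] = x[i] - p(x[i]) / denom
    return x

coeffs = [1.0, 3.0, 4.0, 1.05e-5]           # varsigma_3(x) = x^3 + 3x^2 + 4x + 0.0000105
start = [(0.4 + 0.9j) ** k for k in range(3)]   # standard Durand-Kerner starting values
roots = weierstrass(coeffs, start)
```

The first coefficient of $h(x)$ matches $\Gamma(4)/\Gamma(3/2)$ closely; the second agrees with $\Gamma(4)/\Gamma(21/10)$ to roughly the precision printed in the paper.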
The effectiveness of this root-finding technique is assessed through a thorough dynamical analysis that takes into account variables like the number of iterations, total number of calculated points, convergence rate, and elapsed time over a range of values of the β and α parameters. Table 5 presents these indicators, which provide insight into how various parameter choices impact the algorithm’s effectiveness and efficiency in reaching the root. To understand the method’s robustness and computational behavior under different conditions, this assessment is vital.
The convergence rate of simultaneous iterative algorithms is greatly increased by using starting guesses that are sufficiently close to the right solution (within a tolerance of ϵ = 10 2 ). This proximity allows the techniques to arrive at accurate solutions more quickly, usually in fewer iterations. Figure 10a–c and Table 6 present the outcomes of using randomly selected initial estimations, highlighting the method’s adaptability and efficiency at various starting points. These results validate the versatility of the method by showing flexibility and excellent accuracy, irrespective of initial conditions.
The information shows that each root’s residual error, convergence order, and rate of convergence significantly surpasses conventional parallel methods for the range of α and β values (Table 5). This improved performance illustrates the long-term reliability as well as efficiency of the proposed parallel methodology based on multiplicative calculus. Statistics show that the newly created computer approach has reduced residual errors and faster convergence rates across a wide range of parameter values, indicating that it can estimate solutions more accurately and quickly. These results indicate that the parallel iterative method is a good alternative to existing methods and shows promise for a wide range of nonlinear equation-solving applications where stability and fast convergence are essential.
During the simulation, P-convergence represents the percentage convergence, P-divergence represents the percentage divergence, A-iteration represents the average number of iterations, T-function represents the total number of function and derivative evaluations per iteration, Operations-it indicates how many mathematical operations are performed per iteration, Max-error represents the maximum error per iteration, and Local-COC indicates the local computational order of convergence. Table 7 clearly illustrates that, in terms of consistency and stability, our newly developed parallel multiplicative calculus-based technique outperforms the existing methods $PM^{[*]}$, $SM_1^{[*]}$, and $SM_2^{[*]}$.
Table 8 clearly illustrates that, by using the parameter values α, β obtained by formalizing the concept of the dynamical system and selecting random initial values (Table A2), the newly developed scheme $CM_1^{[*]}$ exhibits consistent behavior and is more stable. This illustrates the global convergence property of $CM_1^{[*]}$ in terms of effectiveness, processing time, number of iterations, and absolute residual error when compared with the existing schemes $SM_1^{[*]}$, $SM_2^{[*]}$, and $PM^{[*]}$ (Figure 11).

5. Conclusions

We developed an innovative family of iterative techniques based on a multiplicative approach with a local convergence order of 2 for efficiently finding simple solutions of Equation (2). From this method, a hybrid strategy utilizing multiplicative calculus was further developed, resulting in a parallel numerical methodology capable of approximating all solutions of nonlinear fractional problems simultaneously. Our convergence analysis shows significant performance benefits, with a substantially higher convergence order of 12 for these parallel schemes based on multiplicative calculus. Dynamical planes, as shown in Figure 1, Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11, were used for optimal starting-value selection in order to further increase the convergence rate of the $CM_1^{[*]}$ method, which reduced the number of iterations required to find exact solutions. Extensive testing on various nonlinear problems confirmed the robustness, stability, and reliability of $CM_1^{[*]}$, which outperformed comparable approaches such as $SM_1^{[*]}$, $SM_2^{[*]}$, and $PM^{[*]}$. The $CM_1^{[*]}$ family of parallel schemes outperforms the $SM_1^{[*]}$, $SM_2^{[*]}$, and $PM^{[*]}$ methods in terms of absolute residual error (Table 2, Table 3 and Table 4 and Table 6, Table 7 and Table 8), central processing unit time, and the log of residual error graphs (Figure 9 and Figure 11) for varying α, β values, as demonstrated by the numerical results in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8.
In the future, new efficient hybrid multiplicative calculus-based inverse parallel schemes with global convergence behavior [55,56] will be developed to address complex epidemic and vectorial problems.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The author declares that there are no conflicts of interest regarding the publication of this article.

Abbreviations

In this article, the following abbreviations are used:
$CM_1^{[*]}$: newly developed parallel scheme
$SM_1^{[*]}$, $SM_2^{[*]}$: existing parallel schemes
$PM^{[*]}$: existing parallel scheme
CPU time: computational time
e$-(\cdot)$: $10^{-(\cdot)}$
$\rho_{\varsigma}^{[k-1]}$: local computational order of convergence
$n$: number of iterations

Appendix A

The random initial starting vector is employed in Example 1 to show the global convergence of the numerical schemes, as presented in Table A1.
Table A1. Random initial vectors for Example 1.
Ran-Test: $[x_1^{[0]}, x_2^{[0]}, x_3^{[0]}, x_4^{[0]}, x_5^{[0]}]$
1: $[0.732, 0.091, 0.007, 0.181, 0.097]$
2: $[0.822, 0.048, 0.131, 0.104, 0.634]$
3: $[0.083, 0.243, 0.224, 0.590, 0.823]$
4: $[0.076, 0.432, 0.046, 0.913, 0.643]$
5: $[0.011, 0.283, 0.057, 0.703, 0.253]$
The random initial starting vector is employed in Example 2 to show the global convergence of the numerical schemes, as presented in Table A2.
Table A2. Random initial vectors for Example 2.
Ran-Test [ x 1 [ 0 ] , x 2 [ 0 ] , x 3 [ 0 ] ]
1 [ 0.052 , 0.101 , 0.097 ]
2 [ 0.071 , 0.048 , 0.674 ]
3 [ 0.078 , 0.003 , 0.173 ]
4 [ 0.106 , 0.102 , 0.243 ]
5 [ 0.001 , 0.203 , 0.003 ]

References

  1. Malkus, D.S.; Nohel, J.A.; Plohr, B.J. Dynamics of shear flow of a non-Newtonian fluid. J. Comput. Phys. 1990, 87, 464–487. [Google Scholar] [CrossRef]
  2. Feng, J.; Hu, H.H.; Joseph, D.D. Direct simulation of initial value problems for the motion of solid bodies in a Newtonian fluid Part 1. Sedimentation. J. Fluid Mech. 1994, 261, 5–134. [Google Scholar] [CrossRef]
  3. Blair, P.M.; Weinaug, C.F. Solution of two-phase flow problems using implicit difference equations. Soc. Pet. Eng. J. 1969, 9, 417–424. [Google Scholar] [CrossRef]
  4. Levy, D. Chaos theory and strategy: Theory, application, and managerial implications. Strateg. Manag. J. 1994, 15, 167–178. [Google Scholar] [CrossRef]
  5. Chen, G.; Moiola, J.L.; Wang, H.O. Bifurcation control: Theories, methods, and applications. Int. J. Bifurc. Chaos 2000, 10, 511–548. [Google Scholar] [CrossRef]
  6. Pritchett, L.; Woolcock, M. Solutions when the solution is the problem: Arraying the disarray in development. World Dev. 2004, 32, 191–212. [Google Scholar] [CrossRef]
  7. Metzler, R.; Klafter, J. The random walk’s guide to anomalous diffusion: A fractional dynamics approach. Phys. Rep. 2000, 339, 1–77. [Google Scholar] [CrossRef]
  8. Diethelm, K.; Ford, N.J. Analysis of fractional differential equations. J. Math. Anal. Appl. 2002, 265, 229–248. [Google Scholar] [CrossRef]
  9. Daftardar-Gejji, V.; Babakhani, A. Analysis of a system of fractional differential equations. J. Math. Anal. Appl. 2004, 293, 511–522. [Google Scholar] [CrossRef]
  10. Lakshmikantham, V.; Vatsala, A.S. Basic theory of fractional differential equations. Nonlinear Anal. Theory Methods Appl. 2008, 69, 2677–2682. [Google Scholar] [CrossRef]
  11. Stanley, D. A multiplicative calculus. Probl. Resour. Issues Math. Undergrad. Stud. 1999, 9, 310–326. [Google Scholar] [CrossRef]
  12. Grossman, M.; Katz, R. Non-Newtonian Calculus: A Self-contained, Elementary Exposition of the Authors’ Investigations. Non-Newton. Calc. 1972, 1, 1–1090. [Google Scholar]
  13. Bashirov, A.E.; Kurpınar, E.M.; Özyapıcı, A. Multiplicative calculus and its applications. J. Math. Anal. Appl. 2008, 337, 36–48. [Google Scholar] [CrossRef]
  14. Bashirov, A.E.; Riza, M. On complex multiplicative differentiation. TWMS J. Appl. Eng. Math. 2011, 1, 75–85. [Google Scholar]
  15. Bashirov, A.E.; Mısırlı, E.; Tandoğdu, Y.; Özyapıcı, A. On modeling with multiplicative differential equations. Appl. Math.-J. Chin. Univ. 2011, 26, 425–438. [Google Scholar] [CrossRef]
  16. Willinger, W.; Govindan, R.; Jamin, S.; Paxson, V.; Shenker, S. Scaling phenomena in the Internet: Critically examining criticality. Proc. Natl. Acad. Sci. USA 2002, 99, 2573–2580. [Google Scholar] [CrossRef]
  17. Harima, Y.; Sakamoto, Y.; Tanaka, S.I.; Kawai, M. Validity of the geometric-progression formula in approximating gamma-ray buildup factors. Nucl. Sci. Eng. 1986, 94, 24–35. [Google Scholar] [CrossRef]
  18. Ozbay, S. Modified Backpropagation Algorithm with Multiplicative Calculus in Neural Networks. Elektronika ir Elektrotechnika 2023, 29, 55–61. [Google Scholar] [CrossRef]
  19. Karthikeyan, K.R.; Murugusundaramoorthy, G. Properties of a Class of Analytic Functions Influenced by Multiplicative Calculus. Fractal Fract. 2024, 8, 131. [Google Scholar] [CrossRef]
  20. Othman, G.M.; Yurtkan, K.; Özyapıcı, A. Improved digital image interpolation technique based on multiplicative calculus and Lagrange interpolation. Signal Image Video Process. 2023, 17, 3953–3961. [Google Scholar] [CrossRef]
  21. Eyilmaz, E.; Gunes, E. Inverse nodal problem for the Sturm-Liouville equation in multiplicative case. Annal. Math. Comput. Sci. 2023, 13, 42–52. [Google Scholar]
  22. Rasham, T.; Nazam, M.; Agarwal, P.; Hussain, A.; Al Sulmi, H.H. Existence results for the families of multi-mappings with applications to integral and functional equations. J. Inequalities Appl. 2023, 2023, 82. [Google Scholar] [CrossRef]
  23. Goktas, S. A New Type of Sturm-Liouville Equation in the Non-Newtonian Calculus. J. Funct. Spaces 2021, 1, 5203939. [Google Scholar] [CrossRef]
  24. Yalcın, N.; Dedeturk, M. Solutions of multiplicative linear differential equations via the multiplicative power series method. Sigma 2023, 41, 837–847. [Google Scholar]
  25. Calogero, F.; Yi, G. Can the general solution of the second-order ODE characterizing Jacobi polynomials be polynomial? J. Phy. A Math. Theor. 2012, 45, 095206. [Google Scholar] [CrossRef]
  26. Sana, G.; Mohammed, P.O.; Shin, D.Y.; Noor, M.A.; Oudat, M.S. On iterative methods for solving nonlinear equations in quantum calculus. Fractal Fract. 2021, 5, 60. [Google Scholar] [CrossRef]
  27. Mateen, A.; Zhang, Z.; Ali, M.A.; Feckan, M. Generalization of Some Integral Inequalities in Multiplicative Calculus with Their Computational Analysis. Preprint 2024, 1–509. [Google Scholar] [CrossRef]
  28. Bilgehan, B.; Özyapıcı, A.; Hammouch, Z.; Gurefe, Y. Predicting the spread of COVID-19 with a machine learning technique and multiplicative calculus. Soft Comput. 2022, 26, 8017–8024. [Google Scholar] [CrossRef]
  29. Boruah, K.; Hazarika, B. Some basic properties of bigeometric calculus and its applications in numerical analysis. Afr. Mat. 2021, 32, 211–227. [Google Scholar] [CrossRef]
  30. Goktas, S.; Yilmaz, E.; Yar, A.C. Multiplicative derivative and its basic properties on time scales. Math. Meth. Appl. Sci. 2022, 45, 2097–2109. [Google Scholar] [CrossRef]
  31. Özyapıcı, A.; Sensoy, Z.B.; Karanfiller, T. Effective Root-Finding Methods for Nonlinear Equations Based on Multiplicative Calculi. J. Math. 2016, 2016, 8174610. [Google Scholar] [CrossRef]
  32. Du, T.; Peng, Y. Hermite–Hadamard type inequalities for multiplicative Riemann–Liouville fractional integrals. J. Comput. Appl. Math. 2024, 440, 115582. [Google Scholar] [CrossRef]
  33. Chun, C. Some fourth-order iterative methods for solving nonlinear equations. Appl. Math. Comput. 2008, 195, 454–459. [Google Scholar] [CrossRef]
  34. Shams, M.; Kausar, N.; Samaniego, C.; Agarwal, P.; Ahmed, S.F.; Momani, S. On efficient fractional Caputo-type simultaneous scheme for finding all roots of polynomial equations with biomedical engineering applications. Fractals 2023, 31, 2340075. [Google Scholar] [CrossRef]
  35. Gupta, A.; Kumar, S. A multiplicative Gauss-Newton minimization algorithm: Theory and application to exponential functions. Appl. Math.-J. Chin. Univ. 2021, 36, 370–389. [Google Scholar] [CrossRef]
  36. Unal, E.; Cumhur, I.; Gokdogan, A. Multiplicative Newton’s Methods with Cubic Convergence. New Trends Math. Sci. 2017, 5, 299–307. [Google Scholar] [CrossRef]
  37. Singh, G.; Bhalla, S.; Behl, R. Higher-order multiplicative derivative iterative scheme to solve the nonlinear problems. Math. Comput. Appl. 2023, 28, 23. [Google Scholar] [CrossRef]
  38. Waseem, M.; Noor, M.A.; Shah, F.A.; Noor, K.I. An efficient technique to solve nonlinear equations usingmultiplicative calculus. Turk. J. Math. 2018, 42, 679–691. [Google Scholar] [CrossRef]
  39. Chicharro, F.I.; Cordero, A.; Garrido, N.; Torregrosa, J.R. Stability and applicability of iterative methods with memory. J. Math. Chem. 2019, 57, 1282–1300. [Google Scholar] [CrossRef]
  40. Rafiq, N.; Akram, S.; Mir, N.A.; Shams, M. Study of dynamical behavior and stability of iterative methods for nonlinear equation with applications in engineering. Math. Prob. Eng. 2020, 2020, 3524324. [Google Scholar] [CrossRef]
  41. Kennes, R. Computational aspects of the Mobius transformation of graphs. IEEE Trans. Syst. Man Cybern. 1992, 22, 201–223. [Google Scholar] [CrossRef]
  42. Cordero, A.; Reyes, J.A.; Torregrosa, J.R.; Vassileva, M.P. Stability Analysis of a New Fourth-Order Optimal Iterative Scheme for Nonlinear Equations. Axioms. 2023, 13, 34. [Google Scholar] [CrossRef]
  43. Herceg, D.; Tričković, S.; Petković, M. On the fourth order methods of Weierstrass’ type. Nonlinear Anal. Theory Methods Appl. 1997, 30, 83–88. [Google Scholar] [CrossRef]
  44. Anourein, A.W.M. An improvement on two iteration methods for simultaneous determination of the zeros of a polynomial. Int. J. Comput. Math. 1977, 6, 241–252. [Google Scholar] [CrossRef]
  45. Petkovic, M.S.; Petkovic, L.D.; Dcunic, J. On an efficient method for the simultaneous approximation of polynomial multiple roots. Appl. Anal. Discret. Math. 2014, 1, 73–94. [Google Scholar] [CrossRef]
  46. Petkovic, M.S.; Petkovic, L.D.; Dzunic, J. On an efficient simultaneous method for finding polynomial zeros. Appl. Math. Lett. 2014, 28, 60–65. [Google Scholar] [CrossRef]
  47. Kou, J.; Li, Y.; Wang, X. Some variants of Ostrowski’s method with seventh-order convergence. J. Comput. Appl. Math. 2007, 209, 153–159. [Google Scholar] [CrossRef]
  48. Diethelm, K.; Siegmund, S.; Tuan, H.T. Asymptotic behavior of solutions of linear multi-order fractional differential systems. Fract. Calc. Appl. Anal. 2017, 20, 1165–1195. [Google Scholar] [CrossRef]
  49. Shams, M.; Rafiq, N.; Kausar, N.; Agarwal, P.; Park, C.; Mir, N.A. On highly efficient derivative-free family of numerical methods for solving polynomial equation simultaneously. Adv. Differ. Equ. 2021, 2021, 465. [Google Scholar] [CrossRef]
  50. Shams, M.; Rafiq, N.; Kausar, N.; Agarwal, P.; Park, C.; Momani, S. Efficient iterative methods for finding simultaneously all the multiple roots of polynomial equation. Adv. Differ. Equ. 2021, 2021, 495. [Google Scholar] [CrossRef]
  51. Yang, Y. Solving a nonlinear multi-order fractional differential equation using Legendre pseudo-spectral method. Appl. Math. 2013, 4, 113–118. [Google Scholar] [CrossRef]
  52. Uwaheren, O.A.; Adebisi, A.F.; Ishola, C.Y.; Raji, M.T.; Yekeen, A.O.; Peter, O.J. Numerical Solution of Volterra integro-differential Equations by Akbari-Ganji’s Method. BAREKENG: Jurnal Ilmu Mat. Terap. 2022, 16, 1123–1130. [Google Scholar] [CrossRef]
  53. Ziada, E.A.A. Solution of Nonlinear Fractional Differential Equations Using Adomain Decomposition Method. J. Syst. Sci. Appl. Math. 2021, 6, 111–119. [Google Scholar]
  54. Ray, S.S.; Bera, R.K. An approximate solution of a nonlinear fractional differential equation by Adomian decomposition method. Appl. Math. Comput. 2005, 167, 561–571. [Google Scholar]
  55. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. Increasing the convergence order of an iterative method for nonlinear systems. Appl. Math. Lett. 2012, 25, 2369–2374. [Google Scholar] [CrossRef]
  56. Rafiq, N.; Akram, S.; Shams, M.; Mir, N.A. Computer geometries for finding all real zeros of polynomial equations simultaneously. Comput. Math. Contin. 2021, 69, 2636–2651. [Google Scholar] [CrossRef]
Figure 1. (a,b): The stability zones of the multiplicative simple root-finding method, CM [ ] , for various values of β and α .
Mathematics 12 03501 g001
Figure 2. (ad): The stability zones of the multiplicative simple root-finding method, CM [ ] , for various values of β and α .
Mathematics 12 03501 g002
Figure 3. The region of the CM [ ] -generated iterative map contains the critical points.
Mathematics 12 03501 g003
Figure 4. (ae): CM [ ] -generated rational map for solving (68) with dynamical planes for different parameter α , β values: Stable behavior. (a) The dynamic regions for α 0.001 0.001 i and β = 1 . (b) The dynamic regions for α 0.1 + 0.00 i and β = 1 . (c) The dynamic regions for α 0.1 i and β = 1 . (d) The dynamic regions for α 2.1 2.1 i and β = 1 . (e) The dynamic regions for α 3.001 + 3.001 i and β = 1 .
Mathematics 12 03501 g004
Figure 5. (ae): CM [ ] -generated rational map for solving (68) with dynamical planes for different parameter α , β values: Stable behavior. (a) The dynamic regions for α 2.1 i and β = 2 . (b) The dynamic regions for α 0.1 + 0.1 i and β = 2 . (c) The dynamic regions for α 0.001 and β = 2 . (d) The dynamic regions for α 0.5 i and β = 2 . (e) The dynamic regions for α 0.7 and β = 2 .
Mathematics 12 03501 g005
Figure 6. (ae): CM [ ] -generated rational map for solving (68) with dynamical planes for different parameter α , β values: Stable behavior. (a) The dynamic regions for α 0.1 + 0.1 i and β = 0.5 . (b) The dynamic regions for α 0.03 + 0.03 i and β = 0.3 . (c) The dynamic regions for α 0.04 0.07 i and β = 9.5 . (d) The dynamic regions for α 0.01 + 0.09 i and β = 3.3 i . (e) The dynamic regions for α 0.001 and β = 0.5 .
Mathematics 12 03501 g006
Figure 7. (a,b): CM[ ]-generated rational map for solving (68), with dynamical planes for different parameter values α, β: unpredictable behavior. (a) The unpredictable behavior for α = 0.5i and β = 20.9. (b) The unpredictable behavior for α = 10.9 and β = 30.5.
Figure 8. (a–c): CM[ ]-generated rational map for solving (68), with dynamical planes for different parameter values α, β. (a) The dynamic regions for α = 1.9i and β = 0.5. (b) The dynamic regions for α = 0.08 − 1.0i and β = 1.5. (c) The dynamic regions for α = 5.1 and β = 1.1.
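The CM[ ] maps in these dynamical planes are built around a second-order multiplicative root-finding corrector. As background, here is a minimal sketch of one common form of the multiplicative Newton iteration, x_{k+1} = x_k − ln f(x_k) / (ln f)'(x_k), i.e. classical Newton applied to ln f. The test function f(x) = exp(x² − 4) is an illustrative assumption and is not Equation (68):

```python
import math

def multiplicative_newton(ln_f, dln_f, x0, tol=1e-12, max_iter=60):
    """One common form of the multiplicative Newton iteration:
    x_{k+1} = x_k - ln f(x_k) / (ln f)'(x_k), i.e. classical Newton
    applied to ln f; second-order convergent at simple roots."""
    x = x0
    for k in range(1, max_iter + 1):
        step = ln_f(x) / dln_f(x)
        x -= step
        if abs(step) < tol:
            return x, k
    return x, max_iter

# Illustrative target (an assumption, not Eq. (68)): f(x) = exp(x**2 - 4),
# so ln f(x) = x**2 - 4 and (ln f)'(x) = 2*x; the positive root is x = 2.
root, iters = multiplicative_newton(lambda x: x**2 - 4, lambda x: 2 * x, x0=3.0)
```

Because the update acts on ln f, the iteration inherits Newton's quadratic convergence while respecting the multiplicative structure of f.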
Figure 9. The error graph of the multiplicative calculus-based parallel scheme CM1[ ] for solving (68) using random test vectors.
Figure 10. (a–c): CM[ ]-generated rational map for solving (70), with dynamical planes for different parameter values α, β. (a) The dynamic regions for α = 3.4 − 3.4i and β = 1.1i. (b) The dynamic regions for α = 6.9 and β = 2.5. (c) The dynamic regions for α = 4.9 and β = 6.3.
Figure 11. The error graph of the multiplicative calculus-based parallel scheme CM1[ ] for solving (70) using random test vectors.
Table 1. The dynamic outputs of CM[ ] for solving (68).
α             β      Iterations   Total Points   Converging Points   Elapsed Time
0.1 + 0.1i    1.0    21           640,000        54.9854%            2.14584165
0.02 + 0.02i  1.5    22           640,000        59.3254%            3.65483535
−0.05         0.05   27           640,000        58.5412%            5.36528456
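The 640,000 total points in Table 1 correspond to an 800 × 800 grid of complex starting values, and the converging-point percentage counts grid points whose orbit lands near a root within the iteration cap. A hedged sketch of how such dynamical-plane statistics can be collected, with plain Newton on z³ − 1 standing in for the CM[ ] rational map (grid size reduced for speed):

```python
import numpy as np

def basin_stats(step, roots, n=200, box=2.0, max_iter=25, tol=1e-6):
    """Iterate the map `step` from every point of an n-by-n grid over
    [-box, box]^2 and return the percentage of starting points whose
    orbit ends within tol of one of `roots`."""
    xs = np.linspace(-box, box, n)
    X, Y = np.meshgrid(xs, xs)
    Z = X + 1j * Y
    for _ in range(max_iter):
        Z = step(Z)
    # distance from each final iterate to the nearest root
    dist = np.min(np.abs(Z[..., None] - np.asarray(roots)), axis=-1)
    return 100.0 * np.count_nonzero(dist < tol) / Z.size

# Stand-in map: Newton's iteration for z**3 - 1 = 0 (not the CM[ ] map)
newton = lambda z: z - (z**3 - 1) / (3 * z**2)
cube_roots = [1.0, np.exp(2j * np.pi / 3), np.exp(-2j * np.pi / 3)]
pct = basin_stats(newton, cube_roots, n=200)
```

Coloring each grid point by the index of its nearest root (instead of only counting) produces the basin-of-attraction pictures shown in the figures above.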
Table 2. The numerical results of parallel computer techniques for solving (68).
Err      PM[ ]             SM1[ ]             SM2[ ]             CM1[ ]
e1[5]    9.31 × 10^−45     1.11 × 10^−99      1.1 × 10^−104      0.0
e2[5]    2.09 × 10^−55     3.02 × 10^−76      0.0                0.0
e3[5]    0.17 × 10^−76     0.12 × 10^−85      0.0                0.0
e4[5]    0.07 × 10^−39     6.27 × 10^−101     3.95 × 10^−165     2.0 × 10^−285
e5[5]    1.07 × 10^−39     6.27 × 10^−101     6.93 × 10^−165     0.0
CPU      0.0045            0.0063             0.0023             0.0011
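PM[ ], SM1[ ], SM2[ ], and CM1[ ] are simultaneous (parallel) schemes that refine approximations to all roots at once. The classical Weierstrass/Durand–Kerner step is the simplest member of that family and is sketched below purely as a reference point; it is not the scheme compared in the table:

```python
def durand_kerner(p, guesses, sweeps=60):
    """Weierstrass/Durand-Kerner simultaneous iteration for a monic
    polynomial p of degree n: each approximation z_i is corrected by
    p(z_i) divided by the product of its distances to the other z_j."""
    z = list(guesses)
    n = len(z)
    for _ in range(sweeps):
        for i in range(n):
            denom = 1.0
            for j in range(n):
                if j != i:
                    denom *= z[i] - z[j]
            z[i] -= p(z[i]) / denom
    return z

# Monic cubic z**3 - 1 with the standard (0.4 + 0.9j)**k starting values
p = lambda w: w**3 - 1
approx = durand_kerner(p, [(0.4 + 0.9j) ** k for k in range(1, 4)])
```

Because every approximation is updated from the current positions of all the others, the sweeps parallelize naturally across roots, which is the structural feature the compared schemes share.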
Table 3. Parallel scheme analysis for close starting values leading to an exact solution.
Err             PM[ ]             SM1[ ]             SM2[ ]             CM1[ ]
P-convergence   93%               97%                99%                100%
P-divergence    7.0%              3.0%               1.0%               0.0%
A-Iterations    07                5                  5                  3
T-functions     31                21                 20                 16
Operations-it   78                63                 60                 48
Max-Error       0.07 × 10^−39     6.27 × 10^−101     4.9 × 10^−165      0.0
Local-COC       8.0045            10.0063            11.0023            12.0011
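The Local-COC row estimates the computational order of convergence from three consecutive error norms via ρ ≈ ln(e_{n+1}/e_n) / ln(e_n/e_{n−1}). A small sketch of this estimator on a synthetic, quadratically convergent error sequence (the data are illustrative, not taken from the table):

```python
import math

def coc(e_prev, e_curr, e_next):
    """Computational order of convergence from three successive errors:
    rho = ln(e_next / e_curr) / ln(e_curr / e_prev)."""
    return math.log(e_next / e_curr) / math.log(e_curr / e_prev)

# Synthetic errors with e_{n+1} = e_n**2, i.e. quadratic convergence
errs = [1e-2, 1e-4, 1e-8, 1e-16]
rho = coc(errs[1], errs[2], errs[3])  # ≈ 2.0
```

Applied to the last three error norms of each scheme, this quotient reproduces order estimates of the kind reported in the Local-COC row.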
Table 4. Outcomes of the multiplicative scheme CM1[ ] for different initial test vectors for solving (68).
Ran-Test   Iterations   |τ_{i+1} − τ_i|      CPU
1          36           0.2543 × 10^−15      0.097
2          24           1.4433 × 10^−17      0.096
3          22           5.7343 × 10^−26      0.076
4          12           0.5433 × 10^−14      0.066
5          06           2.6453 × 10^−16      0.001
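Each row of Table 4 records, for one random starting vector, the iteration count and the final increment |τ_{i+1} − τ_i| used as the stopping criterion. A hedged sketch of that experimental loop, with Newton on z³ − 1 again standing in for the CM1[ ] scheme and an assumed sampling box for the random starts:

```python
import random

def run_trial(step, x0, tol=1e-12, max_iter=100):
    """Iterate `step` until |x_{k+1} - x_k| < tol; report the iteration
    count, the final increment, and the final iterate."""
    x, inc, k = x0, float("inf"), 0
    while inc >= tol and k < max_iter:
        x_new = step(x)
        inc, x, k = abs(x_new - x), x_new, k + 1
    return k, inc, x

newton = lambda z: z - (z**3 - 1) / (3 * z**2)  # stand-in iteration map
random.seed(7)  # fixed seed so the "random" trials are reproducible
trials = [run_trial(newton, complex(random.uniform(0.8, 2.0),
                                    random.uniform(-0.5, 0.5)))
          for _ in range(5)]
```

Logging (iteration count, final increment, elapsed time) per trial yields exactly the row structure of Tables 4 and 8.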
Table 5. The dynamic outputs of CM[ ] for solving (70).
α             β      Iterations   Total Points   Converging Points   Elapsed Time
0.1 + 0.1i    1.0    21           640,000        54.545%             1.33464564
0.02 + 0.02i  1.5    18           640,000        59.5412%            4.67454366
−0.05         1.00   20           640,000        58.2541%            5.233554675
Table 6. The numerical results of parallel computer techniques for solving (70).
Err        PM[ ]             SM1[ ]            SM2[ ]            CM1[ ]
e1[3]      0.23 × 10^−53     1.1 × 10^−51      0.0               0.0
e2[3]      1.44 × 10^−44     0.0               0.0               0.0
e3[3]      7.43 × 10^−45     5.01 × 10^−57     5.03 × 10^−65     5.34 × 10^−65
e4[3]      2.87 × 10^−45     1.02 × 10^−57     1.02 × 10^−65     5.06 × 10^−65
e5[3]      0.76 × 10^−45     5.02 × 10^−57     3.02 × 10^−65     5.97 × 10^−65
σi[n−1]    0.0036            0.0023            0.0011            0.0011
Table 7. Parallel scheme analysis for close starting values yielding an exact solution.
Err             PM[ ]             SM1[ ]             SM2[ ]             CM1[ ]
P-convergence   93%               97%                99%                100%
P-divergence    7.0%              3.0%               1.0%               0.0%
A-Iterations    07                5                  5                  3
T-functions     31                21                 20                 16
Operations-it   78                63                 60                 48
Max-Error       0.07 × 10^−39     6.27 × 10^−101     4.9 × 10^−165      0.0
Local-COC       8.0045            10.0063            11.0023            12.0011
Table 8. Results of the multiplicative scheme CM1[ ] for different initial test vectors for (70).
Ran-Test   n    |τ_{i+1} − τ_i|      CPU
1          05   0.37567 × 10^−4      0.09732
2          05   1.54567 × 10^−10     0.08634
3          06   0.75635 × 10^−11     0.08823
4          06   7.46535 × 10^−12     0.07643
5          05   3.76543 × 10^−27     0.00123

Shams, M. On a Stable Multiplicative Calculus-Based Hybrid Parallel Scheme for Nonlinear Equations. Mathematics 2024, 12, 3501. https://doi.org/10.3390/math12223501

