Article

Solving Nonlinear Equation Systems via a Steffensen-Type Higher-Order Method with Memory

1 Foundation Department, Changchun Guanghua University, Changchun 130033, China
2 School of Statistics, Beijing Normal University, Beijing 100875, China
3 IT Department, Sichuan Rural Commercial United Bank, Chengdu 610000, China
4 School of Mathematics and Statistics, Northeastern University at Qinhuangdao, Qinhuangdao 066004, China
5 Department of Mathematics and Applied Mathematics, School of Mathematical and Natural Sciences, University of Venda, P. Bag X5050, Thohoyandou 0950, South Africa
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(23), 3655; https://doi.org/10.3390/math12233655
Submission received: 30 September 2024 / Revised: 8 November 2024 / Accepted: 18 November 2024 / Published: 22 November 2024
(This article belongs to the Special Issue Advances in Computational Mathematics and Applied Mathematics)

Abstract: This article introduces a multi-step solver for sets of nonlinear equations. To achieve this, we consider and develop a multi-step Steffensen-type method without memory, which does not require evaluations of the Fréchet derivatives, and subsequently extend it to a method with memory. The resulting convergence order is $\sqrt{5} + 2 \approx 4.236$, attained with the same number of functional evaluations as the solver without memory, thereby yielding a higher computational efficiency index. Finally, we illustrate the advantages of the proposed scheme with memory through various test problems.

1. Introductory Notes

Consider a set of square algebraic nonlinear problems [1,2]:
H ( η ) = 0 ,
wherein $H(\eta) = (\nu_1(\eta), \nu_2(\eta), \ldots, \nu_\omega(\eta))^T$ and the $\nu_i(\eta)$, $1 \le i \le \omega$, are coordinate functions. Assume that $H(\eta)$ is a sufficiently differentiable function of $\eta$ within a convex open set $D \subseteq \mathbb{R}^\omega$. We first revisit some pioneering iterative methods for addressing (1). Newton's method (NM), widely used for such problems, is formulated as follows [3]:
$H'(g^{(\chi)})\,\delta^{(\chi)} = -H(g^{(\chi)}), \quad \chi = 0, 1, 2, \ldots, \qquad g^{(\chi+1)} = g^{(\chi)} + \delta^{(\chi)}.$ (2)
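For illustration, the NM step just described can be transcribed directly. The following is a minimal Python sketch of our own (not from the paper); the 2-D test system is hypothetical:

```python
import numpy as np

def newton_system(H, Hprime, g, tol=1e-12, max_iter=50):
    """NM: solve H'(g) delta = -H(g), then update g <- g + delta."""
    for _ in range(max_iter):
        delta = np.linalg.solve(Hprime(g), -H(g))
        g = g + delta
        if np.linalg.norm(delta) < tol:
            break
    return g

# Hypothetical 2-D test system: H(x, y) = (x^2 + y^2 - 4, x*y - 1)
H = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
Hp = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])

root = newton_system(H, Hp, np.array([2.0, 0.5]))
```

Each iterate performs one linear solve with the explicit Jacobian, which is exactly the derivative information the Steffensen-type variants below avoid.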
This approach achieves second-order convergence, provided that the starting vector $g^{(0)}$ is sufficiently close to the true root $\theta$. To overcome certain constraints associated with the NM, alternative solvers [4,5] have been developed, including Steffensen's method (SM), which operates without the need for derivative calculations [6]. The formulation of the SM for addressing nonlinear systems is outlined in [7]:
$\lambda^{(\chi)} = g^{(\chi)} + H(g^{(\chi)}), \qquad g^{(\chi+1)} = g^{(\chi)} - [g^{(\chi)}, \lambda^{(\chi)}; H]^{-1} H(g^{(\chi)}), \quad \chi = 0, 1, 2, \ldots,$ (3)
which utilizes the divided difference operator (DDO). The first-order DDO of $H$ at the high-dimensional knots $\varsigma$ and $\zeta$ is defined componentwise via ($1 \le i, j \le \omega$):
$[\varsigma, \zeta; H]_{i,j} = \dfrac{H_i(\varsigma_1, \ldots, \varsigma_j, \zeta_{j+1}, \ldots, \zeta_\omega) - H_i(\varsigma_1, \ldots, \varsigma_{j-1}, \zeta_j, \ldots, \zeta_\omega)}{\varsigma_j - \zeta_j}.$ (4)
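This componentwise definition can be transcribed directly. The following Python sketch is our own (the test map is hypothetical); the columns telescope, so the operator satisfies the secant property $[\varsigma, \zeta; H](\varsigma - \zeta) = H(\varsigma) - H(\zeta)$, which can be verified numerically:

```python
import numpy as np

def divided_difference(H, s, z):
    """Componentwise first-order DDO: column j switches from s- to
    z-arguments after position j, divided by s_j - z_j."""
    n = len(s)
    M = np.empty((n, n))
    for j in range(n):
        upper = np.concatenate([s[:j + 1], z[j + 1:]])  # (s_1..s_j, z_{j+1}..z_n)
        lower = np.concatenate([s[:j], z[j:]])          # (s_1..s_{j-1}, z_j..z_n)
        M[:, j] = (H(upper) - H(lower)) / (s[j] - z[j])
    return M

# Hypothetical smooth test map and two distinct points
H = lambda v: np.array([v[0]**2 + v[1], np.sin(v[0]) + v[1]**2])
s = np.array([1.2, 0.7])
z = np.array([0.9, 0.4])
M = divided_difference(H, s, z)
```

Summing the columns against $(\varsigma_j - \zeta_j)$ telescopes to $H(\varsigma) - H(\zeta)$ exactly, which is a convenient sanity check on any implementation.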
More generally, the DDO for H on R ω can be defined as [7]:
$[\cdot, \cdot\,; H] : D \subseteq \mathbb{R}^\omega \times \mathbb{R}^\omega \to L(\mathbb{R}^\omega),$ (5)
which satisfies $[\zeta, \varsigma; H](\zeta - \varsigma) = H(\zeta) - H(\varsigma)$ for all $\varsigma, \zeta \in D$. Using $h = \zeta - \varsigma$, the first-order DDO can also be given by [8]:
$[\varsigma + h, \varsigma; H] = \int_0^1 H'(\varsigma + t h)\, dt.$ (6)
However, the definitions (4)–(6) generally yield dense matrices for the representation of the DDO, which restricts the applicability of the SM for tackling (1) to some extent.
The author in [9] provided another procedure to compute the DDO in a similar way as follows:
$[g^{(\chi)} + F^{(\chi)}, g^{(\chi)}; H] = \big(H(g^{(\chi)} + F^{(\chi)} e_1) - H(g^{(\chi)}), \ldots, H(g^{(\chi)} + F^{(\chi)} e_\omega) - H(g^{(\chi)})\big)\, (F^{(\chi)})^{-1},$ (7)
with $F^{(\chi)} = \operatorname{diag}(\nu_1(g^{(\chi)}), \nu_2(g^{(\chi)}), \ldots, \nu_\omega(g^{(\chi)}))$. Traub introduced an improvement to the NM (TM) with local cubic convergence [9]:
$\gamma^{(\chi)} = g^{(\chi)} - H'(g^{(\chi)})^{-1} H(g^{(\chi)}), \qquad g^{(\chi+1)} = \gamma^{(\chi)} - H'(g^{(\chi)})^{-1} H(\gamma^{(\chi)}).$ (8)
Another well-known and effective method for resolving (1) is the fourth-order Jarratt technique (JM) [10,11], which is described as:
$y^{(\chi)} = g^{(\chi)} - \tfrac{2}{3} H'(g^{(\chi)})^{-1} H(g^{(\chi)}), \qquad g^{(\chi+1)} = g^{(\chi)} - \tfrac{1}{2} \big(3 H'(y^{(\chi)}) - H'(g^{(\chi)})\big)^{-1} \big(3 H'(y^{(\chi)}) + H'(g^{(\chi)})\big)\, H'(g^{(\chi)})^{-1} H(g^{(\chi)}).$ (9)
An improvement of (3) was furnished in [12,13] as:
$z^{(\chi)} = g^{(\chi)} - [g^{(\chi)}, w^{(\chi)}; H]^{-1} H(g^{(\chi)}), \qquad g^{(\chi+1)} = z^{(\chi)} - [g^{(\chi)}, w^{(\chi)}; H]^{-1} H(z^{(\chi)}),$ (10)
where
$w^{(\chi)} = g^{(\chi)} + \vartheta H(g^{(\chi)}), \quad \vartheta \in \mathbb{R}.$ (11)
The distinction in (10), as opposed to (3), lies in its utilization of two substeps, and consequently two ω-dimensional function evaluations, enabling it to achieve a rate superior to quadratic. The core concept is to freeze the DDO during each cycle and subsequently augment the substeps to maximize the order enhancement and improve the computational efficiency index (CEI) of the solver.
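As an illustration of freezing the DDO, scheme (10)–(11) can be sketched in Python (our own transcription; the test system and the choice ϑ = 0.01 are ours, not from the paper):

```python
import numpy as np

def ddo(H, s, z):
    """Componentwise first-order divided difference [s, z; H]."""
    n = len(s)
    M = np.empty((n, n))
    for j in range(n):
        up = np.concatenate([s[:j + 1], z[j + 1:]])
        lo = np.concatenate([s[:j], z[j:]])
        M[:, j] = (H(up) - H(lo)) / (s[j] - z[j])
    return M

def two_step_steffensen(H, g, theta=0.01, tol=1e-10, max_iter=100):
    """Scheme with w = g + theta*H(g); the DDO [g, w; H] is frozen and
    reused for both substeps of every iterate."""
    for _ in range(max_iter):
        Hg = H(g)
        if np.linalg.norm(Hg) < tol:
            break
        w = g + theta * Hg
        M = ddo(H, g, w)                  # frozen for the whole cycle
        z = g - np.linalg.solve(M, Hg)
        g = z - np.linalg.solve(M, H(z))
    return g

# Hypothetical test system: H(x, y) = (x^2 + y^2 - 4, x*y - 1)
H = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
root = two_step_steffensen(H, np.array([1.9, 0.5]))
```

Both substeps reuse the same matrix M, so a single factorization per iterate suffices, which is the design principle carried into the multi-step scheme below.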
This manuscript is motivated by the objective of creating a multi-step fast iterative solver that improves both accuracy and efficiency in addressing nonlinear equation sets via (10). By eliminating the reliance on Fréchet derivatives, our proposed method based on an extension over TM seeks to alleviate the complexities and computational demands associated with derivative-involved approaches, thereby advancing the domain of this field. The primary objective is to furnish a higher-order derivative-free Steffensen-type solver capable of addressing nonlinear systems, encompassing both complex and real solutions. Our intention is to enhance computational efficiency by reducing the frequency of matrix inversions and functional evaluations, in accordance with the principles of numerical analysis. In this work, functional evaluations refer to both function and derivative evaluations, which differ from the concept typically used in the Calculus of Variations.
This article is organized as follows. Section 2 explores the memorization technique utilized in the Steffensen-type scheme for addressing nonlinear sets of equations. Section 3 formulates a multi-step approach comprising several substeps to achieve rapid convergence while minimizing the number of LU decompositions, using a with-memory structure to accelerate the convergence as much as possible. Section 4 presents an error analysis and assesses the rate of convergence. Subsequently, Section 5 examines the CEI of various methods, focusing on the number of functional evaluations and the flops-type index. Furthermore, Section 6 demonstrates the applicability and advantages of the proposed method with memory through its application to several problems. Finally, Section 7 offers concluding remarks.

2. With Memorization of the Iterative Methods

In this context, our objective is to enhance the CEI of (10) without adding additional steps or further DDOs for every iterate; see [14]. To achieve this, we utilize the concept of memory-based methods, which suggest that the speed of convergence and overall efficiency of iterative techniques can be enhanced by retaining and utilizing previously calculated function values.
Note that the error equation for (10) (the notation here will be detailed further in Section 3):
$\varepsilon^{(\chi+1)} = (I + \vartheta H'(\theta))(2 I + \vartheta H'(\theta))\, C_2\, (\varepsilon^{(\chi)})^3 + O\big((\varepsilon^{(\chi)})^4\big),$ (12)
contains a term expressed as follows:
$I + \vartheta H'(\theta) = 0.$ (13)
Here, the non-zero scalar $\vartheta$ in (13) significantly influences both the convergence domain and the enhancement of the convergence speed. When addressing a nonlinear set of problems, and since $\theta$ is unknown, one can approximate $H'(\theta)$ to bring the entire relation in (13) close to zero. Thus, we can express this as
$\vartheta \approx -H'(\bar{\theta})^{-1},$ (14)
where θ ¯ represents an estimation of the root (for each iteration).
It is crucial to elaborate on how one estimates the matrix $\vartheta := A^{(\chi)}$, ($\chi \ge 1$), by utilizing certain approximations of $H'(\theta)$ derived from the available data [15].
To enhance the performance of (10) through the principle of memory-based methods [13], we consider the following iterative expression without memory, denoted PM1:
$w^{(\chi)} = g^{(\chi)} + \vartheta H(g^{(\chi)}),$
$q^{(\chi)} = g^{(\chi)} - [g^{(\chi)}, w^{(\chi)}; H]^{-1} H(g^{(\chi)}),$
$z^{(\chi)} = q^{(\chi)} - [g^{(\chi)}, w^{(\chi)}; H]^{-1} H(q^{(\chi)}),$
$g^{(\chi+1)} = z^{(\chi)} - [g^{(\chi)}, w^{(\chi)}; H]^{-1} H(z^{(\chi)}).$ (15)
This solver reads the following error equation:
$\varepsilon^{(\chi+1)} = (I + \vartheta H'(\theta))(2 I + \vartheta H'(\theta))\, C_2^2\, (\varepsilon^{(\chi)})^4 + O\big((\varepsilon^{(\chi)})^5\big).$ (16)
Without loss of generality, we focus on the scalar case to analyze the dynamical behavior of the iterative methods in the complex plane, rather than extending to the multi-dimensional case for (19). Visualizing the fractal attraction basins of iterative methods for polynomial equations of various degrees in the complex plane is critical for several reasons [16,17], particularly when shading the plot based on the number of iterations required for convergence. In this context, different polynomial roots correspond to distinct regions of attraction. By mapping these basins, one can illustrate where initial guesses converge to specific roots. This step is essential in our work, demonstrating how the use of memory and small free parameter values can expand the convergence radii, thereby enlarging the region for selecting initial approximations.
Shading the plot based on the number of iterations needed for convergence offers insights into the solver’s effectiveness. Regions where the method converges rapidly can be identified as more stable or efficient, whereas areas requiring more iterations (or failing to converge) suggest potential inefficiencies or instability. Such analyses are illustrated in Figure 1, Figure 2, Figure 3 and Figure 4 over the domain $[-2, 2] \times [-2, 2]$, with a maximum iteration count of 150 and a tolerance of $10^{-2}$ for the residual as the stopping criterion. They reveal that the higher the order of a Steffensen-type method, the larger its attraction basin. Note that PM1 and PM2 are both Steffensen-type methods. In these new methods, the number of iterations needed to reach the root is lower than for the SM; due to this, they have lighter and fewer shaded areas in their attraction basins. Moreover, the convergence radii can be enlarged by selecting small values for the free non-zero scalar ϑ. Thus, memorization contributes not only to a higher efficiency index but also to larger attraction basins, which means higher stability for such a solver in contrast to the Steffensen-type method without memory.
Figure 1, Figure 2, Figure 3 and Figure 4 also reveal that, by observing how the number of iterations varies across the complex plane, one can assess the convergence properties of any of the methods. The fractal boundaries highlight regions where small changes in the initial guess can yield drastically different outcomes (i.e., converging to different roots or diverging). This sensitivity is crucial to understand, especially when implementing these methods in practical applications where the precision of the initial guess might be limited. These observations are used later in the paper to justify selecting a small value for the free nonzero parameter (carried forward to relation (37)) and to show how memorization can enlarge the convergence domain.

3. Derivation of the Scheme

To facilitate the implementation of the memory-based scheme (15), we will first examine
$\vartheta := A^{(\chi)} = -[w^{(\chi-1)}, g^{(\chi-1)}; H]^{-1} \approx -H'(\theta)^{-1},$ (17)
and
$N^{(\chi)} \delta^{(\chi)} = -H(g^{(\chi)}), \qquad N^{(\chi)} \gamma^{(\chi)} = -H(q^{(\chi)}), \qquad N^{(\chi)} \psi^{(\chi)} = -H(z^{(\chi)}),$ (18)
where $N^{(\chi)} := [g^{(\chi)}, w^{(\chi)}; H]$ denotes the frozen DDO of the current iterate.
Consequently, we now present the following scheme (PM2) as our main contribution ($A^{(\chi)} = -[w^{(\chi-1)}, g^{(\chi-1)}; H]^{-1}$, $\chi \ge 1$):
$w^{(\chi)} = g^{(\chi)} + A^{(\chi)} H(g^{(\chi)}), \quad \chi \ge 1,$
$q^{(\chi)} = g^{(\chi)} + \delta^{(\chi)}, \quad \chi \ge 0,$
$z^{(\chi)} = q^{(\chi)} + \gamma^{(\chi)},$
$g^{(\chi+1)} = z^{(\chi)} + \psi^{(\chi)}.$ (19)
The error equation of this solver with memory will be given in the next section.
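A compact Python sketch of the with-memory scheme is given below (our own transcription, not the authors' code; the test system, the sign and magnitude of the initial diagonal A, and the use of an explicit inverse for brevity are all our choices):

```python
import numpy as np

def ddo(H, s, z):
    """Componentwise first-order divided difference [s, z; H]."""
    n = len(s)
    M = np.empty((n, n))
    for j in range(n):
        up = np.concatenate([s[:j + 1], z[j + 1:]])
        lo = np.concatenate([s[:j], z[j:]])
        M[:, j] = (H(up) - H(lo)) / (s[j] - z[j])
    return M

def pm2(H, g, a0=0.01, tol=1e-10, max_iter=100):
    """Three substeps share one frozen DDO per iterate; A is refreshed from
    the previous iterate's DDO (the memory). The explicit inverse is used
    only to keep the sketch short."""
    A = a0 * np.eye(len(g))               # small diagonal start (our choice)
    for _ in range(max_iter):
        Hg = H(g)
        if np.linalg.norm(Hg) < tol:
            break
        w = g + A @ Hg
        N = ddo(H, g, w)                  # one DDO (hence one LU) per iterate
        q = g - np.linalg.solve(N, Hg)
        z = q - np.linalg.solve(N, H(q))
        g = z - np.linalg.solve(N, H(z))
        A = -np.linalg.inv(N)             # memory update for the next iterate
    return g

# Hypothetical test system: H(x, y) = (x^2 + y^2 - 4, x*y - 1)
H = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
root = pm2(H, np.array([1.9, 0.5]))
```

All three substeps reuse the same coefficient matrix N, so in a careful implementation one factorization per iterate serves every linear solve.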
It is well known [15] that the following holds when $D \subseteq \mathbb{R}^\omega$ represents a convex nonempty domain. Assume that $H$ is three-times Fréchet differentiable over $D$, and that $[u, v; H] \in L(D, D)$ for any distinct points $u, v \in D$ ($v \neq u$). Additionally, let the starting vector $g^{(0)}$ and the zero $\theta$ be in close proximity to each other. By defining $A^{(\chi)} = -[w^{(\chi-1)}, g^{(\chi-1)}; H]^{-1}$ and setting $d^{(\chi)} := I + A^{(\chi)} H'(\theta)$, we can finally derive the relation below:
$d^{(\chi)} \sim \varepsilon^{(\chi-1)}.$ (20)
The relation (20) will be used later in Section 4 of this work.
To implement (19), it is essential to resolve a series of linear algebraic sets of equations. This entails performing a new LU factorization at each iteration, without leveraging any information from preceding steps. Nevertheless, a substantial body of the literature exists regarding the recycling of such information to derive updated preconditioners for iterative solvers [18]. The advantage of (19) lies in the fact that all linear systems share the identical coefficient matrix. Consequently, a single LU factorization suffices; by retaining this decomposition, it can be applied to several distinct right-hand side parts to obtain the resolution vectors for each sub-cycle of (19).
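The factor-once, solve-many pattern can be made explicit as follows (a plain textbook LU with partial pivoting, written out for exposition; a production code would call a library routine instead):

```python
import numpy as np

def lu_factor(A):
    """Plain LU with partial pivoting (for exposition); returns the packed
    LU factors and the row permutation."""
    A = A.copy().astype(float)
    n = A.shape[0]
    piv = np.arange(n)
    for k in range(n):
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            piv[[k, p]] = piv[[p, k]]
        A[k + 1:, k] /= A[k, k]
        A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
    return A, piv

def lu_solve(LU, piv, b):
    """Forward/backward substitution reusing a stored factorization."""
    n = LU.shape[0]
    x = b[piv].astype(float)
    for i in range(1, n):                 # L y = P b (unit lower triangular)
        x[i] -= LU[i, :i] @ x[:i]
    for i in range(n - 1, -1, -1):        # U x = y
        x[i] = (x[i] - LU[i, i + 1:] @ x[i + 1:]) / LU[i, i]
    return x

# One factorization, several right-hand sides (hypothetical 3x3 example)
A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
LU, piv = lu_factor(A)
x1 = lu_solve(LU, piv, np.array([1.0, 2.0, 3.0]))
x2 = lu_solve(LU, piv, np.array([0.0, 1.0, 0.0]))
```

Only the $O(\omega^2)$ triangular substitutions are repeated per right-hand side; the $O(\omega^3)$ factorization is paid once, which is the source of the efficiency gain claimed for (19).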
The solutions of the nonlinear equation systems that we consider here lie in $D \subseteq \mathbb{R}^\omega$, as stated in Section 1. The roots that we seek are assumed to be simple zeros. Both real and complex roots can be obtained by the discussed methods (if they exist). In fact, by choosing a suitable complex initial guess, a complex root (if one exists) can be obtained.

4. Convergence Order

Here, we furnish a theoretical analysis of the convergence speed of the iterative scheme presented in (19). Before introducing the main contribution, we recall the ω-dimensional Taylor expansion.
The rate at which the iteration without memory (PM1) converges is determined via ω-dimensional Taylor expansions. Let $\varepsilon^{(\chi)} = g^{(\chi)} - \theta$ denote the error at the χ-th iterate. As noted in [19]:
$\varepsilon^{(\chi+1)} = G\, (\varepsilon^{(\chi)})^p + O\big((\varepsilon^{(\chi)})^{p+1}\big),$ (21)
this error equation implies that $G$ is a $p$-linear functional, with $G \in L(\mathbb{R}^\omega, \mathbb{R}^\omega, \ldots, \mathbb{R}^\omega)$, and $p$ is the speed. Additionally, we have:
$(\varepsilon^{(\chi)})^p = (\underbrace{\varepsilon^{(\chi)}, \varepsilon^{(\chi)}, \ldots, \varepsilon^{(\chi)}}_{p\ \text{terms}}).$ (22)
Assume that $H : D \subseteq \mathbb{R}^\omega \to \mathbb{R}^\omega$ is sufficiently differentiable in the Fréchet sense in $D$. Following [10], the ω-th derivative of $H$ at $u \in \mathbb{R}^\omega$, $\omega \ge 1$, is the ω-linear functional, i.e.,
$H^{(\omega)}(u) : \mathbb{R}^\omega \times \cdots \times \mathbb{R}^\omega \to \mathbb{R}^\omega,$ (23)
so that $H^{(\omega)}(u)(v_1, \ldots, v_\omega) \in \mathbb{R}^\omega$. For $\theta + h \in \mathbb{R}^\omega$ located in a vicinity of the solution $\theta$ of (1), the Taylor expansion can be formulated as [10]:
$H(\theta + h) = H'(\theta)\Big[h + \sum_{\omega=2}^{p-1} M_\omega h^\omega\Big] + O(h^p),$ (24)
where $M_\omega = \frac{1}{\omega!} [H'(\theta)]^{-1} H^{(\omega)}(\theta)$, $\omega \ge 2$. It follows that $M_\omega h^\omega \in \mathbb{R}^\omega$, as $H^{(\omega)}(\theta) \in L(\mathbb{R}^\omega \times \cdots \times \mathbb{R}^\omega, \mathbb{R}^\omega)$ and $[H'(\theta)]^{-1} \in L(\mathbb{R}^\omega)$. Additionally, for $H'$, we have:
$H'(\theta + h) = H'(\theta)\Big[I + \sum_{\omega=2}^{p-1} \omega M_\omega h^{\omega-1}\Big] + O(h^{p-1}),$ (25)
wherein $I$ denotes the identity matrix. Additionally, $\omega M_\omega h^{\omega-1} \in L(\mathbb{R}^\omega)$.
Theorem 1.
Let $H : D \subseteq \mathbb{R}^\omega \to \mathbb{R}^\omega$ in (1) be adequately Fréchet differentiable at every point of $D$, and let $H(\theta) = 0$ for some $\theta \in \mathbb{R}^\omega$. Additionally, let $H'(\eta)$ be continuous and nonsingular at θ. Then the sequence $\{g^{(\chi)}\}_{\chi \ge 0}$ produced by the with-memory scheme (19), with an appropriate choice of starting value, has R-convergence order $2 + \sqrt{5} \approx 4.23607$.
Proof. 
For proving the convergence speed, we consider (24) and (25) to write
$H(g^{(\chi)}) = H'(\theta)\big[\varepsilon^{(\chi)} + M_2 (\varepsilon^{(\chi)})^2 + M_3 (\varepsilon^{(\chi)})^3\big] + O\big((\varepsilon^{(\chi)})^4\big).$ (26)
In the context of the scheme (15), when operating without memory (i.e., PM1) and applying (24)–(26), we ultimately derive
$\varepsilon^{(\chi+1)} = (I + \vartheta H'(\theta))(2 I + \vartheta H'(\theta))\, C_2^2\, (\varepsilon^{(\chi)})^4 + O\big((\varepsilon^{(\chi)})^5\big).$ (27)
Now, by considering the memorization in (19), we express (27) in its asymptotic form as follows:
$\varepsilon^{(\chi+1)} \sim d_1^{(\chi)}\, (\varepsilon^{(\chi)})^4,$ (28)
where ∼ stands for the error relation without the asymptotic term. A variety of symbolic computations, conducted while bearing in mind that the coefficients of the error terms in our ω-dimensional scenario are all matrices and that their multiplication is not commutative, together with (20), allow us to deduce
$d_1^{(\chi)} \sim \varepsilon^{(\chi-1)}, \quad \chi \ge 1.$ (29)
Consequently, one arrives at
$(d_1^{(\chi)})^2 \sim (\varepsilon^{(\chi-1)})^2, \quad \chi \ge 1.$ (30)
By imposing (29) and (30) into (28), we obtain:
$\varepsilon^{(\chi+1)} \sim \varepsilon^{(\chi-1)}\, (\varepsilon^{(\chi)})^4.$ (31)
This demonstrates that $\frac{1}{p} + 4 = p$, so the convergence R-order is expressed as
$p = 2 + \sqrt{5} \approx 4.23607.$ (32)
The proof is concluded. □
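The recursion in the proof can also be checked numerically: iterating the log-error relation implied by (31) drives the ratio of successive log-errors to the root of $\frac{1}{p} + 4 = p$ (a sketch of our own, with hypothetical starting errors):

```python
import math

# Log-error recursion implied by epsilon_{k+1} ~ epsilon_{k-1} * epsilon_k**4,
# i.e., L_{k+1} = 4*L_k + L_{k-1} with L_k = log(error at step k).
L = [math.log(1e-1), math.log(5e-2)]   # hypothetical starting errors
for _ in range(12):
    L.append(4.0 * L[-1] + L[-2])
ratio = L[-1] / L[-2]                  # tends to the dominant root of p = 4 + 1/p
p = 2.0 + math.sqrt(5.0)
```

The ratio of consecutive log-errors settles rapidly because the second root of $x^2 = 4x + 1$ is small in magnitude, so a dozen steps already match the theoretical R-order to many digits.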
Before ending this section, we point out that, with a simple change to the structure of PM2, it is possible to provide another iteration scheme with memory of a similar kind (PM3), this time with $A^{(\chi)} = [w^{(\chi-1)}, g^{(\chi-1)}; H]^{-1}$, $\chi \ge 1$:
$w^{(\chi)} = g^{(\chi)} - A^{(\chi)} H(g^{(\chi)}), \quad \chi \ge 1,$
$q^{(\chi)} = g^{(\chi)} + \delta^{(\chi)}, \quad \chi \ge 0,$
$z^{(\chi)} = q^{(\chi)} + \gamma^{(\chi)},$
$g^{(\chi+1)} = z^{(\chi)} + \psi^{(\chi)}.$ (33)
If we aim to further improve the convergence order, two possible approaches can be considered. First, we could construct a solver by incorporating additional substeps and then apply the memorization procedure. Second, one might explore faster ways to accelerate the free parameter using alternative types of interpolation, which, however, falls beyond the scope of this paper.

5. Computational Efficiency Comparisons

For the proposed solver with memory, PM2 (or equivalently PM3), only a single LU factorization is required, followed by matrix-vector multiplications, which enhances computational efficiency by eliminating the need to compute multiple matrix inverses in each iteration. The CEI for iterative solvers is defined as follows [7]:
$CEI = p^{1/C},$ (34)
where C represents the total computational cost and p signifies the convergence speed, considering the quantity of functional evaluations. The cost of computing each scalar function is considered a unit, while the costs associated with other calculations are expressed as multiples of this unit cost. To evaluate the CEI for PM2, we first outline the number of functional evaluations (cost) required for ω -dimensional functions, as detailed below (excluding the indicator χ ):
  • In H ( g ) , H ( w ) , H ( q ) , H ( z ) : 4 ω evaluations.
  • In the DDO: ω 2 evaluations.
Furthermore, we consider the costs of solving the two triangular systems, all quantified in floating-point operations (flops). The flops necessary for executing the LU procedure amount to $2\omega^3/3$, while resolving the two related triangular systems requires approximately $2\omega^2$ flops. Note that, here, we assume that the cost of one functional evaluation is roughly equal to the cost of one flop. The findings displayed in Table 1 demonstrate that, for varying ω, the CEI of our method surpasses that of competing approaches. Comparisons of different derivative-free Steffensen-type solvers with and without memory, based on various choices of ω, are illustrated in Figure 5 and Figure 6, confirming the superiority and improvement in both the classic and flops-type efficiency indices of PM relative to its main competitors.
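As a rough illustration of such comparisons (our own cost tallies, not those of Table 1; in particular, the number of triangular-solve pairs per iterate is our assumption), the flops-type CEI can be computed as follows:

```python
import math

def cei(p, cost):
    """Computational efficiency index CEI = p**(1/C)."""
    return p ** (1.0 / cost)

def cost_pm2(w):
    # Assumed per-iterate cost: 4 omega-D function evaluations, one DDO,
    # one LU factorization, and four pairs of triangular solves.
    return (4 * w + w**2) + 2.0 * w**3 / 3.0 + 4 * (2 * w**2)

def cost_sm(w):
    # Assumed per-iterate cost of classical SM: one function evaluation,
    # one DDO, one LU factorization, and one pair of triangular solves.
    return (w + w**2) + 2.0 * w**3 / 3.0 + 2 * w**2

w = 10
cei_pm2 = cei(2.0 + math.sqrt(5.0), cost_pm2(w))  # with-memory order
cei_sm = cei(2.0, cost_sm(w))                      # Steffensen order
```

Even though the with-memory scheme does more work per iterate under these assumptions, its much higher order gives it the larger index, mirroring the trend reported in Table 1.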
In real-world applications, there is a trade-off between eliminating Fréchet derivatives and increasing the method’s overall computational complexity. Generally speaking, this depends on the specific problem leading to the nonlinear system of equations. However, by eliminating the Fréchet derivative, we make the solver derivative-free, which is suitable for problems in which the derivative is not available. Moreover, the concept of memorization can accompany methods without Fréchet derivatives to raise the convergence order without increasing the computational cost per iteration.
It should also be remarked that, in the absence of Fréchet derivatives, one might ask what specific mechanisms ensure that the proposed Steffensen-type technique remains robust over a variety of problem sets. To tackle this, we note that in the absence of Fréchet derivatives the convergence radius is, at first sight, reduced tremendously, which is one weak point of Steffensen-type techniques. However, this can be remedied simply by choosing very small values for the free non-zero parameter (as seen in the attraction basins of Section 2), as well as by imposing the with-memory concept.

6. Numerical Aspects

The objective of this section is to facilitate the implementation of our proposed scheme, PM2. The computations were executed using Mathematica 13.3 [20] in standard machine precision to handle round-off errors. The linear systems were resolved employing LU decomposition via LinearSolve[]. All computational examples were conducted in a uniform environment. We adopt the following stopping criterion:
$\| H(g^{(\chi+1)}) \| \le 10^{-6}.$ (35)
A significant challenge in implementing (19) involves the initialization of $A^{(\chi)}$, which is no longer constant and must be characterized as a matrix. Here, $A^{(0)}$ is delineated as follows:
$A^{(0)} = \operatorname{diag}(0.01, \ldots, 0.01).$ (36)
In fact, owing to Figure 1, Figure 2, Figure 3 and Figure 4 and the results discussed previously in [13], other small values for the free parameter can also be used, which yields different choices such as
$A^{(0)} = \operatorname{diag}(0.001, \ldots, 0.001),$ (37)
or
$A^{(0)} = \operatorname{diag}(0.0001, \ldots, 0.0001),$ (38)
or
$A^{(0)} = \operatorname{diag}(0.00001, \ldots, 0.00001).$ (39)
In fact, for the implementation of the method with memory, the first iterate is carried out with the without-memory version of the method; from the second iterate onward, the information from the previous step is used to update the parameter, and thus the memorization of the scheme can be carried out. The selection of $A^{(0)}$ has a direct impact on the entire process, influencing the speed at which convergence is achieved. Here, (37) aligns with the dynamical investigations of Steffensen-type solvers with memory, where larger basins of attraction occur when the free parameter is near zero.
To validate the analytical convergence rate in the computational experiments, we determine the numerical speed of convergence using [7]
$\rho \approx \dfrac{\ln\big(\| H(g^{(\chi+1)}) \|_2 / \| H(g^{(\chi)}) \|_2\big)}{\ln\big(\| H(g^{(\chi)}) \|_2 / \| H(g^{(\chi-1)}) \|_2\big)}.$ (40)
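This estimator is straightforward to transcribe (a sketch of our own; the residual sequence in the check is synthetic, generated with a known quadratic order):

```python
import math

def computational_order(res):
    """Numerical convergence speed from the last three residual norms."""
    return math.log(res[-1] / res[-2]) / math.log(res[-2] / res[-3])

# Synthetic residuals following r_{k+1} = r_k**2 (known order 2) as a check
residuals = [1e-1]
for _ in range(3):
    residuals.append(residuals[-1] ** 2)
rho = computational_order(residuals)
```

Applied to the residual norms produced by a solver, the same three-term ratio recovers the observed order reported as ρ in the tables.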
Example 1
([14]). We investigate a nonlinear system H ( η ) = 0 , which possesses a complex root, as detailed below
H ( η ) = η 1 sin ( η 2 ) 2 η 10 η 8 + η 10 5 η 6 10 η 9 , 10 η 1 + η 3 2 5 η 5 2 + 10 η 6 η 8 sin ( η 7 ) + 2 η 9 , cos 1 ( 10 η 10 + η 8 + η 9 ) + η 4 sin ( η 2 ) + η 3 15 η 5 2 + η 7 , η 1 η 2 η 7 η 8 η 10 + η 3 5 5 η 5 3 + η 7 , 10 η 1 2 η 10 + cos ( η 2 ) + η 3 2 5 η 6 3 2 η 8 4 η 9 , cos 1 ( η 1 2 ) sin ( η 2 ) 2 η 10 η 5 4 η 6 η 9 + η 3 2 , 2 tan ( η 1 2 ) + 2 η 2 + η 3 2 5 η 5 3 η 6 + η 8 cos ( η 9 ) , η 1 2 η 10 η 5 η 6 η 7 η 8 η 9 + tan ( η 2 ) + 2 η 3 η 4 5 η 6 3 , 5 tan ( η 1 + 2 ) + cos ( η 9 η 10 ) + η 2 3 + 7 η 3 4 2 sin 3 ( η 6 ) , 5 exp ( η 1 2 ) η 2 + 2 η 7 η 10 + 8 η 3 η 4 5 η 6 3 η 9 ,
in which the precise solution is presented up to eight floating points as
$\theta \approx (1.32734904 + 0.35029249\,i,\ 1.0585993 - 1.7487246\,i,\ 1.02761867 - 0.01413080\,i,\ 3.2739500 + 0.1278283\,i,\ 0.83182439 + 0.00175519\,i,\ 0.4853245912 + 0.68487764\,i,\ 0.16936676 + 0.1840917\,i,\ 1.5344199 - 0.3212147\,i,\ 2.0863796 + 0.42634275\,i,\ 1.9895923 + 1.4783953\,i)^{*}.$
The computational evidence and the numerical speed, denoted by ρ, are detailed in Table 2. We utilized 1000 fixed floating points, with the initial value set as $g^{(0)} = (1.20 + 0.30\,i,\ 1.10 - 1.90\,i,\ 1.00 - 0.10\,i,\ 2.50 + 0.50\,i,\ 0.80 - 0.10\,i,\ 0.40 + 1.00\,i,\ 0.10 + 0.10\,i,\ 1.30 - 0.70\,i,\ 2.00 + 0.50\,i,\ 1.90 + 1.40\,i)^{*}$. This choice of $g^{(0)}$ is based on [14]. Apart from $g^{(0)}$ and (37), no other parameters need to be chosen, and the method runs until (35) is satisfied. Additionally, the residual norm is expressed using the $\|\cdot\|_2$ notation. The numerical evidence in Table 2 supports the observations in Figure 1, Figure 2, Figure 3 and Figure 4, showing that a smaller choice of the free parameter for PM2 results in reaching the convergence phase faster than larger values of this parameter.
Example 2
([21]). In this test, we investigate the nonlinear systems extracted from the computational resolution of the following partial differential equation (PDE):
$u_\tau + u u_z = \bar{\rho}\, u_{zz},$
$u(0, \tau) = 0, \quad u(1, \tau) = 0, \quad \tau \ge 0,$
$u(z, 0) = \dfrac{2 \bar{\rho}\, \bar{\vartheta}\, \pi \sin(\pi z)}{\xi + \bar{\vartheta} \cos(\pi z)}, \quad 0 \le z \le 1,$
where $\bar{\rho}$ is the coefficient of diffusion. If we take $u = u(z, \tau)$ into account, then the computational resolution is represented by:
$\varrho_{i,j} \approx u(z_i, \tau_j),$
at the grid points $(i, j)$ on an equidistant mesh. Let ϖ₁ and ϖ₂ denote the numbers of steps in the spatial and temporal variables, respectively. The parameters are set as follows [21]: ξ = 5, T = 1, $\bar{\rho} = 0.5$, and $\bar{\vartheta} = 4$. The parameters have been chosen so as to obtain a unique and well-defined numerical solution and to stay away from stiffness or irregularity in the solution of the PDE.
To address this, we may utilize the first-order backward FD method for the first differentiation in temporal τ :
$u_\tau(z_i, \tau_j) \approx \dfrac{\varrho_{i,j} - \varrho_{i,j-1}}{k},$
wherein k denotes the time step size. For the spatial terms of the PDE, we apply the second-order central difference FD method as follows:
$u_z(z_i, \tau_j) \approx \dfrac{\varrho_{i+1,j} - \varrho_{i-1,j}}{2h},$
and
$u_{zz}(z_i, \tau_j) \approx \dfrac{\varrho_{i+1,j} - 2\varrho_{i,j} + \varrho_{i-1,j}}{h^2},$
where h represents the spatial step size along z. Following this discretization, imposing the boundary conditions leads to a set of nonlinear equations that must be tackled iteratively.
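For one implicit time level, the residual of this discretization can be sketched as follows (our own transcription with homogeneous boundary values, as in the PDE above; the function and variable names, and the synthetic consistency check, are ours):

```python
import numpy as np

def burgers_residual(v, v_prev, h, k, rho_bar):
    """Residual of the implicit scheme at one time level: backward difference
    in time, central differences in space; v and v_prev hold interior values,
    and the boundary values are zero."""
    u = np.concatenate([[0.0], v, [0.0]])               # append boundary zeros
    u_t = (v - v_prev) / k                              # backward time difference
    u_z = (u[2:] - u[:-2]) / (2.0 * h)                  # central first difference
    u_zz = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2      # central second difference
    return u_t + v * u_z - rho_bar * u_zz
```

Driving this residual to zero at each time level with a nonlinear solver such as PM2 yields the marching scheme whose results are discussed next; stacking all levels instead gives the large all-at-once system mentioned below.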
The numerical solutions using PM2 are illustrated in Figure 7, and Table 3 provides the comparative evidence for this case in double precision, where the iteration stops once the residual of the obtained numerical solution falls below $10^{-5}$. We set ϖ₁ = ϖ₂ = 21, i.e., 21 equally spaced nodes in both the space and time directions; after imposing the initial and boundary conditions, we arrive at a nonlinear system of dimension 400 × 400, with the initial vector $g^{(0)} = 1$. Along space, we have used central three-point second-order FD approximations, and along time, we have employed first-order FD approximations to discretize the problem.
For an alternative set of parameters, specifically ϖ 1 = ϖ 2 = 31 , representing 31 equally spaced points in both the spatial and temporal directions, we obtained a nonlinear system of dimension 900 × 900 after applying the initial and boundary conditions. The initial vector is again defined as g ( 0 ) = 1 . Figure 8 presents the numerical simulations for this setup, further demonstrating the effectiveness of PM2 and the concept of memorization.
The numerical performance of the proposed solver is demonstrated through root-finding for various nonlinear equations. Results from the numerical tests clearly indicate that the solver achieves convergence in fewer iterations and with higher accuracy per iteration.
We finish this section by pointing out the following matter. Another concern may arise regarding how the method handles convergence issues in nonlinear systems with multiple solutions or when the initial guess significantly influences convergence behavior. To address this, the focus of this article is on simple roots, not multiple roots; while the methods can be applied to find multiple zeros, their orders of convergence drop significantly in such cases. In fact, if a nonlinear system has multiple roots (whether the multiplicity is known or unknown), specialized solvers designed for multiple roots, along with appropriate initial guesses, must be developed to maximize efficiency.

7. Concluding Summaries

In this paper, we have presented an advanced Steffensen-type iteration expression aimed at solving nonlinear systems of equations, specifically tailored to eliminate the need for computing Fréchet derivatives. This approach has exhibited significant computational efficiency by reducing the number of matrix inversions and functional evaluations, as outlined in Section 3, Section 4 and Section 5. Our examination has validated the efficacy of the proposed method. Future endeavors will focus on further improving the efficiency of the scheme and expanding its applicability to a wider variety of nonlinear challenges, including nonlinear stochastic differential equations as highlighted in [22].

Author Contributions

Conceptualization, S.W., H.X., T.L. and S.S.; formal analysis, S.W., H.X., T.L. and S.S.; funding acquisition, T.L. and S.S.; investigation, S.W., H.X. and T.L.; methodology, S.W., H.X., T.L. and S.S.; supervision, T.L.; validation, T.L. and S.S.; writing—original draft, S.W., H.X. and T.L.; writing—review & editing, S.W., H.X. and T.L. All authors have read and agreed to the published version of the manuscript.

Funding

The research was funded by the Research Project on Graduate Education and Teaching Reform of Hebei Province of China (YJG2024133), the Open Fund Project of Marine Ecological Restoration and Smart Ocean Engineering Research Center of Hebei Province (HBMESO2321), the Technical Service Project of Eighth Geological Brigade of Hebei Bureau of Geology and Mineral Resources Exploration (KJ2022-021), the Technical Service Project of Hebei Baodi Construction Engineering Co., Ltd. (KJ2024-012), the Natural Science Foundation of Hebei Province of China (A2020501007), and the Fundamental Research Funds for the Central Universities (N2123015).

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

We would like to express our sincere gratitude to the four referees whose insightful comments significantly enhanced both the clarity and robustness of this manuscript.

Conflicts of Interest

We have no financial conflicts of interest or personal relationships that might have affected the research presented in this manuscript.

Figure 1. Fractal attraction basins for z³ − 1 = 0: SM on the left and PM1 on the right, using ϑ = 0.2.
Figure 2. Fractal attraction basins for z³ − 1 = 0: SM on the left and PM1 on the right, using ϑ = 0.02.
Figure 3. Fractal attraction basins for z⁴ − 1 = 0: SM on the left and PM1 on the right, using ϑ = 0.2.
Figure 4. Fractal attraction basins for z⁴ − 1 = 0: SM on the left and PM1 on the right, using ϑ = 0.02.
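Basin plots like those in Figures 1–4 can be reproduced in outline with a damped Steffensen iteration. The sketch below is our own minimal illustration, not the paper's SM or PM1 schemes: it assumes ϑ enters as the usual damping factor in the divided-difference step, xₖ₊₁ = xₖ − ϑ f(xₖ)² / (f(xₖ + ϑ f(xₖ)) − f(xₖ)), and colors each starting point in a square grid by the root of z³ − 1 = 0 it converges to (0 marks non-convergence).

```python
import numpy as np

def steffensen_basin(f, theta, roots, size=60, box=2.0, maxit=50, tol=1e-6):
    """Color each starting point by the root its damped Steffensen
    iteration reaches (0 = no convergence within maxit steps)."""
    xs = np.linspace(-box, box, size)
    grid = xs[None, :] + 1j * xs[:, None]
    colors = np.zeros(grid.shape, dtype=int)
    for i in range(size):
        for j in range(size):
            z = grid[i, j]
            for _ in range(maxit):
                fz = f(z)
                w = z + theta * fz           # auxiliary point x + theta*f(x)
                denom = f(w) - fz            # = theta*fz * first-order divided difference
                if denom == 0:
                    break
                z = z - theta * fz**2 / denom
                if abs(f(z)) < tol:
                    break
            # assign the index (1-based) of the nearest root, if any
            for k, r in enumerate(roots):
                if abs(z - r) < 1e-3:
                    colors[i, j] = k + 1
                    break
    return colors

# basins of z^3 - 1 = 0 with theta = 0.2, as in the left panel of Figure 1
roots = [np.exp(2j * np.pi * k / 3) for k in range(3)]
basins = steffensen_basin(lambda z: z**3 - 1, 0.2, roots)
```

Decreasing ϑ (0.2 versus 0.02, as in Figures 1 versus 2) shrinks the auxiliary step and typically enlarges the smooth parts of the basins.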
Figure 5. Comparison of classic CEIs for various values of ω.
Figure 6. Comparison of flops-type CEIs for various values of ω.
Figure 7. Numerical solution obtained by PM2 with (37) on the given grid with ϖ₁ = ϖ₂ = 21 (left) and its contour plot (right) for Example 2.
Figure 8. Numerical solution obtained by PM2 with (37) on the given grid with ϖ₁ = ϖ₂ = 31 (left) and its contour plot (right) for Example 2.
Table 1. Efficiency indices for several Steffensen-type methods.

| Compared methods | SM | PM1 | PM2 |
|---|---|---|---|
| Order | 2 | 4 | 4.23607 |
| Function evaluations | ω + ω² | 4ω + ω² | 4ω + ω² |
| Classical CEI | 2^(1/(ω + ω²)) | 4^(1/(4ω + ω²)) | 4.23607^(1/(4ω + ω²)) |
| No. of LU decompositions | 1 | 1 | 1 |
| Flops for the LU decomposition | 2ω³/3 | 2ω³/3 | 2ω³/3 |
| Flops for the linear systems | 2ω³/3 + 2ω² | 2ω³/3 + 6ω² | 2ω³/3 + 6ω² |
| Flops-type CEI | 2^(1/(2ω³/3 + 3ω² + ω)) | 4^(1/(2ω³/3 + 7ω² + 4ω)) | 4.23607^(1/(2ω³/3 + 7ω² + 4ω)) |
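The index formulas in Table 1 can be checked numerically. The snippet below is a small sketch (function and variable names are ours, not the paper's) that evaluates both the classical and flops-type computational efficiency indices for a few system sizes ω, mirroring the comparison plotted in Figures 5 and 6; PM2's order 4.23607 is taken to be 2 + √5.

```python
# Computational efficiency index: CEI = p^(1/C), where p is the convergence
# order and C the per-iteration cost, with costs taken from Table 1.
def cei(p, cost):
    return p ** (1.0 / cost)

def classic_costs(w):
    # function evaluations per iteration for system size w (the paper's omega)
    return {"SM": w + w**2, "PM1": 4*w + w**2, "PM2": 4*w + w**2}

def flops_costs(w):
    # functional cost plus flops for the LU factorization and triangular solves
    return {"SM":  2*w**3/3 + 3*w**2 + w,
            "PM1": 2*w**3/3 + 7*w**2 + 4*w,
            "PM2": 2*w**3/3 + 7*w**2 + 4*w}

orders = {"SM": 2.0, "PM1": 4.0, "PM2": 2 + 5**0.5}  # 2 + sqrt(5) ≈ 4.23607

for w in (5, 20, 100):
    classic = {m: cei(orders[m], c) for m, c in classic_costs(w).items()}
    flops   = {m: cei(orders[m], c) for m, c in flops_costs(w).items()}
    print(w, classic, flops)
```

For every ω the ranking PM2 > PM1 > SM holds under both measures, which is the qualitative message of Figures 5 and 6.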
Table 2. Comparisons of different methods with and without memory in Example 1.

| Methods | H(g⁽³⁾) | H(g⁽⁴⁾) | H(g⁽⁵⁾) | H(g⁽⁶⁾) | H(g⁽⁷⁾) | H(g⁽⁸⁾) | H(g⁽⁹⁾) | ρ |
|---|---|---|---|---|---|---|---|---|
| NM | 8.19 × 10⁻¹ | 2.73 × 10⁻² | 1.79 × 10⁻⁵ | 1.28 × 10⁻¹¹ | 2.52 × 10⁻²³ | 8.28 × 10⁻⁴⁷ | 2.50 × 10⁻⁹⁴ | 2.02 |
| SM | 7.68 × 10⁻¹ | 1.83 × 10⁻² | 7.33 × 10⁻⁶ | 5.17 × 10⁻¹² | 1.51 × 10⁻²⁴ | 2.03 × 10⁻⁴⁹ | 3.91 × 10⁻⁹⁹ | 1.99 |
| PM1 | 1.01 × 10⁻⁴ | 6.95 × 10⁻²⁰ | 1.19 × 10⁻⁸⁰ | 2.12 × 10⁻³²³ | — | — | — | 3.99 |
| PM2 with (36) | 2.19 × 10⁻³ | 6.12 × 10⁻¹⁶ | 3.29 × 10⁻⁶⁹ | 7.45 × 10⁻²⁹⁵ | — | — | — | 4.23 |
| PM2 with (37) | 1.44 × 10⁻⁴ | 2.99 × 10⁻²¹ | 7.96 × 10⁻⁹² | 1.09 × 10⁻³⁹⁰ | — | — | — | 4.23 |
| PM2 with (38) | 1.60 × 10⁻⁴ | 2.88 × 10⁻²¹ | 1.05 × 10⁻⁹¹ | 2.88 × 10⁻³⁹⁰ | — | — | — | 4.23 |
| PM2 with (39) | 1.58 × 10⁻⁴ | 2.61 × 10⁻²¹ | 8.16 × 10⁻⁹² | 9.18 × 10⁻³⁹¹ | — | — | — | 4.24 |
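The column ρ in Table 2 is presumably the computational order of convergence estimated from consecutive residual norms via the standard formula ρₖ ≈ ln(eₖ₊₁/eₖ) / ln(eₖ/eₖ₋₁). A minimal sketch of that estimate (our own code, fed with SM's residuals from the table):

```python
import math

def coc(residuals):
    """Computational order of convergence from successive residual norms:
    rho_k = ln(e_{k+1}/e_k) / ln(e_k/e_{k-1})."""
    e = residuals
    return [math.log(e[k + 1] / e[k]) / math.log(e[k] / e[k - 1])
            for k in range(1, len(e) - 1)]

# residual norms H(g^(k)), k = 3..9, for SM from Table 2
sm = [7.68e-1, 1.83e-2, 7.33e-6, 5.17e-12, 1.51e-24, 2.03e-49, 3.91e-99]
print(coc(sm)[-1])  # last estimate ≈ 2, matching rho = 1.99 in the table
```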
Table 3. Computational outcomes obtained in Example 2.

| Different solvers | SM | PM1 | PM2 with (37) |
|---|---|---|---|
| No. of iterations | 7 | 2 | 1 |
| Time (in seconds) | 6.12 | 5.11 | 3.27 |

Wang, S.; Xian, H.; Liu, T.; Shateyi, S. Solving Nonlinear Equation Systems via a Steffensen-Type Higher-Order Method with Memory. Mathematics 2024, 12, 3655. https://doi.org/10.3390/math12233655
