Article

The Enhanced Fixed Point Method: An Extremely Simple Procedure to Accelerate the Convergence of the Fixed Point Method to Solve Nonlinear Algebraic Equations

by Uriel Filobello-Nino 1, Hector Vazquez-Leal 1,2,*, Jesús Huerta-Chua 3, Jaime Martínez-Castillo 4, Agustín L. Herrera-May 4,5, Mario Alberto Sandoval-Hernandez 3 and Victor Manuel Jimenez-Fernandez 1

1 Facultad de Instrumentación Electrónica, Universidad Veracruzana, Cto. Gonzalo Aguirre Beltrán S/N, Xalapa 91000, Veracruz, Mexico
2 Consejo Veracruzano de Investigación Científica y Desarrollo Tecnológico (COVEICYDET), Av Rafael Murillo Vidal No. 1735, Cuauhtémoc, Xalapa 91069, Veracruz, Mexico
3 Instituto Tecnológico Superior de Poza Rica, Tecnológico Nacional de México, Luis Donaldo Colosio Murrieta S/N, Arroyo del Maíz, Poza Rica 93230, Veracruz, Mexico
4 Centro de Investigación en Micro y Nanotecnología, Universidad Veracruzana, Boca del Río 94294, Veracruz, Mexico
5 Facultad de Ingeniería de la Construcción y el Hábitat, Universidad Veracruzana, Boca del Río 94294, Veracruz, Mexico
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(20), 3797; https://doi.org/10.3390/math10203797
Submission received: 28 July 2022 / Revised: 21 September 2022 / Accepted: 29 September 2022 / Published: 14 October 2022

Abstract: This work proposes the Enhanced Fixed Point Method (EFPM) as a straightforward modification of the fixed point procedure for finding an exact or approximate solution of a linear or nonlinear algebraic equation. The goal is a versatile method that is easy to employ and systematic; we therefore expect this work to help break the paradigm that an effective modification of a known method must be long and complicated. As a matter of fact, the method expresses an algebraic equation in terms of the same equation multiplied by an adequate factor, which most of the time is just a simple numeric factor. The main idea is to modify the original equation slightly, in such a way that the original and modified equations share the same solution. The modified equation is then expressed as a fixed point problem, and the proposed parameters are employed to accelerate the convergence of the fixed point iteration for the original equation. Since the Newton method arises from one possible fixed point formulation of an algebraic equation, we will see that it is relatively easy to obtain modified versions of the Newton method with orders of convergence greater than two. This work shows the convenience of this procedure.

1. Introduction

As is widely known, research on nonlinear algebraic equations is important because many phenomena, practical or theoretical, give rise to this kind of equation. As a matter of fact, it is difficult to find an area of engineering that can omit the use of these equations: electrical and electronic circuits, storage tanks, the chemical industry, agricultural irrigation, balancing chemical equations, industrial engineering, and so on [1]. For this reason, several methods aimed at finding approximate solutions to nonlinear algebraic equations have been reported. As is well known, all of these methods begin with an initial approximation and then generate a sequence that is expected to converge to a root of the equation to be solved. Some important methods found in the literature are those based on the bisection method [1,2], the false position method [2], the Newton–Raphson method [1,2], the secant method [1,2], the Muller method [1,2], the Laguerre method [2,3], the Jenkins–Traub method [2,4], the Brent method [2,5], the fixed point method [1,2], the Ying Buzu method [6], the He Chengtian average [6], Bubbfil algorithms [7], the Steffensen iterative method [8], new Steffensen-type iterative methods constructed through the weight function concept [9], a four-point optimal sixteenth-order iterative method [10], variants of Cauchy's method [11], derivative-free methods with memory of R-order convergence [12], a one-parameter 4-point sixteenth-order King-type family of iterative methods [13], Halley's method [14], a modified Noor method [15], a derivative-free method based on Traub's method [16], a family of iterative methods for solving multiple-root nonlinear equations [17], high-order Jacobian-free methods with memory [18], iterative algorithms for Newton's maps [19], iterative methods with memory constructed by inverse interpolation [20], and fixed points of mappings defined on probabilistic modular spaces [21], among many others. Three books considered classics on the solution of nonlinear equations are [3,4,22]. On the other hand, this work is heavily concerned with the notions of order of convergence and accelerated convergence; both subjects will be briefly explained later. In general, a sequence with a high order of convergence converges faster than one generated by a method of lower order. For this reason, the search for methods that accelerate the convergence of a given sequence is an important theme. One of the best known methods employed to accelerate the convergence of a linearly convergent sequence is the Aitken method [1,2]. When the initial sequence is generated by a fixed point iteration, it is possible to accelerate the convergence to quadratic by using a modified version of the Aitken method, known as the Steffensen method [1,2]. This procedure consists of successively alternating the Aitken and fixed point methods, and the results show that, effectively, the efficiency of this technique is similar to that of Newton's method. Although the Steffensen method is not particularly complicated to apply, this work proposes a simpler method to accelerate the convergence of fixed point iterations, which in general present only linear convergence. We will show that the proposed method is sometimes able to produce not only quadratic convergence but also higher-order iterative procedures, including versions of the Newton–Raphson method, which is itself a relevant particular case of the fixed point method.
As a matter of fact, the goal of this work is to enhance the fixed point method and improve its convergence without the need to combine different methods, as occurs with the Steffensen method.
We will see that EFPM is easy to use and provides an adequate alternative based on the classical fixed point method that is, in general, clearly better. Furthermore, we will see that applying the proposed method is no more complicated than the fixed point problem from which it is derived, which makes it even more attractive for practical applications.
The rest of this paper is organized as follows. In Section 2, we provide a brief review of the theory of the fixed point method and its application to solving linear and nonlinear algebraic equations. We will emphasize the character of the Newton–Raphson method as an important particular case of the fixed point method. Furthermore, we will introduce the relevant concept of convergence order. Section 3 presents the basic idea of the proposed method, EFPM. In Section 4, we apply EFPM with the purpose of finding solutions to nonlinear algebraic equations, and we compare the results obtained by EFPM with those obtained by the classical fixed point method in order to show the advantages of the proposed method. We also compare the proposed method with two other methods whose origins differ from the fixed point method. Furthermore, Section 5 presents two applications of EFPM to problems of science and engineering. Section 6 discusses the main results of this work. Finally, a brief conclusion is given in Section 7.

2. Fixed Point Method

The following review of the fixed point method is based on reference [2].
A fixed point of a function $g$ is a real number $p$ such that $g(p) = p$.
A relevant fact for this work is the connection between the search for a root of $f(p) = 0$ and a fixed point problem related to that equation.
For the problem $f(p) = 0$, we can define a function $g$ with a fixed point at $p$ in many ways; for instance, $g(x) = x + 10 f(x)$ or $g(x) = x - 0.5 f(x)$. Conversely, if $g$ has a fixed point at a number $p$, then it is possible to define an equation that has a root at $p$, say $2 f(x) = g(x) - x$.
The idea is to express the problem of calculating a root of $f(p) = 0$ in terms of the search for the fixed point of a function, taking advantage of everything that is known about the approximation of fixed points of a function.
To start, we mention a theorem that provides sufficient conditions for the existence and uniqueness of a fixed point.

2.1. Theorem 1

Theorem 1.
(1) If $g \in C[a,b]$ and $g(x) \in [a,b]$ for all $x \in [a,b]$, then $g$ has a fixed point in $[a,b]$.
(2) If, in addition, $g'(x)$ exists on $(a,b)$ and a positive constant $k < 1$ exists with $|g'(x)| \leq k$ for all $x \in (a,b)$, then the fixed point in $[a,b]$ is unique.
To approximate a fixed point of a function $g$, we choose an initial approximation $p_0$ and then generate a sequence $\{p_n\}_{n=0}^{\infty}$ determined by $p_n = g(p_{n-1})$ for all values $n \geq 1$. Assuming that the sequence converges to $p$ and that the function $g$ is continuous, the following result is verified: $p = \lim_{n\to\infty} p_n = \lim_{n\to\infty} g(p_{n-1}) = g\left(\lim_{n\to\infty} p_{n-1}\right) = g(p)$. Thus, we obtain a solution of $x = g(x)$ and, at the same time, of the algebraic equation related to this problem. This technique is known as fixed point iteration or functional iteration.
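As a minimal illustration of functional iteration, the scheme $p_n = g(p_{n-1})$ translates directly into code. The following sketch is our own; the function name, tolerance, and iteration cap are illustrative choices, not taken from the paper:

```python
def fixed_point_iteration(g, p0, tol=1e-9, max_iter=100):
    """Iterate p_n = g(p_{n-1}) until two consecutive iterates agree within tol."""
    p = p0
    for n in range(1, max_iter + 1):
        p_next = g(p)
        if abs(p_next - p) < tol:
            return p_next, n  # approximate fixed point and iterations used
        p = p_next
    raise RuntimeError("fixed point iteration did not converge")
```

For example, `fixed_point_iteration(lambda x: (10 / (4 + x)) ** 0.5, 1.5)` reproduces the iteration D of Section 4 below.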
Given an algebraic equation, the following theorem guides us as to which fixed point formulation is convenient to use.

2.2. Theorem 2 (Fixed Point Theorem)

Theorem 2 (Fixed Point Theorem). Let $g \in C[a,b]$ with $g(x) \in [a,b]$ for all $x \in [a,b]$. Assume also that $g'(x)$ exists on $(a,b)$ and that a positive constant $k < 1$ exists with $|g'(x)| \leq k$ for all $x \in (a,b)$. Under these conditions, we know from Theorem 1 that there exists a unique fixed point in $[a,b]$. Moreover, for any number $p_0$ in $[a,b]$, the sequence defined by $p_n = g(p_{n-1})$, $n \geq 1$, converges to the unique fixed point $p$ in $[a,b]$.
These fixed point results are evidently very relevant because they increase the possibility of finding the fixed point of a function and, probably, also of finding the solution of an algebraic equation. Nevertheless, we note that most fixed point methods present merely linear convergence, which implies that when such a method converges, it does so slowly (see below).

2.3. Newton–Raphson Method

It is possibly the most employed and powerful numerical technique for finding a root of an equation $f(x) = 0$. We will introduce this method by using Taylor polynomials, proceeding as follows [2].
Start with $f \in C^2[a,b]$. Let $\tilde{x} \in [a,b]$ be an approximation of the root $p$ such that $f'(\tilde{x}) \neq 0$ and $|p - \tilde{x}|$ is small. Next, we expand $f(x)$ around $\tilde{x}$ by using the Taylor polynomial:
$$f(x) = f(\tilde{x}) + (x - \tilde{x}) f'(\tilde{x}) + \frac{(x - \tilde{x})^2}{2} f''(\eta(x)),$$
where the value of $\eta(x)$ lies between $x$ and $\tilde{x}$.
Evaluating the above equation at the root $x = p$,
$$0 = f(\tilde{x}) + (p - \tilde{x}) f'(\tilde{x}) + \frac{(p - \tilde{x})^2}{2} f''(\eta(p)).$$
Next, we assume that $\tilde{x}$ is sufficiently close to the root that the last term is negligible with respect to the first two:
$$0 \approx f(\tilde{x}) + (p - \tilde{x}) f'(\tilde{x}).$$
Solving for $p$, we obtain:
$$p \approx \tilde{x} - \frac{f(\tilde{x})}{f'(\tilde{x})}.$$
The Newton method begins from an initial approximation $p_0$ and generates the sequence $\{p_n\}_{n=0}^{\infty}$ defined by:
$$p_n = p_{n-1} - \frac{f(p_{n-1})}{f'(p_{n-1})}, \tag{1}$$
for the values $n \geq 1$.
On the other hand, we see that the Newton method is a functional iteration formula of the form $p_n = g(p_{n-1})$, where, from (1):
$$g(p_{n-1}) = p_{n-1} - \frac{f(p_{n-1})}{f'(p_{n-1})}, \quad n \geq 1.$$
Thus, we conclude that Newton–Raphson is a particular case of a fixed point problem.
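Read as a fixed point iteration, (1) is coded in a few lines. The sketch below is ours (the function names and stopping rule are illustrative assumptions, not part of the paper):

```python
def newton(f, df, p0, tol=1e-9, max_iter=50):
    """Newton-Raphson as the fixed point iteration p_n = g(p_{n-1}),
    with g(p) = p - f(p)/f'(p), Equation (1)."""
    p = p0
    for n in range(1, max_iter + 1):
        p_next = p - f(p) / df(p)
        if abs(p_next - p) < tol:
            return p_next, n
        p = p_next
    raise RuntimeError("Newton iteration did not converge")
```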
Next, we mention a convergence theorem for the Newton method, which emphasizes the importance of selecting a convenient initial point $p_0$.

2.4. Theorem 3

Theorem 3.
Let $f \in C^2[a,b]$. If $p \in [a,b]$ is such that $f(p) = 0$ and $f'(p) \neq 0$, then there exists a $\delta > 0$ such that the Newton method generates a sequence $\{p_n\}_{n=1}^{\infty}$ that converges to $p$ for any initial approximation $p_0 \in [p - \delta, p + \delta]$.
Although this result is important for the Newton method, it is not of practical interest because it does not indicate how to calculate $\delta$. Rather, its importance consists of the following: assuming that the conditions of Theorem 3 are fulfilled, the Newton method converges provided that $p_0$ is chosen accurately enough.
Next, we will provide a criterion to determine how fast a sequence converges.
In particular, we will see that the Newton method converges faster than most fixed point procedures.

2.5. Convergence Order

So far, we have seen that the magnitude of $g'(x)$ indicates whether a process converges. Next, we will show that it also determines how fast the convergence is [1]. Define the error of the $i$-th iteration as $\epsilon_i = x_i - \tilde{x}$, where $\tilde{x}$ denotes the value of the fixed point of $g$.
Assuming that the values of $g(x)$ and its derivatives are known at $\tilde{x}$, it is possible to expand $g(x)$ around $\tilde{x}$ in a Taylor series and obtain the following expression for $g(x_i)$:
$$g(x_i) = g(\tilde{x}) + g'(\tilde{x})(x_i - \tilde{x}) + \frac{g''(\tilde{x})(x_i - \tilde{x})^2}{2} + \frac{g'''(\tilde{x})(x_i - \tilde{x})^3}{6} + \cdots \tag{2}$$
or,
$$g(x_i) - g(\tilde{x}) = g'(\tilde{x})(x_i - \tilde{x}) + \frac{g''(\tilde{x})(x_i - \tilde{x})^2}{2} + \frac{g'''(\tilde{x})(x_i - \tilde{x})^3}{6} + \cdots \tag{3}$$
In accordance with the functional iteration process of $g$, we have $x_{i+1} = g(x_i)$ and, at the same time, $\tilde{x}$ satisfies $\tilde{x} = g(\tilde{x})$. Therefore, after using these results, it is possible to rewrite (3) as follows:
$$x_{i+1} - \tilde{x} = g'(\tilde{x})\epsilon_i + \frac{g''(\tilde{x})\epsilon_i^2}{2} + \frac{g'''(\tilde{x})\epsilon_i^3}{6} + \cdots$$
after employing the definition of the error of the $i$-th iteration.
The left-hand side of the above equation is the error of the $(i+1)$-th iteration, and therefore [1]:
$$\epsilon_{i+1} = g'(\tilde{x})\epsilon_i + \frac{g''(\tilde{x})\epsilon_i^2}{2} + \frac{g'''(\tilde{x})\epsilon_i^3}{6} + \cdots \tag{4}$$
We note that if $|\epsilon_i| < 1$, then the successive powers $\epsilon_i^2, \epsilon_i^3, \epsilon_i^4, \ldots$ are smaller in magnitude than $\epsilon_i$, in such a way that if $g'(\tilde{x}) \neq 0$, the first term on the right-hand side of (4) is most of the time bigger in magnitude than the rest of the terms, and $\epsilon_{i+1}$ becomes proportional to $\epsilon_i$. On the other hand, if $g'(\tilde{x}) = 0$ and $g''(\tilde{x}) \neq 0$, the second term on the right-hand side of (4) dominates, and $\epsilon_{i+1}$ becomes proportional to $\epsilon_i^2$. In the same way, if $g'(\tilde{x}) = 0$, $g''(\tilde{x}) = 0$, and $g'''(\tilde{x}) \neq 0$, then $\epsilon_{i+1}$ becomes proportional to $\epsilon_i^3$. From the above, we see that the functional iteration process $x_{i+1} = g(x_i)$ is of order one if $g'(\tilde{x}) \neq 0$, of order two if $g'(\tilde{x}) = 0$ and $g''(\tilde{x}) \neq 0$, of order three if $g'(\tilde{x}) = 0$, $g''(\tilde{x}) = 0$, and $g'''(\tilde{x}) \neq 0$, and so on. If $n$ is the order of the process, then $\epsilon_{i+1} \propto \epsilon_i^n$, and it is clear that $\epsilon_{i+1}$ becomes smaller for larger values of $n$; the convergence, therefore, is faster. In general, a sequence of high order converges faster than one generated by a method of lower order. In this language, the Newton–Raphson method has second-order convergence. As a matter of fact, this work will show how to obtain, in a systematic way, versions of the Newton method with a higher order of convergence and, in general, how to increase the convergence order of fixed point problems.
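The relation $\epsilon_{i+1} \propto \epsilon_i^n$ also suggests a quick numerical check of the order, since $n \approx \log(\epsilon_{i+1}/\epsilon_i)/\log(\epsilon_i/\epsilon_{i-1})$. The following small sketch of this estimate is our own construction; it assumes the exact root is known and the errors are nonzero:

```python
import math

def empirical_order(iterates, root):
    """Estimate the convergence order n from eps_{i+1} ~ C * eps_i**n,
    using ratios of consecutive errors along a list of iterates."""
    eps = [abs(p - root) for p in iterates]
    return [math.log(eps[i + 1] / eps[i]) / math.log(eps[i] / eps[i - 1])
            for i in range(1, len(eps) - 1)
            if eps[i - 1] > 0 and eps[i] > 0 and eps[i + 1] > 0]
```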

3. Enhanced Fixed Point Method (EFPM)

Next, we will provide the fundamentals of the EFPM method as a convenient tool to increase the order of convergence of the functional iteration technique.
As was already mentioned, given an algebraic equation, there exist many ways to express it in terms of a fixed point problem. The advantage of this latter procedure is that it is supported by well-established mathematical results, some of which were mentioned in the previous section. The proposal of this work is, on the one hand, to make use of the mathematical results already explained and, on the other hand, to enhance the relation between the fixed point and the search for roots of nonlinear algebraic equations.
As was already mentioned at the beginning of Section 2, for the problem $f(p) = 0$, we can define, in many ways, a function $g$ with a fixed point at $p$. On the other hand, if $g$ has a fixed point at a number $p$, then it is possible to define a function that has a root at $p$. Our proposal is simple: slightly modify the algebraic equation to solve, $f(p) = 0$, without changing the root of the original equation.
We will find it useful to consider the equation:
$$\alpha f(p) = 0, \tag{5}$$
where $\alpha$ is a parameter employed with the purpose of enhancing the possible fixed point procedures related to the search for the root of $f(p) = 0$.
We begin by rewriting $\alpha = 1 + \beta$ (for another parameter $\beta$) in such a way that:
$$(1 + \beta) f(p) = 0; \tag{6}$$
we rewrite (6) as follows:
$$f(p) = -\beta f(p). \tag{7}$$
Next, we propose a possible fixed point problem related to $f(p) = 0$:
$$p = g(p), \tag{8}$$
where $g$ denotes the function that results from applying $f^{-1}$ to the right-hand side of (7):
$$g(p) = f^{-1}\left(-\beta f(p)\right). \tag{9}$$
With the purpose of determining the parameter $\beta$, we impose the condition $g'(\tilde{x}) = 0$, where $\tilde{x}$ is a fixed point of $g$ (see the discussion below (4)). In accordance with the discussion of the last section, $\epsilon_{i+1}$ becomes proportional to $\epsilon_i^2$, and for that reason the generated functional iteration process $x_{i+1} = g(x_i)$ converges faster than a first-order process with $g'(\tilde{x}) \neq 0$, as was shown for the case of quadratic convergence: $\epsilon_{i+1} \propto \epsilon_i^2$.
In principle, we can try for a third-order functional iteration process, which is relevant because, in general, a sequence with a higher order of convergence converges faster than one generated by a method of lower order.
Therefore, we can consider a modified equation such as:
$$(1 + \alpha + \beta p) f(p) = 0; \tag{10}$$
as in the case of (6), we note that the equation $f(p) = 0$ and (10) have the same relevant solution.
Following a path similar to the one that yields (9), we obtain a fixed point problem where:
$$g(p) = f^{-1}\left(-(\alpha + \beta p) f(p)\right). \tag{11}$$
With the purpose of determining the parameters $\alpha$ and $\beta$, we impose the conditions $g'(\tilde{x}) = 0$ and $g''(\tilde{x}) = 0$, where $\tilde{x}$ is a fixed point of $g$. In this case, $\epsilon_{i+1}$ would become proportional to $\epsilon_i^3$ and, for the same reason, we would obtain a third-order iterative process.
A possible problem is that solving the system $g'(\tilde{x}) = 0$, $g''(\tilde{x}) = 0$ could result in a nonlinear system of equations which, in general, would be more complicated than the original problem $f(p) = 0$; but, at least in principle, the procedure is possible.
A relevant particular case of the iterative fixed point technique is the Newton–Raphson method.
It is relatively easy to show that this method is of second order.
From (1) we deduce that:
$$g(p) = p - \frac{f(p)}{f'(p)}. \tag{12}$$
After differentiating (12), we obtain:
$$g'(p) = \frac{f(p) f''(p)}{f'^2(p)}. \tag{13}$$
Evaluating (13) at the root $\tilde{x}$, we obtain:
$$g'(\tilde{x}) = \frac{f(\tilde{x}) f''(\tilde{x})}{f'^2(\tilde{x})} = 0. \tag{14}$$
Assuming, in accordance with Theorem 3, that $f'(\tilde{x}) \neq 0$, then $g'(\tilde{x}) = 0$. Furthermore, differentiating (13) and evaluating the result at the root $\tilde{x}$, we deduce that in general $g''(\tilde{x}) \neq 0$, since:
$$g''(\tilde{x}) = \frac{f''(\tilde{x})}{f'(\tilde{x})}, \tag{15}$$
and therefore the method is of second order.
Unlike other fixed point problems, we point out the notable fact that the iterative process which emanates from the Newton–Raphson method is of second order independently of the function $f$ (it is only assumed that $f'(\tilde{x}) \neq 0$).
Next, we will make good use of this result in order to propose a relatively simple third-order Newton–Raphson method, as follows.
From the above, let us consider the problem:
$$F(p) = 0, \tag{16}$$
for a function $F$.
Since $g'(\tilde{x}) = 0$ (see (14)), where $\tilde{x}$ is a root of $F$, then, assuming again that $F'(\tilde{x}) \neq 0$, we deduce from (15) that a third-order Newton–Raphson method requires:
$$F''(\tilde{x}) = 0. \tag{17}$$
Although a function does not satisfy this requirement in general, given $f$ we propose the related function:
$$F(p) = f(p) + A p f(p), \tag{18}$$
where $A$ is a parameter determined so as to satisfy (17), with the purpose of obtaining a third-order Newton–Raphson method. In the same way, we notice that $F$ and $f$ share the same relevant roots.
Thus, after differentiating (18) twice:
$$F''(p) = f''(p) + 2 A f'(p) + A p f''(p). \tag{19}$$
From (17) and (19) we obtain:
$$A = -\frac{f''(\tilde{x})}{2 f'(\tilde{x}) + \tilde{x} f''(\tilde{x})}. \tag{20}$$
Therefore, by using (18), we obtain the following fixed point problem related to a third-order Newton–Raphson technique:
$$g(p) = p - \frac{f(p) + A p f(p)}{f'(p) + A f(p) + A p f'(p)}. \tag{21}$$
The third-order Newton method begins from an initial approximation $p_0$ and generates the sequence $\{p_n\}_{n=0}^{\infty}$ defined by:
$$p_n = p_{n-1} - \frac{\left(1 + A_{n-1} p_{n-1}\right) f(p_{n-1})}{\left(1 + A_{n-1} p_{n-1}\right) f'(p_{n-1}) + A_{n-1} f(p_{n-1})}, \tag{22}$$
for the values $n \geq 1$, where:
$$A_{n-1} = -\frac{f''(p_{n-1})}{2 f'(p_{n-1}) + p_{n-1} f''(p_{n-1})}. \tag{23}$$
In principle, it is possible to continue in this manner and propose Newton–Raphson methods of higher order.
For example, for the case of a fourth-order Newton–Raphson method, we require that $g'(\tilde{x}) = 0$, $g''(\tilde{x}) = 0$, and $g'''(\tilde{x}) = 0$.
After differentiating (12) three times and substituting the root $\tilde{x}$, we get:
$$g'''(\tilde{x}) = \frac{2 f'(\tilde{x}) f'''(\tilde{x}) - 3 f''^2(\tilde{x})}{f'^2(\tilde{x})}. \tag{24}$$
Applying this result to $F$ and using (17), we deduce that:
$$g'''(\tilde{x}) = \frac{2 F'''(\tilde{x})}{F'(\tilde{x})}. \tag{25}$$
Thus, after imposing the condition $g'''(\tilde{x}) = 0$, we obtain:
$$F'''(\tilde{x}) = 0. \tag{26}$$
Therefore, an adequate function for this case would be:
$$F(p) = f(p) + A p f(p) + B p^2 f(p), \tag{27}$$
where $A$ and $B$ are parameters determined to satisfy (17) and (26), with the purpose of obtaining a fourth-order Newton–Raphson method. In the same way, we notice that $F$ and $f$ share the same relevant roots. Given that $A$ and $B$ are determined through a linear system, it is in principle possible to get a fourth-order Newton method and, in general, one could continue in this manner to obtain higher-order versions. Nevertheless, the iterative formulas associated with these methods would be too long and cumbersome for practical applications, whereby we will consider the third-order Newton method given by (22) and (23).
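The scheme (22) and (23) is short enough to code directly. The following sketch is our own rendering (the names, tolerance, and demonstration function are illustrative assumptions); it applies Newton's step to $F(p) = (1 + Ap) f(p)$ with $A$ recomputed at each iterate:

```python
def newton_third_order(f, df, d2f, p0, tol=1e-9, max_iter=50):
    """Third-order Newton method of Equations (22) and (23)."""
    p = p0
    for n in range(1, max_iter + 1):
        A = -d2f(p) / (2 * df(p) + p * d2f(p))            # Equation (23)
        w = 1 + A * p
        p_next = p - w * f(p) / (w * df(p) + A * f(p))    # Equation (22)
        if abs(p_next - p) < tol:
            return p_next, n
        p = p_next
    raise RuntimeError("iteration did not converge")

# Illustrative use on f(p) = p**2 - 2, whose positive root is sqrt(2):
root, its = newton_third_order(lambda p: p * p - 2,
                               lambda p: 2 * p,
                               lambda p: 2, p0=1.5)
```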
We notice that EFPM is a method that increases the convergence order (rate of convergence) of a fixed point iteration that converges slowly. As a matter of fact, although in principle the proposed method is able to increase the rate of convergence without limitations, except for the attendant mathematical difficulties, its objective is not to provide a method that converges as fast as possible.

4. Case Study

Next, we will consider the solution of the algebraic equation [2]:
$$p^3 + 4 p^2 - 10 = 0, \tag{28}$$
which possesses a single root in the interval $[1, 2]$.
Next, we present several ways to express (28) in the form $p = g(p)$ in order to show the functional iteration technique [2]:
(A) $p = g_1(p) = p - p^3 - 4 p^2 + 10$;
(B) $p = g_2(p) = \left(\frac{10}{p} - 4 p\right)^{1/2}$;
(C) $p = g_3(p) = \frac{1}{2}\left(10 - p^3\right)^{1/2}$;
(D) $p = g_4(p) = \left(\frac{10}{4 + p}\right)^{1/2}$;
(E) $p = g_5(p) = p - \frac{p^3 + 4 p^2 - 10}{3 p^2 + 8 p}$.
Considering the initial point $p_0 = 1.5$, Table 1 provides the results of the fixed point iteration for the above five options of $g$. As a matter of fact, columns F, G, and H correspond to the application of the proposed method to the solution of (28), while columns I and J describe the results obtained with other methods whose origin is different from that of the fixed point method.
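For reference, the classical columns A–E of Table 1 can be regenerated with a few lines of code. This sketch is ours (the dictionary layout and the four-iteration cutoff are illustrative choices):

```python
import math

# Fixed point formulations (A)-(E) for p^3 + 4p^2 - 10 = 0, started at p0 = 1.5.
g = {
    "A": lambda p: p - p**3 - 4 * p**2 + 10,
    "B": lambda p: math.sqrt(10 / p - 4 * p),       # leaves the reals at n = 3
    "C": lambda p: 0.5 * math.sqrt(10 - p**3),
    "D": lambda p: math.sqrt(10 / (4 + p)),
    "E": lambda p: p - (p**3 + 4 * p**2 - 10) / (3 * p**2 + 8 * p),  # Newton
}

for name, gi in g.items():
    p = 1.5
    try:
        for n in range(4):
            p = gi(p)
        print(name, p)
    except ValueError:
        print(name, "produced a negative square-root argument (imaginary iterate)")
```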
Following [2], we provide brief comments on the first five results shown in Table 1.
(A). $g_1$ does not map $[1,2]$ into itself, and $|g_1'(p)| > 1$ for all values $p$ in $[1,2]$. Although the fixed point theorem cannot guarantee that the method necessarily fails, convergence can hardly be expected with this choice of $g_1$. (B). $g_2$ not only fails to map $[1,2]$ into itself, but the condition $|g_2'(p)| < 1$ is not satisfied for all the values of this interval either. Therefore, this method is not expected to converge.
(C). In this case, it can be shown that $|g_3'(2)| \approx 2.12$, whereby the condition $|g_3'(p)| \leq k < 1$ is not satisfied in $[1,2]$. Nevertheless, this example shows that sometimes shortening the interval reveals the convergence of a sequence. Thus, considering the interval $[1, 1.5]$ instead of $[1,2]$, it is easy to show that $g_3$ maps $[1, 1.5]$ into itself, and since $|g_3'(p)| \leq 0.66$ in the interval $[1, 1.5]$, the fixed point theorem confirms the convergence of the iterative process derived from C, as shown in Table 1.
(D). In this case, we note that $|g_4'(p)| < 0.15$ for $x \in [1,2]$. Given that the maximum magnitude of $g_4'(p)$ is much smaller than that of $g_3'(p)$, this explains the faster convergence obtained with $g_4$.
(E). The sequence defined by $g_5$ converges much faster than the other fixed point methods. It turns out that this option corresponds to the Newton–Raphson method, which was explained in Section 2. As a matter of fact, the quadratic convergence of this method explains its effectiveness.
Next, we will employ the proposed method in order to improve some of the results obtained from the several options of the fixed point method mentioned above.
(F). We notice that $p = g_2(p) = \left(\frac{10}{p} - 4 p\right)^{1/2}$ failed completely to provide an approximate solution for (28). As a matter of fact, from Table 1 we see that the iterative process derived from $g_2(p)$ yields an imaginary approximation as early as the third iteration of the process. Next, we will consider a slightly modified version of (28). In accordance with Section 3, we write:
$$p^3 + 4 p^2 - 10 + A\left(p^3 + 4 p^2 - 10\right) = 0. \tag{29}$$
Solving for $p^3$, we obtain $p^3 = 10 - 4 p^2 - A\left(p^3 + 4 p^2 - 10\right)$, or
$$p = g_6(p) = \left(\frac{10}{p} - 4 p - A\left(p^2 + 4 p - \frac{10}{p}\right)\right)^{1/2}; \tag{30}$$
we note that if $A = 0$, we recover case B.
On the contrary, next we will optimize the value of $A$ by requiring that the condition $g_6'(\tilde{x}) = 0$ be satisfied, where $\tilde{x}$ is the root of (28), in order to guarantee a quadratically convergent process, as was explained in Section 2.
After differentiating (30):
$$g_6'(p) = \frac{-\frac{10}{p^2} - 4 - A\left(2 p + 4 + \frac{10}{p^2}\right)}{2\left(\frac{10}{p} - 4 p - A\left(p^2 + 4 p - \frac{10}{p}\right)\right)^{1/2}}. \tag{31}$$
After applying the condition $g_6'(\tilde{x}) = 0$, we obtain:
$$A = -\frac{10 + 4 \tilde{x}^2}{2 \tilde{x}^3 + 4 \tilde{x}^2 + 10}. \tag{32}$$
Therefore, from (30) we propose the iterative process:
$$p_i = \left(\frac{10}{p_{i-1}} - 4 p_{i-1} - A_{i-1}\left(p_{i-1}^2 + 4 p_{i-1} - \frac{10}{p_{i-1}}\right)\right)^{1/2}, \tag{33}$$
where:
$$A_{i-1} = -\frac{10 + 4 p_{i-1}^2}{2 p_{i-1}^3 + 4 p_{i-1}^2 + 10} \tag{34}$$
and $i = 1, 2, 3, \ldots$
After evaluating (33) for $i = 1$, considering again the initial point $p_0 = 1.5$, we get:
$$p_1 = 1.354603800. \tag{35}$$
Evaluating (33) for $i = 2$, and after employing (35), we obtain:
$$p_2 = 1.365161089. \tag{36}$$
Next, we evaluate (33) for $i = 3$, using (36), to get:
$$p_3 = 1.365230010. \tag{37}$$
The evaluation of (33) for $i = 4$ yields:
$$p_4 = 1.365230013. \tag{38}$$
We note from Table 1 that the fixed point iteration methods E and F require only four iterations to obtain the same precision of nine digits. In accordance with Figure 1 and Figure 2, we note that $g_6(p)$ effectively maps $[1,2]$ into itself and also that there exists a constant $k < 1$ such that $|g_6'(p)| \leq k$ for $x \in [1,2]$. Therefore, the fixed point theorem guarantees the convergence of the method.
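A compact sketch of the iteration (33) and (34) follows (our own code; the iteration count is fixed at four to match the table). Starting from $p_0 = 1.5$, it reproduces the column F values (35)–(38):

```python
import math

def efpm_case_F(p0=1.5, iterations=4):
    """EFPM iteration (33)-(34): case B augmented with the optimized factor A."""
    p = p0
    for _ in range(iterations):
        A = -(10 + 4 * p**2) / (2 * p**3 + 4 * p**2 + 10)            # Eq. (34)
        p = math.sqrt(10 / p - 4 * p - A * (p**2 + 4 * p - 10 / p))  # Eq. (33)
        print(p)
    return p

efpm_case_F()  # iterates ~ 1.354603800, 1.365161089, 1.365230010, 1.365230013
```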
(G). We notice that $p = g_3(p) = \frac{1}{2}\left(10 - p^3\right)^{1/2}$ produced an approximate solution for (28), but it required 30 iterations in order to reach an accuracy of nine digits, as shown in Table 1.
With the purpose of obtaining a faster iterative process based on C, we consider the following equation, which clearly possesses the same relevant root as (28):
$$p^3 + 4 p^2 - 10 - A\left(p^3 + 4 p^2 - 10\right) = 0. \tag{39}$$
Solving for $p^2$, we get:
$$p = \frac{1}{2}\left(10 - p^3 + A\left(p^3 + 4 p^2 - 10\right)\right)^{1/2}, \tag{40}$$
or:
$$p = g_7(p) = \frac{1}{2}\left(10 - p^3 + A\left(p^3 + 4 p^2 - 10\right)\right)^{1/2}; \tag{41}$$
we note that if $A = 0$, we recover case C.
Next, we will optimize the value of $A$ by requiring that the condition $g_7'(\tilde{x}) = 0$ be satisfied, where $\tilde{x}$ is the root of (28), in order to guarantee a quadratically convergent process, as was explained in Section 2.
Differentiating (41) and imposing this condition results in:
$$A = \frac{3 \tilde{x}}{3 \tilde{x} + 8}. \tag{42}$$
From (41) we propose the iterative process:
$$p_i = \frac{1}{2}\left(10 - p_{i-1}^3 + A_{i-1}\left(p_{i-1}^3 + 4 p_{i-1}^2 - 10\right)\right)^{1/2}, \tag{43}$$
where:
$$A_{i-1} = \frac{3 p_{i-1}}{3 p_{i-1} + 8} \tag{44}$$
and $i = 1, 2, 3, \ldots$
After evaluating (43) for $i = 1$, considering again the initial point $p_0 = 1.5$, we obtain:
$$p_1 = 1.3674794331. \tag{45}$$
On the other hand, evaluating (43) for $i = 2$, and after employing (45), we obtain:
$$p_2 = 1.36523306408. \tag{46}$$
Next, we evaluate (43) for $i = 3$, using (46), to get:
$$p_3 = 1.365230013. \tag{47}$$
Table 1 shows that, unlike the fixed point iteration method C, which required thirty iterations, the proposed method G derived from C requires only three iterations to reach the same precision of nine digits. As a matter of fact, it is notable that the iterative process (43) and (44) required fewer iterations than the Newton–Raphson method to reach the same accuracy for this case study, although both methods have quadratic convergence. From Figure 3 and Figure 4, it follows that $g_7(p)$ maps $[1,2]$ into itself and that $|g_7'(p)| \leq k$ for $x \in [1,2]$, where $k < 1$; therefore, the fixed point theorem guarantees the convergence. As a matter of fact, from the same Figure 4, we note that $k \approx 0.15$, only fifteen percent of unity, which explains the fast convergence of $g_7(p)$.
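As with case F, the iteration (43) and (44) is immediate to code. The sketch below is our own (three iterations chosen to match the table) and reproduces the column G values (45)–(47) from $p_0 = 1.5$:

```python
import math

def efpm_case_G(p0=1.5, iterations=3):
    """EFPM iteration (43)-(44): case C augmented with the optimized factor A."""
    p = p0
    for _ in range(iterations):
        A = 3 * p / (3 * p + 8)                                        # Eq. (44)
        p = 0.5 * math.sqrt(10 - p**3 + A * (p**3 + 4 * p**2 - 10))    # Eq. (43)
        print(p)
    return p

efpm_case_G()  # iterates ~ 1.3674794331, 1.3652330641, 1.3652300134
```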
(H). Next, we will solve (28) by using the modified version of Newton–Raphson given by (22) and (23).
After evaluating (22) for $n = 1$, considering $f(p) = p^3 + 4 p^2 - 10$ and the initial point $p_0 = 1.5$, we obtain:
$$p_1 = 1.365616748. \tag{48}$$
On the other hand, evaluating (22) for $n = 2$, and after employing (48), we get:
$$p_2 = 1.365230013. \tag{49}$$
Table 1 shows that the proposed method H requires only two iterations to obtain a precision of nine digits. As a matter of fact, it is notable that the iterative process (22) and (23) required fewer iterations than the Newton–Raphson method to reach the same accuracy, which follows from the fact that our modified version is a third-order convergent method.
Finally, to compare EFPM with other methods whose origin is other than the fixed point, we added columns I and J to Table 1. Column I corresponds to the solution of (28) by using the Ying Buzu Shu method [6], which requires choosing starting points $p_1$ and $p_2$ in order to obtain a more accurate one. The method is especially effective for monotonous functions and demands that $f(p_1) f(p_2) < 0$ [7]. After choosing the values $p_1 = 1$ and $p_2 = 1.5$, for which $f(p_1) f(p_2) < 0$, we note that the Ying Buzu Shu method achieved an acceptable convergence, since it required five iterations to obtain a precision of nine digits. From Table 1, we note that EFPM converged faster than the Ying Buzu Shu method. On the other hand, column J corresponds to the solution of (28) by using the Bubbfil algorithm 1 [7], which is a modification of the aforementioned Ying Buzu Shu method [6]. This method also requires two initial guesses, $p_1$ and $p_2$, such that $f(p_1) f(p_2) < 0$. We also chose $p_1 = 1$ and $p_2 = 1.5$, and from Table 1 we note that the Bubbfil method reached a precision of nine digits by using only three iterations, which makes it competitive with EFPM. Nevertheless, the above-mentioned condition $f(p_1) f(p_2) < 0$, required to start the simulation, can lead this strategy (although powerful) to spend too much computing time, in particular when the equation to be solved is highly nonlinear. Moreover, other favorable points of EFPM are that it is a fixed point method (although enhanced) and that its performance is predictable, since it relies on the well-known fixed point theory explained in Section 2.

5. Application of Enhanced Fixed Point Method for Science and Engineering Problems

This section presents two case studies that exemplify how to use the proposed method and show its potential application to science and engineering.

5.1. Case 1

Gelfand equation (Bratu’s differential equation in one dimension) is very important in science and engineering; for example, the Chandrasekar model for the universe expansion and the chemical reactor theory, among many others [23]. Bratu’s differential equation is expressed by:
y ( x ) + λ exp ( y ) = 0 , 0 < x < 1 ,
where y ( 0 ) = y ( 1 ) = 0 .
It turns out that the planar one-dimensional Bratu problem solution is given by:
y ( x ) = 2 ln cosh ( α ) cosh α 1 2 x ,
where α satisfies the transcendental equation,
cosh ( α ) = 4 α 2 λ .
Next, we will solve the above equation for the value λ = 2 so that (52) adopts the form:
cosh ( α ) = 2 α .
With the end to corroborate the accuracy of the proposed method, next we provide the solution of (53):
α = 0.5893877635 .
With the purpose of obtaining a fast iterative process based on EFPM, we consider the following equation, which clearly possesses the same relevant root as (53):
$$\cosh(\alpha) - 2 \alpha + \beta\left(\cosh(\alpha) - 2 \alpha\right) = 0, \tag{55}$$
where the value of $\beta$ is calculated so as to guarantee a quadratically convergent process.
Therefore, following EFPM, we rewrite (55) as:
$$\alpha = \frac{1}{2}\cosh(\alpha) + \frac{\beta}{2}\left(\cosh(\alpha) - 2 \alpha\right), \tag{56}$$
whereby it is possible to identify:
$$g(\alpha) = \frac{1}{2}\cosh(\alpha) + \frac{\beta}{2}\left(\cosh(\alpha) - 2 \alpha\right). \tag{57}$$
Next, we will optimize the value of $\beta$ by requiring that the condition $g'(\tilde{x}) = 0$ be satisfied, where $\tilde{x}$ is the root of (53):
$$\frac{\beta}{2} = -\frac{1}{2}\cdot\frac{\sinh(\tilde{x})}{\sinh(\tilde{x}) - 2}; \tag{58}$$
thus, from (56) and (58), we propose the iterative process:
$$\alpha_n = \frac{1}{2}\cosh(\alpha_{n-1}) + \frac{1}{2}\cdot\frac{\sinh(\alpha_{n-1})}{2 - \sinh(\alpha_{n-1})}\left(\cosh(\alpha_{n-1}) - 2 \alpha_{n-1}\right), \tag{59}$$
where $n = 1, 2, 3, \ldots$
Next, we evaluate (59) for $n = 1$, considering as initial point $\alpha_0 = 0.55$, to obtain:
$$\alpha_1 = 0.5887533679. \tag{60}$$
On the other hand, evaluating (59) for $n = 2$, and after employing (60), we get:
$$\alpha_2 = 0.5893875911. \tag{61}$$
Next, we evaluate (59) for $n = 3$, to obtain:
$$\alpha_3 = 0.58938776337; \tag{62}$$
from (54) and (62), we note that only three iterations are required to obtain a precision of nine digits.
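The iteration (59) can be checked with a few lines of code. This sketch is ours (the fixed three-iteration loop and the printout are illustrative choices):

```python
import math

def efpm_bratu(alpha0=0.55, iterations=3):
    """EFPM iteration (59) for cosh(alpha) = 2*alpha (Bratu problem, lambda = 2)."""
    a = alpha0
    for _ in range(iterations):
        factor = math.sinh(a) / (2 - math.sinh(a))
        a = 0.5 * math.cosh(a) + 0.5 * factor * (math.cosh(a) - 2 * a)  # Eq. (59)
        print(a)
    return a

efpm_bratu()  # iterates ~ 0.5887533679, 0.5893875911, 0.5893877634
```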

5.2. Case 2

Coastal structures: the fundamentals of wave mechanics and coastal processes are briefly introduced in [23]. From that reference, we notice that the wave length $L$ of a progressive surface wave is given by:
$$L = \frac{g T^2}{2 \pi}\tanh\left(\frac{2 \pi d}{L}\right), \tag{63}$$
where $g$ is the gravity constant expressed in $\mathrm{m/s^2}$, $T$ is the period in seconds, and $d$ denotes the water depth.
Next, we employ EFPM in order to solve (63) for $L$ (given the period $T$). For the sake of simplicity, we choose the values of $T$ and $d$ in such a way that (63) adopts the simplified form:
$$L = 0.5 \tanh\left(\frac{1}{L}\right). \tag{64}$$
Let us make the substitution:
$$L = \frac{1}{y}, \tag{65}$$
so that (64) is rewritten as:
$$y = 2 \coth(y). \tag{66}$$
With the purpose of corroborating the precision of EFPM, we provide in advance the solution of (66):
$$y = 2.065338139. \tag{67}$$
To obtain a fast iterative process, we will consider the following equation, which possesses the same relevant root as (66):
$$\coth(y) - 0.5 y + \beta\left(\coth(y) - 0.5 y\right) = 0, \tag{68}$$
where the value of $\beta$ is calculated so as to guarantee a quadratically convergent process.
Therefore, following the proposed method, we rewrite (68) as:
$$y = 2 \coth(y) + 2 \beta\left(\coth(y) - 0.5 y\right), \tag{69}$$
whereby it is possible to identify:
$$g(y) = 2 \coth(y) + 2 \beta\left(\coth(y) - 0.5 y\right). \tag{70}$$
Next, we optimize the value of $\beta$ by requiring that the condition $g'(\tilde{y}) = 0$ be satisfied, where $\tilde{y}$ is the root of (66):
$$2 \beta = -\frac{2 \operatorname{csch}^2(\tilde{y})}{0.5 + \operatorname{csch}^2(\tilde{y})}; \tag{71}$$
thus, from (69) and (71), we propose the iterative process:
$$y_n = 2 \coth(y_{n-1}) - \frac{2 \operatorname{csch}^2(y_{n-1})}{0.5 + \operatorname{csch}^2(y_{n-1})}\left(\coth(y_{n-1}) - 0.5 y_{n-1}\right), \tag{72}$$
where $n = 1, 2, 3, \ldots$
Next, we will evaluate (72) for $n = 1$, considering as initial point $y_0 = 1.6$:
$$y_1 = 2.0208335814. \tag{73}$$
On the other hand, evaluating (72) for $n = 2$, and after employing (73), we get:
$$y_2 = 2.0650854583. \tag{74}$$
In the same way, after evaluating (72) for $n = 3$, we obtain:
$$y_3 = 2.06533813118; \tag{75}$$
from (67) and (75), we note that only three iterations are required to obtain a precision of eight digits.
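Again, the iteration (72) is easy to verify numerically. The sketch below is our own (Python's math module has no csch, so it is built from sinh):

```python
import math

def efpm_wavelength(y0=1.6, iterations=3):
    """EFPM iteration (72) for y = 2*coth(y), obtained from Eq. (64) via y = 1/L."""
    y = y0
    for _ in range(iterations):
        coth = math.cosh(y) / math.sinh(y)
        csch2 = 1 / math.sinh(y) ** 2
        y = 2 * coth - 2 * csch2 / (0.5 + csch2) * (coth - 0.5 * y)  # Eq. (72)
        print(y)
    return y

L = 1 / efpm_wavelength()  # the wave length follows from the substitution (65)
```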

6. Discussion

This work introduced the Enhanced Fixed Point Method (EFPM), which is a modification of the procedure for finding an exact or approximate solution of a linear or nonlinear algebraic equation. We note that the proposed method expresses an algebraic equation in terms of the same equation multiplied by an adequate factor, which many times is just a simple numeric factor or a linear function of the variable involved, in such a way that the original equation and the modified equation possess the same relevant solution. The modified equation is expressed as a fixed point problem, and the proposed parameters are optimized in order to accelerate the convergence of the fixed point problem for solving the original equation. Following the ideas of Section 2, the above-mentioned parameters are chosen so as to obtain an iterative process with a higher order of convergence in comparison with the original fixed point method. As explained in Section 2, most fixed point methods present merely linear convergence, which implies that when such a method converges, it does so slowly. The question of accelerating a slowly convergent sequence was discussed, above all, in connection with the Aitken method [2]. When the initial sequence is generated by a fixed point iteration, it is possible to accelerate the convergence to quadratic by using a modified version of the Aitken method known as the Steffensen method. This procedure consists of successively alternating the Aitken and fixed point methods, and the results show that, effectively, the efficiency of this technique is similar to that of Newton's method. Although the Steffensen method is not complicated to apply, this work proposed EFPM which, unlike the Steffensen method, is a single, self-contained method to accelerate the convergence of fixed point iterations. Moreover, while the Steffensen method requires a convergent sequence to work on, EFPM provides a procedure for proposing a fixed point problem whose iteration, in case of convergence, is of higher than linear order (generally quadratic). A relevant point of EFPM is related to the fixed point theorem, which provides sufficient conditions for the existence and uniqueness of a fixed point. The mentioned theorem guarantees the uniqueness of the fixed point of a function $g$ such that $g \in C[a,b]$ and $g(x) \in [a,b]$, provided that, in addition, $g'(x)$ exists on $(a,b)$ and a positive constant $k < 1$ exists with $|g'(x)| \leq k$ for all $x \in (a,b)$. Given that EFPM enforces $g'(\tilde{x}) = 0$ (where $\tilde{x}$ is the fixed point of $g$ and $\tilde{x} \in [a,b]$), the condition $g'(\tilde{x}) = 0$ contributes to fulfilling $|g'(x)| \leq k$, $k < 1$, in $(a,b)$, or to redefining the interval to a neighborhood of $\tilde{x}$ where the above condition is verified.
These ideas were verified in the application of EFPM to the solution of (28) and in its comparison with the fixed point methods described in B and C of Table 1. As was explained in Section 4, the fixed point problem B described by $g_2$ does not map $[1,2]$ into itself, and the condition $|g_2'(p)| < 1$ is not satisfied for all the values of this interval; therefore, this method is not expected to converge. On the other hand, with respect to the fixed point problem C, we note that $|g_3'(2)| \approx 2.12$, whereby the condition $|g_3'(p)| \leq k < 1$ is not satisfied in $[1,2]$. This example shows that sometimes shortening the interval reveals the convergence of a sequence. Considering the interval $[1, 1.5]$ instead of $[1,2]$, it is easy to show that $g_3$ maps $[1, 1.5]$ into itself, and since $|g_3'(p)| \leq 0.66$ in the interval $[1, 1.5]$, the fixed point theorem confirmed the convergence of the iterative process derived from C. As a matter of fact, the iterative process that emerges from C converges but needed 30 iterations to obtain the required precision for the problem, while the one that comes from B ends in a complex number (see Table 1). On the contrary, EFPM slightly modified Equation (28) with the purpose of obtaining adequate fixed point problems that generalize those proposed in B and C.
In order to improve the poor performance of B, in accordance with EFPM, we proposed (29). After solving for $p^3$, we obtained (30), which generalizes B. The proposed method optimized the value of the parameter $A$ by requiring that the condition $g_6'(\tilde{x}) = 0$ be satisfied, where $\tilde{x}$ is the root of (28), in order to guarantee a quadratically convergent process, as was explained in Section 2. We noted from Table 1 that the fixed point iteration method F requires only four iterations to obtain a precision of nine digits. In accordance with Figure 1 and Figure 2, we notice that $g_6(p)$ effectively maps $[1,2]$ into itself and that there also exists a constant $k < 1$ such that $|g_6'(p)| \leq k$ for $x \in [1,2]$. Therefore, the fixed point theorem guarantees the convergence of the method. It is also worth mentioning that the simple extension proposed by EFPM helped to transform a divergent method into a single convergent method which is quadratically convergent (unlike the Steffensen method).
On the other hand, we improved the convergence of the iterative process which derives from C. As was noticed, the iterative process that emerges from C converges but requires 30 iterations to reach the required precision for the problem.
To get a faster iterative process based on C, we proposed (39), which clearly possesses the same relevant root as (28). After solving for $p^2$, we got (41), which reduces to C for the particular case $A = 0$. In accordance with the proposed method, we obtained the value of $A$ by requiring the condition $g_7'(\tilde{x}) = 0$, where $\tilde{x}$ is the root of (28), with the purpose of guaranteeing a quadratically convergent process. From (41), we derived the iterative process (43). As a matter of fact, from Table 1, we noted that, unlike the fixed point iteration method C, the proposed method G derived from C requires only three iterations to obtain the same precision of nine digits; from this follows the potential of EFPM in the search for roots of an algebraic equation.
Figure 3 and Figure 4 indicate that $g_7(p)$ maps $[1,2]$ into itself and that the condition $|g_7'(p)| < 1$ is satisfied in the same interval. Therefore, the fixed point theorem guarantees the convergence. The improvement of C introduced by EFPM through method G is notable.
A relevant fact about EFPM is the proposal of a generalized Newton method of order higher than the second. A notable fact is that the iterative process that emanates from the Newton–Raphson method (1) is of second order independently of the function $f$ (it is only assumed that $f'(\tilde{x}) \neq 0$). Making adequate use of this result, we proposed a third-order Newton–Raphson method. The basic result (17) led to the fundamental substitution (18), where $A$ is a parameter that was determined to satisfy (17), with the purpose of obtaining the third-order Newton–Raphson method (22). As a matter of fact, we noticed that it is possible to continue in this manner and propose, in principle, Newton–Raphson methods of higher orders, although the higher the order of the Newton method derived from the proposed method, the longer and more cumbersome the calculations become. From Table 1, we note that the application of (22) to the solution of (28) required only two iterations to reach the precision of nine decimal places proposed by the problem, which shows the efficiency of (22) and, in general, the potential of EFPM as a practical tool to solve algebraic equations.
In the same way, in order to compare EFPM with other methods whose origin is different from that of the fixed point method, we added columns I and J to Table 1, which correspond to the solution of (28) by using the Ying Buzu Shu and Bubbfil algorithm 1 methods. We noticed that, although both methods had a good performance (above all Bubbfil algorithm 1), they have to satisfy the condition $f(p_1) f(p_2) < 0$. Unlike the proposed method, the methods that require two initial guesses satisfying the above condition can spend too much computing time, above all for the case of highly nonlinear equations. On the other hand, we applied EFPM to the solution of nonlinear equations which arise from scientific and engineering problems. In the first case, we solved a relevant algebraic equation related to the solution of the Gelfand differential equation, while the second case is related to the fundamentals of wave mechanics and coastal processes. In the first case, EFPM required only three iterations to reach a precision of nine digits, while in the second case study, the proposed method required only three iterations to reach a precision of eight digits.

7. Conclusions

This article introduced EFPM as an option to accelerate the convergence of fixed point methods, with the purpose of obtaining exact and approximate solutions of nonlinear algebraic equations. We noted that the proposed method expresses the algebraic equation in terms of the same equation multiplied by one or several adequate factors, in such a way that the original equation and the modified equation possess the same relevant solution. The modified equation is expressed in terms of a fixed point problem, and the proposed parameters are determined in order to accelerate the convergence of the iterative process derived from the above-mentioned fixed point problem and, thereby, to accelerate the process of solving the original algebraic equation. This article showed that the application of EFPM, even in the case of divergent fixed point problems or convergent but slow ones, is able to produce iterative processes of order higher than one. These characteristics make EFPM an ideal method for practical applications. A future work would consist of applying the proposed method to the search for exact and approximate solutions of nonlinear systems of algebraic equations. In the same way, following the idea introduced by [24], another possible future application of the proposed method would be to enlarge its use in order to find exact and analytical approximate solutions of nonlinear differential equations.

Author Contributions

Conceptualization, U.F.-N. and H.V.-L.; formal analysis, U.F.-N., J.H.-C., J.M.-C., A.L.H.-M., M.A.S.-H. and V.M.J.-F.; investigation, U.F.-N., H.V.-L., J.H.-C., J.M.-C., A.L.H.-M., M.A.S.-H. and V.M.J.-F.; methodology, U.F.-N., H.V.-L., J.H.-C., J.M.-C., A.L.H.-M., M.A.S.-H. and V.M.J.-F.; project administration, U.F.-N. and H.V.-L.; software, U.F.-N., H.V.-L. and M.A.S.-H.; supervision, U.F.-N. and H.V.-L.; validation, U.F.-N. and H.V.-L.; visualization, U.F.-N. and H.V.-L.; writing—original draft, U.F.-N. and H.V.-L.; writing—review and editing, U.F.-N. and H.V.-L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank Roberto Ruiz Gomez for his contribution to this project.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nieves Hurtado, A.; Domínguez Sánchez, F.C. Métodos Numéricos: Aplicados a la Ingeniería; Grupo Editorial Patria: Mexico City, Mexico, 2014; ISBN 978-607-438-926-5 (Electronic). [Google Scholar]
  2. Faires, J.D.; Burden, R.L. Numerical Methods, 3rd ed.; Brooks Cole: Pacific Grove, CA, USA, 2002; ISBN 10 0534407617. [Google Scholar]
  3. Householder, A.S. The Numerical Treatment of a Single Nonlinear Equation; McGraw-Hill: New York, NY, USA, 1970. [Google Scholar]
  4. Traub, J. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  5. Brent, R.P. Algorithms for Minimization without Derivatives; Prentice-Hall: Englewood Cliffs, NJ, USA, 1973. [Google Scholar]
  6. Liu, Y.Q.; He, J.H. On relationship between two ancient Chinese algorithms and their application to flash evaporation. Results Phys. 2017, 7, 320–322. [Google Scholar] [CrossRef]
  7. He, C.H. An introduction to an ancient Chinese algorithm and its modification. Int. J. Numer. Methods Heat Fluid Flow 2016, 26, 2486–2491. [Google Scholar] [CrossRef]
  8. Eftekhari, T. A New Sixth-Order Steffensen-Type Iterative Method for Solving Nonlinear Equations. Int. J. Anal. 2014, 2014, 685796. [Google Scholar] [CrossRef] [Green Version]
  9. Singh, A.; Jaiswal, J.P. Several New Third-Order and Fourth-Order Iterative Methods for Solving Nonlinear Equations. Int. J. Eng. Math. 2014, 2014, 828409. [Google Scholar] [CrossRef] [Green Version]
  10. Ullah, M.Z.; Al-Fhaid, A.S.; Ahmad, F. Four-Point Optimal Sixteenth-Order Iterative Method for Solving Nonlinear Equations. J. Appl. Math. 2013, 2013, 850365. [Google Scholar] [CrossRef] [Green Version]
  11. Liu, T.; Li, H. Some New Variants of Cauchy’s Methods for Solving Nonlinear Equations. J. Appl. Math. 2012, 2012, 927450. [Google Scholar] [CrossRef]
  12. Jaiswal, J.P. Two Bi-Accelerator Improved with Memory Schemes for Solving Nonlinear Equations. Discret. Dyn. Nat. Soc. 2015, 2015, 938606. [Google Scholar] [CrossRef]
  13. Babajee, D.K.R.; Thukral, R. On a 4-Point Sixteenth-Order King Family of Iterative Methods for Solving Nonlinear Equations. Int. J. Math. Math. Sci. 2012, 2012, 979245. [Google Scholar] [CrossRef]
  14. Barrada, M.; Ouaissa, M.; Rhazali, Y.; Ouaissa, M. A New Class of Halley’s Method with Third-Order Convergence for Solving Nonlinear Equations. J. Appl. Math. 2020, 2020, 3561743. [Google Scholar] [CrossRef]
  15. Khan, W.A.; Noor, M.A.; Rauf, A. Second Derivatives Free Fourth-Order Iterative Method Solving for Nonlinear Equation. Appl. Math. 2015, 5, 15–20. [Google Scholar] [CrossRef]
  16. Neta, B. A New Derivative-Free Method to Solve Nonlinear Equations. Mathematics 2021, 9, 583. [Google Scholar] [CrossRef]
  17. Chicharro, F.I.; Contreras, R.A.; Garrido, N. A Family of Multiple-Root Finding Iterative Methods Based on Weight Functions. Mathematics 2020, 8, 2194. [Google Scholar] [CrossRef]
  18. Kansal, M.; Cordero, A.; Bhalla, S.; Torregrosa, J.R. Memory in a New Variant of King’s Family for Solving Nonlinear Systems. Mathematics 2020, 8, 1251. [Google Scholar] [CrossRef]
  19. Amat, S.; Castro, R.; Honorato, G.; Magreñán, Á. Purely Iterative Algorithms for Newton’s Maps and General Convergence. Mathematics 2020, 8, 1158. [Google Scholar] [CrossRef]
  20. Wang, X.; Zhu, M. Two Iterative Methods with Memory Constructed by the Method of Inverse Interpolation and Their Dynamics. Mathematics 2020, 8, 1080. [Google Scholar] [CrossRef]
  21. Lael, F.; Nourouzi, K. Fixed points of mappings defined on probabilistic modular spaces. Bull. Math. Anal. Appl. 2012, 4, 23–28. [Google Scholar]
  22. Ostrowski, A.M. Solution of Equations and Systems of Equations, 2nd ed.; Academic Press: New York, NY, USA, 1966. [Google Scholar]
  23. Vazquez-Leal, H.; Sandoval-Hernandez, M.A.; Filobello-Nino, U. The novel family of transcendental Leal-functions with applications to science and engineering. Heliyon 2020, 6, e05418. [Google Scholar] [CrossRef] [PubMed]
  24. He, C.H.; He, J.H. Double trials method for nonlinear problems arising in heat transfer. Therm. Sci. 2011, 15, 153–155. [Google Scholar] [CrossRef]
Figure 1. Graph of $g_6(p)$.
Figure 2. Graph of the derivative of $g_6(p)$.
Figure 3. Graph of $g_7(p)$.
Figure 4. Graph of the derivative of $g_7(p)$.
Table 1. Comparison of the results obtained for EFPM with other fixed point methods.

| n | A | B | C | D | E | F | G | H | I | J |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 | p1 = 1, p2 = 1.5 | p1 = 1, p2 = 1.5 |
| 1 | −0.875 | 0.8165 | 1.286953768 | 1.348399725 | 1.373333333 | 1.354603800 | 1.367479433 | 1.365616748 | 1.338983050 | 1.338983050 |
| 2 | 6.732 | 2.9969 | 1.402540804 | 1.367376372 | 1.365262015 | 1.365161089 | 1.365233064 | 1.365230013 | 1.363562849 | 1.365220918 |
| 3 | −469.7 | (−8.65)^(1/2) | 1.345458374 | 1.364957015 | 1.365230014 | 1.365230010 | 1.365230013 | | 1.365251687 | 1.365230013 |
| 4 | 1 × 10^8 | | 1.375170253 | 1.365264748 | 1.365230013 | 1.365230013 | | | 1.365229995 | |
| 5 | | | 1.360094193 | 1.365225594 | | | | | 1.365230013 | |
| 6 | | | 1.367846968 | 1.365230576 | | | | | | |
| 7 | | | 1.363887004 | 1.365229942 | | | | | | |
| 8 | | | 1.365916734 | 1.365230022 | | | | | | |
| 9 | | | 1.364878217 | 1.365230012 | | | | | | |
| 10 | | | 1.365410062 | 1.365230014 | | | | | | |
| 15 | | | 1.365223680 | 1.365230013 | | | | | | |
| 20 | | | 1.365230236 | | | | | | | |
| 25 | | | 1.365230006 | | | | | | | |
| 30 | | | 1.365230013 | | | | | | | |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
