1. Introduction
As is widely known, research on nonlinear algebraic equations is important because many phenomena, practical and theoretical, give rise to this kind of equation. As a matter of fact, it is difficult to find an area of engineering that can omit the use of these equations: electrical and electronic circuits, storage tanks, the chemical industry, agricultural irrigation, balancing chemical equations, industrial engineering, and so on [1]. For this reason, several methods focused on finding approximate solutions to nonlinear algebraic equations have been reported. As is well known, all of these methods begin with an initial approximation and then generate a sequence, which is expected to converge to a root of the equation to be solved. Some important methods found in the literature are those based on the bisection method [1,2], the false position method [2], the Newton–Raphson method [1,2], the secant method [1,2], the Muller method [1,2], the Laguerre method [2,3], the Jenkins–Traub method [2,4], the Brent method [2,5], the fixed point method [1,2], the Ying Buzu method [6], the He Chengtian average [6], Bubbfil algorithms [7], the Steffensen iterative method [8], new Steffensen-type iterative methods constructed through the weight function concept [9], a four-point optimal sixteenth-order iterative method [10], variants of Cauchy's methods [11], derivative-free methods with memory of R-order convergence [12], a one-parameter 4-point sixteenth-order King-type family of iterative methods [13], Halley's method [14], the modified Noor method [15], a derivative-free method based on Traub's method [16], a family of iterative methods for solving multiple-root nonlinear equations [17], high-order Jacobian-free methods with memory [18], iterative algorithms for Newton's maps [19], iterative methods with memory constructed by using inverse interpolation [20], and fixed points of mappings defined on probabilistic modular spaces [21], among many others. Three books considered classics on the solution of nonlinear equations are [3,4,22]. On the other hand, this work is heavily concerned with the notions of order of convergence and accelerated convergence, and both subjects will be briefly explained later. In general, a sequence with a high order of convergence converges faster than one generated by a method with a lower order. For this reason, the search for methods that accelerate the convergence of a given sequence is an important theme. One of the best known methods employed to accelerate the convergence of a linearly convergent sequence is the Aitken method [1,2]. When the initial sequence is generated by a fixed point iteration, it is possible to accelerate the convergence to quadratic by using a modified version of the Aitken method, known as the Steffensen method [1,2]. This procedure consists of successively alternating the Aitken and fixed point methods, and the results show that, effectively, the efficiency of this technique is similar to that of Newton's method. Although the Steffensen method is not particularly complicated to apply, this work proposes a simple method to accelerate the convergence of fixed point methods, which in general present only linear convergence. Subsequently, we will show that the proposed method is sometimes able to attain not only quadratic convergence but also higher-order iterative procedures, including versions of the Newton–Raphson method, which is a relevant particular case of fixed point methods. As a matter of fact, the goal of this work is to enhance the fixed point method and improve its convergence without the need to combine different methods, as occurs with the Steffensen method.
We will see that the proposed Enhanced Fixed Point Method (EFPM) is easy to use and provides an adequate alternative based on the classical fixed point method but, in general, clearly better. Furthermore, we will see that the application of the proposed method is not more complicated than the fixed point problem from which it is derived, which makes it even more attractive for practical applications.
The rest of this paper is organized as follows. In Section 2, we provide a brief review of the theory of the fixed point method and its application to solving linear and nonlinear algebraic equations. We will emphasize the character of the Newton–Raphson method as an important particular case of the fixed point method. Furthermore, we will introduce the relevant concept of convergence order. Section 3 presents the basic idea of the proposed method, EFPM. In Section 4, we apply EFPM with the purpose of finding solutions to nonlinear algebraic equations, and we compare the results obtained by EFPM with those obtained with the classical fixed point method in order to show the advantages of the proposed method. We also compare the proposed method with another two methods whose origins are other than the fixed point approach. Furthermore, Section 5 presents two applications of EFPM to problems of science and engineering. Section 6 discusses the main results obtained in this work. Finally, a brief conclusion is given in Section 7.
2. Fixed Point Method
The following review about the fixed point method is based on reference [2].
A fixed point of a function g is a real number p such that g(p) = p.
A relevant fact for this work is the connection between the search for a root of an equation f(x) = 0 and a fixed point problem related with the mentioned equation.
For the problem f(x) = 0, we could define a function g in many ways with a fixed point in p; for instance, g(x) = x - f(x), among many other possibilities. On the other hand, if g has a fixed point in a number p, then it is possible to define an equation that has a root in p; let us say f(x) = x - g(x) = 0.
The idea is to express the problem of calculating the root of f(x) = 0 in terms of the search for the fixed point of a function, taking advantage of everything that is known about the approximation of fixed points of a function.
To start, we mention a theorem that provides sufficient conditions for the existence and uniqueness of a fixed point.
2.1. Theorem 1
Theorem 1. - (1)
If g ∈ C[a, b] and g(x) ∈ [a, b] for all x ∈ [a, b], then g has a fixed point in [a, b].
- (2)
If, in addition, g'(x) exists in (a, b) and there is a positive constant K < 1 with |g'(x)| ≤ K for all x ∈ (a, b), then the fixed point in [a, b] is unique.
With the goal of approximating a fixed point of a function g, we have to choose an initial approximation p_0 and afterwards generate a sequence {p_n}, which is determined from p_n = g(p_{n-1}) for all values n ≥ 1. Assuming that the sequence converges to p and that the function g is continuous, the following result is verified: p = lim p_n = lim g(p_{n-1}) = g(lim p_{n-1}) = g(p). Thus, we obtain a solution of g(p) = p and, at the same time, of some algebraic equation related to this problem. This technique is known as fixed point iteration or functional iteration.
Given an algebraic equation, the following theorem guides us about which fixed point is convenient to use.
2.2. Theorem 2 (Fixed Point Theorem)
Theorem 2 (Fixed Point Theorem). Let g ∈ C[a, b] with g(x) ∈ [a, b] for all x ∈ [a, b]. Assume that g'(x) exists in (a, b) and that there is a positive constant K < 1 with |g'(x)| ≤ K for all x ∈ (a, b). Under these conditions, we know from Theorem 1 that there exists a unique fixed point in [a, b]. Consequently, for any number p_0 in [a, b], the sequence defined by p_n = g(p_{n-1}), n ≥ 1, converges to the unique fixed point p in [a, b].
These results for fixed points are evidently very relevant because they increase the possibility of finding the fixed point of a function and probably also allow us to find the solution of an algebraic equation. Nevertheless, we note that most fixed point methods present just linear convergence, which implies that when the method converges, it will do so slowly (see below).
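For readers who want to experiment, the functional iteration just described can be sketched in a few lines of code; the helper name `fixed_point`, the tolerance, and the g(x) = cos(x) example below are our illustrative choices, not taken from the paper.

```python
import math

# Sketch of functional iteration p_n = g(p_{n-1}); the stopping
# tolerance and iteration cap are illustrative choices.
def fixed_point(g, p0, tol=1e-9, max_iter=100):
    p_prev = p0
    for n in range(1, max_iter + 1):
        p = g(p_prev)
        if abs(p - p_prev) < tol:
            return p, n  # approximate fixed point, iterations used
        p_prev = p
    raise RuntimeError("fixed point iteration did not converge")

# g(x) = cos(x) maps [0, 1] into itself and |g'(x)| <= sin(1) < 1 there,
# so Theorem 2 guarantees convergence to the unique fixed point.
p, n = fixed_point(math.cos, 0.5)
```

Because |g'| is roughly 0.67 near the fixed point, the convergence is linear and takes dozens of iterations; this slowness is precisely what the method proposed below addresses.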
2.3. Newton–Raphson Method
It is possibly the most employed and powerful numerical technique for finding the root of an equation f(x) = 0. We will introduce this method by using Taylor polynomials, proceeding as follows [2].
Starting with f ∈ C^2[a, b], let x̄ be an approximation to the root p such that f'(x̄) ≠ 0 and |x̄ - p| is small. Next, we will expand f(x) around x̄ by using the Taylor polynomial:

f(x) = f(x̄) + (x - x̄) f'(x̄) + ((x - x̄)^2 / 2) f''(ξ(x)),

where the value of ξ(x) lies between x and x̄.
Evaluating the above equation for the root x = p,

0 = f(x̄) + (p - x̄) f'(x̄) + ((p - x̄)^2 / 2) f''(ξ(p)).

Next, we will assume that x̄ is sufficiently close to the root to neglect the last term of the equation with respect to the first two. Solving for p we obtain:

p ≈ x̄ - f(x̄) / f'(x̄).

The Newton method begins from an initial approximation x_0 and generates the sequence {x_n} defined by:

x_n = x_{n-1} - f(x_{n-1}) / f'(x_{n-1}),  (1)

for the values n ≥ 1.
On the other hand, we see that the Newton method is a functional iteration formula of the form x_n = g(x_{n-1}), where from (1):

g(x) = x - f(x) / f'(x).
Thus, we conclude that Newton–Raphson is a particular case of a fixed point problem.
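To make the connection concrete, here is a minimal sketch of Newton–Raphson written explicitly as the fixed point iteration of g(x) = x - f(x)/f'(x); the function names and the f(x) = x^2 - 2 example are our illustrative choices.

```python
# Newton-Raphson as functional iteration of g(x) = x - f(x)/f'(x).
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for n in range(1, max_iter + 1):
        x_new = x - f(x) / fprime(x)  # one application of the map g
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    raise RuntimeError("Newton iteration did not converge")

# Example: f(x) = x**2 - 2 with root sqrt(2).
root, n = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Note how few iterations the quadratic convergence needs compared with the linear fixed point example above.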
Next, we mention a convergence theorem for the Newton method which emphasizes the importance of selecting a convenient initial point.
2.4. Theorem 3
Theorem 3. Let f ∈ C^2[a, b]. If p ∈ [a, b] is such that f(p) = 0 and f'(p) ≠ 0, then there exists a δ > 0 such that the Newton method generates a sequence {x_n} that converges to p for any initial approximation x_0 ∈ [p - δ, p + δ].
Although this result is important for the Newton method, it is not of practical interest because it does not indicate how to calculate δ. Rather, its importance consists of the following: assuming that the conditions of Theorem 3 are fulfilled, the Newton method converges provided that x_0 is chosen accurately enough.
Next, we will provide a criterion to determine how fast a sequence converges. In particular, we will see that the Newton method converges faster than most fixed point procedures.
2.5. Convergence Order
So far we have seen that the magnitude of g'(p) indicates whether a process converges. Next, we will show that it also determines how fast the convergence is [1]. We define the error of the i-th iteration as e_i = p_i - p, where p denotes the value of the fixed point of g.
Assuming that the values of g and its derivatives are known in p, it is possible to expand g(p_i) around p by using a Taylor series, finding the following expression:

g(p_i) = g(p) + g'(p) e_i + (g''(p)/2) e_i^2 + (g'''(p)/6) e_i^3 + ...  (3)

But, in accordance with the functional iterative process of g, we have p_{i+1} = g(p_i), and at the same time, the fixed point satisfies g(p) = p. Therefore, after using these results and employing the definition of the error for the (i+1)-th iteration, the left hand side of (3) becomes the error of the (i+1)-th iteration, and therefore [1],

e_{i+1} = g'(p) e_i + (g''(p)/2) e_i^2 + (g'''(p)/6) e_i^3 + ...  (4)

We note that if, after performing i iterations, the value of e_i is such that |e_i| < 1, then the successive powers e_i^2, e_i^3, ... are smaller in magnitude than e_i, in such a way that if g'(p) ≠ 0, the magnitude of the first term of the right hand side of (4) is most of the time bigger than the magnitude of the rest of the terms, and e_{i+1} becomes proportional to e_i. On the other hand, if g'(p) = 0 and g''(p) ≠ 0, the magnitude of the second term of the right hand side of (4) is bigger than the magnitude of the rest of the terms; thus, e_{i+1} becomes proportional to e_i^2. In the same way, if g'(p) = 0, g''(p) = 0, and g'''(p) ≠ 0, then e_{i+1} becomes proportional to e_i^3. From the above, we see that the functional iterative process p_{i+1} = g(p_i) is of order one if g'(p) ≠ 0, of order two if g'(p) = 0 and g''(p) ≠ 0, of order three if g'(p) = 0, g''(p) = 0, and g'''(p) ≠ 0, and so on. If n is the order of the process, then e_{i+1} is proportional to e_i^n, and it is clear that e_{i+1} will become smaller than e_i for a bigger value of n; the convergence, therefore, will be faster. In general, a sequence with a high order converges faster than one produced by a method with a lower order. In this language, the Newton–Raphson method is of second order of convergence. As a matter of fact, this work will show how to obtain, in a systematic way, versions of the Newton method with a higher order of convergence and, in general, how to increase the convergence order of fixed point problems.
3. Enhanced Fixed Point Method (EFPM)
Next, we will provide the fundamentals of the EFPM method as a convenient tool to increase the order of convergence of the functional iteration technique.
As was already mentioned, given an algebraic equation, there exist many ways to express it in terms of a fixed point problem. The advantage of this latter procedure is that it is supported by some well established mathematical results, some of which were mentioned in the previous section. The proposal of this work is, on one hand, to make use of the mathematical results already explained and, on the other hand, to enhance the relation between the fixed point method and the search for roots of nonlinear algebraic equations.
As was already mentioned at the beginning of Section 2, for the problem f(x) = 0 we could define, in many ways, a function g with a fixed point in p. On the other hand, if g has a fixed point in a number p, then it is possible to define a function that has a root in p. Our proposal is simple: to slightly modify the algebraic equation to be solved, but without changing the root of the original equation.
We will find it useful to consider the equation:
where A is a parameter that is employed with the purpose of enhancing the possible fixed point procedures related to the search for the root of f(x) = 0.
We begin rewriting it (introducing another parameter B) in such a way that:
we rewrite (6) as follows:
Next, we propose a possible fixed point problem related with f(x) = 0:
where g denotes the function that results from applying this procedure to the right hand side of (7):
With the purpose of determining the parameter A, we require imposing the condition g'(p) = 0, where p is a fixed point of g (see the discussion below (4)). In accordance with the discussion of the last section, the error e_{i+1} will be proportional to e_i^2, and for this reason the functional iterative process generated will converge faster than a process of order one, for which g'(p) ≠ 0. This is precisely the case of quadratic convergence: e_{i+1} proportional to e_i^2.
In principle, we could try for a third-order functional iterative process, which would be relevant because, in general, a sequence with a high order of convergence converges faster than one from a method with a lower order.
Therefore, we could consider a modified equation as:
As in the case of (6), we note that the equation f(x) = 0 and (10) have the same relevant solution.
Following a procedure similar to the one that yields (9), we could obtain a fixed point problem where:
With the purpose of determining the two free parameters, we require imposing the conditions g'(p) = 0 and g''(p) = 0, where p is a fixed point of g. In this case, e_{i+1} would become proportional to e_i^3 and, for the same reason, we would get an iterative process of third order.
A possible problem is that solving the system g'(p) = 0, g''(p) = 0 for the parameters could result in a nonlinear system of equations which, in general, would be more complicated than the original problem to solve, but at least in principle it could be possible.
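To illustrate the parameter-tuning idea in its simplest generic form (our own toy construction, not the paper's specific equations (5)-(11)): embed a parameter A into the fixed point map, here g(x) = x + A f(x), and choose A so that g'(p) ≈ 0 at an estimate of the fixed point, which pushes the iteration from linear toward quadratic behavior.

```python
# Toy EFPM-style construction (our assumption, not the paper's formulas):
# g(x) = x + A*f(x), with A chosen so that g'(p) = 1 + A*f'(p) = 0.
def tuned_map(f, fprime, p_estimate):
    A = -1.0 / fprime(p_estimate)  # makes g'(p_estimate) vanish
    return lambda x: x + A * f(x)

# Example: f(x) = x**2 - 2. The naive map g(x) = x + f(x) diverges here
# (|g'| > 1 near the root), but tuning A with the rough estimate 1.4
# gives |g'(p)| of about 0.01, hence very fast convergence.
f = lambda x: x * x - 2.0
g = tuned_map(f, lambda x: 2.0 * x, 1.4)
x = 1.0
for _ in range(20):
    x = g(x)
```

The better the estimate of the root used to tune A, the closer g'(p) is to zero and the faster the resulting iteration.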
A relevant particular case of the iterative fixed point technique is the Newton–Raphson method.
It is relatively easy to show that this method is of second order.
From (1) we deduce that:

g(x) = x - f(x) / f'(x).  (12)

After differentiating (12), we obtain:

g'(x) = f(x) f''(x) / f'(x)^2.  (13)

Evaluating (13) for the root x = p and assuming, in accordance with Theorem 3, that f'(p) ≠ 0, we obtain:

g'(p) = 0.  (14)

Furthermore, differentiating (13) and evaluating the result for the root x = p, we deduce that in general g''(p) ≠ 0, since:

g''(p) = f''(p) / f'(p),  (15)

and therefore the method is of second order.
Unlike other fixed point problems, we point out the notable fact that the iterative process which emanates from the Newton–Raphson method is of second order independently of the function f (it is only assumed that f'(p) ≠ 0).
Next, we will make good use of this result in order to propose a relatively simple Newton–Raphson method of third order, as follows.
From the above, let us consider the problem

F(x) = 0,  (16)

for a function F.
Since g'(p) = 0 (see (14)), where p is a root of F, then assuming again that F'(p) ≠ 0, we deduce from (15) that a Newton–Raphson method of third order requires that:

f''(p) = 0.  (17)

Although F does not satisfy this requirement in general, we propose a function f, which is related to F in the following way:

f(x) = F(x) + A F(x)^2,  (18)

where A is a parameter that is determined to satisfy (17) with the purpose of obtaining a Newton–Raphson method of third order. In the same way, we notice that F and f share the same relevant roots.
Thus, after differentiating (18) twice and evaluating at the root, we obtain:

f''(p) = F''(p) + 2A F'(p)^2.  (19)

From (17) and (19) we obtain:

A = -F''(p) / (2 F'(p)^2).  (20)

Therefore, by using (18) we obtain the following fixed point problem related with a Newton–Raphson technique of third order:

g(x) = x - (F(x) + A F(x)^2) / (F'(x) + 2A F(x) F'(x)).  (21)

A third-order Newton method begins from an initial approximation x_0 and generates the sequence {x_n} defined by:

x_n = x_{n-1} - (F(x_{n-1}) + A F(x_{n-1})^2) / (F'(x_{n-1}) + 2A F(x_{n-1}) F'(x_{n-1})),  (22)

for the values n ≥ 1, where A is given by (20), that is:

A = -F''(p) / (2 F'(p)^2).  (23)
In principle, it is possible to continue in this manner and propose Newton–Raphson methods of higher order.
For example, for the case of a Newton–Raphson method of fourth order, we require that g'(p) = 0, g''(p) = 0, and g'''(p) = 0.
After differentiating (12) three times and substituting the root p, we get:

g'''(p) = 2 f'''(p) / f'(p) - 3 (f''(p) / f'(p))^2.  (24)

From (17) we deduce that:

g'''(p) = 2 f'''(p) / f'(p).  (25)

Thus, after imposing the condition g'''(p) = 0, we obtain:

f'''(p) = 0.  (26)
Therefore, an adequate function for this case would be:

f(x) = F(x) + A F(x)^2 + B F(x)^3,  (27)

where A and B are parameters that are determined to satisfy (17) and (26) with the purpose of obtaining a Newton–Raphson method of fourth order. In the same way, we notice that F and f share the same relevant roots. Given that A and B are determined through a linear system, in principle, it is possible to get a fourth-order Newton method and, in general, we could continue in this manner to obtain higher-order versions. Nevertheless, the iterative formulas associated with these methods would be too long and cumbersome for practical applications, whereby we will consider the third-order Newton method given by (22) and (23).
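A runnable sketch of the third-order idea: Newton applied to f(x) = F(x) + A F(x)^2 with A = -F''(p)/(2 F'(p)^2). Since the root p is not known in advance, the variant below (our assumption, not necessarily the paper's implementation) re-evaluates A at the current iterate, which preserves the third-order behavior.

```python
# Third-order Newton sketch: f = F + A*F**2 with A = -F''/(2*F'**2).
# A is evaluated at the current iterate (our practical assumption).
def newton3(F, Fp, Fpp, x0, tol=1e-14, max_iter=50):
    x = x0
    for n in range(1, max_iter + 1):
        A = -Fpp(x) / (2.0 * Fp(x) ** 2)
        fx = F(x) * (1.0 + A * F(x))            # f(x) = F + A*F**2
        fpx = Fp(x) * (1.0 + 2.0 * A * F(x))    # f'(x), with A held fixed
        x_new = x - fx / fpx
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    raise RuntimeError("did not converge")

# Example: F(x) = x**2 - 2, F'(x) = 2x, F''(x) = 2.
root, n = newton3(lambda x: x * x - 2.0, lambda x: 2.0 * x,
                  lambda x: 2.0, 1.0)
```

In practice this costs one extra derivative evaluation per step in exchange for cubic convergence.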
We notice that EFPM is a method that increases the convergence order (rate of convergence) of a fixed point iteration problem that converges slowly. As a matter of fact, although in principle the proposed method is able to increase the rate of convergence without limitations, except for those referring to mathematical difficulties, its objective is not to provide a method that converges as fast as possible.
4. Case Study
Next, we will consider the solution of the algebraic equation [2]

x^3 + 4x^2 - 10 = 0,  (28)

which possesses a single root in the interval [1, 2].
Next, we present several ways to express (28) in the form x = g(x) in order to show the functional iteration technique [2]:
- (A)
g_A(x) = x - x^3 - 4x^2 + 10;
- (B)
g_B(x) = (10/x - 4x)^(1/2);
- (C)
g_C(x) = (1/2)(10 - x^3)^(1/2);
- (D)
g_D(x) = (10/(4 + x))^(1/2);
- (E)
g_E(x) = x - (x^3 + 4x^2 - 10)/(3x^2 + 8x).
Considering the initial point p_0 = 1.5, the following table provides the results of the fixed point iteration method for the above five options of g. As a matter of fact, columns F, G, and H correspond to the application of the proposed method for the solution of (28), while columns I and J describe the results obtained with other methods whose origin is different from that of the fixed point method.
Following [2], we provide a brief comment in relation to the first five results shown in Table 1.
(A). g_A does not map [1, 2] into itself, and |g_A'(p)| > 1 for all values p in [1, 2]. Although the fixed point theorem cannot guarantee that the method necessarily fails, it is difficult to expect convergence by using this choice of g. (B). g_B not only does not map [1, 2] into itself, but the condition |g_B'(x)| ≤ K < 1 is not satisfied for all the values of this interval. Therefore, it is not expected that this method converges.
(C). In this case, it can be shown that |g_C'(2)| ≈ 2.12, whereby the condition |g_C'(x)| ≤ K < 1 is not satisfied in [1, 2]. Nevertheless, this example shows that sometimes shortening the interval reveals the convergence of a sequence. Thus, considering the interval [1, 1.5] instead of [1, 2], it is easy to show that g_C maps [1, 1.5] into itself and, since |g_C'(x)| ≤ |g_C'(1.5)| ≈ 0.66 < 1 in the interval [1, 1.5], the fixed point theorem confirms the convergence of the iterative process derived from C, as shown in Table 1.
(D). In this case, we note that |g_D'(x)| < 0.15 for all x in [1, 2]. Given that this bound on the magnitude of g_D' is much smaller than the corresponding bound for g_C', this explains the faster convergence obtained with g_D.
(E). The sequence defined by g_E converges much faster than the other fixed point methods. It turns out that this option corresponds to the Newton–Raphson method, which was explained in Section 2. As a matter of fact, the quadratic convergence of this method explains its effectiveness.
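The contrast between the linear options and Newton's quadratic option can be reproduced in a few lines. The snippet assumes that Equation (28) is the classical test problem x^3 + 4x^2 - 10 = 0 on [1, 2] from [2] (our reading of the elided equation) and counts iterations to nine-digit accuracy.

```python
import math

def count_iterations(g, p0, p_true, tol=1e-9, max_iter=60):
    p = p0
    for n in range(1, max_iter + 1):
        p = g(p)
        if abs(p - p_true) < tol:
            return n
    return None  # no convergence within max_iter

p_true = 1.365230013414097      # root of x**3 + 4x**2 - 10 in [1, 2]
g_D = lambda x: math.sqrt(10.0 / (4.0 + x))                  # option D
g_E = lambda x: x - (x**3 + 4*x**2 - 10) / (3*x**2 + 8*x)    # option E
n_D = count_iterations(g_D, 1.5, p_true)
n_E = count_iterations(g_E, 1.5, p_true)
# Newton (option E) needs far fewer iterations than the linear map D.
```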
Next, we will employ the proposed method in order to improve some of the results obtained for the several options of the fixed point method mentioned above.
(F). We notice that g_B totally failed to provide an approximate solution for (28). As a matter of fact, from Table 1, we see that the iterative process that derives from g_B yields an imaginary approximation in scarcely the third iteration of the process. Next, we will consider a slightly modified equation from (28). In accordance with Section 3, we will write:
Solving for x, we obtain:
or:
We note that if A = 0, we recover case B.
On the contrary, next we will optimize the value of A, requiring that the condition g'(p) = 0 be satisfied, where p is the root of (28), in order to guarantee a quadratically convergent process, as was explained in Section 2.
After differentiating (
30):
After applying the condition g'(p) = 0, we obtain:
Therefore, from (
30) we propose the iterative process:
where:
and
After evaluating (33) for n = 1, considering again the initial point p_0 = 1.5, we get:
Evaluating (33) for n = 2, and after employing (35), we obtain:
Next, we will evaluate (33) for n = 3, to get:
where we used (36).
The evaluation of (33) for n = 4 yields:
We note from Table 1 that the fixed point iteration methods E and F require only four iterations to obtain the same precision of nine digits. In accordance with Figure 1 and Figure 2, we note that g_F effectively maps the interval into itself and there also exists a constant K < 1 such that the condition |g_F'(x)| ≤ K holds for all x in the interval. Therefore, the fixed point theorem guarantees the convergence of the method.
(G). We notice that g_C yields an approximate solution for (28), but it requires 30 iterations in order to get an accuracy of nine digits, as shown in Table 1.
With the purpose of obtaining a faster iterative process based on C, we consider the following equation, which clearly possesses the same relevant root as (28):
Solving for x we get:
or:
We note that if A = 0, we recover case C.
Next, we will optimize the value of A, requiring that the condition g'(p) = 0 be satisfied, where p is the root of (28), in order to guarantee a quadratically convergent process, as was explained in Section 2.
The differentiation of (41) results in:
From (41) we propose the iterative process:
where:
and
After evaluating (43) for n = 1, considering again the initial point p_0 = 1.5, we obtain:
On the other hand, evaluating (43) for n = 2, and after employing (45), we obtain:
Next, we will evaluate (43) for n = 3, to get:
after using (46).
Table 1 shows that, unlike the fixed point iteration method C, which required thirty iterations, the proposed method G derived from C requires only three iterations to reach the same precision of nine digits. As a matter of fact, it is notable that the iterative process (43) and (44) required fewer iterations than the Newton–Raphson method to get the same accuracy for this case study, although both methods are of quadratic convergence. From Figure 3 and Figure 4, it follows that g_G maps the interval into itself and |g_G'(x)| ≤ K < 1 for x in that interval; therefore, the fixed point theorem guarantees the convergence. As a matter of fact, from the same Figure 4, we note that the magnitude of g_G' is bounded by approximately 0.15, which represents only fifteen percent of unity. This explains the fast convergence of g_G.
(H). Next, we will solve (28) by using the modified (third-order) version of Newton–Raphson given by (22) and (23).
After evaluating (22) for n = 1, considering A given by (23) and the initial point p_0 = 1.5, we obtain:
On the other hand, evaluating (22) for n = 2, and after employing (48), we get:
Table 1 shows that the proposed method H requires only two iterations to obtain a precision of nine digits. As a matter of fact, it is notable that the iterative process (22) and (23) required fewer iterations than the Newton–Raphson method to get the same accuracy, which follows from the fact that our modified version is a third-order convergent method.
Finally, to compare EFPM with other methods whose origin is other than the fixed point approach, we added columns I and J to Table 1. Column I corresponds to the solution of (28) by using the Ying Buzu Shu method [6], which requires choosing two starting points in order to get a more accurate one. The method is especially effective for monotonic functions and demands that the function values at the two starting points have opposite signs [7]. After choosing starting values that satisfy this condition, we note that the Ying Buzu Shu method achieved an acceptable convergence, since it required five iterations to obtain a precision of nine digits. From Table 1, we note that EFPM converged faster than the Ying Buzu Shu method. On the other hand, column J corresponds to the solution of (28) by using the Bubbfil algorithm 1 [7], which is a modification of the aforementioned Ying Buzu Shu method [6]. This method also requires two initial guesses whose function values have opposite signs. We chose the same starting values, and from Table 1 we note that the Bubbfil method achieved a precision of nine digits by using only three iterations, which makes it competitive when compared with EFPM. Nevertheless, the above mentioned sign condition required to start the simulation can lead this strategy (although powerful) to spend too much computing time, in particular when the equation to be solved is highly nonlinear. Moreover, other favorable points of EFPM are that it is a fixed point method (although enhanced) and that its performance is predictable, since it relies on the well known fixed point theory explained in Section 2.
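For completeness, a bracketing two-point update of the kind used by the Ying Buzu Shu method can be sketched as follows; the false-position-style formula and the stopping rule are our assumptions based on the description in [6,7], not code from those references.

```python
# Sketch of a bracketing (false-position-style) solver: from two points
# x1, x2 with f(x1)*f(x2) < 0, linear interpolation gives a better point.
def bracket_solve(f, x1, x2, tol=1e-9, max_iter=100):
    assert f(x1) * f(x2) < 0, "initial guesses must bracket the root"
    for n in range(1, max_iter + 1):
        x3 = (x1 * f(x2) - x2 * f(x1)) / (f(x2) - f(x1))
        if abs(f(x3)) < tol:
            return x3, n
        if f(x1) * f(x3) < 0:   # keep the sub-interval with a sign change
            x2 = x3
        else:
            x1 = x3
    raise RuntimeError("no convergence")

f = lambda x: x**3 + 4.0 * x**2 - 10.0   # assumed form of (28)
root, n = bracket_solve(f, 1.0, 2.0)
```

Finding a valid bracketing pair is exactly the extra search cost that the text attributes to two-guess methods.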
6. Discussion
This work introduced the Enhanced Fixed Point Method (EFPM), which is a modification of the procedure for finding an exact or approximate solution of a linear or nonlinear algebraic equation. We note that the proposed method expresses an algebraic equation in terms of the same equation multiplied by an adequate factor, which many times is just a simple numeric factor or a linear function of the variable involved, in such a way that the original equation and the modified equation possess the same relevant solution. The modified equation is expressed as a fixed point problem, and the proposed parameters are optimized with the aim of accelerating the convergence of the fixed point problem for solving the original equation. Following the ideas of Section 2, the above mentioned parameters are chosen with the aim of obtaining an iterative process with a higher order of convergence in comparison with the original fixed point method. As explained in Section 2, most fixed point methods present just linear convergence, which implies that when the method converges, it will do so slowly. The question of accelerating a slow sequence was discussed above, in particular for the Aitken method [2]. When the initial sequence is generated by a fixed point iteration, it is possible to accelerate the convergence to quadratic by using a modified version of the Aitken method known as the Steffensen method. This procedure consists of successively alternating the Aitken and fixed point methods, and the results show that, effectively, the efficiency of this technique is similar to that of Newton's method. Although the Steffensen method is not complicated to apply, this work proposed EFPM which, unlike the Steffensen method, is a single method to accelerate the convergence of fixed point methods. Moreover, while the Steffensen method requires a convergent sequence to work, EFPM provides a procedure to follow in order to propose a fixed point problem to solve which, in case of convergence, yields an iterative method of order higher than linear (generally of quadratic convergence). A relevant point of EFPM is related to the fixed point theorem, which provides sufficient conditions for the existence and uniqueness of a fixed point. The mentioned theorem guarantees the uniqueness of the fixed point of a function g such that g ∈ C[a, b] and g(x) ∈ [a, b], provided that g'(x) exists in (a, b) and there is a positive constant K < 1 with |g'(x)| ≤ K for all x ∈ (a, b). Given that EFPM guarantees that g'(p) = 0 (where p is the fixed point of g), the method contributes to fulfilling the condition |g'(x)| ≤ K < 1 in [a, b], or allows redefining the interval to some neighborhood of p where the above condition is verified.
These ideas were verified in the application of EFPM to the solution of (28) and in its comparison with the fixed point methods described in B and C of Table 1. As was explained in Section 4, the fixed point problem B, described by g_B, does not map the interval into itself, and the condition |g_B'(x)| ≤ K < 1 is not satisfied for all the values of this interval; therefore, it is not expected that this method converges. On the other hand, with respect to the fixed point problem C, the condition |g_C'(x)| ≤ K < 1 is not satisfied in the whole interval [1, 2]. This example shows that sometimes shortening the interval reveals the convergence of a sequence. Considering the interval [1, 1.5] instead of [1, 2], it is easy to show that g_C maps [1, 1.5] into itself and, since |g_C'(x)| ≤ K < 1 in the interval [1, 1.5], the fixed point theorem confirmed the convergence of the iterative process derived from C. As a matter of fact, the iterative process that emerges from C converges but needed 30 iterations to obtain the required precision for the problem, while the one that comes from B ends in a complex number (see Table 1). On the contrary, EFPM introduced slight modifications of Equation (28) with the purpose of obtaining adequate fixed point problems that generalize those proposed in B and C.
In order to improve the poor performance of B, in accordance with EFPM, we proposed (29). After solving for x, we obtained (30), which generalizes B. The proposed method was able to optimize the value of the parameter A by requiring that the condition g'(p) = 0 be satisfied, where p is the root of (28), in order to guarantee a quadratically convergent process, as was explained in Section 2. We noted from Table 1 that the fixed point iteration method F requires only four iterations to obtain a precision of nine digits. In accordance with Figure 1 and Figure 2, we notice that g_F effectively maps the interval into itself and there also exists a constant K < 1 bounding |g_F'(x)| there. Therefore, the fixed point theorem guarantees the convergence of the method. Also, it is worth mentioning that the simple extension proposed by EFPM helped to transform a divergent method into a single convergent method which is quadratically convergent (unlike the Steffensen method, which alternates two methods).
On the other hand, we improved the convergence of the iterative process which derives from C. As was noticed, the iterative process that emerges from C converges but requires 30 iterations to get the required precision for the problem.
To get a faster iterative process based on C, we proposed (39), which clearly possesses the same relevant root as (28). After solving for x, we got (41), which reduces to C for the particular case A = 0. In accordance with the proposed method, we obtained the value of A by requiring that the condition g'(p) = 0 holds, where p is the root of (28), with the purpose of guaranteeing a quadratically convergent process. From (41), we derived the iterative process (43). As a matter of fact, from Table 1, we noted that, unlike the fixed point iteration method C, the proposed method G derived from C requires only three iterations to obtain the same precision of nine digits; from here follows the potential of EFPM in the search for roots of an algebraic equation.
Figure 3 and Figure 4 indicate that g_G maps the interval into itself and that the condition |g_G'(x)| ≤ K < 1 is satisfied in the same interval. Therefore, the fixed point theorem guarantees the convergence. The improvement of C introduced by EFPM through method G is notable.
A relevant contribution of EFPM is the proposal of a generalized Newton method of order higher than the second. A notable fact is that the iterative process that emanates from the Newton–Raphson method (1) is of second order independently of the function f (it is only assumed that f'(p) ≠ 0). Making adequate use of this result, we proposed a Newton–Raphson method of third order. The basic result (17) led to the fundamental substitution (18), where A is a parameter that was determined to satisfy (17) with the purpose of getting a Newton–Raphson method of third order, (22). As a matter of fact, we noticed that it is possible to continue in this manner and propose, in principle, Newton–Raphson methods of higher orders, although the higher the order of the Newton method derived from the proposed method, the longer and more cumbersome the calculations become. From Table 1, we note that the application of (22) to the solution of (28) required only two iterations to get the precision of nine decimal places proposed by the problem, which shows the efficiency of (22) and, in general, the potential of EFPM as a practical tool to solve algebraic equations.
In the same way, in order to compare EFPM with other methods whose origin is different from that of the fixed point method, we added columns I and J to Table 1, which correspond to the solution of (28) by using the Ying Buzu Shu and Bubbfil Algorithm 1 methods. We noticed that although both methods had a good performance (above all Bubbfil Algorithm 1), they have to satisfy a sign condition on the two initial guesses. Unlike the proposed method, the methods that require two initial guesses satisfying that condition can spend too much computing time, above all in the case of highly nonlinear equations. On the other hand, we applied EFPM to the solution of nonlinear equations which arise from scientific and engineering problems. In the first case, we solved a relevant algebraic equation related to the solution of the Gelfand differential equation, while the second case is related to the fundamentals of wave mechanics and coastal processes. In the first case, EFPM required only three iterations to get a precision of nine digits, while in the second case study, the proposed method required only three iterations to get a precision of eight digits.