1. Introduction
Many sciences (mathematics, computer science, dynamical systems in engineering, agriculture, biomedicine, etc.) require finding the roots of non-linear equations. When there is no analytic solution, we try to determine a numerical one. There is no single algorithm that solves every non-linear equation efficiently.
There are several categories of methods for solving such problems: pure, metaheuristic and blended methods. Pure methods include classical techniques such as the bisection method, the false position method, the secant method and the Newton–Raphson method. Metaheuristic methods use metaheuristic algorithms such as particle swarm optimization, the firefly algorithm and ant colony optimization for root finding, whereas blended methods are hybrid combinations of two classical methods.
More details about the classical methods can be found in [1,2,3,4], and especially about the bisection and Newton–Raphson methods in [5,6,7,8]. Other problems, such as minimization and target shooting, are discussed in [9,10,11,12,13,14].
Sabharwal [15] proposed a novel blended method that is a dynamic hybrid of the bisection and false position methods. He deduced that his algorithm outperformed the pure methods (bisection and false position). He also observed that his algorithm outperformed the secant and Newton–Raphson methods with respect to the number of iterations. However, Sabharwal did not analyze his algorithm with respect to running time; he considered only the number of iterations. A method may require few iterations yet have a large execution time, and vice versa. For this reason, both the iteration number and the running time are important metrics for evaluating the algorithms. Unfortunately, most researchers have not paid attention to the details of measuring the running time. Furthermore, they did not discuss or answer the following question: why does the running time change from one run to another in the software package used?
The genetic algorithm was used to compare the classical methods [9,10,11] based on the fitness ratio of the equations. The authors deduced that the genetic algorithm is more efficient than the classical algorithms for solving the functions x^2 − x − 2 [12] and x^2 + 2x − 7 [11]. Mansouri et al. [12] presented a new iterative method to determine the fixed point of a nonlinear function, combining ideas proposed in the artificial bee colony algorithm [13] and the bisection method [14]. They illustrated this method on four benchmark functions and compared the results with other methods, such as artificial bee colony (ABC), particle swarm optimization (PSO), the genetic algorithm (GA) and the firefly algorithm.
For more details about the classical methods, hybrid methods and metaheuristic approaches, the reader can refer to [16,17].
In this work, we propose a novel blended algorithm that combines the advantages of the trisection method and the false position method. The computational results show that the proposed algorithm outperforms the trisection and regula falsi methods. Moreover, the introduced algorithm outperforms the bisection, Newton–Raphson and secant methods with respect to the number of iterations and the average running time. Finally, the implementation results show the superiority of the proposed algorithm over the blended bisection–false position algorithm proposed by Sabharwal [15]. The results presented in this paper open the way for new methods that compete with traditional methods and may replace them in software packages.
The rest of this work is organized as follows: the pure methods for determining the roots of non-linear equations are introduced in Section 2. The blended algorithms for finding the roots of non-linear equations are presented in Section 3. In Section 4, the numerical results analysis and a statistical test among the pure methods and the blended algorithms are provided. Finally, conclusions are drawn in Section 5.
2. Pure Methods
In this section, we introduce five pure methods for finding the roots of non-linear equations: the bisection method, the trisection method, the false position method, the secant method and the Newton–Raphson method. We contribute an implementation of the trisection algorithm with equal subintervals that outperforms the bisection algorithm on fifteen benchmark equations, as shown in Section 4. The trisection algorithm also partially outperforms the false position, secant and Newton–Raphson methods, as shown in Section 4.
2.1. Bisection Method
We assume that the function f(x) is defined and continuous on the closed interval [a, b], where the signs of f(x) at the endpoints a and b are different. We divide the interval [a, b] into two halves at the midpoint x = (a + b)/2. If f(x) = 0, then x is a solution of the equation f(x) = 0. Otherwise, f(x) ≠ 0, and we choose the subinterval, [a, x] or [x, b], whose endpoint function values have different signs. We repeat dividing the new subinterval into two halves until we reach the exact solution x, where f(x) = 0, or an approximate solution x with |f(x)| ≤ eps for a tolerance eps. The value of eps is close to zero, as shown in Algorithm 1 and the other algorithms.
Algorithm 1. Bisection(f, a, b, eps).
Input: The function f(x),
  The interval [a, b] where the root lies in,
  The absolute error (eps).
Output: The root (x),
  The value of f(x),
  Number of iterations (n),
  The interval [a, b] where the root lies in.
n := 0
while true do
  n := n + 1
  x := (a + b)/2
  if |f(x)| <= eps
    return x, f(x), n, a, b
  else if f(a) * f(x) < 0
    b := x
  else
    a := x
end (while)
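The paper's experiments were run in MATLAB; as an illustrative sketch only, Algorithm 1 can be written in Python as follows, with the function name and return tuple mirroring the pseudocode above.

```python
def bisection(f, a, b, eps):
    """Bracketing root finder: return (root, f(root), iterations, a, b)."""
    n = 0
    while True:
        n += 1
        x = (a + b) / 2          # midpoint of the current bracket
        if abs(f(x)) <= eps:     # stop when |f(x)| is within tolerance
            return x, f(x), n, a, b
        if f(a) * f(x) < 0:      # sign change on [a, x]: keep the left half
            b = x
        else:                    # otherwise keep the right half [x, b]
            a = x
```

For example, `bisection(lambda t: t*t - 2, 1.0, 2.0, 1e-10)` brackets the root of x^2 − 2 on [1, 2] and returns a root close to 1.41421356.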
The size of the interval is reduced by half at each iteration. Therefore, the value eps is determined from the following formula:

eps = (b − a)/2^n, (1)

where n is the number of iterations. From (1), the number of iterations is found by

n = log2((b − a)/eps). (2)
The bisection method is a bracketing method, so it brackets the root in the interval [a, b], and at each iteration, the size of the interval [a, b] is halved. Accordingly, it reduces the error between the approximate root and the exact root at each iteration. On the other hand, the bisection method works quickly if the approximate root is far from the endpoints of the interval; otherwise, it needs more iterations to reach the root [17].
Advantages and Disadvantages of the Bisection Method
The bisection method is simple to implement, and its convergence is guaranteed. On the other hand, it has a relatively slow convergence, it needs different signs for the function values of the endpoints, and the test for checking this affects the complexity in the number of operations.
2.2. Trisection Method
The trisection method is like the bisection method, except that it divides the interval [a, b] into three subintervals, whereas the bisection method divides it into two. Algorithm 2 divides the interval [a, b] into three equal subintervals and searches for the root in the subinterval whose endpoint function values have different signs.
If the condition of termination is true, then the iteration has finished its task; otherwise, the algorithm repeats the calculations.
In order to divide the interval [a, b] into three equal parts by the points x1 and x2, we need the locations of x1 and x2 to satisfy:

x1 − a = x2 − x1, (3)

x2 − x1 = b − x2. (4)

By solving Equations (3) and (4),

x1 = (2a + b)/3, x2 = (a + 2b)/3.
The size of the interval [a, b] decreases to a third with each iteration. Therefore, the value eps is determined from the following formula:

eps = (b − a)/3^n, (5)

where n is the number of iterations. From (5), the number of iterations is found by

n = log3((b − a)/eps). (6)
When we compare Equations (2) and (6), we conclude that the number of iterations of the trisection algorithm is less than the number of iterations of the bisection algorithm. We might think that the trisection algorithm is better than the bisection algorithm since it requires fewer iterations. However, it might be the case that one iteration of the trisection algorithm has an execution time greater than that of one iteration of the bisection algorithm. Therefore, we consider both the execution time and the number of iterations to evaluate the different algorithms.
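The comparison of Equations (2) and (6) can be checked numerically; a minimal sketch in Python (the helper name `predicted_iterations` is ours, not from the paper):

```python
from math import ceil, log

def predicted_iterations(a, b, eps, parts):
    """Iterations needed to shrink [a, b] below eps, dividing into `parts` each step."""
    return ceil(log((b - a) / eps, parts))

# For [0, 1] and eps = 1e-14, as in the paper's experiments:
print(predicted_iterations(0.0, 1.0, 1e-14, 2))  # bisection, Equation (2) -> 47
print(predicted_iterations(0.0, 1.0, 1e-14, 3))  # trisection, Equation (6) -> 30
```

This confirms that the trisection bound is smaller, while leaving open the question of per-iteration cost.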
Algorithm 2. Trisection(f, a, b, eps).
Input: The function f(x),
  The interval [a, b] where the root lies in,
  The absolute error (eps).
Output: The root (x),
  The value of f(x),
  Number of iterations (n),
  The interval [a, b] where the root lies in.
n := 0
while true do
  n := n + 1
  x1 := (b + 2*a)/3
  x2 := (2*b + a)/3
  if |f(x1)| < |f(x2)|
    x := x1
  else
    x := x2
  if |f(x)| <= eps
    return x, f(x), n, a, b
  else if f(a) * f(x1) < 0
    b := x1
  else if f(x1) * f(x2) < 0
    a := x1
    b := x2
  else
    a := x2
end (while)
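As with the bisection method, Algorithm 2 can be sketched in Python for illustration (again an assumption of ours; the paper's codes were written in MATLAB):

```python
def trisection(f, a, b, eps):
    """Trisection root finder: return (root, f(root), iterations, a, b)."""
    n = 0
    while True:
        n += 1
        x1 = (b + 2*a) / 3                        # first trisection point
        x2 = (2*b + a) / 3                        # second trisection point
        x = x1 if abs(f(x1)) < abs(f(x2)) else x2  # candidate closer to a root
        if abs(f(x)) <= eps:
            return x, f(x), n, a, b
        if f(a) * f(x1) < 0:      # sign change on [a, x1]
            b = x1
        elif f(x1) * f(x2) < 0:   # sign change on [x1, x2]
            a, b = x1, x2
        else:                     # sign change on [x2, b]
            a = x2
```

Each iteration evaluates f at two new points (x1 and x2), which is the per-iteration cost discussed above.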
Advantages and Disadvantages of the Trisection Method
Like the bisection method, its convergence is guaranteed, and it requires fewer iterations, as Equations (2) and (6) show. On the other hand, each iteration requires two new function evaluations instead of one, so a single iteration may take more time than a bisection iteration.
2.3. False Position (Regula Falsi) Method
There is no unique method suitable for finding the roots of all nonlinear functions; each method has advantages and disadvantages. The false position method is a dynamic and fast method when the function is nearly linear. The function f(x), whose root lies in the interval [a, b], must be continuous, and the values of f(x) at the endpoints of [a, b] must have different signs. The false position method uses the two endpoints of the interval [a, b] as initial values (r0 = a, r1 = b). The line connecting the two points (r0, f(r0)) and (r1, f(r1)) intersects the x-axis at the next estimate, r2. In general, the successive estimates rn are determined from the following relationship:

rn = rn−1 − f(rn−1)(rn−1 − rn−2)/(f(rn−1) − f(rn−2)), (7)

for n ≥ 2.
Remark: The regula falsi method is very similar to the bisection method. However, the next iteration point is not the midpoint of the interval but the intersection of the x-axis with a secant through (a, f(a)) and (b, f(b)).
Algorithm 3 uses the relation (7) to get the successive approximations by the false position method.
Algorithm 3. False Position(f, a, b, eps).
Input: The function (f),
  The interval [a, b] where the root lies in,
  The absolute error (eps).
Output: The root (x),
  The value of f(x),
  Number of iterations (n),
  The interval [a, b] where the root lies in.
n := 0
while true do
  n := n + 1
  x := a − (f(a)*(b − a))/(f(b) − f(a))
  if |f(x)| <= eps
    return x, f(x), n, a, b
  else if f(a) * f(x) < 0
    b := x
  else
    a := x
end (while)
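A Python sketch of Algorithm 3, offered for illustration under the same assumptions as the earlier sketches:

```python
def false_position(f, a, b, eps):
    """Regula falsi root finder: return (root, f(root), iterations, a, b)."""
    n = 0
    while True:
        n += 1
        # Intersection of the secant through (a, f(a)) and (b, f(b)) with the x-axis
        x = a - f(a) * (b - a) / (f(b) - f(a))
        if abs(f(x)) <= eps:
            return x, f(x), n, a, b
        if f(a) * f(x) < 0:   # sign change on [a, x]
            b = x
        else:                 # sign change on [x, b]
            a = x
```

For a linear function such as f(x) = 2x − 4 on [0, 5], the first secant already passes through the root, so the method terminates in one iteration, illustrating its speed on linear functions.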
Advantages and Disadvantages of the Regula Falsi Method
It is guaranteed to converge, and it is fast when the function is linear. On the other hand, we cannot determine in advance the number of iterations needed for convergence, and it is very slow when the function is far from linear.
2.4. Newton–Raphson Method
This method depends on a chosen initial point x0, which plays an important role in the Newton–Raphson method. The success of the method depends mainly on the point x0: the method may converge to the root or diverge based on this choice. The first estimate is determined from the following relation:

x1 = x0 − f(x0)/f′(x0). (8)

The successive approximations for the Newton–Raphson method are found from the relation

xi+1 = xi − f(xi)/f′(xi), (9)

where f′(xi) is the first derivative of the function f(x) at the point xi.
Algorithm 4 uses the relation (9) to get the successive approximations by the Newton–Raphson method.
Algorithm 4. Newton(f, xi, eps).
This function implements Newton’s method.
Input: The function (f),
  An initial root xi,
  The absolute error (eps).
Output: The root (x),
  The value of f(x),
  Number of iterations (n).
g(x) := f’(x)
n := 0
while true do
  n := n + 1
  xi := xi − f(xi)/g(xi)
  if |f(xi)| <= eps
    return xi, f(xi), n
end (while)
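A Python sketch of Algorithm 4; since the method may diverge (as noted below), this sketch adds an iteration cap `max_iter` that the pseudocode omits, and takes the derivative g as an explicit argument:

```python
def newton(f, g, xi, eps, max_iter=100):
    """Newton-Raphson root finder; g is the first derivative of f.
    Returns (root, f(root), iterations), or None if the cap is reached."""
    n = 0
    while n < max_iter:
        n += 1
        xi = xi - f(xi) / g(xi)   # relation (9)
        if abs(f(xi)) <= eps:
            return xi, f(xi), n
    return None  # failed to converge within max_iter iterations
```

For example, `newton(lambda t: t*t - 2, lambda t: 2*t, 1.5, 1e-12)` converges to the square root of 2 in a handful of iterations.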
Advantages and Disadvantages of the Newton–Raphson Method
It is very fast compared to other methods, but it sometimes fails, meaning that there is no guarantee of its convergence.
2.5. Secant Method
Just as there is the possibility of the Newton method failing, there is also the possibility that the secant method will fail. The Newton method uses relation (9) to find the successive approximations, whereas the secant method uses the following relation:

xi+1 = xi − f(xi)(xi − xi−1)/(f(xi) − f(xi−1)). (10)
Algorithm 5 uses the relation (10) to get the successive approximations by the secant method.
Algorithm 5. Secant(f, a, b, eps).
This function implements the Secant method.
Input: The function (f),
  Two initial roots: a and b,
  The absolute error (eps).
Output: The root (x),
  The value of f(x),
  Number of iterations (n).
n := 0
while true do
  n := n + 1
  x := b − f(b)*(b − a)/(f(b) − f(a))
  if |f(x)| <= eps
    return x, f(x), n
  a := b
  b := x
end (while)
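A Python sketch of Algorithm 5; like the Newton sketch, it adds an iteration cap `max_iter` as a safeguard against the failure cases noted below:

```python
def secant(f, a, b, eps, max_iter=100):
    """Secant-method root finder from two initial points a and b.
    Returns (root, f(root), iterations), or None if the cap is reached."""
    n = 0
    while n < max_iter:
        n += 1
        # Relation (10): intersection of the secant through the last two points
        x = b - f(b) * (b - a) / (f(b) - f(a))
        if abs(f(x)) <= eps:
            return x, f(x), n
        a, b = b, x   # slide the pair of points forward
    return None  # failed to converge within max_iter iterations
```

Unlike the bracketing methods, the two points need not straddle the root, and the bracket is not maintained, which is why convergence is not guaranteed.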
Advantages and Disadvantages of the Secant Method
It is very fast compared to other methods, but it sometimes fails, meaning that there is no guarantee of its convergence.
4. Computational Study
The numerical results of the pure methods (the bisection, trisection, false position, secant and Newton–Raphson methods) are presented, in addition to the computational results for the hybrid methods, bisection–false position and trisection–false position. We compare the pure methods and the hybrid method with the proposed hybrid algorithm according to the number of iterations and CPU time. We used fifteen benchmark problems for this comparison, as shown in Table 1. We ran each problem ten times and then computed the average CPU time and the number of iterations.
We used the MATLAB v7.01 software package to implement all the codes. All codes were run under the 64-bit Windows 8.1 operating system with a Core(TM) i5 CPU M 460 @ 2.53 GHz and 4.00 GB of memory.
Dataset and Evaluation Metrics
There are different ways to terminate numerical algorithms, such as the absolute error (eps) and the number of iterations. In this paper, we used the absolute error (eps = 10^−14) to terminate all the algorithms. A method may require few iterations yet have a large execution time, and vice versa; for this reason, both the iteration number and the running time are important metrics for evaluating the algorithms. Unfortunately, most researchers did not pay attention to the details of measuring the running time, nor did they discuss or answer the following question: why does the running time change from one run to another with the software package used? Therefore, we ran every algorithm ten times and calculated the average running time to obtain an accurate measurement and avoid the effects of the operating system.
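The averaging protocol just described can be sketched as follows; the helper name `average_runtime` and the toy workload are ours, used only to illustrate the ten-run average:

```python
import time

def average_runtime(solver, runs=10):
    """Average wall-clock time of `solver()` over `runs` repetitions."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        solver()
        total += time.perf_counter() - start
    return total / runs

# Toy stand-in for one root-finding call on one benchmark problem
avg = average_runtime(lambda: sum(i * i for i in range(1000)))
```

Averaging over repeated runs smooths out scheduler and caching noise, which is one answer to why a single timed run varies between executions.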
In Table 2, the abbreviations AppRoot, Error, LowerB and UpperB denote the approximate root, the difference between two successive roots, the lower bound and the upper bound, respectively. Table 2 shows the performance of all the classical methods and blended algorithms in solving Problem 4. It is clear that both the trisection method and the proposed blended algorithm (trisection–false position) outperformed the other algorithms. Because it is not accurate enough to draw a conclusion from one function, we used fifteen benchmark functions (Table 1) to evaluate the proposed algorithm.
Ali Demir [23] proved that the trisection method with the k-Lucas number works faster than the bisection method. From Table 3, Table 4 and Figure 2, it is clear that the trisection method is better than the bisection method with respect to the running time for all problems except problem 9. On the other hand, the trisection method determined the exact root (2.0000000000000000) of problem 4 after one iteration, but the bisection method found the approximate root (2.0000000000000284) after 45 iterations.
Figure 3 shows that the trisection method always has fewer iterations than the bisection method. We can determine the number of iterations for the trisection method by n = log3((b − a)/eps) and the number of iterations for the bisection method by n = log2((b − a)/eps). The authors of [6,11] explained that the secant method is better than the bisection and Newton–Raphson methods for problem 8. It is not accurate to draw a conclusion from one function [15], so we experimented on fifteen benchmark functions. From Table 7, it is clear that the secant method failed to solve problem 11.
From Table 5, Table 6 and Table 7, we deduce that the proposed hybrid algorithm (trisection–false position) is better than the Newton–Raphson, false position and secant methods. The Newton–Raphson method failed to solve problems P6, P9 and P11, and the secant method failed to solve P11.
From Figure 4, Table 8 and Table 9, it is clear that the proposed blended algorithm (trisection–false position) requires fewer iterations than the blended algorithm (bisection–false position) [15] on all the problems except problem 5 (i.e., according to the number of iterations, the proposed algorithm won on 93.3% of the fifteen problems, while Sabharwal’s algorithm won on 6.7%).
From Figure 5, Table 8 and Table 9, it is clear that the proposed blended algorithm (trisection–false position) outperforms the blended algorithm (bisection–false position) [15] on eight problems versus seven (i.e., the proposed algorithm won on 53.3% of the fifteen problems, while Sabharwal’s algorithm won on 46.7%). On the other hand, the trisection method determined the exact root (1.0000000000000000) of problem 4 after nine iterations, but the bisection method found the approximate root (0.9999999999999999) after 12 iterations.
15] for eight problems versus seven problems (i.e., the proposed algorithm achieved 53.3% of fifteen problems but Sabharwal’s algorithm achieved 46.6%). On the other hand, the trisection method determined the exact root (1.0000000000000000) of the problem 4 after nine iterations, but the bisection method found the approximate root (0.9999999999999999) after 12 iterations.