1. Introduction
Many theoretical and applied problems require finding the solution of a nonlinear equation $f(x)=0$, where $f:I\subseteq\mathbb{R}\rightarrow\mathbb{R}$. Since most nonlinear problems do not have an analytical solution, many authors have proposed numerical methods, by means of fixed point schemes, to approximate the solution of $f(x)=0$. Most of these procedures require the evaluation of an integer order derivative, or its approximation.
In the recent literature, some authors have introduced several numerical methods with noninteger order derivatives (fractal, fractional, and conformable derivatives). These derivatives of order $\alpha$ generalize the classical derivative, which is recovered as the particular case $\alpha=1$. Noninteger derivatives can be used to model many applied problems because their tools provide a higher degree of freedom than those of classical calculus [1,2,3,4].
In Ref. [5], the first Newton's methods with fractal derivatives are presented, whose order of convergence is quadratic. With regard to iterative schemes with fractional derivatives, the authors in Ref. [6] designed Newton-type methods with Caputo and Riemann–Liouville fractional derivatives. In Ref. [7], two Newton's methods with Caputo and Riemann–Liouville fractional derivatives allow the design of two Traub's methods with Caputo and Riemann–Liouville fractional derivatives; these are the first multipoint fractional methods in the literature. The authors in Ref. [8] perform a dynamical analysis of Newton-type methods whose derivatives are replaced by Caputo and Riemann–Liouville fractional derivatives. In Ref. [9], Newton's schemes with Caputo and Riemann–Liouville fractional derivatives are proposed, obtaining a quadratic order of convergence in both cases. Also, the authors in Ref. [10] study the dynamics of a family of procedures with Caputo and conformable derivatives of order three.
Iterative methods with fractional derivatives do not hold the order of convergence of their classical versions; they need higher-order fractional derivatives to increase the order of convergence, which prevents obtaining procedures of optimal order according to the Kung–Traub conjecture [11], unlike schemes with conformable derivatives. So, another approach is conformable calculus [12,13], whose low computational cost constitutes an advantage over fractional calculus, since special functions such as the Gamma or Mittag–Leffler functions are not evaluated [3]. In that sense, several conformable iterative schemes have been designed: Refs. [14,15] propose the scalar and vectorial versions of a Newton-type method with conformable derivative/Jacobian, respectively, and a general technique is designed in Ref. [16] to obtain the conformable version of any scalar classical procedure. Also, the authors in Ref. [17] proposed the first multipoint conformable method for solving nonlinear systems (a Traub-type method). Finally, some derivative-free schemes (Steffensen-type and Secant-type procedures) based on an approximation of conformable derivatives were designed in Ref. [18], and a Traub–Steffensen-type method in Ref. [19] (in scalar and vectorial versions). The theoretical convergence order of these methods is preserved in practice. Indeed, these methods show good qualitative behavior, improving even on their classical counterparts in some numerical aspects.
Most of the fractal, fractional, and conformable schemes mentioned above need the evaluation of fractal, fractional, or conformable derivatives, respectively. Since conformable procedures have presented many advantages over fractional ones, in this manuscript we focus on the approximation of conformable derivatives in order to design, to our knowledge, the first conformable derivative-free iterative methods to solve nonlinear equations: a Steffensen-type method and a Secant-type method (based on Ref. [18]); we also compare them with their classical counterparts.
Let us recall some basic definitions from conformable calculus: Given a function $f:[a,\infty)\rightarrow\mathbb{R}$, its left conformable derivative, starting from $a$, of order $\alpha$, where $0<\alpha\leq 1$, can be defined as shown next [12,13]:
\[ (T_{\alpha}^{a}f)(t)=\lim_{\varepsilon\rightarrow 0}\frac{f\left(t+\varepsilon(t-a)^{1-\alpha}\right)-f(t)}{\varepsilon},\qquad t>a. \]
If this limit exists, then $f$ is $\alpha$-differentiable. Let us suppose that $f$ is differentiable; then $(T_{\alpha}^{a}f)(t)=(t-a)^{1-\alpha}f'(t)$. Given $t>a$ such that $f$ is $\alpha$-differentiable in $(a,t)$, then $(T_{\alpha}^{a}f)(a)=\lim_{t\rightarrow a^{+}}(T_{\alpha}^{a}f)(t)$, provided this limit exists.
This derivative preserves a property of non-fractional derivatives: $(T_{\alpha}^{a}K)(t)=0$, where $K$ is a constant. As mentioned before, this kind of derivative does not require the evaluation of any special function.
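As a quick numerical illustration of the definition above, the limit expression can be compared with the closed form $(t-a)^{1-\alpha}f'(t)$ for a differentiable function. The following Python sketch (helper names are ours; the experiments in this paper were run in Matlab) performs this check:

```python
import math

def conformable_limit(f, t, a, alpha, eps=1e-7):
    # Left conformable derivative via its limit definition:
    # (T_a^alpha f)(t) = lim_{e->0} [f(t + e*(t-a)^(1-alpha)) - f(t)] / e
    return (f(t + eps * (t - a) ** (1 - alpha)) - f(t)) / eps

def conformable_closed(fprime, t, a, alpha):
    # For differentiable f: (T_a^alpha f)(t) = (t-a)^(1-alpha) * f'(t)
    return (t - a) ** (1 - alpha) * fprime(t)

f, fprime = math.sin, math.cos
a, t, alpha = 0.0, 2.0, 0.7

lim_val = conformable_limit(f, t, a, alpha)
closed_val = conformable_closed(fprime, t, a, alpha)
print(lim_val, closed_val)  # both ≈ 2^0.3 * cos(2)
```

Note also that with $\alpha=1$ both expressions reduce to the ordinary derivative $f'(t)$.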
In Ref. [20], an appropriate conformable Taylor series is provided, as shown in the following result.
Theorem 1 (Theorem 4.1, [20]).
Let $f$ be an infinitely α-differentiable function, $0<\alpha\leq 1$, about $t=a$, where the conformable derivatives start at a. Then, the conformable Taylor series of $f$ can be given by
\[ f(t)=\sum_{k=0}^{\infty}\frac{\left((T_{\alpha}^{a})^{(k)}f\right)(a)}{\alpha^{k}k!}\,(t-a)^{k\alpha}, \]
where $(T_{\alpha}^{a})^{(k)}$ denotes the conformable derivative applied $k$ times. Denoting the coefficients of this expansion by $c_{k}=\frac{\left((T_{\alpha}^{a})^{(k)}f\right)(a)}{\alpha^{k}k!}$, $k=0,1,2,\ldots$, the series (2) is expressed as
\[ f(t)=\sum_{k=0}^{\infty}c_{k}\,(t-a)^{k\alpha}. \]
Since the Secant-type method we propose includes memory, we need to introduce a generalization of the order of convergence (the R-order [21,22]); but first, let us recall the concept of R-factor:
Definition 1 ([21,22]).
Let ϕ be an iterative method converging to some limit β, and let $\{x_{k}\}_{k\geq 0}$ be an arbitrary sequence in $\mathbb{R}$ converging to β. Then, the R-factor of the sequence is
\[ R_{p}\{x_{k}\}=\begin{cases}\limsup_{k\rightarrow\infty}|x_{k}-\beta|^{1/k}, & p=1,\\[2pt] \limsup_{k\rightarrow\infty}|x_{k}-\beta|^{1/p^{k}}, & p>1.\end{cases} \]
We can now define the R-order:
Definition 2 ([21,22]).
The R-order of convergence of an iterative method ϕ at the point β is
\[ O_{R}(\phi,\beta)=\begin{cases}+\infty, & \text{if } R_{p}(\phi,\beta)=0 \ \text{for all } p\in[1,+\infty),\\[2pt] \inf\left\{p\in[1,+\infty):R_{p}(\phi,\beta)=1\right\}, & \text{otherwise}.\end{cases} \]
The following result states a relation between the roots of a characteristic polynomial and the R-order of an iterative procedure with memory:
Theorem 2 ([21,22]).
Let ϕ be an iterative method with memory generating the sequence $\{x_{k}\}$ of approximations of the root $\bar{x}$, and let us suppose that the sequence converges to $\bar{x}$. If there exist a nonzero constant η and nonnegative numbers $m_{i}$, $i=0,1,\ldots,n$, such that
\[ |e_{k+1}|\leq\eta\prod_{i=0}^{n}|e_{k-i}|^{m_{i}},\qquad e_{k}=x_{k}-\bar{x}, \]
is fulfilled, then the R-order of the iterative scheme ϕ satisfies $O_{R}(\phi,\bar{x})\geq s^{*}$, being $s^{*}$ the only positive root of the polynomial
\[ p(s)=s^{n+1}-\sum_{i=0}^{n}m_{i}\,s^{n-i}. \]
Finally, if we take into account that, given an iteration function ϕ of order p, its asymptotical error constant C is defined as [23]
\[ C=\lim_{k\rightarrow\infty}\frac{|x_{k+1}-\bar{x}|}{|x_{k}-\bar{x}|^{p}}, \]
then the next result permits the calculation of the error constant of an iterative scheme with p-order of convergence, knowing that of another iterative method with the same order.
Theorem 3 ([23], Theorems 2–8).
Let us consider iteration functions $\phi_{1}$ and $\phi_{2}$ with order p and fixed point $\bar{x}$ with multiplicity m, and let $C_{1}$ and $C_{2}$ be the asymptotical error constants of $\phi_{1}$ and $\phi_{2}$, respectively; the result relates both error constants. Later, we rely on Theorem 3 for the conformable schemes proposed in this work.
In the next section, we design the Steffensen- and Secant-type procedures; the convergence of these methods is analyzed in Section 3. In Section 4, we study their numerical performance, and the concluding remarks are provided in Section 5.
2. Deduction of the Methods
As in Ref. [
18], we consider the approximation of (
1) with the following conformable finite divided difference of linear order:
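A natural linear-order approximation of the conformable derivative, consistent with $(T_{\alpha}^{a}f)(t)$ being the derivative of $f$ with respect to $(t-a)^{\alpha}/\alpha$, is $f[y,x]_{\alpha}=\alpha\,\frac{f(y)-f(x)}{(y-a)^{\alpha}-(x-a)^{\alpha}}$. The following Python sketch (a hypothetical form, not necessarily the exact expression (10) of Ref. [18]) checks that such a divided difference tends to the conformable derivative as $y\rightarrow x$:

```python
import math

def conf_divided_difference(f, y, x, a, alpha):
    # Assumed conformable divided difference of linear order:
    # [f(y) - f(x)] / ( ((y-a)^alpha - (x-a)^alpha) / alpha )
    return (f(y) - f(x)) / (((y - a) ** alpha - (x - a) ** alpha) / alpha)

# As y -> x it should approach the conformable derivative
# (T_a^alpha f)(x) = (x-a)^(1-alpha) * f'(x).
f, fprime = math.sin, math.cos
a, x, alpha = 0.0, 1.0, 0.7
dd = conf_divided_difference(f, x + 1e-6, x, a, alpha)
exact = (x - a) ** (1 - alpha) * fprime(x)
print(dd, exact)  # both ≈ cos(1)
```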
In Refs. [14,15,16], we can see that the conformable schemes preserve the theoretical order of their classical versions (recovered when $\alpha=1$), no matter whether these procedures are scalar or vectorial, one-point or multipoint. Now, we wonder whether the conformable versions of derivative-free methods (with or without memory) also hold the order of convergence of their classical counterparts. For this aim, we use the general technique proposed in Ref. [16], which is useful for finding the conformable counterpart of any known procedure, and show that these procedures preserve the order of convergence of their classical versions.
The general technique given in Ref. [16] states that the classical method
\[ x_{k+1}=x_{k}-\mathcal{G}(x_{k}) \]
has the conformable version
\[ x_{k+1}=a+\left[(x_{k}-a)^{\alpha}-\alpha\,\mathcal{G}_{\alpha}(x_{k})\right]^{1/\alpha}. \]
If $\mathcal{G}$ in (11) includes classical derivatives of $f$, then $\mathcal{G}_{\alpha}$ in (12) includes conformable derivatives of $f$. So, given a classical scheme, we need to identify the analytical expression of $\mathcal{G}$ to obtain its conformable version.
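To illustrate the technique, assuming the conformable version takes the form $x_{k+1}=a+\left[(x_{k}-a)^{\alpha}-\alpha\,\mathcal{G}_{\alpha}(x_{k})\right]^{1/\alpha}$ with $\mathcal{G}_{\alpha}(x)=f(x)/(T_{\alpha}^{a}f)(x)$, as in the conformable Newton method of Refs. [14,16], a Python sketch of the resulting iteration is:

```python
def conformable_newton(f, fprime, x0, a=0.0, alpha=0.8, tol=1e-12, maxit=100):
    # Conformable Newton sketch (assumed form): G_alpha(x) = f(x) / (T_a^alpha f)(x),
    # with (T_a^alpha f)(x) = (x-a)^(1-alpha) * f'(x) for differentiable f.
    x = x0
    for _ in range(maxit):
        t_alpha = (x - a) ** (1 - alpha) * fprime(x)
        x_new = a + ((x - a) ** alpha - alpha * f(x) / t_alpha) ** (1 / alpha)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = conformable_newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.5)
print(root)  # ≈ sqrt(2)
```

Note that setting $\alpha=1$ recovers the classical Newton iteration $x_{k+1}=x_{k}-f(x_{k})/f'(x_{k})$.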
In the case of Steffensen’s procedure [22,23,24]:
\[ x_{k+1}=x_{k}-\frac{f(x_{k})}{f[w_{k},x_{k}]},\qquad w_{k}=x_{k}+f(x_{k}), \]
where $f[w_{k},x_{k}]=\frac{f(w_{k})-f(x_{k})}{w_{k}-x_{k}}$ is an approximation of the classical derivative $f'(x_{k})$. So, $\mathcal{G}(x_{k})=\frac{f(x_{k})}{f[w_{k},x_{k}]}$. Regarding (10), the classical divided difference is replaced by the conformable one, being $y=w_{k}$ in (10). Hence, the conformable version of Steffensen’s method is
\[ x_{k+1}=a+\left[(x_{k}-a)^{\alpha}-\alpha\,\frac{f(x_{k})}{f[w_{k},x_{k}]_{\alpha}}\right]^{1/\alpha},\qquad w_{k}=x_{k}+f(x_{k}), \]
and we denote it by SeCO; note that when $\alpha=1$ the classical Steffensen’s scheme is obtained.
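A Python sketch of a SeCO-type iteration follows, under our assumed form of the conformable divided difference (the exact expression (10) of Ref. [18] may differ):

```python
def seco(f, x0, a=0.0, alpha=0.9, tol=1e-10, maxit=100):
    # Steffensen-type conformable sketch: the derivative is replaced by an
    # assumed conformable divided difference with w = x + f(x):
    # dd = [f(w) - f(x)] / ( ((w-a)^alpha - (x-a)^alpha) / alpha )
    x = x0
    for _ in range(maxit):
        fx = f(x)
        if abs(fx) < tol:  # residual-based stopping to avoid a 0/0 difference
            return x
        w = x + fx
        dd = (f(w) - fx) / (((w - a) ** alpha - (x - a) ** alpha) / alpha)
        x = a + ((x - a) ** alpha - alpha * fx / dd) ** (1 / alpha)
    return x

root = seco(lambda x: x * x - 2, x0=1.5)
print(root)  # ≈ sqrt(2)
```

With $\alpha=1$ the divided difference reduces to $\frac{f(w_{k})-f(x_{k})}{w_{k}-x_{k}}$ and the sketch reduces to the classical Steffensen iteration.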
In the case of the Secant procedure [22,23]:
\[ x_{k+1}=x_{k}-\frac{f(x_{k})}{f[x_{k},x_{k-1}]}, \]
where $f[x_{k},x_{k-1}]=\frac{f(x_{k})-f(x_{k-1})}{x_{k}-x_{k-1}}$ is an approximation of the classical derivative. Then, $\mathcal{G}(x_{k})=\frac{f(x_{k})}{f[x_{k},x_{k-1}]}$. Considering (10), the classical divided difference is replaced by the conformable one, being $y=x_{k-1}$ in (10). Therefore, the conformable version of the Secant method is
\[ x_{k+1}=a+\left[(x_{k}-a)^{\alpha}-\alpha\,\frac{f(x_{k})}{f[x_{k},x_{k-1}]_{\alpha}}\right]^{1/\alpha}, \]
and we denote it by EeCO; note that when $\alpha=1$ the classical Secant scheme is obtained.
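Analogously, a Python sketch of an EeCO-type iteration (again with our assumed conformable divided difference) shows the role of memory: two initial estimates are needed.

```python
def eeco(f, x0, x1, a=0.0, alpha=0.9, tol=1e-10, maxit=100):
    # Secant-type conformable sketch with memory: the assumed conformable
    # divided difference is evaluated at two successive iterates.
    x_old, x = x0, x1
    for _ in range(maxit):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dd = (fx - f(x_old)) / (((x - a) ** alpha - (x_old - a) ** alpha) / alpha)
        x_old, x = x, a + ((x - a) ** alpha - alpha * fx / dd) ** (1 / alpha)
    return x

root = eeco(lambda x: x * x - 2, x0=1.0, x1=1.5)
print(root)  # ≈ sqrt(2)
```

As before, $\alpha=1$ recovers the classical Secant scheme.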
4. Numerical Results
To obtain the results shown in this section, we have used Matlab R2020a with double precision arithmetic, stopping when the difference between consecutive iterates or the residual of the function falls below a fixed tolerance, with a maximum of 500 iterations. The approximate computational order of convergence (ACOC),
\[ \text{ACOC}\approx\frac{\ln\left(|x_{k+1}-x_{k}|/|x_{k}-x_{k-1}|\right)}{\ln\left(|x_{k}-x_{k-1}|/|x_{k-1}-x_{k-2}|\right)}, \]
defined in [27], is used to confirm that the theoretical order of convergence is also conserved in practice.
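The ACOC only needs the last four iterates. A minimal Python sketch, exercised here on a synthetic quadratically convergent sequence rather than the paper's data, is:

```python
import math

def acoc(xs):
    # ACOC from the last four iterates:
    # ln(|x_{k+1}-x_k| / |x_k-x_{k-1}|) / ln(|x_k-x_{k-1}| / |x_{k-1}-x_{k-2}|)
    d1 = abs(xs[-1] - xs[-2])
    d2 = abs(xs[-2] - xs[-3])
    d3 = abs(xs[-3] - xs[-4])
    return math.log(d1 / d2) / math.log(d2 / d3)

# Synthetic sequence converging quadratically to 1: e_{k+1} = e_k^2.
xs = [1 + 1e-1, 1 + 1e-2, 1 + 1e-4, 1 + 1e-8, 1 + 1e-16]
print(acoc(xs))  # ≈ 2
```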
Now, we test six nonlinear functions with the methods designed in the previous section; in this sense, we compare each scheme with its classical version (recovered when $\alpha=1$). For EeCO, an additional initial estimate is chosen to perform the first iteration, and the remaining parameters are fixed for each method. In each table, we show the results obtained for each test function using the two schemes designed in the previous section (SeCO and EeCO), where the initial estimate coincides in both procedures.
The first test function is , with real and complex roots , , , , , and .
In Table 1, we can see that SeCO can require the same number of iterations as the classical Steffensen's method ($\alpha=1$), and the ACOC can be slightly higher than 2 when $\alpha\neq 1$. Note that SeCO needs the initial estimate to be very close to the searched root in order to converge for any $\alpha$. We observe that EeCO requires in some cases fewer iterations than the Secant scheme for most values of $\alpha$, and the ACOC can be slightly higher than 1.618.
Our second test function is , with real roots and .
In Table 2, we note that SeCO can converge in fewer iterations than its classical partner, and a different root can be found; one case is not shown because the method converges to some point which is not a root of the function, since one of the stopping quantities remains much greater than zero; also, no results are shown when more than 500 iterations are required. We can see that EeCO can require the same number of iterations as its classical partner, and the ACOC can be slightly higher than 1.618.
The next test function is , with double roots , , and .
In Table 3, we observe that SeCO often requires a lower number of iterations than Steffensen's procedure, and the ACOC is linear for all $\alpha$, because the multiplicity of all the roots is 2. We note that the number of iterations increases when EeCO is used, a distinct root can be found when a different value of $\alpha$ is chosen, and again, the ACOC is linear for any $\alpha$, because the multiplicity of these roots is 2; we show no results for one case, as this scheme converges to some point which is not a root of the function, since one of the stopping quantities remains much greater than zero.
The fourth nonlinear function is , with roots and .
In Table 4, we see that SeCO needs fewer iterations than its classical partner in many cases; the ACOC is not provided in one case because at least three iterations are necessary to compute it, and we do not show results when this procedure does not converge to a root of the function. We can observe that the classical Secant method fails, whereas the conformable version can find a solution for some values of $\alpha$, and the ACOC can be slightly higher than 1.618; again, no results are shown when this procedure does not converge to a root of the function.
The fifth test function is , with real root . Also, the complex root can be obtained.
In Table 5, we note that SeCO can require fewer iterations than its classical version in some cases, a different root can be found when choosing a distinct value of $\alpha$, a complex root can be obtained starting from a real initial estimate, and the ACOC can be slightly higher than 2. We can see that EeCO can converge in a lower number of iterations than its classical version, and the ACOC can be slightly higher than 1.618. No results are shown for either method when more than 500 iterations are required.
Finally, our sixth test function is , with real roots , , , , , , , , , , , , and .
In Table 6, we observe that SeCO and EeCO need a lower number of iterations than their classical partners, respectively, for some values of $\alpha$, and that the ACOC is similar to the classical one in each case; no results are shown for one case because the scheme converges to some point which is not a root of the function, since one of the stopping quantities remains much greater than zero. Neither are results shown for EeCO when more than 500 iterations are required to converge. We note that with each procedure different roots are obtained by modifying the value of $\alpha$.