**1. Introduction**

We are concerned with the problem of solving a system of nonlinear equations

$$F(\mathbf{x}) = 0.\tag{1}$$

This problem can be stated precisely as follows: find a solution vector $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_m)^T$ such that $F(\alpha) = 0$, where $F(\mathbf{x}) : D \subset \mathbb{R}^m \to \mathbb{R}^m$ is the given nonlinear vector function $F(\mathbf{x}) = (f_1(\mathbf{x}), f_2(\mathbf{x}), \ldots, f_m(\mathbf{x}))^T$ and $\mathbf{x} = (x_1, x_2, \ldots, x_m)^T$. The vector $\alpha$ can be computed as a fixed point of some function $\mathbf{M} : D \subset \mathbb{R}^m \to \mathbb{R}^m$ by means of the fixed point iteration

$$\mathbf{x}^{(0)} \in D,$$

$$\mathbf{x}^{(k+1)} = \mathbf{M}(\mathbf{x}^{(k)}), \ k \ge 0. \tag{2}$$

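To fix ideas, iteration (2) can be sketched in a few lines of Python. The map `M`, starting point, tolerance and iteration cap below are illustrative choices of ours, not taken from the text:

```python
import numpy as np

def fixed_point(M, x0, tol=1e-12, max_iter=100):
    """Iterate x^(k+1) = M(x^(k)) until successive iterates are within tol."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = M(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: M(x) = cos(x) has a fixed point near 0.739085, which solves x - cos(x) = 0.
root = fixed_point(np.cos, np.array([1.0]))
```

Convergence of (2) of course depends on $\mathbf{M}$ being contractive near $\alpha$; for `np.cos` the iteration converges only linearly, which is precisely what the higher-order schemes below improve upon.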
Many applied problems in science and engineering reduce to numerically solving the system $F(\mathbf{x}) = 0$ of nonlinear equations (see, for example, [1–6]). A plethora of iterative methods has been developed in the literature for solving such equations. A classical method is the cubically convergent Chebyshev's method (see [7])

$$\begin{aligned} \mathbf{x}^{(0)} &\in D, \\ \mathbf{x}^{(k+1)} &= \mathbf{x}^{(k)} - \left(I + \frac{1}{2} L_F(\mathbf{x}^{(k)})\right) F'(\mathbf{x}^{(k)})^{-1} F(\mathbf{x}^{(k)}), \; k \ge 0, \end{aligned} \tag{3}$$

where $L_F(\mathbf{x}^{(k)}) = F'(\mathbf{x}^{(k)})^{-1} F''(\mathbf{x}^{(k)}) F'(\mathbf{x}^{(k)})^{-1} F(\mathbf{x}^{(k)})$. This one-point iterative scheme depends explicitly on the first two derivatives of $F$. In [7], Ezquerro and Hernández present a modification of Chebyshev's method that avoids the computation of the second derivative $F''$ while maintaining third-order convergence. It has the following form:

$$\begin{aligned} \mathbf{x}^{(0)} &\in D, \\ \mathbf{y}^{(k)} &= \mathbf{x}^{(k)} - a \, F'(\mathbf{x}^{(k)})^{-1} F(\mathbf{x}^{(k)}), \\ \mathbf{x}^{(k+1)} &= \mathbf{x}^{(k)} - \frac{1}{a^2} F'(\mathbf{x}^{(k)})^{-1} \left( (a^2 + a - 1) F(\mathbf{x}^{(k)}) + F(\mathbf{y}^{(k)}) \right), \; k \ge 0. \end{aligned} \tag{4}$$

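For concreteness, here is a scalar sketch of scheme (4). The test equation, starting point, parameter value and stopping rule are our own illustrative assumptions:

```python
def chebyshev_like(f, fprime, x0, a=1.0, tol=1e-12, max_iter=50):
    """Scalar version of the two-step Chebyshev-type scheme (4)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        d = fprime(x)
        y = x - a * fx / d                                   # first step
        x_new = x - ((a**2 + a - 1) * fx + f(y)) / (a**2 * d)  # second step
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Solve x^3 - 2 = 0; the real root is 2**(1/3).
root = chebyshev_like(lambda x: x**3 - 2, lambda x: 3 * x**2, 1.0)
```

Note that for $a = 1$ the second step reduces to $\mathbf{x}^{(k+1)} = \mathbf{x}^{(k)} - F'(\mathbf{x}^{(k)})^{-1}(F(\mathbf{x}^{(k)}) + F(\mathbf{y}^{(k)}))$, a well-known third-order two-step form.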
There is interest in constructing derivative-free iterative processes by approximating the first derivative of $F$ with a divided difference of first order. One such class is the class of Secant-type methods, obtained by replacing $F'$ with the divided difference operator $[\mathbf{x}^{(k-1)}, \mathbf{x}^{(k)}; F]$. Using this operator, a family of derivative-free methods is given in [8]. The authors call this family the Chebyshev-Secant-type method, and it is defined as

$$\begin{aligned} \mathbf{x}^{(-1)}, \mathbf{x}^{(0)} &\in D, \\ \mathbf{y}^{(k)} &= \mathbf{x}^{(k)} - a \left[ \mathbf{x}^{(k-1)}, \mathbf{x}^{(k)}; F \right]^{-1} F(\mathbf{x}^{(k)}), \\ \mathbf{x}^{(k+1)} &= \mathbf{x}^{(k)} - \left[ \mathbf{x}^{(k-1)}, \mathbf{x}^{(k)}; F \right]^{-1} \left( b \, F(\mathbf{x}^{(k)}) + c \, F(\mathbf{y}^{(k)}) \right), \; k \ge 0, \end{aligned} \tag{5}$$

where *a*, *b* and *c* are non-negative parameters.

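The divided difference operator $[\mathbf{y}, \mathbf{x}; F]$ for systems can be realized in several ways. One common componentwise construction is sketched below; this particular choice is an assumption for illustration, not necessarily the one used in [8]:

```python
import numpy as np

def divided_difference(F, x, y):
    """Componentwise first-order divided difference [y, x; F]:
    column j differences F along the j-th coordinate, blending the
    first components of y with the remaining components of x."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    m = x.size
    D = np.zeros((m, m))
    for j in range(m):
        upper = np.concatenate([y[:j + 1], x[j + 1:]])
        lower = np.concatenate([y[:j], x[j:]])
        D[:, j] = (F(upper) - F(lower)) / (y[j] - x[j])
    return D

# Sanity check: for a linear map F(v) = A v, the divided difference equals A.
A = np.array([[2.0, 1.0], [0.0, 3.0]])
D = divided_difference(lambda v: A @ v, np.array([1.0, 2.0]), np.array([3.0, 5.0]))
```

Whatever the construction, the defining property $[\mathbf{y}, \mathbf{x}; F](\mathbf{y} - \mathbf{x}) = F(\mathbf{y}) - F(\mathbf{x})$ is what makes it a first-order approximation to $F'$.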
Another class of derivative-free methods is the class of Steffensen-type processes, which replaces $F'$ with the operator $[w(\mathbf{x}^{(k)}), \mathbf{x}^{(k)}; F]$, where $w : \mathbb{R}^m \to \mathbb{R}^m$. The work presented in [9] analyzes the Steffensen-type iterative method given as

$$\begin{aligned} \mathbf{x}^{(0)} &\in D, \\ \mathbf{y}^{(k)} &= \mathbf{x}^{(k)} - a \left[ w(\mathbf{x}^{(k)}), \mathbf{x}^{(k)}; F \right]^{-1} F(\mathbf{x}^{(k)}), \\ \mathbf{x}^{(k+1)} &= \mathbf{x}^{(k)} - \left[ w(\mathbf{x}^{(k)}), \mathbf{x}^{(k)}; F \right]^{-1} \left( b \, F(\mathbf{x}^{(k)}) + c \, F(\mathbf{y}^{(k)}) \right), \, k \ge 0. \end{aligned} \tag{6}$$

For $a = b = c = 1$ and $w(\mathbf{x}^{(k)}) = \mathbf{x}^{(k)} + \beta F(\mathbf{x}^{(k)})$, where $\beta$ is an arbitrary non-zero constant, this method possesses third-order convergence. In this case $\mathbf{y}^{(k)}$ is the Traub-Steffensen iteration [6]. For $\beta = 1$, $\mathbf{y}^{(k)}$ reduces to the Steffensen iteration [10]. Both of these iterations are quadratically convergent.

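The quadratically convergent Traub-Steffensen iteration can be sketched for a scalar equation as follows. The test equation, the guard against a degenerate divided difference, and all numerical parameters are illustrative assumptions:

```python
def traub_steffensen(f, x0, beta=0.1, tol=1e-12, max_iter=50):
    """Scalar Traub-Steffensen iteration: derivative-free, quadratic order."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        w = x + beta * fx            # auxiliary point w(x) = x + beta*f(x)
        if fx == 0 or w == x:        # at (numerical) convergence, stop
            return x
        dd = (f(w) - fx) / (w - x)   # first-order divided difference [w, x; f]
        x_new = x - fx / dd
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Solve x^2 - 2 = 0 starting from 1.5; the positive root is sqrt(2).
root = traub_steffensen(lambda x: x**2 - 2, 1.5)
```

The divided difference $(f(w) - f(x))/(w - x)$ plays the role of $f'(x)$, which is what makes the scheme derivative-free.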
The two-step third-order Traub-Steffensen-type method, i.e., the case of (6) with $a = b = c = 1$, can be written as

$$\begin{aligned} \mathbf{x}^{(0)} \in D, \; w(\mathbf{x}^{(k)}) &= \mathbf{x}^{(k)} + \beta F(\mathbf{x}^{(k)}), \\ \mathbf{y}^{(k)} &= \mathbf{M}_{2,1}(\mathbf{x}^{(k)}), \\ \mathbf{x}^{(k+1)} = \mathbf{M}_{3,1}(\mathbf{x}^{(k)}, \mathbf{y}^{(k)}) &= \mathbf{y}^{(k)} - [w(\mathbf{x}^{(k)}), \mathbf{x}^{(k)}; F]^{-1} F(\mathbf{y}^{(k)}), \; k \ge 0, \end{aligned} \tag{7}$$

where $\mathbf{M}_{2,1}(\mathbf{x}^{(k)}) = \mathbf{x}^{(k)} - [w(\mathbf{x}^{(k)}), \mathbf{x}^{(k)}; F]^{-1} F(\mathbf{x}^{(k)})$ is the quadratically convergent Traub-Steffensen scheme. Here and in the sequel, the symbol $\mathbf{M}_{p,i}$ denotes an $i$-th iteration function of convergence order $p$. It can be observed that the third-order scheme (7) is computationally more efficient than the quadratically convergent Traub-Steffensen scheme: the convergence order is increased from two to three at the cost of only one additional function evaluation, without an extra inverse operator. We discuss computational efficiency in later sections.
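A scalar sketch of the two-step scheme (7) makes the efficiency argument concrete: the second step reuses the already computed divided difference, so it costs only one extra function evaluation. The test equation, parameter values and degenerate-case guard are our own illustrative assumptions:

```python
def traub_steffensen_3rd(f, x0, beta=0.1, tol=1e-12, max_iter=50):
    """Scalar sketch of the third-order two-step scheme (7)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        w = x + beta * fx             # Traub-Steffensen auxiliary point
        if fx == 0 or w == x:         # at (numerical) convergence, stop
            return x
        dd = (f(w) - fx) / (w - x)    # divided difference [w, x; f]
        y = x - fx / dd               # M_{2,1}: quadratic Traub-Steffensen step
        x_new = y - f(y) / dd         # M_{3,1}: reuses dd, one extra f-evaluation
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Solve x^3 - x - 1 = 0; the real root is approximately 1.3247179572.
root = traub_steffensen_3rd(lambda x: x**3 - x - 1, 1.5)
```

In the system setting, the single matrix `dd` would likewise be factorized once per iteration and reused for both linear solves.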

Researchers have long sought iterative methods of ever higher efficiency, since different methods converge to the solution at different speeds. Efficiency can be improved either by increasing the convergence order, by decreasing the computational cost, or both. In [11], Ren et al. derived a fourth-order derivative-free method that uses three evaluations of $F$, three divided differences and two matrix inversions per iteration. Zheng et al. [12] constructed two families of fourth-order derivative-free methods for scalar nonlinear equations that extend to systems of nonlinear equations. The first family requires three evaluations of $F$, three divided differences and two matrix inversions, whereas the second needs three evaluations of $F$, three divided differences and three matrix inversions. Grau et al. presented a fourth-order derivative-free method in [13] utilizing four evaluations of $F$, two divided differences and two matrix inversions. Sharma and Arora [14] presented a fourth-order derivative-free method that uses three evaluations of $F$, three divided differences and one matrix inversion per step.

In search of faster techniques, researchers have also introduced sixth- and seventh-order derivative-free methods in [13,15–18]. The sixth-order method proposed by Grau et al. in [13] requires five evaluations of $F$, two divided differences and two matrix inversions. Sharma and Arora [17] also developed a method of at least sixth order, which requires five function evaluations, two divided differences and one matrix inversion per iteration. The seventh-order method proposed by Sharma and Arora [15] utilizes four evaluations of $F$, five divided differences and two matrix inversions per iteration. The seventh-order methods presented by Wang and Zhang [16] use four evaluations of $F$, five divided differences and three matrix inversions. Ahmad et al. [18] proposed an eighth-order derivative-free method without memory that uses six function evaluations, three divided differences and one matrix inversion.

The main goal of this study is to develop a derivative-free method of high computational efficiency, that is, a method with high convergence speed and low computational cost. Accordingly, we present a Traub-Steffensen-type method of fifth order of convergence which requires four evaluations of $F$, two divided differences and only one matrix inversion per step. The scheme of the present contribution is simple and consists of three steps. Of the three steps, the first two are those of the cubically convergent Traub-Steffensen-type scheme (7), whereas the third is a derivative-free modification of Chebyshev's scheme (3). We show that the proposed method is more efficient than existing methods of similar nature.

The rest of the paper is organized as follows. Basic definitions relevant to the present work are stated in Section 2. In Section 3, the scheme of the fifth-order method is introduced and its convergence behavior is studied. In Section 4, the computational efficiency of the new method is examined and compared with existing derivative-free methods. In Section 5, basins of attraction are presented to check the stability and convergence of the new method. Numerical tests are performed in Section 6 to verify the theoretical results proved in Sections 3 and 4. Section 7 contains the concluding remarks.

## **2. Preliminary Results**

#### *2.1. Computational Order of Convergence*

Let $\alpha$ be a solution of the equation $F(\mathbf{x}) = 0$ and let $\mathbf{x}^{(k-2)}$, $\mathbf{x}^{(k-1)}$, $\mathbf{x}^{(k)}$ and $\mathbf{x}^{(k+1)}$ be four consecutive iterates close to $\alpha$. Then the computational order of convergence (say, $p_c$) can be calculated using the formula (see [19])

$$p_c = \frac{\log\left(\|\mathbf{x}^{(k+1)} - \mathbf{x}^{(k)}\| / \|\mathbf{x}^{(k)} - \mathbf{x}^{(k-1)}\|\right)}{\log\left(\|\mathbf{x}^{(k)} - \mathbf{x}^{(k-1)}\| / \|\mathbf{x}^{(k-1)} - \mathbf{x}^{(k-2)}\|\right)}. \tag{8}$$

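Formula (8) is straightforward to implement. The sketch below estimates $p_c$ from the last four iterates; the illustrative run uses Newton's method on $x^2 - 2 = 0$ (our own choice of example), for which $p_c$ should come out close to 2:

```python
import numpy as np

def computational_order(iterates):
    """Estimate p_c via formula (8) from the last four iterates."""
    x0, x1, x2, x3 = iterates[-4], iterates[-3], iterates[-2], iterates[-1]
    num = np.log(np.linalg.norm(x3 - x2) / np.linalg.norm(x2 - x1))
    den = np.log(np.linalg.norm(x2 - x1) / np.linalg.norm(x1 - x0))
    return num / den

# Illustration: Newton's method (quadratically convergent) on x^2 - 2 = 0.
xs = [np.array([1.5])]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x**2 - 2) / (2 * x))
pc = computational_order(xs)
```

Since (8) uses only differences of successive iterates, it requires no knowledge of the exact solution $\alpha$, which is what makes it convenient for the numerical tests of Section 6.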