**1. Introduction**

In this study, we consider the problem of solving the nonlinear equations *F*(*x*) = 0 by iterative methods, wherein *F* : D ⊂ R*m* → R*m* is a univariate function when *m* = 1 and a multivariate function when *m* > 1, defined on an open domain D. A univariate function is usually denoted by *f*(*x*).

Newton's method [1–3] is one of the basic one-point methods; it has quadratic convergence and requires one function and one derivative evaluation per iteration, but it may diverge if the derivative is very small or zero. To overcome this problem, researchers have also proposed derivative-free one-point methods, for example, the Secant method [2], the Traub method [2], the Muller method [4,5], the Jarratt and Nudds method [6] and the Sharma method [7]. These methods are classified as one-point methods with memory, whereas Newton's method is a one-point method without memory (see Reference [2]). All of the above-mentioned one-point methods with memory require one function evaluation per iteration and possess order of convergence 1.84, except the Secant method, which has order 1.62.

In this paper, we develop a new efficient one-point method with memory of convergence order 1.84 by using rational linear interpolation. The method is derived by computing the coefficients of a rational function that passes through three points. The derived coefficients are then substituted into the derivative of the rational function which, when used in Newton's scheme, gives the new method. The formula uses one function evaluation per step and has an efficiency index equal to that of the aforementioned methods of the same order. The main advantages of the new method over the existing ones, however, are its simplicity and its suitability for solving systems of nonlinear equations.

The contents of the paper are organized as follows: in Section 2, the new method is developed and its convergence is discussed; in Section 3, some numerical examples are considered to verify the theoretical results and to compare the performance of the proposed technique with existing techniques; finally, the proposed method is generalized for solving systems of nonlinear equations in Section 4.

#### **2. The Method and Its Convergence**

Our aim is to develop a derivative-free iterative method by Newton's scheme

$$\mathbf{x}\_{n+1} = \mathbf{x}\_n - \frac{f(\mathbf{x}\_n)}{f'(\mathbf{x}\_n)}, \quad n = 0, 1, 2, \dots,\tag{1}$$

wherein *f*′(*xn*) is approximated by using the rational linear function

$$R(t) = \frac{t - \mathbf{x}\_n + a}{b(t - \mathbf{x}\_n) + c},\tag{2}$$

such that

$$R(\mathbf{x}\_{n-2}) = f(\mathbf{x}\_{n-2}), \ R(\mathbf{x}\_{n-1}) = f(\mathbf{x}\_{n-1}), \ R(\mathbf{x}\_n) = f(\mathbf{x}\_n), \ n = 2, 3, 4, \dots \tag{3}$$

Imposing the conditions (3) in (2), we have that

$$(b(\mathbf{x}\_{n-2} - \mathbf{x}\_n) + c)f(\mathbf{x}\_{n-2}) = \mathbf{x}\_{n-2} - \mathbf{x}\_n + a,$$

$$(b(\mathbf{x}\_{n-1} - \mathbf{x}\_n) + c)f(\mathbf{x}\_{n-1}) = \mathbf{x}\_{n-1} - \mathbf{x}\_n + a,$$

$$cf(\mathbf{x}\_n) = a. \tag{4}$$

Then

$$\begin{aligned} b(\mathbf{x}\_{n-2} - \mathbf{x}\_n)f(\mathbf{x}\_{n-2}) + c(f(\mathbf{x}\_{n-2}) - f(\mathbf{x}\_n)) &= \mathbf{x}\_{n-2} - \mathbf{x}\_n, \\ b(\mathbf{x}\_{n-1} - \mathbf{x}\_n)f(\mathbf{x}\_{n-1}) + c(f(\mathbf{x}\_{n-1}) - f(\mathbf{x}\_n)) &= \mathbf{x}\_{n-1} - \mathbf{x}\_n \end{aligned} \tag{5}$$

equivalently

$$bf(\mathbf{x}\_{n-2}) + cf[\mathbf{x}\_{n-2}, \mathbf{x}\_n] = 1,$$

$$bf(\mathbf{x}\_{n-1}) + cf[\mathbf{x}\_{n-1}, \mathbf{x}\_n] = 1,\tag{6}$$

where *f* [*s*, *t*] = (*f*(*s*) − *f*(*t*))/(*s* − *t*) is the Newton first-order divided difference. Solving for *b* and *c*, we obtain that

$$b = \frac{f[\mathbf{x}\_{n-2}, \mathbf{x}\_n] - f[\mathbf{x}\_{n-1}, \mathbf{x}\_n]}{f[\mathbf{x}\_{n-2}, \mathbf{x}\_n]f(\mathbf{x}\_{n-1}) - f[\mathbf{x}\_{n-1}, \mathbf{x}\_n]f(\mathbf{x}\_{n-2})} \tag{7}$$

and

$$c = \frac{f(\mathbf{x}\_{n-2}) - f(\mathbf{x}\_{n-1})}{f[\mathbf{x}\_{n-1}, \mathbf{x}\_n]f(\mathbf{x}\_{n-2}) - f[\mathbf{x}\_{n-2}, \mathbf{x}\_n]f(\mathbf{x}\_{n-1})}.\tag{8}$$

Some simple calculations yield

$$\begin{split} R'(\mathbf{x}\_n) &= \frac{1 - bf(\mathbf{x}\_n)}{c} \\ &= \frac{f[\mathbf{x}\_{n-1}, \mathbf{x}\_n] f[\mathbf{x}\_{n-2}, \mathbf{x}\_n]}{f[\mathbf{x}\_{n-2}, \mathbf{x}\_{n-1}]}. \end{split} \tag{9}$$

Assuming that *f*′(*xn*) is approximately equal to *R*′(*xn*), then, in view of (9), the method (1) can be presented as

$$\mathbf{x}\_{n+1} = \mathbf{x}\_n - \frac{f[\mathbf{x}\_{n-2}, \mathbf{x}\_{n-1}]}{f[\mathbf{x}\_{n-1}, \mathbf{x}\_n] f[\mathbf{x}\_{n-2}, \mathbf{x}\_n]} f(\mathbf{x}\_n). \tag{10}$$

The scheme (10) defines a one-point method with memory and requires one function evaluation per iteration.
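As an illustration, scheme (10) can be sketched in a few lines of code. The helper names below (`divided_difference`, `nm_solve`) are ours, not from the paper, and the stopping criterion mirrors the one used in the numerical experiments of Section 3:

```python
# Sketch of the proposed one-point method with memory, Equation (10).

def divided_difference(f, s, t):
    """First-order Newton divided difference f[s, t] = (f(s) - f(t)) / (s - t)."""
    return (f(s) - f(t)) / (s - t)

def nm_solve(f, x0, x1, x2, tol=1e-12, max_iter=50):
    """x_{n+1} = x_n - f[x_{n-2},x_{n-1}] / (f[x_{n-1},x_n] f[x_{n-2},x_n]) * f(x_n)."""
    xs = [x0, x1, x2]
    for _ in range(max_iter):
        xm2, xm1, xn = xs[-3], xs[-2], xs[-1]
        num = divided_difference(f, xm2, xm1)
        den = divided_difference(f, xm1, xn) * divided_difference(f, xm2, xn)
        xnew = xn - num / den * f(xn)
        xs.append(xnew)
        if abs(xnew - xn) + abs(f(xnew)) < tol:
            break
    return xs[-1]
```

For example, for f(x) = x² − 2 with starting points 1, 2 and 1.5, the iterates converge to √2 in a handful of steps, using a single new function evaluation per step.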

In the following theorem, we shall find the order of convergence of (10). We use the concept of *R*-order of convergence given by Ortega and Rheinboldt [8]. Suppose {*xn*} is a sequence of approximations generated by an iterative method. If the sequence converges to a zero *α* of *f* with *R*-order ≥ *r*, then we write

$$
e\_{n+1} \sim e\_n^r. \tag{11}
$$

**Theorem 1.** *Suppose that f*(*x*)*, f*′(*x*)*, f*″(*x*) *and f*‴(*x*) *are continuous in the neighborhood* D *of a zero (say, α) of f . If the initial approximations x*0*, x*1 *and x*2 *are sufficiently close to α, then the R-order of convergence of the method* (10) *is 1.84.*

**Proof.** Let *en* = *xn* − *α*, *en*−1 = *xn*−1 − *α* and *en*−2 = *xn*−2 − *α* be the errors in the *n*-th, (*n* − 1)-th and (*n* − 2)-th iterations, respectively. Using Taylor's expansions of *f*(*xn*), *f*(*xn*−<sup>1</sup>) and *f*(*xn*−<sup>2</sup>) about *α* and taking into account that *f*(*α*) = 0 and *f*′(*α*) ≠ 0, we have that

$$f(\mathbf{x}\_n) = f'(a) \left[ \mathbf{e}\_n + A\_2 \mathbf{e}\_n^2 + A\_3 \mathbf{e}\_n^3 + \dotsb \right],\tag{12}$$

$$f(\mathbf{x}\_{n-1}) = f'(\mathbf{a}) \left[ \mathbf{c}\_{n-1} + A\_2 \mathbf{c}\_{n-1}^2 + A\_3 \mathbf{c}\_{n-1}^3 + \dotsb \right],\tag{13}$$

$$f(\mathbf{x}\_{n-2}) = f'(\mathbf{a}) \left[ \mathbf{e}\_{n-2} + A\_2 \mathbf{e}\_{n-2}^2 + A\_3 \mathbf{e}\_{n-2}^3 + \dotsb \right],\tag{14}$$

where *A*1 = 1 and *Ai* = (1/*i*!) *f* (*i*)(*α*)/ *f*′(*α*), *i* = 2, 3, 4, . . . .

Using Equations (12) and (13), we have

$$f[\mathbf{x}\_{n-1}, \mathbf{x}\_n] = f'(\mathbf{a}) \left( 1 + A\_2(\mathbf{e}\_n + \mathbf{e}\_{n-1}) + A\_3(\mathbf{e}\_n^2 + \mathbf{e}\_{n-1}^2 + \mathbf{e}\_n \mathbf{e}\_{n-1}) + \cdots \right). \tag{15}$$


Similarly we can obtain

$$f[\mathbf{x}\_{n-2}, \mathbf{x}\_n] = f'(\mathbf{a})(1 + A\_2(\mathbf{e}\_n + \mathbf{e}\_{n-2}) + A\_3(\mathbf{e}\_n^2 + \mathbf{e}\_{n-2}^2 + \mathbf{e}\_n\mathbf{e}\_{n-2}) + \cdots),\tag{16}$$

$$f[\mathbf{x}\_{n-2}, \mathbf{x}\_{n-1}] = f'(\mathbf{a})(1 + A\_2(\mathbf{e}\_{n-2} + \mathbf{e}\_{n-1}) + A\_3(\mathbf{e}\_{n-2}^2 + \mathbf{e}\_{n-1}^2 + \mathbf{e}\_{n-2}\mathbf{e}\_{n-1}) + \cdots). \tag{17}$$

Using Equations (12), (15)–(17) in (10), we obtain that

$$\begin{split} e\_{n+1} &= e\_n - \frac{e\_n + A\_2 e\_n^2 + A\_3 e\_n^3 + A\_2 (e\_{n-2} e\_n + e\_{n-1} e\_n) + A\_3 (e\_n e\_{n-1} e\_{n-2}) + \cdots}{1 + A\_2 (e\_n + e\_{n-2}) + A\_2 (e\_n + e\_{n-1}) + A\_2^2 (e\_n^2 + e\_n e\_{n-2} + e\_{n-1} e\_n + e\_{n-1} e\_{n-2}) + \cdots} \\ &= (A\_2^2 - A\_3) e\_n e\_{n-1} e\_{n-2} + \cdots \ . \end{split}$$

that is

$$
e\_{n+1} \sim e\_n e\_{n-1} e\_{n-2}. \tag{18}
$$

From (11), we have that

$$
e\_n \sim e\_{n+1}^{\frac{1}{r}}, \tag{19}
$$

$$
e\_{n-1} \sim e\_n^{\frac{1}{r}} \tag{20}
$$

and

$$
e\_{n-2} \sim e\_{n-1}^{\frac{1}{r}} \sim e\_n^{\frac{1}{r^2}}. \tag{21}
$$

Combining (18), (20) and (21), it follows that

$$
e\_{n+1} \sim e\_n e\_n^{\frac{1}{r}} e\_n^{\frac{1}{r^2}} = e\_n^{1 + \frac{1}{r} + \frac{1}{r^2}}. \tag{22}
$$

Comparison of the exponents of *en* on the right hand side of (11) and (22) leads to

$$r^3 - r^2 - r - 1 = 0,$$

which has a unique positive real root, approximately 1.84. That means the order *r* of method (10) is 1.84.
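As a quick check, the unique positive root of this cubic (known as the tribonacci constant, ≈ 1.8393) can be bracketed by bisection, since the polynomial changes sign on [1, 2]:

```python
# Locate the unique positive real root of r^3 - r^2 - r - 1 = 0 by bisection.
# The polynomial is negative at r = 1 (value -2) and positive at r = 2 (value 1).
def p(r):
    return r**3 - r**2 - r - 1

lo, hi = 1.0, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if p(mid) < 0:
        lo = mid  # root lies in the upper half
    else:
        hi = mid  # root lies in the lower half

root = 0.5 * (lo + hi)
print(round(root, 4))  # 1.8393
```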

**Remark 1.** *According to Ostrowski's formula [9] for the efficiency measure of an iterative method of order r, if c is the computational cost measured in terms of the number of evaluations of the function f and its derivatives that are required for each iteration, then the efficiency index of the method is given by r*<sup>1/*c*</sup>*. Thus the efficiency index of Newton's method is* 1.414*, that of the Secant method is* 1.62*, whereas in the case of the Muller, Jarratt-Nudds, Traub, Sharma and new method* (10) *this index is* 1.84*.*

#### **3. Numerical Results**

We check the performance of the new method (10), now denoted by NM, using the computational software package Mathematica [10] with multiple-precision arithmetic. For comparison purposes, we consider the Muller method (MM), the Traub method (TM), the Jarratt-Nudds method (JNM) and the Sharma method (SM). These methods are given as follows:

*Muller method* (MM):

$$\mathbf{x}\_{n+1} = \mathbf{x}\_n - \frac{2a\_2}{a\_1 \pm \sqrt{a\_1^2 - 4a\_0a\_2}},$$

where

$$\begin{split} a\_{0} &= \frac{1}{D} \left[ (\mathbf{x}\_{n} - \mathbf{x}\_{n-2}) (f(\mathbf{x}\_{n}) - f(\mathbf{x}\_{n-1})) - (\mathbf{x}\_{n} - \mathbf{x}\_{n-1}) (f(\mathbf{x}\_{n}) - f(\mathbf{x}\_{n-2})) \right], \\ a\_{1} &= \frac{1}{D} \left[ (\mathbf{x}\_{n} - \mathbf{x}\_{n-2})^{2} (f(\mathbf{x}\_{n}) - f(\mathbf{x}\_{n-1})) - (\mathbf{x}\_{n} - \mathbf{x}\_{n-1})^{2} (f(\mathbf{x}\_{n}) - f(\mathbf{x}\_{n-2})) \right], \\ a\_{2} &= f(\mathbf{x}\_{n}), \\ D &= (\mathbf{x}\_{n} - \mathbf{x}\_{n-1}) (\mathbf{x}\_{n} - \mathbf{x}\_{n-2}) (\mathbf{x}\_{n-1} - \mathbf{x}\_{n-2}). \end{split}$$

*Traub method* (TM):

$$\mathbf{x}\_{n+1} = \mathbf{x}\_n - \frac{f(\mathbf{x}\_n)}{f[\mathbf{x}\_n, \mathbf{x}\_{n-1}]} + \frac{f(\mathbf{x}\_n)f(\mathbf{x}\_{n-1})}{f(\mathbf{x}\_n) - f(\mathbf{x}\_{n-2})} \left(\frac{1}{f[\mathbf{x}\_n, \mathbf{x}\_{n-1}]} - \frac{1}{f[\mathbf{x}\_{n-1}, \mathbf{x}\_{n-2}]}\right).$$

*Jarratt-Nudds method* (JNM):

$$\mathbf{x}\_{n+1} = \mathbf{x}\_n + \frac{(\mathbf{x}\_n - \mathbf{x}\_{n-1})(\mathbf{x}\_n - \mathbf{x}\_{n-2})f(\mathbf{x}\_n)(f(\mathbf{x}\_{n-1}) - f(\mathbf{x}\_{n-2}))}{(\mathbf{x}\_n - \mathbf{x}\_{n-1})(f(\mathbf{x}\_{n-2}) - f(\mathbf{x}\_n))f(\mathbf{x}\_{n-1}) + (\mathbf{x}\_n - \mathbf{x}\_{n-2})(f(\mathbf{x}\_n) - f(\mathbf{x}\_{n-1}))f(\mathbf{x}\_{n-2})}.$$

*Sharma method* (SM):

$$\mathbf{x}\_{n+1} = \mathbf{x}\_n - \frac{2f(\mathbf{x}\_n)(bf(\mathbf{x}\_n) - d)}{c \pm \sqrt{c^2 - 4af(\mathbf{x}\_n)(bf(\mathbf{x}\_n) - d)}},$$

where

$$\begin{aligned} c &= \frac{a(h\_1\delta\_2 - h\_2\delta\_1) + b\delta\_1\delta\_2(h\_1\delta\_1 - h\_2\delta\_2)}{\delta\_2 - \delta\_1}, \\ d &= \frac{a(h\_2 - h\_1) + b(h\_2\delta\_2^2 - h\_1\delta\_1^2)}{\delta\_2 - \delta\_1}, \\ h\_k &= \mathbf{x}\_n - \mathbf{x}\_{n-k}, \\ \delta\_k &= \frac{f(\mathbf{x}\_n) - f(\mathbf{x}\_{n-k})}{\mathbf{x}\_n - \mathbf{x}\_{n-k}}, \ k = 1, 2. \end{aligned}$$

We consider five examples for numerical tests as follows:

**Example 1.** *Let us consider Kepler's equation; f*1(*x*) = *x* − *α*1 sin(*x*) − *K* = 0*, where* 0 ≤ *α*1 < 1 *and* 0 ≤ *K* ≤ *π. A numerical study, based on different values of parameters K and α*1*, has been performed in [11]. We solve the equation taking K* = 0.1 *and α*1 = 0.25*. For this set of values the solution α is* 0.13320215082857313...*. The numerical results are shown in Table 1.*

**Table 1.** Comparison of performance of methods for function *f*1(*x*), taking *x*0 = 0.5, *x*1 = − 0.3, *x*2 = 0.1.
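For instance, a double-precision sketch of scheme (10) on this Kepler equation, with the same starting points as Table 1, reproduces the quoted root (the helper names are ours; the paper's experiments use multiple-precision arithmetic):

```python
import math

# Scheme (10) applied to Kepler's equation with K = 0.1 and alpha1 = 0.25.
def f1(x):
    return x - 0.25 * math.sin(x) - 0.1

def dd(f, s, t):
    return (f(s) - f(t)) / (s - t)

x0, x1, x2 = 0.5, -0.3, 0.1  # starting points of Table 1
for _ in range(10):
    x3 = x2 - dd(f1, x0, x1) / (dd(f1, x1, x2) * dd(f1, x0, x2)) * f1(x2)
    x0, x1, x2 = x1, x2, x3
    if abs(x2 - x1) < 1e-12:  # stop once successive iterates coincide
        break

print(x2)  # approaches 0.13320215082857313...
```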


**Example 2.** *Next, we consider isentropic supersonic flow across a sharp expansion corner (see Reference [12]). The relationship between the Mach number before the corner* (*<sup>i</sup>*.*e*., *<sup>M</sup>*1) *and after the corner* (*<sup>i</sup>*.*e*., *<sup>M</sup>*2) *is expressed by*

$$f\_2(\mathbf{x}) = b^{1/2} \left( \tan^{-1} \left( \frac{M\_2^2 - 1}{b} \right)^{1/2} - \tan^{-1} \left( \frac{M\_1^2 - 1}{b} \right)^{1/2} \right) - \left( \tan^{-1} (M\_2^2 - 1)^{1/2} - \tan^{-1} (M\_1^2 - 1)^{1/2} \right) - \delta,$$

*where b* = (*γ* + 1)/(*γ* − 1) *and γ is the specific heat ratio of the gas. We take the values M*1 = 1.5*, γ* = 1.4 *and δ* = 100*. The solution α of this problem is* 1.8411294068501996...*. The numerical results are shown in Table 2.*

**Table 2.** Comparison of performance of methods for function *f*2(*x*), taking *x*0 = 1.1, *x*1 = 2.3, *x*2 = 1.7.


**Example 3.** *Consider the equation governing the L-C-R circuit in electrical engineering [13]*

$$L\frac{d^2q}{dt^2} + R\frac{dq}{dt} + \frac{q}{C} = 0$$

*whose solution q*(*t*) *is*

$$q(t) = q\_0 e^{-Rt/2L} \cos\left(\sqrt{\frac{1}{LC} - \left(\frac{R}{2L}\right)^2}t\right),$$

*where at t* = 0, *q* = *q*0.

A particular problem as a case study is given as: *Assuming that the charge is dissipated to 1 percent of its original value* (*q*/*q*0 = 0.01) *in t* = 0.05 *s, with L* = 5 *Henry and C* = 10−<sup>4</sup> *Farad, determine the proper value of R.*

Using the given numerical data, the problem is given as

$$f\_3(\mathbf{x}) = e^{-0.005\mathbf{x}} \cos\left(0.05\sqrt{2000 - 0.01\mathbf{x}^2}\right) - 0.01 = 0,$$

where *x* = *R*. Solution of this problem is, *α* = 328.15142908514817.... Numerical results are displayed in Table 3.

**Table 3.** Comparison of performance of methods for function *f*3(*x*), taking *x*0 = 430, *x*1 = 200, *x*2 = 315.


**Example 4.** *Law of population growth is given as (see References [14,15])*

$$\frac{dN(t)}{dt} = \lambda N(t) + \nu,$$

*where N*(*t*) = *population at time t*, *λ* = *constant birth rate of population and ν* = *constant immigration rate. The solution of this differential equation is given by*

$$N(t) = N\_0 e^{\lambda t} + \frac{\nu}{\lambda} (e^{\lambda t} - 1),$$

*where N*0 *is the initial population.*

A particular problem for the above model can be formulated as: *Suppose that a certain population consists of 1,000,000 people initially. Further suppose that 435,000 people immigrate into the community in the first year and 1,564,000 people are present at the end of one year. Determine the birth rate (λ) of this population.* To find the birth rate, we will solve the equation

$$f\_4(\mathbf{x}) = 1564 - 1000e^{\mathbf{x}} - \frac{435}{\mathbf{x}}(e^{\mathbf{x}} - 1) = 0,$$

wherein *x* = *λ*. Solution of this problem is, *α* = 0.10099792968574979.... The numerical results are given in Table 4.

**Table 4.** Comparison of performance of methods for function *f*4(*x*), taking *x*0 = −0.4, *x*1 = 0.5, *x*2 = 0.05.


**Example 5.** *Next, we consider an example of academic interest, which is defined by*

$$f\_5(x) = \begin{cases} \ x^3 \ln x^2 + x^5 - x^4, & x \neq 0, \\\ 0, & x = 0. \end{cases}$$

*It has three zeros. Note that α* = 0 *is a zero of multiplicity* 3*. We consider the zero α* = 1 *in our work. Numerical results are displayed in Table 5.*

**Table 5.** Comparison of performance of methods for function *f*5(*x*), taking *x*0 = 1.2, *x*1 = 0.9, *x*2 = 1.05.


Numerical results shown in Tables 1–5 contain the required number of iterations *n*, the computed estimated error |*xn*+<sup>1</sup> − *xn*| in the first three iterations (wherein A(-h) denotes *A* × <sup>10</sup>−*<sup>h</sup>*), the computational order of convergence (COC) and the CPU time (CPU-time) measured during the execution of the program. The computational order of convergence is computed by using the formula [16]

$$\text{COC} = \frac{\ln(|\mathbf{x\_{n+1}} - \mathbf{x\_n}|/|\mathbf{x\_n} - \mathbf{x\_{n-1}}|)}{\ln(|\mathbf{x\_n} - \mathbf{x\_{n-1}}|/|\mathbf{x\_{n-1}} - \mathbf{x\_{n-2}}|)}.$$
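As an illustration of this formula, one can run scheme (10) in multiple-precision arithmetic on a simple test equation, say f(x) = x² − 2 (our choice, not one of the paper's examples), and estimate the COC from the last differences of iterates:

```python
from decimal import Decimal, getcontext

getcontext().prec = 300  # multiple-precision, in the spirit of the paper's experiments

def f(x):
    return x * x - 2

def dd(s, t):
    """First-order divided difference f[s, t]."""
    return (f(s) - f(t)) / (s - t)

# Eight steps of scheme (10) from three arbitrary starting points.
xs = [Decimal(1), Decimal(2), Decimal("1.5")]
for _ in range(8):
    x0, x1, x2 = xs[-3:]
    xs.append(x2 - dd(x0, x1) / (dd(x1, x2) * dd(x0, x2)) * f(x2))

# COC from the last three consecutive differences |x_{n+1} - x_n|.
d = [abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
coc = (d[-1] / d[-2]).ln() / (d[-2] / d[-3]).ln()
print(float(coc))  # close to the theoretical order 1.84
```

Note that double precision is not sufficient here: the error reaches machine accuracy before the ratio of successive errors settles, which is why the paper's experiments use multiple-precision arithmetic.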

The necessary iterations (*n*) are obtained so as to satisfy the criterion (|*xn*+<sup>1</sup> − *xn*| + | *f*(*xn*)|) < 10−100. The first two initial approximations *x*0 and *x*1 are chosen arbitrarily, whereas the third, *x*2, is taken as the average of these two. From the numerical results displayed in Tables 1–5, we can conclude that the accuracy of the new method (NM) is either equal to or better than that of the existing methods. Moreover, it requires less CPU time than the existing methods, which makes it more efficient.

#### **4. Generalized Method**

We end this work with a method for solving a system of nonlinear equations *F*(*x*) = 0, where *F* : D ⊂ R*m* → R*m* is the given nonlinear function, *F* = (*f*1, *f*2, . . . , *fm*)*<sup>T</sup>* and *x* = (*x*1, . . . , *xm*)*<sup>T</sup>*. The divided difference *F*[*x*, *y*] of *F* is a matrix of order *m* × *m* (see Reference [2], p. 229) with elements

$$F[\mathbf{x}, y]\_{ij} = \frac{f\_i(\mathbf{x}\_1, \dots, \mathbf{x}\_j, y\_{j+1}, \dots, y\_m) - f\_i(\mathbf{x}\_1, \dots, \mathbf{x}\_{j-1}, y\_j, \dots, y\_m)}{\mathbf{x}\_j - y\_j}, \quad 1 \le i, j \le m. \tag{23}$$

Keeping in mind (10), we can write the corresponding method for the system of nonlinear equations as:

$$\mathbf{x}^{(n+1)} = \mathbf{x}^{(n)} - F[\mathbf{x}^{(n-1)}, \mathbf{x}^{(n)}]^{-1} F[\mathbf{x}^{(n-2)}, \mathbf{x}^{(n-1)}] F[\mathbf{x}^{(n-2)}, \mathbf{x}^{(n)}]^{-1} F(\mathbf{x}^{(n)}),\tag{24}$$

where *<sup>F</sup>*[·, ·]−<sup>1</sup> is the inverse of the divided difference operator *<sup>F</sup>*[·, ·].
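A sketch of the operator (23) and one step of scheme (24) is given below. The names are ours, the inverses are realized by linear solves rather than explicit inversion (as Remark 2 suggests), and a practical implementation would reuse function values across columns instead of re-evaluating F for every entry:

```python
import numpy as np

def divided_difference_matrix(F, x, y):
    """m x m divided difference F[x, y] with entries given by Equation (23)."""
    m = len(x)
    M = np.empty((m, m))
    for j in range(m):
        zp = np.concatenate([x[:j + 1], y[j + 1:]])  # (x_1..x_j, y_{j+1}..y_m)
        zm = np.concatenate([x[:j], y[j:]])          # (x_1..x_{j-1}, y_j..y_m)
        M[:, j] = (F(zp) - F(zm)) / (x[j] - y[j])
    return M

def nm_system_step(F, xm2, xm1, xn):
    """One step of scheme (24):
    x_{n+1} = x_n - F[x_{n-1},x_n]^{-1} F[x_{n-2},x_{n-1}] F[x_{n-2},x_n]^{-1} F(x_n)."""
    A = divided_difference_matrix(F, xm1, xn)
    B = divided_difference_matrix(F, xm2, xm1)
    C = divided_difference_matrix(F, xm2, xn)
    return xn - np.linalg.solve(A, B @ np.linalg.solve(C, F(xn)))
```

A caller iterates `nm_system_step` on the three most recent vectors, stopping once successive iterates coincide to the desired tolerance.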

**Remark 2.** *The computational efficiency of an iterative method for solving the system F*(*x*) = 0 *is calculated by the efficiency index E* = *r*<sup>1/*C*</sup> *(see Reference [17]), where r is the order of convergence and C is the total cost of computation. The cost of computation C is measured in terms of the total number of function evaluations per iteration and the number of operations (that means products and divisions) per iteration. The various evaluations and operations that contribute to the cost of computation are described as follows. For the computation of F in any iterative function we evaluate m scalar functions fi* , (1 ≤ *i* ≤ *m*)*, and when computing a divided difference F*[*x*, *y*]*, we evaluate m*(*m* − 1) *scalar functions, wherein F*(*x*) *and F*(*y*) *are evaluated separately. Furthermore, one has to add m*<sup>2</sup> *divisions from any divided difference. For the computation of an inverse linear operator, a linear system can be solved that requires m*(*m* − 1)(2*m* − 1)/6 *products and m*(*m* − 1)/2 *divisions in the LU decomposition process, and m*(*m* − 1) *products and m divisions in the resolution of two triangular linear systems. Thus, taking into account the above considerations of evaluations and operations for the method* (24)*, we have that*

$$C = \frac{2}{3}m^3 + 8m^2 - \frac{8}{3}m \quad \text{and} \quad E = 1.84^{\frac{3}{2m^3 + 24m^2 - 8m}}.$$
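For illustration, the cost C and efficiency index E can be tabulated for the dimensions m used in the examples below:

```python
# Cost C and efficiency index E of method (24) for the problem sizes m
# used in Examples 6 and 7.
for m in (10, 30, 50, 100):
    C = 2 * m**3 / 3 + 8 * m**2 - 8 * m / 3
    E = 1.84 ** (1 / C)
    print(m, C, E)  # E decreases toward 1 as the dimension m grows
```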

Next, we apply the generalized method on the following problems:

**Example 6.** *The following system of m equations (selected from [18]) is considered:*

$$\sum\_{j=1, j\neq i}^{m} x\_j - e^{-x\_i} = 0, \quad 1 \le i \le m.$$

*In particular, we solve this problem for m* = 10, 30, 50, 100 *by selecting the m-dimensional initial guesses x*(0) = {2, 2, . . . , 2}*<sup>T</sup>, x*(1) = {−1, −1, . . . , −1}*<sup>T</sup> and x*(2) = {1/2, 1/2, . . . , 1/2}*<sup>T</sup> towards the corresponding solution:*

> *α* = {0.100488400337 . . . , 0.100488400337 . . . , . . . , 0.100488400337 . . .}*<sup>T</sup>* for *m* = 10,
> *α* = {0.033351667835 . . . , 0.033351667835 . . . , . . . , 0.033351667835 . . .}*<sup>T</sup>* for *m* = 30,
> *α* = {0.020003975040 . . . , 0.020003975040 . . . , . . . , 0.020003975040 . . .}*<sup>T</sup>* for *m* = 50,
> *α* = {0.010000498387 . . . , 0.010000498387 . . . , . . . , 0.010000498387 . . .}*<sup>T</sup>* for *m* = 100.

*Numerical results are displayed in Table 6.*


**Table 6.** Performance of new method (NM) for Example 6.

**Example 7.** *Consider the system of m equations (selected from Reference [19]):*

$$\tan^{-1}(\mathbf{x}\_i) + 1 - 2\sum\_{j=1, j\neq i}^{m} \mathbf{x}\_j^2 = 0, \quad 1 \le i \le m.$$

*Let us solve this problem for m* = 10, 30, 50, 100 *with the m-dimensional initial values x*(0) = {−1, −1, . . . , −1}*<sup>T</sup>, x*(1) = {0, 0, . . . , 0}*<sup>T</sup> and x*(2) = {−0.5, −0.5, . . . , −0.5}*<sup>T</sup> towards the corresponding solutions:*


*Numerical results are displayed in Table 7.*


**Table 7.** Performance of new method (NM) for Example 7.

In Tables 6 and 7 we have shown the results of the new method only, because the other methods are not applicable to nonlinear systems. We conclude that there are numerous one-point iterative methods for solving a scalar equation *f*(*x*) = 0; in contrast, such methods are rare for the multi-dimensional case, that is, for approximating a solution of *F*(*x*) = 0. Since the method uses the first divided difference, a drawback is that if at some stage (say *j*) the denominator *xj* − *yj* in Formula (23) vanishes, the method may fail to converge. However, this situation is rare, and we have applied the method successfully to many other problems. In the present work, an attempt has been made to develop an iterative scheme which is equally suitable for both categories, viz. univariate and multivariate functions.

**Author Contributions:** The contribution of all the authors has been equal. All of them have worked together to prepare the manuscript.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.
