**4. Computational Efficiency**

To determine the computational efficiency we use the definition given in Section 2.3. The evaluations and arithmetic operations that contribute to the computational cost are counted as follows. The computation of *F* in any iterative function requires the evaluation of *m* scalar functions *f<sub>i</sub>* (1 ≤ *i* ≤ *m*), while the computation of a divided difference [*x*, *y*; *F*] (see Section 2.2) requires *m*(*m* − 1) scalar function evaluations, with *F*(*x*) and *F*(*y*) evaluated separately; in addition, each divided difference contributes *m*<sup>2</sup> divisions. To apply an inverse linear operator we solve a linear system, which requires *m*(*m* − 1)(2*m* − 1)/6 products and *m*(*m* − 1)/2 divisions for the LU decomposition, plus *m*(*m* − 1) products and *m* divisions for the solution of the two triangular linear systems. Moreover, we add *m* products for the multiplication of a vector by a scalar and *m*<sup>2</sup> products for the multiplication of a matrix by a vector or of a matrix by a scalar.
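These counting rules are mechanical, so they can be tabulated programmatically. The following small sketch (an illustration, not part of the paper) encodes the per-block counts stated above:

```python
# Sketch (not from the paper): operation counts for the building blocks
# of the methods, for an m-dimensional system, following the rules above.

def divided_difference_cost(m):
    """One divided difference [x, y; F]: m(m-1) scalar function
    evaluations (F(x) and F(y) counted separately) and m^2 divisions."""
    return {"evals": m * (m - 1), "divs": m ** 2}

def linear_solve_cost(m):
    """One LU-based linear solve: m(m-1)(2m-1)/6 products and m(m-1)/2
    divisions for the factorization, plus m(m-1) products and m
    divisions for the two triangular solves."""
    products = m * (m - 1) * (2 * m - 1) // 6 + m * (m - 1)
    divisions = m * (m - 1) // 2 + m
    return {"prods": products, "divs": divisions}

print(divided_difference_cost(10))
print(linear_solve_cost(10))
```

Summing these counts over the steps of each method yields the cost expressions (40)–(49) below.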

The computational efficiency of the present method M5,1 is compared with that of the second order method M2,1; the third order method M3,1; the fourth order methods by Ren et al. [11], Grau et al. [13] and Sharma-Arora [14]; the fifth order method by Kumar et al. [25]; the sixth order method by Grau et al. [13]; and the seventh order methods by Sharma-Arora [15] and Wang-Zhang [16]. These methods are expressed as follows:

*Fourth order method by Ren et al.* (M4,1):

$$\begin{aligned} y^{(k)} &= x^{(k)} - [u^{(k)}, x^{(k)}; F]^{-1} F(x^{(k)}), \\ x^{(k+1)} &= y^{(k)} - \left( [y^{(k)}, x^{(k)}; F] + [y^{(k)}, u^{(k)}; F] - [u^{(k)}, x^{(k)}; F] \right)^{-1} F(y^{(k)}), \end{aligned}$$

where *u*<sup>(*k*)</sup> = *x*<sup>(*k*)</sup> + *F*(*x*<sup>(*k*)</sup>).

*Fourth order method by Grau et al.* (M4,2):

$$\begin{aligned} y^{(k)} &= x^{(k)} - [u^{(k)}, v^{(k)}; F]^{-1} F(x^{(k)}), \\ x^{(k+1)} &= y^{(k)} - \left( 2[y^{(k)}, x^{(k)}; F] - [u^{(k)}, v^{(k)}; F] \right)^{-1} F(y^{(k)}), \end{aligned}$$

where *u*<sup>(*k*)</sup> = *x*<sup>(*k*)</sup> + *F*(*x*<sup>(*k*)</sup>) and *v*<sup>(*k*)</sup> = *x*<sup>(*k*)</sup> − *F*(*x*<sup>(*k*)</sup>).

*Sharma-Arora fourth order method* (M4,3):

$$\begin{aligned} y^{(k)} &= x^{(k)} - [w^{(k)}, x^{(k)}; F]^{-1} F(x^{(k)}), \\ x^{(k+1)} &= y^{(k)} - \left( 3I - [w^{(k)}, x^{(k)}; F]^{-1} \left( [y^{(k)}, x^{(k)}; F] + [y^{(k)}, w^{(k)}; F] \right) \right) \\ &\quad \times [w^{(k)}, x^{(k)}; F]^{-1} F(y^{(k)}), \end{aligned}$$

where *w*<sup>(*k*)</sup> = *x*<sup>(*k*)</sup> + *βF*(*x*<sup>(*k*)</sup>) and *β* is a non-zero constant.

*Fifth order method by Kumar et al.* (M5,2):

$$\begin{aligned} y^{(k)} &= x^{(k)} - [w^{(k)}, x^{(k)}; F]^{-1} F(x^{(k)}), \\ z^{(k)} &= y^{(k)} - [w^{(k)}, x^{(k)}; F]^{-1} F(y^{(k)}), \\ x^{(k+1)} &= z^{(k)} - [x^{(k)}, y^{(k)}; F]^{-1} [w^{(k)}, x^{(k)}; F] [w^{(k)}, y^{(k)}; F]^{-1} F(z^{(k)}), \end{aligned}$$

where *w*<sup>(*k*)</sup> = *x*<sup>(*k*)</sup> + *F*(*x*<sup>(*k*)</sup>).
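For concreteness, the scheme above can be run directly in floating-point arithmetic (the paper uses multiple precision). The sketch below is an illustration, not the authors' program: the componentwise divided difference is a standard choice assumed here, since Section 2.2 is not reproduced in this excerpt, and linear solves replace the explicit inverses.

```python
import numpy as np

def divided_difference(F, x, y):
    """First-order divided difference [x, y; F]. Standard componentwise
    definition (an assumption; the paper's Section 2.2 may use an
    equivalent one): it satisfies [x, y; F](x - y) = F(x) - F(y)."""
    m = len(x)
    M = np.empty((m, m))
    for j in range(m):
        z1 = np.concatenate((y[:j], x[j:]))       # y_1..y_{j-1}, x_j..x_m
        z2 = np.concatenate((y[:j + 1], x[j + 1:]))
        M[:, j] = (F(z1) - F(z2)) / (x[j] - y[j])
    return M

def m52_step(F, x):
    """One iteration of the fifth-order method M5,2 shown above,
    with w = x + F(x)."""
    w = x + F(x)
    A = divided_difference(F, w, x)               # [w, x; F]
    y = x - np.linalg.solve(A, F(x))
    z = y - np.linalg.solve(A, F(y))
    B = divided_difference(F, x, y)               # [x, y; F]
    C = divided_difference(F, w, y)               # [w, y; F]
    return z - np.linalg.solve(B, A @ np.linalg.solve(C, F(z)))

# Two-equation test system of Example 4 below:
#   x^2 + sin x - e^y = 0,  3x - cos x - y = 0
F = lambda v: np.array([v[0]**2 + np.sin(v[0]) - np.exp(v[1]),
                        3*v[0] - np.cos(v[0]) - v[1]])
x = np.array([-1.0, -2.0])
for _ in range(20):
    if np.linalg.norm(F(x)) < 1e-10:
        break
    x = m52_step(F, x)
print(x)  # approaches (-0.90743..., -3.33806...)
```

Starting from *x*<sup>(0)</sup> = (−1, −2), a few iterations reproduce the solution quoted in Example 4 to double precision.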

*Sixth order method by Grau et al.* (M6,1):

$$\begin{aligned} y^{(k)} &= x^{(k)} - [u^{(k)}, v^{(k)}; F]^{-1} F(x^{(k)}) \\ z^{(k)} &= y^{(k)} - \left(2[y^{(k)}, x^{(k)}; F] - [u^{(k)}, v^{(k)}; F]\right)^{-1} F(y^{(k)}) \\ x^{(k+1)} &= z^{(k)} - \left(2[y^{(k)}, x^{(k)}; F] - [u^{(k)}, v^{(k)}; F]\right)^{-1} F(z^{(k)}) .\end{aligned}$$

*Wang-Zhang seventh order method* (M7,1):

$$\begin{aligned} y^{(k)} &= x^{(k)} - [u^{(k)}, x^{(k)}; F]^{-1} F(x^{(k)}), \\ z^{(k)} &= y^{(k)} - \left( [y^{(k)}, x^{(k)}; F] + [y^{(k)}, u^{(k)}; F] - [u^{(k)}, x^{(k)}; F] \right)^{-1} F(y^{(k)}), \\ x^{(k+1)} &= z^{(k)} - \left( [z^{(k)}, x^{(k)}; F] + [z^{(k)}, y^{(k)}; F] - [y^{(k)}, x^{(k)}; F] \right)^{-1} F(z^{(k)}), \end{aligned}$$

where *u*<sup>(*k*)</sup> = *x*<sup>(*k*)</sup> + *F*(*x*<sup>(*k*)</sup>).

*Sharma-Arora seventh order method* (M7,2):

$$\begin{aligned} y^{(k)} &= x^{(k)} - [w^{(k)}, x^{(k)}; F]^{-1} F(x^{(k)}), \\ z^{(k)} &= y^{(k)} - \left( 3I - [w^{(k)}, x^{(k)}; F]^{-1} \left( [y^{(k)}, x^{(k)}; F] + [y^{(k)}, w^{(k)}; F] \right) \right) \\ &\quad \times [w^{(k)}, x^{(k)}; F]^{-1} F(y^{(k)}), \\ x^{(k+1)} &= z^{(k)} - [z^{(k)}, y^{(k)}; F]^{-1} \left( [w^{(k)}, x^{(k)}; F] + [y^{(k)}, x^{(k)}; F] - [z^{(k)}, x^{(k)}; F] \right) \\ &\quad \times [w^{(k)}, x^{(k)}; F]^{-1} F(z^{(k)}), \end{aligned}$$

where *w*<sup>(*k*)</sup> = *x*<sup>(*k*)</sup> + *βF*(*x*<sup>(*k*)</sup>) and *β* is a non-zero constant.

Let us denote the efficiency indices of the methods M*p*,*i* by E*p*,*i* and their computational costs by C*p*,*i*. Then, using the definition of Section 2.3 and taking into account the above counts of evaluations and operations, we have that

$$C\_{2,1} = \frac{1}{3}m^3 + 3m^2 + \frac{2}{3}m \quad \text{and} \quad E\_{2,1} = 2^{1/C\_{2,1}}.\tag{40}$$

$$C\_{3,1} = \frac{1}{3}m^3 + 4m^2 + \frac{5}{3}m \quad \text{and} \quad E\_{3,1} = 3^{1/C\_{3,1}}.\tag{41}$$

$$C\_{4,1} = \frac{2}{3}m^3 + 8m^2 - \frac{2}{3}m \quad \text{and} \quad E\_{4,1} = 4^{1/C\_{4,1}}.\tag{42}$$

$$C\_{4,2} = \frac{2}{3}m^3 + 7m^2 + \frac{4}{3}m \quad \text{and} \quad E\_{4,2} = 4^{1/C\_{4,2}}.\tag{43}$$

$$C\_{4,3} = \frac{1}{3}m^3 + 10m^2 + \frac{2}{3}m \quad \text{and} \quad E\_{4,3} = 4^{1/C\_{4,3}}.\tag{44}$$

$$C\_{5,1} = \frac{1}{3}m^3 + 9m^2 + \frac{8}{3}m \quad \text{and} \quad E\_{5,1} = 5^{1/C\_{5,1}}.\tag{45}$$

$$C\_{5,2} = m^3 + 11m^2 \quad \text{and} \quad E\_{5,2} = 5^{1/C\_{5,2}}.\tag{46}$$

$$C\_{6,1} = \frac{2}{3}m^3 + 8m^2 + \frac{7}{3}m \quad \text{and} \quad E\_{6,1} = 6^{1/C\_{6,1}}.\tag{47}$$

*Symmetry* **2019**, *11*, 891

$$C\_{7,1} = m^3 + 13m^2 - 2m \quad \text{and} \quad E\_{7,1} = 7^{1/C\_{7,1}}.\tag{48}$$

$$C\_{7,2} = \frac{2}{3}m^3 + 17m^2 - \frac{2}{3}m \quad \text{and} \quad E\_{7,2} = 7^{1/C\_{7,2}}.\tag{49}$$
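The expressions (40)–(49) are straightforward to evaluate for any system size; the sketch below (an illustration, not the paper's code) tabulates the costs and the corresponding efficiency indices:

```python
from math import log

# Computational costs C_{p,i}(m) transcribed from Equations (40)-(49);
# keys are (method label, convergence order p).
costs = {
    ("2,1", 2): lambda m: m**3 / 3 + 3 * m**2 + 2 * m / 3,
    ("3,1", 3): lambda m: m**3 / 3 + 4 * m**2 + 5 * m / 3,
    ("4,1", 4): lambda m: 2 * m**3 / 3 + 8 * m**2 - 2 * m / 3,
    ("4,2", 4): lambda m: 2 * m**3 / 3 + 7 * m**2 + 4 * m / 3,
    ("4,3", 4): lambda m: m**3 / 3 + 10 * m**2 + 2 * m / 3,
    ("5,1", 5): lambda m: m**3 / 3 + 9 * m**2 + 8 * m / 3,
    ("5,2", 5): lambda m: m**3 + 11 * m**2,
    ("6,1", 6): lambda m: 2 * m**3 / 3 + 8 * m**2 + 7 * m / 3,
    ("7,1", 7): lambda m: m**3 + 13 * m**2 - 2 * m,
    ("7,2", 7): lambda m: 2 * m**3 / 3 + 17 * m**2 - 2 * m / 3,
}

def efficiency(p, cost):
    """E_{p,i} = p^(1/C_{p,i})."""
    return p ** (1.0 / cost)

for (name, p), C in costs.items():
    print(f"M{name}: C(50) = {C(50):9.1f}, E(50) = {efficiency(p, C(50)):.10f}")
```

For instance, at *m* = 50 the efficiency index of M5,1 is the largest of the ten, consistent with the comparisons that follow.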

To compare the efficiencies of the considered iterative methods, say M*p*,*i* against M*q*,*j*, we consider the ratio

$$\mathcal{R}\_{p,i;q,j} = \frac{\log E\_{p,i}}{\log E\_{q,j}} = \frac{C\_{q,j} \log(p)}{C\_{p,i} \log(q)}. \tag{50}$$

It is clear that when R*p*,*i*;*q*,*j* > 1 the iterative method M*p*,*i* is more efficient than M*q*,*j*.

M3,1 *versus* M2,1 *case*:

> For this case the ratio (50) is given by

$$\mathcal{R}\_{3,1;2,1} = \frac{\left(\frac{1}{3}m^3 + 3m^2 + \frac{2}{3}m\right)\log(3)}{\left(\frac{1}{3}m^3 + 4m^2 + \frac{5}{3}m\right)\log(2)}.$$

It can be easily shown that R3,1;2,1 > 1 for *m* ≥ 2. This implies that E3,1 > E2,1 for *m* ≥ 2. Thus, M3,1 is more efficient than M2,1 as we have stated in the introduction section.

M5,1 *versus* M2,1 *case*:

> The ratio (50) is given by

$$\mathcal{R}\_{5,1;2,1} = \frac{\left(\frac{1}{3}m^3 + 3m^2 + \frac{2}{3}m\right)\log(5)}{\left(\frac{1}{3}m^3 + 9m^2 + \frac{8}{3}m\right)\log(2)}.$$

It is easy to prove that R5,1;2,1 > 1 for *m* ≥ 6. Thus, we conclude that E5,1 > E2,1 for *m* ≥ 6.

M5,1 *versus* M3,1 *case*:

> The ratio (50) is given by

$$\mathcal{R}\_{5,1;3,1} = \frac{\left(\frac{1}{3}m^3 + 4m^2 + \frac{5}{3}m\right)\log(5)}{\left(\frac{1}{3}m^3 + 9m^2 + \frac{8}{3}m\right)\log(3)}.$$

It can be checked that R5,1;3,1 > 1 for *m* ≥ 21. Thus, we have that E5,1 > E3,1 for *m* ≥ 21.

M5,1 *versus* M4,1 *case*:

> In this case the ratio

$$\mathcal{R}\_{5,1;4,1} = \frac{\left(\frac{2}{3}m^3 + 8m^2 - \frac{2}{3}m\right)\log(5)}{\left(\frac{1}{3}m^3 + 9m^2 + \frac{8}{3}m\right)\log(4)} > 1,$$

for *m* ≥ 3, which implies that E5,1 > E4,1 for *m* ≥ 3.

M5,1 *versus* M4,2 *case*:

> Here the ratio

$$\mathcal{R}\_{5,1;4,2} = \frac{\left(\frac{2}{3}m^3 + 7m^2 + \frac{4}{3}m\right)\log(5)}{\left(\frac{1}{3}m^3 + 9m^2 + \frac{8}{3}m\right)\log(4)} > 1,$$

for *m* ≥ 3 which implies that E5,1 > E4,2 for *m* ≥ 3.

M5,1 *versus* M4,3 *case*:

> Here the ratio

$$\mathcal{R}\_{5,1;4,3} = \frac{\left(\frac{1}{3}m^3 + 10m^2 + \frac{2}{3}m\right)\log(5)}{\left(\frac{1}{3}m^3 + 9m^2 + \frac{8}{3}m\right)\log(4)} > 1,$$

for *m* ≥ 2 which implies that E5,1 > E4,3 for *m* ≥ 2.

M5,1 *versus* M5,2 *case*:

> In this case the ratio

$$\mathcal{R}\_{5,1;5,2} = \frac{m^3 + 11m^2}{\frac{1}{3}m^3 + 9m^2 + \frac{8}{3}m} > 1,$$

for *m* ≥ 2 which means E5,1 > E5,2 for *m* ≥ 2.

M5,1 *versus* M6,1 *case*:

> Here the ratio

$$\mathcal{R}\_{5,1;6,1} = \frac{\left(\frac{2}{3}m^3 + 8m^2 + \frac{7}{3}m\right)\log(5)}{\left(\frac{1}{3}m^3 + 9m^2 + \frac{8}{3}m\right)\log(6)} > 1,$$

for *m* ≥ 8 which means E5,1 > E6,1 for *m* ≥ 8.

M5,1 *versus* M7,1 *case*:

> Here also the ratio

$$\mathcal{R}\_{5,1;7,1} = \frac{\left(m^3 + 13m^2 - 2m\right)\log(5)}{\left(\frac{1}{3}m^3 + 9m^2 + \frac{8}{3}m\right)\log(7)} > 1,$$

for *m* ≥ 2 which means E5,1 > E7,1 for *m* ≥ 2.

M5,1 *versus* M7,2 *case*:

> Here also the ratio

$$\mathcal{R}\_{5,1;7,2} = \frac{\left(\frac{2}{3}m^3 + 17m^2 - \frac{2}{3}m\right)\log(5)}{\left(\frac{1}{3}m^3 + 9m^2 + \frac{8}{3}m\right)\log(7)} > 1,$$

for *m* ≥ 2 which means E5,1 > E7,2 for *m* ≥ 2.
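The thresholds claimed in the comparisons above can be checked numerically; the following sketch (not from the paper) searches for the smallest *m* at which each ratio (50) exceeds 1:

```python
from math import log

# Costs C_{p,i}(m) from Equations (40)-(49), keyed by method label.
C = {
    "2,1": lambda m: m**3 / 3 + 3 * m**2 + 2 * m / 3,
    "3,1": lambda m: m**3 / 3 + 4 * m**2 + 5 * m / 3,
    "4,1": lambda m: 2 * m**3 / 3 + 8 * m**2 - 2 * m / 3,
    "4,2": lambda m: 2 * m**3 / 3 + 7 * m**2 + 4 * m / 3,
    "4,3": lambda m: m**3 / 3 + 10 * m**2 + 2 * m / 3,
    "5,1": lambda m: m**3 / 3 + 9 * m**2 + 8 * m / 3,
    "5,2": lambda m: m**3 + 11 * m**2,
    "6,1": lambda m: 2 * m**3 / 3 + 8 * m**2 + 7 * m / 3,
    "7,1": lambda m: m**3 + 13 * m**2 - 2 * m,
    "7,2": lambda m: 2 * m**3 / 3 + 17 * m**2 - 2 * m / 3,
}
order = {k: int(k.split(",")[0]) for k in C}   # convergence order p

def ratio(pi, qj, m):
    """R_{p,i;q,j} of Equation (50)."""
    return C[qj](m) * log(order[pi]) / (C[pi](m) * log(order[qj]))

def threshold(pi, qj, mmax=1000):
    """Smallest m >= 2 with R_{p,i;q,j} > 1."""
    return next(m for m in range(2, mmax + 1) if ratio(pi, qj, m) > 1)

for qj in ["2,1", "3,1", "4,1", "4,2", "4,3", "5,2", "6,1", "7,1", "7,2"]:
    print(f"E5,1 > E{qj} for m >= {threshold('5,1', qj)}")
```

The computed thresholds agree with those stated in each case (*m* ≥ 6, 21, 3, 3, 2, 2, 8, 2 and 2, respectively).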

> The above results are summarized in the following theorem:

**Theorem 2.** *We have that* (i) E3,1 > E2,1 *for m* ≥ 2*;* (ii) E5,1 > E2,1 *for m* ≥ 6*;* (iii) E5,1 > E3,1 *for m* ≥ 21*;* (iv) E5,1 > E4,1 *and* E5,1 > E4,2 *for m* ≥ 3*;* (v) E5,1 > E4,3, E5,1 > E5,2, E5,1 > E7,1 *and* E5,1 > E7,2 *for m* ≥ 2*;* (vi) E5,1 > E6,1 *for m* ≥ 8*.*


#### **5. Complex Dynamics of Methods**

Our aim is to analyze the complex dynamics of the new method using the graphical tool 'basins of attraction' of the zeros of a polynomial *P*(*z*) in the complex plane. Visual display of the basins gives important information about the stability and convergence of iterative methods. This idea was initially introduced by Vrscay and Gilbert [26]. In recent times, many authors have used this concept in their work; see, for example, [27,28] and the references therein. We consider the method (22) to analyze the basins of attraction.

To start with, we take the initial point *z*<sub>0</sub> in a rectangular region *R* ⊂ C that contains all the zeros of a polynomial *P*(*z*). The iterative method, starting from a point *z*<sub>0</sub> in the rectangle, either converges to a zero of *P*(*z*) or eventually diverges. The stopping criterion for convergence is a tolerance of 10<sup>−3</sup> within a maximum of 25 iterations. If the required tolerance is not achieved in 25 iterations, we conclude that the iteration starting at *z*<sub>0</sub> does not converge to any root. The strategy adopted is as follows: a color is allocated to each initial point *z*<sub>0</sub> according to the basin of attraction of the zero to which the iteration starting at *z*<sub>0</sub> converges; if the iteration fails to converge within 25 iterations, the point is painted black.
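The painting procedure just described can be sketched as follows. Since method (22) is defined earlier in the paper, Newton's iteration is used here as a stand-in; the grid, the tolerance 10<sup>−3</sup> and the cap of 25 iterations follow the text.

```python
import numpy as np

# Sketch of the basin-painting strategy described above, with Newton's
# iteration z <- z - P(z)/P'(z) as a stand-in for method (22).
# Color k marks the basin of roots[k]; -1 marks the "black" points
# that fail to converge within maxit iterations.

def basins(P, dP, roots, extent=2.0, grid=400, tol=1e-3, maxit=25):
    xs = np.linspace(-extent, extent, grid)
    colors = np.full((grid, grid), -1, dtype=int)
    for a, re in enumerate(xs):
        for b, im in enumerate(xs):
            z = complex(re, im)
            for _ in range(maxit):
                d = dP(z)
                if d == 0:
                    break
                z = z - P(z) / d
                hit = [k for k, r in enumerate(roots) if abs(z - r) < tol]
                if hit:
                    colors[b, a] = hit[0]
                    break
    return colors

# Example 1 below: P1(z) = z^2 - 1 with zeros {+1, -1}.
img = basins(lambda z: z * z - 1, lambda z: 2 * z, roots=[1.0, -1.0], grid=80)
```

With `grid=400` this matches the resolution used in the figures; the small grid here merely keeps the sketch fast.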

We analyze the basins of attraction of the new method (for the choices *β* = 10<sup>−2</sup>, 10<sup>−4</sup>, 10<sup>−8</sup>) on the following three polynomials:

**Example 1.** *In the first case, consider the polynomial P<sub>1</sub>(z) = z<sup>2</sup> − 1, which has zeros {±1}. A grid of 400 × 400 points in a rectangle D = [−2, 2] × [−2, 2] ⊂ C is used for drawing the graphics. We assign the color red to each initial point in the basin of attraction of the zero 1 and the color green to the points in the basin of attraction of the zero −1. The graphics are shown in Figure 1 corresponding to β = 10<sup>−2</sup>, 10<sup>−4</sup>, 10<sup>−8</sup>. Observing the behavior of the basins of the new method, we conclude that the convergence domain becomes wider as the parameter β assumes smaller values, since the black zones (divergent points) shrink.*

**Figure 1.** Basins of attraction for polynomial *<sup>P</sup>*1(*z*).

**Example 2.** *Let us next consider the polynomial P<sub>2</sub>(z) = z<sup>3</sup> − z, having zeros {0, ±1}. To draw the dynamical view, we select the rectangle D = [−2, 2] × [−2, 2] ⊂ C containing 400 × 400 grid points, and allocate the colors green, blue and red to the points in the basins of attraction of 0, 1 and −1, respectively. Basins for this example are exhibited in Figure 2 corresponding to the parameter choices β = 10<sup>−2</sup>, 10<sup>−4</sup>, 10<sup>−8</sup> in the proposed method. Again, the basins become larger and larger for smaller values of β.*

**Figure 2.** Basins of attraction for polynomial *<sup>P</sup>*2(*z*).

**Example 3.** *Lastly, we consider the polynomial P<sub>3</sub>(z) = z<sup>5</sup> + 2z − 1, having zeros {−0.945068 ± 0.854518i, 0.701874 ± 0.879697i, 0.486389}. To draw the dynamical view, we select the rectangle D = [−2, 2] × [−2, 2] ⊂ C containing 400 × 400 grid points, and allocate the colors green, blue, red, yellow and pink to the points in the basins of attraction of 0.701874 + 0.879697i, −0.945068 − 0.854518i, 0.701874 − 0.879697i, 0.486389 and −0.945068 + 0.854518i, respectively. Basins for this example are exhibited in Figure 3 corresponding to the parameter choices β = 10<sup>−2</sup>, 10<sup>−4</sup>, 10<sup>−8</sup> in the proposed method. We observe that the basins get larger for smaller values of β.*

**Figure 3.** Basins of attraction for polynomial *<sup>P</sup>*3(*z*).

## **6. Numerical Tests**

In this section, numerical tests on various problems are performed to demonstrate the convergence behavior and computational efficiency of the method M5,1. A comparison of the performance of M5,1 with the existing methods M2,1, M3,1, M4,*j* (*j* = 1, 2, 3), M5,2, M6,1, M7,1 and M7,2 is also drawn. The programs are executed on a computer with an Intel(R) Core(TM) i5-4210U CPU @ 1.70 GHz 2.40 GHz (64-bit operating system, Microsoft Windows 10 Professional) and are compiled in *Mathematica* 10.0 using multiple-precision arithmetic. We record the number of iterations (*k*) required to converge to the solution such that the stopping condition

$$||\mathbf{x}^{(k+1)} - \mathbf{x}^{(k)}|| + ||F(\mathbf{x}^{(k)})|| < 10^{-300}$$

is satisfied. To verify the theoretical order of convergence, the computational order of convergence (*p<sub>c</sub>*) is computed using Formula (8). In the comparison of the considered methods, we also include the real CPU time elapsed during the execution of the program, as measured by the *Mathematica* command "TimeUsed[ ]".

The methods M2,1, M3,1, M4,3, M5,1 and M7,2 are tested with the parameter value *β* = 0.01. In the numerical experiments we consider the following five problems:

**Example 4.** *Let us consider the system of two equations (selected from [29]):*

$$\begin{cases} x^2 + \sin x - e^y = 0, \\ 3x - \cos x - y = 0. \end{cases}$$

*The initial guess assumed is x*<sup>(0)</sup> = {−1, −2}<sup>T</sup> *for obtaining the solution*

> *α* = {−0.90743021707369569 . . . , −3.3380632251862363 . . .}*<sup>T</sup>*.

**Example 5.** *Now we consider the mixed Hammerstein integral equation (see [4]):*

$$x(s) = 1 + \frac{1}{5} \int\_0^1 G(s, t)\, x(t)^3\, dt,$$

*wherein x* ∈ *C*[0, 1]*; s*, *t* ∈ [0, 1] *and the kernel G is*

$$G(s,t) = \begin{cases} (1-s)t, & t \le s, \\ s(1-t), & s \le t. \end{cases}$$

*The above equation is transformed to a finite-dimensional problem by using the Gauss-Legendre quadrature formula*

$$\int\_0^1 f(t)dt \approx \sum\_{j=1}^m \alpha\_j f(t\_j),$$

*where the weights α<sub>j</sub> and the abscissas t<sub>j</sub> are determined for m* = 8 *by the Gauss-Legendre quadrature formula. Then, setting x*(*t<sub>i</sub>*) = *x<sub>i</sub>*, *i* = 1, 2, . . . , 8*, we obtain the following system of nonlinear equations*

$$5x\_i - 5 - \sum\_{j=1}^{8} a\_{ij} x\_j^3 = 0, \quad i = 1, 2, \ldots, 8,$$

*where*

$$a\_{ij} = \begin{cases} \alpha\_j t\_j (1 - t\_i) & \text{if } j \le i, \\ \alpha\_j t\_i (1 - t\_j) & \text{if } i < j, \end{cases} \quad i = 1, 2, \ldots, 8,$$

*wherein the abscissas t<sub>j</sub> and the weights α<sub>j</sub> are known and are given in Table 1 for m* = 8*. The initial approximation assumed is*

$$x^{(0)} = \{-1, \, -1, \, -1, \, -1, \, -1, \, -1, \, -1, \, -1\}^T$$

*and the solution of this problem is:*

$$\begin{aligned} \alpha &= \{1.002096245031\ldots, \, 1.009900316187\ldots, \, 1.019726960993\ldots, \, 1.026435743030\ldots, \\ &\quad 1.026435743030\ldots, \, 1.019726960993\ldots, \, 1.009900316187\ldots, \, 1.002096245031\ldots\}^T. \end{aligned}$$
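As a cross-check (a sketch, not the paper's program), the discretized system can be assembled with NumPy's Gauss-Legendre nodes and solved by fixed-point iteration on *x<sub>i</sub>* = 1 + (1/5) Σ*<sub>j</sub> a<sub>ij</sub> x<sub>j</sub>*<sup>3</sup>, which is a contraction here:

```python
import numpy as np

# leggauss gives nodes/weights on [-1, 1]; map them to [0, 1].
nodes, weights = np.polynomial.legendre.leggauss(8)
t = (nodes + 1) / 2              # abscissas t_j
alpha = weights / 2              # weights alpha_j

# a_ij = alpha_j t_j (1 - t_i) if j <= i, else alpha_j t_i (1 - t_j)
i = np.arange(8)[:, None]
j = np.arange(8)[None, :]
A = np.where(j <= i,
             alpha * t * (1 - t[:, None]),
             alpha * t[:, None] * (1 - t))

# Fixed-point iteration on x = 1 + (1/5) A x^3.
x = np.ones(8)
for _ in range(100):
    x = 1 + A @ x**3 / 5
print(x[:2])   # ~ [1.00209625, 1.00990032]
```

The iterates converge to the symmetric solution quoted above to double precision.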

**Table 1.** Weights and abscissas of Gauss-Legendre quadrature formula for *m* = 8.


**Example 6.** *Consider the system of 20 equations (see [29]):*

$$\tan^{-1}\left(x\_{i}\right) + 1 - 2\sum\_{j=1, j\neq i}^{20} x\_{j}^{2} = 0, \quad 1 \le i \le 20.$$

*This problem has the following two solutions:*

> *α*<sub>1</sub> = {0.1757683176158 . . . , 0.1757683176158 . . . , · · · , 0.1757683176158 . . .}<sup>T</sup>

*and* *α*<sub>2</sub> = {−0.14968543422 . . . , −0.14968543422 . . . , · · · , −0.14968543422 . . .}<sup>T</sup>*.*

*We intend to find the first solution and so choose the initial value x*<sup>(0)</sup> = {0.5, 0.5, 0.5, · · · , 0.5}<sup>T</sup>*.*

**Example 7.** *Consider the boundary value problem:*

$$y'' + y^3 = 0, \quad y(0) = 0, \quad y(1) = 1.$$

*Assuming the following partitioning of the interval* [0, 1]*:*

*u*<sub>0</sub> = 0 < *u*<sub>1</sub> < *u*<sub>2</sub> < · · · < *u*<sub>*n*−1</sub> < *u<sub>n</sub>* = 1, *u*<sub>*j*+1</sub> = *u<sub>j</sub>* + *h*, *h* = 1/*n*.

*Setting y*<sub>0</sub> = *y*(*u*<sub>0</sub>) = 0, *y*<sub>1</sub> = *y*(*u*<sub>1</sub>), · · · , *y*<sub>*n*−1</sub> = *y*(*u*<sub>*n*−1</sub>), *y<sub>n</sub>* = *y*(*u<sub>n</sub>*) = 1*. If we discretize the problem by using the finite difference approximation for the second derivative*

$$y\_{m}^{\prime\prime} = \frac{y\_{m-1} - 2y\_m + y\_{m+1}}{h^2}, \quad m = 1, 2, 3, \dots, n - 1,$$

*we obtain a system of n* − 1 *equations in n* − 1 *variables:*

$$y\_{m-1} - 2y\_m + y\_{m+1} + h^2 y\_m^3 = 0, \quad m = 1, 2, 3, \dots, n - 1.$$

*In particular, let us solve this problem for n* = 51, *that is for m* = 50 *by choosing y*(0) = {−1, −1, −1, ··· , −<sup>1</sup>}*<sup>T</sup> as the initial value. The solution vector α of this problem is*
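As an independent check (a sketch, not the paper's program), the discretized system can be solved with plain Newton iteration and its tridiagonal Jacobian; the linear initial profile used here is an assumption made for robustness, whereas the text above starts from −1:

```python
import numpy as np

# Sketch: solve y_{m-1} - 2 y_m + y_{m+1} + h^2 y_m^3 = 0, m = 1..n-1,
# for n = 51 (50 interior unknowns) with Newton's method.
n = 51
h = 1.0 / n
u = np.linspace(0.0, 1.0, n + 1)
y = u[1:-1].copy()                        # initial guess: linear profile

def residual(y):
    full = np.concatenate(([0.0], y, [1.0]))      # y_0 = 0, y_n = 1
    return full[:-2] - 2 * full[1:-1] + full[2:] + h**2 * full[1:-1]**3

for _ in range(20):
    r = residual(y)
    if np.linalg.norm(r) < 1e-12:
        break
    # Tridiagonal Jacobian: off-diagonals 1, diagonal -2 + 3 h^2 y_m^2.
    J = (np.diag(-2.0 + 3.0 * h**2 * y**2)
         + np.diag(np.ones(y.size - 1), 1)
         + np.diag(np.ones(y.size - 1), -1))
    y = y - np.linalg.solve(J, r)
```

Newton converges in a few steps; the computed profile increases from 0 to 1, as expected for this boundary value problem.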


**Example 8.** *Consider the following Burgers' equation (see [30]):*

$$\frac{\partial^2 f}{\partial u^2} + f\frac{\partial f}{\partial u} - \frac{\partial f}{\partial t} + g(u, t) = 0, \quad (u, t) \in [0, 1]^2,$$

*where g*(*u*, *t*) = −10*e*<sup>−2*t*</sup>[*e<sup>t</sup>*(2 − *u* + *u*<sup>2</sup>) + 10*u*(1 − 3*u* + 2*u*<sup>2</sup>)] *and the function f* = *f*(*u*, *t*) *satisfies the boundary conditions*

$$f(0,t) = f(1,t) = 0, \; f(u,0) = 10u(u-1) \; and \; f(u,1) = 10u(u-1)/e.$$

*Assuming the following partitioning of the domain* [0, 1]<sup>2</sup>*:*

$$\begin{aligned} 0 &= u\_0 < u\_1 < u\_2 < \dots < u\_{n-1} < u\_n = 1, \quad u\_{k+1} = u\_k + h, \\ 0 &= t\_0 < t\_1 < t\_2 < \dots < t\_{n-1} < t\_n = 1, \quad t\_{l+1} = t\_l + h, \quad h = 1/n. \end{aligned}$$

*Let us define f<sub>k,l</sub>* = *f*(*u<sub>k</sub>*, *t<sub>l</sub>*) *and g<sub>k,l</sub>* = *g*(*u<sub>k</sub>*, *t<sub>l</sub>*) *for k*, *l* = 0, 1, 2, . . . , *n. Then the boundary conditions are f*<sub>0,*l*</sub> = *f*(*u*<sub>0</sub>, *t<sub>l</sub>*) = 0*, f<sub>n,l</sub>* = *f*(*u<sub>n</sub>*, *t<sub>l</sub>*) = 0*, f*<sub>*k*,0</sub> = *f*(*u<sub>k</sub>*, *t*<sub>0</sub>) = 10*u<sub>k</sub>*(*u<sub>k</sub>* − 1) *and f<sub>k,n</sub>* = *f*(*u<sub>k</sub>*, *t<sub>n</sub>*) = 10*u<sub>k</sub>*(*u<sub>k</sub>* − 1)/*e. If we discretize Burgers' equation by using the numerical formulas for the partial derivatives*

$$\left(\frac{\partial f}{\partial u}\right)\_{i,j} = \frac{f\_{i+1,j} - f\_{i-1,j}}{2h}, \quad \left(\frac{\partial f}{\partial t}\right)\_{i,j} = \frac{f\_{i,j+1} - f\_{i,j-1}}{2h},$$


$$\left(\frac{\partial^2 f}{\partial u^2}\right)\_{i,j} = \frac{f\_{i+1,j} - 2f\_{i,j} + f\_{i-1,j}}{h^2}, \quad i, j = 1, 2, \ldots, n - 1,$$

*then we obtain the following system of* (*n* − 1)<sup>2</sup> *nonlinear equations in* (*n* − 1)<sup>2</sup> *variables:*

$$f\_{i-1,j}(2 - hf\_{i,j}) + h(f\_{i,j-1} - f\_{i,j+1}) - f\_{i,j}(4 - hf\_{i+1,j}) + 2f\_{i+1,j} + 2h^2g\_{i,j} = 0,\tag{51}$$

*where i*, *j* = 1, 2, . . . , *n* − 1*. In particular, we solve this nonlinear system for n* = 11*, so that m* = 100*, by selecting f<sub>i,j</sub>* = 1 *for i*, *j* = 1, 2, . . . , 10 *as the initial value. The solution of this system of nonlinear equations is given in Table 2.*
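For completeness, here is a sketch (not the paper's program) that assembles system (51) for *n* = 11 and solves it with Newton's method using a forward-difference Jacobian:

```python
import numpy as np

# Sketch: assemble system (51) for n = 11 (m = 100 unknowns) and solve
# it by Newton's method from the initial value f_ij = 1.
n = 11
h = 1.0 / n
u = np.linspace(0.0, 1.0, n + 1)
t = np.linspace(0.0, 1.0, n + 1)

def g(u, t):
    return -10 * np.exp(-2 * t) * (np.exp(t) * (2 - u + u**2)
                                   + 10 * u * (1 - 3 * u + 2 * u**2))

I, J = np.meshgrid(np.arange(1, n), np.arange(1, n), indexing="ij")

def residual(interior):
    # Embed the interior unknowns in the full grid with the stated
    # boundary conditions, then evaluate Equation (51).
    f = np.zeros((n + 1, n + 1))
    f[1:n, 1:n] = interior.reshape(n - 1, n - 1)
    f[:, 0] = 10 * u * (u - 1)            # f(u, 0)
    f[:, n] = 10 * u * (u - 1) / np.e     # f(u, 1)
    f[0, :] = f[n, :] = 0.0               # f(0, t) = f(1, t) = 0
    return (f[I-1, J] * (2 - h * f[I, J]) + h * (f[I, J-1] - f[I, J+1])
            - f[I, J] * (4 - h * f[I+1, J]) + 2 * f[I+1, J]
            + 2 * h**2 * g(u[I], t[J])).ravel()

x = np.ones((n - 1) ** 2)                 # initial value f_ij = 1
for _ in range(30):
    r = residual(x)
    if np.linalg.norm(r) < 1e-10:
        break
    Jac = np.empty((x.size, x.size))      # numerical Jacobian, column-wise
    for k in range(x.size):
        xp = x.copy()
        xp[k] += 1e-7
        Jac[:, k] = (residual(xp) - r) / 1e-7
    x = x - np.linalg.solve(Jac, r)
```

The forward-difference Jacobian is an expedient for this sketch; the paper's methods of course avoid derivative information altogether.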

**Table 2.** The solution of system (51) with the unknowns *fi*,*j* for *i*, *j* = 1, 2, ......10.


In Tables 3–7 we present the numerical results produced by the methods M2,1, M3,1, M4,*j* (*j* = 1, 2, 3), M5,1, M5,2, M6,1, M7,1 and M7,2. Displayed in each table are the errors ||*x*<sup>(*k*+1)</sup> − *x*<sup>(*k*)</sup>|| of the first three consecutive approximations to the corresponding solution of Examples 4–8, the number of iterations (*k*) needed to converge to the required solution, the computational order of convergence *p<sub>c</sub>*, the computational cost C*p*,*i*, the computational efficiency E*p*,*i* and the elapsed CPU time (e-time) measured in seconds. In each table, *A*(−*h*) means *A* × 10<sup>−*h*</sup>. Numerical values of the computational cost and efficiency are obtained from the corresponding expressions (40)–(49). The e-time is calculated by averaging 50 runs of the program, using ||*x*<sup>(*k*+1)</sup> − *x*<sup>(*k*)</sup>|| + ||*F*(*x*<sup>(*k*)</sup>)|| < 10<sup>−300</sup> as the stopping condition in each run.

**Table 3.** Comparison of performance of methods for Example 4.



**Table 4.** Comparison of performance of methods for Example 5.

**Table 5.** Comparison of performance of methods for Example 6.


\* The methods M4,1 and M7,1 converge to the solution *α*2.

**Table 6.** Comparison of performance of methods for Example 7.



**Table 7.** Comparison of performance of methods for Example 8.

From the numerical results displayed in Tables 3–7, it can be observed that, like the existing methods, the proposed method shows consistent convergence behavior. The seventh order methods produce approximations of high accuracy due to their higher order of convergence, but they are less efficient. In Example 6, M4,1 and M7,1 do not converge to the required solution *α*1; instead, they converge to the solution *α*2, which is far from the chosen initial approximation. Calculation of the computational order of convergence shows that the order of the new method is preserved in all the numerical examples. However, this is not true for some existing methods, e.g., M4,*j* (*j* = 1, 2, 3), M5,2, M6,1, M7,1 and M7,2 in Example 8. The values of the efficiency index shown in the penultimate column of each table also verify the theoretical results stated in Theorem 2. The efficiency results are in complete agreement with the CPU time used in the execution of the program, since a method with larger efficiency uses less computing time than a method with smaller efficiency. Moreover, the proposed method uses less CPU time than the existing higher order methods, which points to its dominance. In fact, the new method is especially efficient for large systems of nonlinear equations.
