*Article* **Convergence and Stability of a Parametric Class of Iterative Schemes for Solving Nonlinear Systems**

**Alicia Cordero, Eva G. Villalba , Juan R. Torregrosa \* and Paula Triguero-Navarro**

Multidisciplinary Institute of Mathematics, Universitat Politènica de València, 46022 València, Spain; acordero@mat.upv.es (A.C.); egarvil@posgrado.upv.es (E.G.V.); ptrinav@posgrado.upv.es (P.T.-N.) **\*** Correspondence: jrtorre@mat.upv.es

**Abstract:** A new parametric class of iterative schemes for solving nonlinear systems is designed. Third- or fourth-order convergence, depending on the value of the parameter, is proven. The analysis of the dynamical behavior of this class in the context of scalar nonlinear equations is presented. This study gives us important information about the stability and reliability of the members of the family. The numerical results obtained by applying different elements of the family for solving the Hammerstein integral equation and Fisher's equation confirm the theoretical results.

**Keywords:** nonlinear system; iterative method; divided difference operator; stability; parameter plane; dynamical plane

#### **1. Design of a Parametric Family of Iterative Methods**

The need to find a solution $\bar{x}$ of equations or systems of nonlinear equations of the form $F(x) = 0$, where $F : D \subseteq \mathbb{R}^n \to \mathbb{R}^n$, $n \geq 1$, is present in many problems of applied mathematics as a basis for solving other more complex ones. In general, it is not possible to find the exact solution of this type of equation, so iterative methods are required in order to approximate the desired solution.

The essence of these methods is to find, through an iterative process starting from an initial approximation $x^{(0)}$ close to a solution $\bar{x}$, a sequence $\{x^{(k)}\}$ of approximations such that, under different requirements, $\lim_{k \to \infty} x^{(k)} = \bar{x}$.

It is well known that one of the most used iterative methods, due to its simplicity and efficiency, is Newton's scheme, whose iterative expression is

$$\mathbf{x}^{(k+1)} = \mathbf{x}^{(k)} - [F'(\mathbf{x}^{(k)})]^{-1} F(\mathbf{x}^{(k)}), \quad k = 0, 1, 2, \dots \tag{1}$$

where $F'(x^{(k)})$ denotes the derivative or the Jacobian matrix of function $F$ evaluated at the $k$th iterate $x^{(k)}$. In addition, this method has great importance in the study of iterative methods because it presents quadratic convergence under certain conditions and has great accessibility; that is, the region of initial estimates $x^{(0)}$ for which the method converges is wide, at least for polynomials or polynomial systems.
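For reference, iteration (1) can be sketched in a few lines. This is a minimal NumPy sketch, not code from the paper; the helper name `newton` and the test system are ours. Note that the linear system $F'(x^{(k)})s = F(x^{(k)})$ is solved instead of forming the inverse explicitly:

```python
import numpy as np

def newton(F, J, x, tol=1e-12, max_iter=50):
    """Newton's method (1): solve J(x_k) s = F(x_k) and set x_{k+1} = x_k - s."""
    for _ in range(max_iter):
        s = np.linalg.solve(J(x), F(x))   # avoids computing [F'(x)]^{-1}
        x = x - s
        if np.linalg.norm(s) < tol:
            break
    return x
```

For instance, for the hypothetical system $F(x) = (x_1^2 + x_2^2 - 4,\ x_1 - x_2)$, the iteration converges quadratically to $(\sqrt{2}, \sqrt{2})$ from a nearby initial guess.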

Based on Newton-type methods and by using different procedures, many iterative schemes for solving $F(x) = 0$ have been presented in recent years. Refs. [1,2] compile many of the methods recently designed to solve this type of problem and provide good overviews of this area of research.

In this paper, we use a convex combination of the methods presented by Chun et al. in [3] and Maheswari in [4]. As the mentioned schemes are designed for nonlinear equations and they have as the first step Newton's method, we use the following algebraic manipulation in order to extend the mentioned schemes to nonlinear systems:

**Citation:** Cordero, A.; Villalba, E.G.; Torregrosa, J.R.; Triguero-Navarro, P. Convergence and Stability of a Parametric Class of Iterative Schemes for Solving Nonlinear Systems. *Mathematics* **2021**, *9*, 86. https://dx.doi.org/10.3390/math9010086

Received: 1 December 2020; Accepted: 24 December 2020; Published: 3 January 2021

$$\frac{f(\mathbf{y}^{(k)})}{f(\mathbf{x}^{(k)})} = \frac{f(\mathbf{y}^{(k)})}{(\mathbf{x}^{(k)} - \mathbf{y}^{(k)})f'(\mathbf{x}^{(k)})} = \frac{f(\mathbf{y}^{(k)}) - f(\mathbf{x}^{(k)}) + f(\mathbf{x}^{(k)})}{(\mathbf{x}^{(k)} - \mathbf{y}^{(k)})f'(\mathbf{x}^{(k)})} = -\frac{[\mathbf{x}^{(k)}, \mathbf{y}^{(k)}; f]}{f'(\mathbf{x}^{(k)})} + 1.$$

Therefore, the parametric family of iterative methods for solving nonlinear systems that we propose has the following iterative expression:

$$\begin{cases} y^{(k)} = x^{(k)} - F'(x^{(k)})^{-1} F(x^{(k)}), \\[4pt] H(x^{(k)}, y^{(k)}, \gamma) = I + \dfrac{\gamma}{2}I + (1-\gamma)B_k^{-1} - (1-\gamma)B_k(2I - B_k) - \dfrac{\gamma}{2}F'(x^{(k)})^{-1}F'(y^{(k)}), \\[4pt] x^{(k+1)} = x^{(k)} - H(x^{(k)}, y^{(k)}, \gamma)F'(x^{(k)})^{-1}F(x^{(k)}), \quad k = 0, 1, 2, \ldots, \end{cases} \tag{2}$$

where $x^{(0)}$ is the initial estimation, $B_k = F'(x^{(k)})^{-1}P^{(k)}$, and $P^{(k)} = [x^{(k)}, y^{(k)}; F]$ is the divided difference operator defined as

$$[\mathbf{x}, \mathbf{y}; F](\mathbf{x} - \mathbf{y}) = F(\mathbf{x}) - F(\mathbf{y}), \ \mathbf{x}, \mathbf{y} \in \mathbb{R}^n.$$
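For illustration, one step of family (2) can be sketched as follows. This is a minimal NumPy sketch under stated assumptions, not the authors' implementation: it realizes the divided difference operator through the standard component-wise (Ortega–Rheinboldt) first-order formula, which satisfies the defining relation above by telescoping, and the helper names `divided_difference` and `family_step` are ours.

```python
import numpy as np

def divided_difference(F, x, y):
    """Component-wise first-order divided difference [x, y; F]; a telescoping
    argument shows that [x, y; F](x - y) = F(x) - F(y)."""
    n = len(x)
    M = np.empty((n, n))
    for j in range(n):
        u = np.concatenate((x[:j + 1], y[j + 1:]))   # x-entries up to index j
        v = np.concatenate((x[:j], y[j:]))           # x-entries up to index j-1
        M[:, j] = (F(u) - F(v)) / (x[j] - y[j])
    return M

def family_step(F, Jac, x, gamma):
    """One iteration of the parametric family (2)."""
    n = len(x)
    Jx = Jac(x)
    newton = np.linalg.solve(Jx, F(x))               # F'(x)^{-1} F(x)
    y = x - newton                                   # first (Newton) step
    Bk = np.linalg.solve(Jx, divided_difference(F, x, y))
    I = np.eye(n)
    H = (I + gamma / 2 * I
         + (1 - gamma) * np.linalg.inv(Bk)
         - (1 - gamma) * Bk @ (2 * I - Bk)
         - gamma / 2 * np.linalg.solve(Jx, Jac(y)))
    return x - H @ newton
```

With the fourth-order member $\gamma = 0$, a handful of iterations from a moderate initial guess already reaches near machine accuracy on simple test systems.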

The rest of the paper is organized as follows: Section 2 is devoted to analyzing the convergence of family (2) in terms of the values of parameter *γ*. In Section 3, we study the dynamical behavior of the class on quadratic polynomials in the context of scalar equations. This study allows for selecting the most stable members of the family. In the numerical section (Section 4), we apply the proposed class to different examples, such as the Hammerstein integral equation and Fisher's equation, in order to confirm the theoretical results obtained in Sections 2 and 3. We finish the work with some conclusions and the references used in it.

#### **2. Convergence Analysis**

Let us consider a function $F : D \subseteq \mathbb{R}^n \to \mathbb{R}^n$, differentiable in the convex set $D \subset \mathbb{R}^n$ which contains a solution $\bar{x}$ of the nonlinear equation $F(x) = 0$. From the Genocchi–Hermite formula (see [5]) of the divided difference operator

$$[x + h, x; F] = \int_0^1 F'(x + th)\,dt \tag{3}$$

and by performing the Taylor expansion of $F'(x + th)$ about the point $x$ and integrating, we obtain the following development:

$$[x + h, x; F] = F'(x) + \frac{1}{2}F''(x)h + \frac{1}{6}F'''(x)h^2 + O(h^3), \tag{4}$$

which we will use in the proof of the following result, where the order of convergence of the family is established.

**Theorem 1.** *Let $F : D \subseteq \mathbb{R}^n \longrightarrow \mathbb{R}^n$ be a sufficiently Fréchet differentiable function in a convex neighborhood $D$ of $\bar{x}$, with $F(\bar{x}) = 0$. Suppose that the Jacobian matrix $F'(x)$ is continuous and non-singular at $\bar{x}$. Then, taking an initial estimate $x^{(0)}$ close enough to $\bar{x}$, the sequence of iterates $\{x^{(k)}\}$ generated by family (2) converges to $\bar{x}$ with the following error equation:*

$$e_{k+1} = \frac{\gamma}{2}(C_3 + 4C_2^2)e_k^3 + \left(\gamma C_4 + (4 - 13\gamma)C_2^3 + 3\gamma C_2C_3 + \left(-1 + \frac{5}{2}\gamma\right)C_3C_2\right)e_k^4 + O(e_k^5), \tag{5}$$

*where $C_j = \frac{1}{j!}F'(\bar{x})^{-1}F^{(j)}(\bar{x}) \in L_j(\mathbb{R}^n, \mathbb{R}^n)$, $L_j(\mathbb{R}^n, \mathbb{R}^n)$ being the set of bounded $j$-linear functions, $j = 2, 3, \ldots$, and $e_k = x^{(k)} - \bar{x}$. In the particular case in which $\gamma = 0$, the error equation is*

$$e_{k+1} = (4C_2^3 - C_3C_2)e_k^4 + O(e_k^5), \tag{6}$$

*and so the method has an order of convergence four.*

**Proof.** We consider the Taylor expansion of $F(x^{(k)})$ around $\bar{x}$:

$$F(x^{(k)}) = \Gamma\left(e_k + C_2 e_k^2 + C_3 e_k^3 + C_4 e_k^4 + C_5 e_k^5 + O(e_k^6)\right), \tag{7}$$

where $\Gamma = F'(\bar{x})$, $e_k = x^{(k)} - \bar{x}$ and $C_j = \frac{1}{j!}F'(\bar{x})^{-1}F^{(j)}(\bar{x}) \in L_j(\mathbb{R}^n, \mathbb{R}^n)$, $j = 2, 3, \ldots$ In a similar way, the derivatives of $F$ at $x^{(k)}$ take the form:

$$\begin{aligned} F'(x^{(k)}) &= \Gamma\left[I + 2C_2 e_k + 3C_3 e_k^2 + 4C_4 e_k^3 + 5C_5 e_k^4\right] + O(e_k^5), \\ F''(x^{(k)}) &= \Gamma\left[2C_2 + 6C_3 e_k + 12C_4 e_k^2 + 20C_5 e_k^3\right] + O(e_k^4), \\ F'''(x^{(k)}) &= \Gamma\left[6C_3 + 24C_4 e_k + 60C_5 e_k^2\right] + O(e_k^3). \end{aligned} \tag{8}$$

From the development of $F'(x^{(k)})$ around $\bar{x}$, we calculate the inverse

$$F'(x^{(k)})^{-1} = \left[I + X_2 e_k + X_3 e_k^2 + X_4 e_k^3 + X_5 e_k^4\right]\Gamma^{-1} + O(e_k^5), \tag{9}$$

with $X_2$, $X_3$, $X_4$ and $X_5$ satisfying $F'(x^{(k)})^{-1}F'(x^{(k)}) = I$. Therefore,


$$F'(x^{(k)})^{-1}F(x^{(k)}) = e_k - C_2 e_k^2 + (-2C_3 + 2C_2^2)e_k^3 + (-3C_4 + 4C_2C_3 + 3C_3C_2 - 4C_2^3)e_k^4 + O(e_k^5). \tag{10}$$

Then, we obtain the error equation of the first step of the parametric family (2):

$$\begin{aligned} y^{(k)} - \bar{x} &= x^{(k)} - \bar{x} - F'(x^{(k)})^{-1}F(x^{(k)}) \\ &= C_2 e_k^2 + (2C_3 - 2C_2^2)e_k^3 + (3C_4 - 4C_2C_3 - 3C_3C_2 + 4C_2^3)e_k^4 + O(e_k^5). \end{aligned} \tag{11}$$

Substituting this expression in the Taylor expansion of $F(y^{(k)})$ around $\bar{x}$, we get:

$$F(y^{(k)}) = \Gamma\left[C_2 e_k^2 + (2C_3 - 2C_2^2)e_k^3 + (3C_4 - 4C_2C_3 - 3C_3C_2 + 5C_2^3)e_k^4\right] + O(e_k^5). \tag{12}$$

Furthermore,

$$F'(y^{(k)}) = \Gamma\left[I + 2C_2^2 e_k^2 + (4C_2C_3 - 4C_2^3)e_k^3 + (6C_2C_4 - 8C_2^2C_3 - 6C_2C_3C_2 + 8C_2^4 + 3C_3C_2^2)e_k^4\right] + O(e_k^5). \tag{13}$$

Multiplying expressions (9) and (13), we obtain:

$$\begin{aligned} F'(x^{(k)})^{-1}F'(y^{(k)}) = {}& I - 2C_2 e_k + (-3C_3 + 6C_2^2)e_k^2 + (-4C_4 + 10C_2C_3 + 6C_3C_2 - 16C_2^3)e_k^3 \\ &+ (-5C_5 + 14C_2C_4 + 9C_3^2 - 28C_2^2C_3 + 8C_4C_2 - 18C_2C_3C_2 - 15C_3C_2^2 + 40C_2^4)e_k^4 \\ &+ O(e_k^5). \end{aligned} \tag{14}$$

To obtain the development of the divided difference operator of (2), we use the Taylor series expansion (4), considering in this case $x + h = y$ and, thus, $h = y - x = -F'(x^{(k)})^{-1}F(x^{(k)})$. Therefore, substituting (8) and (10) in (4), we obtain

$$\begin{aligned} [x^{(k)}, y^{(k)}; F] = {}& \Gamma[I + C_2 e_k + (C_3 + C_2^2)e_k^2 + (C_4 + C_3C_2 + 2C_2C_3 - 2C_2^3)e_k^3 \\ &+ (C_5 + C_4C_2 + 2C_3^2 - C_3C_2^2 + 3C_2C_4 - 4C_2^2C_3 - 3C_2C_3C_2 + 4C_2^4)e_k^4] + O(e_k^5). \end{aligned} \tag{15}$$

To calculate the inverse of this operator, we search

$$[x^{(k)}, y^{(k)}; F]^{-1} = \left[I + Y_2 e_k + Y_3 e_k^2 + Y_4 e_k^3 + Y_5 e_k^4\right]\Gamma^{-1} + O(e_k^5), \tag{16}$$

with $Y_2$, $Y_3$, $Y_4$ and $Y_5$ satisfying $[x^{(k)}, y^{(k)}; F]^{-1}[x^{(k)}, y^{(k)}; F] = I$. Thus,


$$\begin{aligned} I - B_k(2I - B_k) = (I - B_k)^2 = {}& C_2^2 e_k^2 + (2C_2C_3 + 2C_3C_2 - 6C_2^3)e_k^3 \\ &+ (3C_2C_4 - 10C_2C_3C_2 - 12C_2^2C_3 + 25C_2^4 + 4C_3^2 - 10C_3C_2^2 + 3C_4C_2)e_k^4 + O(e_k^5), \end{aligned} \tag{17}$$

and using (8) and (16), we calculate $B_k^{-1}$:

$$\begin{aligned} B_k^{-1} = {}& I + C_2 e_k + (2C_3 - 2C_2^2)e_k^2 + (3C_4 - 4C_2C_3 - 2C_3C_2 + 3C_2^3)e_k^3 \\ &+ (4C_5 - 6C_2C_4 - 2C_4C_2 - 4C_3^2 - 3C_2^4 + 2C_3C_2^2 + 3C_2C_3C_2 + 6C_2^2C_3)e_k^4 + O(e_k^5). \end{aligned} \tag{18}$$

Substituting the expressions (10), (14), (17), and (18) in the scheme (2), we get the error equation of the parametric family

$$e_{k+1} = x^{(k+1)} - \bar{x} = \frac{\gamma}{2}(C_3 + 4C_2^2)e_k^3 + \left(\gamma C_4 + (4 - 13\gamma)C_2^3 + 3\gamma C_2C_3 + \left(-1 + \frac{5}{2}\gamma\right)C_3C_2\right)e_k^4 + O(e_k^5). \tag{19}$$

Finally, from the error equation, we conclude that the parametric family (2) has order 3 for all $\gamma \neq 0$ and order 4 for $\gamma = 0$, the error equation in this last case being

$$e_{k+1} = (4C_2^3 - C_3C_2)e_k^4 + O(e_k^5). \tag{20}$$
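The orders established in Theorem 1 can be checked numerically in the scalar case, where $B_k$ reduces to $[x^{(k)}, y^{(k)}; f]/f'(x^{(k)})$. The sketch below is our own (the test function $f(x) = x^3 - 2$ and the helper names are assumptions, not from the paper); it estimates the computational order of convergence $p \approx \ln(e_2/e_1)/\ln(e_1/e_0)$ from three consecutive errors, expecting values close to 4 for $\gamma = 0$ and close to 3 otherwise:

```python
import numpy as np

def family_scalar(f, df, x, gamma):
    """Scalar form of family (2); B reduces to [x, y; f] / f'(x)."""
    fx, dfx = f(x), df(x)
    y = x - fx / dfx                                  # Newton step
    B = (fx - f(y)) / ((x - y) * dfx)                 # divided difference over f'(x)
    H = (1 + gamma / 2 + (1 - gamma) / B
         - (1 - gamma) * B * (2 - B)
         - gamma / 2 * df(y) / dfx)
    return x - H * fx / dfx

def order_estimate(gamma, x0=1.46):
    """p ~ ln(e2/e1)/ln(e1/e0) from three consecutive errors, f(x) = x^3 - 2."""
    f, df = (lambda x: x**3 - 2.0), (lambda x: 3.0 * x**2)
    root = 2.0 ** (1.0 / 3.0)
    errs = [abs(x0 - root)]
    x = x0
    for _ in range(2):
        x = family_scalar(f, df, x, gamma)
        errs.append(abs(x - root))
    return np.log(errs[2] / errs[1]) / np.log(errs[1] / errs[0])
```

The estimate is only asymptotic, so values within a few tenths of the theoretical order are to be expected in double precision.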

In the next section, we analyze the dynamical behavior of the parametric family (2) on quadratic scalar polynomials.

#### **3. Complex Dynamics**

The dynamical analysis of (2) is performed throughout this section in terms of complex analysis. The order of convergence is not the only important criterion to study when evaluating an iterative scheme. The validity of a method also depends on other aspects such as knowing how it behaves based on the initial estimates that are taken, that is, how wide the set of initial estimations is for which the method is convergent. For this reason, it is necessary to introduce several tools that allow for a more exhaustive study.

The analysis of the dynamics of a method is becoming one of the most investigated parts within the study of iterative methods since it allows for classifying the different iterative schemes, not only from the point of view of their speed of convergence, but also analyzing its behavior based on the initial estimate taken (see, for example, [6–13]). This study allows for visualizing graphically the set of initial approximations that converge to a given root or to points that are not roots of the equation. In addition, it provides important information about the stability and reliability of the iterative method.

In this paper, we focus on studying the complex dynamics of the parametric family (2) on quadratic polynomials of the form $p(z) = (z - a)(z - b)$, where $a, b \in \mathbb{C}$. For this study, we need the result known as the Scaling Theorem, since it allows us to conjugate the dynamical behavior of one operator with the behavior of another operator conjugated to it through an affine map; that is, our operator has the same stability on all quadratic polynomials. This result will be of great use to us, since we can apply the Möbius transformation to the operator $R_{p,\gamma}$ associated with our parametric family acting on $p(z)$, knowing that the conclusions obtained apply to any quadratic polynomial.

**Theorem 2** (Scaling Theorem for family (2))**.** *Let $f(z)$ be an analytic function on the Riemann sphere $\hat{\mathbb{C}}$ and let $T(z) = \alpha z + \beta$ be an affine transformation with $\alpha \neq 0$. We consider $g(z) = \lambda(f \circ T)(z)$, $\lambda \neq 0$. Let $R_{f,\gamma}$ and $R_{g,\gamma}$ be the fixed point operators of the family (2) associated with the functions $f$ and $g$, respectively, that is to say,*

$$R\_{f,\gamma}(z) = z + \left[ -\frac{\gamma}{2} \left( 3 - \frac{f'(y)}{f'(z)} \right) + (1 - \gamma) \left( \frac{1}{\frac{f(y)}{f(z)} - 1} - \left( \frac{f(y)}{f(z)} \right)^2 \right) \right] \frac{f(z)}{f'(z)},\tag{21}$$

$$R_{g,\gamma}(z) = z + \left[ -\frac{\gamma}{2}\left(3 - \frac{g'(y)}{g'(z)}\right) + (1-\gamma)\left(\frac{1}{\frac{g(y)}{g(z)} - 1} - \left(\frac{g(y)}{g(z)}\right)^2\right)\right]\frac{g(z)}{g'(z)}, \tag{22}$$

*where $y = z - \frac{f(z)}{f'(z)}$ in (21) and, analogously, $y = z - \frac{g(z)}{g'(z)}$ in (22), with $z \in \hat{\mathbb{C}}$. Then, $R_{f,\gamma}$ is analytically conjugated to $R_{g,\gamma}$ through $T$, that is to say,*

$$(T \circ R_{g,\gamma} \circ T^{-1})(z) = R_{f,\gamma}(z).$$

**Proof.** Taking into account that $T(x - y) = T(x) - T(y) + \beta$, $T(x + y) = T(x) + T(y) - \beta$ and $g'(z) = \alpha\lambda f'(T(z))$, we have

$$\begin{split} &(T \circ R_{g,\gamma} \circ T^{-1})(z) = T(R_{g,\gamma}(T^{-1}(z))) = \\ &= T\left(T^{-1}(z) + \left[-\frac{\gamma}{2}\left(3 - \frac{g'(T^{-1}(y))}{g'(T^{-1}(z))}\right) + (1-\gamma)\left(\frac{1}{\frac{g(T^{-1}(y))}{g(T^{-1}(z))} - 1} - \left(\frac{g(T^{-1}(y))}{g(T^{-1}(z))}\right)^2\right)\right]\frac{g(T^{-1}(z))}{g'(T^{-1}(z))}\right), \end{split}$$

where $y = z - \frac{f(z)}{f'(z)}$, $T(T^{-1}(z)) = z$ and

$$T\left(T^{-1}(y)\right) = T\left(T^{-1}(z) - \frac{g\left(T^{-1}(z)\right)}{g'\left(T^{-1}(z)\right)}\right) = T\left(T^{-1}(z) - \frac{f(z)}{\alpha f'(z)}\right) = z - T\left(\frac{f(z)}{\alpha f'(z)}\right) + \beta = z - \frac{f(z)}{f'(z)} = y.$$

Therefore, substituting these equalities and simplifying, we have

$$\begin{split} &(T \circ R_{g,\gamma} \circ T^{-1})(z) = \\ &= T\left(T^{-1}(z) + \left[-\frac{\gamma}{2}\left(3 - \frac{f'(y)}{f'(z)}\right) + (1-\gamma)\left(\frac{1}{\frac{f(y)}{f(z)} - 1} - \left(\frac{f(y)}{f(z)}\right)^2\right)\right]\frac{f(z)}{\alpha f'(z)}\right) \\ &= z + T\left(\left[-\frac{\gamma}{2}\left(3 - \frac{f'(y)}{f'(z)}\right) + (1-\gamma)\left(\frac{1}{\frac{f(y)}{f(z)} - 1} - \left(\frac{f(y)}{f(z)}\right)^2\right)\right]\frac{f(z)}{\alpha f'(z)}\right) - \beta \\ &= z + \left[-\frac{\gamma}{2}\left(3 - \frac{f'(y)}{f'(z)}\right) + (1-\gamma)\left(\frac{1}{\frac{f(y)}{f(z)} - 1} - \left(\frac{f(y)}{f(z)}\right)^2\right)\right]\frac{f(z)}{f'(z)}, \end{split}$$

so $(T \circ R_{g,\gamma} \circ T^{-1})(z) = R_{f,\gamma}(z)$; that is to say, $R_{f,\gamma}$ and $R_{g,\gamma}$ are analytically conjugated by $T(z)$.

Now, we can apply the Möbius transformation to the operator associated with the parametric family (2) in order to obtain an operator that does not depend on the constants $a$ and $b$ and, thus, be able to study the dynamical behavior of this family for any quadratic polynomial. The Möbius transformation in this case is $h(z) = \frac{z - a}{z - b}$, and it has the following properties:

$$\text{(i)} \; h(\infty) = 1 \quad \text{(ii)} \; h(a) = 0 \quad \text{(iii)} \; h(b) = \infty.$$

The fixed-point rational operator of family (2) on $p(z)$ has the expression

$$O_{\gamma}(z) = (h \circ R_{p,\gamma} \circ h^{-1})(z) = \frac{z^3(2\gamma z^2 + 3\gamma z + 2\gamma + z^5 + 5z^4 + 10z^3 + 9z^2 + 4z)}{2\gamma z^5 + 3\gamma z^4 + 2\gamma z^3 + 4z^4 + 9z^3 + 10z^2 + 5z + 1}. \tag{23}$$

We can also deduce from (23) that the order of the methods for quadratic polynomials is 3 when $\gamma \neq 0$ and 4 when $\gamma = 0$.
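Both observations can be cross-checked numerically from (23): $z = 0$ and $z = 1$ are fixed points, and near the origin $O_\gamma(z) \approx 2\gamma z^3$ for $\gamma \neq 0$, while $O_0(z) \approx 4z^4$. A minimal sketch (our own helper name):

```python
def O(z, g):
    """Rational operator (23) of the family on quadratic polynomials."""
    num = z**3 * (2*g*z**2 + 3*g*z + 2*g + z**5 + 5*z**4 + 10*z**3 + 9*z**2 + 4*z)
    den = 2*g*z**5 + 3*g*z**4 + 2*g*z**3 + 4*z**4 + 9*z**3 + 10*z**2 + 5*z + 1
    return num / den
```

Evaluating `O(z, g)` for small `z` exposes the local degree at the origin, which is exactly the order of convergence of the corresponding member of the family on quadratic polynomials.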

#### *3.1. Fixed Points*

The orbit of a point $z \in \hat{\mathbb{C}}$ is defined (see, for example, [14,15]) as the set of the successive applications of the rational operator, i.e.,

$$\left\{ z, O\_{\gamma}(z), O\_{\gamma}^2(z), \dots \right\}.$$

The performance of the orbit of $z$ is deduced attending to its asymptotic behavior. A point $z$ is said to be $T$-periodic if $O_\gamma^T(z) = z$ and $O_\gamma^t(z) \neq z$ for $t < T$. For $T = 1$, such a point is a fixed point.

Therefore, a fixed point is one that is kept invariant by the operator *Oγ*, that is, it is one that satisfies the equation *<sup>O</sup>γ*(*z*) = *<sup>z</sup>*. All the roots of the quadratic polynomial are, of course, fixed points of the *Oγ* operator. However, it may happen that fixed points appear that do not correspond to any root; we call these points strange fixed points. These points are not desirable from a numerical point of view because when an initial estimate is taken that is in the neighborhood of a strange fixed point, there is a possibility that the numerical method will converge to it, that is, to a point that is not a solution of the equation. Strange fixed points often appear when iterative methods are analyzed and their presence can show the instability of the method.

Fixed points can be classified according to the behavior of the derivative of the operator on them; thus, a fixed point $z^*$ can be:

- an attractor, if $|O_\gamma'(z^*)| < 1$, and, in particular, a superattractor if $|O_\gamma'(z^*)| = 0$;
- a repulsor, if $|O_\gamma'(z^*)| > 1$;
- a parabolic or neutral point, if $|O_\gamma'(z^*)| = 1$.
Moreover, the basin of attraction <sup>A</sup>(*z*∗) of an attracting fixed point *<sup>z</sup>*<sup>∗</sup> is the set of initial guesses whose orbits tend to *z*∗. Therefore, the set of points whose orbit tends to an attracting fixed point defines the Fatou set <sup>F</sup>(*Oγ*), while its complement is the Julia set <sup>J</sup> (*Oγ*).

In what follows, we study the fixed points of operator $O_\gamma$ and their character depending on the value of parameter $\gamma$. The proof of the following result is straightforward, as it only requires solving the equation $O_\gamma(z) = z$.

**Proposition 1.** *By analyzing the equation $O_\gamma(z) = z$, one obtains the following statements:*

*(i) $z = 0$ and $z = \infty$, which come from the roots of $p(z)$, are fixed points of $O_\gamma$;*

*(ii) $z = 1$ is a strange fixed point of $O_\gamma$ whenever $\gamma \neq -\frac{29}{7}$;*

*(iii) the roots of the polynomial*

$$k(t) = 1 + 6t + (16 - 2\gamma)t^2 + (21 - 3\gamma)t^3 + (16 - 2\gamma)t^4 + 6t^5 + t^6,\tag{24}$$

*which we denote by $Ex_i(\gamma)$, where $i = 1, 2, \ldots, 6$, are also strange fixed points for each value of $\gamma$.*

We need the expression of the differentiated operator to analyze the stability of the fixed points and to obtain the critical points:

$$O_{\gamma}'(z) = \frac{z^2(z+1)^4\left(\gamma\left(6z^6 + 8z^5 + 7z^4 + 7z^2 + 8z + 6\right) + z\left(16z^4 + 41z^3 + 60z^2 + 41z + 16\right)\right)}{\left(2\gamma z^5 + (3\gamma + 4)z^4 + (2\gamma + 9)z^3 + 10z^2 + 5z + 1\right)^2}.$$

It is clear that 0 and ∞ are always superattracting fixed points because they come from the roots of the polynomial, and the order of the iterative methods is higher than 2, but the stability of the other fixed points can change depending on the values of parameter *γ*.

**Proposition 2.** *The character of the strange fixed point $z = 1$ is as follows:*

*(a) If $\gamma = -\frac{29}{7}$, then $z = 1$ is not a strange fixed point.*

*(b) $z = 1$ is never a superattractor.*

*(c) If $Re(\gamma) > \frac{67}{7}$ or $Re(\gamma) < -\frac{125}{7}$, then $z = 1$ is an attractor.*

*(d) If $Re(\gamma) \in \left[-\frac{125}{7}, \frac{67}{7}\right]$ and $Im(\gamma)^2 + \left(Re(\gamma) + \frac{29}{7}\right)^2 > \frac{9216}{49}$, then $z = 1$ is an attractor.*

*(e) If $Re(\gamma) \in \left[-\frac{125}{7}, \frac{67}{7}\right]$ and $Im(\gamma)^2 + \left(Re(\gamma) + \frac{29}{7}\right)^2 = \frac{9216}{49}$, then $z = 1$ is parabolic.*

*(f) In any other case, $z = 1$ is a repulsor.*

**Proof.** We obtain that

$$|O'\_{\gamma}(1)| = \left|\frac{96}{7\gamma + 29}\right|.$$

It is not difficult to check that $|O_\gamma'(1)|$ cannot be 0, so $z = 1$ cannot be a superattractor, and, when $\gamma = -\frac{29}{7}$, $z = 1$ is not a fixed point.

Now, we are going to study when $z = 1$ is an attracting point. It is easy to check that $|O_\gamma'(1)| < 1$ is equivalent to $96^2 < |29 + 7\gamma|^2$. Rewriting the last expression, we obtain the following inequality:

$$8375 < 406\,Re(\gamma) + 49\,Re(\gamma)^2 + 49\,Im(\gamma)^2.$$

Let us see when this inequality is verified. When $8375 - 406Re(\gamma) - 49Re(\gamma)^2 < 0$, that is, when $\left(Re(\gamma) - \frac{67}{7}\right)\left(Re(\gamma) + \frac{125}{7}\right) > 0$, the inequality holds for every $Im(\gamma)$; therefore, $z = 1$ is an attracting point when $Re(\gamma) > \frac{67}{7}$ or $Re(\gamma) < -\frac{125}{7}$. When $Re(\gamma) \in \left[-\frac{125}{7}, \frac{67}{7}\right]$, we need $Im(\gamma)$ to satisfy $8375 < 406Re(\gamma) + 49Re(\gamma)^2 + 49Im(\gamma)^2$ for $z = 1$ to be attracting.

Finally, we study when $z = 1$ is a parabolic point. This happens when $|O_\gamma'(1)| = 1$, that is, when $49Im(\gamma)^2 = 8375 - 406Re(\gamma) - 49Re(\gamma)^2$, which requires $Re(\gamma) \in \left[-\frac{125}{7}, \frac{67}{7}\right]$.
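The modulus $|O_\gamma'(1)| = \left|\frac{96}{7\gamma + 29}\right|$ can be cross-checked by differentiating the rational operator (23) numerically. The sketch below is our own (central finite differences; the helper names are assumptions):

```python
def O(z, g):
    """Rational operator (23)."""
    num = z**3 * (2*g*z**2 + 3*g*z + 2*g + z**5 + 5*z**4 + 10*z**3 + 9*z**2 + 4*z)
    den = 2*g*z**5 + 3*g*z**4 + 2*g*z**3 + 4*z**4 + 9*z**3 + 10*z**2 + 5*z + 1
    return num / den

def dO(z, g, h=1e-6):
    """Central finite-difference approximation of O_gamma'(z)."""
    return (O(z + h, g) - O(z - h, g)) / (2 * h)
```

For real $\gamma$, comparing `dO(1.0, g)` against $96/(7\gamma + 29)$ also confirms the attraction threshold: $|O_\gamma'(1)| < 1$ exactly when $|7\gamma + 29| > 96$.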

Now, we establish the stability of the strange fixed points that are roots of the polynomial (24). To do this, we calculate these roots, noting that (24) is a sixth-degree symmetric polynomial, that is, a polynomial whose computation can be reduced to that of a third-degree one, and that satisfies the following properties:

- its coefficients are symmetric, so if $t$ is a root of $k(t)$, then $\frac{1}{t}$ is also a root;
- dividing by $t^3$ and introducing the change of variable $z = \frac{1}{t} + t$ reduces it to a polynomial of degree 3 in $z$.
Performing the reduction of (24), we obtain:

$$\begin{aligned} 1 + 6t + (16 - 2\gamma)t^2 + (21 - 3\gamma)t^3 + (16 - 2\gamma)t^4 + 6t^5 + t^6 &= 0, \\ \left(\frac{1}{t^3} + t^3\right) + 6\left(\frac{1}{t^2} + t^2\right) + (16 - 2\gamma)\left(\frac{1}{t} + t\right) + 21 - 3\gamma &= 0, \\ z^3 + 6z^2 + (13 - 2\gamma)z + 9 - 3\gamma &= 0, \end{aligned}$$

where $z = \frac{1}{t} + t$, $z^2 - 2 = \frac{1}{t^2} + t^2$ and $z^3 - 3z = \frac{1}{t^3} + t^3$. Now, we calculate the roots of this polynomial and obtain:

$$\begin{aligned} z_1(\gamma) &= \frac{\sqrt[3]{\frac{2}{3}}\,(2\gamma - 1)}{\sqrt[3]{-9\gamma + \sqrt{3\gamma((75 - 32\gamma)\gamma - 78) + 93} + 9}} + \frac{\sqrt[3]{-9\gamma + \sqrt{3\gamma((75 - 32\gamma)\gamma - 78) + 93} + 9}}{\sqrt[3]{2}\,3^{2/3}} - 2, \\ z_2(\gamma) &= \frac{\sqrt[3]{-\frac{2}{3}}\,(1 - 2\gamma)}{\sqrt[3]{-9\gamma + \sqrt{3\gamma((75 - 32\gamma)\gamma - 78) + 93} + 9}} + \frac{(-1)^{2/3}\sqrt[3]{-9\gamma + \sqrt{3\gamma((75 - 32\gamma)\gamma - 78) + 93} + 9}}{\sqrt[3]{2}\,3^{2/3}} - 2, \\ z_3(\gamma) &= \frac{(-1)^{2/3}\sqrt[3]{\frac{2}{3}}\,(2\gamma - 1)}{\sqrt[3]{-9\gamma + \sqrt{3\gamma((75 - 32\gamma)\gamma - 78) + 93} + 9}} - \frac{\sqrt[3]{-\frac{1}{2}}\sqrt[3]{-9\gamma + \sqrt{3\gamma((75 - 32\gamma)\gamma - 78) + 93} + 9}}{3^{2/3}} - 2. \end{aligned}$$

To calculate the roots of polynomial (24) from the $z_i(\gamma)$, $i = 1, 2, 3$, we undo the change of variable, since $t = \frac{z_i(\gamma) \pm \sqrt{z_i(\gamma)^2 - 4}}{2}$. Therefore, we obtain the roots of the sixth-degree polynomial, which are conjugated two by two:

$$\begin{aligned} \operatorname{Ex}\_{1}(\gamma) &= \frac{z\_{1}(\gamma) + \sqrt{z\_{1}(\gamma)^{2} - 4}}{2}, & \operatorname{Ex}\_{2}(\gamma) &= \frac{z\_{1}(\gamma) - \sqrt{z\_{1}(\gamma)^{2} - 4}}{2}, \\ \operatorname{Ex}\_{3}(\gamma) &= \frac{z\_{2}(\gamma) + \sqrt{z\_{2}(\gamma)^{2} - 4}}{2}, & \operatorname{Ex}\_{4}(\gamma) &= \frac{z\_{2}(\gamma) - \sqrt{z\_{2}(\gamma)^{2} - 4}}{2}, \\ \operatorname{Ex}\_{5}(\gamma) &= \frac{z\_{3}(\gamma) + \sqrt{z\_{3}(\gamma)^{2} - 4}}{2}, & \operatorname{Ex}\_{6}(\gamma) &= \frac{z\_{3}(\gamma) - \sqrt{z\_{3}(\gamma)^{2} - 4}}{2}. \end{aligned}$$
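In practice, the strange fixed points of Proposition 1(iii) can also be obtained directly as the numerical roots of $k(t)$ in (24) and checked against the fixed point equation. A minimal sketch (our own helper names):

```python
import numpy as np

def O(z, g):
    """Rational operator (23)."""
    num = z**3 * (2*g*z**2 + 3*g*z + 2*g + z**5 + 5*z**4 + 10*z**3 + 9*z**2 + 4*z)
    den = 2*g*z**5 + 3*g*z**4 + 2*g*z**3 + 4*z**4 + 9*z**3 + 10*z**2 + 5*z + 1
    return num / den

def strange_fixed_points(g):
    """The six roots Ex_i(gamma) of the symmetric polynomial k(t) in (24)."""
    return np.roots([1, 6, 16 - 2*g, 21 - 3*g, 16 - 2*g, 6, 1])
```

Since $k(t)$ is symmetric, the six roots come in reciprocal pairs, so their product equals 1; each of them satisfies $O_\gamma(t) = t$.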

Now, we study when the roots of the polynomial (24) are superattractors. To do this, we solve $|O_\gamma'(Ex_i(\gamma))| = 0$ for all $i = 1, \ldots, 6$, and we get the following relevant values of $\gamma$:


Next, we are going to study the character of these fixed points by analyzing the values of $\gamma$ close to those for which some $Ex_i(\gamma)$ is a superattractor. To do this, we study how $|O_\gamma'(Ex_i(\gamma))|$ behaves near the four previous values, and we obtain regions where some of the roots are attractors. These regions are represented in Figure 1.


**Figure 1.** Character of the roots of polynomial *<sup>k</sup>*(*t*): (**a**) *<sup>γ</sup>*1, (**b**) *<sup>γ</sup>*2, (**c**) *<sup>γ</sup>*3, (**d**) *<sup>γ</sup>*4.

### *3.2. Critical Points*

The relevance of knowing the free critical points (that is, critical points different from the roots of the polynomial) rests on the following well-known fact: each invariant Fatou component contains at least one critical point. Operator $O_\gamma(z)$ has as critical points $z = 0$, $z = -1$, $z = \infty$, and the roots of the polynomial

$$q(t) = 6\gamma + (16 + 8\gamma)t + (41 + 7\gamma)t^2 + 60t^3 + (41 + 7\gamma)t^4 + (16 + 8\gamma)t^5 + 6\gamma t^6,$$

which we denote by $Zx_i(\gamma)$, $i = 1, \dots, 6$.

Let us remark that $z = -1$ is a preimage of the fixed point $z = 1$. Since $q(t)$ is a symmetric polynomial, its roots can be obtained from those of a polynomial of degree three. The reduced polynomial of $q(t)$, obtained analogously to polynomial (24), is the following:

$$\widehat{q}(t) = 6\gamma t^3 + (16 + 8\gamma)t^2 + (41 - 11\gamma)t + 28 - 16\gamma.$$

In order to calculate the roots of $q(t)$, we obtain the roots $z$ of $\widehat{q}(t)$ and apply to them the expression $\frac{z \pm \sqrt{z^2 - 4}}{2}$. Thus, the roots of $q(t)$ are conjugated two by two.
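This reduction can be checked numerically. The following Python sketch (a verification aid, not part of the original study) recovers the roots of $q(t)$ from the roots $z$ of $\widehat{q}(t)$ for the sample value $\gamma = 2$:

```python
import numpy as np

# Check of the reduction described above, for the sample value gamma = 2:
# the roots of q(t) are recovered from the roots z of the reduced cubic
# q-hat(t) through t = (z +/- sqrt(z^2 - 4)) / 2.
gamma = 2.0
q = [6 * gamma, 16 + 8 * gamma, 41 + 7 * gamma, 60,
     41 + 7 * gamma, 16 + 8 * gamma, 6 * gamma]                 # degree-6 symmetric
q_hat = [6 * gamma, 16 + 8 * gamma, 41 - 11 * gamma, 28 - 16 * gamma]  # reduced cubic

roots_t = []
for z in np.roots(q_hat):
    s = np.sqrt(z**2 - 4 + 0j)        # +0j forces complex arithmetic
    roots_t.extend([(z + s) / 2, (z - s) / 2])

# Each recovered t annihilates q, and the two t coming from the same z
# are mutually inverse (their product is 1), i.e., conjugated two by two.
residuals = [abs(np.polyval(q, t)) for t in roots_t]
```

The product of each pair is exactly $\frac{(z+s)(z-s)}{4} = \frac{z^2-(z^2-4)}{4} = 1$, which is the "conjugated two by two" structure of the roots.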

Now, we study the asymptotic behavior of the critical points to establish whether there are basins of attraction other than those generated by the roots. For the free critical point $z = -1$, we have $O_\gamma(-1) = 1$, which is a strange fixed point, so the parameter plane associated with this critical point is not significant, since we already know the stability of $z = 1$.

The other free critical points are roots of a polynomial depending on $\gamma$; for them, we draw the parameter planes. Since the roots are conjugated two by two, we only draw three planes. We use as initial estimate a free critical point depending on $\gamma$, and we establish a mesh of $500 \times 500$ points in the complex plane, each point of the mesh corresponding to a value of the parameter. At each of them, the rational function is iterated to obtain the orbit of the critical point as a function of $\gamma$. If that orbit converges to $z = 0$ or to $z = \infty$ in fewer than 40 iterations, that point of the mesh is painted red; otherwise, the point appears in black.
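The procedure just described can be sketched as follows. This is a minimal Python illustration, not the authors' actual Matlab routines (which follow [6]); the grid limits and the escape radius are assumptions:

```python
import numpy as np

def parameter_plane(rational_map, crit, re_lim=(-12.0, 8.0),
                    im_lim=(-10.0, 10.0), n=500, max_iter=40):
    """True (red) where the orbit of the free critical point crit(gamma)
    under z -> rational_map(z, gamma) reaches z = 0 or z = infinity."""
    mask = np.zeros((n, n), dtype=bool)
    for a, x in enumerate(np.linspace(*re_lim, n)):
        for b, y in enumerate(np.linspace(*im_lim, n)):
            gamma = complex(x, y)
            z = crit(gamma)               # initial estimate: free critical point
            for _ in range(max_iter):
                z = rational_map(z, gamma)
                if abs(z) < 1e-6 or abs(z) > 1e6:
                    mask[b, a] = True     # converged to 0 or escaped to infinity
                    break
    return mask
```

For the family studied here, `rational_map` would be $O_\gamma(z)$ and `crit` one of the free critical points $Zx_i(\gamma)$.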

As we can see, there are many values of the parameter $\gamma$ resulting in a method for which the free critical points converge to one of the two roots. As observed in Figure 2, they are located in the red area on the right side of the plane. Moreover, some black areas can be identified with the stability regions of those fixed points that can be attracting, such as that of Figure 1b, whose stability region appears in black on the right side of Figure 2c.

Now, we select some stable values of $\gamma$ (in red in the parameter planes) and some unstable ones (in black) in order to show their performance.

In the case of dynamical planes, the value of the parameter $\gamma$ is fixed. Each point of the complex plane is taken as a starting point of the iterative scheme and is painted in different colors depending on the point it converges to: blue for points whose orbit converges to $\infty$, and orange for those converging to $0$. These dynamical planes have been generated with a mesh of $500 \times 500$ points and a maximum of 40 iterations per point. We mark strange fixed points with white circles, the fixed point $z = 0$ with a white star, and free critical points with white squares (again, the routines used appear in [6]).
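The dynamical-plane routine can be sketched analogously. Again, this is a Python illustration rather than the routines of [6]; the plot window and escape radius are assumptions:

```python
import numpy as np

# Returned codes: 1 = orbit reaches z = 0 (orange basin), 2 = orbit
# escapes to infinity (blue basin), 0 = neither within max_iter (black).
def dynamical_plane(rational_map, gamma, lim=3.0, n=500, max_iter=40):
    axis = np.linspace(-lim, lim, n)
    plane = np.zeros((n, n), dtype=np.uint8)
    for a, x in enumerate(axis):
        for b, y in enumerate(axis):
            z = complex(x, y)             # each mesh point is a starting point
            for _ in range(max_iter):
                z = rational_map(z, gamma)
                if abs(z) < 1e-6:
                    plane[b, a] = 1       # basin of z = 0
                    break
                if abs(z) > 1e6:
                    plane[b, a] = 2       # basin of z = infinity
                    break
    return plane
```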

An interesting value of the parameter is $\gamma = 0$, since it is the only one for which order four is attained. In that case, we obtain the dynamical plane shown in Figure 3a: two free critical points lie in each basin of attraction, and the strange fixed points lie on the boundary of both basins, so they are repulsive. The method is therefore stable and, as we can see, almost every point converges to $0$ or $\infty$ (let us notice that, in practice, any initial estimation taken in the Julia set will also converge to $0$ or to $\infty$, due to rounding errors).

Another value of the parameter that we study is $\gamma = 1$ (Figure 3b). As we can see, this dynamical plane is similar to that of $\gamma = 0$ but, in this case, we obtain fewer free critical points and fewer strange fixed points, due to the simplification of the rational function for this value of $\gamma$.

**Figure 2.** Parameter planes of $O_\gamma(z)$.

**Figure 3.** Dynamical planes of $\gamma = 0$ and $\gamma = 1$.

Carrying out numerous experiments, we have found that the simplest dynamics correspond to the methods with parameters $\gamma = 0$ and $\gamma = 1$. Next, we show dynamical planes associated with other values of the parameter. Some of them do not present bad dynamics, although they are not as simple as the previous ones; this is the case of $\gamma = 2$ (Figure 4b) or $\gamma = 2i$ (Figure 4a).

However, values such as $\gamma = -10 + i$, $\gamma = -5$, or $\gamma = -\frac{29}{7}$ present a dynamical plane with the same number of basins of attraction but with a more complicated behavior. We can see some of these dynamical planes in Figures 5a,b and 6a. There are also parameter values for which the number of basins of attraction increases, for example, $\gamma = 5$ (Figure 6b). These cases should be avoided, since the method may not converge to the roots and may end up converging to other points.


**Figure 4.** Dynamical planes of $\gamma = 2i$ and $\gamma = 2$.

**Figure 5.** Dynamical planes of $\gamma = -10 + i$ and $\gamma = -5$.

**Figure 6.** Dynamical planes of $\gamma = -\frac{29}{7}$ and $\gamma = 5$.

#### **4. Numerical Experiments**

In this section, we compare different iterative methods of the parametric family (2) by solving two classical problems of applied mathematics: the Hammerstein integral equation and Fisher's partial differential equation. We use elements of the proposed class whose dynamical planes we have studied, in order to verify that, although some of them have complicated dynamics, they can still give good numerical results.

For the computational calculations, Matlab R2020b with variable precision arithmetic of 1000 digits of mantissa is used. From an initial estimation $x^{(0)}$, the different algorithms compute iterations until the stopping criterion $\|x^{(k+1)} - x^{(k)}\| < tol$ is satisfied.

For the different examples and algorithms, we compare the approximation obtained, the norm of the function in the last iterate, the norm of the distance between the last two approximations, the number of iterations needed to satisfy the required tolerance, the computational time and the approximate computational convergence order (ACOC), defined by Cordero and Torregrosa in [16], which has the following expression:

$$p \approx \text{ACOC} = \frac{\ln\left(\|\mathbf{x}^{(k+1)} - \mathbf{x}^{(k)}\|_2 \,/\, \|\mathbf{x}^{(k)} - \mathbf{x}^{(k-1)}\|_2\right)}{\ln\left(\|\mathbf{x}^{(k)} - \mathbf{x}^{(k-1)}\|_2 \,/\, \|\mathbf{x}^{(k-1)} - \mathbf{x}^{(k-2)}\|_2\right)}.$$
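The ACOC only needs the last four iterates. A minimal Python sketch (the paper's experiments use Matlab with variable precision arithmetic):

```python
import numpy as np

# ACOC estimate from the last four iterates x^(k-2), ..., x^(k+1),
# following the formula above.
def acoc(iterates):
    d = [np.linalg.norm(iterates[m + 1] - iterates[m]) for m in range(3)]
    return np.log(d[2] / d[1]) / np.log(d[1] / d[0])
```

For a sequence with quadratic convergence, such as $x_k = 2^{-2^k}$, the estimate approaches 2 as $k$ grows.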

#### *4.1. Hammerstein Equation*

In this example, we consider the well-known Hammerstein integral equation (see [5]), which is given as follows:

$$\mathbf{x}(s) = 1 + \frac{1}{5} \int\_0^1 F(s, t) \mathbf{x}(t)^3 dt,\tag{25}$$

where *<sup>x</sup>* <sup>∈</sup> <sup>C</sup>[0, 1], *<sup>s</sup>*, *<sup>t</sup>* <sup>∈</sup> [0, 1] and the kernel *<sup>F</sup>* is

$$F(s,t) = \begin{cases} (1-s)t & t \le s, \\ s(1-t) & s \le t. \end{cases}$$

We transform the above equation into a finite-dimensional nonlinear problem by using the Gauss–Legendre quadrature formula $\int_0^1 f(t)\,dt \approx \sum_{j=1}^{7} \omega_j f(t_j)$, where the nodes $t_j$ and the weights $\omega_j$ are determined for $n = 7$. In this case, the nodes and the weights are in Table 1.

**Table 1.** Weights and nodes of the Gauss–Legendre quadrature.
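The entries of Table 1 can be regenerated with a standard routine. A Python sketch (assuming NumPy) that maps the 7-point Gauss–Legendre rule from $[-1, 1]$ to $[0, 1]$:

```python
import numpy as np

# 7-point Gauss-Legendre rule on [-1, 1], mapped to [0, 1].
nodes, weights = np.polynomial.legendre.leggauss(7)
t = 0.5 * (nodes + 1.0)    # affine map of the nodes to [0, 1]
w = 0.5 * weights          # the map's Jacobian (1/2) scales the weights

# The rule is exact for polynomials up to degree 13; as a check, it
# reproduces the integral of f(t) = t^3 over [0, 1], whose value is 1/4.
approx = float(np.sum(w * t**3))
```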


By denoting the approximations of *<sup>x</sup>*(*ti*) by *xi* (*<sup>i</sup>* <sup>=</sup> 1, ... , 7), one gets the system of nonlinear equations:

$$5x_i - 5 - \sum_{j=1}^{7} a_{ij} x_j^3 = 0,$$

where *<sup>i</sup>* <sup>=</sup> 1, . . . , 7 and

$$a_{ij} = \begin{cases} \omega_j t_j (1 - t_i), & j \le i, \\ \omega_j t_i (1 - t_j), & i < j. \end{cases}$$
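As an illustration (a Python sketch, not the authors' Matlab code; the name `residual` is chosen here), the matrix $a_{ij}$ and the residual of the system above can be assembled as follows:

```python
import numpy as np

# Assembly of a_ij and of the residual 5x - 5 - A x^3 of the nonlinear
# system obtained from Equation (25) with the 7-point Gauss-Legendre
# rule on [0, 1].
n = 7
nodes, weights = np.polynomial.legendre.leggauss(n)
t = 0.5 * (nodes + 1.0)                  # nodes mapped to [0, 1], ascending
w = 0.5 * weights

A = np.empty((n, n))
for i in range(n):
    for j in range(n):
        if j <= i:                       # t_j <= t_i: kernel (1 - s) t
            A[i, j] = w[j] * t[j] * (1.0 - t[i])
        else:                            # t_i < t_j: kernel s (1 - t)
            A[i, j] = w[j] * t[i] * (1.0 - t[j])

def residual(x):
    """F(x) for the discretized Hammerstein equation: 5x - 5 - A x^3."""
    return 5.0 * x - 5.0 - A @ x**3
```

Any member of family (2), or classical Newton with Jacobian $5I - 3A\,\operatorname{diag}(x^2)$, can then be applied to `residual` from $x^{(0)} = (-1, \dots, -1)^T$.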

Starting from the initial approximation $x^{(0)} = (-1, \dots, -1)^T$ and with a tolerance of $tol = 10^{-15}$, we run the parametric family for different values of the parameter $\gamma$. The numerical results are shown in Table 2.


**Table 2.** Hammerstein results for different parameters.

In all cases, we obtain as an approximation of the solution of Equation (25) the vector $x^{(k+1)} = (1.0026875, 1.0122945, 1.0229605, 1.0275616, 1.0229605, 1.0122945, 1.0026875)^T$.

In the case of the Hammerstein integral equation, we see that the numerical results of the parametric family (2) for different values of $\gamma$ are quite similar. The main difference observed between the methods is that the ACOC for $\gamma = 0$ is 4 and, for the rest of the methods, it is approximately 3. On the other hand, we note that the method with $\gamma = -10 + i$ needs a larger number of iterations than the rest to satisfy the required tolerance, so the time it takes to approximate the solution is also longer. Finally, taking into account the columns that measure the error of the approximation, that is, $\|F(x^{(k+1)})\|_2$ and $\|x^{(k+1)} - x^{(k)}\|_2$, we see that the iterative methods attaining the lowest errors are those associated with the parameters $\gamma = 0$ and $\gamma = 2$. These results confirm the information obtained in the dynamical section.

#### *4.2. Fisher Equation*

In this second example, we study the equation proposed in [17] by Fisher to model the diffusion process in population dynamics. The analytical expression of this partial differential equation is as follows:

$$u\_t(\mathbf{x}, t) = Du\_{xx}(\mathbf{x}, t) + ru(\mathbf{x}, t) \left(1 - \frac{u(\mathbf{x}, t)}{p}\right), \quad \mathbf{x} \in [a, b], \ t \ge 0,\tag{26}$$

where $D > 0$ is the diffusion constant, $r$ is the growth rate of the species, and $p$ is the carrying capacity.

In this case, we will study the Fisher equation for the values $p = 1$, $r = 1$, and $D = 1$ in the spatial interval $[0, 1]$, with the initial condition $u(x, 0) = \operatorname{sech}^2(\pi x)$ and null boundary conditions.

We transform the problem just described into a set of nonlinear systems by applying an implicit finite-difference method, which provides the estimated solution at the instant $t_j$ from the estimated one at $t_{j-1}$. We denote the spatial step by $h = \frac{1}{n_x}$ and the temporal step by $k = \frac{T_{max}}{n_t}$, where $T_{max}$ is the final instant and $n_x$ and $n_t$ are the numbers of subintervals in $x$ and $t$, respectively. Therefore, we define a mesh of the domain $[0, 1] \times [0, T_{max}]$, composed of points $(x_i, t_j)$, as follows:

$$x_i = 0 + ih, \quad i = 0, \dots, n_x, \qquad t_j = 0 + jk, \quad j = 0, \dots, n_t.$$

Our objective is to approximate the solution of problem (26) at these mesh points, solving as many nonlinear systems as there are temporal nodes $t_j$ in the mesh. For this, we use the following finite differences:

$$\begin{aligned} u\_t(\mathbf{x}, t) &\approx \frac{u(\mathbf{x}, t) - u(\mathbf{x}, t - k)}{k} \\ u\_{xx}(\mathbf{x}, t) &\approx \frac{u(\mathbf{x} + h, t) - 2u(\mathbf{x}, t) + u(\mathbf{x} - h, t)}{h^2}. \end{aligned}$$

We observe that, for the time derivative, we use first-order backward divided differences and, for the spatial one, second-order centered divided differences.

By denoting by $u_{i,j}$ the approximation of the solution at $(x_i, t_j)$ and replacing it in the Cauchy problem, we get the system

$$k u_{i+1,j} + (kh^2 - 2k - h^2)u_{i,j} - kh^2 u_{i,j}^2 + k u_{i-1,j} = -h^2 u_{i,j-1},$$

for *<sup>i</sup>* <sup>=</sup> 1, 2, ... , *nx* <sup>−</sup> 1 and *<sup>j</sup>* <sup>=</sup> 1, 2, ... , *nt*. The unknowns of this system are *<sup>u</sup>*1,*j*, *<sup>u</sup>*2,*j*, ... , *unx*−1,*j*, that is, the approximations of the solution in each spatial node for the fixed instant *tj*.
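One time step of this implicit scheme can be sketched as follows (Python, with the parameter values of the experiments below; the name `fisher_residual` is chosen here). Null boundary conditions give $u_{0,j} = u_{n_x,j} = 0$ at every time level:

```python
import numpy as np

nx, nt, Tmax = 10, 50, 10.0
h, k = 1.0 / nx, Tmax / nt            # spatial and temporal steps

def fisher_residual(u, u_prev):
    """Residual of the system for the interior unknowns u_1..u_{nx-1};
    u_prev is the full solution (nx + 1 values) at the previous level."""
    full = np.concatenate(([0.0], u, [0.0]))   # insert boundary zeros
    i = np.arange(1, nx)
    return (k * full[i + 1]
            + (k * h**2 - 2.0 * k - h**2) * full[i]
            - k * h**2 * full[i]**2
            + k * full[i - 1]
            + h**2 * u_prev[i])
```

At each instant $t_j$, a member of family (2) would be applied to this residual, taking the solution at $t_{j-1}$ both as `u_prev` and as the initial guess.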

In this example, we work with the parameters $T_{max} = 10$, $n_x = 10$, and $n_t = 50$. As we have said, it is necessary to solve as many systems as there are temporal nodes $t_j$; for each of these systems, we use the parametric family (2) to approximate its solution. Thus, starting from the initial approximation $u_{i,0} = \operatorname{sech}^2(\pi x_i)$, $i = 0, \dots, n_x$, with a tolerance of $10^{-6}$, we execute the parametric family for different values of $\gamma$, obtaining the results in Table 3.

**Table 3.** Fisher results for different parameters.


In all cases, we obtain as an approximation of the solution of problem (26) the vector $x^{(k+1)} = (0, 0.432639, 0.708718, 0.853425, 0.918847, 0.93729, 0.918847, 0.853425, 0.708718, 0.432639, 0)^T$.

In this case, it can be seen that the results are very similar, although there are subtle differences. For example, the method with $\gamma = 0$ uses a smaller number of iterations than the rest to satisfy the required tolerance, although this does not make it much faster than the other methods, since the difference in time is a matter of seconds. On the other hand, looking at the time column, one method stands out for its slowness: that with $\gamma = -10 + i$. Again, we note that the ACOC of the methods roughly matches the theoretical predictions made throughout the article. Observing the error columns, we find similar results as well; since the tolerance is larger than in the first example, no great differences are observed in these results.

#### **5. Conclusions**

A parametric family of iterative methods for solving nonlinear systems is presented. The dynamical analysis of the class on quadratic polynomials is carried out in order to select the members of the family with the best stability properties. We prove that there exists a wide set of real and complex values of the parameter for which the corresponding methods are stable; that is, the set of initial estimations converging to the roots is very wide. In particular, we have shown that the procedures with $\gamma = 0$, $\gamma = 1$, and $\gamma = 2$ are especially stable, although some other ones can also show similar dynamical properties. Two numerical examples related to Hammerstein's equation and Fisher's equation allow us to confirm the theoretical results corresponding to the convergence and the stability of the proposed class.

**Author Contributions:** The individual contributions of the authors are as follows: conceptualization, J.R.T.; writing—original draft preparation, E.G.V. and P.T.-N.; validation, A.C.; numerical experiments, E.G.V. and P.T.-N. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was supported by Ministerio de Ciencia, Innovación y Universidades PGC2018-095896-BC22 (MCIU/AEI/FEDER, UE).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors would like to thank the anonymous reviewers for their useful comments that have improved the final version of this manuscript.

**Conflicts of Interest:** The authors declare that there is no conflict of interest regarding the publication of this paper.

#### **References**

