**1. Introduction**

We look for a unique root *p*∗ of the equation:

$$
\Omega(\upsilon) = 0,\tag{1}
$$

where Ω is a continuous operator defined on a convex subset P of S with values in S, and S = R or S = C. This is a relevant issue since several problems from mathematics, physics, chemistry, and engineering can be reduced to Equation (1).

In general, either the lack or the intractability of analytic solutions forces researchers to adopt iterative techniques. However, with this type of approach we encounter problems such as slow convergence, convergence to an undesired root, divergence, computational inefficiency, or outright failure (see Traub [1] and Petković et al. [2]). The study of the convergence of iterative algorithms falls into two categories, namely semi-local and local convergence analysis. The first is based on information in a neighborhood of the starting point and yields criteria guaranteeing the convergence of the iteration. A relevant issue is therefore the convergence domain, as well as the radius of convergence of the algorithm.

*Symmetry* **2019**, *11*, 586

Herein, we deal with the second case, that is the local convergence analysis. Let us consider a fourth order algorithm defined for *n* = 0, 1, 2, . . . , as:

$$\begin{aligned} \lambda_s &= \delta_s + \beta\,\Omega(\delta_s)^k, \text{ with } \beta \neq 0 \in \mathbb{R}, \\ \mu_s &= \lambda_s - \frac{\Omega(\lambda_s)}{[\delta_s, \lambda_s; \Omega]}, \\ \delta_{s+1} &= \mu_s - H(v_s, w_s)\,\frac{\Omega(\mu_s)}{[\delta_s, \lambda_s; \Omega]}, \end{aligned} \tag{2}$$

where $\delta_0 \in \mathrm{P}$ is an initial point, $k \in \mathbb{N}$ is an arbitrary natural number, $[\cdot, \cdot\,; \Omega] : \mathrm{P} \times \mathrm{P} \to L(\mathrm{S}, \mathrm{S})$ is a divided difference of order one satisfying $[x, y; \Omega] = \frac{\Omega(x) - \Omega(y)}{x - y}$ for $x \neq y$, $v_s = \frac{\Omega(\mu_s)}{\Omega(\lambda_s)}$, $w_s = \frac{\Omega(\mu_s)}{\Omega(\delta_s)}$, and $H : \mathrm{S} \times \mathrm{S} \to \mathrm{S}$ is a continuous function. The fourth-order convergence of Method (2) was studied by Lee and Kim [3] using Taylor series, hypotheses involving derivatives of Ω up to the fourth order, and hypotheses on the first and second partial derivatives of the function *H*. However, only divided differences of the first order appear in (2). Favorable computations were also reported for the related Kung–Traub methods [1] of the form:

$$\begin{aligned} \lambda_s &= \delta_s + \beta\,\Omega(\delta_s)^4, \text{ with } \beta \neq 0 \in \mathbb{R}, \\ \mu_s &= \lambda_s - \frac{\Omega(\lambda_s)}{[\delta_s, \lambda_s; \Omega]}, \\ \delta_{s+1} &= \mu_s - \frac{\Omega(\delta_s)}{\Omega(\delta_s) - 2\Omega(\mu_s)} \cdot \frac{\Omega(\mu_s)}{[\lambda_s, \mu_s; \Omega]}. \end{aligned} \tag{3}$$
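For concreteness, one iteration of (3) is easy to express in code. The following is a minimal sketch, not the authors' implementation: the function names, the guard against an underflowing step $\beta\,\Omega(\delta_s)^4$, and the test function $f(x) = x^2 - 2$ are our own assumptions for illustration.

```python
def divided_difference(f, x, y):
    """First-order divided difference [x, y; f] = (f(x) - f(y)) / (x - y)."""
    return (f(x) - f(y)) / (x - y)

def kung_traub_like(f, delta, beta=1.0, tol=1e-12, max_iter=50):
    """Sketch of Method (3): a derivative-free iteration built only on
    first-order divided differences; `beta` and the start `delta` are
    user choices."""
    for _ in range(max_iter):
        f_delta = f(delta)
        if abs(f_delta) < tol:
            return delta
        lam = delta + beta * f_delta**4      # lambda_s = delta_s + beta*Omega(delta_s)^4
        if lam == delta:                     # step fell below machine resolution:
            return delta                     # the iterate is already near the root
        mu = lam - f(lam) / divided_difference(f, delta, lam)
        f_mu = f(mu)
        weight = f_delta / (f_delta - 2.0 * f_mu)   # H(v, w) = 1/(1 - 2w)
        delta = mu - weight * f_mu / divided_difference(f, lam, mu)
    return delta

# Example: approximate the positive root of f(x) = x^2 - 2 from x0 = 1.5.
root = kung_traub_like(lambda x: x * x - 2.0, 1.5)
```

Note that the iteration evaluates only Ω itself and first-order divided differences; no derivative of Ω is needed at run time, which is the point exploited in the analysis below.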


Notice that (3) is obtained from (2) with *k* = 4 if we define $H(v, w) = \frac{1}{1 - 2w}$. The assumptions on the derivatives of Ω and *H* restrict the applicability of Algorithms (2) and (3). For instance, consider Ω on $\mathrm{P} = \mathrm{S} = \mathbb{R}$, $\mathrm{P}_1 = \left[-\frac{1}{\pi}, \frac{2}{\pi}\right]$, defined as:

$$\Omega(\upsilon) = \begin{cases} \upsilon^3 \log(\pi^2 \upsilon^2) + \upsilon^5 \sin\left(\frac{1}{\upsilon}\right), & \upsilon \neq 0, \\ 0, & \upsilon = 0. \end{cases}$$

From this expression, we obtain:

$$
\Omega'(\upsilon) = 2\upsilon^2 - \upsilon^3 \cos\left(\frac{1}{\upsilon}\right) + 3\upsilon^2 \log(\pi^2 \upsilon^2) + 5\upsilon^4 \sin\left(\frac{1}{\upsilon}\right),
$$

$$
\Omega''(\upsilon) = -8\upsilon^2 \cos\left(\frac{1}{\upsilon}\right) + 2\upsilon(5 + 3\log(\pi^2 \upsilon^2)) + \upsilon(20\upsilon^2 - 1)\sin\left(\frac{1}{\upsilon}\right),
$$

$$
\Omega'''(\upsilon) = \frac{1}{\upsilon} \left[ (1 - 36\upsilon^2)\cos\left(\frac{1}{\upsilon}\right) + \upsilon\left(22 + 6\log(\pi^2 \upsilon^2) + (60\upsilon^2 - 9)\sin\left(\frac{1}{\upsilon}\right)\right) \right].
$$

We find that Ω'''(*υ*) is unbounded on P1 at the point *υ* = 0. Therefore, the results in [3] cannot be applied to the analysis of the convergence of Methods (2) or (3). Notice that numerous algorithms and convergence results are available in the literature [1–15]. Nonetheless, practice shows that the initial guess must lie in a neighborhood of the root for convergence to be achieved. However, how close to the root must the initial guess be? Indeed, such local results usually give no information about the radius of the convergence ball.
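This blow-up is easy to confirm numerically. As an illustrative sketch of our own (not part of the original analysis), sampling Ω''' at the points $\upsilon = \frac{1}{2\pi n}$, where $\cos(1/\upsilon) = 1$, produces values growing roughly like $2\pi n$; the same code also checks that $\upsilon = \frac{1}{\pi}$ is a root of Ω, since $\log(\pi^2 \cdot \frac{1}{\pi^2}) = \log 1 = 0$ and $\sin(\pi) = 0$.

```python
import math

def omega(u):
    """The example function; Omega(0) := 0 by definition."""
    if u == 0:
        return 0.0
    return u**3 * math.log(math.pi**2 * u**2) + u**5 * math.sin(1.0 / u)

def omega_ppp(u):
    """Third derivative of Omega for u != 0 (formula given above)."""
    return (1.0 / u) * (
        (1.0 - 36.0 * u**2) * math.cos(1.0 / u)
        + u * (22.0 + 6.0 * math.log(math.pi**2 * u**2)
               + (60.0 * u**2 - 9.0) * math.sin(1.0 / u))
    )

# u = 1/pi is a root: log(pi^2 * (1/pi)^2) = 0 and sin(pi) = 0.
residual = abs(omega(1.0 / math.pi))

# At u = 1/(2*pi*n) we have cos(1/u) = 1, so the (1/u)*cos(1/u) term
# dominates and Omega''' grows without bound as u -> 0.
samples = [abs(omega_ppp(1.0 / (2.0 * math.pi * n))) for n in (10, 100, 1000)]
```

The samples increase without bound as *n* grows, which is exactly why no Lipschitz-type condition on Ω''' can hold on P1.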

We broaden the applicability of Methods (2) and (3) by using assumptions on the first derivative of the function Ω only. Moreover, we estimate computable radii of convergence and error bounds from Lipschitz constants. Additionally, we determine how close to *p*∗ the initial estimate must be in order to guarantee the convergence of (2). This problem was not addressed in [3], but it is of capital importance in practical applications.

In what follows, Section 2 addresses the local convergence study of Methods (2) and (3). Section 3 contains three numerical examples that illustrate the theoretical results. Finally, Section 4 gives the concluding remarks.
