**1. Introduction**

Solving nonlinear equations *h*(*x*) = 0, where *h* : Ω ⊆ *R* → *R* is defined on an open interval Ω, is an important and challenging task in many applied scientific branches, such as Mathematical Biology, Physics, Chemistry, Economics, and also Engineering, to name a few [1–4]. This is mainly because problems from these areas usually reduce to finding the root of a nonlinear equation. The great value of this subject has led to the development of many numerical methods, most of them iterative in nature (see [5–7]). With the advanced technology of computer hardware and the latest software, the topic of solving nonlinear equations by numerical methods has gained additional significance. Since closed-form solutions cannot be obtained in general, researchers rely on iterative methods for approximating the solution. In particular, here we consider iterative methods to compute a multiple root (say, *α*) with multiplicity *m* ≥ 1 of the equation *h*(*x*) = 0, i.e., *h*<sup>(*k*)</sup>(*α*) = 0 for *k* = 0, 1, 2, ..., *m* − 1 and *h*<sup>(*m*)</sup>(*α*) ≠ 0.

There are a plethora of iterative methods with different orders of convergence, constructed to approximate the zeros of the equation *h*(*x*) = 0 (see [8–18]). The computational efficiency index, defined by Ostrowski in [19], is a very effective mechanism which ranks iterative algorithms in terms of their convergence order *pc* and the number of function evaluations *d* required per iteration. It is formulated as *I* = *pc*<sup>1/*d*</sup>. The higher the computational efficiency index of an iterative scheme, the better the scheme.

This idea is made precise by Kung-Traub's conjecture [20], which imposes an upper bound on the convergence order attainable with a fixed number of function evaluations. According to this conjecture, an iterative scheme which requires *d* function evaluations per iteration can attain a convergence order of at most *pc* = 2<sup>*d*−1</sup>. Iterative methods which attain this bound are called optimal.

The most basic and widely used method is the well-known modified Newton's method:

$$\mathbf{x}\_{n+1} = \mathbf{x}\_n - m \frac{h(\mathbf{x}\_n)}{h'(\mathbf{x}\_n)} \quad \forall \ n = 0, 1, 2, \dots \tag{1}$$

This method can efficiently find the required zero of multiplicity *m* with a quadratic order of convergence, provided that the initial approximation *x*0 is sufficiently close to the zero [8]. In Traub's terminology (see [2]), Newton's method (1) is called a one-point method. This classical method attracts many researchers because of its wide applicability to several kinds of problems, which are formulated as non-linear equations, differential equations, integral equations, systems of non-linear algebraic equations, and even random operator equations. However, a common issue and the main obstacle in the use of Newton's method is its sensitivity to the initial guess, which must be sufficiently close to the exact solution for assured convergence. Developing a criterion for selecting these initial guesses is quite difficult, and therefore, a more effective iterative technique that is globally convergent is yet to be discovered. Some other important higher-order methods based on Newton's method (1) have been developed in [11,13,14,21–28].
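As an illustration, the modified Newton iteration (1) can be sketched in a few lines; the test function (*x* − 2)³ and the starting point here are illustrative choices, not taken from the paper's experiments.

```python
# A minimal sketch of the modified Newton method (1) for a root of
# multiplicity m; the test function and starting point are illustrative.
def modified_newton(h, dh, m, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        d = dh(x)
        if d == 0:          # h' vanishes at the multiple root itself
            break
        step = m * h(x) / d
        x -= step
        if abs(step) < tol:
            break
    return x

# h(x) = (x - 2)**3 has the zero 2 with multiplicity m = 3.
root = modified_newton(lambda x: (x - 2)**3,
                       lambda x: 3*(x - 2)**2,
                       m=3, x0=2.5)
```

Note the factor *m* in the step: plain Newton (*m* = 1) would only converge linearly to a multiple root, while the multiplicity-scaled step restores quadratic convergence.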

Recently, Chicharro et al. [28] used the weight function technique to design a class of optimal second-order one-point iterative methods for simple roots, including Newton's method. In this paper, we apply this technique to develop a class of optimal second-order one-point methods for multiple roots. The new proposed family contains the modified Newton's method and many other efficient methods, obtained by selecting particular weight functions. Therefore, for a wide range of initial approximations, we can select those methods from the family which converge to the exact zero even when Newton's method does not.

The rest of the paper is organized as follows. In Section 2, the technique of the second-order method is developed and its convergence is studied. In Section 3, the basins of attraction are studied to check the stability of the methods. Numerical experiments on different equations are performed in Section 4 to demonstrate the applicability and efficiency of the presented methods. We finish the manuscript with some valuable conclusions in Section 5.

#### **2. The Method**

For a known multiplicity *m* ≥ 1, we consider the following one-step scheme for multiple roots:

$$\mathbf{x}\_{n+1} = \mathbf{x}\_n - G(\boldsymbol{\nu}\_n),\tag{2}$$

where the function *G* : C → C is differentiable in a neighborhood of "0", with *νn* = *h*(*xn*)/*h*'(*xn*). In the next result, we prove a theorem for the order of convergence of the scheme (2).
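Before analyzing its convergence, the scheme (2) can be sketched generically; the weight, test function, and starting point below are illustrative choices, not taken from the paper's experiments.

```python
# A generic sketch of scheme (2): x_{n+1} = x_n - G(nu_n), with
# nu_n = h(x_n)/h'(x_n) and G any weight satisfying G(0) = 0, G'(0) = m.
import math

def weight_iteration(h, dh, G, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        d = dh(x)
        if d == 0:                    # h' vanishes at the multiple root
            break
        nu = h(x) / d
        x_new = x - G(nu)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# h(x) = (x**2 - 1)**2 has a double root at x = 1 (m = 2); the weight
# G(nu) = m*(exp(nu) - 1) is one admissible choice.
m = 2
root = weight_iteration(lambda x: (x**2 - 1)**2,
                        lambda x: 4*x*(x**2 - 1),
                        lambda nu: m*(math.exp(nu) - 1),
                        x0=1.5)
```

With *G*(*ν*) = *mν*, this reduces exactly to the modified Newton method (1).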

**Theorem 1.** *Let h* : C → C *be a differentiable function in a region containing a multiple zero (say, α) with multiplicity m. Suppose that the initial approximation x*0 *is sufficiently close to α; then, the iteration scheme defined by (2) has a second order of convergence, provided that G*(0) = 0*, G*'(0) = *m, and* |*G*''(0)| < <sup>∞</sup>*, and the error equation is*

$$e\_{n+1} = \left(\frac{2mC\_1 - G''(0)}{2m^2}\right)e\_n^2 + O(e\_n^3), \tag{3}$$

*where* $e\_n = x\_n - \alpha$ *and* $C\_k = \frac{m!}{(m+k)!} \frac{h^{(m+k)}(\alpha)}{h^{(m)}(\alpha)}$ *for* $k \in \mathbb{N}$.

**Proof.** Let the error at the *n*-th iteration be *en* = *xn* − *α*. Using the Taylor expansions of *h*(*xn*) and *h*'(*xn*) about *α*, we have that

$$h(x\_n) = \frac{h^{(m)}(\alpha)}{m!} e\_n^m \left( 1 + C\_1 e\_n + C\_2 e\_n^2 + C\_3 e\_n^3 + O(e\_n^4) \right) \tag{4}$$

and

$$h'(x\_n) = \frac{h^{(m)}(\alpha)}{m!} e\_n^{m-1} \left( m + (m+1)C\_1 e\_n + (m+2)C\_2 e\_n^2 + (m+3)C\_3 e\_n^3 + O(e\_n^4) \right). \tag{5}$$

By using (4) and (5), we obtain

$$\nu\_n = \frac{e\_n}{m} - \frac{C\_1}{m^2} e\_n^2 + \frac{(1+m)C\_1^2 - 2mC\_2}{m^3} e\_n^3 + O(e\_n^4).$$

If we expand the weight function *G*(*νn*) about the origin by using the Taylor series, then we have that

$$G(\nu\_n) \approx G(0) + \nu\_n G'(0) + \frac{1}{2}\nu\_n^2 G''(0). \tag{6}$$

By employing the expression (6) in the scheme (2), we obtain the error equation

$$e\_{n+1} = -G(0) + \left(1 - \frac{G'(0)}{m}\right)e\_n + \left(\frac{2C\_1 G'(0) - G''(0)}{2m^2}\right)e\_n^2 + O(e\_n^3). \tag{7}$$

To obtain the second order of convergence, the constant term and the coefficient of *en* in (7) should simultaneously vanish. This is possible when *G*(0) and *G*'(0) take the following values:

$$G(0) = 0, \quad G'(0) = m. \tag{8}$$

By using the above values in (7), the error equation becomes

$$e\_{n+1} = \left(\frac{2mC\_1 - G''(0)}{2m^2}\right) e\_n^2 + O(e\_n^3). \tag{9}$$


Hence, the second-order convergence is established.

#### *Some Particular Forms of <sup>G</sup>*(*<sup>ν</sup>n*)

Numerous methods of the family (2) can be obtained, based on the form of the function *G*(*ν*) that satisfies the conditions of Theorem 1. However, we limit the choices to some simple functions. Accordingly, the following simple forms were chosen:

(1) $G(\nu\_n) = m\nu\_n(1 + a\_1\nu\_n)$; (2) $G(\nu\_n) = \frac{m\nu\_n}{1 + a\_2\nu\_n}$; (3) $G(\nu\_n) = \frac{m\nu\_n}{1 + a\_3 m\nu\_n}$; (4) $G(\nu\_n) = m(e^{\nu\_n} - 1)$; (5) $G(\nu\_n) = m\log\left[\nu\_n + 1\right]$; (6) $G(\nu\_n) = m\sin\nu\_n$; (7) $G(\nu\_n) = \frac{\nu\_n}{\left(\frac{1}{\sqrt{m}} + a\_4\nu\_n\right)^2}$; (8) $G(\nu\_n) = \frac{\nu\_n^2 + \nu\_n}{\frac{1}{m} + a\_5\nu\_n}$,

where *a*1, *a*2, *a*3, *a*4 and *a*5 are arbitrary constants.
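As a sanity check, the conditions of Theorem 1 can be verified numerically for each of the eight forms; the values of *m* and the constants *a*1–*a*5 below are illustrative.

```python
# Check numerically (central differences) that each of the eight weight
# choices satisfies G(0) = 0 and G'(0) = m; m and a1..a5 are illustrative.
import math

m, a1, a2, a3, a4, a5 = 3, 0.1, 0.25, 0.1, 0.1, 0.2
forms = [
    lambda v: m*v*(1 + a1*v),                   # (1)
    lambda v: m*v / (1 + a2*v),                 # (2)
    lambda v: m*v / (1 + a3*m*v),               # (3)
    lambda v: m*(math.exp(v) - 1),              # (4)
    lambda v: m*math.log(v + 1),                # (5)
    lambda v: m*math.sin(v),                    # (6)
    lambda v: v / (1/math.sqrt(m) + a4*v)**2,   # (7)
    lambda v: (v**2 + v) / (1/m + a5*v),        # (8)
]
eps = 1e-6
for G in forms:
    assert abs(G(0.0)) < 1e-12                  # G(0) = 0
    dG0 = (G(eps) - G(-eps)) / (2*eps)          # central difference for G'(0)
    assert abs(dG0 - m) < 1e-4                  # G'(0) = m
```

The central difference cancels the *G*''(0) term, so the estimate of *G*'(0) is accurate to *O*(eps²).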

The methods corresponding to each of the above forms are defined as follows:

*Method 1* (M1):

$$\mathbf{x}\_{n+1} = \mathbf{x}\_n - m\boldsymbol{\nu}\_n (1 + a\_1 \boldsymbol{\nu}\_n).$$

*Method 2* (M2):

$$\mathbf{x}\_{n+1} = \mathbf{x}\_n - \frac{m\nu\_n}{1 + a\_2\nu\_n}.$$

*Method 3* (M3):

$$\mathbf{x}\_{n+1} = \mathbf{x}\_n - \frac{m\nu\_n}{1 + a\_3 m\nu\_n}.$$

*Method 4* (M4):

$$\mathbf{x}\_{n+1} = \mathbf{x}\_n - m(e^{\nu\_n} - 1).$$

*Method 5* (M5):

$$\mathbf{x}\_{n+1} = \mathbf{x}\_n - m \log(\nu\_n + 1).$$

*Method 6* (M6):

$$\mathbf{x}\_{n+1} = \mathbf{x}\_n - m \sin \nu\_n.$$

*Method 7* (M7):

$$\mathbf{x}\_{n+1} = \mathbf{x}\_n - \frac{\nu\_n}{\left(\frac{1}{\sqrt{m}} + a\_4\nu\_n\right)^2}.$$

*Method 8* (M8):

$$\mathbf{x}\_{n+1} = \mathbf{x}\_n - \frac{\nu\_n^2 + \nu\_n}{\frac{1}{m} + a\_5\nu\_n}.$$

**Remark 1.** *The scheme (2) gives a one-point family of second-order methods which needs only two function evaluations per iteration—namely, h*(*xn*) *and h*'(*xn*)*.*

**Remark 2.** *Note that the modified Newton's method (1) is the special case of the above methods—M1, M2, M3, and M7—if the corresponding constants a*1, *a*2, *a*3, *and a*4 *become zero.*

#### **3. Complex Dynamics of Methods**

Our goal here is to study the complex dynamics of the new methods with the help of a graphical tool, *the basins of attraction*, of the zeros of a polynomial *P*(*z*) in the Argand plane. The nature of the basins of attraction provides important insight into the stability and convergence of iterative methods. This idea was initially introduced by Vrscay and Gilbert [29]. In recent times, many researchers have used this concept in their work—see, for example, [30,31]. We consider the special cases corresponding to *G*(*νn*) of (2) to analyze the basins of attraction.

The starting approximation *z*0 is taken in a rectangular region *R* ⊂ C that contains all the zeros of *P*(*z*). A method, starting from a point *z*0 in the rectangle, either converges to a zero of *P*(*z*) or eventually diverges. The stopping criterion for convergence is a tolerance of 10<sup>−3</sup>, with a maximum of 25 iterations; points whose orbits do not converge within this limit are declared divergent.
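The per-point computation behind these pictures can be sketched as follows; here the polynomial is *P*1(*z*) = (*z*² + 4)² from Problem 1, method M5 is used as the iteration, and the grid and color bookkeeping of the figures are omitted.

```python
# A sketch of the basin computation: iterate from z0 until the step is
# below 1e-3 (max 25 iterations) and report which zero was reached.
import cmath

def m5_step(z, m=2):
    P = (z*z + 4)**2                   # P1(z), double zeros at +/- 2i
    dP = 4*z*(z*z + 4)
    if dP == 0:
        return None
    return z - m*cmath.log(P/dP + 1)   # method M5: x - m log(nu + 1)

def basin(z0, roots=(2j, -2j), tol=1e-3, max_iter=25):
    """Index of the zero z0 converges to, or -1 if divergent/undecided."""
    z = z0
    for _ in range(max_iter):
        z_new = m5_step(z)
        if z_new is None:
            break
        if abs(z_new - z) < tol:
            for k, r in enumerate(roots):
                if abs(z_new - r) < 0.1:
                    return k
            return -1
        z = z_new
    return -1

print(basin(1 + 2j), basin(1 - 2j))
```

Coloring every grid point of the rectangle by the returned index reproduces pictures like Figures 1–4.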

To show complex geometry, we checked the basins of attraction of the methods M1–M8 on the following four polynomials:

**Problem 1.** *In this problem, we took the polynomial <sup>P</sup>*1(*z*)=(*z*<sup>2</sup> + <sup>4</sup>)<sup>2</sup>*, which has zeros* {±2*i*} *with a multiplicity of 2. We used a mesh of* 400 × 400 *points in a rectangular frame D* = [−2, 2] × [−2, 2] ⊂ C*, and gave the color green for "*2*i" and red for "*−2*i". Each initial point from the green region converges towards "*2*i", and from the red region it converges to "*−2*i". Basins obtained for the methods M1–M8 are shown in Figure 1. Analyzing the behavior of the methods, we see that the methods M5 and M6 possess lesser numbers of divergent points, followed by M1, M4, M8, and M2. On the contrary, the method M3 has a higher number of divergent points, followed by M7.*

**Problem 2.** *Let us consider the polynomial <sup>P</sup>*2(*z*)=(*z*<sup>3</sup> − *z*)<sup>3</sup> *having zeros* {0, ±1} *with a multiplicity of 3. To see the dynamical structure, we considered a rectangular frame D* = [−2, 2] × [−2, 2] ⊂ C *with* 400 × 400 *mesh points, and gave the colors red, green, and blue to each point in the basins of attraction of* <sup>−</sup>1*,* 0*, and* 1*, respectively. Basins obtained for the methods M1–M8 are shown in Figure 2. Analyzing the behavior of the methods, we observe that the methods M5 and M6 have wider convergence regions, followed by M1, M4, M8, M2, M3, and M7.*

**Figure 1.** Basins of attraction of M1–M8 for polynomial *<sup>P</sup>*1(*z*).

**Figure 2.** Basins of attraction of M1–M8 for polynomial *<sup>P</sup>*2(*z*).

**Problem 3.** *Next, consider the polynomial <sup>P</sup>*3(*z*)=(*z*<sup>4</sup> − 6*z*<sup>2</sup> + 8)<sup>2</sup> *that has four zeros* $\{\pm 2, \pm\sqrt{2}\}$*, each with a multiplicity of 2. To view the attraction basins, we considered a rectangular frame D* = [−2, 2] × [−2, 2] ⊂ C *with* 400 × 400 *mesh points and assigned the colors red, blue, green, and yellow to each point in the basin of* $-2$*,* $-\sqrt{2}$*,* $\sqrt{2}$*, and* 2*, respectively. Basins obtained for the methods M1–M8 are shown in Figure 3. Observing the behavior of the methods, we see that the methods M5, M8, M2, M3, M4, M1, and M6 possess a lesser number of divergent points, and therefore, they show good convergence. On the contrary, the method M7 has a higher number of divergent points.*

**Figure 3.** Basins of attraction of M1–M8 for polynomial *<sup>P</sup>*3(*z*).

**Problem 4.** *Lastly, we consider the polynomial* $P\_4(z) = z^6 - \frac{1}{2}z^5 + \frac{11}{4}(1+i)z^4 - \frac{1}{4}(19+3i)z^3 + \frac{1}{4}(11+5i)z^2 - \frac{1}{4}(11+i)z + \frac{3}{2} - 3i$ *that has six simple zeros* $\{1, -1+2i, -\frac{1}{2}-\frac{i}{2}, i, -\frac{3i}{2}, 1-i\}$*. To view the attraction basins, we considered a rectangle D* = [−2, 2] × [−2, 2] ⊂ C *with* 300 × 300 *grid points, and assigned the colors red, green, yellow, blue, cyan, and purple to each point in the basin of* 1*,* $-1+2i$*,* $-\frac{1}{2}-\frac{i}{2}$*,* $i$*,* $-\frac{3i}{2}$*, and* $1-i$*, respectively. Basins obtained for the methods M1–M8 are shown in Figure 4. Analyzing the basins of the methods, we observe that the methods M5, M8, M2, M3, M4, and M6 possess a lesser number of divergent points. On the contrary, the methods M1 and M7 have a higher number of divergent points.*

**Figure 4.** Basins of attraction of M1–M8 for polynomial *<sup>P</sup>*4(*z*).

From the graphics of the basins, we can judge the behavior and suitability of a method for applications. If we pick an initial point *z*0 in a zone where various basins of attraction touch one another, it is difficult to predict which root will be reached by the iterative technique that begins at *z*0. Hence, such a zone is not a good one from which to choose *z*0. Both the dark zones and the zones with various colors are not appropriate for choosing *z*0 when we need to obtain a specific root. The most appealing pictures appear when the borders between basins of attraction are extremely intricate. These borders correspond to the cases where the method is more demanding with respect to the initial point.

#### **4. Numerical Results**

In this section, we demonstrate the efficiency, effectiveness, and convergence behavior of the family of new methods by applying them to some practical problems. With this in view, we take the special cases M1–M8 of the proposed class and choose *a*1 = 1/10, *a*2 = 1/4, *a*3 = 1/10, *a*4 = 1/10, and *a*5 = 1/5 in the numerical work.

Since the constants *a*1, *a*2, *a*3, *a*4, and *a*5 are arbitrary, there is no particular reason for these choices; we selected the values randomly. The proposed methods are compared with the existing modified Newton method (1), denoted by MNM.

To verify the theoretical results, we calculate the computational order of convergence (COC) by using the formula (see [32])

$$\text{COC} = \frac{\log \left| (\mathbf{x}\_{n+1} - \boldsymbol{\alpha}) / (\mathbf{x}\_n - \boldsymbol{\alpha}) \right|}{\log \left| (\mathbf{x}\_n - \boldsymbol{\alpha}) / (\mathbf{x}\_{n-1} - \boldsymbol{\alpha}) \right|}.$$
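The COC formula can be sketched directly; the test function *h*(*x*) = (*x* − 1)²(*x* + 2) (double root at 1) and the iterates generated by modified Newton here are illustrative, not taken from the paper's tables.

```python
# A sketch of the COC formula above from three consecutive iterates and
# the known root alpha; the test problem is illustrative.
import math

def coc(x_prev, x_cur, x_next, alpha):
    """Computational order of convergence from three consecutive iterates."""
    num = math.log(abs((x_next - alpha) / (x_cur - alpha)))
    den = math.log(abs((x_cur - alpha) / (x_prev - alpha)))
    return num / den

# Generate iterates with modified Newton (m = 2) on h(x) = (x-1)**2 (x+2).
xs = [2.0]
for _ in range(3):
    x = xs[-1]
    h = (x - 1)**2 * (x + 2)
    dh = (x - 1) * (3*x + 3)          # h'(x), in factored form
    xs.append(x - 2*h/dh)

print(coc(xs[1], xs[2], xs[3], 1.0))  # close to the theoretical order 2
```

In practice the iterates must still be far enough from the round-off floor, otherwise the logarithms are dominated by noise.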

The computational work was performed in the programming software Mathematica by using multiple-precision arithmetic. Numerical results displayed in Tables 1–4 include: (i) the number of iterations (*n*) required to converge to the solution, such that |*xn*+1 − *xn*| + |*h*(*xn*)| < 10<sup>−100</sup>, (ii) values of the last three consecutive errors |*en*| = |*xn*+1 − *xn*|, (iii) the residual error |*h*(*xn*)|, and (iv) the computational order of convergence (COC).

For testing, we chose four test problems as follows:

**Example 1** (Eigenvalue problem)**.** *The eigenvalue problem becomes difficult when the characteristic polynomial involves a large square matrix; finding the zeros of the characteristic equation of a square matrix of order greater than 4 can be a challenging task. So, we consider the following 9* × *9 matrix*



*The characteristic polynomial of the matrix* (*M*) *is given as*

$$\begin{split} h\_1(\mathbf{x}) &= \mathbf{x}^9 - 29\mathbf{x}^8 + 349\mathbf{x}^7 - 2261\mathbf{x}^6 + 8455\mathbf{x}^5 - 17663\mathbf{x}^4 + 15927\mathbf{x}^3 \\ &+ 6993\mathbf{x}^2 - 24732\mathbf{x} + 12960. \end{split}$$

*This function has one multiple zero α* = 3 *with a multiplicity of 4. We chose the initial approximation x*0 = 2.75 *and obtained the numerical results shown in Table 1.*


**Table 1.** Comparison of performance of methods for Example 1.

**Example 2** (Beam Designing Model)**.** *Here, we consider a beam positioning problem (see [4]) in which a beam of length r units leans against the edge of a cubical box with sides of length 1 unit each, in such a way that one end of the beam touches the wall and the other end touches the floor, as shown in Figure 5.*

**Figure 5.** Beam situating problem.

The problem is: what should the distance along the floor from the base of the wall to the bottom of the beam be? Suppose *y* is the distance along the edge of the box to the beam from the floor, and let *x* be the distance from the bottom of the box to the bottom of the beam. Then, for a particular value of *r*, we have

$$\mathbf{x}^4 + 4\mathbf{x}^3 - 24\mathbf{x}^2 + 16\mathbf{x} + 16 = 0.$$

The non-negative solution of the equation is a root *x* = 2 with a multiplicity of 2. We consider the initial approximate *x*0 = 3 to find the root. Numerical results of various methods are shown in Table 2.
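As a sketch of this experiment in double precision, method M5 (the logarithmic weight) serves as a representative member of the family here:

```python
# Method M5 applied to the beam equation of Example 2; the double root
# is x = 2 (m = 2) and x0 = 3, as stated above.
import math

def h2(x):  return x**4 + 4*x**3 - 24*x**2 + 16*x + 16
def dh2(x): return 4*x**3 + 12*x**2 - 48*x + 16

m, x = 2, 3.0
for _ in range(50):
    d = dh2(x)
    if d == 0:                        # h' also vanishes at the double root
        break
    nu = h2(x) / d
    step = m * math.log(nu + 1)       # weight G(nu) of method M5
    x -= step
    if abs(step) < 1e-10:
        break
print(x)  # approaches the double root 2
```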

**Table 2.** Comparison of performance of methods for Example 2.


**Example 3.** *Van der Waals Equation of State, which can be expressed as*

$$\left(P + \frac{a\_1 n^2}{V^2}\right)(V - na\_2) = nRT,$$

*describes the behavior of a real gas by introducing into the ideal gas equation two additional parameters, a*1 *and a*2*, specific to each gas. In order to determine the volume V of the gas in terms of the remaining parameters, we are required to solve the following nonlinear equation in V:*

$$PV^3 - (na\_2P + nRT)V^2 + a\_1n^2V - a\_1a\_2n^3 = 0.$$

*Given the constants a*1 *and a*2 *of a specific gas, one can find values of n, P, and T such that this equation has three real roots. By using particular values, we obtain the following nonlinear function*

> *h*3(*x*) = *x*<sup>3</sup> − 5.22*x*<sup>2</sup> + 9.0825*x* − 5.2675,

*having three real roots. One of them is a multiple zero α* = 1.75 *with a multiplicity of two, and the other is a simple zero ξ* = 1.72*. However, our desired zero is α* = 1.75*. We considered the initial guess x*0 = 1.8 *for this problem. Numerical results of the various methods are shown in Table 3.*
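A sketch of this run with method M1 (*a*1 = 1/10) as the representative; note the nearby simple zero 1.72, which the iteration must avoid when started from 1.8:

```python
# Method M1 (a1 = 1/10, m = 2) on the Van der Waals polynomial of
# Example 3; the sought double root is alpha = 1.75, x0 = 1.8 as above.
def h3(x):  return x**3 - 5.22*x**2 + 9.0825*x - 5.2675
def dh3(x): return 3*x**2 - 10.44*x + 9.0825

m, a1, x = 2, 0.1, 1.8
for _ in range(60):
    d = dh3(x)
    if d == 0:
        break
    nu = h3(x) / d
    step = m*nu*(1 + a1*nu)           # weight G(nu) of method M1
    x -= step
    if abs(step) < 1e-10:
        break
print(x)  # approaches the double root 1.75
```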


**Table 3.** Comparison of performance of methods for Example 3.

**Example 4.** *Lastly, we consider the test function*

$$h\_4(\mathbf{x}) = \left(\mathbf{x}^2 + 1\right) \left(2\mathbf{x}e^{\mathbf{x}^2 + 1} + \mathbf{x}^3 - \mathbf{x}\right) \cosh^2\left(\frac{\pi x}{2}\right).$$

*The function has a multiple zero α* = *i of multiplicity 4. We chose the initial approximation x*0 = 1.25*i to obtain the zero of the function.*
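Since the zero is complex, complex arithmetic is needed; the following sketch uses the modified Newton method (the *a* → 0 limit of M1–M3 and M7) with a hand-derived product-rule derivative. At a multiplicity-4 root, double precision limits the attainable accuracy to roughly eps<sup>1/4</sup>, so the stopping tolerance is set accordingly.

```python
# Modified Newton (m = 4) on h4 of Example 4 in complex arithmetic;
# alpha = i has multiplicity 4, x0 = 1.25i as stated above.
import cmath

def h4(x):
    A = x*x + 1
    B = 2*x*cmath.exp(x*x + 1) + x**3 - x
    C = cmath.cosh(cmath.pi*x/2)**2
    return A*B*C

def dh4(x):
    # product rule over the three factors of h4
    A, dA = x*x + 1, 2*x
    E = cmath.exp(x*x + 1)
    B, dB = 2*x*E + x**3 - x, (2 + 4*x*x)*E + 3*x*x - 1
    ch, sh = cmath.cosh(cmath.pi*x/2), cmath.sinh(cmath.pi*x/2)
    C, dC = ch*ch, cmath.pi*ch*sh
    return dA*B*C + A*dB*C + A*B*dC

m, x = 4, 1.25j
for _ in range(30):
    d = dh4(x)
    if d == 0:
        break
    step = m * h4(x) / d
    x -= step
    if abs(step) < 1e-4:   # accuracy floor ~ eps**(1/4) at an m = 4 root
        break
print(x)  # approaches the multiple zero i
```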


**Table 4.** Comparison of performance of methods for Example 4.
