**1. Introduction**

The interpolation and expansion of functions are two of the oldest and most interesting branches of computational mathematics and approximation theory. Most often, they are naturally linked with corresponding algorithms, such as Newton's interpolatory formula and its divided-difference algorithm, Thiele's interpolating continued fraction and its inverse-difference algorithm, Thiele's expansion of a univariate function and Viscovatov's algorithm, and so on. When *f* is a univariate function, such problems have been investigated extensively, and abundant research results have been achieved. Surveys and a comprehensive literature on univariate interpolation and expansion can be found in Cheney [1], Hildebrand [2], Davis [3], Alfio et al. [4], Gautschi [5], Burden et al. [6], and the references therein. In comparison with the broad research on and application of univariate interpolation and expansion, however, much less attention has been paid to multivariate interpolation and expansion, and the study of multivariate rational interpolation and expansion is scarcer still. Fortunately, some literature does address the multivariate rational interpolation and expansion problems. We mention the works of Baker et al. [7,8], Kuchminskaya [9], Skorobogatko [10], Siemaszko [11], Viscovatov [12], Graves-Morris [13], Cuyt and Verdonk [14–17], Möller [18], Zhu et al. [19], Gu et al. [20,21], Tan et al. [22–28], and the references therein for results concerning multivariate rational interpolation and expansion.

Skorobogatko applied the idea of branching to continued fractions from about the 1960s to the 1980s, which ushered in a new era of research on the theory and methods of continued fractions [10]. In 1983, the concept of Thiele-type interpolation by a continued fraction in one variable was generalized to the multivariate case by Siemaszko [11]: Thiele-type branched continued fractions were obtained, and an algorithm was suggested for computing the limiting case of branched continued fractions for bivariate functions. Furthermore, in the 1980s, based on the so-called symmetric branched continued fraction, Cuyt et al. [14–16] introduced a symmetric interpolation scheme and studied the expansion of a bivariate function by this method. Following these prior works, in 1995, Zhu et al. [19,22] discussed vector-valued rational interpolants by branched continued fractions. In 1997, Gu et al. [20,21] investigated matrix-valued rational interpolants. In the meantime, Tan et al. studied bivariate rational interpolants and obtained substantial scholarly achievements in this field [22–28]. In 2007, Tan summarized the research results concerning the theory of continued fractions and published the book *The Theory of Continued Fractions and Their Applications*, which has played an important role in promoting modern research on continued fractions. Furthermore, there are several works on the application of continued fractions in image processing, such as those of Hu and Tan [29,30] and Li et al. [31].

As we all know, Taylor's expansion of a function is likely the best known and most widely used formula for the function approximation problem. If *f* is a function of a single variable $x$ whose derivatives of all orders are uniformly bounded in a neighborhood $N(\xi)$, then for each $x$ in $N(\xi)$, $f(x)$ can be expanded into the following Taylor's formula about $\xi$:

$$f(x) = C_0 + C_1(x - \xi) + C_2(x - \xi)^2 + \cdots + C_k(x - \xi)^k + \cdots,$$

where $C_k = \frac{1}{k!} f^{(k)}(\xi)$, $k = 0, 1, 2, \dots$. On the other hand, the function $f(x)$ can also be expanded about $\xi$ in terms of a continued fraction of the following form:

$$f(x) = d_0 + \frac{x - \xi}{d_1 +}\,\frac{x - \xi}{d_2 +}\,\cdots\,\frac{x - \xi}{d_n +}\,\cdots,$$

where $d_k \in \mathbb{R}$, $k = 0, 1, 2, \dots$. The above formula is called Thiele's expansion of $f(x)$ about $\xi$. There is a famous algorithm, called Viscovatov's algorithm, for computing the coefficients $d_0, d_1, d_2, \dots$ of Thiele's expansion; see the references [16,28].

Motivated by the results concerning the univariate case, in this paper, we consider the rational expansion of a bivariate function by Thiele's continued fraction and give a Viscovatov-like algorithm for computing the coefficients. As a preliminary to our discussion, Thiele–Newton's interpolation needs to be introduced first. In the works [25,28], the so-called Thiele–Newton interpolation was suggested by Tan et al. to construct bivariate interpolants. Its main idea is to combine Thiele's interpolating continued fraction in one variable with Newton's interpolating polynomial in the other variable, yielding the following hybrid interpolation:

$$TN_{m,n}(x, y) = t_0(y) + \frac{x - x_0}{t_1(y) +}\,\frac{x - x_1}{t_2(y) +}\,\cdots\,\frac{x - x_{m-1}}{t_m(y)} \tag{1}$$

where:

$$\begin{aligned} t_i(y) ={}& \varphi_{TN}[x_0, \dots, x_i; y_0] + (y - y_0)\varphi_{TN}[x_0, \dots, x_i; y_0, y_1] \\ &+ \cdots + (y - y_0)(y - y_1) \cdots (y - y_{n-1})\varphi_{TN}[x_0, \dots, x_i; y_0, \dots, y_n] \end{aligned} \tag{2}$$

for $i = 0, 1, \dots, m$, where $X = \{x_i \mid i \in \mathbb{N}\}$ and $Y = \{y_j \mid j \in \mathbb{N}\}$ are two sets of points belonging to $\mathbb{R}$, and $\varphi_{TN}[x_0, \cdots, x_i; y_0, \cdots, y_j]$ denotes the blending difference of the function $f(x, y)$ at the points $x_0, \dots, x_i; y_0, \dots, y_j$. Suppose that every blending difference $\varphi_{TN}[x_0, \cdots, x_i; y_0, \cdots, y_j]$ exists. Then, one can easily confirm that:

$$TN_{m,n}(x_i, y_j) = f(x_i, y_j), \quad i = 0, 1, \dots, m; \quad j = 0, 1, \dots, n.$$

The limiting case of Thiele's interpolating continued fraction expansion of a univariate function has been discussed in the literature [26]. Inspired by this limiting case, Thiele–Newton's expansion of a bivariate function is obtained from Equations (1) and (2) when all the points in the sets $X = \{x_i \mid i \in \mathbb{N}\}$ and $Y = \{y_j \mid j \in \mathbb{N}\}$ coincide with certain points $\xi$ and $\zeta$, respectively. In other words, a bivariate function $f(x, y)$ has Thiele–Newton's expansion of the following form:

$$f(x, y) = l_0(y) + \frac{x - \xi}{l_1(y) +}\,\frac{x - \xi}{l_2(y) +}\,\cdots\,\frac{x - \xi}{l_m(y)} \tag{3}$$

where:

$$l_i(y) = a_{i,0} + a_{i,1}(y - \zeta) + a_{i,2}(y - \zeta)^2 + a_{i,3}(y - \zeta)^3 + \cdots \tag{4}$$

for all $i \in \mathbb{N}$. Therefore, the question arises of how to calculate the unknowns $a_{i,j}$, $i = 0, 1, 2, \dots$; $j = 0, 1, 2, \dots$, in Equation (4).

The aim of this paper is to find an algorithm for computing the coefficients of Thiele–Newton's expansion of a bivariate function. The paper is organized as follows. In Section 2, we briefly recall some preliminaries on Thiele's continued fraction and Thiele–Newton's blending interpolation. In Section 3, we suggest Thiele–Newton's blending rational expansion and prove the Viscovatov-like algorithm. In Section 4, numerical examples are given to illustrate the application of the Viscovatov-like algorithm. Throughout the paper, we let $\mathbb{N}$ and $\mathbb{R}$ stand for the set of natural numbers and the set of real numbers, respectively.

## **2. Preliminaries**

In this section, we briefly review some basic definitions and results for Thiele's continued fraction, Thiele's expansion of a univariate function, and blending interpolation. Surveys and a comprehensive literature on continued fractions can be found in Cuyt et al. [14–16], Zhu et al. [19], Gu et al. [20,21], and Tan et al. [25,26,28].

**Definition 1.** *Assume that $G$ is a subset of the complex plane and $X = \{x_i \mid i \in \mathbb{N}\}$ is a set of points belonging to $G$. Suppose, in addition, that $f(x)$ is a function defined on $G$. Let:*

$$\begin{aligned} f[x_i] &= f(x_i), \quad i \in \mathbb{N}, \\ f[x_i, x_j] &= \frac{f[x_i] - f[x_j]}{x_i - x_j}, \\ f[x_i, x_j, x_k] &= \frac{f[x_i, x_k] - f[x_i, x_j]}{x_k - x_j} \end{aligned}$$

*and:*

$$f[x_i, \dots, x_j, x_k, x_l] = \frac{f[x_i, \dots, x_j, x_l] - f[x_i, \dots, x_j, x_k]}{x_l - x_k}.$$

*Then, $f[x_i, \dots, x_j, x_k]$ is called the divided difference of $f(x)$ with respect to the points $x_i, \dots, x_j, x_k$.*
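As a concrete illustration of Definition 1, the recursive rule translates directly into code; the sketch below (function names are ours, not from the paper) uses exact rational arithmetic:

```python
from fractions import Fraction

def divided_difference(f, xs):
    """Divided difference f[x_0, ..., x_n] by the recursion of Definition 1:
    drop the second-to-last point, drop the last point, and divide the
    difference by (x_n - x_{n-1})."""
    if len(xs) == 1:
        return f(xs[0])
    return (divided_difference(f, xs[:-2] + xs[-1:])
            - divided_difference(f, xs[:-1])) / (xs[-1] - xs[-2])

# Example: the third divided difference of f(x) = x^3 equals its leading
# coefficient 1 at any four distinct points.
f = lambda x: x ** 3
pts = [Fraction(k) for k in (0, 1, 2, 4)]
print(divided_difference(f, pts))  # 1
```

For higher-order differences, exact `Fraction` arithmetic avoids the severe cancellation that floating point would suffer here.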

**Definition 2.** *Assume that $G$ is a subset of the complex plane and $X = \{x_i \mid i \in \mathbb{N}\}$ is a set of points in $G$. Suppose, in addition, that $f(x)$ is a function defined on $G$. We let:*

$$\begin{aligned} \rho[x_i] &= f(x_i), \quad i \in \mathbb{N}, \\ \rho[x_i, x_j] &= \frac{x_i - x_j}{\rho[x_i] - \rho[x_j]}, \\ \rho[x_i, x_j, x_k] &= \frac{x_k - x_j}{\rho[x_i, x_k] - \rho[x_i, x_j]} \end{aligned}$$

*and:*

$$\rho[x_i, \dots, x_j, x_k, x_l] = \frac{x_l - x_k}{\rho[x_i, \dots, x_j, x_l] - \rho[x_i, \dots, x_j, x_k]}.$$

*Then, $\rho[x_i, \dots, x_j, x_k]$ is called the inverse difference of $f(x)$ with respect to the points $x_i, \dots, x_j, x_k$.*

**Definition 3.** *Assume that $G$ is a subset of the complex plane and $X = \{x_i \mid i \in \mathbb{N}\} \subseteq G$ is a set of points. In addition, let $f(x)$ be a function defined on $G$, and let:*

$$R_n(x) = \rho[x_0] + \frac{x - x_0}{\rho[x_0, x_1] +}\,\cdots\,\frac{x - x_{n-1}}{\rho[x_0, x_1, \dots, x_n]}, \tag{5}$$

*where $\rho[x_0, x_1, \dots, x_i]$, $i = 0, 1, 2, \dots, n$, is the inverse difference of $f(x)$ with respect to the points $x_0, x_1, \dots, x_i$. Then, $R_n(x)$ is called Thiele's interpolating continued fraction of order $n$. It is easy to verify that this rational function satisfies the following conditions:*

$$R_n(x_i) = f(x_i), \quad i = 0, 1, 2, \dots, n.$$
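To make Definition 3 concrete, here is a small sketch (our own helper names, not from the paper) that builds the inverse differences of Definition 2 recursively and evaluates $R_n(x)$ from the innermost term outward, checking the interpolation conditions with exact arithmetic:

```python
from fractions import Fraction

def inverse_difference(f, xs):
    """Inverse difference rho[x_0, ..., x_n] from Definition 2."""
    if len(xs) == 1:
        return f(xs[0])
    return (xs[-1] - xs[-2]) / (inverse_difference(f, xs[:-2] + xs[-1:])
                                - inverse_difference(f, xs[:-1]))

def thiele_eval(f, xs, x):
    """Evaluate R_n(x) = rho[x0] + (x - x0)/(rho[x0,x1] + (x - x1)/(...))."""
    rhos = [inverse_difference(f, xs[:k + 1]) for k in range(len(xs))]
    val = rhos[-1]
    for k in range(len(xs) - 2, -1, -1):
        val = rhos[k] + (x - xs[k]) / val
    return val

# f(x) = 1/(1+x) is itself a rational function of low degree, so a short
# Thiele fraction reproduces it exactly.
f = lambda x: 1 / (1 + x)
xs = [Fraction(k) for k in (0, 1, 2)]
assert all(thiele_eval(f, xs, xi) == f(xi) for xi in xs)  # R_n(x_i) = f(x_i)
print(thiele_eval(f, xs, Fraction(5)))  # Fraction(1, 6), exact for this f
```

Note that, unlike polynomial interpolation, this rational scheme reproduces $1/(1+x)$ exactly with only three nodes.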

When all the points in the set $X = \{x_i \mid i \in \mathbb{N}\}$ coincide with a certain point $\xi \in G$, Thiele's expansion of a univariate function $f(x)$ at $x = \xi$ is obtained as follows:

$$f(x) = d_0 + \frac{x - \xi}{d_1 +}\,\frac{x - \xi}{d_2 +}\,\cdots\,\frac{x - \xi}{d_n +}\,\cdots \tag{6}$$

where $d_k \in \mathbb{R}$, $k = 0, 1, 2, \dots$. Moreover, if $f(x)$ has derivatives of all orders in a neighborhood $N(\xi)$, then Taylor's expansion of the function $f(x)$ at $x = \xi$ is given by:

$$f(x) = \sum_{n=0}^{\infty} C_n^{(0)} (x - \xi)^n,$$

where $C_n^{(0)} = \frac{1}{n!} f^{(n)}(\xi)$, $n = 0, 1, 2, \dots$. A famous method, called Viscovatov's algorithm (see [16,28]), is available for computing the coefficients $d_0, d_1, d_2, \dots$ of Thiele's expansion; it is formulated as follows.

Algorithm 1. *Viscovatov's algorithm to calculate the coefficients $d_0, d_1, d_2, \dots$:*

$$\begin{aligned} C_i^{(0)} &= f^{(i)}(\xi)/i!, \quad i = 0, 1, 2, \dots, \\ d_0 &= C_0^{(0)}, \\ d_1 &= 1/C_1^{(0)}, \\ C_i^{(1)} &= -C_{i+1}^{(0)}/C_1^{(0)}, \quad i \geq 1, \\ d_l &= C_1^{(l-2)}/C_1^{(l-1)}, \quad l \geq 2, \\ C_i^{(l)} &= C_{i+1}^{(l-2)} - d_l C_{i+1}^{(l-1)}, \quad i \geq 1, \ l \geq 2. \end{aligned}$$

**Remark 1.** *Clearly, by applying Viscovatov's algorithm, we can carry out computations step by step for the coefficients d*0, *d*1, *d*2,...*.*
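Algorithm 1 translates almost line for line into code. The sketch below (our own naming) stores each row $C_1^{(l)}, C_2^{(l)}, \dots$ as a list that shrinks by one entry per level, and assumes $C_1^{(l)} \neq 0$ at every level (it stops early otherwise); exact rational arithmetic avoids rounding issues:

```python
import math
from fractions import Fraction

def viscovatov(c):
    """Viscovatov's algorithm (Algorithm 1): Taylor coefficients
    c[k] = f^(k)(xi)/k!  ->  Thiele coefficients d_0, d_1, d_2, ...
    Rows r0, r1 hold C_1^(l), C_2^(l), ... at consecutive levels l."""
    r0 = c[1:]                                 # C_i^(0), i >= 1
    r1 = [-ci / r0[0] for ci in r0[1:]]        # C_i^(1) = -C_{i+1}^(0)/C_1^(0)
    d = [c[0], 1 / r0[0]]                      # d_0, d_1
    while r1 and r1[0] != 0:
        dl = r0[0] / r1[0]                     # d_l = C_1^(l-2)/C_1^(l-1)
        d.append(dl)
        # C_i^(l) = C_{i+1}^(l-2) - d_l C_{i+1}^(l-1)
        r0, r1 = r1, [r0[i + 1] - dl * r1[i + 1] for i in range(len(r1) - 1)]
    return d

# Taylor coefficients of e^x about 0 are 1/k!; the Thiele coefficients
# begin 1, 1, -2, -3, 2, ...
c = [Fraction(1, math.factorial(k)) for k in range(8)]
print(viscovatov(c)[:5])  # [1, 1, -2, -3, 2] as Fractions
```

Each pass consumes one coefficient from the tail of the rows, so $k+1$ Taylor coefficients yield at most $k+1$ Thiele coefficients.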

In [25,28], the method known as Thiele–Newton's blending interpolation was suggested by Tan et al. to construct bivariate interpolants. Before introducing the method, we recall the definition of the blending difference.

**Definition 4.** *Assume that $\Pi_{m,n} = X_m \times Y_n$, where $X_m = \{x_i \mid i = 0, 1, 2, \dots, m\} \subset [a, b] \subset \mathbb{R}$ and $Y_n = \{y_j \mid j = 0, 1, 2, \dots, n\} \subset [c, d] \subset \mathbb{R}$ are two sets of points. Suppose that $f(x, y)$ is a function of two variables defined on $D = [a, b] \times [c, d]$. Let:*

$$\begin{aligned} \varphi_{TN}[x_i; y_j] &= f(x_i, y_j), \quad (x_i, y_j) \in D, \\ \varphi_{TN}[x_i; y_p, y_q] &= \frac{\varphi_{TN}[x_i; y_q] - \varphi_{TN}[x_i; y_p]}{y_q - y_p}, \\ \varphi_{TN}[x_i; y_p, \dots, y_q, y_r, y_s] &= \frac{\varphi_{TN}[x_i; y_p, \dots, y_q, y_s] - \varphi_{TN}[x_i; y_p, \dots, y_q, y_r]}{y_s - y_r}, \\ \varphi_{TN}[x_i, x_j; y_p] &= \frac{x_j - x_i}{\varphi_{TN}[x_j; y_p] - \varphi_{TN}[x_i; y_p]}, \\ \varphi_{TN}[x_i, \dots, x_j, x_k, x_l; y_p] &= \frac{x_l - x_k}{\varphi_{TN}[x_i, \dots, x_j, x_l; y_p] - \varphi_{TN}[x_i, \dots, x_j, x_k; y_p]} \end{aligned}$$

*and:*

$$\varphi_{TN}[x_i, \dots, x_l; y_p, \dots, y_q, y_r, y_s] = \frac{\varphi_{TN}[x_i, \dots, x_l; y_p, \dots, y_q, y_s] - \varphi_{TN}[x_i, \dots, x_l; y_p, \dots, y_q, y_r]}{y_s - y_r}.$$

*Then, $\varphi_{TN}[x_0, \dots, x_i; y_0, \dots, y_j]$ is called Thiele–Newton's blending difference of $f(x, y)$ with respect to the set of points $\Pi_{i,j}$.*

**Remark 2.** *From Definition 4, it is easy to see that the recurrence relations in the first argument of Thiele–Newton's blending difference $\varphi_{TN}[x_0, \dots, x_i; y_0, \dots, y_j]$ are just inverse differences of $f(x, y)$ with respect to the variable $x$, while the recurrence relations in the second argument are simply divided differences of $f(x, y)$ with respect to the variable $y$.*

Next, recall Thiele–Newton's interpolation $TN_{m,n}(x, y)$, as shown in Equations (1) and (2). In order to calculate this rational interpolant, we utilize the following algorithm, whose main operation is matrix transformations (see [23,28]).

Algorithm 2. *The four main steps of the algorithm for calculating Thiele–Newton's interpolation are as follows:*

• **Step 1: Initialization.** For $i = 0, 1, \dots, m$; $j = 0, 1, \dots, n$, let $f_{i,j}^{(0,0)} = f(x_i, y_j)$. Define the following initial information matrix:

$$M_0 = \begin{bmatrix} f_{0,0}^{(0,0)} & f_{1,0}^{(0,0)} & \cdots & f_{m,0}^{(0,0)} \\ f_{0,1}^{(0,0)} & f_{1,1}^{(0,0)} & \cdots & f_{m,1}^{(0,0)} \\ \vdots & \vdots & \ddots & \vdots \\ f_{0,n}^{(0,0)} & f_{1,n}^{(0,0)} & \cdots & f_{m,n}^{(0,0)} \end{bmatrix}.$$

• **Step 2: Thiele's recursion along the** $X$**-axis.** For $j = 0, 1, \dots, n$; $p = 1, 2, \dots, m$; $i = p, p+1, \dots, m$, compute:

$$f_{i,j}^{(p,0)} = \frac{x_i - x_{p-1}}{f_{i,j}^{(p-1,0)} - f_{p-1,j}^{(p-1,0)}}$$

and construct the following information matrix:

$$M_1 = \begin{bmatrix} f_{0,0}^{(0,0)} & f_{1,0}^{(1,0)} & \cdots & f_{m,0}^{(m,0)} \\ f_{0,1}^{(0,0)} & f_{1,1}^{(1,0)} & \cdots & f_{m,1}^{(m,0)} \\ \vdots & \vdots & \ddots & \vdots \\ f_{0,n}^{(0,0)} & f_{1,n}^{(1,0)} & \cdots & f_{m,n}^{(m,0)} \end{bmatrix}.$$

• **Step 3: Newton's recursion along the** $Y$**-axis.** For $i = 0, 1, \dots, m$; $q = 1, 2, \dots, n$; $j = q, q+1, \dots, n$, compute:

$$f_{i,j}^{(i,q)} = \frac{f_{i,j}^{(i,q-1)} - f_{i,q-1}^{(i,q-1)}}{y_j - y_{q-1}}$$

and construct the following information matrix:

$$M_2 = \begin{bmatrix} f_{0,0}^{(0,0)} & f_{1,0}^{(1,0)} & \cdots & f_{m,0}^{(m,0)} \\ f_{0,1}^{(0,1)} & f_{1,1}^{(1,1)} & \cdots & f_{m,1}^{(m,1)} \\ \vdots & \vdots & \ddots & \vdots \\ f_{0,n}^{(0,n)} & f_{1,n}^{(1,n)} & \cdots & f_{m,n}^{(m,n)} \end{bmatrix}.$$

• **Step 4: Establish Thiele–Newton's interpolation.** For *i* = 0, 1, . . . , *m*, let:

$$t_{i,n}(y) = f_{i,0}^{(i,0)} + (y - y_0) f_{i,1}^{(i,1)} + \cdots + (y - y_0)(y - y_1) \cdots (y - y_{n-1}) f_{i,n}^{(i,n)}.$$

Then, Thiele–Newton's interpolation is established as follows:

$$TN_{m,n}(x, y) = t_{0,n}(y) + \frac{x - x_0}{t_{1,n}(y) +}\,\frac{x - x_1}{t_{2,n}(y) +}\,\cdots\,\frac{x - x_{m-1}}{t_{m,n}(y)},$$

which satisfies:

$$TN_{m,n}(x_i, y_j) = f(x_i, y_j)$$

for *i* = 0, 1, . . . , *m*; *j* = 0, 1, . . . , *n*.

**Remark 3.** *Obviously, for any $i \in \{0, 1, \dots, m\}$, Newton's interpolating polynomial $t_{i,n}(y)$ with respect to the variable $y$ can be constructed from the elements $f_{i,j}^{(i,j)}$, $j = 0, 1, \dots, n$, in the $(i+1)$th column of the information matrix $M_2$.*
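The four steps of Algorithm 2 can be sketched as follows (function names and the test function are ours, not from the paper). Steps 2 and 3 are performed in place on one information matrix, stored here column by column, and Step 4 evaluates the continued fraction from the innermost term outward:

```python
from fractions import Fraction

def tn_matrix(f, xs, ys):
    """Steps 1-3 of Algorithm 2: return M2, whose (i+1)-th column holds
    f_{i,0}^{(i,0)}, ..., f_{i,n}^{(i,n)}.  M[i][j] is column i, row j."""
    m, n = len(xs) - 1, len(ys) - 1
    # Step 1: initialization, f_{i,j}^{(0,0)} = f(x_i, y_j)
    M = [[f(xs[i], ys[j]) for j in range(n + 1)] for i in range(m + 1)]
    # Step 2: Thiele's recursion along the x-axis (column p-1 is final)
    for p in range(1, m + 1):
        for i in range(p, m + 1):
            for j in range(n + 1):
                M[i][j] = (xs[i] - xs[p - 1]) / (M[i][j] - M[p - 1][j])
    # Step 3: Newton's recursion along the y-axis (row q-1 is final)
    for i in range(m + 1):
        for q in range(1, n + 1):
            for j in range(q, n + 1):
                M[i][j] = (M[i][j] - M[i][q - 1]) / (ys[j] - ys[q - 1])
    return M

def tn_eval(M, xs, ys, x, y):
    """Step 4: evaluate TN_{m,n}(x, y)."""
    def t(i):                       # Newton polynomial t_{i,n}(y)
        val, w = 0, 1
        for j in range(len(ys)):
            val += w * M[i][j]
            w *= y - ys[j]
        return val
    val = t(len(xs) - 1)
    for i in range(len(xs) - 2, -1, -1):
        val = t(i) + (x - xs[i]) / val
    return val

# Check the interpolation conditions on a 3x3 grid for a rational test function
f = lambda x, y: (1 + y + y ** 2) / (1 + x)
xs = ys = [Fraction(k) for k in (0, 1, 2)]
M = tn_matrix(f, xs, ys)
assert all(tn_eval(M, xs, ys, xi, yj) == f(xi, yj) for xi in xs for yj in ys)
print(tn_eval(M, xs, ys, Fraction(1, 2), Fraction(1, 2)))  # non-grid point
```

The in-place updates are valid because, in Step 2, column $p-1$ is no longer modified once round $p$ begins, and likewise for row $q-1$ in Step 3.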

#### **3. Thiele–Newton's Blending Expansion and the Viscovatov-Like Algorithm**

In this section, our main objective is to expound on Thiele–Newton's blending rational expansion of a bivariate function and show the Viscovatov-like algorithm that finds the coefficients of Thiele–Newton's expansion.

*3.1. Thiele–Newton's Blending Expansion*

**Definition 5.** *Assume that $\Pi = X \times Y$ with $\Pi \subset D = [a, b] \times [c, d]$, where $X = \{x_i \mid i = 0, 1, 2, \dots\} \subset [a, b] \subset \mathbb{R}$ and $Y = \{y_j \mid j = 0, 1, 2, \dots\} \subset [c, d] \subset \mathbb{R}$ are two sets of points. Suppose, in addition, that the point $(\xi, \zeta) \in D$ and $f(x, y)$ is a bivariate function defined on $D$. Let all the points in the sets $X$ and $Y$ coincide with the given points $\xi$ and $\zeta$, respectively. Then, Thiele–Newton's interpolation $TN_{m,n}(x, y)$ of a bivariate function $f(x, y)$ defined in Section 2 turns into Thiele–Newton's blending expansion of the bivariate function $f(x, y)$, as shown below:*

$$f(x, y) = d_0(y) + \frac{x - \xi}{d_1(y) +}\,\frac{x - \xi}{d_2(y) +}\,\frac{x - \xi}{d_3(y) +}\,\cdots, \tag{7}$$

*where:*

$$d_i(y) = a_{i,0} + a_{i,1}(y - \zeta) + a_{i,2}(y - \zeta)^2 + a_{i,3}(y - \zeta)^3 + \cdots \tag{8}$$

*for any i* ∈ N*.*

Obviously, the main topic for further discussion is how to calculate the coefficients $d_i(y)$, $i = 0, 1, 2, \dots$, in Equation (7), or in other words, how to compute the coefficients $a_{i,j}$, $i = 0, 1, 2, \dots$; $j = 0, 1, 2, \dots$, in Equation (8). In order to handle this problem in the bivariate case, we introduce the following algorithm.

#### *3.2. Viscovatov-Like Algorithm*

Suppose that $f(x, y)$ is a function of the two variables $x$ and $y$. If $y$ is held constant, say $y = \zeta$, then $f(x, \zeta)$ is a function of the single variable $x$. Likewise, $f(\xi, y)$ is a function of the single variable $y$ when $x$ is regarded as a constant, i.e., $x = \xi$. We use the following notation: $D_x^m f(x, y)$ denotes the $m$th-order partial derivative of $f(x, y)$ with respect to $x$. Similarly, the $n$th-order partial derivative of $f(x, y)$ with respect to $y$ is denoted by $D_y^n f(x, y)$. Furthermore, $D_x^m f(\xi, \zeta)$ and $D_y^n f(\xi, \zeta)$ denote the values of $D_x^m f(x, y)$ and $D_y^n f(x, y)$ at the point $(x, y) = (\xi, \zeta)$, respectively. Let:

$$C_k^{(0)}(y) = \frac{1}{k!} D_x^k f(\xi, y), \quad k = 0, 1, 2, \dots$$

Then, the bivariate function $f(x, y)$ can be expanded formally about the point $\xi$ as follows:

$$f(x, y) = C_0^{(0)}(y) + C_1^{(0)}(y)(x - \xi) + \cdots + C_k^{(0)}(y)(x - \xi)^k + \cdots \tag{9}$$

From Equations (7)–(9), we obtain the Viscovatov-like algorithm, which finds the coefficients of Thiele–Newton's expansion, $d_i(y)$, $i = 0, 1, 2, \dots$, and $a_{i,j}$, $i = 0, 1, 2, \dots$; $j = 0, 1, 2, \dots$, as described below.

Algorithm 3. *Viscovatov-like algorithm to calculate the coefficients $d_i(y)$, $i = 0, 1, 2, \dots$, and $a_{i,j}$, $i = 0, 1, 2, \dots$; $j = 0, 1, 2, \dots$:*

$$\begin{aligned} C_i^{(0)}(y) &= D_x^i f(\xi, y)/i!, \quad i = 0, 1, 2, \dots, \\ d_0(y) &= C_0^{(0)}(y) = f(\xi, y), \\ d_1(y) &= 1/C_1^{(0)}(y), \\ C_i^{(1)}(y) &= -C_{i+1}^{(0)}(y)/C_1^{(0)}(y), \quad i \geq 1, \\ d_l(y) &= C_1^{(l-2)}(y)/C_1^{(l-1)}(y), \quad l \geq 2, \\ C_i^{(l)}(y) &= C_{i+1}^{(l-2)}(y) - d_l(y) C_{i+1}^{(l-1)}(y), \quad i \geq 1, \ l \geq 2, \\ a_{i,j} &= D_y^j d_i(\zeta)/j!, \quad i = 0, 1, 2, \dots; \ j = 0, 1, 2, \dots \end{aligned}$$
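Since every quantity in Algorithm 3 is a power series in $(y - \zeta)$, the algorithm can be carried out on truncated series with exact rational coefficients. The sketch below (our own naming and truncation order, not from the paper) represents each $C_i^{(l)}(y)$ by its first few coefficients and uses standard truncated-series multiplication and division; it is checked against the test function $f_1(x, y) = \sum_{i,j \geq 0} x^i y^j/(i+j+1)$ used in Section 4:

```python
from fractions import Fraction

ORDER = 5  # number of retained powers of (y - zeta)

def s_mul(a, b):
    """Product of two truncated power series."""
    return [sum(a[k] * b[j - k] for k in range(j + 1)) for j in range(ORDER)]

def s_div(a, b):
    """Quotient a/b of truncated power series (requires b[0] != 0)."""
    q = []
    for j in range(ORDER):
        q.append((a[j] - sum(q[k] * b[j - k] for k in range(j))) / b[0])
    return q

def viscovatov_like(C):
    """Algorithm 3: C[i] is the series of C_i^(0)(y) = D_x^i f(xi, y)/i!
    about zeta; returns the series of d_0(y), d_1(y), d_2(y), ..."""
    one = [Fraction(1)] + [Fraction(0)] * (ORDER - 1)
    r0 = C[1:]                                       # C_i^(l), i >= 1
    r1 = [s_div([-c for c in ci], r0[0]) for ci in r0[1:]]
    d = [C[0], s_div(one, r0[0])]                    # d_0(y), d_1(y)
    while r1:
        dl = s_div(r0[0], r1[0])                     # d_l = C_1^(l-2)/C_1^(l-1)
        d.append(dl)
        # C_i^(l) = C_{i+1}^(l-2) - d_l(y) C_{i+1}^(l-1)
        r0, r1 = r1, [[u - v for u, v in zip(r0[i + 1], s_mul(dl, r1[i + 1]))]
                      for i in range(len(r1) - 1)]
    return d

# For f_1, C_i^(0)(y) = sum_j y^j/(i+j+1) at (xi, zeta) = (0, 0).
C = [[Fraction(1, i + j + 1) for j in range(ORDER)] for i in range(7)]
d = viscovatov_like(C)
print(d[1])  # series of d_1(y): 2 - (4/3)y - (1/9)y^2 - (8/135)y^3 - (31/810)y^4
```

Because the series of $d_i(y)$ is exactly $\sum_j a_{i,j}(y - \zeta)^j$, the last line of Algorithm 3 amounts to reading the coefficients $a_{i,j}$ off these lists.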

**Proof of Algorithm 3.** First, we compute the coefficients *d*0(*y*) and *d*1(*y*). Considering the two expansions (7) and (9), we have:

$$C_0^{(0)}(y) + C_1^{(0)}(y)(x - \xi) + C_2^{(0)}(y)(x - \xi)^2 + \cdots = d_0(y) + \frac{x - \xi}{d_1(y) +}\,\frac{x - \xi}{d_2(y) +}\,\frac{x - \xi}{d_3(y) +}\,\cdots. \tag{10}$$

Letting *x* = *ξ*, from Equation (10), one can clearly get:

$$d_0(y) = C_0^{(0)}(y). \tag{11}$$

Combining Equation (11) with Equation (10), we have:

$$d_1(y) + \frac{x - \xi}{d_2(y) +}\,\frac{x - \xi}{d_3(y) +}\,\cdots = \frac{1}{C_1^{(0)}(y) + C_2^{(0)}(y)(x - \xi) + C_3^{(0)}(y)(x - \xi)^2 + \cdots}. \tag{12}$$

Let *x* = *ξ* in Equation (12). Then, we can easily obtain:

$$d_1(y) = \frac{1}{C_1^{(0)}(y)}. \tag{13}$$

Next, by mathematical induction, we shall prove that the following equation:

$$d_l(y) = \frac{C_1^{(l-2)}(y)}{C_1^{(l-1)}(y)} \tag{14}$$

is true for all $l \geq 2$.

When *l* = 2, we shall verify that Equation (14) holds. Substituting Equation (13) into Equation (12), we have:

$$\begin{aligned} \frac{x - \xi}{d_2(y) +}\,\frac{x - \xi}{d_3(y) +}\,\cdots &= \frac{1}{C_1^{(0)}(y)\left[1 + \frac{C_2^{(0)}(y)}{C_1^{(0)}(y)}(x - \xi) + \frac{C_3^{(0)}(y)}{C_1^{(0)}(y)}(x - \xi)^2 + \cdots\right]} - \frac{1}{C_1^{(0)}(y)} \\ &= -\frac{\frac{C_2^{(0)}(y)}{C_1^{(0)}(y)}(x - \xi) + \frac{C_3^{(0)}(y)}{C_1^{(0)}(y)}(x - \xi)^2 + \cdots}{C_1^{(0)}(y) + C_2^{(0)}(y)(x - \xi) + C_3^{(0)}(y)(x - \xi)^2 + \cdots}, \end{aligned} \tag{15}$$

which implies that:

$$d_2(y) + \frac{x - \xi}{d_3(y) +}\,\frac{x - \xi}{d_4(y) +}\,\cdots = \frac{C_1^{(0)}(y) + C_2^{(0)}(y)(x - \xi) + C_3^{(0)}(y)(x - \xi)^2 + \cdots}{-\frac{C_2^{(0)}(y)}{C_1^{(0)}(y)} - \frac{C_3^{(0)}(y)}{C_1^{(0)}(y)}(x - \xi) - \frac{C_4^{(0)}(y)}{C_1^{(0)}(y)}(x - \xi)^2 - \cdots}. \tag{16}$$

Let:

$$C_i^{(1)}(y) = -\frac{C_{i+1}^{(0)}(y)}{C_1^{(0)}(y)}, \quad i = 1, 2, 3, \dots \tag{17}$$

Then, it follows from the identity (16) that:

$$d_2(y) + \frac{x - \xi}{d_3(y) +}\,\frac{x - \xi}{d_4(y) +}\,\cdots = \frac{C_1^{(0)}(y) + C_2^{(0)}(y)(x - \xi) + C_3^{(0)}(y)(x - \xi)^2 + \cdots}{C_1^{(1)}(y) + C_2^{(1)}(y)(x - \xi) + C_3^{(1)}(y)(x - \xi)^2 + \cdots}. \tag{18}$$

Using *x* = *ξ* in Equation (18) yields:

$$d_2(y) = \frac{C_1^{(0)}(y)}{C_1^{(1)}(y)}, \tag{19}$$

which implies that Equation (14) is true for *l* = 2.

Now, assume that Equation (14) holds for $l = n$ for some $n \geq 2$. Let us prove that Equation (14) is also true for $l = n + 1$.

By the induction hypothesis, the following equation holds:

$$d_n(y) = \frac{C_1^{(n-2)}(y)}{C_1^{(n-1)}(y)}. \tag{20}$$

Referring to Equation (18), we assume that the following equation:

$$d_n(y) + \frac{x - \xi}{d_{n+1}(y) +}\,\frac{x - \xi}{d_{n+2}(y) +}\,\cdots = \frac{C_1^{(n-2)}(y) + C_2^{(n-2)}(y)(x - \xi) + C_3^{(n-2)}(y)(x - \xi)^2 + \cdots}{C_1^{(n-1)}(y) + C_2^{(n-1)}(y)(x - \xi) + C_3^{(n-1)}(y)(x - \xi)^2 + \cdots} \tag{21}$$

is true, where:

$$C_i^{(k)}(y) = C_{i+1}^{(k-2)}(y) - d_k(y) C_{i+1}^{(k-1)}(y), \quad k = n-2, n-1; \ n \geq 2; \ i = 1, 2, 3, \dots \tag{22}$$

Combining Equation (20) with Equation (21), one has:

$$\begin{aligned} \frac{x - \xi}{d_{n+1}(y) +}\,\frac{x - \xi}{d_{n+2}(y) +}\,\cdots &= \frac{C_1^{(n-2)}(y) + C_2^{(n-2)}(y)(x - \xi) + C_3^{(n-2)}(y)(x - \xi)^2 + \cdots}{C_1^{(n-1)}(y)\left[1 + \frac{C_2^{(n-1)}(y)}{C_1^{(n-1)}(y)}(x - \xi) + \frac{C_3^{(n-1)}(y)}{C_1^{(n-1)}(y)}(x - \xi)^2 + \cdots\right]} - \frac{C_1^{(n-2)}(y)}{C_1^{(n-1)}(y)} \\ &= \frac{\left(C_2^{(n-2)}(y) - d_n(y) C_2^{(n-1)}(y)\right)(x - \xi) + \left(C_3^{(n-2)}(y) - d_n(y) C_3^{(n-1)}(y)\right)(x - \xi)^2 + \cdots}{C_1^{(n-1)}(y) + C_2^{(n-1)}(y)(x - \xi) + C_3^{(n-1)}(y)(x - \xi)^2 + \cdots}. \end{aligned} \tag{23}$$

Let:

$$C_i^{(n)}(y) = C_{i+1}^{(n-2)}(y) - d_n(y) C_{i+1}^{(n-1)}(y), \quad i = 1, 2, 3, \dots \tag{24}$$

Then, Equation (23) is rewritten as follows:

$$d_{n+1}(y) + \frac{x - \xi}{d_{n+2}(y) +}\,\frac{x - \xi}{d_{n+3}(y) +}\,\cdots = \frac{C_1^{(n-1)}(y) + C_2^{(n-1)}(y)(x - \xi) + C_3^{(n-1)}(y)(x - \xi)^2 + \cdots}{C_1^{(n)}(y) + C_2^{(n)}(y)(x - \xi) + C_3^{(n)}(y)(x - \xi)^2 + \cdots}. \tag{25}$$

Using the above Equation (25) with *x* = *ξ* produces:

$$d_{n+1}(y) = \frac{C_1^{(n-1)}(y)}{C_1^{(n)}(y)}, \tag{26}$$

which means that Equation (14) holds for *l* = *n* + 1.

As shown above, Equation (14) is true for all $l \geq 2$ by mathematical induction. Meanwhile, we have shown that Equation (24) is also true for any $l = n$, $n \geq 2$.

Moreover, by differentiating Equation (8) *j* times with respect to the variable *y*, one has:

$$D_y^j d_i(y) = a_{i,j}\, j! + a_{i,j+1} \frac{(j+1)!}{1!}(y - \zeta) + a_{i,j+2} \frac{(j+2)!}{2!}(y - \zeta)^2 + \cdots. \tag{27}$$

Setting $y = \zeta$ in Equation (27), we immediately obtain:

$$a_{i,j} = \frac{D_y^j d_i(\zeta)}{j!} \tag{28}$$

for *i* ∈ N and *j* ∈ N.

Therefore, combining Equation (28) with Equations (11), (13), (14), (17), and (24), we have shown the desired conclusion stated in Algorithm 3. This completes the proof.

#### **4. Numerical Experiments**

In this section, we give the results of numerical experiments to compare the efficiency of Thiele–Newton's blending expansion (7) with the series expansion of bivariate functions.

For $|x| < 1$, $|y| < 1$, and $x \neq y$, consider the following two test functions:

$$f_1(x, y) = \frac{1}{y - x}\left[\ln(1 - x) - \ln(1 - y)\right] \tag{29}$$

and:

$$f_2(x, y) = \frac{x^2}{(1 - x)(x - y)^2} + \frac{y^2}{(1 - y)(x - y)^2} + \frac{2xy\left[\ln(1 - x) - \ln(1 - y)\right]}{(x - y)^3}, \tag{30}$$

where $\ln(z)$ denotes the natural logarithm of $z$ (logarithm to base $e$). We shall discuss Thiele–Newton's blending expansions of Equations (29) and (30), respectively.

First of all, let us consider Thiele–Newton's blending expansion of the bivariate function $f_1(x, y)$ defined by Equation (29) at the point $(\xi, \zeta) = (0, 0)$. Using the Viscovatov-like algorithm, we can obtain the coefficients, denoted $a_{i,j}^{f_1}$, of Thiele–Newton's expansion of $f_1(x, y)$. Some values of $a_{i,j}^{f_1}$, $i = 0, 1, 2, \dots$; $j = 0, 1, 2, \dots$, are shown in Table 1.

**Table 1.** The coefficients $a_{i,j}^{f_1}$ of Thiele–Newton's expansion of $f_1(x, y)$ given by Equation (29).


Thus, Thiele–Newton's blending expansion of $f_1(x, y)$ at $(\xi, \zeta) = (0, 0)$ takes the form:

$$\begin{aligned} f_1(x, y) = R^{f_1}(x, y) ={}& 1 + \frac{1}{2}y + \frac{1}{3}y^2 + \frac{1}{4}y^3 + \frac{1}{5}y^4 + \cdots \\ &+ \frac{x}{2 - \frac{4}{3}y - \frac{1}{9}y^2 - \frac{8}{135}y^3 - \frac{31}{810}y^4 + \cdots \,+\,} \\ &\quad\;\; \frac{x}{-\frac{3}{4} - \frac{7}{16}y - \frac{293}{960}y^2 - \frac{299}{1280}y^3 - \frac{33869}{179200}y^4 + \cdots \,+\,} \\ &\quad\;\; \frac{x}{16 - \frac{88}{15}y - \frac{191}{225}y^2 - \frac{10264}{23625}y^3 - \frac{194491}{708750}y^4 + \cdots \,+\,} \cdots \end{aligned} \tag{31}$$

For *m* = 2, *n* = 3, truncating Thiele–Newton's blending expansion $R^{f\_1}(x, y)$ expressed by the above Equation (31) yields $R\_{2,3}^{f\_1}(x, y)$:

$$R\_{2,3}^{f\_1}(x, y) = 1 + \frac{y}{2} + \frac{y^2}{3} + \frac{y^3}{4} + \cfrac{x}{2 - \frac{4}{3}y - \frac{1}{9}y^2 - \frac{8}{135}y^3 - \cfrac{x}{\frac{3}{4} + \frac{7}{16}y + \frac{293}{960}y^2 + \frac{299}{1280}y^3}}.\tag{32}$$
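The truncated expansion is cheap to evaluate: it is one power-series sum and two divisions. A minimal Python sketch (our own transcription, not code from the paper) that checks $R\_{2,3}^{f\_1}$ against the closed form near the origin:

```python
import math

def f1(x, y):
    # Closed form of Equation (29); requires x != y
    return (math.log(1 - x) - math.log(1 - y)) / (y - x)

def R23_f1(x, y):
    # Truncated Thiele-Newton blending expansion R_{2,3}^{f1} of Equation (32):
    # b0 + x / (b1 - x / b2), with each partial denominator truncated at y^3
    b0 = 1 + y / 2 + y**2 / 3 + y**3 / 4
    b1 = 2 - 4 * y / 3 - y**2 / 9 - 8 * y**3 / 135
    b2 = 3 / 4 + 7 * y / 16 + 293 * y**2 / 960 + 299 * y**3 / 1280
    return b0 + x / (b1 - x / b2)

x, y = 0.1, 0.2
print(f1(x, y) - R23_f1(x, y))  # small residual near the origin
```

At (0.1, 0.2) the residual is on the order of 10<sup>−4</sup>, consistent with a low-order rational approximant.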

On the other hand, the bivariate function *f*<sub>1</sub>(*x*, *y*) defined by Equation (29) can be expanded at the point (*ξ*, *ζ*) = (0, 0) by means of the Appell series $F1^{f\_1}(a, b, c; d; x, y)$, defined for |*x*| < 1, |*y*| < 1, and *x* ≠ *y* by the following bivariate series (see [17]):

$$F1^{f\_1}(a,b,c;d; x, y) = \sum\_{i,j=0}^{\infty} \frac{(a)\_{i+j}(b)\_{i}(c)\_{j}}{(d)\_{i+j}\,i!\,j!} x^{i} y^{j},\tag{33}$$

where *a* = *b* = *c* = 1, *d* = 2, and the Pochhammer symbol (*τ*)*k* represents the rising factorial:

$$(\tau)\_k = \tau(\tau+1)(\tau+2)\cdots(\tau+k-1)\tag{34}$$

for any *τ* ∈ R<sup>+</sup> (see [17,32]). In particular, $(1)\_0 = 1$, $(1)\_k = k!$, and $(2)\_k = (k + 1)!$. For Equation (33), the following polynomial:

$$F1\_{m,n}^{f\_1}(a,b,c;d; x, y) = \sum\_{i=0}^{m} \sum\_{j=0}^{n} \frac{(a)\_{i+j}(b)\_{i}(c)\_{j}}{(d)\_{i+j}\,i!\,j!} x^{i} y^{j} \tag{35}$$

is defined as the (*m*, *n*)-order truncated Appell series, where *m* ∈ N and *n* ∈ N.
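Definitions (34) and (35) translate directly into code. A minimal Python sketch (function names are our own) of the rising factorial and the (*m*, *n*)-order truncated Appell series:

```python
import math

def pochhammer(tau, k):
    # Rising factorial (tau)_k = tau (tau+1) ... (tau+k-1), Equation (34)
    result = 1
    for s in range(k):
        result *= tau + s
    return result

def F1_truncated(a, b, c, d, x, y, m, n):
    # (m, n)-order truncated Appell series, Equation (35)
    total = 0.0
    for i in range(m + 1):
        for j in range(n + 1):
            total += (pochhammer(a, i + j) * pochhammer(b, i) * pochhammer(c, j)
                      / (pochhammer(d, i + j) * math.factorial(i) * math.factorial(j))
                      * x**i * y**j)
    return total

# For a = b = c = 1, d = 2, the general term reduces to x^i y^j / (i + j + 1)
print(F1_truncated(1, 1, 1, 2, 0.1, 0.2, 2, 3))
```

Note that $(1)\_k = k!$ and $(2)\_k = (k+1)!$, so for the parameters of Equation (36) the coefficient ratio collapses to 1/(*i* + *j* + 1).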

By Equations (33)–(35), we have:

$$f\_1(x, y) = F1^{f\_1}(1, 1, 1; 2; x, y) = \sum\_{i, j = 0}^{\infty} \frac{1}{i + j + 1} x^i y^j \tag{36}$$

and for *m* = 2, *n* = 3, the (2, 3)-order truncated Appell series is given by:

$$F1\_{2,3}^{f\_1}(1,1,1;2;x,y) = 1 + \frac{x}{2} + \frac{x^2}{3} + \frac{y}{2} + \frac{xy}{3} + \frac{x^2y}{4} + \frac{y^2}{3} + \frac{xy^2}{4} + \frac{x^2y^2}{5} + \frac{y^3}{4} + \frac{xy^3}{5} + \frac{x^2y^3}{6}.\tag{37}$$

Second, performing the same procedure for the bivariate function *f*<sub>2</sub>(*x*, *y*) defined by Equation (30) gives the coefficients of Thiele–Newton's expansion, denoted by $a\_{i,j}^{f\_2}$. Some values of $a\_{i,j}^{f\_2}$, *i* = 0, 1, 2, . . . , *m*, . . . ; *j* = 0, 1, 2, . . . , *n*, . . . , are listed in Table 2.

**Table 2.** The coefficients $a\_{i,j}^{f\_2}$ of Thiele–Newton's expansion of *f*<sub>2</sub>(*x*, *y*) given by Equation (30).


Therefore, according to the values of $a\_{i,j}^{f\_2}$, *i* = 0, 1, 2, . . . ; *j* = 0, 1, 2, . . . , in Table 2, Thiele–Newton's blending expansion of *f*<sub>2</sub>(*x*, *y*) at (*ξ*, *ζ*) = (0, 0) can be written as:

$$f\_2(x, y) = R^{f\_2}(x, y) = 1 + y + y^2 + y^3 + y^4 + \cdots + \cfrac{x}{1 - \frac{4}{3}y + \frac{5}{18}y^2 + \frac{4}{135}y^3 + \frac{17}{1620}y^4 + \cdots + \cfrac{x}{-1 - \frac{7}{6}y - \frac{221}{180}y^2 - \frac{151}{120}y^3 - \frac{10721}{8400}y^4 + \cdots}} \tag{38}$$

The corresponding truncated Thiele–Newton's blending expansion $R\_{2,3}^{f\_2}(x, y)$ of $R^{f\_2}(x, y)$ is:

$$R\_{2,3}^{f\_2}(x,y) = 1 + y + y^2 + y^3 + \cfrac{x}{1 - \frac{4}{3}y + \frac{5}{18}y^2 + \frac{4}{135}y^3 - \cfrac{x}{1 + \frac{7}{6}y + \frac{221}{180}y^2 + \frac{151}{120}y^3}}.\tag{39}$$

By a similar technique, consider the Appell series for the bivariate function *f*<sub>2</sub>(*x*, *y*) expanded about the point (*ξ*, *ζ*) = (0, 0):

$$F1^{f\_2}(1,2,2;2;x,y) = \sum\_{i,j=0}^{\infty} \frac{(1)\_{i+j}(2)\_{i}(2)\_{j}}{(2)\_{i+j}\,i!\,j!} x^{i} y^{j} = \sum\_{i,j=0}^{\infty} \frac{(i+1)(j+1)}{i+j+1} x^{i} y^{j}.\tag{40}$$

The (2, 3)-order truncated Appell series for *f*2(*<sup>x</sup>*, *y*) is:

$$F1\_{2,3}^{f\_2}(1,2,2;2;x,y) = 1 + x + x^2 + y + \frac{4}{3}xy + \frac{3}{2}x^2y + y^2 + \frac{3}{2}xy^2 + \frac{9}{5}x^2y^2 + y^3 + \frac{8}{5}xy^3 + 2x^2y^3. \tag{41}$$
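The two truncations of *f*<sub>2</sub> can now be compared numerically against the closed form (30). A minimal Python sketch (our own transcription of Equations (39) and (41); function names are ours):

```python
import math

def f2(x, y):
    # Closed form of Equation (30); requires x != y
    return (x**2 / ((1 - x) * (x - y)**2)
            + y**2 / ((1 - y) * (x - y)**2)
            + 2 * x * y * (math.log(1 - x) - math.log(1 - y)) / (x - y)**3)

def R23_f2(x, y):
    # Truncated Thiele-Newton blending expansion R_{2,3}^{f2}, Equation (39)
    b0 = 1 + y + y**2 + y**3
    b1 = 1 - 4 * y / 3 + 5 * y**2 / 18 + 4 * y**3 / 135
    b2 = 1 + 7 * y / 6 + 221 * y**2 / 180 + 151 * y**3 / 120
    return b0 + x / (b1 - x / b2)

def F1_23_f2(x, y):
    # (2,3)-order truncated Appell series for f2, Equation (41)
    return sum((i + 1) * (j + 1) / (i + j + 1) * x**i * y**j
               for i in range(3) for j in range(4))

x, y = 0.1, 0.2
e = abs(f2(x, y) - R23_f2(x, y))    # Thiele-Newton error, cf. Equation (42)
E = abs(f2(x, y) - F1_23_f2(x, y))  # Appell-series error, cf. Equation (43)
print(e, E)
```

At (0.1, 0.2) the rational truncation (39) gives a smaller error than the polynomial truncation (41), in line with the comparison reported in Table 4.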

Considering the errors, we let:

$$e\_{2,3}^{f\_k} = f\_k(x, y) - R\_{2,3}^{f\_k}(x, y) \tag{42}$$

and:

$$E\_{2,3}^{f\_k} = f\_k(x, y) - F1\_{2,3}^{f\_k}(a, b, c; d; x, y) \tag{43}$$

for *k* = 1, 2.

Table 3 lists various values of (*x*, *y*), together with the values of the bivariate function *f*<sub>1</sub>(*x*, *y*), the truncated Thiele–Newton's blending expansion $R\_{2,3}^{f\_1}(x, y)$, and the truncated Appell series $F1\_{2,3}^{f\_1}(1, 1, 1; 2; x, y)$. Furthermore, for comparison purposes, the errors $e\_{2,3}^{f\_1}$ and $E\_{2,3}^{f\_1}$ are given in this table. It can be seen from Table 3 that the error $e\_{2,3}^{f\_1}$ of the truncated Thiele–Newton's blending expansion $R\_{2,3}^{f\_1}(x, y)$ is smaller than the error $E\_{2,3}^{f\_1}$ of the truncated Appell series $F1\_{2,3}^{f\_1}(1, 1, 1; 2; x, y)$. Similarly, Table 4 displays the numerical results for the bivariate function *f*<sub>2</sub>(*x*, *y*) defined by Equation (30). These results illustrate that the approximation by the truncated Thiele–Newton's blending expansion is clearly superior for the two test examples.


**Table 3.** Comparison of the numerical results obtained using $R\_{2,3}^{f\_1}(x, y)$ and $F1\_{2,3}^{f\_1}(1, 1, 1; 2; x, y)$.

**Table 4.** Comparison of the numerical results obtained using $R\_{2,3}^{f\_2}(x, y)$ and $F1\_{2,3}^{f\_2}(1, 2, 2; 2; x, y)$.


## **5. Conclusions**

In Section 3, we generalized Thiele's expansion of a univariate function to the bivariate case and thereby obtained a rational approximation method, namely Thiele–Newton's blending expansion of a bivariate function. Furthermore, we proposed the Viscovatov-like algorithm for computing the coefficients of Thiele–Newton's expansion and proved its validity. Finally, we applied the Viscovatov-like algorithm to two test functions. The numerical experiments and comparisons presented in Tables 3 and 4 show that Thiele–Newton's blending expansion yields a much better approximation than the polynomial expansion. The next step in this research is the consideration of the vector-valued case by a similar technique.

**Author Contributions:** All of the authors contributed equally and worked together to develop the present manuscript.

**Funding:** This research was funded by the Project of Leading Talent Introduction and Cultivation in Colleges and Universities of the Education Department of Anhui Province (Grant No. gxfxZD2016270), the Natural Science Key Foundation of the Education Department of Anhui Province (Grant No. KJ2013A183), the Incubation Project of the National Scientific Research Foundation of Bengbu University (Grant No. 2018GJPY04), and the Project of Quality Curriculums of the Education Department of Anhui Province (Grant Nos. 2016gxx087, 2018mooc517).

**Acknowledgments:** The authors are thankful to the anonymous reviewers for their valuable comments.

**Conflicts of Interest:** The authors declare no conflict of interest.
