*2.3. Native Spaces*

Every strictly positive definite function can be associated with a reproducing kernel Hilbert space, called its native space (see [5]).

**Definition 4.** *Suppose* Φ ∈ *C*(ℝⁿ) ∩ *L*¹(ℝⁿ) *is a real-valued strictly positive definite function. Then, the native space of* Φ *is defined by*

$$\mathcal{N}\_{\Phi}(\mathbb{R}^n) = \left\{ f \in L^2(\mathbb{R}^n) \cap C(\mathbb{R}^n) : \frac{\widehat{f}}{\sqrt{\widehat{\Phi}}} \in L^2(\mathbb{R}^n) \right\},$$

*and equip this space with the norm*

$$\|f\|\_{\mathcal{N}\_{\Phi}(\mathbb{R}^{n})}^{2} = \int\_{\mathbb{R}^{n}} \frac{|\widehat{f}(\omega)|^{2}}{\widehat{\Phi}(\omega)}\, d\omega < \infty. \tag{3}$$

For any domain Ω ⊆ ℝⁿ, NΦ(Ω) is in fact the completion of the pre-Hilbert space *H*Φ(Ω) = span{Φ(·, **y**) : **y** ∈ Ω}. Of course, NΦ(Ω) contains all functions of the form:

$$f = \sum\_{j=1}^{N} c\_j \Phi(\cdot, \mathbf{x}\_j)$$

provided **x***j* ∈ Ω, and the norm of such an *f* can be computed equivalently as:

$$\|f\|\_{\mathcal{N}\_{\Phi}(\Omega)}^2 = \sum\_{j=1}^N \sum\_{k=1}^N c\_j c\_k \Phi(\mathbf{x}\_j, \mathbf{x}\_k). \tag{4}$$

Here, *N* = ∞ is also allowed.
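As a small numerical illustration of the quadratic-form norm (4), the sketch below computes ‖f‖² = **c**ᵀA**c** for a function of the form (4). A Gaussian kernel is used here purely as a stand-in for a generic strictly positive definite Φ, and the points and coefficients are arbitrary:

```python
import numpy as np

# Sketch: the native-space norm (4) of f = sum_j c_j Phi(., x_j) is the
# quadratic form c^T A c, with A_{jk} = Phi(x_j, x_k).  The Gaussian kernel
# below is a stand-in for a generic strictly positive definite Phi.
def kernel(x, y):
    return np.exp(-np.sum((x - y) ** 2))

def native_norm_sq(c, pts):
    A = np.array([[kernel(p, q) for q in pts] for p in pts])
    return c @ A @ c

pts = np.array([[0.0, 0.0], [0.5, 0.5], [1.0, 0.0]])
c = np.array([1.0, -2.0, 1.0])
print(native_norm_sq(c, pts))  # positive, since A is positive definite
```

Positive definiteness of the kernel guarantees that this quadratic form is positive for every nonzero coefficient vector.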

#### **3. Truncated Exponential Function**

In this section, we design a truncated exponential function:

$$
\varphi(r) = (e^{1-r} - 1)^l\_+ \tag{5}
$$

with *r* ∈ ℝ, where *l* is a positive integer. By Definition 1, Φ(**x**) = *ϕ*(*r*) with *r* = ‖**x**‖ and **x** ∈ ℝⁿ is a radial function on ℝⁿ centered at the origin.
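A minimal numerical sketch of (5) (with the arbitrary choice l = 2) confirms the compact support: *ϕ* vanishes for r ≥ 1, since e^{1−r} − 1 ≤ 0 there.

```python
import numpy as np

# Sketch of the truncated exponential function (5): phi(r) = (e^{1-r} - 1)^l
# for r < 1 and 0 otherwise, so its support is the interval [0, 1].
def phi(r, l=2):
    return np.maximum(np.exp(1.0 - r) - 1.0, 0.0) ** l

print(phi(0.0))            # (e - 1)^2, about 2.952
print(phi(1.0), phi(2.0))  # 0.0 outside the support
```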

The following theorem characterizes the strict positive definiteness of Φ(**x**).

**Theorem 3.** *The function* Φ(**x**) = (e^{1−‖**x**‖} − 1)ˡ₊ *is strictly positive definite and radial on* ℝⁿ *provided the parameter l satisfies l* ≥ ⌊*n*/2⌋ + 1*.*

**Proof.** Theorem 2 shows that multiply monotone functions give rise to positive definite radial functions. Therefore, we only need to verify the multiple monotonicity of the univariate function *ϕ*(*r*) defined by (5).

Obviously, the truncated exponential function *ϕ*(*r*) belongs to *C*^{l−1}(0, ∞), and for *r* ∈ (0, ∞),

$$\varphi(r) = (e^{1-r} - 1)^l\_+ \ge 0,$$

$$\varphi'(r) = -le^{1-r}(e^{1-r} - 1)^{l-1}\_+ \le 0,$$

$$\varphi''(r) = l(l-1)(e^{1-r})^2(e^{1-r} - 1)^{l-2}\_+ + le^{1-r}(e^{1-r} - 1)^{l-1}\_+ \ge 0.$$

For any positive integers *p* and *q*, (e^{1−r})^p and (e^{1−r} − 1)^q₊ are non-negative with non-positive derivatives. Therefore,

$$(-1)^k \varphi^{(k)}(r) \ge 0, \quad k = 0, 1, \cdots, l - 1,$$

and *ϕ*(*r*) is (*l* + 1)-times monotone on (0, ∞). Then, the conclusion follows directly by Theorem 2.
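The sign pattern above can be sanity-checked numerically with finite differences. This is only a spot check on (0, 1), with the arbitrary choices l = 3 and step dr = 1e−4:

```python
import numpy as np

# Finite-difference check of (-1)^k phi^(k)(r) >= 0 for k = 0, 1, 2 on (0, 1),
# for phi(r) = (e^{1-r} - 1)^l_+ with the arbitrary choice l = 3.
l = 3
phi = lambda r: np.maximum(np.exp(1.0 - r) - 1.0, 0.0) ** l

r = np.linspace(0.01, 0.99, 99)
dr = 1e-4
d1 = (phi(r + dr) - phi(r - dr)) / (2 * dr)            # ~ phi'
d2 = (phi(r + dr) - 2 * phi(r) + phi(r - dr)) / dr**2  # ~ phi''
print(np.all(phi(r) >= 0), np.all(d1 <= 1e-6), np.all(d2 >= -1e-6))
# -> True True True
```

The small tolerances absorb floating-point noise in the second-difference quotient.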

There are two ways to scale *ϕ*(*r*):

(1) In order to make *ϕ*(0) = 1, we can multiply (5) by the positive constant 1/(*e* − 1)ˡ, so that *ϕ*(*r*) = (e^{1−r} − 1)ˡ₊ / (*e* − 1)ˡ. Here, *ϕ*(*r*) is still strictly positive definite and has the same support as (5).

(2) Adding a shape parameter *ε* > 0, the scaled truncated exponential function can be given by:

$$
\varphi(r) = (e^{1-\varepsilon r} - 1)^l\_+.\tag{6}
$$

Obviously, a smaller *ε* causes the function to become flatter and the support to become larger, while increasing *ε* leads to a more peaked *ϕ*(*r*) and therefore localizes its support.
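The support radius can be read off directly from (6): *ϕ* vanishes once e^{1−εr} ≤ 1, i.e. for r ≥ 1/ε. A small sketch (with the arbitrary choice l = 2) verifies this on a grid:

```python
import numpy as np

# Sketch: with shape parameter eps as in (6), phi(r) vanishes for r >= 1/eps,
# so the support radius is 1/eps and shrinks as eps grows.
def phi_scaled(r, eps, l=2):
    return np.maximum(np.exp(1.0 - eps * r) - 1.0, 0.0) ** l

r = np.linspace(0.0, 5.0, 5001)
supports = {eps: r[phi_scaled(r, eps) > 0.0].max() for eps in (0.5, 1.0, 2.0)}
print(supports)  # approximately {0.5: 2.0, 1.0: 1.0, 2.0: 0.5}
```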

#### **4. Errors in Native Spaces**

This section discusses scattered data interpolation with the compactly supported radial basis functions Φ(**x**, **x**ₖ) = (e^{1−‖**x**−**x**ₖ‖} − 1)ˡ₊, **x**, **x**ₖ ∈ ℝⁿ.

Given a distinct scattered point set X = {**x**₁, **x**₂, ···, **x**_N} ⊂ ℝⁿ, the interpolant of the target function *f* can be represented as:

$$P\_f(\mathbf{x}) = \sum\_{j=1}^{N} c\_j \Phi(\mathbf{x}, \mathbf{x}\_j), \quad \mathbf{x} \in \mathbb{R}^n. \tag{7}$$

Solving the interpolation problem leads to the following system of linear equations:

$$A\mathbf{c} = \mathbf{y},\tag{8}$$

where the entries of the matrix *A* are given by *A*ᵢ,ⱼ = Φ(**x**ᵢ, **x**ⱼ), *i*, *j* = 1, ···, *N*, **c** = [*c*₁, ···, *c*_N]ᵀ, and **y** = [*f*(**x**₁), ···, *f*(**x**_N)]ᵀ. A solution to the system (8) exists and is unique, since the matrix *A* is positive definite.
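A minimal sketch of assembling and solving system (8) with the TERBF kernel follows; the points and data values are arbitrary, and l = 2 is chosen so that l ≥ ⌊n/2⌋ + 1 holds for n = 2. Since *A* is positive definite, a Cholesky factorization is applicable:

```python
import numpy as np

# Sketch of system (8) A c = y with the TERBF kernel (l = 2, n = 2).
def terbf(r, l=2):
    return np.maximum(np.exp(1.0 - r) - 1.0, 0.0) ** l

def interp_coeffs(X, y):
    A = terbf(np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1))
    L = np.linalg.cholesky(A)  # succeeds because A is positive definite
    return np.linalg.solve(L.T, np.linalg.solve(L, y))

X = np.array([[0.1, 0.2], [0.4, 0.9], [0.8, 0.3], [0.6, 0.6]])
y = np.sin(X[:, 0]) + X[:, 1]
c = interp_coeffs(X, y)
A = terbf(np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1))
print(np.allclose(A @ c, y))  # the interpolation conditions hold
```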

Let **u**\*(**x**) = [*u*₁\*(**x**), ···, *u*_N\*(**x**)]ᵀ be a cardinal basis vector function; then *P*_f also has the following form (see [6]):

$$P\_f(\mathbf{x}) = \sum\_{j=1}^{N} f(\mathbf{x}\_j) u\_j^\*(\mathbf{x}), \quad \mathbf{x} \in \mathbb{R}^n. \tag{9}$$

Comparing (9) with (7), we have:

$$A\mathbf{u}^\*(\mathbf{x}) = \mathbf{b}(\mathbf{x}),\tag{10}$$


where **b**(**x**) = [Φ(**x**, **x**₁), ···, Φ(**x**, **x**_N)]ᵀ.

Equation (10) shows a connection between the radial basis functions and the cardinal basis functions.
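Relation (10) can be checked directly: evaluating the cardinal basis at a data site must return a Kronecker delta. A sketch with three arbitrary sites and l = 2:

```python
import numpy as np

# Sketch of relation (10): the cardinal basis values u*(x) solve A u = b(x).
# Evaluated at a data site x_i, u*_j(x_i) is the Kronecker delta.
def terbf(r, l=2):
    return np.maximum(np.exp(1.0 - r) - 1.0, 0.0) ** l

X = np.array([[0.1, 0.2], [0.4, 0.9], [0.8, 0.3]])
A = terbf(np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1))

def cardinal(x):
    b = terbf(np.linalg.norm(X - x, axis=-1))
    return np.linalg.solve(A, b)

print(np.round(cardinal(X[0]), 6))  # ~ [1, 0, 0]: cardinality at the first site
```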

First, the generic error estimate is as follows.

**Theorem 4.** *Let* Ω ⊆ ℝⁿ*, let* X = {**x**₁, **x**₂, ···, **x**_N} ⊂ Ω *be distinct, and let* Φ ∈ *C*(Ω × Ω) *be the truncated exponential radial basis function with l* ≥ ⌊*n*/2⌋ + 1*. Denote the interpolant to f* ∈ NΦ(Ω) *on the set* X *by P*_f*. Then, for every* **x** ∈ Ω*, we have*

$$|f(\mathbf{x}) - P\_f(\mathbf{x})| \le P\_{\Phi, \mathcal{X}}(\mathbf{x}) \|f\|\_{\mathcal{N}\_{\Phi}(\Omega)}.$$

*Here*

$$P\_{\Phi, \mathcal{X}}(\mathbf{x}) = \sqrt{C - (\mathbf{b}(\mathbf{x}))^T A^{-1} \mathbf{b}(\mathbf{x})}, \quad C = (e - 1)^l.$$

**Proof.** Since *f* ∈ NΦ(Ω), the reproducing property yields

$$f(\mathbf{x}) = \langle f, \Phi(\cdot, \mathbf{x}) \rangle\_{\mathcal{N}\_{\Phi}(\Omega)}.$$

Then

$$P\_f(\mathbf{x}) = \sum\_{j=1}^N f(\mathbf{x}\_j) u\_j^\*(\mathbf{x}) = \langle f, \sum\_{j=1}^N u\_j^\*(\mathbf{x}) \Phi(\cdot, \mathbf{x}\_j) \rangle\_{\mathcal{N}\_\Phi(\Omega)}.$$

Applying the Cauchy–Schwarz inequality, we have

$$\begin{aligned} |f(\mathbf{x}) - P\_f(\mathbf{x})| &= \left| \langle f, \Phi(\cdot, \mathbf{x}) - \sum\_{j=1}^N u\_j^\*(\mathbf{x}) \Phi(\cdot, \mathbf{x}\_j) \rangle\_{\mathcal{N}\_\Phi(\Omega)} \right| \\ &\le \|f\|\_{\mathcal{N}\_\Phi(\Omega)} \left\| \Phi(\cdot, \mathbf{x}) - \sum\_{j=1}^N u\_j^\*(\mathbf{x}) \Phi(\cdot, \mathbf{x}\_j) \right\|\_{\mathcal{N}\_\Phi(\Omega)}. \end{aligned}$$

Denote the second factor as

$$P\_{\Phi, \mathcal{X}}(\mathbf{x}) = \left\| \Phi(\cdot, \mathbf{x}) - \sum\_{j=1}^{N} u\_j^\*(\mathbf{x}) \Phi(\cdot, \mathbf{x}\_j) \right\|\_{\mathcal{N}\_\Phi(\Omega)}.$$

By the definition of the native space norm and Equation (10), *P*Φ,X(**x**) can be rewritten as

$$P\_{\Phi, \mathcal{X}}(\mathbf{x}) = \sqrt{\Phi(\mathbf{x}, \mathbf{x}) - (\mathbf{b}(\mathbf{x}))^T A^{-1} \mathbf{b}(\mathbf{x})}.$$

Then, the conclusion follows directly by the strict positive definiteness of Φ, noting that Φ(**x**, **x**) = *ϕ*(0) = (*e* − 1)ˡ.

One of the main benefits of Theorem 4 is that we are now able to estimate the interpolation error by computing *P*Φ,X(**x**). In addition, *P*Φ,X(**x**) can be used as an indicator for choosing a good shape parameter.
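The power function from Theorem 4 is directly computable. The sketch below evaluates it for the TERBF kernel on an arbitrary 1D example (nine equally spaced sites, l = 2); its maximum over a fine grid bounds the worst-case error per unit native-space norm:

```python
import numpy as np

# Sketch: P_{Phi,X}(x) = sqrt(Phi(x,x) - b(x)^T A^{-1} b(x)) from Theorem 4,
# for the TERBF kernel with l = 2 on nine equally spaced 1D sites.
def terbf(r, l=2):
    return np.maximum(np.exp(1.0 - r) - 1.0, 0.0) ** l

X = np.linspace(0.0, 1.0, 9)[:, None]
A = terbf(np.abs(X - X.T))
phi0 = (np.e - 1.0) ** 2          # Phi(x, x) = (e - 1)^l

def power(x):
    b = terbf(np.abs(X[:, 0] - x))
    val = phi0 - b @ np.linalg.solve(A, b)
    return np.sqrt(max(val, 0.0))  # clip tiny negatives from rounding

grid = np.linspace(0.0, 1.0, 201)
print(max(power(x) for x in grid))  # worst-case factor over [0, 1]
```

The power function vanishes at the data sites and is positive between them, which is exactly why it serves as a pointwise error indicator.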

When equipping the dataset X with a fill distance (or sample density, see [7]):

$$h\_{\mathcal{X},\Omega} = \sup\_{\mathbf{x} \in \Omega} \min\_{\mathbf{x}\_j \in \mathcal{X}} \|\mathbf{x} - \mathbf{x}\_j\|,$$

for any symmetric and strictly positive definite Φ ∈ *C*^{2k}(Ω × Ω), the following generic error estimate can be obtained.
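The fill distance above (the radius of the largest site-free ball in Ω) can be approximated numerically by probing Ω on a dense grid. A sketch for Ω = [0, 1]² with arbitrary random sites:

```python
import numpy as np

# Sketch: approximate h_{X,Omega} = sup_x min_j ||x - x_j|| by replacing the
# supremum over Omega = [0,1]^2 with a maximum over a dense probe grid.
def fill_distance(X, probes):
    d = np.linalg.norm(probes[:, None, :] - X[None, :, :], axis=-1)
    return d.min(axis=1).max()

rng = np.random.default_rng(0)
X = rng.random((50, 2))                     # 50 scattered sites in [0,1]^2
g = np.linspace(0.0, 1.0, 101)
probes = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
h = fill_distance(X, probes)
print(h)  # shrinks toward 0 as more sites are added
```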

**Theorem 5.** *Suppose* Ω ⊆ ℝⁿ *is bounded and satisfies an interior cone condition. Suppose* Φ ∈ *C*^{2k}(Ω × Ω) *is symmetric and strictly positive definite. Denote the interpolant to f* ∈ NΦ(Ω) *on the set* X *by P*_f*. Then, there exist positive constants h*₀ *and C such that:*

$$|f(\mathbf{x}) - P\_f(\mathbf{x})| \le C h\_{\mathcal{X}, \Omega}^k \sqrt{D\_{\Phi}(\mathbf{x})}\, \|f\|\_{\mathcal{N}\_{\Phi}(\Omega)},$$

*provided h*X,Ω ≤ *h*₀*. Here*

$$D\_{\Phi}(\mathbf{x}) = \max\_{|\beta| = 2k}\ \max\_{\mathbf{w}, \mathbf{z} \in \Omega \cap B(\mathbf{x},\, ch\_{\mathcal{X},\Omega})} |D\_2^{\beta} \Phi(\mathbf{w}, \mathbf{z})|^2$$

*with B*(**x**, *ch*X,Ω) *denoting the ball of radius ch*X,Ω *centered at* **x***.*

**Proof.** The estimate can be obtained by applying the Taylor expansion. The technical details can be found in [2,3].

Since the truncated exponential radial basis function Φ is only in *C*⁰(Ω × Ω), we have *k* = 0, and the factor *h*ᵏX,Ω in the error estimate of Theorem 5 disappears. Therefore, we need to bound *D*Φ(**x**) by some additional powers of *h*X,Ω in order to obtain an estimate in terms of the fill distance. The resulting theorem is as follows.

**Theorem 6.** *Suppose* Ω ⊆ ℝⁿ *is bounded and satisfies an interior cone condition. Suppose* Φ *is the truncated exponential radial basis function with l* ≥ ⌊*n*/2⌋ + 1*. Denote the interpolant to f* ∈ NΦ(Ω) *on the set* X *by P*_f*. Then, there exist positive constants h*₀ *and C such that:*

$$|f(\mathbf{x}) - P\_f(\mathbf{x})| \le C h\_{\mathcal{X}, \Omega}^{\frac{1}{2}} ||f||\_{\mathcal{N}\_{\Phi}(\Omega)}.$$

*provided h*X,Ω ≤ *h*₀*.*

**Proof.** From [2], for *C*⁰ functions, the factor *D*Φ(**x**) can be expressed as:

$$D\_{\Phi}(\mathbf{x}) = \|\Phi\|\_{L^{\infty}(B(\mathbf{0},\, 2ch\_{\mathcal{X},\Omega}))}^2,$$

independent of **x**. Selecting *h*₀ ≤ 1/(4*c*), we bound the *D*Φ(**x**) determined by the truncated exponential radial basis function.

Using the definition of Φ and Lagrange's mean value theorem, we have:

$$\begin{aligned} \|\Phi\|\_{L^{\infty}(B(\mathbf{0},\,2ch\_{\mathcal{X},\Omega}))} &= \max\_{r \in (0,\, 4ch\_{\mathcal{X},\Omega})} |e^{1-r} - 1|^l \\ &\le C \max\_{r \in (0,\, 4ch\_{\mathcal{X},\Omega})} |1-r|^l \\ &= C \|\Psi\|\_{L^{\infty}(B(\mathbf{0},\, 2ch\_{\mathcal{X},\Omega}))} \end{aligned}$$

with Ψ denoting the truncated power radial basis function. From [2],

$$\|\Psi\|\_{L^{\infty}(B(\mathbf{0},\,2ch\_{\mathcal{X},\Omega}))} \leq Ch\_{\mathcal{X},\Omega}^{\frac{1}{2}}.$$

## **5. Numerical Experiments**

## *5.1. Single-Level Approximation*

This subsection shows how our truncated exponential radial basis function (TERBF) works at a single level. Our first 2D target surface is the standard Franke's function. In the experiments, we let the kernel Φ in (7) be the truncated exponential radial function Φ(**x**) = (e^{1−ε‖**x**‖} − 1)²₊. A Halton point set with increasing data density is generated in the domain [0, 1]². Tables 1–8 list the test results of Gaussian interpolation, MQ (Multiquadric) interpolation, IMQ (Inverse Multiquadric) interpolation, and TERBF interpolation with different values of *ε*, respectively. In the tables, the RMS-error is computed by

$$\text{RMS-error} = \sqrt{\frac{1}{M} \sum\_{k=1}^{M} [f(\xi\_k) - P\_f(\xi\_k)]^2} = \frac{1}{\sqrt{M}} \|f - P\_f\|\_2,$$

where *ξk* are the evaluation points. The rate listed in the Tables is computed using the formula:

$$\text{rate}\_{k} = \frac{\ln(e\_{k-1}/e\_k)}{\ln(h\_{k-1}/h\_k)}, \quad k = 2, 3, 4, 5, 6, \dots$$

where *e*ₖ is the RMS-error for experiment number *k* and *h*ₖ is the fill distance at the *k*-th level. cond(*A*) is the condition number of the interpolation matrix defined by (8).

From Tables 1–6, we observe that the globally supported radial basis functions (Gaussian, MQ, IMQ) can obtain ideal accuracy when a smaller value of *ε* is used. However, the condition number of the interpolation matrix becomes surprisingly large as the scattered data increase. We note that MATLAB issues a "matrix close to singular" warning when carrying out the Gaussian and MQ interpolation experiments for *N* = 1089, 4225 and *ε* = 10. Tables 7 and 8 show that TERBF interpolation can not only keep better approximation accuracy, but also produce a well-conditioned interpolation matrix. Even for *N* = 4225 and *ε* = 0.7, the condition number of the presented method is relatively small (about 10⁵).

The change of RMS-error with varying *ε* values is displayed in Figure 1. We see that the error curves of Gaussian and MQ interpolation are not monotonic and even become erratic for the largest datasets. However, the curves of IMQ and TERBF interpolation are relatively smooth. In particular, TERBF greatly improves the condition number of the interpolation matrix.

To show the application of TERBF approximation to compact 3D images, we interpolate the Beethoven data in Figure 2 and the Stanford bunny in Figure 3. Numerical experiments suggest that TERBF interpolation is essentially faster than scattered data interpolation with globally supported radial basis functions. However, we observe that TERBF interpolation causes some artifacts, such as the extra surface fragment near the bunny's ear in the left part of Figure 3. This is because the interpolating implicit surface has a narrow support band. The result is better if the sample density is smaller than the width of the support band (see the right part of Figure 3).
Similar observations have been reported in Fasshauer's book [3], where a partition of unity fits based on Wendland's *C*<sup>2</sup> function was used. The same observation was also made in [1].
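The two reported quantities are straightforward to compute. The sketch below implements the RMS-error and the rate formula above; the error sequence is hypothetical, chosen to decay like h², so the estimated rate comes out as 2:

```python
import numpy as np

# Sketch of the reported quantities: RMS-error over M evaluation points and
# the estimated rate rate_k = ln(e_{k-1}/e_k) / ln(h_{k-1}/h_k).
def rms_error(f_true, f_approx):
    return np.sqrt(np.mean((f_true - f_approx) ** 2))

def rates(errors, fills):
    e, h = np.asarray(errors), np.asarray(fills)
    return np.log(e[:-1] / e[1:]) / np.log(h[:-1] / h[1:])

# Hypothetical errors decaying like h^2 -> estimated rate ~ 2
h = np.array([0.2, 0.1, 0.05, 0.025])
e = h ** 2
print(rates(e, h))  # -> [2. 2. 2.]
```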


**Table 1.** Gaussian interpolation to the 2D Franke's function with *ε* = 20.

**Table 2.** Gaussian interpolation to the 2D Franke's function with *ε* = 10.


**Table 3.** MQ interpolation to the 2D Franke's function with *ε* = 20.


**Table 4.** MQ interpolation to the 2D Franke's function with *ε* = 10.



**Table 5.** IMQ interpolation to the 2D Franke's function with *ε* = 20.

**Table 6.** IMQ interpolation to the 2D Franke's function with *ε* = 10.


**Table 7.** TERBF interpolation to the 2D Franke's function with *ε* = 1.


**Table 8.** TERBF interpolation to the 2D Franke's function with *ε* = 0.7.


(**a**) *ε* curves of Gaussian interpolation; (**b**) *ε* curves of MQ interpolation; (**c**) *ε* curves of IMQ interpolation; (**d**) *ε* curves of TERBF interpolation.

**Figure 1.** RMS-error curves for Gaussian, MQ, IMQ, and TERBF interpolations.

**Figure 2.** TERBF approximation of the Beethoven data. From top left to bottom right: 163 (**a**), 663 (**b**), 1163 (**c**), and 2663 (**d**) points.

**Figure 3.** TERBF approximation of the Stanford bunny with 453 (**left**) and 8171 (**right**) data points.
