*Article* **Arbitrarily Accurate Analytical Approximations for the Error Function**

**Roy M. Howard**

School of Electrical Engineering, Computing and Mathematical Sciences, Curtin University, GPO Box U1987, Perth 6845, Australia; r.howard@curtin.edu.au

**Abstract:** A spline-based integral approximation is utilized to define a sequence of approximations to the error function that converge significantly faster than the default Taylor series. The real case is considered and the approximations can be improved by utilizing the approximation erf(*x*) ≈ 1 for |*x*| > *xo* and with *xo* optimally chosen. Two generalizations are possible; the first is based on demarcating the integration interval into *m* equally spaced subintervals. The second is based on utilizing a larger fixed subinterval, with a known integral, and a smaller subinterval whose integral is to be approximated. Both generalizations lead to significantly improved accuracy. Furthermore, the initial approximations, and those arising from the first generalization, can be utilized as inputs to a custom dynamical system to establish approximations with better convergence properties. Indicative results include those of a fourth-order approximation, based on four subintervals, which leads to a relative error bound of 1.43 × 10<sup>−7</sup> over the interval [0, ∞). The corresponding sixteenth-order approximation achieves a relative error bound of 2.01 × 10<sup>−19</sup>. Various approximations that achieve the set relative error bounds of 10<sup>−4</sup>, 10<sup>−6</sup>, 10<sup>−10</sup>, and 10<sup>−16</sup>, over [0, ∞), are specified. Applications include, first, the definition of functions that are upper and lower bounds, of arbitrary accuracy, for the error function. Second, new series for the error function. Third, new sequences of approximations for exp(−*x*<sup>2</sup>) that have significantly higher convergence properties than a Taylor series approximation. Fourth, the definition of a complementary demarcation function *eC*(*x*) that satisfies the constraint *eC*<sup>2</sup>(*x*) + erf<sup>2</sup>(*x*) = 1. Fifth, arbitrarily accurate approximations for the power and harmonic distortion for a sinusoidal signal subject to an error function nonlinearity. Sixth, approximate expressions for the linear filtering of a step signal that is modeled by the error function.

**Keywords:** error function; function approximation; spline approximation; Gaussian function

**MSC:** 33B20; 41A10; 41A15; 41A58

#### **1. Introduction**

The error function arises in many areas of mathematics, science, and scientific applications including diffusion associated with Brownian motion (Fick's second law), the heat kernel for the heat equation, e.g., [1], the modeling of magnetization, e.g., [2], the modeling of transitions between two levels, for example, smooth or soft limiters, e.g., [3], the psychometric function, e.g., [4,5], the modeling of amplifier non-linearities, e.g., [6,7], and the modeling of rubber-like materials and soft tissue, e.g., [8,9]. It is widely used in the modeling of random phenomena, as the error function defines the cumulative distribution of the Gaussian probability density function; examples include the probability of error in signal detection, option pricing via the Black–Scholes formula, etc. Many other applications exist. In general, the error function is associated with a macro description of physical phenomena and the de Moivre–Laplace theorem is illustrative of the link between fundamental outcomes and a higher-level model.

**Citation:** Howard, R.M. Arbitrarily Accurate Analytical Approximations for the Error Function. *Math. Comput. Appl.* **2022**, *27*, 14. https://doi.org/ 10.3390/mca27010014

Received: 26 November 2021 Accepted: 27 January 2022 Published: 9 February 2022


**Copyright:** © 2022 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

The error function is defined on the complex plane according to

$$\text{erf}(z) = \frac{2}{\sqrt{\pi}} \int\_{\gamma} e^{-\lambda^2} \, d\lambda, \quad z \in \mathbb{C}, \tag{1}$$

where the path *γ* is between the points zero and *z* and is arbitrary. Associated functions are the complementary error function, the Faddeyeva function, the Voigt function, and Dawson's integral, e.g., [10]. The Faddeyeva function and the Voigt function, for example, have applications in spectroscopy, e.g., [11]. The error function can also be defined in terms of the spherical Bessel functions, e.g., Equation (7.6.8) of [10], and the incomplete Gamma function, e.g., Equation (7.11.1) of [10]. Marsaglia [12] provides a brief insight into the history of the error function.

For the real case, which is the case considered in this paper, the error function is defined by the integral

$$\text{erf}(x) = \frac{2}{\sqrt{\pi}} \int\_0^x e^{-\lambda^2} \, d\lambda, \quad x \in \mathbb{R}. \tag{2}$$

The widely used, and associated, cumulative distribution function for the standard normal distribution is defined according to

$$\Phi(x) = \frac{1}{\sqrt{2\pi}} \int\_{-\infty}^{x} \exp\left[\frac{-\lambda^2}{2}\right] d\lambda = 0.5 + 0.5\,\text{erf}\left[\frac{x}{\sqrt{2}}\right].\tag{3}$$
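The relation in Equation (3) is straightforward to check numerically. The following sketch uses Python (rather than the Mathematica employed for the paper's results); the function name `std_normal_cdf` is illustrative:

```python
import math

def std_normal_cdf(x: float) -> float:
    # Equation (3): Phi(x) = 0.5 + 0.5*erf(x/sqrt(2))
    return 0.5 + 0.5 * math.erf(x / math.sqrt(2))
```

For example, `std_normal_cdf(0.0)` returns 0.5, and values at ±*x* sum to one, consistent with the symmetry erf(−*x*) = −erf(*x*).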

Being defined by an integral, which does not have an explicit analytical form, there is interest in approximations of the error function and, in recent decades, many approximations have been developed. Table 1 details indicative approximations for the real case and their relative errors are shown in Figure 1. Most of the approximations detailed in this table are custom and have a limited relative error bound with bounds in the range of 3.05 × 10<sup>−5</sup> [13] to 7.07 × 10<sup>−3</sup> [14]. It is preferable to have an approximation form that can be generalized to create approximations that converge to the error function. Examples include the standard Taylor series, the Bürmann series, and the approximation by Abrarov, which are defined in Table 1.

**Figure 1.** Graph of the magnitude of the relative error in the approximations, detailed in Table 1, for erf(*x*).

Many of the approximations detailed in Table 1 can be improved upon by approximating the associated residual function, denoted *g*, via a Padé approximant or a basis set decomposition. Examples of possible approximation forms, and the resulting residual functions, are given in Table 2. One example is that of a 4/2 Padé approximant for the function *g*3, which leads to the approximation:

$$\text{erf}(x) \approx \sqrt{1 - \exp\left[ -x^2 \cdot \frac{4}{\pi} \left[ 1 + \frac{n\_1 x\_1 + n\_2 x\_1^2 + n\_3 x\_1^3 + n\_4 x\_1^4}{1 + d\_1 x\_1 + d\_2 x\_1^2} \right] \right]}, \quad x\_1 = \frac{x}{x + 1}, \ x \ge 0, \tag{4}$$

$$\begin{split} n\_1 &= \frac{279}{10,000,000}, \ n\_2 = \frac{-303,923}{10,000,000}, \ n\_3 = \frac{34,783}{5,000,000}, \ n\_4 = \frac{40,793}{10,000,000}, \\ d\_1 &= \frac{-21,941,279}{10,000,000}, \ d\_2 = \frac{3,229,407}{2,500,000}. \end{split} \tag{5}$$

The relative error bound in this approximation is 4.02 × 10<sup>−7</sup>. Higher-order Padé approximants can be used to generate approximations with a lower relative error bound. Matic [15] provides a similar approximation with an absolute error of 5.79 × 10<sup>−6</sup>.

An approximation of the error function can also be obtained by combining separate approximations, which are accurate, respectively, for |*x*| ≪ 1 and |*x*| ≫ 1, via a demarcation function *d*:

$$\text{erf}(\mathbf{x}) = \frac{2\mathbf{x}}{\sqrt{\pi}}d(\mathbf{x}) + \left[1 - \frac{e^{-\mathbf{x}^2}}{\sqrt{\pi}\mathbf{x}}\right] \cdot [1 - d(\mathbf{x})], \quad \mathbf{x} \ge \mathbf{0},\tag{6}$$

where

$$d(x) = \frac{\sqrt{\pi}\,x\,\text{erf}(x) - \sqrt{\pi}\,x + e^{-x^2}}{2x^2 - \sqrt{\pi}\,x + e^{-x^2}}.\tag{7}$$

Naturally, an approximation of *d* requires an approximation of the error function. Unsurprisingly, the relative error in the resulting approximation to the error function equals the relative error of the error function approximation utilized within *d*.
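As Equations (6) and (7) together constitute an exact identity, they can be verified numerically; a minimal sketch (Python, with `math.erf` standing in for an exact error function; function names are illustrative):

```python
import math

def d_demarcation(x: float) -> float:
    # Equation (7): exact demarcation function, defined for x > 0
    sp = math.sqrt(math.pi)
    num = sp * x * math.erf(x) - sp * x + math.exp(-x * x)
    den = 2 * x * x - sp * x + math.exp(-x * x)
    return num / den

def erf_via_demarcation(x: float) -> float:
    # Equation (6): blend of the small- and large-argument approximations
    d = d_demarcation(x)
    small = 2 * x / math.sqrt(math.pi)
    large = 1 - math.exp(-x * x) / (math.sqrt(math.pi) * x)
    return small * d + large * (1 - d)
```

For moderate arguments the reconstruction matches `math.erf` to rounding error; for large *x* the numerator of *d* suffers some cancellation, as the discussion above suggests.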

Finally, the efficient numerical implementation of the error function is of interest, and Chevillard [16] and De Schrijver [17] provide results and an overview. Highly accurate piecewise approximations have long been available, e.g., [18].

The two-point spline-based approximations for functions and integrals, detailed in [19], have recently been applied to find arbitrarily accurate approximations for the hyperbolic tangent function [20]. In this paper, the general two-point spline approximation form is applied to define a sequence of convergent approximations for the error function. The basic form of the approximation of order *n*, denoted *fn*, is

$$\text{erf}(\mathbf{x}) \approx f\_n(\mathbf{x}) = p\_{n,0}(\mathbf{x}) + p\_{n,1}(\mathbf{x})e^{-\mathbf{x}^2},\tag{8}$$

where *pn*,1 is a polynomial of order *n* and *pn*,0 is a polynomial of order less than *n*. Convergence of the sequence of approximations *f*1, *f*2, ... to erf(*x*) is shown, and the convergence is significantly better than that of the default Taylor series. For example, the second-order approximation

$$f\_2(\mathbf{x}) = \frac{\mathbf{x}}{\sqrt{\pi}} \cdot \left[1 - \frac{\mathbf{x}^2}{30}\right] + \frac{\mathbf{x}}{\sqrt{\pi}} \cdot \left[1 + \frac{11\mathbf{x}^2}{30} + \frac{\mathbf{x}^4}{15}\right] e^{-\mathbf{x}^2} \tag{9}$$

yields a relative error bound of 0.056 over the interval [0, 2], which is better than a fifteenth-order Taylor series approximation. The approximations can be improved by utilizing the approximation erf(*x*) ≈ 1 for |*x*| ≫ 1 and thereby establishing approximations with a set relative error bound over the interval [0, ∞).
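The second-order approximation of Equation (9), and its stated bound, can be reproduced by sampling [0, 2]; a sketch (the sampling density is an arbitrary choice):

```python
import math

def f2(x: float) -> float:
    # Equation (9): second-order spline-based approximation
    return (x / math.sqrt(math.pi)) * ((1 - x * x / 30)
            + (1 + 11 * x * x / 30 + x ** 4 / 15) * math.exp(-x * x))

# relative error bound over (0, 2], sampled
re_bound = max(abs(1 - f2(x) / math.erf(x))
               for x in (k / 1000 for k in range(1, 2001)))
```

The sampled bound is approximately 0.056, attained at *x* = 2, consistent with the stated figure.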


**Table 1.** Examples of published approximations for erf(*x*), 0 < *x* < ∞. For the third and second last approximations, the coefficient definitions are detailed in the associated reference. The stated relative error bounds arise from sampling the interval [0, 5] with 10,000 uniformly spaced points.

**Table 2.** Residual functions associated with approximations for erf(*x*), 0 < *x* < ∞.


Two generalizations are detailed. The first is of the form

$$\text{erf}(x) \approx p\_0(x) + p\_1(x)e^{-k\_1 x^2} + \dots + p\_{m}(x)e^{-k\_m x^2} \tag{10}$$

and is based on utilizing approximations associated with *m* equally spaced subintervals of the interval [0, *x*]. The second is based on utilizing a fixed subinterval within [0, *x*] and then approximating the error function over the remainder of the interval. Both generalizations lead to significantly improved accuracy. For example, a fourth-order approximation based on four variable subintervals, when used with the approximation erf(*x*) ≈ 1 for *x* ≫ 1, has a relative error bound of 1.43 × 10<sup>−7</sup> over the interval [0, ∞). The corresponding sixteenth-order approximation has a relative error bound of 2.01 × 10<sup>−19</sup>. Finally, by utilizing the solutions of a custom dynamical system, approximations with better convergence properties can be established.

Applications of the proposed approximations for the error function include, first, the definition of functions that are upper and lower bounds, of arbitrary accuracy, for the error function. Second, new series for the error function. Third, new sequences of approximations for exp(−*x*<sup>2</sup>) that have significantly higher convergence properties than a Taylor series approximation. Fourth, the definition of a complementary demarcation function *eC*(*x*) that satisfies the constraint *eC*<sup>2</sup>(*x*) + erf<sup>2</sup>(*x*) = 1. Fifth, arbitrarily accurate approximations for the power and harmonic distortion for a sinusoidal signal subject to an error function nonlinearity. Sixth, approximate expressions for the linear filtering of a step signal modeled by the error function.

Section 2 details the spline-based approximation for the error function and its convergence. Improved approximations, obtained by utilizing the nature of the error function for large arguments, are detailed in Section 3. Two generalizations, with the potential for lower relative error bounds, are detailed in Sections 4 and 5. Section 6 details how the initial approximations, and the approximations arising from the first generalization, can be utilized as inputs to a custom dynamical system to establish approximations with better convergence properties. Applications are specified in Section 7. Conclusions are stated in Section 8.

#### *1.1. Notes and Notation*

As erf(−*x*) = −erf(*x*), it is sufficient to consider approximations for the interval [0, ∞).

For a function *f* defined over the interval [*α*, *β*], an approximating function *fA* has a relative error, at a point *x*1, defined according to re(*x*1) = 1 − *fA*(*x*1)/ *f*(*x*1). The relative error bound for the approximating function over the interval [*α*, *β*] is defined according to

$$\text{re}\_{B} = \max\{|\text{re}(x\_{1})| : x\_{1} \in [\alpha, \beta] \}. \tag{11}$$
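A sampled version of Equation (11) is used throughout the paper's tables; a sketch (the uniform grid of 10,000 points mirrors the sampling stated for Table 1, and the guard against *f*(*x*) = 0 is an implementation detail):

```python
import math

def relative_error_bound(f, f_approx, alpha, beta, n=10000):
    # Equation (11): max |1 - f_approx(x)/f(x)| over a uniform sample of [alpha, beta]
    xs = (alpha + (beta - alpha) * k / (n - 1) for k in range(n))
    return max(abs(1 - f_approx(x) / f(x)) for x in xs if f(x) != 0)
```

For example, applied to the small-argument approximation erf(*x*) ≈ 2*x*/√*π* over [0.5, 1], it returns a bound of roughly 0.34, attained at *x* = 1.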

The notation *f*<sup>(*k*)</sup>(*x*) = d<sup>*k*</sup>*f*(*x*)/d*x*<sup>*k*</sup> is used. The symbol *u* denotes the unit step function. Mathematica has been used to facilitate the analysis and to obtain numerical results.

#### *1.2. Background Results*

The following result underpins the bounds proposed for the error function:

**Lemma 1.** *Upper and Lower Functional Bounds*. *A positive approximation fA to a positive function f over the interval* [*α*, *β*], *with a relative error bound*

$$-\varepsilon\_B < 1 - \frac{f\_A(x)}{f(x)} < \varepsilon\_B, \quad x \in [\alpha, \beta], \ \varepsilon\_B > 0,\tag{12}$$

*leads to the following upper and lower bounded functions:*

$$\frac{f\_A(x)}{1+\varepsilon\_B} < f(x) < \frac{f\_A(x)}{1-\varepsilon\_B}, \ x \in [\alpha, \beta]. \tag{13}$$

*The relative error bounds, over the interval* [*α*, *β*], *for the upper and lower bounded functions, respectively, are:*

$$\frac{2\varepsilon\_B}{1+\varepsilon\_B}, \frac{2\varepsilon\_B}{1-\varepsilon\_B}.\tag{14}$$

**Proof.** The definition of the relative error bound, as specified by Equation (12), leads to

$$1 - \varepsilon\_B < \frac{f\_A(x)}{f(x)} < 1 + \varepsilon\_B, \tag{15}$$

which implies

$$\frac{1-\varepsilon\_B}{1+\varepsilon\_B} < \frac{f\_A(\mathbf{x})/(1+\varepsilon\_B)}{f(\mathbf{x})} < 1,\\ 1 < \frac{f\_A(\mathbf{x})/(1-\varepsilon\_B)}{f(\mathbf{x})} < \frac{1+\varepsilon\_B}{1-\varepsilon\_B} \tag{16}$$

and the relative error bounds:

$$\begin{aligned} 0 &< 1 - \frac{f\_A(\mathbf{x})/(1+\varepsilon\_B)}{f(\mathbf{x})} < 1 - \frac{1-\varepsilon\_B}{1+\varepsilon\_B} = \frac{2\varepsilon\_B}{1+\varepsilon\_B} \\ 1 - \frac{1+\varepsilon\_B}{1-\varepsilon\_B} &= \frac{-2\varepsilon\_B}{1-\varepsilon\_B} < 1 - \frac{f\_A(\mathbf{x})/(1-\varepsilon\_B)}{f(\mathbf{x})} < 0 \end{aligned} \tag{17}$$
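Lemma 1 can be illustrated with the second-order approximation of Equation (9) over [0, 2], for which ε*B* = 0.06 is a valid (slightly rounded-up) relative error bound; a sketch:

```python
import math

EPS_B = 0.06  # valid relative error bound for f2 on (0, 2] (0.056 rounded up)

def f2(x: float) -> float:
    # Equation (9)
    return (x / math.sqrt(math.pi)) * ((1 - x * x / 30)
            + (1 + 11 * x * x / 30 + x ** 4 / 15) * math.exp(-x * x))

# Equation (13): scaled copies of the approximation bracket erf on (0, 2]
ok = all(f2(x) / (1 + EPS_B) < math.erf(x) < f2(x) / (1 - EPS_B)
         for x in (k / 100 for k in range(1, 201)))
```

The resulting lower and upper bounds have relative error bounds 2ε*B*/(1 + ε*B*) and 2ε*B*/(1 − ε*B*), per Equation (14).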


Convergent Integral Approximations

One application of the proposed approximations for the error function requires knowledge of when function convergence implies convergence of associated integrals.

**Lemma 2.** *Convergent Integral Approximation. If a sequence of functions f*1, *f*2, ... *converges, over the interval* [0, *x*], *to a bounded and integrable function f* , *then point-wise convergence is sufficient for the associated integrals to be convergent, i.e., for*

$$\lim\_{n\to\infty}\int\_{0}^{x}f\_{n}(\lambda)\,d\lambda=\int\_{0}^{x}f(\lambda)\,d\lambda.\tag{18}$$

**Proof.** The required result follows if it is possible to interchange the order of limit and integration, i.e.,

$$\lim\_{n \to \infty} \int\_0^x f\_n(\lambda) d\lambda = \int\_0^x \lim\_{n \to \infty} f\_n(\lambda) d\lambda = \int\_0^x f(\lambda) d\lambda. \tag{19}$$

Standard conditions for when the interchange is valid are specified by the monotone and dominated convergence theorems, e.g., [29], (p. 26). Sufficient conditions for a valid interchange include point-wise function convergence, and for *f* to be integrable and bounded. □

#### **2. Spline-Based Approximations for Error Function**

#### *2.1. Spline Approximation for Error Function*

The following *n*th-order, two-point spline-based approximation for an integral has been detailed in Equation (48) of [19], for a function *f* that is at least (*n* + 1)th-order differentiable:

$$\begin{aligned} \int\_a^x f(\lambda)d\lambda &= \sum\_{k=0}^n c\_{n,k}(x-a)^{k+1} \left[ f^{(k)}(a) + (-1)^k f^{(k)}(x) \right] + R\_n(a,x) \\ &\qquad n \in \{0, 1, 2, \ldots\} \end{aligned} \tag{20}$$

where

$$c\_{n,k} = \frac{n!}{(n-k)!(k+1)!} \cdot \frac{(2n+1-k)!}{2(2n+1)!}, \quad k \in \{0, 1, \ldots, n\}, \tag{21}$$

$$R\_{n}^{(1)}(a,x) = -\sum\_{k=0}^{n} c\_{n,k}(k+1)(x-a)^{k} \left[ f^{(k)}(a) + (-1)^{k+1} \cdot \frac{n+1}{n-k+1} \cdot f^{(k)}(x) \right] + c\_{n,n}(-1)^{n+1}(x-a)^{n+1} f^{(n+1)}(x).\tag{22}$$
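The coefficients of Equation (21) and the quadrature rule of Equation (20) can be checked directly. The sketch below applies the *n* = 2 rule to *f*(λ) = e<sup>−λ²</sup> on [0, 1] (Python with exact rational coefficients; the hard-coded derivative list is specific to this integrand):

```python
import math
from fractions import Fraction

def c(n: int, k: int) -> Fraction:
    # Equation (21)
    return (Fraction(math.factorial(n), math.factorial(n - k) * math.factorial(k + 1))
            * Fraction(math.factorial(2 * n + 1 - k), 2 * math.factorial(2 * n + 1)))

def deriv(k: int, lam: float) -> float:
    # derivatives of f(lam) = exp(-lam^2): f' = -2*lam*f, f'' = (4*lam^2 - 2)*f
    e = math.exp(-lam * lam)
    return (e, -2 * lam * e, (4 * lam * lam - 2) * e)[k]

# Equation (20) with a = 0, x = 1, n = 2 (remainder neglected; (x - a)^(k+1) = 1)
approx = sum(float(c(2, k)) * (deriv(k, 0.0) + (-1) ** k * deriv(k, 1.0))
             for k in range(3))
exact = math.sqrt(math.pi) / 2 * math.erf(1.0)  # integral of exp(-lam^2) over [0, 1]
```

On this interval the *n* = 2 rule is already accurate to a few parts in 10<sup>4</sup>.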

Direct application of this result to the integral defining the error function leads to the results stated in Theorem 1:

**Theorem 1.** *Spline-Based Integral Approximation for Error Function. The error function can be defined according to*

$$\text{erf}(\mathbf{x}) = f\_n(\mathbf{x}) + \varepsilon\_n(\mathbf{x}),\tag{23}$$

*where fn is the nth-order spline-based integral approximation defined according to*

$$f\_n(x) = \frac{2}{\sqrt{\pi}} \cdot \sum\_{k=0}^n c\_{n,k}\, x^{k+1} \left[ p(k, 0) + (-1)^k p(k, x) e^{-x^2} \right] \tag{24}$$

*and* ε*n*(*x*) *is the associated residual function whose derivative is*

$$\begin{split} \varepsilon\_{n}^{(1)}(x) &= \frac{2e^{-x^{2}}}{\sqrt{\pi}} - \frac{2}{\sqrt{\pi}} \cdot \sum\_{k=0}^{n} c\_{n,k}(k+1) x^{k} p(k,0) \, - \\ &\quad \frac{2e^{-x^{2}}}{\sqrt{\pi}} \cdot \sum\_{k=0}^{n} c\_{n,k}\, x^{k} (-1)^{k} \left[ (k+1-2x^{2}) p(k,x) + x p^{(1)}(k,x) \right]. \end{split} \tag{25}$$

*In these equations,*

$$p(k, \mathbf{x}) = p^{(1)}(k - 1, \mathbf{x}) - 2\mathbf{x}p(k - 1, \mathbf{x}), \ \ p(0, \mathbf{x}) = 1. \tag{26}$$

*A more general approximation is*

$$\begin{split} \text{erf}(x) - \text{erf}(a) &= \frac{2}{\sqrt{\pi}} \cdot \sum\_{k=0}^{n} c\_{n,k} (x - a)^{k+1} \left[ p(k, a) e^{-a^2} + (-1)^k p(k, x) e^{-x^2} \right] + \varepsilon\_n(a, x) \\ &= f\_{n}(a, x) + \varepsilon\_n(a, x). \end{split} \tag{27}$$

**Proof.** The proof is detailed in Appendix A. □

#### 2.1.1. Note

The polynomial function *p*(*k*, *x*) is equivalently defined by the *k*th-order Hermite polynomial function ([21], p. 775, Equation (23.3.10)) and an explicit form is

$$p(k, \mathbf{x}) = \sum\_{i=0}^{\lfloor k/2 \rfloor} \frac{(-1)^{i+k} k!}{i!(k-2i)!} \cdot 2^{k-2i} \mathbf{x}^{k-2i}.\tag{28}$$

A change of variable *r* = *k* − 2*i*, and noting that *i* ∈ {0, 1, . . . , ⌊*k*/2⌋} implies *r* ∈ {*k*, *k* − 2, . . . , *k* − 2⌊*k*/2⌋}, leads to the alternative form

$$p(k, x) = \sum\_{r=k-2\lfloor k/2 \rfloor}^{k} d\_{k,r}\, x^r, \quad d\_{k,r} = \frac{(-1)^{(3k-r)/2} \left[ \frac{1 + (-1)^{r-(k-2\lfloor k/2 \rfloor)}}{2} \right] k!\, 2^r}{[(k-r)/2]!\, r!}. \tag{29}$$

Substitution of this form into Equation (24) leads to the direct polynomial form for the *n*th-order approximation to the error function:

$$f\_n(x) = \frac{2}{\sqrt{\pi}} \cdot \sum\_{k=0}^n c\_{n,k}\, p(k, 0)\, x^{k+1} + \frac{2e^{-x^2}}{\sqrt{\pi}} \cdot \sum\_{k=0}^n \left[ (-1)^k c\_{n,k} \sum\_{r=k-2\lfloor k/2 \rfloor}^k d\_{k,r}\, x^{r+k+1} \right]. \tag{30}$$

#### 2.1.2. Approximations

The polynomial functions *p*, as defined by Equations (26), (28), and (29), have the explicit forms:

$$\begin{array}{llll}p(0,\mathbf{x})=1, & p(1,\mathbf{x})=-2\mathbf{x}, & p(2,\mathbf{x})=-2[1-2\mathbf{x}^2],\\p(3,\mathbf{x})=12x[1-2\mathbf{x}^2/3], & p(4,\mathbf{x})=12[1-4\mathbf{x}^2+4\mathbf{x}^4/3], & \dots \end{array} \tag{31}$$
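The recursion of Equation (26) is easy to implement on coefficient lists, and it reproduces the explicit forms of Equation (31); a sketch (list index *i* holds the coefficient of *x*<sup>*i*</sup>):

```python
def p_next(coeffs):
    # Equation (26): p(k, x) = p^(1)(k-1, x) - 2x p(k-1, x), on coefficient lists
    deriv = [i * coeffs[i] for i in range(1, len(coeffs))]
    shifted = [0] + [-2 * v for v in coeffs]   # multiply by -2x: negate, shift up
    m = max(len(deriv), len(shifted))
    deriv += [0] * (m - len(deriv))
    shifted += [0] * (m - len(shifted))
    return [a + b for a, b in zip(deriv, shifted)]

p = [[1]]  # p(0, x) = 1
for _ in range(4):
    p.append(p_next(p[-1]))
```

For example, `p[4]` is `[12, 0, -48, 0, 16]`, i.e., 12[1 − 4*x*² + 4*x*⁴/3], matching Equation (31).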

Approximations for the error function, as defined by Equation (24) and for orders zero to five, are:

$$f\_0(x) = \frac{x}{\sqrt{\pi}} + \frac{x}{\sqrt{\pi}} \cdot e^{-x^2} \tag{32}$$

$$f\_1(\mathbf{x}) = \frac{\mathbf{x}}{\sqrt{\pi}} + \frac{\mathbf{x}}{\sqrt{\pi}} \left[ 1 + \frac{\mathbf{x}^2}{3} \right] e^{-\mathbf{x}^2} \tag{33}$$

$$f\_2(\mathbf{x}) = \frac{\mathbf{x}}{\sqrt{\pi}} \left[ 1 - \frac{\mathbf{x}^2}{30} \right] + \frac{\mathbf{x}}{\sqrt{\pi}} \left[ 1 + \frac{11\mathbf{x}^2}{30} + \frac{\mathbf{x}^4}{15} \right] e^{-\mathbf{x}^2} \tag{34}$$

$$f\_3(x) = \frac{x}{\sqrt{\pi}} \left[ 1 - \frac{x^2}{21} \right] + \frac{x}{\sqrt{\pi}} \left[ 1 + \frac{8x^2}{21} + \frac{17x^4}{210} + \frac{x^6}{105} \right] e^{-x^2} \tag{35}$$

$$f\_4(\mathbf{x}) = \frac{\mathbf{x}}{\sqrt{\pi}} \left[ 1 - \frac{\mathbf{x}^2}{18} + \frac{\mathbf{x}^4}{1260} \right] + \frac{\mathbf{x}}{\sqrt{\pi}} \left[ 1 + \frac{7\mathbf{x}^2}{18} + \frac{37\mathbf{x}^4}{420} + \frac{4\mathbf{x}^6}{315} + \frac{\mathbf{x}^8}{945} \right] e^{-\mathbf{x}^2} \tag{36}$$

$$f\_5(x) = \frac{x}{\sqrt{\pi}} \left[ 1 - \frac{2x^2}{33} + \frac{x^4}{660} \right] + \frac{x}{\sqrt{\pi}} \left[ 1 + \frac{13x^2}{33} + \frac{61x^4}{660} + \frac{67x^6}{4620} + \frac{16x^8}{10,395} + \frac{x^{10}}{10,395} \right] e^{-x^2} \tag{37}$$
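A general implementation of Equation (24), combining the coefficients of Equation (21) with the recursion of Equation (26), shows the improvement with order; a sketch (Python; the paper's own results were produced in Mathematica):

```python
import math
from fractions import Fraction

def c(n, k):
    # Equation (21)
    return (Fraction(math.factorial(n), math.factorial(n - k) * math.factorial(k + 1))
            * Fraction(math.factorial(2 * n + 1 - k), 2 * math.factorial(2 * n + 1)))

def p_polys(n):
    # Equation (26) on coefficient lists; index i holds the coefficient of x^i
    polys = [[1]]
    for _ in range(n):
        prev = polys[-1]
        deriv = [i * prev[i] for i in range(1, len(prev))]
        shifted = [0] + [-2 * v for v in prev]
        m = max(len(deriv), len(shifted))
        deriv += [0] * (m - len(deriv))
        shifted += [0] * (m - len(shifted))
        polys.append([a + b for a, b in zip(deriv, shifted)])
    return polys

def f_n(n, x):
    # Equation (24): nth-order spline-based approximation to erf(x)
    polys = p_polys(n)
    ev = lambda poly, t: sum(cf * t ** i for i, cf in enumerate(poly))
    s = sum(float(c(n, k)) * x ** (k + 1)
            * (ev(polys[k], 0.0) + (-1) ** k * ev(polys[k], x) * math.exp(-x * x))
            for k in range(n + 1))
    return 2 / math.sqrt(math.pi) * s
```

At *x* = 1 the relative error falls from roughly 2 × 10<sup>−4</sup> at *n* = 2 to far smaller values at *n* = 5, and *n* = 2 reproduces the explicit form of Equation (34).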

#### *2.2. Results*

The relative error in the zero- to tenth-order spline-based series approximations, along with the relative error in Taylor series approximations of orders 1–15, are detailed in Figure 2. The clear superiority, in terms of convergence, of the spline-based series relative to the Taylor series is evident. The relative error in the spline approximations, of orders 16, 20, 24, 28, and 32, are shown in Figure 3.

**Figure 2.** Graph of the magnitude of the relative errors in approximations to erf(*x*): zero- to tenth-order integral spline-based series and first-, third-, ..., fifteenth-order Taylor series (dotted).

**Figure 3.** Graph of the magnitude of the relative errors associated with the approximations erf(*x*) ≈ 1 and erf(*x*) ≈ 1 − exp(−*x*<sup>2</sup>)/(√*π* *x*), along with the relative error in spline approximations of orders 16, 20, 24, 28, and 32.

The Mathematica code underpinning the results shown in Figure 2 is detailed in the Supplementary Material. Such code is indicative of the code underpinning the results detailed in the paper.

#### *2.3. Approximation for Large Arguments*

Zero- and first-order approximations for the error function, for the case of *x* ≫ 1, are

$$\text{erf}(x) \approx 1, \quad \text{erf}(x) \approx 1 - \frac{e^{-x^2}}{\sqrt{\pi}\,x}. \tag{38}$$

The relative errors in such approximations, respectively, are

$$\text{re}(x) = 1 - \frac{1}{\text{erf}(x)}, \quad \text{re}(x) = 1 - \frac{1 - e^{-x^2}/(\sqrt{\pi}\,x)}{\text{erf}(x)},\tag{39}$$

and their graphs are shown in Figure 3.
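The two relative errors of Equation (39) can be compared directly; a sketch:

```python
import math

def re_one(x: float) -> float:
    # relative error of the approximation erf(x) = 1
    return abs(1 - 1 / math.erf(x))

def re_asym(x: float) -> float:
    # relative error of the approximation erf(x) = 1 - exp(-x^2)/(sqrt(pi)*x)
    approx = 1 - math.exp(-x * x) / (math.sqrt(math.pi) * x)
    return abs(1 - approx / math.erf(x))
```

At *x* = 2 the first-order form is already roughly an order of magnitude more accurate than erf(*x*) ≈ 1, and both errors decay rapidly with increasing *x*, consistent with Figure 3.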

#### *2.4. Convergence*

To prove the convergence of the sequence of functions *f*0, *f*1, *f*2, ..., defined by Theorem 1, to the error function, it is sufficient to prove that the corresponding sequence of residual functions ε0, ε1, ε2, ... converges to zero. This can be shown by considering the derivatives of the residual functions defined by Equation (25). The derivatives of the residual functions of orders zero, one, and two, respectively, are:

$$\varepsilon\_0^{(1)}(x) = \frac{1}{\sqrt{\pi}} \left[ 1 + 2x^2 \right] e^{-x^2} - \frac{1}{\sqrt{\pi}} \tag{40}$$

$$\varepsilon\_1^{(1)}(x) = \frac{1}{\sqrt{\pi}} \left[ 1 + x^2 + \frac{2x^4}{3} \right] e^{-x^2} - \frac{1}{\sqrt{\pi}} \tag{41}$$

$$\varepsilon\_2^{(1)}(x) = \frac{1}{\sqrt{\pi}} \left[ 1 + \frac{9x^2}{10} + \frac{2x^4}{5} + \frac{2x^6}{15} \right] e^{-x^2} - \frac{1}{\sqrt{\pi}} \left[ 1 - \frac{x^2}{10} \right]. \tag{42}$$
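The explicit derivatives of Equations (40)–(42) illustrate the convergence asserted below; a sketch evaluating them at a fixed point:

```python
import math

SQ = math.sqrt(math.pi)

def eps0p(x):
    # Equation (40)
    return (1 + 2 * x * x) * math.exp(-x * x) / SQ - 1 / SQ

def eps1p(x):
    # Equation (41)
    return (1 + x * x + 2 * x ** 4 / 3) * math.exp(-x * x) / SQ - 1 / SQ

def eps2p(x):
    # Equation (42)
    return ((1 + 9 * x * x / 10 + 2 * x ** 4 / 5 + 2 * x ** 6 / 15)
            * math.exp(-x * x) / SQ - (1 - x * x / 10) / SQ)
```

At *x* = 1 the magnitudes decrease monotonically with order, consistent with the residual functions converging to zero for fixed *x*.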

**Theorem 2.** *Convergence of Spline-Based Approximations. For all fixed values of x, the derivatives of the residual functions converge to zero as the order increases, i.e., for all fixed values of x it is the case that*

$$\lim\_{n \to \infty} \varepsilon\_n^{(1)}(\mathbf{x}) = 0, \ \mathbf{x} > 0. \tag{43}$$

*This is sufficient for the convergence of the residual functions, i.e.,* lim*n*→∞ ε*n*(*x*) = 0, *x* > 0, *and, hence, for x fixed:*

$$\lim\_{n \to \infty} f\_n(x) = \text{erf}(x), \ x > 0. \tag{44}$$

*The convergence is nonuniform.*

**Proof.** The proof is detailed in Appendix B. □

#### *2.5. Improved Approximation Based on Iteration*

Consider the general result

$$\int\_0^x \text{erf}(\lambda)\,d\lambda = x\,\text{erf}(x) - \frac{1}{\sqrt{\pi}} \left[ 1 - e^{-x^2} \right]. \tag{45}$$

By using the approximations, *fn*, as defined in Theorem 1, in the integral, improved approximations for the error function can be defined.

**Theorem 3.** *Improved Approximation via Iteration. An improved approximation, of order n, for the error function is*

$$\begin{split} F\_{n}(x) &= \frac{1}{\sqrt{\pi}\,x} \left[ 1 - e^{-x^{2}} \right] + \frac{2}{\sqrt{\pi}} \cdot \sum\_{k=0}^{n} \frac{c\_{n,k}\, p(k,0)}{k+2} \cdot x^{k+1} \, + \\ &\quad \frac{2}{x\sqrt{\pi}} \cdot \sum\_{k=0}^{n} \left[ (-1)^{k} c\_{n,k} \sum\_{r=k-2\lfloor k/2 \rfloor}^{k} \frac{d\_{k,r}}{2} \left[ \frac{r+k}{2} \right]! \left[ 1 - e^{-x^{2}} \sum\_{i=0}^{(r+k)/2} \frac{x^{2i}}{i!} \right] \right] \end{split} \tag{46}$$

*where cn*,*<sup>k</sup> and dk*,*<sup>r</sup> are defined, respectively, by Equations (21) and (29).*

**Proof.** From Equation (45), it follows that

$$\text{erf}(x) \approx \frac{1}{\sqrt{\pi}\,x} \left[ 1 - e^{-x^2} \right] + \frac{1}{x} \cdot \int\_0^x f\_n(\lambda)\, d\lambda. \tag{47}$$

As

$$\int\_0^x \lambda^u e^{-\lambda^2}\, d\lambda = \frac{1}{2} \left[ \frac{u-1}{2} \right]! \left[ 1 - e^{-x^2} \sum\_{i=0}^{(u-1)/2} \frac{x^{2i}}{i!} \right], \quad u \in \{1, 3, 5, \dots\}, \tag{48}$$

it follows, from the form for *fn* detailed in Equation (30), that

$$\begin{split} \text{erf}(x) &\approx \frac{1}{\sqrt{\pi}\,x} \left[ 1 - e^{-x^2} \right] + \frac{2}{\sqrt{\pi}} \cdot \sum\_{k=0}^{n} \frac{c\_{n,k}\, p(k, 0)}{k + 2} \cdot x^{k+1} \, + \\ &\quad \frac{2}{x \sqrt{\pi}} \cdot \sum\_{k=0}^{n} \left[ (-1)^k c\_{n, k} \sum\_{r=k-2\lfloor k/2 \rfloor}^{k} \frac{d\_{k,r}}{2} \left[ \frac{r+k}{2} \right]! \left[ 1 - e^{-x^2} \sum\_{i=0}^{(r+k)/2} \frac{x^{2i}}{i!} \right] \right] \end{split} \tag{49}$$

which is the required result. □
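The odd-moment identity of Equation (48) can be sanity-checked against a generic quadrature; a sketch (composite Simpson rule; the step count is an arbitrary choice):

```python
import math

def odd_moment(u: int, x: float) -> float:
    # Equation (48): closed form for the integral of t^u * exp(-t^2) over [0, x], u odd
    m = (u - 1) // 2
    s = sum(x ** (2 * i) / math.factorial(i) for i in range(m + 1))
    return 0.5 * math.factorial(m) * (1 - math.exp(-x * x) * s)

def simpson(g, a, b, n=2000):
    # composite Simpson rule (n even), used only as an independent check
    h = (b - a) / n
    total = g(a) + g(b)
    total += 4 * sum(g(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    total += 2 * sum(g(a + 2 * k * h) for k in range(1, n // 2))
    return total * h / 3
```

For *u* ∈ {1, 3, 5} the closed form and the quadrature agree to well beyond single precision.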

#### 2.5.1. Explicit Approximations

The improved approximations *Fn* to the error function, of orders zero to five, are:

$$F\_0(x) = \frac{3}{2\sqrt{\pi}\,x} \left[ 1 + \frac{x^2}{3} \right] - \frac{3e^{-x^2}}{2\sqrt{\pi}\,x} \tag{50}$$

$$F\_1(x) = \frac{5}{3\sqrt{\pi}\,x} \left[ 1 + \frac{3x^2}{10} \right] - \frac{5}{3\sqrt{\pi}\,x} \left[ 1 + \frac{x^2}{10} \right] e^{-x^2} \tag{51}$$

$$F\_2(x) = \frac{7}{4\sqrt{\pi}\,x} \left[ 1 + \frac{2x^2}{7} - \frac{x^4}{210} \right] - \frac{7}{4\sqrt{\pi}\,x} \left[ 1 + \frac{x^2}{7} + \frac{2x^4}{105} \right] e^{-x^2} \tag{52}$$

$$F\_3(x) = \frac{9}{5\sqrt{\pi}\,x} \left[ 1 + \frac{5x^2}{18} - \frac{5x^4}{756} \right] - \frac{9}{5\sqrt{\pi}\,x} \left[ 1 + \frac{x^2}{6} + \frac{23x^4}{756} + \frac{x^6}{378} \right] e^{-x^2} \tag{53}$$

$$F\_4(x) = \frac{11}{6\sqrt{\pi}\,x} \left[ 1 + \frac{3x^2}{11} - \frac{x^4}{132} + \frac{x^6}{13,860} \right] - \frac{11}{6\sqrt{\pi}\,x} \left[ 1 + \frac{2x^2}{11} + \frac{5x^4}{132} + \frac{16x^6}{3465} + \frac{x^8}{3465} \right] e^{-x^2} \tag{54}$$

$$F\_5(x) = \frac{13}{7\sqrt{\pi}\,x} \left[ 1 + \frac{7x^2}{26} - \frac{7x^4}{858} + \frac{7x^6}{51,480} \right] - \frac{13}{7\sqrt{\pi}\,x} \left[ 1 + \frac{5x^2}{26} + \frac{37x^4}{858} + \frac{313x^6}{51,480} + \frac{7x^8}{12,870} + \frac{x^{10}}{38,610} \right] e^{-x^2} \tag{55}$$

Note that integration of these expressions leads to functions defined, in part, by the Gamma function, which is itself defined by an integral. This makes further iteration impractical.
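Comparing the zeroth-order iterate of Equation (50) with the basic zeroth-order approximation of Equation (32) confirms the improvement; a sketch (`F0` follows the *Fn* notation of Theorem 3):

```python
import math

SQ = math.sqrt(math.pi)

def f0(x):
    # Equation (32): basic zeroth-order spline approximation
    return x / SQ + (x / SQ) * math.exp(-x * x)

def F0(x):
    # Equation (50): zeroth-order iterated approximation
    return 3 / (2 * SQ * x) * (1 + x * x / 3) - 3 * math.exp(-x * x) / (2 * SQ * x)
```

At *x* = 1 the relative error drops from about 8 × 10<sup>−2</sup> for *f*0 to about 3 × 10<sup>−2</sup> for *F*0, consistent with the improvement visible in Figure 4.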

#### 2.5.2. Results

The relative errors in the even-order approximations, of orders 0–10, are shown in Figure 4. A comparison of the results detailed in Figures 2 and 4 shows the clear improvement in the approximations specified by Equation (46).

**Figure 4.** Graph of the magnitude of the relative errors in the approximations to erf(*x*), of even orders, as specified by Equation (46). The dotted results are for the fourth-order approximation specified by Theorem 1 (Equation (36)).

#### **3. Improved Approximations**

*3.1. Improved Approximation for Error Function*

An improved approximation for the error function can be achieved by noting, as illustrated in Figure 3, that the approximation erf(*x*) ≈ 1 is increasingly accurate for *x* ≫ 1 and as *x* increases. By switching at a suitable point *xo*, as illustrated in Figure 5, from a spline-based approximation to the approximation erf(*x*) ≈ 1, an improved approximation is achieved. Naturally, it is possible to switch to the approximation erf(*x*) ≈ 1 − exp(−*x*<sup>2</sup>)/(√*π* *x*), or to higher-order approximations, in a similar manner.

**Figure 5.** Illustration of the crossover point where the magnitude of the relative error in the approximation erf(*x*) ≈ 1 equals the magnitude of the relative error in a set order spline approximation.

**Theorem 4.** *Improved Approximation for Error Function. Improved approximations for the error function, based on the nth-order approximations detailed in Theorems 1 and 3, and consistent with the illustration shown in* Figure 5*, are*

$$\begin{array}{l} \text{erf}(\mathbf{x}) \approx f\_{\boldsymbol{n}}(\mathbf{x}) u[\boldsymbol{x}\_{\boldsymbol{o}}(\boldsymbol{n}) - \boldsymbol{x}] + [1 - u[\boldsymbol{x}\_{\boldsymbol{o}}(\boldsymbol{n}) - \boldsymbol{x}]] \\ \text{erf}(\mathbf{x}) \approx F\_{\boldsymbol{n}}(\mathbf{x}) u[\boldsymbol{x}\_{\boldsymbol{o}}(\boldsymbol{n}) - \boldsymbol{x}] + [1 - u[\boldsymbol{x}\_{\boldsymbol{o}}(\boldsymbol{n}) - \boldsymbol{x}]] \end{array} \tag{56}$$

*where the transition points, respectively, are defined according to*

$$\begin{aligned} x_o(n) &= x : \left|1 - \frac{1}{\text{erf}(x)}\right| = \left|1 - \frac{f_n(x)}{\text{erf}(x)}\right| \\ x_o(n) &= x : \left|1 - \frac{1}{\text{erf}(x)}\right| = \left|1 - \frac{F_n(x)}{\text{erf}(x)}\right| \end{aligned} \tag{57}$$

**Proof.** The improved approximation results follow from optimally switching, as illustrated in Figure 5 and at the point specified by Equation (57), to the approximation erf(*x*) ≈ 1, which has a lower relative error magnitude. □

Transition Points and Relative Error Bounds

The transition points, for various orders of spline approximation, are specified in Table 3. The relationship between the transition point and order is shown in Figure 6 for the case of the approximations *fn*. This relationship can be approximated, with a second-order polynomial, according to

$$\mathbf{x}\_o(n) = 1.3607 + 0.20511n - 0.002932n^2, \quad 0 \le n \le 24. \tag{58}$$

**Table 3.** The transition points, *xo*, and the resulting relative error bounds for the spline-based approximations specified by Equation (56). The transition points are based on sampling the interval [0, 5] with 10,000 points.


**Figure 6.** Graph of the relationship between the optimum transition point *xo*(*n*), as defined by Equation (57) for the case of *fn* and the order of the spline approximation.

However, as small variations in *xo*(*n*) can lead to significant changes in the maximum relative error in the approximation for the error function, precise values for *xo*(*n*) are preferable.

The graphs of the relative errors in the approximations *fn* to erf(*x*), as specified by Equation (56), are shown in Figure 7 for orders 2, 4, 6, ... , 20. The relative error bounds that can be achieved over the interval [0, ∞], using the optimally chosen transition points, are detailed in Table 3.

**Figure 7.** Graph of the relative errors in the approximations, *fn*, to erf(*x*), of orders 2, 4, 6, ... , 20, based on utilizing the approximation erf(*x*) ≈ 1 in an optimum manner.

#### *3.2. Improved Approximation for Taylor Series*

The approximation of erf(*x*) ≈ 1 for *x* ≫ 1 can be utilized to improve the relative error bound for a Taylor series approximation according to

$$\text{erf}(x) \approx T_n(x)\, u[x_o(n) - x] + \left[1 - u[x_o(n) - x]\right], \tag{59}$$

where *Tn* is a *n*th-order Taylor series, with *n* odd, as specified in Table 1. The optimum transition points and relative error bounds, for selected orders, are detailed in Table 4. The variation of the relative errors, with order, is shown in Figure 8. The change in the optimum transition point can be approximated according to

$$\mathbf{x}\_o(n) \approx 0.932 + 0.0560n - 0.0003503n^2, \ 3 \le n \le 61,\tag{60}$$

but, again, as small variations in *xo*(*n*) can lead to significant changes in the maximum relative error in the approximation for the error function, precise values for *xo*(*n*) are preferable. The clear superiority in the convergence of the spline-based series is evident from a visual comparison of the relative errors shown in Figures 7 and 8.

**Table 4.** The transition points, and resulting relative error bounds, for Taylor series approximations specified by Equation (59). The transition points are based on sampling the interval [0, 4] with 10,000 points.


**Figure 8.** Graph of the magnitude of the relative error in Taylor series approximations to erf(*x*) that utilize an optimized change to the approximation erf(*x*) ≈ 1.

#### **4. Variable Subinterval Approximations for Error Function**

An improved analytic approximation for the error function can be achieved by demarcating the interval [0, *x*] into variable subintervals, e.g., the subintervals [0, *x*/4], [*x*/4, *x*/2], [*x*/2, 3*x*/4], and [3*x*/4, *x*] for the four-subintervals case, and by utilizing spline-based integral approximations for each subinterval. Chiani [30] utilized subintervals to enhance approximations for the complementary error function.

**Theorem 5.** *Variable Subinterval Approximations for Error Function. The nth-order spline-based approximation to the error function, based on m equal-width subintervals, is*

$$f_{n,m}(x) = \frac{2}{\sqrt{\pi}} \sum_{i=0}^{m-1} \left[ \sum_{k=0}^{n} c_{n,k} \left[\frac{x}{m}\right]^{k+1} \left[\begin{array}{l} p\left[k, \dfrac{ix}{m}\right] \exp\left[-\dfrac{i^2 x^2}{m^2}\right] + \\[2mm] (-1)^k\, p\left[k, \dfrac{(i+1)x}{m}\right] \exp\left[-\dfrac{(i+1)^2 x^2}{m^2}\right] \end{array}\right] \right] \tag{61}$$

*where*

$$p(k, x) = p^{(1)}(k-1, x) - 2x\, p(k-1, x), \quad p(0, x) = 1. \tag{62}$$
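The recursion of Equation (62) is direct to evaluate numerically. The following sketch (illustrative only; the function names are not from the paper) maintains the coefficient list of *p*(*k*, *x*); by induction, *p*(*k*, *x*)*e*<sup>−*x*<sup>2</sup></sup> is the *k*th derivative of *e*<sup>−*x*<sup>2</sup></sup>:

```python
def p_coeffs(k):
    """Coefficients (ascending powers) of p(k, x) per Equation (62):
    p(k, x) = p'(k-1, x) - 2x p(k-1, x), p(0, x) = 1.
    By induction, p(k, x) exp(-x^2) is the kth derivative of exp(-x^2)."""
    c = [1.0]
    for _ in range(k):
        nxt = [0.0] + [-2.0 * a for a in c]      # -2x * p(k-1, x)
        for j in range(1, len(c)):
            nxt[j - 1] += j * c[j]               # + p'(k-1, x)
        c = nxt
    return c

def p_eval(k, x):
    """Evaluate p(k, x) from its coefficient list."""
    return sum(a * x ** j for j, a in enumerate(p_coeffs(k)))
```

For example, the first few polynomials generated are p(1, x) = −2x, p(2, x) = −2 + 4x², and p(3, x) = 12x − 8x³.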

*An alternative form is*

$$f_{n,m}(x) = \frac{2}{\sqrt{\pi}} \left[ p_{n,0}(x) + \sum_{i=1}^{m-1} p_{n,i}(x) \exp\left[-\frac{i^2 x^2}{m^2}\right] + p_{n,m}(x) \exp(-x^2) \right] \tag{63}$$

*where*

$$\begin{aligned} p_{n,0}(x) &= \sum_{k=0}^{n} \frac{c_{n,k}}{m^{k+1}} \cdot p(k, 0)\, x^{k+1} \\ p_{n,i}(x) &= \sum_{k=0}^{n} \frac{c_{n,k}}{m^{k+1}} \cdot \left[1 + (-1)^k\right] p\left(k, \frac{ix}{m}\right) x^{k+1}, \quad i \in \{1, \ldots, m-1\} \\ p_{n,m}(x) &= \sum_{k=0}^{n} \frac{c_{n,k}}{m^{k+1}} \cdot (-1)^k\, p(k, x)\, x^{k+1}. \end{aligned} \tag{64}$$

**Proof.** The first result follows by applying Equation (27) in Theorem 1 to the subintervals [0, *x*/*m*], [*x*/*m*, 2*x*/*m*], ... , [*x* − *x*/*m*, *x*]. The alternative form arises by expanding the outer summation in Equation (61) and collecting terms of similar form. □

#### *4.1. Explicit Expressions*

A first-order approximation, based on *m* subintervals, is

$$f_{1,m}(x) = \frac{x}{m\sqrt{\pi}} \left[1 + 2\sum_{i=1}^{m-1} \exp\left[\frac{-i^2 x^2}{m^2}\right] + \exp(-x^2)\right] + \frac{x^3}{3m^2\sqrt{\pi}} \cdot \exp(-x^2). \tag{65}$$

For the four-subintervals case, explicit expressions are

$$f_{0,4}(x) = \frac{x}{4\sqrt{\pi}} \left[1 + 2\exp\left[\frac{-x^2}{16}\right] + 2\exp\left[\frac{-x^2}{4}\right] + 2\exp\left[\frac{-9x^2}{16}\right] + \exp(-x^2)\right] \tag{66}$$

$$f_{1,4}(x) = \frac{x}{4\sqrt{\pi}} \left[1 + 2\exp\left[\frac{-x^2}{16}\right] + 2\exp\left[\frac{-x^2}{4}\right] + 2\exp\left[\frac{-9x^2}{16}\right] + \exp(-x^2)\right] + \frac{x^3}{48\sqrt{\pi}} \cdot \exp(-x^2) \tag{67}$$
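Equation (65) is straightforward to evaluate numerically; a minimal Python sketch (illustrative only; `math.erf` is used purely as the reference value and the function name is not from the paper):

```python
import math

def f1m(x, m):
    """First-order spline approximation to erf(x) based on m equal-width
    subintervals of [0, x], per Equation (65)."""
    s = 1.0 + math.exp(-x * x)
    for i in range(1, m):
        s += 2.0 * math.exp(-(i * x / m) ** 2)
    return (x / (m * math.sqrt(math.pi))) * s \
        + (x ** 3 / (3 * m * m * math.sqrt(math.pi))) * math.exp(-x * x)
```

Increasing *m* reduces the error; for example, the error at *x* = 1 with *m* = 4 is already below 10<sup>−4</sup>.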

Using the alternative form, a fourth-order expression is

$$f_{4,4}(x) = \begin{array}{l} \dfrac{x}{4\sqrt{\pi}} \left[1 - \dfrac{x^2}{288} + \dfrac{x^4}{322{,}560}\right] + \\[2mm] \dfrac{x}{2\sqrt{\pi}} \cdot \exp\left[\dfrac{-x^2}{16}\right] \left[1 - \dfrac{x^2}{288} + \dfrac{47x^4}{107{,}520} - \dfrac{x^6}{1{,}280{,}040} + \dfrac{x^8}{61{,}931{,}520}\right] + \\[2mm] \dfrac{x}{2\sqrt{\pi}} \cdot \exp\left[\dfrac{-x^2}{4}\right] \left[1 - \dfrac{x^2}{288} + \dfrac{187x^4}{107{,}520} - \dfrac{x^6}{322{,}560} + \dfrac{x^8}{5{,}370{,}720}\right] + \\[2mm] \dfrac{x}{2\sqrt{\pi}} \cdot \exp\left[\dfrac{-9x^2}{16}\right] \left[1 - \dfrac{x^2}{288} + \dfrac{1261x^4}{322{,}560} - \dfrac{x^6}{143{,}360} + \dfrac{3x^8}{2{,}293{,}760}\right] + \\[2mm] \dfrac{x}{4\sqrt{\pi}} \cdot \exp(-x^2) \left[1 + \dfrac{31x^2}{288} + \dfrac{101x^4}{15{,}360} - \dfrac{19x^6}{60{,}640} + \dfrac{x^8}{241{,}920}\right] \end{array} \tag{68}$$

A fourth-order spline approximation, which utilizes 16 subintervals, is detailed in Appendix C. This expression, when utilized with the transition point *xo* = 7.1544, yields an approximation with a relative error bound of 4.82 × 10<sup>−16</sup>.

#### Results

The relative errors in the spline approximations of orders one to six, and for the case of four equal subintervals [0, *x*/4], [*x*/4, *x*/2], [*x*/2, 3*x*/4], and [3*x*/4, *x*], are shown in Figure 9.

**Figure 9.** Graph of the relative errors in spline approximations to erf(*x*), of orders one to six and based on four variable sub-intervals of equal width.

#### *4.2. Improved Approximation*

The spline approximations utilizing variable subintervals can be improved by using the transition to the approximation erf(*x*) ≈ 1 at a suitable point, as specified by Equation (56). The relative error in the spline approximations of orders one to seven, and for the case of four equal subintervals [0, *x*/4], [*x*/4, *x*/2], [*x*/2, 3*x*/4], and [3*x*/4, *x*], are updated in Figure 10 to show the improvement associated with utilizing the optimum transition point to the approximation erf(*x*) ≈ 1. The relative error bounds, and transition points, are detailed in Tables 5 and 6 for the cases of four and 16 subintervals.

**Figure 10.** Graph of the relative errors in approximations to erf(*x*): first to seventh order spline based series based on four sub-intervals of equal width and with utilization of the approximation erf(*x*) ≈ 1 at the optimum transition point.



**Table 6.** Transition point and relative error bound for the 16 equal subintervals case. The transition points are based on sampling the interval [0, 12] with 10,000 points.


#### Examples

A first-order approximation, based on *m* subintervals, as specified by Equation (65), yields a relative error bound of 7.21 × 10<sup>−5</sup> for four subintervals and the transition point *xo* = 3.2928; 4.51 × 10<sup>−6</sup> for eight subintervals and the transition point *xo* = 4.784; 2.82 × 10<sup>−7</sup> for 16 subintervals and the transition point *xo* = 6.88; and 1.10 × 10<sup>−9</sup> for 64 subintervals and the transition point *xo* = 15.7888.
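The four-subinterval figure can be reproduced approximately by sampling. A sketch (illustrative only; function names are not from the paper, and `math.erf` serves as the reference) of Equation (67) combined with the switched form of Equation (56):

```python
import math

SQRT_PI = math.sqrt(math.pi)

def f14(x):
    """f_{1,4}(x) of Equation (67): first-order spline, four subintervals."""
    s = (1 + 2 * math.exp(-x * x / 16) + 2 * math.exp(-x * x / 4)
         + 2 * math.exp(-9 * x * x / 16) + math.exp(-x * x))
    return x / (4 * SQRT_PI) * s + x ** 3 / (48 * SQRT_PI) * math.exp(-x * x)

def f14_switched(x, xo=3.2928):
    # Equation (56): spline form below the transition point, erf(x) ~ 1 above it.
    return f14(x) if x < xo else 1.0

# Sampled relative error over (0, 8]:
rel_err = max(abs(1 - f14_switched(k / 1000) / math.erf(k / 1000))
              for k in range(1, 8001))
```

The sampled maximum relative error is consistent with the quoted bound of 7.21 × 10<sup>−5</sup>.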

The fourth-order approximation, based on four equal subintervals, as specified by Equation (68), leads to a relative error bound of 1.43 × 10<sup>−7</sup> when used with the transition point *xo* = 3.7208. A sixteenth-order approximation, based on four equal subintervals, leads to an error bound of 2.01 × 10<sup>−19</sup> when used with the optimum transition point of *xo* = 6.3736.

#### **5. Dynamic Constant plus Spline Approximation**

Consider the demarcation of the areas, as illustrated in Figure 11 and based on a resolution Δ, that define the error function. It follows that

$$\text{erf}(x) = \sum_{k=0}^{\lfloor x/\Delta \rfloor} c_k + \frac{2}{\sqrt{\pi}} \int_{\Delta \lfloor x/\Delta \rfloor}^{x} e^{-\lambda^2}\, d\lambda, \qquad \begin{cases} c_0 = 0 \\ c_k = \text{erf}(k\Delta) - \text{erf}[(k-1)\Delta], & k \ge 1 \end{cases} \tag{69}$$

**Figure 11.** Illustration of areas comprising erf(*x*).

For the general case of nonuniformly spaced intervals, as defined by the set of monotonically increasing points {*x*0, *x*1, *x*2,..., *xm*}, and where it is not necessarily the case that *x* > *xm*, the error function is defined according to

$$\text{erf}(x) = \sum_{k=1}^{m} c_k\, u(x - x_k) + \frac{2}{\sqrt{\pi}} \int_{x_S}^{x} e^{-\lambda^2}\, d\lambda, \tag{70}$$

where *c*<sup>0</sup> = 0, *x*<sup>0</sup> = 0 and

$$c_k = \text{erf}(x_k) - \text{erf}(x_{k-1}), \qquad x_S = \sum_{k=1}^{m} [x_k - x_{k-1}]\, u(x - x_k). \tag{71}$$

A spline-based approximation, as defined by Equation (27), can be utilized for the unknown integrals in Equations (69) and (70). This leads to the results stated in Theorem 6:

**Theorem 6.** *Error Function Approximation: Dynamic Constant Plus a Spline Approximation. The error function, as defined by Equations (69) and (70), can be approximated, respectively, by*

$$f_{n,\Delta}(x) = \sum_{k=0}^{\lfloor x/\Delta \rfloor} c_k + \frac{2}{\sqrt{\pi}} \left[ \sum_{k=0}^{n} c_{n,k} \left[x - \Delta\left\lfloor \frac{x}{\Delta}\right\rfloor\right]^{k+1} \left[\begin{array}{l} p\left[k, \Delta\left\lfloor \frac{x}{\Delta}\right\rfloor\right] \exp\left[-\Delta^2\left\lfloor \frac{x}{\Delta}\right\rfloor^2\right] + \\[2mm] (-1)^k\, p(k, x) \exp(-x^2) \end{array}\right] \right] \tag{72}$$

$$\text{erf}(x) \approx \sum_{k=1}^{m} c_k\, u(x - x_k) + \frac{2}{\sqrt{\pi}} \left[ \sum_{k=0}^{n} c_{n,k} (x - x_S)^{k+1} \left[\begin{array}{l} p(k, x_S) \exp(-x_S^2) + \\ (-1)^k\, p(k, x) \exp(-x^2) \end{array}\right] \right] \tag{73}$$

**Proof.** These results arise from a spline approximation of order *n*, as defined by Equation (27), for the integrals, respectively, over the intervals [Δ⌊*x*/Δ⌋, *x*] and [*xS*, *x*]. □

#### *5.1. Approximations of Orders Zero to Four*

Approximations of orders zero to four arising from Theorem 6 are:


$$f_{0,\Delta}(x) = \sum_{k=0}^{\lfloor x/\Delta \rfloor} c_k + \frac{x - \Delta \lfloor x/\Delta \rfloor}{\sqrt{\pi}} \cdot \left[e^{-\Delta^2 \lfloor x/\Delta \rfloor^2} + e^{-x^2}\right] \tag{74}$$

$$\begin{array}{l} f_{1,\Delta}(x) = {\displaystyle \sum_{k=0}^{\lfloor x/\Delta \rfloor}} c_k + \dfrac{x - \Delta \lfloor x/\Delta \rfloor}{\sqrt{\pi}} \cdot \left[e^{-\Delta^2 \lfloor x/\Delta \rfloor^2} + e^{-x^2}\right] - \\[2mm] \qquad \dfrac{(x - \Delta \lfloor x/\Delta \rfloor)^2}{3\sqrt{\pi}} \cdot \left[\Delta \lfloor x/\Delta \rfloor\, e^{-\Delta^2 \lfloor x/\Delta \rfloor^2} - x e^{-x^2}\right] \end{array} \tag{75}$$

$$\begin{array}{l} f_{2,\Delta}(x) = {\displaystyle \sum_{k=0}^{\lfloor x/\Delta \rfloor}} c_k + \dfrac{x - \Delta \lfloor x/\Delta \rfloor}{\sqrt{\pi}} \cdot \left[e^{-\Delta^2 \lfloor x/\Delta \rfloor^2} + e^{-x^2}\right] - \\[2mm] \qquad \dfrac{2(x - \Delta \lfloor x/\Delta \rfloor)^2}{5\sqrt{\pi}} \cdot \left[\Delta \lfloor x/\Delta \rfloor\, e^{-\Delta^2 \lfloor x/\Delta \rfloor^2} - x e^{-x^2}\right] - \\[2mm] \qquad \dfrac{(x - \Delta \lfloor x/\Delta \rfloor)^3}{30\sqrt{\pi}} \cdot \left[\left[1 - 2\Delta^2 \lfloor x/\Delta \rfloor^2\right] e^{-\Delta^2 \lfloor x/\Delta \rfloor^2} + \left[1 - 2x^2\right] e^{-x^2}\right] \end{array} \tag{76}$$

$$\begin{array}{l} f_{3,\Delta}(x) = {\displaystyle \sum_{k=0}^{\lfloor x/\Delta \rfloor}} c_k + \dfrac{x - \Delta \lfloor x/\Delta \rfloor}{\sqrt{\pi}} \cdot \left[e^{-\Delta^2 \lfloor x/\Delta \rfloor^2} + e^{-x^2}\right] - \\[2mm] \qquad \dfrac{3(x - \Delta \lfloor x/\Delta \rfloor)^2}{7\sqrt{\pi}} \cdot \left[\Delta \lfloor x/\Delta \rfloor\, e^{-\Delta^2 \lfloor x/\Delta \rfloor^2} - x e^{-x^2}\right] - \\[2mm] \qquad \dfrac{(x - \Delta \lfloor x/\Delta \rfloor)^3}{21\sqrt{\pi}} \cdot \left[\left[1 - 2\Delta^2 \lfloor x/\Delta \rfloor^2\right] e^{-\Delta^2 \lfloor x/\Delta \rfloor^2} + \left[1 - 2x^2\right] e^{-x^2}\right] + \\[2mm] \qquad \dfrac{(x - \Delta \lfloor x/\Delta \rfloor)^4}{70\sqrt{\pi}} \cdot \left[\Delta \lfloor x/\Delta \rfloor \left[1 - \dfrac{2\Delta^2 \lfloor x/\Delta \rfloor^2}{3}\right] e^{-\Delta^2 \lfloor x/\Delta \rfloor^2} - x \left[1 - \dfrac{2x^2}{3}\right] e^{-x^2}\right] \end{array} \tag{77}$$

$$\begin{array}{l} f_{4,\Delta}(x) = {\displaystyle \sum_{k=0}^{\lfloor x/\Delta \rfloor}} c_k + \dfrac{x - \Delta \lfloor x/\Delta \rfloor}{\sqrt{\pi}} \cdot \left[e^{-\Delta^2 \lfloor x/\Delta \rfloor^2} + e^{-x^2}\right] - \\[2mm] \qquad \dfrac{4(x - \Delta \lfloor x/\Delta \rfloor)^2}{9\sqrt{\pi}} \cdot \left[\Delta \lfloor x/\Delta \rfloor\, e^{-\Delta^2 \lfloor x/\Delta \rfloor^2} - x e^{-x^2}\right] - \\[2mm] \qquad \dfrac{(x - \Delta \lfloor x/\Delta \rfloor)^3}{18\sqrt{\pi}} \cdot \left[\left[1 - 2\Delta^2 \lfloor x/\Delta \rfloor^2\right] e^{-\Delta^2 \lfloor x/\Delta \rfloor^2} + \left[1 - 2x^2\right] e^{-x^2}\right] + \\[2mm] \qquad \dfrac{(x - \Delta \lfloor x/\Delta \rfloor)^4}{42\sqrt{\pi}} \cdot \left[\Delta \lfloor x/\Delta \rfloor \left[1 - \dfrac{2\Delta^2 \lfloor x/\Delta \rfloor^2}{3}\right] e^{-\Delta^2 \lfloor x/\Delta \rfloor^2} - x \left[1 - \dfrac{2x^2}{3}\right] e^{-x^2}\right] + \\[2mm] \qquad \dfrac{(x - \Delta \lfloor x/\Delta \rfloor)^5}{1260\sqrt{\pi}} \cdot \left[\left[1 - 4\Delta^2 \lfloor x/\Delta \rfloor^2 + \dfrac{4\Delta^4 \lfloor x/\Delta \rfloor^4}{3}\right] e^{-\Delta^2 \lfloor x/\Delta \rfloor^2} + \left[1 - 4x^2 + \dfrac{4x^4}{3}\right] e^{-x^2}\right] \end{array} \tag{78}$$
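A sketch of the first-order form, Equation (75), noting that the sum of the constants *ck* telescopes to erf(Δ⌊*x*/Δ⌋). The sketch is illustrative only: in practice the *ck* are precomputed constants, whereas here `math.erf` supplies them directly.

```python
import math

def f1_delta(x, d=0.5):
    """f_{1,Delta}(x) of Equation (75): accumulated constants c_k (their sum
    telescopes to erf(d*floor(x/d))) plus a first-order spline estimate of
    the residual integral over [d*floor(x/d), x]."""
    n0 = math.floor(x / d)
    a = d * n0                       # left end of the residual interval
    r = x - a
    sp = math.sqrt(math.pi)
    ea, ex = math.exp(-a * a), math.exp(-x * x)
    return math.erf(a) + r / sp * (ea + ex) - r * r / (3 * sp) * (a * ea - x * ex)
```

Even this first-order form tracks erf(*x*) to within roughly 10<sup>−4</sup> for Δ = 1/2.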

*5.2. Results*

For a resolution of Δ = 1/2, the coefficients are tabulated in Table 7.

A resolution of Δ = 1/2 yields a relative error bound of 1.16 × 10<sup>−5</sup> for a second-order approximation, 1.35 × 10<sup>−9</sup> for a fourth-order approximation, 7.15 × 10<sup>−14</sup> for a sixth-order approximation, and 9.03 × 10<sup>−37</sup> for a sixteenth-order approximation. These bounds are based on 10,000 equally spaced samples in the interval [0, 8].


**Table 7.** Coefficient values for the case of Δ = 1/2.

The variation of the relative error bound with resolution, and order, is detailed in Figure 12. The nature of the variation of the relative error, for orders two, three, and four, is shown in Figure 13 for a resolution of 0.5. It is possible to obtain better results by using nonuniformly spaced intervals but the improvement, in general, does not warrant the increase in complexity.

**Figure 12.** Graph of the relative error bound, versus the order of approximation, for various set resolutions.

**Figure 13.** Graph of the relative errors, based on a resolution of Δ = 0.5, in second to fourth order approximations to erf(*x*).

#### **6. A Dynamic System to Yield Improved Approximations**

It is possible to utilize the approximations detailed in Theorems 1 and 5 as the basis for determining new approximations with a lower relative error. The approach is indirect and based on considering the feedback system illustrated in Figure 14, which has varying feedback. The differential equation characterizing the system is

$$y'(t) + f_M(t)\, y(t) = x(t). \tag{79}$$

**Figure 14.** Feedback system with dynamically varying (modulated) feedback.

For specific input, *x*(*t*), and modulated feedback, *fM*(*t*), signals, the output has a known form. For example, for the case of *x*(*t*) = *fM*(*t*) = erf(*t*)*u*(*t*), the output signal, assuming zero initial conditions, is

$$y(t) = 1 - \exp\left[\frac{1}{\sqrt{\pi}} \cdot \left[1 - e^{-t^2}\right] - t\, \text{erf}(t)\right], \ t \ge 0. \tag{80}$$

For the case of

$$\mathbf{x}(t) = f\_M(t) = \frac{4}{\sqrt{\pi}} e^{-t^2} \text{erf}(t)\boldsymbol{u}(t) \tag{81}$$

the output signal, assuming zero initial conditions, is

$$y(t) = 1 - \exp\left[-\text{erf}^2(t)\right], \ t \ge 0. \tag{82}$$

This case facilitates approximations for the error function, which can be made arbitrarily accurate and which are valid for the positive real line.
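The closed form of Equation (82) can be checked by integrating Equation (79) numerically. The following sketch uses a classical fourth-order Runge–Kutta step (an illustrative check, not the paper's method; function names are assumptions):

```python
import math

def f_m(t):
    # Input and feedback signal of Equation (81), for t >= 0.
    return 4.0 / math.sqrt(math.pi) * math.exp(-t * t) * math.erf(t)

def simulate(T, h=0.001):
    """Integrate y'(t) = x(t) - f_M(t) y(t) (Equation (79)) with the
    classical fourth-order Runge-Kutta method, y(0) = 0 and x = f_M."""
    def deriv(t, y):
        return f_m(t) * (1.0 - y)    # since x(t) = f_M(t)
    y, t = 0.0, 0.0
    for _ in range(round(T / h)):
        k1 = deriv(t, y)
        k2 = deriv(t + h / 2, y + h / 2 * k1)
        k3 = deriv(t + h / 2, y + h / 2 * k2)
        k4 = deriv(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y
```

The numerical solution agrees with *y*(*t*) = 1 − exp[−erf²(*t*)] of Equation (82) to well within the integrator's error.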

**Theorem 7.** *Dynamical System Approximations for Error Function. Based on the differential equation specified by Equation (79), an nth-order approximation to* erf(*x*)*, for the case of x* ≥ 0*, can be defined according to*

$$f\_n(\mathbf{x}) = \sqrt{p\_{n,0} + p\_{n,1}(\mathbf{x})e^{-\mathbf{x}^2} + p\_{n,2}(\mathbf{x})e^{-2\mathbf{x}^2}},\tag{83}$$

*where, for the case of n* ≥ 2*:*

$$\begin{aligned} p_{n,1}(x) &= \alpha_0 + \alpha_2 x^2 + \dots + \alpha_m x^m, \quad m = \begin{cases} n-1, & n \text{ odd} \\ n, & n \text{ even} \end{cases} \\ p_{n,2}(x) &= \beta_0 + \beta_2 x^2 + \dots + \beta_{2n} x^{2n} \\ p_{n,0} &= -(\alpha_0 + \beta_0) \end{aligned} \tag{84}$$

*and with*

$$\begin{aligned} \alpha_m &= \frac{-4}{\pi} \cdot c_{n,m}\, a_{m,0} \\ \alpha_{m-2i} &= \frac{(m-2i+2)\, \alpha_{m-2i+2}}{2} - \frac{4}{\pi} \cdot c_{n,m-2i}\, a_{m-2i,0}, \; i \in \left\{1, \ldots, \frac{m}{2} - 1\right\} \\ \alpha_0 &= \alpha_2 - \frac{4}{\pi} \cdot c_{n,0}\, a_{0,0} \end{aligned} \tag{85}$$

$$\begin{aligned} \beta_{2n} &= \frac{-2}{\pi} \cdot (-1)^n c_{n,n}\, a_{n,n} \\ \beta_{2n-2i} &= \frac{(n-i+1)\, \beta_{2n-2i+2}}{2} - \frac{2}{\pi} \sum_{k=n-i}^{\min\{2n-2i,\, n\}} (-1)^k c_{n,k}\, a_{k,2(n-i)-k}, \; i \in \{1, \dots, n-1\} \\ \beta_0 &= \frac{\beta_2}{2} - \frac{2}{\pi} \cdot c_{n,0}\, a_{0,0} \end{aligned} \tag{86}$$

*Here, the coefficients ai*,*j, i*, *j* ∈ {0, 1, . . . , *n*} *are defined by the expansion*

$$p(k, \mathbf{x}) = a\_{k,0} + a\_{k,1}\mathbf{x} + a\_{k,2}\mathbf{x}^2 + \dots + a\_{k,k}\mathbf{x}^k, \ k \in \{0, 1, \dots, n\}, \tag{87}$$

*arising from the polynomials (Equation (26))*

$$p(k, \mathbf{x}) = p^{(1)}(k - 1, \mathbf{x}) - 2xp[k - 1, \mathbf{x}], \quad p(0, \mathbf{x}) = 1 \tag{88}$$

*Finally, it is the case that*

$$\lim\_{n \to \infty} f\_n(x) = \text{erf}(x), \quad x \ge 0,\tag{89}$$

*with the convergence being uniform.*

**Proof.** The proof is detailed in Appendix D. □

#### *6.1. Explicit Approximations*

Explicit approximations for orders zero to four for erf(*x*), *x* ≥ 0 are:

$$f_0(x) = \frac{1}{\sqrt{\pi}} \sqrt{3 - 2e^{-x^2} - e^{-2x^2}} \tag{90}$$

$$f\_1(\mathbf{x}) = \frac{1}{\sqrt{\pi}} \sqrt{\frac{19}{6} - 2e^{-\mathbf{x}^2} - \frac{7e^{-2\mathbf{x}^2}}{6} \left[1 + \frac{2\mathbf{x}^2}{7}\right]} \tag{91}$$

$$f_2(x) = \frac{1}{\sqrt{\pi}} \sqrt{\frac{63}{20} - \frac{29e^{-x^2}}{15} \left[1 - \frac{x^2}{29}\right] - \frac{73e^{-2x^2}}{60} \left[1 + \frac{26x^2}{73} + \frac{4x^4}{73}\right]} \tag{92}$$

$$f\_3(\mathbf{x}) = \frac{1}{\sqrt{\pi}} \sqrt{\frac{22}{7} - \frac{40e^{-\mathbf{x}^2}}{21} \left[1 - \frac{\mathbf{x}^2}{20}\right] - \frac{26e^{-2\mathbf{x}^2}}{21} \left[1 + \frac{10\mathbf{x}^2}{26} + \frac{\mathbf{x}^4}{13} + \frac{\mathbf{x}^6}{130}\right]} \tag{93}$$

$$f_4(x) = \frac{1}{\sqrt{\pi}} \sqrt{\frac{377}{120} - \frac{596e^{-x^2}}{315} \left[1 - \frac{17x^2}{298} + \frac{x^4}{1192}\right] - \frac{3149e^{-2x^2}}{2520} \left[1 + \frac{1258x^2}{3149} + \frac{278x^4}{3149} + \frac{112x^6}{9447} + \frac{8x^8}{9447}\right]} \tag{94}$$
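A numerical check of, for example, Equation (92) is direct (an illustrative sketch; `math.erf` is used only as the reference):

```python
import math

def f2(x):
    """Second-order dynamic-system approximation of Equation (92)."""
    e1, e2 = math.exp(-x * x), math.exp(-2 * x * x)
    inner = (63 / 20
             - 29 * e1 / 15 * (1 - x * x / 29)
             - 73 * e2 / 60 * (1 + 26 * x * x / 73 + 4 * x ** 4 / 73))
    # Guard against tiny negative rounding near x = 0, where inner = 0 exactly.
    return math.sqrt(max(inner, 0.0)) / math.sqrt(math.pi)
```

Note the bounded relative error as *x* → ∞: *f*2(∞) = √(63/(20π)) ≈ 1.0013.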

#### *6.2. Results*

The relative error bounds associated with the approximations to erf(*x*) are detailed in Table 8. The graphs of the relative errors in the approximations are shown in Figure 15. The clear advantage of the proposed approximations is evident, with the improvement increasing with the order of the initial approximation (i.e., a function with an initial lower relative error bound leads to an increasingly lower relative error bound). The other clear advantage of the approximations, as is evident in Figure 15, is that the relative error is bounded as *x* → ∞.


**Table 8.** Relative error bounds, over the interval (0, ∞), for approximations to erf(*x*) as defined in Theorem 7.

**Figure 15.** Graph of the relative errors in approximations, of orders one to eight, to erf(*x*) as defined in Theorem 7.

#### *6.3. Extension*

By utilizing the approximations detailed in Theorem 5, similar approximations can be detailed, with lower relative error. For example, the first-order approximation, *f*1,4, which is based on four equal subintervals and is defined by Equation (67), yields the approximation

$$f_{1,4}(x) = \frac{1}{\sqrt{\pi}} \sqrt{\begin{array}{l} \dfrac{128{,}177}{40{,}800} - \dfrac{e^{-x^2}}{2} - \dfrac{16e^{-17x^2/16}}{17} - \dfrac{4e^{-5x^2/4}}{5} - \dfrac{16e^{-25x^2/16}}{25} \\[2mm] \quad - \dfrac{25e^{-2x^2}}{96} \left[1 + \dfrac{2x^2}{25}\right] \end{array}} \tag{95}$$

which has a relative error bound of 2.83 × 10<sup>−6</sup>. With an optimum transition point of 3.292, the original approximation has a relative error bound of 7.21 × 10<sup>−5</sup> (see Table 5).

#### *6.4. Notes*

First, the constants *pn*,0, *n* ∈ {0, 1, . . .}, as defined in Equation (84), form a sequence that in the limit converges to one. It then follows that the corresponding sequence of constants evident in the explicit forms of Equations (90)–(94) converges to π:

$$3, \ \frac{19}{6}, \ \frac{63}{20}, \ \frac{22}{7}, \ \frac{377}{120}, \ \frac{174{,}169}{55{,}440}, \ \frac{4{,}528{,}409}{1{,}441{,}440}, \ \dots \tag{96}$$
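The convergence of this sequence to π is easily checked numerically; a small sketch (the first constant, 3, is the constant appearing in Equation (90)):

```python
import math
from fractions import Fraction

# The sequence of constants in Equation (96), kept exact as rationals.
seq = [Fraction(3), Fraction(19, 6), Fraction(63, 20), Fraction(22, 7),
       Fraction(377, 120), Fraction(174169, 55440), Fraction(4528409, 1441440)]

# Absolute deviation from pi decreases monotonically along the sequence.
errs = [abs(float(q) - math.pi) for q in seq]
```

Each constant is a strictly better rational approximation to π than its predecessor; the seventh is already within 10<sup>−5</sup>.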

Second, the square root functional structure has been utilized for approximations to the error function, as is evident from the approximations detailed in Table 1. It is easy to conclude that the form

$$f\_n(\mathbf{x}) = \sqrt{p\_{n,0} - p\_{n,1}(\mathbf{x})e^{-k\_1\mathbf{x}^2} - p\_{n,2}(\mathbf{x})e^{-k\_2\mathbf{x}^2} - \dots} \tag{97}$$

is well suited for approximating the error function.

#### **7. Applications**

This section details indicative applications for the approximations to the error function that have been detailed above.

The distinct analytical forms specified in Theorems 1, 3, 5 and 7, for approximations to the error function, in general, facilitate analysis for different applications. For example, the form detailed in Theorem 1 underpins approximations to exp(−*x*2), as detailed in Section 7.2. The form detailed in Theorem 7 underpins analytical approximations for the power associated with the output of a nonlinearity modeled by the error function when subject to a sinusoidal signal. The approximations are detailed in Section 7.6.

For applications where a set relative error bound over a set interval is required, the suitable approximation will depend, in part, on the domain over which an approximation is required, as well as the level of the relative error bound that is acceptable. For example, the approximations detailed in Theorems 1 and 3 lead to simple analytical forms and modest relative error bounds over [0, ∞) when used with an appropriate transition point for the approximation erf(*x*) ≈ 1. Without the use of a transition point, such approximations are likely to be best suited to a restricted domain, for example, the domain [0, 3/√2], which is consistent with the three sigma case arising from a Gaussian distribution. The fourth-order approximations, as specified by Equations (36) and (54), have relative error bounds, respectively, of 1.02 × 10<sup>−3</sup> and 1.90 × 10<sup>−4</sup> over [0, 3/√2]. Section 7.1 provides examples of approximations that are consistent with set relative error bounds, over the interval [0, ∞), of 10<sup>−4</sup>, 10<sup>−6</sup>, 10<sup>−10</sup>, and 10<sup>−16</sup>.

#### *7.1. Error Function Approximations: Set Relative Error Bounds*

Consider the case where an approximation for the error function, with a relative error bound over the positive real line of 10<sup>−4</sup>, is required. A 47th-order Taylor series, with a transition point of *xo* = 2.752, yields a relative error bound of 1.00 × 10<sup>−4</sup>.
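The Taylor-plus-transition construction of Equation (59) is simple to verify numerically; a sketch (illustrative only; the function names are assumptions, and `math.erf` serves as the reference):

```python
import math

def taylor_erf(x, n):
    """Degree-n (n odd) Maclaurin polynomial for erf(x)."""
    s, term = 0.0, x
    for k in range((n - 1) // 2 + 1):
        s += term / (2 * k + 1)
        term *= -x * x / (k + 1)     # term is (-1)^k x^(2k+1) / k!
    return 2.0 / math.sqrt(math.pi) * s

def erf_taylor_switched(x, n=47, xo=2.752):
    # Equation (59): Taylor polynomial below xo, erf(x) ~ 1 above it.
    return taylor_erf(x, n) if x < xo else 1.0

# Sampled relative error over (0, 6]:
rel = max(abs(1 - erf_taylor_switched(k / 100) / math.erf(k / 100))
          for k in range(1, 601))
```

The sampled maximum relative error is consistent with the quoted 10<sup>−4</sup> level.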

An eighth-order spline approximation, with a transition point of *xo* = 2.963, yields a relative error bound of 2.79 × 10<sup>−5</sup>. The approximation, according to Equation (56), is

$$f_8(x) = \frac{x}{\sqrt{\pi}} \cdot u[x_o - x] \left[\begin{array}{l} 1 - \dfrac{7x^2}{102} + \dfrac{x^4}{340} - \dfrac{x^6}{18{,}564} + \dfrac{x^8}{52{,}509{,}400} + \\[2mm] \left[\begin{array}{l} 1 + \dfrac{41x^2}{102} + \dfrac{101x^4}{1020} + \dfrac{1591x^6}{92{,}820} + \dfrac{4793x^8}{2{,}162{,}160} + \\[2mm] \dfrac{2017x^{10}}{9{,}189{,}180} + \dfrac{38x^{12}}{2{,}297{,}295} + \dfrac{31x^{14}}{54{,}359{,}425} + \cdots \end{array}\right] e^{-x^2} \end{array}\right] + 1 - u[x_o - x] \tag{98}$$

A seventh-order approximation, with a transition point of *xo* = 2.65, yields a relative error bound of 1.79 × 10<sup>−4</sup>.

A first-order spline approximation, based on four equal subintervals [0, *x*/4], [*x*/4, *x*/2], [*x*/2, 3*x*/4] and [3*x*/4, *x*], is defined according to

$$f_{1,4}(x) = \left[\begin{array}{l} \dfrac{x}{4\sqrt{\pi}} \cdot \left[1 + 2\exp\left[\dfrac{-x^2}{16}\right] + 2\exp\left[\dfrac{-x^2}{4}\right] + 2\exp\left[\dfrac{-9x^2}{16}\right] + \exp(-x^2)\right] + \\[2mm] \dfrac{x^3}{48\sqrt{\pi}} \cdot \exp(-x^2) \end{array}\right] u[x_o - x] + 1 - u[x_o - x] \tag{99}$$

and yields a relative error bound of 7.21 × 10<sup>−5</sup> with the transition point *xo* = 3.292.

A dynamic constant plus a spline approximation of order two, based on a resolution of Δ = 19/20, achieves a relative error bound of 8.33 × 10<sup>−5</sup> (10,000 points in the interval [0, 5]). The approximation is

$$\begin{array}{l} f_{2,\Delta}(x) = {\displaystyle \sum_{k=0}^{\lfloor 20x/19 \rfloor}} c_k + \dfrac{1}{\sqrt{\pi}} \left[x - \dfrac{19}{20} \cdot \left\lfloor \dfrac{20x}{19} \right\rfloor\right] \left[\exp\left[\dfrac{-361}{400} \left\lfloor \dfrac{20x}{19} \right\rfloor^2\right] + e^{-x^2}\right] - \\[3mm] \qquad \dfrac{2}{5\sqrt{\pi}} \left[x - \dfrac{19}{20} \cdot \left\lfloor \dfrac{20x}{19} \right\rfloor\right]^2 \left[\dfrac{19}{20} \cdot \left\lfloor \dfrac{20x}{19} \right\rfloor \exp\left[\dfrac{-361}{400} \left\lfloor \dfrac{20x}{19} \right\rfloor^2\right] - x e^{-x^2}\right] - \\[3mm] \qquad \dfrac{1}{30\sqrt{\pi}} \left[x - \dfrac{19}{20} \cdot \left\lfloor \dfrac{20x}{19} \right\rfloor\right]^3 \left[\left[1 - \dfrac{361}{200} \cdot \left\lfloor \dfrac{20x}{19} \right\rfloor^2\right] \exp\left[\dfrac{-361}{400} \left\lfloor \dfrac{20x}{19} \right\rfloor^2\right] + (1 - 2x^2) e^{-x^2}\right] \end{array} \tag{100}$$

where

$$\begin{array}{l} c_0 = 0, \quad c_k = \text{erf}\left[\dfrac{19k}{20}\right] - \text{erf}\left[\dfrac{19(k-1)}{20}\right], \ k \in \{1, 2, \ldots\}, \\[2mm] c_1 = 0.82089081, \ c_2 = 0.17189962, \ c_3 = 7.1539145 \times 10^{-3}, \ c_4 = 5.5579 \times 10^{-5}, \ c_5 = c_6 = \cdots = 0. \end{array} \tag{101}$$

Here, the approximation erf(*x*) ≈ 1, for *x* ≥ 57/20 (after three intervals), can be utilized without impacting the relative error bound.

Utilizing a fourth-order spline approximation and iteration consistent with Theorem 7, the approximation

$$f_4(x) = \frac{1}{\sqrt{\pi}} \sqrt{\frac{377}{120} - \frac{596e^{-x^2}}{315} \left[1 - \frac{17x^2}{298} + \frac{x^4}{1192}\right] - \frac{3149e^{-2x^2}}{2520} \left[1 + \frac{1258x^2}{3149} + \frac{278x^4}{3149} + \frac{112x^6}{9447} + \frac{8x^8}{9447}\right]} \tag{102}$$

yields a relative error bound of 1.82 × 10<sup>−5</sup>.
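The stated bound can be checked by direct evaluation; the following Python sketch transcribes Equation (102) term by term and measures the relative error against `math.erf` on a grid:

```python
import math

def f4(x):
    # Equation (102); valid for x > 0 (the max() guards against tiny
    # negative rounding of the inner expression near the origin).
    inner = (377/120
             - (596/315) * math.exp(-x * x)
               * (1 - 17 * x * x / 298 + x**4 / 1192)
             - (3149/2520) * math.exp(-2 * x * x)
               * (1 + 1258*x*x/3149 + 278*x**4/3149
                  + 112*x**6/9447 + 8*x**8/9447))
    return math.sqrt(max(inner, 0.0)) / math.sqrt(math.pi)

rel_err = max(abs(f4(x) - math.erf(x)) / math.erf(x)
              for x in (0.01 * i for i in range(1, 1201)))
```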

Approximations that are consistent with higher-order relative error bounds are detailed in Table 9.

**Table 9.** Approximations that are consistent with a set relative error bound. The actual relative error bound is specified by re*B*.


#### *7.2. Approximation for exp(−x<sup>2</sup>)*

An *n*th-order approximation to the Gaussian function exp(−*x*<sup>2</sup>) is detailed in the following theorem:

**Theorem 8.** *Approximation for the Gaussian Function. An nth-order approximation, g<sub>n</sub>, to the Gaussian function* exp(−*x*<sup>2</sup>) *is*

$$g\_n(\mathbf{x}) = \frac{\sum\_{k=0}^n c\_{n,k}(k+1)\mathbf{x}^k p(k,0)}{1 + \sum\_{k=0}^n c\_{n,k}(-1)^{k+1}\mathbf{x}^k \left[p(k,\mathbf{x})[k+1-2\mathbf{x}^2] + \mathbf{x}p^{(1)}(k,\mathbf{x})\right]} \tag{103}$$

*where cn*,*<sup>k</sup> is defined by Equation (21) and p*(*k*, *x*) *is defined by Equation (26).*

**Proof.** The proof is detailed in Appendix E. □

#### 7.2.1. Approximations

Approximations to exp(−*x*<sup>2</sup>), of orders zero to five, are:

$$g\_0(\mathbf{x}) = \frac{1}{1 + 2\mathbf{x}^2} \quad g\_1(\mathbf{x}) = \frac{1}{1 + \mathbf{x}^2 + \frac{2\mathbf{x}^4}{5}} \quad g\_2(\mathbf{x}) = \frac{1 - \mathbf{x}^2/10}{1 + \frac{9\mathbf{x}^2}{10} + \frac{2\mathbf{x}^4}{5} + \frac{2\mathbf{x}^6}{15}} \tag{104}$$

$$g\_3(\mathbf{x}) = \frac{1 - \mathbf{x}^2/7}{1 + \frac{6\mathbf{x}^2}{7} + \frac{5\mathbf{x}^4}{14} + \frac{2\mathbf{x}^6}{21} + \frac{2\mathbf{x}^8}{105}} \quad g\_4(\mathbf{x}) = \frac{1 - \mathbf{x}^2/6 + \mathbf{x}^4/252}{1 + \frac{5\mathbf{x}^2}{6} + \frac{85\mathbf{x}^4}{252} + \frac{11\mathbf{x}^6}{126} + \frac{\mathbf{x}^8}{63} + \frac{2\mathbf{x}^{10}}{945}} \tag{105}$$

$$g\_5(\mathbf{x}) = \frac{1 - 2\mathbf{x}^2/11 + \mathbf{x}^4/132}{1 + \frac{9\mathbf{x}^2}{11} + \frac{43\mathbf{x}^4}{132} + \frac{\mathbf{x}^6}{12} + \frac{\mathbf{x}^8}{66} + \frac{\mathbf{x}^{10}}{495} + \frac{2\mathbf{x}^{12}}{10{,}395}}\tag{106}$$
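These rational approximants are simple to evaluate. The sketch below (Python; the *g*<sub>4</sub> coefficients transcribed from Equation (105), which are partially reconstructed here and so should be treated as assumptions) compares orders 0, 2, 4, and 5 against exp(−*x*<sup>2</sup>) at *x* = 1, where the error decreases monotonically with order:

```python
import math

# Equations (104)-(106): rational approximations to exp(-x^2).
def g0(x): return 1 / (1 + 2 * x**2)
def g2(x):
    return (1 - x**2/10) / (1 + 9*x**2/10 + 2*x**4/5 + 2*x**6/15)
def g4(x):
    return (1 - x**2/6 + x**4/252) / (
        1 + 5*x**2/6 + 85*x**4/252 + 11*x**6/126 + x**8/63 + 2*x**10/945)
def g5(x):
    return (1 - 2*x**2/11 + x**4/132) / (
        1 + 9*x**2/11 + 43*x**4/132 + x**6/12
        + x**8/66 + x**10/495 + 2*x**12/10395)

exact = math.exp(-1.0)
errs = [abs(g(1.0) - exact) / exact for g in (g0, g2, g4, g5)]
```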

#### 7.2.2. Results

The relative errors in the above defined approximations to exp(−*x*<sup>2</sup>) are detailed in Figure 16 for approximations of orders 0, 2, 4, 6, 8, 10, and 12, along with the relative errors in Taylor series of orders 1, 3, 5, ..., 15. The clear superiority of the defined approximations is evident.

**Figure 16.** Graph of the magnitude of the relative errors in approximations to exp(−*x*<sup>2</sup>), as defined by Equation (103), of orders 0, 2, 4, 6, 8, 10 and 12. The dotted curves are the relative errors associated with Taylor series of orders 1, 3, 5, 7, 9, 11, 13 and 15.

#### 7.2.3. Comparison

The following *n*th-order approximation for exp(−*x*<sup>2</sup>) has been proposed in Equation (77) of [19]:

$$h\_n(\mathbf{x}) = \frac{1 - c\_{n,0}\mathbf{x}^2 + c\_{n,1}\mathbf{x}^4 - c\_{n,2}\mathbf{x}^6 + \dots + c\_{n,n}(-1)^{n+1}\mathbf{x}^{2n+2}}{1 + c\_{n,0}\mathbf{x}^2 + c\_{n,1}\mathbf{x}^4 + c\_{n,2}\mathbf{x}^6 + \dots + c\_{n,n}\mathbf{x}^{2n+2}} \tag{107}$$


where *c<sub>n,k</sub>* is defined by Equation (21). The relative error bounds over the interval [0, 3/√2] (the three-sigma bound case for Gaussian probability distributions) for this approximation, and for the approximation defined by Equation (103), are detailed in Table 10. The tabulated results clearly show that the approximation defined by Equation (107) is more accurate than that defined by Equation (103). The improvement is consistent with the higher-order Padé approximant being used.

**Table 10.** Relative error bounds for approximations over the interval [0, 3/√2].


The following approximations (seventh- and fifth-order, respectively) yield relative error bounds of less than 0.001 over the interval [0, 3/√2]:

$$g\_7(\mathbf{x}) = \frac{1 - \frac{\mathbf{x}^2}{5} + \frac{\mathbf{x}^4}{78} - \frac{\mathbf{x}^6}{4290}}{1 + \frac{4\mathbf{x}^2}{5} + \frac{61\mathbf{x}^4}{195} + \frac{34\mathbf{x}^6}{429} + \frac{83\mathbf{x}^8}{5720} + \frac{\mathbf{x}^{10}}{495} + \frac{7\mathbf{x}^{12}}{32{,}175} + \frac{4\mathbf{x}^{14}}{225{,}225} + \frac{2\mathbf{x}^{16}}{2{,}027{,}025}}\tag{108}$$

$$h\_5(x) = \frac{1 - \frac{x^2}{2} + \frac{5x^4}{44} - \frac{x^6}{66} + \frac{x^8}{792} - \frac{x^{10}}{15,840} + \frac{x^{12}}{665,280}}{1 + \frac{x^2}{2} + \frac{5x^4}{44} + \frac{x^6}{66} + \frac{x^8}{792} + \frac{x^{10}}{15,840} + \frac{x^{12}}{665,280}} \tag{109}$$

A twenty-seventh-order Taylor series approximation yields a relative error bound of 1.03 × 10<sup>−3</sup>.
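Both claims are easy to verify numerically; the sketch below (Python) transcribes Equations (108) and (109) and evaluates the maximum relative error on a grid over [0, 3/√2]:

```python
import math

def g7(x):
    # Equation (108); thousand separators read as commas.
    num = 1 - x**2/5 + x**4/78 - x**6/4290
    den = (1 + 4*x**2/5 + 61*x**4/195 + 34*x**6/429 + 83*x**8/5720
           + x**10/495 + 7*x**12/32175 + 4*x**14/225225 + 2*x**16/2027025)
    return num / den

def h5(x):
    # Equation (109): numerator and denominator share the same c_{5,k}.
    c = [1/2, 5/44, 1/66, 1/792, 1/15840, 1/665280]
    den = 1 + sum(ck * x**(2*k + 2) for k, ck in enumerate(c))
    num = 1 + sum(ck * (-x*x)**(k + 1) for k, ck in enumerate(c))
    return num / den

xs = [3/math.sqrt(2) * i / 1000 for i in range(1, 1001)]
e_g7 = max(abs(g7(x) - math.exp(-x*x)) / math.exp(-x*x) for x in xs)
e_h5 = max(abs(h5(x) - math.exp(-x*x)) / math.exp(-x*x) for x in xs)
```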

#### *7.3. Upper and Lower Bounded Approximations to Error Function*

Establishing bounds for erf(*x*) has received modest research interest, e.g., [31]. Published bounds for erf(*x*), for the case of *x* > 0, include those of Chu [32]:

$$\sqrt{1-\exp[-p\mathbf{x}^2]} \le \text{erf}(\mathbf{x}) \le \sqrt{1-\exp[-q\mathbf{x}^2]}, \ p \in (0,1], q \in [4/\pi,\infty]. \tag{110}$$

Corollary 4.2 of Neuman [33]:

$$\frac{2\mathbf{x}}{\sqrt{\pi}}\exp\left[\frac{-\mathbf{x}^2}{3}\right] \le \text{erf}(\mathbf{x}) \le \frac{4\mathbf{x}}{3\sqrt{\pi}} \left[1 + \frac{\exp(-\mathbf{x}^2)}{2}\right] \tag{111}$$

and refinements to the form proposed by Chu [32], e.g., Yang [34,35], Corollary 3.4 of [35]:

$$\sqrt{1-\frac{20}{3\pi}\left[1-\frac{\pi}{4}\right]\exp\left[\frac{-8\mathbf{x}^{2}}{5}\right]-\frac{8}{3}\left[1-\frac{5}{2\pi}\right]\exp(-\mathbf{x}^{2})} \le \text{erf}(\mathbf{x}) \le \sqrt{1-\lambda(p\_{0})\exp(-p\_{0}\mathbf{x}^{2})-\left[1-\lambda(p\_{0})\right]\exp\left[-\mu(p\_{0})\mathbf{x}^{2}\right]} \tag{112}$$

where

$$\begin{split}p\_0 &= \frac{21\pi - 60 + \sqrt{3(147\pi^2 - 920\pi + 1440)}}{30(\pi - 3)},\\ \lambda(p) &= \frac{4[7\pi - 20 - 5(\pi - 3)p]}{\pi(15p^2 - 40p + 28)},\ \mu(p) = \frac{4(5p - 7)}{5(3p - 4)}.\end{split} \tag{113}$$

The relative error in these bounds is detailed in Figure 17.

**Figure 17.** Relative error in upper and lower bounds to erf(*x*) as, respectively, defined by Equations (110)–(112). The parameters *p* = 1 and *q* = 4/π have been used for the bounds defined by Equation (110).
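The simplest of the above results, Equation (110), can be verified numerically at the admissible endpoint parameter values *p* = 1 and *q* = 4/π:

```python
import math

# Chu's bounds, Equation (110), with p = 1 and q = 4/pi.
def chu_lower(x):
    return math.sqrt(1 - math.exp(-x * x))

def chu_upper(x):
    return math.sqrt(1 - math.exp(-4 * x * x / math.pi))

ok = all(chu_lower(x) <= math.erf(x) <= chu_upper(x)
         for x in (0.01 * i for i in range(1, 500)))
```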

Utilizing the results of Lemma 1, it follows that any of the approximations detailed in Theorems 4, 5, 6, or 7 can be utilized to create upper and lower bounded functions for erf(*x*), *x* > 0, of arbitrary accuracy and with an arbitrary relative error bound. For example, the approximation *f*1,4 specified by Equation (99) yields the functional bounds:

$$\frac{f\_{1,4}(\mathbf{x})}{1+\varepsilon\_B} \le \text{erf}(\mathbf{x}) \le \frac{f\_{1,4}(\mathbf{x})}{1-\varepsilon\_B}, \quad \varepsilon\_B = 7.21 \times 10^{-5}, \quad \mathbf{x} > 0,\tag{114}$$

with a relative error bound of 8.33 × 10<sup>−5</sup> for the lower bounded function and 1.44 × 10<sup>−4</sup> for the upper bounded function. Such accuracy is better than that of the bounds underpinning the results shown in Figure 17. The sixteenth-order approximation, *f*<sub>4,16</sub>, based on four equal subintervals, specified in Appendix C, and when used with a transition point *xo* = 7.1544, leads to the functional bounds

$$\frac{f\_{4,16}(\mathbf{x})}{1+\varepsilon\_B} \le \text{erf}(\mathbf{x}) \le \frac{f\_{4,16}(\mathbf{x})}{1-\varepsilon\_B}, \quad \varepsilon\_B = 4.82 \times 10^{-16}, \text{ } \mathbf{x} > 0,\tag{115}$$

with a relative error bound of less than 9.64 × 10<sup>−16</sup> for the lower bounded function and 9.32 × 10<sup>−16</sup> for the upper bounded function.

#### *7.4. New Series for Error Function*

Consider the exact results

$$\text{erf}(\mathbf{x}) = f\_n(\mathbf{x}) + \varepsilon\_n(\mathbf{x}), \ n \in \{0, 1, 2, \dots\}, \tag{116}$$

where *f<sub>n</sub>* is specified by Equation (24) and ε<sup>(1)</sup><sub>*n*</sub>(*x*) is specified by Equation (25). By utilizing a Taylor series approximation for exp(−*x*<sup>2</sup>) in ε<sup>(1)</sup><sub>*n*</sub>(*x*), and then integrating, an approximation for ε*<sub>n</sub>*(*x*) can be established. This leads to new series for the error function.

**Theorem 9.** *New Series for Error Function. Based on zero-, first-, and second-order approximations, the following series for the error function are valid:*

$$\text{erf}(x) = \frac{x}{\sqrt{\pi}} + \frac{x}{\sqrt{\pi}}\,e^{-x^2} + \frac{x^3}{\sqrt{\pi}}\left[\frac{1}{3} - \frac{3x^2}{2\cdot 5} + \frac{5x^4}{6\cdot 7} - \frac{7x^6}{24\cdot 9} + \frac{9x^8}{120\cdot 11} - \dots + \frac{(-1)^k (2k+1)x^{2k}}{(k+1)!\,(2k+3)} + \dots\right] \tag{117}$$

$$\text{erf}(x) = \frac{x}{\sqrt{\pi}} + \frac{x}{\sqrt{\pi}}\left[1 + \frac{x^2}{3}\right]e^{-x^2} + \frac{x^5}{6\sqrt{\pi}}\left[\frac{1}{5} - \frac{3x^2}{(3/2)\cdot 7} + \frac{5x^4}{4\cdot 9} - \frac{7x^6}{15\cdot 11} + \frac{9x^8}{72\cdot 13} - \dots + \frac{(-1)^k (2k+1)(k+1)!\,x^{2k}}{(2k+5)\prod\_{r=1}^{k}\sum\_{i=1}^{r}(2i+1)} + \dots\right] \tag{118}$$

$$\text{erf}(x) = \frac{x}{\sqrt{\pi}}\left[1 - \frac{x^2}{30}\right] + \frac{x}{\sqrt{\pi}}\left[1 + \frac{11x^2}{30} + \frac{x^4}{15}\right]e^{-x^2} + \frac{x^7}{60\sqrt{\pi}}\left[\frac{1\cdot 3}{1\cdot 3\cdot 7} - \frac{3\cdot 5\,x^2}{2\cdot 3\cdot 9} + \frac{5\cdot 7\,x^4}{4\cdot 5\cdot 11} - \frac{7\cdot 9\,x^6}{9\cdot 10\cdot 13} + \dots + \frac{(-1)^k (2k+1)(2k+3)(k+1)!\,x^{2k}}{3(2k+7)\prod\_{r=2}^{k+1}\left[2\sum\_{i=2}^{r} i\right]} + \dots\right] \tag{119}$$

*Further series, based on higher-order approximations, can also be established.*

**Proof.** The proof is detailed in Appendix F. □

#### Results

The relative errors associated with the zero- and second-order series are shown in Figures 18 and 19. Clearly, the relative error improves as the number of terms used in the series expansion increases. The significant improvement in the relative error, for small values of *x*, is evident. A comparison with the relative errors associated with Taylor series approximations, as shown in Figure 2, shows the improved performance.

**Figure 18.** Relative error in the approximations *f*0(*x*) and *f*0(*x*) + ε0(*x*) to erf(*x*) where the residual function ε0(*x*) is approximated by the stated order.

**Figure 19.** Relative error in the approximations *f*2(*x*) and *f*2(*x*) + ε2(*x*) to erf(*x*) where the residual function ε2(*x*) is approximated by the stated order.

The second-order approximation arising from Equation (118), i.e.,

$$\text{erf}(\mathbf{x}) \approx \frac{\mathbf{x}}{\sqrt{\pi}} + \frac{\mathbf{x}}{\sqrt{\pi}} \left[ 1 + \frac{\mathbf{x}^2}{3} \right] e^{-\mathbf{x}^2} + \frac{\mathbf{x}^5}{6\sqrt{\pi}} \left[ \frac{1}{5} - \frac{2\mathbf{x}^2}{7} \right] \tag{120}$$

yields a relative error bound of less than 0.001 for the interval [0, 0.87] and less than 0.01 for the interval [0, 1.1].
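As a quick numerical check of the zero-order series, Equation (117): with roughly twenty bracket terms, the series reproduces `math.erf` to near machine precision for moderate arguments.

```python
import math

def erf_series0(x, K=20):
    # Equation (117), truncated after the x^(2K) term of the bracket.
    rp = math.sqrt(math.pi)
    bracket = sum((-1)**k * (2*k + 1) * x**(2*k)
                  / (math.factorial(k + 1) * (2*k + 3)) for k in range(K + 1))
    return x/rp + x * math.exp(-x*x)/rp + x**3 * bracket / rp

err = max(abs(erf_series0(x) - math.erf(x)) for x in (0.25, 0.5, 1.0))
```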

#### *7.5. Complementary Demarcation Functions*

Consider a complementary function *eC* which is such that

$$
e\_C^2(\mathbf{x}) + \text{erf}^2(\mathbf{x}) = 1,\ \mathbf{x} \ge 0. \tag{121}
$$

With the approximation detailed in Theorem 7 (and by noting that lim<sub>*n*→∞</sub> *p*<sub>*n*,0</sub> = 1; see Equation (96)), it is the case that

$$e\_{\mathbb{C}}^2(\mathbf{x}) = \lim\_{n \to \infty} \left[ p\_{n,1}(\mathbf{x})e^{-\mathbf{x}^2} + p\_{n,2}(\mathbf{x})e^{-2\mathbf{x}^2} \right] \tag{122}$$

and, thus, *eC*(*x*) can be defined independently of the error function. This function is shown in Figure 20, along with erf<sup>2</sup>(*x*). These two functions act as complementary demarcation functions for the interval [0, ∞). The transition point is *xo* = 0.74373198514677, as erf<sup>2</sup>(*xo*) = *e*<sup>2</sup><sub>*C*</sub>(*xo*) = 1/2 at this point.

**Figure 20.** Graph of the functions *e*<sup>2</sup><sub>*C*</sub>(*x*) and erf<sup>2</sup>(*x*).
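The stated transition point can be verified directly: since the two functions sum to one, by Equation (121), they cross where erf<sup>2</sup>(*x*) = 1/2.

```python
import math

# Transition point of the complementary demarcation functions.
x_o = 0.74373198514677
gap = abs(math.erf(x_o) ** 2 - 0.5)
```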

#### *7.6. Power and Harmonic Distortion: Erf Modeled Nonlinearity*

The error function is often used to model nonlinearities, and the harmonic distortion created by such a nonlinearity is of interest. Examples include the harmonic distortion in magnetic recording, e.g., [2,36], and the harmonic distortion arising, in a communication context, from a power amplifier, e.g., [7]. For these cases, the interest is in determining, for a sinusoidal input signal *a* sin(2*π fot*), the harmonic distortion created by an error function nonlinearity over the input amplitude range [−2, 2].

Consider the output signal of a nonlinearity modelled by the error function:

$$y(t) = \text{erf}[a\sin(2\pi f\_0 t)].\tag{124}$$

For such a case, the output power is defined according to

$$P = \frac{1}{T} \int\_{0}^{T} y^2(t)dt = \frac{1}{T} \int\_{0}^{T} \text{erf}^2[a \sin(2\pi f\_o t)]dt, \quad T = 1/f\_o, \tag{125}$$

and the output amplitude associated with the *k*th harmonic is

$$\frac{\sqrt{2}}{\sqrt{T}} \int\_0^T \text{erf}[a \sin(2\pi f\_0 t)] \sin(2\pi kf\_0 t)dt.\tag{126}$$

To determine an analytical approximation to the output power, the approximations stated in Theorem 7 lead to relatively simple expressions. Consider the third-order approximation, as specified by Equation (93), which has a relative error bound of 2.03 × 10<sup>−4</sup> for the positive real line. For such a case, the output signal is approximated according to

$$y\_3(t) = \frac{1}{\sqrt{\pi}}\sqrt{\frac{22}{7} - \frac{40\,e^{-s^2}}{21}\left[1 - \frac{s^2}{20}\right] - \frac{26\,e^{-2s^2}}{21}\left[1 + \frac{10s^2}{26} + \frac{s^4}{13} + \frac{s^6}{130}\right]}, \qquad s = a\sin(2\pi f\_o t), \tag{127}$$

and is shown in Figure 21.

**Figure 21.** Graph of *y*3(*t*) for the case of *fo* = 1 and for amplitudes of *a* = 0.5, *a* = 1, *a* = 1.5 and *a* = 2.

The power in *y*<sup>3</sup> can be readily determined (e.g., via Mathematica) and it then follows that an approximation to the true power is

$$\begin{split} P(a) \approx\ & \frac{22}{7\pi} - \frac{40}{21\pi} I\_0\left[\frac{a^2}{2}\right]\left[1 - \frac{a^2}{40}\right]e^{-a^2/2} - \frac{26}{21\pi} I\_0\left[a^2\right]\left[1 + \frac{5a^2}{26} + \frac{41a^4}{1040} + \frac{a^6}{260}\right]e^{-a^2} - \\ & \frac{a^2}{21\pi} I\_1\left[\frac{a^2}{2}\right]e^{-a^2/2} + \frac{37a^2}{140\pi} I\_1\left[a^2\right]\left[1 + \frac{43a^2}{222} + \frac{2a^4}{111}\right]e^{-a^2} \end{split} \tag{128}$$

where *I*<sub>0</sub> and *I*<sub>1</sub>, respectively, are the zeroth- and first-order modified Bessel functions of the first kind. The variation in output power is shown in Figure 22.
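Equation (128) can be cross-checked against a direct numerical average of erf<sup>2</sup>[*a* sin(2*π fot*)] over one period. The sketch below (Python) evaluates *I*<sub>0</sub> and *I*<sub>1</sub> from their power series, so it is self-contained:

```python
import math

def bessel_i(n, z, terms=40):
    # Modified Bessel function of the first kind via its power series.
    return sum((z/2)**(2*k + n) / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

def P3(a):
    # Equation (128).
    I0h, I1h = bessel_i(0, a*a/2), bessel_i(1, a*a/2)
    I0f, I1f = bessel_i(0, a*a), bessel_i(1, a*a)
    eh, ef = math.exp(-a*a/2), math.exp(-a*a)
    return (22/(7*math.pi)
            - 40/(21*math.pi) * I0h * (1 - a*a/40) * eh
            - 26/(21*math.pi) * I0f
              * (1 + 5*a*a/26 + 41*a**4/1040 + a**6/260) * ef
            - a*a/(21*math.pi) * I1h * eh
            + 37*a*a/(140*math.pi) * I1f * (1 + 43*a*a/222 + 2*a**4/111) * ef)

def P_exact(a, n=4000):
    # Direct average of erf^2(a sin(theta)) over one period.
    return sum(math.erf(a * math.sin(2*math.pi*k/n))**2 for k in range(n)) / n

close = all(abs(P3(a) - P_exact(a)) < 2e-3 for a in (0.5, 1.0, 2.0))
```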

**Figure 22.** Graph of the input power, output power and ratio of output power to input power as the amplitude of the input signal varies.

#### Harmonic Distortion

To establish analytical approximations for the harmonic distortion, the functional forms detailed in Theorem 7 are not suitable. However, the functional forms detailed in Theorem 1 do lead to analytical approximations that are valid over a restricted domain. Consider a fourth-order spline approximation, as specified by Equation (36), which approximates the error function over the range [−2, 2] with a relative error bound that is better than 0.001 and leads to the approximation

$$\begin{split} y\_4(t) =\ & \frac{a\sin(2\pi f\_o t)}{\sqrt{\pi}}\left[1 - \frac{a^2\sin^2(2\pi f\_o t)}{18} + \frac{a^4\sin^4(2\pi f\_o t)}{1260}\right] + \\ & \frac{a\sin(2\pi f\_o t)}{\sqrt{\pi}}\left[1 + \frac{7a^2\sin^2(2\pi f\_o t)}{18} + \frac{37a^4\sin^4(2\pi f\_o t)}{420} + \frac{4a^6\sin^6(2\pi f\_o t)}{315} + \frac{a^8\sin^8(2\pi f\_o t)}{945}\right]e^{-a^2\sin^2(2\pi f\_o t)} \end{split} \tag{129}$$
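The fourth-order spline approximation underlying Equation (129), obtained by replacing *a* sin(2*π fot*) with *x*, can be checked against the stated relative error bound over [−2, 2] (by oddness, checking (0, 2] suffices). The higher-degree terms of the exponential bracket are reconstructed here and should be read as assumptions:

```python
import math

def f4_spline(x):
    # The fourth-order spline approximation used in Equation (129).
    rp = math.sqrt(math.pi)
    return (x/rp * (1 - x*x/18 + x**4/1260)
            + x/rp * (1 + 7*x*x/18 + 37*x**4/420 + 4*x**6/315 + x**8/945)
              * math.exp(-x*x))

rel_err = max(abs(f4_spline(x) - math.erf(x)) / math.erf(x)
              for x in (0.002 * i for i in range(1, 1001)))
```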

The amplitude of the *k*th harmonic in such a signal is given by

$$c\_{4,k} = \frac{\sqrt{2}}{\sqrt{T}} \int\_{0}^{T} y\_4(t) \sin(2\pi k f\_o t) dt = \sqrt{2T} \int\_{0}^{1} y\_4\left(\frac{\lambda}{f\_o}\right) \sin(2\pi k \lambda) d\lambda \tag{130}$$

where the change of variable *λ* = *fot* has been used. The first, third, fifth, and seventh harmonic levels are:

$$\begin{split} \frac{c\_{4,1}}{\sqrt{T}} =\ & \frac{\sqrt{2}a}{2\sqrt{\pi}} \left[ 1 - \frac{a^2}{24} + \frac{a^4}{2016} \right] + \frac{\sqrt{2}a}{2\sqrt{\pi}} \cdot I\_0\left[ \frac{a^2}{2} \right] \left[ 1 + \frac{11a^2}{24} + \frac{11a^4}{105} + \frac{a^6}{70} + \frac{a^8}{945} \right] e^{-a^2/2} - \\ & \frac{5\sqrt{2}a}{6\sqrt{\pi}} \cdot I\_1\left[ \frac{a^2}{2} \right] \left[ 1 + \frac{1481a^2}{4200} + \frac{38a^4}{525} + \frac{29a^6}{5150} + \frac{a^8}{1575} \right] e^{-a^2/2} \end{split} \tag{131}$$

$$\begin{split} \frac{c\_{4,3}}{\sqrt{T}} =\ & \frac{\sqrt{2}a^3}{144\sqrt{\pi}} \left[ 1 - \frac{a^2}{56} \right] - \frac{115\sqrt{2}a}{84\sqrt{\pi}} \cdot I\_0\left[ \frac{a^2}{2} \right] \left[ 1 + \frac{403a^2}{1380} + \frac{6a^4}{115} + \frac{31a^6}{5175} + \frac{2a^8}{5175} \right] e^{-a^2/2} + \\ & \frac{115\sqrt{2}}{21a\sqrt{\pi}} \cdot I\_1\left[ \frac{a^2}{2} \right] \left[ 1 + \frac{8a^2}{23} + \frac{163a^4}{1840} + \frac{76a^6}{5175} + \frac{11a^8}{6900} + \frac{a^{10}}{10{,}350} \right] e^{-a^2/2} \end{split} \tag{132}$$

$$\begin{split} \frac{c\_{4,5}}{\sqrt{T}} =\ & \frac{\sqrt{2}a^5}{40{,}320\sqrt{\pi}} + \frac{262\sqrt{2}}{15a^5\sqrt{\pi}} \cdot I\_0\left[ \frac{a^2}{2} \right] \left[ 1 + \frac{1943a^2}{7336} + \frac{1485a^4}{29{,}344} + \frac{73a^6}{11{,}004} + \frac{13a^8}{22{,}008} + \frac{a^{10}}{35{,}012} \right] e^{-a^2/2} - \\ & \frac{1048\sqrt{2}}{15a^7\sqrt{\pi}} \cdot I\_1\left[ \frac{a^2}{2} \right] \left[ 1 + \frac{1943a^2}{7336} + \frac{1301a^4}{14{,}672} + \frac{5125a^6}{352{,}128} + \frac{5a^8}{251} + \frac{41a^{10}}{254{,}096} + \frac{a^{12}}{132{,}048} \right] e^{-a^2/2} \end{split} \tag{133}$$

$$\begin{split} \frac{c\_{4,7}}{\sqrt{T}} =\ & -\frac{6784\sqrt{2}}{21a^3\sqrt{\pi}} \cdot I\_0\left[ \frac{a^2}{2} \right] \left[ 1 + \frac{779a^2}{3392} + \frac{631a^4}{13{,}568} + \frac{2407a^6}{325{,}632} + \frac{25a^8}{40{,}704} + \frac{17a^{10}}{407{,}040} + \frac{a^{12}}{610{,}560} \right] e^{-a^2/2} + \\ & \frac{27{,}136\sqrt{2}}{21a^5\sqrt{\pi}} \cdot I\_1\left[ \frac{a^2}{2} \right] \left[ 1 + \frac{779a^2}{3392} + \frac{1055a^4}{13{,}568} + \frac{137a^6}{13{,}568} + \frac{2269a^8}{407{,}040} + \frac{67a^{10}}{407{,}040} + \frac{a^{12}}{325{,}632} \right] e^{-a^2/2} \end{split} \tag{134}$$

The variation, with the input signal amplitude, of the harmonic distortion, as defined by *c*<sup>2</sup><sub>4,*k*</sub>/*c*<sup>2</sup><sub>4,1</sub>, is shown in Figure 23.

**Figure 23.** Graph of the variation of harmonic distortion with amplitude.

#### *7.7. Linear Filtering of an Error Function Step Signal*

Consider the case of a practical step input signal that is modeled by the error function erf(*t*/*γ*), and the case where such a signal is input to a second-order linear filter with a transfer function defined by

$$H(s) = \frac{1}{\left[1 + \frac{s}{2\pi f\_p}\right]^2} \Leftrightarrow h(t) = \frac{t e^{-t/\tau}}{\tau^2} \cdot u(t), \ \tau = \frac{1}{2\pi f\_p}.\tag{135}$$

**Theorem 10.** *Linear Filtering of an Error Function Signal. The output of a second-order linear filter, defined by Equation (135), to an error function input signal, defined by* erf(*t*/*γ*)*, is*

$$\begin{split} y(t) =\ & \text{erf}\left[\frac{t}{\gamma}\right]u(t) + \frac{e^{-t/\tau}}{\tau}\left[\left[\frac{\gamma^2}{2\tau} - (t+\tau)\right]\exp\left[\frac{\gamma^2}{4\tau^2}\right]\left[\text{erf}\left[\frac{\gamma}{2\tau}\right] - \text{erf}\left[\frac{\gamma}{2\tau} - \frac{t}{\gamma}\right]\right]\right. \\ & \left. -\ \frac{\gamma}{\sqrt{\pi}}\exp\left[\frac{\gamma^2}{4\tau^2}\right]\exp\left[-\left[\frac{t}{\gamma} - \frac{\gamma}{2\tau}\right]^2\right] + \frac{\gamma}{\sqrt{\pi}}\right] \end{split} \tag{136}$$

*and can be approximated by the nth-order signal*

$$\begin{split} y\_n(t) =\ & f\_n\left[\frac{t}{\gamma}\right]u(t) + \frac{e^{-t/\tau}}{\tau}\left[\left[\frac{\gamma^2}{2\tau} - (t+\tau)\right]\exp\left[\frac{\gamma^2}{4\tau^2}\right]\left[f\_n\left[\frac{\gamma}{2\tau}\right] - f\_n\left[\frac{\gamma}{2\tau} - \frac{t}{\gamma}\right]\right]\right. \\ & \left. -\ \frac{\gamma}{\sqrt{\pi}}\exp\left[\frac{\gamma^2}{4\tau^2}\right]\exp\left[-\left[\frac{t}{\gamma} - \frac{\gamma}{2\tau}\right]^2\right] + \frac{\gamma}{\sqrt{\pi}}\right] \end{split} \tag{137}$$

*where fn is defined by one of the approximations detailed in Theorems 4, 5, 6, or 7. It is the case that*

$$\lim\_{n \to \infty} y\_n(t) = y(t), \ t \in (0, \infty). \tag{138}$$

**Proof.** The proof is detailed in Appendix G. □

#### Results

For an input signal erf(*t*/*γ*)*u*(*t*), *γ* = 1/2, input into a second-order linear filter with *fp* = 1, the output signal is shown in Figure 24. The relative errors in the approximations to the output signal are shown in Figure 25 for the case of approximations as specified by Equation (56) and with the use of optimum transition points.
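Theorem 10 can be cross-checked numerically for the parameters used here (*γ* = 1/2, *fp* = 1, so *τ* = 1/2*π*): the closed form of Equation (136) should agree with a direct numerical convolution of the filter impulse response with the input signal.

```python
import math

GAMMA, TAU = 0.5, 1/(2*math.pi)   # gamma = 1/2, f_p = 1

def y_closed(t, g=GAMMA, tau=TAU):
    # Equation (136).
    a = g/(2*tau)
    e = math.exp(g*g/(4*tau*tau))
    return (math.erf(t/g)
            + math.exp(-t/tau)/tau * (
                (g*g/(2*tau) - (t + tau)) * e
                  * (math.erf(a) - math.erf(a - t/g))
                - g/math.sqrt(math.pi) * e * math.exp(-(t/g - a)**2)
                + g/math.sqrt(math.pi)))

def y_numeric(t, g=GAMMA, tau=TAU, n=2000):
    # Convolution of h(u) = u e^(-u/tau)/tau^2 with erf(t/gamma), Simpson's rule.
    h = t/n
    def integrand(s):
        return (t - s)/(tau*tau) * math.exp(-(t - s)/tau) * math.erf(s/g)
    total = integrand(0.0) + integrand(t)
    total += 4*sum(integrand((2*k - 1)*h) for k in range(1, n//2 + 1))
    total += 2*sum(integrand(2*k*h) for k in range(1, n//2))
    return total*h/3

max_diff = max(abs(y_closed(t) - y_numeric(t)) for t in (0.2, 0.5, 1.0, 2.0))
```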

**Figure 24.** Graph of the input signal erf(*t*/*γ*), *γ* = 1/2 and the corresponding approximation to the output of a second order linear filter with *fp* = 1, *τ* = 1/2*π*.

**Figure 25.** Graph of the relative errors associated with the output signal, shown in Figure 24, for approximations to the error function (Equation (56)) of orders six to twelve which utilize optimum transition points.

#### *7.8. Extension to Complex Case*

By definition, the error function, for the general complex case, is defined according to

$$\text{erf}(z) = \frac{2}{\sqrt{\pi}} \int\_{\gamma} e^{-\lambda^2} \text{d}\lambda, \ z \in \mathbb{C},\tag{139}$$

where the path *γ* is between the points zero and *z* and is arbitrary. For the case of *z* = *x* + *jy*, and a path along the *x* axis to the point (*x*, 0) and then to the point *z*, the error function is defined according to Equation (5) of [37]:

$$\begin{split} \text{erf}(\mathbf{x} + j\mathbf{y}) &= \frac{2}{\sqrt{\pi}} \int\limits\_{0}^{\mathbf{x}} e^{-\lambda^2} d\lambda + \frac{2j}{\sqrt{\pi}} \int\limits\_{0}^{\mathbf{y}} e^{-\left(\mathbf{x} + j\lambda\right)^2} d\lambda \\ &= \text{erf}(\mathbf{x}) + \frac{2e^{-\mathbf{x}^2}}{\sqrt{\pi}} \cdot \int\limits\_{0}^{\mathbf{y}} e^{\lambda^2} \sin(2\mathbf{x}\lambda) d\lambda + \frac{2je^{-\mathbf{x}^2}}{\sqrt{\pi}} \cdot \int\limits\_{0}^{\mathbf{y}} e^{\lambda^2} \cos(2\mathbf{x}\lambda) d\lambda \end{split} \tag{140}$$

Explicit approximations for erf(*x* + *jy*) then arise when integrable approximations for the two-dimensional surfaces exp(*y*<sup>2</sup>) sin(2*xy*) and exp(*y*<sup>2</sup>) cos(2*xy*) over [0, *x*] × [0, *y*] are available. Significant research already exists for this case, e.g., [28,37].

#### *7.9. Approximation for the Inverse Error Function*

There are many applications where the inverse error function is required and accurate approximations for this function are of interest. From the research underpinning this paper, the author's view is that finding approximations to the inverse error function is best treated directly and as a separate problem, rather than approaching it via finding the inverse of an approximation to the error function.

#### **8. Conclusions**

This paper has detailed analytical approximations for the real case of the error function, underpinned by a spline-based integral approximation, which have significantly better convergence than the default Taylor series. The original approximations can be improved by utilizing the approximation erf(*x*) ≈ 1 for *x* > *xo*, with *xo* being dependent on the order of approximation. The fourth-order approximations arising from Theorems 1 and 3, with respective transition points of *xo* = 2.3715 and *xo* = 2.6305, achieve relative error bounds over the interval [0, ∞], respectively, of 1.03 × 10<sup>−3</sup> and 2.28 × 10<sup>−4</sup>. The respective sixteenth-order approximations, with *xo* = 3.9025 and *xo* = 4.101, have relative error bounds of 3.44 × 10<sup>−8</sup> and 6.66 × 10<sup>−9</sup>.

Further improvements were detailed via two generalizations. The first was based on utilizing integral approximations for each of the *m* equally spaced subintervals in the required interval of integration. The second was based on utilizing a fixed subinterval within the interval of integration with a known tabulated area, and then utilizing an integral approximation over the remainder of the interval. Both generalizations lead to significantly improved accuracy. For example, a fourth-order approximation based on four subintervals, with *xo* = 3.7208, achieves a relative error bound of 1.43 × 10<sup>−7</sup> over the interval [0, ∞]. A sixteenth-order approximation, with *xo* = 6.3726, has a relative error bound of 2.01 × 10<sup>−19</sup>.

Finally, it was shown that a custom feedback system, with inputs defined by either the original error function approximations or approximations based on the use of subintervals, leads to analytical approximations with improved accuracy that are valid over the positive real line without utilizing the approximation erf(*x*) ≈ 1 for suitably large values of *x*. The original fourth-order error function approximation yields an approximation with a relative error bound of 1.82 × 10<sup>−5</sup> over the interval [0, ∞]. The original sixteenth-order approximation yields an approximation with a relative error bound of 1.68 × 10<sup>−14</sup>.

Applications of the approximations include, first, approximations to achieve the specified error bounds of 10<sup>−4</sup>, 10<sup>−6</sup>, 10<sup>−10</sup>, and 10<sup>−16</sup> over the positive real line. Second, the definitions of functions that are upper and lower bounds, of arbitrary accuracy, for the error function. Third, new series for the error function. Fourth, new sequences of approximations for exp(−*x*<sup>2</sup>) that have significantly higher convergence properties than a Taylor series approximation. Fifth, a complementary demarcation function satisfying the constraint *e*<sup>2</sup><sub>*C*</sub>(*x*) + erf<sup>2</sup>(*x*) = 1 was defined. Sixth, arbitrarily accurate approximations for the power and harmonic distortion for a sinusoidal signal subject to an error function nonlinearity. Seventh, approximate expressions for the linear filtering of a step signal that is modeled by the error function.

**Supplementary Materials:** The following supporting information can be downloaded at: https: //www.mdpi.com/article/10.3390/mca27010014/s1, File S1. Mathematica Code for Figure 2 Results.

**Funding:** This research received no external funding.

**Acknowledgments:** The support of Abdelhak M. Zoubir, SPG, Technische Universität Darmstadt, Darmstadt, Germany, who hosted a visit during which the research for, and writing of, this paper was completed, is gratefully acknowledged. An anonymous reviewer provided a comment, which, after consideration, led to the improved approximations detailed in Theorem 3.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **Appendix A. Proof of Theorem 1**

Consider *f*(*x*) = exp(−*x*<sup>2</sup>). Successive differentiation of this function leads to the iterative formula

$$f^{(k)}(\mathbf{x}) = p(k, \mathbf{x}) \exp(-\mathbf{x}^2),\tag{A1}$$

where

$$p(k, \mathbf{x}) = p^{(1)}(k - 1, \mathbf{x}) - 2\mathbf{x}p(k - 1, \mathbf{x}), \ \ p(0, \mathbf{x}) = 1. \tag{A2}$$
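The recursion of Equation (A2) is easily mechanized. The short Python sketch below represents *p*(*k*, *x*) by its coefficient list (lowest degree first) and reproduces, for example, *p*(3, *x*) = 12*x* − 8*x*<sup>3</sup>:

```python
# Equation (A2): p(k, x) = p'(k-1, x) - 2x p(k-1, x), p(0, x) = 1.
def next_p(prev):
    d = [i * c for i, c in enumerate(prev)][1:]     # p^(1)(k-1, x)
    m = [0.0] + [-2.0 * c for c in prev]            # -2x p(k-1, x)
    d += [0.0] * (len(m) - len(d))                  # pad to common length
    return [a + b for a, b in zip(d, m)]

p = [1.0]                                           # p(0, x) = 1
for _ in range(3):
    p = next_p(p)
```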

It then follows from Equation (20) that

$$\begin{split} \frac{2}{\sqrt{\pi}} \cdot \int\_{a}^{\mathbf{x}} \exp(-\lambda^{2}) d\lambda & \approx \frac{2}{\sqrt{\pi}} \cdot \sum\_{k=0}^{n} c\_{n,k} (\mathbf{x} - a)^{k+1} \left[ f^{(k)}(a) + (-1)^{k} f^{(k)}(\mathbf{x}) \right] \\ &= \frac{2}{\sqrt{\pi}} \cdot \sum\_{k=0}^{n} c\_{n,k} (\mathbf{x} - a)^{k+1} \left[ p(k, a) \exp(-a^{2}) + (-1)^{k} p(k, \mathbf{x}) \exp(-\mathbf{x}^{2}) \right] \end{split} \tag{A3}$$

The result for the case of *a* = 0 then yields the *n*th-order approximation for the error function:

$$f\_n(\mathbf{x}) = \frac{2}{\sqrt{\pi}} \cdot \sum\_{k=0}^{n} c\_{n,k} \mathbf{x}^{k+1} \left[ p(k, 0) + (-1)^k p(k, \mathbf{x}) e^{-\mathbf{x}^2} \right] \tag{A4}$$

To determine ε<sup>(1)</sup><sub>*n*</sub>(*x*), consider the equality erf(*x*) = *f<sub>n</sub>*(*x*) + ε*<sub>n</sub>*(*x*). Differentiation yields

$$\begin{split} \varepsilon\_{n}^{(1)}(\mathbf{x}) =\ & \frac{2e^{-\mathbf{x}^{2}}}{\sqrt{\pi}} - \frac{2}{\sqrt{\pi}} \cdot \sum\_{k=0}^{n} c\_{n,k}(k+1)\mathbf{x}^{k} \left[ p(k,0) + (-1)^{k}p(k,\mathbf{x})e^{-\mathbf{x}^{2}} \right] - \\ & \frac{2e^{-\mathbf{x}^{2}}}{\sqrt{\pi}} \cdot \sum\_{k=0}^{n} c\_{n,k}\mathbf{x}^{k+1}(-1)^{k} \left[ p^{(1)}(k,\mathbf{x}) - 2\mathbf{x}p(k,\mathbf{x}) \right] \end{split} \tag{A5}$$

and the required result:

$$\begin{aligned} \varepsilon\_{n}^{(1)}(\mathbf{x}) &= \ \frac{2e^{-\mathbf{x}^{2}}}{\sqrt{\pi}} - \frac{2}{\sqrt{\pi}} \cdot \sum\_{k=0}^{n} c\_{n,k}(k+1) \mathbf{x}^{k} p(k,0) - \\\\ &\frac{2e^{-\mathbf{x}^{2}}}{\sqrt{\pi}} \cdot \sum\_{k=0}^{n} c\_{n,k} \mathbf{x}^{k} (-1)^{k} \left[ (k+1-2\mathbf{x}^{2}) p(k,\mathbf{x}) + \mathbf{x} p^{(1)}(k,\mathbf{x}) \right] \end{aligned} \tag{A6}$$

#### **Appendix B. Proof of Theorem 2**

The use of a Taylor series expansion for exp(−*x*<sup>2</sup>) in the definitions of ε<sup>(1)</sup><sub>0</sub>(*x*) and ε<sup>(1)</sup><sub>1</sub>(*x*), as defined by Equations (40) and (41), yields:

$$\begin{split} \varepsilon\_{0}^{(1)}(\mathbf{x}) &= \frac{\mathbf{x}^{2}}{\sqrt{\pi}} \cdot \left[ 1 - \frac{3\mathbf{x}^{2}}{2} + \frac{5\mathbf{x}^{4}}{6} - \frac{7\mathbf{x}^{6}}{24} + \frac{3\mathbf{x}^{8}}{40} - \frac{11\mathbf{x}^{10}}{720} + \frac{13\mathbf{x}^{12}}{5040} - \frac{\mathbf{x}^{14}}{2688} + \dots \right] \\ &= \frac{\mathbf{x}^{2}}{\sqrt{\pi}} \cdot \left[ \varepsilon\_{0,0} - \varepsilon\_{0,1}\mathbf{x}^{2} + \dots + (-1)^{k}\varepsilon\_{0,k}\mathbf{x}^{2k} + \dots \right], \quad \varepsilon\_{0,k} = \frac{2}{k!} - \frac{1}{(k+1)!}, k \ge 0 \end{split} \tag{A7}$$

$$\varepsilon\_1^{(1)}(\mathbf{x}) = \frac{\mathbf{x}^4}{6\sqrt{\pi}} \cdot \left[1 - 2\mathbf{x}^2 + \frac{5\mathbf{x}^4}{4} - \frac{7\mathbf{x}^6}{16} + \frac{\mathbf{x}^8}{8} - \frac{11\mathbf{x}^{10}}{420} \dots \right] \tag{A8}$$

From a consideration of ε<sup>(1)</sup><sub>0</sub>(*x*) and ε<sup>(1)</sup><sub>1</sub>(*x*), as well as higher-order residual functions, it can readily be seen that the polynomial terms of orders zero to 2*n* + 1 in ε<sup>(1)</sup><sub>*n*</sub>(*x*) have coefficients of zero. It then follows that ε<sup>(1)</sup><sub>*n*</sub>(*x*) can be written as

$$
\varepsilon\_n^{(1)}(\mathbf{x}) = \frac{1}{\sqrt{\pi}} \cdot \frac{\mathbf{x}^{2n+2}}{\mathbf{x}\_{n,0}} \cdot g\_n(\mathbf{x}), \quad n \in \{0, 1, 2, \dots\}, \tag{A9}
$$

where

$$g\_n(\mathbf{x}) = 1 - d\_{n,1}\mathbf{x}^2 + d\_{n,2}\mathbf{x}^4 - \dots + (-1)^k d\_{n,k}\mathbf{x}^{2k} + \dots \tag{A10}$$

and where it can readily be shown that

$$\mathbf{x}\_{n,0} = \mathbf{2}^n \prod\_{i=0}^n (2i+1) \ge \mathbf{2}^n n!, \ n \in \{0, 1, 2, \dots\}. \tag{A11}$$

Graphs of the magnitude of the residual functions, $\varepsilon_n^{(1)}(x)$, for orders zero, two, four, six, and eight, are shown in Figure A1. The magnitudes of the functions defined by $g_n$, for the same orders, are shown in Figure A2 and it is evident that

$$|g\_n(x)| \le k\_o, \ x \ge 0, n \in \{0, 1, 2, \dots\},\tag{A12}$$

for a fixed constant *ko*, which is of the order of unity. Hence:

$$\left| \varepsilon\_n^{(1)}(\mathbf{x}) \right| \le \frac{k\_o}{\sqrt{\pi}} \cdot \frac{\mathbf{x}^{2n+2}}{\mathbf{x}\_{n,0}}, \ x \ge 0, n \in \{0, 1, 2, \dots\}. \tag{A13}$$
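For orientation (an illustrative computation, with $k_o$ taken as unity), the bound of (A13) can be evaluated directly; the sketch below also confirms the inequality of (A11) and illustrates the non-uniform convergence noted after (A14):

```python
import math

def x_n0(n):
    """x_{n,0} = 2^n * prod_{i=0..n} (2i+1), per (A11)."""
    prod = 1
    for i in range(n + 1):
        prod *= 2 * i + 1
    return 2 ** n * prod

def residual_bound(x, n, k_o=1.0):
    """The bound of (A13): k_o/sqrt(pi) * x^(2n+2) / x_{n,0}."""
    return k_o / math.sqrt(math.pi) * x ** (2 * n + 2) / x_n0(n)

# x_{n,0} grows at least as fast as 2^n n!, so the bound vanishes as
# n -> infinity for any fixed x ...
assert all(x_n0(n) >= 2 ** n * math.factorial(n) for n in range(12))
# ... but the convergence is not uniform: at x = 3 the bound first grows
# with n before the factorial growth of x_{n,0} takes over.
assert residual_bound(3.0, 2) > residual_bound(3.0, 0)
assert residual_bound(3.0, 12) < 1e-3
```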

**Figure A1.** Graphs of $\varepsilon_n^{(1)}(x)$ for orders zero, two, four, six and eight.

**Figure A2.** Graphs of $|g_n(x)|$ for orders zero, two, four, six and eight.

Further, as $x_{n,0} \ge 2^n n!$, it follows, for all fixed values of $x$, that

$$\lim\_{n \to \infty} \varepsilon\_n^{(1)}(\mathbf{x}) = 0, \ \mathbf{x} \ge 0. \tag{A14}$$

The convergence is not uniform.

It then follows, for all fixed values of $x$, that there exists an order of approximation, $n$, such that the magnitude of the residual $\varepsilon_n^{(1)}(x)$ can be made arbitrarily small, i.e., $\forall \varepsilon_o > 0$ there exists a number $N(x)$ such that

$$\left|\varepsilon\_n^{(1)}(\mathbf{x})\right| < \varepsilon\_o, \quad \forall n > N(\mathbf{x}). \tag{A15}$$

In general, $N(x)$ increases with $x$. Thus, $\forall \varepsilon_o > 0$ there exists a number $N_{x_o}(x_o)$ such that

$$\left|\varepsilon\_{n}^{(1)}(x)\right| < \varepsilon\_{o}, \ x \in [0, x\_{o}], \forall n > N\_{x\_{o}}(x\_{o}).\tag{A16}$$

Finally, as $\varepsilon_n(0) = 0$, for all $n$, it then follows, for $x$ fixed, that

$$|\varepsilon\_{n}(x)| = \left| \int\_{0}^{x} \varepsilon\_{n}^{(1)}(\lambda)\, d\lambda \right| < \varepsilon\_{o} x, \quad \forall n > N\_{x}(x), \tag{A17}$$

which proves convergence.

#### **Appendix C. Fourth-Order Spline Approximation—The Sixteen-Subintervals Case**

Consistent with Theorem 5, a fourth-order spline approximation, which utilizes 16 subintervals, is

*f*4,16(*x*) = *<sup>x</sup>* <sup>16</sup>√*<sup>π</sup>* <sup>1</sup> <sup>−</sup> <sup>16</sup>*x*<sup>2</sup> 73,728 <sup>+</sup> <sup>16</sup>*x*<sup>4</sup> 1,321,205,760 + *x* 8 <sup>√</sup>*<sup>π</sup>* · exp <sup>−</sup>*x*<sup>2</sup> <sup>256</sup> <sup>1</sup> <sup>−</sup> *<sup>x</sup>*<sup>2</sup> <sup>4608</sup> <sup>+</sup> <sup>47</sup>*x*<sup>4</sup> 27,525,120 <sup>−</sup> *<sup>x</sup>*<sup>6</sup> 5,284,823,040 <sup>+</sup> *<sup>x</sup>*<sup>8</sup> 4,058,744,094,720 + *x* 8 <sup>√</sup>*<sup>π</sup>* · exp <sup>−</sup>*x*<sup>2</sup> <sup>64</sup> <sup>1</sup> <sup>−</sup> *<sup>x</sup>*<sup>2</sup> <sup>4608</sup> <sup>+</sup> <sup>187</sup>*x*<sup>4</sup> 27,525,120 <sup>−</sup> *<sup>x</sup>*<sup>6</sup> 1,321,205,760 <sup>+</sup> *<sup>x</sup>*<sup>8</sup> 253,671,505,920 + *x* 8 <sup>√</sup>*<sup>π</sup>* · exp <sup>−</sup>9*x*<sup>2</sup> <sup>256</sup> <sup>1</sup> <sup>−</sup> *<sup>x</sup>*<sup>2</sup> <sup>4608</sup> <sup>+</sup> <sup>1261</sup>*x*<sup>4</sup> 82,575,360 <sup>−</sup> *<sup>x</sup>*<sup>6</sup> 587,202,560 <sup>+</sup> <sup>3</sup>*x*<sup>8</sup> 150,323,855,360 + *x* 8 <sup>√</sup>*<sup>π</sup>* · exp <sup>−</sup>*x*<sup>2</sup> <sup>16</sup> <sup>1</sup> <sup>−</sup> *<sup>x</sup>*<sup>2</sup> <sup>4608</sup> <sup>+</sup> <sup>249</sup>*x*<sup>4</sup> 9,175,040 <sup>−</sup> *<sup>x</sup>*<sup>6</sup> 330,301,440 <sup>+</sup> *<sup>x</sup>*<sup>8</sup> 15,854,469,120 + *x* 8 <sup>√</sup>*<sup>π</sup>* · exp <sup>−</sup>25*x*<sup>2</sup> <sup>256</sup> <sup>1</sup> <sup>−</sup> *<sup>x</sup>*<sup>2</sup> <sup>4608</sup> <sup>+</sup> <sup>389</sup>*x*<sup>4</sup> 9,175,040 <sup>−</sup> <sup>5</sup>*x*<sup>6</sup> 1,056,964,608 <sup>+</sup> <sup>125</sup>*x*<sup>8</sup> 811,748,818,944 + *x* 8 <sup>√</sup>*<sup>π</sup>* · exp <sup>−</sup>9*x*<sup>2</sup> <sup>64</sup> <sup>1</sup> <sup>−</sup> *<sup>x</sup>*<sup>2</sup> <sup>4608</sup> <sup>+</sup> <sup>5041</sup>*x*<sup>4</sup> 82,575,360 <sup>−</sup> 
*<sup>x</sup>*<sup>6</sup> 146,800,640 <sup>+</sup> <sup>3</sup>*x*<sup>8</sup> 9,395,240,960 + *x* 8 <sup>√</sup>*<sup>π</sup>* · exp <sup>−</sup>49*x*<sup>2</sup> <sup>256</sup> <sup>1</sup> <sup>−</sup> *<sup>x</sup>*<sup>2</sup> <sup>4608</sup> <sup>+</sup> <sup>2287</sup>*x*<sup>4</sup> 27,525,120 <sup>−</sup> <sup>7</sup>*x*<sup>6</sup> 754,974,720 <sup>+</sup> <sup>343</sup>*x*<sup>8</sup> 579,820,584,960 + *x* 8 <sup>√</sup>*<sup>π</sup>* · exp <sup>−</sup>*x*<sup>2</sup> 4 <sup>1</sup> <sup>−</sup> *<sup>x</sup>*<sup>2</sup> <sup>4608</sup> <sup>+</sup> <sup>2987</sup>*x*<sup>4</sup> 27,525,120 <sup>−</sup> *<sup>x</sup>*<sup>6</sup> 82,575,360 <sup>+</sup> *<sup>x</sup>*<sup>8</sup> 990,904,320 + *x* 8 <sup>√</sup>*<sup>π</sup>* · exp <sup>−</sup>81*x*<sup>2</sup> <sup>256</sup> <sup>1</sup> <sup>−</sup> *<sup>x</sup>*<sup>2</sup> <sup>4608</sup> <sup>+</sup> <sup>11341</sup>*x*<sup>4</sup> 82,575,360 <sup>−</sup> <sup>9</sup>*x*<sup>6</sup> 587,202,560 <sup>+</sup> <sup>243</sup>*x*<sup>8</sup> 150,323,855,360 + *x* 8 <sup>√</sup>*<sup>π</sup>* · exp <sup>−</sup>25*x*<sup>2</sup> <sup>64</sup> <sup>1</sup> <sup>−</sup> *<sup>x</sup>*<sup>2</sup> <sup>4608</sup> <sup>+</sup> <sup>4667</sup>*x*<sup>4</sup> 27,525,120 <sup>−</sup> <sup>5</sup>*x*<sup>6</sup> 264,241,152 <sup>+</sup> <sup>125</sup>*x*<sup>8</sup> 50,734,301,184 + *x* 8 <sup>√</sup>*<sup>π</sup>* · exp <sup>−</sup>121*x*<sup>2</sup> <sup>256</sup> <sup>1</sup> <sup>−</sup> *<sup>x</sup>*<sup>2</sup> <sup>4608</sup> <sup>+</sup> <sup>5647</sup>*x*<sup>4</sup> 27,525,120 <sup>−</sup> <sup>121</sup>*x*<sup>6</sup> 5,284,823,040 <sup>+</sup> <sup>14641</sup>*x*<sup>8</sup> 4,058,744,094,720 + *x* 8 <sup>√</sup>*<sup>π</sup>* · exp <sup>−</sup>9*x*<sup>2</sup> <sup>16</sup> <sup>1</sup> <sup>−</sup> *<sup>x</sup>*<sup>2</sup> <sup>4608</sup> <sup>+</sup> <sup>20161</sup>*x*<sup>4</sup> 82,575,360 <sup>−</sup> *<sup>x</sup>*<sup>6</sup> 36,700,160 <sup>+</sup> <sup>3</sup>*x*<sup>8</sup> 
587,202,560 + *x* 8 <sup>√</sup>*<sup>π</sup>* · exp <sup>−</sup>169*x*<sup>2</sup> <sup>256</sup> <sup>1</sup> <sup>−</sup> *<sup>x</sup>*<sup>2</sup> <sup>4608</sup> <sup>+</sup> <sup>2629</sup>*x*<sup>4</sup> 9,175,040 <sup>−</sup> <sup>169</sup>*x*<sup>6</sup> 5,284,823,040 <sup>+</sup> 28,561*x*<sup>8</sup> 4,058,744,094,720 + *x* 8 <sup>√</sup>*<sup>π</sup>* · exp <sup>−</sup>49*x*<sup>2</sup> <sup>64</sup> <sup>1</sup> <sup>−</sup> *<sup>x</sup>*<sup>2</sup> <sup>4608</sup> <sup>+</sup> <sup>3049</sup>*x*<sup>4</sup> 9,175,040 <sup>−</sup> <sup>7</sup>*x*<sup>6</sup> 188,743,680 <sup>+</sup> <sup>343</sup>*x*<sup>8</sup> 36,238,786,560 + *x* 8 <sup>√</sup>*<sup>π</sup>* · exp <sup>−</sup>225*x*<sup>2</sup> <sup>256</sup> <sup>1</sup> <sup>−</sup> *<sup>x</sup>*<sup>2</sup> <sup>4608</sup> <sup>+</sup> <sup>31501</sup>*x*<sup>4</sup> 82,575,360 <sup>−</sup> <sup>5</sup>*x*<sup>6</sup> 117,440,512 <sup>+</sup> <sup>375</sup>*x*<sup>8</sup> 30,064,771,072 + *x* <sup>16</sup>√*<sup>π</sup>* · exp <sup>−</sup>*x*<sup>2</sup> 1 + <sup>127</sup>*x*<sup>2</sup> <sup>4608</sup> <sup>+</sup> <sup>3929</sup>*x*<sup>4</sup> 9,175,040 <sup>+</sup> <sup>79</sup>*x*<sup>6</sup> 20,643,840 <sup>+</sup> *<sup>x</sup>*<sup>8</sup> 61,931,520 (A18)

When this approximation is utilized with the transition point $x_o = 7.1544$, the relative error bound in the approximation to the error function, over the interval $(0, \infty)$, is $4.82 \times 10^{-16}$.

#### **Appendix D. Proof of Theorem 7**

Consider the differential equation

$$y\_n^{(1)}(t) + g\_n(t) y\_n(t) = g\_n(t), \ y\_n(0) = 0,\tag{A19}$$

for the case where *gn* is based on the *n*th-order approximation *fn* to the error function, defined in Theorem 1, and is defined according to

$$g\_n(t) = \frac{4}{\sqrt{\pi}} e^{-t^2} f\_n(t) = \frac{8e^{-t^2}}{\pi} \cdot \sum\_{k=0}^n c\_{n,k} t^{k+1} \left[ p(k,0) + (-1)^k p(k,t) e^{-t^2} \right], t \ge 0. \tag{A20}$$

To find a solution to the differential equation for such a driving signal, first note that the solution for the case of $g\_n(t) = \frac{4}{\sqrt{\pi}} e^{-t^2}\, \text{erf}(t)$ is

$$y\_n(t) = 1 - \exp[-\text{erf}^2(t)].\tag{A21}$$
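As a numerical aside (a sketch, not part of the proof), a finite-difference check confirms that $y(t) = 1 - \exp[-\mathrm{erf}^2(t)]$ satisfies $y^{(1)}(t) + g(t)y(t) = g(t)$ for $g(t) = \frac{4}{\sqrt{\pi}}e^{-t^2}\,\mathrm{erf}(t)$:

```python
import math

def g(t):
    """Driving signal g(t) = (4/sqrt(pi)) exp(-t^2) erf(t)."""
    return 4.0 / math.sqrt(math.pi) * math.exp(-t * t) * math.erf(t)

def y(t):
    """Proposed solution (A21): y(t) = 1 - exp(-erf(t)^2)."""
    return 1.0 - math.exp(-math.erf(t) ** 2)

# Central-difference check of y'(t) + g(t) y(t) = g(t) at sample points.
h = 1e-6
for t in [0.25, 0.5, 1.0, 2.0]:
    dy = (y(t + h) - y(t - h)) / (2 * h)
    assert abs(dy + g(t) * y(t) - g(t)) < 1e-8
```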

Second, with *gn* defined by Equation (A20), the following signal form

$$y\_n(t) = 1 - \exp\left[ -\left[ p\_{n,0} + p\_{n,1}(t)e^{-t^2} + p\_{n,2}(t)e^{-2t^2} \right] \right], \ t \ge 0,\tag{A22}$$

has potential as a basis for finding solutions for the unknown polynomial functions *pn*,1 and *pn*,2 and the unknown constant *pn*,0. With such a form, the initial condition of *yn*(0) = 0 implies

$$p\_{n,0} = -\left[p\_{n,1}(0) + p\_{n,2}(0)\right].\tag{A23}$$

It is the case that

$$y\_n^{(1)}(t) = \left[ p\_{n,1}^{(1)}(t)e^{-t^2} - 2tp\_{n,1}(t)e^{-t^2} + p\_{n,2}^{(1)}(t)e^{-2t^2} - 4tp\_{n,2}(t)e^{-2t^2} \right] \cdot [1 - y\_n(t)]. \tag{A24}$$

Substitution of $y_n(t)$ and $y_n^{(1)}(t)$ into the differential equation yields

$$\begin{aligned} \left[p\_{n,1}^{(1)}(t)e^{-t^2} - 2tp\_{n,1}(t)e^{-t^2} + p\_{n,2}^{(1)}(t)e^{-2t^2} - 4tp\_{n,2}(t)e^{-2t^2}\right] \cdot \exp\left[-\left[p\_{n,0} + p\_{n,1}(t)e^{-t^2} + p\_{n,2}(t)e^{-2t^2}\right]\right] + \\ \left[\frac{4}{\sqrt{\pi}}e^{-t^2}f\_n(t) \cdot \left[1 - \exp\left[-\left(p\_{n,0} + p\_{n,1}(t)e^{-t^2} + p\_{n,2}(t)e^{-2t^2}\right)\right]\right]\right] &= \frac{4}{\sqrt{\pi}}e^{-t^2}f\_n(t) \end{aligned} \tag{A25}$$

which implies

$$p\_{n,1}^{(1)}(t)e^{-t^2} - 2tp\_{n,1}(t)e^{-t^2} + p\_{n,2}^{(1)}(t)e^{-2t^2} - 4tp\_{n,2}(t)e^{-2t^2} - \frac{4}{\sqrt{\pi}}e^{-t^2}f\_n(t) = 0 \tag{A26}$$

and

$$p\_{n,1}^{(1)}(t)e^{-t^2} - 2tp\_{n,1}(t)e^{-t^2} + p\_{n,2}^{(1)}(t)e^{-2t^2} - 4tp\_{n,2}(t)e^{-2t^2} = \frac{8e^{-t^2}}{\pi} \sum\_{k=0}^{n} c\_{n,k}t^{k+1} \left[ p(k,0) + (-1)^k p(k,t)e^{-t^2} \right]. \tag{A27}$$

Thus:

$$\begin{aligned} p\_{n,1}^{(1)}(t) - 2tp\_{n,1}(t) &= \frac{8}{\pi} \sum\_{k=0}^{n} c\_{n,k} p(k, 0) t^{k+1} \\ p\_{n,2}^{(1)}(t) - 4tp\_{n,2}(t) &= \frac{8}{\pi} \sum\_{k=0}^{n} c\_{n,k} (-1)^k p(k, t) t^{k+1} \end{aligned} \tag{A28}$$

To solve for the polynomials *pn*,1 and *pn*,2, first note (see Equation (26)) that

$$p(k,t) = a\_{k,0} + a\_{k,1}t + a\_{k,2}t^2 + \dots + a\_{k,k}t^k, \\ k \in \{0, 1, \dots, n\},\tag{A29}$$

for appropriately defined coefficients *ak*,*j*, *j* ∈ {0, 1, . . . , *k*}.

#### *Appendix D.1. Solving for Coefficients of First Polynomial*

Substitution of *p*(*k*, 0) from Equation (A29) into the differential equation defining *pn*,1 yields

$$p\_{n,1}^{(1)}(t) - 2tp\_{n,1}(t) = \frac{8}{\pi} \sum\_{k=0}^{n} c\_{n,k} a\_{k,0} t^{k+1}.\tag{A30}$$

With $a_{k,0} = 0$ for $k$ odd, the maximum power of $t$ on the right side of the differential equation is $t^{n+1}$ for $n$ even and $t^n$ for $n$ odd. Thus, the form required for $p_{n,1}$ is

$$p\_{n,1}(t) = \begin{cases} \alpha\_0 + \alpha\_1 t + \dots + \alpha\_{n-1} t^{n-1}, & n \text{ odd} \\ \alpha\_0 + \alpha\_1 t + \dots + \alpha\_n t^n, & n \text{ even.} \end{cases} \tag{A31}$$

Substitution then yields

$$\begin{aligned} \left[\alpha\_1 + 2\alpha\_2t + \dots + (n-1)\alpha\_{n-1}t^{n-2}\right] - 2t\left[\alpha\_0 + \alpha\_1t + \dots + \alpha\_{n-1}t^{n-1}\right] &= \frac{8}{\pi} \sum\_{k=0}^{n-1} c\_{n,k} a\_{k,0} t^{k+1}, \quad n \text{ odd} \\ \left[\alpha\_1 + 2\alpha\_2t + \dots + n\alpha\_nt^{n-1}\right] - 2t\left[\alpha\_0 + \alpha\_1t + \dots + \alpha\_nt^n\right] &= \frac{8}{\pi} \sum\_{k=0}^{n} c\_{n,k} a\_{k,0} t^{k+1}, \quad n \text{ even.} \end{aligned} \tag{A32}$$

For the case of $n$ even, equating coefficients of like powers of $t$ yields:

$$\begin{aligned} t^{n+1}: &\quad -2\alpha\_n = \frac{8}{\pi} \cdot c\_{n,n} a\_{n,0} \Rightarrow &\quad \alpha\_n = \frac{-4}{\pi} \cdot c\_{n,n} a\_{n,0} \\ t^n: &\quad -2\alpha\_{n-1} = \frac{8}{\pi} \cdot c\_{n,n-1} a\_{n-1,0} \Rightarrow &\quad \alpha\_{n-1} = \frac{-4}{\pi} \cdot c\_{n,n-1} a\_{n-1,0} \\ t^{n-1}: &\quad n\alpha\_n - 2\alpha\_{n-2} = \frac{8}{\pi} \cdot c\_{n,n-2} a\_{n-2,0} \Rightarrow &\quad \alpha\_{n-2} = \frac{n}{2} \cdot \alpha\_n - \frac{4}{\pi} \cdot c\_{n,n-2} a\_{n-2,0} \\ &\quad \dots \\ t^2: &\quad 3\alpha\_3 - 2\alpha\_1 = \frac{8}{\pi} \cdot c\_{n,1} a\_{1,0} \Rightarrow &\quad \alpha\_1 = \frac{3\alpha\_3}{2} - \frac{4}{\pi} \cdot c\_{n,1} a\_{1,0} \\ t^1: &\quad 2\alpha\_2 - 2\alpha\_0 = \frac{8}{\pi} \cdot c\_{n,0} a\_{0,0} \Rightarrow &\quad \alpha\_0 = \alpha\_2 - \frac{4}{\pi} \cdot c\_{n,0} a\_{0,0} \end{aligned} \tag{A33}$$

With the odd coefficients *a*1,0, *a*3,0, ..., *an*−1,0 being zero, it follows that the corresponding odd coefficients *αn*−1, *αn*−3, ... , *α*<sup>1</sup> are also zero. For the even coefficients, the algorithm is:

$$\begin{cases} \alpha\_{m} = \frac{-4}{\pi} \cdot c\_{n,m} a\_{m,0} \\ \alpha\_{m-2i} = \frac{(m-2i+2)\alpha\_{m-2i+2}}{2} - \frac{4}{\pi} \cdot c\_{n,m-2i} a\_{m-2i,0}, & i \in \left\{1, \dots, \frac{m}{2} - 1\right\} \\ \alpha\_{0} = \alpha\_{2} - \frac{4}{\pi} \cdot c\_{n,0} a\_{0,0} \end{cases} \tag{A34}$$

where *m* = *n*. For the case of *n* being odd, the odd coefficients *αn*, *αn*−2, ... , *α*<sup>1</sup> are again zero and the algorithm is the same as that specified in Equation (A34) with *m* = *n* − 1.
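The recursion (A34) is simple to implement. The sketch below uses arbitrary placeholder values for $c_{n,k}$ and $a_{k,0}$ (both are defined elsewhere in the paper; random values suffice for a structural check) and verifies that the resulting polynomial satisfies $p_{n,1}^{(1)}(t) - 2tp_{n,1}(t) = \frac{8}{\pi}\sum_k c_{n,k}a_{k,0}t^{k+1}$:

```python
import numpy as np

def p_n1_coeffs(n, c, a0):
    """Coefficients alpha_0..alpha_n of p_{n,1} via (A34), for n even.
    Odd-index entries of a0 (and hence of alpha) are zero."""
    alpha = np.zeros(n + 1)
    alpha[n] = -4.0 / np.pi * c[n] * a0[n]
    for i in range(1, n // 2):
        j = n - 2 * i
        alpha[j] = (j + 2) * alpha[j + 2] / 2.0 - 4.0 / np.pi * c[j] * a0[j]
    alpha[0] = alpha[2] - 4.0 / np.pi * c[0] * a0[0]
    return alpha

# Placeholder data: n = 4, random c_{n,k} and a_{k,0} (odd a_{k,0} zero).
rng = np.random.default_rng(0)
n = 4
c = rng.standard_normal(n + 1)
a0 = rng.standard_normal(n + 1)
a0[1::2] = 0.0

alpha = p_n1_coeffs(n, c, a0)

# Residual of p'(t) - 2 t p(t) - (8/pi) sum_k c_k a_{k,0} t^{k+1},
# expressed as polynomial coefficients in t^0 .. t^{n+1}.
lhs = np.zeros(n + 2)
for j in range(n):
    lhs[j] += (j + 1) * alpha[j + 1]
for j in range(1, n + 2):
    lhs[j] -= 2 * alpha[j - 1]
rhs = np.zeros(n + 2)
for k in range(n + 1):
    rhs[k + 1] = 8.0 / np.pi * c[k] * a0[k]
assert np.allclose(lhs, rhs)
```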

#### *Appendix D.2. Solving for Coefficients of Second Polynomial*

Substitution of *p*(*k*, *t*) from Equation (A29) into the differential equation defining *pn*,2 yields:

$$p\_{n,2}^{(1)}(t) - 4tp\_{n,2}(t) = \frac{8}{\pi} \cdot \sum\_{k=0}^{n} c\_{n,k}(-1)^k \left[ \sum\_{i=0}^{k} a\_{k,i} t^{k+i+1} \right]. \tag{A35}$$

The coefficients $a_{k,i}$ that are associated with a given power of $t$ are illustrated in Figure A3. It then follows, for a fixed power of $t$, say $t^r$, that the associated coefficients, $a_{k,i}$, are

$$\begin{aligned} k &\in \left\{ \left\lfloor \frac{r}{2} \right\rfloor, \left\lfloor \frac{r}{2} \right\rfloor + 1, \dots, \min\{r - 1, n\} \right\} \\ i &= r - k - 1. \end{aligned} \tag{A36}$$

Thus:

$$p\_{n,2}^{(1)}(t) - 4t p\_{n,2}(t) = \frac{8}{\pi} \cdot \sum\_{r=1}^{2n+1} \left[ \sum\_{k=\lfloor \frac{r}{2} \rfloor}^{\min\{r-1, n\}} c\_{n,k}(-1)^k a\_{k,r-k-1} \right] t^r. \tag{A37}$$

**Figure A3.** Illustration of the coefficients that potentially are non-zero for a set power of *t*. The illustration is for the case of *n* = 8.

With

$$p\_{n,2}(t) = \beta\_0 + \beta\_1 t + \dots + \beta\_m t^m,\tag{A38}$$

the differential equation implies

$$\begin{aligned} \left[\beta\_1 + 2\beta\_2 t + \dots + m\beta\_m t^{m-1}\right] - 4t\left[\beta\_0 + \beta\_1 t + \dots + \beta\_m t^m\right] \\ = \frac{8}{\pi} \cdot \sum\_{r=1}^{2n+1} \left[\sum\_{k=\lfloor\frac{r}{2}\rfloor}^{\min\{r-1, n\}} c\_{n,k}(-1)^k a\_{k,r-k-1}\right] t^r. \end{aligned} \tag{A39}$$

The requirement, thus, is for *m* = 2*n*. Equating coefficients (see Figure A3) yields:

$$\begin{aligned} t^{2n+1}: &\quad -4\beta\_{2n} = \frac{8}{\pi} \cdot (-1)^n c\_{n,n} a\_{n,n} \Rightarrow \quad \beta\_{2n} = \frac{-2}{\pi} \cdot (-1)^n c\_{n,n} a\_{n,n} \\ t^{2n}: &\quad -4\beta\_{2n-1} = \frac{8}{\pi} \left[ (-1)^n c\_{n,n} a\_{n,n-1} \right] \Rightarrow \quad \beta\_{2n-1} = \frac{-2}{\pi} \cdot (-1)^n c\_{n,n} a\_{n,n-1} \\ t^{2n-1}: &\quad 2n\beta\_{2n} - 4\beta\_{2n-2} = \frac{8}{\pi} \left[ (-1)^{n-1} c\_{n,n-1} a\_{n-1,n-1} + (-1)^n c\_{n,n} a\_{n,n-2} \right] \\ &\quad \Rightarrow \quad \beta\_{2n-2} = \frac{2n\beta\_{2n}}{4} - \frac{2}{\pi} \left[ (-1)^{n-1} c\_{n,n-1} a\_{n-1,n-1} + (-1)^n c\_{n,n} a\_{n,n-2} \right] \\ &\quad \dots \\ t^3: &\quad 4\beta\_4 - 4\beta\_2 = \frac{8}{\pi} \left[ -c\_{n,1}a\_{1,1} + c\_{n,2}a\_{2,0} \right] \Rightarrow \quad \beta\_2 = \beta\_4 - \frac{2}{\pi} \left[ -c\_{n,1}a\_{1,1} + c\_{n,2}a\_{2,0} \right] \\ t^2: &\quad 3\beta\_3 - 4\beta\_1 = \frac{-8}{\pi} \cdot c\_{n,1} a\_{1,0} \Rightarrow \quad \beta\_1 = \frac{3\beta\_3}{4} + \frac{2}{\pi} \cdot c\_{n,1} a\_{1,0} \\ t^1: &\quad 2\beta\_2 - 4\beta\_0 = \frac{8}{\pi} \cdot c\_{n,0} a\_{0,0} \Rightarrow \quad \beta\_0 = \frac{\beta\_2}{2} - \frac{2}{\pi} \cdot c\_{n,0} a\_{0,0} \end{aligned} \tag{A40}$$

As the coefficients *an*,*n*−1, *an*,*n*−3,..., *a*1,0 are zero, the algorithm is:

$$\begin{aligned} \beta\_{2n} &= \frac{-2}{\pi} \cdot (-1)^{n} c\_{n,n} a\_{n,n} \\ \beta\_{2n-2i} &= \frac{[2n-2i+2]\beta\_{2n-2i+2}}{4} - \frac{2}{\pi} \sum\_{k=n-i}^{\min\{2n-2i,n\}} (-1)^k c\_{n,k} a\_{k,2(n-i)-k}, \ i \in \{1, \dots, n-1\} \\ \beta\_0 &= \frac{\beta\_2}{2} - \frac{2}{\pi} \cdot c\_{n,0} a\_{0,0} \end{aligned} \tag{A41}$$
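The recursion (A41) admits an analogous structural check. The sketch below again uses random placeholder values for $c_{n,k}$ and for the coefficients $a_{k,i}$ of $p(k,t)$ (with the parity property that entries with $k+i$ odd vanish, so the odd-index $\beta$ are zero), and verifies the differential equation for $p_{n,2}$ in the form (A35):

```python
import numpy as np

def p_n2_coeffs(n, c, a):
    """Coefficients beta_0..beta_{2n} of p_{n,2} via (A41).
    a[k][i] are the coefficients of p(k, t); entries with k + i odd are
    zero, so the odd-index beta vanish."""
    beta = np.zeros(2 * n + 1)
    beta[2 * n] = -2.0 / np.pi * (-1) ** n * c[n] * a[n][n]
    for i in range(1, n):
        j = 2 * n - 2 * i
        s = sum((-1) ** k * c[k] * a[k][2 * (n - i) - k]
                for k in range(n - i, min(j, n) + 1))
        beta[j] = (j + 2) * beta[j + 2] / 4.0 - 2.0 / np.pi * s
    beta[0] = beta[2] / 2.0 - 2.0 / np.pi * c[0] * a[0][0]
    return beta

# Placeholder data: n = 4; a[k][i] random for i <= k, zero when k + i odd.
rng = np.random.default_rng(1)
n = 4
c = rng.standard_normal(n + 1)
a = [[rng.standard_normal() if (k + i) % 2 == 0 else 0.0
      for i in range(k + 1)] for k in range(n + 1)]

beta = p_n2_coeffs(n, c, a)

# Residual of p'(t) - 4 t p(t) - (8/pi) sum_k sum_i (-1)^k c_k a_{k,i} t^{k+i+1}.
lhs = np.zeros(2 * n + 2)
for j in range(2 * n):
    lhs[j] += (j + 1) * beta[j + 1]
for j in range(1, 2 * n + 2):
    lhs[j] -= 4 * beta[j - 1]
rhs = np.zeros(2 * n + 2)
for k in range(n + 1):
    for i in range(k + 1):
        rhs[k + i + 1] += 8.0 / np.pi * (-1) ** k * c[k] * a[k][i]
assert np.allclose(lhs, rhs)
```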

#### **Appendix E. Proof of Theorem 8**

Consider the results stated in Theorem 1:

$$\text{erf}(x) = \frac{2}{\sqrt{\pi}} \cdot \int\_0^{x} e^{-\lambda^2} d\lambda \approx \frac{2}{\sqrt{\pi}} \cdot \sum\_{k=0}^n c\_{n,k} x^{k+1} \left[ p(k,0) + (-1)^k p(k,x) e^{-x^2} \right]. \tag{A42}$$

Differentiation then yields

$$\begin{split} e^{-x^{2}} &\approx \sum\_{k=0}^{n} c\_{n,k} (k+1) x^{k} \left[ p(k,0) + (-1)^{k} p(k, x) e^{-x^{2}} \right] + \\ &\quad \sum\_{k=0}^{n} c\_{n,k} (-1)^{k} x^{k+1} \left[ p^{(1)}(k, x) e^{-x^{2}} - 2 x p(k, x) e^{-x^{2}} \right] \end{split} \tag{A43}$$

which leads to the required result:

$$e^{-\mathbf{x}^2} \approx \frac{\sum\_{k=0}^n c\_{n,k}(k+1)\mathbf{x}^k p(k,0)}{1 + \sum\_{k=0}^n c\_{n,k}(-1)^{k+1}\mathbf{x}^k \left[p(k,\mathbf{x})(k+1-2\mathbf{x}^2) + \mathbf{x}p^{(1)}(k,\mathbf{x})\right]} \tag{A44}$$

#### **Appendix F. Proof of Theorem 9**

Consider the exact result

$$\text{erf}(\mathbf{x}) = f\_0(\mathbf{x}) + \varepsilon\_0(\mathbf{x}) \tag{A45}$$

where $f_0$ is specified by Equation (32) and the derivative of the error term, $\varepsilon_0^{(1)}(x)$, is specified by Equation (40). By utilizing a Taylor series approximation for $\exp(-x^2)$, $\varepsilon_0^{(1)}(x)$ can be written as

$$\varepsilon\_{0}^{(1)}(\mathbf{x}) = \frac{\mathbf{x}^{2}}{\sqrt{\pi}} \cdot \left[ 1 - \frac{3\mathbf{x}^{2}}{2} + \frac{5\mathbf{x}^{4}}{6} - \frac{7\mathbf{x}^{6}}{24} + \frac{9\mathbf{x}^{8}}{120} - \dots + \frac{(-1)^{k}(2k+1)\mathbf{x}^{2k}}{(k+1)!} + \dots \right] \tag{A46}$$

Integration yields

$$\varepsilon\_{0}(\mathbf{x}) = \frac{\mathbf{x}^{3}}{\sqrt{\pi}} \cdot \left[ \frac{1}{3} - \frac{3\mathbf{x}^{2}}{2 \cdot 5} + \frac{5\mathbf{x}^{4}}{6 \cdot 7} - \frac{7\mathbf{x}^{6}}{24 \cdot 9} + \frac{9\mathbf{x}^{8}}{120 \cdot 11} - \dots + \frac{(-1)^{k}(2k+1)\mathbf{x}^{2k}}{(2k+3)(k+1)!} + \dots \right] \tag{A47}$$

and the following series for the error function then follows:

$$\begin{array}{ll}\text{erf}(\mathbf{x}) = & \frac{\mathbf{x}}{\sqrt{\pi}} + \frac{\mathbf{x}}{\sqrt{\pi}} \cdot e^{-\mathbf{x}^2} + \\\\ \frac{\mathbf{x}^3}{\sqrt{\pi}} \cdot \left[ \frac{1}{3} - \frac{3\mathbf{x}^2}{2\cdot 5} + \frac{5\mathbf{x}^4}{6\cdot 7} - \frac{7\mathbf{x}^6}{24\cdot 9} + \frac{9\mathbf{x}^8}{120\cdot 11} - \dots + \frac{(-1)^k (2k+1) \mathbf{x}^{2k}}{(2k+3)(k+1)!} + \dots \right] \end{array} \tag{A48}$$

The series associated with first- and second-order approximations follow in an analogous manner.
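A short numerical sketch illustrates the practical behaviour of the series (A48); the truncated series agrees with erf(x) to near machine precision for moderate x:

```python
import math

def erf_series(x, K=30):
    """Series (A48): x/sqrt(pi)*(1 + exp(-x^2)) plus the integrated
    residual series of (A47), truncated after K + 1 terms."""
    s = sum((-1) ** k * (2 * k + 1) * x ** (2 * k)
            / ((2 * k + 3) * math.factorial(k + 1)) for k in range(K + 1))
    return (x / math.sqrt(math.pi) * (1.0 + math.exp(-x * x))
            + x ** 3 / math.sqrt(math.pi) * s)

for x in [0.25, 0.5, 1.0, 2.0]:
    assert abs(erf_series(x) - math.erf(x)) < 1e-12
```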

#### **Appendix G. Proof of Theorem 10**

The filter output is given by the convolution integral:

$$\begin{split} y(t) &= \int\_{0}^{t} \text{erf}\left[\frac{\lambda}{\gamma}\right] \cdot \frac{(t-\lambda)e^{-[t-\lambda]/\tau}}{\tau^{2}} \text{d}\lambda \\ &= \frac{e^{-t/\tau}}{\tau^{2}} \cdot \left[t \int\_{0}^{t} \text{erf}\left[\frac{\lambda}{\gamma}\right] e^{\lambda/\tau} \text{d}\lambda - \int\_{0}^{t} \text{erf}\left[\frac{\lambda}{\gamma}\right] \lambda e^{\lambda/\tau} \text{d}\lambda\right]. \end{split} \tag{A49}$$

Using the integral results, e.g., Equations (4.2.1) and (4.2.5) of [38]:

$$\int\_{0}^{t} \text{erf}(a\lambda)e^{b\lambda} \text{d}\lambda = \frac{1}{b} \text{erf}(at)e^{bt} - \frac{1}{b} \exp\left[\frac{b^2}{4a^2}\right] \text{erf}\left[at - \frac{b}{2a}\right] + \frac{1}{b} \exp\left[\frac{b^2}{4a^2}\right] \text{erf}\left[-\frac{b}{2a}\right] \tag{A50}$$

$$\begin{split} \int\_{0}^{t} \text{erf}(a\lambda) \lambda e^{b\lambda} \text{d}\lambda &= \frac{1}{b} \left[ t - \frac{1}{b} \right] \text{erf}(at) e^{bt} - \frac{1}{b} \exp\left[\frac{b^{2}}{4a^{2}}\right] \left[ \left[\frac{b}{2a^{2}} - \frac{1}{b}\right] \text{erf}\left[at - \frac{b}{2a}\right] - \frac{1}{a\sqrt{\pi}} \exp\left[-\left(at - \frac{b}{2a}\right)^{2}\right] \right] + \\ &\quad \frac{1}{b} \exp\left[\frac{b^{2}}{4a^{2}}\right] \left[ \left[\frac{b}{2a^{2}} - \frac{1}{b}\right] \text{erf}\left[-\frac{b}{2a}\right] - \frac{1}{a\sqrt{\pi}} \exp\left[\frac{-b^{2}}{4a^{2}}\right] \right] \end{split} \tag{A51}$$

With $a = 1/\gamma$, $b = 1/\tau$, and $b^2/4a^2 = \gamma^2/4\tau^2$, it then follows that

$$\begin{aligned} y(t) = {} & \frac{te^{-t/\tau}}{\tau^2}\left[\tau\, \text{erf}\left[\frac{t}{\gamma}\right]e^{t/\tau} - \tau\exp\left[\frac{\gamma^2}{4\tau^2}\right]\text{erf}\left[\frac{t}{\gamma} - \frac{\gamma}{2\tau}\right] + \tau\exp\left[\frac{\gamma^2}{4\tau^2}\right]\text{erf}\left[\frac{-\gamma}{2\tau}\right]\right] - \\ & \frac{e^{-t/\tau}}{\tau^2}\left[\tau\left[t - \tau\right]\text{erf}\left[\frac{t}{\gamma}\right]e^{t/\tau} - \tau\exp\left[\frac{\gamma^2}{4\tau^2}\right]\left[\left[\frac{\gamma^2}{2\tau} - \tau\right]\text{erf}\left[\frac{t}{\gamma} - \frac{\gamma}{2\tau}\right] - \frac{\gamma}{\sqrt{\pi}}\exp\left[-\left[\frac{t}{\gamma} - \frac{\gamma}{2\tau}\right]^2\right]\right] + \right. \\ & \left. \tau\exp\left[\frac{\gamma^2}{4\tau^2}\right]\left[\left[\frac{\gamma^2}{2\tau} - \tau\right]\text{erf}\left[\frac{-\gamma}{2\tau}\right] - \frac{\gamma}{\sqrt{\pi}}\exp\left[\frac{-\gamma^2}{4\tau^2}\right]\right]\right] \end{aligned} \tag{A52}$$

Simplifying, and using the fact that the error function is an odd function, yields the required result:

$$\begin{array}{c} y(t) = \ \operatorname{erf}\left[\frac{t}{\gamma}\right] + \\ \frac{e^{-t/\tau}}{\tau} \cdot \begin{bmatrix} \exp\left[\frac{\gamma^2}{4\tau^2}\right] \left[\frac{\gamma^2}{2\tau} - (t+\tau)\right] \left[\operatorname{erf}\left[\frac{\gamma}{2\tau}\right] - \operatorname{erf}\left[\frac{\gamma}{2\tau} - \frac{t}{\gamma}\right] \right] - \\ \frac{\gamma}{\sqrt{\pi}} \exp\left[\frac{\gamma^2}{4\tau^2}\right] \exp\left[-\left[\frac{t}{\gamma} - \frac{\gamma}{2\tau}\right]^2\right] + \frac{\gamma}{\sqrt{\pi}} \end{bmatrix} \end{array} \tag{A53}$$
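The closed form (A53) can be cross-checked against a direct numerical evaluation of the convolution integral (A49); the sketch below uses composite Simpson quadrature and the illustrative parameter values $\gamma = 1$, $\tau = 0.5$ (not values from the paper):

```python
import math

def y_closed(t, gamma, tau):
    """Filter output per the closed form (A53)."""
    E = math.exp(gamma ** 2 / (4.0 * tau ** 2))
    u = gamma / (2.0 * tau)
    bracket = (E * (gamma ** 2 / (2.0 * tau) - (t + tau))
               * (math.erf(u) - math.erf(u - t / gamma))
               - gamma / math.sqrt(math.pi) * E
               * math.exp(-(t / gamma - u) ** 2)
               + gamma / math.sqrt(math.pi))
    return math.erf(t / gamma) + math.exp(-t / tau) / tau * bracket

def y_numeric(t, gamma, tau, N=4000):
    """Composite Simpson evaluation of the convolution (A49)."""
    h = t / N
    def f(lam):
        return (math.erf(lam / gamma) * (t - lam)
                * math.exp(-(t - lam) / tau) / tau ** 2)
    s = f(0.0) + f(t)
    s += 4 * sum(f((2 * j - 1) * h) for j in range(1, N // 2 + 1))
    s += 2 * sum(f(2 * j * h) for j in range(1, N // 2))
    return s * h / 3.0

gamma, tau = 1.0, 0.5
for t in [0.5, 1.0, 2.0]:
    assert abs(y_closed(t, gamma, tau) - y_numeric(t, gamma, tau)) < 1e-8
```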

To prove convergence, consider

$$\lim\_{n \to \infty} y\_n(t) = \lim\_{n \to \infty} \int\_0^t f\_n\left[\frac{\lambda}{\gamma}\right] h(t - \lambda) d\lambda = \int\_0^t \text{erf}\left[\frac{\lambda}{\gamma}\right] h(t - \lambda) d\lambda \tag{A54}$$

where lim*n*→<sup>∞</sup> *fn*(*x*) = erf(*x*) and *<sup>h</sup>* is the impulse response of the second-order filter. The interchange of limit and integration is valid, consistent with Lemma 2, as the integrand comprises differentiable bounded functions.

#### **References**

