**4. NSFD Methods for Linear Random Differential Equations with Delay: Approximations of Moments**

When *α* and *β* are random variables and *f* is a stochastic process, problem (1) is randomized. These inputs depend on each outcome *ω* ∈ Ω and (2) is obtained. The numerical schemes from Section 3 are also randomized. The exact scheme (7) becomes

$$\begin{split} x\_{n+1}(\omega) &= \mathrm{e}^{\alpha(\omega)h} \sum\_{k=0}^{m-1} \frac{\beta(\omega)^{k}h^{k}}{k!}\, x\_{n-kN}(\omega) \\ &\quad+ \frac{\beta(\omega)^{m}}{(m-1)!} \int\_{t\_{n}-m\tau}^{t\_{n}-m\tau+h} (t\_{n}-m\tau+h-s)^{m-1}\, \mathrm{e}^{\alpha(\omega)(t\_{n}-m\tau+h-s)}\, f(s,\omega) \, \mathrm{d}s, \end{split} \tag{10}$$

where the integral is considered in the m.s. sense (it is assumed that *f* is a m.s. integrable stochastic process), while (8) becomes

$$x\_{n+1}(\omega) = \mathrm{e}^{\alpha(\omega)h} \sum\_{k=0}^{M} \frac{\beta(\omega)^{k}h^{k}}{k!}\, x\_{n-kN}(\omega). \tag{11}$$
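For a fixed outcome *ω*, once the realizations *α*(*ω*) and *β*(*ω*) have been sampled, the sum in (11) is a plain recursion. The following Python sketch illustrates one step of the scheme (the function name and data layout are our own, not from the paper):

```python
import numpy as np

def nsfd_step(x_hist, n, alpha, beta, h, N, M):
    """One step of the random NSFD scheme (11) for a fixed outcome omega:
    x_{n+1} = e^{alpha h} * sum_{k=0}^{M} beta^k h^k / k! * x_{n-kN}.

    x_hist holds the previous iterates x_0, ..., x_n; alpha and beta are the
    sampled realizations alpha(omega), beta(omega); h = tau / N is the step.
    """
    acc, fact = 0.0, 1.0
    for k in range(M + 1):
        if k > 0:
            fact *= k
        idx = n - k * N
        if idx < 0:      # scheme (11) is only applied once all M delayed
            break        # values are available (m >= M + 1 in Theorem 3)
        acc += beta**k * h**k / fact * x_hist[idx]
    return np.exp(alpha * h) * acc
```

With *β* = 0 the recursion collapses to *x*<sub>*n*+1</sub> = e^{αh} *x*<sub>*n*</sub>, the exact update for *x*′ = *αx*, which is the non-standard ingredient of the scheme.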

We translate Theorem 2 into m.s. convergence.

**Theorem 3.** *Suppose that α and β are bounded random variables, and that f is a m.s. continuous stochastic process on* [−*τ*, 0]*. Fix M* ≥ 1 *and compute the numerical stochastic solution to* (2) *on the intervals* (*m* − 1)*τ* ≤ *nh* ≤ *mτ,* 0 ≤ *m* ≤ *M, with the exact method* (10)*. Then, for m* ≥ *M* + 1 *and* (*m* − 1)*τ* ≤ *nh* < *mτ, the expression* (11) *defines a random NSFD scheme with m.s. global error* $\mathcal{O}(h^M)$*.*

**Proof.** For $n = MN$, we have $t\_n = nh = M\tau$ and $t\_{n+1} = (n+1)h > M\tau$. For $t\_k$, $k \le n$, the exact scheme (10) is used, so that $\|x(t\_k) - x\_k\|\_2 = 0$. By (10) and (11), one gets

$$\begin{split} \|x(t\_{n+j+1}) - x\_{n+j+1}\|\_{2} &\leq \|\mathrm{e}^{\alpha h}\|\_{\infty} \sum\_{k=0}^{M} \frac{\|\beta\|\_{\infty}^{k} h^{k}}{k!} \|x(t\_{n+j-kN}) - x\_{n+j-kN}\|\_{2} \\ &\quad+ \frac{\|\beta\|\_{\infty}^{m}}{(m-1)!} \int\_{t\_{n+j}-m\tau}^{t\_{n+j}-m\tau+h} (t\_{n+j} - m\tau + h - s)^{m-1} \|\mathrm{e}^{\alpha(t\_{n+j}-m\tau+h-s)}\|\_{\infty} \|f(s)\|\_{2} \, \mathrm{d}s, \end{split} \tag{12}$$

for $(m-1)\tau \le t\_{n+j} < m\tau$, $m - 1 \ge M$, $j \ge 0$.

Let $M\_0 = \|\beta\|\_\infty$, and let $M\_1, M\_2 > 0$ be such that $\|\mathrm{e}^{\alpha s}\|\_\infty < M\_1$ for $s \in [0, h]$ and $\|f(s)\|\_2 < M\_2$ for $s \in [-\tau, 0]$. We first consider $j = 0, \ldots, N - 1$ and $m - 1 = M$. By (12),

$$\begin{split} \|x(t\_{n+1}) - x\_{n+1}\|\_{2} &\leq \frac{\|\beta\|\_{\infty}^{m}}{(m-1)!} \int\_{t\_{n}-m\tau}^{t\_{n}-m\tau+h} (t\_{n}-m\tau+h-s)^{m-1} \|\mathrm{e}^{\alpha(t\_{n}-m\tau+h-s)}\|\_{\infty} \|f(s)\|\_{2} \, \mathrm{d}s \\ &\leq \frac{M\_{0}^{m} M\_{1} M\_{2}}{(m-1)!} \int\_{t\_{n}-m\tau}^{t\_{n}-m\tau+h} (t\_{n}-m\tau+h-s)^{m-1} \, \mathrm{d}s \\ &= \frac{M\_{0}^{m} M\_{1} M\_{2}}{m!}\, h^{m} \leq C\_{1} h^{m} = C\_{1} h^{M+1}, \end{split} \tag{13}$$

where $C\_1$ is a constant independent of $m$ and $h$,

$$\|x(t\_{n+2}) - x\_{n+2}\|\_2 \le \|\mathrm{e}^{\alpha h}\|\_\infty \|x(t\_{n+1}) - x\_{n+1}\|\_2 + C\_1 h^{M+1} \le C\_1 \left(\mathrm{e}^{\|\alpha\|\_\infty h} + 1\right) h^{M+1}, \tag{14}$$

$$\|x(t\_{n+3}) - x\_{n+3}\|\_2 \le \|\mathrm{e}^{\alpha h}\|\_\infty \|x(t\_{n+2}) - x\_{n+2}\|\_2 + C\_1 h^{M+1} \le C\_1 \left(\mathrm{e}^{2\|\alpha\|\_\infty h} + \mathrm{e}^{\|\alpha\|\_\infty h} + 1\right) h^{M+1}, \tag{15}$$

$$\begin{split} \|x(t\_{n+N}) - x\_{n+N}\|\_{2} &\leq C\_{1} \left(\sum\_{j=0}^{N-1} \mathrm{e}^{j\|\alpha\|\_{\infty}h}\right) h^{M+1} \\ &= C\_{1}\, \frac{\mathrm{e}^{N\|\alpha\|\_{\infty}h} - 1}{\mathrm{e}^{\|\alpha\|\_{\infty}h} - 1}\, h^{M+1} = C\_{1}\, \frac{\mathrm{e}^{\|\alpha\|\_{\infty}\tau} - 1}{\mathrm{e}^{\|\alpha\|\_{\infty}h} - 1}\, h^{M+1} \\ &= \left(C\_{1} \left(\mathrm{e}^{\|\alpha\|\_{\infty}\tau} - 1\right) \frac{\|\alpha\|\_{\infty}h}{\mathrm{e}^{\|\alpha\|\_{\infty}h} - 1}\, \frac{1}{\|\alpha\|\_{\infty}}\right) h^{M} \leq C\_{2} h^{M}, \end{split} \tag{16}$$

where $C\_2$ is a constant that only depends on $\tau$, $\|\alpha\|\_\infty$ and $\|\beta\|\_\infty$. Thus,

$$\max\_{n+1 \le j \le n+N} \|x(t\_j) - x\_j\|\_2 = \mathcal{O}(h^M). \tag{17}$$

We continue by evaluating $\|x(t\_{n+N+j}) - x\_{n+N+j}\|\_2$, $1 \le j \le N$, starting from (12) and employing the bounds already obtained:

$$\|x(t\_{n+N+j}) - x\_{n+N+j}\|\_2 \le \mathrm{e}^{\|\alpha\|\_{\infty}h} \|x(t\_{n+N+j-1}) - x\_{n+N+j-1}\|\_2 + \mathrm{e}^{\|\alpha\|\_{\infty}h} \|\beta\|\_{\infty} h\, C\_2 h^M + C\_1 h^{M+1}. \tag{18}$$

By solving this first-order recursive inequality, we derive

$$\begin{split} \|x(t\_{n+N+j}) - x\_{n+N+j}\|\_{2} &\leq \mathrm{e}^{j\|\alpha\|\_{\infty}h} \|x(t\_{n+N}) - x\_{n+N}\|\_{2} + \sum\_{k=1}^{j} h^{M+1} \left(C\_{1} + C\_{2} \|\beta\|\_{\infty} \mathrm{e}^{\|\alpha\|\_{\infty}h}\right) \mathrm{e}^{\|\alpha\|\_{\infty}h(j-k)} \\ &\leq C\_{2} h^{M} \mathrm{e}^{N\|\alpha\|\_{\infty}h} + h^{M+1} \left(C\_{1} + C\_{2} \|\beta\|\_{\infty} \mathrm{e}^{\|\alpha\|\_{\infty}h}\right) \frac{\mathrm{e}^{j\|\alpha\|\_{\infty}h} - 1}{\mathrm{e}^{\|\alpha\|\_{\infty}h} - 1} \\ &\leq C\_{2} h^{M} \mathrm{e}^{\|\alpha\|\_{\infty}\tau} + h^{M} \left(C\_{1} + C\_{2} \|\beta\|\_{\infty} \mathrm{e}^{\|\alpha\|\_{\infty}h}\right) \left(\mathrm{e}^{\|\alpha\|\_{\infty}\tau} - 1\right) \left(\frac{\|\alpha\|\_{\infty}h}{\mathrm{e}^{\|\alpha\|\_{\infty}h} - 1}\right) \frac{1}{\|\alpha\|\_{\infty}} \\ &\leq C\_{3} h^{M}, \end{split} \tag{19}$$

where $C\_3$ is a constant that only depends on $\tau$, $\|\alpha\|\_\infty$ and $\|\beta\|\_\infty$. Therefore,

$$\max\_{n+N+1 \le j \le n+2N} \|x(t\_j) - x\_j\|\_2 = \mathcal{O}(h^M). \tag{20}$$

In general, we proceed by induction. Suppose that $\|x(t\_j) - x\_j\|\_2 \le C h^M$ for $n+1 \le j \le n+lN$, where $C = C(\tau, \|\alpha\|\_\infty, \|\beta\|\_\infty) > 0$ is constant and $l \ge 1$. We prove that $\max\_{n+lN+1 \le j \le n+(l+1)N} \|x(t\_j) - x\_j\|\_2 = \mathcal{O}(h^M)$. Fix $1 \le j \le N$. From (12) and the induction hypothesis,

$$\begin{split} \|x(t\_{n+lN+j}) - x\_{n+lN+j}\|\_{2} &\leq \mathrm{e}^{\|\alpha\|\_{\infty}h} \|x(t\_{n+lN+j-1}) - x\_{n+lN+j-1}\|\_{2} \\ &\quad+ \mathrm{e}^{\|\alpha\|\_{\infty}h} \sum\_{k=1}^{M} \frac{\|\beta\|\_{\infty}^{k}h^{k}}{k!}\, C h^{M} + C h^{M+1}. \end{split} \tag{21}$$

By solving this first-order recursive inequality, we derive (for *h* < 1)

$$\begin{split} \|x(t\_{n+lN+j}) - x\_{n+lN+j}\|\_{2} &\leq \mathrm{e}^{j\|\alpha\|\_{\infty}h} \|x(t\_{n+lN}) - x\_{n+lN}\|\_{2} \\ &\quad+ \sum\_{k=1}^{j} C h^{M+1} \left(\mathrm{e}^{\|\alpha\|\_{\infty}h} \sum\_{r=1}^{M} \frac{\|\beta\|\_{\infty}^{r}h^{r-1}}{r!} + 1\right) \mathrm{e}^{\|\alpha\|\_{\infty}h(j-k)} \\ &\leq C h^{M} \mathrm{e}^{\|\alpha\|\_{\infty}\tau} + C h^{M+1} \left(\mathrm{e}^{\|\alpha\|\_{\infty} + \|\beta\|\_{\infty}} + 1\right) \frac{\mathrm{e}^{j\|\alpha\|\_{\infty}h} - 1}{\mathrm{e}^{\|\alpha\|\_{\infty}h} - 1} \\ &\leq C h^{M} \left(\mathrm{e}^{\|\alpha\|\_{\infty}\tau} + \left(\mathrm{e}^{\|\alpha\|\_{\infty} + \|\beta\|\_{\infty}} + 1\right) \left(\mathrm{e}^{\|\alpha\|\_{\infty}\tau} - 1\right) \frac{h}{\mathrm{e}^{\|\alpha\|\_{\infty}h} - 1}\right) \\ &\leq \tilde{C} h^{M}, \end{split} \tag{22}$$

where $\tilde{C} = \tilde{C}(\tau, \|\alpha\|\_\infty, \|\beta\|\_\infty) > 0$ is constant. This concludes the proof by induction.

**Remark 1.** *As shall be seen in the numerical computations of Section 5, the boundedness of α and β required in Theorem 3 is sufficient, but not necessary. Nonetheless, for unbounded α and/or β, if one wants to ensure the m.s. convergence of the NSFD scheme a priori, it is possible to properly truncate the supports of α and β. Indeed, since $\lim\_{m\to\infty} \mathbb{P}[\alpha \in (-m, m)] = 1$, one may take a sufficiently large interval $(-m\_\*, m\_\*)$ such that $\mathbb{P}[\alpha \in (-m\_\*, m\_\*)] \approx 1$, and truncate the support of α to $(-m\_\*, m\_\*)$ (analogously for β). In fact, by applying the generalized Markov inequality, it may be demonstrated that any second-order random variable can be truncated to the interval* [*mean* ± 10 × *deviation*]*, so that this interval contains at least 99% of the probability mass irrespective of the probability distribution. In the theory of m.s. calculus, the boundedness of the random input coefficient must usually be imposed: as proved in ([36], Example p. 541), in order for an autonomous and homogeneous first-order linear random differential equation (i.e., x*′(*t*) = *ax*(*t*)*, where a is a random variable) to possess a m.s. solution for every m.s. integrable initial condition x*(0) = *x*<sub>0</sub>*, the coefficient a must be bounded.*
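The 99% claim in Remark 1 follows from Chebyshev's inequality, $\mathbb{P}(|X - \mu| \ge 10\sigma) \le 1/100$, which holds for any second-order random variable. A quick numerical check, with an unbounded and skewed distribution of our own choosing for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# An unbounded, skewed second-order random variable (lognormal: our choice,
# purely illustrative -- the bound below is distribution-free).
x = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000)
mu, sigma = x.mean(), x.std()

# Chebyshev's inequality: P(|X - mu| >= 10 sigma) <= 1/10^2 = 1%.
inside = np.mean(np.abs(x - mu) < 10 * sigma)
print(f"mass inside [mean +/- 10 sd]: {inside:.5f}")
```

For this sample the captured mass is well above the guaranteed 99%, as the distribution-free Chebyshev bound is far from tight for most distributions.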

M.s. convergence guarantees convergence of the expectation and the variance. On the computer, $x\_n(\omega)$ is explicitly and symbolically expressed in terms of $\alpha(\omega)$, $\beta(\omega)$ and $f(\cdot, \omega)$, by employing either the exact scheme (10) along the whole integration region or, at cheaper cost, the NSFD scheme from Theorem 3. By the linearity of the expectation $\mathbb{E}$, one can explicitly compute $\mathbb{E}[x\_n]$. If the NSFD scheme is being used, one is approximating the true expectation of the solution to (2). Complexity is severely increased for large times $t$, large $N$ (small $h$), and moderate or high dimension of the random space. The variance may also be approximated by symbolically expressing $x\_n(\omega)^2$ and explicitly computing $\mathbb{V}[x\_n] = \mathbb{E}[x\_n^2] - (\mathbb{E}[x\_n])^2$, although the complexity becomes significantly affected because the symbolic expression handled is larger.
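The symbolic workflow can be sketched in a few lines. The following simplified example (our own construction: it applies the NSFD recursion (11) from *t* = 0 instead of using the exact scheme (10) on the first *M* intervals, with deterministic *α* = −1, constant history *f* ≡ 1, and *β* uniform on [0.1, 0.2] as in Example 1 of Section 5) builds $x\_n$ as a symbolic expression in *β* and computes $\mathbb{E}[x\_n]$ by integrating against the uniform density:

```python
import sympy as sp

b = sp.Symbol('beta')                          # random coefficient beta(omega)
alpha, tau, N, M = -1, sp.Rational(7, 20), 4, 1
h = tau / N                                    # step size tau / N

# NSFD recursion (11), started from the constant history f(t) = 1.
x = {n: sp.Integer(1) for n in range(-N, 1)}
for n in range(0, 2 * N):
    x[n + 1] = sp.exp(alpha * h) * sum(
        b**k * h**k / sp.factorial(k) * x[n - k * N] for k in range(M + 1))

# E[x_n] for beta ~ U(0.1, 0.2): constant density 1/(0.2 - 0.1) = 10.
Ex = sp.integrate(10 * x[2 * N], (b, sp.Rational(1, 10), sp.Rational(1, 5)))
print(float(Ex))
```

Since *α* + *β* < 0 almost surely here, the computed mean lies strictly between 0 and the initial value 1, consistent with the decay discussed in Section 5.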

Notice that working with these symbolic expressions for $x\_n(\omega)$ seems to be necessary. Indeed, if one applies the expectation operator directly in (11), for instance, then $\mathbb{E}[x\_{n+1}] = \sum\_{k=0}^{M} \mathbb{E}\left[\mathrm{e}^{\alpha h} \frac{\beta^k h^k}{k!} x\_{n-kN}\right]$. Since each $x\_{n-kN}$ depends on $\alpha$ and $\beta$, the expectation $\mathbb{E}\left[\mathrm{e}^{\alpha h} \frac{\beta^k h^k}{k!} x\_{n-kN}\right]$ cannot be split as $\mathbb{E}[\mathrm{e}^{\alpha h}]\, \mathbb{E}[\beta^k] \frac{h^k}{k!}\, \mathbb{E}[x\_{n-kN}]$, unless both $\alpha$ and $\beta$ are nonrandom. So there does not seem to exist a recursive relation for $\{\mathbb{E}[x\_n]\}\_n$.
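The failure of this factorization is easy to observe numerically. In the sketch below (a toy dependence of our own construction: *α*, *β* independent uniforms and a past iterate that, like $x\_{n-kN}$, depends on both), the joint expectation and the naive product of expectations differ markedly:

```python
import numpy as np

rng = np.random.default_rng(1)
h = 0.5
alpha = rng.uniform(0.0, 1.0, size=1_000_000)
beta = rng.uniform(0.0, 1.0, size=1_000_000)

# Toy past iterate that, like x_{n-kN}, depends on both alpha and beta.
x_past = np.exp(alpha * h) * (1.0 + beta * h)

joint = np.mean(np.exp(alpha * h) * beta * h * x_past)
split = np.mean(np.exp(alpha * h)) * np.mean(beta) * h * np.mean(x_past)
print(joint, split)          # the two values differ: the expectation does not factor
```

The gap between the two values is the correlation contribution that a naive recursion for the moments would silently drop.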

Notice that by Jensen's and Cauchy–Schwarz inequalities,

$$\left| \mathbb{E}[\mathbf{x}\_n] - \mathbb{E}[\mathbf{x}(t\_n)] \right| = \left| \mathbb{E}[\mathbf{x}\_n - \mathbf{x}(t\_n)] \right| \le \mathbb{E}[|\mathbf{x}\_n - \mathbf{x}(t\_n)|] \le \|\mathbf{x}\_n - \mathbf{x}(t\_n)\|\_2. \tag{23}$$

By triangular, Jensen's and Cauchy–Schwarz inequalities,

$$\begin{split} |\mathbb{V}[x\_n] - \mathbb{V}[x(t\_n)]| &= \left|\mathbb{E}[x\_n^2] - (\mathbb{E}[x\_n])^2 - \mathbb{E}[x(t\_n)^2] + (\mathbb{E}[x(t\_n)])^2\right| \\ &\le \mathbb{E}\big[|x\_n^2 - x(t\_n)^2|\big] + \left|(\mathbb{E}[x\_n])^2 - (\mathbb{E}[x(t\_n)])^2\right| \\ &= \mathbb{E}\big[|x\_n - x(t\_n)|\,|x\_n + x(t\_n)|\big] + |\mathbb{E}[x\_n] - \mathbb{E}[x(t\_n)]|\,|\mathbb{E}[x\_n] + \mathbb{E}[x(t\_n)]| \\ &\le \|x\_n - x(t\_n)\|\_2 \|x\_n + x(t\_n)\|\_2 + |\mathbb{E}[x\_n] - \mathbb{E}[x(t\_n)]| \left(|\mathbb{E}[x\_n]| + |\mathbb{E}[x(t\_n)]|\right) \\ &\le \|x\_n - x(t\_n)\|\_2 \left(\|x\_n\|\_2 + \|x(t\_n)\|\_2\right) + |\mathbb{E}[x\_n] - \mathbb{E}[x(t\_n)]| \left(|\mathbb{E}[x\_n]| + |\mathbb{E}[x(t\_n)]|\right). \end{split} \tag{24}$$

So the approximations of the expectation and the variance inherit the rate of convergence corresponding to the m.s. norm, which is $\mathcal{O}(h^M)$ when the exact numerical scheme (10) is used for the first *M* intervals of length *τ* and (11) is used for the subsequent intervals (Theorem 3).

In the following section, the m.s. convergence of the NSFD scheme is illustrated with some numerical computations. We point out that the boundedness of *α* and *β* from Theorem 3 is sufficient, but not necessary. It may be possible that a random coefficient is unbounded and the NSFD scheme converges in the m.s. sense.

## **5. Numerical Examples**

The theoretical discussion is illustrated with some numerical computations. We consider specific probability distributions for *α*, *β* and/or *f*, and a fixed delay *τ* > 0. We denote by $x\_n$ the discretization of the random NSFD method from Theorem 3. The exact solution $x(t\_n)$ is computed with the exact scheme (10). Both random variables are explicitly and symbolically expressed in terms of $\alpha(\omega)$, $\beta(\omega)$ and $f(\cdot, \omega)$. These expansions are employed to compute the expectation and the variance, by using the linearity of the expectation. To check the accuracy, the absolute errors in the approximations of the mean value, $\epsilon\_{N,M} = |\mathbb{E}[x(t\_n)] - \mathbb{E}[x\_n]|$, and the variance, $\delta\_{N,M} = |\mathbb{V}[x(t\_n)] - \mathbb{V}[x\_n]|$, are calculated for different values of *N* and *M*. According to the theoretical discussion, the errors should decay as $\mathcal{O}(h^M)$ when *h* → 0, which entails accuracy up to a significant number of digits. We remark that such a level of accuracy cannot be achieved by Monte Carlo simulation, since its error decreases as the reciprocal of the square root of the number of realizations.

The implementations and computations are performed with Mathematica® (Wolfram Research, Inc., Mathematica, Version 12.0, Champaign, IL, USA, 2019), owing to its capability to handle both symbolic and numeric computations.

**Example 1.** *Let τ* = 0.35*. Consider f*(*t*) = 1 *and α* = −1*, while β is a random variable, uniformly distributed on the interval* [0.1, 0.2]*.*

*Figure 1 plots the absolute errors $\epsilon\_{N,M}$ of the approximation of the expectation. First, N* = 10 *is fixed and M* ∈ {1, 2, 3, 4} *varies. Second, M* = 1 *is fixed and N* ∈ {5, 7, 10} *varies. Third, these errors are divided by h to show, because of the overlapping, the decrease* $\mathcal{O}(h^M)$ *as h* → 0*. Observe that the error is exactly* 0 *on the first M intervals of length τ. In the fourth panel,* $\mathbb{E}[x\_n]$ *is plotted, where* $x\_n$ *is the output of the NSFD scheme of Theorem 3; the estimated expected values are validated by Monte Carlo simulation. Figure 2 is analogous for the variance and the absolute error of its approximation,* $\delta\_{N,M}$*. According to ([16], Lemma 4, Theorem 7), since α* + *β* < 0 *and α* ≤ *β almost surely, the NSFD scheme converges to* 0 *almost surely as t* → ∞ *(it is asymptotically stable almost surely). In both figures, observe that* $\mathbb{E}[x\_n]$ *and* $\mathbb{V}[x\_n]$ *tend to* 0 *as t* → ∞*, which means that the NSFD scheme is asymptotically stable in the m.s. sense. Finally, the numerical solution is always positive because β* > 0 *almost surely and f*(*t*) > 0 *([16], Theorem 8).*
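The qualitative behavior reported in Figures 1 and 2 can be reproduced with a crude Monte Carlo experiment. The sketch below uses a plain Euler discretization per realization as a stand-in for the scheme (our simplification, not the NSFD method of Theorem 3); with *β* ~ U[0.1, 0.2], *α* = −1 and history *f* ≡ 1, the sample mean decays toward 0 and every path stays positive:

```python
import numpy as np

rng = np.random.default_rng(2)
tau, alpha = 0.35, -1.0
R, N, steps = 20_000, 10, 200       # realizations, steps per delay, steps after t = 0
h = tau / N
beta = rng.uniform(0.1, 0.2, size=R)

# History f(t) = 1 on [-tau, 0] occupies indices 0..N; then, per realization,
# the Euler stand-in x_{n+1} = (1 + alpha h) x_n + beta h x_{n-N}.
x = np.ones((R, N + 1 + steps))
for n in range(N, N + steps):
    x[:, n + 1] = (1 + alpha * h) * x[:, n] + beta * h * x[:, n - N]

mean_path = x.mean(axis=0)
print(mean_path[N], mean_path[-1])  # starts at 1, decays toward 0
```

Positivity of every path is immediate here because 1 + *αh* > 0 and *β* > 0, in line with ([16], Theorem 8).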

**Example 2.** *Let τ* = 0.35*. Consider f*(*t*) = 1*, α* = 0*, and β random with Gaussian distribution, of zero mean and 0.3 standard deviation. Notice that the support of β is unbounded; however, we will see that m.s. convergence of the NSFD scheme described in Theorem 3 holds.*

*Figure 3 reports the absolute errors $\epsilon\_{N,M}$ of the approximation of the expectation, where N and M take on the same values as in Example 1. The decay* $\mathcal{O}(h^M)$ *as h* → 0 *is captured again. The last panel of the figure plots the expectation of the numerical solution from the NSFD scheme of Theorem 3,* $\mathbb{E}[x\_n]$*, together with Monte Carlo simulation. Figure 4 is analogous for the variance and the absolute error of its approximation,* $\delta\_{N,M}$*. This example is interesting from the dynamics viewpoint. By ([16], Lemma 4), the probability that the zero solution to the realizations of* (2) *is asymptotically stable is the probability that β* < 0 *and τ* < *τ*<sup>∗</sup> = 1/|*β*|*, i.e.,* −1/*τ* < *β* < 0*. Taking into account the Gaussian distribution of β, this probability is* ≈ 0.5 *up to 12 decimals, i.e., approximately half of the time a realization of the NSFD scheme tends to* 0 *as t* → ∞*, and half of the time it does not. The m.s. treatment mixes these two behaviors. In the figures, both* $\mathbb{E}[x\_n]$ *and* $\mathbb{V}[x\_n]$ *seem to increase as t advances, which means that the NSFD scheme is unstable in the m.s. sense. Finally, notice that β is negative with probability one half, yet the mean of the solution is positive ([16], Theorem 8).*

**Figure 1.** Upper left panel: Absolute errors (log-scale) in the approximation of the mean value with the NSFD scheme, with *M* = 1, 2, 3, 4 and *N* = 10 (*h* = *τ*/*N*). Upper right panel: Absolute errors (log-scale) in the approximation of the mean value with the NSFD scheme, with *N* = 5, 7, 10 (*h* = *τ*/*N*) and *M* = 1. Lower left panel: Errors from the upper right panel divided by *h*. Lower right panel: Approximation of the expectation with the NSFD scheme, with *M* = 1 and *N* = 7, and comparison with Monte Carlo simulation (circles) using 10,000 realizations. This figure corresponds to Example 1.

**Figure 2.** Upper left panel: Absolute errors (log-scale) in the approximation of the variance with the NSFD scheme, with *M* = 1, 2, 3, 4 and *N* = 10 (*h* = *τ*/*N*). Upper right panel: Absolute errors (log-scale) in the approximation of the variance with the NSFD scheme, with *N* = 5, 7, 10, 20 (*h* = *τ*/*N*) and *M* = 1. Lower left panel: Errors from the upper right panel divided by *h*. Lower right panel: Approximation of the variance with the NSFD scheme, with *M* = 1 and *N* = 7, and comparison with Monte Carlo simulation (circles) using 10,000 realizations. This figure corresponds to Example 1.

**Figure 3.** Upper left panel: Absolute errors (log-scale) in the approximation of the mean value with the NSFD scheme, with *M* = 1, 2, 3, 4 and *N* = 10 (*h* = *τ*/*N*). Upper right panel: Absolute errors (log-scale) in the approximation of the mean value with the NSFD scheme, with *N* = 5, 7, 10 (*h* = *τ*/*N*) and *M* = 1. Lower left panel: Errors from the upper right panel divided by *h*. Lower right panel: Approximation of the expectation with the NSFD scheme, with *M* = 1 and *N* = 7, and comparison with Monte Carlo simulation (circles) using 10,000 realizations. This figure corresponds to Example 2.

**Figure 4.** Upper left panel: Absolute errors (log-scale) in the approximation of the variance with the NSFD scheme, with *M* = 1, 2, 3, 4 and *N* = 10 (*h* = *τ*/*N*). Upper right panel: Absolute errors (log-scale) in the approximation of the variance with the NSFD scheme, with *N* = 5, 7, 10 (*h* = *τ*/*N*) and *M* = 1. Lower left panel: Errors from the upper right panel divided by *h*. Lower right panel: Approximation of the variance with the NSFD scheme, with *M* = 1 and *N* = 7, and comparison with Monte Carlo simulation (circles) using 10,000 realizations. This figure corresponds to Example 2.

**Example 3.** *Let τ* = 0.35*. Consider f*(*t*) = *γ, where γ is a random variable. It is assumed that α, β and γ are independent random quantities, uniformly distributed on the interval* [0.1, 0.2]*.*

*As in Example 1, Figure 5 reports the absolute errors $\epsilon\_{N,M}$ of the approximation of the expectation. First, for fixed N* = 10 *and M* ∈ {1, 2, 3, 4}*. Second, for fixed M* = 2 *and N* ∈ {5, 7, 10}*. Third, these errors are divided by h*<sup>2</sup> *to highlight, because of the overlapping, the decrease* $\mathcal{O}(h^M)$ *as h* → 0*. The fourth panel plots* $\mathbb{E}[x\_n]$ *with the discretization* $x\_n$ *computed via the NSFD scheme of Theorem 3, which is validated by Monte Carlo simulation. For the variance, computations become more expensive, due to the dimension three of the random space. In particular, the symbolic expression of the exact scheme* (10) *becomes unfeasible, so the exact error* $\delta\_{N,M}$ *of the variance approximation cannot be reported. In Figure 6, we plot* $\mathbb{V}[x\_n]$ *with the discretization* $x\_n$ *from the NSFD scheme of Theorem 3. Comparison is performed with Monte Carlo simulation, showing agreement of the estimates. Based on ([16], Lemma 4, Theorem 7), the condition α* + *β* > 0 *almost surely entails that the NSFD scheme does not approach* 0 *as t* → ∞ *(almost sure instability). This fact agrees with the plots of* $\mathbb{E}[x\_n]$ *and* $\mathbb{V}[x\_n]$*, which seem to increase as t grows; this behavior entails that the NSFD scheme is unstable in the m.s. sense. Finally, β* > 0 *and γ* > 0 *almost surely implies the positivity of the numerical solution ([16], Theorem 8).*

**Figure 5.** Upper left panel: Absolute errors (log-scale) in the approximation of the mean value with the NSFD scheme, with *M* = 1, 2, 3, 4 and *N* = 10 (*h* = *τ*/*N*). Upper right panel: Absolute errors (log-scale) in the approximation of the mean value with the NSFD scheme, with *N* = 5, 7, 10 (*h* = *τ*/*N*) and *M* = 2. Lower left panel: Errors from the upper right panel divided by *h*<sup>2</sup>. Lower right panel: Approximation of the expectation with the NSFD scheme, with *M* = 2 and *N* = 7, and comparison with Monte Carlo simulation (circles) using 10,000 realizations. This figure corresponds to Example 3.

**Figure 6.** Approximation of the variance with the NSFD scheme, with *M* = 1 and *N* = 5 (*h* = *τ*/*N*), and comparison with Monte Carlo simulation (circles) using 10,000 realizations. This figure corresponds to Example 3.

*We would like to remark that even when M* = 1 *and the global error of the NSFD scheme is* O(*h*)*, its error is lower than that of Euler's method, given by* $x\_{n+1} = (1 + \alpha h)x\_n + \beta h x\_{n-N}$*. Euler's method has already been employed for random differential equations (ordinary and fractional) in the m.s. sense [29,30,34]. In this Example 3, Figure 7 plots the errors* $\epsilon\_N$ *of the approximations of the mean, comparing Euler's method and the NSFD scheme with M* = 1 *fixed. Observe that in log scale both errors lie on parallel lines, but the error corresponding to the NSFD scheme is lower; this may be due to the non-standard nature of the method and to its being error-free on* [0, *τ*]*. Although the proposed NSFD method is restricted to the linear random differential equation with delay, it may provide the foundation for designing new non-standard numerical methods for nonlinear delay equations.*

**Figure 7.** Absolute errors (log-scale) in the approximation of the mean value, with Euler's method (dashed lines) and with the NSFD scheme *M* = 1 (solid lines). For step size *h* = *τ*/*N*, the left panel corresponds to *N* = 7 and the right panel to *N* = 10. This figure corresponds to Example 3.

**Remark 2.** *In the recent literature, the m.s. convergence of Euler's method has not been formally proved for delay random differential equations. If randomness is not incorporated into the system by the coefficients, but by a Wiener noise instead (Itô stochastic delay differential equation, which gives rise to non-differentiable solutions), then Euler's method (the Euler–Maruyama method, which considers discrete increments of the driving Wiener process) was rigorously studied and its m.s. convergence was proved in [37]. In the context of delay random differential equations (those with randomness manifested in the coefficients and no Wiener noise), we focus on the linear case* (2) *studied in this paper assuming, as in Theorem 3, that α and β are bounded random variables, so that* $\|\alpha\|\_\infty$ *and* $\|\beta\|\_\infty$ *are finite. The exact m.s. solution to* (2) *satisfies* $x(t\_{n+1}) = x(t\_n) + \alpha \int\_{t\_n}^{t\_n+h} x(s)\, \mathrm{d}s + \beta \int\_{t\_n}^{t\_n+h} x(s-\tau)\, \mathrm{d}s$*, where the integrals are m.s. Riemann integrals. If* $e\_n = x(t\_n) - x\_n$ *denotes the difference between the exact solution and the discretization at the mesh point* $t\_n$*, then*

$$e\_{n+1} = e\_n + \alpha \int\_{t\_n}^{t\_n + h} (x(s) - x\_n) \, \mathrm{d}s + \beta \int\_{t\_n}^{t\_n + h} (x(s - \tau) - x\_{n-N}) \, \mathrm{d}s, \tag{25}$$

*by a simple subtraction. Given any interval* [−*τ*, *T*]*, T* > 0*, the m.s. Lipschitz condition* $\|x(s\_1) - x(s\_2)\|\_2 = \left\|\int\_{s\_2}^{s\_1} x'(s)\, \mathrm{d}s\right\|\_2 \le \left|\int\_{s\_2}^{s\_1} \|x'(s)\|\_2\, \mathrm{d}s\right| \le \lambda |s\_1 - s\_2|$ *holds, where* $\lambda = \lambda(T) = \max\_{[-\tau,T]} \|x'(s)\|\_2 > 0$*. Then* $\|x(s) - x\_m\|\_2 \le \|x(s) - x(t\_m)\|\_2 + \|e\_m\|\_2 \le \lambda h + \|e\_m\|\_2$*, for* $m \ge -N$*,* $s \in [t\_m, t\_m + h]$*,* $t\_m + h \le T$*. Consequently,*

$$\begin{split} \|e\_{n+1}\|\_{2} &\leq \|e\_{n}\|\_{2} + \|\alpha\|\_{\infty}h\left(\lambda h + \|e\_{n}\|\_{2}\right) + \|\beta\|\_{\infty}h\left(\lambda h + \|e\_{n-N}\|\_{2}\right) \\ &= \left(1 + \|\alpha\|\_{\infty}h\right)\|e\_{n}\|\_{2} + \|\beta\|\_{\infty}h\|e\_{n-N}\|\_{2} + \left(\|\alpha\|\_{\infty} + \|\beta\|\_{\infty}\right)h^{2}\lambda. \end{split} \tag{26}$$

*For* 0 ≤ *n* ≤ *N (notice that* $e\_{n-N} = 0$*), by solving the first-order recursive inequality for* $\|e\_n\|\_2$*, we derive the following bounds:*

$$\begin{split} \|e\_{n}\|\_{2} &\leq \sum\_{i=1}^{n} \left( \|\alpha\|\_{\infty} + \|\beta\|\_{\infty} \right) h^{2} \lambda \left( 1 + \|\alpha\|\_{\infty} h \right)^{n-i} \\ &= h^{2} \lambda \left( \|\alpha\|\_{\infty} + \|\beta\|\_{\infty} \right) \frac{\left( 1 + \|\alpha\|\_{\infty} h \right)^{n} - 1}{\|\alpha\|\_{\infty} h} \\ &\leq h \lambda\, \frac{\|\alpha\|\_{\infty} + \|\beta\|\_{\infty}}{\|\alpha\|\_{\infty}} \left[ \left( 1 + \|\alpha\|\_{\infty} \tau/N \right)^{N} - 1 \right] \\ &\leq h \lambda\, \frac{\|\alpha\|\_{\infty} + \|\beta\|\_{\infty}}{\|\alpha\|\_{\infty}} \left( \mathrm{e}^{\|\alpha\|\_{\infty} \tau} - 1 \right) \\ &= C\_{1} h, \end{split} \tag{27}$$

*where* $C\_1 = C\_1(\lambda, \tau, \|\alpha\|\_\infty, \|\beta\|\_\infty)$ *is constant. Then*

$$\max\_{0 \le n \le N} \|e\_n\|\_2 = \mathcal{O}(h). \tag{28}$$

*For N* + 1 ≤ *n* ≤ 2*N, based on similar calculations,*

$$\begin{split} \|e\_{n}\|\_{2} &\leq \left(1+\|\alpha\|\_{\infty}h\right)^{n-N}\|e\_{N}\|\_{2} + \sum\_{l=N+1}^{n} \left[\|\beta\|\_{\infty}\|e\_{l-N}\|\_{2} + \left(\|\alpha\|\_{\infty} + \|\beta\|\_{\infty}\right)\lambda h\right] h \left(1+\|\alpha\|\_{\infty}h\right)^{n-l} \\ &\leq C\_{1}h\, \mathrm{e}^{\|\alpha\|\_{\infty}\tau} + \left[C\_{1}\|\beta\|\_{\infty} + \left(\|\alpha\|\_{\infty} + \|\beta\|\_{\infty}\right)\lambda\right] h^{2}\, \frac{\left(1+\|\alpha\|\_{\infty}h\right)^{n-N}-1}{\|\alpha\|\_{\infty}h} \\ &\leq C\_{2}h, \end{split} \tag{29}$$

*where* $C\_2 = C\_2(\lambda, \tau, \|\alpha\|\_\infty, \|\beta\|\_\infty)$ *is constant. Then*

$$\max\_{N+1 \le n \le 2N} \|e\_n\|\_2 = \mathcal{O}(h). \tag{30}$$

*For* 2*N* + 1 ≤ *n* ≤ 3*N,* 3*N* + 1 ≤ *n* ≤ 4*N, etc., one proceeds similarly. This proves that the random Euler's method has m.s. global error* O(*h*)*. The proof corresponds to the linear case, although it may be extendable to delay random differential equations satisfying a m.s. Lipschitz condition.*
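The O(*h*) rate just derived can be verified numerically for a fixed realization of the coefficients. The parameter values below are our own test choice, in the spirit of Example 1, with a fine Euler grid standing in for the exact solution:

```python
import numpy as np

def euler(alpha, beta, tau, N, T):
    """Euler scheme x_{n+1} = (1 + alpha h) x_n + beta h x_{n-N},
    with constant history x = 1 on [-tau, 0]; returns the value at t = T."""
    h = tau / N
    steps = int(round(T / h))
    x = np.ones(N + 1 + steps)
    for n in range(N, N + steps):
        x[n + 1] = (1 + alpha * h) * x[n] + beta * h * x[n - N]
    return x[-1]

alpha, beta, tau, T = -1.0, 0.15, 0.35, 2.1
ref = euler(alpha, beta, tau, 5120, T)        # fine grid as surrogate exact value
errs = [abs(euler(alpha, beta, tau, N, T) - ref) for N in (10, 20, 40)]
ratios = [e1 / e2 for e1, e2 in zip(errs, errs[1:])]
print(ratios)                                 # ratios close to 2 confirm O(h)
```

Halving *h* roughly halves the error, which is the signature of first-order m.s. convergence at the level of a single realization.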

**Example 4.** *In this example, the linear model* (2) *is considered for fitting the time evolution of a photosynthetic bacterial population, Rhodobacter capsulatus (R. capsulatus) [38], under infrared lighting conditions. Direct cell counts were made for the first 7 days, every two to three days, during which the population grew with no effect of competition for resources (light and/or CO*2*) that would yield logistic nonlinearities. For days 0, 2, 4 and 7, the population sizes, measured in cells/mL scaled by one million, were 0.583, 0.635, 1.08 and 3.20, respectively. For delay τ* = 1 *and initial function f*(*t*) = 0.583 *on* [−*τ*, 0]*, the least-squares estimates for α and β are* 1.20426 *and* −1.18024*, respectively. The effect of small random displacements on the coefficients is studied here. Let us suppose* 0.5% *displacements of α and β with respect to their least-squares estimates, with zero mean values. According to the maximum entropy principle [39], α and* −*β follow truncated exponential distributions, with rates* 1/1.20426 *and* 1/1.18024 *respectively. The expectation and the variance of the output are approximated with the random NSFD scheme. Figure 8 plots the results for M* = 2 *and distinct N (mean values in solid line, and mean* ± *2* × *standard deviation in dashed lines), together with the least-squares fitting. Observe that a small uncertainty of* 0.5% *for parameters may cause significant changes in the final solution, up to 30% variation for the seventh day compared to an idealized situation containing no uncertainty. Observe also that as N increases, the approximations from the NSFD scheme tend to overlap, thus indicating convergence.*

**Figure 8.** Application of the random NSFD scheme for the model of the *R. capsulatus* bacterial population, for *M* = 2 and different values of *N* (mean values in solid line, and mean ± 2 × standard deviation in dashed lines). The least-squares fitting is also plotted. A zoom for a particular region is included for a better appreciation of convergence as *N* grows. This figure corresponds to Example 4.

## **6. Conclusions**

In this paper, we have extended an NSFD numerical scheme recently proposed for deterministic linear differential equations with delay to the random framework. Incorporating randomness into models is important to account for measurement errors in data. M.s. convergence of the numerical discretizations has been established when the two equation coefficients are bounded random variables and the initial condition is a regular stochastic process, with rate of convergence given by $\mathcal{O}(h^M)$, where *h* is the step size and *M* is the number of intervals of length *τ* where the exact scheme is applied. M.s. convergence allows for approximating the expectation and the variance of the solution at the inherited rate $\mathcal{O}(h^M)$, by symbolically expanding the discretizations in terms of the random inputs. The numerical examples have illustrated and assessed the proposed approach. The convergence rate $\mathcal{O}(h^M)$ has been supported numerically, even when the two equation coefficients are unbounded random variables. The asymptotic behavior of the expectation and the variance as the time *t* grows has been evaluated, graphically and taking into account theoretical results on deterministic stability and instability of the zero solution. A comparison with Euler's method has been performed when *M* = 1; although both methods have global errors O(*h*), the error of the NSFD scheme is lower, possibly due to the non-standard nature of the scheme and its being error-free on [0, *τ*]. Also, we have considered an example dealing with actual experimental data for a bacterial species growing under infrared lighting conditions, and have calculated numerical solutions after randomizing the input parameters according to the maximum entropy principle.

The advantage of the random NSFD scheme is the high accuracy in approximating some statistics, which cannot be achieved with Monte Carlo methods. In addition, the procedure is simple: one only symbolically expands the discretizations in terms of the random inputs and afterward applies the corresponding statistical operator. However, this strategy possesses some limitations. Obviously, the necessity of symbolically expressing the discretizations restricts the applicability of the NSFD scheme to moderate step size *h* and time variable *t*, as well as small dimension of the random space. Although the calculation of the expectation of the discretization seems to be quite feasible on the computer, the calculation of the variance may become a big issue with this approach, let alone other statistics of order greater than two. We ask ourselves about the possibility of accurately calculating statistics with the random NSFD scheme without relying on symbolic expansions. We admit that for the moment, Monte Carlo simulation seems the best option for large time variable *t*, small step size *h* or large dimension of the random space, where each realization of the governing delay model is numerically solved by employing a NSFD scheme. For estimating densities, the symbolic expression is too complex and kernel methods are preferable.

Mickens' methodology on NSFD schemes has shown fruitful applications over the years to ordinary, partial and fractional deterministic differential equations. However, only very recently has the application of NSFD numerical schemes to delay deterministic differential equations been explored, in the context of linear models. Thus, this is the first contribution that proposes the use of NSFD schemes for quantifying uncertainty for delay random differential equations. Further study of NSFD methods for delay deterministic and random differential equations needs to be conducted, especially for nonlinear equations, for applications to modeling of real-life systems with aftereffects or time lags.

We propose specific lines of research for possible future developments:


**Author Contributions:** Investigation, J.C. and M.J.; Methodology, J.C. and M.J.; Software, M.J.; Supervision, J.C.C. and F.R.; Validation, all authors; Visualization, all authors; Writing—original draft, J.C. and M.J.; Writing—review & editing, all authors. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work has been supported by the Spanish Ministerio de Economía, Industria y Competitividad (MINECO), the Agencia Estatal de Investigación (AEI) and Fondo Europeo de Desarrollo Regional (FEDER UE) grant MTM2017–89664–P.

**Conflicts of Interest:** The authors declare that there is no conflict of interest regarding the publication of this article.
