## **1. Introduction**

Modeling physical systems whose future state depends on the history of the process, due to hereditary characteristics such as aftereffects or time lags, usually requires delay differential models. The delay may be discrete or continuous, depending on whether specific past values or the complete past history enter the equation. The inclusion of a delay requires specific techniques for the theoretical study of the differential model [1–4]. In practice, delay differential models play a key role in different scientific and technical fields [5–10].

In the context of delay differential equations, the construction of non-standard finite difference (NSFD) numerical schemes has not been explored much. NSFD schemes were developed by Mickens in 1994 and 2000 [11,12], together with a later edited book in 2005 [13]. Mickens observed that traditional standard finite difference schemes may be modified, on the basis of exact numerical schemes for basic ordinary differential equations, so that the essential properties of the governing continuous model are mimicked [14]. Since then, NSFD schemes have been successfully designed and applied for ordinary, partial and fractional differential equations [15]. However, delay differential equations have not been addressed in detail.

Recently, [16] proposed an NSFD scheme for the general linear delay problem

$$\begin{cases} x'(t) = a x(t) + \beta x(t-\tau), & t > 0, \\ x(t) = f(t), & -\tau \le t \le 0, \end{cases} \tag{1}$$

($\tau > 0$) from an exact scheme on the whole domain, providing high order of accuracy and consistent dynamical behavior, with simple computational properties. This approach was extended to the non-scalar case in [17].

In modeling, the variability of data, due to limited knowledge and fluctuation of the process under study, lack of information, poorly calibrated measurement devices, etc., gives rise to variability in the model coefficients. Therefore, for a more realistic description of the process, coefficients should be regarded as random quantities on an abstract probability space. When the coefficients are random variables and regular stochastic processes, the solution of the model becomes a differentiable stochastic process, whose trajectories (realizations) solve the deterministic version of the model. A common treatment of random differential models uses mean square (m.s.) calculus [18–24]. Of special importance is the computation of the mean and the variance of the solution stochastic process, or even its probability density function.

We are interested in delay random differential equations. Specifically, we consider the randomization of (1):

$$\begin{cases} x'(t,\omega) = a(\omega)x(t,\omega) + \beta(\omega)x(t-\tau,\omega), & t>0, \ \omega \in \Omega, \\ x(t,\omega) = f(t,\omega), & -\tau \le t \le 0, \ \omega \in \Omega. \end{cases} \tag{2}$$

Here $a$ and $\beta$ are random variables and $f$ is a stochastic process on a complete probability space $(\Omega, \mathcal{F}, \mathbb{P})$, where $\Omega$ is the sample space formed by the outcomes $\omega \in \Omega$, $\mathcal{F}$ is the $\sigma$-algebra of events, and $\mathbb{P} : \mathcal{F} \to [0,1]$ is the probability measure. The solution $x$ is a differentiable stochastic process.

Only recently has the theoretical study of delay random differential equations been initiated. General delay random differential equations were analyzed in the m.s. sense in [25], with the goal of extending some of the existing results on random differential equations without delay from the book [18]. Problem (2) was solved in the m.s. sense in [26], and later generalized to equations with a random forcing term in [27]. On the other hand, in [28] the authors studied (2) but considered the solution in the sample-path sense and computed its probability density function via the random variable transformation technique, for certain forms of the initial condition process.

In this paper, we are concerned with computational aspects of delay random differential equations. Standard finite difference methods have already been applied to random ordinary, partial and fractional differential equations, by establishing the m.s. convergence, and even the convergence of densities, of the numerical discretizations towards the solution stochastic process [29–34]. Here we aim to extend the NSFD method from [16] to (2) by establishing the m.s. convergence of the discretizations. This permits approximating the expectation and the variance of the solution with high accuracy, whenever computationally feasible.

The organization of this paper is the following. In Section 2, the main results on m.s. calculus are reviewed; the material for this section is essentially taken from [18]. In Section 3, the NSFD numerical scheme from [16] is presented. The randomization of the scheme, its m.s. convergence and its usefulness for approximating moments are discussed in Section 4. The theory is illustrated with numerical examples in Section 5. Finally, Section 6 draws the main conclusions.

## **2. M.s. Calculus**

We are interested in second order real random variables $y : \Omega \to \mathbb{R}$, satisfying

$$\mathbb{E}[y^2] = \int\_{\Omega} y(\omega)^2 \, \mathrm{d}\mathbb{P}(\omega) < \infty. \tag{3}$$

We refer the reader to ([18], Ch. 4) and [35]. The set of these random variables is a Hilbert space, denoted by $L^2(\Omega)$ and endowed with the inner product $\langle y\_1, y\_2 \rangle = \mathbb{E}[y\_1 y\_2]$. This inner product gives rise to the norm $\|y\|\_2 = (\mathbb{E}[y^2])^{1/2}$. By the Cauchy–Schwarz inequality ([18], p. 19), $\mathbb{E}[|y\_1 y\_2|] \le \|y\_1\|\_2 \|y\_2\|\_2$. Random variables in $L^2(\Omega)$ are characterized by having finite variance:

$$\mathbb{V}[y] = \mathbb{E}[(y - \mathbb{E}[y])^2] = \mathbb{E}[y^2] - (\mathbb{E}[y])^2 < \infty. \tag{4}$$

This is one of the principal reasons for working with second order random variables: the main statistical information for uncertainty quantification, namely the average value and the dispersion, is well-defined.
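
As a quick sanity check of identity (4) (our own illustration, with an arbitrarily chosen distribution), one can compare both sides by Monte Carlo sampling:

```python
import numpy as np

# Monte Carlo check of the variance identity (4); the lognormal
# distribution is an arbitrary choice of a second order random variable.
rng = np.random.default_rng(0)
y = rng.lognormal(mean=0.0, sigma=0.5, size=10**6)

lhs = np.mean((y - y.mean())**2)      # E[(y - E[y])^2]
rhs = np.mean(y**2) - y.mean()**2     # E[y^2] - (E[y])^2
print(lhs, rhs)                       # agree up to sampling error
```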

A stochastic process $\{z(t) : t \in I \subseteq \mathbb{R}\}$ is of second order if the random variable $z(t)$ is of second order for all $t \in I$. By the Cauchy–Schwarz inequality, it is straightforward to check that a second order stochastic process possesses a correlation function, $\mathbb{E}[z(t\_1)z(t\_2)]$.

Convergence in $L^2(\Omega)$ is defined through its norm $\|\cdot\|\_2$: a sequence of random variables $\{y\_n\}\_{n=1}^{\infty}$ converges to $y$ in $L^2(\Omega)$ if $\lim\_{n\to\infty} \|y\_n - y\|\_2 = 0$. This is referred to as m.s. convergence.

M.s. convergence preserves the convergence of the expectation and the variance; this is a key fact. In general, if $\{x\_n\}\_{n=1}^{\infty}$ and $\{y\_n\}\_{n=1}^{\infty}$ are two sequences of second order random variables such that $x\_n \to x$ and $y\_n \to y$ as $n \to \infty$ in the m.s. sense, then $\mathbb{E}[x\_n y\_n] \to \mathbb{E}[xy]$ ([18], p. 88).
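
Indeed, taking $y\_n = 1$ for every $n$ gives convergence of the expectations, since, by the Cauchy–Schwarz inequality,

$$|\mathbb{E}[x\_n] - \mathbb{E}[x]| = |\mathbb{E}[(x\_n - x) \cdot 1]| \le \|x\_n - x\|\_2 \to 0,$$

while taking $y\_n = x\_n$ gives $\mathbb{E}[x\_n^2] \to \mathbb{E}[x^2]$; combining both with (4) yields $\mathbb{V}[x\_n] \to \mathbb{V}[x]$.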

In the particular case that $\{x\_n\}\_{n=1}^{\infty}$ is a sequence of second order random variables whose mean and variance tend to zero, i.e., $\mathbb{E}[x\_n] \to 0$ and $\mathbb{V}[x\_n] \to 0$ as $n \to \infty$, then $\{x\_n\}\_{n=1}^{\infty}$ is m.s. convergent to zero, since $(\|x\_n\|\_2)^2 = \mathbb{E}[x\_n^2] = \mathbb{V}[x\_n] + (\mathbb{E}[x\_n])^2 \to 0$ as $n \to \infty$. The converse is also true.

M.s. convergence gives rise to m.s. calculus, where continuity, differentiability and Riemann integrability of a stochastic process are naturally defined by taking m.s. limits in the classical definitions. A stochastic process $\{z(t) : t \in I \subseteq \mathbb{R}\}$ is m.s. continuous at $t\_0 \in I$ if $z(t) \to z(t\_0)$ as $t \to t\_0$ in the m.s. sense. It is m.s. differentiable at $t\_0 \in I$ if $\lim\_{h\to 0} (z(t\_0+h)-z(t\_0))/h$ exists in the m.s. sense; the limit is denoted by $z'(t\_0)$. Finally, $z(t)$ is m.s. Riemann integrable on an interval $[a,b] \subseteq I$ if there exists a sequence of partitions $\{P\_n\}\_{n=1}^{\infty}$ with mesh tending to $0$, $P\_n = \{a = t\_0^n < t\_1^n < \cdots < t\_{r\_n}^n = b\}$, such that for any choice of points $s\_i^n \in [t\_{i-1}^n, t\_i^n]$, $i = 1,\ldots,r\_n$, the limit $\lim\_{n\to\infty} \sum\_{i=1}^{r\_n} z(s\_i^n)(t\_i^n - t\_{i-1}^n)$ exists in the m.s. sense; it is denoted by $\int\_a^b z(t)\,\mathrm{d}t$.
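
As an informal illustration (a toy example of ours, not from [18]), consider $z(t) = At$ on $[0,1]$, with $A$ a second order random variable; the Riemann sums converge in m.s. to $\int\_0^1 z(t)\,\mathrm{d}t = A/2$, and the $L^2$ error can be estimated from samples:

```python
import numpy as np

# m.s. Riemann sums for z(t) = A*t on [0, 1]; A ~ Normal(1, 0.3) is an
# arbitrary choice. The m.s. integral is A/2, and the error is measured
# in the L^2(Omega) norm, estimated by Monte Carlo.
rng = np.random.default_rng(1)
A = rng.normal(1.0, 0.3, size=10**5)          # samples of A

for r in (10, 100, 1000):                     # r subintervals of [0, 1]
    s = np.linspace(0.0, 1.0, r + 1)[1:]      # right endpoints s_i
    S = A * s.sum() / r                       # sum_i z(s_i)*(1/r) = A*(sum_i s_i)/r
    err = np.sqrt(np.mean((S - A / 2.0)**2))  # estimate of ||S - A/2||_2
    print(r, err)                             # shrinks as the mesh refines
```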

The following important properties will be used: m.s. continuity on an interval implies m.s. Riemann integrability ([18], Section 4.5.1 (1)); $\|\int\_a^b z(t)\,\mathrm{d}t\|\_2 \le \int\_a^b \|z(t)\|\_2\,\mathrm{d}t$ for any m.s. Riemann integrable process $z(t)$ ([18], Section 4.5.1 (3)); and the fundamental theorem of m.s. calculus ([18], Section 4.5.1 (5), (6)).

Finally, we mention that the essential supremum norm is defined as

$$\|y\|\_{\infty} = \inf\{C \ge 0 : |y| \le C \text{ almost surely}\}. \tag{5}$$

The set of random variables satisfying $\|y\|\_{\infty} < \infty$ gives rise to the Banach space $L^{\infty}(\Omega)$. Obviously, for two random variables $y\_1 \in L^{\infty}(\Omega)$ and $y\_2 \in L^2(\Omega)$, it holds that $\|y\_1 y\_2\|\_2 \le \|y\_1\|\_{\infty} \|y\_2\|\_2 < \infty$.

## **3. NSFD Methods for Linear Deterministic Differential Equations with Delay**

Based on the explicit solution to (1),

$$\begin{split} x(t) &= f(0) \sum\_{k=0}^{m-1} \frac{\beta^k (t - k\tau)^k}{k!} \mathrm{e}^{a(t - k\tau)} \\ &\quad + \sum\_{k=0}^{m-2} \frac{\beta^{k+1}}{k!} \int\_{-\tau}^0 (t - (k+1)\tau - s)^k \mathrm{e}^{a(t - (k+1)\tau - s)} f(s) \, \mathrm{d}s \\ &\quad + \frac{\beta^m}{(m-1)!} \int\_{-\tau}^{t - m\tau} (t - m\tau - s)^{m-1} \mathrm{e}^{a(t - m\tau - s)} f(s) \, \mathrm{d}s, \end{split} \tag{6}$$

for $(m-1)\tau < t \le m\tau$, $m \ge 1$, an exact numerical difference scheme for (1) is obtained in [16,17]. It is detailed in the following theorem.

**Theorem 1.** *([16], Theorem 2; [17], Theorem 2) Consider a mesh size $h > 0$ such that $Nh = \tau$, for some integer $N \ge 1$. Write $t\_n = nh$ and $x\_n = x(t\_n)$, for $n \ge -N$. Then the numerical solution given by $x\_n = f(t\_n)$, for $-N \le n \le 0$, and by*

$$x\_{n+1} = \mathrm{e}^{ah} \sum\_{k=0}^{m-1} \frac{\beta^k h^k}{k!} x\_{n-kN} + \frac{\beta^m}{(m-1)!} \int\_{t\_n - m\tau}^{t\_n - m\tau + h} (t\_n - m\tau + h - s)^{m-1} \mathrm{e}^{a(t\_n - m\tau + h - s)} f(s) \, \mathrm{d}s, \tag{7}$$

*where $(m-1)\tau \le nh < m\tau$ and $m \ge 1$, defines an exact numerical scheme for* (1)*.*
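
For concreteness, here is a minimal Python sketch of one possible implementation of the exact scheme (7); the function name, the data layout and the use of scipy.integrate.quad for the definite integral are our own choices, not taken from [16,17]:

```python
import math
import numpy as np
from scipy.integrate import quad

def exact_scheme(a, beta, tau, f, N, n_steps):
    """Exact scheme (7) for x'(t) = a*x(t) + beta*x(t - tau), x = f on [-tau, 0].

    Returns x_n at t_n = n*h for n = -N, ..., n_steps, where h = tau/N;
    entry i of the output stores x_{i-N}.
    """
    h = tau / N
    x = np.empty(N + n_steps + 1)
    for n in range(-N, 1):                       # history: x_n = f(t_n)
        x[n + N] = f(n * h)
    for n in range(n_steps):
        m = n // N + 1                           # (m-1)*tau <= n*h < m*tau
        total = sum(beta**k * h**k / math.factorial(k) * x[n - k*N + N]
                    for k in range(m))
        lo = (n - m*N) * h                       # t_n - m*tau, lies in [-tau, 0)
        integral, _ = quad(lambda s: (lo + h - s)**(m - 1)
                           * math.exp(a * (lo + h - s)) * f(s), lo, lo + h)
        x[n + 1 + N] = (math.exp(a * h) * total
                        + beta**m / math.factorial(m - 1) * integral)
    return x
```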

Having an exact numerical scheme is ideal, since it reproduces the exact values of the solution at the mesh points. However, a drawback of (7) is that definite integrals need to be computed numerically, and the number of such integrals increases as time advances. Thus, an NSFD method is proposed to maintain sufficient accuracy and adequate dynamical properties while reducing the complexity by avoiding definite integrals. In the first $M$ intervals $[0, \tau], \ldots, [(M-1)\tau, M\tau]$, the exact scheme (7) is used (or any other numerical method with sufficiently high accuracy); afterward, the integral part of (7) is discarded. The precision of the method increases with $M$.

**Theorem 2.** *([16], Theorem 3; [17], Theorem 3) Fix $M \ge 1$, and compute the numerical solution to* (1) *in the intervals $(m-1)\tau \le nh \le m\tau$, for $0 \le m \le M$, with the exact method* (7) *or with any other numerical method of global error at most $\mathcal{O}(h^M)$. Then, for $m \ge M+1$ and $(m-1)\tau \le nh < m\tau$, the expression*

$$x\_{n+1} = \mathrm{e}^{ah} \sum\_{k=0}^{M} \frac{\beta^k h^k}{k!} x\_{n-kN} \tag{8}$$

*defines an NSFD scheme of global error $\mathcal{O}(h^M)$.*
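
A sketch of the corresponding NSFD iteration, reusing exact_scheme from the previous sketch for the warm-up phase (again an illustrative implementation under our own naming, not the authors' code):

```python
import math
import numpy as np

def nsfd_scheme(a, beta, tau, f, N, M, n_steps):
    """NSFD scheme (8): exact scheme (7) on the first M delay intervals,
    then the truncated sum with M + 1 terms; global error O(h^M)."""
    h = tau / N
    warm = min(n_steps, M * N)                   # steps handled exactly, up to t = M*tau
    x = np.empty(N + n_steps + 1)
    x[:N + warm + 1] = exact_scheme(a, beta, tau, f, N, warm)
    for n in range(warm, n_steps):               # here m >= M + 1: integral discarded
        x[n + 1 + N] = math.exp(a * h) * sum(
            beta**k * h**k / math.factorial(k) * x[n - k*N + N]
            for k in range(M + 1))
    return x

# Example usage with arbitrary parameters:
# x = nsfd_scheme(a=-1.0, beta=0.5, tau=1.0, f=lambda t: 1.0 + t,
#                 N=10, M=3, n_steps=60)
```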

As detailed in ([16], Remark 1), the method (8) has the characteristic form of an NSFD scheme:

$$\frac{x\_{n+1} - x\_n}{\left(\mathrm{e}^{ah} - 1\right)/a} = a x\_n + \frac{a\,\mathrm{e}^{ah}}{\mathrm{e}^{ah} - 1} \sum\_{k=1}^{M} \frac{\beta^k h^k}{k!} x\_{n-kN}. \tag{9}$$

Furthermore, in the rest of [16], it is proved and illustrated that the method from Theorem 2 is dynamically consistent with (1) regarding asymptotic stability, positivity preservation, and oscillatory behavior.
