**3. L*p*-Solution to the Random Linear Delay Differential Equation with a Stochastic Forcing Term**

In this section we solve (1) in the L*p*-sense. To do so, we will demonstrate that *x*(*t*) defined by (2) is the unique L*p*-solution to (1). We will take advantage of the decomposition of problem (1) into its homogeneous part, (3), and its complete part, (4). The formal solution to (3) is given by *y*(*t*) defined as (5), while the formal solution to (4) is given by *z*(*t*) expressed as (6). The previous contribution [17] provides conditions under which *y*(*t*) defined by (5) solves (3) in the L*p*-sense. Thus, our primary goal will be to find conditions under which *z*(*t*) given by (6) solves (4) in the L*p*-sense.

Again, recall that the integrals that appear in the expressions (2), (5) and (6) are L*p*-Riemann integrals.

Uniqueness of the solution to (1) (existence is addressed afterwards) is proved analogously to ([17] Theorem 3.1), by invoking results from [7] that connect L*p*-solutions with sample-path solutions, which satisfy properties analogous to those of deterministic solutions. The precise uniqueness statement is as follows.

**Theorem 1** (Uniqueness)**.** *The random differential equation problem with delay (1) has at most one* L*p-solution, for* 1 ≤ *p* < ∞*.*

**Proof.** Assume that (1) has an L*p*-solution. We will prove it is unique. Let *x*1(*t*) and *x*2(*t*) be two L*p*-solutions to (1). Let *u*(*t*) = *x*1(*t*) − *x*2(*t*), which satisfies the random differential equation problem with delay

$$\begin{cases} u'(t,\omega) = a(\omega)u(t,\omega) + b(\omega)u(t-\tau,\omega), \ t \ge 0, \\ u(t,\omega) = 0, \ -\tau \le t \le 0. \end{cases}$$

If *t* ∈ [0, *τ*], then *t* − *τ* ∈ [−*τ*, 0]; therefore, *u*(*t* − *τ*) = 0. Thus, *u*(*t*) satisfies a random differential equation problem with no delay on [0, *τ*]:

$$\begin{cases} u'(t,\omega) = a(\omega)u(t,\omega), \ t \in [0,\tau],\\ u(0,\omega) = 0. \end{cases} \tag{7}$$

In [7], it was proved that any L*p*-solution to a random initial value problem has a product measurable representative which is an absolutely continuous solution in the sample-path sense. Since the sample-path solution to (7) must be 0 (from the deterministic theory), we conclude that *u*(*t*) = 0 on [0, *τ*], as desired. The same reasoning applies inductively on the subsequent intervals [*τ*, 2*τ*], [2*τ*, 3*τ*], etc.
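Explicitly, in the first induction step: since *u* ≡ 0 on [0, *τ*], the delayed term satisfies *u*(*t* − *τ*) = 0 for *t* ∈ [*τ*, 2*τ*], so *u*(*t*) again solves a problem with no delay,

$$\begin{cases} u'(t,\omega) = a(\omega)u(t,\omega), \ t \in [\tau, 2\tau], \\ u(\tau,\omega) = 0, \end{cases}$$

whose sample-path solution is identically 0, hence *u*(*t*) = 0 on [*τ*, 2*τ*].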

Now we move on to existence results. First, recall that the random delayed exponential function is the solution to the random linear homogeneous differential equation with pure delay that satisfies the unit initial condition.

**Proposition 4** (L*p*-derivative of the random delayed exponential function ([17] Proposition 3.1))**.** *Consider the random system with discrete delay*

$$\begin{cases} \mathbf{x}'(t,\omega) = \mathbf{c}(\omega)\mathbf{x}(t-\tau,\omega), \ t \ge 0, \\ \mathbf{x}(t,\omega) = 1, \ -\tau \le t \le 0, \end{cases} \tag{8}$$

*where c*(*ω*) *is a random variable.*

*If c has absolute moments of any order, then* $\mathbf{e}_{\tau}^{c,t}$ *is the unique* L*p-solution to (8), for all* 1 ≤ *p* < ∞*. On the other hand, if c is bounded, then* $\mathbf{e}_{\tau}^{c,t}$ *is the unique* L∞*-solution to (8).*
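To fix ideas, the delayed exponential admits the standard explicit piecewise-polynomial expression $\mathbf{e}_{\tau}^{c,t} = \sum_{k=0}^{n} c^k (t-(k-1)\tau)^k / k!$ for $(n-1)\tau \le t \le n\tau$, and $\mathbf{e}_{\tau}^{c,t} = 0$ for $t < -\tau$ (a standard fact, not restated in this section). A minimal numerical sketch, with our own function name `delayed_exp`, implements this formula and checks the defining equation (8) for a fixed realization $c(\omega)$ by finite differences:

```python
import math

def delayed_exp(c, tau, t):
    """Delayed exponential e_tau^{c,t}: equals 0 for t < -tau, and
    sum_{k=0}^{n} c**k * (t - (k-1)*tau)**k / k!  for (n-1)*tau <= t <= n*tau."""
    if t < -tau:
        return 0.0
    n = int(math.floor(t / tau)) + 1  # index of the current delay interval
    return sum(c**k * (t - (k - 1) * tau)**k / math.factorial(k)
               for k in range(n + 1))

# On [-tau, 0] the function equals 1 (the unit initial condition in (8)).
print(delayed_exp(0.5, 1.0, -0.5))   # 1.0

# On [0, tau] it equals 1 + c*t.
print(delayed_exp(0.5, 1.0, 0.5))    # 1.25

# Finite-difference check of x'(t) = c * x(t - tau) at an interior point.
c, tau, t, h = 0.5, 1.0, 1.5, 1e-6
lhs = (delayed_exp(c, tau, t + h) - delayed_exp(c, tau, t - h)) / (2 * h)
rhs = c * delayed_exp(c, tau, t - tau)
print(abs(lhs - rhs) < 1e-6)         # True
```

The last check verifies $x'(t) = c\,x(t-\tau)$ at an interior point of $[\tau, 2\tau]$, where the formula is a polynomial in $t$, so the central difference is essentially exact.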

In [17], two results on the existence of a solution to (3) were stated and proved. In terms of notation, the moment-generating function of a random variable *a* is denoted by $\varphi_a(\zeta) = \mathbb{E}[\mathbf{e}^{a\zeta}]$, $\zeta \in \mathbb{R}$.

**Theorem 2** (Existence and uniqueness for (3), first version ([17] Theorem 3.2))**.** *Fix* 1 ≤ *p* < ∞*. Suppose that* $\varphi_a(\zeta) < \infty$ *for all* $\zeta \in \mathbb{R}$*, b has absolute moments of any order, and g belongs to* $C^1([-\tau, 0])$ *in the* L*p*+*η-sense, for certain η* > 0*. Then the stochastic process y*(*t*) *defined by (5) is the unique* L*p-solution to (3).*

**Theorem 3** (Existence and uniqueness for (3), second version ([17] Theorem 3.4))**.** *Fix* 1 ≤ *p* < ∞*. Suppose that a and b are bounded random variables, and g belongs to* $C^1([-\tau, 0])$ *in the* L*p-sense. Then the stochastic process y*(*t*) *defined by (5) is the unique* L*p-solution to (3).*

In what follows, we establish two theorems on the existence of a solution to (4); see Theorem 4 and Theorem 5. As a corollary, we will derive the solution to (1); see Theorem 6 and Theorem 7.

**Theorem 4** (Existence and uniqueness for (4), first version)**.** *Fix* 1 ≤ *p* < ∞*. Suppose that* $\varphi_a(\zeta) < \infty$ *for all* $\zeta \in \mathbb{R}$*, b has absolute moments of any order, and f is continuous on* [0, ∞) *in the* L*p*+*η-sense, for certain η* > 0*. Then the stochastic process z*(*t*) *defined by (6) is the unique* L*p-solution to (4).*

**Proof.** At the beginning of the proof of ([17] Theorem 3.2), it was proved that $b_1 = \mathbf{e}^{-a\tau} b$ has absolute moments of any order, as a consequence of the Cauchy–Schwarz inequality. Therefore, Proposition 4 tells us that the process $\mathbf{e}_{\tau}^{b_1,t}$ is L*q*-differentiable, for each 1 ≤ *q* < ∞, with $\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{e}_{\tau}^{b_1,t} = b_1 \mathbf{e}_{\tau}^{b_1,t-\tau}$. It was also proved that, by the chain rule theorem (Proposition 1), the process $\mathbf{e}^{at}$ is L*q*-differentiable, for each 1 ≤ *q* < ∞, with $\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{e}^{at} = a\mathbf{e}^{at}$. To justify these two assertions on $\mathbf{e}_{\tau}^{b_1,t}$ and $\mathbf{e}^{at}$, the hypotheses $\varphi_a(\zeta) < \infty$ and *b* having absolute moments of any order are required.

Fix 0 ≤ *s* ≤ *t*. Let $Y_1(t,s) = \mathbf{e}^{a(t-s)}$, $Y_2(t,s) = \mathbf{e}_{\tau}^{b_1,t-\tau-s}$ and $Y_3(s) = f(s)$, according to the notation of Lemma 1. Set the product of the three processes $F(t,s) = Y_1(t,s)\,Y_2(t,s)\,Y_3(s)$, so that our candidate solution process becomes $z(t) = \int_0^t F(t,s)\,\mathrm{d}s$. We check the conditions of the random Leibniz rule, see Proposition 3, to differentiate *z*(*t*). By the first paragraph of this proof, in which we stated that both $\mathbf{e}_{\tau}^{b_1,t}$ and $\mathbf{e}^{at}$ are L*q*-differentiable, for each 1 ≤ *q* < ∞, we derive that $Y_1$ and $Y_2$ are L*q*-continuous in both variables, for all 1 ≤ *q* < ∞. Since $Y_3$ is L*p*+*η*-continuous, for certain *η* > 0 by assumption, we deduce that *F* is L*p*-continuous in both variables, as a consequence of Lemma 1.

For fixed *s*, let $Y_1(t) = \mathbf{e}^{a(t-s)}$, $Y_2(t) = \mathbf{e}_{\tau}^{b_1,t-\tau-s}$ and $Y_3 = f(s)$. We have that $Y_1$ and $Y_2$ are L*q*-differentiable, for each 1 ≤ *q* < ∞. The random variable $Y_3$ belongs to L*p*+*η*. By Lemma 2, $F(\cdot,s)$ is L*p*-differentiable at each *t*, with

$$\frac{\partial F}{\partial t}(t,s) = \left\{ a \mathbf{e}^{a(t-s)} \mathbf{e}\_{\tau}^{b\_1, t-\tau-s} + \mathbf{e}^{a(t-s)} b\_1 \mathbf{e}\_{\tau}^{b\_1, t-2\tau-s} \right\} f(s) .$$

Let us see that $\frac{\partial F}{\partial t}(t,s)$ is L*p*-continuous at (*t*,*s*). Since *a* has absolute moments of any order (by finiteness of its moment-generating function) and $\mathbf{e}^{a(t-s)}$ is L*q*-continuous at (*t*,*s*), for each 1 ≤ *q* < ∞, we derive that $a\mathbf{e}^{a(t-s)}$ is L*q*-continuous at each (*t*,*s*), for every 1 ≤ *q* < ∞, by Hölder's inequality. Thus, we have that $Y_1(t,s) = a\mathbf{e}^{a(t-s)}$ and $Y_2(t,s) = \mathbf{e}_{\tau}^{b_1,t-\tau-s}$ are L*q*-continuous at (*t*,*s*), for each 1 ≤ *q* < ∞, while $Y_3(s) = f(s)$ is L*p*+*η*-continuous. By Lemma 1, $a\mathbf{e}^{a(t-s)}\mathbf{e}_{\tau}^{b_1,t-\tau-s} f(s)$ is L*p*-continuous at each (*t*,*s*). Analogously, $\mathbf{e}^{a(t-s)} b_1 \mathbf{e}_{\tau}^{b_1,t-2\tau-s} f(s)$ is L*p*-continuous at (*t*,*s*). Therefore, $\frac{\partial F}{\partial t}(t,s)$ is L*p*-continuous at (*t*,*s*).

By Proposition 3, the process *z*(*t*) is L*p*-differentiable and $z'(t) = F(t,t) + \int_0^t \frac{\partial F}{\partial t}(t,s)\,\mathrm{d}s = f(t) + a z(t) + b z(t-\tau)$ (by the definition of *F*(*t*,*s*) in the proof, $F(t,t) = \mathbf{e}^{a(t-t)} \mathbf{e}_{\tau}^{b_1,t-\tau-t} f(t) = \mathbf{e}_{\tau}^{b_1,-\tau} f(t) = f(t)$, where $\mathbf{e}_{\tau}^{b_1,-\tau} = 1$ by definition of the delayed exponential function), and we are done.
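In the chain of equalities just derived, the identity $\int_0^t \frac{\partial F}{\partial t}(t,s)\,\mathrm{d}s = a z(t) + b z(t-\tau)$ deserves a word. The first summand of $\frac{\partial F}{\partial t}$ integrates to $a z(t)$ directly. For the second summand, since $b_1 = \mathbf{e}^{-a\tau} b$ and the delayed exponential $\mathbf{e}_{\tau}^{b_1,t-2\tau-s}$ vanishes for $s > t - \tau$, one gets, for $t \ge \tau$,

$$\int_0^t \mathbf{e}^{a(t-s)} b_1 \mathbf{e}_{\tau}^{b_1,t-2\tau-s} f(s) \, \mathrm{d}s = b \int_0^{t-\tau} \mathbf{e}^{a((t-\tau)-s)} \mathbf{e}_{\tau}^{b_1,(t-\tau)-\tau-s} f(s) \, \mathrm{d}s = b\,z(t-\tau);$$

for 0 ≤ *t* < *τ*, the integrand vanishes identically, so the identity holds with the zero initial condition of (4).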

Once the existence of an L*p*-solution has been proved, uniqueness follows from Theorem 1.

**Theorem 5** (Existence and uniqueness for (4), second version)**.** *Fix* 1 ≤ *p* < ∞*. Suppose that a and b are bounded random variables, and f is continuous on* [0, ∞) *in the* L*p-sense. Then the stochastic process z*(*t*) *defined by (6) is the unique* L*p-solution to (4).*

**Proof.** As was shown in ([17] Theorem 3.4), the process $\mathbf{e}_{\tau}^{b_1,t}$ is L∞-differentiable and $\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{e}_{\tau}^{b_1,t} = b_1 \mathbf{e}_{\tau}^{b_1,t-\tau}$, because $b_1 = \mathbf{e}^{-a\tau} b$ is bounded. Additionally, the process $\mathbf{e}^{at}$ is L∞-differentiable and $\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{e}^{at} = a\mathbf{e}^{at}$, as a consequence of the deterministic mean value theorem and the boundedness of *a*.

The rest of the proof is completely analogous to that of Theorem 4, applying the second part of both Lemma 1 and Lemma 2.

**Theorem 6** (Existence and uniqueness for (1), first version)**.** *Fix* 1 ≤ *p* < ∞*. Suppose that* $\varphi_a(\zeta) < \infty$ *for all* $\zeta \in \mathbb{R}$*, b has absolute moments of any order, g belongs to* $C^1([-\tau, 0])$ *in the* L*p*+*η-sense and f is continuous on* [0, ∞) *in the* L*p*+*η-sense, for certain η* > 0*. Then the stochastic process x*(*t*) *defined by (2) is the unique* L*p-solution to (1).*

**Proof.** This is a consequence of Theorem 2 and Theorem 4, with *x*(*t*) = *y*(*t*) + *z*(*t*). Uniqueness follows from Theorem 1.

**Theorem 7** (Existence and uniqueness for (1), second version)**.** *Fix* 1 ≤ *p* < ∞*. Suppose that a and b are bounded random variables, g belongs to* $C^1([-\tau, 0])$ *in the* L*p-sense and f is continuous on* [0, ∞) *in the* L*p-sense. Then the stochastic process x*(*t*) *defined by (2) is the unique* L*p-solution to (1).*

**Proof.** This is a consequence of Theorem 3 and Theorem 5, with *x*(*t*) = *y*(*t*) + *z*(*t*). Uniqueness follows from Theorem 1.

**Remark 4.** *As emphasized in ([17] Remark 3.6), the condition of boundedness for a and b in Theorem 7 is necessary if we only assume that g* ∈ $C^1([-\tau, 0])$ *in the* L*p-sense. See ([7] Example p. 541), where it is proved that, in order for a random autonomous and homogeneous linear differential equation of first order to have an* L*p-solution for every initial condition in* L*p, one needs the random coefficient to be bounded.*

*Mathematics* **2020**, *8*, 1013

Assume the conditions of Theorem 6 or Theorem 7. From expression (2), it is possible to approximate the statistical moments of *x*(*t*). We focus on its expectation, E[*x*(*t*)], and on its variance, $\mathbb{V}[x(t)] = \mathbb{E}[x(t)^2] - (\mathbb{E}[x(t)])^2$. These statistics provide information on the average and the dispersion of *x*(*t*), and they are very useful for the uncertainty quantification of *x*(*t*). For ease of notation, denote the stochastic processes

$$F\_1(t,\omega) = \mathbf{e}^{a(\omega)(t+\tau)}\mathbf{e}^{b\_1(\omega),t}\_{\tau}g(-\tau,\omega),$$

$$F\_2(t,s,\omega) = \mathbf{e}^{a(\omega)(t-s)}\mathbf{e}^{b\_1(\omega),t-\tau-s}\_{\tau}(g'(s,\omega)-a(\omega)g(s,\omega)),$$

$$F\_3(t,s,\omega) = \mathbf{e}^{a(\omega)(t-s)}\mathbf{e}^{b\_1(\omega),t-\tau-s}\_{\tau}f(s,\omega).$$

Due to the linearity of the expectation and its interchangeability with the L1-Riemann integral ([5] p. 104), if *p* ≥ 1,

$$\mathbb{E}[\mathbf{x}(t)] = \mathbb{E}[F\_1(t)] + \int\_{-\tau}^{0} \mathbb{E}[F\_2(t, s)] \, \mathrm{d}s + \int\_{0}^{t} \mathbb{E}[F\_3(t, s)] \, \mathrm{d}s.\tag{9}$$

To compute $\mathbb{V}[x(t)]$ when *p* ≥ 2, we start by expanding

$$\begin{split} x(t)^2 &= F\_1(t)^2 + \int\_{-\tau}^0 \int\_{-\tau}^0 F\_2(t, s\_1) F\_2(t, s\_2) \, \mathrm{d}s\_2 \, \mathrm{d}s\_1 \\ &\quad + \int\_0^t \int\_0^t F\_3(t, s\_1) F\_3(t, s\_2) \, \mathrm{d}s\_2 \, \mathrm{d}s\_1 + 2 \int\_{-\tau}^0 F\_1(t) F\_2(t, s) \, \mathrm{d}s \\ &\quad + 2 \int\_0^t F\_1(t) F\_3(t, s) \, \mathrm{d}s + 2 \int\_{-\tau}^0 \int\_0^t F\_2(t, s\_1) F\_3(t, s\_2) \, \mathrm{d}s\_2 \, \mathrm{d}s\_1. \end{split}$$

Each of these integrals has to be considered in $\mathrm{L}^{p/2}$; see ([35] Remark 2). This is due to the loss of integrability of the product, by Hölder's inequality. By applying expectations,

$$\begin{split} \mathbb{E}[\mathbf{x}(t)^2] &= \mathbb{E}[F\_1(t)^2] + \int\_{-\tau}^0 \int\_{-\tau}^0 \mathbb{E}[F\_2(t,s\_1)F\_2(t,s\_2)] \, \mathrm{d}s\_2 \, \mathrm{d}s\_1 \\ &+ \int\_0^t \int\_0^t \mathbb{E}[F\_3(t,s\_1)F\_3(t,s\_2)] \, \mathrm{d}s\_2 \, \mathrm{d}s\_1 + 2 \int\_{-\tau}^0 \mathbb{E}[F\_1(t)F\_2(t,s)] \, \mathrm{d}s \\ &+ 2 \int\_0^t \mathbb{E}[F\_1(t)F\_3(t,s)] \, \mathrm{d}s + 2 \int\_{-\tau}^0 \int\_0^t \mathbb{E}[F\_2(t,s\_1)F\_3(t,s\_2)] \, \mathrm{d}s\_2 \, \mathrm{d}s\_1. \end{split} \tag{10}$$

As a consequence, one derives an expression for V[*x*(*t*)], by utilizing the relation $\mathbb{V}[x(t)] = \mathbb{E}[x(t)^2] - (\mathbb{E}[x(t)])^2$. Other statistics related to moments could be derived in a similar fashion.
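To illustrate how (9) is used in practice, the following sketch (our own helper names `delayed_exp`, `trapz`, `mean_x`; the distribution of (*a*, *b*) is supplied by a user callback, and *g*, *f* are taken deterministic purely for brevity, although in general they are stochastic processes) approximates the outer expectation by Monte Carlo over realizations of (*a*, *b*) and the integrals in *s* by the composite trapezoidal rule:

```python
import math

def delayed_exp(c, tau, t):
    """Delayed exponential e_tau^{c,t} (piecewise-polynomial formula)."""
    if t < -tau:
        return 0.0
    n = int(math.floor(t / tau)) + 1
    return sum(c**k * (t - (k - 1) * tau)**k / math.factorial(k)
               for k in range(n + 1))

def trapz(fn, lo, hi, n):
    """Composite trapezoidal rule for fn on [lo, hi] with n subintervals."""
    h = (hi - lo) / n
    return h * (fn(lo) / 2 + sum(fn(lo + i * h) for i in range(1, n)) + fn(hi) / 2)

def mean_x(t, tau, sample_ab, g, dg, f, n_mc=1000, n_quad=100):
    """Monte Carlo estimate of E[x(t)] via (9): for each realization of
    (a, b), evaluate F1(t) + int_{-tau}^0 F2(t,s) ds + int_0^t F3(t,s) ds."""
    total = 0.0
    for _ in range(n_mc):
        a, b = sample_ab()
        b1 = math.exp(-a * tau) * b
        # Common exponential factor of F2 and F3.
        E = lambda s: math.exp(a * (t - s)) * delayed_exp(b1, tau, t - tau - s)
        F1 = math.exp(a * (t + tau)) * delayed_exp(b1, tau, t) * g(-tau)
        F2 = lambda s: E(s) * (dg(s) - a * g(s))
        F3 = lambda s: E(s) * f(s)
        total += F1 + trapz(F2, -tau, 0.0, n_quad) + trapz(F3, 0.0, t, n_quad)
    return total / n_mc
```

With degenerate (deterministic) coefficients *a* = *b* = 0, *g* ≡ 1 and *f* ≡ 1, problem (1) reduces to *x*′(*t*) = 1, *x*(0) = 1, and the estimate reproduces *x*(*t*) = 1 + *t* exactly, which serves as a quick sanity check; E[*x*(*t*)²] in (10) can be approximated with the same ingredients.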

In Example 1, we will show how useful these expressions are to determine E[*x*(*t*)] and V[*x*(*t*)] in practice. Our procedure is an alternative to the usual techniques for uncertainty quantification: Monte Carlo simulation, generalized polynomial chaos (gPC) expansions, etc. [1,2].
