*Article* **Reliable Efficient Difference Methods for Random Heterogeneous Diffusion Reaction Models with a Finite Degree of Randomness**

**María Consuelo Casabán †, Rafael Company \*,† and Lucas Jódar †**

† These authors contributed equally to this work.

**Abstract:** This paper deals with the search for reliable and efficient finite difference methods for the numerical solution of random heterogeneous diffusion reaction models with a finite degree of randomness. Efficiency addresses the computational challenge of the random framework, which requires computing not only the approximating stochastic process solution but also its expectation and variance. After studying positivity and conditional random mean square stability, the expectation and variance of the approximating stochastic process are not computed directly but through a set of sampled finite difference schemes obtained by taking realizations of the random scheme and applying the Monte Carlo technique. Thus, the storage accumulation of symbolic expressions that would collapse the approach is avoided while keeping reliability. Results are simulated and a procedure for the numerical computation is given.

**Keywords:** random mean square parabolic model; finite degree of randomness; Monte Carlo method; random finite difference scheme

**MSC:** 35R60; 60H15; 65M06; 65M12

#### **1. Introduction**

For deterministic partial differential equations (PDEs), finite difference (FD) methods are probably the most widely used because they are easy to apply and fairly efficient [1,2]. In order to capture real-world behaviour, models introduce uncertainty in several ways, basically by assuming that data, parameters, and initial and/or boundary conditions are stochastic processes instead of deterministic functions [3,4]. Uncertainty appears not only because of measurement errors, but also from the heterogeneity of the media, material impurities, or even the lack of access to measurements [5,6]. Independently of how the uncertainty is modelled, the consideration of partial differential equation models (PDEMs) poses particular challenges. In fact, it is necessary to compute not only the stochastic process solution or its approximation, but also their statistical moments, mainly the expectation and the variance. Integral transform methods are efficient techniques for solving PDEMs based on integration resources in suitable domains [7,8]. Another powerful technique, well suited to models with complex geometries, is the finite element method [9]. Iterative methods such as FD have particular troubles derived from the storage accumulation of intermediate levels when the computer manages the involved stochastic processes symbolically [10–12]. This drawback of iterative methods for solving PDEMs occurs in both approaches: the one based on Itô calculus [13], the so-called stochastic differential equation approach (SDEA), as well as the one based on mean square calculus [14], also called the random differential equation approach (RDEA). To face this

**Citation:** Casabán, M.-C.; Company, R.; Jódar, L. Reliable Efficient Difference Methods for Random Heterogeneous Diffusion Reaction Models with a Finite Degree of Randomness. *Mathematics* **2021**, *9*, 206. https://doi.org/10.3390/ math9030206

Received: 21 December 2020 Accepted: 18 January 2021 Published: 20 January 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

Instituto Universitario de Matemática Multidisciplinar, Building 8G, Access C, 2nd Floor, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain; macabar@imm.upv.es (M.C.C.); ljodar@imm.upv.es (L.J.)

**<sup>\*</sup>** Correspondence: rcompany@imm.upv.es

computational challenge, we take a set of realizations of the model and then solve each sampled problem using the FD method. Finally, we use the Monte Carlo technique [15,16] to average the results of the deterministic sampled problems in order to compute the expectation and the variance of the approximating solution stochastic process. In our model the involved stochastic processes (s.p.'s) are defined in a complete probability space (Ω, F, P) and have *n* degrees of randomness [14] (p. 37), i.e., they only depend on a finite number *n* of random variables (r.v.'s):

$$h(s) = F(s, A_1, A_2, \dots, A_n), \tag{1}$$

where

$$\left.\begin{array}{l}
A_i,\; i = 1, \dots, n, \text{ are mutually independent r.v.'s;}\\
F \text{ is a differentiable real function of the variable } s\\
\text{(being } s \text{ the spatial variable } x \text{ or the temporal one } t\text{).}
\end{array}\right\} \tag{2}$$

In addition, under this hypothesis, the s.p. *h*(*s*) has sample differentiable trajectories (realizations), i.e., for a fixed event *ω* ∈ Ω, the real function *h*(*s*, *ω*) = *F*(*s*, *A*<sub>1</sub>(*ω*), *A*<sub>2</sub>(*ω*), . . . , *A<sub>n</sub>*(*ω*)) is a differentiable function of the real variable *s*. For the sake of clarity in the presentation and to save notational complexity, we will assume that the s.p.'s involved in the coefficients and initial or boundary conditions have one degree of randomness, i.e., they have the form

$$h(s) = F(s, A),$$

with *A* a r.v. and *F* a differentiable real function of the variable *s*. Then the s.p. *h*(*s*) has sample differentiable trajectories, i.e., for a fixed event *ω* ∈ Ω, the real function *h*(*s*, *ω*) = *F*(*s*, *A*(*ω*)) is a differentiable function of the real variable *s*.
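The idea of a s.p. with one degree of randomness can be sketched numerically: fixing the event ω freezes the r.v. *A*, and what remains is an ordinary smooth function of *s*. The concrete choice F(s, A) = A e^(−s) below is an illustrative assumption (it matches the shape of the diffusion coefficient used later in Section 4.1, but is not prescribed here), and the uniform law for *A* is a placeholder.

```python
import numpy as np

# A s.p. with one degree of randomness, h(s) = F(s, A): each realization
# A(omega) yields a sample differentiable trajectory in s.
# F(s, A) = A * exp(-s) and A ~ Uniform(0.4, 0.6) are illustrative choices.
rng = np.random.default_rng(0)

def trajectory(A):
    """Deterministic trajectory s -> F(s, A) for a fixed event omega."""
    return lambda s: A * np.exp(-s)

s = np.linspace(0.0, 1.0, 101)
draws = rng.uniform(0.4, 0.6, size=200)            # realizations A(omega)
paths = np.array([trajectory(A)(s) for A in draws])
mean_path = paths.mean(axis=0)                     # sample estimate of E[h(s)]
```

Averaging over the sampled trajectories is exactly the mechanism exploited later by the Monte Carlo procedure of Section 4.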

This paper deals with random parabolic partial differential models of the form

$$\frac{\partial u(\mathbf{x},t)}{\partial t} = \frac{\partial}{\partial \mathbf{x}} \left[ p(\mathbf{x}) \frac{\partial u(\mathbf{x},t)}{\partial \mathbf{x}} \right] - q(\mathbf{x})u(\mathbf{x},t), \quad 0 < \mathbf{x} < 1, \quad t > 0,\tag{3}$$

$$u(0,t) = g_1(t), \quad t > 0, \tag{4}$$

$$u(1,t) = g_2(t), \quad t > 0, \tag{5}$$

$$u(\mathbf{x},0) \quad = \quad f(\mathbf{x}), \quad 0 \le \mathbf{x} \le 1,\tag{6}$$

where the diffusion coefficient *p*(*x*), the reaction coefficient *q*(*x*), the boundary conditions *g<sub>i</sub>*(*t*), *i* = 1, 2, and the initial condition *f*(*x*) are s.p.'s with one degree of randomness in the sense defined above. In addition, we assume that *p*(*x*), *q*(*x*), *f*(*x*) and *g<sub>i</sub>*(*t*), *i* = 1, 2, are mean square continuous s.p.'s in the variables *x* and *t*, respectively, that *p*(*x*) is also a mean square differentiable s.p., and that the sample realizations of the random inputs *p*(*x*), *q*(*x*), *g<sub>i</sub>*(*t*), *i* = 1, 2, and *f*(*x*) satisfy the following conditions, denoting by *p*′(*x*) the mean square derivative of *p*(*x*):

$$0 < a_1 \le p(x,\omega) \le a_2 < +\infty, \quad x \in [0,1], \text{ for almost every (a.e.) } \omega \in \Omega, \tag{7}$$

$$\frac{|p'(x,\omega)|}{p(x,\omega)} \le b < +\infty, \quad x \in [0,1], \text{ for a.e. } \omega \in \Omega, \tag{8}$$

$$q_{\min} \le q(x,\omega) \le q_{\max}, \quad x \in [0,1], \text{ for a.e. } \omega \in \Omega, \tag{9}$$

$$g_i(t,\omega) \ge 0, \; i = 1, 2, \quad t > 0, \text{ for a.e. } \omega \in \Omega, \tag{10}$$

$$0 \le f(x,\omega) \le f_{\max}, \quad x \in [0,1], \text{ for a.e. } \omega \in \Omega. \tag{11}$$

This model is frequent in chemical engineering sciences and in heat and mass transfer theory for reaction-diffusion problems with parameters depending on the spatial variable, as occurs in heterogeneous and anisotropic solids, [17] (p. 455), [18] (p. 388), [19,20].

The paper is organized as follows. Section 2 deals with some preliminaries, definitions and notations about the mean square calculus, as well as the construction of the random mean square finite difference scheme (RMSFDS) resulting from the discretization of model (3)–(6). Section 3 is addressed to the study of properties of the RMSFDS such as positivity, stability and consistency. Through a sampling approach, sufficient conditions for stability and positivity of the random numerical solution s.p. in terms of the data and discretization step-sizes are found. Consistency of the RMSFDS with Equation (3) is also treated through a sampling approach and the consistency of the corresponding realized deterministic scheme for each fixed event *ω* ∈ Ω. In Section 4 we construct an algorithm to perform the efficient computation of the expectation and the variance of the numerical solution s.p. using the Monte Carlo method and the solution of the sampled scheme. Numerical simulations for a problem where the exact solution is known are performed, showing the efficiency of the proposed numerical method as well as the computation of the expectation and the variance of the numerical approximating s.p. Conclusions in Section 5 end the paper.

#### **2. Preliminaries and Construction of the Random Finite Difference Scheme**

For the sake of clarity in the presentation, in this section we recall some definitions and concepts related to the *L<sub>p</sub>*-calculus, [14]. In a probability space (Ω, F, P), we denote by *L<sub>p</sub>*(Ω) the space of all real-valued r.v.'s *U* : Ω → R of order *p*, endowed with the norm

$$\|U\|_p = \left(\mathbb{E}[|U|^p]\right)^{1/p} = \left(\int_{\Omega} |U(\omega)|^p f_U(\omega)\, \mathrm{d}\omega\right)^{1/p} < +\infty, \tag{12}$$

where E[·] denotes the expectation operator, *f<sub>U</sub>* the density function of the r.v. *U*, and *ω* an event of the sample space Ω.
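The norm (12) rarely needs to be computed in closed form: in the sampling spirit of this paper it can be approximated by averaging |U|^p over draws of the r.v. The sketch below uses a uniform r.v. as an illustrative assumption, for which the exact value of the *L*<sub>2</sub> norm is 1/√3.

```python
import numpy as np

# Monte Carlo approximation of the L_p norm (12): ||U||_p = (E[|U|^p])^(1/p)
# is estimated by averaging |U|^p over K sampled events.
rng = np.random.default_rng(1)

def lp_norm_mc(samples, p):
    """Sample estimate of the L_p(Omega) norm of a r.v. from its draws."""
    return np.mean(np.abs(samples) ** p) ** (1.0 / p)

# Illustrative check: U ~ Uniform(0, 1) has ||U||_2 = 1/sqrt(3) ~ 0.5774.
u = rng.uniform(0.0, 1.0, 200_000)
est = lp_norm_mc(u, p=2)
```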

Given *<sup>T</sup>* <sup>⊂</sup> <sup>R</sup>, a family of *<sup>t</sup>*-indexed r.v.'s, say {*V*(*t*) : *<sup>t</sup>* <sup>∈</sup> *<sup>T</sup>*}, is called a stochastic process of order *<sup>p</sup>* (*p*-s.p.) if for each *<sup>t</sup>* <sup>∈</sup> *<sup>T</sup>* fixed, the r.v. *<sup>V</sup>*(*t*) <sup>∈</sup> *Lp*(Ω). We say that a *<sup>p</sup>*-s.p. {*V*(*t*) : *<sup>t</sup>* <sup>∈</sup> *<sup>T</sup>*} is *<sup>p</sup>*-th mean continuous at *<sup>t</sup>* <sup>∈</sup> *<sup>T</sup>*, if

$$\|V(t+h) - V(t)\|_p \to 0 \quad \text{as } h \to 0, \; t, t+h \in T.$$

Furthermore, if there exists a *p*-s.p. *V*′(*t*) such that

$$\left\| \frac{V(t+h) - V(t)}{h} - V'(t) \right\|_p \to 0 \quad \text{as } h \to 0, \; t, t+h \in T,$$

then we say that the s.p. {*V*(*t*) : *t* ∈ *T*} is *p*-th mean differentiable at *t* ∈ *T* and *V*′(*t*) is the *p*-derivative of *V*(*t*). In the particular case *p* = 2, working in *L*<sub>2</sub>(Ω), the definitions above lead to the corresponding concepts of mean square (m.s.) continuity and m.s. differentiability, respectively.
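Mean square differentiability can be checked numerically by estimating the *L*<sub>2</sub> norm of the difference quotient minus the candidate derivative. The process V(t) = A t² and the uniform law for *A* below are illustrative assumptions; its m.s. derivative is V′(t) = 2At, and the residual norm shrinks linearly in the increment.

```python
import numpy as np

# Mean square differentiability check for a s.p. with one degree of
# randomness: V(t) = A t^2 (illustrative choice) has m.s. derivative
# V'(t) = 2 A t. The L2 norm of the residual is estimated by Monte Carlo.
rng = np.random.default_rng(5)
A = rng.uniform(0.4, 0.6, 100_000)     # sampled events omega
t = 0.7

def residual_norm(h):
    quotient = (A * (t + h) ** 2 - A * t ** 2) / h   # (V(t+h) - V(t)) / h
    return np.sqrt(np.mean((quotient - 2.0 * A * t) ** 2))

norms = [residual_norm(h) for h in (0.1, 0.01, 0.001)]   # should decay ~ h
```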

In this section, we construct an explicit random finite difference scheme for solving problem (3)–(6). Firstly, let us write Equation (3) in the following form

$$\frac{\partial u(x,t)}{\partial t} = p(x)\frac{\partial^2 u(x,t)}{\partial x^2} + p'(x)\frac{\partial u(x,t)}{\partial x} - q(x)\, u(x,t), \tag{13}$$

where *p*(*x*) ∈ *L<sub>p</sub>*(Ω) is *p*-th mean continuous and differentiable, *p*′(*x*) is the *p*-derivative of *p*(*x*), and *q*(*x*) ∈ *L<sub>p</sub>*(Ω) is *p*-th mean continuous.

Let us consider the uniform partition of the spatial interval [0, 1] of the form *x<sub>i</sub>* = *ih*, 0 ≤ *i* ≤ *M*, with *Mh* = 1. For a fixed time horizon *T*, we consider *N* + 1 time levels *t<sup>n</sup>* = *nk*, 0 ≤ *n* ≤ *N*, with *Nk* = *T*. The numerical approximation of the solution s.p. of the random problem (3)–(6) is denoted by *u<sub>i</sub><sup>n</sup>*, i.e.,

$$u_i^n \approx u(x_i, t^n), \qquad 0 \le i \le M, \; 0 \le n \le N. \tag{14}$$

By using a forward first-order approximation of the time partial derivative and centred second-order approximations for the spatial partial derivatives in Equation (13), one gets the following random numerical scheme at the internal spatial mesh points

$$\frac{u_i^{n+1} - u_i^n}{k} = p_i \frac{u_{i-1}^n - 2u_i^n + u_{i+1}^n}{h^2} + p_i' \frac{u_{i+1}^n - u_{i-1}^n}{2h} - q_i\, u_i^n, \quad 1 \le i \le M-1, \; 0 \le n \le N-1, \tag{15}$$

where *p<sub>i</sub>* = *p*(*x<sub>i</sub>*), *p′<sub>i</sub>* = *p*′(*x<sub>i</sub>*) and *q<sub>i</sub>* = *q*(*x<sub>i</sub>*). The resulting discretized random problem (3)–(6) can be rewritten in the following form

$$\left.\begin{aligned}
u_i^{n+1} &= \frac{k}{h^2}\left(p_i - \frac{h}{2}p_i'\right)u_{i-1}^n + \left(1 - k\, q_i - \frac{2k}{h^2}p_i\right)u_i^n + \frac{k}{h^2}\left(p_i + \frac{h}{2}p_i'\right)u_{i+1}^n,\\
&\qquad 1 \le i \le M-1, \; 0 \le n \le N-1,\\
u_0^n &= g_1^n, \quad u_M^n = g_2^n, \quad 1 \le n \le N,\\
u_i^0 &= f_i, \quad 0 \le i \le M,
\end{aligned}\right\} \tag{16}$$

where *g<sub>1</sub><sup>n</sup>* = *g*<sub>1</sub>(*t<sup>n</sup>*), *g<sub>2</sub><sup>n</sup>* = *g*<sub>2</sub>(*t<sup>n</sup>*), and *f<sub>i</sub>* = *f*(*x<sub>i</sub>*). Please note that all the inputs of the random problem (16) are s.p.'s with a finite degree of randomness and lie in *L<sub>p</sub>*(Ω).
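For a fixed event ω, the scheme (16) is an ordinary explicit three-point update. A minimal sketch of one time level is shown below; the sanity check with a constant diffusion coefficient, zero reaction and a linear steady profile is an illustrative choice, not the paper's example.

```python
import numpy as np

# One explicit time step of the sampled scheme for a fixed event omega:
# the interior update of (16) plus the Dirichlet boundary values g1, g2.
def step(u, p, dp, q, h, k, g1, g2):
    """Advance one time level; u, p, dp (= p'), q are arrays on x_i = i*h."""
    r = k / h**2
    un = u.copy()
    i = np.arange(1, len(u) - 1)
    un[i] = (r * (p[i] - 0.5 * h * dp[i]) * u[i - 1]
             + (1.0 - k * q[i] - 2.0 * r * p[i]) * u[i]
             + r * (p[i] + 0.5 * h * dp[i]) * u[i + 1])
    un[0], un[-1] = g1, g2
    return un

# Sanity check (illustrative): with constant p, q = 0, g1 = 0, g2 = 1,
# the steady linear profile u(x) = x is reproduced exactly.
M, h, k = 10, 0.1, 0.004
x = np.linspace(0.0, 1.0, M + 1)
u1 = step(x, np.full(M + 1, 1.0), np.zeros(M + 1), np.zeros(M + 1),
          h, k, 0.0, 1.0)
```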

#### **3. Properties of the Random Numerical Scheme: Positivity, Stability and Consistency**

We are going to prove the positivity of the random numerical solution {*u<sub>i</sub><sup>n</sup>*, 0 ≤ *i* ≤ *M*, 0 ≤ *n* ≤ *N*} of the random scheme (16) and its ‖·‖*<sub>p</sub>*-stability in the fixed station sense with respect to time. This type of stability is extended here to the random framework.

**Definition 1.** *A random numerical scheme is said to be* ‖·‖*<sub>p</sub>-stable in the fixed station sense in the domain* [0, 1] × [0, *T*]*, if for every partition with k* = Δ*t, h* = Δ*x such that Nk* = *T and Mh* = 1*,*

$$\|u_i^n\|_p \le C, \quad 0 \le i \le M, \quad 0 \le n \le N, \tag{17}$$

*where C is independent of the step-sizes h, k and the time level n.*

First, we are going to find sufficient conditions on the spatial step-size *h* and the temporal step-size *k* so that the numerical solution {*u<sub>i</sub><sup>n</sup>*(*ω*)}, constructed by sampling the random scheme (16) for a fixed *ω* ∈ Ω,

$$\left.\begin{aligned}
u_i^{n+1}(\omega) &= \frac{k}{h^2}\left(p_i(\omega) - \frac{h}{2}p_i'(\omega)\right)u_{i-1}^n(\omega) + \left(1 - k\, q_i(\omega) - \frac{2k}{h^2}p_i(\omega)\right)u_i^n(\omega)\\
&\quad + \frac{k}{h^2}\left(p_i(\omega) + \frac{h}{2}p_i'(\omega)\right)u_{i+1}^n(\omega), \quad 1 \le i \le M-1, \; 0 \le n \le N-1,\\
u_0^n(\omega) &= g_1^n(\omega), \quad u_M^n(\omega) = g_2^n(\omega), \quad 1 \le n \le N,\\
u_i^0(\omega) &= f_i(\omega), \quad 0 \le i \le M,
\end{aligned}\right\} \tag{18}$$

guarantee its positivity, i.e., *u<sub>i</sub><sup>n</sup>*(*ω*) ≥ 0 for 0 ≤ *i* ≤ *M* and for each time level *n*, 0 ≤ *n* ≤ *N*. We denote

$$\begin{array}{rcl}
A_i(h,k)(\omega) &=& \dfrac{k}{h^2}\left(p_i(\omega) - \dfrac{h}{2}p_i'(\omega)\right),\\[2mm]
B_i(h,k)(\omega) &=& 1 - k\, q_i(\omega) - \dfrac{2k}{h^2}p_i(\omega),\\[2mm]
C_i(h,k)(\omega) &=& \dfrac{k}{h^2}\left(p_i(\omega) + \dfrac{h}{2}p_i'(\omega)\right),
\end{array} \tag{19}$$

then the sampling scheme (18) can be rewritten as follows

$$u_i^{n+1}(\omega) = A_i(h,k)(\omega)\, u_{i-1}^n(\omega) + B_i(h,k)(\omega)\, u_i^n(\omega) + C_i(h,k)(\omega)\, u_{i+1}^n(\omega).$$

To guarantee the positivity of the numerical approximation {*u<sub>i</sub><sup>n</sup>*(*ω*)} it is sufficient to impose the positivity of the coefficients defined in (19). Please note that the simultaneous positivity of the coefficients *A<sub>i</sub>*(*h*, *k*)(*ω*) and *C<sub>i</sub>*(*h*, *k*)(*ω*) means that

$$-p\_i(\omega) \le \frac{h}{2}p\_i'(\omega) \le p\_i(\omega) \, ,$$

that is

$$h \le \frac{2p\_i(\omega)}{|p\_i'(\omega)|}. \tag{20}$$

Taking into account the bound condition (8) it follows that coefficients *Ai*(*h*, *<sup>k</sup>*)(*ω*) and *Ci*(*h*, *<sup>k</sup>*)(*ω*), 1 <sup>≤</sup> *<sup>i</sup>* <sup>≤</sup> *<sup>M</sup>* <sup>−</sup> 1, are non-negative for a.e. *<sup>ω</sup>* <sup>∈</sup> <sup>Ω</sup> under condition

$$h \le \frac{2}{b}.\tag{21}$$

Please note that in the particular case where *p<sub>i</sub>*(*ω*) is constant, the positivity of the coefficients *A<sub>i</sub>*(*h*, *k*)(*ω*) and *C<sub>i</sub>*(*h*, *k*)(*ω*) defined in (19) holds for every *h* > 0. To guarantee the positivity of the coefficient *B<sub>i</sub>*(*h*, *k*)(*ω*), from (19) and the bounds (7)–(9) note that

$$B\_i(h,k)(\omega) = 1 - k \, q\_i(\omega) - \frac{2k}{h^2} p\_i(\omega) \ge 1 - k \, q\_{\max} - \frac{2k}{h^2} a\_2 \,. \tag{22}$$

Thus, the positivity of *Bi*(*h*, *<sup>k</sup>*)(*ω*), 1 <sup>≤</sup> *<sup>i</sup>* <sup>≤</sup> *<sup>M</sup>* <sup>−</sup> 1, for a.e. *<sup>ω</sup>* <sup>∈</sup> <sup>Ω</sup>, is verified under the conditions

$$k \le \frac{h^2}{2a\_2} \, , \qquad \text{(If } q\_{\text{max}} < 0\text{)} \, , \tag{23}$$

$$k \le \frac{h^2}{2a\_2 + h^2 q\_{\text{max}}}, \qquad \text{(If } q\_{\text{max}} \ge 0) \,. \tag{24}$$

Then, taking into account the sufficient conditions (21), (23) and (24) on the discretization step-sizes *h* and *k*, the positivity of all the coefficients (19) of the sampled scheme (18) for a.e. *ω* ∈ Ω is guaranteed, and consequently the positivity of the numerical solution {*u<sub>i</sub><sup>n</sup>*(*ω*)}, 0 ≤ *i* ≤ *M*, at each time level *n*, 0 ≤ *n* ≤ *N* (*T* = *kN*), is established.
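The step-size restrictions (21), (23) and (24) can be packaged into a small helper. The sample values used below (b = 1, a₂ = 0.6, q_max = −0.45) are consistent with the test example of Section 4.1, where p(x) = a e^(−x) gives |p′(x)|/p(x) = 1, a ≤ 0.6 and q(x) = −c with c ≥ 0.45.

```python
# Positivity-preserving step sizes: h <= 2/b from (21) and the time-step
# bounds (23) (case q_max < 0) or (24) (case q_max >= 0).
def positive_steps(b, a2, q_max, M):
    """Return h = 1/M and the largest k allowed by (21), (23), (24)."""
    h = 1.0 / M
    if h > 2.0 / b:
        raise ValueError("h violates (21); increase M")
    if q_max < 0:
        k_max = h**2 / (2.0 * a2)                    # condition (23)
    else:
        k_max = h**2 / (2.0 * a2 + h**2 * q_max)     # condition (24)
    return h, k_max

# Values matching the Section 4.1 example: b = 1, a2 = 0.6, q_max = -0.45.
h, k_max = positive_steps(1.0, 0.6, -0.45, M=10)
```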

Let us prove now that the random numerical scheme (16) is ‖·‖*<sub>p</sub>*-stable in the sense of Definition 1. In this study we need to distinguish two cases for the sampled parameter *q<sub>i</sub>*(*ω*) for a fixed *ω* ∈ Ω.

$$\text{Case 1. } \boxed{q\_i(\omega) \ge 0}, 0 \le i \le M.$$

From (18) imposing conditions (21) and (24) one gets

$$\begin{array}{rcl}
u_i^{n+1}(\omega) &\le& \left(A_i(h,k)(\omega) + B_i(h,k)(\omega) + C_i(h,k)(\omega)\right) u_{\text{MAX}_i}^n(\omega)\\[1mm]
&\le& \left(1 - k q_i(\omega)\right) u_{\text{MAX}_i}^n(\omega)\\[1mm]
&\le& u_{\text{MAX}_i}^n(\omega), \qquad 1 \le i \le M-1, \; 0 \le n \le N-1,
\end{array} \tag{25}$$

where

$$u_{\text{MAX}_i}^n(\omega) = \max_{0 \le i \le M}\left\{u_i^n(\omega)\right\}. \tag{26}$$

Using (11), the boundary conditions of (18) and (25) we have by recurrence

$$\begin{split}
u_i^{n+1}(\omega) &\le \max\{g_1^{n+1}(\omega), g_2^{n+1}(\omega), u_{\text{MAX}_i}^n(\omega)\}\\
&\le \max\{g_1^{n+1}(\omega), g_2^{n+1}(\omega), g_1^n(\omega), g_2^n(\omega), u_{\text{MAX}_i}^{n-1}(\omega)\}\\
&\le \cdots\\
&\le \max_{1 \le s \le n+1}\{g_1^s(\omega), g_2^s(\omega), u_{\text{MAX}_i}^0(\omega)\}\\
&\le \max_{0 \le t \le (n+1)k}\left\{g_1(t,\omega), g_2(t,\omega), \max_{x \in [0,1]}\{f(x,\omega)\}\right\}.
\end{split} \tag{27}$$

We denote

$$G(T) = \max\{g_{1,\max}(T), \; g_{2,\max}(T), \; f_{\max}\}, \tag{28}$$

where

$$g_{i,\max}(T) = \max_{0 \le t \le T}\{g_i(t,\omega), \ \text{for a.e. } \omega \in \Omega\}, \quad i = 1, 2. \tag{29}$$

Thus, from (27) and (28) we obtain the following upper bound for the numerical solution of sampling scheme (18)

$$0 \le u_i^n(\omega) \le G(T), \quad \text{for a.e. } \omega \in \Omega, \tag{30}$$

for each level *n*, 0 ≤ *n* ≤ *N* = *T*/*k*, and for each spatial point *x<sub>i</sub>*, 0 ≤ *i* ≤ *M*.

Case 2. $q_{\min} \le \min_{0 \le i \le M}\{q_i(\omega)\} < 0$.

From (18) imposing conditions (21) and (23) and using (25) and (26) we obtain

$$\begin{array}{rcl}
u_i^{n+1}(\omega) &\le& (1 - k\, q_i(\omega))\, u_{\text{MAX}_i}^n(\omega)\\[1mm]
&\le& (1 + k\, |q_{\min}|)\, u_{\text{MAX}_i}^n(\omega), \qquad 1 \le i \le M-1, \; 0 \le n \le N-1.
\end{array} \tag{31}$$

Then, using the boundary conditions of (18) and applying recurrently the bound exhibited in (31), one gets

$$\begin{split}
u_i^{n+1}(\omega) &\le \max\{g_1^{n+1}(\omega), g_2^{n+1}(\omega), (1 + k|q_{\min}|)\, u_{\text{MAX}_i}^n(\omega)\}\\
&\le \max\{g_1^{n+1}(\omega), g_2^{n+1}(\omega), (1 + k|q_{\min}|)\max\{g_1^n(\omega), g_2^n(\omega), (1 + k|q_{\min}|)\, u_{\text{MAX}_i}^{n-1}(\omega)\}\}\\
&\le (1 + k|q_{\min}|)^2 \max\{g_1^{n+1}(\omega), g_2^{n+1}(\omega), g_1^n(\omega), g_2^n(\omega), u_{\text{MAX}_i}^{n-1}(\omega)\}\\
&\le \cdots\\
&\le (1 + k|q_{\min}|)^{n+1} \max_{1 \le s \le n+1}\left\{g_1^s(\omega), g_2^s(\omega), u_{\text{MAX}_i}^0(\omega)\right\}.
\end{split} \tag{32}$$

Taking into account the following inequality

$$(1 + k|q_{\min}|)^s \le (1 + k|q_{\min}|)^N \le \mathrm{e}^{T|q_{\min}|}, \quad 0 \le s \le N,$$

and the notation introduced in (28), we obtain the following upper bound for the numerical solution of the sampled scheme (18)

$$0 \le u_i^n(\omega) \le \mathrm{e}^{T|q_{\min}|}\, G(T), \quad \text{for a.e. } \omega \in \Omega, \tag{33}$$

for each level *n*, 0 ≤ *n* ≤ *N* = *T*/*k*, and for each spatial point *x<sub>i</sub>*, 0 ≤ *i* ≤ *M*.

Please note that both bounds (30) and (33) are independent of *n*, *h* and *k*.

Finally, under discretization step-size conditions (21), (23) and (24) and from the upper bounds (30) and (33) it follows that

$$\|u_i^n\|_p = \left(\mathbb{E}\left[|u_i^n|^p\right]\right)^{1/p} = \left(\int_{\Omega} |u_i^n(\omega)|^p f_{u_i^n}(\omega)\, \mathrm{d}\omega\right)^{1/p} \le \alpha(T)\, G(T) \underbrace{\left(\int_{\Omega} f_{u_i^n}(\omega)\, \mathrm{d}\omega\right)^{1/p}}_{1}, \tag{34}$$

where *<sup>G</sup>*(*T*) is defined in (28) and (29) and

$$\alpha(T) = \begin{cases} 1 & \text{if } q_{\min} \ge 0,\\ \mathrm{e}^{T|q_{\min}|} & \text{if } q_{\min} < 0. \end{cases} \tag{35}$$

Consequently, the random numerical scheme (16) is ‖·‖*<sub>p</sub>*-stable in the sense of Definition 1. Summarizing, the following result has been established.

**Theorem 1.** *With the previous notation, under conditions* (21)*,* (23) *and* (24) *on the discretization step-sizes h* = Δ*x and k* = Δ*t, the random numerical solution s.p.* {*u<sub>i</sub><sup>n</sup>*} *of the RMSFDS* (16) *for the random partial differential model* (3)*–*(11) *is positive for* 0 ≤ *i* ≤ *M at each time level* 0 ≤ *n* ≤ *N with T* = *kN. Furthermore, the RMSFDS* (16) *is* ‖·‖*<sub>p</sub>-stable in the fixed station sense taking the value*

$$C = \alpha(T)\, G(T),$$

*where the constants G*(*T*) *and α*(*T*) *are defined in* (28) *and* (35)*, respectively.*

To prove the consistency of the random finite difference scheme (16) with the random partial differential Equation (13), let us introduce the following definition, inspired by the well-known concept of consistency for deterministic PDEs, see [2].

**Definition 2.** *Let us consider a RMSFDS F*(*u<sub>i</sub><sup>n</sup>*) = 0 *for a random partial differential equation (RPDE)* L(*u*) = 0 *and let the local truncation error T<sub>i</sub><sup>n</sup>*(*U*(*ω*)) *for a fixed event ω* ∈ Ω *be defined by*

$$T_i^n(U(\omega)) = F(U_i^n(\omega)) - \mathcal{L}(U_i^n(\omega)),$$

*where U<sup>n</sup> <sup>i</sup>* (*ω*) *denotes the theoretical solution of* <sup>L</sup>(*u*)(*ω*) = <sup>0</sup> *evaluated at* (*xi*, *<sup>t</sup> n*)*. We call Tn <sup>i</sup>* (*U*) *by*

$$\|T_i^n(U)\|_p = \left(\mathbb{E}\left[|T_i^n(U)|^p\right]\right)^{1/p} = \left(\int_{\Omega} |T_i^n(U(\omega))|^p f_{T_i^n(U)}(\omega)\, \mathrm{d}\omega\right)^{1/p}.$$

*With the previous notation, the RMSFDS F*(*u<sub>i</sub><sup>n</sup>*) = 0 *is said to be* ‖·‖*<sub>p</sub>-consistent with the RPDE* L(*u*) = 0 *if*

$$\|T_i^n(U)\|_p \to 0 \ \text{ as } \ h = \triangle x \to 0, \; k = \triangle t \to 0.$$

The next result shows the consistency in the ‖·‖*<sub>p</sub>*-norm of the RMSFDS (16) with the RPDE (13).

**Theorem 2.** *The RMSFDS* (16) *is* ‖·‖*<sub>p</sub>-consistent with the RPDE* (13)*.*

**Proof.** Please note that for each fixed *ω* ∈ Ω the local truncation error, using (13) and (15), is given by

$$\begin{split}
T_i^n(U(\omega)) &= \frac{U_i^{n+1}(\omega) - U_i^n(\omega)}{k} - \frac{\partial U(\omega)}{\partial t}(x_i, t^n) - p_i \frac{U_{i-1}^n(\omega) - 2U_i^n(\omega) + U_{i+1}^n(\omega)}{h^2}\\
&\quad + p_i \frac{\partial^2 U(\omega)}{\partial x^2}(x_i, t^n) - p_i' \frac{U_{i+1}^n(\omega) - U_{i-1}^n(\omega)}{2h} + p_i' \frac{\partial U(\omega)}{\partial x}(x_i, t^n).
\end{split}$$

Assuming that *U*(*x*, *t*)(*ω*) is four times continuously differentiable with respect to *x* and twice continuously differentiable with respect to *t*, and using Taylor expansions of *U*(*x*, *t*)(*ω*) at (*x<sub>i</sub>*, *t<sup>n</sup>*), one gets

$$T_i^n(U(\omega)) = \frac{k}{2}\frac{\partial^2 U(\omega)}{\partial t^2}(x_i, \delta) - p_i \frac{h^2}{12}\frac{\partial^4 U(\omega)}{\partial x^4}(\eta_1, t^n) - p_i' \frac{h^2}{6}\frac{\partial^3 U(\omega)}{\partial x^3}(\eta_2, t^n), \tag{36}$$

where *t<sup>n</sup>* < *δ* < *t<sup>n+1</sup>*, *x<sub>i−1</sub>* < *η<sub>j</sub>* < *x<sub>i+1</sub>*, *j* = 1, 2.

Let us denote

$$E_1(i,n)(\omega) := \max\left\{\left|\frac{\partial^2 U(\omega)}{\partial t^2}(x_i, t)\right|, \ t^n < t < t^{n+1}\right\}, \tag{37}$$

$$E_2(i,n)(\omega) := \max\left\{\left|\frac{\partial^4 U(\omega)}{\partial x^4}(x, t^n)\right|, \ x_{i-1} < x < x_{i+1}\right\}, \tag{38}$$

$$E_3(i,n)(\omega) := \max\left\{\left|\frac{\partial^3 U(\omega)}{\partial x^3}(x, t^n)\right|, \ x_{i-1} < x < x_{i+1}\right\}. \tag{39}$$

As we are in the scenario of a finite degree of randomness and the involved random variables have a truncated range, there exist positive constants *D<sub>j</sub>*(*i*, *n*), *j* = 1, 2, 3, such that

$$E\_j(i, n)(\omega) \le D\_j(i, n), \ 1 \le j \le 3, \text{ a.e. } \omega \in \Omega. \tag{40}$$

From Definition 2, condition (7) and (36)–(40) it follows that

$$\begin{split}
\|T_i^n(U)\|_p &\le \left(\int_{\Omega}\left[D_1(i,n)\frac{k}{2} + \frac{h^2}{12}\left(p_i D_2(i,n) + 2|p_i'|\, D_3(i,n)\right)\right]^p f_{T_i^n(U)}(\omega)\, \mathrm{d}\omega\right)^{1/p}\\
&= \frac{k}{2}D_1(i,n) + \frac{h^2}{12}\left(p_i D_2(i,n) + 2|p_i'|\, D_3(i,n)\right) = O(k) + O(h^2). 
\end{split} \tag{41}$$
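The O(k) + O(h²) behaviour of the local truncation error can be observed numerically: halving *h* while quartering *k* should divide the residual F(U) − L(U) by roughly four. The fixed realization p(x) = 1 + x², q(x) = 1 and the smooth test function U(x, t) = e^(2x+t) below are illustrative assumptions for the check only, not the paper's data.

```python
import numpy as np

# Truncation residual of the discrete operator of (15) against the
# differential operator of (13), for one fixed smooth realization.
def truncation_residual(h, k, x=0.5, t=0.3):
    U = lambda x, t: np.exp(2.0 * x + t)            # illustrative smooth U
    Ut, Ux, Uxx = U(x, t), 2.0 * U(x, t), 4.0 * U(x, t)
    p, dp = 1.0 + x**2, 2.0 * x                     # p(x) = 1 + x^2, p'(x)
    fd = ((U(x, t + k) - U(x, t)) / k
          - p * (U(x - h, t) - 2.0 * U(x, t) + U(x + h, t)) / h**2
          - dp * (U(x + h, t) - U(x - h, t)) / (2.0 * h))
    exact = Ut - p * Uxx - dp * Ux                  # q-terms cancel in F - L
    return fd - exact

# Refining (h, k) -> (h/2, k/4) should shrink the residual by about 4.
r1 = truncation_residual(0.1, 0.01)
r2 = truncation_residual(0.05, 0.0025)
ratio = r1 / r2
```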

#### **4. Algorithm**

From a computational point of view, as commented on in the Introduction, handling the random scheme (16) in a direct way makes the computation of approximations beyond the first few temporal levels unaffordable. This is because, throughout the iterative temporal levels *n* = 1, · · · , *N*, it is necessary to store the symbolic expressions of all the previous levels of the iteration process, collecting big and complex random expressions with which the expectation and the standard deviation must be computed. Furthermore, even when the random expressions can be stored, it is not guaranteed that the two first statistical moments can be computed numerically. For this reason, we propose using the random numerical scheme (16) together with the Monte Carlo technique, avoiding the described computational drawbacks. The procedure is as follows: take a number *K* of realizations of the random data involved in the random PDE (3)–(6) according to their probability distributions; compute the numerical solution *u<sub>i</sub><sup>n</sup>*(*ω<sub>j</sub>*), *j* = 1, · · · , *K*, of the sampled deterministic difference schemes (18); obtain the mean and the standard deviation of these *K* numerical solutions evaluated at the mesh points *i* = 1, · · · , *M* − 1, at the last time level *N*, denoted respectively by

$$\mathbb{E}\_{\text{MC}}^{K}[\boldsymbol{u}\_{i}^{N}] = \mu \Big(\boldsymbol{u}\_{i}^{N}(\boldsymbol{\omega}\_{1}), \boldsymbol{u}\_{i}^{N}(\boldsymbol{\omega}\_{2}), \cdots, \boldsymbol{u}\_{i}^{N}(\boldsymbol{\omega}\_{K})\Big). \tag{42}$$

$$\sqrt{\mathrm{Var}_{\mathrm{MC}}^K[u_i^N]} = \sigma\left(u_i^N(\omega_1), u_i^N(\omega_2), \cdots, u_i^N(\omega_K)\right). \tag{43}$$

Algorithm 1 summarizes the steps to compute the stable approximations of the expectation and the standard deviation of the solution s.p., *u<sup>n</sup> <sup>i</sup>* , generated by means of the sampling difference scheme (18) and the MC method.

**Algorithm 1** Procedure to compute the approximations to the expectation and the standard deviation of the numerical solution $u_i^N$ of the problem (3)–(6).
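Since scheme (18) is stated earlier in the paper and not reproduced here, the following minimal Python sketch assumes a standard explicit central-difference discretization of problem (46)–(49); the helper names `fd_solve` and `mc_moments` are ours, and the routine takes the $K$ sampled pairs $(a_j, c_j)$ as input, mirroring the steps of Algorithm 1.

```python
import numpy as np

def fd_solve(a, c, h=0.05, k=0.002, T=0.1):
    """One realization of problem (46)-(49): explicit scheme for
    u_t = a e^{-x} u_xx - a e^{-x} u_x + c u, central differences in space."""
    M, N = round(1 / h), round(T / k)
    x = np.linspace(0.0, 1.0, M + 1)
    p = a * np.exp(-x)                     # diffusion coefficient p(x)
    u = 0.5 * np.exp(2 * x)                # initial condition f(x)
    for n in range(1, N + 1):
        uxx = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
        ux = (u[2:] - u[:-2]) / (2 * h)
        inner = u[1:-1] + k * (p[1:-1] * uxx - p[1:-1] * ux + c * u[1:-1])
        t = n * k
        u = np.concatenate((
            [np.exp(c * t) * (0.5 + a * t)],                   # boundary g1(t)
            inner,
            [np.exp(c * t) * (0.5 * np.e**2 + a * np.e * t)],  # boundary g2(t)
        ))
    return x, u

def mc_moments(samples, **kw):
    """Final step of Algorithm 1: sample mean (42) and standard
    deviation (43) of the K sampled solutions at the last time-level."""
    sols = np.array([fd_solve(a, c, **kw)[1] for a, c in samples])
    return sols.mean(axis=0), sols.std(axis=0)
```

With $h = 0.05$ and $k = 0.002$ the sketch respects the kind of explicit-stability restriction discussed in Section 3 ($k\,p_{\max}/h^2 < 1/2$ here); each realization is a purely numerical sweep, so no symbolic expressions accumulate.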


#### *4.1. Numerical Example*

To illustrate the efficiency of the proposed method, in this subsection we present a test example where the exact solution s.p. is available. We consider the problem (3)–(6) with the random coefficients

$$p(x) = a\,\mathrm{e}^{-x}, \quad q(x) = -c, \tag{44}$$

and the following boundary and initial conditions

$$g_1(t) = \mathrm{e}^{ct}\left(\frac{1}{2} + a\,t\right), \quad g_2(t) = \mathrm{e}^{ct}\left(\frac{\mathrm{e}^2}{2} + a\,\mathrm{e}\,t\right), \quad f(x) = \frac{\mathrm{e}^{2x}}{2}, \tag{45}$$

that is,

$$\frac{\partial u(x,t)}{\partial t} = a\,\mathrm{e}^{-x}\,\frac{\partial^2 u(x,t)}{\partial x^2} - a\,\mathrm{e}^{-x}\,\frac{\partial u(x,t)}{\partial x} + c\,u(x,t), \quad 0 < x < 1, \quad t > 0, \tag{46}$$

$$u(0,t) = \mathrm{e}^{ct}\left(\frac{1}{2} + a\,t\right), \quad t > 0, \tag{47}$$

$$u(1,t) = \mathrm{e}^{ct}\left(\frac{\mathrm{e}^2}{2} + a\,\mathrm{e}\,t\right), \quad t > 0, \tag{48}$$

$$u(x,0) = \frac{\mathrm{e}^{2x}}{2}, \quad 0 \le x \le 1, \tag{49}$$

where the r.v. $a$ follows a Gaussian distribution of mean $\mu = 0.5$ and standard deviation $\sigma = 0.1$ truncated on the interval $[0.4, 0.6]$, that is $a \sim \mathrm{N}_{[0.4,0.6]}(0.5; 0.1)$, and the r.v. $c > 0$ has a beta distribution of parameters $(2; 4)$ truncated on the interval $[0.45, 0.55]$, that is $c \sim \mathrm{Beta}_{[0.45,0.55]}(2; 4)$. Hereinafter, we assume that $a$ and $c$ are independent r.v.'s. Please note that $p(x)$ in (44) is a s.p. with one degree of randomness verifying condition (2), and $g_i(t)$, $i = 1, 2$, in (45) are s.p.'s with two degrees of randomness (because they involve both r.v.'s $a$ and $c$) verifying condition (2). Furthermore, all random input data $p(x)$, $q(x)$, $g_1(t)$, $g_2(t)$ and $f(x)$ lie in $L^2(\Omega)$, they are m.s. continuous, and $p(x)$ is m.s. differentiable too. In addition, conditions (7)–(11) are satisfied with

$$a_1 = 0.4\,\mathrm{e}^{-1}, \quad a_2 = 0.6\,\mathrm{e}^{0}, \quad -0.55 \le q(x,\omega) \le -0.45, \ \omega \in \Omega, \quad 0 \le f(x,\omega) \le 3.69453, \ \omega \in \Omega.$$
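The truncated random inputs described above can be drawn by simple rejection sampling; the sketch below (our hypothetical helpers, not the authors' code) produces realization pairs $(a_j, c_j)$ ready to feed the sampled deterministic schemes (18).

```python
import numpy as np

def sample_a(rng):
    """a ~ N(0.5, 0.1) truncated to [0.4, 0.6], by rejection."""
    while True:
        a = rng.normal(0.5, 0.1)
        if 0.4 <= a <= 0.6:
            return a

def sample_c(rng):
    """c ~ Beta(2, 4) truncated to [0.45, 0.55], by rejection."""
    while True:
        c = rng.beta(2.0, 4.0)
        if 0.45 <= c <= 0.55:
            return c

rng = np.random.default_rng(0)
draws = [(sample_a(rng), sample_c(rng)) for _ in range(2000)]
```

Rejection is cheap here: the normal proposal is accepted with probability about 68% (the truncation interval is $\mu \pm \sigma$), and roughly one in eight beta proposals lands in $[0.45, 0.55]$.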

From [18] (Section 3.8.5), the exact solution of problem (46)–(49), when both parameters $a$ and $c$ are deterministic, is given by

$$u(x,t) = \mathrm{e}^{ct}\left(a\,\mathrm{e}^{x}\,t + \frac{\mathrm{e}^{2x}}{2}\right). \tag{50}$$

In our context, both *a* and *c* are r.v.'s, and expression (50) must be interpreted as a s.p. Then, using the independence between r.v.'s *a* and *c*, the expectation and the standard deviation of s.p. (50) are given by

$$\mathbb{E}[u(x,t)] = \mathbb{E}\left[\mathrm{e}^{ct}\right]\left(\mathbb{E}[a]\,\mathrm{e}^{x}\,t + \frac{\mathrm{e}^{2x}}{2}\right), \tag{51}$$

$$\sqrt{\text{Var}[\boldsymbol{u}(\mathbf{x},t)]} \quad = \quad \sqrt{\mathbb{E}[\boldsymbol{(u}(\mathbf{x},t))^2] - \left(\mathbb{E}[\boldsymbol{u}(\mathbf{x},t)]\right)^2},\tag{52}$$

where

$$\mathbb{E}\left[(u(x,t))^2\right] = \mathbb{E}\left[\mathrm{e}^{2ct}\right]\left(\mathbb{E}[a^2]\,\mathrm{e}^{2x}\,t^2 + \mathbb{E}[a]\,\mathrm{e}^{3x}\,t + \frac{\mathrm{e}^{4x}}{4}\right). \tag{53}$$

Figure 1 shows the evolution of the expectation (51) and the standard deviation (52) and (53) of the exact solution s.p. (50) when both parameters *a* and *c* are considered as the r.v.'s described above.

**Figure 1.** (**Left**): Surface of the expectation of the exact solution (50), $\mathbb{E}[u(x_i, t^n)]$, computed according to (51). (**Right**): Surface of the standard deviation of the exact solution (50), $\sqrt{\text{Var}[u(x_i, t^n)]}$, computed according to (52) and (53). For both statistical moments the parameters considered in (50) are $a \sim \mathrm{N}_{[0.4,0.6]}(0.5; 0.1)$, $c \sim \mathrm{Beta}_{[0.45,0.55]}(2; 4)$, and the plotted domain corresponds to $(x_i = ih, t^n = nk) \in [0 + h = 0.0125, 1 - h = 0.9875] \times [0.05, 1]$ with the step-sizes $h = \Delta x = 0.0125$, $1 \le i \le 79$, and $k = \Delta t = 0.05$, $1 \le n \le 20$.
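The moments appearing in (51)–(53) can be evaluated numerically. A minimal sketch (our helper names; plain trapezoidal quadrature on the unnormalized truncated densities) is one way to reproduce the surfaces of Figure 1:

```python
import numpy as np

def trunc_mean(kernel, lo, hi, g, n=4001):
    """E[g(X)] for X with (unnormalized) density `kernel` on [lo, hi]."""
    x = np.linspace(lo, hi, n)
    w = kernel(x)
    trap = lambda y: np.sum((y[1:] + y[:-1]) * (x[1] - x[0])) / 2  # trapezoid rule
    return trap(g(x) * w) / trap(w)

# Unnormalized densities of a ~ N_{[0.4,0.6]}(0.5; 0.1) and c ~ Beta_{[0.45,0.55]}(2; 4)
ker_a = lambda s: np.exp(-0.5 * ((s - 0.5) / 0.1) ** 2)
ker_c = lambda s: s * (1 - s) ** 3

def exact_mean(x, t):
    """Expectation (51) of the exact solution s.p. (50)."""
    Ea = trunc_mean(ker_a, 0.4, 0.6, lambda a: a)
    Eect = trunc_mean(ker_c, 0.45, 0.55, lambda c: np.exp(c * t))
    return Eect * (Ea * np.exp(x) * t + 0.5 * np.exp(2 * x))

def exact_std(x, t):
    """Standard deviation (52), using the second moment (53)."""
    Ea = trunc_mean(ker_a, 0.4, 0.6, lambda a: a)
    Ea2 = trunc_mean(ker_a, 0.4, 0.6, lambda a: a ** 2)
    Ee2ct = trunc_mean(ker_c, 0.45, 0.55, lambda c: np.exp(2 * c * t))
    Eu2 = Ee2ct * (Ea2 * np.exp(2 * x) * t**2 + Ea * np.exp(3 * x) * t
                   + 0.25 * np.exp(4 * x))
    var = Eu2 - exact_mean(x, t) ** 2
    return np.sqrt(np.maximum(var, 0.0))   # clip tiny negative rounding at t = 0
```

Note that the symmetric truncation of the Gaussian gives $\mathbb{E}[a] = 0.5$ exactly, and the variance vanishes at $t = 0$ because the initial condition $f(x)$ is deterministic.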

Numerical convergence of the expectation and the standard deviation of the approximate solution s.p., generated by means of the sampling difference scheme (18) using the Monte Carlo (MC) technique shown in Algorithm 1, is illustrated in the following way. In the first study, with a fixed time $T$, we have chosen the spatial and temporal step-sizes $h$ and $k$, respectively, according to the stability conditions (21) and (23), and we have varied the number of realizations, $K$, of the r.v.'s $a$ and $c$ involved in the random problem (46)–(49). Then, at the temporal level $N$ where the time $T$ is achieved, we have computed the expectation (mean), $\mathbb{E}_{\text{MC}}^{K}[u_i^N]$, and the standard deviation, $\sqrt{\text{Var}_{\text{MC}}^{K}[u_i^N]}$, of the $K$ deterministic solutions, $u_i^N$, obtained by solving the $K$ deterministic difference schemes (18). Table 1 collects the RMSEs (Root Mean Square Errors) computed at the time instant $T = Nk = 1$ with the temporal step-size $k = 0.0001$ ($N = 10{,}000$) for $M - 1 = 79$ internal spatial points $x_i = ih$, $1 \le i \le 79$, with $h = \Delta x = 0.0125$ in the domain $[0.0125, 0.9875]$, using the following expressions

$$\text{RMSE}\left[\mathbb{E}_{\text{MC}}^{K}[u_{i}^{N}]\right] = \sqrt{\frac{1}{M-1}\sum_{i=1}^{M-1}\left(\mathbb{E}[u(x_i, t^N)] - \mathbb{E}_{\text{MC}}^{K}[u_{i}^{N}]\right)^2}, \tag{54}$$

$$\text{RMSE}\left[\sqrt{\text{Var}_{\text{MC}}^{K}[u_{i}^{N}]}\right] = \sqrt{\frac{1}{M-1}\sum_{i=1}^{M-1}\left(\sqrt{\text{Var}[u(x_i, t^N)]} - \sqrt{\text{Var}_{\text{MC}}^{K}[u_{i}^{N}]}\right)^2}, \tag{55}$$

where $\mathbb{E}[u(x_i, t^N)]$ and $\sqrt{\text{Var}[u(x_i, t^N)]}$ are given by (51)–(53), respectively.

A good behaviour of both approximations, the expectation and the standard deviation, was observed as the number of simulations $K$ increases. That is, the accuracy of the approximations to both statistical moments improves as the number of MC simulations grows. In this sense, Figure 2 reflects the improvement of the approximations through the study of the relative errors computed by the expressions

$$\text{RelError}\left[\mathbb{E}_{\text{MC}}^{K}[u_{i}^{N}]\right] = \left|\frac{\mathbb{E}[u(x_i, t^N)] - \mathbb{E}_{\text{MC}}^{K}[u_{i}^{N}]}{\mathbb{E}[u(x_i, t^N)]}\right|, \tag{56}$$

$$\text{RelError}\left[\sqrt{\text{Var}_{\text{MC}}^{K}[u_{i}^{N}]}\right] = \left|\frac{\sqrt{\text{Var}[u(x_i, t^N)]} - \sqrt{\text{Var}_{\text{MC}}^{K}[u_{i}^{N}]}}{\sqrt{\text{Var}[u(x_i, t^N)]}}\right|. \tag{57}$$
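For reference, the error measures (54)–(57) amount to a few lines of code; the sketch below uses our own function names.

```python
import numpy as np

def rmse(exact, approx):
    """Root mean square error over the interior nodes, as in (54) and (55)."""
    exact, approx = np.asarray(exact, float), np.asarray(approx, float)
    return float(np.sqrt(np.mean((exact - approx) ** 2)))

def rel_error(exact, approx):
    """Pointwise relative errors, as in (56) and (57)."""
    exact, approx = np.asarray(exact, float), np.asarray(approx, float)
    return np.abs((exact - approx) / exact)
```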

**Table 1.** Root mean square errors (RMSEs) at $T = Nk = 1$ with $k = 0.0001$ ($N = 10{,}000$) for the numerical expectation and the numerical standard deviation computed after solving the $K$ deterministic difference schemes (18) for several Monte Carlo (MC) realizations $K \in \{50, 200, 800, 3200, 12{,}800\}$. The spatial discretization has been considered on the domain $[0 + h = 0.0125, 1 - h = 0.9875]$ with $x_i = ih$, $1 \le i \le 79$, $h = 0.0125$.


Table 2 shows the second, complementary study, where we have fixed the number of MC simulations at $K = 1600$ and refined the step-sizes $h$ and $k$ according to the stability conditions (21) and (23). A decrease of the RMSEs of the expectation (54) and an apparent stabilization of the RMSEs of the standard deviation (55) are observed. Computations have been carried out using Mathematica© software version 12.0.0.0 [21] for Windows 10 Pro (64-bit) on an AMD Ryzen Threadripper 2990WX, 3.00 GHz, 32 kernels. The CPU times (in seconds) spent in the Wolfram Language kernel to compute, in both experiments, the expectation (mean) and the standard deviation are included in Tables 3 and 4. As a result, a good strategy to study the convergence of the approximations consists of choosing step-sizes $h$ and $k$ verifying the stability conditions and taking a number of realizations $K$ big enough that the error does not vary significantly when the number of realizations is increased. For problems with no available exact solution, the error is replaced by the deviation between two successive numerical solutions.
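This stopping strategy, increasing $K$ until two successive estimates agree, can be sketched generically; `stabilized_mean` is a hypothetical helper, with the `sampler` argument standing in for one full run of the sampled difference scheme (18) at a mesh point.

```python
import numpy as np

def stabilized_mean(sampler, K0=100, tol=0.05, max_doublings=10, seed=0):
    """Double the number of MC realizations until two successive
    estimates of the mean differ by less than `tol`."""
    rng = np.random.default_rng(seed)
    K = K0
    prev = np.mean([sampler(rng) for _ in range(K)])
    for _ in range(max_doublings):
        K *= 2
        cur = np.mean([sampler(rng) for _ in range(K)])
        if abs(cur - prev) < tol:
            return cur, K
        prev = cur
    return prev, K
```

Because the MC error decays like $K^{-1/2}$, doubling $K$ shrinks the successive deviation slowly; in practice one fixes $h$ and $k$ first, as above, so that the discretization error is already below the target tolerance.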


**Figure 2.** Plot (**a**): Relative errors of the approximations to the expectation (mean), $\mathbb{E}_{\text{MC}}^{K}[u_i^N]$, (56). Plot (**b**): Relative errors of the approximations to the standard deviation, $\sqrt{\text{Var}_{\text{MC}}^{K}[u_i^N]}$, (57). For both graphics the fixed time horizon is $T = 1 = Nk$ with the temporal step-size $k = 0.0001$ ($N = 10{,}000$), the spatial domain is $x_i \in [0 + h, 1 - h]$ with $x_i = ih$, $h = 0.0125$, and the number of MC simulations varies, $K \in \{50, 3200, 12{,}800\}$.

**Table 2.** RMSEs at $T = 1$ and $K = 1600$ (MC simulations) for the expectation (54) and the standard deviation (55). The considered step-sizes $h$ and $k$ verify the stability conditions (21) and (23). $T = Nk = 1$, $N \in \{125, 500, 2000, 8000\}$; the spatial domain is $[0 + h, 1 - h]$ considering $M - 1$ internal points $x_i = ih$, $1 \le i \le M - 1$, with $M = 1/h$.


**Table 3.** CPU time (in seconds) spent to compute the approximations to the expectation (mean), $\mathbb{E}_{\text{MC}}^{K}$, and the standard deviation, $\sqrt{\text{Var}_{\text{MC}}^{K}}$, in Table 1, for a fixed time horizon $T = 1$ and the step-sizes $h = 0.0125$ and $k = 0.0001$, while the number of MC simulations, $K$, varies.


The use of the MC method has allowed the obtainment of approximations to the expectation and the standard deviation of the solution s.p. $u_i^N$ of the random difference scheme (16) at a time horizon $T = Nk$ for $N$ not necessarily small. However, if we use the random numerical scheme (16) directly in this example with the step-sizes $h = 0.05$ ($M = 20$) and $k = 0.002$, verifying the stability conditions (21) and (23), troubles appear at the early time-level $n = 3$, which corresponds to time $t^n = 0.006$. In this case the symbolic expressions for the random numerical solution $u_i^n$ and $(u_i^n)^2$ for $n = 3$ are available, and so are their corresponding expectations. However, the Mathematica© software cannot compute the numerical expectation of $(u_i^n)^2$ for $2 \le i \le 18$, $n = 3$; in consequence, it is not possible to compute the approximation of the standard deviation for these internal points at $t^n = 0.006$, and hence at any later time.

**Table 4.** CPU time (in seconds) spent to compute the approximations to the expectation (mean), $\mathbb{E}_{\text{MC}}^{K}$, and the standard deviation, $\sqrt{\text{Var}_{\text{MC}}^{K}}$, in Table 2, for a fixed time horizon $T = 1$ and $K = 1600$ MC simulations, while varying the temporal step-size $k$ and the spatial step-size $h$ in the domain $[0 + h, 1 - h]$.


#### **5. Conclusions**

The main target of this paper is to solve the challenge of storage accumulation, and the consequent computational breakdown, when dealing with FD methods for solving random PDEM. Our approach is based on a combination of the Monte Carlo method and the solution of sampled deterministic problems using explicit FD schemes. Explicitness is necessary to compute the statistical moments of the approximate solution, which rules out the implicit FD methods. We use here the classic explicit difference method, but the Crank–Nicolson semi-implicit approach could be tried after an ad hoc analysis. Numerical analysis provides sufficient conditions for positivity, stability and consistency of the proposed RMSFDS. Numerical experiments illustrate the reliability of the approach.

**Author Contributions:** M.C.C., R.C. and L.J. contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was supported by the Spanish Ministerio de Economía, Industria y Competitividad (MINECO), the Agencia Estatal de Investigación (AEI) and Fondo Europeo de Desarrollo Regional (FEDER UE) grant MTM2017-89664-P.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**

