Spatial Discretization for Stochastic Semilinear Superdiffusion Driven by Fractionally Integrated Multiplicative Space–Time White Noise

Department of Physical, Mathematical and Engineering Sciences, University of Chester, Chester CH1 4BJ, UK
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Foundations 2023, 3(4), 763-787; https://doi.org/10.3390/foundations3040043
Submission received: 29 September 2023 / Revised: 27 November 2023 / Accepted: 1 December 2023 / Published: 6 December 2023

Abstract

We investigate the spatial discretization of a stochastic semilinear superdiffusion problem driven by fractionally integrated multiplicative space–time white noise. The noise is white in both space and time, and the time fractional derivative is taken in the Caputo sense with order α ∈ (1, 2). A spatial discretization scheme is introduced by approximating the space–time white noise with the Euler method in the spatial direction and approximating the second-order space derivative with the central difference scheme. Using Green's functions, we obtain both exact and approximate solutions of the proposed problem. The regularities of both the exact and approximate solutions are studied, and optimal error estimates depending on the smoothness of the initial values are established.

1. Introduction

Consider the following stochastic semilinear superdiffusion equation driven by fractionally integrated multiplicative space–time white noise, with $1 < \alpha < 2$ and $0 \le \gamma \le 1$:

$$
\left\{
\begin{aligned}
& {}_{0}^{C}D_{t}^{\alpha} u(t,x) - \frac{\partial^{2} u(t,x)}{\partial x^{2}} = F(u(t,x)) + {}_{0}D_{t}^{-\gamma}\Big[ g(u(t,x))\, \frac{\partial^{2} W(t,x)}{\partial t\,\partial x}\Big], \quad 0<x<1,\ t>0, \\
& u(t,0) = u(t,1) = 0, \quad t>0, \\
& u(0,x) = u_{0}(x), \quad 0 \le x \le 1, \\
& \frac{\partial u}{\partial t}(0,x) = u_{1}(x), \quad 0 \le x \le 1,
\end{aligned}
\right. \tag{1}
$$
where ${}_{0}^{C}D_{t}^{\alpha}$ denotes the Caputo fractional derivative of order $1 < \alpha < 2$, and ${}_{0}D_{t}^{-\gamma}$ denotes the Riemann–Liouville fractional integral of order $0 \le \gamma \le 1$. Here, $u_{0}(x)$ and $u_{1}(x)$ are the given initial values. The nonlinear terms $F$ and $g$ satisfy the following global Lipschitz and linear growth conditions.
Assumption 1.
The nonlinear functions $F$ and $g$ satisfy [1,2]

$$
|F(v) - F(w)| + |g(v) - g(w)| \le C|v - w|, \quad v, w \in \mathbb{R}, \qquad
|F(v)| + |g(v)| \le C(1 + |v|), \quad v \in \mathbb{R}.
$$
Let $(\Omega, \mathcal{F}, \{\mathcal{F}_{t}\}_{t \ge 0}, \mathbb{P})$ be a stochastic basis carrying an $\mathcal{F}_{t}$-adapted Brownian sheet $W = \{W(t,x) : t \ge 0,\ x \in \mathbb{R}_{+}\}$, that is, a zero-mean Gaussian random field with covariance [1,3]

$$
\mathbb{E}\big[W(t,x)W(s,y)\big] = (t \wedge s)(x \wedge y), \tag{2}
$$

where $\mathbb{E}$ denotes the expectation and $a \wedge b = \min(a,b)$ for $a, b \in \mathbb{R}$.
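The covariance structure above can be simulated directly: on a uniform grid, a Brownian sheet is the double cumulative sum of independent Gaussian rectangle increments. The following sketch (our own illustration, not code from the paper) samples $W$ this way with NumPy.

```python
import numpy as np

# Illustrative sketch (not from the paper): sample a Brownian sheet W(t, x)
# on a uniform grid. Rectangle increments over cells of size dt x dx are
# independent N(0, dt * dx) variables, and W is their double cumulative sum,
# which reproduces the covariance E[W(t,x)W(s,y)] = min(t,s) * min(x,y).
def brownian_sheet(n_t, n_x, T, X, rng):
    dt, dx = T / n_t, X / n_x
    dW = rng.normal(0.0, np.sqrt(dt * dx), size=(n_t, n_x))
    W = np.zeros((n_t + 1, n_x + 1))
    W[1:, 1:] = dW.cumsum(axis=0).cumsum(axis=1)
    return W

rng = np.random.default_rng(0)
W = brownian_sheet(50, 50, 1.0, 1.0, rng)
print(W.shape)                       # (51, 51)
print(bool(np.all(W[0] == 0.0)))     # True: W(0, x) = 0
print(bool(np.all(W[:, 0] == 0.0)))  # True: W(t, 0) = 0
```

The boundary rows and columns remain zero because $W(0,x) = W(t,0) = 0$ almost surely under the covariance (2).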
The deterministic Equation (1) (i.e., g = 0 ) offers a suitable representation for attenuated wave propagation, which often exhibits a power law relationship between attenuation and frequency. This type of behavior is commonly observed in various scenarios, such as acoustic wave propagation in lossy media [4]. The exponent of the power law typically falls within the range of 1 to 2, indicating anomalous attenuation [5]. In physical systems, stochastic perturbations arise from diverse natural sources, and their influence cannot be disregarded. Therefore, it is necessary to study the model (1) both theoretically and numerically [6,7].
There are many theoretical and numerical studies of (1) when the noise $\frac{\partial^{2} W(t,x)}{\partial t\,\partial x}$ in (1) is white in time but smooth in space, that is,

$$
\frac{\partial^{2} W(t,x)}{\partial t\,\partial x} = \sum_{k=1}^{\infty} \sigma_{k}(t)\, \dot{\beta}_{k}(t)\, e_{k}(x), \tag{3}
$$
where $\sigma_{k}(t)$ is continuous and decays rapidly as $k$ increases, to ensure the convergence of the series, and $\dot{\beta}_{k}(t) = \frac{d\beta_{k}(t)}{dt}$ is the white noise obtained as the formal derivative of the standard Brownian motion $\beta_{k}(t)$, $k = 1, 2, \dots$ The eigenfunctions $e_{k}(x)$, $k = 1, 2, \dots$, are those of the Laplacian $\Delta = \frac{\partial^{2}}{\partial x^{2}}$ subject to homogeneous Dirichlet boundary conditions on the interval $[0,1]$. Li et al. [6] employed the Galerkin finite element method to study the linear problem (1) with $F = 0$ and $g = 1$, driven by the noise (3). Recently, Li et al. [8] explored the existence and uniqueness of solutions of (1) driven by the noise (3) using the Banach fixed-point theorem. Furthermore, Li et al. [7] investigated the Galerkin finite element method for (1) driven by the noise (3) and derived optimal error estimates. Zou [9] considered the semidiscrete finite element method for approximating (1) driven by the noise (3) with $F = 0$ and established optimal error estimates. More recently, Egwu and Yan [10] studied the existence and uniqueness of solutions of (1) driven by the noise (3), and they also considered the finite element approximation of the regularized equation of (1).
Numerous theoretical results have been established for the stochastic subdiffusion equation with a time fractional order $\alpha \in (0,1)$. Anh et al. [11] investigated the existence and uniqueness of the solution of the space–time fractional stochastic equation in the multi-dimensional case. Anh et al. [12] introduced a variation-of-constants formula for the Caputo fractional stochastic differential equation. Chen [13] studied the moments, Hölder regularity, and intermittency of the solution of a nonlinear stochastic subdiffusion problem on $\mathbb{R}$. Chen et al. [14] provided insights into the nonlinear stochastic subdiffusion equation on $\mathbb{R}^{d}$, $d = 1, 2, 3$. Additionally, Chen et al. [15] addressed the existence, uniqueness, and regularity of the solution of the stochastic subdiffusion problem on $\mathbb{R}^{d}$ for $d = 1, 2, 3$. Various numerical approximations for the stochastic subdiffusion problem with a time fractional order $\alpha \in (0,1)$ have also been proposed. Al-Maskari and Karaa [16] explored strong convergence rates for approximating a stochastic time fractional Allen–Cahn equation. Dai et al. [17] discussed the Mittag–Leffler Euler integrator for solving the stochastic space–time fractional diffusion equation. Gunzburger et al. [18,19] investigated the application of the finite element method for approximating the stochastic partial integro-differential equation driven by white noise. Kang et al. [20] developed a Galerkin finite element method to approximate a stochastic semilinear subdiffusion equation with fractionally integrated additive noise. Additionally, Wu et al. [21] explored the L1 scheme for solving the stochastic subdiffusion equation driven by integrated space–time white noise.
To the best of our knowledge, there is currently no numerical method for approximating (1) when the noise $\frac{\partial^{2} W(t,x)}{\partial t\,\partial x}$ is given by (2), that is, when the noise is white in both space and time. The objective of this paper is to bridge this gap by proposing a finite difference scheme for (1). Our approach approximates the noise term $\frac{\partial^{2} W(t,x)}{\partial t\,\partial x}$ using the Euler method in the spatial direction and approximates the Laplacian $\Delta = \frac{\partial^{2}}{\partial x^{2}}$ using the central difference scheme. By using Green's functions, we obtain solutions for both (1) and its corresponding finite difference scheme. The regularity of the approximate solution is investigated, and detailed error estimates in the maximum norm are derived. We observe that the spatial convergence order decreases as $\alpha$ traverses the range from 1 to 2, due to the white noise in the spatial direction. In a future paper, we shall consider the time discretization, and we expect that, as $\alpha$ traverses the range from 1 to 2, the time convergence orders will increase, as in the subdiffusion case with $\alpha \in (0,1)$ [22].
The methodology employed in this paper shares similarities with the work of Gyöngy [1], which considered the spatial discretization of (1) with $\alpha = 1$, and the work of Wang et al. [22], which examined the spatial discretization of the subdiffusion analogue of (1) with $\alpha \in (0,1)$. However, the error estimates in our paper are more involved than those in [1,22] because of the additional initial value $u_{1}$, which requires estimates related to the initial condition $\frac{\partial u}{\partial t}(0,x) = u_{1}(x)$.
Let $0 = x_{0} < x_{1} < \dots < x_{M-1} < x_{M} = 1$ be a partition of the space interval $[0,1]$, and let $\Delta x = \frac{1}{M}$ be the space step size. At the fixed point $x = x_{k}$, $k = 1, 2, \dots, M-1$, we approximate the second-order space derivative $\frac{\partial^{2} u(t,x)}{\partial x^{2}}$ by the central difference scheme

$$
\frac{\partial^{2} u(t,x)}{\partial x^{2}}\Big|_{x=x_{k}} \approx \frac{u(t,x_{k+1}) - 2u(t,x_{k}) + u(t,x_{k-1})}{\Delta x^{2}},
$$

and approximate the space–time white noise $\frac{\partial^{2} W(t,x)}{\partial t\,\partial x}$ in the spatial direction by the Euler method

$$
\frac{\partial^{2} W(t,x)}{\partial t\,\partial x}\Big|_{x=x_{k}} \approx \frac{d}{dt}\, \frac{W(t,x_{k+1}) - W(t,x_{k})}{\Delta x}.
$$
Let $u^{M}(t,x_{k})$, $k = 0, 1, \dots, M$, be the approximate solution of $u(t,x)$ at $x = x_{k}$. We define the following spatial semidiscrete scheme to approximate (1): for $k = 1, 2, \dots, M-1$,

$$
\left\{
\begin{aligned}
& {}_{0}^{C}D_{t}^{\alpha} u^{M}(t,x_{k}) - \frac{u^{M}(t,x_{k+1}) - 2u^{M}(t,x_{k}) + u^{M}(t,x_{k-1})}{\Delta x^{2}} \\
&\qquad = F\big(u^{M}(t,x_{k})\big) + {}_{0}D_{t}^{-\gamma}\Big[ g\big(u^{M}(t,x_{k})\big)\, \frac{d}{dt}\, \frac{W(t,x_{k+1}) - W(t,x_{k})}{\Delta x}\Big], \quad t>0, \\
& u^{M}(t,0) = u^{M}(t,1) = 0, \quad t \ge 0, \\
& u^{M}(0,x_{k}) = u_{0}(x_{k}), \qquad \frac{d u^{M}}{dt}(0,x_{k}) = u_{1}(x_{k}).
\end{aligned}
\right. \tag{4}
$$
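The two discretization ingredients can be sketched as follows (our own illustration, not the authors' code): the $(M-1)\times(M-1)$ central difference matrix for $-\partial^{2}/\partial x^{2}$ with homogeneous Dirichlet boundary conditions, and one sample of the scaled spatial noise increments $(W(t,x_{k+1}) - W(t,x_{k}))/\Delta x$.

```python
import numpy as np

# Illustrative sketch of the ingredients of scheme (4) (our own code, not
# the authors'): the central difference matrix approximating -d^2/dx^2
# with homogeneous Dirichlet boundary conditions, and one sample of the
# scaled spatial noise increments (W(t, x_{k+1}) - W(t, x_k)) / dx.
def difference_matrix(M):
    dx = 1.0 / M
    A = 2.0 * np.eye(M - 1) - np.eye(M - 1, k=1) - np.eye(M - 1, k=-1)
    return A / dx**2

def spatial_noise_increments(M, t, rng):
    # W(t, x_{k+1}) - W(t, x_k) ~ N(0, t * dx), independent over k; the
    # scaled increment has variance t / dx, which grows as dx -> 0 and is
    # the source of the reduced spatial convergence order.
    dx = 1.0 / M
    return rng.normal(0.0, np.sqrt(t * dx), size=M - 1) / dx

M = 8
dx = 1.0 / M
A = difference_matrix(M)
# Check against a known discrete eigenpair: the vector sin(pi * x_k)
# satisfies A v = lam v with lam = (4 / dx^2) * sin(pi * dx / 2)^2.
x = np.linspace(dx, 1.0 - dx, M - 1)
v = np.sin(np.pi * x)
lam = (4.0 / dx**2) * np.sin(np.pi * dx / 2.0) ** 2
print(bool(np.allclose(A @ v, lam * v)))  # True
```

The eigenpair check confirms that the matrix reproduces the well-known discrete Dirichlet eigenfunctions $\sin(j\pi x_{k})$.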
This paper is primarily focused on establishing the convergence rates of $\sup_{x \in [0,1]} \mathbb{E}\big|u^{M}(t,x) - u(t,x)\big|^{2}$ for varying degrees of smoothness of the initial data $u_{0}$ and $u_{1}$. The key findings are presented in Theorems 1 and 2 in Section 4.
This paper is organized as follows. Section 2 focuses on the continuous problem. Section 3 considers the spatial discretization. Section 4 examines the error estimates in the maximum norm in space. Lastly, in Appendix A, we present error estimates of the Green functions.
Throughout this paper, $C$ denotes a generic constant, independent of the step size $\Delta x$, which may differ at different occurrences. Additionally, $\epsilon > 0$ always denotes an arbitrarily small positive constant.

2. Continuous Problem

In this section, our objective is to determine the expression for the solution u of Equation (1) and study its spatial regularity.
Let $\{\lambda_{j}, e_{j}\}$, $j = 1, 2, \dots$, be the eigenvalues and eigenfunctions of the operator $A = -\frac{\partial^{2}}{\partial x^{2}}$ with $D(A) = H_{0}^{1}(0,1) \cap H^{2}(0,1)$, so that $A e_{j} = \lambda_{j} e_{j}$ for $j = 1, 2, \dots$ Let $E_{\alpha,\beta}(z)$ be the two-parameter Mittag–Leffler function defined, for $z \in \mathbb{C}$, by

$$
E_{\alpha,\beta}(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k + \beta)}, \quad \alpha \in (1,2),\ \beta \in \mathbb{R}.
$$
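The series definition above translates directly into code. The following minimal sketch (our own illustration) evaluates the truncated series and verifies it against two classical special cases.

```python
import math

# A minimal sketch (our own, for illustration) of the two-parameter
# Mittag-Leffler function via its truncated power series. The series
# converges for every z, but naive truncation is only reliable for
# moderate |z|; serious computations use integral representations.
def mittag_leffler(alpha, beta, z, terms=50):
    return sum(z**k / math.gamma(alpha * k + beta) for k in range(terms))

# Sanity checks against classical special cases:
# E_{1,1}(z) = e^z and E_{2,1}(-z^2) = cos(z).
print(abs(mittag_leffler(1.0, 1.0, 0.5) - math.exp(0.5)) < 1e-12)    # True
print(abs(mittag_leffler(2.0, 1.0, -0.25) - math.cos(0.5)) < 1e-12)  # True
```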
It is well known that the Mittag–Leffler function $E_{\alpha,\beta}(z)$ satisfies the following asymptotic estimates ([23], Theorem 1.6) and ([24], (1.8.28)): for $\alpha \in (1,2)$, $\beta \in \mathbb{R}$, and $\frac{\pi\alpha}{2} < \mu < \min(\pi, \alpha\pi)$,

$$
|E_{\alpha,\beta}(z)| \le \frac{C}{1+|z|}, \quad \mu \le |\arg(z)| \le \pi,
$$

$$
|E_{\alpha,\alpha}(z)| \le \frac{C}{(1+|z|)^{2}}, \quad \mu \le |\arg(z)| \le \pi.
$$
The following differential formulae are used frequently in this paper.
Lemma 1
([23], (1.83)). Let $\alpha \in (1,2)$ and $\gamma \in [0,1]$. There hold, for $\lambda > 0$,

$$
\begin{aligned}
\frac{d}{dt}\, E_{\alpha,1}(-t^{\alpha}\lambda) &= -\lambda t^{\alpha-1} E_{\alpha,\alpha}(-t^{\alpha}\lambda), \\
\frac{d}{dt}\big[t\, E_{\alpha,2}(-t^{\alpha}\lambda)\big] &= E_{\alpha,1}(-t^{\alpha}\lambda), \\
\frac{d}{dt}\big[t^{\alpha+\gamma-1} E_{\alpha,\alpha+\gamma}(-t^{\alpha}\lambda)\big] &= t^{\alpha+\gamma-2} E_{\alpha,\alpha+\gamma-1}(-t^{\alpha}\lambda).
\end{aligned}
$$
Lemma 2.
Assume that Assumption 1 holds. Further assume that the initial values $u_{0}, u_{1} \in C[0,1]$. Then, (1) has the following mild solution:

$$
\begin{aligned}
u(t,x) = {} & \int_{0}^{1} G_{1}(t,x,y)\, u_{0}(y)\,dy + \int_{0}^{1} G_{2}(t,x,y)\, u_{1}(y)\,dy \\
& + \int_{0}^{t}\!\!\int_{0}^{1} G_{3}(t-s,x,y)\, F(u(s,y))\,dy\,ds + \int_{0}^{t}\!\!\int_{0}^{1} G_{4}(t-s,x,y)\, g(u(s,y))\,dW(s,y),
\end{aligned} \tag{8}
$$

where

$$
\begin{aligned}
G_{1}(t,x,y) &:= \sum_{j=1}^{\infty} E_{\alpha,1}(-t^{\alpha}\lambda_{j})\, e_{j}(x) e_{j}(y), &
G_{2}(t,x,y) &:= \sum_{j=1}^{\infty} t\, E_{\alpha,2}(-t^{\alpha}\lambda_{j})\, e_{j}(x) e_{j}(y), \\
G_{3}(t,x,y) &:= \sum_{j=1}^{\infty} t^{\alpha-1} E_{\alpha,\alpha}(-t^{\alpha}\lambda_{j})\, e_{j}(x) e_{j}(y), &
G_{4}(t,x,y) &:= \sum_{j=1}^{\infty} t^{\alpha+\gamma-1} E_{\alpha,\alpha+\gamma}(-t^{\alpha}\lambda_{j})\, e_{j}(x) e_{j}(y),
\end{aligned}
$$

respectively.
Proof. 
Assume that the solution of (1) has the form $u(t,x) = \sum_{j=1}^{\infty} e_{j}(x)\alpha_{j}(t)$ for some unknown functions $\alpha_{j}(t)$. Substituting this expansion into (1) yields

$$
{}_{0}^{C}D_{t}^{\alpha} \sum_{j=1}^{\infty} e_{j}(x)\alpha_{j}(t) + A \sum_{j=1}^{\infty} e_{j}(x)\alpha_{j}(t) = \sum_{j=1}^{\infty} (F(u(t)), e_{j})\, e_{j}(x) + {}_{0}D_{t}^{-\gamma} \sum_{j=1}^{\infty} (g(u(t)), e_{j})\, e_{j}(x),
$$

which implies, since $A e_{j} = \lambda_{j} e_{j}$ and $\{e_{j}\}_{j=1}^{\infty}$ is an orthonormal basis of $L^{2}(0,1)$, that

$$
{}_{0}^{C}D_{t}^{\alpha} \alpha_{j}(t) + \lambda_{j}\alpha_{j}(t) = (F(u(t)), e_{j}) + {}_{0}D_{t}^{-\gamma}(g(u(t)), e_{j}), \qquad \alpha_{j}(0) = (u_{0}, e_{j}),\quad \alpha_{j}'(0) = (u_{1}, e_{j}), \tag{11}
$$

where $\alpha_{j}'(t)$ denotes the derivative of $\alpha_{j}(t)$.
Let $F_{j}(t) = (F(u(t)), e_{j})$ and $g_{j}(t) = (g(u(t)), e_{j})$. Taking the Laplace transform of (11), we arrive at

$$
z^{\alpha}\hat{\alpha}_{j}(z) - z^{\alpha-1}\alpha_{j}(0) - z^{\alpha-2}\alpha_{j}'(0) + \lambda_{j}\hat{\alpha}_{j}(z) = \hat{F}_{j}(z) + z^{-\gamma}\hat{g}_{j}(z),
$$

which implies, by applying the inverse Laplace transform, that

$$
\begin{aligned}
\alpha_{j}(t) = {} & E_{\alpha,1}(-t^{\alpha}\lambda_{j})\,\alpha_{j}(0) + t E_{\alpha,2}(-t^{\alpha}\lambda_{j})\,\alpha_{j}'(0) + \int_{0}^{t} (t-s)^{\alpha-1} E_{\alpha,\alpha}\big(-(t-s)^{\alpha}\lambda_{j}\big)\,(F(u(s)), e_{j})\,ds \\
& + \int_{0}^{t} (t-s)^{\alpha+\gamma-1} E_{\alpha,\alpha+\gamma}\big(-(t-s)^{\alpha}\lambda_{j}\big)\,(g(u(s)), e_{j})\,ds.
\end{aligned}
$$
Thus, we have

$$
\begin{aligned}
u(t,x) = {} & \sum_{j=1}^{\infty} E_{\alpha,1}(-t^{\alpha}\lambda_{j})\, e_{j}(x)\alpha_{j}(0) + \sum_{j=1}^{\infty} t E_{\alpha,2}(-t^{\alpha}\lambda_{j})\, e_{j}(x)\alpha_{j}'(0) \\
& + \sum_{j=1}^{\infty} \int_{0}^{t} (t-s)^{\alpha-1} E_{\alpha,\alpha}\big(-(t-s)^{\alpha}\lambda_{j}\big)\,(F(u(s)), e_{j})\, e_{j}(x)\,ds \\
& + \sum_{j=1}^{\infty} \int_{0}^{t} (t-s)^{\alpha+\gamma-1} E_{\alpha,\alpha+\gamma}\big(-(t-s)^{\alpha}\lambda_{j}\big)\,(g(u(s)), e_{j})\, e_{j}(x)\,ds \\
= {} & \int_{0}^{1} G_{1}(t,x,y)\, u_{0}(y)\,dy + \int_{0}^{1} G_{2}(t,x,y)\, u_{1}(y)\,dy \\
& + \int_{0}^{t}\!\!\int_{0}^{1} G_{3}(t-s,x,y)\, F(u(s,y))\,dy\,ds + \int_{0}^{t}\!\!\int_{0}^{1} G_{4}(t-s,x,y)\, g(u(s,y))\,dW(s,y),
\end{aligned}
$$
which completes the proof of Lemma 2. □

2.1. The Spatial Regularity of the Solution u ( t , x ) Defined in (8)

In this subsection, we consider the spatial regularity of the solution $u(t,x)$ defined in (8). To do this, we split the solution into two parts, $u(t,x) = h(t,x) + n(t,x)$, where $h(t,x)$ is the solution of the homogeneous problem

$$
\left\{
\begin{aligned}
& {}_{0}^{C}D_{t}^{\alpha} h(t,x) - \frac{\partial^{2} h(t,x)}{\partial x^{2}} = 0, \quad 0<x<1,\ t>0, \\
& h(t,0) = h(t,1) = 0, \quad t>0, \\
& h(0,x) = u_{0}(x), \qquad \frac{\partial h}{\partial t}(0,x) = u_{1}(x), \quad 0 \le x \le 1,
\end{aligned}
\right. \tag{13}
$$

and $n(t,x)$ is the solution of the inhomogeneous problem

$$
\left\{
\begin{aligned}
& {}_{0}^{C}D_{t}^{\alpha} n(t,x) - \frac{\partial^{2} n(t,x)}{\partial x^{2}} = F(u(t,x)) + {}_{0}D_{t}^{-\gamma}\Big[ g(u(t,x))\, \frac{\partial^{2} W(t,x)}{\partial t\,\partial x}\Big], \quad 0<x<1,\ t>0, \\
& n(t,0) = n(t,1) = 0, \quad t>0, \\
& n(0,x) = 0, \qquad \frac{\partial n}{\partial t}(0,x) = 0, \quad 0 \le x \le 1.
\end{aligned}
\right. \tag{14}
$$
By Lemma 2, it follows that

$$
h(t,x) = \int_{0}^{1} G_{1}(t,x,y)\, u_{0}(y)\,dy + \int_{0}^{1} G_{2}(t,x,y)\, u_{1}(y)\,dy,
$$

and

$$
n(t,x) = \int_{0}^{t}\!\!\int_{0}^{1} G_{3}(t-s,x,y)\, F(u(s,y))\,dy\,ds + \int_{0}^{t}\!\!\int_{0}^{1} G_{4}(t-s,x,y)\, g(u(s,y))\,dW(s,y).
$$

2.1.1. The Spatial Regularity of the Homogeneous Problem (13) When $u_{0}, u_{1} \in C[0,1]$ and $u_{0}(0) = u_{0}(1) = 0$

Let $0 = y_{0} < y_{1} < \dots < y_{M-1} < y_{M} = 1$ be a partition of $[0,1]$ and $\Delta y = \frac{1}{M}$ be the step size. Denote by $\hat{k}_{M}(y)$ the piecewise constant function

$$
\hat{k}_{M}(y) =
\begin{cases}
y_{j}, & y_{j} \le y < y_{j+1}, \quad j = 0, 1, \dots, M-1, \\
y_{M}, & y = y_{M}.
\end{cases} \tag{15}
$$
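The grid projection defined in (15) is simply a floor operation on a uniform grid, as the following small sketch (our own illustration) shows.

```python
import math

# A sketch of the grid projection \hat{k}_M from (15): y is sent to the
# left node y_j = j / M of the cell containing it, and the right endpoint
# y = 1 is sent to y_M = 1.
def k_hat(y, M):
    if y >= 1.0:
        return 1.0
    return math.floor(y * M) / M

M = 4
print(k_hat(0.30, M))  # 0.25, since 0.30 lies in [0.25, 0.5)
print(k_hat(1.0, M))   # 1.0
```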
Lemma 3.
Let $h(t,x)$ be the solution of the homogeneous problem (13). Assume that $u_{0}, u_{1} \in C[0,1]$ and $u_{0}(0) = u_{0}(1) = 0$. Then, we have

$$
\mathbb{E}\big|h(t,y) - h\big(t,\hat{k}_{M}(y)\big)\big|^{2} \le C t^{-1+\epsilon}\big(\Delta x^{r_{1}} + \Delta x^{r_{2}}\big),
$$
where r 1 and r 2 are defined in (21).
Proof. 
Note that, by (13) and the Cauchy–Schwarz inequality,

$$
\begin{aligned}
\big|h(t,y) - h(t,\hat{k}_{M}(y))\big|^{2}
&\le C\Big|\int_{0}^{1} \big(G_{1}(t,y,z) - G_{1}(t,\hat{k}_{M}(y),z)\big)\, u_{0}(z)\,dz\Big|^{2} + C\Big|\int_{0}^{1} \big(G_{2}(t,y,z) - G_{2}(t,\hat{k}_{M}(y),z)\big)\, u_{1}(z)\,dz\Big|^{2} \\
&\le C\int_{0}^{1} \big|G_{1}(t,y,z) - G_{1}(t,\hat{k}_{M}(y),z)\big|^{2}\,dz \int_{0}^{1} |u_{0}(z)|^{2}\,dz + C\int_{0}^{1} \big|G_{2}(t,y,z) - G_{2}(t,\hat{k}_{M}(y),z)\big|^{2}\,dz \int_{0}^{1} |u_{1}(z)|^{2}\,dz.
\end{aligned}
$$

By Lemmas A1 and A4, we arrive at

$$
\mathbb{E}\big|h(t,y) - h\big(t,\hat{k}_{M}(y)\big)\big|^{2} \le C t^{-1+\epsilon}\big(\Delta x^{r_{1}} + \Delta x^{r_{2}}\big),
$$
which completes the proof of Lemma 3. □

2.1.2. The Spatial Regularity of the Homogeneous Problem (13) When $u_{0} \in C^{2}[0,1]$, $u_{0}(0) = u_{0}(1) = 0$, and $u_{1} \in C[0,1]$

Lemma 4.
Let $h(t,x)$ be the solution of the homogeneous problem (13). Assume that $u_{0} \in C^{2}[0,1]$, $u_{0}(0) = u_{0}(1) = 0$, and $u_{1} \in C[0,1]$. Then, we have

$$
\mathbb{E}\big|h(t,y) - h\big(t,\hat{k}_{M}(y)\big)\big|^{2} \le C \Delta x^{r_{3}} \|u_{0}\|_{C^{2}[0,1]}^{2} + C t^{-1+\epsilon} \Delta x^{r_{2}} \|u_{1}\|_{C[0,1]}^{2},
$$
where r 2 , r 3 are defined in (21).
Proof. 
The proof is similar to the proof of ([22], Lemma 4). We omit the proof here. □
Remark 1.
In Lemma 4, the error bound pertaining to $u_{0}$ improves from $\Delta x^{r_{1}}$ to $\Delta x^{r_{3}}$ for the smooth initial data $u_{0}$.

2.1.3. The Spatial Regularity of the Inhomogeneous Problem (14)

In this subsection, we consider the spatial regularity of the inhomogeneous problem (14).
Lemma 5.
Assume that Assumption 1 holds. Let $n(t,x)$ be the solution of (14). Then, we have

$$
\mathbb{E}\big|n(t,y) - n\big(t,\hat{k}_{M}(y)\big)\big|^{2} \le C\big(\Delta x^{r_{3}} + \Delta x^{r_{4}}\big),
$$
where r 3 and r 4 are defined in (21).
Proof. 
We only consider the estimate related to the nonlinear term F ( u ) . The estimate related to the multiplicative noise term g ( u ) can be obtained by using a similar method as that in the proof of ([22], Lemma 5).
Let $h_{1}(s,z) := F(u(s,z))$. It is easy to show that $\sup_{s,z} \mathbb{E}|h_{1}(s,z)|^{2} < \infty$. Further, let

$$
n_{1}(t,x) := \int_{0}^{t}\!\!\int_{0}^{1} G_{3}(t-s,x,z)\, h_{1}(s,z)\,dz\,ds.
$$

By Lemma A7, we arrive at

$$
\begin{aligned}
\mathbb{E}\big|n_{1}(t,y) - n_{1}(t,\hat{k}_{M}(y))\big|^{2}
&= \mathbb{E}\Big|\int_{0}^{t}\!\!\int_{0}^{1} \big(G_{3}(t-s,y,z) - G_{3}(t-s,\hat{k}_{M}(y),z)\big)\, h_{1}(s,z)\,dz\,ds\Big|^{2} \\
&\le C\, \mathbb{E}\int_{0}^{t}\!\!\int_{0}^{1} \big|G_{3}(t-s,y,z) - G_{3}(t-s,\hat{k}_{M}(y),z)\big|^{2}\, |h_{1}(s,z)|^{2}\,dz\,ds \\
&\le C\int_{0}^{t}\!\!\int_{0}^{1} \big|G_{3}(t-s,y,z) - G_{3}(t-s,\hat{k}_{M}(y),z)\big|^{2}\,dz\,ds \le C \Delta x^{r_{3}},
\end{aligned}
$$
where r 3 is defined in (21). The proof of Lemma 5 is complete. □

3. Spatial Discretization

In this section, we shall consider the expression of the approximate solution in (4) and study its regularity.
Lemma 6.
Assume that Assumption 1 holds. Let $u^{M}(t,x_{k})$, $k = 0, 1, \dots, M$, be the approximate solution given by (4), and assume that $u_{0}, u_{1} \in C[0,1]$. Let $u^{M}(t,x)$ be the piecewise linear continuous function satisfying $u^{M}(t,x)|_{x=x_{k}} = u^{M}(t,x_{k})$, $k = 0, 1, \dots, M$. Then, we have

$$
\begin{aligned}
u^{M}(t,x) = {} & \int_{0}^{1} G_{1}^{M}(t,x,y)\, u_{0}(\hat{k}_{M}(y))\,dy + \int_{0}^{1} G_{2}^{M}(t,x,y)\, u_{1}(\hat{k}_{M}(y))\,dy \\
& + \int_{0}^{t}\!\!\int_{0}^{1} G_{3}^{M}(t-s,x,y)\, F\big(u^{M}(s,\hat{k}_{M}(y))\big)\,dy\,ds \\
& + \int_{0}^{t}\!\!\int_{0}^{1} G_{4}^{M}(t-s,x,y)\, g\big(u^{M}(s,\hat{k}_{M}(y))\big)\, \frac{\partial^{2} W^{M}(s,y)}{\partial s\,\partial y}\,dy\,ds,
\end{aligned} \tag{16}
$$

where, with $\hat{k}_{M}(y)$, $y \in [0,1]$, defined by (15),

$$
\begin{aligned}
G_{1}^{M}(t,x,y) &:= \sum_{j=1}^{M-1} E_{\alpha,1}(-t^{\alpha}\lambda_{j})\, e_{j}^{M}(x)\, e_{j}(\hat{k}_{M}(y)), &
G_{2}^{M}(t,x,y) &:= \sum_{j=1}^{M-1} t\, E_{\alpha,2}(-t^{\alpha}\lambda_{j})\, e_{j}^{M}(x)\, e_{j}(\hat{k}_{M}(y)), \\
G_{3}^{M}(t,x,y) &:= \sum_{j=1}^{M-1} t^{\alpha-1} E_{\alpha,\alpha}(-t^{\alpha}\lambda_{j})\, e_{j}^{M}(x)\, e_{j}(\hat{k}_{M}(y)), &
G_{4}^{M}(t,x,y) &:= \sum_{j=1}^{M-1} t^{\alpha+\gamma-1} E_{\alpha,\alpha+\gamma}(-t^{\alpha}\lambda_{j})\, e_{j}^{M}(x)\, e_{j}(\hat{k}_{M}(y)),
\end{aligned}
$$

and $e_{j}^{M}(x)$ is the piecewise linear interpolant of $e_{j}(x)$ on the nodes $x_{m}$, $m = 0, 1, \dots, M$, that is, $e_{j}^{M}(x) = e_{j}(x_{m}) + \frac{e_{j}(x_{m+1}) - e_{j}(x_{m})}{\Delta x}(x - x_{m})$ for $x \in [x_{m}, x_{m+1}]$.
Proof. 
The proof is similar to the proof of ([22], Lemma 6). We omit the proof here. □
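The piecewise linear interpolant $e_{j}^{M}$ appearing in Lemma 6 can be formed with `np.interp`; as a quick sanity check (our own illustration, not the authors' code), the interpolation error for the Dirichlet eigenfunction $e_{1}(x) = \sqrt{2}\sin(\pi x)$ decays at the expected second-order rate.

```python
import numpy as np

# Illustration (our own) of the piecewise linear interpolant e_j^M used in
# Lemma 6, built with np.interp for e_j(x) = sqrt(2) sin(j pi x); the
# maximum interpolation error decays at the expected O(dx^2) rate.
def interp_error(j, M):
    nodes = np.linspace(0.0, 1.0, M + 1)
    fine = np.linspace(0.0, 1.0, 2001)
    e_nodes = np.sqrt(2.0) * np.sin(j * np.pi * nodes)
    exact = np.sqrt(2.0) * np.sin(j * np.pi * fine)
    return np.max(np.abs(np.interp(fine, nodes, e_nodes) - exact))

e8, e16 = interp_error(1, 8), interp_error(1, 16)
print(bool(e16 < e8))              # True: error decreases under refinement
print(bool(2.0 < e8 / e16 < 6.0))  # True: roughly the factor 4 of O(dx^2)
```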

3.1. The Spatial Regularity of the Solution $u^{M}(t,x)$ Defined in (16)

In this subsection, we consider the spatial regularity of the solution $u^{M}(t,x)$ defined in (16). To do this, we split the solution into two parts:

$$
u^{M}(t,x) = h^{M}(t,x) + n^{M}(t,x).
$$
Here, h M ( t , x ) and n M ( t , x ) are the homogeneous and inhomogeneous parts of u M ( t , x ) defined by
$$
\left\{
\begin{aligned}
& {}_{0}^{C}D_{t}^{\alpha} h^{M}(t,x) - \frac{\partial^{2} h^{M}(t,x)}{\partial x^{2}} = 0, \quad 0<x<1,\ t>0, \\
& h^{M}(t,0) = h^{M}(t,1) = 0, \quad t>0, \\
& h^{M}(0,x) = u_{0}^{M}(x), \qquad \frac{\partial h^{M}}{\partial t}(0,x) = u_{1}^{M}(x), \quad 0 \le x \le 1,
\end{aligned}
\right. \tag{17}
$$

and

$$
\left\{
\begin{aligned}
& {}_{0}^{C}D_{t}^{\alpha} n^{M}(t,x) - \frac{\partial^{2} n^{M}(t,x)}{\partial x^{2}} = F\big(u^{M}(t,x)\big) + {}_{0}D_{t}^{-\gamma}\Big[ g\big(u^{M}(t,x)\big)\, \frac{\partial^{2} W^{M}(t,x)}{\partial t\,\partial x}\Big], \quad 0<x<1,\ t>0, \\
& n^{M}(t,0) = n^{M}(t,1) = 0, \quad t>0, \\
& n^{M}(0,x) = 0, \qquad \frac{\partial n^{M}}{\partial t}(0,x) = 0, \quad 0 \le x \le 1,
\end{aligned}
\right. \tag{18}
$$

respectively. Here, $u_{0}^{M}(x)$ and $u_{1}^{M}(x)$ are the piecewise linear interpolants of $u_{0}(x)$ and $u_{1}(x)$ on the nodes $0 = x_{0} < x_{1} < \dots < x_{M} = 1$, respectively.
By Lemma 6, we have
$$
h^{M}(t,x) = \int_{0}^{1} G_{1}^{M}(t,x,y)\, u_{0}(\hat{k}_{M}(y))\,dy + \int_{0}^{1} G_{2}^{M}(t,x,y)\, u_{1}(\hat{k}_{M}(y))\,dy,
$$

and

$$
n^{M}(t,x) = \int_{0}^{t}\!\!\int_{0}^{1} G_{3}^{M}(t-s,x,y)\, F\big(u^{M}(s,\hat{k}_{M}(y))\big)\,dy\,ds + \int_{0}^{t}\!\!\int_{0}^{1} G_{4}^{M}(t-s,x,y)\, g\big(u^{M}(s,\hat{k}_{M}(y))\big)\, \frac{\partial^{2} W^{M}(s,y)}{\partial s\,\partial y}\,dy\,ds.
$$

3.1.1. The Spatial Regularity of the Homogeneous Problem (17) When $u_{0}, u_{1} \in C[0,1]$ and $u_{0}(0) = u_{0}(1) = 0$

Lemma 7.
Let $h^{M}(t,x)$ be the solution of the homogeneous problem (17). Assume that $u_{0}, u_{1} \in C[0,1]$ and $u_{0}(0) = u_{0}(1) = 0$. Then, we have

$$
\mathbb{E}\big|h^{M}(t,y) - h^{M}\big(t,\hat{k}_{M}(y)\big)\big|^{2} \le C t^{-1+\epsilon}\big(\Delta x^{r_{1}} + \Delta x^{r_{2}}\big),
$$
where r 1 and r 2 are defined by (21).
Proof. 
Note that
$$
\big|h^{M}(t,y) - h^{M}(t,\hat{k}_{M}(y))\big|^{2} \le C\Big|\int_{0}^{1} \big(G_{1}^{M}(t,y,z) - G_{1}^{M}(t,\hat{k}_{M}(y),z)\big)\, u_{0}(\hat{k}_{M}(z))\,dz\Big|^{2} + C\Big|\int_{0}^{1} \big(G_{2}^{M}(t,y,z) - G_{2}^{M}(t,\hat{k}_{M}(y),z)\big)\, u_{1}(\hat{k}_{M}(z))\,dz\Big|^{2}.
$$

Applying the Cauchy–Schwarz inequality, we have

$$
\big|h^{M}(t,y) - h^{M}(t,\hat{k}_{M}(y))\big|^{2} \le C\int_{0}^{1} \big|G_{1}^{M}(t,y,z) - G_{1}^{M}(t,\hat{k}_{M}(y),z)\big|^{2}\,dz \int_{0}^{1} \big|u_{0}(\hat{k}_{M}(z))\big|^{2}\,dz + C\int_{0}^{1} \big|G_{2}^{M}(t,y,z) - G_{2}^{M}(t,\hat{k}_{M}(y),z)\big|^{2}\,dz \int_{0}^{1} \big|u_{1}(\hat{k}_{M}(z))\big|^{2}\,dz,
$$

which implies, by Lemmas A2 and A5, that

$$
\mathbb{E}\big|h^{M}(t,y) - h^{M}\big(t,\hat{k}_{M}(y)\big)\big|^{2} \le C t^{-1+\epsilon}\big(\Delta x^{r_{1}} + \Delta x^{r_{2}}\big),
$$
where r 1 and r 2 are defined by (21). The proof of Lemma 7 is complete. □

3.1.2. The Spatial Regularity of the Homogeneous Problem (17) When $u_{0} \in C^{2}[0,1]$, $u_{0}(0) = u_{0}(1) = 0$, and $u_{1} \in C[0,1]$

Lemma 8.
Let $h^{M}(t,x)$ be the solution of the homogeneous problem (17). Assume that $u_{0} \in C^{2}[0,1]$, $u_{0}(0) = u_{0}(1) = 0$, and $u_{1} \in C[0,1]$. Then, we have

$$
\mathbb{E}\big|h^{M}(t,y) - h^{M}\big(t,\hat{k}_{M}(y)\big)\big|^{2} \le C \Delta x^{r_{3}} + C t^{-1+\epsilon} \Delta x^{r_{2}},
$$
where r 2 and r 3 are defined by (21).
Proof. 
The proof is similar to the proof of ([22], Lemma 8). We omit the proof here. □

3.1.3. The Spatial Regularity of the Inhomogeneous Problem (18)

In this subsection, we shall consider the spatial regularity of the inhomogeneous problem (18).
Lemma 9.
Assume that Assumption 1 holds. Let $n^{M}(t,x)$ be the solution of (18). Then, we have

$$
\mathbb{E}\big|n^{M}(t,y) - n^{M}\big(t,\hat{k}_{M}(y)\big)\big|^{2} \le C\big(\Delta x^{r_{3}} + \Delta x^{r_{4}}\big),
$$
where r 3 and r 4 are defined in (21).
Proof. 
The proof is similar to the proof of Lemma 5 above. We omit the proof here. □

4. Error Estimates

In this section, we prove the following two theorems, which provide the error estimates of the proposed numerical method for initial values $u_{0}$ of different smoothness.
Theorem 1.
Let $\alpha \in (1,2)$ and $\gamma \in [0,1]$. Assume that Assumption 1 holds. Let $u(t,x)$ and $u^{M}(t,x_{k})$, $k = 0, 1, \dots, M$, be the solutions of (1) and (4), respectively. Further assume that $u_{0}, u_{1} \in C^{1}[0,1]$ and $u_{0}(0) = u_{0}(1) = 0$. Let $\epsilon > 0$ be any small number.
1. 
If $F = 0$, then we have

$$
\sup_{x \in [0,1]} \mathbb{E}\big|u^{M}(t,x) - u(t,x)\big|^{2} \le C t^{-1+\epsilon}\big(\Delta x^{r_{1}} + \Delta x^{r_{2}}\big) + C\big(\Delta x^{r_{1}} + \Delta x^{r_{2}} + \Delta x^{r_{4}}\big).
$$
2. 
If $F \neq 0$, then we have

$$
\sup_{x \in [0,1]} \mathbb{E}\big|u^{M}(t,x) - u(t,x)\big|^{2} \le C t^{-1+\epsilon}\big(\Delta x^{r_{1}} + \Delta x^{r_{2}}\big) + C\big(\Delta x^{r_{1}} + \Delta x^{r_{2}} + \Delta x^{r_{3}} + \Delta x^{r_{4}}\big).
$$
Here,

$$
r_{1} = \frac{4(1-\epsilon)}{2\alpha} - 1, \qquad r_{2} = 2, \qquad r_{3} = 3 - \frac{2}{\alpha}, \qquad r_{4} = 3 - \frac{2(1-2\gamma)}{\alpha}. \tag{21}
$$
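As a quick numerical illustration (our own helper; the exact formulas coded below, $r_{1} = \frac{4(1-\epsilon)}{2\alpha} - 1$, $r_{2} = 2$, $r_{3} = 3 - \frac{2}{\alpha}$, $r_{4} = 3 - \frac{2(1-2\gamma)}{\alpha}$, are our reading of (21) and should be treated as assumptions), the dominant (smallest) exponent decreases as $\alpha$ moves from 1 to 2:

```python
# Our own helper; the formulas below follow our reading of (21) and should
# be treated as assumptions rather than the authors' verified statement.
def rates(alpha, gamma, eps=0.0):
    r1 = 4.0 * (1.0 - eps) / (2.0 * alpha) - 1.0
    r2 = 2.0
    r3 = 3.0 - 2.0 / alpha
    r4 = 3.0 - 2.0 * (1.0 - 2.0 * gamma) / alpha
    return r1, r2, r3, r4

# The dominant (smallest) exponent decreases as alpha moves from 1 to 2,
# consistent with the reduced spatial convergence order for larger alpha.
mins = [min(rates(a, gamma=0.0)) for a in (1.1, 1.5, 1.9)]
print(bool(mins[0] > mins[1] > mins[2]))  # True
```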
Remark 2.
The distinction between the error estimates for $F = 0$ and $F \neq 0$ in Theorem 1 lies in the presence of the additional term $\Delta x^{r_{3}}$, which originates from the estimate of the nonlinear term $F$.
Remark 3.
When the initial value $u_{0} \in C^{1}[0,1]$ with boundary conditions $u_{0}(0) = u_{0}(1) = 0$, the error is bounded by $\Delta x^{r_{1}}$, where $r_{1} = 4\big(\frac{1-\epsilon}{2\alpha}\big) - 1$, $\alpha \in (1,2)$. This error bound decreases as $\alpha$ transitions from 1 to 2. Regarding the time discretization, it is worth noting that the convergence order in time will increase as the order $\alpha$ transitions from 1 to 2, which we will report in our next paper.
If the initial value $u_{0}$ is smooth enough that $u_{0} \in C^{3}[0,1]$ with $u_{0}(0) = u_{0}(1) = 0$, then we have the following result.
Theorem 2.
Let $\alpha \in (1,2)$ and $\gamma \in [0,1]$. Assume that Assumption 1 holds. Let $u(t,x)$ and $u^{M}(t,x_{k})$, $k = 0, 1, \dots, M$, be the solutions of (1) and (4), respectively. Further assume that $u_{0} \in C^{3}[0,1]$, $u_{1} \in C^{1}[0,1]$, and $u_{0}(0) = u_{0}(1) = 0$. Let $\epsilon > 0$ be any small number. Then, we have

$$
\sup_{0 \le k \le M} \mathbb{E}\big|u^{M}(t,x_{k}) - u(t,x_{k})\big|^{2} \le C\big(\Delta x^{r_{2}} + \Delta x^{r_{3}} + \Delta x^{r_{4}}\big),
$$

where $r_{2}$, $r_{3}$, and $r_{4}$ are defined in (21).
Remark 4.
In this theorem, the error bounds remain identical whether $F = 0$ or $F \neq 0$. This uniformity arises because the error bound associated with the initial value $u_{0}$ improves from $\Delta x^{r_{1}}$ to $\Delta x^{r_{3}}$.
To prove Theorems 1 and 2, we need the following Grönwall lemma.
Lemma 10
([1], Lemma 3.4). Let $z : [0,T] \to \mathbb{R}_{+}$ be a function satisfying

$$
0 \le z(t) \le a + K \int_{0}^{t} (t-s)^{\sigma} z(s)\,ds,
$$

with constants $a \ge 0$, $K > 0$, and $\sigma > -1$. Then there exists a constant $C = C(\sigma, K, T)$ such that

$$
z(t) \le C a, \quad t \in [0,T].
$$
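The lemma can be illustrated numerically (this is a sketch of our own, not a proof): iterating the Volterra map associated with the integral inequality produces a fixed point that is bounded linearly in $a$.

```python
import numpy as np

# A numerical illustration (not a proof, our own sketch) of the weakly
# singular Gronwall lemma: iterate the Volterra map
#   z_{n+1}(t) = a + K * int_0^t (t - s)^sigma z_n(s) ds,  sigma = -1/2,
# on a grid. The fixed point is linear in a, i.e. bounded by C * a.
def fixed_point_max(a, K=1.0, sigma=-0.5, T=1.0, n=200, iters=40):
    t = np.linspace(0.0, T, n + 1)
    dt = T / n
    z = np.full(n + 1, a)
    for _ in range(iters):
        z_new = np.full(n + 1, a)
        for i in range(1, n + 1):
            # left-endpoint quadrature of the weakly singular integral
            z_new[i] = a + K * np.sum((t[i] - t[:i]) ** sigma * z[:i]) * dt
        z = z_new
    return z.max()

m1, m2 = fixed_point_max(1.0), fixed_point_max(2.0)
print(abs(m2 / m1 - 2.0) < 1e-10)  # True: the bound scales linearly in a
```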

4.1. Proof of Theorem 2

In this subsection, we will give the proof of Theorem 2.
Proof of Theorem 2.
We shall consider three cases.
Case 1. $F = 0$, $g = 0$. In this case, the solution $h(t,x)$ and the approximate solution $h^{M}(t,x)$ take the forms

$$
\begin{aligned}
h(t,x) &= \int_{0}^{1} G_{1}(t,x,y)\, u_{0}(y)\,dy + \int_{0}^{1} G_{2}(t,x,y)\, u_{1}(y)\,dy =: h_{1}(t,x) + h_{2}(t,x), \\
h^{M}(t,x) &= \int_{0}^{1} G_{1}^{M}(t,x,y)\, u_{0}(y)\,dy + \int_{0}^{1} G_{2}^{M}(t,x,y)\, u_{1}(y)\,dy =: h_{1}^{M}(t,x) + h_{2}^{M}(t,x),
\end{aligned}
$$
where the Green functions G j , j = 1 , 2 and G j M , j = 1 , 2 are defined by Lemmas 2 and 6, respectively.
Note that

$$
\begin{aligned}
h_{1}(t,x) &= u_{0}(x) + \int_{0}^{t}\!\!\int_{0}^{1} G_{3}(t-s,x,y)\, u_{0}''(y)\,dy\,ds, \\
h_{1}^{M}(t,x) &= u_{0}^{M}(x) + \int_{0}^{t}\!\!\int_{0}^{1} G_{3}^{M}(t-s,x,y)\, \frac{u_{0}\big(\hat{k}_{M}(y) + \frac{1}{M}\big) - 2 u_{0}\big(\hat{k}_{M}(y)\big) + u_{0}\big(\hat{k}_{M}(y) - \frac{1}{M}\big)}{\Delta x^{2}}\,dy\,ds,
\end{aligned}
$$

where $u_{0}^{M}(x)$ is the piecewise linear interpolant of $u_{0}(x)$ on the nodes $x_{k}$, $k = 0, 1, \dots, M$. Following the proof of ([22], (41)), we may show, noting that $u_{0} \in C^{3}[0,1]$,

$$
\mathbb{E}\big|h_{1}(t,x) - h_{1}^{M}(t,x)\big|^{2} \le C\big(\Delta x^{2} + \Delta x^{r_{3}}\big),
$$
where r 3 is defined in (21). Further, by Lemma A6, we have
$$
\mathbb{E}\big|h_{2}(t,x) - h_{2}^{M}(t,x)\big|^{2} \le \int_{0}^{1} \big|G_{2}(t,x,y) - G_{2}^{M}(t,x,y)\big|^{2}\,dy\, \|u_{1}\|_{C[0,1]}^{2} \le C t^{-1+\epsilon} \Delta x^{r_{2}} \|u_{1}\|_{C[0,1]}^{2} \le C t^{-1+\epsilon} \Delta x^{r_{2}},
$$
where r 2 is defined in (21).
Hence, we obtain the following error estimate when $F = 0$, $g = 0$:

$$
\mathbb{E}\big|h(t,x) - h^{M}(t,x)\big|^{2} \le C\big(\Delta x^{2} + \Delta x^{r_{3}}\big) + C t^{-1+\epsilon} \Delta x^{r_{2}}. \tag{23}
$$
Case 2. F = 0 , g 0 and u 0 ( x ) = u 1 ( x ) = 0 . In this case, the solution n ( t , x ) and the approximate solution n M ( t , x ) take the forms, by Lemmas 2 and 6,
$$
\begin{aligned}
n(t,x) &= \int_{0}^{t}\!\!\int_{0}^{1} G_{4}(t-s,x,y)\, g(u(s,y))\, \frac{\partial^{2} W(s,y)}{\partial s\,\partial y}\,dy\,ds, \\
n^{M}(t,x) &= \int_{0}^{t}\!\!\int_{0}^{1} G_{4}^{M}(t-s,x,y)\, g\big(u^{M}(s,\hat{k}_{M}(y))\big)\, \frac{\partial^{2} W^{M}(s,y)}{\partial s\,\partial y}\,dy\,ds,
\end{aligned}
$$
which implies that
$$
\begin{aligned}
\mathbb{E}\big|n^{M}(t,x) - n(t,x)\big|^{2}
&= \mathbb{E}\Big|\int_{0}^{t}\!\!\int_{0}^{1} G_{4}^{M}(t-s,x,y)\, g\big(u^{M}(s,\hat{k}_{M}(y))\big)\, \frac{\partial^{2} W^{M}(s,y)}{\partial s\,\partial y} - G_{4}(t-s,x,y)\, g(u(s,y))\, \frac{\partial^{2} W(s,y)}{\partial s\,\partial y}\,dy\,ds\Big|^{2} \\
&\le C\, \mathbb{E}\Big|\int_{0}^{t}\!\!\int_{0}^{1} \Big[G_{4}^{M}(t-s,x,y)\, g\big(u^{M}(s,\hat{k}_{M}(y))\big) - G_{4}(t-s,x,y)\, g(u(s,y))\Big]\, \frac{\partial^{2} W(s,y)}{\partial s\,\partial y}\,dy\,ds\Big|^{2} \\
&\quad + C\, \mathbb{E}\Big|\int_{0}^{t}\!\!\int_{0}^{1} G_{4}^{M}(t-s,x,y)\, g\big(u^{M}(s,\hat{k}_{M}(y))\big)\, \Big[\frac{\partial^{2} W(s,y)}{\partial s\,\partial y} - \frac{\partial^{2} W^{M}(s,y)}{\partial s\,\partial y}\Big]\,dy\,ds\Big|^{2} \\
&=: I_{1} + I_{2}.
\end{aligned}
$$
Following the estimate of ([22], (50)), we may show
$$
I_{1} \le C \Delta x^{r_{4}} + C \int_{0}^{t} (t-s)^{2(\alpha+\gamma-1)-\frac{\alpha}{2}} \sup_{y \in [0,1]} \mathbb{E}\big|u^{M}(s,\hat{k}_{M}(y)) - u(s,y)\big|^{2}\,ds,
$$

and

$$
I_{2} = \mathbb{E}\Big|\sum_{k=0}^{M-1} \int_{0}^{t}\!\!\int_{y_{k}}^{y_{k+1}} 0\; dW(s,y)\Big|^{2} = 0,
$$

where $r_{4}$ is defined in (21). Hence, we obtain

$$
\mathbb{E}\big|n^{M}(t,x) - n(t,x)\big|^{2} \le C \Delta x^{r_{4}} + C \int_{0}^{t} (t-s)^{2(\alpha+\gamma-1)-\frac{\alpha}{2}} \sup_{y \in [0,1]} \mathbb{E}\big|u^{M}(s,\hat{k}_{M}(y)) - u(s,y)\big|^{2}\,ds.
$$
Note that
$$
\begin{aligned}
\mathbb{E}\big|u^{M}(s,\hat{k}_{M}(y)) - u(s,y)\big|^{2}
&\le C\, \mathbb{E}\big|n^{M}(s,\hat{k}_{M}(y)) - n^{M}(s,y)\big|^{2} + C\, \mathbb{E}\big|n^{M}(s,y) - n(s,y)\big|^{2} \\
&\quad + C\, \mathbb{E}\big|h^{M}(s,\hat{k}_{M}(y)) - h^{M}(s,y)\big|^{2} + C\, \mathbb{E}\big|h^{M}(s,y) - h(s,y)\big|^{2} \\
&=: I_{1} + I_{2} + I_{3} + I_{4}.
\end{aligned}
$$
From Lemmas 9 and 8 and (23), it follows, noting that r 2 = 2 by (21),
$$
\begin{aligned}
I_{1} + I_{3} + I_{4}
&\le C\big(\Delta x^{r_{3}} + \Delta x^{r_{4}}\big) + C\big(\Delta x^{r_{3}} + s^{-1+\epsilon}\Delta x^{r_{2}}\big) + C\big(\Delta x^{2} + \Delta x^{r_{3}} + s^{-1+\epsilon}\Delta x^{r_{2}}\big) \\
&\le C\big(\Delta x^{r_{2}} + \Delta x^{r_{3}} + \Delta x^{r_{4}} + s^{-1+\epsilon}\Delta x^{r_{2}}\big),
\end{aligned}
$$
where $r_{2}$, $r_{3}$, and $r_{4}$ are defined by (21). Thus, noting that $2(\alpha+\gamma-1) - \frac{\alpha}{2} > -1$ for $\alpha \in (1,2)$, we have

$$
\begin{aligned}
\mathbb{E}\big|n^{M}(t,x) - n(t,x)\big|^{2}
&\le C \Delta x^{r_{4}} + C \int_{0}^{t} (t-s)^{2(\alpha+\gamma-1)-\frac{\alpha}{2}} \Big[\big(\Delta x^{r_{2}} + s^{-1+\epsilon}\Delta x^{r_{2}} + \Delta x^{r_{3}} + \Delta x^{r_{4}}\big) + \sup_{y \in [0,1]} \mathbb{E}\big|n^{M}(s,y) - n(s,y)\big|^{2}\Big]\,ds \\
&\le C\big(\Delta x^{r_{2}} + \Delta x^{r_{3}} + \Delta x^{r_{4}}\big) + C \int_{0}^{t} (t-s)^{2(\alpha+\gamma-1)-\frac{\alpha}{2}} \sup_{y \in [0,1]} \mathbb{E}\big|n^{M}(s,y) - n(s,y)\big|^{2}\,ds,
\end{aligned}
$$
which implies that
$$
\sup_{x \in [0,1]} \mathbb{E}\big|n^{M}(t,x) - n(t,x)\big|^{2} \le C\big(\Delta x^{r_{2}} + \Delta x^{r_{3}} + \Delta x^{r_{4}}\big) + C \int_{0}^{t} (t-s)^{2(\alpha+\gamma-1)-\frac{\alpha}{2}} \sup_{y \in [0,1]} \mathbb{E}\big|n^{M}(s,y) - n(s,y)\big|^{2}\,ds.
$$
Applying the Grönwall inequality in Lemma 10, we arrive at, for $F = 0$,

$$
\sup_{x \in [0,1]} \mathbb{E}\big|n^{M}(t,x) - n(t,x)\big|^{2} \le C\big(\Delta x^{r_{2}} + \Delta x^{r_{3}} + \Delta x^{r_{4}}\big).
$$
Case 3. F 0 , g = 0 and u 0 ( x ) = u 1 ( x ) = 0 . In this case, the solution n ( t , x ) and the approximate solution n M ( t , x ) take the forms, by Lemmas 2 and 6,
$$
n(t,x) = \int_{0}^{t}\!\!\int_{0}^{1} G_{3}(t-s,x,y)\, F(u(s,y))\,dy\,ds,
$$
and
$$
n^{M}(t,x) = \int_{0}^{t}\!\!\int_{0}^{1} G_{3}^{M}(t-s,x,y)\, F\big(u^{M}(s,\hat{k}_{M}(y))\big)\,dy\,ds.
$$
Following the same argument as in ([22], p. 19), we arrive at
$$
\sup_{x \in [0,1]} \mathbb{E}\big|n^{M}(t,x) - n(t,x)\big|^{2} \le C\big(\Delta x^{r_{2}} + \Delta x^{r_{3}} + \Delta x^{r_{4}}\big),
$$

where $r_{2}$, $r_{3}$, and $r_{4}$ are defined by (21). Together, these estimates complete the proof of Theorem 2. □

4.2. Proof of Theorem 1

In this subsection, we will give the proof of Theorem 1.
Proof of Theorem 1.
As in the proof of Theorem 2, we consider three cases.
Case 1. F = 0 , g = 0 . In this case, the solution h ( t , x ) and the approximate solution h M ( t , x ) take the forms
$$
\begin{aligned}
h(t,x) &= \int_{0}^{1} G_{1}(t,x,y)\, u_{0}(y)\,dy + \int_{0}^{1} G_{2}(t,x,y)\, u_{1}(y)\,dy, \\
h^{M}(t,x) &= \int_{0}^{1} G_{1}^{M}(t,x,y)\, u_{0}(y)\,dy + \int_{0}^{1} G_{2}^{M}(t,x,y)\, u_{1}(y)\,dy.
\end{aligned}
$$
By applying the Cauchy–Schwarz inequality, we arrive at
$$
\begin{aligned}
\big|h(t,x) - h^{M}(t,x)\big|^{2}
&\le C\Big|\int_{0}^{1} \big[G_{1}^{M}(t,x,y) - G_{1}(t,x,y)\big]\, u_{0}(\hat{k}_{M}(y))\,dy\Big|^{2} + C\Big|\int_{0}^{1} G_{1}(t,x,y)\big[u_{0}(\hat{k}_{M}(y)) - u_{0}(y)\big]\,dy\Big|^{2} \\
&\quad + C\Big|\int_{0}^{1} \big[G_{2}^{M}(t,x,y) - G_{2}(t,x,y)\big]\, u_{1}(\hat{k}_{M}(y))\,dy\Big|^{2} + C\Big|\int_{0}^{1} G_{2}(t,x,y)\big[u_{1}(\hat{k}_{M}(y)) - u_{1}(y)\big]\,dy\Big|^{2} \\
&\le C\int_{0}^{1} \big|G_{1}^{M}(t,x,y) - G_{1}(t,x,y)\big|^{2}\,dy \int_{0}^{1} \big|u_{0}(\hat{k}_{M}(y))\big|^{2}\,dy + C\int_{0}^{1} \big|G_{1}(t,x,y)\big|^{2}\,dy \int_{0}^{1} \big|u_{0}(\hat{k}_{M}(y)) - u_{0}(y)\big|^{2}\,dy \\
&\quad + C\int_{0}^{1} \big|G_{2}^{M}(t,x,y) - G_{2}(t,x,y)\big|^{2}\,dy \int_{0}^{1} \big|u_{1}(\hat{k}_{M}(y))\big|^{2}\,dy + C\int_{0}^{1} \big|G_{2}(t,x,y)\big|^{2}\,dy \int_{0}^{1} \big|u_{1}(\hat{k}_{M}(y)) - u_{1}(y)\big|^{2}\,dy.
\end{aligned}
$$
An application of the mean-value theorem yields
$$
\begin{aligned}
\big|h(t,x) - h^{M}(t,x)\big|^{2}
&\le C\int_{0}^{1} \big|G_{1}^{M}(t,x,y) - G_{1}(t,x,y)\big|^{2}\,dy\, \|u_{0}\|_{C[0,1]}^{2} + C\int_{0}^{1} \big|G_{1}(t,x,y)\big|^{2}\,dy\, \Delta x^{2} \|u_{0}\|_{C^{1}[0,1]}^{2} \\
&\quad + C\int_{0}^{1} \big|G_{2}^{M}(t,x,y) - G_{2}(t,x,y)\big|^{2}\,dy\, \|u_{1}\|_{C[0,1]}^{2} + C\int_{0}^{1} \big|G_{2}(t,x,y)\big|^{2}\,dy\, \Delta x^{2} \|u_{1}\|_{C^{1}[0,1]}^{2}.
\end{aligned}
$$
It follows that, by Lemmas A1, A3, A4, and A6,
$$
\big|h(t,x) - h^{M}(t,x)\big|^{2} \le C t^{-1+\epsilon} \Delta x^{r_{1}} \|u_{0}\|_{C[0,1]}^{2} + C t^{-\frac{\alpha}{2}} \Delta x^{2} \|u_{0}\|_{C^{1}[0,1]}^{2} + C t^{-1+\epsilon} \Delta x^{r_{2}} \|u_{1}\|_{C[0,1]}^{2} + C t^{2-\frac{\alpha}{2}} \Delta x^{2} \|u_{1}\|_{C^{1}[0,1]}^{2},
$$
where r 1 and r 2 are defined in (21).
Case 2. F = 0 , g 0 and u 0 ( x ) = u 1 ( x ) = 0 . In this case, the solution n ( t , x ) and the approximate solution n M ( t , x ) take the forms
$$
\begin{aligned}
n(t,x) &= \int_{0}^{t}\!\!\int_{0}^{1} G_{4}(t-s,x,y)\, g(u(s,y))\, \frac{\partial^{2} W(s,y)}{\partial s\,\partial y}\,dy\,ds, \\
n^{M}(t,x) &= \int_{0}^{t}\!\!\int_{0}^{1} G_{4}^{M}(t-s,x,y)\, g\big(u^{M}(s,\hat{k}_{M}(y))\big)\, \frac{\partial^{2} W^{M}(s,y)}{\partial s\,\partial y}\,dy\,ds.
\end{aligned}
$$
Following the same argument as in Case 2 in the proof of Theorem 2, we obtain
$$
\mathbb{E}\big|n^{M}(t,x) - n(t,x)\big|^{2} \le C \Delta x^{r_{4}} + C \int_{0}^{t} (t-s)^{2(\alpha+\gamma-1)-\frac{\alpha}{2}} \sup_{y \in [0,1]} \mathbb{E}\big|u^{M}(s,\hat{k}_{M}(y)) - u(s,y)\big|^{2}\,ds,
$$
where r 4 is defined in (21). Note that
E | u M ( s , k M ^ ( y ) ) u ( s , y ) | 2 E | n M ( s , k M ^ ( y ) ) n M ( s , y ) | 2 + E | n M ( s , y ) n ( s , y ) | 2 + E | h M ( s , k M ^ ( y ) ) h M ( s , y ) | 2 + E | h M ( s , y ) h ( s , y ) | 2 ,
which implies that
E | n ( t , x ) n M ( t , x ) | 2 C Δ x r 4 + C 0 t ( t s ) 2 ( α + γ 1 ) α 2 sup y [ 0 , 1 ] E | n M ( s , k M ^ ( y ) ) n M ( s , y ) | 2 d s + C 0 t ( t s ) 2 ( α + γ 1 ) α 2 sup y [ 0 , 1 ] E | n M ( s , y ) n ( s , y ) | 2 d s + C 0 t ( t s ) 2 ( α + γ 1 ) α 2 sup y [ 0 , 1 ] E | h M ( s , k M ^ ( y ) ) h M ( s , y ) | 2 d s + C 0 t ( t s ) 2 ( α + γ 1 ) α 2 sup y [ 0 , 1 ] E | h M ( s , y ) h ( s , y ) | 2 d s , C Δ x r 4 + I 1 ( t ) + I 2 ( t ) + I 3 ( t ) + I 4 ( t ) .
By Lemmas 7 and 9 and (27), we have
I 1 ( t ) C 0 t ( t s ) 2 ( α + γ 1 ) α 2 Δ x r 4 d s C Δ x r 4 , I 3 ( t ) C 0 t ( t s ) 2 ( α + γ 1 ) α 2 ( Δ x r 1 + Δ x r 2 ) s 1 + ϵ d s C ( Δ x r 1 + Δ x r 2 ) , I 4 ( t ) C 0 t ( t s ) 2 ( α + γ 1 ) α 2 ( Δ x r 1 + Δ x r 2 ) s 1 + ϵ d s C ( Δ x r 1 + Δ x r 2 ) ,
where r 1 , r 2 and r 4 are defined in (21). Thus, we arrive at
E | n M ( t , x ) n ( t , x ) | 2 C Δ x r 4 + C 0 t ( t s ) 2 ( α + γ 1 ) α 2 sup y [ 0 , 1 ] E | n M ( s , y ) n ( s , y ) | 2 d s + C Δ x r 1 + C Δ x r 2 .
An application of the Grönwall inequality in Lemma 10 yields
E | n M ( t , x ) n ( t , x ) | 2 C ( Δ x r 1 + Δ x r 2 + Δ x r 4 ) ,
where r 1 , r 2 and r 4 are defined in (21).
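The singular-kernel Grönwall step above can be illustrated with a small numerical sketch (not the authors' code; the parameters A, C and beta are hypothetical stand-ins for the term C Δx^r, the generic constant, and the kernel exponent 2 − 2(α + γ − 1)/α, assumed to satisfy beta < 1 so the kernel is integrable):

```python
def gronwall_solution(A, C, beta, T=1.0, N=400):
    # Discrete analogue of  a(t) = A + C * int_0^t (t - s)^(-beta) a(s) ds,
    # solved forward in time with a left-rectangle rule; the kernel is evaluated
    # at points (t_i - t_k) >= dt, away from the singularity.  Requires beta < 1.
    dt = T / N
    a = [A]
    for i in range(1, N + 1):
        ti = i * dt
        conv = dt * sum((ti - k * dt) ** (-beta) * a[k] for k in range(i))
        a.append(A + C * conv)
    return max(a)
```

The Grönwall lemma asserts a(t) ≤ C′A uniformly on [0, T]; in particular the bound is linear in A, which is exactly how the factor Δx^{r_1} + Δx^{r_2} + Δx^{r_4} propagates to the final estimate.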
Case 3. F 0 , g 0 and u 0 ( x ) = u 1 ( x ) = 0 . In this case, the solution n ( t , x ) and the approximate solution n M ( t , x ) take the forms, by Lemmas 2 and 6,
n ( t , x ) = 0 t 0 1 G 3 ( t s , x , y ) F ( u ( s , y ) ) d y d s + 0 t 0 1 G 4 ( t s , x , y ) g ( u ( s , y ) ) 2 W ( s , y ) s y d y d s , n M ( t , x ) = 0 t 0 1 G 3 M ( t s , x , y ) F ( u M ( s , k M ^ ( y ) ) ) d y d s + 0 t 0 1 G 4 M ( t s , x , y ) g ( u M ( s , k M ^ ( y ) ) ) 2 W M ( s , y ) s y d y d s .
Following the same argument as in Case 2 in the proof of Theorem 2, we arrive at
E | n M ( t , x ) n ( t , x ) | 2 C Δ x r 4 + C 0 t ( t s ) 2 ( α + γ 1 ) α 2 sup y [ 0 , 1 ] E | n M ( s , k M ^ ( y ) ) n M ( s , y ) | 2 d s + C 0 t ( t s ) 2 ( α + γ 1 ) α 2 sup y [ 0 , 1 ] E | n M ( s , y ) n ( s , y ) | 2 d s + C 0 t ( t s ) 2 ( α + γ 1 ) α 2 sup y [ 0 , 1 ] E | h M ( s , k M ^ ( y ) ) h M ( s , y ) | 2 d s + C 0 t ( t s ) 2 ( α + γ 1 ) α 2 sup y [ 0 , 1 ] E | h M ( s , y ) h ( s , y ) | 2 d s , C Δ x r 4 + I 1 ( t ) + I 2 ( t ) + I 3 ( t ) + I 4 ( t ) .
By Lemmas 7 and 9 and (27), we have
I 1 ( t ) C 0 t ( t s ) 2 ( α + γ 1 ) α 2 ( Δ x r 3 + Δ x r 4 ) d s C ( Δ x r 3 + Δ x r 4 ) , I 3 ( t ) C 0 t ( t s ) 2 ( α + γ 1 ) α 2 ( Δ x r 1 + Δ x r 2 ) s 1 + ϵ d s C ( Δ x r 1 + Δ x r 2 ) , I 4 ( t ) C 0 t ( t s ) 2 ( α + γ 1 ) α 2 ( Δ x r 1 + Δ x r 2 ) s 1 + ϵ d s C ( Δ x r 1 + Δ x r 2 ) ,
where r 1 , r 2 , r 3 and r 4 are defined in (21). Hence, we arrive at
E | n M ( t , x ) n ( t , x ) | 2 C ( Δ x r 3 + Δ x r 4 ) + C 0 t ( t s ) 2 ( α + γ 1 ) α 2 sup y [ 0 , 1 ] E | n M ( s , y ) n ( s , y ) | 2 d s + C Δ x r 1 + C Δ x r 2 .
An application of the Grönwall inequality in Lemma 10 yields
E | n M ( t , x ) n ( t , x ) | 2 C ( Δ x r 1 + Δ x r 2 + Δ x r 3 + Δ x r 4 ) ,
where r 1 , r 2 , r 3 and r 4 are defined in (21). The proof of Theorem 1 is complete. □

5. Conclusions

This paper has developed a spatial discretization scheme for the stochastic semilinear superdiffusion equation driven by multiplicative noise that is white in both space and time. The noise is approximated in the spatial direction by the Euler method, and the second-order space derivative by the central difference scheme. Using the Green functions, we constructed both the exact and the approximate solutions and examined their spatial regularities. We also established error estimates in the maximum norm in space that depend on the smoothness of the initial values. In future work, we will study the time discretization of the stochastic semilinear superdiffusion equation driven by fractionally integrated multiplicative space–time white noise.

Author Contributions

Both authors contributed equally to this paper. J.A.H. derived the error estimates for the superdiffusion equation. Y.Y. proposed the research topic and provided guidance throughout the writing of the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Appendix A.1

In this Appendix, we shall give some estimates of the Green functions G i ( t , x , y ) ,   i = 1 , 2 , 3 , 4 and their discretized analogues G i M ( t , x , y ) , i = 1 , 2 , 3 , 4 ,
G 1 ( t , x , y ) = j = 1 E α , 1 t α λ j e j ( x ) e j ( y ) , G 2 ( t , x , y ) = j = 1 t E α , 2 t α λ j e j ( x ) e j ( y ) , G 3 ( t , x , y ) = j = 1 t α 1 E α , α t α λ j e j ( x ) e j ( y ) , G 4 ( t , x , y ) = j = 1 t α + γ 1 E α , α + γ t α λ j e j ( x ) e j ( y ) , G 1 M ( t , x , y ) = j = 1 M 1 E α , 1 t α λ j M e j M ( x ) e j M ( k M ^ ( y ) ) , G 2 M ( t , x , y ) = j = 1 M 1 t E α , 2 t α λ j M e j M ( x ) e j M ( k M ^ ( y ) ) , G 3 M ( t , x , y ) = j = 1 M 1 t α 1 E α , α t α λ j M e j M ( x ) e j M ( k M ^ ( y ) ) , G 4 M ( t , x , y ) = j = 1 M 1 t α + γ 1 E α , α + γ t α λ j M e j M ( x ) e j M ( k M ^ ( y ) ) .
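The Green functions above are built from the two-parameter Mittag-Leffler function E_{α,β}. As a rough numerical sketch (not the authors' code), the following evaluates E_{α,β} by its truncated power series, which is reliable only for moderate arguments; accurate evaluation at large negative arguments requires dedicated algorithms. A truncated series for G_1 is included, with λ_j = (jπ)² and e_j(x) = √2 sin(jπx).

```python
import math

def mittag_leffler(z, alpha, beta, terms=80):
    # Truncated power series E_{alpha,beta}(z) = sum_{k>=0} z^k / Gamma(alpha*k + beta).
    # Reliable only for moderate |z|; large |z| causes overflow or severe cancellation.
    return sum(z ** k / math.gamma(alpha * k + beta) for k in range(terms))

def green_G1(t, x, y, alpha, J=5):
    # Truncated series  G_1(t,x,y) ~ sum_{j=1}^{J} E_{alpha,1}(-t^alpha lambda_j) e_j(x) e_j(y),
    # with lambda_j = (j*pi)^2 and e_j(x) = sqrt(2) * sin(j*pi*x) on (0,1).
    g = 0.0
    for j in range(1, J + 1):
        lam = (j * math.pi) ** 2
        g += (mittag_leffler(-t ** alpha * lam, alpha, 1.0)
              * 2.0 * math.sin(j * math.pi * x) * math.sin(j * math.pi * y))
    return g
```

Two classical special cases serve as sanity checks: E_{1,1}(z) = e^z and E_{2,1}(−z²) = cos(z); the kernel G_1 is symmetric in x and y by construction.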

Appendix A.2. Green Function G 1 ( t , x , y ) and Its Discretized Analogue G 1 M ( t , x , y )

In this subsection, we shall give the estimates of the Green function G 1 ( t , x , y ) and its discretized analogue G 1 M ( t , x , y ) , defined in Lemmas 4 and 6, respectively. The proofs are similar to those of ([22], Lemmas A1–A3) and are omitted here.
Lemma A1.
Let α ( 1 , 2 ) . Then we have, for any small ϵ > 0 ,
0 1 G 1 ( t , x , y ) 2 d y C t α 2 , 0 x 1 , 0 t 0 1 G 1 ( s , x , y ) 2 d y d s C t ϵ , 0 x 1 , 0 1 G 1 ( t , y , z ) G 1 t , k M ^ ( y ) , z 2 d z C t 1 + ϵ Δ x r 1 , 0 y 1 , 0 t 0 1 G 1 ( s , y , z ) G 1 s , k M ^ ( y ) , z 2 d z d s Ct ϵ Δ x r 1 , 0 y 1 ,
where r 1 is defined in (21).
Lemma A2.
Let α ( 1 , 2 ) . Then we have, for any small ϵ > 0 ,
0 1 G 1 M ( t , x , y ) 2 d y Ct α 2 , 0 x 1 , 0 t 0 1 G 1 M ( s , x , y ) 2 d y d s C t ϵ , 0 x 1 , 0 1 G 1 M ( t , y , z ) G 1 M t , k M ^ ( y ) , z 2 d z Ct 1 + ϵ Δ x r 1 , 0 y 1 , 0 t 0 1 G 1 M ( s , y , z ) G 1 M s , k M ^ ( y ) , z 2 d z d s C t ϵ Δ x r 1 , 0 y 1 ,
where r 1 is defined in (21).
Lemma A3.
Let α ( 1 , 2 ) . Then we have, for any small ϵ > 0 ,
0 1 G 1 ( t , x , y ) G 1 M ( t , x , y ) 2 d y C t 1 + ϵ Δ x r 1 , 0 x 1 , 0 t 0 1 G 1 ( s , x , y ) G 1 M ( s , x , y ) 2 d y d s C t ϵ Δ x r 1 , 0 x 1 ,
where r 1 is defined in (21).

Appendix A.3. Green Function G 2 ( t , x , y ) and Its Discretized Analogue G 2 M ( t , x , y )

In this subsection, we shall give the estimates of the Green function G 2 ( t , x , y ) and its discretized analogue G 2 M ( t , x , y ) defined in Lemmas 4 and 6, respectively.
Lemma A4.
Let α ( 1 , 2 ) . Then we have, for any small ϵ > 0 ,
0 1 G 2 ( t , x , y ) 2 d y C t 2 α 2 , 0 x 1 ,
0 t 0 1 G 2 ( s , x , y ) 2 d y d s C t 3 α 2 , 0 x 1 ,
0 1 G 2 ( t , y , z ) G 2 t , k M ^ ( y ) , z 2 d z C t 1 + ϵ Δ x r 2 , 0 y 1 ,
0 t 0 1 G 2 ( s , y , z ) G 2 s , k M ^ ( y ) , z 2 d z d s Ct ϵ Δ x r 2 , 0 y 1 ,
where r 2 is defined in (21).
Proof. 
We first prove (A1). Note that
0 1 G 2 ( t , x , y ) 2 d y = 0 1 j = 1 t E α , 2 ( t α λ j ) e j ( x ) e j ( y ) 2 d y ,
we have, since { e j ( y ) } j = 1 is an orthonormal basis in H = L 2 ( 0 , 1 ) ,
0 1 G 2 ( t , x , y ) 2 d y C j = 1 t 2 1 1 + t α λ j 2 C t 2 j = 1 1 1 + t α j 2 2 , C t 2 0 1 1 + t α x 2 2 d x C t 2 t α 2 0 1 1 + y 2 2 d y C t 2 α 2 ,
which shows (A1).
For (A2), we have
0 t 0 1 G 2 ( s , x , y ) 2 d y d s C 0 t s 2 α 2 d s C t 3 α 2 .
For (A3), we need to split the summation into two parts:
0 1 G 2 ( t , y , z ) G 2 t , k M ^ ( y ) , z 2 d z = j = M t 2 E α , 2 2 ( t α λ j ) e j ( y ) e j ( k M ^ ( y ) ) 2 + j = 1 M 1 t 2 E α , 2 2 ( t α λ j ) e j ( y ) e j ( k M ^ ( y ) ) 2 = I 1 + I 2 .
For I 1 , we have, noting that | e j ( y ) | C , | e j ( k M ^ ( y ) ) | C ,
I 1 = j = M t 2 E α , 2 2 ( t α λ j ) e j ( y ) e j ( k M ^ ( y ) ) 2 C j = M t 2 E α , 2 2 ( t α λ j ) .
Applying (6), we arrive at, with 0 < γ 1 1 ,
I 1 C j = M t 2 1 1 + t α λ j 2 ( γ 1 + 1 γ 1 ) C j = M t 2 1 1 + t α λ j 2 γ 1 C j = M t 2 1 t 2 γ 1 α λ j 2 γ 1 C j = M t 2 1 t 2 γ 1 α j 4 γ 1 C j = M t 2 2 γ 1 α 1 j 4 γ 1 .
Case 1. Choose γ 1 = 1 . We obtain that if 2 α 3 ϵ , then
I 1 C j = M t 2 2 α 1 j 4 = C j = M t 1 + ϵ t 3 2 α ϵ 1 j 4 C t 1 + ϵ j = M 1 j 4 C t 1 + ϵ Δ x 3 .
Case 2. If 2 α > 3 ϵ , then we may choose 2 2 γ 1 α = 1 + ϵ , that is, γ 1 = 3 ϵ 2 α < 1 , and obtain
I 1 C t 1 + ϵ j = M 1 j 4 3 ϵ 2 α C t 1 + ϵ M x 4 3 ϵ 2 α d x C t 1 + ϵ M 4 3 ϵ 2 α + 1 = C t 1 + ϵ Δ x 4 3 ϵ 2 α 1 .
Note that 4 3 ϵ 2 α 1 > 2 for all α ( 1 , 2 ) , which implies that
I 1 C t 1 + ϵ Δ x r 2 ,
where r 2 is defined in (21).
For I 2 , we have, using the mean-value theorem, with 0 < γ 1 1 ,
I 2 = j = 1 M 1 t 2 E α , 2 2 e j ( y ) e j ( k M ^ ( y ) ) 2 j = 1 M 1 t 2 E α , 2 2 | e j ( ξ ) ( y k M ^ ( y ) ) | 2 , C j = 1 M 1 t 2 E α , 2 2 j M 2 C j = 1 M 1 t 2 1 1 + t α λ j 2 γ 1 j 2 M 2 , C j = 1 M 1 t 2 1 t α λ j 2 γ 1 j 2 M 2 C j = 1 M 1 t 2 1 t α j 2 2 γ 1 j 2 M 2 , C M 2 j = 1 M 1 t 2 1 t 2 α γ 1 j 4 γ 1 j 2 C M 2 j = 1 M 1 t 2 2 α γ 1 1 j 4 γ 1 2 .
Case 1. Choose γ 1 = 1 . We obtain that if 2 α 3 ϵ , then
I 2 C t 1 + ϵ j = 1 M 1 j 2 M 2 < C t 1 + ϵ Δ x 2 .
Case 2. If 2 α > 3 ϵ , then we may choose 2 2 γ 1 α = 1 + ϵ , that is, γ 1 = 3 ϵ 2 α < 1 and obtain
I 2 C t 1 + ϵ j = 1 M 1 j 2 4 γ 1 M 2 C t 1 + ϵ 1 M x 2 4 γ 1 d x 1 M 2 = C t 1 + ϵ 1 M x 2 4 3 ϵ 2 α d x 1 M 2 .
Note that 3 4 3 ϵ 2 α < 0 for all α ( 1 , 2 ) since ϵ > 0 is an arbitrarily small number, which implies that
I 2 C t 1 + ϵ Δ x 2 = C t 1 + ϵ Δ x r 2 ,
where r 2 is defined in (21). Combining I 1 and I 2 , we obtain (A3).
Finally, (A4) follows from
0 t 0 1 G 2 ( s , y , z ) G 2 s , k M ^ ( y ) , z 2 d z d s C 0 t s 1 + ϵ Δ x r 2 d s C t ϵ Δ x r 2 ,
which completes the proof of Lemma A4. □
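The estimate of I 1 above rests on comparing the tail sum over j ≥ M with the corresponding integral over [M, ∞). This integral-comparison step can be sanity-checked numerically (an illustrative sketch only, with p standing for the exponent 4 γ 1):

```python
def tail_sum(p, M, cutoff=200000):
    # Truncated tail sum  sum_{j=M}^{cutoff-1} j^(-p); converges for p > 1.
    return sum(j ** (-p) for j in range(M, cutoff))
```

For p = 4 the integral comparison gives M^{-3}/3 ≤ Σ_{j≥M} j^{-4} ≤ M^{-4} + M^{-3}/3, i.e. the tail is of order Δx³ with Δx = 1/M, matching the bound used for I 1 when γ 1 = 1.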
Lemma A5.
Let α ( 1 , 2 ) . Then, we have, for any small ϵ > 0
0 1 G 2 M ( t , x , y ) 2 d y Ct 2 α 2 , 0 x 1 ,
0 t 0 1 G 2 M ( s , x , y ) 2 d y d s C t 3 α 2 , 0 x 1 ,
0 1 G 2 M ( t , y , z ) G 2 M t , k M ^ ( y ) , z 2 d z Ct 1 + ϵ Δ x r 2 , 0 y 1 ,
0 t 0 1 G 2 M ( s , y , z ) G 2 M s , k M ^ ( y ) , z 2 d z d s C t ϵ Δ x r 2 , 0 y 1 ,
where r 2 is defined in (21).
Proof. 
We first prove (A5). Note that
0 1 G 2 M ( t , x , y ) 2 d y 0 1 j = 1 M 1 t E α , 2 ( t α λ j M ) e j M ( x ) e j ( k M ^ ( y ) ) 2 d y .
Since e j ( x ) , j = 1 , 2 , are bounded and
0 1 e j ( k M ^ ( y ) ) e l ( k M ^ ( y ) ) d y = Δ x k = 1 M 1 e j ( y k ) e l ( y k ) = 1 , j = l 0 , j l ,
we have, noting that λ j M λ j , j = 1 , 2 , , M 1 ,
0 1 G 2 M ( t , x , y ) 2 d y C t 2 j = 1 M 1 E α , 2 2 ( t α λ j M ) C t 2 j = 1 E α , 2 2 ( t α λ j ) C t 2 α 2 ,
which shows (A5).
For (A6), we have
0 t 0 1 G 2 M ( s , x , y ) 2 d y d s C 0 t s 2 α 2 d s C 1 3 α 2 t 3 α 2 C t 3 α 2 .
For (A7), we have
0 1 G 2 M ( t , y , z ) G 2 M t , k M ^ ( y ) , z 2 d z = 0 1 j = 1 M 1 t E α , 2 ( t α λ j M ) e j M ( y ) e j M ( k M ^ ( y ) ) e j ( k M ^ ( z ) ) 2 d z , j = 1 M 1 t 2 E α , 2 2 ( t α λ j M ) e j M ( y ) e j M ( k M ^ ( y ) ) 2 .
Applying the mean value theorem, we obtain
0 1 G 2 M ( t , y , z ) G 2 M t , k M ^ ( y ) , z 2 d z j = 1 M 1 t 2 E α , 2 2 ( t α λ j M ) e j ( ξ ) ( y k M ^ ( y ) ) 2 , C j = 1 M 1 t 2 E α , 2 2 ( t α λ j M ) j M 2 .
Following the same argument as in the proof of (A3), we arrive at
0 1 G 2 M ( t , y , z ) G 2 M t , k M ^ ( y ) , z 2 d z Ct 1 + ϵ Δ x r 2 ,
which shows (A7).
Finally, (A8) follows from
0 t 0 1 G 2 M ( s , y , z ) G 2 M s , k M ^ ( y ) , z 2 d z d s C 0 t s 1 + ϵ Δ x r 2 d s C t ϵ Δ x r 2 .
The proof of Lemma A5 is complete. □
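The discrete orthonormality relation for the eigenfunctions on the mesh, used in the proof of Lemma A5, is easy to verify numerically. The following sketch (illustrative only) evaluates Δx Σ_{k=1}^{M−1} e_j(y_k) e_l(y_k) with e_j(y) = √2 sin(jπy) and y_k = k/M:

```python
import math

def discrete_inner(j, l, M):
    # Discrete analogue of the L2(0,1) inner product on the mesh y_k = k/M:
    #   Delta_x * sum_{k=1}^{M-1} e_j(y_k) e_l(y_k),  e_j(y) = sqrt(2) sin(j*pi*y).
    # For 1 <= j, l <= M-1 this equals 1 if j == l and 0 otherwise.
    dx = 1.0 / M
    return dx * sum(2.0 * math.sin(j * math.pi * k * dx)
                        * math.sin(l * math.pi * k * dx) for k in range(1, M))
```

This is the standard orthogonality of the discrete sine basis: the trapezoidal sum over the mesh reproduces the continuous inner product exactly for frequencies below the mesh limit.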
Lemma A6.
Let α ( 1 , 2 ) . Then we have, for any small ϵ > 0 ,
0 1 G 2 ( t , x , y ) G 2 M ( t , x , y ) 2 d y C t 1 + ϵ Δ x r 2 , 0 x 1 ,
0 t 0 1 G 2 ( s , x , y ) G 2 M ( s , x , y ) 2 d y d s C t ϵ Δ x r 2 , 0 x 1 ,
where r 2 is defined in (21).
Proof. 
We first show (A9). Note that
0 1 G 2 ( t , x , y ) G 2 M ( t , x , y ) 2 d y 0 1 j = 1 t E α , 2 ( λ j t α ) e j ( x ) e j ( y ) j = 1 M 1 t E α , 2 ( λ j M t α ) e j M ( x ) e j M ( k M ^ ( y ) ) 2 d y C 0 1 j = M t E α , 2 ( λ j t α ) e j ( x ) e j ( y ) 2 d y + C 0 1 j = 1 M 1 t E α , 2 ( λ j t α ) e j ( x ) e j ( y ) e j ( k M ^ ( y ) ) 2 d y + C 0 1 j = 1 M 1 t E α , 2 ( λ j t α ) e j ( x ) e j M ( x ) e j M ( k M ^ ( y ) ) 2 d y + C 0 1 j = 1 M 1 t E α , 2 ( λ j t α ) t E α , 2 ( λ j M t α ) e j M ( x ) e j M ( k M ^ ( y ) ) 2 d y = I 1 ( t ) + I 2 ( t ) + I 3 ( t ) + I 4 ( t ) .
For I 1 ( t ) , we have
I 1 ( t ) = C 0 1 j = M t E α , 2 ( λ j t α ) e j ( x ) e j ( y ) 2 d y C j = M t 2 E α , 2 2 ( λ j t α ) C t 1 + ϵ Δ x r 2 .
For I 2 ( t ) , we have
I 2 ( t ) = C 0 1 j = 1 M 1 t E α , 2 ( λ j t α ) e j ( x ) e j ( y ) e j ( k M ^ ( y ) ) 2 d y , = C k = 0 M 1 y k y k + 1 j = 1 M 1 t E α , 2 ( λ j t α ) e j ( x ) y k y e j ( ξ ) d ξ 2 d y , = C k = 0 M 1 y k y k + 1 y k y j = 1 M 1 t E α , 2 ( λ j t α ) e j ( x ) e j ( ξ ) d ξ 2 d y , C k = 0 M 1 y k y k + 1 1 M y k y k + 1 j = 1 M 1 t E α , 2 ( λ j t α ) e j ( x ) e j ( ξ ) 2 d ξ d y , = C M 2 0 1 j = 1 M 1 t E α , 2 ( λ j t α ) e j ( x ) e j ( ξ ) 2 d ξ .
Note that
0 1 e j ( ξ ) e l ( ξ ) d ξ = j 2 π 2 , j = l , 0 , j l ,
and that e j ( x ) , j = 1 , 2 , are bounded, so we arrive at
I 2 ( t ) C 1 M 2 j = 1 M 1 t 2 E α , 2 2 ( λ j t α ) j 2 C j = 1 M 1 j M 2 t 2 E α , 2 2 ( λ j t α ) C t 1 + ϵ Δ x r 2 .
For I 3 ( t ) , we have
I 3 ( t ) = C 0 1 j = 1 M 1 t E α , 2 ( λ j t α ) e j ( x ) e j M ( x ) e j M ( k M ^ ( y ) ) 2 d y = C j = 1 M 1 t 2 E α , 2 2 ( λ j t α ) e j ( x ) e j M ( x ) 2 .
By the linear interpolation theorem, it follows that
I 3 ( t ) C j = 1 M 1 t 2 E α , 2 2 ( λ j t α ) e j ( ξ ) ( y y k ) ( y y k + 1 ) 2 C j = 1 M 1 t 2 E α , 2 2 ( λ j t α ) C ( j 2 π 2 ) 1 M 2 2 C j = 1 M 1 t 2 E α , 2 2 ( λ j t α ) j M 4 C j = 1 M 1 t 2 E α , 2 2 ( λ j t α ) j M 2 C t 1 + ϵ Δ x r 2 .
For I 4 ( t ) , we have
I 4 ( t ) C 0 1 j = 1 M 1 t E α , 2 ( λ j t α ) t E α , 2 ( λ j M t α ) e j M ( x ) e j M ( k M ^ ( y ) ) 2 d y C j = 1 M 1 t E α , 2 ( λ j t α ) t E α , 2 ( λ j M t α ) 2 .
Note that, from Lemma 1,
d d t t E α , 2 t α λ = E α , 1 t α λ ,
which implies that
t E α , 2 ( t α λ ) = t 1 α λ 1 E α , 1 t α λ E α , 2 ( t α λ ) ,
where E α , 2 ( x ) is the derivative of E α , 2 ( x ) with respect to x.
By using the mean value theorem and (A11), we arrive at, with 0 < γ 1 1 ,
I 4 ( t ) C j = 1 M 1 t E α , 2 ( λ j t α ) t E α , 2 ( λ j M t α ) 2 = C j = 1 M 1 t E α , 2 ( ξ ) ( t α ( λ j λ j M ) ) 2 C j = 1 M 1 t E α , 2 ( t α λ j ) ( t α ( λ j λ j M ) ) 2 = C j = 1 M 1 | t 1 α λ j 1 E α , 1 ( t α λ j ) E α , 2 ( t α λ j ) ( t α ( λ j λ j M ) | 2 C j = 1 M 1 | t λ j 1 1 1 + t α λ j γ 1 ( λ j λ j M ) | 2 ,
which implies that, using | λ j λ j M | C j 4 M 2 , ([22], (A18)),
I 4 ( t ) C j = 1 M 1 | t λ j 1 ( t α λ j ) γ 1 j 4 M 2 | 2 C j = 1 M 1 | t 1 α γ 1 j 2 2 γ 1 j 4 M 2 | 2 C t 2 2 α γ 1 j = 1 M 1 j 4 4 γ 1 M 4 = C t 2 2 α γ 1 1 M 4 γ 1 1 .
Case 1. If 2 α 3 ϵ , then we may choose γ 1 = 1 and obtain
I 4 ( t ) C t 2 2 α 1 M 3 C t 1 + ϵ Δ x 3 C t 1 + ϵ Δ x r 2 .
Case 2. If 2 α > 3 ϵ , then we may choose 2 2 α γ 1 = 1 + ϵ , that is γ 1 = 3 ϵ 2 α < 1 and obtain, noting that 4 3 ϵ 2 α 1 > 2 for any α ( 1 , 2 ) since ϵ > 0 is an arbitrarily small number,
I 4 ( t ) C t 1 + ϵ Δ x 4 3 ϵ 2 α 1 C t 1 + ϵ Δ x 2 = C t 1 + ϵ Δ x r 2 .
Thus, we have
I 4 ( t ) C t 1 + ϵ Δ x r 2 ,
where r 2 is defined in (21). Combining I 1 ( t ) , I 2 ( t ) , I 3 ( t ) and I 4 ( t ) , we show (A9).
Finally, (A10) follows from
0 t 0 1 G 2 ( s , x , y ) G 2 M ( s , x , y ) 2 d y d s C Δ x r 2 0 t s 1 + ϵ d s C t ϵ Δ x r 2 .
The proof of Lemma A6 is concluded. □
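The bound | λ j λ j M | ≤ C j⁴ / M² used in the estimate of I 4 ( t ) compares the Dirichlet eigenvalues of the continuous operator with those of the central difference matrix. A minimal numerical sketch (illustrative only; the constant from the Taylor expansion is π⁴/12):

```python
import math

def lam_exact(j):
    # Eigenvalues of -d^2/dx^2 on (0,1) with Dirichlet boundary conditions.
    return (j * math.pi) ** 2

def lam_discrete(j, M):
    # Eigenvalues of the central difference matrix on the mesh Delta_x = 1/M:
    #   lambda_j^M = (4 / Delta_x^2) * sin^2(j*pi*Delta_x/2),  1 <= j <= M-1.
    dx = 1.0 / M
    return (4.0 / dx ** 2) * math.sin(j * math.pi * dx / 2.0) ** 2
```

Taylor expansion of sin² gives λ j − λ j M ≈ (π⁴/12) j⁴ / M², so the ratio of the gap to j⁴/M² stays bounded (near π⁴/12 ≈ 8.12 for j ≪ M), consistent with the cited bound.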

Appendix A.4. Green Function G 3 ( t , x , y ) and Its Discretized Analogue G 3 M ( t , x , y )

In this subsection, we shall give the estimates of the Green function G 3 ( t , x , y ) and its discretized analogue G 3 M ( t , x , y ) defined in Lemmas 4 and 6, respectively. The proofs are similar to those of ([22], Lemmas A4–A6) and are omitted here.
Lemma A7.
Let α ( 1 , 2 ) . Assume that Assumption 1 holds. Then we have, with any ϵ > 0 ,
0 1 G 3 ( t , x , y ) 2 d y C t α 2 , 0 x 1 , 0 t 0 1 G 3 ( s , x , y ) 2 d y d s C t ϵ , 0 x 1 , 0 1 G 3 ( t , y , z ) G 3 t , k M ^ ( y ) , z 2 d z C t 1 + ϵ Δ x r 3 , 0 y 1 , 0 t 0 1 G 3 ( s , y , z ) G 3 s , k M ^ ( y ) , z 2 d z d s Ct ϵ Δ x r 3 , 0 y 1 ,
where r 3 is defined in (21).
Lemma A8.
Let α ( 1 , 2 ) . Assume that Assumption 1 holds. Then we have, with any ϵ > 0 ,
0 1 G 3 M ( t , x , y ) 2 d y Ct α 2 , 0 x 1 , 0 t 0 1 G 3 M ( s , x , y ) 2 d y d s C t ϵ , 0 x 1 , 0 1 G 3 M ( t , y , z ) G 3 M t , k M ^ ( y ) , z 2 d z Ct 1 + ϵ Δ x r 3 , 0 y 1 , 0 t 0 1 G 3 M ( s , y , z ) G 3 M s , k M ^ ( y ) , z 2 d z d s C t ϵ Δ x r 3 , 0 y 1 ,
where r 3 is defined in (21).
Lemma A9.
Let α ( 1 , 2 ) . Assume that Assumption 1 holds. Then we have, with any ϵ > 0 ,
0 1 G 3 ( t , x , y ) G 3 M ( t , x , y ) 2 d y C t 1 + ϵ Δ x r 3 , 0 x 1 , 0 t 0 1 G 3 ( s , x , y ) G 3 M ( s , x , y ) 2 d y d s C t ϵ Δ x r 3 , 0 x 1 ,
where r 3 is defined in (21).

Appendix A.5. Green Function G 4 ( t , x , y ) and Its Discretized Analogue G 4 M ( t , x , y )

In this subsection, we shall give the estimates of the Green function G 4 ( t , x , y ) and its discretized analogue G 4 M ( t , x , y ) defined in Lemmas 4 and 6, respectively. The proofs are similar to those of ([22], Lemmas A7–A9) and are omitted here.
Lemma A10.
Let α ( 1 , 2 ) . Assume that Assumption 1 holds. Then we have, with any ϵ > 0 ,
0 1 G 4 ( t , x , y ) 2 d y C t α 2 , 0 x 1 , 0 t 0 1 G 4 ( s , x , y ) 2 d y d s C t ϵ , 0 x 1 , 0 1 G 4 ( t , y , z ) G 4 t , k M ^ ( y ) , z 2 d z C t 1 + ϵ Δ x r 4 , 0 y 1 , 0 t 0 1 G 4 ( s , y , z ) G 4 s , k M ^ ( y ) , z 2 d z d s Ct ϵ Δ x r 4 , 0 y 1 ,
where r 4 is defined in (21).
Lemma A11.
Let α ( 1 , 2 ) . Assume that Assumption 1 holds. Then we have, with any ϵ > 0 ,
0 1 G 4 M ( t , x , y ) 2 d y Ct α 2 , 0 x 1 , 0 t 0 1 G 4 M ( s , x , y ) 2 d y d s C t ϵ , 0 x 1 , 0 1 G 4 M ( t , y , z ) G 4 M t , k M ^ ( y ) , z 2 d z Ct 1 + ϵ Δ x r 4 , 0 y 1 , 0 t 0 1 G 4 M ( s , y , z ) G 4 M s , k M ^ ( y ) , z 2 d z d s C t ϵ Δ x r 4 , 0 y 1 ,
where r 4 is defined in (21).
Lemma A12.
Let α ( 1 , 2 ) . Assume that Assumption 1 holds. Then we have, with any ϵ > 0 ,
0 1 G 4 ( t , x , y ) G 4 M ( t , x , y ) 2 d y C t 1 + ϵ Δ x r 4 , 0 x 1 , 0 t 0 1 G 4 ( s , x , y ) G 4 M ( s , x , y ) 2 d y d s C t ϵ Δ x r 4 , 0 x 1 ,
where r 4 is defined in (21).

References

  1. Gyöngy, I. Lattice approximations for stochastic quasi-linear parabolic partial differential equations driven by space-time white noise I. Potential Anal. 1998, 9, 1–25. [Google Scholar] [CrossRef]
  2. Anton, R.; Cohen, D.; Quer-Sardanyons, L. A fully discrete approximation of the one-dimensional stochastic heat equation. IMA J. Numer. Anal. 2020, 40, 247–284. [Google Scholar] [CrossRef]
  3. Walsh, J.B. Finite element methods for parabolic stochastic PDE’s. Potential Anal. 2005, 23, 1–43. [Google Scholar] [CrossRef]
  4. Szabo, T.L. Time domain wave equations for lossy media obeying a frequency power law. J. Acoust. Soc. Am. 1994, 96, 491–500. [Google Scholar] [CrossRef]
  5. Chen, W.; Holm, S. Fractional Laplacian time-space models for linear and nonlinear lossy media exhibiting arbitrary frequency power-law dependency. J. Acoust. Soc. Am. 2004, 115, 1424–1430. [Google Scholar] [CrossRef]
  6. Li, Y.; Wang, Y.; Deng, W. Galerkin finite element approximations for stochastic space-time fractional wave equations. SIAM J. Numer. Anal. 2017, 55, 3173–3202. [Google Scholar] [CrossRef]
  7. Li, Y.; Wang, Y.; Deng, W.; Nie, D. Galerkin finite element approximation for semilinear stochastic time-tempered fractional wave equations with multiplicative Gaussian noise and additive fractional Gaussian noise. Numer. Math. Theory Methods Appl. 2022, 15, 1063–1098. [Google Scholar] [CrossRef]
  8. Li, Y.; Wang, Y.; Deng, W.; Nie, D. Existence and regularity results for semilinear stochastic time-tempered fractional wave equations with multiplicative Gaussian noise and additive fractional Gaussian noise. Discret. Contin. Dyn. Syst. Ser. S 2023, 16, 2686–2720. [Google Scholar] [CrossRef]
  9. Zou, G.A.; Atangana, A.; Zhou, Y. Error estimates of a semidiscrete finite element method for fractional stochastic diffusion-wave equations. Numer. Methods Partial. Differ. Equ. 2018, 34, 1834–1848. [Google Scholar] [CrossRef]
  10. Egwu, B.A.; Yan, Y. Galerkin finite element approximation of a stochastic semilinear fractional wave equation driven by fractionally integrated additive noise. Foundations 2023, 3, 290–322. [Google Scholar] [CrossRef]
  11. Anh, V.V.; Leonenko, N.N.; Ruiz-Medina, M. Space-time fractional stochastic equations on regular bounded open domains. Fract. Calc. Appl. Anal. 2016, 19, 1161–1199. [Google Scholar] [CrossRef]
  12. Anh, P.T.; Doan, T.S.; Huong, P.T. A variational constant formula for Caputo fractional stochastic differential equations. Statist. Probab. Lett. 2019, 145, 351–358. [Google Scholar] [CrossRef]
  13. Chen, L. Nonlinear stochastic time-fractional diffusion equations on ℝ: Moments, Hölder regularity and intermittency. Trans. Am. Math. Soc. 2017, 369, 8497–8535. [Google Scholar] [CrossRef]
  14. Chen, L.; Hu, Y.; Nualart, D. Nonlinear stochastic time-fractional slow and fast diffusion equations on ℝd. Stoch. Process. Appl. 2019, 129, 5073–5112. [Google Scholar] [CrossRef]
  15. Chen, Z.Q.; Kim, K.H.; Kim, P. Fractional time stochastic partial differential equations. Stoch. Process. Appl. 2015, 125, 1470–1499. [Google Scholar] [CrossRef]
  16. Al-Maskari, M.; Karaa, S. Strong convergence rates for the approximation of a stochastic time-fractional Allen-Cahn equation. Commun. Nonlinear Sci. Numer. Simul. 2023, 119, 107099. [Google Scholar] [CrossRef]
  17. Dai, X.; Hong, J.; Sheng, D. Mittag–Leffler Euler integrator and large deviations for stochastic space-time fractional diffusion equations. Potential Anal. 2023. [Google Scholar] [CrossRef]
  18. Gunzburger, M.; Li, B.; Wang, J. Convergence of finite element solutions of stochastic partial integro-differential equations driven by white noise. Numer. Math. 2019, 141, 1043–1077. [Google Scholar] [CrossRef]
  19. Gunzburger, M.; Li, B.; Wang, J. Sharp convergence rates of time discretization for stochastic time fractional PDEs subject to additive space-time white noise. Math. Comp. 2019, 88, 1715–1741. [Google Scholar] [CrossRef]
  20. Kang, W.; Egwu, B.A.; Yan, Y.; Pani, K.A. Galerkin finite element approximation of a stochastic semilinear fractional subdiffusion with fractionally integrated additive noise. IMA J. Numer. Anal. 2022, 42, 2301–2335. [Google Scholar] [CrossRef]
  21. Wu, X.; Yan, Y.; Yan, Y. An analysis of the L1 scheme for stochastic subdiffusion problem driven by integrated space-time white noise. Appl. Numer. Math. 2020, 157, 69–87. [Google Scholar] [CrossRef]
  22. Wang, J.; Hoult, J.; Yan, Y. Spatial discretization for stochastic semi-linear sub-diffusion equations driven by fractionally integrated multiplicative space-time white noise. Mathematics 2021, 9, 1917. [Google Scholar]
  23. Podlubny, I. Fractional Differential Equations; Academic Press: Cambridge, MA, USA, 2009. [Google Scholar]
  24. Kilbas, A.A.; Srivastava, H.M.; Trujillo, J.J. Theory and Applications of Fractional Differential Equations; North-Holland Mathematical Studies: Amsterdam, The Netherlands, 2006. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
