Article

Numerical Approximation for a Stochastic Fractional Differential Equation Driven by Integrated Multiplicative Noise

Department of Mathematical and Physical Sciences, University of Chester, Chester CH1 4BJ, UK
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2024, 12(3), 365; https://doi.org/10.3390/math12030365
Submission received: 19 December 2023 / Revised: 17 January 2024 / Accepted: 22 January 2024 / Published: 23 January 2024

Abstract

We consider a numerical approximation for stochastic fractional differential equations driven by integrated multiplicative noise. The fractional derivative is in the Caputo sense with fractional order $\alpha \in (0,1)$, and the non-linear terms satisfy global Lipschitz conditions. We first approximate the noise with a piecewise constant function to obtain a regularized stochastic fractional differential equation. By applying Minkowski's inequality for double integrals, we establish that the error between the exact solution and the solution of the regularized problem has order $O(\Delta t^{\alpha})$ in the mean square norm, where $\Delta t$ denotes the step size. To validate our theoretical conclusions, numerical examples are presented, demonstrating the consistency of the numerical results with the established theory.

1. Introduction

Consider the following stochastic fractional differential equation driven by multiplicative white noise, with $\alpha \in (0,1)$ [1]:
$${}_{0}^{C}D_{t}^{\alpha} u(t) = f(t, u(t)) + \int_{0}^{t} g(t, u(s))\, dW(s), \quad t \in (0, T], \qquad u(0) = u_{0}, \qquad (1)$$
where $W(s)$ is Brownian motion defined on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ and ${}_{0}^{C}D_{t}^{\alpha} v(t)$ denotes the Caputo fractional derivative defined by [2]
$${}_{0}^{C}D_{t}^{\alpha} v(t) = \frac{1}{\Gamma(1-\alpha)} \int_{0}^{t} \frac{v'(\tau)}{(t-\tau)^{\alpha}}\, d\tau.$$
The non-linear functions $f(t,x)$ and $g(t,x)$ satisfy the following global Lipschitz and linear growth conditions with some suitable constant $C > 0$:
$$|f(t,x) - f(t,y)| \le C|x-y|, \quad |g(t,x) - g(t,y)| \le C|x-y|, \quad \forall x, y \in \mathbb{R},$$
$$|f(t,x)| \le C(1 + |x|), \quad |g(t,x)| \le C(1 + |x|), \quad \forall x \in \mathbb{R}.$$
It is well known that (1) is equivalent to the following stochastic Volterra integral equation (SVIE) with a weakly singular kernel [1]:
$$u(t) = u_{0} + \frac{1}{\Gamma(\alpha)} \int_{0}^{t} \frac{f(\zeta, u(\zeta))}{(t-\zeta)^{1-\alpha}}\, d\zeta + \frac{1}{\Gamma(\alpha)} \int_{0}^{t}\!\int_{0}^{\zeta} \frac{g(\zeta, u(s))}{(t-\zeta)^{1-\alpha}}\, dW(s)\, d\zeta. \qquad (2)$$
It is evident that the solution of Equation (2) depends not only on the current state but also on past states. This memory property makes stochastic Volterra integral equations (SVIEs) suitable for modelling problems involving both memory and noise across various domains of science and technology. Examples include biological population models [3,4], mathematical finance models [5,6], and others [7]. On the mathematical side, numerous studies have been conducted, such as [8,9]. Ravichandran et al. [10] considered a fractional integrodifferential system with state-dependent delay in Banach spaces; they used Krasnoselskii's fixed-point theorem and the Leray–Schauder alternative to study the controllability and continuous dependence of these systems. Similarly, ref. [11] examined controllability conditions for a closely related system. Dhayal et al. [12] considered second-order stochastic differential equations driven by fractional Brownian motion with various Hurst parameters. Various Caputo-based fractional equations are also discussed in the current literature, such as Caputo–Fabrizio fractional-order differential equations with multiple delays; Zhang and Li [13] derived the associated difference form and obtained solutions by applying a fractional PCEC algorithm. Such systems are widely used in control theory.
When  α = 1 , the stochastic Volterra integral equations (SVIEs) (2), along with their numerical schemes, have been thoroughly investigated [14,15]. In contrast, the singular Volterra integral equations of the same form have received less attention. Some results concerning their existence and uniqueness have been established under a (global) Lipschitz condition and a linear growth condition, which can be found in [16,17,18] and the references therein.
When substituting $(t-s)^{\alpha-1}$ with alternative well-behaved functions, the exploration of numerical schemes for (regular) stochastic Volterra integral equations (SVIEs) has gained attention only in recent times. Tudor and Tudor [19] considered the strong convergence of one-step numerical approximations for Itô–Volterra equations, providing a convergence rate in the $L^{p}(\Omega)$ norm. Wen and Zhang [20] analyzed an enhanced version of the rectangular method for stochastic Volterra equations, demonstrating a convergence order of $O(\Delta t)$. Subsequently, Wang [21] approximated SVIE solutions using a class of stochastic differential equations (SDEs) and introduced two numerical methods: the stochastic theta method and the splitting method. Xiao et al. [22] presented a split-step collocation method for SVIEs, establishing its convergence with an order of $O(\Delta t^{1/2})$. Liang et al. [23] found that the Euler–Maruyama (EM) method achieves superconvergence of order $O(\Delta t)$ if the kernel function in the diffusion term satisfies specific boundary conditions. More recently, research has extended the Euler scheme to a broader class of equations, such as SVIEs with delay, stochastic Volterra integro-differential equations, and stochastic fractional integro-differential equations. For further exploration, we refer to [16,24,25,26,27,28] and the references therein.
The numerical treatment of stochastic Volterra integral equations (SVIEs) with a weakly singular kernel of the form (2) has been minimally explored in the existing literature. The primary challenge stems from the singularity of the integrand kernel. In such cases, the potent and essential Itô formula, commonly employed for stochastic differential equations (SDEs), is not applicable to SVIEs with a singular kernel.
Li et al. [29] addressed this issue by examining the Euler–Maruyama scheme for solving (2) with $\alpha > \frac{1}{2}$. They established that the convergence order of the scheme is $O(\Delta t^{\alpha - \frac{1}{2}})$ for $\alpha > \frac{1}{2}$. Another contribution, by Kamrani [1], focused on the numerical approximation of (2) with additive noise, that is, with $g \equiv 1$. Kamrani approximated the additive noise using a piecewise constant function, leading to a regularized equation, and demonstrated that the error between the exact solution and the solution of the regularized problem is $O(\Delta t)$. This regularized equation was further approximated using a Jacobi polynomial method.
In this paper, we extend Kamrani's approach [1] to address (2) in the presence of multiplicative noise. We begin by examining the regularity of the solution of (2) and subsequently approximate the noise using piecewise constant functions to derive the regularized equation. The regularized equation is further approximated using an L1 scheme. Our analysis establishes that the error between the exact solution and the solution of the regularized problem is $O(\Delta t^{\alpha})$ for any $\alpha \in (0,1)$, by applying Minkowski's inequality for double integrals. This extends and improves upon the results presented in [29], where only the case $\alpha > \frac{1}{2}$ was considered and a convergence order of $O(\Delta t^{\alpha - \frac{1}{2}})$ was achieved.
Let us briefly review the main results obtained in this paper. Let $0 = t_{0} < t_{1} < t_{2} < \cdots < t_{n-1} < t_{n} = T$ be a partition of $[0, T]$ and let $\Delta t$ be the step size. We approximate the white noise $\frac{dW(t)}{dt}$ by a piecewise constant function $\frac{d\hat{W}(t)}{dt}$ defined by [1]
$$\frac{d\hat{W}(t)}{dt} = \begin{cases} \dfrac{W(t_{1}) - W(t_{0})}{\Delta t} = \dfrac{\sqrt{\Delta t}\, \eta_{1}}{\Delta t}, & t \in [t_{0}, t_{1}),\\[4pt] \dfrac{W(t_{2}) - W(t_{1})}{\Delta t} = \dfrac{\sqrt{\Delta t}\, \eta_{2}}{\Delta t}, & t \in [t_{1}, t_{2}),\\[4pt] \quad \vdots & \\ \dfrac{W(t_{n}) - W(t_{n-1})}{\Delta t} = \dfrac{\sqrt{\Delta t}\, \eta_{n}}{\Delta t}, & t \in [t_{n-1}, t_{n}), \end{cases}$$
where $\eta_{i} \sim N(0,1)$, $i = 1, 2, \ldots, n$, are independent standard normally distributed random variables. For simplicity, introducing the indicator functions
$$\chi_{i}(t) = \begin{cases} 1, & t \in [t_{i-1}, t_{i}),\\ 0, & \text{otherwise}, \end{cases}$$
we may write $\frac{d\hat{W}(t)}{dt}$ as
$$\frac{d\hat{W}(t)}{dt} = \sum_{i=1}^{n} \frac{\sqrt{\Delta t}\, \chi_{i}(t)\, \eta_{i}}{\Delta t} = \sum_{i=1}^{n} \frac{\chi_{i}(t)\, \eta_{i}}{\sqrt{\Delta t}}.$$
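As an illustration only (this sketch is our addition and not part of the original paper; the function and variable names are ours), the piecewise constant approximation of the white noise can be generated in Python as follows.

```python
import numpy as np

def piecewise_constant_noise(T=1.0, n=64, seed=0):
    """Return the grid t_i and the piecewise constant approximation of dW/dt,
    equal to (W(t_i) - W(t_{i-1})) / dt = sqrt(dt) * eta_i / dt on [t_{i-1}, t_i)."""
    rng = np.random.default_rng(seed)
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    eta = rng.standard_normal(n)       # eta_i ~ N(0, 1), i = 1, ..., n
    dW = np.sqrt(dt) * eta             # Brownian increments W(t_i) - W(t_{i-1})
    return t, dW / dt                  # constant value of dW^hat/dt on each subinterval

# Evaluate dW^hat/dt at an arbitrary point s in [0, T)
t, dWhat_dt = piecewise_constant_noise()
s = 0.37
i = int(s * len(dWhat_dt) / t[-1])     # index of the subinterval containing s
print(dWhat_dt[i])
```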
We then obtain the following regularized stochastic fractional differential equation associated with (1):
$$\tilde{u}(t) = u_{0} + \frac{1}{\Gamma(\alpha)} \int_{0}^{t} \frac{f(\zeta, \tilde{u}(\zeta))}{(t-\zeta)^{1-\alpha}}\, d\zeta + \frac{1}{\Gamma(\alpha)} \int_{0}^{t}\!\int_{0}^{\zeta} \frac{g(\zeta, \tilde{u}(s))}{(t-\zeta)^{1-\alpha}}\, \frac{d\hat{W}(s)}{ds}\, ds\, d\zeta. \qquad (3)$$
In Theorem 2, we consider the error of convergence for the additive noise case and show that the convergence order is $O(\Delta t)$. In Theorem 4, we consider the error of convergence for the multiplicative noise case and show that the convergence order is $O(\Delta t^{\alpha})$, $\alpha \in (0,1)$.
The paper is organized as follows. In Section 2, we consider the approximation for the additive noise case. In Section 3, we consider the approximation for the multiplicative noise case. In Section 4, we give some numerical simulations in which the fractional derivative is approximated using the L1 scheme. In Appendix A, we state Minkowski's inequality for double integrals, which is the main tool used in the proofs of the error estimates.
Throughout this paper, C denotes a generic constant that is independent of the step size $\Delta t$ and may differ at different occurrences.

2. The Additive Noise Case

In this section, we will consider the approximation of (2) for the additive noise case, that is,  g ( t , u ( s ) ) = g 1 ( s )  in (2), which is independent of u. We first study the regularity of (2).
The following Grönwall Lemma is used in this paper.
Lemma 1
(Grönwall Inequality ([1], Lemma 4.1)). Let $z: \mathbb{R}_{+} \to \mathbb{R}_{+}$ be a function satisfying, for all $t \in [0, T]$, the inequality
$$z(t) \le a + K \int_{0}^{t} (t-s)^{\sigma} z(s)\, ds,$$
with some constants $a \ge 0$, $K > 0$ and $\sigma > -1$. Then there exists a constant $C = C(\sigma, K, T)$ such that $z(t) \le aC$ for all $t \in [0, T]$.
Lemma 2.
Let $u(t)$ be the solution of (2) with $g(t, u(s)) = g_{1}(s)$. Then, for $0 \le t_{1} < t_{2} \le T$, there exists a constant $C = C(T)$ such that
$$\mathbb{E}|u(t_{2}) - u(t_{1})|^{2} \le C (t_{2} - t_{1})^{2\alpha}.$$
Proof. 
Note that
$$u(t_{2}) - u(t_{1}) = \int_{0}^{t_{2}} (t_{2}-\zeta)^{\alpha-1} f(\zeta, u(\zeta))\, d\zeta - \int_{0}^{t_{1}} (t_{1}-\zeta)^{\alpha-1} f(\zeta, u(\zeta))\, d\zeta + \int_{0}^{t_{2}}\!\int_{0}^{\zeta} (t_{2}-\zeta)^{\alpha-1} g_{1}(s)\, dW(s)\, d\zeta - \int_{0}^{t_{1}}\!\int_{0}^{\zeta} (t_{1}-\zeta)^{\alpha-1} g_{1}(s)\, dW(s)\, d\zeta =: I + II.$$
For I, we have
$$I = \int_{t_{1}}^{t_{2}} (t_{2}-\zeta)^{\alpha-1} f(\zeta, u(\zeta))\, d\zeta - \int_{0}^{t_{1}} \big[(t_{1}-\zeta)^{\alpha-1} - (t_{2}-\zeta)^{\alpha-1}\big] f(\zeta, u(\zeta))\, d\zeta =: I_{1} + I_{2}.$$
For $I_{2}$, we obtain, by the Cauchy–Schwarz inequality,
$$\mathbb{E}|I_{2}|^{2} = \mathbb{E}\Big|\int_{0}^{t_{1}} \big[(t_{1}-\zeta)^{\alpha-1} - (t_{2}-\zeta)^{\alpha-1}\big] f(\zeta, u(\zeta))\, d\zeta\Big|^{2} \le \mathbb{E}\bigg[\int_{0}^{t_{1}} \big[(t_{1}-\zeta)^{\alpha-1} - (t_{2}-\zeta)^{\alpha-1}\big]\, d\zeta \cdot \int_{0}^{t_{1}} \big[(t_{1}-\zeta)^{\alpha-1} - (t_{2}-\zeta)^{\alpha-1}\big] f^{2}(\zeta, u(\zeta))\, d\zeta\bigg] \le C \bigg(\int_{0}^{t_{1}} \big[(t_{1}-\zeta)^{\alpha-1} - (t_{2}-\zeta)^{\alpha-1}\big]\, d\zeta\bigg)^{2} \cdot \max_{0 \le \zeta \le T} \mathbb{E} f^{2}(\zeta, u(\zeta)),$$
which implies that
$$\mathbb{E}|I_{2}|^{2} \le C (t_{2}-t_{1})^{2\alpha} \max_{0 \le \zeta \le T} \mathbb{E} f^{2}(\zeta, u(\zeta)) \le C \big(1 + \mathbb{E}|u(0)|^{2}\big) (t_{2}-t_{1})^{2\alpha}.$$
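For completeness, we spell out the elementary integral bound underlying this step (a short computation added here; it is not displayed in the original text):
$$\int_{0}^{t_{1}} \big[(t_{1}-\zeta)^{\alpha-1} - (t_{2}-\zeta)^{\alpha-1}\big]\, d\zeta = \frac{1}{\alpha}\big[t_{1}^{\alpha} - t_{2}^{\alpha} + (t_{2}-t_{1})^{\alpha}\big] \le \frac{1}{\alpha}(t_{2}-t_{1})^{\alpha},$$
since $t_{1}^{\alpha} \le t_{2}^{\alpha}$; squaring this bound produces the factor $(t_{2}-t_{1})^{2\alpha}$, with $\alpha^{-2}$ absorbed into the generic constant C.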
For $I_{1}$, we obtain
$$\mathbb{E}|I_{1}|^{2} = \mathbb{E}\Big|\int_{t_{1}}^{t_{2}} (t_{2}-\zeta)^{\alpha-1} f(\zeta, u(\zeta))\, d\zeta\Big|^{2} \le \mathbb{E}\bigg[\int_{t_{1}}^{t_{2}} (t_{2}-\zeta)^{\alpha-1} f^{2}(\zeta, u(\zeta))\, d\zeta \cdot \int_{t_{1}}^{t_{2}} (t_{2}-\zeta)^{\alpha-1}\, d\zeta\bigg] \le C \bigg(\int_{t_{1}}^{t_{2}} (t_{2}-\zeta)^{\alpha-1}\, d\zeta\bigg)^{2} \max_{0 \le \zeta \le T} \mathbb{E} f^{2}(\zeta, u(\zeta)) \le C (t_{2}-t_{1})^{2\alpha} \big(1 + \mathbb{E}|u(0)|^{2}\big) \le C \Delta t^{2\alpha}.$$
Now we turn to  I I .
$$II = \int_{0}^{t_{2}}\!\int_{0}^{\zeta} (t_{2}-\zeta)^{\alpha-1} g_{1}(s)\, dW(s)\, d\zeta - \int_{0}^{t_{1}}\!\int_{0}^{\zeta} (t_{1}-\zeta)^{\alpha-1} g_{1}(s)\, dW(s)\, d\zeta = \bigg[\int_{0}^{t_{2}}\!\int_{0}^{\zeta} (t_{2}-\zeta)^{\alpha-1} g_{1}(s)\, dW(s)\, d\zeta - \int_{0}^{t_{1}}\!\int_{0}^{\zeta} (t_{2}-\zeta)^{\alpha-1} g_{1}(s)\, dW(s)\, d\zeta\bigg] + \bigg[\int_{0}^{t_{1}}\!\int_{0}^{\zeta} (t_{2}-\zeta)^{\alpha-1} g_{1}(s)\, dW(s)\, d\zeta - \int_{0}^{t_{1}}\!\int_{0}^{\zeta} (t_{1}-\zeta)^{\alpha-1} g_{1}(s)\, dW(s)\, d\zeta\bigg] =: II_{1} + II_{2}.$$
For $II_{1}$, we have
$$\mathbb{E}|II_{1}|^{2} = \mathbb{E}\Big|\int_{0}^{t_{2}}\!\int_{0}^{\zeta} (t_{2}-\zeta)^{\alpha-1} g_{1}(s)\, dW(s)\, d\zeta - \int_{0}^{t_{1}}\!\int_{0}^{\zeta} (t_{2}-\zeta)^{\alpha-1} g_{1}(s)\, dW(s)\, d\zeta\Big|^{2} = \mathbb{E}\Big|\int_{t_{1}}^{t_{2}}\!\int_{0}^{\zeta} (t_{2}-\zeta)^{\alpha-1} g_{1}(s)\, dW(s)\, d\zeta\Big|^{2}.$$
Splitting $(t_{2}-\zeta)^{\alpha-1}$ into two parts yields
$$\mathbb{E}|II_{1}|^{2} = \mathbb{E}\Big|\int_{t_{1}}^{t_{2}} (t_{2}-\zeta)^{\frac{\alpha-1}{2}} \int_{0}^{\zeta} (t_{2}-\zeta)^{\frac{\alpha-1}{2}} g_{1}(s)\, dW(s)\, d\zeta\Big|^{2}.$$
By applying the Cauchy–Schwarz inequality, we arrive at
$$\mathbb{E}|II_{1}|^{2} \le \mathbb{E}\bigg[\int_{t_{1}}^{t_{2}} (t_{2}-\zeta)^{\alpha-1}\, d\zeta \cdot \int_{t_{1}}^{t_{2}} \Big(\int_{0}^{\zeta} (t_{2}-\zeta)^{\frac{\alpha-1}{2}} g_{1}(s)\, dW(s)\Big)^{2} d\zeta\bigg] \le C \Delta t^{\alpha} \int_{t_{1}}^{t_{2}} \mathbb{E}\Big|\int_{0}^{\zeta} (t_{2}-\zeta)^{\frac{\alpha-1}{2}} g_{1}(s)\, dW(s)\Big|^{2} d\zeta.$$
Using the Itô isometry, we obtain
$$\mathbb{E}|II_{1}|^{2} \le C \Delta t^{\alpha} \int_{t_{1}}^{t_{2}}\!\int_{0}^{\zeta} (t_{2}-\zeta)^{\alpha-1} |g_{1}(s)|^{2}\, ds\, d\zeta.$$
Note that $|g_{1}(s)|^{2}$ is bounded, so we arrive at
$$\mathbb{E}|II_{1}|^{2} \le C \Delta t^{\alpha} \int_{t_{1}}^{t_{2}}\!\int_{0}^{\zeta} (t_{2}-\zeta)^{\alpha-1}\, ds\, d\zeta \le C \Delta t^{\alpha} \int_{t_{1}}^{t_{2}} (t_{2}-\zeta)^{\alpha-1}\, d\zeta \le C \Delta t^{2\alpha}.$$
For $II_{2}$, we obtain
$$\mathbb{E}|II_{2}|^{2} = \mathbb{E}\Big|\int_{0}^{t_{1}}\!\int_{0}^{\zeta} (t_{2}-\zeta)^{\alpha-1} g_{1}(s)\, dW(s)\, d\zeta - \int_{0}^{t_{1}}\!\int_{0}^{\zeta} (t_{1}-\zeta)^{\alpha-1} g_{1}(s)\, dW(s)\, d\zeta\Big|^{2} = \mathbb{E}\Big|\int_{0}^{t_{1}}\!\int_{0}^{\zeta} \big[(t_{2}-\zeta)^{\alpha-1} - (t_{1}-\zeta)^{\alpha-1}\big] g_{1}(s)\, dW(s)\, d\zeta\Big|^{2}.$$
Splitting $(t_{2}-\zeta)^{\alpha-1} - (t_{1}-\zeta)^{\alpha-1}$ into two parts yields
$$\mathbb{E}|II_{2}|^{2} = \mathbb{E}\Big|\int_{0}^{t_{1}} \big|(t_{2}-\zeta)^{\alpha-1} - (t_{1}-\zeta)^{\alpha-1}\big|^{\frac{1}{2}} \cdot \int_{0}^{\zeta} \big|(t_{2}-\zeta)^{\alpha-1} - (t_{1}-\zeta)^{\alpha-1}\big|^{\frac{1}{2}} g_{1}(s)\, dW(s)\, d\zeta\Big|^{2}.$$
By applying the Cauchy–Schwarz inequality, we have
$$\mathbb{E}|II_{2}|^{2} \le \int_{0}^{t_{1}} \big|(t_{2}-\zeta)^{\alpha-1} - (t_{1}-\zeta)^{\alpha-1}\big|\, d\zeta \cdot \int_{0}^{t_{1}} \big|(t_{2}-\zeta)^{\alpha-1} - (t_{1}-\zeta)^{\alpha-1}\big| \cdot \mathbb{E}\Big|\int_{0}^{\zeta} g_{1}(s)\, dW(s)\Big|^{2}\, d\zeta.$$
Using the Itô isometry, we obtain
$$\mathbb{E}|II_{2}|^{2} \le \int_{0}^{t_{1}} \big|(t_{2}-\zeta)^{\alpha-1} - (t_{1}-\zeta)^{\alpha-1}\big|\, d\zeta \cdot \int_{0}^{t_{1}} \big|(t_{2}-\zeta)^{\alpha-1} - (t_{1}-\zeta)^{\alpha-1}\big| \Big(\int_{0}^{\zeta} |g_{1}(s)|^{2}\, ds\Big)\, d\zeta.$$
Note that $g_{1}(s)$ is bounded, so we obtain
$$\mathbb{E}|II_{2}|^{2} \le C \bigg(\int_{0}^{t_{1}} \big|(t_{2}-\zeta)^{\alpha-1} - (t_{1}-\zeta)^{\alpha-1}\big|\, d\zeta\bigg)^{2} = C \bigg(\int_{0}^{t_{1}}\!\int_{t_{1}}^{t_{2}} (1-\alpha)(x-\zeta)^{\alpha-2}\, dx\, d\zeta\bigg)^{2}.$$
By interchanging the order of the double integrals, we arrive at
$$\mathbb{E}|II_{2}|^{2} \le C \bigg(\int_{t_{1}}^{t_{2}}\!\int_{0}^{t_{1}} (x-\zeta)^{\alpha-2}\, d\zeta\, dx\bigg)^{2} \le C \bigg(\int_{t_{1}}^{t_{2}} (x-t_{1})^{\alpha-1}\, dx\bigg)^{2} \le C \big((t_{2}-t_{1})^{\alpha}\big)^{2} = C (t_{2}-t_{1})^{2\alpha} = C \Delta t^{2\alpha}.$$
Hence, we obtain
$$\mathbb{E}|u(t_{2}) - u(t_{1})|^{2} \le C (t_{2}-t_{1})^{2\alpha},$$
which completes the proof of Lemma 2. □
We now introduce the following two theorems obtained in [1] for the additive noise case. Theorem 1 considers the stability of the solution of the regularized problem (3), and Theorem 2 considers the error estimates.
Theorem 1
([1], Theorem 4.2). Let $\tilde{u}(t)$ be the solution of (3). Then, we have
$$\mathbb{E} \int_{0}^{T} |\tilde{u}(t)|^{2}\, dt \le C \big(1 + \mathbb{E}|\tilde{u}(0)|^{2}\big).$$
Theorem 2
([1], Theorem 4.3). Let $u(t)$ and $\tilde{u}(t)$ be the solutions of (2) and (3), respectively. Then, we have the following inequality:
$$\mathbb{E} \int_{0}^{1} |u(t) - \tilde{u}(t)|^{2}\, dt \le C (\Delta t)^{2}.$$

3. The Multiplicative Noise Case

In this section, we shall consider the approximation of (2) for the multiplicative noise case. We first consider the stability of the solution for the multiplicative noise case.
Theorem 3.
Let $\tilde{u}(t)$ be the solution of (3). Then, we have
$$\mathbb{E} \int_{0}^{T} |\tilde{u}(t)|^{2}\, dt \le C \big(1 + \mathbb{E}|\tilde{u}(0)|^{2}\big).$$
Proof. 
The proof of Theorem 3 is similar to the proof of Theorem 1, which can be found in [1]. To keep the paper concise, we omit the proof here. □
Let us first consider the regularity of the solution of (2) when  g ( t , u ( s ) )  is independent of t, i.e.,   g ( t , u ( s ) ) = g 1 ( s , u ( s ) )  for some function  g 1 .
Lemma 3.
Let $u(t)$ be the solution of (2) with $g(t, u(s)) = g_{1}(s, u(s))$. Then, for $0 \le t_{1} < t_{2} \le T$, we have
$$\mathbb{E}|u(t_{2}) - u(t_{1})|^{2} \le C (t_{2} - t_{1})^{2\alpha}.$$
Proof. 
Note that
$$u(t_{2}) - u(t_{1}) = \int_{0}^{t_{2}} (t_{2}-\zeta)^{\alpha-1} f(\zeta, u(\zeta))\, d\zeta - \int_{0}^{t_{1}} (t_{1}-\zeta)^{\alpha-1} f(\zeta, u(\zeta))\, d\zeta + \int_{0}^{t_{2}}\!\int_{0}^{\zeta} (t_{2}-\zeta)^{\alpha-1} g_{1}(s, u(s))\, dW(s)\, d\zeta - \int_{0}^{t_{1}}\!\int_{0}^{\zeta} (t_{1}-\zeta)^{\alpha-1} g_{1}(s, u(s))\, dW(s)\, d\zeta =: I + II.$$
For I, we may establish the same estimate as in the proof of Lemma 2, using the same argument. We obtain
$$\mathbb{E}|I|^{2} \le C (t_{2}-t_{1})^{2\alpha}.$$
Now we turn to  I I .
$$II = \int_{0}^{t_{2}}\!\int_{0}^{\zeta} (t_{2}-\zeta)^{\alpha-1} g_{1}(s, u(s))\, dW(s)\, d\zeta - \int_{0}^{t_{1}}\!\int_{0}^{\zeta} (t_{1}-\zeta)^{\alpha-1} g_{1}(s, u(s))\, dW(s)\, d\zeta = \bigg[\int_{0}^{t_{2}}\!\int_{0}^{\zeta} (t_{2}-\zeta)^{\alpha-1} g_{1}(s, u(s))\, dW(s)\, d\zeta - \int_{0}^{t_{1}}\!\int_{0}^{\zeta} (t_{2}-\zeta)^{\alpha-1} g_{1}(s, u(s))\, dW(s)\, d\zeta\bigg] + \bigg[\int_{0}^{t_{1}}\!\int_{0}^{\zeta} (t_{2}-\zeta)^{\alpha-1} g_{1}(s, u(s))\, dW(s)\, d\zeta - \int_{0}^{t_{1}}\!\int_{0}^{\zeta} (t_{1}-\zeta)^{\alpha-1} g_{1}(s, u(s))\, dW(s)\, d\zeta\bigg] =: II_{1} + II_{2}.$$
For $II_{1}$, we have
$$\mathbb{E}|II_{1}|^{2} = \mathbb{E}\Big|\int_{0}^{t_{2}}\!\int_{0}^{\zeta} (t_{2}-\zeta)^{\alpha-1} g_{1}(s, u(s))\, dW(s)\, d\zeta - \int_{0}^{t_{1}}\!\int_{0}^{\zeta} (t_{2}-\zeta)^{\alpha-1} g_{1}(s, u(s))\, dW(s)\, d\zeta\Big|^{2} = \mathbb{E}\Big|\int_{t_{1}}^{t_{2}}\!\int_{0}^{\zeta} (t_{2}-\zeta)^{\alpha-1} g_{1}(s, u(s))\, dW(s)\, d\zeta\Big|^{2}.$$
Splitting $(t_{2}-\zeta)^{\alpha-1}$ into two parts yields
$$\mathbb{E}|II_{1}|^{2} = \mathbb{E}\Big|\int_{t_{1}}^{t_{2}} (t_{2}-\zeta)^{\frac{\alpha-1}{2}} \int_{0}^{\zeta} (t_{2}-\zeta)^{\frac{\alpha-1}{2}} g_{1}(s, u(s))\, dW(s)\, d\zeta\Big|^{2}.$$
By applying the Cauchy–Schwarz inequality, we arrive at
$$\mathbb{E}|II_{1}|^{2} \le \mathbb{E}\bigg[\int_{t_{1}}^{t_{2}} (t_{2}-\zeta)^{\alpha-1}\, d\zeta \cdot \int_{t_{1}}^{t_{2}} \Big(\int_{0}^{\zeta} (t_{2}-\zeta)^{\frac{\alpha-1}{2}} g_{1}(s, u(s))\, dW(s)\Big)^{2} d\zeta\bigg] \le C \Delta t^{\alpha} \int_{t_{1}}^{t_{2}} \mathbb{E}\Big|\int_{0}^{\zeta} (t_{2}-\zeta)^{\frac{\alpha-1}{2}} g_{1}(s, u(s))\, dW(s)\Big|^{2} d\zeta.$$
Using the Itô isometry, we obtain
$$\mathbb{E}|II_{1}|^{2} \le C \Delta t^{\alpha} \int_{t_{1}}^{t_{2}}\!\int_{0}^{\zeta} (t_{2}-\zeta)^{\alpha-1}\, \mathbb{E}|g_{1}(s, u(s))|^{2}\, ds\, d\zeta.$$
Note that $\mathbb{E}|g_{1}(s, u(s))|^{2}$ is bounded, by the linear growth condition and the boundedness of $\mathbb{E}|u(s)|^{2}$, so we have
$$\mathbb{E}|II_{1}|^{2} \le C \Delta t^{\alpha} \int_{t_{1}}^{t_{2}}\!\int_{0}^{\zeta} (t_{2}-\zeta)^{\alpha-1}\, ds\, d\zeta \le C \Delta t^{\alpha} \int_{t_{1}}^{t_{2}} (t_{2}-\zeta)^{\alpha-1}\, d\zeta \le C \Delta t^{2\alpha}.$$
For $II_{2}$, we obtain
$$\mathbb{E}|II_{2}|^{2} = \mathbb{E}\Big|\int_{0}^{t_{1}}\!\int_{0}^{\zeta} (t_{2}-\zeta)^{\alpha-1} g_{1}(s, u(s))\, dW(s)\, d\zeta - \int_{0}^{t_{1}}\!\int_{0}^{\zeta} (t_{1}-\zeta)^{\alpha-1} g_{1}(s, u(s))\, dW(s)\, d\zeta\Big|^{2} = \mathbb{E}\Big|\int_{0}^{t_{1}}\!\int_{0}^{\zeta} \big[(t_{2}-\zeta)^{\alpha-1} - (t_{1}-\zeta)^{\alpha-1}\big] g_{1}(s, u(s))\, dW(s)\, d\zeta\Big|^{2}.$$
Splitting $(t_{2}-\zeta)^{\alpha-1} - (t_{1}-\zeta)^{\alpha-1}$ into two parts yields
$$\mathbb{E}|II_{2}|^{2} = \mathbb{E}\Big|\int_{0}^{t_{1}} \big|(t_{2}-\zeta)^{\alpha-1} - (t_{1}-\zeta)^{\alpha-1}\big|^{\frac{1}{2}} \cdot \int_{0}^{\zeta} \big|(t_{2}-\zeta)^{\alpha-1} - (t_{1}-\zeta)^{\alpha-1}\big|^{\frac{1}{2}} g_{1}(s, u(s))\, dW(s)\, d\zeta\Big|^{2}.$$
By applying the Cauchy–Schwarz inequality, it follows that
$$\mathbb{E}|II_{2}|^{2} \le \int_{0}^{t_{1}} \big|(t_{2}-\zeta)^{\alpha-1} - (t_{1}-\zeta)^{\alpha-1}\big|\, d\zeta \cdot \int_{0}^{t_{1}} \big|(t_{2}-\zeta)^{\alpha-1} - (t_{1}-\zeta)^{\alpha-1}\big| \cdot \mathbb{E}\Big|\int_{0}^{\zeta} g_{1}(s, u(s))\, dW(s)\Big|^{2}\, d\zeta.$$
Using the Itô isometry, we obtain
$$\mathbb{E}|II_{2}|^{2} \le \int_{0}^{t_{1}} \big|(t_{2}-\zeta)^{\alpha-1} - (t_{1}-\zeta)^{\alpha-1}\big|\, d\zeta \cdot \int_{0}^{t_{1}} \big|(t_{2}-\zeta)^{\alpha-1} - (t_{1}-\zeta)^{\alpha-1}\big| \Big(\int_{0}^{\zeta} \mathbb{E}|g_{1}(s, u(s))|^{2}\, ds\Big)\, d\zeta.$$
Note that $\mathbb{E}|g_{1}(s, u(s))|^{2}$ is bounded; hence, we have
$$\mathbb{E}|II_{2}|^{2} \le C \bigg(\int_{0}^{t_{1}} \big|(t_{2}-\zeta)^{\alpha-1} - (t_{1}-\zeta)^{\alpha-1}\big|\, d\zeta\bigg)^{2} = C \bigg(\int_{0}^{t_{1}}\!\int_{t_{1}}^{t_{2}} (1-\alpha)(x-\zeta)^{\alpha-2}\, dx\, d\zeta\bigg)^{2}.$$
By interchanging the order of the double integrals, we arrive at
$$\mathbb{E}|II_{2}|^{2} \le C \bigg(\int_{t_{1}}^{t_{2}}\!\int_{0}^{t_{1}} (x-\zeta)^{\alpha-2}\, d\zeta\, dx\bigg)^{2} \le C \bigg(\int_{t_{1}}^{t_{2}} (x-t_{1})^{\alpha-1}\, dx\bigg)^{2} \le C (t_{2}-t_{1})^{2\alpha} = C \Delta t^{2\alpha}.$$
Hence, we obtain
$$\mathbb{E}|u(t_{2}) - u(t_{1})|^{2} \le C (t_{2}-t_{1})^{2\alpha},$$
which completes the proof of Lemma 3. □
Remark 1.
The difference between Lemma 2 and Lemma 3 is as follows. Lemma 2 considers (2) driven by additive noise, with $g(t, u(s)) = g_{1}(s)$, whereas Lemma 3 considers (2) driven by multiplicative noise, with $g(t, u(s)) = g_{1}(s, u(s))$. Both cases yield the same regularity estimate $\mathbb{E}|u(t_{2}) - u(t_{1})|^{2} \le C (t_{2}-t_{1})^{2\alpha}$, $\alpha \in (0,1)$.
Now, we introduce the main theorem in this section.
Theorem 4.
Let $u(t)$ and $\tilde{u}(t)$ be the solutions of (2) and (3), respectively. Then, we have
$$\mathbb{E} \int_{0}^{1} |u(t) - \tilde{u}(t)|^{2}\, dt \le C \Delta t^{2\alpha}.$$
Proof. 
Note that the solution $\tilde{u}(t)$ of the regularized stochastic fractional differential equation (3) takes the following form:
$$\tilde{u}(t) = u_{0} + \frac{1}{\Gamma(\alpha)} \int_{0}^{t} (t-\zeta)^{\alpha-1} f(\zeta, \tilde{u}(\zeta))\, d\zeta + \frac{1}{\Gamma(\alpha)} \int_{0}^{t}\!\int_{0}^{\zeta} (t-\zeta)^{\alpha-1} g(\zeta, \tilde{u}(s))\, \frac{d\hat{W}(s)}{ds}\, ds\, d\zeta.$$
Denote
$$e(t) = u(t) - \tilde{u}(t).$$
Then,  e ( t )  satisfies the following equation:
$$e(t) = \frac{1}{\Gamma(\alpha)} \int_{0}^{t} (t-\zeta)^{\alpha-1} \big[f(\zeta, u(\zeta)) - f(\zeta, \tilde{u}(\zeta))\big]\, d\zeta + \frac{1}{\Gamma(\alpha)} \int_{0}^{t} \bigg[\int_{0}^{\zeta} (t-\zeta)^{\alpha-1} g(\zeta, u(s))\, dW(s) - \int_{0}^{\zeta} (t-\zeta)^{\alpha-1} g(\zeta, u(s))\, \frac{d\hat{W}(s)}{ds}\, ds\bigg]\, d\zeta + \frac{1}{\Gamma(\alpha)} \int_{0}^{t}\!\int_{0}^{\zeta} (t-\zeta)^{\alpha-1} \big[g(\zeta, u(s)) - g(\zeta, \tilde{u}(s))\big]\, \frac{d\hat{W}(s)}{ds}\, ds\, d\zeta,$$
which implies that
$$\mathbb{E} \int_{0}^{T} |e(t)|^{2}\, dt \le C\, \mathbb{E} \int_{0}^{T} \bigg(\int_{0}^{t} (t-\zeta)^{\alpha-1} \big[f(\zeta, u(\zeta)) - f(\zeta, \tilde{u}(\zeta))\big]\, d\zeta\bigg)^{2} dt + C\, \mathbb{E} \int_{0}^{T} \bigg(\int_{0}^{t}\!\int_{0}^{\zeta} (t-\zeta)^{\alpha-1} g(\zeta, u(s))\, dW(s)\, d\zeta - \int_{0}^{t}\!\int_{0}^{\zeta} (t-\zeta)^{\alpha-1} g(\zeta, u(s))\, \frac{d\hat{W}(s)}{ds}\, ds\, d\zeta\bigg)^{2} dt + C\, \mathbb{E} \int_{0}^{T} \bigg(\frac{1}{\Gamma(\alpha)} \int_{0}^{t}\!\int_{0}^{\zeta} (t-\zeta)^{\alpha-1} \big[g(\zeta, u(s)) - g(\zeta, \tilde{u}(s))\big]\, \frac{d\hat{W}(s)}{ds}\, ds\, d\zeta\bigg)^{2} dt =: I + II + III.$$
For I, using a variable change  ν = t ζ , we have
$$I = \mathbb{E} \int_{0}^{T} \bigg(\int_{0}^{t} (t-\zeta)^{\alpha-1} \big[f(\zeta, u(\zeta)) - f(\zeta, \tilde{u}(\zeta))\big]\, d\zeta\bigg)^{2} dt = \mathbb{E} \int_{0}^{T} \bigg(\int_{0}^{t} \nu^{\alpha-1} \big[f(t-\nu, u(t-\nu)) - f(t-\nu, \tilde{u}(t-\nu))\big]\, d\nu\bigg)^{2} dt = \mathbb{E} \int_{0}^{T} \bigg(\int_{0}^{T} \chi_{[0,t]}(\nu)\, \nu^{\alpha-1} \big[f(t-\nu, u(t-\nu)) - f(t-\nu, \tilde{u}(t-\nu))\big]\, d\nu\bigg)^{2} dt,$$
where $\chi_{[0,t]}(\nu)$ is defined as
$$\chi_{[0,t]}(\nu) = \begin{cases} 1, & 0 \le \nu \le t, \\ 0, & t < \nu \le T. \end{cases}$$
Applying Minkowski’s inequality for the double integrals of Lemma A1, we arrive at
$$I = \mathbb{E} \int_{0}^{T} \bigg(\int_{0}^{T} \chi_{[0,t]}(\nu)\, \nu^{\alpha-1} \big[f(t-\nu, u(t-\nu)) - f(t-\nu, \tilde{u}(t-\nu))\big]\, d\nu\bigg)^{2} dt \le \bigg(\int_{0}^{T} \Big(\mathbb{E} \int_{0}^{T} \big|\chi_{[0,t]}(\nu)\, \nu^{\alpha-1} \big[f(t-\nu, u(t-\nu)) - f(t-\nu, \tilde{u}(t-\nu))\big]\big|^{2}\, dt\Big)^{\frac{1}{2}} d\nu\bigg)^{2} = \bigg(\int_{0}^{T} \nu^{\alpha-1} \Big(\mathbb{E} \int_{0}^{T} \chi_{[0,t]}(\nu)\, \big|f(t-\nu, u(t-\nu)) - f(t-\nu, \tilde{u}(t-\nu))\big|^{2}\, dt\Big)^{\frac{1}{2}} d\nu\bigg)^{2},$$
which implies that
$$I \le C \bigg(\int_{0}^{T} \nu^{\alpha-1} \Big(\mathbb{E} \int_{\nu}^{T} \big|f(t-\nu, u(t-\nu)) - f(t-\nu, \tilde{u}(t-\nu))\big|^{2}\, dt\Big)^{\frac{1}{2}} d\nu\bigg)^{2}.$$
Using the Lipschitz condition for f, we obtain
$$I \le C \bigg(\int_{0}^{T} \nu^{\alpha-1} \Big(\mathbb{E} \int_{\nu}^{T} e^{2}(t-\nu)\, dt\Big)^{\frac{1}{2}} d\nu\bigg)^{2} \le C \bigg(\int_{0}^{T} \nu^{\alpha-1} \Big(\mathbb{E} \int_{0}^{T-\nu} e^{2}(\zeta)\, d\zeta\Big)^{\frac{1}{2}} d\nu\bigg)^{2}.$$
By applying the change of variable $\nu = T - \tilde{\nu}$, we arrive at
$$I \le C \bigg(\int_{0}^{T} (T-\tilde{\nu})^{\alpha-1} \Big(\mathbb{E} \int_{0}^{\tilde{\nu}} e^{2}(\zeta)\, d\zeta\Big)^{\frac{1}{2}} d\tilde{\nu}\bigg)^{2} = C \bigg(\int_{0}^{T} (T-\nu)^{\alpha-1} \Big(\mathbb{E} \int_{0}^{\nu} e^{2}(\zeta)\, d\zeta\Big)^{\frac{1}{2}} d\nu\bigg)^{2}.$$
Now, we turn to  I I . We have
$$II = C\, \mathbb{E} \int_{0}^{T} \bigg(\int_{0}^{t}\!\int_{0}^{\zeta} \frac{g(\zeta, u(s))}{(t-\zeta)^{1-\alpha}}\, dW(s)\, d\zeta - \int_{0}^{t}\!\int_{0}^{\zeta} \frac{g(\zeta, u(s))}{(t-\zeta)^{1-\alpha}}\, \frac{d\hat{W}(s)}{ds}\, ds\, d\zeta\bigg)^{2} dt.$$
Applying the change of variable $\nu = t - \zeta$, we have
$$II = C\, \mathbb{E} \int_{0}^{T} \bigg(\int_{0}^{t}\!\int_{0}^{t-\nu} \frac{g(t-\nu, u(s))}{\nu^{1-\alpha}}\, dW(s)\, d\nu - \int_{0}^{t}\!\int_{0}^{t-\nu} \frac{g(t-\nu, u(s))}{\nu^{1-\alpha}}\, \frac{d\hat{W}(s)}{ds}\, ds\, d\nu\bigg)^{2} dt,$$
which implies that
$$II \le C\, \mathbb{E} \int_{0}^{T} \bigg(\int_{0}^{T} \chi_{[0,t]}(\nu) \bigg[\int_{0}^{t-\nu} \frac{g(t-\nu, u(s))}{\nu^{1-\alpha}}\, dW(s) - \int_{0}^{t-\nu} \frac{g(t-\nu, u(s))}{\nu^{1-\alpha}}\, \frac{d\hat{W}(s)}{ds}\, ds\bigg]\, d\nu\bigg)^{2} dt.$$
Applying Minkowski's inequality for double integrals (Lemma A1), we arrive at
$$II \le C \bigg(\int_{0}^{T} \Big(\mathbb{E} \int_{0}^{T} \Big|\chi_{[0,t]}(\nu) \int_{0}^{t-\nu} \frac{g(t-\nu, u(s))}{\nu^{1-\alpha}} \Big(dW(s) - \frac{d\hat{W}(s)}{ds}\, ds\Big)\Big|^{2}\, dt\Big)^{\frac{1}{2}} d\nu\bigg)^{2} = C \bigg(\int_{0}^{T} \Big(\mathbb{E} \int_{\nu}^{T} \Big|\int_{0}^{t-\nu} \frac{g(t-\nu, u(s))}{\nu^{1-\alpha}} \Big(dW(s) - \frac{d\hat{W}(s)}{ds}\, ds\Big)\Big|^{2}\, dt\Big)^{\frac{1}{2}} d\nu\bigg)^{2} = C \bigg(\int_{0}^{T} \nu^{\alpha-1} \Big(\mathbb{E} \int_{\nu}^{T} \Big|\int_{0}^{t-\nu} g(t-\nu, u(s)) \Big(dW(s) - \frac{d\hat{W}(s)}{ds}\, ds\Big)\Big|^{2}\, dt\Big)^{\frac{1}{2}} d\nu\bigg)^{2}.$$
Note that, for $0 = t_{0} < t_{1} < \cdots < t_{m} < t_{m+1} = t - \nu$,
$$\mathbb{E}\Big|\int_{0}^{t-\nu} g(t-\nu, u(s)) \Big(dW(s) - \frac{d\hat{W}(s)}{ds}\, ds\Big)\Big|^{2} = \mathbb{E}\Big|\sum_{i=1}^{m+1} \bigg[\int_{t_{i-1}}^{t_{i}} g(t_{m+1}, u(s))\, dW(s) - \int_{t_{i-1}}^{t_{i}} g(t_{m+1}, u(\tau))\, \frac{W(t_{i}) - W(t_{i-1})}{\Delta t}\, d\tau\bigg]\Big|^{2} = \mathbb{E}\Big|\sum_{i=1}^{m+1} \bigg[\int_{t_{i-1}}^{t_{i}} g(t_{m+1}, u(s))\, dW(s) - \int_{t_{i-1}}^{t_{i}} \frac{1}{\Delta t} \int_{t_{i-1}}^{t_{i}} g(t_{m+1}, u(\tau))\, d\tau\, dW(s)\bigg]\Big|^{2} = \mathbb{E}\Big|\sum_{i=1}^{m+1} \int_{t_{i-1}}^{t_{i}} \frac{1}{\Delta t} \int_{t_{i-1}}^{t_{i}} \big[g(t_{m+1}, u(s)) - g(t_{m+1}, u(\tau))\big]\, d\tau\, dW(s)\Big|^{2} = \mathbb{E} \sum_{i=1}^{m+1} \int_{t_{i-1}}^{t_{i}} \bigg(\frac{1}{\Delta t} \int_{t_{i-1}}^{t_{i}} \big[g(t_{m+1}, u(s)) - g(t_{m+1}, u(\tau))\big]\, d\tau\bigg)^{2} ds,$$
which implies that
$$\mathbb{E}\Big|\int_{0}^{t-\nu} g(t-\nu, u(s)) \Big(dW(s) - \frac{d\hat{W}(s)}{ds}\, ds\Big)\Big|^{2} \le \mathbb{E} \sum_{i=1}^{m+1} \int_{t_{i-1}}^{t_{i}} \frac{1}{\Delta t^{2}} \bigg(\int_{t_{i-1}}^{t_{i}} 1^{2}\, d\tau\bigg) \bigg(\int_{t_{i-1}}^{t_{i}} \big[g(t_{m+1}, u(s)) - g(t_{m+1}, u(\tau))\big]^{2}\, d\tau\bigg)\, ds = \mathbb{E} \sum_{i=1}^{m+1} \int_{t_{i-1}}^{t_{i}} \frac{1}{\Delta t} \int_{t_{i-1}}^{t_{i}} \big[g(t_{m+1}, u(s)) - g(t_{m+1}, u(\tau))\big]^{2}\, d\tau\, ds.$$
Applying the Lipschitz condition for g and Lemma 3, we have
$$\mathbb{E}\big|g(t_{m+1}, u(s)) - g(t_{m+1}, u(\tau))\big|^{2} \le C\, \mathbb{E}|u(s) - u(\tau)|^{2} \le C |s - \tau|^{2\alpha},$$
which implies that
$$\mathbb{E}\Big|\int_{0}^{t-\nu} g(t-\nu, u(s)) \Big(dW(s) - \frac{d\hat{W}(s)}{ds}\, ds\Big)\Big|^{2} \le \mathbb{E} \sum_{i=1}^{m+1} \int_{t_{i-1}}^{t_{i}} \frac{1}{\Delta t} \int_{t_{i-1}}^{t_{i}} \big[g(t_{m+1}, u(s)) - g(t_{m+1}, u(\tau))\big]^{2}\, d\tau\, ds \le C \Delta t^{2\alpha}.$$
Thus, we obtain
$$II \le \bigg(\int_{0}^{T} \nu^{\alpha-1} \Big(\int_{0}^{T} C \Delta t^{2\alpha}\, dt\Big)^{\frac{1}{2}} d\nu\bigg)^{2} \le C \Delta t^{2\alpha} \bigg(\int_{0}^{T} \nu^{\alpha-1}\, d\nu\bigg)^{2} \le C \Delta t^{2\alpha}.$$
For $III$, using the change of variable $\nu = t - \zeta$, it follows that
$$III = C\, \mathbb{E} \int_{0}^{T} \bigg(\int_{0}^{t}\!\int_{0}^{\zeta} (t-\zeta)^{\alpha-1} \big[g(\zeta, u(s)) - g(\zeta, \tilde{u}(s))\big]\, \frac{d\hat{W}(s)}{ds}\, ds\, d\zeta\bigg)^{2} dt = C\, \mathbb{E} \int_{0}^{T} \bigg(\int_{0}^{t}\!\int_{0}^{t-\nu} \nu^{\alpha-1} \big[g(t-\nu, u(s)) - g(t-\nu, \tilde{u}(s))\big]\, \frac{d\hat{W}(s)}{ds}\, ds\, d\nu\bigg)^{2} dt = C\, \mathbb{E} \int_{0}^{T} \bigg(\int_{0}^{T} \chi_{[0,t]}(\nu) \int_{0}^{t-\nu} \nu^{\alpha-1} \big[g(t-\nu, u(s)) - g(t-\nu, \tilde{u}(s))\big]\, \frac{d\hat{W}(s)}{ds}\, ds\, d\nu\bigg)^{2} dt.$$
Applying Minkowski's inequality for double integrals (Lemma A1), we have
$$III \le C \bigg(\int_{0}^{T} \Big(\mathbb{E} \int_{0}^{T} \Big|\chi_{[0,t]}(\nu) \int_{0}^{t-\nu} \nu^{\alpha-1} \big[g(t-\nu, u(s)) - g(t-\nu, \tilde{u}(s))\big]\, \frac{d\hat{W}(s)}{ds}\, ds\Big|^{2}\, dt\Big)^{\frac{1}{2}} d\nu\bigg)^{2} = C \bigg(\int_{0}^{T} \nu^{\alpha-1} \Big(\mathbb{E} \int_{\nu}^{T} \Big|\int_{0}^{t-\nu} \big[g(t-\nu, u(s)) - g(t-\nu, \tilde{u}(s))\big]\, \frac{d\hat{W}(s)}{ds}\, ds\Big|^{2}\, dt\Big)^{\frac{1}{2}} d\nu\bigg)^{2}.$$
Following the same argument as in the estimate of the term $II$, we obtain
$$\mathbb{E}\Big|\int_{0}^{t-\nu} \big[g(t-\nu, u(s)) - g(t-\nu, \tilde{u}(s))\big]\, \frac{d\hat{W}(s)}{ds}\, ds\Big|^{2} \le C\, \mathbb{E} \int_{0}^{t-\nu} |e(s)|^{2}\, ds.$$
Therefore, using the change of variable $\zeta = t - \nu$,
$$III \le C \bigg(\int_{0}^{T} \nu^{\alpha-1} \Big(\int_{\nu}^{T}\!\int_{0}^{t-\nu} \mathbb{E}|e(s)|^{2}\, ds\, dt\Big)^{\frac{1}{2}} d\nu\bigg)^{2} \le C \bigg(\int_{0}^{T} \nu^{\alpha-1} \Big(\int_{0}^{T-\nu}\!\int_{0}^{\zeta} \mathbb{E}|e(s)|^{2}\, ds\, d\zeta\Big)^{\frac{1}{2}} d\nu\bigg)^{2}.$$
Using the change of variable $\nu = T - \tilde{\nu}$, we arrive at
$$III \le C \bigg(\int_{0}^{T} (T-\tilde{\nu})^{\alpha-1} \Big(\int_{0}^{\tilde{\nu}}\!\int_{0}^{\zeta} \mathbb{E}|e(s)|^{2}\, ds\, d\zeta\Big)^{\frac{1}{2}} d\tilde{\nu}\bigg)^{2} = C \bigg(\int_{0}^{T} (T-\nu)^{\alpha-1} \Big(\int_{0}^{\nu}\!\int_{0}^{\zeta} \mathbb{E}|e(s)|^{2}\, ds\, d\zeta\Big)^{\frac{1}{2}} d\nu\bigg)^{2} \le C \bigg(\int_{0}^{T} (T-\nu)^{\alpha-1} \Big(\int_{0}^{\nu}\!\int_{0}^{\nu} \mathbb{E}|e(s)|^{2}\, ds\, d\zeta\Big)^{\frac{1}{2}} d\nu\bigg)^{2} \le C \bigg(\int_{0}^{T} (T-\nu)^{\alpha-1} \Big(\mathbb{E} \int_{0}^{\nu} e^{2}(s)\, ds\Big)^{\frac{1}{2}} d\nu\bigg)^{2}.$$
Hence, we obtain
$$\mathbb{E} \int_{0}^{T} e^{2}(t)\, dt \le C \bigg(\int_{0}^{T} (T-\nu)^{\alpha-1} \Big(\mathbb{E} \int_{0}^{\nu} e^{2}(\zeta)\, d\zeta\Big)^{\frac{1}{2}} d\nu\bigg)^{2} + C \Delta t^{2\alpha}.$$
Denote $e_{1}(\nu) = \big(\mathbb{E} \int_{0}^{\nu} e^{2}(\zeta)\, d\zeta\big)^{\frac{1}{2}}$. We therefore obtain
$$e_{1}(T) \le C \int_{0}^{T} (T-\nu)^{\alpha-1} e_{1}(\nu)\, d\nu + C \Delta t^{\alpha}.$$
By applying the Grönwall inequality of Lemma 1, we have
$$e_{1}(T) \le C \Delta t^{\alpha},$$
which implies that
$$\mathbb{E} \int_{0}^{1} |u(t) - \tilde{u}(t)|^{2}\, dt \le C \Delta t^{2\alpha}.$$
The proof of Theorem 4 is now complete. □
Remark 2.
The difference between Theorem 2 and Theorem 4 is as follows. Theorem 2 considers the convergence order for (2) driven by additive noise, whereas Theorem 4 considers the convergence order for (2) driven by multiplicative noise $g(t, u(s))$. The additive case yields a convergence order of $O(\Delta t)$, whereas the multiplicative case achieves a convergence order of $O(\Delta t^{\alpha})$, $\alpha \in (0,1)$.

4. Numerical Simulations

In this section, we shall consider numerical simulations for the following problem with different choices of f and g:
$${}_{0}^{C}D_{t}^{\alpha} u(t) = f(t, u(t)) + \int_{0}^{t} g(t, u(s))\, dW(s), \qquad (4)$$
$$u(0) = u_{0}. \qquad (5)$$
Let $0 = t_{0} < t_{1} < \cdots < t_{N} = T$ be a partition of $[0, T]$ and let $\Delta t$ be the step size. At $t = t_{n}$, we have
$${}_{0}^{C}D_{t}^{\alpha} u(t)\big|_{t = t_{n}} = f(t_{n}, u(t_{n})) + \int_{0}^{t_{n}} g(t_{n}, u(s))\, dW(s).$$
We shall approximate the Caputo fractional derivative ${}_{0}^{C}D_{t}^{\alpha} u(t)\big|_{t = t_{n}}$ with the L1 scheme [30]:
$${}_{0}^{C}D_{t}^{\alpha} u(t)\big|_{t = t_{n}} \approx \Delta t^{-\alpha} \sum_{j=0}^{n} w_{j,n}\, u(t_{n-j}),$$
where the weights $w_{j,n}$ are defined by
$$\Gamma(2-\alpha)\, w_{j,n} = \begin{cases} 1, & j = 0,\\ 2^{1-\alpha} - 2, & j = 1,\\ (j-1)^{1-\alpha} + (j+1)^{1-\alpha} - 2 j^{1-\alpha}, & j = 2, 3, \ldots, n-1,\\ (j-1)^{1-\alpha} + (\alpha - 1)\, j^{-\alpha} - j^{1-\alpha}, & j = n. \end{cases}$$
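As an illustration (our addition, not part of the original paper), these weights can be tabulated in Python as follows; the $j = n$ entry simply follows the formula stated above, which we have restored from the text and which should be checked against [30].

```python
import numpy as np
from math import gamma

def l1_weights(n, alpha):
    """Weights w_{j,n}, j = 0, ..., n, of the L1 approximation
    Delta_t^{-alpha} * sum_{j=0}^{n} w_{j,n} u(t_{n-j}) of the Caputo derivative."""
    c = 1.0 / gamma(2.0 - alpha)
    w = np.zeros(n + 1)
    w[0] = c
    for j in range(1, n):
        if j == 1:
            w[j] = c * (2.0 ** (1.0 - alpha) - 2.0)
        else:
            w[j] = c * ((j - 1) ** (1.0 - alpha) + (j + 1) ** (1.0 - alpha)
                        - 2.0 * j ** (1.0 - alpha))
    # j = n entry as stated in the text (restored from the source; verify against [30])
    w[n] = c * ((n - 1) ** (1.0 - alpha) + (alpha - 1.0) * n ** (-alpha)
                - n ** (1.0 - alpha))
    return w
```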
Further, we approximate the integral $\int_{0}^{t_{n}} g(t_{n}, u(s))\, dW(s)$ by the following rectangular integration formula:
$$\int_{0}^{t_{n}} g(t_{n}, u(s))\, dW(s) \approx \sum_{j=1}^{n} \int_{t_{j-1}}^{t_{j}} g(t_{n}, u(t_{j-1}))\, dW(s) = \sum_{j=1}^{n} \sqrt{\Delta t}\, g(t_{n}, u(t_{j-1}))\, \eta_{j},$$
where $\eta_{j} \sim N(0,1)$, so that $\sqrt{\Delta t}\, \eta_{j}$ has the same distribution as the Brownian increment $W(t_{j}) - W(t_{j-1})$. Here $N(0,1)$ denotes the standard normal distribution, sampled with the MATLAB function "randn".
Let $U_{n} \approx u(t_{n})$ denote the approximate solution. We obtain the following numerical method for $U_{n}$, $n = 1, 2, \ldots, N$, with $U_{0} = u_{0}$:
$$\Delta t^{-\alpha} \sum_{j=0}^{n} w_{j,n}\, U_{n-j} = f(t_{n}, U_{n}) + \sum_{j=1}^{n} \sqrt{\Delta t}\, g(t_{n}, U_{j-1})\, \eta_{j}. \qquad (6)$$
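To make the time stepping concrete, the following Python sketch implements scheme (6), reusing the `l1_weights` function from the previous sketch. It is our own illustration: the fixed-point iteration used to resolve the implicit dependence of $f(t_{n}, U_{n})$ on $U_{n}$, and all function names, are our assumptions rather than part of the paper.

```python
import numpy as np

def solve_sfde(f, g, u0, alpha, T=1.0, N=128, seed=0, fp_iters=50):
    """Time stepping for scheme (6):
    dt^{-alpha} sum_{j=0}^{n} w_{j,n} U_{n-j} = f(t_n, U_n) + sum_{j=1}^{n} sqrt(dt) g(t_n, U_{j-1}) eta_j.
    g must accept a numpy array as its second argument."""
    rng = np.random.default_rng(seed)
    dt = T / N
    t = np.linspace(0.0, T, N + 1)
    eta = rng.standard_normal(N)                 # eta_j ~ N(0,1), j = 1, ..., N
    U = np.zeros(N + 1)
    U[0] = u0
    for n in range(1, N + 1):
        w = l1_weights(n, alpha)
        # stochastic term: sum_{j=1}^{n} sqrt(dt) g(t_n, U_{j-1}) eta_j
        noise = np.sqrt(dt) * np.sum(g(t[n], U[:n]) * eta[:n])
        # known history part of the L1 sum (j = 1, ..., n)
        hist = dt ** (-alpha) * np.sum(w[1:n + 1] * U[n - 1::-1])
        # solve dt^{-alpha} w_0 U_n = f(t_n, U_n) + noise - hist by fixed-point iteration
        Un = U[n - 1]
        for _ in range(fp_iters):
            Un = (f(t[n], Un) + noise - hist) / (dt ** (-alpha) * w[0])
        U[n] = Un
    return t, U

# Illustrative call with g(t, u) = sin(u), as in (10); the f used here is only a placeholder.
t, U = solve_sfde(lambda t, u: -u + t, lambda t, u: np.sin(u), u0=0.0, alpha=0.4)
```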
In our numerical simulations, we chose $T = 1$ and the step sizes $h_{1} = \frac{1}{16}$, $h_{2} = \frac{1}{32}$, $h_{3} = \frac{1}{64}$, and $h_{4} = \frac{1}{128}$. Since there are no exact solutions for our problems, we use a reference solution calculated with a sufficiently small step size $h = \Delta t = 2^{-12}$.
Since we have not established the convergence order of the scheme defined in (6), we use the following method to compute the experimentally determined order of convergence (EOC) $p > 0$.
We calculate the error at $T = t_{N} = N h = 1$. Assume that we have the following error estimate, which depends on the step size $h = \Delta t$; that is, for some $p > 0$,
$$\mathrm{error}(h) = \| U_{N} - u(t_{N}) \| \le C h^{p}.$$
By choosing $t_{N_{i}} = N_{i} h_{i} = T = 1$ with the step sizes $h_{i} = \frac{1}{2^{i}}$ for $i = 4, 5, 6, 7$, we have
$$\mathrm{error}(h_{i}) = \Big(\mathbb{E} \| U_{N_{i}} - u(T) \|^{2}\Big)^{\frac{1}{2}} \le C h_{i}^{p},$$
which implies that the convergence order $p > 0$ satisfies, for $i = 4, 5, 6$,
$$\frac{\mathrm{error}(h_{i})}{\mathrm{error}(h_{i+1})} \approx \Big(\frac{h_{i}}{h_{i+1}}\Big)^{p},$$
or
$$p \approx \frac{\log_{2}\big(\mathrm{error}(h_{i}) / \mathrm{error}(h_{i+1})\big)}{\log_{2}\big(h_{i} / h_{i+1}\big)}.$$
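A small Python sketch of this EOC computation (our illustration; the function name and data handling are ours):

```python
import numpy as np

def eoc(errors, step_sizes):
    """Pairwise experimental orders of convergence
    p_i = log2(error(h_i)/error(h_{i+1})) / log2(h_i/h_{i+1}), and their average."""
    e = np.asarray(errors, dtype=float)
    h = np.asarray(step_sizes, dtype=float)
    p = np.log2(e[:-1] / e[1:]) / np.log2(h[:-1] / h[1:])
    return p, p.mean()

# Example with the alpha = 0.2 row of Table 1
p, avg = eoc([1.1869e-3, 5.8082e-4, 2.8542e-4, 1.3972e-4], [1/16, 1/32, 1/64, 1/128])
print(p, avg)   # approximately [1.031, 1.025, 1.031], average approximately 1.03
```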
We obtain three EOC values from the four step sizes $h_{i}$, $i = 4, 5, 6, 7$, and take their average, which is reported in the EOC column of Tables 1–5.
In Tables 1–5, we provide the approximation results obtained with the following choices of f and g, labeled (7)–(11):
$$f(t, u(t)) = u(t) + t + \frac{\Gamma(2)}{\Gamma(2-\alpha)}\, t^{1-\alpha}, \qquad g(t, u(t)) = 1, \qquad (7)$$
$$f(t, u(t)) = u(t) + t^{2} + \frac{2 t^{1.5}}{\Gamma(2.5)}\, t^{1-\alpha}, \qquad g(t, u) = t, \qquad (8)$$
$$f(t, u(t)) = u(t) + t^{2} + \frac{2 t^{1.5}}{\Gamma(2.5)}\, t^{1-\alpha}, \qquad g(t, u) = u, \qquad (9)$$
$$f(t, u(t)) = u(t) + t^{2} + \frac{2 t^{1.5}}{\Gamma(2.5)}\, t^{1-\alpha}, \qquad g(t, u) = \sin(u), \qquad (10)$$
$$f(t, u(t)) = u(t) + t^{2} + \frac{2 t^{1.5}}{\Gamma(2.5)}\, t^{1-\alpha}, \qquad g(t, u) = u^{3} - u. \qquad (11)$$
We note that, across all cases, the experimentally determined convergence orders are nearly $O(\Delta t)$ for the various $\alpha \in (0,1)$. These observed orders exceed the theoretical order $O(\Delta t^{\alpha})$, $\alpha \in (0,1)$, in the multiplicative noise cases. In future investigations, we will study the factors contributing to this superior performance of the experimentally determined convergence orders compared with their theoretical counterparts in the presence of multiplicative noise.
Table 1 presents the approximation results in the presence of additive noise with $g(t, u(t)) = 1$. Three experimental orders of convergence are computed using the step sizes $h_{i}$ for $i = 4, 5, 6, 7$. The experimentally determined order of convergence in the EOC column is obtained by averaging these values. Across the various fractional orders $\alpha$, an EOC of approximately 1 is consistently achieved.
Tables 2–5 examine the additional cases, and they exhibit a comparable average EOC of approximately 1. This observation suggests that the experimental orders of convergence for the additive and multiplicative noise scenarios are similar.
The left graph illustrates the experimental orders of convergence corresponding to Table 1, where  α = 0.4 , and various step sizes are considered. The EOC line represents the average of the EOC values, clearly indicating an EOC of approximately 1 for the additive noise case.
On the right, the graph displays the experimental orders of convergence for Table 5, with  α = 0.4  and different step sizes. The EOC line, representing the average of the EOC values, distinctly shows an EOC of approximately 1 for the multiplicative noise case.

5. Conclusions

This paper considers the numerical approximation of stochastic fractional differential equations driven by integrated multiplicative noise. The approach employs a piecewise constant function to approximate the noise, leading to a regularized stochastic fractional differential equation. We establish the regularity of the solution and analyze the convergence order of the proposed approximation scheme. For the numerical simulations, we employ the L1 scheme to approximate the Caputo fractional derivative. The results of our numerical simulations reveal convergence orders of nearly $O(\Delta t)$ for cases involving multiplicative noise. Surprisingly, these experimentally determined convergence orders outperform the expected theoretical order of $O(\Delta t^{\alpha})$, $\alpha \in (0,1)$. In future research, we will investigate the reasons behind this discrepancy and further explore the implications of our findings.

Author Contributions

Both authors contributed equally to this paper. J.H. conducted the theoretical analysis, wrote the original draft, and carried out the numerical simulations. Y.Y. introduced and provided guidance in this research area. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data are contained within the article.

Conflicts of Interest

The authors declare that they have no competing interests.

Appendix A

In this appendix, we state Minkowski's inequality for double integrals.
Lemma A1
(Minkowski's inequality for double integrals). Suppose that $(E_{1}, \mathcal{A}, \lambda)$ and $(E_{2}, \mathcal{B}, \mu)$ are σ-finite measure spaces and that $f: E_{1} \times E_{2} \to \mathbb{R}$ is $\mathcal{A} \otimes \mathcal{B}$-measurable. Then, we have
$$\bigg(\int_{E_{1}} \Big(\int_{E_{2}} |f(x,y)|\, \mu(dy)\Big)^{p}\, \lambda(dx)\bigg)^{\frac{1}{p}} \le \int_{E_{2}} \Big(\int_{E_{1}} |f(x,y)|^{p}\, \lambda(dx)\Big)^{\frac{1}{p}}\, \mu(dy),$$
for all $p \in [1, \infty)$. Equality holds if $p = 1$.
Proof. 
We omit the proof here. For more details, please see Theorem 13.14 in [31]. □
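As an informal numerical sanity check of Lemma A1 (our addition, not taken from [31]), one can compare discretized versions of both sides for Lebesgue measure on $[0,1]^{2}$; the left-hand side should not exceed the right-hand side.

```python
import numpy as np

def minkowski_check(p=2.0, m=400, seed=1):
    """Riemann-sum discretization of both sides of Minkowski's integral
    inequality for a nonnegative function on [0, 1]^2 with Lebesgue measure."""
    rng = np.random.default_rng(seed)
    x = (np.arange(m) + 0.5) / m
    y = (np.arange(m) + 0.5) / m
    f = np.exp(np.sin(6.0 * np.outer(x, y))) + rng.random((m, m))   # f(x_i, y_j) >= 0
    dx = dy = 1.0 / m
    lhs = (np.sum(np.sum(f * dy, axis=1) ** p * dx)) ** (1.0 / p)
    rhs = np.sum(np.sum(f ** p * dx, axis=0) ** (1.0 / p) * dy)
    return lhs, rhs

lhs, rhs = minkowski_check()
print(lhs <= rhs, lhs, rhs)   # expect True
```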

References

1. Kamrani, M. Numerical solution of stochastic fractional differential equations. Numer. Algorithms 2015, 68, 81–93.
2. Diethelm, K. The Analysis of Fractional Differential Equations; Lecture Notes in Mathematics; Springer: Berlin, Germany, 2010.
3. Khodabin, M.; Maleknejad, K.; Asgari, M. Numerical solution of a stochastic population growth model in a closed system. Adv. Differ. Equ. 2013, 2013, 130.
4. Tsokos, C.P.; Padgett, W.J. Random Integral Equations with Applications to Life Sciences and Engineering; Academic Press: Cambridge, MA, USA, 1974.
5. Vahdati, S. A wavelet method for stochastic Volterra integral equations and its application to general stock model. Comput. Methods Differ. Equ. 2017, 5, 170–188.
6. Zhao, Q.; Wang, R.; Wei, R. Exponential utility maximization for an insurer with time-inconsistent preferences. Insurance 2016, 70, 89–104.
7. Szynal, D.; Wedrychowicz, S. On solutions of a stochastic integral equation of the Volterra type with applications for chemotherapy. J. Appl. Probab. 1988, 25, 257–267.
8. Berger, M.; Mizel, V. Volterra equations with Itô integrals I. J. Integral Equ. 1980, 2, 187–245.
9. Berger, M.; Mizel, V. Volterra equations with Itô integrals II. J. Integral Equ. 1980, 2, 319–337.
10. Ravichandran, C.; Valliammal, N.; Nieto, J.J. New results on exact controllability of a class of fractional neutral integro-differential systems with state-dependent delay in Banach spaces. J. Frank. Inst. 2019, 356, 1535–1565.
11. Dineshkumar, C.; Udhayakumar, R.; Vijayakumar, V.; Shukla, A.; Nisar, K.S. New discussion regarding approximate controllability for Sobolev-type fractional stochastic hemivariational inequalities of order r∈(1,2). Commun. Nonlinear Sci. Numer. Simul. 2023, 116, 106891.
12. Dhayal, R.; Malik, M.; Abbas, S.; Debbouche, A. Optimal controls for second-order stochastic differential equations driven by mixed-fractional Brownian motion with impulses. Appl. Math. Lett. 2020, 43, 4107–4124.
13. Zhang, T.W.; Li, Y.K. Exponential Euler scheme of multi-delay Caputo–Fabrizio fractional-order differential equations. Appl. Math. Lett. 2022, 124, 107709.
14. Kloeden, P.E.; Platen, E. Numerical Solution of Stochastic Differential Equations; Applications of Mathematics; Springer: New York, NY, USA; Berlin, Germany, 1992; Volume 23.
15. Milstein, G.N.; Tretyakov, M.V. Stochastic Numerics for Mathematical Physics; Scientific Computation; Springer: Berlin, Germany, 2004.
16. Zhang, X. Euler schemes and large deviations for stochastic Volterra equations with singular kernels. J. Differ. Equ. 2008, 244, 2226–2250.
17. Son, D.T.; Huong, P.T.; Kloeden, P.E.; Tuan, H.T. Asymptotic separation between solutions of Caputo fractional stochastic differential equations. Stoch. Anal. Appl. 2018, 36, 654–664.
18. Anh, P.T.; Doan, T.S.; Huong, P.T. A variation of constant formula for Caputo fractional stochastic differential equations. Stat. Probab. Lett. 2019, 145, 351–358.
19. Tudor, C.; Tudor, M. Approximation schemes for Itô-Volterra stochastic equations. Bol. Soc. Mat. Mex. 1995, 1, 73–85.
20. Wen, C.H.; Zhang, T.S. Improved rectangular method on stochastic Volterra equations. J. Comput. Appl. Math. 2011, 235, 2492–2501.
21. Wang, Y. Approximate representations of solutions to SVIEs, and an application to numerical analysis. J. Math. Anal. Appl. 2017, 449, 642–659.
22. Xiao, Y.; Shi, J.N.; Yang, Z.W. Split-step collocation methods for stochastic Volterra integral equations. J. Integral Equ. Appl. 2018, 30, 197–218.
23. Liang, H.; Yang, Z.; Gao, J. Strong superconvergence of the Euler–Maruyama method for linear stochastic Volterra integral equations. J. Comput. Appl. Math. 2017, 317, 447–457.
24. Dai, X.; Bu, W.; Xiao, A. Well-posedness and EM approximations for non-Lipschitz stochastic fractional integro-differential equations. J. Comput. Appl. Math. 2019, 356, 377–390.
25. Gao, J.; Liang, H.; Ma, S. Strong convergence of the semi-implicit Euler method for nonlinear stochastic Volterra integral equations with constant delay. Appl. Math. Comput. 2019, 348, 385–398.
26. Yang, H.; Yang, Z.; Ma, S. Theoretical and numerical analysis for Volterra integro-differential equations with Itô integral under polynomially growth conditions. Appl. Math. Comput. 2019, 360, 70–82.
27. Zhang, W. Theoretical and numerical analysis of a class of stochastic Volterra integro-differential equations with non-globally Lipschitz continuous coefficients. Appl. Numer. Math. 2020, 147, 254–276.
28. Zhang, W.; Liang, H.; Gao, J. Theoretical and numerical analysis of the Euler-Maruyama method for generalized stochastic Volterra integro-differential equations. J. Comput. Appl. Math. 2020, 365, 17.
29. Li, M.; Huang, C.; Hu, Y. Numerical methods for stochastic Volterra integral equations with weakly singular kernels. IMA J. Numer. Anal. 2022, 42, 2656–2683.
30. Yan, Y.; Khan, M.; Ford, N.J. An analysis of the modified L1 scheme for time-fractional partial differential equations with nonsmooth data. SIAM J. Numer. Anal. 2018, 56, 210–227.
31. Schilling, R.L. Measures, Integrals and Martingales; Cambridge University Press: Cambridge, UK, 2017.
Table 1. The convergence orders for Equations (4) and (5) defined by (7).

α | error(h1 = 1/16) | error(h2 = 1/32) | error(h3 = 1/64) | error(h4 = 1/128) | EOC values | EOC (average)
0.2 | 1.1869 × 10^{-3} | 5.8082 × 10^{-4} | 2.8542 × 10^{-4} | 1.3972 × 10^{-4} | 1.0310, 1.0250, 1.0305 | 1.03
0.4 | 2.5771 × 10^{-3} | 1.2469 × 10^{-3} | 6.0768 × 10^{-4} | 2.9570 × 10^{-4} | 1.0473, 1.0370, 1.0392 | 1.04
0.6 | 4.4683 × 10^{-3} | 2.1448 × 10^{-3} | 1.0357 × 10^{-3} | 4.9966 × 10^{-4} | 1.0589, 1.0502, 1.0516 | 1.05
0.8 | 6.9502 × 10^{-3} | 3.3953 × 10^{-3} | 1.6528 × 10^{-3} | 7.9956 × 10^{-4} | 1.0335, 1.0386, 1.0476 | 1.04
1 | 8.0788 × 10^{-3} | 4.0749 × 10^{-3} | 2.0344 × 10^{-3} | 1.0043 × 10^{-3} | 0.9874, 1.0021, 1.0184 | 1.00
Table 2. The convergence orders for Equations (4) and (5) defined by (8).

α | error(h1 = 1/16) | error(h2 = 1/32) | error(h3 = 1/64) | error(h4 = 1/128) | EOC values | EOC (average)
0.2 | 3.6254 × 10^{-2} | 1.7819 × 10^{-2} | 8.9154 × 10^{-3} | 4.4166 × 10^{-3} | 1.0247, 0.99904, 1.0134 | 1.01
0.4 | 2.9412 × 10^{-2} | 1.4521 × 10^{-2} | 7.2194 × 10^{-3} | 3.5814 × 10^{-3} | 1.0183, 1.0082, 1.0114 | 1.01
0.6 | 3.1514 × 10^{-2} | 1.5556 × 10^{-2} | 7.6321 × 10^{-3} | 3.7425 × 10^{-3} | 1.0185, 1.0273, 1.0281 | 1.02
0.8 | 3.2311 × 10^{-2} | 1.6120 × 10^{-2} | 7.9141 × 10^{-3} | 3.8351 × 10^{-3} | 1.0032, 1.0264, 1.0452 | 1.02
1 | 4.9803 × 10^{-2} | 2.4976 × 10^{-2} | 1.2433 × 10^{-2} | 6.1283 × 10^{-3} | 1.0057, 1.0116, 1.0232 | 1.01
Table 3. The convergence orders for Equations (4) and (5) defined by (9).

α | error(h1 = 1/16) | error(h2 = 1/32) | error(h3 = 1/64) | error(h4 = 1/128) | EOC values | EOC (average)
0.2 | 3.8205 × 10^{-2} | 1.9042 × 10^{-2} | 9.4368 × 10^{-3} | 4.6378 × 10^{-3} | 1.0046, 1.0128, 1.0249 | 1.01
0.4 | 3.3152 × 10^{-2} | 1.6205 × 10^{-2} | 7.9161 × 10^{-3} | 3.8499 × 10^{-3} | 1.0326, 1.0336, 1.0400 | 1.04
0.6 | 3.1321 × 10^{-2} | 1.4878 × 10^{-2} | 7.0823 × 10^{-3} | 3.3676 × 10^{-3} | 1.0740, 1.0708, 1.0725 | 1.07
0.8 | 3.4886 × 10^{-2} | 1.6429 × 10^{-2} | 7.7244 × 10^{-3} | 3.6156 × 10^{-3} | 1.0864, 1.0888, 1.0952 | 1.09
1 | 4.7068 × 10^{-2} | 2.3335 × 10^{-2} | 1.1548 × 10^{-2} | 5.6754 × 10^{-3} | 1.0123, 1.0148, 1.0248 | 1.02
Table 4. The convergence orders for Equations (4) and (5) defined by (10).

α | error(h1 = 1/16) | error(h2 = 1/32) | error(h3 = 1/64) | error(h4 = 1/128) | EOC values | EOC (average)
0.2 | 3.1570 × 10^{-2} | 1.5542 × 10^{-2} | 7.6456 × 10^{-3} | 3.7412 × 10^{-3} | 1.0224, 1.0234, 1.0312 | 1.03
0.4 | 2.9524 × 10^{-2} | 1.4249 × 10^{-2} | 6.8956 × 10^{-3} | 3.3313 × 10^{-3} | 1.0510, 1.0471, 1.0496 | 1.05
0.6 | 2.9624 × 10^{-2} | 1.3939 × 10^{-2} | 6.5793 × 10^{-3} | 3.1055 × 10^{-3} | 1.0876, 1.0832, 1.0831 | 1.08
0.8 | 3.4298 × 10^{-2} | 1.6103 × 10^{-2} | 7.5450 × 10^{-3} | 3.5191 × 10^{-3} | 1.0908, 1.0937, 1.1003 | 1.09
1 | 4.7021 × 10^{-2} | 2.3320 × 10^{-2} | 1.1543 × 10^{-2} | 5.6735 × 10^{-3} | 1.0117, 1.0146, 1.0247 | 1.02
Table 5. The convergence orders for Equations (4) and (5) defined by (11).

α | error(h1 = 1/16) | error(h2 = 1/32) | error(h3 = 1/64) | error(h4 = 1/128) | EOC values | EOC (average)
0.2 | 5.8169 × 10^{-2} | 2.7932 × 10^{-2} | 1.3664 × 10^{-2} | 6.6809 × 10^{-3} | 1.0583, 1.0315, 1.0323 | 1.04
0.4 | 3.8562 × 10^{-2} | 1.9719 × 10^{-2} | 9.8286 × 10^{-3} | 4.8399 × 10^{-3} | 0.9676, 1.0045, 1.0220 | 1.00
0.6 | 1.9561 × 10^{-2} | 9.7425 × 10^{-3} | 4.7290 × 10^{-3} | 2.2623 × 10^{-3} | 1.0056, 1.0427, 1.0638 | 1.04
0.8 | 2.3815 × 10^{-2} | 1.1518 × 10^{-2} | 5.7530 × 10^{-3} | 2.8175 × 10^{-3} | 1.0480, 1.0015, 1.0299 | 1.03
1 | 4.5648 × 10^{-2} | 2.2806 × 10^{-2} | 1.1329 × 10^{-2} | 5.5780 × 10^{-3} | 1.0011, 1.0094, 1.0221 | 1.01