Article

Donsker-Type Theorem for Numerical Schemes of Backward Stochastic Differential Equations

Yi Guo and Naiqi Liu
1 Zhongtai Securities Institute for Financial Studies, Shandong University, Jinan 250100, China
2 School of Mathematics, Shandong University, Jinan 250100, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(4), 684; https://doi.org/10.3390/math13040684
Submission received: 22 January 2025 / Revised: 6 February 2025 / Accepted: 11 February 2025 / Published: 19 February 2025

Abstract: This article studies theoretical properties of a numerical scheme for backward stochastic differential equations, extending the corresponding results of Briand et al. to more general assumptions. More precisely, the Brownian motion is approximated by the partial sums of a sequence of martingale differences or of i.i.d. Gaussian variables, rather than by an i.i.d. Bernoulli sequence. We deal with the resulting adaptedness problem of $Y^n$ by defining a new process $\hat{Y}^n$; the Donsker-type theorem for the numerical solutions is then obtained by a method similar to that of Briand et al.

1. Introduction

In this article, we consider the following backward stochastic differential equation (BSDE for short):
$$Y_t = \xi + \int_t^T f\left(Y_s, Z_s\right)ds - \int_t^T Z_s\,dW_s, \qquad 0 \le t \le T, \tag{1}$$
where $W$ is a standard Brownian motion on $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{0\le t\le T}, P)$, $f(y,z)$ is Lipschitz continuous in $y$ and $z$, and $\xi$ is $\mathcal{F}_T$-measurable.
Research on backward stochastic differential equations can be traced back to 1973, when Bismut [1] first studied linear backward stochastic differential equations. Pardoux and Peng [2] introduced nonlinear backward stochastic differential equations and established the existence and uniqueness of solutions under the standard Lipschitz condition. In the more than thirty years since the work of Pardoux and Peng [2], backward stochastic differential equations have developed rapidly in financial mathematics, stochastic control, differential games, and numerical analysis; see El Karoui et al. [3] for details.
Unlike forward stochastic differential equations, the solution of a backward stochastic differential equation is a pair of adapted processes $(Y, Z)$, which creates certain difficulties for numerical approximation. Early studies mostly considered numerical solutions of forward–backward stochastic differential equations, systems consisting of a forward equation coupled with a backward one. Ma et al. [4] proposed a four-step scheme for forward–backward stochastic differential equations. Douglas et al. [5] provided a numerical method for forward–backward stochastic differential equations using approximation techniques for partial differential equations and stochastic differential equations. Zhang [6] obtained the $L^2$ convergence rate of numerical schemes for forward–backward stochastic differential equations. Chevance [7] considered the case where $f$ does not depend on $z$; by discretizing the conditional expectation in
$$Y_t = E\left[\xi + \int_t^T f\left(Y_s, Z_s\right)ds \,\Big|\, \mathcal{F}_t\right],$$
he obtained a numerical scheme for $Y$ and proved its convergence. Coquet et al. [8] studied the convergence of numerical schemes for backward stochastic differential equations in a function-space sense using the convergence of filtrations, again when $f$ does not depend on $z$.
When $f$ depends on $z$, Briand et al. [9] established a Donsker-type theorem for a numerical scheme of backward stochastic differential equations. They investigated the scheme
$$y_k = y_{k+1} + h f\left(y_k, z_k\right) - \sqrt{h}\, z_k \varepsilon_{k+1}, \qquad k = n-1, \ldots, 0, \qquad y_n = \xi_n,$$
where $h = T/n$ and $\{\varepsilon_k\}_{1\le k\le n}$ is a sequence of independent and identically distributed random variables with
$$P\left(\varepsilon_1 = 1\right) = P\left(\varepsilon_1 = -1\right) = \tfrac{1}{2}. \tag{2}$$
Here, we write $\mathcal{G}_k = \sigma(\varepsilon_1, \ldots, \varepsilon_k)$, $1 \le k \le n$, and take $\xi_n$ to be a $\mathcal{G}_n$-measurable random variable. Meanwhile, set
$$z_k = h^{-1/2}\, E\left[y_{k+1}\varepsilon_{k+1} \mid \mathcal{G}_k\right].$$
For $0 \le t \le T$, define
$$Y_t^n = y_{[t/h]}, \qquad Z_t^n = z_{\lfloor t/h\rfloor},$$
where $\lfloor x\rfloor = (x-1)^+$ when $x$ is an integer and $\lfloor x\rfloor = [x]$ when $x$ is not an integer.
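For concreteness, the following is a minimal numerical sketch of this Bernoulli scheme in the Markovian case $\xi_n = g(W_T^n)$ (an illustration only, not code from the paper; the driver $f$ and terminal function $g$ used in the example are hypothetical placeholders). On the Bernoulli tree, the conditional expectations reduce to averages over the two successor nodes, and the implicit equation for $y_k$ is solved by a few fixed-point iterations, which contract once $hK < 1$.

```python
import numpy as np

def bsde_bernoulli_scheme(g, f, T=1.0, n=200, fixed_point_iters=30):
    """Backward scheme y_k = y_{k+1} + h f(y_k, z_k) - sqrt(h) z_k eps_{k+1}
    on the recombining Bernoulli tree, for the Markovian case xi_n = g(W^n_T).
    After k steps the state is sqrt(h) * (2*j - k), where j counts the +1 steps."""
    h = T / n
    sqrt_h = np.sqrt(h)
    # terminal layer of the tree: y_n = g(W^n_T)
    y = g(sqrt_h * (2.0 * np.arange(n + 1) - n))
    for k in range(n - 1, -1, -1):
        y_up, y_down = y[1:k + 2], y[:k + 1]      # successor values for eps = +1 / -1
        z = (y_up - y_down) / (2.0 * sqrt_h)      # z_k = h^{-1/2} E[y_{k+1} eps_{k+1} | G_k]
        cond_mean = 0.5 * (y_up + y_down)         # E[y_{k+1} | G_k]
        # taking E[. | G_k] in the scheme gives the implicit equation
        # y_k = E[y_{k+1} | G_k] + h f(y_k, z_k); solve it by fixed-point iteration
        y_new = cond_mean.copy()
        for _ in range(fixed_point_iters):
            y_new = cond_mean + h * f(y_new, z)
        y = y_new
    return float(y[0])                            # approximation of Y_0

# hypothetical example: driver f(y, z) = -0.1 y + 0.05 z and terminal function g = tanh
print(bsde_bernoulli_scheme(np.tanh, lambda y, z: -0.1 * y + 0.05 * z))
```

Taking the conditional expectation of the scheme given $\mathcal{G}_k$ removes the $\varepsilon_{k+1}$ term, which is exactly the fixed-point equation solved in the inner loop.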
Briand et al. [9] proved that $(Y^n, Z^n)$ converges to $(Y, Z)$ in a suitable sense. This is a Donsker-type theorem; the name originally refers to the weak convergence of partial-sum processes to Brownian motion on $C[0,T]$ (see Donsker [10]).
We can see that the Bernoulli distribution is critical for obtaining the Donsker-type theorem for the numerical solutions. The key contribution of this article is to extend the Bernoulli condition (2) to a more general one: $\{\varepsilon_k\}_{1\le k\le n}$ is a sequence of martingale differences or of i.i.d. Gaussian variables. In the former case, we need an additional assumption (see Assumption 6 below) to compensate for the absence of independence and identical distribution. In the latter case, we use the estimate of Lemma 5 to obtain the UT property, which is needed in the proof of Theorem 3. Moreover, as observed in Remark 1, the original solution $Y^n$ might not be adapted to the filtration; therefore, we introduce a modified process $\hat{Y}^n$ to ensure adaptedness. The structure of this article is as follows: Section 2 states the main results. Section 3 presents a result on the convergence of filtrations, which is our main tool. In Section 4, we prove the main results. Some auxiliary lemmas are collected in Section 5.

2. Main Result

The discussion in this article is based on a complete filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{0\le t\le T}, P)$; $W$ is a standard Brownian motion on this space, and $\{\mathcal{F}_t\}_{0\le t\le T}$ is the natural filtration generated by $W$, assumed to be right continuous. Consider random variables $\{\varepsilon_k^n\}_{1\le k\le n}$ defined on the probability space above, and define
$$W_t^n = \sqrt{h}\sum_{k=1}^{[t/h]}\varepsilon_k^n, \qquad 0 \le t \le T, \quad h = \frac{T}{n}. \tag{3}$$
Similarly to the setting of Briand et al. [9], we set $y_n^n = \xi_n$ and, for $k = n-1, \ldots, 0$,
$$y_k^n = y_{k+1}^n + h f\left(y_k^n, z_k^n\right) - \sqrt{h}\, z_k^n \varepsilon_{k+1}^n \tag{4}$$
and
$$z_k^n = h^{-1/2}\, E\left[y_{k+1}^n \varepsilon_{k+1}^n \mid \mathcal{G}_k^n\right],$$
where $\mathcal{G}_k^n = \sigma\left(\varepsilon_1^n, \ldots, \varepsilon_k^n\right)$.
Remark 1.
In fact, (4) is equivalent to the following equation:
$$y_k^n = \xi_n + h\sum_{i=k}^{n-1} f\left(y_i^n, z_i^n\right) - \sqrt{h}\sum_{i=k}^{n-1} z_i^n \varepsilon_{i+1}^n. \tag{5}$$
Taking $k = 0$ in (5) and subtracting it from (5), we obtain
$$y_k^n = y_0^n - h\sum_{i=0}^{k-1} f\left(y_i^n, z_i^n\right) + \sqrt{h}\sum_{i=0}^{k-1} z_i^n \varepsilon_{i+1}^n.$$
However, this does not imply that $y^n$ is adapted to the filtration $\{\mathcal{G}_k^n\}_{0\le k\le n}$, because we know nothing about $y_0^n$.
To make $y_k^n$ adapted to the filtration $\mathcal{G}_k^n$, we further define $\hat{y}_k^n$ as a modified version of $y_k^n$:
$$\hat{y}_k^n = E\left[y_k^n \mid \mathcal{G}_k^n\right],$$
and, for $0 \le t \le T$,
$$Y_t^n = y_{[t/h]}^n, \qquad Z_t^n = z_{\lfloor t/h\rfloor}^n, \qquad \hat{Y}_t^n = \hat{y}_{[t/h]}^n. \tag{7}$$
Remark 2.
Because of the definitions of $[\cdot]$ and $\lfloor\cdot\rfloor$ in the previous section, $Y^n$ and $\hat{Y}^n$ are càdlàg processes, and $Z^n$ is a càglàd process.
This article investigates the weak convergence of $(\hat{Y}^n, Z^n)$ in the space $D([0,T])$, equipped with the $J_1$ topology; for the related concepts, we refer to Jacod and Shiryaev [11]. We now fix some notation. $M^n \Rightarrow M$ means that $M^n$ converges to $M$ in law under the $J_1$ topology. $M^n \xrightarrow{u.c.p.} M$ means that $\sup_{s\le t}|M_s^n - M_s| \xrightarrow{P} 0$ for all $0 \le t \le T$. If $\{\mathcal{G}_t^n\}_{0\le t\le T}$ and $\{\mathcal{G}_t\}_{0\le t\le T}$ are two filtrations, then $\{\mathcal{G}_t^n\}_{0\le t\le T} \xrightarrow{w} \{\mathcal{G}_t\}_{0\le t\le T}$ means that, for any $B \in \mathcal{G}_T$, $E[\mathbf{1}_B \mid \mathcal{G}_\cdot^n]$ converges in probability to $E[\mathbf{1}_B \mid \mathcal{G}_\cdot]$ under the $J_1$ topology. We denote the natural filtration generated by $W^n$ in (3) by $\{\mathcal{F}_t^n\}_{0\le t\le T}$, assumed to be right continuous. In the following, we recall the UT condition, which is crucial for the convergence of stochastic integrals.
Definition 1.
A sequence of $\mathbb{R}^d$-valued semimartingales $(X^n)_{n\ge1}$ is said to satisfy the UT condition if, for each $n \ge 1$, there is a decomposition $X^n = M^n + A^n$ with
$$\sup_n E\left[\left[M^n, M^n\right]_T + \int_0^T \left|dA_s^n\right|\right] < \infty,$$
where $M^n$ is a local martingale and $A^n$ is a process of locally finite variation.
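As a quick illustration (not part of the original text) of the quantity controlled by this condition, take for $X^n$ the piecewise-constant approximation $W^n$ from (3), with martingale part $M^n = W^n$ and $A^n = 0$. Under Assumption 5,
$$\left[W^n, W^n\right]_T = \sum_{k=1}^{n}\left(\sqrt{h}\,\varepsilon_k^n\right)^2 = h\sum_{k=1}^{n}\left(\varepsilon_k^n\right)^2 = h\,n = T, \qquad\text{so}\qquad \sup_n E\left[\left[W^n, W^n\right]_T + \int_0^T\left|dA_s^n\right|\right] = T < \infty.$$
Under Assumption 7, the quadratic variation is random, but its expectation is still $h\sum_{k=1}^n E\left[\left(\varepsilon_k^n\right)^2\right] = T$, so the same bound holds.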
Now, we provide the assumptions of this article.
Assumption 1.
$f: \mathbb{R}\times\mathbb{R} \to \mathbb{R}$ is Lipschitz; that is, there exists a constant $K \ge 0$ such that
$$\forall (y,z), \left(y', z'\right) \in \mathbb{R}^2, \qquad \left|f(y,z) - f\left(y', z'\right)\right| \le K\left(\left|y - y'\right| + \left|z - z'\right|\right);$$
Assumption 2.
$\xi$ is $\mathcal{F}_T$-measurable. For $n \ge 1$, $\xi_n$ is $\mathcal{G}_n^n$-measurable, and
$$E\left[\xi^2\right] + \sup_n E\left[\xi_n^2\right] < \infty;$$
Assumption 3.
$$\lim_{n\to\infty} E\left[\left|\xi_n - \xi\right|\right] = 0;$$
Assumption 4.
$$W^n \xrightarrow{u.c.p.} W.$$
Assumption 5.
For $0 < k \le n$, $E\left[\varepsilon_k^n \mid \mathcal{G}_{k-1}^n\right] = 0$ and $P\left(\varepsilon_k^n = 1\right) = P\left(\varepsilon_k^n = -1\right) = \tfrac{1}{2}$.
Assumption 6.
For any bounded continuous function g,
$$E\left[g\left(W_t^n\right) \mid W_s^n\right] \xrightarrow{P} E\left[g\left(W_t\right) \mid W_s\right], \qquad 0 \le s < t < T.$$
Remark 3.
At first glance, Assumption 6 may seem quite strange. However, it is a technical condition that allows us to apply Lemma 1 to prove the convergence of filtrations when $\{\varepsilon_k^n\}_{1\le k\le n}$ is not an i.i.d. sequence.
Assumption 7.
$\{\varepsilon_k^n\}_{1\le k\le n}$ is a family of independent and identically distributed Gaussian random variables with $E\left[\varepsilon_k^n\right] = 0$ and $E\left[\left(\varepsilon_k^n\right)^2\right] = 1$.
Remark 4.
Actually, under Assumptions 5 and 6, or under Assumption 7, we obtain $W^n \Rightarrow W$ from Lemma 4. Since $W$ is continuous, $W^n$ also converges to $W$ in law under the local uniform topology, by the properties of the $J_1$ topology.
Now, we present the main results of this article. Note that the results rely on either Assumption 5 or Assumption 7, which cannot hold simultaneously, so the conditions of the main results are split into two alternative sets of assumptions.
Theorem 1.
Under Assumptions 1–4, together with Assumptions 5 and 6 or with Assumption 7, we have
$$\left(\hat{Y}^n, \int_0^{\cdot} Z_s^n\,ds\right) \Rightarrow \left(Y, \int_0^{\cdot} Z_s\,ds\right). \tag{9}$$
Furthermore,
$$\sup_{0\le t\le T}\left|\hat{Y}_t^n - Y_t\right|^2 + \int_0^T \left|Z_s^n - Z_s\right|^2 ds \xrightarrow{P} 0. \tag{10}$$
Now, we provide an example by applying Theorem 1.
Theorem 2.
Let $S^n$ be defined as follows:
$$S_t^n = \sqrt{h}\sum_{k=1}^{[t/h]}\varepsilon_k, \qquad 0 \le t \le T, \quad h = \frac{T}{n},$$
where the $\varepsilon_k$ are as in Assumptions 5 and 6 or in Assumption 7, without the superscript $n$. Let $(Y_t, Z_t)_{0\le t\le T}$ be the solution of the BSDE (1) with $\xi = g(W_T)$, and let $(\hat{Y}_t^n, Z_t^n)_{0\le t\le T}$ be defined by (7) with $\xi_n = g\left(S_T^n\right)$. If $g: \mathbb{R} \to \mathbb{R}$ is a bounded continuous function, then $\hat{Y}^n \xrightarrow{u.c.p.} Y$.
Proof. 
From Donsker's theorem and the Skorokhod representation theorem, there exist a probability space carrying a Brownian motion $W$ and a sequence $\varepsilon_k^n$ satisfying Assumptions 5 and 6 or Assumption 7 such that
$$W_t^n = \sqrt{h}\sum_{k=1}^{[t/h]}\varepsilon_k^n, \qquad 0 \le t \le T, \quad h = \frac{T}{n},$$
satisfies
$$\sup_{0\le t\le T}\left|W_t^n - W_t\right| \xrightarrow{P} 0, \qquad \text{as } n \to \infty.$$
Since $g$ is a bounded continuous function, Assumptions 2 and 3 are trivially satisfied. We can then apply Theorem 1 to reach the conclusion. □
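For illustration only (a sketch under the Gaussian case of Assumption 7, not code from the paper), the scaled partial-sum process $S^n$ and the terminal value $\xi_n = g(S_T^n)$ can be generated as follows; the bounded continuous terminal function $g = \tanh$ is a hypothetical choice.

```python
import numpy as np

def scaled_partial_sum_paths(T=1.0, n=200, n_paths=10_000, rng=None):
    """Simulate S^n_t = sqrt(h) * sum_{k <= [t/h]} eps_k with i.i.d. N(0,1) increments
    (Assumption 7). Returns an array of shape (n_paths, n + 1) holding the values of
    S^n on the grid t_k = k * h, k = 0, ..., n."""
    rng = np.random.default_rng() if rng is None else rng
    h = T / n
    eps = rng.standard_normal((n_paths, n))
    paths = np.zeros((n_paths, n + 1))
    paths[:, 1:] = np.sqrt(h) * np.cumsum(eps, axis=1)
    return paths

g = np.tanh                      # bounded continuous terminal function (hypothetical)
S = scaled_partial_sum_paths()
xi_n = g(S[:, -1])               # samples of xi_n = g(S^n_T)
print(xi_n.mean())               # Monte Carlo estimate of E[g(S^n_T)], close to E[g(W_T)]
```

In the simplest case $f \equiv 0$, one has $Y_0 = E[g(W_T)]$ and $\hat{Y}_0^n = E[\xi_n]$, so the printed Monte Carlo average already illustrates the convergence $\hat{Y}_0^n \to Y_0$ asserted by the theorem.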

3. Convergence of Filtration

Before proving our main results, we first state a result on the convergence of filtrations. This result is crucial for the proofs of the main results of this article.
We denote by $\{\mathcal{F}_t^n\}_{0\le t\le T}$ the natural filtration generated by $W^n$ in (3), assumed to be right continuous. We first give a necessary and sufficient condition for the convergence of filtrations.
Lemma 1
(Coquet et al. [12]). Let $S$ be an adapted continuous process on $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{0\le t\le T}, P)$, and let $\mathcal{F}^S$ be the natural filtration generated by $S$, assumed to be right continuous. The following statements are equivalent:
(1). $\mathcal{F}^n \xrightarrow{w} \mathcal{F}^S$;
(2). For all $(c_1, \ldots, c_m) \in \mathbb{R}^m$, $(\lambda_1^l, \ldots, \lambda_m^l) \in \mathbb{R}^m$, and non-negative real numbers $t_1, \ldots, t_k$, in the $J_1$ topology,
$$E\left[\sum_{j=1}^m c_j \exp\left(i\sum_{l=1}^k \lambda_j^l S_{t_l}\right) \Big| \mathcal{F}_\cdot^n\right] \xrightarrow{P} E\left[\sum_{j=1}^m c_j \exp\left(i\sum_{l=1}^k \lambda_j^l S_{t_l}\right) \Big| \mathcal{F}_\cdot^S\right].$$
Based on the above results, we have
Theorem 3.
Let $X^n$ be an $\mathcal{F}_T^n$-measurable integrable random variable and $X$ an $\mathcal{F}_T$-measurable integrable random variable, and set
$$M_t^n = E\left[X^n \mid \mathcal{F}_t^n\right], \qquad M_t = E\left[X \mid \mathcal{F}_t\right].$$
Assume
(a) $E\left[X^2\right] + \sup_n E\left[\left(X^n\right)^2\right] < \infty$,
(b) $\lim_{n\to\infty} E\left[\left|X^n - X\right|\right] = 0$.
Under Assumptions 4–6, or under Assumptions 4 and 7, we have
$$\left(W^n, M^n, \left[M^n, M^n\right], \left[M^n, W^n\right]\right) \xrightarrow{u.c.p.} \left(W, M, [M, M], [M, W]\right).$$
Proof. 
From Remark 1.(2) in Coquet et al. [12], if we can prove $\mathcal{F}^n \xrightarrow{w} \mathcal{F}$, then, together with condition (b), we obtain $M^n \xrightarrow{u.c.p.} M$. Hence, we now prove $\mathcal{F}^n \xrightarrow{w} \mathcal{F}$. Under Assumptions 4–6, we first prove that, for fixed $t > 0$,
$$E\left[\sum_{j=1}^m c_j \exp\left(i\sum_{l=1}^k \lambda_j^l W_{t_l}\right) \Big| \mathcal{F}_t^n\right] \xrightarrow{P} E\left[\sum_{j=1}^m c_j \exp\left(i\sum_{l=1}^k \lambda_j^l W_{t_l}\right) \Big| \mathcal{F}_t\right].$$
According to Assumption 4, we only need to prove
$$E\left[\sum_{j=1}^m c_j \exp\left(i\sum_{l=1}^k \lambda_j^l W_{t_l}^n\right) \Big| \mathcal{F}_t^n\right] \xrightarrow{P} E\left[\sum_{j=1}^m c_j \exp\left(i\sum_{l=1}^k \lambda_j^l W_{t_l}\right) \Big| \mathcal{F}_t\right].$$
For $t \in [t_{p-1}, t_p)$, write
$$F_t := E\left[\sum_{j=1}^m c_j \exp\left(i\sum_{l=1}^{p-1}\lambda_j^l W_{t_l} + i\sum_{l=p}^{k}\lambda_j^l W_{t_l}\right) \Big| \mathcal{F}_t\right] = \sum_{j=1}^m c_j \prod_{l<p}\exp\left(i\lambda_j^l W_{t_l}\right) \cdot \prod_{l\ge p} E\left[\exp\left(i\lambda_j^l W_{t_l}\right) \Big| \mathcal{F}_t\right], \qquad F_t^n := \sum_{j=1}^m c_j \prod_{l<p}\exp\left(i\lambda_j^l W_{t_l}^n\right) \cdot \prod_{l\ge p} E\left[\exp\left(i\lambda_j^l W_{t_l}^n\right) \Big| \mathcal{F}_t^n\right].$$
According to Assumptions 4 and 6, we have $F_t^n \xrightarrow{P} F_t$. Therefore, for each fixed $t$, $F_t^n$, which is uniformly integrable, converges in probability to $F_t$, which is a continuous martingale in $t$. Then, using Lemma 1 and Proposition 1.2 in Aldous [13], we obtain
$$\mathcal{F}^n \xrightarrow{w} \mathcal{F}.$$
If Assumptions 4 and 7 hold, the convergence of filtrations is a direct corollary of Proposition 2 of [12], since $\{W^n\}_{n\ge1}$ is a sequence of processes with independent increments. Therefore, we have $M^n \xrightarrow{u.c.p.} M$.
Next, we proceed to prove $\left[M^n, M^n\right] \xrightarrow{u.c.p.} [M, M]$ and $\left[W^n, W^n\right] \xrightarrow{u.c.p.} [W, W]$. Under Assumptions 4–6, or 4 and 7, taking advantage of the Doob inequality and condition (a),
$$\sup_n E\left[\sup_{0\le t\le T}\left|\Delta M_t^n\right|\right] < \infty.$$
From Lemma 7, we obtain
$$\left[M^n, M^n\right] \xrightarrow{u.c.p.} [M, M]. \tag{14}$$
For $W^n$, if Assumptions 4–6 hold, this trivially satisfies
$$\sup_n E\left[\sup_{0\le t\le T}\left|\Delta W_t^n\right|\right] < \infty. \tag{15}$$
On the other hand, if Assumptions 4 and 7 hold, noting that $\sqrt{\log n / n}$ is bounded and
$$\sup_{0\le t\le T}\left|\Delta W_t^n\right| = \sup_{1\le k\le n}\sqrt{h}\left|\varepsilon_k^n\right| = \sqrt{\frac{T}{n}}\sup_{1\le k\le n}\left|\varepsilon_k^n\right|,$$
it is easy to verify (15) using Lemma 5, since $E\left[\sup_{1\le k\le n}\left|\varepsilon_k^n\right|\right] \le C\sqrt{\log n}$. Hence, by the same argument, we have
$$\left[W^n, W^n\right] \xrightarrow{u.c.p.} [W, W].$$
Combining this with (14), we have
$$\left[M^n, W^n\right] \xrightarrow{u.c.p.} [M, W].$$
Since componentwise u.c.p. convergence implies joint u.c.p. convergence, we have proved the theorem. □
Using the above theorem, we obtain the following result, analogous to Corollary 3.2 in Briand et al. [9]. The proof is the same as for that corollary, so we state only the theorem, without proof.
Theorem 4.
Under the conditions of Theorem 3, there exist predictable processes $\left(Z_t^n\right)_{0\le t\le T}$ and $\left(Z_t\right)_{0\le t\le T}$ such that
$$\forall t \in [0,T], \qquad M_t^n = E\left[X^n\right] + \int_0^t Z_s^n\,dW_s^n, \qquad M_t = E[X] + \int_0^t Z_s\,dW_s,$$
and
$$\int_0^T \left(Z_t^n - Z_t\right)^2 dt \xrightarrow{P} 0.$$
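To see where such a representation comes from in the discrete Bernoulli case of Assumption 5, here is a short illustration (not part of the original proof). Any $\mathcal{G}_{k+1}^n$-measurable random variable is, conditionally on $\mathcal{G}_k^n$, a function of $\varepsilon_{k+1}^n \in \{-1, +1\}$ and can therefore be written as $a + b\,\varepsilon_{k+1}^n$ with $a, b$ being $\mathcal{G}_k^n$-measurable. Applying this to the increments of the martingale $M^n$ on the grid and using $E\left[\varepsilon_{k+1}^n \mid \mathcal{G}_k^n\right] = 0$ gives $a = 0$, so
$$M_{(k+1)h}^n - M_{kh}^n = z_k^n\left(W_{(k+1)h}^n - W_{kh}^n\right) = \sqrt{h}\, z_k^n\, \varepsilon_{k+1}^n, \qquad z_k^n = h^{-1/2}\, E\left[M_{(k+1)h}^n\, \varepsilon_{k+1}^n \,\Big|\, \mathcal{G}_k^n\right].$$
Summing over $k$ yields the discrete analogue of the representation in Theorem 4 and matches the form of the definition of $z_k^n$ in Section 2.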

4. Proof of Theorem 1

Equation (9) is a direct corollary of (10); therefore, we proceed to prove (10). Let $\left(Y_t^{\infty,p+1}, Z_t^{\infty,p+1}\right)$ be the solution of the backward stochastic differential equation
$$Y_t^{\infty,p+1} = \xi + \int_t^T f\left(Y_s^{\infty,p}, Z_s^{\infty,p}\right)ds - \int_t^T Z_s^{\infty,p+1}\,dW_s, \qquad 0 \le t \le T, \tag{16}$$
and define
$$y_k^{n,p+1} = y_{k+1}^{n,p+1} + h f\left(y_k^{n,p}, z_k^{n,p}\right) - \sqrt{h}\, z_k^{n,p+1}\varepsilon_{k+1}^n, \qquad k = n-1, \ldots, 0, \qquad y_n^{n,p+1} = \xi_n. \tag{17}$$
Set $Y^{\infty,0} = 0$, $Z^{\infty,0} = 0$, $y^{n,0} = 0$, $z^{n,0} = 0$. We have
$$\hat{Y}^n - Y = \left(\hat{Y}^n - Y^n\right) + \left(Y^n - Y^{n,p}\right) + \left(Y^{n,p} - Y^{\infty,p}\right) + \left(Y^{\infty,p} - Y\right),$$
$$Z^n - Z = \left(Z^n - Z^{n,p}\right) + \left(Z^{n,p} - Z^{\infty,p}\right) + \left(Z^{\infty,p} - Z\right),$$
where, for $0 \le t \le T$, $Y_t^{n,p} = y_{[t/h]}^{n,p}$ and $Z_t^{n,p} = z_{\lfloor t/h\rfloor}^{n,p}$.
By Picard iteration, $\left(Y^{\infty,p}, Z^{\infty,p}\right) \to (Y, Z)$ as $p \to \infty$ in the sense of (10); this is a classical result and can be found in the work of Zhang [14]. Therefore, only the convergence of the other terms needs to be considered, for which we now need a few lemmas.
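As an illustration of one step of this discrete Picard iteration (a sketch only; the tree representation and the function names are not from the paper, and the Bernoulli case of Assumption 5 with $\xi_n$ depending only on the tree node is assumed):

```python
import numpy as np

def picard_step_on_tree(y_prev, z_prev, xi_n, f, h):
    """One Picard step (y^{n,p}, z^{n,p}) -> (y^{n,p+1}, z^{n,p+1}) on the Bernoulli tree.
    y_prev[k] and z_prev[k] are arrays of node values at time level k (length k + 1);
    xi_n is the array of terminal values on level n."""
    n = len(xi_n) - 1
    sqrt_h = np.sqrt(h)
    y_new, z_new = [None] * (n + 1), [None] * n
    y_new[n] = xi_n                                   # y^{n,p+1}_n = xi_n for every p
    for k in range(n - 1, -1, -1):
        y_up, y_down = y_new[k + 1][1:], y_new[k + 1][:-1]
        z_new[k] = (y_up - y_down) / (2.0 * sqrt_h)   # h^{-1/2} E[y^{n,p+1}_{k+1} eps | G_k]
        # the driver uses the previous iterate, so the update is explicit in y^{n,p+1}_k
        y_new[k] = 0.5 * (y_up + y_down) + h * f(y_prev[k], z_prev[k])
    return y_new, z_new
```

Starting from $y^{n,0} = z^{n,0} = 0$ and iterating this map in $p$ produces the double-indexed family $(y^{n,p}, z^{n,p})$ used in the decomposition above; Lemma 2 below quantifies the contraction in $p$.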
Lemma 2.
Under Assumptions 5 and 6, or under Assumption 7, there exists $n_0$ such that for $n > n_0$, as $p \to \infty$,
$$E\left[\sup_{0\le t\le T}\left|Y_t^n - Y_t^{n,p}\right|^2 + \int_0^T \left(Z_t^n - Z_t^{n,p}\right)^2 dt\right] \to 0.$$
Proof. 
First, assume that Assumptions 5 and 6 hold; we then need to prove that there exist $\alpha > 1$ and $n_0 > 1$ such that, for all $n > n_0$ and all $p \ge 1$,
$$\left\|\left(y^{n,p+1} - y^{n,p}, z^{n,p+1} - z^{n,p}\right)\right\|_\alpha^2 \le \frac{2}{3}\left\|\left(y^{n,p} - y^{n,p-1}, z^{n,p} - z^{n,p-1}\right)\right\|_\alpha^2,$$
where
$$\left\|\left(y^{n,p+1} - y^{n,p}, z^{n,p+1} - z^{n,p}\right)\right\|_\alpha^2 = E\left[\sup_{0\le k\le n}\alpha^{kh}\left|y_k^{n,p+1} - y_k^{n,p}\right|^2 + h\sum_{k=0}^{n-1}\alpha^{kh}\left|z_k^{n,p+1} - z_k^{n,p}\right|^2\right].$$
For convenience of notation, we write $(y, z)$ for $\left(y^{n,p+1} - y^{n,p}, z^{n,p+1} - z^{n,p}\right)$ and $(u, v)$ for $\left(y^{n,p} - y^{n,p-1}, z^{n,p} - z^{n,p-1}\right)$. Set $\beta > 1$; since $y_n = 0$, we have, for $k = 0, \ldots, n-1$,
$$\beta^k y_k^2 = \sum_{i=k}^{n-1}\left(\beta^i y_i^2 - \beta^{i+1} y_{i+1}^2\right) = (1-\beta)\sum_{i=k}^{n-1}\beta^i y_i^2 + \beta\sum_{i=k}^{n-1}\beta^i\left(y_i^2 - y_{i+1}^2\right).$$
Note that
$$y_i^2 - y_{i+1}^2 = 2 y_i\left(y_i - y_{i+1}\right) - \left(y_i - y_{i+1}\right)^2.$$
Considering relation (17), we obtain
$$y_i - y_{i+1} = h\left(f\left(y_i^{n,p}, z_i^{n,p}\right) - f\left(y_i^{n,p-1}, z_i^{n,p-1}\right)\right) - \sqrt{h}\, z_i \varepsilon_{i+1}^n.$$
Since $f$ is Lipschitz continuous, for any $\nu > 0$,
$$2 y_i\left(f\left(y_i^{n,p}, z_i^{n,p}\right) - f\left(y_i^{n,p-1}, z_i^{n,p-1}\right)\right) \le 2K\left|y_i\right|\left(\left|u_i\right| + \left|v_i\right|\right) \le \left(2K^2/\nu\right)y_i^2 + \nu\left(u_i^2 + v_i^2\right).$$
Furthermore, squaring the last term in (22) and using Assumption 5, we derive
$$h z_i^2 \le 2\left(y_i - y_{i+1}\right)^2 + 4K^2h^2\left(u_i^2 + v_i^2\right).$$
From (22) and (23), it is easy to obtain
$$2\sum_{i=k}^{n-1}\beta^i y_i\left(y_i - y_{i+1}\right) \le 2K^2(h/\nu)\sum_{i=k}^{n-1}\beta^i y_i^2 + \nu h\sum_{i=k}^{n-1}\beta^i\left(u_i^2 + v_i^2\right) - 2\sqrt{h}\sum_{i=k}^{n-1}\beta^i y_i z_i \varepsilon_{i+1}^n.$$
On the other hand, using (24), we know
$$\sum_{i=k}^{n-1}\beta^i\left(y_i - y_{i+1}\right)^2 \ge (h/2)\sum_{i=k}^{n-1}\beta^i z_i^2 - 2K^2h^2\sum_{i=k}^{n-1}\beta^i\left(u_i^2 + v_i^2\right).$$
Set $\rho = \left(\nu + 2K^2h\right)\beta h$; according to (20) and (21) and the two inequalities above, we have
$$\beta^k y_k^2 + \beta(h/2)\sum_{i=k}^{n-1}\beta^i z_i^2 = (1-\beta)\sum_{i=k}^{n-1}\beta^i y_i^2 + 2\beta\sum_{i=k}^{n-1}\beta^i y_i\left(y_i - y_{i+1}\right) - \beta\sum_{i=k}^{n-1}\beta^i\left(y_i - y_{i+1}\right)^2 + \beta(h/2)\sum_{i=k}^{n-1}\beta^i z_i^2 \le \left(1 - \beta + 2K^2h\beta/\nu\right)\sum_{i=k}^{n-1}\beta^i y_i^2 - 2\beta\sqrt{h}\sum_{i=k}^{n-1}\beta^i y_i z_i \varepsilon_{i+1}^n + \rho\sum_{i=k}^{n-1}\beta^i\left(u_i^2 + v_i^2\right).$$
Therefore, if $1 - \beta + 2K^2h\beta/\nu \le 0$, the above inequality implies, for $k = 0, \ldots, n-1$,
$$\beta^k y_k^2 + \beta(h/2)\sum_{i=k}^{n-1}\beta^i z_i^2 \le \rho\sum_{i=0}^{n-1}\beta^i\left(u_i^2 + v_i^2\right) - 2\beta\sqrt{h}\sum_{i=k}^{n-1}\beta^i y_i z_i \varepsilon_{i+1}^n.$$
Since $\varepsilon_k^n$ is a martingale difference sequence, taking expectations, we have
$$E\left[\sum_{i=0}^{n-1}\beta^i z_i^2\right] \le 2\left(\nu + 2K^2h\right)E\left[\sum_{i=0}^{n-1}\beta^i\left(u_i^2 + v_i^2\right)\right].$$
Furthermore, using (26), we can obtain
$$\sup_{0\le k\le n}\beta^k y_k^2 \le \rho\sum_{i=0}^{n-1}\beta^i\left(u_i^2 + v_i^2\right) + 4\beta\sqrt{h}\sup_{0\le k\le n-1}\left|\sum_{i=0}^{k}\beta^i y_i z_i \varepsilon_{i+1}^n\right|.$$
Because of Assumption 5, we can easily verify that $\left\{\sum_{i=0}^{k}\beta^i y_i z_i \varepsilon_{i+1}^n\right\}_{0\le k\le n-1}$ is a martingale. Using the Burkholder–Davis–Gundy inequality and $\sqrt{ab} \le \frac{a+b}{2}$, it follows that
$$E\left[\sup_{0\le k\le n}\beta^k y_k^2\right] \le \rho E\left[\sum_{i=0}^{n-1}\beta^i\left(u_i^2 + v_i^2\right)\right] + C\sqrt{h}\,\beta\, E\left[\left(\sum_{i=0}^{n-1}\beta^{2i} y_i^2 z_i^2\right)^{1/2}\right] \le \rho E\left[\sum_{i=0}^{n-1}\beta^i\left(u_i^2 + v_i^2\right)\right] + C^2\beta^2(h/2)E\left[\sum_{i=0}^{n-1}\beta^i z_i^2\right] + \frac{1}{2}E\left[\sup_{0\le k\le n}\beta^k y_k^2\right].$$
Using (27), we have
$$E\left[\sup_{0\le k\le n}\beta^k y_k^2 + h\sum_{i=0}^{n-1}\beta^i z_i^2\right] \le \lambda\, E\left[\sup_{0\le k\le n}\beta^k u_k^2 + h\sum_{i=0}^{n-1}\beta^i v_i^2\right],$$
where $\lambda = 2\left(\nu + 2K^2h\right)\left(1 + \beta + C^2\beta^2\right)(1\vee T)$, provided that $1 - \beta + 2K^2h\beta/\nu \le 0$. Choose $\nu$ such that $2\nu\left(2 + C^2\right)(1\vee T) = 1/2$. Take $\beta = \alpha^h$ with $\alpha \ge 1$; for $1 - \beta + 2K^2h\beta/\nu \le 0$ to hold, $\alpha$ should be greater than or equal to $\exp\left(-h^{-1}\log\left(1 - 2K^2h/\nu\right)\right)$. As $n \to \infty$, this quantity tends to $\exp\left(2K^2/\nu\right)$; hence, we set $\alpha = \exp\left(2K^2/\nu\right)$. On the other hand, as $n \to \infty$, $\lambda \to 2\nu\left(2 + C^2\right)(1\vee T) = 1/2$. Based on the analysis above, there exists $n_0$ such that, for $n \ge n_0$,
$$E\left[\sup_{0\le k\le n}\beta^k y_k^2 + h\sum_{i=0}^{n-1}\beta^i z_i^2\right] \le \frac{2}{3}E\left[\sup_{0\le k\le n}\beta^k u_k^2 + h\sum_{i=0}^{n-1}\beta^i v_i^2\right].$$
Now, we prove that, for $n \ge n_0$,
$$E\left[\sup_{0\le k\le n-1}\left|y_k^{n,1}\right|^2 + h\sum_{i=0}^{n-1}\left|z_i^{n,1}\right|^2\right] < \infty.$$
In fact, since $y^{n,0} = 0$ and $z^{n,0} = 0$,
$$y_k^{n,1} = \xi_n + h\sum_{i=k}^{n-1} f(0,0) - \sqrt{h}\sum_{i=k}^{n-1} z_i^{n,1}\varepsilon_{i+1}^n;$$
then,
$$\sup_{0\le k\le n-1}\left|y_k^{n,1}\right|^2 \le C\left[\left(\xi_n\right)^2 + T^2 f^2(0,0) - h\sum_{i=0}^{n}\left(z_i^{n,1}\right)^2\right] \le (C\vee1)\left[\left(\xi_n\right)^2 + T^2 f^2(0,0) - h\sum_{i=0}^{n}\left(z_i^{n,1}\right)^2\right],$$
so we have
$$E\left[\sup_{0\le k\le n-1}\left|y_k^{n,1}\right|^2 + (C\vee1)\, h\sum_{i=0}^{n}\left(z_i^{n,1}\right)^2\right] \le (C\vee1)\, E\left[\left(\xi_n\right)^2 + T^2 f^2(0,0)\right] < \infty.$$
Under Assumption 7, (20)–(23) and (25) can be obtained in the same way. Meanwhile, it should be noted that (24) now becomes
$$h z_i^2\left(\varepsilon_{i+1}^n\right)^2 \le 2\left(y_i - y_{i+1}\right)^2 + 4K^2h^2\left(u_i^2 + v_i^2\right).$$
Hence,
$$\sum_{i=k}^{n-1}\beta^i\left(y_i - y_{i+1}\right)^2 \ge (h/2)\sum_{i=k}^{n-1}\beta^i z_i^2\left(\varepsilon_{i+1}^n\right)^2 - 2K^2h^2\sum_{i=k}^{n-1}\beta^i\left(u_i^2 + v_i^2\right).$$
Combining this with (25), we have
$$\beta^k y_k^2 + \beta(h/2)\sum_{i=k}^{n-1}\beta^i z_i^2\left(\varepsilon_{i+1}^n\right)^2 \le \left(1 - \beta + 2K^2h\beta/\nu\right)\sum_{i=k}^{n-1}\beta^i y_i^2 - 2\beta\sqrt{h}\sum_{i=k}^{n-1}\beta^i y_i z_i \varepsilon_{i+1}^n + \rho\sum_{i=k}^{n-1}\beta^i\left(u_i^2 + v_i^2\right).$$
Taking expectations on both sides with $k = 0$ and using that the $\varepsilon_i^n$ are independent and identically distributed, we again obtain (27). From (28), we have
$$E\left[\sup_{0\le k\le n}\beta^k y_k^2\right] \le E\left[\rho\sum_{i=0}^{n-1}\beta^i\left(u_i^2 + v_i^2\right) + 4\beta\sqrt{h}\sup_{0\le k\le n-1}\left|\sum_{i=0}^{k}\beta^i y_i z_i \varepsilon_{i+1}^n\right|\right].$$
According to the Burkholder–Davis–Gundy inequality, $\sqrt{ab} \le \frac{a+b}{2}$, and Assumption 7, we have
$$E\left[\sup_{0\le k\le n}\beta^k y_k^2\right] \le \rho E\left[\sum_{i=0}^{n-1}\beta^i\left(u_i^2 + v_i^2\right)\right] + C\sqrt{h}\,\beta\, E\left[\left(\sum_{i=0}^{n-1}\beta^{2i} y_i^2 z_i^2\left(\varepsilon_{i+1}^n\right)^2\right)^{1/2}\right] \le \rho E\left[\sum_{i=0}^{n-1}\beta^i\left(u_i^2 + v_i^2\right)\right] + C^2\beta^2(h/2)E\left[\sum_{i=0}^{n-1}\beta^i z_i^2\left(\varepsilon_{i+1}^n\right)^2\right] + \frac{1}{2}E\left[\sup_{0\le k\le n}\beta^k y_k^2\right] = \rho E\left[\sum_{i=0}^{n-1}\beta^i\left(u_i^2 + v_i^2\right)\right] + C^2\beta^2(h/2)E\left[\sum_{i=0}^{n-1}\beta^i z_i^2\right] + \frac{1}{2}E\left[\sup_{0\le k\le n}\beta^k y_k^2\right].$$
The remaining steps are obtained in the same way as before. □
Lemma 3.
Under Assumptions 1–4, together with Assumptions 5 and 6 or with Assumption 7, for every $p > 0$ we have, as $n \to \infty$,
$$\left(Y^{n,p}, Z^{n,p}\right) \to \left(Y^{\infty,p}, Z^{\infty,p}\right).$$
Proof. 
We use induction to prove that if $\left(Y^{n,p}, Z^{n,p}\right) \to \left(Y^{\infty,p}, Z^{\infty,p}\right)$, then $\left(Y^{n,p+1}, Z^{n,p+1}\right) \to \left(Y^{\infty,p+1}, Z^{\infty,p+1}\right)$. For convenience of notation, we omit the index $p$ and work in a continuous-time setting (the pair appearing inside $f$ is the $p$-th iterate, while the solution pair is the $(p+1)$-th); hence, by virtue of Remark 2, (16) and (17) become
$$Y_t = \xi + \int_t^T f\left(Y_s, Z_s\right)ds - \int_t^T Z_s\,dW_s, \qquad Y_t^n = \xi_n + \int_t^T f\left(Y_s^n, Z_s^n\right)dA_s^n - \int_t^T Z_s^n\,dW_s^n, \qquad 0 \le t \le T,$$
where $A_s^n = [s/h]\,h$. Suppose we already know that the iterates $\left(Y_t^n, Z_t^n\right)_{0\le t\le T}$ appearing inside $f$ converge to $\left(Y_t, Z_t\right)_{0\le t\le T}$; we only need to prove that the solution pairs converge as well. We argue by induction on $p$. For $p = 0$, $Y = Z = Y^n = Z^n = 0$; set
$$M_t^n = Y_t^n + \int_0^t f(0,0)\,dA_s^n, \qquad 0 \le t \le T,$$
which satisfies
$$M_t^n = M_0^n + \int_0^t Z_s^n\,dW_s^n;$$
thus, $M^n$ is an $\mathcal{F}^n$-martingale. Since $Y_T^n = \xi_n$, we have
$$M_t^n = E\left[M_T^n \mid \mathcal{F}_t^n\right], \qquad M_T^n = \xi_n + \int_0^T f(0,0)\,dA_s^n.$$
To apply Theorems 3 and 4, we need to prove the convergence of $M_T^n$ in $L^1$. Since
$$\left|M_T^n - Y_T - \int_0^T f(0,0)\,ds\right| = \left|\xi_n - \xi + \int_0^T f(0,0)\,dA_s^n - \int_0^T f(0,0)\,ds\right| \le \left|\xi_n - \xi\right| + \left|\int_0^T f(0,0)\,dA_s^n - \int_0^T f(0,0)\,ds\right|,$$
from Assumption 3, the quantity above converges to $0$ in $L^1$. Applying Theorems 3 and 4, we have $M^n \xrightarrow{u.c.p.} M$, where
$$M_t = Y_t + \int_0^t f(0,0)\,ds$$
and
$$\int_0^T \left(Z_t^n - Z_t\right)^2 dt \xrightarrow{P} 0.$$
Therefore, we obtain
$$\sup_{0\le t\le T}\left|M_t^n - M_t\right| + \int_0^T \left(Z_s^n - Z_s\right)^2 ds \xrightarrow{P} 0.$$
Our task is to prove
$$\sup_{0\le t\le T}\left|Y_t^n - Y_t\right| + \int_0^T \left(Z_s^n - Z_s\right)^2 ds \xrightarrow{P} 0;$$
hence, we only need to prove
$$\sup_{0\le t\le T}\left|\int_0^t f(0,0)\,dA_s^n - \int_0^t f(0,0)\,ds\right| \xrightarrow{P} 0,$$
where $A_s^n = [s/h]\,h$; the result clearly follows from Riemann integration, since $\int_0^t f(0,0)\,dA_s^n = f(0,0)\,h[t/h]$ and $\left|h[t/h] - t\right| \le h$. For the case $p \ge 1$, using an analysis similar to that of Briand et al. [9], set
$$M_t^n = Y_t^n + \int_0^t f\left(Y_s^n, Z_s^n\right)dA_s^n, \qquad 0 \le t \le T,$$
which satisfies
$$M_t^n = M_0^n + \int_0^t Z_s^n\,dW_s^n.$$
Hence, $M^n$ is an $\mathcal{F}^n$-martingale. Since $Y_T^n = \xi_n$, we have
$$M_t^n = E\left[M_T^n \mid \mathcal{F}_t^n\right], \qquad M_T^n = Y_T^n + \int_0^T f\left(Y_s^n, Z_s^n\right)dA_s^n.$$
To apply Theorems 3 and 4, we need to prove the convergence of $M_T^n$ in $L^1$. Noting that $Y^n$ and $Z^n$ are piecewise constant and $Y$, $Z$ are continuous, we have
$$\left|M_T^n - Y_T - \int_0^T f\left(Y_s, Z_s\right)ds\right| \le \left|Y_T^n - Y_T\right| + \int_0^T \left|f\left(Y_s^n, Z_s^n\right) - f\left(Y_s, Z_s\right)\right|ds \le (1 + KT)\sup_{0\le t\le T}\left|Y_t^n - Y_t\right| + K\int_0^T \left|Z_s^n - Z_s\right|ds.$$
This quantity converges in $L^1$ because of its boundedness in $L^2$; we refer to [14], Theorem 4.2.1. Thanks to Theorems 3 and 4, by the same steps as in the case $p = 0$, we again have
$$\sup_{0\le t\le T}\left|M_t^n - M_t\right| + \int_0^T \left(Z_s^n - Z_s\right)^2 ds \xrightarrow{P} 0,$$
where
$$M_t = Y_t + \int_0^t f\left(Y_s, Z_s\right)ds.$$
Now, we proceed to show
$$\sup_{0\le t\le T}\left|Y_t^n - Y_t\right| + \int_0^T \left(Z_s^n - Z_s\right)^2 ds \xrightarrow{P} 0,$$
so we only need to prove
$$\sup_{0\le t\le T}\left|\int_0^t f\left(Y_s^n, Z_s^n\right)dA_s^n - \int_0^t f\left(Y_s, Z_s\right)ds\right| \xrightarrow{P} 0.$$
This follows as in (29), using the convergence of $\left(Y^n, Z^n\right)$ to $(Y, Z)$ and the Lipschitz continuity of $f$, which finishes the proof. □
Now, we prove Theorem 1. Due to the two lemmas above, we only need to show
$$\sup_{0\le t\le T}\left|\hat{Y}_t^n - Y_t^n\right| \xrightarrow{P} 0.$$
Since $\hat{Y}_t^n = E\left[Y_t^n \mid \mathcal{F}_t^n\right]$, this amounts to showing
$$\sup_{0\le t\le T}\left|E\left[Y_t^n \mid \mathcal{F}_t^n\right] - Y_t^n\right| \xrightarrow{P} 0.$$
In fact,
$$\sup_{0\le t\le T}\left|E\left[Y_t^n \mid \mathcal{F}_t^n\right] - Y_t + Y_t - Y_t^n\right| \le \sup_{0\le t\le T}\left|E\left[Y_t^n \mid \mathcal{F}_t^n\right] - Y_t\right| + \sup_{0\le t\le T}\left|Y_t - Y_t^n\right| \le \sup_{0\le t\le T}\left|E\left[Y_t^n - Y_t \mid \mathcal{F}_t^n\right]\right| + \sup_{0\le t\le T}\left|E\left[Y_t \mid \mathcal{F}_t^n\right] - Y_t\right| + \sup_{0\le t\le T}\left|Y_t - Y_t^n\right|.$$
From Lemmas 2 and 3, the third term converges to $0$ in probability. For the first term, thanks to Doob's inequality, we have
$$\lim_n P\left(\sup_{0\le t\le T}\left|E\left[Y_t^n - Y_t \mid \mathcal{F}_t^n\right]\right| \ge \varepsilon\right) \le \lim_n P\left(\sup_{0\le s\le T} E\left[\sup_{0\le t\le T}\left|Y_t^n - Y_t\right| \,\Big|\, \mathcal{F}_s^n\right] \ge \varepsilon\right) \le \lim_n \varepsilon^{-1} E\left[\sup_{0\le t\le T}\left|Y_t^n - Y_t\right|\right] = 0,$$
where the last equality holds because $\sup_{0\le t\le T}\left|Y_t^n - Y_t\right|$ is bounded in $L^2$ (and converges to $0$ in probability).
Furthermore, we proved $\mathcal{F}^n \xrightarrow{w} \mathcal{F}$ in Theorem 3; combining this with
$$E\left[\sup_{0\le t\le T}\left|Y_t\right|\right] < \infty,$$
where $Y$ is the solution of (1), we can prove the convergence of the second term using Lemma 6. This completes the proof.

5. Some Lemmas

Lemma 4.
Under Assumptions 5 and 6 or Assumption 7, it holds that $W^n \Rightarrow W$.
Proof. 
We follow the notation of the book by Jacod and Shiryaev [11]. If Assumptions 5 and 6 hold, then the predictable characteristics of $W^n$ "without truncation" are
$$B_t^n = \sum_{k=1}^{[t/h]} E\left[\sqrt{h}\,\varepsilon_k^n \mid \mathcal{G}_{k-1}^n\right] = 0, \qquad C_t^n = \sum_{k=1}^{[t/h]} E\left[h\left(\varepsilon_k^n\right)^2 \mid \mathcal{G}_{k-1}^n\right] = h[t/h], \qquad g * \nu_t^n = \sum_{k=1}^{[t/h]} E\left[g\left(\sqrt{h}\,\varepsilon_k^n\right) \mid \mathcal{G}_{k-1}^n\right], \quad g \in C_1(\mathbb{R}).$$
Recall that the characteristics of standard Brownian motion are $B = 0$, $C_t = t$, $\nu = 0$. Furthermore, for some $C, a > 0$, we have $|g(x)| \le C|x|^2\mathbf{1}_{\{|x|>a\}}$ because $g \in C_1(\mathbb{R})$, implying
$$\left|g * \nu_t^n\right| \le \sum_{k=1}^{[t/h]} E\left[C h\left(\varepsilon_k^n\right)^2\mathbf{1}_{\{|\sqrt{h}\,\varepsilon_k^n| > a\}}\right] = [t/h]\, C\, h\,\mathbf{1}_{\{\sqrt{h} > a\}},$$
which tends to $0$ as $n \to \infty$. Additionally, $|x|^2\mathbf{1}_{\{|x|>a\}} * \nu_t^n = [t/h]\, h\,\mathbf{1}_{\{\sqrt{h} > a\}} \to 0$ as $n \to \infty$. Combining the above, thanks to Theorem VII.3.7 in [11], we obtain $W^n \Rightarrow W$.
If Assumption 7 holds, the proof is analogous. □
Lemma 5.
Let $\eta_i$, $1 \le i \le n$, be a sequence of random variables with the standard normal distribution $N(0,1)$. Then, we have
$$E\left[\max_{1\le i\le n}\eta_i\right] = O\left(\sqrt{\log n}\right).$$
Consequently, it holds that
$$E\left[\max_{1\le i\le n}\left|\eta_i\right|\right] = O\left(\sqrt{\log n}\right).$$
Proof. 
On the one hand, let $Z_n = \sum_{i=1}^n \exp\left(\beta\eta_i\right)$ for $\beta > 0$. We have
$$\beta\max_{1\le i\le n}\eta_i = \log\exp\left(\max_{1\le i\le n}\beta\eta_i\right) \le \log Z_n.$$
Using Jensen's inequality and taking expectations on both sides of the above inequality, we obtain
$$\beta\, E\left[\max_{1\le i\le n}\eta_i\right] \le E\left[\log Z_n\right] \le \log E\left[Z_n\right] = \log n + \beta^2/2.$$
Taking $\beta = \sqrt{\log n}$, we obtain $E\left[\max_{1\le i\le n}\eta_i\right] \le \frac{3}{2}\sqrt{\log n}$.
On the other hand, it is obvious that $\left\{\max_{1\le i\le n}\eta_i > t\right\} = \bigcup_{i=1}^n\left\{\eta_i > t\right\}$ for all $t > 0$. Since the $\eta_i$ are independent, we obtain
$$P\left(\max_{1\le i\le n}\eta_i > t\right) = 1 - P\left(\bigcap_{i=1}^n\left\{\eta_i \le t\right\}\right) = 1 - \prod_{i=1}^n\left(1 - P\left(\eta_i > t\right)\right).$$
For $0 < x < 1$, since $1 - x < \exp(-x)$, we have
$$P\left(\max_{1\le i\le n}\eta_i > t\right) > 1 - \exp\left(-\sum_{i=1}^n P\left(\eta_i > t\right)\right).$$
Set $A = \bigcup_{i\ge1}\left\{\eta_i > 0\right\}$; it is easy to verify that
$$E\left[\max_{1\le i\le n}\eta_i\right] = E\left[\max_{1\le i\le n}\eta_i\,\mathbf{1}_A + \max_{1\le i\le n}\eta_i\,\mathbf{1}_{A^c}\right] \ge -E\left[\left|\eta_1\right|\right] + E\left[\max_{1\le i\le n}\eta_i^+\right].$$
Using the Fubini theorem, we can show that, for fixed $\mu > 0$,
$$E\left[\max_{1\le i\le n}\eta_i^+\right] \ge \mu\, P\left(\max_{1\le i\le n}\eta_i^+ > \mu\right).$$
It is well known that, for $t$ larger than some $t_0$, since $\eta_i$ is a Gaussian variable,
$$\frac{C}{t}\exp\left(-t^2/2\right) \le P\left(\eta_i > t\right).$$
Combining (30)–(33), we obtain
$$E\left[\max_{1\le i\le n}\eta_i\right] \ge -E\left[\left|\eta_1\right|\right] + \mu\left(1 - \exp\left(-C\, n\, e^{-\mu^2/2}/\mu\right)\right).$$
Now, we choose $\mu = \sqrt{2\log n - 2\log\log n}$ and compute the second factor on the right-hand side of (34):
$$n\, e^{-\mu^2/2}/\mu = n \times \frac{\log n}{n} \times \frac{1}{\sqrt{2\log n - 2\log\log n}} \to +\infty,$$
which implies
$$1 - \exp\left(-C\, n\, e^{-\mu^2/2}/\mu\right) \to 1.$$
Therefore, $E\left[\max_{1\le i\le n}\eta_i\right] \ge -E\left[\left|\eta_1\right|\right] + \mu\left(1 - \exp\left(-C\, n\, e^{-\mu^2/2}/\mu\right)\right) \ge C\sqrt{\log n}$ for large $n$, so, for some constants $C_1, C_2$ and all sufficiently large $n$, we have
$$C_1\sqrt{\log n} \le E\left[\max_{1\le i\le n}\eta_i\right] \le C_2\sqrt{\log n},$$
which completes the proof of the first claim.
We now prove the last result. Since $\max_{1\le i\le n}\eta_i \le \max_{1\le i\le n}\left|\eta_i\right|$ and (35) holds, it is obvious that we only need to verify
$$E\left[\max_{1\le i\le n}\left|\eta_i\right|\right] \le C_3\sqrt{\log n} + C_4.$$
The same argument as at the beginning of the proof applies: noting that
$$E\left[\exp\left(\beta\left|\eta_i\right|\right)\right] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\exp\left(\beta|x|\right)\exp\left(-\frac{x^2}{2}\right)dx \le \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\exp\left(\beta x\right)\exp\left(-\frac{x^2}{2}\right)dx + \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\exp\left(-\beta x\right)\exp\left(-\frac{x^2}{2}\right)dx = E\left[\exp\left(\beta\eta_i\right)\right] + E\left[\exp\left(-\beta\eta_i\right)\right] = 2\exp\left(\frac{\beta^2}{2}\right),$$
we get
$$\beta\, E\left[\max_{1\le i\le n}\left|\eta_i\right|\right] \le \log(2n) + \beta^2/2.$$
Taking $\beta = \sqrt{\log 2n}$ and using $\sqrt{a+b} \le \sqrt{a} + \sqrt{b}$, we obtain
$$E\left[\max_{1\le i\le n}\left|\eta_i\right|\right] \le \frac{3}{2}\sqrt{\log 2 + \log n} \le \frac{3}{2}\left(\sqrt{\log 2} + \sqrt{\log n}\right).$$
This completes the proof. □
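A quick Monte Carlo sanity check of this growth rate (an illustration only, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)

# check that E[max_i |eta_i|] grows like sqrt(log n) for i.i.d. N(0, 1) variables
for n in (10, 100, 1_000, 10_000):
    maxima = np.abs(rng.standard_normal((1_000, n))).max(axis=1)
    print(n, maxima.mean() / np.sqrt(np.log(n)))
```

The printed ratio settles near a constant (approaching $\sqrt{2}$ for large $n$), consistent with the two-sided bound $C_1\sqrt{\log n} \le E\left[\max_{1\le i\le n}\eta_i\right] \le C_2\sqrt{\log n}$ established above.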
The following lemma is a simplified version of Theorem 1 in [12].
Lemma 6.
Let $\{\mathcal{F}_t^n\}_{0\le t\le T}$ be a sequence of filtrations and $\{\mathcal{F}_t\}_{0\le t\le T}$ a filtration on $(\Omega, \mathcal{F}, P)$ such that $\mathcal{F}^n \xrightarrow{w} \mathcal{F}$. Let $X$ be an $\mathcal{F}$-adapted continuous process such that $\sup_{0\le t\le T}\left|X_t\right|$ is integrable. Then, $E\left[X_\cdot \mid \mathcal{F}_\cdot^n\right] \xrightarrow{u.c.p.} X_\cdot$.
We also list Proposition 1.5 (b) and Corollary 1.9 in [15] for readers’ convenience.
Lemma 7.
If a sequence of local martingales $M^n$ satisfies
$$\sup_n E\left[\sup_{0\le t\le T}\left|\Delta M_t^n\right|\right] < \infty,$$
then it satisfies the UT condition. Moreover, if $M^n \xrightarrow{u.c.p.} M$, then the quadratic variation processes satisfy $\left[M^n, M^n\right] \xrightarrow{u.c.p.} [M, M]$.

Author Contributions

Writing—original draft, Y.G. and N.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bismut, J. Conjugate convex functions in optimal stochastic control. J. Math. Anal. Appl. 1973, 44, 384–404.
  2. Pardoux, É.; Peng, S. Adapted solution of a backward stochastic differential equation. Syst. Control Lett. 1990, 14, 55–61.
  3. El Karoui, N.; Peng, S.; Quenez, M. Backward stochastic differential equations in finance. Math. Financ. 1997, 7, 1–71.
  4. Ma, J.; Protter, P.; Yong, J. Solving forward-backward stochastic differential equations explicitly—A four step scheme. Probab. Theory Relat. Fields 1994, 98, 339–359.
  5. Douglas, J.; Ma, J.; Protter, P. Numerical methods for forward-backward stochastic differential equations. Ann. Appl. Probab. 1996, 6, 940–968.
  6. Zhang, J. A numerical scheme for BSDEs. Ann. Appl. Probab. 2004, 14, 459–488.
  7. Chevance, D. Numerical methods for backward stochastic differential equations. In Numerical Methods in Finance; Publications of the Newton Institute; Cambridge University Press: Cambridge, UK, 1997; Volume 13, pp. 232–244.
  8. Coquet, F.; Mackevičius, V.; Mémin, J. Stability in D of martingales and backward equations under discretization of filtration. Stoch. Process. Appl. 1998, 75, 235–248.
  9. Briand, P.; Delyon, B.; Mémin, J. Donsker-type theorem for BSDEs. Electron. Commun. Probab. 2001, 6, 1–14.
  10. Donsker, M. An invariance principle for certain probability limit theorems. Mem. Am. Math. Soc. 1951, 6, 1–12.
  11. Jacod, J.; Shiryaev, A. Limit Theorems for Stochastic Processes, 2nd ed.; Grundlehren der Mathematischen Wissenschaften; Springer: Berlin/Heidelberg, Germany, 2003; Volume 288.
  12. Coquet, F.; Mémin, J.; Słominski, L. On weak convergence of filtrations. In Séminaire de Probabilités XXXV; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 2001; Volume 1755, pp. 306–328.
  13. Aldous, D. Stopping times and tightness. Ann. Probab. 1978, 6, 335–340.
  14. Zhang, J. Backward Stochastic Differential Equations: From Linear to Fully Nonlinear Theory; Springer: New York, NY, USA, 2017.
  15. Mémin, J.; Słominski, L. Condition UT et stabilité en loi des solutions d'équations différentielles stochastiques. In Séminaire de Probabilités XXV; Springer: Berlin/Heidelberg, Germany, 2006; pp. 162–177.

