
Density Formula in Malliavin Calculus by Using Stein’s Method and Diffusions

Division of Data Science, Data Science Convergence Research Center, Hallym University, Chuncheon 24252, Republic of Korea
Mathematics 2025, 13(2), 323; https://doi.org/10.3390/math13020323
Submission received: 29 November 2024 / Revised: 8 January 2025 / Accepted: 15 January 2025 / Published: 20 January 2025

Abstract:
Let $G$ be a random variable given by a functional of an isonormal Gaussian process $X$ defined on some probability space. Several studies have been conducted to determine the exact form of the density function of the random variable $G$. In this paper, unlike previous studies, we use Stein's method for invariant measures of diffusions to obtain a density formula for $G$. By comparing the density function obtained in this paper with that of the diffusion invariant measure, we find that the diffusion coefficient of an Itô diffusion with an invariant measure having a density can be expressed in terms of operators from Malliavin calculus.

1. Introduction

Let $X=\{X(h),\,h\in H\}$, where $H$ is a real separable Hilbert space, be an isonormal Gaussian process defined on a probability space $(\Omega,\mathcal{F},P)$, and let $G$ be a random variable given by a functional of $X$. The following formula for the density of a random variable $G$ is a well-known fact of Malliavin calculus: if $DG/\|DG\|_H^2$ belongs to the domain of the divergence operator $\delta$, then the law of $G$ has a continuous and bounded density $p_G$, given by
\[
p_G(x)=E\Big[\mathbf{1}_{\{G>x\}}\,\delta\Big(\frac{DG}{\|DG\|_H^2}\Big)\Big]\quad\text{for all } x\in\mathbb{R}.
\]
Several examples are detailed in the sections on Malliavin calculus in Nualart's book [1] (or [2]). Nourdin and Viens (2009) proved a new general formula for $p_G$ that does not refer to the divergence operator $\delta$. For a random variable $G\in\mathbb{D}^{1,2}$ with $E[G]=0$, where $\mathbb{D}^{1,2}$ is the domain of the Malliavin derivative operator $D$ with respect to $X$, such that the Malliavin derivative $DG$ of $G$ is a random element of $H$ with $E[\|DG\|_H^2]<\infty$, we define the function $g_G$ by
\[
g_G(x)=E\big[\langle DG,\,-DL^{-1}G\rangle_H\,\big|\,G=x\big]. \tag{1}
\]
The operator $L$ appearing in (1) is the so-called generator of the Ornstein–Uhlenbeck semigroup, and $L^{-1}$ is its pseudo-inverse; for details, see Section 2. It is well known that $g_G$ is non-negative on the support of the law of $G$ (see Proposition 3.9 in [3]).
Under some general conditions on a random variable $G$, Nourdin and Viens (2009) obtained a new formula for the density $p_G$ of the law of $G$, provided that it exists. A precise statement is given in the following theorem.
Theorem 1.
(Nourdin and Viens). The law of $G$ admits a density $p_G$ (with respect to Lebesgue measure) if and only if the random variable $g_G(G)$ is almost surely strictly positive. In this case, the support of $p_G$, denoted by $\mathrm{supp}(p_G)$, is a closed interval of $\mathbb{R}$ containing zero and, for almost all $x\in\mathrm{supp}(p_G)$,
\[
p_G(x)=\frac{E[|G|]}{2\,g_G(x)}\exp\Big(-\int_0^x \frac{y}{g_G(y)}\,dy\Big). \tag{2}
\]
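This density formula is easy to sanity-check numerically. The sketch below is not from the paper; it is a minimal check under the assumption that $G$ is standard normal, so that $g_G \equiv 1$ and $E|G| = \sqrt{2/\pi}$, in which case the right-hand side must reproduce the $N(0,1)$ density:

```python
import math

def nourdin_viens_density(x, g, e_abs_g, n=1000):
    # p_G(x) = E|G| / (2 g(x)) * exp( -int_0^x y / g(y) dy ),
    # with the integral computed by the trapezoidal rule.
    total = 0.0
    for i in range(n):
        y0, y1 = x * i / n, x * (i + 1) / n
        total += 0.5 * (y0 / g(y0) + y1 / g(y1)) * (y1 - y0)
    return e_abs_g / (2.0 * g(x)) * math.exp(-total)

# For G standard normal, g_G(x) = 1 and E|G| = sqrt(2/pi),
# so the formula must reproduce the N(0,1) density.
for x in [-1.5, 0.0, 0.7, 2.0]:
    p = nourdin_viens_density(x, lambda y: 1.0, math.sqrt(2.0 / math.pi))
    phi = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
    assert abs(p - phi) < 1e-9
```

The same function can be reused with any candidate $g_G$ and $E|G|$; only the Gaussian specialization above is verified here.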
Assume that the density $p$ satisfies the following conditions: it is continuous and bounded, with $\int_l^u x^2\,p(x)\,dx<\infty$. Let us set an interval $I=(l,u)$ ($-\infty\le l<u\le\infty$). Then,
\[
p(x)>0\ \text{if } x\in I,\qquad p(x)=0\ \text{if } x\in I^c.
\]
We define a continuous function $b$ on $I$ such that there exists $e\in(l,u)$ satisfying
\[
b(x)>0\ \text{if } x\in(l,e),\qquad b(x)<0\ \text{if } x\in(e,u),
\]
where $b\,p$ is bounded on $I$ and
\[
\int_l^u b(x)\,p(x)\,dx=0.
\]
Define
\[
a(x)=\frac{2}{p(x)}\int_l^x b(y)\,p(y)\,dy. \tag{3}
\]
Then, the diffusion with the invariant density $p$ satisfies the Stochastic Differential Equation (SDE)
\[
dX_t=b(X_t)\,dt+\sqrt{a(X_t)}\,dW_t, \tag{4}
\]
where $W$ is a standard Brownian motion. Equation (4) should be interpreted as an informal way of expressing the corresponding integral equation,
\[
X_t=X_0+\int_0^t b(X_s)\,ds+\int_0^t \sqrt{a(X_s)}\,dW_s. \tag{5}
\]
The stochastic integral in Equation (5) is an Itô integral.
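As a concrete illustration (ours, not part of the paper), an SDE of this form can be simulated with the Euler–Maruyama scheme. We take the Ornstein–Uhlenbeck case $b(x) = -x$, $a(x) = 2$, whose invariant law is $N(0,1)$, and check the stationary moments empirically:

```python
import math
import random

def euler_maruyama(b, a, x0, dt, n_steps, rng):
    # One path of dX_t = b(X_t) dt + sqrt(a(X_t)) dW_t.
    x = x0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        x += b(x) * dt + math.sqrt(a(x)) * dw
    return x

# Ornstein-Uhlenbeck: b(x) = -x, a(x) = 2; the invariant density is N(0, 1).
rng = random.Random(0)
samples = [euler_maruyama(lambda x: -x, lambda x: 2.0, 0.0, 0.01, 500, rng)
           for _ in range(1500)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
assert abs(mean) < 0.1 and abs(var - 1.0) < 0.15
```

The tolerances are loose because the check is Monte Carlo; the point is only that long-run samples of (5) are distributed according to the invariant density $p$.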
In previous studies in this field (see [1,2,4]), the density function was obtained using the integration-by-parts formula of Malliavin calculus (see Lemma 1 below). In this paper, by contrast, we derive a new density formula for a random variable $G$, satisfying appropriate conditions related to Malliavin calculus, from the following equation obtained by using Stein's method: for every $z\in\mathbb{R}$,
\[
P(G\le z)-P(F\le z)=E\Big[\tilde{h}_z'(G)\Big(\tfrac{1}{2}a(G)-\langle DL^{-1}b(G),\,DG\rangle_H\Big)\Big]+E[b(G)]\,E[\tilde{h}_z(G)], \tag{6}
\]
where $F$ is a random variable with the invariant density $p$, and $\tilde{h}_z$ is a solution to Stein's equation (for a detailed explanation of Stein's method, see [5,6,7]).
The density formula obtained in this paper provides a new method for addressing an existing problem (see Theorem 2 in [8]) linked to diffusions with an invariant density. As an application of our results, we show that the diffusion coefficient $a$ of SDE (4) can be written in an explicit form, similar to (1), if the random variable $G$ in (6), taking its values in $I$, has a density $p$ and satisfies $b(G)\in L^2(\Omega)$. The rest of this paper is organized as follows. Section 2 reviews basic notation and results from Malliavin calculus. In Section 3, we briefly discuss the construction of a diffusion process with an invariant density $p$, and then describe our main results. Finally, as an application of our main results, Section 4 gives some examples.

2. Preliminaries

Malliavin Calculus

In this section, we present some basic facts about Malliavin operators defined on spaces of random elements that are functionals of possibly infinite-dimensional Gaussian fields; for a more detailed explanation, see [1,9]. Suppose that $H$ is a real separable Hilbert space with scalar product $\langle\cdot,\cdot\rangle_H$. Let $X=\{X(h),\,h\in H\}$ be an isonormal Gaussian process, that is, a centered Gaussian family of random variables such that $E[X(h)X(g)]=\langle h,g\rangle_H$. For every $n\ge 1$, let $\mathcal{H}_n$ be the $n$th Wiener chaos of $X$, which is the closed linear subspace of $L^2(\Omega)$ generated by $\{H_n(X(h)):h\in H,\ \|h\|_H=1\}$, where $H_n$ is the $n$th Hermite polynomial. We define a linear isometric mapping $I_n:H^{\odot n}\to\mathcal{H}_n$ by $I_n(h^{\otimes n})=n!\,H_n(X(h))$, where $H^{\odot n}$ denotes the $n$th symmetric tensor product of $H$. It is well known that any square-integrable random variable $F\in L^2(\Omega,\mathcal{F},P)$ ($\mathcal{F}$ denotes the $\sigma$-field generated by $X$) can be expanded into a series of multiple stochastic integrals,
\[
F=\sum_{q=0}^\infty I_q(f_q),
\]
where $f_0=E[F]$, the series converges in $L^2(\Omega)$, and the functions $f_q\in H^{\odot q}$ are uniquely determined by $F$.
Let $\mathcal{S}$ be the class of smooth and cylindrical random variables $F$ of the form
\[
F=f(X(\varphi_1),\ldots,X(\varphi_n)),
\]
where $n\ge 1$, $f\in C_b^\infty(\mathbb{R}^n)$, and $\varphi_i\in H$, $i=1,\ldots,n$. The Malliavin derivative of $F$ with respect to $X$ is the element of $L^2(\Omega;H)$ defined by
\[
DF=\sum_{i=1}^n \frac{\partial f}{\partial x_i}(X(\varphi_1),\ldots,X(\varphi_n))\,\varphi_i.
\]
We denote by $\mathbb{D}^{l,p}$ the closure of the class $\mathcal{S}$ with respect to the norm
\[
\|F\|_{l,p}^p=E[|F|^p]+\sum_{k=1}^l E\big[\|D^k F\|_{H^{\otimes k}}^p\big].
\]
We denote by $\delta$ the adjoint of the operator $D$, also called the divergence operator. The domain of $\delta$, denoted by $\mathrm{Dom}(\delta)$, is the set of elements $u\in L^2(\Omega;H)$ for which there exists a constant $C$, depending on $u$, such that
\[
\big|E[\langle DF,u\rangle_H]\big|\le C\,\big(E[|F|^2]\big)^{1/2}\quad\text{for all } F\in\mathbb{D}^{1,2}.
\]
If $u\in\mathrm{Dom}(\delta)$, then $\delta(u)$ is the element of $L^2(\Omega)$ defined by the duality relationship
\[
E[F\,\delta(u)]=E[\langle DF,u\rangle_H]\quad\text{for every } F\in\mathbb{D}^{1,2}.
\]
Recall that $F\in L^2(\Omega)$ can be expanded as $F=E[F]+\sum_{q=1}^\infty P_qF$, where $P_q$ is the projection operator from $L^2(\Omega)$ onto the $q$th Wiener chaos $\mathcal{H}_q$. The operator $L$ is defined through the projection operators $P_q$, $q=0,1,2,\ldots$, as $L=-\sum_{q=0}^\infty q\,P_q$, and is called the infinitesimal generator of the Ornstein–Uhlenbeck semigroup. The relationship between the operators $D$, $\delta$, and $L$ is given as follows: $\delta DF=-LF$; that is, for $F\in L^2(\Omega)$, the statement $F\in\mathrm{Dom}(L)$ is equivalent to $F\in\mathrm{Dom}(\delta D)$ (i.e., $F\in\mathbb{D}^{1,2}$ and $DF\in\mathrm{Dom}(\delta)$), and in this case, $\delta DF=-LF$. For any $F\in L^2(\Omega)$, we define the operator $L^{-1}$, the pseudo-inverse of $L$, as $L^{-1}F=-\sum_{q=1}^\infty \frac{1}{q}P_qF$. Note that $L^{-1}$ takes values in $\mathbb{D}^{2,2}$ and that $LL^{-1}F=F-E[F]$ for all $F\in L^2(\Omega)$.
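The orthogonality underlying the chaos decomposition can be checked numerically in the one-dimensional picture, where $E[H_n(Z)H_m(Z)] = n!\,\mathbf{1}_{\{n=m\}}$ for $Z \sim N(0,1)$. The following sketch (an illustration of ours, not from the paper) uses the probabilists' Hermite recurrence and a trapezoidal quadrature against the Gaussian weight:

```python
import math

def hermite(n, x):
    # Probabilists' Hermite polynomials: H_0 = 1, H_1 = x,
    # H_{k+1}(x) = x H_k(x) - k H_{k-1}(x).
    h0, h1 = 1.0, x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, x * h1 - k * h0
    return h1

def gaussian_moment(f, lo=-8.0, hi=8.0, n=4000):
    # E[f(Z)] for Z ~ N(0,1), by the trapezoidal rule.
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * f(x) * math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
    return total * h

# E[H_n(Z) H_m(Z)] = n! if n = m, and 0 otherwise.
assert abs(gaussian_moment(lambda x: hermite(2, x) ** 2) - 2.0) < 1e-6
assert abs(gaussian_moment(lambda x: hermite(3, x) ** 2) - 6.0) < 1e-6
assert abs(gaussian_moment(lambda x: hermite(2, x) * hermite(3, x))) < 1e-6
```

This is only the scalar shadow of the chaos structure; the isometry $I_n(h^{\otimes n}) = n!\,H_n(X(h))$ lifts it to the Hilbert-space setting.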

3. Diffusion Process with Invariant Measures and Main Results

In this section, we give the construction of a diffusion process with an invariant measure and present the main results of this paper.

3.1. Diffusion Process with Invariant Measures

In this section, we briefly describe the construction of a diffusion process with an invariant measure $\mu$ having a density $p$ with respect to the Lebesgue measure (for more details, see [8,10]). Let $F$ be a random variable whose law is the probability measure $\mu$ on $I=(l,u)$ ($l<u$) with a density $p$ that is continuous, bounded, and strictly positive on $I$, and assume $E[F^2]<\infty$. Let $b$ be a continuous function on $I$ such that there exists $e\in(l,u)$ satisfying $b(x)>0$ for $x\in(l,e)$ and $b(x)<0$ for $x\in(e,u)$. Moreover, the function $b\,p$ is bounded on $I$, and
\[
E[b(F)]=0.
\]
For $x\in I$, define
\[
a(x)=\frac{2}{p(x)}\int_l^x b(y)\,p(y)\,dy. \tag{10}
\]
Then, the diffusion coefficient $a$ in (10) is strictly positive for all $x\in(l,u)$ and satisfies $E[a(F)]<\infty$. Equation (10) implies that, for some $\beta\in I$,
\[
p(x)\,a(x)=p(\beta)\,a(\beta)\exp\Big(\int_\beta^x \frac{2b(y)}{a(y)}\,dy\Big). \tag{11}
\]
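The relation between $p$, $a$, and $b$ in (11) can be verified numerically. The sketch below (an illustrative check of ours, not from the paper) uses the model with uniform invariant density on $(0,1)$ — drift $b(x)=1/2-x$, for which (10) gives $a(x)=x(1-x)$ — and compares the two sides of (11) with $\beta=1/2$:

```python
import math

def p(x): return 1.0            # uniform invariant density on (0, 1)
def b(x): return 0.5 - x        # drift: positive below 1/2, negative above
def a(x): return x * (1.0 - x)  # a(x) = (2/p(x)) * int_0^x b(y) p(y) dy

def lhs(x):
    return p(x) * a(x)

def rhs(x, beta=0.5, n=10000):
    # p(beta) a(beta) exp( int_beta^x 2 b(y)/a(y) dy ), midpoint rule.
    h = (x - beta) / n
    integral = sum(2.0 * b(beta + (i + 0.5) * h) / a(beta + (i + 0.5) * h)
                   for i in range(n)) * h
    return p(beta) * a(beta) * math.exp(integral)

for x in [0.2, 0.5, 0.8]:
    assert abs(lhs(x) - rhs(x)) < 1e-6
```

In this model the integral is available in closed form ($2b/a$ is the logarithmic derivative of $y(1-y)$), which is why the numerical agreement is essentially exact.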
Then, the following SDE:
\[
dX_t=b(X_t)\,dt+\sqrt{a(X_t)}\,dB_t, \tag{12}
\]
has a unique ergodic Markovian weak solution with the invariant density $p$. Let $C_0(I)=\{f:I\to\mathbb{R}\mid f \text{ is continuous on } I \text{ and vanishes at the boundary of } I\}$. For $f\in C_0(I)$, define
\[
h_f(x)=\int_0^x \tilde{h}_f(y)\,dy,
\]
where
\[
\tilde{h}_f(x)=\frac{2\int_l^x \big(f(y)-E[f(F)]\big)\,p(y)\,dy}{a(x)\,p(x)}.
\]
Then, $h_f$ satisfies Stein's equation,
\[
f(x)-E[f(F)]=b(x)\,h_f'(x)+\tfrac{1}{2}a(x)\,h_f''(x)=b(x)\,\tilde{h}_f(x)+\tfrac{1}{2}a(x)\,\tilde{h}_f'(x), \tag{13}
\]
where F is a random variable with a probability measure μ as its law.
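As a quick pointwise check (ours, not the paper's), Stein's equation (13) can be verified for the diffusion with uniform invariant law on $(0,1)$, i.e., $b(x)=1/2-x$ and $a(x)=x(1-x)$. For $f=\mathbf{1}_{(-\infty,z]}$, the explicit solution on the branch $x<z$ is $\tilde h_z(x)=2(1-z)/(1-x)$ (this closed form appears in Section 4.2):

```python
def b(x): return 0.5 - x
def a(x): return x * (1.0 - x)

def h_tilde(x, z):        # solution of Stein's equation, branch x < z
    return 2.0 * (1.0 - z) / (1.0 - x)

def h_tilde_prime(x, z):  # its derivative in x
    return 2.0 * (1.0 - z) / (1.0 - x) ** 2

# Stein's equation: f(x) - E[f(F)] = b(x) h~(x) + (1/2) a(x) h~'(x),
# with f the indicator of (-inf, z] and E[f(F)] = z for F ~ U(0, 1).
z = 0.6
for x in [0.1, 0.3, 0.55]:  # points with x < z, so f(x) - E[f(F)] = 1 - z
    lhs = 1.0 - z
    rhs = b(x) * h_tilde(x, z) + 0.5 * a(x) * h_tilde_prime(x, z)
    assert abs(lhs - rhs) < 1e-12
```

The identity holds exactly here: $b\tilde h_z + \tfrac12 a\tilde h_z' = 2(1-z)\big[(\tfrac12-x)+\tfrac12 x\big]/(1-x) = 1-z$, so the assertion is only checking floating-point arithmetic.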

3.2. Main Results

Before describing our main result, we begin with the following simple result, given as Theorem 2.9.1 in [9].
Lemma 1.
Suppose that $F,G\in\mathbb{D}^{1,2}$, and let $g:\mathbb{R}\to\mathbb{R}$ be continuously differentiable with bounded derivative (when $g$ is only almost everywhere differentiable, one needs the law of $G$ to be absolutely continuous). Then,
\[
E[F\,g(G)]=E[F]\,E[g(G)]+E\big[g'(G)\,\langle DG,\,-DL^{-1}F\rangle_H\big]. \tag{14}
\]
Let us set
\[
g_{b(G)}(x)=E\big[\langle DL^{-1}(b(G)-E[b(G)]),\,DG\rangle_H\,\big|\,G=x\big]. \tag{15}
\]
Similarly to the proof of Proposition 3.9 in [3], we show that $g_{b(G)}$ is non-negative almost everywhere with respect to the law of $G$.
Proposition 1.
Let $G\in\mathbb{D}^{1,2}$. Then $g_{b(G)}(x)\ge 0$ almost everywhere with respect to the law of $G$, which we denote by $H_G(x)=P(G\le x)$.
Proof. 
Let $q$ be a smooth, non-negative real function. Define
\[
Q(x)=\begin{cases}\displaystyle\int_\beta^x q(y)\,dy & \text{if } x\ge\beta,\\[4pt] \displaystyle-\int_x^\beta q(y)\,dy & \text{if } x<\beta,\end{cases}
\]
where $\beta\in(l,u)$ is a constant satisfying $b(x)-E[b(G)]>0$ for $x\in(l,\beta)$ and $b(x)-E[b(G)]<0$ for $x\in(\beta,u)$. Since $Q(x)\ge 0$ for $x\ge\beta$ and $Q(x)\le 0$ for $x<\beta$, we have $E[(b(G)-E[b(G)])\,Q(G)]\le 0$. An application of Lemma 1 yields that
\[
E\big[(b(G)-E[b(G)])\,Q(G)\big]=-E\big[\langle DL^{-1}(b(G)-E[b(G)]),DG\rangle_H\,q(G)\big]=-\int_{\mathbb{R}} g_{b(G)}(x)\,q(x)\,dH_G(x)\le 0.
\]
Hence $\int_{\mathbb{R}} g_{b(G)}(x)\,q(x)\,dH_G(x)\ge 0$ for every such $q$. By an approximation of the function $q$, we can show that, for all Borel sets $B\in\mathcal{B}(\mathbb{R})$,
\[
\int_B g_{b(G)}(x)\,dH_G(x)\ge 0.
\]
This implies that $g_{b(G)}(x)\ge 0$ almost everywhere with respect to the law of $G$. □
Lemma 2.
If the random variable $g_{b(G)}(G)$ is almost surely strictly positive, then the law of $G$ has a density with respect to Lebesgue measure, denoted by $p_G$.
Proof. 
By a similar argument to the proof of Theorem 3.1 in [4], we have that, for any Borel set $B\in\mathcal{B}(\mathbb{R})$ and any $n\ge 1$,
\[
E\Big[(b(G)-E[b(G)])\int_{-\infty}^G \mathbf{1}_{B\cap[-n,n]}(x)\,dx\Big]=-E\big[\mathbf{1}_{B\cap[-n,n]}(G)\,g_{b(G)}(G)\big].
\]
The same argument as for the case of b ( G ) = G in the proof of Theorem 3.1 in [4] shows that the law of G has a density. □
An explicit formula for the density is given in the following statement:
Theorem 2.
Let $F$ be a random variable having the law $\mu$, and let $G$ be a random variable in $\mathbb{D}^{1,2}$ with $b(G)\in L^2(\Omega)$. Assume that the random variable $g_{b(G)}(G)$ is almost surely strictly positive, and that, for every $f\in C_0(I)$,
\[
\|b\,\tilde{h}_f\|_\infty\le C_f:=\sup_{x\in I}|f(x)|<\infty. \tag{18}
\]
In this case, the support of $p_G$, denoted by $\mathrm{supp}(p_G)$, is a closed interval of $\mathbb{R}$ and, for almost all $x\in\mathrm{supp}(p_G)$,
\[
p_G(x)=\frac{p_G(\beta)\,g_{b(G)}(\beta)}{g_{b(G)}(x)}\exp\Big(\int_\beta^x \frac{b(y)-E[b(G)]}{g_{b(G)}(y)}\,dy\Big) \tag{19}
\]
for some $\beta\in\mathrm{supp}(p_G)$.
Proof. 
Using (11), the function $\tilde{h}_f$ can be written as
\[
\tilde{h}_f(x)=\frac{2}{p_F(\beta)a(\beta)}\exp\Big(-\int_\beta^x \frac{2b(y)}{a(y)}\,dy\Big)\int_l^x \big(f(y)-E[f(F)]\big)\,p_F(y)\,dy. \tag{20}
\]
Let us set $H_F(x)=P(F\le x)$. If $f(x)=\mathbf{1}_{(-\infty,z]}(x)$ for $z\in\mathbb{R}$, we write $h_f=h_z$ and $\tilde{h}_f=\tilde{h}_z$. Then, the function $\tilde{h}_z$ can be written as
\[
\tilde{h}_z(x)=\frac{2}{p_F(\beta)a(\beta)}\exp\Big(-\int_\beta^x \frac{2b(y)}{a(y)}\,dy\Big)\times\begin{cases}H_F(z)\,[1-H_F(x)] & \text{if } x\ge z,\\[2pt] H_F(x)\,[1-H_F(z)] & \text{if } x<z.\end{cases} \tag{21}
\]
From (21), it follows that, for $x\ge z$,
\[
\tilde{h}_z'(x)=\frac{2}{p_F(\beta)a(\beta)}\exp\Big(-\int_\beta^x \frac{2b(y)}{a(y)}\,dy\Big)\Big(-\frac{2b(x)}{a(x)}H_F(z)[1-H_F(x)]-p_F(x)H_F(z)\Big). \tag{22}
\]
For $x<z$,
\[
\tilde{h}_z'(x)=\frac{2}{p_F(\beta)a(\beta)}\exp\Big(-\int_\beta^x \frac{2b(y)}{a(y)}\,dy\Big)\Big(-\frac{2b(x)}{a(x)}H_F(x)[1-H_F(z)]+p_F(x)[1-H_F(z)]\Big). \tag{23}
\]
If $f(x)=\mathbf{1}_{(-\infty,z]}(x)$ for $x\in I$, we take $f_n\in C_0(I)$ such that $\{f_n\}$ is an increasing sequence with $f_n(x)\to f(x)$ for all $x\in I$. By the dominated convergence theorem, we have that, as $n\to\infty$,
\[
\tilde{h}_{f_n}(x)\to\tilde{h}_z(x)\quad\text{and}\quad \tilde{h}_{f_n}'(x)\to\tilde{h}_z'(x)\quad\text{for all } x\in I. \tag{24}
\]
The bound (18) yields that, for all $n\ge 1$,
\[
\|b\,\tilde{h}_{f_n}\|_\infty\le C_{f_n}\le 1. \tag{25}
\]
Combining (11) with the bound in (18), we also obtain, for all $n\ge 1$,
\[
\|a\,\tilde{h}_{f_n}'\|_\infty\le C_{f_n}\le 1. \tag{26}
\]
From (13), it follows that, for $f_n\in C_0(I)$,
\[
E[f_n(G)]-E[f_n(F)]=E\big[b(G)\,\tilde{h}_{f_n}(G)\big]+E\big[\tfrac{1}{2}a(G)\,\tilde{h}_{f_n}'(G)\big]. \tag{27}
\]
Due to the bounds (25) and (26), the dominated convergence theorem can be applied to (27), which gives the following limit:
\[
P(G\le z)-P(F\le z)=E\big[(b(G)-E[b(G)])\,\tilde{h}_z(G)\big]+E[b(G)]\,E[\tilde{h}_z(G)]+E\big[\tfrac{1}{2}a(G)\,\tilde{h}_z'(G)\big]. \tag{28}
\]
Applying (14) in Lemma 1 to the first expectation in (28), we obtain that
\[
\begin{aligned}
P(G\le z)-P(F\le z)&=-E\big[\langle DL^{-1}(b(G)-E[b(G)]),DG\rangle_H\,\tilde{h}_z'(G)\big]+E\big[\tfrac{1}{2}a(G)\,\tilde{h}_z'(G)\big]+E[b(G)]\,E[\tilde{h}_z(G)]\\
&=E\Big[\tilde{h}_z'(G)\Big(\tfrac{1}{2}a(G)-E\big[\langle DL^{-1}(b(G)-E[b(G)]),DG\rangle_H\,\big|\,G\big]\Big)\Big]+E[b(G)]\,E[\tilde{h}_z(G)]. 
\end{aligned} \tag{29}
\]
Differentiating both sides of (29) with respect to $z$ yields that
\[
p_G(z)-p_F(z)=\frac{\partial}{\partial z}\int_I \tilde{h}_z'(x)\Big(\tfrac{1}{2}a(x)-g_{b(G)}(x)\Big)p_G(x)\,dx+E[b(G)]\,\frac{\partial}{\partial z}\int_I \tilde{h}_z(x)\,p_G(x)\,dx. \tag{30}
\]
Next, we concentrate on computing the two integrals in (30). Using (22) and (23) gives that
\[
\frac{\partial}{\partial z}\int_I \tilde{h}_z'(x)\Big(\tfrac{1}{2}a(x)-g_{b(G)}(x)\Big)p_G(x)\,dx =: J_1(z)+J_2(z),
\]
where
\[
J_1(z)=\frac{\partial}{\partial z}\int_l^z \tilde{h}_z'(x)\Big(\tfrac{1}{2}a(x)-g_{b(G)}(x)\Big)p_G(x)\,dx,\qquad
J_2(z)=\frac{\partial}{\partial z}\int_z^u \tilde{h}_z'(x)\Big(\tfrac{1}{2}a(x)-g_{b(G)}(x)\Big)p_G(x)\,dx.
\]
We write $J_1(z)=J_{11}(z)+J_{12}(z)$, where
\[
J_{11}(z)=\tilde{h}_z'(z-)\Big(\tfrac{1}{2}a(z)-g_{b(G)}(z)\Big)p_G(z),\qquad
J_{12}(z)=\int_l^z \frac{\partial}{\partial z}\tilde{h}_z'(x)\Big(\tfrac{1}{2}a(x)-g_{b(G)}(x)\Big)p_G(x)\,dx,
\]
and $\tilde{h}_z'(z-)$ denotes the left limit given by (23).
For $J_{12}$, we first differentiate $\tilde{h}_z'(x)$ with respect to $z$. For $x<z$,
\[
\frac{\partial}{\partial z}\tilde{h}_z'(x)=\frac{2}{p_F(\beta)a(\beta)}\exp\Big(-\int_\beta^x \frac{2b(y)}{a(y)}\,dy\Big)\Big(\frac{2b(x)}{a(x)}H_F(x)\,p_F(z)-p_F(x)\,p_F(z)\Big). \tag{31}
\]
By (23) and (31), we obtain
\[
J_{11}(z)=\frac{2}{p_F(\beta)a(\beta)}\exp\Big(-\int_\beta^z \frac{2b(y)}{a(y)}\,dy\Big)\Big(-\frac{2b(z)}{a(z)}H_F(z)[1-H_F(z)]+p_F(z)[1-H_F(z)]\Big)\Big(\tfrac{1}{2}a(z)-g_{b(G)}(z)\Big)p_G(z), \tag{32}
\]
\[
J_{12}(z)=\frac{2}{p_F(\beta)a(\beta)}\int_l^z \exp\Big(-\int_\beta^x \frac{2b(y)}{a(y)}\,dy\Big)\Big(\frac{2b(x)}{a(x)}H_F(x)-p_F(x)\Big)p_F(z)\Big(\tfrac{1}{2}a(x)-g_{b(G)}(x)\Big)p_G(x)\,dx. \tag{33}
\]
For $x\ge z$,
\[
\frac{\partial}{\partial z}\tilde{h}_z'(x)=\frac{2}{p_F(\beta)a(\beta)}\exp\Big(-\int_\beta^x \frac{2b(y)}{a(y)}\,dy\Big)\Big(-\frac{2b(x)}{a(x)}p_F(z)[1-H_F(x)]-p_F(x)\,p_F(z)\Big). \tag{34}
\]
On the other hand, we write $J_2(z)=J_{21}(z)+J_{22}(z)$, where
\[
J_{21}(z)=-\tilde{h}_z'(z+)\Big(\tfrac{1}{2}a(z)-g_{b(G)}(z)\Big)p_G(z),\qquad
J_{22}(z)=\int_z^u \frac{\partial}{\partial z}\tilde{h}_z'(x)\Big(\tfrac{1}{2}a(x)-g_{b(G)}(x)\Big)p_G(x)\,dx,
\]
and $\tilde{h}_z'(z+)$ denotes the right limit given by (22).
From (22) and (34), we have that
\[
J_{21}(z)=\frac{2}{p_F(\beta)a(\beta)}\exp\Big(-\int_\beta^z \frac{2b(y)}{a(y)}\,dy\Big)\Big(\frac{2b(z)}{a(z)}H_F(z)[1-H_F(z)]+p_F(z)H_F(z)\Big)\Big(\tfrac{1}{2}a(z)-g_{b(G)}(z)\Big)p_G(z), \tag{35}
\]
\[
J_{22}(z)=-\frac{2}{p_F(\beta)a(\beta)}\int_z^u \exp\Big(-\int_\beta^x \frac{2b(y)}{a(y)}\,dy\Big)\Big(\frac{2b(x)}{a(x)}[1-H_F(x)]+p_F(x)\Big)p_F(z)\Big(\tfrac{1}{2}a(x)-g_{b(G)}(x)\Big)p_G(x)\,dx. \tag{36}
\]
From (21), the derivative of the second integral in (30) can be computed as follows (the boundary terms at $x=z$ cancel because $\tilde{h}_z$ is continuous):
\[
\begin{aligned}
E[b(G)]\,\frac{\partial}{\partial z}\int_I \tilde{h}_z(x)\,p_G(x)\,dx
&= E[b(G)]\,\frac{\partial}{\partial z}\Big(\int_l^z \tilde{h}_z(x)\,p_G(x)\,dx+\int_z^u \tilde{h}_z(x)\,p_G(x)\,dx\Big)\\
&= \frac{2\,p_F(z)\,E[b(G)]}{p_F(\beta)a(\beta)}\Big\{-\int_l^z \exp\Big(-\int_\beta^x \frac{2b(y)}{a(y)}\,dy\Big)H_F(x)\,p_G(x)\,dx\\
&\qquad\qquad +\int_z^u \exp\Big(-\int_\beta^x \frac{2b(y)}{a(y)}\,dy\Big)[1-H_F(x)]\,p_G(x)\,dx\Big\}.
\end{aligned} \tag{37}
\]
Combining (32), (33) and (35)–(37) yields that, for $z\in\mathbb{R}$,
\[
\begin{aligned}
p_G(z)-p_F(z)&=\frac{2\,p_F(z)}{p_F(\beta)a(\beta)}\exp\Big(-\int_\beta^z \frac{2b(y)}{a(y)}\,dy\Big)\Big(\tfrac{1}{2}a(z)-g_{b(G)}(z)\Big)p_G(z)\\
&\quad+\frac{2\,p_F(z)}{p_F(\beta)a(\beta)}\Big[\int_l^z \exp\Big(-\int_\beta^x \frac{2b(y)}{a(y)}\,dy\Big)\frac{2b(x)}{a(x)}H_F(x)\Big(\tfrac{1}{2}a(x)-g_{b(G)}(x)\Big)p_G(x)\,dx\\
&\qquad\qquad -\int_z^u \exp\Big(-\int_\beta^x \frac{2b(y)}{a(y)}\,dy\Big)\frac{2b(x)}{a(x)}[1-H_F(x)]\Big(\tfrac{1}{2}a(x)-g_{b(G)}(x)\Big)p_G(x)\,dx\Big]\\
&\quad-\frac{2\,p_F(z)}{p_F(\beta)a(\beta)}\int_l^u \exp\Big(-\int_\beta^x \frac{2b(y)}{a(y)}\,dy\Big)p_F(x)\Big(\tfrac{1}{2}a(x)-g_{b(G)}(x)\Big)p_G(x)\,dx\\
&\quad+\frac{2\,p_F(z)\,E[b(G)]}{p_F(\beta)a(\beta)}\Big\{-\int_l^z \exp\Big(-\int_\beta^x \frac{2b(y)}{a(y)}\,dy\Big)H_F(x)\,p_G(x)\,dx+\int_z^u \exp\Big(-\int_\beta^x \frac{2b(y)}{a(y)}\,dy\Big)[1-H_F(x)]\,p_G(x)\,dx\Big\}.
\end{aligned} \tag{38}
\]
Substituting the expression (11) for $p_F$ into the right-hand side of Equation (38), we obtain
\[
\begin{aligned}
p_G(z)-p_F(z)&=\frac{2}{a(z)}\Big(\tfrac{1}{2}a(z)-g_{b(G)}(z)\Big)p_G(z)\\
&\quad+\frac{2}{a(z)}\exp\Big(\int_\beta^z \frac{2b(y)}{a(y)}\,dy\Big)\Big[\int_l^z \exp\Big(-\int_\beta^x \frac{2b(y)}{a(y)}\,dy\Big)\frac{2b(x)}{a(x)}H_F(x)\Big(\tfrac{1}{2}a(x)-g_{b(G)}(x)\Big)p_G(x)\,dx\\
&\qquad\qquad -\int_z^u \exp\Big(-\int_\beta^x \frac{2b(y)}{a(y)}\,dy\Big)\frac{2b(x)}{a(x)}[1-H_F(x)]\Big(\tfrac{1}{2}a(x)-g_{b(G)}(x)\Big)p_G(x)\,dx\Big]\\
&\quad-\frac{2}{a(z)}\exp\Big(\int_\beta^z \frac{2b(y)}{a(y)}\,dy\Big)\int_l^u \exp\Big(-\int_\beta^x \frac{2b(y)}{a(y)}\,dy\Big)p_F(x)\Big(\tfrac{1}{2}a(x)-g_{b(G)}(x)\Big)p_G(x)\,dx\\
&\quad+\frac{2\,E[b(G)]}{a(z)}\exp\Big(\int_\beta^z \frac{2b(y)}{a(y)}\,dy\Big)\Big\{-\int_l^z \exp\Big(-\int_\beta^x \frac{2b(y)}{a(y)}\,dy\Big)H_F(x)\,p_G(x)\,dx+\int_z^u \exp\Big(-\int_\beta^x \frac{2b(y)}{a(y)}\,dy\Big)[1-H_F(x)]\,p_G(x)\,dx\Big\}.
\end{aligned} \tag{39}
\]
From the formula of $p_F$ in (11) and (39), we obtain that, for some $\beta\in\mathrm{supp}(p_G)$,
\[
\begin{aligned}
g_{b(G)}(z)\,p_G(z)&=\frac{p_F(\beta)\,a(\beta)}{2}\exp\Big(\int_\beta^z \frac{2b(y)}{a(y)}\,dy\Big)\\
&\quad+\exp\Big(\int_\beta^z \frac{2b(y)}{a(y)}\,dy\Big)\Big\{\int_l^z \exp\Big(-\int_\beta^x \frac{2b(y)}{a(y)}\,dy\Big)\frac{2b(x)}{a(x)}H_F(x)\Big(\tfrac{1}{2}a(x)-g_{b(G)}(x)\Big)p_G(x)\,dx\\
&\qquad\qquad -\int_z^u \exp\Big(-\int_\beta^x \frac{2b(y)}{a(y)}\,dy\Big)\frac{2b(x)}{a(x)}[1-H_F(x)]\Big(\tfrac{1}{2}a(x)-g_{b(G)}(x)\Big)p_G(x)\,dx\Big\}\\
&\quad-\exp\Big(\int_\beta^z \frac{2b(y)}{a(y)}\,dy\Big)\int_l^u \exp\Big(-\int_\beta^x \frac{2b(y)}{a(y)}\,dy\Big)p_F(x)\Big(\tfrac{1}{2}a(x)-g_{b(G)}(x)\Big)p_G(x)\,dx\\
&\quad+E[b(G)]\exp\Big(\int_\beta^z \frac{2b(y)}{a(y)}\,dy\Big)\Big\{-\int_l^z \exp\Big(-\int_\beta^x \frac{2b(y)}{a(y)}\,dy\Big)H_F(x)\,p_G(x)\,dx+\int_z^u \exp\Big(-\int_\beta^x \frac{2b(y)}{a(y)}\,dy\Big)[1-H_F(x)]\,p_G(x)\,dx\Big\}.
\end{aligned} \tag{40}
\]
Differentiating Equation (40) with respect to $z$ proves that
\[
\frac{\partial}{\partial z}\big(g_{b(G)}(z)\,p_G(z)\big)
=\frac{2b(z)}{a(z)}\,g_{b(G)}(z)\,p_G(z)-\frac{2b(z)}{a(z)}\Big(g_{b(G)}(z)-\tfrac{1}{2}a(z)\Big)p_G(z)-E[b(G)]\,p_G(z)
=\big(b(z)-E[b(G)]\big)\,p_G(z). \tag{41}
\]
Equation (41) proves that, for almost all $z\in\mathrm{supp}(p_G)$,
\[
g_{b(G)}(z)\,p_G(z)=\int_l^z \big(b(x)-E[b(G)]\big)\,p_G(x)\,dx. \tag{42}
\]
From (41) and (42), it follows that, for almost all $z\in\mathrm{supp}(p_G)$,
\[
\frac{\frac{d}{dz}\big(g_{b(G)}(z)\,p_G(z)\big)}{g_{b(G)}(z)\,p_G(z)}=\frac{b(z)-E[b(G)]}{g_{b(G)}(z)}. \tag{43}
\]
Hence,
\[
\frac{d}{dz}\log\big(g_{b(G)}(z)\,p_G(z)\big)=\frac{b(z)-E[b(G)]}{g_{b(G)}(z)}. \tag{44}
\]
By integrating both sides of (44) from $\beta\in\mathrm{supp}(p_G)$ to $z$, we have
\[
\log\big(g_{b(G)}(z)\,p_G(z)\big)=\log\big(g_{b(G)}(\beta)\,p_G(\beta)\big)+\int_\beta^z \frac{b(x)-E[b(G)]}{g_{b(G)}(x)}\,dx. \tag{45}
\]
Equation (45) proves that, for almost all $z\in\mathrm{supp}(p_G)$,
\[
p_G(z)=\frac{g_{b(G)}(\beta)\,p_G(\beta)}{g_{b(G)}(z)}\exp\Big(\int_\beta^z \frac{b(x)-E[b(G)]}{g_{b(G)}(x)}\,dx\Big). \tag{46}
\]
□
When the random variable $G$ is general, an explicit computation of $g_{b(G)}(x)$ is not easy. In particular, when $\langle DL^{-1}(b(G)-E[b(G)]),DG\rangle_H$ is not measurable with respect to the $\sigma$-field generated by $G$, there are cases where the conditional expectation cannot be computed explicitly. Using Theorem 2 above, we derive the explicit form of $g_{b(G)}(x)$. The following theorem corresponds to Theorem 2 in [8].
Theorem 3.
A random variable $G\in\mathbb{D}^{1,2}$, taking its values in $I$, has the distribution $\mu$ and satisfies $E[b(G)^2]<\infty$ if and only if $E[b(G)]=0$ and
\[
g_{b(G)}(x)=\tfrac{1}{2}a(x)\quad\text{for all } x\in I. \tag{47}
\]
Proof. 
Suppose that $E[b(G)]=0$ and that Equation (47) holds. Let $p_F$ be the density of the invariant measure $\mu$ corresponding to a solution of SDE (12). Then, substituting $\tfrac{1}{2}a(x)$ from (47) for $g_{b(G)}(x)$ in (19) gives that
\[
p_G(x)=\frac{p_G(\beta)\,g_{b(G)}(\beta)}{g_{b(G)}(x)}\exp\Big(\int_\beta^x \frac{b(y)}{g_{b(G)}(y)}\,dy\Big)=\frac{p_G(\beta)\,a(\beta)}{a(x)}\exp\Big(\int_\beta^x \frac{2b(y)}{a(y)}\,dy\Big). \tag{48}
\]
Combining (11) and (48), we obtain
\[
p_G(x)=\frac{p_G(\beta)}{p_F(\beta)}\,p_F(x). \tag{49}
\]
Equation (49) shows that $\mathrm{supp}(p_G)=\mathrm{supp}(p_F)$. Hence, integrating both sides of (49) over $I=(l,u)$ yields that
\[
\frac{p_G(\beta)}{p_F(\beta)}=1,
\]
which implies that $p_G=p_F$ on $I$. Conversely, if $p_G=p_F$ on $I$, then $E[b(G)]=0$, and from (10) and (42) it follows that
\[
a(x)=\frac{2\int_l^x b(y)\,p_F(y)\,dy}{p_F(x)}=\frac{2\int_l^x b(y)\,p_G(y)\,dy}{p_G(x)}=2\,g_{b(G)}(x),
\]
which gives that (47) holds. □

4. Examples

In this section, two examples are given in which the invariant measure is the standard Gaussian distribution and the uniform distribution, respectively.

4.1. The Standard Gaussian Distribution

When $\mu$ is the standard Gaussian distribution, the coefficients in (13) are given by $a(x)=2$ and $b(x)=-x$, with $u=\infty$ and $l=-\infty$. Then, from (21), taking $\beta=0$, we have that
\[
\tilde{h}_z(x)=e^{\frac{x^2}{2}}\int_{-\infty}^x \big[\mathbf{1}_{(-\infty,z]}(y)-\Phi(z)\big]\,e^{-\frac{y^2}{2}}\,dy
=\begin{cases}\sqrt{2\pi}\,e^{\frac{x^2}{2}}\,\Phi(x)\,(1-\Phi(z)) & \text{if } x\le z,\\[2pt] \sqrt{2\pi}\,e^{\frac{x^2}{2}}\,\Phi(z)\,(1-\Phi(x)) & \text{if } x>z,\end{cases}
\]
where $\Phi(z)=P(Z\le z)$ for a standard normal random variable $Z$. From (22), we have that, for $x>z$,
\[
\tilde{h}_z'(x)=\sqrt{2\pi}\,x\,e^{\frac{x^2}{2}}\,\Phi(z)\,(1-\Phi(x))-\sqrt{2\pi}\,e^{\frac{x^2}{2}}\,p_F(x)\,\Phi(z)
=\Big[\sqrt{2\pi}\,x\,e^{\frac{x^2}{2}}\,(1-\Phi(x))-1\Big]\Phi(z),
\]
and for $x<z$,
\[
\tilde{h}_z'(x)=\sqrt{2\pi}\,x\,e^{\frac{x^2}{2}}\,\Phi(x)\,[1-\Phi(z)]+\sqrt{2\pi}\,e^{\frac{x^2}{2}}\,p_F(x)\,[1-\Phi(z)]
=\Big[\sqrt{2\pi}\,x\,e^{\frac{x^2}{2}}\,\Phi(x)+1\Big][1-\Phi(z)].
\]
If $G\in\mathbb{D}^{1,2}$ and the random variable $g_G(G)$ is almost surely strictly positive, then the density $p_G$ of $G$ can be obtained from (19), with $\beta=0$ (note that $g_{b(G)}=g_G$ here, since $b(x)=-x$), by
\[
p_G(z)=\frac{g_G(0)\,p_G(0)}{g_G(z)}\exp\Big(\int_0^z \frac{b(x)}{g_G(x)}\,dx\Big)
=\frac{g_G(0)\,p_G(0)}{g_G(z)}\exp\Big(-\int_0^z \frac{x}{g_G(x)}\,dx\Big). \tag{53}
\]
Since $E[G]=0$, from (42) we see that
\[
g_G(0)\,p_G(0)=\int_{-\infty}^0 b(x)\,p_G(x)\,dx=-\int_{-\infty}^0 x\,p_G(x)\,dx=\tfrac{1}{2}E[|G|]. \tag{54}
\]
Substituting (54) into (53), we have
\[
p_G(z)=\frac{E[|G|]}{2\,g_G(z)}\exp\Big(-\int_0^z \frac{x}{g_G(x)}\,dx\Big), \tag{55}
\]
which is the density formula of Theorem 1. If $g_G(z)\equiv 1$, then
\[
p_G(z)=\frac{E[|G|]}{2}\exp\Big(-\int_0^z x\,dx\Big)=\frac{1}{\sqrt{2\pi}}\exp\Big(-\frac{z^2}{2}\Big),
\]
which implies that Theorem 3 holds.

4.2. The Uniform Distribution

When $\mu$ is the uniform distribution, i.e., $F\sim U([0,1])$, the coefficients in (13) are given by
\[
a(x)=x(1-x)\quad\text{and}\quad b(x)=\tfrac{1}{2}-x\quad\text{for } x\in(0,1).
\]
From (21), taking $\beta=1/2$, we have that
\[
\tilde{h}_z(x)=\frac{2}{p_F(1/2)\,a(1/2)}\exp\Big(-\int_{1/2}^x \frac{1-2y}{y(1-y)}\,dy\Big)\big[(x\wedge z)-zx\big]
=8\exp\Big(-\int_{1/2}^x \frac{1-2y}{y(1-y)}\,dy\Big)\big[(x\wedge z)-zx\big]
=\frac{2}{x(1-x)}\times\begin{cases} z(1-x) & \text{if } x\ge z,\\ x(1-z) & \text{if } x<z.\end{cases}
\]
Then, the density of $G$ is given by (19):
\[
p_G(x)=\frac{p_G(\beta)\,g_{b(G)}(\beta)}{g_{b(G)}(x)}\exp\Big(\int_\beta^x \frac{b(y)-E[b(G)]}{g_{b(G)}(y)}\,dy\Big).
\]
Taking $\beta=E[G]$, we have
\[
p_G(x)=\frac{p_G(E[G])\,g_{b(G)}(E[G])}{g_{b(G)}(x)}\exp\Big(\int_{E[G]}^x \frac{b(y)-E[b(G)]}{g_{b(G)}(y)}\,dy\Big). \tag{57}
\]
The relation (42) gives that
\[
p_G(E[G])\,g_{b(G)}(E[G])=\tfrac{1}{2}E\big[|G-E[G]|\big].
\]
Hence, (57) can be written as
\[
p_G(x)=\frac{E[|G-E[G]|]}{2\,g_{b(G)}(x)}\exp\Big(\int_{E[G]}^x \frac{b(y)-E[b(G)]}{g_{b(G)}(y)}\,dy\Big)
=\frac{E[|G-E[G]|]}{2\,g_{b(G)}(x)}\exp\Big(-\int_{E[G]}^x \frac{y-E[G]}{g_{b(G)}(y)}\,dy\Big). \tag{58}
\]
If $E[G]=0$, we see from (58) that the density $p_G$ is identical to the density in Theorem 1. If $g_{b(G)}(x)=\tfrac{1}{2}x(1-x)$ for $x\in(0,1)$, a direct computation yields that
\[
p_G(x)=\frac{1}{4x(1-x)}\exp\Big(-\int_{1/2}^x \frac{y-\frac{1}{2}}{\frac{1}{2}y(1-y)}\,dy\Big)
=\frac{1}{4x(1-x)}\exp\Big(\int_{1/2}^x \Big[\frac{1}{y}-\frac{1}{1-y}\Big]\,dy\Big)
=\frac{1}{4x(1-x)}\exp\big(\log x+\log(1-x)+\log 4\big)=\mathbf{1}_{[0,1]}(x),
\]
which implies that Theorem 3 holds true.
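The uniform computation can also be cross-checked numerically. The sketch below (ours, not from the paper) assumes $g_{b(G)}(y)=y(1-y)/2$ and uses $E|G-1/2|=1/4$ for $G\sim U(0,1)$; the density formula should then return 1 on $(0,1)$:

```python
import math

def g(y):  # g_{b(G)}(y) = a(y)/2 for the uniform case
    return 0.5 * y * (1.0 - y)

def density(x, n=20000):
    # p_G(x) = E|G - 1/2| / (2 g(x)) * exp( -int_{1/2}^x (y - 1/2)/g(y) dy ),
    # with E|G - 1/2| = 1/4; the integral is computed by the midpoint rule.
    h = (x - 0.5) / n
    integral = 0.0
    for i in range(n):
        y = 0.5 + (i + 0.5) * h
        integral += (y - 0.5) / g(y) * h
    return 0.25 / (2.0 * g(x)) * math.exp(-integral)

for x in [0.2, 0.5, 0.9]:
    assert abs(density(x) - 1.0) < 1e-4
```

This mirrors the closed-form computation above: the exponential evaluates to $4x(1-x)$, which cancels the prefactor $1/(4x(1-x))$.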

5. Conclusions and Future Works

When a random variable $F$ follows an invariant measure $\mu$ that has a density $p_F$, and a random variable $G\in\mathbb{D}^{1,2}$ also admits a density $p_G$, this paper finds an explicit formula for the density $p_G$ based on the coefficients of the diffusion associated with the density $p_F$. A significant feature of our work is that it shows that the density $p_G$ can be obtained by connecting the diffusion with its invariant measure; moreover, the density formula obtained in this paper provides a new and very useful method for solving an existing problem related to the invariant density of diffusions. If $g_{b(G)}$ is equal to one half of the diffusion coefficient, Theorem 2 in [8] can be easily proven by using our result. A limitation of this study is that it is difficult for our method to directly prove that $g_{b(G)}>0$.
Future work will be carried out in three directions: (1) using the results of this paper, we plan to derive a density formula associated with an Edgeworth expansion with general terms, as given in [11]; (2) in the case where $G$ is a random variable belonging to a fixed Wiener chaos, we will obtain a more explicit formula than those obtained in previous works; (3) we will devise new methods to overcome the limitation of this study mentioned above.

Funding

This research was supported by Hallym University Research Fund (HRF-202309-009).

Data Availability Statement

No new data were created or analyzed in this study.

Acknowledgments

We are very grateful to the anonymous referees for their suggestions and valuable comments.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Nualart, D. Malliavin calculus and related topics. In Probability and Its Applications, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  2. Nualart, D. Malliavin Calculus and Its Applications; Regional Conference Series in Mathematics, Number 110; American Mathematical Society: Providence, RI, USA, 2008. [Google Scholar]
  3. Nourdin, I.; Peccati, G. Stein’s method on Wiener Chaos. Probab. Theory Relat. Fields 2009, 145, 75–118. [Google Scholar] [CrossRef]
  4. Nourdin, I.; Viens, F.G. Density formula and concentration inequalities with Malliavin calculus. Electron. J. Probab. 2009, 14, 2287–2309. [Google Scholar] [CrossRef]
  5. Chen, L.H.Y.; Goldstein, L.; Shao, Q.-M. Normal Approximation by Stein’s Method; Probability and Its Applications (New York); Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  6. Stein, C. A bound for the error in the normal approximation to the distribution of a sum of dependent random variables. In Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability, Vol. II: Probability Theory; University of California Press: Berkeley, CA, USA, 1972; pp. 583–602. [Google Scholar]
  7. Stein, C. Approximate Computation of Expectations; MR882007; IMS: Hayward, CA, USA, 1986. [Google Scholar]
  8. Kusuoka, S.; Tudor, C.A. Stein’s method for invariant measures of diffusions via Malliavin calculus. Stoch. Proc. Their Appl. 2012, 122, 1627–1651. [Google Scholar] [CrossRef]
  9. Nourdin, I.; Peccati, G. Normal Approximations with Malliavin Calculus: From Stein’s Method to Universality; Cambridge Tracts in Mathematica, Cambridge University Press: Cambridge, UK, 2012; Volume 192. [Google Scholar]
  10. Bibby, B.M.; Skovgaard, I.M.; Sorensen, M. Diffusion-type models with given marginals and auto-correlation function. Bernoulli 2003, 11, 191–220. [Google Scholar]
  11. Kim, Y.T.; Park, H.S. An Edgeworth expansion for functionals of Gaussian fields and its applications. Stoch. Proc. Their Appl. 2018, 44, 312–320. [Google Scholar]
Park, H.-S. Density Formula in Malliavin Calculus by Using Stein’s Method and Diffusions. Mathematics 2025, 13, 323. https://doi.org/10.3390/math13020323