Retraction published on 30 August 2018, see Math. Comput. Appl. 2018, 23(3), 44.
Article

Sensitivity Analysis Based on Markovian Integration by Parts Formula

1 School of Finance and Economics, Jiangsu University, Zhenjiang 212013, China
2 School of Physical and Mathematical Sciences, Nanyang Technological University, 21 Nanyang Link, Singapore 637371, Singapore
3 CNOOC Oil and Gas (Taizhou) Petrochemicals Co., Ltd., Taizhou 225321, China
* Author to whom correspondence should be addressed.
Math. Comput. Appl. 2017, 22(4), 40; https://doi.org/10.3390/mca22040040
Submission received: 7 July 2017 / Revised: 5 October 2017 / Accepted: 6 October 2017 / Published: 12 October 2017

Abstract

Sensitivity analysis is widely applied in financial risk management and engineering; it quantifies the variation of a value function brought about by changes in model parameters. Since the integration by parts technique for Markov chains has been well developed in recent years, in this paper we apply it to the computation of sensitivities and derive closed-form expressions for two commonly used continuous-time Markovian models. By comparison, we conclude that our approach outperforms the existing technique for computing sensitivities in Markovian models.

1. Introduction

Sensitivity analysis, which studies the variation of a value function with respect to changes in a given parameter, is developed for risk management; the sensitivities with respect to specific parameters are known as Greeks.
Markovian models are widely applied to structure engineering problems. Governed by the state space of a Markov chain, different conditions and situations of a system are described, with the transitions between them captured by the transition probabilities. We refer to some applications, such as [1], where the structural condition of storm water pipes is described by five states of a Markov chain, and [2], where feedback measurements are randomly dropped with a distribution selected from an underlying Markov chain. Moreover, randomness based on Markov chains is also used for decision making (DM) problems; see, e.g., [3], which applies a Markov chain to simulate a highway condition index. Specifically, an optimization model for sequential decision making based on a controlled Markov process has been developed, called the Markov decision process; cf. [4] for an introduction to the concepts and see, e.g., [5,6] for applications.
Sensitivity analysis is a critical technique of risk management. Because Markovian models serve as the infrastructure of many engineering problems, it is also meaningful in practice to compute the sensitivity with respect to certain parameters; see, e.g., [7]. Indeed, sensitivity analysis in financial modeling, where the sensitivities are called Greeks, plays a key role in financial risk management; see, e.g., [8]. We refer to [9] for background on gradient estimation theory.
In this paper, we proceed to compute the sensitivities of two major classes of continuous-time Markovian models. Mathematically, a sensitivity is expressed in the form of a derivative of some expectation with respect to a certain parameter. It is computable by Monte Carlo simulation using the traditional finite-difference approach; however, that approach is outperformed by computations based on an integration by parts formula, as can be seen from [10] on Wiener space. Therefore, intense research interest has been drawn to establishing integration by parts formulas for Markov chains, recently investigated in [11,12]. This paper extends some of their results and applies them to calculate the sensitivities of two Markovian models. For other approaches to sensitivity and the gradients defined for it, we refer to [13], which estimates the gradient of a ratio, and [14] for inhomogeneous finite Markov chains. However, the sensitivity considered in [14] concerns only parameters of the Markov chain itself (such as a factor of the transition rate matrix), and it cannot be straightforwardly extended to the computation of the commonly used Greeks.
The remainder of this paper is arranged as follows. Section 2 formulates the sensitivity analysis and presents the closed-form expressions, which are proved in Section 3. Finally, in Section 4, we provide a numerical simulation to compare our approach to the finite-difference method.

2. Formulation and Main Results

Based on a probability space $(\Omega_\alpha, P_\alpha)$, we consider a continuous-time Markov chain $\{\alpha_t\}_{t\in[0,T]}$ with infinitesimal generator $Q=(q_{ij})_{m\times m}$ on the state space $M=\{1,\dots,m\}$, $m>1$. Constructed from this Markov chain, the process $(X_t)_{t\in\mathbb{R}_+}$, in one of the following two forms, is investigated with respect to its sensitivity to the parameter $\theta\in\mathbb{R}$:
$$X_t(\theta) = F(t,\alpha_t,\theta), \quad t\in[0,T],\ \theta\in\mathbb{R}, \tag{1}$$
where $F(t,x,\theta)$ on $[0,T]\times M\times\mathbb{R}$ is twice differentiable with respect to $x$ and $\theta$, $\{x \mid F_x(T,x,\theta)=0\}$ is a countable set, and $F_\theta(T,i,\theta)$ is uniformly bounded for any $i\in M$; or
$$X_t(\theta) = G\Big(\int_0^t f(\alpha_u)\,du,\ \theta\Big), \quad t\in[0,T],\ \theta\in\mathbb{R}, \tag{2}$$
where $f(\cdot)$ is a real function, $G(x,\theta)$ on $\mathbb{R}^2$ is twice differentiable with respect to $x$ and $\theta$, $\{x \mid G_x(x,\theta)=0\}$ is a countable set, and $G_\theta(x,\theta)$ is uniformly bounded for any $x\in\mathbb{R}$. Without loss of generality, we only consider the case $f(i)\ne f(j)$ for any distinct $i,j\in M$: if $f(i)=f(j)$, we can merge $i$ and $j$ into a single state and adjust the infinitesimal generator $Q$ accordingly, reducing the problem to one with a smaller state space and a new generator matrix.
Given a differentiable function $\phi$ with bounded derivative, consider the value function $V(\alpha_0,\theta)$ defined as follows:
$$V(\alpha_0,\theta) = \mathbb{E}_{\alpha_0}[\phi(X_T(\theta))], \quad \alpha_0\in M,\ \theta\in\mathbb{R}, \tag{3}$$
where $\mathbb{E}_{\alpha_0}[\cdot] := \mathbb{E}[\cdot \mid \alpha_0]$. We then compute the sensitivity of $V(\alpha_0,\theta)$ with respect to the parameter $\theta$ and obtain the following Propositions 1 and 2, which provide unbiased estimators for the sensitivities; cf. the proofs in Section 3.1 and Section 3.2, respectively.
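Before stating the propositions, it may help to see how such a value function is evaluated in practice. The following sketch (our illustration, not code from the paper; all names are ours) simulates paths of a continuous-time Markov chain from its generator $Q$ and forms a crude Monte Carlo estimate of $V(\alpha_0,\theta) = \mathbb{E}_{\alpha_0}[\phi(F(T,\alpha_T,\theta))]$ for case (a):

```python
import numpy as np

def simulate_ctmc(Q, alpha0, T, rng):
    """Simulate one path of a continuous-time Markov chain with generator Q.

    Returns jump times and the visited states (embedded chain),
    starting from state alpha0 (0-indexed) on [0, T].
    """
    times, states = [0.0], [alpha0]
    t, state = 0.0, alpha0
    while True:
        rate = -Q[state, state]           # total jump rate out of `state`
        if rate <= 0:                      # absorbing state: no more jumps
            break
        t += rng.exponential(1.0 / rate)   # exponential holding time
        if t >= T:
            break
        # jump distribution: Q[state, j] / rate for j != state
        probs = Q[state].copy()
        probs[state] = 0.0
        probs = probs / rate
        state = rng.choice(len(probs), p=probs)
        times.append(t)
        states.append(state)
    return np.array(times), np.array(states)

def estimate_V(phi, F, Q, alpha0, theta, T, n_paths, seed=0):
    """Monte Carlo estimate of V(alpha0, theta) = E[phi(F(T, alpha_T, theta))]."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_paths):
        _, states = simulate_ctmc(Q, alpha0, T, rng)
        total += phi(F(T, states[-1], theta))
    return total / n_paths
```

For a symmetric two-state chain with exit rate $0.5$, the estimate can be checked against the exact transition probability $P(\alpha_T=\alpha_0\mid\alpha_0) = (1+e^{-2\cdot 0.5\,T})/2$.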
Proposition 1.
For any differentiable function $\phi(\cdot)$ with bounded derivative, and $(X_t(\theta))_{t\in[0,T]}$, $\theta\in\mathbb{R}$, defined in the form of Equation (1), we have
$$\frac{\partial}{\partial\theta}V(\alpha_0,\theta) = \mathbb{E}_{\alpha_0}\Bigg[\phi(X_T(\theta))\Bigg(\frac{F_\theta(T,\alpha_T,\theta)\sum_{i,j;\,i\ne j}\frac{T_i - J_{i,j}/q_{i,j}}{i-j}}{F_x(T,\alpha_T,\theta)\,(m-1)T} - \frac{F_{\theta,x}(T,\alpha_T,\theta)F_x(T,\alpha_T,\theta) - F_{x,x}(T,\alpha_T,\theta)F_\theta(T,\alpha_T,\theta)}{(F_x)^2(T,\alpha_T,\theta)}\Bigg)\Bigg], \tag{4}$$
where $\mathbb{E}_{\alpha_0}[\cdot] := \mathbb{E}[\cdot\mid\alpha_0]$, and $F_x(t,x,\theta)$, $F_\theta(t,x,\theta)$ denote $\partial F(t,x,\theta)/\partial x$ and $\partial F(t,x,\theta)/\partial\theta$, respectively; for any $i\in M$,
$$T_i := \int_0^T 1_{\{\alpha_t=i\}}\,dt, \tag{5}$$
and for any $i,j\in M$, $t\in[0,T]$,
$$J_{i,j}(t) := \sum_{0<s\le t} 1_{\{\alpha_{s-}=i\}}\,1_{\{\alpha_s=j\}} \quad \mathrm{for}\ i\ne j, \tag{6}$$
and
$$J_{i,j}(t) := 0 \quad \mathrm{for}\ i=j. \tag{7}$$
For short, we denote
$$W := \int_0^T f(\alpha_s)\,ds. \tag{8}$$
With the expression (2) of $X_t(\theta)$, so that $X_T(\theta) = G(W,\theta)$, the sensitivity of the value function $V$ in Equation (3) with respect to the parameter $\theta\in\mathbb{R}$ is given by Proposition 2.
Proposition 2.
For any differentiable function $\phi(\cdot)$ with bounded derivative, and $(X_t(\theta))_{t\in[0,T]}$, $\theta\in\mathbb{R}$, defined in the form of Equation (2), we have
$$\frac{\partial}{\partial\theta}V(\alpha_0,\theta) = \mathbb{E}_{\alpha_0}\Bigg[\phi(X_T(\theta))\Bigg(\frac{3\,G_\theta(W,\theta)\sum_{i\ne j}\big(J_{i,j}/q_{i,j} - T_i\big)[f(i)-f(j)]^{-2}}{G_x(W,\theta)\,(m-1)T^3} - \frac{G_{\theta,x}(W,\theta)G_x(W,\theta) - G_{x,x}(W,\theta)G_\theta(W,\theta)}{G_x^2(W,\theta)}\Bigg)\Bigg], \tag{9}$$
where $T_i$ is defined by Equation (5), $\{J_{i,j}(t)\}_{i,j\in M,\,t\in[0,T]}$ are defined by Equations (6) and (7), and $G_x(x,\theta)$, $G_\theta(x,\theta)$ denote $\partial G(x,\theta)/\partial x$ and $\partial G(x,\theta)/\partial\theta$, respectively.
By the same approach as in [15], we extend the results in Propositions 1 and 2 to non-differentiable functions $\phi\in\Lambda(\mathbb{R};\mathbb{R})$ defined by Equations (10) and (11):
$$\Lambda(\mathbb{R};\mathbb{R}) := \Big\{ g:\mathbb{R}\to\mathbb{R} \ \Big|\ g = \sum_{i=1}^n g_i 1_{\{A_i\}},\ n\ge 1,\ g_i\in C_L(\mathbb{R};\mathbb{R}),\ A_i\ \text{are intervals of}\ \mathbb{R} \Big\}, \tag{10}$$
where
$$C_L(\mathbb{R};\mathbb{R}) := \{ g\in C(\mathbb{R};\mathbb{R}) \mid |g(x)-g(y)| \le k|x-y|\ \text{for some}\ k\ge 0 \}. \tag{11}$$

3. Proof of Propositions 1 and 2

In this section, we proceed to prove Propositions 1 and 2.

3.1. Case (a): $X_t(\theta) = F(t,\alpha_t,\theta)$

For case (a), in which $(X_t(\theta))_{t\in[0,T]}$, $\theta\in\mathbb{R}$, is given in the form of Equation (1), we compute the sensitivity of $V(\alpha_0,\theta)$ with respect to the parameter $\theta$ by Proposition 1, proved as follows.
Proof. 
For any differentiable function $H:\mathbb{R}^{m(m-1)}\to\mathbb{R}$, define the gradient of $H$ with respect to $x:=(x_1,x_2,\dots,x_{m(m-1)})\in\mathbb{R}^{m(m-1)}$ as follows:
$$D H(x_1,x_2,\dots,x_{m(m-1)}) := \Big(\frac{\partial H(x)}{\partial x_1}, \frac{\partial H(x)}{\partial x_2}, \dots, \frac{\partial H(x)}{\partial x_{m(m-1)}}\Big) \in \mathbb{R}^{m(m-1)},$$
and for any random variable $\nu := (\nu_1,\nu_2,\dots,\nu_{m(m-1)})$ on $\mathbb{R}^{m(m-1)}$, we define
$$D_\nu H(x_1,x_2,\dots,x_{m(m-1)}) := D H(x_1,x_2,\dots,x_{m(m-1)})\cdot \nu^{\mathsf T} = \sum_{i=1}^{m(m-1)} \frac{\partial H(x)}{\partial x_i}\,\nu_i.$$
For any random variable $\beta$ expressed as $\beta := H(J_{1,2}(T), J_{2,1}(T), \dots, J_{m,m-1}(T))$, we say $\beta$ is differentiable by $D_\nu$ when $H$ is differentiable, denoted $\beta\in \mathrm{Dom}(D_\nu)$, and then we define
$$D_\nu \beta := D_\nu H(J_{1,2}(T), J_{2,1}(T), \dots, J_{m,m-1}(T)).$$
Since $\alpha_T = \alpha_0 + \sum_{i,j} (j-i)\, J_{i,j}(T)$, we have
$$\alpha_T = H_\alpha(J_{1,2}(T), J_{2,1}(T), \dots, J_{m,m-1}(T)),$$
where
$$H_\alpha(x_{1,2}, x_{2,1}, \dots, x_{m,m-1}) = \alpha_0 + \sum_{i,j;\,i\ne j} (j-i)\, x_{i,j}. \tag{16}$$
Therefore,
$$D_\lambda \alpha_T = D_\lambda H_\alpha(J_{1,2}(T), J_{2,1}(T), \dots, J_{m,m-1}(T)) = (m-1)T.$$
We also have $\phi(X_T(\theta))\in \mathrm{Dom}(D_\lambda)$ because
$$\phi(X_T(\theta)) = \phi\big(F(T, H_\alpha(J_{1,2}(T), J_{2,1}(T), \dots, J_{m,m-1}(T)), \theta)\big),$$
and the function $\phi(F(T,x,\theta))$ is differentiable with respect to $x$. Similarly, $X_T(\theta)\in \mathrm{Dom}(D_\lambda)$. Let
$$J(T) := (J_{1,2}(T), J_{2,1}(T), \dots, J_{m,m-1}(T)) \in \mathbb{R}^{m(m-1)},$$
and
$$\lambda(t) := \Big(\int_0^t \lambda_{1,2}(s)\,ds,\ \int_0^t \lambda_{2,1}(s)\,ds,\ \dots,\ \int_0^t \lambda_{m,m-1}(s)\,ds\Big) \in \mathbb{R}^{m(m-1)}, \tag{18}$$
where $\lambda_{i,j}(t) := 1_{\{\alpha_t=i\}}/(i-j)$ for any $t\in[0,T]$ and $i\ne j\in M$. Then we have the integration by parts formula for the Markov chain with the gradient operator $D_{(\cdot)}$, according to Theorem 1 in [11]:
$$\mathbb{E}[D_\lambda H(J(T))] = \mathbb{E}\Big[H(J(T)) \int_0^T \langle \eta(t), dM(t)\rangle\Big], \tag{19}$$
where $\eta(t) = (\eta_{1,2}(t), \eta_{2,1}(t), \dots, \eta_{m,m-1}(t))$, $M(t) = (M_{1,2}(t), M_{2,1}(t), \dots, M_{m,m-1}(t))$, and
$$M_{i,j}(t) = J_{i,j}(t) - \int_0^t q_{i,j}\, 1_{\{\alpha_s=i\}}\,ds, \quad i\ne j.$$
Let $\eta_{i,j}(t) = [(i-j)q_{i,j}]^{-1}$ for $i\ne j\in\{1,2,\dots,m\}$ and $t\in[0,T]$. Then, for any integrable and differentiable function $H$ on $\mathbb{R}^{m(m-1)}$, we have
$$\mathbb{E}[D_\lambda H(J(T))] = \mathbb{E}\Bigg[H(J(T)) \sum_{i,j;\,i\ne j} \int_0^T \eta_{i,j}\, d\Big(J_{i,j}(t) - \int_0^t q_{i,j}\, 1_{\{\alpha_s=i\}}\,ds\Big)\Bigg] = \mathbb{E}\Bigg[H(J(T)) \sum_{i,j;\,i\ne j} \Bigg(\frac{J_{i,j}(T)}{(i-j)\,q_{i,j}} - \int_0^T \frac{1_{\{\alpha_t=i\}}}{i-j}\,dt\Bigg)\Bigg], \tag{20}$$
and the chain rule for integrable and differentiable functions $H$, $K$:
$$\mathbb{E}\big[D_\lambda\big(H(J(T))\,K(J(T))\big)\big] = \mathbb{E}[D_\lambda H(J(T))\cdot K(J(T))] + \mathbb{E}[D_\lambda K(J(T))\cdot H(J(T))].$$
Note that $\phi(X_T(\theta)), X_T(\theta)\in \mathrm{Dom}(D_\lambda)$, and $D_\lambda X_T(\theta)$ is a.e. nonzero because $D_\lambda X_T(\theta) = F_x(T,\alpha_T,\theta)\, D_\lambda\alpha_T$ and $F_x(T,\alpha_T,\theta)$ is a.e. nonzero. Moreover, $\phi(X_T(\theta))$ is integrable, and by the boundedness of $\partial_\theta X_T(\theta)$, the order of taking the expectation and taking the derivative with respect to $\theta$ can be exchanged. These two facts will be proved in the Appendix below for the extended case $\phi\in\Lambda(\mathbb{R};\mathbb{R})\cap C(\mathbb{R};\mathbb{R})$; cf. Equations (A7)–(A10). By the definitions in Equations (1) and (3) and Formulas (16), (19) and (20), we have
$$\begin{aligned}
\frac{\partial}{\partial\theta}V(\alpha_0,\theta) &= \frac{\partial}{\partial\theta}\mathbb{E}_{\alpha_0}[\phi(X_T(\theta))] = \mathbb{E}_{\alpha_0}\big[\phi'(X_T(\theta))\,\partial_\theta X_T(\theta)\big] \\
&= \mathbb{E}_{\alpha_0}\Big[\frac{D_\lambda \phi(X_T(\theta))}{D_\lambda X_T(\theta)}\, F_\theta(T,\alpha_T,\theta)\Big] = \mathbb{E}_{\alpha_0}\Big[D_\lambda \phi(X_T(\theta))\, \frac{F_\theta(T,\alpha_T,\theta)}{F_x(T,\alpha_T,\theta)\, D_\lambda\alpha_T}\Big] \\
&= \mathbb{E}_{\alpha_0}\Bigg[\phi(X_T(\theta))\, \frac{F_\theta(T,\alpha_T,\theta) \sum_{i,j;\,i\ne j} \Big(\frac{J_{i,j}(T)}{(i-j)q_{i,j}} - \int_0^T \frac{1_{\{\alpha_t=i\}}}{i-j}\,dt\Big)}{F_x(T,\alpha_T,\theta)\, D_\lambda\alpha_T} - \phi(X_T(\theta))\, D_\lambda\Big(\frac{F_\theta(T,\alpha_T,\theta)}{F_x(T,\alpha_T,\theta)\, D_\lambda\alpha_T}\Big)\Bigg] \\
&= \mathbb{E}_{\alpha_0}\Bigg[\phi(X_T(\theta))\Bigg( \frac{F_\theta(T,\alpha_T,\theta) \sum_{i,j;\,i\ne j} \Big(\frac{J_{i,j}(T)}{(i-j)q_{i,j}} - \int_0^T \frac{1_{\{\alpha_t=i\}}}{i-j}\,dt\Big)}{F_x(T,\alpha_T,\theta)\,(m-1)T} - \Big[\frac{F_\theta(T,\alpha_T,\theta)}{F_x(T,\alpha_T,\theta)}\Big]_x \Bigg)\Bigg].
\end{aligned}$$
Remark 1.
Besides $\lambda$ defined by Equation (18), there are other choices of $\nu$ in the operator $D_\nu$, such as the process $\pi$ defined by
$$\pi(t) := \Big(\int_0^t u_{1,2}(s)\,ds,\ \int_0^t u_{2,1}(s)\,ds,\ \dots,\ \int_0^t u_{m,m-1}(s)\,ds\Big) \in \mathbb{R}^{m(m-1)}, \quad t\in[0,T],$$
with $u_{i,j}(t) = q_{i,j}\, 1_{\{\alpha_t=i\}}$ for any distinct $i,j\in\{1,2,\dots,m\}$, which leads to another version of the integration by parts formula: for any integrable and differentiable function $H$ on $\mathbb{R}^{m(m-1)}$, we have
$$\mathbb{E}[D_\pi H(J(T))] = \mathbb{E}\Big[H(J(T))\Big(N_T + \int_0^T q_{\alpha_s,\alpha_s}\,ds\Big)\Big], \tag{24}$$
where $N_T$ denotes the number of jumps of the Markov chain $\alpha_t$ over the time period $(0,T]$.
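Formula (24) can be sanity-checked numerically in a special case. For a two-state chain whose two exit rates both equal $\lambda$, the jump count $N_T$ is Poisson$(\lambda T)$ and $\int_0^T q_{\alpha_s,\alpha_s}\,ds = -\lambda T$; choosing $H(x) = \sum_{i\ne j} x_{i,j}$, so that $H(J(T)) = N_T$ and $D_\pi H = \sum_{i\ne j}\int_0^T q_{i,j}1_{\{\alpha_t=i\}}\,dt = \lambda T$, the formula reduces to $\lambda T = \mathbb{E}[N_T(N_T - \lambda T)]$, i.e. $\mathrm{Var}(N_T) = \lambda T$. A quick Monte Carlo check (our illustration, not code from the paper):

```python
import numpy as np

# Two-state chain with equal exit rates lam: N_T ~ Poisson(lam*T), so we can
# sample the jump count directly.  With H(x) = sum over off-diagonal entries,
# the left side E[D_pi H(J(T))] equals lam*T, and the right side of (24) is
# E[N_T * (N_T - lam*T)].
lam, T = 0.2, 10.0
rng = np.random.default_rng(1)
N = rng.poisson(lam * T, size=200_000)
lhs = lam * T                        # E[D_pi H(J(T))]
rhs = np.mean(N * (N - lam * T))     # E[H(J(T)) (N_T + int q_{alpha,alpha} ds)]
print(lhs, rhs)
```

The two sides agree up to Monte Carlo error, as the identity $\mathrm{Var}(N_T)=\lambda T$ for a Poisson count predicts.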

3.2. Case (b): $X_t(\theta) = G\big(\int_0^t f(\alpha_u)\,du,\ \theta\big)$

For case (b), in which $(X_t(\theta))_{t\in[0,T]}$, $\theta\in\mathbb{R}$, is given in the form of Equation (2), we compute the sensitivity of $V(\alpha_0,\theta)$ with respect to the parameter $\theta$ by Proposition 2, proved as follows.
Proof. 
Define
$$I(T) := (I_{1,2}(T), I_{2,1}(T), \dots, I_{m,m-1}(T)) \in \mathbb{R}^{m(m-1)}, \tag{23}$$
with
$$I_{i,j}(T) := \int_0^T \eta_{i,j}(t)\,dJ_{i,j}(t), \quad i\ne j\in M,$$
where $\{\eta_{i,j}(t)\}_{i\ne j\in M}$ are any $L^2$-integrable functions and $\{J_{i,j}(t)\}_{i\ne j\in M}$ are defined by Equations (6) and (7). Define a sequence of functions $\varphi_{i,j}(t)$ for any $i\ne j\in\{1,2,\dots,m\}$, $t\in[0,T]$:
$$\varphi_{i,j}(t) := \int_0^t \lambda_{i,j}(s)\,ds,$$
where for any $t\in[0,T]$ we define
$$\lambda_{i,j}(t) := \frac{(t-T)\,1_{\{\alpha_t=i\}}}{f(i)-f(j)}, \quad i\ne j\in M. \tag{26}$$
Thereby we have the gradient $D_\varphi^I$ for any differentiable function $h$ of $I(T)$ as follows:
$$D_\varphi^I h(I(T)) := D_\varphi^I h(I_{1,2}(T), I_{2,1}(T), \dots, I_{m,m-1}(T)) = \sum_{i,j;\,i\ne j} \frac{\partial h(I(T))}{\partial x_{i,j}} \int_0^T \eta_{i,j}(t)\,\lambda_{i,j}(t)\,dt.$$
According to Lemma 1 below, for any real function $f$ on $M=\{1,2,\dots,m\}$ and $\{J_{i,j}(t)\}_{i,j\in M,\,t\in[0,T]}$ defined by Equations (6) and (7), we can represent $\int_0^T f(\alpha_s)\,ds$ as follows:
$$\int_0^T f(\alpha_s)\,ds = \sum_{i,j} \int_0^T [f(i)-f(j)](s-T)\,dJ_{i,j}(s) + T f(\alpha_0). \tag{27}$$
By Equations (26) and (27), we have
$$D_\varphi^I W = D_\varphi^I \int_0^T f(\alpha_s)\,ds = D_\varphi^I\Bigg(\sum_{i,j} \int_0^T [f(i)-f(j)](s-T)\,dJ_{i,j}(s) + T f(\alpha_0)\Bigg) = \sum_{i,j;\,i\ne j} \int_0^T [f(i)-f(j)](s-T)\,\frac{(s-T)\,1_{\{\alpha_s=i\}}}{f(i)-f(j)}\,ds = \frac{(m-1)T^3}{3}. \tag{28}$$
For any random variable $\beta$ expressed as $\beta := h(I_{1,2}(T), I_{2,1}(T), \dots, I_{m,m-1}(T))$, where $\{I_{i,j}(T)\}_{i\ne j\in M}$ are defined by Equation (23), we say $\beta$ is differentiable by $D_\varphi$ when $h$ is differentiable, and we denote $\beta\in \mathrm{Dom}(D_\varphi^I)$. By Theorem 4 in [11] with $\eta_{i,j} = 1/\big(q_{i,j}[f(i)-f(j)]^2\big)$, for any integrable and differentiable function $U$ on $\mathbb{R}^{m(m-1)}$, we have
$$\mathbb{E}[D_\varphi U(J(T))] = \mathbb{E}\Bigg[U(J(T)) \sum_{i\ne j} \Bigg(\int_0^T \frac{dJ_{i,j}(t)}{q_{i,j}[f(i)-f(j)]^2} - \int_0^T \frac{1_{\{\alpha_t=i\}}\,dt}{[f(i)-f(j)]^2}\Bigg)\Bigg], \tag{29}$$
and the chain rule for any integrable and differentiable functions $U$ and $K$ on $\mathbb{R}^{m(m-1)}$:
$$D_\varphi\big(U(J(T))\,K(J(T))\big) = D_\varphi U(J(T))\cdot K(J(T)) + D_\varphi K(J(T))\cdot U(J(T)). \tag{30}$$
Note that $\phi(X_T(\theta)), X_T(\theta)\in \mathrm{Dom}(D_\varphi^I)$, and $D_\varphi^I X_T(\theta)$ is a.e. nonzero because $D_\varphi^I X_T(\theta) = G_x(W,\theta)\, D_\varphi^I W$ and $G_x(W,\theta)$ is a.e. nonzero. By the boundedness of $\partial_\theta \phi(X_T(\theta))$, the order of taking the expectation and taking the derivative with respect to $\theta$ can be exchanged. By the definitions in Equations (2), (3) and (8), and Formulas (28)–(30), we have
$$\begin{aligned}
\frac{\partial}{\partial\theta}V(\alpha_0,\theta) &= \mathbb{E}_{\alpha_0}\big[\partial_\theta\,\phi(X_T(\theta))\big] = \mathbb{E}_{\alpha_0}\big[\phi'(X_T(\theta))\,\partial_\theta X_T(\theta)\big] \\
&= \mathbb{E}_{\alpha_0}\Bigg[\frac{D_\varphi^I \phi(X_T(\theta))}{D_\varphi^I X_T(\theta)}\, G_\theta\Big(\int_0^T f(\alpha_u)\,du,\ \theta\Big)\Bigg] = \mathbb{E}_{\alpha_0}\Bigg[D_\varphi^I \phi(X_T(\theta))\, \frac{G_\theta(W,\theta)}{G_x(W,\theta)\, D_\varphi^I W}\Bigg] \\
&= \mathbb{E}_{\alpha_0}\Bigg[\phi(X_T(\theta))\, \frac{G_\theta(W,\theta)}{G_x(W,\theta)\, D_\varphi^I W} \sum_{i,j;\,i\ne j}\Bigg(\int_0^T \frac{dJ_{i,j}(t)}{q_{i,j}[f(i)-f(j)]^2} - \int_0^T \frac{1_{\{\alpha_t=i\}}\,dt}{[f(i)-f(j)]^2}\Bigg) - \phi(X_T(\theta))\, D_\varphi^I\Big(\frac{G_\theta(W,\theta)}{G_x(W,\theta)\, D_\varphi^I W}\Big)\Bigg] \\
&= \mathbb{E}_{\alpha_0}\Bigg[\phi(X_T(\theta))\, \frac{3\,G_\theta(W,\theta)}{G_x(W,\theta)\,(m-1)T^3} \sum_{i\ne j} \frac{J_{i,j}/q_{i,j} - T_i}{[f(i)-f(j)]^2} - \phi(X_T(\theta))\, D_\varphi^I\Big(\frac{3\,G_\theta(W,\theta)}{G_x(W,\theta)\,(m-1)T^3}\Big)\Bigg] \\
&= \mathbb{E}_{\alpha_0}\Bigg[\phi(X_T(\theta))\Bigg(\frac{3\,G_\theta(W,\theta)}{G_x(W,\theta)\,(m-1)T^3} \sum_{i\ne j} \frac{J_{i,j}/q_{i,j} - T_i}{[f(i)-f(j)]^2} - \Big[\frac{G_\theta(W,\theta)}{G_x(W,\theta)}\Big]_x\Bigg)\Bigg].
\end{aligned}$$
The following lemma is applied in the above proof.
Lemma 1.
For any real function $f$ on $M=\{1,2,\dots,m\}$ and $t\in\mathbb{R}_+$, $\int_0^t f(\alpha_s)\,ds$ can be represented as follows:
$$\int_0^t f(\alpha_s)\,ds = \sum_{i,j} \int_0^t [f(i)-f(j)](s-t)\,dJ_{i,j}(s) + t f(\alpha_0), \tag{32}$$
where { J i , j ( t ) } i , j M , t [ 0 , T ] are defined by Equations (6) and (7).
Proof. 
Consider the embedded chain $\{\eta_n;\ n=1,2,\dots\}$ and let $N_t$ denote the number of jumps of the Markov chain $\alpha_s$ over the time period $(0,t]$. Let $\eta_0 = \alpha_0$, $t_0 = 0$, and define a sequence of stopping times $t_n = \inf\{t > t_{n-1} \mid \alpha_t \ne \alpha_{t-}\}$ for $n=1,2,\dots$; then we have
$$\begin{aligned}
\int_0^t f(\alpha_s)\,ds &= \sum_{i=1}^{N_t} \int_{t_{i-1}}^{t_i} f(\alpha_s)\,ds + \int_{t_{N_t}}^{t} f(\alpha_s)\,ds = \sum_{i=1}^{N_t} f(\eta_{i-1})(t_i - t_{i-1}) + f(\alpha_t)(t - t_{N_t}) \\
&= \sum_{i=1}^{N_t} f(\eta_{i-1})\,t_i - \sum_{i=0}^{N_t-1} f(\eta_i)\,t_i + f(\alpha_t)(t - t_{N_t}) \\
&= \sum_{i=1}^{N_t} f(\eta_{i-1})\,t_i - \sum_{i=1}^{N_t} f(\eta_i)\,t_i + f(\alpha_t)\,t_{N_t} + f(\alpha_t)(t - t_{N_t}) \\
&= \sum_{i=1}^{N_t} [f(\eta_{i-1}) - f(\eta_i)]\,t_i + t f(\alpha_t) = \sum_{i,j} \int_0^t [f(i)-f(j)]\,s\,dJ_{i,j}(s) + t f(\alpha_t).
\end{aligned} \tag{33}$$
Since for $t\in[0,T]$ we have
$$f(\alpha_t) = f(\alpha_0) + \sum_{i,j} [f(j)-f(i)]\,J_{i,j}(t) = f(\alpha_0) + \sum_{i,j} \int_0^t [f(j)-f(i)]\,dJ_{i,j}(s), \tag{34}$$
plugging Equation (34) into Equation (33), we see that
$$\int_0^t f(\alpha_s)\,ds = \sum_{i,j} \int_0^t [f(i)-f(j)]\,s\,dJ_{i,j}(s) + t f(\alpha_0) - \sum_{i,j} \int_0^t t\,[f(i)-f(j)]\,dJ_{i,j}(s) = \sum_{i,j} \int_0^t [f(i)-f(j)](s-t)\,dJ_{i,j}(s) + t f(\alpha_0),$$
which completes the proof.  ☐
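Lemma 1 is a pathwise identity, so it can be checked numerically on any explicit trajectory. The following sketch (our illustration; the helper and its names are not from the paper) evaluates both sides of the representation on a hand-written path:

```python
def lemma1_check(jump_times, states, f, t):
    """Evaluate both sides of the Lemma 1 representation on one explicit path.

    jump_times[k] is the k-th jump time (jump_times[0] = 0 is the start),
    states[k] is the state held on [jump_times[k], jump_times[k+1]).
    """
    # Left side: direct integral of f(alpha_s) over [0, t].
    knots = list(jump_times) + [t]
    lhs = sum(f(states[k]) * (knots[k + 1] - knots[k]) for k in range(len(states)))
    # Right side: sum over jumps i -> j of [f(i) - f(j)] * (s - t), plus t*f(alpha_0).
    rhs = t * f(states[0])
    for k in range(1, len(states)):
        s, i, j = jump_times[k], states[k - 1], states[k]
        rhs += (f(i) - f(j)) * (s - t)
    return lhs, rhs

# Example path on M = {1, 2}: state 1 on [0, 2.5), state 2 on [2.5, 6), state 1 on [6, 10].
f = lambda i: {1: 0.5, 2: 0.4}[i]
lhs, rhs = lemma1_check([0.0, 2.5, 6.0], [1, 2, 1], f, 10.0)
```

On this path both sides equal $0.5\cdot 2.5 + 0.4\cdot 3.5 + 0.5\cdot 4 = 4.65$, matching the direct integral.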

4. Numerical Simulation of Simple Examples with Two-State Markov Chains

In this section, we carry out a numerical simulation to compare the computation by Proposition 2 with that by finite difference. Consider the case
$$X_T(\theta) = \theta \int_0^T f(\alpha_u)\,du, \qquad \phi(x) = 1_{\{x>K\}},$$
with $K=46$, $T=10$, where the Markov chain $(\alpha_t)_{t\in\mathbb{R}_+}$ on the state space $M=\{1,2\}$ has a $Q$-matrix with exit rates $q_1 = 0.2$ and $q_2 = 0.1$ out of states 1 and 2, respectively. Let $f$ be any function whose domain contains $M$ such that $f(1)=0.5$ and $f(2)=0.4$. We then compute the sensitivity of $V(\alpha_0,\theta) = \mathbb{E}_{\alpha_0}[\phi(X_T(\theta))]$ with respect to $\theta$ at $\alpha_0 = 1$, $\theta = 10$.
Applying Proposition 2 (here $G(x,\theta)=\theta x$, so $G_\theta = W$, $G_x = \theta$, and $m=2$), we obtain
$$\frac{\partial}{\partial\theta}V(\alpha_0,\theta) = \mathbb{E}_{\alpha_0}\Bigg[\phi(X_T(\theta))\Bigg(\frac{3W}{\theta T^3} \sum_{i\ne j} \frac{J_{i,j}/q_{i,j} - T_i}{[f(i)-f(j)]^2} - \frac{1}{\theta}\Bigg)\Bigg] = \mathbb{E}\Big[1_{\{\int_0^{10} f(\alpha_u)\,du > 4.6\}} \big(0.15\,W\,(J_{1,2} + 2 J_{2,1} - 2) - 0.1\big)\Big]. \tag{35}$$
On the other hand, we apply the finite-difference method to estimate the sensitivity by the following ratio:
$$\frac{V(\alpha_0,\theta+\Delta) - V(\alpha_0,\theta)}{\Delta} = \mathbb{E}\Big[\frac{\phi(X_T(\theta+\Delta)) - \phi(X_T(\theta))}{\Delta}\Big],$$
where in practice we let $\Delta = 0.0001$.
As illustrated by Figure 1, and in accordance with expectations, the estimate obtained from Proposition 2 and Equation (35) converges faster than the finite-difference estimate, and hence outperforms it. The finite-difference method is not a sound choice for sensitivity computation because of the large variance of its estimator, which is moreover biased: the variance is approximately $2\,\mathrm{Var}(\phi(X_T(\theta)))/\Delta^2$, and the estimator becomes unbiased only asymptotically as $\Delta \to 0$. However, besides the finite-difference method, few existing approaches are general enough to cover cases of the form of Equations (1) and (2) for general-purpose sensitivity analysis.

Acknowledgments

We would like to thank two anonymous referees and the associate editor for many valuable comments and suggestions, which have led to a much improved version of the paper. The work is partly supported by the grant 201710299068X from the Innovation of College Student Project of UJS and the grant 17JDG051 from the Startup Foundation of UJS.

Author Contributions

Yongsheng Hang and Yue Liu conceived and designed the experiments; Shu Mo collected the data, and Xiaoyang Xu performed the programming and analysis. Yongsheng Hang and Yan Chen wrote the paper under the supervision of Yue Liu; Yue Liu and Shu Mo replied to the referees and revised the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Extension to Non-Differentiable Function ϕ(·)

We find from the above results in Propositions 1 and 2 that the final expression of the sensitivity can be free of the derivative of the function $\phi(\cdot)$. It is therefore possible to loosen the constraints on $\phi(\cdot)$, as shown by the following Proposition A1.
Proposition A1.
For any function $\phi\in\Lambda(\mathbb{R};\mathbb{R})$, and $X_T(\theta) = F(T,\alpha_T,\theta)$ defined by Equation (1), where $F(t,x,\theta)$ on $[0,T]\times M\times\mathbb{R}$ is twice differentiable with respect to $x$ and $\theta$, $\{x \mid F_x(T,x,\theta)=0\}$ is a countable set, and $F_\theta(T,i,\theta)$ is uniformly bounded for any $i\in M$, we have
$$\frac{\partial}{\partial\theta}V(\alpha_0,\theta) = \mathbb{E}_{\alpha_0}\Bigg[\phi(X_T(\theta))\Bigg(\frac{F_\theta(T,\alpha_T,\theta)\sum_{i,j;\,i\ne j}\frac{T_i - J_{i,j}/q_{i,j}}{i-j}}{F_x(T,\alpha_T,\theta)\,(m-1)T} - \Big[\frac{F_\theta(T,\alpha_T,\theta)}{F_x(T,\alpha_T,\theta)}\Big]_x\Bigg)\Bigg], \tag{A1}$$
where $\{J_{i,j}(t)\}_{i,j\in M,\,t\in[0,T]}$ are defined by Equations (6) and (7), $T_i := \int_0^T 1_{\{\alpha_t=i\}}\,dt$, $\mathbb{E}_{\alpha_0}[\cdot] := \mathbb{E}[\cdot\mid\alpha_0]$, and $F_x(t,x,\theta)$, $F_\theta(t,x,\theta)$ denote $\partial F(t,x,\theta)/\partial x$ and $\partial F(t,x,\theta)/\partial\theta$, respectively.
Proof. 
Since $\phi\in\Lambda(\mathbb{R};\mathbb{R})$, there exist $n\ge 1$, a sequence $\{k_i\in[0,\infty);\ i=1,\dots,n\}$ and a list of disjoint sets $(A_i)_{i\in\{1,\dots,n\}}$ such that
$$\phi(x) = \sum_{i=1}^n f_i(x)\,1_{\{A_i\}}(x), \quad x\in\mathbb{R}, \tag{A2}$$
where $f_i\in C_L(\mathbb{R};\mathbb{R})$ with $|f_i(x)-f_i(y)| \le k_i|x-y|$ for any $x,y\in A_i$, $i\in\{1,\dots,n\}$. Denote the boundary points of each $A_i$, $i\in\{1,\dots,n\}$, by $a_i$ and $b_i$, and define
$$S_D := \{a_1, b_1, \dots, a_n, b_n\}, \tag{A3}$$
where
$$a_1 = -\infty, \qquad b_n = \infty, \qquad b_{i-1} \le a_i < b_i \le a_{i+1}, \quad i\in\{2,\dots,n-1\}.$$
By Rademacher's theorem, a Lipschitz continuous function is differentiable at almost every point of an open set in $\mathbb{R}^n$. In view of (Juha Heinonen, Lectures on Lipschitz Analysis, p. 19), for a Lipschitz continuous function on an open set $A$, each non-differentiable point admits an open neighborhood inside $A$, and all these neighborhoods are disjoint. Therefore, there are countably many non-differentiable points in each interval $(a_i, b_i)$, $i\in\{1,\dots,n\}$; combining all these non-differentiable points with the boundary point set $S_D$ of Equation (A3) again yields a countable set, denoted $S_C$ and listed as $c_1, c_2, \dots$. Define two event sets
$$S_N := \{(\alpha_T,\theta)\in M\times\mathbb{R} : F(T,\alpha_T,\theta)\in S_C\}, \tag{A4}$$
and
$$S_N^c := (S_N)^c = \{(\alpha_T,\theta)\in M\times\mathbb{R} : F(T,\alpha_T,\theta)\notin S_C\}; \tag{A5}$$
then the probability measure of the set $S_N$ is
$$P(S_N) = \sum_{i=1}^\infty P(F(T,\alpha_T,\theta) = c_i) = 0, \tag{A6}$$
since the law of $F(T,\alpha_T,\theta)$ is absolutely continuous.
(i)
First, suppose $\phi\in\Lambda(\mathbb{R};\mathbb{R})\cap C(\mathbb{R};\mathbb{R})$; we show that
$$\lim_{\varepsilon\to 0} \mathbb{E}_{S_N^c}\Big[\frac{\phi(F(T,\alpha_T,\theta+\varepsilon)) - \phi(F(T,\alpha_T,\theta))}{\varepsilon}\Big] = \mathbb{E}_{S_N^c}\Big[\lim_{\varepsilon\to 0} \frac{\phi(F(T,\alpha_T,\theta+\varepsilon)) - \phi(F(T,\alpha_T,\theta))}{\varepsilon}\Big], \tag{A7}$$
where $\mathbb{E}_{S_N^c}[\cdot]$ denotes the expectation taken on the set $S_N^c$. Since $\phi\in C(\mathbb{R};\mathbb{R})$, we let $b_i = a_{i+1}$ for $i\in\{1,\dots,n-1\}$ and have
$$\begin{aligned}
|\phi(F(T,\alpha_T,\theta)) - \phi(F(T,1,\theta))| &\le \max_{2\le i\le n}\big[|\phi(F(T,\alpha_T,\theta)) - \phi(a_i)| + |\phi(a_i) - \phi(F(T,1,\theta))|\big] \vee \big[\big(|\phi(F(T,\alpha_T,\theta)) - \phi(a_2)| + |\phi(a_2) - \phi(F(T,1,\theta))|\big)\, 1_{\{F(T,\alpha_T,\theta) < a_2\}}\big] \\
&\le \max_{2\le i\le n}\big[k_i\, |F(T,\alpha_T,\theta) - a_i| + |\phi(a_i) - \phi(F(T,1,\theta))|\big] \vee \big[k_1\, |F(T,\alpha_T,\theta) - a_2| + |\phi(a_2) - \phi(F(T,1,\theta))|\big] \\
&\le \max_{2\le i\le n}\big(k_i\, |X_T(\theta) - a_i|\big) \vee \big[k_1\, |X_T(\theta) - a_2|\big] + \max_{2\le i\le n}|\phi(a_i) - \phi(F(T,1,\theta))| \\
&\le \max_{1\le i\le n} k_i\,\big(|X_T(\theta)| + |a_n| \vee |a_2|\big) + \max_{2\le i\le n}|\phi(a_i) - \phi(F(T,1,\theta))|.
\end{aligned} \tag{A8}$$
Hence $|\phi(F(T,\alpha_T,\theta))|$ is integrable. On the event set $S_N^c$, $F(T,\alpha_T,\theta)\notin S_C$, so $\phi(F(T,\alpha_T,\theta))$ is differentiable on $S_N^c$. We then obtain the integrability of $\partial\phi(F(T,\alpha_T,\theta))/\partial\theta$ from the uniform boundedness of $\partial F(T,\alpha_T,\theta)/\partial\theta$ and the following Formula (A9):
$$\lim_{\varepsilon\to 0}\Big|\frac{\phi(F(T,\alpha_T,\theta+\varepsilon)) - \phi(F(T,\alpha_T,\theta))}{\varepsilon}\Big| \le \lim_{\varepsilon\to 0} \max_{1\le i\le n} k_i\,\Big|\frac{F(T,\alpha_T,\theta+\varepsilon) - F(T,\alpha_T,\theta)}{\varepsilon}\Big| = \max_{1\le i\le n} k_i\,\Big|\frac{\partial F(T,\alpha_T,\theta)}{\partial\theta}\Big|, \tag{A9}$$
where $(\alpha_T,\theta)\in S_N^c$. Similarly to Equation (A9), for any $\varepsilon\in\mathbb{R}$ there exists $\theta_0\in(\theta,\theta+\varepsilon)$ or $(\theta-\varepsilon,\theta)$ such that
$$\Big|\frac{\phi(F(T,\alpha_T,\theta+\varepsilon)) - \phi(F(T,\alpha_T,\theta))}{\varepsilon}\Big| \le \max_{1\le i\le n} k_i\,\Big|\frac{\partial F(T,\alpha_T,\theta)}{\partial\theta}\Big|_{\theta=\theta_0}, \tag{A10}$$
which is uniformly bounded. Therefore, Equation (A7) is proved by Lebesgue's dominated convergence theorem.
(ii)
For $\phi\in\Lambda(\mathbb{R};\mathbb{R})\cap C(\mathbb{R};\mathbb{R})$, we prove Equation (A1). Since $\phi(F(T,\alpha_T,\theta))$ is differentiable with respect to $\theta$ when $(\alpha_T,\theta)\in S_N^c$, the conclusion of Section 3.1 is valid on the set $S_N^c$. By Equations (A6) and (A7), we have
$$\begin{aligned}
\frac{\partial}{\partial\theta}V(\alpha_0,\theta) &= \lim_{\varepsilon\to 0} \mathbb{E}_{\alpha_0}\Big[\frac{\phi(F(T,\alpha_T,\theta+\varepsilon)) - \phi(F(T,\alpha_T,\theta))}{\varepsilon}\Big] = \lim_{\varepsilon\to 0} \mathbb{E}_{S_N^c}^{\alpha_0}\Big[\frac{\phi(F(T,\alpha_T,\theta+\varepsilon)) - \phi(F(T,\alpha_T,\theta))}{\varepsilon}\Big] \\
&= \mathbb{E}_{S_N^c}^{\alpha_0}\Big[\lim_{\varepsilon\to 0} \frac{\phi(F(T,\alpha_T,\theta+\varepsilon)) - \phi(F(T,\alpha_T,\theta))}{\varepsilon}\Big] \\
&= \mathbb{E}_{S_N^c}^{\alpha_0}\Bigg[\phi(X_T(\theta))\Bigg(\frac{F_\theta(T,\alpha_T,\theta)\sum_{i,j;\,i\ne j}\frac{T_i - J_{i,j}/q_{i,j}}{i-j}}{F_x(T,\alpha_T,\theta)\,(m-1)T} - \Big[\frac{F_\theta(T,\alpha_T,\theta)}{F_x(T,\alpha_T,\theta)}\Big]_x\Bigg)\Bigg] \\
&= \mathbb{E}_{\alpha_0}\Bigg[\phi(X_T(\theta))\Bigg(\frac{F_\theta(T,\alpha_T,\theta)\sum_{i,j;\,i\ne j}\frac{T_i - J_{i,j}/q_{i,j}}{i-j}}{F_x(T,\alpha_T,\theta)\,(m-1)T} - \Big[\frac{F_\theta(T,\alpha_T,\theta)}{F_x(T,\alpha_T,\theta)}\Big]_x\Bigg)\Bigg].
\end{aligned} \tag{A11}$$
(iii)
Finally, we extend from $\phi\in\Lambda(\mathbb{R};\mathbb{R})\cap C(\mathbb{R};\mathbb{R})$ to the whole class $\Lambda(\mathbb{R};\mathbb{R})$. Clearly, $\phi\in\Lambda(\mathbb{R};\mathbb{R})$ can be a.e. approximated by a sequence $(\phi_n)_{n\in\mathbb{N}}\subset\Lambda(\mathbb{R};\mathbb{R})\cap C(\mathbb{R};\mathbb{R})$ such that
$$\lim_{n\to\infty} \phi_n(x) = \phi(x), \quad \forall\, x\in\mathbb{R}\setminus S_D,$$
where $S_D$ is defined by Equation (A3) and
$$\phi(x) - c_0 \le \phi_n(x) \le \phi(x), \qquad \phi_n(x) \le \phi_{n+1}(x), \quad \forall\, x\in\mathbb{R},\ n\in\mathbb{N}, \tag{A13}$$
where $c_0\in\mathbb{R}_+$ is a constant. Since $P(S_N) = 0$ for $\theta\in\mathbb{R}$, we have, for any $\varepsilon\in\mathbb{R}$,
$$\begin{aligned}
\mathbb{E}_{\alpha_0}\Big[\frac{\phi(F(T,\alpha_T,\theta+\varepsilon)) - \phi(F(T,\alpha_T,\theta))}{\varepsilon}\Big] &= \mathbb{E}_{\alpha_0}\Big[\frac{\phi(F(T,\alpha_T,\theta+\varepsilon)) - \phi(F(T,\alpha_T,\theta))}{\varepsilon}\, 1_{\{(\alpha_T,\theta),\,(\alpha_T,\theta+\varepsilon)\notin S_N\}}\Big] \\
&= \mathbb{E}_{\alpha_0}\Big[\lim_{n\to\infty} \frac{\phi_n(F(T,\alpha_T,\theta+\varepsilon)) - \phi_n(F(T,\alpha_T,\theta))}{\varepsilon}\, 1_{\{(\alpha_T,\theta),\,(\alpha_T,\theta+\varepsilon)\notin S_N\}}\Big] \\
&= \lim_{n\to\infty} \mathbb{E}_{\alpha_0}\Big[\frac{\phi_n(F(T,\alpha_T,\theta+\varepsilon)) - \phi_n(F(T,\alpha_T,\theta))}{\varepsilon}\Big],
\end{aligned} \tag{A14}$$
where by Equation (A13) we note that $\phi_n(F(T,\alpha_T,\theta+\varepsilon))$ is bounded by $|\phi(F(T,\alpha_T,\theta+\varepsilon))| + c_0$, which is integrable; hence Lebesgue's dominated convergence theorem applies in the last line.
Next, we show that this convergence in the last line is uniform in $\varepsilon$. Let the function sequence $\{K_n(x)\}_{n\in\mathbb{N},\,x\in\mathbb{R}}$ be defined by
$$K_n(x) := \mathbb{E}_{\alpha_0}[\phi_n(F(T,\alpha_T,\theta+x))], \quad n\in\mathbb{N},\ x\in\mathbb{R}; \tag{A15}$$
then we have
$$\lim_{n\to\infty} K_n(x) = \mathbb{E}_{\alpha_0}[\phi(F(T,\alpha_T,\theta+x))],$$
and
$$K_n(x) \le K_{n+1}(x), \quad n\in\mathbb{N},\ x\in\mathbb{R}.$$
By Dini's theorem, the convergence is uniform for $x\in[-1,1]$, which guarantees the exchange of the order of limits in Equation (A17) below. Therefore, by Equations (A11) and (A14), we have
$$\begin{aligned}
\frac{\partial}{\partial\theta}V(\alpha_0,\theta) &= \lim_{\varepsilon\to 0} \mathbb{E}_{\alpha_0}\Big[\frac{\phi(F(T,\alpha_T,\theta+\varepsilon)) - \phi(F(T,\alpha_T,\theta))}{\varepsilon}\Big] = \lim_{\varepsilon\to 0} \lim_{n\to\infty} \mathbb{E}_{\alpha_0}\Big[\frac{\phi_n(F(T,\alpha_T,\theta+\varepsilon)) - \phi_n(F(T,\alpha_T,\theta))}{\varepsilon}\Big] \\
&= \lim_{n\to\infty} \lim_{\varepsilon\to 0} \mathbb{E}_{\alpha_0}\Big[\frac{\phi_n(F(T,\alpha_T,\theta+\varepsilon)) - \phi_n(F(T,\alpha_T,\theta))}{\varepsilon}\Big],
\end{aligned} \tag{A17}$$
and hence
$$\frac{\partial}{\partial\theta}V(\alpha_0,\theta) = \lim_{n\to\infty} \mathbb{E}_{\alpha_0}\Bigg[\phi_n(X_T(\theta))\Bigg(\frac{F_\theta(T,\alpha_T,\theta)\sum_{i,j;\,i\ne j}\frac{T_i - J_{i,j}/q_{i,j}}{i-j}}{F_x(T,\alpha_T,\theta)\,(m-1)T} - \Big[\frac{F_\theta(T,\alpha_T,\theta)}{F_x(T,\alpha_T,\theta)}\Big]_x\Bigg)\Bigg]. \tag{A16}$$
Since $|\phi_n(X_T(\theta))| \le |\phi_1(X_T(\theta))| + c_0$, the integrand for any $n$ in Equation (A16) is bounded by
$$\big(|\phi_1(X_T(\theta))| + c_0\big)\,\Bigg|\frac{F_\theta(T,\alpha_T,\theta)\sum_{i,j;\,i\ne j}\frac{T_i - J_{i,j}/q_{i,j}}{i-j}}{F_x(T,\alpha_T,\theta)\,(m-1)T} - \Big[\frac{F_\theta(T,\alpha_T,\theta)}{F_x(T,\alpha_T,\theta)}\Big]_x\Bigg|,$$
which is integrable; hence we can apply Lebesgue's dominated convergence theorem in Equation (A16) and complete the proof by
$$\frac{\partial}{\partial\theta}V(\alpha_0,\theta) = \mathbb{E}_{\alpha_0}\Bigg[\lim_{n\to\infty} \phi_n(X_T(\theta))\Bigg(\frac{F_\theta(T,\alpha_T,\theta)\sum_{i,j;\,i\ne j}\frac{T_i - J_{i,j}/q_{i,j}}{i-j}}{F_x(T,\alpha_T,\theta)\,(m-1)T} - \Big[\frac{F_\theta(T,\alpha_T,\theta)}{F_x(T,\alpha_T,\theta)}\Big]_x\Bigg)\Bigg] = \mathbb{E}_{\alpha_0}\Bigg[\phi(X_T(\theta))\Bigg(\frac{F_\theta(T,\alpha_T,\theta)\sum_{i,j;\,i\ne j}\frac{T_i - J_{i,j}/q_{i,j}}{i-j}}{F_x(T,\alpha_T,\theta)\,(m-1)T} - \Big[\frac{F_\theta(T,\alpha_T,\theta)}{F_x(T,\alpha_T,\theta)}\Big]_x\Bigg)\Bigg].$$

References

  1. Micevski, T.; Kuczera, G.; Coombes, P. Markov model for storm water pipe deterioration. J. Infrastruct. Syst. 2002, 8, 49–56. [Google Scholar] [CrossRef]
  2. Ling, Q.; Lemmon, M.D. Soft real-time scheduling of networked control systems with dropouts governed by a Markov chain. In Proceedings of the 2003 American Control Conference, Denver, CO, USA, 4–6 June 2003. [Google Scholar]
  3. Zhao, T.; Sundararajan, S.; Tseng, C. Highway development decision-making under uncertainty: A real options approach. J. Infrastruct. Syst. 2004, 10, 23–32. [Google Scholar] [CrossRef]
  4. White, C.C.; White, D.J. Markov decision processes. Eur. J. Oper. Res. 1989, 39, 1–16. [Google Scholar] [CrossRef]
  5. Scherer, W.; Glagola, D. Markovian models for bridge maintenance management. J. Transp. Eng. 1994, 120, 37–51. [Google Scholar] [CrossRef]
  6. Song, H.; Liu, C.C.; Lawarrée, J. Optimal electricity supply bidding by Markov decision process. IEEE Trans. Power Syst. 2000, 15, 618–624. [Google Scholar] [CrossRef]
  7. McDougall, J.; Miller, S. Sensitivity of wireless network simulations to a two-state Markov model channel approximation. IEEE Glob. Telecommun. Conf. 2003, 2, 697–701. [Google Scholar]
  8. Fournié, E.; Lasry, J.M.; Lebuchoux, J.; Lions, P.L.; Touzi, N. Applications of Malliavin calculus to Monte Carlo methods in finance. Finance Stoch. 1999, 3, 391–412. [Google Scholar] [CrossRef]
  9. Glasserman, P.; Ho, Y.C. Gradient Estimation via Perturbation Analysis; Springer: Berlin, Germany, 1996. [Google Scholar]
  10. Davis, M.H.A.; Johansson, M.P. Malliavin Monte Carlo Greeks for jump diffusions. Stoch. Process. Appl. 2006, 116, 101–129. [Google Scholar] [CrossRef]
  11. Siu, T.K. Integration by parts and martingale representation for a Markov chain. Abstr. Appl. Anal. 2014, 2014. [Google Scholar] [CrossRef]
  12. Denis, L.; Nguyen, T.M. Malliavin Calculus for Markov Chains Using Perturbations of Time. Preprint. 2015. Available online: http://perso.univ-lemans.fr/~ldenis/MarkovChain.pdf (accessed on 10 October 2017).
  13. Glynn, W.G.; Pierre, L.E.; Michel, A. Gradient Estimation for Ratios; Preprint; Stanford University: Stanford, CA, USA, 1991. [Google Scholar]
  14. Heidergott, B.; Leahu, H.; Löpker, A.; Pflug, G. Perturbation analysis of inhomogeneous finite Markov chains. Adv. Appl. Probab. 2016, 48, 255–273. [Google Scholar] [CrossRef]
  15. Kawai, R.; Takeuchi, A. Greeks formulas for an asset price model with Gamma processes. Math. Finance 2011, 21, 723–742. [Google Scholar] [CrossRef]
Figure 1. Computation of Sensitivity.
