Article

Monte Carlo Sensitivities Using the Absolute Measure-Valued Derivative Method

1 Centre for Actuarial Studies, University of Melbourne, Parkville, VIC 3010, Australia
2 Discipline of Finance, Codrington Building (H69), The University of Sydney, Sydney, NSW 2006, Australia
3 Trinity College, University of Cambridge, Cambridge CB2 1TQ, UK
* Author to whom correspondence should be addressed.
This paper is dedicated to the memory of Mark Joshi who passed away unexpectedly.
J. Risk Financial Manag. 2023, 16(12), 509; https://doi.org/10.3390/jrfm16120509
Submission received: 7 November 2023 / Revised: 4 December 2023 / Accepted: 5 December 2023 / Published: 8 December 2023

Abstract

Measure-valued differentiation (MVD) is a relatively new method for computing Monte Carlo sensitivities, relying on a decomposition of the derivative of transition densities of the underlying process into a linear combination of probability measures. In computing the sensitivities, additional paths are generated for each constituent distribution and the payoffs from these paths are combined to produce sample estimates. The method generally produces sensitivity estimates with lower variance than the finite difference and likelihood ratio methods, and can be applied to discontinuous payoffs in contrast to the pathwise differentiation method. However, these benefits come at the expense of an additional computational burden. In this paper, we propose an alternative approach, called the absolute measure-valued differentiation (AMVD) method, which expresses the derivative of the transition density at each simulation step as a single density rather than a linear combination. It is computationally more efficient than the MVD method and can result in sensitivity estimates with lower variance. Analytic and numerical examples are provided to compare the variance in the sensitivity estimates of the AMVD method against alternative methods.
JEL Classification:
D52; D81; G11

1. Introduction

Sensitivities play a key role in the assessment and management of the risks associated with financial derivatives and must be computed along with prices as part of sound risk management practice. Given that it is not possible to compute the prices, let alone the sensitivities, of many financial derivatives analytically, they are usually computed numerically, with Monte Carlo simulation being the most widely used technique. The associated sensitivities are then computed by finite difference, pathwise differentiation, or likelihood ratio methods, each with its own set of limitations. In particular, the finite difference method is biased and its sensitivity estimates have relatively large variances, the pathwise derivative method cannot be applied to products with discontinuous payoffs such as digitals, and the likelihood ratio method can be subject to unbounded variance when the associated score function is not absolutely continuous with respect to the sensitivity parameter. For a detailed discussion of these methods and their limitations, refer to Glasserman (2004), chapter 7.
There has been extensive work on regularizing the pathwise method to enable it to work with discontinuities. Such work tends either to involve differentiating the discontinuity directly, which yields a delta distribution, or to utilize smoothing to avoid this differentiation. Joshi and Kainth (2004), Rott and Fries (2005), and Brace (2007) explore the explicit integration of the delta distribution, which requires at least one extra simulation that must be handcrafted to the payoff. Liu and Hong (2011) and Lyuu and Teng (2011) reduce the degree of handcrafting by employing kernel smoothing and importance sampling, respectively. Chan and Joshi (2015) modify the simulation scheme as a function of the perturbation variable so that the pathwise payoff, as a function of perturbation size, is smooth. It is then possible to apply the pathwise method at the cost of gaining a small likelihood ratio term. All these methods have the drawback that the methodology is adapted to the singularities of the payoff. This renders them much harder to implement generically than the likelihood ratio and pathwise methods.
Despite their importance, there is a relative absence of new techniques for computing Monte Carlo sensitivities, and recent advances have focused instead on improving the computational efficiency of existing techniques. For example, Capriotti and Giles (2018) apply adjoint algorithmic differentiation to speed up pathwise sensitivity calculations, while Xiang and Wang (2022) apply low discrepancy sequences to achieve variance reduction for American option sensitivities.
A notable exception is the measure-valued differentiation (MVD) method considered, for example, in Heidergott et al. (2008) and  Pflug and Thoma (2016). The MVD method does not differentiate the payoff and so discontinuities do not cause difficulties, and it has been shown to produce sensitivity estimates with lower variance than those computed using the finite difference and the likelihood ratio (LR) methods. It is also generic in the sense that it is not adapted to the payoff in any way, and so the same paths can be used to compute the sensitivities of many derivatives on the same underlying framework. In the MVD approach, the derivative of the transition density is recognized as a signed measure and decomposed as the difference of two proper measures. The simulation at each step is then branched according to the two constituent measures to produce multiple MVD paths. Sensitivity estimates from these branched paths are then recombined to produce one MVD sensitivity sample. It is shown that this approach generally results in sensitivity estimates with lower variance than alternative methods, albeit at the expense of a greater computational burden due to the branching of the paths. Further variance reduction is possible by coupling, or correlating, each pair of the branched paths.
In this paper, we introduce an alternative to the MVD method, called the absolute measure-valued differentiation (AMVD) method, which decomposes the density derivative as the product of its sign and its absolute value. This method shares many of the properties of the MVD method, such as the ability to handle discontinuous payoffs, but  instead of initiating two additional paths at each simulation step of the original path, it initiates only one additional subpath. It is found that the AMVD method gives vega estimates with lower variance than the MVD method for vanilla and digital calls in the Black–Scholes model. Moreover, the AMVD vega estimates have lower variance than the pathwise differentiation method in certain situations, such as the case of short-dated deep in-the-money vanilla calls. A consideration of vegas for the more exotic double-barrier options revealed that the relative performances of the LR, MVD, and AMVD methods depend, in general, on the positions of the barriers and the way each method distributes densities relative to the barriers.
Applying the AMVD method requires the generation of nonstandard random variates,1 and for the examples considered in this paper, we provide a highly efficient implementation of the inverse transform method for this purpose using Newton’s method with a careful selection of the initial point. The time taken to generate each vega subpath for the AMVD method is approximately equal to the time taken to generate the corresponding MVD subpath using the efficient generation of the double-sided Maxwell–Boltzmann distribution introduced in Heidergott et al. (2008).
The remainder of the paper is structured as follows: the measure-valued differentiation method is briefly reviewed in Section 2, and the absolute measure-valued differentiation method is presented in Section 3. This is followed by Section 4 in which the notation and key identities for the remainder of the paper are presented. Sensitivities for vanilla calls and digital calls are computed analytically in Section 5 and Section 6, respectively. A more complex example of the double knock-out barrier option is investigated numerically in Section 7, and the paper concludes with Section 8.

2. Brief Review of Measure-Valued Differentiation

In this section, we provide a heuristic overview of the MVD method and refer the interested reader to Heidergott et al. (2008) and Pflug and Thoma (2016) for the details.
Let X be an ℝ-valued random variable, and let ϕ(x; θ) be the probability density function of X with θ as one of its parameters. Then, given a payoff, f: ℝ → ℝ, associated with a financial derivative on X, the likelihood ratio (LR) sensitivity with respect to the parameter θ is given by
Δ_θ(f) = ∫_ℝ f(x) [∂ϕ(x; θ)/∂θ] dx
ignoring discounting. At this point, the likelihood ratio method rewrites the above expression as
Δ_θ^LR(f) = ∫_ℝ [∂ ln ϕ(x; θ)/∂θ] f(x) · ϕ(x; θ) dx,
and computes the sensitivity as the expectation of the modified payoff obtained by scaling f(x) by the score function ∂ ln ϕ(x; θ)/∂θ.
Now, suppose φ(x) = ∂ϕ(x; θ)/∂θ is a signed measure on ℝ, and that it decomposes as the difference
φ(x) = φ₁(x) − φ₂(x),
where φ₁ and φ₂ are proper measures on ℝ. Then, under the MVD method, the sensitivity with respect to θ is computed as the difference
Δ_θ^MVD(f) = μ(φ₁) ∫_ℝ f(x) [φ₁(x)/μ(φ₁)] dx − μ(φ₂) ∫_ℝ f(x) [φ₂(x)/μ(φ₂)] dx,
where μ(φ_i) = ∫_ℝ φ_i(x) dx > 0. The two summands are computed separately, although they may be coupled to reduce the total variance. It should be noted that φ̄_i(x) = φ_i(x)/μ(φ_i), for 1 ≤ i ≤ 2, are probability measures on ℝ, and so the decomposition in (4) provides the basis for a numerical approximation of the sensitivity using, for example, a Monte Carlo simulation in which the integrals in (4) are computed numerically using random draws from the distributions determined by φ̄_i(x).
The above description of the MVD method applies to path-independent European derivatives. For the more general case, suppose that a Monte Carlo simulation, with simulation times 0 = t₀ < t₁ < ⋯ < t_n = T, is used to value a given derivative with expiry T. If the underlying process is assumed to be Markovian, then the probability density of a simulated path is given by the product
ϕ(x₀, x₁, …, x_n; θ) = ∏_{i=1}^n ϕ_i(x_i | x_{i−1}; θ),
where ϕ_i(x_i | x_{i−1}; θ) is the transition density from t_{i−1} to t_i conditional on starting from x_{i−1} at t_{i−1}. For notational convenience, write ϕ(x; θ) = ϕ(x₀, x₁, …, x_n; θ), ϕ_i(x_i; θ) = ϕ_i(x_i | x_{i−1}; θ), and define
∂_{i,θ} ϕ(x; θ) = ϕ₁(x₁; θ) ⋯ ϕ_{i−1}(x_{i−1}; θ) [∂ϕ_i(x_i; θ)/∂θ] ϕ_{i+1}(x_{i+1}; θ) ⋯ ϕ_n(x_n; θ).
The likelihood ratio sensitivity of a general payoff, f(x) = f(x₀, x₁, x₂, …, x_n), with respect to the parameter, θ, is then given by
Δ_θ^LR(f) = ∑_{i=1}^n ∫_{ℝⁿ} f(x) ∂_{i,θ} ϕ(x; θ) dx.
Once again, if it is assumed that the derivatives
φ_i(x_i) = ∂ϕ_i(x_i; θ)/∂θ
are signed measures on ℝ, and decompose as differences
φ_i(x_i) = φ_{i,1}(x_i) − φ_{i,2}(x_i),
where φ_{i,j}(x) are proper measures, then the MVD estimate of the sensitivity is given by
Δ_θ^MVD(f) = ∑_{i=1}^n ∫_{ℝⁿ} f(x) ∂_{i,θ}^MVD ϕ(x; θ) dx,
where ∂_{i,θ}^MVD ϕ(x; θ) is obtained from ∂_{i,θ} ϕ(x; θ) by substituting the derivative term, ∂ϕ_i(x_i; θ)/∂θ, in (6) with its decomposition
∂ϕ_i(x_i; θ)/∂θ = μ(φ_{i,1}) [φ_{i,1}(x_i)/μ(φ_{i,1})] − μ(φ_{i,2}) [φ_{i,2}(x_i)/μ(φ_{i,2})],
where μ(φ_{i,j}) = ∫_ℝ φ_{i,j}(x_i) dx_i.
The implementation of the MVD method begins with the simulation, in the usual manner, of the underlying asset price path. Then, for each 1 ≤ i ≤ n, the i-th step is branched into two by drawing from the distributions corresponding to the probability densities, φ̄_{i,j}(x) = φ_{i,j}(x)/μ(φ_{i,j}), for 1 ≤ j ≤ 2. The remaining steps in the two branched paths are then resimulated by drawing from the original distributions, ϕ_j(x_j; θ) for i + 1 ≤ j ≤ n. Thus, the full MVD method generates a set of 2n paths for each regular path and is consequently more computationally intensive than, for example, the likelihood ratio method.2 In Pflug and Thoma (2016), less computationally demanding approximations to the full MVD method are proposed in which branching is performed only in a small subset of the simulation steps.
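To make the branching concrete, the following sketch (in C++, the language used for the numerical work in Section 7) assembles one MVD sensitivity sample from a single nominal path. The sampler and weight objects (drawPhi for the nominal transition density, drawPhiPlus and drawPhiMinus for the normalized component densities φ̄_{i,1} and φ̄_{i,2}, and muPlus[i], muMinus[i] for μ(φ_{i,1}), μ(φ_{i,2})) are model-specific placeholders of our own naming, not objects defined in the paper.

#include <functional>
#include <vector>

using Sampler = std::function<double(double /*x_prev*/, int /*step*/)>;
using Payoff  = std::function<double(const std::vector<double>&)>;

// One MVD sample built from a single nominal path; muPlus[i] and muMinus[i]
// hold mu(phi_{i,1}) and mu(phi_{i,2}) for i = 1, ..., n (index 0 unused).
double mvdSample(int n, double x0,
                 const Sampler& drawPhi,
                 const Sampler& drawPhiPlus,
                 const Sampler& drawPhiMinus,
                 const std::vector<double>& muPlus,
                 const std::vector<double>& muMinus,
                 const Payoff& f)
{
    std::vector<double> x(n + 1);
    x[0] = x0;
    for (int i = 1; i <= n; ++i) x[i] = drawPhi(x[i - 1], i);   // nominal path

    double estimate = 0.0;
    for (int i = 1; i <= n; ++i) {
        // Branch step i into two paths and resimulate the remaining steps.
        std::vector<double> xp(x.begin(), x.begin() + i), xm(xp);
        xp.push_back(drawPhiPlus(x[i - 1], i));
        xm.push_back(drawPhiMinus(x[i - 1], i));
        for (int j = i + 1; j <= n; ++j) {
            xp.push_back(drawPhi(xp[j - 1], j));
            xm.push_back(drawPhi(xm[j - 1], j));
        }
        estimate += muPlus[i] * f(xp) - muMinus[i] * f(xm);
    }
    return estimate;   // average over many nominal paths to estimate the sensitivity
}

Averaging mvdSample over many nominal paths, and applying discounting where appropriate, yields the MVD estimate of the sensitivity.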

3. Absolute Measure-Valued Differentiation

An important factor taken into consideration when deciding on the form of the decomposition of the transition density in (9) is the additional computational burden associated with the need to generate random variates from the component distributions. The decomposition chosen in Heidergott et al. (2008) and Pflug and Thoma (2016) is motivated, at least in part, by the desire to ensure that one of the components coincides with the distribution already used in generating the underlying process. The remaining component, however, will generally correspond to a different distribution.
The main motivation behind the absolute measure-valued differentiation (AMVD) method is to improve the computational efficiency of the MVD method by requiring the introduction of only one additional distribution and branching off only one path at each simulation step, while retaining the key features of the MVD method, viz., not requiring the payoff derivative, and producing sensitivity estimates with low variances. Using the notation from the previous section, the AMVD estimate for the sensitivity of a payoff f with respect to the parameter θ is defined by
Δ_θ^AMVD(f) = ∑_{i=1}^n ∫_{ℝⁿ} f(x) ε_{φ_i}(x_i) μ(|φ_i|) [∂_{|i|,θ}^MVD ϕ(x; θ)/μ(|φ_i|)] dx,
where ε_{φ_i}(x_i) = 1{φ_i(x_i) ≥ 0} − 1{φ_i(x_i) < 0}, μ(|φ_i|) = ∫_ℝ |φ_i(x_i)| dx_i, and
∂_{|i|,θ}^MVD ϕ(x; θ) = ϕ₁(x₁; θ) ⋯ ϕ_{i−1}(x_{i−1}; θ) |φ_i(x_i)| ϕ_{i+1}(x_{i+1}; θ) ⋯ ϕ_n(x_n; θ).
It follows from ε_{φ_i}(x_i) |φ_i(x_i)| = φ_i(x_i) that Δ_θ^AMVD(f) is an unbiased estimator for Δ_θ(f). Moreover, although the AMVD method introduces what appears to be a problematic term, viz., ε_{φ_i}(x_i), since ε_{φ_i}(x_i) = ±1, and these terms are squared when computing the associated variance, they do not adversely affect the AMVD variance.
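For comparison with the MVD sketch in Section 2, the following sketch assembles one AMVD sample. Here drawAbsPhi stands for a sampler from the normalized density |φ_i|/μ(|φ_i|), muAbs[i] for μ(|φ_i|), and epsSign for ε_{φ_i} evaluated at the branched value; these are hypothetical names introduced purely for illustration.

#include <functional>
#include <vector>

using Sampler = std::function<double(double /*x_prev*/, int /*step*/)>;
using Payoff  = std::function<double(const std::vector<double>&)>;
// Sign of phi_i evaluated at the branched value (epsilon_{phi_i} in the text).
using Sign    = std::function<double(double /*x_i*/, double /*x_prev*/, int /*step*/)>;

// One AMVD sample: a single branched subpath per step, weighted by
// muAbs[i] = mu(|phi_i|) and the sign epsilon_{phi_i}(x_i), i = 1, ..., n.
double amvdSample(int n, double x0,
                  const Sampler& drawPhi,
                  const Sampler& drawAbsPhi,
                  const std::vector<double>& muAbs,
                  const Sign& epsSign,
                  const Payoff& f)
{
    std::vector<double> x(n + 1);
    x[0] = x0;
    for (int i = 1; i <= n; ++i) x[i] = drawPhi(x[i - 1], i);   // nominal path

    double estimate = 0.0;
    for (int i = 1; i <= n; ++i) {
        std::vector<double> xa(x.begin(), x.begin() + i);
        xa.push_back(drawAbsPhi(x[i - 1], i));                  // draw from |phi_i| / mu(|phi_i|)
        for (int j = i + 1; j <= n; ++j) xa.push_back(drawPhi(xa[j - 1], j));
        estimate += epsSign(xa[i], x[i - 1], i) * muAbs[i] * f(xa);
    }
    return estimate;
}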

4. Notation, Model, and Useful Identities

In situations where it applies, the pathwise differentiation (PW) method usually gives sensitivity estimates with lower variances than other methods, and it was shown in Pflug and Thoma (2016) that the MVD sensitivities have lower variance than the corresponding finite difference estimates. In subsequent sections, we will compare the variances of sensitivity estimates computed using the LR, MVD, AMVD, and  PW methods. Although the analysis could be performed for any sensitivity and with any underlying model, we will focus on delta and vega under the Black and Scholes (1973) model. This allows us to obtain, analytically, the variances of deltas and vegas for vanillas and digitals, and avoid the consideration of more complex models that do not contribute, in any substantial way, to the analysis of the relative performance of the various approaches for computing sensitivities.

4.1. Black–Scholes Delta and Vega

Let r be the risk-free rate, S_t the price of the underlying asset at time t, and σ the asset volatility. Moreover, let 0 = t₀ < t₁ < ⋯ < t_n = T be the sequence of Monte Carlo simulation times, and, for notational convenience, define
S_i = S_{t_i},   x_i = ln S_i,   and   Δ_j = t_{j+1} − t_j,
where 0 ≤ i ≤ n and 0 ≤ j ≤ n − 1. The Euler–Maruyama discretization scheme for the Black–Scholes model
x_{i+1} = x_i + (r − ½σ²)Δ_i + σ√(Δ_i) ξ_i
is unbiased, where ξ_i ∼ N(0, 1) is a normally distributed random variable. If we define
μ_i(σ) = (r − ½σ²)Δ_{i−1}   and   ς_i(σ) = σ√(Δ_{i−1}),
for 1 ≤ i ≤ n, then the transition density, ϕ_i(x_i; σ), is given by
ϕ_i(x_i; σ) = [1/(ς_i(σ)√(2π))] exp(−[x_i − x_{i−1} − μ_i(σ)]²/[2ς_i²(σ)]),
so that x_i ∼ N(x_{i−1} + μ_i(σ), ς_i²(σ)). Straightforward calculations give
φ₁(x₁; S₀) = ∂ϕ₁(x₁; S₀)/∂S₀ = [1/(S₀ ς₁(σ))] ξ₁ ϕ(ξ₁),
φ_i(x_i; S₀) = 0,   2 ≤ i ≤ n,
φ_i(x_i; σ) = ∂ϕ_i(x_i; σ)/∂σ = (1/σ)(ξ_i² − ς_i ξ_i − 1) ϕ(ξ_i),   1 ≤ i ≤ n,
where  ξ i  is the standard normal random variate in (15) and  ϕ ( · )  is the standard normal density. The decompositions chosen for the MVD method in Pflug and Thoma (2016) are
φ₁^MVD(x₁; S₀) = [1/(S₀ ς₁(σ))] ξ₁ ϕ(ξ₁) 1{ξ₁ ≥ 0} − [1/(S₀ ς₁(σ))] (−ξ₁) ϕ(ξ₁) 1{ξ₁ < 0},
φ_i^MVD(x_i; σ) = (1/σ) ξ_i² ϕ(ξ_i) − [ς_i(σ)/σ] ξ_i ϕ(ξ_i) 1{ξ_i ≥ 0} + [ς_i(σ)/σ] (−ξ_i) ϕ(ξ_i) 1{ξ_i < 0} − (1/σ) ϕ(ξ_i),
where 2 ≤ i ≤ n. Both terms in (21) correspond to the density of the Rayleigh distribution, while in (22), the first term corresponds to the double-sided Maxwell–Boltzmann distribution, the second and third terms correspond to the Rayleigh distribution, and the last term is the standard normal density. Note that the linear terms in (18) and (20) were decomposed into two Rayleigh density terms since they define signed measures rather than proper measures on ℝ. The decompositions that will be used in the AMVD method are
φ₁^AMVD(x₁; S₀) = [1/(S₀ ς₁(σ))] ε(ξ₁) |ξ₁| ϕ(ξ₁),
φ_i^AMVD(x_i; σ) = (1/σ) ε(ξ_i² − ς_i(σ) ξ_i − 1) |ξ_i² − ς_i(σ) ξ_i − 1| ϕ(ξ_i).

4.2. Notation and Identities

For any n ∈ ℕ₊, define polynomials, p_n(x), by p₀(x) = 1 and, for n ≥ 1,
p_n(x) = xⁿ − (n − 1)x^{n−2}.
Moreover, given any polynomial, p(x), of degree n, and a ∈ ℝ, let α_i(p, a) ∈ ℝ be the coefficients in the decomposition
p(x + a) = ∑_{i=0}^n α_i(p, a) p_i(x),
which are well defined and unique since {p_i(x) : 0 ≤ i ≤ n} is a basis for the space of polynomials of degree at most n. For the purposes of computing the sensitivity variances, we note that
(x + a)² = 1·(x² − 1) + 2a·x + (a² + 1)·1,
so that
α₀(x², a) = a² + 1,   α₁(x², a) = 2a,   and   α₂(x², a) = 1.
If, for any given ζ ∈ ℝ, we define the quadratic q_ζ(x) by
q_ζ(x) = x² − ζx − 1,
then
q_ζ(x + a) = 1·(x² − 1) + (2a − ζ)·x + a(a − ζ)·1,
and so
α₀(q_ζ(x), a) = a(a − ζ),   α₁(q_ζ(x), a) = 2a − ζ,   and   α₂(q_ζ(x), a) = 1.
Next, squaring  q ζ ( x )  gives
q_ζ²(x + a) = 1·(x⁴ − 3x²) + 2(2a − ζ)·(x³ − 2x) + (6a² − 6ζa + ζ² + 1)·(x² − 1) + [4a³ − 6a²ζ + 2aζ² + 2(2a − ζ)]·x + [a⁴ − 2ζa³ + (ζ² + 4)a² − 4ζa + ζ² + 2]·1,
and so α₄(q_ζ²(x), a) = 1, α₃(q_ζ²(x), a) = 2(2a − ζ), and
α₀(q_ζ²(x), a) = a⁴ − 2ζa³ + (ζ² + 4)a² − 4ζa + ζ² + 2,
α₁(q_ζ²(x), a) = 4a³ − 6a²ζ + 2aζ² + 2(2a − ζ),
α₂(q_ζ²(x), a) = 6a² − 6ζa + ζ² + 1.
Finally, for any  i N , define
χ_i = (i − 1) ∨ 1 = max{i − 1, 1},
and given a ∈ ℝ, l ≤ u, and a polynomial p(x) of degree n, define
I(p, a; l, u) = α₀(p, a)[Φ(u − a) − Φ(l − a)] + ϕ(l − a) ∑_{i=1}^n χ_i α_i(p, a)(l − a)^{i−1} − ϕ(u − a) ∑_{i=1}^n χ_i α_i(p, a)(u − a)^{i−1}.
In (37), we agree to set Φ(−∞) = ϕ(±∞) = 0 and Φ(∞) = 1. Note that the definition of I(p, a; l, u) was motivated by the identity
∫_l^u e^{ax} p(x) ϕ(x) dx = e^{a²/2} I(p, a; l, u).
It will be seen that the variances of deltas and vegas for vanilla and digital options can be obtained analytically and expressed succinctly in terms of  I ( p , a ; l , u ) .
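As an illustration of how the identity (38) is used in the closed-form expressions below, the following C++ sketch evaluates I(p, a; l, u) for the three low-degree cases p(x) = 1, x, x² that enter the delta variances (for these, every χ_i equals 1). The function name I_poly and the integer coding of the polynomial are ours.

#include <cmath>

const double kInvSqrt2Pi = 0.3989422804014327;   // 1 / sqrt(2 pi)

double stdNormPdf(double x) { return kInvSqrt2Pi * std::exp(-0.5 * x * x); }
double stdNormCdf(double x) { return 0.5 * std::erfc(-x / std::sqrt(2.0)); }

// I(p, a; l, u) of (37) for p(x) = 1 (degree 0), p(x) = x (degree 1) and
// p(x) = x^2 (degree 2).  Infinite limits follow the conventions
// Phi(-inf) = phi(+-inf) = 0 and Phi(inf) = 1.
double I_poly(int degree, double a, double l, double u)
{
    auto boundary = [&](double b) {     // phi(b-a) * sum_i chi_i alpha_i(p,a) (b-a)^(i-1)
        if (std::isinf(b) || degree == 0) return 0.0;
        const double y = b - a;
        if (degree == 1) return stdNormPdf(y);            // alpha_1 = 1
        return stdNormPdf(y) * (2.0 * a + y);             // alpha_1 = 2a, alpha_2 = 1
    };
    const double alpha0 = (degree == 0) ? 1.0 : (degree == 1) ? a : a * a + 1.0;
    const double dPhi   = stdNormCdf(u - a) - stdNormCdf(l - a);
    return alpha0 * dPhi + boundary(l) - boundary(u);
}

For instance, a term such as I(1, 2ς; −ξ_−, ∞) in Section 5.1 becomes I_poly(0, 2.0 * varsigma, -xiMinus, INFINITY) under this coding.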

5. Vanilla Calls

For vanilla options, it is possible to derive analytic expressions for the variances of deltas and vegas for the LR, MVD, AMVD, and PW methods. We only provide the details for European calls, since the corresponding calculations for puts are entirely similar. Throughout this section, let K be the strike for a European call with expiry T, and let ς = σ√T.

5.1. Vanilla Call Delta

Recall from (18) that if  Δ 1 = T , then the derivative of the transition density from  t = 0  to  t = T  with respect to  S 0  is given by
φ(x_T; S₀) = ∂ϕ(x_T; S₀)/∂S₀ = [1/(S₀ς)] ξ ϕ(ξ),
where ξ ∼ N(0, 1), and, for notational clarity, we have written x_T = x₁, ξ = ξ₁ and φ(x_T; S₀) = φ₁(x₁; S₀). The likelihood ratio estimator for the call delta is unbiased and computed using the expression
E[Δ_{S₀}^LR(c)] = ∫_{−ξ_−}^∞ e^{−rT} [ξ/(S₀ς)] (S₀ e^{(r − ½σ²)T + ςξ} − K) · ϕ(ξ) dξ = Φ(ξ_+),
where
ξ_± = [ln(S₀/K) + (r ± ½σ²)T]/ς.
The variance of the likelihood ratio call delta is given by
V[Δ_{S₀}^LR(c)] = (e^{ς²}/ς²) I(x², 2ς; −ξ_−, ∞) − [2Ke^{−rT}/(S₀ς²)] I(x², ς; −ξ_−, ∞) + [K²e^{−2rT}/(S₀²ς²)] I(x², 0; −ξ_−, ∞) − Φ²(ξ_+).
The expected value of the MVD delta is unbiased and is computed using the expression
E[Δ_{S₀}^MVD(c)] = ∫_{−ξ_−}^∞ [e^{−rT}/(S₀ς√(2π))] (S₀ e^{(r − ½σ²)T + ςξ} − K) · 1{ξ ≥ 0} √(2π) ξ ϕ(ξ) dξ − ∫_{−ξ_−}^∞ [e^{−rT}/(S₀ς√(2π))] (S₀ e^{(r − ½σ²)T + ςξ} − K) · 1{ξ < 0} √(2π)(−ξ) ϕ(ξ) dξ,
where the terms are computed independently. The coefficient, √(2π), preceding the terms ±ξϕ(ξ), is introduced to ensure that the product is a probability density function. The variance of the MVD delta is given by
V[Δ_{S₀}^MVD(c)] = m_{S₀}^{(2)}(c) − Φ²(ξ_+),
where
m_{S₀}^{(2)}(c) = [e^{ς²}/(√(2π)ς²)] I(x, 2ς; 0∨(−ξ_−), ∞) − [2Ke^{−rT}/(√(2π)S₀ς²)] I(x, ς; 0∨(−ξ_−), ∞) + [K²e^{−2rT}/(√(2π)S₀²ς²)] I(x, 0; 0∨(−ξ_−), ∞) − [e^{ς²}/(√(2π)ς²)] I(x, 2ς; (−ξ_−)∧0, 0) + [2Ke^{−rT}/(√(2π)S₀ς²)] I(x, ς; (−ξ_−)∧0, 0) − [K²e^{−2rT}/(√(2π)S₀²ς²)] I(x, 0; (−ξ_−)∧0, 0).
The expected value of the AMVD delta is unbiased and is computed using the expression
E[Δ_{S₀}^AMVD(c)] = ∫_{−ξ_−}^∞ ε(ξ) [e^{−rT}/(S₀ς)] √(2/π) (S₀ e^{(r − ½σ²)T + ςξ} − K) · √(π/2) |ξ| ϕ(ξ) dξ,
and its variance is given by
V[Δ_{S₀}^AMVD(c)] = 2 m_{S₀}^{(2)}(c) − Φ²(ξ_+).
It should be noted that although V[Δ_{S₀}^MVD(c)] is approximately half the value of V[Δ_{S₀}^AMVD(c)], implementing MVD using a Monte Carlo simulation would require two paths for each sample, while the AMVD method requires only one. Finally, the expected value for the PW delta is given by
E[Δ_{S₀}^PW(c)] = ∫_{−ξ_−}^∞ e^{−½ς² + ςξ} · ϕ(ξ) dξ,
with the associated variance
V[Δ_{S₀}^PW(c)] = e^{ς²} I(1, 2ς; −ξ_−, ∞) − Φ²(ξ_+) = e^{ς²} Φ(ξ_+ + ς) − Φ²(ξ_+).
The ratios of variances for the vanilla call delta are shown in Figure 1 and Figure 2. It can be seen that the AMVD method gives lower variances than the LR method, but higher variances than the MVD method. Somewhat unexpectedly, the AMVD method results in lower variances than the PW method for short-dated deep in-the-money calls.
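The following driver, which assumes the I_poly and stdNormCdf helpers sketched in Section 4.2 are linked in, evaluates the closed-form LR and PW call-delta variances above for one illustrative parameter set of our own choosing and prints their ratio; ratios of this kind are what Figures 1 and 2 display.

#include <cmath>
#include <cstdio>
#include <limits>

double I_poly(int degree, double a, double l, double u);   // from the sketch in Section 4.2
double stdNormCdf(double x);                               // standard normal CDF, as above

int main()
{
    // Illustrative parameters of our own choosing.
    const double S0 = 100.0, K = 110.0, r = 0.05, sigma = 0.2, T = 0.1;
    const double inf = std::numeric_limits<double>::infinity();
    const double vs  = sigma * std::sqrt(T);                                    // varsigma
    const double xiM = (std::log(S0 / K) + (r - 0.5 * sigma * sigma) * T) / vs; // xi_-
    const double xiP = xiM + vs;                                                // xi_+
    const double df  = std::exp(-r * T);
    const double mean2 = stdNormCdf(xiP) * stdNormCdf(xiP);                     // Phi(xi_+)^2

    const double varLR = std::exp(vs * vs) / (vs * vs)         * I_poly(2, 2.0 * vs, -xiM, inf)
                       - 2.0 * K * df / (S0 * vs * vs)         * I_poly(2, vs,       -xiM, inf)
                       + K * K * df * df / (S0 * S0 * vs * vs) * I_poly(2, 0.0,      -xiM, inf)
                       - mean2;
    const double varPW = std::exp(vs * vs) * stdNormCdf(xiP + vs) - mean2;

    std::printf("V[LR delta] / V[PW delta] = %.3f\n", varLR / varPW);
    return 0;
}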

5.2. Vanilla Call Vega

The derivative of the transition density,  ϕ ( x T ; σ ) , with respect to  σ  is given by
φ(x_T; σ) = ∂ϕ(x_T; σ)/∂σ = (1/σ)(ξ² − ςξ − 1) ϕ(ξ),
where ς = σ√T is as defined previously. The likelihood ratio vega for the vanilla call is unbiased and is given by
E[Δ_σ^LR(c)] = ∫_{−ξ_−}^∞ (e^{−rT}/σ) (S₀ e^{(r − ½σ²)T + σ√T ξ} − K)(ξ² − ςξ − 1) · ϕ(ξ) dξ,
so that E[Δ_σ^LR(c)] = S₀ ϕ(ξ_+) √T. The associated variance is
V[Δ_σ^LR(c)] = [e^{ς²}S₀²/σ²] I(q_ς²(x), 2ς; −ξ_−, ∞) − [2e^{−rT}KS₀/σ²] I(q_ς²(x), ς; −ξ_−, ∞) + [e^{−2rT}K²/σ²] I(q_ς²(x), 0; −ξ_−, ∞) − S₀² ϕ²(ξ_+) T.
The MVD vanilla call vega is unbiased and is computed using the decomposition
E[Δ_σ^MVD(c)] = ∫_{−ξ_−}^∞ (e^{−rT}/σ)(S₀ e^{(r − ½σ²)T + σ√T ξ} − K) · ξ² ϕ(ξ) dξ − ∫_{0∨(−ξ_−)}^∞ [e^{−rT}ς/(σ√(2π))](S₀ e^{(r − ½σ²)T + σ√T ξ} − K) · √(2π) ξ ϕ(ξ) dξ + ∫_{(−ξ_−)∧0}^0 [e^{−rT}ς/(σ√(2π))](S₀ e^{(r − ½σ²)T + σ√T ξ} − K) · √(2π)(−ξ) ϕ(ξ) dξ − ∫_{−ξ_−}^∞ (e^{−rT}/σ)(S₀ e^{(r − ½σ²)T + σ√T ξ} − K) · ϕ(ξ) dξ,
where it is assumed that the components are uncoupled. It should be noted that the first and last terms are coupled in Heidergott et al. (2008) and Pflug and Thoma (2016) to reduce the resulting variance. The variance associated with the uncoupled MVD vanilla call vega is
V[Δ_σ^MVD(c)] = [e^{ς²}S₀²/σ²][I(x², 2ς; −ξ_−, ∞) + I(1, 2ς; −ξ_−, ∞)] − [2e^{−rT}KS₀/σ²][I(x², ς; −ξ_−, ∞) + I(1, ς; −ξ_−, ∞)] + [e^{−2rT}K²/σ²][I(x², 0; −ξ_−, ∞) + I(1, 0; −ξ_−, ∞)] + [e^{ς²}S₀²T/√(2π)][I(x, 2ς; 0∨(−ξ_−), ∞) − I(x, 2ς; (−ξ_−)∧0, 0)] − [2e^{−rT}KS₀T/√(2π)][I(x, ς; 0∨(−ξ_−), ∞) − I(x, ς; (−ξ_−)∧0, 0)] + [e^{−2rT}K²T/√(2π)][I(x, 0; 0∨(−ξ_−), ∞) − I(x, 0; (−ξ_−)∧0, 0)] − S₀² ϕ²(ξ_+) T.
The AMVD vanilla call vega is computed using the expression
E[Δ_σ^AMVD(c)] = ∫_{−ξ_−}^∞ ε(q_ς(ξ)) [e^{−rT}η_ς/σ] (S₀ e^{(r − ½σ²)T + ςξ} − K) · (1/η_ς) |q_ς(ξ)| ϕ(ξ) dξ,
where
η_ς = I(q_ς, 0; −∞, ς_−) − I(q_ς, 0; ς_−, ς_+) + I(q_ς, 0; ς_+, ∞),
ς_± = ½ς ± √(1 + ¼ς²),
and the associated variance is
V[Δ_σ^AMVD(c)] = [e^{ς²}S₀²η_ς/σ²][I(q_ς(x), 2ς; ς_+∨(−ξ_−), ∞) − I(q_ς(x), 2ς; (ς_−∨(−ξ_−))∧ς_+, ς_+) + I(q_ς(x), 2ς; (−ξ_−)∧ς_−, ς_−)] − [2e^{−rT}KS₀η_ς/σ²][I(q_ς(x), ς; ς_+∨(−ξ_−), ∞) − I(q_ς(x), ς; (ς_−∨(−ξ_−))∧ς_+, ς_+) + I(q_ς(x), ς; (−ξ_−)∧ς_−, ς_−)] + [e^{−2rT}K²η_ς/σ²][I(q_ς(x), 0; ς_+∨(−ξ_−), ∞) − I(q_ς(x), 0; (ς_−∨(−ξ_−))∧ς_+, ς_+) + I(q_ς(x), 0; (−ξ_−)∧ς_−, ς_−)] − S₀² ϕ²(ξ_+) T.
Finally, the pathwise vega for the vanilla call is given by
E[Δ_σ^PW(c)] = ∫_{−ξ_−}^∞ (−σT + √T ξ) S₀ e^{−½ς² + ςξ} ϕ(ξ) dξ,
with the associated variance
V[Δ_σ^PW(c)] = e^{ς²} S₀² T [I(x², 2ς; −ξ_−, ∞) − 2ς I(x, 2ς; −ξ_−, ∞) + ς² I(1, 2ς; −ξ_−, ∞)] − S₀² ϕ²(ξ_+) T.
The ratios of variances for vanilla call vegas are shown in Figure 3 and Figure 4. As was the case with the deltas, the AMVD method gives lower variances than the LR method, but, in contrast to the deltas, it also produces lower variances than the MVD method. The AMVD method also produces lower variances than the PW method for in-the-money calls.

6. Digital Calls

In this section, we compute the LR, MVD, and AMVD variances for the digital call deltas and vegas. Once again, we only provide details for digital calls; let K be the strike for a digital call with expiry T.

6.1. Digital Call Delta

The likelihood ratio delta for the digital call is computed using the expression
E[Δ_{S₀}^LR(d)] = ∫_{−ξ_−}^∞ e^{−rT} [ξ/(S₀ς)] ϕ(ξ) dξ = [e^{−rT}/(S₀ς)] ϕ(ξ_−),
and the associated variance is
V[Δ_{S₀}^LR(d)] = [e^{−2rT}/(S₀²ς²)][I(x², 0; −ξ_−, ∞) − ϕ²(ξ_−)].
The MVD delta for the digital call is computed using the expression
E[Δ_{S₀}^MVD(d)] = ∫_{0∨(−ξ_−)}^∞ [e^{−rT}/(S₀ς√(2π))] √(2π) ξ ϕ(ξ) dξ − ∫_{(−ξ_−)∧0}^0 [e^{−rT}/(S₀ς√(2π))] √(2π)(−ξ) ϕ(ξ) dξ,
and the variance is given by
V[Δ_{S₀}^MVD(d)] = [e^{−2rT}/(S₀²ς²√(2π))][I(x, 0; 0∨(−ξ_−), ∞) − I(x, 0; (−ξ_−)∧0, 0) − √(2π) ϕ²(ξ_−)].
Finally, the AMVD delta for the digital call is given by
E[Δ_{S₀}^AMVD(d)] = ∫_{−ξ_−}^∞ ε(ξ) [e^{−rT}/(S₀ς)] √(2/π) · √(π/2) |ξ| ϕ(ξ) dξ,
with the associated variance
V[Δ_{S₀}^AMVD(d)] = [e^{−2rT}/(S₀²ς²)] √(2/π) [I(x, 0; 0∨(−ξ_−), ∞) − I(x, 0; (−ξ_−)∧0, 0) − √(π/2) ϕ²(ξ_−)].
We note that the PW method cannot be applied for digital options.
The ratios of variances for digital call delta are shown in Figure 5. The AMVD method gives lower variances than the LR method, particularly for short-dated deep in-the-money digitals. The MVD method gives lower variances than the AMVD method, most noticeably for at-the-money digitals.

6.2. Digital Call Vega

The likelihood ratio vega for the digital call is computed using the expression
E[Δ_σ^LR(d)] = ∫_{−ξ_−}^∞ (e^{−rT}/σ) q_ς(ξ) · ϕ(ξ) dξ = −(e^{−rT} ξ_+/σ) ϕ(ξ_−),
and the associated variance is
V[Δ_σ^LR(d)] = (e^{−2rT}/σ²)[I(q_ς²(x), 0; −ξ_−, ∞) − ξ_+² ϕ²(ξ_−)].
The MVD vega for the digital call is computed using the expression
E[Δ_σ^MVD(d)] = ∫_{−ξ_−}^∞ (e^{−rT}/σ) · ξ² ϕ(ξ) dξ − ∫_{0∨(−ξ_−)}^∞ [e^{−rT}ς/(σ√(2π))] · √(2π) ξ ϕ(ξ) dξ + ∫_{(−ξ_−)∧0}^0 [e^{−rT}ς/(σ√(2π))] · √(2π)(−ξ) ϕ(ξ) dξ − ∫_{−ξ_−}^∞ (e^{−rT}/σ) · ϕ(ξ) dξ,
and the associated variance is given by
V[Δ_σ^MVD(d)] = (e^{−2rT}/σ²)[I(x², 0; −ξ_−, ∞) + I(1, 0; −ξ_−, ∞) − ξ_+² ϕ²(ξ_−)] + [e^{−2rT}ς²/(σ²√(2π))][I(x, 0; 0∨(−ξ_−), ∞) − I(x, 0; (−ξ_−)∧0, 0)].
Finally, the AMVD vega for the digital call is computed using the expression
E[Δ_σ^AMVD(d)] = ∫_{−ξ_−}^∞ ε(q_ς(ξ)) [e^{−rT}η_ς/σ] · (1/η_ς) |q_ς(ξ)| ϕ(ξ) dξ,
with the associated variance
V[Δ_σ^AMVD(d)] = [e^{−2rT}η_ς/σ²][I(q_ς(x), 0; ς_+∨(−ξ_−), ∞) − I(q_ς(x), 0; (ς_−∨(−ξ_−))∧ς_+, ς_+) + I(q_ς(x), 0; (−ξ_−)∧ς_−, ς_−) − (ξ_+²/η_ς) ϕ²(ξ_−)].
The ratios of variances for digital calls are shown in Figure 6, where it can be seen that the AMVD method gives lower variance than both the LR and the MVD methods.

7. Double-Barrier Option

In this section, we consider the delta and vega for a double-barrier option that knocks out if the closing price of the underlying asset lies outside the range ]B_L, B_U[ on at least three days prior to expiry, where B_L < B_U, and pays the constant rebate R = 100 otherwise. We take S₀ = 100, T = 0.25, σ = 0.20, and subdivide the interval [0, 0.25] into 90 equal subintervals. This is the example considered in Chan and Joshi (2015), Subsection 5.3.
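A minimal sketch of the knock-out payoff just described is given below; the function and parameter names are ours, and monitoring is assumed to be applied to the vector of simulated closing prices.

#include <vector>

// Discrete double knock-out payoff: knocked out (zero payoff) if the closing
// price lies outside ]B_L, B_U[ on at least three monitoring dates prior to
// expiry; otherwise the constant rebate is paid.
double doubleBarrierRebatePayoff(const std::vector<double>& closingPrices,
                                 double lowerBarrier, double upperBarrier,
                                 double rebate = 100.0)
{
    int breaches = 0;
    for (double s : closingPrices)
        if (s <= lowerBarrier || s >= upperBarrier) ++breaches;
    return (breaches >= 3) ? 0.0 : rebate;
}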

7.1. Generation of Non-Normal Random Variates

In order to implement the AMVD method, and the MVD method for comparison purposes, we need to generate random variates from the Rayleigh distribution, the double-sided Maxwell–Boltzmann distribution, and the distributions corresponding to the absolute value densities appearing in (23) and (24).

7.1.1. Rayleigh Random Variate

The standard Rayleigh distribution is defined for x ≥ 0 and has the probability density function
ϕ_R(x) = x e^{−½x²},
and the cumulative density function
Φ_R(x) = 1 − e^{−½x²}.
Since the inverse of the cumulative density function is given by
Φ_R^{−1}(x) = √(−2 ln(1 − x))
for x ∈ [0, 1[, a variate from the Rayleigh distribution can be generated using the following algorithm:
R1.
Generate a uniform variate u ∈ [0, 1[.
R2.
Set x = Φ_R^{−1}(u) = √(−2 ln(1 − u)), which is then a Rayleigh random variate.
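A minimal C++ sketch of steps R1–R2 follows; the RNG choice (a 64-bit Mersenne Twister from <random>) is ours, whereas the implementation reported in Section 7.2 uses the boost Mersenne Twister.

#include <cmath>
#include <random>

// Inverse-transform sampling from the standard Rayleigh distribution (steps R1-R2).
double drawRayleigh(std::mt19937_64& rng)
{
    std::uniform_real_distribution<double> unif(0.0, 1.0);     // u in [0, 1)
    const double u = unif(rng);
    return std::sqrt(-2.0 * std::log(1.0 - u));                // Phi_R^{-1}(u)
}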

7.1.2. Double-Sided Maxwell–Boltzmann Random Variate

The standard Maxwell–Boltzmann distribution is defined for x ≥ 0 and has probability density function
ϕ_MB(x) = √(2/π) x² e^{−½x²}.
An algorithm for generating variates from the Maxwell–Boltzmann distribution based on the acceptance–rejection method, and introduced in Heidergott et al. (2008), is as follows:
MB1.
Generate independent uniform variates u₁, u₂ ∈ [0, 1[. Set w = √(−4 ln(u₁)).
MB2.
If  u 2 > 1.16582199 u 1 w , then go back to step MB1.3
MB3.
Generate a uniform variate u₃ ∈ [0, 1[, independent of u₁ and u₂.
MB4.
If u₃ < 0.5, set x = −w, and set x = w otherwise. Then, x is a double-sided Maxwell–Boltzmann random variate.
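Steps MB1–MB4 translate into the following sketch; drawing u₁ as 1 − u to avoid taking the logarithm of zero is our own small safeguard, not part of the algorithm above.

#include <cmath>
#include <random>

// Acceptance-rejection sampling from the double-sided Maxwell-Boltzmann
// distribution (steps MB1-MB4), with the constant 1.16582199 quoted above.
double drawDoubleSidedMaxwell(std::mt19937_64& rng)
{
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    double u1, u2, w;
    do {
        u1 = 1.0 - unif(rng);                    // in (0, 1], avoids log(0)
        u2 = unif(rng);
        w  = std::sqrt(-4.0 * std::log(u1));     // MB1
    } while (u2 > 1.16582199 * u1 * w);          // MB2: reject and retry
    const double u3 = unif(rng);                 // MB3
    return (u3 < 0.5) ? -w : w;                  // MB4: attach a random sign
}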

7.1.3. Absolute Rayleigh Distribution

Consider the random variable with domain ℝ and the probability density function
ϕ_{|R|}(x) = ½ |x| e^{−½x²}.
Note that  ϕ | R |  is obtained by extending  ϕ R  to  R  using symmetry about the vertical axis. Hence, an algorithm for generating a variate from the absolute Rayleigh distribution is as follows:
AR1.
Generate a uniform variate u ∈ ]0, 1[.
AR2.
Then, an absolute Rayleigh random variate, x, is given by
x = √(−2 ln(2(1 − u))) if u ≥ ½, and x = −√(−2 ln(2u)) if u < ½.
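Steps AR1–AR2 translate directly into the following sketch, written in the same style as the samplers above.

#include <cmath>
#include <random>

// Inverse-transform sampling from the absolute Rayleigh density
// (1/2)|x| e^{-x^2/2} (steps AR1-AR2).
double drawAbsoluteRayleigh(std::mt19937_64& rng)
{
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    const double u = unif(rng);
    return (u >= 0.5) ?  std::sqrt(-2.0 * std::log(2.0 * (1.0 - u)))
                      : -std::sqrt(-2.0 * std::log(2.0 * u));
}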

7.1.4. Absolute Quadratic Normal (AQN) Distribution

Assume ς ∈ ℝ₊, and let q_ς, η_ς, and ς_± be as defined in (29), (49), and (50), respectively. Consider the random variable with domain ℝ and the probability density function
ϕ_AQN(x; ς) = (1/η_ς) |q_ς(x)| ϕ(x).
Note that ς_± are the roots of q_ς(x), and q_ς(x) < 0 for x ∈ ]ς_−, ς_+[. If we denote by Φ_AQN the corresponding cumulative density function, then a straightforward calculation gives
Φ_AQN(x; ς) = (1/η_ς)(ς − x)ϕ(x) for x ≤ ς_−;  (1/η_ς)[2(ς − ς_−)ϕ(ς_−) + (x − ς)ϕ(x)] for ς_− < x ≤ ς_+;  and  (1/η_ς)[η_ς − (x − ς)ϕ(x)] for x > ς_+.
Note that, in each of the three regions, Φ_AQN(x; ς) has the form
(1/η_ς)[α + ϵ(x − ς)ϕ(x)],
where α ∈ ℝ and ϵ = ±1. Thus, given a uniform random variate u, to compute the corresponding variate with density ϕ_AQN(x; ς), we need to compute the root of a function of the form
h(x) = (1/η_ς)[α + ϵ(x − ς)ϕ(x)] − u,
with  α  and  ϵ  appropriately chosen according to the value of u. For this purpose, we apply Newton’s method noting that
h′(x) = −(ϵ/η_ς)(x² − ςx − 1)ϕ(x) = −(ϵ/η_ς) q_ς(x) ϕ(x).
Thus, given an initial point  x ( 0 ) , we can apply the recursion
x^{(i+1)} = x^{(i)} − h(x^{(i)})/h′(x^{(i)}),
until the desired degree of precision is achieved.
However, due to the nature of  h ( x ) , Newton’s method can be highly sensitive to the initial point  x ( 0 ) . In fact, a poor initial value can result in the method failing to converge to the correct root. Thus, it is crucial that the initial point is chosen carefully, and, for this, we take inspiration from Heidergott et al. (2008) where the Weibull distribution is used in the acceptance–rejection method to generate the double-sided Maxwell–Boltzmann distribution.
For the two regions, ]−∞, ς_−] and ]ς_+, ∞[, we approximate the AQN density using the Weibull densities
ϕ_{W_±}(x) = 2β_± |x − ς_±| e^{−β_±(x − ς_±)²},
with the cumulative density function
Φ_{W_±}(x) = e^{−β_±(x − ς_±)²}.
The parameters, β_±, are determined by matching the variances of the approximating Weibull distributions to the variances of the AQN variate restricted to the intervals ]−∞, ς_−] and ]ς_+, ∞[, respectively. This gives
β_± = (1/σ_±²)(1 − π/4),
where σ_±² are the variances of the AQN variate restricted to ]−∞, ς_−] and ]ς_+, ∞[. For the details, refer to Appendix A.
For the region [ς_−, ς_+], we approximate the AQN cumulative density on [ς_−, ς_m] and [ς_m, ς_+], where ς_m = ½(ς_− + ς_+), using the quadratics
q_−(x) = c_−(x − ς_−)²,   and   q_+(x) = [Φ_AQN(ς_+; ς) − Φ_AQN(ς_−; ς)] − c_+(x − ς_+)²,
with c_± determined by setting q_−(ς_m) = Φ_AQN(ς_m; ς) − Φ_AQN(ς_−; ς) = q_+(ς_m), which gives
c_− = [Φ_AQN(ς_m; ς) − Φ_AQN(ς_−; ς)]/(ς_m − ς_−)²,
c_+ = [Φ_AQN(ς_+; ς) − Φ_AQN(ς_m; ς)]/(ς_m − ς_+)².
The objective function, h, and the initial point,  x ( 0 ) , for the four regions determined by the uniform random variate  u ] 0 , 1 [  are shown in Table 1.
Using the initial values given in Table 1, Newton’s method converges within four iterations in most cases. It should be noted that each Newton iteration only involves elementary operations and one exponential, and so the method is quite computationally efficient.
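The iteration itself can be written as the following sketch, in which alpha, eps (±1), and the starting point x0 are supplied from the row of Table 1 selected by u; the function name and the stopping tolerance are ours.

#include <cmath>

// Newton iteration for inverting Phi_AQN( . ; varsigma): alpha, eps and the
// starting point x0 are taken from the row of Table 1 selected by u, and
// h'(x) = -(eps/eta) q_varsigma(x) phi(x) as in the text.
double invertAqnCdf(double u, double alpha, double eps, double eta,
                    double varsigma, double x0, int maxIter = 8)
{
    const double kInvSqrt2Pi = 0.3989422804014327;
    double x = x0;
    for (int k = 0; k < maxIter; ++k) {
        const double phi = kInvSqrt2Pi * std::exp(-0.5 * x * x);
        const double h   = (alpha + eps * (x - varsigma) * phi) / eta - u;
        const double dh  = -(eps / eta) * (x * x - varsigma * x - 1.0) * phi;
        if (dh == 0.0) break;                        // q_varsigma vanishes at its roots
        const double step = h / dh;
        x -= step;
        if (std::fabs(step) < 1e-12) break;          // typically within four iterations
    }
    return x;
}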

7.2. Double-Barrier Vega

Vegas for the double knock-out barrier option were computed using a Monte Carlo simulation implemented in C++ with uniform random variates generated by the Mersenne Twister from the boost library.4 For each method, 100 independent valuations were performed, with each valuation using 50,000 paths for the MVD method and 100,000 paths for the LR and AMVD methods. Standard deviations were then computed from these 100 samples. The lower and upper barriers were set to L_i = 75 + 2i and U_j = 105 + 2j for 0 ≤ i, j ≤ 10. The resulting means and standard deviations of vegas for a subset of (L_i, U_j) pairs are shown in Table 2.
It is evident from Figure 7 that MVD vegas have the lowest standard deviations in general for double-barrier options, followed by those of the AMVD, and then the LR method. It is worth noting that the standard deviation in vegas for the AMVD method is lowest, in relative terms, when the upper barrier is close to the lower barrier. In fact, the AMVD method has the lowest standard deviation among the three methods when  ( L , U ) = ( 95 , 105 ) .
This is consistent with an observation made in previous sections that the method giving sensitivities with the smallest variance will not only depend on the derivative being considered but also on the details such as strikes and barrier levels, and the way each method distributes densities relative to the strikes and barriers. The implementer should experiment and choose the method that is most appropriate for each application.

8. Conclusions

In this paper, we introduced an alternative to the measure-valued differentiation (MVD) method for computing Monte Carlo sensitivities, called the absolute measure-valued differentiation (AMVD) method, which decomposes the density derivative as the product of its sign and its absolute value. This method shares many of the properties of the MVD method, but instead of initiating two additional paths in each simulation step of the original path, it initiates only one additional subpath. It was shown that the AMVD method gives vega estimates with lower variance than the MVD method for vanilla and digital calls in the Black–Scholes model. Moreover, the AMVD vega estimates have lower variance than the pathwise differentiation method in certain situations, such as the case of short-dated deep in-the-money vanilla calls. A consideration of the vegas for the more exotic double-barrier options revealed that the relative performances of the likelihood ratio, MVD, and AMVD methods depend, in general, on the positions of the barriers, and the way each method distributes densities relative to the barriers.
As is the case with the MVD method, applying the AMVD method requires the generation of nonstandard random variates. For the examples considered in this paper, we provided a highly efficient implementation of the inverse transform method for this purpose using Newton’s method with a careful selection of the initial point.

Author Contributions

Conceptualization, M.J. and O.K.K.; methodology, M.J. and O.K.K.; software, O.K.K.; formal analysis, M.J., O.K.K. and S.S.; writing—original draft preparation, M.J. and O.K.K.; writing—review and editing, O.K.K. and S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data and software used in this article are available on request from the corresponding author (O.K.).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Initial Values for Newton’s Method

For the initial point in Newton’s method to invert the AQN cumulative density function, we approximate the cumulative density function using a Weibull density with matching variance. For this purpose, note that the mean,  μ + , and the second moment,  m + ( 2 ) , for the AQN distribution over the region  [ x + , [  are given by
μ + = 1 η + ς + x x 2 ς x 1 ϕ ( x ) d x = 1 η + ς + x 3 2 x ς x 2 1 + x ς ϕ ( x ) d x = 1 η + ς + 2 ς ς + + 1 ϕ ( ς + ) ς Φ ( ς + ) ,
where η_+ = (ς_+ − ς)ϕ(ς_+)/η_ς, and
m + ( 2 ) = 1 η + ς + x 2 x 2 ς x 1 ϕ ( x ) d x = 1 η + ς + x 4 3 x 2 ς x 3 2 x + 2 x 2 1 2 ς x + 2 ϕ ( x ) d x = 1 η + ς + 3 ς ς + 2 + 2 ς + 2 ς ϕ ( ς + ) + 2 Φ ( ς + ) ,
The variance is given by  σ + 2 = m + ( 2 ) μ + 2 , and the corresponding mean and second moment for approximating the Weibull distribution are
μ W , + = 2 β + x + x x ς + e β + x ς + 2 d x = 2 β + 0 x + ς + x e β + x 2 d x = 0 e β + x 2 d x ς + e β + x 2 0 = 1 2 π β + + ς + ,
and
m W , + ( 2 ) = 2 β + ς + x 2 x ς + e β + x ς + 2 d x = 2 β + 0 x + ς + 2 x e β + x 2 d x = 2 β + 0 x 3 + 2 ς + x 2 + ς + 2 x e β + x 2 d x = 2 0 1 + β + ς + 2 x + ς + e β + x 2 d x = 2 ς + 0 e β + x 2 d x 1 + β + ς + 2 β e β + x 2 0 = π β + ς + + 1 + β + ς + 2 β .
The variance of the approximating Weibull distribution is given by
σ W , + 2 = π β + ς + + 1 β + + ς + 2 1 2 π β + + ς + 2 = π β + ς + + 1 β + + ς + 2 π 4 β + π β + ς + ς + 2 = 1 β + 1 π 4 ,
and similar calculations for the region  ] , ς ]  gives
μ = 1 η ς x x 2 ς x 1 ϕ ( x ) d x = 1 η ς 2 + ς ς 1 ϕ ( ς ) ς Φ ( ς ) , m + ( 2 ) = 1 η ς x 2 x 2 ς x 1 ϕ ( x ) d x = 1 η ς 3 + ς ς 2 2 ς + 2 ς ϕ ( ς ) + 2 Φ ( ς ) , σ W , 2 = 1 β 1 π 4 ,
where η_− = (ς − ς_−)ϕ(ς_−)/η_ς. Equating the corresponding variances gives
β_± = (1/σ_±²)(1 − π/4).

Appendix B. Vanilla Call Variances

Appendix B.1. Likelihood Ratio Call Delta Variance

The second moment associated with the likelihood ratio delta for the vanilla call is given by
m ( 2 ) Δ S 0 LR = ξ e r T S 0 e r 1 2 σ 2 T + ς ξ K ξ S 0 ς 2 ϕ ( ξ ) d ξ = e 2 r T S 0 2 ς 2 ξ S 0 2 e 2 r σ 2 T + 2 ς ξ 2 K S 0 e r 1 2 σ 2 T + ς ξ + K 2 ξ 2 ϕ ( ξ ) d ξ .
Computing the three terms, we obtain
m S 0 , 1 LR = e ς 2 ς 2 ξ e 2 ς ξ ξ 2 ϕ ( ξ ) d ξ = e ς 2 ς 2 I x 2 , 2 ς ; ξ , , m S 0 , 2 LR = 2 K e r + 1 2 σ 2 T S 0 ς 2 ξ e ς ξ ξ 2 ϕ ( ξ ) d ξ = 2 K e r T S 0 ς 2 I x 2 , ς ; ξ , , m S 0 , 3 LR = K 2 e 2 r T S 0 2 ς 2 ξ ξ 2 ϕ ( ξ ) d ξ = K 2 e 2 r T S 0 2 ς 2 I x 2 , 0 ; ξ , .

Appendix B.2. MVD Call Delta Variance

The second moment for the MVD delta estimate is given by
m ( 2 ) Δ S 0 MVD = 0 ξ e r T S 0 ς 2 π S 0 e r 1 2 σ 2 T + ς ξ K 2 · 2 π ξ ϕ ( ξ ) d ξ + 0 ξ 0 e r T S 0 ς 2 π S 0 e r 1 2 σ 2 T + ς ξ K 2 · 2 π ( ξ ) ϕ ( ξ ) d ξ .
Computing the first term, we obtain
e 2 r T 2 π S 0 2 ς 2 0 ξ S 0 2 e 2 r σ 2 T + 2 ς ξ 2 K S 0 e r 1 2 σ 2 T + ς ξ + K 2 ξ ϕ ( ξ ) d ξ ,
and the three subterms are given by
m S 0 , 1 , 1 MVD = e ς 2 2 π ς 2 0 ξ e 2 ς ξ ξ ϕ ( ξ ) d ξ = e ς 2 2 π ς 2 I x , 2 ς ; 0 ξ , , m S 0 , 1 , 2 MVD = 2 K e r + 1 2 σ 2 T 2 π S 0 ς 2 0 ξ e ς ξ ξ ϕ ( ξ ) d ξ = 2 K e r T 2 π S 0 ς 2 I x , ς ; 0 ξ , , m S 0 , 1 , 3 MVD = K 2 e 2 r T 2 π S 0 2 ς 2 0 ξ ξ ϕ ( ξ ) d ξ = K 2 e 2 r T 2 π S 0 2 ς 2 I x , 0 ; 0 ξ , .
Similarly, computing the second term, we obtain
e 2 r T 2 π S 0 2 ς 2 0 ξ 0 S 0 2 e 2 r σ 2 T + 2 ς ξ 2 K S 0 e r 1 2 σ 2 T + ς ξ + K 2 ( ξ ) ϕ ( ξ ) d ξ ,
and the three subterms are given by
m S 0 , 2 , 1 MVD = e ς 2 2 π ς 2 0 ξ 0 e 2 ς ξ ξ ϕ ( ξ ) d ξ = e ς 2 2 π ς 2 I x , 2 ς ; 0 ξ , 0 , m S 0 , 2 , 2 MVD = 2 K e r + 1 2 σ 2 T 2 π S 0 ς 2 0 ξ 0 e ς ξ ξ ϕ ( ξ ) d ξ = 2 K e r T 2 π S 0 ς 2 I x , ς ; 0 ξ , 0 , m S 0 , 2 , 3 MVD = K 2 e 2 r T 2 π S 0 2 ς 2 0 ξ 0 ξ ϕ ( ξ ) d ξ = K 2 e 2 r T 2 π S 0 2 ς 2 I x , 0 ; 0 ξ , 0 .

Appendix B.3. AMVD Call Delta Variance

The second moment for the AMVD delta estimate is given by
m ( 2 ) Δ S 0 AMVD = 2 ξ e r T S 0 ς 2 π S 0 e r 1 2 σ 2 T + ς ξ K 2 · 2 π | ξ | ϕ ( ξ ) d ξ = 2 m ( 2 ) Δ S 0 MVD .

Appendix B.4. Likelihood Ratio Call Vega Variance

The second moment associated with the likelihood ratio vega for the vanilla call is given by
m ( 2 ) Δ σ LR = e 2 r T σ 2 ξ S 0 e r 1 2 σ 2 T + σ T ξ K 2 q 2 ( ξ ) ϕ ( ξ ) d ξ ,
where  q ( x )  is as defined in (30). Expanding out the first term, we obtain
m ( 2 ) Δ σ LR = e 2 r T σ 2 ξ S 0 2 e 2 r σ 2 T + 2 ς ξ 2 K S 0 e r 1 2 σ 2 T + ς ξ + K 2 q 2 ( ξ ) ϕ ( ξ ) d ξ .
Computing the three terms gives
m σ , 1 LR = e ς 2 S 0 2 σ 2 ξ e 2 ς ξ q 2 ( ξ ) ϕ ( ξ ) d ξ = e ς 2 S 0 2 σ 2 I q 2 , 2 ς ; ξ , , m σ , 2 LR = 2 e r + 1 2 σ 2 T K S 0 σ 2 ξ e ς ξ q 2 ( ξ ) ϕ ( ξ ) d ξ = 2 e r T K S 0 σ 2 I q 2 , ς ; ξ , , m σ , 3 LR = e 2 r T K 2 σ 2 ξ q 2 ( ξ ) ϕ ( ξ ) d ξ = e 2 r T K 2 σ 2 I q 2 , 0 ; ξ , .

Appendix B.5. MVD Call Vega Variance

The second moment associated with the MVD vega is given by
m ( 2 ) Δ σ MVD = e 2 r T σ 2 ξ S 0 e r 1 2 σ 2 T + ς ξ K 2 ξ 2 ϕ ( ξ ) d ξ + e 2 r T T 2 π 0 ξ S 0 e r 1 2 σ 2 T + ς ξ K 2 2 π ξ ϕ ( ξ ) d ξ + e 2 r T T 2 π 0 ξ 0 S 0 e r 1 2 σ 2 T + ς ξ K 2 2 π ( ξ ) ϕ ( ξ ) d ξ + e 2 r T σ 2 ξ S 0 e r 1 2 σ 2 T + ς ξ K 2 ϕ ( ξ ) d ξ ,
The four terms are as follows:
m σ , 1 MVD = e ς 2 S 0 2 σ 2 I x 2 , 2 ς ; ξ , 2 e r T K S 0 σ 2 I x 2 , ς ; ξ , + e 2 r T K 2 σ 2 I x 2 , 0 ; ξ , , m σ , 2 MVD = e ς 2 S 0 2 T 2 π I x , 2 ς ; 0 ξ , 2 e r T K S 0 T 2 π I x , ς ; 0 ξ , + e 2 r T K 2 T 2 π I x , 0 ; 0 ξ , , m σ , 3 MVD = e ς 2 S 0 2 T 2 π I x , 2 ς ; 0 ξ , 0 + 2 e r T K S 0 T 2 π I x , ς ; 0 ξ , 0 e 2 r T K 2 T 2 π I x , 0 ; 0 ξ , 0 , m σ , 4 MVD = e ς 2 S 0 2 σ 2 I 1 , 2 ς ; ξ , 2 e r T K S 0 σ 2 I 1 , ς ; ξ , + e 2 r T K 2 σ 2 I 1 , 0 ; ξ , .

Appendix B.6. AMVD Call Vega Variance

The second moment associated with the AMVD vega is given by
m ( 2 ) Δ σ AMVD = e 2 r T η ς σ 2 ς + ξ S 0 e r 1 2 σ 2 T + ς ξ K 2 q ( x ) ϕ ( x ) d x e 2 r T η ς σ 2 ς ξ ς + ς + S 0 e r 1 2 σ 2 T + ς ξ K 2 q ( x ) ϕ ( x ) d x + e 2 r T η ς σ 2 ς ξ ς S 0 e r 1 2 σ 2 T + ς ξ K 2 q ( x ) ϕ ( x ) d x .
The three terms are as follows:
m σ , 1 AMVD = e ς 2 S 0 2 η ς σ 2 I q 2 ( x ) , 2 ς ; ς + ξ , 2 e r T K S 0 η ς σ 2 I q 2 ( x ) , ς ; ς + ξ , + e r T K 2 η ς σ 2 I q 2 ( x ) , 0 ; ς + ξ , , m σ , 2 AMVD = e ς 2 S 0 2 η ς σ 2 I q 2 ( x ) , 2 ς ; ς ξ ς + , ς + + 2 e r T K S 0 η ς σ 2 I q 2 ( x ) , ς ; ς ξ ς + , ς + e r T K 2 η ς σ 2 I q 2 ( x ) , 0 ; ς ξ , , m σ , 3 AMVD = e ς 2 S 0 2 η ς σ 2 I q 2 ( x ) , 2 ς ; ς ξ , ς 2 e r T K S 0 η ς σ 2 I q 2 ( x ) , ς ; ς ξ , ς + e r T K 2 η ς σ 2 I q 2 ( x ) , 0 ; ς ξ , ς .

Appendix C. Digital Call Variances

Appendix C.1. Likelihood Ratio Digital Call Delta Variance

The second moment associated with the likelihood ratio delta for the digital call is
m ( 2 ) Δ S 0 LR = 1 S 0 2 ς 2 ξ ξ 2 ϕ ( ξ ) d ξ = 1 S 0 2 ς 2 I x 2 , 0 ; ξ , .

Appendix C.2. MVD Digital Call Delta Variance

The second moment associated with the MVD delta for the digital call is
m ( 2 ) Δ S 0 MVD = 1 S 0 2 ς 2 0 ξ ξ ϕ ( ξ ) d ξ + 1 S 0 2 ς 2 0 ξ 0 ( ξ ) ϕ ( ξ ) d ξ = 1 S 0 2 ς 2 I x , 0 ; 0 ξ , I x , 0 ; 0 ξ , 0 .

Appendix C.3. AMVD Digital Call Delta Variance

The second moment associated with the AMVD delta for the digital call is
m ( 2 ) Δ S 0 AMVD = 2 S 0 2 ς 2 ξ | ξ | ϕ ( ξ ) d ξ = 2 m ( 2 ) Δ S 0 MVD .

Appendix C.4. Likelihood Ratio Digital Call Vega Variance

The second moment associated with the likelihood ratio vega for the digital call is
m ( 2 ) Δ σ LR = 1 σ 2 ξ q 2 ( ξ ) ϕ ( ξ ) d ξ = 1 σ 2 I q 2 ( x ) , 0 ; ξ , .

Appendix C.5. MVD Digital Call Vega Variance

The second moment associated with the MVD delta for the digital call is
m ( 2 ) Δ σ MVD = 1 σ 2 ξ ξ 2 ϕ ( ξ ) d ξ 1 σ 2 ξ ϕ ( ξ ) d ξ + 1 σ 2 0 ξ ξ ϕ ( ξ ) d ξ 1 σ 2 0 ξ 0 ( ξ ) ϕ ( ξ ) d ξ = 1 σ 2 I x 2 , 0 ; ξ , I 1 , 0 ; ξ , + 1 σ 2 I x 2 , 0 ; 0 ξ , I 1 , 0 ; 0 ξ , 0 .

Appendix C.6. AMVD Digital Call Vega Variance

The second moment associated with the AMVD vega for the digital call is
m ( 2 ) Δ S 0 AMVD = η S 0 2 ς 2 ς + ξ q ( ξ ) ϕ ( ξ ) d ξ η S 0 2 ς 2 ς ξ ς + ς + q ( ξ ) ϕ ( ξ ) d ξ + η S 0 2 ς 2 ς ξ ς q ( ξ ) ϕ ( ξ ) d ξ = η S 0 2 ς 2 I q ( x ) , 0 ; ς + ξ , I q ( x ) , 0 ; ς ξ ς + , ς + + η S 0 2 ς 2 I q ( x ) , 0 ; ς ξ , ς .

Notes

1
This is also the case for the MVD method.
2
It should be noted, however, that this higher computational burden is partially offset by the MVD sensitivity estimates having lower variance.
3
We note that the inequality involving  u 2  in  Heidergott et al. (2008) is in the opposite direction, which is most likely a typographical error.
4
Refer to www.boost.org. Accessed 3 June 2022.

References

  1. Black, Fischer, and Myron Scholes. 1973. The pricing of options and corporate liabilities. Journal of Political Economy 81: 637–54.
  2. Brace, Alan. 2007. Engineering BGM. Sydney: Chapman and Hall.
  3. Capriotti, Luca, and Mike Giles. 2018. Fast Correlation Greeks by Adjoint Algorithmic Differentiation. arXiv arXiv:1004.1855v1.
  4. Chan, Jiun Hong, and Mark Joshi. 2015. Optimal limit methods for computing sensitivities of discontinuous integrals including triggerable derivative securities. IIE Transactions 47: 978–97.
  5. Glasserman, Paul. 2004. Monte Carlo Methods in Financial Engineering. New York: Springer.
  6. Heidergott, Bernd, Felisa J. Vázquez-Abad, and Warren Volk-Makarewicz. 2008. Sensitivity estimation for Gaussian systems. European Journal of Operational Research 187: 193–207.
  7. Joshi, Mark S., and Dherminder Kainth. 2004. Rapid and accurate development of prices and Greeks for n-th to default credit swaps in the Li model. Quantitative Finance 4: 266–75.
  8. Liu, Guangwu, and L. Jeff Hong. 2011. Kernel estimation of the Greeks for options with discontinuous payoffs. Operations Research 59: 96–108.
  9. Lyuu, Yuh-Dauh, and Huei-Wen Teng. 2011. Unbiased and efficient Greeks of financial options. Finance and Stochastics 15: 141–81.
  10. Pflug, Georg Ch, and Philipp Thoma. 2016. Efficient calculation of the Greeks for exponential Lévy processes: An application of measure valued differentiation. Quantitative Finance 16: 247–57.
  11. Rott, Marius G., and Christian P. Fries. 2005. Fast and Robust Monte Carlo CDO Sensitivities and Their Efficient Object Oriented Implementation. SSRN, May 31.
  12. Xiang, Jiangming, and Xiaoqun Wang. 2022. Quasi-Monte Carlo simulation for American option sensitivities. Journal of Computational and Applied Mathematics 413: 114268.
Figure 1. On the left, the ratio V[Δ_{S₀}^LR(c)]/V[Δ_{S₀}^AMVD(c)]; on the right, the ratio V[Δ_{S₀}^MVD(c)]/V[Δ_{S₀}^AMVD(c)].
Figure 2. On the left, the ratio V[Δ_{S₀}^PW(c)]/V[Δ_{S₀}^AMVD(c)]. On the right, variance ratios at T = 0.1: the solid curve is V[Δ_{S₀}^LR(c)]/V[Δ_{S₀}^AMVD(c)], the dashed curve is V[Δ_{S₀}^MVD(c)]/V[Δ_{S₀}^AMVD(c)], and the dotted curve is V[Δ_{S₀}^PW(c)]/V[Δ_{S₀}^AMVD(c)].
Figure 3. On the left, the ratio V[Δ_σ^LR(c)]/V[Δ_σ^AMVD(c)]; on the right, the ratio V[Δ_σ^PW(c)]/V[Δ_σ^AMVD(c)].
Figure 4. On the left, the ratio V[Δ_σ^PW(c)]/V[Δ_σ^AMVD(c)]. On the right, variance ratios at T = 0.1: the solid curve is V[Δ_σ^LR(c)]/V[Δ_σ^AMVD(c)], the dashed curve is V[Δ_σ^MVD(c)]/V[Δ_σ^AMVD(c)], and the dotted curve is V[Δ_σ^PW(c)]/V[Δ_σ^AMVD(c)].
Figure 5. On the left, the ratio V[Δ_{S₀}^LR(d)]/V[Δ_{S₀}^AMVD(d)]; on the right, the ratio V[Δ_{S₀}^MVD(d)]/V[Δ_{S₀}^AMVD(d)].
Figure 6. On the left, the ratio V[Δ_σ^LR(d)]/V[Δ_σ^AMVD(d)]; on the right, the ratio V[Δ_σ^MVD(d)]/V[Δ_σ^AMVD(d)].
Figure 7. Comparison of double-barrier option vega standard deviations for the LR, MVD, and AMVD methods.
Table 1. Objective functions and the initial values for generating the AQN random variate using Newton’s method, where u_± = Φ_AQN(ς_±; ς).

u                        h(x) + u                                          x^(0)
]0, u_−]                 (1/η_ς)(ς − x)ϕ(x)                                ς_− − √(−ln(u/u_−)/β_−)
]u_−, ½(u_− + u_+)]      (2/η_ς)(ς − ς_−)ϕ(ς_−) + (1/η_ς)(x − ς)ϕ(x)       ς_− + √((u − u_−)/c_−)
]½(u_− + u_+), u_+]      (2/η_ς)(ς − ς_−)ϕ(ς_−) + (1/η_ς)(x − ς)ϕ(x)       ς_+ − √((u_+ − u)/c_+)
]u_+, 1[                 1 − (1/η_ς)(x − ς)ϕ(x)                            ς_+ + √(−ln((1 − u)/(1 − u_+))/β_+)
Table 2. Means and standard deviations of vegas for double-barrier options with varying lower and upper barriers using the LR, MVD, and AMVD methods.

                           Mean                          Standard Deviation
L     U        LR         MVD        AMVD         LR       MVD      AMVD
75    105    −169.34    −167.52    −168.29      15.68     7.69     11.17
75    115    −173.85    −172.96    −171.81      19.51     6.43     13.65
75    125     −60.85     −60.17     −60.39      20.08     3.25     15.7
85    105    −292.55    −291.65    −292.08      12.04     8.6      10.75
85    115    −305.83    −306.81    −307.69      19.13     8.05     14.14
85    125    −193.72    −195.74    −195.11      21.13     6.26     13.96
95    105    −146.66    −147.24    −146.53       5.67     7.77      5.92
95    115    −316.54    −317.12    −318.45      13.47    10.15     10.38
95    125    −220.83    −219.58    −218.19      14.36     7.8      10.75
