Computation of the Survival Probability of Brownian Motion with Drift Subject to an Intermittent Step Barrier

by
Tristan Guillaume
Laboratoire Thema, CYU Cergy Paris Université, 33 Boulevard du Port, F-95011 Cergy, France
AppliedMath 2024, 4(3), 1080-1097; https://doi.org/10.3390/appliedmath4030058
Submission received: 27 June 2024 / Revised: 19 August 2024 / Accepted: 27 August 2024 / Published: 2 September 2024

Abstract

This article provides an exact formula for the survival probability of Brownian motion with drift when the absorbing boundary is defined as an intermittent step barrier, i.e., an alternating sequence of time intervals on which the boundary is piecewise constant and time intervals on which no boundary is defined. Numerical implementation is dealt with by a simple and robust Monte Carlo integration algorithm directly derived from the formula, which compares favorably with conditional Monte Carlo simulation. Exact analytical benchmarks are also provided to assess the accuracy of the numerical implementation.

1. Introduction

The question of the survival probability of a diffusion process in finite time is an active area of research in applied probability, as it serves as a fundamental modeling tool for a large variety of phenomena in many different mathematical sciences such as biology, economics, engineering reliability, epidemiology, finance, genetics, seismology, and sequential statistical analysis [1]. The term “survival” refers to problems in which the variations of some random dynamics have to be kept contained within certain limits or inside a given domain for a particular property to hold. When the first-passage time of the process to the boundary of the domain entails the end of that property, the boundary is said to be absorbing. An absorbing boundary is often called a “barrier”, a metaphor making it clear that the boundary must not be hit. In other words, the crossing of the barrier means the elimination of the necessary conditions for a certain property to remain active or to continue to be alive. The barrier may be located above the current value of the process (upward barrier or ‘up-barrier’), below it (downward barrier or ‘down-barrier’), and both above and below (double barrier, also called two-sided). The barrier may be crossed at any instant in continuous time (in which case it is called ‘continuous’) or only at one or a few predefined instants (in which case it is called ‘discrete’).
There are very few exact, closed-form results in this field. The ones that are known essentially apply to Brownian motion. The seminal and most classical of them is the so-called Bachelier–Lévy formula [2], which provides the first-passage time density of Brownian motion to a linear boundary. This result was extended to a two-sided linear boundary in [3], but only in infinite time. The generalization to a closed time interval was given in [4], whose author was also able to integrate the density. The first-passage time density of Brownian motion to a quadratic barrier was obtained independently in [5,6], while [7] worked out the hitting time of Brownian motion to a square root barrier. However, the numerical evaluation is so involved in both cases that one can hardly speak of a “closed-form” solution.
Most of the extant literature focuses on numerical approximation methods dealing with the following:
- Specific processes, usually diffusions closely related to Brownian motion that have precise applications, e.g., Ornstein–Uhlenbeck processes [8,9];
- Specific boundaries, e.g., a square root one [10,11] or a curved one [12];
- General classes of diffusion processes [13,14];
- General classes of boundaries [7,15,16,17,18,19].
A form of boundary that is frequently encountered in applications is the step barrier, i.e., a barrier defined as a step function of time. For example, a substantial fraction of the so-called “barrier options”, which are the most extensively traded non-vanilla options in the financial markets, rely on step barriers. Exact, closed-form results are known for arithmetic and geometric Brownian motion when the step barrier is piecewise constant [20] and when it is piecewise linear or exponential [21]. Stochastic exponential barriers are also dealt with in Guillaume [22]. However, all these results assume that the step barrier is piecewise continuous, i.e., the only discontinuities considered are jumps at the times when the step barrier takes on a new value. To the best of our knowledge, no exact result has ever been published when the step barrier is intermittent, i.e., alternately active and non-active in time, with sequences of time subintervals left without exposure to the danger of absorption. Yet, this is a variant that can be useful in a number of applications. For example, in finance, holders of long positions in barrier options are subjected to the risk of “knocking-out”, i.e., of the option becoming deactivated upon hitting the barrier at any time before the option’s expiry. A simple and effective way of mitigating this risk is to remove the barrier during specific periods of time, e.g., when there is an expected spike in the volatility of the option’s underlying asset. This is precisely what an intermittent step barrier allows investors to achieve. The property of intermittence does not mean that the barrier is eliminated once and for all. It can be reset at later times, at another value, and in another direction (upward if it was downward before the temporary removal or downward if it was upward). An intermittent step barrier is thus a particularly flexible way of managing risk, allowing option holders to optimize the trade-off between security enhancement and cost reduction. It leaves them entirely free to define exactly when they want to be insured against adverse market fluctuations and what level of protection they wish to fix.
One may object that the intermittent barrier framework can be tackled by the existing formulae applicable to non-intermittent barriers by taking appropriate limits on the barrier levels during the periods devoid of risk of absorption, e.g., decreasing a down-barrier or increasing an up-barrier to such an extent that the probability of hitting it becomes negligible. But such an approach is actually flawed because of the entailed numerical errors, which can be substantial. That is why this article solves the problem of calculating the survival probability of Brownian motion with drift exposed to an intermittent absorbing step barrier with four alternately up and down steps, as illustrated by the following Figure 1:
In addition to the continuous steps $H_1$, $H_2$, $H_3$, and $H_4$ of the barrier, discrete barriers can also be defined at the various $t_i$’s. The obtained formula is a general result that nests a large number of known distributions with non-intermittent steps as particular cases. The formula is a continuous function of its variables; consequently, “small” changes in the parameters can be accommodated without entailing numerical instability. The formula is actually more than a continuous function; it is even differentiable. This property is particularly attractive for option traders, as it allows them to compute precise hedging parameters by mere differentiation of the valuation formula, according to the classical dynamic hedging strategy. Although the analytical calculations involved are non-trivial, the main issue is the dimension of the multiple integrals that solve the problem, which rapidly increases, especially if one wants to extend the given result to a higher number of steps. To solve this problem, a simple, efficient, and robust numerical integration scheme is designed that can handle very high-dimensional integrals, as its computational time grows only linearly with dimension, like a Monte Carlo simulation.
Given the prevalence of the boundary-crossing probability framework in scientific modeling, there is no doubt that potential applications exist in subjects other than mathematical finance. When applying the results of this article to different contexts, researchers should remain aware, though, of the main assumptions underlying our work, especially the modeling of random dynamics by a geometric Brownian motion. This precludes the application of the derived formulae to settings in which randomness needs to be represented by a discontinuous process, e.g., because “large” variations may occur in “small” time intervals. Also, it should be stressed that each step of the barrier is assumed to be constant, although non-constant steps could be handled with only slightly more effort using the results provided in [21] for deterministic non-constant steps or in [22] for steps modeled as geometric Brownian motions.
This article is organized as follows: Section 2 states the main formula and outlines its proof; Section 3 is dedicated to the numerical implementation of the formula in Section 2 and provides additional analytical results in this regard, which are valuable by themselves.

2. Statement and Proof of the Main Formula

Let $\left(S_t\right)_{t \geq 0}$ be a geometric Brownian motion whose instantaneous rate of change, under a given measure $P$, is given by the following:
$$dS_t = v S_t\,dt + \sigma S_t\,dB_t$$
where $v \in \mathbb{R}$, $\sigma \in \mathbb{R}_+$, and $\left(B_t\right)_{t \geq 0}$ is a standard Brownian motion defined on a probability space $\left(\Omega, \mathcal{F}, \left(\mathcal{F}_t\right), P\right)$ with right-continuous natural filtration $\left(\mathcal{F}_t\right)$.
The probability measure $P$ is characterized by $v$ or, equivalently, by
$$\mu = v - \sigma^2/2$$
If we refer to the arithmetic Brownian process $X_t = \ln\left(S_t/S_0\right)$, the differential under $P$ is given by
$$dX_t = \mu\,dt + \sigma\,dB_t$$
Our objective is to find an exact formula for the following cumulative distribution function:
$$p_1\left(\mu\right) \triangleq P\left(A\right)$$
where $A$ is the set constructed by the following intersection of subsets of $\Omega$:
$$A = \left\{\sup_{t_0 \leq t \leq t_1} S_t \leq H_1\right\} \cap \left\{S_{t_1} \leq \min\left(K_1, H_1\right)\right\} \cap \left\{S_{t_2} \geq \max\left(K_2, H_2\right)\right\} \cap \left\{\inf_{t_2 \leq t \leq t_3} S_t \geq H_2\right\} \cap \left\{S_{t_3} \geq \max\left(K_3, H_2\right)\right\} \cap \left\{S_{t_4} \leq \min\left(K_4, H_3\right)\right\} \cap \left\{\sup_{t_4 \leq t \leq t_5} S_t \leq H_3\right\} \cap \left\{S_{t_5} \leq \min\left(K_5, H_3\right)\right\} \cap \left\{S_{t_6} \geq \max\left(K_6, H_4\right)\right\} \cap \left\{\inf_{t_6 \leq t \leq t_7} S_t \geq H_4\right\} \cap \left\{S_{t_7} \geq \max\left(K_7, H_4\right)\right\} \cap \left\{S_{t_8} \geq K_8\right\}$$
Let the function $\Phi_n\left(m_1, \ldots, m_{n-1}, m_n; \theta_{1.2}, \theta_{2.3}, \ldots, \theta_{n-1.n}\right)$ be defined by the following $n$-dimensional integral:
$$\Phi_n\left(m_1, \ldots, m_{n-1}, m_n; \theta_{1.2}, \theta_{2.3}, \ldots, \theta_{n-1.n}\right) = \int_{x_1=-\infty}^{m_1} \cdots \int_{x_{n-1}=-\infty}^{m_{n-1}} \int_{x_n=-\infty}^{m_n} \frac{\exp\left(-\dfrac{x_1^2}{2} - \displaystyle\sum_{i=2}^{n} \dfrac{\left(x_i - \theta_{i-1.i}\,x_{i-1}\right)^2}{2\,\sigma_{i.i-1}^2}\right)}{\left(2\pi\right)^{n/2} \prod_{i=2}^{n} \sigma_{i.i-1}}\, dx_n\, dx_{n-1} \cdots dx_1$$
$$m_1, \ldots, m_{n-1}, m_n \in \mathbb{R}, \quad \theta_{1.2}, \theta_{2.3}, \ldots, \theta_{n-1.n} \in \left(-1, 1\right), \quad \sigma_{i.i-1} = \sqrt{1 - \theta_{i-1.i}^2}$$
Then, the exact value of $p_1\left(\mu\right)$ is given by the following result.
Proposition 1.
p 1 μ
= Φ 8 m 1 , 0 , m 2 + , 0 , m 3 + , 00 , m 4 , 00 , m 5 , 000 , m 6 + , 000 , m 7 0000 , m 8 0000 ; θ + + + +
ε 1 Φ 8 m 1 , 1 , m 2 + , 1 , m 3 + , 01 , m 4 , 10 , m 5 , 100 , m 6 + , 010 , m 7 0100 , m 8 0100 ; θ + + + +
ε 2 Φ 8 m 1 + , 0 , m 2 , 0 , m 3 + , 02 , m 4 , 20 , m 5 , 200 , m 6 + , 020 , m 7 0200 , m 8 0200 ; θ + + +
ε 3 Φ 8 m 1 + , 0 , m 2 , 0 , m 3 , 00 , m 4 + , 00 , m 5 , 300 , m 6 + , 030 , m 7 0300 , m 8 0300 ; θ + + +
ε 4 Φ 8 m 1 + , 0 , m 2 , 0 , m 3 , 00 , m 4 + , 00 , m 5 + , 000 , m 6 , 000 , m 7 0400 , m 8 0400 ; θ + + +
+ ε 21 Φ 8 m 1 + , 1 , m 2 , 1 , m 3 + , 12 , m 4 , 21 , m 5 , 210 , m 6 + , 120 , m 7 1200 , m 8 1200 ; θ + + +
+ ε 31 Φ 8 m 1 + , 1 , m 2 , 1 , m 3 , 01 , m 4 + , 10 , m 5 , 310 , m 6 + , 130 , m 7 1300 , m 8 1300 ; θ + + +
+ ε 32 Φ 8 m 1 , 0 , m 2 + , 0 , m 3 , 02 , m 4 + , 20 , m 5 , 320 , m 6 + , 230 , m 7 2300 , m 8 2300 ; θ + +
ε 41 Φ 8 m 1 + , 1 , m 2 , 1 , m 3 , 01 , m 4 + , 10 , m 5 + , 100 , m 6 , 010 , m 7 1400 , m 8 1400 ; θ + + +
ε 42 Φ 8 m 1 , 0 , m 2 + , 0 , m 3 , 02 , m 4 + , 20 , m 5 + , 200 , m 6 , 020 , m 7 2400 , m 8 2400 ; θ + +
ε 43 Φ 8 m 1 , 0 , m 2 + , 0 , m 3 + , 00 , m 4 , 00 , m 5 + , 300 , m 6 , 030 , m 7 3400 , m 8 3400 ; θ + +
ε 321 Φ 8 m 1 , 1 , m 2 + , 1 , m 3 , 12 , m 4 + , 21 , m 5 , 321 , m 6 + , 231 , m 7 2301 , m 8 2301 ; θ + +
ε 421 Φ 8 m 1 , 1 , m 2 + , 1 , m 3 , 12 , m 4 + , 21 , m 5 + , 210 , m 6 , 120 , m 7 2401 , m 8 2401 ; θ + +
ε 431 Φ 8 m 1 , 1 , m 2 + , 1 , m 3 + , 01 , m 4 , 10 , m 5 + , 310 , m 6 , 130 , m 7 3401 , m 8 3401 ; θ + +
ε 432 Φ 8 m 1 + , 0 , m 2 , 0 , m 3 + , 02 , m 4 , 20 , m 5 + , 320 , m 6 , 230 , m 7 3402 , m 8 3402 ; θ +
ε 4321 Φ 8 m 1 + , 1 , m 2 , 1 , m 3 + , 12 , m 4 , 21 , m 5 + , 231 , m 6 , 231 , m 7 3412 , m 8 3412 ; θ +
where, for the relevant indices $i_1, i_2, i_3, i_4$ and $i, j, k, l$:
$$h_i = \ln\left(\frac{H_i}{S_0}\right), \quad k_i = \ln\left(\frac{K_i}{S_0}\right), \quad \varepsilon_{i_1} = \exp\left(\frac{2\mu h_{i_1}}{\sigma^2}\right), \quad \varepsilon_{i_1 i_2} = \exp\left(\frac{2\mu\left(h_{i_1} - h_{i_2}\right)}{\sigma^2}\right)$$
$$\varepsilon_{i_1 i_2 i_3} = \exp\left(\frac{2\mu\left(h_{i_1} - h_{i_2} + h_{i_3}\right)}{\sigma^2}\right), \quad \varepsilon_{i_1 i_2 i_3 i_4} = \exp\left(\frac{2\mu\left(h_{i_1} - h_{i_2} + h_{i_3} - h_{i_4}\right)}{\sigma^2}\right)$$
$$\theta^{\overbrace{\pm\pm\cdots\pm}^{n\ \text{times}}} = \left(\pm\sqrt{\frac{t_1}{t_2}},\ \pm\sqrt{\frac{t_2}{t_3}},\ \ldots,\ \pm\sqrt{\frac{t_{n-1}}{t_n}}\right)$$
m 1 ± , i = min k 1 , h 1 2 h i ± μ t 1 σ t 1 , m 2 ± , i = max k 2 , h 2 + 2 h i ± μ t 2 σ t 2
m 3 ± , i j = max k 3 , h 2 2 h i h j ± μ t 3 σ t 3 , m 4 ± , i j = min k 4 , h 3 2 h i h j ± μ t 4 σ t 4
m 5 ± , i j k = min k 5 , h 3 2 h i h j + h k ± μ t 5 σ t 5 , m 6 ± , i j k = max k 6 , h 4 2 h i h j h k ± μ t 6 σ t 6
m 7 ± , i j k l = max k 7 , h 4 2 h i h j + h k h l + μ t 7 σ t 7 ,
m 8 ± , i j k l = k 8 2 h i h j + h k h l + μ t 8 σ t 8
Corollary 1.
It suffices to multiply by $-1$ all the $m_1^{\cdot}, m_2^{\cdot}, \ldots, m_8^{\cdot}$ arguments in each $\Phi_8$ function and to swap all max and min operators (i.e., each max becomes min and conversely) in Proposition 1 to obtain an exact formula for the following probability:
$$p_2\left(\mu\right) = P\left(\inf_{t_0 \leq t \leq t_1} S_t \geq H_1,\ S_{t_1} \geq \max\left(K_1, H_1\right),\ S_{t_2} \leq \min\left(K_2, H_2\right),\ \sup_{t_2 \leq t \leq t_3} S_t \leq H_2,\ S_{t_3} \leq \min\left(K_3, H_2\right),\ S_{t_4} \geq \max\left(K_4, H_3\right),\ \inf_{t_4 \leq t \leq t_5} S_t \geq H_3,\ S_{t_5} \geq \max\left(K_5, H_3\right),\ S_{t_6} \leq \min\left(K_6, H_4\right),\ \sup_{t_6 \leq t \leq t_7} S_t \leq H_4,\ S_{t_7} \leq \min\left(K_7, H_4\right),\ S_{t_8} \leq K_8\right)$$
Corollary 2.
Let the riskless interest rate be denoted by $r$ and assumed to be constant. The no-arbitrage value of an option whose payoff is based on the intersection of events in set $A$, with expiry $t_8$ (an option which can be called an intermittent up-and-down knock-out call, following the terminology of mathematical finance), is given by
$$S_0 \times p_1\left(\bar{\mu}\right) - K_8 \times \exp\left(-r t_8\right) \times p_1\left(\tilde{\mu}\right)$$
where
$$\bar{\mu} = r + \sigma^2/2, \quad \tilde{\mu} = r - \sigma^2/2$$
and
$$K_1 = H_1, \quad K_2 = K_3 = H_2, \quad K_4 = K_5 = H_3, \quad K_6 = K_7 = H_4$$
To obtain the no-arbitrage value of an option with a payoff based on (15) with expiry $t_8$ (i.e., an intermittent down-and-up knock-out put), replace $p_1\left(\cdot\right)$ with $p_2\left(\cdot\right)$ in (16).
End of Proposition 1.
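Corollary 2 reduces the pricing problem to two evaluations of $p_1\left(\cdot\right)$ under different drifts. As a small illustration, assuming a routine `p1` that implements Proposition 1 (a hypothetical function, with whatever signature such an implementation exposes), the assembly of the option value could look as follows in Python:

```python
# Sketch only: assembling the Corollary 2 price from a survival-probability
# routine `p1(mu, **data)` assumed to implement Proposition 1 (not given here).
import math

def knock_out_call_value(p1, S0, K8, r, sigma, t8, **barrier_data):
    """Value of the intermittent up-and-down knock-out call of Corollary 2.
    The barrier data must satisfy K1 = H1, K2 = K3 = H2, K4 = K5 = H3, K6 = K7 = H4."""
    mu_bar = r + 0.5 * sigma**2     # drift under the auxiliary measure Q-bar
    mu_tilde = r - 0.5 * sigma**2   # drift under the risk-neutral measure Q
    return (S0 * p1(mu_bar, **barrier_data)
            - K8 * math.exp(-r * t8) * p1(mu_tilde, **barrier_data))
```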
Proof of Proposition 1.
Since the log function is strictly increasing, we have
$$p_1\left(\mu\right) = P\left(\sup_{t_0 \leq t \leq t_1} X_t \leq h_1,\ X_{t_1} \leq \min\left(k_1, h_1\right),\ X_{t_2} \geq \max\left(k_2, h_2\right),\ \inf_{t_2 \leq t \leq t_3} X_t \geq h_2,\ X_{t_3} \geq \max\left(k_3, h_2\right),\ X_{t_4} \leq \min\left(k_4, h_3\right),\ \sup_{t_4 \leq t \leq t_5} X_t \leq h_3,\ X_{t_5} \leq \min\left(k_5, h_3\right),\ X_{t_6} \geq \max\left(k_6, h_4\right),\ \inf_{t_6 \leq t \leq t_7} X_t \geq h_4,\ X_{t_7} \geq \max\left(k_7, h_4\right),\ X_{t_8} \geq k_8\right)$$
where
$$X_t = \ln\left(\frac{S_t}{S_0}\right)$$
and $h_i$, $k_i$ are given by (7). The solution of Equation (1) is given by
$$S_t = S_0 \exp\left(\left(v - \frac{\sigma^2}{2}\right) t + \sigma B_t\right)$$
Therefore, each $X_t$, $t \geq 0$, is an absolutely continuous random variable, normally distributed with expectation $\mu t$ and variance $\sigma^2 t$. Moreover, $\left(X_t\right)_{t \geq 0}$, as a diffusion, is a Markov process. Hence, the probability sought can be expanded as the following integral:
$$p_1\left(\mu\right) = \int_D \varphi_1\left(x_1\right) \varphi_2\left(x_1, x_2\right) \varphi_3\left(x_2, x_3\right) \varphi_2\left(x_3, x_4\right) \varphi_4\left(x_4, x_5\right) \varphi_2\left(x_5, x_6\right) \varphi_3\left(x_6, x_7\right) \varphi_2\left(x_7, x_8\right) dx_8 \cdots dx_1$$
where
$$D = \left(-\infty, \min\left(k_1, h_1\right)\right] \times \left[\max\left(k_2, h_2\right), +\infty\right) \times \left[\max\left(k_3, h_2\right), +\infty\right) \times \left(-\infty, \min\left(k_4, h_3\right)\right] \times \left(-\infty, \min\left(k_5, h_3\right)\right] \times \left[\max\left(k_6, h_4\right), +\infty\right) \times \left[\max\left(k_7, h_4\right), +\infty\right) \times \left[k_8, +\infty\right)$$
$$\varphi_1\left(x_i\right) = P\left(\sup_{0 \leq t \leq t_i} X_t < h_i,\ X_{t_i} \in dx_i\right)$$
$$\varphi_2\left(x_i, x_j\right) = P\left(X_{t_j} \in dx_j \mid X_{t_i} \in dx_i\right)$$
$$\varphi_3\left(x_i, x_j\right) = P\left(\inf_{t_i \leq t \leq t_j} X_t > h_i,\ X_{t_j} \in dx_j \mid X_{t_i} \in dx_i\right)$$
$$\varphi_4\left(x_i, x_j\right) = P\left(\sup_{t_i \leq t \leq t_j} X_t < h_i,\ X_{t_j} \in dx_j \mid X_{t_i} \in dx_i\right)$$
The function $\varphi_1\left(x_i\right)$ is obtained by differentiating the classical formula for the joint cumulative distribution of the maximum of a Brownian motion with drift and its endpoint over a closed time interval (see, e.g., [23]):
$$\varphi_1\left(x_i\right) = \left[\frac{e^{-\frac{1}{2}\left(\frac{x_i - \mu t_i}{\sigma\sqrt{t_i}}\right)^2}}{\sigma\sqrt{2\pi t_i}} - e^{\frac{2\mu}{\sigma^2} h_i}\, \frac{e^{-\frac{1}{2}\left(\frac{x_i - 2 h_i - \mu t_i}{\sigma\sqrt{t_i}}\right)^2}}{\sigma\sqrt{2\pi t_i}}\right] dx_i$$
The function $\varphi_2\left(x_i, x_j\right)$ is easily derived from the bivariate normality of the pair $\left(X_{t_i}, X_{t_j}\right)$:
$$\varphi_2\left(x_i, x_j\right) = \frac{e^{-\frac{1}{2}\left(\frac{x_j - x_i - \mu\left(t_j - t_i\right)}{\sigma\sqrt{t_j - t_i}}\right)^2}}{\sigma\sqrt{2\pi\left(t_j - t_i\right)}}\, dx_i\, dx_j = \frac{e^{-\frac{1}{2\left(1 - t_i/t_j\right)}\left(\frac{x_j - \mu t_j}{\sigma\sqrt{t_j}} - \sqrt{\frac{t_i}{t_j}}\,\frac{x_i - \mu t_i}{\sigma\sqrt{t_i}}\right)^2}}{\sigma\sqrt{2\pi t_j \left(1 - t_i/t_j\right)}}\, dx_i\, dx_j$$
To handle the function $\varphi_4\left(x_i, x_j\right)$, we begin by conditioning the joint law of $\left(\sup_{t_i \leq t \leq t_j} S_t,\ S_{t_j}\right)$ on the filtration at time $t_i > t_0$. Substituting $S_{t_i}$ for $S_0$ and $S_{t_j}$ for $S_{t_i}$ in $P\left(\sup_{0 \leq t \leq t_i} S_t \leq H_i,\ S_{t_i} \leq S_0 e^{x_i} \mid S_0\right)$, we obtain, on $\left[t_i, t_j\right]$:
$$P\left(\sup_{t_i \leq t \leq t_j} S_t \leq H_i,\ S_{t_j} \leq S_0 e^{x_j} \,\Big|\, S_{t_i} = S_0 e^{x_i}\right) = N\left(\frac{\ln\left(\frac{S_0 e^{x_j}}{S_0 e^{x_i}}\right) - \mu\left(t_j - t_i\right)}{\sigma\sqrt{t_j - t_i}}\right) - \left(\frac{H_i}{S_0 e^{x_i}}\right)^{\frac{2\mu}{\sigma^2}} N\left(\frac{\ln\left(\frac{S_0 e^{x_j}}{S_0 e^{x_i}}\right) - 2\ln\left(\frac{H_i}{S_0 e^{x_i}}\right) - \mu\left(t_j - t_i\right)}{\sigma\sqrt{t_j - t_i}}\right)$$
where $N\left(\cdot\right)$ is the univariate standard normal cumulative distribution function.
Equation (30) can be rewritten as follows:
$$P\left(\sup_{t_i \leq t \leq t_j} X_t \leq h_i,\ X_{t_j} \leq x_j \,\Big|\, X_{t_i} \in dx_i\right) = N\left(\frac{x_j - x_i - \mu\left(t_j - t_i\right)}{\sigma\sqrt{t_j - t_i}}\right) - \exp\left(\frac{2\mu}{\sigma^2}\left(h_i - x_i\right)\right) N\left(\frac{x_j + x_i - 2 h_i - \mu\left(t_j - t_i\right)}{\sigma\sqrt{t_j - t_i}}\right)$$
Therefore, by differentiating (31) w.r.t. $x_j$, we obtain
$$\varphi_4\left(x_i, x_j\right) = \left[\frac{e^{-\frac{1}{2}\left(\frac{x_j - x_i - \mu\left(t_j - t_i\right)}{\sigma\sqrt{t_j - t_i}}\right)^2}}{\sigma\sqrt{2\pi\left(t_j - t_i\right)}} - e^{\frac{2\mu}{\sigma^2}\left(h_i - x_i\right)}\, \frac{e^{-\frac{1}{2}\left(\frac{x_j + x_i - 2 h_i - \mu\left(t_j - t_i\right)}{\sigma\sqrt{t_j - t_i}}\right)^2}}{\sigma\sqrt{2\pi\left(t_j - t_i\right)}}\right] dx_i\, dx_j$$
By the symmetry of paths of Brownian motion, we have
$$P\left(\inf_{t_i \leq t \leq t_j} X_t \geq h_i,\ X_{t_j} \geq x_j \,\Big|\, X_{t_i} \in dx_i\right) = N\left(\frac{-x_j + x_i + \mu\left(t_j - t_i\right)}{\sigma\sqrt{t_j - t_i}}\right) - \exp\left(\frac{2\mu}{\sigma^2}\left(h_i - x_i\right)\right) N\left(\frac{-x_j + x_i + 2\left(h_i - x_i\right) + \mu\left(t_j - t_i\right)}{\sigma\sqrt{t_j - t_i}}\right)$$
Hence,
$$\varphi_3\left(x_i, x_j\right) = \varphi_4\left(x_i, x_j\right)$$
The rest of the proof, whose cumbersome details are omitted, then consists of solving the integral in (22) to obtain the linear combination of sixteen $\Phi_8$ functions given in Proposition 1. The origin of the function $\Phi_n$ is that, when $\theta_{i-1.i} = \sqrt{t_{i-1}/t_i}$, $i \in \left\{2, \ldots, n\right\}$, $\Phi_n$ turns out to be the standardized finite-dimensional cumulative distribution function of order $n$ of Brownian motion with drift, as can be verified by easy calculations, i.e., $\forall\, M_1, \ldots, M_n \in \mathbb{R}_+$, we have
$$P\left(S_{t_1} \leq M_1, \ldots, S_{t_{n-1}} \leq M_{n-1}, S_{t_n} \leq M_n\right) = P\left(X_{t_1} \leq m_1, \ldots, X_{t_{n-1}} \leq m_{n-1}, X_{t_n} \leq m_n\right)$$
$$= \Phi_n\left(\frac{m_1 - \mu t_1}{\sigma\sqrt{t_1}}, \ldots, \frac{m_{n-1} - \mu t_{n-1}}{\sigma\sqrt{t_{n-1}}}, \frac{m_n - \mu t_n}{\sigma\sqrt{t_n}};\ \sqrt{\frac{t_1}{t_2}}, \sqrt{\frac{t_2}{t_3}}, \ldots, \sqrt{\frac{t_{n-1}}{t_n}}\right)$$
where
$$m_i = \ln\left(M_i / S_0\right), \quad i \in \left\{1, \ldots, n\right\}$$
Corollary 1 comes from the property of symmetry of Brownian paths.
As for Corollary 2, according to the theory of no-arbitrage pricing in a complete market [24,25], the value of an option with a payoff based on the intersection of events in set $A$, i.e., of an intermittent up-and-down knock-out call, is given by
$$\exp\left(-r t_8\right) E_Q\left[\left(S_{t_8} - K_8\right) \mathbb{1}_A\right]$$
where $Q$ is the unique equivalent martingale measure, under which the instantaneous variations of $S_t$ verify
$$dS_t = r S_t\,dt + \sigma S_t\,dB_t$$
An elementary application of the Cameron–Martin–Girsanov theorem yields
$$E_Q\left[\left(S_{t_8} - K_8\right) \mathbb{1}_A\right] = S_0\, e^{r t_8}\, \bar{Q}\left(A\right) - K_8\, Q\left(A\right)$$
where $\bar{Q}$ is the measure equivalent to $Q$ whose Radon–Nikodym derivative is given by
$$\left.\frac{d\bar{Q}}{dQ}\right|_{\mathcal{F}_t} = \exp\left(\sigma B_t - \frac{\sigma^2}{2} t\right)$$
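The step from the last two displays to formula (16) is standard; a brief expansion of the computation (a sketch spelled out here for readability, notation as above) is the following:

```latex
% Expanded change-of-measure step behind Corollary 2 (sketch; notation as in the text).
\begin{aligned}
e^{-r t_8}\, E_Q\!\left[S_{t_8}\,\mathbb{1}_A\right]
  &= S_0\, E_Q\!\left[\exp\!\left(\sigma B_{t_8} - \tfrac{\sigma^2}{2} t_8\right)\mathbb{1}_A\right]
   = S_0\, \bar{Q}\!\left(A\right), \\
e^{-r t_8}\, E_Q\!\left[K_8\,\mathbb{1}_A\right]
  &= K_8\, e^{-r t_8}\, Q\!\left(A\right).
\end{aligned}
```

Under $Q$, the drift of $X_t = \ln\left(S_t/S_0\right)$ is $r - \sigma^2/2 = \tilde{\mu}$, so $Q\left(A\right) = p_1\left(\tilde{\mu}\right)$; under $\bar{Q}$, by the Girsanov theorem, $B_t - \sigma t$ is a standard Brownian motion, so the drift of $X_t$ becomes $r + \sigma^2/2 = \bar{\mu}$ and $\bar{Q}\left(A\right) = p_1\left(\bar{\mu}\right)$, which yields (16).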

3. Numerical Implementation

The function $\Phi_n$ is a special case of the function $N_n$, i.e., the multivariate standard normal cumulative distribution of order $n$, with the advantage that both analytical and numerical computations become much easier because all but the consecutive pairwise correlation coefficients can be dropped thanks to the Markov property of Brownian motion.
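To make this concrete, the following small sketch (illustrative only, with the observation dates of the numerical example in Section 3 used as inputs; variable names are mine) checks that the full correlation matrix of the standardized observations is entirely generated by the consecutive coefficients $\theta_{i-1.i} = \sqrt{t_{i-1}/t_i}$:

```python
# Sketch: the full correlation matrix of (X_{t_1}, ..., X_{t_n}) is built from the
# consecutive correlations theta_{i-1,i} = sqrt(t_{i-1}/t_i) alone (Markov property).
import numpy as np

t = np.array([0.2, 0.3, 0.5, 0.6, 0.8, 1.0, 1.3, 1.5])   # illustrative dates t_1..t_8
theta = np.sqrt(t[:-1] / t[1:])                            # consecutive correlations

n = len(t)
rho = np.eye(n)
for i in range(n):
    for j in range(i + 1, n):
        rho[i, j] = rho[j, i] = np.prod(theta[i:j])        # = sqrt(t_i / t_j)

# Cross-check against the direct Brownian-motion correlation Corr(X_{t_i}, X_{t_j}).
direct = np.sqrt(np.minimum.outer(t, t) / np.maximum.outer(t, t))
assert np.allclose(rho, direct)
```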
It is possible to express the sought probability $p_1\left(\mu\right)$ as a linear combination of $N_8$ functions by expanding it as the following integral:
$$p_1\left(\mu\right) = \int_D \varphi_5\left(x_1, x_2, \ldots, x_8\right) \varphi_6\left(x_1\right) \varphi_7\left(x_2, x_3\right) \varphi_8\left(x_4, x_5\right) \varphi_7\left(x_6, x_7\right) dx_8 \cdots dx_1$$
where $\varphi_5\left(x_1, x_2, \ldots, x_8\right)$ is the multivariate standard normal density function of order 8,
$$\varphi_6\left(x_1\right) = P\left(\sup_{0 \leq t \leq t_1} X_t < h_1 \,\Big|\, X_{t_1} \in dx_1\right) = 1 - \exp\left(-\frac{2 h_1\left(h_1 - x_1\right)}{\sigma^2 t_1}\right)$$
$$\varphi_7\left(x_i, x_j\right) = P\left(\inf_{t_i \leq t \leq t_j} X_t > h \,\Big|\, X_{t_i} \in dx_i,\ X_{t_j} \in dx_j\right) = 1 - \exp\left(-\frac{2\left(x_i - h\right)\left(x_j - h\right)}{\sigma^2\left(t_j - t_i\right)}\right)$$
$$\varphi_8\left(x_i, x_j\right) = P\left(\sup_{t_i \leq t \leq t_j} X_t < h \,\Big|\, X_{t_i} \in dx_i,\ X_{t_j} \in dx_j\right) = \varphi_7\left(x_i, x_j\right)$$
where $h$ denotes the step of the barrier that is active on $\left[t_i, t_j\right]$ ($h_2$ on $\left[t_2, t_3\right]$, $h_3$ on $\left[t_4, t_5\right]$, $h_4$ on $\left[t_6, t_7\right]$). We have used well-known formulae to expand $\varphi_6\left(x_1\right)$ in (43) and $\varphi_7\left(x_i, x_j\right)$ in (44) (see, e.g., [26]).
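As a quick sanity check of the conditional (bridge) survival probability used in $\varphi_7$ and $\varphi_8$, one can compare the closed form with a brute-force simulation of finely discretized Brownian bridges. All numerical values in the sketch below are illustrative assumptions, and the simulated estimate slightly overstates survival because crossings between grid points are missed:

```python
# Sanity-check sketch (not from the paper): closed-form bridge survival
# probability versus a brute-force simulation of discretized Brownian bridges.
import numpy as np

rng = np.random.default_rng(0)
sigma, h = 0.3, -0.05          # volatility and (log-scale) barrier level
ti, tj = 0.3, 0.5              # window on which the barrier is active
xi, xj = 0.10, 0.05            # pinned endpoints, both above the barrier
steps, paths = 1_000, 20_000

closed_form = 1.0 - np.exp(-2.0 * (xi - h) * (xj - h) / (sigma**2 * (tj - ti)))

# Simulate Brownian bridges running from xi at ti to xj at tj.
dt = (tj - ti) / steps
times = np.linspace(ti, tj, steps + 1)
dW = rng.normal(0.0, sigma * np.sqrt(dt), size=(paths, steps))
W = np.concatenate([np.zeros((paths, 1)), np.cumsum(dW, axis=1)], axis=1)
bridge = xi + W - (times - ti) / (tj - ti) * (W[:, [-1]] - (xj - xi))
mc_estimate = np.mean(np.all(bridge > h, axis=1))

print(closed_form, mc_estimate)   # the two values should be close
```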
However, for each one of the sixteen $N_8$ functions involved, there are 28 pairwise correlation coefficients to deal with, compared with only 4 pairwise correlation coefficients when using $\Phi_8$ functions, and it would be very tedious and time-consuming to analytically solve the integral in (41), not to mention the very cumbersome size of the resulting formula. There is no upside in using $N_8$ functions from a numerical standpoint either, since they are by no means simpler to evaluate than $\Phi_8$ functions. The opposite is true, actually, since any quadrature rule used to compute $\Phi_8$ requires fewer function evaluations than the computation of $N_8$.
Whether one uses $N_8$ or $\Phi_8$ functions, the numerical evaluation of $p_1\left(\mu\right)$ by way of a quadrature is arguably not the proper approach to pursue anyway, owing to the dimension of the integral. Even though it can be shown that the dimension of $\Phi_8$, which is an 8-dimensional integral, can be reduced by a factor of 2 [20], this still leaves us with a 6-dimensional integral, which is not efficient to evaluate by means of a quadrature rule, since the latter involves a multiple sum of order 6. Even if a low-degree quadrature rule, such as a 16-point Gauss–Legendre rule, were sufficient in terms of accuracy, a total of 16,777,216 integrand evaluations would be needed to evaluate only one of the sixteen $\Phi_8$ functions involved in Proposition 1. Since high absolute values of the correlation coefficients $\theta_{i-1.i}$ entail numerical instability, a fixed-degree rule would not be reliable in general anyway, thus necessitating a subregion adaptive algorithm that increases the number of integrand evaluations in the subregions where the rate of change of the integrand accelerates.
The good news is that a closed-form result such as Proposition 1 allows the development of a simple and robust integration scheme. Indeed, the exact value of the function $\Phi_8\left(m_1, m_2, \ldots, m_8; \theta_{1.2}, \theta_{2.3}, \ldots, \theta_{7.8}\right)$ can be approximated by Algorithm 1, where $N\left(\mu, \sigma^2\right)$ denotes the normal distribution with expectation $\mu$ and variance $\sigma^2$:
Algorithm 1: Numerical evaluation of the $\Phi_8$ function
(i) Draw a number $n$ of 8-tuples $x^{(i)} = \left(x_1^{(i)} \sim N\left(0,1\right), \ldots, x_8^{(i)} \sim N\left(0,1\right)\right)$, $i \in \left\{1, \ldots, n\right\}$, of independent standard normal coordinates using, e.g., the Box–Muller algorithm [27].
(ii) Turn these $n$ 8-tuples $x^{(i)}$ into $n$ 8-tuples of correlated standard normal coordinates $\bar{x}^{(i)} = \left(\bar{x}_1^{(i)} \sim N\left(0,1\right), \ldots, \bar{x}_8^{(i)} \sim N\left(0,1\right)\right)$ according to the simple correlation structure that characterizes the $\Phi_n$ function, i.e.,
$$\bar{x}_1^{(i)} = x_1^{(i)}, \quad \bar{x}_2^{(i)} = \theta_{1.2}\, \bar{x}_1^{(i)} + \sigma_{2.1}\, x_2^{(i)}, \quad \ldots, \quad \bar{x}_8^{(i)} = \theta_{7.8}\, \bar{x}_7^{(i)} + \sigma_{8.7}\, x_8^{(i)}$$
(iii) Test the conditions imposed by the function $\Phi_8\left(m_1, m_2, \ldots, m_8; \theta_{1.2}, \theta_{2.3}, \ldots, \theta_{7.8}\right)$ for each 8-tuple $\bar{x}^{(i)}$, i.e., the conditions:
$$\bar{x}_1^{(i)} \leq m_1, \quad \bar{x}_2^{(i)} \leq m_2, \quad \ldots, \quad \bar{x}_8^{(i)} \leq m_8$$
If we denote by $\alpha$ the number of 8-tuples $\bar{x}^{(i)}$ such that all 8 conditions in (47) are met, then the value of the function $\Phi_8\left(m_1, m_2, \ldots, m_8; \theta_{1.2}, \theta_{2.3}, \ldots, \theta_{7.8}\right)$ can be approximated by $\alpha / n$. According to the law of large numbers, the approximation converges to the exact value of the function $\Phi_8$ as $n$ grows.
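A minimal NumPy transcription of Algorithm 1 may serve as a reference; it is a sketch under the cumulative-distribution reading of $\Phi_n$ given above (hence the $\leq$ tests), a library pseudo-random generator stands in for an explicit Box–Muller step, and the function name is mine:

```python
# A minimal NumPy sketch of Algorithm 1 (function name and vectorized style are
# mine; a library generator replaces the explicit Box-Muller step (i)).
import numpy as np

def phi8_mc(m, theta, n=2_000_000, rng=None):
    """Monte Carlo estimate of Phi_8(m_1,...,m_8; theta_12,...,theta_78) as alpha/n."""
    rng = np.random.default_rng() if rng is None else rng
    m, theta = np.asarray(m, float), np.asarray(theta, float)
    x = rng.standard_normal((n, 8))            # step (i): independent N(0,1) draws
    xbar = np.empty_like(x)                    # step (ii): impose the Markov correlations
    xbar[:, 0] = x[:, 0]
    for j in range(1, 8):
        xbar[:, j] = theta[j - 1] * xbar[:, j - 1] + np.sqrt(1.0 - theta[j - 1] ** 2) * x[:, j]
    alpha = np.count_nonzero(np.all(xbar <= m, axis=1))   # step (iii): surviving tuples
    return alpha / n
```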
This integration scheme is essentially a Monte Carlo integration scheme, since it relies on the generation of random numbers. As such, it satisfies the fundamental attractive property that its computational cost grows only linearly with dimension, unlike a quadrature rule, whose cost grows exponentially with dimension, thus leading to the notorious ‘curse of dimensionality’ in numerical integration. In practice, the rate of convergence of Algorithm 1 varies according to the random number generator used. It is well known that convergence can be accelerated by resorting to a quasi-random generator instead of a pseudo-random one, due to the greater uniformity of the distributions of numbers generated by low-discrepancy sequences relative to those generated by linear congruential generators [27].
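For the quasi-random variant, step (i) can be replaced by a scrambled Sobol sequence mapped through the inverse normal CDF. The sketch below relies on SciPy's `scipy.stats.qmc` module, an assumption about the tooling rather than something stated in the paper, and keeps the sample size a power of two, as is customary for Sobol points:

```python
# Quasi-random variant of step (i): scrambled Sobol points mapped to normal
# coordinates (a sketch; requires SciPy >= 1.7 for scipy.stats.qmc).
import numpy as np
from scipy.stats import norm, qmc

def sobol_normals(m_log2=16, dim=8, seed=0):
    """Return 2**m_log2 quasi-random standard normal `dim`-tuples."""
    sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
    u = sampler.random_base2(m=m_log2)    # 2**16 = 65,536 points in (0, 1)^dim
    return norm.ppf(u)                    # inverse-CDF transform to N(0, 1)
```

The resulting tuples then feed steps (ii) and (iii) unchanged.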
One way to assess the quality of a numerical algorithm is to test its ability to match known, exact analytical benchmarks as closely as possible. The idea, which is at the center of the method of control variates in Monte Carlo simulation, for instance, is to find an exact formula for a random quantity that shares similar features with the targeted one but which can be numerically evaluated easily, i.e., by means of functions that can be computed with arbitrary accuracy as fast as required for practical purposes. Each time a numerical value is obtained for the targeted random quantity, i.e., the probability $p_1\left(\mu\right)$ in our case, the gap measured between the exact value of the analytical benchmark and its approximated value provided by the numerical algorithm (using the same random numbers) is an indicator of the reliability of the value obtained for the targeted random quantity. In this respect, the forthcoming Proposition 2 provides an exact formula for the value of the following probability:
$$p_2\left(\mu\right) = P\left(\sup_{t_0 \leq t \leq t_1} S_t \leq H_1,\ S_{t_1} \leq \min\left(K_1, H_1\right),\ S_{t_2} \geq \max\left(K_2, H_2\right),\ \inf_{t_2 \leq t \leq t_3} S_t \geq H_2,\ S_{t_3} \geq \max\left(K_3, H_2\right)\right)$$
The combination of events that characterizes $p_2\left(\mu\right)$ is identical to the combination of events in $p_1\left(\mu\right)$ up until time $t_3$. But the dimension of the integral resulting from the computation of $p_2\left(\mu\right)$ is only 3, compared with 8 for $p_1\left(\mu\right)$, which makes it possible to evaluate $p_2\left(\mu\right)$ numerically with all the required accuracy and efficiency, as will be shown by the forthcoming Proposition 5. In the meantime, let us state Proposition 2.
Proposition 2.
The exact value of $p_2\left(\mu\right)$ as defined by (48) is given by
p ¯ 1 μ = Φ 3 m 1 , 0 , m 2 + , 0 , m 3 + , 00 ; θ + ε 1 Φ 3 m 1 , 1 , m 2 + , 1 , m 3 + , 01 ; θ +
ε 2 Φ 3 m 1 + , 0 , m 2 , 0 , m 3 + , 02 ; θ + ε 21 Φ 3 m 1 + , 1 , m 2 , 1 , m 3 + , 12 ; θ
where
$$\theta^{\pm\pm} = \left(\pm\sqrt{\frac{t_1}{t_2}},\ \pm\sqrt{\frac{t_2}{t_3}}\right)$$
and all other symbols and notations are identically defined as in Proposition 1.
Proof of Proposition 2.
Following the same steps as at the beginning of the proof of Proposition 1, the problem at hand can be expressed as the following integral:
$$p_2\left(\mu\right) = \int_{x_1=-\infty}^{\min\left(k_1, h_1\right)} \int_{x_2=\max\left(k_2, h_2\right)}^{+\infty} \int_{x_3=\max\left(k_3, h_2\right)}^{+\infty} \varphi_1\left(x_1\right) \varphi_2\left(x_1, x_2\right) \varphi_3\left(x_2, x_3\right) dx_3\, dx_2\, dx_1$$
where the functions $\varphi_1$, $\varphi_2$, and $\varphi_3$ are given by (24), (25), and (26), respectively.
Performing the necessary calculations yields Proposition 2.
Next, Proposition 3 provides an analytical benchmark of dimension 4, which is still very easy to evaluate numerically and has more “information” in common with $p_1\left(\mu\right)$ than the benchmark provided by Proposition 2, as it matches the combination of events in $p_1\left(\mu\right)$ up until time $t_4$.
Proposition 3.
$$p_3\left(\mu\right) \triangleq P\left(\sup_{t_0 \leq t \leq t_1} S_t \leq H_1,\ S_{t_1} \leq \min\left(K_1, H_1\right),\ S_{t_2} \geq \max\left(K_2, H_2\right),\ \inf_{t_2 \leq t \leq t_3} S_t \geq H_2,\ S_{t_3} \geq \max\left(K_3, H_2\right),\ S_{t_8} \geq K_8\right)$$
= Φ 4 m 1 , 0 , m 2 + , 0 , m 3 + , 00 , m 8 0000 ; θ + +
ε 1 Φ 4 m 1 , 1 , m 2 + , 1 , m 3 + , 01 , m 8 0100 ; θ + +
ε 2 Φ 4 m 1 + , 0 , m 2 , 0 , m 3 + , 02 , m 8 0200 ; θ +
+ ε 21 Φ 4 m 1 + , 1 , m 2 , 1 , m 3 + , 12 , m 8 1200 ; θ +
where
$$\theta^{\pm\pm\pm} = \left(\pm\sqrt{\frac{t_1}{t_2}},\ \pm\sqrt{\frac{t_2}{t_3}},\ \pm\sqrt{\frac{t_3}{t_8}}\right)$$
and all other symbols and notations are identically defined as in Proposition 1.
Proof of Proposition 3.
$$p_3\left(\mu\right) = \int_{x_1=-\infty}^{\min\left(k_1, h_1\right)} \int_{x_2=\max\left(k_2, h_2\right)}^{+\infty} \int_{x_3=\max\left(k_3, h_2\right)}^{+\infty} \int_{x_8=k_8}^{+\infty} \varphi_1\left(x_1\right) \varphi_2\left(x_1, x_2\right) \varphi_3\left(x_2, x_3\right) \varphi_2\left(x_3, x_8\right) dx_8\, dx_3\, dx_2\, dx_1$$
Performing the necessary calculations yields Proposition 3.
Finally, Proposition 4 provides an analytical benchmark that shares even more information with $p_1\left(\mu\right)$, as it matches the combination of events in $p_1\left(\mu\right)$ up until time $t_5$. Its dimension is 5, but it can be brought down to an effective numerical dimension of 3, as will be shown in Proposition 5.
Proposition 4.
$$P\left(\sup_{t_0 \leq t \leq t_1} S_t \leq H_1,\ S_{t_1} \leq \min\left(K_1, H_1\right),\ S_{t_2} \geq \max\left(K_2, H_2\right),\ \inf_{t_2 \leq t \leq t_3} S_t \geq H_2,\ S_{t_3} \geq \max\left(K_3, H_2\right),\ S_{t_4} \leq \min\left(K_4, H_3\right),\ \sup_{t_4 \leq t \leq t_5} S_t \leq H_3,\ S_{t_5} \leq \min\left(K_5, H_3\right)\right) \triangleq p_4\left(\mu\right)$$
= Φ 5 m 1 , 0 , m 2 + , 0 , m 3 + , 00 , m 4 , 00 , m 5 , 000 ; θ + +
ε 1 Φ 5 m 1 , 1 , m 2 + , 1 , m 3 + , 01 , m 4 , 10 , m 5 , 100 ; θ + +
ε 2 Φ 5 m 1 + , 0 , m 2 , 0 , m 3 + , 02 , m 4 , 20 , m 5 , 200 ; θ +
ε 3 Φ 5 m 1 + , 0 , m 2 , 0 , m 3 , 00 , m 4 + , 00 , m 5 , 300 ; θ +
+ ε 21 Φ 5 m 1 + , 1 , m 2 , 1 , m 3 + , 12 , m 4 , 21 , m 5 , 210 ; θ +
+ ε 31 Φ 5 m 1 + , 1 , m 2 , 1 , m 3 , 01 , m 4 + , 10 , m 5 , 310 ; θ +
+ ε 32 Φ 5 m 1 , 0 , m 2 + , 0 , m 3 , 02 , m 4 + , 20 , m 5 , 320 ; θ
ε 321 Φ 5 m 1 , 1 , m 2 + , 1 , m 3 , 12 , m 4 + , 21 , m 5 , 321 ; θ
where all symbols and notations are identically defined as in Proposition 1.
Proof of Proposition 4.
$$p_4\left(\mu\right) = \int_{x_1=-\infty}^{\min\left(k_1, h_1\right)} \int_{x_2=\max\left(k_2, h_2\right)}^{+\infty} \int_{x_3=\max\left(k_3, h_2\right)}^{+\infty} \int_{x_4=-\infty}^{\min\left(k_4, h_3\right)} \int_{x_5=-\infty}^{\min\left(k_5, h_3\right)} \varphi_1\left(x_1\right) \varphi_2\left(x_1, x_2\right) \varphi_3\left(x_2, x_3\right) \varphi_2\left(x_3, x_4\right) \varphi_4\left(x_4, x_5\right) dx_5\, dx_4\, dx_3\, dx_2\, dx_1$$
Performing the necessary calculations yields Proposition 4.
The analytical benchmarks provided by Propositions 2, 3, and 4 are all the more convenient as their dimension can be reduced by a factor of two, as shown by the following result.
Proposition 5.
(i) The triple integral defining the function $\Phi_3$ can be rewritten as the following one-dimensional integral:
$$\Phi_3\left(m_1, m_2, m_3; \theta_{1.2}, \theta_{2.3}\right) = \int_{x=-\infty}^{m_2} \frac{\exp\left(-x^2/2\right)}{\sqrt{2\pi}} \times N\left(\frac{m_1 - \theta_{1.2}\, x}{\sqrt{1 - \theta_{1.2}^2}}\right) \times N\left(\frac{m_3 - \theta_{2.3}\, x}{\sqrt{1 - \theta_{2.3}^2}}\right) dx$$
(ii) The quadruple integral defining the function $\Phi_4$ can be rewritten as the following two-dimensional integral:
$$\Phi_4\left(m_1, m_2, m_3, m_4; \theta_{1.2}, \theta_{2.3}, \theta_{3.4}\right) = \int_{x_2=-\infty}^{m_2} \int_{x_3=-\infty}^{\frac{m_3 - \theta_{2.3}\, x_2}{\sqrt{1 - \theta_{2.3}^2}}} \frac{1}{2\pi} \exp\left(-\frac{x_2^2 + x_3^2}{2}\right) \times N\left(\frac{m_1 - \theta_{1.2}\, x_2}{\sqrt{1 - \theta_{1.2}^2}}\right) \times N\left(\frac{m_4 - \theta_{3.4}\sqrt{1 - \theta_{2.3}^2}\, x_3 - \theta_{3.4}\, \theta_{2.3}\, x_2}{\sqrt{1 - \theta_{3.4}^2}}\right) dx_3\, dx_2$$
(iii) The quintuple integral defining the function $\Phi_5$ can be rewritten as the following three-dimensional integral:
$$\Phi_5\left(m_1, \ldots, m_5; \theta_{1.2}, \ldots, \theta_{4.5}\right) = \frac{1}{\sqrt{8\pi^3}} \int_{x_2=-\infty}^{m_2} \int_{x_3=-\infty}^{\frac{m_3 - \theta_{2.3}\, x_2}{\sqrt{1 - \theta_{2.3}^2}}} \int_{x_4=-\infty}^{\frac{m_4 - \theta_{3.4}\left(x_3\sqrt{1 - \theta_{2.3}^2} + \theta_{2.3}\, x_2\right)}{\sqrt{1 - \theta_{3.4}^2}}} \exp\left(-\frac{x_2^2 + x_3^2 + x_4^2}{2}\right) \times N\left(\frac{m_1 - \theta_{1.2}\, x_2}{\sqrt{1 - \theta_{1.2}^2}}\right) \times N\left(\frac{m_5 - \theta_{4.5}\left(x_4\sqrt{1 - \theta_{3.4}^2} + \theta_{3.4}\left(x_3\sqrt{1 - \theta_{2.3}^2} + \theta_{2.3}\, x_2\right)\right)}{\sqrt{1 - \theta_{4.5}^2}}\right) dx_4\, dx_3\, dx_2$$
Proposition 5 ensues from simple algebra and straightforward changes of variables in the integrals, and its proof is therefore omitted; the required application of Fubini’s theorem is elementary, since both the exponential of a negative number and the $N\left(\cdot\right)$ function, as a cumulative distribution function, are bounded by zero and 1.
Since the function $N\left(\cdot\right)$ can be evaluated with arbitrary precision for all practical purposes, and the exponential function is of class $C^{\infty}$, the numerical evaluation of the integrals (55), (56), and (57) can be implemented using classical Gauss–Legendre quadratures in dimensions 1, 2, and 3, respectively, thus effectively reducing the dimension by a factor of two in all three cases.
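As an illustration of Proposition 5(i), the following sketch evaluates $\Phi_3$ with a one-dimensional Gauss–Legendre rule; the truncation of the half-infinite range and the function name are my own choices, not prescriptions from the paper:

```python
# Sketch of the one-dimensional quadrature for Phi_3 in Proposition 5(i).
# The half-infinite range (-inf, m2] is truncated at m2 - 10 (harmless at double
# precision for a standard normal weight), then mapped onto Gauss-Legendre nodes.
import numpy as np
from scipy.stats import norm

def phi3_quadrature(m1, m2, m3, theta12, theta23, n_nodes=96, lower_cut=10.0):
    nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
    a, b = m2 - lower_cut, m2                       # truncated integration interval
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)       # map [-1, 1] -> [a, b]
    integrand = (np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)
                 * norm.cdf((m1 - theta12 * x) / np.sqrt(1.0 - theta12**2))
                 * norm.cdf((m3 - theta23 * x) / np.sqrt(1.0 - theta23**2)))
    return 0.5 * (b - a) * np.dot(weights, integrand)
```

With $\theta_{1.2} = \sqrt{t_1/t_2}$ and $\theta_{2.3} = \sqrt{t_2/t_3}$, the same value can also be cross-checked against any trivariate normal CDF routine supplied with the corresponding correlation matrix.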
Table 1 reports a few evaluations of $p_1\left(\mu\right)$ for various levels of the volatility of $S$. The first three rows implement Proposition 1 by means of Algorithm 1 with pseudo-random numbers generated by the robust Mersenne Twister algorithm [28]. The next three rows implement Proposition 1 by means of Algorithm 1 with quasi-random numbers generated by Sobol low-discrepancy sequences, which are considered particularly reliable by experts in this field [29]. The final three rows provide approximations of Proposition 1 by Monte Carlo simulation using the powerful variance reduction technique known as “conditional Monte Carlo” or “Brownian bridge”, which essentially consists of making use of the analytically computed probability of not hitting a barrier between times $t_i$ and $t_j$ conditional on the values of $S_{t_i}$ and $S_{t_j}$ that have been “randomly” drawn, so that the path of $S$ between times $t_i$ and $t_j$ does not have to be simulated [26]. In terms of computational complexity, the CMC (conditional Monte Carlo) method and our proposed algorithm are equivalent. Both approaches rely on random number generation and imply the exact same number of random variates to be generated at each run of the algorithm. However, Algorithm 1 does not require the function evaluations entailed by CMC and thus avoids a great number of calls to exponential and quadratic functions. The total number of runs of the algorithm required to reach a given level of accuracy may also not be identical in both numerical schemes. The comparative computational times of the two methods will be illustrated and discussed shortly. In terms of inputs, the initial value of $S$ is $S_0 = 100$, the drift coefficient $v$ is equal to $2.5\%$, the final time $t_8$ is equal to 1.5, and the step barrier is given by $H_1 = 125$ on $\left[t_0 = 0, t_1 = 0.2\right]$, $H_2 = 75$ on $\left[t_2 = 0.3, t_3 = 0.5\right]$, $H_3 = 135$ on $\left[t_4 = 0.6, t_5 = 0.8\right]$, and $H_4 = 65$ on $\left[t_6 = 1, t_7 = 1.3\right]$. The list of $K_i$ parameters is given by
$$K_1 = 110, \quad K_2 = 85, \quad K_3 = 83, \quad K_4 = 113, \quad K_5 = 116, \quad K_6 = 84, \quad K_7 = 80, \quad K_8 = 95$$
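For convenience, the inputs above can be gathered in a small configuration object (a transcription sketch; the field names are mine, and the observation dates $t_1, \ldots, t_8$ are read off the barrier windows and the final time):

```python
# Inputs of Tables 1-4, gathered for reproduction purposes (field names are mine).
params = {
    "S0": 100.0,                # initial value of S
    "v": 0.025,                 # drift coefficient of S
    "sigmas": [0.20, 0.40, 0.60],
    "t": [0.2, 0.3, 0.5, 0.6, 0.8, 1.0, 1.3, 1.5],    # t_1, ..., t_8 (with t_0 = 0)
    "H": [125.0, 75.0, 135.0, 65.0],    # steps active on [t0,t1], [t2,t3], [t4,t5], [t6,t7]
    "K": [110.0, 85.0, 83.0, 113.0, 116.0, 84.0, 80.0, 95.0],
}
```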
In its first row, Table 2 reports a few evaluations of the first analytical benchmark $p_2\left(\mu\right)$ for various levels of the volatility of $S$ by implementing (55) with a 96-point Gauss–Legendre single quadrature, and then, in its following rows, it reports the absolute values of the differences between the benchmark and each numerical approximation, expressed as a percentage of the benchmark. Table 3 and Table 4 do the same for the second and the third analytical benchmarks, i.e., $p_3\left(\mu\right)$ and $p_4\left(\mu\right)$, by implementing (56) and (57) with 96-point Gauss–Legendre double and triple quadratures, respectively. The inputs are the same as those of Table 1.
It can be observed in Table 1 that there is an increasing convergence between the values obtained by the implementation of Proposition 1 and those obtained by CMC (conditional Monte Carlo) simulation as more and more ‘random’ numbers are drawn, which not only provides an empirical ‘validation’ of the analytical calculations performed to obtain Proposition 1 but also highlights the good stability property of Algorithm 1, which is not subject to oscillations around the exact value; put simply, to obtain more accurate approximations, one only needs to draw more pseudo-random numbers, provided that the pseudo-random number generator is sufficiently uniform and has a sufficiently large period. Interestingly, smaller samples of pseudo-random variates seem necessary to attain a given level of convergence when implementing Proposition 1 by Algorithm 1 than those required by CMC. This finding was borne out by further numerical experiments performed with randomly drawn parameters of the model: on average, the number of pseudo-random variates required was divided by 2.6 when implementing Proposition 1, for three consecutive levels of convergence (1, 2, and 3 decimals of the probability expressed as a percentage). The computational times to evaluate one $\Phi_8$ function when implementing Proposition 1 by Algorithm 1, as measured on a computer equipped with an Intel Core i7 CPU (made in Ireland), are the following:
- 0.11 s for n = 500,000;
- 0.39 s for n = 2,000,000;
- 1.87 s for n = 10,000,000.
They are significantly shorter than those observed when using CMC. This is due to the fact that, in addition to having to generate pseudo-random variates, CMC involves a lot of arithmetical operations and calls to exponential and quadratic functions in order to simulate all the $S_{t_i}$’s and to compute the conditional probabilities required by the method, whereas Algorithm 1 only involves generating pseudo-random variates. However, this does not mean that resorting to Algorithm 1 to evaluate a probability will always be faster than CMC, since the latter does not require drawing a new set of pseudo-random numbers for each $\Phi_n$ function. In practice, the computational time differential will depend on the number of $\Phi_n$ functions in the formula for the evaluated probability, which, in turn, essentially depends on the number of steps in the step barrier. To put it simply, the larger the number of steps in the barrier, the more the computational time differential in favor of Algorithm 1 will tend to shrink as a result of the increased burden entailed by a larger number of $\Phi_n$ functions to evaluate. As regards $p_1\left(\mu\right)$, which involves sixteen eight-dimensional Gaussian integrals, the total computational time measured ranges from 1.6 s when n = 500,000 to 27.3 s when n = 10,000,000 when implementing Proposition 1 by Algorithm 1, whereas a CMC approximation takes 1.9 s when n = 2,000,000, 8.7 s when n = 10,000,000, and 39.3 s when n = 50,000,000, where we recall that the number n refers to the size of the sample of pseudo-random variates for each $\Phi_8$ function evaluation in the case of the implementation of Proposition 1, while it refers to the number of paths simulated in the case of CMC. Since a number of 50,000,000 simulated paths seems necessary to attain the level of convergence attained with n = 10,000,000 when implementing Proposition 1, one can conclude that the latter is not only more accurate but also more efficient than CMC to evaluate $p_1\left(\mu\right)$.
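To make the comparison concrete, here is a compact sketch of the CMC estimator described above. The reading of the event $A$ follows the reconstruction in Section 2; the function name, defaults, and vectorized style are mine, and the routine is meant as an illustration rather than a tuned implementation:

```python
# Sketch of the conditional Monte Carlo ("Brownian bridge") estimator of p_1(mu).
# mu is the drift of X = ln(S/S0); the inequality directions follow Section 2.
import numpy as np

def p1_cmc(mu, sigma, S0, t, H, K, n_paths=1_000_000, seed=0):
    """t = (t1,...,t8); H = (H1,...,H4), active on [t0,t1], [t2,t3], [t4,t5], [t6,t7];
    K = (K1,...,K8)."""
    rng = np.random.default_rng(seed)
    t = np.asarray(t, float)
    h, k = np.log(np.asarray(H, float) / S0), np.log(np.asarray(K, float) / S0)
    dt = np.diff(np.concatenate([[0.0], t]))
    # simulate X = ln(S/S0) at the eight observation dates only
    X = np.cumsum(mu * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, 8)), axis=1)
    x1, x2, x3, x4, x5, x6, x7, x8 = X.T

    def bridge(h_lvl, xa, xb, tau):
        # probability that the bridge joining xa and xb over a window of length tau
        # does not touch h_lvl, given that both endpoints lie on the same side of it
        prod = (xa - h_lvl) * (xb - h_lvl)
        return np.where(prod > 0.0,
                        1.0 - np.exp(-2.0 * np.abs(prod) / (sigma ** 2 * tau)), 0.0)

    # discrete conditions at t_1, ..., t_8
    discrete = ((x1 <= min(k[0], h[0])) & (x2 >= max(k[1], h[1]))
                & (x3 >= max(k[2], h[1])) & (x4 <= min(k[3], h[2]))
                & (x5 <= min(k[4], h[2])) & (x6 >= max(k[5], h[3]))
                & (x7 >= max(k[6], h[3])) & (x8 >= k[7]))
    # continuous-barrier windows, weighted by the bridge survival probabilities
    weight = (bridge(h[0], 0.0, x1, t[0]) * bridge(h[1], x2, x3, t[2] - t[1])
              * bridge(h[2], x4, x5, t[4] - t[3]) * bridge(h[3], x6, x7, t[6] - t[5]))
    return float(np.mean(discrete * weight))
```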
The convergence attained by using quasi-random numbers instead of pseudo-random ones is very satisfactory, considering the small sizes of the samples used. For n = 50,000, the level of accuracy obtained is much higher than when using pseudo-random numbers, whereas a CMC simulation with only 50,000 simulated paths would yield very poor results. If only moderate accuracy is needed, then implementing Proposition 1 with a quasi-random number generator such as the one used here (Sobol) can be a good approach. However, the numerical experiments have not been conducted beyond n = 500,000 for two reasons:
- As n increases, it becomes more and more time-consuming to maintain the same level of uniformity when computing the successive points in the low-discrepancy sequence [29]; for instance, taking n = 10,000,000, as for the pseudo-random variates, would result in quite slow computational times and make the method insufficiently competitive w.r.t. the other methods considered;
- Yet more importantly, it rapidly becomes complicated, and often intractable, to compute the error bound entailed by quasi-random integration, and the pattern of convergence is not as simple as that of pseudo-random integration.
In Table 2, Table 3 and Table 4, one can also observe closer and closer convergence between the values obtained by implementing Propositions 2, 3, and 4 by Algorithm 1 and the values obtained by performing CMC simulation as more and more random numbers are drawn. One can also observe a seemingly faster convergence using Algorithm 1 rather than CMC simulation, similar to what is observed in Table 1. One can also clearly notice that the approximation error of all numerical methods examined, as measured by the scaled absolute value of the difference with the analytical benchmark, increases with volatility. This makes sense intuitively since volatility can be interpreted as sensitivity to randomness and unpredictability, so the higher the volatility is, the more the numerical values obtained are subject to the errors entailed by both the lack of statistical uniformity of the random number generator and the finiteness of the samples used. The error of all numerical methods examined also increases with dimension, as can be expected.
Overall, the reliability of Algorithm 1 is validated since the error is always less than 0.1% for all levels of volatility when n = 10,000,000 . The smaller sample of n = 2,000,000 is not accurate enough since it does not allow the relatively modest error threshold of 0.1% to be achieved for one value of σ in Table 2, two values of σ out of three in Table 3, and all three values of σ in Table 4. As regards CMC simulation, as many as 50,000,000 simulated paths are necessary for the error threshold of 0.1% to be secured, whatever the value of σ , which is quite time-consuming. These observations are a reminder that, despite the moderate dimension tackled here, analytical formulae and semi-analytical schemes need to be carefully implemented as the inaccuracies and inefficiencies entailed by the evaluation of multidimensional integrals are significant, even when the integrand is as smooth as a Gaussian one.

4. Conclusions

This article has provided an exact formula for the survival probability of Brownian motion with drift exposed to an intermittent absorbing step barrier with four alternately up and down steps. Moreover, it has shown how to implement the formula in a simple and efficient manner and given benchmarks to assess the accuracy of the numerical implementation. As more and more steps are added to the barrier, exact formulae become bulkier and bulkier, so conditional Monte Carlo simulation may be preferred for its simplicity. However, the formula provided in this article still remains a valuable tool in at least two ways: either as a powerful variance reduction technique allowing the extension of the Brownian bridge over a longer period of time, thus eliminating the need to draw a vast number of intermediate values of the underlying process, or as an effective control variate displaying a high degree of correlation with the simulated payoff. Finally, the algorithm provided can be applied to other problems involving high-dimensional Gaussian integrals based on the values of a Brownian motion observed at a large number of points in time, e.g., the valuation of discretely monitored barrier or lookback options.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Metzler, R.; Oshanin, G.; Redner, S. First-Passage Phenomena and Their Applications; World Scientific: Singapore, 2014. [Google Scholar]
  2. Lévy, P. Processus Stochastiques et Mouvement Brownien; Gauthier-Villars: Paris, France, 1948. [Google Scholar]
  3. Doob, J.L. Heuristic Approach to the Kolmogorov-Smirnov Theorem. Ann. Math. Statist. 1949, 20, 393–403. [Google Scholar] [CrossRef]
  4. Anderson, T.W. A Modification of the Sequential Probability Ratio Test to Reduce the Sample Size. Ann. Math. Statist. 1960, 31, 165–197. [Google Scholar] [CrossRef]
  5. Salminen, P. On the first hitting time and last exit time for a Brownian motion to/from a moving boundary. Adv. Appl. Probab. 1988, 20, 411–426. [Google Scholar] [CrossRef]
  6. Groeneboom, P. Brownian motion with a parabolic drift and Airy functions. Probab. Theory Relat. Fields 1989, 81, 79–109. [Google Scholar] [CrossRef]
  7. Novikov, A.; Frishling, V.; Kordzakhia, N. Approximations of boundary crossing probabilities for a Brownian motion. J. Appl. Probab. 1999, 36, 1019–1030. [Google Scholar] [CrossRef]
  8. Alili, L.; Patie, P.; Pedersen, J.L. Representation of the first hitting time density of an Ornstein-Uhlenbeck process. Stoch. Models 2005, 21, 967–980. [Google Scholar] [CrossRef]
  9. Lo, C.F.; Hui, C.H. Computing the first passage time density of a time-dependent Ornstein-Uhlenbeck process to a moving boundary. Appl. Math. Lett. 2006, 19, 1399–1405. [Google Scholar] [CrossRef]
  10. Breiman, L. First exit times from a square root boundary. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability; Statistical Laboratory of the University of California: Berkeley, CA, USA; pp. 9–16.
  11. Sato, S. Evaluation of the first-passage time probability to a square root boundary for the Wiener process. J. Appl. Probab. 1977, 14, 850–856. [Google Scholar] [CrossRef]
  12. Daniels, H.E. Approximating the first crossing-time density for a curved boundary. Bernoulli 1996, 2, 133–143. [Google Scholar] [CrossRef]
  13. Wang, L.; Pötzelberger, K. Crossing probabilities for diffusion processes with piecewise continuous boundaries. Methodol. Comput. Appl. Probab. 2007, 9, 21–40. [Google Scholar] [CrossRef]
  14. Downes, A.N.; Borovkov, K. First passage densities and boundary crossing probabilities for diffusion processes. Methodol. Comput. Appl. Probab. 2008, 10, 621–644. [Google Scholar] [CrossRef]
  15. Park, C.; Schuurmann, F.J. Evaluation of barrier-crossing probabilities of Wiener paths. J. Appl. Probab. 1976, 13, 267–275. [Google Scholar] [CrossRef]
  16. Jennen, C.; Lerche, H.R. First exit densities of Brownian motion through one-sided moving boundaries. Z. Wahr. Verw. Geb. 1981, 55, 133–148. [Google Scholar] [CrossRef]
  17. Durbin, J. The first-passage density of a continuous Gaussian process to a general boundary. J. Appl. Probab. 1985, 22, 99–122. [Google Scholar] [CrossRef]
  18. Pötzelberger, K.; Wang, L. Boundary crossing probability for Brownian motion. J. Appl. Probab. 2001, 38, 152–164. [Google Scholar] [CrossRef]
  19. Fu, J.C.; Wu, T.L. Linear and nonlinear boundary crossing probabilities for Brownian motion and related processes. J. Appl. Probab. 2010, 47, 1058–1071. [Google Scholar] [CrossRef]
  20. Guillaume, T. On the computation of the survival probability of Brownian motion with drift in a closed time interval when the absorbing boundary is a step function. J. Probab. Stat. 2015, 2015, 391681. [Google Scholar] [CrossRef]
  21. Guillaume, T. Computation of the survival probability of Brownian motion with drift in a closed time interval when the absorbing boundary is an affine or an exponential function of time. Int. J. Stat. Probab. 2016, 5, 119–138. [Google Scholar] [CrossRef]
  22. Guillaume, T. Closed form valuation of barrier options with stochastic barriers. Ann. Oper. Res. 2022, 313, 1021–1050. [Google Scholar] [CrossRef]
  23. Karatzas, I.; Shreve, S. Brownian Motion and Stochastic Calculus; Springer: New York, NY, USA, 2000. [Google Scholar]
  24. Harrison, J.M.; Kreps, D. Martingales and arbitrage in multiperiod securities markets. J. Econ. Theory 1979, 20, 381–408. [Google Scholar] [CrossRef]
  25. Harrison, J.M.; Pliska, S. Martingales and stochastic integrals in the theory of continuous trading. Stoch. Process. Their Appl. 1981, 11, 312–316. [Google Scholar] [CrossRef]
  26. Wang, L.; Pötzelberger, K. Boundary crossing probability for Brownian motion and general boundaries. J. Appl. Probab. 1997, 34, 54–65. [Google Scholar] [CrossRef]
  27. Dagpunar, J.S. Simulation and Monte Carlo; Wiley: Hoboken, NJ, USA, 2007. [Google Scholar]
  28. Matsumoto, M.; Nishimura, T. Mersenne Twister: A 623-dimensionally equidistributed uniform pseudo-random number generator. ACM Trans. Model. Comput. Simul. 1998, 8, 3–30. [Google Scholar] [CrossRef]
  29. Dick, J.; Kuo, F.Y.; Sloan, I.H. High-dimensional integration: The quasi-Monte Carlo way. Acta Numer. 2013, 22, 133–288. [Google Scholar] [CrossRef]
Figure 1. A path of geometric Brownian motion staying within the confines of an intermittent absorbing step barrier with four alternately up and down steps.
Table 1. Numerical evaluations of $p_1\left(\mu\right)$.

| Method | $\sigma = 20\%$ | $\sigma = 40\%$ | $\sigma = 60\%$ |
|---|---|---|---|
| Pseudo-random integration, n = 500,000 | 0.272828 | 0.057656 | 0.013676 |
| Pseudo-random integration, n = 2,000,000 | 0.273416 | 0.057631 | 0.013305 |
| Pseudo-random integration, n = 10,000,000 | 0.273671 | 0.057206 | 0.013262 |
| Quasi-random integration, n = 50,000 | 0.272964 | 0.056962 | 0.013508 |
| Quasi-random integration, n = 500,000 | 0.273454 | 0.057178 | 0.013290 |
| Monte Carlo simulation, n = 2,000,000 | 0.274963 | 0.057451 | 0.013125 |
| Monte Carlo simulation, n = 10,000,000 | 0.273532 | 0.057186 | 0.013288 |
| Monte Carlo simulation, n = 50,000,000 | 0.273655 | 0.057234 | 0.013271 |
Table 2. Numerical evaluations of $p_2\left(\mu\right)$; rows below the analytical benchmark report the absolute difference from the benchmark as a percentage of the benchmark.

| Method | $\sigma = 20\%$ | $\sigma = 40\%$ | $\sigma = 60\%$ |
|---|---|---|---|
| Analytical benchmark | 0.735288762 | 0.34727794 | 0.166710729 |
| Pseudo-random integration, n = 2,000,000 | 0.0082813% | 0.092647% | 0.27459% |
| Pseudo-random integration, n = 10,000,000 | 0.002324% | 0.049262% | 0.072539% |
| Quasi-random integration, n = 500,000 | 0.02715% | 0.16273% | 0.23898% |
| Monte Carlo simulation, n = 10,000,000 | 0.019556% | 0.4281% | 0.7927% |
| Monte Carlo simulation, n = 50,000,000 | 0.00844% | 0.056413% | 0.07194% |
Table 3. Numerical evaluations of $p_3\left(\mu\right)$; rows below the analytical benchmark report the absolute difference from the benchmark as a percentage of the benchmark.

| Method | $\sigma = 20\%$ | $\sigma = 40\%$ | $\sigma = 60\%$ |
|---|---|---|---|
| Analytical benchmark | 0.450640346 | 0.194964517 | 0.094961373 |
| Pseudo-random integration, n = 2,000,000 | 0.084825% | 0.166638% | 0.32867% |
| Pseudo-random integration, n = 10,000,000 | 0.009034% | 0.05896% | 0.0886341% |
| Quasi-random integration, n = 500,000 | 0.062305% | 0.15349% | 0.24208% |
| Monte Carlo simulation, n = 10,000,000 | 0.12761% | 0.51563% | 0.92624% |
| Monte Carlo simulation, n = 50,000,000 | 0.04619% | 0.05533% | 0.075462% |
Table 4. Numerical evaluations of $p_4\left(\mu\right)$; rows below the analytical benchmark report the absolute difference from the benchmark as a percentage of the benchmark.

| Method | $\sigma = 20\%$ | $\sigma = 40\%$ | $\sigma = 60\%$ |
|---|---|---|---|
| Analytical benchmark | 0.550668827 | 0.183691217 | 0.063942516 |
| Pseudo-random integration, n = 2,000,000 | 0.13414% | 0.35617% | 0.49621% |
| Pseudo-random integration, n = 10,000,000 | 0.03451% | 0.07265% | 0.09404% |
| Quasi-random integration, n = 500,000 | 0.14668% | 0.28937% | 0.42358% |
| Monte Carlo simulation, n = 10,000,000 | 0.18606% | 0.61485% | 1.14362% |
| Monte Carlo simulation, n = 50,000,000 | 0.05409% | 0.07564% | 0.09593% |
