Article

Randomly Shifted Lattice Rules with Importance Sampling and Applications

1
École Nationale de la Statistique et de L’administration Économique Paris, 91120 Paris, France
2
Department of Mathematical Sciences, Tsinghua University, Beijing 100084, China
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2024, 12(5), 630; https://doi.org/10.3390/math12050630
Submission received: 25 December 2023 / Revised: 27 January 2024 / Accepted: 19 February 2024 / Published: 21 February 2024

Abstract

In financial and statistical computations, calculating expectations often requires evaluating integrals with respect to a Gaussian measure. Monte Carlo methods are widely used for this purpose due to their dimension-independent convergence rate. Quasi-Monte Carlo is the deterministic analogue of Monte Carlo and has the potential to substantially enhance the convergence rate. Importance sampling is a widely used variance reduction technique. However, research into the specific impact of importance sampling on the integrand, as well as the conditions for convergence, is relatively scarce. In this study, we combine the randomly shifted lattice rule with importance sampling. We prove that, for unbounded functions, randomly shifted lattice rules combined with a suitably chosen importance density can achieve a convergence rate of O(N^{-1+\epsilon}) with N samples for arbitrarily small \epsilon > 0 under certain conditions. We also prove that the convergence conditions for Laplace importance sampling are stricter than those for optimal drift importance sampling. Furthermore, using a generalized linear mixed model and the Randleman–Bartter model, we provide conditions under which functions utilizing Laplace importance sampling achieve convergence rates of nearly O(N^{-1+\epsilon}) for arbitrarily small \epsilon.
MSC:
65C20; 65C05; 91G60

1. Introduction

In finance and statistics, many computational challenges involve calculating expectations, frequently represented as integrals over Gaussian measures. Typical examples include the valuation of assets driven by Brownian motion in security pricing. Monte Carlo (MC) methods, characterized by an O(N^{-1/2}) convergence rate for square-integrable integrands with sample size N, are widely used in expectation estimation [1]. Despite their dimension-independent rate, they may fall short in practical applications. To enhance the order of convergence and reduce variance, we combine quasi-Monte Carlo (QMC) methods with importance sampling (IS). QMC methods are deterministic counterparts of MC that use low-discrepancy sequences [2]. In the last 30 years, the use of these methods in scientific computation has grown significantly, particularly in finance and statistics. Under certain conditions, the error bound of QMC quadrature can reach O(N^{-1+\epsilon}) for a sample size N and any positive \epsilon, where \epsilon is associated with the dimension of the integrand [2]. In practical scenarios, randomized quasi-Monte Carlo (RQMC) methods are commonly used. They preserve the convergence characteristics of QMC while making computational efficiency analysis more accessible, including tasks such as error estimation and confidence interval construction. RQMC quadratures come in various forms, including scrambled digital nets and randomly shifted lattice rules, as explored in [3,4]. For a comprehensive treatment of both QMC and RQMC, readers are referred to [5,6]. Recently, researchers [7] successfully combined scrambled digital nets with IS, demonstrating that a root mean square error of approximately O(N^{-1+\epsilon}) can be achieved, where the achievable rate depends on the boundary growth conditions of the integrand and the choice of the IS density. This study focuses on the effects of combining randomly shifted lattice rules with IS.
The use of IS as an effective technique for reducing variance has been well-established in the MC literature, with its application in security pricing having been explored in earlier studies, such as [8]. Over the years, further research has highlighted the effectiveness of IS in dealing with rare events, as detailed in sources like [9]. This has revealed that the usefulness of IS surpasses traditional methods of variance reduction. The success of IS heavily depends on choosing the appropriate importance density. For a detailed discussion on IS, refer to [10]. When selecting an importance density, two types of importance densities stand out: optimal drift importance sampling (ODIS) and Laplace importance sampling (LapIS). ODIS utilizes a multivariate normal density that shares the covariance matrix with the prior density, while LapIS uses a general multivariate normal density, aligning its mean and covariance matrices with the mode and curvature of the integrand [11]. Combining IS into QMC quadrature makes deriving a theoretical convergence rate more complex than with MC methods [7]. The effectiveness of MC is generally unaffected by the dimension and smoothness of the problem, except for the requirement of square-integrability, and these factors significantly influence the performance of QMC. Kuo and Dunsmuir et al. [12] have shown that QMC outperforms MC in the context of log-likelihood integrals. Furthermore, Dick et al. [13] developed a weighted discrepancy bound specifically for QMC when combined with IS, providing clear error bounds for integrands that are sufficiently regular. The study in [7] highlighted potential boundary singularities in the unit hypercube due to the IS density. These works prompted us to investigate the error bounds associated with the combination of IS and randomly shifted lattice rules. It is important to note that the core theoretical basis of our study is based on the reproducing kernel Hilbert space (RKHS) framework.
The contributions of this study are as follows. Our research focuses on the combination of the shifted rank-1 lattice rule with IS, a combination that has been shown to effectively reduce variance. We analyze the convergence error of integral estimation for unbounded functions using importance sampling combined with the shifted lattice rule. Moreover, we conduct a preliminary analysis of how different importance densities affect the transformed integrand, which is crucial for determining the effectiveness of different IS strategies in diverse scenarios. In the realms of finance and statistics, we identify specific conditions under which functions utilizing IS exhibit enhanced convergence rates. Although our study does not investigate the optimization of weights, our numerical results indicate that the observed effects are relatively robust with respect to different weights.
The structure of this paper is as follows. Section 2 offers an overview of IS and the randomly shifted lattice rule. Section 3 primarily focuses on the convergence error incurred when estimating the integrals of unbounded functions within RKHS by using importance sampling combined with shifted lattice rules. Section 4 presents case studies illustrating the effects of ODIS and LapIS, and Section 5 concludes the paper.

2. Preliminary

2.1. Monte Carlo and Quasi-Monte Carlo Methods

Consider the computation of a d-dimensional integral over the unit hypercube:
I(g) = \int_{(0,1)^d} g(z) \, dz.
For numerical approximation, we employ
\hat{I}_N(g) = \frac{1}{N} \sum_{i=1}^{N} g(z_i).
In the MC approach, the points z_i are randomly selected within the unit hypercube. If the variance of g(z) is finite, i.e.,
\sigma^2 := \int_{(0,1)^d} (g(z) - I(g))^2 \, dz < \infty,
then, by the central limit theorem, the root mean square error (RMSE) of the MC method is
\sqrt{ E[ (\hat{I}_N(g) - I(g))^2 ] } = \frac{\sigma}{\sqrt{N}}.
This suggests an RMSE rate of O(N^{-1/2}) for square-integrable integrands.
To improve the convergence, we use QMC methods. The QMC estimator is similar to that in MC methods, but it utilizes a deterministic set of low-discrepancy points P = { z 1 , , z N } , in contrast to the independently and identically distributed points used in MC. We employ lattice rules as the low-discrepancy point sets.
The typical error bound for QMC methods follows the Koksma–Hlawka inequality:
| \hat{I}_N(g) - I(g) | \le D^*(P) \, V_{HK}(g),
where D^*(P) denotes the star-discrepancy of the point set P and V_{HK}(\cdot) denotes the variation in the sense of Hardy and Krause over the unit hypercube. Low-discrepancy sequences achieve a star-discrepancy of O(N^{-1} (\log N)^d); assuming bounded variation, the QMC estimator therefore enjoys a deterministic error bound of O(N^{-1} (\log N)^d), which is superior to MC for a fixed dimension d.
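To illustrate the contrast between the MC and QMC estimators above, the following Python sketch compares the RMSE of plain MC with that of a randomly shifted rank-1 lattice rule on a smooth product integrand with known integral one. The generating vector is a Korobov-type choice taken for illustration; a CBC-constructed vector, as used later in the paper, would normally be preferred.

```python
import numpy as np

def shifted_lattice(N, gen, shift):
    """Randomly shifted rank-1 lattice: u_i = frac(i * gen / N + shift)."""
    i = np.arange(1, N + 1)[:, None]
    return np.modf(i * gen[None, :] / N + shift[None, :])[0]

def rmse(est, truth):
    return np.sqrt(np.mean((np.asarray(est) - truth) ** 2))

rng = np.random.default_rng(0)
d, N, reps = 4, 1021, 20                   # N prime, so phi(N) = N - 1
gen = np.array([1, 76, 671, 967])          # Korobov vector (1, a, a^2, a^3) mod N, a = 76
g = lambda u: np.prod(0.5 + u, axis=1)     # exact integral over (0,1)^d is 1

mc_rmse = rmse([g(rng.random((N, d))).mean() for _ in range(reps)], 1.0)
qmc_rmse = rmse([g(shifted_lattice(N, gen, rng.random(d))).mean() for _ in range(reps)], 1.0)
print(mc_rmse, qmc_rmse)
```

On this smooth integrand, the shifted lattice estimate is typically far more accurate than MC at the same sample size, consistent with the near-O(N^{-1}) rate discussed above.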

2.2. Importance Sampling

In this study, we examine integrals of the form
C = \int_{\mathbb{R}^d} G(z) \, p(z; \mu_0, \Sigma_0) \, dz,
where p(z; \mu, \Sigma) denotes the probability density function of a d-dimensional normal distribution with mean vector \mu and covariance matrix \Sigma, that is,
p(z; \mu, \Sigma) = \frac{1}{\sqrt{(2\pi)^d |\Sigma|}} \exp\left( -\frac{1}{2} (z - \mu)^T \Sigma^{-1} (z - \mu) \right).
Employing IS involves using a proposal importance density q to reformulate the integral as
C = \int_{\mathbb{R}^d} G(z) \frac{p(z; \mu_0, \Sigma_0)}{q(z)} q(z) \, dz = \int_{\mathbb{R}^d} G(z) W(z) \, q(z) \, dz = \int_{\mathbb{R}^d} G_{IS}(z) \, q(z) \, dz,
where
W(z) = \frac{p(z; \mu_0, \Sigma_0)}{q(z)}
is the likelihood ratio (LR) function and G_{IS}(z) = G(z) W(z) denotes the transformed integrand.
Choosing an IS density among normal densities, i.e., q(z) = p(z; \mu, \Sigma), leads to
W(z) = \sqrt{\frac{\det(\Sigma)}{\det(\Sigma_0)}} \exp\left( \frac{1}{2} (z - \mu)^T \Sigma^{-1} (z - \mu) - \frac{1}{2} (z - \mu_0)^T \Sigma_0^{-1} (z - \mu_0) \right).
Using the affine transformation property of the normal distribution, we can express the integral C as
C = \int_{\mathbb{R}^d} G_{IS}(\mu + L z) \, p(z; 0, I_d) \, dz = \int_{(0,1)^d} G_{IS}(\mu + L \Phi^{-1}(u)) \, du,
where L is any square root of the covariance matrix \Sigma (satisfying L L^T = \Sigma). Here, \Phi(\cdot) denotes the cumulative distribution function (CDF) of the standard normal distribution, and \Phi^{-1}(\cdot) is its inverse, applied component-wise. The LR function is now represented as
W(\mu + L z) = \sqrt{\frac{\det(\Sigma)}{\det(\Sigma_0)}} \exp\left( \frac{1}{2} z^T z - \frac{1}{2} (\mu - \mu_0 + L z)^T \Sigma_0^{-1} (\mu - \mu_0 + L z) \right).
It is noteworthy that it is sufficient to use μ = μ 0 and Σ = Σ 0 when IS is not employed.
The proper selection of the parameters \mu and \Sigma can reduce the error. In the context of MC simulations, this is tantamount to reducing the variance \sigma^2, as indicated in Equation (4). Assuming that G is nonnegative for all z, the optimal zero-variance IS density would be
q_{opt}(z) := \frac{1}{C} G(z) \, p(z; \mu_0, \Sigma_0),
where C is unknown. Although this density is impractical due to the unknown value of C, it motivates the selection of an IS density that approximates the behavior of q_{opt}.
We define H(z) = \log(q_{opt}(z)). Assuming that H(z) is differentiable and unimodal, we denote the mode of H(z) by \mu^*, i.e.,
\mu^* = \arg\max_{z \in \mathbb{R}^d} H(z) = \arg\max_{z \in \mathbb{R}^d} G(z) \, p(z; \mu_0, \Sigma_0).
A second-order Taylor expansion about \mu^* yields
H(z) \approx H(\mu^*) - \frac{1}{2} (z - \mu^*)^T (\Sigma^*)^{-1} (z - \mu^*),
where \Sigma^* is determined by
\Sigma^* = \left( -\nabla^2 H(\mu^*) \right)^{-1} = \left( \Sigma_0^{-1} - \nabla^2 \log(G(\mu^*)) \right)^{-1}.
Consequently, we approximate q_{opt}(z) as
q_{opt}(z) \approx \exp\left( H(\mu^*) - \frac{1}{2} (z - \mu^*)^T (\Sigma^*)^{-1} (z - \mu^*) \right) \propto p(z; \mu^*, \Sigma^*).
Thus, p(z; \mu^*, \Sigma^*) represents a density closely resembling the optimal density. The LapIS density adopts \mu = \mu^* and \Sigma = \Sigma^* (with \Sigma^* = L^* L^{*T}), while the ODIS density selects \mu = \mu^* but maintains \Sigma = \Sigma_0.
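The Laplace construction of (\mu^*, \Sigma^*) can be sketched numerically. The example below uses a hypothetical log-concave integrand with \log G(z) = a^T z - \frac{1}{2} z^T B z, \mu_0 = 0, and \Sigma_0 = I_d, chosen so that \mu^* and \Sigma^* have closed forms; the Newton iteration is the standard way to locate the mode when no closed form exists.

```python
import numpy as np

# Hypothetical integrand: log G(z) = a.z - 0.5 z^T B z, prior p(z; 0, I_d)
d = 3
a = np.array([1.0, -0.5, 2.0])
B = np.diag([0.5, 1.0, 2.0])

def grad_H(z):          # gradient of H(z) = log G(z) + log p(z; 0, I) + const
    return a - B @ z - z

def hess_H(z):          # Hessian of H (constant for this quadratic example)
    return -B - np.eye(d)

mu = np.zeros(d)        # Newton iteration for the mode mu* = argmax H
for _ in range(50):
    step = np.linalg.solve(hess_H(mu), grad_H(mu))
    mu = mu - step
    if np.linalg.norm(step) < 1e-12:
        break

Sigma_star = np.linalg.inv(-hess_H(mu))   # LapIS covariance (Sigma0^{-1} - Hess log G)^{-1}
# ODIS would keep Sigma0 = I_d and only shift the mean to mu*
print(mu, np.diag(Sigma_star))
```

For this quadratic log-integrand the exact values are \mu^* = (I + B)^{-1} a and \Sigma^* = (I + B)^{-1}, which the iteration reproduces after a single Newton step.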

2.2.1. ODIS Estimator

The importance density for ODIS is p(z; \mu^*, \Sigma_0). The function f_O(z) is defined as
f_O(z) = (2\pi)^{d/2} \exp( H(L z + \mu^*) ) \prod_{j=1}^{d} \frac{1}{\rho(z_j)},
and the integral becomes
C = \int_{\mathbb{R}^d} f_O(z) \prod_{j=1}^{d} \rho(z_j) \, dz = \int_{(0,1)^d} f_O(\Phi^{-1}(u)) \, du.

2.2.2. LapIS Estimator

The importance density for LapIS is p(z; \mu^*, \Sigma^*). The function f_L(z) for LapIS is defined as
f_L(z) = (2\pi)^{d/2} \frac{\det(L^*)}{\sqrt{\det(\Sigma_0)}} \exp( H(L^* z + \mu^*) ) \prod_{j=1}^{d} \frac{1}{\rho(z_j)},
and the integral takes the form
C = \int_{\mathbb{R}^d} f_L(z) \prod_{j=1}^{d} \rho(z_j) \, dz = \int_{(0,1)^d} f_L(\Phi^{-1}(u)) \, du.
It is noteworthy that LapIS does not always outperform ODIS (for example, see [7,11]). Generally, in an MC framework, LapIS tends to be effective if q opt ( z ) closely resembles a normal distribution. From a QMC perspective, this situation becomes increasingly complex as the performance of QMC is further influenced by additional factors such as the boundary growth of the integrand when using scrambled digital nets. This aspect is further explored in the context of randomly shifted lattice rules in the subsequent sections.
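The two estimators can be compared on a one-dimensional toy problem where everything is available in closed form. Below, G(z) = \exp(-(z-2)^2) with \mu_0 = 0 and \Sigma_0 = 1, so \mu^* = 4/3, \Sigma^* = 1/3, and C = e^{-4/3}/\sqrt{3}; since G \cdot p is itself Gaussian here, LapIS is exact, while ODIS still integrates a nonconstant function. The SciPy normal distribution supplies \Phi^{-1}; all numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm

G = lambda z: np.exp(-(z - 2.0) ** 2)           # toy integrand
C_exact = np.exp(-4.0 / 3.0) / np.sqrt(3.0)     # closed form of C

mu_star = 4.0 / 3.0                 # mode of G(z) p(z; 0, 1)
sigma_star = np.sqrt(1.0 / 3.0)     # sqrt of Sigma* = (1 - d^2 log G)^{-1} = 1/3

def is_estimate(u, mu, sigma):
    """IS estimate of C from points u in (0,1), importance density N(mu, sigma^2)."""
    z = mu + sigma * norm.ppf(u)
    lr = norm.pdf(z) / norm.pdf(z, loc=mu, scale=sigma)   # likelihood ratio W(z)
    return np.mean(G(z) * lr)

N = 257
rng = np.random.default_rng(1)
u = np.modf(np.arange(1, N + 1) / N + rng.random())[0]    # shifted lattice, d = 1
odis = is_estimate(u, mu_star, 1.0)           # ODIS: mean mu*, covariance Sigma0 = 1
lapis = is_estimate(u, mu_star, sigma_star)   # LapIS: mean mu*, covariance Sigma*
print(odis, lapis, C_exact)
```

LapIS here has a constant transformed integrand (zero variance), illustrating the best case; in general, as noted above, the comparison can go either way.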

2.3. Reproducing Kernel Hilbert Space

RKHS theory offers a framework for error analysis; recent developments in this area can be found in [14,15,16].
Consider integrals with respect to a probability density that is a product of univariate marginal densities:
C(f) = \int_{\mathbb{R}^d} f(z) \prod_{i=1}^{d} \phi(z_i) \, dz.
Let \Phi denote the cumulative distribution function (CDF) of \phi:
\Phi(x) = \int_{-\infty}^{x} \phi(y) \, dy,
and let \Phi^{-1} denote its inverse. For instance, if \phi(x) = p(x; 0, 1), then \Phi is the CDF of the standard normal distribution. Assume that F is a Hilbert space of functions on \mathbb{R}^d with reproducing kernel K_F : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}, which has the following properties:
  • Functional property: K_F(\cdot, x) \in F for all x \in \mathbb{R}^d;
  • Symmetry: K_F(x, y) = K_F(y, x) for all x, y \in \mathbb{R}^d;
  • Reproducing property: \langle f, K_F(\cdot, x) \rangle = f(x) for all x \in \mathbb{R}^d and f \in F.
Additionally, it is required that
\int_{\mathbb{R}^d} K_F(x, x) \prod_{i=1}^{d} \phi(x_i) \, dx < \infty.
This leads to
C(f) = \int_{\mathbb{R}^d} \langle f, K_F(\cdot, x) \rangle \prod_{i=1}^{d} \phi(x_i) \, dx = \langle f, h \rangle,
where
h(x) = \int_{\mathbb{R}^d} K_F(x, y) \prod_{i=1}^{d} \phi(y_i) \, dy.
Equation (24) ensures that Equations (25) and (26) are finite. The initial error is
e_0(F) = \sup_{\|f\|_F \le 1} |C(f)| = \sqrt{\langle h, h \rangle},
yielding
e_0(F)^2 = \langle h, h \rangle = \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} K_F(x, y) \prod_{i=1}^{d} [ \phi(x_i) \phi(y_i) ] \, dx \, dy.
The transformation \Phi establishes a one-to-one correspondence between functions in the space F and those in a Hilbert space G defined on (0,1)^d. This mapping induces the isometry
f \in F \longleftrightarrow g = f(\Phi^{-1}(\cdot)) \in G.
Consequently, the following equality holds:
\|f\|_F = \|g\|_G.
In the isometric space G of F, the integral formulation becomes
C(f) = \int_{(0,1)^d} g(u) \, du =: I(g).
Furthermore, G, like F, is an RKHS, equipped with the reproducing kernel
K_G(u, v) = K_F(\Phi^{-1}(u), \Phi^{-1}(v)).
This implies that the initial error in G coincides with that in F:
e_0(G) = \sup_{\|g\|_G \le 1} |I(g)| = \sup_{\|f\|_F \le 1} |C(f)| = e_0(F).
Consider applying a QMC rule to estimate the integral in Equation (29) via
\hat{I}_N(g) = \frac{1}{N} \sum_{i=1}^{N} g(u_i),
where the worst-case error in G is defined as
e^w(\hat{I}_N, G) = \sup_{\|g\|_G \le 1} | \hat{I}_N(g) - I(g) |.
Therefore, the error in approximating C(f) using the transformed QMC rule is bounded by
| \hat{I}_N(f \circ \Phi^{-1}) - C(f) | = | \hat{I}_N(g) - I(g) | \le e^w(\hat{I}_N, G) \, \|g\|_G = e^w(\hat{I}_N, G) \, \|f\|_F.
In the context of QMC methods, we utilize a randomly shifted rank-1 lattice rule defined by the points
u_i = \left\{ \frac{i z}{N} + \zeta \right\}, \quad i = 1, 2, \ldots, N.
Here, \{\cdot\} denotes the component-wise fractional part, \zeta \in (0,1)^d is a uniformly distributed random shift, and z \in Z_N^d is the generating vector.
Let us define
Z_N = \{ z \in \mathbb{Z} \mid 1 \le z \le N - 1, \ \gcd(z, N) = 1 \},
the set of positive integers not exceeding N - 1 and coprime to N. We consider the shift-averaged worst-case error, which depends solely on the generating vector, as described in [17]:
e_N^{sa}(z)^2 = \int_{(0,1)^d} e^w(\hat{I}_N, G)^2 \, d\zeta = -\int_{(0,1)^d} \int_{(0,1)^d} K_G(u, v) \, du \, dv + \frac{1}{N} \sum_{i=1}^{N} K_G^{sik}\!\left( \left\{ \frac{i z}{N} \right\} \right),
where K_G^{sik} is the shift-invariant kernel, defined as
K_G^{sik}(u, v) = \int_{(0,1)^d} K_G(\{u + \zeta\}, \{v + \zeta\}) \, d\zeta.
In our analysis, we write K_G^{sik} as a function of the difference between u and v:
K_G^{sik}(\{u - v\}) := K_G^{sik}(\{u - v\}, 0) = K_G^{sik}(u, v).
For a function space F over \mathbb{R}^d with a weight function \psi and positive weight parameters \gamma_k, k = 1, 2, \ldots, d, the inner product is defined as [18]
\langle f, g \rangle_F = \sum_{u \subseteq \{1:d\}} \prod_{k \in u} \gamma_k^{-1} \int_{\mathbb{R}^{|u|}} \frac{\partial^{|u|} f}{\partial x_u}(x_u; 0) \, \frac{\partial^{|u|} g}{\partial x_u}(x_u; 0) \prod_{k \in u} \psi^2(x_k) \, dx_u,
where (x_u; 0) denotes the d-dimensional vector whose k-th component is x_k for k \in u and zero otherwise. The reproducing kernel components satisfy
\int_{\mathbb{R}} K_{F,j}(y, y) \, \psi(y) \, dy < \infty,
where K_{F,j} depends on \gamma_j:
K_{F,j}(x, y) = 1 + \gamma_j \eta(x, y),
with \eta defined as
\eta(x, y) = \begin{cases} \int_0^{\min(x,y)} \psi^{-2}(t) \, dt, & x, y > 0, \\ \int_{\max(x,y)}^{0} \psi^{-2}(t) \, dt, & x, y < 0, \\ 0, & \text{otherwise}. \end{cases}
The shift-invariant kernel, as detailed in [14], is expressed as
K_G^{sik}(u, v) = \sum_{u \subseteq \{1:d\}} \gamma_u \prod_{i \in u} \theta(\{u_i - v_i\}), \quad u, v \in (0,1)^d,
where
\theta(x) = \int_{\Phi^{-1}(x)}^{0} (\Phi(t) - x) \, \psi^{-2}(t) \, dt + \int_{\Phi^{-1}(1-x)}^{0} (\Phi(t) - 1 + x) \, \psi^{-2}(t) \, dt, \quad x \in (0,1).
Expanding \theta(x) into a Fourier series, we obtain
\theta(x) = \sum_{h \in \mathbb{Z}} \hat{\theta}(h) \exp(2\pi i h x),
with the Fourier coefficients given by
\hat{\theta}(h) = \int_0^1 \theta(x) \exp(-2\pi i h x) \, dx.
The explicit form of \hat{\theta}(h) in the Gaussian setting is given in Appendix A. In this space setting, to satisfy Equation (41), we require that, for any finite c,
\int_{-\infty}^{c} \Phi(t) \, \psi^{-2}(t) \, dt + \int_{c}^{\infty} (1 - \Phi(t)) \, \psi^{-2}(t) \, dt < \infty,
as shown in [15].

3. Main Results

This section addresses the theoretical convergence rate of the randomly shifted lattice rule combined with IS methods for functions defined on \mathbb{R}^d. We establish the conditions under which IS-based estimators achieve a convergence rate of O(N^{-1+\epsilon}), with \epsilon > 0 arbitrarily small.

Error Bounds for Importance Sampling

We first review a theorem of Nichols and Kuo [15] that establishes a connection between Fourier analysis and the shift-averaged worst-case error.
Assumption 1.
Assume that there exist 1/2 < r < 1 and M > 0 such that
\hat{\theta}(h) \le M |h|^{-2r} \quad \text{for all } h \ne 0.
Theorem 1.
Let f \in F, with density function \phi, weight parameters \gamma_u, and weight function \psi. If Assumption 1 is satisfied with a parameter 1/2 < r < 1, then a randomly shifted lattice rule with N points in d dimensions can be constructed by a CBC algorithm such that, for all \lambda \in [1/2, r), the RMSE satisfies
\sqrt{ E_{\Delta} | \hat{I}_N(f \circ \Phi^{-1}) - C(f) |^2 } \le \left( \frac{1}{\varphi(N)} \sum_{\emptyset \ne u \subseteq \{1:d\}} \gamma_u^{1/(2\lambda)} \left( 2 M^{1/(2\lambda)} \zeta(r/\lambda) \right)^{|u|} \right)^{\lambda} \|f\|_F,
where \zeta(x) := \sum_{k=1}^{\infty} k^{-x} denotes the Riemann zeta function and \varphi(N) := |\{ 1 \le i \le N - 1 : \gcd(i, N) = 1 \}| denotes the Euler totient function. If, in addition,
\sum_{|u| < \infty} \gamma_u^{1/(2\lambda)} \left( 2 M^{1/(2\lambda)} \zeta(r/\lambda) \right)^{|u|} < \infty
holds, the RMSE has an upper bound independent of d.
Proof. 
The proof is the combination of Theorems 7 and 8 in [15]. □
Note that the Euler totient function satisfies \varphi(N) = N - 1 when N is prime. The parameter r for commonly encountered combinations of distribution and weight functions has been computed in [15]. In Table 1, we list some combinations of the probability density \phi and weight function \psi from [15] that satisfy (48); these are used later.
Since the Hardy–Krause variation of an unbounded function is infinite, RKHS theory provides a valuable tool for determining whether the QMC method is suitable for such functions. To ascertain the achievable convergence order under the lattice rule, it suffices to check whether the function satisfies the conditions stipulated by the RKHS.
According to [15], when we select product weights of the form \gamma_u = \prod_{j \in u} \gamma_j that satisfy the condition
\sum_{j=1}^{\infty} \gamma_j^{1/(2r)} < \infty,
then (51) holds, and the convergence error in Equation (50) is achievable for a fixed dimension d.
The remaining hidden condition is that G_{IS} must belong to the RKHS. We next list several conditions on the log-integrand \hat{g}(z) = \log(G_{IS}(z)).
Assumption 2
(Upper Bound). \hat{g}(z) = O(\|z\|^\beta) with \beta < 2.
Assumption 3
(Minimax Eigenvalue). -\min_{i \in \{1, \ldots, d\}} h_i \cdot \max_{j \in \{1, \ldots, d\}} \lambda_j < \frac{1}{\alpha}, where h_i and \lambda_j are the eigenvalues of \nabla^2 \hat{g}(\mu^*) and \Sigma^*, respectively.
Assumption 4
(Semi-Positive Definiteness). -\nabla^2 \hat{g}(\mu^*) is positive semi-definite.
The fulfillment of these assumptions is vital in ensuring that the function G I S belongs to the RKHS, thereby facilitating the effective application of the established results.
We can now state the theoretical convergence rate of the randomly shifted lattice rule combined with ODIS or LapIS.
Theorem 2.
Let G_{IS} be the integrand in Equation (11). Assume that G_{IS} belongs to an RKHS equipped with the weight function \psi(z) \propto \exp( -\frac{z^2}{2\alpha^2} ), with \alpha^2 > 2. Then, using the lattice rule obtained by the CBC algorithm, the error rate of the estimator is
\sqrt{ E | \hat{I}_N(G_{IS} \circ \Phi^{-1}) - C |^2 } = O( N^{-1 + 1/\alpha^2 + \delta} ) \quad \text{for any } \delta > 0.
For ODIS, Assumption 2 offers a sufficient condition for G I S to qualify as a member of the RKHS. For LapIS, both Assumptions 2 and 3 are required.
Proof. 
Under the condition \beta < 2, Assumption 2 ensures that \hat{g}(L z + \mu) is dominated by z^T z / (2\alpha). This implies that \exp( \hat{g}(L z + \mu) - z^T z / (2\alpha) ) approaches zero as \|z\| tends to infinity. For ODIS, Equation (18) confirms that Assumption 2 suffices for f_O to belong to the RKHS. In contrast, for LapIS, Equation (20) requires additional conditions, which may be addressed by examining the dominant term \exp[ D(z) ], where
D(z) = \hat{g}(L z + \mu) - \frac{1}{2} (L z)^T \nabla^2 \hat{g}(\mu^*) (L z).
Assumption 2 controls the first term. For the second term, using Assumption 3 and the eigendecomposition of \nabla^2 \hat{g}(\mu^*), written as Q^T T Q with Q an orthogonal matrix and T a diagonal matrix of eigenvalues, the term
-\frac{1}{2} (L z)^T \nabla^2 \hat{g}(\mu^*) (L z) = -\frac{1}{2} (Q L z)^T T (Q L z)
is dominated by
\frac{h}{2} \| Q L z \|^2 \le \frac{h}{2} \| Q \|_2^2 \| L \|_2^2 \| z \|^2 \le \frac{h}{2} \max_{j \in 1:d} \lambda_j \, \| z \|^2,
where h = -\min_i h_i. The second inequality holds because the spectral norm of an orthogonal matrix is 1 and the spectral norm of L is the nonnegative square root of the maximum eigenvalue of L^T L = \Sigma^*, so \| L \|_2^2 = \max_j \lambda_j. Assumption 3 implies that \frac{1}{2\alpha} > \frac{h}{2} \max_{j \in 1:d} \lambda_j, ensuring that the integrand remains within the RKHS. The results then follow from Table 1. □
Assumption 3 can also be substituted with Assumption 4, which provides an alternative solution.
Remark 1.
Considering \beta in Assumption 2, if \beta = 2, then
\hat{g}(L z + \mu) = O( \| L z \|^2 ) \le O( \| L \|_2^2 \| z \|^2 ) = O\!\left( \left( \max_{j \in 1:d} \lambda_j \right) \| z \|^2 \right).
Since the RKHS contains \exp( \gamma \| z \|^2 ) only for \gamma < \frac{1}{2\alpha}, it follows that we need \max_{j \in 1:d} \lambda_j < \frac{1}{2\alpha}. Therefore, a more precise form of Assumption 2 asserts that
\hat{g}(z) = O( \| z \|^\beta ), \quad \beta < 2,
or
\hat{g}(z) = O( \| z \|^2 ), \quad \max_{j \in 1:d} \lambda_j < \frac{1}{2\alpha}.
The same conclusion applies when the function has second-order exponential growth. Using ODIS, we can then obtain a convergence rate of nearly O(N^{-1}) by letting \alpha grow to infinity, whereas with LapIS we cannot take \alpha arbitrarily large, since the integrand must remain in the RKHS.
Remark 2.
Regarding the weight functions, they should be chosen carefully for LapIS. For example, \psi(x) = \exp( -\alpha |x| ) cannot be selected as the weight function. LapIS introduces the term
\exp\left( -\frac{1}{2} (L z)^T \nabla^2 \hat{g}(\mu^*) (L z) \right)
into the function, which brings second-order growth, so the integrand may not belong to the corresponding RKHS, thereby potentially conflicting with the desired conclusion. This restriction does not apply to ODIS, which introduces no second-order growth term. As a result, ODIS offers more flexibility in the choice of weight functions than LapIS, indicating that the convergence conditions for LapIS are more stringent.

4. Case Studies

The growth requirements on the integrand under LapIS are more stringent than those under ODIS. Therefore, in the subsequent sections on the generalized linear mixed model (GLMM) and the Randleman–Bartter model, it is sufficient to verify that the integrand for LapIS lies within the RKHS, and we provide conditions under which this holds. When calculating the expected values for Asian options, although conditions for membership of the function in the RKHS cannot be provided, favorable results are still observed when applying IS combined with the lattice rule. The theoretical justification of these observations is a subject of our future research. The code behind these examples can be found at https://github.com/wanghj20/Importance-sampling-with-lattice-rule.git (accessed on 26 January 2024).

4.1. Statistical Models

In this section, we consider two commonly encountered models in statistical problems.

4.1.1. Generalized Linear Mixed Model

Next, we discuss a class of highly structured statistical models known as GLMMs (see [12]). For simplicity, the likelihood is represented as
L(\beta, \kappa, \sigma) = \int_{\mathbb{R}^d} \prod_{j=1}^{d} \frac{ \exp( y_j (\omega_j + \beta) - \exp(\omega_j + \beta) ) }{ y_j! } \cdot \frac{ \exp( -\frac{1}{2} \omega^T \Sigma^{-1} \omega ) }{ \sqrt{ (2\pi)^d \det(\Sigma) } } \, d\omega,
where y = (y_1, y_2, \ldots, y_d) denotes the data of nonnegative integers and \Sigma is the covariance matrix with entries \Sigma_{i,j} = \frac{\sigma^2 \kappa^{|i-j|}}{1 - \kappa^2} for i, j = 1, 2, \ldots, d. The objective is to maximize the log-likelihood \log L(\beta, \kappa, \sigma) given y with respect to (\beta, \kappa, \sigma), subject to \kappa \in (0, 1) and \sigma > 0.
The integral in Equation (59) can be rewritten, neglecting a normalizing constant, as \int_{\mathbb{R}^d} \exp(F(\omega)) \, d\omega, where
F(\omega) = \sum_{j=1}^{d} \left( y_j (\omega_j + \beta) - \exp(\omega_j + \beta) \right) - \frac{1}{2} \omega^T \Sigma^{-1} \omega.
As F is a unimodal function, its maximizer is
\omega^* = \arg\max_{\omega \in \mathbb{R}^d} F(\omega),
which is determined by solving
\nabla F(\omega) = y - e^{\beta} \exp(\omega) - \Sigma^{-1} \omega = 0.
The Hessian matrix at the maximizer is
\nabla^2 F(\omega^*) = -e^{\beta} \, \mathrm{diag}( e^{\omega_1^*}, e^{\omega_2^*}, \ldots, e^{\omega_d^*} ) - \Sigma^{-1}.
In the framework of LapIS, the covariance matrix \Sigma^* is defined by
\Sigma^* = \left( -\nabla^2 F(\omega^*) \right)^{-1},
and the integral \int_{\mathbb{R}^d} \exp(F(\omega)) \, d\omega is transformed as
\int_{\mathbb{R}^d} \exp(F(\omega)) \, d\omega = \det(L^*) \int_{\mathbb{R}^d} \exp( F(L^* v + \omega^*) ) \, dv = \det(L^*) \int_{(0,1)^d} \exp( F(L^* \Phi^{-1}(u) + \omega^*) ) \prod_{j=1}^{d} \frac{1}{\rho(\Phi^{-1}(u_j))} \, du,
where ρ and Φ represent the PDF and CDF of the standard normal distribution, respectively.
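The Laplace step for this model can be sketched as follows, with made-up values of (y, \beta, \kappa, \sigma) standing in for real data; Newton's method solves \nabla F(\omega) = 0, and the resulting \Sigma^* is factorized as L^* L^{*T} for the transformation above.

```python
import numpy as np

# Synthetic GLMM instance (y, beta, kappa, sigma are made-up illustrative values)
d, beta, kappa, sigma = 5, 0.3, 0.5, 1.0
y = np.array([2.0, 0.0, 1.0, 3.0, 1.0])
idx = np.arange(d)
Sigma0 = sigma**2 * kappa**np.abs(idx[:, None] - idx[None, :]) / (1 - kappa**2)
S_inv = np.linalg.inv(Sigma0)

def grad_F(w):   # gradient of F: y - e^beta exp(w) - Sigma^{-1} w
    return y - np.exp(beta) * np.exp(w) - S_inv @ w

def hess_F(w):   # Hessian of F: -e^beta diag(exp(w)) - Sigma^{-1}
    return -np.exp(beta) * np.diag(np.exp(w)) - S_inv

w = np.zeros(d)                     # Newton iteration for the mode omega*
for _ in range(50):
    step = np.linalg.solve(hess_F(w), grad_F(w))
    w = w - step
    if np.linalg.norm(step) < 1e-12:
        break

Sigma_star = np.linalg.inv(-hess_F(w))     # LapIS covariance
L_star = np.linalg.cholesky(Sigma_star)    # square root with L* L*^T = Sigma*
print(w, np.linalg.norm(grad_F(w)))
```

Since F is strictly concave, the Newton iteration converges rapidly to \omega^*, and -\nabla^2 F(\omega^*) is positive definite, so the Cholesky factorization always exists.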
Following the insights in [18] and the previous analysis, the weight function \psi(x) = \exp( -\frac{x^2}{2\alpha^2} ) is adopted to construct the RKHS, requiring that \alpha^2 > 2.
Theorem 3.
If the condition
\max_{j \in 1:d} \lambda_j < \frac{ \exp(-\beta) }{ \alpha^2 \exp( \max_{j \in 1:d} \omega_j^* ) }
is satisfied, then the integrand obtained using LapIS in the GLMM belongs to the RKHS characterized by the weight function \psi(x) = \exp( -\frac{x^2}{2\alpha^2} ). Under these conditions, there exists a randomly shifted lattice rule that achieves an error bound of O( N^{-1 + 1/\alpha^2 + \delta} ) for this model.
Proof. 
The function g(\omega) is defined as
g(\omega) = \log \prod_{j=1}^{d} \frac{ \exp( y_j (\omega_j + \beta) - \exp(\omega_j + \beta) ) }{ y_j! } = \sum_{j=1}^{d} \left( y_j (\omega_j + \beta) - \exp(\omega_j + \beta) - \log y_j! \right).
Because g is bounded above, it satisfies Assumption 2. Differentiating g with respect to each component yields
\frac{\partial g(\omega)}{\partial \omega_k} = y_k - \exp(\omega_k + \beta)
and
\frac{\partial^2 g(\omega)}{\partial \omega_j \partial \omega_k} = -\exp(\omega_k + \beta) \, \delta_{jk}.
Therefore, \nabla^2 g is a negative definite diagonal matrix. By Theorem 2, it suffices to verify Assumption 3, which is clearly equivalent to Equation (66). □
Remark 3.
First, \alpha^2 > 2 is necessary for the RKHS to be well-defined. Additionally, according to Equation (66), the following is a necessary condition:
2 < \alpha^2 < \frac{ \exp(-\beta) }{ \max_{j \in 1:d} \lambda_j \exp( \max_{j \in 1:d} \omega_j^* ) },
or, equivalently,
2 e^{\beta} \max_{j \in 1:d} \lambda_j \exp( \max_{j \in 1:d} \omega_j^* ) < 1.
This condition may not always hold, especially when d is large, because LapIS can produce an integrand with unbounded boundary growth, potentially diverging faster than \exp( x^T x / (2\alpha^2) ), which is unfavorable for estimation with QMC points.

4.1.2. Binary Regression Model

We define these functions such that their integral corresponds to the marginal likelihood integral \int \pi_0(\beta) L(y \mid \beta) \, d\beta for a Bayesian statistical model. Here, \beta \in \mathbb{R}^s, and \pi_0(\beta) denotes a Gaussian prior density with mean zero and covariance 5 I_s. The likelihood function is given by the logistic regression model L(y \mid \beta) = \prod_{i=1}^{n} F( y_i \beta^T x_i ), where F(z) = \frac{1}{1 + e^{-z}}, and the data (x_i, y_i)_{i=1}^{n} comprise predictors x_i \in \mathbb{R}^s and labels y_i \in \{1, -1\}. We adopt the importance sampling approach with MC and QMC to approximate the quantities above, as in [19]. We consider the Pima dataset, which has 10 predictors. For s = 8, we take the first eight predictors (and similarly for s = 4), and we fix the number of data points at 100. In this experiment, we use product weights generated by the CBC algorithm. We then calculate the relative RMSE of four estimators: classic MC (NONE–MC), LapIS–QMC, ODIS–QMC, and classic QMC (NONE–QMC). The standard error results are shown in Table 2 and Table 3.
Based on these experiments, applying importance sampling within QMC does yield an improvement over traditional MC methods. However, in this particular problem, the performance of LapIS combined with QMC does not surpass that of ODIS. This may be because, for this posterior density estimation problem, an increase in sample size could result in the integrand falling outside the RKHS. This issue will be investigated in our subsequent research.
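A sketch of this experiment, with synthetic data in place of the Pima dataset and a made-up Korobov generating vector in place of the CBC-constructed one, is given below; it computes the posterior mode by Newton's method and then forms an ODIS–QMC estimate of the marginal likelihood.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, s, tau2 = 100, 4, 5.0                       # prior N(0, 5 I_s)
X = rng.standard_normal((n, s))                # synthetic predictors (not Pima)
yv = np.where(X @ np.ones(s) + rng.standard_normal(n) > 0, 1.0, -1.0)

b = np.zeros(s)                                # Newton iteration for the posterior mode
for _ in range(25):
    t = yv * (X @ b)
    p_ = 1.0 / (1.0 + np.exp(t))               # sigmoid(-t)
    grad = X.T @ (yv * p_) - b / tau2
    H = -(X * (p_ * (1 - p_))[:, None]).T @ X - np.eye(s) / tau2
    b = b - np.linalg.solve(H, grad)
mu_star = b

N = 1021
zgen = np.array([1, 76, 671, 967])             # illustrative Korobov vector for N = 1021
u = np.modf(np.arange(1, N + 1)[:, None] * zgen / N + rng.random(s))[0]
v = norm.ppf(u)                                # standard normal points
bs = mu_star + np.sqrt(tau2) * v               # ODIS samples, q = N(mu*, 5 I_s)
loglik = -np.logaddexp(0.0, -yv * (bs @ X.T)).sum(axis=1)
lr = np.exp(-(mu_star @ mu_star + 2 * np.sqrt(tau2) * (v @ mu_star)) / (2 * tau2))
est = np.mean(np.exp(loglik) * lr)             # marginal likelihood estimate
print(est)
```

The likelihood ratio uses the closed form \pi_0(\mu^* + \sqrt{5} v)/q(\mu^* + \sqrt{5} v) for a mean-shifted normal; only the mean is shifted here, so the covariances cancel.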

4.2. Financial Models

4.2.1. Randleman–Bartter Model

We consider the valuation of a zero-coupon bond within the framework of the Randleman–Bartter model, as discussed in [20]. A three-dimensional Gaussian integral related to this model was previously analyzed by Caflisch [1]. The general formulation aims to determine the fair price of a (d+1)-year zero-coupon bond with a face value of $1. The valuation is represented as the integral
P = \int_{\mathbb{R}^d} \prod_{k=0}^{d} \frac{1}{1 + r_k} \, p(z; 0, I_d) \, dz.
The model treats the interest rates as
r_k = r_0 \exp( -k \sigma^2 / 2 + \sigma B_k ), \quad k = 1, 2, \ldots, d,
where the vector (B_1, B_2, \ldots, B_d)^T is normally distributed with zero mean and covariance matrix C with entries C_{i,j} = \min(i, j). The parameters r_0 and \sigma denote the current annual interest rate and the volatility, respectively.
The generation of Brownian motion involves finding a matrix A that satisfies A A^T = C. The standard construction is used in our analysis; other widely used methods, such as the Brownian bridge and principal component analysis, are detailed in [20], and future studies may include these alternative constructions.
The standard construction is defined as
A := (a_{i,j})_{d \times d} = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 1 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & \cdots & 1 \end{pmatrix},
and the Brownian motion is generated as
(B_1, B_2, \ldots, B_d)^T = A (z_1, z_2, \ldots, z_d)^T, \quad (z_1, z_2, \ldots, z_d)^T \sim N(0, I_d).
Finally, the integral in Equation (72) is rewritten in the standard form of Equation (6), with
G(z) = \prod_{k=0}^{d} (1 + r_k)^{-1}, \quad \mu_0 = 0, \quad \Sigma_0 = I_d.
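A pricing sketch under this setup, with made-up parameter values (r_0 = 0.05, \sigma = 0.1, d = 4) and an illustrative Korobov generating vector, is given below; it compares a randomly shifted lattice estimate of P with a plain MC estimate.

```python
import numpy as np
from scipy.stats import norm

d, r0, sigma = 4, 0.05, 0.1                    # illustrative parameters
A = np.tril(np.ones((d, d)))                   # standard construction, A A^T = C

def bond_integrand(zmat):
    B = zmat @ A.T                             # rows: Brownian paths (B_1, ..., B_d)
    k = np.arange(1, d + 1)
    r = r0 * np.exp(-k * sigma**2 / 2 + sigma * B)
    return 1.0 / ((1 + r0) * np.prod(1 + r, axis=1))

N = 1021
zgen = np.array([1, 76, 671, 967])             # illustrative Korobov vector for N = 1021
rng = np.random.default_rng(2)
u = np.modf(np.arange(1, N + 1)[:, None] * zgen / N + rng.random(d))[0]
P_lattice = bond_integrand(norm.ppf(u)).mean()
P_mc = bond_integrand(rng.standard_normal((N, d))).mean()
print(P_lattice, P_mc)
```

Both estimates land near (1 + r_0)^{-(d+1)}, as expected when the volatility is small; the lattice estimate is the one the rest of this section analyzes.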
Note that a i , j = 1 if and only if i j ; otherwise, a i , j = 0 . Therefore,
r k z i = σ r k 1 { k i } .
2 r k z i z j = σ 2 r k 1 { k max ( i , j ) } .
We have
g ( z ) = log G ( z ) = k = 0 d log ( 1 + r k ) , H ( z ) = g ( z ) 1 2 z T z .
Then, the matrix 2 H is given by 2 H = 2 g I d . Let μ be the solution to H ( z ) = 0 and let Σ = ( 2 H ( μ ) ) 1 . The derivatives of g with respect to z i are
g z i = k = 1 d 1 1 + r k r k z i = k = i d σ r k 1 + r k .
The second derivatives of g are derived as
2 g z i z j = k = i d σ [ r k z j ( 1 + r k ) r k z j r k ] ( 1 + r k ) 2 = k = i d σ r k z j ( 1 + r k ) 2 = k = i d σ 2 r k 1 { k j } ( 1 + r k ) 2 = k = max ( i , j ) d σ 2 r k ( 1 + r k ) 2 .
Defining $R_k = \frac{\sigma^2 r_k}{(1+r_k)^2}$ for simplicity, $-\nabla^2 g$ is given by
$$-\nabla^2 g = \begin{pmatrix} \sum_{k=1}^{d} R_k & \sum_{k=2}^{d} R_k & \sum_{k=3}^{d} R_k & \cdots & R_d \\ \sum_{k=2}^{d} R_k & \sum_{k=2}^{d} R_k & \sum_{k=3}^{d} R_k & \cdots & R_d \\ \sum_{k=3}^{d} R_k & \sum_{k=3}^{d} R_k & \sum_{k=3}^{d} R_k & \cdots & R_d \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ R_d & R_d & R_d & \cdots & R_d \end{pmatrix}.$$
The matrix $\nabla^2 g$ is therefore negative definite. A suitable choice for the decomposition matrix $H$ is the upper triangular matrix
$$H = \begin{pmatrix} \sqrt{R_1} & \sqrt{R_2} & \sqrt{R_3} & \cdots & \sqrt{R_d} \\ 0 & \sqrt{R_2} & \sqrt{R_3} & \cdots & \sqrt{R_d} \\ 0 & 0 & \sqrt{R_3} & \cdots & \sqrt{R_d} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & \sqrt{R_d} \end{pmatrix},$$
so that $-\nabla^2 g = H H^T$. Let $r = \min_{k \in 1:d} r_k(\mu)$. The weight function $\psi(x) = \exp\left(-\frac{x^2}{2\alpha^2}\right)$ is used to establish the RKHS, with the condition that $\alpha^2 > 2$.
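The factorization of the negated Hessian can be verified numerically. A minimal sketch, with arbitrary illustrative values of $R_k$ rather than values computed from the model:

```python
import numpy as np

def neg_hess_g(R):
    """Matrix with entries [-∇²g]_{i,j} = sum_{k = max(i,j)}^{d} R_k."""
    # tail[m] = R_{m+1} + ... + R_d in 0-based indexing
    tail = np.cumsum(R[::-1])[::-1]
    i, j = np.indices((len(R), len(R)))
    return tail[np.maximum(i, j)]

def factor_H(R):
    """Upper-triangular H with H_{i,j} = sqrt(R_j) for j >= i."""
    d = len(R)
    return np.triu(np.tile(np.sqrt(R), (d, 1)))

R = np.array([0.4, 0.3, 0.2, 0.1])   # illustrative values only
H = factor_H(R)
assert np.allclose(H @ H.T, neg_hess_g(R))
```

The check works for any positive $R_k$ because $(HH^T)_{i,j} = \sum_{k \ge \max(i,j)} \sqrt{R_k}\sqrt{R_k}$.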
Theorem 4.
In the Randleman–Bartter model, if the condition $\alpha^2 < \frac{1}{\kappa} \sqrt{\frac{6}{d^3 + 2d^2 + 2d + 1}}$ is satisfied, where $\kappa = \frac{\sigma^2}{2 + r + 1/r}$, then the randomly shifted lattice rule obtained by the CBC algorithm attains an error bound of $O(N^{-1 + 1/\alpha^2 + \delta})$.
Proof. 
Assumption 2 is clearly satisfied. Because $\nabla^2 g$ is negative definite, it is sufficient to verify the validity of Assumption 3.
Here, the challenge lies in theoretically computing the maximal eigenvalue of $\Sigma$ and the minimal eigenvalue of $-\nabla^2 g(\mu)$. Instead of using direct computation, an estimation approach based on inequalities is used.
Considering that $\nabla^2 g$ is a symmetric matrix, a classic linear algebra conclusion may be applied via
$$\sum_{k=1}^{d} \nu_k^2 = \|\nabla^2 g\|_F^2,$$
where the $\nu_k$ denote the eigenvalues of $\nabla^2 g$ and $\|\cdot\|_F$ denotes the Frobenius norm. Estimating the Frobenius norm with $R_k = \frac{\sigma^2 r_k}{(1+r_k)^2} \le \kappa$ yields
$$\|\nabla^2 g\|_F^2 = \sum_{j=1}^{d} (2j-1)\left(\sum_{k=j}^{d} R_k\right)^2 \le \sum_{j=1}^{d} (2j-1)(d-j+1)^2 \kappa^2 = \frac{d(d^3 + 2d^2 + 2d + 1)\kappa^2}{6} := S_d.$$
Consequently,
$$\max_{k \in 1:d} \nu_k^2 \le S_d / d,$$
and
$$h = \max_{k \in 1:d} |\nu_k| \le \sqrt{\frac{d^3 + 2d^2 + 2d + 1}{6}}\, \kappa.$$
For $\max_{k \in 1:d} \lambda_k$, a trivial upper bound is used. The proof is completed by applying Theorem 2. □
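The closed form for $S_d$ in the proof rests on the identity $\sum_{j=1}^{d} (2j-1)(d-j+1)^2 = d(d^3+2d^2+2d+1)/6$, which can be checked directly:

```python
def lhs(d):
    # sum_{j=1}^{d} (2j - 1) (d - j + 1)^2
    return sum((2 * j - 1) * (d - j + 1) ** 2 for j in range(1, d + 1))

def rhs(d):
    # d (d^3 + 2 d^2 + 2 d + 1) / 6 -- always an integer, since
    # d (d+1) (d^2 + d + 1) is divisible by 6
    return d * (d**3 + 2 * d**2 + 2 * d + 1) // 6

assert all(lhs(d) == rhs(d) for d in range(1, 100))
```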
In Theorem 4, concerns may arise when considering the upper bound on $\alpha$, as it tends to decrease when the dimension $d$ increases, potentially rendering the condition $\alpha^2 > 2$ impractical in higher dimensions. However, this limitation does not significantly limit the use of the model. First, the estimation of $\max_{k \in \{1, \ldots, d\}} \lambda_k$ can be further refined. Second, with $\sigma = 0.1$ we have $\kappa \le \sigma^2/4 = 0.0025$, so a low dimension is both feasible and adequate for most practical scenarios.
In Figure 1, we present the numerical results for dimension $d = 5$. The parameters employed are $r_0 = 0.1$ and $\sigma = 0.01$. We set the product weights as $\gamma_j = j^{-\eta}$ with $\eta = 3$; these weights satisfy Equation (52) (refer to [21]).
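For readers wishing to reproduce experiments of this type, the following sketch implements a generic randomly shifted rank-1 lattice rule. The generating vector below is an arbitrary illustrative choice, not one produced by the CBC construction used in the paper, and the toy integrand (with known integral 1 over the unit cube) is hypothetical:

```python
import numpy as np

def shifted_lattice_mean(f, gen, N, n_shifts=16, seed=0):
    """Estimate ∫_{[0,1]^d} f(u) du by a rank-1 lattice rule with
    n_shifts independent uniform random shifts."""
    rng = np.random.default_rng(seed)
    d = len(gen)
    # rank-1 lattice points: u_i = (i * z mod N) / N, i = 0, ..., N-1
    pts = (np.outer(np.arange(N), np.asarray(gen)) % N) / N
    ests = []
    for _ in range(n_shifts):
        shift = rng.random(d)                    # one uniform shift per run
        ests.append(np.mean(f((pts + shift) % 1.0)))
    # each shifted estimate is unbiased; the spread across shifts gives
    # an empirical standard error
    return float(np.mean(ests)), float(np.std(ests, ddof=1) / np.sqrt(n_shifts))

f = lambda u: np.prod(1.0 + 0.5 * (u - 0.5), axis=1)   # true integral = 1
est, err = shifted_lattice_mean(f, gen=[1, 1571, 2897], N=4096)
```

In practice the generating vector should come from the CBC algorithm for the chosen weights $\gamma_j$; any fixed vector still yields an unbiased estimator, and only the variance depends on its quality.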
In the RQMC setting, both ODIS and LapIS converge more rapidly than in the MC context, and this behavior persists as the dimension $d$ grows, provided $\kappa$ is not too small, in which case a near-optimal convergence rate is obtained. In this experiment, both ODIS- and LapIS-enhanced estimators exhibit an error rate approaching $O(N^{-1})$, which agrees with our theoretical findings, and these findings remain consistent as the dimension increases.
Our analysis reveals that employing LapIS yields superior outcomes compared to ODIS, which may be attributed to the fact that the transformed integrand $G_{IS}$ belongs to the RKHS when both IS methods are implemented. Consequently, for integrands belonging to the RKHS, LapIS performs better than ODIS.

4.2.2. Asian Options

The integrands in the two examples above satisfy the conditions discussed previously. We now consider a classic financial example: the Asian option.
Under the Black–Scholes model, the asset price S ( t ) follows geometric Brownian motion under the risk-neutral measure, given by
$$dS(t) = r S(t)\, dt + \sigma S(t)\, dB(t),$$
where r is the risk-free rate, σ is the volatility, and B ( t ) represents a standard Brownian motion. The solution to this stochastic differential equation is
$$S(t) = S_0 \exp\left(\left(r - \frac{\sigma^2}{2}\right) t + \sigma B(t)\right),$$
with S 0 = S ( 0 ) denoting the initial asset price at time zero. The discounted payoff of an Asian call option is
$$e^{-rT} \max(S_A - K, 0),$$
where $T$ is the maturity, $K$ is the strike price, and $S_A = \frac{1}{d} \sum_{j=1}^{d} S(t_j)$ is the arithmetic average of the asset prices at the discrete times $t_j = jT/d$ for $j = 1, 2, \ldots, d$. To conduct numerical simulations, we first generate the asset prices at these times. For notational convenience, let $S_j = S(t_j)$ and $\Delta t = T/d$. The asset price vector is then
$$S := (S_1, S_2, \ldots, S_d)^T = S_0 \exp\left(\left(r - \frac{\sigma^2}{2}\right) \Delta t\, (1, 2, \ldots, d)^T + \sigma \sqrt{\Delta t}\, A z\right),$$
where $z = (z_1, z_2, \ldots, z_d)^T \sim N(0, I_d)$, the exponential is applied componentwise, and $A$ is the same matrix as in (74), which represents the Cholesky decomposition of the standard Brownian motion covariance matrix $C = (c_{i,j})_{d \times d} = (\min(i,j))_{d \times d}$.
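A plain Monte Carlo baseline for this payoff can be sketched as follows; we use $d = 16$ monitoring dates for speed rather than the $d = 100$ of the experiments, and the function name and sample size are ours:

```python
import numpy as np

def asian_call_mc(S0=100.0, r=0.1, sigma=0.2, K=100.0, T=1.0,
                  d=16, n=200_000, seed=0):
    """Plain MC estimate of E[ e^{-rT} max(S_A - K, 0) ] with the
    Brownian path built by the standard construction (cumulative sums)."""
    rng = np.random.default_rng(seed)
    dt = T / d
    z = rng.standard_normal((n, d))
    B = np.cumsum(np.sqrt(dt) * z, axis=1)        # B(t_1), ..., B(t_d)
    t = dt * np.arange(1, d + 1)
    S = S0 * np.exp((r - 0.5 * sigma**2) * t + sigma * B)
    SA = S.mean(axis=1)                           # arithmetic average price
    return float(np.exp(-r * T) * np.maximum(SA - K, 0.0).mean())
```

This baseline is what the MC column of the tables below corresponds to conceptually; the IS/QMC estimators replace the i.i.d. normals with transformed lattice points.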
The fair pricing of an Asian option involves integrating Equation (90). The integrand is not differentiable, so our previous results do not apply; however, we can still observe the effects of importance sampling through numerical experiments. The numerical results for this example are presented in Table 4 and Table 5. We use the CBC algorithm with two sets of weights: $\gamma_j = 1/j^3$ and constant weights $\gamma_u \equiv 1$. The parameters are set as $S_0 = 100$, $r = 0.1$, $\sigma = 0.2$, $K = 100$, $T = 1$, dimension $d = 100$, normal weight function $\alpha^2 = 10$, and weight coefficient $\kappa = 0.75$. For clarity, we list the results computed by the MC method as a baseline, followed by those obtained using QMC without IS (NONE), with ODIS, and with LapIS. Errors are reported to two significant figures, and variance reduction factors (in parentheses) are rounded to the nearest integer.
We briefly summarize the performance of the estimator in this case as follows:
  • Using a randomly shifted lattice rule with importance sampling, the error convergence order becomes close to $O(N^{-1})$; the fitted rate in this example is approximately $O(N^{-0.95})$;
  • Importance sampling reduces calculation errors. ODIS improves the variance reduction factor by hundreds or thousands of times compared to MC, while LapIS improves it by thousands of times. LapIS thus retains great potential in financial problems, although it is difficult to determine whether this integrand lies in the RKHS;
  • Upon changing the weights to constant values, the overall results do not greatly differ, though the advantage of LapIS over ODIS is somewhat reduced under this condition. We leave the problem of choosing the weights to future research.

4.3. Estimator Study

In this subsection, we present a comprehensive analysis of the performance of LapIS–QMC and ODIS–QMC across three distinct numerical examples: binary regression, the Randleman–Bartter Model, and Asian options. The effectiveness of these estimators is assessed in terms of error reduction and their suitability for the problems at hand.
In the case of binary regression, our experiments demonstrated that applying importance sampling within QMC methods notably enhances performance compared to traditional Monte Carlo methods. However, in this specific context, the efficacy of LapIS combined with QMC did not exceed that of ODIS. This may be ascribed to the challenges posed by the posterior density estimation problem, where an increased sample size can lead the integrand to fall outside the RKHS. This observation warrants further investigation, which we will pursue in future research.
In examining the Randleman–Bartter model, it was revealed that employing LapIS yields significantly better outcomes than ODIS. This improvement is likely due to the conditions that the integrand resides within the RKHS and that its growth is no larger than a second-order exponential function when both importance sampling methods are applied. Therefore, for integrands belonging to the RKHS, LapIS exhibits superior performance over ODIS.
In the context of Asian options, although the integrand is not differentiable, we can still observe that importance sampling has great potential. ODIS significantly enhances variance reduction, by hundreds to thousands of times compared to traditional MC methods, while LapIS showed even greater potential in this scenario, improving variance reduction by thousands of times.
We can also deduce that, if the function belongs to the RKHS and the growth of the integrand is slower than a second-order exponential function, then LapIS has a greater advantage than ODIS; otherwise, ODIS is the better choice. Additionally, combining IS with QMC is usually better than combining IS with MC.

5. Conclusions

This study investigated the effectiveness of combining randomly shifted lattice rules with importance sampling techniques, specifically focusing on their impact on the convergence rates of integrals. We proved that, for unbounded functions, randomly shifted lattice rules combined with suitably chosen importance densities can achieve convergence as fast as $O(N^{-1+\epsilon})$ given $N$ samples for arbitrary $\epsilon > 0$ under certain conditions. We investigated the application of the estimators in statistical and financial models, considering the conditions under which the integrands meet the aforementioned criteria (membership in the RKHS). Furthermore, we identified the potential benefits of using importance sampling with the lattice rule in option pricing problems. We conclude that both ODIS and LapIS have their own applicable scenarios. Pre-integration methods may be required to verify the theoretical convergence conditions of these systems [22]. Investigating convergence conditions for integrand estimators using pre-integration techniques combined with importance sampling will be a subject of our future research.

Author Contributions

Conceptualization, H.W. and Z.Z.; methodology, H.W.; software, H.W.; validation, H.W.; writing—original draft preparation, Z.Z.; writing—review and editing, H.W.; visualization, H.W.; supervision, H.W.; project administration, H.W.; funding acquisition, H.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Science Foundation of China, Grant No. 72071119.

Data Availability Statement

The data used in this research are available from the corresponding author upon reasonable request.

Acknowledgments

We would like to express gratitude to Professor Xiaoqun Wang for his guidance and suggestions for revising this work.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

In this appendix, we deduce the simple form of θ ^ ( h ) . First,
$$\begin{aligned}
\hat{\theta}(h) &= \int_0^1 \int_{\Phi^{-1}(x)}^{0} \frac{\Phi(t) - x}{\psi^2(t)}\, dt \, e^{-2\pi i h x}\, dx + \int_0^1 \int_{\Phi^{-1}(1-x)}^{0} \frac{\Phi(t) - 1 + x}{\psi^2(t)}\, dt \, e^{-2\pi i h x}\, dx \\
&= \int_0^1 \int_{\Phi^{-1}(x)}^{0} \frac{\Phi(t) - x}{\psi^2(t)}\, dt \, e^{-2\pi i h x}\, dx + \int_0^1 \int_{\Phi^{-1}(x)}^{0} \frac{\Phi(t) - x}{\psi^2(t)}\, dt \, e^{2\pi i h x}\, dx \\
&= 2 \int_0^1 \int_{\Phi^{-1}(x)}^{0} \frac{\Phi(t) - x}{\psi^2(t)} \cos(2\pi h x)\, dt\, dx.
\end{aligned}$$
The outer integral may be split at $\Phi(0)$ using the Fubini theorem via
$$\begin{aligned}
\hat{\theta}(h) &= 2 \int_0^{\Phi(0)} \int_{\Phi^{-1}(x)}^{0} \frac{\Phi(t) - x}{\psi^2(t)} \cos(2\pi h x)\, dt\, dx + 2 \int_{\Phi(0)}^{1} \int_{\Phi^{-1}(x)}^{0} \frac{\Phi(t) - x}{\psi^2(t)} \cos(2\pi h x)\, dt\, dx \\
&= 2 \int_{-\infty}^{0} \frac{1}{\psi^2(t)} \int_{0}^{\Phi(t)} (\Phi(t) - x) \cos(2\pi h x)\, dx\, dt + 2 \int_{0}^{\infty} \frac{1}{\psi^2(t)} \int_{\Phi(t)}^{1} (x - \Phi(t)) \cos(2\pi h x)\, dx\, dt \\
&= 2 \int_{-\infty}^{0} \frac{1}{\psi^2(t)} \frac{\sin^2(\pi h \Phi(t))}{2\pi^2 h^2}\, dt + 2 \int_{0}^{\infty} \frac{1}{\psi^2(t)} \frac{\sin^2(\pi h \Phi(t))}{2\pi^2 h^2}\, dt \\
&= \frac{1}{\pi^2 h^2} \int_{-\infty}^{\infty} \frac{\sin^2(\pi h \Phi(t))}{\psi^2(t)}\, dt = \frac{1}{\pi^2 h^2} \int_0^1 \frac{\sin^2(\pi h t)}{\psi^2(\Phi^{-1}(t))\, \phi(\Phi^{-1}(t))}\, dt.
\end{aligned}$$
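The inner-integral evaluation used above, $\int_0^a (a-x)\cos(2\pi h x)\, dx = \sin^2(\pi h a)/(2\pi^2 h^2)$ for integer $h$, can be checked numerically:

```python
import math

def inner(a, h, n=20_000):
    """Composite trapezoid approximation of ∫_0^a (a - x) cos(2π h x) dx."""
    total = 0.0
    for i in range(n + 1):
        x = a * i / n
        w = 0.5 if i in (0, n) else 1.0
        total += w * (a - x) * math.cos(2 * math.pi * h * x)
    return total * a / n

for a, h in [(0.3, 1), (0.7, 2), (0.5, 3)]:
    closed = math.sin(math.pi * h * a) ** 2 / (2 * math.pi**2 * h**2)
    assert abs(inner(a, h) - closed) < 1e-6
```

The symmetric case $\int_{\Phi(t)}^{1} (x - \Phi(t)) \cos(2\pi h x)\, dx$ yields the same value because $h$ is an integer.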

References

  1. Caflisch, R.E. Monte Carlo and quasi-Monte Carlo methods. Acta Numer. 1998, 7, 1–49.
  2. Niederreiter, H. Random Number Generation and Quasi-Monte Carlo Methods; SIAM: Philadelphia, PA, USA, 1992.
  3. Owen, A.B. Scrambled net variance for integrals of smooth functions. Ann. Stat. 1997, 25, 1541–1562.
  4. Owen, A.B. Monte Carlo variance of scrambled net quadrature. SIAM J. Numer. Anal. 1997, 34, 1884–1910.
  5. Dick, J.; Pillichshammer, F. Digital Nets and Sequences: Discrepancy Theory and Quasi-Monte Carlo Integration; Cambridge University Press: Cambridge, UK, 2010.
  6. L'Ecuyer, P.; Lemieux, C. Recent advances in randomized quasi-Monte Carlo methods. In Modeling Uncertainty: An Examination of Stochastic Theory, Methods, and Applications; Kluwer Academic: Boston, MA, USA, 2002; pp. 419–474.
  7. He, Z.; Zheng, Z.; Wang, X. On the error rate of importance sampling with randomized quasi-Monte Carlo. SIAM J. Numer. Anal. 2023, 61, 515–538.
  8. Glasserman, P.; Heidelberger, P.; Shahabuddin, P. Asymptotically optimal importance sampling and stratification for pricing path-dependent options. Math. Financ. 1999, 9, 117–152.
  9. Rubino, G.; Tuffin, B. Rare Event Simulation Using Monte Carlo Methods; John Wiley & Sons: Hoboken, NJ, USA, 2009.
  10. Owen, A.B. Monte Carlo Theory, Methods and Examples. Available online: https://artowen.su.domains/mc/ (accessed on 26 January 2013).
  11. Zhang, C.; Wang, X.; He, Z. Efficient importance sampling in quasi-Monte Carlo methods for computational finance. SIAM J. Sci. Comput. 2021, 43, B1–B29.
  12. Kuo, F.Y.; Dunsmuir, W.T.M.; Sloan, I.H.; Wand, M.P.; Womersley, R.S. Quasi-Monte Carlo for highly structured generalised response models. Methodol. Comput. Appl. Probab. 2008, 10, 239–275.
  13. Dick, J.; Rudolf, D.; Zhu, H. A weighted discrepancy bound of quasi-Monte Carlo importance sampling. Stat. Probab. Lett. 2019, 149, 100–106.
  14. Kuo, F.Y.; Wasilkowski, G.W.; Waterhouse, B.J. Randomly shifted lattice rules for unbounded integrands. J. Complex. 2006, 22, 630–651.
  15. Nichols, J.A.; Kuo, F.Y. Fast CBC construction of randomly shifted lattice rules achieving $O(N^{-1+\delta})$ convergence for unbounded integrands over $\mathbb{R}^d$ in weighted spaces with POD weights. J. Complex. 2014, 30, 444–468.
  16. Waterhouse, B.J.; Kuo, F.Y.; Sloan, I.H. Randomly shifted lattice rules on the unit cube for unbounded integrands in high dimensions. J. Complex. 2006, 22, 71–101.
  17. Sloan, I.H.; Kuo, F.Y.; Joe, S. Constructing randomly shifted lattice rules in weighted Sobolev spaces. SIAM J. Numer. Anal. 2002, 40, 1650–1665.
  18. Kuo, F.Y.; Sloan, I.H.; Wasilkowski, G.W.; Waterhouse, B.J. Randomly shifted lattice rules with the optimal rate of convergence for unbounded integrands. J. Complex. 2010, 26, 135–160.
  19. Chopin, N.; Ridgway, J. Leave Pima Indians alone: Binary regression as a benchmark for Bayesian computation. Stat. Sci. 2017, 32, 64–87.
  20. Hull, J. Options, Futures and Other Derivatives; Prentice-Hall: Upper Saddle River, NJ, USA, 2005.
  21. Graham, I.G.; Kuo, F.Y.; Nichols, J.A.; Scheichl, R.; Schwab, C.; Sloan, I.H. Quasi-Monte Carlo finite element methods for elliptic PDEs with lognormal random coefficients. Numer. Math. 2015, 131, 329–368.
  22. Xiao, Y.; Wang, X. Conditional quasi-Monte Carlo methods and dimension reduction for option pricing and hedging with discontinuous functions. J. Comput. Appl. Math. 2018, 343, 289–308.
Figure 1. RMSE for the Randleman–Bartter short-term interest model with d = 5 .
Table 1. The choice of the parameter r in Assumption 1 for some common combinations of the probability density ϕ and weight function ψ.

| | $\phi(x) = \frac{1}{\sqrt{2\pi\nu}}\exp\left(-\frac{x^2}{2\nu}\right)$ | $\phi(x) = \frac{\Gamma\left(\frac{\nu+1}{2}\right)}{\sqrt{\nu\pi}\,\Gamma\left(\frac{\nu}{2}\right)}\left(1+\frac{x^2}{\nu}\right)^{-\frac{\nu+1}{2}}$ |
|---|---|---|
| $\psi(x) = \exp(-x^2/(2\alpha^2))$ | $r = 1 - \frac{\nu}{\alpha^2}$, $\alpha > \sqrt{2\nu}$ | — |
| $\psi(x) = (1+\lvert x\rvert)^{-\alpha}$ | $r = 1-\delta$, $\delta \in \left(0, \min\left(\frac{1}{2}, \frac{9}{8\alpha\nu}\right)\right)$ | $r = 1 - \frac{2\alpha+1}{2\nu}$, $2\alpha + 1 < \nu$ |
Table 2. Standard errors of IS estimators for a 4-dimension binary regression problem.

| N | MC | NONE–QMC | LapIS–QMC | ODIS–QMC |
|---|---|---|---|---|
| 512 | 2.17 × 10⁻¹ | 6.86 × 10⁻² | 1.66 × 10⁻¹ | 2.59 × 10⁻² |
| 1024 | 1.40 × 10⁻¹ | 2.91 × 10⁻² | 4.01 × 10⁻¹ | 2.41 × 10⁻² |
| 2048 | 1.03 × 10⁻¹ | 1.56 × 10⁻² | 8.39 × 10⁻¹ | 6.46 × 10⁻³ |
| 4096 | 6.10 × 10⁻² | 9.86 × 10⁻³ | 2.98 × 10⁻¹ | 4.18 × 10⁻³ |
| 8192 | 5.39 × 10⁻² | 4.68 × 10⁻³ | 1.01 × 10⁻¹ | 2.29 × 10⁻³ |
| 16,384 | 3.39 × 10⁻² | 2.73 × 10⁻³ | 2.73 × 10⁻³ | 7.13 × 10⁻⁴ |
Table 3. Standard errors of IS estimators for an 8-dimension binary regression problem.

| N | MC | LapIS–QMC | NONE–QMC | ODIS–QMC |
|---|---|---|---|---|
| 512 | 6.18 × 10⁻¹ | 1.81 × 10⁻¹ | 4.67 × 10⁻¹ | 2.84 × 10⁻¹ |
| 1024 | 4.37 × 10⁻¹ | 2.05 × 10⁻¹ | 3.15 × 10⁻¹ | 1.94 × 10⁻¹ |
| 2048 | 3.60 × 10⁻¹ | 1.59 × 10⁻¹ | 1.93 × 10⁻¹ | 9.16 × 10⁻² |
| 4096 | 1.80 × 10⁻¹ | 4.00 × 10⁻¹ | 1.08 × 10⁻¹ | 6.63 × 10⁻² |
| 8192 | 1.54 × 10⁻¹ | 1.66 × 10⁻¹ | 9.86 × 10⁻² | 4.74 × 10⁻² |
| 16,384 | 9.54 × 10⁻² | 1.56 × 10⁻¹ | 7.32 × 10⁻² | 2.57 × 10⁻² |
Table 4. Standard errors of IS estimators for Asian options with constant weight coefficients (variance reduction factors relative to MC in parentheses).

| N | MC | NONE–QMC | ODIS–QMC | LapIS–QMC |
|---|---|---|---|---|
| 1024 | 9.01 × 10⁻³ | 6.74 × 10⁻³ (2) | 7.07 × 10⁻⁴ (162) | 3.44 × 10⁻⁴ (686) |
| 2048 | 5.97 × 10⁻³ | 3.05 × 10⁻³ (4) | 3.21 × 10⁻⁴ (346) | 1.66 × 10⁻⁴ (1293) |
| 4096 | 4.11 × 10⁻³ | 1.25 × 10⁻³ (11) | 1.19 × 10⁻⁴ (1193) | 7.97 × 10⁻⁵ (2659) |
| 8192 | 2.77 × 10⁻³ | 6.89 × 10⁻⁴ (16) | 5.53 × 10⁻⁵ (2509) | 4.00 × 10⁻⁵ (4796) |
| 16,384 | 1.98 × 10⁻³ | 4.79 × 10⁻⁴ (17) | 2.77 × 10⁻⁵ (5109) | 2.10 × 10⁻⁵ (8890) |
| 32,768 | 1.32 × 10⁻³ | 2.33 × 10⁻⁴ (32) | 1.44 × 10⁻⁵ (8403) | 1.01 × 10⁻⁵ (17,081) |
| 65,536 | 9.74 × 10⁻⁴ | 1.17 × 10⁻⁴ (69) | 7.53 × 10⁻⁶ (16,731) | 4.72 × 10⁻⁶ (42,583) |
Table 5. Standard errors of IS estimators for Asian options with decreasing weight coefficients (variance reduction factors relative to MC in parentheses).

| N | MC | NONE–QMC | ODIS–QMC | LapIS–QMC |
|---|---|---|---|---|
| 1024 | 9.64 × 10⁻³ | 6.19 × 10⁻³ (2) | 7.32 × 10⁻⁴ (173) | 3.15 × 10⁻⁴ (937) |
| 2048 | 7.46 × 10⁻³ | 3.03 × 10⁻³ (6) | 3.66 × 10⁻⁴ (415) | 1.59 × 10⁻⁴ (2201) |
| 4096 | 4.57 × 10⁻³ | 1.33 × 10⁻³ (12) | 1.75 × 10⁻⁴ (682) | 6.11 × 10⁻⁵ (5594) |
| 8192 | 3.23 × 10⁻³ | 7.99 × 10⁻⁴ (16) | 8.08 × 10⁻⁵ (1596) | 2.74 × 10⁻⁵ (13,896) |
| 16,384 | 2.09 × 10⁻³ | 5.25 × 10⁻⁴ (16) | 4.20 × 10⁻⁵ (2476) | 1.25 × 10⁻⁵ (27,956) |
| 32,768 | 1.40 × 10⁻³ | 2.56 × 10⁻⁴ (30) | 2.99 × 10⁻⁵ (2192) | 5.67 × 10⁻⁶ (60,966) |
| 65,536 | 1.01 × 10⁻³ | 8.55 × 10⁻⁵ (140) | 1.07 × 10⁻⁵ (8909) | 3.03 × 10⁻⁶ (111,111) |

Share and Cite

Wang, H.; Zheng, Z. Randomly Shifted Lattice Rules with Importance Sampling and Applications. Mathematics 2024, 12, 630. https://doi.org/10.3390/math12050630