Case Report

Parametric Estimation in Fractional Stochastic Differential Equation

by Paramahansa Pramanik 1, Edward L. Boone 2,* and Ryad A. Ghanam 3
1 Department of Mathematics and Statistics, University of South Alabama, Mobile, AL 36688, USA
2 Department of Statistical Science and Operations Research, Virginia Commonwealth University, Richmond, VA 23284, USA
3 Department of Liberal Arts and Sciences, Virginia Commonwealth University, Doha P.O. Box 8095, Qatar
* Author to whom correspondence should be addressed.
Stats 2024, 7(3), 745-760; https://doi.org/10.3390/stats7030045
Submission received: 8 May 2024 / Revised: 14 June 2024 / Accepted: 15 July 2024 / Published: 20 July 2024
(This article belongs to the Special Issue Novel Semiparametric Methods)

Abstract: Fractional stochastic differential equations are becoming more popular in the literature as they can model phenomena in financial data that typical stochastic differential equation models cannot. In the formulation considered here, the Hurst parameter, H, controls the fraction of differentiation and needs to be estimated from the data. Fortunately, the covariance structure among observations in time is easily expressed in terms of the Hurst parameter, which means that a likelihood is easily defined. This work derives the maximum likelihood estimator for H and shows that it is biased and not consistent. Simulated data are used to characterize the bias of the estimator, an empirical bias correction function is built from them, and a bias-corrected estimator is proposed and studied. Via simulation, the bias-corrected estimator is shown to be minimally biased, and its simulation-based standard error is obtained and used to construct a 95% confidence interval for H. A simulation study shows that the 95% confidence intervals have decent coverage probabilities for large n. The method is then applied to the S&P500 and VIX data before and after the 2008 financial crisis.

1. Introduction

In this paper, we consider a class of stochastic differential equations (SDEs) driven by fractional Brownian motion with Hurst parameter $H \in (1/2, 1)$ of the form
$$X(t) = X(0) + \int_0^t \sigma\big(s, X(s)\big)\, dB^H(s), \qquad \alpha = 2H, \quad t \in (0, \infty),$$
where the state variable $X(s) \in X([0,t])$ takes values in a convex open set in $\mathbb{R}^n$, the initial condition $X(0) \in \mathbb{R}^n$, and the diffusion coefficient $\sigma : [0,t] \times \mathbb{R}^n \to \mathbb{R}^{n \times n}$. We assume that $B^H(s) = \{B^H(s)\}_{s=0}^t$ is an n-dimensional fractional Brownian motion defined on a probability space $(\Omega, \mathcal{F}, P)$, and $\mathcal{F}_s = \{\mathcal{F}_s^x\}_{s=0}^t$ is its natural filtration augmented with an independent $\sigma$-algebra $\mathcal{F}$, where P is the probability law. Our main objective in this paper is to estimate the Hurst parameter by using the maximum likelihood estimation (MLE) method. The SDE expressed in Equation (1) is very useful in sticky-price general equilibrium economic models under Calvo-type settings (Calvo [1], Alvarez et al. [2]).
Brownian motion, also referred to as a Wiener process, is a crucial concept in probability theory and stochastic processes. This mathematical model depicts the random movement of particles in a fluid, a phenomenon first observed by the botanist Robert Brown. Brownian motion has the following properties: (i) it has continuous trajectories over time, (ii) its increments over non-overlapping time intervals are independent, (iii) its increments over any given time interval depend solely on the duration of the interval, regardless of its specific position on the time axis, and (iv) its increments over an interval of length t follow a normal distribution with mean 0 and variance t. Mathematically, the Brownian motion $B(s)$ for $s \in [0,t]$ is defined by $B(0) = 0$, $B(t) - B(s) \sim N(0, t-s)$ with independent increments, and $B(s)$ has continuous paths with respect to time s almost surely. Fractional stochastic differential equations extend classical SDEs by using fractional Brownian motion (fBm) rather than standard Brownian motion. Fractional Brownian motion is a broader concept than Brownian motion that accounts for memory and long-range dependence, characterized by the Hurst parameter $H \in (0,1)$: if $H = 1/2$, fBm reduces to standard Brownian motion; if $H > 1/2$, fBm shows positive correlation and exhibits persistent behavior; and if $H < 1/2$, fBm shows negative correlation and exhibits anti-persistent behavior. Fractional Brownian motion has the following properties: it is self-similar, i.e., for any constant $c \in (0, \infty)$, $B^H(cs) \overset{d}{=} c^H B^H(s)$ for $s \in [0,t]$; the increments $B^H(\nu) - B^H(s)$ are stationary for every $\nu \in [0,t]$; and the increments of fBm are correlated, reflecting the memory of the process (i.e., long-range dependence). A fractional stochastic differential equation has the integral form
$$X(t) = X(0) + \int_0^t \mu\big(s, X(s)\big)\, ds + \int_0^t \sigma\big(s, X(s)\big)\, dB^H(s),$$
where $X(s)$ is an unknown process, $\mu$ is the drift coefficient, $\sigma$ is the diffusion coefficient, and $B^H(s)$ is the fractional Brownian motion with Hurst parameter H. Since in this paper we consider a driftless fractional stochastic differential equation, $\mu = 0$.
Fractional Brownian motion is a commonly used extension of the standard Brownian motion, incorporating long-range dependence while maintaining self-similarity and Gaussian properties. These characteristics render it a favored stochastic process for modeling a range of complex systems, including financial scenarios and surface growth dynamics (Gairing et al. [3]). The demand for detecting and studying such systems has spurred the creation of several established statistical tools, such as wavelet analysis, R/S estimators, and generalized variations, among others, which are extensively covered in referenced monographs (Beran [4], Embrechts [5], Tudor [6]). Fractional Brownian motion demonstrates spatial homogeneity. However, many physical systems encounter external forces that confine them to particular areas with high likelihood. In such instances, investigating SDEs as an extension of the fBm model may prove more appropriate. In this context, fBm serves as a stochastic driving force that substantially influences the dynamics of the system. Consequently, exploring and comprehending this random driving force emerge as pivotal pursuits.
Since the publication of Rogers [7], there has been a sustained discourse surrounding the utilization of fBm in financial modeling. While much of the scholarly literature concentrates on mitigating arbitrage, some publications explore the market microstructure foundations associated with fBm (Rostek and Schöbel [8]). The predictability mentioned earlier presents hurdles when using fBm to model stock prices. Rogers [7] highlighted arbitrage possibilities within a fractional Bachelier-type model, which led him to conclude that fBm is not suitable for financial modeling. However, concerns about the broader applicability of Rogers’ findings emerged as they were limited to a linear scenario without drift (Rostek and Schöbel [8]). The contribution of Shiryaev [9] involved crafting a precise arbitrage strategy within the fractional market framework, igniting additional debate on the topic. Despite initial findings on financial market models using fBm being misleading, there was still hope for overcoming the shortcomings of the proposed market framework. The research enthusiasm in this area was reignited by fresh insights in stochastic analysis, primarily sparked by the contributions of Duncan et al. [10]. They introduced a stochastic integration calculus concerning fBm, leveraging the Wick product. This integration framework enables drawing parallels to the established Itô calculus, revitalizing interest in the field (Rostek and Schöbel [8]).
Even with the introduction of this novel stochastic integration concept, the overall situation faced little improvement. Delbaen and Schachermayer [11] had previously established a significant result applicable to continuous-time market models: regardless of the chosen integration theory, a type of weak arbitrage called free lunch with vanishing risk can only be eliminated if the underlying stock price process S is a semimartingale. It is apparent that processes driven by fBm, due to their persistent behavior, do not fulfill the semimartingale criterion. Additionally, Cheridito [12] successfully devised explicit arbitrage strategies in both the fractional Bachelier model and the fractional Black–Scholes market, regardless of whether pathwise or Wick-based calculus was utilized for integration.
In recent studies, attention has shifted towards establishing a market microstructure basis for fBm. Klüppelberg and Kühn [13] propose a concept called a shot noise process, aimed at modeling the arrival of new information at random intervals, which then diffuses into the market (Rostek and Schöbel [8]). This implies that new information can exert a prolonged influence on the price process. Ultimately, the Klüppelberg and Kühn [13] model converges to fractional Brownian motion in the limit. Bayraktar et al. [14] characterize the passive behavior of investors. This implies that following a trading activity, there exists a probability that an investor refrains from conducting any further transactions during the subsequent period. This is explicitly achieved by introducing a stochastic process. When this process remains at zero for all time intervals, the investor remains inactive. Orders from market participants arrive asynchronously, and trades are executed by a market maker who adjusts prices based on the demand-supply imbalance. In the limit, the price process converges to a geometric fBm. In the absence of this passive behavior exhibited by investors, a classical Brownian motion is observed. Consequently, a market model incorporating both passive investors and continuously active market participants leads to a mixed model akin to Cheridito [15].

2. Long Memory as Hurst Parameter

The concept of memory effect holds significant importance within financial systems, prompting extensive research into understanding the persistence of patterns, commonly referred to as long memory, within financial markets. Recently, fractional-order ordinary differential equations have emerged as a valuable tool for characterizing this memory effect within intricate systems (Li et al. [16]). Consider a stochastic process denoted by $\{X(s)\}_{s \in \mathbb{N}}$, where observations $X(s)$ are made at discrete intervals $s = 1, 2, \ldots, n$, for all $n \in \mathbb{N}$. A time series exhibits long memory when the correlation between current and past observations decays slowly over time. The stochastic process $\{X(s)\}_{s \in \mathbb{N}}$ is considered stationary if the distributions of $\{X(s_1), X(s_2), \ldots, X(s_K)\}^T$ and $\{X(s_1 + \delta), X(s_2 + \delta), \ldots, X(s_K + \delta)\}^T$ remain the same for all integers K, shifts $\delta \in [1, \infty)$, and time indices $s_1, s_2, \ldots, s_K \geq 0$, with T representing vector transposition. In the context of Gaussian processes, this is equivalent to the autocovariance function $\mathrm{cov}\big(X(\delta), X(\delta + s)\big) =: \varrho(s)$ being independent of $\delta$; see Chronopoulou and Viens [17]. These notions are often termed strict stationarity and second-order stationarity. Here, $\varrho(s)$ represents the autocovariance function, and $\lambda(s) = \varrho(s)/\varrho(0)$ is the autocorrelation function. If the sum of absolute autocovariances $\sum_{s=1}^{n} |\varrho(s)|$ diverges as $n \to \infty$, the process $X(s)$ exhibits long memory. Conversely, if this sum remains finite, the process demonstrates short memory, and if $\varrho(s) = 0$ for all $s \neq 0$, the process lacks memory (Li et al. [16]). Traditionally, the autocorrelation function has been used to quantify the memory of a stochastic process, but recently the Hurst index has gained popularity as a practical alternative for identifying long- or short-range dependence.
Definition 1. 
Consider a stationary process $\{X(s)\}_{s \in \mathbb{N}}$. When the sum of absolute autocovariances diverges, i.e., $\sum_{s=1}^{\infty} |\varrho(s)| = \infty$, the process $X(s)$ displays long memory or long-range dependence. One sufficient condition for this phenomenon is the presence of a Hurst parameter $H \in (1/2, 1)$ such that
$$\liminf_{s \to \infty} \frac{\rho(s)}{s^{2(H-1)}} > 0.$$
Remark 1. 
Typically, a long memory model satisfies the stronger condition $\lim_{s \to \infty} \rho(s)/s^{2(H-1)} > 0$, where H is the long-range dependence parameter of $X(s)$ [17].
Proof. 
Definition 1 implies that a typical long memory model satisfies $\sum_{s=1}^{\infty} \varrho(s) = \infty$. Let the autocorrelation function $\rho(s)$ of the long memory process $\{X(s)\}_{s \in \mathbb{N}}$ be such that, for a positive finite constant $K^*$, the following condition holds:
$$\rho(s) \sim K^* s^{2(H-1)}, \quad \text{as } s \to \infty.$$
Equation (2) implies
$$\lim_{s \to \infty} \frac{\rho(s)}{s^{2(H-1)}} = \lim_{s \to \infty} \frac{K^* s^{2(H-1)}}{s^{2(H-1)}} = K^*.$$
Since $K^* > 0$, we conclude $\lim_{s \to \infty} \rho(s)/s^{2(H-1)} > 0$. Therefore, for typical long memory models where the autocorrelation function decays according to $\rho(s) \sim K^* s^{2(H-1)}$, the limit $\lim_{s \to \infty} \rho(s)/s^{2(H-1)}$ is indeed positive. This implies that the Hurst parameter H can be called the long memory parameter of $X(s)$. □
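The decay condition above can be checked numerically. The sketch below is written under the assumption that the observations are increments of an fBm (fractional Gaussian noise), whose autocovariance at lag s is $\frac{1}{2}\big(|s+1|^{2H} + |s-1|^{2H} - 2|s|^{2H}\big)$; this is the standard fGn formula, consistent with the covariance used later in the paper, and the function names are ours. It verifies that $\rho(s)/s^{2(H-1)}$ settles near the positive constant $H(2H-1)$ and that the partial sums of $\rho(s)$ keep growing, as Definition 1 requires.

```python
import numpy as np

def fgn_autocorrelation(s, H):
    """Autocorrelation of fractional Gaussian noise at lag s (unit-variance increments)."""
    s = np.asarray(s, dtype=float)
    return 0.5 * (np.abs(s + 1) ** (2 * H) + np.abs(s - 1) ** (2 * H) - 2 * np.abs(s) ** (2 * H))

H = 0.8                                    # long-memory case, H in (1/2, 1)
lags = np.arange(1, 10001)
rho = fgn_autocorrelation(lags, H)

# rho(s) / s^{2(H-1)} should approach the positive constant H(2H-1) ...
ratio = rho / lags ** (2 * (H - 1))
print(ratio[-1], H * (2 * H - 1))          # both close to 0.48

# ... and the partial sums of rho(s) grow without bound (long memory).
print(np.cumsum(rho)[[99, 999, 9999]])     # keeps increasing with the lag horizon
```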
An autocorrelation function that exhibits long memory demonstrates a gradual decay over time. This observation was first made by Hurst, who identified extended persistence in the annual minimum water levels of the Nile River ( Hurst [18]). The Hurst parameter serves to quantify the smoothness of a time series by examining how it scales concerning the stochastic process. A crucial aspect of memory processes is self-similarity, a concept characterized by the Hurst parameter. In geometry, self-similarity refers to shapes that consist of repeating patterns across multiple scales, potentially extending infinitely. From a statistical perspective, self-similarity suggests that the distributional properties of the process paths remain consistent, regardless of the observation distance (Chronopoulou and Viens [17]). The rigorous definition of the self-similarity property is as follows:
Definition 2. 
A stochastic process $\{X(s)\}_{s \in \mathbb{N}}$ is self-similar with self-similarity (Hurst) parameter H if, for any positive constant α,
$$\{\alpha^{-H} X(\alpha s) : s \in \mathbb{N}\} \overset{d}{=} \{X(s) : s \in \mathbb{N}\}.$$
To better understand the impact of the fraction of derivative, H, on the resulting stochastic process, panel (a) in Figure 1 shows the stochastic process for a set of realized deviates across different values of H. The black line is for H = 1, which corresponds to no fraction of the derivative and is included for reference. Notice that the fraction of the derivative compresses the stochastic process, and small changes in H have a large impact on the resulting process. For example, compare H = 1 (black) to H = 31/32 (magenta) and notice that at the peak near t = 300, the value of the realized process for H = 1 is above 30, in contrast to the realized process for H = 31/32, which is under 30. Panel (b) shows the same set of deviates with H = 15/16 across σ = 1, 0.8, 0.6, 0.4 and 0.2. Notice that as the standard deviation decreases, the effect is much closer to a simple scaling of the process, whereas the fraction of the derivative has more of an exponential effect on the process. For reference across the two panels, the green line in panel (a) is the same as the black line in panel (b). Hence, H has a much different impact on the process than σ, and care should be taken when estimating H.

3. A Calvo-Type Construction

This section explores an issue concerning strategic complementarities and pricing, as originally discussed by Calvo [1]. This particular scenario garners frequent attention in studies on sticky prices owing to its practical significance. The model offers a clear-cut framework for elucidating the core elements of the analysis and for scrutinizing crucial outcomes such as the existence, uniqueness, and non-monotonic characteristics of impulse response profiles, which also bear relevance to the state-dependent problem.
At time s, $P(s)$ is the consumer price index (CPI), $V_i(s)$ is a consumer preference shock corresponding to the ith variety, and the price set by a firm on the consumer good of the ith variety is $\hat{P}(s)$, such that $p(s) := \hat{P}(s)/V_i(s)$ (Kimball [19]). Define $x(s) := \frac{p(s) - \hat{P}(s)}{P(s)}$ and $X(s) := \frac{P(s) - \hat{P}(s)}{P(s)}$ as the percent deviation from a symmetric equilibrium of a firm's own price and of the aggregate price (CPI), respectively (Alvarez et al. [2]). The economy comprises a range of atomistic individual firms, each operating independently. Each firm operates under the assumption of a consistent fluctuation in markup averages, denoted by $X(s)$ for all times $s \in [0,t]$. The firm can adjust its pricing only at specific, randomly occurring times denoted by $\{\xi_i\}$, which follow a Poisson process characterized by a parameter θ. These instances of adjustment are referred to as adjustment opportunities, and the state of the firm's pricing at these times is termed the optimal reset value. Between price adjustments, the markup gap $x(s)$ evolves according to a Brownian motion without a drift component but with variance $\sigma^2$. Additionally, the markup experiences abrupt jumps immediately after a price adjustment at $s = \xi_i$, with each jump in markup denoted by $U_i$. Therefore, the markup gap evolves as
$$X(s) = X(0) + \sigma\big(B(s) - B(0)\big) + \sum_{\xi_i \leq s} U_i, \quad \text{for all } s \in [0,t],$$
where B is an n-dimensional standard Brownian motion. Furthermore, in the absence of any markup jumps, the continuous version of Equation (3) becomes
$$X(t) = X(0) + \int_0^t \sigma\, dB(s).$$
Moreover, the fractional version of the above equation is Equation (1), where the diffusion coefficient σ is a function of time and markups.
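As a rough illustration of the markup-gap dynamics in Equation (3), the sketch below simulates a one-dimensional path: a driftless Brownian diffusion with volatility σ between adjustment opportunities, which arrive as a Poisson process with rate θ, plus a jump $U_i$ at each opportunity. The parameter values and the Gaussian choice for the jump distribution are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_markup_gap(T=1.0, n_steps=1000, sigma=0.1, theta=5.0, jump_scale=0.05, x0=0.0):
    """Simulate X(s) = X(0) + sigma*(B(s)-B(0)) + sum_{xi_i <= s} U_i on [0, T].

    Adjustment opportunities xi_i arrive as a Poisson process with rate theta;
    each triggers a jump U_i ~ N(0, jump_scale^2). All parameter values are illustrative.
    """
    dt = T / n_steps
    times = np.linspace(0.0, T, n_steps + 1)
    # Diffusion part: sigma times Brownian increments over each small interval.
    diffusion = sigma * np.sqrt(dt) * rng.standard_normal(n_steps)
    # Jump part: number of adjustment opportunities landing in each small interval.
    n_jumps = rng.poisson(theta * dt, size=n_steps)
    jumps = np.array([rng.normal(0.0, jump_scale, k).sum() for k in n_jumps])
    x = x0 + np.concatenate(([0.0], np.cumsum(diffusion + jumps)))
    return times, x

times, x = simulate_markup_gap()
print(x[:5], x[-1])
```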

4. Hurst Index Estimators

In this section, we discuss different types of estimators of the Hurst index. The most renowned among these is the R/S estimator, initially proposed by Hurst [18] in the context of a hydrological problem related to Nile River water storage. Hurst divided the dataset into finite, equal-length blocks and computed the adjusted range (R/S), which is the ratio of the range (R) to the standard deviation (S) within each block. To ascertain the Hurst index value, the natural logarithm of the adjusted range was first calculated, followed by constructing a regression model in which the natural logarithms of the adjusted range and of the number of observations served as the response variable and regressor, respectively (Chronopoulou and Viens [17]). The slope of this regression served as the estimator for H. This method relies on graphical analysis, where it is up to the statistician to identify the portion of the data exhibiting nicely scattered behavior along a straight line. Particularly in smaller samples, the distribution of the R/S statistic deviates significantly from normal, exacerbating the problem. Moreover, the estimator is biased and prone to a large standard error.
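A minimal sketch of the R/S procedure just described follows: split the increment series into equal-length blocks, compute the adjusted range R/S within each block, and regress the log of the average R/S on the log of the block length, with the slope serving as the Hurst estimate. The block sizes and the simple averaging across blocks are our choices, and the small-sample bias noted above applies.

```python
import numpy as np

def rs_hurst(x, block_sizes=(16, 32, 64, 128, 256)):
    """Rough R/S estimate of the Hurst index from a 1-D series of increments x."""
    x = np.asarray(x, dtype=float)
    log_m, log_rs = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        if n_blocks == 0:
            continue
        rs_vals = []
        for b in range(n_blocks):
            block = x[b * m:(b + 1) * m]
            dev = np.cumsum(block - block.mean())     # cumulative deviations from the block mean
            r = dev.max() - dev.min()                 # range R
            s = block.std(ddof=1)                     # standard deviation S
            if s > 0:
                rs_vals.append(r / s)
        log_m.append(np.log(m))
        log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_m, log_rs, 1)           # regression slope estimates H
    return slope

rng = np.random.default_rng(1)
print(rs_hurst(rng.standard_normal(4096)))            # near 0.5 for white noise, with some upward bias
```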
Gladyshev [20] formulated a limit theorem concerning a statistic derived from the first-order quadratic variations of fBms. This theorem led to the development of an estimator for the Hurst parameter H, which exhibited strong consistency but lacked asymptotic normality. Istas and Lang [21] introduced another estimator tailored specifically for centered Gaussian processes with stationary increments. This estimator, which also utilized first-order quadratic variations, demonstrated asymptotic normality for values of $H \in (1/2, 3/4)$. Next, Benassi et al. [22] investigated the second-order quadratic variations of a particular class of Gaussian processes that possessed local fractal properties similar to fractional Brownian motion. They derived an estimator for the Hurst index H that showed asymptotic normality across all possible values of H (Melichov [23]). Coeurjolly [24] devised a set of consistent estimators for H based on the asymptotic behavior of the kth absolute moment of discrete variations of sample paths over discrete intervals within the range [0, 1]. These estimators were accompanied by explicit convergence rates applicable throughout the entire range of $H \in (0, 1)$, and their asymptotic normality was established. Moreover, Begyn [25] explored second-order quadratic variations along general subdivisions for processes with Gaussian increments, providing a more comprehensive investigation into the asymptotic behavior of quadratic variations for Gaussian processes. Finally, Fhima et al. [26] presented a technique for conducting change point analysis on the Hurst index of a piece-wise fBm, which is an extension of the conventional fBm. Their approach combines two methodologies: the filtered derivative with p-value (FDpV) method for detecting changes in mean, variance, or regression parameter, and a modified version of the increment ratios (IR) statistic estimator initially introduced in Bardet and Surgailis [27].
Based on Definition 1, we can derive the sample autocorrelation function $\tilde{\lambda}(s)$ by dividing $\tilde{\varrho}(s)$ by $\tilde{\varrho}(0)$. Using the Correlogram approach, we can then plot this function against the lag, up to the total number of observations, denoted as n. As a heuristic, it is common practice to draw two horizontal lines at $\pm 2/\sqrt{n}$. Any observations falling outside these lines are deemed significantly correlated at the 0.05 significance level (Chronopoulou and Viens [17]). In cases where the process demonstrates long memory, the plot should exhibit a slow decay. However, the graphical nature of this method introduces limitations, as it cannot ensure precise results. Given that long memory is an asymptotic concept, it is essential to analyze the Correlogram at high lags. Nonetheless, distinguishing long memory from short memory can be challenging, particularly when, for instance, H = 0.6. To mitigate this challenge, a more appropriate plot involves plotting $\ln \tilde{\lambda}(n)$ against $\ln(n)$ (Chronopoulou and Viens [17]). If the asymptotic decay follows a precise hyperbolic pattern, then for large lags the points should cluster around a straight line with a negative slope equal to $2(H-1)$, indicating long memory in the data. Conversely, if the plot diverges toward $-\infty$ at a rate that is at least exponential, it suggests short memory. Similar to a Correlogram, a Variogram can be graphed against the total number of observations, represented as N. Assessing whether the data display short or long memory using the Variogram approach poses similar challenges as with the Correlogram. The primary benefit of these methods lies in their simplicity. Furthermore, because they are non-parametric, they can be applied to any long-memory process. However, these graphical techniques lack precision. Additionally, they often lead to erroneous conclusions, suggesting the presence of long memory even when it is absent. For instance, in scenarios where a process exhibits short memory alongside a rapidly decaying trend, both a Correlogram and a Variogram might erroneously indicate long memory.
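The log-log correlogram heuristic can be scripted as below: compute sample autocorrelations, restrict to large lags, and fit a line to $\ln \tilde{\lambda}(s)$ versus $\ln s$; under hyperbolic decay the slope is roughly $2(H-1)$, so $1 + \text{slope}/2$ gives a crude H. The lag window is an arbitrary choice, the function names are ours, and, as cautioned above, this is only a graphical diagnostic.

```python
import numpy as np

def sample_autocorr(x, max_lag):
    """Sample autocorrelation up to max_lag (biased covariance estimator)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = np.mean(x * x)
    return np.array([np.mean(x[:-s] * x[s:]) / var for s in range(1, max_lag + 1)])

def log_log_slope(x, lag_range=(20, 200)):
    """Slope of ln(autocorrelation) vs ln(lag); roughly 2(H-1) under hyperbolic decay."""
    lags = np.arange(lag_range[0], lag_range[1] + 1)
    acf = sample_autocorr(x, lag_range[1])[lags - 1]
    keep = acf > 0                                   # the log is only defined for positive values
    slope, _ = np.polyfit(np.log(lags[keep]), np.log(acf[keep]), 1)
    return slope, 1 + slope / 2                      # (slope, implied H)

# Usage on a long series of increments (illustrative; white noise has no stable hyperbolic slope):
rng = np.random.default_rng(2)
print(log_log_slope(rng.standard_normal(10000)))
```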
Maximum likelihood estimation (MLE) stands out as the predominant method for parameter estimation within the realm of Statistics. However, its application within the class of Hermite processes is confined to fBm. This limitation arises from the absence of distribution function expressions for other processes. MLE estimation within this context typically occurs in the spectral domain, utilizing the spectral density of fBm in the following manner.
Let $X^H := \{X^H(0), X^H(1), \ldots, X^H(n)\}$ be a vector of the fractional Gaussian noise (increments of fBm) with covariance matrix $\hat{Q}^H = \big[\hat{Q}^H_{kl}\big]_{k,l = 1, \ldots, n}$, where
$$\hat{Q}^H_{kl} = E\big[X^H(k) X^H(l)\big] = \frac{1}{2}\left(k^{2H} + l^{2H} - |k - l|^{2H}\right), \quad \text{for all } k, l \in [0,t].$$
Therefore, the log-likelihood has the following expression:
$$l(H) = \ln f(x; H) = -\frac{1}{2} N \ln(2\pi) - \frac{1}{2} \ln \det \hat{Q}^H - \frac{1}{2} X^H \big(\hat{Q}^H\big)^{-1} \big(X^H\big)^T,$$
where $f(\cdot\,;\cdot)$ is the normal probability density function (pdf), the matrix $\hat{Q}^H$ is non-singular, and $(X^H)^T$ is the transpose of the vector $X^H$. To determine the MLE of H (i.e., $\hat{H}$) we need to maximize the log-likelihood with respect to H.
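A hedged sketch of this maximization follows: build the covariance matrix from the expression above on an integer grid, evaluate the Gaussian log-likelihood, and maximize over $H \in (1/2, 1)$ with a bounded scalar optimizer. The matrix construction is $O(n^2)$ in memory and each solve is $O(n^3)$, so this is only practical for moderate n; the function names are ours.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fbm_covariance(n, H):
    """Covariance matrix Q_kl = 0.5*(k^{2H} + l^{2H} - |k - l|^{2H}) on the grid k, l = 1..n."""
    k = np.arange(1, n + 1, dtype=float)
    K, L = np.meshgrid(k, k, indexing="ij")
    return 0.5 * (K ** (2 * H) + L ** (2 * H) - np.abs(K - L) ** (2 * H))

def neg_log_likelihood(H, x):
    """Negative Gaussian log-likelihood of the observed vector x for a given H."""
    n = len(x)
    Q = fbm_covariance(n, H)
    sign, logdet = np.linalg.slogdet(Q)
    if sign <= 0:
        return np.inf                      # covariance not positive definite: reject this H
    quad = x @ np.linalg.solve(Q, x)
    return 0.5 * (n * np.log(2.0 * np.pi) + logdet + quad)

def mle_hurst_full_likelihood(x):
    """Maximize l(H) over H in (1/2, 1) by bounded scalar minimization of -l(H)."""
    x = np.asarray(x, dtype=float)
    res = minimize_scalar(neg_log_likelihood, bounds=(0.501, 0.999), args=(x,), method="bounded")
    return res.x

# Usage: pass an observed (standardized) sample path, e.g. H_hat = mle_hurst_full_likelihood(path).
```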

5. Fractional Brownian Motion

For a fixed t, an n-dimensional fractional Brownian motion on $(\Omega, \mathcal{F}, P)$ with Hurst parameter $H \in (1/2, 1)$ has covariance function
$$Q^H(v, s) = E\big[B^H(v) B^H(s)\big] = \frac{1}{2}\left(v^{2H} + s^{2H} - |v - s|^{2H}\right), \quad \text{for all } v, s \in [0,t].$$
It is important to note that fractional Brownian motion becomes the usual Brownian motion when H = 1/2, a case we do not consider here. Since $B^H(v) - B^H(s) \sim N(0, |v - s|^{2H})$, fractional Brownian motion has stationary increments (Hayashi and Nakagawa [28]).
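The covariance function above is all that is needed to simulate fBm sample paths exactly on a finite grid: build the covariance matrix, take a Cholesky factor, and multiply it by standard normal deviates. This is a minimal $O(n^3)$ reference sketch (circulant-embedding methods are faster for long paths); the function name and the jitter value are ours.

```python
import numpy as np

def simulate_fbm(n, H, T=1.0, seed=None):
    """Sample a fractional Brownian motion path at n equally spaced points on (0, T].

    Uses the exact covariance Q(v, s) = 0.5*(v^{2H} + s^{2H} - |v - s|^{2H}) and a
    Cholesky factor; simple but O(n^3), so only practical for moderate n.
    """
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n, T, n)
    V, S = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * (V ** (2 * H) + S ** (2 * H) - np.abs(V - S) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))    # small jitter for numerical stability
    return t, L @ rng.standard_normal(n)

t, path = simulate_fbm(1000, H=0.8, seed=3)
increments = np.diff(np.concatenate(([0.0], path)))     # fractional Gaussian noise
print(path[-1], increments.std())
```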
Lemma 1. 
(Kolmogorov–Chentsov continuity criterion [29]). Let the random field $X = \{X(s)\}_{s=0}^t$ have index set $J$, a dense set of points s in an open domain $D \subset \mathbb{R}^n$. If for $H \in (1/2, 1)$ and $\varepsilon > 0$ there exist positive constants κ, ω, and $C^*$ such that
$$E\left|B^H(v) - B^H(s)\right|^{\kappa} \leq C^* |v - s|^{n + \omega}, \quad \text{for all } v, s \in [0,t],$$
then the random field can be extended to a random field $\{\hat{X}(s)\}_{s \in \tilde{D}}$ indexed by the closure $\tilde{D}$ of D in such a way that, with probability one, the mapping $s \mapsto \hat{X}(s)$ is continuous, and such that for each $s \in J$ we have $X(s) = \hat{X}(s)$ almost surely. Furthermore, if there exists another positive constant λ such that $\lambda < \omega/\kappa$, then for each compact set $\Theta \subset \tilde{D}$,
$$\sup_{s \neq \nu \in \Theta} \frac{\left|B^H(v) - B^H(s)\right|}{|v - s|^{\lambda}} < C^*,$$
almost surely, for some $C^* \in (0, \infty)$.
For simplicity, to prove Lemma 1 we consider the special case where $D = (0,1)^n$ is the unit cube. Let $\mathcal{S}$ be the set of dyadic rationals in D. We first state and prove the following Lemma 2; then, by using Lemma 2, we prove Lemma 1. The general statement of Lemma 1 can be proved by a similar argument with minor changes.
Lemma 2. 
Let $x(s)$ be a real-valued realization of the random variable $X(s)$ for $s \in \mathcal{S}$. Consider the set of points $L_u$ with coordinates $l/2^u$, $l \in (0, 2^u)$, so that $\mathcal{S} := \bigcup_{u=1}^{\infty} L_u$. Suppose that for some $\lambda \in [0, \infty)$ and finite $C^* > 0$ the following condition holds:
$$|x(\nu) - x(s)| \leq C^* |\nu - s|^{\lambda},$$
for all neighboring pairs ν, s in the same dyadic level $L_u$. Then the real-valued function $x(s)$ extends to a continuous function on $[0,1]^n$, and the extension is Hölder of exponent λ, such that
$$|x(\nu) - x(s)| \leq C |\nu - s|^{\lambda},$$
for some positive finite C.
Proof. 
To prove this lemma, we use the fact that every $s \in \tilde{D}$ can be attained as the limit of a sequence of points $\nu_u(s) \in L_u$. For every $s \in \tilde{D}$ there is a point of $L_u$ within distance $C_n 2^{-u}$ of s, where $C_n = \sqrt{n}/2$. We choose such a point and label it $\nu_u(s)$. Hence, $\nu_u(s) \to s$ as $u \to \infty$. Furthermore, the distance between the consecutive points $\nu_u(s)$ and $\nu_{u+1}(s)$ is less than $2 C_n 2^{-u}$. This yields a short chain of neighboring pairs in level $L_{u+1}$ connecting $\nu_u(s)$ and $\nu_{u+1}(s)$. By Equation (5),
$$\left|x\big(\nu_u(s)\big) - x\big(\nu_{u+1}(s)\big)\right| \leq 2n C^* 2^{-\lambda u}.$$
As λ is positive, the above bound is summable in u. Thus, for every $s \in \tilde{D}$, the sequence $\big(x(\nu_u(s))\big)_{u \geq 1}$ is Cauchy. Since the sequence is convergent, define $x(s) := \lim_{u \to \infty} x(\nu_u(s))$. It is important to note that if $s \in \mathcal{S}$, then the limit coincides with the original value $x(s)$. Moreover, since the chain of differences from $x(\nu_u(s))$ is dominated by the terms of a geometric series with ratio $2^{-\lambda}$,
$$\left|x(s) - x\big(\nu_u(s)\big)\right| \leq C 2^{-\lambda u},$$
where C is a positive finite constant.
We need to prove that $x(s)$ is indeed continuous and satisfies the Hölder condition (6). Take two distinct points $\nu, s \in \tilde{D}$ such that for some u the following condition holds:
$$2^{-(1+u)} \leq |\nu - s| < 2^{-u}.$$
The distance between $\nu_u(\nu)$ and $\nu_u(s)$ is less than $6 C_n 2^{-u}$. This yields a short chain of neighboring pairs in level $L_u$ connecting $\nu_u(\nu)$ and $\nu_u(s)$. Hence there is a positive finite constant C such that
$$\left|x\big(\nu_u(\nu)\big) - x\big(\nu_u(s)\big)\right| \leq C 2^{-\lambda u}.$$
As $|x(s) - x(\nu_u(s))| \leq C 2^{-\lambda u}$ for all $s \in \tilde{D}$ by construction, inequality (6) follows from the triangle inequality. □
Proof of Lemma 1. 
Since D is an open unit cube, the cardinality of $L_u$ grows like $2^{nu}$. Two points $\nu, s \in L_u$ are neighbors if $\nu - s$ is a unit coordinate vector times $2^{-u}$. Since ν and s are neighbors in $L_u$, applying the Markov inequality to Condition (4) yields
$$P\left(\left|B^H(v) - B^H(s)\right| \geq |\nu - s|^{\lambda}\right) \leq \frac{E\left|B^H(v) - B^H(s)\right|^{\kappa}}{|\nu - s|^{\kappa \lambda}} \leq C^* |\nu - s|^{n + \omega - \kappa \lambda} = C^* 2^{-u(n + \omega - \kappa \lambda)}.$$
Using the fact that the cardinality of $L_u$ grows like a constant multiple of $2^{nu}$,
$$P(M_u) \leq C_a^* 2^{-u(\omega - \kappa \lambda)},$$
where
$$M_u := \left\{\left|B^H(v) - B^H(s)\right| \geq |\nu - s|^{\lambda} \ \text{for some neighboring } \nu, s \in L_u\right\},$$
and $C_a^*$ is a positive finite constant. Since ω is positive, it is possible to choose λ so that $\omega > \kappa \lambda$. Therefore, the probabilities in Equation (7) are summable in u, and the Borel–Cantelli lemma guarantees that, with probability one, only finitely many of the events $M_u$ occur. Finally, with probability one there exists a random number $\zeta < \infty$ such that $|B^H(v) - B^H(s)| \leq \zeta |\nu - s|^{\lambda}$ for all neighboring $\nu, s \in L_u$ at the same dyadic level. The final result follows from Lemma 2: the process is λ-Hölder continuous. □
Lemma 3. 
The fractional Brownian motion with Hurst index H has a version whose sample paths are λ-Hölder continuous if λ < H. However, if λ ≥ H, the fractional Brownian motion is almost surely not λ-Hölder continuous on any time interval.
Proof. 
We first prove the sufficiency of the condition λ < H. Choose $\kappa \in \mathbb{N}$. Self-similarity and stationarity of the increments imply, for $\nu, s \in [0,1]$,
$$E\left|B^H(v) - B^H(s)\right|^{\kappa} = E\left||\nu - s|^{H} B^H(1)\right|^{\kappa} = C_{\kappa}^* |\nu - s|^{\kappa H},$$
where $C_{\kappa}^*$ is the κth absolute moment of a standard normal variable. The Kolmogorov–Chentsov criterion then implies the claim.
Now we prove the necessity of the condition λ < H. Due to the stationarity of the increments, it suffices to examine the point s = 0. Following [30], the fractional Brownian motion obeys the following law of the iterated logarithm:
$$P\left(\limsup_{s \downarrow 0} \frac{B^H(s)}{s^{H} \sqrt{\ln(\ln(1/s))}} = 1\right) = 1.$$
Therefore, $B^H(s)$ fails to be λ-Hölder continuous for $\lambda \geq H$ at any point $s \geq 0$. □
From Lemma 3 we conclude that λ has to be less than the Hurst index H. Therefore, by Lemma 1 the bound on the absolute distance between two increments of fractional Brownian motion becomes
$$\left|B^H(\nu) - B^H(s)\right| \leq C^* |\nu - s|^{H - \varepsilon}, \quad \varepsilon > 0,$$
where $\lambda = H - \varepsilon$. Therefore, the process described in Equation (8) is $(H - \varepsilon)$-Hölder continuous.
Remark 2. 
Lemma 1 and Equation (8) imply that, if the Hurst parameter takes larger values, the sample paths of the fractional Brownian motion have better regularity. For instance, see Figure 1 in [28].
Proof. 
Stationarity of the increments of fractional Brownian motion implies that the increments $B^H(s + \nu) - B^H(\nu)$ have the same distribution as $B^H(s)$ for every $s, \nu \in [0,t]$. On the other hand, the regularity of the sample paths of a stochastic process can be quantified using Hölder continuity. Lemma 3 implies that, for λ < H, the fractional Brownian motion $B^H(s)$ is $(H - \varepsilon)$-Hölder continuous almost surely. Thus, the larger the Hurst parameter H, the larger the exponent λ that can be chosen, which leads to better regularity. The variance of the increments is
$$E\left[B^H(\nu) - B^H(s)\right]^2 = |\nu - s|^{2H}.$$
Equation (9) implies that the expected magnitude of the increments decreases as H increases. By the Kolmogorov continuity theorem, if there exist positive finite constants $C^*$ and κ such that
$$E\left|B^H(\nu) - B^H(s)\right|^{\kappa} \leq C^* |\nu - s|^{1 + \kappa \lambda},$$
then $B^H(s)$ is λ-Hölder continuous. Choose κ > 0 sufficiently large. Combining Equations (9) and (10) implies
$$E\left|B^H(\nu) - B^H(s)\right|^{\kappa} = E\left|B^H(\nu - s)\right|^{\kappa} \leq C_H^* |\nu - s|^{\kappa H}.$$
From inequalities (10) and (11), since $\kappa H > 1 + \kappa \lambda$ for λ < H and κ sufficiently large, the Kolmogorov continuity theorem is satisfied. Hence, if H takes larger values, the sample paths of fractional Brownian motion have better regularity. □
For $H \in (1/2, 1)$, the fractional Brownian motion has the representation
$$B^H(v) = \int_0^v K_H(v, s)\, dB(s),$$
where $B(s)$ is some Brownian motion on the probability space $(\Omega, \mathcal{F}, P)$. The kernel function $K_H$ is defined as
$$K_H(v, s) := C_H^* s^{1/2 - H} \int_s^v (m - s)^{H - 3/2} m^{H - 1/2}\, dm, \quad v > s,$$
with constant
$$C_H^* = \left[\frac{H(2H - 1)}{B(2 - 2H,\, H - 1/2)}\right]^{1/2},$$
where $B(a, b) = \Gamma(a)\Gamma(b)/\Gamma(a + b)$ is the Euler Beta function with the corresponding Euler Gamma function
$$\Gamma(a) = \int_0^{\infty} x^{a - 1} \exp(-x)\, dx$$
(Buckdahn and Jing [31]). For the kernel $K_H(\nu, s)$, define $\mathcal{E}$ as the set of Borel measurable functions f such that $f(s) = \int_0^s K_H(\nu, s) \tilde{f}(\nu)\, d\nu$ for some $\tilde{f} \in L^2([0,t])$. The function $\Phi : [0,t]^2 \to \mathbb{R}$ given by
$$\Phi(\nu, s) = H(2H - 1) |\nu - s|^{2H - 2}$$
is symmetric and positive semidefinite in the general sense on the set of step functions $\mathcal{S}$ in $[0,t]$. For $f \in L^2([0,t])$ we define $|f|_{\Phi}^2 := \int_0^t \int_0^t f(s) f(\nu) \Phi(\nu, s)\, ds\, d\nu < \infty$ (Duncan and Hu [10]). In Equation (12), Φ represents the derivative of the covariance function of fBm with respect to time. This derivative is sometimes referred to as the incremental variance or incremental covariance and is crucial in understanding the behavior of fBm. Moreover, this expression provides an in-depth understanding of the local behavior of the covariance structure of fractional Brownian motion. It is valuable for analyzing the characteristics of fBm and is applicable in fields like signal processing, finance, and other areas where fBm models phenomena with self-similarity and long-range dependence. Equation (12) has been used to estimate the Hurst parameter H.

6. Estimators for H

In the cases presented here, it is assumed that the data are equally spaced, can be scaled to unit spacing, and have a homogeneous variance of one unit, which can easily be obtained by simply standardizing the data. If the data $X_0, X_1, X_2, \ldots, X_n$ are observed, take the first differences of the data:
$$d_t = X_t - X_{t-1}.$$
This produces a series with zero mean and variance $Var[d_t] = H(2H - 1)$. As the data are generated by a fractional Brownian motion process, the data have the normal likelihood
$$L(d_1, \ldots, d_n \mid H) = \big(H(2H - 1)\big)^{-n/2} \exp\left(-\frac{\sum_{i=1}^n d_i^2}{2 H(2H - 1)}\right).$$
Then, using Equation (13), the following maximum likelihood estimator (MLE) can be derived:
$$\hat{H} = \frac{n + \sqrt{8 n \sum_{i=1}^n d_i^2 + n^2}}{4n}.$$
Furthermore, the variance can be approximated by the inverse of the Fisher information matrix evaluated at the MLE. This results in:
$$Var[\hat{H}] \approx \frac{\left(n + \sqrt{8 n \sum_{i=1}^n d_i^2 + n^2}\right)^3 \left(\sqrt{8 n \sum_{i=1}^n d_i^2 + n^2} - n\right)^3}{256\, n^5 \left(8 \sum_{i=1}^n d_i^2 + n\right) \sum_{i=1}^n d_i^2}.$$
A quick examination of (15) shows that $Var[\hat{H}]$ is inconsistent: as $n \to \infty$, $Var[\hat{H}] \to C$, which is problematic since it is typically desired that the variance converge asymptotically to 0 with respect to n. The issue of inconsistency will not be covered in this work.
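A minimal implementation of Equation (14) follows, with the variance approximation of (15) included in an algebraically simplified form, $2\big(\sum d_i^2\big)^2 / \big(n^2(8\sum d_i^2 + n)\big)$, obtained from the inverse observed Fisher information of the likelihood in (13); treat that simplification as our own derivation rather than a formula quoted verbatim from the paper.

```python
import numpy as np

def hurst_mle(x):
    """Closed-form MLE of H from a standardized series x (Equation (14))."""
    d = np.diff(np.asarray(x, dtype=float))    # first differences d_t = X_t - X_{t-1}
    n = len(d)
    S = np.sum(d ** 2)
    H_hat = (n + np.sqrt(8.0 * n * S + n ** 2)) / (4.0 * n)
    # Inverse observed Fisher information at the MLE (simplified form of Equation (15)).
    var_hat = 2.0 * S ** 2 / (n ** 2 * (8.0 * S + n))
    return H_hat, var_hat

# Illustrative usage on a standardized path X_0, ..., X_n:
# H_hat, var_hat = hurst_mle(z_scores)
```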
It is very difficult to theoretically assess the bias associated with the MLE due to the square root; however, numerical techniques can be used instead. A simulation study was conducted to determine the bias of the estimator for n = 1000 across values of H. At each H value, 10,000 simulated datasets were created and the MLE, $\hat{H}$, was calculated. The mean $\hat{H}$ from the 10,000 samples was calculated for each value of H and plotted against the true value of H. This is shown in Figure 2, panel (a). Notice there is also a reference line where $\hat{H} = H$ in red. From the graph, it is apparent that the estimator tends to be biased on the high side, as the mean line is above the red line. To further study the bias, $\hat{H} - H$ is plotted against $\hat{H}$ in Figure 2, panel (b), using the 10,000 already-obtained simulations. Notice that the shape of this curve shows that the bias is worst for values around 0.61 and best at values around 0.5 and 1.0. The general shape of the curve also allows for the creation of a possible polynomial-based correction.
To study the nature of the bias as sample size increases, a simulation study was performed generating 10,000 datasets for each value of H between 0.51 and 0.99 in 0.01 increments and estimating $\hat{H}$ for sample sizes of n = 100, 250, 500, 1000, 5000 and 10,000. Figure 3 shows the results of this study. Notice that the curve does not change much as the sample size increases. This implies that any correction method does not need to depend on the sample size.
Using the data from the simulation study, and the fact that the bias correction function must be 0 at 0.5 and 0 at 1, the following functional form is proposed for the bias correction:
$$f(\hat{H}) = a (1 - \hat{H})^{\alpha} (\hat{H} - 0.5)^{\beta}, \quad 0.5 < \hat{H} < 1,$$
where α > 0 and β > 0. Equation (16) is fit to the simulation data using nonlinear least squares to obtain the parameters a, α and β, giving the following empirical bias correction function:
$$B(\hat{H}) = 0.877 (1 - \hat{H})^{1.149} (\hat{H} - 0.5)^{0.354}, \quad 0.5 < \hat{H} < 1.$$
Combining Equation (17) with (14) produces the following approximately unbiased estimator for H:
$$\tilde{H} = \hat{H} + B(\hat{H}) = \frac{n + \sqrt{8 n \sum_{i=1}^n d_i^2 + n^2}}{4n} + 0.877 \left(1 - \frac{n + \sqrt{8 n \sum_{i=1}^n d_i^2 + n^2}}{4n}\right)^{1.149} \left(\frac{n + \sqrt{8 n \sum_{i=1}^n d_i^2 + n^2}}{4n} - 0.5\right)^{0.354}.$$
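Applying the fitted correction of Equation (17) is a one-liner; the sketch below reproduces the 2007 S&P500 entry of Table 1 from its reported $\hat{H}$.

```python
def bias_correction(H_hat):
    """Empirical bias correction B(H_hat) of Equation (17), valid for 0.5 < H_hat < 1."""
    return 0.877 * (1.0 - H_hat) ** 1.149 * (H_hat - 0.5) ** 0.354

def hurst_corrected(H_hat):
    """Approximately bias-corrected estimator H_tilde of Equation (18)."""
    return H_hat + bias_correction(H_hat)

# e.g. the 2007 S&P500 value from Table 1: 0.561 -> roughly 0.688
print(hurst_corrected(0.561))
```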
To examine the performance of the estimator given in (18), a simulation study was performed across true values of H. For each H, 10,000 datasets were generated, Equation (18) was calculated for each, and the resulting mean of the simulations was recorded. Figure 4 shows the results of this simulation study: the mean $\hat{H}$ (black) versus the true H, the bias-corrected estimator $\tilde{H}$ (blue), and the reference line (red, dashed). Notice that the approximately bias-corrected estimate $\tilde{H}$ is very close to the reference line, indicating that most of the bias has been accounted for. While a small amount of bias does remain, the corrected estimator $\tilde{H}$ is much better than the MLE alone.
Using the variance of the simulated means for $\tilde{H}$, one can create a standard error for $\tilde{H}$ by taking the square root of the variances. This is shown in Figure 3, panel (b). Using a least squares polynomial approximation to the simulation data, one arrives at:
$$s.e.\{\tilde{H}\} \approx -21.031 \tilde{H}^6 + 95.795 \tilde{H}^5 - 180.302 \tilde{H}^4 + 179.575 \tilde{H}^3 - 99.877 \tilde{H}^2 + 29.443 \tilde{H} - 3.594.$$
To be clear, this is an approximation to simulation results, and hence there is both simulation and estimation error associated with this standard error. However, there is no clear analytical way to address this problem; using the results from a dense simulation across the domain should provide a reasonable approximation. Here, due to the nonlinear nature of the simulation results, a higher-order polynomial is used.
Using the corrected mean value and the estimated standard error, one can create a 95% confidence interval using:
$$\tilde{H} \pm 1.96\, s.e.\{\tilde{H}\}.$$
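The interval construction can be scripted directly from Equations (19) and (20). Note that the polynomial coefficient signs are as reconstructed above, and the polynomial is itself only a fit to simulation output, so the resulting standard errors are approximate.

```python
import numpy as np

# Polynomial coefficients of Equation (19), highest degree first.
SE_COEFFS = [-21.031, 95.795, -180.302, 179.575, -99.877, 29.443, -3.594]

def hurst_se(H_tilde):
    """Simulation-based standard error approximation for H_tilde (Equation (19))."""
    return np.polyval(SE_COEFFS, H_tilde)

def hurst_ci(H_tilde, z=1.96):
    """95% confidence interval H_tilde +/- 1.96 * s.e.{H_tilde} (Equation (20))."""
    se = hurst_se(H_tilde)
    return H_tilde - z * se, H_tilde + z * se

print(hurst_ci(0.688))    # roughly (0.68, 0.70), cf. Table 1
```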
Figure 5 shows the coverage probabilities for 95% confidence intervals using Equation (20) across the true values of H for sample sizes n = 100, 250, 500, 1000, 5000 and 10,000. Each is evaluated on 10,000 simulated datasets at each level of H. A grey dashed line is provided as a reference at 95%. One particular item to notice is that, no matter the sample size, confidence intervals when the true value of H is near 0.5 have very poor coverage. As H grows away from 0.5, the coverage probabilities increase dramatically for all sample sizes. Once H is larger than 0.6, the coverage probabilities for sample sizes 100, 250 and 500 fall below the 95% benchmark, with sample size 100 falling as low as 0.4. This suggests that the variance model is quite sensitive to low sample sizes. This result also shows that once the sample size is 1000 or greater, the 95% coverage probabilities are maintained. Hence, if one wishes to estimate H, a sample size of 1000 or more is a good guideline.

7. Example: S&P500 and VIX

To illustrate the method, the same financial datasets used in Beskos et al. [32] are evaluated here. The S&P500, which is a broad measure of the economy, and the Volatility Index (VIX) are each examined for the year prior to the Bear Stearns closure and the year after the Lehman Brothers closure. These are interesting because they reflect how the 2008 financial crisis unfolded, as both Bear Stearns and Lehman Brothers were trading houses that were casualties of the crisis. Specifically, prior to the Bear Stearns closure, the daily close data for 5 March 2007 to 5 March 2008 (labeled 2007 data) were used, and for the year after the Lehman Brothers closure, the daily close data from 15 September 2008 to 15 September 2009 (labeled 2008) were utilized. To employ the method presented here, all time series were standardized by creating a Z score for each observation.
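Putting the pieces together, a hedged end-to-end sketch for one such series is given below: Z-score the daily closes, difference, compute $\hat{H}$ from Equation (14), apply the correction (17), and form the interval (20). The data-loading step is hypothetical (any array of daily closes will do), and the numerical constants are those quoted earlier.

```python
import numpy as np

SE_COEFFS = [-21.031, 95.795, -180.302, 179.575, -99.877, 29.443, -3.594]

def estimate_hurst_from_closes(closes):
    """Return (H_hat, H_tilde, 95% CI) for a 1-D array of daily closing prices."""
    closes = np.asarray(closes, dtype=float)
    z = (closes - closes.mean()) / closes.std(ddof=1)     # standardize to Z scores
    d = np.diff(z)                                        # first differences
    n = len(d)
    S = np.sum(d ** 2)
    H_hat = (n + np.sqrt(8.0 * n * S + n ** 2)) / (4.0 * n)
    # Empirical correction of Equation (17); only meaningful for 0.5 < H_hat < 1.
    H_tilde = H_hat + 0.877 * (1.0 - H_hat) ** 1.149 * (H_hat - 0.5) ** 0.354
    se = np.polyval(SE_COEFFS, H_tilde)
    return H_hat, H_tilde, (H_tilde - 1.96 * se, H_tilde + 1.96 * se)

# Hypothetical usage with a CSV of daily closes (e.g. loaded via pandas):
# closes = pd.read_csv("sp500_2007.csv")["Close"].to_numpy()
# print(estimate_hurst_from_closes(closes))
```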
Table 1 shows the MLE, $\hat{H}$, the approximate corrected MLE, $\tilde{H}$, and the 95% confidence intervals for the Hurst parameter H for each of the datasets. One particular item to notice is that the estimates of H do not seem to change much between the before and after periods for either the S&P500 or the VIX. Another interesting note is that $\tilde{H}$ is considerably higher than $\hat{H}$. When comparing with the Bayesian approach of Beskos et al. [32], the estimated H is considerably different. In their work, they consider $H \in (0, 1)$ and estimate H to be 0.3 for the S&P500 and 0.29 for the VIX for 2007, and 0.36 for the S&P500 and 0.38 for the VIX for 2008. The formulation here allows only for $H \in (1/2, 1)$ and hence the comparison is not direct.

8. Discussion

This work shows how to estimate the Hurst parameter H in a fractional stochastic differential equation using both the MLE, $\hat{H}$, and an approximately bias-corrected MLE, $\tilde{H}$. Simulations demonstrate that the MLE alone is a biased estimator. The approximately bias-corrected MLE, $\tilde{H}$, is constructed using a model based on simulation results and performs much better in terms of bias than the MLE. Furthermore, a simulation approach is used to obtain an approximate standard error for $\tilde{H}$, allowing for confidence intervals to be constructed. The coverage performance of these confidence intervals is studied via simulation, which shows that as n increases the coverage probability increases, except when the true value of H is near 0.5, in which case the coverage probabilities are very low.
The definition of $\hat{Q}^H_{kl}$ indicates that when H = 0.5, $\hat{Q}^H_{kl}$ equals $\min(k, l)$, effectively rendering the fBm akin to standard Brownian motion. A careful examination of Table 1 yields several insights. Notably, when scrutinizing the $\hat{H}$ values for the S&P500 and VIX, they cluster around 0.5, suggesting an absence of long-term memory in these indices. Conversely, post-bias correction, each $\tilde{H}$ value exhibits long-memory characteristics, especially notable in the VIX data from 2007. This finding implies that during the 2007–2008 period, stock indices tended towards weak-form efficiency, as indicated by the lack of long-memory effect under $\hat{H}$. However, upon utilizing $\tilde{H}$, the opposite trend emerges, hinting at increased long-term memory and persistence during the 2008 financial crisis, post-bias correction. This stark contrast could significantly erode investor confidence, potentially prompting major institutional investors to divest from stock markets, leading to diminished liquidity and market efficiency. Notably, investors can only observe $\hat{H}$, neglecting the informative insights provided by $\tilde{H}$.
Balancing accuracy and efficiency in parameter estimation poses a perennial challenge, and determining the appropriate trade-off between the two remains a conundrum. While MLE is widely used for its accuracy in estimating the Hurst exponent, its computational demands have historically been considerable, rendering it unsuitable for rapid-response systems such as those in the stock market. Chang [33] suggests that the combination of the Levinson algorithm and Cholesky decomposition offers a pathway to enhance computational efficiency, thus mitigating the aforementioned dilemma. However, that work focused solely on efficiency tests. In this paper, our primary objective is to identify an unbiased estimator for the Hurst index, which we accomplish through the application of a least squares polynomial approximation method.

Author Contributions

All authors contributed equally to Conceptualization, Methodology, Validation, Writing, Review and Editing. E.L.B. wrote all computer code and ran the simulations. R.A.G. was responsible for Project Management Supervision. All authors have read and agreed to the published version of the manuscript.

Funding

The research received support through Qatar Foundation and Virginia Commonwealth University in Qatar by funding the activities of the Mathematical Data Science Lab.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data and computer codes can be requested from the corresponding author.

Acknowledgments

Ryad Ghanam, Paramahansa Pramanik and Edward Boone would like to thank Qatar Foundation and Virginia Commonwealth University in Qatar for their support through the Mathematical Data Science Lab.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Calvo, G.A. Staggered prices in a utility-maximizing framework. J. Monet. Econ. 1983, 12, 383–398. [Google Scholar] [CrossRef]
  2. Alvarez, F.; Lippi, F.; Souganidis, P. Price setting with strategic complementarities as a mean field game. Econometrica 2023, 91, 2005–2039. [Google Scholar] [CrossRef]
  3. Gairing, J.; Imkeller, P.; Shevchenko, R.; Tudor, C. Hurst index estimation in stochastic differential equations driven by fractional Brownian motion. J. Theor. Probab. 2020, 33, 1691–1714. [Google Scholar] [CrossRef]
  4. Beran, J. Statistics for Long-Memory Processes; Routledge: London, UK, 2017. [Google Scholar]
  5. Embrechts, P. Selfsimilar Processes; Princeton University Press: Princeton, NJ, USA, 2009. [Google Scholar]
  6. Tudor, C. Analysis of Variations for Self-Similar Processes: A Stochastic Calculus Approach; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  7. Rogers, L.C.G. Arbitrage with fractional Brownian motion. Math. Financ. 1997, 7, 95–105. [Google Scholar] [CrossRef]
  8. Rostek, S.; Schöbel, R. A note on the use of fractional Brownian motion for financial modeling. Econ. Model. 2013, 30, 30–35. [Google Scholar] [CrossRef]
  9. Shiryaev, A.N. On arbitrage and replication for fractal models. In Workshop on Mathematical Finance; Shiryaev, A., Sulem, A., Eds.; INRIA: Paris, France, 1998. [Google Scholar]
  10. Duncan, T.E.; Hu, Y.; Pasik-Duncan, B. Stochastic calculus for fractional Brownian motion I. Theory. SIAM J. Control. Optim. 2000, 38, 582–612. [Google Scholar] [CrossRef]
  11. Delbaen, F.; Schachermayer, W. A general version of the fundamental theorem of asset pricing. Math. Ann. 1994, 300, 463–520. [Google Scholar] [CrossRef]
  12. Cheridito, P. Regularizing Fractional Brownian Motion with a View towards Stock Price Modelling. Ph.D. Thesis, ETH Zurich, Zurich, Switzerland, 2001. [Google Scholar]
  13. Klüppelberg, C.; Kühn, C. Fractional Brownian motion as a weak limit of Poisson shot noise processes—With applications to finance. Stoch. Process. Their Appl. 2004, 113, 333–351. [Google Scholar] [CrossRef]
  14. Bayraktar, E.; Horst, U.; Sircar, R. A limit theorem for financial markets with inert investors. Math. Oper. Res. 2006, 31, 789–810. [Google Scholar] [CrossRef]
  15. Cheridito, P. Mixed fractional Brownian motion. Bernoulli 2001, 7, 913–934. [Google Scholar] [CrossRef]
  16. Li, Q.; Zhou, Y.; Zhao, X.; Ge, X. Fractional order stochastic differential equation with application in European option pricing. Discret. Dyn. Nat. Soc. 2014, 2014, 621895. [Google Scholar] [CrossRef]
  17. Chronopoulou, A.; Viens, F.G. Hurst index estimation for self-similar processes with long-memory. In Recent Development in Stochastic Dynamics and Stochastic Analysis; World Scientific: Singapore, 2010; pp. 91–117. [Google Scholar]
  18. Hurst, H.E. Long-term storage capacity of reservoirs. Trans. Am. Soc. Civ. Eng. 1951, 116, 770–799. [Google Scholar] [CrossRef]
  19. Kimball, M.S. The quantitative analytics of the basic neomonetarist model. J. Money Credit. Bank. 1995, 27, 1241–1289. [Google Scholar] [CrossRef]
  20. Gladyshev, E. A new limit theorem for stochastic processes with Gaussian increments. Theory Probab. Its Appl. 1961, 6, 52–61. [Google Scholar] [CrossRef]
  21. Istas, J.; Lang, G. Quadratic variations and estimation of the local Hölder index of a Gaussian process. In Proceedings of the Annales de l’Institut Henri Poincare (B) Probability and Statistics; Elsevier: Amsterdam, The Netherlands, 1997; Volume 33, pp. 407–436. Available online: https://www.sciencedirect.com/science/article/abs/pii/S0246020397800994 (accessed on 7 May 2024).
  22. Benassi, A.; Cohen, S.; Istas, J.; Jaffard, S. Identification of filtered white noises. Stoch. Process. Their Appl. 1998, 75, 31–49. [Google Scholar] [CrossRef]
  23. Melichov, D. On Estimation of the Hurst Index of Solutions of Stochastic Differential Equations. Ph.D. Thesis, Vilnius Gediminas Technical University, Vilnius, Lithuania, 2011. [Google Scholar]
  24. Coeurjolly, J.F. Simulation and identification of the fractional Brownian motion: A bibliographical and comparative study. J. Stat. Softw. 2000, 5, 1–53. [Google Scholar] [CrossRef]
  25. Begyn, A. Quadratic variations along irregular subdivisions for Gaussian processes. Electron. J. Probab. 2005, 10, 691–717. [Google Scholar] [CrossRef]
  26. Fhima, M.; Guillin, A.; Bertrand, P.R. Fast change point analysis on the Hurst index of piecewise fractional Brownian motion. arXiv 2011, arXiv:1103.4029. [Google Scholar]
  27. Bardet, J.M.; Surgailis, D. Measuring the roughness of random paths by increment ratios. Bernoulli 2011, 17, 749–780. [Google Scholar] [CrossRef]
  28. Hayashi, K.; Nakagawa, K. Fractional SDE-Net: Generation of time series data with long-term memory. In Proceedings of the 2022 IEEE 9th International Conference on Data Science and Advanced Analytics (DSAA), Shenzhen, China, 13–16 October 2022; pp. 1–10. [Google Scholar]
  29. Chentsov, N.N. Weak convergence of stochastic processes whose trajectories have no discontinuities of the second kind and the “heuristic” approach to the Kolmogorov-Smirnov tests. Theory Probab. Its Appl. 1956, 1, 140–144. [Google Scholar] [CrossRef]
  30. Arcones, M.A. On the law of the iterated logarithm for Gaussian processes. J. Theor. Probab. 1995, 8, 877–903. [Google Scholar] [CrossRef]
  31. Buckdahn, R.; Jing, S. Mean-field SDE driven by a fractional Brownian motion and related stochastic control problem. SIAM J. Control. Optim. 2017, 55, 1500–1533. [Google Scholar] [CrossRef]
  32. Beskos, A.; Dureau, J.; Kalogeropoulos, K. Bayesian inference for partially observed stochastic differential equations driven by fractional Brownian motion. Biometrika 2015, 102, 809–827. [Google Scholar] [CrossRef]
  33. Chang, Y.C. Efficiently implementing the maximum likelihood estimator for Hurst exponent. Math. Probl. Eng. 2014, 2014, 490568. [Google Scholar] [CrossRef]
Figure 1. Panel (a) shows the impact of the fraction of derivative parameter H on the same set of 1000 deviates for H = 3/4, 7/8, 15/16, 31/32, and 1 with σ = 1. Panel (b) shows the impact of the variance σ² on the same set of deviates for σ = 1, 0.8, 0.6, 0.4, and 0.2 with H = 15/16.
Figure 2. Panel (a) shows true H versus mean $\hat{H}$ with a reference line (red) for a sample size of 1000. Panel (b) shows the bias $\hat{H} - H$ versus $\hat{H}$ for a sample size of 1000. All $\hat{H}$ values are based on 10,000 simulated samples.
Figure 3. Panel (a) shows mean $\hat{H}$ versus true H and panel (b) shows the simulated standard error of $\hat{H}$, for n = 100, 250, 500, 1000, 5000 and 10,000. The mean $\hat{H}$ is based on 10,000 simulated samples.
Figure 4. The mean $\hat{H}$ (black) versus true H and the bias-corrected estimator $\tilde{H}$ (blue), with reference line (red, dashed). The mean $\hat{H}$ and $\tilde{H}$ are based on 1000 simulated samples.
Figure 5. Coverage probabilities for 95% confidence intervals across values of H for sample sizes n = 100, 250, 500, 1000, 5000 and 10,000, with a reference line at 95% (grey, dashed). Coverage probabilities are based on 10,000 simulations.
Table 1. Estimated Hurst parameters for the S&P500 and VIX for the time frames 5 March 2007 to 5 March 2008 and 15 September 2008 to 15 September 2009.
Symbol   Year   $\hat{H}$   $\tilde{H}$   95% CI
S&P500   2007   0.561       0.688         (0.678, 0.698)
S&P500   2008   0.557       0.681         (0.671, 0.691)
VIX      2007   0.583       0.716         (0.705, 0.726)
VIX      2008   0.569       0.699         (0.689, 0.709)
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
