Article

Jump Variation Estimation with Noisy High Frequency Financial Data via Wavelets

Department of Statistics, University of Wisconsin-Madison, Madison, WI 53706, USA
* Author to whom correspondence should be addressed.
Econometrics 2016, 4(3), 34; https://doi.org/10.3390/econometrics4030034
Submission received: 29 February 2016 / Revised: 1 June 2016 / Accepted: 27 July 2016 / Published: 16 August 2016
(This article belongs to the Special Issue Financial High-Frequency Data)

Abstract

This paper develops a method to improve the estimation of jump variation using high-frequency data in the presence of market microstructure noise. Accurate estimation of jump variation is in high demand, as it is an important component of volatility in finance for portfolio allocation, derivative pricing and risk management. The method is a two-step procedure of detection and estimation. In Step 1, we detect the jump locations by applying a wavelet transformation to the observed noisy price processes. Since wavelet coefficients are significantly larger at jump locations than elsewhere, we compare the wavelet coefficients with a threshold and declare jump points where the absolute wavelet coefficients exceed it. In Step 2, we estimate the jump variation by averaging the observed processes on each side of a declared jump point and then taking the difference between the two averages. Specifically, for each jump location detected in Step 1, we compute two averages from the observed noisy processes, one before the detected jump location and one after it, and take their difference to estimate the jump variation. Theoretically, we show that the two-step procedure based on average realized volatility processes can achieve a convergence rate close to $O_P(n^{-4/9})$, which is better than the convergence rate $O_P(n^{-1/4})$ for the procedure based on the original noisy process, where $n$ is the sample size. Numerically, the method based on average realized volatility processes indeed performs better than that based on the price processes. Empirically, we study the distribution of jump variation using Dow Jones Industrial Average stocks and compare the results obtained from the original price process and the average realized volatility processes.

1. Introduction

1.1. Motivation

Volatility analysis plays an important role in finance. For example, portfolio allocation, derivative pricing and risk management all require accurate estimation of volatility. With the advance of technology, financial instruments are traded at frequencies as high as milliseconds, which produces a large number of trading records each day and enables us to estimate volatility better. Such data are often referred to as high-frequency financial data, and a growing number of volatility studies are based on them. One popular approach is to use realized volatility, the sum of squared differences of successive log prices. See, for example, Barndorff-Nielsen and Shephard (2002) [1], Zhang et al. (2005) [2] and Zhang (2006) [3]. If the observed log prices follow a continuous semi-martingale, realized volatility is a consistent estimator of integrated volatility. However, high-frequency financial data are contaminated with market microstructure noise, and the contaminated price observations render realized volatility inconsistent. Under the assumption that high-frequency data follow a semi-martingale price process observed with additive microstructure noise, several methods have been proposed to estimate integrated volatility consistently. See, for example, Bandi and Russell (2008) [4], Barndorff-Nielsen et al. (2008) [5], Jacod et al. (2009) [6], Jacod et al. (2010) [7], Xiu (2010) [8] and Zhang et al. (2005) [2].
Jumps are often observed in financial markets, driven by events around the globe or the release of financial reports. See Aït-Sahalia (2002) [9] and Barndorff-Nielsen and Shephard (2004) [10]; in fact, jump-diffusion models date back to Merton (1976) [11]. Such jumps violate the continuity assumption and thus make the established estimators inconsistent. To estimate and predict volatility accurately, it is important to have methods that estimate jumps and separate them from the continuous part. Many existing methods focus on testing for jumps under the strong assumption that the exact price process is observed (see Aït-Sahalia 2004 [12], Aït-Sahalia and Jacod 2009 [13], Andersen et al. 2012 [14], Barndorff-Nielsen and Shephard 2006 [15], Carr and Wu 2003 [16], Eraker et al. 2003 [17], Eraker 2004 [18], Fan and Fan 2011 [19], Huang and Tauchen 2005 [20], Jiang and Oomen 2008 [21], Lee and Hannig 2010 [22], Lee and Mykland 2008 [23], Bollerslev et al. 2009 [24] and Todorov 2009 [25]). Recently, a few tests have started to take microstructure noise into account; for example, Aït-Sahalia et al. (2012) [26] use pre-averaging to handle the microstructure noise and employ a test for jumps based on power variation. Other active lines of jump research include, but are not limited to, estimating the degree of jump activity (see Aït-Sahalia and Jacod 2009 [27], Jing et al. 2011 [28], Jing et al. 2012 [29] and Todorov and Tauchen 2011 [30]), modeling returns with pure jump processes (see Jing et al. 2012 [31] and Kong et al. 2015 [32]), modeling jumps in volatility processes (see Todorov and Tauchen 2011 [33]) and forecasting volatility in the presence of jumps (see Andersen et al. 2007 [34], Andersen et al. 2011 [35] and Andersen et al. 2011 [36]).
The method proposed in this paper focuses on wavelet-based detection and estimation of jump variation from high-frequency financial data with market microstructure noise. Wavelet analysis is a tool for decomposing signals and functions into time-frequency content; it unfolds a signal or function over the time-frequency plane and provides information on "when" a given "frequency" occurs (Wang 2006 [37]). It enables us to analyze non-stationary time series and to detect local changes. With these features, wavelet methods have been developed to detect sudden localized changes in Wang (1995) [38] and to study jump variation from high-frequency financial data with microstructure noise in Fan and Wang (2007) [39]. In this paper, we take advantage of these properties of wavelets and adopt a new approach that improves the wavelet estimation methods.
The paper is organized as follows. In the remainder of Section 1, we introduce the basic model and wavelet techniques. In Section 2, we analyze the statistical properties of the original log price process, the realized volatility process and the average realized volatility process. The simulation study is in Section 3, and the empirical study of jump variation is in Section 4. Section 5 provides some concluding remarks. The proofs of the main theorems are in Section 6; more detailed proofs of the lemmas are in Appendix A, and tables and figures for further illustration are in Appendix B.

1.2. Integrated Volatility, Realized Volatility and Jump Variation

Let $S_t$ be the price process of a financial asset. We assume that the log price $X_t = \log S_t$ is a semi-martingale satisfying the stochastic differential equation
$dX_t = \mu_t\,dt + \sigma_t\,dB_t + L_t\,d\Lambda_t, \quad t\in[0,1],$  (1)
where $\mu_t$ and $\sigma_t$ are both bounded and continuous, $B_t$ is a standard Brownian motion, $\Lambda_t$ is a counting process and the total number of jumps in the time period $[0,1]$ is finite. The three terms on the right-hand side of Equation (1) correspond to the drift, diffusion and jump parts of $X$, respectively.
Integrated volatility, an important measure of the variation of the return process over the time period $[0,1]$, is defined as
$\int_0^1 \sigma_t^2\,dt.$  (2)
In financial practice, the estimation of the integrated volatility is of great interest.
Suppose $\{X(t_i), t_i\in[0,1], i=1,\dots,n\}$ are $n$ observations of the log price process over the time period $[0,1]$. Define the realized volatility of $X$ as
$[X,X]_1 \equiv \sum_{t_i}\big(X(t_{i+1}) - X(t_i)\big)^2.$  (3)
Stochastic analysis shows that, as $\max_i(t_i - t_{i-1}) \to 0$,
$[X,X]_1 \xrightarrow{\;p\;} \int_0^1 \sigma_t^2\,dt + \sum_{0\le t\le 1} L_t^2,$  (4)
where the first term on the right-hand side is the integrated volatility and the second term is called the jump variation, which is the main focus of this paper. We denote it by $\Psi$,
$\Psi \equiv \sum_{0\le t\le 1} L_t^2.$  (5)
Equation (4) says that, as the number of observations $n$ increases, the realized volatility $[X,X]_1$ converges in probability to the integrated volatility plus the jump variation $\Psi$. Thus, to better model and predict volatility, we need methods to estimate the jump variation and separate it from the integrated volatility.
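To make Equations (3) and (4) concrete, the following minimal Python sketch (our own illustration, not code from the paper; the function name, seed and parameter values are assumptions) simulates a log price with constant volatility and one jump and compares its realized volatility with $\sigma^2 + L^2$.

```python
import numpy as np

def realized_volatility(x):
    """Realized volatility [X, X]_1: the sum of squared increments of the log price."""
    return np.sum(np.diff(x) ** 2)

# Toy path on [0, 1] with constant volatility sigma and a single jump of size L at t = 0.5.
rng = np.random.default_rng(0)
n, sigma, L = 10_000, 0.2, 0.1
t = np.arange(1, n + 1) / n
x = np.cumsum(sigma * rng.normal(scale=np.sqrt(1.0 / n), size=n))  # diffusion part
x += L * (t >= 0.5)                                                # add the jump
# By Equation (4), [X, X]_1 should be close to sigma^2 * 1 + L^2.
print(realized_volatility(x), sigma ** 2 + L ** 2)
```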

1.3. Market Microstructure Noises

Microstructure noise plays a big role in the analysis of high-frequency financial data. It is often assumed that the observed data $Y(t_i)$ are a noisy version of $X(t_i)$ at the time points $t_i$,
$Y(t_i) = X(t_i) + \epsilon(t_i), \quad i=1,\dots,n, \quad t_i = i/n,$  (6)
where the $\epsilon$'s are the microstructure noises. We assume that the $\epsilon$'s are i.i.d. random variables with mean zero and variance $\eta^2$, and that $\epsilon$ and $X$ are independent.
With noisy observations $Y(t_i)$, estimation becomes even more challenging. Define the realized volatility based on $Y$ as
$[Y,Y]_1 \equiv \sum_{t_i}\big(Y(t_{i+1}) - Y(t_i)\big)^2.$  (7)
It can be shown (Zhang et al., 2005 [2]) that:
$[Y,Y]_1 \xrightarrow{\;\mathcal{L}\;} \int_0^1 \sigma_t^2\,dt + \sum_{0\le t\le 1} L_t^2 + 2n\eta^2 + \Big[4nE\epsilon^4 + \frac{2}{n}\int_0^1\sigma_t^4\,dt\Big]^{1/2} Z_{\mathrm{total}},$  (8)
where $\xrightarrow{\;\mathcal{L}\;}$ denotes convergence in law and $Z_{\mathrm{total}}$ is a standard normal random variable. The result in (8) implies that the realized volatility $[Y,Y]_1$ is an inconsistent estimator of the integrated volatility: its bias $2n\eta^2$ grows with the sample size.
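The noise-induced bias in (8) is easy to see numerically. The following self-contained toy illustration (our own; the variable names, seed and parameter values are assumptions) contaminates a simulated path with i.i.d. noise and shows the realized volatility inflating by roughly $2n\eta^2$:

```python
import numpy as np

# Realized volatility of a noise-contaminated path, Equation (7). The noise alone
# contributes roughly 2 * n * eta^2, which dominates as n grows (cf. (8)).
rng = np.random.default_rng(1)
n, sigma, eta = 10_000, 0.2, 0.01
x = np.cumsum(sigma * rng.normal(scale=np.sqrt(1.0 / n), size=n))  # latent log price
y = x + rng.normal(scale=eta, size=n)                              # observed noisy log price
rv_clean = np.sum(np.diff(x) ** 2)
rv_noisy = np.sum(np.diff(y) ** 2)
print(rv_clean, rv_noisy, 2 * n * eta ** 2)
```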

1.4. Wavelet Basics

Let $\varphi$ and $\psi$ be the specially-constructed father wavelet and mother wavelet, respectively. Then, the wavelet basis is $\{\varphi(t), \psi_{j,k}(t) = 2^{j/2}\psi(2^jt-k), j=0,1,2,\dots, k=0,\dots,2^j-1\}$. We can expand a function $f(t)$ over the wavelet basis as follows. Denote by $f_{j,k}$ the wavelet coefficient of $f(t)$ associated with location $k2^{-j}$ and frequency $2^j$,
$f_{j,k} = \int f(t)\,\psi_{j,k}(t)\,dt.$  (9)
Then, we have
$f(t) = f_{-1,0}\,\varphi(t) + \sum_{j=0}^{\infty}\sum_{k=0}^{2^j-1} f_{j,k}\,\psi_{j,k}(t),$  (10)
where $f_{-1,0} = \int f(t)\,\varphi(t)\,dt$.
There are two cases regarding the order of the wavelet coefficients $f_{j,k}$ (Daubechies, 1992 [40]):
Case 1: If $f$ is a Hölder continuous function with exponent $\alpha$, $0<\alpha\le1$, i.e., $|f(x)-f(y)|\le C|x-y|^{\alpha}$ for any $x, y$, then $f_{j,k}$ satisfies
$|f_{j,k}| \le C_1 2^{-j(\alpha+1/2)}.$  (11)
That is, for a Hölder continuous function $f$, the wavelet coefficients decrease at the order of $2^{-j(\alpha+1/2)}$ as the frequency level $2^j$ increases. Details are in Lemmas A1 and A2 in Appendix A.
Case 2: If $f$ is a Hölder continuous function with exponent $\alpha$, $0<\alpha\le1$, except for a jump of size $L$ at $s$, then, for sufficiently large $j$ with $2^js - k \in G_d$, there exists a constant $C_L$ depending on $L$ such that
$|f_{j,k}| \ge C_L 2^{-j/2},$  (12)
where $G_d \equiv \{x : |\int_{(-\infty,x)}\psi(y)\,dy| \ge d\}$ is an interval around zero for some positive constant $d$. That is, near this jump point, the wavelet coefficients decrease no faster than the order of $2^{-j/2}$ as the frequency level $2^j$ increases. Details are in Lemma A3 in Appendix A. Examples are shown later in Section 5 for the Daubechies wavelets used in this paper.
Note that with $n$ discrete observations, $j$ takes values $0,1,2,\dots,J-1$, where $2^J = n$.
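As a concrete illustration of how a jump shows up in fine-level coefficients, here is a brief Python sketch using the PyWavelets package (our own example; the paper's D20 wavelet corresponds to 'db10' in PyWavelets, and the path, seed and jump size below are assumptions):

```python
import numpy as np
import pywt

# Discrete wavelet transform of a Brownian-type path containing one jump. At a fine
# level, the detail coefficients near the jump stand out from the continuous part.
rng = np.random.default_rng(2)
J = 15
n = 2 ** J                                       # 32,768 observations
path = np.cumsum(rng.normal(scale=n ** -0.5, size=n))
path[n // 2:] += 0.3                             # jump of size 0.3 halfway through
coeffs = pywt.wavedec(path, 'db10', level=5)     # detail levels W5, ..., W1 (Percival-Walden notation)
w1 = coeffs[-1]                                  # finest level W1
print(np.argmax(np.abs(w1)), len(w1))            # the largest coefficient sits near the jump
```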

2. Statistical Analysis

2.1. Choosing a Frequency Level to Differentiate Jumps

In this section, we study several different processes and the orders of their wavelet coefficients for both the continuous part and the jump part. For each process, we determine a frequency level at which the wavelet coefficients near jumps are of significantly larger order than those of the continuous part.

2.1.1. Starting from X and [X, X]

We first focus on the processes without noise. We consider $X(t_i)$ and $[X,X](t_i)$ at the time points $\{t_i = i/n, i=1,\dots,n\}$.
From Equation (1), we have
$X(t_i) = X_0 + \int_0^{t_i}\mu_s\,ds + \int_0^{t_i}\sigma_s\,dB_s + \sum_{0<s\le t_i}L_s.$  (13)
For simplicity, we assume that $X(t)$ in Equation (1) is defined on $[-1,1]$. Define $[X,X](t_i)$ as:
$[X,X](t_i) \equiv \sum_{t_i-1\le t_r<t_i}\big(X(t_{r+1}) - X(t_r)\big)^2.$  (14)
For $[X,X](t_i)$, by Lemma A7 and Corollary 1 in Appendix A, we have:
$[X,X](t_i) \xrightarrow{\;\mathcal{L}\;} \int_{t_i-1}^{t_i}\sigma_s^2\,ds + \sqrt{\frac{2}{n}}\int_{t_i-1}^{t_i}\sigma_s^2\,dB_s^{\mathrm{discrete}} + \sum_{t_i-1<s\le t_i}L_s^2,$  (15)
where $B^{\mathrm{discrete}}$ is a standard Brownian motion whose associated diffusion term in the above equation is due to discretization, and $\xrightarrow{\;\mathcal{L}\;}$ is convergence in law.
We let $X_{j,k}$ and $[X,X]_{j,k}$ be the wavelet coefficients of $X$ and $[X,X]$ associated with location $k2^{-j}$ and frequency $2^j$, respectively.
For $X_{j,k}$, from Equation (13), the drift term is Hölder continuous with exponent $\alpha = 1$. The diffusion term is Hölder continuous with exponent $\alpha$ arbitrarily close to $1/2$, so the order of the continuous component of $X_{j,k}$ is dominated by the diffusion term. The order of the jump component is no less than $2^{-j/2}$. Therefore, if we pick a frequency level $2^{j_n} \asymp n$, the order of the jump component is significantly larger than the other terms.
For $[X,X]_{j,k}$, from Equation (15), the drift term is Hölder continuous with exponent $\alpha = 1$. The diffusion term is Hölder continuous with exponent $\alpha$ arbitrarily close to $1/2$ and carries an extra factor of $n^{-1/2}$. Thus, again, if we pick a frequency level $2^{j_n} \asymp n$, the order of the jump component is significantly larger than the others.

2.1.2. Moving on to Y and [Y, Y]

We now consider the noisy observations $Y(t_i)$ from Equation (6):
$Y(t_i) = X_0 + \int_0^{t_i}\mu_s\,ds + \int_0^{t_i}\sigma_s\,dB_s + \sum_{0<s\le t_i}L_s + \epsilon(t_i).$  (16)
We let $Y_{j,k}$ be the wavelet coefficients of $Y$, so $Y_{j,k} = X_{j,k} + \epsilon_{j,k}$. The $\epsilon_{j,k}$'s are i.i.d. with mean zero and variance $n^{-1}\eta^2$, so the order of the noise component is $O_P(n^{-1/2})$. If we pick a frequency level $2^{j_n} \asymp n/\log^2 n$, the order of the jump component is significantly larger than the others (Fan and Wang, 2007 [39]).
Next, let us consider $[Y,Y]$. Similar to the way we rewrote $[X,X](t_i)$ in Equation (15), we rewrite Equation (8) as follows:
$[Y,Y](t_i) \xrightarrow{\;\mathcal{L}\;} \int_{t_i-1}^{t_i}\sigma_s^2\,ds + \sqrt{\frac{2}{n}}\int_{t_i-1}^{t_i}\sigma_s^2\,dB_s^{\mathrm{discrete}} + 2n\eta^2 + \sqrt{4nE\epsilon^4}\int_{t_i-1}^{t_i}dB_s^{\mathrm{noise}} + \sum_{t_i-1\le s\le t_i}L_s^2,$  (17)
where the $B^{\mathrm{discrete}}$ term is a diffusion term due to discretization, the $B^{\mathrm{noise}}$ term is a diffusion term due to noise, $B^{\mathrm{discrete}}$ and $B^{\mathrm{noise}}$ are independent Brownian motions and $2n\eta^2$ is a constant (the mean of the noise term).
We let $[Y,Y]_{j,k}$ be the wavelet coefficients of $[Y,Y]$. The order of the continuous component, dominated by the diffusion term due to noise, is no greater than $n^{1/2}2^{-j(\alpha+1/2)}$, where $\alpha$ is arbitrarily close to $1/2$. The order of the jump component is no less than $2^{-j/2}$. However, in this situation, since the order of the diffusion term due to noise is too large, there is no frequency level $2^{j_n}$ at which we can differentiate the jumps.

2.1.3. Subsampling and Averaging on [Y, Y]

We now consider subsampling and averaging $[Y,Y]$ to reduce the order of the diffusion term due to noise. Subsampling: create $M$ grids (sub-samples) on the time line so that the distance between two consecutive observations within each grid is $M/n$ and the average size of each grid is $\bar n = n/M$. Denote by $[Y,Y]^{(m)}$, $m=1,\dots,M$, the realized volatility of $Y$ computed on grid $m$.
Averaging: $[Y,Y]^{(\mathrm{avg})}$ denotes the average of the realized volatilities of $Y$ over all $M$ grids:
$[Y,Y]^{(\mathrm{avg})} = \frac{1}{M}\sum_{m=1}^{M}[Y,Y]^{(m)}.$  (18)
Rewriting the results in Zhang et al. (2005) [2], we have
$[Y,Y]^{(\mathrm{avg})}(t_i) \xrightarrow{\;\mathcal{L}\;} \int_{t_i-1}^{t_i}\sigma_s^2\,ds + \sqrt{\frac{4}{3\bar n}}\int_{t_i-1}^{t_i}\sigma_s^2\,dB_s^{\mathrm{discrete}} + 2\bar n\eta^2 + \sqrt{\frac{4\bar n}{M}E\epsilon^4}\int_{t_i-1}^{t_i}dB_s^{\mathrm{noise}} + \sum_{t_i-1\le s\le t_i}L_s^2.$  (19)
Let $[Y,Y]^{(\mathrm{avg})}_{j,k}$ be the wavelet coefficients of $[Y,Y]^{(\mathrm{avg})}$. We choose $M \asymp n^{\gamma}$, $0<\gamma\le2/3$. Then, the order of the continuous component is no greater than $n^{(1-2\gamma)/2}2^{-j(\alpha+1/2)}$, where $\alpha$ is arbitrarily close to $1/2$. Therefore, if we pick a frequency level $2^{j_n} \asymp n$, the order of the jump component is larger than the others.
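A minimal sketch of the subsample-and-average construction in Equation (18) (our own code; the function name, seed and the choice $M \approx n^{2/3}$ below are assumptions):

```python
import numpy as np

def rv_avg(y, M):
    """Average of the M subsampled realized volatilities, Equation (18).

    Grid m (m = 0, ..., M-1) uses observations y[m], y[m+M], y[m+2M], ..., so
    consecutive points within a grid are M/n apart in time.
    """
    return np.mean([np.sum(np.diff(y[m::M]) ** 2) for m in range(M)])

# With M ~ n^(2/3), the noise contribution 2 * nbar * eta^2 shrinks relative to [Y, Y]_1.
rng = np.random.default_rng(3)
n, eta = 2 ** 15, 0.02
y = np.cumsum(rng.normal(scale=n ** -0.5, size=n)) + rng.normal(scale=eta, size=n)
M = int(round(n ** (2.0 / 3.0)))
print(np.sum(np.diff(y) ** 2), rv_avg(y, M))  # [Y, Y]_1 versus [Y, Y]^(avg)
```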

2.2. Threshold Selection and Jump Location Estimation

For each process discussed in the previous section, we are able to choose a frequency level $2^{j_n}$ that differentiates jumps, and we now develop a threshold on the wavelet coefficients at that frequency level to detect the jumps. To accomplish this, we first standardize the wavelet coefficients at the chosen frequency level $j_n$ by dividing them by their median. Given a process, if the continuous component is dominated by the diffusion term at the chosen frequency level $j_n$, then by Lemma A5 the standardized wavelet coefficients near a jump point are at least of the order $2^{j_n\alpha}$, where $\alpha$ is arbitrarily close to $1/2$. As the volatility process $\sigma_t^2$ is bounded, using Lemma A6 we can show that, with probability tending to one, the maximum of the standardized wavelet coefficients for the continuous part is bounded by $c\sqrt{2\log 2^{j_n}}/0.6745$, where $c \ge 1$ is some constant that may depend on $\sigma_t^2$, and $0.6745$ is the median absolute deviation of the standard normal distribution.
For $[Y,Y]^{(\mathrm{avg})}$ in (19), if we take $\gamma < 2/3$, the continuous part is dominated by the $B^{\mathrm{noise}}$ term, and by Lemma A6 we have that, with probability tending to one, the maximum of the standardized wavelet coefficients is bounded by $\sqrt{2\log 2^{j_n}}/0.6745$. Therefore, we may use the threshold
$T_{j_n} = \frac{\sqrt{2\log 2^{j_n}}}{0.6745}.$  (20)
If any standardized wavelet coefficients exceed the threshold $T_{j_n}$, their associated locations are declared estimated jump locations. Let $\{\tau_1,\dots,\tau_q\}$ be the $q$ true jump locations of $X_t$, $t\in[0,1]$, and denote the estimated locations by $\{\hat\tau_1,\dots,\hat\tau_{\hat q}\}$. It is shown in Wang (1995) [38] and Raimondo (1998) [41] that at the chosen frequency level $2^{j_n}$ we have $P(\hat q = q) \to 1$ as $n\to\infty$ and:
$\sum_{l=1}^{q}|\hat\tau_l - \tau_l| = O_P(2^{-j_n}).$  (21)
Note that for the processes $X$, $[X,X]$, $Y$ and $[Y,Y]^{(\mathrm{avg})}$, to differentiate jumps, we have chosen frequency levels $2^{j_n} \asymp n$, $n$, $n/\log^2 n$ and $n$, respectively. The orders of the wavelet coefficients of the different components and the convergence rates of the wavelet jump location estimators are summarized in Table B1 and Table B2 in Appendix B.
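The detection step can be written compactly; the following sketch (our own, with hypothetical function and argument names) standardizes the coefficients at one level by their median and flags those exceeding $T_{j_n}$:

```python
import numpy as np
import pywt

def detect_jumps(process, wavelet='db10', level_index=-1):
    """Flag locations whose standardized wavelet coefficients exceed T_{j_n} in (20)."""
    coeffs = pywt.wavedec(np.asarray(process, dtype=float), wavelet)
    w = coeffs[level_index]                              # detail coefficients at the chosen level
    standardized = np.abs(w) / np.median(np.abs(w))      # standardize by the median
    threshold = np.sqrt(2.0 * np.log(len(w))) / 0.6745   # T_{j_n} with 2^{j_n} ~ len(w)
    flagged = np.nonzero(standardized > threshold)[0]
    # Map coefficient indices back to approximate sample locations in the original series.
    return flagged * (len(process) // len(w))
```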

2.3. Estimation of Jump Variation

Finally, our goal is to estimate the jump variation $\Psi$. For each estimated jump location $\hat\tau_l$, we choose a small neighborhood of width $\delta_n$ and compute the average of the process over $[\hat\tau_l-\delta_n, \hat\tau_l)$ and over $[\hat\tau_l, \hat\tau_l+\delta_n]$. We estimate the jump size by taking the difference of the two averages. The jump variation is estimated by the sum of squares of all jump size estimators (for the realized volatility processes, the difference of the two averages directly estimates the squared jump size, and the jump variation is estimated by summing these differences).
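A brief sketch of this two-sided averaging (our own illustration; the function and argument names are hypothetical). The squared-difference form below is the one used for the price-level processes X and Y; for the realized volatility processes, the unsquared differences would be summed instead, as in Theorems 2 and 3:

```python
import numpy as np

def jump_variation(process, jump_locations, width):
    """Two-sided averaging estimator of the jump variation (Section 2.3)."""
    psi_hat = 0.0
    for loc in jump_locations:
        left = process[max(loc - width, 0):loc]    # neighborhood just before the jump
        right = process[loc:loc + width]           # neighborhood just after the jump
        if len(left) and len(right):
            psi_hat += (np.mean(right) - np.mean(left)) ** 2
    return psi_hat
```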

2.3.1. Without Microstructure Noise Assumption

To estimate the jump variation without noise, we have the following two theorems based on $X$ and $[X,X]$. Since $[X,X]$ is smoother, the associated method achieves a convergence rate of $O_P(n^{-1/2})$, which is faster than the rate $O_P(n^{-1/3})$ for the method based on $X$.
Theorem 1. 
Assume we observe $X(t_i)$, $i=1,\dots,n$, $t_i=i/n$. Under model (1), let $\bar X_{\hat\tau_l-}$ be the average of $X$ over $[\hat\tau_l-\delta_n, \hat\tau_l)$ and $\bar X_{\hat\tau_l+}$ the average over $[\hat\tau_l, \hat\tau_l+\delta_n]$. For each estimated jump location $\hat\tau_l$, let $\hat L_l = \bar X_{\hat\tau_l+} - \bar X_{\hat\tau_l-}$. Define
$\hat\Psi_X \equiv \sum_{l=1}^{\hat q}\hat L_l^2.$  (22)
Choose $\delta_n \asymp n^{-2/3}$; we have
$\hat\Psi_X - \Psi = O_P(n^{-1/3}).$  (23)
Theorem 2. 
Assume we observe $X(t_i)$, $i=1,\dots,n$, $t_i=i/n$. Under model (1), let $\overline{[X,X]}_{\hat\tau+}$ be the average of $[X,X]$ over $[\hat\tau, \hat\tau+\delta_n]$ and $\overline{[X,X]}_{\hat\tau-}$ the average of $[X,X]$ over $(\hat\tau-\delta_n, \hat\tau]$. For each estimated jump location $\hat\tau_l$, let $\widehat{L_l^2} = \overline{[X,X]}_{\hat\tau_l+} - \overline{[X,X]}_{\hat\tau_l-}$. Define
$\hat\Psi_{[X,X]} \equiv \sum_{l=1}^{\hat q}\widehat{L_l^2}.$  (24)
Choose $\delta_n \asymp n^{-1/2}$; we have
$\hat\Psi_{[X,X]} - \Psi = O_P(n^{-1/2}).$  (25)

2.3.2. With the Microstructure Noise Assumption

For noisy observations, we are able to improve the convergence rate $O_P(n^{-1/4})$ of Fan and Wang (2007) [39] by using the smoother $[Y,Y]^{(\mathrm{avg})}$, which yields the higher convergence rate $O_P(n^{-4/9})$.
Theorem 3. 
Under models (1) and (6), let $\overline{[Y,Y]^{(\mathrm{avg})}}_{\hat\tau_l+}$ be the average of $[Y,Y]^{(\mathrm{avg})}$ over $[\hat\tau_l, \hat\tau_l+\delta_n]$ and $\overline{[Y,Y]^{(\mathrm{avg})}}_{\hat\tau_l-}$ the average of $[Y,Y]^{(\mathrm{avg})}$ over $(\hat\tau_l-\delta_n, \hat\tau_l]$. For each estimated jump location $\hat\tau_l$, let $\widetilde{L_l^2} = \overline{[Y,Y]^{(\mathrm{avg})}}_{\hat\tau_l+} - \overline{[Y,Y]^{(\mathrm{avg})}}_{\hat\tau_l-}$. Define
$\hat\Psi_{[Y,Y]^{(\mathrm{avg})}} \equiv \sum_{l=1}^{\hat q}\widetilde{L_l^2}.$  (26)
Let $M = n^{\gamma}$, $0<\gamma\le2/3$. If we choose $\delta_n \asymp n^{(2/3)\gamma-1}$, then we have
$\hat\Psi_{[Y,Y]^{(\mathrm{avg})}} - \Psi = O_P(n^{-(2/3)\gamma}).$  (27)
Moreover, the convergence rate gets arbitrarily close to $O_P(n^{-4/9})$ as $\gamma$ gets arbitrarily close to $2/3$ with the threshold in Equation (20). For $\gamma = 2/3$, if we choose the threshold to be $c\sqrt{2\log 2^{j_n}}/0.6745$ for some constant $c>1$, then the convergence rate is $O_P(n^{-4/9})$.
The proofs of Theorems 1–3 are in Section 6. The orders of the convergence rates of the jump variation estimators are summarized in Table B3 in Appendix B.

3. Simulations

We have conducted a simulation study with 32,768 = $2^{15}$ observations per day. The simulation model and procedure are as follows; a code sketch of this scheme is given after the list.
  • A sample path of $\sigma_t^2$ is generated from the geometric OU volatility model
    $d\log\sigma_t^2 = -2.5\,(\log\sigma_t^2 - \log 0.25)\,dt + 0.5\,dW_t.$
  • A sample path of $X_t$ is generated from
    $dX_t = \sigma_t\,dB_t$
    with $\mathrm{corr}(W_t, B_t) = -0.5$.
  • Jump locations are randomly selected, and three jumps with sizes drawn i.i.d. from $N(0, 0.3^2)$ are added to $X_t$.
  • The noises $\epsilon$ are i.i.d. $N(0,\eta^2)$ with $\eta$ at four levels: 0.01, 0.02, 0.03 and 0.04. A sample path of $Y_t$ is then obtained as $Y_t = X_t + \epsilon_t$.
  • Realized volatility processes are calculated using a moving window of 32,768 observations. We actually simulate 65,536 records, so that we have 32,768 complete observations of all processes when computing these realized volatility processes. There are eight such processes: $X$, $[X,X]$, $Y$, $[Y,Y]$, $[Y,Y]^{(\mathrm{avg})}_{M=8}$, $[Y,Y]^{(\mathrm{avg})}_{M=16}$, $[Y,Y]^{(\mathrm{avg})}_{M=32}$ and $[Y,Y]^{(\mathrm{avg})}_{M=64}$. Figure 1 displays their sample paths as an example of how these processes look under our scheme.
  • Discrete wavelet transformations are performed using the Daubechies wavelet D20 on these realized volatility processes. We illustrate the behavior of the wavelet coefficients at different frequency levels in Figure B1 and Figure B2 in Appendix B, using those of the $X$ process and the $Y$ process as examples. We use the notation of Percival and Walden (2000) [42]: W1 denotes the highest frequency level, W2 the second highest, and so on. At each frequency level from W1 to W5, if any standardized wavelet coefficient exceeds the threshold $T_{j_n}$ in (20), we declare the location associated with that coefficient an estimated jump location. We use $T_{j_n}$ because the results in Section 2 and the simulation study show that the method based on $[Y,Y]^{(\mathrm{avg})}$ outperforms the others.
  • For each estimated jump location, we estimate the jump size and jump variation as described in Section 2. For $X$ and $Y$, we use intervals of length 64. For $[X,X]$, $[Y,Y]$, $[Y,Y]^{(\mathrm{avg})}_{M=8}$, $[Y,Y]^{(\mathrm{avg})}_{M=16}$, $[Y,Y]^{(\mathrm{avg})}_{M=32}$ and $[Y,Y]^{(\mathrm{avg})}_{M=64}$, we use intervals of length 128.
  • The whole simulation procedure is repeated 1000 times.
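The following Python sketch mirrors the scheme above via a simple Euler discretization (our own illustration, not the authors' code; the seed, step size and variable names are assumptions, and the minus signs in the mean-reverting drift and in the leverage correlation are our reconstruction of the displayed model):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2 ** 15
dt = 1.0 / n
rho, eta = -0.5, 0.02

z1 = rng.normal(size=n)                                       # increments driving W
z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.normal(size=n)  # increments driving B, corr(W, B) = rho

# Geometric OU log-volatility, mean-reverting to log(0.25).
log_sig2 = np.empty(n)
log_sig2[0] = np.log(0.25)
for i in range(1, n):
    log_sig2[i] = (log_sig2[i - 1]
                   - 2.5 * (log_sig2[i - 1] - np.log(0.25)) * dt
                   + 0.5 * np.sqrt(dt) * z1[i])
sigma = np.exp(0.5 * log_sig2)

x = np.cumsum(sigma * np.sqrt(dt) * z2)                       # dX_t = sigma_t dB_t
for idx in rng.choice(n, size=3, replace=False):              # three N(0, 0.3^2) jumps
    x[idx:] += rng.normal(scale=0.3)
y = x + rng.normal(scale=eta, size=n)                         # noisy observations Y_t
```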
If we do not assume the existence of market microstructure noise in the model, we can improve the estimation of jump variation by using the $[X,X]$ process instead of the $X$ process, as shown in Theorems 1 and 2. The simulation results without added noise are given in Table 1 and Table 2. Table 1 gives the average number of detected jumps and the corresponding standard deviation. We are able to detect all three jumps most of the time using either $X$ or $[X,X]$ at any frequency level from W1 to W5. Table 2 reports the mean squared error (MSE) of the jump variation estimation and shows a smaller MSE for $[X,X]$ at every frequency level from W1 to W5.
If we assume the existence of market microstructure noise in the model, we can improve the estimation of jump variation by using the $[Y,Y]^{(\mathrm{avg})}$ processes instead of the $Y$ process, as shown in Theorem 3. The simulation results are summarized in Table 3 and Table 4, reported at frequency levels from W1 to W5 and noise levels $\eta = 0.01, 0.02, 0.03, 0.04$ for the processes $Y$, $[Y,Y]$, $[Y,Y]^{(\mathrm{avg})}_{M=8}$, $[Y,Y]^{(\mathrm{avg})}_{M=16}$, $[Y,Y]^{(\mathrm{avg})}_{M=32}$ and $[Y,Y]^{(\mathrm{avg})}_{M=64}$. Table 3 displays the average number of detected jumps and the corresponding standard deviation, and Table 4 the MSEs of the jump variation estimation.
The results show that falsely detecting a few spurious jumps does not affect the jump variation estimation much, since the estimated sizes of the spurious jumps are very small. However, missing some of the true jumps does have a noticeable effect on the jump variation estimation. Both $[X,X]$ and $X$ detect the jumps fairly well. As the noise level increases, the $Y$- and $[Y,Y]$-based methods start to miss one or two jumps; the method based on $[Y,Y]^{(\mathrm{avg})}$ can also miss jumps when $M$ is small and the noise is large, but it improves with a larger $M$. In terms of the MSE of the jump variation estimation, the methods based on $[X,X]$ and $[Y,Y]^{(\mathrm{avg})}$ perform better than those based on $X$ and $Y$, respectively. As the noise level increases, we need to increase $M$ for the $[Y,Y]^{(\mathrm{avg})}$ method in order to maintain its performance. $[Y,Y]$ does poorly compared to the other processes.

4. Empirical Study

4.1. Distribution of Jump Variation

We study the distribution of jump variation for the returns of each Dow Jones Industrial Average stock in January 2013. We collect tick-by-tick prices from 9:30 a.m. to 4 p.m. for each trading day from the TAQ database. There are 21 trading days and 30 stocks, which correspond to 630 stock-days. We compare the jump variation estimates for the 630 stock-days using $Y$, $[Y,Y]$ and $[Y,Y]^{(\mathrm{avg})}$ at different sampling frequencies: 1 tick, 2 ticks and 4 ticks. Here, we treat tick time as equidistant, as discussed in Andersen et al. (2012) [14].
To detect the jump locations, we use the Daubechies wavelet D4 at levels W4, W3 and W2 for sampling frequencies of 1 tick, 2 ticks and 4 ticks, respectively, with the threshold in Equation (20). These wavelet frequency levels are chosen to keep the jump detection consistent across sampling frequencies. To estimate the jump variation, at each estimated jump location we take the difference between the means over intervals of width four. For $[Y,Y]^{(\mathrm{avg})}$, we use $M = 2$. We transform the data by the Box–Cox power transformation with the power selected from the data, so that the Gaussian assumption underlying the threshold is approximately satisfied.
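A minimal sketch of that preprocessing step (our own; scipy's maximum-likelihood choice of the power is assumed to stand in for the data-driven selection, and the placeholder series below is not real data):

```python
import numpy as np
from scipy import stats

# Box-Cox transform of a strictly positive series (e.g., a realized volatility process),
# with the power lambda chosen from the data by maximum likelihood.
rv_series = np.abs(np.random.default_rng(5).normal(size=1000)) + 1e-6  # placeholder positive data
transformed, lmbda = stats.boxcox(rv_series)
print(lmbda)
```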
The histograms of the estimated jump variation for these 630 stock-days are shown in Figure 2. From left to right, the first column shows the results from using process $Y$, the second column from $[Y,Y]$ and the third column from $[Y,Y]^{(\mathrm{avg})}$. From top to bottom, the first row shows the results from sampling the data at every one tick, the second row at every two ticks and the third row at every four ticks.
Compared to $Y$, the processes $[Y,Y]$ and $[Y,Y]^{(\mathrm{avg})}$ are able to pick up much larger jump variation at the same sampling frequency. We find that around 25% of the jump variation estimates are above $1\times10^{-5}$ using $[Y,Y]$ and $[Y,Y]^{(\mathrm{avg})}$, while using $Y$ we only have around 1.5% (note that a price change from 100 to 99.99 corresponds to a squared log return of around $1\times10^{-8}$). On the other hand, for the same process, the distribution of jump variation looks very similar across the different sampling frequencies.

4.2. Evidence of Microstructure Noises

To remove the idiosyncratic effects of individual stocks, we plot the weighted average of the daily realized volatility across all stocks in Figure 3: before removing jump variation in (a), and after removing jump variation using the processes $Y$, $[Y,Y]$ and $[Y,Y]^{(\mathrm{avg})}$ in (b), (c) and (d), respectively. The solid lines are calculated from sampling every one tick, the dashed lines every two ticks and the dotted lines every four ticks. The weight of each stock is determined by the mean level of its realized volatility: the larger the mean level, the smaller the weight, so that each stock contributes equally to the weighted average.
We see from Figure 3 clear evidence of microstructure noise: as the sampling frequency increases, the realized volatility increases as well in each case, just as the theory predicts. Moreover, the effect of removing jumps is more pronounced using $[Y,Y]$ and $[Y,Y]^{(\mathrm{avg})}$ than using $Y$.

5. Discussion

We develop a nonparametric method for estimating jump variation from noisy high-frequency financial data. With an approach based on the new process $[Y,Y]^{(\mathrm{avg})}$ and wavelet techniques, we are able to detect jumps and estimate jump variation more efficiently when the variance $\eta^2$ of the microstructure noise is assumed constant. Numerically, we show that the proposed $[Y,Y]^{(\mathrm{avg})}$-based method indeed has smaller mean squared errors.
From the empirical results, we observe that the method based on $[Y,Y]$ outperforms that based on $Y$. This may suggest that $\eta^2$ is not constant: if we instead model $\eta^2$ as a decreasing function of $n$, for example $\eta_n^2 \asymp \log n/n$ and $E\epsilon^4 \asymp (\log n/n)^2$, then the $[Y,Y]$-based method can achieve a better convergence rate than that for $Y$.
For the Daubechies wavelets D4 and D20 used in this paper, we display their graphs in Figure 4, together with the graphs of the absolute values of their integrals. Using the Daubechies wavelet D4, we may pick $G_d = (-0.28, 0.14)$ for $d = 0.2$, while using the Daubechies wavelet D20, we may pick $G_d = (-0.21, 0.28)$ for $d = 0.1$.
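One way to see where such intervals come from is to integrate the wavelet numerically. The sketch below is our own (using PyWavelets, where D4 and D20 are 'db2' and 'db10'); the function name is hypothetical, and the reported interval is relative to the support returned by PyWavelets rather than the centered support used in Figure 4:

```python
import numpy as np
import pywt

def estimate_Gd(wavelet_name, d, level=12):
    """Numerically locate G_d = {x : |running integral of psi up to x| >= d}."""
    _, psi, x = pywt.Wavelet(wavelet_name).wavefun(level=level)
    cumulative = np.abs(np.cumsum(psi) * (x[1] - x[0]))  # |integral of psi from the left endpoint to x|
    inside = x[cumulative >= d]
    return (inside.min(), inside.max()) if inside.size else None

print(estimate_Gd('db2', 0.2))   # D4 with d = 0.2
print(estimate_Gd('db10', 0.1))  # D20 with d = 0.1
```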
We leave several open problems for future study, including the extension to jump size estimation, optimal frequency level selection in practice, irregularly spaced observations, and data-dependent threshold selection together with its sensitivity analysis.

6. Proof of Theorems

Theorem 1. 
(Estimation of jump size/variation using $X$) Let $\hat\tau$ be an estimated jump location. Denote by $\bar X_{\hat\tau+}$ the average of $X$ over $[\hat\tau, \hat\tau+\delta_n]$ and by $\bar X_{\hat\tau-}$ the average of $X$ over $(\hat\tau-\delta_n, \hat\tau]$. We estimate the jump size by
$\hat L = \bar X_{\hat\tau+} - \bar X_{\hat\tau-}.$
If we choose $\delta_n \asymp n^{-2/3}$, then
$\hat L - L = O_P(n^{-1/3})$
and
$\hat L^2 - L^2 = O_P(n^{-1/3}).$
Proof. 
Denote by $m_{\pm}$ the number of $t_i$ in $\hat I_+ = [\hat\tau, \hat\tau+\delta_n]$ and $\hat I_- = (\hat\tau-\delta_n, \hat\tau]$, respectively. Then $|m_+ - m_-| \le 2$ and $m_+ \asymp m_- \asymp n\delta_n$. We decompose $\hat L - L$ as
$\hat L - L = U_1 + U_2 + U_3,$
where $U_1$, $U_2$ and $U_3$ correspond to the drift, diffusion and jump parts.
$U_1 = \frac{1}{m_+}\sum_{t_i\in\hat I_+}\int_{\hat\tau}^{t_i}\mu_s\,ds + \frac{1}{m_-}\sum_{t_i\in\hat I_-}\int_{t_i}^{\hat\tau}\mu_s\,ds = \frac{1}{m_+}O_P\Big(\sum_{i=1}^{m_+}i/n\Big) + \frac{1}{m_-}O_P\Big(\sum_{i=1}^{m_-}i/n\Big) = O_P(\delta_n),$
$U_2 = \frac{1}{m_+}\sum_{t_i\in\hat I_+}\int_{\hat\tau}^{t_i}\sigma_s\,dW_s + \frac{1}{m_-}\sum_{t_i\in\hat I_-}\int_{t_i}^{\hat\tau}\sigma_s\,dW_s = \frac{1}{m_+}\sum_{t_i\in\hat I_+}\int_{\tau-2\delta_n}^{t_i}\sigma_s\,dW_s - \frac{1}{m_-}\sum_{t_i\in\hat I_-}\int_{\tau-2\delta_n}^{t_i}\sigma_s\,dW_s = \Big[\frac{1}{m_+}\sum_{t_i\in\hat I_+}\int_{\tau-2\delta_n}^{t_i}\sigma_s\,dW_s - \frac{1}{m_+}\sum_{t_i\in I_+}\int_{\tau-2\delta_n}^{t_i}\sigma_s\,dW_s\Big] - \Big[\frac{1}{m_-}\sum_{t_i\in\hat I_-}\int_{\tau-2\delta_n}^{t_i}\sigma_s\,dW_s - \frac{1}{m_-}\sum_{t_i\in I_-}\int_{\tau-2\delta_n}^{t_i}\sigma_s\,dW_s\Big] + \Big[\frac{1}{m_+}\sum_{t_i\in I_+}\int_{\tau-2\delta_n}^{t_i}\sigma_s\,dW_s - \frac{1}{m_-}\sum_{t_i\in I_-}\int_{\tau-2\delta_n}^{t_i}\sigma_s\,dW_s\Big] \equiv A_1 - A_2 + A_3,$
where $I_+ = [\tau, \tau+\delta_n]$ and $I_- = [\tau-\delta_n, \tau)$.
Because $\hat\tau - \tau = O_P(n^{-1})$, the total number of non-zero terms in $A_1$ and $A_2$ is $O_P(1)$. Furthermore, by the maximal martingale inequality,
$P\Big(\max\Big\{\Big|\int_{\tau-2\delta_n}^{t_i}\sigma_s\,dW_s\Big|, t_i\in\hat I_+\cup I_+\Big\} > \sqrt{\delta_n\log n}\Big) \le P\Big(\max_{\tau-\delta_n\le t_i\le\tau+2\delta_n}\Big|\int_{\tau-2\delta_n}^{t_i}\sigma_s\,dW_s\Big| > \sqrt{\delta_n\log n}\Big) + P(|\tau-\hat\tau| > \delta_n) = E\Big\{P_\tau\Big(\max_{\tau-\delta_n\le t_i\le\tau+2\delta_n}\Big|\int_{\tau-2\delta_n}^{t_i}\sigma_s\,dW_s\Big| > \sqrt{\delta_n\log n}\Big)\Big\} + o(1) \le \frac{1}{\delta_n\log n}E\Big[\int_{\tau-2\delta_n}^{\tau+2\delta_n}E(\sigma_s^2)\,ds\Big] + o(1) \le \max_s E(\sigma_s^2)\,O\Big(\frac{1}{\log n}\Big) + o(1) \to 0.$
Therefore, $A_1 - A_2 = O_P\Big(\frac{1}{n\delta_n}\sqrt{\delta_n\log n}\Big)$.
For $A_3$, since
$E_\tau\Big[\int_{\tau-2\delta_n}^{t_i}\sigma_s\,dW_s\Big] = 0$
and
$E_\tau\Big[\int_{\tau-2\delta_n}^{t_i}\sigma_s\,dW_s\int_{\tau-2\delta_n}^{t_j}\sigma_s\,dW_s\Big] = \int_{\tau-2\delta_n}^{t_i\wedge t_j}E(\sigma_s^2)\,ds$
and $\tau$ is independent of $(\sigma, W)$, we have
$E(A_3) = 0$
and
$\mathrm{Var}(A_3) = E\big(\mathrm{Var}_\tau(A_3)\big) \le E\Big[\frac{3}{m_+}\sum_{t_i\in I_+}\int_{\tau-2\delta_n}^{t_i}E(\sigma_s^2)\,ds + \frac{4}{m_-}\sum_{t_i\in I_-}\int_{\tau-2\delta_n}^{t_i}E(\sigma_s^2)\,ds\Big] \le 4\max_s E(\sigma_s^2)\Big(4\delta_n + \frac{1}{m_+}\sum_{i=1}^{m_+}i/n + \frac{1}{m_-}\sum_{i=1}^{m_-}i/n\Big) = O(\delta_n).$
Together, we have
$U_2 = O_P\Big(\frac{1}{n}\sqrt{\frac{\log n}{\delta_n}} + \sqrt{\delta_n}\Big).$
Finally,
$U_3 = \frac{1}{m_+}\sum_{t_i\in\hat I_+}1\{t_i\ge\tau\}L - \frac{1}{m_-}\sum_{t_i\in\hat I_-}1\{t_i\ge\tau\}L - L = \begin{cases} -\frac{1}{m_+}n(\tau-\hat\tau)L & \text{if } \tau\ge\hat\tau,\\ -\frac{1}{m_-}n(\hat\tau-\tau)L & \text{if } \tau<\hat\tau,\end{cases} \;=\; O_P\Big(\frac{1}{n\delta_n}\Big).$
Thus, if we take $\delta_n \asymp n^{-2/3}$, we have
$\hat L - L = O_P(n^{-1/3})$
and
$\hat L^2 - L^2 = (\hat L - L)(\hat L + L) = O_P(n^{-1/3}).$ ☐
Theorem 2. 
(Estimation of jump variation using $[X,X]$) Denote by $\overline{[X,X]}_{\hat\tau+}$ the average of $[X,X]$ over $[\hat\tau, \hat\tau+\delta_n]$ and by $\overline{[X,X]}_{\hat\tau-}$ the average of $[X,X]$ over $(\hat\tau-\delta_n, \hat\tau]$. We estimate the jump variation by
$\widehat{L^2} = \overline{[X,X]}_{\hat\tau+} - \overline{[X,X]}_{\hat\tau-}.$
If we choose $\delta_n \asymp n^{-1/2}$, then
$\widehat{L^2} - L^2 = O_P(n^{-1/2}).$
Proof. 
Similar to the proof of the previous theorem, we decompose $\widehat{L^2} - L^2$ as
$\widehat{L^2} - L^2 = U_1 + U_2 + U_3,$
where $U_1$, $U_2$ and $U_3$ correspond to the drift, diffusion and jump parts.
We still have
$U_1 = O_P(\delta_n)$
and
$U_3 = O_P\Big(\frac{1}{n\delta_n}\Big).$
$U_2$ changes to
$U_2 = O_P\Big(n^{-1/2}\Big[\frac{1}{n}\sqrt{\frac{\log n}{\delta_n}} + \sqrt{\delta_n}\Big]\Big).$
Thus, if we take $\delta_n \asymp n^{-1/2}$, we have
$\widehat{L^2} - L^2 = O_P(n^{-1/2}).$ ☐
Theorem 3. 
(Estimation of jump variation using $[Y,Y]^{(\mathrm{avg})}$) Denote by $\overline{[Y,Y]^{(\mathrm{avg})}}_{\hat\tau+}$ the average of $[Y,Y]^{(\mathrm{avg})}$ over $[\hat\tau, \hat\tau+\delta_n]$ and by $\overline{[Y,Y]^{(\mathrm{avg})}}_{\hat\tau-}$ the average of $[Y,Y]^{(\mathrm{avg})}$ over $(\hat\tau-\delta_n, \hat\tau]$. We estimate the jump variation by
$\widetilde{L^2} = \overline{[Y,Y]^{(\mathrm{avg})}}_{\hat\tau+} - \overline{[Y,Y]^{(\mathrm{avg})}}_{\hat\tau-}.$
Let $M = n^{\gamma}$, $0<\gamma\le2/3$. If we choose $\delta_n \asymp n^{(2/3)\gamma-1}$, then
$\widetilde{L^2} - L^2 = O_P(n^{-(2/3)\gamma}).$
Moreover, the convergence rate gets arbitrarily close to $O_P(n^{-4/9})$ as $\gamma$ gets arbitrarily close to $2/3$ with the threshold in Equation (20). For $\gamma = 2/3$, if we choose the threshold to be $c\sqrt{2\log 2^{j_n}}/0.6745$ for some constant $c>1$, then the convergence rate is $O_P(n^{-4/9})$.
Proof. 
This time, we have
$U_1 = O_P(\delta_n),$
$U_2 = O_P\Big(n^{(1-2\gamma)/2}\Big[\frac{1}{n}\sqrt{\frac{\log n}{\delta_n}} + \sqrt{\delta_n}\Big]\Big),$
$U_3 = O_P\Big(\frac{1}{n\delta_n}\Big).$
Thus, if we take $\delta_n \asymp n^{(2/3)\gamma-1}$, we have
$\widetilde{L^2} - L^2 = O_P(n^{-(2/3)\gamma}).$ ☐

Acknowledgments

Wang’s research is partly supported by NSF Grants DMS-10-5635, DMS-12-65203 and DMS-15-28375. The authors would like to thank the editor, associate editor and referees for suggestions that led to the improvement of the paper.

Author Contributions

Yazhen Wang proposed the project, and Xin Zhang carried out the work. Xin Zhang and Yazhen Wang contributed to the development of methodology and theory. Xin Zhang, Donggyu Kim and Yazhen Wang contributed to the theoretic proof. Xin Zhang contributed the numerical results. Xin Zhang and Yazhen Wang contributed to the writing of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Lemmas

Lemma A1. 
(Order of wavelet coefficients in the continuous case) Suppose that $\int|\psi(t)|(1+|t|)\,dt < \infty$ and that $f$ is Hölder continuous with exponent $\alpha$, $0<\alpha\le1$, i.e., $|f(x)-f(y)|\le C|x-y|^{\alpha}$. Then its wavelet coefficients $f_{j,k}$ satisfy
$|f_{j,k}| \le C_1 2^{-j(\alpha+1/2)},$  (A1)
where $C_1 = C\int|y|^{\alpha}|\psi(y)|\,dy$.
Proof. 
First, note that $\int\psi(t)\,dt = 0$, so we have
$f_{j,k} = 2^{j/2}\int f(t)\psi(2^jt-k)\,dt - 2^{j/2}\int f(2^{-j}k)\psi(2^jt-k)\,dt = 2^{j/2}\int\big[f(t)-f(2^{-j}k)\big]\psi(2^jt-k)\,dt.$
Taking absolute values, we have
$|f_{j,k}| \le 2^{j/2}C\int|t-2^{-j}k|^{\alpha}|\psi(2^jt-k)|\,dt = 2^{j/2}C\int|2^{-j}y|^{\alpha}|\psi(y)|2^{-j}\,dy = 2^{-j(\alpha+1/2)}C\int|y|^{\alpha}|\psi(y)|\,dy.$
Note that the integral in the last expression is finite. The result then follows. ☐
Lemma A2. 
(Necessary condition: order of wavelet coefficients in the continuous case) Suppose that $\psi$ is compactly supported and that $f\in L^2(\mathbb{R})$ is bounded and continuous. If for some $\alpha\in(0,1)$ the wavelet coefficients $f_{j,k}$ of $f$ satisfy
$|f_{j,k}| \le C2^{-j(\alpha+1/2)},$
then $f$ is Hölder continuous with exponent $\alpha$.
See Section 2.9 in Daubechies, 1992 [40].
Lemma A3. 
(Order of wavelet coefficients in the jump case) Suppose that $\int|\psi(t)|(1+|t|)\,dt < \infty$ and that $g$ is Hölder continuous with exponent $\alpha$, $0<\alpha\le1$. Let
$f(t) = \begin{cases} g(t) & \text{if } t < s,\\ g(t)+L & \text{if } t \ge s.\end{cases}$
Then, for sufficiently large $j$ with $2^js-k\in G_d$, there exists a constant $C_L$ depending on $L$, such that
$|f_{j,k}| \ge C_L 2^{-j/2},$
where $G_d \equiv \{x: |\int_{(-\infty,x)}\psi(y)\,dy| \ge d\}$ for some positive constant $d$.
More specifically, for $j \ge N$ we have
$|f_{j,k}| \ge 2^{-j/2}\big(|L|d - C_1 2^{-N\alpha}\big),$
where $N = \max\big\{0, \tfrac{1}{\alpha}\log_2\tfrac{C_1}{|L|d} + 1\big\}$ and $C_1$ is defined in Equation (A1).
Proof. 
$f_{j,k} = 2^{j/2}\int_{(-\infty,s)}(-L)\psi(2^jt-k)\,dt + 2^{j/2}\int g(t)\psi(2^jt-k)\,dt.$
For the first part, the absolute value satisfies
$\Big|2^{j/2}(-L)\int_{(-\infty,s)}\psi(2^jt-k)\,dt\Big| = 2^{-j/2}|L|\Big|\int_{(-\infty,2^js-k)}\psi(y)\,dy\Big| \ge 2^{-j/2}|L|d.$  (A8)
For the second part, by Lemma A1, the absolute value satisfies
$\Big|2^{j/2}\int g(t)\psi(2^jt-k)\,dt\Big| \le C_1 2^{-j(\alpha+1/2)}.$  (A9)
Let $N = \max\big\{0, \tfrac{1}{\alpha}\log_2\tfrac{C_1}{|L|d} + 1\big\}$. Putting Equations (A8) and (A9) together, we have, for $j \ge N$,
$|f_{j,k}| \ge 2^{-j/2}\big(|L|d - C_1 2^{-j\alpha}\big) \ge 2^{-j/2}\big(|L|d - C_1 2^{-N\alpha}\big) > 0.$
The proof is complete. ☐
Lemma A4. 
(Robustness of the median) Let $\tilde f_{j,k} = f_{j,k} + f_{j,k}^*$, where $f_{j,k}$ is the wavelet coefficient corresponding to the continuous part and $f_{j,k}^*$ to the jump part. Suppose no more than $a\%$ of $\{f_{j,k}^*, k=0,\dots,2^j-1\}$ are nonzero. Let $m_q$ be the $q$-th quantile of $\{|f_{j,k}|, k=0,\dots,2^j-1\}$ and $\tilde m_q$ the $q$-th quantile of $\{|\tilde f_{j,k}|, k=0,\dots,2^j-1\}$. Then
$m_{0.5-a\%} - m_{0.5} \le \tilde m_{0.5} - m_{0.5} \le m_{0.5+a\%} - m_{0.5}.$
Proof. 
Suppose there is only one nonzero $f_{j,k}^*$, so $a\% = 1/2^j$. Let $f_{j,l}^*$ denote this nonzero coefficient. If the corresponding $|f_{j,l}| \le m_{0.5}$, we have the following three cases:
Case 1:
$|\tilde f_{j,l}| \le m_{0.5}$. Since all other $\tilde f_{j,k} = f_{j,k}$, we have $\tilde m_{0.5} - m_{0.5} = 0$.
Case 2:
$m_{0.5} < |\tilde f_{j,l}| \le m_{0.5+a\%}$. Then we have $\tilde m_{0.5} = |\tilde f_{j,l}|$, so $\tilde m_{0.5} - m_{0.5} = |\tilde f_{j,l}| - m_{0.5} \le m_{0.5+a\%} - m_{0.5}$.
Case 3:
$|\tilde f_{j,l}| > m_{0.5+a\%}$. Then we have $\tilde m_{0.5} = m_{0.5+a\%}$, so $\tilde m_{0.5} - m_{0.5} = m_{0.5+a\%} - m_{0.5}$.
Together, we have
$\tilde m_{0.5} - m_{0.5} \le m_{0.5+a\%} - m_{0.5}.$
If the corresponding $|f_{j,l}| > m_{0.5}$, we can similarly prove
$m_{0.5-a\%} - m_{0.5} \le \tilde m_{0.5} - m_{0.5}.$
This proves the result when there is only one nonzero $f_{j,k}^*$. For more than one nonzero coefficient, we can iterate the argument, adding one nonzero coefficient at a time. Therefore, the result holds. ☐
Lemma A5. 
(Order of the maximum-to-median ratio of wavelet coefficients in the jump case) Suppose $f_t$ is a Hölder continuous process with exponent $\alpha$, $0<\alpha\le1$, except for a jump of size $L$. Then, for sufficiently large $j$,
$\frac{\max(|f_{j,k}|)}{\mathrm{median}(|f_{j,k}|)} \ge C_L 2^{j\alpha},$
where $\max(|f_{j,k}|) = \max\{|f_{j,k}|, k=0,\dots,2^j-1\}$, $\mathrm{median}(|f_{j,k}|) = \mathrm{median}\{|f_{j,k}|, k=0,\dots,2^j-1\}$ and $C_L$ is a constant that increases as $L$ increases.
Proof. 
First, we consider $\max(|f_{j,k}|)$. In a neighborhood of a point $t$ where $f_t$ is continuous, by Lemmas A1 and A2, the wavelet coefficients are of order $2^{-j(\alpha+1/2)}$ with $\alpha$ close to $1/2$. On the other hand, near a jump point of $f_t$, by Lemma A3, for sufficiently large $j$, $|f_{j,k}|$ decreases no faster than $2^{-j/2}$. Since the order $2^{-j/2}$ is much larger than $2^{-j(\alpha+1/2)}$, $\max(|f_{j,k}|)$ is governed by the jump part.
Second, we consider $\mathrm{median}(|f_{j,k}|)$. Assume we have $n = 2^J$ discrete observations and that the number of jumps is $R$. Since the wavelet functions have compact support, among the wavelet coefficients at a fine level no more than a fraction $bR$ of them will be affected by the jumps, where $b$ is very small. For example, if we use a wavelet with support length $S$, we have $2^{J-1}$ wavelet coefficients at the finest level, and no more than $SR$ of them will be affected by jumps. Thus, in this case, $bR = SR/2^{J-1} = (2S/n)R$, which goes to zero as $n$ goes to infinity. Since the number of coefficients affected by jumps is very small compared to the total number of wavelet coefficients, i.e., $bR \ll 0.5$, if we order the wavelet coefficients at fine levels, then by Lemma A4, $\mathrm{median}(|f_{j,k}|)$ is governed by the continuous part.
Together, when $j$ is sufficiently large, the ratio satisfies
$\frac{\max(|f_{j,k}|)}{\mathrm{median}(|f_{j,k}|)} \ge \frac{C_L2^{-j/2}}{C2^{-j(\alpha+1/2)}} = C_L 2^{j\alpha}. \;☐$
Lemma A6. 
(Order of the maximum-to-median ratio of wavelet coefficients in the Brownian motion case) Let $B_t$, $0\le t\le1$, be a Brownian motion and $B_{j,k}$ the corresponding wavelet coefficients. Then,
$\lim_{j\to\infty}P\left(\frac{\max(|B_{j,k}|)}{\mathrm{median}(|B_{j,k}|)} \ge \frac{\sqrt{2\log 2^j}}{0.6745}\right) = 0,$  (A14)
where $\max(|B_{j,k}|) = \max\{|B_{j,k}|, k=0,\dots,2^j-1\}$ and $\mathrm{median}(|B_{j,k}|)$ is the median of $|B_{j,k}|$ for $k=0,\dots,2^j-1$.
Proof. 
For fixed j, B j , k is a stationary Gaussian process with mean zero and variance function
V a r B j , k = 2 j ( ψ ( 2 j t k ) ) 2 d t = ( ψ ( t ) ) 2 d t = 1 .
Therefore, we have
median ( | B j , k | ) = 0.6745 .
We have
P max k ( | B j , k | ) 2 log 2 j k = 0 2 j 1 P | B j , k | 2 log 2 j 2 j 1 2 π 2 log 2 j exp 2 log 2 j 2 (A16) 1 2 π log 2 j
where the second inequality is derived using the inequality P ( B j , k > x ) 1 2 π x exp ( x 2 2 ) .
By (A15) and (A16), Equation (A14) follows. ☐
Remark. 
The covariance function of $B_{j,k}$ is
$E(B_{j,k}B_{j,k'}) = E\Big[2^{j/2}\int\psi(2^jt-k)\,dB_t\cdot 2^{j/2}\int\psi(2^jt-k')\,dB_t\Big] = 2^j\int\psi(2^jt-k)\psi(2^jt-k')\,dt = \int\psi(t)\psi\big(t-(k'-k)\big)\,dt,$
which is zero when $|k-k'|$ is larger than the length of $\mathrm{supp}(\psi)$. That is, $B_{j,k}$, $k=0,1,\dots,2^j-1$, is a stationary Gaussian process with an $m$-dependent covariance structure, and we can derive an asymptotic distribution of $\max_k(|B_{j,k}|)$.
Lemma A7. 
(Equivalent representation of the realized volatility of $X$) Suppose $X$ follows an Itô process satisfying the stochastic differential equation
$dX_t = \mu_t\,dt + \sigma_t\,dB_t, \quad t\in[0,1].$
The realized volatility of $X$ at $t = 1$ is defined by
$[X,X]_1 \equiv \sum_{k=1}^{n}(X_{t_k} - X_{t_{k-1}})^2,$  (A18)
where $\{t_k, k=1,\dots,n\}$ is an equidistant partition of $[0,1]$. Then, we have
$\sqrt{\frac{n}{2}}\Big([X,X]_1 - \int_0^1\sigma_s^2\,ds\Big) \xrightarrow{\;\mathcal{L}\text{-stably}\;} \int_0^1\sigma_s^2\,dB_s^{\mathrm{discrete}},$
where stable convergence is defined in Rényi (1963) [43], Aldous and Eagleson (1978) [44] and Hall and Heyde (1980) [45].
Proof. 
By Itô's Lemma, we have, for any $k = 1,\dots,n$,
$(X_{t_k} - X_{t_{k-1}})^2 = 2\int_{t_{k-1}}^{t_k}(X_s - X_{t_{k-1}})\,dX_s + \int_{t_{k-1}}^{t_k}\sigma_s^2\,ds.$
Then, we have
$[X,X]_1 = \sum_{k=1}^n\Big[2\int_{t_{k-1}}^{t_k}(X_s-X_{t_{k-1}})\,dX_s + \int_{t_{k-1}}^{t_k}\sigma_s^2\,ds\Big] = 2\sum_{k=1}^n\int_{t_{k-1}}^{t_k}(X_s-X_{t_{k-1}})\,dX_s + \int_0^1\sigma_s^2\,ds.$  (A21)
Now, it is enough to show
$\sqrt{2n}\sum_{k=1}^n\int_{t_{k-1}}^{t_k}(X_s-X_{t_{k-1}})\,dX_s \xrightarrow{\;\mathcal{L}\text{-stably}\;} \int_0^1\sigma_t^2\,dB_t^{\mathrm{discrete}}.$  (A22)
Let $X_t = X_t^D + X_t^M$, where $X_t^D = \int_0^t\mu_s\,ds$ and $X_t^M = \int_0^t\sigma_s\,dB_s$. Simple algebraic manipulation shows
$\sum_{k=1}^n\int_{t_{k-1}}^{t_k}(X_s-X_{t_{k-1}})\,dX_s = \sum_{k=1}^n\int_{t_{k-1}}^{t_k}(X_s^D-X_{t_{k-1}}^D)\,dX_s^D + \sum_{k=1}^n\int_{t_{k-1}}^{t_k}(X_s^M-X_{t_{k-1}}^M)\,dX_s^D + \sum_{k=1}^n\int_{t_{k-1}}^{t_k}(X_s^D-X_{t_{k-1}}^D)\,dX_s^M + \sum_{k=1}^n\int_{t_{k-1}}^{t_k}(X_s^M-X_{t_{k-1}}^M)\,dX_s^M \equiv T_1 + T_2 + T_3 + T_4.$
For $T_1$, since $\mu_t$ is bounded, we have
$\max_k\max_{s\in[t_{k-1},t_k]}|X_s^D - X_{t_{k-1}}^D| \le \max_{t\in[0,1]}|\mu_t|\max_k\max_{s\in[t_{k-1},t_k]}|s-t_{k-1}| = O_p(n^{-1}) \quad\text{a.s.}$  (A23)
Then, we have
$|T_1| \le O_p(n^{-1})\sum_{k=1}^n\int_{t_{k-1}}^{t_k}|\mu_s|\,ds \le O_p(n^{-1})\max_{t\in[0,1]}|\mu_t|\sum_{k=1}^n(t_k-t_{k-1}) = O_p(n^{-1})\quad\text{a.s.},$  (A24)
where the first and last steps are due to (A23) and the boundedness of $\mu_t$, respectively.
For $T_2$, we have
$T_2 = \sum_{k=1}^n\int_{t_{k-1}}^{t_k}(X_s^M-X_{t_{k-1}}^M)(\mu_s-\mu_{t_{k-1}})\,ds + \sum_{k=1}^n\int_{t_{k-1}}^{t_k}(X_s^M-X_{t_{k-1}}^M)\mu_{t_{k-1}}\,ds \equiv T_{21} + T_{22}.$
First consider $T_{21}$. Since $\mu_t$ is continuous, we have $|\mu_s - \mu_{\lfloor ns\rfloor/n}| \to 0$ a.s. The boundedness of $\mu_t^2$ and the dominated convergence theorem imply $E\int_0^1(\mu_s - \mu_{\lfloor ns\rfloor/n})^2\,ds = o(1)$. Thus, we have
$E|T_{21}| = E\Big|\int_0^1(X_s^M - X_{\lfloor ns\rfloor/n}^M)(\mu_s - \mu_{\lfloor ns\rfloor/n})\,ds\Big| \le \Big[E\int_0^1(X_s^M-X_{\lfloor ns\rfloor/n}^M)^2\,ds\Big]^{1/2}\Big[E\int_0^1(\mu_s-\mu_{\lfloor ns\rfloor/n})^2\,ds\Big]^{1/2} \le O(n^{-1/2})\Big[E\int_0^1(\mu_s-\mu_{\lfloor ns\rfloor/n})^2\,ds\Big]^{1/2} = o(n^{-1/2}),$  (A25)
where the first inequality is due to Hölder's inequality, and the second inequality is by the fact that
$E(X_s^M - X_{t_{k-1}}^M)^2 = E\int_{t_{k-1}}^s\sigma_x^2\,dx = O(n^{-1}),$  (A26)
where the two equalities are due to the Itô isometry and the boundedness of $\sigma_t$, respectively. For $T_{22}$, since $\int_{t_{k-1}}^{t_k}(X_s^M-X_{t_{k-1}}^M)\mu_{t_{k-1}}\,ds$, $k=1,\dots,n$, are uncorrelated, simple algebraic manipulation shows
$E(T_{22}^2) = \sum_{k=1}^nE\Big[\int_{t_{k-1}}^{t_k}(X_s^M-X_{t_{k-1}}^M)\mu_{t_{k-1}}\,ds\Big]^2 + \sum_{k\ne k'}E\Big[\int_{t_{k-1}}^{t_k}(X_s^M-X_{t_{k-1}}^M)\mu_{t_{k-1}}\,ds\int_{t_{k'-1}}^{t_{k'}}(X_s^M-X_{t_{k'-1}}^M)\mu_{t_{k'-1}}\,ds\Big] = \sum_{k=1}^nE\Big[\int_{t_{k-1}}^{t_k}(X_s^M-X_{t_{k-1}}^M)\mu_{t_{k-1}}\,ds\Big]^2 \le \sum_{k=1}^nE\Big[\int_{t_{k-1}}^{t_k}(X_s^M-X_{t_{k-1}}^M)^2\,ds\int_{t_{k-1}}^{t_k}\mu_{t_{k-1}}^2\,ds\Big] \le O(n^{-1})\sum_{k=1}^n\int_{t_{k-1}}^{t_k}E(X_s^M-X_{t_{k-1}}^M)^2\,ds = O(n^{-2}),$  (A27)
where the first inequality is due to Hölder's inequality, and the last two steps are due to the boundedness of $\mu_t$ and (A26), respectively. By (A25) and (A27), we have
$T_2 = o_p(n^{-1/2}).$  (A28)
For $T_3$, the sequence $\int_{t_{k-1}}^{t_k}(X_s^D-X_{t_{k-1}}^D)\,dX_s^M$, $k=1,\dots,n$, is a martingale difference sequence, and so we have
$E\Big[\sum_{k=1}^n\int_{t_{k-1}}^{t_k}(X_s^D-X_{t_{k-1}}^D)\,dX_s^M\Big]^2 = \sum_{k=1}^nE\Big[\int_{t_{k-1}}^{t_k}(X_s^D-X_{t_{k-1}}^D)\,dX_s^M\Big]^2 = \sum_{k=1}^nE\int_{t_{k-1}}^{t_k}(X_s^D-X_{t_{k-1}}^D)^2\sigma_s^2\,ds = O(n^{-2}),$
where the second equality is due to the Itô isometry and the last equality follows from (A23) and the boundedness of $\sigma_t$. Thus,
$T_3 = O_p(n^{-1}).$  (A29)
For $T_4$, since $\sigma_t$ is bounded and integrable, $\int_0^1\sigma_t^4\,dt < \infty$. Thus, an application of Theorem 5.1 in Jacod and Protter (1998) [46] leads to:
$\sqrt{n}\sum_{k=1}^n\int_{t_{k-1}}^{t_k}(X_s^M-X_{t_{k-1}}^M)\,dX_s^M \xrightarrow{\;\mathcal{L}\text{-stably}\;} \frac{1}{\sqrt{2}}\int_0^1\sigma_t^2\,dB_t^{\mathrm{discrete}}.$  (A30)
Collecting (A24), (A28)–(A30), we have (A22). ☐
Corollary 1. 
Suppose $X$ follows (1). Then $[X,X]_1$, defined by Equation (A18), satisfies
$[X,X]_1 \xrightarrow{\;\mathcal{L}\;} \int_0^1\sigma_t^2\,dt + \sum_{0<s\le1}L_s^2 + \sqrt{\frac{2}{n}}\int_0^1\sigma_t^2\,dB_t^{\mathrm{discrete}}.$

Appendix B. Tables and Figures

Note that in the following tables, we take $\alpha$ close to $1/2$ and $0<\gamma\le2/3$.
Table B1. Summary of the order of the wavelet coefficients of different components.
Component | $X$ | $[X,X]$ | $Y$ | $[Y,Y]$ | $[Y,Y]^{(\mathrm{avg})}$
cont. drift (≤) | $2^{-j(3/2)}$ | $2^{-j(3/2)}$ | $2^{-j(3/2)}$ | $2^{-j(3/2)}$ | $2^{-j(3/2)}$
cont. diffusion (≤) | $2^{-j(1/2+\alpha)}$ | $n^{-1/2}2^{-j(1/2+\alpha)}$ | $2^{-j(1/2+\alpha)}$ | $n^{1/2}2^{-j(1/2+\alpha)}$ | $n^{(1-2\gamma)/2}2^{-j(1/2+\alpha)}$
jump (≥) | $2^{-j/2}$ | $2^{-j/2}$ | $2^{-j/2}$ | $2^{-j/2}$ | $2^{-j/2}$
noise ($O_P$) | 0 | 0 | $n^{-1/2}$ | 0 | 0
Table B2. Summary of the convergence rate of jump location estimation.
$X$ | $[X,X]$ | $Y$ | $[Y,Y]$ | $[Y,Y]^{(\mathrm{avg})}$
$O_P(n^{-1})$ | $O_P(n^{-1})$ | $O_P(n^{-1}\log^2 n)$ | NA | $O_P(n^{-1})$
Table B3. Summary of the convergence rate of jump variation estimation.
$X$ | $[X,X]$ | $Y$ | $[Y,Y]$ | $[Y,Y]^{(\mathrm{avg})}$
$O_P(n^{-1/3})$ | $O_P(n^{-1/2})$ | $O_P(n^{-1/4})$ | NA | $O_P(n^{-(2/3)\gamma})$
Figure B1. Wavelet coefficients of X at levels W1–W5, with a jump at location 9424.
Figure B2. Wavelet coefficients of Y at levels W1–W5, with a jump at location 9424.

References

  1. O.E. Barndorff-Nielsen, and N. Shephard. “Econometric Analysis of Realized Volatility and Its Use in Estimating Stochastic Volatility Models.” J. R. Stat. Soc. Ser. B 64 (2002): 253–280. [Google Scholar] [CrossRef]
  2. L. Zhang, P.A. Mykland, and Y. Aït-Sahalia. “A Tale of Two Time Scales: Determining Integrated Volatility with Noisy High-Frequency Data.” J. Am. Stat. Assoc. 100 (2005): 1394–1411. [Google Scholar] [CrossRef]
  3. L. Zhang. “Efficient Estimation of Stochastic Volatility Using Noisy Observations: A Multi-Scale Approach.” Bernoulli 12 (2006): 1019–1043. [Google Scholar] [CrossRef]
  4. F.M. Bandi, and J.R. Russell. “Microstructure Noise, Realized Variance, and Optimal Sampling.” Rev. Econ. Stud. 75 (2008): 339–369. [Google Scholar] [CrossRef]
  5. O.E. Barndorff-Nielsen, P.R. Hansen, A. Lunde, and N. Shephard. “Designing Realized Kernels to Measure Ex-post Variation of Equity Prices in the Presence of Noise.” Econometrica 76 (2008): 1481–1536. [Google Scholar]
  6. J. Jacod, Y. Li, P.A. Mykland, M. Podolskij, and M. Vetter. “Microstructure Noise in the Continuous Case: The Pre-averaging Approach.” Stoch. Process. Their Appl. 119 (2009): 2249–2276. [Google Scholar] [CrossRef]
  7. J. Jacod, M. Podolskij, and M. Vetter. “Limit Theorems for Moving Averages of Discretized Processes Plus Noise.” Ann. Stat. 38 (2010): 1478–1545. [Google Scholar] [CrossRef]
  8. D. Xiu. “Quasi-maximum Likelihood Estimation of Volatility with High Frequency Data.” J. Econ. 159 (2010): 235–250. [Google Scholar] [CrossRef]
  9. Y. Aït-Sahalia. “Telling from Discrete Data Whether the Underlying Continuous-Time Model Is a Diffusion.” J. Finance 57 (2002): 2075–2121. [Google Scholar] [CrossRef]
  10. O.E. Barndorff-Nielsen, and N. Shephard. “Power and Bipower Variation with Stochastic Volatility and Jumps (with discussion).” J. Financ. Econ. 2 (2004): 1–48. [Google Scholar]
  11. R.C. Merton. “Option Pricing When Underlying Stock Returns Are Discontinuous.” J. Financ. Econ. 3 (1976): 125–144. [Google Scholar] [CrossRef]
  12. Y. Aït-Sahalia. “Disentangling Volatility from Jumps.” J. Financ. Econ. 74 (2004): 487–528. [Google Scholar] [CrossRef]
  13. Y. Aït-Sahalia, and J. Jacod. “Testing for Jumps in a Discretely Observed Process.” Ann. Stat. 37 (2009): 184–222. [Google Scholar] [CrossRef]
  14. T.G. Andersen, D. Dobrev, and E. Schaumburg. “Jump-robust Volatility Estimation Using Nearest Neighbor Truncation.” J. Econ. 169 (2012): 75–93. [Google Scholar] [CrossRef]
  15. O.E. Barndorff-Nielsen, and N. Shephard. “Econometrics of Testing for Jumps in Financial Economics using Bipower Variation.” J. Financ. Econ. 4 (2006): 1–30. [Google Scholar] [CrossRef]
  16. P. Carr, and L. Wu. “What Type of Process Underlies Options? A Simple Robust Test.” J. Finance 58 (2003): 2581–2610. [Google Scholar] [CrossRef]
  17. B. Eraker, M. Johannes, and N. Polson. “The Impact of Jumps in Returns and Volatility.” J. Finance 58 (2003): 1269–1300. [Google Scholar] [CrossRef]
  18. B. Eraker. “Do Stock Prices and Volatility Jump? Reconciling Evidence From Spot and Option Prices.” J. Finance 59 (2004): 1367–1404. [Google Scholar] [CrossRef]
  19. Y. Fan, and J. Fan. “Testing and Detecting Jumps Based on a Discretely Observed Process.” J. Econom. 164 (2011): 331–344. [Google Scholar] [CrossRef]
  20. X. Huang, and G.T. Tauchen. “The Relative Contribution of Jumps to Total Price Variance.” J. Financ. Econ. 4 (2005): 456–499. [Google Scholar]
  21. G.J. Jiang, and R.C. Oomen. “Testing for Jumps When Asset Prices Are Observed with Noise—A “Swap Variance” Approach.” J. Econ. 144 (2008): 352–370. [Google Scholar] [CrossRef]
  22. S.S. Lee, and J. Hannig. “Detecting Jumps From Lévy Jump Diffusion Processes.” J. Financ. Econ. 96 (2010): 271–290. [Google Scholar] [CrossRef]
  23. S.S. Lee, and P.A. Mykland. “Jumps in Financial Markets: A New Nonparametric Test and Jump Dynamics.” Rev. Financ. Stud. 21 (2008): 2535–2563. [Google Scholar] [CrossRef]
  24. T. Bollerslev, U. Kretschmer, C. Pigorsch, and G. Tauchen. “A Discrete-time Model for Daily S&P 500 Returns and Realized Variations: Jumps and Leverage Effects.” J. Econ. 150 (2009): 151–166. [Google Scholar]
  25. V. Todorov. “Estimation of Continuous-Time Stochastic Volatility Models with Jumps Using High-Frequency Data.” J. Econ. 148 (2009): 131–148. [Google Scholar] [CrossRef]
  26. Y. Aït-Sahalia, J. Jacod, and J. Li. “Testing for Jumps in Noisy High Frequency Data.” J. Econ. 168 (2012): 207–222. [Google Scholar] [CrossRef]
  27. Y. Aït-Sahalia, and J. Jacod. “Estimating the Degree of Activity of Jumps in High Frequency Data.” Ann. Stat. 37 (2009): 2202–2244. [Google Scholar] [CrossRef]
  28. B.Y. Jing, X.B. Kong, and Z. Liu. “Estimating the Jump Activity Index of Lévy Processes Under Noisy Observations Using High Frequency Data.” J. Am. Stat. Assoc. 106 (2011): 558–568. [Google Scholar] [CrossRef]
  29. B.Y. Jing, X.B. Kong, Z. Liu, and P.A. Mykland. “On the Jump Activity Index for Semimartingales.” J. Econ. 166 (2012): 213–223. [Google Scholar] [CrossRef]
  30. V. Todorov, and G. Tauchen. “Limit Theorems for Power Variations of Pure-Jump Processes with Application to Activity Estimation.” Ann. Appl. Probab. 21 (2011): 546–588. [Google Scholar] [CrossRef]
  31. B.Y. Jing, X.B. Kong, and Z. Liu. “Modeling High Frequency Data by Pure Jump Processes.” Ann. Stat. 40 (2012): 759–784. [Google Scholar] [CrossRef]
  32. X.B. Kong, Z. Liu, and B.Y. Jing. “Testing For Pure-jump Processes For High-Frequency Data.” Ann. Stat. 43 (2015): 847–877. [Google Scholar] [CrossRef]
  33. V. Todorov, and G. Tauchen. “Volatility Jumps.” J. Bus. Econ. Stat. 29 (2011): 356–371. [Google Scholar] [CrossRef]
  34. T.G. Andersen, T. Bollerslev, and F.X. Diebold. “Roughing it Up: Including Jump Components in the Measurement, Modeling, and Forecasting of Return Volatility.” Rev. Econ. Stat. 89 (2007): 701–720. [Google Scholar] [CrossRef]
  35. T.G. Andersen, T. Bollerslev, and X. Huang. “A Reduced Form Framework for Modeling and Forecasting Jumps and Volatility in Speculative Prices.” J. Econ. 160 (2011): 176–189. [Google Scholar] [CrossRef]
  36. T.G. Andersen, T. Bollerslev, and N. Meddahi. “Realized Volatility Forecasting and Market Microstructure Noise.” J. Econ. 160 (2011): 220–234. [Google Scholar] [CrossRef]
  37. Y. Wang. “Selected Review on Wavelets.” In Frontier Statistics, a Festschrift for Peter Bickel. Edited by H. Koul and J. Fan. London, UK: Imperial College Press, 2006, pp. 163–179. [Google Scholar]
  38. Y. Wang. “Jump and Sharp Cusp Detection by Wavelets.” Biometrika 82 (1995): 385–397. [Google Scholar] [CrossRef]
  39. J. Fan, and Y. Wang. “Multi-scale Jump and Volatility Analysis for High-Frequency Financial Data.” J. Am. Stat. Assoc. 102 (2007): 1349–1362. [Google Scholar] [CrossRef]
  40. I. Daubechies. Ten Lectures on Wavelets. Philadelphia, PA, USA: Society for Industrial and Applied Mathematics, 1992. [Google Scholar]
  41. M. Raimondo. “Minimax Estimation of Sharp Change Points.” Ann. Stat. 26 (1998): 1379–1397. [Google Scholar] [CrossRef]
  42. D.B. Percival, and A.T. Walden. Wavelet Methods for Time Series Analysis. Cambridge, UK: Cambridge University Press, 2000. [Google Scholar]
  43. A. Rényi. “On Stable Sequences of Events.” Sankhyā Ser. A 25 (1963): 293–302. [Google Scholar]
  44. D.J. Aldous, and G.K. Eagleson. “On Mixing and Stability of Limit Theorems.” Ann. Probab. 6 (1978): 325–331. [Google Scholar] [CrossRef]
  45. P. Hall, and C.C. Heyde. Martingale Limit Theory and Its Application. Boston, MA, USA: Academic Press, 1980. [Google Scholar]
  46. J. Jacod, and P. Protter. “Asymptotic Error Distributions for the Euler Method for Stochastic Differential Equations.” Ann. Probab. 26 (1998): 267–307. [Google Scholar] [CrossRef]
Figure 1. Processes simulated with a true jump at location 9424, noise level η = 0.02.
Figure 2. Histograms of jump variation: the first row from left to right (a1,a2,a3) shows the results using processes $Y$, $[Y,Y]$ and $[Y,Y]^{(\mathrm{avg})}$, respectively, sampled at every one tick; the second row (b1,b2,b3) is sampled at every two ticks; and the third row (c1,c2,c3) at every four ticks.
Figure 3. Realized volatility: (a) before removing jump variation; (b–d) after removing jump variation using processes $Y$, $[Y,Y]$ and $[Y,Y]^{(\mathrm{avg})}$, respectively. The solid lines are calculated from sampling at every one tick, the dashed lines at every two ticks and the dotted lines at every four ticks.
Figure 4. (a,b) Daubechies wavelets D4 and D20, respectively; (c,d) absolute values of the integrals of D4 and D20, respectively.
Table 1. Mean number of detected jumps at frequency levels from W1 to W5 using $X$ and $[X,X]$, with the corresponding standard deviation in parentheses.
level | $X$ | $[X,X]$
W1 | 3.3 (0.8) | 3.1 (0.6)
W2 | 3.2 (0.8) | 3.1 (0.6)
W3 | 3.0 (0.9) | 3.3 (0.8)
W4 | 2.9 (0.8) | 4.0 (1.3)
W5 | 2.8 (0.8) | 3.4 (0.9)
Table 2. MSE of the jump variation estimation at frequency levels from W1 to W5 using $X$ and $[X,X]$.
level | $X$ | $[X,X]$
W1 | 3.0 × 10^{-4} | 1.4 × 10^{-5}
W2 | 3.7 × 10^{-4} | 3.0 × 10^{-5}
W3 | 1.7 × 10^{-3} | 8.6 × 10^{-5}
W4 | 6.0 × 10^{-3} | 2.4 × 10^{-4}
W5 | 2.2 × 10^{-2} | 8.4 × 10^{-4}
Table 3. Mean number of detected jumps at frequency levels from W1 to W5 and noise levels η = 0.01, 0.02, 0.03, 0.04 using $Y$, $[Y,Y]$, $[Y,Y]^{(\mathrm{avg})}_{M=8}$, $[Y,Y]^{(\mathrm{avg})}_{M=16}$, $[Y,Y]^{(\mathrm{avg})}_{M=32}$ and $[Y,Y]^{(\mathrm{avg})}_{M=64}$, with the corresponding standard deviation in parentheses.
level | η | $Y$ | $[Y,Y]$ | $[Y,Y]^{(\mathrm{avg})}_{M=8}$ | $[Y,Y]^{(\mathrm{avg})}_{M=16}$ | $[Y,Y]^{(\mathrm{avg})}_{M=32}$ | $[Y,Y]^{(\mathrm{avg})}_{M=64}$
W1 | 0.01 | 2.3 (0.9) | 2.5 (0.7) | 2.5 (0.8) | 2.5 (0.8) | 2.6 (0.9) | 2.7 (0.9)
W2 | 0.01 | 2.4 (0.9) | 2.7 (0.9) | 2.6 (0.8) | 2.7 (0.9) | 3.6 (1.3) | 5.1 (1.8)
W3 | 0.01 | 2.3 (0.9) | 2.9 (1.1) | 3.3 (1.0) | 3.6 (1.2) | 5.3 (1.8) | 7.6 (2.6)
W4 | 0.01 | 2.6 (0.8) | 3.3 (1.3) | 4.2 (1.3) | 6.6 (2.1) | 7.8 (2.4) | 9.9 (3.0)
W5 | 0.01 | 2.7 (0.8) | 2.7 (1.0) | 3.2 (0.9) | 5.1 (1.8) | 6.6 (2.1) | 7.0 (2.4)
W1 | 0.02 | 1.5 (1.0) | 2.0 (0.9) | 2.0 (0.9) | 2.1 (0.9) | 2.2 (0.9) | 2.2 (0.9)
W2 | 0.02 | 1.7 (0.9) | 2.1 (1.0) | 2.1 (0.9) | 2.1 (0.9) | 2.3 (0.9) | 2.8 (1.1)
W3 | 0.02 | 1.7 (1.0) | 2.3 (1.2) | 2.7 (0.9) | 2.6 (0.9) | 3.0 (1.1) | 4.1 (1.5)
W4 | 0.02 | 2.3 (0.9) | 2.7 (1.4) | 3.2 (1.0) | 3.7 (1.3) | 4.0 (1.4) | 5.9 (2.1)
W5 | 0.02 | 2.6 (0.9) | 2.0 (1.1) | 2.8 (0.9) | 3.3 (1.2) | 4.0 (1.4) | 4.7 (1.7)
W1 | 0.03 | 1.0 (0.9) | 1.5 (0.9) | 1.6 (0.9) | 1.7 (0.9) | 1.8 (0.9) | 1.9 (0.9)
W2 | 0.03 | 1.1 (0.9) | 1.7 (1.1) | 1.7 (0.9) | 1.8 (0.9) | 1.9 (0.9) | 2.1 (0.9)
W3 | 0.03 | 1.2 (0.9) | 1.8 (1.2) | 2.5 (1.0) | 2.3 (0.9) | 2.4 (1.0) | 2.8 (1.1)
W4 | 0.03 | 1.9 (0.9) | 2.2 (1.4) | 2.9 (1.1) | 3.2 (1.2) | 3.1 (1.2) | 3.7 (1.5)
W5 | 0.03 | 2.4 (0.9) | 2.5 (1.1) | 2.6 (1.0) | 2.9 (1.1) | 2.9 (1.1) | 3.2 (1.4)
W1 | 0.04 | 0.6 (0.8) | 1.2 (0.9) | 1.2 (0.9) | 1.4 (0.9) | 1.5 (0.9) | 1.6 (0.9)
W2 | 0.04 | 0.8 (0.8) | 1.3 (1.0) | 1.3 (0.9) | 1.4 (0.9) | 1.6 (0.9) | 1.8 (0.9)
W3 | 0.04 | 0.8 (0.8) | 1.4 (1.2) | 2.3 (1.0) | 2.0 (2.0) | 2.1 (1.0) | 2.2 (1.0)
W4 | 0.04 | 1.5 (0.9) | 1.8 (1.4) | 2.6 (1.1) | 3.0 (1.2) | 2.7 (1.2) | 3.0 (1.3)
W5 | 0.04 | 2.2 (0.9) | 1.2 (1.0) | 2.3 (1.1) | 2.7 (1.2) | 2.5 (1.0) | 2.6 (1.1)
Table 4. MSE of the jump variation estimation at frequency levels from W1 to W5 and noise levels η = 0.01, 0.02, 0.03, 0.04 using $Y$, $[Y,Y]$, $[Y,Y]^{(\mathrm{avg})}_{M=8}$, $[Y,Y]^{(\mathrm{avg})}_{M=16}$, $[Y,Y]^{(\mathrm{avg})}_{M=32}$ and $[Y,Y]^{(\mathrm{avg})}_{M=64}$.
level | η | $Y$ | $[Y,Y]$ | $[Y,Y]^{(\mathrm{avg})}_{M=8}$ | $[Y,Y]^{(\mathrm{avg})}_{M=16}$ | $[Y,Y]^{(\mathrm{avg})}_{M=32}$ | $[Y,Y]^{(\mathrm{avg})}_{M=64}$
W1 | 0.01 | 4.3 × 10^{-4} | 3.0 × 10^{-4} | 8.9 × 10^{-5} | 1.2 × 10^{-4} | 2.1 × 10^{-4} | 5.8 × 10^{-4}
W2 | 0.01 | 4.6 × 10^{-4} | 1.4 × 10^{-3} | 8.9 × 10^{-5} | 1.5 × 10^{-4} | 2.4 × 10^{-4} | 6.3 × 10^{-4}
W3 | 0.01 | 1.9 × 10^{-3} | 1.8 × 10^{-3} | 1.8 × 10^{-4} | 1.6 × 10^{-4} | 2.7 × 10^{-4} | 6.1 × 10^{-4}
W4 | 0.01 | 7.2 × 10^{-3} | 2.5 × 10^{-3} | 2.1 × 10^{-4} | 2.7 × 10^{-4} | 3.9 × 10^{-4} | 5.0 × 10^{-4}
W5 | 0.01 | 1.2 × 10^{-2} | 8.0 × 10^{-3} | 1.2 × 10^{-2} | 2.8 × 10^{-4} | 4.2 × 10^{-4} | 5.1 × 10^{-4}
W1 | 0.02 | 2.4 × 10^{-3} | 1.8 × 10^{-3} | 3.5 × 10^{-4} | 2.9 × 10^{-4} | 3.3 × 10^{-4} | 6.7 × 10^{-4}
W2 | 0.02 | 2.0 × 10^{-3} | 3.5 × 10^{-3} | 3.5 × 10^{-4} | 2.8 × 10^{-4} | 3.2 × 10^{-4} | 6.9 × 10^{-4}
W3 | 0.02 | 3.7 × 10^{-3} | 4.2 × 10^{-3} | 5.2 × 10^{-4} | 2.0 × 10^{-4} | 3.0 × 10^{-4} | 6.2 × 10^{-4}
W4 | 0.02 | 7.5 × 10^{-3} | 9.9 × 10^{-3} | 5.1 × 10^{-4} | 3.6 × 10^{-4} | 4.6 × 10^{-4} | 5.3 × 10^{-4}
W5 | 0.02 | 1.4 × 10^{-2} | 2.8 × 10^{-2} | 4.1 × 10^{-2} | 7.3 × 10^{-4} | 5.6 × 10^{-4} | 5.2 × 10^{-4}
W1 | 0.03 | 1.2 × 10^{-2} | 6.1 × 10^{-3} | 1.6 × 10^{-3} | 9.6 × 10^{-4} | 7.0 × 10^{-4} | 7.8 × 10^{-4}
W2 | 0.03 | 1.2 × 10^{-2} | 9.2 × 10^{-3} | 1.6 × 10^{-3} | 5.1 × 10^{-4} | 6.8 × 10^{-4} | 6.9 × 10^{-4}
W3 | 0.03 | 1.3 × 10^{-2} | 1.3 × 10^{-2} | 1.0 × 10^{-3} | 4.9 × 10^{-4} | 3.5 × 10^{-4} | 7.2 × 10^{-4}
W4 | 0.03 | 8.7 × 10^{-3} | 2.2 × 10^{-2} | 6.4 × 10^{-4} | 5.9 × 10^{-4} | 5.4 × 10^{-4} | 5.8 × 10^{-4}
W5 | 0.03 | 1.5 × 10^{-2} | 4.9 × 10^{-2} | 5.1 × 10^{-2} | 1.8 × 10^{-3} | 6.9 × 10^{-4} | 6.6 × 10^{-4}
W1 | 0.04 | 3.5 × 10^{-2} | 1.5 × 10^{-2} | 4.4 × 10^{-3} | 2.7 × 10^{-3} | 1.8 × 10^{-3} | 1.5 × 10^{-3}
W2 | 0.04 | 2.6 × 10^{-2} | 2.1 × 10^{-2} | 4.7 × 10^{-3} | 3.2 × 10^{-3} | 1.6 × 10^{-3} | 1.2 × 10^{-3}
W3 | 0.04 | 3.0 × 10^{-2} | 3.1 × 10^{-2} | 1.9 × 10^{-3} | 1.2 × 10^{-3} | 7.2 × 10^{-4} | 7.4 × 10^{-4}
W4 | 0.04 | 1.1 × 10^{-2} | 4.3 × 10^{-2} | 1.4 × 10^{-3} | 1.2 × 10^{-3} | 5.6 × 10^{-4} | 6.6 × 10^{-4}
W5 | 0.04 | 1.5 × 10^{-2} | 7.1 × 10^{-2} | 5.3 × 10^{-2} | 3.0 × 10^{-3} | 1.5 × 10^{-3} | 7.7 × 10^{-4}
