Article

Forecasting Value-at-Risk Using High-Frequency Information

Huiyu Huang 1 and Tae-Hwy Lee 2,*
1 Grantham, Mayo, Van Otterloo and Company LLC, 2150 Shattuck Ave, Suite 900, Berkeley, CA 94704, USA
2 Department of Economics, University of California, Riverside, CA 92521-0427, USA
* Author to whom correspondence should be addressed.
Econometrics 2013, 1(1), 127-140; https://doi.org/10.3390/econometrics1010127
Submission received: 7 March 2013 / Accepted: 22 April 2013 / Published: 21 June 2013

Abstract

In the prediction of quantiles of daily Standard & Poor’s 500 (S&P 500) returns, we consider how to use high-frequency 5-minute data. We examine methods that incorporate the high-frequency information either indirectly, through combining forecasts (forecasts generated from returns sampled at different intraday intervals), or directly, through combining the high-frequency information into one model. We consider subsample averaging, bootstrap averaging, and forecast averaging methods for the indirect case, and factor models with the principal component approach for both direct and indirect cases. We show that, in forecasting the daily S&P 500 index return quantile (Value-at-Risk, or VaR, is simply its negative), using high-frequency information is beneficial, often substantially and particularly so in forecasting downside risk. Our empirical results show that the averaging methods (subsample averaging, bootstrap averaging, forecast averaging), which are different ways of forming an ensemble average from high-frequency intraday information, provide excellent forecasting performance compared with using only low-frequency daily information.

1. Introduction

Due to increasing fragility in financial markets and the extensive use of derivative products, effective management of financial risks has become ever more important. The risk measurement tool known as “Value-at-Risk” (VaR) has received great attention from both regulators and academics due to its simplicity, despite the deficiency that it is not coherent in the sense of [1]. Numerous papers have studied various aspects of the VaR methodology. Typically, VaR is computed at the daily frequency, as in 1- to 10-day forecasts of tail quantiles. In this paper, we discuss how to improve the accuracy of daily out-of-sample VaR forecasts by using high-frequency intraday information.
Consider a financial return series $\{r_t\}_{t=1}^{T}$, generated by the probability law $\Pr(r_t \le r \mid \mathcal{F}_{t-1}) \equiv F_t(r)$ conditional on the information set $\mathcal{F}_{t-1}$ (a σ-field) at time $t-1$. Suppose $\{r_t\}$ admits the stochastic process

$$ r_t = \mu_t + \varepsilon_t = \mu_t + \sigma_t z_t, \qquad (1) $$

where $\mu_t = E(r_t \mid \mathcal{F}_{t-1})$, $\sigma_t^2 = E(\varepsilon_t^2 \mid \mathcal{F}_{t-1})$, and $z_t \equiv \varepsilon_t / \sigma_t$ has the conditional distribution function $G_t(z) \equiv \Pr(z_t \le z \mid \mathcal{F}_{t-1})$. The VaR with a given tail probability $\alpha \in (0,1)$ is defined as the negative of the conditional quantile $q_t(\alpha)$, where

$$ \Pr(r_t \le q_t(\alpha) \mid \mathcal{F}_{t-1}) = F_t(q_t(\alpha)) = \alpha. \qquad (2) $$

The quantile can be estimated either by inverting the distribution function,

$$ q_t(\alpha) = F_t^{-1}(\alpha) = \mu_t + \sigma_t G_t^{-1}(\alpha), \qquad (3) $$

where the second equality holds if $F_t(r)$ belongs to a location-scale family, or from quantile regression. A quantile model involves the specification of the conditional distribution $F_t(\cdot)$ or of the quantile regression function $q_t(\alpha)$. The former can also be estimated via its components $\mu_t$, $\sigma_t^2$, and $G_t(\cdot)$. For instance, [2] combines distributional assumptions on future daily returns with forecasts of realized volatility to obtain quantile forecasts. In this paper we focus on the latter: quantile regression with the high-frequency information incorporated.
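To fix ideas, here is a minimal numerical sketch of the inversion in Equation (3), under the illustrative assumption (not made in this paper) that $G_t$ is standard normal; the values of `mu_t` and `sigma_t` are hypothetical.

```python
# Sketch of Equation (3): q_t(alpha) = mu_t + sigma_t * G^{-1}(alpha),
# assuming (for illustration only) a standard normal G_t.
from scipy.stats import norm

mu_t, sigma_t = 0.02, 1.3    # hypothetical conditional mean and volatility (in %)
alpha = 0.01                 # tail probability

q_t = mu_t + sigma_t * norm.ppf(alpha)   # conditional alpha-quantile
VaR_t = -q_t                             # VaR is the negative of the quantile
print(f"q_t({alpha}) = {q_t:.4f}, VaR = {VaR_t:.4f}")
```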
The rich dynamics in ultra-high-frequency financial data have been exploited extensively for estimating quadratic variation (integrated variance) in the realized volatility literature, and also in high-frequency duration models. A considerable literature addresses the use of high-frequency data for realized volatility and duration models, whereas little work has been done on quantiles, which are practically more relevant and central to financial risk management. This paper contributes to the literature by examining whether it pays to incorporate the intraday data and, more importantly, how to incorporate them to achieve better performance in forecasting daily quantiles (VaR is simply the negative of the quantile).
The existing literature on high-frequency data is mainly devoted to volatility issues: estimating quadratic variation, the mixed data sampling (MIDAS) approach of [3], and the autoregressive conditional duration models of [4]. In this paper we use the high-frequency data for a different purpose. Motivated by the bagging (bootstrap aggregating) approach of [5], which replicates the true distribution by the bootstrap distribution in computing the ensemble average, we consider returns computed from high-frequency intraday observations as multiple replications of the true distribution of daily returns. Suppose hourly time series are available while we are interested in generating daily forecasts (e.g., 1 day ahead or 10 days ahead). Assuming that each group of hourly information is generated from an identical distribution (without intraday patterns or diurnal cycles), we may consider the high-frequency (hourly) series as multiple replications of the lower-frequency (daily) series. In that context, we regard the high-frequency data as subsamples of the daily series taken at different times within a day.
Viewing the high-frequency information in this manner leads us to take the subsample averaging idea from the realized volatility (RV) literature and use it for forecasting quantiles instead. The original subsampling idea can be traced back to [6] and was further studied by [7,8]. References [9,10] establish that RV, defined as the sum of squared intraday returns over small intervals, is an asymptotically unbiased estimator of the unobserved quadratic variation as the interval length approaches zero. In the presence of market microstructure noise, however, this desirable property of RV is contaminated. Recent innovative works investigating this issue include [8,11,12,13,14,15]. When the observed price process is the true underlying price process plus microstructure noise, it is shown that RV is overwhelmed by the noise and explodes as the sampling frequency approaches infinity. Therefore, it may be optimal to sample less frequently than would be the case in the absence of noise. Reference [6] proposes for the first time an unbiased data-driven estimator of volatility and a subsample averaging volatility estimator. References [8,15] establish improved estimators of quadratic variation through subsampling. The bias-adjusted estimator of [8], based on the subsample averaging method, is able to push the estimation bias to zero in the limit. Reference [15] shows that subsampling is highly advantageous for RV estimators based on discontinuous kernels.
Motivated by this subsampling approach, which is shown to outperform estimators that use the highest-frequency series directly, and to avoid arbitrariness in choosing a sampling frequency, we propose a subsample averaging quantile forecast similar in construction to the bagging approach and to the simple average combination of forecasts. We also compare this approach with forecasts utilizing the highest-frequency information directly. There is a vast literature on predicting daily market returns using daily close data only. We contend that proper use of high-frequency intraday data should not only help achieve more accurate estimation of volatility, but should also benefit forecasting of daily VaR/quantiles. The question is how to incorporate the vast amount of intraday high-frequency information into low-frequency daily modeling or forecasting of VaR/quantiles.
In [16], the relative advantages of the combination of forecasts (CF) over the combination of information (CI) are discussed. When predictors are highly correlated, important predictors are omitted, and/or the signal-to-noise ratio is low, CF is more likely to win over CI. In this paper, we consider these two alternative approaches to using intraday information and compare the CF approaches (combining individual forecasts, each obtained from one 5-minute intraday information block at a time) with the CI approaches (combining all 5-minute intraday information blocks into one model) for their ability to forecast daily market return VaR/quantiles. It is well known that the mean of the daily market return is very hard to predict, whereas its quantiles may be predictable, particularly in the tails; see [17] for some evidence collected using bagging. Practically, quantile forecasts are very important for risk management purposes; see, for example, [18]. We therefore implement quantile forecasts using high-frequency data through various CF and CI methods. To avoid over-parameterization, we consider subsample averaging, several CF methods, and factor models with the principal component approach. We compare their prediction errors and find that, in daily S&P 500 return quantile forecasts, the CF approaches with simple weighting schemes, subsample averaging in particular, are generally superior to the others.
The rest of the paper is organized as follows. Section 2 describes the data and its organization. Section 3 discusses the quantile forecasting methods we consider. Section 4 presents out-of-sample VaR/quantile forecasting results. Section 5 concludes.

2. Data

The data we use consist of S&P 500 index values at 5-minute intervals recorded between 9:35 a.m. and 4:00 p.m. from 9 June 1997 to 30 May 2003, a total of 1,501 days and 117,078 observations.¹ In cleaning the data, periods when the market is closed are treated as having no variation in index values, so there are 78 ticks each trading day. From this pool of 5-minute index data, we construct 78 “daily” returns, each spanning twenty-four hours but anchored at a different time of day. The sampling frequency used in [9] for analyzing the distributional properties of RV is also 5 minutes, and many RV papers using high-frequency data have justified the use of the 5-minute frequency.

Specifically, denote the daily close return by $r_t^{(0)} = p_t^{4:00pm} - p_{t-1}^{4:00pm}$, where $p_t^{4:00pm}$ denotes the logarithm of the S&P 500 index value at 4:00 p.m. on day $t$. The series $\{p_t^{4:00pm}\}_{t=1}^{T}$ is one subsample of the entire S&P 500 5-minute index data, obtained by systematic sampling at 4:00 p.m. each day; hence it represents one seventy-eighth of the entire 5-minute high-frequency information. Therefore, $r_t^{(0)}$ is the daily return based on this particular subsample, which is also the conventional daily close return.

We define a subsample daily return $r_t^{(1)} = p_t^{9:35am} - p_{t-1}^{9:35am}$, and similarly the other subsample daily returns $r_t^{(j)}$ for $j = 2, \ldots, m = 77$, so that $r_t^{(77)} = p_t^{3:55pm} - p_{t-1}^{3:55pm}$.
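As an illustration, the sketch below builds the 78 subsample return series from a matrix of 5-minute log prices; the variable names and the toy data-generating step are ours, and only the data layout described above is assumed.

```python
import numpy as np

# log_prices: hypothetical (T+1) x 78 array; row t holds the log index values on
# day t at the 78 intraday ticks (9:35 a.m., 9:40 a.m., ..., 3:55 p.m., 4:00 p.m.).
rng = np.random.default_rng(0)
log_prices = np.cumsum(rng.normal(0.0, 0.001, size=(1501, 78)), axis=0)  # toy data

# Subsample daily returns r_t^(j) = p_t^(tick) - p_{t-1}^(tick), in percent.
# The last column (4:00 p.m.) gives the conventional daily close return r^(0);
# columns 0..76 give r^(1)..r^(77), anchored at earlier intraday times.
subsample_returns = 100 * np.diff(log_prices, axis=0)   # shape (1500, 78)
r_close = subsample_returns[:, -1]                      # r_t^(0), 4:00 p.m. anchor
```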

3. Forecasting Quantiles Using High-Frequency Information

In this section we describe various methods of using high-frequency information in quantile forecasting. These include Daily-Close quantile forecasts, Bagging Daily-Close quantile forecasts, Subsample-Averaging (SA) quantile forecasts, Combining-Forecast (CF) quantile forecasts, Combining-Information (CI) quantile forecasts, and some variations of the SA, CF, and CI methods, as presented in Table 1.

3.1. Daily Close Quantile Forecasts

Suppose our objective is to predict the α-quantile of the daily close return $r_{T+h}^{(0)}$ conditional on the information up to time $T$, $Q_\alpha(r_{T+h}^{(0)} \mid X_T)$, where $h$ denotes the forecast horizon. Typically, $X_T \in \mathcal{F}_T$ is a vector of past values of daily returns at the same daily frequency as $\{r_t^{(0)}\}_{t=1}^{T}$. As in [17], we use a quantile regression model

$$ Q_\alpha(r_{t+h}^{(0)} \mid X_t) = X_t^{(0)\prime} \beta^{(0)}(\alpha) + \varepsilon_{t+h} \qquad (4) $$

to predict the quantiles of the daily S&P 500 return with $h = 1$, using the linear polynomial model of [18] (with the quadratic term capturing the effect of volatility clustering on quantiles), so that

$$ X_t^{(0)} = \left( 1, \; r_t^{(0)}, \; r_t^{(0)2} \right)'. \qquad (5) $$

While more complicated nonlinear models such as artificial neural networks may be considered, we follow [17], where the linear polynomial model is used successfully. The quantile forecast $\hat{Q}_\alpha(r_{T+h}^{(0)} \mid X_T) = X_T^{(0)\prime} \hat{\beta}^{(0)}(\alpha)$ from this quantile regression model is denoted “Daily-Close” in Table 1; it is estimated by minimizing the “tick” (or check) function of [19],

$$ \rho_\alpha(\varepsilon) = [\alpha - \mathbf{1}(\varepsilon < 0)] \, \varepsilon. \qquad (6) $$
To examine whether it pays to incorporate the intraday data, we use this Daily-Close as the benchmark model to be compared with other models incorporating intraday information in our empirical analysis in Section 4.
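The following is a minimal sketch of the Daily-Close forecast of Equations (4)-(6), using the quantile regression routine in statsmodels; the function and variable names are ours, and `r` is assumed to be the series of daily close returns. (Note that statsmodels estimates quantile regressions by iteratively reweighted least squares rather than the interior-point algorithm of [34] used in the paper.)

```python
import numpy as np
import statsmodels.api as sm

def daily_close_forecast(r, alpha=0.01, h=1):
    """h-step-ahead quantile forecast from the linear polynomial model (4)-(5).

    r : 1-D array of daily close returns r_t^(0).
    """
    y = r[h:]                                                    # target r_{t+h}^(0)
    X = sm.add_constant(np.column_stack([r[:-h], r[:-h] ** 2]))  # (1, r_t, r_t^2)
    beta = sm.QuantReg(y, X).fit(q=alpha).params                 # minimizes tick loss (6)
    x_T = np.array([1.0, r[-1], r[-1] ** 2])                     # regressor at time T
    return x_T @ beta                                            # Q-hat_alpha(r_{T+h}^(0) | X_T)
```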

3.2. CI Quantile Forecasts

Besides past realizations of the daily close return $r_t^{(0)}$, we now have the other subsamples $r_t^{(j)}$, $j = 1, \ldots, m$, constructed from the high-frequency intraday data. To accommodate more information, it is natural to expand $X_t$ to include the rich dynamics in all the other intraday subsamples $(r_t^{(0)}, r_t^{(1)}, \ldots, r_t^{(m)})$. For the linear polynomial model, the new regressor vector utilizing the high-frequency information is

$$ X_t^m = \left( 1, \; r_t^{(0)}, \; r_t^{(0)2}, \; r_t^{(1)}, \; r_t^{(1)2}, \; \ldots, \; r_t^{(m)}, \; r_t^{(m)2} \right)'. \qquad (7) $$

This follows the CI scheme: all the high-frequency information is combined into one model to generate a single forecast,

$$ \hat{Q}_\alpha^m(r_{T+h}^{(0)} \mid X_T^m) = X_T^{m\prime} \hat{\beta}^m(\alpha), \qquad (8) $$

where the coefficient $\hat{\beta}^m(\alpha)$ is estimated by running the quantile regression of $r_t^{(0)}$ on $X_{t-h}^m$. We denote this quantile forecast “CI-Unrestricted” in Table 1.
Now redefine $X_t^m = (r_t^{(0)}, r_t^{(0)2}, r_t^{(1)}, r_t^{(1)2}, \ldots, r_t^{(m)}, r_t^{(m)2})'$, excluding the constant term. This is a fairly large vector, of dimension 156 ($m = 77$ in our empirical study). Moreover, consecutive subsample returns $r_t^{(j)}$ and $r_t^{(j+1)}$ may be highly correlated. Methods that mitigate the high dimension and multicollinearity problems are therefore desirable for improving out-of-sample forecast performance. We consider a factor model with the principal component approach (see [20,21]). The procedure starts by assuming that $(X_t^m, r_{t+h}^{(0)})$ admits a factor model representation with $k$ common latent factors $f_t$:

$$ X_t^m = \Lambda f_t + e_t, \qquad (9) $$

$$ Q_\alpha(r_{t+h}^{(0)} \mid X_t^m) = (1, \; f_t')\, \beta^f(\alpha) + \varepsilon_{t+h}, \qquad (10) $$

where $\hat{\beta}^f(\alpha)$ is obtained by running the quantile regression of $r_t^{(0)}$ on $(1, \hat{f}_{t-h}')$. This quantile forecast is denoted “CI-PC” in Table 1.²
If $k$ is unknown, it can be estimated by conventional information criteria (IC) or by criteria tailored to quantile forecasting models. Reference [22] provides variants of the Schwarz Information Criterion for model selection in M-estimation, which include quantile regression as a special case. According to [23], for median regression the criterion $SIC = \ln \hat{\zeta}_T + \frac{1}{2}\, p\, \frac{\ln T}{T}$, where $\hat{\zeta}_T \equiv \frac{1}{T-h} \sum_{t=1}^{T-h} \rho_\alpha(\varepsilon_{t+h})$ with $\alpha = 0.5$, $\rho_\alpha(\cdot)$ is the tick loss function in Equation (6), and $p$ is the dimension of the model, yields a consistent model selection procedure under certain assumptions. To our knowledge, however, a consistent model selection procedure and corresponding information criterion for factor quantile regressions have not yet been established. We leave the theoretical study of this procedure for future research. In our empirical study, we consider conventional information criteria such as AIC and BIC, where the estimated number of factors $k$ is selected by $\min_{1 \le k \le k_{max}} IC(k) = \ln(SSR(k)/T) + g(T)\, k$, with $k_{max}$ the hypothesized upper limit on the true number of factors and $SSR$ the sum of squared residuals. Here we simply extend the AIC (with $g(T) = 2/T$) and BIC (with $g(T) = \ln T / T$) formulas to the quantile case, but replace the squared “residual” in $SSR$ by the tick loss $[\alpha - \mathbf{1}(r_{t+h}^{(0)} < \hat{Q}_\alpha(r_{t+h}^{(0)} \mid X_t^m))]\,[r_{t+h}^{(0)} - \hat{Q}_\alpha(r_{t+h}^{(0)} \mid X_t^m)]$ for $t = 1, \ldots, T-h$. Alternatively, we can fix $k$ ex ante at some small value such as 1, 2, or 3.
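A sketch of CI-PC as we read Equations (9)-(10): extract principal components from the standardized 156-dimensional regressor matrix, then run a quantile regression on the estimated factors. The function and variable names are ours, and the data layout follows the sketch in Section 2.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.decomposition import PCA

def ci_pc_forecast(subsample_returns, alpha=0.01, h=1, k=2):
    """CI-PC: principal components of (r^(j), r^(j)^2), j = 0,...,m, feed a
    quantile regression for the daily close return.

    subsample_returns : T x (m+1) array whose column 0 holds r_t^(0).
    """
    R = subsample_returns
    X = np.column_stack([R, R ** 2])              # T x 2(m+1): levels and squares
    X = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize before PCA
    f = PCA(n_components=k).fit_transform(X)      # estimated factors, Equation (9)
    y = R[h:, 0]                                  # target r_{t+h}^(0)
    F = sm.add_constant(f[:-h])                   # (1, f-hat_{t-h}), Equation (10)
    beta = sm.QuantReg(y, F).fit(q=alpha).params
    return np.r_[1.0, f[-1]] @ beta               # forecast of the alpha-quantile of r_{T+h}^(0)
```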

3.3. CF Quantile Forecasts

The conventional CF methodology, with one fixed forecast target (the α-quantile of $r_{T+h}^{(0)}$), is implemented as follows; a sketch of both steps follows this list.
  • CF Step 1: Compute quantile forecasts by regressing $r_t^{(0)}$ on each individual subsample,

    $$ \hat{Q}_\alpha^{(j)}(r_{T+h}^{(0)} \mid X_T^{(j)}) = X_T^{(j)\prime} \hat{\beta}^{(j)}(\alpha), \qquad (11) $$

    where $X_t^{(j)} = (1, r_t^{(j)}, r_t^{(j)2})'$ and $\hat{\beta}^{(j)}(\alpha)$ is obtained by estimating the quantile regression of $r_t^{(0)}$ on $X_{t-h}^{(j)}$, for $j = 0, 1, \ldots, m$.
  • CF Step 2: Combine the quantile forecasts from Step 1 by some weighting method. The simplest combination is the simple average,

    $$ \bar{Q}_\alpha^{CF\text{-}Mean}(r_{T+h}^{(0)} \mid X_T^m) = \frac{1}{m+1} \sum_{j=0}^{m} \hat{Q}_\alpha^{(j)}(r_{T+h}^{(0)} \mid X_T^{(j)}), \qquad (12) $$

    i.e., the mean of all the individual quantile forecasts (denoted “CF-Mean” in Table 1). One can also use the median of the individual quantile forecasts as the combined forecast (denoted “CF-Median” in Table 1).
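The sketch below implements CF Steps 1-2; the function and variable names are ours, with the same assumed data layout as before.

```python
import numpy as np
import statsmodels.api as sm

def cf_forecasts(subsample_returns, alpha=0.01, h=1):
    """CF Step 1: one quantile forecast of r_{T+h}^(0) per subsample j.

    subsample_returns : T x (m+1) array whose column 0 holds r_t^(0).
    """
    R = subsample_returns
    y = R[h:, 0]                                  # fixed target r_{t+h}^(0)
    forecasts = []
    for j in range(R.shape[1]):                   # regress on each subsample, Equation (11)
        Xj = sm.add_constant(np.column_stack([R[:-h, j], R[:-h, j] ** 2]))
        beta = sm.QuantReg(y, Xj).fit(q=alpha).params
        forecasts.append(np.array([1.0, R[-1, j], R[-1, j] ** 2]) @ beta)
    return np.array(forecasts)

# CF Step 2: simple combinations of the m+1 = 78 individual forecasts, Equation (12).
# q_hat = cf_forecasts(subsample_returns)
# cf_mean, cf_median = q_hat.mean(), np.median(q_hat)
```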
In Step 1, given our 5-minute high-frequency data, there are in total 78 individual quantile forecasts to be combined. Besides the simple methods of combining these 78 forecasts by average or median, which require no estimated weighting parameters, we also tried the regression approach to forecast combination [24] to exploit more of the cross-sectional information contained in the 78 individual forecasts. This regression approach does not work well, however, because of the high dimensionality. We therefore consider the principal component methodology for combining the quantile forecasts generated in Step 1, and form the VaR forecast (denoted “CF-PC” in Table 1) from the following factor model for the quantile forecasts:
$$ \hat{Q}_\alpha^m(r_t^{(0)} \mid X_{t-h}^m) = \Gamma \hat{F}_t + v_t, \qquad (13) $$

where

$$ \hat{Q}_\alpha^m(r_t^{(0)} \mid X_{t-h}^m) = \left( \hat{Q}_\alpha^{(0)}(r_t^{(0)} \mid X_{t-h}^{(0)}), \; \hat{Q}_\alpha^{(1)}(r_t^{(0)} \mid X_{t-h}^{(1)}), \; \ldots, \; \hat{Q}_\alpha^{(m)}(r_t^{(0)} \mid X_{t-h}^{(m)}) \right)'. \qquad (14) $$

We then estimate $\hat{\beta}^F(\alpha)$ from

$$ Q_\alpha^{CF\text{-}PC}(r_{t+h}^{(0)} \mid X_t^m) = (1, \; \hat{F}_{t+h}')\, \beta^F(\alpha) + \eta_{t+h} \qquad (15) $$

to obtain the final CF-PC quantile forecast

$$ \hat{Q}_\alpha^{CF\text{-}PC}(r_{T+h}^{(0)} \mid X_T^m) = (1, \; \hat{F}_{T+h}')\, \hat{\beta}^F(\alpha). \qquad (16) $$

The size of the common factor set $\hat{F}_t$, i.e., the number of principal components of $\hat{Q}_\alpha^m(r_t^{(0)} \mid X_{t-h}^m)$, can again be determined by the AIC or BIC as discussed in Section 3.2, or fixed ex ante at some small value such as 1, 2, or 3.
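Under the same caveats, the sketch below reads Equations (13)-(16) as principal components extracted from the panel of in-sample individual quantile forecasts; all names are ours and the inputs are assumed to come from the CF Step 1 regressions above.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.decomposition import PCA

def cf_pc_forecast(Q_fit, q_T, y, alpha=0.01, k=2):
    """CF-PC, Equations (13)-(16).

    Q_fit : (T-h) x (m+1) in-sample fitted quantiles from CF Step 1
    q_T   : length-(m+1) vector of the individual time-T forecasts
    y     : length-(T-h) realized targets r_{t+h}^(0)
    """
    pca = PCA(n_components=k).fit(Q_fit)
    F_in = pca.transform(Q_fit)                                       # F-hat_t, Equation (13)
    beta = sm.QuantReg(y, sm.add_constant(F_in)).fit(q=alpha).params  # Equation (15)
    F_T = pca.transform(q_T.reshape(1, -1))[0]                        # factor scores at the forecast date
    return np.r_[1.0, F_T] @ beta                                     # Equation (16)
```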

3.4. Subsample-Averaging Quantile Forecasts

Inspired by the RV literature, as discussed in Section 1, we propose here a new methodology that extends the subsampling idea from RV estimation (using intraday high-frequency information) to daily quantile forecasting. We call it subsample averaging (SA); it is specific to forecasting with high-frequency data. This approach is similar to the CF-Mean procedure discussed in Section 3.3, but it differs from CF-Mean in Step 1: instead of fixing the left-hand-side (LHS) variable in Equation (11) at the 4 p.m. daily close return $r_t^{(0)}$, we set the LHS variable to the corresponding subsample return $r_t^{(j)}$ for each $j = 0, 1, \ldots, m$. A sketch follows the two steps below.
  • SA Step 1: Estimate

    $$ \tilde{Q}_\alpha^{(j)}(r_{t+h}^{(j)} \mid X_t^{(j)}) = X_t^{(j)\prime} \tilde{\beta}^{(j)}(\alpha), \qquad (17) $$

    which gives the SA quantile forecasts $\tilde{Q}_\alpha^{(j)}(r_{T+h}^{(j)} \mid X_T^{(j)}) = X_T^{(j)\prime} \tilde{\beta}^{(j)}(\alpha)$ for $j = 0, 1, \ldots, m$.
  • SA Step 2: Taking the simple average, we get the quantile forecast

    $$ \bar{Q}_{\alpha,T+h}^{SA\text{-}Mean}(r_{T+h}^{(j)} \mid X_T^m) = \frac{1}{m+1} \sum_{j=0}^{m} \tilde{Q}_\alpha^{(j)}(r_{T+h}^{(j)} \mid X_T^{(j)}), \qquad (18) $$

    which is denoted “SA-Mean” in Table 1. Similarly, “SA-Median” takes the median instead of the mean.
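A minimal sketch of SA (the names are ours); it differs from `cf_forecasts` above only in the left-hand-side variable of each regression.

```python
import numpy as np
import statsmodels.api as sm

def sa_forecasts(subsample_returns, alpha=0.01, h=1):
    """SA Step 1: for each j, regress r_{t+h}^(j) on (1, r_t^(j), r_t^(j)^2).

    subsample_returns : T x (m+1) array of subsample daily returns.
    """
    R = subsample_returns
    forecasts = []
    for j in range(R.shape[1]):
        y = R[h:, j]                              # own-subsample target, Equation (17)
        Xj = sm.add_constant(np.column_stack([R[:-h, j], R[:-h, j] ** 2]))
        beta = sm.QuantReg(y, Xj).fit(q=alpha).params
        forecasts.append(np.array([1.0, R[-1, j], R[-1, j] ** 2]) @ beta)
    return np.array(forecasts)

# SA Step 2, Equation (18): with q = sa_forecasts(subsample_returns),
# sa_mean, sa_median = q.mean(), np.median(q)
```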
We compare the out-of-sample tick losses of SA-Mean and SA-Median against $r_{T+h}^{(0)}$, i.e., with $\varepsilon = r_{T+h}^{(0)} - \bar{Q}_{\alpha,T+h}^{SA}(r_{T+h}^{(j)} \mid X_T^m)$ in Equation (6). Although the benchmark Daily-Close, CF, and CI quantile forecasts are all computed for $r_{T+h}^{(0)}$ by design, we regard returns computed from high-frequency intraday subsample observations as multiple replications of the true distribution of daily returns (even if anchored at a different time of day). This is motivated by the bagging approach, which replicates the true distribution by the bootstrap distribution in computing the ensemble average. In this sense the high-frequency subsamples play the role of bootstrap samples: both permit us to estimate the ensemble average at the daily frequency. We therefore treat the subsample averaging VaR/quantile forecasts as daily close VaR/quantile forecasts.

The main idea of SA rests on the assumption that subsamples from high-frequency data replicate the true distribution, which permits computing the ensemble average of a stochastic process. Under the assumption of strict stationarity, we use the high-frequency data to generate subsamples of the daily series at different times within a day, and the subsample returns $r_t^{(j)}$ ($j = 0, 1, \ldots, m$) are considered $(m+1)$ replications (draws) of the daily return series.

The assumption that daily returns measured at different points of time during a trading day originate from the same distribution is relevant only for SA, which uses the high-frequency data. Bagging (next section) uses daily data and does not use the high-frequency data, so no such assumption is needed there. However, as a referee pointed out, we need not exclude intradaily or diurnal patterns altogether: all we need to assume is that the diurnal patterns are the same from day to day.³

3.5. Bagging Daily Close Quantile Forecasts

Motivated by subsample averaging, which replicates the true distribution with returns computed from high-frequency data when computing the ensemble average of a stochastic process, we consider bagging in this section.

Bootstrap averaging (bagging) replicates the true distribution by the bootstrap distribution in computing the ensemble average of a stochastic process. A bagging predictor is a combined predictor formed over a set of training sets to smooth out the “instability” caused by parameter estimation uncertainty and model uncertainty. A predictor is said to be “unstable” if a small change in the training set can lead to a significant change in the predictor [5]. Reference [25] shows that indicator functions are unstable, and the check loss function of [19] for the regression quantile involves an indicator function, as shown in Equation (6). The mechanism of bagging has been explained in various ways. Reference [5] uses the Cauchy-Schwarz inequality for squared error loss. Reference [17] extends this to convex loss (e.g., the tick function for quantiles) using Jensen’s inequality. Reference [25] shows that, for a nonsmooth unstable predictor, bagging reduces the variance of the first-order term; in particular, bagging can reduce the mean squared error of forecasts by averaging over the randomness of variable selection. References [26,27] expand a smooth unstable function into linear and higher-order terms and show that bagging reduces the variance of the higher-order terms. Bagging also stabilizes prediction by equalizing the influence of outlying training observations. Reference [21] shows that bagging is asymptotically a Bayesian shrinkage method. Applications of bagging include inflation [28], financial volatility [29], the equity premium [16], short-term interest rates [30], and employment data [31].
Bagging for the α-quantile of the daily close return $r_t^{(0)}$ is implemented as follows; a sketch appears after the list.
  • Bagging Step 1: Construct a bootstrap sample $\{r_t^{(0)*}\}$ according to the empirical distribution of the daily close returns. Run the quantile regression in Equation (4) by regressing $r_t^{(0)*}$ on $X_{t-h}^{(0)*} = (1, r_{t-h}^{(0)*}, r_{t-h}^{(0)*2})'$, obtain $\hat{\beta}^{(0)*}(\alpha)$, and compute the bootstrap Daily-Close quantile forecast

    $$ \hat{Q}_\alpha^{(0)*}(r_{T+h}^{(0)*} \mid X_T^{(0)*}) = X_T^{(0)*\prime} \hat{\beta}^{(0)*}(\alpha). \qquad (19) $$

    Repeat this $B$ times.
  • Bagging Step 2: Combine the quantile forecasts from Step 1 by some weighting method. The simplest combination is the simple average,

    $$ \bar{Q}_\alpha^{Bagging}(r_{T+h}^{(0)} \mid X_T^{(0)}) = \frac{1}{B} \sum_{b=1}^{B} \hat{Q}_\alpha^{(0)*(b)}(r_{T+h}^{(0)*(b)} \mid X_T^{(0)*(b)}), \qquad (20) $$

    i.e., the mean of all the bootstrap Daily-Close quantile forecasts (denoted “Bagging-Mean”). One can also use the median of the individual quantile forecasts as the combined forecast (denoted “Bagging-Median”). As the two are very similar, we report only the former in Table 1, denoted “Bagging Daily-Close.”
A concern in applying bagging to time series is whether the bootstrap provides a sound simulation sample for dependent data, for which the bootstrap is required to be consistent. It has been shown that some bootstrap procedures (such as the moving block bootstrap) provide consistent densities for moment estimators and quantile estimators; see [32,33]. We therefore use block bootstrapping in our empirical study, as stock market returns are likely to exhibit time dependence. In the next section, we set the number of bootstrap samples at $B = 50$ and fix the bootstrap block size at 4.
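A sketch of Bagging Steps 1-2 with a moving block bootstrap; the resampling implementation and all names are ours, while B = 50 and the block size of 4 follow the settings above.

```python
import numpy as np
import statsmodels.api as sm

def moving_block_bootstrap(r, block_size=4, rng=None):
    """Resample a series by concatenating randomly drawn contiguous blocks."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(r)
    n_blocks = int(np.ceil(n / block_size))
    starts = rng.integers(0, n - block_size + 1, size=n_blocks)
    return np.concatenate([r[s:s + block_size] for s in starts])[:n]

def bagging_forecast(r, alpha=0.01, h=1, B=50, block_size=4, seed=0):
    """Bagging-Mean: average of B bootstrap Daily-Close quantile forecasts, (19)-(20)."""
    rng = np.random.default_rng(seed)
    forecasts = []
    for _ in range(B):
        rb = moving_block_bootstrap(r, block_size, rng)
        y = rb[h:]
        X = sm.add_constant(np.column_stack([rb[:-h], rb[:-h] ** 2]))
        beta = sm.QuantReg(y, X).fit(q=alpha).params
        # Evaluate at the actual time-T regressor; the bootstrap randomizes
        # only the parameter estimates (one reading of Equation (19)).
        forecasts.append(np.array([1.0, r[-1], r[-1] ** 2]) @ beta)
    return np.mean(forecasts)
```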

4. Out-of-Sample Quantile Forecasting Results

Table 1 presents the performance of each forecasting method for predicting the one-day-ahead ($h = 1$) daily close S&P 500 return quantiles. The 78 daily return series introduced in Section 2 are computed as log differences of the corresponding index values, multiplied by 100. The out-of-sample period is $P = 500$ days, from 29 May 2001 to 30 May 2003. The in-sample size is $R = 1000$ days, from 10 June 1997 to 25 May 2001. We use a rolling window scheme to estimate the parameters, with the estimation window fixed at $R = 1000$. Quantile regressions are estimated with the interior-point algorithm of [34]. We report the out-of-sample mean tick loss ratio of each forecasting scheme over the Daily-Close benchmark for left-tail probabilities $\alpha = 0.01, 0.05, \ldots, 0.99$, as defined in Equation (2). A sketch of this rolling evaluation follows.
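The evaluation loop can be sketched as follows (our own structure; `forecast_fn` stands for any of the methods above with the `daily_close_forecast`-style signature):

```python
import numpy as np

def tick_loss(e, alpha):
    """Tick (check) loss of Equation (6)."""
    return (alpha - (e < 0)) * e

def rolling_mean_tick_loss(r, forecast_fn, alpha=0.01, R=1000, P=500, h=1):
    """Mean out-of-sample tick loss of h-step quantile forecasts of r^(0),
    using a rolling estimation window of length R over P forecast origins."""
    losses = []
    for i in range(P):
        window = r[i : i + R]                         # rolling in-sample window
        q_hat = forecast_fn(window, alpha=alpha, h=h)
        e = r[i + R + h - 1] - q_hat                  # error at the target date
        losses.append(tick_loss(e, alpha))
    return np.mean(losses)

# Loss ratio reported in Table 1 (hypothetical usage):
# ratio = rolling_mean_tick_loss(r, some_method) / rolling_mean_tick_loss(r, daily_close_forecast)
```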
The results in Table 1 show that, except for CI-Unrestricted (which clearly suffers from its dimensionality problem), we do in general gain by incorporating high-frequency intraday information into predicting daily close return quantiles. The “averaging” methods (SA-Mean, SA-Median, CF-Mean, CF-Median, and Bagging) generally work better than the various CI methods for most values of α. All SA and CF methods with simple weighting schemes improve upon the Daily-Close benchmark (loss ratios below one), sometimes quite substantially, indicating the stability of the averaging methods with simple weighting schemes (mean or median).
For the factor model approaches, the maximum hypothesized number of factors $k_{max}$ is set at 15, so the number of factors $k$ is either chosen within the interval $[1, 15]$ by AIC or BIC or fixed at 1, 2, or 3. Generally, BIC performs better than AIC but usually worse than fixing the number of factors at 1, 2, or 3. The simple rule of thumb of fixing $k$ at a small number, that is, summarizing the entire high-frequency information with a few factors, therefore works considerably better than estimating $k$ by AIC or BIC.
We also find that CF is more robust than CI. This highlights the merits of CF illustrated in [16] and is consistent with what the literature typically finds about CF.
The percentage loss reductions in the tails, especially the left tail, are much larger than those at the middle quantiles: there is more room for the various forecasting schemes to improve upon the Daily-Close model in the tails by incorporating the high-frequency information. Consider the left tail with $\alpha = 0.01$, the case financial risk management is usually concerned with. Relative to the benchmark quantile forecast using Daily-Close returns, the Bagging Daily-Close quantile forecast reduces the relative out-of-sample mean forecast tick loss ratio to 91.39% (an improvement of 8.61%). The CF-Mean and CF-Median quantile forecasts reduce the loss ratio to 92.26% and 93.39% respectively, slightly worse than Bagging Daily-Close. The subsample averaging methods (SA-Mean and SA-Median) generate the best daily quantile forecasts, with out-of-sample mean tick loss ratios of 88% and 89%. That is a substantial 11-12% reduction in tick loss for daily VaR forecasting with $\alpha = 0.01$, achieved by incorporating the intraday 5-minute high-frequency information into modeling the lower-frequency daily series.
Table 1. Forecasting Quantiles Using High-Frequency Information.

| Method | α = 0.01 | α = 0.05 | α = 0.10 | α = 0.30 | α = 0.50 | α = 0.70 | α = 0.90 | α = 0.95 | α = 0.99 |
|---|---|---|---|---|---|---|---|---|---|
| Daily-Close (mean tick loss × 100) | 4.5816 | 15.1202 | 24.7583 | 48.3697 | 54.5713 | 48.6378 | 26.7932 | 16.5165 | 5.0059 |
| Bagging Daily-Close | 0.9139 | 0.9826 | 0.9976 | 0.9935 | **0.9971** | 0.9980 | 0.9934 | 0.9938 | 0.9131 |
| SA-Mean | **0.8803** | 0.9723 | **0.9877** | **0.9873** | 0.9990 | 0.9967 | **0.9882** | 0.9764 | 0.9513 |
| SA-Median | 0.8903 | 0.9731 | 0.9916 | 0.9885 | 0.9994 | 0.9954 | 0.9905 | 0.9818 | 0.9699 |
| CF-Mean | 0.9226 | 0.9718 | 0.9928 | 0.9938 | 0.9992 | 0.9943 | 0.9912 | **0.9750** | 0.9403 |
| CF-Median | 0.9339 | 0.9746 | 0.9952 | 0.9939 | 0.9997 | 0.9952 | 0.9940 | 0.9805 | 0.9445 |
| CF-PC (AIC) | 1.1006 | 1.0221 | 1.0273 | 1.0025 | 1.0097 | 1.0047 | 1.0224 | 0.9772 | 2.2536 |
| CF-PC (BIC) | 1.0746 | 1.0103 | 0.9982 | 0.9948 | 0.9985 | 0.9944 | 1.0064 | 0.9938 | 2.0761 |
| CF-PC (k = 1) | 0.9253 | 0.9726 | 0.9892 | 0.9962 | 0.9985 | 0.9982 | 0.9993 | 0.9832 | 0.9562 |
| CF-PC (k = 2) | 0.9356 | 0.9792 | 0.9894 | 0.9973 | 0.9985 | 0.9983 | 0.9916 | 1.0034 | 0.9896 |
| CF-PC (k = 3) | 0.9853 | 0.9823 | 1.0012 | 0.9983 | 0.9992 | 1.0008 | 0.9884 | 0.9985 | 0.9607 |
| CI-Unrestricted | 5.8885 | 2.0387 | 1.5619 | 1.1643 | 1.1971 | 1.2420 | 1.4362 | 1.8454 | 4.8916 |
| CI-PC (AIC) | 1.0956 | 1.0340 | 1.0337 | 0.9914 | 1.0006 | 0.9980 | 0.9888 | 1.0278 | 0.9905 |
| CI-PC (BIC) | 1.0456 | 1.0291 | 1.0284 | 0.9951 | 1.0004 | **0.9871** | 0.9891 | 0.9945 | 0.9967 |
| CI-PC (k = 1) | 0.9471 | **0.9675** | 1.0121 | 0.9983 | 1.0004 | 1.0018 | 1.0030 | 0.9952 | 0.9167 |
| CI-PC (k = 2) | 0.9589 | 0.9817 | 1.0132 | 0.9977 | 1.0033 | 1.0040 | 1.0095 | 0.9966 | **0.8947** |
| CI-PC (k = 3) | 0.9974 | 0.9752 | 1.0014 | 0.9984 | 1.0025 | 0.9955 | 1.0018 | 0.9879 | 1.0224 |
Notes: This table presents the relative performance of each forecasting scheme for one-day-ahead daily close S&P 500 return quantile prediction. We use 5-minute high-frequency S&P 500 index data from 9 June 1997 to 30 May 2003, a total of 117,078 observations on 1,501 days, with 78 5-minute price index observations per day. We generate the 78 subsample “daily” return series, each running from a time in a day to the same time on the following trading day, so each daily return series has length 1,500 days. The daily returns are calculated as log differences of the corresponding index values, multiplied by 100. The out-of-sample size P is 500 days, from 29 May 2001 to 30 May 2003; the in-sample size R is 1,000 days, from 10 June 1997 to 25 May 2001. We use a rolling window scheme to estimate the parameters, with the estimation window size set at R = 1000. We report the out-of-sample mean forecast tick loss ratio of each forecasting scheme relative to the Daily-Close benchmark model (whose mean tick loss × 100 is given in the first row), for different values of the left-tail probability parameter α, including α = 0.5 for the median. A number less than one indicates that the model outperforms the benchmark Daily-Close, and a bolded number indicates the smallest average tick loss ratio among all methods in each α column. For CF-PC and CI-PC, the information criteria AIC and BIC used in selecting the number of factors are modified for mean tick forecast errors instead of mean squared errors. In all factor model approaches, the maximum hypothesized number of factors k_max is set at 15, so the number of factors k is chosen within the interval [1, 15]. In Bagging, the number of bootstrap samples is 50 and the bootstrap block size is fixed at 4.

5. Conclusions

In this paper, we propose and examine several methods of incorporating high-frequency information in forecasting daily quantiles, which prove particularly useful for forecasting lower-tail daily Value-at-Risk. We expect one-day-ahead Value-at-Risk forecasting to remain an essential daily risk management tool; long-horizon Value-at-Risk forecasting would also be interesting, and we leave it for future work. The aftermath of the global financial crisis of 2007-08 makes daily risk management all the more relevant. Tighter banking regulations have raised the bar on stress testing, and central banks, the IMF, and the Basel Committee call for tighter regulation to address the causes of the crisis. Many financial institutions are required to report daily exposures and downside risks to regulators.
The methods considered in this paper are based on “averaging.” Unlike typical model averaging, where the average is taken over different models, our averaging involves only one model and is taken over multiple forecasts generated from multiple lower-frequency (daily) datasets constructed from higher-frequency (intraday) data. Hence it is not model averaging, but high-frequency data averaging or combining. We suggest three such data averaging/combining methods, namely combining forecasts (CF), subsample averaging (SA), and bootstrap averaging (Bagging), as explained in Section 3. Each of these three averaging methods is designed to incorporate high-frequency (intraday) information into lower-frequency (daily) modeling/forecasting.
As demonstrated in Section 4, using high-frequency information in forecasting VaR/quantiles of the S&P 500 index return is beneficial, often substantially and particularly so in forecasting downside risk. For out-of-sample lower-tail VaR forecasts of the daily S&P 500 return, our empirical results show that the averaging methods SA, Bagging, and CF, which serve as different ways of forming the ensemble average from high-frequency intraday information, deliver excellent forecasting performance compared with using only low-frequency daily close information.

Acknowledgements

We thank George Jiang, Eric Hillebrand, Aman Ullah and two referees for their useful comments. The opinions expressed in this paper are those of the authors and do not necessarily reflect those of Grantham, Mayo, Van Otterloo and Company LLC.

References

  1. P. Artzner, F. Delbaen, J.-M. Eber, and D. Heath. “Coherent Measures of Risk.” Mathematical Finance 9, 3 (1999): 203–228. [Google Scholar] [CrossRef]
  2. M.P. Clements, A.B. Galvao, and J.H. Kim. “Quantile Forecasts of Daily Exchange Rate Returns from Forecasts of Realized Volatility.” Journal of Empirical Finance 15 (2008): 729–750. [Google Scholar] [CrossRef]
  3. E. Ghysels, A. Sinko, and R. Valkanov. “MIDAS Regressions: Further Results and New Directions.” Econometric Reviews 26 (2007): 53–90. [Google Scholar] [CrossRef]
  4. R.F. Engle, and J.R. Russell. “Autoregressive Conditional Duration: A New Model for Irregularly Spaced Transaction Data.” Econometrica 66, 5 (1998): 1127–1162. [Google Scholar] [CrossRef]
  5. L. Breiman. “Bagging Predictors.” Machine Learning 24 (1996): 123–140. [Google Scholar] [CrossRef]
  6. B. Zhou. “High-Frequency Data and Volatility in Foreign-Exchange Rates.” Journal of Business and Economic Statistics 14 (1996): 45–52. [Google Scholar]
  7. O.E. Barndorff-Nielsen, P.R. Hansen, A. Lunde, and N. Shephard. “Subsampling Realised Kernels.” Journal of Econometrics 160 (2011): 204–219. [Google Scholar] [CrossRef]
  8. L. Zhang, P.A. Mykland, and Y. Aït-Sahalia. “A Tale of Two Time Scales: Determining Integrated Volatility with Noisy High-Frequency Data.” Journal of the American Statistical Association 100 (2005): 1394–1411. [Google Scholar] [CrossRef]
  9. T.G. Andersen, T. Bollerslev, F.X. Diebold, and P. Labys. “The Distribution of Realized Exchange Rate Volatility.” Journal of the American Statistical Association 96 (2001): 42–55. [Google Scholar] [CrossRef]
  10. O.E. Barndorff-Nielsen, and N. Shephard. “Econometric Analysis of Realised Volatility and Its Use in Estimating Stochastic Volatility Models.” Journal of the Royal Statistical Society Series B 64 (2002): 253–280. [Google Scholar] [CrossRef]
  11. Y. Aït-Sahalia, P.A. Mykland, and L. Zhang. “How Often to Sample a Continuous-time Process in the Presence of Market Microstructure Noise.” Review of Financial Studies 18 (2005): 351–416. [Google Scholar] [CrossRef]
  12. F.M. Bandi, and J.R. Russell. “Microstructure Noise, Realized Variance, and Optimal Sampling.” Review of Economic Studies 75 (2008): 339–369. [Google Scholar] [CrossRef]
  13. P.R. Hansen, and A. Lunde. “Realized Variance and Market Microstructure Noise.” Journal of Business and Economic Statistics 24 (2006): 127–218. [Google Scholar] [CrossRef]
  14. O.E. Barndorff-Nielsen, P.R. Hansen, A. Lunde, and N. Shephard. “Designing Realised Kernels to Measure the Ex-post Variation of Equity Prices in the Presence of Noise.” Econometrica 76, 6 (2008): 1481–1536. [Google Scholar] [CrossRef]
  15. O.E. Barndorff-Nielsen, P.R. Hansen, A. Lunde, and N. Shephard. “Subsampling Realised Kernels.” Journal of Econometrics 160 (2011): 204–219. [Google Scholar] [CrossRef]
  16. H. Huang, and T.-H. Lee. “To Combine Forecasts or to Combine Information? ” Econometric Reviews 29, 5 (2010): 534–570. [Google Scholar] [CrossRef]
  17. T.-H. Lee, and Y. Yang. “Bagging Binary and Quantile Predictors for Time Series.” Journal of Econometrics 135 (2006): 465–497. [Google Scholar] [CrossRef]
  18. V. Chernozhukov, and L. Umantsev. “Conditional Value-at-Risk: Aspects of Modeling and Estimation.” Empirical Economics 26 (2001): 271–292. [Google Scholar]
  19. R. Koenker, and G. Bassett. “Asymptotic Theory of Least Absolute Error Regression.” Journal of the American Statistical Association 73 (1978): 618–622. [Google Scholar]
  20. J.H. Stock, and M.W. Watson. “Forecasting Using Principal Components from a Large Number of Predictors.” Journal of the American Statistical Association 97 (2002): 1167–1179. [Google Scholar] [CrossRef]
  21. J.H. Stock, and M.W. Watson. “Generalized Shrinkage Methods for Forecasting Using Many Predictors.” Journal of Business and Economic Statistics 30 (2012): 481–493. [Google Scholar] [CrossRef]
  22. J.A.F. Machado. “Robust Model Selection and M-Estimation.” Econometric Theory 9 (1993): 478–493. [Google Scholar] [CrossRef]
  23. R. Koenker. Quantile Regression. Cambridge, UK: Cambridge University Press, 2005. [Google Scholar]
  24. C.W.J. Granger, and R. Ramanathan. “Improved Methods of Combining Forecasts.” Journal of Forecasting 3 (1984): 197–204. [Google Scholar] [CrossRef]
  25. P. Bühlmann, and B. Yu. “Analyzing Bagging.” Annals of Statistics 30 (2002): 927–961. [Google Scholar] [CrossRef]
  26. A. Buja, and W. Stuetzle. “Observations on Bagging.” Statistica Sinica 16 (2006): 323–352. [Google Scholar]
  27. J.H. Friedman, and P. Hall. “On Bagging and Nonlinear Estimation.” Journal of Statistical Planning and Inference 137 (2007): 669–683. [Google Scholar] [CrossRef]
  28. A. Inoue, and L. Kilian. “How Useful is Bagging in Forecasting Economic Time Series? A Case Study of U.S. CPI Inflation.” Journal of the American Statistical Association 103, 482 (2008): 511–522. [Google Scholar] [CrossRef]
  29. E. Hillebrand, and M.C. Medeiros. “The Benefits of Bagging for Forecast Models of Realized Volatility.” Econometric Reviews 29, 5 (2010): 571–593. [Google Scholar] [CrossRef]
  30. F. Audrino, and M.C. Medeiros. “Modeling and forecasting short-term interest rates: The benefits of smooth regimes, macroeconomic variables, and bagging.” Journal of Applied Econometrics 26, 6 (2011): 999–1022. [Google Scholar] [CrossRef]
  31. D. Rapach, and J. Strauss. “Bagging or Combining (or Both)? An Analysis Based on Forecasting U.S. Employment Growth.” Econometric Reviews 29, 5 (2010): 511–533. [Google Scholar] [CrossRef]
  32. P. Hall, J.L. Horowitz, and B.-Y. Jing. “On Block Rules for the Bootstrap with Dependent Data.” Biometrika 82, 3 (1995): 561–574. [Google Scholar] [CrossRef]
  33. B. Fitzenberger. “The Moving Blocks Bootstrap and Robust Inference for Linear Least Squares and Quantile Regressions.” Journal of Econometrics 82 (1997): 235–287. [Google Scholar] [CrossRef]
  34. S. Portnoy, and R. Koenker. “The Gaussian Hare and the Laplacian Tortoise: Computability of l1 vs l2 Regression Estimators.” Statistical Science 12 (1997): 279–300. [Google Scholar]
  • ¹ We are grateful to George Jiang, who generously shared this high-frequency intraday data with us. The data are extracted from the contemporaneous index levels recorded with the quotes of SPX options from the CBOE.
  • ² To better comprehend this factor representation: if we think of the stock market as a vast pool of information, with the price determined by the aggregate behavior of market participants, there may exist a few fundamental shocks to the market in a given day that ultimately determine the daily return at close. This small set of shocks (main forces) may be captured by the latent factors in the factor model Equation (9), or, technically, by a few principal components of the large explanatory variable set $X_t^m$, which contains as many as 156 variables capturing levels and volatilities of the market throughout the trading day.
  • ³ We thank a referee for this point.
