*2.2. Value-at-Risk Forecasting and Backtesting*

One of the most important financial risk measures is the Value-at-Risk (VaR). The VaR is usually defined as a specific loss that is not exceeded with a given probability (e.g., (1 − *a*)%).

When using GARCH models, the *k*-days-ahead VaR forecast is simply derived by a *k*-step forecast of the variance based on the estimated parameters at time *T*, *h*<sub>*T*+*k*</sub> = E(*h*<sub>*T*+*k*</sub>|F<sub>*T*</sub>), which is then applied in the general VaR calculation scheme, yielding

$$
\widehat{\mathbf{VaR}}\_{T+k} = \widehat{\mu}\_{T+k} + \sqrt{\widehat{h}\_{T+k}}\, Q\_a\left(\widehat{\nu}, \widehat{\xi}\right),
$$

where *μ*<sub>*T*+*k*</sub> is the estimated mean, *h*<sub>*T*+*k*</sub> is the estimated conditional variance, and *ν* and *ξ* are the estimated distributional parameters from the training set 1, ... , *T*. *Q*<sub>*a*</sub>(*ν*, *ξ*) denotes the *a*-quantile function of the Skewed Student's-*t* distribution with degrees-of-freedom parameter *ν* and skewness *ξ*. Note that we only forecast the variance; hence, in the VaR forecast, the quantile function depends on the estimated in-sample parameters, which are not forecasted separately. The calculation of the forecasted variance depends on the specific GARCH model: a closed-form solution exists for the GARCH(1,1) *k*-days-ahead forecast, while for the APARCH, FIGARCH, and FIAPARCH, the forecasts are calculated iteratively.<sup>6</sup>
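
To illustrate the closed-form case, the following minimal sketch computes a *k*-days-ahead GARCH(1,1) variance forecast and the corresponding VaR. All parameter values and function names are illustrative assumptions, and a symmetric Student's-*t* quantile (standardized to unit variance) stands in for the Skewed Student's-*t* quantile used in the paper.

```python
import numpy as np
from scipy import stats

def garch11_var_forecast(h_T1, omega, alpha, beta, mu, nu, k, a=0.01):
    """Sketch: closed-form k-days-ahead GARCH(1,1) variance forecast and VaR.

    h_T1 : one-step-ahead conditional variance h_{T+1} from the fitted model.
    For a stationary GARCH(1,1), E[h_{T+k} | F_T] reverts geometrically to the
    unconditional variance omega / (1 - alpha - beta).
    """
    persistence = alpha + beta
    h_bar = omega / (1.0 - persistence)                      # unconditional variance
    h_Tk = h_bar + persistence ** (k - 1) * (h_T1 - h_bar)   # E[h_{T+k} | F_T]

    # The paper uses the skewed Student's-t quantile Q_a(nu, xi); here a
    # symmetric Student's-t quantile, rescaled to unit variance, stands in.
    q_a = stats.t.ppf(a, df=nu) * np.sqrt((nu - 2.0) / nu)
    return mu + np.sqrt(h_Tk) * q_a

# Illustrative parameter values only
print(garch11_var_forecast(h_T1=1.2e-4, omega=1e-6, alpha=0.08, beta=0.90,
                           mu=2e-4, nu=6.0, k=10, a=0.01))
```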

With stochastic volatility (SV, SV-L, SV-*t*) models, we approximate the conditional distribution of the returns at the forecast horizon given the observed returns non-parametrically, i.e., the conditional distribution of *r*<sub>*T*+*k*</sub> given *r*<sub>1</sub>, ... , *r*<sub>*T*</sub>. We use particle filtering, a sequential Monte Carlo algorithm for state-space models (Sarkka 2013, chp. 7). The particle filter approximates the conditional distribution of the volatility *h*<sub>*T*</sub> given the returns *r*<sub>1</sub>, ... , *r*<sub>*T*</sub> in the form of a sample of so-called "particles". This sample is propagated *k* times according to the volatility dynamics (Equation (7)), which is the same in the three stochastic volatility models. Then, from the volatility sample at time *T* + *k*, a return sample is generated according to the return model (Equation (6)). We compute the VaR by taking the empirical quantiles of this return sample.
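
A stylized sketch of this forecasting step is given below. Since Equations (6) and (7) are not reproduced in this section, a standard AR(1) log-volatility specification, Gaussian innovations, and illustrative parameter values are assumed; the array `h_particles` stands for the filtered particle sample that the particle filter would deliver at time *T*.

```python
import numpy as np

rng = np.random.default_rng(42)

def sv_var_forecast(h_particles, mu_h, phi, sigma_eta, k, a=0.01):
    """Sketch: propagate filtered log-volatility particles k steps ahead and
    read the VaR off the empirical quantile of the simulated returns.

    Assumes AR(1) log-volatility h_{t+1} = mu_h + phi*(h_t - mu_h) + sigma_eta*eta_t
    and returns r_t = exp(h_t / 2) * eps_t with Gaussian innovations.
    """
    h = h_particles.copy()
    for _ in range(k):                                        # volatility dynamics
        h = mu_h + phi * (h - mu_h) + sigma_eta * rng.standard_normal(h.size)
    r_sim = np.exp(h / 2.0) * rng.standard_normal(h.size)     # return model
    return np.quantile(r_sim, a)                              # empirical a-quantile

# Illustrative particle sample at time T (placeholder for particle filter output)
particles_T = rng.normal(loc=-9.0, scale=0.3, size=10_000)
print(sv_var_forecast(particles_T, mu_h=-9.0, phi=0.97, sigma_eta=0.2, k=10))
```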

For the RiskMetrics approach (Longerstaey and Spencer 1996)—also known as Exponentially Weighted Moving Average (EWMA)—we use the standard GARCH-like case with fixed parameters *ω* = 0.00, *α* = 0.06, and *β* = 0.94 for Equation (1). Since the RiskMetrics model is not stationary, we use the estimate *h*<sub>*T*</sub> for all *k*-days-ahead forecasts.
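
A minimal sketch of the RiskMetrics recursion with these fixed parameters might look as follows; the Gaussian quantile and the variance initialization are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def riskmetrics_var(returns, mu=0.0, a=0.01, lam=0.94):
    """Sketch: EWMA variance h_{t+1} = lam*h_t + (1-lam)*r_t^2 (omega = 0).

    Because the model is not stationary, the last filtered variance h_T is
    used unchanged for every k-days-ahead VaR forecast.
    """
    h = np.var(returns)                        # simple initialization
    for r in returns:
        h = lam * h + (1.0 - lam) * r ** 2
    return mu + np.sqrt(h) * stats.norm.ppf(a)

rng = np.random.default_rng(0)
print(riskmetrics_var(rng.normal(0.0, 0.01, size=1000)))
```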

Lastly, we derive VaR forecasts non-parametrically by Historical Simulation (HS) and semi-parametrically by Filtered Historical Simulation (FHS). The former method takes the past 250 returns as possible scenarios for the future return distribution, and the VaR is calculated as the empirical *a*-quantile of these past returns, i.e.,

$$
\widehat{\mathbf{VaR}}\_{T+k} = Q\_a(\{r\_t\}\_{t=T-249}^T).
$$
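
In code, the HS estimate reduces to the empirical quantile of the trailing 250-return window (a sketch with simulated returns):

```python
import numpy as np

def hs_var(returns, a=0.01, window=250):
    """Sketch: Historical Simulation VaR from the last `window` returns."""
    return np.quantile(np.asarray(returns)[-window:], a)

rng = np.random.default_rng(1)
print(hs_var(rng.standard_t(df=5, size=1000) * 0.01))
```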

For the FHS, we follow Barone-Adesi et al. (1999). The technique combines the aforementioned GARCH and HS approaches. To calculate the VaR, we estimate the parameters of a GARCH model with Skewed Student's-*t* innovations over the whole in-sample period. From these parameters, we derive a *k*-days-ahead volatility forecast. Moreover, we calculate the empirical *a*-quantile from the most recent 250 standardized and centered GARCH residuals. The volatility forecast is then multiplied by the empirical quantile to estimate the VaR:

$$
\widehat{\mathbf{VaR}}\_{T+k} = \widehat{\mu}\_{T+k} + \sqrt{\widehat{h}\_{T+k}}\, Q\_a(\{\widetilde{z}\_t\}\_{t=T-249}^T),
$$

where

$$
\widetilde{z}\_t = \frac{r\_t - \widehat{\mu}}{\sqrt{\widehat{h}\_t}}
$$

is the standardized and centered GARCH residual.

<sup>6</sup> An outline of forecasting the conditional variance can be found, for example, in Klein and Walther (2016) and Walther (2017).
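
A compact sketch of the FHS step, assuming the in-sample GARCH fit has already delivered the mean estimate, the *k*-days-ahead variance forecast, and the standardized residuals (all inputs below are illustrative):

```python
import numpy as np

def fhs_var(std_residuals, mu_Tk, h_Tk, a=0.01, window=250):
    """Sketch: Filtered Historical Simulation VaR.

    std_residuals : standardized, centered GARCH residuals z_t = (r_t - mu) / sqrt(h_t)
    mu_Tk, h_Tk   : k-days-ahead mean and variance forecasts from the GARCH fit
    """
    q_a = np.quantile(np.asarray(std_residuals)[-window:], a)  # empirical a-quantile
    return mu_Tk + np.sqrt(h_Tk) * q_a

rng = np.random.default_rng(2)
print(fhs_var(rng.standard_normal(1000), mu_Tk=2e-4, h_Tk=1.5e-4))
```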

To evaluate the quality of the VaR forecasts of the different models and model classes, we use four different VaR tests: the regulatory traffic light test, the conditional coverage test, the multi-level unconditional coverage test, and a loss-function-based comparison. In what follows, we refer to VaR violations or exceptions for the cases where *r*<sub>*T*+*k*</sub> < VaR<sub>*T*+*k*</sub> for the long trading position and where *r*<sub>*T*+*k*</sub> > VaR<sub>*T*+*k*</sub> for the short trading position.

The Basel traffic light backtest (Basel Committee on Banking Supervision 2016) sorts VaR test results into three zones. The test uses the 1-day-ahead *a* = 1% VaR for the last 250 trading days. A model is considered to be in the *green zone* if four or fewer VaR violations occurred in that period. The *yellow zone* includes models yielding between five and nine exceptions. Lastly, the *red zone* covers all models with more than nine violations. The idea behind this color scheme is that the yellow zone is a buffer area for models that violate the VaR too often merely due to "bad luck" (type I error); banks using such models only have to adjust their calculated minimum capital requirements by a fixed factor. Models in the red zone, however, are not allowed to be used; instead, the standard approach of the Basel framework has to be employed. Here, we calculate the traffic light test on a rolling time frame over the whole out-of-sample period and report how many days each model spends in the green, the yellow, or the red zone, respectively. In doing so, we gain a regulatory perspective on the VaR forecasts.
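
The zone classification itself amounts to counting 1-day-ahead violations over a rolling 250-day window, as the following sketch illustrates (the hit sequence is simulated purely for illustration):

```python
import numpy as np

def traffic_light_zones(violations, window=250):
    """Sketch: Basel traffic light zones on a rolling 250-day window.

    violations : boolean array, True where the return fell below the 1% VaR.
    Returns 'green' (<= 4 violations), 'yellow' (5-9), or 'red' (> 9)
    for every day from `window` onward.
    """
    v = np.asarray(violations, dtype=int)
    zones = []
    for t in range(window, v.size + 1):
        n = v[t - window:t].sum()
        zones.append("green" if n <= 4 else "yellow" if n <= 9 else "red")
    return zones

rng = np.random.default_rng(3)
hits = rng.random(1000) < 0.012            # illustrative hit sequence
zones = traffic_light_zones(hits)
print({z: zones.count(z) for z in ("green", "yellow", "red")})
```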

In order to account for possible clustering of VaR violations, we include the conditional coverage test proposed by Christoffersen (1998). The test combines the unconditional coverage test with a test for the independence of VaR exceptions, where independence is rejected if the VaR violations follow a first-order Markov chain. Thus, the test procedure penalizes models not only for an undesirable number of violations, but also for not adjusting quickly after an exception has occurred. Unfortunately, the test only evaluates a single quantile and not the whole tail of the distribution.
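
A textbook-style sketch of the test statistic (unconditional coverage plus first-order Markov independence) is given below; it ignores edge cases such as samples without any violation and is not the exact implementation used in the paper.

```python
import numpy as np
from scipy import stats

def conditional_coverage_test(hits, a=0.01):
    """Sketch: Christoffersen (1998) conditional coverage test.

    hits : boolean array of VaR violations.
    Combines the unconditional coverage LR with an LR test for first-order
    Markov dependence; LR_cc ~ chi2(2) under the null hypothesis.
    """
    h = np.asarray(hits, dtype=int)
    n1 = h.sum()
    n0 = h.size - n1
    pi_hat = n1 / h.size

    # Unconditional coverage: nominal level a vs. observed hit rate pi_hat
    lr_uc = -2.0 * (n0 * np.log(1 - a) + n1 * np.log(a)
                    - n0 * np.log(1 - pi_hat) - n1 * np.log(pi_hat))

    # Transition counts for first-order Markov dependence
    prev, curr = h[:-1], h[1:]
    n00 = np.sum((prev == 0) & (curr == 0))
    n01 = np.sum((prev == 0) & (curr == 1))
    n10 = np.sum((prev == 1) & (curr == 0))
    n11 = np.sum((prev == 1) & (curr == 1))
    pi01 = n01 / (n00 + n01)
    pi11 = n11 / (n10 + n11) if (n10 + n11) > 0 else 0.0
    pi1 = (n01 + n11) / (n00 + n01 + n10 + n11)

    def loglik(p, zeros, ones):
        return zeros * np.log(1 - p) + ones * np.log(p) if 0 < p < 1 else 0.0

    lr_ind = -2.0 * (loglik(pi1, n00 + n10, n01 + n11)
                     - loglik(pi01, n00, n01) - loglik(pi11, n10, n11))
    lr_cc = lr_uc + lr_ind
    return lr_cc, 1.0 - stats.chi2.cdf(lr_cc, df=2)

rng = np.random.default_rng(4)
print(conditional_coverage_test(rng.random(1000) < 0.01))
```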

The multi-level coverage test of Pérignon and Smith (2008) resembles a joint unconditional coverage test for the three VaR levels *a* = 1%, 2.5%, and 5%. Thus, the test is able to evaluate the whole tail in a single test. It compares the actual coverage ratio (the number of VaR violations relative to the length of the observation period) with the preferred one (i.e., the VaR level *a*) based on a likelihood ratio test. Hence, only the absolute number of VaR violations matters to this test, and it penalizes overly conservative as well as overly optimistic models. However, the test is not designed to cope with clustering of VaR violations.
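
The following sketch writes such a joint unconditional coverage test as a multinomial likelihood-ratio test over the three VaR levels; the exact formulation in Pérignon and Smith (2008) may differ in detail, so this is an assumption-laden illustration rather than a reimplementation.

```python
import numpy as np
from scipy import stats

def multilevel_uc_test(returns, var_1, var_25, var_5):
    """Sketch: joint (multi-level) unconditional coverage test, written as a
    multinomial likelihood-ratio test over the a = 1%, 2.5%, 5% VaR levels.

    Observations are binned by which VaR level they breach, and observed bin
    frequencies are compared with the nominal ones (0.01, 0.015, 0.025, 0.95).
    """
    r = np.asarray(returns)
    n = r.size
    counts = np.array([
        np.sum(r < var_1),                        # below the 1% VaR
        np.sum((r >= var_1) & (r < var_25)),      # between the 1% and 2.5% VaR
        np.sum((r >= var_25) & (r < var_5)),      # between the 2.5% and 5% VaR
        np.sum(r >= var_5),                       # no violation
    ])
    p = np.array([0.01, 0.015, 0.025, 0.95])
    mask = counts > 0                             # avoid log(0) terms
    lr = 2.0 * np.sum(counts[mask] * np.log((counts[mask] / n) / p[mask]))
    return lr, 1.0 - stats.chi2.cdf(lr, df=3)     # one df per VaR level

rng = np.random.default_rng(5)
r = rng.standard_normal(1000)
print(multilevel_uc_test(r, *stats.norm.ppf([0.01, 0.025, 0.05])))
```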

The two presented backtests can only decide whether a particular model passes the requirements of being in the desired VaR coverage zone and not exhibiting clustered violations; they cannot be used to rank the VaR forecasts within a given set of models. Therefore, we incorporate a loss-function-based comparison. Here, we follow the idea of Angelidis and Degiannakis (2007). The authors suggest a two-stage approach: (1) all models are tested with a backtest such as the conditional coverage test; and (2) for the models that pass this test, the following VaR loss function suggested by Lopez (1998) is used:

$$L\_{T+k} = \begin{cases} 1 + \left(r\_{T+k} - \widehat{\text{VaR}}\_{T+k}\right)^2, & \text{if } r\_{T+k} < \widehat{\text{VaR}}\_{T+k} \\ 0, & \text{if } r\_{T+k} \ge \widehat{\text{VaR}}\_{T+k}. \end{cases}$$

In the original procedure, the resulting losses of the models are compared by means of the Superior Predictive Ability test of Hansen (2005). We deviate from this procedure by using the Model Confidence Set (MCS; Hansen et al. 2011) in place of the Superior Predictive Ability test. The MCS yields a set of models with equal predictive ability. Thus, this procedure allows us to directly compare those models that pass the first-stage backtests.
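
A minimal sketch of the second stage is given below: the Lopez loss is computed for each candidate model, and the resulting loss series would then be handed to an MCS routine. The VaR series and the model set are purely illustrative.

```python
import numpy as np

def lopez_loss(returns, var_forecasts):
    """Sketch: Lopez (1998) VaR loss, 1 + (r - VaR)^2 on violation days, 0 otherwise."""
    r = np.asarray(returns)
    v = np.asarray(var_forecasts)
    return np.where(r < v, 1.0 + (r - v) ** 2, 0.0)

rng = np.random.default_rng(6)
r = rng.standard_normal(500) * 0.01
# Illustrative VaR series for two competing models that passed the first-stage backtests
losses = np.column_stack([lopez_loss(r, np.full(500, -0.023)),
                          lopez_loss(r, np.full(500, -0.020))])
# The columns of `losses` would then be passed to a Model Confidence Set routine
# (e.g. an off-the-shelf MCS implementation) to obtain the set of models with
# equal predictive ability.
print(losses.mean(axis=0))
```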
