Article

Information Entropy and Measures of Market Risk

1 Department of Statistics and Econometrics, Bucharest University of Economic Studies, Piata Romana 6, Bucharest 010371, Romania
2 ICMA Centre, Henley Business School, University of Reading, Whiteknights, Reading RG6 6BA, UK
* Author to whom correspondence should be addressed.
Entropy 2017, 19(5), 226; https://doi.org/10.3390/e19050226
Submission received: 29 March 2017 / Revised: 8 May 2017 / Accepted: 11 May 2017 / Published: 16 May 2017

Abstract

In this paper we investigate the relationship between the information entropy of the distribution of intraday returns and intraday and daily measures of market risk. Using data on the EUR/JPY exchange rate, we find a negative relationship between entropy and intraday Value-at-Risk, and also between entropy and intraday Expected Shortfall. This relationship is then used to forecast daily Value-at-Risk, using the entropy of the distribution of intraday returns as a predictor.

1. Introduction

Entropy, as a measure of uncertainty of a system, is widely used in many applications, from physics to social sciences. As stated by the second law of thermodynamics, “this entropy cannot decrease in any process in which the system remains adiabatically isolated, i.e., shielded from heat exchange with its environment” [1].
From this point of view, the stock market can be regarded as a non-isolated system, subject to a constant exchange of information with the real economy. In the terminology of information theory (Avery [2]), the information entropy of the system cannot increase other than by exchanging information with the external environment. The impact of incoming information on stock market entropy can be illustrated by the collective behaviour triggered by extremely bad news: most traders will tend to sell, thus reducing the overall market entropy.
There is a large body of theoretical and empirical work on the relationship between entropy and financial markets. Entropy has been used as a measure of stock market efficiency in Zunino et al. [3], since high values of entropy are associated with randomness in the evolution of stock prices, in line with the Efficient Market Hypothesis. A variant called normalized entropy—a dimensionless measure—can be used to assess the relative efficiency of a stock market. Risso [4], on the other hand, relates entropy to stock market crashes, his main result being that for some markets the probability of a crash increases as market efficiency, measured by entropy, decreases. An application to foreign exchange markets is that of Oh et al. [5], who use the approximate entropy as a measure of the relative efficiency of FX markets. Their results suggest that market efficiency measured by approximate entropy is correlated with the liquidity level of foreign exchange markets.
Treating the stock market as a complex open system far from equilibrium, Wang et al. [6] analyse the interactions among agents based on generalized entropy and, using nonlinear evolutionary dynamic equations derived from the Maximum Generalized Entropy Principle, demonstrate the structural evolution of the stock market system.
A related use of entropy is to study the predictability of stock market returns, as in Maasoumi and Racine [7], who employ an entropy measure for the dependence between stock returns. They find that the entropy is capable of detecting nonlinear dependence between the return series. Billio et al. [8] use entropy to construct an early warning indicator for the systemic risk of the banking system. They estimate the entropy of marginal systemic risk measures such as Marginal Expected Shortfall, Delta CoVaR and network connectedness. Using various definitions of entropy (Shannon, Tsallis and Rényi), they show that entropy measures have the ability to forecast banking crises.
Dionisio et al. [9] provide a comparison between the theoretical and empirical properties of the entropy and the variance as measures of uncertainty (although volatility can be considered a measure of risk in finance, it is a measure of uncertainty in statistical terms; here, measures that are symmetric by nature we call measures of uncertainty, and tail measures that consider certain negative outcomes we call measures of risk). They conclude that the entropy is a more general measure of uncertainty than the variance or standard deviation, as originally suggested by Philippatos and Wilson [10] and Ebrahimi et al. [11]. One explanation is that the entropy is related to higher-order moments of a distribution, unlike the variance, so it can be a better measure of uncertainty. Furthermore, as Dionisio et al. [9] argue, both measures reflect concentration but use different metrics: while the variance measures the concentration around the mean, the entropy measures the dispersion of the density irrespective of the location of the concentration (see also [12,13]). Moreover, as we show in this paper, the entropy of a distribution function is strongly related to its tails, a feature that is particularly important for heavy-tailed distributions or for distributions with an infinite second-order moment (such as the non-Gaussian alpha-stable distributions), for which the sample variance is not a meaningful estimator.
Entropy-based measures have also been compared to the classical coefficient of correlation. Liu et al. [14] use a measure called cross-sample entropy to assess the degree of asynchrony between foreign exchange rates, concluding that their measure describes the relationship between time series better than the classical correlation measure.
Entropy can also be applied in the area of risk management, as described in Bowden [15]. The author proposes a new concept called directional entropy and uses it to improve the performance of classical measures like value-at-risk (VaR) in capturing regime changes. An interesting application is the use of measures based on the Tsallis entropy as a warning indicator of financial crises, as in Gradojevic and Gencay [16], Gencay and Gradojevic [17] and Gradojevic and Caric [18]. A further application of entropy is option pricing, as in Stutzer [19] and Stutzer and Kitamura [20]. Entropy-based risk measures have also been used in a decision-making model context in Yang and Qiu [21].
Besides [5] and [14] above, other innovative approaches involving entropy and FX markets can be found in Ishizaki and Inoue [22], who show how entropy can signal turning points in exchange rate regimes. Furthermore, Bekiros [23] and Bekiros and Marcellino [24] use entropy in wavelet analysis, revealing the complex dynamics across different timescales in the FX markets.
The main objective of this paper is (1) to study the link between the entropy of the distribution function of intraday returns, and intraday and daily measures of market risk, namely VaR and Expected Shortfall (ES); and then (2) to demonstrate their VaR-forecasting ability. The entropy is considered to have more informational content than the standard measures of risk and it is also more reactive to new information. This paper uses the concept of entropy of a function (Lorentz [25]) in order to estimate the entropy of a distribution function, using a non-parametric approach, with an application to the FX market. The main advantage of this approach is that the entropy can be estimated for any distribution, without any prior knowledge about its functional form, which is especially important for distributions with no closed form for the probability distribution function.
The rest of this paper is organized as follows: in Section 2 we provide the theoretical background defining the entropy of a distribution function and measures of market risk and uncertainty. Section 3 presents the results of the empirical analysis whilst Section 4 concludes.

2. The Entropy of a Distribution Function and Measures of Market Risk and Uncertainty

2.1. The Entropy and Intraday Measures of Market Risk and Uncertainty

The entropy, as a measure of uncertainty, can be defined using different metrics (Shannon Entropy, Tsallis Entropy, Rényi Entropy etc.), based on the informational content of a discrete or continuous random variable (see Zhou et al. [26] for a comprehensive review on entropy measures used in finance). The most common entropy metric, the Shannon Information Entropy, quantifies the expected value of information contained in a discrete distribution ([27]):
Definition 1 (Shannon information entropy).
If $X$ is a discrete random variable with probability distribution $X : \begin{pmatrix} x_1 & \dots & x_n \\ p_1 & \dots & p_n \end{pmatrix}$, where $p_i = P(X = x_i)$, $0 \le p_i \le 1$ and $\sum_i p_i = 1$, then the Shannon information entropy is defined as follows:
$$H(X) = -\sum_i p_i \log_2 p_i. \qquad (1)$$
It reaches its maximum value of $H(X) = \log_2 n$ for the uniform distribution, while the minimum of 0 is attained for a distribution where one of the probabilities $p_i$ is 1 and the rest are 0. In other words, high (low) levels of entropy are obtained for probability distributions with high (low) levels of uncertainty. If $X$ is a continuous random variable with probability density function $f(x)$, then we can define the differential entropy as:
$$H(f) = -\int_A f(x) \log_2 f(x)\, dx, \quad A = \mathrm{supp}(X). \qquad (2)$$
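As a brief numerical illustration of the discrete entropy in (1) (a minimal Python sketch, not part of the original paper), the uniform distribution over n outcomes attains the maximum $\log_2 n$, while a degenerate distribution has zero entropy:

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (in bits) of a discrete probability vector p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                            # 0 * log(0) is taken as 0
    return -np.sum(p * np.log2(p))

print(shannon_entropy([0.25] * 4))          # uniform over 4 outcomes -> 2.0 bits (= log2 4)
print(shannon_entropy([1.0, 0, 0, 0]))      # degenerate distribution -> 0.0 bits
```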
Unlike the Shannon entropy, the differential entropy does not possess certain desirable properties, namely invariance to linear transformations and non-negativity ([25,28]). However, an analogue of the Shannon entropy for a function can be defined through a transformation called quantization. We present this transformation, as in Lorentz [25].
Definition 2 (sampled function).
Let $f : I = [a,b] \to \mathbb{R}$ be a real-valued continuous function, let $n \in \mathbb{N}$ be fixed and let $x_i = a + (i + 1/2)h$ for $i = 0, \dots, n-1$, where $h = (b-a)/n$. Then the sampled function of $f$ is:
$$S_n(f)(i) = f(x_i), \quad i = 0, \dots, n-1. \qquad (3)$$
If $f : I = [a,b] \to \mathbb{R}$ is essentially bounded, then the sampled function is
$$S_n(f)(i) = h^{-1} \int_{x_i - h/2}^{x_i + h/2} f(x)\, dx, \quad i = 0, \dots, n-1. \qquad (4)$$
The sampling defined in (3) is called point sampling, whilst the one in (4) is called mean sampling.
Definition 3 (quantization).
The quantization process of a function refers to creating a simple function that approximates the original function. Let q > 0 be a quantum. Then the following function defines a quantization of f:
$$Q_q(f)(x) = (i + 1/2)q, \quad \text{if } f(x) \in [iq, (i+1)q). \qquad (5)$$
Definition 4 (entropy of a function at a quantization level q).
Let $f$ be a measurable and essentially bounded real-valued function defined on $[a,b]$ and let $q > 0$. Also let $I_i = [iq, (i+1)q)$ and $B_i = f^{-1}(I_i)$. Then the entropy of $f$ at quantization level $q$ is
$$H_q(f) = -\sum_i \mu(B_i) \log_2(\mu(B_i)), \qquad (6)$$
where μ is the Lebesgue measure.
In light of this definition, we can calculate the entropy of any continuous function on a compact interval. The following theorem provides a conceptual framework for defining an estimator of the entropy of a continuous function.
Theorem (Lorentz [25]).
“Let $f$ be continuous for point sampling, or measurable and essentially bounded for mean sampling. The sampling spacing is $1/n$. Let $S_n(f)$ be the corresponding sampling from (3) and (4), respectively. Fix $q > 0$ and let $Q_q S_n$ be the quantization of the samples with resolution $q$ as given in (5). Denote the number of occurrences of the value $(i + 1/2)q$ in $Q_q S_n$ by $c_n(i) = \mathrm{card}\{(i+1/2)q \in Q_q S_n\}$ and the relative probability of occurrence of the value $i$ by $p_n(i) = \frac{c_n(i)}{\sum_j c_n(j)} = \frac{c_n(i)}{n}$. Then we have the following result:
$$\lim_{n \to \infty} -\sum_i p_n(i) \log_2 p_n(i) = H_q(f).” \qquad (7)$$
The above theorem assures us that, regardless of the sampling and quantization, we obtain a consistent estimator of the entropy of a function. As such, we can define the entropy of the distribution function of a continuous random variable on a compact interval. In general, continuous random variables do not have a compact support of finite Lebesgue measure; in order to meet the assumptions of Lorentz's theorem, we define the entropy of the distribution function on a compact interval. In what follows we assume that we are dealing with a continuous random variable X whose support is the interval [0,1]. Then its distribution function F: [0,1] → [0,1] is continuous and the conditions of Lorentz's theorem are fulfilled, so we can define the entropy of the distribution function $H_q(F)$ at quantization level q > 0.

2.1.1. Entropy of a Distribution Function

Let $I_i = [iq, (i+1)q)$ and $B_i = F^{-1}(I_i)$. Then, according to Lorentz's theorem, the entropy of $F$ with quantum $q$ is $H_q(F) = -\sum_i \mu(B_i)\log_2(\mu(B_i))$, where $\mu$ is the Lebesgue measure. In fact, $\bigcup_i I_i \supseteq [0,1]$ with the $I_i$ disjoint, and $\bigcup_i B_i = [0,1]$ with the $B_i$ disjoint as well.
Note: In general, for any distribution function defined on a set $A$, not necessarily of finite measure, we can consider the restriction of this function to some compact interval, $F|_{[a,b]} : [a,b] \to [0,1]$, $F|_{[a,b]}(x) = F(x)$. Then $F|_{[a,b]}$ satisfies the conditions of Lorentz's theorem, so we can define the entropy measure.
The framework described above can be applied to estimate the entropy of the distribution function of a continuous random variable X. The cumulative distribution function (CDF) is $F(x) = P(X \le x)$. The distribution function is defined on the support set of X, takes values in [0,1] and has the following properties:
(i) F is right continuous;
(ii) F is monotonically increasing;
(iii) $\lim_{x \to -\infty} F(x) = 0$;
(iv) $\lim_{x \to +\infty} F(x) = 1$.
If, in addition, F is absolutely continuous, then there is a Lebesgue-integrable function f(x) such that $F(b) - F(a) = P(a < X < b) = \int_a^b f(x)\,dx$, with f(x) the density function of X. Also, $F(x) = \int_{-\infty}^x f(t)\,dt$.
In practice, the analytical form of the distribution function is often unknown; in such cases a robust nonparametric approach can be used.

2.1.2. Empirical Distribution Function

The cumulative distribution function can be estimated in a simple way by using the histogram estimator of the probability density function [29]. The basic algorithm to obtain the estimator of the CDF from an ordered sample $X_1 < \dots < X_n$ is as follows:
Step 1.
Let $x_0$ be a fixed point and let $h > 0$ be the bin width;
Step 2.
Define the bins as $I_h^m = [x_0 + mh,\ x_0 + (m+1)h)$, $m \in \mathbb{Z}$, obtaining a partition of the real line;
Step 3.
For $x \in \mathbb{R}$, let $m \in \mathbb{Z}$ be such that $x \in I_h^m$ and let $A_x = \{X_i \mid X_i \in I_h^m\}$;
Step 4.
The histogram estimator of the pdf is defined as $\hat{f}(x) = \frac{1}{nh}\,\mathrm{card}\, A_x$, $x \in \mathbb{R}$;
Step 5.
The empirical estimator of the distribution function (CDF) is:
$$\hat{F}_n(x) = \begin{cases} 0, & x < X_1 \\ \sum_{X_j < x} \hat{f}(X_j)\, h, & x \in [X_{i-1}, X_i) \\ 1, & x \ge X_n. \end{cases}$$
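A minimal Python sketch of this histogram-based construction follows (ours, not the authors' code; it is parameterized by the number of bins rather than by $(x_0, h)$, and the CDF is obtained by accumulating $\hat{f} \cdot h$ over the bins, which is one reading of Step 5):

```python
import numpy as np

def histogram_cdf(sample, nbins=None):
    """Histogram estimator of the pdf (Step 4) and a CDF estimate obtained by
    accumulating f_hat * h over the bins (one reading of Step 5)."""
    x = np.asarray(sample, dtype=float)
    n = x.size
    if nbins is None:
        nbins = int(np.sqrt(n))            # illustrative number of bins (an assumption)
    counts, edges = np.histogram(x, bins=nbins)
    h = edges[1] - edges[0]                # bin width
    f_hat = counts / (n * h)               # histogram pdf estimate per bin
    F_hat = np.cumsum(f_hat * h)           # CDF estimate at the right bin edges
    return edges[1:], f_hat, F_hat

grid, f_hat, F_hat = histogram_cdf(np.random.standard_normal(400))
```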

2.1.3. Kernel Density Estimator

To estimate the distribution function, we can use Kernel Density Estimation (KDE) methods. If $X_1, \dots, X_n$ is a sample of i.i.d. observations, then an estimator of the distribution function is:
$$\hat{F}_n(x) = \int_{-\infty}^x \hat{f}(u)\, du = \int_{-\infty}^x \frac{1}{nh} \sum_{i=1}^n K\!\left(\frac{u - X_i}{h}\right) du = \frac{1}{nh} \sum_{i=1}^n \int_{-\infty}^x K\!\left(\frac{u - X_i}{h}\right) du,$$
where $K$ is a real function with the following properties: $K(x) \ge 0$ for $x \in \mathbb{R}$, $K(-x) = K(x)$ for $x \in \mathbb{R}$, $\int_{\mathbb{R}} K(x)\, dx = 1$ and $\int_{\mathbb{R}} x K(x)\, dx = 0$.
Such a function is called the kernel and is usually chosen from the known probability density functions. The parameter h is the scale parameter (also called the smoothing parameter or bandwidth), the choice of which determines the estimate. The asymptotic properties of the kernel estimator above have been studied in numerous papers [30,31], establishing the uniform convergence and convergence in probability to the theoretical distribution function, regardless of the form of the kernel used.
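As an illustration (our sketch, not from the paper), with a Gaussian kernel the integral above has the closed form $\hat{F}_n(x) = \frac{1}{n}\sum_{i=1}^n \Phi\!\left(\frac{x - X_i}{h}\right)$, where $\Phi$ is the standard normal CDF; the kernel choice and the bandwidth rule below are assumptions, not prescriptions of the paper:

```python
import numpy as np
from scipy.stats import norm

def kernel_cdf(sample, x, h=None):
    """Kernel estimator of the distribution function with a Gaussian kernel:
    integrating K((u - X_i)/h)/(n h) up to x gives (1/n) * sum Phi((x - X_i)/h)."""
    X = np.asarray(sample, dtype=float)
    n = X.size
    if h is None:
        h = 1.06 * X.std(ddof=1) * n ** (-1 / 5)   # Silverman-type bandwidth (assumption)
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return norm.cdf((x[:, None] - X[None, :]) / h).mean(axis=1)

F_at_0 = kernel_cdf(np.random.standard_normal(400), 0.0)   # roughly 0.5 for a N(0,1) sample
```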
As a special case, consider the uniform distribution. Let X be a uniformly distributed random variable on the interval [0,1], with distribution function:
$$F(x) = \begin{cases} 0, & x < 0 \\ x, & x \in [0,1) \\ 1, & x \ge 1. \end{cases}$$
Then the entropy of the function $F|_{[0,1]}(x) = \begin{cases} x, & x \in [0,1) \\ 1, & x = 1 \end{cases}$ is $H_q(F|_{[0,1]}) = \log_2 n$, which is the maximum value of the entropy. Next, we present the estimation methodology for the entropy of a distribution function.

2.1.4. Estimation of the Entropy of a Distribution Function

Let $X_0, \dots, X_{n-1}$ be a sample of i.i.d. observations drawn from the distribution F. In order to ensure the comparability of results between various estimates, we assume that the observed values are normalized to the interval [0,1] through a transformation of the type $X_i \mapsto \frac{X_i - X_{\min}}{X_{\max} - X_{\min}}$.
The following steps present the estimation of the entropy of a distribution function (a similar approach, but in a different context, can be found in [27,28] and [32]):
Step 1.
Estimate the distribution function, obtaining the values $\hat{F}_n(X_i)$ for $i = 0, \dots, n-1$;
Step 2.
Sample from the distribution function, using the sampled function $S_n(\hat{F}_n)(i) = \hat{F}_n(X_i)$ for $i = 0, \dots, n-1$;
Step 3.
Define a quantum $q > 0$; then $Q_q S_n(\hat{F}_n)(j) = (i + 1/2)q$ if $\hat{F}_n(X_j) \in [iq, (i+1)q)$;
Step 4.
Compute the probabilities $p_n(i) = \frac{c_n(i)}{\sum_j c_n(j)} = \frac{c_n(i)}{n} = \frac{\mathrm{card}\{\hat{F}_n(X_j) \in [iq, (i+1)q)\}}{n}$;
Step 5.
Estimate the entropy of the distribution function: $H_q(\hat{F}_n) = -\sum_i p_n(i) \log_2 p_n(i)$.
As previously shown, the entropy of the distribution function reaches its maximum value for the uniform distribution. One can therefore define a dimensionless measure of uncertainty, the normalized entropy: the ratio between the entropy of the distribution function and the entropy of the uniform distribution:
$$NH_q(\hat{F}_n) = \frac{-\sum_i p_n(i) \log_2 p_n(i)}{\log_2 n} \in [0,1].$$
In the following sections, we will refer to the entropy of the distribution function as the normalized entropy of the distribution function: $H(F) \equiv NH_q(F) \in [0,1]$.
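A compact Python sketch of this estimator follows. It is our reading of the procedure, not the authors' code: the estimated CDF is point-sampled on an equally spaced grid of $[0,1]$, in the spirit of Lorentz's point sampling, before quantization, and the result is normalized by the maximum entropy attainable for the chosen quantum; the variable names and the grid choice are ours.

```python
import numpy as np

def entropy_of_cdf(sample, q=0.05):
    """Normalized entropy of a distribution function: normalize the data to [0,1],
    evaluate the empirical CDF on an equally spaced grid (Lorentz point sampling),
    quantize the CDF values with quantum q, and compute the Shannon entropy of the
    resulting cell frequencies, scaled by its maximum attainable value."""
    x = np.asarray(sample, dtype=float)
    n = x.size
    x = (x - x.min()) / (x.max() - x.min())                     # normalize to [0, 1]
    grid = (np.arange(n) + 0.5) / n                             # equally spaced sampling points
    F = np.searchsorted(np.sort(x), grid, side="right") / n     # empirical CDF on the grid
    n_cells = int(np.ceil(1 / q))
    cells = np.minimum(np.floor(F / q).astype(int), n_cells - 1)  # quantization with quantum q
    p = np.bincount(cells) / n                                  # cell probabilities p_n(i)
    p = p[p > 0]
    H = -np.sum(p * np.log2(p))                                 # entropy of the CDF
    return H / np.log2(n_cells)                                 # normalized entropy

print(entropy_of_cdf(np.random.uniform(size=400)),              # close to 1 (uniform benchmark)
      entropy_of_cdf(np.random.standard_cauchy(400)))           # heavier tails -> lower value
```

With q = 0.05 (the quantum used later in the paper) the uniform benchmark comes out close to 1 and heavy-tailed samples come out markedly lower, which is the pattern reported in Table 1.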

2.1.5. Properties and Asymptotic Behaviour of the Entropy of a Distribution Function

To illustrate the properties of the entropy estimator, we performed a Monte Carlo experiment, estimating the entropy for simulated distributions, using a sample of 400 observations and replicating the experiment 1000 times. We simulated several α-stable distributions, allowing for higher probabilities in the tails. Stable distributions have some important properties: they allow for heavy tails and, moreover, any linear combination of independent stable variables with the same characteristic exponent follows a stable distribution, up to scale and location parameters [33]; the Gaussian distribution is a particular case of a stable distribution.
In the literature there are several parameterizations of α-stable distributions. In this paper we use the S1 parameterization [33]: a random variable X follows an α-stable distribution $S(\alpha, \beta, \gamma, \delta; 1)$ if its characteristic function is:
$$\varphi(t) = E\left[e^{itX}\right] = \begin{cases} \exp\left(-\gamma^{\alpha} |t|^{\alpha} \left[1 - i\beta \tan\left(\frac{\pi\alpha}{2}\right)\mathrm{sign}(t)\right] + i\delta t\right), & \alpha \ne 1 \\ \exp\left(-\gamma |t| \left[1 + i\beta \frac{2}{\pi}\,\mathrm{sign}(t)\ln(|t|)\right] + i\delta t\right), & \alpha = 1. \end{cases}$$
In the above notation, $\alpha \in (0, 2]$ is the characteristic parameter (for a normal distribution $\alpha = 2$), $\beta \in [-1, 1]$ is the skewness parameter, $\gamma \in (0, \infty)$ is the scale parameter and $\delta \in \mathbb{R}$ is the location parameter. To simulate an α-stable distribution $S(\alpha, \beta, \gamma, \delta; 1)$ we used the algorithm from [34] (see Appendix A). The results of the simulations are presented in Table 1. The entropy reaches its maximum value for the uniform distribution and, as the α parameter decreases, the entropy of the stable distribution decreases too. As expected, low entropy values are associated with heavy-tailed distributions; as the tail probability increases, the expected value of the entropy goes down. Figure 1 presents the relationship between the α parameter of a stable distribution and the entropy. Before turning our attention to the link between entropy and measures of risk and uncertainty, we consider the optimal sampling frequency to efficiently compute returns while limiting the effect of market noise.

2.1.6. Optimal Sampling Frequency

When dealing with intraday data, one problem is to separate the fundamental dynamics from market noise. Assuming that the trading price can be decomposed into an efficient component and a noise component, reflecting market microstructure frictions, one way to distinguish between the informational content of these components is to choose an optimal sampling frequency for the intraday data. Following Bandi and Russell [35], we assume that the observed logprice is given by:
$$\tilde{p}_i = p_i + \eta_i, \quad i = 1, \dots, n,$$
where n is the number of trading days, $p_i$ is the efficient log-price and $\eta_i$ is the microstructure noise. Now we divide the trading day into M subperiods and define the observed intraday log-returns as:
$$\tilde{r}_{ij} = \tilde{p}_{i-1+j\delta} - \tilde{p}_{i-1+(j-1)\delta}, \quad j = 1, \dots, M,$$
where δ = 1 / M is the sampling frequency of the intraday returns, used to estimate the daily entropy. Then the intraday returns can be decomposed into an unobserved efficient return and a market microstructure disturbance as:
$$\tilde{r}_{ij} = r_{ij} + \varepsilon_{ij}, \quad \text{with } r_{ij} = p_{i-1+j\delta} - p_{i-1+(j-1)\delta}, \quad \varepsilon_{ij} = \eta_{i-1+j\delta} - \eta_{i-1+(j-1)\delta}.$$
In terms of the probability density function, if the unobserved efficient return and the market microstructure disturbance are independent, then the probability density function of the observed returns is the convolution of the probability density functions of unobserved returns and microstructure noise:
$$f_{\tilde{r}_{ij}}(x) = f_{r_{ij} + \varepsilon_{ij}}(x) = \int f_{r_{ij}}(y)\, f_{\varepsilon_{ij}}(x - y)\, dy = \left(f_{r_{ij}} * f_{\varepsilon_{ij}}\right)(x).$$
Assuming that the entropy of the distribution function is estimated from intraday data with a quantum $q = 0.05$, while the distribution function is estimated using the empirical distribution function, the following measures of risk and uncertainty can be defined for a significance level $\alpha \in (0,1)$ (a computational sketch is given after the list):
(1)
Intraday VaR at significance level α, computed from observations at frequency ν, defined via the α-quantile of the distribution of intraday returns so that the following is satisfied:
$$P\left(\tilde{r}_{ij,\nu} < -IVaR_{\alpha,\nu}\right) = \alpha;$$
(2)
Intraday ES at significance level α, computed from observations at frequency ν, defined as:
$$IES_{\alpha,\nu} = \frac{1}{\alpha} \int_0^{\alpha} IVaR_{\gamma,\nu}\, d\gamma;$$
(3)
Intraday Realized Volatility, computed from the intraday returns at frequency ν as:
$$IRV_{\nu} = \sum_{j=1}^{M} \tilde{r}_{ij,\nu}^2.$$
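The sketch below (ours, not from the paper) shows how the three intraday measures might be computed from one day of intraday returns, using empirical quantiles and reporting IVaR and IES as positive numbers; the simulated input vector is a placeholder:

```python
import numpy as np

def intraday_risk_measures(returns, alpha=0.01):
    """Empirical IVaR, IES and realized volatility from one day of intraday returns."""
    r = np.asarray(returns, dtype=float)
    q_alpha = np.quantile(r, alpha)          # alpha-quantile of intraday returns
    ivar = -q_alpha                          # IVaR: P(r < -IVaR) = alpha
    ies = -r[r <= q_alpha].mean()            # IES: average loss beyond the IVaR level
    irv = np.sum(r ** 2)                     # intraday realized volatility (variance form)
    return ivar, ies, irv

ivar, ies, irv = intraday_risk_measures(np.random.standard_normal(144) * 1e-3)
```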
If $[1, T]$ is the time horizon of the daily data, then we can compute daily estimates of the following four measures: the entropy $H_{\delta;t}$, the intraday Value at Risk $IVaR_{\alpha,\nu;t}$, the intraday Expected Shortfall $IES_{\alpha,\nu;t}$ and the intraday Realized Volatility $IRV_{\nu;t}$.
In order to assess the relationship between the entropy of the distribution of intraday returns and intraday measures of market risk, we estimate static and dynamic linear regression models using entropy as (one of) the explanatory variable(s).

2.1.7. Static Models

The first class of models studies the explanatory power of the (daily) entropy for different measures of market risk and uncertainty—daily observations of intraday ES, intraday VaR and intraday Realized Volatility estimated at different time scales ν—by running the following regressions:
$$\begin{cases} IES_{\alpha,\nu;t} = \beta_0 + \beta_1 H_{\delta;t} + \varepsilon_t \\ IVaR_{\alpha,\nu;t} = \alpha_0 + \alpha_1 H_{\delta;t} + \upsilon_t \\ IRV_{\nu;t} = \gamma_0 + \gamma_1 H_{\delta;t} + \zeta_t. \end{cases} \qquad (9)$$

2.1.8. Dynamic Models

The second class of models aims to check whether the entropy of the distribution of intraday returns can provide additional information to that contained in the latest observation of the risk measures by estimating the following regressions:
$$\begin{cases} IES_{\alpha,\nu;t} = \beta_0 + \beta_1 H_{\delta;t-1} + \beta_2 IES_{\alpha,\nu;t-1} + \varepsilon_t \\ IVaR_{\alpha,\nu;t} = \alpha_0 + \alpha_1 H_{\delta;t-1} + \alpha_2 IVaR_{\alpha,\nu;t-1} + \upsilon_t \\ IRV_{\nu;t} = \gamma_0 + \gamma_1 H_{\delta;t-1} + \gamma_2 IRV_{\nu;t-1} + \zeta_t. \end{cases} \qquad (10)$$
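A sketch of how the static regressions (9) and the dynamic regressions (10) could be estimated for one dependent variable is given below (our code, not the authors' SAS programs; the daily series H and IVaR are simulated placeholders, and White-type robust standard errors are requested to mirror the note of Table 2):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical daily series: entropy H and one intraday risk measure, IVaR
rng = np.random.default_rng(0)
df = pd.DataFrame({"H": rng.uniform(0.3, 1.0, 500)})
df["IVaR"] = 0.01 - 0.005 * df["H"] + 0.001 * rng.standard_normal(500)

# Static model (9): IVaR_t = a0 + a1 * H_t + error, with White (HC) standard errors
static = sm.OLS(df["IVaR"], sm.add_constant(df["H"])).fit(cov_type="HC0")

# Dynamic model (10): IVaR_t = a0 + a1 * H_{t-1} + a2 * IVaR_{t-1} + error
X = sm.add_constant(pd.DataFrame({"H_lag": df["H"].shift(1),
                                  "IVaR_lag": df["IVaR"].shift(1)})).dropna()
dynamic = sm.OLS(df["IVaR"].iloc[1:], X).fit(cov_type="HC0")
print(static.params, dynamic.params, dynamic.rsquared_adj)
```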

2.2. Quantile Regressions

Classical linear regression estimates the conditional mean of a dependent variable given the values of the explanatory variables. However, the presence of outliers and/or heteroskedasticity can affect the results. Also, in many situations not only the conditional mean of a variable is of interest, but its entire conditional distribution, in particular its conditional quantiles. For a random variable Y with distribution function F, the τth quantile is defined as the inverse of the distribution function, $Q(\tau) = \inf\{y : F(y) \ge \tau\}$, where $\tau \in (0,1)$.
The τth sample quantile $\xi(\tau)$ is the minimizer of the expression below:
$$\xi(\tau) = \arg\min_{\xi} \sum_{i=1}^n \rho_{\tau}(y_i - \xi),$$
where $\rho_{\tau}(z) = z\left(\tau - I(z < 0)\right)$, $\tau \in (0,1)$, and $I(\cdot)$ is the indicator function.
For a given $\tau \in (0,1)$, quantile regression estimates the linear conditional quantile function $Q(\tau \mid X = x) = x'\beta(\tau)$ by solving:
$$\hat{\beta}(\tau) = \arg\min_{\beta} \sum_{i=1}^n \rho_{\tau}(y_i - x_i'\beta).$$
The quantity $\hat{\beta}(\tau)$ is the estimate of the τth regression quantile.

2.2.1. Quantile Regressions for Intraday VaR

To better understand the effect of the entropy on IVaR, we estimate a quantile regression model using the entropy of the distribution of intraday returns as explanatory variable. The model is:
$$Q_{IVaR}(\tau \mid H_{\delta;t} = x) = \beta_0 + \beta_1 H_{\delta;t}, \qquad (11)$$
where, for a given time scale ν and significance level α:
$$Q_{IVaR}(\tau) = \inf\{IVaR_{\alpha,\nu;t} : F(IVaR_{\alpha,\nu;t}) \ge \tau\}.$$

2.2.2. Quantile Regressions for Daily Returns

Quantile regressions can be used to assess the relationship between extreme values of daily returns and the entropy of the distribution of intraday returns. We estimate the following model:
$$Q_R(\tau \mid H_{\delta;t} = x) = \beta_0 + \beta_1 H_{\delta;t}, \qquad (12)$$
where, denoting the daily log-returns by $R_t$, the quantile of the returns is given by:
$$Q_R(\tau) = \inf\{R_t : F(R_t) \ge \tau\}.$$
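Models (11) and (12) are linear quantile regressions; a sketch of the daily-return specification (12) using statsmodels' QuantReg is given below (our code; the entropy series H and the returns R are simulated placeholders):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical daily data: entropy H and daily log-returns R
rng = np.random.default_rng(1)
H = pd.Series(rng.uniform(0.3, 1.0, 500), name="H")
R = pd.Series(0.01 * (H - 1.0) * np.abs(rng.standard_normal(500)), name="R")

# Model (12): the tau-quantile of daily returns as a linear function of the entropy
tau = 0.01
qreg = sm.QuantReg(R, sm.add_constant(H)).fit(q=tau)
print(qreg.params)        # beta_0 and beta_1 for the 1% conditional quantile
```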

2.3. Forecasting Daily VaR Using Entropy

The daily VaR at probability level α can be defined by the following equation:
$$\Pr\left(R_t < -VaR_{\alpha;t}\right) = \alpha.$$
VaR can be forecasted using various methods. Many VaR measures and forecasts fail to react fast enough to new information (market shocks) and so often underestimate risk. Entropy, on the other hand, is very sensitive to new information, so it can be used to update and improve VaR forecasts. In order to forecast the daily entropy-based VaR, with $t \in \{k+1, \dots, k+w\}$, $k \in \{0, \dots, T-w+1\}$ and T the number of daily returns, Equation (13) below is estimated using a rolling window of length w (we tried adding extra lags of the entropy in the regression, but they were not significant; the optimal lag length was found to be one):
$$Q_{R,t}(\alpha) = \beta_0 + \beta_1 H_{\delta;t-1}. \qquad (13)$$
Estimating this on the time interval [k + 1, k + w], the parameter estimates $\hat{\beta}_0^k$ and $\hat{\beta}_1^k$ are obtained. Then the forecast of VaR for the next trading day is given by the following:
$$VaR_{\alpha;k+w+1} = -\hat{\beta}_0^k - \hat{\beta}_1^k H_{\delta;k+w}. \qquad (14)$$
Equation (13) can be extended to include an autoregressive term for the VaR as below (it is not required to add extra lags of the quantile in the regression as the quantile is highly autocorrelated):
$$Q_{R,t}(\alpha) = \beta_0 + \beta_1 H_{\delta;t-1} + \beta_2 Q_{R,t-1}(\alpha). \qquad (15)$$
Then the forecast of VaR for the next trading day can be computed as:
$$VaR_{\alpha;k+w+1} = -\hat{\beta}_0^k - \hat{\beta}_1^k H_{\delta;k+w} + \hat{\beta}_2^k VaR_{\alpha;k+w}. \qquad (16)$$
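A rolling-window sketch of the entropy-based forecast (13)–(14) is given below (our code and indexing conventions, not the authors' implementation; the paper uses w = 1000 and α = 1%, and the input series are hypothetical):

```python
import numpy as np
import statsmodels.api as sm

def entropy_var_forecasts(R, H, alpha=0.01, w=1000):
    """Rolling one-day-ahead VaR forecasts from Equations (13)-(14): regress R_t on
    H_{t-1} at quantile alpha over the last w days, then set the VaR forecast for
    the next day to -(b0 + b1 * latest entropy)."""
    R, H = np.asarray(R, float), np.asarray(H, float)
    out = np.full(R.shape, np.nan)
    for i in range(w + 1, len(R)):
        y = R[i - w:i]                            # returns over the estimation window
        x = sm.add_constant(H[i - w - 1:i - 1])   # lagged entropy H_{t-1}
        b0, b1 = sm.QuantReg(y, x).fit(q=alpha).params
        out[i] = -(b0 + b1 * H[i - 1])            # Equation (14): forecast for day i
    return out

# var_fc = entropy_var_forecasts(daily_returns, daily_entropy)   # hypothetical inputs
```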
The results obtained from models (13) and (15), with the forecasting formulae (14) and (16), respectively, are compared with the VaR forecasts obtained from a historical VaR model and from a model based on the GARCH(1,1) specification below:
$$\begin{cases} R_t = \mu + \varepsilon_t \\ \varepsilon_t = \sigma_t z_t \\ \sigma_t^2 = \alpha_0 + \alpha_1 \sigma_{t-1}^2 + \beta_1 \varepsilon_{t-1}^2. \end{cases} \qquad (17)$$
Model (17) is estimated using the same rolling windows as above, with the innovation $z_t$ being either a standard normal variable or following a Student's t distribution, and the forecast of the VaR for the next trading day is given by the formula below:
$$VaR_{k+w+1} = -\left(q_{\alpha} \hat{\sigma}_{k+w+1} + \hat{\mu}_{k+w+1}\right), \qquad (18)$$
where $q_{\alpha}$ is the Gaussian quantile or the Student's t quantile with the estimated degrees of freedom.
Thus, we compare the VaR forecasting ability of the following five models:
  • Historical VaR forecasts, estimated using a rolling window of length w;
  • Normal GARCH(1,1) VaR forecasts, with (17), $z_t \sim N(0,1)$ and forecasting equation (18);
  • Student's t-GARCH(1,1) VaR forecasts, with (17), $z_t \sim$ Student's t and forecasting equation (18);
  • Entropy-based VaR forecasts, given by (13) and forecasting equation (14);
  • Entropy-based autoregressive VaR forecasts, given in (15) and (16).
In order to test the forecasting ability of the above models, Christoffersen’s [36] tests are used: the LR test of Unconditional Coverage, the LR test of Independence and the LR test of Independence and Conditional Coverage. Also, we employed the forecast performance tests of Diebold and Mariano [37] and West [38], using the loss function of Giacomini and White [39] (see Appendix B and Appendix C):
$$L\left(R_{t+1}, VaR_{t+1|t}\right) = \left(R_{t+1} + VaR_{t+1|t}\right)\left[\alpha - I_{R_{t+1} < -VaR_{t+1|t}}\right]. \qquad (19)$$

3. Empirical Analysis

In order to illustrate the application of the entropy of the distribution of intraday returns in financial risk management, we consider the EUR/JPY exchange rate (sourced from Disk Trading). FX rates are mostly symmetric, so the two tails of the distribution are similar and the entropy is more closely related to the VaR estimate; for highly asymmetric distributions, such as those of stocks and commodities, the link between entropy and VaR could be weaker. The time period considered is 1999–2005. The database used for estimation has two components: (1) intraday prices (2025 transaction days and 2,340,624 minute-by-minute intraday observations); and (2) daily prices (2025 daily observations). Using the methodology of Bandi and Russell [35], the optimal sampling frequency for intraday data was estimated at 10 min on average (δ = δ* = 10); this frequency is used to compute the entropy of the distribution of intraday returns. We use frequencies of ν ∈ {1, 10, 15} minutes to compute the intraday measures of risk and uncertainty, and we compute the risk measures at the 1% significance level.

3.1. Entropy and Intraday Measures of Market Risk and Uncertainty

Figure 2 presents a comparison of the entropy and the intraday ES (estimated at 1% level). The two series show a strong negative correlation; the entropy has a similar relationship with IVaR and IRV as well. Next, the models given in (9) and (10) are estimated, using the statistical software SAS 9.3.
Panel A in Table 2 presents the results of the regressions specified in (9), using level α = 1% for the risk measures. The R2 estimates show that the entropy is strongly linked with intraday VaR, intraday ES and intraday Realized Volatility and, as expected, the coefficients are all negative and significant. Panel B in Table 2 presents the results of the dynamic regression models (10) using significance level α = 1% for the risk measures. The coefficients of the entropy remain in all cases negative and significant, showing that the entropy has forecasting power for intraday measures of risk and uncertainty, even after taking past values of intraday measures of risk and uncertainty into account.

3.2. Quantile Regression Results

We use quantile regressions to see the effect of the entropy on the quantiles of VaR, estimating Equation (11) for frequencies of 1, 10 and 15 min and α = 1%. Panel A of Table 3 presents the estimation results; as expected, there is a positive correlation between the entropy and the upper tails of the distribution of IVaR. Also, IVaR is more sensitive to entropy in the upper tail of its distribution, meaning that high values of IVaR have a stronger relationship with entropy. Furthermore, the relationship is strongest for ν = 15 min frequency. As an example, Figure 3 gives a visual presentation of the dependence of the 99% quantile on entropy.
Regarding the relationship between extreme values of daily returns and the entropy of the distribution of intraday returns, Panel B of Table 3 presents the results of regression (12), whilst Figure 4 presents the scatter plot and the regression line of the estimation. As expected, the relationship between the quantile (equal to minus the VaR) and entropy is positive and significant. Low (high) values of the entropy of the distribution of intraday returns generally correspond to high (low) absolute VaR estimates for daily returns, in line with the results in the previous section.

3.3. Forecasting Daily VaR Using Entropy

Entropy can be used to forecast VaR; for this, Equation (13) is estimated on a rolling basis. We use a length of w = 1000 days for the estimation windows. The time series of daily returns is plotted against the entropy-based 1% VaR forecasts and the GARCH-based 1% VaR forecasts in Figure 5 and Figure 6. As expected, the GARCH-based VaR is more stable whilst the entropy-based VaR forecast reacts faster to new information.
The backtesting results of the VaR forecasts for the five models given in Section 2.3, for α = 1%, are presented in Table 4. The second column gives the probabilities that the returns are below the negative of the VaR for the different models; it can be seen that the Historical VaR model (with a probability of 0.29%) provides the highest VaR forecasts on average. Looking at the unconditional test results, the entropy-based AR VaR has the smallest test statistic and the Historical VaR model marginally fails the test. Considering the independence and full test results, the Historical VaR model performs the worst, failing the tests, whilst the normal GARCH model passes all three tests. We conclude that the best results overall are obtained by the entropy-based AR VaR forecast model.
Additionally, we employed the VaR forecast comparison tests of Diebold and Mariano [37] and West [38]. The test statistic only considers the unconditional forecasting ability of the VaR models and is highly asymmetric: the loss function in (19) favours models which overestimate VaR and strongly penalizes models which, even very mildly, underestimate it. Our results in Table 5 show that the Historical VaR is the best performer, whilst the t-GARCH(1,1) VaR and the entropy-based VaR models are favoured over the normal GARCH(1,1) VaR. Furthermore, the difference between the performance of the entropy-based AR VaR model and the entropy-based VaR model is not statistically significant. However, these results depend very strongly on the loss function. Looking at the overall picture, we conclude that the entropy has good forecasting power for VaR.

4. Conclusions

This paper investigates the link between entropy and various measures of market risk such as Value-at-Risk or Expected Shortfall. Based on the result of Lorentz [25], we developed the concept of entropy of a distribution function and we applied this concept to estimate the entropy of the distribution of intraday returns. Using Monte-Carlo simulations, we showed that there is an inverse relationship between entropy and the probability in the tails of a distribution, high levels of entropy being characteristic of a distribution with light tails like the normal distribution, and low entropy values being associated with heavy-tailed distributions.
Furthermore, we investigated the relationship between risk measures and the entropy of the distribution of intraday returns, in both a static and a dynamic setting. The entropy of a distribution function has more informational content than the classical measures of market risk, as it takes into account the entire distribution. We found evidence of a strong, negative relationship between entropy and intraday Value-at-Risk, intraday Expected Shortfall and intraday Realized Volatility. From a dynamic point of view, the entropy proves to be a strong predictor for IVaR, IES and IRV, with R2 values up to 41%. Our quantile results confirm that the entropy has strong explanatory power for the quantiles of the intraday VaR as well as the quantiles of the daily returns. The final part of our empirical study compares entropy-based VaR estimates, which are mostly more reactive to new information than standard VaR models, with competing VaR forecasts. Whilst the Historical VaR model is the preferred model based on the Diebold-Mariano (unconditional) test results, when Christoffersen's unconditional, conditional and joint test results are considered, it comes out as the worst performer, failing the tests. Christoffersen's three tests favour the entropy-based AR VaR model, and we conclude that the entropy is a strong predictor of daily VaR, performing better than the competing VaR models. As it takes into account the extreme events that happen at an intraday level, it is able to provide reliable VaR forecasts.

Acknowledgments

The authors would like to thank the anonymous reviewers for their valuable comments and suggestions to improve the quality of the paper.

Author Contributions

Daniel Pele designed the study, derived the results for the entropy estimation, ran the regressions and wrote the draft of the paper. Emese Lazar designed the forecasting exercise, provided the data, performed the DM tests and rewrote the paper. Alfonso Dufour was consulted at various stages and offered comments.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Algorithm for Simulation of Stable Distributions (Weron [34])

Step 1. Generate the following two random variables:
$U \sim \mathrm{Unif}\left(-\frac{\pi}{2}, \frac{\pi}{2}\right)$ uniformly distributed and $E \sim \mathrm{Exp}(1)$ exponentially distributed;
Step 2. Compute:
$$X = \begin{cases} \dfrac{2}{\pi}\left[\left(\dfrac{\pi}{2} + \beta U\right)\tan U - \beta \ln\left(\dfrac{\frac{\pi}{2}\, E \cos U}{\frac{\pi}{2} + \beta U}\right)\right], & \text{if } \alpha = 1 \\[2ex] \left(1 + \beta^2 \tan^2 \dfrac{\pi\alpha}{2}\right)^{\frac{1}{2\alpha}} \dfrac{\sin\left(\alpha\left(U + B(\alpha,\beta)\right)\right)}{(\cos U)^{1/\alpha}} \left(\dfrac{\cos\left(U - \alpha\left(U + B(\alpha,\beta)\right)\right)}{E}\right)^{\frac{1-\alpha}{\alpha}}, & \text{otherwise}, \end{cases}$$
where $B(\alpha, \beta) = \dfrac{\arctan\left(\beta \tan\frac{\pi\alpha}{2}\right)}{\alpha}$;
Step 3. Compute $Y = \begin{cases} \gamma X + \frac{2}{\pi}\beta\gamma\ln\gamma + \delta, & \text{if } \alpha = 1 \\ \gamma X + \delta, & \text{otherwise}, \end{cases}$ which follows a stable distribution $S(\alpha, \beta, \gamma, \delta; 1)$.
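A direct transcription of Steps 1–3 into Python is sketched below (ours; vectorized over the sample size, with the usual Chambers-Mallows-Stuck formulae):

```python
import numpy as np

def simulate_stable(alpha, beta, gamma=1.0, delta=0.0, size=1, rng=None):
    """Chambers-Mallows-Stuck simulation of S(alpha, beta, gamma, delta; 1),
    following Steps 1-3 above (Weron [34])."""
    rng = np.random.default_rng() if rng is None else rng
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)    # Step 1
    E = rng.exponential(1.0, size)
    if alpha == 1:                                  # Step 2, alpha = 1 branch
        X = (2 / np.pi) * ((np.pi / 2 + beta * U) * np.tan(U)
             - beta * np.log((np.pi / 2) * E * np.cos(U) / (np.pi / 2 + beta * U)))
        Y = gamma * X + (2 / np.pi) * beta * gamma * np.log(gamma) + delta
    else:                                           # Step 2, general branch
        B = np.arctan(beta * np.tan(np.pi * alpha / 2)) / alpha
        S = (1 + beta ** 2 * np.tan(np.pi * alpha / 2) ** 2) ** (1 / (2 * alpha))
        X = (S * np.sin(alpha * (U + B)) / np.cos(U) ** (1 / alpha)
             * (np.cos(U - alpha * (U + B)) / E) ** ((1 - alpha) / alpha))
        Y = gamma * X + delta                       # Step 3
    return Y

sample = simulate_stable(1.5, 0.0, size=400)        # e.g., one of the Table 1 scenarios
```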

Appendix B. VaR Forecasting Tests (Christoffersen [36])

Appendix B.1. The LR Test of Unconditional Coverage

Let $\widehat{VaR}_t$ denote the forecasted VaR and let $R_t$ be the observed log-return;
Define $I_t = \begin{cases} 1, & \text{if } R_t < -\widehat{VaR}_t \\ 0, & \text{otherwise}; \end{cases}$
The hypothesis being tested is $H_0: E(I_t) = \alpha$ against $H_A: E(I_t) \ne \alpha$;
The test statistic is defined as:
$$LR_{uc} = -2\log\frac{L(\alpha)}{L(\hat{\alpha})} = -2\log\frac{\alpha^{n_1}(1-\alpha)^{n-n_1}}{\hat{\alpha}^{n_1}(1-\hat{\alpha})^{n-n_1}} \sim \chi^2(1),$$
where $n_1$ is the number of VaR violations (days with $I_t = 1$) and $\hat{\alpha} = \frac{n_1}{n}$ is the sample violation rate.

Appendix B.2. The LR Test of Independence

Consider $I_t$ a first-order Markov chain with transition probability matrix:
$$\Pi_1 = \begin{bmatrix} 1-\pi_{01} & \pi_{01} \\ 1-\pi_{11} & \pi_{11} \end{bmatrix}, \quad \text{where } \pi_{ij} = \Pr(I_t = j \mid I_{t-1} = i).$$
Then the likelihood function is $L(\Pi_1) = (1-\pi_{01})^{n_{00}} \pi_{01}^{n_{01}} (1-\pi_{11})^{n_{10}} \pi_{11}^{n_{11}}$, where $n_{ij}$ denotes the number of transitions from state $i$ to state $j$ in the hit sequence.
The likelihood under the null hypothesis of independence is
$$L(\Pi_2) = (1-\pi_2)^{n_{00}+n_{10}} \pi_2^{n_{01}+n_{11}}, \quad \text{where } \pi_2 = \frac{n_{01}+n_{11}}{n_{00}+n_{01}+n_{10}+n_{11}};$$
The test statistic is defined as:
$$LR_{i} = -2\log\frac{L(\hat{\Pi}_2)}{L(\hat{\Pi}_1)} \sim \chi^2(1).$$

Appendix B.3. The Joint Test of Coverage and Independence

The test statistic is given by:
$$LR_{full} = -2\log\frac{L(\alpha)}{L(\hat{\Pi}_1)} \sim \chi^2(2).$$
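A compact Python sketch of the three tests is given below (our code, not the authors'; it assumes the hit sequence contains both violations and non-violations, and that the VaR forecasts are reported as positive numbers so that a violation is $R_t < -\widehat{VaR}_t$):

```python
import numpy as np
from scipy.stats import chi2

def christoffersen_tests(returns, var_forecasts, alpha=0.01):
    """LR tests of unconditional coverage, independence and the joint test
    (Appendix B), for hit indicators I_t = 1{R_t < -VaR_t}."""
    I = (np.asarray(returns) < -np.asarray(var_forecasts)).astype(int)
    n, n1 = len(I), I.sum()
    n0 = n - n1
    pi_hat = n1 / n
    # Unconditional coverage: compare the nominal rate alpha with the sample rate
    lr_uc = -2 * (n0 * np.log(1 - alpha) + n1 * np.log(alpha)
                  - n0 * np.log(1 - pi_hat) - n1 * np.log(pi_hat))
    # Independence: transition counts of the hit sequence
    n00 = np.sum((I[:-1] == 0) & (I[1:] == 0)); n01 = np.sum((I[:-1] == 0) & (I[1:] == 1))
    n10 = np.sum((I[:-1] == 1) & (I[1:] == 0)); n11 = np.sum((I[:-1] == 1) & (I[1:] == 1))
    p01, p11 = n01 / (n00 + n01), n11 / (n10 + n11)
    p2 = (n01 + n11) / (n00 + n01 + n10 + n11)
    logL1 = (n00 * np.log(1 - p01) + n01 * np.log(p01) + n10 * np.log(1 - p11)
             + (n11 * np.log(p11) if n11 > 0 else 0.0))
    logL2 = (n00 + n10) * np.log(1 - p2) + (n01 + n11) * np.log(p2)
    lr_ind = -2 * (logL2 - logL1)
    # Joint test, as in the appendix: -2 log( L(alpha) / L(Pi_1) )
    lr_full = -2 * (n0 * np.log(1 - alpha) + n1 * np.log(alpha) - logL1)
    return {"LR_uc": (lr_uc, 1 - chi2.cdf(lr_uc, 1)),
            "LR_ind": (lr_ind, 1 - chi2.cdf(lr_ind, 1)),
            "LR_full": (lr_full, 1 - chi2.cdf(lr_full, 2))}
```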

Appendix C. The Diebold-Mariano Test for VaR Forecast Comparisons (Diebold and Mariano [37] and West [38])

Let $\widehat{VaR}_t^A$ and $\widehat{VaR}_t^B$ denote two competing VaR forecasts given by models A and B, respectively, and let $R_t$ be the observed log-return. We denote by $L(R_{t+1}, \widehat{VaR}_{t+1|t}^X)$ the loss function of the VaR forecast of model $X \in \{A, B\}$ (based on the distance between the returns and the VaR forecast of model X), and let $d_t$ below be the difference between the two loss functions:
$$d_t = L\left(R_{t+1}, \widehat{VaR}_{t+1|t}^A\right) - L\left(R_{t+1}, \widehat{VaR}_{t+1|t}^B\right).$$
The hypothesis being tested is $H_0: E(d_t) = 0$ against $H_A: E(d_t) \ne 0$.
We compute the average of these differences (assuming there are T one-day-ahead forecasts) as:
$$\bar{d} = \frac{1}{T}\sum_{t=k+1}^{k+T} d_t,$$
and let $\hat{V}(d)$ be a HAC-consistent estimator of the variance of $d_t$ (computed, for example, using the Newey-West estimator).
The Diebold-Mariano test statistic is computed as:
$$DM = \frac{\bar{d}}{\sqrt{\hat{V}(d)/T}}.$$
This statistic asymptotically follows a standard normal distribution. Large negative (positive) values indicate that model A (B) provides superior forecasts.
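A sketch of the test with the loss function (19) and a simple Newey-West variance is given below (our code; the truncation lag is an arbitrary choice and the inputs are hypothetical forecast series):

```python
import numpy as np
from scipy.stats import norm

def dm_test(returns, varA, varB, alpha=0.01, lags=5):
    """Diebold-Mariano test comparing two VaR forecast series under the
    asymmetric loss (19), with a simple Newey-West HAC variance."""
    R, A, B = map(np.asarray, (returns, varA, varB))
    loss = lambda v: (R + v) * (alpha - (R < -v).astype(float))   # Equation (19)
    d = loss(A) - loss(B)
    T = d.size
    d_bar = d.mean()
    e = d - d_bar
    v = e @ e / T
    for l in range(1, lags + 1):                 # Newey-West (Bartlett) weighted terms
        w = 1 - l / (lags + 1)
        v += 2 * w * (e[l:] @ e[:-l]) / T
    dm = d_bar / np.sqrt(v / T)
    return dm, 2 * (1 - norm.cdf(abs(dm)))       # statistic and two-sided p-value
```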

References

  1. Uffink, J. Bluff your way in the second law of thermodynamics. Stud. Hist. Philos. Mod. Phys. 2001, 32, 305–394.
  2. Avery, J. Information Theory and Evolution, 2nd ed.; World Scientific Publishing: Singapore, 2012.
  3. Zunino, L.; Zanin, M.; Tabak, B.M.; Pérez, D.G.; Rosso, O.A. Forbidden patterns, permutation entropy and stock market inefficiency. Phys. A Stat. Mech. Appl. 2009, 388, 2854–2864.
  4. Risso, A. The informational efficiency and the financial crashes. Res. Int. Bus. Financ. 2008, 22, 396–408.
  5. Oh, G.; Kim, S.; Eom, C. Market efficiency in foreign exchange markets. Phys. A Stat. Mech. Appl. 2007, 382, 209–212.
  6. Wang, Y.; Feng, Q.; Chai, L. Structural evolutions of stock markets controlled by generalized entropy principles of complex systems. Int. J. Mod. Phys. B 2010, 24, 5949–5971.
  7. Maasoumi, E.; Racine, J. Entropy and predictability of stock market returns. J. Econ. 2002, 107, 291–312.
  8. Billio, M.; Casarin, R.; Costola, M.; Pasqualini, A. An entropy-based early warning indicator for systemic risk. J. Int. Financ. Mark. Inst. Money 2016, 45, 42–59.
  9. Dionisio, A.; Menezes, R.; Mendes, D.A. An econophysics approach to analyse uncertainty in financial markets: An application to the Portuguese stock market. Eur. Phys. J. B 2006, 50, 161–164.
  10. Philippatos, G.C.; Wilson, C. Entropy, market risk and the selection of efficient portfolios. Appl. Econ. 1972, 4, 209–220.
  11. Ebrahimi, N.; Maasoumi, E.; Soofi, E.S. Ordering univariate distributions by entropy and variance. J. Econ. 1999, 90, 317–336.
  12. Ebrahimi, N.; Maasoumi, E.; Soofi, E.S. Measuring Informativeness of Data by Entropy and Variance. In Advances in Econometrics: Income Distribution and Methodology of Science, Essays in Honor of Camilo Dagum; Springer: Heidelberg, Germany, 1999.
  13. Allen, D.E.; McAleer, M.; Powell, R.; Singh, A.K. A non-parametric and entropy based analysis of the relationship between the VIX and S&P 500. J. Risk Financ. Manag. 2013, 6, 6–30.
  14. Liu, L.Z.; Qian, X.Y.; Lu, H.Y. Cross-sample entropy of foreign exchange time series. Phys. A Stat. Mech. Appl. 2010, 389, 4785–4792.
  15. Bowden, R.J. Directional entropy and tail uncertainty, with applications to financial hazard. Quant. Financ. 2011, 11, 437–446.
  16. Gradojevic, N.; Gencay, R. Overnight interest rates and aggregate market expectations. Econ. Lett. 2008, 100, 27–30.
  17. Gencay, R.; Gradojevic, N. Crash of '87—Was it expected? Aggregate market fears and long range dependence. J. Empir. Financ. 2010, 17, 270–282.
  18. Gradojevic, N.; Caric, M. Predicting systemic risk with entropic indicators. J. Forecast. 2017, 36, 16–25.
  19. Stutzer, M.J. Simple entropic derivation of a generalized Black-Scholes option pricing model. Entropy 2000, 2, 70–77.
  20. Stutzer, M.J.; Kitamura, Y. Connections between entropic and linear projections in asset pricing estimation. J. Econ. 2002, 107, 159–174.
  21. Yang, J.; Qiu, W. A measure of risk and a decision-making model based on expected utility and entropy. Eur. J. Oper. Res. 2005, 164, 792–799.
  22. Ishizaki, R.; Inoue, M. Time-series analysis of foreign exchange rates using time-dependent pattern entropy. Phys. A Stat. Mech. Appl. 2013, 392, 3344–3350.
  23. Bekiros, S. Timescale analysis with an entropy-based shift-invariant discrete wavelet transform. Comput. Econ. 2014, 44, 231–251.
  24. Bekiros, S.; Marcellino, M. The multiscale causal dynamics of foreign exchange markets. J. Int. Money Financ. 2013, 33, 282–305.
  25. Lorentz, R. On the entropy of a function. J. Approx. Theor. 2009, 158, 145–150.
  26. Zhou, R.; Cai, R.; Tong, G. Applications of entropy in finance: A review. Entropy 2013, 15, 4909–4931.
  27. Pele, D.T.; Mazurencu-Marinescu, M. Uncertainty in EU stock markets before and during the financial crisis. Econophys. Sociophys. Multidiscip. Sci. J. 2012, 2, 33–37.
  28. Pele, D.T. Information entropy and occurrence of extreme negative returns. J. Appl. Quant. Methods 2011, 6, 23–32.
  29. Silverman, B.W. Density Estimation for Statistics and Data Analysis; Chapman and Hall: London, UK, 1986.
  30. Yamato, H. Uniform convergence of an estimator of a distribution function. Bull. Math. Stat. 1973, 15, 69–78.
  31. Chacón, J.E.; Rodríguez-Casal, A. A note on the universal consistency of the kernel distribution function estimator. Stat. Probab. Lett. 2009, 80, 1414–1419.
  32. Pele, D.T. Uncertainty and Heavy Tails in EU Stock Markets before and during the Financial Crisis. In Proceedings of the 13th International Conference on Finance and Banking, Lessons Learned from the Financial Crisis, Ostrava, Czech Republic, 12–13 October 2011; pp. 501–512.
  33. Nolan, J.P. Stable Distributions—Models for Heavy Tailed Data; Birkhauser: Boston, MA, USA, 2011.
  34. Weron, R. On the Chambers-Mallows-Stuck method for simulating skewed stable random variables. Stat. Probab. Lett. 1996, 28, 165–171.
  35. Bandi, F.; Russell, J. Separating microstructure noise from volatility. J. Financ. Econ. 2006, 79, 655–692.
  36. Christoffersen, P. Evaluating interval forecasts. Int. Econ. Rev. 1998, 39, 841–862.
  37. Diebold, F.X.; Mariano, R.S. Comparing predictive accuracy. J. Bus. Econ. Stat. 1995, 13, 253–263.
  38. West, K.D. Asymptotic inference about predictive ability. Econometrica 1996, 64, 1067–1084.
  39. Giacomini, R.; White, H. Tests of conditional predictive ability. Econometrica 2006, 74, 1545–1578.
Figure 1. The entropy of distribution functions of simulated alpha-stable distributions, as a function of α.
Figure 2. Intraday ES and the entropy of the distribution of intraday returns.
Figure 3. Quantile regression results for the 1% quantile of IVaR as a function of the entropy, for ν = 1 min.
Figure 4. Quantile regression results for the 1% quantile of the returns as a function of the entropy.
Figure 5. Forecasting daily 1% VaR using entropy.
Figure 6. Forecasting daily 1% VaR using a normal GARCH(1,1) model.
Table 1. The entropy of simulated distributions.

| Distribution | Average entropy of the distribution function | Standard deviation of the entropy |
| Uniform (0,1) | 0.9982 | 0.0011 |
| Normal (0,1) | 0.8933 | 0.0319 |
| Stable (α = 1.9) | 0.6725 | 0.1600 |
| Stable (α = 1.5) | 0.4788 | 0.1186 |
| Stable (α = 1) | 0.3979 | 0.1125 |
| Stable (α = 0.5) | 0.2858 | 0.0984 |
| Stable (α = 0.1) | 0.1872 | 0.0610 |
Note: This table presents the average value of the entropy and its standard deviation, estimated by simulating a sample of 400 cases and repeating the experiment 1000 times.
Table 2. The relationship between intraday measures of risk and uncertainty and entropy.

| Sampling frequency | ν = 1 min | | | ν = 10 min | | | ν = 15 min | | |
| Dependent variable | IVaR_{α,ν;t} | IES_{α,ν;t} | IRV_{ν;t} | IVaR_{α,ν;t} | IES_{α,ν;t} | IRV_{ν;t} | IVaR_{α,ν;t} | IES_{α,ν;t} | IRV_{ν;t} |

Panel A. Static Models
| H_{δ;t} | −0.0050 *** | −0.0070 *** | −0.049 *** | −0.004 *** | −0.7565 *** | −0.044 *** | −0.0089 *** | −1.0669 *** | −0.889 *** |
| | [0.0003] | [0.0003] | [0.0015] | [0.0002] | [0.0205] | [0.0009] | [0.0002] | [0.0234] | [0.0018] |
| Adj. R² | 0.46 | 0.58 | 0.66 | 0.29 | 0.51 | 0.63 | 0.48 | 0.51 | 0.53 |

Panel B. Dynamic Models
| IVaR_{α,ν;t−1} | 0.5355 *** | - | - | 0.2920 *** | - | - | 0.3687 *** | - | - |
| | [0.0234] | - | - | [0.0351] | - | - | [0.0271] | - | - |
| IES_{α,ν;t−1} | - | 0.6024 *** | - | - | 0.1991 *** | - | - | 0.3599 *** | - |
| | - | [0.0262] | - | - | [0.0428] | - | - | [0.0280] | - |
| IRV_{ν;t−1} | - | - | 0.629 *** | - | - | 0.569 *** | - | - | 0.513 *** |
| | - | - | [0.0300] | - | - | [0.0433] | - | - | [0.0280] |
| H_{δ;t−1} | −0.0009 *** | −0.0420 *** | −0.001 *** | −0.0011 *** | −0.1607 *** | −0.0003 *** | −0.0017 *** | −0.1835 *** | −0.0002 *** |
| | [0.0002] | [0.0246] | [0.0018] | [0.0003] | [0.0455] | [0.0024] | [0.0004] | [0.0420] | [0.0034] |
| Adj. R² | 0.40 | 0.41 | 0.38 | 0.16 | 0.11 | 0.33 | 0.22 | 0.21 | 0.26 |
Note: Estimation results for regressions (9) and (10) in Panel A and B, respectively. Risk measures IVaR and IES are at level α = 1%. White’s heteroscedasticity consistent standard errors are given in brackets. *** signify significance at 1%.
Table 3. Quantile regression results.

| | Panel A. Intraday VaR | | | | | | Panel B. Daily Returns | |
| | τ = 1% quantile of IVaR | | | τ = 5% quantile of IVaR | | | τ = 1% quantile of returns | τ = 5% quantile of returns |
| Sampling frequency | ν = 1 min | ν = 10 min | ν = 15 min | ν = 1 min | ν = 10 min | ν = 15 min | Daily | Daily |
| H_{δ;t} | 0.0111 *** | 0.0092 *** | 0.0154 *** | 0.0078 *** | 0.0073 *** | 0.0140 *** | 0.0728 *** | 0.0368 *** |
| | [0.0008] | [0.0012] | [0.0012] | [0.0005] | [0.0005] | [0.0005] | [0.0155] | [0.0075] |
| 95% confidence interval, upper | 0.0126 | 0.0116 | 0.0177 | 0.0088 | 0.0083 | 0.0150 | 0.1031 | 0.0516 |
| 95% confidence interval, lower | 0.0095 | 0.0068 | 0.013 | 0.0068 | 0.0063 | 0.0131 | 0.0424 | 0.0220 |
| t-value | 13.77 | 7.58 | 12.76 | 15.53 | 14.39 | 28.92 | 4.71 | 4.88 |
Note: Quantile regressions for IVAR (Panel A) and daily log-returns (Panel B) illustrating the effect of entropy on the quantiles of IVaR and daily log-returns; the models estimated are (11) and (12). *** signify significance at 1%.
Table 4. VaR forecast backtest results.

| Model | Pr(R_t < −VaR̂_t) | LR_uc test | p-value | LR_i test | p-value | LR_full test | p-value |
| Historical VaR | 0.290% | 4.867 ** | 0.027 | 7.443 *** | 0.006 | 12.310 *** | 0.002 |
| n.GARCH(1,1) VaR | 1.597% | 2.097 | 0.148 | 1.658 | 0.198 | 3.755 | 0.153 |
| t-GARCH(1,1) VaR | 0.581% | 1.442 | 0.230 | 5.108 ** | 0.024 | 6.550 ** | 0.038 |
| Entropy VaR | 0.726% | 0.579 | 0.447 | 4.329 ** | 0.037 | 4.908 * | 0.086 |
| Entropy AR VaR | 1.016% | 0.002 | 0.966 | 3.156 * | 0.076 | 3.158 | 0.206 |
Note: Backtest results for daily 1% VaR forecasts based on the following five models specified in Section 2.3: (1) Historical VaR forecasts; (2) Normal GARCH(1,1) VaR forecasts; (3) Student’s t-GARCH(1,1) VaR forecasts; (4) Entropy-based VaR forecasts and (5) Entropy-based AR VaR forecasts. Christoffersen’s backtests were used (see Appendix B) with N = 689. *, ** and *** signify significance at 10%, 5% and 1%, respectively.
Table 5. The Diebold and Mariano test results for VaR forecasts.

| Model | n.GARCH(1,1) VaR | t-GARCH(1,1) VaR | Entropy VaR | Entropy AR VaR |
| Historical VaR | −3.146 *** | −1.004 | −1.755 ** | −2.294 ** |
| n.GARCH(1,1) VaR | - | 2.744 *** | 2.167 ** | 1.428 |
| t-GARCH(1,1) VaR | - | - | −0.448 | −1.141 |
| Entropy VaR | - | - | - | −1.428 |
Note: 1% VaR forecast comparison tests of Diebold and Mariano [37] and West [38], with the loss function given in (19). The DM test statistics reported are for comparisons of the models on the left side against the models in the column titles. ** and *** signify significance at 5% and 1%, respectively. N = 689.
