Article

An Academic Response to Basel 3.5

1 RiskLab and SFI, Department of Mathematics, ETH Zurich, Zurich 8092, Switzerland
2 School of Economics and Management, University of Firenze, Firenze 50127, Italy
3 Department of Mathematical Stochastics, University of Freiburg, Freiburg 79104, Germany
4 Department of Statistics and Actuarial Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
* Author to whom correspondence should be addressed.
Risks 2014, 2(1), 25-48; https://doi.org/10.3390/risks2010025
Submission received: 25 November 2013 / Revised: 9 February 2014 / Accepted: 17 February 2014 / Published: 27 February 2014
(This article belongs to the Special Issue Risk Management Techniques for Catastrophic and Heavy-Tailed Risks)

Abstract

Recent crises in the financial industry have shown weaknesses in the modeling of Risk-Weighted Assets (RWAs). Relatively minor model changes may lead to substantial changes in the RWA numbers. Similar problems are encountered in the Value-at-Risk (VaR)-aggregation of risks. In this article, we highlight some of the underlying issues, both methodologically, as well as through examples. In particular, we frame this discussion in the context of two recent regulatory documents we refer to as Basel 3.5.

1. Introduction

In May 2001, the first author contributed to the influential and highly visible “An academic response to Basel II”, Daníelsson et al. [1]. In that academic response to, at the time, Basel 2, the authors were spot on concerning the weaknesses of the prevailing international regulatory framework, as well as the way in which larger international banks were managing market, credit and operational risk. We cite from their Executive Summary:
  • The proposed regulations fail to consider the fact that risk is endogenous. Value-at-Risk can destabilize an economy and induce crashes [⋯]
  • Statistical models used for forecasting risk have been proven to give inconsistent and biased forecasts, notably underestimating the joint downside risk of different assets. The Basel Committee has chosen poor quality measures of risk when better risk measures are available.
  • Heavy reliance on credit rating agencies for the standard approach to credit risk is misguided [⋯]
  • Operational risk modeling is not possible given current databases [⋯]
  • Financial regulation is inherently procyclical [⋯] the purpose of financial regulation is to reduce the likelihood of systemic crises, these proposals will actually tend to negate, not promote this useful purpose.
And in summary:
  • Perhaps our most serious concern is that these proposals, taken altogether, will enhance both the procyclicality of regulation and the susceptibility of the financial system to systemic crises, thus negating the central purpose of the whole exercise. Reconsider before it is too late.
Unfortunately, five years later it was too late!
The above quotes serve two purposes: first, academia has a crucial role to play in commenting officially on proposed changes in the regulatory landscape; second, when well-documented, properly researched and effectively communicated, we may have an influence on regulatory and industry practice. We refer to the full document Daníelsson et al. [1] for further details and Shin [2] for more background.
For the purpose of this paper, we refer to the regulatory document BCBS [3] as Basel 3.5 for the trading book; Basel 4 is already on the regulatory horizon, even if the implementation of Basel 3 is only planned for 2019. In particular, through its consultative document BCBS [4], the Basel Committee already went a step beyond BCBS [3]; indeed, “the Committee has confirmed its intention to pursue two key reforms outlined in the first consultative paper BCBS [3]: stressed calibration [⋯] move from Value-at-Risk (VaR) to Expected Shortfall (ES)”. It is proposed in the same document that VaR at a confidence level of 99% should be replaced by ES at a confidence level of 97.5% for the internal models-based approach. Our comments are also relevant for insurance regulation’s Solvency 2, now planned in the EU for January 1, 2016. The Basel 3.5 document arose out of consultations between regulators, industry and academia in the wake of the subprime crisis. It also paid attention to, and remedied, some of the criticisms raised in Daníelsson et al. [1]; we shall exemplify this below. Among the various issues raised, for our purpose, the following question (no. 8, p. 41) in BCBS [3] is relevant:
“What are the likely constraints with moving from Value-at-Risk (VaR) to Expected Shortfall (ES), including any challenges in delivering robust backtesting and how might these be best overcome?”
Since its introduction around 1994, VaR has been criticized by numerous academics, as well as practitioners, for its weaknesses as the benchmark for the calculation of regulatory capital in banking and insurance (see Jorion [5]):
  • W1 VaR says nothing concerning the what-if question: “Given we encounter a high loss, what can be said about its magnitude?”;
  • W2 For high confidence levels, e.g., 95% and beyond, the statistical quantity VaR can only be estimated with considerable statistical, as well as model, uncertainty, and
  • W3 VaR may add up the wrong way, i.e., for certain (one-period) risks, it is possible that:
    $\mathrm{VaR}_\alpha(X_1 + \cdots + X_d) > \mathrm{VaR}_\alpha(X_1) + \cdots + \mathrm{VaR}_\alpha(X_d)$   (1)
    the latter defies (at least some of) the intuitive notion of diversification.
The worries, W1–W3, were early on brushed aside as being less relevant for practice. By now, practice has caught up, and W1–W3 have become highly relevant; hence parts of Basel 3.5.
The fact that the above concerns about VaR and, more importantly, about model uncertainty are well founded can be learned from some of the recent political discussions concerning banking regulation and the financial crisis. Proof of this is, for instance, to be found in (USS [6], p.13) and (UKHLHC [7], p.119); we quote explicitly from these documents, as they nicely summarize some of the key practical issues facing more quantitative regulation of modern financial markets. Before doing so, we recall the terminology of RWA (Risk-Weighted Asset). In general terms, banking solvency is based on a quotient of capital (specifically defined through levels of liquidity) to RWAs. The latter are the risk numbers associated with trading or credit positions mainly based on mark-to-market or mark-to-model values. Also included is risk capital for operational risk, which can easily reach the 20%–30% range of the total RWAs. In these numbers, risk measures, like VaR, appear prominently. In general, financial engineers (including mathematicians) and their products/models play a crucial role in determining these RWAs. Accountants are typically more involved with the numerator, capital. Below, we list some quotes related to the concerns about VaR and model uncertainty. The highlighting is ours.
  • Quote 1 (from USS [6]): “Near the end of January, the bank approved use of a new CIO Value-at-Risk (VaR) model that cut in half the SCP’s [London Whale’s structured credit portfolio’s] purported risk profile [...] The change in VaR methodology effectively masked the significant changes in the portfolio.” The quote refers to JPMorgan Chase Whale Trades.
  • Quote 2 (from UKHLHC [7]): “From a former employee of HBOS: We actually got an external advisor [to assess how frequently a particular event might happen] and they came out with one in 100,000 years and we said <<no>>, and I think we submitted one in 10,000 years. But that was a year and a half before it happened. It does not mean to say it was wrong: it was just unfortunate that the 10,000th year was so near.”
  • Quote 3 (from BCBS [3], p.20): “However, a number of weaknesses have been identified with VaR, including its inability to capture <<tail risk>>.” The quote clearly refers to the what-if question W1.
  • Quote 4 The RWA uncertainty issue is very well addressed in BCBS [8] (in particular p.6), indeed: “There is considerable variation across banks in average RWAs for credit risk. In broad terms, the variation is similar to that found for market risk in the trading book. Much of the variation (up to three quarters) is explained by the underlying differences in the risk composition of the banks’ assets, reflecting differences in risk preferences as intended under the risk-based capital framework. The remaining variation is driven by diversity in both bank and supervisory practices.” The supervision of the Euro area’s biggest banks by the European Central Bank will very much concentrate on RWAs in its asset-quality reviews; see The Economist [9].
Though ES also suffers from W2, it partly corrects W1 and always adds up correctly (≤), i.e., ES is subadditive (correcting W3). Subadditivity has been widely accepted as a desirable property of risk measures since its introduction in Artzner et al. [10], although several authors (see, for instance, Dhaene et al. [11] and Kou et al. [12]) consider it debatable. Of course, the ‘one number cannot suffice’ paradigm also holds for ES; see Rootzén and Klüppelberg [13]. Concerning W2, classical Extreme Value Theory (EVT), as, for instance, explained in (McNeil et al. [14], Chapter 7), yields sufficient warnings concerning the near-impossibility of the accurate estimation of single risk measures, like VaR and ES, at high confidence levels; see in particular (McNeil et al. [14], Figure 7.6). For the purpose of this paper, we shall mainly concentrate on W3, compare VaR and ES estimates and discuss Question 8, p. 41, of BCBS [3] from the point of view of risk aggregation and model uncertainty.

2. How Superadditive Can VaR Be?

In Embrechts et al. [15], the question is addressed of how large the gap between the left- and right-hand side in Equation (1) can be. The answer is very much related to the issue of model uncertainty (MU), especially at the level of inter-dependence, i.e., dependence uncertainty.
Let us first recall the standard definitions of VaR and ES. Suppose X is a random variable (rv) with distribution function (df) $F_X$, $F_X(x) = P(X \leq x)$ for $x \in \mathbb{R}$. For $0 < \alpha < 1$, we then define:

$\mathrm{VaR}_\alpha(X) = F_X^{-1}(\alpha) = \inf\{x \in \mathbb{R} : F_X(x) \geq \alpha\}$   (2)

and:

$\mathrm{ES}_\alpha(X) = \frac{1}{1-\alpha} \int_\alpha^1 \mathrm{VaR}_\beta(X)\, d\beta$

whenever $F_X$ is continuous; it follows that:

$\mathrm{ES}_\alpha(X) = E[X \mid X > \mathrm{VaR}_\alpha(X)]$
leading to the standard interpretation of ES as a conditional expected loss. ES and its equivalent formulations in the continuous setting are known under different names and abbreviations, such as TVaR, CVaR, CTE, TCE and AVaR. When discrete distributions are involved, the above-cited notions are no longer equivalent; see Acerbi and Tasche [16].
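To make the definitions concrete, the following minimal Python sketch computes empirical versions of $\mathrm{VaR}_\alpha$ and $\mathrm{ES}_\alpha$ from a simulated loss sample; the Pareto(2) sample and the level α = 0.99 are illustrative choices of ours, not part of the text, and statistical uncertainty (recall W2) is deliberately ignored.

```python
import numpy as np

def var_empirical(losses, alpha):
    """Empirical VaR_alpha: the alpha-quantile of the loss sample (cf. Equation (2))."""
    return np.quantile(np.asarray(losses, dtype=float), alpha)

def es_empirical(losses, alpha):
    """Empirical ES_alpha: average of the losses strictly exceeding the empirical VaR_alpha."""
    losses = np.asarray(losses, dtype=float)
    v = var_empirical(losses, alpha)
    return losses[losses > v].mean()

rng = np.random.default_rng(42)
sample = rng.pareto(2.0, size=1_000_000)   # Pareto(2) toy losses, cf. Section 4.1
# For Pareto(2), VaR_0.99 = 9 and ES_0.99 = 19 exactly; the estimates should be close.
print(var_empirical(sample, 0.99), es_empirical(sample, 0.99))
```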
Our set-up is as follows.
  • Suppose $X_1, \ldots, X_d$ are one-period risk positions with dfs $F_i$, $i = 1, \ldots, d$, also denoted as $X_i \sim F_i$. We assume $F_1, \ldots, F_d$ to be known for the purpose of our discussion. In practice, this may correspond to models fitted to historical data or models chosen in a stress-testing environment. One could also envisage empirical dfs when sufficient data are available. It is important that, in the analyses and examples below, we disregard statistical uncertainty; this can and should be added in a full-scale discussion. As a consequence, for the purpose of this paper, MU should henceforth be interpreted as functional MU at the level of inter-dependence, rather than statistical MU. Full MU would combine (at least) both.
  • Consider the portfolio position $X_d^+ = X_1 + \cdots + X_d$. The techniques discussed below would also allow for the analysis of other portfolio structures, like, for instance, $\max(X_1, \ldots, X_d)$, $\min(X_1, \ldots, X_d)$ or $X_d^+ \mathbf{1}_{\{X_d^+ > m\}}$ for some $m > 0$, typically large. MU results for such more general examples, however, need further detailed study; see Embrechts et al. [15] for some remarks on this.
  • Denote by $\mathrm{VaR}_\alpha(X_i)$, $i = 1, \ldots, d$, the marginal VaRs at the common confidence level $\alpha \in (0, 1)$, typically close to one. For the moment, we concentrate on VaR as a risk measure, as it still is the regulatory benchmark. Other risk measures will appear later in the paper.
Task: Calculate $\mathrm{VaR}_\alpha(X_d^+)$
As stated, this task cannot be performed, since for the calculation of $\mathrm{VaR}_\alpha(X_d^+)$, we need a joint model for the random vector $X = (X_1, \ldots, X_d)$. Under a specific joint model, the calculation of $\mathrm{VaR}_\alpha(X_d^+)$ amounts to a d-dimensional integral (or sum in the discrete case). Only in very few cases can this be done analytically. As a consequence, numerical integration and/or Monte Carlo methodology, including the use of quasi-random (low-discrepancy) techniques, may enter. For α close to one, tools from rare event simulation become important; see, for instance, Asmussen and Glynn [17], Chapter VI. For a more geometric approach, useful in lower dimensions, say $d \leq 5$, see [18,19].
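As an illustration of the Monte Carlo route under one specific (and here purely hypothetical) joint model, the sketch below estimates $\mathrm{VaR}_\alpha(X_d^+)$ for Pareto(2) marginals coupled by a Gaussian copula with equicorrelation ρ. Neither the copula family nor the value of ρ is prescribed by the text; different choices lead to different VaR numbers, which is exactly the dependence uncertainty discussed below.

```python
import numpy as np
from scipy.stats import norm

def var_sum_gaussian_copula(alpha, d, rho, theta, n=500_000, seed=1):
    """Monte Carlo estimate of VaR_alpha(X_1+...+X_d) under an assumed Gaussian copula
    with equicorrelation rho and Pareto(theta) marginals (illustrative model choices)."""
    rng = np.random.default_rng(seed)
    cov = rho * np.ones((d, d)) + (1.0 - rho) * np.eye(d)   # equicorrelation matrix
    Z = rng.multivariate_normal(np.zeros(d), cov, size=n)   # latent Gaussian sample
    U = norm.cdf(Z)                                         # copula sample on [0,1]^d
    X = (1.0 - U) ** (-1.0 / theta) - 1.0                   # Pareto(theta) quantile transform
    return np.quantile(X.sum(axis=1), alpha)

# Different dependence assumptions give markedly different answers for the same marginals:
print(var_sum_gaussian_copula(alpha=0.999, d=8, rho=0.0, theta=2.0))
print(var_sum_gaussian_copula(alpha=0.999, d=8, rho=0.5, theta=2.0))
```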
When we relax from a full joint distributional assumption (a single model) to a specific subclass of models, it may be possible to obtain some inequalities or asymptotics (in $\alpha \to 1$ or $d \to \infty$, say) for $\mathrm{VaR}_\alpha(X_d^+)$. For instance, if $X$ is elliptically distributed, then:

$\mathrm{VaR}_\alpha(X_d^+) \leq \sum_{i=1}^d \mathrm{VaR}_\alpha(X_i)$
see (McNeil et al. [14], Theorem 6.8). An important subclass of elliptical distributions is formed by the so-called multivariate normal variance mixture models, i.e.,

$X \stackrel{d}{=} \mu + \sqrt{W}\, A Z$
where:
(i) $Z \sim N_k(0, I_k)$, where $I_k$ stands for the k-dimensional identity matrix;
(ii) $W \geq 0$ is a non-negative, scalar-valued rv, which is independent of $Z$; and
(iii) $A \in \mathbb{R}^{d \times k}$ and $\mu \in \mathbb{R}^d$.
See (McNeil et al. [14], Section 3.2) for this definition. For the most general definition based on affine transformations of spherical random vectors, see (McNeil et al. [14], Section 3.3.2). In many ways, elliptical models are like “heaven” for finance and risk management; see (McNeil et al. [14], Theorem 6.8 and Proposition 6.13). Unfortunately, and this in particular in moments of stress, the world of finance may be highly non-elliptical.
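A quick numerical illustration of (i)–(iii): with $W = \nu/\chi^2_\nu$, the mixture $X = \mu + \sqrt{W} A Z$ is multivariate Student t with ν degrees of freedom. The values of ν, A and μ below are arbitrary illustrative choices of ours, and the last lines merely check, by simulation, the subadditivity of VaR stated above for elliptical models.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, nu, n = 3, 3, 4.0, 200_000
mu = np.zeros(d)
A = np.linalg.cholesky(0.3 * np.ones((d, d)) + 0.7 * np.eye(d))  # some dispersion matrix
Z = rng.standard_normal((n, k))                                   # Z ~ N_k(0, I_k)
W = nu / rng.chisquare(nu, size=n)                                # mixing rv, independent of Z
X = mu + np.sqrt(W)[:, None] * (Z @ A.T)                          # rows are samples of X (multivariate t_nu)

alpha = 0.99
var_sum = np.quantile(X.sum(axis=1), alpha)
sum_var = sum(np.quantile(X[:, i], alpha) for i in range(d))
print(var_sum, sum_var)   # VaR of the sum should not exceed the sum of the VaRs (up to MC error)
```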
A further interesting class of models results for $X$ being comonotonic, i.e., there exist increasing functions $\psi_i$, $i = 1, \ldots, d$, and an rv Z, so that:

$X_i = \psi_i(Z) \quad \text{a.s.}, \quad i = 1, \ldots, d$

and in that case:

$\mathrm{VaR}_\alpha(X_d^+) = \sum_{i=1}^d \mathrm{VaR}_\alpha(X_i)$   (3)
i.e., VaR is comonotone additive. For a proof of Equation (3), see (McNeil et al. [14], Theorem 6.15). Recall that two risks (two rvs) with finite second moments are comonotone exactly when the joint model achieves maximal correlation, typically less than one; see (McNeil et al. [14], Theorem 5.25). Consequently, strictly superadditive risks, like in Equation (1), correspond to dependence structures with less than maximal correlation. This often leads to confusion amongst practitioners; it was also one of the reasons (a correlation pitfall) for writing Embrechts et al. [20]. Without extra model knowledge, we are hence led to calculating best-worst VaR bounds for $\mathrm{VaR}_\alpha(X_d^+)$ in the presence of dependence uncertainty:
$\underline{\mathrm{VaR}}_\alpha(X_d^+) = \inf\{\mathrm{VaR}_\alpha(X_1^F + \cdots + X_d^F) : X = (X_1^F, \ldots, X_d^F) \text{ has joint df } F \text{ with marginals } F_1, \ldots, F_d\}$   (4)

$\overline{\mathrm{VaR}}_\alpha(X_d^+) = \sup\{\mathrm{VaR}_\alpha(X_1^F + \cdots + X_d^F) : X = (X_1^F, \ldots, X_d^F) \text{ has joint df } F \text{ with marginals } F_1, \ldots, F_d\}$   (5)
The terminology, best versus worst, of course, very much depends on the situation at hand: whether one is long or short in a trading environment or whether the bounds are interpreted by a regulator or a bank, say.
We further comment on notation: recall that the only available information so far is the marginal distributions of the risks, i.e., $X_i \sim F_i$, $i = 1, \ldots, d$. Whenever we use a joint distribution function, F, with those given marginals for the vector $X$, we denote $X_d^+ = X_1^F + \cdots + X_d^F$ in order to highlight this choice; see Equations (4) and (5) above. We hope that the reader is fine with this slight abuse of notation.
Using the notion of copula, we may rephrase Equations (4) and (5) by applying Sklar's Theorem; see (McNeil et al. [14], Theorem 5.3). Denote by $\mathcal{C}_d$ the set of all d-dimensional copulas; then Equations (4) and (5) are equivalent to:

$\underline{\mathrm{VaR}}_\alpha(X_d^+) = \inf\{\mathrm{VaR}_\alpha(X_1^C + \cdots + X_d^C) : C \in \mathcal{C}_d,\ X_i \sim F_i,\ i = 1, \ldots, d\}$   (6)

$\overline{\mathrm{VaR}}_\alpha(X_d^+) = \sup\{\mathrm{VaR}_\alpha(X_1^C + \cdots + X_d^C) : C \in \mathcal{C}_d,\ X_i \sim F_i,\ i = 1, \ldots, d\}$   (7)

As with F above, the upper C-index highlights the fact that the joint df of $(X_1, \ldots, X_d)$ is $F = C(F_1, \ldots, F_d)$.
Rewriting the optimization problems (Equations (4) and (5)) in their equivalent copula form (Equations (6) and (7)) stresses the fact that, once we are given the marginal dfs $F_i$, $i = 1, \ldots, d$, solving for $\underline{\mathrm{VaR}}$ and $\overline{\mathrm{VaR}}$ amounts to finding the copulas which, together with the $F_i$s, achieve these bounds. Hence, solving for Equations (4) and (5), or equivalently for Equations (6) and (7) (the set-up we will usually consider), one obtains the MU-interval for fixed marginals:

$\underline{\mathrm{VaR}}_\alpha(X_d^+) \leq \mathrm{VaR}_\alpha(X_d^+) \leq \overline{\mathrm{VaR}}_\alpha(X_d^+)$   (8)
If an inequality in Equation (8) becomes an equality for a given copula, C, the corresponding copula is referred to as an optimal coupling. An important current area of research consists of finding the bounds in Equation (8), analytically and/or numerically, proving sharpness under specific conditions and finding the corresponding optimal couplings.
The interval $[\underline{\mathrm{VaR}}_\alpha(X_d^+), \overline{\mathrm{VaR}}_\alpha(X_d^+)]$ yields a measure of MU across all possible joint models as a function of the inter-dependencies between the marginal factors (recall that we assume the $F_i$s to be known!). So far, we assume no prior knowledge about the inter-dependence among the marginal risks $X_1, \ldots, X_d$. If extra, though still incomplete, information, like, for instance, “all $X_i$s are positively correlated”, is available, then the above MU interval narrows. An important question becomes: can one quantify such MU? This is precisely the topic treated in Embrechts et al. [15], Barrieu and Scandolo [21] and Bignozzi and Tsanakas [22]. There is a multitude of both analytic, as well as numeric (algorithmic), results. We consider three measures relevant for the MU discussion; the confidence level $\alpha \in (0, 1)$ is fixed.
  • Measure 1 The model-specific superadditivity ratio for the aggregate loss $X_d^+$:
    $\Delta_\alpha(X_d^+) = \frac{\mathrm{VaR}_\alpha(X_d^+)}{\sum_{i=1}^d \mathrm{VaR}_\alpha(X_i)} = \frac{\mathrm{VaR}_\alpha(X_d^+)}{\mathrm{VaR}_\alpha^+(X_d^+)}$   (9)
    where we define $\mathrm{VaR}_\alpha^+(X_d^+) := \sum_{i=1}^d \mathrm{VaR}_\alpha(X_i)$. The superadditivity ratio measures the non-coherence, equivalently the superadditivity gap, of VaR for a given joint model for $X$. As such, it yields an indication of how far VaR can be away from properly describing diversification.
  • Measure 2 The worst superadditivity ratio:
    $\overline{\Delta}_\alpha(X_d^+) = \frac{\overline{\mathrm{VaR}}_\alpha(X_d^+)}{\mathrm{VaR}_\alpha^+(X_d^+)}$   (10)
    between the worst-possible VaR and the comonotonic VaR. It measures the superadditivity gap across all joint models with given marginals.
  • Measure 3 The ratio between the worst-possible ES and the worst-possible VaR:
    $B_\alpha(X_d^+) = \frac{\overline{\mathrm{ES}}_\alpha(X_d^+)}{\overline{\mathrm{VaR}}_\alpha(X_d^+)} = \frac{\sum_{i=1}^d \mathrm{ES}_\alpha(X_i)}{\overline{\mathrm{VaR}}_\alpha(X_d^+)}$   (11)
    This relates to the question in Basel 3.5 from the Introduction.
In the next section, we discuss some of the methodological results leading to estimates for Equations (4)–(11); these are based on some very recent mathematical developments on dependence uncertainty. Section 4 contains several numerical examples. Section 5 addresses the robust backtesting question for VaR and comments on the possible change from VaR to ES for regulatory purposes. We draw a conclusion in Section 6. As it stands, the paper has a dual goal: first, it provides a broadly accessible critical assessment of the VaR versus ES debate triggered by Basel 3.5; at the same time, we list several areas of ongoing and possible future research that may come out of these discussions.

3. Mathematical Developments on Dependence Uncertainty

Questions of the type of Equations (4)–(7) go back a long way in probability theory: an early solution for d = 2 was given independently by Makarov [23], a student of A. N. Kolmogorov from whom Makarov obtained the problem, and by Rüschendorf [24] with a different approach. This type of question belongs to a rather specialized area of multivariate probability theory and is mathematically non-trivial to answer. Although at the moment of writing this paper we do not yet have complete answers, significant progress has recently been made, providing insight not only into the mathematical theory in this area, but also yielding answers to practically relevant questions.
To investigate problems with dependence uncertainty, like Equations (4)–(7), it is useful to define the set of all possible aggregations:
$S_d = S_d(F_1, \ldots, F_d) = \{X_1 + \cdots + X_d : X_1 \sim F_1, \ldots, X_d \sim F_d\}$

Such problems lead to research on the probabilistic properties of, and statistical inference in, this set $S_d$ ($S_d$ was formally introduced in Bernard et al. [25], but all prior research in this area dealt in some form or another with the same framework). For example, the questions (4) and (5) can be rephrased as:

$\underline{\mathrm{VaR}}_\alpha(X_d^+) = \inf_{S \in S_d} \mathrm{VaR}_\alpha(S) \quad \text{and} \quad \overline{\mathrm{VaR}}_\alpha(X_d^+) = \sup_{S \in S_d} \mathrm{VaR}_\alpha(S)$   (12)
A full characterization of $S_d$ is still out of reach; recently, however, significant progress has been made, especially in the so-called homogeneous case. We refer to the recent book Rüschendorf [26] for an overview of research on extremal problems with marginal constraints and dependence uncertainty. In particular, the book provides links between Equations (4)–(7) and copula theory, mass-transportation and financial risk analysis.

3.1. The Homogeneous Case

Let us first look at the case $F_1 = \cdots = F_d =: F$, which we call a homogeneous model. For this model, analytical results are available. Analytical values for $\overline{\mathrm{VaR}}_\alpha(X_d^+)$ have been obtained in Wang et al. [27] and Puccetti and Rüschendorf [28] for the homogeneous model when the marginal distributions have a tail-decreasing density (such as Pareto, Gamma or log-normal distributions). Wang et al. [27] also provide analytical expressions for $\underline{\mathrm{VaR}}_\alpha(X_d^+)$ for marginal distributions with a decreasing density. These results are summarized below.
Proposition 1 (Corollary 3.7 of Wang et al. [27], in a slightly different form). Suppose that the density function of F is decreasing on $[\beta, \infty)$ for some $\beta \in \mathbb{R}$. Then, for $\alpha \in [F(\beta), 1)$ and $X \sim F$,

$\overline{\mathrm{VaR}}_\alpha(X_d^+) = d\, E\left[X \mid X \in \left[F^{-1}(\alpha + (d-1)c_{d,\alpha}),\ F^{-1}(1 - c_{d,\alpha})\right]\right]$   (13)

where $c_{d,\alpha}$ is the smallest number in $[0, (1-\alpha)/d]$, such that:

$\frac{\int_{\alpha+(d-1)c}^{1-c} F^{-1}(t)\, dt}{1 - \alpha - dc} \;\geq\; \frac{d-1}{d}\, F^{-1}(\alpha + (d-1)c) + \frac{1}{d}\, F^{-1}(1-c)$   (14)

Moreover, suppose that the density function of F is decreasing on its support. Then, for $\alpha \in (0, 1)$ and $X \sim F$,

$\underline{\mathrm{VaR}}_\alpha(X_d^+) = \max\left\{(d-1) F^{-1}(0) + F^{-1}(\alpha),\; d\, E[X \mid X \leq F^{-1}(\alpha)]\right\}$   (15)
Although the expressions (13)–(15) look somewhat complicated, they can be reformulated using the notion of duality, which dates back to Rüschendorf [24]; the resulting dual representation originated in the theory of mass-transportation. The following proposition provides an equivalent representation of Equation (13). It is stated in Puccetti and Rüschendorf [28] in a slightly modified form and under a more general condition of complete mixability.
Proposition 2. Under the same assumptions as in Proposition 1, suppose that, for any sufficiently large threshold s, it is possible to find $a < s/d$, such that:

$D(s) := \frac{d \int_a^b \overline{F}(x)\, dx}{b - a} = \overline{F}(a) + (d-1)\,\overline{F}(b)$   (16)

where $b = s - (d-1)a$, with $F^{-1}(1 - D(s)) \leq a$. Then, for $\alpha \geq 1 - D(s)$, we have that:

$\overline{\mathrm{VaR}}_\alpha(X_d^+) = D^{-1}(1 - \alpha)$   (17)
The proofs of Propositions 1 and 2 are based on the recently introduced and further developed mathematical concept of complete mixability.
Definition 3.1 (Wang and Wang [29]). The marginal distribution F is said to be d-completely mixable if there exist rvs $X_1, \ldots, X_d$ with df F, such that $X_1 + \cdots + X_d$ is almost surely constant.
Recent results on complete mixability are summarized in Wang and Wang [29] and Puccetti et al. [30]; an earlier study on the problem of constant sums can be found in Rüschendorf and Uckelmann [31], with ideas that originated from the early 1980s (see Rüschendorf [26]). A necessary and sufficient condition for distributions with monotone densities to be completely mixable is given in Wang and Wang [29]; it is used in the proof of the bounds in Propositions 1 and 2.
Complete mixability, as opposed to comonotonicity, corresponds for this problem to extreme negative dependence. In other words, $\overline{\mathrm{VaR}}$ is obtained through a concept of extreme negative correlation (given that the correlation exists) between conditional distributions, instead of maximal correlation, as discussed in Section 2. This rather counter-intuitive mathematical observation partially explains why VaR is non-subadditive and warns that regulatory or pricing criteria based on comonotonicity are not as conservative as one may think.
So far, conditional complete mixability and, hence, the sharpness of the dual bound for $\overline{\mathrm{VaR}}_\alpha(X_d^+)$ have only been shown for dfs satisfying the tail-decreasing density condition of Proposition 1. Of course, this condition is satisfied by most distributional models used in risk management practice. For such models, we are hence able to calculate Equation (12), $\overline{\Delta}_\alpha(X_d^+)$ and $B_\alpha(X_d^+)$ as defined in Equations (10) and (11). Examples will be given later.
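As an illustration of how Proposition 1 can be put to work, the sketch below evaluates Equations (13)–(15) for homogeneous Pareto(θ) portfolios, finding $c_{d,\alpha}$ by a simple grid search and computing the conditional means by numerical integration. The grid size and the Pareto example are our own (hypothetical) choices; for d = 8, θ = 2 and α = 0.999, the output should be close to the values 31 and 465 reported in Table 4 below.

```python
import numpy as np
from scipy.integrate import quad

def pareto_quantile(t, theta):
    """F^{-1}(t) for the Pareto(theta) df of Equation (26)."""
    return (1.0 - t) ** (-1.0 / theta) - 1.0

def cond_mean(t1, t2, theta):
    """E[X | X in [F^{-1}(t1), F^{-1}(t2)]], i.e., the mean of the quantile function on [t1, t2]."""
    value, _ = quad(pareto_quantile, t1, t2, args=(theta,))
    return value / (t2 - t1)

def worst_var(alpha, d, theta, grid=2_000):
    """Worst VaR via Equation (13); c_{d,alpha} is the smallest c satisfying the mean condition (14)."""
    c_max = (1.0 - alpha) / d
    for c in np.linspace(c_max / grid, c_max, grid, endpoint=False):
        t1, t2 = alpha + (d - 1) * c, 1.0 - c
        lhs = cond_mean(t1, t2, theta)
        rhs = ((d - 1) * pareto_quantile(t1, theta) + pareto_quantile(t2, theta)) / d
        if lhs >= rhs:
            return d * lhs
    return d * pareto_quantile(1.0 - c_max, theta)   # degenerate case c_{d,alpha} = (1-alpha)/d

def best_var(alpha, d, theta):
    """Best VaR via Equation (15); note that F^{-1}(0) = 0 for the Pareto df."""
    return max((d - 1) * 0.0 + pareto_quantile(alpha, theta),
               d * cond_mean(0.0, alpha, theta))

for d in (8, 56):
    print(d, round(best_var(0.999, d, 2.0)), round(worst_var(0.999, d, 2.0)))
```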

3.2. Towards the Inhomogeneous Case

When the assumption of homogeneity $F_1 = \cdots = F_d$ is removed, analytical results become much more challenging to obtain. The connection between Equation (12) and the concept of convex order turns out to be relevant. Relations between the two concepts were described in (Bernard et al. [25], Theorem 4.6) and (Bernard et al. [32], Theorem 2.4 and Appendix (A1)–(A4)). Let U be a uniform rv on $[0, 1]$, let $F_i^{[\alpha]}$ be the distribution of $F_i^{-1}(\alpha + (1-\alpha)U)$ (the upper α-tail distribution) and let $F_i^{(\alpha)}$ be the distribution of $F_i^{-1}(\alpha U)$ (the lower α-tail distribution), for $\alpha \in (0, 1)$ and $i = 1, \ldots, d$.
Proposition 3. Suppose the dfs $F_1, \ldots, F_d$ have positive densities on their supports; then, for $\alpha \in (0, 1)$,

$\overline{\mathrm{VaR}}_\alpha(X_d^+) = \sup_{S \in S_d(F_1, \ldots, F_d)} \mathrm{VaR}_\alpha(S) = \sup\left\{\operatorname{ess\,inf} S : S \in S_d(F_1^{[\alpha]}, \ldots, F_d^{[\alpha]})\right\}$   (18)

and:

$\underline{\mathrm{VaR}}_\alpha(X_d^+) = \inf_{S \in S_d(F_1, \ldots, F_d)} \mathrm{VaR}_\alpha(S) = \inf\left\{\operatorname{ess\,sup} S : S \in S_d(F_1^{(\alpha)}, \ldots, F_d^{(\alpha)})\right\}$   (19)

where the essential infimum of a random variable S is defined as:

$\operatorname{ess\,inf} S = \sup\{t \in \mathbb{R} : P(S < t) = 0\}$

and the essential supremum of a random variable S is defined as:

$\operatorname{ess\,sup} S = \inf\{t \in \mathbb{R} : P(S \leq t) = 1\}$.
As a consequence, it is shown that the worst VaR in $S_d(F_1, \ldots, F_d)$ is attained by a minimal element with respect to convex order in $S_d(F_1^{[\alpha]}, \ldots, F_d^{[\alpha]})$. A similar statement holds for $\underline{\mathrm{VaR}}_\alpha(X_d^+)$. In some cases, for instance, under assumptions of complete mixability, even the smallest elements with respect to convex order can be given, but in general, there may not be such a smallest element. Recent attempts to find analytical solutions for minimal convex ordering elements are summarized in Bernard et al. [25]. Based on current knowledge, we are only able to calculate Equation (12), $\overline{\Delta}_\alpha(X_d^+)$ and $B_\alpha(X_d^+)$ in the inhomogeneous case under fairly strong assumptions on the marginal dfs, for which the “sup-inf” and “inf-sup” problems are solvable. It is of interest that an algorithm, called the Rearrangement Algorithm (RA), has been introduced to approximate these worst (best)-case VaRs (see Numerical Optimization below).
Another important issue is the optimal coupling structure for the worst VaR. From Proposition 3, we can see that the interdependence (copula) between the random variables can be set arbitrarily in the lower region of the marginal supports, and only the tail dependence (in a region of probability $1-\alpha$ in each margin) matters for the worst VaR value. In the tail region, a smallest element in the convex order sense solves the “sup-inf” and “inf-sup” problems (18) and (19). To be more precise, the individual risks are coupled in such a way that, conditional on all of them being large, their sum is concentrated around a constant (ideally, the sum is a constant, but this is not realistic in many cases). That is why conditional complete mixability plays an important role in the optimal coupling for the worst VaR (the optimal coupling for the best VaR is similar, except that the conditional region now is a (typically large) interval of probability α). This also leads to the fact that information on overall correlation, such as the linear correlation coefficient or Spearman's rho, may not directly affect the value of the worst VaR. Even with a constraint that the risks in a portfolio are uncorrelated or mildly correlated, the worst VaR may still be reached. This, to some extent, warns about the danger of using a single number as the dependence indicator in a quantitative risk management model. In the recent paper Bernard et al. [32], it has been shown that additional variance constraints may lead to essentially improved VaR bounds.

3.3. Numerical Optimization

Numerical methods are very useful when it comes to these optimization problems. One such method is the Rearrangement Algorithm (RA) introduced in Puccetti and Rüschendorf [33], which was modified, extended and further discussed, with applications to quantitative risk management, in Embrechts et al. [15]. The RA is a simple, but fast, algorithm designed to approximate convex minimal elements in a set $S_d$ through discretization. The RA allows for a fast and accurate computation of the bounds in Equation (12) for arbitrary marginal dfs, both in the homogeneous, as well as the inhomogeneous, case. The method uses a discretization of the relevant quantile region into N steps ($N = 10^6$, say), resulting in an $N \times d$ matrix on which, through successive operations, a matrix with minimal variance of the row-sums (think of complete mixability) is obtained. The RA can easily handle problems of large dimensionality, d of the order of 1,000, say. For details and examples, see Embrechts et al. [15]. As argued in Bernard et al. [25], the numerical approximations obtained through the RA suggest that the bound $\overline{\mathrm{VaR}}$ in Proposition 1 is sharp for all commonly used marginal distributions, and this without the requirement of a tail-decreasing density. Up to now, we do not have a formal proof of this. In Bernard et al. [32], an extension of the RA (called ERA) is introduced and shown to give reliable bounds for the variance-constrained VaR.
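A compact version of the RA for the worst VaR reads as follows. The discretization size N, the convergence criterion and the Pareto(2) example are our own illustrative choices; the published algorithm works with lower and upper discretizations to obtain a bracketing interval, which we omit here for brevity.

```python
import numpy as np

def ra_worst_var(qfuns, alpha, N=10_000, tol=1e-9, max_iter=100):
    """Rearrangement Algorithm sketch: approximate the worst VaR_alpha of X_1+...+X_d.
    Each upper (1-alpha)-tail is discretized into N cells; every column is then repeatedly
    rearranged to be oppositely ordered to the sum of the remaining columns, which drives
    the row-sums towards a (conditionally) constant value, cf. complete mixability."""
    d = len(qfuns)
    u = alpha + (1.0 - alpha) * (np.arange(N) + 0.5) / N      # cell midpoints of the tail region
    X = np.column_stack([q(u) for q in qfuns])
    worst = -np.inf
    for _ in range(max_iter):
        for j in range(d):
            rest = X.sum(axis=1) - X[:, j]
            ranks = np.argsort(np.argsort(rest))              # rank 0 = smallest partial sum
            X[:, j] = np.sort(X[:, j])[::-1][ranks]           # pair large values with small partial sums
        new_worst = X.sum(axis=1).min()                       # approximates ess inf of the tail sum
        if abs(new_worst - worst) < tol:
            break
        worst = new_worst
    return worst

# d = 8 homogeneous Pareto(2) marginals at alpha = 0.999; cf. Table 4 (worst VaR roughly 465)
pareto_q = lambda u: (1.0 - u) ** (-0.5) - 1.0
print(ra_worst_var([pareto_q] * 8, alpha=0.999))
```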

3.4. Asymptotic Equivalence of Worst VaR and Worst ES

For any random variable Y, $\mathrm{VaR}_\alpha(Y)$ is bounded above by $\mathrm{ES}_\alpha(Y)$. As a consequence, the worst-case VaR is bounded above by the worst-case ES, i.e.,

$\overline{\mathrm{VaR}}_\alpha(X_d^+) \leq \overline{\mathrm{ES}}_\alpha(X_d^+)$   (20)

Since $\overline{\mathrm{ES}}_\alpha(X_d^+) = \sum_{i=1}^d \mathrm{ES}_\alpha(X_i)$, Equation (20) gives a simple way to calculate an upper bound for the worst VaR. This bound implies that:

$B_\alpha(X_d^+) = \frac{\overline{\mathrm{ES}}_\alpha(X_d^+)}{\overline{\mathrm{VaR}}_\alpha(X_d^+)} \geq 1$
It was an observation made in Puccetti and Rüschendorf [34] that the bound in Equation (20) is asymptotically sharp under general conditions, i.e., an asymptotic equivalence of worst VaR and worst ES holds.
The exact identity of worst-possible VaR and ES estimates holds for bounded homogeneous portfolios when the common marginal distribution is completely mixable, as indicated in the following remark.
Remark 1. Assume that F is a continuous df that is bounded above, with $b > F^{-1}(\alpha)$ denoting the upper endpoint of its support. Assume also that F is d-completely mixable on $[F^{-1}(\alpha), b]$, i.e., there exist a random vector $(X_1^*, \ldots, X_d^*)$ with $X_i^* \sim F$ and a constant k, such that:

$P\left(X_1^* + \cdots + X_d^* = k \mid X_i^* \in [F^{-1}(\alpha), b],\ 1 \leq i \leq d\right) = 1$
By the definition of VaR in Equation (2), we then have that:
$\mathrm{VaR}_\alpha(X_1^* + \cdots + X_d^*) = k$   (22)
and:
$k = E\left[\sum_{i=1}^d X_i^* \mid X_i^* \in [F^{-1}(\alpha), b],\ 1 \leq i \leq d\right] = \sum_{i=1}^d \mathrm{ES}_\alpha(X_i^*) = \overline{\mathrm{ES}}_\alpha(X_1^* + \cdots + X_d^*)$   (23)
Because of Equation (20), Equations (22) and (23) imply:
$\overline{\mathrm{VaR}}_\alpha(X_1^* + \cdots + X_d^*) = \overline{\mathrm{ES}}_\alpha(X_1^* + \cdots + X_d^*)$
The above example suggests a strong connection between $\overline{\mathrm{VaR}}_\alpha(X_d^+)$ and $\overline{\mathrm{ES}}_\alpha(X_d^+)$. Indeed, consider Equation (18), and note that, by the definition of $F_i^{[\alpha]}$, $\overline{\mathrm{ES}}_\alpha(X_d^+) = E[S]$ for any $S \in S_d(F_1^{[\alpha]}, \ldots, F_d^{[\alpha]})$. Therefore, mathematically, the link between $\overline{\mathrm{VaR}}_\alpha(X_d^+)$ and $\overline{\mathrm{ES}}_\alpha(X_d^+)$ really concerns the difference between $\operatorname{ess\,inf} S$ and $E[S]$ for some S in $S_d(F_1^{[\alpha]}, \ldots, F_d^{[\alpha]})$. Intuitively, such an S, which solves the “sup-inf” and “inf-sup” problems, should have a rather small value of $|S - E[S]|$, leading to a small value of $|\overline{\mathrm{VaR}}_\alpha(X_d^+) - \overline{\mathrm{ES}}_\alpha(X_d^+)|$. Furthermore, the $c_{d,\alpha} = 0$ case in Proposition 1 points in the same direction.
The asymptotic equivalence of worst VaR and worst ES was established in Puccetti and Rüschendorf [34] in the homogeneous case based on the dual bounds in Embrechts and Puccetti [35] and an assumption of conditional complete mixability. The assumption was later weakened by Puccetti et al. [36] and Wang [37] and removed in Wang and Wang [38]. The following extension to inhomogeneous models in the general case was given in (Embrechts et al. [39] (Theorem 3.3)).
Proposition 4 (Asymptotic equivalence of worst VaR and worst ES). Suppose that the continuous distributions $F_i$, $i \in \mathbb{N}$, satisfy, for some $\alpha \in (0, 1)$ and $k > 1$:

(a) $E\left[|X_i - E[X_i]|^k\right]$ is uniformly bounded, and

(b) $\liminf_{d \to \infty} d^{-1/k} \sum_{i=1}^d \mathrm{ES}_\alpha(X_i) = +\infty$,

then:

$\lim_{d \to \infty} \frac{\overline{\mathrm{VaR}}_\alpha(X_d^+)}{\overline{\mathrm{ES}}_\alpha(X_d^+)} = 1$   (24)
In the homogeneous model, no assumption other than a finite and non-zero $\mathrm{ES}_\alpha(X_1)$ is required for Equation (24) to hold; see (Wang and Wang [38], Corollary 3.8). A notion of asymptotic mixability (asymptotically constant sums) leads to the asymptotic equivalence of the worst VaR and the worst ES (see Bernard et al. [32], Puccetti and Rüschendorf [40]), indicating that this equivalence is connected with the law of large numbers and, therefore, holds under general conditions. The equivalence (24) is also suggested by several numerical examples (see Examples 4.1 and 4.2 in Section 4). For research on the asymptotic equivalence (24) for general risk measures, we refer to Wang et al. [41].
Remark 2. An immediate consequence of Equation (24) is that, in the finite-mean, homogeneous case, when $\mathrm{VaR}_\alpha(X_1) > 0$, as $d \to \infty$, we have that:

$\overline{\Delta}_\alpha(X_d^+) = \frac{\overline{\mathrm{VaR}}_\alpha(X_d^+)}{\mathrm{VaR}_\alpha^+(X_d^+)} \to \frac{\mathrm{ES}_\alpha(X_1)}{\mathrm{VaR}_\alpha(X_1)}$   (25)
Generally speaking, the worst superadditivity ratio $\overline{\Delta}_\alpha(X_d^+)$ is asymptotically $\overline{\mathrm{ES}}_\alpha(X_d^+)/\mathrm{VaR}_\alpha^+(X_d^+)$ in all homogeneous and inhomogeneous models of practical interest. In other words, we can say that the worst VaR is almost as extreme as the worst ES at the same confidence level for d large. According to BCBS [4], $\mathrm{VaR}_{0.99}$ is to be compared with $\mathrm{ES}_{0.975}$; by Equation (24), the worst $\mathrm{VaR}_{0.99}$ is generally (much) larger than the worst $\mathrm{ES}_{0.975}$.
It is also worth pointing out that the worst superadditivity ratio $\overline{\Delta}_\alpha(X_d^+)$ approaches infinity when $\overline{\mathrm{ES}}_\alpha(X_d^+)$ is infinite; this is consistent with Equation (25) and was shown in Puccetti and Rüschendorf [34]. Models leading to (estimated) infinite-mean risks have attracted considerable attention in the literature; see, for instance, the early contribution Nešlehová et al. [42] within the realm of operational risk and Delbaen [43] for a more methodological point of view. Clearly, ES is not defined in this case, nor does there exist any non-trivial coherent risk measure on the space of infinite-mean rvs; see Delbaen [43]. As a consequence, since VaR is always well defined, it may become a risk measure of “last resort”. We shall not enter into a more applied “pro versus contra” discussion on the topic here, but just stress the extra insight our results give within the context of Basel 3.5. In particular, note that for infinite-mean risks, the worst VaR grows much faster than the comonotonic one.
Remark 3. So far, we have mainly looked at the asymptotic properties when the portfolio dimension d becomes large, i.e., $d \to \infty$, as in Proposition 4. Alternatively, one could consider d fixed and $\alpha \to 1$. The latter quickly becomes a question in (Multivariate) Extreme Value Theory (EVT); indeed, for α close to one, one is concerned with the extreme tail behavior of the underlying dfs. The reader interested in results of this type can, for instance, consult Mainik and Rüschendorf [44] and Mainik and Embrechts [45] and the references therein. Finally, one could also consider joint asymptotic behavior $d = d(\alpha)$, where both $d \to \infty$ and $\alpha \to 1$ in a coupled way; this would correspond to so-called Large Deviations Theory; see (Asmussen and Glynn [17], Section VI) for an introduction in the context of rare event simulation.

4. Examples

By Equation (25), the ratio between the ES and the VaR of a random variable represents a degree of superadditivity which is peculiar to its distribution: it measures how badly VaR can behave in a homogeneous model. We start by computing the ES/VaR ratio in some homogeneous examples, $F_1 = \cdots = F_d$, and later discuss some inhomogeneous examples.

4.1. The Pareto Case

Suppose $X_i \sim \mathrm{Pareto}(\theta)$, $i = 1, \ldots, d$, i.e.,

$1 - F_i(x) = P(X_i > x) = (1 + x)^{-\theta}, \quad x \geq 0$   (26)

Power-laws of the type of Equation (26) are omnipresent in finance and insurance, often with θ values in the range [0.5, 5]; see (Embrechts et al. [46], Chapter 6). The lower range, [0.5, 1], typically corresponds to catastrophe insurance; the upper one, [3, 5], to market return data. Operational risk data typically span the whole range and, indeed, beyond; see Moscadelli [47] and Gourier et al. [48].
Under Equation (26), we have that, for $\theta > 0$ and $\alpha \in (0, 1)$,

$\mathrm{VaR}_\alpha(X_i) = (1 - \alpha)^{-1/\theta} - 1$

and, for $\theta > 1$ and $\alpha \in (0, 1)$,

$\mathrm{ES}_\alpha(X_i) = \frac{\theta}{\theta - 1} \mathrm{VaR}_\alpha(X_i) + \frac{1}{\theta - 1}$

As a consequence, we have that, for $i = 1, \ldots, d$,

$\frac{\mathrm{ES}_\alpha(X_i)}{\mathrm{VaR}_\alpha(X_i)} = \frac{\theta}{\theta - 1} + \frac{1}{(\theta - 1)\,\mathrm{VaR}_\alpha(X_i)}$

As the Pareto distribution (26) is unbounded from above, the latter equation implies that:

$\lim_{\alpha \to 1} \frac{\mathrm{ES}_\alpha(X_i)}{\mathrm{VaR}_\alpha(X_i)} = \frac{\theta}{\theta - 1} > 1$   (27)
Result (27) holds true if we replace the exact Pareto df in Equation (26) by a so-called power-tail-like df, i.e.,

$1 - F(x) = (1 + x)^{-\theta} L(x)$

where L is a so-called slowly varying function in Karamata's sense; see (McNeil et al. [14], Definition 7.7). For fast decaying tails, like in the normal case, the limit in Equation (27) equals one, so that for α close to one, there is not much difference between using Value-at-Risk or Expected Shortfall as a basis for risk capital calculations. In the case of Pareto tails, the difference can, however, be substantial. In Table 1, Table 2 and Table 3, we illustrate values of the ES/VaR ratio for Pareto, log-normal and exponential distributions. Combined with Equation (25), for α close to one, these results already yield a fast, rough estimate of $\overline{\Delta}_\alpha(X_d^+)$ for d large. For example, in the case of the above Pareto(θ) df with $\theta = 2$, from Equations (25) and (27) we expect values $\overline{\Delta}_\alpha(X_d^+) \approx 2$, as Table 1 confirms.
Table 1. Values for the Expected Shortfall/Value-at-Risk (ES/VaR) ratio for Pareto(θ) distributions.

α      | θ = 1.1   | θ = 1.5  | θ = 2    | θ = 3    | θ = 4
0.99   | 11.154337 | 3.097350 | 2.111111 | 1.637303 | 1.487492
0.995  | 11.081599 | 3.060242 | 2.076091 | 1.603135 | 1.454080
0.999  | 11.018773 | 3.020202 | 2.032655 | 1.555556 | 1.405266
α → 1  | 11.000000 | 3.000000 | 2.000000 | 1.500000 | 1.333333
Table 2. Values for the ES/VaR ratio for LogN(0, θ) distributions.

α      | θ = 0.5  | θ = 1    | θ = 1.5  | θ = 2    | θ = 2.5
0.99   | 1.200364 | 1.487037 | 1.920334 | 2.621718 | 3.858599
0.995  | 1.184949 | 1.443519 | 1.823195 | 2.415980 | 3.415242
0.999  | 1.158988 | 1.372433 | 1.670393 | 2.107238 | 2.787941
α → 1  | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000
Table 3. Values for the ES/VaR ratio for Exponential(θ) distributions.

α      | θ = 0.5  | θ = 1    | θ = 1.5  | θ = 2    | θ = 2.5
0.99   | 1.217147 | 1.217147 | 1.217147 | 1.217147 | 1.217147
0.995  | 1.188739 | 1.188739 | 1.188739 | 1.188739 | 1.188739
0.999  | 1.144765 | 1.144765 | 1.144765 | 1.144765 | 1.144765
α → 1  | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000
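For reference, the entries in Tables 1–3 can be reproduced from the closed-form VaR and ES expressions of the three families; the following short sketch is ours (it uses SciPy's normal distribution for the log-normal case) and only re-evaluates a few of the reported ratios.

```python
import numpy as np
from scipy.stats import norm

def ratio_pareto(alpha, theta):
    """ES/VaR for Pareto(theta), theta > 1: follows from the formulas above Equation (27)."""
    var = (1.0 - alpha) ** (-1.0 / theta) - 1.0
    es = theta / (theta - 1.0) * var + 1.0 / (theta - 1.0)
    return es / var

def ratio_lognormal(alpha, sigma):
    """ES/VaR for LogN(0, sigma): ES_alpha = exp(sigma^2/2) * Phi(sigma - z_alpha) / (1 - alpha)."""
    z = norm.ppf(alpha)
    var = np.exp(sigma * z)
    es = np.exp(sigma ** 2 / 2.0) * norm.cdf(sigma - z) / (1.0 - alpha)
    return es / var

def ratio_exponential(alpha):
    """ES/VaR for the exponential distribution; the rate cancels in the ratio."""
    var = -np.log(1.0 - alpha)
    return (var + 1.0) / var

print(ratio_pareto(0.99, 2.0))       # approx. 2.111111 (Table 1)
print(ratio_lognormal(0.99, 1.0))    # approx. 1.487037 (Table 2)
print(ratio_exponential(0.99))       # approx. 1.217147 (Table 3)
```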

4.2. Some Numerical Examples

4.2.1. Example: The homogeneous case.

We start with an example that is realistic in the context of operational risk under the Basel 2 guidelines; see for instance (McNeil et al. [14], Chapter 10). Indeed, internal operational risk data often show Pareto-type behavior; see for instance Moscadelli [47], Gourier et al. [48]. The values of d used typically correspond either to d = 8 business lines or d = 56 = 8 × 7 (seven being the standard number of risk types). For α, we take the Basel 2 value, 0.999. The numbers in Table 4 were calculated with the theory summarized in Section 3 for homogeneous Pareto(2) portfolios; they speak for themselves. Note that the numerical values reported are very much in line with the asymptotics, as presented in Equations (24) and (25), together with Equation (27), and this is already for small ( d = 8 ) to moderate values ( d = 56 ) of d. Furthermore, note that in this Pareto example, the VaR-MU spread is much larger than the ES-MU spread, suggesting that ES is “more robust” than VaR with respect to dependence uncertainty; a theoretical analysis of this phenomenon can be found in Embrechts et al. [39].
Table 4. Values (rounded) for the best and worst VaR and ES for a homogeneous portfolio with d Pareto(2) risks; $\alpha = 0.999$. $\mathrm{VaR}_\alpha^+$ represents VaR in the comonotonic case. Comonotonic values might not be strict multiples, because of rounding.

d   | $\underline{\mathrm{VaR}}_\alpha$ | $\underline{\mathrm{ES}}_\alpha$ | $\mathrm{VaR}_\alpha^+$ | $\overline{\mathrm{VaR}}_\alpha$ | $\overline{\mathrm{ES}}_\alpha$ | $\overline{\Delta}_\alpha(X_d^+)$ | $B_\alpha(X_d^+)$
8   | 31 | 178 | 245  | 465  | 498  | 1.898 | 1.071
56  | 53 | 472 | 1715 | 3454 | 3486 | 2.014 | 1.009
We want to stress that, often in industry, $\mathrm{VaR}_\alpha^+(X_d^+)$ is reported as the capital upper bound for operational risk within the Loss Distribution Approach (LDA). “Diversification arguments” are then used in order to obtain a 10 to 30% deduction. In Table 4 (d = 8), this leads to a capital charge of around 170 to 220, the worst possible VaR, however, being 465. In this case, the results reported in Section 3 tell us that the full VaR range [31, 465] is attainable (given that all marginal dfs are Pareto(2)); more research is needed concerning the question of which interdependencies (copulas) yield VaR values in the superadditivity range [245, 465]. We will comment on this question in Section 5. For the moment, it suffices to understand that statements like “diversification yields ⋯” have to be taken with care; financial institutions indeed need to carefully explain where such diversification deductions come from.
This is not just an academic issue; for instance, FINMA, the Swiss regulator, ordered UBS to increase its capital reserves against litigation, compliance and operational risks by 50%. The Financial Times [49] reports on “FINMA's move, which UBS expects will increase its risk-weighted assets (RWAs) by CHF 28 billion.” Banks worldwide are carefully monitored by their respective regulatory authorities concerning the calculation of RWAs in general, and operational risk in particular. This is no doubt due to the aftermath of the subprime crisis, the LIBOR scandal and the allegations around possible manipulations of the $5 trillion (in 2013) global foreign exchange market. Regulators are in general also worried by some of the complexity of the RWA models in use in the industry and the possibly resulting model uncertainty and/or regulatory arbitrage.
Table 5. Values (rounded) for the best and worst VaR and ES for an inhomogeneous portfolio divided into three homogeneous subgroups, i.e., d = 3k, having marginals distributed as F₁ = Pareto(2), F₂ = Exp(1), F₃ = LogN(0,1); $\alpha = 0.999$. Comonotonic values might not be strict multiples, because of rounding.

k (d = 3k) | $\underline{\mathrm{VaR}}_\alpha$ | $\underline{\mathrm{ES}}_\alpha$ | $\mathrm{VaR}_\alpha^+$ | $\overline{\mathrm{VaR}}_\alpha$ | $\overline{\mathrm{ES}}_\alpha$ | $\overline{\Delta}_\alpha(X_d^+)$ | $B_\alpha(X_d^+)$
1   | 31 | 64  | 60   | 77   | 100  | 1.2833 | 1.299
3   | 31 | 107 | 179  | 277  | 301  | 1.5475 | 1.087
10  | 36 | 190 | 595  | 979  | 1003 | 1.6454 | 1.025
20  | 71 | 264 | 1190 | 1982 | 2006 | 1.6655 | 1.012

4.2.2. Example: An inhomogeneous portfolio

In Table 5, we present an inhomogeneous portfolio in line with Proposition 4. We have three types of marginal dfs, Pareto(2), Exp(1) and LogN(0,1), yielding dimensions d = 3k for k = 1, 3, 10, 20. In particular, note the column for $B_\alpha(X_d^+)$, with values close to one as k (hence d) increases, as stated in Equation (24). Again, note that values of $B_\alpha(X_d^+)$ close to one are already reached for small to moderate values of d. One could compare and contrast the numbers in more detail or include other dfs/parameters. The main point, however, is that, using the results reported in Section 3 and the RA, these numbers can actually be calculated; see, for instance, the recent paper Aas and Puccetti [50], where the RA is applied to a real bank's portfolio.
We remark that, until now, we have compared VaR and ES at the same confidence level. For a more comprehensive comparison of VaR and ES, one should compare VaR at a high confidence level with ES at a somewhat lower one, as suggested in BCBS [4] and also by Kou and Peng [51]. We refer to (Embrechts et al. [39], Sections 4 and 5) for a dependence-uncertainty comparison of ES at level α and VaR at a level higher than α.

5. Robust Backtesting of Risk Measures

Finally, we want to comment further on Question 8 from BCBS [3]. It is fully clear that, if one wants to regulate a financial institution relying on a single number, ES is better than VaR concerning W1 and W3; however, both suffer from W2. In Section 3 and the examples in Section 4, we have seen that, under a worst-case scenario of interdependence, both VaR and ES yield similar values. Backtesting VaR is fairly straightforward (hit-and-miss tests), whereas for ES, one has to assume an underlying model; for an EVT-based approach, see, for instance, (McNeil et al. [14], p. 168). Below, we will make the latter statement scientifically more precise.
If one, for instance, needs to compare different backtesting procedures for the same risk measure, the situation for ES as compared to VaR is less favorable. An important notion here is elicitability; see Gneiting [52]. Statistical forecasts, in our case risk measures like VaR and ES, are functionals of the underlying data: they map a data vector to a real number or, in some cases, an interval. A (statistical) functional is called elicitable if it can be defined as the minimizer of a suitable scoring function. The scoring functions are then used to compare competing forecasts through their average scores calculated from point forecasts and realized observations. In Gneiting [52], it is shown that, in general, VaR is elicitable, whereas ES is not. To this observation, the author adds the following statement: “The negative result [for ES] may challenge the use of the CVaR [ES] functional as a predictive measure of risk, and may provide a partial explanation for the lack of literature on the evaluation of CVaR forecasts, as opposed to quantile or VaR forecasts.” Recently, considerable progress has been made concerning the embedding of the statistical theory of elicitability within the mathematical theory of risk measures; see, for instance, Ziegel [53]. An interesting question that emerged early on from this research was: “Do there exist (non-trivial) coherent (i.e., subadditive) risk measures that are elicitable?” A positive answer is to be found in Ziegel [53]: the τ-expectiles. The τ-expectile, $0 < \tau < 1$, of an rv X with $E[X^2] < \infty$, is defined as:
$e_\tau(X) = \operatorname*{arg\,min}_{x \in \mathbb{R}} E\left[\tau \max(X - x, 0)^2 + (1 - \tau) \max(x - X, 0)^2\right]$
For an rv X with $E[|X|] < \infty$, the τ-expectile is the unique solution x of the equation:

$\tau E[\max(X - x, 0)] = (1 - \tau) E[\max(x - X, 0)]$
In particular, $e_{1/2}(X) = E[X]$. One can show that, for $0 < \tau < 1$, $e_\tau$ is elicitable; for $1/2 \leq \tau < 1$, $e_\tau$ is subadditive, whereas for $0 < \tau \leq 1/2$, $e_\tau$ is superadditive. Moreover, $e_\tau$ is not comonotone additive. In Bellini and Bignozzi [54], it is shown that, under a slight modification of elicitability, the only elicitable and coherent risk measures are the expectiles. We are not advocating $e_\tau$ as the risk measure to use, but mainly want to show the kind of research that is triggered by parts of Basel 3.5. For more information, see, for instance, Bellini et al. [55] and Delbaen [56]. Early contributions are to be found in Rémillard [57] (Section 4.4.4.1). We mention the above publications because, on p. 60 of BCBS [3], it is stated that “Spectral risk measures are a promising generalization of ES that is cited in the literature.” As mentioned above, it is shown that non-trivial law-invariant spectral risk measures, such as ES, are not elicitable. As a consequence, and this is by the definition of elicitability, “objective comparison and backtesting of competing estimation procedures for spectral risk measures is difficult, if not impossible, in a decision theoretically sound manner”; see the Discussion section in Ziegel [53]. The latter paper also describes a possible approach to ES-prediction. Another recent paper, Emmer et al. [58], discusses the backtesting issues of popular risk measures and presents a discretized backtesting procedure for ES. Clearly, proper backtesting of spectral risk measures needs more research.
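To make the expectile definition above tangible, a small sketch of ours computes $e_\tau$ for an empirical sample by solving the first-order condition with a standard root finder; the Pareto sample and the levels τ are arbitrary illustrative choices.

```python
import numpy as np
from scipy.optimize import brentq

def expectile(sample, tau):
    """tau-expectile of a sample: the unique root x of
    tau * mean(max(X - x, 0)) - (1 - tau) * mean(max(x - X, 0)) = 0."""
    x = np.asarray(sample, dtype=float)
    def foc(t):
        return tau * np.mean(np.maximum(x - t, 0.0)) - (1.0 - tau) * np.mean(np.maximum(t - x, 0.0))
    return brentq(foc, x.min(), x.max())   # foc changes sign between the sample minimum and maximum

rng = np.random.default_rng(0)
losses = rng.pareto(3.0, size=100_000)           # heavy-tailed toy losses with finite variance
print(expectile(losses, 0.5), losses.mean())     # e_{1/2} coincides with the mean
print(expectile(losses, 0.99))                   # a high-level expectile
```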
A different way of looking at the relative merits of VaR and ES as measures of financial risk is presented in Davis [59]. In the latter paper, the author uses the notion of prequential statistics and concludes: “The point [⋯] is that significant conditions must be imposed to secure the consistency of mean-type estimates [ES], in contrast to the situation for quantile estimates [VaR] (Theorem 5.2), where almost no conditions are imposed. [⋯] This seems to indicate—in line with the elicitability conclusions—that verifying the validity of mean-based estimates is essentially more problematic than the same problem for quantile-based statistics.” To what extent these conclusions tip the decision from an ES-based capital charge back to a VaR-based one is, at the moment, not yet clear.
Whereas the picture concerning backtesting across risk measures needs further discussion, the situation becomes even more blurred when a notion of robustness is added. In its broadest sense, robustness has to do with (in)sensitivity to underlying model deviations and/or data changes. Furthermore, here, a whole new field of research is opening up; at the moment, it is difficult to point to the right approach. The quotes and the references below give the interested reader some insight into the underlying issues and different approaches. The spectrum goes from a pure statistical one, like in Huber and Ronchetti [60], to a more economics decision making one, like Hansen and Sargent [61]. In the former text, robustness mainly concerns so-called distributional robustness: what are the consequences when the shape of the actual underlying distribution deviates slightly from the assumed model? In the latter text, the emphasis lies more on robust control, in particular, how should agents cope with fear of model misspecification, and goes back to earlier work in statistics, mainly Whittle [62] (the first edition appeared already in 1963). The authors of Hansen and Sargent [61] provide the following advice: “If Peter Whittle wrote it, read it.” Finally, an area of research that also uses the term robustness and is highly relevant in the context of Section 3 is the field of Robust Optimization as, for instance, summarized in Ben-Tal et al. [63].
The main point of the comments above is that “there is more to robustness than meets the eye”. In many ways, in some form or another, robustness lies at the core of financial and insurance risk management. Below, we gathered some quotes on the topic, which readers may find interesting for follow-up; we briefly add a comment when relevant in light of our paper as presented so far.
  • Quote 1 (from Stahl [64]): “Use stress testing based on mixture models [⋯] contamination.” In practice, one often uses so-called contamination; this amounts to model constructions of the type $(1 - \epsilon)F + \epsilon G$ with $0 < \epsilon < 1$ and ϵ typically small. In this case, the df F corresponds to “normal” behavior, whereas G corresponds to a stress component. In Stahl [64], this approach is also championed and embedded in a broader Bayesian ansatz.
  • Quote 2 (from Cont et al. [65]) “Our results illustrate, in particular, that using recently proposed risk measures, such as CVaR/Expected Shortfall, leads to a less robust risk measurement procedure than Value-at-Risk.” The authors showed that, in general, quantile estimators are robust with respect to the weak topology, and coherent distortion estimators are not robust in the same sense; this is consistent with Hampel’s notion of robustness for L-statistics, as discussed in Huber and Ronchetti [60].
  • Quote 3 (from Kou et al. [12]): “Coherent risk measures are not robust”. The authors showed that VaR is more robust compared to ES, and this was with respect to a small change in the data by using tools, such as influence functions and breakdown points; see, also, Kou and Peng [51] for similar results with respect to Hampel’s notion of robustness. The authors of [51] champion Median Shortfall, which is defined as the median of the alpha-tail distribution and is equal to a VaR at a higher confidence level.
  • Quote 4 (from Cambou and Filipović [66]) “ES is robust, and VaR is non-robust based on the notion of ϕ -divergence.”
  • Quote 5 (from Krätschmer et al. [67]) “We argue here that Hampel’s classical notion of quantitative robustness is not suitable for risk measurement, and we propose and analyse a refined notion of robustness that applies to tail-dependent law-invariant convex risk measures on Orlicz spaces.” These authors introduce an index of quantitative robustness. As a consequence, and this is somewhat in contrast to Quotes 2 and 3: “This new look at robustness will then help us to bring the argument against coherent risk measures back into perspective: robustness is not lost entirely, but only to some degree when VaR is replaced by a coherent risk measure, such as ES.”
  • Quote 6 (from Emmer et al. [58]) “With respect to the weak topology, most of the common risk measures are discontinuous. Therefore [⋯] in risk management, one usually considers robustness as continuity with respect to the Wasserstein distance [⋯]” “[⋯] mean, VaR, and Expected Shortfalls are continuous with respect to the Wasserstein distance.”
  • Quote 7 (from BCBS [4]): “This confidence level [97.5th ES] will provide a broadly similar level of risk capture as the existing 99th percentile VaR threshold, while providing a number of benefits, including generally more stable model output and often less sensitivity to extreme outlier observations.”
  • Quote 8 (from Embrechts et al. [39]) “With respect to dependence uncertainty in aggregation, VaR is less robust compared to ES.” The authors introduce a notion of aggregation-robustness, under which ES and other spectral risk measures are robust. They also show that VaR generally exhibits a larger dependence-uncertainty spread compared to ES.
The above quotes hopefully bring the robustness in “robust backtesting” somewhat into perspective. More discussions with regulators are needed in order to understand what precisely is intended when formulating this aspect of future regulation. As we already stressed, the multi-faceted notion of robustness must be key to any financial business and, consequently, regulation. In its broadest interpretation as “resilience against or awareness of model and data sensitivity”, this ought to be clear to all involved. How to make this awareness more tangible is a key task going forward.

6. Conclusions

The recent financial crises have shown how unreliably some quantitative tools perform in stormy markets. Through its Basel 3.5 documents, BCBS [3] and BCBS [8], the Basel Committee has opened up the discussion on how to make the international banking world a safer place for all involved. Admittedly, our contribution to the choice of risk measure for the potential supervision of market risk is a minor one and only touches upon a small aspect of the above regulatory documents. We do, however, hope that some of the methodology, examples and research reviews presented will contribute to a better understanding of the issues at hand.
On some of the issues, our views are clear, like: “In the finite-mean case, ES is a superior risk measure to VaR in the sense of aggregation and of answering the crucial what-if question”. The debate on the lack of proper aggregation has been ongoing within academia since VaR was introduced around 1994: in several practically relevant cases, VaR adds up wrongly, whereas ES always adds up correctly (subadditivity); a small numerical illustration follows this paragraph. More importantly, thinking in ES-terms makes risk managers concentrate more on the “what-if” question, whereas VaR thinking is concerned only with the “if” question. In the infinite-mean case, ES cannot be used, whereas VaR remains well defined. Within (mainly environmental) economics, an interesting debate on risk management in the presence of infinite-mean risks is taking place; the key terminology here is the “Dismal Theorem”; see, for instance, Weitzman [68]. Moreover, in the finite-mean case, our results show that, quite generally, the conservative estimates provided by ES are not overly pessimistic when compared to the corresponding VaR estimates under worst-case dependence scenarios.
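The aggregation point can be illustrated with a standard textbook-type example; the portfolio below is an illustrative assumption, not taken from the paper. Consider two independent loan-type losses of 100, each with a 4% loss probability. At the 95% level the stand-alone VaRs are both zero, yet the portfolio VaR is 100, so VaR fails subadditivity; ES remains subadditive (roughly 160 stand-alone versus about 103 for the portfolio). A minimal Monte Carlo sketch:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
alpha, n = 0.95, 1_000_000

def var_es(losses, alpha):
    """Empirical VaR (alpha-quantile) and ES (average of the worst (1 - alpha) share)."""
    srt = np.sort(losses)
    k = int(np.floor(alpha * len(srt)))
    return srt[k], srt[k:].mean()

# Two independent loan-type losses: 100 with probability 0.04, else 0 (illustrative portfolio)
x1 = 100.0 * (rng.random(n) < 0.04)
x2 = 100.0 * (rng.random(n) < 0.04)

v1, e1 = var_es(x1, alpha)
v2, e2 = var_es(x2, alpha)
vs, es = var_es(x1 + x2, alpha)

print(f"VaR_95%: sum of stand-alone = {v1 + v2:6.1f}   portfolio = {vs:6.1f}")  # ~0 < ~100
print(f"ES_95% : sum of stand-alone = {e1 + e2:6.1f}   portfolio = {es:6.1f}")  # ~160 > ~103
```

Note that the ES estimator averages the worst (1 − α) share of the sorted sample; a naive “mean beyond VaR” estimator would be biased here because of the atom of the loss distribution at the quantile.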
Both, however, remain statistical quantities, the estimation of which is marred by model risk and data scarcity. Frequency rather than severity thinking is also very much embedded in other fields of finance; think, for instance, of the calibrations used by rating agencies for securitization products (recall the (in)famous CDO senior equity tranches) or for companies (transition and default probabilities). Backtesting models to data remains a crucial aspect throughout finance; elicitability and prequential forecasting add new aspects to this discussion (see the sketch below). Robustness remains, for the moment at least, somewhat elusive. Our brief review of some of the recent work motivated by Basel 3.5 will hopefully entice more academic, as well as practical, research and discussion on these very important themes.
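As a pointer to how elicitability enters backtesting, the following sketch scores two competing VaR_99% forecasters with the standard quantile (pinball) scoring function, whose expectation is minimized by the true quantile. The Student-t data-generating model and the two constant forecasters are illustrative assumptions only, not a prescription from the paper or from any regulatory text.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2)
alpha = 0.99

def quantile_score(forecast, realized, alpha):
    """Pinball loss: its expectation is minimized when `forecast` is the true alpha-quantile."""
    return ((realized <= forecast).astype(float) - alpha) * (forecast - realized)

# Realized losses from a Student-t model (heavier tails than the normal distribution)
losses = rng.standard_t(df=4, size=100_000)

# Two competing constant VaR_99% forecasters
forecasters = {
    "normal-based": stats.norm.ppf(alpha),     # ignores the heavy tails
    "t-based":      stats.t.ppf(alpha, df=4),  # matches the data-generating model
}

for name, fc in forecasters.items():
    mean_score = quantile_score(fc, losses, alpha).mean()
    print(f"{name:12s}  VaR forecast = {fc:5.2f}   mean pinball score = {mean_score:.4f}")
```

The forecaster with the lower average score is preferred; with α close to one, the score differences are driven by few observations, so large samples (or formal comparative tests) are needed before drawing conclusions.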
The interested reader is advised to consult several of the references mentioned in the paper; the questions underlying Basel 3.5 have already led to interesting discussions (not to say controversies) among academics and practitioners. Diverging views exist, especially on the importance of the subadditivity axiom and on the interpretation of robustness.

Acknowledgement

The authors would like to thank the Editor and three referees for various useful comments. The authors would also like to thank RiskLab and the Forschungsinstitut für Mathematik (FIM) at the Department of Mathematics of ETH Zurich for their hospitality and financial support while carrying out research related to this paper. Paul Embrechts acknowledges financial support by the Swiss Finance Institute. Giovanni Puccetti acknowledges a grant under the call PRIN 2010–2011 from MIUR within the project “Robust decision making in markets and organisations”. Ruodu Wang acknowledges financial support from the FIM and the Natural Sciences and Engineering Research Council of Canada (NSERC) during his visit to ETH Zurich.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. J. Daníelsson, P. Embrechts, C. Goodhart, C. Keating, F. Muennich, O. Renault, and H.S. Shin. “An academic response to Basel II.” 2001, London School of Economics Financial Markets Group, Special Paper 130. [Google Scholar]
  2. H.S. Shin. Risk and Liquidity. Oxford, UK: Clarendon Press, 2010. [Google Scholar]
  3. BCBS. Consultative Document May 2012. Fundamental Review of the Trading Book. Basel, Switzerland: Basel Committee on Banking Supervision, Bank for International Settlements, 2012. [Google Scholar]
  4. BCBS. Consultative Document October 2013. Fundamental Review of the Trading Book: A Revised Market Risk Framework. Basel, Switzerland: Basel Committee on Banking Supervision, Bank for International Settlements, 2013. [Google Scholar]
  5. P. Jorion. Value at Risk: The New Benchmark for Managing Financial Risk, 3rd ed. New York, NY, USA: McGraw-Hill, 2006. [Google Scholar]
  6. “JPMorgan Chase Whale trades: A Case History of Derivatives Risks and Abuses.” Report; 15 March 2013, Washington, DC, USA: U.S. Senate Committee on Homeland Security & Governmental Affairs.
  7. “Changing Banking for Good. Volumes I and II.” Report; London, UK: The Parliamentary Commission on Banking Standards, 12 June 2013.
  8. BCBS. Regulatory Consistency Assessment Programme (RCAP)–Analysis of Risk-Weighted Assets for Market Risk. Basel, Switzerland: Basel Committee on Banking Supervision, Bank for International Settlements, 2013. [Google Scholar]
  9. “The asset-quality review. Gentlemen, start your audits.” Economist October 5–11 (2013): 64.
  10. P. Artzner, F. Delbaen, J.-M. Eber, and D. Heath. “Coherent measures of risk.” Math. Finance 9 (1999): 203–228. [Google Scholar] [CrossRef]
  11. J. Dhaene, R.J.A. Laeven, S. Vanduffel, G. Darkiewicz, and M.J. Goovaerts. “Can a coherent risk measure be too subadditive? ” J. Risk Insur. 75 (2008): 365–386. [Google Scholar] [CrossRef]
  12. S. Kou, X. Peng, and C.C. Heyde. “External risk measures and Basel accords.” Math. Oper. Res. 38 (2013): 393–417. [Google Scholar] [CrossRef]
  13. H. Rootzén, and C. Klüppelberg. “A single number can’t hedge against economic catastrophes.” Ambio 28 (1999): 550–555. [Google Scholar]
  14. A.J. McNeil, R. Frey, and P. Embrechts. Quantitative Risk Management: Concepts, Techniques, Tools. Princeton, NJ, USA: Princeton University Press, 2005. [Google Scholar]
  15. P. Embrechts, G. Puccetti, and L. Rüschendorf. “Model uncertainty and VaR aggregation.” J. Bank. Financ. 37 (2013): 2750–2764. [Google Scholar] [CrossRef]
  16. C. Acerbi, and D. Tasche. “On the coherence of expected shortfall.” J. Bank. Financ. 26 (2002): 1487–1503. [Google Scholar] [CrossRef]
  17. S. Asmussen, and P.W. Glynn. Stochastic Simulation: Algorithms and Analysis. New York, NY, USA: Springer, 2007, Volume 57. [Google Scholar]
  18. P. Arbenz, P. Embrechts, and G. Puccetti. “The AEP algorithm for the fast computation of the distribution of the sum of dependent random variables.” Bernoulli 17 (2011): 562–591. [Google Scholar] [CrossRef]
  19. P. Arbenz, P. Embrechts, and G. Puccetti. “The GAEP algorithm for the fast computation of the distribution of a function of dependent random variables.” Stochastics 84 (2012): 569–597. [Google Scholar] [CrossRef]
  20. P. Embrechts, A.J. McNeil, and D. Straumann. “Correlation and Dependence in Risk Management: Properties and Pitfalls.” In Risk Management: Value at Risk and Beyond. Cambridge, UK: Cambridge University Press, 2002, pp. 176–223. [Google Scholar]
  21. P. Barrieu, and G. Scandolo. Assessing Financial Model Risk. London, UK: LSE, 2013, preprint. [Google Scholar]
  22. V. Bignozzi, and A. Tsanakas. Model Uncertainty in Risk Capital Measurement. Zurich, Switzerland: ETH Zurich, 2013, preprint. [Google Scholar]
  23. G.D. Makarov. “Estimates for the distribution function of the sum of two random variables with given marginal distributions.” Theory Probab. Appl. 26 (1981): 803–806. [Google Scholar] [CrossRef]
  24. L. Rüschendorf. “Random variables with maximum sums.” Adv. Appl. Probab. 14 (1982): 623–632. [Google Scholar] [CrossRef]
  25. C. Bernard, X. Jiang, and R. Wang. “Risk aggregation with dependence uncertainty.” Insurance Math. Econ. 54 (2014): 93–108. [Google Scholar] [CrossRef]
  26. L. Rüschendorf. Mathematical Risk Analysis. Dependence, Risk Bounds, Optimal Allocations and Portfolios. Heidelberg, Germany: Springer, 2013. [Google Scholar]
  27. R. Wang, L. Peng, and J. Yang. “Bounds for the sum of dependent risks and worst Value-at-Risk with monotone marginal densities.” Finance Stoch. 17 (2013): 395–417. [Google Scholar] [CrossRef]
  28. G. Puccetti, and L. Rüschendorf. “Sharp bounds for sums of dependent risks.” J. Appl. Probab. 50 (2013): 42–53. [Google Scholar] [CrossRef]
  29. B. Wang, and R. Wang. “The complete mixability and convex minimization problems with monotone marginal densities.” J. Multivar. Anal. 102 (2011): 1344–1360. [Google Scholar] [CrossRef]
  30. G. Puccetti, B. Wang, and R. Wang. “Advances in complete mixability.” J. Appl. Probab. 49 (2012): 430–440. [Google Scholar] [CrossRef]
  31. L. Rüschendorf, and L. Uckelmann. “Variance Minimization and Random Variables with Constant Sum.” In Distributions with Given Marginals and Statistical Modelling. Dordrecht, The Netherlands: Kluwer Academic Publishers, 2002, pp. 211–222. [Google Scholar]
  32. C. Bernard, L. Rüschendorf, and S. Vanduffel. Value-at-Risk Bounds with Variance Constraints. Freiburg, Germany: University of Freiburg, 2013, preprint. [Google Scholar]
  33. G. Puccetti, and L. Rüschendorf. “Computation of sharp bounds on the distribution of a function of dependent risks.” J. Comput. Appl. Math. 236 (2012): 1833–1840. [Google Scholar] [CrossRef]
  34. G. Puccetti, and L. Rüschendorf. “Asymptotic equivalence of conservative value-at-risk- and expected shortfall-based capital charges.” J. Risk 16 (2014): 1–19. [Google Scholar]
  35. P. Embrechts, and G. Puccetti. “Bounds for functions of dependent risks.” Finance Stoch. 10 (2006): 341–352. [Google Scholar] [CrossRef]
  36. G. Puccetti, B. Wang, and R. Wang. “Complete mixability and asymptotic equivalence of worst-possible VaR and ES estimates.” Insurance Math. Econ. 53 (2013): 821–828. [Google Scholar] [CrossRef]
  37. R. Wang. “Asymptotic bounds for the distribution of the sum of dependent random variables.” J. Appl. Probab., 2014. to appear. [Google Scholar] [CrossRef]
  38. B. Wang, and R. Wang. Extreme Negative Dependence and Risk Aggregation. Waterloo, ON, Canada: University of Waterloo, 2013, preprint. [Google Scholar]
  39. P. Embrechts, B. Wang, and R. Wang. Aggregation-robustness and model uncertainty of regulatory risk measures. Zurich, Switzerland: ETH Zurich, 2014, preprint. [Google Scholar]
  40. G. Puccetti, and L. Rüschendorf. “Bounds for joint portfolios of dependent risks.” Stat. Risk Model. 29 (2012): 107–132. [Google Scholar] [CrossRef]
  41. R. Wang, V. Bignozzi, and A. Tsanakas. How Superadditive Can a Risk Measure Be? Rochester, NY, USA: SSRN, 2014, preprint. Available online: http://ssrn.com/abstract=2373149 (accessed on 4 February 2014).
  42. J. Nešlehová, P. Embrechts, and V. Chavez-Demoulin. “Infinite-mean models and the LDA for operational risk.” J. Oper. Risk 1 (2006): 3–25. [Google Scholar]
  43. F. Delbaen. “Risk measures for non-integrable random variables.” Math. Finance 19 (2009): 329–333. [Google Scholar] [CrossRef]
  44. G. Mainik, and L. Rüschendorf. “Ordering of multivariate risk models with respect to extreme portfolio losses.” Stat. Risk Model. 29 (2012): 73–105. [Google Scholar] [CrossRef]
  45. G. Mainik, and P. Embrechts. “Diversification in heavy-tailed portfolios: Properties and pitfalls.” Ann. Actuar. Sci. 7 (2013): 26–45. [Google Scholar] [CrossRef]
  46. P. Embrechts, C. Klüppelberg, and T. Mikosch. Modelling Extremal Events for Insurance and Finance. Berlin, Germany: Springer, 1997. [Google Scholar]
  47. M. Moscadelli. “The modelling of operational risk: Experience with the analysis of the data collected by the Basel Committee.” Rome, Italy: Temi di discussione, Banca d’Italia, 2004. [Google Scholar]
  48. E. Gourier, W. Farkas, and D. Abbate. “Operational risk quantification using extreme value theory and copulas: From theory to practice.” J. Oper. Risk 4 (2009): 3–26. [Google Scholar]
  49. Financial Times. “UBS ordered to increase capital reserves.” Financ. Times October 29 (2013). [Google Scholar]
  50. K. Aas, and G. Puccetti. Bounds for Total Economic Capital: The DNB Case Study. Firenze, Italy: University of Firenze, 2013, preprint. [Google Scholar]
  51. S. Kou, and X. Peng. On the Measurement of Economic Tail Risk. Hong Kong, China: Hong Kong University of Science and Technology, 2014, preprint. [Google Scholar]
  52. T. Gneiting. “Making and evaluating point forecasts.” J. Am. Stat. Assoc. 106 (2011): 746–762. [Google Scholar] [CrossRef]
  53. J. Ziegel. “Coherence and elicitability.” Math. Finance, 2014. to appear. [Google Scholar] [CrossRef]
  54. F. Bellini, and V. Bignozzi. Elicitable Risk Measures. Zurich, Switzerland: ETH Zurich, 2013, preprint. [Google Scholar]
  55. F. Bellini, B. Klar, A. Müller, and E.R. Gianin. “Generalized quantiles as risk measures.” Insurance Math. Econ. 54 (2014): 41–48. [Google Scholar] [CrossRef]
  56. F. Delbaen. A Remark on the Structure of Expectiles. Zurich, Switzerland: ETH Zurich, 2013, preprint. [Google Scholar]
  57. B. Rémillard. Statistical Methods for Financial Engineering. Boca Raton, FL, USA: CRC Press, 2013. [Google Scholar]
  58. S. Emmer, M. Kratz, and D. Tasche. What is the Best Risk Measure in Practice? A Comparison of Standard Measures. Paris, France: ESSEC Business School, 2014, preprint. [Google Scholar]
  59. M.H.A. Davis. Consistency of Risk Measures Estimates. London, UK: Imperial College London, 2013, preprint. [Google Scholar]
  60. P.J. Huber, and E.M. Ronchetti. Robust Statistics, 2nd ed. Wiley Series in Probability and Statistics; Hoboken, NJ, USA: John Wiley & Sons, 2009. [Google Scholar]
  61. L.P. Hansen, and T.J. Sargent. Robustness. Princeton, NJ, USA: Princeton University Press, 2007. [Google Scholar]
  62. P. Whittle. Prediction and Regulation by Linear Least-square Methods, 2nd ed. Minneapolis, MN, USA: University of Minnesota Press, 1983. [Google Scholar]
  63. A. Ben-Tal, L. El Ghaoui, and A. Nemirovski. Robust Optimization. Princeton, NJ, USA: Princeton University Press, 2009. [Google Scholar]
  64. G. Stahl. “Robustifying MCEV.” Hannover, Germany: Talanx AG, KQR, 2009, Version 1.0. [Google Scholar]
  65. R. Cont, R. Deguest, and G. Scandolo. “Robustness and sensitivity analysis of risk measurement procedures.” Quant. Finance 10 (2010): 593–606. [Google Scholar] [CrossRef]
  66. M. Cambou, and D. Filipović. Scenario Aggregation for Solvency Regulation. Lausanne, Switzerland: EPF Lausanne, 2013, preprint. [Google Scholar]
  67. V. Krätschmer, A. Schied, and H. Zähle. “Comparative and quantitative robustness for law-invariant risk measures.” Finance Stoch., 2014. to appear. [Google Scholar]
  68. M.L. Weitzman. “On modeling and interpreting the economics of catastrophic climate change.” Rev. Econ. Stat. 91 (2009): 1–19. [Google Scholar] [CrossRef]
