### *4.1. Derivation of SBBS Yields*

Since sovereign bond-backed securities did not exist in the past, we rely on estimates of yields on such instruments based on a simulation approach proposed by Schönbucher (2003). This approach has already been implemented for the SBBS proposal by De Sola Perea et al. (2017) and the ESRB High-Level Task Force on Safe Assets (2018). A motivating principle of the Schönbucher (2003) method is that it retains the properties of the underlying relationship between yields and the default probabilities they imply, including changing correlations and dynamic dependencies. The estimated SBBS yields are therefore not simply a linear combination of the underlying securities (those used as backing for the securitisation); if they were, the linear relation would exactly determine the correlations on which the hedge selection relies. Instead, the Schönbucher approach preserves both the variable default probabilities of the underlying securities and their time-varying interdependencies.

The simulations that produce the SBBS yields are conducted on a daily basis over the period from soon after the introduction of the euro to the end of 2017. The simulations involve draws from 11 of the main sovereign bond yield-spreads (according to the report of the ESRB Task Force on Safe Assets, this group of sovereigns covers approximately 97.5% of outstanding sovereign debt in the euro area). The period covered contains considerable variability in circumstances, including: the pre-2008 Great Moderation, the financial crisis in the wake of the Lehman Brothers default, the euro area sovereign debt crisis of 2009–2012 and the subsequent gradual improvement in euro area sovereign bond markets, particularly those of peripheral member states. The latter part of the sample also overlaps with the implementation of unconventional monetary policy in the form of large-scale bond purchases, when these markets became less liquid and harder to hedge. Figure 1 provides a view of the data on the individual sovereign yield changes that were used in the SBBS yield simulation analysis. The negative of yield changes multiplied by 100 is shown. In each case, the conditional volatility and 1% Value-at-Risk are also provided (the volatility is estimated using a GJR-GARCH(1,1) model and the latter is implied by the conditional volatility under normality). Note that the scale of the *y*-axis is not constant across the panels of this figure.

**Figure 1.** Daily Yield-Change Data 11 Sovereigns (bps). This panel of figures displays the daily data for the individual sovereigns that were used in the simulation analysis that generated the Sovereign Bond-Backed Security yields. Each panel contains data for two sovereigns. The negative of the daily yield changes multiplied by 100 is displayed as dots. In each case, the conditional volatility and 1% Value-at-Risk (VaR) are provided (the conditional volatility is estimated using a GJR-GARCH(1,1) model and the VaR is implied by the conditional volatility combined with an assumption of normality). Note that the scale of the *y*-axis is not constant across the panels of this figure.
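The mapping from conditional volatility to the 1% VaR shown in Figure 1 can be sketched as follows. This is a minimal illustration of a GJR-GARCH(1,1) variance recursion combined with a normality assumption; the parameter values are illustrative placeholders, not estimates from the paper.

```python
import numpy as np
from scipy.stats import norm

def gjr_garch_var(returns, omega, alpha, gamma, beta, level=0.01):
    """Filter a conditional variance with a GJR-GARCH(1,1) recursion and map
    it to a Value-at-Risk bound under normality. Parameters are illustrative,
    not the estimates used in the paper."""
    n = len(returns)
    sigma2 = np.empty(n)
    sigma2[0] = np.var(returns)              # initialise at sample variance
    for t in range(1, n):
        shock = returns[t - 1] ** 2
        leverage = gamma * shock * (returns[t - 1] < 0)  # asymmetry term
        sigma2[t] = omega + alpha * shock + leverage + beta * sigma2[t - 1]
    vol = np.sqrt(sigma2)
    var_bound = norm.ppf(level) * vol        # e.g. about -2.326 * sigma at 1%
    return vol, var_bound
```

The leverage term is what distinguishes GJR-GARCH from plain GARCH: negative yield-change surprises raise next-day variance by more than positive ones of equal size.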

More precisely, the SBBS yield estimation method relies on a simulated default-triggering mechanism interacting with an observable market-based proxy for default probability in the underlying securities (here, the proxy is simply the *yield premium* in excess of the lowest yield among the sovereigns). The triggering device generates uniformly distributed triggers on the unit interval (where all trigger combinations have cross-correlations of 0.6). Whenever these simulated unit-interval triggers exceed the non-default (survival) probability, i.e., 1 minus the *yield premium*, losses are calculated as though defaults have occurred.
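A minimal sketch of this triggering device, assuming an equicorrelated Gaussian copula generates the correlated uniform triggers (the function name and simulation sizes are illustrative, not from the paper):

```python
import numpy as np
from scipy.stats import norm

def simulate_defaults(yield_premiums, rho=0.6, n_sims=10000, seed=0):
    """Draw uniform default triggers with pairwise cross-correlation rho via
    a Gaussian copula; a sovereign 'defaults' in a simulation whenever its
    trigger exceeds its survival probability (1 minus its yield premium).
    Illustrative sketch, not the paper's calibrated implementation."""
    rng = np.random.default_rng(seed)
    k = len(yield_premiums)
    corr = np.full((k, k), rho)              # equicorrelation matrix
    np.fill_diagonal(corr, 1.0)
    z = rng.multivariate_normal(np.zeros(k), corr, size=n_sims)
    triggers = norm.cdf(z)                   # uniform marginals on [0, 1]
    survival = 1.0 - np.asarray(yield_premiums)
    return triggers > survival               # boolean default indicators
```

By construction, each sovereign's simulated default frequency matches its yield premium, while the copula correlation governs how often defaults cluster across sovereigns.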

Each simulation produces simulated default losses among the underlying bonds. These are summed and allocated sequentially to the SBBS securities according to their level of subordination (only spilling over to a more senior tranche if simulated losses have exceeded the total par value of all subordinates). The sum of the yield premiums of the national bonds, for each simulated day, is then allocated to the yield premiums of SBBS according to their proportional allocation of simulated default losses. Hence, the likelihood of triggering a simulated default is determined by the size of yield premiums and by how correlated the triggers are. However, risk aversion also has some role in determining the premium.<sup>9</sup>
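The sequential allocation by subordination level can be sketched as a simple loss waterfall; the par values shown in the docstring correspond to the 70:20:10 structure discussed later, and the function name is illustrative.

```python
def allocate_losses(total_loss, tranche_par):
    """Allocate a simulated default loss sequentially from the most
    subordinate tranche upward; a more senior tranche absorbs losses only
    once the par value of all tranches below it is exhausted.
    tranche_par is ordered junior-first, e.g. [10, 20, 70] for 70:20:10."""
    losses = []
    remaining = total_loss
    for par in tranche_par:                  # junior absorbs first
        hit = min(remaining, par)
        losses.append(hit)
        remaining -= hit
    return losses
```

For example, a simulated loss of 25 against a [10, 20, 70] structure wipes out the junior tranche, takes 15 from the mezzanine, and leaves the senior untouched.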

<sup>9</sup> The implied risk premium (i.e., yield above the risk-free rate) reflects the risk aversion of the representative market investor on any given day and, hence, may exceed the expected loss anticipated by a risk-neutral investor. This degree of risk aversion enters the simulation and is consequently also reflected consistently in the resulting estimated yields of senior, mezzanine and junior SBBS.

In this way, the probable daily yields on the SBBS components are generated for two different securitisation structures over roughly a 17-year historical period without the need for a structural modelling of the complex dependencies among the underlying sovereigns (e.g., as in Lucas et al. (2017)). This then enables estimation of optimal hedge ratios, hedge effectiveness measures and assessments of the diversification benefits. For reasons of data availability, the simulation is based on yield data for two-, five- and ten-year government bonds of Austria, Belgium, Germany, Spain, Finland, France, Greece, Ireland, Italy, the Netherlands and Portugal, following a weighting scheme based on GDP (averaged over 2006–2015). This basket covers approximately 97.5% of the SBBS volume. As a robustness check, the SBBS yield estimations are re-done using a t-copula instead of the Gaussian copula.
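The t-copula robustness variant can be sketched by replacing the Gaussian trigger draws with Student-t ones, which generate fatter joint tails (more simultaneous extreme triggers) for the same correlation. The correlation, degrees of freedom and dimension below are illustrative choices, not the paper's calibration.

```python
import numpy as np
from scipy.stats import t as student_t

def t_copula_triggers(rho=0.6, df=4, k=11, n_sims=10000, seed=0):
    """Uniform triggers from a Student-t copula, built from a correlated
    normal vector divided by an independent chi-square mixing variable.
    Illustrative robustness variant of the Gaussian-copula triggers."""
    rng = np.random.default_rng(seed)
    corr = np.full((k, k), rho)
    np.fill_diagonal(corr, 1.0)
    z = rng.multivariate_normal(np.zeros(k), corr, size=n_sims)
    chi = rng.chisquare(df, size=(n_sims, 1))
    t_draws = z / np.sqrt(chi / df)          # multivariate t via normal/chi2
    return student_t.cdf(t_draws, df)        # uniform marginals on [0, 1]
```

The shared chi-square divisor per simulation is what induces tail dependence: a small draw inflates all components at once.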

Panels A and B of Figure 2, respectively, depict the time series behaviour of yields on SBBS securities (the derivation of which is discussed above) under two alternative tranching assumptions (70:30 and 70:20:10) while panel C shows yields of a selection of individual sovereigns. The period of the European sovereign debt crisis is highlighted and extends from November 2009, when the Greek government indicated its 2009 deficit projection was being revised upward from 5% to 12.7%, until just after Mario Draghi's speech making a commitment to 'do whatever it takes' to prevent the break-up of the euro in late July 2012. All of the 10-year yield data used in the hedge selection and assessment analysis discussed in the results section has been converted to price and then daily holding period returns assuming a duration of nine years.
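The yield-to-return conversion in the last step can be sketched with the standard duration approximation, return ≈ −D·Δy, with D = 9 as stated in the text (the function name is illustrative):

```python
import numpy as np

def holding_period_returns(yields, duration=9.0):
    """Convert a series of 10-year yields (decimal form) into daily
    holding-period returns via the duration approximation r_t = -D * dy_t,
    with a duration of nine years as assumed in the text."""
    dy = np.diff(np.asarray(yields))
    return -duration * dy
```

So a one-basis-point fall in yield maps to roughly a +0.09% daily return on the bond.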


**Figure 2.** Estimated yields on SBBS tranches & selected sovereigns (%). Note: This panel of figures indicates how SBBS yields are likely to have changed over the sample period. These can be compared with a selection of sovereign yields. The shaded area is the euro area Sovereign Debt Crisis period (November 2009–August 2012).

### *4.2. Methodology for Optimal Hedge Selection*

There is a well-developed literature dealing with the selection of optimal hedge ratios. Chen et al. (2003) and Lien and Tse (2011) provide extensive reviews of different theoretical approaches to determining optimal hedge ratios. A prominent approach to optimising a hedge position is based on minimising the variance of the returns on the hedged portfolio without regard to expected returns. This is a particularly suitable approach in the case of hedging to facilitate market making in sovereigns, since the objective is to minimise risk for the reward of earning the bid–ask spread (less costs) rather than to improve returns from the underlying investment.

In the single hedge case, the optimal hedge ratio (see Ederington 1979 and Baillie and Myers 1991) is simply the negative of the slope coefficient in a regression of the asset return on the hedge instrument return. Composite hedging has sometimes been found to be more effective than relying on a single hedge instrument. This has been claimed in the extant literature for the case of hedging positions in corporate bonds using a combination of the relevant sovereign bond and a futures position in the relevant equity (e.g., Marcus and Ors 1996), using bonds at particular maturities with futures on a variety of other maturities (e.g., Morgan 2008) and hedging mortgage-backed securities with Treasuries at 2-, 5- and 10-year tenors (e.g., Koutmos and Pericli 2000). Garbade (1999) also provides an interesting application of a two-asset hedge for a bond. This is similar to the case of hedging with both the Senior and Mezzanine (or Junior) SBBS.
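The regression-based hedge ratio described above can be sketched as follows, covering both the single-instrument and composite cases (the function name is illustrative; the method is the standard minimum-variance OLS hedge):

```python
import numpy as np

def optimal_hedge_ratios(asset_returns, hedge_returns):
    """Minimum-variance hedge ratios: the negative of the OLS slope(s) from
    regressing the asset return on the hedge-instrument return(s).
    hedge_returns has shape (T,) for a single hedge or (T, k) for a
    composite hedge; an intercept is included in the regression."""
    h = np.asarray(hedge_returns)
    if h.ndim == 1:
        h = h[:, None]
    X = np.column_stack([np.ones(len(asset_returns)), h])
    beta, *_ = np.linalg.lstsq(X, np.asarray(asset_returns), rcond=None)
    return -beta[1:]                         # drop intercept, flip sign
```

In the composite case the multiple-regression slopes automatically account for correlation among the hedge instruments, which is what can make a combination (e.g., Senior and Mezzanine SBBS) outperform any single instrument.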

### *4.3. Measuring Out-Of-Sample Hedge Effectiveness*

The hedge selection and measurement carried out in this paper follows Bessler et al. (2016), who conduct a similar analysis in a European sovereign bond context. In that analysis, hedge ratios and effectiveness were estimated using rolling OLS, constant conditional correlation (CCC), dynamic conditional correlation (DCC-GARCH) and a Bayesian-based mixture of models. In the current analysis, hedge ratios and hedge effectiveness (using single or multiple SBBS as hedges) are estimated for each of the 11 individual sovereign bonds. Hedge effectiveness in each case is measured by the percentage change in risk achieved through hedging. This is done using two different risk metrics for the hedged and unhedged positions. The first metric is the rolling standard deviation of returns. The second metric is based on Value-at-Risk bounds for the hedged and unhedged positions (i.e., the percentage change in the range between the 5% and 95% Value-at-Risk quantiles for the hedged and unhedged cases).
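The two effectiveness metrics can be sketched directly (the function name is illustrative; each metric is expressed as a fractional risk reduction, so 0.5 means the hedge halved the risk measure):

```python
import numpy as np

def hedge_effectiveness(unhedged, hedged):
    """Fractional risk reduction achieved by the hedge under the two metrics
    in the text: (i) standard deviation of returns and (ii) the range
    between the 5% and 95% Value-at-Risk quantiles."""
    sd_reduction = 1.0 - np.std(hedged) / np.std(unhedged)
    range_u = np.quantile(unhedged, 0.95) - np.quantile(unhedged, 0.05)
    range_h = np.quantile(hedged, 0.95) - np.quantile(hedged, 0.05)
    var_reduction = 1.0 - range_h / range_u
    return sd_reduction, var_reduction
```

The quantile-range metric is less sensitive to a handful of extreme observations than the standard deviation, which is why the two measures can disagree in crisis sub-samples.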

Hedge effectiveness is therefore measured by the size of the reduction in risk exposure achieved by hedging (essentially, the risk of the hedged position relative to that of the unhedged position). The optimal hedge ratio and the hedge effectiveness measure typically change through time, so hedge effectiveness is assessed on a rolling basis across various sub-samples. Engle (2009) applies a pre-crisis model of time-varying covariance to the problem of hedge selection during the Great Financial Crisis and shows that this improves on static approaches that were the industry norm at that time. The rolling linear regression results discussed below are therefore unlikely to overstate the achievable hedge effectiveness.

To keep the exposition tractable, the results obtained using the rolling linear regressions represent a base case (and arguably a lower bound on effectiveness) and are the main focus of the results presented below.<sup>10</sup> The optimal hedge for each of the 11 sovereigns (with 10 years to maturity) was estimated on a rolling basis for the following seven hedge instruments/combinations: {Senior, Mezzanine, Junior, (Senior and Mezzanine), (Senior and Junior), (Mezzanine and Junior), (Senior, Mezzanine and Junior)}. In line with previous literature, the optimal hedge is derived by applying the chosen hedge selection method (e.g., linear regression) over a prior 250-day window and rolling the estimation window forward at regular intervals. The hedge ratio is therefore used in an out-of-sample context for the entire interval between the estimation sub-samples. Hedge ratio results are presented based on a rolling regression re-estimated at intervals of 25 days.
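The rolling out-of-sample scheme described above can be sketched as follows for the single-hedge case (the function name is illustrative; each hedge ratio is estimated on the trailing 250-day window and then held fixed for the next 25 out-of-sample days):

```python
import numpy as np

def rolling_hedged_returns(asset, hedge, window=250, step=25):
    """Estimate the minimum-variance hedge ratio over a trailing `window` of
    days, apply it unchanged to the next `step` days (out of sample), then
    roll forward; returns the concatenated out-of-sample hedged returns."""
    asset, hedge = np.asarray(asset), np.asarray(hedge)
    out = []
    for start in range(window, len(asset), step):
        a = asset[start - window:start]
        h = hedge[start - window:start]
        slope = np.cov(a, h)[0, 1] / np.var(h, ddof=1)  # OLS slope
        ratio = -slope
        oos = slice(start, min(start + step, len(asset)))
        out.append(asset[oos] + ratio * hedge[oos])
    return np.concatenate(out)
```

Because the ratio is fixed between re-estimations, any effectiveness measured on the resulting series is genuinely out of sample, consistent with the base-case, lower-bound interpretation above.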
