*4.11. ARMA, GARCH, EGARCH Model Parameters*

#### 4.11.1. ARMA Parameters

The authors first used the ARMA model to forecast future risk. A few assumptions made in arriving at the final forecast should be noted. The order used in the simulation was ARMA(1, 1). Using NumXL, the first step was to obtain general starting parameters; the authors then calibrated these to find the optimal values, all within the program.

μ is the long-run mean; ϕ1 is the first coefficient of the autoregressive (AR) component (first-order correlation); θ1 is the first coefficient of the moving average (MA) component; and σ is the standard deviation of the residuals, which captures the dispersion of the points around the fitted values and hence the accuracy of the fitted model.

For all six assets, these parameters were calculated to ascertain the final ARMA forecast over a period of 12 months, and the results are detailed in Table 4 below.


**Table 4.** ARMA parameters. Source: authors.

Here, it can be observed that the standard deviation of the residuals is highest for the equities asset class. For the autoregressive and moving average coefficients, the calibration selected the optimal values to input into the best ARMA model for each asset.
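The estimation itself was carried out in Excel with NumXL. Purely as an illustration of the same step, the sketch below shows how a comparable ARMA(1, 1) fit could be obtained in Python with statsmodels; the `returns` series is a hypothetical stand-in for one of the study's asset return series, not the actual data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical monthly return series standing in for one of the six assets
rng = np.random.default_rng(0)
returns = pd.Series(rng.normal(0.005, 0.04, 264))

# ARMA(1, 1) is an ARIMA(1, 0, 1) with a constant (the long-run mean mu)
res = ARIMA(returns, order=(1, 0, 1)).fit()

mu = res.params["const"]               # long-run mean
phi1 = res.params["ar.L1"]             # AR(1) coefficient
theta1 = res.params["ma.L1"]           # MA(1) coefficient
sigma = np.sqrt(res.params["sigma2"])  # standard deviation of the residuals
```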

#### 4.11.2. GARCH Parameters

In forecasting with GARCH, there were also some model parameters that were calculated and calibrated via the Excel spreadsheet. These are the ones input into the equation to come up with the volatility forecast. μ is the long run mean, α0 is the constant in the conditional volatility equation, α1 is the first coefficient of the ARCH component and β1 is the first coefficient of the GARCH component. All of this is summarized in Table 5 below.


**Table 5.** GARCH parameters. Source: authors.

The GARCH model takes the ARMA process a step further and factors in the conditional volatility element as well as the other GARCH components. Across all assets, the first coefficient of the ARCH component is close to zero.
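For reference, in the standard GARCH(1, 1) specification these parameters enter the return and conditional variance equations as:

$$r_t = \mu + \varepsilon_t, \qquad \sigma_t^2 = \alpha_0 + \alpha_1 \varepsilon_{t-1}^2 + \beta_1 \sigma_{t-1}^2$$

where ε_t is the return innovation and σ²_t its conditional variance.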

#### 4.11.3. EGARCH Parameters

Forecasting with the EGARCH(1, 1) model uses the parameters shown in Table 6 below.


**Table 6.** EGARCH parameters. Source: authors.

Here, μ is the long-run mean; α0 is the constant in the conditional volatility equation; α1 is the first coefficient of the ARCH component; γ1 is the first leverage coefficient; and β1 is the first coefficient of the GARCH component. The EGARCH has "the constant volatility embedded in the leverage coefficient which if included in a model of volatility it amplifies the volatility" (Engle and Siriwardane 2014).
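For completeness, one common parameterization of the EGARCH(1, 1) conditional variance, to which these parameters correspond up to notational conventions, is:

$$\ln\sigma_t^2 = \alpha_0 + \beta_1 \ln\sigma_{t-1}^2 + \alpha_1\left( \left| z_{t-1} \right| - \mathrm{E}\left| z_{t-1} \right| \right) + \gamma_1 z_{t-1}, \qquad z_t = \varepsilon_t / \sigma_t$$

The logarithmic specification keeps the variance positive without parameter constraints, and the leverage coefficient γ1 allows negative shocks to raise volatility more than positive shocks of the same size.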

#### 4.11.4. ARMA, GARCH, EGARCH Model Forecasts

The models were intended to capture the autocorrelation of squared returns and the reversion of volatility back to the mean. These models took non-normal components, such as excess kurtosis, into account and factored them into the forecasts. The NumXL ARCH/GARCH modelling wizard automated the steps used to construct an ARCH-type model: guessing initial parameters, parameter validation, goodness-of-fit testing and residuals diagnosis. Using these parameters, the final 12-month volatility forecasts were then produced with all three models for each asset, beginning with the two equity assets shown in Figure 16.
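The forecasts in this study were produced with NumXL in Excel. As a rough, hypothetical Python equivalent, the `arch` package can fit a GARCH(1, 1) (or an EGARCH by passing `vol="EGARCH", o=1`) and generate a 12-step-ahead volatility path; the `returns` series below is again placeholder data, not the study data.

```python
import numpy as np
import pandas as pd
from arch import arch_model

# Hypothetical monthly return series (placeholder for the study data)
rng = np.random.default_rng(0)
returns = pd.Series(rng.normal(0.005, 0.04, 264))

# GARCH(1, 1) with an AR(1) mean; returns scaled to percent for optimizer stability
res = arch_model(returns * 100, mean="AR", lags=1, vol="GARCH", p=1, q=1).fit(disp="off")

# 12-month-ahead conditional volatility forecast, converted back to return units
vol_12m = np.sqrt(res.forecast(horizon=12).variance.values[-1]) / 100
```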

#### 4.11.5. Equities Model Forecast

The first analysis is for the two equity assets, and Figure 16 gives the three forecasts made using all three models: ARMA, GARCH and EGARCH.

Figure 16 shows the 12-month volatility forecasts for the HEX and BGSMDC equity indices based on the previously mentioned parameters. It can be seen that EGARCH produces the lowest volatility forecast, followed by ARMA, with the GARCH model the highest. Slight increases are anticipated in the first period, after which the forecasts slowly taper off and stabilize. This indicates that the nearer the forecast horizon, the higher the level of accuracy, which is why the authors limited this to a 12-month forecast, so that only current and relevant information affected the volatility forecast.

**Figure 16.** Equities model forecasts (12 months). Source: authors.

#### 4.11.6. Hedge Fund Model Forecasts

For the hedge funds, the results are similar to the equities, with EGARCH recording the lowest volatility relative to GARCH and ARMA. However, for the HFRI, the GARCH forecast is lower than the ARMA forecast (see Figure 17), which could be attributed to differences in asset characteristics. The GARCH forecast for both funds shows a steady upward increase, whilst the ARMA forecast indicates a steady, almost flat increase for the DJCS, in contrast to the HFRI composite index, which increases in the first period and then tapers off. This shows higher initial anticipated volatility for the composite and lower volatility for the event-driven fund, which already anticipated the event shocks in its historical data.

**Figure 17.** Hedge fund model forecasts (12 months). Source: authors.

#### 4.11.7. Bond Model Forecasts

The bond forecasts differ from both the equity and hedge fund results. The corporate bond HOAO forecast has higher volatility under the ARMA model but, like all other assets, has its lowest forecast under EGARCH. The sovereign GBI\_EM bond forecast, on the other hand, has higher volatility in the first period, falling sharply thereafter to the lowest forecasts for the remaining periods. In this case, GARCH has the highest volatility of the three (see Figure 18).

**Figure 18.** Bonds model forecasts (12 months). Source: authors.

After carrying out the tests for the three asset classes, there is a need to select the best model to use. This cannot easily be determined from the forecast plots alone, and since the parameters used across the assets are not comparable, the Akaike Information Criterion (AIC) is considered. The AIC measures the goodness of fit of a model, and the model with the lowest value is preferred. It can therefore be used to determine which of the three methods is ideal for obtaining the best volatility forecast. Table 7 below shows the results for the six assets. In general, the AIC is defined as:

$$\text{AIC} = 2k - 2\ln(L)$$

where:

k is the number of model parameters.

ln(L) is the log-likelihood function for the statistical model.
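As a small illustration of how the criterion ranks candidate models (the log-likelihood values below are entirely made up and are not figures from Table 7):

```python
def aic(k, log_likelihood):
    """Akaike Information Criterion: lower values indicate a better trade-off
    between goodness of fit and model complexity."""
    return 2 * k - 2 * log_likelihood

# Hypothetical log-likelihoods for three candidate models fitted to one asset
candidates = {"ARMA(1,1)": aic(4, 410.2),
              "GARCH(1,1)": aic(4, 422.7),
              "EGARCH(1,1)": aic(5, 431.9)}
best_model = min(candidates, key=candidates.get)  # model with the lowest AIC
```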

Looking at the above results overall, it is clear that the asset classes with the lowest AIC are generally the hedge funds and bonds, whilst the equities have the highest. It can therefore be concluded that these models are most suitable for hedge funds and bonds and less so for equities. However, it must be noted that the AIC figure for the EGARCH of the BGSMDC is unavailable for comparison (see Table 7).

When analysing by model type, EGARCH appears to be the most accurate model to use because, for the hedge funds and bonds, it has the lowest AIC of the three methods for each asset. The next step of assessing the volatility forecast for each asset examines this finding further and seeks to validate or refute it.


**Table 7.** AIC model goodness of fit test for ARMA, GARCH and EGARCH. Source: authors.

#### **5. Discussion**

#### *5.1. Non-Normality Test Results*

The normality tests indicated that for the hedge funds, all 11 were not normally distributed; for equities, 19 out of 23 assets were not normally distributed, with the exception of the Japanese indices that exhibited normality. When testing the bonds, it was also established that 9 out of the 10 data sets were not normally distributed, with the exception of the JP Morgan Government Bond Index-Emerging Markets (GBI-EM), which covers comprehensive emerging market debt benchmarks that track local currency bonds issued by emerging market governments.

The normality tests also yielded an analysis of various factors, performed on the six selected assets. One was the standard deviation analysis, where it was found that hedge funds tend to have a lower standard deviation than equities, implying that equities are riskier than hedge funds; these are generalized conclusions based on the initial analysis. For bonds, the level of risk differs based on the underlying asset, with corporate bonds having a higher standard deviation than sovereign bonds. On skewness, most of the assets are negatively skewed with long tails, showing that they carry high downside risk, and the kurtosis results showed that most of the distributions are leptokurtic. Another element in the summary was the ARCH effect, which gives details on the distribution's fat tails and excess kurtosis; all of this was carried out to further affirm the non-normality of the returns. Once the argument on the type of distribution was resolved, the next step was to assess the risk measures relevant to this distribution.

#### VaR versus CVaR

After carrying out the analysis, it can be said that CVaR is an extension of VaR that gives the expected loss given that a loss event beyond the VaR threshold has occurred. According to Kidd (2012), an investor using VaR asks how often their portfolio could lose at least X amount of euros, whilst one using CVaR asks how much the portfolio could lose when its losses exceed X amount of euros.
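A minimal, hypothetical sketch of this distinction using historical simulation (not the exact procedure applied in the analysis) is given below.

```python
import numpy as np

def historical_var_cvar(returns, alpha=0.95):
    """Historical VaR and CVaR at confidence level alpha.

    VaR is the loss threshold exceeded with probability (1 - alpha);
    CVaR is the average loss in the (1 - alpha) worst cases.
    """
    losses = -np.asarray(returns)
    var = np.quantile(losses, alpha)
    cvar = losses[losses >= var].mean()
    return var, cvar
```

Because CVaR averages over the tail rather than reading off a single quantile, it responds to how heavy that tail is, which is why it exceeds VaR for the fat-tailed assets examined here.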

From the tests undertaken in the analysis, it can be seen that CVaR is a better measure of downside risk than VaR, as it produces higher risk measures and is more sensitive to the shape of the loss tail of non-normal distributions. From an asset class perspective, the equities generally have a higher risk measure than the hedge funds. However, this is merely a generalization, because what drives risk in an asset is not only its class but also what the asset is composed of. When drawing conclusions, it is critical to factor in other items such as the composition of the asset, its liquidity features, the industry, the geographic location and all other risk factors so that one can obtain an even more accurate measurement. This means combining this analysis asset by asset with the information in Figure 3, which describes the global risk landscape according to the World Economic Forum survey (World Economic Forum 2017).

#### *5.2. VaR versus Stressed VaR*

The stressed VaR analysis shows that, when different time horizons are used to compute VaR and SVaR, the returns are lower for the daily-observed data than for the monthly observations. This shows that one needs to be aware of the purpose of the tests in order to use the correct data time horizon, obtain accurate results and avoid understating or overstating the risk measure. For daily use, a shorter calculation window would be ideal, whereas for observing an asset over a longer period, the monthly time horizon could be adequate. All of this is situational, and other factors should be taken into account together with this information.

Another interesting observation was made when looking at the bonds, where one asset was normally distributed and the other was not. The normally distributed GBI-EM had VaR and SVaR measures that were close together, unlike what was observed in the other asset classes. This showed that the normal distribution, owing to the absence of fat tails, exhibited less risk, and the returns were generally uniform even in stressed periods. This means that the standard VaR would be a good measure in this case, but for all other data that are non-normal, it is critical to isolate the stress period and calculate its risks in comparison with a normal period. This makes SVaR a good measure to use in volatile periods because it captures the market risk inherent at that time. Knowing these measures can aid in improving risk estimates, in particular when forecasting data where the expectation is a downward one. This enables investors to assess whether they have taken on excessive risk that may lead to bankruptcy should another stress period occur, and to adjust to suitable risk levels that do not threaten the loss of their assets.
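A hedged sketch of the idea, reusing the hypothetical `historical_var_cvar` helper from the previous sketch and assuming a return series with a date index (placeholder data and a placeholder stress window, not the study's figures):

```python
import numpy as np
import pandas as pd

# Hypothetical monthly returns with a date index (placeholder for the study data)
rng = np.random.default_rng(3)
dates = pd.date_range("1995-01-31", periods=264, freq="M")
returns = pd.Series(rng.normal(0.004, 0.03, 264), index=dates)

# Standard VaR uses the full history; stressed VaR restricts the window to a
# known stress period, e.g., the 2008-2009 financial crisis
var_full, _ = historical_var_cvar(returns, alpha=0.99)
var_stressed, _ = historical_var_cvar(returns.loc["2008":"2009"], alpha=0.99)
```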

#### *5.3. Time Series Analysis*

The time series analysis is useful in helping to identify the way a security moves over time. Looking at past trends can aid in anticipating the types of risks that affect a particular stock. For instance, one can refer to the World Economic Forum Global Risks Perception Survey 2016 indicated above, which identifies highly probable risks such as cyber-attacks and natural disasters. Using this information, one can look at the asset and what its underlying asset is, and so determine the likelihood of adding more risk to it or reducing risk. For instance, if an index tracks agriculture companies based in an area where an increasing number of natural disasters affect the industry, profitability is likely to suffer and lower returns will be forecast. If investors had calculated a VaR up to a certain point, they could now adjust that figure based on the anticipated risks identified by analysing the time series.

It is also worth noting that when looking at a time series, it is essential to consider the underlying asset, as it can also give information on how the time series behaves. For instance, the HFRI hedge fund index, which aggregates funds run by a diverse array of fund managers, is likely to exhibit a time series with few extreme fluctuations, indicating lower volatility, probably resulting from diversification.

By using this information in conjunction with the variations of VaR previously observed, one can decide whether the risk figure must be adjusted to accommodate the added risk implied by the historical behaviour of the time series. The time series analysis also includes a plot of the EWMA, from which the authors are able to visually assess the stationarity of the time series. The EWMA plot shows the volatility of the asset over time, and the periods of volatility clustering are easy to observe: periods of large returns are clustered and very distinct from periods of small returns, which may also be clustered. Not only does volatility spike during a crisis or stress period, but it eventually drops back to approximately the same level as before the crisis, which is an indication of stationarity. This is a vital property that affirms the suitability of advanced forecasting models for time series volatility forecasting. A stationary time series is mean reverting, meaning that it moves back towards its mean.
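As a hypothetical sketch of how such an EWMA volatility series can be generated (the RiskMetrics-style recursion; λ around 0.94 is a common choice for daily data):

```python
import numpy as np

def ewma_volatility(returns, lam=0.94):
    """EWMA volatility: sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}^2."""
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty_like(r)
    sigma2[0] = r[:20].var()  # seed the recursion with an initial sample variance
    for t in range(1, len(r)):
        sigma2[t] = lam * sigma2[t - 1] + (1 - lam) * r[t - 1] ** 2
    return np.sqrt(sigma2)
```

Volatility clustering then shows up as sustained runs of high (or low) values in the resulting series.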

#### *5.4. Correlation Test—Single Asset*

In time series analysis, the correlogram is one tool that is necessary for identifying the model to use in the volatility forecasts. The correlogram analysis examines the temporal dependency within the sample data, focusing on the empirical auto-covariance, auto-correlation and related statistical tests. It is key to discovering the interdependency of the observations.

Upon carrying out the tests, it can be seen that five of the six assets exhibit serial correlation. These results further affirm the general hypothesis that the returns of several asset classes tend to be serially correlated. For instance, this is commonly expected in hedge funds because they usually hold illiquid assets that are hard to price, so fund managers rely on past information to derive their value. This has an impact on the ideal method for forecasting volatility in the next section.

Following this finding, the analysis also looked at the squared returns to assess whether they are correlated. The results showed that for both bonds, the HFRI hedge fund and the HEX equity index, the ARCH-effect test returned true, meaning that the squared returns are correlated. The opposite holds for the BGSMDC and DJCS, which returned false results in the tests. Overall, correlation can be assumed and a GARCH model may be used.
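A minimal sketch of such tests on placeholder data (the Ljung-Box statistic on returns checks serial correlation; applied to squared returns it serves as a proxy for the ARCH effect):

```python
import numpy as np
import pandas as pd
from statsmodels.stats.diagnostic import acorr_ljungbox

# Hypothetical monthly return series (placeholder for an asset from the study)
rng = np.random.default_rng(4)
returns = pd.Series(rng.normal(0.005, 0.04, 264))

# Small p-values reject the "no autocorrelation" null hypothesis
lb_returns = acorr_ljungbox(returns, lags=[12], return_df=True)       # serial correlation
lb_squared = acorr_ljungbox(returns ** 2, lags=[12], return_df=True)  # ARCH effect
```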

#### *5.5. Correlation Test-Portfolio Context*

After carrying out all the tests and looking at the relationships between assets of the same class and different combinations, it can be concluded that there is no particular pattern in how asset classes interact with each other in normal versus stressed periods. Therefore, for each portfolio, a portfolio manager should generate a similar correlation matrix to assess which assets are correlated and how they behave with each other during stressed and normal periods. In the example used in the analysis, selecting the two hedge funds would be a good illustration because, whether the assets are in a normal or a stressed period, they seem to maintain a correlation that is positive and close in value. This would allow an asset manager to maintain the portfolio at the same risk levels if that is their investment approach.

If there were a need to diversify and hedge, the portfolio manager would pick the assets with the lowest correlation or those that move in opposite directions, such as the two equity assets, which switch from being negatively correlated to being positively correlated during times of stress. Regardless of the change in correlation, it is essential for the investor to know in which direction the volatility of the assets is likely to move. This allows one to anticipate the changes that may occur depending on how the assets relate to each other and how this affects the level of risk at a given time or when planning for future risk. Failure to correctly anticipate market risk may overstate the amount of diversification in a portfolio, which can lead to an investor taking on excessive risk.
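A hypothetical sketch of the kind of comparison a portfolio manager could run (placeholder data and a placeholder stress window, not the study's figures):

```python
import numpy as np
import pandas as pd

# Placeholder monthly return panel for the six assets used in this study
rng = np.random.default_rng(5)
dates = pd.date_range("1995-01-31", periods=264, freq="M")
cols = ["HFRI", "DJCS", "HEX", "BGSMDC", "HOAO", "GBI_EM"]
panel = pd.DataFrame(rng.normal(0.004, 0.03, (264, 6)), index=dates, columns=cols)

# Full-sample ("normal") correlations versus a stress window (e.g., 2008-2009)
normal_corr = panel.corr()
stressed_corr = panel.loc["2008":"2009"].corr()
corr_shift = stressed_corr - normal_corr  # pairs whose co-movement changes under stress
```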

#### *5.6. Volatility Forecast (ARMA, GARCH and EGARCH)*

Beg and Anwar (2014) view the arrival of new information as a key driver of volatility in asset returns. According to the authors, the constant release of new information causes financial asset prices to exhibit volatility, which in turn affects financial risk analysis and risk management strategies. To account for this, models such as the Generalized Autoregressive Conditional Heteroscedasticity (GARCH) model can be used to analyse financial data.

Having carried out the studies, it can be said that these volatility forecasting models (ARMA, GARCH, EGARCH) are useful for non-normal distributions when analysing an asset's risk as well as market risk. This is because the models incorporate various parameters, such as the leverage coefficient used in calculating volatility with the EGARCH model, that capture features affecting an asset's risk. The lag order to consider can also be determined by looking at the ACF and PACF plots, as was carried out here; these plots are useful in determining the best lag order for each individual asset. In our case, most of the plots showed an ACF with one significant lag followed by decay, so the authors applied the order (1, 1). However, in further use of these models, it is critical to look at other elements of the data, such as stationarity, as all of this helps to determine whether the models are suitable for the chosen data set.
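A hypothetical sketch of how such plots can be produced to choose the lag order (again on placeholder data rather than the study's series):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

# Placeholder monthly return series
rng = np.random.default_rng(6)
returns = pd.Series(rng.normal(0.005, 0.04, 264))

fig, axes = plt.subplots(2, 1, figsize=(8, 6))
plot_acf(returns, lags=24, ax=axes[0])   # a significant first lag followed by decay suggests an MA(1) term
plot_pacf(returns, lags=24, ax=axes[1])  # a single significant spike suggests an AR(1) term
plt.tight_layout()
plt.show()
```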

In the data analysis section, after the parameters were calculated and defined for all three models, the next step was to plot the 12-month forecasts using each model. It was generally observed that the GARCH model had the highest volatility of the three for each asset, whilst the EGARCH had the lowest. The main advantage of GARCH models is that they jointly capture heavy tails and volatility clustering (Jansky and Rippel 2011). This reflects a characteristic of the data sets being observed (except for the GBI-EM bond index, which is normally distributed, so its plot was very different from the other two, with a high EGARCH forecast at period one that sharply declined).

Overall, when one examines the literature regarding the use of these models, it can be seen that they are vastly expanding to cater to different elements within the data set. Thus, it can be concluded that when carrying out a risk forecast, any of these models can be applied, but it all depends on the asset characteristics and what risk factors to consider.

#### *5.7. AIC Test for Best Model*

The Akaike Information Criterion (AIC) was used to measure the goodness of fit for each of the three models. The model with the lowest value is the best one to use, and in the test, the EGARCH emerged as the best fit because it had the lowest AIC overall for the assets examined (see Table 7).

Results from this test therefore give a guideline to the best method for forecasting risk for these non-normally distributed assets (with the exception of the normally distributed JP Morgan Government Bond Index-Emerging Markets (GBI-EM)). Hedge funds and bonds seemed best suited, with the lowest AIC when looking across asset classes. In terms of the best method overall, the EGARCH reigns superior, with the lowest AIC of the three methods for each asset. Further tests to see which is better between GARCH and EGARCH could be carried out, as suggested by Malmsten (2004), who considers tests of both symmetric and asymmetric models. These go beyond the scope of this paper, but it is worth noting that various tests other than the AIC used in this study can be applied to determine the validity of model choices.

#### Results summary

VaR has become a popular method of measuring risk and, owing to its simplicity, was adopted very quickly; however, its underlying assumption of normally distributed returns led to it being discredited, with more investors moving away from it towards variants that take fat tails into account. Although value at risk has been used by most investors in the recent past to measure risk exposure, it was inadequate for calculating how much would be lost, and this often led investors to underestimate risk, such that when a crisis occurred many of them suffered to the brink of collapse and bankruptcy. This led to the creation of other variations of this risk measure that focus specifically on downside risk, including conditional value at risk and stressed value at risk, among others. These were tested, and it was seen that CVaR produced a higher risk measure than the standard VaR because it takes into account the existence of heavy tails; it is therefore a better measure of risk and would shield the asset holder from an overexposure of their asset or portfolio that could lead to bankruptcy. Another measure considered was stressed value at risk, which showed that volatility differs during stressed periods, so measuring it separately gives a better picture of how risky an asset can become in periods of high volatility.

The next step involved analysing the time series, which was a critical step in the research because it showed how the assets behave and whether any patterns are prevalent in their movements over time. Observing a time series gave information on issues such as stationarity, which allows certain econometric models to be used to establish volatility forecasts. After this, a correlation test was performed for each asset to observe whether an asset's returns are serially correlated. This proved to be the case, and it can therefore be concluded that price dependence, persisting over long periods, is a major theme in the distribution of returns. The existence of price jumps was also observed; all of these features are key to obtaining a more realistic risk measure.

The next test was to produce and compare volatility forecasts over the time horizon using the three selected methods, namely ARMA, GARCH and its hybrid EGARCH. These methods were ideal because financial time series returns are skewed and fat tailed, and the models are geared to take this into account. They were able to capture volatility clustering and changes in trends, all of which were factored into the parameters that the authors calibrated using the NumXL add-in in Excel to obtain the best combination of parameter values. A further benefit of these models is that, depending on the characteristics of the data being observed, hybrids of each method exist to accommodate them. GARCH, for instance, has many such extensions, one of which is EGARCH, used here; others include FIGARCH, HYGARCH and so forth. These model extensions are very relevant for the conditional volatility of financial returns.

However, in looking at the various methods, one must be aware of the limits of statistics, as one cannot fully rely on any single method for accurate measures. The data sets used in this paper aimed to offer a more robust representation of various assets and market segments, such as sampling from a less developed economy, a composite index, a specialized index and so forth, but this does not mean that all risks will be captured in asset pricing or in the risk allocation of portfolios. There remains the risk that poorly understood risks will drive returns up or down drastically, resulting in price jumps. Therefore, the ideal approach for an investor is to combine the risk measures with other elements, such as scenario analysis, which looks at macroeconomic variables, stress testing, which tests just how risky an asset can become, and forecasting with risk models, in order to obtain the best possible risk measure.

#### **6. Conclusions and Future Research**

This paper made numerous attempts to elucidate the complexity of measuring risk. To obtain the most accurate possible risk measures, we recommend that researchers incorporate historical data and adopt scenario analysis for a more robust outcome. This calls for observing asset behaviour and how assets respond to external shocks and to normal periods. It also means looking at how an asset's individual returns are autocorrelated, which reveals how much of an impact historical data have on the future pricing of the asset. The findings also reveal several stylized facts about financial markets, such as fat tails in asset distributions, the absence of autocorrelations of asset returns, volatility clustering and aggregational normality.

Incorporating non-normality in risk calculations should provide a better way of understanding and measuring the downside risk in an asset. The models analysed in this paper take this into account, and the results obtained reflect this aspect. All of the methods explored in the analysis section shed some light on the many options available for assessing risk in non-normal distributions. One aspect identified was that if the methods are used together rather than in isolation, they complement each other by addressing one another's shortcomings, giving measures that are as close to accurate as possible. For instance, VaR can be used as the standard measure and then adjusted based on the information obtained from the time series. However, it must be noted that no single risk measurement framework can ensure that portfolios and assets are immune to unexpected risk exposure, but by using a mosaic of methods, we can come close to this.

To summarize the study and the tests conducted, the first step was to examine the assumption that returns tend to be non-normal, as opposed to the general assumption of normality. The findings from the tests clearly showed that the majority of the returns are not normally distributed, but are instead skewed and possess excess kurtosis. Thus, a significant first step in assessing risk is to test the asset's historical data for normality. In some cases, one may find a distribution that is normal, such as the one found for the bond index GBI-EM, in which case a basic risk test such as VaR could be adequate to some degree.

Finally, in order to obtain the most accurate possible risk measures, the best approach is not simply to move away from the use of historical data. Rather, creating a balance yields better risk measures, whereby historical data are used, through time series analysis, to assess the behaviour of the asset and how it responds to shocks and normal periods. This can also include looking at how an asset's individual returns are correlated with each other (autocorrelation) to see how much of an impact previous data have on the future pricing of the asset. All of the models discussed have both merits and weaknesses, but for those under the non-normal assumption discussed in this paper, the risk measures calculated will come closest to accurately reflecting market conditions.

#### **7. Research Limitations**

The ARCH and GARCH model family has been evolving since its introduction, such that a multitude of new varieties have emerged (Bollerslev et al. 2010). This limited how many varieties this study could use, given the volume of computation required, and made picking the best model for forecasting volatility across all types of financial data somewhat cumbersome for this particular study. Another limitation was the limited availability of more recent data to adequately analyse the impact that COVID-19 has had over the past year and more.

Another major limitation in providing universal results was inadequate computing power to dissect all available asset classes. Instead, we picked three relatively common ones to work with, and even within these three asset classes, only two assets from each were used for volatility forecasting. For instance, the authors had 60 bond indices but analysed only two. It is hoped that future studies can apply the tests used here to other asset classes such as forex, futures and other derivatives.

#### Future Research

Future research will be undertaken using different asset classes as well as newer GARCH models for forecasting risk, in the continued search for the most accurate risk measures. The authors are currently gathering information in the present COVID-19 era to further study its impact on risk measures. It would be interesting to assess the short-term and long-term impacts the pandemic has had. This will allow the authors to gain a clearer picture of the future of similar events and find ways to mitigate negative financial crises. Once the data are fully available, a follow-up paper should be written to assess this new variable.

Another research area the authors wish to investigate is cryptocurrencies as an "asset class" rather than as a digital currency. It would also be interesting to assess their price movements and behaviour, see how they affect the performance of other assets, and examine the types of risks involved in using them.


**Author Contributions:** Conceptualization, E.S. and Y.T.M.; methodology, Y.T.M.; software, Y.T.M.; validation, E.S. and Y.T.M.; formal analysis, Y.T.M.; investigation, E.S. and Y.T.M.; resources, E.S. and Y.T.M.; data curation, E.S. and Y.T.M.; writing—original draft preparation, E.S. and Y.T.M.; writing—review and editing, E.S.; visualization, E.S.; supervision, E.S.; project administration and funding acquisition, N/A. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Review Board (or Ethics Committee).

**Informed Consent Statement:** Informed consent was obtained from all subjects involved in the study.

**Conflicts of Interest:** The authors declare no conflict of interest.


#### **Appendix A**

**Table A1.** Jarque Bera Test On Hedge Fund Data 1995–2016.


Note: N—not normal. Source: authors.

#### **Appendix B**

#### **Table A2.** Jarque Bera Test On Global Equity Indices Data 1995–2016.


Source: authors.

#### **Appendix C**

**Table A3.** Jarque Bera Test On Global Bond Indices Data 1995–2016.




Source: authors.


#### **References**

Acerbi, Carlo, and Dirk Tasche. 2002. On the coherence of expected shortfall. *Journal of Banking and Finance* 26: 1487–503. [CrossRef]

Alexander, Carol, and Elizabeth Sheedy. 2008. Developing stress testing framework based on market risk models. *Journal of Banking*


Bollerslev, Tim. 1986. Generalized autoregressive conditional heteroscedasticity. *Journal of Econometrics* 31: 307–27. [CrossRef]

Bollerslev, Tim, Jeffrey Russell, and Mark Watson. 2010. *Volatility and Time Series Econometrics*. Oxford: Oxford University Press.

Box, George Pelham, and Gwilym Jenkins. 1976. *Time Series Analysis, Forecasting and Control*. San Francisco: Holden Day.


Crouhy, Michel, Dan Galai, and Robert Mark. 2014. *The Essentials of Risk Management*, 2nd ed. London: McGraw-Hill.


Fama, Eugene, and Kenneth French. 1992. The cross-section of expected stock returns. *Journal of Finance* 47: 427–65. [CrossRef]


Hedge Fund Research Inc. 2017. HFRI Indices–Index Descriptions. Available online: https://www.hedgefundresearch.com/hfriindices-index-descriptions (accessed on 22 August 2017).

J.P. Morgan Asset Management. 2009. Non-normality of Market Returns, a Framework for Asset Allocation Decision Making. Available online: https://jai.pm-research.com/content/12/3/8 (accessed on 14 June 2021).

Jain, Shrey, and Siddhartha Chakrabarty. 2020. Does Marginal VaR lead to improved performance of managed portfolios? A study of S&P BSE 100 and S&P BSE 200. *Asia Pacific Financial Markets* 27: 291–323. [CrossRef]

Jansky, Ivo, and Milan Rippel. 2011. *Value at Risk Forecasting with ARMA-GARCH Family of Models in Times of Increased Volatility*. IES Working Paper: 27/2011. Prague: Institute of Economic Studies, Faculty of Social Sciences, Charles University.

Jorion, Philippe. 2003. *Financial Risk Manager Handbook*, 2nd ed. Hoboken: GARP, Wiley & Sons.

Kamara, Avraham, Robert Korajczyk, Xiaoxia Lou, and Ronnie Sadka. 2018. Short-horizon beta or long-horizon alpha? *Journal of Portfolio Management* 45: 96–105. [CrossRef]

Katchova, Ani. 2013. Time Series ARIMA Models: Econometrics Academy. Available online: https://sites.google.com/site/econometricsacademy/econometrics-models/time-series-arima-models (accessed on 4 September 2017).

Kidd, D. 2012. Value at Risk and Conditional Value at Risk. Investment and Risk Performance. CFA Institute. Available online: http://www.cfapubs.org/doi/pdf/10.2469/irpn.v2012.n1.6 (accessed on 16 August 2020).

Kilai, Mutua, Anthony Gichuhi Waititu, and Anthony Wanjoya. 2018. Modelling Kenyan foreign exchange risk using asymmetry GARCH models and Extreme Value Theory approaches. *International Journal of Data Science and Analysis* 4: 38–45. [CrossRef]

Krause, Andreas. 2003. Exploring the limitations of Value at Risk: How good is it in practice? *Journal of Risk Finance* 4: 19–28. [CrossRef]

Kumar, Arya, and Saroj Kantar Biswal. 2019. Impulsive clustering and leverage effect of emerging stock market with special reference to Brazil, India, Indonesia and Pakistan. *Journal of Advanced Research in Dynamic Control System* 11: 33–37. [CrossRef]


Lopez, Jose. 2005. Stress tests: Useful complements to financial risk models. *FRBSF Economic Letter* 2005: 119–24.

Loretan, Mico, and William English. 2000a. *Evaluating "Correlation Breakdowns" during the Periods of Market Volatility*. Basel: Bank of International Settlements (BIS), Available online: http://www.bis.org/publ/confer08k.pdf (accessed on 18 September 2017).


Malmsten, Hans. 2004. *Evaluating Exponential GARCH Models: SSE/EFI Working Paper Series in Economics and Finance, No. 564*. Stockholm: Stockholm School of Economics.

Markowitz, Harry. 1952. Portfolio selection. *Journal of Finance* 7: 77–91. [CrossRef]

Marrison, Christopher. 2002. *The Fundamentals of Risk Measurement*. New York: McGraw-Hill.

Mina, Jorge. 2002. Measuring bets with Relative Value at Risk. *Derivatives Week, Learning Curve* 2002: 14–5.

*RiskMetrics Technical Document*, 4th ed. New York: Available online: https://my.liuc.it/MatSup/2006/F85555/rmtd.pdf (accessed on 16 July 2021).

Napper, Kerry. 2008. Refining Risk Measures. MLC Investment Management. *White Paper—Serial Correlation* 1: 2–13.

Nelson, Daniel. 1990. ARCH models as diffusion approximations. *Journal of Econometrics* 45: 7–38. [CrossRef]

Nieto, Maria Rosa, and Esther Ruiz. 2016. Frontiers in VaR forecasting and back testing. *International Journal of Forecasting* 32: 475–501. [CrossRef]

Orhan, Mehmet, and Bülent Köksal. 2011. A comparison of GARCH models for VaR estimation. *ESWA* 2012: 2582–3592.

Pflug, Georg Ch. 2000. Some remarks on the Value at Risk and the Conditional Value at Risk. In *Probabilistic Constrained Optimization. Nonconvex Optimization and Its Applications 49*. Edited by Uryasev Stanislav. Boston: Springer. [CrossRef]

Reider, Rob. 2009. Volatility Forecasting 1: GARCH Models: October 2009. Available online: cims.nyu.edu/~almgren/timeseries/Vol_Forecast1.pdf (accessed on 10 June 2017).

Rockafellar, Ralph Tyrell, and Stanislav Uryasev. 2000. Optimization of conditional value at risk. *Journal of Risk* 2: 21–42. [CrossRef]


Tang, Wang, and Fang Su. 2017. Analysis of the impact of extreme financial events on systematic risk. A case study of China's banking sector. *Economic Research Journal* 52: 17–33.

Van Dyk, Francois. 2016. Stress Testing 101: The method, recent developments, transparency concerns and some recommendations. *The SA Financial Markets Journal*. Available online: http://financialmarketsjournal.co.za/oldsite/16thedition/printedarticles/stresstesting.htm (accessed on 7 March 2017).

Wang, Zhouwei, Qicheng Zhao, Min Zhu, and Tao Pang. 2020. Jump Aggregation, Volatility Prediction, and Nonlinear Estimation of Banks' Sustainability Risk. *Sustainability* 12: 8849. [CrossRef]

World Economic Forum. 2017. *The Global Risks Report 2017*, 12th ed. Zurich: World Economic Forum.
