Article

The Cascade Bayesian Approach: Prior Transformation for a Controlled Integration of Internal Data, External Data and Scenarios

by Bertrand K. Hassani 1,2,3,* and Alexis Renaudin 4
1 University College London Computer Science, 66-72 Gower Street, London WC1E 6EA, UK
2 LabEx ReFi, Université Paris 1 Panthéon-Sorbonne, CESCES, 106 bd de l'Hôpital, 75013 Paris, France
3 Capgemini Consulting, Tour Europlaza, 92400 Paris-La Défense, France
4 Aon Benfield, The Aon Centre, 122 Leadenhall Street, London EC3V 4AN, UK
* Author to whom correspondence should be addressed.
Risks 2018, 6(2), 47; https://doi.org/10.3390/risks6020047
Submission received: 14 March 2018 / Revised: 22 April 2018 / Accepted: 23 April 2018 / Published: 27 April 2018
(This article belongs to the Special Issue Capital Requirement Evaluation under Solvency II framework)

Abstract

According to the last proposals of the Basel Committee on Banking Supervision, banks or insurance companies under the advanced measurement approach (AMA) must use four different sources of information to assess their operational risk capital requirement. The fourth includes 'business environment and internal control factors', i.e., qualitative criteria, whereas the three main quantitative sources available to banks for building the loss distribution are internal loss data, external loss data and scenario analysis. This paper proposes an innovative methodology to bring together these three different sources in the loss distribution approach (LDA) framework through a Bayesian strategy. The integration of the different elements is performed in two different steps to ensure an internal data-driven model is obtained. In the first step, scenarios are used to inform the prior distributions and external data inform the likelihood component of the posterior function. In the second step, the initial posterior function is used as the prior distribution and the internal loss data inform the likelihood component of the second posterior function. This latter posterior function enables the estimation of the parameters of the severity distributions selected to represent the operational risk event types.

1. Introduction

The Basel II capital accord defines the capital charge as a risk measure obtained on an annual basis, at a given confidence level, on a loss distribution that integrates the following four sources of information: internal data, external data, scenario analysis, and business environment and internal control factors (see Box 1).
Box 1. Mixing of outcomes from AMA sub-models. 
‘258. The Range of Practice Paper recognises that “there are numerous ways that the four data elements have been combined in AMA capital models and a bank should have a clear understanding of the influence of each of these elements in their capital model”. In some cases, it may not be possible to:
(a) Perform separate calculations for each data element; or
(b) Precisely evaluate the effect of gradually introducing the different elements.
259. While, in principle, this may be a useful mathematical approach, certain approaches to modelling may not be amenable to this style of decomposition. However, regardless of the modelling approach, a bank should have a clear understanding of how each of the four data elements influences the capital charge.
260. A bank should avoid arbitrary decisions if they combine the results from different sub-models within an AMA model. For example, in a model where internal and external loss data are modelled separately and then combined, the blending of the output of the two models should be based on a logical and sound statistical methodology. There is no reason to expect that arbitrarily weighted partial capital requirement estimates would represent a bank’s requisite capital requirements commensurate with its operational risk profile. Any approach using weighted capital charge estimates needs to be defensible and supported, for example by thorough sensitivity analysis that considers the impact of different weighting schemes.’ BCBS (2010)
Basel II and Solvency II require financial institutions to assess the capital charge associated with operational risks (BCBS 2001, 2010). There are three different approaches to measuring operational risks, namely the basic, the standardised and the advanced measurement approach (AMA), representing increasing levels of sophistication and implementation difficulty. The AMA requires a better understanding of the exposure in order to implement an internal model.
The regulatory capital is given by the 99.9th percentile of the loss distribution (99.5th for Solvency II), whereas the economic capital is given by a higher percentile related to the rating of the financial institution, usually between 99.95% and 99.98%.
The purpose of using multiple sources of information is to build an internal model on the largest possible data set, in order to increase the robustness, stability and conservatism of the final capital evaluation. However, different sources of information have different characteristics, which, taken in isolation, can be misleading. Therefore, as clearly stated in the regulation (BCBS 2010, c.f. above), practitioners should obtain a clear understanding of the impact of each of these elements on the capital charge that is computed. Internal loss data represent the entity risk profile, external loss data characterise the industry risk profile and scenarios offer a forward-looking perspective and enable one to gauge the unexpected loss from an internal point of view (Guégan and Hassani 2014). Figure 1 illustrates this point by assuming that the internal loss data tend to represent the body of the severity distribution, the scenarios, the extreme tail, and the external data, the section in between (see note 1).
As presented in Cruz et al. (2014) and Hassani (2016), operational risk can be modelled using various approaches depending on the situation, the economic environment, the bank's activity, the risk profile of the financial institution, the appetite of the domestic regulator, and the legal and regulatory environment. Indeed, AMA banks do not all use the same methodological frameworks: some use Bayesian networks (Cowell et al. 2007), some rely on extreme value theory (Peters and Shevchenko 2015), some consider Bayesian inference (BI) strategies for parameter estimation and some implement traditional frequentist approaches (among others). We observed that the choice of methodology was driven by the way the components were captured, for instance whether the scenarios were captured on a monthly basis and modelled using a generalized extreme value (GEV) distribution (Guégan and Hassani 2014) or annually, capturing only a few points in the tail (the approach that led to the scenario assessment used in this paper); whether internal data have been collected above a threshold; whether the categories underpinning quantification combine different types of events (e.g., external fraud may combine credit card fraud, commercial paper fraud, cyber attacks, Ponzi schemes, etc.); how external data have been provided; whether loss data are independent (Guégan and Hassani 2013b); or how the dependencies between risk types are dealt with. This last point is outside the scope of this paper; however, the methodology presented here is compatible with Guegan and Hassani (2013a).
Lately, Bayesian inference has been used for modelling operational risk incidents, as described in Peters and Sisson (2006); Valdés et al. (2018); Mizgier and Wimmer (2018); and Leone and Porretta (2018). This paper expands on the frequency × severity framework (loss distribution approach), in which the loss distribution function G (Frachot et al. 2001; Cruz 2004; Böcker and Klüppelberg 2010; Chernobai et al. 2007) is a weighted sum of k-fold convolutions of the severity distribution F, where k represents the order of convolution. The weights are provided by the frequency distribution p. Mathematically, this function corresponds to the following:
$$G(x) = \sum_{k=1}^{\infty} p(k;\Lambda)\, F^{\otimes k}(x;\Theta), \qquad x > 0,$$
with
$$G(x) = 0, \qquad x = 0.$$
Here, $\otimes$ denotes the convolution operator. Denoting by $g$ the density of $G$, we have
$$g(x) = \sum_{k=1}^{\infty} p(k;\Lambda)\, f^{\otimes k}(x;\Theta), \qquad x > 0,$$
where $\Lambda$ and $\Theta$ respectively denote the sets of parameters of the frequency and severity distributions considered.
The frequency parameter is estimated by maximum likelihood on the internal loss data, using a Poisson distribution to model the frequencies (see note 2). However, as collection thresholds are in place, the frequency distribution parameter is adjusted using the parameterised severity distribution.
The focal point of this paper is the construction of severity distributions combining the three data sources presented above. As the level of granularity of the risk taxonomy increases, the quantity of data available per risk category tends to decrease. As a result, traditional estimation methods, such as maximum likelihood or the method of moments, tend to be less reliable. Consequently, we opted to bring the three components together within the Bayesian inference theoretical framework (Box and Tiao 1992; Shevchenko 2011). Despite the numerous hypotheses surrounding BI, it is known to be effective in situations in which only a few data points are available. When loss collection thresholds are used, the empirical frequencies are biased because some incidents are not captured. Therefore, the frequency distribution parameters depend on the shape of the severity distribution obtained once all the elements have been combined, i.e., the scenarios, the external data and the internal data conditional on the collection thresholds. The frequency distribution parameters are thus corrected to take into account the censored part of the severity distribution and, as such, mechanically integrate some frequency information consistent with the shape of the severities. The fatter the tail, the lower the correction of the frequency distribution, and conversely. Consequently, a model driven by the tail would lead to a frequency distribution that is not consistent with the risk profile of the target entity. This issue has been dealt with in this paper, and the pertaining results are presented in Section 4.
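To make the frequency correction concrete, the following is a minimal Monte Carlo sketch of the aggregate loss distribution G defined above, assuming a Poisson frequency and a lognormal severity and adjusting the observed Poisson rate for a collection threshold. All numerical values (threshold, frequency, severity parameters) are illustrative assumptions and are not taken from the paper's tables.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Illustrative parameters (assumptions of this sketch, not the paper's figures).
lam_observed = 66.0          # annual Poisson frequency observed above the threshold
mu, sigma = 9.3, 2.5         # lognormal severity parameters
threshold = 4_000.0          # loss collection threshold

# Correct the frequency for the censored part of the severity distribution:
# lambda_corrected = lambda_observed / P(X > threshold).
severity = stats.lognorm(s=sigma, scale=np.exp(mu))
lam_corrected = lam_observed / severity.sf(threshold)

# Monte Carlo aggregation of the loss distribution G (frequency x severity).
n_years = 50_000
counts = rng.poisson(lam_corrected, size=n_years)
annual_losses = np.array([severity.rvs(size=k, random_state=rng).sum() if k else 0.0
                          for k in counts])

print(f"corrected lambda: {lam_corrected:.1f}")
print(f"99.9% VaR of the aggregate loss: {np.quantile(annual_losses, 0.999):,.0f}")
```

Under these assumptions the corrected rate exceeds the observed one, and the fatter the severity tail, the smaller the correction, which is the behaviour described above.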
In statistics, Bayesian inference is a method of inference in which Bayes' theorem (Bayes 1763) is used to update the probability estimate of a proposition as additional information becomes available. The initial degree of confidence is called the prior and the updated degree of confidence, the posterior.
Consider a random vector of loss data $X = (X_1, \ldots, X_n)$, whose joint density, for a given vector of parameters $\phi = (\phi_1, \ldots, \phi_K)$, is $h(x|\phi)$. In the Bayesian approach, both observations and parameters are considered to be random. Then, the joint density is as follows:
$$h(x, \phi) = h(x|\phi)\,\pi(\phi) = \pi(\phi|x)\,h(x),$$
where $\pi(\phi)$ is the probability density of the parameters, known as the prior density function. Typically, $\pi(\phi)$ depends on a set of further parameters that are usually called 'hyper' parameters (see note 3). $\pi(\phi|x)$ is the density of the parameters given the data $X$, known as the posterior density, $h(x, \phi)$ is the joint density of the observed data and parameters, and $h(x|\phi)$ is the density of the observations for given parameters. The latter is the same as a likelihood function (as such, and to draw a parallel with the frequentist approach, this component will be referred to as the likelihood component in what follows) if considered as a function of $\phi$, i.e., $l_X(\phi) = h(x|\phi)$. $h(x)$ is the marginal density of $X$, which can be written as $h(x) = \int h(x|\phi)\,\pi(\phi)\,d\phi$.
The Bayesian inference approach permits the reliable estimation of distribution parameters even if the quantity of data, denoted $n$, is limited. In addition, as $n$ becomes larger, the weight of the likelihood component increases, such that, as $n \to \infty$, the posterior distribution tends to the likelihood function and, consequently, the parameters obtained from both approaches converge. As a result, the data selected to inform the likelihood component may drive the model and, consequently, the capital charge. Therefore, our two-step approach allows operational risk managers and modellers to integrate all the aforementioned components without having to justify a capital charge increase caused by an extreme loss that another entity has suffered.
This paper employs two Bayesian inference steps in a sequential manner to obtain the parameters of the statistical distribution used to characterise the severity. Scenarios are used to build the prior distribution of the parameters, denoted $\pi(\phi)$, which is refined using external data $Y$ to inform the likelihood component. This results in an initial posterior function $\pi(\phi|Y)$, which is then used as a prior distribution, and the likelihood part is informed by the internal data. This leads to a second posterior distribution that allows the estimation of the parameters of the severity distribution used to build the loss distribution function in the loss distribution approach (LDA). The first step of the methodology aims, in reality, to transform a first prior containing little information (though not non-informative) into a prior containing much more information, leading to a narrower set of acceptable parameters.
Universally suitable methodologies for operational risk capital modelling are rare, if they can be designed at all (Hassani 2016). With that caveat in mind, our methodology can be useful in several ways. First, our cascade Bayesian approach can be a viable alternative when the available internal data are not numerous, when the risk profile has some similarities with the entities providing the external data and when the scenarios are characterised only by a few points evaluated by expert judgement for a given set of likelihoods. Indeed, the priors act as penalisation functions during the estimation of the parameters and, eventually, as weighting functions when the elements are combined. Here, we would like to reinforce the fact that the objective is to obtain a set of distributions driven by internal data and not distributions driven by losses collected by other banks, which are not necessarily fully representative of the risk profile of the target entity.
Second, bank management as well as external stakeholders and regulators find the transparency of a model highly desirable from a model risk and governance perspective. Our methodology advances this cause significantly through a controlled integration of the three data elements that allows the marginal impact of each element type on the estimated risk capital to be evaluated. In addition, in the situation considered in this paper, the three elements did not have the same characteristics: external data were only collected above a high threshold, scenarios were only represented by a few points in the right tail of the distributions underpinning quantification, and internal data were collected above different thresholds depending on the type of risk.
Third, when one or more data elements for some categories of events are not numerous, the resulting capital estimate can be exorbitantly high and unrealistic depending on how the data elements are combined. In such challenging contexts, our methodology is relatively more robust in producing sensible capital estimates. As a matter of fact, the sensitivity analysis we performed at the time showed increases of up to 400% of the initial value-at-risk (VaR) for some categories.
Section 2 presents the Cascade strategy and the underlying assumption that allows the implementation of a data-driven model. Section 3 deals with the implementation in practice, using real data sets. Section 4 presents the results, and Section 5 provides the conclusion.

2. A Bayesian Inference in Two Steps for Severity Estimation

This section outlines the two-step approach to parametrise the severity distribution. The cascade implementation of the Bayesian inference approach is justified by the following property (Berger 1985). The Bayesian posterior distribution implies that the larger the quantity of data used, the larger the weight of the likelihood component. Consequently,
$$\pi(\phi; x_1, \ldots, x_k) \propto \pi(\phi) \prod_{i=1}^{k} f_i(x_i \mid \phi) \underset{k \to \infty}{\sim} \prod_{i=1}^{k} f_i(x_i \mid \phi),$$
where $f$ is the density characterising the severity distribution, $x_i$ a data point and $\phi$ the set of parameters to be estimated. As a result, the order of the Bayesian integration of the components is significant.
In other words, the information contained in the observed incidents may override the prior (Parent and Bernier 2007). The posterior function can be split into two mathematical sets, in which the prior characterises 'observable' values and the likelihood component 'observed' values. Consequently, in our case, three sources of information are available that qualify as 'observable' (scenarios), 'observed' outside the target entity (ELD) and 'observed' inside the entity (ILD). If both 'observed' components were considered to belong to the same source, then we would have two likelihood components that may be linearly combined into one. Consequently, if the number of data points is limited internally, then the external data would have a major impact on the parameters and would drive the risk measures. Furthermore, ELD only include incidents that are observed outside the target entity; therefore, they logically should be considered 'observable' internally.
As a consequence, this strategy is based on the construction of two successive posterior distribution functions. The first uses the scenario values to inform the prior and the external loss data to inform the likelihood function. Using a Markov chain Monte Carlo algorithm (Gilks et al. 1996), the first posterior empirical distribution is obtained. This is used as a prior distribution in a second Bayesian step whose likelihood component is informed by the internal loss data. Owing to the Bayesian property presented above, at worst the final posterior distribution is entirely driven by internal data. The method may be formalised as follows: $f$ is the density function of the severity distribution, $x_i$ represent the external data, $y_i$ the internal data and $\phi$ the set of parameters to be estimated.
  • The prior $\pi_0$ is informed by the scenarios and the likelihood component by the external data as follows:
    $$\pi_1(\phi; x_1, \ldots, x_k) \propto \pi_0(\phi) \prod_{i=1}^{k} f_i(x_i \mid \phi).$$
  • The aforementioned posterior function $\pi_1$ is then sampled and used as a prior, and the likelihood component is informed by the internal data as follows:
    $$\pi_2(\phi; y_1, \ldots, y_m) \propto \pi_1(\phi) \prod_{i=1}^{m} f_i(y_i \mid \phi).$$
As a result, the first posterior function becomes a prior, and only a prior, limiting the impact of the first two components on the parameters. Another way to describe the process is to present it as a prior transformation. Indeed, the initial parametric prior constructed on tail information, i.e., a non-informative or, more likely, a not-very-informative prior, is transformed into a non-parametric informative prior as the external data are integrated. The sampling step of the algorithm is crucial as, otherwise, the entire interest of this method is lost: the external data and the internal data would be given the same weight. The Bayesian way of reasoning considers probability as a numerical representation of the information available regarding the level of confidence given to an assumption. Applying a frequentist way of reasoning to Bayesian inference and, especially, to this method would lead to the consideration that the previous algorithm could be integrated into one single step, disregarding the fundamental distinction between 'observable' and 'observed' values and, by extension, implying that the choice of the prior has no importance.
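To illustrate the mechanics, the following is a minimal, self-contained sketch of the two-step cascade for a lognormal severity, under several simplifying assumptions: the data sets, prior hyper-parameters and proposal step are illustrative placeholders, the sampler is a random-walk Metropolis variant of the algorithm in Appendix C, and a Gaussian kernel density estimate replaces the Epanechnikov kernel of Section 3.3 for brevity.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def log_lognorm_lik(theta, data):
    """Log-likelihood of lognormal observations; theta = (mu, sigma)."""
    mu, sigma = theta
    if sigma <= 0:
        return -np.inf
    return stats.lognorm(s=sigma, scale=np.exp(mu)).logpdf(data).sum()

def rw_metropolis(log_post, start, n_iter=3000, step=0.1, burn_in=1000):
    """Random-walk Metropolis sampler (symmetric-proposal special case of Appendix C)."""
    theta = np.asarray(start, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        proposal = theta + rng.normal(scale=step, size=theta.size)
        lp_prop = log_post(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:   # symmetric proposal: q terms cancel
            theta, lp = proposal, lp_prop
        chain[i] = theta
    return chain[burn_in:]

# Illustrative stand-ins for the external (ELD) and internal (ILD) loss data.
external = stats.lognorm(s=2.4, scale=np.exp(8.7)).rvs(300, random_state=rng)
internal = stats.lognorm(s=2.5, scale=np.exp(9.3)).rvs(80, random_state=rng)

# Step 0: scenario-informed parametric prior pi_0 (illustrative hyper-parameters,
# following Table 4: Gaussian on mu, gamma on sigma).
def log_prior0(theta):
    mu, sigma = theta
    if sigma <= 0:
        return -np.inf
    return stats.norm(10.6, 1.0).logpdf(mu) + stats.gamma(a=4.0, scale=0.5).logpdf(sigma)

# Step 1: prior = scenarios, likelihood = external data  ->  first posterior pi_1.
post1 = rw_metropolis(lambda t: log_prior0(t) + log_lognorm_lik(t, external),
                      start=(10.6, 2.0))

# The first posterior sample becomes a non-parametric prior. The paper uses an
# Epanechnikov kernel; a Gaussian KDE is used here for brevity.
kde1 = stats.gaussian_kde(post1.T)
def log_prior1(theta):
    density = kde1(np.asarray(theta, dtype=float).reshape(2, 1))[0]
    return np.log(density) if density > 0 else -np.inf

# Step 2: prior = pi_1, likelihood = internal data  ->  final, internal-data-driven posterior.
post2 = rw_metropolis(lambda t: log_prior1(t) + log_lognorm_lik(t, internal),
                      start=post1.mean(axis=0))

print("final posterior means (mu, sigma):", post2.mean(axis=0))
```

The structure mirrors this section: $\pi_0$ comes from the scenarios, the first run yields $\pi_1$ from the external data, and the second run, using the kernel density estimate of $\pi_1$ as a prior, yields the final posterior informed by the internal data.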

3. Carrying Out the Cascade Approach in Practice

This section presents how the methodology may be carried out in practice. First, we introduce the data; second, we detail the approach and the estimation; finally, we present the resulting capital charges.

3.1. The Data Sets

The results presented were obtained using OpBase, which is the external database developed by Aon Ltd (London, UK). It captures the losses corresponding to the operational risk claims and the losses in the public domain (PKM). The scenarios correspond to those that we obtained through a scenario workshop process. In this paper, the scenario analysis used was forward looking in the sense that none of the values considered were already captured in the internal loss data base, and they were different in terms of likelihood and magnitude from those captured in the external loss data base. Many different scenario approaches are compatible with our approach; therefore, this will not be discussed any further in this paper (Guégan and Hassani 2014). The internal loss data were provided by a first-tier European bank. The statistical characteristics of the data and scenarios used are provided in Table 1, Table 2 and Table 3.

3.2. The Priors

This approach applies to any type of traditional statistical distribution. In this paper, internal fraud has been modelled using a lognormal distribution, external fraud/payments using a Weibull distribution, and execution, delivery and process management/other than payments is represented by a mixture model using a lognormal distribution in the body and a generalized Pareto distribution in the tail (Guégan et al. 2011).
Two criteria drive the selection of the prior distributions. Either they are chosen so that their supports are consistent with the acceptable values of the parameters; for instance, the shape parameter of a generalized Pareto distribution cannot be greater than one, as the infinite-mean model otherwise obtained leads to unrealistic capital values, and a beta distribution defined on a finite support has therefore been selected. Alternatively, the priors are selected such that the joint prior is a conjugate distribution, i.e., the posterior distribution belongs to the same family as the prior distribution, with a different set of parameters.
In the general case, the prior π on the severity parameters can be written as follows:
$$\pi(\phi) = \pi(\phi_1, \phi_2) = \pi(\phi_1 \mid \phi_2)\,\pi(\phi_2) = \pi(\phi_2 \mid \phi_1)\,\pi(\phi_1).$$
If the priors are independent, then the previous function becomes as follows:
$$\pi(\phi) = \pi(\phi_1, \phi_2) = \pi(\phi_1)\,\pi(\phi_2).$$
Thereafter, priors are assumed to be independent.
For each distribution's parameters, a set of prior distributions has been selected as follows (Table 4 lists the corresponding prior density functions; a calibration sketch is given after the list):
  • Lognormal—a Gaussian and a gamma distribution,
  • Weibull—gamma distributions for both the scale and the shape,
  • Generalized Pareto distribution—a beta distribution for the shape and a gamma distribution on the scale.
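The following is a hedged sketch of how the prior hyper-parameters could be elicited from the two scenario points of Table 3 in the lognormal (internal fraud) case. Mapping the '1 in 10' and '1 in 40' losses to fixed severity quantile levels, as well as the prior spreads, are assumptions of this illustration rather than the paper's actual calibration.

```python
import numpy as np
from scipy import stats

# Scenario points for internal fraud (cf. Table 3): the "1 in 10" and "1 in 40"
# losses. Mapping these to fixed severity quantile levels is an assumption of
# this sketch, not the paper's calibration.
quantile_levels = np.array([0.90, 0.975])
quantile_values = np.array([6.0e6, 5.2e7])

# Solve the lognormal parameters from two quantiles: ln(q_p) = mu + sigma * z_p.
z = stats.norm.ppf(quantile_levels)
sigma = (np.log(quantile_values[1]) - np.log(quantile_values[0])) / (z[1] - z[0])
mu = np.log(quantile_values[0]) - sigma * z[0]

# Centre the priors of Table 4 on the scenario-implied values: a Gaussian prior
# for mu and a gamma prior for sigma. The prior spreads below are illustrative.
prior_mu = stats.norm(loc=mu, scale=1.0)
cv = 0.30                                   # assumed coefficient of variation
shape = 1.0 / cv**2                         # gamma mean = shape * scale
prior_sigma = stats.gamma(a=shape, scale=sigma / shape)

print(f"scenario-implied mu = {mu:.3f}, sigma = {sigma:.3f}")
print(f"prior means: mu = {prior_mu.mean():.3f}, sigma = {prior_sigma.mean():.3f}")
```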

3.3. Estimation

To obtain the global severity parameters (see note 4) of the selected distributions, three different approaches may be carried out. These are listed below in order of complexity.
  • One can choose conjugate priors for the parameters in the first step of the Bayesian inference estimation. In this case, the (joint) distribution of the posterior parameters is directly known, and it is possible to sample directly from this distribution to recreate the marginal posterior distributions of each severity parameter. These may then be used to generate the posterior empirical densities required to compute a Bayesian point estimator of the parameters. The admissible estimators are the median, the mean and the mode of the posterior distribution. The mode (also called the maximum a posteriori (MAP)) can be seen as the 'most probable estimator' and ultimately coincides with the maximum likelihood estimator (Lehmann and Casella 1998). Despite its good asymptotic properties, finding the mode of an empirical distribution is not a trivial matter and often requires additional techniques and hypotheses (e.g., smoothing). This paper, therefore, uses posterior means as point estimators. To the best of our knowledge, the only conjugate priors for continuous distributions were studied by Shevchenko (2011), for the lognormal severity case. The conjugate approach requires some assumptions that may not be sustainable in practice, particularly for priors that are modelled with 'uncommon' distributions (e.g., inverse-chi-squared). This might lead to difficulties in the step known as 'elicitation', i.e., calibrating the prior hyper-parameters from the chosen scenario values.
  • Another solution is to relax the conjugate prior assumption and use a Markov chain Monte Carlo approach in the first step to sample from the first posterior. One can then use a parametric or non-parametric method to compute the corresponding densities. This enables the posterior function to be evaluated (see Equation (6)). Maximising this function directly gives the MAP estimators of the severity parameters (see above). Even if this method is sufficient for computing values of the global parameters, it misses the purpose of Bayesian inference, which is to provide a distribution as a final result instead of a single value. In addition, it may also suffer from all the drawbacks of an optimisation algorithm (e.g., non-convex functions, sensitivity to starting values, etc.).
  • The final alternative is to use a Markov chain Monte Carlo (MCMC) approach at each step of the aforementioned cascade Bayesian inference. This method is more challenging to implement but is the most powerful one, as it generates the entire distribution of the final severity parameters from which any credibility intervals and/or other statistics may be evaluated.
In this paper, the third alternative has been implemented using the Metropolis–Hastings algorithm. This allows sequential sampling from the two posterior distributions. The algorithm enables us to build the distributions of the parameters contained in the $\phi$ vector. However, these distributions are used as priors in the next step of the cascade and, as mentioned in the second point above, their densities are required from the sample obtained from the MCMC. Two solutions may be carried out: a parametric one and a non-parametric one. The parametric solution requires fitting parametric distributions to the empirical ones, which may introduce bias in the construction. To remain as close as possible to the empirical distributions that are generated and, therefore, to the data, a non-parametric approach based on kernel density estimation has been chosen. Using an Epanechnikov kernel and a cross-validation method to calibrate the bandwidth value, we obtain the non-parametric density representing the parameters' distributions (Appendix A and Appendix B).
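As a sketch of this non-parametric step, the snippet below fits an Epanechnikov kernel density to a stand-in MCMC sample and selects the bandwidth by cross-validation with scikit-learn. Note that GridSearchCV maximises the held-out log-likelihood rather than the least-squares criterion of Appendix B, and the sample and bandwidth grid are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(1)

# Stand-in for an MCMC sample of one severity parameter (e.g., mu).
posterior_sample = rng.normal(loc=9.1, scale=0.4, size=2000).reshape(-1, 1)

# Bandwidth selection by cross-validation for an Epanechnikov kernel.
# GridSearchCV scores held-out log-likelihood, not the least-squares criterion.
grid = GridSearchCV(
    KernelDensity(kernel="epanechnikov"),
    {"bandwidth": np.linspace(0.02, 0.5, 25)},
    cv=5,
)
grid.fit(posterior_sample)
kde = grid.best_estimator_

# The resulting non-parametric density can then serve as the prior of the
# second Bayesian step.
x = np.linspace(7.5, 10.5, 200).reshape(-1, 1)
density = np.exp(kde.score_samples(x))
print("selected bandwidth:", grid.best_params_["bandwidth"])
```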
As a result, each of the different elements of the new posterior distribution has been built. The new prior densities in the second step result from the first posterior. The new likelihood function is informed by another data set, namely the internal loss data, to guarantee an internal-loss-data-driven model.
Remark 1.
The innovation of this paper lies in the cascade Bayesian approach and its use to combine different sources of information. Consequently, little emphasis has been placed on our implementation of the MCMC with the Metropolis–Hastings algorithm, which is a well-known and widespread topic in the literature. The interested reader may, for instance, refer to Gilks et al. (1996).

4. Results

The cascade Bayesian approach enables the updating of the parameters of the distributions considered to model particular risk events. Table 5 presents the parameters estimated from each of the pieces of information used to build the loss distribution function (LDF), i.e., the parameters of the selected distributions. For instance, the lognormal, the Weibull and the generalized Pareto distribution are estimated on each of these pieces of information without considering the information introduced by the others. This results in a large variance in the parameters which, once translated into financial values, may be inconsistent with the observations. For example, considering the lognormal distribution for internal fraud, the median is equal to 40,783.42 when considering only the scenarios, 6047.53 using the external data, and 11,472.33 considering the internal data. The parameter-implied variance of the losses is also quite different across the different elements. The variation is similar for the Weibull distribution that models external fraud. It may be even worse in the case of the GPD, as the risk measures are extremely sensitive to the shape parameter.
Table 6 shows the evolution of the parameters as they are updated with respect to the different pieces of information.
  • Scenarios’ severity is derived from the calibration of the prior distributions with the scenario values. The theoretical means obtained from the calibrated priors provide scenario severity estimates.
  • Intermediate severity refers to the estimation of the severity of the first obtained posterior, i.e., the mean of the posterior distribution obtained from scenario values updated with external loss data.
  • Similarly, final severity represents the severity estimation of the second and last posterior distribution, which includes scenarios, external and internal loss data. A 95% Bayesian confidence interval (also known as a credible interval) derived from this final posterior distribution is also provided. It is worth noticing that this interval, which is formally defined as containing 95% of the distribution mass, is not unique and is chosen here as the narrowest possible interval. It is, therefore, not necessarily symmetric around the posterior mean estimator.
Implementation of the cascade approach results in final parameters located within the range of values obtained from the different elements taken independently. As the dispersion of the information increases, the variance of the theoretical losses tends to increase, as does the theoretical kurtosis. For example, the evolution of the lognormal parameters is significant, as the introduction of data belonging to the body (Table 5, first line) tends to decrease the mean (the μ parameter is the mean of the log-transform of the data). The parameters of the body part of the risk category for which a GPD is used to model the losses in the tail are provided in Table 7. However, naturally, the dispersion increases and, therefore, the variance tends to increase (the σ parameter represents the standard deviation of the log-transform of the data) (Table 6, first line). The impact of the severity on the frequency is given in Table 8. The conditional frequencies naturally increase with respect to the severity distributions.
The cascade also results in conservative capital charges compared with the VaR (Appendix D) obtained from the different elements taken independently, as the combined VaR tends to be a weighted average of the VaR obtained from each part. The weights are automatically evaluated during the cascade and are directly related to the quantity and the quality of the information integrated (Table 9).
In practice, when implementing the solution presented in this paper, the following remarks should be carefully considered:
Remark 2.
The scenarios are defined by the bank. As mentioned previously, all elements eventually have to be combined; although internal data and scenarios are collected by the bank, external data are not. Both external data and scenarios belong to the domain of 'observable' losses and therefore to the domain of what may happen (either in magnitude, in frequency or in type of incident). Consequently, in the Bayesian inference philosophy, these should inform the prior functions. On the other hand, internal data belong to the domain of what has happened and therefore should inform what we referred to as the 'likelihood component'.
Remark 3.
In our case, the scenarios represent events located far in the right tail and have been collected at specific percentiles; therefore, if the internal data are not numerous enough, combining them directly in the posterior may lead to very large capital charges. On the contrary, if the internal data set is very large, the scenarios would have a very limited impact on the capital charge.
Remark 4.
In this paper, we started from the tail, i.e., the most conservative part of the distribution, using the few largest events to inform the prior. Then, the external data are integrated, as these are closer to the scenarios. The objective is to avoid too large a gap between the scenarios and the internal data. Indeed, we observed that a large gap between the element informing the prior and the elements informing the 'likelihood component' may prevent the MCMC algorithm from converging.
Remark 5.
It is worth mentioning that some prior distributions have been chosen to constrain the range of parameter values. For instance, considering the generalised Pareto distribution, the use of the beta distribution to characterise the shape parameter ensures that this parameter mechanically lies between 0 and 1, and this approach therefore prevents infinite-mean models.
As a final step, these results may be illustrated for the whole cascade Bayesian estimation in the simple lognormal case. Figure 2 shows the final posterior distribution obtained for μ and σ. This is used to derive the 95% Bayesian credible interval given in Table 6. We also show the convergence of the posterior mean as a function of the number of MCMC iterations for μ and σ. One can see that the values stabilise after 100 to 500 iterations in this example. In the general case, we chose to sample 3000 values and discard the first 1000 MCMC-generated values (burn-in period) to ensure the stability of our estimates.
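A minimal way to reproduce this convergence check is to track the running posterior mean of a chain and discard an initial burn-in once it has stabilised; the chain below is a synthetic stand-in for an actual MCMC output.

```python
import numpy as np

def running_mean(chain):
    """Cumulative mean of an MCMC chain, used as a simple convergence check."""
    chain = np.asarray(chain, dtype=float)
    return np.cumsum(chain) / np.arange(1, len(chain) + 1)

# Synthetic stand-in for a 3000-iteration chain of the mu parameter.
rng = np.random.default_rng(5)
chain = rng.normal(8.73, 0.5, size=3000)

burn_in = 1000                      # discarded iterations, as in the text
print("running mean (last values):", running_mean(chain)[-3:])
print("posterior mean after burn-in:", chain[burn_in:].mean())
```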

5. Conclusions

This paper presents an intuitive approach to building the loss distribution function using the Bayesian inference framework and combining the different regulatory components. This approach overcomes the problem related to the non-homogeneity of the data.
This approach enables the controlled integration of the different elements through the Bayesian inference. The prior functions have the same role as the penalisation functions on the parameters and, therefore, behave as constraints during the estimation procedure. This results in capital charges being driven by internal data (as shown in Figure 3), which are not dramatically influenced by external data or extreme scenarios.
Hence, with our approach, the capital requirement calculation is inherently related to the risk profile of the target financial institution and, therefore, provides the senior management with greater assurance of the validity of the results.
As presented in the previous sections of this paper, best practices, i.e., practices approved by the regulator for AMA capital calculations, are highly heterogeneous. It is therefore necessary to reinforce the fact that there is no panacea; there are only methodologies or modelling strategies that are more appropriate than others in a particular situation. As such, this methodology can be of real interest in an environment similar to the one presented in this paper.
The next step, evaluating a financial institution's global operational risk capital requirement, would involve the construction of the multivariate distribution function linking, through a copula, the different loss distributions that characterise the various event types. In this case, the approach developed by Guegan and Hassani (2013a) may be of interest.

Author Contributions

B.H. and A.R. conceived, designed and performed the experiments, analysed the data, contributed reagents/materials/analysis tools and wrote the paper.

Acknowledgments

This work was achieved through the Laboratory of Excellence on Financial Regulation (LabEx ReFi) supported by PRES heSam under the reference ANR-10-LABX-0095. It benefited from French government support managed by the National Research Agency (ANR) within the project Investissements d'Avenir Paris Nouveaux Mondes (Investments for the Future, Paris New Worlds) under the reference ANR-11-IDEX-0006-02.

Conflicts of Interest

The authors declare no conflict of interest. The opinions, ideas and approaches expressed or presented are those of the authors and do not necessarily reflect any past or future Capgemini’s or Aon’s positions. As a result, neither Capgemini nor Aon can be held responsible for them.

Appendix A. Epanechnikov Kernel

$$K(t) = \begin{cases} \dfrac{3}{4\sqrt{5}}\left(1 - \dfrac{t^{2}}{5}\right), & |t| < \sqrt{5}, \\ 0, & \text{otherwise.} \end{cases}$$
The efficiency is equal to 1, and the canonical bandwidth is equal to $15^{1/5} \approx 1.7188$.

Appendix B. Least Square Cross Validation

One of the best-known methods to estimate the kernel bandwidth is least squares cross-validation (Rudemo (1982) and Bowman (1984)). The structural idea is that the mean integrated squared error (MISE) may be written as
$$\int \left(\hat{f}(x;h) - f(x)\right)^2 dx = \int \hat{f}(x;h)^2\, dx - 2\int \hat{f}(x;h) f(x)\, dx + \int f(x)^2\, dx.$$
Obviously, the last term of the equation does not depend on the bandwidth $h$; therefore, the same $h$ minimises the full MISE and the quantity
$$\int \hat{f}(x;h)^2\, dx - 2\int \hat{f}(x;h) f(x)\, dx.$$
An unbiased estimator of Equation (A2) is given by
$$h_{LSCV} = \arg\min_h \left[ \int \hat{f}(x;h)^2\, dx - \frac{2}{n}\sum_{i=1}^{n} \hat{f}_{-i}(X_i;h) \right],$$
where $\hat{f}_{-i}(x)$ is the density estimate constructed from all the data points except $X_i$:
$$\hat{f}_{-i}(x) = \frac{1}{h(n-1)} \sum_{j \neq i} K\!\left(\frac{x - X_j}{h}\right).$$
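Below is a sketch that implements this criterion directly: the integral term is computed numerically and the leave-one-out term follows the estimator defined above, using the Epanechnikov kernel of Appendix A. The sample and the bandwidth search interval are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def epanechnikov(t):
    """Epanechnikov kernel on [-sqrt(5), sqrt(5)] (Appendix A)."""
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) < np.sqrt(5), 3.0 / (4.0 * np.sqrt(5)) * (1.0 - t**2 / 5.0), 0.0)

def kde(x, data, h):
    """Kernel density estimate evaluated at the points x."""
    return epanechnikov((x - data[:, None]) / h).mean(axis=0) / h

def lscv(h, data):
    """Least-squares cross-validation criterion (cf. Appendix B)."""
    n = len(data)
    lo, hi = data.min() - 5 * h, data.max() + 5 * h
    integral, _ = quad(lambda x: kde(np.array([x]), data, h)[0] ** 2, lo, hi, limit=200)
    # Leave-one-out term: (2/n) * sum_i f_hat_{-i}(X_i; h).
    pairwise = epanechnikov((data[:, None] - data[None, :]) / h)
    loo = (pairwise.sum(axis=1) - epanechnikov(0.0)) / (h * (n - 1))
    return integral - 2.0 * loo.mean()

# Illustrative sample standing in for an MCMC draw of a severity parameter.
rng = np.random.default_rng(2)
sample = rng.normal(2.5, 0.3, size=500)

result = minimize_scalar(lscv, bounds=(0.02, 1.0), args=(sample,), method="bounded")
print("LSCV bandwidth:", result.x)
```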

Appendix C. The Metropolis–Hastings Algorithm

The Metropolis–Hastings algorithm is an almost universal algorithm used to generate a Markov chain with a stationary distribution $\pi(\phi|x)$. It was developed by Metropolis et al. in mechanical physics and generalised by Hastings in a statistical setting. It can be applied to a wide variety of problems, since it only requires knowledge of the distribution of interest up to a constant. Given a density $\pi(\phi|x)$, known up to a normalization constant, and a conditional density $q(\phi^{*}|\phi)$, the method generates the chain $\phi^{(1)}, \phi^{(2)}, \ldots$ using the following algorithm:
  • Initialise $\phi^{(l=0)}$ with any value within the support of $\pi(\phi|x)$.
  • For $l = 1, \ldots, L$:
    (a) Set $\phi^{(l)} = \phi^{(l-1)}$.
    (b) Generate a proposal $\phi^{*}$ from $q(\phi^{*}|\phi^{(l)})$.
    (c) Accept the proposal with the acceptance probability
    $$p(\phi^{(l)}, \phi^{*}) = \min\left\{1, \frac{\pi(\phi^{*}|x)\, q(\phi^{(l)}|\phi^{*})}{\pi(\phi^{(l)}|x)\, q(\phi^{*}|\phi^{(l)})}\right\},$$
    i.e., simulate $U$ from the uniform distribution $U(0,1)$ and set $\phi^{(l)} = \phi^{*}$ if $U < p(\phi^{(l)}, \phi^{*})$. Note that the normalization constant of the posterior does not contribute here.
  • Increment $l$ (i.e., $l = l + 1$) and return to step 2.
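A compact rendering of these steps, assuming a generic log-target and an arbitrary (not necessarily symmetric) proposal density, might look as follows; the Gaussian target and random-walk proposal in the usage example are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def metropolis_hastings(log_target, proposal_rvs, proposal_logpdf, phi0, n_iter=3000):
    """Generic Metropolis-Hastings sampler following the steps of Appendix C.

    log_target      : log pi(phi | x), known up to an additive constant
    proposal_rvs    : draws phi* ~ q(. | phi)
    proposal_logpdf : evaluates log q(phi_new | phi_old)
    """
    chain = [np.asarray(phi0, dtype=float)]
    for _ in range(n_iter):
        phi = chain[-1]
        phi_star = proposal_rvs(phi)
        # log acceptance ratio: [pi(phi*) q(phi|phi*)] / [pi(phi) q(phi*|phi)]
        log_alpha = (log_target(phi_star) + proposal_logpdf(phi, phi_star)
                     - log_target(phi) - proposal_logpdf(phi_star, phi))
        if np.log(rng.uniform()) < min(0.0, log_alpha):
            chain.append(phi_star)
        else:
            chain.append(phi)
    return np.array(chain)

# Illustrative use: sample a Gaussian target with a random-walk proposal.
log_target = stats.norm(2.0, 0.5).logpdf
step = 0.4
sample = metropolis_hastings(
    log_target,
    proposal_rvs=lambda phi: phi + rng.normal(scale=step),
    proposal_logpdf=lambda new, old: stats.norm(old, step).logpdf(new),
    phi0=0.0,
)
print("posterior mean estimate:", sample[1000:].mean())
```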

Appendix D. Risk Measure Evaluation

For financial institutions, the capital requirement pertaining to operational risks is related to a VaR at 99.9%. It may be defined as follows: given a confidence level $\alpha \in [0,1]$, the VaR associated with a random variable $X$ is given by the smallest number $x$ such that the probability that $X$ exceeds $x$ is not larger than $(1-\alpha)$:
$$VaR_{(1-\alpha)\%} = \inf\{ x \in \mathbb{R} : P(X > x) \leq (1-\alpha) \}.$$
In addition, we can compare these results to those obtained based on the expected shortfall (ES), defined as follows:
for a given $\alpha$ in $[0,1]$, $\eta$ the $VaR_{(1-\alpha)\%}$, and $X$ a random variable that represents losses during a prespecified period (such as a day, a week, or some other chosen time period), then
$$ES_{(1-\alpha)\%} = E(X \mid X > \eta).$$
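For a simulated annual loss sample, both measures can be estimated empirically as in the sketch below; the lognormal sample stands in for the aggregate losses produced by the LDA simulation.

```python
import numpy as np

def var_es(losses, alpha=0.999):
    """Empirical VaR and expected shortfall at confidence level alpha."""
    losses = np.sort(np.asarray(losses, dtype=float))
    var = np.quantile(losses, alpha)
    tail = losses[losses > var]
    es = tail.mean() if tail.size else var
    return var, es

# Illustrative aggregate annual losses (e.g., the output of a Monte Carlo LDA run).
rng = np.random.default_rng(4)
losses = rng.lognormal(mean=12.0, sigma=1.5, size=100_000)

var999, es999 = var_es(losses, alpha=0.999)
print(f"VaR 99.9%: {var999:,.0f}   ES 99.9%: {es999:,.0f}")
```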

References

  1. Álvarez, Gene. 2001. Operational Risk Quantification Mathematical Solutions for Analyzing Loss Data. Available online: https://www.bis.org/bcbs/ca/galva.pdf (accessed on 26 April 2018).
  2. Bayes, Thomas. 1763. An Essay towards Solving a Problem in the Doctrine of Chances. By the Late Rev. Mr. Bayes, F. R. S. Communicated by Mr. Price, in a Letter to John Canton, A. M. F. R. S. Philosophical Transactions (1683–1775) 53: 370–418.
  3. BCBS. 2001. Working Paper on the Regulatory Treatment of Operational Risk. Basel: Bank for International Settlements.
  4. BCBS. 2010. Basel III: A Global Regulatory Framework for More Resilient Banks and Banking Systems. Basel: Bank for International Settlements.
  5. Berger, James O. 1985. Statistical Decision Theory and Bayesian Analysis. New York: Springer.
  6. Böcker, Klaus, and Claudia Klüppelberg. 2010. Operational VaR: A Closed-Form Approximation. Risk 18: 90–93.
  7. Bowman, Adrian W. 1984. An alternative method of cross-validation for the smoothing of density estimates. Biometrika 71: 353–60.
  8. Box, George E. P., and George C. Tiao. 1992. Bayesian Inference in Statistical Analysis. New York, Chichester and Brisbane: Wiley Classics Library, John Wiley & Sons.
  9. Chernobai, Anna S., Svetlozar T. Rachev, and Frank J. Fabozzi. 2007. Operational Risk: A Guide to Basel II Capital Requirements, Models, and Analysis. New York: John Wiley & Sons.
  10. Cowell, Robert G., Richard J. Verrall, and Y. K. Yoon. 2007. Modeling operational risk with Bayesian networks. Journal of Risk and Insurance 74: 795–827.
  11. Cruz, Marcelo G. 2004. Operational Risk Modelling and Analysis. London: Risk Books.
  12. Cruz, Marcelo G., Gareth W. Peters, and Pavel V. Shevchenko. 2014. Fundamental Aspects of Operational Risk and Insurance Analytics: A Handbook of Operational Risk. Hoboken: John Wiley & Sons.
  13. Frachot, Antoine, Pierre Georges, and Thierry Roncalli. 2001. Loss Distribution Approach for Operational Risk. Working paper. Paris: GRO, Crédit Lyonnais.
  14. Gilks, Walter R., Sylvia Richardson, and David Spiegelhalter. 1996. Markov Chain Monte Carlo in Practice. London: Chapman & Hall/CRC.
  15. Guegan, Dominique, and Bertrand K. Hassani. 2013a. Multivariate VaRs for operational risk capital computation: A vine structure approach. International Journal of Risk Assessment and Management 17: 148–70.
  16. Guégan, Dominique, and Bertrand Hassani. 2013b. Using a time series approach to correct serial correlation in operational risk capital calculation. The Journal of Operational Risk 8: 31.
  17. Guégan, Dominique, and Bertrand Hassani. 2014. A mathematical resurgence of risk management: An extreme modeling of expert opinions. Frontiers in Finance & Economics 11: 25–45.
  18. Guégan, Dominique, Bertrand K. Hassani, and Cédric Naud. 2011. An efficient threshold choice for the computation of operational risk capital. The Journal of Operational Risk 6: 3–19.
  19. Hassani, Bertrand K. 2016. Scenario Analysis in Risk Management. Cham: Springer.
  20. Lehmann, Erich L., and George Casella. 1998. Theory of Point Estimation, 2nd ed. Berlin: Springer.
  21. Leone, Paola, and Pasqualina Porretta. 2018. Operational risk management: Regulatory framework and operational impact. In Measuring and Managing Operational Risk. Cham: Palgrave Macmillan, pp. 25–93.
  22. Mizgier, Kamil J., and Maximilian Wimmer. 2018. Incorporating single and multiple losses in operational risk: A multi-period perspective. Journal of the Operational Research Society 69: 358–71.
  23. Parent, Eric, and Jacques Bernier. 2007. Le Raisonnement Bayésien, Modélisation et Inférence. Paris: Springer.
  24. Peters, Gareth William, and Scott Sisson. 2006. Bayesian inference, Monte Carlo sampling and operational risk. Journal of Operational Risk 1.
  25. Peters, Gareth W., and Pavel V. Shevchenko. 2015. Advances in Heavy Tailed Risk Modeling: A Handbook of Operational Risk. Hoboken: John Wiley & Sons.
  26. Rudemo, Mats. 1982. Empirical choice of histograms and kernel density estimators. Scandinavian Journal of Statistics 9: 65–78.
  27. Shevchenko, Pavel V. 2011. Modelling Operational Risk Using Bayesian Inference. Berlin: Springer.
  28. Valdés, Rosa María Arnaldo, V. Fernando Gómez Comendador, Luis Perez Sanz, and Alvaro Rodriguez Sanz. 2018. Prediction of aircraft safety incidents using Bayesian inference and hierarchical structures. Safety Science 104: 216–30.
1. External loss data usually overlap with both internal data and scenario analysis and are therefore represented as a link between the two previous components.
2. It is generally accepted that capital calculations are not particularly sensitive to the choice of the frequency distribution (Álvarez 2001).
3. These parameters are omitted here for simplicity of notation; they are the parameters of the densities used as prior distributions (e.g., a Gamma or Beta distribution).
4. These take into account the three different data sources.
Figure 1. Combination of internal loss data, external loss data and scenario analysis. The representation may be slightly different, and the components may overlap with each other depending on the risk profile or the quantity of data available.
Figure 2. Posterior distributions and convergence of the estimations obtained on internal fraud (Lognormal case). (left) Obtained posterior distribution for each severity parameter ( μ and σ ); (right) convergence of the severity estimation as the posterior mean in the Markov chain Monte Carlo (MCMC) sampling.
Figure 3. Comparison of capital charge value obtained from the three components on a standalone basis and in combination.
Table 1. The table presents the statistical moments of the internal loss data used in this paper, as well as other statistics such as the minimum value, the maximum value and the number of data points available. NB stands for the number of data points used. Level 2 'Other' gathers 'Execution, Delivery and Process Management' losses other than 'Financial Instruments' and 'Payments'.
Level 1 | Level 2 | NB Used | Min | Median | Mean | Max | sd | Skewness | Kurtosis
Internal Fraud | Global | 665 | 4137 | 28,165 | 261,475 | 46,779,130 | 1.9 × 10^6 | 21.4 | 407.1
External Fraud | Payments | 1567 | 4091 | 12,358 | 36,133 | 1,925,000 | 9.2 × 10^4 | 11.7 | 185.2
Execution, Delivery & Process Management | Other | 3602 | 4084 | 10,789 | 96,620 | 30,435,400 | 9.5 × 10^5 | 24.8 | 653.8
Table 2. The table presents the statistical moments of the external loss data used in this paper, as well as other statistics such as the minimum value, the maximum value and the number of data available. NB stands for number of data points used. Level 2 ’Other’ gathers ’Execution, Delivery and Process Management’ losses other than ’Financial Instruments’ and ’Payments’.
Level 1 | Level 2 | NB Used | Min | Median | Mean | Max | sd | Skewness | Kurtosis
Internal Fraud | Global | 2956 | 20,001 | 88,691 | 697,005 | 130,715,800 | 4.5 × 10^6 | 17.6 | 387.6
External Fraud | Payments | 1085 | 20,006 | 36,464 | 326,127 | 106,772,200 | 4.3 × 10^6 | 20.8 | 461.2
Execution, Delivery & Process Management | Other | 31,126 | 20,004 | 47,428 | 271,974 | 585,000,000 | 4.1 × 10^6 | 107.2 | 14,068.6
Table 3. The table presents the scenario values used in this paper: 1 in 10 and 1 in 40, respectively, denote the biggest loss that may occur in the next 10 and 40 years. Level 2 ’Other’ gathers ’Execution, Delivery and Process Management’ losses other than ’Financial Instruments’ and ’Payments’.
Level 1 | Level 2 | 1 in 10 | 1 in 40
Internal Fraud | Global | 6.0 × 10^6 | 5.2 × 10^7
External Fraud | Payments | 1.5 × 10^6 | 2.5 × 10^7
Execution, Delivery & Process Management | Other | 2.5 × 10^7 | 5.0 × 10^7
Table 4. This table presents the following priors that were used to parametrise: 1—the Lognormal distribution, i.e., the Gaussian distribution for μ and the gamma distribution for σ ; 2—the Weibull distribution, i.e., two gamma distributions for the shape and the scale; 3—the generalized Pareto distribution (GPD), i.e., a beta distribution for the shape and a gamma distribution for the scale. These are informed by the scenarios.
Label | Density | Parameters
Beta | $\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, x^{\alpha-1}(1-x)^{\beta-1}$ | α = shape, β = scale
Gamma | $\frac{1}{\Gamma(\alpha)\beta^{\alpha}}\, x^{\alpha-1} e^{-x/\beta}$ | α = shape, β = scale
Gaussian | $\frac{1}{b\sqrt{2\pi}}\, e^{-\frac{(x-a)^2}{2b^2}}$ | a = location, b = standard deviation
Table 5. This table presents the standalone parameters estimated for each component. Internal fraud severities are modelled using a lognormal distribution, external fraud/payments using a Weibull and execution, delivery, and product management considering a mixture model combining a lognormal distribution in the body and a generalized Pareto distribution in the tail. The first column presents the initial parameters estimated for the scenarios. φ1 and φ2 represent the severity parameters of the chosen distribution, i.e., (resp.) μ and σ for the lognormal and shape and scale for the Weibull or GPD distribution. The second column shows the values obtained for the external data. The third column shows the parameters obtained for the internal data.
Label | Scenarios: φ1 | φ2 | External data: φ1 | φ2 | Size | Internal data: φ1 | φ2 | Size
Internal Fraud (Global)—Lognormal | 10.616031 | 2.024592 | 8.707405 | 2.693001 | 2956 | 9.347693 | 2.485755 | 665
External Fraud (Payments)—Weibull | 0.37642 | 4.6951 × 10^4 | 1.644924 | 5.4293 × 10^4 | 1085 | 0.2924 | 42,976.922586 | 1567
Execution, Delivery, and Product Management (Financial Instruments)—GPD | 0.4705893 | 1.5309 × 10^6 | 0.732526 | 1.1532 × 10^6 | 31,126 | 0.82231 | 7.0388 × 10^5 | 3602
Table 6. The table presents the evolution of the parameters obtained by carrying out the cascade Bayesian approach. Internal fraud severities are modelled using a Lognormal distribution, external fraud/payments using a Weibull and execution, delivery, and product management considering a mixture model combining a Lognormal distribution in the body and a generalized Pareto distribution in the tail. The first column presents the initial parameters estimated from the scenarios. ϕ 1 and ϕ 2 represent the severity parameters of the chosen distribution, i.e., (resp.) μ and σ for the Lognormal and shape and scale for the Weibull or GPD distribution. The second column shows the values obtained after the first refinement, i.e., after the incorporation of the external data. The third column shows the final parameters following the second refinement, i.e., after the integration of the internal data. The figures in brackets represent a 95% Bayesian credible interval obtained from the final posterior distribution.
Label | Scenario severity from priors: φ1 | φ2 | Intermediate severity (scenarios + external data): φ1 | φ2 | Final severity (scenarios + external + internal data): φ1 | φ2
Internal Fraud (Global)—Lognormal | 10.616031 | 2.024592 | 9.123964 | 2.539844 | 8.727855 | 2.684253
(95% Bayesian credibility interval) | - | - | - | - | [7.98176; 9.95824] | [1.23225; 4.42314]
External Fraud (Payments)—Weibull | 0.37614 | 69,420 | 0.38873 | 49,942 | 0.39910 | 46,622
(95% Bayesian credibility interval) | - | - | - | - | [0.23127; 0.50673] | [37,632; 58,981]
Execution, Delivery, and Product Management (Financial Instruments)—GPD | 0.4705893 | 1.5309 × 10^6 | 0.4452626 | 1.5109 × 10^6 | 0.6021 | 6.05 × 10^5
(95% Bayesian credibility interval) | - | - | - | - | [0.3313; 0.9089] | [4.52 × 10^5; 8.30 × 10^5]
Table 7. Body distribution (Lognormal) parameters for execution, delivery, and product management/financial instruments.
Event Type | μ | σ
Execution, Delivery, and Product Management (Financial Instruments)—Lognormal body | 8.092849 | 1.882122
Table 8. Frequency distribution parameters used in the capital charge calculations. GPD: generalized Pareto distribution.
Event Type | Initial λ | Corrected λ
Internal Fraud (Global)—Lognormal | 133 | 261.5885
External Fraud (Payments)—Weibull | 313.4 | 485.0353

Event Type | Initial λ body | Initial λ tail | Initial global λ | Corrected global λ
Execution, Delivery, and Product Management (Financial Instruments)—GPD | 144.08 | 9.2 | 153.28 | 1882.327
Table 9. This table presents the standalone value-at-risk (VaR) for each of the three components, as well as the VaR and the expected shortfall (ES) computed by combining the three elements with cascade Bayesian integration for each of the three different event types. Internal fraud severities are modelled using a lognormal distribution, external fraud/payments using a Weibull and execution, delivery, and product management considering a mixture model combining a lognormal distribution in the body and a generalized Pareto distribution in the tail. The parameters of the distributions used to compute these values are shown in Table 5 and Table 6.
Label | Scenarios VaR | External Data VaR | Internal Data VaR | Combination VaR | Combination ES
Internal Fraud (Global) | 843,445,037 | 855,464,158 | 1,143,579,396 | 1,106,692,211 | 2,470,332,812
External Fraud (Payments) | 128,243,367 | 13,664,057 | 3,945,116 | 119,637,592 | 126,935,364
Execution, Delivery, and Product Management (Financial Instruments) | 122,544,708 | 583,519,888 | 137,456,938 | 213,982,300 | 281,157,384
