Article

Recent Regulation in Credit Risk Management: A Statistical Framework

Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton, AB T6G 2G1, Canada
* Author to whom correspondence should be addressed.
Risks 2019, 7(2), 40; https://doi.org/10.3390/risks7020040
Submission received: 7 March 2019 / Revised: 8 April 2019 / Accepted: 10 April 2019 / Published: 14 April 2019
(This article belongs to the Special Issue Financial Risks and Regulation)

Abstract

A recently introduced accounting standard, namely the International Financial Reporting Standard 9, requires banks to build provisions based on forward-looking expected loss models. When there is a significant increase in credit risk of a loan, additional provisions must be charged to the income statement. Banks need to set, for each loan, a threshold defining what constitutes such a significant increase in credit risk. A low threshold allows banks to recognize credit risk early, but leads to income volatility. We introduce a statistical framework to model this trade-off between early recognition of credit risk and avoidance of excessive income volatility. We analyze the resulting optimization problem for different models, relate it to the banking stress test of the European Union, and illustrate it using default data from Standard and Poor’s.

1. Introduction

The global financial crisis of 2007–2008 highlighted the delayed recognition of credit losses as a weakness of the accounting standards at that time. As a result, the International Accounting Standards Board introduced International Financial Reporting Standard (IFRS) 9, which is an accounting standard that requires banks to recognize increased credit risk of loans early and build additional provisions for such loans. Because loss provisions affect banks’ income statements, it is critical to have accurate estimations of these values for each reporting period. However, the building of provisions and their subsequent release for reduced credit risk leads to undesired volatility in the income statements of banks. In this paper, we introduce and analyze a framework to model this trade-off between early recognition of increased credit risk and avoidance of excessive income volatility.
IFRS 9, mandatory since 1 January 2018, requires banks to consider forward-looking expected loss impairment models. Randall and Thompson (2017) outlined that forward-looking information regarding credit risk includes qualitative information, such as the current economic environment, along with both statistical models and non-statistical quantitative information. By contrast, the previously used backward-looking approach required a trigger event to occur prior to any credit losses being reported. Qualitative information to complement methods from statistics and operational research has been used in credit risk modelling beyond applications related to IFRS 9; see Angilella and Mazzù (2017); Crook et al. (2007); Kumar and Ravi (2007). Regardless of the specific information considered, there is still a subjective aspect involved in a financial institution’s assessment of credit risk for loans, but it is crucial for each bank to have a sound underlying modelling framework.
Under IFRS 9, banks are required to estimate the expected credit loss (ECL) for each loan and build corresponding provisions. Compared to charging only highly likely or even realized credit losses, this procedure allows for early recognition of credit risk, well before an actual default occurs. The new standards require various ECL measures for loans classified into three different buckets of progressively higher default risk, compared to the initial default risk when the loan is issued; see Cohen and Edwards (2017); Maggi et al. (2017). To avoid misunderstandings, these IFRS 9 buckets are not to be confused with the typical rating buckets in which banks group obligors with similar credit ratings, within their diversified loan portfolio. Table 1 provides a summary of the structure of the IFRS 9 buckets. ECL is estimated over a one-year period for loans in bucket 1, when the obligor generally possesses “good credit” relative to their initial credit quality, so it is considered to be a performing loan in this case.
The reclassification of a loan takes place when deteriorating credit quality of an obligor is observed, based on predefined warning signs. These warning signs involve an increased probability of default, in accordance with the current forecast of the credit environment. Therefore, IFRS 9 bucket 2 represents underperforming loans that have experienced a significant increase in credit risk since initial recognition. Furthermore, impaired loans, which result in the bank actually incurring credit losses, are classified in bucket 3.
An important feature of this classification is that, for underperforming loans (bucket 2), lifetime ECL is estimated over the remaining term to maturity, rather than one-year ECL. Clearly, the ECL of a loan in bucket 2, especially in the early stages of the period of the loan, would typically be much greater than the ECL of a loan in bucket 1. It must be noted that, for loans in bucket 3, ECL is estimated based on the bank’s exposure along with the estimated recovery values. The subjective nature of this IFRS 9 classification is critical when analyzing, from the perspective of a financial institution, what a significant increase in credit risk actually is, as this is not specified in IFRS 9.
It is quite challenging to decide on the threshold for precisely defining a significant increase in credit risk, for numerous reasons. If an extremely conservative framework is implemented by a bank, its obligors would only need to experience minor indicators of credit downturn to warrant a reclassification from bucket 1 to bucket 2. In this instance, the bank’s ECL calculations are prone to volatility, as loans could transfer from one-year ECL to lifetime ECL frequently. However, setting a threshold that is very lenient towards obligor’s credit quality would indeed create more stable ECL calculations for the bank, but at the same time lead to late recognition of credit risk with respect to these loans. Finding a balance between early recognition of credit risk and income volatility, in this sense, is an interesting problem that is at the heart of this paper and directly linked to the new loss impairment standards of IFRS 9. Indeed, a survey by the European Banking Authority (2017) found that “72% of the banks included in the survey anticipate that IFRS 9 impairment requirements will increase volatility in profit or loss…mainly due to the “cliff effect” when moving exposures from stage 1 to stage 2 (from 12-month ECL to lifetime ECL)…”
Our setting is based on the structural model of default by Merton (1974). It essentially states that an obligor can only meet its financial obligations if the value of their assets exceeds that of the liabilities at the time of maturity, for a single debt obligation. While the actual nature of an individual obligor’s debt is considerably more complex, with default possible at several times, the preceding assumptions provide a sound, widely used starting point for credit risk modelling. There have been extensions in terms of asset value modelling (see Bluhm et al. (2010); Bohn and Stein (2009); McNeil et al. (2015) for an overview), but the Merton model remains the “prototype” of many credit risk models, such as Bluhm and Overbeck (2003); Frei and Wunsch (2018); Gordy (2000). In particular, the Merton model is at the basis of the capital requirement described by the Basel Committee on Banking Supervision (2005), whose framework Miu and Ozdemir (2017) suggest employing for IFRS 9 purposes. Such models have also been applied for stress testing; see, for example, Miu and Ozdemir (2009); Simons and Rolwes (2009); Yang and Du (2015).
Since Merton (1974) models the underlying asset value of the obligor as a stochastic process over time, we can formulate an optimization problem that considers the impact on the income statement at different reporting moments. The optimization problem consists of two penalization terms: (1) a term to penalize for failing to early recognize an eventual default, and (2) a term that penalizes for income volatility. We weight the two terms by a tuning parameter, which determines the relative importance of low income volatility compared to early credit recognition. While IFRS 9 does not define exactly what a significant increase in credit risk is, and thus leaves flexibility in choosing our tuning parameter, we apply the stress test framework of the European Banking Authority (2018) to obtain a suitable estimation for this parameter. The optimization problem is involved, but we solve it analytically for certain distributions of the asset value process. For the classical setting of Merton (1974), where the asset value process is modelled as a Brownian motion, we recast it as an optimization problem that we solve efficiently by numerical routines and illustrate using default data from Standard and Poor’s (2018).
The introduction of IFRS 9 has implications on credit risk measurement and management, which go far beyond the question of building suitable provisions for ECL. In terms of a bank’s risk appetite framework, which assesses and determines the level of risk the bank is prepared to accept (see Stulz (2015)), banks need to take effects of ECL provision building and particularly its ramifications on income volatility into account. Thus, not only the default probability of an obligor at a given moment in time matters, but it also becomes crucial how volatile and sensitive the obligor’s credit risk is to changes in the economic environment. Our proposed framework models both of these aspects and allows for a quantification of the associated risk. As a consequence of IFRS 9, banks will adjust their product offerings and business strategies, for example, by reducing lending to clients and sectors that may be too risky or volatile under IFRS 9.
As required by IFRS 9, we compute credit provisions in a forward-looking way. Such forward-looking models are related to scenario analysis; see Miu and Ozdemir (2017); Skoglund and Chen (2016). The building of provisions for ECL under IFRS 9 also has important implications for banks’ capital assessments: Principle 11 of the Basel Committee on Banking Supervision (2015) requires banking supervisors to consider banks’ credit risk practices and recognize their ECL processes and models when assessing capital adequacy. Moreover, IFRS 9 incentivizes banks to reform their credit management policies and provide additional training to their relationship managers, so that they can help reduce the risk of credit deterioration and propose mitigation actions to prevent loans from becoming underperforming and falling into bucket 2; compare also Maggi et al. (2017).
The remainder of this paper is organized as follows. Section 2 introduces a credit risk model that captures the trade-off between early recognition of credit risk and income volatility. The resulting optimization problem is analyzed when the latent asset return has either a continuous distribution (Section 3) or discrete distribution (Section 4). In Section 5, we compare our model with the stress test framework of the European Banking Authority (2018) and apply it to default data from Standard and Poor’s (2018). Section 6 provides the conclusions.

2. Modelling Income Volatility and Early Recognition of Credit Risk

When considering an obligor being issued a loan from a bank, we can model the trade-off between income volatility and the early recognition of credit risk. The net asset value of the obligor at time $t \in [0, T]$ is represented by $A_t$ for a stochastic process $(A_t)_{0 \le t \le T}$, akin to the Merton model’s basic framework. In contrast to the Merton model, we do not impose that $(A_t)_{0 \le t \le T}$ is a Brownian motion, but only assume that this stochastic process has stationary and independent increments. We measure time $t$ in years. The starting point $A_0 = k$ is the initial distance to default of an obligor, based on their initial default probability. We then let $c$ denote the threshold at which the obligor’s credit quality drops from bucket 1 to bucket 2 in terms of the IFRS 9 classification, for some constant $c \in [0, k]$. Figure 1 provides an example of the net asset value of an obligor with a loan over $T = 10$ years. To explain further, if $A_t > c$, the obligor’s loan is in bucket 1, signifying “good credit” relative to their initial credit quality at the time the loan was issued. We define that a default occurs at time $T$ if $A_T < 0$; consequently, the initial distance to default of the obligor is equal to $k$. Furthermore, when $A_t \le c$, the loan is in bucket 2, which means the bank’s provisions for this obligor’s particular loan are calculated based on lifetime expected credit loss rather than one-year expected credit loss. We assume that there need to be enough assets to pay back the loan only at the maturity time $T$, so that we consider default events only at $T$, consistent with the modelling of default in the Merton model. Hence, in Figure 1, although the obligor’s loan experiences a significant increase in credit risk at approximately $t = 4$, a default does not occur, since the asset value evidently exceeds that of the liabilities at maturity, $T = 10$.
To establish an optimization problem, we use a formulation involving discrete-time observations. This is quite realistic, as obligor-specific information is typically observed by a bank at monthly or quarterly reporting dates. Here, we consider instants $t_0 = 0 < t_1 < \cdots < t_N = T$ and assume that $t_{N-1} \le T - 1$. This assumption is without loss of generality, as we could simply disregard the instants after reporting time $T - 1$. There is no impact on provisions when switching above and below the threshold $c$ after $T - 1$, as one-year ECL equals lifetime ECL during the final year of the loan. Prior to formally defining the actual optimization problem, we examine the two portions of the prospective objective function, which will model the trade-off between income volatility and the early recognition of credit risk. The recognition portion is
$$\sum_{j=1}^{N-1} (T - 1 - t_j)\, P\big(A_{t_j} > c \,\big|\, A_T < 0\big), \qquad (1)$$
while the volatility portion is
$$E\left[\sum_{j=1}^{N-1} (T - 1 - t_j)\left( \mathbb{1}_{\{A_{t_j} > c\}} - \mathbb{1}_{\{A_{t_{j-1}} > c\}} \right)^2\right]. \qquad (2)$$
As we subsequently explain in more detail, the recognition portion penalizes for failing to recognize a default early, while the volatility portion measures fluctuations resulting from reclassifications between buckets 1 and 2. We first examine the probability $P(A_{t_j} > c \mid A_T < 0)$, which is part of Formula (1), representing the conditional probability that the net asset value of the obligor at time $t_j$ exceeds $c$, given a default occurs at time $T$. Hence, if this probability is low, the early recognition of credit risk is likely, since experiencing the “warning sign” of a loan being classified in bucket 2 before a default occurs is probable. Intuitively, this probability being low corresponds to a high threshold value $c$, near $A_0 = k$. In Formula (2), the expression $\big(\mathbb{1}_{\{A_{t_j} > c\}} - \mathbb{1}_{\{A_{t_{j-1}} > c\}}\big)^2$ takes the value 1 if the process changes from being above to below (or vice versa) the threshold $c$ from time $t_{j-1}$ to $t_j$, and zero otherwise. As we would expect, if $c$ is high, the net asset value of an obligor is more prone to fluctuation between classification buckets 1 and 2. This would lead to frequent adjustments in the calculation of ECL, changing from one-year ECL to lifetime ECL and vice versa, ultimately contributing to substantial earnings volatility from the perspective of the bank that issues the loan. We recall here that ECL equals the product of probability of default (PD), loss given default (LGD), and exposure at default (EaD). In practice, LGD and EaD are often assumed to be constant so that the PD, which we capture in this term, models the default risk. The summations in both Formulae (1) and (2) capture the net asset value throughout each time step, up until the final year of the loan. The coefficient $(T - 1 - t_j)$ in both portions conveys that the early recognition of credit risk, along with the management of income volatility, is more critical earlier in the term of the loan.
Thus, each portion has a negative, approximately linear relationship with time. First of all, the earlier the risk of non-payment is identified, the sooner loan loss provisions can be built proactively. Relative to income volatility, the earlier a significant increase in credit risk occurs, the larger the impact on a bank’s income relative to ECL estimation, hence the factor $(T - 1 - t_j)$. We can write the volatility portion (2) as
$$E\left[\sum_{j=1}^{N-1} (T - 1 - t_j)\left( \mathbb{1}_{\{A_{t_j} > c\}} - \mathbb{1}_{\{A_{t_{j-1}} > c\}} \right)^2\right] = \sum_{j=1}^{N-1} (T - 1 - t_j)\Big[ P\big(A_{t_j} > c,\, A_{t_{j-1}} \le c\big) + P\big(A_{t_j} \le c,\, A_{t_{j-1}} > c\big)\Big],$$
using $\left( \mathbb{1}_{\{A_{t_j} > c\}} - \mathbb{1}_{\{A_{t_{j-1}} > c\}} \right)^2 = \mathbb{1}_{\{A_{t_j} > c,\, A_{t_{j-1}} \le c\}} + \mathbb{1}_{\{A_{t_j} \le c,\, A_{t_{j-1}} > c\}}$.
Acknowledging that Formulae (1) and (2) are decreasing and increasing in $c$, respectively, we combine these two portions and model the inherent balance between early recognition of credit risk and income volatility through the optimization problem
$$\min_{0 \le c \le k} f(c),$$
where
$$f(c) = \sum_{j=1}^{N-1} (T - 1 - t_j)\, P\big(A_{t_j} > c \,\big|\, A_T < 0\big) + \lambda \sum_{j=1}^{N-1} (T - 1 - t_j)\Big[ P\big(A_{t_j} > c,\, A_{t_{j-1}} \le c\big) + P\big(A_{t_j} \le c,\, A_{t_{j-1}} > c\big)\Big]. \qquad (3)$$
Note that $\lambda > 0$ is a tuning parameter that, in practical terms, determines how much importance is placed on minimizing income volatility from the perspective of a bank. Various approaches to modelling the net asset value process $(A_t)_{0 \le t \le T}$ will be investigated in the following sections, including both continuous and discrete distributions for the increments of $(A_t)_{0 \le t \le T}$.
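To make the trade-off concrete, the objective (3) can be estimated by simulation for any choice of increment distribution. The following sketch is illustrative and not part of the original analysis; the function names, the parameter values, and the use of Brownian increments are our own assumptions.

```python
import numpy as np

def objective_mc(c, k, T, times, sample_increments, lam, n_paths=100_000, seed=0):
    """Monte Carlo estimate of the objective f(c) in Formula (3).

    `sample_increments(rng, dt, size)` draws the stationary, independent
    increments of the net asset value over a step of length dt; it is a
    placeholder for whichever distribution is chosen later.
    """
    rng = np.random.default_rng(seed)
    times = np.asarray(times, dtype=float)        # t_0 = 0 < ... < t_N = T
    a = np.full((n_paths, len(times)), float(k))  # simulated paths of A_t
    for j in range(1, len(times)):
        a[:, j] = a[:, j - 1] + sample_increments(rng, times[j] - times[j - 1], n_paths)

    defaulted = a[:, -1] < 0                      # the event {A_T < 0}
    above = a > c                                 # bucket 1 indicator at each date
    value = 0.0
    for j in range(1, len(times) - 1):            # reporting dates t_1, ..., t_{N-1}
        w = T - 1 - times[j]
        # recognition portion: P(A_{t_j} > c | A_T < 0)
        value += w * above[defaulted, j].mean()
        # volatility portion: probability of switching buckets between t_{j-1} and t_j
        value += lam * w * (above[:, j] != above[:, j - 1]).mean()
    return value

# Brownian increments as in the classical Merton setting (illustrative parameters)
brownian = lambda rng, dt, size: rng.normal(0.0, np.sqrt(dt), size)
f_at_3 = objective_mc(3.0, k=5.2, T=10, times=range(11), lam=10.0,
                      sample_increments=brownian)
```

Any sampler with stationary, independent increments can be plugged in, which mirrors the generality of the model assumptions above.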

3. Analyzing the Optimization Problem for Continuous Asset Distribution

In this section, we study the situation where the net asset value process has a continuous distribution. Before analyzing specific distributions, we rephrase our optimization problem in terms of the distribution of $A_t$. We denote by $G_t$ and $g_t$ the cumulative distribution function and probability density function, respectively, of the net asset value change $A_t - A_0$ for a fixed time $t$.
Proposition 1.
The function $f(c)$ in Formula (3) can be written as
$$f(c) = \sum_{j=1}^{N-1} (T - 1 - t_j)\, \frac{\int_{c-k}^{\infty} G_{T-t_j}(-k-x)\, g_{t_j}(x)\, dx}{G_T(-k)} + \lambda \sum_{j=1}^{N-1} (T - 1 - t_j)\left( \int_{-\infty}^{c-k} \big[1 - G_{t_j - t_{j-1}}(c-k-x)\big]\, g_{t_{j-1}}(x)\, dx + \int_{c-k}^{\infty} G_{t_j - t_{j-1}}(c-k-x)\, g_{t_{j-1}}(x)\, dx \right). \qquad (4)$$
Proof. 
Because the net asset value process has stationary increments, the cumulative distribution function of $A_{t_j} - A_{t_k}$ for $j > k$ is $G_{t_j - t_k}$. For the recognition portion of $f(c)$, we derive the conditional probability
$$P\big(A_{t_j} > c \,\big|\, A_T < 0\big) = \frac{P\big(A_{t_j} > c,\, A_T - A_{t_j} < -A_{t_j}\big)}{P(A_T < 0)} = \frac{E\Big[\mathbb{1}_{\{A_{t_j} > c\}}\, E\big[\mathbb{1}_{\{A_T - A_{t_j} < -A_{t_j}\}} \,\big|\, A_{t_j}\big]\Big]}{P(A_T - A_0 < -k)} = \frac{E\Big[\mathbb{1}_{\{A_{t_j} > c\}}\, E\big[\mathbb{1}_{\{A_T - A_{t_j} < -a\}}\big]\big|_{a = A_{t_j}}\Big]}{G_T(-k)} = \frac{E\Big[\mathbb{1}_{\{A_{t_j} - A_0 > c-k\}}\, G_{T-t_j}\big(-k - (A_{t_j} - A_0)\big)\Big]}{G_T(-k)} = \frac{\int_{c-k}^{\infty} G_{T-t_j}(-k-x)\, g_{t_j}(x)\, dx}{G_T(-k)}, \qquad (5)$$
with the numerator of the third equality arising from the well-known result that $E[f(X,Y) \mid Y] = E[f(X,y)]\big|_{y=Y}$ for independent random variables $X$ and $Y$ and a measurable function $f$ with $E[|f(X,Y)|] < \infty$; see, for example, Theorem 6.4 of Kallenberg (2002). The simplification of the volatility portion involves a comparable methodology, with some slight differences. Similarly to Formula (5), we can deduce that
$$P\big(A_{t_j} > c,\, A_{t_{j-1}} \le c\big) = \int_{-\infty}^{c-k} \big[1 - G_{t_j - t_{j-1}}(c-k-x)\big]\, g_{t_{j-1}}(x)\, dx, \qquad P\big(A_{t_j} \le c,\, A_{t_{j-1}} > c\big) = \int_{c-k}^{\infty} G_{t_j - t_{j-1}}(c-k-x)\, g_{t_{j-1}}(x)\, dx,$$
so that the volatility portion is
$$\lambda \sum_{j=1}^{N-1} (T - 1 - t_j)\left( \int_{-\infty}^{c-k} \big[1 - G_{t_j - t_{j-1}}(c-k-x)\big]\, g_{t_{j-1}}(x)\, dx + \int_{c-k}^{\infty} G_{t_j - t_{j-1}}(c-k-x)\, g_{t_{j-1}}(x)\, dx \right).$$
Merging the two portions, the objective function (3) results in (4). □
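As a sanity check of Proposition 1, the integral representation of the conditional probability in Formula (5) can be compared against a direct simulation. The sketch below assumes Brownian increments (anticipating Section 3.2); all names and parameter values are illustrative.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def cond_prob_formula(c, k, t_j, T):
    """P(A_{t_j} > c | A_T < 0) from Formula (5), with G_t(x) = Phi(x / sqrt(t)),
    i.e., Brownian increments (the distributional choice of Section 3.2)."""
    integrand = lambda x: (stats.norm.cdf((-k - x) / np.sqrt(T - t_j))
                           * stats.norm.pdf(x, scale=np.sqrt(t_j)))
    num, _ = quad(integrand, c - k, np.inf)
    return num / stats.norm.cdf(-k / np.sqrt(T))

def cond_prob_mc(c, k, t_j, T, n=400_000, seed=1):
    """The same conditional probability, estimated directly by simulation."""
    rng = np.random.default_rng(seed)
    a_tj = k + rng.normal(0.0, np.sqrt(t_j), n)        # A_{t_j}
    a_T = a_tj + rng.normal(0.0, np.sqrt(T - t_j), n)  # A_T via an independent increment
    return np.mean(a_tj[a_T < 0.0] > c)
```

Both routines target the same quantity and should agree up to Monte Carlo error.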
To extend this analysis, a distribution must be established for the increments of the net asset value, with the purpose of solving the optimization problem analytically when possible. In Section 3.1, we give a simple example where we can find an explicit formula for the optimal threshold while in Section 3.2, we present and numerically analyze the optimization problem in the case of the net asset value given by Brownian motion.

3.1. Considering a Shifted Exponential Distribution for Modelling the Net Asset Value

In this subsection, we assume the increments in the net asset value of an obligor are over equal time lengths, such that $t_j = \frac{jT}{N}$, with $t_j - t_{j-1}$ constant. In addition, the distribution of an increment $A_{t_j} - A_{t_{j-1}}$ is given by $G$ with corresponding density $g$. More specifically, we consider the shifted exponential distribution with shift parameter $\delta$ and mean parameter $\theta$, such that
$$g(x) = \frac{1}{\theta}\, e^{-\frac{x-\delta}{\theta}}$$
and
$$G(x) = 1 - e^{-\frac{x-\delta}{\theta}}$$
with $x \ge \delta$ and $\theta > 0$. In our situation, we specify $\delta < 0$ to account for the realistic possibility of net losses occurring in any given time interval prior to maturity. Under the assumption of equal time steps, we now have $E(A_{t_j}) = k + j(\theta + \delta)$ and $\mathrm{Var}(A_{t_j}) = j\theta^2$, observing that the net asset value has greater variability over longer periods of time. We further require that $-k > \delta$, so that a default can theoretically occur in any time increment, with the initial default probability depending on $k$ directly.
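The stated moments of $A_{t_j}$ can be verified by simulation; a minimal sketch, assuming the parameter values of the numerical example later in this subsection and $j = 2$ steps:

```python
import numpy as np

theta, delta, k, j = 14.0, -3.6, 3.5, 2      # parameters of the later example
rng = np.random.default_rng(0)
# A shifted exponential increment is delta plus an Exponential draw with mean theta.
increments = delta + rng.exponential(theta, size=(2_000_000, j))
a_tj = k + increments.sum(axis=1)
mean_theory = k + j * (theta + delta)        # E(A_{t_j}) = k + j(theta + delta)
var_theory = j * theta**2                    # Var(A_{t_j}) = j theta^2
```

With these values, the sample mean and variance should land close to $24.3$ and $392$, respectively, up to simulation noise.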
In this situation, we examine the case when $T = t_N = t_2 = 2t_1$, such that we have only one reporting date of the net asset value of the obligor, exactly halfway to the time of maturity of the loan. We state an additional simple, albeit important, assumption that $T > 2$. Otherwise, our problem would have no relevance to IFRS 9 expected credit loss: we cannot have early recognition of credit risk if $t_1 = \frac{T}{2} > T - 1$, since there would then be no possible impact on provisions at the only reporting date prior to maturity, which requires $t_1 \le T - 1$. We can find an explicit formula for the optimal threshold, given in the following result.
Proposition 2.
Consider the assumptions of this subsection and suppose that
$$\frac{1 - e^{\frac{k+\delta}{\theta}}}{1 - e^{\frac{k+2\delta}{\theta}}\, \frac{\theta - (k+2\delta)}{\theta}} \;<\; \lambda \;<\; \frac{1 - e^{\frac{\delta}{\theta}}}{1 - e^{\frac{k+2\delta}{\theta}}\, \frac{\theta - (k+2\delta)}{\theta}}. \qquad (6)$$
Then, the optimal threshold is given by
$$c^* = \theta \ln\!\left(1 - \lambda\left(1 - e^{\frac{k+2\delta}{\theta}}\, \frac{\theta - (k+2\delta)}{\theta}\right)\right) - \delta = \theta \ln\!\big(1 - \lambda\, P(A_T < 0)\big) - \delta. \qquad (7)$$
The technical condition (6) ensures that the minimizer of the objective function (3) over the interval $[0, k]$ is attained at an interior point of this interval. For instance, if $\lambda$ does not satisfy the first inequality in (6), the recognition portion dominates the objective function, leading to the most conservative choice of the optimal threshold, $c^* = k$. By contrast, if the second inequality in (6) is not satisfied, the objective function is driven by the volatility portion, so that the minimizer will be $c^* = 0$.
Proof of Proposition 2.
Under the assumptions of this subsection, the objective function (3) is simplified as
$$f(c) = \Big(T - 1 - \tfrac{T}{2}\Big)\, P\big(A_{T/2} > c \,\big|\, A_T < 0\big) + \lambda\, \Big(T - 1 - \tfrac{T}{2}\Big)\Big[ P\big(A_{T/2} > c,\, A_0 \le c\big) + P\big(A_{T/2} \le c,\, A_0 > c\big)\Big].$$
Under the assumption that $A_0 = k > c$, we eliminate the unrealistic possibility that the loan is in IFRS 9 bucket 2 at the time it is issued. Consequently, we deduce that $P\big(A_{T/2} > c,\, A_0 \le c\big) = 0$ and $P\big(A_{T/2} \le c,\, A_0 > c\big) = P\big(A_{T/2} \le c\big)$ because $P(A_0 \le c) = 0$ and $P(A_0 > c) = 1$. Therefore, $f(c)$ simply becomes
$$f(c) = \frac{T-2}{2}\, P\big(A_{T/2} > c \,\big|\, A_T < 0\big) + \lambda\, \frac{T-2}{2}\, P\big(A_{T/2} \le c\big),$$
which, using Formula (4), is expressed in terms of the functions $G$ and $g$ as
$$f(c) = \frac{T-2}{2}\, \frac{\int_{c-k}^{\infty} G(-k-x)\, g(x)\, dx}{P(A_T < 0)} + \lambda\, \frac{T-2}{2}\, G(c-k). \qquad (8)$$
The next step in this problem is to simplify $f(c)$ further, take the derivative with respect to $c$, and arrive at an expression for the solution to the minimization problem, $c^*$, depending on the other parameters involved. For the distribution of $A_T$, we write $A_T = (A_T - A_{T/2}) + (A_{T/2} - A_0) + A_0$, where $A_T - A_{T/2}$ and $A_{T/2} - A_0$ are two independent and identically distributed random variables. This means that the distribution of $A_T - A_0$ is a convolution, given by $P(A_T - A_0 < x) = \int G(x - y)\, g(y)\, dy$. Then, the initial default probability $P(A_T < 0)$ is derived from
$$P(A_T - A_0 < -k) = \int G(-k-y)\, g(y)\, dy = \int \left(1 - e^{\frac{k+y+\delta}{\theta}}\right) \frac{1}{\theta}\, e^{-\frac{y-\delta}{\theta}}\, dy = \int_{\delta}^{-k-\delta} \left(1 - e^{\frac{k+y+\delta}{\theta}}\right) \frac{1}{\theta}\, e^{-\frac{y-\delta}{\theta}}\, dy = 1 - e^{\frac{k+2\delta}{\theta}}\, \frac{\theta - (k+2\delta)}{\theta},$$
with the limits of integration in the penultimate equality being a consequence of the fact that $g(y) = 0$ for $y < \delta$, and $G(-k-y) = 0$ for $-k-y < \delta$, that is, for $y > -k-\delta$.
Obtaining $P(A_T < 0)$ now allows us to write our objective function (8) as
$$f(c) = \frac{T-2}{2}\left( \frac{\int_{c-k}^{-k-\delta} \left(1 - e^{\frac{k+x+\delta}{\theta}}\right) \frac{1}{\theta}\, e^{-\frac{x-\delta}{\theta}}\, dx}{1 - e^{\frac{k+2\delta}{\theta}}\, \frac{\theta - (k+2\delta)}{\theta}} + \lambda\left(1 - e^{-\frac{c-k-\delta}{\theta}}\right)\right) = \frac{T-2}{2}\left( \frac{e^{-\frac{c-k-\delta}{\theta}} + e^{\frac{k+2\delta}{\theta}}\, \frac{c+\delta-\theta}{\theta}}{1 - e^{\frac{k+2\delta}{\theta}}\, \frac{\theta - (k+2\delta)}{\theta}} + \lambda\left(1 - e^{-\frac{c-k-\delta}{\theta}}\right)\right),$$
noting that, in the first equality, $G(-k-x) = 0$ for $-k-x < \delta$, or equivalently $x > -k-\delta$, thus
$$\int_{c-k}^{\infty} G(-k-x)\, g(x)\, dx = \int_{c-k}^{-k-\delta} G(-k-x)\, g(x)\, dx.$$
We next find the derivative
$$f'(c) = \frac{T-2}{2}\left( \frac{-\frac{1}{\theta}\, e^{-\frac{c-k-\delta}{\theta}} + \frac{1}{\theta}\, e^{\frac{k+2\delta}{\theta}}}{1 - e^{\frac{k+2\delta}{\theta}}\, \frac{\theta - (k+2\delta)}{\theta}} + \frac{\lambda}{\theta}\, e^{-\frac{c-k-\delta}{\theta}} \right) = \frac{T-2}{2\theta\left(1 - e^{\frac{k+2\delta}{\theta}}\, \frac{\theta - (k+2\delta)}{\theta}\right)}\left( e^{\frac{k+2\delta}{\theta}} - e^{\frac{-c+k+\delta}{\theta}} + \lambda\left( e^{\frac{-c+k+\delta}{\theta}} - e^{\frac{-c+2k+3\delta}{\theta}}\, \frac{\theta - (k+2\delta)}{\theta} \right) \right),$$
which equals zero if and only if
$$e^{\frac{c+\delta}{\theta}} - 1 + \lambda\left(1 - e^{\frac{k+2\delta}{\theta}}\, \frac{\theta - (k+2\delta)}{\theta}\right) = 0,$$
yielding our result for the optimal $c^*$ given by Formula (7).
However, we must establish an appropriate interval for $\lambda$ to verify that the minimizer lies in the appropriate interval $0 < c^* < k$, along with confirmation that $f''(c) > 0$, hence $f$ is convex. Initially, we analyze the inequality
$$0 < c^* < k,$$
or
$$0 < \theta \ln\!\left(1 - \lambda\left(1 - e^{\frac{k+2\delta}{\theta}}\, \frac{\theta - (k+2\delta)}{\theta}\right)\right) - \delta < k,$$
which is equivalent to the inequalities in (6) and gives our applicable interval for the tuning parameter. Furthermore, we take the second derivative
$$f''(c) = \frac{T-2}{2\theta^2\left(1 - e^{\frac{k+2\delta}{\theta}}\, \frac{\theta - (k+2\delta)}{\theta}\right)}\left( e^{\frac{-c+k+\delta}{\theta}} + \lambda\left( e^{\frac{-c+2k+3\delta}{\theta}}\, \frac{\theta - (k+2\delta)}{\theta} - e^{\frac{-c+k+\delta}{\theta}} \right) \right),$$
and note that $f''(c) > 0$ is equivalent to
$$\lambda < \frac{1}{1 - e^{\frac{k+2\delta}{\theta}}\, \frac{\theta - (k+2\delta)}{\theta}},$$
which is already satisfied by the conditions from (6). This guarantees that $f$ is convex and, to summarize, we have a proper solution to the optimization problem in (7), given that $\lambda$ is selected in the interval derived in (6). □
Focusing on the solution in detail, it is clear that $c^*$ is decreasing in $\lambda$, consistent with the notion from the prior section that $\lambda$ is associated with income volatility management; thus, an increased weight placed on it results in later recognition of credit risk. It is also intuitive to note that, as $k$ increases, keeping $\theta$ and $\delta$ constant, the initial default probability decreases, since the distance to default becomes larger at the time of origination of the loan. To confirm, we compute the derivative
$$\frac{d}{dk}\, P(A_T < 0) = \frac{d}{dk}\left(1 - e^{\frac{k+2\delta}{\theta}}\, \frac{\theta - (k+2\delta)}{\theta}\right) = \underbrace{\frac{e^{\frac{k+2\delta}{\theta}}}{\theta}}_{>0}\; \underbrace{\frac{k+2\delta}{\theta}}_{<0} \;<\; 0,$$
leading us to the conclusion that $P(A_T < 0)$ is indeed decreasing in $k$. From the initial assumptions of this subsection, the facts that $\theta > 0$ and $k + 2\delta < 0$ confirm the sign of the derivative. As $P(A_T < 0)$ decreases, we see from Formula (7) that $c^*$ increases; thus, the overall solution is increasing in $k$.
Lastly, to provide some context for these parameters, we set $\lambda = 3$, $k = 3.5$, $\theta = 14$, and $\delta = -3.6$, so the initial default probability of the obligor is $P(A_T < 0) \approx 0.03$. In practice, this is a realistic default risk, and the solution for this particular set of parameter values is $c^* \approx 2.31$, which can be seen in Figure 2, where the optimal (minimizing) values $c^*$ are plotted for various $\lambda$. The decreasing relationship between $c^*$ and $\lambda$ is evident.
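The closed-form quantities of this subsection are straightforward to code. A minimal sketch, using the example parameters above (the function names are our own):

```python
import numpy as np

def default_prob(k, theta, delta):
    """Initial default probability P(A_T < 0) derived in the proof above."""
    z = (k + 2.0 * delta) / theta
    return 1.0 - np.exp(z) * (1.0 - z)

def optimal_threshold(lam, k, theta, delta):
    """Closed-form minimizer c* of Formula (7)."""
    return theta * np.log(1.0 - lam * default_prob(k, theta, delta)) - delta

p_default = default_prob(3.5, 14.0, -3.6)         # approximately 0.029
c_star = optimal_threshold(3.0, 3.5, 14.0, -3.6)  # approximately 2.31
```

Evaluating `optimal_threshold` over a range of $\lambda$ reproduces the decreasing relationship between $c^*$ and $\lambda$ shown in Figure 2.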

3.2. Modelling the Net Asset Value with Brownian Motion

We now consider the case of the net asset value process driven by Brownian motion. Concretely, we assume in this subsection that $(A_t - A_0)_{0 \le t \le T}$ is a Brownian motion. With this modelling assumption, we can rephrase the objective function. We denote by $\Phi$ and $\varphi$ the cumulative distribution function and probability density function, respectively, of a standard normally distributed random variable. Since $G_t(x) = \Phi\!\left(\frac{x}{\sqrt{t}}\right)$ in this case of Brownian motion, we immediately obtain from Proposition 1 the following result.
Proposition 3.
If $(A_t - A_0)_{0 \le t \le T}$ is a Brownian motion, the function $f(c)$ in (3) can be written as
$$f(c) = \sum_{j=1}^{N-1} (T - 1 - t_j)\, \frac{\int_{\frac{c-k}{\sqrt{t_j}}}^{\infty} \Phi\!\left(\frac{-k - x\sqrt{t_j}}{\sqrt{T - t_j}}\right) \varphi(x)\, dx}{\Phi\!\left(\frac{-k}{\sqrt{T}}\right)} + \lambda \sum_{j=1}^{N-1} (T - 1 - t_j)\left( \int_{-\infty}^{\frac{c-k}{\sqrt{t_{j-1}}}} \left[1 - \Phi\!\left(\frac{c - k - x\sqrt{t_{j-1}}}{\sqrt{t_j - t_{j-1}}}\right)\right] \varphi(x)\, dx + \int_{\frac{c-k}{\sqrt{t_{j-1}}}}^{\infty} \Phi\!\left(\frac{c - k - x\sqrt{t_{j-1}}}{\sqrt{t_j - t_{j-1}}}\right) \varphi(x)\, dx \right). \qquad (9)$$
As Formula (9) is quite elaborate, arriving at an analytical solution of (3) is not feasible, although numerical results are certainly attainable. To demonstrate an example of the objective function and its minimum, consider a loan issued over ten years ($T = 10$) with yearly reporting dates ($N = 10$), assuming equal time increments $t_j - t_{j-1} = 1$ for $j \in \{1, 2, \ldots, 10\}$. We fix the initial distance to default, $k$, such that $P(A_T < 0) = 0.05$. Accordingly, $P\big(\frac{A_T - A_0}{\sqrt{T}} < -\frac{k}{\sqrt{T}}\big) = 0.05$ and $k = -\sqrt{T}\, \Phi^{-1}(0.05) \approx 5.2$. We observe the shape of the objective function for some reasonably chosen values of $\lambda$, and the respective solutions to the minimization problem, in Figure 3.
Note that, for this specific example, smaller values of the tuning parameter ($\lambda < 5$) produce solutions of $c^* \approx k \approx 5.2$, indicating the volatility portion is not sufficiently weighted, since the early recognition of credit risk would be considered overwhelmingly important in that situation. Although an “ideal” selection of $\lambda$ is very subjective in terms of balancing the significance of the two portions of $f(c)$, the choices in this example clearly result in solutions in the desired interval $0 < c^* < k$. In addition, the decreasing relationship between $c^*$ and $\lambda$ is apparent, identical to the conclusion established in the previous subsection, where the shifted exponential distribution was used to model the net asset value.
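Formula (9) can be evaluated numerically and minimized by a simple grid search. The sketch below uses the example’s setting of yearly reporting dates with $k \approx 5.2$; the value $\lambda = 20$ and the grid are illustrative choices of ours, not taken from the paper’s figure.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def f_brownian(c, k=5.2, T=10, lam=20.0):
    """Objective (9) with yearly reporting dates t_j = j (illustrative lam)."""
    Phi, phi = stats.norm.cdf, stats.norm.pdf
    val = 0.0
    for j in range(1, T):                       # reporting dates t_1, ..., t_{N-1}
        t, s = float(j), float(j - 1)
        w = T - 1 - t
        # recognition portion: P(A_t > c | A_T < 0)
        num, _ = quad(lambda x: Phi((-k - x * np.sqrt(t)) / np.sqrt(T - t)) * phi(x),
                      (c - k) / np.sqrt(t), np.inf)
        val += w * num / Phi(-k / np.sqrt(T))
        # volatility portion: probability of switching buckets on (t_{j-1}, t_j];
        # yearly steps, so sqrt(t_j - t_{j-1}) = 1 inside the arguments
        if s == 0.0:                            # A_0 = k is deterministic
            switch = Phi(c - k)                 # only the down-switch is possible
        else:
            lo, _ = quad(lambda x: (1.0 - Phi(c - k - x * np.sqrt(s))) * phi(x),
                         -np.inf, (c - k) / np.sqrt(s))
            hi, _ = quad(lambda x: Phi(c - k - x * np.sqrt(s)) * phi(x),
                         (c - k) / np.sqrt(s), np.inf)
            switch = lo + hi
        val += lam * w * switch
    return val

grid = np.linspace(0.0, 5.2, 27)
vals = np.array([f_brownian(c) for c in grid])
c_star = grid[int(np.argmin(vals))]
```

A finer grid, or a bounded scalar minimizer, would refine $c^*$ further; the coarse grid is enough to see the shape of the objective.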

4. Analyzing the Optimization Problem for Discrete Asset Distribution

4.1. Specific Increments

In a discrete-time framework, we can consider a situation where the increments $A_{t_j} - A_{t_{j-1}}$ are explicitly defined in terms of their possible values and the corresponding transition probabilities. In particular, we again consider the case where $T = t_N = t_2$, letting $t_1 = \frac{T}{2}$ be the midpoint between the time the loan is issued and the time of maturity. Consistent with prior formulations, $A_{t_0} = A_0 = k > 0$, and, in this instance, the optimization problem corresponding to (3) is simplified as
$$\min_{0 \le c \le k} f(c),$$
where
$$f(c) = \frac{T-2}{2}\, P\big(A_{T/2} > c \,\big|\, A_T < 0\big) + \lambda\, \frac{T-2}{2}\Big[ P\big(A_{T/2} > c,\, A_0 \le c\big) + P\big(A_{T/2} \le c,\, A_0 > c\big)\Big],$$
equivalent to
$$f(c) = \frac{T-2}{2}\, P\big(A_{T/2} > c \,\big|\, A_T < 0\big) + \lambda\, \frac{T-2}{2}\Big[ P\big(A_{T/2} > c,\, k \le c\big) + P\big(A_{T/2} \le c,\, k > c\big)\Big],$$
assuming again that $\lambda > 0$. The increments are independent, with fixed transition probabilities
$$P\big(A_{T/2} - A_0 = b_1\big) = P\big(A_T - A_{T/2} = b_2\big) = p_1 > 0.5,$$
$$P\big(A_{T/2} - A_0 = -b_1\big) = P\big(A_T - A_{T/2} = -b_2\big) = 1 - p_1 < 0.5.$$
Here, we do not require identical distribution of the two increments, but impose b 2 > k > b 1 > 0 . We also specify k + b 1 b 2 < 0 < k b 1 + b 2 , which implies the initial default probability satisfies P ( A T < 0 ) = ( 1 p 1 ) 2 + p 1 ( 1 p 1 ) = 1 p 1 < 0.5 . Therefore, relative to a specific obligor, we select p 1 = 1 P ( A T < 0 ) . Ultimately, the solution to the optimization problem, in terms of possible intervals for c * , can be expressed as a piecewise function dependent on λ . We do consider the endpoints of the interval 0 c * k in this instance. The objective function is
$$f(c) = \begin{cases} \left(\tfrac{T}{2}\right)^2, & \text{if } 0 \le c < k - b_1, \\ \left(\tfrac{T}{2}\right)^2 \left( p_1 + \lambda (1 - p_1) \right), & \text{if } k - b_1 \le c < k, \\ \left(\tfrac{T}{2}\right)^2 p_1 (1 + \lambda), & \text{if } c = k, \end{cases}$$
and, since the coefficient $\left(\tfrac{T}{2}\right)^2$ appears in each scenario, we examine all the other coefficients as a means of solving for the minimum of $f(c)$. First, we note that
$$p_1 + \lambda (1 - p_1) < p_1 (1 + \lambda)$$
regardless of the value of $\lambda$, since $1 - p_1 < p_1$ for our specified $p_1 > 0.5$. Therefore, we are certain that the solution $c^* = k$ will never be chosen. Secondly, we see that $p_1 + \lambda(1 - p_1) < 1$ if $\lambda < 1$, for any $p_1 > 0.5$, leading to the clear solution to the minimization problem in terms of optimal $c^*$ selection intervals:
$$\text{select} \begin{cases} k - b_1 \le c^* < k & \text{if } \lambda < 1, \\ \text{any } 0 \le c^* < k & \text{if } \lambda = 1, \\ 0 \le c^* < k - b_1 & \text{if } \lambda > 1. \end{cases}$$
However, the instance $\lambda = 1$ provides us with no information other than $c^* \neq k$, while the case with $0 \le c^* < k - b_1$ allows no opportunity to observe a possible IFRS 9 significant increase in credit risk prior to an eventual default. To conclude, choosing $0 < \lambda < 1$, resulting in a solution of $k - b_1 \le c^* < k$, is most suitable in this setting.
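To make the case analysis concrete, the piecewise objective above can be evaluated directly. The following sketch is our own illustration with hypothetical parameter values; the function name and inputs are not part of the paper's formulation, and the common weight $\left(\tfrac{T}{2}\right)^2$ is kept only for completeness, since it does not affect which branch is minimal.

```python
def f_specific(c, k, b1, p1, lam, T=10.0):
    """Piecewise objective of the two-increment model (hypothetical sketch).

    k: initial distance to default, b1: first-period increment size
    (0 < b1 < k), p1: up-move probability (> 0.5), lam: tuning parameter.
    """
    w = (T / 2.0) ** 2  # common weight; cancels in the minimization
    if 0.0 <= c < k - b1:
        return w  # recognition term equals 1: a defaulting loan is never flagged early
    if k - b1 <= c < k:
        return w * (p1 + lam * (1.0 - p1))  # loan is flagged after a first-period down-move
    if c == k:
        return w * p1 * (1.0 + lam)  # boundary case c = k, never optimal
    raise ValueError("c must lie in [0, k]")

# For lam < 1, the middle branch is smallest, so any c* in [k - b1, k) is optimal.
k, b1, p1, lam = 3.0, 1.0, 0.8, 0.5
middle = f_specific(2.5, k, b1, p1, lam)  # c in [k - b1, k)
low = f_specific(1.0, k, b1, p1, lam)     # c in [0, k - b1)
end = f_specific(k, k, b1, p1, lam)       # c = k
```

With these hypothetical values, the middle branch gives $p_1 + \lambda(1 - p_1) = 0.9 < 1 < p_1(1 + \lambda) = 1.2$, confirming the ordering derived above.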

4.2. General Increments

In this part, we still consider just two reporting dates, $\frac{T}{2}$ and $T$, but, in each period, we have $K$ different possible values for the increments $A_{t_j} - A_{t_{j-1}}$, namely $b_1, \ldots, b_K$, comprising both positive and negative integers, with corresponding probabilities $p_1, \ldots, p_K$. A general discrete random variable is modelled in each step, where we still assume that the changes in the two time steps are independent and identically distributed. In particular, we explicitly let $X$ and $Y$ be the independent random variables, in time order, so that $A_{T/2} = k + X$ and $A_T = k + X + Y$. To evaluate the objective function in this instance, starting with the recognition portion of $f(c)$, we compute
$$P\left[A_{T/2} > c \,\middle|\, A_T < 0\right] = \frac{P\left[A_{T/2} > c,\, A_T < 0\right]}{P[A_T < 0]} = \frac{P[X > c - k,\, X + Y < -k]}{P[X + Y < -k]}.$$
We simplify the denominator by splitting the event into disjoint parts, namely,
$$P[X + Y < -k] = P\left[\bigcup_{n=1}^{K} \{X = b_n\} \cap \{X + Y < -k\}\right] = P\left[\bigcup_{n=1}^{K} \{X = b_n\} \cap \{b_n + Y < -k\}\right] = \sum_{n=1}^{K} p_n P[Y < -k - b_n] = \sum_{n=1}^{K} p_n \sum_{\{\ell : b_\ell < -k - b_n\}} p_\ell,$$
using the independence of $X$ and $Y$ in the penultimate step. In an analogous way, the numerator in (11) can be written as
$$P[X > c - k,\, X + Y < -k] = \sum_{\{n : b_n > c - k\}} p_n \sum_{\{\ell : b_\ell < -k - b_n\}} p_\ell,$$
so that Formula (11) becomes
$$P\left[A_{T/2} > c \,\middle|\, A_T < 0\right] = \frac{P[X > c - k,\, X + Y < -k]}{P[X + Y < -k]} = \frac{\sum_{\{n : b_n > c - k\}} p_n \sum_{\{\ell : b_\ell < -k - b_n\}} p_\ell}{\sum_{n=1}^{K} p_n \sum_{\{\ell : b_\ell < -k - b_n\}} p_\ell}.$$
Under the assumption that $A_0 = k > c$, from the volatility portion of $f(c)$, we have that
$$P\left[A_{T/2} \le c\right] = P[X \le c - k] = \sum_{n=1}^{K} P\left[\{X = b_n\} \cap \{X \le c - k\}\right] = \sum_{\{n : b_n \le c - k\}} P[X = b_n] = \sum_{\{n : b_n \le c - k\}} p_n,$$
resulting in Formula (10) being expressed in this setting as
$$f(c) = \left(\tfrac{T}{2}\right)^2 \frac{\sum_{\{n : b_n > c - k\}} p_n \sum_{\{\ell : b_\ell < -k - b_n\}} p_\ell}{\sum_{n=1}^{K} p_n \sum_{\{\ell : b_\ell < -k - b_n\}} p_\ell} + \lambda \left(\tfrac{T}{2}\right)^2 \sum_{\{n : b_n \le c - k\}} p_n = \left(\tfrac{T}{2}\right)^2 \left( \frac{\sum_{\{n : b_n > c - k\}} p_n \sum_{\{\ell : b_\ell < -k - b_n\}} p_\ell}{\sum_{n=1}^{K} p_n \sum_{\{\ell : b_\ell < -k - b_n\}} p_\ell} + \lambda \sum_{\{n : b_n \le c - k\}} p_n \right) = \left(\tfrac{T}{2}\right)^2 \frac{\sum_{\{n : b_n > c - k\}} p_n \sum_{\{\ell : b_\ell < -k - b_n\}} p_\ell + \left( \sum_{n=1}^{K} p_n \sum_{\{\ell : b_\ell < -k - b_n\}} p_\ell \right) \lambda \sum_{\{n : b_n \le c - k\}} p_n}{\sum_{n=1}^{K} p_n \sum_{\{\ell : b_\ell < -k - b_n\}} p_\ell}.$$
Notice that the expression for $f(c)$ depends on $c$ only through the index sets in two of the summations. This implies that $f(c)$ is piecewise constant with jumps when $c = b_n + k$ for $n = 1, \ldots, K$. Furthermore, the sum $\sum_{\{n : b_n > c - k\}} p_n$ is decreasing in $c$, as it relates to the early recognition of credit risk. On the contrary, the sum $\sum_{\{n : b_n \le c - k\}} p_n$ in the second term of the numerator of the objective function is increasing in $c$, being part of the volatility portion. As a result, the optimal $c^*$ is chosen in an interval between $b_n + k$ and $b_{n+1} + k$ such that the value of $f(c)$ on this interval is minimal compared to its values on the other intervals. Therefore, the solution is similar to that in Section 4.1, but more intricate. Because this expression depends on the exact distance to default along with the increment values and their probabilities, a computer program is required to select the minimum numerically. For each $n$, it selects $c_n$ with $b_n + k \le c_n < b_{n+1} + k$ and computes $f(c_n)$. Because $f$ is constant on such intervals, the specific choice of $c_n$ within the interval does not affect the value of $f(c_n)$. Among all $f(c_n)$ for different $n$, the program chooses the one with the smallest value, say $f(c_{n^*})$, as the minimum.
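Such a program can be sketched in a few lines. The helper below evaluates the closed-form expression for $f(c)$ and scans one candidate $c$ per constant piece inside $[0, k]$; the function names and the example increments and probabilities are hypothetical choices of ours, used only to illustrate the procedure.

```python
def f_general(c, k, b, p, lam, T=10.0):
    """Objective for the general two-period discrete-increment model (sketch)."""
    w = (T / 2.0) ** 2
    # inner[n] = P[Y < -k - b_n], using the independence of the two increments
    inner = [sum(pl for bl, pl in zip(b, p) if bl < -k - bn) for bn in b]
    denom = sum(pn * s for pn, s in zip(p, inner))  # P[X + Y < -k], i.e. P[A_T < 0]
    recog = sum(pn * s for bn, pn, s in zip(b, p, inner) if bn > c - k)
    vol = sum(pn for bn, pn in zip(b, p) if bn <= c - k)  # P[X <= c - k]
    return w * (recog / denom + lam * vol)

def minimize_f(k, b, p, lam):
    """f is piecewise constant with jumps at c = b_n + k, so testing one
    candidate per piece within [0, k] suffices."""
    candidates = [0.0] + sorted(bn + k for bn in b if 0.0 < bn + k <= k)
    return min(candidates, key=lambda c: f_general(c, k, b, p, lam))

# Hypothetical increments and probabilities with distance to default k = 3
b, p = [-5.0, -1.0, 1.0, 5.0], [0.1, 0.3, 0.4, 0.2]
c_low = minimize_f(3.0, b, p, 1.0)   # larger lambda: volatility matters, low threshold
c_high = minimize_f(3.0, b, p, 0.2)  # smaller lambda: recognition dominates, higher threshold
```

In this toy example, the optimizer moves from the piece below $b_2 + k$ to the piece above it as $\lambda$ decreases, mirroring the decreasing relationship between $c^*$ and $\lambda$ observed earlier.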

5. Relating Our Model to the European Union Stress Test and Standard and Poor’s Default Data

For this section, we use the assumption from Section 3.2 regarding the net asset value modelling, namely that $(A_t - A_0)_{0 \le t \le T}$ is a Brownian motion, and apply it in practical settings.

5.1. Selection of λ in Comparison to European Union Framework

A basic criticism of the trade-off modelled in our problem formulation is that the tuning parameter $\lambda$ could be chosen in a subjective manner by a financial institution, based on the extent to which it is concerned about minimizing income volatility as compared to recognizing credit risk earlier. To justify what reasonable values for $\lambda$ are in practice, we make a comparison with a European Union (EU) definition. Although IFRS 9 does not specify explicitly what constitutes a significant increase, the stress test framework of the European Banking Authority (2018) defines a significant increase in credit risk as an increase of 200% over the initial credit risk. To convert this definition into our setting, consider a loan with maturity $T$ and current time $t$, so that the remaining time to maturity is $T - t$. The lifetime default probability at the initial time is $P(A_T < 0) = \Phi\left(-\frac{k}{\sqrt{T}}\right)$. Assuming that the net asset value $A_t$ equals $a$ at time $t$, the lifetime default probability at time $t$ is
$$P(A_T < 0 \mid A_t = a) = P\left(A_t + (A_T - A_t) < 0 \,\middle|\, A_t = a\right) = P(a + W_{T-t} < 0)$$
for a Brownian motion $(W_t)_{0 \le t \le T}$. We use the property of Brownian motion that $P(W_T - W_t \le a) = P(W_{T-t} \le a)$ for any constant $a$. Since $W_{T-t}$ is normally distributed with mean zero and variance $T - t$, we obtain that $P(a + W_{T-t} < 0) = \Phi\left(-\frac{a}{\sqrt{T-t}}\right)$. By the framework of the EU stress test, the risk of a loan has significantly increased if the current lifetime default probability is at least triple the initial lifetime default probability, which means $P(a + W_{T-t} < 0) \ge 3 P(A_T < 0)$. We now define $a^*$ as the critical value so that $P(a^* + W_{T-t} < 0) = 3 P(A_T < 0)$. Assuming the loan is in the middle of its lifetime, such that $t = \frac{T}{2}$, we compare $P(a^* + W_{T-t} < 0)$ with the corresponding value $P(c^* + W_{T-t} < 0)$ for our optimizer $c^*$. For various $k$ values, these solutions are determined numerically, using the objective function representation in (9). Figure 4 conveys an example of the relationship between the initial lifetime default probability $P(A_T < 0)$ and the critical lifetime default probability $P\left(c^* + W_{T/2} < 0\right)$, with $t = \frac{T}{2}$. A positive relationship exists for each $\lambda$ value shown, since the critical lifetime default probability intuitively increases as the initial lifetime default probability increases. In comparison to the EU stress test reference line, we see that, for each $\lambda$, the relationship is not perfectly linear. However, we can still reasonably suggest that the value $P(a^* + W_{T-t} < 0)$ from the stress test threshold corresponds to a parameter choice of $\lambda$ between 5.5 and 6.5, for this loan with a maturity of $T = 10$. It is not critical for us to choose $\lambda$ completely analogously to the EU framework. Rather, this comparison is made in order to explain how values of the tuning parameter can be inferred as a guideline for practical applications of our framework.
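The critical value $a^*$ admits a closed form, since $\Phi\left(-a^*/\sqrt{T-t}\right) = 3\,\Phi\left(-k/\sqrt{T}\right)$ can be inverted directly. A minimal sketch, with function names of our own choosing:

```python
from math import sqrt
from statistics import NormalDist

Phi, Phi_inv = NormalDist().cdf, NormalDist().inv_cdf

def critical_level(k, T, t, multiple=3.0):
    """Net asset level a* at time t at which the lifetime PD reaches
    `multiple` times the initial lifetime PD; the EU stress-test rule of a
    200% increase corresponds to multiple = 3."""
    pd_initial = Phi(-k / sqrt(T))
    # invert Phi(-a / sqrt(T - t)) = multiple * pd_initial
    return -sqrt(T - t) * Phi_inv(multiple * pd_initial)

# Ten-year loan at its midpoint, with distance to default k = 5.2
a_star = critical_level(5.2, 10.0, 5.0)
```

By construction, the lifetime PD evaluated at $a^*$ equals exactly three times the initial lifetime PD, so the returned level can be compared directly with the optimizer $c^*$ for a given $\lambda$.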

5.2. Analyzing the Income Volatility Portion with Default Data by Standard and Poor’s

To relate our model directly to recent financial data, we use Standard and Poor's (2018) global corporate average default rates and yearly credit rating transition rates. Each credit rating group is assigned one-year and lifetime probabilities of default, based on corporate averages from 1981 to 2017, which apply to any future length of time up until maturity of a loan. Additionally, yearly transition probabilities to all possible credit rating groups are available. From this information, evaluating what percentage of loans experiences a significant increase in credit risk, depending on the threshold $c^*$, is critical in terms of determining the severity of income volatility for a bank in a particular year.
To integrate these data into our problem formulation, we first specify an initial rating group, and subsequently observe the performance at time $t = 1$ year of a loan issued over $T = 10$ years. The transition rates reveal the proportions of loans in each respective rating group possessing lower credit quality one year after origination. Furthermore, under the modelling structure of Section 5.1, possible threshold solutions are approximated using $P(c^* + W_{T-t} < 0) = P(c^* + W_{10-1} < 0) = P(c^* + W_9 < 0)$, in which nine-year probabilities of default over the remaining time to maturity are used, relative to the current credit rating group at time $t = 1$. For various $c^*$ values defined by the cutoff points described by progressive downgrades in credit rating groups, the yearly transition rates reveal the percentage of loans that would transfer to an IFRS 9 bucket of worsened performance status. Assuming each loan examined originated in bucket 1, and that $c^* < k$, Figure 5 displays the influence of the threshold level on the IFRS 9 bucket reclassification rate, for three different initial rating groups.
With an initial rating group of A, the initial distance to default for a ten-year loan is $k = -\sqrt{10}\,\Phi^{-1}(0.0147) \approx 6.89$, and if we take, for example, a downgrade to rating BBB or worse over a one-year period as a significant increase in credit risk, then $c^* = -\sqrt{9}\,\Phi^{-1}(0.0285) \approx 5.71$. From our illustration, it is evident that the loan reclassification rate from one-year ECL to lifetime ECL is more pronounced for lower initial credit rating groups (such as BB), meaning that obligors with a higher risk of non-payment also contribute significantly to increased income volatility for the bank that issues their loans.
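These distances follow directly from inverting $PD = \Phi\left(-k/\sqrt{\text{horizon}}\right)$. A short sketch reproducing the two numbers quoted above (the helper name is ours):

```python
from math import sqrt
from statistics import NormalDist

def implied_distance(pd, horizon):
    """Distance to default implied by a lifetime PD over `horizon` years,
    under the Brownian net-asset model PD = Phi(-k / sqrt(horizon))."""
    return -sqrt(horizon) * NormalDist().inv_cdf(pd)

k = implied_distance(0.0147, 10)      # rating A, ten-year PD of 1.47%
c_star = implied_distance(0.0285, 9)  # rating BBB, nine-year PD of 2.85%
```

The same helper can be applied to any rating group's published PD to obtain the corresponding cutoff point for the threshold.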
The actual calculation of ECL involves sophisticated models for each component, including the PD, EaD, and LGD of a credit facility. For demonstration purposes, a proxy ECL calculation can be created to exemplify the relationship between ECL and IFRS 9 loan reclassification, incorporating the same default data from Standard and Poor's (2018). We still consider the case when $T = 10$, and assess three different samples of 100 similar loans having the same initial and current rating group, namely ratings A, BBB, and BB for the three samples, as of the loan midpoint $t = 5$ years. After year 5, we have three samples of 100 loans each, with one sample experiencing a common migration from rating group A to BBB, the second showing a deterioration from rating group BBB to BB, and the third having a transition from rating group BB to B. In summary, all 100 obligors in a particular sample have undergone a credit quality downgrade of three notches. We then fix $\mathrm{EaD} = \$500{,}000$ and $\mathrm{LGD} = 0.1$, and use both one-year and lifetime PDs (for the five remaining years to maturity) to calculate proxy ECLs, simply as the product of the three components. Figure 6 emphasizes the difference between one-year and lifetime ECL for each sample from an initial rating group, assuming all loans originated in IFRS 9 bucket 1.
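The proxy calculation is simply the product $PD \times EaD \times LGD$ summed over the sample. The sketch below uses hypothetical PD values chosen for illustration only (they are not the Standard and Poor's figures) to show how the one-year versus lifetime choice drives the jump in total ECL:

```python
def proxy_ecl(pd, ead=500_000.0, lgd=0.1, n_loans=100):
    """Proxy expected credit loss for a homogeneous sample of n_loans:
    PD x EaD x LGD per loan, summed over the sample."""
    return n_loans * pd * ead * lgd

# Hypothetical one-year and five-year (lifetime) PDs after a downgrade
pd_one_year, pd_lifetime = 0.035, 0.18

ecl_bucket1 = proxy_ecl(pd_one_year)   # one-year ECL (IFRS 9 bucket 1)
ecl_bucket2 = proxy_ecl(pd_lifetime)   # lifetime ECL (IFRS 9 bucket 2)
jump = ecl_bucket2 - ecl_bucket1       # increase caused by reclassification
```

Even with these illustrative inputs, the reclassification from bucket 1 to bucket 2 multiplies the provision by the ratio of lifetime to one-year PD, which is what makes the stage transfer so consequential for income volatility.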
Essentially, if a downgrade of three rating notches is deemed sufficient to constitute a significant increase in credit risk, lifetime (five-year) ECL is computed. Otherwise, ECL for a one-year horizon is calculated. Consistent with the interpretation of Figure 5, we observe that the effect of reclassification from bucket 1 to bucket 2 is considerably more drastic for inferior rating groups. Clearly, the total ECL for a sample of just 100 loans increases sharply, by over $700,000, when current rating group B, corresponding to initial rating group BB, is classified in IFRS 9 bucket 2 rather than bucket 1.

6. Conclusions

The late recognition of credit losses was a major criticism of the accounting for financial assets during the global financial crisis of 2007–2008. The introduction of IFRS 9 addressed this issue by requiring banks to build provisions for the lifetime ECL, rather than one-year ECL, when the credit risk of a loan has significantly increased. However, IFRS 9 poses to banks the challenging task of defining what exactly such a significant increase in credit risk means for each of their loans. Such a definition embeds the trade-off between early recognition of credit risk and income volatility, which is the downside of frequently reclassifying loans between buckets to account for either one-year or lifetime ECL.
Our approach of introducing and analyzing a statistical framework that models this trade-off makes three contributions to the discussion on implementing IFRS 9 and, more broadly, on the provisioning of ECL. First, it helps to better understand how different factors in credit risk management affect the optimal threshold for what should be considered a significant increase in credit risk. Second, banks can use our framework to define significant increases in credit risk consistently across different loans. Finally, regulators may use our framework to better compare the banks' different definitions of significant increases in credit risk. Because IFRS 9 does not define what constitutes a significant increase in credit risk, our framework gives flexibility to specify the relative importance of early recognition of credit risk compared to reduction of income volatility, although we showed how information about stress tests, such as that of the European Banking Authority (2018), helps as a guideline for the parameter specification.
Beyond the application to IFRS 9, our work shows how interesting research questions emerge: in response to economic and political developments, accounting standards are introduced and revised, which may require mathematical models and lead to statistical questions. Possible future studies in this area at the intersection of accounting and statistics include implementation questions about IFRS 15, which became effective 1 January 2018 and provides guidance on revenue accounting from contracts with customers. IFRS 15 requires operational models for revenues and contract assessments; see PricewaterhouseCoopers (2018). Similarly, accounting of share-based payments to employees may require mathematical and statistical models, as per IFRS 2; see Ernst & Young (2015). Under IFRS 17, whose proposed effective date is 1 January 2022, liabilities from insurance contracts will be calculated using a present-value model with provisions for risk; see PricewaterhouseCoopers (2016). In all such applications, mathematical models together with statistical techniques can provide useful quantitative assessments, but also have limitations. Inevitably, they come with model risk and statistical errors so that they need to be applied in a reasonable way, combined with qualitative assessments.

Author Contributions

Formal analysis, L.E. and C.F.; Methodology, L.E. and C.F.; Writing—original draft, L.E. and C.F.; Writing—review & editing, L.E. and C.F.

Funding

Financial support by the Natural Sciences and Engineering Research Council of Canada under grants RGPIN/402585, EGP/531183 and EGP2/536672 is gratefully acknowledged.

Acknowledgments

The authors would like to thank Richard Hou, Adam Kashlak, and three anonymous referees for valuable comments that helped significantly improve the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
EaD: exposure at default
ECL: expected credit loss
EU: European Union
IFRS: International Financial Reporting Standard
LGD: loss given default
PD: probability of default

References

1. Angilella, Silvia, and Sebastiano Mazzù. 2017. A credit risk model with an automatic override for innovative small and medium-sized enterprises. Journal of the Operational Research Society, 1–17.
2. Basel Committee on Banking Supervision. 2005. An Explanatory Note on the Basel II IRB Risk Weight Functions. Bank for International Settlements. Available online: www.bis.org (accessed on 5 April 2019).
3. Basel Committee on Banking Supervision. 2015. Guidance on Credit Risk and Accounting for Expected Credit Losses. Bank for International Settlements. Available online: www.bis.org (accessed on 5 April 2019).
4. Bluhm, Christian, and Ludger Overbeck. 2003. Systematic risk in homogeneous credit portfolios. In Credit Risk: Measurement, Evaluation and Management (Contributions to Economics). Edited by Georg Bol, Gholamreza Nakhaeizadeh, Svetlozar T. Rachev, Thomas Ridder and Karl-Heinz Vollmer. Berlin/Heidelberg: Springer, pp. 35–48.
5. Bluhm, Christian, Ludger Overbeck, and Christoph Wagner. 2010. An Introduction to Credit Risk Modeling, 2nd ed. Boca Raton: Chapman & Hall/CRC.
6. Bohn, Jeffrey R., and Roger M. Stein. 2009. Active Credit Portfolio Management in Practice. Hoboken: Wiley Finance.
7. Cohen, Benjamin H., and Gerald A. Edwards Jr. 2017. The New Era of Expected Credit Loss Provisioning. BIS Quarterly Review. Available online: www.bis.org (accessed on 5 April 2019).
8. Crook, Jonathan N., David B. Edelman, and Lyn C. Thomas. 2007. Recent developments in consumer credit risk assessment. European Journal of Operational Research 183: 1447–65.
9. Ernst & Young. 2015. Accounting for Share-Based Payments under IFRS 2—The Essential Guide. Available online: www.ey.com (accessed on 5 April 2019).
10. European Banking Authority. 2017. Report on Results from the Second EBA Impact Assessment of IFRS 9. Available online: www.eba.europa.eu (accessed on 5 April 2019).
11. European Banking Authority. 2018. 2018 EU-Wide Stress Test. European Union. Available online: www.eba.europa.eu (accessed on 5 April 2019).
12. Frei, Christoph, and Marcus Wunsch. 2018. Moment estimators for autocorrelated time series and their application to default correlations. Journal of Credit Risk 14: 1–29.
13. Gordy, Michael B. 2000. A comparative anatomy of credit risk models. Journal of Banking and Finance 24: 119–49.
14. Kallenberg, Olav. 2002. Foundations of Modern Probability, 2nd ed. Berlin/Heidelberg: Springer.
15. Kumar, P. Ravi, and Vadlamani Ravi. 2007. Bankruptcy prediction in banks and firms via statistical and intelligent techniques—A review. European Journal of Operational Research 180: 1–28.
16. Maggi, Filippo, Alfonso Natale, Theodore Pepanides, Enrico Risso, and Gerhard Schröck. 2017. IFRS 9: A Silent Revolution in Banks' Business Models. New York: McKinsey & Company.
17. McNeil, Alexander J., Rüdiger Frey, and Paul Embrechts. 2015. Quantitative Risk Management: Concepts, Techniques and Tools, rev. ed. Princeton: Princeton University Press.
18. Merton, Robert C. 1974. On the pricing of corporate debt: The risk structure of interest rates. Journal of Finance 29: 449–70.
19. Miu, Peter, and Bogie Ozdemir. 2009. Stress-testing probability of default and migration rate with respect to Basel II requirements. Journal of Risk Model Validation 3: 1–36.
20. Miu, Peter, and Bogie Ozdemir. 2017. Adapting the Basel II advanced internal-ratings-based models for International Financial Reporting Standard 9. Journal of Credit Risk 13: 53–83.
21. PricewaterhouseCoopers. 2016. The Wait Is Nearly Over? IFRS 17 Is Coming, Are You Prepared for It? Available online: www.pwc.com (accessed on 5 April 2019).
22. PricewaterhouseCoopers. 2018. Revenue from Contracts with Customers. Available online: www.pwc.com (accessed on 5 April 2019).
23. Randall, Mark, and Sandra Thompson. 2017. IFRS 9 Impairment: Significant Increase in Credit Risk: PwC in Depth. Available online: www.pwc.com (accessed on 5 April 2019).
24. Simons, Dietske, and Ferdinand Rolwes. 2009. Macroeconomic default modeling and stress testing. International Journal of Central Banking 5: 177–204.
25. Skoglund, Jimmy, and Wei Chen. 2016. The application of credit risk models to macroeconomic scenario analysis and stress testing. Journal of Credit Risk 12: 1–45.
26. Standard and Poor's. 2018. Default, Transition, and Recovery: 2017 Annual Global Corporate Default Study and Rating Transitions. Available online: www.spratings.com (accessed on 5 April 2019).
27. Stulz, Rene M. 2015. Risk-taking and risk management by banks. Journal of Applied Corporate Finance 27: 8–18.
28. Yang, Bill Huajian, and Zunwei Du. 2015. Stress testing and modeling of rating migration under the Vasicek model framework: Empirical approaches and technical implementation. Journal of Risk Model Validation 9: 33–47.
Figure 1. Simulation of the net asset value of an obligor, with $k = 5.2$ and $c = 2$, arbitrarily chosen, for a ten-year loan. $(A_t)_{0 \le t \le T}$ is modelled by Brownian motion in this illustration.
Figure 2. Solutions $c^*$ versus corresponding $\lambda$ values, with $k = 3.5$, $\theta = 14$ and $\delta = 3.6$. A specific solution, for $\lambda = 3$ ($c^* \approx 2.31$), is labelled in red.
Figure 3. Plot of $f(c)$ for various $\lambda$, with $T = 10$, $N = 10$, and $k \approx 5.2$. The solutions, determined numerically, are $c^* \approx 2.32$ ($\lambda = 5.5$), $c^* \approx 2.07$ ($\lambda = 6$) and $c^* \approx 1.92$ ($\lambda = 6.5$).
Figure 4. Plot of $P\left(c^* + W_{T/2} < 0\right)$ versus $P(A_T < 0)$, for various $\lambda$, with $T = 10$. The dashed line shows the relationship outlined by the EU stress test, which is linear by definition.
Figure 5. Plot of the reclassification percentage from IFRS 9 bucket 1 to buckets 2 or 3, after year one of a ten-year loan, for different initial ratings based on Standard and Poor's global corporate averages from 1981 to 2017.
Figure 6. Plot of total ECL versus the number of years involved in the calculation, for samples of 100 loans grouped in the same rating at the midpoint of a ten-year loan. For each of the three current rating groups shown, we assume that these loans originated in IFRS 9 bucket 1, with the same initial rating.
Table 1. ECL calculation criteria for loans, relative to their IFRS 9 classification.

IFRS 9   | Loan Type       | ECL
bucket 1 | performing      | one-year
bucket 2 | underperforming | lifetime
bucket 3 | impaired        | lifetime

Share and Cite

Ewanchuk, L.; Frei, C. Recent Regulation in Credit Risk Management: A Statistical Framework. Risks 2019, 7, 40. https://doi.org/10.3390/risks7020040


