Risks doi: 10.3390/risks5040064

Authors: Krzysztof Burnecki Mario Giuricich

We consider the problem of approximating tail probabilities in the general compound renewal process framework, where severity data are assumed to follow a heavy-tailed law (in that only the first moment is assumed to exist). Using the weak convergence of compound renewal processes to α-stable Lévy motion, we derive such weak approximations. Their applicability is then highlighted in the context of an existing, classical, index-linked catastrophe bond pricing model, and in doing so, we specialize these approximations to the case of a compound time-inhomogeneous Poisson process. We emphasize a unique feature of our approximation: it only demands finiteness of the first moment of the aggregate loss processes. Finally, a numerical illustration is presented, in which the behavior of our approximations is compared to both Monte Carlo simulations and first-order single risk loss process approximations and is found to compare favorably.
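
A minimal sketch (illustrative parameters, not the paper's data or its stable-limit result) of the first-order single risk benchmark the abstract mentions: for subexponential severities, P(S(t) > x) is approximately E[N(t)] P(X_1 > x) for large x. Here we check that rule against Monte Carlo for a compound Poisson sum of Pareto jumps with tail index 1.5, so that only the first moment exists.

```python
import numpy as np

rng = np.random.default_rng(0)

lam_t = 5.0            # E[N(t)], expected number of claims by time t
alpha, x_m = 1.5, 1.0  # Pareto tail index and scale (finite mean only)
x = 200.0              # tail level of interest

# First-order single risk approximation: E[N(t)] * P(X_1 > x)
approx = lam_t * (x_m / x) ** alpha

# Monte Carlo benchmark for the aggregate loss S(t)
n_sims = 200_000
counts = rng.poisson(lam_t, n_sims)
jumps = x_m * (1.0 - rng.random(counts.sum())) ** (-1.0 / alpha)  # Pareto draws
starts = np.concatenate(([0], np.cumsum(counts)[:-1]))
totals = np.add.reduceat(np.concatenate((jumps, [0.0])), starts)
totals[counts == 0] = 0.0  # reduceat misreports empty segments; zero them out
mc = (totals > x).mean()
```

For this tail level the two estimates agree to within Monte Carlo noise, which is the regime in which such first-order approximations are usually trusted.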

Risks doi: 10.3390/risks5040063

Authors: Luca Regis

The aim of the Special Issue is to address some of the main challenges individuals and companies face in managing financial and actuarial risks, when dealing with their investment/retirement or business-related decisions [...]

Risks doi: 10.3390/risks5040062

Authors: Nguyet Nguyen

Future stock prices depend on many internal and external factors that are not easy to evaluate. In this paper, we use the Hidden Markov Model (HMM) to predict the daily stock prices of three actively traded stocks: Apple, Google, and Facebook, based on their historical data. We first use the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) to choose the number of states for the HMM. We then use the models to predict the close prices of these three stocks using both single-observation and multiple-observation data. Finally, we use the predictions as signals for trading these stocks. The criteria tests’ results showed that the HMM with two states worked best among the two-, three- and four-state models for all three stocks. Our results also demonstrate that the HMM outperformed the naïve method in forecasting stock prices, and that active traders using the HMM obtained a higher return than those using the naïve forecast for the Facebook and Google stocks. The stock price prediction method has a significant impact on stock trading and derivative hedging.
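
The AIC/BIC model-selection step can be sketched as follows. The log-likelihoods below are hypothetical placeholders for values that a fitted Gaussian HMM with k = 2, 3, 4 hidden states would return on one stock's closing prices; the parameter count is for a one-dimensional Gaussian HMM.

```python
import math

def n_params(k):
    # Gaussian HMM, k states, 1-dim emissions:
    # (k-1) initial probs + k*(k-1) transition probs + k means + k variances
    return (k - 1) + k * (k - 1) + 2 * k

def aic(log_lik, k):
    return 2 * n_params(k) - 2 * log_lik

def bic(log_lik, k, n_obs):
    return n_params(k) * math.log(n_obs) - 2 * log_lik

n_obs = 500
log_liks = {2: -1210.0, 3: -1205.5, 4: -1202.0}  # hypothetical fit results

best_aic = min(log_liks, key=lambda k: aic(log_liks[k], k))
best_bic = min(log_liks, key=lambda k: bic(log_liks[k], k, n_obs))
```

With these illustrative numbers, the small likelihood gains of the larger models do not pay for their extra parameters, so both criteria select the two-state model, mirroring the abstract's finding.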

Risks doi: 10.3390/risks5040061

Authors: Peter Carr

Diffusions are widely used in finance due to their tractability. Driftless diffusions are needed to describe ratios of asset prices under a martingale measure. We provide a simple example of a tractable driftless diffusion which also has a bounded state space.

Risks doi: 10.3390/risks5040058

Authors: Arthur Charpentier Arthur David Romuald Elie

In this paper, we investigate the impact of the accident reporting strategy of drivers within a Bonus-Malus system. We exhibit the induced modification of the corresponding class-level transition matrix and derive the optimal reporting strategy for rational drivers. The hunger for bonuses induces optimal thresholds below which drivers do not claim their losses. Mathematical properties of the induced level class process are studied. A convergent numerical algorithm is provided for computing such thresholds, and realistic numerical applications are discussed.

Risks doi: 10.3390/risks5040060

Authors: Enrique Calderín-Ojeda Kevin Fergusson Xueyuan Wu

Generalized linear models might not be appropriate when the probability of extreme events is higher than that implied by the normal distribution. Extending the method of Reed and Jorgensen (2004) for estimating the parameters of a double Pareto lognormal (DPLN) distribution, we develop an EM algorithm for the heavy-tailed DPLN generalized linear model. The DPLN distribution is obtained as a mixture of a lognormal distribution with a double Pareto distribution. In this paper, the associated generalized linear model has its location parameter equal to a linear predictor, which is used to model insurance claim amounts for various data sets. The performance is compared with those of the generalized beta (of the second kind) and lognormal distributions.

Risks doi: 10.3390/risks5040059

Authors: Sebastian Fuchs Ruben Schlotter Klaus Schmidt

In the present paper, we study quantile risk measures and their domain. Our starting point is that, for a probability measure Q on the open unit interval and a wide class L_Q of random variables, we define the quantile risk measure ϱ_Q as the map that integrates the quantile function of a random variable in L_Q with respect to Q. The definition of L_Q ensures that ϱ_Q cannot attain the value +∞ and cannot be extended beyond L_Q without losing this property. The notion of a quantile risk measure is a natural generalization of that of a spectral risk measure and provides another view of the distortion risk measures generated by a distribution function on the unit interval. In this general setting, we prove several results on quantile or spectral risk measures and their domain, with special consideration of the expected shortfall. We also present a particularly short proof of the subadditivity of expected shortfall.
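
A concrete special case helps fix ideas: taking Q to be the uniform measure on (α, 1) in the quantile-integral definition recovers expected shortfall. The sketch below (illustrative, using a simple empirical quantile convention) evaluates it as the mean of the worst (1 - α) fraction of outcomes.

```python
import numpy as np

def expected_shortfall(sample, alpha):
    # integrate the empirical quantile function over (alpha, 1):
    # average of the worst (1 - alpha) fraction of the sorted sample
    x = np.sort(np.asarray(sample, dtype=float))
    k = int(np.ceil(alpha * len(x)))
    return x[k:].mean()

losses = np.arange(1, 101)  # losses 1, 2, ..., 100
es_95 = expected_shortfall(losses, 0.95)  # mean of the worst 5 losses
```

Here es_95 is the average of the five largest losses 96, ..., 100, i.e. 98.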

Risks doi: 10.3390/risks5040057

Authors: Gaurav Khemka Adam Butt

This paper considers an alternative way of structuring stochastic variables in a dynamic programming framework where the model structure dictates that numerical methods of solution are necessary. Rather than estimating integrals within a Bellman equation using quadrature nodes, we use nodes directly from the underlying data. An example of the application of this approach is presented using individual lifetime financial modelling. The results show that the data-driven method leads to the smallest loss in result accuracy compared to quadrature and Quasi-Monte Carlo approaches, using historical data as a base. These results hold for both a single stochastic variable and multiple stochastic variables. The results are significant for improving the computational accuracy of lifetime financial models and other models that employ stochastic dynamic programming.

Risks doi: 10.3390/risks5040056

Authors: Mohamed Abdelghani Alexander Melnikov

The paper deals with defaultable markets, one of the main research areas of mathematical finance. It proposes a new approach to the theory of such markets, using techniques from the calculus of optional stochastic processes on unusual probability spaces, which has not been presented before. The paper is a foundation paper and contains a number of fundamental results on the modeling of defaultable markets, the pricing and hedging of defaultable claims, and results on the probability of default under such conditions. Moreover, several important examples are presented: a new pricing formula for a defaultable bond and a new pricing formula for a credit default swap. Furthermore, some results on the absence of arbitrage for markets on unusual probability spaces and markets with default are also provided.

Risks doi: 10.3390/risks5040055

Authors: Georges Dionne Sara Malekan

We address the moral hazard problem of securitization using a principal-agent model where the investor is the principal and the lender is the agent. Our model considers structured asset-backed securitization with a credit enhancement (tranching) procedure. We assume that the originator can affect the default probability and the conditional loss distribution. We show that the optimal form of retention must be proportional to the pool default loss even in the absence of systemic risk when the originator can affect the conditional loss given default rate, yet the current regulations propose a constant retention rate.

Risks doi: 10.3390/risks5040054

Authors: Jean-Philippe Boucher Steven Côté Montserrat Guillen

In Pay-As-You-Drive (PAYD) automobile insurance, the premium is fixed based on the distance traveled, while in usage-based insurance (UBI) the driving patterns of the policyholder are also considered. In those schemes, drivers who drive more pay a higher premium than those with the same characteristics who drive only occasionally, because the former are more exposed to the risk of accident. In this paper, we analyze the simultaneous effect of the distance traveled and exposure time on the risk of accident by using Generalized Additive Models (GAM). We carry out an empirical application and show that the expected number of claims (1) stabilizes once a certain accumulated distance driven is reached and (2) is not proportional to the duration of the contract, which contradicts insurance practice. Finally, we propose a rating system that takes into account both exposure time and distance traveled simultaneously in the premium calculation. We think this is the trend the automobile insurance market will follow with the advent of telematics data.

Risks doi: 10.3390/risks5040053

Authors: Gareth Peters Rodrigo Targino Mario Wüthrich

The main objective of this work is to develop a detailed step-by-step guide to the development and application of a new class of efficient Monte Carlo methods to solve practically important problems faced by insurers under the new solvency regulations. In particular, a novel Monte Carlo method to calculate capital allocations for a general insurance company is developed, with a focus on coherent capital allocation that is compliant with the Swiss Solvency Test. The data used is based on the balance sheet of a representative stylized company. For each line of business in that company, allocations are calculated for the one-year risk with dependencies based on correlations given by the Swiss Solvency Test. Two different approaches for dealing with parameter uncertainty are discussed and simulation algorithms based on (pseudo-marginal) Sequential Monte Carlo algorithms are described and their efficiency is analysed.

Risks doi: 10.3390/risks5040052

Authors: A. Seetharaman Vikas Kumar Sahu A. Saravanan John Rudolph Raj Indu Niranjan

An empirical study was conducted to determine the impact of different types of risk on the performance management of credit rating agencies (CRAs). The different types of risks were classified as operational, market, business, financial, and credit. All these five variables were analysed to ascertain their impact on the performance of CRAs. In addition, apart from identifying the significant variables, the study focused on setting out a structured framework for future research. The five independent variables were tested statistically using structural equation modelling (SEM). The results indicated that market risk, financial risk, and credit risk have a significant impact on the performance of CRAs, whereas operational risk and business risk, though important, do not have a significant influence. This finding has a significant implication for the examination and inter-firm evaluation of CRAs.

Risks doi: 10.3390/risks5030051

Authors: Carolyn W. Chang Jack S. K. Chang

We propose an integrated approach, straddling the actuarial science and mathematical finance approaches, to pricing a default-risky catastrophe reinsurance contract. We first apply an incomplete-market version of the no-arbitrage martingale pricing paradigm to price the reinsurance contract as a martingale by a measure change; then, as in traditional actuarial practice, we apply risk loading to price in market imperfections, the underwriting cycle, and other idiosyncratic factors identified in the practice and empirical literatures. This integrated approach is theoretically appealing for its merit of factoring risk premiums into the probability measure, yet practical in being applicable to a contract not traded on financial markets. We numerically study the catastrophe pricing effects and find that the reinsurance contract is more valuable when the catastrophe is more severe and the reinsurer’s default risk is lower because of a stronger balance sheet. We also find that the price is more sensitive to the severity of catastrophes than to their arrival frequency, implying that (re)insurers should focus more on hedging severity than arrival frequency in their risk management programs.
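
The two-step pricing idea can be sketched in miniature (all parameters and distributional choices below are illustrative assumptions, not the paper's model): first compute the expected discounted payoff of an excess-of-loss layer under an assumed risk-neutral measure by Monte Carlo, then apply an actuarial risk loading on top.

```python
import numpy as np

rng = np.random.default_rng(1)

lam = 2.0                        # catastrophe arrival intensity
mu, sigma = 1.0, 0.8             # lognormal severity parameters
attachment, limit = 5.0, 20.0    # layer pays min((S - attachment)^+, limit)
r, t, loading = 0.03, 1.0, 0.2   # discount rate, horizon, actuarial loading

n_sims = 20_000
counts = rng.poisson(lam * t, n_sims)
payoffs = np.empty(n_sims)
for i, c in enumerate(counts):
    s = rng.lognormal(mu, sigma, c).sum()        # aggregate catastrophe loss
    payoffs[i] = min(max(s - attachment, 0.0), limit)

martingale_price = np.exp(-r * t) * payoffs.mean()  # no-arbitrage component
loaded_price = martingale_price * (1.0 + loading)   # loading for imperfections
```

The loading step is where market imperfections and the underwriting cycle enter; in the paper itself the risk premium is instead absorbed into the measure change.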

Risks doi: 10.3390/risks5030050

Authors: Silvia Romagnoli Simona Santoro

After the financial crisis, the role of uncertainty in decision-making processes has largely been recognized as a new variable that contributes to shaping interest rates and bond prices. Our aim is to discuss the impact of ambiguity on bond interest rates (yields). Starting from the realistic assumption that investors ask for an ambiguity premium depending on the efficacy of government interventions (if any), we arrive at an exponential multi-factor affine model which includes ambiguity, as well as an ambiguous version of the Heath-Jarrow-Morton (HJM) model. As an example, we consider the realistic economic framework given by Ulrich (2008, 2011), and we recover the corresponding ambiguous HJM framework, thus offering a large set of interest rate models enriched with ambiguity. We also give a concrete view of how different simulated scenarios of ambiguity can influence the economic cycle (through rates and bond prices).

Risks doi: 10.3390/risks5030049

Authors: Daoping Yu Vytaras Brazauskas

Over the last decade, researchers, practitioners, and regulators have had intense debates about how to treat the data collection threshold in operational risk modeling. Several approaches have been employed to fit the loss severity distribution: the empirical approach, the “naive” approach, the shifted approach, and the truncated approach. Since each approach is based on a different set of assumptions, different probability models emerge. Thus, model uncertainty arises. The main objective of this paper is to understand the impact of model uncertainty on the value-at-risk (VaR) estimators. To accomplish that, we take the bank’s perspective and study a single risk. Under this simplified scenario, we can solve the problem analytically (when the underlying distribution is exponential) and show that it uncovers similar patterns among VaR estimates to those based on the simulation approach (when data follow a Lomax distribution). We demonstrate that for a fixed probability distribution, the choice of the truncated approach yields the lowest VaR estimates, which may be viewed as beneficial to the bank, whilst the “naive” and shifted approaches lead to higher estimates of VaR. The advantages and disadvantages of each approach and the probability distributions under study are further investigated using a real data set for legal losses in a business unit (Cruz 2002).
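
The three fitting approaches can be sketched in the analytically tractable exponential case the abstract mentions (parameters are illustrative, not the paper's data). Losses are recorded only above a collection threshold H; by the memoryless property, the observed data are H plus an exponential, which makes the three estimators easy to write down and compare.

```python
import numpy as np

rng = np.random.default_rng(2)
H, theta_true, p = 25.0, 100.0, 0.99

full = rng.exponential(theta_true, 50_000)
obs = full[full > H]            # only losses above the threshold are recorded

def var_exp(theta, p):
    return -theta * np.log(1.0 - p)   # VaR of Exp(theta) at level p

# truncated MLE for the ground-up scale; for the exponential this equals the
# shifted fit because of memorylessness
theta_hat = obs.mean() - H

var_naive = var_exp(obs.mean(), p)       # pretend obs is the ground-up sample
var_shifted = H + var_exp(theta_hat, p)  # model X - H, then add H back
var_truncated = var_exp(theta_hat, p)    # ground-up model from conditional fit
```

The ordering var_truncated < var_shifted < var_naive holds here by construction, matching the abstract's finding that the truncated approach yields the lowest VaR estimates while the "naive" and shifted approaches lead to higher ones.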

Risks doi: 10.3390/risks5030048

Authors: Daniel Leonhardt Antony Ware Rudi Zagst

Energy commodities and their futures naturally show cointegrated price movements. However, there is empirical evidence that the prices of futures with different maturities might have, e.g., different jump behaviours in different market situations. Observing commodity futures over time, there is also evidence for different states of the underlying volatility of the futures. In this paper, we therefore allow for cointegration of the term structure within a multi-factor model, which includes seasonality, as well as joint and individual jumps in the price processes of futures with different maturities. The seasonality in this model is realized via a deterministic function, and the jumps are represented with thinned-out compound Poisson processes. The model also includes a regime-switching approach that is modelled through a Markov chain and extends the class of geometric models. We show how the model can be calibrated to empirical data and give some practical applications.

Risks doi: 10.3390/risks5030047

Authors: Johan Andréasson Pavel Shevchenko

Means-tested pension policies are typical for many countries, and the assessment of policy changes is critical for policy makers. In this paper, we consider the Australian means-tested Age Pension. In 2015, two important changes were made to the popular Allocated Pension accounts: the income means-test is now based on deemed income rather than account withdrawals, and the income-test deduction no longer applies. We examine the implications of the new changes in regard to optimal decisions for consumption, investment and housing. We account for regulatory minimum withdrawal rules that are imposed by regulations on Allocated Pension accounts, as well as the 2017 asset-test rebalancing. The policy changes are considered under a utility-maximising life cycle model solved as an optimal stochastic control problem. We find that the new rules decrease the advantages of planning the consumption in relation to the means-test, while risky asset allocation becomes more sensitive to the asset-test. The difference in optimal drawdown between the old and new policy is only noticeable early in retirement until regulatory minimum withdrawal rates are enforced. However, the amount of extra Age Pension received by many households is now significantly higher due to the new deeming income rules, which benefit wealthier households who previously would not have received Age Pension due to the income-test and minimum withdrawals.

Risks doi: 10.3390/risks5030046

Authors: Knut Aase

We reconsider costs in insurance and suggest a new type of cost function, which we argue is a natural choice when there are relatively small, but frequent, claims. If a fixed cost is incurred each time a claim is made, we obtain a Pareto optimal deductible even if the cost function does not vary with the indemnity. The classical result says that deductibles appear if and only if costs are variable. This implies that when the claims are relatively small, it is not optimal for the insured to be compensated, since the costs outweigh the benefits, and a deductible will naturally occur. When we constrain the contract to contain a cap, a non-trivial deductible is Pareto optimal regardless of the assumptions about the cost structure; such a capped contract with a deductible is what is known as an XL-contract.

Risks doi: 10.3390/risks5030044

Authors: Andreas Hermes Stanislaus Maier-Paape

In this paper, the multivariate fractional trading ansatz of money management from Vince (1990) is discussed. In particular, we prove the existence and uniqueness of an “optimal f” for the respective optimization problem under reasonable assumptions on the trade return matrix. This result generalizes a similar result for the univariate fractional trading ansatz. Furthermore, our result guarantees that the multivariate optimal f solutions can always be found numerically by steepest ascent methods.

Risks doi: 10.3390/risks5030045

Authors: George-Jason Siouris Alex Karagrigoriou

In this work, we focus on volatility estimation, which plays a crucial role in risk analysis and management. In order to improve value-at-risk (VaR) forecasts, we discuss the concept of the low price effect and introduce the low price correction, which requires no additional parameters and takes into account the prices of the asset rather than only its returns. Judgement on the forecasting quality of the proposed methodology is based on both the relative number of violations and VaR volatility. For illustrative purposes, a real example from the Athens Stock Exchange is fully explored.

Risks doi: 10.3390/risks5030043

Authors: Dimitrina Dimitrova Zvetan Ignatov Vladimir Kaishev

We derive a closed form expression for the probability that a non-decreasing, pure jump stochastic risk process with the order statistics (OS) property will not exit the strip between two non-decreasing, possibly discontinuous, time-dependent boundaries, within a finite time interval. The result yields new expressions for the ruin probability in the insurance and the dual risk models with dependence between the claim severities or capital gains respectively.

Risks doi: 10.3390/risks5030042

Authors: Dorota Toczydlowska Gareth Peters Man Fung Pavel Shevchenko

In this study, we develop a multi-factor extension of the family of Lee-Carter stochastic mortality models. We build upon the time, period and cohort stochastic model structure to extend it to include exogenous observable demographic features that can be used as additional factors to improve model fit and forecasting accuracy. We develop a dimension reduction feature extraction framework which (a) employs projection-based techniques of dimensionality reduction; in doing this, we also (b) develop a robust feature extraction framework that is amenable to different structures of demographic data; (c) analyse demographic data sets for patterns of missingness and the impact of such missingness on the feature extraction; (d) introduce a class of multi-factor stochastic mortality models incorporating time, period, cohort and demographic features, which we develop within a Bayesian state-space estimation framework; and finally (e) develop an efficient combined Markov chain and filtering framework for sampling the posterior and forecasting. We undertake a detailed case study on Human Mortality Database demographic data from European countries, and we use the extracted features to better explain the term structure of mortality in the UK over time for male and female populations, when compared to a pure Lee-Carter stochastic mortality model. This demonstrates that our feature extraction framework and the consequent multi-factor mortality model improve both in-sample fit and, importantly, out-of-sample mortality forecasts by a non-trivial gain in performance.

Risks doi: 10.3390/risks5030041

Authors: Sabyasachi Guharay KC Chang Jie Xu

Value-at-Risk (VaR) is a well-accepted risk metric in modern quantitative risk management (QRM). The classical Monte Carlo simulation (MCS) approach, denoted henceforth as the classical approach, assumes the independence of loss severity and loss frequency. In practice, this assumption does not always hold true. Through mathematical analyses, we show that the classical approach is prone to significant biases when the independence assumption is violated. This is also corroborated by studying both simulated and real-world datasets. To overcome the limitations and to more accurately estimate VaR, we develop and implement the following two approaches for VaR estimation: the data-driven partitioning of frequency and severity (DPFS) using clustering analysis, and copula-based parametric modeling of frequency and severity (CPFS). These two approaches are verified using simulation experiments on synthetic data and validated on five publicly available datasets from diverse domains; namely, the financial indices data of Standard & Poor’s 500 and the Dow Jones industrial average, chemical loss spills as tracked by the US Coast Guard, Australian automobile accidents, and US hurricane losses. The classical approach estimates VaR inaccurately for 80% of the simulated data sets and for 60% of the real-world data sets studied in this work. Both the DPFS and the CPFS methodologies attain VaR estimates within 99% bootstrap confidence interval bounds for both simulated and real-world data. We provide a process flowchart for risk practitioners describing the steps for using the DPFS versus the CPFS methodology for VaR estimation in real-world loss datasets.
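
The bias mechanism can be illustrated with a small simulation (not the paper's DPFS or CPFS estimators; all parameters are assumptions): when frequency and severity share a latent driver, simulating them independently, as in the classical approach, understates tail VaR.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30_000

z = rng.standard_normal(n)                   # common latent factor
counts = rng.poisson(np.exp(1.0 + 0.5 * z))  # loss frequency depends on z
sev_scale = np.exp(0.5 * z)                  # severity scale depends on z too

# Aggregate losses acknowledging the dependence
dep_total = np.array(
    [rng.exponential(s, c).sum() for s, c in zip(sev_scale, counts)]
)

# Classical approach: same marginals, but the frequency/severity link is broken
counts_shuffled = rng.permutation(counts)
indep_total = np.array(
    [rng.exponential(s, c).sum() for s, c in zip(sev_scale, counts_shuffled)]
)

var_dep = np.quantile(dep_total, 0.99)      # VaR under dependence
var_indep = np.quantile(indep_total, 0.99)  # classical (downward-biased) VaR
```

With positive co-movement between counts and severities, the dependent 99% VaR clearly exceeds the independence-based one, which is the direction of bias the abstract describes.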

Risks doi: 10.3390/risks5030040

Authors: Wolf-Dieter Richter

For evaluating the probabilities of arbitrary random events with respect to a given multivariate probability distribution, specific techniques are of great interest. An important two-dimensional high risk limit law is the Gauss-exponential distribution, whose probabilities can be dealt with based on the Gauss–Laplace law. The latter will be considered here as an element of the newly-introduced family of (p,q)-spherical distributions. Based on a suitably-defined non-Euclidean arc-length measure on (p,q)-circles, we prove geometric and stochastic representations of these distributions and correspondingly distributed random vectors, respectively. These representations allow dealing with the new probability measures similarly to elliptically-contoured distributions and more general homogeneous star-shaped ones. This is demonstrated by a generalization of the Box–Muller simulation method. In passing, we prove an extension of the sector and circle number functions.
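
For reference, the classical Box–Muller method that the paper generalizes to the (p,q)-spherical setting: two independent uniforms become two independent standard normals through a polar (radius and angle) representation.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
u, v = rng.random(n), rng.random(n)

radius = np.sqrt(-2.0 * np.log(1.0 - u))  # radial part; 1 - u avoids log(0)
z1 = radius * np.cos(2.0 * np.pi * v)     # two independent N(0, 1) samples
z2 = radius * np.sin(2.0 * np.pi * v)
```

The generalization in the paper replaces the Euclidean radius and angle by their (p,q)-circle analogues, built from the non-Euclidean arc-length measure.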

Risks doi: 10.3390/risks5030039

Authors: Mathias Lindholm Filip Lindskog Felix Wahl

This paper provides a complete program for the valuation of aggregate non-life insurance liability cash flows based on claims triangle data. The valuation is fully consistent with the principle of valuation by considering the costs associated with a transfer of the liability to a so-called reference undertaking subject to capital requirements throughout the runoff of the liability cash flow. The valuation program includes complete details on parameter estimation, bias correction and conservative estimation of the value of the liability under partial information. The latter is based on a new approach to the estimation of mean squared error of claims reserve prediction.

Risks doi: 10.3390/risks5030038

Authors: Matthias Fischer Daniel Kraus Marius Pfeuffer Claudia Czado

Measuring interdependence between probabilities of default (PDs) in different industry sectors of an economy plays a crucial role in financial stress testing. To this end, regression approaches may be employed to model the impact of stressed industry sectors as covariates on other response sectors. We identify vine copula based quantile regression as an eligible tool for conducting such stress tests, as this method has good robustness properties, takes into account potential nonlinearities of conditional quantile functions and ensures that no quantile crossing effects occur. We illustrate its performance on a data set of sector-specific PDs for the German economy. Empirical results are provided for a rough and a fine-grained industry sector classification scheme. Among other findings, we confirm that a stressed automobile industry has a severe impact on the German economy as a whole at different quantile levels, whereas, e.g., for a stressed financial sector the impact is rather moderate. Moreover, the vine copula based quantile regression approach is benchmarked against both classical linear quantile regression and expectile regression in order to illustrate its methodological effectiveness in the scenarios evaluated.

Risks doi: 10.3390/risks5030036

Authors: Hirbod Assa Nikolay Gospodinov

This paper proposes a model-free approach to hedging and pricing in the presence of market imperfections such as market incompleteness and frictions. The generality of this framework allows us to conduct an in-depth theoretical analysis of hedging strategies with a wide family of risk measures and pricing rules, and study the conditions under which the hedging problem admits a solution and pricing is possible. The practical implications of our proposed theoretical approach are illustrated with an application on hedging economic risk.

Risks doi: 10.3390/risks5030037

Authors: John Fry Andrew Brint

In this paper we develop a well-established financial model to investigate whether bubbles were present in opinion polls and betting markets prior to the UK’s vote on EU membership on 23 June 2016. The importance of our contribution is threefold. Firstly, our continuous-time model allows for irregularly spaced time series—a common feature of polling data. Secondly, we build on qualitative comparisons that are often made between market cycles and voting patterns. Thirdly, our approach is theoretically elegant. Thus, where bubbles are found we suggest a suitable adjustment. We find evidence of bubbles in polling data. This suggests they systematically over-estimate the proportion voting for remain. In contrast, bookmakers’ odds appear to show none of this bubble-like over-confidence. However, implied probabilities from bookmakers’ odds appear remarkably unresponsive to polling data that nonetheless indicates a close-fought vote.

Risks doi: 10.3390/risks5030034

Authors: Carlo Maccheroni Samuel Nocito

This work proposes a backtesting analysis that compares the Lee–Carter and the Cairns–Blake–Dowd mortality models, employing Italian data. The mortality data come from the Italian National Statistics Institute (ISTAT) database and span the period 1975–2014, over which we computed back-projections evaluating the performance of the models against real data. We propose three different backtest approaches, evaluating the goodness of short-run forecasts versus medium-length ones. We find that neither model was able to capture the improvement shock on mortality observed for the male population over the analysed period. Moreover, the results suggest that CBD forecasts are reliable predominantly for ages above 75, and that LC forecasts are generally more accurate for these data.

Risks doi: 10.3390/risks5030035

Authors: Iain Clark Saeed Amen

Much of the debate around a potential British exit (Brexit) from the European Union has centred on the potential macroeconomic impact. In this paper, we instead focus on understanding market expectations for price action around the Brexit referendum date. Extracting implied distributions from the GBPUSD option volatility surface, we originally estimated, based on our visual observation of implied probability densities available up to 13 June 2016, that the market expected that a vote to leave could result in a move in the GBPUSD exchange rate from 1.4390 (spot reference on 10 June 2016) down to a range in 1.10 to 1.30, i.e., a 10–25% decline—very probably with highly volatile price action. To quantify this more objectively, we construct a mixture model corresponding to two scenarios for the GBPUSD exchange rate after the referendum vote, one scenario for “remain” and one for “leave”. Calibrating this model to four months of market data, from 24 February to 22 June 2016, we find that a “leave” vote was associated with a predicted devaluation of the British pound to approximately 1.37 USD per GBP, a 4.5% devaluation, and quite consistent with the observed post-referendum exchange rate move down from 1.4877 to 1.3622. We contrast the behaviour of the GBPUSD option market in the run-up to the Brexit vote with that during the 2014 Scottish Independence referendum, finding the potential impact of Brexit to be considerably higher.
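
The two-scenario mixture can be sketched with simple arithmetic (the numbers below are illustrative assumptions, loosely inspired by the paper's figures, not its calibrated values): the pre-vote forward rate is a probability-weighted average of the "remain" and "leave" levels, so given any two of the three quantities, the third can be backed out.

```python
p_leave = 0.30       # assumed market-implied probability of a "leave" vote
remain_level = 1.50  # assumed post-vote GBPUSD level on "remain"
leave_level = 1.37   # assumed post-vote GBPUSD level on "leave"

# pre-vote forward rate as the mixture mean of the two scenarios
forward = p_leave * leave_level + (1.0 - p_leave) * remain_level

# inversion: given the forward and the remain level, back out the leave level
implied_leave = (forward - (1.0 - p_leave) * remain_level) / p_leave
```

The paper's calibration does this at the level of whole implied distributions on the volatility surface rather than point levels, but the accounting identity is the same.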

Risks doi: 10.3390/risks5030033

Authors: Marta Ferreira Helena Ferreira

Pareto processes are suitable to model stationary heavy-tailed data. Here, we consider the auto-regressive Gaver–Lewis Pareto Process and address a study of its tail behavior. We characterize its local and long-range dependence. We will see that consecutive observations are asymptotically tail independent, a feature that is often misevaluated by the most common extremal models and that is strongly relevant to tail inference. This also reveals clustering at “penultimate” levels. Linear correlation may not exist in a heavy-tailed context, and an alternative diagnostic tool will be presented. The derived properties relate to the auto-regressive parameter of the process and will provide estimators. A comparison of the proposals is conducted through simulation, and an application to a real dataset illustrates the procedure.

Risks doi: 10.3390/risks5020032

Authors: Robert Rietz Evan Cronick Shelby Mathers Matt Pollie

This paper examines the effect of gainsharing provisions on the selection of a discount rate for a defined benefit pension plan. The paper uses a traditional actuarial approach of discounting liabilities using the expected return of the associated pension fund. A stochastic Excel model was developed to simulate the effect of varying investment returns on a pension fund with four asset classes. Lognormal distributions were fitted to the historical returns of two of the asset classes: large company stocks and long-term government bonds. A third lognormal distribution was designed to represent the investment returns of alternative investments, such as real estate and private equity. The fourth asset class represented short-term cash investments, and that return was held constant. The following variables were analyzed to determine their relative impact on the effect of gainsharing on the selection of a discount rate: hurdle rate, percentage of gainsharing, actuarial asset method smoothing period, and variations in asset allocation. A 50% gainsharing feature can reduce the discount rate for a defined benefit pension plan by 0.5% to more than 2.5%, depending on the gainsharing design and asset allocation.
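
A minimal sketch of the gainsharing mechanic with a single lognormal asset (illustrative parameters, not the paper's four-asset Excel model): when the fund return exceeds the hurdle rate, a share of the excess is paid away, lowering the return retained by the plan and hence the supportable discount rate.

```python
import numpy as np

rng = np.random.default_rng(5)

mu, sigma = 0.07, 0.12      # lognormal parameters for the gross growth factor
hurdle, share = 0.05, 0.50  # 50% gainsharing of returns above a 5% hurdle

gross = rng.lognormal(mu, sigma, 100_000) - 1.0        # simulated annual returns
net = gross - share * np.maximum(gross - hurdle, 0.0)  # returns after gainsharing
discount_rate_drag = gross.mean() - net.mean()         # loss of expected return
```

Because gainsharing claws back only the upside, the expected retained return is strictly below the gross expected return; the size of that drag is what drives the 0.5% to 2.5% discount rate reductions reported in the paper.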

]]>Risks doi: 10.3390/risks5020031

Authors: Stephen Mildenhall

The literature on capital allocation is biased towards an asset modeling framework rather than an actuarial framework. The asset modeling framework leads to the proliferation of inappropriate assumptions about the effect of insurance line of business growth on aggregate loss distributions. This paper explains why an actuarial analog of the asset volume/return model should be based on a Lévy process. It discusses the impact of different loss models on marginal capital allocations. It shows that Lévy process-based models provide a better fit to the US statutory accounting data, and identifies how parameter risk scales with volume and increases with time. Finally, it shows the data suggest a surprising result regarding the form of insurance parameter risk.

]]>Risks doi: 10.3390/risks5020030

Authors: Nataliya Chukhrova Arne Johannssen

This paper gives a detailed overview of the current state of research in relation to the use of state space models and the Kalman filter in the field of stochastic claims reserving. Most of these state space representations are matrix-based, which complicates their applications. Therefore, to facilitate the implementation of state space models in practice, we present a scalar state space model for cumulative payments, which is an extension of the well-known chain ladder (CL) method. The presented model is distribution-free, forms a basis for determining the entire unobservable lower and upper run-off triangles and can easily be applied in practice using the Kalman filter for prediction, filtering and smoothing of cumulative payments. In addition, the model provides an easy way to find outliers in the data and to determine outlier effects. Finally, an empirical comparison of the scalar state space model, promising prior state space models and some popular stochastic claims reserving methods is performed.
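
The appeal of a scalar model is that the Kalman recursions reduce to a few scalar updates. The sketch below shows a generic scalar local-level filter with assumed Gaussian noise variances q and r; the paper's distribution-free CL extension differs in its exact state equation, so this is only a stand-in for the mechanics.

```python
def scalar_kalman_filter(ys, a=1.0, q=1.0, r=1.0, m0=0.0, p0=10.0):
    """Minimal scalar Kalman filter for the local state-space model
        x_t = a * x_{t-1} + w_t,  w_t ~ N(0, q)
        y_t = x_t + v_t,          v_t ~ N(0, r)
    Returns filtered means and variances (all scalars, no matrices)."""
    m, p = m0, p0
    means, variances = [], []
    for y in ys:
        # predict step
        m_pred = a * m
        p_pred = a * a * p + q
        # update step
        k = p_pred / (p_pred + r)          # scalar Kalman gain
        m = m_pred + k * (y - m_pred)
        p = (1.0 - k) * p_pred
        means.append(m)
        variances.append(p)
    return means, variances

# hypothetical cumulative payments along one run-off diagonal
means, variances = scalar_kalman_filter([1.0, 2.0, 3.0, 4.0])
```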

]]>Risks doi: 10.3390/risks5020029

Authors: Susanna Levantesi Massimiliano Menzietti

Longevity risk constitutes an important risk factor for life insurance companies, and it can be managed through longevity-linked securities. The market of longevity-linked securities is at present far from being complete and does not allow finding a unique pricing measure. We propose a method to estimate the maximum market price of longevity risk depending on the risk margin implicit within the calculation of the technical provisions as defined by Solvency II. The maximum price of longevity risk is determined for a survivor forward (S-forward), an agreement between two counterparties to exchange at maturity a fixed survival-dependent payment for a payment depending on the realized survival of a given cohort of individuals. The maximum prices determined for the S-forwards can be used to price other longevity-linked securities, such as q-forwards. The Cairns–Blake–Dowd model is used to represent the evolution of mortality over time, which, combined with the information on the risk margin, enables us to calculate upper limits for the risk-adjusted survival probabilities, the market price of longevity risk and the S-forward prices. Numerical results can be extended for the pricing of other longevity-linked securities.

]]>Risks doi: 10.3390/risks5020028

Authors: Jing Liu Huan Zhang

Motivated by the EU Solvency II Directive, we study the one-year ruin probability of an insurer who makes investments and hence faces both insurance and financial risks. Over a time horizon of one year, the insurance risk is quantified as a nonnegative random variable X equal to the aggregate amount of claims, and the financial risk as a d-dimensional random vector Y consisting of stochastic discount factors of the d financial assets invested. To capture both heavy tails and asymptotic dependence of Y in an integrated manner, we assume that Y follows a standard multivariate regular variation (MRV) structure. As main results, we derive exact asymptotic estimates for the one-year ruin probability for the following cases: (i) X and Y are independent with X of Fréchet type; (ii) X and Y are independent with X of Gumbel type; (iii) X and Y jointly possess a standard MRV structure; (iv) X and Y jointly possess a nonstandard MRV structure.

]]>Risks doi: 10.3390/risks5020027

Authors: Michael Metel Traian A. Pirvu Julian Wong

We prove that the Omega measure, which considers all moments when assessing portfolio performance, is equivalent to the widely used Sharpe ratio under jointly elliptic distributions of returns. Portfolio optimization of the Sharpe ratio is then explored, with an active-set algorithm presented for markets prohibiting short sales. When asymmetric returns are considered, we show that the Omega measure and Sharpe ratio lead to different optimal portfolios.
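
For intuition, the two performance measures can be computed from a return sample as below. The returns in rets are hypothetical, and the threshold and risk-free rate are set to zero for illustration.

```python
def omega(returns, threshold=0.0):
    """Omega measure: expected gain above the threshold divided by the
    expected loss below it (it thereby uses all moments of the returns)."""
    n = len(returns)
    gain = sum(max(r - threshold, 0.0) for r in returns) / n
    loss = sum(max(threshold - r, 0.0) for r in returns) / n
    return gain / loss

def sharpe(returns, risk_free=0.0):
    """Classic Sharpe ratio: excess mean over standard deviation."""
    n = len(returns)
    mean = sum(returns) / n
    var = sum((r - mean) ** 2 for r in returns) / n
    return (mean - risk_free) / var ** 0.5

# hypothetical monthly returns, for illustration only
rets = [0.02, -0.01, 0.03, -0.02, 0.01, 0.04, -0.015, 0.005]
om = omega(rets)
sr = sharpe(rets)
```

Under elliptical returns, ranking portfolios by either quantity gives the same ordering (the paper's equivalence result); with asymmetric returns the two rankings can diverge, which is exactly what makes the comparison interesting.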

]]>Risks doi: 10.3390/risks5020026

Authors: Albert Cohen Nick Costanzino

Building on recent work incorporating recovery risk into structural models by Cohen &amp; Costanzino (2015), we consider the Black-Cox model with an added recovery risk driver. The recovery risk driver arises naturally in the context of imperfect information implicit in the structural framework. This leads to a two-factor structural model we call the Stochastic Recovery Black-Cox model, whereby the asset risk driver At defines the default trigger and the recovery risk driver Rt defines the amount recovered in the event of default. We then price zero-coupon bonds and credit default swaps under the Stochastic Recovery Black-Cox model. Finally, we compare our results with the classic Black-Cox model, give explicit expressions for the recovery risk premium in the Stochastic Recovery Black-Cox model, and detail how the introduction of separate but correlated risk drivers leads to a decoupling of the default and recovery risk premiums in the credit spread. We conclude this work by computing the effect of adding coupons that are paid continuously until default, and price perpetual bonds (consols) in our two-factor firm value model, extending calculations in the seminal paper by Leland (1994).

]]>Risks doi: 10.3390/risks5020025

Authors: Koon-Shing Kwong Yiu-Kuen Tse Wai-Sum Chan

Building a social security system to ensure Singapore residents have peace of mind in funding for retirement has been at the top of the Singapore government’s policy agenda over the last decade. Implementation of the Lifelong Income For the Elderly (LIFE) scheme in 2009 clearly shows that the government spares no effort in improving its pension scheme to boost its residents’ income after retirement. Despite the recent modifications to the LIFE scheme, Singapore residents must still choose between two plans: the Standard and Basic plans. To enhance the flexibility of the LIFE scheme with further streamlining of its fund management, we propose some plan modifications such that scheme members do not face a dichotomy of plan choices. Instead, they select two age parameters: the Payout Age and the Life-annuity Age. This paper discusses the actuarial analysis for determining members’ payouts and bequests based on the proposed age parameters. We analyze the net cash receipts and Internal Rate of Return (IRR) for various plan-parameter configurations. This information helps members make their plan choices. To address cost-of-living increases, we propose to extend the plan to accommodate an annual step-up of monthly payouts. By deferring the Payout Age from 65 to 68, members can enjoy an annual increase of about 2% of the payouts for the same first-year monthly benefits.

]]>Risks doi: 10.3390/risks5020024

Authors: Gabriella Piscopo Marina Resta

We apply spectral biclustering to mortality datasets in order to capture three relevant aspects: the period, the age and the cohort effects, as their knowledge is a key factor in understanding actuarial liabilities of private life insurance companies, pension funds as well as national pension systems. While standard techniques generally fail to capture the cohort effect, biclustering methods seem particularly suitable for this aim. We run an exploratory analysis on the mortality data of Italy, with ages representing genes, and years as conditions: by comparison between conventional hierarchical clustering and spectral biclustering, we observe that the latter offers more meaningful results.

]]>Risks doi: 10.3390/risks5020023

Authors: Jonas Hirz Uwe Schmock Pavel Shevchenko

We introduce an additive stochastic mortality model which allows joint modelling and forecasting of underlying death causes. Parameter families for mortality trends can be chosen freely. As model settings become high dimensional, Markov chain Monte Carlo (MCMC) is used for parameter estimation. We then link our proposed model to an extended version of the credit risk model CreditRisk+. This allows exact risk aggregation via an efficient numerically stable Panjer recursion algorithm and provides numerous applications in credit, life insurance and annuity portfolios to derive P&amp;L distributions. Furthermore, the model allows exact (without Monte Carlo simulation error) calculation of risk measures and their sensitivities with respect to model parameters for P&amp;L distributions such as value-at-risk and expected shortfall. Numerous examples, including an application to partial internal models under Solvency II, using Austrian and Australian data are shown.

]]>Risks doi: 10.3390/risks5020022

Authors: Zaghum Umar Tahir Suleman

This paper analyses the interdependence between Islamic and conventional equities by taking into consideration the asymmetric effect of return and volatility transmission. We empirically investigate the decoupling hypothesis of Islamic and conventional equities and the potential contagion effect. We analyse the intra-market and inter-market spillover among Islamic and conventional equities across three major markets: the USA, the United Kingdom and Japan. Our sample period ranges from 1996 to 2015. In addition, we segregate our sample period into three sub-periods covering prior to the 2007 financial crisis, the crisis period and the post-crisis period. We find weak support for the decoupling hypothesis during the post-crisis period.

]]>Risks doi: 10.3390/risks5020021

Authors: Yuan Gao Han Shang

This study considers the forecasting of mortality rates in multiple populations. We propose a model that combines mortality forecasting and functional data analysis (FDA). Under the FDA framework, the mortality curve of each year is assumed to be a smooth function of age. As with most of the functional time series forecasting models, we rely on functional principal component analysis (FPCA) for dimension reduction and further choose a vector error correction model (VECM) to jointly forecast mortality rates in multiple populations. This model incorporates the merits of existing models in that it excludes some of the inherent randomness with the nonparametric smoothing from FDA, and also utilizes the correlation structures between the populations with the use of VECM in mortality models. A nonparametric bootstrap method is also introduced to construct interval forecasts. The usefulness of this model is demonstrated through a series of simulation studies and applications to the age- and sex-specific mortality rates in Switzerland and the Czech Republic. The point forecast errors of several forecasting methods are compared and interval scores are used to evaluate and compare the interval forecasts. Our model provides improved forecast accuracy in most cases.
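
A minimal stand-in for the FPCA dimension-reduction step, assuming mean-centred log-mortality curves arranged as a years-by-ages matrix: power iteration extracts the leading principal component and its year-by-year scores (in the full model such scores would then feed a VECM, and a smoothing step would precede this). The toy data and all settings are illustrative only.

```python
def first_principal_component(matrix, iters=200):
    """Power iteration for the leading principal component of mean-centred
    data (rows = years, columns = ages), plus the per-year scores."""
    n, p = len(matrix), len(matrix[0])
    col_means = [sum(row[j] for row in matrix) / n for j in range(p)]
    X = [[row[j] - col_means[j] for j in range(p)] for row in matrix]
    # p x p covariance matrix of the centred curves
    C = [[sum(X[i][a] * X[i][b] for i in range(n)) / n for b in range(p)]
         for a in range(p)]
    v = [1.0] * p
    for _ in range(iters):
        w = [sum(C[a][b] * v[b] for b in range(p)) for a in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    scores = [sum(X[i][j] * v[j] for j in range(p)) for i in range(n)]
    return v, scores

# toy "log-mortality" surface: an age pattern plus a declining common trend
data = [[-3.0 + 0.05 * j - 0.02 * i for j in range(5)] for i in range(8)]
component, scores = first_principal_component(data)
```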

]]>Risks doi: 10.3390/risks5010020

Authors: Jinhui Zhang Sachi Purcal Jiaqin Wei

We consider the financial planning problem of a retiree wishing to enter a retirement village at a future uncertain date. The date of entry is determined by the retiree’s utility and bequest maximisation problem within the context of uncertain future health states. In addition, the retiree must choose optimal consumption, investment, bequest and purchase of insurance products prior to their full annuitisation on entry to the retirement village. A hyperbolic absolute risk-aversion (HARA) utility function is used to allow necessary consumption for basic living and medical costs. The retirement village will typically require an initial deposit upon entry. This threshold wealth requirement leads to exercising the replication of an American put option at the uncertain stopping time. From our numerical results, active insurance and annuity markets are shown to be a critical aspect in retirement planning.

]]>Risks doi: 10.3390/risks5010019

Authors: Changyu Liu Michael Sherris

Designing post retirement benefits requires access to appropriate investment instruments to manage the interest rate and longevity risks. Post retirement benefits are increasingly taken as a form of income benefit, either as a pension or an annuity. Pension funds and life insurers offer annuities generating long term liabilities linked to longevity. Risk management of life annuity portfolios for interest rate risks is well developed but the incorporation of longevity risk has received limited attention. We develop an immunization approach and a delta-gamma based hedging approach to manage the risks of adverse portfolio surplus using stochastic models for mortality and interest rates. We compare and assess the immunization and hedge effectiveness of fixed-income coupon bonds, annuity bonds, as well as longevity bonds, using simulations of the portfolio surplus for an annuity portfolio and a range of risk measures including value-at-risk. We show how fixed-income annuity bonds can more effectively match cash flows and provide additional hedge effectiveness over coupon bonds. Longevity bonds, including deferred longevity bonds, reduce risk significantly compared to coupon and annuity bonds, reflecting the long duration of the typical life annuity and the exposure to longevity risk. Longevity bonds are shown to be effective in immunizing surplus over short and long horizons. Delta gamma hedging is generally only effective over short horizons. The results of the paper have implications for how providers of post retirement income benefit streams can manage risks in demanding conditions where innovation in investment markets can support new products and increase the product range.

]]>Risks doi: 10.3390/risks5010017

Authors: Gaurav Khemka Steven Roberts Timothy Higgins

We explore the extent to which claim incidence in Disability Income Insurance (DII) is affected by changes in the unemployment rate in Australia. Using data from 1986 to 2001, we fit a hurdle model to explore the presence and magnitude of the effect of changes in unemployment rate on the incidence of DII claims, controlling for policy holder characteristics and seasonality. We find a clear positive association between unemployment and claim incidence, and we explore this further by gender, age, deferment period, and occupation. A multinomial logistic regression model is fitted to cause of claim data in order to explore the relationship further, and it is shown that the proportion of claims due to accident increases markedly with rising unemployment. The results suggest that during periods of rising unemployment, insurers may face increased claims from policy holders with shorter deferment periods for white-collar workers and for medium and heavy manual workers. Our findings indicate that moral hazard may have a material impact on DII claim incidence and insurer business in periods of declining economic conditions.

]]>Risks doi: 10.3390/risks5010018

Authors: Silvio Aldrovandi Petko Kusev Tetiana Hill Ivo Vlaev

Previous research has shown that risk preferences are sensitive to the financial domain in which they are framed. In the present paper, we explore whether the effect of negative priming on risk taking is moderated by financial context. A total of 120 participants completed questionnaires, where risky choices were framed in six different financial scenarios. Half of the participants were allocated to a negative priming condition. Negative priming reduced risk-seeking behaviour compared to a neutral condition. However, this effect was confined to non-experiential scenarios (i.e., a gamble to win, a possibility to lose) and did not extend to ‘real-world’ financial products (e.g., pension provision). The results call into question the generalisability of priming effects across different financial contexts.

]]>Risks doi: 10.3390/risks5010016

Authors: Syazreen Shair Sachi Purcal Nick Parr

Coherent models were developed recently to forecast the mortality of two or more sub-populations simultaneously and to ensure long-term non-divergent mortality forecasts of sub-populations. This paper evaluates the forecast accuracy of two recently-published coherent mortality models, the Poisson common factor and the product-ratio functional models. These models are compared to each other and the corresponding independent models, as well as the original Lee–Carter model. All models are applied to age-gender-specific mortality data for Australia and Malaysia and age-gender-ethnicity-specific data for Malaysia. The out-of-sample forecast error of log death rates, male-to-female death rate ratios and life expectancy at birth from each model are compared and examined across groups. The results show that, in terms of overall accuracy, the forecasts of both coherent models are consistently more accurate than those of the independent models for Australia and for Malaysia, but the relative performance differs by forecast horizon. Although the product-ratio functional model outperforms the Poisson common factor model for Australia, the Poisson common factor is more accurate for Malaysia. For the ethnic groups application, ethnic-coherence gives better results than gender-coherence. The results provide evidence that coherent models are preferable to independent models for forecasting sub-populations’ mortality.

]]>Risks doi: 10.3390/risks5010014

Authors: Xing-Fang Huang Ting Zhang Yang Yang Tao Jiang

This paper considers a dependent discrete-time risk model, in which the insurance risks are represented by a sequence of independent and identically distributed real-valued random variables with a common Gamma-like tailed distribution; the financial risks are denoted by another sequence of independent and identically distributed positive random variables with a finite upper endpoint, but a general dependence structure exists between each pair of the insurance risks and the financial risks. Following the works of Yang and Yuen in 2016, we derive some asymptotic relations for the finite-time and infinite-time ruin probabilities. As a complement, we verify our obtained asymptotics against a Crude Monte Carlo (CMC) simulation.
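
A toy version of the CMC check, under assumed distributions only: a bounded financial factor (mimicking the finite upper endpoint) multiplies the surplus each year, a heavy-tailed claim is subtracted, and ruin is recorded on first passage below zero. The paper's Gamma-like tail and its dependence structure are not reproduced here; the Pareto claim, the uniform factor and the premium of 1.5 are all illustrative choices.

```python
import random

def ruin_probability(u0, horizon=10, n_sims=20000, seed=7):
    """Crude Monte Carlo estimate of the finite-time ruin probability in a
    discrete-time risk model with both insurance and financial risks."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_sims):
        u = u0
        for _ in range(horizon):
            y = rng.uniform(0.9, 1.2)      # bounded accumulation factor
            x = rng.paretovariate(2.5)     # heavy-tailed insurance loss
            u = y * u + 1.5 - x            # premium income of 1.5 per year
            if u < 0:
                ruined += 1
                break
    return ruined / n_sims

psi_small = ruin_probability(u0=1.0)
psi_large = ruin_probability(u0=10.0)
```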

]]>Risks doi: 10.3390/risks5010015

Authors: Ourania Theodosiadou Sotiris Skaperas George Tsaklidis

In the first part of the paper, the positive and negative jumps of NASDAQ daily (log-)returns and three of its stocks are estimated based on the methodology presented by Theodosiadou et al. 2016, where jumps are assumed to be hidden random variables. For that reason, the use of stochastic state space models in discrete time is adopted. The daily return is expressed as the difference between the two-sided jumps under noise inclusion, and the recursive Kalman filter algorithm is used in order to estimate them. Since the estimated jumps have to be non-negative, the associated pdf truncation method, according to the non-negativity constraints, is applied. In order to overcome the resulting underestimation of the empirical time series, a scaling procedure follows the stage of truncation. In the second part of the paper, a nonparametric change point analysis concerning the (variance–)covariance is applied to the NASDAQ return time series, as well as to the estimated bivariate jump time series derived after the scaling procedure and to each jump component separately. A similar change point analysis is applied to the three other stocks of the NASDAQ index.

]]>Risks doi: 10.3390/risks5010013

Authors: Jan Natolski Ralf Werner

The replicating portfolio approach is a well-established approach carried out by many life insurance companies within their Solvency II framework for the computation of risk capital. In this note, we elaborate on one specific formulation of a replicating portfolio problem. In contrast to the two most popular replication approaches, it does not yield an analytic solution (if, at all, a solution exists and is unique). Further, although convex, the objective function seems to be non-smooth, and hence a numerical solution might be much more demanding than for the two most popular formulations. Especially for the second reason, this formulation did not (yet) receive much attention in practical applications, in contrast to the other two formulations. In the following, we will demonstrate that the (potential) non-smoothness can be avoided due to an equivalent reformulation as a linear second order cone program (SOCP). This allows for a numerical solution by efficient second order methods like interior point methods or similar. We also show that—under weak assumptions—existence and uniqueness of the optimal solution can be guaranteed. We additionally prove that—under a further similarly weak condition—the fair value of the replicating portfolio equals the fair value of liabilities. Based on these insights, we argue that this unloved stepmother child within the replication problem family indeed represents an equally good formulation for practical purposes.

]]>Risks doi: 10.3390/risks5010012

Authors: Catherine Donnelly

I show that risk-sharing pension plans can reduce some of the shortcomings of defined benefit and defined contributions plans. The risk-sharing pension plan presented aims to improve the stability of benefits paid to generations of members, while allowing them to enjoy the expected advantages of a risky investment strategy. The plan does this by adjusting the investment strategy and benefits in response to a changing funding level, motivated by the with-profits contract proposed by Goecke (2013). He suggests a mean-reverting log reserve (or funding) ratio, where mean reversion occurs through adjustments to the investment strategy and declared bonuses. To measure the robustness of the plan to human factors, I introduce a measurement of disappointment, where disappointment is high when there are many consecutive years over which benefit payments are declining. Another measure introduced is devastation, where devastation occurs when benefit payments are zero. The motivation is that members of a pension plan who are easily disappointed or likely to get no benefit, are more likely to exit the plan. I find that the risk-sharing plan offers more disappointment than a defined contribution plan, but it eliminates the devastation possible in a plan that tries to accumulate contributions at a steadily increasing rate. The proposed risk-sharing plan can give a narrower range of benefits than in a defined contribution plan. Thus it can offer a stable benefit to members without the risk of running out of money.
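
The two human-factor measures have straightforward sample versions. The sketch below is one illustrative reading of them: disappointment as the longest run of consecutive years of strictly declining benefits, devastation as any zero benefit payment; the path of payments is hypothetical.

```python
def disappointment(benefits):
    """Longest run of consecutive years with strictly declining benefit
    payments (an illustrative reading of the paper's measure)."""
    longest = run = 0
    for prev, cur in zip(benefits, benefits[1:]):
        run = run + 1 if cur < prev else 0
        longest = max(longest, run)
    return longest

def devastated(benefits):
    """Devastation occurs when a benefit payment hits zero."""
    return any(b == 0 for b in benefits)

# hypothetical annual benefit payments from a risk-sharing plan
path = [100, 98, 95, 97, 96, 94, 93, 95]
```

Members who see long declining runs (high disappointment) or a zero payment (devastation) are the ones modelled as likely to exit the plan.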

]]>Risks doi: 10.3390/risks5010010

Authors: Søren Asmussen Jaakko Lehtomaa

Well-behaved densities are typically log-convex with heavy tails and log-concave with light ones. We discuss a benchmark for distinguishing between the two cases, based on the observation that large values of a sum X1 + X2 occur as a result of a single big jump with heavy tails, whereas X1 and X2 are of equal order of magnitude in the light-tailed case. The method is based on the ratio |X1 − X2|/(X1 + X2), for which sharp asymptotic results are presented, as well as a visual tool for distinguishing between the two cases. The study supplements modern non-parametric density estimation methods where log-concavity plays a main role, as well as heavy-tailed diagnostics such as the mean excess plot.

]]>Risks doi: 10.3390/risks5010011

Authors: Wenjun Jiang Jiandong Ren Ričardas Zitikis

Optimal forms of reinsurance policies have been studied for a long time in the actuarial literature. Most existing results are from the insurer’s point of view, aiming at maximizing the expected utility or minimizing the risk of the insurer. However, as pointed out by Borch (1969), it is understandable that a reinsurance arrangement that might be very attractive to one party (e.g., the insurer) can be quite unacceptable to the other party (e.g., the reinsurer). In this paper, we follow this point of view and study forms of Pareto-optimal reinsurance policies whereby one party’s risk, measured by its value-at-risk (VaR), cannot be reduced without increasing the VaR of the counter-party in the reinsurance transaction. We show that the Pareto-optimal policies can be determined by minimizing linear combinations of the VaRs of the two parties in the reinsurance transaction. Consequently, we succeed in deriving user-friendly, closed-form, optimal reinsurance policies and their parameter values.
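
The minimisation idea can be illustrated empirically for stop-loss contracts: for a weight lam, scan retentions d and minimise lam times the insurer's VaR of min(X, d) plus (1 − lam) times the reinsurer's VaR of (X − d)+. Premiums, and the general contract forms treated in the paper, are deliberately left out of this toy sketch; the loss distribution and grid are assumptions.

```python
import random

def empirical_var(samples, level=0.95):
    """Empirical value-at-risk as an upper sample quantile."""
    s = sorted(samples)
    return s[int(level * len(s))]

def pareto_optimal_retention(losses, lam, level=0.95):
    """Scan stop-loss retentions d: the insurer keeps min(X, d), the
    reinsurer pays (X - d)+.  Minimising the weighted sum of the two
    parties' VaRs traces out Pareto-optimal contracts as lam varies."""
    best_d, best_obj = None, float("inf")
    for d in [i * 0.5 for i in range(0, 41)]:
        insurer = [min(x, d) for x in losses]
        reinsurer = [max(x - d, 0.0) for x in losses]
        obj = (lam * empirical_var(insurer, level)
               + (1 - lam) * empirical_var(reinsurer, level))
        if obj < best_obj:
            best_obj, best_d = obj, d
    return best_d, best_obj

rng = random.Random(11)
losses = [rng.paretovariate(2.0) for _ in range(5000)]
d_low, _ = pareto_optimal_retention(losses, lam=0.2)   # weight on reinsurer's VaR
d_high, _ = pareto_optimal_retention(losses, lam=0.8)  # weight on insurer's VaR
```

Shifting the weight from one party to the other moves the optimal retention from full retention down to full cession, which is the Pareto frontier the paper characterises in closed form.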

]]>Risks doi: 10.3390/risks5010008

Authors: Xuebing Kuang Xiaowen Zhou

Using a Poisson approach, we find Laplace transforms of joint occupation times over n disjoint intervals for spectrally negative Lévy processes. They generalize previous results for dimension two.

]]>Risks doi: 10.3390/risks5010009

Authors: Thomas Koch

This article considers an economy where risk is insurable, but selection determines the pool of individuals who take it up. First, we demonstrate that the comparative statics of these economies do not necessarily depend on the nature of marginal selection (adverse versus favorable), but rather on other characteristics. We then use repeated cross-sections of medical expenditures in the U.S. to understand the role of changes in the medical risk distribution on the fraction of Americans without medical insurance. We find that both the level and the shape of the distribution of risk are important in determining the equilibrium quantity of insurance. Symmetric changes in risk (e.g., shifts in the price of medical care) better explain the shifting insurance rate over time. Asymmetric changes (e.g., those associated with a shifting age distribution) are not as important.

]]>Risks doi: 10.3390/risks5010006

Authors: Bin Zou Abel Cadenillas

We consider an insurer who faces an external jump-diffusion risk that is negatively correlated with the capital returns in a multidimensional regime switching model. The insurer selects investment and liability ratio policies continuously to maximize her/his expected utility of terminal wealth. We obtain explicit solutions of optimal policies for logarithmic and power utility functions. We study the impact of the insurer’s risk aversion, the negative correlation between the external risk and the capital returns, and the regime of the economy on the optimal policy. We find, among other things, that the regime of the economy and the negative correlation between the external risk and the capital returns have a dramatic effect on the optimal policy.

]]>Risks doi: 10.3390/risks5010007

Authors: Barbora Peštová Michal Pešta

The panel data of interest consist of a moderate number of panels, each containing a small number of observations. For this setting, we propose an estimator of common breaks in panel means that is free of boundary issues. In particular, the novel estimator is able to detect a common break point even when the change happens immediately after the first time point or just before the last observation period. Another advantage of the elaborated change point estimator is that it returns the last observation in situations with no structural breaks. The consistency of the change point estimator in panel data is established. The results are illustrated through a simulation study. As a by-product of the developed estimation technique, a theoretical utilization for correlation structure estimation, hypothesis testing and bootstrapping in panel data is demonstrated. A practical application to non-life insurance is presented as well.
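
A least-squares version of a common-break estimator conveys the idea (the paper's estimator and its boundary-robustness arguments are more refined): pick the split time minimising the pooled within-segment sum of squares across panels, and fall back to the last time index when no split helps. The panel data below are synthetic.

```python
def common_break_estimator(panels):
    """Least-squares estimator of a single common break in panel means.
    Returns the last time index when no split improves on 'no break',
    echoing the no-break behaviour discussed in the paper."""
    T = len(panels[0])

    def sse(seg):
        m = sum(seg) / len(seg)
        return sum((x - m) ** 2 for x in seg)

    no_break = sum(sse(p) for p in panels)
    best_t, best_cost = T - 1, no_break
    for t in range(1, T):                      # candidate break before time t
        cost = sum(sse(p[:t]) + sse(p[t:]) for p in panels)
        if cost < best_cost:
            best_cost, best_t = cost, t
    return best_t

# 6 synthetic panels, 8 periods, mean jumps from 0 to 2 after period 4
panels = [[0.1 * ((i + j) % 3) + (2.0 if j >= 4 else 0.0) for j in range(8)]
          for i in range(6)]
```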

]]>Risks doi: 10.3390/risks5010005

Authors: Pierre Devolder Sébastien de Valeriola

The regulation on the Belgian occupational pension schemes has been recently changed. The new law allows for employers to choose between two different types of guarantees to offer to their affiliates. In this paper, we address the question arising naturally: which of the two guarantees is the best one? In order to answer that question, we set up a stochastic model and use financial pricing tools to compare the methods. More specifically, we link the pension liabilities to a portfolio of financial assets and compute the price of exchange options through the Margrabe formula.

]]>Risks doi: 10.3390/risks5010004

Authors: Risks Editorial Office

The editors of Risks would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2016. [...]

]]>Risks doi: 10.3390/risks5010003

Authors: Yuguang Fan Philip Griffin Ross Maller Alexander Szimayer Tiandong Wang

We compare two types of reinsurance: excess of loss (EOL) and largest claim reinsurance (LCR), each of which transfers the payment of part, or all, of one or more large claims from the primary insurance company (the cedant) to a reinsurer. The primary insurer’s point of view is documented in terms of assessment of risk and payment of reinsurance premium. A utility indifference rationale based on the expected future dividend stream is used to value the company with and without reinsurance. Assuming the classical compound Poisson risk model with choices of claim size distributions (classified as heavy-, medium- and light-tailed cases), simulations are used to illustrate the impact of the EOL and LCR treaties on the company’s ruin probability, ruin time and value as determined by the dividend discounting model. We find that LCR is at least as effective as EOL in averting ruin in comparable finite time horizon settings. In instances where the ruin probability for LCR is smaller than for EOL, the dividend discount model shows that the cedant is able to pay a larger portion of the dividend for LCR reinsurance than for EOL while still maintaining company value. Both methods reduce risk considerably as compared with no reinsurance, in a variety of situations, as measured by the standard deviation of the company value. A further interesting finding is that heaviness of tails alone is not necessarily the decisive factor in the possible ruin of a company; small- and moderate-sized claims can also play a significant role in this.

]]>Risks doi: 10.3390/risks5010002

Authors: Liivika Tee Meelis Käärik Rauno Viin

We consider the well-known stochastic reserve estimation methods on the basis of generalized linear models, such as the (over-dispersed) Poisson model, the gamma model and the log-normal model. For the likely variability of the claims reserve, the bootstrap method is considered. In the bootstrapping framework, we discuss the choice of residuals, namely the Pearson residuals, the deviance residuals and the Anscombe residuals. In addition, several possible residual adjustments are discussed and compared in a case study. We carry out a practical implementation and comparison of methods using real-life insurance data to estimate reserves and their prediction errors. We propose to consider proper scoring rules for model validation, and the assessments are drawn from an extensive case study.

]]>Risks doi: 10.3390/risks5010001

Authors: Başak Bulut Karageyik Şule Şahin

In this paper, we approximate the aggregate claims process by using the translated gamma process under the classical risk model assumptions, and we investigate the ultimate ruin probability. We consider optimal reinsurance under the minimum ultimate ruin probability, as well as the maximum benefit criteria: released capital, expected profit and exponential-fractional-logarithmic utility from the insurer’s point of view. Numerical examples are presented to explain how the optimal initial surplus and retention level change according to the individual claim amounts, loading factors and weights of the criteria. In the decision-making process, we use the Analytic Hierarchy Process (AHP) and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) as Multi-Attribute Decision Making (MADM) methods and compare our results considering different combinations of loading factors for both exponential and Pareto individual claims.
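
The translated gamma approximation rests on matching the first three moments of the aggregate claims. A minimal sketch of the moment-matching step, with illustrative (assumed) compound Poisson inputs:

```python
def translated_gamma_params(mean, variance, skewness):
    """Match the first three moments of an aggregate claims variable S
    with a translated gamma variable k + G, G ~ Gamma(alpha, beta):
        alpha = 4 / skewness^2
        beta  = 2 / (skewness * sd)
        k     = mean - alpha / beta
    """
    sd = variance ** 0.5
    alpha = 4.0 / skewness ** 2
    beta = 2.0 / (skewness * sd)
    k = mean - alpha / beta
    return alpha, beta, k

# compound Poisson example: rate lam and claim moments m1, m2, m3 (assumed)
lam, m1, m2, m3 = 10.0, 1.0, 2.0, 6.0
mean = lam * m1
variance = lam * m2
skewness = lam * m3 / variance ** 1.5
alpha, beta, k = translated_gamma_params(mean, variance, skewness)
```

By construction, k + alpha/beta, alpha/beta^2 and 2/sqrt(alpha) reproduce the mean, variance and skewness of the aggregate claims, which is what makes the approximation usable for ruin probability calculations.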

Risks doi: 10.3390/risks4040048

Authors: Ambrose Lo

Reinsurance is often empirically hailed as a value-adding risk management strategy which an insurer can utilize to achieve various business objectives. In the context of a distortion-risk-measure-based three-party model incorporating a policyholder, insurer and reinsurer, this article formulates explicitly the optimal insurance–reinsurance strategies from the perspective of the insurer. Our analytic solutions are complemented by intuitive but scientifically rigorous explanations on the marginal cost and benefit considerations underlying the optimal insurance–reinsurance decisions. These cost-benefit discussions not only cast light on the economic motivations for an insurer to engage in insurance with the policyholder and in reinsurance with the reinsurer, but also mathematically formalize the value created by reinsurance with respect to stabilizing the loss portfolio and enlarging the underwriting capacity of an insurer. Our model also allows for the reinsurer’s failure to deliver on its promised indemnity when the regulatory capital of the reinsurer is depleted by the reinsured loss. The reduction in the benefits of reinsurance to the insurer as a result of the reinsurer’s default is quantified, and its influence on the optimal insurance–reinsurance policies analyzed.

Risks doi: 10.3390/risks4040050

Authors: Mi Chen Wenyuan Wang Ruixing Ming

In this paper, we study the optimal reinsurance problem where risks of the insurer are measured by general law-invariant risk measures and premiums are calculated under the TVaR premium principle, which generalizes the expected value premium principle. Our objective is to characterize the optimal reinsurance strategy which minimizes the insurer’s risk measure of its total loss. Our calculations show that the optimal reinsurance strategy is of the multi-layer form, i.e., f*(x) = x ∧ c* + (x − d*)+, with constants c* and d* satisfying 0 ≤ c* ≤ d*.
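The multi-layer form x ∧ c* + (x − d*)+ is easy to state as a one-line function; a minimal sketch, with purely illustrative values for c* and d*:

```python
def multilayer_indemnity(x, c, d):
    """Ceded loss f(x) = min(x, c) + max(x - d, 0) with 0 <= c <= d:
    the reinsurer covers losses up to c plus the excess above d, while
    the insurer retains the middle layer (c, d]."""
    assert 0 <= c <= d
    return min(x, c) + max(x - d, 0.0)
```

For example, with c* = 2 and d* = 4, a loss of 5 cedes min(5, 2) + (5 − 4) = 3 to the reinsurer.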

Risks doi: 10.3390/risks4040051

Authors: Ying Wang Sai Tsang Boris Choy Hoi Ying Wong

The application of stochastic volatility (SV) models in the option pricing literature usually assumes that the market has sufficient option data to calibrate the model’s risk-neutral parameters. When option data are insufficient or unavailable, market practitioners must estimate the model from the historical returns of the underlying asset and then transform the resulting model into its risk-neutral equivalent. However, the likelihood function of an SV model can only be expressed as a high-dimensional integral, which makes the estimation a highly challenging task. The Bayesian approach has been the classical way to estimate SV models under the data-generating (physical) probability measure, but the transformation from the estimated physical dynamics into its risk-neutral counterpart has not been addressed. Inspired by the generalized autoregressive conditional heteroskedasticity (GARCH) option pricing approach by Duan in 1995, we propose an SV model that enables us to simultaneously and conveniently perform Bayesian inference and transformation into risk-neutral dynamics. Our model relaxes the normality assumption on innovations of both return and volatility processes, and our empirical study shows that the estimated option prices generate realistic implied volatility smile shapes. In addition, the volatility premium is almost flat across strike prices, so adding a few option data to the historical time series of the underlying asset can greatly improve the estimation of option prices.

Risks doi: 10.3390/risks4040049

Authors: Pierre Devolder Adrien Lebègue

In this paper, we consider compositions of conditional risk measures in order to obtain time-consistent dynamic risk measures and determine the solvency capital of a life insurer selling pension liabilities or a pension fund with a single cash-flow at maturity. We first recall the notion of conditional, dynamic and time-consistent risk measures. We link the latter with its iterated property, which gives us a way to construct time-consistent dynamic risk measures from a backward iteration scheme with the composition of conditional risk measures. We then consider particular cases with the conditional version of the value at risk, tail value at risk and conditional expectation measures. We finally give an application of these measures with the determination of the solvency capital of a pension liability, which offers a fixed guaranteed rate without any intermediate cash-flow. We assume that the company is fully hedged against the mortality and underwriting risks.

Risks doi: 10.3390/risks4040047

Authors: Philippe Deprez Mario Wüthrich

This article provides a case study that analyzes national macroprudential insurance regulation in Switzerland. We consider an insurance market that is based on data from the Swiss private insurance industry. We stress this market with several scenarios related to financial and insurance risks, and we analyze the resulting risk capitals of the insurance companies. This stress-test analysis provides insights into the vulnerability of the Swiss private insurance sector to different risks and shocks.

Risks doi: 10.3390/risks4040046

Authors: Jean-François Bégin

Life insurers are exposed to deflation risk: falling prices could lead to insufficient investment returns, and inflation-indexed protections could make insurers vulnerable to deflation. In this spirit, this paper proposes a market-based methodology for measuring deflation risk based on a discrete framework: the latter accounts for the real interest rate, the inflation index level, its conditional variance, and the expected inflation rate. US inflation data are then used to estimate the model and show the importance of deflation risk. Specifically, the distribution of a fictitious life insurer’s future payments is investigated. We find that the proposed inflation model yields higher risk measures than the ones obtained using competing models, stressing the need for dynamic and market-consistent inflation modelling in the life insurance industry.

Risks doi: 10.3390/risks4040045

Authors: Anastasia Novokreshchenova

In this paper, we quantitatively compare the forecasts from four different mortality models. We consider one discrete-time model proposed by Lee and Carter (1992) and three continuous-time models: the Wills and Sherris (2011) model, the Feller process and the Ornstein-Uhlenbeck (OU) process. The first two models estimate the whole surface of mortality simultaneously, while in the latter two, each generation is modelled and calibrated separately. We calibrate the models to UK and Australian population data. We find that all the models show relatively similar absolute total error for a given dataset, except the Lee-Carter model, whose performance differs significantly. To evaluate the forecasting performance, we therefore look at two alternative measures: the relative error between the forecasted and the actual mortality rates and the percentage of actual mortality rates which fall within a prediction interval. In terms of the prediction intervals, the results are more divergent since each model implies a different structure for the variance of mortality rates. According to our experiments, the Wills and Sherris model produces superior results in terms of the prediction intervals. However, in terms of the mean absolute error, the OU and the Feller processes perform better. The forecasting performance of the Lee-Carter model is mostly dependent on the choice of the dataset.

Risks doi: 10.3390/risks4040044

Authors: Eckhard Liebscher Wolf-Dieter Richter

Scatter plots of multivariate data sets motivate modeling of star-shaped distributions beyond elliptically contoured ones. We study properties of estimators for the density generator function, the star-generalized radius distribution and the density in a star-shaped distribution model. For the generator function and the star-generalized radius density, we consider a non-parametric kernel-type estimator. This estimator is combined with a parametric estimator for the contours which are assumed to follow a parametric model. Therefore, the semiparametric procedure features the flexibility of nonparametric estimators and the simple estimation and interpretation of parametric estimators. Alternatively, we consider pure parametric estimators for the density. For the semiparametric density estimator, we prove rates of uniform, almost sure convergence which coincide with the corresponding rates of one-dimensional kernel density estimators when excluding the center of the distribution. We show that the standardized density estimator is asymptotically normally distributed. Moreover, the almost sure convergence rate of the estimated distribution function of the star-generalized radius is derived. A particular new two-dimensional distribution class is adapted here to agricultural and financial data sets.

Risks doi: 10.3390/risks4040043

Authors: Annika Krutto

For the general stable distribution, parameter estimators based on the cumulant function are proposed. Extensive simulation experiments are carried out to validate the effectiveness of the estimates over the entire parameter space. An application to the distribution of non-life insurance losses is presented.

Risks doi: 10.3390/risks4040042

Authors: Julie Thøgersen

An insurance company offers an insurance contract (p, K), consisting of a premium p and a deductible K. In this paper, we consider the problem of choosing the premium optimally as a function of the deductible. The insurance company is facing a market of N customers, each characterized by their personal claim frequency, α, and risk aversion, β. When a customer is offered an insurance contract, she/he will, based on these characteristics, choose whether or not to insure. The decision process of the customer is analyzed in detail. Since the customer characteristics are unknown to the company, it models them as i.i.d. random variables: A_1, …, A_N for the claim frequencies and B_1, …, B_N for the risk aversions. Depending on the distributions of A_i and B_i, expressions for the portfolio size n(p; K) ∈ [0, N] and average claim frequency α(p; K) in the portfolio are obtained. Knowing these, the company can choose the premium optimally, mainly by minimizing the ruin probability.
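The portfolio-formation mechanism can be illustrated with a toy simulation. Note that the reservation-premium rule `wtp` below is a hypothetical stand-in for the paper's expected-utility decision analysis, and all distributions are invented; the sketch only shows how adverse selection emerges from the mechanism:

```python
import random

def portfolio(p, willingness, freqs, risk_avs):
    """Customer i insures iff his/her reservation premium
    willingness(alpha_i, beta_i) is at least the quoted premium p.
    Returns the portfolio size and the average claim frequency among the
    insured; adverse selection pushes the latter above the population
    average."""
    insured = [a for a, b in zip(freqs, risk_avs) if willingness(a, b) >= p]
    n = len(insured)
    return n, (sum(insured) / n if n else 0.0)

rng = random.Random(7)
N = 10_000
alphas = [rng.expovariate(2.0) for _ in range(N)]   # claim frequencies A_i
betas = [rng.uniform(0.0, 1.0) for _ in range(N)]   # risk aversions B_i

def wtp(a, b):
    """Hypothetical reservation premium: expected loss (unit claim size)
    loaded by risk aversion -- a stand-in for the utility-based rule."""
    return a * (1.0 + b)

n, avg_alpha = portfolio(p=0.6, willingness=wtp, freqs=alphas, risk_avs=betas)
```

Since only high-frequency or highly risk-averse customers accept, the average claim frequency in the portfolio exceeds the population average, which is why premium and portfolio composition must be optimized jointly.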

Risks doi: 10.3390/risks4040041

Authors: Marcos Escobar Mikhail Krayzler Franz Ramsauer David Saunders Rudi Zagst

Variable annuities represent certain unit-linked life insurance products offering different types of protection commonly referred to as guaranteed minimum benefits (GMXBs). They are designed for the increasing demand of the customers for private pension provision. In this paper we analytically price variable annuities with guaranteed minimum repayments at maturity and in case of the insured’s death. If the contract is prematurely surrendered, the policyholder is entitled to the current value of the fund account reduced by the prevailing surrender fee. The financial market and the mortality model are affine linear. For the surrender model, a Cox process is deployed whose intensity is given by a deterministic function (s-curve) with stochastic inputs from the financial market. So, the policyholders’ surrender behavior depends on the performance of the financial market and is stochastic. The presented pricing scheme incorporates the stochastic surrender behavior of the policyholders and is only based on suitable closed-form approximations.

Risks doi: 10.3390/risks4040040

Authors: Lei Hua

The family of Liouville copulas is defined as the survival copulas of multivariate Liouville distributions, and it covers the Archimedean copulas constructed by Williamson’s d-transform. Liouville copulas provide a very wide range of dependence ranging from positive to negative dependence in the upper tails, and they can be useful in modeling tail risks. In this article, we study the upper tail behavior of Liouville copulas through their upper tail orders. Tail orders of a more general scale mixture model that covers Liouville distributions are first derived, and then tail order functions and tail order density functions of Liouville copulas are derived. Concrete examples are given after the main results.

Risks doi: 10.3390/risks4040039

Authors: Annamaria Olivieri Ermanno Pitacco

Life annuities are attractive mainly for healthy people. In order to expand their business, in recent years, some insurers have started offering higher annuity rates to those whose health conditions are critical. Life annuity portfolios are then supposed to become larger and more heterogeneous. With respect to the insurer’s risk profile, there is a trade-off between portfolio size and heterogeneity that we intend to investigate. In performing this, there is a second and possibly more important issue that we address. In actuarial practice, the different mortality levels of the several risk classes are obtained by applying adjustment coefficients to population mortality rates. Such a choice is not supported by a rigorous model. On the other hand, the heterogeneity of a population with respect to mortality can formally be described with a frailty model. We suggest adopting a frailty model for risk classification. We identify risk groups (or classes) within the population by assigning specific ranges of values to the frailty within each group. The different levels of mortality of the various groups are based on the conditional probability distributions of the frailty. Annuity rates for each class then can be easily justified, and a comprehensive investigation of insurer’s liabilities can be performed.

Risks doi: 10.3390/risks4040038

Authors: Pierre Picard

In the linear coinsurance problem, examined first by Mossin (1968), a higher absolute risk aversion with respect to wealth in the sense of Arrow–Pratt implies a higher optimal coinsurance rate. We show that this property does not hold for health insurance under ex post moral hazard; i.e., when illness severity cannot be observed by insurers, and policyholders decide on their health expenditures. The optimal coinsurance rate trades off a risk-sharing effect and an incentive effect, both related to risk aversion.

Risks doi: 10.3390/risks4040037

Authors: Benjamin Avanzi Vincent Tu Bernard Wong

Because of the profitable nature of risk businesses in the long term, de Finetti suggested that surplus models should allow for cash leakages, as otherwise the surplus would unrealistically grow (on average) to infinity. These leakages were interpreted as ‘dividends’. Subsequent literature on actuarial surplus models with dividend distribution has mainly focussed on dividend strategies that either maximise the expected present value of dividends until ruin or lead to a probability of ruin that is less than one (see Albrecher and Thonhauser, Avanzi for reviews). An increasing number of papers are directly interested in modelling dividend policies that are consistent with actual practice in financial markets. In this short note, we review the corporate finance literature with the specific aim of fleshing out properties that dividend strategies should ideally satisfy, if one wants to model behaviour that is consistent with practice.

Risks doi: 10.3390/risks4040036

Authors: Sascha Desmettre Ralf Korn Javier Varela Norbert Wehn

Risk analysis and management currently have a strong presence in financial institutions, where high performance and energy efficiency are key requirements for acceleration systems, especially when it comes to intraday analysis. In this regard, we approach the estimation of the widely-employed portfolio risk metrics value-at-risk (VaR) and conditional value-at-risk (cVaR) by means of nested Monte Carlo (MC) simulations. We do so by combining theory and software/hardware implementation. This allows us for the first time to investigate their performance on heterogeneous compute systems and across different compute platforms, namely central processing unit (CPU), many integrated core (MIC) architecture XeonPhi, graphics processing unit (GPU), and field-programmable gate array (FPGA). To this end, the OpenCL framework is employed to generate portable code, and the size of the simulations is scaled in order to evaluate variations in performance. Furthermore, we assess different parallelization schemes, and the targeted platforms are evaluated and compared in terms of runtime and energy efficiency. Our implementation also allowed us to derive a new algorithmic optimization regarding the generation of the required random number sequences. Moreover, we provide specific guidelines on how to properly handle these sequences in portable code, and on how to efficiently implement nested MC-based VaR and cVaR simulations on heterogeneous compute systems.
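A stripped-down version of nested MC estimation of VaR and cVaR (outer risk-factor scenarios, inner portfolio revaluation) might look as follows; the loss model is invented and far simpler than a real portfolio, and none of the hardware-acceleration aspects of the paper are reflected:

```python
import random

def var_cvar(losses, level=0.99):
    """Empirical VaR and cVaR (expected shortfall) at the given level."""
    xs = sorted(losses)
    k = int(level * len(xs))
    tail = xs[k:]
    return xs[k], sum(tail) / len(tail)

rng = random.Random(42)
N_OUTER, N_INNER = 5000, 50
losses = []
for _ in range(N_OUTER):
    shock = rng.gauss(0.0, 1.0)      # outer scenario: a single risk factor
    # inner simulation: revalue the "portfolio" conditional on the scenario
    inner = [max(0.0, shock + rng.gauss(0.0, 0.5)) for _ in range(N_INNER)]
    losses.append(sum(inner) / N_INNER)
var99, cvar99 = var_cvar(losses)
```

The nested structure (N_OUTER × N_INNER paths) is what makes these simulations expensive and worth parallelizing across CPUs, GPUs and FPGAs as the paper investigates.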

Risks doi: 10.3390/risks4040035

Authors: Marcos Escobar Sven Panz

This paper presents a comprehensive extension of pricing two-dimensional derivatives depending on two barrier constraints. We generalize by assuming that the covariance matrix is random. We analyse common barrier derivatives, enabling us to study parameter uncertainty and the risk related to the estimation procedure (estimation risk). In particular, we use the distribution of empirical parameters from IBM and EURO STOXX50. The evidence suggests that estimation risk should not be neglected in the context of multidimensional barrier derivatives, as it could cause price differences of up to 70%.

Risks doi: 10.3390/risks4040034

Authors: Chuancun Yin Dan Zhu

It is well known that a random vector with given marginals is comonotonic if and only if it has the largest convex sum, and that a random vector with given marginals (under an additional condition) is mutually exclusive if and only if it has the minimal convex sum. This paper provides an alternative proof of these two results using the theories of distortion risk measure and expected utility.

Risks doi: 10.3390/risks4040033

Authors: Mélina Mailhot Mhamed Mesfioui

In order to protect stakeholders of insurance companies and financial institutions against adverse outcomes of risky businesses, regulators and senior management use capital allocation techniques. For enterprise-wide risk management, it has become important to calculate the contribution of each risk within a portfolio. For that purpose, bivariate lower and upper orthant tail value-at-risk can be used for capital allocation. In this paper, we present multivariate value-at-risk and tail-value-at-risk for d ≥ 2 , and we focus on three different methods to calculate optimal values for the contribution of each risk within the sums of random vectors to the overall portfolio, which could particularly apply to insurance and financial portfolios.

Risks doi: 10.3390/risks4030032

Authors: Rüdiger Kiesel Robin Rühlicke Gerhard Stahl Jinsong Zheng

In the aftermath of the financial crisis, it was realized that the mathematical models used for the valuation of financial instruments and the quantification of risk inherent in portfolios consisting of these financial instruments exhibit a substantial model risk. Consequently, regulators and other stakeholders have started to require that the internal models used by financial institutions are robust. We present an approach to consistently incorporate the robustness requirements into the quantitative risk management process of a financial institution, with a special focus on insurance. We advocate the Wasserstein metric as the canonical metric for approximations in robust risk management and present supporting arguments. Representing risk measures as statistical functionals, we relate risk measures with the concept of robustness and hence continuity with respect to the Wasserstein metric. This allows us to use results from robust statistics concerning continuity and differentiability of functionals. Finally, we illustrate our approach via practical applications.

Risks doi: 10.3390/risks4030031

Authors: Maximilian Hughes Ralf Werner

Transition matrices, containing credit risk information in the form of ratings based on discrete observations, are published annually by rating agencies. A substantial issue arises, as for higher rating classes practically no defaults are observed yielding default probabilities of zero. This does not always reflect reality. To circumvent this shortcoming, estimation techniques in continuous-time can be applied. However, raw default data may not be available at all or not in the desired granularity, leaving the practitioner to rely on given one-year transition matrices. Then, it becomes necessary to transform the one-year transition matrix to a generator matrix. This is known as the embedding problem and can be formulated as a nonlinear optimization problem, minimizing the distance between the exponential of a potential generator matrix and the annual transition matrix. So far, in credit risk-related literature, solving this problem directly has been avoided, but approximations have been preferred instead. In this paper, we show that this problem can be solved numerically with sufficient accuracy, thus rendering approximations unnecessary. Our direct approach via nonlinear optimization allows one to consider further credit risk-relevant constraints. We demonstrate that it is thus possible to choose a proper generator matrix with additional structural properties.
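The embedding step can be illustrated on a toy 3×3 example: take a known generator, exponentiate it to obtain an "annual" transition matrix, and recover the generator via a truncated matrix-logarithm series. In general the matrix logarithm need not be a valid generator (negative off-diagonals can appear), which is precisely why the paper resorts to constrained nonlinear optimization; this sketch, with invented numbers and without scipy, only shows the unconstrained computation:

```python
import numpy as np

def expm(M, terms=40):
    """Matrix exponential via truncated Taylor series (adequate for the
    small, well-scaled matrices used here)."""
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def logm(P, terms=60):
    """Matrix logarithm via the series log(I + X) = X - X^2/2 + ...,
    X = P - I; converges when P is close to the identity, as annual
    rating transition matrices typically are."""
    X = P - np.eye(len(P))
    out, power = np.zeros_like(X), np.eye(len(P))
    for k in range(1, terms):
        power = power @ X
        out = out + ((-1) ** (k + 1)) * power / k
    return out

# A valid generator: nonnegative off-diagonals, zero row sums; the last
# state is absorbing ("default").  Numbers are invented.
G = np.array([[-0.11, 0.10, 0.01],
              [0.05, -0.15, 0.10],
              [0.00, 0.00, 0.00]])
P = expm(G)          # one-year transition matrix implied by G
G_hat = logm(P)      # candidate generator recovered from P
err = float(np.abs(G_hat - G).max())
```

Here the round trip recovers G almost exactly because P was generated from a true generator; for an empirical P, `logm(P)` may violate the generator constraints, motivating the direct optimization approach of the paper.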

Risks doi: 10.3390/risks4030030

Authors: Hirbod Assa Manuel Morales Hassan Omidi Firouzi

In this paper we introduce a new coherent cumulative risk measure on a subclass in the space of càdlàg processes. This new coherent risk measure turns out to be tractable enough within a class of models where the aggregate claims process is driven by a spectrally positive Lévy process. We focus our motivation and discussion on the problem of capital allocation. Indeed, this risk measure is well-suited to address the problem of capital allocation in an insurance context. We show that the capital allocation problem for this risk measure has a unique solution determined by the Euler allocation method. Some examples and connections with existing results as well as practical implications are also discussed.

Risks doi: 10.3390/risks4030029

Authors: Mario Ghossoub

In problems of optimal insurance design, Arrow’s classical result on the optimality of the deductible indemnity schedule holds in a situation where the insurer is a risk-neutral Expected-Utility (EU) maximizer, the insured is a risk-averse EU-maximizer, and the two parties share the same probabilistic beliefs about the realizations of the underlying insurable loss. Recently, Ghossoub re-examined Arrow’s problem in a setting where the two parties have different subjective beliefs about the realizations of the insurable random loss, and he showed that if these beliefs satisfy a certain compatibility condition that is weaker than the Monotone Likelihood Ratio (MLR) condition, then optimal indemnity schedules exist and are nondecreasing in the loss. However, Ghossoub only gave a characterization of these optimal indemnity schedules in the special case of an MLR. In this paper, we consider the general case, allowing for disagreement about zero-probability events. We fully characterize the class of all optimal indemnity schedules that are nondecreasing in the loss, in terms of their distribution under the insured’s probability measure, and we obtain Arrow’s classical result, as well as one of the results of Ghossoub as corollaries. Finally, we formalize Marshall’s argument that, in a setting of belief heterogeneity, an optimal indemnity schedule may take “any” shape.

Risks doi: 10.3390/risks4030028

Authors: Leah Dundon Katherine Nelson Janey Camp Mark Abkowitz Alan Jones

Extreme weather and climate change can have a significant impact on all types of infrastructure and assets, regardless of location, with the potential for human casualties, physical damage to assets, disruption of operations, economic and community distress, and environmental degradation. This paper describes a methodology for using extreme weather and climate data to identify climate-related risks and to quantify the potential impact of extreme weather events on certain types of transportation infrastructure as part of a vulnerability screening assessment. This screening assessment can be especially useful when a large number of assets or large geographical areas are being studied, with the results enabling planners and asset managers to undertake a more detailed assessment of vulnerability on a more targeted number of assets or locations. The methodology combines climate, weather, and impact data to identify vulnerabilities to a range of weather and climate related risks over a multi-decadal planning period. The paper applies the methodology to perform an extreme weather and climate change vulnerability screening assessment on transportation infrastructure assets for the State of Tennessee. This paper represents the results of one of the first efforts at spatial vulnerability assessments of transportation infrastructure and provides important insights for any organization considering the impact of climate and weather events on transportation or other critical infrastructure systems.

Risks doi: 10.3390/risks4030025

Authors: Richard Verrall Mario Wüthrich

The aim of this paper is to understand and to model claims arrival and reporting delay in general insurance. We calibrate two real individual claims data sets to the statistical model of Jewell and Norberg. One data set considers property insurance and the other one casualty insurance. For our analysis we slightly relax the model assumptions of Jewell allowing for non-stationarity so that the model is able to cope with trends and with seasonal patterns. The performance of our individual claims data prediction is compared to the prediction based on aggregate data using the Poisson chain-ladder method.

Risks doi: 10.3390/risks4030027

Authors: Stanislaus Maier-Paape Andreas Platen

The intermarket analysis, in particular the lead–lag relationship, plays an important role within financial markets. Therefore, a mathematical approach to be able to find interrelations between the price development of two different financial instruments is developed in this paper. Computing the differences of the relative positions of relevant local extrema of two charts, i.e., the local phase shifts of these price developments, gives us an empirical distribution on the unit circle. With the aid of directional statistics, such angular distributions are studied for many pairs of markets. It is shown that there are several very strongly correlated financial instruments in the field of foreign exchange, commodities and indexes. In some cases, one of the two markets is significantly ahead with respect to the relevant local extrema, i.e., there is a phase shift unequal to zero between them.
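The basic directional-statistics summary used for such angular distributions, the mean direction and mean resultant length, is a short computation; the phase-shift sample below is simulated, not market data:

```python
import math
import random

def circular_summary(angles):
    """Mean direction and mean resultant length R of a sample of angles;
    R near 1 indicates tightly concentrated phase shifts, i.e. a strong
    lead-lag relationship between the two instruments."""
    c = sum(math.cos(a) for a in angles) / len(angles)
    s = sum(math.sin(a) for a in angles) / len(angles)
    return math.atan2(s, c), math.hypot(c, s)

rng = random.Random(9)
# Toy sample: phase shifts concentrated around +0.3 rad (market A leads B).
shifts = [rng.gauss(0.3, 0.2) for _ in range(1000)]
direction, R = circular_summary(shifts)
```

A mean direction significantly different from zero with R close to one is what signals, in this framework, that one market is ahead of the other at the relevant local extrema.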

Risks doi: 10.3390/risks4030026

Authors: Tim Boonen

This paper studies the problem of optimal reinsurance contract design. We let the insurer use dual utility, and the premium is calculated under an extended Wang premium principle. The novel contribution is that we allow for heterogeneity in the beliefs regarding the underlying probability distribution. We characterize layer-reinsurance as an optimal reinsurance contract. Moreover, we characterize layer-reinsurance as the optimal contract when the insurer faces costs of holding regulatory capital. We illustrate this in cases where both firms use the Value-at-Risk or the conditional Value-at-Risk.

Risks doi: 10.3390/risks4030023

Authors: Francesca Biagini Andreas Groll Jan Widenmann

We study risk-minimization for a large class of insurance contracts. Given that the individual progress in time of visiting an insurance policy’s states follows an F -doubly stochastic Markov chain, we describe different state-dependent types of insurance benefits. These cover single payments at maturity, annuity-type payments and payments at the time of a transition. Based on the intensity of the F -doubly stochastic Markov chain, we provide the Galtchouk-Kunita-Watanabe decomposition for a general insurance contract and specify risk-minimizing strategies in a Brownian financial market setting. The results are further illustrated explicitly within an affine structure for the intensity.

Risks doi: 10.3390/risks4030024

Authors: Daniel Buncic

Let me say from the outset that this is an excellent book to read. It is not only informative, as it should be for a book on forecasting, but also highly entertaining.[...]

Risks doi: 10.3390/risks4030022

Authors: Pavel Shevchenko Xiaolin Luo

In this paper, we review pricing of the variable annuity living and death guarantees offered to retail investors in many countries. Investors purchase these products to take advantage of market growth and protect savings. We present pricing of these products via an optimal stochastic control framework and review the existing numerical methods. We also discuss pricing under the complete/incomplete financial market models, stochastic mortality and optimal/sub-optimal policyholder behavior, and in the presence of taxes. For numerical valuation of these contracts in the case of simple risky asset process, we develop a direct integration method based on the Gauss-Hermite quadratures with a one-dimensional cubic spline for calculation of the expected contract value, and a bi-cubic spline interpolation for applying the jump conditions across the contract cashflow event times. This method is easier to implement and faster when compared to the partial differential equation methods if the transition density (or its moments) of the risky asset underlying the contract is known in closed form between the event times. We present accurate numerical results for pricing of a Guaranteed Minimum Accumulation Benefit (GMAB) guarantee available on the market that can serve as a numerical benchmark for practitioners and researchers developing pricing of variable annuity guarantees to assess the accuracy of their numerical implementation.
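The Gauss-Hermite building block, computing the expectation of a payoff in a lognormal terminal fund value, can be sketched as follows. This prices a bare max(S_T, G) payoff under Black-Scholes-type dynamics with invented parameters; the full GMAB valuation with event times, jump conditions and spline interpolation described above is far richer:

```python
import numpy as np

def expected_value(g, n=32):
    """E[g(Z)] for standard normal Z via Gauss-Hermite quadrature:
    E[g(Z)] = (1/sqrt(pi)) * sum_i w_i * g(sqrt(2) * x_i)."""
    x, w = np.polynomial.hermite.hermgauss(n)
    return float((w * g(np.sqrt(2.0) * x)).sum() / np.sqrt(np.pi))

# Illustrative Black-Scholes-type parameters (not from the paper).
s0, r, sigma, T, G = 100.0, 0.02, 0.2, 5.0, 100.0

def s_T(z):
    """Terminal fund value under lognormal dynamics."""
    return s0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)

# Simplified GMAB-style payoff: the greater of fund value and guarantee G.
price = np.exp(-r * T) * expected_value(lambda z: np.maximum(s_T(z), G))
```

Because the standard-normal density is built into the quadrature weights, a handful of nodes suffices whenever the transition density between event times is known in closed form, which is the efficiency advantage the paper exploits.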

Risks doi: 10.3390/risks4030021

Authors: Kevin Dowd David Blake Andrew Cairns

This paper uses mortality fan charts to illustrate prospective future male mortality. These fan charts show both the most likely path of male mortality and the bands of uncertainty surrounding that path. The fan charts are based on a model of male mortality that is known to provide a good fit to UK mortality data. The fan charts suggest that there are clear limits to longevity—that future mortality rates are very uncertain and tend to become more uncertain the further ahead the forecast—and that forecasts of future mortality uncertainty must also take account of uncertainty in the parameters of the underlying mortality model.
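The band computation behind a fan chart is straightforward once simulated paths are available; the toy log-mortality model below (a random walk with an improvement drift) is an invented stand-in for the paper's mortality model, used only to show the widening bands:

```python
import random

def fan_chart_bands(paths, probs=(0.05, 0.5, 0.95)):
    """Per-horizon empirical percentiles across simulated paths: the raw
    material of a fan chart."""
    bands = []
    for t in range(len(paths[0])):
        xs = sorted(p[t] for p in paths)
        bands.append(tuple(xs[int(q * (len(xs) - 1))] for q in probs))
    return bands

rng = random.Random(3)
paths = []
for _ in range(1000):
    m, path = -4.0, []                      # log mortality rate, toy model
    for _ in range(30):
        m += -0.02 + rng.gauss(0.0, 0.03)   # improvement drift plus noise
        path.append(m)
    paths.append(path)

bands = fan_chart_bands(paths)
width_first = bands[0][2] - bands[0][0]     # 90% band width at horizon 1
width_last = bands[-1][2] - bands[-1][0]    # 90% band width at horizon 30
```

The band width grows with the forecast horizon, which is exactly the "more uncertain the further ahead the forecast" feature the fan charts visualize.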

Risks doi: 10.3390/risks4030020

Authors: René Brenner Stanislaus Maier-Paape

In this survey, a short introduction of the recent discovery of log-normally-distributed market-technical trend data will be given. The results of the statistical evaluation of typical market-technical trend variables will be presented. It will be shown that the log-normal assumption fits better to empirical trend data than to daily returns of stock prices. This enables one to mathematically evaluate trading systems depending on such variables. In this manner, a basic approach to an anti-cyclic trading system will be given as an example.

Risks doi: 10.3390/risks4030019

Authors: Ayşegül İşcanoğlu-Çekiç

The Turkish Private Pension System is an investment system which aims to generate income for future consumption. This is a volunteer system, and the contributions are held in individual portfolios. Therefore, management of the funds is an important issue for both the participants and the insurance company. In this study, we propose an optimal private pension plan with a guarantee feature that is based on Constant Proportion Portfolio Insurance (CPPI). We derive a closed form formula for the optimal strategy with the help of dynamic programming. Moreover, our model is evaluated with numerical examples, and we compare its performance by implementing a sensitivity analysis.
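The CPPI allocation rule at the heart of such a guarantee can be sketched as follows; the parameters and return dynamics are invented, and the paper's closed-form optimal strategy is not reproduced here, only the basic floor-protection mechanics:

```python
import random

def cppi_path(v0, floor_T, r, m, risky_returns):
    """One CPPI path: invest m * (V - floor) in the risky asset (capped at
    V, floored at 0), the remainder at the risk-free rate; the floor is
    the discounted terminal guarantee."""
    steps = len(risky_returns)
    v = v0
    for t, ret in enumerate(risky_returns):
        floor_t = floor_T / (1.0 + r) ** (steps - t)   # discounted guarantee
        exposure = min(max(m * (v - floor_t), 0.0), v)
        v = exposure * (1.0 + ret) + (v - exposure) * (1.0 + r)
    return v

rng = random.Random(11)
terminal = [
    cppi_path(v0=100.0, floor_T=100.0, r=0.01, m=3.0,
              risky_returns=[rng.gauss(0.015, 0.05) for _ in range(40)])
    for _ in range(500)
]
```

In discrete time the floor can in principle be breached by a single-period loss exceeding 1/m, which is why the multiplier m and rebalancing frequency matter in the sensitivity analysis.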

]]>Risks doi: 10.3390/risks4030018

Authors: Philipp Harms David Stefanovits Josef Teichmann Mario Wüthrich

The discrete-time multifactor Vasiček model is a tractable Gaussian spot rate model. Typically, two- or three-factor versions allow one to capture the dependence structure between yields with different times to maturity in an appropriate way. In practice, re-calibration of the model to the prevailing market conditions leads to model parameters that change over time. Therefore, the model parameters should be understood as being time-dependent or even stochastic. Following the consistent re-calibration (CRC) approach, we construct models as concatenations of yield curve increments of Hull–White extended multifactor Vasiček models with different parameters. The CRC approach provides attractive tractable models that preserve the no-arbitrage premise. As a numerical example, we fit Swiss interest rates using CRC multifactor Vasiček models.
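The building block of the CRC construction, the discrete-time multifactor Vasiček spot rate, can be simulated in a few lines (this sketches only the factor dynamics with illustrative parameters, not the Hull-White extension or the re-calibration step): each factor mean-reverts to its own level and the short rate is their sum.

```python
import random

def simulate_vasicek(factors, horizon, seed=0):
    """Discrete-time multifactor Vasicek: each factor follows
    x <- x + kappa * (theta - x) + sigma * N(0, 1); the short rate is the
    sum of the factors. `factors` is a list of (x0, kappa, theta, sigma)
    tuples -- illustrative parameters, not calibrated values."""
    rng = random.Random(seed)
    state = [f[0] for f in factors]
    rates = []
    for _ in range(horizon):
        state = [x + k * (th - x) + s * rng.gauss(0.0, 1.0)
                 for x, (_, k, th, s) in zip(state, factors)]
        rates.append(sum(state))
    return rates
```

Over a long horizon the simulated rate reverts toward the sum of the factor levels, which is the mean-reversion property the model's tractability rests on.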

]]>Risks doi: 10.3390/risks4020017

Authors: Corina Constantinescu Suhang Dai Weihong Ni Zbigniew Palmowski

We analyse the ruin probabilities for a renewal insurance risk process with inter-arrival times depending on the claims that arrive within a fixed (past) time window. This dependence can be explained through a regenerative structure. The main inspiration for the model comes from the bonus-malus (BM) feature of car insurance pricing. We first discuss asymptotic results for the ruin probabilities under different regimes of claim distributions. For numerical results, we recognise an embedded Markov additive process and, via an appropriate change of measure, reduce the ruin probabilities to closed-form formulae. Additionally, we employ importance sampling simulations to derive ruin probabilities, which further permits an in-depth analysis of a few concrete cases.
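The importance sampling idea can be illustrated on the classical Cramér-Lundberg special case with exponential claims (not the dependent renewal model of the paper): under the Lundberg-conjugate (exponentially tilted) measure, ruin is certain, and each simulated ruin path contributes its likelihood-ratio weight.

```python
import math
import random

def ruin_prob_is(u, lam=1.0, beta=2.0, c=0.6, n_sims=2000, seed=3):
    """Importance-sampling estimate of the ruin probability for the classical
    Cramer-Lundberg model: Poisson(lam) arrivals, Exp(beta) claims, premium
    rate c, initial capital u. Simulation is done under the Lundberg-conjugate
    measure, where the claim surplus has positive drift."""
    R = beta - lam / c                  # Lundberg adjustment coefficient
    assert R > 0, "net profit condition must hold"
    lam_tilt = lam * beta / (beta - R)  # tilted arrival intensity
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        s = 0.0                          # aggregate claims minus premiums
        while s <= u:                    # ruin is certain under the tilted law
            s -= c * rng.expovariate(lam_tilt)  # premium income
            s += rng.expovariate(beta - R)      # tilted claim size
        total += math.exp(-R * s)        # likelihood-ratio weight at ruin
    return total / n_sims
```

For this special case the estimate can be checked against the exact formula psi(u) = (lam / (c * beta)) * exp(-R * u); the variance reduction over crude Monte Carlo is what makes the in-depth case analyses feasible.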

]]>Risks doi: 10.3390/risks4020016

Authors: Elisa Luciano Jaap Spreeuw Elena Vigna

This paper studies the dependence between coupled lives, i.e., the spouses’ dependence, across different generations, and its effects on prices of reversionary annuities in the presence of longevity risk. Longevity risk is represented via a stochastic mortality intensity. We find that a generation-based model is important, since spouses’ dependence decreases when passing from older generations to younger generations. The independence assumption produces quantifiable mispricing of reversionary annuities, with different effects on different generations. The research is conducted using a well-known dataset of double life contracts.
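The mispricing effect of the independence assumption can be illustrated with a toy Monte Carlo model (illustrative exponential intensities and a common-shock dependence device, not the paper's generation-based stochastic mortality model): a reversionary annuity pays 1 per year to (y) after the death of (x), and both scenarios below share the same marginal lifetime laws.

```python
import math
import random

def reversionary_annuity(n_sims, dependent, delta=0.03,
                         mu=0.05, shock=0.02, seed=11):
    """Monte Carlo value of a reversionary annuity paying 1/yr to (y) after
    the death of (x). Lifetimes are exponential with the same marginal rate
    mu + shock in both cases; in the dependent case the `shock` component is
    a shared common shock that can kill both lives at once."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        if dependent:
            z = rng.expovariate(shock)            # shared shock time
            tx = min(rng.expovariate(mu), z)
            ty = min(rng.expovariate(mu), z)
        else:
            tx = rng.expovariate(mu + shock)      # independent lifetimes,
            ty = rng.expovariate(mu + shock)      # same marginal law
        if ty > tx:
            # discounted payment stream over (tx, ty)
            total += (math.exp(-delta * tx) - math.exp(-delta * ty)) / delta
    return total / n_sims
```

Because positively dependent spouses tend to die closer together, the period in which the survivor collects shrinks, so assuming independence overstates the annuity value, the direction of mispricing the abstract quantifies.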

]]>