Risks doi: 10.3390/risks6010023

Authors: José Garrido Ramin Okhrati

An arbitrage portfolio provides a cash flow that can never be negative at zero cost. We define the weaker concept of a “desirable portfolio” delivering cash flows with negative risk at zero cost. Although these are not completely risk-free investments, and are subject to the risk measure used, they can provide attractive investment opportunities for investors. We investigate in detail the theoretical aspects of this portfolio selection procedure and the existence of such opportunities in fixed income markets. Then, we present two applications of the theory: one in analyzing the market integration problem and the other in gauging the credit quality of defaultable bonds in a portfolio. We also discuss the model calibration and provide some numerical illustrations.

Risks doi: 10.3390/risks6010022

Authors: Thierry Moudiki Frédéric Planchet Areski Cousin

We are interested in obtaining forecasts for multiple time series, by taking into account the potential nonlinear relationships between their observations. For this purpose, we use a specific type of regression model on an augmented dataset of lagged time series. Our model is inspired by dynamic regression models (Pankratz 2012), with the response variable’s lags included as predictors, and is known as Random Vector Functional Link (RVFL) neural networks. RVFL neural networks have been successfully applied in the past to regression and classification problems. The novelty of our approach is to apply an RVFL model to multivariate time series, under two separate regularization constraints on the regression parameters.
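
In this spirit, an RVFL fit amounts to a ridge regression on lagged values augmented with frozen random nonlinear features. A minimal numpy sketch, with a single ridge penalty rather than the paper's two separate regularization constraints; the data and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy series with a cross-dependence (illustrative data).
n = 200
x = np.cumsum(rng.normal(size=n))
y = np.sin(0.1 * np.arange(n)) + 0.3 * x + rng.normal(scale=0.1, size=n)
series = np.column_stack([x, y])

def make_lagged(series, p):
    """Stack p lags of every series as predictors for the next observation."""
    rows, targets = [], []
    for t in range(p, len(series)):
        rows.append(series[t - p:t].ravel())
        targets.append(series[t])
    return np.array(rows), np.array(targets)

p = 3
X, Y = make_lagged(series, p)

# RVFL: frozen random hidden features concatenated with the direct link,
# then a ridge regression for the output weights.
n_hidden = 50
W = rng.normal(size=(X.shape[1], n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)          # random nonlinear features (never trained)
Z = np.hstack([X, H])           # direct link + hidden features

lam = 1e-2                      # single ridge penalty (the paper uses two)
beta = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ Y)

pred = Z @ beta
rmse = np.sqrt(np.mean((pred - Y) ** 2))
```

The key design point is that only the output weights `beta` are estimated; the hidden layer stays random, which keeps the fit a linear least-squares problem.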

Risks doi: 10.3390/risks6010021

Authors: Robert Aykroyd Víctor Leiva Carolina Marchant

Since its origins and numerous applications in material science, the Birnbaum–Saunders family of distributions has now found widespread use in areas of the applied sciences such as agriculture, environment and medicine, as well as in quality control, among others. It is able to model varied data behaviour and hence provides a flexible alternative to the most usual distributions. The family includes Birnbaum–Saunders and log-Birnbaum–Saunders distributions in univariate and multivariate versions. There are now well-developed methods for estimation and diagnostics that allow in-depth analyses. This paper gives a detailed review of existing methods and of relevant literature, introducing properties and theoretical results in a systematic way. To emphasise the range of suitable applications, full analyses are included of examples based on regression and diagnostics in material science, spatial data modelling in agricultural engineering and control charts for environmental monitoring. Potential future uses in new areas such as business, economics, finance and insurance are also discussed. This work is presented to provide a full tool-kit of novel statistical models and methods to encourage other researchers to implement them in these new areas. It is expected that the methods will have the same positive impact in the new areas as they have had elsewhere.

Risks doi: 10.3390/risks6010020

Authors: Edita Kizinevič Jonas Šiaulys

In this work, the non-homogeneous risk model is considered. In such a model, claims and inter-arrival times are independent but possibly non-identically distributed. Easily verifiable conditions are found under which the ultimate ruin probability of the model satisfies the exponential estimate exp{−ϱu} for all values of the initial surplus u ≥ 0. Algorithms to estimate the positive constant ϱ are also presented; in fact, these algorithms are the main contribution of this work. The sharpness of the derived inequalities is illustrated by several numerical examples.
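
The exponential estimate above has the classical Lundberg bound as its homogeneous special case. As a point of reference only (the paper's algorithms target the non-homogeneous case), here is a sketch of computing the adjustment coefficient ϱ by bisection for a compound Poisson model with exponential claims; all parameter values are illustrative:

```python
import math

# Illustrative homogeneous model: Poisson arrivals with intensity lam,
# exponential claim sizes with rate beta, premium rate c (net profit: c > lam/beta).
lam, beta, c = 1.0, 2.0, 0.75

def lundberg_fn(r):
    # lam * (M_X(r) - 1) - c * r, with M_X(r) = beta / (beta - r) for exponential claims
    return lam * (beta / (beta - r) - 1.0) - c * r

# Bisection for the positive root (the adjustment coefficient) on (0, beta).
lo, hi = 1e-9, beta - 1e-9
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if lundberg_fn(mid) < 0.0:
        lo = mid
    else:
        hi = mid
rho = 0.5 * (lo + hi)

# Lundberg bound on the ultimate ruin probability for initial surplus u:
# psi(u) <= exp(-rho * u).
u = 10.0
ruin_bound = math.exp(-rho * u)
```

For exponential claims the root is known in closed form, ϱ = β − λ/c, which makes this a convenient sanity check for the bisection.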

Risks doi: 10.3390/risks6010019

Authors: Zinoviy Landsman Udi Makov Tomer Shushi

In this paper, we offer a novel class of utility functions applied to optimal portfolio selection. This class incorporates as special cases important measures such as the mean-variance, Sharpe ratio, mean-standard deviation and others. We provide an explicit solution to the problem of optimal portfolio selection based on this class. Furthermore, we show that each measure in this class generally reduces to an efficient frontier that coincides with, or is contained in, the classical mean-variance efficient frontier. In addition, a condition is provided for the existence of a one-to-one correspondence between the parameter of this class of utility functions and the trade-off parameter λ in the mean-variance utility function. This correspondence essentially provides insight into the choice of this parameter. We illustrate our results by taking a portfolio of stocks from the National Association of Securities Dealers Automated Quotation (NASDAQ).
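
The classical mean-variance case, to which this utility class reduces for particular parameter choices, has a closed-form solution. A sketch with illustrative two-asset inputs (not the NASDAQ data used in the paper):

```python
import numpy as np

mu = np.array([0.08, 0.12])                  # expected returns (illustrative)
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])             # covariance matrix (illustrative)
lam_tradeoff = 3.0                           # trade-off parameter lambda

# Maximize the mean-variance utility  mu'w - (lambda/2) w'Sigma w  (unconstrained).
# The first-order condition  mu = lambda * Sigma w  gives the optimal weights.
w_opt = np.linalg.solve(lam_tradeoff * Sigma, mu)

mean_p = mu @ w_opt                          # portfolio expected return
sd_p = np.sqrt(w_opt @ Sigma @ w_opt)        # portfolio standard deviation
```

Varying `lam_tradeoff` traces out the mean-variance efficient frontier, which is the benchmark against which the paper's more general frontiers are compared.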

Risks doi: 10.3390/risks6010018

Authors: Andrea Macrina Obeid Mahomed

The general problem of asset pricing when the discount rate differs from the rate at which an asset’s cash flows accrue is considered. A pricing kernel framework is used to model an economy that is segmented into distinct markets, each identified by a yield curve having its own market, credit and liquidity risk characteristics. The proposed framework precludes arbitrage within each market, while the definition of a curve-conversion factor process links all markets in a consistent arbitrage-free manner. A pricing formula is then derived, referred to as the across-curve pricing formula, which enables consistent valuation and hedging of financial instruments across curves (and markets). As a natural application, a consistent multi-curve framework is formulated for emerging and developed inter-bank swap markets, which highlights an important dual feature of the curve-conversion factor process. Given this multi-curve framework, existing multi-curve approaches based on HJM and rational pricing kernel models are recovered, reviewed and generalised and single-curve models extended. In another application, inflation-linked, currency-based and fixed-income hybrid securities are shown to be consistently valued using the across-curve valuation method.

Risks doi: 10.3390/risks6010017

Authors: Asmerilda Hitaj Cesario Mateus Ilaria Peri

This paper presents the first methodological proposal for estimation of the ΛVaR. Our approach is dynamic and calibrated to market extreme scenarios, addressing the need of regulators and financial institutions for more sensitive risk measures. We also propose a simple backtesting methodology by extending the VaR hypothesis-testing framework. Hence, we test our ΛVaR proposals under extreme downward scenarios of the financial crisis and different assumptions on the profit and loss distribution. The findings show that our ΛVaR estimations are able to capture the tail risk and react to market fluctuations significantly faster than the VaR and expected shortfall. The backtesting exercise displays a higher level of accuracy for our ΛVaR estimations.
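
The VaR hypothesis-testing framework being extended here is commonly based on Kupiec's proportion-of-failures (POF) likelihood-ratio test. A minimal sketch of that baseline test (the ΛVaR extension itself is the subject of the paper):

```python
import math

def kupiec_pof(n_obs, n_exceptions, p):
    """Kupiec proportion-of-failures LR statistic for a VaR backtest.

    Under correct coverage p, the statistic is asymptotically chi-squared
    with one degree of freedom (reject at the 5% level when it exceeds 3.84).
    """
    x, n = n_exceptions, n_obs
    if x == 0:
        return -2.0 * n * math.log(1.0 - p)
    if x == n:
        return -2.0 * n * math.log(p)
    phat = x / n
    ll_null = x * math.log(p) + (n - x) * math.log(1.0 - p)
    ll_alt = x * math.log(phat) + (n - x) * math.log(1.0 - phat)
    return -2.0 * (ll_null - ll_alt)
```

For example, 25 exceptions in 1000 days at the 1% VaR level is rejected at 5% significance, while 10 exceptions (the expected count) is not.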

Risks doi: 10.3390/risks6010016

Authors: Pavel Shevchenko

n/a

Risks doi: 10.3390/risks6010015

Authors: Reda Aboutajdine Pierre Picard

Audit mechanisms frequently take place in the context of repeated relationships between auditor and auditee. This paper focuses attention on the insurance fraud problem in a setting where insurers repeatedly verify claims satisfied by service providers (e.g., affiliated car repairers or members of managed care networks). We highlight a learning bias that leads insurers to over-audit service providers at the beginning of their relationship. The paper builds a bridge between the literature on optimal audit in insurance and the exploitation/exploration trade-off in multi-armed bandit problems.

Risks doi: 10.3390/risks6010014

Authors: Yang Shen Tak Siu

This paper presents a novel risk-based approach for an optimal asset allocation problem with default risk, where a money market account, an ordinary share and a defaultable security are investment opportunities in a general non-Markovian economy incorporating random market parameters. The objective of an investor is to select an optimal mix of these securities such that a risk metric of an investment portfolio is minimized. By adopting a sub-additive convex risk measure, which takes into account interest rate risk, as a measure for risk, the investment problem is discussed mathematically in a form of a two-player, zero-sum, stochastic differential game between the investor and the market. A backward stochastic differential equation approach is used to provide a flexible and theoretically sound way to solve the game problem. Closed-form expressions for the optimal strategies of the investor and the market are obtained when the penalty function is a quadratic function and when the risk measure is a sub-additive coherent risk measure. An important case of the general non-Markovian model, namely the self-exciting threshold diffusion model with time delay, is considered. Numerical examples based on simulations for the self-exciting threshold diffusion model with and without time delay are provided to illustrate how the proposed model can be applied in this important case. The proposed model can be implemented using Excel spreadsheets.

Risks doi: 10.3390/risks6010013

Authors: Valeria D’Amato Emilia Di Lorenzo Marilena Sibillo

The relevance of critical illness coverage and life insurance under cause-specific mortality conditions is increasing in many industrialized countries. Products with specific conditions on the illness and on the death event, providing cheaper premiums for the insureds and lower obligations for the insurers, are interesting in an insurance market looking to offer appealing products. On the other hand, the systematic improvement in longevity gives rise to a market with agents getting increasingly older, and insurers pay attention to this trend. Financial contracts are sometimes joined with insurance coverage, particularly in the case of the so-called insured loan. Insured loans are financial contracts often proposed together with a term life insurance in order to cover the lender and the heirs against the borrower’s death within the loan duration. This paper explores new insurance products that, linked to an insured loan, are founded on specific illness hypotheses and/or cause-specific mortality. The aim is to assess how much the insurance cost is reduced with respect to traditional term insurance. The authors project cause-specific mortality rates and specific diagnosis rates, in the latter case overcoming discontinuities in the data. The new contractual schemes are priced. Numerical applications illustrate the rate projection procedure with several graphs, and a number of tables report the premiums for the newly proposed contractual forms. A complete amortization schedule closes the work.

Risks doi: 10.3390/risks6010012

Authors: David Babbel Miguel Herce

Little in the scholarly economics literature is directed specifically to the performance of stable value funds, although they occupy a leading place among retirement investment vehicles. They are currently offered in more than one-third of all defined contribution plans in the USA, with more than $800 billion of assets under management. This paper rigorously examines their performance throughout the entire period since their inception in 1973. We produce a composite index of stable value returns. We next conduct mean-variance analysis, Sharpe and Sortino ratio analysis, stochastic dominance analysis, and optimal multi-period portfolio composition analysis. Our evidence suggests that stable value funds dominate (on average) two major asset classes based on a historical analysis, and that they often occupy a significant position in optimized portfolios across a broad range of risk aversion levels. We discuss factors that contributed to stable value funds’ past performance and whether they can continue to perform well into the future. We also discuss considerations regarding whether or not to include stable value as an element in target date funds within defined contribution pension plans.

Risks doi: 10.3390/risks6010011

Authors: Enrique Calderín-Ojeda

Composite models have received much attention in the recent actuarial literature to describe heavy-tailed insurance loss data. One of the models that performs well in describing this kind of data is the composite Weibull–Pareto (CWL) distribution. In this note, the distribution is revisited to carry out estimation of its parameters via the mle and mle2 optimization functions in R. The results are compared with those obtained in a previous paper using the nlm function, in terms of analytical and graphical methods of model selection. In addition, the consistency of the parameter estimation is examined via a simulation study.

Risks doi: 10.3390/risks6010010

Authors: Yang Chang Michael Sherris

The design and development of post-retirement income products require the assessment of longevity risk, as well as a basis for hedging these risks. Most indices for longevity risk are age-period based. We develop and assess a cohort-based value index for life insurers and pension funds to manage longevity risk. There are two innovations in the development of this index. Firstly, the underlying variables of most existing longevity indices are based on mortality experience only. The value index is based on the present value of future cash flow obligations, capturing all the risks in retirement income products. We use the index to manage both longevity risk and interest rate risk. Secondly, we capture historical dependencies between ages and cohorts with a cohort-based stochastic mortality model. We achieve this by introducing age-dependent model parameters. With our mortality model, we obtain realistic cohort correlation structures and improve the fitting performance, particularly for very old ages.

Risks doi: 10.3390/risks6010009

Authors: Catalina Bolancé Montserrat Guillen Jens Perch Nielsen Fredrik Thuring

Prospective customers of financial and insurance products can be targeted based on the profit the provider expects to earn from them. We present a model for individual expected profit and two alternatives for calculating optimal personalized prices that maximize the expected profit. For one of these alternatives, we obtain a closed-form expression for the price offered to each prospective customer; for the other, we need to use a numerical approximation. In both approaches, the profits generated by prospective customers are not immediately observed, given that the products sold by these companies have a risk component. We assume that willingness to pay is heterogeneous and apply our methodology using real data from a European insurance company. Our study indicates that a substantial boost in profits can be expected when applying the simplest optimal pricing method proposed.

Risks doi: 10.3390/risks6010008

Authors: Georges Dionne Denise Desjardins Martin Lebeau Stéphane Messier André Dascal

The ability and willingness of health care workers to report for work during a pandemic are essential to pandemic response. The main contribution of this article is to examine the relationship between risk perception of personal and work activities and willingness to report for work during an influenza pandemic. Data were collected through a quantitative Web-based survey sent to health care workers on the island of Montreal. Respondents were asked about their perception of various risks to obtain index measures of risk perception. A multinomial logit model was applied for the probability estimations, and a factor analysis was conducted to compute risk perception indexes (scores). Risk perception associated with personal and work activities is a significant predictor of intended presence at work during an influenza pandemic. This means that correcting perceptual biases should be a public policy concern. These results have not been previously reported in the literature. Many organizational variables are also significant.

Risks doi: 10.3390/risks6010007

Authors: Noemi Nava Tiziana Di Matteo Tomaso Aste

We introduce a multistep-ahead forecasting methodology that combines empirical mode decomposition (EMD) and support vector regression (SVR). This methodology is based on the idea that the forecasting task is simplified by using as input for SVR the time series decomposed with EMD. The outcomes of this methodology are compared with benchmark models commonly used in the literature. The results demonstrate that the combination of EMD and SVR can outperform benchmark models significantly, predicting the Standard & Poor’s 500 Index from 30 s to 25 min ahead. The high-frequency components better forecast short-term horizons, whereas the low-frequency components better forecast long-term horizons.

Risks doi: 10.3390/risks6010006

Authors: Kam Wat Kam Yuen Wai Li Xueyuan Wu

This paper extends the work of Yuen et al. (2013), who obtained explicit results for the discount-free Gerber–Shiu function for a compound binomial risk model in the presence of delayed claims and a randomized dividend strategy with a zero threshold level. Specifically, we establish a recursion method for computing the Gerber–Shiu expected discounted penalty function, which entails a number of important quantities in ruin theory, within the framework of the compound binomial aggregate claims with delayed by-claims and randomized dividends payable at a non-negative threshold level.

Risks doi: 10.3390/risks6010005

Authors: Jerome Detemple Yerkin Kitapbayev

This paper studies the valuation of real options when the cost of investment jumps at a random time. Three valuation formulas are derived. The first expresses the value of the project in terms of a collection of knockout barrier claims. The second identifies the premium relative to a project with delayed investment right and prices its components. The last one identifies the premium/discount relative to a project with constant cost equal to the post-jump cost and prices its components. All formulas are in closed form. The behavior of optimal investment boundaries and valuation components are examined.

Risks doi: 10.3390/risks6010004

Authors: Albert Cohen

In the nearly thirty years since Hans Buhlmann (Buhlmann (1987)) set out the notion of the Actuary of the Third Kind, the connection between Actuarial Science (AS) and Mathematical Finance (MF) has been continually reinforced. As siblings in the family of Risk Management techniques, practitioners in both fields have learned a great deal from each other. The collection of articles in this volume is contributed by scholars who are not only experts in areas of AS and MF, but who also present diverse perspectives from both industry and academia. Topics from multiple areas, such as Stochastic Modeling, Credit Risk, Monte Carlo Simulation, and Pension Valuation, among others, that were perhaps thought to be the domain of one type of risk manager are shown time and again to have deep value to other areas of risk management as well. The articles in this collection, in my opinion, contribute techniques, ideas, and overviews of tools that specialists in both AS and MF will find useful and interesting to implement in their work. It is also my hope that this collection will inspire future collaboration between those who seek an interdisciplinary approach to risk management.

Risks doi: 10.3390/risks6010003

Authors: Risks Editorial Office

n/a

Risks doi: 10.3390/risks6010002

Authors: Nick Costanzino Michael Curran

We propose a Traffic Light approach to backtesting Expected Shortfall which is completely consistent with, and analogous to, the Traffic Light approach to backtesting VaR (Value at Risk) initially proposed by the Basel Committee on Banking Supervision in their 1996 consultative document Basle Committee on Banking Supervision (1996). The approach relies on the generalized coverage test for Expected Shortfall developed in Costanzino and Curran (2015).
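
For the VaR case, the Basel (1996) traffic-light zones at the 99% level over 250 trading days are fixed exception counts; the Expected Shortfall analogue is what the paper constructs. A sketch of the VaR zone assignment, together with the binomial coverage probability that motivates the green-zone boundary:

```python
import math

def var_traffic_light(n_exceptions):
    """Basel traffic-light zone for 99% VaR exceptions over 250 trading days:
    green for 0-4 exceptions, yellow for 5-9, red for 10 or more."""
    if n_exceptions <= 4:
        return "green"
    if n_exceptions <= 9:
        return "yellow"
    return "red"

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), via the exact sum."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Under correct 99% coverage, 4 or fewer exceptions occur roughly 89% of the
# time, which is how the green zone boundary was chosen.
p_green = binom_cdf(4, 250, 0.01)
```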

Risks doi: 10.3390/risks6010001

Authors: Christian Hipp

Optimal dividend payment under a ruin constraint is a two-objective control problem which—in simple models—can be solved numerically by three essentially different methods. The first is based on a modified Bellman equation and the policy improvement method (see Hipp (2003)). In this paper we use explicit formulas for running allowed ruin probabilities which avoid a complete search and speed up and simplify the computation. The second is also a policy improvement method, but without the use of a dynamic equation (see Hipp (2016)). It is based on closed formulas for first entry probabilities and discount factors for the time until first entry. The third is a new, faster and more intuitive method that uses appropriately chosen barrier levels and a closed formula for the corresponding dividend value. Using the running allowed ruin probabilities, a simple test for admissibility—concerning the ruin constraint—is given. All these methods work for the discrete De Finetti model and are applied in a numerical example. The non-stationary Lagrange multiplier method suggested in Hipp (2016), Section 2.2.2, also yields optimal dividend strategies which differ from those in all other methods, and Lagrange gaps are present here.

Risks doi: 10.3390/risks5040065

Authors: Albert Cohen Nick Costanzino

In this work, we introduce a general framework for incorporating stochastic recovery into structural models. The framework extends the approach to recovery modeling developed in Cohen and Costanzino (2015, 2017) and provides a systematic way to include different recovery processes in a structural credit model. The key observation is that the partial information gap between the firm manager and the market is captured via a distortion of the probability of default. This last feature is computed by what is essentially a Girsanov transformation and reflects the untangling of the recovery process from the default probability. Our framework can be thought of as an extension of Ishizaka and Takaoka (2003) and, in the same spirit as their work, we provide several examples of the framework, including bounded recovery and a jump-to-zero model. One of the nice features of our framework is that, given prices from any one-factor structural model, we provide a systematic way to compute corresponding prices with stochastic recovery. The framework also provides a way to analyze the correlation between Probability of Default (PD) and Loss Given Default (LGD), and the term structure of recovery rates.

Risks doi: 10.3390/risks5040064

Authors: Krzysztof Burnecki Mario Giuricich

We consider the subject of approximating tail probabilities in the general compound renewal process framework, where severity data are assumed to follow a heavy-tailed law (in that only the first moment is assumed to exist). By using the weak convergence of compound renewal processes to α-stable Lévy motion, we derive such weak approximations. Their applicability is then highlighted in the context of an existing, classical, index-linked catastrophe bond pricing model, and in doing so, we specialize these approximations to the case of a compound time-inhomogeneous Poisson process. We emphasize a unique feature of our approximation, in that it only demands finiteness of the first moment of the aggregate loss processes. Finally, a numerical illustration is presented: the behavior of our approximations is compared to both Monte Carlo simulations and first-order single risk loss process approximations, and compares favorably.

Risks doi: 10.3390/risks5040063

Authors: Luca Regis

The aim of the Special Issue is to address some of the main challenges individuals and companies face in managing financial and actuarial risks, when dealing with their investment/retirement or business-related decisions [...]

Risks doi: 10.3390/risks5040062

Authors: Nguyet Nguyen

Future stock prices depend on many internal and external factors that are not easy to evaluate. In this paper, we use the Hidden Markov Model (HMM) to predict the daily stock prices of three actively traded stocks: Apple, Google, and Facebook, based on their historical data. We first use the Akaike information criterion (AIC) and Bayesian information criterion (BIC) to choose the number of states for the HMM. We then use the models to predict close prices of these three stocks using both single observation data and multiple observation data. Finally, we use the predictions as signals for trading these stocks. The criterion tests showed that the HMM with two states worked best among two, three and four states for the three stocks. Our results also demonstrate that the HMM outperformed the naïve method in forecasting stock prices. Moreover, active traders using the HMM obtained a higher return than those using the naïve forecast for the Facebook and Google stocks. The stock price prediction method has a significant impact on stock trading and derivative hedging.
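
The AIC/BIC selection step can be sketched directly from the criteria's definitions. The log-likelihoods and parameter counts below are hypothetical placeholders, not the paper's fitted values:

```python
import math

def aic(log_lik, k):
    """Akaike information criterion (smaller is better)."""
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    """Bayesian information criterion (penalizes parameters more as n grows)."""
    return k * math.log(n) - 2 * log_lik

def n_params(m):
    """Free parameters of a 1-d Gaussian HMM with m states:
    m*(m-1) transition + (m-1) initial + 2*m emission (mean, variance)."""
    return m * (m - 1) + (m - 1) + 2 * m

# Hypothetical fitted log-likelihoods for 2, 3 and 4 states on n observations.
log_liks = {2: -1030.0, 3: -1025.0, 4: -1022.0}
n_obs = 500
best_m = min(log_liks, key=lambda m: bic(log_liks[m], n_params(m), n_obs))
```

With these placeholder numbers the modest likelihood gains of extra states do not offset the parameter penalty, so the two-state model is selected, mirroring the kind of comparison the abstract describes.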

Risks doi: 10.3390/risks5040061

Authors: Peter Carr

Diffusions are widely used in finance due to their tractability. Driftless diffusions are needed to describe ratios of asset prices under a martingale measure. We provide a simple example of a tractable driftless diffusion which also has a bounded state space.

Risks doi: 10.3390/risks5040058

Authors: Arthur Charpentier Arthur David Romuald Elie

In this paper, we investigate the impact of the accident reporting strategy of drivers within a Bonus-Malus system. We exhibit the induced modification of the corresponding class level transition matrix and derive the optimal reporting strategy for rational drivers. The hunger for bonuses induces optimal thresholds under which drivers do not claim their losses. Mathematical properties of the induced class level process are studied. A convergent numerical algorithm is provided for computing such thresholds, and realistic numerical applications are discussed.

Risks doi: 10.3390/risks5040060

Authors: Enrique Calderín-Ojeda Kevin Fergusson Xueyuan Wu

Generalized linear models might not be appropriate when the probability of extreme events is higher than that implied by the normal distribution. Extending the method for estimating the parameters of a double Pareto lognormal distribution (DPLN) in Reed and Jorgensen (2004), we develop an EM algorithm for the heavy-tailed double Pareto lognormal generalized linear model. The DPLN distribution is obtained as a mixture of a lognormal distribution with a double Pareto distribution. In this paper, the associated generalized linear model, with location parameter equal to a linear predictor, is used to model insurance claim amounts for various data sets. The performance is compared with those of the generalized beta (of the second kind) and lognormal distributions.

Risks doi: 10.3390/risks5040059

Authors: Sebastian Fuchs Ruben Schlotter Klaus Schmidt

In the present paper, we study quantile risk measures and their domain. Our starting point is that, for a probability measure Q on the open unit interval and a wide class L_Q of random variables, we define the quantile risk measure ϱ_Q as the map that integrates the quantile function of a random variable in L_Q with respect to Q. The definition of L_Q ensures that ϱ_Q cannot attain the value +∞ and cannot be extended beyond L_Q without losing this property. The notion of a quantile risk measure is a natural generalization of that of a spectral risk measure and provides another view of the distortion risk measures generated by a distribution function on the unit interval. In this general setting, we prove several results on quantile or spectral risk measures and their domain with special consideration of the expected shortfall. We also present a particularly short proof of the subadditivity of expected shortfall.
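
For the special case of expected shortfall, the quantile-integral definition can be checked numerically against the familiar tail-average form. A sketch on simulated Gaussian losses (illustrative data, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
losses = rng.standard_normal(100_000)
alpha = 0.95

# Quantile-integral form: ES_alpha = 1/(1-alpha) * integral_alpha^1 q(u) du,
# approximated by averaging the quantile function on a uniform grid over (alpha, 1).
u = np.linspace(alpha + 1e-6, 1.0 - 1e-6, 2000)
es_integral = np.quantile(losses, u).mean()

# Equivalent tail-average form on the same sample.
var_alpha = np.quantile(losses, alpha)
es_tail = losses[losses >= var_alpha].mean()
```

Here Q is the uniform distribution on (α, 1) rescaled to total mass one, which is exactly the spectral measure that generates expected shortfall.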

Risks doi: 10.3390/risks5040057

Authors: Gaurav Khemka Adam Butt

This paper considers an alternative way of structuring stochastic variables in a dynamic programming framework where the model structure dictates that numerical methods of solution are necessary. Rather than estimating integrals within a Bellman equation using quadrature nodes, we use nodes directly from the underlying data. An example of the application of this approach is presented using individual lifetime financial modelling. The results show that data-driven methods lead to the least losses in result accuracy compared to quadrature and Quasi-Monte Carlo approaches, using historical data as a base. These results hold for both a single stochastic variable and multiple stochastic variables. The results are significant for improving the computational accuracy of lifetime financial models and other models that employ stochastic dynamic programming.

Risks doi: 10.3390/risks5040056

Authors: Mohamed Abdelghani Alexander Melnikov

The paper deals with defaultable markets, one of the main research areas of mathematical finance. It proposes a new approach to the theory of such markets using techniques from the calculus of optional stochastic processes on unusual probability spaces, an approach not presented before. The paper is a foundation paper and contains a number of fundamental results on the modeling of defaultable markets, the pricing and hedging of defaultable claims, and results on the probability of default under such conditions. Moreover, several important examples are presented: a new pricing formula for a defaultable bond and a new pricing formula for a credit default swap. Furthermore, some results on the absence of arbitrage for markets on unusual probability spaces and markets with default are also provided.

Risks doi: 10.3390/risks5040055

Authors: Georges Dionne Sara Malekan

We address the moral hazard problem of securitization using a principal-agent model where the investor is the principal and the lender is the agent. Our model considers structured asset-backed securitization with a credit enhancement (tranching) procedure. We assume that the originator can affect the default probability and the conditional loss distribution. We show that the optimal form of retention must be proportional to the pool default loss even in the absence of systemic risk when the originator can affect the conditional loss given default rate, yet the current regulations propose a constant retention rate.

Risks doi: 10.3390/risks5040054

Authors: Jean-Philippe Boucher Steven Côté Montserrat Guillen

In Pay-As-You-Drive (PAYD) automobile insurance, the premium is fixed based on the distance traveled, while in usage-based insurance (UBI) the driving patterns of the policyholder are also considered. In those schemes, drivers who drive more pay a higher premium compared to those with the same characteristics who drive only occasionally, because the former are more exposed to the risk of accident. In this paper, we analyze the simultaneous effect of the distance traveled and exposure time on the risk of accident by using Generalized Additive Models (GAM). We carry out an empirical application and show that the expected number of claims (1) stabilizes once a certain accumulated distance is reached and (2) is not proportional to the duration of the contract, which contradicts insurance practice. Finally, we propose a rating system that takes into account both exposure time and distance traveled in the premium calculation. We think that this is the trend the automobile insurance market is going to follow with the emergence of telematics data.

Risks doi: 10.3390/risks5040053

Authors: Gareth Peters Rodrigo Targino Mario Wüthrich

The main objective of this work is to develop a detailed step-by-step guide to the development and application of a new class of efficient Monte Carlo methods to solve practically important problems faced by insurers under the new solvency regulations. In particular, a novel Monte Carlo method to calculate capital allocations for a general insurance company is developed, with a focus on coherent capital allocation that is compliant with the Swiss Solvency Test. The data used is based on the balance sheet of a representative stylized company. For each line of business in that company, allocations are calculated for the one-year risk with dependencies based on correlations given by the Swiss Solvency Test. Two different approaches for dealing with parameter uncertainty are discussed and simulation algorithms based on (pseudo-marginal) Sequential Monte Carlo algorithms are described and their efficiency is analysed.
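
The coherent allocation targeted here can be illustrated, in a much-simplified setting, by the Euler allocation under expected shortfall: each line's capital is its expected loss conditional on the portfolio lying in its own tail. A plain Monte Carlo sketch with a Gaussian toy portfolio (the paper itself uses pseudo-marginal Sequential Monte Carlo and Swiss Solvency Test correlations; everything below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative annual losses for three lines of business, correlated Gaussian.
n_sims = 200_000
mean = np.array([10.0, 6.0, 4.0])
cov = np.array([[1.0, 0.3, 0.2],
                [0.3, 1.0, 0.4],
                [0.2, 0.4, 1.0]])
L = rng.multivariate_normal(mean, cov, size=n_sims)
total = L.sum(axis=1)

alpha = 0.99
var_total = np.quantile(total, alpha)
tail = total >= var_total            # scenarios in the portfolio's tail

# Euler allocation under expected shortfall: conditional expectation per line.
allocation = L[tail].mean(axis=0)
es_total = total[tail].mean()
```

By construction the line allocations add up to the total expected shortfall (the "full allocation" property), which is one of the coherence requirements the paper's allocations satisfy.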

]]>Risks doi: 10.3390/risks5040052

Authors: A. Seetharaman Vikas Kumar Sahu A. Saravanan John Rudolph Raj Indu Niranjan

An empirical study was conducted to determine the impact of different types of risk on the performance management of credit rating agencies (CRAs). The different types of risks were classified as operational, market, business, financial, and credit. All these five variables were analysed to ascertain their impact on the performance of CRAs. In addition, apart from identifying the significant variables, the study focused on setting out a structured framework for future research. The five independent variables were tested statistically using structural equation modelling (SEM). The results indicated that market risk, financial risk, and credit risk have a significant impact on the performance of CRAs, whereas operational risk and business risk, though important, do not have a significant influence. This finding has a significant implication for the examination and inter-firm evaluation of CRAs.

]]>Risks doi: 10.3390/risks5030051

Authors: Carolyn W. Chang Jack S. K. Chang

We propose an integrated approach straddling the actuarial science and the mathematical finance approaches to pricing a default-risky catastrophe reinsurance contract. We first apply an incomplete-market version of the no-arbitrage martingale pricing paradigm to price the reinsurance contract as a martingale by a measure change; then we apply risk loading to price in—as in traditional actuarial practice—market imperfections, the underwriting cycle, and other idiosyncratic factors identified in the practitioner and empirical literatures. This integrated approach is theoretically appealing for its merit of factoring risk premiums into the probability measure, and yet practical for being applicable to pricing a contract not traded on financial markets. We numerically study the catastrophe pricing effects and find that the reinsurance contract is more valuable when the catastrophe is more severe and the reinsurer’s default risk is lower because of a stronger balance sheet. We also find that the price is more sensitive to the severity of catastrophes than to their arrival frequency, implying that (re)insurers should focus more on hedging severity than arrival frequency in their risk management programs.

]]>Risks doi: 10.3390/risks5030050

Authors: Silvia Romagnoli Simona Santoro

After the financial crisis, the role of uncertainty in decision-making processes has been widely recognized as a new variable that contributes to shaping interest rates and bond prices. Our aim is to discuss the impact of ambiguity on bond interest rates (yields). Starting from the realistic assumption that investors demand an ambiguity premium depending on the efficacy of government interventions (if any), we arrive at an exponential multi-factor affine model which includes ambiguity, as well as an ambiguous version of the Heath-Jarrow-Morton (HJM) model. As an example, we adopt the realistic economic framework given by Ulrich (2008, 2011) and recover the corresponding ambiguous HJM framework, thus offering a large set of interest rate models enriched with ambiguity. We also give a concrete view of how different simulated ambiguity scenarios can influence the economic cycle (through rates and bond prices).

]]>Risks doi: 10.3390/risks5030049

Authors: Daoping Yu Vytaras Brazauskas

Over the last decade, researchers, practitioners, and regulators have had intense debates about how to treat the data collection threshold in operational risk modeling. Several approaches have been employed to fit the loss severity distribution: the empirical approach, the “naive” approach, the shifted approach, and the truncated approach. Since each approach is based on a different set of assumptions, different probability models emerge. Thus, model uncertainty arises. The main objective of this paper is to understand the impact of model uncertainty on the value-at-risk (VaR) estimators. To accomplish that, we take the bank’s perspective and study a single risk. Under this simplified scenario, we can solve the problem analytically (when the underlying distribution is exponential) and show that it uncovers similar patterns among VaR estimates to those based on the simulation approach (when data follow a Lomax distribution). We demonstrate that for a fixed probability distribution, the choice of the truncated approach yields the lowest VaR estimates, which may be viewed as beneficial to the bank, whilst the “naive” and shifted approaches lead to higher estimates of VaR. The advantages and disadvantages of each approach and the probability distributions under study are further investigated using a real data set for legal losses in a business unit (Cruz 2002).
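For the exponential case that the paper solves analytically, the ordering of the three fitting approaches is easy to reproduce numerically. The rate, collection threshold and confidence level below are arbitrary illustrative choices, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(1)

lam, t, p = 1.0, 0.5, 0.99        # true rate, collection threshold, VaR level
x = rng.exponential(1 / lam, 500_000)
obs = x[x > t]                    # only losses above the threshold are recorded

# "Naive": fit an exponential to the observed data as if it were complete.
var_naive = np.mean(obs) * (-np.log(1 - p))

# "Shifted": subtract the threshold, fit, then shift the quantile back.
var_shifted = t + np.mean(obs - t) * (-np.log(1 - p))

# "Truncated": maximum likelihood under left truncation; for the
# exponential, memorylessness gives the same rate estimate, but the
# quantile is taken from the ground-up (untruncated) distribution.
lam_hat = 1 / np.mean(obs - t)
var_trunc = -np.log(1 - p) / lam_hat

print(var_trunc, var_shifted, var_naive)   # truncated < shifted < naive
```

The truncated fit recovers the ground-up distribution and hence yields the lowest quantile, while the naive and shifted fits inflate it, the pattern the abstract describes.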

]]>Risks doi: 10.3390/risks5030048

Authors: Daniel Leonhardt Antony Ware Rudi Zagst

Energy commodities and their futures naturally show cointegrated price movements. However, there is empirical evidence that the prices of futures with different maturities might have, e.g., different jump behaviours in different market situations. Observing commodity futures over time, there is also evidence for different states of the underlying volatility of the futures. In this paper, we therefore allow for cointegration of the term structure within a multi-factor model, which includes seasonality, as well as joint and individual jumps in the price processes of futures with different maturities. The seasonality in this model is realized via a deterministic function, and the jumps are represented with thinned-out compound Poisson processes. The model also includes a regime-switching approach that is modelled through a Markov chain and extends the class of geometric models. We show how the model can be calibrated to empirical data and give some practical applications.
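The regime-switching ingredient of such models can be illustrated in a few lines: a hidden Markov chain selects between volatility states that drive the futures log-price. The two-state transition matrix and volatilities below are hypothetical, and the sketch omits the paper's seasonality, cointegration and jump components.

```python
import numpy as np

rng = np.random.default_rng(8)

# Two volatility regimes for a futures log-price, switching via a Markov chain.
P = np.array([[0.95, 0.05],      # hypothetical transition probabilities
              [0.10, 0.90]])
sigma = np.array([0.01, 0.04])   # low- and high-volatility regimes

n, state = 1_000, 0
log_price = np.zeros(n)
for t in range(1, n):
    state = rng.choice(2, p=P[state])       # regime transition
    log_price[t] = log_price[t - 1] + sigma[state] * rng.standard_normal()

print(log_price[-1])
```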

]]>Risks doi: 10.3390/risks5030047

Authors: Johan Andréasson Pavel Shevchenko

Means-tested pension policies are typical for many countries, and the assessment of policy changes is critical for policy makers. In this paper, we consider the Australian means-tested Age Pension. In 2015, two important changes were made to the popular Allocated Pension accounts: the income means-test is now based on deemed income rather than account withdrawals, and the income-test deduction no longer applies. We examine the implications of the new changes in regard to optimal decisions for consumption, investment and housing. We account for regulatory minimum withdrawal rules that are imposed by regulations on Allocated Pension accounts, as well as the 2017 asset-test rebalancing. The policy changes are considered under a utility-maximising life cycle model solved as an optimal stochastic control problem. We find that the new rules decrease the advantages of planning the consumption in relation to the means-test, while risky asset allocation becomes more sensitive to the asset-test. The difference in optimal drawdown between the old and new policy is only noticeable early in retirement until regulatory minimum withdrawal rates are enforced. However, the amount of extra Age Pension received by many households is now significantly higher due to the new deeming income rules, which benefit wealthier households who previously would not have received Age Pension due to the income-test and minimum withdrawals.

]]>Risks doi: 10.3390/risks5030046

Authors: Knut Aase

We reconsider costs in insurance, and suggest a new type of cost function, which we argue is a natural choice when there are relatively small, but frequent, claims. If a fixed cost is incurred each time a claim is made, we obtain a Pareto optimal deductible even if the cost function does not vary with the indemnity. The classical result says that deductibles appear if and only if costs are variable. This implies that when the claims are relatively small, it is not optimal for the insured to be compensated, since the costs outweigh the benefits and a deductible will naturally occur. When we constrain the contract to contain a cap, a non-trivial deductible is Pareto optimal regardless of the assumptions about the cost structure, which is what is known as an XL-contract.

]]>Risks doi: 10.3390/risks5030044

Authors: Andreas Hermes Stanislaus Maier-Paape

In this paper, the multivariate fractional trading ansatz for money management of Vince (1990) is discussed. In particular, we prove existence and uniqueness of an “optimal f” for the respective optimization problem under reasonable assumptions on the trade return matrix. This result generalizes a similar result for the univariate fractional trading ansatz. Furthermore, it guarantees that the multivariate optimal f solutions can always be found numerically by steepest ascent methods.

]]>Risks doi: 10.3390/risks5030045

Authors: George-Jason Siouris Alex Karagrigoriou

In this work, we focus on volatility estimation, which plays a crucial role in risk analysis and management. In order to improve value at risk (VaR) forecasts, we discuss the concept of the low price effect and introduce the low price correction, which requires no additional parameters and takes into account the prices of the asset rather than its returns. The forecasting quality of the proposed methodology is judged on both the relative number of violations and VaR volatility. For illustrative purposes, a real example from the Athens Stock Exchange is fully explored.

]]>Risks doi: 10.3390/risks5030043

Authors: Dimitrina Dimitrova Zvetan Ignatov Vladimir Kaishev

We derive a closed form expression for the probability that a non-decreasing, pure jump stochastic risk process with the order statistics (OS) property will not exit the strip between two non-decreasing, possibly discontinuous, time-dependent boundaries, within a finite time interval. The result yields new expressions for the ruin probability in the insurance and the dual risk models with dependence between the claim severities or capital gains respectively.

]]>Risks doi: 10.3390/risks5030042

Authors: Dorota Toczydlowska Gareth Peters Man Fung Pavel Shevchenko

In this study, we develop a multi-factor extension of the family of Lee-Carter stochastic mortality models. We build upon the time, period and cohort stochastic model structure to include exogenous observable demographic features that can be used as additional factors to improve model fit and forecasting accuracy. We develop a dimension reduction feature extraction framework which (a) employs projection-based dimensionality reduction techniques; (b) provides robust feature extraction amenable to different structures of demographic data; (c) analyses demographic data sets for patterns of missingness and the impact of such missingness on the feature extraction; (d) introduces a class of multi-factor stochastic mortality models incorporating time, period, cohort and demographic features, developed within a Bayesian state-space estimation framework; and (e) develops an efficient combined Markov chain and filtering framework for sampling the posterior and forecasting. We undertake a detailed case study on Human Mortality Database demographic data from European countries, and we use the extracted features to better explain the term structure of mortality in the UK over time for male and female populations, compared to a pure Lee-Carter stochastic mortality model. The results demonstrate that our feature extraction framework and the resulting multi-factor mortality model improve both the in-sample fit and, importantly, out-of-sample mortality forecasts by a non-trivial gain in performance.

]]>Risks doi: 10.3390/risks5030041

Authors: Sabyasachi Guharay KC Chang Jie Xu

Value-at-Risk (VaR) is a well-accepted risk metric in modern quantitative risk management (QRM). The classical Monte Carlo simulation (MCS) approach, denoted henceforth as the classical approach, assumes the independence of loss severity and loss frequency. In practice, this assumption does not always hold true. Through mathematical analyses, we show that the classical approach is prone to significant biases when the independence assumption is violated. This is also corroborated by studying both simulated and real-world datasets. To overcome the limitations and to more accurately estimate VaR, we develop and implement the following two approaches for VaR estimation: the data-driven partitioning of frequency and severity (DPFS) using clustering analysis, and copula-based parametric modeling of frequency and severity (CPFS). These two approaches are verified using simulation experiments on synthetic data and validated on five publicly available datasets from diverse domains; namely, the financial indices data of Standard &amp; Poor’s 500 and the Dow Jones industrial average, chemical loss spills as tracked by the US Coast Guard, Australian automobile accidents, and US hurricane losses. The classical approach estimates VaR inaccurately for 80% of the simulated data sets and for 60% of the real-world data sets studied in this work. Both the DPFS and the CPFS methodologies attain VaR estimates within 99% bootstrap confidence interval bounds for both simulated and real-world data. We provide a process flowchart for risk practitioners describing the steps for using the DPFS versus the CPFS methodology for VaR estimation in real-world loss datasets.
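A minimal sketch of the classical MCS approach that DPFS and CPFS are benchmarked against: frequency and severity are sampled independently of each other, which is precisely the assumption under scrutiny. All parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Classical approach: annual loss frequency (Poisson) and severities
# (lognormal) are drawn independently -- the assumption the paper shows
# can bias VaR when violated in real data.
n_years = 100_000
freq = rng.poisson(5.0, n_years)
annual = np.array([rng.lognormal(0.0, 1.0, k).sum() for k in freq])

var_999 = np.quantile(annual, 0.999)   # 99.9% one-year VaR
print(annual.mean(), var_999)
```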

]]>Risks doi: 10.3390/risks5030040

Authors: Wolf-Dieter Richter

For evaluating the probabilities of arbitrary random events with respect to a given multivariate probability distribution, specific techniques are of great interest. An important two-dimensional high risk limit law is the Gauss-exponential distribution, whose probabilities can be dealt with based on the Gauss–Laplace law. The latter will be considered here as an element of the newly-introduced family of (p, q)-spherical distributions. Based on a suitably-defined non-Euclidean arc-length measure on (p, q)-circles, we prove geometric and stochastic representations of these distributions and of correspondingly distributed random vectors, respectively. These representations allow the new probability measures to be dealt with in much the same way as elliptically-contoured distributions and more general homogeneous star-shaped ones. This is demonstrated by the generalization of the Box–Muller simulation method. In passing, we prove an extension of the sector and circle number functions.
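The generalized Box–Muller method for (p, q)-spherical laws is the paper's contribution; for orientation, the classical method it extends separates a radial and an angular component, as in this minimal numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

# Classical Box-Muller: two uniforms -> two independent standard normals.
u1, u2 = rng.random(100_000), rng.random(100_000)
r = np.sqrt(-2.0 * np.log(u1))       # radial part
z1 = r * np.cos(2 * np.pi * u2)      # angular part, first coordinate
z2 = r * np.sin(2 * np.pi * u2)      # angular part, second coordinate

print(z1.mean(), z1.std())
```

The generalization in the paper replaces the Euclidean radial/angular decomposition with one based on the non-Euclidean arc-length measure on (p, q)-circles.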

]]>Risks doi: 10.3390/risks5030039

Authors: Mathias Lindholm Filip Lindskog Felix Wahl

This paper provides a complete program for the valuation of aggregate non-life insurance liability cash flows based on claims triangle data. The valuation is fully consistent with the principle of valuation by considering the costs associated with a transfer of the liability to a so-called reference undertaking subject to capital requirements throughout the runoff of the liability cash flow. The valuation program includes complete details on parameter estimation, bias correction and conservative estimation of the value of the liability under partial information. The latter is based on a new approach to the estimation of mean squared error of claims reserve prediction.

]]>Risks doi: 10.3390/risks5030038

Authors: Matthias Fischer Daniel Kraus Marius Pfeuffer Claudia Czado

Measuring interdependence between probabilities of default (PDs) in different industry sectors of an economy plays a crucial role in financial stress testing. Thereby, regression approaches may be employed to model the impact of stressed industry sectors as covariates on other response sectors. We identify vine copula based quantile regression as an eligible tool for conducting such stress tests as this method has good robustness properties, takes into account potential nonlinearities of conditional quantile functions and ensures that no quantile crossing effects occur. We illustrate its performance by a data set of sector specific PDs for the German economy. Empirical results are provided for a rough and a fine-grained industry sector classification scheme. Amongst others, we confirm that a stressed automobile industry has a severe impact on the German economy as a whole at different quantile levels whereas, e.g., for a stressed financial sector the impact is rather moderate. Moreover, the vine copula based quantile regression approach is benchmarked against both classical linear quantile regression and expectile regression in order to illustrate its methodological effectiveness in the scenarios evaluated.

]]>Risks doi: 10.3390/risks5030036

Authors: Hirbod Assa Nikolay Gospodinov

This paper proposes a model-free approach to hedging and pricing in the presence of market imperfections such as market incompleteness and frictions. The generality of this framework allows us to conduct an in-depth theoretical analysis of hedging strategies with a wide family of risk measures and pricing rules, and study the conditions under which the hedging problem admits a solution and pricing is possible. The practical implications of our proposed theoretical approach are illustrated with an application on hedging economic risk.

]]>Risks doi: 10.3390/risks5030037

Authors: John Fry Andrew Brint

In this paper we develop a well-established financial model to investigate whether bubbles were present in opinion polls and betting markets prior to the UK’s vote on EU membership on 23 June 2016. The importance of our contribution is threefold. Firstly, our continuous-time model allows for irregularly spaced time series—a common feature of polling data. Secondly, we build on qualitative comparisons that are often made between market cycles and voting patterns. Thirdly, our approach is theoretically elegant. Thus, where bubbles are found we suggest a suitable adjustment. We find evidence of bubbles in polling data. This suggests they systematically over-estimate the proportion voting for remain. In contrast, bookmakers’ odds appear to show none of this bubble-like over-confidence. However, implied probabilities from bookmakers’ odds appear remarkably unresponsive to polling data that nonetheless indicates a close-fought vote.

]]>Risks doi: 10.3390/risks5030034

Authors: Carlo Maccheroni Samuel Nocito

This work proposes a backtesting analysis that compares the Lee–Carter and the Cairns–Blake–Dowd mortality models, employing Italian data. The mortality data come from the Italian National Statistics Institute (ISTAT) database and span the period 1975–2014, over which we computed back-projections evaluating the performances of the models against real data. We propose three different backtest approaches, evaluating the goodness of short-run forecasts versus medium-length ones. We find that neither model was able to capture the mortality improvement shock observed for the male population over the analysed period. Moreover, the results suggest that CBD forecasts are reliable mainly for ages above 75, and that LC forecasts are generally more accurate for these data.
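The standard Lee–Carter fit underlying such a backtest, row-mean age effects plus the leading singular pair of the centred log-mortality matrix, can be sketched on synthetic data (the surface below is simulated, not the ISTAT data):

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy log-mortality surface with Lee-Carter structure:
# log m(x,t) = a_x + b_x * k_t + noise   (all values synthetic).
ages, years = 10, 30
ax = np.linspace(-6, -2, ages)           # age pattern
bx = np.full(ages, 1 / ages)             # age sensitivities, sum to 1
kt = -np.linspace(0.0, 5.0, years)       # declining mortality index
log_m = ax[:, None] + np.outer(bx, kt) + rng.normal(0, 0.01, (ages, years))

# Lee-Carter fit: a_x as row means, (b_x, k_t) from the leading singular
# pair of the centred matrix, with the identifiability constraint sum(b_x)=1.
ax_hat = log_m.mean(axis=1)
u, s, vt = np.linalg.svd(log_m - ax_hat[:, None], full_matrices=False)
bx_hat = u[:, 0] / u[:, 0].sum()
kt_hat = s[0] * vt[0] * u[:, 0].sum()

# The fitted index recovers the (centred) true index almost exactly.
print(np.corrcoef(kt_hat, kt)[0, 1])
```

A backtest of the kind described above then extrapolates `kt_hat` (typically as a random walk with drift) from a truncated fitting window and compares the projected rates with the held-out years.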

]]>Risks doi: 10.3390/risks5030035

Authors: Iain Clark Saeed Amen

Much of the debate around a potential British exit (Brexit) from the European Union has centred on the potential macroeconomic impact. In this paper, we instead focus on understanding market expectations for price action around the Brexit referendum date. Extracting implied distributions from the GBPUSD option volatility surface, we originally estimated, based on our visual observation of implied probability densities available up to 13 June 2016, that the market expected that a vote to leave could result in a move in the GBPUSD exchange rate from 1.4390 (spot reference on 10 June 2016) down to a range in 1.10 to 1.30, i.e., a 10–25% decline—very probably with highly volatile price action. To quantify this more objectively, we construct a mixture model corresponding to two scenarios for the GBPUSD exchange rate after the referendum vote, one scenario for “remain” and one for “leave”. Calibrating this model to four months of market data, from 24 February to 22 June 2016, we find that a “leave” vote was associated with a predicted devaluation of the British pound to approximately 1.37 USD per GBP, a 4.5% devaluation, and quite consistent with the observed post-referendum exchange rate move down from 1.4877 to 1.3622. We contrast the behaviour of the GBPUSD option market in the run-up to the Brexit vote with that during the 2014 Scottish Independence referendum, finding the potential impact of Brexit to be considerably higher.
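The calibration itself uses four months of option data, but the basic consistency condition of a two-scenario mixture is simple: the probability-weighted scenario mean must match the forward rate. The 1.37 "leave" level comes from the abstract; the "remain" level and the implied leave probability below are hypothetical placeholders.

```python
# Two-scenario mixture for the post-referendum GBPUSD rate.
p_leave = 0.35                    # hypothetical market-implied leave probability
s_remain, s_leave = 1.48, 1.37    # "remain" level hypothetical; "leave" from the paper

# A calibrated mixture must have a probability-weighted mean consistent
# with the prevailing forward rate.
implied_forward = p_leave * s_leave + (1 - p_leave) * s_remain
print(round(implied_forward, 4))   # → 1.4415
```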

]]>Risks doi: 10.3390/risks5030033

Authors: Marta Ferreira Helena Ferreira

Pareto processes are suitable to model stationary heavy-tailed data. Here, we consider the auto-regressive Gaver–Lewis Pareto Process and address a study of the tail behavior. We characterize its local and long-range dependence. We will see that consecutive observations are asymptotically tail independent, a feature that is often misevaluated by the most common extremal models and with strong relevance to the tail inference. This also reveals clustering at “penultimate” levels. Linear correlation may not exist in a heavy-tailed context and an alternative diagnostic tool will be presented. The derived properties relate to the auto-regressive parameter of the process and will provide estimators. A comparison of the proposals is conducted through simulation and an application to a real dataset illustrates the procedure.

]]>Risks doi: 10.3390/risks5020032

Authors: Robert Rietz Evan Cronick Shelby Mathers Matt Pollie

This paper examines the effect of gainsharing provisions on the selection of a discount rate for a defined benefit pension plan. The paper uses a traditional actuarial approach of discounting liabilities using the expected return of the associated pension fund. A stochastic Excel model was developed to simulate the effect of varying investment returns on a pension fund with four asset classes. Lognormal distributions were fitted to historical returns of two of the asset classes: large company stocks and long-term government bonds. A third lognormal distribution was designed to represent the investment returns of alternative investments, such as real estate and private equity. The fourth asset class represented short-term cash investments, and that return was held constant. The following variables were analyzed to determine the relative impact of gainsharing on the selection of a discount rate: hurdle rate, percentage of gainsharing, actuarial asset method smoothing period, and variations in asset allocation. A 50% gainsharing feature can reduce the discount rate for a defined benefit pension plan by 0.5% to more than 2.5%, depending on the gainsharing design and asset allocation.
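Why gainsharing lowers the supportable discount rate can be sketched in a few lines: sharing returns above a hurdle truncates the upside the fund retains, reducing its expected return. The return distribution, hurdle and 50% share below are hypothetical stand-ins for the paper's Excel model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical annual fund returns (lognormal gross returns, ~8% mean).
n = 100_000
returns = rng.lognormal(mean=0.07, sigma=0.12, size=n) - 1.0

hurdle, share = 0.06, 0.5         # hurdle rate, 50% gainsharing
# The plan keeps returns up to the hurdle; gains above it are shared
# with members, so the plan retains only (1 - share) of the excess.
kept = np.where(returns > hurdle,
                hurdle + (1 - share) * (returns - hurdle),
                returns)

print(returns.mean(), kept.mean())   # gainsharing lowers the expected return
```

The gap between the two means is the mechanism by which a gainsharing feature forces a lower discount rate under the expected-return approach.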

]]>Risks doi: 10.3390/risks5020031

Authors: Stephen Mildenhall

The literature on capital allocation is biased towards an asset modeling framework rather than an actuarial framework. The asset modeling framework leads to the proliferation of inappropriate assumptions about the effect of insurance line of business growth on aggregate loss distributions. This paper explains why an actuarial analog of the asset volume/return model should be based on a Lévy process. It discusses the impact of different loss models on marginal capital allocations. It shows that Lévy process-based models provide a better fit to the US statutory accounting data, and identifies how parameter risk scales with volume and increases with time. Finally, it shows the data suggest a surprising result regarding the form of insurance parameter risk.

]]>Risks doi: 10.3390/risks5020030

Authors: Nataliya Chukhrova Arne Johannssen

This paper gives a detailed overview of the current state of research in relation to the use of state space models and the Kalman filter in the field of stochastic claims reserving. Most of these state space representations are matrix-based, which complicates their applications. Therefore, to facilitate the implementation of state space models in practice, we present a scalar state space model for cumulative payments, which is an extension of the well-known chain ladder (CL) method. The presented model is distribution-free, forms a basis for determining the entire unobservable lower and upper run-off triangles and can easily be applied in practice using the Kalman filter for prediction, filtering and smoothing of cumulative payments. In addition, the model provides an easy way to find outliers in the data and to determine outlier effects. Finally, an empirical comparison of the scalar state space model, promising prior state space models and some popular stochastic claims reserving methods is performed.
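The appeal of a scalar state space model is that the Kalman recursions reduce to simple scalar arithmetic. The generic scalar filter below illustrates the predict/update cycle; it is not the paper's exact chain ladder extension, and the observations and variances are made up.

```python
import numpy as np

def kalman_scalar(y, a, q, r, m0, p0):
    """One-dimensional Kalman filter for the model
       state: x_t = a * x_{t-1} + w_t,  w_t ~ N(0, q)
       obs:   y_t = x_t + v_t,          v_t ~ N(0, r)."""
    m, p = m0, p0
    filtered = []
    for obs in y:
        m, p = a * m, a * a * p + q      # predict
        k = p / (p + r)                  # Kalman gain
        m = m + k * (obs - m)            # update mean
        p = (1 - k) * p                  # update variance
        filtered.append(m)
    return np.array(filtered)

y = np.array([10.0, 12.0, 11.5, 13.0])   # hypothetical cumulative payments
filt = kalman_scalar(y, a=1.0, q=1.0, r=1.0, m0=10.0, p0=1.0)
print(filt)
```

Prediction of the unobserved lower triangle then amounts to iterating the predict step without updates.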

]]>Risks doi: 10.3390/risks5020029

Authors: Susanna Levantesi Massimiliano Menzietti

Longevity risk constitutes an important risk factor for life insurance companies, and it can be managed through longevity-linked securities. The market of longevity-linked securities is at present far from complete and does not allow finding a unique pricing measure. We propose a method to estimate the maximum market price of longevity risk depending on the risk margin implicit within the calculation of the technical provisions as defined by Solvency II. The maximum price of longevity risk is determined for a survivor forward (S-forward), an agreement between two counterparties to exchange at maturity a fixed survival-dependent payment for a payment depending on the realized survival of a given cohort of individuals. The maximum prices determined for the S-forwards can be used to price other longevity-linked securities, such as q-forwards. The Cairns–Blake–Dowd model is used to represent the evolution of mortality over time; combined with the information on the risk margin, it enables us to calculate upper limits for the risk-adjusted survival probabilities, the market price of longevity risk and the S-forward prices. Numerical results can be extended to the pricing of other longevity-linked securities.
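The S-forward cash flow itself is simple to state: at maturity, the fixed-rate payer pays a survival rate fixed at inception (the risk-adjusted survival probability) and receives the realized survival of the cohort. All numbers below are hypothetical placeholders.

```python
# S-forward payoff sketch (per unit of notional, settled at maturity T).
fixed_rate = 0.85        # risk-adjusted survival probability agreed at t = 0
realized = 0.88          # realized survival rate of the cohort at T
notional = 1_000_000
discount = 0.90          # discount factor from T back to t = 0

payoff = notional * (realized - fixed_rate)   # to the fixed-rate payer
value_at_t0 = discount * payoff
print(payoff, value_at_t0)
```

Raising the fixed rate above the best-estimate survival probability is how the market price of longevity risk enters, and the Solvency II risk margin caps how far it can be raised.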

]]>Risks doi: 10.3390/risks5020028

Authors: Jing Liu Huan Zhang

Motivated by the EU Solvency II Directive, we study the one-year ruin probability of an insurer who makes investments and hence faces both insurance and financial risks. Over a time horizon of one year, the insurance risk is quantified as a nonnegative random variable X equal to the aggregate amount of claims, and the financial risk as a d-dimensional random vector Y consisting of stochastic discount factors of the d financial assets invested. To capture both heavy tails and asymptotic dependence of Y in an integrated manner, we assume that Y follows a standard multivariate regular variation (MRV) structure. As main results, we derive exact asymptotic estimates for the one-year ruin probability for the following cases: (i) X and Y are independent with X of Fréchet type; (ii) X and Y are independent with X of Gumbel type; (iii) X and Y jointly possess a standard MRV structure; (iv) X and Y jointly possess a nonstandard MRV structure.
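The paper's results are exact asymptotic estimates; as a crude benchmark, the one-year ruin probability in this setting can also be approximated by brute-force simulation. Below, a heavy-tailed (Pareto-type) claim amount X and a single lognormal discount factor Y, taken independent (the flavour of cases (i)–(ii)); all parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

n = 1_000_000
x = 10.0 * rng.pareto(2.0, n)     # aggregate claims: regularly varying tail
y = rng.lognormal(0.0, 0.3, n)    # stochastic discount factor of the asset

capital = 100.0
# Ruin over one year: discounted claims exceed the initial capital.
ruin_prob = np.mean(x * y > capital)
print(ruin_prob)
```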

]]>Risks doi: 10.3390/risks5020027

Authors: Michael Metel Traian A. Pirvu Julian Wong

We prove that the Omega measure, which considers all moments when assessing portfolio performance, is equivalent to the widely used Sharpe ratio under jointly elliptic distributions of returns. Portfolio optimization of the Sharpe ratio is then explored, with an active-set algorithm presented for markets prohibiting short sales. When asymmetric returns are considered, we show that the Omega measure and Sharpe ratio lead to different optimal portfolios.
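Both performance measures are one-liners, and the claimed equivalence under elliptical returns can be checked empirically: with normal returns of equal volatility, Omega and Sharpe rank the two streams identically. The return parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

def omega(r, threshold):
    """Omega ratio: expected gains over expected losses about a threshold."""
    return np.maximum(r - threshold, 0).mean() / np.maximum(threshold - r, 0).mean()

def sharpe(r, rf):
    return (r.mean() - rf) / r.std()

# Two elliptical (normal) return streams with equal volatility.
a = rng.normal(0.06, 0.10, 200_000)
b = rng.normal(0.04, 0.10, 200_000)

print(sharpe(a, 0.0) > sharpe(b, 0.0), omega(a, 0.0) > omega(b, 0.0))  # → True True
```

With asymmetric (e.g. skewed) returns the two rankings can diverge, which is the case where the paper finds different optimal portfolios.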

]]>Risks doi: 10.3390/risks5020026

Authors: Albert Cohen Nick Costanzino

Building on recent work incorporating recovery risk into structural models by Cohen &amp; Costanzino (2015), we consider the Black-Cox model with an added recovery risk driver. The recovery risk driver arises naturally in the context of imperfect information implicit in the structural framework. This leads to a two-factor structural model we call the Stochastic Recovery Black-Cox model, whereby the asset risk driver At defines the default trigger and the recovery risk driver Rt defines the amount recovered in the event of default. We then price zero-coupon bonds and credit default swaps under the Stochastic Recovery Black-Cox model. Finally, we compare our results with the classic Black-Cox model, give explicit expressions for the recovery risk premium in the Stochastic Recovery Black-Cox model, and detail how the introduction of separate but correlated risk drivers leads to a decoupling of the default and recovery risk premiums in the credit spread. We conclude this work by computing the effect of adding coupons that are paid continuously until default, and price perpetual (consol bonds) in our two-factor firm value model, extending calculations in the seminal paper by Leland (1994).

]]>Risks doi: 10.3390/risks5020025

Authors: Koon-Shing Kwong Yiu-Kuen Tse Wai-Sum Chan

Building a social security system to ensure Singapore residents have peace of mind in funding for retirement has been at the top of Singapore government’s policy agenda over the last decade. Implementation of the Lifelong Income For the Elderly (LIFE) scheme in 2009 clearly shows that the government spares no effort in improving its pension scheme to boost its residents’ income after retirement. Despite the recent modifications to the LIFE scheme, Singapore residents must still choose between two plans: the Standard and Basic plans. To enhance the flexibility of the LIFE scheme with further streamlining of its fund management, we propose some plan modifications such that scheme members do not face a dichotomy of plan choices. Instead, they select two age parameters: the Payout Age and the Life-annuity Age. This paper discusses the actuarial analysis for determining members’ payouts and bequests based on the proposed age parameters. We analyze the net cash receipts and Internal Rate of Return (IRR) for various plan-parameter configurations. This information helps members make their plan choices. To address cost-of-living increases we propose to extend the plan to accommodate an annual step-up of monthly payouts. By deferring the Payout Age from 65 to 68, members can enjoy an annual increase of about 2% of the payouts for the same first-year monthly benefits.

]]>Risks doi: 10.3390/risks5020024

Authors: Gabriella Piscopo Marina Resta

We apply spectral biclustering to mortality datasets in order to capture three relevant aspects: the period, the age and the cohort effects, as their knowledge is a key factor in understanding actuarial liabilities of private life insurance companies, pension funds as well as national pension systems. While standard techniques generally fail to capture the cohort effect, on the contrary, biclustering methods seem particularly suitable for this aim. We run an exploratory analysis on the mortality data of Italy, with ages representing genes, and years as conditions: by comparison between conventional hierarchical clustering and spectral biclustering, we observe that the latter offers more meaningful results.

]]>Risks doi: 10.3390/risks5020023

Authors: Jonas Hirz Uwe Schmock Pavel Shevchenko

We introduce an additive stochastic mortality model which allows joint modelling and forecasting of underlying death causes. Parameter families for mortality trends can be chosen freely. As model settings become high dimensional, Markov chain Monte Carlo (MCMC) is used for parameter estimation. We then link our proposed model to an extended version of the credit risk model CreditRisk+. This allows exact risk aggregation via an efficient numerically stable Panjer recursion algorithm and provides numerous applications in credit, life insurance and annuity portfolios to derive P&amp;L distributions. Furthermore, the model allows exact (without Monte Carlo simulation error) calculation of risk measures and their sensitivities with respect to model parameters for P&amp;L distributions such as value-at-risk and expected shortfall. Numerous examples, including an application to partial internal models under Solvency II, using Austrian and Australian data are shown.
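The Panjer recursion at the heart of the exact aggregation is short enough to sketch. Below is a generic compound Poisson version with a toy severity distribution on a lattice, not the extended CreditRisk+ model itself.

```python
import math

def panjer_poisson(lam, sev_pmf, s_max):
    """Panjer recursion for S = X_1 + ... + X_N, N ~ Poisson(lam),
    severities on the lattice 1, 2, ... with pmf sev_pmf (sev_pmf[0] = 0).
    Returns [P(S = 0), ..., P(S = s_max)]."""
    g = [math.exp(-lam)]                    # P(S = 0) = P(N = 0)
    for s in range(1, s_max + 1):
        acc = 0.0
        for j in range(1, min(s, len(sev_pmf) - 1) + 1):
            acc += j * sev_pmf[j] * g[s - j]
        g.append(lam / s * acc)
    return g

# Toy severity: a claim is 1 or 2 units with equal probability.
g = panjer_poisson(lam=2.0, sev_pmf=[0.0, 0.5, 0.5], s_max=20)
print(sum(g))      # total mass up to 20 units, ≈ 1
```

Because the recursion is exact given its inputs, risk measures such as value-at-risk and expected shortfall can be read off the resulting probability mass function without Monte Carlo simulation error, which is the property the abstract highlights.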

]]>Risks doi: 10.3390/risks5020022

Authors: Zaghum Umar Tahir Suleman

This paper analyses the interdependence between Islamic and conventional equities by taking into consideration the asymmetric effect of return and volatility transmission. We empirically investigate the decoupling hypothesis of Islamic and conventional equities and the potential contagion effect. We analyse the intra-market and inter-market spillover among Islamic and conventional equities across three major markets: the USA, the United Kingdom and Japan. Our sample period ranges from 1996 to 2015. In addition, we segregate our sample period into three sub-periods covering prior to the 2007 financial crisis, the crisis period and the post-crisis period. We find weak support for the decoupling hypothesis during the post-crisis period.

]]>Risks doi: 10.3390/risks5020021

Authors: Yuan Gao Han Shang

This study considers the forecasting of mortality rates in multiple populations. We propose a model that combines mortality forecasting and functional data analysis (FDA). Under the FDA framework, the mortality curve of each year is assumed to be a smooth function of age. As with most functional time series forecasting models, we rely on functional principal component analysis (FPCA) for dimension reduction and further choose a vector error correction model (VECM) to jointly forecast mortality rates in multiple populations. This model incorporates the merits of existing models in that it excludes some of the inherent randomness with the nonparametric smoothing from FDA, and also utilizes the correlation structures between the populations with the use of VECM in mortality models. A nonparametric bootstrap method is also introduced to construct interval forecasts. The usefulness of this model is demonstrated through a series of simulation studies and applications to the age- and sex-specific mortality rates in Switzerland and the Czech Republic. The point forecast errors of several forecasting methods are compared and interval scores are used to evaluate and compare the interval forecasts. Our model provides improved forecast accuracy in most cases.

]]>Risks doi: 10.3390/risks5010020

Authors: Jinhui Zhang Sachi Purcal Jiaqin Wei

We consider the financial planning problem of a retiree wishing to enter a retirement village at a future uncertain date. The date of entry is determined by the retiree’s utility and bequest maximisation problem within the context of uncertain future health states. In addition, the retiree must choose optimal consumption, investment, bequest and purchase of insurance products prior to their full annuitisation on entry to the retirement village. A hyperbolic absolute risk-aversion (HARA) utility function is used to allow necessary consumption for basic living and medical costs. The retirement village will typically require an initial deposit upon entry. This threshold wealth requirement leads to exercising the replication of an American put option at the uncertain stopping time. From our numerical results, active insurance and annuity markets are shown to be a critical aspect in retirement planning.

]]>Risks doi: 10.3390/risks5010019

Authors: Changyu Liu Michael Sherris

Designing post retirement benefits requires access to appropriate investment instruments to manage the interest rate and longevity risks. Post retirement benefits are increasingly taken as a form of income benefit, either as a pension or an annuity. Pension funds and life insurers offer annuities generating long term liabilities linked to longevity. Risk management of life annuity portfolios for interest rate risks is well developed but the incorporation of longevity risk has received limited attention. We develop an immunization approach and a delta-gamma based hedging approach to manage the risks of adverse portfolio surplus using stochastic models for mortality and interest rates. We compare and assess the immunization and hedge effectiveness of fixed-income coupon bonds, annuity bonds, as well as longevity bonds, using simulations of the portfolio surplus for an annuity portfolio and a range of risk measures including value-at-risk. We show how fixed-income annuity bonds can more effectively match cash flows and provide additional hedge effectiveness over coupon bonds. Longevity bonds, including deferred longevity bonds, reduce risk significantly compared to coupon and annuity bonds, reflecting the long duration of the typical life annuity and the exposure to longevity risk. Longevity bonds are shown to be effective in immunizing surplus over short and long horizons. Delta gamma hedging is generally only effective over short horizons. The results of the paper have implications for how providers of post retirement income benefit streams can manage risks in demanding conditions where innovation in investment markets can support new products and increase the product range.

]]>Risks doi: 10.3390/risks5010017

Authors: Gaurav Khemka Steven Roberts Timothy Higgins

We explore the extent to which claim incidence in Disability Income Insurance (DII) is affected by changes in the unemployment rate in Australia. Using data from 1986 to 2001, we fit a hurdle model to explore the presence and magnitude of the effect of changes in unemployment rate on the incidence of DII claims, controlling for policy holder characteristics and seasonality. We find a clear positive association between unemployment and claim incidence, and we explore this further by gender, age, deferment period, and occupation. A multinomial logistic regression model is fitted to cause of claim data in order to explore the relationship further, and it is shown that the proportion of claims due to accident increases markedly with rising unemployment. The results suggest that during periods of rising unemployment, insurers may face increased claims from policy holders with shorter deferment periods for white-collar workers and for medium and heavy manual workers. Our findings indicate that moral hazard may have a material impact on DII claim incidence and insurer business in periods of declining economic conditions.

]]>Risks doi: 10.3390/risks5010018

Authors: Silvio Aldrovandi Petko Kusev Tetiana Hill Ivo Vlaev

Previous research has shown that risk preferences are sensitive to the financial domain in which they are framed. In the present paper, we explore whether the effect of negative priming on risk taking is moderated by financial context. A total of 120 participants completed questionnaires in which risky choices were framed in six different financial scenarios. Half of the participants were allocated to a negative priming condition. Negative priming reduced risk-seeking behaviour compared to a neutral condition. However, this effect was confined to non-experiential scenarios (i.e., gamble to win, possibility to lose) and did not extend to ‘real world’ financial products (e.g., pension provision). The results call into question the generalisability of priming effects across different financial contexts.

]]>Risks doi: 10.3390/risks5010016

Authors: Syazreen Shair Sachi Purcal Nick Parr

Coherent models were developed recently to forecast the mortality of two or more sub-populations simultaneously and to ensure long-term non-divergent mortality forecasts of sub-populations. This paper evaluates the forecast accuracy of two recently-published coherent mortality models, the Poisson common factor and the product-ratio functional models. These models are compared to each other and the corresponding independent models, as well as the original Lee–Carter model. All models are applied to age-gender-specific mortality data for Australia and Malaysia and age-gender-ethnicity-specific data for Malaysia. The out-of-sample forecast error of log death rates, male-to-female death rate ratios and life expectancy at birth from each model are compared and examined across groups. The results show that, in terms of overall accuracy, the forecasts of both coherent models are consistently more accurate than those of the independent models for Australia and for Malaysia, but the relative performance differs by forecast horizon. Although the product-ratio functional model outperforms the Poisson common factor model for Australia, the Poisson common factor is more accurate for Malaysia. For the ethnic groups application, ethnic-coherence gives better results than gender-coherence. The results provide evidence that coherent models are preferable to independent models for forecasting sub-populations’ mortality.

]]>Risks doi: 10.3390/risks5010014

Authors: Xing-Fang Huang Ting Zhang Yang Yang Tao Jiang

This paper considers a dependent discrete-time risk model, in which the insurance risks are represented by a sequence of independent and identically distributed real-valued random variables with a common Gamma-like tailed distribution; the financial risks are denoted by another sequence of independent and identically distributed positive random variables with a finite upper endpoint, but a general dependence structure exists between each pair of the insurance risks and the financial risks. Following the work of Yang and Yuen in 2016, we derive some asymptotic relations for the finite-time and infinite-time ruin probabilities. As a complement, we demonstrate our obtained result through a Crude Monte Carlo (CMC) simulation with asymptotics.
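A Crude Monte Carlo check of a finite-time ruin probability in such a discrete-time model can be sketched as below. The distributions are stand-ins chosen for illustration (a bounded return factor for the financial risk and a centred gamma net loss for the insurance risk), not the paper's exact specification.

```python
import random

def ruin_prob_cmc(u, horizon, n_paths=20000, seed=1):
    """Crude Monte Carlo estimate of the finite-time ruin probability in the
    discrete-time recursion U_n = U_{n-1} * R_n - X_n, with ruin if U_n < 0.
    Illustrative choices: R_n uniform on [0.95, 1.05] (financial risk with a
    finite upper endpoint), X_n = Gamma(2, 1) - 2 (centred insurance net loss)."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        surplus = u
        for _ in range(horizon):
            r = rng.uniform(0.95, 1.05)
            x = rng.gammavariate(2.0, 1.0) - 2.0
            surplus = surplus * r - x
            if surplus < 0.0:
                ruined += 1
                break
    return ruined / n_paths
```

Comparing such estimates against the derived asymptotic relations for increasing initial surplus u is the kind of check the abstract alludes to.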

]]>Risks doi: 10.3390/risks5010015

Authors: Ourania Theodosiadou Sotiris Skaperas George Tsaklidis

In the first part of the paper, the positive and negative jumps of NASDAQ daily (log-) returns and three of its stocks are estimated based on the methodology presented by Theodosiadou et al. 2016, where jumps are assumed to be hidden random variables. For that reason, the use of stochastic state space models in discrete time is adopted. The daily return is expressed as the difference between the two-sided jumps under noise inclusion, and the recursive Kalman filter algorithm is used in order to estimate them. Since the estimated jumps have to be non-negative, the associated pdf truncation method, according to the non-negativity constraints, is applied. In order to overcome the resulting underestimation of the empirical time series, a scaling procedure follows the stage of truncation. In the second part of the paper, a nonparametric change point analysis concerning the (variance–) covariance is applied to the NASDAQ return time series, as well as to the estimated bivariate jump time series derived after the scaling procedure and to each jump component separately. A similar change point analysis is applied to the three other stocks of the NASDAQ index.
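The paper's hidden state is the pair of two-sided jumps; as a minimal illustration of the Kalman recursion itself, here is the scalar local-level case (the model and parameters below are illustrative, not the bivariate jump model of the paper).

```python
def kalman_local_level(y, q, r, m0=0.0, p0=1.0):
    """Scalar Kalman filter for a local-level model:
        state:       x_t = x_{t-1} + w_t,  w_t ~ N(0, q)
        observation: y_t = x_t + v_t,      v_t ~ N(0, r)
    Returns the sequence of filtered state means."""
    m, p = m0, p0
    means = []
    for obs in y:
        p_pred = p + q                  # predict: variance grows by q
        k = p_pred / (p_pred + r)       # Kalman gain
        m = m + k * (obs - m)           # update mean with the innovation
        p = (1.0 - k) * p_pred         # update variance
        means.append(m)
    return means
```

In the bivariate jump setting the same predict/update steps apply with matrix-valued gain and covariance, followed by the pdf truncation step enforcing non-negativity of the estimated jumps.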

]]>Risks doi: 10.3390/risks5010013

Authors: Jan Natolski Ralf Werner

The replicating portfolio approach is a well-established approach carried out by many life insurance companies within their Solvency II framework for the computation of risk capital. In this note, we elaborate on one specific formulation of a replicating portfolio problem. In contrast to the two most popular replication approaches, it does not yield an analytic solution (if, at all, a solution exists and is unique). Further, although convex, the objective function seems to be non-smooth, and hence a numerical solution might be much more demanding than for the two most popular formulations. Especially for the second reason, this formulation did not (yet) receive much attention in practical applications, in contrast to the other two formulations. In the following, we demonstrate that the (potential) non-smoothness can be avoided through an equivalent reformulation as a linear second order cone program (SOCP). This allows for a numerical solution by efficient second order methods like interior point methods or similar. We also show that, under weak assumptions, existence and uniqueness of the optimal solution can be guaranteed. We additionally prove that, under a further similarly weak condition, the fair value of the replicating portfolio equals the fair value of liabilities. Based on these insights, we argue that this unloved stepmother child within the replication problem family indeed represents an equally good formulation for practical purposes.

]]>Risks doi: 10.3390/risks5010012

Authors: Catherine Donnelly

I show that risk-sharing pension plans can reduce some of the shortcomings of defined benefit and defined contributions plans. The risk-sharing pension plan presented aims to improve the stability of benefits paid to generations of members, while allowing them to enjoy the expected advantages of a risky investment strategy. The plan does this by adjusting the investment strategy and benefits in response to a changing funding level, motivated by the with-profits contract proposed by Goecke (2013). He suggests a mean-reverting log reserve (or funding) ratio, where mean reversion occurs through adjustments to the investment strategy and declared bonuses. To measure the robustness of the plan to human factors, I introduce a measurement of disappointment, where disappointment is high when there are many consecutive years over which benefit payments are declining. Another measure introduced is devastation, where devastation occurs when benefit payments are zero. The motivation is that members of a pension plan who are easily disappointed or likely to get no benefit, are more likely to exit the plan. I find that the risk-sharing plan offers more disappointment than a defined contribution plan, but it eliminates the devastation possible in a plan that tries to accumulate contributions at a steadily increasing rate. The proposed risk-sharing plan can give a narrower range of benefits than in a defined contribution plan. Thus it can offer a stable benefit to members without the risk of running out of money.
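The mean-reverting log funding ratio borrowed from Goecke (2013) can be sketched as a discretised Ornstein-Uhlenbeck recursion; the parameters below are illustrative and the benefit/investment adjustment rules of the plan are omitted.

```python
import random

def simulate_log_funding_ratio(x0, mu, kappa, sigma, years, seed=3):
    """One path of a discretised mean-reverting (OU) log funding ratio:
        X_{t+1} = X_t + kappa * (mu - X_t) + sigma * Z_t,   Z_t ~ N(0, 1),
    where kappa is the speed of mean reversion toward the target mu."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(years):
        x += kappa * (mu - x) + sigma * rng.gauss(0.0, 1.0)
        path.append(x)
    return path
```

In the plan itself, mean reversion is achieved not by an exogenous drift but by adjusting the investment strategy and declared benefits in response to the funding level; the recursion above only illustrates the target dynamics.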

]]>Risks doi: 10.3390/risks5010010

Authors: Søren Asmussen Jaakko Lehtomaa

Well-behaved densities are typically log-convex with heavy tails and log-concave with light ones. We discuss a benchmark for distinguishing between the two cases, based on the observation that large values of a sum X1 + X2 occur as the result of a single big jump with heavy tails, whereas X1 and X2 are of equal order of magnitude in the light-tailed case. The method is based on the ratio |X1 − X2|/(X1 + X2), for which sharp asymptotic results are presented, as well as a visual tool for distinguishing between the two cases. The study supplements modern non-parametric density estimation methods where log-concavity plays a main role, as well as heavy-tailed diagnostics such as the mean excess plot.
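The behaviour of the ratio |X1 − X2|/(X1 + X2) conditional on a large sum is easy to explore numerically. A sketch, with Pareto and exponential samplers chosen as illustrative heavy- and light-tailed cases:

```python
import random

def mean_ratio_given_large_sum(sampler, n=20000, q=0.95, seed=7):
    """For iid pairs (X1, X2) drawn from `sampler`, average the ratio
    R = |X1 - X2| / (X1 + X2) over pairs whose sum exceeds its empirical
    q-quantile.  R near 1 suggests a single big jump (heavy tails); R well
    below 1 suggests comparable summands (light tails)."""
    rng = random.Random(seed)
    pairs = [(sampler(rng), sampler(rng)) for _ in range(n)]
    cut = sorted(x1 + x2 for x1, x2 in pairs)[int(q * n)]
    big = [(x1, x2) for x1, x2 in pairs if x1 + x2 > cut]
    return sum(abs(x1 - x2) / (x1 + x2) for x1, x2 in big) / len(big)

heavy = lambda rng: rng.paretovariate(1.5)   # heavy-tailed (Pareto, index 1.5)
light = lambda rng: rng.expovariate(1.0)     # light-tailed (exponential)
```

For iid exponentials, X1 given a large sum s is roughly uniform on (0, s), so the conditional mean ratio is near 1/2; for the Pareto case it sits much closer to 1, consistent with the single-big-jump heuristic.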

]]>Risks doi: 10.3390/risks5010011

Authors: Wenjun Jiang Jiandong Ren Ričardas Zitikis

Optimal forms of reinsurance policies have been studied for a long time in the actuarial literature. Most existing results are from the insurer’s point of view, aiming at maximizing the expected utility or minimizing the risk of the insurer. However, as pointed out by Borch (1969), it is understandable that a reinsurance arrangement that might be very attractive to one party (e.g., the insurer) can be quite unacceptable to the other party (e.g., the reinsurer). In this paper, we follow this point of view and study forms of Pareto-optimal reinsurance policies whereby one party’s risk, measured by its value-at-risk (VaR), cannot be reduced without increasing the VaR of the counter-party in the reinsurance transaction. We show that the Pareto-optimal policies can be determined by minimizing linear combinations of the VaRs of the two parties in the reinsurance transaction. Consequently, we succeed in deriving user-friendly, closed-form, optimal reinsurance policies and their parameter values.
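The paper derives closed-form optima; purely as a crude empirical sketch of the weighted-VaR minimisation idea, the snippet below grid-searches a stop-loss retention on simulated losses with an expected-value premium. None of these modelling choices is the paper's exact setting.

```python
import random

def empirical_var(sample, level):
    """Empirical value-at-risk: the level-quantile of the loss sample."""
    return sorted(sample)[int(level * len(sample))]

def weighted_var_retention(losses, loading, level, weight, grid):
    """Grid search for the stop-loss retention d minimising
    weight * VaR(insurer's retained loss + premium)
      + (1 - weight) * VaR(reinsurer's ceded loss - premium)."""
    best = (None, float("inf"))
    n = len(losses)
    for d in grid:
        ceded = [max(x - d, 0.0) for x in losses]
        premium = (1.0 + loading) * sum(ceded) / n   # expected-value premium (assumption)
        insurer = [min(x, d) + premium for x in losses]
        reinsurer = [c - premium for c in ceded]
        obj = (weight * empirical_var(insurer, level)
               + (1.0 - weight) * empirical_var(reinsurer, level))
        if obj < best[1]:
            best = (d, obj)
    return best

rng = random.Random(11)
losses = [rng.expovariate(1.0) for _ in range(5000)]
grid = [0.5 * k for k in range(11)]
d_star, obj_star = weighted_var_retention(losses, 0.2, 0.95, 0.5, grid)
```

Sweeping the weight from 0 to 1 traces out an empirical Pareto frontier between the two parties' VaRs, which is the geometric picture behind the paper's characterisation.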

]]>Risks doi: 10.3390/risks5010008

Authors: Xuebing Kuang Xiaowen Zhou

Using a Poisson approach, we find Laplace transforms of joint occupation times over n disjoint intervals for spectrally negative Lévy processes. They generalize previous results for dimension two.

]]>Risks doi: 10.3390/risks5010009

Authors: Thomas Koch

This article considers an economy where risk is insurable, but selection determines the pool of individuals who take it up. First, we demonstrate that the comparative statics of these economies do not necessarily depend on the nature of marginal selection (adverse versus favorable), but rather on other characteristics. We then use repeated cross-sections of medical expenditures in the U.S. to understand the role of changes in the medical risk distribution on the fraction of Americans without medical insurance. We find that both the level and the shape of the distribution of risk are important in determining the equilibrium quantity of insurance. Symmetric changes in risk (e.g., shifts in the price of medical care) better explain the shifting insurance rate over time. Asymmetric changes (e.g., associated with a shifting age distribution) are not as important.

]]>Risks doi: 10.3390/risks5010006

Authors: Bin Zou Abel Cadenillas

We consider an insurer who faces an external jump-diffusion risk that is negatively correlated with the capital returns in a multidimensional regime switching model. The insurer selects investment and liability ratio policies continuously to maximize her/his expected utility of terminal wealth. We obtain explicit solutions of optimal policies for logarithmic and power utility functions. We study the impact of the insurer’s risk aversion, the negative correlation between the external risk and the capital returns, and the regime of the economy on the optimal policy. We find, among other things, that the regime of the economy and the negative correlation between the external risk and the capital returns have a dramatic effect on the optimal policy.

]]>Risks doi: 10.3390/risks5010007

Authors: Barbora Peštová Michal Pešta

Panel data of our interest consist of a moderate number of panels, while the panels contain a small number of observations. For this kind of scenario, we propose an estimator of common breaks in panel means that does not suffer from a boundary issue. In particular, the novel estimator is able to detect a common break point even when the change happens immediately after the first time point or just before the last observation period. Another advantage of the elaborated change point estimator is that it returns the last observation in situations with no structural breaks. The consistency of the change point estimator in panel data is established. The results are illustrated through a simulation study. As a by-product of the developed estimation technique, a theoretical utilization for correlation structure estimation, hypothesis testing and bootstrapping in panel data is demonstrated. A practical application to non-life insurance is presented as well.

]]>Risks doi: 10.3390/risks5010005

Authors: Pierre Devolder Sébastien de Valeriola

The regulation on the Belgian occupational pension schemes has been recently changed. The new law allows employers to choose between two different types of guarantees to offer to their affiliates. In this paper, we address the question arising naturally: which of the two guarantees is the best one? In order to answer that question, we set up a stochastic model and use financial pricing tools to compare the methods. More specifically, we link the pension liabilities to a portfolio of financial assets and compute the price of exchange options through the Margrabe formula.
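The Margrabe formula for the price of an option to exchange one asset for another has a compact closed form. A minimal implementation (with zero dividend yields, and flat volatilities and correlation as illustrative assumptions):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def margrabe(s1, s2, sigma1, sigma2, rho, t):
    """Margrabe price of the option to receive asset 1 in exchange for asset 2
    at maturity t: payoff max(S1(t) - S2(t), 0)."""
    sigma = math.sqrt(sigma1**2 + sigma2**2 - 2.0 * rho * sigma1 * sigma2)
    if sigma == 0.0 or t == 0.0:
        return max(s1 - s2, 0.0)
    d1 = (math.log(s1 / s2) + 0.5 * sigma**2 * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s1 * norm_cdf(d1) - s2 * norm_cdf(d2)

# With sigma2 = 0 and rho = 0 this collapses to a zero-rate Black-Scholes call.
price = margrabe(100.0, 100.0, 0.2, 0.0, 0.0, 1.0)
```

In the paper's setting, S1 and S2 would be the values of the portfolios delivering the two guarantee types, so the exchange-option price measures the value of switching between them.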

]]>Risks doi: 10.3390/risks5010004

Authors: Risks Editorial Office

The editors of Risks would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2016. [...]

]]>Risks doi: 10.3390/risks5010003

Authors: Yuguang Fan Philip Griffin Ross Maller Alexander Szimayer Tiandong Wang

We compare two types of reinsurance: excess of loss (EOL) and largest claim reinsurance (LCR), each of which transfers the payment of part, or all, of one or more large claims from the primary insurance company (the cedant) to a reinsurer. The primary insurer’s point of view is documented in terms of assessment of risk and payment of reinsurance premium. A utility indifference rationale based on the expected future dividend stream is used to value the company with and without reinsurance. Assuming the classical compound Poisson risk model with choices of claim size distributions (classified as heavy, medium and light-tailed cases), simulations are used to illustrate the impact of the EOL and LCR treaties on the company’s ruin probability, ruin time and value as determined by the dividend discounting model. We find that LCR is at least as effective as EOL in averting ruin in comparable finite time horizon settings. In instances where the ruin probability for LCR is smaller than for EOL, the dividend discount model shows that the cedant is able to pay a larger portion of the dividend for LCR reinsurance than for EOL while still maintaining company value. Both methods reduce risk considerably as compared with no reinsurance, in a variety of situations, as measured by the standard deviation of the company value. A further interesting finding is that heaviness of tails alone is not necessarily the decisive factor in the possible ruin of a company; small and moderate sized claims can also play a significant role in this.

]]>Risks doi: 10.3390/risks5010002

Authors: Liivika Tee Meelis Käärik Rauno Viin

We consider the well-known stochastic reserve estimation methods on the basis of generalized linear models, such as the (over-dispersed) Poisson model, the gamma model and the log-normal model. For the likely variability of the claims reserve, the bootstrap method is considered. In the bootstrapping framework, we discuss the choice of residuals, namely the Pearson residuals, the deviance residuals and the Anscombe residuals. In addition, several possible residual adjustments are discussed and compared in a case study. We carry out a practical implementation and comparison of methods using real-life insurance data to estimate reserves and their prediction errors. We propose to consider proper scoring rules for model validation, and the assessments will be drawn from an extensive case study.

]]>Risks doi: 10.3390/risks5010001

Authors: Başak Bulut Karageyik Şule Şahin

In this paper, we approximate the aggregate claims process by using the translated gamma process under the classical risk model assumptions, and we investigate the ultimate ruin probability. We consider optimal reinsurance under the minimum ultimate ruin probability, as well as the maximum benefit criteria: released capital, expected profit and exponential-fractional-logarithmic utility from the insurer’s point of view. Numerical examples are presented to explain how the optimal initial surplus and retention level are changed according to the individual claim amounts, loading factors and weights of the criteria. In the decision making process, we use the Analytic Hierarchy Process (AHP) and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) as the Multi-Attribute Decision Making (MADM) methods and compare our results considering different combinations of loading factors for both exponential and Pareto individual claims.
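The translated gamma approximation works by matching the first three moments of the aggregate claims. A minimal sketch of the moment-matching step for a compound Poisson sum (claim moments are inputs; the mapping to shape/scale/shift uses the standard gamma skewness identity 2/sqrt(alpha)):

```python
import math

def translated_gamma_params(lam, m1, m2, m3):
    """Translated-gamma parameters (shape alpha, scale theta, shift x0) matching
    the mean, variance and skewness of a compound Poisson sum with Poisson rate
    lam and raw claim moments E[X^k] = m1, m2, m3:
        mean = lam*m1,  var = lam*m2,  skew = lam*m3 / var^1.5."""
    mean = lam * m1
    var = lam * m2
    skew = lam * m3 / var ** 1.5
    alpha = 4.0 / skew ** 2          # gamma skewness is 2 / sqrt(alpha)
    theta = math.sqrt(var / alpha)   # gamma variance is alpha * theta^2
    x0 = mean - alpha * theta        # shift so the mean matches
    return alpha, theta, x0
```

With these parameters, ruin-related quantities for the aggregate claims can be approximated using gamma distribution functions shifted by x0.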

]]>Risks doi: 10.3390/risks4040048

Authors: Ambrose Lo

Reinsurance is often empirically hailed as a value-adding risk management strategy which an insurer can utilize to achieve various business objectives. In the context of a distortion-risk-measure-based three-party model incorporating a policyholder, insurer and reinsurer, this article formulates explicitly the optimal insurance–reinsurance strategies from the perspective of the insurer. Our analytic solutions are complemented by intuitive but scientifically rigorous explanations on the marginal cost and benefit considerations underlying the optimal insurance–reinsurance decisions. These cost-benefit discussions not only cast light on the economic motivations for an insurer to engage in insurance with the policyholder and in reinsurance with the reinsurer, but also mathematically formalize the value created by reinsurance with respect to stabilizing the loss portfolio and enlarging the underwriting capacity of an insurer. Our model also allows for the reinsurer’s failure to deliver on its promised indemnity when the regulatory capital of the reinsurer is depleted by the reinsured loss. The reduction in the benefits of reinsurance to the insurer as a result of the reinsurer’s default is quantified, and its influence on the optimal insurance–reinsurance policies analyzed.

]]>Risks doi: 10.3390/risks4040050

Authors: Mi Chen Wenyuan Wang Ruixing Ming

In this paper, we study the optimal reinsurance problem where risks of the insurer are measured by general law-invariant risk measures and premiums are calculated under the TVaR premium principle, which extends the work of the expected premium principle. Our objective is to characterize the optimal reinsurance strategy which minimizes the insurer’s risk measure of its total loss. Our calculations show that the optimal reinsurance strategy is of the multi-layer form, i.e., f*(x) = x ∧ c* + (x − d*)+, with c* and d* being constants such that 0 ≤ c* ≤ d*.
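The multi-layer ceded-loss function f*(x) = x ∧ c* + (x − d*)+ is straightforward to evaluate; a direct transcription:

```python
def ceded_loss(x, c, d):
    """Multi-layer ceded-loss function f*(x) = (x ∧ c) + (x - d)_+ with
    0 <= c <= d: the reinsurer covers the loss up to c, the insurer retains
    the layer (c, d], and the reinsurer covers everything above d."""
    assert 0.0 <= c <= d
    return min(x, c) + max(x - d, 0.0)

print(ceded_loss(1.0, 2.0, 10.0))   # loss below c: fully ceded -> 1.0
print(ceded_loss(5.0, 2.0, 10.0))   # loss in the retained layer -> 2.0
print(ceded_loss(12.0, 2.0, 10.0))  # loss above d: 2.0 + (12 - 10) -> 4.0
```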

]]>Risks doi: 10.3390/risks4040051

Authors: Ying Wang Sai Choy Hoi Wong

The application of stochastic volatility (SV) models in the option pricing literature usually assumes that the market has sufficient option data to calibrate the model’s risk-neutral parameters. When option data are insufficient or unavailable, market practitioners must estimate the model from the historical returns of the underlying asset and then transform the resulting model into its risk-neutral equivalent. However, the likelihood function of an SV model can only be expressed in a high-dimensional integration, which makes the estimation a highly challenging task. The Bayesian approach has been the classical way to estimate SV models under the data-generating (physical) probability measure, but the transformation from the estimated physical dynamic into its risk-neutral counterpart has not been addressed. Inspired by the generalized autoregressive conditional heteroskedasticity (GARCH) option pricing approach by Duan in 1995, we propose an SV model that enables us to simultaneously and conveniently perform Bayesian inference and transformation into risk-neutral dynamics. Our model relaxes the normality assumption on innovations of both return and volatility processes, and our empirical study shows that the estimated option prices generate realistic implied volatility smile shapes. In addition, the volatility premium is almost flat across strike prices, so adding a few option data to the historical time series of the underlying asset can greatly improve the estimation of option prices.

]]>Risks doi: 10.3390/risks4040049

Authors: Pierre Devolder Adrien Lebègue

In this paper, we consider compositions of conditional risk measures in order to obtain time-consistent dynamic risk measures and determine the solvency capital of a life insurer selling pension liabilities or a pension fund with a single cash-flow at maturity. We first recall the notion of conditional, dynamic and time-consistent risk measures. We link the latter with its iterated property, which gives us a way to construct time-consistent dynamic risk measures from a backward iteration scheme with the composition of conditional risk measures. We then consider particular cases with the conditional version of the value at risk, tail value at risk and conditional expectation measures. We finally give an application of these measures with the determination of the solvency capital of a pension liability, which offers a fixed guaranteed rate without any intermediate cash-flow. We assume that the company is fully hedged against the mortality and underwriting risks.

]]>Risks doi: 10.3390/risks4040047

Authors: Philippe Deprez Mario Wüthrich

This article provides a case study that analyzes national macroprudential insurance regulation in Switzerland. We consider an insurance market that is based on data from the Swiss private insurance industry. We stress this market with several scenarios related to financial and insurance risks, and we analyze the resulting risk capitals of the insurance companies. This stress-test analysis provides insights into the vulnerability of the Swiss private insurance sector to different risks and shocks.

]]>Risks doi: 10.3390/risks4040046

Authors: Jean-François Bégin

Life insurers are exposed to deflation risk: falling prices could lead to insufficient investment returns, and inflation-indexed protections could make insurers vulnerable to deflation. In this spirit, this paper proposes a market-based methodology for measuring deflation risk based on a discrete framework: the latter accounts for the real interest rate, the inflation index level, its conditional variance, and the expected inflation rate. US inflation data are then used to estimate the model and show the importance of deflation risk. Specifically, the distribution of a fictitious life insurer’s future payments is investigated. We find that the proposed inflation model yields higher risk measures than the ones obtained using competing models, stressing the need for dynamic and market-consistent inflation modelling in the life insurance industry.

]]>Risks doi: 10.3390/risks4040045

Authors: Anastasia Novokreshchenova

In this paper, we quantitatively compare the forecasts from four different mortality models. We consider one discrete-time model proposed by Lee and Carter (1992) and three continuous-time models: the Wills and Sherris (2011) model, the Feller process and the Ornstein-Uhlenbeck (OU) process. The first two models estimate the whole surface of mortality simultaneously, while in the latter two, each generation is modelled and calibrated separately. We calibrate the models to UK and Australian population data. We find that all the models show relatively similar absolute total error for a given dataset, except the Lee-Carter model, whose performance differs significantly. To evaluate the forecasting performance, we therefore look at two alternative measures: the relative error between the forecasted and the actual mortality rates and the percentage of actual mortality rates which fall within a prediction interval. In terms of the prediction intervals, the results are more divergent since each model implies a different structure for the variance of mortality rates. According to our experiments, the Wills and Sherris model produces superior results in terms of the prediction intervals. However, in terms of the mean absolute error, the OU and the Feller processes perform better. The forecasting performance of the Lee-Carter model is mostly dependent on the choice of the dataset.

]]>Risks doi: 10.3390/risks4040044

Authors: Eckhard Liebscher Wolf-Dieter Richter

Scatter plots of multivariate data sets motivate modeling of star-shaped distributions beyond elliptically contoured ones. We study properties of estimators for the density generator function, the star-generalized radius distribution and the density in a star-shaped distribution model. For the generator function and the star-generalized radius density, we consider a non-parametric kernel-type estimator. This estimator is combined with a parametric estimator for the contours which are assumed to follow a parametric model. Therefore, the semiparametric procedure features the flexibility of nonparametric estimators and the simple estimation and interpretation of parametric estimators. Alternatively, we consider pure parametric estimators for the density. For the semiparametric density estimator, we prove rates of uniform, almost sure convergence which coincide with the corresponding rates of one-dimensional kernel density estimators when excluding the center of the distribution. We show that the standardized density estimator is asymptotically normally distributed. Moreover, the almost sure convergence rate of the estimated distribution function of the star-generalized radius is derived. A particular new two-dimensional distribution class is adapted here to agricultural and financial data sets.

]]>Risks doi: 10.3390/risks4040043

Authors: Annika Krutto

For general stable distributions, parameter estimators based on the cumulant function are proposed. Extensive simulation experiments are carried out to validate the effectiveness of the estimators over the entire parameter space. An application to the distribution of non-life insurance losses is presented.
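One classical cumulant-function idea of this kind estimates the stability index α from the slope of log(−log|φ̂(t)|) against log|t|, where φ̂ is the empirical characteristic function; the paper's estimators may differ, and the evaluation points t1, t2 below are arbitrary choices.

```python
import numpy as np

def ecf_modulus(t, sample):
    """Modulus of the empirical characteristic function at t."""
    return np.abs(np.exp(1j * t * sample).mean())

def estimate_alpha(sample, t1=0.2, t2=1.0):
    """For a symmetric stable law, log(-log|phi(t)|) = alpha*log|t| + alpha*log(gamma),
    so alpha is the slope between two evaluation points t1 < t2."""
    y1 = np.log(-np.log(ecf_modulus(t1, sample)))
    y2 = np.log(-np.log(ecf_modulus(t2, sample)))
    return (y2 - y1) / (np.log(t2) - np.log(t1))

rng = np.random.default_rng(1)
cauchy = rng.standard_cauchy(100_000)  # stable law with alpha = 1
alpha_hat = estimate_alpha(cauchy)
```

Because exp(itX) is bounded, the empirical characteristic function behaves well even for heavy-tailed samples, which is what makes this family of estimators attractive for stable laws.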

]]>Risks doi: 10.3390/risks4040042

Authors: Julie Thøgersen

An insurance company offers an insurance contract (p, K), consisting of a premium p and a deductible K. In this paper, we consider the problem of choosing the premium optimally as a function of the deductible. The insurance company faces a market of N customers, each characterized by their personal claim frequency, α, and risk aversion, β. When a customer is offered an insurance contract, she/he will, based on these characteristics, choose whether or not to insure. The decision process of the customer is analyzed in detail. Since the customer characteristics are unknown to the company, it models them as i.i.d. random variables: A_1, …, A_N for the claim frequencies and B_1, …, B_N for the risk aversions. Depending on the distributions of A_i and B_i, expressions for the portfolio size n(p; K) ∈ [0, N] and the average claim frequency α(p; K) in the portfolio are obtained. Knowing these, the company can choose the premium optimally, mainly by minimizing the ruin probability.
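The dependence of the portfolio size and the average claim frequency on the premium can be illustrated by Monte Carlo. The reservation-premium rule and the distributions of A_i and B_i below are purely hypothetical, not the paper's model; the sketch only shows the adverse-selection effect of raising p.

```python
import numpy as np

def portfolio(p, K, N=10_000, rng=None):
    """Monte Carlo sketch: each customer insures when the premium is below a
    reservation premium increasing in claim frequency A and risk aversion B.
    The reservation rule p <= A * (1 + B) * K is purely illustrative."""
    rng = np.random.default_rng(rng)
    A = rng.gamma(shape=2.0, scale=0.05, size=N)  # claim frequencies (hypothetical)
    B = rng.uniform(0.0, 1.0, size=N)             # risk aversions (hypothetical)
    insures = p <= A * (1.0 + B) * K
    n = insures.sum()
    avg_freq = A[insures].mean() if n else 0.0
    return n, avg_freq

n_low, f_low = portfolio(p=0.05, K=1.0, rng=7)
n_high, f_high = portfolio(p=0.15, K=1.0, rng=7)
```

Raising the premium shrinks the portfolio but raises its average claim frequency, since only the riskier (or more risk-averse) customers still insure, which is exactly the trade-off the company's ruin-probability minimization must balance.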

]]>Risks doi: 10.3390/risks4040041

Authors: Marcos Escobar Mikhail Krayzler Franz Ramsauer David Saunders Rudi Zagst

Variable annuities are unit-linked life insurance products offering various types of protection, commonly referred to as guaranteed minimum benefits (GMXBs). They are designed to meet customers' increasing demand for private pension provision. In this paper, we analytically price variable annuities with guaranteed minimum repayments at maturity and in case of the insured's death. If the contract is surrendered prematurely, the policyholder is entitled to the current value of the fund account reduced by the prevailing surrender fee. The financial market and the mortality model are affine linear. For the surrender model, a Cox process is deployed whose intensity is given by a deterministic function (s-curve) with stochastic inputs from the financial market. Thus, the policyholders' surrender behavior depends on the performance of the financial market and is itself stochastic. The presented pricing scheme incorporates this stochastic surrender behavior and is based only on suitable closed-form approximations.
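A heavily simplified numerical counterpart of the maturity guarantee is a Monte Carlo valuation of the payoff max(F_T, G) on a geometric Brownian motion fund. This deliberately ignores mortality, surrender and fees (all central to the paper's affine closed-form approach), and every parameter value is an assumption for illustration.

```python
import numpy as np

def gmab_price(F0, G, r, sigma, T, n_paths=100_000, rng=None):
    """Monte Carlo value of a maturity guarantee max(F_T, G) on a GBM fund.
    Mortality, surrender and fees are ignored -- a deliberate simplification
    of the paper's affine setting, for illustration only."""
    rng = np.random.default_rng(rng)
    z = rng.standard_normal(n_paths)
    FT = F0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    payoff = np.maximum(FT, G)
    return np.exp(-r * T) * payoff.mean()

price = gmab_price(F0=100.0, G=100.0, r=0.02, sigma=0.2, T=10.0, rng=3)
```

Since max(F_T, G) = F_T + max(G − F_T, 0), the guarantee value equals the fund value plus an embedded put option, which is why `price` exceeds the initial fund value F0.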

]]>Risks doi: 10.3390/risks4040040

Authors: Lei Hua

The family of Liouville copulas is defined as the survival copulas of multivariate Liouville distributions, and it covers the Archimedean copulas constructed by Williamson's d-transform. Liouville copulas provide a very wide range of dependence in the upper tails, from positive to negative, and they can be useful in modeling tail risks. In this article, we study the upper tail behavior of Liouville copulas through their upper tail orders. Tail orders of a more general scale mixture model that covers Liouville distributions are first derived; then, tail order functions and tail order density functions of Liouville copulas are derived. Concrete examples are given after the main results.
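The upper tail order κ describes how fast the joint survival probability C̄(1−q, …, 1−q) decays, roughly like q^κ up to a slowly varying factor, with κ = 1 meaning upper tail dependence and κ = 2 meaning tail independence in two dimensions. A crude numerical estimate, using the survival copula of a Clayton copula (an Archimedean example reachable by Williamson's d-transform) against the independence copula; the estimator below ignores the slowly varying factor and is a sketch, not the paper's method:

```python
import numpy as np

def clayton_sample(theta, n, rng):
    """Marshall-Olkin sampling of the bivariate Clayton copula:
    U_i = (1 + E_i/V)^(-1/theta) with V ~ Gamma(1/theta, 1)."""
    v = rng.gamma(1.0 / theta, 1.0, size=n)
    e = rng.exponential(size=(n, 2))
    return (1.0 + e / v[:, None]) ** (-1.0 / theta)

def upper_tail_order(u_pair, q=0.01):
    """Crude estimate of kappa from P(U1 > 1-q, U2 > 1-q) ~ q^kappa
    (slowly varying factor ignored)."""
    joint = np.mean((u_pair[:, 0] > 1 - q) & (u_pair[:, 1] > 1 - q))
    return np.log(joint) / np.log(q)

rng = np.random.default_rng(5)
indep = rng.uniform(size=(1_000_000, 2))
surv_clayton = 1.0 - clayton_sample(2.0, 1_000_000, rng)  # upper tail dependent
k_indep = upper_tail_order(indep)    # close to 2 (tail independence)
k_dep = upper_tail_order(surv_clayton)  # close to 1 (upper tail dependence)
```

The estimate for the independence copula sits near 2, while the flipped Clayton sample sits near 1, matching the interpretation of the tail order as an index between full tail dependence and tail independence.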

]]>