Risks
http://www.mdpi.com/journal/risks
Latest open access articles published in Risks at http://www.mdpi.com/journal/risks

Risks, Vol. 5, Pages 32: Effects of Gainsharing Provisions on the Selection of a Discount Rate for a Defined Benefit Pension Plan
http://www.mdpi.com/2227-9091/5/2/32
This paper examines the effect of gainsharing provisions on the selection of a discount rate for a defined benefit pension plan. The paper uses a traditional actuarial approach of discounting liabilities using the expected return of the associated pension fund. A stochastic Excel model was developed to simulate the effect of varying investment returns on a pension fund with four asset classes. Lognormal distributions were fitted to the historical returns of two of the asset classes: large company stocks and long-term government bonds. A third lognormal distribution was designed to represent the returns of alternative investments, such as real estate and private equity. The fourth asset class represented short-term cash investments, and its return was held constant. The following variables were analyzed to determine the relative impact of gainsharing on the selection of a discount rate: hurdle rate, percentage of gainsharing, actuarial asset method smoothing period, and variations in asset allocation. A 50% gainsharing feature can reduce the discount rate for a defined benefit pension plan by anywhere from 0.5% to more than 2.5%, depending on the gainsharing design and asset allocation.

Risks 2017, 5(2), 32; doi: 10.3390/risks5020032. Published 2017-06-20. Authors: Robert Rietz, Evan Cronick, Shelby Mathers, Matt Pollie.

Risks, Vol. 5, Pages 31: Actuarial Geometry
http://www.mdpi.com/2227-9091/5/2/31
The literature on capital allocation is biased towards an asset modeling framework rather than an actuarial framework. The asset modeling framework leads to the proliferation of inappropriate assumptions about the effect of insurance line of business growth on aggregate loss distributions. This paper explains why an actuarial analog of the asset volume/return model should be based on a Lévy process. It discusses the impact of different loss models on marginal capital allocations. It shows that Lévy process-based models provide a better fit to US statutory accounting data, and identifies how parameter risk scales with volume and increases with time. Finally, it shows that the data suggest a surprising result regarding the form of insurance parameter risk.

Risks 2017, 5(2), 31; doi: 10.3390/risks5020031. Published 2017-06-16. Authors: Stephen Mildenhall.

Risks, Vol. 5, Pages 30: State Space Models and the Kalman-Filter in Stochastic Claims Reserving: Forecasting, Filtering and Smoothing
http://www.mdpi.com/2227-9091/5/2/30
This paper gives a detailed overview of the current state of research on the use of state space models and the Kalman filter in stochastic claims reserving. Most of these state space representations are matrix-based, which complicates their application. Therefore, to facilitate the implementation of state space models in practice, we present a scalar state space model for cumulative payments, which is an extension of the well-known chain ladder (CL) method. The presented model is distribution-free, forms a basis for determining the entire unobservable lower and upper run-off triangles, and can easily be applied in practice using the Kalman filter for prediction, filtering and smoothing of cumulative payments. In addition, the model provides an easy way to find outliers in the data and to determine outlier effects. Finally, an empirical comparison of the scalar state space model, promising prior state space models and some popular stochastic claims reserving methods is performed.

Risks 2017, 5(2), 30; doi: 10.3390/risks5020030. Published 2017-05-27. Authors: Nataliya Chukhrova, Arne Johannssen.

Risks, Vol. 5, Pages 29: Maximum Market Price of Longevity Risk under Solvency Regimes: The Case of Solvency II
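Since the scalar state space model in the claims reserving abstract above extends the chain ladder (CL) method, a minimal sketch of the classical CL completion of a cumulative run-off triangle may help fix ideas. This is the textbook method, not the authors' state space model, and the triangle values are invented:

```python
import numpy as np

def chain_ladder(tri):
    """Classic chain ladder: complete the lower-right part of a cumulative
    run-off triangle (rows = accident years, cols = development years;
    unobserved cells are NaN) using volume-weighted development factors."""
    tri = np.array(tri, dtype=float)
    n = tri.shape[1]
    for j in range(n - 1):
        obs = ~np.isnan(tri[:, j + 1])
        # development factor from column j to j+1, from rows observed in both
        f = tri[obs, j + 1].sum() / tri[obs, j].sum()
        fill = np.isnan(tri[:, j + 1])
        tri[fill, j + 1] = tri[fill, j] * f
    return tri

# toy 3x3 cumulative triangle (NaN = not yet observed)
tri = [[100, 150, 165],
       [110, 170, np.nan],
       [120, np.nan, np.nan]]
print(chain_ladder(tri))
```

The scalar model in the paper replaces this deterministic completion with a state space recursion, so that the Kalman filter also delivers prediction and smoothing distributions rather than point estimates only.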
http://www.mdpi.com/2227-9091/5/2/29
Longevity risk constitutes an important risk factor for life insurance companies, and it can be managed through longevity-linked securities. The market for longevity-linked securities is at present far from complete and does not allow a unique pricing measure to be found. We propose a method to estimate the maximum market price of longevity risk depending on the risk margin implicit in the calculation of the technical provisions as defined by Solvency II. The maximum price of longevity risk is determined for a survivor forward (S-forward), an agreement between two counterparties to exchange at maturity a fixed survival-dependent payment for a payment depending on the realized survival of a given cohort of individuals. The maximum prices determined for the S-forwards can be used to price other longevity-linked securities, such as q-forwards. The Cairns–Blake–Dowd model is used to represent the evolution of mortality over time; combined with the information on the risk margin, it enables us to calculate upper limits for the risk-adjusted survival probabilities, the market price of longevity risk and the S-forward prices. The numerical results can be extended to the pricing of other longevity-linked securities.

Risks 2017, 5(2), 29; doi: 10.3390/risks5020029. Published 2017-05-10. Authors: Susanna Levantesi, Massimiliano Menzietti.

Risks, Vol. 5, Pages 28: Asymptotic Estimates for the One-Year Ruin Probability under Risky Investments
http://www.mdpi.com/2227-9091/5/2/28
Motivated by the EU Solvency II Directive, we study the one-year ruin probability of an insurer who makes investments and hence faces both insurance and financial risks. Over a time horizon of one year, the insurance risk is quantified as a nonnegative random variable X equal to the aggregate amount of claims, and the financial risk as a d-dimensional random vector Y consisting of stochastic discount factors of the d financial assets invested. To capture both heavy tails and asymptotic dependence of Y in an integrated manner, we assume that Y follows a standard multivariate regular variation (MRV) structure. As main results, we derive exact asymptotic estimates for the one-year ruin probability in the following cases: (i) X and Y are independent with X of Fréchet type; (ii) X and Y are independent with X of Gumbel type; (iii) X and Y jointly possess a standard MRV structure; (iv) X and Y jointly possess a nonstandard MRV structure.

Risks 2017, 5(2), 28; doi: 10.3390/risks5020028. Published 2017-05-06. Authors: Jing Liu, Huan Zhang.

Risks, Vol. 5, Pages 27: Risk Management under Omega Measure
http://www.mdpi.com/2227-9091/5/2/27
We prove that the Omega measure, which considers all moments when assessing portfolio performance, is equivalent to the widely used Sharpe ratio under jointly elliptic distributions of returns. Portfolio optimization of the Sharpe ratio is then explored, with an active-set algorithm presented for markets prohibiting short sales. When asymmetric returns are considered, we show that the Omega measure and Sharpe ratio lead to different optimal portfolios.

Risks 2017, 5(2), 27; doi: 10.3390/risks5020027. Published 2017-05-06. Authors: Michael Metel, Traian A. Pirvu, Julian Wong.

Risks, Vol. 5, Pages 26: Bond and CDS Pricing via the Stochastic Recovery Black-Cox Model
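The two performance measures compared in the "Risk Management under Omega Measure" abstract above can be sketched as follows. This is a toy illustration on a simulated return sample, not the authors' optimization code; the threshold, risk-free rate and sample are arbitrary:

```python
import numpy as np

def omega_ratio(returns, threshold=0.0):
    """Omega measure at a threshold: expected gains above the threshold
    divided by expected losses below it (it uses the whole distribution,
    hence 'all moments')."""
    excess = np.asarray(returns, dtype=float) - threshold
    gains = excess[excess > 0].sum()
    losses = -excess[excess < 0].sum()
    return gains / losses

def sharpe_ratio(returns, rf=0.0):
    """Classic Sharpe ratio: mean excess return over its standard deviation."""
    excess = np.asarray(returns, dtype=float) - rf
    return excess.mean() / excess.std(ddof=1)

# symmetric (elliptical-like) returns: the two measures rank portfolios alike;
# with skewed returns they can disagree, which is the point of the paper
rng = np.random.default_rng(0)
r = rng.normal(0.01, 0.05, size=10_000)
print(omega_ratio(r, 0.0), sharpe_ratio(r))
```

Under skewed return distributions, ranking portfolios by `omega_ratio` and by `sharpe_ratio` can produce different optima, which is the asymmetric-returns result stated in the abstract.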
http://www.mdpi.com/2227-9091/5/2/26
Building on recent work incorporating recovery risk into structural models by Cohen & Costanzino (2015), we consider the Black-Cox model with an added recovery risk driver. The recovery risk driver arises naturally in the context of imperfect information implicit in the structural framework. This leads to a two-factor structural model we call the Stochastic Recovery Black-Cox model, whereby the asset risk driver A_t defines the default trigger and the recovery risk driver R_t defines the amount recovered in the event of default. We then price zero-coupon bonds and credit default swaps under the Stochastic Recovery Black-Cox model. Finally, we compare our results with the classic Black-Cox model, give explicit expressions for the recovery risk premium in the Stochastic Recovery Black-Cox model, and detail how the introduction of separate but correlated risk drivers leads to a decoupling of the default and recovery risk premiums in the credit spread. We conclude by computing the effect of adding coupons that are paid continuously until default, and by pricing perpetual bonds (consols) in our two-factor firm value model, extending calculations in the seminal paper by Leland (1994).

Risks 2017, 5(2), 26; doi: 10.3390/risks5020026. Published 2017-04-19. Authors: Albert Cohen, Nick Costanzino.

Risks, Vol. 5, Pages 25: Enhancing Singapore’s Pension Scheme: A Blueprint for Further Flexibility
http://www.mdpi.com/2227-9091/5/2/25
Building a social security system that gives Singapore residents peace of mind in funding their retirement has been at the top of the Singapore government’s policy agenda over the last decade. The implementation of the Lifelong Income For the Elderly (LIFE) scheme in 2009 clearly shows that the government spares no effort in improving its pension scheme to boost residents’ income after retirement. Despite recent modifications to the LIFE scheme, Singapore residents must still choose between two plans: the Standard and Basic plans. To enhance the flexibility of the LIFE scheme while further streamlining its fund management, we propose plan modifications such that scheme members no longer face a dichotomy of plan choices. Instead, they select two age parameters: the Payout Age and the Life-annuity Age. This paper discusses the actuarial analysis for determining members’ payouts and bequests based on the proposed age parameters. We analyze the net cash receipts and Internal Rate of Return (IRR) for various plan-parameter configurations. This information helps members make their plan choices. To address cost-of-living increases, we propose extending the plan to accommodate an annual step-up of monthly payouts. By deferring the Payout Age from 65 to 68, members can enjoy an annual increase of about 2% in payouts for the same first-year monthly benefits.

Risks 2017, 5(2), 25; doi: 10.3390/risks5020025. Published 2017-04-13. Authors: Koon-Shing Kwong, Yiu-Kuen Tse, Wai-Sum Chan.

Risks, Vol. 5, Pages 24: Applying spectral biclustering to mortality data
http://www.mdpi.com/2227-9091/5/2/24
We apply spectral biclustering to mortality datasets in order to capture three relevant aspects (the period, age and cohort effects), as their knowledge is a key factor in understanding the actuarial liabilities of private life insurance companies, pension funds and national pension systems. While standard techniques generally fail to capture the cohort effect, biclustering methods seem particularly suitable for this aim. We run an exploratory analysis on the mortality data of Italy, with ages representing genes and years representing conditions: comparing conventional hierarchical clustering with spectral biclustering, we observe that the latter offers more meaningful results.

Risks 2017, 5(2), 24; doi: 10.3390/risks5020024. Published 2017-04-04. Authors: Gabriella Piscopo, Marina Resta.

Risks, Vol. 5, Pages 23: Actuarial Applications and Estimation of Extended CreditRisk+
http://www.mdpi.com/2227-9091/5/2/23
We introduce an additive stochastic mortality model which allows joint modelling and forecasting of underlying death causes. Parameter families for mortality trends can be chosen freely. As model settings become high dimensional, Markov chain Monte Carlo (MCMC) is used for parameter estimation. We then link our proposed model to an extended version of the credit risk model CreditRisk+. This allows exact risk aggregation via an efficient, numerically stable Panjer recursion algorithm and provides numerous applications in credit, life insurance and annuity portfolios to derive P&L distributions. Furthermore, the model allows exact calculation (without Monte Carlo simulation error) of risk measures such as value-at-risk and expected shortfall for P&L distributions, and of their sensitivities with respect to model parameters. Numerous examples using Austrian and Australian data are shown, including an application to partial internal models under Solvency II.

Risks 2017, 5(2), 23; doi: 10.3390/risks5020023. Published 2017-03-31. Authors: Jonas Hirz, Uwe Schmock, Pavel Shevchenko.

Risks, Vol. 5, Pages 22: Asymmetric Return and Volatility Transmission in Conventional and Islamic Equities
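The Panjer recursion mentioned in the "Extended CreditRisk+" abstract above can be sketched in its simplest compound Poisson form. This is an illustrative toy, not the authors' extended algorithm; the Poisson intensity and severity pmf are invented:

```python
import math

def panjer_poisson(lam, sev, n):
    """Panjer recursion for a compound Poisson loss S = X1 + ... + XN,
    with N ~ Poisson(lam) and i.i.d. severities on {0, 1, 2, ...} with
    pmf `sev`.  Returns P(S = s) for s = 0..n exactly, with no Monte
    Carlo simulation error."""
    f = [sev[j] if j < len(sev) else 0.0 for j in range(n + 1)]
    g = [math.exp(lam * (f[0] - 1.0))]            # P(S = 0)
    for s in range(1, n + 1):
        g.append(lam / s * sum(j * f[j] * g[s - j] for j in range(1, s + 1)))
    return g

# toy severity: a claim costs 1 or 2 units with equal probability
g = panjer_poisson(lam=2.0, sev=[0.0, 0.5, 0.5], n=20)
print(sum(g))  # probability mass captured up to s = 20
```

Because the whole distribution of S is produced, tail risk measures such as value-at-risk can be read off `g` directly, which is the "exact risk aggregation" the abstract refers to.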
http://www.mdpi.com/2227-9091/5/2/22
This paper analyses the interdependence between Islamic and conventional equities, taking into consideration the asymmetric effect of return and volatility transmission. We empirically investigate the decoupling hypothesis of Islamic and conventional equities and the potential contagion effect. We analyse the intra-market and inter-market spillover among Islamic and conventional equities across three major markets: the USA, the United Kingdom and Japan. Our sample period ranges from 1996 to 2015. In addition, we segregate the sample into three sub-periods covering the period prior to the 2007 financial crisis, the crisis period and the post-crisis period. We find weak support for the decoupling hypothesis during the post-crisis period.

Risks 2017, 5(2), 22; doi: 10.3390/risks5020022. Published 2017-03-29. Authors: Zaghum Umar, Tahir Suleman.

Risks, Vol. 5, Pages 21: Multivariate Functional Time Series Forecasting: Application to Age-Specific Mortality Rates
http://www.mdpi.com/2227-9091/5/2/21
This study considers the forecasting of mortality rates in multiple populations. We propose a model that combines mortality forecasting and functional data analysis (FDA). Under the FDA framework, the mortality curve of each year is assumed to be a smooth function of age. As with most functional time series forecasting models, we rely on functional principal component analysis (FPCA) for dimension reduction, and further choose a vector error correction model (VECM) to jointly forecast mortality rates in multiple populations. This model incorporates the merits of existing models: it excludes some of the inherent randomness through the nonparametric smoothing from FDA, and it utilizes the correlation structures between the populations through the use of a VECM in mortality modelling. A nonparametric bootstrap method is also introduced to construct interval forecasts. The usefulness of this model is demonstrated through a series of simulation studies and applications to the age- and sex-specific mortality rates of Switzerland and the Czech Republic. The point forecast errors of several forecasting methods are compared, and interval scores are used to evaluate and compare the interval forecasts. Our model provides improved forecast accuracy in most cases.

Risks 2017, 5(2), 21; doi: 10.3390/risks5020021. Published 2017-03-25. Authors: Yuan Gao, Han Shang.

Risks, Vol. 5, Pages 20: Optimal Time to Enter a Retirement Village
http://www.mdpi.com/2227-9091/5/1/20
We consider the financial planning problem of a retiree wishing to enter a retirement village at a future uncertain date. The date of entry is determined by the retiree’s utility and bequest maximisation problem within the context of uncertain future health states. In addition, the retiree must choose optimal consumption, investment, bequest and purchase of insurance products prior to their full annuitisation on entry to the retirement village. A hyperbolic absolute risk-aversion (HARA) utility function is used to allow necessary consumption for basic living and medical costs. The retirement village will typically require an initial deposit upon entry. This threshold wealth requirement leads to the replication of an American put option, exercised at the uncertain stopping time. From our numerical results, active insurance and annuity markets are shown to be a critical aspect of retirement planning.

Risks 2017, 5(1), 20; doi: 10.3390/risks5010020. Published 2017-03-22. Authors: Jinhui Zhang, Sachi Purcal, Jiaqin Wei.

Risks, Vol. 5, Pages 19: Immunization and Hedging of Post Retirement Income Annuity Products
http://www.mdpi.com/2227-9091/5/1/19
Designing post retirement benefits requires access to appropriate investment instruments to manage the interest rate and longevity risks. Post retirement benefits are increasingly taken as a form of income benefit, either as a pension or an annuity. Pension funds and life insurers offer annuities generating long term liabilities linked to longevity. Risk management of life annuity portfolios for interest rate risks is well developed, but the incorporation of longevity risk has received limited attention. We develop an immunization approach and a delta-gamma based hedging approach to manage the risks of adverse portfolio surplus using stochastic models for mortality and interest rates. We compare and assess the immunization and hedge effectiveness of fixed-income coupon bonds, annuity bonds, as well as longevity bonds, using simulations of the portfolio surplus for an annuity portfolio and a range of risk measures including value-at-risk. We show how fixed-income annuity bonds can more effectively match cash flows and provide additional hedge effectiveness over coupon bonds. Longevity bonds, including deferred longevity bonds, reduce risk significantly compared to coupon and annuity bonds, reflecting the long duration of the typical life annuity and the exposure to longevity risk. Longevity bonds are shown to be effective in immunizing surplus over short and long horizons. Delta-gamma hedging is generally only effective over short horizons. The results of the paper have implications for how providers of post retirement income benefit streams can manage risks in demanding conditions where innovation in investment markets can support new products and increase the product range.

Risks 2017, 5(1), 19; doi: 10.3390/risks5010019. Published 2017-03-16. Authors: Changyu Liu, Michael Sherris.

Risks, Vol. 5, Pages 17: The Impact of Changes to the Unemployment Rate on Australian Disability Income Insurance Claim Incidence
http://www.mdpi.com/2227-9091/5/1/17
We explore the extent to which claim incidence in Disability Income Insurance (DII) is affected by changes in the unemployment rate in Australia. Using data from 1986 to 2001, we fit a hurdle model to explore the presence and magnitude of the effect of changes in unemployment rate on the incidence of DII claims, controlling for policy holder characteristics and seasonality. We find a clear positive association between unemployment and claim incidence, and we explore this further by gender, age, deferment period, and occupation. A multinomial logistic regression model is fitted to cause of claim data in order to explore the relationship further, and it is shown that the proportion of claims due to accident increases markedly with rising unemployment. The results suggest that during periods of rising unemployment, insurers may face increased claims from policy holders with shorter deferment periods, for white-collar workers and for medium and heavy manual workers. Our findings indicate that moral hazard may have a material impact on DII claim incidence and insurer business in periods of declining economic conditions.

Risks 2017, 5(1), 17; doi: 10.3390/risks5010017. Published 2017-03-14. Authors: Gaurav Khemka, Steven Roberts, Timothy Higgins.

Risks, Vol. 5, Pages 18: Context Moderates Priming Effects on Financial Risk Taking
http://www.mdpi.com/2227-9091/5/1/18
Previous research has shown that risk preferences are sensitive to the financial domain in which they are framed. In the present paper, we explore whether the effect of negative priming on risk taking is moderated by financial context. A total of 120 participants completed questionnaires in which risky choices were framed in six different financial scenarios. Half of the participants were allocated to a negative priming condition. Negative priming reduced risk-seeking behaviour compared to a neutral condition. However, this effect was confined to non-experiential scenarios (i.e., a gamble to win, a possibility to lose) and did not extend to ‘real world’ financial products (e.g., pension provision). The results call into question the generalisability of priming effects across different financial contexts.

Risks 2017, 5(1), 18; doi: 10.3390/risks5010018. Published 2017-03-14. Authors: Silvio Aldrovandi, Petko Kusev, Tetiana Hill, Ivo Vlaev.

Risks, Vol. 5, Pages 16: Evaluating Extensions to Coherent Mortality Forecasting Models
http://www.mdpi.com/2227-9091/5/1/16
Coherent models were developed recently to forecast the mortality of two or more sub-populations simultaneously and to ensure long-term non-divergent mortality forecasts of sub-populations. This paper evaluates the forecast accuracy of two recently-published coherent mortality models, the Poisson common factor and the product-ratio functional models. These models are compared to each other and to the corresponding independent models, as well as the original Lee–Carter model. All models are applied to age-gender-specific mortality data for Australia and Malaysia and age-gender-ethnicity-specific data for Malaysia. The out-of-sample forecast errors of log death rates, male-to-female death rate ratios and life expectancy at birth from each model are compared and examined across groups. The results show that, in terms of overall accuracy, the forecasts of both coherent models are consistently more accurate than those of the independent models for Australia and for Malaysia, but the relative performance differs by forecast horizon. Although the product-ratio functional model outperforms the Poisson common factor model for Australia, the Poisson common factor is more accurate for Malaysia. For the ethnic group application, ethnic coherence gives better results than gender coherence. The results provide evidence that coherent models are preferable to independent models for forecasting sub-populations’ mortality.

Risks 2017, 5(1), 16; doi: 10.3390/risks5010016. Published 2017-03-10. Authors: Syazreen Shair, Sachi Purcal, Nick Parr.

Risks, Vol. 5, Pages 14: Ruin Probabilities in a Dependent Discrete-Time Risk Model With Gamma-Like Tailed Insurance Risks
http://www.mdpi.com/2227-9091/5/1/14
This paper considers a dependent discrete-time risk model in which the insurance risks are represented by a sequence of independent and identically distributed real-valued random variables with a common Gamma-like tailed distribution, and the financial risks are denoted by another sequence of independent and identically distributed positive random variables with a finite upper endpoint, while a general dependence structure exists between each pair of insurance and financial risks. Following the work of Yang and Yuen in 2016, we derive some asymptotic relations for the finite-time and infinite-time ruin probabilities. As a complement, we demonstrate the obtained result through a Crude Monte Carlo (CMC) simulation with asymptotics.

Risks 2017, 5(1), 14; doi: 10.3390/risks5010014. Published 2017-03-03. Authors: Xing-Fang Huang, Ting Zhang, Yang Yang, Tao Jiang.

Risks, Vol. 5, Pages 15: Change Point Detection and Estimation of the Two-Sided Jumps of Asset Returns Using a Modified Kalman Filter
http://www.mdpi.com/2227-9091/5/1/15
In the first part of the paper, the positive and negative jumps of NASDAQ daily (log-)returns and three of its stocks are estimated based on the methodology presented by Theodosiadou et al. 2016, where jumps are assumed to be hidden random variables. For that reason, the use of stochastic state space models in discrete time is adopted. The daily return is expressed as the difference between the two-sided jumps under noise inclusion, and the recursive Kalman filter algorithm is used in order to estimate them. Since the estimated jumps have to be non-negative, the associated pdf truncation method is applied according to the non-negativity constraints. In order to overcome the resulting underestimation of the empirical time series, a scaling procedure follows the stage of truncation. In the second part of the paper, a nonparametric change point analysis concerning the (variance–)covariance is applied to the NASDAQ return time series, as well as to the estimated bivariate jump time series derived after the scaling procedure and to each jump component separately. A similar change point analysis is applied to the three other stocks of the NASDAQ index.

Risks 2017, 5(1), 15; doi: 10.3390/risks5010015. Published 2017-03-03. Authors: Ourania Theodosiadou, Sotiris Skaperas, George Tsaklidis.

Risks, Vol. 5, Pages 13: Mathematical Analysis of Replication by Cash Flow Matching
http://www.mdpi.com/2227-9091/5/1/13
The replicating portfolio approach is a well-established approach carried out by many life insurance companies within their Solvency II framework for the computation of risk capital. In this note, we elaborate on one specific formulation of a replicating portfolio problem. In contrast to the two most popular replication approaches, it does not yield an analytic solution (if, at all, a solution exists and is unique). Further, although convex, the objective function seems to be non-smooth, and hence a numerical solution might be much more demanding than for the two most popular formulations. Especially for the second reason, this formulation has not (yet) received much attention in practical applications, in contrast to the other two formulations. In the following, we demonstrate that the (potential) non-smoothness can be avoided by an equivalent reformulation as a linear second order cone program (SOCP). This allows for a numerical solution by efficient second order methods, such as interior point methods. We also show that, under weak assumptions, existence and uniqueness of the optimal solution can be guaranteed. We additionally prove that, under a further similarly weak condition, the fair value of the replicating portfolio equals the fair value of liabilities. Based on these insights, we argue that this unloved stepmother child within the replication problem family indeed represents an equally good formulation for practical purposes.

Risks 2017, 5(1), 13; doi: 10.3390/risks5010013. Published 2017-02-28. Authors: Jan Natolski, Ralf Werner.

Risks, Vol. 5, Pages 12: A Discussion of a Risk-Sharing Pension Plan
http://www.mdpi.com/2227-9091/5/1/12
I show that risk-sharing pension plans can reduce some of the shortcomings of defined benefit and defined contribution plans. The risk-sharing pension plan presented aims to improve the stability of benefits paid to generations of members, while allowing them to enjoy the expected advantages of a risky investment strategy. The plan does this by adjusting the investment strategy and benefits in response to a changing funding level, motivated by the with-profits contract proposed by Goecke (2013). He suggests a mean-reverting log reserve (or funding) ratio, where mean reversion occurs through adjustments to the investment strategy and declared bonuses. To measure the robustness of the plan to human factors, I introduce a measurement of disappointment, where disappointment is high when there are many consecutive years over which benefit payments are declining. Another measure introduced is devastation, where devastation occurs when benefit payments are zero. The motivation is that members of a pension plan who are easily disappointed or likely to get no benefit are more likely to exit the plan. I find that the risk-sharing plan offers more disappointment than a defined contribution plan, but it eliminates the devastation possible in a plan that tries to accumulate contributions at a steadily increasing rate. The proposed risk-sharing plan can give a narrower range of benefits than in a defined contribution plan. Thus, it can offer a stable benefit to members without the risk of running out of money.

Risks 2017, 5(1), 12; doi: 10.3390/risks5010012. Published 2017-02-14. Authors: Catherine Donnelly.

Risks, Vol. 5, Pages 10: Distinguishing Log-Concavity from Heavy Tails
http://www.mdpi.com/2227-9091/5/1/10
Well-behaved densities are typically log-convex with heavy tails and log-concave with light ones. We discuss a benchmark for distinguishing between the two cases, based on the observation that large values of a sum X1 + X2 occur as the result of a single big jump in the heavy-tailed case, whereas X1 and X2 are of equal order of magnitude in the light-tailed case. The method is based on the ratio |X1 − X2|/(X1 + X2), for which sharp asymptotic results are presented, as well as a visual tool for distinguishing between the two cases. The study supplements modern non-parametric density estimation methods, where log-concavity plays a main role, as well as heavy-tailed diagnostics such as the mean excess plot.

Risks 2017, 5(1), 10; doi: 10.3390/risks5010010. Published 2017-02-07. Authors: Søren Asmussen, Jaakko Lehtomaa.

Risks, Vol. 5, Pages 11: Optimal Reinsurance Policies under the VaR Risk Measure When the Interests of Both the Cedent and the Reinsurer Are Taken into Account
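The |X1 − X2|/(X1 + X2) diagnostic from the "Distinguishing Log-Concavity from Heavy Tails" abstract above can be illustrated numerically. This is a rough sketch, not the authors' procedure; the distributions, sample size and quantile cutoff are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

def big_jump_ratio(x1, x2, q=0.99):
    """Restrict attention to pairs with a large sum (top 1% of X1 + X2 by
    default) and compute R = |X1 - X2| / (X1 + X2).  R near 1 suggests the
    large sum comes from a single big jump (heavy tails); R well below 1
    suggests both terms contribute comparably (light tails)."""
    s = x1 + x2
    big = s > np.quantile(s, q)
    return np.abs(x1 - x2)[big] / s[big]

# Pareto(alpha = 1) shifted to [1, inf): very heavy-tailed
heavy = big_jump_ratio(rng.pareto(1.0, n) + 1.0, rng.pareto(1.0, n) + 1.0)
# Exponential(1): light-tailed
light = big_jump_ratio(rng.exponential(1.0, n), rng.exponential(1.0, n))
print(round(heavy.mean(), 3), round(light.mean(), 3))
```

For exponential pairs, conditionally on a large sum the two terms split roughly uniformly, so R concentrates around moderate values, while for the Pareto pairs R piles up near 1, reproducing the single-big-jump heuristic the abstract describes.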
http://www.mdpi.com/2227-9091/5/1/11
Optimal forms of reinsurance policies have been studied for a long time in the actuarial literature. Most existing results take the insurer’s point of view, aiming at maximizing the expected utility or minimizing the risk of the insurer. However, as pointed out by Borch (1969), a reinsurance arrangement that might be very attractive to one party (e.g., the insurer) can be quite unacceptable to the other party (e.g., the reinsurer). In this paper, we follow this point of view and study forms of Pareto-optimal reinsurance policies, whereby one party’s risk, measured by its value-at-risk (VaR), cannot be reduced without increasing the VaR of the counter-party in the reinsurance transaction. We show that the Pareto-optimal policies can be determined by minimizing linear combinations of the VaRs of the two parties in the reinsurance transaction. Consequently, we succeed in deriving user-friendly, closed-form, optimal reinsurance policies and their parameter values.

Risks 2017, 5(1), 11; doi: 10.3390/risks5010011. Published 2017-02-05. Authors: Wenjun Jiang, Jiandong Ren, Ričardas Zitikis.

Risks, Vol. 5, Pages 8: n-Dimensional Laplace Transforms of Occupation Times for Spectrally Negative Lévy Processes
http://www.mdpi.com/2227-9091/5/1/8
Using a Poisson approach, we find Laplace transforms of joint occupation times over n disjoint intervals for spectrally negative Lévy processes. They generalize previous results for dimension two.

Risks 2017, 5(1), 8; doi: 10.3390/risks5010008. Published 2017-01-29. Authors: Xuebing Kuang, Xiaowen Zhou.

Risks, Vol. 5, Pages 9: The Shifting Shape of Risk: Endogenous Market Failure for Insurance
http://www.mdpi.com/2227-9091/5/1/9
This article considers an economy where risk is insurable, but selection determines the pool of individuals who take it up. First, we demonstrate that the comparative statics of these economies do not necessarily depend on their marginal selection (adverse versus favorable), but rather on other characteristics. We then use repeated cross-sections of medical expenditures in the U.S. to understand the role of changes in the medical risk distribution on the fraction of Americans without medical insurance. We find that both the level and the shape of the distribution of risk are important in determining the equilibrium quantity of insurance. Symmetric changes in risk (e.g., shifts in the price of medical care) better explain the shifting insurance rate over time. Asymmetric changes (e.g., those associated with a shifting age distribution) are not as important.

Risks 2017, 5(1), 9; doi: 10.3390/risks5010009. Published 2017-01-27. Authors: Thomas Koch.

Risks, Vol. 5, Pages 6: Optimal Investment and Liability Ratio Policies in a Multidimensional Regime Switching Model
http://www.mdpi.com/2227-9091/5/1/6
We consider an insurer who faces an external jump-diffusion risk that is negatively correlated with the capital returns in a multidimensional regime switching model. The insurer selects investment and liability ratio policies continuously to maximize her/his expected utility of terminal wealth. We obtain explicit solutions of optimal policies for logarithmic and power utility functions. We study the impact of the insurer’s risk aversion, the negative correlation between the external risk and the capital returns, and the regime of the economy on the optimal policy. We find, among other things, that the regime of the economy and the negative correlation between the external risk and the capital returns have a dramatic effect on the optimal policy.

Risks 2017, 5(1), 6; Article; doi: 10.3390/risks5010006; published 2017-01-22. Authors: Bin Zou, Abel Cadenillas.

<![CDATA[Risks, Vol. 5, Pages 7: Change Point Estimation in Panel Data without Boundary Issue]]>
http://www.mdpi.com/2227-9091/5/1/7
The panel data of interest consist of a moderate number of panels, each containing a small number of observations. An estimator of common breaks in panel means without a boundary issue is proposed for this kind of scenario. In particular, the novel estimator is able to detect a common break point even when the change happens immediately after the first time point or just before the last observation period. Another advantage of the proposed change point estimator is that it results in the last observation in situations with no structural breaks. The consistency of the change point estimator in panel data is established. The results are illustrated through a simulation study. As a by-product of the developed estimation technique, a theoretical utilization for correlation structure estimation, hypothesis testing and bootstrapping in panel data is demonstrated. A practical application to non-life insurance is presented as well.

Risks 2017, 5(1), 7; Article; doi: 10.3390/risks5010007; published 2017-01-22. Authors: Barbora Peštová, Michal Pešta.

<![CDATA[Risks, Vol. 5, Pages 5: Minimum Protection in DC Funding Pension Plans and Margrabe Options]]>
http://www.mdpi.com/2227-9091/5/1/5
The regulation on Belgian occupational pension schemes has recently been changed. The new law allows employers to choose between two different types of guarantees to offer to their affiliates. In this paper, we address the question that arises naturally: which of the two guarantees is the best one? In order to answer that question, we set up a stochastic model and use financial pricing tools to compare the methods. More specifically, we link the pension liabilities to a portfolio of financial assets and compute the price of exchange options through the Margrabe formula.

Risks 2017, 5(1), 5; Article; doi: 10.3390/risks5010005; published 2017-01-18. Authors: Pierre Devolder, Sébastien de Valeriola.

<![CDATA[Risks, Vol. 5, Pages 4: Acknowledgement to Reviewers of Risks in 2016]]>
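The Margrabe exchange-option formula invoked in the abstract above has a compact closed form. A minimal sketch, assuming flat volatilities and correlation and no dividends (the function and parameter names are illustrative, not from the paper):

```python
from math import erf, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def margrabe(s1, s2, sigma1, sigma2, rho, t):
    """Value of the option to exchange asset 2 for asset 1 at maturity t:
    s1*N(d1) - s2*N(d2), driven by the volatility of the ratio s1/s2."""
    sigma = sqrt(sigma1 ** 2 + sigma2 ** 2 - 2.0 * rho * sigma1 * sigma2)
    d1 = (log(s1 / s2) + 0.5 * sigma ** 2 * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s1 * norm_cdf(d1) - s2 * norm_cdf(d2)
```

Note that no interest rate appears: taking asset 2 as numeraire removes discounting, which is what makes the formula attractive for comparing two guarantee designs.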
http://www.mdpi.com/2227-9091/5/1/4
The editors of Risks would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2016. [...]

Risks 2017, 5(1), 4; Editorial; doi: 10.3390/risks5010004; published 2017-01-12. Author: Risks Editorial Office.

<![CDATA[Risks, Vol. 5, Pages 3: The Effects of Largest Claim and Excess of Loss Reinsurance on a Company’s Ruin Time and Valuation]]>
http://www.mdpi.com/2227-9091/5/1/3
We compare two types of reinsurance: excess of loss (EOL) and largest claim reinsurance (LCR), each of which transfers the payment of part, or all, of one or more large claims from the primary insurance company (the cedant) to a reinsurer. The primary insurer’s point of view is documented in terms of assessment of risk and payment of reinsurance premium. A utility indifference rationale based on the expected future dividend stream is used to value the company with and without reinsurance. Assuming the classical compound Poisson risk model with choices of claim size distributions (classified as heavy, medium and light-tailed cases), simulations are used to illustrate the impact of the EOL and LCR treaties on the company’s ruin probability, ruin time and value as determined by the dividend discounting model. We find that LCR is at least as effective as EOL in averting ruin in comparable finite time horizon settings. In instances where the ruin probability for LCR is smaller than for EOL, the dividend discount model shows that the cedant is able to pay a larger portion of the dividend for LCR reinsurance than for EOL while still maintaining company value. Both methods reduce risk considerably as compared with no reinsurance, in a variety of situations, as measured by the standard deviation of the company value. A further interesting finding is that heaviness of tails alone is not necessarily the decisive factor in the possible ruin of a company; small and moderate-sized claims can also play a significant role in this.

Risks 2017, 5(1), 3; Article; doi: 10.3390/risks5010003; published 2017-01-06. Authors: Yuguang Fan, Philip Griffin, Ross Maller, Alexander Szimayer, Tiandong Wang.

<![CDATA[Risks, Vol. 5, Pages 2: On Comparison of Stochastic Reserving Methods with Bootstrapping]]>
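The simulation backbone of such a study is a finite-horizon ruin probability under the classical compound Poisson model. A minimal Monte Carlo sketch with an excess-of-loss retention (all names, the exponential claim law, and the net premium rate are illustrative assumptions; the paper’s LCR treaty and dividend valuation are not reproduced):

```python
import random

def ruin_prob(u, c_net, lam, claim_sampler, retention, horizon, n_paths, seed=1):
    """Finite-horizon ruin probability in the classical compound Poisson model
    when claims above `retention` are ceded under an excess-of-loss treaty.
    `c_net` is the premium rate net of the reinsurance cost (assumed given)."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        t, paid = 0.0, 0.0
        while True:
            t += rng.expovariate(lam)        # exponential inter-claim times
            if t > horizon:
                break                        # survived the horizon
            paid += min(claim_sampler(rng), retention)   # retained part only
            if u + c_net * t - paid < 0.0:
                ruined += 1
                break
    return ruined / n_paths
```

With, e.g., `claim_sampler = lambda r: r.expovariate(1.0)` for unit-mean exponential claims, lowering the retention typically lowers the estimated ruin probability, at the cost of the ceded premium absorbed in `c_net`.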
http://www.mdpi.com/2227-9091/5/1/2
We consider the well-known stochastic reserve estimation methods on the basis of generalized linear models, such as the (over-dispersed) Poisson model, the gamma model and the log-normal model. For the likely variability of the claims reserve, the bootstrap method is considered. In the bootstrapping framework, we discuss the choice of residuals, namely the Pearson residuals, the deviance residuals and the Anscombe residuals. In addition, several possible residual adjustments are discussed and compared in a case study. We carry out a practical implementation and comparison of methods using real-life insurance data to estimate reserves and their prediction errors. We propose proper scoring rules for model validation; the assessments are drawn from an extensive case study.

Risks 2017, 5(1), 2; Article; doi: 10.3390/risks5010002; published 2017-01-04. Authors: Liivika Tee, Meelis Käärik, Rauno Viin.

<![CDATA[Risks, Vol. 5, Pages 1: Optimal Retention Level for Infinite Time Horizons under MADM]]>
http://www.mdpi.com/2227-9091/5/1/1
In this paper, we approximate the aggregate claims process by using the translated gamma process under the classical risk model assumptions, and we investigate the ultimate ruin probability. We consider optimal reinsurance under the minimum ultimate ruin probability, as well as the maximum benefit criteria: released capital, expected profit and exponential-fractional-logarithmic utility from the insurer’s point of view. Numerical examples are presented to explain how the optimal initial surplus and retention level change according to the individual claim amounts, loading factors and weights of the criteria. In the decision making process, we use the Analytic Hierarchy Process (AHP) and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) as Multi-Attribute Decision Making (MADM) methods, and we compare our results considering different combinations of loading factors for both exponential and Pareto individual claims.

Risks 2016, 5(1), 1; Article; doi: 10.3390/risks5010001; published 2016-12-27. Authors: Başak Bulut Karageyik, Şule Şahin.

<![CDATA[Risks, Vol. 4, Pages 48: How Does Reinsurance Create Value to an Insurer? A Cost-Benefit Analysis Incorporating Default Risk]]>
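The translated gamma approximation mentioned above works by matching the first three moments of the aggregate claims S to a shifted gamma variable x0 + Gamma(alpha, theta). A minimal sketch of the moment matching (function name is illustrative):

```python
def translated_gamma(mean, var, skew):
    """Moment-matching translated gamma approximation S ~ x0 + Gamma(alpha, theta):
    alpha = 4/skew^2, theta = sd*skew/2, x0 = mean - 2*sd/skew, which reproduces
    the mean, variance and (positive) skewness of the aggregate claims."""
    if skew <= 0:
        raise ValueError("the method requires positive skewness")
    sd = var ** 0.5
    alpha = 4.0 / skew ** 2       # gamma shape
    theta = sd * skew / 2.0       # gamma scale
    x0 = mean - 2.0 * sd / skew   # translation
    return x0, alpha, theta
```

One can verify directly that x0 + alpha*theta returns the mean, alpha*theta^2 the variance, and 2/sqrt(alpha) the skewness.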
http://www.mdpi.com/2227-9091/4/4/48
Reinsurance is often empirically hailed as a value-adding risk management strategy which an insurer can utilize to achieve various business objectives. In the context of a distortion-risk-measure-based three-party model incorporating a policyholder, insurer and reinsurer, this article formulates explicitly the optimal insurance–reinsurance strategies from the perspective of the insurer. Our analytic solutions are complemented by intuitive but scientifically rigorous explanations of the marginal cost and benefit considerations underlying the optimal insurance–reinsurance decisions. These cost-benefit discussions not only cast light on the economic motivations for an insurer to engage in insurance with the policyholder and in reinsurance with the reinsurer, but also mathematically formalize the value created by reinsurance with respect to stabilizing the loss portfolio and enlarging the underwriting capacity of an insurer. Our model also allows for the reinsurer’s failure to deliver on its promised indemnity when the regulatory capital of the reinsurer is depleted by the reinsured loss. The reduction in the benefits of reinsurance to the insurer as a result of the reinsurer’s default is quantified, and its influence on the optimal insurance–reinsurance policies is analyzed.

Risks 2016, 4(4), 48; Article; doi: 10.3390/risks4040048; published 2016-12-16. Author: Ambrose Lo.

<![CDATA[Risks, Vol. 4, Pages 50: Optimal Reinsurance Under General Law-Invariant Convex Risk Measure and TVaR Premium Principle]]>
http://www.mdpi.com/2227-9091/4/4/50
In this paper, we study the optimal reinsurance problem where risks of the insurer are measured by general law-invariant risk measures and premiums are calculated under the TVaR premium principle, which extends work based on the expected premium principle. Our objective is to characterize the optimal reinsurance strategy which minimizes the insurer’s risk measure of its total loss. Our calculations show that the optimal reinsurance strategy is of the multi-layer form, i.e., f*(x) = x ∧ c* + (x − d*)+, with c* and d* being constants such that 0 ≤ c* ≤ d*.

Risks 2016, 4(4), 50; Article; doi: 10.3390/risks4040050; published 2016-12-16. Authors: Mi Chen, Wenyuan Wang, Ruixing Ming.

<![CDATA[Risks, Vol. 4, Pages 51: Bayesian Option Pricing Framework with Stochastic Volatility for FX Data]]>
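The multi-layer form f*(x) = x ∧ c* + (x − d*)+ stated in the abstract is easy to sketch: the reinsurer pays the loss up to c*, nothing in the layer (c*, d*], and everything above d* (the function name is illustrative):

```python
def indemnity(x, c, d):
    """Multi-layer ceded loss f(x) = min(x, c) + max(x - d, 0), 0 <= c <= d:
    full cover below c, a retained corridor (c, d], full cover above d."""
    if not 0.0 <= c <= d:
        raise ValueError("requires 0 <= c <= d")
    return min(x, c) + max(x - d, 0.0)
```

For example, with c = 2 and d = 8, a loss of 10 is indemnified 2 (lower layer) plus 2 (excess over 8), i.e., 4 in total.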
http://www.mdpi.com/2227-9091/4/4/51
The application of stochastic volatility (SV) models in the option pricing literature usually assumes that the market has sufficient option data to calibrate the model’s risk-neutral parameters. When option data are insufficient or unavailable, market practitioners must estimate the model from the historical returns of the underlying asset and then transform the resulting model into its risk-neutral equivalent. However, the likelihood function of an SV model can only be expressed as a high-dimensional integral, which makes the estimation a highly challenging task. The Bayesian approach has been the classical way to estimate SV models under the data-generating (physical) probability measure, but the transformation from the estimated physical dynamic into its risk-neutral counterpart has not been addressed. Inspired by the generalized autoregressive conditional heteroskedasticity (GARCH) option pricing approach by Duan in 1995, we propose an SV model that enables us to simultaneously and conveniently perform Bayesian inference and transformation into risk-neutral dynamics. Our model relaxes the normality assumption on innovations of both return and volatility processes, and our empirical study shows that the estimated option prices generate realistic implied volatility smile shapes. In addition, the volatility premium is almost flat across strike prices, so adding a few option data to the historical time series of the underlying asset can greatly improve the estimation of option prices.

Risks 2016, 4(4), 51; Article; doi: 10.3390/risks4040051; published 2016-12-16. Authors: Ying Wang, Sai Choy, Hoi Wong.

<![CDATA[Risks, Vol. 4, Pages 49: Compositions of Conditional Risk Measures and Solvency Capital]]>
http://www.mdpi.com/2227-9091/4/4/49
In this paper, we consider compositions of conditional risk measures in order to obtain time-consistent dynamic risk measures and determine the solvency capital of a life insurer selling pension liabilities or a pension fund with a single cash-flow at maturity. We first recall the notion of conditional, dynamic and time-consistent risk measures. We link the latter with its iterated property, which gives us a way to construct time-consistent dynamic risk measures from a backward iteration scheme with the composition of conditional risk measures. We then consider particular cases with the conditional version of the value at risk, tail value at risk and conditional expectation measures. We finally give an application of these measures with the determination of the solvency capital of a pension liability, which offers a fixed guaranteed rate without any intermediate cash-flow. We assume that the company is fully hedged against the mortality and underwriting risks.

Risks 2016, 4(4), 49; Article; doi: 10.3390/risks4040049; published 2016-12-16. Authors: Pierre Devolder, Adrien Lebègue.

<![CDATA[Risks, Vol. 4, Pages 47: Macroprudential Insurance Regulation: A Swiss Case Study]]>
http://www.mdpi.com/2227-9091/4/4/47
This article provides a case study that analyzes national macroprudential insurance regulation in Switzerland. We consider an insurance market that is based on data from the Swiss private insurance industry. We stress this market with several scenarios related to financial and insurance risks, and we analyze the resulting risk capitals of the insurance companies. This stress-test analysis provides insights into the vulnerability of the Swiss private insurance sector to different risks and shocks.

Risks 2016, 4(4), 47; Article; doi: 10.3390/risks4040047; published 2016-12-15. Authors: Philippe Deprez, Mario Wüthrich.

<![CDATA[Risks, Vol. 4, Pages 46: Deflation Risk and Implications for Life Insurers]]>
http://www.mdpi.com/2227-9091/4/4/46
Life insurers are exposed to deflation risk: falling prices could lead to insufficient investment returns, and inflation-indexed protections could make insurers vulnerable to deflation. In this spirit, this paper proposes a market-based methodology for measuring deflation risk based on a discrete framework: the latter accounts for the real interest rate, the inflation index level, its conditional variance, and the expected inflation rate. US inflation data are then used to estimate the model and show the importance of deflation risk. Specifically, the distribution of a fictitious life insurer’s future payments is investigated. We find that the proposed inflation model yields higher risk measures than the ones obtained using competing models, stressing the need for dynamic and market-consistent inflation modelling in the life insurance industry.

Risks 2016, 4(4), 46; Article; doi: 10.3390/risks4040046; published 2016-12-03. Author: Jean-François Bégin.

<![CDATA[Risks, Vol. 4, Pages 45: Predicting Human Mortality: Quantitative Evaluation of Four Stochastic Models]]>
http://www.mdpi.com/2227-9091/4/4/45
In this paper, we quantitatively compare the forecasts from four different mortality models. We consider one discrete-time model proposed by Lee and Carter (1992) and three continuous-time models: the Wills and Sherris (2011) model, the Feller process and the Ornstein-Uhlenbeck (OU) process. The first two models estimate the whole surface of mortality simultaneously, while in the latter two, each generation is modelled and calibrated separately. We calibrate the models to UK and Australian population data. We find that all the models show relatively similar absolute total error for a given dataset, except the Lee-Carter model, whose performance differs significantly. To evaluate the forecasting performance, we therefore look at two alternative measures: the relative error between the forecasted and the actual mortality rates and the percentage of actual mortality rates which fall within a prediction interval. In terms of the prediction intervals, the results are more divergent since each model implies a different structure for the variance of mortality rates. According to our experiments, the Wills and Sherris model produces superior results in terms of the prediction intervals. However, in terms of the mean absolute error, the OU and the Feller processes perform better. The forecasting performance of the Lee-Carter model is mostly dependent on the choice of the dataset.

Risks 2016, 4(4), 45; Article; doi: 10.3390/risks4040045; published 2016-12-02. Author: Anastasia Novokreshchenova.

<![CDATA[Risks, Vol. 4, Pages 44: Estimation of Star-Shaped Distributions]]>
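The Lee-Carter model compared above decomposes log mortality as ln m(x,t) = a_x + b_x k_t, fitted by a rank-one factorization of the age-centered surface. A pure-Python power-iteration sketch (Lee and Carter use SVD; this is an equivalent stand-in for the leading singular pair) with the usual identification constraints sum(b) = 1 and sum(k) = 0:

```python
def lee_carter_fit(log_m, iters=200):
    """Fit ln m[x][t] ~ a[x] + b[x] * k[t]: a[x] is the row mean, and (b, k)
    is the leading singular pair of the row-centered matrix, found by power
    iteration. Row-centering makes sum(k) = 0; b is rescaled so sum(b) = 1."""
    nx, nt = len(log_m), len(log_m[0])
    a = [sum(row) / nt for row in log_m]                       # age pattern
    z = [[log_m[i][j] - a[i] for j in range(nt)] for i in range(nx)]
    k = [float(j + 1) for j in range(nt)]                      # non-degenerate start
    for _ in range(iters):
        b = [sum(z[i][j] * k[j] for j in range(nt)) for i in range(nx)]
        norm = sum(v * v for v in b) ** 0.5
        b = [v / norm for v in b]
        k = [sum(z[i][j] * b[i] for i in range(nx)) for j in range(nt)]
    sb = sum(b)                                                # identification
    return a, [v / sb for v in b], [v * sb for v in k]
```

Forecasting then reduces to projecting the single time index k_t, typically as a random walk with drift.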
http://www.mdpi.com/2227-9091/4/4/44
Scatter plots of multivariate data sets motivate modeling of star-shaped distributions beyond elliptically contoured ones. We study properties of estimators for the density generator function, the star-generalized radius distribution and the density in a star-shaped distribution model. For the generator function and the star-generalized radius density, we consider a non-parametric kernel-type estimator. This estimator is combined with a parametric estimator for the contours, which are assumed to follow a parametric model. Therefore, the semiparametric procedure features the flexibility of nonparametric estimators and the simple estimation and interpretation of parametric estimators. Alternatively, we consider pure parametric estimators for the density. For the semiparametric density estimator, we prove rates of uniform, almost sure convergence which coincide with the corresponding rates of one-dimensional kernel density estimators when excluding the center of the distribution. We show that the standardized density estimator is asymptotically normally distributed. Moreover, the almost sure convergence rate of the estimated distribution function of the star-generalized radius is derived. A particular new two-dimensional distribution class is adapted here to agricultural and financial data sets.

Risks 2016, 4(4), 44; Article; doi: 10.3390/risks4040044; published 2016-11-30. Authors: Eckhard Liebscher, Wolf-Dieter Richter.

<![CDATA[Risks, Vol. 4, Pages 43: Parameter Estimation in Stable Law]]>
http://www.mdpi.com/2227-9091/4/4/43
For the general stable distribution, cumulant-function-based parameter estimators are proposed. Extensive simulation experiments are carried out to validate the effectiveness of the estimates over the entire parameter space. An application to a non-life insurance loss distribution is presented.

Risks 2016, 4(4), 43; Article; doi: 10.3390/risks4040043; published 2016-11-25. Author: Annika Krutto.

<![CDATA[Risks, Vol. 4, Pages 42: Optimal Premium as a Function of the Deductible: Customer Analysis and Portfolio Characteristics]]>
http://www.mdpi.com/2227-9091/4/4/42
An insurance company offers an insurance contract (p, K), consisting of a premium p and a deductible K. In this paper, we consider the problem of choosing the premium optimally as a function of the deductible. The insurance company is facing a market of N customers, each characterized by their personal claim frequency, α, and risk aversion, β. When a customer is offered an insurance contract, she/he will, based on these characteristics, choose whether or not to insure. The decision process of the customer is analyzed in detail. Since the customer characteristics are unknown to the company, it models them as i.i.d. random variables: A1, …, AN for the claim frequencies and B1, …, BN for the risk aversions. Depending on the distributions of Ai and Bi, expressions for the portfolio size n(p; K) ∈ [0, N] and average claim frequency α(p; K) in the portfolio are obtained. Knowing these, the company can choose the premium optimally, mainly by minimizing the ruin probability.

Risks 2016, 4(4), 42; Article; doi: 10.3390/risks4040042; published 2016-11-09. Author: Julie Thøgersen.

<![CDATA[Risks, Vol. 4, Pages 41: Incorporation of Stochastic Policyholder Behavior in Analytical Pricing of GMABs and GMDBs]]>
http://www.mdpi.com/2227-9091/4/4/41
Variable annuities represent certain unit-linked life insurance products offering different types of protection commonly referred to as guaranteed minimum benefits (GMXBs). They are designed for the increasing demand of the customers for private pension provision. In this paper, we analytically price variable annuities with guaranteed minimum repayments at maturity and in case of the insured’s death. If the contract is prematurely surrendered, the policyholder is entitled to the current value of the fund account reduced by the prevailing surrender fee. The financial market and the mortality model are affine linear. For the surrender model, a Cox process is deployed whose intensity is given by a deterministic function (s-curve) with stochastic inputs from the financial market. So, the policyholders’ surrender behavior depends on the performance of the financial market and is stochastic. The presented pricing scheme incorporates the stochastic surrender behavior of the policyholders and is only based on suitable closed-form approximations.

Risks 2016, 4(4), 41; Article; doi: 10.3390/risks4040041; published 2016-11-08. Authors: Marcos Escobar, Mikhail Krayzler, Franz Ramsauer, David Saunders, Rudi Zagst.

<![CDATA[Risks, Vol. 4, Pages 40: A Note on Upper Tail Behavior of Liouville Copulas]]>
http://www.mdpi.com/2227-9091/4/4/40
The family of Liouville copulas is defined as the survival copulas of multivariate Liouville distributions, and it covers the Archimedean copulas constructed by Williamson’s d-transform. Liouville copulas provide a very wide range of dependence, ranging from positive to negative dependence in the upper tails, and they can be useful in modeling tail risks. In this article, we study the upper tail behavior of Liouville copulas through their upper tail orders. Tail orders of a more general scale mixture model that covers Liouville distributions are first derived, and then tail order functions and tail order density functions of Liouville copulas are derived. Concrete examples are given after the main results.

Risks 2016, 4(4), 40; Article; doi: 10.3390/risks4040040; published 2016-11-08. Author: Lei Hua.

<![CDATA[Risks, Vol. 4, Pages 39: Frailty and Risk Classification for Life Annuity Portfolios]]>
http://www.mdpi.com/2227-9091/4/4/39
Life annuities are attractive mainly for healthy people. In order to expand their business, in recent years, some insurers have started offering higher annuity rates to those whose health conditions are critical. Life annuity portfolios are then supposed to become larger and more heterogeneous. With respect to the insurer’s risk profile, there is a trade-off between portfolio size and heterogeneity that we intend to investigate. In doing so, we address a second and possibly more important issue. In actuarial practice, the different mortality levels of the several risk classes are obtained by applying adjustment coefficients to population mortality rates. Such a choice is not supported by a rigorous model. On the other hand, the heterogeneity of a population with respect to mortality can formally be described with a frailty model. We suggest adopting a frailty model for risk classification. We identify risk groups (or classes) within the population by assigning specific ranges of values to the frailty within each group. The different levels of mortality of the various groups are based on the conditional probability distributions of the frailty. Annuity rates for each class then can be easily justified, and a comprehensive investigation of the insurer’s liabilities can be performed.

Risks 2016, 4(4), 39; Article; doi: 10.3390/risks4040039; published 2016-10-26. Authors: Annamaria Olivieri, Ermanno Pitacco.

<![CDATA[Risks, Vol. 4, Pages 38: A Note on Health Insurance under Ex Post Moral Hazard]]>
http://www.mdpi.com/2227-9091/4/4/38
In the linear coinsurance problem, examined first by Mossin (1968), a higher absolute risk aversion with respect to wealth in the sense of Arrow–Pratt implies a higher optimal coinsurance rate. We show that this property does not hold for health insurance under ex post moral hazard; i.e., when illness severity cannot be observed by insurers, and policyholders decide on their health expenditures. The optimal coinsurance rate trades off a risk-sharing effect and an incentive effect, both related to risk aversion.

Risks 2016, 4(4), 38; Article; doi: 10.3390/risks4040038; published 2016-10-25. Author: Pierre Picard.

<![CDATA[Risks, Vol. 4, Pages 37: A Note on Realistic Dividends in Actuarial Surplus Models]]>
http://www.mdpi.com/2227-9091/4/4/37
Because of the profitable nature of risk businesses in the long term, de Finetti suggested that surplus models should allow for cash leakages, as otherwise the surplus would unrealistically grow (on average) to infinity. These leakages were interpreted as ‘dividends’. Subsequent literature on actuarial surplus models with dividend distribution has mainly focussed on dividend strategies that either maximise the expected present value of dividends until ruin or lead to a probability of ruin that is less than one (see Albrecher and Thonhauser, Avanzi for reviews). An increasing number of papers are directly interested in modelling dividend policies that are consistent with actual practice in financial markets. In this short note, we review the corporate finance literature with the specific aim of fleshing out properties that dividend strategies should ideally satisfy, if one wants to model behaviour that is consistent with practice.

Risks 2016, 4(4), 37; Article; doi: 10.3390/risks4040037; published 2016-10-20. Authors: Benjamin Avanzi, Vincent Tu, Bernard Wong.

<![CDATA[Risks, Vol. 4, Pages 36: Nested MC-Based Risk Measurement of Complex Portfolios: Acceleration and Energy Efficiency]]>
http://www.mdpi.com/2227-9091/4/4/36
Risk analysis and management currently have a strong presence in financial institutions, where high performance and energy efficiency are key requirements for acceleration systems, especially when it comes to intraday analysis. In this regard, we approach the estimation of the widely-employed portfolio risk metrics value-at-risk (VaR) and conditional value-at-risk (cVaR) by means of nested Monte Carlo (MC) simulations. We do so by combining theory and software/hardware implementation. This allows us for the first time to investigate their performance on heterogeneous compute systems and across different compute platforms, namely central processing unit (CPU), many integrated core (MIC) architecture XeonPhi, graphics processing unit (GPU), and field-programmable gate array (FPGA). To this end, the OpenCL framework is employed to generate portable code, and the size of the simulations is scaled in order to evaluate variations in performance. Furthermore, we assess different parallelization schemes, and the targeted platforms are evaluated and compared in terms of runtime and energy efficiency. Our implementation also allowed us to derive a new algorithmic optimization regarding the generation of the required random number sequences. Moreover, we provide specific guidelines on how to properly handle these sequences in portable code, and on how to efficiently implement nested MC-based VaR and cVaR simulations on heterogeneous compute systems.

Risks 2016, 4(4), 36; Article; doi: 10.3390/risks4040036; published 2016-10-18. Authors: Sascha Desmettre, Ralf Korn, Javier Varela, Norbert Wehn.

<![CDATA[Risks, Vol. 4, Pages 35: A Note on the Impact of Parameter Uncertainty on Barrier Derivatives]]>
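The nested MC structure referred to above has two loops: an outer loop drawing risk-factor scenarios and an inner loop revaluing the portfolio conditional on each scenario, with VaR and cVaR read off the empirical tail of the outer losses. A toy pure-Python sketch (the normal risk-factor and revaluation models are placeholders, not the paper’s portfolios, and the hardware acceleration is out of scope):

```python
import random

def nested_var_cvar(alpha, n_outer, n_inner, seed=7):
    """Toy nested MC: for each outer risk-factor scenario, the inner loop
    estimates the conditional portfolio loss; VaR is the empirical
    alpha-quantile of the outer losses and cVaR the mean beyond it."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n_outer):
        f = rng.gauss(0.0, 1.0)                        # outer scenario
        inner = [f + 0.5 * rng.gauss(0.0, 1.0) for _ in range(n_inner)]
        losses.append(sum(inner) / n_inner)            # inner revaluation
    losses.sort()
    idx = int(alpha * n_outer)
    var = losses[idx]
    cvar = sum(losses[idx:]) / (n_outer - idx)
    return var, cvar
```

The inner loop dominates the runtime (n_outer * n_inner revaluations), which is exactly why the paper studies accelerating it on GPUs, MICs and FPGAs.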
http://www.mdpi.com/2227-9091/4/4/35
This paper presents a comprehensive extension of pricing two-dimensional derivatives depending on two barrier constraints. As a generalization, we assume randomness in the covariance matrix. We analyse common barrier derivatives, enabling us to study parameter uncertainty and the risk related to the estimation procedure (estimation risk). In particular, we use the distribution of empirical parameters from IBM and EURO STOXX50. The evidence suggests that estimation risk should not be neglected in the context of multidimensional barrier derivatives, as it could cause price differences of up to 70%.

Risks 2016, 4(4), 35; Article; doi: 10.3390/risks4040035; published 2016-09-29. Authors: Marcos Escobar, Sven Panz.

<![CDATA[Risks, Vol. 4, Pages 34: Sharp Convex Bounds on the Aggregate Sums–An Alternative Proof]]>
http://www.mdpi.com/2227-9091/4/4/34
It is well known that a random vector with given marginals is comonotonic if and only if it has the largest convex sum, and that a random vector with given marginals (under an additional condition) is mutually exclusive if and only if it has the minimal convex sum. This paper provides an alternative proof of these two results using the theories of distortion risk measure and expected utility.

Risks 2016, 4(4), 34; Article; doi: 10.3390/risks4040034; published 2016-09-29. Authors: Chuancun Yin, Dan Zhu.

<![CDATA[Risks, Vol. 4, Pages 33: Multivariate TVaR-Based Risk Decomposition for Vector-Valued Portfolios]]>
http://www.mdpi.com/2227-9091/4/4/33
In order to protect stakeholders of insurance companies and financial institutions against adverse outcomes of risky businesses, regulators and senior management use capital allocation techniques. For enterprise-wide risk management, it has become important to calculate the contribution of each risk within a portfolio. For that purpose, bivariate lower and upper orthant tail value-at-risk can be used for capital allocation. In this paper, we present multivariate value-at-risk and tail-value-at-risk for d ≥ 2, and we focus on three different methods to calculate optimal values for the contribution of each risk within the sums of random vectors to the overall portfolio, which could particularly apply to insurance and financial portfolios.

Risks 2016, 4(4), 33; Article; doi: 10.3390/risks4040033; published 2016-09-23. Authors: Mélina Mailhot, Mhamed Mesfioui.

<![CDATA[Risks, Vol. 4, Pages 32: The Wasserstein Metric and Robustness in Risk Management]]>
http://www.mdpi.com/2227-9091/4/3/32
In the aftermath of the financial crisis, it was realized that the mathematical models used for the valuation of financial instruments and the quantification of risk inherent in portfolios consisting of these financial instruments exhibit a substantial model risk. Consequently, regulators and other stakeholders have started to require that the internal models used by financial institutions are robust. We present an approach to consistently incorporate the robustness requirements into the quantitative risk management process of a financial institution, with a special focus on insurance. We advocate the Wasserstein metric as the canonical metric for approximations in robust risk management and present supporting arguments. Representing risk measures as statistical functionals, we relate risk measures with the concept of robustness and hence continuity with respect to the Wasserstein metric. This allows us to use results from robust statistics concerning continuity and differentiability of functionals. Finally, we illustrate our approach via practical applications.

Risks 2016, 4(3), 32; Article; doi: 10.3390/risks4030032; published 2016-08-31. Authors: Rüdiger Kiesel, Robin Rühlicke, Gerhard Stahl, Jinsong Zheng.

<![CDATA[Risks, Vol. 4, Pages 31: Choosing Markovian Credit Migration Matrices by Nonlinear Optimization]]>
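On the real line, the Wasserstein distance advocated above has a closed form for empirical distributions: the optimal coupling simply sorts both samples. A minimal sketch for two equally sized samples (the general quantile-function formula is not reproduced):

```python
def wasserstein_1d(xs, ys):
    """Empirical 1-Wasserstein distance between two samples of equal size n:
    W1 = (1/n) * sum |x_(i) - y_(i)| over the order statistics, since the
    optimal transport plan on the line is monotone."""
    if len(xs) != len(ys):
        raise ValueError("samples must have equal size")
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)
```

This makes it cheap to check, e.g., how far an internal-model loss distribution moves under a stressed recalibration, which is the kind of continuity-of-functionals argument the paper builds on.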
http://www.mdpi.com/2227-9091/4/3/31
Transition matrices, containing credit risk information in the form of ratings based on discrete observations, are published annually by rating agencies. A substantial issue arises, as for higher rating classes practically no defaults are observed, yielding default probabilities of zero. This does not always reflect reality. To circumvent this shortcoming, estimation techniques in continuous-time can be applied. However, raw default data may not be available at all or not in the desired granularity, leaving the practitioner to rely on given one-year transition matrices. Then, it becomes necessary to transform the one-year transition matrix to a generator matrix. This is known as the embedding problem and can be formulated as a nonlinear optimization problem, minimizing the distance between the exponential of a potential generator matrix and the annual transition matrix. So far, in credit risk-related literature, solving this problem directly has been avoided, but approximations have been preferred instead. In this paper, we show that this problem can be solved numerically with sufficient accuracy, thus rendering approximations unnecessary. Our direct approach via nonlinear optimization allows one to consider further credit risk-relevant constraints. We demonstrate that it is thus possible to choose a proper generator matrix with additional structural properties.

Risks 2016, 4(3), 31; Article; doi: 10.3390/risks4030031; published 2016-08-30. Authors: Maximilian Hughes, Ralf Werner.

<![CDATA[Risks, Vol. 4, Pages 30: On the Capital Allocation Problem for a New Coherent Risk Measure in Collective Risk Theory]]>
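A common starting point for the embedding problem is the matrix logarithm of the annual matrix followed by a "diagonal adjustment" into a valid generator; this is one of the approximations the paper’s direct optimization improves upon. A pure-Python sketch using the log series, which converges when P is close to the identity (as annual rating matrices typically are):

```python
def mat_mul(a, b):
    """Plain dense matrix product for small square matrices."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def approx_generator(p, terms=50):
    """Approximate generator: log(P) = sum_{k>=1} (-1)^(k+1) (P - I)^k / k,
    then clip negative off-diagonal rates to zero and reset each diagonal to
    minus the row sum, so exp(G) only approximates P (not the exact optimum)."""
    n = len(p)
    e = [[p[i][j] - (1.0 if i == j else 0.0) for j in range(n)] for i in range(n)]
    g = [[0.0] * n for _ in range(n)]
    t, sign = [row[:] for row in e], 1.0          # t holds (P - I)^k
    for k in range(1, terms + 1):
        for i in range(n):
            for j in range(n):
                g[i][j] += sign * t[i][j] / k
        t = mat_mul(t, e)
        sign = -sign
    for i in range(n):                            # regularize into a generator
        for j in range(n):
            if i != j and g[i][j] < 0.0:
                g[i][j] = 0.0
        g[i][i] = -sum(g[i][j] for j in range(n) if j != i)
    return g
```

The adjustment step is exactly where accuracy is lost, which motivates minimizing the distance between exp(G) and P directly under generator constraints instead.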
http://www.mdpi.com/2227-9091/4/3/30
In this paper we introduce a new coherent cumulative risk measure on a subclass of the space of càdlàg processes. This new coherent risk measure turns out to be tractable enough within a class of models where the aggregate claims process is driven by a spectrally positive Lévy process. We focus our motivation and discussion on the problem of capital allocation. Indeed, this risk measure is well-suited to address the problem of capital allocation in an insurance context. We show that the capital allocation problem for this risk measure has a unique solution determined by the Euler allocation method. Some examples and connections with existing results as well as practical implications are also discussed.Risks2016-08-1643Article10.3390/risks4030030302227-90912016-08-16doi: 10.3390/risks4030030Hirbod AssaManuel MoralesHassan Omidi Firouzi<![CDATA[Risks, Vol. 4, Pages 29: Optimal Insurance with Heterogeneous Beliefs and Disagreement about Zero-Probability Events]]>
http://www.mdpi.com/2227-9091/4/3/29
In problems of optimal insurance design, Arrow’s classical result on the optimality of the deductible indemnity schedule holds in a situation where the insurer is a risk-neutral Expected-Utility (EU) maximizer, the insured is a risk-averse EU-maximizer, and the two parties share the same probabilistic beliefs about the realizations of the underlying insurable loss. Recently, Ghossoub re-examined Arrow’s problem in a setting where the two parties have different subjective beliefs about the realizations of the insurable random loss, and he showed that if these beliefs satisfy a certain compatibility condition that is weaker than the Monotone Likelihood Ratio (MLR) condition, then optimal indemnity schedules exist and are nondecreasing in the loss. However, Ghossoub only gave a characterization of these optimal indemnity schedules in the special case of an MLR. In this paper, we consider the general case, allowing for disagreement about zero-probability events. We fully characterize the class of all optimal indemnity schedules that are nondecreasing in the loss, in terms of their distribution under the insured’s probability measure, and we obtain Arrow’s classical result, as well as one of the results of Ghossoub as corollaries. Finally, we formalize Marshall’s argument that, in a setting of belief heterogeneity, an optimal indemnity schedule may take “any” shape.Risks2016-08-0543Article10.3390/risks4030029292227-90912016-08-05doi: 10.3390/risks4030029Mario Ghossoub<![CDATA[Risks, Vol. 4, Pages 28: Using Climate and Weather Data to Support Regional Vulnerability Screening Assessments of Transportation Infrastructure]]>
http://www.mdpi.com/2227-9091/4/3/28
Extreme weather and climate change can have a significant impact on all types of infrastructure and assets, regardless of location, with the potential for human casualties, physical damage to assets, disruption of operations, economic and community distress, and environmental degradation. This paper describes a methodology for using extreme weather and climate data to identify climate-related risks and to quantify the potential impact of extreme weather events on certain types of transportation infrastructure as part of a vulnerability screening assessment. This screening assessment can be especially useful when a large number of assets or large geographical areas are being studied, with the results enabling planners and asset managers to undertake a more detailed assessment of vulnerability on a more targeted number of assets or locations. The methodology combines climate, weather, and impact data to identify vulnerabilities to a range of weather and climate related risks over a multi-decadal planning period. The paper applies the methodology to perform an extreme weather and climate change vulnerability screening assessment on transportation infrastructure assets for the State of Tennessee. This paper represents the results of one of the first efforts at spatial vulnerability assessments of transportation infrastructure and provides important insights for any organization considering the impact of climate and weather events on transportation or other critical infrastructure systems.Risks2016-08-0343Article10.3390/risks4030028282227-90912016-08-03doi: 10.3390/risks4030028Leah DundonKatherine NelsonJaney CampMark AbkowitzAlan Jones<![CDATA[Risks, Vol. 4, Pages 25: Understanding Reporting Delay in General Insurance]]>
http://www.mdpi.com/2227-9091/4/3/25
The aim of this paper is to understand and to model claims arrival and reporting delay in general insurance. We calibrate two real individual claims data sets to the statistical model of Jewell and Norberg, one covering property insurance and the other casualty insurance. For our analysis we slightly relax the model assumptions of Jewell, allowing for non-stationarity, so that the model is able to cope with trends and with seasonal patterns. The performance of our individual claims data prediction is compared to the prediction based on aggregate data using the Poisson chain-ladder method.Risks2016-07-0843Article10.3390/risks4030025252227-90912016-07-08doi: 10.3390/risks4030025Richard VerrallMario Wüthrich<![CDATA[Risks, Vol. 4, Pages 27: Lead–Lag Relationship Using a Stop-and-Reverse-MinMax Process]]>
http://www.mdpi.com/2227-9091/4/3/27
Intermarket analysis, in particular the lead–lag relationship, plays an important role within financial markets. This paper therefore develops a mathematical approach for finding interrelations between the price developments of two different financial instruments. Computing the differences of the relative positions of relevant local extrema of two charts, i.e., the local phase shifts of these price developments, gives us an empirical distribution on the unit circle. With the aid of directional statistics, such angular distributions are studied for many pairs of markets. It is shown that there are several very strongly correlated financial instruments in the fields of foreign exchange, commodities and indices. In some cases, one of the two markets is significantly ahead with respect to the relevant local extrema, i.e., there is a phase shift unequal to zero between them.Risks2016-07-0743Article10.3390/risks4030027272227-90912016-07-07doi: 10.3390/risks4030027Stanislaus Maier-PaapeAndreas Platen<![CDATA[Risks, Vol. 4, Pages 26: Optimal Reinsurance with Heterogeneous Reference Probabilities]]>
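The directional-statistics ingredient of the abstract above can be illustrated in isolation. The sketch below computes the mean direction and mean resultant length of a sample of phase-shift angles; the data are simulated, and the paper's actual estimators and tests may differ.

```python
import numpy as np

def mean_direction(angles):
    """Mean direction (radians) and mean resultant length R of angles on
    the unit circle; R close to 1 indicates strong concentration."""
    m = np.exp(1j * np.asarray(angles, dtype=float)).mean()
    return float(np.angle(m)), float(np.abs(m))

# Simulated phase shifts clustered around 0.3 rad, mimicking one market
# leading another by a roughly constant local-extrema offset.
rng = np.random.default_rng(42)
shifts = rng.normal(loc=0.3, scale=0.1, size=10_000)
direction, R = mean_direction(shifts)
```

A mean direction significantly different from zero, together with a large `R`, is the kind of evidence the abstract describes for one market being "ahead" of the other.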
http://www.mdpi.com/2227-9091/4/3/26
This paper studies the problem of optimal reinsurance contract design. We let the insurer use dual utility, and the premium is an extended Wang’s premium principle. The novel contribution is that we allow for heterogeneity in the beliefs regarding the underlying probability distribution. We characterize layer-reinsurance as an optimal reinsurance contract. Moreover, we characterize layer-reinsurance as optimal contracts when the insurer faces costs of holding regulatory capital. We illustrate this in cases where both firms use the Value-at-Risk or the conditional Value-at-Risk.Risks2016-07-0743Article10.3390/risks4030026262227-90912016-07-07doi: 10.3390/risks4030026Tim Boonen<![CDATA[Risks, Vol. 4, Pages 23: Risk Minimization for Insurance Products via F-Doubly Stochastic Markov Chains]]>
http://www.mdpi.com/2227-9091/4/3/23
We study risk-minimization for a large class of insurance contracts. Given that the individual progress in time of visiting an insurance policy’s states follows an F-doubly stochastic Markov chain, we describe different state-dependent types of insurance benefits. These cover single payments at maturity, annuity-type payments and payments at the time of a transition. Based on the intensity of the F-doubly stochastic Markov chain, we provide the Galtchouk-Kunita-Watanabe decomposition for a general insurance contract and specify risk-minimizing strategies in a Brownian financial market setting. The results are further illustrated explicitly within an affine structure for the intensity.Risks2016-07-0743Article10.3390/risks4030023232227-90912016-07-07doi: 10.3390/risks4030023Francesca BiaginiAndreas GrollJan Widenmann<![CDATA[Risks, Vol. 4, Pages 24: Superforecasting: The Art and Science of Prediction. By Philip Tetlock and Dan Gardner]]>
http://www.mdpi.com/2227-9091/4/3/24
Let me say from the outset that this is an excellent book to read. It is not only informative, as it should be for a book on forecasting, but it is highly entertaining.[...]Risks2016-07-0543Book Review10.3390/risks4030024242227-90912016-07-05doi: 10.3390/risks4030024Daniel Buncic<![CDATA[Risks, Vol. 4, Pages 22: A Unified Pricing of Variable Annuity Guarantees under the Optimal Stochastic Control Framework]]>
http://www.mdpi.com/2227-9091/4/3/22
In this paper, we review pricing of the variable annuity living and death guarantees offered to retail investors in many countries. Investors purchase these products to take advantage of market growth and protect savings. We present pricing of these products via an optimal stochastic control framework and review the existing numerical methods. We also discuss pricing under the complete/incomplete financial market models, stochastic mortality and optimal/sub-optimal policyholder behavior, and in the presence of taxes. For numerical valuation of these contracts in the case of a simple risky asset process, we develop a direct integration method based on Gauss-Hermite quadrature with a one-dimensional cubic spline for calculation of the expected contract value, and a bi-cubic spline interpolation for applying the jump conditions across the contract cashflow event times. This method is easier to implement and faster when compared to partial differential equation methods if the transition density (or its moments) of the risky asset underlying the contract is known in closed form between the event times. We present accurate numerical results for pricing of a Guaranteed Minimum Accumulation Benefit (GMAB) guarantee available on the market that can serve as a numerical benchmark for practitioners and researchers developing pricing of variable annuity guarantees to assess the accuracy of their numerical implementation.Risks2016-07-0543Article10.3390/risks4030022222227-90912016-07-05doi: 10.3390/risks4030022Pavel ShevchenkoXiaolin Luo<![CDATA[Risks, Vol. 4, Pages 21: The Myth of Methuselah and the Uncertainty of Death: The Mortality Fan Charts]]>
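The Gauss-Hermite step can be shown on its own. The sketch below prices a single-period GMAB-style payoff max(S_T, G) under geometric Brownian motion and checks it against the closed form; all parameter values are invented for the example, and the paper's full method additionally uses the cubic-spline and jump-condition machinery omitted here.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.stats import norm

# Illustrative parameters (invented for this sketch, not from the paper)
S0, r, sigma, T, G = 100.0, 0.03, 0.2, 1.0, 100.0

x, w = hermgauss(128)                    # nodes/weights for weight exp(-x^2)
z = np.sqrt(2.0) * x                     # map to standard-normal quantiles
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)

# Smooth integrand: quadrature recovers the forward price almost exactly
forward = np.sum(w * ST) / np.sqrt(np.pi)

# Kinked GMAB-style payoff max(S_T, G): direct quadrature is only roughly
# accurate across the kink, which is why the paper treats cashflow events
# with splines and explicit jump conditions.
price = np.exp(-r * T) * np.sum(w * np.maximum(ST, G)) / np.sqrt(np.pi)

# Closed-form benchmark: e^{-rT} E[max(S_T, G)] = S0 N(d1) + G e^{-rT} N(-d2)
d1 = (np.log(S0 / G) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
closed = S0 * norm.cdf(d1) + G * np.exp(-r * T) * norm.cdf(-d2)
```

The contrast between the near-exact `forward` and the cruder `price` illustrates the design choice in the abstract: quadrature is extremely accurate between event times, while the kinks introduced at event times need separate handling.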
http://www.mdpi.com/2227-9091/4/3/21
This paper uses mortality fan charts to illustrate prospective future male mortality. These fan charts show both the most likely path of male mortality and the bands of uncertainty surrounding that path. The fan charts are based on a model of male mortality that is known to provide a good fit to UK mortality data. The fan charts suggest that there are clear limits to longevity; that future mortality rates are very uncertain and tend to become more uncertain the further ahead the forecast horizon; and that forecasts of future mortality uncertainty must also take account of uncertainty in the parameters of the underlying mortality model.Risks2016-07-0443Article10.3390/risks4030021212227-90912016-07-04doi: 10.3390/risks4030021Kevin DowdDavid BlakeAndrew Cairns<![CDATA[Risks, Vol. 4, Pages 20: Survey on Log-Normally Distributed Market-Technical Trend Data]]>
http://www.mdpi.com/2227-9091/4/3/20
In this survey, a short introduction of the recent discovery of log-normally-distributed market-technical trend data will be given. The results of the statistical evaluation of typical market-technical trend variables will be presented. It will be shown that the log-normal assumption fits better to empirical trend data than to daily returns of stock prices. This enables one to mathematically evaluate trading systems depending on such variables. In this manner, a basic approach to an anti-cyclic trading system will be given as an example.Risks2016-07-0443Article10.3390/risks4030020202227-90912016-07-04doi: 10.3390/risks4030020René BrennerStanislaus Maier-Paape<![CDATA[Risks, Vol. 4, Pages 19: An Optimal Turkish Private Pension Plan with a Guarantee Feature]]>
http://www.mdpi.com/2227-9091/4/3/19
The Turkish Private Pension System is an investment system which aims to generate income for future consumption. It is a voluntary system, and the contributions are held in individual portfolios. Therefore, management of the funds is an important issue for both the participants and the insurance company. In this study, we propose an optimal private pension plan with a guarantee feature that is based on Constant Proportion Portfolio Insurance (CPPI). We derive a closed-form formula for the optimal strategy with the help of dynamic programming. Moreover, our model is evaluated with numerical examples, and we compare its performance by implementing a sensitivity analysis.Risks2016-06-2743Article10.3390/risks4030019192227-90912016-06-27doi: 10.3390/risks4030019Ayşegül İşcanoğlu-Çekiç<![CDATA[Risks, Vol. 4, Pages 18: Consistent Re-Calibration of the Discrete-Time Multifactor Vasiček Model]]>
http://www.mdpi.com/2227-9091/4/3/18
The discrete-time multifactor Vasiček model is a tractable Gaussian spot rate model. Typically, two- or three-factor versions allow one to capture the dependence structure between yields with different times to maturity in an appropriate way. In practice, re-calibration of the model to the prevailing market conditions leads to model parameters that change over time. Therefore, the model parameters should be understood as being time-dependent or even stochastic. Following the consistent re-calibration (CRC) approach, we construct models as concatenations of yield curve increments of Hull–White extended multifactor Vasiček models with different parameters. The CRC approach provides attractive tractable models that preserve the no-arbitrage premise. As a numerical example, we fit Swiss interest rates using CRC multifactor Vasiček models.Risks2016-06-2343Article10.3390/risks4030018182227-90912016-06-23doi: 10.3390/risks4030018Philipp HarmsDavid StefanovitsJosef TeichmannMario Wüthrich<![CDATA[Risks, Vol. 4, Pages 17: Ruin Probabilities with Dependence on the Number of Claims within a Fixed Time Window]]>
http://www.mdpi.com/2227-9091/4/2/17
We analyse the ruin probabilities for a renewal insurance risk process with inter-arrival times depending on the claims that arrive within a fixed (past) time window. This dependence could be explained through a regenerative structure. The main inspiration of the model comes from the bonus-malus (BM) feature of pricing car insurance. We first discuss the asymptotic results of ruin probabilities for different regimes of claim distributions. For numerical results, we recognise an embedded Markov additive process, and via an appropriate change of measure, ruin probabilities can be computed in closed form. Additionally, we employ importance sampling simulations to derive ruin probabilities, which further permit an in-depth analysis of a few concrete cases.Risks2016-06-1542Article10.3390/risks4020017172227-90912016-06-15doi: 10.3390/risks4020017Corina ConstantinescuSuhang DaiWeihong NiZbigniew Palmowski<![CDATA[Risks, Vol. 4, Pages 16: Spouses’ Dependence across Generations and Pricing Impact on Reversionary Annuities]]>
http://www.mdpi.com/2227-9091/4/2/16
This paper studies the dependence between coupled lives, i.e., the spouses’ dependence, across different generations, and its effects on prices of reversionary annuities in the presence of longevity risk. Longevity risk is represented via a stochastic mortality intensity. We find that a generation-based model is important, since spouses’ dependence decreases when passing from older generations to younger generations. The independence assumption produces quantifiable mispricing of reversionary annuities, with different effects on different generations. The research is conducted using a well-known dataset of double life contracts.Risks2016-05-2542Article10.3390/risks4020016162227-90912016-05-25doi: 10.3390/risks4020016Elisa LucianoJaap SpreeuwElena Vigna<![CDATA[Risks, Vol. 4, Pages 15: Improving Convergence of Binomial Schemes and the Edgeworth Expansion]]>
http://www.mdpi.com/2227-9091/4/2/15
Binomial trees are very popular in both theory and applications of option pricing. As they often suffer from an irregular convergence behavior, improving this is an important task. We build upon a new version of the Edgeworth expansion for lattice models to construct new and quickly converging binomial schemes with a particular application to barrier options.Risks2016-05-2342Article10.3390/risks4020015152227-90912016-05-23doi: 10.3390/risks4020015Alona BockRalf Korn<![CDATA[Risks, Vol. 4, Pages 14: Estimating Quantile Families of Loss Distributions for Non-Life Insurance Modelling via L-Moments]]>
http://www.mdpi.com/2227-9091/4/2/14
This paper discusses different classes of loss models in non-life insurance settings. It then overviews the class of Tukey transform loss models that have not yet been widely considered in non-life insurance modelling, but offer opportunities to produce flexible skewness and kurtosis features often required in loss modelling. In addition, these loss models admit explicit quantile specifications which make them directly relevant for quantile based risk measure calculations. We detail various parameterisations and sub-families of the Tukey transform based models, such as the g-and-h, g-and-k and g-and-j models, including their properties of relevance to loss modelling. One of the challenges practitioners face when fitting such models is performing robust estimation of the model parameters. In this paper we develop a novel, efficient, and robust procedure for estimating the parameters of this family of Tukey transform models, based on L-moments. It is shown to be more efficient than current state-of-the-art estimation methods for such families of loss models while being simple to implement for practical purposes.Risks2016-05-2042Article10.3390/risks4020014142227-90912016-05-20doi: 10.3390/risks4020014Gareth PetersWilson ChenRichard Gerlach<![CDATA[Risks, Vol. 4, Pages 12: Macro vs. Micro Methods in Non-Life Claims Reserving (an Econometric Perspective)]]>
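For readers unfamiliar with the estimation ingredient above, the sketch below computes the first two sample L-moments via probability-weighted moments and checks them on uniform data (for U(0,1), the population values are λ1 = 1/2 and λ2 = 1/6). It shows only the building block, not the paper's full estimator for the Tukey transform families.

```python
import numpy as np

def sample_l_moments(x):
    """First two sample L-moments via probability-weighted moments:
    b0 = mean, b1 = n^-1 * sum_j ((j-1)/(n-1)) * x_(j),
    l1 = b0 (L-location), l2 = 2*b1 - b0 (L-scale)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    b0 = x.mean()
    j = np.arange(1, n + 1)
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    return b0, 2.0 * b1 - b0

# Check on uniform data; the seed and sample size are arbitrary.
rng = np.random.default_rng(0)
l1, l2 = sample_l_moments(rng.uniform(0.0, 1.0, 200_000))
```

Matching such sample L-moments to the model L-moments of a quantile-specified family is the general idea behind L-moment estimation; the paper's contribution lies in doing this robustly and efficiently for the g-and-h type models.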
http://www.mdpi.com/2227-9091/4/2/12
Traditionally, actuaries have used run-off triangles to estimate reserves (“macro” models, on aggregated data). However, it is possible to model payments related to individual claims. Although those models provide similar estimates, we investigate the uncertainty related to reserves under both “macro” and “micro” models. We study theoretical properties of econometric models (Gaussian, Poisson and quasi-Poisson) on individual data and clustered data. Finally, applications in claims reserving are considered.Risks2016-05-1442Article10.3390/risks4020012122227-90912016-05-14doi: 10.3390/risks4020012Arthur CharpentierMathieu Pigeon<![CDATA[Risks, Vol. 4, Pages 13: Community Analysis of Global Financial Markets]]>
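A minimal "macro" example: the deterministic chain-ladder algorithm on an invented cumulative run-off triangle. The paper's econometric (Gaussian, Poisson, quasi-Poisson) treatments and the "micro" models go well beyond this sketch.

```python
import numpy as np

# Invented cumulative run-off triangle (3 accident x 3 development years);
# NaN marks cells not yet observed.
tri = np.array([[100.0, 150.0, 165.0],
                [110.0, 170.0, np.nan],
                [120.0, np.nan, np.nan]])

def chain_ladder(tri):
    """Development factors and completed triangle under the classical
    (deterministic) chain-ladder algorithm."""
    n = tri.shape[1]
    f = []
    for j in range(n - 1):
        seen = ~np.isnan(tri[:, j + 1])
        f.append(tri[seen, j + 1].sum() / tri[seen, j].sum())
    full = tri.copy()
    for j in range(n - 1):
        todo = np.isnan(full[:, j + 1])
        full[todo, j + 1] = full[todo, j] * f[j]
    return np.array(f), full

f, full = chain_ladder(tri)
latest = np.array([row[~np.isnan(row)][-1] for row in tri])
reserve = full[:, -1].sum() - latest.sum()  # outstanding claims estimate
```

The point estimate above carries no uncertainty statement at all, which is exactly the gap the paper addresses by comparing the predictive uncertainty of macro and micro formulations.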
http://www.mdpi.com/2227-9091/4/2/13
We analyze the daily returns of stock market indices and currencies of 56 countries over the period of 2002–2012. We build a network model consisting of two layers, one being the stock market indices and the other the foreign exchange markets. Synchronous and lagged correlations are used as measures of connectivity and causality among different parts of the global economic system for two different time intervals: non-crisis (2002–2006) and crisis (2007–2012) periods. We study community formations within the network to understand the influences and vulnerabilities of specific countries or groups of countries. We observe different behavior of the cross correlations and communities for crisis vs. non-crisis periods. For example, the overall correlation of stock markets increases during crisis while the overall correlation in the foreign exchange market and the correlation between stock and foreign exchange markets decrease, which leads to different community structures. We observe that the euro, while being central during the relatively calm period, loses its dominant role during crisis. Furthermore we discover that the troubled Eurozone countries, Portugal, Italy, Greece and Spain, form their own cluster during the crisis period.Risks2016-05-1342Article10.3390/risks4020013132227-90912016-05-13doi: 10.3390/risks4020013Irena VodenskaAlexander BeckerDi ZhouDror KenettH. StanleyShlomo Havlin<![CDATA[Risks, Vol. 4, Pages 11: Participating Life Insurance Products with Alternative Guarantees: Reconciling Policyholders’ and Insurers’ Interests]]>
http://www.mdpi.com/2227-9091/4/2/11
Traditional participating life insurance contracts with year-to-year (cliquet-style) guarantees have come under pressure in the current situation of low interest rates and volatile capital markets, in particular when priced in a market-consistent valuation framework. In addition, such guarantees lead to rather high capital requirements under risk-based solvency frameworks such as Solvency II or the Swiss Solvency Test (SST). Therefore, insurers in several countries have developed new forms of participating products with alternative (typically weaker and/or lower) guarantees that are less risky for the insurer. In a previous paper, it has been shown that such alternative product designs can lead to higher capital efficiency, i.e., higher and more stable profits and reduced capital requirements. As a result, the financial risk for the insurer is significantly reduced while the main guarantee features perceived and requested by the policyholder are preserved. Based on these findings, this paper now combines the insurer’s and the policyholder’s perspective by analyzing product versions that compensate policyholders for the less valuable guarantees. We particularly identify combinations of asset allocation and profit participation rate for the different product designs that lead to an identical expected profit for the insurer (and identical risk-neutral value for the policyholder), but differ with respect to the insurer’s risk and solvency capital requirements as well as with respect to the real-world return distribution for the policyholder. We show that alternative products can be designed in a way that the insurer’s expected profitability remains unchanged, the insurer’s risk and hence capital requirement is substantially reduced and the policyholder’s expected return is increased. 
This illustrates that such products might be able to reconcile insurers’ and policyholders’ interests and serve as an alternative to the rather risky cliquet-style products.Risks2016-05-0542Article10.3390/risks4020011112227-90912016-05-05doi: 10.3390/risks4020011Andreas ReußJochen RußJochen Wieland<![CDATA[Risks, Vol. 4, Pages 10: Telematics and Gender Discrimination: Some Usage-Based Evidence on Whether Men’s Risk of Accidents Differs from Women’s]]>
http://www.mdpi.com/2227-9091/4/2/10
Pay-as-you-drive (PAYD), or usage-based automobile insurance (UBI), is a policy agreement tied to vehicle usage. In this paper we analyze the effect of the distance traveled on the risk of accidents among young drivers with a PAYD policy. We use regression models for survival data to estimate how long it takes them to have their first accident at fault during the coverage period. Our empirical application with real data is presented and shows that gender differences are mainly attributable to the intensity of use. Indeed, although gender has a significant effect in explaining the time to the first crash, this effect is no longer significant when the average distance traveled per day is introduced in the model. This suggests that gender differences in the risk of accidents are, to a large extent, attributable to the fact that men drive more often than women. Estimates of the time to the first accident for different driver risk types are presented. We conclude that no gender discrimination is necessary if telematics provides enough information on driving habits.Risks2016-04-0842Article10.3390/risks4020010102227-90912016-04-08doi: 10.3390/risks4020010Mercedes AyusoMontserrat GuillenAna Pérez-Marín<![CDATA[Risks, Vol. 4, Pages 9: Inflation Protected Investment Strategies]]>
http://www.mdpi.com/2227-9091/4/2/9
In this paper, a dynamic inflation-protected investment strategy is presented, which is based on traditional asset classes and Markov-switching models. Different stock market and inflation regimes are identified, and within those regimes, the inflation hedging potential of stocks, bonds, real estate, commodities and gold is investigated. Within each regime, we determine optimal investment portfolios driven by the investment idea of protection from losses due to changing inflation if inflation is rising or high, but decoupling the performance from inflation if inflation is low. The results clearly indicate that these asset classes behave differently in different stock market and inflation regimes. Whereas in the long run, we agree with the general opinion in the literature that stocks and bonds are a suitable hedge against inflation, we observe for short time horizons that the hedging potential of each asset class, especially of real estate and commodities, depends strongly on the state of the current market environment. Thus, our approach provides a possible explanation for different statements in the literature regarding the inflation hedging properties of these asset classes. A dynamic inflation-protected investment strategy is developed, which combines inflation protection and upside potential. This strategy outperforms standard buy-and-hold strategies, as well as the well-known 1/N portfolio.Risks2016-03-2842Article10.3390/risks402000992227-90912016-03-28doi: 10.3390/risks4020009Mirco MahlstedtRudi Zagst<![CDATA[Risks, Vol. 4, Pages 8: Optimal Insurance for a Minimal Expected Retention: The Case of an Ambiguity-Seeking Insurer]]>
http://www.mdpi.com/2227-9091/4/1/8
In the classical expected utility framework, a problem of optimal insurance design with a premium constraint is equivalent to a problem of optimal insurance design with a minimum expected retention constraint. When the insurer has ambiguous beliefs represented by a non-additive probability measure, as in Schmeidler, this equivalence no longer holds. Recently, Amarante, Ghossoub and Phelps examined the problem of optimal insurance design with a premium constraint when the insurer has ambiguous beliefs. In particular, they showed that when the insurer is ambiguity-seeking, with a concave distortion of the insured’s probability measure, then the optimal indemnity schedule is a state-contingent deductible schedule, in which the deductible depends on the state of the world only through the insurer’s distortion function. In this paper, we examine the problem of optimal insurance design with a minimum expected retention constraint, in the case where the insurer is ambiguity-seeking. We obtain the aforementioned result of Amarante, Ghossoub and Phelps and the classical result of Arrow as special cases.Risks2016-03-2141Article10.3390/risks401000882227-90912016-03-21doi: 10.3390/risks4010008Massimiliano AmaranteMario Ghossoub<![CDATA[Risks, Vol. 4, Pages 7: Nonlinear Time Series and Neural-Network Models of Exchange Rates between the US Dollar and Major Currencies]]>
http://www.mdpi.com/2227-9091/4/1/7
This paper features an analysis of major currency exchange rate movements in relation to the US dollar, as constituted in US dollar terms. The euro, British pound, Chinese yuan, and Japanese yen are modelled using a variety of non-linear models, including smooth transition regression models, logistic smooth transition regression models, threshold autoregressive models, nonlinear autoregressive models, and additive nonlinear autoregressive models, plus neural network models. The models are evaluated on the basis of error metrics for twenty-day out-of-sample forecasts using the mean absolute percentage error (MAPE). The results suggest that there is no single dominant class of time series models, and the different currency pairs’ relationships with the US dollar are captured best by neural network regression models over the ten-year sample of daily exchange rate returns data, from August 2005 to August 2015.Risks2016-03-1641Article10.3390/risks401000772227-90912016-03-16doi: 10.3390/risks4010007David AllenMichael McAleerShelton PeirisAbhay Singh<![CDATA[Risks, Vol. 4, Pages 6: Analysis of Insurance Claim Settlement Process with Markovian Arrival Processes]]>
http://www.mdpi.com/2227-9091/4/1/6
This paper proposes a model for the claim occurrence, reporting, and handling process of insurance companies. It is assumed that insurance claims occur according to a Markovian arrival process. An incurred claim goes through some stages of a claim reporting and handling process, such as Incurred But Not Reported (IBNR), Reported But Not Settled (RBNS) and Settled (S). We derive formulas for the joint distribution and the joint moments of the amounts of IBNR, RBNS and Settled claims. This model generalizes previous ones in the literature, which generally assume Poisson claim arrivals. Due to the flexibility of the Markovian arrival process, the model can be used to evaluate how the claim occurring, reporting, and handling mechanisms may affect the volatilities of the amounts of IBNR, RBNS and Settled claims, and the interdependencies among them.Risks2016-03-1141Article10.3390/risks401000662227-90912016-03-11doi: 10.3390/risks4010006Jiandong Ren<![CDATA[Risks, Vol. 4, Pages 5: High-Frequency Financial Econometrics]]>
http://www.mdpi.com/2227-9091/4/1/5
This book is fundamentally about the estimation of risk.[...]Risks2016-02-2641Book Review10.3390/risks401000552227-90912016-02-26doi: 10.3390/risks4010005Harley Thompson<![CDATA[Risks, Vol. 4, Pages 4: Multivariate Frequency-Severity Regression Models in Insurance]]>
http://www.mdpi.com/2227-9091/4/1/4
In insurance and related industries including healthcare, it is common to have several outcome measures that the analyst wishes to understand using explanatory variables. For example, in automobile insurance, an accident may result in payments for damage to one’s own vehicle, damage to another party’s vehicle, or personal injury. It is also common to be interested in the frequency of accidents in addition to the severity of the claim amounts. This paper synthesizes and extends the literature on multivariate frequency-severity regression modeling with a focus on insurance industry applications. Regression models for understanding the distribution of each outcome continue to be developed yet there now exists a solid body of literature for the marginal outcomes. This paper contributes to this body of literature by focusing on the use of a copula for modeling the dependence among these outcomes; a major advantage of this tool is that it preserves the body of work established for marginal models. We illustrate this approach using data from the Wisconsin Local Government Property Insurance Fund. This fund offers insurance protection for (i) property; (ii) motor vehicle; and (iii) contractors’ equipment claims. In addition to several claim types and frequency-severity components, outcomes can be further categorized by time and space, requiring complex dependency modeling. We find significant dependencies for these data; specifically, we find that dependencies among lines are stronger than the dependencies between the frequency and average severity within each line.Risks2016-02-2541Article10.3390/risks401000442227-90912016-02-25doi: 10.3390/risks4010004Edward FreesGee LeeLu Yang<![CDATA[Risks, Vol. 4, Pages 3: Premiums for Long-Term Care Insurance Packages: Sensitivity with Respect to Biometric Assumptions]]>
http://www.mdpi.com/2227-9091/4/1/3
Long-term care insurance (LTCI) covers are rather recent products, in the framework of health insurance. It follows that specific biometric data are scanty; pricing and reserving problems then arise because of difficulties in the choice of appropriate technical bases. Different benefit structures imply different sensitivity degrees with respect to changes in biometric assumptions. Hence, an accurate sensitivity analysis can help in designing LTCI products and, in particular, in comparing stand-alone products to combined products, i.e., packages including LTCI benefits and other lifetime-related benefits. Numerical examples show, in particular, that the stand-alone cover is much riskier than all of the LTCI combined products that we have considered. As a consequence, the LTCI stand-alone cover is a highly “absorbing” product as regards capital requirements for solvency purposes.Risks2016-02-2241Article10.3390/risks401000332227-90912016-02-22doi: 10.3390/risks4010003Ermanno Pitacco<![CDATA[Risks, Vol. 4, Pages 2: Ruin Analysis of a Discrete-Time Dependent Sparre Andersen Model with External Financial Activities and Randomized Dividends]]>
http://www.mdpi.com/2227-9091/4/1/2
We consider a discrete-time dependent Sparre Andersen risk model which incorporates multiple threshold levels characterizing an insurer’s minimal capital requirement, dividend paying situations, and external financial activities. We focus on the development of a recursive computational procedure to calculate the finite-time ruin probabilities and expected total discounted dividends paid prior to ruin associated with this model. We investigate several numerical examples and make some observations concerning the impact our threshold levels have on the finite-time ruin probabilities and expected total discounted dividends paid prior to ruin.Risks2016-02-0341Article10.3390/risks401000222227-90912016-02-03doi: 10.3390/risks4010002Sung KimSteve Drekic<![CDATA[Risks, Vol. 4, Pages 1: Acknowledgement to Reviewers of Risks in 2015]]>
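The flavor of the recursive finite-time ruin computation mentioned above can be sketched for a much simpler model than the paper's dependent Sparre Andersen setup with thresholds: a compound binomial model with unit premium income per period. The claim probability and size distribution below are made-up illustrative values.

```python
from functools import lru_cache

# Illustrative compound binomial model (far simpler than the paper's model):
# premium 1 per period, a claim occurs with probability p, claim sizes 1-3.
p = 0.3
claim_pmf = {1: 0.5, 2: 0.3, 3: 0.2}

@lru_cache(maxsize=None)
def ruin_prob(u, n):
    """Probability of ruin (surplus drops below 0) within n periods,
    starting from integer surplus u, computed by backward recursion."""
    if u < 0:
        return 1.0   # already ruined
    if n == 0:
        return 0.0   # survived the horizon
    # No claim this period: surplus grows by the unit premium.
    prob = (1 - p) * ruin_prob(u + 1, n - 1)
    # A claim of size k occurs with probability p * claim_pmf[k].
    for k, fk in claim_pmf.items():
        prob += p * fk * ruin_prob(u + 1 - k, n - 1)
    return prob
```

Memoization turns the exponential tree of sample paths into a table indexed by (surplus, remaining periods), which is the essence of such recursive procedures; the paper's version additionally tracks threshold-dependent premiums and dividends.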
http://www.mdpi.com/2227-9091/4/1/1
The editors of Risks would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2015. [...] [Risks, Vol. 4, Issue 1, Editorial, p. 1, ISSN 2227-9091, published 2016-01-21, doi: 10.3390/risks4010001. Risks Editorial Office.]<![CDATA[Risks, Vol. 3, Pages 624-646: Modified Munich Chain-Ladder Method]]>
http://www.mdpi.com/2227-9091/3/4/624
The Munich chain-ladder method for claims reserving was introduced by Quarg and Mack on an axiomatic basis. We analyze these axioms, and we define a modified Munich chain-ladder method which is based on an explicit stochastic model. This stochastic model then allows us to consider claims prediction and prediction uncertainty for the Munich chain-ladder method in a consistent way. [Risks, Vol. 3, Issue 4, Article, pp. 624-646, ISSN 2227-9091, published 2015-12-21, doi: 10.3390/risks3040624. Authors: Michael Merz, Mario Wüthrich.]<![CDATA[Risks, Vol. 3, Pages 599-623: Dependence Uncertainty Bounds for the Expectile of a Portfolio]]>
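For context, the classical chain-ladder algorithm that the Munich variant refines can be sketched as follows. This is the basic deterministic method only, not the Munich or modified Munich method (which additionally couples paid and incurred triangles through a stochastic model), and the run-off triangle is invented for illustration.

```python
import numpy as np

def chain_ladder(triangle):
    """Complete a cumulative run-off triangle with the classical
    chain-ladder method. `triangle` is square, np.nan below the diagonal."""
    tri = np.array(triangle, dtype=float)
    n = tri.shape[0]
    # Volume-weighted development factors f_j = sum_i C_{i,j+1} / sum_i C_{i,j},
    # taken over accident years i where both columns are observed.
    factors = []
    for j in range(n - 1):
        rows = ~np.isnan(tri[:, j + 1])
        factors.append(tri[rows, j + 1].sum() / tri[rows, j].sum())
    # Project the unobserved lower-right cells column by column.
    for i in range(n):
        for j in range(n - 1):
            if np.isnan(tri[i, j + 1]):
                tri[i, j + 1] = tri[i, j] * factors[j]
    return tri, factors

tri = [[100, 150, 160],
       [110, 165, np.nan],
       [120, np.nan, np.nan]]
full, f = chain_ladder(tri)
```

Here the first development factor is (150 + 165) / (100 + 110) = 1.5, so the latest accident year is projected from 120 to 180 and then onward with the second factor.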
http://www.mdpi.com/2227-9091/3/4/599
We study upper and lower bounds on the expectile risk measure of risky portfolios when the joint distribution of the risky components is not fully specified. First, we summarize methods for obtaining bounds when only the marginal distributions of the components are known, but not their interdependence (unconstrained bounds). In particular, we provide the best-possible upper bound and the best-possible lower bound (under some conditions), as well as numerical procedures to compute them. We also derive simple analytic bounds that appear adequate in various situations of interest. Second, we study bounds when some information on interdependence is available (constrained bounds). When the variance of the portfolio is known, a simple-to-compute upper bound is provided, and we illustrate that it may significantly improve the unconstrained upper bound. We also show that the unconstrained lower bound cannot be readily improved using variance information. Next, we derive improved bounds when the bivariate distributions of each of the risky components and a risk factor are known. When the factor induces a positive dependence among the components, it is typically possible to improve the unconstrained lower bound. Finally, the unconstrained dependence uncertainty spreads of expected shortfall, value-at-risk and the expectile are compared. [Risks, Vol. 3, Issue 4, Article, pp. 599-623, ISSN 2227-9091, published 2015-12-10, doi: 10.3390/risks3040599. Authors: Edgars Jakobsons, Steven Vanduffel.]<![CDATA[Risks, Vol. 3, Pages 573-598: Information-Based Trade in German Real Estate and Equity Markets]]>
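For readers unfamiliar with expectiles: the tau-expectile of X is the unique e solving the first-order condition tau * E[(X - e)+] = (1 - tau) * E[(e - X)+], which reduces to the mean at tau = 0.5. The sample version below is a generic numerical illustration via root-finding, not the dependence-uncertainty bound machinery developed in the paper.

```python
import numpy as np
from scipy.optimize import brentq

def expectile(x, tau):
    """Sample expectile: the root of
    tau * E[(X - e)_+] - (1 - tau) * E[(e - X)_+] = 0."""
    x = np.asarray(x, dtype=float)

    def foc(e):
        return (tau * np.mean(np.maximum(x - e, 0.0))
                - (1 - tau) * np.mean(np.maximum(e - x, 0.0)))

    # The first-order condition is decreasing in e and changes sign
    # between the sample minimum and maximum, so bracketing works.
    return brentq(foc, x.min(), x.max())
```

The function is strictly decreasing in e, so the bracketed root is unique; at tau = 0.5 the condition balances upside and downside deviations equally, recovering the sample mean.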
http://www.mdpi.com/2227-9091/3/4/573
This paper employs four established market microstructure measures on information-based trade in financial markets. A set of German mid and small caps is used to analyze potential differential information content in real estate stocks compared to other asset classes. After linking substantially lower amounts of information-based trade in real estate stocks to higher liquidity premia, it is found that the evolution of the information content in real estate and other assets follows similar trends. Consequently, interdependence is tested for rolling time windows, revealing strong informational links between real estate and other assets. Particularly, small caps, financials, as well as companies offering consumer goods and services show a close relationship to real estate. Depending on the choice of the measure of information-based trade, up to 75% of the variation in the information content in real estate shares is related to other asset classes, pointing to a high degree of dependence. [Risks, Vol. 3, Issue 4, Article, pp. 573-598, ISSN 2227-9091, published 2015-12-07, doi: 10.3390/risks3040573. Author: Marco Wölfle.]<![CDATA[Risks, Vol. 3, Pages 553-572: Stochastic Optimal Control for Online Seller under Reputational Mechanisms]]>
http://www.mdpi.com/2227-9091/3/4/553
In this work we propose and analyze a model which addresses the pulsing behavior of sellers in an online auction (store). This pulsing behavior is observed when sellers switch between advertising and processing states. We assert that a seller switches her state in order to maximize her profit, and further that this switch can be identified through the seller’s reputation. We show that for each seller there is an optimal reputation, i.e., the reputation at which the seller should switch her state in order to maximize her total profit. We design a stochastic behavioral model for an online seller, which incorporates the dynamics of resource allocation and reputation. The design of the model is optimized by using a stochastic advertising model from [1] and used effectively in the Stochastic Optimal Control of Advertising [2]. This model of reputation is combined with the effect of online reputation on sales price empirically verified in [3]. We derive the Hamilton-Jacobi-Bellman (HJB) differential equation, whose solution relates optimal wealth level to a seller’s reputation. We formulate both a full model, as well as a reduced model with fewer parameters, both of which have the same qualitative description of the optimal seller behavior. Coincidentally, the reduced model has a closed-form analytical solution that we construct. [Risks, Vol. 3, Issue 4, Article, pp. 553-572, ISSN 2227-9091, published 2015-12-04, doi: 10.3390/risks3040553. Authors: Milan Bradonjić, Matthew Causley, Albert Cohen.]<![CDATA[Risks, Vol. 3, Pages 543-552: Production Flexibility and Hedging]]>
http://www.mdpi.com/2227-9091/3/4/543
We extend the analysis on hedging with price and output uncertainty by endogenizing the output decision. Specifically, we consider the joint determination of output and hedging in the case of flexibility in production. We show that the risk-averse firm always maintains a short position in the futures market when the futures price is actuarially fair. Moreover, in the context of an example, we show that the presence of production flexibility reduces the incentive to hedge for all risk averse agents. [Risks, Vol. 3, Issue 4, Article, pp. 543-552, ISSN 2227-9091, published 2015-12-04, doi: 10.3390/risks3040543. Authors: Georges Dionne, Marc Santugini.]<![CDATA[Risks, Vol. 3, Pages 515-542: The Impact of Guarantees on the Performance of Pension Saving Schemes: Insights from the Literature]]>
http://www.mdpi.com/2227-9091/3/4/515
Guarantees are often seen as the key characteristics of pension saving products, but securing them can become costly and is of central relevance especially in the course of the current low interest rate environment. In this article, we deal with the question of how costly the typical types of guarantees are, in the sense that they reduce a pension saving scheme’s financial performance over time. In this context, we aim to provide a presentation of insights from selected literature studying the impact of point-to-point guarantees and cliquet-style interest rate guarantees on the performance of pension contracts. The comparative analysis emphasizes that, in most cases, guarantee costs are not negligible with regard to a contract’s financial performance, especially compared to benchmarks, and that ensuring customers knowingly opt for such guarantees (or not) is thus indispensable. Further investigation of the willingness-to-pay for guarantees in life insurance, in particular for innovative contract designs, is an area for future research. [Risks, Vol. 3, Issue 4, Article, pp. 515-542, ISSN 2227-9091, published 2015-11-20, doi: 10.3390/risks3040515. Author: Alexander Bohnert.]<![CDATA[Risks, Vol. 3, Pages 491-514: On the Joint Analysis of the Total Discounted Payments to Policyholders and Shareholders: Dividend Barrier Strategy]]>
http://www.mdpi.com/2227-9091/3/4/491
In the compound Poisson insurance risk model under a dividend barrier strategy, this paper aims to analyze jointly the aggregate discounted claim amounts until ruin and the total discounted dividends until ruin, which represent the insurer’s payments to its policyholders and shareholders, respectively. To this end, we introduce a Gerber–Shiu-type function, which further incorporates the higher moments of these two quantities. This not only unifies the individual study of various ruin-related quantities, but also allows for new measures concerning covariances to be calculated. The integro-differential equation satisfied by the generalized Gerber–Shiu function and the boundary condition are derived. In particular, when the claim severity is distributed as a combination of exponentials, explicit expressions for this Gerber–Shiu function in some special cases are given. Numerical examples involving the covariances between any two of (i) the aggregate discounted claims until ruin, (ii) the discounted dividend payments until ruin and (iii) the time of ruin are presented along with some interpretations. [Risks, Vol. 3, Issue 4, Article, pp. 491-514, ISSN 2227-9091, published 2015-11-10, doi: 10.3390/risks3040491. Authors: Eric Cheung, Haibo Liu, Jae-Kyung Woo.]<![CDATA[Risks, Vol. 3, Pages 474-490: Combining Alphas via Bounded Regression]]>
http://www.mdpi.com/2227-9091/3/4/474
We give an explicit algorithm and source code for combining alpha streams via bounded regression. In practical applications, typically, there is insufficient history to compute a sample covariance matrix (SCM) for a large number of alphas. To compute alpha allocation weights, one then resorts to (weighted) regression over SCM principal components. Regression often produces alpha weights with insufficient diversification and/or skewed distribution against, e.g., turnover. This can be rectified by imposing bounds on alpha weights within the regression procedure. Bounded regression can also be applied to stock and other asset portfolio construction. We discuss illustrative examples. [Risks, Vol. 3, Issue 4, Article, pp. 474-490, ISSN 2227-9091, published 2015-11-04, doi: 10.3390/risks3040474. Author: Zura Kakushadze.]<![CDATA[Risks, Vol. 3, Pages 455-473: Hidden Markov Model for Stock Selection]]>
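The core idea above, imposing box bounds on the weights inside the regression itself, can be illustrated with an off-the-shelf bounded least-squares solver. The synthetic data, the bound of 0.25, and the whole setup below are toy assumptions for illustration, not the paper's alpha streams, SCM-based weighting, or source code.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Toy setup: 200 return observations explained by 5 synthetic "alpha" signals.
rng = np.random.default_rng(42)
A = rng.normal(size=(200, 5))
true_w = np.array([0.4, 0.3, 0.2, 0.1, 0.0])
y = A @ true_w + 0.01 * rng.normal(size=200)

# Unconstrained regression can produce extreme or negative weights; solving
# the least-squares problem subject to box constraints keeps every weight
# in [0, 0.25], enforcing diversification directly in the fit.
res = lsq_linear(A, y, bounds=(0.0, 0.25))
w = res.x
```

Because the bounds are part of the optimization rather than applied by clipping afterwards, the solver redistributes weight among the remaining signals when a constraint binds, which is the point of doing the bounding inside the regression.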
http://www.mdpi.com/2227-9091/3/4/455
The hidden Markov model (HMM) is typically used to predict the hidden regimes of observation data. Therefore, this model finds applications in many different areas, such as speech recognition systems, computational molecular biology and financial market predictions. In this paper, we use HMM for stock selection. We first use HMM to make monthly regime predictions for the four macroeconomic variables: inflation (consumer price index (CPI)), industrial production index (INDPRO), stock market index (S&amp;P 500) and market volatility (VIX). At the end of each month, we calibrate HMM’s parameters for each of these economic variables and predict its regimes for the next month. We then look back into historical data to find the time periods for which the four variables had similar regimes with the forecasted regimes. Within those similar periods, we analyze all of the S&amp;P 500 stocks to identify which stock characteristics have been well rewarded during the time periods and assign scores and corresponding weights for each of the stock characteristics. A composite score of each stock is calculated based on the scores and weights of its features. Based on this algorithm, we choose the 50 top ranking stocks to buy. We compare the performance of the portfolio with the benchmark index, S&amp;P 500. With an initial investment of $100 in December 1999, over 15 years, in December 2014, our portfolio had an average gain per annum of 14.9% versus 2.3% for the S&amp;P 500. [Risks, Vol. 3, Issue 4, Article, pp. 455-473, ISSN 2227-9091, published 2015-10-29, doi: 10.3390/risks3040455. Authors: Nguyet Nguyen, Dung Nguyen.]<![CDATA[Risks, Vol. 3, Pages 445-454: Risk Classification Efficiency and the Insurance Market Regulation]]>
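The likelihood computation that underlies HMM calibration of the kind described above can be sketched with the standard scaled forward algorithm. The two-regime, two-symbol toy model below is an assumption for illustration only, not the paper's calibration to CPI, INDPRO, S&amp;P 500, and VIX data.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    via the forward algorithm with per-step rescaling to avoid underflow.
    pi: initial state distribution, A: state transition matrix,
    B[state, symbol]: emission probabilities."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate, then weight by emission
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Two hidden regimes ("calm", "volatile") emitting two observable symbols.
pi = np.array([0.6, 0.4])
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[0.8, 0.2],
              [0.3, 0.7]])
ll = forward_log_likelihood([0, 0, 1, 1, 0], pi, A, B)
```

Calibration then amounts to maximizing this log-likelihood over (pi, A, B), typically with the Baum-Welch EM iteration, and regime prediction reads off the filtered state probabilities alpha.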
http://www.mdpi.com/2227-9091/3/4/445
Given that the insurance market is characterized by asymmetric information, its efficiency has traditionally been based to a large extent on risk classification. In certain regulations, however, we can find restrictions on these differentiations, primarily the ban on those considered to be “discriminatory”. In 2011, following the European Union Directive 2004/113/EC, the European Court of Justice concluded that any gender-based discrimination was prohibited, meaning that gender equality in the European Union had to be ensured from 21 December 2012. Another restriction was imposed by EU and national competition regulation on the exchange of information considered as anti-competitive behavior. This paper aims to contribute to the recent policy debate in the EU, evaluating the negative economic consequences of these regulatory restrictions in terms of market efficiency. [Risks, Vol. 3, Issue 4, Article, pp. 445-454, ISSN 2227-9091, published 2015-09-25, doi: 10.3390/risks3040445. Author: Donatella Porrini.]<![CDATA[Risks, Vol. 3, Pages 420-444: The Financial Stress Index: Identification of Systemic Risk Conditions]]>
http://www.mdpi.com/2227-9091/3/3/420
This paper develops a financial stress measure for the United States, the Cleveland Financial Stress Index (CFSI). The index is based on publicly available data describing a six-market partition of the financial system comprising credit, funding, real estate, securitization, foreign exchange, and equity markets. This paper improves upon existing stress measures by objectively selecting between several index weighting methodologies across a variety of monitoring frequencies through comparison against a volatility-based benchmark series. The resulting measure facilitates the decomposition of stress to identify disruptions in specific markets and provides insight into historical stress regimes. [Risks, Vol. 3, Issue 3, Article, pp. 420-444, ISSN 2227-9091, published 2015-09-16, doi: 10.3390/risks3030420. Authors: Mikhail Oet, John Dooley, Stephen Ong.]<![CDATA[Risks, Vol. 3, Pages 390-419: Multi-Objective Stochastic Optimization Programs for a Non-Life Insurance Company under Solvency Constraints]]>
http://www.mdpi.com/2227-9091/3/3/390
In the paper, we introduce a multi-objective scenario-based optimization approach for chance-constrained portfolio selection problems. More specifically, a modified version of the normal constraint method is implemented with a global solver in order to generate a dotted approximation of the Pareto frontier for bi- and tri-objective programming problems. Numerical experiments are carried out on a set of portfolios to be optimized for an EU-based non-life insurance company. Both performance indicators and risk measures are managed as objectives. Results show that this procedure is effective and readily applicable to achieve suitable risk-reward tradeoff analysis. [Risks, Vol. 3, Issue 3, Article, pp. 390-419, ISSN 2227-9091, published 2015-09-15, doi: 10.3390/risks3030390. Authors: Massimiliano Kaucic, Roberto Daris.]<![CDATA[Risks, Vol. 3, Pages 365-389: Supervising System Stress in Multiple Markets]]>
http://www.mdpi.com/2227-9091/3/3/365
This paper develops an extended financial stress measure that considers the supervisory objective of identifying risks to the stability of the financial system. The measure provides a continuous and bounded signal of financial stress using daily public market data. Broad coverage of material financial system markets over time is achieved by leveraging dynamic credit weights. We consider how this measure can be used to monitor, analyze, and alert financial system stress. [Risks, Vol. 3, Issue 3, Article, pp. 365-389, ISSN 2227-9091, published 2015-09-14, doi: 10.3390/risks3030365. Authors: Mikhail Oet, John Dooley, Amanda Janosko, Dieter Gramlich, Stephen Ong.]<![CDATA[Risks, Vol. 3, Pages 338-364: Valuation of Index-Linked Cash Flows in a Heath–Jarrow–Morton Framework]]>
http://www.mdpi.com/2227-9091/3/3/338
In this paper, we study the valuation of stochastic cash flows that exhibit dependence on interest rates. We focus on insurance liability cash flows linked to an index, such as a consumer price index or wage index, where changes in the index value can be partially understood in terms of changes in the term structure of interest rates. Insurance liability cash flows that are not explicitly linked to an index may still be valued in our framework by interpreting index returns as so-called claims inflation, i.e., an increase in claims cost per sold insurance contract. We focus primarily on the case when a deep and liquid market for index-linked contracts is absent or when the market price data are unreliable. Firstly, we present an approach for assigning a monetary value to a stochastic cash flow that does not require full knowledge of the joint dynamics of the cash flow and the term structure of interest rates. Secondly, we investigate in detail model selection, estimation and validation in a Heath–Jarrow–Morton framework. Finally, we analyze the effects of model uncertainty on the valuation of the cash flows and how forecasts of cash flows and interest rates translate into model parameters and affect the valuation. [Risks, Vol. 3, Issue 3, Article, pp. 338-364, ISSN 2227-9091, published 2015-09-10, doi: 10.3390/risks3030338. Authors: Jonas Alm, Filip Lindskog.]<![CDATA[Risks, Vol. 3, Pages 318-337: Delivering Left-Skewed Portfolio Payoff Distributions in the Presence of Transaction Costs]]>
http://www.mdpi.com/2227-9091/3/3/318
For pension-savers, a low payoff is a financial disaster. Such investors will most likely prefer left-skewed payoff distributions over right-skewed payoff distributions. We explore how such distributions can be delivered. Cautious-relaxed utility measures are cautious in ensuring that payoffs do not fall much below a reference value, but relaxed about exceeding it. We find that the payoff distribution delivered by a cautious-relaxed utility measure has appealing features which payoff distributions delivered by traditional utility functions do not. In particular, cautious-relaxed distributions can have the mass concentrated on the left, hence be left-skewed. However, cautious-relaxed strategies prescribe frequent portfolio adjustments which may be expensive if transaction costs are charged. In contrast, more traditional strategies can be time-invariant. Thus, we investigate the impact of transaction costs on the appeal of cautious-relaxed strategies. We find that relatively high transaction fees are required for the cautious-relaxed strategy to lose its appeal. This paper contributes to the literature which compares utility measures by the payoff distributions they produce and finds that a cautious-relaxed utility measure will deliver payoffs that many investors will prefer. [Risks, Vol. 3, Issue 3, Article, pp. 318-337, ISSN 2227-9091, published 2015-08-21, doi: 10.3390/risks3030318. Author: Jacek Krawczyk.]<![CDATA[Risks, Vol. 3, Pages 290-317: Life Insurance Cash Flows with Policyholder Behavior]]>
http://www.mdpi.com/2227-9091/3/3/290
The problem of the valuation of life insurance payments with policyholder behavior is studied. First, a simple survival model is considered, and it is shown how cash flows without policyholder behavior can be modified to include surrender and free policy behavior by calculation of simple integrals. In the second part, a more general disability model with recovery is studied. Here, cash flows are determined by solving a modified Kolmogorov forward differential equation. We conclude the paper with numerical examples illustrating the methods proposed and the impact of policyholder behavior. [Risks, Vol. 3, Issue 3, Article, pp. 290-317, ISSN 2227-9091, published 2015-07-24, doi: 10.3390/risks3030290. Authors: Kristian Buchardt, Thomas Møller.]<![CDATA[Risks, Vol. 3, Pages 277-289: Monopolistic Insurance and the Value of Information]]>
http://www.mdpi.com/2227-9091/3/3/277
The value of information regarding risk class for a monopoly insurer and its customers is examined in both symmetric and asymmetric information environments. A monopolist always prefers contracting with uninformed customers as this maximizes the rent extracted under symmetric information while also avoiding the cost of adverse selection when information is held asymmetrically. Although customers are indifferent to symmetric information when they are initially uninformed, they prefer contracting with hidden knowledge rather than symmetric information since the monopoly responds to adverse selection by sharing gains from trade with high-risk customers when low risks are predominant in the insurance pool. However, utilitarian social welfare is highest when customers are uninformed, and is higher when information is symmetric rather than asymmetric. [Risks, Vol. 3, Issue 3, Article, pp. 277-289, ISSN 2227-9091, published 2015-07-24, doi: 10.3390/risks3030277. Author: Arthur Snow.]