Article

Distributionally Robust Reinsurance with Glue Value-at-Risk and Expected Value Premium

1 School of Mathematics and Finance, Chuzhou University, Chuzhou 239000, China
2 College of Science, Wuhan University of Technology, Wuhan 430070, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(18), 3923; https://doi.org/10.3390/math11183923
Submission received: 12 July 2023 / Revised: 15 August 2023 / Accepted: 16 August 2023 / Published: 15 September 2023

Abstract:
In this paper, we explore a distributionally robust reinsurance problem that incorporates the concepts of Glue Value-at-Risk and the expected value premium principle. The problem focuses on stop-loss reinsurance contracts with known mean and variance of the loss. The optimization problem can be formulated as a minimax problem, where the inner problem involves maximizing over all distributions with the same mean and variance. It is demonstrated that the inner problem can be represented as maximizing either over three-point distributions under some mild condition or over four-point distributions otherwise. Additionally, analytical solutions are provided for determining the optimal deductible and optimal values.

1. Introduction

Reinsurance has played a crucial role in facilitating the transfer of risks from insurers to reinsurers. The optimization of reinsurance strategies has been the subject of extensive research, with significant contributions made by [1,2]. Since then, numerous extensions and advancements have been proposed in this field.
Some researchers have devoted their attention to identifying the most favorable reinsurance contract through the examination of multiple criteria. These criteria encompass minimizing regulatory capital and mitigating the insurer’s risk exposure with respect to specific risk measures. The first proposal of the optimal reinsurance problem for VaR and CVaR risk measures was put forward by [3]. They analyzed the class of stop-loss policies based on the expected value premium principle. Ref. [4] expanded the scope of optimal reinsurance selection to a broader set. Additionally, Refs. [5,6,7] have investigated various approaches and models to determine the optimal reinsurance contract in accordance with the selected risk measure. For a comprehensive overview of optimal reinsurance design with a focus on risk measures, we highly recommend consulting the survey paper [8].
Given the practical difficulties in obtaining the precise distribution of losses, a widely adopted approach to tackle model uncertainty is the utilization of a moment-based uncertainty set. This approach entails considering distributions that adhere to particular moment constraints. An example of this is seen in [9], where optimal reinsurance incorporating stop-loss contracts was examined while accounting for incomplete information regarding the loss distribution. In a similar vein, Ref. [10] examined model uncertainty in the insurance context by maximizing over a finite set of probability measures. In the research conducted in [11], the investigation of VaR-based and CVaR-based reinsurance took place, while taking into account model uncertainty related to stop-loss reinsurance contracts. The authors successfully provided an analytical solution to the reinsurance problem. Moreover, in the study conducted in [12], a distributionally robust reinsurance problem incorporating expectiles was examined, and the optimization problem was numerically solved.
In this paper, we investigate a reinsurance problem incorporating the risk measure Glue Value-at-Risk (GlueVaR) under the framework of distributional robustness. GlueVaR, initially introduced in [13] as a function with four parameters, enables the customization of risk measures to specific contexts by adjusting these parameters. The practical aspects of tail-subadditivity were analyzed by the authors of [14], who also examined the subadditivity and tail-subadditivity of risk aggregation. In a recent study, Ref. [15] examined the optimal reinsurance problem using a novel combination of risk measures and derived closed-form solutions for optimal reinsurance policies. They also explored the application of VaR and GlueVaR as specific instances of risk management. However, it is important to note that their analysis assumed knowledge of the risk distribution. In our research, we possess partial knowledge about the distribution of losses, specifically, regarding the mean and variance. The primary contribution of this paper is to establish that the worst-case distribution falls within the category of three-point or four-point distributions, thereby transforming the infinite-dimensional inner problem into a finite-dimensional one. As a result, we obtain closed-form solutions for the optimal deductible and optimal value. Our findings build upon the conclusions presented in [11], thus extending their implications.
The remaining sections of this paper are structured as follows. Section 2 presents the definition and properties of GlueVaR and formulates our distributionally robust reinsurance problem as a minimax problem. In Section 3, we address the distributionally robust reinsurance problem by identifying the optimal deductibles and optimal values. Section 4 provides an in-depth discussion of the optimal solution and concludes the paper with closing remarks. Some detailed proofs can be found in Appendix A and Appendix B.

2. Preliminaries

2.1. Risk Measures

A risk measure is a mathematical function that assigns a non-negative real number to a specific level of risk. Within the financial and insurance sectors, two commonly utilized risk measures are Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR). For a deeper understanding of CVaR, please consult the discussion provided in [16].
For a random variable X, VaR is defined in two versions: the left-continuous version VaR and the right-continuous version VaR^+. They are defined as
VaR_α(X) = inf{ t ∈ R : P(X ≤ t) ≥ α },  α ∈ (0, 1],
and
VaR^+_α(X) = inf{ t ∈ R : P(X ≤ t) > α },  α ∈ [0, 1),
respectively. At the endpoints, we set VaR_0(X) := VaR^+_0(X) and VaR^+_1(X) := VaR_1(X). Both of these measures serve as a scientific foundation for determining the initial capital required by a company to withstand risks. However, it should be noted that neither VaR nor VaR^+ satisfies subadditivity, and both may fail to capture extreme risks accurately.
CVaR, also known as expected shortfall, satisfies the property of subadditivity. It is defined as
CVaR_α(X) = (1/(1-α)) ∫_α^1 VaR_u(X) du,  α ∈ [0, 1),  CVaR_1(X) := VaR_1(X),
provided the integral exists. Ref. [17] proposed RVaR_{α,β} at levels 0 ≤ α < β ≤ 1, as follows:
RVaR_{α,β}(X) = (1/(β-α)) ∫_α^β VaR_u(X) du.
RVaR includes right-continuous VaR and CVaR as its limiting cases, and the two versions of VaR do not impact the values of RVaR and CVaR.
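For a loss with finitely many atoms these quantile integrals reduce to finite sums, which gives a quick way to sanity-check the definitions. The sketch below is our own illustration (the function name and interface are not from the paper); it computes the left-continuous VaR, CVaR, and RVaR of a discrete loss:

```python
def var_cvar_rvar(values, probs, alpha, beta):
    """Left-continuous VaR, CVaR and RVaR for a finite discrete loss.

    `values` must be sorted ascending with matching probabilities `probs`.
    VaR_u(X) is piecewise constant in u: it equals values[i] for
    u in (F_{i-1}, F_i], where F_i is the cumulative probability.
    """
    cum, s = [], 0.0
    for p in probs:
        s += p
        cum.append(s)

    def var(u):
        # inf{t : P(X <= t) >= u}
        for x, F in zip(values, cum):
            if F >= u - 1e-12:
                return x
        return values[-1]

    def integral_var(a, b):
        # integral of u -> VaR_u(X) over (a, b), piece by piece
        total, lo = 0.0, a
        for x, F in zip(values, cum):
            hi = min(F, b)
            if hi > lo:
                total += (hi - lo) * x
                lo = hi
        return total

    cvar = integral_var(alpha, 1.0) / (1.0 - alpha)
    rvar = integral_var(alpha, beta) / (beta - alpha)
    return var(alpha), cvar, rvar
```

For a loss taking value 10 with probability 0.1 and 0 otherwise, VaR_0.8 = 0 while CVaR_0.8 = 5, illustrating how CVaR picks up tail mass that VaR ignores.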
Moving on to another perspective, Ref. [18] defines a family of risk measures using the concept of a distortion function. A distortion function is a non-decreasing function g : [0, 1] → [0, 1] such that g(0) = 0 and g(1) = 1. The corresponding distortion risk measure, denoted ρ_g[·], is defined as
ρ_g[X] = ∫_0^∞ g(F̄(t)) dt - ∫_{-∞}^0 [1 - g(F̄(t))] dt,
where F ¯ ( t ) is the survival function of X. For a more comprehensive understanding of distortion risk measures, see [19,20].
Ref. [13] introduced GlueVaR, denoted by GlueVaR_{α,β}^{r1,r2}(X), which utilizes the distortion function
g_{α,β}^{r1,r2}(x) :=
  (r1/(1-β)) x,  if 0 ≤ x < 1-β,
  r1 + ((r2-r1)/(β-α)) (x - (1-β)),  if 1-β ≤ x < 1-α,
  1,  if 1-α ≤ x ≤ 1,  (1)
where α, β ∈ [0, 1] with α ≤ β, r1 ∈ [0, 1] and r2 ∈ [r1, 1]. By [13], GlueVaR can be expressed as
GlueVaR_{α,β}^{r1,r2}(X) = ω1 CVaR_β(X) + ω2 CVaR_α(X) + ω3 VaR_α(X),
where
ω1 = r1 - (r2 - r1)(1-β)/(β-α),  ω2 = (r2 - r1)(1-α)/(β-α),  ω3 = 1 - ω1 - ω2 = 1 - r2.
The GlueVaR family exhibits great flexibility under appropriate conditions on r1 and r2. For instance, risk managers may choose parameters such that VaR_α(X) ≤ GlueVaR_{α,β}^{r1,r2}(X) ≤ CVaR_α(X). This implies that GlueVaR has the potential to address VaR's tendency to underestimate risks while, unlike CVaR, not being overly conservative. Additionally, it is important to note that GlueVaR encompasses (right-continuous) Value-at-Risk (VaR), Conditional Value-at-Risk (CVaR), and Range Value-at-Risk (RVaR) as special cases. The corresponding distortion functions for these measures are g_{α,α}^{0,0}(x), g_{α,α}^{1,1}(x), and g_{α,β}^{0,1}(x), respectively. The graphs of these distortion functions are illustrated in Figure 1.
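The weight representation above is easy to verify numerically. The small helper below is our own sketch (not from the paper); it computes (ω1, ω2, ω3) from (α, β, r1, r2). Note that ω1 + ω2 + ω3 = 1 always, that ω3 = 1 - r2, and that the sign of ω1 matches the sign of (1-α)r1 - (1-β)r2, the quantity that separates the two cases in Section 3:

```python
def gluevar_weights(alpha, beta, r1, r2):
    """Weights in GlueVaR_{a,b}^{r1,r2} = w1*CVaR_beta + w2*CVaR_alpha + w3*VaR_alpha.

    Requires 0 <= alpha < beta < 1 and 0 <= r1 <= r2 <= 1.
    """
    w1 = r1 - (r2 - r1) * (1 - beta) / (beta - alpha)
    w2 = (r2 - r1) * (1 - alpha) / (beta - alpha)
    w3 = 1 - r2  # always equals 1 - w1 - w2
    return w1, w2, w3
```

With α = 0.95, β = 0.99, r1 = 0.5, r2 = 0.8 one gets ω3 = 0.2, while the RVaR case r1 = 0, r2 = 1 gives ω3 = 0 and a negative ω1.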

2.2. Distributionally Robust Reinsurance with GlueVaR

Suppose we have a non-negative ground-up loss X defined on an atomless probability space (Ω, F, P). The insurer has the option to transfer a portion I(X) of the loss to a reinsurer in exchange for a reinsurance premium π(I(X)). A stop-loss reinsurance contract is defined by I(X) = (X - d)_+, where d ∈ [0, ∞] is referred to as the deductible. When d = 0, we have I(X) = X, so stop-loss reinsurance reduces to full reinsurance. Conversely, if d = +∞, then I(X) = 0, indicating that no reinsurance is purchased. In a stop-loss reinsurance arrangement, the total retained loss of the insurer can be expressed as X - I(X) + π(I(X)) = X ∧ d + (1+θ) E[(X - d)_+], where π(I(X)) = (1+θ) E[I(X)] is the expected value premium and θ > 0 denotes the loading factor.
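As a concrete illustration (the numbers below are ours, not from the paper), the retained loss can be computed directly from this formula for a discrete ground-up loss:

```python
def retained_loss(atoms, probs, d, theta):
    """Insurer's retained loss X ^ d + (1 + theta) * E[(X - d)_+] under a
    stop-loss contract with deductible d and expected value premium."""
    stop_loss_premium = (1 + theta) * sum(
        p * max(x - d, 0.0) for x, p in zip(atoms, probs))
    return [min(x, d) + stop_loss_premium for x in atoms]
```

With loss atoms (0, 50, 200) carrying probabilities (0.5, 0.3, 0.2), deductible d = 100 and loading θ = 0.2, the reinsurance premium is 1.2 · 20 = 24, so the retained loss is capped at 124 in the worst scenario.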
Given a pair of non-negative mean and standard deviation (μ, σ), we define the uncertainty set F(μ, σ) as follows:
F(μ, σ) = { F is a cdf on [0, ∞) : ∫_0^∞ x dF(x) = μ, ∫_0^∞ x² dF(x) = μ² + σ² }.
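The set F(μ, σ) is nonempty and, in fact, very rich: for any choice of left atom 0 ≤ x1 < μ there is a two-point member of F(μ, σ). The helper below is our own sketch of this standard construction:

```python
def two_point(mu, sigma, x1):
    """A two-point distribution in F(mu, sigma) with atoms x1 < mu < x2.

    Writing a = mu - x1 and b = x2 - mu, a two-point distribution with
    these atoms and mean mu has variance a*b, so we take b = sigma^2 / a.
    """
    a = mu - x1
    x2 = mu + sigma**2 / a
    p1 = (x2 - mu) / (x2 - x1)   # P(X = x1); then P(X = x2) = 1 - p1
    return (x1, p1), (x2, 1.0 - p1)
```

For example, with μ = 10, σ = 5 and x1 = 0 this yields atoms 0 and 12.5 with probabilities 0.2 and 0.8, matching both prescribed moments.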
The distributionally robust reinsurance problem based on the risk measure ρ(·) is defined as
min_{d ≥ 0} sup_{F ∈ F(μ,σ)} ρ( X_F ∧ d + (1+θ) E[(X_F - d)_+] ),  (2)
where X_F is the loss with distribution function F. An optimal solution to (2) is called an optimal deductible, denoted by d*. A distribution that solves
ρ̄(d) := sup_{F ∈ F(μ,σ)} ρ( X_F ∧ d + (1+θ) E[(X_F - d)_+] )
is referred to as a worst-case distribution, and ρ̄(d) is called the worst-case value corresponding to a given d. In other words, we have ρ̄(d*) = min_{d ≥ 0} ρ̄(d).
The distributionally robust reinsurance with ρ = VaR and ρ = CVaR has been studied by [11], and when ρ is an expectile-based risk measure, it was studied by [12]. Since GlueVaR can incorporate more information about agents’ attitudes towards risk, we consider the distributionally robust reinsurance problem with GlueVaR. By utilizing the translation invariance of GlueVaR, model (2) can be reduced to
min_{d ≥ 0} sup_{F ∈ F(μ,σ)} { GlueVaR_c(X_F ∧ d) + (1+θ) E[(X_F - d)_+] },  (3)
where GlueVaR_c := GlueVaR_{α,β}^{r1,r2} with 0 ≤ α ≤ β ≤ 1 and 0 ≤ r1 ≤ r2 ≤ 1.
From the expression in Equation (1), α and β represent the levels of riskiness in GlueVaR. The larger α and β are, the more risk the insurer transfers to the reinsurer. Conversely, if α and β are small enough, the insurer retains all the risk, in line with the following proposition.
Proposition 1. 
For d ≥ 0, if the following conditions hold:
(1-α)(1+θ) ≥ 1,  (1-β)(1+θ) ≥ r1,  (4)
then the optimal deductible for problem (3) is d* = ∞.
Proof. 
We show that h_c(X_F, d) := GlueVaR_c(X_F ∧ d) + (1+θ) E[(X_F - d)_+] is decreasing in d ≥ 0, which implies that sup_{F ∈ F(μ,σ)} h_c(X_F, d) is decreasing in d ≥ 0. Using Equations (1) and (3), we obtain the following expression:
h_c(X_F, d) = ω1 CVaR_β(X_F ∧ d) + ω2 CVaR_α(X_F ∧ d) + ω3 VaR_α(X_F ∧ d) + (1+θ) E[(X_F - d)_+]
= ( r1/(1-β) - (r2-r1)/(β-α) ) ∫_β^1 VaR_u(X_F ∧ d) du + ((r2-r1)/(β-α)) ∫_α^1 VaR_u(X_F ∧ d) du + (1-r2) VaR_α(X_F ∧ d) + (1+θ) E[(X_F - d)_+].
Next, let us consider the following three cases:
(i)
If d ≤ VaR_α(X), we have h_c(X_F, d) = d + (1+θ) E[(X_F - d)_+]. Taking the derivative with respect to d, we have
h_c′(X_F, d) = 1 - (1+θ) F̄(d).
From this, we observe that h_c(X_F, d) is decreasing for d ≤ F^{-1}(θ/(1+θ)) and increasing for d ≥ F^{-1}(θ/(1+θ)). Since (1-α)(1+θ) ≥ 1, it follows that F^{-1}(θ/(1+θ)) ≥ VaR_α(X). Consequently, h_c(X_F, d) is decreasing in d ∈ [0, VaR_α(X)].
(ii)
If VaR_α(X) < d ≤ VaR_β(X), which implies α ≤ F(d) ≤ β, we have
h_c(X_F, d) = ω1 d + ((r2-r1)/(β-α)) [ ∫_α^{F(d)} VaR_u(X) du + (1 - F(d)) d ] + (1-r2) VaR_α(X) + (1+θ) E[(X_F - d)_+].
Thus, the derivative with respect to d is
h_c′(X_F, d) = ( 1 + θ - (r2-r1)/(β-α) ) F(d) - ( (1+θ) - r1 - (r2-r1)β/(β-α) ).
We will demonstrate that h c ( X F , d ) is decreasing in d by considering the following three subcases.
Subcase 1: 1 + θ > (r2 - r1)/(β - α).
In this subcase, h_c′(X_F, d) ≤ 0 if and only if F(d) ≤ H(c), where
H(c) := ( (1+θ) - r1 - (r2-r1)β/(β-α) ) / ( 1 + θ - (r2-r1)/(β-α) ).
By condition (4), we have (1-β)(1+θ) ≥ r1, which is equivalent to H(c) ≥ β. Since F(d) ≤ β, we conclude that F(d) ≤ H(c).
Subcase 2: 1 + θ < (r2 - r1)/(β - α).
In this subcase, h_c′(X_F, d) ≤ 0 if and only if F(d) ≥ H(c). By utilizing condition (4), we can verify that F(d) ≥ α ≥ H(c).
Subcase 3: 1 + θ = (r2 - r1)/(β - α).
In this subcase, we have
h_c′(X_F, d) = r1 + (r2-r1)β/(β-α) - (1+θ) = r1 - (1-β)(1+θ) ≤ 0.
(iii)
If d ≥ VaR_β(X), then
h_c(X_F, d) = (r1/(1-β)) d - (r1/(1-β)) ∫_{F^{-1}(β)}^{d} F(t) dt - (r1/(1-β)) β F^{-1}(β) + ((r2-r1)/(β-α)) ∫_α^β VaR_u(X) du + (1-r2) VaR_α(X) + (1+θ) ∫_d^{+∞} F̄(t) dt.
From this, we observe that h_c′(X_F, d) = ( r1/(1-β) - (1+θ) ) F̄(d) ≤ 0 by condition (4). Therefore, h_c(X_F, d) is decreasing for d ∈ [VaR_β(X), +∞).
Combining the above three cases, we conclude that h c ( X F , d ) is decreasing for d 0 . □
Noting that RVaR_{α,β} = GlueVaR_{α,β}^{0,1}, we can derive the following corollary as an immediate consequence of Proposition 1.
Corollary 1. 
If (1-α)(1+θ) ≥ 1, then the optimal deductible of problem (2) with ρ = RVaR_{α,β} is d* = ∞.

3. Main Results

As indicated by Proposition 1, the optimal deductible under the moderate conditions (4) is d* = ∞ whenever (1-α)(1+θ) ≥ 1, implying a scenario with no reinsurance purchase. However, practical instances often present α values near 1 and relatively small θ values, which makes the condition (1-α)(1+θ) ≥ 1 rarely satisfied. Consequently, our primary focus in this section is on the case (1-α)(1+θ) ≤ 1.
Moving forward, we proceed methodically. In Section 3.1, we transform problem (3) into a finite-dimensional and more computationally amenable problem. Subsequently, in Section 3.2, we deduce the worst-case value of problem (5). Lastly, in Section 3.3, we present the optimal deductible for the optimization problem (3).

3.1. The Problem Formulation

We first consider the inner problem of (3), which is defined as follows:
h̄_c(d) := sup_{F ∈ F(μ,σ)} { GlueVaR_c(X_F ∧ d) + (1+θ) E[(X_F - d)_+] }.  (5)
We demonstrate that, under mild conditions, if (1-α) r1 ≤ (1-β) r2, the worst-case distribution of problem (5) can be restricted to the class of three-point distributions F_3(μ,σ); conversely, when (1-α) r1 ≥ (1-β) r2, the worst-case distribution of problem (5) can be confined to the class of four-point distributions F_4(μ,σ). Here, for n = 3, 4, F_n(μ,σ) denotes the set of distributions in F(μ,σ) supported on at most n points.
We first present the following two lemmas, which will be utilized in the proof of the subsequent propositions.
Lemma 1. 
For d ≥ 0 and σ1 < σ2, let F = (x1, α; x2, p; x3, 1-α-p) ∈ F_3(μ, σ1) be a distribution with x1 < x2 ≤ d < x3. Suppose (1-α) r1 ≤ (1-β) r2. Then there exists F* ∈ F_3(μ, σ2) such that
h_c(X_{F*}, d) ≥ h_c(X_F, d),  0 ≤ α ≤ β ≤ 1, 0 ≤ r1 ≤ r2 ≤ 1.
Similar to the proof of Lemma 1, we can obtain the following Lemma.
Lemma 2. 
For d ≥ 0 and σ1 < σ2, let F = (x1, α; x2, β-α; x3, p3; x4, p4) ∈ F_4(μ, σ1) with 0 ≤ x1 < x2 < x3 ≤ d < x4 and p3 + p4 = 1 - β. There exists F* ∈ F_4(μ, σ2) such that
h_c(X_{F*}, d) ≥ h_c(X_F, d),  0 ≤ α ≤ β ≤ 1, 0 ≤ r1 ≤ r2 ≤ 1.
The next proposition states that if (1-α) r1 ≤ (1-β) r2, then the worst-case distribution of optimization problem (5) can be limited to the set of three-point distributions F_3(μ,σ).
Proposition 2. 
Suppose d ≥ 0 and (1-α) r1 ≤ (1-β) r2. The optimization problem (5) is equivalent to
sup_{F ∈ F_3(μ,σ)} { GlueVaR_c(X_F ∧ d) + (1+θ) E[(X_F - d)_+] }.
Proof. 
It suffices to show that for each F ∈ F(μ,σ), there exists a distribution F̃ ∈ F_3(μ,σ) such that h_c(X_{F̃}, d) ≥ h_c(X_F, d). One can verify that
h_c(X, d) = GlueVaR_c(X ∧ d) + (1+θ) E[(X - d)_+], if d ≥ VaR_α(X);  h_c(X, d) = d + (1+θ) E[(X - d)_+], if d < VaR_α(X).
To construct F ˜ , we consider the following three cases.
(i)
If VaR_β(X) ≤ d, let U ~ U[0, 1] be a uniform random variable such that U and X are comonotonic. Let A1 = {U ≤ α}, A2 = {U > α, X ≤ d}, A3 = {X > d}. Denote E_F[X | A_i] by x_i for i = 1, 2, 3. Define a discrete random variable
X̃(ω) = x1 if ω ∈ A1,  x2 if ω ∈ A2,  x3 if ω ∈ A3.  (7)
We claim that E[X̃] = μ, Var(X̃) ≤ σ² and h_c(X_{F̃}, d) ≥ h_c(X_F, d). Indeed, E[X̃] = μ = E[X] obviously holds. Using Hölder's inequality, we obtain that
E[X̃²] = Σ_{i=1}^{3} (1/P(A_i)) ( ∫_{A_i} X dP )² ≤ Σ_{i=1}^{3} ∫_{A_i} X² dP = E[X²].
Hence, Var(X̃) ≤ Var(X) = σ². By the definition of X̃ in (7), we obtain
GlueVaR_c(X̃ ∧ d) - GlueVaR_c(X ∧ d) = ((r2-r1)/(β-α)) J1 - (r1/(1-β)) J2 + (1-r2) J3
with
J1 = (β-α) x2 - ∫_α^β VaR_u(X) du,  J2 = ∫_β^{F(d)} VaR_u(X) du - (F(d)-β) x2,
and J3 = x2 - VaR_α(X). By (7), x2 = E[X | A2] ≥ E[X | A2^1] = (1/(β-α)) ∫_α^β VaR_u(X) du, where A2^1 = {α < U ≤ β}. Consequently, J1 ≥ 0. In a similar manner, we can confirm that J2 ≥ 0 and J3 ≥ 0. Noting that (1-α) r1 ≤ (1-β) r2 implies (r2-r1)/(β-α) ≥ r1/(1-β), we conclude that
GlueVaR_c(X̃ ∧ d) - GlueVaR_c(X ∧ d) ≥ (r1/(1-β)) (J1 - J2) + (1-r2) J3 = (r1/(1-β)) [ x2 (F(d)-α) - ∫_α^{F(d)} VaR_u(X) du ] + (1-r2) (x2 - VaR_α(X)) = (1-r2) (x2 - VaR_α(X)) ≥ 0.
The second equality holds because x2 = E[X | A2] = (1/(F(d)-α)) ∫_α^{F(d)} VaR_u(X) du. Combining this with E[(X̃ - d)_+] = (x3 - d) P(A3) = ∫_d^∞ (x - d) dF(x) = E[(X - d)_+], the claim h_c(X_{F̃}, d) ≥ h_c(X_F, d) follows. If Var(X̃) = σ², then F̃ ∈ F_3(μ,σ) and we are done. If Var(X̃) < σ², by Lemma 1, there exists a three-point distribution F* ∈ F_3(μ,σ) such that h_c(X_{F*}, d) ≥ h_c(X_{F̃}, d) ≥ h_c(X_F, d).
(ii)
If VaR_α(X) ≤ d < VaR_β(X), let A1 = {U ≤ α}, A2 = {U > α, X ≤ d}, A3^1 = {X > d, U ≤ β}, A3^2 = {U > β}, and A3 = A3^1 ∪ A3^2 = {X > d}. Denote E_F[X | A_i] by x_i, i = 1, 2, 3. Define a discrete random variable X̃(ω) as in Equation (7). Similar to the proof of (i), it is easy to show that E[X̃] = μ, Var(X̃) ≤ σ², and
GlueVaR_c(X̃ ∧ d) - GlueVaR_c(X ∧ d) = ((r2-r1)/(β-α)) [ x2 (F(d)-α) - ∫_α^{F(d)} VaR_u(X) du ] + (1-r2) (x2 - VaR_α(X)) = (1-r2) (x2 - VaR_α(X)) ≥ 0,
where the inequality follows from x2 ≥ VaR_α(X). Noting that E[(X̃ - d)_+] = E[(X - d)_+], the claim h_c(X_{F̃}, d) ≥ h_c(X_F, d) follows. If Var(X̃) = σ², then F̃ ∈ F_3(μ,σ) and we are done. If Var(X̃) < σ², by Lemma 1, there exists a three-point distribution F* ∈ F_3(μ,σ) such that h_c(X_{F*}, d) ≥ h_c(X_{F̃}, d) ≥ h_c(X_F, d). Then, the result follows.
(iii)
If VaR_α(X) > d, define a two-point random variable
X̃(ω) = x1 := E[X | X ≤ d] if X ≤ d,  x2 := E[X | X > d] if X > d.
Similar to case (i), we have E[X̃] = μ, Var(X̃) ≤ σ² and h_c(X_{F̃}, d) ≥ h_c(X_F, d). For ε ∈ [0, x1/(1-p)], define
X_ε = x1 - ε(1-p) if X ≤ d,  x2 + εp if X > d,
where p := P(X ≤ d) ∈ [0, α]. Note that E[X_ε] = μ, and Var(X_ε) is continuous and increasing in ε ≥ 0. Next, we consider the following two subcases.
(iii.a)
If there exists an ε ∈ [0, x1/(1-p)] such that Var(X_ε) = σ², then the distribution of X_ε is the desired two-point distribution.
(iii.b)
Otherwise, we have Var(X_{ε0}) < σ² with ε0 := x1/(1-p). In this case, for q ∈ [0, 1-p), we define
X_q = (0, p; d, q; y, 1-p-q)  (9)
with y := (μ - dq)/(1-p-q) > d. Then X_0 and X_{ε0} have the same distribution, E[X_q] = μ, and h_c(X_q, d) ≥ h_c(X_{ε0}, d). Noting that Var(X_q) is continuous in q ∈ [0, 1-p) and lim_{q→1-p} Var(X_q) = ∞, there exists a q* ∈ [0, 1-p) such that Var(X_{q*}) = σ². Then the distribution of X_{q*} is the desired distribution F*. □
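The last step of the proof, matching the variance by moving q toward 1 - p, is easy to mimic numerically. The sketch below is ours; it assumes, as suggested by the proof, that Var(X_q) crosses σ² exactly once on [0, 1-p), and locates q* by bisection:

```python
def match_variance(mu, sigma, p, d, iters=200):
    """Find q* such that X_q = (0, p; d, q; y, 1-p-q), with
    y = (mu - d*q)/(1 - p - q), has variance sigma^2.

    Assumes d < mu/(1-p) and Var(X_0) <= sigma^2; Var(X_q) -> infinity
    as q -> 1-p, so a crossing exists (bisection assumes it is unique).
    """
    def variance(q):
        y = (mu - d * q) / (1 - p - q)
        second_moment = q * d**2 + (1 - p - q) * y**2
        return second_moment - mu**2

    lo, hi = 0.0, (1 - p) - 1e-12
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if variance(mid) < sigma**2:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For instance, with μ = 10, p = 0.3, d = 2 the choice q = 0.3 gives y = 23.5 and variance 122.1, so feeding σ² = 122.1 to the routine recovers q* = 0.3.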
Since GlueVaR_{α,β}^{0,1}(X) = RVaR_{α,β}(X), we can derive the following result for RVaR from the proof of Proposition 2. It generalizes Proposition 1 in [11].
Corollary 2. 
For d ≥ 0, we have
sup_{F ∈ F(μ,σ)} { RVaR_{α,β}(X_F ∧ d) + (1+θ) E[(X_F - d)_+] } = sup_{F ∈ F_3(μ,σ)} { RVaR_{α,β}(X_F ∧ d) + (1+θ) E[(X_F - d)_+] }.
From the proof of Proposition 2, we can obtain the explicit form of the worst-case distributions. Specifically, we have the following result as a corollary of Proposition 2.
Corollary 3. 
For d ≥ 0, if (1-α) r1 ≤ (1-β) r2, the worst-case distribution F = (x1, p1; x2, p2; x3, p3) ∈ F_3(μ,σ) of problem (5) must lie in one of the following two sets:
F_2(μ,σ;d) := { (x1, p1; x2, p2) ∈ F(μ,σ) : p1 ≤ α, x1 ≤ d < x2 },
F_3(μ,σ;d) := { (x1, α; x2, p2; x3, p3) ∈ F(μ,σ) : x1 ≤ x2 ≤ d ≤ x3 }.
Proof. 
In case (iii.b) of the proof of Proposition 2, for the X_{q*} found there, we consider the following three cases.
(i)
If q* > β - p, then VaR_β(X_{q*}) = d. By applying case (i) of the proof of Proposition 2, we obtain a three-point distribution in F_3(μ,σ;d).
(ii)
If α - p < q* ≤ β - p, then VaR_α(X_{q*}) = d if p ≤ α, VaR_α(X_{q*}) = 0 if p > α, and VaR_β(X_{q*}) = y > d. In this case, by applying case (ii) of the proof of Proposition 2, we obtain a three-point distribution in F_3(μ,σ;d).
(iii)
If q* ≤ α - p, then VaR_α(X_{q*}) = y > d. In this case, we can apply case (iii) of the proof of Proposition 2 again. Specifically, we obtain a random variable X_{q1} defined as in (9) with q1 + p1 > q* + p. By iterating this process, we arrive at a distribution falling under case (i), case (ii), or case (iii.a) of the proof of Proposition 2 whose objective value h_c(X, d) is not less than that of the original distribution. □
The proposition below asserts that if (1-α) r1 ≥ (1-β) r2, then the worst-case distribution of optimization problem (5) can be limited to the set of four-point distributions F_4(μ,σ).
Proposition 3. 
For d ≥ 0, assume that (1-α) r1 ≥ (1-β) r2. The optimization problem (5) can be equivalently represented as follows:
sup_{F ∈ F_4(μ,σ)} { GlueVaR_c(X_F ∧ d) + (1+θ) E[(X_F - d)_+] }.
Proof. 
To prove this, it is sufficient to demonstrate that for every F ∈ F(μ,σ), there exists a distribution F̃ ∈ F_4(μ,σ) such that h_c(X_{F̃}, d) ≥ h_c(X_F, d). As before,
h_c(X, d) = GlueVaR_c(X ∧ d) + (1+θ) E[(X - d)_+], if d ≥ VaR_α(X);  h_c(X, d) = d + (1+θ) E[(X - d)_+], if d < VaR_α(X).
To construct F ˜ , we consider the following three cases.
(i)
If VaR_β(X) ≤ d, let U ~ U[0, 1] be a uniform random variable that is comonotonic with X. Define the events A1 = {U ≤ α}, A2 = {α < U ≤ β}, A3 = {U > β, X ≤ d}, and A4 = {X > d}. Denote E_F[X | A_i] by x_i for i = 1, 2, 3, 4. We define a discrete random variable as follows:
X̃(ω) = x1 if ω ∈ A1,  x2 if ω ∈ A2,  x3 if ω ∈ A3,  x4 if ω ∈ A4.
We claim that E[X̃] = μ, Var(X̃) ≤ σ², and h_c(X_{F̃}, d) ≥ h_c(X_F, d). By applying Hölder's inequality, we obtain that
E[X̃²] = Σ_{i=1}^{4} (1/P(A_i)) ( ∫_{A_i} X dP )² ≤ Σ_{i=1}^{4} ∫_{A_i} X² dP = E[X²].
Thus, Var(X̃) ≤ Var(X) = σ². From the definition of X̃, it follows that
GlueVaR_c(X̃ ∧ d) - GlueVaR_c(X ∧ d) = (r1/(1-β)) [ (F(d)-β) x3 - ∫_β^{F(d)} VaR_u(X) du ] + ((r2-r1)/(β-α)) [ (β-α) x2 - ∫_α^β VaR_u(X) du ] + (1-r2) (x2 - VaR_α(X)) ≥ 0.
Since
E[(X̃ - d)_+] = (x4 - d) P(A4) = ∫_d^∞ (x - d) dF(x) = E[(X - d)_+],
it follows that h_c(X_{F̃}, d) ≥ h_c(X_F, d).
If Var(X̃) = σ², then F̃ ∈ F_4(μ,σ) and we are done. If Var(X̃) < σ², by Lemma 2, there exists a four-point distribution F* ∈ F_4(μ,σ) such that E[(X_{F*} - d)_+] = E[(X_{F̃} - d)_+] = E[(X_F - d)_+] and GlueVaR_c(X_{F*} ∧ d) ≥ GlueVaR_c(X_{F̃} ∧ d) ≥ GlueVaR_c(X_F ∧ d). Thus, we have h_c(X_{F*}, d) ≥ h_c(X_{F̃}, d) ≥ h_c(X_F, d).
(ii)
If VaR_α(X) ≤ d < VaR_β(X), let U ~ U[0, 1] be comonotonic with X. Let A1 = {U ≤ α}, A2 = {α < U, X ≤ d}, A3 = {X > d}. Denote E_F[X | A_i] by x_i for i = 1, 2, 3. Defining a discrete random variable X̃(ω) as in (7), we can determine that h_c(X_{F̃}, d) ≥ h_c(X_F, d).
If Var(X̃) = σ², then F̃ ∈ F_3(μ,σ) ⊆ F_4(μ,σ) and we are done. If Var(X̃) < σ², by Corollary 2, there exists a three-point distribution F* ∈ F_3(μ,σ) such that h_c(X_{F*}, d) ≥ h_c(X_{F̃}, d) ≥ h_c(X_F, d).
(iii)
If d < VaR_α(X), the proof is similar to that of case (iii) in Proposition 2, so we omit it here. □

3.2. Worst-Case Value

In this subsection, we first present the worst-case value of problem (5) when (1-α) r1 ≤ (1-β) r2. Subsequently, we derive the worst-case value for the case (1-α) r1 ≥ (1-β) r2.
Corollary 3 implies that when (1-α) r1 ≤ (1-β) r2, we have
h̄_c(d) = sup_{F ∈ F(μ,σ)} h_c(X_F, d) = max{ sup_{F ∈ F_2(μ,σ;d)} h_c(X_F, d), sup_{F ∈ F_3(μ,σ;d)} h_c(X_F, d) }.
Define F̃_3(μ,σ) := { (x1, α; x2, p2; x3, p3) ∈ F(μ,σ) : x1 ≤ x2 ≤ x3 }. Then F_3(μ,σ;d) ⊆ F̃_3(μ,σ) ⊆ F_3(μ,σ), and we can express the supremum as follows:
h̄_c(d) = max{ sup_{F ∈ F_2(μ,σ;d)} h_c(X_F, d), sup_{F ∈ F̃_3(μ,σ)} h_c(X_F, d) }.
Next, we demonstrate that sup_{F ∈ F̃_3(μ,σ)} h_c(X_F, d) is increasing with respect to σ. This implies that the uncertainty set F̃_3(μ,σ) can be substituted with the following set:
F̃_3^-(μ,σ) := { F ∈ F̃_3(μ, σ0) : σ0 ≤ σ }.
Specifically, we can state the following result.
Lemma 3. 
For d ≥ 0, μ ≥ 0, and σ > 0, if (1-α) r1 ≤ (1-β) r2, we have
h̄_c(d) = max{ sup_{F ∈ F_2(μ,σ;d)} h_c(X_F, d), sup_{F ∈ F_3^*(μ,σ;d)} h_c(X_F, d) },
where F_3^*(μ,σ;d) is any uncertainty set satisfying
F_3(μ,σ;d) ⊆ F_3^*(μ,σ;d) ⊆ F̃_3^-(μ,σ).
In the following theorem, we provide the worst-case value for problem (5) under the conditions (1-α)(1+θ) ≤ 1 and (1-α) r1 ≤ (1-β) r2.
Theorem 1. 
Let d ≥ 0, and suppose (1-α)(1+θ) ≤ 1 and (1-α) r1 ≤ (1-β) r2.
(i)
If α < σ²/(μ² + σ²), then the worst-case value of problem (5) is
h̄_c(d) = μ/(1-α) ∧ d + (1+θ)(1-α) ( μ/(1-α) - d )_+.  (14)
(ii)
If α ≥ σ²/(μ² + σ²), then the worst-case value of problem (5) is
h̄_c(d) =
  d + (1+θ) μ (1 - μ d/(μ² + σ²)),  for d ≤ d1,
  d + ((1+θ)/2) ( μ - d + √(σ² + (μ - d)²) ),  for d1 ≤ d ≤ d2,
  d + (1+θ)(1-α) ( μ - d + σ √(α/(1-α)) ),  for d2 ≤ d ≤ d3,
  μ + σ √(α/(1-α)),  otherwise,  (15)
where
d1 = (μ² + σ²)/(2μ),  d2 = μ - σ (1 - 2α)/(2√(α(1-α))),  d3 = μ + σ √(α/(1-α)).  (16)
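The piecewise expression (15) glues together continuously at d1, d2 and d3, which is easy to confirm numerically. The sketch below is our own implementation of case (ii) of Theorem 1 (function name and parameters are ours):

```python
from math import sqrt

def worst_case_value(d, mu, sigma, alpha, theta):
    """Worst-case value (15) of Theorem 1, case alpha >= sigma^2/(mu^2+sigma^2)."""
    d1 = (mu**2 + sigma**2) / (2 * mu)
    d2 = mu - sigma * (1 - 2 * alpha) / (2 * sqrt(alpha * (1 - alpha)))
    d3 = mu + sigma * sqrt(alpha / (1 - alpha))
    if d <= d1:
        return d + (1 + theta) * mu * (1 - mu * d / (mu**2 + sigma**2))
    if d <= d2:
        return d + (1 + theta) / 2 * (mu - d + sqrt(sigma**2 + (mu - d)**2))
    if d <= d3:
        return d + (1 + theta) * (1 - alpha) * (mu - d + sigma * sqrt(alpha / (1 - alpha)))
    return mu + sigma * sqrt(alpha / (1 - alpha))
```

For example, with μ = 10, σ = 4, α = 0.9, θ = 0.1 the thresholds are d1 = 5.8, d2 ≈ 15.33, d3 = 22, and the value passes continuously through each of them.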
Since VaR_α(X) = GlueVaR_{α,α}^{0,0}(X) and CVaR_α(X) = GlueVaR_{α,α}^{1,1}(X), we can derive the following corollary from Theorem 1.
Corollary 4. 
(Mao and Hu, 2022). For d ≥ 0 and α ∈ (0, 1), the worst-case values of the following two optimization problems
VaR̄_α(d) := sup_{F ∈ F(μ,σ)} VaR_α{ X_F ∧ d + (1+θ) E[(X_F - d)_+] },  CVaR̄_α(d) := sup_{F ∈ F(μ,σ)} CVaR_α{ X_F ∧ d + (1+θ) E[(X_F - d)_+] }
coincide; both equal (14) for α < σ²/(μ² + σ²), and equal (15) for α ≥ σ²/(μ² + σ²).
For ξ ∈ [0, 1), let us define the following functions:
L1(ξ, d) := μ/(1-ξ) ∧ d + (1+θ)(1-ξ) ( μ/(1-ξ) - d )_+,
L2(d) := d + (1+θ) μ (1 - μ d/(μ² + σ²)),
L3(d) := d + ((1+θ)/2) ( μ - d + √(σ² + (μ - d)²) ),
L4(ξ, d) := d + (1+θ)(1-ξ) ( μ - d + σ √(ξ/(1-ξ)) ),
L5(ξ, d) := μ + σ √(ξ/(1-ξ)).
Next, we define the constants
d2′ = μ - σ (1 - 2β)/(2√(β(1-β))),  d3′ = μ + σ √(β/(1-β)).  (17)
It can be shown that d2 ≤ d2′ and d3 ≤ d3′. Additionally,
d2′ ≤ d3 for 2β - 1 ≤ √α,  d2′ ≥ d3 for 2β - 1 ≥ √α.
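The ordering of the breakpoints, and the role of the boundary 2β - 1 = √α, can be confirmed numerically (the helper below is ours, not from the paper):

```python
from math import sqrt

def breakpoints(mu, sigma, alpha, beta):
    """d2, d3 at level alpha and the analogous d2', d3' at level beta, cf. (16)-(17)."""
    def d2(xi):
        return mu - sigma * (1 - 2 * xi) / (2 * sqrt(xi * (1 - xi)))
    def d3(xi):
        return mu + sigma * sqrt(xi / (1 - xi))
    return d2(alpha), d3(alpha), d2(beta), d3(beta)
```

With α = 0.25 the boundary is β = (1 + √α)/2 = 0.75: for β = 0.7 one finds d2′ < d3, for β = 0.8 one finds d2′ > d3, and at β = 0.75 the two coincide.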
Since (1-α) r1 ≥ (1-β) r2, we can show that ω_i ∈ [0, 1] for i = 1, 2, 3. Then, we have
h̄_c(d) = ω1 CVaR̄_β(d) + ω2 CVaR̄_α(d) + ω3 VaR̄_α(d).
According to Corollary 4, we have VaR̄_α(d) = CVaR̄_α(d). Therefore, we rewrite the above equation as
h̄_c(d) = ω1 CVaR̄_β(d) + (1 - ω1) CVaR̄_α(d).
Theorem 2. 
For d ≥ 0, under the conditions (1-α)(1+θ) ≤ 1 and (1-α) r1 ≥ (1-β) r2, we have the following.
(i)
If β < σ²/(μ² + σ²), the worst-case value of problem (5) is given by
h̄_c(d) = ω1 L1(β, d) + (1 - ω1) L1(α, d).
(ii)
If α < σ²/(μ² + σ²) and β ≥ σ²/(μ² + σ²), we have
h̄_c(d) =
  ω1 L2(d) + (1 - ω1) L1(α, d),  for d ≤ d1,
  ω1 L3(d) + (1 - ω1) L1(α, d),  for d1 ≤ d ≤ d2′,
  ω1 L4(β, d) + (1 - ω1) L1(α, d),  for d2′ ≤ d ≤ d3′,
  ω1 L5(β, d) + (1 - ω1) L1(α, d),  otherwise.
(iii)
If σ²/(μ² + σ²) ≤ α ≤ β and 2β - 1 ≤ √α, the worst-case value of problem (5) is given by
h̄_c(d) =
  L2(d),  for d ≤ d1,
  L3(d),  for d1 ≤ d ≤ d2,
  ω1 L3(d) + (1 - ω1) L4(α, d),  for d2 ≤ d ≤ d2′,
  ω1 L4(β, d) + (1 - ω1) L4(α, d),  for d2′ ≤ d ≤ d3,
  ω1 L4(β, d) + (1 - ω1) L5(α, d),  for d3 ≤ d ≤ d3′,
  ω1 L5(β, d) + (1 - ω1) L5(α, d),  otherwise.
(iv)
If σ²/(μ² + σ²) ≤ α ≤ β and 2β - 1 ≥ √α, then
h̄_c(d) =
  L2(d),  for d ≤ d1,
  L3(d),  for d1 ≤ d ≤ d2,
  ω1 L3(d) + (1 - ω1) L4(α, d),  for d2 ≤ d ≤ d3,
  ω1 L3(d) + (1 - ω1) L5(α, d),  for d3 ≤ d ≤ d2′,
  ω1 L4(β, d) + (1 - ω1) L5(α, d),  for d2′ ≤ d ≤ d3′,
  ω1 L5(β, d) + (1 - ω1) L5(α, d),  otherwise,
where d1, d2, d3 are defined in (16) and d2′, d3′ are defined in (17).

3.3. Optimal Deductible

In this subsection, we provide the explicit solutions for the optimization problems (3) and (5). From Proposition 1, we know that under conditions (4) the worst-case value h̄_c(d) is decreasing in d, so the optimal deductible is d* = ∞. Based on this, we can state the following theorem.
Theorem 3. 
If (1-α)(1+θ) ≥ 1 and (1-β)(1+θ) ≥ r1, then an optimal deductible of the optimization problem (3) is d* = ∞, and the optimal value is given by
h̄_c(d*) = μ/(1-α), for α < σ²/(μ² + σ²);  h̄_c(d*) = μ + σ √(α/(1-α)), for α ≥ σ²/(μ² + σ²).  (19)
Proof. 
By Proposition 1, the optimal deductible of the optimization problem is d* = ∞. Then, based on Proposition 2, we obtain the optimal value given by (19). □
When (1-α)(1+θ) ≤ 1, the optimal deductible and the optimal value are the same regardless of whether (1-α) r1 ≤ (1-β) r2 holds. Specifically, we have the following theorem.
Theorem 4. 
If (1-α)(1+θ) ≤ 1, then the optimal deductible for the optimization problem (3) is given by
d* = 0, for θ ≤ σ²/μ²;  d* = μ - σ (1-θ)/(2√θ), for θ > σ²/μ²,  (20)
and the optimal value is given by
h̄_c(d*) = (1+θ) μ, for θ ≤ σ²/μ²;  h̄_c(d*) = μ + σ √θ, for θ > σ²/μ².  (21)
Proof. 
For $(1-\alpha)(1+\theta)\le1$ and $(1-\alpha)r_1\ge(1-\beta)r_2$, according to Corollary 4, the worst-case value of the problem for $\mathrm{GlueVaR}_{\alpha,\beta}^{r_1,r_2}$ is exactly the same as that for $\mathrm{CVaR}_{\alpha}$ and $\mathrm{VaR}_{\alpha}$. Therefore, the optimal deductible remains the same in both cases. Referring to Theorem 1 of [11], the optimal deductible is given by (20), and the optimal value is given by (21).
For $(1-\alpha)(1+\theta)\le1$ and $(1-\alpha)r_1\le(1-\beta)r_2$, we consider the following cases.
(i)
For $\beta<\sigma^2/(\mu^2+\sigma^2)$, we have
$$h_c(d)=\omega_1L_1(\beta,d)+(1-\omega_1)L_1(\alpha,d),$$
which is increasing and continuous in $d$ for $d\ge0$. Therefore, the optimal deductible is $d^*=0$, and the optimal value is as follows:
$$h_c(d^*)=\omega_1L_1(\beta,0)+(1-\omega_1)L_1(\alpha,0)=\omega_1(1+\theta)(1-\beta)\frac{\mu}{1-\beta}+(1-\omega_1)(1+\theta)(1-\alpha)\frac{\mu}{1-\alpha}=(1+\theta)\mu.$$
(ii)
If $\alpha<\sigma^2/(\mu^2+\sigma^2)$ and $\beta\ge\sigma^2/(\mu^2+\sigma^2)$, we have
$$h_c(d)=\begin{cases}\omega_1L_2(d)+(1-\omega_1)L_1(\alpha,d),&0\le d\le d_1,\\ \omega_1L_3(d)+(1-\omega_1)L_1(\alpha,d),& d_1\le d\le d_2,\\ \omega_1L_4(\beta,d)+(1-\omega_1)L_1(\alpha,d),& d_2\le d\le d_3,\\ \omega_1L_5(\beta,d)+(1-\omega_1)L_1(\alpha,d),&\text{otherwise}.\end{cases}$$
Thus, the derivative with respect to $d$ is
$$\frac{\partial h_c(d)}{\partial d}=\begin{cases}\omega_1\left(1-(1+\theta)\dfrac{\mu^2}{\mu^2+\sigma^2}\right)+(1-\omega_1)\dfrac{\partial L_1(\alpha,d)}{\partial d},&0\le d\le d_1,\\ \omega_1\big(1-(1+\theta)C_{\mu,\sigma}(d)\big)+(1-\omega_1)\dfrac{\partial L_1(\alpha,d)}{\partial d},& d_1\le d\le d_2,\\ \omega_1\big(1-(1+\theta)(1-\beta)\big)+(1-\omega_1)\dfrac{\partial L_1(\alpha,d)}{\partial d},& d_2\le d\le d_3,\\ (1-\omega_1)\dfrac{\partial L_1(\alpha,d)}{\partial d},&\text{otherwise},\end{cases}$$
where $C_{\mu,\sigma}(d)=\frac12+\frac{\mu-d}{2\sqrt{\sigma^2+(\mu-d)^2}}$ and $\frac{\partial L_1(\alpha,d)}{\partial d}=\big(1-(1+\theta)(1-\alpha)\big)\,\mathbb{I}\!\left(0<d\le\frac{\mu}{1-\alpha}\right)\ge0$. Since $(1-\alpha)(1+\theta)\le1$ and $\alpha<\sigma^2/(\mu^2+\sigma^2)$, we can derive the inequality $(1+\theta)\mu^2/(\mu^2+\sigma^2)\le1$. This can also be expressed equivalently as $\theta\le\sigma^2/\mu^2$. Then, we can prove that $\frac{\partial h_c(d)}{\partial d}\ge0$ for all $d>0$. This implies that $d^*=0$, and $h_c(d^*)=(1+\theta)\mu$.
(iii)
If $\alpha\ge\sigma^2/(\mu^2+\sigma^2)$ and $2\beta-1\le\sqrt{\alpha}$, then we can conclude that $d_2'\le d_3$, and the expression for $h_c(d)$ is as follows:
$$h_c(d)=\begin{cases}L_2(d),& d\le d_1,\\ L_3(d),& d_1\le d\le d_2,\\ \omega_1L_3(d)+(1-\omega_1)L_4(\alpha,d),& d_2\le d\le d_2',\\ \omega_1L_4(\beta,d)+(1-\omega_1)L_4(\alpha,d),& d_2'\le d\le d_3,\\ \omega_1L_4(\beta,d)+(1-\omega_1)L_5(\alpha,d),& d_3\le d\le d_3',\\ \omega_1L_5(\beta,d)+(1-\omega_1)L_5(\alpha,d),&\text{otherwise}.\end{cases}$$
Then, the derivative with respect to $d$ is given by
$$\frac{\partial h_c(d)}{\partial d}=\begin{cases}1-(1+\theta)\dfrac{\mu^2}{\mu^2+\sigma^2},&0<d\le d_1,\\ 1-(1+\theta)\left(\dfrac12+\dfrac{\mu-d}{2\sqrt{\sigma^2+(\mu-d)^2}}\right),& d_1\le d\le d_2,\\ \omega_1\left(1-(1+\theta)\left(\dfrac12+\dfrac{\mu-d}{2\sqrt{\sigma^2+(\mu-d)^2}}\right)\right)+(1-\omega_1)\big(1-(1+\theta)(1-\alpha)\big),& d_2\le d\le d_2',\\ \omega_1\big(1-(1+\theta)(1-\beta)\big)+(1-\omega_1)\big(1-(1+\theta)(1-\alpha)\big),& d_2'\le d\le d_3,\\ \omega_1\big(1-(1+\theta)(1-\beta)\big),& d_3\le d\le d_3',\\ 0,&\text{otherwise}.\end{cases}$$
One can observe that $h_c(d)$ is continuous for $d>0$; we now consider two subcases.
(iii.1)
If $(1+\theta)\mu^2/(\mu^2+\sigma^2)\le1$, which is equivalent to $\theta\le\sigma^2/\mu^2$, then we have $\frac{\partial h_c(d)}{\partial d}\ge0$ for all $d\ge0$. Therefore, the optimal deductible is $d^*=0$, and $h_c(d^*)=(1+\theta)\mu$.
(iii.2)
If $(1+\theta)\mu^2/(\mu^2+\sigma^2)>1$, or equivalently $\theta>\sigma^2/\mu^2$, we see that the second-order derivative of $h_c(d)$ with respect to $d$ on $(d_1,d_2)$ is
$$\frac{\partial^2h_c(d)}{\partial d^2}=\frac{1+\theta}{2\sqrt{\sigma^2+(\mu-d)^2}}\left(1-\frac{(\mu-d)^2}{\sigma^2+(\mu-d)^2}\right)>0.$$
Noting that
$$\lim_{d\to d_1^+}\frac{\partial h_c(d)}{\partial d}=1-(1+\theta)\frac{\mu^2}{\mu^2+\sigma^2}<0,\qquad\lim_{d\to d_2^-}\frac{\partial h_c(d)}{\partial d}=1-(1+\theta)(1-\alpha)\ge0,$$
with $d\in[d_1,d_2]$. Then, in the interval $(d_1,d_2]$, there exists a unique $\hat d$ such that $\frac{\partial h_c(d)}{\partial d}\big|_{d=\hat d}=0$. Furthermore, we have that $\frac{\partial h_c(d)}{\partial d}\le0$ for $d\le d_1$, and $\frac{\partial h_c(d)}{\partial d}\ge0$ for $d\ge d_2$. Consequently, $h_c(d)$ initially decreases for $d\in(0,\hat d\,]$ and then increases for $d\in[\hat d,\infty)$. Therefore, the minimum value of $h_c(d)$ is attained at $\hat d$. Solving the equation $\frac{\partial h_c(d)}{\partial d}=0$, we can determine the value of $\hat d$, and
$$d^*=\hat d=\mu-\frac{(1-\theta)\sigma}{2\sqrt{\theta}},\qquad h_c(d^*)=\mu+\sigma\sqrt{\theta}.$$
(iv)
If $\alpha\ge\sigma^2/(\mu^2+\sigma^2)$ and $2\beta-1\ge\sqrt{\alpha}$, the proof is similar to that of case (iii), with the same optimal deductible and optimal value.
Combining the above four cases, we conclude that when $(1-\alpha)(1+\theta)\le1$, an optimal deductible of the optimization problem (3) is given by (20), and the optimal value is given by (21). □
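As a sanity check on Theorem 4, the closed forms (20) and (21) can be compared numerically with the case-(iii.2) objective $h_c(d)=d+\frac{1+\theta}{2}\big(\mu-d+\sqrt{\sigma^2+(\mu-d)^2}\big)$ on the interval containing $\hat d$. The sketch below uses illustrative parameter values of our own choosing:

```python
import math

def optimal_deductible(mu, sigma, theta):
    """Optimal deductible d* and value h_c(d*) from Theorem 4,
    valid when (1 - alpha)(1 + theta) <= 1."""
    if theta <= sigma**2 / mu**2:
        return 0.0, (1 + theta) * mu
    d_star = mu - sigma * (1 - theta) / (2 * math.sqrt(theta))
    return d_star, mu + sigma * math.sqrt(theta)

def h_middle(d, mu, sigma, theta):
    """Case (iii.2) objective on the interval containing d-hat:
    h(d) = d + (1+theta)/2 * (mu - d + sqrt(sigma^2 + (mu-d)^2))."""
    return d + (1 + theta) / 2 * (mu - d + math.sqrt(sigma**2 + (mu - d)**2))

# Illustrative parameters (our own choice) with theta > sigma^2/mu^2.
mu, sigma, theta = 2.0, 1.0, 0.5
d_star, value = optimal_deductible(mu, sigma, theta)
# The stationary point reproduces the closed-form value mu + sigma*sqrt(theta),
# and nearby deductibles are no better (the objective is convex).
assert abs(h_middle(d_star, mu, sigma, theta) - value) < 1e-12
assert h_middle(d_star - 0.1, mu, sigma, theta) >= value
assert h_middle(d_star + 0.1, mu, sigma, theta) >= value
```

With these parameters, $d^*=2-\tfrac{0.5}{2\sqrt{0.5}}\approx1.6464$ and $h_c(d^*)=2+\sqrt{0.5}\approx2.7071$.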

4. Conclusions

In this paper, we examine a distributionally robust reinsurance problem with the risk measure GlueVaR (Glue Value-at-Risk) under model uncertainty. As demonstrated in the main results, the parameters $\theta$ and $(\alpha,\beta,r_1,r_2)$ significantly impact the optimal reinsurance design. If $\theta$ is sufficiently large such that $(1-\alpha)(1+\theta)>1$ and $(1-\beta)(1+\theta)\ge r_1$, then the insurer will not transfer any risk to the reinsurer. A smaller value of $\theta$ leads to a lower optimal deductible. The parameters $\alpha$ and $\beta$ represent the confidence levels of GlueVaR used to measure riskiness. Higher values of $\alpha$ and $\beta$ indicate that the insurer transfers more risk to the reinsurer. This observation aligns with the results stated in Proposition 1 and Theorem 4. The values of $r_1$ and $r_2$ determine the weights of $\mathrm{CVaR}_\beta$, $\mathrm{CVaR}_\alpha$, and $\mathrm{VaR}_\alpha$. When $(1-\alpha)(1+\theta)>1$ and $r_1<(1+\theta)(1-\beta)$, the optimal deductible is infinite ($d^*=\infty$). This means that the insurer will not purchase reinsurance. When $(1-\alpha)(1+\theta)\le1$, regardless of whether $r_1<(1+\theta)(1-\beta)$ holds or not, the optimal deductible is zero ($d^*=0$) when $\theta\le\sigma^2/\mu^2$.
By demonstrating that the worst-case distribution must fall within the set of four-point distributions or three-point distributions, we have effectively transformed the infinite-dimensional minimax problem into a finite-dimensional optimization problem. This reduction allows us to solve the problem in a tractable manner. We have provided closed-form optimal deductibles and optimal values for the distributionally robust reinsurance problem with GlueVaR. Our result generalizes the result in [11], although our proof differs from that in [11].

Author Contributions

Conceptualization, W.L.; methodology, W.L.; software, W.L. and L.W.; validation, W.L.; formal analysis, W.L. and L.W.; investigation, W.L.; resources, W.L.; writing—original draft preparation, W.L.; writing—review and editing, W.L. and L.W.; visualization, W.L. and L.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Scientific Research Project of Chuzhou University (No.zrjz2021019) and Scientific Research Program for Universities in Anhui Province (Project Nos. 2023AH051573, 2023AH051582).

Data Availability Statement

The data that support the analysis of this study are openly available at https://cn.investing.com/ (accessed on 12 May 2023).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proofs of Lemma 1 and Lemma 3

Proof of Lemma 1. 
Let $U\sim U[0,1]$ represent a uniform random variable that is comonotonic with $X_F$. We define the following sets: $A_1=\{U\le\alpha\}$, $A_2=\{\alpha<U\le\alpha+p+\varepsilon\}$, and $A_3=\{U>\alpha+p+\varepsilon\}$, where $\varepsilon\in[0,1-\alpha-p)$. We define a random variable $\tilde X_\varepsilon$ with a three-point distribution as follows:
$$\tilde X_\varepsilon(\omega)=\begin{cases}x_1,&\omega\in A_1,\\[4pt] \dfrac{px_2+d\varepsilon}{p+\varepsilon},&\omega\in A_2,\\[6pt] \dfrac{(1-\alpha-p)x_3-d\varepsilon}{1-\alpha-p-\varepsilon},&\omega\in A_3.\end{cases}$$
It is easy to verify that $E[\tilde X_\varepsilon]=\mu=E[X_F]$, $\mathrm{Var}(\tilde X_0)=\mathrm{Var}(X_F)$, and $\lim_{\varepsilon\uparrow1-\alpha-p}\mathrm{Var}(\tilde X_\varepsilon)=\infty$. By the continuity of $\mathrm{Var}(\tilde X_\varepsilon)$ with respect to $\varepsilon$, there exists an $\varepsilon^*\in(0,1-\alpha-p)$ such that $\mathrm{Var}(\tilde X_{\varepsilon^*})=\sigma_2^2$. Suppose $F^*$ is the distribution of $\tilde X_{\varepsilon^*}$. Then
$$E[(X_{F^*}-d)_+]=\left(\frac{(1-\alpha-p)x_3-d\varepsilon^*}{1-\alpha-p-\varepsilon^*}-d\right)(1-\alpha-p-\varepsilon^*)=(1-\alpha-p)(x_3-d)=E[(X_F-d)_+],$$
and
$$\mathrm{GlueVaR}_c(X_{F^*}\wedge d)=\begin{cases}x_2^*+\dfrac{r_1}{1-\beta}(1-\alpha_p^*)(d-x_2^*),&\beta\in[\alpha,\alpha_p^*),\\[6pt] x_2^*+\left(r_2-\dfrac{(r_2-r_1)(p+\varepsilon^*)}{\beta-\alpha}\right)(d-x_2^*),&\beta\in[\alpha_p^*,1],\end{cases}$$
where $\alpha_p^*:=\alpha+p+\varepsilon^*$ and $x_2^*:=(px_2+d\varepsilon^*)/(p+\varepsilon^*)$. Similarly, we can show that
$$\mathrm{GlueVaR}_c(X_F\wedge d)=\begin{cases}x_2+\dfrac{r_1}{1-\beta}(1-\alpha-p)(d-x_2),&\beta\in[\alpha,\alpha+p),\\[6pt] x_2+\left(r_2-\dfrac{p(r_2-r_1)}{\beta-\alpha}\right)(d-x_2),&\beta\in[\alpha+p,1].\end{cases}$$
Thus, we have
$$\mathrm{GlueVaR}_c(X_{F^*}\wedge d)-\mathrm{GlueVaR}_c(X_F\wedge d)=\begin{cases}\left(1-\dfrac{(1-\alpha)r_1}{1-\beta}\right)(x_2^*-x_2),&\beta\in[\alpha,\alpha+p),\\ L(c,\varepsilon^*)(d-x_2),&\beta\in[\alpha+p,\alpha_p^*),\\ (1-r_2)(x_2^*-x_2),&\beta\in[\alpha_p^*,1],\end{cases}$$
where
$$L(c,\varepsilon^*):=\frac{\varepsilon^*}{p+\varepsilon^*}+\frac{pr_1(1-\alpha_p^*)}{(p+\varepsilon^*)(1-\beta)}-r_2+\frac{p(r_2-r_1)}{\beta-\alpha}.$$
For $\beta\in[\alpha,\alpha+p)\cup[\alpha_p^*,1]$, noting that $(1-\alpha)r_1/(1-\beta)\le r_2\le1$ and $x_2^*\ge x_2$, we can show that
$$\mathrm{GlueVaR}_c(X_{F^*}\wedge d)\ge\mathrm{GlueVaR}_c(X_F\wedge d).$$
For $\beta\in[\alpha+p,\alpha_p^*)$, taking the derivative with respect to $r_1$ yields
$$\frac{\partial L(c,\varepsilon^*)}{\partial r_1}=\frac{p(1-\alpha_p^*)}{(p+\varepsilon^*)(1-\beta)}-\frac{p}{\beta-\alpha}=\frac{p(1-\alpha)(\beta-\alpha_p^*)}{(p+\varepsilon^*)(1-\beta)(\beta-\alpha)}\le0,$$
where the inequality follows from $\beta\in[\alpha+p,\alpha_p^*)$. Thus, $L(c,\varepsilon^*)(d-x_2)$ is decreasing in $r_1$. Since $(1-\alpha)r_1\le(1-\beta)r_2$, we obtain $r_1\in[0,(1-\beta)r_2/(1-\alpha)]$. When $r_1=(1-\beta)r_2/(1-\alpha)$, we determine that
$$L(c,\varepsilon^*)=\frac{\varepsilon^*}{p+\varepsilon^*}+\frac{p(1-\alpha_p^*)r_2}{(p+\varepsilon^*)(1-\alpha)}-r_2+\frac{p}{1-\alpha}r_2=\frac{\varepsilon^*}{p+\varepsilon^*}+\frac{pr_2}{p+\varepsilon^*}-r_2=\frac{\varepsilon^*(1-r_2)}{p+\varepsilon^*}\ge0.$$
Thus, $L(c,\varepsilon^*)\ge0$ for $r_1\in[0,(1-\beta)r_2/(1-\alpha)]$. Noting that $d\ge x_2$, we obtain that $L(c,\varepsilon^*)(d-x_2)\ge0$ for $r_1\in[0,(1-\beta)r_2/(1-\alpha)]$, and therefore
$$\mathrm{GlueVaR}_c(X_{F^*}\wedge d)\ge\mathrm{GlueVaR}_c(X_F\wedge d)\quad\text{for }\beta\in[\alpha+p,\alpha_p^*).$$
Combining the above three cases, the inequality holds for all $\beta\in[\alpha,1]$.
This completes the proof. □
Proof of Lemma 3. 
To establish (13), it is sufficient to demonstrate that
$$\sup_{F\in\bigcup_{0<\sigma_1\le\sigma}\mathcal F_3(\mu,\sigma_1)}h_c(X_F,d)=\sup_{F\in\mathcal F_3(\mu,\sigma)}h_c(X_F,d).$$
For any distribution $F=(x_1,\alpha;\,x_2,p;\,x_3,1-\alpha-p)\in\mathcal F_3(\mu,\sigma_1)$ with $0<\sigma_1\le\sigma$, as shown in the proof of Proposition 2, there exists a three-point distribution $F^*\in\mathcal F_3(\mu,\sigma)$ such that $h_c(X_F,d)\le h_c(X_{F^*},d)$. Consequently, we obtain that
$$\sup_{F\in\bigcup_{0<\sigma_1\le\sigma}\mathcal F_3(\mu,\sigma_1)}h_c(X_F,d)\le\sup_{F\in\mathcal F_3(\mu,\sigma)}h_c(X_F,d).$$
Since $\mathcal F_3(\mu,\sigma)\subseteq\bigcup_{0<\sigma_1\le\sigma}\mathcal F_3(\mu,\sigma_1)$, the reverse inequality is trivially satisfied. Thus, we have completed the proof of Lemma 3. □

Appendix B. Proof of Theorem 1

The proof of Theorem 1 follows an idea similar to that of Theorem 1 in [11], so it is deferred to this appendix.
Proof. 
Define $\widetilde{\mathcal F}_3(\mu,\sigma;d)=\bigcup_{\sigma_1\le\sigma}\mathcal F_3(\mu,\sigma_1;d)$. By Lemma 3, we prove the theorem in the following two cases.
(i)
For any $F\in\mathcal F_2(\mu,\sigma;d)$, we have $h_c(X_F,d)=d+(1+\theta)(1-p_1)(x_2-d)$. Then, the optimization problem $\sup_{F\in\mathcal F_2(\mu,\sigma;d)}h_c(X_F,d)$ is equivalent to maximizing
$$d+(1+\theta)(1-p_1)(x_2-d),$$
subject to
$$p_1x_1+(1-p_1)x_2=\mu,\quad p_1x_1^2+(1-p_1)x_2^2=\mu^2+\sigma^2,\quad 0<p_1\le\alpha,\quad 0\le x_1\le d<x_2.$$
Solving the first two equations in (A6), we obtain that
$$x_1=\mu-\sigma\sqrt{\frac{1-p_1}{p_1}},\qquad x_2=\mu+\sigma\sqrt{\frac{p_1}{1-p_1}}.$$
As $x_1\ge0$, we have $p_1\ge\frac{\sigma^2}{\sigma^2+\mu^2}$. Hence, the optimization problem (A5) is reduced to
$$\max_{p_1\in\left[\frac{\sigma^2}{\sigma^2+\mu^2},\,\alpha\right]}Q(p_1),$$
subject to
$$\mu-\sigma\sqrt{\frac{1-p_1}{p_1}}\le d<\mu+\sigma\sqrt{\frac{p_1}{1-p_1}},$$
where $Q(p_1)=d+(1-p_1)(1+\theta)\left(\mu+\sigma\sqrt{\frac{p_1}{1-p_1}}-d\right)$. The first- and second-order derivatives of $Q(p_1)$ with respect to $p_1$ are
$$Q'(p_1)=(1+\theta)(d-\mu)+\frac12(1+\theta)\sigma\,[p_1(1-p_1)]^{-\frac12}(1-2p_1),$$
and
$$Q''(p_1)=-\frac14(1+\theta)\sigma\,[p_1(1-p_1)]^{-\frac32}(1-2p_1)^2-(1+\theta)\sigma\,[p_1(1-p_1)]^{-\frac12}\le0,$$
respectively. Solving $Q'(p_1)=0$ yields $\hat p_1=\frac12-\frac{\mu-d}{2\sqrt{\sigma^2+(\mu-d)^2}}$.
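The moment-matching formulas for $(x_1,x_2)$ and the stationary point $\hat p_1$ above can be checked numerically. In the following sketch (the parameter values are our own illustration), the constructed two-point distribution reproduces the prescribed mean and second moment, a central difference of $Q$ vanishes at $\hat p_1$, and $Q(\hat p_1)$ agrees with the closed form $d+\frac{1+\theta}{2}\big(\mu-d+\sqrt{\sigma^2+(\mu-d)^2}\big)$:

```python
import math

# Illustrative parameters (our own choice), with x1 <= d < x2.
mu, sigma, theta, d = 2.0, 1.0, 0.2, 1.5

def x1x2(p1):
    """Two-point support matching mean mu and second moment mu^2 + sigma^2."""
    return (mu - sigma * math.sqrt((1 - p1) / p1),
            mu + sigma * math.sqrt(p1 / (1 - p1)))

def Q(p1):
    """Objective Q(p1) = d + (1+theta)(1-p1)(x2 - d)."""
    return d + (1 + theta) * (1 - p1) * (x1x2(p1)[1] - d)

# Moment matching at an arbitrary feasible p1.
p1 = 0.3
x1, x2 = x1x2(p1)
assert abs(p1 * x1 + (1 - p1) * x2 - mu) < 1e-12
assert abs(p1 * x1**2 + (1 - p1) * x2**2 - (mu**2 + sigma**2)) < 1e-12

# Stationary point p1-hat: a central difference of Q vanishes there.
p1_hat = 0.5 - (mu - d) / (2 * math.sqrt(sigma**2 + (mu - d)**2))
eps = 1e-7
assert abs(Q(p1_hat + eps) - Q(p1_hat - eps)) / (2 * eps) < 1e-5

# Q(p1_hat) agrees with the closed form d + (1+theta)/2*(mu-d+sqrt(...)).
closed = d + (1 + theta) / 2 * (mu - d + math.sqrt(sigma**2 + (mu - d)**2))
assert abs(Q(p1_hat) - closed) < 1e-12
```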
(i.a)
If $\alpha<\frac{\sigma^2}{\sigma^2+\mu^2}$, the feasible set in problem (A7) is empty. Therefore, we obtain
$$\max_{p_1\in\emptyset}Q(p_1)=-\infty.$$
(i.b)
If $\alpha\ge\frac{\sigma^2}{\sigma^2+\mu^2}$, it can be verified that $d_1\le d_2\le d_3$. We consider the following subcases:
(1)
If $d\le d_1$, then
$$\hat p_1\le\frac{\sigma^2}{\sigma^2+\mu^2}\le\alpha,$$
and the maximizer to (A7) is $p_1^*=\frac{\sigma^2}{\sigma^2+\mu^2}$. As a result, we have $x_1^*=0$, $x_2^*=\mu+\sigma^2/\mu$, and $x_1^*\le d\le x_2^*$. Consequently, the corresponding distribution is
$$P(X=0)=\frac{\sigma^2}{\sigma^2+\mu^2}=1-P\big(X=\mu+\sigma^2/\mu\big).$$
Additionally, the maximum value of (A7) is
$$\max_{p_1\in\left[\frac{\sigma^2}{\sigma^2+\mu^2},\,\alpha\right]}Q(p_1)=d+\mu(1+\theta)\left(1-\frac{\mu d}{\sigma^2+\mu^2}\right).$$
(2)
If $d_1<d\le d_2$, then $\frac{\sigma^2}{\sigma^2+\mu^2}<\hat p_1\le\alpha$. Hence, the maximizer to (A7) is $p_1^*=\hat p_1$, and we have
$$x_1^*=d-\sqrt{\sigma^2+(\mu-d)^2}\le d\le x_2^*=d+\sqrt{\sigma^2+(\mu-d)^2}.$$
The corresponding distribution is
$$P\left(X=d-\sqrt{\sigma^2+(\mu-d)^2}\right)=\hat p_1=1-P\left(X=d+\sqrt{\sigma^2+(\mu-d)^2}\right).$$
In addition, the maximum value of (A7) is
$$\max_{p_1\in\left[\frac{\sigma^2}{\sigma^2+\mu^2},\,\alpha\right]}Q(p_1)=Q(\hat p_1)=d+\frac{1+\theta}{2}\left[\mu-d+\sqrt{\sigma^2+(\mu-d)^2}\right].$$
(3)
If $d_2<d\le d_3$, then
$$\hat p_1>\alpha\ge\frac{\sigma^2}{\mu^2+\sigma^2}.$$
Hence, the maximizer to (A7) is $p_1^*=\alpha$, and
$$x_1^*=\mu-\sigma\sqrt{\frac{1-\alpha}{\alpha}},\qquad x_2^*=\mu+\sigma\sqrt{\frac{\alpha}{1-\alpha}}.$$
The corresponding distribution is
$$P\left(X=\mu-\sigma\sqrt{\frac{1-\alpha}{\alpha}}\right)=\alpha=1-P\left(X=\mu+\sigma\sqrt{\frac{\alpha}{1-\alpha}}\right).$$
As $x_1^*\le d_2$, we have $x_1^*<d\le d_3=x_2^*$. The maximum value of (A7) is
$$\max_{p_1\in\left[\frac{\sigma^2}{\sigma^2+\mu^2},\,\alpha\right]}Q(p_1)=Q(\alpha)=d+(1+\theta)(1-\alpha)\left(\mu-d+\sigma\sqrt{\frac{\alpha}{1-\alpha}}\right).$$
(4)
If $d>d_3$, then
$$\frac{\sigma^2}{\sigma^2+\mu^2}\le\alpha<\frac{(d-\mu)^2}{\sigma^2+(d-\mu)^2}.$$
However, $d\ge\mu$ and the constraint $d<x_2$ in (A6) imply
$$p_1>\frac{(d-\mu)^2}{\sigma^2+(d-\mu)^2}.$$
Hence, the feasible set of optimization problem (A7) is an empty set, and the maximum value of (A7) is
$$\max_{p_1\in\emptyset}Q(p_1)=-\infty.$$
To summarize, if $\alpha<\sigma^2/(\mu^2+\sigma^2)$, the maximum value of (A7) is $-\infty$. If $\alpha\ge\sigma^2/(\mu^2+\sigma^2)$, the maximum value of (A7) is as follows:
$$Q(p_1^*)=\begin{cases}d+\mu(1+\theta)\left(1-\dfrac{\mu d}{\mu^2+\sigma^2}\right),& d\le d_1,\\[4pt] d+\dfrac{1+\theta}{2}\left[\mu-d+\sqrt{\sigma^2+(\mu-d)^2}\right],& d_1\le d\le d_2,\\[4pt] d+(1+\theta)(1-\alpha)\left(\mu-d+\sigma\sqrt{\dfrac{\alpha}{1-\alpha}}\right),& d_2\le d\le d_3,\\[4pt] -\infty,&\text{otherwise},\end{cases}$$
where $d_1,d_2,d_3$ are defined in (16).
(ii)
For any $F=(x_1,\alpha;\,x_2,p;\,x_3,1-\alpha-p)\in\mathcal F_3(\mu,\sigma;d)$, we have
$$h_c(d,X_F)=\begin{cases}\left(1-r_1+\dfrac{r_1(\alpha+p-\beta)}{1-\beta}\right)x_2+J_c(x_3),&\alpha\le\beta\le\alpha+p,\\[6pt] \left(1-r_2+\dfrac{p(r_2-r_1)}{\beta-\alpha}\right)x_2+K_c(x_3),&\alpha+p<\beta\le1,\end{cases}$$
where
$$J_c(x_3)=\frac{dr_1(1-\alpha-p)}{1-\beta}+(1+\theta)(x_3-d)(1-\alpha-p),$$
and
$$K_c(x_3)=dr_2-\frac{(r_2-r_1)dp}{\beta-\alpha}+(1+\theta)(x_3-d)(1-\alpha-p).$$
Then, the optimization problem $\sup_{F\in\mathcal F_3(\mu,\sigma;d)}h_c(d,X_F)$ is equivalent to maximizing (A17) subject to
$$\alpha x_1+px_2+(1-\alpha-p)x_3=\mu,\quad \alpha x_1^2+px_2^2+(1-\alpha-p)x_3^2\le\mu^2+\sigma^2,\quad 0\le p\le1-\alpha,\quad 0\le x_1\le x_2\le d\le x_3.$$
Denote
$$\tau=\min\left\{1-r_1+\frac{r_1(\alpha+p-\beta)}{1-\beta},\;1-r_2+\frac{p(r_2-r_1)}{\beta-\alpha}\right\}.$$
By $(1-\alpha)r_1\le(1-\beta)r_2$, we can obtain that
$$\tau=\begin{cases}1-r_1+\dfrac{r_1(\alpha+p-\beta)}{1-\beta},&\alpha\le\beta\le\alpha+p,\\[6pt] 1-r_2+\dfrac{p(r_2-r_1)}{\beta-\alpha},&\alpha+p<\beta\le1.\end{cases}$$
Since $p\le1-\alpha$, $(1-\alpha)(1+\theta)\le1$ and $(1-\alpha)r_1\le(1-\beta)r_2$, we can deduce that $\tau/p\ge1+\theta$. Therefore, the function
$$u(x):=\begin{cases}\dfrac{\tau}{p}x,& x\le d,\\[6pt] (1+\theta)(x-d)+\dfrac{\tau}{p}d,& x>d,\end{cases}$$
is a concave function.
is a concave function. By conventional calculation, we find that
h c ( X F , d ) = · τ p x 2 + ( 1 α p ) d τ p + ( 1 + θ ) ( x 3 d ) + ( 1 α p τ ) d τ p = p u ( x 2 ) + ( 1 α p ) u ( x 3 ) ( 1 α p τ ) d τ p = ( 1 α ) E [ u ( X ) X x 2 ] ( 1 α p τ ) d τ p ( 1 α ) u [ E ( X X x 2 ) ] ( 1 α p τ ) d τ p = τ p ( 1 α ) ( x 2 * d ) + d , x 2 * < d , ( 1 α ) ( 1 + θ ) ( x 2 * d ) + d , x 2 * d , x 2 * d + ( 1 + θ ) ( 1 α ) ( x 2 * d ) + = h c ( d , X F * ( x 1 ) ) ,
where
x 2 * = μ α x 1 1 α , F * ( x 1 ) : = ( x 1 , α ; x 2 * , 1 α ) with x 1 μ d .
It can be easily verified that $F^*(x_1)$ belongs to $\mathcal F_3(\mu,\sigma)$. Noting that
$$\sup_{F\in\mathcal F_3(\mu,\sigma;d)}h_c(d,X_F)\le\sup_{x_1\le\mu\wedge d}h_c(d,X_{F^*(x_1)}),$$
and
$$\sup_{x_1\le\mu\wedge d}h_c(d,X_{F^*(x_1)})\le\sup_{F\in\mathcal F_3(\mu,\sigma)}h_c(d,X_F),$$
we can conclude that the optimization problem $\sup_{F\in\mathcal F_3(\mu,\sigma;d)}h_c(d,X_F)$ can be reduced to
$$\max\;h_c(d,X_{F^*(x_1)})=\frac{\mu-\alpha x_1}{1-\alpha}\wedge d+(1+\theta)(1-\alpha)\left(\frac{\mu-\alpha x_1}{1-\alpha}-d\right)_+,$$
subject to
$$\alpha x_1^2+(1-\alpha)\left(\frac{\mu-\alpha x_1}{1-\alpha}\right)^2\le\mu^2+\sigma^2,\qquad x_1\le\mu\wedge d.$$
Solving the first constraint condition of (A19), the optimization problem (A18) can be further simplified to
$$\max_{0\vee d_4\le x\le\mu\wedge d}h(x),\qquad\text{where } h(x)=\frac{\mu-\alpha x}{1-\alpha}\wedge d+(1+\theta)(1-\alpha)\left(\frac{\mu-\alpha x}{1-\alpha}-d\right)_+,$$
with $d_4:=\mu-\sigma\sqrt{\frac{1-\alpha}{\alpha}}$. Next, we solve problem (A20) in two cases.
(ii.a)
If $\alpha<\sigma^2/(\mu^2+\sigma^2)$, then $d_4<0$, and the maximum value of problem (A20) is
$$\max_{0\le x\le\mu\wedge d}h(x)=h(0)=\frac{\mu}{1-\alpha}\wedge d+(1+\theta)(1-\alpha)\left(\frac{\mu}{1-\alpha}-d\right)_+.$$
The corresponding distribution of $X$ is
$$P(X=0)=\alpha=1-P\left(X=\frac{\mu}{1-\alpha}\right).$$
(ii.b)
If $\alpha\ge\sigma^2/(\mu^2+\sigma^2)$, then $0\le d_4\le d_3$. If $d\in[0,d_4)$, the feasible set of problem (A20) is empty, and the maximum value is $-\infty$. If $d\in[d_4,d_3]$, since $h(x)$ is decreasing in $x$, the maximizer of (A20) is $x_1^*=d_4$, and the maximum value is
$$\max_{d_4\le x\le\mu\wedge d}h(x)=h(d_4)=d+(1+\theta)(1-\alpha)\left(\mu+\sigma\sqrt{\frac{\alpha}{1-\alpha}}-d\right).$$
The corresponding distribution is
$$P\left(X=\mu-\sigma\sqrt{\frac{1-\alpha}{\alpha}}\right)=\alpha=1-P\left(X=\mu+\sigma\sqrt{\frac{\alpha}{1-\alpha}}\right).$$
If $d\in(d_3,\infty)$, the maximizer is still $x_1^*=d_4$, and the maximum value of (A20) is
$$\max_{d_4\le x\le\mu\wedge d}h(x)=h(d_4)=\mu+\sigma\sqrt{\frac{\alpha}{1-\alpha}}.$$
Then, if $\alpha\ge\sigma^2/(\mu^2+\sigma^2)$, the maximum value of $h$ is given by
$$h(x_1^*)=\begin{cases}-\infty,& d<d_4,\\ d+(1+\theta)(1-\alpha)\left(\mu+\sigma\sqrt{\dfrac{\alpha}{1-\alpha}}-d\right),& d_4\le d\le d_3,\\[6pt] \mu+\sigma\sqrt{\dfrac{\alpha}{1-\alpha}},&\text{otherwise}.\end{cases}$$
If $\alpha<\sigma^2/(\mu^2+\sigma^2)$, selecting the larger value between Equations (A15) and (A21) gives us the worst-case value as stated in Equation (14). If $\alpha\ge\sigma^2/(\mu^2+\sigma^2)$, we can confirm that
$$\frac{\mu-d+\sqrt{\sigma^2+(\mu-d)^2}}{2}\ge(1-\alpha)\left(\mu-d+\sigma\sqrt{\frac{\alpha}{1-\alpha}}\right),$$
and
$$\mu\left(1-\frac{\mu d}{\mu^2+\sigma^2}\right)\ge(1-\alpha)\left(\mu-d+\sigma\sqrt{\frac{\alpha}{1-\alpha}}\right)\iff d\le\frac{\alpha\left(\mu-\sigma\sqrt{\frac{1-\alpha}{\alpha}}\right)}{\frac{\mu^2}{\mu^2+\sigma^2}-(1-\alpha)}.$$
Choosing the larger one between Equations (A16) and (A23), we obtain the worst-case value as stated in Equation (15). □
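The candidate worst-case distributions appearing in case (ii) are easy to sanity-check. For instance, for the two-point law $P(X=0)=\alpha=1-P(X=\mu/(1-\alpha))$ from case (ii.a), the sketch below (with illustrative parameter values of our own choosing) confirms that its mean is $\mu$ and that its variance $\mu^2\alpha/(1-\alpha)$ stays below $\sigma^2$ precisely when $\alpha<\sigma^2/(\mu^2+\sigma^2)$:

```python
# Illustrative values (our own choice), with alpha < sigma^2/(mu^2+sigma^2).
mu, sigma, alpha = 2.0, 1.0, 0.15

# Candidate worst-case law from case (ii.a):
# P(X = 0) = alpha, P(X = mu/(1-alpha)) = 1 - alpha.
x_hi = mu / (1 - alpha)
mean = alpha * 0.0 + (1 - alpha) * x_hi
var = alpha * 0.0 + (1 - alpha) * x_hi**2 - mean**2

assert abs(mean - mu) < 1e-12
# Its variance equals mu^2*alpha/(1-alpha), which is below sigma^2
# exactly when alpha < sigma^2/(mu^2+sigma^2).
assert abs(var - mu**2 * alpha / (1 - alpha)) < 1e-12
assert (var < sigma**2) == (alpha < sigma**2 / (mu**2 + sigma**2))
```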

References

  1. Borch, K. An attempt to determine the optimum amount of stop loss reinsurance. In Transactions of the 16th International Congress of Actuaries; 1960; Volume I, pp. 597–610. [Google Scholar]
  2. Arrow, K.J. Uncertainty and the welfare economics of medical care. Am. Econ. Rev. 1963, 53, 941–973. [Google Scholar]
  3. Cai, J.; Tan, K.S. Optimal retention for a stop-loss reinsurance under the VaR and CTE risk measures. ASTIN Bullet. 2007, 37, 93–112. [Google Scholar] [CrossRef]
  4. Chi, Y.; Tan, K.S. Optimal reinsurance under VaR and CVaR risk measures: A simplified approach. ASTIN Bullet. 2011, 41, 487–509. [Google Scholar]
  5. Cui, W.; Yang, J.; Wu, L. Optimal reinsurance minimizing the distortion risk measure under general reinsurance premium principles. Insur. Math. Econ. 2013, 53, 74–85. [Google Scholar] [CrossRef]
  6. Cheung, K.C.; Sung, K.C.J.; Yam, S.C.P.; Yung, S.P. Optimal reinsurance under general law-invariant risk measures. Scand. Actuar. J. 2014, 72–91. [Google Scholar] [CrossRef]
  7. Cai, J.; Lemieux, C.; Liu, F. Optimal reinsurance from the perspectives of both an insurer and a reinsurer. ASTIN Bullet. 2016, 46, 815–849. [Google Scholar] [CrossRef]
  8. Cai, J.; Chi, Y. Optimal reinsurance designs based on risk measures: A review. Stat. Theor. Relat. Field. 2020, 4, 1–13. [Google Scholar] [CrossRef]
  9. Hu, X.; Yang, H.; Zhang, L. Optimal retention for a stop-loss reinsurance with incomplete information. Insur. Math. Econ. 2015, 65, 15–21. [Google Scholar] [CrossRef]
  10. Asimit, A.V.; Bignozzi, V.; Cheung, K.C.; Hu, J.; Kim, E.S. Robust and pareto optimality of insurance contracts. Eur. J. Oper. Res. 2017, 262, 720–732. [Google Scholar] [CrossRef]
  11. Liu, H.; Mao, T. Distributionally robust reinsurance with Value-at-Risk and Conditional Value-at-Risk. Insur. Math. Econ. 2022, 107, 393–417. [Google Scholar] [CrossRef]
  12. Xie, X.; Liu, H.; Mao, T.; Zhu, X.B. Distributionally robust reinsurance with expectile. ASTIN Bullet. 2023, 53, 129–148. [Google Scholar] [CrossRef]
  13. Belles-Sampera, J.; Guillén, M.; Santolino, M. Beyond Value-at-Risk: GlueVaR Distortion Risk Measures. Risk. Anal. 2014, 34, 121–134. [Google Scholar] [CrossRef] [PubMed]
  14. Belles-Sampera, J.; Guillén, M.; Santolino, M. The use of flexible quantile-based measures in risk assessment. Commun. Stat. Theor. M. 2016, 45, 1670–1681. [Google Scholar] [CrossRef]
  15. Zhu, D.; Yin, C. Optimal reinsurance policy under a new distortion risk measure. Commun. Stat. Theory Methods 2023, 52, 4151–4164. [Google Scholar] [CrossRef]
  16. Wang, R.; Zitikis, R. An axiomatic foundation for the expected shortfall. Manag. Sci. 2020, 67, 1413–1429. [Google Scholar] [CrossRef]
  17. Cont, R.; Deguest, R.; Scandolo, G. Robustness and sensitivity analysis of risk measurement procedures. Quant. Financ. 2010, 10, 593–606. [Google Scholar] [CrossRef]
  18. Wang, S. Premium calculation by transforming the layer premium density. ASTIN Bullet. 1996, 26, 71–92. [Google Scholar] [CrossRef]
  19. Denuit, M.; Dhaene, J.; Goovaerts, M.J.; Kaas, R. Actuarial Theory for Dependent Risks: Measures, Orders and Models; John Wiley & Sons Ltd.: London, UK, 2005. [Google Scholar]
  20. Dhaene, J.; Vanduffel, S.; Goovaerts, M.J.; Kaas, R.; Tang, Q.; Vyncke, D. Risk measures and comonotonicity: A review. Stoch. Model. 2006, 22, 573–606. [Google Scholar] [CrossRef]
Figure 1. The distortion function of GlueVaR α , β r 1 , r 2 , VaR α , CVaR α , and RVaR α , β : g α , β r 1 , r 2 ( t ) , g α , α 0 , 0 ( t ) , g α , α 1 , 1 ( t ) , and g α , β 0 , 1 ( t ) .

Share and Cite

MDPI and ACS Style

Lv, W.; Wei, L. Distributionally Robust Reinsurance with Glue Value-at-Risk and Expected Value Premium. Mathematics 2023, 11, 3923. https://doi.org/10.3390/math11183923


