Article

Multivariate Shortfall and Divergence Risk Statistics

1
School of Mathematics and Statistics, Wuhan University, Wuhan 430072, China
2
College of Finance and Statistics, Hunan University, Changsha 410082, China
*
Author to whom correspondence should be addressed.
Entropy 2019, 21(11), 1031; https://doi.org/10.3390/e21111031
Submission received: 22 September 2019 / Revised: 16 October 2019 / Accepted: 22 October 2019 / Published: 24 October 2019
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

The aim of this paper is to construct two new classes of multivariate risk statistics, and to study their properties. We, first, introduce the multivariate shortfall risk statistics and multivariate divergence risk statistics. Then, their basic properties are studied, and their representation results are provided. Furthermore, their coherency is also characterized by means of the corresponding loss function. Finally, entropic risk statistics are given to illustrate the proposed new classes of multivariate risk statistics. The relationship between multivariate shortfall and divergence risk statistics is also discussed.

1. Introduction

To evaluate the risk of financial positions, the authors of [1] first introduced the concept of a coherent risk measure. In [2,3], the authors introduced the broader class of convex risk measures.
Shortfall risk measures and divergence risk measures are two important kinds of risk measures, and they have a dual relationship, as pointed out by [4] and [5]. Shortfall risk measures were introduced by [2] for random variables. For more studies about shortfall risk measures, see [4,5,6,7,8,9] and the references therein. Divergence risk measures were introduced by [10]. For more works about divergence risk measures, see [5,8,11] and the references therein.
For a financial portfolio $\tilde X = (X_1,\ldots,X_N)$ consisting of $N$ financial positions, one wishes to measure not only the risk of each marginal $X_i$ separately, but also the joint risk of all components caused by their possible dependence. In [12], the authors first introduced scalar multivariate coherent and convex risk measures; see also [13]. For more works on multivariate risk measures, see [8,9,14,15,16,17] and the references therein.
From the statistical point of view, the behaviour of a random variable can be characterized by its observations: the samples of the random variable. In [18,19], the authors first introduced the natural risk statistic, which can be considered as a data-based (or empirical) version of a coherent risk measure. In [20,21], the authors studied convex and quasiconvex risk statistics, respectively. In [22,23], the authors studied multivariate convex risk statistics.
In the aforementioned frameworks of risk statistics, the approaches are mainly axiomatic and mainly focus on representation results for various kinds of risk statistics. A natural and interesting issue is how to construct meaningful risk statistics. Taking into account the importance of shortfall risk measures, in the present paper we attempt to explore multivariate shortfall risk statistics, as well as multivariate divergence risk statistics. This consideration mainly motivates the present study.
The purpose of the present paper is to construct multivariate shortfall and divergence risk statistics, respectively, and to study their properties, including their representation results. We provide the representation results for the multivariate shortfall and divergence risk statistics with explicit penalty functions, which are expressed in terms of the corresponding loss functions or divergence functions, respectively. The coherency of the univariate shortfall risk statistics is also characterized by means of the corresponding loss function. Finally, as examples, the multivariate entropic (or entropy-like) risk statistics are constructed to illustrate the proposed multivariate shortfall and divergence risk statistics.
The steps and methods of the present paper are as follows. We first introduce the acceptance set of accepted portfolios. Then, the multivariate shortfall risk statistics are introduced, and their properties, including their representations, are investigated. Further, the coherency of the univariate shortfall risk statistics is characterized. Meanwhile, a kind of multivariate risk statistic closely related to the multivariate shortfall risk statistic, the so-called multivariate divergence risk statistic, is also introduced and investigated. Finally, examples are given. Convex analysis is employed throughout the proofs.
The main contributions of the present paper are as follows. First, we construct two new meaningful classes of multivariate risk statistics: multivariate shortfall and divergence risk statistics. Second, their properties, including representation results, are investigated, and their coherency is characterized. Finally, multivariate entropic (or entropy-like) risk statistics are introduced.
The rest of the paper is organized as follows. In Section 2, we briefly state some preliminaries, including the definitions of multivariate shortfall and divergence risk measures. The main results are stated in Section 3. In Section 4, all the proofs of the main results are provided. In Section 5, examples are given. Finally, conclusions are summarized.

2. Preliminaries

In this section, we briefly introduce some preliminaries. Henceforth, let $N\ge 1$ be a fixed positive integer. We describe the loss of a financial position by a random variable. In practice, the behavior of a random loss vector $\tilde X=(X_1,\ldots,X_N)$ under different scenarios is preferably represented by different sets of data generated or observed under those scenarios, because specifying accurate models for $\tilde X$ (under different scenarios) is usually very difficult. Assume that there exist $K$ scenarios. Let $n_{ij}$ be the sample size of $X_i$ in the $j$th scenario for any $1\le i\le N$ and $1\le j\le K$. Denote $n_i:=\sum_{j=1}^K n_{ij}$ for any $1\le i\le N$ and $n:=\sum_{i=1}^N n_i$. That is, the behavior of $\tilde X=(X_1,\ldots,X_N)$ is represented by a collection of data $\tilde x=(\tilde x_1,\ldots,\tilde x_N)\in\mathbb R^{n_1}\times\cdots\times\mathbb R^{n_N}=\mathbb R^n$, where $\tilde x_i=(\tilde x_{i,1},\ldots,\tilde x_{i,K})\in\mathbb R^{n_{i1}}\times\cdots\times\mathbb R^{n_{iK}}=\mathbb R^{n_i}$ and $\tilde x_{i,j}=(x_1^{i,j},\ldots,x_{n_{ij}}^{i,j})\in\mathbb R^{n_{ij}}$ is the data subset that corresponds to the $j$th scenario of $X_i$. For each $i=1,\ldots,N$, $j=1,\ldots,K$, $\tilde x_{i,j}$ can be a data set based on historical observations of $X_i$, hypothetical samples simulated according to a model, or a mixture of observations and simulated samples.
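As a data-based illustration, the sample layout above can be sketched in code (the sample sizes $n_{ij}$ below are our own hypothetical choices, not taken from the paper):

```python
import random

# N positions, K scenarios; x_tilde[i][j] holds the n_ij observed losses of
# X_{i+1} under scenario j+1, and the whole collection flattens into R^n.
N, K = 2, 3
n_ij = [[4, 5, 3], [2, 6, 4]]            # hypothetical sample sizes n_ij

random.seed(0)
x_tilde = [[[random.gauss(0.0, 1.0) for _ in range(n_ij[i][j])]
            for j in range(K)]
           for i in range(N)]

n_i = [sum(row) for row in n_ij]         # n_i = sum_j n_ij
n = sum(n_i)                             # n   = sum_i n_i

# flattening x_tilde gives the data vector in R^{n_1} x ... x R^{n_N} = R^n
flat = [v for xi in x_tilde for xij in xi for v in xij]
assert len(flat) == n
```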
For $\tilde x=(\tilde x_1,\ldots,\tilde x_N)\in\mathbb R^{n_1}\times\cdots\times\mathbb R^{n_N}$ and $\tilde y=(\tilde y_1,\ldots,\tilde y_N)\in\mathbb R^{n_1}\times\cdots\times\mathbb R^{n_N}$, $\tilde x\le\tilde y$ means $x_k^{i,j}\le y_k^{i,j}$ for $i=1,\ldots,N$, $j=1,\ldots,K$, $k=1,\ldots,n_{ij}$. Denote $\tilde 1:=(\tilde 1_1,\ldots,\tilde 1_N)\in\mathbb R^{n_1}\times\cdots\times\mathbb R^{n_N}$, where $\tilde 1_i:=(1,\ldots,1)\in\mathbb R^{n_i}$ for $i=1,\ldots,N$, and $\tilde 0:=(\tilde 0_1,\ldots,\tilde 0_N)\in\mathbb R^{n_1}\times\cdots\times\mathbb R^{n_N}$, where $\tilde 0_i:=(0,\ldots,0)\in\mathbb R^{n_i}$ for $i=1,\ldots,N$. Let $\tilde e_i:=(\tilde 0_1,\ldots,\tilde 0_{i-1},\tilde 1_i,\tilde 0_{i+1},\ldots,\tilde 0_N)\in\mathbb R^{n_1}\times\cdots\times\mathbb R^{n_N}$ for $i=1,\ldots,N$, and let $\tilde e_k^{i,j}:=(0,\ldots,0,1,0,\ldots,0)\in\mathbb R^n$, where the $1$ is located in the $\bigl(\sum_{s=1}^{i-1}n_s+\sum_{m=1}^{j-1}n_{im}+k\bigr)$th position, so that $\tilde e_1^{1,1},\ldots,\tilde e_{n_{NK}}^{N,K}$ form the canonical basis of $\mathbb R^n$. $\langle\cdot,\cdot\rangle$ denotes the usual inner product on Euclidean space. Given a set $A$, $\mathrm{int}(A)$ denotes the interior of $A$, $\mathrm{co}(A)$ the convex hull of $A$, and $\mathrm{cl}(A)$ the closure of $A$. Denote
$$\mathcal W:=\Bigl\{\tilde\omega=(\tilde\omega_1,\ldots,\tilde\omega_N)\in\mathbb R^{n_1}\times\cdots\times\mathbb R^{n_N} : \tilde\omega\ge\tilde 0,\ \langle\tilde 1_i,\tilde\omega_i\rangle=1,\ i=1,\ldots,N\Bigr\}.$$
We begin with the acceptance set. Given a nonempty set $\mathcal A\subseteq\mathbb R^n$, we list only some of its axioms, those closely related to the present paper, as follows.
(A1)
Finiteness: $\sup\bigl\{\sum_{i=1}^N m_i : (m_1\tilde 1_1,\ldots,m_N\tilde 1_N)\in\mathcal A\bigr\}<+\infty$.
(A2)
Monotonicity: For any $\tilde x\in\mathcal A$ and $\tilde y\in\mathbb R^n$, if $\tilde y\le\tilde x$, then $\tilde y\in\mathcal A$.
(A3)
Convexity: A is a convex set.
(A4)
Cone: $\mathcal A$ is a positively homogeneous cone; that is, $\lambda\tilde x\in\mathcal A$ for any $\lambda>0$ and $\tilde x\in\mathcal A$.
The interpretations of axioms (A1)–(A4) are as follows. Finiteness means that the maximum of the possible deterministic losses of the portfolio should not be infinite. Monotonicity implies that if a portfolio with larger losses is accepted, then a portfolio with smaller losses should also be accepted. Convexity means that, given two accepted portfolios, any convex combination of these two portfolios should also be accepted. The cone axiom means that, given an accepted portfolio, any positive multiple of the portfolio should also be accepted.
Definition 1.
A nonempty subset A of R n is called an acceptance set if it satisfies axioms A1–A2, and called a convex acceptance set if it satisfies axioms A1–A3, and called a coherent acceptance set if it satisfies axioms A1–A4.
Next, we list some axioms for a mapping $\rho:\mathbb R^n\to\mathbb R$, which were proposed by [23]; see also [19,20,21].
(B1)
Translation invariance: For any $\tilde x\in\mathbb R^n$ and $a_i\in\mathbb R$, $i=1,\ldots,N$,
$$\rho\bigl(\tilde x+(a_1\tilde 1_1,\ldots,a_N\tilde 1_N)\bigr)=\rho(\tilde x)+\sum_{i=1}^N a_i.$$
(B2)
Monotonicity: For any $\tilde x,\tilde y\in\mathbb R^n$, if $\tilde x\le\tilde y$, then $\rho(\tilde x)\le\rho(\tilde y)$.
(B3)
Convexity: For any $\tilde x,\tilde y\in\mathbb R^n$ and $\alpha\in[0,1]$,
$$\rho\bigl(\alpha\tilde x+(1-\alpha)\tilde y\bigr)\le\alpha\rho(\tilde x)+(1-\alpha)\rho(\tilde y).$$
(B4)
Positive homogeneity: For any $\tilde x\in\mathbb R^n$ and $\lambda>0$, $\rho(\lambda\tilde x)=\lambda\rho(\tilde x)$.
The interpretations of axioms (B1)–(B4) are as follows. Translation invariance says that adding the sure loss $\sum_{i=1}^N a_i$ to a portfolio $\tilde x$ simply increases the risk by $\sum_{i=1}^N a_i$. Monotonicity means that the larger the loss of the portfolio is, the riskier it is. Convexity means that diversification does not increase the risk; that is, the risk of a diversified position $\alpha\tilde x+(1-\alpha)\tilde y$ is less than or equal to the weighted average of the individual risks. Positive homogeneity means that when the loss of a portfolio increases (or decreases) linearly by some multiple, the risk of the portfolio should also increase (or decrease) linearly by the same multiple.
On a general level, a multivariate risk statistic $\rho$ is any mapping from $\mathbb R^n$ to the real numbers $\mathbb R$. It is a data-based (or empirical) version of a multivariate risk measure, and assigns to $(\tilde x_1,\ldots,\tilde x_N)$, the data representation of the random losses $(X_1,\ldots,X_N)$, a real number $\rho((\tilde x_1,\ldots,\tilde x_N))$, the risk measurement of $(X_1,\ldots,X_N)$.
Definition 2.
A mapping ρ : R n R is called a multivariate monetary risk statistic if ρ satisfies axioms B1–B2, and called a multivariate convex risk statistic if it satisfies axioms B1–B3, and called a multivariate coherent risk statistic if it satisfies axioms B1–B4.
Given an acceptance set $\mathcal A$, we say that $\tilde x\in\mathbb R^n$ is acceptable (with respect to $\mathcal A$) if $\tilde x\in\mathcal A$. For a position $\tilde X$, represented by its sample data $\tilde x=(\tilde x_1,\ldots,\tilde x_N)\in\mathbb R^{n_1}\times\cdots\times\mathbb R^{n_N}$, define the capital requirement $\rho_{\mathcal A}(\tilde x)$ for $\tilde x$ as the minimal amount $\sum_{i=1}^N m_i$ such that $(\tilde x_1-m_1\tilde 1_1,\ldots,\tilde x_N-m_N\tilde 1_N)$ is acceptable with respect to $\mathcal A$. That is,
$$\rho_{\mathcal A}(\tilde x):=\inf\Bigl\{\sum_{i=1}^N m_i : (\tilde x_1-m_1\tilde 1_1,\ldots,\tilde x_N-m_N\tilde 1_N)\in\mathcal A,\ m_i\in\mathbb R,\ 1\le i\le N\Bigr\} \qquad (1)$$
for any $\tilde x=(\tilde x_1,\ldots,\tilde x_N)\in\mathbb R^{n_1}\times\cdots\times\mathbb R^{n_N}$. We call $\rho_{\mathcal A}$ the risk statistic induced by $\mathcal A$.
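For intuition, here is a minimal numerical sketch of Equation (1) in the univariate case $N=1$, using the toy acceptance set $\mathcal A=\{\tilde x:\max_k x_k\le 0\}$ (our own illustrative choice, not one used in the paper); the induced risk statistic is then simply the worst observed loss:

```python
# rho_A(x) = inf { m : x - m*1 in A }.  With A = { x : max_k x_k <= 0 },
# x - m*1 is acceptable iff max_k x_k <= m, so the infimum is max(x).
def rho_A(x):
    return max(x)

x = [-1.2, 0.4, 2.5, -0.3]
print(rho_A(x))                           # worst-case loss: 2.5

# Translation invariance (B1): rho_A(x + a*1) = rho_A(x) + a
a = 1.7
assert abs(rho_A([v + a for v in x]) - (rho_A(x) + a)) < 1e-12
# Monotonicity (B2): componentwise larger losses give larger risk
y = [v + 0.5 for v in x]
assert rho_A(y) >= rho_A(x)
```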
On the other hand, for any multivariate monetary risk statistic $\rho:\mathbb R^n\to\mathbb R$, we can define a set
$$\mathcal A_\rho:=\bigl\{\tilde x\in\mathbb R^n:\rho(\tilde x)\le 0\bigr\}. \qquad (2)$$
Note that Proposition 2 below will show that A ρ satisfies axioms (A1)–(A2), i.e., A ρ is an acceptance set, and we call the set A ρ the acceptance set of ρ .
The following two propositions discuss the properties of $\rho_{\mathcal A}$ and $\mathcal A_\rho$ for a given acceptance set $\mathcal A$ and risk statistic $\rho$, respectively. Their proofs are postponed to Section 4.
Proposition 1.
Let $\mathcal A\subseteq\mathbb R^n$ be an acceptance set and $\rho_{\mathcal A}$ be defined by Equation (1). Then,
(1) 
ρ A is a multivariate monetary risk statistic.
(2) 
$\rho_{\mathcal A}$ is a multivariate convex risk statistic if $\mathcal A$ is convex.
(3) 
ρ A is a multivariate coherent risk statistic if A is a convex cone.
(4) 
A is a subset of A ρ A , and cl ( A ) = A ρ A if A satisfies the finiteness and monotonicity axioms.
Proposition 2.
Let $\rho:\mathbb R^n\to\mathbb R$ be a multivariate monetary risk statistic and $\mathcal A_\rho$ be defined by Equation (2). Then,
(1) 
A ρ is an acceptance set.
(2) 
ρ can be recovered from A ρ , i.e., ρ = ρ A ρ .
(3) 
A ρ is a convex acceptance set if and only if ρ is convex.
(4) 
A ρ is a coherent acceptance set if and only if ρ is coherent.
The following representation result for multivariate convex risk statistics has been provided in [23].
Lemma 1.
A mapping $\rho:\mathbb R^n\to\mathbb R$ is a multivariate convex risk statistic if and only if there exists a set of weights $\tilde{\mathcal W}\subseteq\mathcal W$ such that
$$\rho(\tilde x)=\sup_{\tilde\omega\in\tilde{\mathcal W}}\bigl\{\langle\tilde x,\tilde\omega\rangle-\alpha_{\min}(\tilde\omega)\bigr\}$$
for any $\tilde x\in\mathbb R^n$, where the penalty function $\alpha_{\min}$ is given by
$$\alpha_{\min}(\tilde\omega):=\sup_{\tilde x\in\mathcal A_\rho}\langle\tilde x,\tilde\omega\rangle,$$
where $\mathcal A_\rho:=\{\tilde x\in\mathbb R^n:\rho(\tilde x)\le 0\}$.
Definition 3.
A function $\ell:\mathbb R\to\mathbb R$ is called a loss function if it is increasing and not identically constant.
The conjugate function of a proper convex function $\ell:\mathbb R\to\mathbb R$ is defined as
$$\ell^*(y):=\sup_{x\in\mathbb R}\{xy-\ell(x)\},\quad y\in\mathbb R.$$
The following two lemmas characterize the conjugate functions of loss functions, which were provided by [24] and will be used to prove the main results.
Lemma 2.
Let $\{\ell^n\}_{n\in\mathbb N}$ be a sequence of convex loss functions that decreases pointwise to the convex loss function $\ell$; then the corresponding conjugate functions $(\ell^n)^*$ increase pointwise to $\ell^*$.
Lemma 3.
Let $\ell$ be a convex loss function; then the conjugate function $\ell^*$ has the following properties.
(1) 
$\ell^*(0)=-\inf_{x\in\mathbb R}\ell(x)$ and $\ell^*(y)\ge-\ell(0)$ for any $y\in\mathbb R$.
(2) 
$\ell^*(y)/y\to\infty$ as $y\to\infty$.
(3) 
Denote by $J:=(\ell^*)'$ the derivative function of $\ell^*$; then, for any $x,y\in\mathbb R$,
$$xy\le\ell(x)+\ell^*(y),\ \text{with equality if}\ x=J(y). \qquad (3)$$
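Lemma 3(3) is just the Fenchel–Young inequality; a quick numerical check (our own illustration, not from the paper) uses the convex loss $\ell(x)=e^x$, for which $\ell^*(y)=y\log y-y$ on $(0,\infty)$ and $J(y)=(\ell^*)'(y)=\log y$:

```python
import math

l = math.exp                              # convex loss l(x) = e^x
l_star = lambda y: y * math.log(y) - y    # its conjugate on (0, inf)
J = math.log                              # J = (l*)' = derivative of l*

y = 2.5
# Fenchel-Young: x*y <= l(x) + l*(y) for every x ...
for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    assert x * y <= l(x) + l_star(y) + 1e-12
# ... with equality exactly at x = J(y)
x_eq = J(y)
assert abs(x_eq * y - (l(x_eq) + l_star(y))) < 1e-12
```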
From now on, let the loss functions $\ell_1,\ldots,\ell_N$ be convex. For each $i=1,\ldots,N$, let $z_{0,i}$ be an interior point of the range of $\ell_i$. Denote
$$z_0:=z_{0,1}+\cdots+z_{0,N}.$$
We define the following acceptance set:
$$\mathcal B:=\Bigl\{\tilde x\in\mathbb R^n:\sum_{i=1}^N\frac{1}{n_i}\sum_{j=1}^K\sum_{k=1}^{n_{ij}}\ell_i(x_k^{i,j})\le z_0\Bigr\}. \qquad (4)$$
The interpretations of $z_{0,i}$ and the acceptance set $\mathcal B$ in Equation (4) are as follows. $z_{0,i}$ denotes the loss tolerance level that the investor can bear for the $i$th financial position. Therefore, $z_0=z_{0,1}+\cdots+z_{0,N}$ denotes the loss tolerance level that the investor can bear for the portfolio $\tilde X=(X_1,\ldots,X_N)$. For example, for a risk-averse investor, the tolerance levels $z_{0,i}$ could be chosen as zero, provided that zero is an interior point of the range of $\ell_i$. For a risk-neutral investor, the tolerance levels $z_{0,i}$ could be chosen as small positive numbers. For a risk-seeking investor, the tolerance levels $z_{0,i}$ could be chosen as relatively large positive numbers. These three situations can thus also be regarded as kinds of trading preferences of an investor. Note that, given the tolerance levels $z_{0,i}$, not all financial positions are in the acceptance set $\mathcal B$.
Note that the acceptance set $\mathcal B$ defined in Equation (4) is convex and closed, as each loss function $\ell_i$, $1\le i\le N$, is convex (hence continuous). Next, based on the acceptance set $\mathcal B$, we can introduce the definition of the multivariate shortfall risk statistics.
Definition 4.
The mapping $\rho_{\mathcal B}:\mathbb R^{n_1}\times\cdots\times\mathbb R^{n_N}\to\mathbb R$ defined by
$$\rho_{\mathcal B}(\tilde x):=\inf\Bigl\{\sum_{i=1}^N m_i:(\tilde x_1-m_1\tilde 1_1,\ldots,\tilde x_N-m_N\tilde 1_N)\in\mathcal B\Bigr\} \qquad (5)$$
for $\tilde x=(\tilde x_1,\ldots,\tilde x_N)\in\mathbb R^{n_1}\times\cdots\times\mathbb R^{n_N}$, is called a multivariate shortfall risk statistic.
By the definition of the acceptance set $\mathcal B$ in Equation (4), the multivariate shortfall risk statistic $\rho_{\mathcal B}$ can be rewritten as
$$\rho_{\mathcal B}(\tilde x)=\inf\Bigl\{\sum_{i=1}^N m_i:\sum_{i=1}^N\frac{1}{n_i}\sum_{j=1}^K\sum_{k=1}^{n_{ij}}\ell_i(x_k^{i,j}-m_i)\le z_0\Bigr\}$$
for $\tilde x=(\tilde x_1,\ldots,\tilde x_N)=(x_1^{1,1},\ldots,x_{n_{11}}^{1,1},\ldots,x_1^{1,K},\ldots,x_{n_{1K}}^{1,K},\ldots,x_1^{N,1},\ldots,x_{n_{N1}}^{N,1},\ldots,x_1^{N,K},\ldots,x_{n_{NK}}^{N,K})\in\mathbb R^{n_1}\times\cdots\times\mathbb R^{n_N}$.
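Since the total average loss on the left-hand side is decreasing in the $m_i$, $\rho_{\mathcal B}$ can be computed numerically. Below is a sketch for the univariate case $N=1$ on our own toy data; the entropic loss $\ell(x)=e^x$ with $z_0=1$ is chosen because the resulting risk statistic has the closed form $\log\bigl(\frac1n\sum_k e^{x_k}\bigr)$, which lets us check the bisection:

```python
import math

def shortfall_risk(x, loss, z0, lo=-50.0, hi=50.0, tol=1e-10):
    """Univariate shortfall risk statistic: the smallest m with
    (1/n) * sum_k loss(x_k - m) <= z0, found by bisection on the
    monotone (decreasing in m) average-loss constraint."""
    avg = lambda m: sum(loss(v - m) for v in x) / len(x)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if avg(mid) <= z0 else (mid, hi)
    return hi                              # hi stays on the feasible side

x = [0.3, -1.0, 2.1, 0.7]
rho = shortfall_risk(x, math.exp, z0=1.0)
closed = math.log(sum(math.exp(v) for v in x) / len(x))
assert abs(rho - closed) < 1e-6
```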
Remark 1.
As each loss function $\ell_i$, $1\le i\le N$, is convex, it is not hard to verify that $\mathcal B$ satisfies (A1)–(A3). Therefore, a multivariate shortfall risk statistic $\rho_{\mathcal B}$ is also a multivariate convex risk statistic. By Proposition 1, $\mathcal B=\mathcal A_{\rho_{\mathcal B}}$, as $\mathcal B$ is closed.
Now, we introduce the definition of multivariate divergence risk statistics.
Definition 5.
For each $i=1,\ldots,N$, let $g_i:[0,+\infty)\to\mathbb R\cup\{+\infty\}$ be a lower semicontinuous convex function satisfying $g_i(1)<\infty$ and the superlinear growth condition $g_i(x)/x\to+\infty$ as $x\to\infty$. The mapping $I_{(g_1,\ldots,g_N)}:[0,+\infty)^{n_1}\times\cdots\times[0,+\infty)^{n_N}\to\mathbb R$ defined by
$$I_{(g_1,\ldots,g_N)}(\tilde\omega):=\sum_{i=1}^N\frac{1}{n_i}\sum_{j=1}^K\sum_{k=1}^{n_{ij}}g_i(n_i\omega_k^{i,j}),$$
for $\tilde\omega=(\tilde\omega_1,\ldots,\tilde\omega_N)=(\omega_1^{1,1},\ldots,\omega_{n_{11}}^{1,1},\ldots,\omega_1^{1,K},\ldots,\omega_{n_{1K}}^{1,K},\ldots,\omega_1^{N,K},\ldots,\omega_{n_{NK}}^{N,K})\in[0,+\infty)^{n_1}\times\cdots\times[0,+\infty)^{n_N}$, is called a multivariate $(g_1,\ldots,g_N)$-divergence function.
The mapping $\rho_{(g_1,\ldots,g_N)}:\mathbb R^{n_1}\times\cdots\times\mathbb R^{n_N}\to\mathbb R$ defined by
$$\rho_{(g_1,\ldots,g_N)}(\tilde x):=\sup_{\tilde\omega\in\mathcal W}\Bigl\{\sum_{i=1}^N\langle\tilde x_i,\tilde\omega_i\rangle-I_{(g_1,\ldots,g_N)}(\tilde\omega)\Bigr\},$$
for $\tilde x=(\tilde x_1,\ldots,\tilde x_N)\in\mathbb R^{n_1}\times\cdots\times\mathbb R^{n_N}$, is called a multivariate divergence risk statistic.
Remark 2. 
(1) 
If $g_i:[0,+\infty)\to\mathbb R\cup\{+\infty\}$ is a convex function for each $1\le i\le N$, then $(\beta,\omega)\mapsto\beta g_i(\omega/\beta)$ is a convex function on $(0,+\infty)\times[0,+\infty)$, $i=1,\ldots,N$.
(2) 
For any $i=1,\ldots,N$, let $g_i:[0,+\infty)\to\mathbb R\cup\{+\infty\}$ be a lower semicontinuous convex function satisfying $g_i(1)<\infty$ and the superlinear growth condition $g_i(x)/x\to+\infty$ as $x\to\infty$. For $\beta>0$, denote $\tilde g_i(\beta,x):=\beta g_i(x/\beta)$; the corresponding multivariate $(\tilde g_1,\ldots,\tilde g_N)$-divergence function
$$I_{(\tilde g_1,\ldots,\tilde g_N)}(\beta,\tilde\omega):=\sum_{i=1}^N\frac{1}{n_i}\sum_{j=1}^K\sum_{k=1}^{n_{ij}}\beta\,g_i\!\Bigl(\frac{n_i\omega_k^{i,j}}{\beta}\Bigr)$$
is a convex function on $(0,+\infty)\times[0,+\infty)^n$. For any fixed $\tilde x\in\mathbb R^{n_1}\times\cdots\times\mathbb R^{n_N}$, define
$$G(\beta):=G_{\tilde x}(\beta):=\begin{cases}-\rho_{(\tilde g_1,\ldots,\tilde g_N)}(\tilde x)=\inf_{\tilde\omega\in\mathcal W}\bigl\{-\langle\tilde x,\tilde\omega\rangle+I_{(\tilde g_1,\ldots,\tilde g_N)}(\beta,\tilde\omega)\bigr\}&\text{if }\beta>0,\\+\infty&\text{otherwise;}\end{cases}$$
it is not hard to check that $G(\beta)$ is a lower semicontinuous convex function on $(0,+\infty)$.

3. Main Results

In this section, we state the main results of this paper, and their proofs will be postponed to the next section. Specifically, we will study the properties and the representation results for multivariate shortfall risk statistics and divergence risk statistics.
The following proposition shows that $\rho_{\mathcal B}$ is a multivariate convex risk statistic; it follows directly from Proposition 1, and therefore we omit its proof here.
Proposition 3.
The multivariate shortfall risk statistic $\rho_{\mathcal B}:\mathbb R^n\to\mathbb R$ satisfies translation invariance, monotonicity, and convexity.
Now, we are ready to state the first main result of the present paper, which provides the representation results for multivariate shortfall risk statistics.
Theorem 1.
Let $\rho_{\mathcal B}:\mathbb R^n\to\mathbb R$ be a multivariate shortfall risk statistic associated with the convex loss functions $(\ell_1,\ldots,\ell_N)$ and $z_0$. Then the minimal penalty function of $\rho_{\mathcal B}$ is given by
$$\alpha_{\min}(\tilde\omega)=\inf_{\lambda>0}\frac{1}{\lambda}\Bigl(z_0+\sum_{i=1}^N\frac{1}{n_i}\sum_{j=1}^K\sum_{k=1}^{n_{ij}}\ell_i^*(\lambda n_i\omega_k^{i,j})\Bigr),\quad\tilde\omega\in\mathcal W, \qquad (6)$$
where $\ell_i^*$ is the conjugate function of $\ell_i$. In particular, for any $\tilde x\in\mathbb R^n$,
$$\rho_{\mathcal B}(\tilde x)=\sup_{\tilde\omega\in\mathcal W}\Bigl\{\langle\tilde x,\tilde\omega\rangle-\inf_{\lambda>0}\frac{1}{\lambda}\Bigl(z_0+\sum_{i=1}^N\frac{1}{n_i}\sum_{j=1}^K\sum_{k=1}^{n_{ij}}\ell_i^*(\lambda n_i\omega_k^{i,j})\Bigr)\Bigr\}.$$
Theorem 1 shows that, given $N$ convex loss functions $\ell_1,\ldots,\ell_N$ and the corresponding total average loss level $z_0$, the minimal penalty function $\alpha_{\min}$ can be expressed in terms of $\ell_1,\ldots,\ell_N$. From a practical point of view, an investor may face uncertainty about which loss function (i.e., utility function) to use. Next, we discuss this issue in the case $N=1$ and provide a slightly more general result, similar to Theorem 1.
Let $L$ be a class of convex loss functions. Given $\ell\in L$, similar to Equation (4), we can define the corresponding average loss level $z_0=z_0(\ell)$ and the acceptance set $\mathcal B=\mathcal B(\ell,z_0)$ by
$$\mathcal B(\ell,z_0):=\Bigl\{\tilde x\in\mathbb R^n:\frac{1}{n}\sum_{j=1}^K\sum_{k=1}^{n_j}\ell(x_k^j)\le z_0\Bigr\}.$$
Define
$$\mathcal B_L:=\bigcap_{\ell\in L}\mathcal B(\ell,z_0(\ell)). \qquad (7)$$
Note that $\mathcal B_L$ is convex and closed. Therefore, similar to Equation (5), we can define a convex risk statistic $\rho_L:=\rho_{\mathcal B_L}:\mathbb R^n\to\mathbb R$ by
$$\rho_L(\tilde x):=\inf\bigl\{m\in\mathbb R:\tilde x-m\tilde 1\in\mathcal B_L\bigr\}. \qquad (8)$$
Theorem 1 yields the following representation for the convex risk statistic ρ L .
Corollary 1.
Assume that $\mathcal B_L$ defined by Equation (7) is not empty. Then the convex risk statistic $\rho_L$ defined by Equation (8) can be expressed as
$$\rho_L(\tilde x)=\sup_{\tilde\omega\in\tilde{\mathcal W}}\bigl\{\langle\tilde x,\tilde\omega\rangle-\alpha_{\min}(\tilde\omega)\bigr\},$$
where the minimal penalty function $\alpha_{\min}$ is given by
$$\alpha_{\min}(\tilde\omega)=\inf_{\lambda>0}\inf_{\ell\in L}\Bigl(\frac{z_0(\ell)}{\lambda}+\frac{1}{\lambda n}\sum_{j=1}^K\sum_{k=1}^{n_j}\ell^*(\lambda n\omega_k^j)\Bigr),\quad\tilde\omega\in\tilde{\mathcal W},$$
where $\tilde{\mathcal W}=\bigl\{\tilde\omega=(\omega_1,\ldots,\omega_n)\in\mathbb R^n:\tilde\omega\ge 0,\ \sum_{i=1}^n\omega_i=1\bigr\}$.
Remark 3.
Corollary 1 can be considered as the data-based (or empirical) version of Föllmer and Schied (2011, Corollary 4.119).
Next, we state the second main result of the present paper, which characterizes when the (univariate) shortfall risk statistic $\rho_{\mathcal B}$ defined by Equation (5) is coherent, by means of the loss function, in the special case $N=1$.
Theorem 2.
Assume that $\rho_{\mathcal B}:\mathbb R^n\to\mathbb R$ is a univariate shortfall risk statistic associated with a convex loss function $\ell$ and $z_0\in\mathrm{int}\{\ell(y):y\in\mathbb R\}$, and that $\inf_{x\in\mathbb R}\ell(x)=-\infty$ and $\sup_{x\in\mathbb R}\ell(x)=+\infty$. Then $\rho_{\mathcal B}$ is coherent if and only if $\ell(x)=z_0+\alpha x^+-\beta x^-$ with $0<\beta\le\alpha$.
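The "if" direction can be checked numerically. Below is a small sketch (our own toy data and parameters) for $N=1$ with $\ell(x)=z_0+\alpha x^+-\beta x^-$, $\alpha=2$, $\beta=1$, $z_0=0$: the resulting shortfall risk statistic is positively homogeneous, as Theorem 2 predicts.

```python
def shortfall_risk(x, loss, z0, lo=-100.0, hi=100.0, tol=1e-10):
    # bisection on the decreasing constraint (1/n) sum_k loss(x_k - m) <= z0
    avg = lambda m: sum(loss(v - m) for v in x) / len(x)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if avg(mid) <= z0 else (mid, hi)
    return hi

z0, alpha, beta = 0.0, 2.0, 1.0
# l(t) = z0 + alpha*t^+ - beta*t^-, increasing and convex since beta <= alpha
loss = lambda t: z0 + alpha * max(t, 0.0) - beta * max(-t, 0.0)

x = [1.0, -2.0, 0.5, 3.0]
rho = shortfall_risk(x, loss, z0)
for lam in [0.5, 2.0, 7.0]:                # positive homogeneity (B4)
    rho_lam = shortfall_risk([lam * v for v in x], loss, z0)
    assert abs(rho_lam - lam * rho) < 1e-6
```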
Finally, we will state the last main result of the present paper, which provides the representation result for multivariate divergence risk statistics.
Theorem 3.
Let $g_i:[0,+\infty)\to\mathbb R\cup\{+\infty\}$ be a lower semicontinuous convex function satisfying $g_i(1)<\infty$ and the superlinear growth condition $g_i(x)/x\to+\infty$ as $x\to\infty$ for each $1\le i\le N$; then the multivariate divergence risk statistic $\rho_{(g_1,\ldots,g_N)}$ has the following expression:
$$\rho_{(g_1,\ldots,g_N)}(\tilde x)=\inf_{(z_1,\ldots,z_N)\in\mathbb R^N}\Bigl\{\sum_{i=1}^N\frac{1}{n_i}\sum_{j=1}^K\sum_{k=1}^{n_{ij}}g_i^*(x_k^{i,j}-z_i)+\sum_{i=1}^N z_i\Bigr\},$$
for $\tilde x=(\tilde x_1,\ldots,\tilde x_N)\in\mathbb R^{n_1}\times\cdots\times\mathbb R^{n_N}$, where $g_i^*$ is the conjugate function of $g_i$ for each $1\le i\le N$.
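As a numerical sketch of Theorem 3 (our own example, $N=1$, $K=1$), take the entropic divergence $g(t)=t\log t$, whose conjugate is $g^*(y)=e^{y-1}$; the right-hand side of the theorem then reduces to the log-mean-exponential of the data, and a direct minimization over $z$ reproduces it:

```python
import math

def divergence_risk(x, g_star, lo=-50.0, hi=50.0, iters=300):
    """Right-hand side of Theorem 3 for N = 1: minimize over z the convex
    objective (1/n) * sum_k g*(x_k - z) + z, via ternary search."""
    obj = lambda z: sum(g_star(v - z) for v in x) / len(x) + z
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3.0, hi - (hi - lo) / 3.0
        if obj(m1) < obj(m2):
            hi = m2
        else:
            lo = m1
    return obj(0.5 * (lo + hi))

g_star = lambda y: math.exp(y - 1.0)       # conjugate of g(t) = t*log(t)

x = [0.3, -1.0, 2.1, 0.7]
rho = divergence_risk(x, g_star)
closed = math.log(sum(math.exp(v) for v in x) / len(x))
assert abs(rho - closed) < 1e-6
```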

4. Proofs of Main Results

In this section, we provide alternate proofs of Propositions 1 and 2, and all the proofs of the results stated in Section 3.
Proof of Proposition 1.
(1) Let $\mathcal A$ be an acceptance set, i.e., one satisfying finiteness and monotonicity. It is not hard to verify the translation invariance and monotonicity of $\rho_{\mathcal A}$; therefore, we only need to show that $\rho_{\mathcal A}$ takes finite values.
Let $\tilde x=(\tilde x_1,\ldots,\tilde x_N)\in\mathcal A$ be fixed (as $\mathcal A$ is nonempty, such an $\tilde x$ exists). Then we know that
$$(0,\ldots,0)\in\bigl\{(m_1,\ldots,m_N)\in\mathbb R^N:(\tilde x_1-m_1\tilde 1_1,\ldots,\tilde x_N-m_N\tilde 1_N)\in\mathcal A\bigr\},$$
which, together with the definition of $\rho_{\mathcal A}$, implies that $\rho_{\mathcal A}(\tilde x)\le 0$.
For any given $\tilde y=(\tilde y_1,\ldots,\tilde y_N)\in\mathbb R^{n_1}\times\cdots\times\mathbb R^{n_N}$, clearly there exists $\tilde m=(m_1,\ldots,m_N)\in\mathbb R^N$ such that $(\tilde y_1-m_1\tilde 1_1,\ldots,\tilde y_N-m_N\tilde 1_N)\le(\tilde x_1,\ldots,\tilde x_N)$. Thus, by the monotonicity and translation invariance of $\rho_{\mathcal A}$,
$$\rho_{\mathcal A}(\tilde y)-\sum_{i=1}^N m_i\le\rho_{\mathcal A}\bigl(\tilde x+(m_1\tilde 1_1,\ldots,m_N\tilde 1_N)\bigr)-\sum_{i=1}^N m_i=\rho_{\mathcal A}(\tilde x)\le 0,$$
which yields that $\rho_{\mathcal A}(\tilde y)\le\sum_{i=1}^N m_i<+\infty$.
On the other hand, by the finiteness of $\mathcal A$, we have that
$$\rho_{\mathcal A}(\tilde 0)=\inf\Bigl\{\sum_{i=1}^N m_i:(-m_1\tilde 1_1,\ldots,-m_N\tilde 1_N)\in\mathcal A\Bigr\}=-\sup\Bigl\{\sum_{i=1}^N m_i:(m_1\tilde 1_1,\ldots,m_N\tilde 1_N)\in\mathcal A\Bigr\}>-\infty.$$
Clearly, there exists $\tilde m=(m_1,\ldots,m_N)\in\mathbb R^N$ such that $(\tilde y_1-m_1\tilde 1_1,\ldots,\tilde y_N-m_N\tilde 1_N)\ge\tilde 0$. Therefore, from the translation invariance and monotonicity of $\rho_{\mathcal A}$, it follows that
$$\rho_{\mathcal A}(\tilde y)\ge\rho_{\mathcal A}(\tilde 0)+\sum_{i=1}^N m_i>-\infty.$$
(2) Let A be a convex acceptance set. Then by part (1) above, we only need to show the convexity of ρ A .
Let $\tilde x=(\tilde x_1,\ldots,\tilde x_N),\ \tilde y=(\tilde y_1,\ldots,\tilde y_N)\in\mathbb R^{n_1}\times\cdots\times\mathbb R^{n_N}$. For any $(a_1,\ldots,a_N),(b_1,\ldots,b_N)\in\mathbb R^N$ with $(\tilde x_1-a_1\tilde 1_1,\ldots,\tilde x_N-a_N\tilde 1_N),(\tilde y_1-b_1\tilde 1_1,\ldots,\tilde y_N-b_N\tilde 1_N)\in\mathcal A$, and any $\alpha\in[0,1]$, from the convexity of $\mathcal A$ it follows that
$$\alpha(\tilde x_1-a_1\tilde 1_1,\ldots,\tilde x_N-a_N\tilde 1_N)+(1-\alpha)(\tilde y_1-b_1\tilde 1_1,\ldots,\tilde y_N-b_N\tilde 1_N)\in\mathcal A,$$
which, together with the definition of $\rho_{\mathcal A}$, yields that
$$\rho_{\mathcal A}\bigl(\alpha(\tilde x_1-a_1\tilde 1_1,\ldots,\tilde x_N-a_N\tilde 1_N)+(1-\alpha)(\tilde y_1-b_1\tilde 1_1,\ldots,\tilde y_N-b_N\tilde 1_N)\bigr)\le 0.$$
Therefore, by the translation invariance of $\rho_{\mathcal A}$,
$$0\ge\rho_{\mathcal A}\bigl(\alpha(\tilde x_1-a_1\tilde 1_1,\ldots,\tilde x_N-a_N\tilde 1_N)+(1-\alpha)(\tilde y_1-b_1\tilde 1_1,\ldots,\tilde y_N-b_N\tilde 1_N)\bigr)=\rho_{\mathcal A}\bigl(\alpha(\tilde x_1,\ldots,\tilde x_N)+(1-\alpha)(\tilde y_1,\ldots,\tilde y_N)\bigr)-\alpha\sum_{i=1}^N a_i-(1-\alpha)\sum_{i=1}^N b_i,$$
which implies the convexity of $\rho_{\mathcal A}$ by letting $\sum_{i=1}^N a_i$ and $\sum_{i=1}^N b_i$ converge to $\rho_{\mathcal A}(\tilde x)$ and $\rho_{\mathcal A}(\tilde y)$, respectively.
(3) Let A be a coherent acceptance set. Then by parts (1) and (2) above, we only need to show the positive homogeneity of ρ A .
For any $\lambda>0$ and $\tilde x=(\tilde x_1,\ldots,\tilde x_N)\in\mathbb R^{n_1}\times\cdots\times\mathbb R^{n_N}$,
$$\begin{aligned}\rho_{\mathcal A}(\lambda\tilde x)&=\inf\Bigl\{\sum_{i=1}^N m_i:(\lambda\tilde x_1-m_1\tilde 1_1,\ldots,\lambda\tilde x_N-m_N\tilde 1_N)\in\mathcal A\Bigr\}=\inf\Bigl\{\sum_{i=1}^N m_i:\Bigl(\tilde x_1-\frac{m_1}{\lambda}\tilde 1_1,\ldots,\tilde x_N-\frac{m_N}{\lambda}\tilde 1_N\Bigr)\in\mathcal A\Bigr\}\\&=\inf\Bigl\{\sum_{i=1}^N\lambda m_i:(\tilde x_1-m_1\tilde 1_1,\ldots,\tilde x_N-m_N\tilde 1_N)\in\mathcal A\Bigr\}=\lambda\inf\Bigl\{\sum_{i=1}^N m_i:(\tilde x_1-m_1\tilde 1_1,\ldots,\tilde x_N-m_N\tilde 1_N)\in\mathcal A\Bigr\}=\lambda\rho_{\mathcal A}(\tilde x).\end{aligned}$$
(4) If $\tilde x\in\mathcal A$, then $\rho_{\mathcal A}(\tilde x)\le 0$, which, together with the definition of $\mathcal A_{\rho_{\mathcal A}}$, yields that $\tilde x\in\mathcal A_{\rho_{\mathcal A}}$; therefore $\mathcal A\subseteq\mathcal A_{\rho_{\mathcal A}}$.
For any $\tilde x=(\tilde x_1,\ldots,\tilde x_N),\ \tilde y=(\tilde y_1,\ldots,\tilde y_N)\in\mathbb R^{n_1}\times\cdots\times\mathbb R^{n_N}$, denote $\|\tilde x_i\|_\infty:=\max\{|x_k^{i,j}|:1\le j\le K,\ 1\le k\le n_{ij}\}$ and $\|\tilde x\|:=\sum_{i=1}^N\|\tilde x_i\|_\infty$. Since $\mathrm{cl}(\mathcal A)$ is the closure of $\mathcal A$ in $\mathbb R^n$, $\mathrm{cl}(\mathcal A)$ is $\|\cdot\|$-closed. Note that
$$\tilde x\le\tilde y+\bigl(\|\tilde x_1-\tilde y_1\|_\infty\tilde 1_1,\ldots,\|\tilde x_N-\tilde y_N\|_\infty\tilde 1_N\bigr),$$
which, along with the translation invariance and monotonicity of $\rho_{\mathcal A}$, implies that
$$\rho_{\mathcal A}(\tilde x)\le\rho_{\mathcal A}(\tilde y)+\sum_{i=1}^N\|\tilde x_i-\tilde y_i\|_\infty.$$
Reversing the roles of $\tilde x$ and $\tilde y$, we know that
$$\bigl|\rho_{\mathcal A}(\tilde x)-\rho_{\mathcal A}(\tilde y)\bigr|\le\sum_{i=1}^N\|\tilde x_i-\tilde y_i\|_\infty=\|\tilde x-\tilde y\|. \qquad (10)$$
For any fixed $\tilde x\notin\mathrm{cl}(\mathcal A)$, we can conclude that $\rho_{\mathcal A}(\tilde x)>0$. In fact, take $m_i<-\|\tilde x_i\|_\infty$, $1\le i\le N$; that is, $m_i\tilde 1_i<-\|\tilde x_i\|_\infty\tilde 1_i\le\tilde x_i$ and $\sum_{i=1}^N m_i<-\sum_{i=1}^N\|\tilde x_i\|_\infty=-\|\tilde x\|$. As $\mathrm{cl}(\mathcal A)$ is $\|\cdot\|$-closed and $\tilde x\notin\mathrm{cl}(\mathcal A)$, there is some $\lambda\in(0,1)$ such that $\lambda(m_1\tilde 1_1,\ldots,m_N\tilde 1_N)+(1-\lambda)\tilde x\notin\mathrm{cl}(\mathcal A)$. Therefore,
$$0<\rho_{\mathcal A}\bigl(\lambda(m_1\tilde 1_1,\ldots,m_N\tilde 1_N)+(1-\lambda)\tilde x\bigr)=\rho_{\mathcal A}\bigl((1-\lambda)\tilde x\bigr)+\lambda\sum_{i=1}^N m_i.$$
From Equation (10), it follows that
$$\bigl|\rho_{\mathcal A}(\tilde x)-\rho_{\mathcal A}\bigl((1-\lambda)\tilde x\bigr)\bigr|\le\lambda\|\tilde x\|.$$
Therefore,
$$\rho_{\mathcal A}(\tilde x)\ge\rho_{\mathcal A}\bigl((1-\lambda)\tilde x\bigr)-\lambda\|\tilde x\|>\lambda\Bigl(-\sum_{i=1}^N m_i-\|\tilde x\|\Bigr)>0.$$
By the definition of $\mathcal A_{\rho_{\mathcal A}}$, we know that $\rho_{\mathcal A}(\tilde x)>0$ implies $\tilde x\notin\mathcal A_{\rho_{\mathcal A}}$. Thus, $\mathcal A_{\rho_{\mathcal A}}\subseteq\mathrm{cl}(\mathcal A)$. The proof of Proposition 1 is completed. □
Proof of Proposition 2.
(1) Let $\rho$ be a multivariate monetary risk statistic; the monotonicity of $\mathcal A_\rho$ is straightforward. If $\sup\{\sum_{i=1}^N m_i:(m_1\tilde 1_1,\ldots,m_N\tilde 1_N)\in\mathcal A_\rho\}=+\infty$, then there exists a sequence $\{(m_1^k,\ldots,m_N^k)\}_{k\in\mathbb N}$ with $(m_1^k\tilde 1_1,\ldots,m_N^k\tilde 1_N)\in\mathcal A_\rho$ such that
$$\lim_{k\to+\infty}\sum_{i=1}^N m_i^k=+\infty.$$
By the translation invariance of $\rho$,
$$\rho\bigl(\tilde 0+(m_1^k\tilde 1_1,\ldots,m_N^k\tilde 1_N)\bigr)=\rho(\tilde 0)+\sum_{i=1}^N m_i^k\to+\infty\quad(k\to+\infty).$$
This contradicts $\rho\bigl((m_1^k\tilde 1_1,\ldots,m_N^k\tilde 1_N)\bigr)\le 0$.
(2) The translation invariance of $\rho$ implies that, for $\tilde x\in\mathbb R^n$,
$$\rho_{\mathcal A_\rho}(\tilde x)=\inf\Bigl\{\sum_{i=1}^N m_i:\tilde x-(m_1\tilde 1_1,\ldots,m_N\tilde 1_N)\in\mathcal A_\rho\Bigr\}=\inf\Bigl\{\sum_{i=1}^N m_i:\rho\bigl(\tilde x-(m_1\tilde 1_1,\ldots,m_N\tilde 1_N)\bigr)\le 0\Bigr\}=\inf\Bigl\{\sum_{i=1}^N m_i:\rho(\tilde x)\le\sum_{i=1}^N m_i\Bigr\}=\rho(\tilde x).$$
(3) A ρ is a convex set if ρ is convex. The converse will follow from Proposition 1 together with ρ A ρ = ρ .
(4) By part (3) above, we only need to show the relationship between the cone property of $\mathcal A_\rho$ and the positive homogeneity of $\rho$. The cone property of $\mathcal A_\rho$ follows directly from the positive homogeneity of $\rho$. The converse follows from $\rho_{\mathcal A_\rho}=\rho$ and part (3) of Proposition 1. The proof of Proposition 2 is completed. □
Proof of Theorem 1.
We will make full use of convex analysis to show the theorem.
Define the total average loss function $L:\mathbb R^n\to\mathbb R$ as
$$L(\tilde x):=\sum_{i=1}^N\frac{1}{n_i}\sum_{j=1}^K\sum_{k=1}^{n_{ij}}\ell_i(x_k^{i,j})$$
for any $\tilde x=(\tilde x_1,\ldots,\tilde x_N)=(x_1^{1,1},\ldots,x_{n_{11}}^{1,1},\ldots,x_1^{1,K},\ldots,x_{n_{1K}}^{1,K},\ldots,x_1^{N,K},\ldots,x_{n_{NK}}^{N,K})\in\mathbb R^{n_1}\times\cdots\times\mathbb R^{n_N}$.
First, we show that it suffices to prove the claim in the case $L(\tilde 0)<z_0$. Otherwise, we can find some $\tilde a=(\tilde a_1,\ldots,\tilde a_N)\in\mathbb R^{n_1}\times\cdots\times\mathbb R^{n_N}$ such that $L(\tilde a)<z_0$, as $z_0$ was assumed to be an interior point of $L(\mathbb R^n)$. Denote $b_i:=\min\{a_k^{i,j}:j=1,\ldots,K;\ k=1,\ldots,n_{ij}\}$, $i=1,\ldots,N$, and $\tilde b:=(b_1\tilde 1_1,\ldots,b_N\tilde 1_N)$. Then $\tilde b\le\tilde a$, which, together with the monotonicity of $L$, yields that
$$L(\tilde b)\le L(\tilde a)<z_0.$$
Let $\tilde\ell_i(x):=\ell_i(x+b_i)$, $i=1,\ldots,N$, and define a function
$$\tilde L(\tilde x):=\sum_{i=1}^N\frac{1}{n_i}\sum_{j=1}^K\sum_{k=1}^{n_{ij}}\tilde\ell_i(x_k^{i,j})=\sum_{i=1}^N\frac{1}{n_i}\sum_{j=1}^K\sum_{k=1}^{n_{ij}}\ell_i(x_k^{i,j}+b_i)=L(\tilde x+\tilde b)$$
for any $\tilde x=(\tilde x_1,\ldots,\tilde x_N)\in\mathbb R^{n_1}\times\cdots\times\mathbb R^{n_N}$. It is easy to verify that the function $\tilde L$ is increasing and satisfies the requirement $\tilde L(\tilde 0)=L(\tilde b)<z_0$. From Equation (4), it follows that
$$\begin{aligned}\tilde{\mathcal B}&:=\Bigl\{(\tilde x_1,\ldots,\tilde x_N)\in\mathbb R^{n_1}\times\cdots\times\mathbb R^{n_N}:\sum_{i=1}^N\frac{1}{n_i}\sum_{j=1}^K\sum_{k=1}^{n_{ij}}\tilde\ell_i(x_k^{i,j})\le z_0\Bigr\}\\&=\Bigl\{(\tilde x_1,\ldots,\tilde x_N)\in\mathbb R^{n_1}\times\cdots\times\mathbb R^{n_N}:\sum_{i=1}^N\frac{1}{n_i}\sum_{j=1}^K\sum_{k=1}^{n_{ij}}\ell_i(x_k^{i,j}+b_i)\le z_0\Bigr\}\\&=\Bigl\{(\tilde y_1-b_1\tilde 1_1,\ldots,\tilde y_N-b_N\tilde 1_N):\sum_{i=1}^N\frac{1}{n_i}\sum_{j=1}^K\sum_{k=1}^{n_{ij}}\ell_i(y_k^{i,j})\le z_0\Bigr\}\\&=\Bigl\{(\tilde y_1,\ldots,\tilde y_N):\sum_{i=1}^N\frac{1}{n_i}\sum_{j=1}^K\sum_{k=1}^{n_{ij}}\ell_i(y_k^{i,j})\le z_0\Bigr\}-(b_1\tilde 1_1,\ldots,b_N\tilde 1_N)=\mathcal B-(b_1\tilde 1_1,\ldots,b_N\tilde 1_N).\end{aligned}$$
From Proposition 1, we know $\tilde{\mathcal B}=\mathcal A_{\rho_{\tilde{\mathcal B}}}$. Therefore, for any $\tilde\omega=(\tilde\omega_1,\ldots,\tilde\omega_N)\in\mathcal W$, we have that
$$\sup_{(\tilde x_1,\ldots,\tilde x_N)\in\mathcal A_{\rho_{\tilde{\mathcal B}}}}\sum_{i=1}^N\langle\tilde x_i,\tilde\omega_i\rangle=\sup_{(\tilde x_1,\ldots,\tilde x_N)\in\tilde{\mathcal B}}\sum_{i=1}^N\langle\tilde x_i,\tilde\omega_i\rangle=\sup_{(\tilde y_1,\ldots,\tilde y_N)\in\mathcal B}\sum_{i=1}^N\langle\tilde y_i-b_i\tilde 1_i,\tilde\omega_i\rangle=\sup_{(\tilde y_1,\ldots,\tilde y_N)\in\mathcal B}\sum_{i=1}^N\langle\tilde y_i,\tilde\omega_i\rangle-\sum_{i=1}^N b_i. \qquad (11)$$
Thus, if the assertion is established for $\tilde L$, then we find that
$$\begin{aligned}\tilde\alpha_{\min}(\tilde\omega)&=\inf_{\lambda>0}\frac{1}{\lambda}\Bigl(z_0+\sum_{i=1}^N\frac{1}{n_i}\sum_{j=1}^K\sum_{k=1}^{n_{ij}}\tilde\ell_i^*(\lambda n_i\omega_k^{i,j})\Bigr)=\inf_{\lambda>0}\frac{1}{\lambda}\Bigl(z_0+\sum_{i=1}^N\frac{1}{n_i}\sum_{j=1}^K\sum_{k=1}^{n_{ij}}\bigl(\ell_i^*(\lambda n_i\omega_k^{i,j})-b_i\lambda n_i\omega_k^{i,j}\bigr)\Bigr)\\&=\inf_{\lambda>0}\frac{1}{\lambda}\Bigl(z_0+\sum_{i=1}^N\frac{1}{n_i}\sum_{j=1}^K\sum_{k=1}^{n_{ij}}\ell_i^*(\lambda n_i\omega_k^{i,j})\Bigr)-\sum_{i=1}^N b_i \qquad (12)\end{aligned}$$
for any $\tilde\omega=(\tilde\omega_1,\ldots,\tilde\omega_N)\in\mathcal W$, where the second equality follows from the fact that the Fenchel–Legendre transform $\tilde\ell_i^*$ of $\tilde\ell_i$ satisfies $\tilde\ell_i^*(x)=\ell_i^*(x)-b_ix$.
Note that $\rho_{\tilde{\mathcal B}}$ is a multivariate convex risk statistic; by Lemma 1, we have that $\tilde\alpha_{\min}(\tilde\omega)=\sup_{(\tilde x_1,\ldots,\tilde x_N)\in\mathcal A_{\rho_{\tilde{\mathcal B}}}}\sum_{i=1}^N\langle\tilde x_i,\tilde\omega_i\rangle$ for any $\tilde\omega=(\tilde\omega_1,\ldots,\tilde\omega_N)\in\mathcal W$, which, together with Equations (11) and (12), yields that
$$\sup_{(\tilde y_1,\ldots,\tilde y_N)\in\mathcal B}\sum_{i=1}^N\langle\tilde y_i,\tilde\omega_i\rangle=\inf_{\lambda>0}\frac{1}{\lambda}\Bigl(z_0+\sum_{i=1}^N\frac{1}{n_i}\sum_{j=1}^K\sum_{k=1}^{n_{ij}}\ell_i^*(\lambda n_i\omega_k^{i,j})\Bigr),\quad\tilde\omega\in\mathcal W.$$
Therefore, it suffices to prove the claim in the case $L(\tilde 0)<z_0$.
Next, we give the main proof. Let us fix $\tilde\omega=(\tilde\omega_1,\ldots,\tilde\omega_N)\in\mathcal W$.
The part "≤" of Equation (6). Note that, as $\ell_i^*$ is the conjugate function of $\ell_i$, $\ell_i^*(\omega_k^{i,j})=\sup_{y\in\mathbb R}\{\omega_k^{i,j}y-\ell_i(y)\}$. For any $\lambda>0$ and $\tilde x\in\mathbb R^n$, we have that
$$x_k^{i,j}\omega_k^{i,j}=\frac{1}{\lambda n_i}\,x_k^{i,j}\cdot\lambda n_i\omega_k^{i,j}\le\frac{1}{\lambda n_i}\bigl(\ell_i(x_k^{i,j})+\ell_i^*(\lambda n_i\omega_k^{i,j})\bigr),\quad i=1,\ldots,N,\ j=1,\ldots,K,\ k=1,\ldots,n_{ij},$$
which yields that
$$\sum_{i=1}^N\langle\tilde x_i,\tilde\omega_i\rangle=\sum_{i=1}^N\sum_{j=1}^K\sum_{k=1}^{n_{ij}}x_k^{i,j}\omega_k^{i,j}\le\frac{1}{\lambda}\Bigl(\sum_{i=1}^N\frac{1}{n_i}\sum_{j=1}^K\sum_{k=1}^{n_{ij}}\ell_i(x_k^{i,j})+\sum_{i=1}^N\frac{1}{n_i}\sum_{j=1}^K\sum_{k=1}^{n_{ij}}\ell_i^*(\lambda n_i\omega_k^{i,j})\Bigr).$$
Therefore,
$$\begin{aligned}\alpha_{\min}(\tilde\omega)&=\sup_{(\tilde x_1,\ldots,\tilde x_N)\in\mathcal A_{\rho_{\mathcal B}}}\sum_{i=1}^N\langle\tilde x_i,\tilde\omega_i\rangle=\sup_{(\tilde x_1,\ldots,\tilde x_N)\in\mathcal B}\sum_{i=1}^N\langle\tilde x_i,\tilde\omega_i\rangle\\&\le\sup_{(\tilde x_1,\ldots,\tilde x_N)\in\mathcal B}\frac{1}{\lambda}\Bigl(\sum_{i=1}^N\frac{1}{n_i}\sum_{j=1}^K\sum_{k=1}^{n_{ij}}\ell_i(x_k^{i,j})+\sum_{i=1}^N\frac{1}{n_i}\sum_{j=1}^K\sum_{k=1}^{n_{ij}}\ell_i^*(\lambda n_i\omega_k^{i,j})\Bigr)\le\frac{1}{\lambda}\Bigl(z_0+\sum_{i=1}^N\frac{1}{n_i}\sum_{j=1}^K\sum_{k=1}^{n_{ij}}\ell_i^*(\lambda n_i\omega_k^{i,j})\Bigr).\end{aligned}$$
The part "≥" of Equation (6). We will show that
$$\alpha_{\min}(\tilde\omega)\ge\inf_{\lambda>0}\frac{1}{\lambda}\Bigl(z_0+\sum_{i=1}^N\frac{1}{n_i}\sum_{j=1}^K\sum_{k=1}^{n_{ij}}\ell_i^*(\lambda n_i\omega_k^{i,j})\Bigr). \qquad (13)$$
Without loss of generality, we assume that α m i n ( ω ˜ ) < . The proof of Equation (13) proceeds in three steps. First, the inequality in Equation (13) is proved under three extra conditions. Second, the continuity condition is removed. Finally, the inequality is shown in the general case.
Step 1. Assume that the following three conditions hold.
(C1)
There exists D R n such that L ( x ˜ ) = inf y ˜ R n L ( y ˜ ) for all x ˜ D , where D satisfies the monotonicity, i.e., if x ˜ D then for any y ˜ x ˜ , y ˜ D .
(C2)
i , i = 1 , , N are finite on ( 0 , ) .
(C3)
J i , i = 1 , , N are continuous on ( 0 , ) .
Note that these assumptions imply that ℓ_i(0) < ∞ and that J_i(0+) are bounded from below. Moreover, J_i(z) increases to ∞ as z → ∞, and hence so does ℓ_i(J_i(z)). Note that for all z ∈ ℝ, ℓ_i*(z) ≥ −ℓ_i(0), i = 1, …, N, which yields that
∑_{i=1}^{N} (1/n_i) ∑_{j=1}^{K} ∑_{k=1}^{n_{ij}} ℓ_i*(z_k^{i,j}) ≥ ∑_{i=1}^{N} (1/n_i) ∑_{j=1}^{K} ∑_{k=1}^{n_{ij}} (−ℓ_i(0)) = −L(0̃) > −z_0.
From Equation (3), it follows that
lim z ˜ 0 ˜ i = 1 N 1 n i j = 1 K k = 1 n i j i ( J i ( z k i , j ) ) z 0 < lim z ˜ 0 ˜ i = 1 N 1 n i j = 1 K k = 1 n i j i ( J i ( z k i , j ) ) i = 1 N 1 n i j = 1 K k = 1 n i j i * ( z k i , j ) = lim z ˜ 0 ˜ i = 1 N 1 n i j = 1 K k = 1 n i j z k i , j J i ( z k i , j ) = 0 .
These facts and the continuity of J i imply that, for large enough m, there exists some λ m > 0 such that
i = 1 N 1 n i j = 1 K k = 1 n i j i J i ( λ m n i ω k i , j ) I { n i ω k i , j m } = z 0 .
Denote x k i , j ( m ) : = J i ( λ m n i ω k i , j ) I { n i ω k i , j m } , i = 1 , , N , j = 1 , , K , k = 1 , , n i j , x ˜ i , j ( m ) : = ( x 1 i , j ( m ) , , x n i j i , j ( m ) ) , x ˜ i ( m ) : = ( x ˜ i , 1 ( m ) , , x ˜ i , K ( m ) ) and x ˜ ( m ) : = ( x ˜ 1 ( m ) , , x ˜ N ( m ) ) . Then x ˜ ( m ) is bounded and belongs to B . Therefore, it follows from Equations (3) and (14) that
α m i n ( ω ˜ ) = sup ( x ˜ 1 , , x ˜ N ) B i = 1 N x ˜ i , ω ˜ i i = 1 N x ˜ i ( m ) , ω ˜ i = i = 1 N j = 1 K k = 1 n i j J i ( λ m n i ω k i , j ) I { n i ω k i , j m } ω k i , j = 1 λ m i = 1 N 1 n i j = 1 K k = 1 n i j I { n i ω k i , j m } J i ( λ m n i ω k i , j ) λ m n i ω k i , j = 1 λ m i = 1 N 1 n i j = 1 K k = 1 n i j I { n i ω k i , j m } i ( J i ( λ m n i ω k i , j ) ) + i * ( λ m n i ω k i , j ) = 1 λ m i = 1 N 1 n i j = 1 K k = 1 n i j i ( J i ( λ m n i ω k i , j ) I { n i ω k i , j m } ) i ( 0 ) I { n i ω k i , j > m } + i * ( λ m n i ω k i , j ) I { n i ω k i , j m } = 1 λ m z 0 + i = 1 N 1 n i j = 1 K k = 1 n i j i ( 0 ) I { n i ω k i , j > m } + i * ( λ m n i ω k i , j ) I { n i ω k i , j m } 1 λ m z 0 L ( 0 ˜ ) .
As we assume that α m i n ( ω ˜ ) < , any limit point λ of { λ m } must be strictly positive. Hence,
α m i n ( ω ˜ ) lim m 1 λ m z 0 + i = 1 N 1 n i j = 1 K k = 1 n i j i ( 0 ) I { n i ω k i , j > m } + i * ( λ m n i ω k i , j ) I { n i ω k i , j m } 1 λ z 0 + i = 1 N 1 n i j = 1 K k = 1 n i j i * ( λ n i ω k i , j ) ,
which implies Equation (13).
Step 2. Assume that conditions C1 and C2 hold, but not all J_i, i = 1, …, N, are continuous. Without loss of generality, we may assume that J_{i_0} is discontinuous for i_0 ∈ I_0 and that J_i is continuous for i ∉ I_0. Then, we can approximate the function J_{i_0} from above by an increasing continuous function J̃_{i_0} on [0, ∞) such that
˜ i 0 * ( z ) : = i 0 * ( 0 ) + 0 z J ˜ i 0 ( y ) d y
satisfies
ℓ_{i_0}*(z) ≤ ℓ̃_{i_0}*(z) ≤ ℓ_{i_0}*((1 + ε)z), z ≥ 0, ε > 0.
Denote ℓ̂_{i_0} := ℓ̃_{i_0}**, the Fenchel–Legendre transform of ℓ̃_{i_0}*. In this way, we renew the loss functions so that the derivative J_i is continuous for every i = 1, …, N. As ℓ_i is a proper convex function, ℓ_i** = ℓ_i, and
ℓ_{i_0}(x/(1 + ε)) ≤ ℓ̂_{i_0}(x) ≤ ℓ_{i_0}(x).
Thus,
B̂ := { x̃ ∈ ℝ^n : ∑_{i∉I_0} (1/n_i) ∑_{j=1}^{K} ∑_{k=1}^{n_{ij}} ℓ_i(x_k^{i,j}) + ∑_{i∈I_0} (1/n_i) ∑_{j=1}^{K} ∑_{k=1}^{n_{ij}} ℓ̂_i(x_k^{i,j}) ≤ z_0 } ⊆ { x̃ ∈ ℝ^n : ∑_{i=1}^{N} (1/n_i) ∑_{j=1}^{K} ∑_{k=1}^{n_{ij}} ℓ_i( x_k^{i,j}/(1+ε) ) ≤ z_0 } = (1 + ε) B.
By Step 1, we know that the assertion holds if the derivative J i , i = 1 , , N are continuous, which implies that
inf λ > 0 1 λ z 0 + i = 1 N 1 n i j = 1 K k = 1 n i j i * ( λ n i ω k i , j ) inf λ > 0 1 λ z 0 + i I 0 1 n i j = 1 K k = 1 n i j i * ( λ n i ω k i , j ) + i I 0 1 n i j = 1 K k = 1 n i j ^ i * ( λ n i ω k i , j ) = sup ( x ˜ 1 , , x ˜ N ) B ^ i = 1 N x ˜ i , ω ˜ i sup ( x ˜ 1 , , x ˜ N ) ( 1 + ε ) B i = 1 N x ˜ i , ω ˜ i = ( 1 + ε ) α m i n ( ω ˜ ) .
By letting ε 0 , we obtain inequality Equation (13).
Step 3. We remove conditions C1 and C2. Without loss of generality, suppose that there exists an i_0 such that ℓ_{i_0}*(z) = +∞ for some z. Then, z must be an upper bound for the slope of ℓ_{i_0}. Therefore, we approximate ℓ_{i_0} by a sequence {ℓ_{i_0}^n}_{n∈ℕ} of convex loss functions whose slopes are unbounded. Simultaneously, we can handle the case where ℓ_{i_0} does not attain its infimum. For this reason, we choose a sequence z_n ↓ inf_{x∈ℝ} ℓ_{i_0}(x) with z_n ≤ ℓ_{i_0}(0) < z_{0,i_0}. Define
i 0 n ( x ) : = max { i 0 ( x ) , z n } + 1 n ( e x 1 ) + .
Then, ℓ_{i_0}^n decreases to ℓ_{i_0}, and each loss function ℓ_{i_0}^n satisfies conditions C1 and C2. Therefore, for any n ∈ ℕ and ε > 0, there exists λ_ε^n such that
> α m i n ( ω ˜ ) α m i n n ( ω ˜ ) 1 λ ε n z 0 + ( i 0 n ) * ( λ ε n n i ω k i , j ) + i i 0 1 n i j = 1 K k = 1 n i j i * ( λ ε n n i ω k i , j ) ε ,
where α_min^n(ω̃) is the penalty function arising from ℓ^n := (ℓ_1, …, ℓ_{i_0}^n, …, ℓ_N). Note that (ℓ_{i_0}^n)* ↗ ℓ_{i_0}*; by the assumption α_min(ω̃) < ∞, we have
inf z R ( i 0 n ) * ( z ) i 0 n ( 0 ) = i 0 ( 0 ) , inf z ˜ R n ( i 0 n ) * ( z k i , j ) + i i 0 1 n i j = 1 K k = 1 n i j i * ( z k i , j ) L ( 0 ˜ ) > z 0 .
As (ℓ_{i_0}^n)*(z)/z → ∞ as z → ∞, the sequence {λ_ε^n}_{n∈ℕ} must be bounded away from zero and from infinity. Therefore, we may assume that λ_ε^n converges to some λ_ε ∈ (0, ∞). Using again the fact that Equation (15) holds uniformly in n and z, we have that
α m i n ( ω ˜ ) + ε lim inf n 1 λ ε n z 0 + ( i 0 n ) * ( λ ε n n i ω k i , j ) + i i 0 1 n i j = 1 K k = 1 n i j i * ( λ ε n n i ω k i , j ) 1 λ ε z 0 + i = 1 N 1 n i j = 1 K k = 1 n i j i * ( λ ε n i ω k i , j ) .
The proof of Theorem 1 is completed. □
Proof of Corollary 1.
We will first show that
ρ L ( x ˜ ) = sup L ρ ( x ˜ ) ,
for any x ˜ = ( x 1 , 1 , , x 1 , n 1 , x 2 , 1 , , x K , n K ) R n , where for L ,
ρ ( x ˜ ) : = ρ A ( , z 0 ) ( x ˜ ) : = inf m R : 1 n j = 1 K k = 1 n j ( x j k m ) z 0 .
Note that for any x ˜ R n ,
ρ L ( x ˜ ) = inf m R : 1 n j = 1 K k = 1 n j ( x j k m ) z 0 for all L .
Therefore, for any L and x ˜ R n ,
ρ L ( x ˜ ) inf m R : 1 n j = 1 K k = 1 n j ( x j k m ) z 0 = ρ ( x ˜ ) ,
which implies that
ρ L ( x ˜ ) sup L ρ ( x ˜ ) .
Next, we will show that
ρ L ( x ˜ ) sup L ρ ( x ˜ ) .
Without loss of generality, we assume that M : = sup L ρ ( x ˜ ) < + . For any L ,
M ρ ( x ˜ ) = inf m R : 1 n j = 1 K k = 1 n j ( x j k m ) z 0 .
Therefore, for every ε > 0 , there exists an m = m ( , ε ) with 1 n j = 1 K k = 1 n j ( x j k m ) z 0 such that
m ρ ( x ˜ ) + ε .
Thus, for any L ,
< m ρ ( x ˜ ) + ε M + ε < + ,
which yields that
−∞ < sup_{ℓ∈L} m ≤ M + ε.
For any L , by the increasing property of ,
1 n j = 1 K k = 1 n j x j k sup L m 1 n j = 1 K k = 1 n j x j k m z 0 ,
which yields that
sup L m m R : 1 n j = 1 K k = 1 n j ( x j k m ) z 0 for all L .
By the definition of ρ L and Equation (17), we know that
ρ L ( x ˜ ) sup L m M + ε ,
which implies that
ρ L ( x ˜ ) sup L ρ ( x ˜ ) ,
since ε > 0 is arbitrary.
From Theorem 1 it follows that for any L ,
ρ ( x ˜ ) = sup ω ˜ W ˜ x ˜ , ω ˜ α min ( w ˜ ) ,
for x ˜ R n , where the minimal penalty function α min : R n R is given by
α min ( ω ˜ ) = inf λ > 0 z 0 ( ) λ + 1 λ n j = 1 K k = 1 n j * ( λ n ω j k ) .
Therefore, from Equations (16) and (18), it follows that
ρ L ( x ˜ ) = sup L ρ ( x ˜ ) = sup L sup ω ˜ W ˜ x ˜ , ω ˜ α min ( w ˜ ) = sup ω ˜ W ˜ x ˜ , ω ˜ inf L α min ( w ˜ ) .
Therefore, the minimal penalty function α min : R n R is
α min ( w ˜ ) = inf L α min ( w ˜ ) = inf L inf λ > 0 z 0 ( ) λ + 1 λ n j = 1 K k = 1 n j * ( λ n ω j k ) = inf λ > 0 inf L z 0 ( ) λ + 1 λ n j = 1 K k = 1 n j * ( λ n ω j k ) ,
for ω ˜ W ˜ . The proof of Corollary 1 is completed. □
Proof of Theorem 2.
Sufficiency: Suppose that ℓ(x) = z_0 + αx^+ − βx^− with some 0 < β ≤ α. Since 0 < β ≤ α, the shortfall risk statistic ρ_B is a convex risk statistic by Proposition 3.
For any x ˜ = ( x 1 , 1 , , x 1 , n 1 , x 2 , 1 , , x K , n K ) R n and λ > 0 ,
ρ B ( λ x ˜ ) = inf m R : 1 n j = 1 K k = 1 n j ( λ x j k m ) z 0 = inf m R : 1 n j = 1 K k = 1 n j z 0 n + α ( λ x j k m ) + β ( λ x j k m ) z 0 = inf m R : 1 n j = 1 K k = 1 n j α ( λ x j k m ) + β ( λ x j k m ) 0 = inf m R : 1 n j = 1 K k = 1 n j α x j k m λ + β x j k m λ 0 = inf λ m * R : 1 n j = 1 K k = 1 n j z 0 n + α ( x j k m * ) + β ( x j k m * ) z 0 = inf λ m * R : 1 n j = 1 K k = 1 n j ( x j k m * ) z 0 = λ ρ B ( x ˜ ) ,
which implies that ρ B is positively homogeneous, and hence ρ B is a coherent risk statistic.
Necessity: Suppose that ρ B is coherent. Denote ˜ ( x ) : = ( x ) z 0 and
B ^ : = x ˜ = ( x 1 , 1 , , x 1 , n 1 , x 2 , 1 , , x K , n K ) R n : 1 n j = 1 K k = 1 n j ˜ ( x j k ) 0 .
Then, B ^ = x ˜ R n : 1 n j = 1 K k = 1 n j [ ( x j k ) z 0 ] 0 = B .
By the positive homogeneity and continuity of ρ B , for any x ˜ R n and λ > 0 , ρ B ( λ x ˜ ) = λ ρ B ( x ˜ ) , thus ρ B ( 0 ˜ ) = 0 , and therefore
0 = ρ B ( 0 ˜ ) = inf m R : 1 n j = 1 K k = 1 n j ˜ ( 0 m ) 0 = inf m R : ˜ ( m ) 0 .
As the convex function ˜ is continuous and strictly increasing, we have ˜ ( 0 ) = 0 .
Denote
A ρ B : = x ˜ R n : ρ B ( x ˜ ) 0 .
Then the positive homogeneity of ρ_B implies that A_{ρ_B} is a cone, i.e., λx̃ ∈ A_{ρ_B} for any x̃ ∈ A_{ρ_B} and λ > 0. By Proposition 1, B = A_{ρ_B}, and hence B is a cone.
Next, we will show that ℓ̃(λx) = λℓ̃(x) for any x ∈ ℝ and λ > 0. Suppose, to the contrary, that there exist x_0 ∈ ℝ and λ_0 > 0 such that ℓ̃(λ_0 x_0) ≠ λ_0 ℓ̃(x_0). Without loss of generality, we may assume that λ_0 > 1; otherwise, if 0 < λ_0 < 1, we replace λ_0 by λ_0′ := 1/λ_0 and x_0 by x_0′ := λ_0 x_0, so that ℓ̃(λ_0′ x_0′) ≠ λ_0′ ℓ̃(x_0′).
By the convexity of ˜ and ˜ ( 0 ) = 0 , we have that
1 λ 0 ˜ ( λ 0 x 0 ) = 1 1 λ 0 ˜ ( 0 ) + 1 λ 0 ˜ ( λ 0 x 0 ) ˜ 1 1 λ 0 · 0 + 1 λ 0 λ 0 x 0 = ˜ ( x 0 ) ,
which yields that ˜ ( λ 0 x 0 ) > λ 0 ˜ ( x 0 ) .
Recall that the convex function ℓ̃ is continuous with inf_{x∈ℝ} ℓ̃(x) = −∞ and sup_{x∈ℝ} ℓ̃(x) = +∞; by the intermediate value theorem for continuous functions, there exists (x_2, …, x_n) ∈ ℝ^{n−1} such that
˜ ( x 0 ) + ˜ ( x 2 ) + + ˜ ( x n ) = 0 .
Therefore, by the definition of the acceptance set B ^ , ( x 0 , x 2 , , x n ) B ^ = B .
Similar to Equation (19), we know that ˜ ( λ 0 x ) λ 0 ˜ ( x ) for any x R , therefore
˜ ( λ 0 x 0 ) + ˜ ( λ 0 x 2 ) + + ˜ ( λ 0 x n ) ˜ ( λ 0 x 0 ) + λ 0 ˜ ( x 2 ) + + λ 0 ˜ ( x n ) > λ 0 ˜ ( x 0 ) + λ 0 ˜ ( x 2 ) + + λ 0 ˜ ( x n ) = λ 0 [ ˜ ( x 0 ) + ˜ ( x 2 ) + + ˜ ( x n ) ] = 0 .
Thus λ_0 (x_0, x_2, …, x_n) = (λ_0 x_0, λ_0 x_2, …, λ_0 x_n) ∉ B, which contradicts the fact that B is a cone.
Therefore, λℓ̃(x) = ℓ̃(λx) for any x ∈ ℝ and λ > 0. It follows that
ℓ̃(x) = αx^+ − βx^−, x ∈ ℝ,
with some α > 0, β > 0, since ℓ̃ is increasing. The inequality β ≤ α follows from the convexity of ℓ̃. Consequently, ℓ(x) = z_0 + αx^+ − βx^− for all x ∈ ℝ with some 0 < β ≤ α < +∞. The proof of Theorem 2 is completed. □
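The sufficiency direction of Theorem 2 can also be checked numerically. The sketch below (the values of z_0, α, β, and the data are illustrative) computes ρ_B for ℓ(x) = z_0 + αx^+ − βx^− by bisection and verifies positive homogeneity:

```python
z0, alpha, beta = 0.5, 2.0, 1.0      # illustrative; 0 < beta <= alpha

def loss(x):
    # l(x) = z0 + alpha * x^+ - beta * x^-
    return z0 + alpha * max(x, 0.0) - beta * max(-x, 0.0)

def rho_B(xs):
    # rho_B(x~) = inf { m : (1/n) sum l(x_j^k - m) <= z0 }, via bisection;
    # the constraint function g is continuous and strictly decreasing in m
    def g(m):
        return sum(loss(x - m) for x in xs) / len(xs) - z0
    lo, hi = min(xs) - 1.0, max(xs) + 1.0   # g(lo) > 0 > g(hi)
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return hi

xs = [1.2, -0.7, 3.1, 0.0, -2.5]
r = rho_B(xs)
for lam in (0.5, 2.0, 7.3):
    # positive homogeneity: rho_B(lam * x~) = lam * rho_B(x~)
    assert abs(rho_B([lam * x for x in xs]) - lam * r) < 1e-6
```

Note that the constant z_0 cancels inside the acceptance constraint, exactly as in the chain of equalities in the sufficiency proof.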
Proof of Theorem 3.
We will make full use of convex analysis to prove the theorem.
Define g ˜ i , i = 1 , , N and G as the same way as in Remark 2. It is easy to check that i : = g i * satisfies the assumptions in Theorem 1 for each 1 i N . As i * = g i * * = g i , 1 i N , Theorem 1 implies that for fixed x ˜ R n 1 × × R n N
F ( s ) : = F x ˜ ( s ) : = inf i = 1 N z i : i = 1 N 1 n i j = 1 K k = 1 n i j g i * ( x k i , j z i ) s = sup ω ˜ W x ˜ , ω ˜ inf λ > 0 1 λ s + i = 1 N 1 n i j = 1 K k = 1 n i j g i ( λ n i ω k i , j ) = inf ω ˜ W x ˜ , ω ˜ + inf β > 0 β s + i = 1 N 1 n i j = 1 K k = 1 n i j β g i n i ω k i , j β = inf β > 0 inf ω ˜ W x ˜ , ω ˜ + β s + i = 1 N 1 n i j = 1 K k = 1 n i j β g i n i ω k i , j β = inf β > 0 β s + inf ω ˜ W x ˜ , ω ˜ + i = 1 N 1 n i j = 1 K k = 1 n i j β g i n i ω k i , j β = inf β > 0 β s + inf ω ˜ W x ˜ , ω ˜ + I ( g ˜ 1 , , g ˜ N ) ( β , ω ˜ ) = inf β > 0 β s + G ( β ) = sup β > 0 β ( ˙ s ) G ( β ) = G * ( s ) ,
for all s in the interior of set { i = 1 N 1 n i j = 1 K k = 1 n i j g i * ( x k i , j ) : x ˜ R n 1 × × R n N } , which coincides with the interior of domF.
Note that for t R ,
G ( t ) = G * * ( t ) = sup s dom F { s t G * ( s ) } = sup s dom F { s t F ( s ) } .
By the definition of G,
ρ ( g 1 , , g N ) ( x ˜ ) = G ( 1 ) = sup s dom F { s F ( s ) } = inf s dom F { s + F ( s ) } .
For any s dom F , there exists z ˜ * : = z ˜ * ( g 1 , , g N , x ˜ ) : = ( z 1 * , , z N * ) R N with
i = 1 N 1 n i j = 1 K k = 1 n i j g i * ( x k i , j z i * ) = s
such that
i = 1 N z i * = F ( s ) ,
as g i * is continuous for each 1 i N .
Thus, for any x ˜ R n 1 × × R n N , the multivariate divergence risk statistic can be rewritten as
ρ ( g 1 , , g N ) ( x ˜ ) = inf s dom F { s + F ( s ) } = inf ( z 1 , , z N ) R N i = 1 N 1 n i j = 1 K k = 1 n i j g i * ( x k i , j z i ) + i = 1 N z i .
The proof of Theorem 3 is completed. □

5. Examples

In this section, we will construct entropic (or entropy-like) risk statistics by choosing specific loss functions.
Example 1.
For exponential loss functions ℓ_i(x) := (e^{β_i x} − 1)/β_i, β_i > 0, and z_{0,i} > −1/β_i for each 1 ≤ i ≤ N, we have z_0 + ∑_{i=1}^{N} 1/β_i = ∑_{i=1}^{N} (z_{0,i} + 1/β_i) > 0. The conjugate function ℓ_i* : [0, +∞) → ℝ is given by
i * ( x ) = x β i log x x β i + 1 β i , i f x > 0 , 1 β i , x = 0 .
Then by the definition of the shortfall risk statistic in (5), we know that the corresponding shortfall risk statistic is
ρ B ( x ˜ ) = inf i = 1 N m i : i = 1 N 1 n i j = 1 K k = 1 n i j i ( x k i , j m i ) z 0 = inf i = 1 N m i : i = 1 N 1 n i β i j = 1 K k = 1 n i j e β i ( x k i , j m i ) z 0 + i = 1 N 1 β i
for x ˜ = ( x ˜ 1 , , x ˜ N ) = ( x 1 1 , 1 , , x n 11 1 , 1 , , x 1 1 , K , , x n 1 k 1 , K , , x 1 N , 1 , , x n N 1 N , 1 , , x 1 N , K , , x n N K N , K ) R n 1 × × R n N .
By Theorem 1, we have that the penalty function of ρ B is
α m i n ( ω ˜ ) = inf λ > 0 1 λ z 0 + i = 1 N 1 n i j = 1 K k = 1 n i j i * ( λ n i ω k i , j ) = inf λ > 0 1 λ z 0 + i = 1 N 1 n i j = 1 K k = 1 n i j λ n i ω k i , j β i log λ n i ω k i , j λ n i ω k i , j β i + 1 β i = i = 1 N 1 β i j = 1 K k = 1 n i j ω k i , j log z 0 + i = 1 N 1 β i i = 1 N 1 β i n i ω k i , j
for any ω̃ ∈ W = { ω̃ = (ω̃_1, …, ω̃_N) ∈ ℝ^{n_1} × ⋯ × ℝ^{n_N} : ω̃ ≥ 0, ⟨1̃_i, ω̃_i⟩ = 1, i = 1, …, N }, where the infimum is attained at λ = ( z_0 + ∑_{i=1}^{N} 1/β_i ) / ( ∑_{i=1}^{N} 1/β_i ).
Furthermore, by Theorem 1, we know that ρ B has the following dual representation,
ρ B ( x ˜ ) = sup ω ˜ W x ˜ , ω ˜ inf λ > 0 1 λ z 0 + i = 1 N 1 n i j = 1 K k = 1 n i j i * ( λ n i ω k i , j ) = sup ω ˜ W x ˜ , ω ˜ i = 1 N 1 β i j = 1 K k = 1 n i j ω k i , j log z 0 + i = 1 N 1 β i i = 1 N 1 β i n i ω k i , j .
Let g_i(x) := ℓ_i*(x) for each 1 ≤ i ≤ N. Then the divergence function I_{(g_1, …, g_N)} : [0, +∞)^{n_1} × ⋯ × [0, +∞)^{n_N} → ℝ is given by
I ( g 1 , , g N ) ( ω ˜ ) = i = 1 N 1 n i j = 1 K k = 1 n i j g i ( n i ω k i , j ) = i = 1 N 1 n i j = 1 K k = 1 n i j n i ω k i , j β i log ( n i ω k i , j ) n i ω k i , j β i + 1 β i = i = 1 N 1 β i j = 1 K k = 1 n i j ω k i , j log ( n i ω k i , j ) .
Therefore, by Theorem 3, the divergence risk statistic is
ρ ( g 1 , , g N ) ( x ˜ ) = inf ( z 1 , , z N ) R N i = 1 N 1 n i j = 1 K k = 1 n i j g i * ( x k i , j z i ) + i = 1 N z i = inf ( z 1 , , z N ) R N i = 1 N 1 n i j = 1 K k = 1 n i j e β i ( x k i , j z i ) 1 β i + i = 1 N z i = i = 1 N 1 β i log 1 n i j = 1 K k = 1 n i j e β i x i j , k
for any x̃ ∈ ℝ^n, where the infimum is attained at z_i = (1/β_i) log( (1/n_i) ∑_{j=1}^{K} ∑_{k=1}^{n_{ij}} e^{β_i x_k^{i,j}} ), i = 1, …, N.
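The stated minimizers can be verified numerically. The following sketch (the β_i values and the sample data are illustrative, not from the paper) checks that z^* attains the infimum in Equation (24) and that the optimal value agrees with the closed form:

```python
import math
import itertools

betas = [1.0, 2.5]                                 # illustrative beta_i
data = [[0.3, -1.2, 0.8, 2.0], [1.1, -0.4, 0.5]]   # x~_i, flattened over j, k

def objective(z):
    # sum_i (1/n_i) sum_{j,k} g_i*(x - z_i) + sum_i z_i,
    # with g_i*(x) = l_i(x) = (e^{beta_i x} - 1)/beta_i
    total = sum(z)
    for b, xs, zi in zip(betas, data, z):
        total += sum((math.exp(b * (x - zi)) - 1.0) / b for x in xs) / len(xs)
    return total

# stated minimizers z_i* = (1/beta_i) log((1/n_i) sum e^{beta_i x})
z_star = [math.log(sum(math.exp(b * x) for x in xs) / len(xs)) / b
          for b, xs in zip(betas, data)]
rho = objective(z_star)

# closed form of Equation (24): sum_i (1/beta_i) log((1/n_i) sum e^{beta_i x})
closed = sum(math.log(sum(math.exp(b * x) for x in xs) / len(xs)) / b
             for b, xs in zip(betas, data))
assert abs(rho - closed) < 1e-9

# z_star minimizes the convex objective: perturbations never do better
for dz in itertools.product((-0.1, 0.0, 0.1), repeat=2):
    z = [zi + d for zi, d in zip(z_star, dz)]
    assert objective(z) >= rho - 1e-12
```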
Note that the penalty function of the multivariate shortfall risk statistic in Equation (21) and that of the multivariate divergence risk statistic in Equation (23) are equal if we choose z_0 = 0. In other words, if z_0 = 0, the corresponding loss functions and divergence functions have a dual relationship.
Remark 4.
ρ_{(g_1, …, g_N)} as in Equation (24) is called the multivariate entropic risk statistic, which can be considered as the data-based (or empirical) version of the entropic risk measure of Föllmer and Schied [2].
In Example 1, the infimum in Equation (20) and the supremum in Equation (22) are hard to express by explicit formulas. However, in the following Example 2, we will show that when N = 1, the infimum in Equation (20) and the supremum in Equation (22) admit explicit formulas.
Example 2.
For the exponential loss function ℓ(x) := (e^{βx} − 1)/β, β > 0, and z_0 > −1/β, the conjugate function ℓ* : [0, +∞) → ℝ is given by
* ( x ) = x β log x x β + 1 β , i f x > 0 , 1 β , x = 0 .
Then by the definition of the shortfall risk statistic in Equation (5), we know that the corresponding shortfall risk statistic is
ρ B ( x ˜ ) = inf m : 1 n j = 1 K k = 1 n j ( x j k m ) z 0 = 1 β log 1 n j = 1 K k = 1 n j e β x j k log ( β z 0 + 1 )
for any x ˜ R n .
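A quick numerical check of the closed form in Equation (25) (the values of β, z_0, and the data below are illustrative): the smallest feasible m, located by bisection, coincides with (1/β)[ log((1/n) ∑ e^{β x_j^k}) − log(βz_0 + 1) ]:

```python
import math

beta, z0 = 1.5, 0.4                  # illustrative; z0 > -1/beta
xs = [0.3, -1.2, 0.8, 2.0, -0.1]
n = len(xs)

def constraint(m):
    # (1/n) * sum l(x - m) - z0, continuous and strictly decreasing in m
    return sum((math.exp(beta * (x - m)) - 1.0) / beta for x in xs) / n - z0

lo, hi = -20.0, 20.0                 # constraint(lo) > 0 > constraint(hi)
for _ in range(200):                 # bisection for the smallest feasible m
    mid = 0.5 * (lo + hi)
    if constraint(mid) > 0:
        lo = mid
    else:
        hi = mid
numeric = hi

closed = (math.log(sum(math.exp(beta * x) for x in xs) / n)
          - math.log(beta * z0 + 1.0)) / beta
assert abs(numeric - closed) < 1e-9
```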
By Theorem 1, we have that the penalty function of ρ B is
α m i n ( ω ˜ ) = inf λ > 0 1 λ z 0 + 1 n j = 1 K k = 1 n j * ( λ n ω j k ) = inf λ > 0 1 λ z 0 + 1 n j = 1 K k = 1 n j λ n ω j k β log λ n ω j k λ n ω j k β + 1 β = inf λ > 0 1 λ z 0 + 1 β λ β + 1 n β j = 1 K k = 1 n j λ n ω j k log λ n ω j k = 1 β z 0 + 1 z 0 + 1 β β z 0 + 1 β + 1 n β j = 1 K k = 1 n j ( β z 0 + 1 ) n ω j k log ( β z 0 + 1 ) n ω j k = 1 β j = 1 K k = 1 n j ω j k log [ ( β z 0 + 1 ) n ω j k ]
for any ω̃ ∈ W = { (ω_{11}, …, ω_{1n_1}, ω_{21}, …, ω_{K n_K}) ∈ ℝ^n : ω_{jk} ≥ 0, ∑_{j=1}^{K} ∑_{k=1}^{n_j} ω_{jk} = 1 }, where the infimum is attained at λ = βz_0 + 1.
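The minimizing λ = βz_0 + 1 can be confirmed numerically. The sketch below (the values of β, z_0, and the weight vector are illustrative) evaluates the λ-infimum defining α_min and compares it with the closed form above:

```python
import math

beta, z0 = 1.5, 0.4                  # illustrative; z0 > -1/beta
w = [0.1, 0.25, 0.05, 0.4, 0.2]      # weights in W: w_jk >= 0, sum = 1
n = len(w)

def conj(y):
    # l*(y) for the exponential loss l(x) = (e^{beta x} - 1)/beta
    if y == 0.0:
        return 1.0 / beta
    return (y / beta) * math.log(y) - y / beta + 1.0 / beta

def h(lam):
    # objective of the lambda-infimum defining alpha_min
    return (z0 + sum(conj(lam * n * wk) for wk in w) / n) / lam

lam_star = beta * z0 + 1.0
closed = sum(wk * math.log((beta * z0 + 1.0) * n * wk) for wk in w) / beta

assert abs(h(lam_star) - closed) < 1e-9
for lam in (0.5, 1.0, lam_star, 2.0, 5.0):  # lam_star minimizes h
    assert h(lam) >= closed - 1e-12
```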
Let g(x) := ℓ*(x). Then the divergence function I_g : [0, +∞)^n → ℝ is given by
I_g(ω̃) = (1/n) ∑_{j=1}^{K} ∑_{k=1}^{n_j} g(n ω_{jk}) = (1/n) ∑_{j=1}^{K} ∑_{k=1}^{n_j} [ (n ω_{jk}/β) log(n ω_{jk}) − n ω_{jk}/β + 1/β ] = (1/β) ∑_{j=1}^{K} ∑_{k=1}^{n_j} ω_{jk} log(n ω_{jk}).
Therefore, by Theorem 3, the divergence risk statistic is
ρ g ( x ˜ ) = inf z R 1 n j = 1 K k = 1 n j g * ( x j k z ) + z = inf z R 1 n j = 1 K k = 1 n j e β ( x j k z ) 1 β + z = 1 β log 1 n j = 1 K k = 1 n j e β x j k
for any x̃ ∈ ℝ^n, where the infimum is attained at z = (1/β) log( (1/n) ∑_{j=1}^{K} ∑_{k=1}^{n_j} e^{β x_j^k} ).
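A minimal numerical sketch of Equation (26) (the value of β and the data are illustrative): the infimum is attained at the stated z, and its value is the entropic risk statistic, which here coincides with z itself:

```python
import math

beta = 2.0
xs = [0.3, -1.2, 0.8, 2.0, -0.1]    # illustrative sample, flattened over j, k
n = len(xs)

def objective(z):
    # (1/n) * sum g*(x - z) + z, with g*(x) = l(x) = (e^{beta x} - 1)/beta
    return sum((math.exp(beta * (x - z)) - 1.0) / beta for x in xs) / n + z

# stated minimizer z* = (1/beta) log((1/n) sum e^{beta x})
z_star = math.log(sum(math.exp(beta * x) for x in xs) / n) / beta
closed = z_star                     # entropic risk statistic (1/beta) log mean exp

assert abs(objective(z_star) - closed) < 1e-9
# z_star is indeed a minimizer of the convex objective
for dz in (-0.5, -0.1, 0.1, 0.5):
    assert objective(z_star + dz) >= closed - 1e-12
```

With z_0 = 0, the closed form of Equation (25) reduces to the same value, in line with the duality noted below.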
Note that the shortfall risk statistic in Equation (25) and the divergence risk statistic in Equation (26) are equal if we choose z_0 = 0. In other words, if z_0 = 0, the corresponding loss function and divergence function have a dual relationship.
Remark 5.
In Example 2, when z_0 = 0, ρ_B as in Equation (25) is called the entropic risk statistic. For general z_0 > −1/β, ρ_B as in Equation (25) is called an entropy-like risk statistic.

6. Conclusions

In this paper, we have introduced two new classes of multivariate risk statistics, namely, multivariate shortfall risk statistics and multivariate divergence risk statistics. Their basic properties are discussed, and representation results for them are given. Moreover, the coherency of univariate shortfall risk statistics is characterized. These newly introduced multivariate risk statistics complement the study of risk statistics. Meanwhile, shortfall and divergence risk statistics are more tractable than the corresponding risk measures, because they are expressed directly in terms of data (i.e., samples). It would also be interesting to characterize the coherency of multivariate shortfall risk statistics, which we leave for future work.

Author Contributions

Conceptualization, H.S., X.Z. and Y.H.; Formal analysis, H.S., X.Z. and Y.H.; Writing—original draft preparation, H.S., X.Z., Y.C. and Y.H.; Writing—review and editing, H.S., X.Z. and Y.H.; Funding acquisition, X.Z., Y.C. and Y.H.

Funding

This work was supported by the National Natural Science Foundation of China (No. 11771343, 11901184), the Fundamental Research Funds for the Central Universities (No. 531107051210) and the Natural Science Basic Research Plan in Shaanxi Province of China (No. 2014JQ2-1003).

Acknowledgments

The authors are very grateful to the Assistant Editor Coco Guo and the two anonymous referees for their constructive comments and suggestions, which led to a greatly improved version of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Artzner, P.; Delbaen, F.; Eber, J.M.; Heath, D. Coherent measures of risk. Math. Financ. 1999, 9, 203–228. [CrossRef]
  2. Föllmer, H.; Schied, A. Convex measures of risk and trading constraints. Financ. Stoch. 2002, 6, 429–447. [Google Scholar] [CrossRef]
  3. Frittelli, M.; Rosazza Gianin, E. Putting order in risk measures. J. Bank. Financ. 2002, 26, 1473–1486. [Google Scholar] [CrossRef]
  4. Schied, A. Optimal investments for risk- and ambiguity-averse preferences: A duality approach. Financ. Stoch. 2007, 11, 107–129. [Google Scholar] [CrossRef]
  5. Ben-Tal, A.; Teboulle, M. An old-new concept of convex risk measures: The optimized certainty equivalent. Math. Financ. 2007, 17, 449–476. [Google Scholar] [CrossRef]
  6. Weber, S. Distribution-invariant risk measures, information, and dynamic consistency. Math. Financ. 2006, 16, 419–441. [Google Scholar] [CrossRef]
  7. Delbaen, F.; Bellini, F.; Bignozzi, V.; Ziegel, J.F. Risk measures with the CxLS property. Financ. Stoch. 2016, 20, 433–453. [Google Scholar] [CrossRef]
  8. Ararat, C.; Hamel, A.H.; Rudloff, B. Set-valued shortfall and divergence risk measures. Int. J. Theor. Appl. Financ. 2017, 20, 1750026. [Google Scholar] [CrossRef]
  9. Armenti, Y.; Crépey, S.; Drapeau, S.; Papapantoleon, A. Multivariate shortfall risk allocation and systemic risk. SIAM J. Financ. Math. 2018, 9, 90–126. [Google Scholar] [CrossRef]
  10. Ben-Tal, A.; Teboulle, M. Penalty functions and duality in stochastic programming via ϕ-divergence functionals. Math. Oper. Res. 1987, 12, 224–240. [Google Scholar] [CrossRef]
  11. Xu, M.; Angulo, J. Divergence-based risk measures: A discussion on sensitivities and extensions. Entropy 2019, 21, 634. [Google Scholar] [CrossRef]
  12. Burgert, C.; Rüschendorf, L. Consistent risk measures for portfolio vectors. Insur. Math. Econ. 2006, 38, 289–297. [Google Scholar] [CrossRef]
  13. Rüschendorf, L. Mathematical Risk Analysis; Springer: Berlin, Germany, 2013. [Google Scholar]
  14. Ekeland, I.; Schachermayer, W. Law invariant risk measures on L(Rd). Stat. Risk. Model. 2011, 28, 195–225. [Google Scholar] [CrossRef]
  15. Ekeland, I.; Galichon, A.; Henry, M. Comonotonic measures of multivariate risks. Math. Financ. 2012, 22, 109–132. [Google Scholar] [CrossRef]
  16. Wei, L.; Hu, Y. Coherent and convex risk measures for portfolios and applications. Stat. Probab. Lett. 2014, 90, 114–120. [Google Scholar] [CrossRef]
  17. Chen, Y.; Sun, F.; Hu, Y. Coherent and convex loss-based risk measures for portfolio vectors. Positivity 2018, 22, 399–414. [Google Scholar] [CrossRef]
  18. Heyde, C.C.; Kou, S.; Peng, X. What Is a Good Risk Measure: Bridging the Gaps between Data, Coherent Risk Measures, and Insurance Risk Measures; Columbia University: New York, NY, USA, 2006; Available online: https://www.math.ust.hk/~maxhpeng/KOU_hkpv1.pdf (accessed on 21 October 2019).
  19. Kou, S.; Peng, X.; Heyde, C.C. External risk measures and Basel Accords. Math. Oper. Res. 2013, 38, 393–417. [Google Scholar] [CrossRef]
  20. Tian, D.; Jiang, L. Quasiconvex risk statistics with scenario analysis. Math. Financ. Econ. 2015, 9, 111–121. [Google Scholar] [CrossRef]
  21. Tian, D.; Suo, X. A note on convex risk statistic. Oper. Res. Lett. 2012, 40, 551–553. [Google Scholar] [CrossRef]
  22. Chen, Y.; Hu, Y. Set-valued risk statistics with scenario analysis. Stat. Probab. Lett. 2017, 131, 25–37. [Google Scholar] [CrossRef]
  23. Liu, W.; Wei, L.; Hu, Y. Multivariate convex risk statistics with scenario analysis. Commun. Stat. Theory Methods 2019, 48, 5585–5601. [Google Scholar] [CrossRef]
  24. Föllmer, H.; Schied, A. Stochastic Finance: An Introduction in Discrete Time, 3rd ed.; Walter de Gruyter: Berlin, Germany, 2011. [Google Scholar]
