1. Introduction
The family of log-elliptical (LE) distributions is a family of continuous distributions that includes the log-normal (LN) distribution as a special case. This family of distributions is extensively used in quantitative finance to model financial returns and losses (Chriss [1], Valdez and Dhaene [2], Hamada and Valdez [3], Valdez et al. [4], Klebaner and Landsman [5], Kortschak and Hashorva [6], Landsman, Makov and Shushi [7]). Let $\mathbf{X}\sim LE_{n}(\boldsymbol{\mu},\Sigma ,g_{n})$ be an $n\times 1$ random vector with the multivariate LE distribution. Then, its probability density function (pdf) takes the form (see, for instance, Valdez et al. [4])
$$f_{\mathbf{X}}(\mathbf{x})=\frac{c_{n}}{\sqrt{|\Sigma |}\prod_{i=1}^{n}x_{i}}\,g_{n}\!\left(\frac{1}{2}(\ln \mathbf{x}-\boldsymbol{\mu})^{T}\Sigma ^{-1}(\ln \mathbf{x}-\boldsymbol{\mu})\right),\qquad \mathbf{x}>\mathbf{0}.$$
Here $\boldsymbol{\mu}$ is an $n\times 1$ vector of locations, $\Sigma$ is an $n\times n$ scale matrix, $c_{n}$ is a normalizing constant, and $g_{n}$ is called the density generator, which satisfies the following condition
$$\int_{0}^{\infty }t^{n/2-1}g_{n}(t)\,dt<\infty ,$$
guaranteeing that $f_{\mathbf{X}}$ is a genuine pdf. The LN pdf is obtained by taking the density generator $g_{n}(t)=e^{-t}$. Another important example is the log-Laplace distribution, whose density generator is expressed through the modified Bessel function of the third kind.
The LE family of distributions can be derived from the transformation $\mathbf{X}=e^{\mathbf{Y}}$ of an elliptical random vector $\mathbf{Y}\sim E_{n}(\boldsymbol{\mu},\Sigma ,g_{n})$, whose pdf takes the form
$$f_{\mathbf{Y}}(\mathbf{y})=\frac{c_{n}}{\sqrt{|\Sigma |}}\,g_{n}\!\left(\frac{1}{2}(\mathbf{y}-\boldsymbol{\mu})^{T}\Sigma ^{-1}(\mathbf{y}-\boldsymbol{\mu})\right),$$
with the characteristic function
$$\varphi _{\mathbf{Y}}(\mathbf{t})=e^{i\mathbf{t}^{T}\boldsymbol{\mu}}\,\psi \!\left(\frac{1}{2}\mathbf{t}^{T}\Sigma \mathbf{t}\right)$$
for some function $\psi$, called the characteristic generator. For the normal distribution, the characteristic generator is the exponential function, $\psi (t)=e^{-t}$; for the Laplace distribution, $\psi (t)=(1+t)^{-1}$; and for the generalized stable laws distributions, $\psi (t)=e^{-r\,t^{s/2}}$ with suitable parameters $r>0$ and $s\in (0,2]$.
Any log-elliptical random vector $\mathbf{X}\sim LE_{n}(\boldsymbol{\mu},\Sigma ,g_{n})$ has a unique density generator $g_{n}$ and thus has a unique characteristic generator $\psi$; this can be shown from the existence and uniqueness theorem. If the expectation of $X_{k}$ exists, the characteristic generator can be extended to the negative half-line, and the expectation has the following explicit form
$$E(X_{k})=e^{\mu _{k}}\psi \!\left(-\frac{\sigma _{k}^{2}}{2}\right),$$
where $\mu _{k}$ and $\sigma _{k}^{2}$ are the location and scale parameters of $X_{k}$, respectively (see Klebaner and Landsman [5], formula (2.13)). Furthermore, if the covariance between two log-elliptical random variables $X_{i}$ and $X_{j}$ exists, then the covariance follows the form (see Valdez et al. [4])
$$Cov(X_{i},X_{j})=e^{\mu _{i}+\mu _{j}}\left[\psi \!\left(-\frac{\sigma _{i}^{2}+2\sigma _{ij}+\sigma _{j}^{2}}{2}\right)-\psi \!\left(-\frac{\sigma _{i}^{2}}{2}\right)\psi \!\left(-\frac{\sigma _{j}^{2}}{2}\right)\right],$$
where $\mu _{i},\mu _{j}$ and $\sigma _{i}^{2},\sigma _{j}^{2}$ are the location and scale parameters of $X_{i}$ and $X_{j}$, respectively, and $\sigma _{ij}$ is the off-diagonal element of the scale matrix of the log-elliptical bivariate random vector $(X_{i},X_{j})^{T}$.
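The covariance formula can be checked numerically in the log-normal special case, where $\psi (t)=e^{-t}$. The following Monte Carlo sketch uses illustrative parameter values (not taken from the paper):

```python
import numpy as np

# Monte Carlo check of the log-elliptical covariance formula in the
# log-normal special case, where the characteristic generator is
# psi(t) = exp(-t).  The parameter values below are illustrative only.
rng = np.random.default_rng(0)
mu = np.array([0.1, 0.2])
Sigma = np.array([[0.30, 0.12],
                  [0.12, 0.25]])

def psi(t):
    return np.exp(-t)  # normal characteristic generator

# Closed form: Cov(X_i, X_j) = e^{mu_i + mu_j} [ psi(-(s_i^2 + 2 s_ij + s_j^2)/2)
#                                                - psi(-s_i^2/2) psi(-s_j^2/2) ]
s11, s12, s22 = Sigma[0, 0], Sigma[0, 1], Sigma[1, 1]
cov_formula = np.exp(mu[0] + mu[1]) * (
    psi(-(s11 + 2 * s12 + s22) / 2) - psi(-s11 / 2) * psi(-s22 / 2)
)

# Simulation: X = exp(Y) with Y ~ N(mu, Sigma)
Y = rng.multivariate_normal(mu, Sigma, size=2_000_000)
X = np.exp(Y)
cov_mc = np.cov(X[:, 0], X[:, 1])[0, 1]

print(cov_formula, cov_mc)  # the two estimates should agree closely
```

The same check works for any member of the family once its generator $\psi$ is substituted.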
The following are some special members of the class of log-elliptical distributions.
1. Multivariate log-normal distribution: In the case that the density generator is $g_{n}(t)=e^{-t}$, with $c_{n}=(2\pi )^{-n/2}$, the pdf of the multivariate log-normal distribution is
$$f_{\mathbf{X}}(\mathbf{x})=\frac{1}{(2\pi )^{n/2}\sqrt{|\Sigma |}\prod_{i=1}^{n}x_{i}}\exp \!\left(-\frac{1}{2}(\ln \mathbf{x}-\boldsymbol{\mu})^{T}\Sigma ^{-1}(\ln \mathbf{x}-\boldsymbol{\mu})\right),$$
and we write $\mathbf{X}\sim LN_{n}(\boldsymbol{\mu},\Sigma )$.
2. Multivariate log-Student-t distribution: The pdf of the log-Student-t distribution is given by
$$f_{\mathbf{X}}(\mathbf{x})=\frac{\Gamma \!\left(\frac{n+m}{2}\right)}{\Gamma \!\left(\frac{m}{2}\right)(m\pi )^{n/2}\sqrt{|\Sigma |}\prod_{i=1}^{n}x_{i}}\left(1+\frac{(\ln \mathbf{x}-\boldsymbol{\mu})^{T}\Sigma ^{-1}(\ln \mathbf{x}-\boldsymbol{\mu})}{m}\right)^{-\frac{n+m}{2}}$$
with $m$ degrees of freedom, and we write $\mathbf{X}\sim LSt_{n}(\boldsymbol{\mu},\Sigma ,m)$.
3. Multivariate log-logistic distribution: A random vector $\mathbf{X}$ is log-logistic distributed if its pdf takes the form
$$f_{\mathbf{X}}(\mathbf{x})=\frac{c_{n}}{\sqrt{|\Sigma |}\prod_{i=1}^{n}x_{i}}\,\frac{e^{-\frac{1}{2}u(\mathbf{x})}}{\left(1+e^{-\frac{1}{2}u(\mathbf{x})}\right)^{2}},\qquad u(\mathbf{x})=(\ln \mathbf{x}-\boldsymbol{\mu})^{T}\Sigma ^{-1}(\ln \mathbf{x}-\boldsymbol{\mu}),$$
and we write $\mathbf{X}\sim LLo_{n}(\boldsymbol{\mu},\Sigma )$.
4. Multivariate log-Laplace distribution: We say that $\mathbf{X}$ is a multivariate log-Laplace random vector if its pdf has the general LE form above with a density generator expressed through the modified Bessel function of the third kind; explicit expressions are given in Example 2 below.
A well-known property of the LE family of distributions is that its moments can be computed explicitly using the elliptical characteristic generator $\psi$, by the following celebrated lemma.

Lemma 1. Let $\mathbf{X}\sim LE_{n}(\boldsymbol{\mu},\Sigma ,g_{n})$. If the $m$-th moment of $X_{k}$ exists, then the characteristic generator $\psi$ can be extended to the negative half-line, and the mentioned moment takes the form
$$E(X_{k}^{m})=e^{m\mu _{k}}\psi \!\left(-\frac{m^{2}\sigma _{k}^{2}}{2}\right),$$
where $\sigma _{k}^{2}=\sigma _{kk}$ and $\psi$ is the characteristic generator of the associated elliptical random vector $\mathbf{Y}\sim E_{n}(\boldsymbol{\mu},\Sigma ,g_{n})$.
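In the log-normal case, where $\psi (t)=e^{-t}$, the lemma gives the familiar moment formula $E(X_{k}^{m})=e^{m\mu _{k}+m^{2}\sigma _{k}^{2}/2}$. A quick Monte Carlo sketch with illustrative parameters:

```python
import numpy as np

# Numerical check of the moment formula in the log-normal case:
# if X = e^Y with Y ~ N(mu_k, sigma_k^2), then
# E[X^m] = e^{m mu_k} psi(-m^2 sigma_k^2 / 2) with psi(t) = e^{-t}.
# mu_k and sigma_k below are illustrative values.
rng = np.random.default_rng(1)
mu_k, sigma_k = 0.5, 0.4
m = 3

moment_formula = np.exp(m * mu_k) * np.exp(m**2 * sigma_k**2 / 2)

Y = rng.normal(mu_k, sigma_k, size=1_000_000)
moment_mc = np.mean(np.exp(Y) ** m)

print(moment_formula, moment_mc)  # the two values should agree closely
```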
In this paper, we focus on the following projection of a random vector of risks $\mathbf{X}$:
$$\left(\mathbf{X}\mid \mathbf{X}>VaR_{\mathbf{q}}(\mathbf{X})\right),$$
where $VaR_{\mathbf{q}}(\mathbf{X})=(VaR_{q_{1}}(X_{1}),\dots ,VaR_{q_{n}}(X_{n}))^{T}$ is an $n\times 1$ vector whose $k$-th component is the value at risk of $X_{k}$ under the $q_{k}$ quantile,
$$VaR_{q_{k}}(X_{k})=\inf \{x:F_{X_{k}}(x)\geq q_{k}\},\qquad q_{k}\in (0,1).$$
For simplicity, we also write $VaR_{\mathbf{q}}(\mathbf{X})=\mathbf{x}_{\mathbf{q}}$, having in mind that we have a vector of $q$-th quantiles.
Following the asymptotic expansion of the conditional characteristic function of $(\mathbf{X}\mid \mathbf{X}>VaR_{\mathbf{q}}(\mathbf{X}))$, we obtain two conditional tail moments. The first measure,
$$MTCE_{\mathbf{q}}(\mathbf{X})=E\left(\mathbf{X}\mid \mathbf{X}>VaR_{\mathbf{q}}(\mathbf{X})\right),$$
has been introduced in Landsman, Makov and Shushi [8] and is called the multivariate tail conditional expectation (MTCE) measure, while the second measure, centralized around the MTCE,
$$MTCov_{\mathbf{q}}(\mathbf{X})=E\left(\left(\mathbf{X}-MTCE_{\mathbf{q}}(\mathbf{X})\right)\left(\mathbf{X}-MTCE_{\mathbf{q}}(\mathbf{X})\right)^{T}\mid \mathbf{X}>VaR_{\mathbf{q}}(\mathbf{X})\right),$$
was introduced in Landsman, Makov and Shushi [9] in order to capture the dispersion of the random vector of risks when focusing on extreme losses. The analysis of moments and tail moments of random variables has been well investigated in the literature, and research in this area remains active, with applications in fields ranging from data analysis to actuarial science (Loperfido [10], Loperfido, Mazur and Podgórski [11], Ogasawara [12]).
2. Multivariate Tail Conditional Expectation for Log-Elliptical Models
The multivariate tail conditional expectation (MTCE) measure is a risk measure that naturally extends the tail conditional expectation (TCE) from a univariate risk to a multivariate system of mutually dependent risks. This multivariate risk measure was introduced in Landsman, Makov and Shushi [8]. For additional literature on the MTCE measure we refer to Cai, Wang and Mao [13], Hashorva [14], Mousavi et al. [14], Frei [15], Ling [16], and Shushi and Yao [17], among others. Define $\mathbf{X}=(X_{1},\dots ,X_{n})^{T}$, an $n\times 1$ vector of random risks that are mutually dependent, with cumulative distribution function $F_{\mathbf{X}}(\mathbf{x})$. Using this notation, for two $n$-variate random vectors $\mathbf{X}$ and $\mathbf{Y}$, the inequality $\mathbf{X}>\mathbf{Y}$ means that $X_{k}>Y_{k}$ for every $k=1,\dots ,n$.
We now introduce the MTCE measure with varied quantile levels $\mathbf{q}=(q_{1},\dots ,q_{n})^{T}$:
$$MTCE_{\mathbf{q}}(\mathbf{X})=E\left(\mathbf{X}\mid \mathbf{X}>VaR_{\mathbf{q}}(\mathbf{X})\right).$$
This definition is essentially more realistic than the one introduced in Landsman, Makov and Shushi [8], where all the quantile levels were the same, $q_{1}=\dots =q_{n}=q$. Now each risk or loss may exceed its own VaR, which can be large, small, or even equal to 0, implying total flexibility as to the degree of riskiness of any of the underlying risks. For $n=1$ it reduces to the TCE, the univariate tail conditional expectation measure,
$$TCE_{q}(X)=E\left(X\mid X>VaR_{q}(X)\right).$$
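The varied-quantile definition can be illustrated with a short Monte Carlo sketch: estimate the componentwise VaR at each level $q_{k}$, keep the joint exceedance event, and average. The bivariate log-normal parameters below are illustrative:

```python
import numpy as np

# Monte Carlo sketch of MTCE with varied quantile levels q = (q_1, q_2):
# the conditional mean of X on the joint tail event
# {X_1 > VaR_{q_1}(X_1), X_2 > VaR_{q_2}(X_2)}.
rng = np.random.default_rng(2)
mu = np.array([0.0, 0.1])
Sigma = np.array([[0.20, 0.06],
                  [0.06, 0.15]])
q = np.array([0.90, 0.95])  # each risk has its own quantile level

X = np.exp(rng.multivariate_normal(mu, Sigma, size=1_000_000))
var_q = np.array([np.quantile(X[:, k], q[k]) for k in range(2)])
tail = np.all(X > var_q, axis=1)      # joint exceedance event
mtce = X[tail].mean(axis=0)           # Monte Carlo MTCE_q(X)

print(var_q, mtce)  # each MTCE component lies above its VaR
```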
Proposition 1. The $MTCE_{\mathbf{q}}$ measure satisfies the following expression,
$$\left(MTCE_{\mathbf{q}}(\mathbf{X})\right)_{k}=x_{q_{k}}+\frac{1}{\bar{F}_{\mathbf{X}}(\mathbf{x}_{\mathbf{q}})}\int_{x_{q_{k}}}^{\infty }\bar{F}_{\mathbf{X}}(x_{q_{1}},\dots ,x_{q_{k-1}},t,x_{q_{k+1}},\dots ,x_{q_{n}})\,dt,\qquad k=1,\dots ,n,$$
where $\bar{F}_{\mathbf{X}}$ is the multivariate tail function of $\mathbf{X}$.

Proof. Please see Landsman, Makov and Shushi [9]. □
Let $\mathbf{Y}\sim E_{n}(\boldsymbol{\mu},\Sigma ,g_{n})$ be the elliptical random vector associated with $\mathbf{X}=e^{\mathbf{Y}}$. We introduce a new random vector $\mathbf{Y}_{k}^{\ast }$, associated with the $k$-th element of the vector $\mathbf{Y}$, with the pdf
$$f_{\mathbf{Y}_{k}^{\ast }}(\mathbf{y})=\frac{e^{y_{k}}f_{\mathbf{Y}}(\mathbf{y})}{e^{\mu _{k}}\psi \!\left(-\sigma _{k}^{2}/2\right)}.$$
Theorem 1. Let $\mathbf{X}\sim LE_{n}(\boldsymbol{\mu},\Sigma ,g_{n})$ be an $n\times 1$ random vector of risks with characteristic generator $\psi$ which can be extended to the negative half-line. Then, the MTCE measure is given by
$$MTCE_{\mathbf{q}}(\mathbf{X})=\frac{1}{\bar{F}_{\mathbf{X}}(\mathbf{x}_{\mathbf{q}})}\,e^{\boldsymbol{\mu}}\circ \boldsymbol{\psi}\circ \bar{\mathbf{F}}^{\ast }(\ln \mathbf{x}_{\mathbf{q}}).$$
Here $\boldsymbol{\psi}=\left(\psi (-\sigma _{1}^{2}/2),\dots ,\psi (-\sigma _{n}^{2}/2)\right)^{T}$ and $\bar{\mathbf{F}}^{\ast }(\ln \mathbf{x}_{\mathbf{q}})=\left(\bar{F}_{\mathbf{Y}_{1}^{\ast }}(\ln \mathbf{x}_{\mathbf{q}}),\dots ,\bar{F}_{\mathbf{Y}_{n}^{\ast }}(\ln \mathbf{x}_{\mathbf{q}})\right)^{T}$, where $\bar{F}_{\mathbf{Y}_{k}^{\ast }}$ is the tail function of the random vector $\mathbf{Y}_{k}^{\ast }$ associated with $\mathbf{Y}$, and the symbol $\circ$ is the Hadamard (componentwise) product.
Proof. From the definition of MTCE, and using the transformation $\mathbf{X}=e^{\mathbf{Y}}$, we have
$$\left(MTCE_{\mathbf{q}}(\mathbf{X})\right)_{k}=\frac{E\left(e^{Y_{k}}1\{\mathbf{Y}>\ln \mathbf{x}_{\mathbf{q}}\}\right)}{\bar{F}_{\mathbf{X}}(\mathbf{x}_{\mathbf{q}})}=\frac{1}{\bar{F}_{\mathbf{X}}(\mathbf{x}_{\mathbf{q}})}\int_{\ln \mathbf{x}_{\mathbf{q}}}^{\infty }e^{y_{k}}f_{\mathbf{Y}}(\mathbf{y})\,d\mathbf{y},$$
where $\ln \mathbf{x}_{\mathbf{q}}=(\ln x_{q_{1}},\dots ,\ln x_{q_{n}})^{T}$. Now, notice that the following integral is, in fact, the moment generating function of the associated elliptical distribution,
$$\int_{\mathbb{R}^{n}}e^{y_{k}}f_{\mathbf{Y}}(\mathbf{y})\,d\mathbf{y}=E\left(e^{Y_{k}}\right)=e^{\mu _{k}}\psi \!\left(-\sigma _{k}^{2}/2\right),$$
allowing us to introduce a new random vector $\mathbf{Y}_{k}^{\ast }$, associated with the random vector $\mathbf{Y}$, with the pdf
$$f_{\mathbf{Y}_{k}^{\ast }}(\mathbf{y})=\frac{e^{y_{k}}f_{\mathbf{Y}}(\mathbf{y})}{e^{\mu _{k}}\psi \!\left(-\sigma _{k}^{2}/2\right)}$$
(a similar technique can be found in Valdez et al. [4]). Thus, for the $k$-th component, we have
$$\left(MTCE_{\mathbf{q}}(\mathbf{X})\right)_{k}=\frac{e^{\mu _{k}}\psi \!\left(-\sigma _{k}^{2}/2\right)\bar{F}_{\mathbf{Y}_{k}^{\ast }}(\ln \mathbf{x}_{\mathbf{q}})}{\bar{F}_{\mathbf{X}}(\mathbf{x}_{\mathbf{q}})},\qquad k=1,\dots ,n.\;\square$$
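The measure-change step of the proof can be verified numerically in the log-normal case, where tilting the density by $e^{y_{k}}$ shifts the mean of a normal vector by $\Sigma \mathbf{e}_{k}$. The parameters below are illustrative:

```python
import numpy as np

# Sketch check of the measure-change identity behind the proof, in the
# log-normal case: E[X_k 1{X > x_q}] = e^{mu_k + s_k^2/2} * P(Y*_k > ln x_q),
# where Y*_k ~ N(mu + Sigma e_k, Sigma) is the tilted (associated) vector.
rng = np.random.default_rng(3)
mu = np.array([0.0, 0.2])
Sigma = np.array([[0.25, 0.10],
                  [0.10, 0.20]])
n_samp = 2_000_000

Y = rng.multivariate_normal(mu, Sigma, size=n_samp)
X = np.exp(Y)
x_q = np.array([np.quantile(X[:, k], 0.95) for k in range(2)])

k = 0
lhs = np.mean(X[:, k] * np.all(X > x_q, axis=1))           # E[X_k 1{X > x_q}]

Y_star = rng.multivariate_normal(mu + Sigma[:, k], Sigma, size=n_samp)
tail_star = np.mean(np.all(Y_star > np.log(x_q), axis=1))  # P(Y*_k > ln x_q)
rhs = np.exp(mu[k] + Sigma[k, k] / 2) * tail_star

print(lhs, rhs)  # the two estimates should agree closely
```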
Example 1. Log-normal distribution. Let $\mathbf{X}\sim LN_{n}(\boldsymbol{\mu},\Sigma )$ be a log-elliptical random vector of risks. Then, the density generator and the characteristic generator are given by $g_{n}(t)=e^{-t}$ and $\psi (t)=e^{-t}$, respectively. Then
$$\left(MTCE_{\mathbf{q}}(\mathbf{X})\right)_{k}=e^{\mu _{k}+\sigma _{k}^{2}/2}\,\frac{\bar{F}_{\mathbf{Y}_{k}^{\ast }}(\ln \mathbf{x}_{\mathbf{q}})}{\bar{F}_{\mathbf{X}}(\mathbf{x}_{\mathbf{q}})},$$
where $\mathbf{Y}_{k}^{\ast }\sim N_{n}(\boldsymbol{\mu}+\Sigma \mathbf{e}_{k},\Sigma )$, $\mathbf{e}_{k}$ being the $k$-th column of the identity matrix, and $\bar{F}_{\mathbf{Y}_{k}^{\ast }}$ is expressed through the tail function of the $n$-variate standard normal random vector. This result conforms with Valdez et al. [4], Equation (39).

Example 2. Log-Laplace distribution. We say that $\mathbf{X}$ is a multivariate log-Laplace random vector if its characteristic generator is $\psi (t)=(1+t)^{-1}$. The density generator is expressed through $K_{\nu }$, the modified Bessel function of the third kind, so that the density of the multivariate log-Laplace distribution is equal to
$$f_{\mathbf{X}}(\mathbf{x})=\frac{2}{(2\pi )^{n/2}\sqrt{|\Sigma |}\prod_{i=1}^{n}x_{i}}\left(\frac{q(\ln \mathbf{x})}{2}\right)^{\nu /2}K_{\nu }\!\left(\sqrt{2q(\ln \mathbf{x})}\right),$$
with $\nu =(2-n)/2$ and $q(\mathbf{y})=(\mathbf{y}-\boldsymbol{\mu})^{T}\Sigma ^{-1}(\mathbf{y}-\boldsymbol{\mu})$, which in the univariate case coincides with the log-Laplace distribution (Eltoft et al. [18], Equation (9)). The MTCE of the multivariate log-Laplace can then be calculated as follows
$$\left(MTCE_{\mathbf{q}}(\mathbf{X})\right)_{k}=\frac{e^{\mu _{k}}}{1-\sigma _{k}^{2}/2}\,\frac{\bar{F}_{\mathbf{Y}_{k}^{\ast }}(\ln \mathbf{x}_{\mathbf{q}})}{\bar{F}_{\mathbf{X}}(\mathbf{x}_{\mathbf{q}})},\qquad \sigma _{k}^{2}<2.$$
Here the random vector $\mathbf{Y}_{k}^{\ast }$ has the pdf
$$f_{\mathbf{Y}_{k}^{\ast }}(\mathbf{y})=\left(1-\sigma _{k}^{2}/2\right)e^{y_{k}-\mu _{k}}f_{\mathbf{Y}}(\mathbf{y}).$$

Numerical Illustration
Let us now examine a random vector of five risks $\mathbf{X}=(X_{1},\dots ,X_{5})^{T}$ with the multivariate log-normal distribution $LN_{5}(\boldsymbol{\mu},\Sigma )$ for given $\boldsymbol{\mu}$ and $\Sigma$. The graph of the vector MTCE corresponding to the vector of levels $\mathbf{q}$ is presented in Figure 1 (solid line). One can see that, as expected, each component of the vector MTCE increases when the level of the corresponding observation increases, but the increase is not proportional. On the same graph we present the MTCE for a modified vector of levels (dot-dashed line), i.e., we changed only one of the quantile levels, and we can see that the whole system of components of the vector MTCE stretches out mostly in the direction of the increase.
4. Optimal Portfolio Selection with Log-Elliptical Distributions
In risk management, the optimal portfolio selection problem is one of the most profound and well-studied areas in both the theoretical and practical aspects of portfolio decision making (Castellano and Cerqueti [19], Li and Hoi [20], Shen et al. [21], and Fletcher [22]).
Modern portfolio theory (MPT), developed by Markowitz in 1952, provided the foundations of optimal portfolio theory based on the moments of the portfolio risk (see Markowitz, Elton, and Gruber [23], and Francis and Kim [24]). In MPT, the classical mean-variance (MV) functional is introduced and is given by
$$MV_{\lambda }(P)=E(P)+\lambda \,Var(P),$$
where $P=\boldsymbol{\pi}^{T}\mathbf{X}$ is the portfolio loss, $\boldsymbol{\pi}$ and $\mathbf{X}$ are $n\times 1$ vectors of weights and losses (equal to the minus portfolio returns), respectively, and $\boldsymbol{\pi}^{T}\mathbf{1}_{n}=1$, where $\mathbf{1}_{n}$ is the vector of $n$ ones. Here $E(P)=\boldsymbol{\pi}^{T}E(\mathbf{X})$, where $E(\mathbf{X})$ is the vector of expected losses, and $Var(P)=\boldsymbol{\pi}^{T}\Sigma \boldsymbol{\pi}$, where $\Sigma$ is the $n\times n$ covariance matrix of $\mathbf{X}$, the vector of portfolio losses. Here $\lambda \geq 0$ is the risk aversion parameter, which can be interpreted as the parameter of a trade-off between the expected loss, $E(P)$, and the variance, $Var(P)$, or the standard deviation, $SD(P)$, of the portfolio.
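The classical MV minimizer under the budget constraint follows from the Lagrangian stationarity condition; the helper `mv_weights` and the numerical values below are illustrative, not taken from the paper:

```python
import numpy as np

# Minimize  pi' m + lam * pi' S pi  subject to  pi' 1 = 1.
# Stationarity: m + 2 lam S pi = gamma 1  =>  pi = S^{-1}(gamma 1 - m)/(2 lam),
# with gamma chosen so that the weights sum to one.
def mv_weights(m, S, lam):
    ones = np.ones(len(m))
    S_inv = np.linalg.inv(S)
    gamma = (2 * lam + ones @ S_inv @ m) / (ones @ S_inv @ ones)
    return S_inv @ (gamma * ones - m) / (2 * lam)

# Illustrative expected losses and covariance matrix
m = np.array([0.02, 0.05, 0.03])
S = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.06]])
lam = 2.0
pi = mv_weights(m, S, lam)

print(pi, pi.sum())  # the weights sum to 1
```

At the optimum, the gradient $m+2\lambda S\boldsymbol{\pi}$ is a constant vector (a multiple of $\mathbf{1}_{n}$), which is an easy correctness check.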
Suppose that we have a system of $n$ risks of losses such that $\mathbf{X}\sim LE_{n}(\boldsymbol{\mu},\Sigma ,g_{n})$. Such risks, having LE distributions, describe pure risks, i.e., risks that can only bring losses, without the possibility of profit. We then aim at finding the optimal weights that would minimize such a portfolio of risks. Since the classical MV model only focuses on the mean and variance of the risk, it does not capture the skewness appearing in the LE distribution. To avoid this problem, we suggest taking the MTCE and MTCov measures instead of the expectation and the covariance matrix of the portfolio risk, i.e., $E(\mathbf{X})\rightarrow MTCE_{\mathbf{q}}(\mathbf{X})$ and $\Sigma \rightarrow MTCov_{\mathbf{q}}(\mathbf{X})$. Thus, we focus on the tails of the risks, leading to the following projection of the portfolio risk, $\left(P\mid \mathbf{X}>VaR_{\mathbf{q}}(\mathbf{X})\right)$, so
$$E\left(P\mid \mathbf{X}>VaR_{\mathbf{q}}(\mathbf{X})\right)=\boldsymbol{\pi}^{T}MTCE_{\mathbf{q}}(\mathbf{X})$$
and
$$Var\left(P\mid \mathbf{X}>VaR_{\mathbf{q}}(\mathbf{X})\right)=\boldsymbol{\pi}^{T}MTCov_{\mathbf{q}}(\mathbf{X})\boldsymbol{\pi}.$$
In that case, instead of the classical MV, we would have the following measure
$$TMV_{\mathbf{q},\lambda }(P)=\boldsymbol{\pi}^{T}MTCE_{\mathbf{q}}(\mathbf{X})+\lambda \,\boldsymbol{\pi}^{T}MTCov_{\mathbf{q}}(\mathbf{X})\boldsymbol{\pi},\qquad (23)$$
which both captures the skewness of the distribution and focuses on extreme loss events, much like the optimal portfolio problem with value-at-risk, $VaR_{q}$, and expected shortfall, $ES_{q}$.
Our goal is to minimize the measure (23) of the losses $\mathbf{X}$.

Theorem 3. The optimal solution of (23) subject to $\boldsymbol{\pi}^{T}\mathbf{1}_{n}=1$ is given by
$$\boldsymbol{\pi}^{\ast }=\frac{1}{2\lambda }MTCov_{\mathbf{q}}(\mathbf{X})^{-1}\left(\frac{2\lambda +\mathbf{1}_{n}^{T}MTCov_{\mathbf{q}}(\mathbf{X})^{-1}MTCE_{\mathbf{q}}(\mathbf{X})}{\mathbf{1}_{n}^{T}MTCov_{\mathbf{q}}(\mathbf{X})^{-1}\mathbf{1}_{n}}\,\mathbf{1}_{n}-MTCE_{\mathbf{q}}(\mathbf{X})\right).$$

Proof. The proof relies on the classical MV optimal solution, where $E(\mathbf{X})$ and $\Sigma$ are substituted with $MTCE_{\mathbf{q}}(\mathbf{X})$ and $MTCov_{\mathbf{q}}(\mathbf{X})$. See also Landsman, Makov and Shushi [25], Remark 3.1. □
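As a numerical sketch of this substitution, the closed-form MV weights can be computed from Monte Carlo estimates of $MTCE_{\mathbf{q}}$ and $MTCov_{\mathbf{q}}$; the three-dimensional log-normal parameters below are illustrative:

```python
import numpy as np

# Tail mean-variance portfolio sketch: substitute Monte Carlo estimates of
# MTCE_q and MTCov_q (for a log-normal risk vector) into the classical MV
# solution  pi = S^{-1}(gamma 1 - m)/(2 lam),  gamma matching pi' 1 = 1.
rng = np.random.default_rng(4)
mu = np.array([0.0, 0.1, 0.05])
Sigma = np.array([[0.20, 0.05, 0.02],
                  [0.05, 0.15, 0.04],
                  [0.02, 0.04, 0.10]])
q = 0.90

X = np.exp(rng.multivariate_normal(mu, Sigma, size=1_000_000))
x_q = np.quantile(X, q, axis=0)             # component-wise VaR_q
tail = np.all(X > x_q, axis=1)              # joint exceedance event
mtce = X[tail].mean(axis=0)                 # estimated MTCE_q(X)
mtcov = np.cov(X[tail].T)                   # estimated MTCov_q(X)

lam = 2.0
ones = np.ones(3)
S_inv = np.linalg.inv(mtcov)
gamma = (2 * lam + ones @ S_inv @ mtce) / (ones @ S_inv @ ones)
pi = S_inv @ (gamma * ones - mtce) / (2 * lam)

print(pi, pi.sum())  # the tail-optimal weights sum to 1
```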
Another well-known measure is the mean-standard deviation (MSD) measure, which takes the form
$$MSD_{\mathbf{q},\lambda }(P)=\boldsymbol{\pi}^{T}MTCE_{\mathbf{q}}(\mathbf{X})+\lambda \sqrt{\boldsymbol{\pi}^{T}MTCov_{\mathbf{q}}(\mathbf{X})\boldsymbol{\pi}},$$
where $\lambda >0$. In that case, we also have an explicit solution for the OPS problem.
Theorem 4. The solution of the minimization of the mean-standard deviation functional subject to $\boldsymbol{\pi}^{T}\mathbf{1}_{n}=1$ admits an explicit closed form.

Proof. The proof of the Theorem straightforwardly follows from Landsman, Makov and Shushi [25], Table 3, where instead of taking $E(\mathbf{X})$ and $\Sigma$, we take $MTCE_{\mathbf{q}}(\mathbf{X})$ and $MTCov_{\mathbf{q}}(\mathbf{X})$, respectively. □
Concluding this section, we point out that most of the optimal results listed in Table 3 of [25] can be reproduced here and extended to the context of $MTCE_{\mathbf{q}}(\mathbf{X})$ and $MTCov_{\mathbf{q}}(\mathbf{X})$.