Abstract
We obtain covariance and Choquet integral representations for several entropies and give upper bounds for those entropies. The coherence properties of those entropies are discussed. Furthermore, we propose a tail-based cumulative residual Tsallis entropy of order α (TCRTE) and a tail-based right-tail deviation (TRTD); we then define a shortfall of cumulative residual Tsallis entropy (CRTES) and a shortfall of right-tail deviation entropy (RTDS) and provide some equivalent results. As illustrative examples, the CRTESs of the elliptical, inverse Gaussian, gamma and beta distributions are simulated.
MSC:
91G70
1. Introduction
A risk measure is a functional ρ that maps from a convex cone 𝒳 of risks on a probability space (Ω, ℱ, P) to ℝ. A good risk measure should satisfy some desirable properties (see, e.g., [1,2,3]). Several standard properties for general risk measures are as follows:
- (A)
- Law invariance: For V, W ∈ 𝒳, if V and W have the same distribution, then ρ(V) = ρ(W);
- (A1)
- Monotonicity: For V, W ∈ 𝒳, if V ≤ W, then ρ(V) ≤ ρ(W);
- (A2)
- Translation invariance: For V ∈ 𝒳 and any c ∈ ℝ, we have ρ(V + c) = ρ(V) + c;
- (A3)
- Positive homogeneity: For V ∈ 𝒳 and any λ > 0, we have ρ(λV) = λρ(V);
- (A4)
- Subadditivity: For V, W ∈ 𝒳, we have ρ(V + W) ≤ ρ(V) + ρ(W);
- (A5)
- Comonotonic additivity: If V and W are comonotonic, then ρ(V + W) = ρ(V) + ρ(W).
To estimate and identify a risk measure, law invariance (A) is an essential requirement. When a risk measure ρ further satisfies (A1)–(A4), then ρ is said to be coherent. It is well known that value-at-risk (VaR) and expected shortfall (ES) are the two extremely important risk measures used in banking and insurance. The VaR and ES at confidence level p ∈ (0, 1) for a random variable (r.v.) V with cumulative distribution function (cdf) F_V are defined as
VaR_p(V) = inf{x ∈ ℝ : F_V(x) ≥ p}
and
ES_p(V) = (1/(1 − p)) ∫_p^1 VaR_u(V) du,
respectively. If F_V is continuous, then ES equals the tail conditional expectation (TCE), which is written as
TCE_p(V) = E[V | V > VaR_p(V)].
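As a concrete numerical illustration (our own sketch, not from the paper), the left-quantile VaR and the tail-average form of ES can be estimated from a sample; for simplicity we assume n(1 − p) is an integer:

```python
import math

def empirical_var(sample, p):
    """Empirical VaR_p: the left p-quantile inf{x : F_n(x) >= p}."""
    xs = sorted(sample)
    k = math.ceil(len(xs) * p)      # smallest k with k/n >= p
    return xs[k - 1]

def empirical_es(sample, p):
    """Empirical ES_p, assuming n*(1-p) is an integer: the average of
    the largest n*(1-p) observations, i.e. the tail beyond VaR_p."""
    xs = sorted(sample)
    k = round(len(xs) * p)
    return sum(xs[k:]) / (len(xs) - k)
```

For the sample 1, 2, …, 10 and p = 0.8, this gives VaR = 8 and ES = 9.5, the mean of the two largest observations.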
Some risk measures have other desirable properties; for example, (B1) Standardization: for any constant c, ρ(c) = 0; (B2) Location invariance: For V ∈ 𝒳 and any c ∈ ℝ, we have ρ(V + c) = ρ(V). If a functional ρ satisfies law invariance (A), (B1) and (B2), we say that ρ is a measure of variability. If ρ further satisfies properties (A3) and (A4), then we say ρ is a coherent measure of variability.
To capture the variability of the risk V beyond the quantile VaR_p(V), Furman and Landsman [4] proposed the tail standard deviation (TSD) risk measure
TSD_{p,λ}(V) = TCE_p(V) + λ σ_p(V),
where λ ≥ 0 denotes the loading parameter and the tail standard deviation measure σ_p is defined as
σ_p(V) = (TV_p(V))^{1/2}.
Here, TV_p(V) = Var(V | V > VaR_p(V)) is the tail variance of V. As its extension, Furman et al. [5] introduced the Gini shortfall (GS), which is defined by
GS_{p,λ}(V) = ES_p(V) + λ TGini_p(V),
where TGini_p(V) is the tail-Gini functional. Recently, Hu and Chen [6] further proposed a shortfall of cumulative residual entropy (CRES), defined by
CRES_{p,λ}(V) = ES_p(V) + λ TCRE_p(V),
where TCRE_p(V) is the tail-based CRE of V.
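For intuition, the tail functionals above can be estimated empirically. The helper below (our own construction, assuming n·q is an integer) returns the tail conditional expectation, the tail variance TV_q(V) = Var(V | V > VaR_q(V)), and the resulting λ-loaded TSD:

```python
def tail_stats(sample, q, lam=1.0):
    """Empirical TCE_q, tail variance TV_q, and TSD_q = TCE_q + lam*sqrt(TV_q),
    all computed from the largest n*(1-q) observations."""
    xs = sorted(sample)
    k = round(len(xs) * q)
    tail = xs[k:]                                    # observations beyond VaR_q
    tce = sum(tail) / len(tail)
    tv = sum((x - tce) ** 2 for x in tail) / len(tail)
    return tce, tv, tce + lam * tv ** 0.5
```

For the sample 1, …, 10 at q = 0.8 the tail is {9, 10}, so TCE = 9.5, TV = 0.25 and, with λ = 1, TSD = 10.0.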
Inspired by those works, our main motivation is to find coherent shortfalls of entropy that generalize the TSD, GS and CRES. These shortfalls of entropy can be used to capture the variability of a financial position; for specific financial applications, we refer to [5,7,8]. To this aim, we give covariance and Choquet integral representations for some entropies and provide upper bounds of those entropies. These representations not only make it easier to judge coherence, but also facilitate extending these results to two- and multi-dimensional cases in the future. Furthermore, we define the TCRTE and TRTD, and propose the CRTES and RTDS. As illustrative examples, the CRTESs of the elliptical, inverse Gaussian, gamma and beta distributions are simulated.
The remainder of this paper is structured as follows. Section 2 provides the covariance and Choquet integral representations for some entropies. Section 3 introduces some tail-based entropies. In Section 4, we propose two shortfalls of entropy and give some equivalent results. The CRTESs of some parametric distributions are presented in Section 5. Finally, Section 6 concludes the paper and outlines two possible directions for future research.
Throughout the paper, let be an atomless probability space. For a random variable V with cumulative distribution function (cdf) , we use to denote any uniform random variable such that holds almost everywhere. Let be the set of all random variables on with a finite kth-moment. Denote by the set of all non-negative random variables. denotes the first derivative of g. Notation , and is the indicator function of set A.
2. Covariance and Choquet Integral Representations for Some Entropies
In this section, we derive covariance and Choquet integral representations for some entropies, which include initial, weighted and dynamic entropies. In addition, the upper bounds of these entropies are established.
Given g defined in with , weighted function and a r.v. X with cdf , the initial and weighted entropies (forms) are defined as, respectively,
and
Further, given two r.v.s and , the dynamic entropies (forms) are defined as
and
To derive the covariance representation of entropy, we first introduce the following lemma.
Lemma 1
([5]). Let g be an almost everywhere (a.e.) differentiable function in with (a.e.) and . Suppose that and . Then, we have
Further,
Proof.
Since g is almost everywhere differentiable in , we let . Then,
Note that . Therefore,
Further, we use correlation coefficient , , and the last inequality is immediately obtained. □
2.1. Initial Entropy
To find the covariance representation of initial entropy, we give the following theorem.
Theorem 1.
Let g be a continuous and almost everywhere differentiable function in with (a.e.) and . Further, there exists a unique minimum (or maximum) point such that g is decreasing on and increasing on (or there exists such that g is increasing on and decreasing on ). Suppose that and . Then, we have
Further,
Proof.
Since is almost everywhere differentiable in , and , there exists a unique minimum (or maximum) point . Hence, we can use g to induce Lebesgue–Stieltjes measures on the Borel-measurable spaces and , respectively. Denote ; we have
where we have used Fubini’s theorem in the third equality. Further, using Lemma 1, we obtain
□
Remark 1.
Note that the function g is of bounded variation, since g admits a representation as the sum of an increasing function and a decreasing function. Similar results can be found in Lemma 3 of [9]. However, our result differs from Lemma 3 of [9]: the integration interval and the integrand are different, with one integrand being a function of F and the other a function of the survival function. Hence, our result cannot be obtained from theirs.
Corollary 1.
Let g be a concave function in with (a.e.) and . Suppose that and . Then, we have
Hence,
2.2. Weighted Entropy
Weighted entropy extends initial entropy by attaching a weight function to it. We have the corresponding theorem as follows.
Theorem 2.
Let g be a continuous and almost everywhere differentiable function in with (a.e.) and . Further, there exists a unique minimum (or maximum) point and . Suppose that and . Then, we have
Further,
Proof.
Similar to the proof of Theorem 1, we have
where we have used Fubini’s theorem in the third equality.
Note that
and .
Therefore, we obtain
ending the proof. □
Corollary 2.
Let in Theorem 2; we have
Further,
Corollary 3.
Let g be a concave function in with (a.e.) and . Suppose that and . Then, we have
Further,
2.3. Dynamic (Weighted) Entropy
Dynamic entropy is another generalization of initial entropy, which restricts attention to a chosen range (upper tail or lower tail).
The survival function of a random variable can be represented as
Therefore, for any ,
Theorem 3.
Let g be a continuous and almost everywhere differentiable function in with (a.e.) and . Further, there is a unique minimum (or maximum) point and . Suppose that and . Then, we have
where for , and for .
Proof.
Using Equation (9) and the same argument of Theorem 2, we can easily obtain Theorem 3. □
Corollary 4.
Let in Theorem 3; we have
where for , and for .
Corollary 5.
Let g be a concave function in with (a.e.) and . Suppose that and . Then, we have
where for , and for .
The distribution function of a random variable can be written as
Therefore, for any ,
Theorem 4.
Let g be a continuous and almost everywhere differentiable function in with (a.e.) and . Further, there is a unique minimum (or maximum) point and . Suppose that and . Then, we have
where for , and for .
Proof.
Using Equation (13), Theorem 2 and translation , we can obtain Theorem 4. □
Corollary 6.
Let in Theorem 4; we have
where for , and for .
Corollary 7.
Let g be a concave function in with (a.e.) and . Suppose that and . Then, we have
where for , and for .
2.4. Examples
Example 1.
Let , in Corollary 1, Equation (6) denoted by ; we have
Further,
In particular, when , the above measure denotes the cumulative Tsallis past entropy introduced by Calì et al. [10]. Note that when , it reduces to cumulative entropy (), defined as (see [11])
Further,
where .
In particular, when , , we obtain
Further,
which is Gini mean semi-difference, denoted by ; for details, see [6,12].
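The Gini mean semi-difference E|X₁ − X₂|/2, with X₁ and X₂ independent copies of X, can be evaluated exactly under an empirical distribution; this O(n²) helper is our own illustration, not from the paper:

```python
def gini_semi_difference(sample):
    """Gini mean semi-difference E|X1 - X2| / 2 for X1, X2 i.i.d. copies,
    computed exactly under the empirical distribution of `sample`."""
    n = len(sample)
    return sum(abs(x - y) for x in sample for y in sample) / (2 * n * n)
```

For the two-point sample {0, 1}, the four equally likely pairs give E|X₁ − X₂| = 1/2, so the semi-difference is 0.25.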
Example 2.
Let , in Corollary 1, Equation (6) denoted by ; we have
Further,
In particular, when , the above measure is the cumulative residual Tsallis entropy of order introduced by Rajesh and Sunoj [13]. Note that when , it reduces to cumulative entropy (), defined as (see [14])
Further,
where .
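Reading the definition of Rajesh and Sunoj [13] as CRT_α(X) = (1/(α − 1)) ∫₀^∞ (S(x) − S(x)^α) dx for a nonnegative X with survival function S and α ≠ 1, a midpoint-rule sketch (the truncation bound and step count are our choices) is:

```python
def crt(survival, alpha, upper=50.0, steps=200_000):
    """CRT_alpha = 1/(alpha-1) * integral of S(x) - S(x)**alpha over [0, upper],
    approximated by the midpoint rule (assumes S ~ 0 beyond `upper`)."""
    h = upper / steps
    total = 0.0
    for i in range(steps):
        s = survival((i + 0.5) * h)
        total += s - s ** alpha
    return total * h / (alpha - 1)
```

For a unit-rate exponential, S(x) = e^{−x} and the integral evaluates in closed form to 1/α, e.g. 0.5 at α = 2.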
Example 3.
Let , in Corollary 1, Equation (6) denoted by , so that . Then, we have
Further,
In particular, when , the above measure is called the fractional cumulative residual entropy of X by Xiong et al. [15].
Example 4.
Let in Corollary 1, Equation (6) denoted by ; we have
In particular, when , the above measure is the right-tail deviation introduced by Wang [16].
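Wang's right-tail deviation [16] of a nonnegative risk can be written as ∫₀^∞ (√S(x) − S(x)) dx, the gap between the square-root-distorted mean and the ordinary mean; a numerical sketch (the truncation bound is our choice):

```python
def right_tail_deviation(survival, upper=80.0, steps=200_000):
    """Integral of sqrt(S(x)) - S(x) over [0, upper] by the midpoint rule."""
    h = upper / steps
    total = 0.0
    for i in range(steps):
        s = survival((i + 0.5) * h)
        total += s ** 0.5 - s
    return total * h
```

For a unit-rate exponential the value is ∫(e^{−x/2} − e^{−x}) dx = 2 − 1 = 1.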
Example 5.
Let , in Corollary 1, Equation (6) denoted by ; we have
Further,
In particular, when , the above measure is the extended Gini coefficient (see [7]). As a special case, when , the extended Gini coefficient becomes the simple Gini (see [5]).
Example 6.
Let , in Theorem 1, Equation (5) denoted by , so that . Then, we have
Further,
In particular, when , the above measure is called the fractional generalized cumulative residual entropy of X by Di Crescenzo et al. [17].
In particular, if is a positive integer, say , in this case, . Then, identifies with the generalized cumulative residual entropy () that has been introduced by Psarrakos and Navarro [18], i.e.,
Further,
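The GCRE of Psarrakos and Navarro [18] is GCRE_n(X) = (1/n!) ∫₀^∞ S(x)(−ln S(x))ⁿ dx. The numerical sketch below (truncation choices are ours) recovers the known fact that a unit-rate exponential has GCRE_n = Γ(n + 1)/n! = 1 for every n:

```python
import math

def gcre(survival, n, upper=80.0, steps=200_000):
    """GCRE_n = (1/n!) * integral of S(x) * (-log S(x))**n over [0, upper],
    approximated by the midpoint rule (terms with S = 0 are skipped)."""
    h = upper / steps
    total = 0.0
    for i in range(steps):
        s = survival((i + 0.5) * h)
        if s > 0.0:
            total += s * (-math.log(s)) ** n
    return total * h / math.factorial(n)
```

For S(x) = e^{−x} the integrand is xⁿe^{−x}/n!, whose integral is 1 regardless of n.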
Example 7.
Let , in Theorem 1, Equation (5) denoted by , so that . Then, we have
Further,
In particular, when , the above measure is called the fractional generalized cumulative entropy of X by Di Crescenzo et al. [17].
In particular, if is a positive integer, say , in this case, . Then, identifies with the generalized cumulative entropy () that has been introduced by Kayal [19] (see also [20]), i.e.,
Further,
Example 8.
Let , in Corollary 3, Equation (8) denoted by ; we have
Further,
In particular, when , the above measure is the weighted cumulative Tsallis entropy of order introduced by Chakraborty and Pradhan [21]. Note that when , it reduces to weighted cumulative entropy (), defined as (see [22,23])
Further,
where .
In particular, when , , we obtain
Further,
Example 9.
Let , in Corollary 3, Equation (8) denoted by ; we have
Further,
In particular, when , the above measure is the weighted cumulative residual Tsallis entropy of order introduced by Chakraborty and Pradhan [21]. Note that when , it reduces to weighted cumulative residual entropy (), defined as (see [23,24])
Further,
where .
Example 10.
Let , in Theorem 2, Equation (7) denoted by , so that . Then, we have
Hence,
In particular, when , the above measure is the weighted generalized cumulative residual entropy introduced by Tahmasebi [25] (see also [26]). As a special case, when , reduces to a shift-dependent GCRE of order n () defined by Kayal [27], i.e.,
Further,
In particular, when , , the reduces to weighted cumulative residual entropy with weight function () defined by Suhov and Yasaei Sekeh [28], i.e.,
Further,
They also define weighted cumulative entropy with weight function (; in this case, ):
Further,
Example 11.
In particular, when , the above measure is dynamic cumulative entropy defined by Asadi and Zohrevand [29].
Example 12.
Example 13.
In particular, when , the above measure is the dynamic cumulative residual Tsallis entropy of order introduced by Rajesh and Sunoj [13].
In particular, when and , we obtain (dynamic Gini mean semi-difference)
where for , and for .
Example 14.
Let , in Corollary 5; we have
where for , and for .
Example 15.
Let , in Corollary 5; we have
where for , and for .
Example 16.
In particular, when , the above measure denotes the dynamic generalized cumulative residual entropy introduced by Psarrakos and Navarro [18].
Example 17.
In particular, when , the above measure is the dynamic WGCRE introduced by Tahmasebi [25].
Particularly, when and , the reduces to the dynamic weighted cumulative residual entropy () defined as
where for , and for . As a special case, when , the above measure is introduced by Mirali and Baratpour [30]. They also defined , , i.e.,
where for , and for .
Example 18.
In particular, when and , the above measure is a generalization of the dynamic cumulative Tsallis entropy introduced by Calì et al. [10]. Note that when , it reduces to (a generalization of) cumulative past entropy, defined as (see, e.g., [31])
where for , and for .
Example 19.
In particular, when , the above measure is the dynamic generalized cumulative entropy introduced by Kayal [19].
Example 20.
In particular, when and , is reduced as (see [30])
where for , and for .
2.5. Discussion
Note that the above entropy risk measures satisfy (B1) standardization by their covariance representations. For any , using , we obtain that initial entropy and simple dynamic entropy risk measures satisfy (B2) location invariance, but weighted entropy risk measures do not satisfy (B2). Therefore, initial entropy and simple dynamic entropy risk measures are measures of variability.
For any , using , we obtain that initial entropy and simple dynamic entropy risk measures satisfy (B3) positive homogeneity.
When g is of finite variation and g(0) = 0, the signed Choquet integral is defined by
Furthermore, when g is absolutely continuous, with , then Equation (21) can be expressed as
From (21), we can see that the signed Choquet integral satisfies the comonotonic additivity property ([32]). Thus, initial entropy and simple dynamic entropy risk measures are comonotonically additive measures of variability.
The functional defined by Equation (20) is subadditive if and only if g is convex (e.g., [33,34]). Hence, the initial entropy risk measures shown in Examples 1–5 and (17)–(19) satisfy (A4) subadditivity, and are therefore comonotonically additive and coherent measures of variability.
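For a nonnegative discrete risk, the Choquet integral with a distortion g satisfying g(0) = 0 reduces to a telescoping sum over the sorted sample. The sketch below (our own construction, not from the paper) also lets one verify comonotonic additivity numerically for a comonotonic pair such as V and W = V²:

```python
def choquet(sample, g):
    """Choquet integral of a nonnegative sample: integral of g(S_n(x)) dx,
    where S_n is the empirical survival function (assumes g(0) = 0)."""
    xs = sorted(sample)
    n = len(xs)
    total, prev = 0.0, 0.0
    for i, x in enumerate(xs):
        total += (x - prev) * g((n - i) / n)   # S_n = (n - i)/n on [prev, x)
        prev = x
    return total
```

With g(t) = t² and the comonotonic vectors V = (1, 2, 3) and W = (1, 4, 9) (both increasing in the same scenarios), choquet gives 14/9 and 26/9, and their sum 40/9 equals the integral of V + W = (2, 6, 12), illustrating comonotonic additivity.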
These initial entropy risk measures can be applied to the predictability of the failure time of a system (see [11,14]). The weighted entropy risk measures are shift-dependent measures of uncertainty, and can be applied to some practical situations of reliability and neurobiology (see [35,36]). The dynamic entropy risk measures can be used to capture effects of the age t of an individual or an item under study on the information about the residual lifetime (see [29]).
The initial, weighted and dynamic entropy risk measures are closely related, as shown in Figure 1:
Figure 1.
The relationship between three entropy risk measures.
From a risk measure point of view, the initial entropy risk measures can capture the variability of a financial position as a whole. The dynamic entropy risk measures can depict the variability of a financial position focused on feasible choices of the ranges (upper tail or lower tail).
In finance and risk management, Markowitz's mean-variance framework plays a vital role in modern portfolio theory. Since the initial entropy and simple dynamic entropy risk measures are measures of variability, we can replace variance with either of them. The initial entropy risk measure captures ordinary (general) risk and is favored by investors, for instance in a firm's ordinary business and in protecting shareholders' interests. The dynamic entropy risk measure depicts the tails of risks (extreme events), so as to reduce (or avoid) the impact of extreme events, and is favored by regulators and decision makers (see [37]). For example, we give the CRTES for different distributions in Section 5, and also use the R software to compute it, as shown in Figures 2–8. When the order tends to 1, the CRTES reduces to the measure given by Hu and Chen [6], and we can observe the difference between our results and theirs through Figures 2–8. Other potential applications of these entropy risk measures remain to be explored in the future.
Figure 2. Computed entropy measures for the normal distribution.
Figure 3. Computed entropy measures for the Student-t distribution.
Figure 4. Computed entropy measures for the logistic distribution.
Figure 5. Computed entropy measures for the Laplace distribution.
Figure 6. Computed entropy measures for the inverse Gaussian distribution.
Figure 7. Computed entropy measures for the gamma distribution.
Figure 8. Computed entropy measures for the beta distribution.
3. Tail-Based Entropies
Let in Example 13; we obtain tail-based cumulative residual Tsallis entropy of order :
where for , and for .
Let in Example 12; in this case, , and we obtain tail-based fractional cumulative residual entropy:
where for , and for .
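Under our reading of the TCRTE (the CRT integral applied to the conditional tail survival S(x)/(1 − q) on [VaR_q(X), ∞); an assumption on our part), a numerical sketch is:

```python
def tail_crt(survival, alpha, var_q, q, upper=80.0, steps=200_000):
    """Tail-based CRT of order alpha: 1/(alpha-1) times the integral of
    S_q(x) - S_q(x)**alpha over [var_q, upper], with S_q = S/(1-q)."""
    h = (upper - var_q) / steps
    total = 0.0
    for i in range(steps):
        s = survival(var_q + (i + 0.5) * h) / (1.0 - q)
        total += s - s ** alpha
    return total * h / (alpha - 1)
```

Because the unit-rate exponential is memoryless, its conditional tail is again unit exponential, so the tail CRT equals 1/α for every q, e.g. 0.5 at α = 2 with VaR_{0.9} = ln 10.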
Remark 2.
Let in Example 14; we obtain tail-based right-tail deviation:
where for , and for .
Remark 3.
Let in (23); we observe that
Let in Example 15; we obtain the tail-based extended Gini coefficient (see [7]):
where for , and for .
Remark 4.
4. Shortfalls of Entropy
We now introduce two shortfall-of-entropy risk measures, the CRTES and the RTDS, which are linear combinations of ES with the TCRTE and the TRTD, respectively:
where is also a confidence level, and is a loading parameter.
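Assuming the CRTES shares the linear ES + λ·(tail entropy) form of the Gini shortfall and the CRE shortfall, a closed-form sketch for a unit-rate exponential risk follows (the closed forms ES_q = 1 − ln(1 − q), from memorylessness, and tail CRT = 1/α are our calculations):

```python
import math

def crtes_exponential(q, lam, alpha):
    """CRTES_{q,lam} = ES_q + lam * TCRTE_q for a unit-rate exponential,
    using the assumed closed forms ES_q = 1 - log(1-q) and TCRTE_q = 1/alpha."""
    return 1.0 - math.log(1.0 - q) + lam / alpha
```

For q = 0.9, λ = 1 and α = 2 this gives 1 + ln 10 + 0.5 ≈ 3.8026.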
Theorem 5.
Assume that , and the convex cone .
- (1)
- is represented aswhere
- (2)
- satisfies the translation invariance, positive homogeneity and comonotonic additivity properties.
- (3)
- The following statements are equivalent: (i) satisfies monotonicity; (ii) satisfies subadditivity; (iii) is consistent with the increasing convex order; (iv) is a coherent risk measure; (v) .
Proof.
- (1)
- (2)
- From (23), the positive homogeneity and comonotonic additivity properties of are obtained. Further, since , the translation invariance of follows.
- (3)
- Note that for all , and that is an increasing function on ; therefore, is non-negative if and only if . Furthermore, is non-decreasing if and only if . Using Lemma 4.2 of [5], one obtains that satisfies monotonicity if and only if for all , and that is subadditive if and only if is increasing in . Therefore, .
Next, by Theorem 2.1 of [38] and (28), we know that if is increasing in , then follows; that is to say, . Then, .
Furthermore, by Corollary 4.65 of [3], a law-invariant coherent risk measure is consistent with the increasing convex order. This reveals that ; then . Conversely, since satisfies the translation invariance and positive homogeneity properties, and , we have . Hence, . □
Remark 5.
Let in Theorem 5; we obtain tail-Gini shortfall (see [6,39]), which is different from the results in [5]. In addition, let in Theorem 5; we obtain the CRE shortfall in [6].
Theorem 6.
Assume that , and the convex cone .
- (1)
- is represented as:where
- (2)
- satisfies the translation invariance, positive homogeneity and comonotonic additivity properties.
- (3)
- The following statements are equivalent: (i) satisfies monotonicity; (ii) satisfies subadditivity; (iii) is consistent with the increasing convex order; (iv) is a coherent risk measure; (v) .
Proof.
Let in Theorem 5; combining with (25), we obtain the desired results. □
Theorem 7.
Assume that and are independent. Then,
Furthermore,
Proof.
Theorem 8.
Assume that and are independent. Then,
Furthermore,
Proof.
Let in Theorem 7; combining with (25), we obtain the desired results. □
5. CRTESs for Some Distributions
5.1. Elliptical Distributions
Consider an elliptical random variable . If the probability density function (pdf) of X exists, its form will be (see [40])
where and are location and scale parameters, respectively. Moreover, , , is the density generator of X, and is denoted by . The density generator satisfies the condition
and the normalizing constant is given by
Cumulative generator and normalizing constant are, respectively, defined as follows:
and
Landsman and Valdez [40] proved that
where .
Now, several important cases, including normal, Student-t, logistic and Laplace distributions, are given as follows.
Example 21.
(Normal distribution) Let . In this case, the density generators are written as
and the normalization constants are given by
Then,
where .
Without loss of generality (for the convenience of simulation), let ; by Equations (29), (30) and (34), we can use the R software to compute and for , and the results are shown in Figure 2.
From Figure 2a, we find that when is fixed, is decreasing in p; moreover, when p is fixed, is also decreasing in . As we can see in Figure 2b, when is fixed, is increasing in p, while it is decreasing in when p is fixed. In Figure 2c, we observe that when is fixed, is increasing in p; moreover, when p is fixed, is also increasing in .
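The normal case admits the classical closed form TCE_q = μ + σ φ(z_q)/(1 − q) with z_q = Φ⁻¹(q) (Landsman and Valdez [40]); a standard-library sketch of ours:

```python
import math
from statistics import NormalDist

def normal_tce(mu, sigma, q):
    """TCE_q for N(mu, sigma^2): mu + sigma * phi(z_q) / (1 - q),
    where phi is the standard normal density and z_q its q-quantile."""
    z = NormalDist().inv_cdf(q)
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return mu + sigma * phi / (1.0 - q)
```

For instance, normal_tce(0, 1, 0.975) ≈ 2.3378, the familiar 97.5% expected shortfall of a standard normal; the location-scale structure mu + sigma·TCE(0, 1) is visible directly in the formula.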
Example 22.
(Student-t distribution). Let . In this case, the density generators are written as
The normalization constants are given by
where and denote gamma and beta functions, respectively. Then,
where .
Let ; by Equations (29), (30) and (35), we can use the R software to compute and for , and the results are shown in Figure 3.
From Figure 3a, we find that the degrees of freedom m have a great impact on . It is decreasing in m. When m is small, it is increasing in p, while it becomes decreasing in p once m exceeds a threshold. From Figure 3b, we find that when is fixed, is increasing in p; however, when p is fixed, is decreasing in . In Figure 3c, we observe that when is fixed, is increasing in p; however, when p is fixed, is decreasing in . From Figure 3d, we find that when is fixed, is increasing in p; moreover, when p is fixed, is also increasing in .
Example 23.
(Logistic distribution). Let . In this case, the density generators are written as
The normalization constants are given by
Then,
where .
Let ; by Equations (29), (30) and (36), we can use the R software to compute and for , and the results are shown in Figure 4.
It is seen from Figure 4a that has little impact on the values of . For fixed p, is decreasing in . From Figure 4b, we observe that when is fixed, is increasing in p; however, when p is fixed, is decreasing in . In Figure 4c, we find that when is fixed, is increasing in p; moreover, when p is fixed, is also increasing in .
Example 24.
(Laplace distribution). Let . In this case, the density generators are written as
and
The corresponding normalization constants are given by
Then,
where .
Let ; by Equations (29), (30) and (37), we can use the R software to compute and for , and the results are shown in Figure 5.
It is seen from Figure 5a that p has almost no impact on : for fixed , it is almost constant in p. However, has a great impact on : for fixed p, is decreasing in . In Figure 5b, we observe that when is fixed, is increasing in p; however, when p is fixed, is decreasing in . From Figure 5c, we find that when is fixed, is increasing in p; moreover, when p is fixed, is also increasing in .
5.2. Inverse Gaussian, Gamma and Beta Distributions
Example 25.
(Inverse Gaussian distribution) An inverse Gaussian random variable , with parameters and , has its probability density function (pdf) as
From Example 4.3 of Landsman and Valdez [41], we can obtain
where , , and is the pth quantile of X.
Let ; by Equations (29), (30) and (38), we can use the R software to compute and for , and the results are shown in Figure 6.
Example 26.
(Gamma distribution) A random variable , with parameters and , follows a gamma distribution if its pdf is
Landsman and Valdez [41] showed that
where is the tail distribution function of , and is the pth quantile of X.
Let ; by Equations (29), (30) and (39), we can use the R software to compute and for , and the results are shown in Figure 7.
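A simulation cross-check of the gamma tail conditional expectation (our own Monte Carlo sketch; by our calculation, the closed form TCE_p = aθ·S_{a+1}(x_p)/(1 − p) for Gamma(shape a, scale θ) gives about 5.09 for a = 2, θ = 1, p = 0.9):

```python
import random

def gamma_tce_mc(shape, scale, q, n=200_000, seed=7):
    """Monte Carlo TCE_q for a Gamma(shape, scale) risk: the mean of the
    largest n*(1-q) simulated observations (seeded for reproducibility)."""
    rng = random.Random(seed)
    xs = sorted(rng.gammavariate(shape, scale) for _ in range(n))
    k = round(n * q)
    return sum(xs[k:]) / (n - k)
```

With 200,000 draws the estimate should land close to the closed-form value; the standard error of the tail mean here is well under 0.02.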
Example 27.
(Beta distribution) A random variable , with parameters and , follows a beta distribution if its pdf is
We can obtain
where is the tail distribution function of , and is the pth quantile of X.
6. Concluding Remarks
This paper has derived covariance and Choquet integral representations for some entropies, and has proposed the shortfalls of entropy CRTES and RTDS. In particular, the CRTESs of the elliptical, inverse Gaussian, gamma and beta distributions are computed. Furthermore, Hou and Wang [42] generalized the tail-Gini functional of a random variable to the case of a two-dimensional random vector, and Sun et al. [8] extended the TCRE to two risks (a random vector). In the future, we will try to extend the TCRTE and TRTD of a random variable to a two-dimensional random vector.
Author Contributions
Conceptualization, B.Z.; methodology, B.Z. and C.Y.; investigation, B.Z.; writing—original draft, B.Z.; writing—review and editing, C.Y.; software, B.Z. All authors have read and agreed to the published version of the manuscript.
Funding
The research was supported by the National Natural Science Foundation of China (No. 12071251, 12301605).
Institutional Review Board Statement
Not applicable.
Data Availability Statement
Not applicable.
Acknowledgments
The authors thank two anonymous reviewers and the editor for their helpful comments and suggestions, which have led to the improvement of this article.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Artzner, P.; Delbaen, F.; Eber, J.M.; Heath, D. Coherent measures of risk. Math. Financ. 1999, 9, 203–228. [Google Scholar] [CrossRef]
- Delbaen, F. Monetary Utility Functions; Osaka University Press: Osaka, Japan, 2012. [Google Scholar]
- Föllmer, H.; Schied, A. Stochastic Finance: An Introduction in Discrete Time, 3rd ed.; Walter de Gruyter: Berlin, Germany, 2011. [Google Scholar]
- Furman, E.; Landsman, Z. Tail variance premium with applications for elliptical portfolio of risks. Astin Bull. 2006, 36, 433–462. [Google Scholar] [CrossRef]
- Furman, E.; Wang, R.; Zitikis, R. Gini-type measures of risk and variability: Gini shortfall, capital allocations, and heavy-tailed risks. J. Bank. Financ. 2017, 83, 70–84. [Google Scholar] [CrossRef]
- Hu, T.; Chen, O. On a family of coherent measures of variability. Insur. Math. Econ. 2020, 95, 173–182. [Google Scholar] [CrossRef]
- Berkhouch, M.; Lakhnatia, G.; Righi, M.B. Extended Gini-type measures of risk and variability. Appl. Math. Financ. 2018, 25, 295–314. [Google Scholar] [CrossRef]
- Sun, H.; Chen, Y.; Hu, T. Statistical inference for tail-based cumulative residual entropy. Insur. Math. Econ. 2022, 103, 66–95. [Google Scholar] [CrossRef]
- Wang, R.; Wei, Y.; Willmot, G.E. Characterization, robustness, and aggregation of signed Choquet integrals. Math. Oper. Res. 2020, 45, 993–1015. [Google Scholar] [CrossRef]
- Calì, C.; Longobardi, M.; Ahmadi, J. Some properties of cumulative Tsallis entropy. Phys. A Stat. Mech. Its Appl. 2017, 486, 1012–1021. [Google Scholar] [CrossRef]
- Di Crescenzo, A.; Longobardi, M. On cumulative entropies. J. Stat. Plan. Inference 2009, 139, 4072–4087. [Google Scholar] [CrossRef]
- Yin, X.; Balakrishnan, N.; Yin, C. Bounds for Gini’s mean difference based on first four moments, with some applications. Stat. Pap. 2022. [Google Scholar] [CrossRef]
- Rajesh, G.; Sunoj, S.M. Some properties of cumulative Tsallis entropy of order α. Stat. Pap. 2019, 60, 933–943. [Google Scholar] [CrossRef]
- Rao, M.; Chen, Y.; Vemuri, B.; Wang, F. Cumulative residual entropy: A new measure of information. IEEE Trans. Inf. Theory 2004, 50, 1220–1228. [Google Scholar] [CrossRef]
- Xiong, H.; Shang, P.; Zhang, Y. Fractional cumulative residual entropy. Commun. Nonlinear Sci. Numer. Simul. 2019, 78, 104879. [Google Scholar] [CrossRef]
- Wang, S. An actuarial index of the right-tail risk. N. Am. Actuar. J. 1998, 2, 88–101. [Google Scholar] [CrossRef]
- Di Crescenzo, A.; Kayal, S.; Meoli, A. Fractional generalized cumulative entropy and its dynamic version. Commun. Nonlinear Sci. Numer. Simul. 2021, 102, 105899. [Google Scholar] [CrossRef]
- Psarrakos, G.; Navarro, J. Generalized cumulative residual entropy and record values. Metrika 2013, 76, 623–640. [Google Scholar] [CrossRef]
- Kayal, S. On generalized cumulative entropies. Probab. Eng. Inf. Sci. 2016, 30, 640–662. [Google Scholar] [CrossRef]
- Calì, C.; Longobardi, M.; Navarro, J. Properties for generalized cumulative past measures of information. Probab. Eng. Inf. Sci. 2020, 34, 92–111. [Google Scholar] [CrossRef]
- Chakraborty, S.; Pradhan, B. On weighted cumulative Tsallis residual and past entropy measures. Commun. Stat. Simul. Comput. 2023, 52, 2058–2072. [Google Scholar] [CrossRef]
- Mirali, M.; Baratpour, S. Some results on weighted cumulative entropy. J. Iran. Stat. Soc. 2017, 16, 21–32. [Google Scholar]
- Misagh, F.; Panahi, Y.; Yari, G.H.; Shahi, R. Weighted cumulative entropy and its estimation. In Proceedings of the IEEE International Conference on Quality and Reliability (ICQR), Bangkok, Thailand, 14–17 September 2011; pp. 477–480. [Google Scholar]
- Mirali, M.; Baratpour, S.; Fakoor, V. On weighted cumulative residual entropy. Commun. Stat.-Theory Methods 2017, 46, 2857–2869. [Google Scholar] [CrossRef]
- Tahmasebi, S. Weighted extensions of generalized cumulative residual entropy and their applications. Commun. Stat.-Theory Methods 2020, 49, 5196–5219. [Google Scholar] [CrossRef]
- Toomaj, A.; Di Crescenzo, A. Connections between weighted generalized cumulative residual entropy and variance. Mathematics 2020, 8, 1072. [Google Scholar] [CrossRef]
- Kayal, S. On weighted generalized cumulative residual entropy of order n. Methodol. Comput. Appl. Probab. 2018, 20, 487–503. [Google Scholar] [CrossRef]
- Suhov, Y.; Yasaei Sekeh, S. Weighted cumulative entropies: An extention of CRE and CE. arXiv 2015, arXiv:1507.07051v1. [Google Scholar]
- Asadi, M.; Zohrevand, Y. On the dynamic cumulative residual entropy. J. Stat. Plan. Inference 2007, 137, 1931–1941. [Google Scholar] [CrossRef]
- Mirali, M.; Baratpour, S. Dynamic version of weighted cumulative residual entropy. Commun. Stat.-Theory Methods 2017, 46, 11047–11059. [Google Scholar] [CrossRef]
- Navarro, J.; Aguila, Y.; Asadi, M. Some new results on the cumulative residual entropy. J. Stat. Plan. Inference 2010, 140, 310–322. [Google Scholar] [CrossRef]
- Schmeidler, D. Integral representation without additivity. Proc. Am. Math. Soc. 1986, 97, 255–261. [Google Scholar] [CrossRef]
- Acerbi, C. Spectral measures of risk: A coherent representation of subjective risk aversion. J. Bank. Financ. 2002, 26, 1505–1518. [Google Scholar] [CrossRef]
- Yaari, M.E. The dual theory of choice under risk. Econometrica 1987, 55, 95–115. [Google Scholar] [CrossRef]
- Di Crescenzo, A.; Longobardi, M. On weighted residual and past entropies. Sci. Math. Jpn. 2006, 64, 255–266. [Google Scholar]
- Misagh, F.; Yari, G.H. On weighted interval entropy. Stat. Probab. Lett. 2011, 29, 167–176. [Google Scholar] [CrossRef]
- Zuo, B.; Yin, C.; Yao, J. Multivariate range Value-at-Risk and covariance risk measures for elliptical and log-elliptical distributions. arXiv 2023, arXiv:2305.09097. [Google Scholar]
- Sordo, M.A.; Ramos, H.M. Characterization of stochastic orders by L-functionals. Stat. Pap. 2007, 48, 249–263. [Google Scholar] [CrossRef]
- Denneberg, D. Premium calculation: Why standard deviation should be replaced by absolute deviation. Astin Bull. 1990, 20, 181–190. [Google Scholar] [CrossRef]
- Landsman, Z.M.; Valdez, E.A. Tail conditional expectations for elliptical distributions. N. Am. Actuar. J. 2003, 7, 55–71. [Google Scholar] [CrossRef]
- Landsman, Z.; Valdez, E.A. Tail conditional expectation for exponential dispersion models. Astin Bull. 2005, 35, 189–209. [Google Scholar] [CrossRef]
- Hou, Y.; Wang, X. Extreme and inference for tail Gini functional with applications in tail risk measurement. J. Am. Stat. Assoc. 2021, 535, 1428–1443. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).