Article

Extreme Value Index Estimation by Means of an Inequality Curve

by Emanuele Taufer 1,*, Flavio Santi 2, Pier Luigi Novi Inverardi 1, Giuseppe Espa 1 and Maria Michela Dickson 1

1 Department of Economics and Management, University of Trento, 38122 Trento, Italy
2 Department of Economics, University of Verona, 37136 Verona, Italy
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(10), 1834; https://doi.org/10.3390/math8101834
Submission received: 3 September 2020 / Revised: 1 October 2020 / Accepted: 9 October 2020 / Published: 19 October 2020
(This article belongs to the Section Probability and Statistics)

Abstract:
A characterizing property of the Zenga (1984) inequality curve is exploited in order to develop an estimator for the extreme value index of a distribution with a regularly varying tail. The approach proposed here has a nice graphical interpretation which provides a powerful method for the analysis of the tail of a distribution. The properties of the proposed estimation strategy are analysed theoretically and by means of simulations. The usefulness of the method is also tested on real data sets.

1. Introduction

A distribution function F with survivor function $\bar F := 1 - F$ is regularly varying (RV) at infinity with index $-\alpha$ if there exists an $\alpha > 0$ such that, for all $x > 0$,

$$\lim_{t \to \infty} \frac{\bar F(tx)}{\bar F(t)} = x^{-\alpha};$$    (1)

in this case we say that $\bar F \in RV_{-\alpha}$. In the extreme value (EV) literature it is typical to refer to the EV index $\gamma > 0$ with $\alpha = 1/\gamma$. Informally, we will say that the distribution has a Pareto tail or that the distribution is of the power-law type. Note that the case $1 < \alpha \le 2$ (or $1/2 \le \gamma < 1$) entails distributions with infinite variance and finite mean, while the case $\alpha > 2$ (or $\gamma < 1/2$) entails distributions with finite mean and variance.
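The limit in (1) is easy to check numerically. A minimal Python sketch (our own illustration, not part of the paper), using a survival function whose tail carries a slowly varying logarithmic factor:

```python
import math

# Illustrative survival function with a regularly varying tail:
# sf(t) = t^(-alpha) * log(e + t); the log factor is slowly varying,
# so sf belongs to RV_{-alpha}.
def sf(t, alpha=2.0):
    return t ** (-alpha) * math.log(math.e + t)

alpha, x = 2.0, 3.0
for t in (1e1, 1e3, 1e6):
    # The ratio sf(t*x)/sf(t) approaches x^(-alpha) = 1/9 as t grows.
    print(t, sf(t * x, alpha) / sf(t, alpha))
```

For an exact Pareto tail the ratio equals $x^{-\alpha}$ for every t; the slowly varying factor only delays the convergence.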
Precision in the analysis of the tail of a distribution allows one, for example, to perform proper risk evaluation in finance, to correct empirical income distributions for various top-income measurement problems, or to identify a proper growth theory in economics or the biological sciences. For further examples of applications and deeper discussion see Clauset et al. [1], Jenkins [2] and Hlasny [3], with specific references to applications in income distributions and an overview of available models; see also Heyde and Kou [4] for a deep discussion of graphical methods for tail analysis.
The present paper concentrates on estimation of the EV index $\gamma$. Probably the best-known estimator of the EV index is the Hill [5] estimator, which exploits the k upper order statistics of a random sample through the formula

$$H(k) := H_{k,n} = k^{-1} \sum_{i=1}^{k} \left[ \log X_{(n-i+1)} - \log X_{(n-k)} \right],$$    (2)

where $X_{(i)}$ denotes the i-th order statistic from a sample of size n and $k = k(n) \le n$ diverges to $\infty$ in an appropriate way. The Hill estimator has been thoroughly studied in the literature and several generalizations have been proposed. For a recent review of estimation procedures for the EV (or tail) index of a distribution see Gomes and Guillou [6].
Some recent approaches in tail or EV index estimation we would like to mention here are those of Brilhante et al. [7], who define a moment of order p estimator which reduces to the Hill estimator for $p = 0$, and Beran et al. [8], who define a harmonic moment tail index estimator. Recently, Paulauskas and Vaičiulis [9] and Paulauskas and Vaičiulis [10] have connected some of the above approaches in an interesting way by defining parametric families of functions of the order statistics. Reduced bias (RB) versions of the above estimators have appeared in the literature; see for example Caeiro et al. [11], Gomes et al. [12] and Gomes et al. [13].
The main contribution of this paper is a new estimation procedure for the EV index of a distribution satisfying (1), which relies on Zenga's inequality curve $\lambda(p)$, $p \in (0,1)$ (Zenga [14]).
The curve $\lambda$ has the property of being constant for the Pareto Type I distribution; it has an intuitive graphical interpretation, it does not depend on location, and it shows a nice regular behaviour when estimated. These properties will be discussed, analysed and extended in order to define our inferential strategies. It is also important to point out that an inequality curve is defined for positive observations; hence we implicitly assume that the right tail of a distribution is analysed. This is not really a restriction since, if one wishes to consider the left tail, it is enough to change the sign of the data. Also, if the distribution is over the real line, the tails can be considered separately and, under a symmetry assumption, absolute values of the data can be considered. The approach to estimation proposed here, directly connected to the inequality curve $\lambda$, has a nice and effective graphical interpretation which greatly helps in the analysis. Other graph-based methods are to be found in Kratz and Resnick [15], who exploit properties of the QQ-plot, and Grahovac et al. [16], who discuss an approach based on the asymptotic properties of the partition function, a moment statistic generally employed in the analysis of multi-fractality; see also Jia et al. [17], who analyse graphically and analytically the real part of the characteristic function at the origin.
We would like to point out here that the λ curve discussed by Zenga [14] does not coincide with the Zenga [18] curve originally indicated by the author with I ( p ) , p ( 0 , 1 ) (more details in the next Section).
The paper is organized as follows: Section 2 introduces the curve λ and discusses its properties; Section 3 analyses the proposed estimation strategy and discusses some practical issues in applications. Finite sample performances are analysed in Section 4 and Section 5 where applications with simulated and real data are considered. Proofs are postponed to the last Section.

2. The Proposed Estimator for the EV Index

Let X be a positive random variable with finite mean $\mu$, distribution function F, and probability density f. The inequality curve $\lambda(p)$ was first defined in Zenga [14]; with the original notation:

$$\lambda(p) = 1 - \frac{\log\left(1 - Q(F^{-1}(p))\right)}{\log(1-p)}, \quad 0 < p < 1,$$    (3)

where $F^{-1}(p) = \inf\{x : F(x) \ge p\}$ is the generalized inverse of F and $Q(x) = \int_0^x t f(t)\,dt / \mu$ is the first incomplete moment. Q can be defined as a function of p via the Lorenz curve

$$L(p) = Q(F^{-1}(p)) = \frac{1}{\mu} \int_0^p F^{-1}(t)\,dt.$$    (4)
See further Zenga [19] and Arcagni and Porro [20] for a general introduction and an analysis of $\lambda(p)$ for different distributions. It is worth mentioning that the curve $\lambda(p)$ should not be confused with the inequality curve defined in Zenga [18], originally indicated as

$$I(p) = 1 - \frac{L(p)}{p} \cdot \frac{1-p}{1-L(p)}, \quad p \in (0,1).$$    (5)

The curve I(p) has many nice properties and has been heavily studied in the recent literature; it is now commonly known as the Zenga curve Z(p). For the sake of completeness, note that in Zenga [14] the notation Z(p) was originally used for another inequality curve based on quantiles, that is,

$$Z(p) = 1 - \frac{x_p}{x_p^*}, \quad p \in (0,1),$$    (6)
where $x_p = F^{-1}(p)$ and $x_p^* = Q^{-1}(p)$. As pointed out in Zenga [14] (without providing if-and-only-if results), the curve $\lambda$ is constant in p for Type-I Pareto distributions, while the curve Z, as defined in Equation (6), is constant in p for Log-normal distributions. On the contrary, the curve I, as defined in (5), is not constant for any distribution; see Zenga [14] and Zenga [18] for further details. Turning back the attention to the curve $\lambda$, note that for a Pareto Type I distribution with

$$F(x) = 1 - (x/x_0)^{-\alpha}, \quad x \ge x_0, \ x_0 > 0,$$    (7)

under the condition that $\alpha > 1$, the Lorenz curve has the form

$$L(p) = 1 - (1-p)^{1-\gamma}, \quad \gamma = 1/\alpha;$$    (8)

it follows that in this case $\lambda(p) = \gamma$ for all $p \in (0,1)$, that is, $\lambda(p)$ is constant in p. This is actually an if-and-only-if result, which we formalize in the following lemma (see Section 7 for its proof).
Lemma 1.
The curve λ ( p ) defined in (3) is constant in p, p ( 0 , 1 ) , and equals γ = 1 / α if, and only if, F satisfies (7) with α > 1 or, equivalently, γ < 1 .
Lemma 1 can be exploited to derive a new approach to the estimation of the EV index $\gamma = 1/\alpha$ for the Pareto distribution. In order to define an estimator for the more general case where $\bar F$ satisfies (1), it is worth analysing in more detail the behaviour of the Lorenz curve under the framework defined by (1). This will be done by considering the truncated random variable $Y = X \mid X > s$ with $X \sim F$, $F \in RV_{-1/\gamma}$. If G and g denote respectively the distribution function and the density of Y, note that $G(y) = \frac{F(y) - F(s)}{\bar F(s)}$ and $g(y) = f(y)/\bar F(s)$. Furthermore, setting $G(y) = p$ and inverting, we have $G^{-1}(p) = F^{-1}(F(s) + p\,\bar F(s))$. A formal result on the Lorenz curve for Y is given in the next lemma.
Lemma 2.
Consider the random variable X with distribution function $F \in RV_{-1/\gamma}$ and absolutely continuous density f; define $Y = X \mid X > s$, $s > 0$, and let $L_Y(p)$ be the Lorenz curve of Y. Then

$$1 - L_Y(p) \to (1-p)^{1-\gamma}, \quad p \in (0,1), \ s \to \infty.$$    (9)
Remark 1.
Lemma 2 implies that the curve $\lambda(p)$ of the truncated random variable $Y = X \mid X > s$, with distribution satisfying (1), will be constant with value $\gamma$ for all $p \in (0,1)$ if the truncation level s is large enough. This fact can be exploited to derive a general estimator of the EV index for all distributions in the class (1), as long as $\gamma < 1$.
Before arriving at a formal definition of the estimator, some preliminary quantities need to be defined. Let $X_{(1)}, \dots, X_{(n)}$ be the order statistics of a random sample of size n from a distribution satisfying (1). Let $k = k(n) \to \infty$ and $k(n)/n \to 0$ as $n \to \infty$. Define the estimator of the conditional Lorenz curve as

$$\hat L_k(p) = \frac{\sum_{j=1}^{i} X_{(n-k+j)}}{\sum_{j=1}^{k} X_{(n-k+j)}}, \quad \text{for } \frac{i}{k} \le p < \frac{i+1}{k}, \ i = 1, \dots, k-1.$$    (10)

After defining

$$\hat\lambda_{k,i} = \hat\lambda_k(p_i) = 1 - \frac{\log\left(1 - \hat L_k(p_i)\right)}{\log(1 - p_i)},$$    (11)

the proposed estimator of $\gamma$ is

$$\hat\gamma_k = \frac{1}{k-1} \sum_{i=1}^{k-1} \hat\lambda_{k,i}.$$    (12)
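For concreteness, the chain (10)–(12) can be sketched in a few lines of plain Python (an illustrative implementation of ours; the paper's own computations are in R):

```python
import math

def gamma_hat(sample, k):
    """Estimator (12): the average of lambda-hat_{k,i} of (11), computed from
    the conditional Lorenz curve (10) on the k upper order statistics."""
    top = sorted(sample)[-k:]          # X_(n-k+1), ..., X_(n), ascending
    total = sum(top)
    partial, acc = 0.0, 0.0
    for i in range(1, k):              # i = 1, ..., k-1, with p_i = i/k
        partial += top[i - 1]
        L_i = partial / total          # conditional Lorenz curve at p_i
        acc += 1.0 - math.log(1.0 - L_i) / math.log(1.0 - i / k)
    return acc / (k - 1)
```

For a Pareto Type I sample with $x_0 = 1$, inversion of (7) gives $X = U^{-\gamma}$ with U uniform on (0,1), so the estimate should be close to the true $\gamma$ at any truncation level.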
Remark 2.
The estimator defined in (12), based on a Lorenz curve computed on upper order statistics (defined by k), puts into practice the results of Lemmas 1 and 2. Below we discuss conditions under which (12) provides a consistent estimator of $\gamma$ for the class of distributions satisfying (1). Guidance in the choice of k will also be discussed.
Letting I(A) denote the indicator function of the event A, the above estimators are based on the non-parametric estimators

$$F_n(x) = \frac{1}{n} \sum_{i=1}^{n} I(X_i \le x), \qquad Q_n(x) = \frac{\sum_{i=1}^{n} X_i\, I(X_i \le x)}{\sum_{i=1}^{n} X_i}.$$    (13)

By the Glivenko–Cantelli theorem it holds that $F_n(x) \to F(x)$ almost surely and uniformly in $0 < x < \infty$; under the assumption that $E(X) < \infty$, it holds that $Q_n(x) \to Q(x)$ almost surely and uniformly in $0 < x < \infty$ (Goldie [21]). $F_n$ and $Q_n$ are both step functions with jumps at $X_{(1)}, \dots, X_{(n)}$. The jumps of $F_n$ are of size $1/n$, while the jumps of $Q_n$ are of size $X_{(i)}/T$, where $T = \sum_{i=1}^{n} X_{(i)}$.
Letting $F_n^{-1}(p) = \inf\{x : F_n(x) \ge p\}$, we note that since $F_n^{-1}\!\left(\frac{n-k}{n}\right) = X_{(n-k)}$ and $F_n^{-1}\!\left(F_n(X_{(n-k)}) + p\,\bar F_n(X_{(n-k)})\right) = X_{(n-k+i)}$ for $i/k \le p < (i+1)/k$, we have the representation

$$\hat L_k(p) = \frac{\sum_{i=1}^{n} X_i\, I(X_i > X_{(n-k)})\, I(X_i \le X_{(n-k+i)})}{\sum_{i=1}^{n} X_i\, I(X_i > X_{(n-k)})}, \quad \text{for } \frac{i}{k} \le p < \frac{i+1}{k}, \ i = 1, \dots, k-1.$$    (14)

Exploiting the above representation and the results of Goldie [21], uniform consistency of $\hat L_k(p)$ can be claimed. As far as uniform consistency of $\hat\lambda_k(p)$ is concerned, we state the following lemma, which is proven in Section 7.
Lemma 3.
Let $X_1, \dots, X_n$ be i.i.d. from a distribution F with $E(X) < \infty$. Then

$$\sup_{p \in (0,1)} \left|\hat\lambda_{k(n)}(p) - \lambda(p)\right| = o_P(1), \quad n \to \infty.$$    (15)
Following Lemma 2, graphical inspection of the tail of a distribution satisfying (1) can be carried out by analysing a graph with coordinates $(p_i, \hat\lambda_i)$, $i = 1, \dots, n$, which will show a flat line with intercept around the value $\gamma = 1/\alpha$. Apart from the case of the Pareto distribution, for distributions satisfying (1), in order to observe a constant line with intercept close to $\gamma = 1/\alpha$ it is necessary to truncate the sample, that is, to use only the upper order statistics $X_{(n-k+1)}, \dots, X_{(n)}$ when estimating $\lambda$.
As an example, Figure 1 reports the empirical curve $\hat\lambda_i$ as a function of $p_i$ for some cases of interest at different truncation thresholds. There appear two distributions with tail satisfying (1), namely the Pareto as defined by (7) and the Fréchet (formally defined below), both with tail index $\alpha = 2$. There appear also two distributions which do not satisfy (1), namely the Log-normal with null location and standard deviation equal to 2 and the Exponential with unit scale. Note that for the Log-normal distribution the curve $\lambda$ does not depend on location, while it does not depend on scale for the Exponential distribution (Zenga [14]).
Inspection of the graphs reveals a remarkably regular behaviour of the curves; the Pareto curve is constant (with some slight variations) at all levels of truncation, while the Fréchet one becomes more and more constant with increasing levels of truncation. The Log-normal and Exponential cases show a slope in the curve at all levels of truncation.

3. Asymptotic Properties of $\hat\gamma_k$

3.1. Consistency

Exploiting some theoretical results given in Section 7 (see the proof of Lemma 2 for details), one can write

$$\frac{1}{k-1} \sum_{i=1}^{k-1} \left( \hat\lambda_k(p_i) - \lambda(p_i) \right) = (\hat\gamma_k - \gamma) - \frac{1}{k-1} \sum_{i=1}^{k-1} \frac{\log\left[ H_U\!\left([(1-p_i)\bar F(s)]^{-1}\right) / H_U\!\left([\bar F(s)]^{-1}\right) \right]}{\log(1-p_i)},$$    (16)

where $H_U(s)$ is a slowly varying function, that is, $\lim_{s \to \infty} \frac{H_U(sx)}{H_U(s)} = 1$ for any $x > 0$.
To analyse in more detail the second term on the r.h.s. of the above equation, we assume a second-order condition which is quite common in EV theory (Caeiro et al. [11] and Gomes et al. [13]), that is,

$$\lim_{s \to \infty} \left\{ \log H_U\!\left([(1-p)\bar F(s)]^{-1}\right) - \log H_U\!\left([\bar F(s)]^{-1}\right) \right\} = \frac{(1-p)^{\rho} - 1}{\rho}\, A\!\left([\bar F(s)]^{-1}\right),$$    (17)

where $A(t) = \gamma \beta t^{\rho}$, $\gamma > 0$, $\rho < 0$, $\beta \ne 0$. To evaluate the r.h.s. of (16), as $n \to \infty$, set $s = F^{-1}\!\left(\frac{n-k}{n}\right) \to \infty$, from which $\bar F(s) = k/n$. Then, an asymptotic evaluation requires to evaluate the expression

$$\gamma \beta \left(\frac{k}{n}\right)^{-\rho} \frac{1}{k-1} \sum_{i=1}^{k-1} \frac{(1-p_i)^{\rho} - 1}{\rho \log(1-p_i)}.$$    (18)

Note that the asymptotic behaviour of the sum in (18) is governed by $p_i \to 0$. Expanding the numerator in a Taylor series and using $\log(1-x) \approx -x$ as $x \to 0$,

$$\left(\frac{k}{n}\right)^{-\rho} \frac{1}{k-1} \sum_{i=1}^{k-1} \frac{(1-p_i)^{\rho} - 1}{\rho \log(1-p_i)} \approx \left(\frac{k}{n}\right)^{-\rho} \frac{1}{k-1} \sum_{i=1}^{k-1} \frac{1 - \rho p_i + \frac{1}{2}\rho(\rho-1)p_i^2 - 1}{-\rho\, p_i} \approx \left(\frac{k}{n}\right)^{-\rho} \left[ 1 + \frac{1}{4}(1-\rho)(k-1) \right],$$    (19)

which is $o(1)$ as $k \to \infty$ if $k = n^{\delta}$, $0 < \delta < 1$, with $\delta < \frac{-\rho}{1-\rho}$. For example, if $\rho = -1/2$ then one can choose $0 < \delta < 1/3$ in order to have an asymptotically bias-free estimator. There exist valid estimation methods for $\rho$ and $\beta$, implemented also in R (see Section 4 for details).
Lemma 4.
Under the conditions of Lemma 3 and condition (17), with $k = n^{\delta}$, $0 < \delta < \frac{-\rho}{1-\rho}$,

$$|\hat\gamma_k - \gamma| = o_P(1) \quad \text{as } n \to \infty.$$    (20)
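The constraint on $\delta$ in Lemma 4 is easy to operationalize. A small helper (our own naming, with an assumed value of $\rho$) returns an admissible k for a given sample size:

```python
def admissible_k(n, rho, frac=0.9):
    """k = n^delta with delta strictly below -rho/(1 - rho), as required by
    Lemma 4; frac < 1 keeps delta inside the admissible region."""
    delta_max = -rho / (1.0 - rho)
    return max(2, int(round(n ** (frac * delta_max))))

# With rho = -1/2 the bound is delta < 1/3, as in the example above.
print(admissible_k(10_000, rho=-0.5))
```

In practice $\rho$ is unknown and is replaced by an external estimate (see Section 4).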

3.2. Asymptotic Distribution

To provide an operational distributional result, we exploit a result of Csörgő and Mason [22]. Given $X_1, \dots, X_n$ i.i.d. with $X \sim F$, $F \in RV_{-1/\gamma}$, let $S_j = E_1 + E_2 + \dots + E_j$, where the $E_j$'s are i.i.d. Exponential random variables with unit scale parameter (Exp(1)); then, for fixed k,

$$\frac{1}{n^{\gamma} H(1/n)} \sum_{j=1}^{k} X_{(n-j+1)} \xrightarrow{D} \sum_{j=1}^{k} (S_j)^{-\gamma}, \quad n \to \infty.$$    (21)

Note that $S_j \sim \Gamma(j, 1)$ and that if $Y = S_j^{-\gamma}$ then Y has a Generalized Inverse Gamma (GIG) distribution with density

$$f(y) = \frac{\gamma^{-1}}{\Gamma(j)}\, y^{-j/\gamma - 1}\, e^{-y^{-1/\gamma}}, \quad j, \gamma > 0.$$    (22)

Compare with Mead [23] setting $\lambda = 0$, $\alpha = j$, $\beta = \gamma$, $\theta = 1$. Using this general result, a simple and fast parametric bootstrap procedure can be implemented in order to obtain the full asymptotic distribution of $\hat\gamma$ and, from it, estimates of the standard error and confidence intervals.
Remark 3.
Since γ ^ k is consistent, the above procedure is consistent for the asymptotic distribution of the estimator (12).
Once the bootstrap distribution is available, it can be used for variance and confidence interval estimation. Simulations show that the approximation works quite well already for small sample sizes and for different values of k. Clearly the precision depends on a good preliminary estimator of $\gamma$, which is the only parameter needed in determining the distribution; this is, however, a typical feature of asymptotic results for estimators.
Figure 2, considering different sample sizes ($n = 100, 500, 1000, 2000$) from a Fréchet(2) distribution, shows the histograms of the true distribution of $\hat\gamma_k$ (obtained by the Monte Carlo method with 2000 iterations) and the bootstrap distribution obtained by Algorithm 1. The value of $\gamma$ used in Algorithm 1 has been randomly selected from the 1000 central values estimated in the corresponding Monte Carlo experiment. Sampling of $\hat\gamma_k$ has been done independently in each of the 4 experiments in the graph.
Algorithm 1 Bootstrap for the asymptotic distribution of $\hat\gamma_k$.
1: Given the data, get the estimated value $\hat\gamma_k$ using formulae (10) to (12).
2: Generate k i.i.d. Exp(1) random variables $E_1, \dots, E_k$ and form the partial sums $S_j = \sum_{i=1}^{j} E_i$.
3: Obtain the bootstrap estimate, say $\hat\gamma^*$, using estimator (12) applied to the data $S_1^{-\hat\gamma}, \dots, S_k^{-\hat\gamma}$.
4: Repeat the previous steps a large number of times to get the asymptotic distribution of $\hat\gamma_k$ for given k and given sample size.
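The steps above can be sketched as follows (a Python illustration of ours, reusing a plain implementation of estimator (12); the bootstrap sample $S_1^{-\hat\gamma}, \dots, S_k^{-\hat\gamma}$ mimics the limit in (21)):

```python
import math
import random

def gamma_hat(sample, k):
    """Estimator (12), computed from (10)-(11) on the k upper order statistics."""
    top = sorted(sample)[-k:]
    total, partial, acc = sum(top), 0.0, 0.0
    for i in range(1, k):
        partial += top[i - 1]
        acc += 1.0 - math.log(1.0 - partial / total) / math.log(1.0 - i / k)
    return acc / (k - 1)

def bootstrap_gamma(g0, k, B=500, seed=0):
    """Algorithm 1: B bootstrap replicates of the estimator, generated from
    partial sums of Exp(1) variables raised to the power -g0."""
    rng = random.Random(seed)
    replicates = []
    for _ in range(B):
        data, s = [], 0.0
        for _ in range(k):
            s += rng.expovariate(1.0)      # S_j = E_1 + ... + E_j
            data.append(s ** -g0)          # S_j^(-gamma-hat)
        replicates.append(gamma_hat(data, k))
    return replicates

# A 95% bootstrap confidence interval from the replicate quantiles:
boot = bootstrap_gamma(0.5, k=200, B=200)
boot.sort()
ci = (boot[int(0.025 * len(boot))], boot[int(0.975 * len(boot))])
```

The names and the quantile-based interval are our own choices; the paper only prescribes the resampling scheme.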

3.3. Selecting k

The estimator $\hat\gamma_k$, like many tail estimators, requires the choice of k, the number of upper order statistics to be used in estimation. Lemma 4 provides some indication on how to do so; estimation of the required parameters governing the second-order conditions can be carried out quite straightforwardly (see the discussion in the next section).
In order to arrive at a data-driven procedure to define the fraction of upper order statistics for estimation of the EV index, consider the linear equation

$$\lambda(p) = \beta_0 + \beta_1 p.$$    (23)

From Lemmas 1 and 2 it follows that, for a Type-I Pareto distribution, and for a truncated random variable with distribution satisfying (1) once s is large enough, one has $\beta_0 = \gamma$ and $\beta_1 = 0$ in the above equation. Consider a sample version of (23): given a random sample $X_1, \dots, X_n$, using the notation established in the previous Section, write

$$\hat\lambda_{k,i} = \beta_0 + \beta_1 p_i + \varepsilon_i, \quad i = 1, \dots, k-1,$$    (24)

where $\varepsilon_i = \hat\lambda_i - \lambda_i$. Note that the proposed estimator (12) can be interpreted as the intercept estimate in model (24) exploiting the information that $\beta_1 = 0$. More formally, using ordinary least squares, define the estimators

$$\hat\gamma_k = \hat\beta_0 = \frac{1}{k-1} \sum_{i=1}^{k-1} \hat\lambda_{k,i}, \qquad \hat\beta_1 = \frac{\sum_{i=1}^{k-1} \hat\lambda_{k,i}\,(p_i - \bar p)}{S_p^2} = \sum_{i=1}^{k-1} \hat\lambda_{k,i}\, c(p_i),$$    (25)

where $\bar p$ is the mean of the $p_i$'s, $S_p^2 = \sum_{i=1}^{k-1} (p_i - \bar p)^2$ and $c(p_i) = (p_i - \bar p)/S_p^2$. Lemma 3 implies
Proposition 1.
Under the conditions of Lemma 4 it holds that $\hat\beta_1 \xrightarrow{P} 0$.
Following this reasoning, one can define a procedure based on the graph of $(p_i, \hat\lambda_{k,i})$ at different levels of truncation: we select the fraction of upper order statistics which yields the smallest regression coefficient $\hat\beta_1$, in absolute value, from the regression (25).
Algorithm 2 Data-driven estimator $\hat\gamma_{Opt}$.
1: Given a random sample of size n, order the data and consider sub-samples defined by the $(1-p)$-th fraction of upper order statistics. In our simulations the values $p = 0.1 \cdot i$, $i = 0, 1, \dots, 9$, were considered. However, in order to avoid using sub-samples with too few observations when n is small, an upper bound of the form $0.5 + 0.4 \max(0, (n-100)/n)$ is imposed on the sequence $p = 0.1 \cdot i$, $i = 0, 1, \dots, 9$. For example, when $n \le 100$, at least 50% of the largest order statistics are used.
2: For each sub-sample estimate $\hat\gamma_k$ and $\hat\beta_1$.
3: Define $\hat\gamma_{Opt}$ as the estimate $\hat\gamma_k$ obtained for the sub-sample which has the lowest value of $|\hat\beta_1|$.
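A compact sketch of the procedure (our own Python illustration; the truncation grid and the OLS slope follow (24)–(25)):

```python
import math

def lorenz_lambda_points(sample, k):
    """(p_i, lambda-hat_{k,i}) pairs of (10)-(11) for the k upper order stats."""
    top = sorted(sample)[-k:]
    total, partial, pts = sum(top), 0.0, []
    for i in range(1, k):
        partial += top[i - 1]
        p = i / k
        pts.append((p, 1.0 - math.log(1.0 - partial / total) / math.log(1.0 - p)))
    return pts

def gamma_opt(sample, ps=(0.0, 0.1, 0.2, 0.3, 0.4, 0.5)):
    """Algorithm 2: over candidate truncation fractions p, keep the gamma-hat
    whose OLS slope beta_1-hat in (25) is smallest in absolute value."""
    n, best = len(sample), None
    for p in ps:
        k = max(3, int((1.0 - p) * n))
        pts = lorenz_lambda_points(sample, k)
        pbar = sum(q for q, _ in pts) / len(pts)
        sp2 = sum((q - pbar) ** 2 for q, _ in pts)
        b1 = sum(l * (q - pbar) for q, l in pts) / sp2   # OLS slope
        b0 = sum(l for _, l in pts) / len(pts)           # equals gamma-hat_k
        if best is None or abs(b1) < best[0]:
            best = (abs(b1), b0)
    return best[1]
```

The grid `ps` here is shortened for illustration; the paper's grid runs over $p = 0, 0.1, \dots, 0.9$ with the small-sample bound described in step 1.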
In the next two Sections, the performance of the proposed estimation strategy is analysed on simulated and real data.

4. Numerical Comparisons

In this section we evaluate the performance of $\hat\gamma_k$ with respect to some alternative estimators of the EV (or tail) index. As far as the estimator for $\gamma$ is concerned, beyond the estimator $\hat\gamma_{Opt}$, the estimator $\hat\gamma_k$ with different levels of truncation of the data is considered. In the tables, $\hat\gamma_{1-p}$ indicates the estimator $\hat\gamma_k$, with $1-p$ the fraction of upper order statistics used in estimation; the notation $\hat\gamma_{All}$ indicates the case where all the sample data are used in estimation.
Numerical comparisons will be carried out with respect to some reduced bias (RB) competitors (Caeiro et al. [11], Gomes et al. [12]) based on the Hill (Hill [5]), generalized Hill (Beirlant et al. [24]), moment (Dekkers et al. [25]) and moment of order p (Gomes et al. [13]) estimators, optimized with respect to the choice of k as discussed in Gomes et al. [13].
RB estimation of γ for the above mentioned alternative estimators is based on external estimation of additional parameters ( ρ , β ) (refer to Gomes et al. [26] and Gomes et al. [13] for further details). In our comparisons the following RB-versions are used:
(1)
RB-Hill estimator, outperforming H(k) (defined in (2)) for all k:

$$\bar H(k) = H(k) \left( 1 - \frac{\hat\beta\,(n/k)^{\hat\rho}}{1 - \hat\rho} \right).$$    (26)
(2)
RB-Moment estimator, denoted by MM in the tables,

$$\bar M(k) = M(k) \left( 1 - \frac{\hat\beta\,(n/k)^{\hat\rho}}{1 - \hat\rho} - \frac{\hat\beta \hat\rho\,(n/k)^{\hat\rho}}{(1 - \hat\rho)^2} \right),$$    (27)

with

$$M(k) = M_k^{(1)} + \frac{1}{2} \left[ 1 - \left( \frac{M_k^{(2)}}{(M_k^{(1)})^2} - 1 \right)^{-1} \right]$$    (28)

and $M_k^{(j)} = \frac{1}{k} \sum_{i=1}^{k} \left( \ln X_{(n-i+1)} - \ln X_{(n-k)} \right)^j$, $j \ge 1$.
(3)
RB-Generalized Hill estimator, $\overline{GH}(k)$, denoted GH in the tables, with the same bias correction as in (27) applied to

$$GH(k) = \frac{1}{k} \sum_{j=1}^{k} \left( \ln UH(j) - \ln UH(k) \right)$$    (29)

with $UH(j) = X_{(n-j)}\, H(j)$, $1 \le j \le k$.
(4)
RB-MOP (moment of order p) estimator, for $0 < p < \alpha$ (the case $p = 0$ reduces to the Hill estimator), defined by

$$\bar H_p(k) = H_p(k) \left( 1 - \frac{\hat\beta\,(1 - p H_p(k))}{1 - \hat\rho - p H_p(k)} \left( \frac{n}{k} \right)^{\hat\rho} \right),$$    (30)

with $H_p(k) = \left(1 - A_p^{-p}(k)\right)/p$, $A_p(k) = \left( \sum_{i=1}^{k} U_{ik}^p / k \right)^{1/p}$, $U_{ik} = X_{(n-i+1)}/X_{(n-k)}$, $1 \le i \le k < n$. Denoted by MP$_p$ in the tables. In this case p is a tuning parameter which will be set, in our simulations, equal to 0.5 and 1. For an estimated optimal value of p based on a preliminary estimator of $\alpha$ see Gomes et al. [13].
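As a point of reference, the Hill estimator (2) and its RB version (26) are straightforward to sketch in Python (an illustration of ours; $\hat\beta$ and $\hat\rho$ are assumed to come from an external second-order estimation step, such as evt0's mop() in R):

```python
import math

def hill(sample, k):
    """Hill estimator H(k) of Eq. (2)."""
    xs = sorted(sample)                # ascending: xs[-i] = X_(n-i+1)
    log_xk = math.log(xs[-k - 1])      # log X_(n-k)
    return sum(math.log(xs[-i]) - log_xk for i in range(1, k + 1)) / k

def rb_hill(sample, k, beta_hat, rho_hat):
    """Reduced-bias Hill of Eq. (26), given external (beta, rho) estimates."""
    n = len(sample)
    return hill(sample, k) * (1.0 - beta_hat * (n / k) ** rho_hat / (1.0 - rho_hat))
```

With $\hat\beta = 0$ the correction vanishes and the RB version coincides with the plain Hill estimate.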
Computations of the above estimators have been performed using the package evt0 (Manjunath and Caeiro [27]) in R. More precisely, $GH(k)$ and $M(k)$ are obtained using the function other.EVI() with the options GH and MO, respectively. Estimation of the parameters $(\rho, \beta)$ for the bias correction terms can be obtained from the function mop(). RB-Hill and RB-MOP estimates are directly obtained by the function mop() by appropriately specifying a value of p and the option RB-MOP. In order to optimize the choice of k we used the formula [13]

$$\hat k = \min\left( n-1,\ \left\lfloor \left( \frac{(1 - \varphi(\hat\rho)\hat\rho)^2\, n^{-2\hat\rho}}{-2 \hat\rho\, \hat\beta^2 (1 - 2\varphi(\hat\rho))} \right)^{1/(1-2\hat\rho)} \right\rfloor + 1 \right),$$    (31)

where $\lfloor x \rfloor$ is the integer part of x and $\varphi(\rho) = 1 - \left(\rho + \sqrt{\rho^2 - 4\rho + 2}\right)/2$. For the comparisons, the following distributions are used:
(1)
Pareto distribution, as defined in (7). Random numbers from this distribution are simply generated in R using the function runif() and inversion of F.
(2)
Fréchet distribution with $F(x) = \exp(-x^{-\alpha})$, $x \ge 0$, denoted by Fréchet($\alpha$). This distribution is simulated in R using the function rfrechet() from the package evd (Stephenson [28]) with shape parameter set equal to $\alpha$.
(3)
Burr distribution with $F(x) = 1 - (1 + x^{\alpha})^{-1}$, indicated with Burr($\alpha$). This distribution is simulated in R using the function rburr() from the package actuar (Dutang et al. [29]) with the parameter shape1 set to 1 and shape2 set equal to $\alpha$.
(4)
Symmetric stable distribution with index of stability α , 0 < α < 2 , indicated with Stable ( α ) := Stable ( α , β = 0 , μ = 0 , σ = 1 ) ; where β , μ and σ indicate, respectively, asymmetry, location and scale. This distribution is simulated in R using the function rstable() from the package stabledist (Wuertz et al. [30]). For this distribution only the positive observed data are used in estimation.
Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8 contain the empirical RMSE (root mean squared error) and the relative RMSE, with respect to $\hat\gamma_{Opt}$, of the estimators; that is, for any of the evaluated estimators, say $\hat\gamma$,

$$\text{RMSE}(\hat\gamma) = \sqrt{\hat E\,(\hat\gamma - \gamma)^2}, \qquad \text{Rel-RMSE}(\hat\gamma) = \frac{\text{RMSE}(\hat\gamma)}{\text{RMSE}(\hat\gamma_{Opt})}.$$    (32)

Note that a Rel-RMSE greater than one implies a worse performance of the estimator with respect to $\hat\gamma_{Opt}$. $\hat E$ denotes the empirical expected value, that is, the mean over the Monte Carlo experiments. For each sample size $n = 50, 100, 200, 300, 500, 1000$, 1000 Monte Carlo replicates were generated. Computations have been carried out with R version 3.5.1 and each experiment, that is, a chosen distribution combined with a chosen n, has been initialized using set.seed(3). Numerical results representative of each distribution are reported in the tables. More tables with other choices of parameters can be found in the on-line Supplementary Materials accompanying this paper.
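The performance measures in (32) amount to a few lines of code (our own helpers, mirroring the quantities reported in the tables):

```python
import math

def rmse(estimates, gamma_true):
    """Empirical RMSE: square root of the mean squared error over replicates."""
    return math.sqrt(sum((g - gamma_true) ** 2 for g in estimates) / len(estimates))

def rel_rmse(estimates, reference, gamma_true):
    """Relative RMSE with respect to a reference estimator (e.g. gamma-hat_Opt);
    a value above one means the estimator performs worse than the reference."""
    return rmse(estimates, gamma_true) / rmse(reference, gamma_true)
```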
Summarizing the results, we note the generally good performance of the estimators based on the curve $\lambda$ defined in this paper, for which the gain in efficiency can be substantial. We note also the practical usefulness of $\hat\gamma_{Opt}$, since it is able to identify appropriate levels of truncation for different distributions, although actual knowledge of the optimal level of truncation would yield higher efficiency.
Turning to the single cases, one can note that $\hat\gamma_{Opt}$ outperforms all the other estimators for the Pareto distribution, where the relative efficiency (see Table 2) is always greater than 4. For the case of the Pareto distribution, $\hat\gamma_{All}$ would be the most efficient choice, as expected.
In the case of the Fréchet distribution, $\hat\gamma_{Opt}$ is more efficient than all the competitors tested for smaller sample sizes (see Table 4); as the sample size increases the gain in efficiency decreases and may be slightly lower in some cases.
The performance of γ ^ O p t in the case of the Burr distribution is comparable to that of the competitors, with relative RMSE (see Table 6) slightly smaller or greater than one depending on the case considered.
In the case of the Symmetric stable distribution, the performance of γ ^ O p t is slightly better than all alternative estimators in all cases (see Table 8). The MM estimator turns out to be quite efficient for the stable distribution with α closer to 2 (see the on-line Supplementary Materials).
We note that the MM and GH estimators, computed with the package evt0, have shown illogical results in some instances, with extremely high values of the RMSE, typically for some specific sample sizes; after several checks, we could not figure out the reason for such results.

5. Examples

Here we concentrate on six real data examples that have been used in the literature to discuss methods to detect a power-law in the tail of the underlying distribution. These data have all been thoroughly analysed, for example, in Clauset et al. [1]. The following data sets are analysed here:
1.
The frequency of occurrence of unique words in the novel Moby Dick by Herman Melville (Newman [31]).
2.
The severity of terrorist attacks worldwide from February 1968 to June 2006, measured as the number of deaths directly resulting (Clauset et al. [32]).
3.
The sizes in acres of wildfires occurring on U.S. federal land between 1986 and 1996 (Newman [31]).
4.
The intensities of earthquakes occurring in California between 1910 and 1992, measured as the maximum amplitude of motion during the quake (Newman [31]).
5.
The frequencies of occurrence of U.S. family names in the 1990 U.S. Census (Clauset et al. [1]).
6.
Peak gamma-ray intensity of solar flares between 1980 and 1989 (Newman [31]).
Figure 3 provides the estimated $\lambda$ curves for the six examples, considering both the whole data set and selected percentages of the upper order statistics. The range of $\lambda$ may vary across the graphs in order to show the path of the curves in better detail.
On each data set we apply Algorithm 2 in order to select the optimal k for computing $\hat\gamma_{Opt}$; with the given estimate, we apply Algorithm 1 in order to compute a 95% confidence interval for the estimate.
Next we apply a testing procedure to evaluate whether the graphs in Figure 3, for the k chosen by Algorithm 2, can be considered flat enough to support the hypothesis that the data come from a distribution within the class (1). A bootstrap test of $H_0: \beta_1 = 0$ in model (24) has been developed in Taufer et al. [33].
For comparison we apply also the testing procedure for the power-law hypothesis developed by Clauset et al. [1].
Table 9 reports analytical results on estimated values, 95% confidence intervals, the fraction of upper order statistics used and the p-values of the testing procedures.
Summarizing the results briefly, the conclusions about the presence of a Pareto-type tail in the distributions coincide fully with those of Clauset et al. [1]: there is clear evidence of a power-law distribution fitting the data for the Moby Dick and Terrorism data sets, while for the others there is no convincing evidence. Regarding the contrasting p-values for the Solar Flares data, we point out that Clauset et al. [1] suggest a power tail with an exponential cut-off at a certain point; given the characteristics of the graphs based on the $\lambda$ curve, this feature cannot be noticed in our analysis.
As far as the estimated values of $\gamma$ are concerned, the values obtained here are substantially lower than those obtained by Clauset et al. [1] (who use the Hill estimator). Given the good performance of $\hat\gamma_{Opt}$ in the simulations in comparison to the Hill estimator, the values in Table 9, at least for the Moby Dick and Terrorism data sets, can be considered reliable.
For the other data sets, since the null hypothesis of a power law is rejected (significant p-values), the estimated $\gamma$ should be discarded, and it becomes of interest to select an alternative model by using, for example, a likelihood ratio test as discussed in Clauset et al. [1], to which the interested reader is referred.

6. Conclusions

An estimation strategy for the tail index of a distribution in class (1) has been defined starting from a characterizing property of Zenga’s inequality curve λ . On the basis of the theoretical properties of the estimator γ ^ k two simple bootstrap procedures have been obtained: the first provides a general result for the asymptotic distribution of γ ^ k and the second gives a data-driven procedure to determine the optimal value of k. Simulations show the good performance of γ ^ k and the implementation algorithm.
The data-driven optimized estimator often outperforms optimized (with respect to bias) competing estimation strategies. The gain in efficiency is substantial in the case of Pareto distributions.
The graph of the λ curve associated with the estimator provides a valid support in the analysis of real data.

7. Proofs

Proof of Lemma 1.
It is trivially verified that if F satisfies (7) then λ ( p ) = 1 / α . Suppose now that λ ( p ) = k for all p ∈ ( 0 , 1 ) , where k is some constant. Then it must hold that 1 − L ( p ) = ( 1 − p ) ^ k or, equivalently, after some algebraic manipulation,
∫ _ 0 ^ p F ^ { − 1 } ( u ) d u = μ [ 1 − ( 1 − p ) ^ k ] .
Taking derivatives on both sides we have
d / d p ∫ _ 0 ^ p F ^ { − 1 } ( u ) d u = d / d p μ [ 1 − ( 1 − p ) ^ k ] ,
which gives
F ^ { − 1 } ( p ) = μ k ( 1 − p ) ^ { k − 1 } ,
from which, setting x _ p = F ^ { − 1 } ( p ) , so that p = F ( x _ p ) , it follows after some further elementary manipulations that
( x _ p / ( μ k ) ) ^ { 1 / ( k − 1 ) } = 1 − F ( x _ p ) .
Setting 1 / ( k − 1 ) = − α , the above F, once properly normalized, satisfies (7). □
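The derivative step in the proof can be checked symbolically; this quick SymPy verification (purely illustrative, not part of the proof) confirms that the derivative of μ [ 1 − ( 1 − p ) ^ k ] is μ k ( 1 − p ) ^ { k − 1 }:

```python
import sympy as sp

p, mu, k = sp.symbols('p mu k', positive=True)

# derivative of the right-hand side of the Lorenz identity
lhs = sp.diff(mu * (1 - (1 - p)**k), p)
# claimed closed form for F^{-1}(p)
rhs = mu * k * (1 - p)**(k - 1)
print(sp.simplify(lhs - rhs))  # 0
```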
Proof of Lemma 2.
Let G and g denote, respectively, the distribution function and the density of Y = X | X > s , and note that G ( y ) = P ( Y ≤ y ) = ( F ( y ) − F ( s ) ) / F ¯ ( s ) and g ( y ) = f ( y ) / F ¯ ( s ) . Setting G ( y ) = p and inverting, we have G ^ { − 1 } ( p ) = F ^ { − 1 } ( F ( s ) + p F ¯ ( s ) ) . Note that
E ( Y ) = E ( X | X > s ) = E ( X I ( X > s ) ) / F ¯ ( s ) ;
also, setting s = F ^ { − 1 } ( p ) ,
E ( X I ( X > s ) ) = ∫ _ s ^ ∞ x f ( x ) d x = ∫ _ p ^ 1 F ^ { − 1 } ( u ) d u .
Consider the Lorenz curve for Y, L ( p ) , and L ¯ ( p ) = 1 − L ( p ) . Then
L ¯ ( p ) = E ( Y I ( Y > G ^ { − 1 } ( p ) ) ) / E ( Y ) = E ( X I ( X > F ^ { − 1 } ( F ( s ) + p F ¯ ( s ) ) ) ) / E ( X | X > s ) .
Consider first the numerator of the above expression:
E ( Y I ( Y > G ^ { − 1 } ( p ) ) ) = ∫ _ p ^ 1 G ^ { − 1 } ( u ) d u = ∫ _ p ^ 1 F ^ { − 1 } ( F ( s ) + u F ¯ ( s ) ) d u = ( 1 / F ¯ ( s ) ) ∫ _ { F ( s ) + p F ¯ ( s ) } ^ 1 F ^ { − 1 } ( t ) d t ,
after setting t = F ( s ) + u F ¯ ( s ) . Next, to link to the function U ( w ) = F ^ { − 1 } ( 1 − 1 / w ) , set t = 1 − 1 / w ; the above term, as s → ∞ , by Karamata’s theorem (see De Haan and Ferreira [34], p. 363), becomes
( 1 / F ¯ ( s ) ) ∫ _ { [ ( 1 − p ) F ¯ ( s ) ] ^ { − 1 } } ^ ∞ U ( w ) w ^ { − 2 } d w = ( 1 − γ ) ^ { − 1 } ( 1 − p ) ^ { 1 − γ } F ¯ ( s ) ^ { − γ } H _ U ( [ ( 1 − p ) F ¯ ( s ) ] ^ { − 1 } ) ,
since, as s → ∞ , F ¯ ( s ) → 0 ; here H _ U is a slowly varying function. Next consider the denominator; similar computations lead to
E ( X | X > s ) = ∫ _ 0 ^ 1 G ^ { − 1 } ( u ) d u = ∫ _ 0 ^ 1 F ^ { − 1 } ( F ( s ) + u F ¯ ( s ) ) d u = ( 1 / F ¯ ( s ) ) ∫ _ { [ F ¯ ( s ) ] ^ { − 1 } } ^ ∞ U ( w ) w ^ { − 2 } d w ,
which, as s → ∞ , behaves like
( 1 − γ ) ^ { − 1 } F ¯ ( s ) ^ { − γ } H _ U ( [ F ¯ ( s ) ] ^ { − 1 } ) .
Finally, putting the results together, one has
L ¯ ( p ) → ( 1 − p ) ^ { 1 − γ } as s → ∞ ,
since H _ U ( [ ( 1 − p ) F ¯ ( s ) ] ^ { − 1 } ) / H _ U ( [ F ¯ ( s ) ] ^ { − 1 } ) → 1 as s → ∞ , by the properties of slowly varying functions. □
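For reference, the version of Karamata's theorem invoked above (see [34]) can be stated as follows, in a formulation consistent with the use made here (the exact statement in [34] may differ in generality): if U ∈ R V _ γ with γ < 1, then

```latex
\lim_{s\to\infty}
\frac{\int_{s}^{\infty} U(w)\, w^{-2}\, \mathrm{d}w}{s^{-1}\, U(s)}
\;=\; \frac{1}{1-\gamma},
```

so that each of the two integrals above behaves, up to the slowly varying factor H _ U , like a power of F ¯ ( s ) .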
Proof of Lemma 3.
Note that lim n → ∞ sup p ∈ ( 0 , 1 ) | L n ( p ) − L ( p ) | = o P ( 1 ) and that p lies in a compact interval. Since λ n ( p ) is a continuous transformation of L n ( p ) , it follows that, for fixed p ∈ ( 0 , 1 ) , | λ n ( p ) − λ ( p ) | = o P ( 1 ) .
To prove uniform consistency of λ n ( p ) we need to show that it is equicontinuous. To this end, note that λ n ( p ) depends on p stochastically only through L n ( p ) , which is uniformly continuous. Hence, for any ε , η > 0 there exist δ > 0 and n 0 , not depending on p 1 and p 2 , such that for all n > n 0 and | p 1 − p 2 | < δ ,
P ( | λ n ( p 1 ) − λ n ( p 2 ) | > ε ) < η .
 □
Proof of Lemma 4.
We have
lim n → ∞ | γ n − γ | = lim n → ∞ | ( 1 / k ) ∑ i = 1 k λ ^ ( p i ) − γ | ≤ lim n → ∞ | ( 1 / k ) ∑ i = 1 k ( λ n ( p i ) − λ ( p i ) ) | + | ( 1 / k ) ∑ i = 1 k λ ( p i ) − γ | ≤ lim n → ∞ sup p ∈ ( 0 , 1 ) | λ ^ ( p ) − λ ( p ) | + ( 1 / k ) ∑ i = 1 k | log [ H U ( [ ( 1 − p i ) F ¯ ( s ) ] ^ { − 1 } ) / H U ( [ F ¯ ( s ) ] ^ { − 1 } ) ] / log ( 1 − p i ) | = o P ( 1 ) ,
by Lemma 3 and condition (17) with k = n ^ δ , 0 < δ < ρ / ( 1 − ρ ) . □

Supplementary Materials

The following are available online at https://www.mdpi.com/2227-7390/8/10/1834/s1.

Author Contributions

Conceptualization, E.T.; Data curation, P.L.N.I., G.E. and M.M.D.; Investigation, E.T., F.S. and P.L.N.I.; Methodology, E.T. and F.S.; Project administration, G.E.; Supervision, E.T.; Writing, review and editing, E.T., P.L.N.I. and M.M.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors would like to thank two anonymous referees whose comments led to an improved version of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Clauset, A.; Shalizi, C.R.; Newman, M.E. Power-law distributions in empirical data. SIAM Rev. 2009, 51, 661–703.
  2. Jenkins, S.P. Pareto Distributions, Top Incomes, and Recent Trends in UK Income Inequality. Economica 2016. Available online: https://www.econstor.eu/bitstream/10419/145258/1/dp10124.pdf (accessed on 30 September 2020).
  3. Hlasny, V. Parametric Representation of the Top of Income Distributions: Options, Historical Evidence and Model Selection (No. 90); Department of Economics, Tulane University: New Orleans, LA, USA, 2020.
  4. Heyde, C.C.; Kou, S.G. On the controversy over tailweight of distributions. Oper. Res. Lett. 2004, 32, 399–408.
  5. Hill, B.M. A simple general approach to inference about the tail of a distribution. Ann. Stat. 1975, 3, 1163–1174.
  6. Gomes, M.I.; Guillou, A. Extreme value theory and statistics of univariate extremes: A review. Int. Stat. Rev. 2015, 83, 263–292.
  7. Brilhante, M.F.; Gomes, M.I.; Pestana, D. A simple generalisation of the Hill estimator. Comput. Stat. Data Anal. 2013, 57, 518–535.
  8. Beran, J.; Schell, D.; Stehlík, M. The harmonic moment tail index estimator: Asymptotic distribution and robustness. Ann. Inst. Stat. Math. 2014, 66, 193–220.
  9. Paulauskas, V.; Vaičiulis, M. A class of new tail index estimators. Ann. Inst. Stat. Math. 2017, 69, 461–487.
  10. Paulauskas, V.; Vaičiulis, M. On an improvement of Hill and some other estimators. Lith. Math. J. 2013, 53, 336–355.
  11. Caeiro, F.; Gomes, M.I.; Pestana, D. Direct reduction of bias of the classical Hill estimator. Revstat 2005, 3, 111–136.
  12. Gomes, M.I.; de Haan, L.; Henriques-Rodrigues, L. Tail index estimation for heavy tailed models: Accommodation of bias in weighted log-excesses. J. R. Stat. Soc. B 2008, 70, 31–52.
  13. Gomes, M.I.; Brilhante, M.F.; Pestana, D. New reduced-bias estimators of a positive extreme value index. Commun. Stat.-Simul. Comput. 2016, 45, 833–862.
  14. Zenga, M. Proposta per un indice di concentrazione basato sui rapporti fra quantili di popolazione e quantili di reddito. Giornale degli Economisti e Annali di Economia 1984, 5, 301–326.
  15. Kratz, M.F.; Resnick, S.I. The QQ-estimator and heavy tails. Commun. Stat. Stoch. Model. 1996, 12, 699–724.
  16. Grahovac, D.; Jia, M.; Leonenko, N.N.; Taufer, E. Asymptotic properties of the partition function and applications in tail index inference of heavy-tailed data. Stat. J. Theor. Appl. Stat. 2015, 49, 1221–1242.
  17. Jia, M.; Taufer, E.; Dickson, M. Semi-parametric regression estimation of the tail index. Electron. J. Stat. 2018, 12, 224–248.
  18. Zenga, M. Inequality curve and inequality index based on the ratios between lower and upper arithmetic means. Stat. Appl. 2007, 1, 3–27.
  19. Zenga, M. Concentration curves and Concentration indexes derived from them. In Income and Wealth Distribution, Inequality and Poverty; Dagum, C., Zenga, M., Eds.; Springer: Berlin/Heidelberg, Germany, 1990; pp. 94–110.
  20. Arcagni, A.; Porro, F. A comparison of income distributions models through inequality curves. Stat. Appl. 2016, XIV, 123–144.
  21. Goldie, C.M. Convergence theorems for empirical Lorenz curves and their inverses. Adv. Appl. Probab. 1977, 9, 765–791.
  22. Csorgo, S.; Mason, D.M. The asymptotic distribution of sums of extreme values from a regularly varying distribution. Ann. Probab. 1986, 14, 974–983.
  23. Mead, M.E. Generalized Inverse Gamma Distribution and its Application in Reliability. Commun. Stat. Theory Methods 2015, 44, 1426–1435.
  24. Beirlant, J.; Vynckier, P.; Teugels, J.L. Excess functions and estimation of the extreme-value index. Bernoulli 1996, 2, 293–318.
  25. Dekkers, A.L.M.; Einmahl, J.H.J.; de Haan, L. A moment estimator for the index of an extreme-value distribution. Ann. Stat. 1989, 17, 1833–1855.
  26. Gomes, M.I.; Figueiredo, F.; Neves, M.M. Adaptive estimation of heavy right tails: Resampling-based methods in action. Extremes 2012, 15, 463–489.
  27. Manjunath, B.G.; Caeiro, F. evt0: Mean of Order p, Peaks over Random Threshold Hill and High Quantile Estimates. R Package Version 1.1-3. Available online: https://cran.r-project.org/web/packages/evt0 (accessed on 31 August 2020).
  28. Stephenson, A.G. evd: Extreme Value Distributions. R News 2002, 2, 31–32. Available online: https://CRAN.R-project.org/doc/Rnews/ (accessed on 31 August 2020).
  29. Dutang, C.; Goulet, V.; Pigeon, M. actuar: An R Package for Actuarial Science. J. Stat. Softw. 2008, 25, 1–37. Available online: http://www.jstatsoft.org/v25/i07 (accessed on 31 August 2020).
  30. Wuertz, D.; Maechler, M.; Rmetrics Core Team Members. stabledist: Stable Distribution Functions. R Package Version 0.7-1. 2016. Available online: https://CRAN.R-project.org/package=stabledist (accessed on 31 August 2020).
  31. Newman, M.E. Power laws, Pareto distributions and Zipf’s law. Contemp. Phys. 2005, 46, 323–351.
  32. Clauset, A.; Young, M.; Gleditsch, K.S. On the frequency of severe terrorist events. J. Confl. Resolut. 2007, 51, 58–87.
  33. Taufer, E.; Santi, F.; Espa, G.; Dickson, M.M. Goodness-of-fit test for Pareto and Log-normal distributions based on inequality curves. 2020. Unpublished.
  34. De Haan, L.; Ferreira, A. Extreme Value Theory: An Introduction; Springer Science & Business Media: New York, NY, USA, 2007.
Figure 1. Plot of λ ^ i (y-axis) as a function of p i (x-axis), i = 1 , , n , for Pareto, Fréchet, Log-normal and Exponential distributions at various levels of truncation. Sample size n = 1000 .
Figure 2. Histograms of the empirical distribution of γ ^ k for selected sample sizes; k = n ^ { 0.5 } ; data samples are generated from a Fréchet(2) distribution. Yellow: values obtained by Monte Carlo simulation (2000 iterations); blue: values obtained by Algorithm 1 (2000 iterations). The value of γ used has been selected randomly from a pool of estimated values.
Figure 3. Plot of the estimated λ curves for the six data sets: all data and selected percentages of upper order statistics.
Table 1. RMSE of the estimators for the Pareto(4) distribution; 1000 Monte Carlo replications.

| n | γ̂ Opt | γ̂ All | γ̂ 0.7 | γ̂ 0.5 | γ̂ 0.3 | Hill | MP 0.5 | MP 1 | GH | MM |
|---|---|---|---|---|---|---|---|---|---|---|
| 50 | 0.051 | 0.040 | 0.047 | 0.054 | 0.066 | 0.294 | 0.258 | 0.228 | 2.660 | 0.893 |
| 100 | 0.038 | 0.031 | 0.036 | 0.042 | 0.052 | 0.245 | 0.198 | 0.171 | 0.925 | 4.882 |
| 300 | 0.026 | 0.018 | 0.021 | 0.025 | 0.032 | 0.284 | 0.253 | 0.227 | 5.657 | 0.734 |
| 500 | 0.023 | 0.014 | 0.017 | 0.020 | 0.025 | 0.250 | 0.226 | 0.205 | 2.852 | 0.631 |
| 1000 | 0.016 | 0.010 | 0.012 | 0.014 | 0.018 | 0.159 | 0.142 | 0.128 | 0.755 | 0.530 |
Table 2. Relative RMSE of the estimators for the Pareto(4) distribution; 1000 Monte Carlo replications.

| n | γ̂ Opt | γ̂ All | γ̂ 0.7 | γ̂ 0.5 | γ̂ 0.3 | Hill | MP 0.5 | MP 1 | GH | MM |
|---|---|---|---|---|---|---|---|---|---|---|
| 50 | 1.000 | 0.788 | 0.914 | 1.054 | 1.282 | 5.716 | 5.012 | 4.426 | 51.761 | 17.372 |
| 100 | 1.000 | 0.812 | 0.947 | 1.106 | 1.389 | 6.481 | 5.241 | 4.519 | 24.468 | 129.156 |
| 300 | 1.000 | 0.668 | 0.792 | 0.936 | 1.215 | 10.713 | 9.532 | 8.574 | 213.453 | 27.687 |
| 500 | 1.000 | 0.627 | 0.737 | 0.864 | 1.105 | 10.956 | 9.912 | 8.982 | 125.105 | 27.689 |
| 1000 | 1.000 | 0.624 | 0.745 | 0.879 | 1.140 | 10.121 | 9.057 | 8.178 | 48.096 | 33.771 |
Table 3. RMSE of the estimators for the Fréchet(1.5) distribution; 1000 Monte Carlo replications.

| n | γ̂ Opt | γ̂ All | γ̂ 0.7 | γ̂ 0.5 | γ̂ 0.3 | Hill | MP 0.5 | MP 1 | GH | MM |
|---|---|---|---|---|---|---|---|---|---|---|
| 50 | 0.114 | 0.098 | 0.095 | 0.116 | 0.153 | 0.186 | 0.185 | 0.174 | 0.183 | 16.691 |
| 100 | 0.091 | 0.099 | 0.081 | 0.092 | 0.119 | 0.141 | 0.142 | 0.136 | 0.138 | 518.904 |
| 300 | 0.079 | 0.098 | 0.067 | 0.068 | 0.084 | 0.095 | 0.097 | 0.100 | 0.094 | 15.324 |
| 500 | 0.073 | 0.098 | 0.060 | 0.057 | 0.068 | 0.075 | 0.077 | 0.082 | 0.075 | 18.848 |
| 1000 | 0.065 | 0.100 | 0.060 | 0.054 | 0.060 | 0.059 | 0.063 | 0.073 | 0.059 | 22.018 |
Table 4. Relative RMSE of the estimators for the Fréchet(1.5) distribution; 1000 Monte Carlo replications.

| n | γ̂ Opt | γ̂ All | γ̂ 0.7 | γ̂ 0.5 | γ̂ 0.3 | Hill | MP 0.5 | MP 1 | GH | MM |
|---|---|---|---|---|---|---|---|---|---|---|
| 50 | 1.000 | 0.855 | 0.832 | 1.011 | 1.335 | 1.625 | 1.620 | 1.522 | 1.603 | 145.897 |
| 100 | 1.000 | 1.086 | 0.889 | 1.012 | 1.312 | 1.547 | 1.566 | 1.495 | 1.514 | 5702.241 |
| 300 | 1.000 | 1.238 | 0.841 | 0.859 | 1.059 | 1.195 | 1.228 | 1.262 | 1.187 | 193.238 |
| 500 | 1.000 | 1.344 | 0.828 | 0.784 | 0.930 | 1.033 | 1.058 | 1.136 | 1.030 | 259.609 |
| 1000 | 1.000 | 1.540 | 0.931 | 0.832 | 0.927 | 0.915 | 0.971 | 1.131 | 0.918 | 339.784 |
Table 5. RMSE of the estimators for the Burr(2) distribution; 1000 Monte Carlo replications.

| n | γ̂ Opt | γ̂ All | γ̂ 0.7 | γ̂ 0.5 | γ̂ 0.3 | Hill | MP 0.5 | MP 1 | GH | MM |
|---|---|---|---|---|---|---|---|---|---|---|
| 50 | 0.111 | 0.228 | 0.130 | 0.106 | 0.112 | 0.125 | 0.124 | 0.119 | 0.105 | 7.446 |
| 100 | 0.100 | 0.229 | 0.126 | 0.097 | 0.096 | 0.114 | 0.113 | 0.110 | 0.104 | 6.218 |
| 300 | 0.073 | 0.228 | 0.120 | 0.084 | 0.071 | 0.084 | 0.084 | 0.087 | 0.080 | 1.778 |
| 500 | 0.066 | 0.227 | 0.118 | 0.080 | 0.063 | 0.071 | 0.071 | 0.076 | 0.068 | 1.266 |
| 1000 | 0.054 | 0.226 | 0.117 | 0.078 | 0.055 | 0.055 | 0.056 | 0.063 | 0.053 | 0.326 |
Table 6. Relative RMSE of the estimators for the Burr(2) distribution; 1000 Monte Carlo replications.

| n | γ̂ Opt | γ̂ All | γ̂ 0.7 | γ̂ 0.5 | γ̂ 0.3 | Hill | MP 0.5 | MP 1 | GH | MM |
|---|---|---|---|---|---|---|---|---|---|---|
| 50 | 1.000 | 2.053 | 1.171 | 0.951 | 1.011 | 1.128 | 1.112 | 1.069 | 0.946 | 66.962 |
| 100 | 1.000 | 2.294 | 1.258 | 0.971 | 0.966 | 1.141 | 1.134 | 1.099 | 1.043 | 62.240 |
| 300 | 1.000 | 3.099 | 1.635 | 1.149 | 0.971 | 1.144 | 1.150 | 1.180 | 1.089 | 24.226 |
| 500 | 1.000 | 3.440 | 1.794 | 1.222 | 0.953 | 1.076 | 1.083 | 1.152 | 1.036 | 19.215 |
| 1000 | 1.000 | 4.150 | 2.154 | 1.431 | 1.015 | 1.009 | 1.028 | 1.158 | 0.980 | 5.982 |
Table 7. RMSE of the estimators for the Stable(1.1) distribution; 1000 Monte Carlo replications.

| n | γ̂ Opt | γ̂ All | γ̂ 0.7 | γ̂ 0.5 | γ̂ 0.3 | Hill | MP 0.5 | MP 1 | GH | MM |
|---|---|---|---|---|---|---|---|---|---|---|
| 50 | 0.372 | 0.292 | 0.355 | 0.375 | 0.394 | 0.420 | 0.413 | 0.417 | 0.408 | 4.398 |
| 100 | 0.335 | 0.267 | 0.324 | 0.337 | 0.345 | 0.374 | 0.365 | 0.366 | 0.365 | 3.163 |
| 300 | 0.297 | 0.239 | 0.289 | 0.295 | 0.292 | 0.343 | 0.323 | 0.312 | 0.338 | 4.996 |
| 500 | 0.275 | 0.226 | 0.271 | 0.275 | 0.268 | 0.332 | 0.305 | 0.288 | 0.328 | 9.098 |
| 1000 | 0.251 | 0.211 | 0.252 | 0.252 | 0.241 | 0.325 | 0.292 | 0.263 | 0.321 | 3.132 |
Table 8. Relative RMSE of the estimators for the Stable(1.1) distribution; 1000 Monte Carlo replications.

| n | γ̂ Opt | γ̂ All | γ̂ 0.7 | γ̂ 0.5 | γ̂ 0.3 | Hill | MP 0.5 | MP 1 | GH | MM |
|---|---|---|---|---|---|---|---|---|---|---|
| 50 | 1.000 | 0.785 | 0.954 | 1.007 | 1.058 | 1.128 | 1.110 | 1.120 | 1.095 | 11.814 |
| 100 | 1.000 | 0.795 | 0.966 | 1.005 | 1.030 | 1.117 | 1.089 | 1.092 | 1.089 | 9.431 |
| 300 | 1.000 | 0.807 | 0.972 | 0.993 | 0.984 | 1.157 | 1.088 | 1.052 | 1.139 | 16.827 |
| 500 | 1.000 | 0.821 | 0.985 | 0.999 | 0.976 | 1.207 | 1.111 | 1.048 | 1.191 | 33.097 |
| 1000 | 1.000 | 0.842 | 1.003 | 1.006 | 0.963 | 1.294 | 1.167 | 1.048 | 1.282 | 12.492 |
Table 9. Sample size, estimated γ and 95% confidence intervals for the six data sets. Fraction of upper order statistics used ( 1 − p ) and p-values of the testing procedures defined in Taufer et al. [33] (Sig 1 ) and Clauset et al. [1] (Sig 2 ). An asterisk indicates a significant p-value.

| Dataset | n | γ̂ | 0.95-CI | 1 − p | Sig 1 | Sig 2 |
|---|---|---|---|---|---|---|
| Moby Dick | 18,855 | 0.90 | (0.80–0.95) | 0.4 | 0.224 | 0.49 |
| Terrorism | 9101 | 0.80 | (0.73–0.87) | 1.0 | 0.184 | 0.68 |
| Wildfires | 203,785 | 0.99 | (0.90–0.99) | 1.0 | 0.012 * | 0.05 * |
| Earthquakes | 19,302 | 0.22 | (0.21–0.23) | 0.5 | 0.000 * | 0.00 * |
| Surnames | 2753 | 0.74 | (0.67–0.84) | 1.0 | 0.000 * | 0.00 * |
| Solar flares | 12,773 | 0.96 | (0.81–0.97) | 0.2 | 0.038 * | 1.00 |