**Theorem 4.**

$$(\hat{\alpha}^* - \alpha) \xrightarrow{L} N\left(0, V(\hat{\alpha}^*)\right)$$

*where* $V(\hat{\alpha}^*)$ *is as derived above.*

Now, assuming the sample size *m* is large, say 100, the asymptotic variances of the modified truncated estimator $\hat{\alpha}^*$ for different values of *α* and different values of *ρ* (ranging from 0 to 1) are displayed in Table 2.

**Table 2.** Comparison of RMSEs of the modified truncated estimator (RMSE1) and the Hill estimator (RMSE2, RMSE3, and RMSE4) with relocation by the true mean, sample mean, and median, respectively.



\* The value of *k* is obtained by linear interpolation from Dufour and Kurz-Kim (2010).

#### **6. Comparison of the Proposed Estimator with the Hill Estimator and the Characteristic Function-Based Estimator**

Next, we want to compare the performance of this modified truncated estimator with that of a popular estimator known as the Hill estimator (Hill 1975; see also Dufour and Kurz-Kim 2010), which is a simple non-parametric estimator based on order statistics. Given a sample of *n* observations $X_1, X_2, \ldots, X_n$, the Hill estimator is defined as

$$\hat{\alpha}_H = \left[ k^{-1} \sum_{j=1}^{k} \ln X_{n+1-j:n} - \ln X_{n-k:n} \right]^{-1}$$

with standard error

$$SD(\hat{\alpha}_H) = \frac{k\,\hat{\alpha}_H}{(k-1)\sqrt{k-2}}$$

where *k* is the number of observations lying in the tail of the distribution of interest, which is to be chosen optimally depending on the sample size *n* and the tail thickness *α*, i.e., *k* = *k*(*n*, *α*), and $X_{j:n}$ denotes the *j*th order statistic of the sample of size *n*.
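As a concrete illustration, here is a minimal sketch (in Python, with NumPy) of the Hill estimator and its standard error applied to a Pareto(*α*) sample generated by the inverse transform; the function name `hill_estimator` and the simulation settings are our own choices for illustration, not taken from the cited references.

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator and its standard error based on the top k order statistics."""
    xs = np.sort(x)  # ascending order statistics X_{1:n} <= ... <= X_{n:n}
    # k^{-1} * sum_{j=1}^{k} ln X_{n+1-j:n}  -  ln X_{n-k:n}
    mean_log_excess = np.mean(np.log(xs[-k:])) - np.log(xs[-k - 1])
    alpha_hat = 1.0 / mean_log_excess
    sd = k * alpha_hat / ((k - 1) * np.sqrt(k - 2))
    return alpha_hat, sd

# Pareto(alpha = 1.5) sample via the inverse transform: X = U^(-1/alpha)
rng = np.random.default_rng(0)
alpha = 1.5
x = rng.uniform(size=2000) ** (-1.0 / alpha)
print(hill_estimator(x, k=200))  # alpha_hat should be close to 1.5
```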

The asymptotic normality of the Hill estimator is provided by Goldie and Smith (1987) as

$$\sqrt{k}\left(\hat{\alpha}_H^{-1} - \alpha^{-1}\right) \xrightarrow{L} N\left(0, \alpha^{-2}\right) \tag{3}$$

**Lemma 4.**

$$(\hat{\alpha}_H - \alpha) \xrightarrow{L} N\left(0, \frac{\alpha^2}{k}\right)$$

**Proof.** Let $g(y) = 1/y$, so that $g\left(\hat{\alpha}_H^{-1}\right) = \hat{\alpha}_H$ (note that $g'(\cdot)$ exists and is non-zero valued at $\alpha^{-1}$). Using Equation (3) and the delta method, with $g'\left(\alpha^{-1}\right) = -\alpha^2$, we get

$$\begin{aligned}
\left(\hat{\alpha}_H^{-1} - \alpha^{-1}\right) &\xrightarrow{L} N\left(0, \frac{\alpha^{-2}}{k}\right) \\
\Rightarrow \left(\hat{\alpha}_H - \alpha\right) &\xrightarrow{L} N\left(0, \frac{\left(g'\left(\alpha^{-1}\right)\right)^2 \alpha^{-2}}{k}\right) \\
\Leftrightarrow \hat{\alpha}_H &\xrightarrow{L} N\left(\alpha, \frac{\alpha^2}{k}\right)
\end{aligned}$$

We need this result in order to compare the performances of the estimators of *α*.
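As a quick Monte Carlo sanity check of Lemma 4 (a sketch under the same illustrative Pareto setup as above; the sample size, *k*, and replication count are arbitrary choices), the empirical standard deviation of $\hat{\alpha}_H$ should be close to the asymptotic value $\alpha/\sqrt{k}$:

```python
import numpy as np

rng = np.random.default_rng(42)
alpha, n, k, reps = 1.5, 2000, 200, 1000

est = np.empty(reps)
for r in range(reps):
    x = rng.uniform(size=n) ** (-1.0 / alpha)  # Pareto(alpha) sample
    xs = np.sort(x)
    est[r] = 1.0 / (np.mean(np.log(xs[-k:])) - np.log(xs[-k - 1]))

# Empirical SD of the Hill estimates vs. the asymptotic value alpha / sqrt(k)
print(est.std(), alpha / np.sqrt(k))
```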

In addition, we compare the performance of the modified truncated estimator $\hat{\alpha}_2$ with that of the characteristic function-based estimator of Anderson and Arnold (1993), which is obtained by minimizing the objective function (where *μ* = 0 and *σ* = 1) given by

$$\hat{I}_s(\alpha) = \sum_{i=1}^{n} w_i\left(\hat{\eta}(z_i) - \exp(-|z_i|^\alpha)\right)^2 \tag{4}$$

The performance of the modified truncated estimator $\hat{\alpha}_3$ is compared with that of the characteristic function-based estimator of Anderson and Arnold (1993), which is obtained by minimizing the objective function (where *μ* = 0 and *σ* is unknown) given by

$$\hat{I}_s(\alpha) = \sum_{i=1}^{n} w_i\left(\hat{\eta}(z_i) - \exp(-|\sigma z_i|^\alpha)\right)^2 \tag{5}$$

where

$$\hat{\eta}(t) = \frac{1}{n} \sum_{j=1}^{n} \cos(t x_j)$$

$x_1, x_2, \ldots, x_n$ are realizations from a symmetric stable(*α*) distribution, $z_i$ is the *i*th zero of the *m*th degree Hermite polynomial $H_m(z)$, and

$$w_i = \frac{2^{m-1}\, m!\, \sqrt{\pi}}{\left(m H_{m-1}(z_i)\right)^2}$$
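For concreteness, the following sketch carries out the minimization in Equation (4) for the known-scale case (*σ* = 1), assuming standard NumPy/SciPy routines: `hermgauss` returns the zeros $z_i$ of $H_m$ together with the corresponding Gauss-Hermite weights $w_i$, and a standard Cauchy sample (symmetric stable with *α* = 1) is used so that the true value is known.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.optimize import minimize_scalar

def cf_objective(alpha, x, z, w):
    # Empirical characteristic function; the real part suffices in the symmetric case
    eta_hat = np.array([np.mean(np.cos(t * x)) for t in z])
    return np.sum(w * (eta_hat - np.exp(-np.abs(z) ** alpha)) ** 2)

rng = np.random.default_rng(0)
x = rng.standard_cauchy(1000)  # symmetric stable with alpha = 1, sigma = 1
z, w = hermgauss(20)           # zeros of H_20 and the associated weights
res = minimize_scalar(cf_objective, bounds=(0.1, 2.0), args=(x, z, w),
                      method="bounded")
print(res.x)  # estimate of alpha; should be near 1
```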

It is to be noted that, for *α* < 1, no explicit form of the probability density function is known. However, when the estimator lies between 1 and 2, i.e., $1 < \hat{\alpha}^* < 2$, we may compare the fit with the stable family by modeling a mixture of normal and Cauchy distributions and then using the method proposed in Anderson and Arnold (1993) with the objective function given by

$$\sum_{i=1}^{n} w_i \left(\hat{\eta}(z_i) - \psi_{NC}(z_i)\right)^2$$

where $\hat{\eta}(t)$ is the same as defined above, with the realizations taken from the mixture distribution, and $\psi_{NC}$ denotes the corresponding theoretical characteristic function given by

$$\psi_{NC}(t) = p \exp(-\sigma_1^2 t^2 / 2) + (1 - p) \exp(-\sigma_2 |t|)$$

where *p* denotes the mixing proportion, and $\sigma_1$ and $\sigma_2$ are the scale parameters of the normal and Cauchy distributions, respectively (the location parameters are taken as zero, for the reason mentioned above). Finally, a measure of goodness of fit is proposed as:

Index of Objective function (I.O.) = Objective function + Number of parameters estimated

The distribution for which I.O. is minimum gives the best fit to the data.
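As an illustrative sketch of the I.O. computation for the normal-Cauchy mixture fit (the helper names and starting values are our own; three parameters $p$, $\sigma_1$, $\sigma_2$ are estimated, so the penalty term is 3):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.optimize import minimize

def io_normal_cauchy(x, m=20):
    """Index of Objective function (I.O.) for the normal-Cauchy mixture fit."""
    z, w = hermgauss(m)  # Hermite zeros z_i and Gauss-Hermite weights w_i
    eta_hat = np.array([np.mean(np.cos(t * x)) for t in z])

    def objective(theta):
        p, s1, s2 = theta
        psi_nc = p * np.exp(-s1**2 * z**2 / 2) + (1 - p) * np.exp(-s2 * np.abs(z))
        return np.sum(w * (eta_hat - psi_nc) ** 2)

    res = minimize(objective, x0=[0.5, 1.0, 1.0],
                   bounds=[(0.0, 1.0), (1e-6, None), (1e-6, None)])
    return res.fun + 3  # objective + number of parameters estimated (p, sigma1, sigma2)
```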

The modified truncated estimator based on the moment estimator is free of the location parameter, since it is defined in terms of $\bar{R}_j = \frac{1}{m} \sum_{i=1}^{m} \cos j(\theta_i - \bar{\theta})$, $j = 1, 2$, that is, in terms of the quantity $(\theta_i - \bar{\theta})$, which is centered with respect to the mean direction $\bar{\theta}$; it is, however, not free of the nuisance parameter, namely the concentration parameter *ρ*. The Hill estimator is scale invariant, since it is defined in terms of logarithms of ratios, but it is not location invariant. Therefore, centering needs to be done in order to achieve location invariance.
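A small toy illustration of this last point (our own example, not from the references): a location shift distorts the Hill estimate, and re-centering by a location estimate such as the sample median essentially restores it.

```python
import numpy as np

def hill(x, k):
    xs = np.sort(x)
    return 1.0 / (np.mean(np.log(xs[-k:])) - np.log(xs[-k - 1]))

rng = np.random.default_rng(1)
x = rng.standard_cauchy(5000)        # alpha = 1, location 0
y = x + 50.0                         # same data, shifted location

print(hill(x, 100))                  # near 1
print(hill(y, 100))                  # distorted by the location shift
print(hill(y - np.median(y), 100))   # centering restores the estimate
```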
