Article

Robust Estimation for the Single Index Model Using Pseudodistances

Aida Toma and Cristinca Fulga
1 Department of Applied Mathematics, Bucharest Academy of Economic Studies, 010374 Bucharest, Romania
2 “Gh. Mihoc - C. Iacob” Institute of Mathematical Statistics and Applied Mathematics, Romanian Academy, 010071 Bucharest, Romania
* Author to whom correspondence should be addressed.
Entropy 2018, 20(5), 374; https://doi.org/10.3390/e20050374
Submission received: 31 March 2018 / Revised: 11 May 2018 / Accepted: 14 May 2018 / Published: 17 May 2018

Abstract: For portfolios with a large number of assets, the single index model allows for expressing the large number of covariances between individual asset returns through a significantly smaller number of parameters. This avoids the constraint of having very large samples to estimate the mean and the covariance matrix of the asset returns, which would be unrealistic in practice given the dynamics of market conditions. The traditional way to estimate the regression parameters in the single index model is the maximum likelihood method. Although the maximum likelihood estimators have desirable theoretical properties when the model is exactly satisfied, they may give completely erroneous results when outliers are present in the data set. In this paper, we define minimum pseudodistance estimators for the parameters of the single index model and use them to construct new robust optimal portfolios. We prove theoretical properties of the estimators, such as consistency, asymptotic normality, equivariance, and robustness, and illustrate the benefits of the new portfolio optimization method for real financial data.

1. Introduction

The problem of portfolio optimization in the mean-variance approach depends on a large number of parameters that need to be estimated on the basis of relatively small samples. Due to the dynamics of market conditions, only a short period of market history can be used for estimation of the model’s parameters. In order to reduce the number of parameters that need to be estimated, the single index model proposed by Sharpe (see [1,2]) can be used. The traditional estimators for parameters of the single index model are based on the maximum likelihood method. These estimators have optimal properties for normally distributed variables, but they may give completely erroneous results in the presence of outlying observations. Since the presence of outliers in financial asset returns is a frequently occurring phenomenon, robust estimates for the parameters of the single index model are necessary in order to provide robust and optimal portfolios.
Our contribution to robust portfolio optimization through the single index model is based on using minimum pseudodistance estimators.
The interest in statistical methods based on information measures, and particularly on divergences, has grown substantially in recent years. It is a known fact that, for a wide variety of models, statistical methods based on divergence measures have some optimal properties in relation to efficiency, but especially in relation to robustness, representing viable alternatives to the classical methods. We refer to the monographs of Pardo [3] and Basu et al. [4] for an excellent presentation of such methods, their importance and applications.
The minimum pseudodistance methods for estimation fall into the same category as the minimum divergence methods. The minimum divergence estimators are defined by minimizing some appropriate divergence between the assumed theoretical model and the true model corresponding to the data. Depending on the choice of the divergence, minimum divergence estimators can afford considerable robustness with a minimal loss of efficiency. The classical minimum divergence methods require nonparametric density estimation, which implies some difficulties such as bandwidth selection. In order to avoid the nonparametric density estimation in minimum divergence estimation methods, some proposals have been made in [5,6,7] and robustness properties of such estimators have been studied in [8,9].
The pseudodistances that we use in the present paper were originally introduced in [6], where they are called "type-0" divergences, and corresponding minimum divergence estimators have been studied there. They are also obtained (using a cross entropy argument) and extensively studied in [10], where they are called γ-divergences, and they are introduced in [11] in the context of decomposable pseudodistances. By its very definition, a pseudodistance satisfies two properties, namely nonnegativity and the fact that the pseudodistance between two probability measures equals zero if and only if the two measures are equal. Divergences are moreover characterized by the information processing property, i.e., by complete invariance with respect to statistically sufficient transformations of the observation space (see [11], p. 617). In general, a pseudodistance may not satisfy this property. We adopted the term pseudodistance for this reason, although the other terms above are also encountered in the literature. The minimum pseudodistance estimators for general parametric models have been presented in [12] and consist of minimizing an empirical version of a pseudodistance between the assumed theoretical model and the true model underlying the data. These estimators have the advantage of not requiring any prior smoothing and reconcile robustness with high efficiency, two goals that usually require distinct techniques.
In this paper, we define minimum pseudodistance estimators for the parameters of the single index model and use them to construct new robust optimal portfolios. We study properties of the estimators, such as consistency, asymptotic normality, robustness and equivariance, and illustrate the benefits of the proposed portfolio optimization method through examples on real financial data.
We mention that we define minimum pseudodistance estimators, and prove the corresponding theoretical properties, for the parameters of the simple linear regression model (12) associated with the single index model. However, in a very similar way, we can define minimum pseudodistance estimators and obtain the same theoretical results for the more general linear regression model $Y_j = X_j^T \beta + e_j$, $j = 1, \dots, n$, where the errors $e_j$ are i.i.d. normal variables with mean zero and variance $\sigma^2$, $X_j = (X_{j1}, \dots, X_{jp})^T$ is the vector of independent variables corresponding to the j-th observation and $\beta = (\beta_1, \dots, \beta_p)^T$ represents the regression coefficients.
The rest of the paper is organized as follows. In Section 2, we present the problem of robust estimation for some portfolio optimization models. In Section 3, we present the proposed approach. We define minimum pseudodistance estimators for regression parameters corresponding to the single index model and obtain corresponding estimating equations. Some asymptotic properties and equivariance properties of these estimators are studied. The robustness issue for estimators is considered through the influence function analysis. Using minimum pseudodistance estimators, new optimal portfolios are defined. Section 4 presents numerical results illustrating the performance of the proposed methodology. Finally, the proofs of the theorems are provided in the Appendix A.

2. The Single Index Model

Portfolio selection represents the problem of allocating a given capital over a number of available assets in order to maximize the return of the investment while minimizing the risk. We consider a portfolio formed by a collection of N assets. The returns of the assets are given by the random vector $X := (X_1, \dots, X_N)^T$. Usually, it is supposed that X follows a multivariate normal distribution $\mathcal{N}_N(\mu, \Sigma)$, with $\mu$ being the vector containing the mean returns of the assets and $\Sigma = (\sigma_{ij})$ the covariance matrix of the asset returns. Let $w := (w_1, \dots, w_N)^T$ be the vector of weights associated with the portfolio, where $w_i$ is the proportion of capital invested in asset i. Then, the total return of the portfolio is defined by the random variable
$$w^T X = w_1 X_1 + \cdots + w_N X_N.$$
The mean and the variance of the portfolio return are given by
$$R(w) := w^T \mu,$$
$$S(w) := w^T \Sigma w.$$
A classical approach for portfolio selection is the mean-variance optimization introduced by Markowitz [13]. For a given investor’s risk aversion λ > 0 , the mean-variance optimization gives the optimal portfolio w , solution of the problem
$$\arg\max_{w} \left\{ R(w) - \frac{\lambda}{2} S(w) \right\},$$
with the constraint $w^T e_N = 1$, $e_N$ being the N-dimensional vector of ones. The solution of the optimization problem (4) is explicit, the optimal portfolio weights for a given value of $\lambda$ being
$$w = \frac{1}{\lambda} \Sigma^{-1} (\mu - \eta e_N),$$
where
$$\eta = \frac{e_N^T \Sigma^{-1} \mu - \lambda}{e_N^T \Sigma^{-1} e_N}.$$
This is the case when short selling is allowed. When short selling is not allowed, we have a supplementary constraint in the optimization problem, namely all the weights w i are positive.
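For concreteness, the following Python sketch evaluates the closed-form optimal weights given above; the three-asset mean vector, covariance matrix and risk aversion are hypothetical values chosen only to illustrate the mechanics.

```python
import numpy as np

def mean_variance_weights(mu, Sigma, lam):
    """Optimal weights w = (1/lam) * Sigma^{-1} (mu - eta * e_N) under w' e_N = 1."""
    N = len(mu)
    e = np.ones(N)
    Sigma_inv = np.linalg.inv(Sigma)
    eta = (e @ Sigma_inv @ mu - lam) / (e @ Sigma_inv @ e)
    return Sigma_inv @ (mu - eta * e) / lam

# hypothetical inputs for three assets
mu = np.array([0.05, 0.07, 0.06])
Sigma = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.12, 0.03],
                  [0.01, 0.03, 0.08]])
w = mean_variance_weights(mu, Sigma, lam=3.0)
print(w, w.sum())  # the weights sum to one by construction
```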
Another classical approach for portfolio selection is to minimize the portfolio risk defined by the portfolio variance, under given constraints. This means determining the optimal portfolio w as a solution of the optimization problem
$$\arg\min_{w} S(w),$$
subject to $R(w) = w^T \mu \geq \mu_0$, for a given value $\mu_0$ of the portfolio return.
However, the mean-variance analysis has been criticized for being sensitive to estimation errors in the mean and the covariance of the asset returns. For both optimization problems above, estimates of the input parameters $\mu$ and $\Sigma$ are necessary. The quality, and hence the usefulness, of the results of the portfolio optimization problem critically depends on the quality of the statistical estimates of these input parameters. In practice, the mean vector and the covariance matrix of the returns are estimated by the maximum likelihood estimators under the multivariate normal assumption. When the model is exactly satisfied, the maximum likelihood estimators have optimal properties, being the most efficient. On the other hand, in the presence of outlying observations, these estimators may give completely erroneous results and consequently the weights of the corresponding optimal portfolio may be completely misleading. It is a known fact that outliers frequently occur in asset returns, where an outlier is defined to be an unusually large value well separated from the bulk of the returns. Therefore, robust alternatives to the classical approaches need to be carefully analyzed.
For an overview on the robust methods for portfolio optimization, using robust estimators of the mean and covariance matrix in the Markowitz’s model, we refer to [14]. We also cite the methods proposed by Vaz-de Melo and Camara [15], Perret-Gentil and Victoria-Feser [16], Welsch and Zhou [17], DeMiguel and Nogales [18], and Toma and Leoni-Aubin [19].
On the other hand, in portfolio analysis, one is sometimes faced with two conflicting demands. Good quality statistical estimates require a large sample size. When estimating the covariance matrix, the sample size must be larger than the number of different elements of the matrix. For example, for a portfolio involving 100 securities, this would mean observations from 5050 trading days, which is about 20 years. From a practical point of view, considering such large samples is not adequate for the considered problem. Since the market conditions change rapidly, very old observations would lead to irrelevant estimates for the current or future market conditions. In addition, in some situations, the number of assets could even be much larger than the sample size of exploitable historical data. Therefore, estimating the covariance matrix of asset returns is challenging due to the high dimensionality and also to the heavy-tailedness of asset return data. It is a known fact that extreme events are typical in financial asset prices, leading to heavy-tailed asset returns. One way to treat these problems is to use the single index model.
The single index model (see [1]) allows us to express the large number of covariances between the returns of the individual assets through a significantly smaller number of parameters. This is possible under the hypothesis that the correlation between two assets is strictly given by their dependence on a common market index. The return of each asset i is expressed under the form
$$X_i = \alpha_i + \beta_i X_M + e_i,$$
where $X_M$ is the random variable representing the return of the market index, the $e_i$ are zero-mean random variables representing error terms and $\alpha_i$, $\beta_i$ are new parameters to be estimated. It is supposed that the $e_i$'s are independent of each other and also independent of $X_M$. Thus, $E(e_i) = 0$, $E(e_i e_j) = 0$ and $E(e_i X_M) = 0$ for all $i$ and all $j \neq i$.
The intercept $\alpha_i$ in Equation (8) represents the asset's expected return when the market index return is zero. The slope coefficient $\beta_i$ represents the asset's sensitivity to the index, namely the impact of a unit change in the return of the index. The error $e_i$ is the return variation that cannot be explained by the index.
The following notations are also used:
$$\sigma_i^2 := \mathrm{Var}(e_i), \quad \mu_M := E(X_M), \quad \sigma_M^2 := \mathrm{Var}(X_M).$$
Using Equation (8), the components of the parameters $\mu$ and $\Sigma$ from the models (4) and (7) are given by
$$\mu_i = \alpha_i + \beta_i \mu_M,$$
$$\sigma_{ii} = \beta_i^2 \sigma_M^2 + \sigma_i^2,$$
$$\sigma_{ij} = \beta_i \beta_j \sigma_M^2.$$
Both variances and covariances are determined by the assets' betas and sigmas and by the standard deviation of the market index. Thus, the $N(N+1)/2$ different elements of the covariance matrix $\Sigma$ can be expressed through the $2N+1$ parameters $\beta_i$, $\sigma_i$, $\sigma_M$. This is a significant reduction of the number of parameters that need to be estimated.
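As a sketch of how relations (9)–(11) are used in practice, the following Python function (ours, with hypothetical variable names) assembles the full mean vector and covariance matrix from the per-asset parameters and the index moments.

```python
import numpy as np

def single_index_moments(alpha, beta, sigma_eps, mu_M, sigma_M):
    """Mean vector and covariance matrix implied by the single index model:
    mu_i = alpha_i + beta_i*mu_M, sigma_ii = beta_i^2*sigma_M^2 + sigma_i^2,
    sigma_ij = beta_i*beta_j*sigma_M^2."""
    alpha, beta, sigma_eps = map(np.asarray, (alpha, beta, sigma_eps))
    mu = alpha + beta * mu_M
    Sigma = np.outer(beta, beta) * sigma_M**2           # systematic part
    Sigma[np.diag_indices_from(Sigma)] += sigma_eps**2  # idiosyncratic variances
    return mu, Sigma
```

Only the betas, the residual standard deviations and the index moments enter the construction, in line with the parameter reduction discussed above.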
The traditional estimators for parameters of the single index model are based on the maximum likelihood method. These estimators have optimal properties for normally distributed variables, but they may give completely erroneous results in the presence of outlying observations. Therefore, robust estimates for the parameters of the single index model are necessary in order to provide robust and optimal portfolios.

3. Robust Estimators for the Single Index Model and Robust Portfolios

3.1. Definitions of the Estimators

Consider the linear regression model
$$X = \alpha + \beta X_M + e.$$
Suppose we have i.i.d. two-dimensional random vectors $Z_j = (X_{Mj}, X_j)$, $j = 1, \dots, n$, such that $X_j = \alpha + \beta X_{Mj} + e_j$. The random variables $e_j$, $j = 1, \dots, n$, are i.i.d. $N(0, \sigma)$ and independent of the $X_{Mj}$, $j = 1, \dots, n$.
The classical estimators of the unknown parameters $\alpha$, $\beta$, $\sigma$ of the linear regression model are the maximum likelihood estimators (MLE). They perform well if the model hypotheses are satisfied exactly, but may otherwise perform poorly. It is well known that the MLE are not robust: a small fraction of outliers, even a single one, may have an outsized effect and induce significant errors in the estimates. Therefore, robust alternatives to the MLE should be considered in order to obtain robust estimates for the single index model, leading then to robust portfolio weights.
In order to robustly estimate the unknown parameters α , β , σ , suppressing the outsized effects of outliers, we use the approach based on pseudodistance minimization.
For two probability measures P, Q admitting densities p and q, respectively, with respect to the Lebesgue measure, we consider the following family of pseudodistances (also called γ-divergences in some articles) of order $\gamma > 0$:
$$R_\gamma(P, Q) := \frac{1}{\gamma+1} \ln \int p^\gamma \, dP + \frac{1}{\gamma(\gamma+1)} \ln \int q^\gamma \, dQ - \frac{1}{\gamma} \ln \int p^\gamma \, dQ,$$
satisfying the limit relation
$$R_\gamma(P, Q) \to R_0(P, Q) := \int \ln \frac{q}{p} \, dQ \quad \text{for } \gamma \downarrow 0.$$
Note that $R_0(P, Q)$ is the well-known modified Kullback–Leibler divergence. Minimum pseudodistance estimators for parametric models, using the family (13), have been studied in [6,10,11]. We also mention that the pseudodistances (13) have been used for defining optimal robust M-estimators with Hampel's infinitesimal approach in [20].
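The defining properties are easy to check numerically. The sketch below (an illustration of ours, not part of the estimation procedure) evaluates $R_\gamma(P,Q)$ of (13) by numerical integration for two univariate normal densities; it returns approximately zero when the densities coincide and a positive value otherwise.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def pseudodistance(p, q, gamma, lo=-20.0, hi=20.0):
    """R_gamma(P, Q) of (13) for densities p, q on the real line, gamma > 0."""
    int_p_gamma_dP = quad(lambda x: p(x)**gamma * p(x), lo, hi)[0]
    int_q_gamma_dQ = quad(lambda x: q(x)**gamma * q(x), lo, hi)[0]
    int_p_gamma_dQ = quad(lambda x: p(x)**gamma * q(x), lo, hi)[0]
    return (np.log(int_p_gamma_dP) / (gamma + 1)
            + np.log(int_q_gamma_dQ) / (gamma * (gamma + 1))
            - np.log(int_p_gamma_dQ) / gamma)

p = norm(0, 1).pdf
q = norm(0.5, 1.2).pdf
print(pseudodistance(p, p, 0.5))  # approximately 0 when the two densities coincide
print(pseudodistance(p, q, 0.5))  # strictly positive otherwise
```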
For the linear regression model, we consider the joint distribution of the entire data, the explanatory variable $X_M$ being random together with the response variable X, and write a pseudodistance between a theoretical model and the data. Let $P_\theta$, with $\theta := (\alpha, \beta, \sigma)$, be the probability measure associated with the theoretical model given by the random vector $(X_M, X)$, where $X = \alpha + \beta X_M + e$ with $e \sim N(0, \sigma)$, e independent of $X_M$, and let Q be the probability measure associated with the data. Denote by $p_\theta$ and q, respectively, the corresponding densities. For $\gamma > 0$, the pseudodistance between $P_\theta$ and Q is defined by
$$R_\gamma(P_\theta, Q) := \frac{1}{\gamma+1} \ln \int p_\theta^\gamma(x_M, x) \, dP_\theta(x_M, x) + \frac{1}{\gamma(\gamma+1)} \ln \int q^\gamma(x_M, x) \, dQ(x_M, x) - \frac{1}{\gamma} \ln \int p_\theta^\gamma(x_M, x) \, dQ(x_M, x).$$
Using the change of variables $(x_M, x) \mapsto (u, v) := (x_M, x - \alpha - \beta x_M)$ and taking into account that $f(u, v) := p_\theta(u, v + \alpha + \beta u)$ is the density of $(X_M, e)$, since $X_M$ and e are independent, we can write
$$\int p_\theta^\gamma(x_M, x) \, dP_\theta(x_M, x) = \int p_M^{\gamma+1}(u) \, du \cdot \int \phi_\sigma^{\gamma+1}(v) \, dv,$$
$$\int p_\theta^\gamma(x_M, x) \, dQ(x_M, x) = \int p_M^\gamma(x_M) \, \phi_\sigma^\gamma(x - \alpha - \beta x_M) \, dQ(x_M, x),$$
where $p_M$ is the density of $X_M$ and $\phi_\sigma$ is the density of the random variable $e \sim N(0, \sigma)$. Then,
$$R_\gamma(P_\theta, Q) = \frac{1}{\gamma+1} \ln \int p_M^{\gamma+1}(u) \, du + \frac{1}{\gamma+1} \ln \int \phi_\sigma^{\gamma+1}(v) \, dv + \frac{1}{\gamma(\gamma+1)} \ln \int q^\gamma(x_M, x) \, dQ(x_M, x) - \frac{1}{\gamma} \ln \int p_M^\gamma(x_M) \, \phi_\sigma^\gamma(x - \alpha - \beta x_M) \, dQ(x_M, x).$$
Notice that the first and the third terms in the pseudodistance $R_\gamma(P_\theta, Q)$ do not depend on $\theta$ and hence are not included in the minimization process. The parameter $\theta_0 := (\alpha_0, \beta_0, \sigma_0)$ of interest is then given by
$$(\alpha_0, \beta_0, \sigma_0) := \arg\min_{\alpha, \beta, \sigma} R_\gamma(P_\theta, Q) = \arg\min_{\alpha, \beta, \sigma} \left\{ \frac{1}{\gamma+1} \ln \int \phi_\sigma^{\gamma+1}(v) \, dv - \frac{1}{\gamma} \ln \int p_M^\gamma(x_M) \, \phi_\sigma^\gamma(x - \alpha - \beta x_M) \, dQ(x_M, x) \right\}.$$
Suppose now that an i.i.d. sample $Z_1, \dots, Z_n$ is available from the true model. For a given $\gamma > 0$, we define a minimum pseudodistance estimator of $\theta_0 = (\alpha_0, \beta_0, \sigma_0)$ by minimizing an empirical version of the objective function in Equation (17). This empirical version is obtained by replacing $p_M(x_M)$ with the empirical density function $\hat{p}_M(x_M) = \frac{1}{n} \sum_{i=1}^n \delta(x_M - X_{Mi})$, where $\delta(\cdot)$ is the Dirac delta function, and Q with the empirical measure $P_n$ corresponding to the sample. More precisely, we define $\hat{\theta} := (\hat{\alpha}, \hat{\beta}, \hat{\sigma})$ by
$$(\hat{\alpha}, \hat{\beta}, \hat{\sigma}) := \arg\min_{\alpha, \beta, \sigma} \left\{ \frac{1}{\gamma+1} \ln \int \phi_\sigma^{\gamma+1}(v) \, dv - \frac{1}{\gamma} \ln \int \hat{p}_M^\gamma(x_M) \, \phi_\sigma^\gamma(x - \alpha - \beta x_M) \, dP_n(x_M, x) \right\} = \arg\min_{\alpha, \beta, \sigma} \left\{ \frac{1}{\gamma+1} \ln \int \phi_\sigma^{\gamma+1}(v) \, dv - \frac{1}{\gamma} \ln \frac{1}{n^{\gamma+1}} \sum_{j=1}^n \phi_\sigma^\gamma(X_j - \alpha - \beta X_{Mj}) \right\},$$
or equivalently
$$(\hat{\alpha}, \hat{\beta}, \hat{\sigma}) = \arg\max_{\alpha, \beta, \sigma} \frac{\sum_{j=1}^n \phi_\sigma^\gamma(X_j - \alpha - \beta X_{Mj})}{\left[ \int \phi_\sigma^{\gamma+1}(v) \, dv \right]^{\gamma/(\gamma+1)}} = \arg\max_{\alpha, \beta, \sigma} \sum_{j=1}^n \sigma^{-\gamma/(\gamma+1)} \exp\left\{ -\frac{\gamma}{2} \left( \frac{X_j - \alpha - \beta X_{Mj}}{\sigma} \right)^2 \right\}.$$
Differentiating with respect to $\alpha$, $\beta$, $\sigma$, the estimators $\hat{\alpha}$, $\hat{\beta}$, $\hat{\sigma}$ are solutions of the system
$$\sum_{j=1}^n \exp\left\{ -\frac{\gamma}{2} \left( \frac{X_j - \alpha - \beta X_{Mj}}{\sigma} \right)^2 \right\} \frac{X_j - \alpha - \beta X_{Mj}}{\sigma} = 0,$$
$$\sum_{j=1}^n \exp\left\{ -\frac{\gamma}{2} \left( \frac{X_j - \alpha - \beta X_{Mj}}{\sigma} \right)^2 \right\} \frac{X_j - \alpha - \beta X_{Mj}}{\sigma} \, X_{Mj} = 0,$$
$$\sum_{j=1}^n \exp\left\{ -\frac{\gamma}{2} \left( \frac{X_j - \alpha - \beta X_{Mj}}{\sigma} \right)^2 \right\} \left[ \left( \frac{X_j - \alpha - \beta X_{Mj}}{\sigma} \right)^2 - \frac{1}{\gamma+1} \right] = 0.$$
Note that, for γ = 0 , the solution of this system is nothing but the maximum likelihood estimator of ( α , β , σ ) . Therefore, the estimating Equations (19)–(21) are generalizations of the maximum likelihood score equations. The tuning parameter γ associated with the pseudodistance controls the trade-off between robustness and efficiency of the minimum pseudodistance estimators.
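In practice, it is often simpler to maximize the objective in (18) directly than to solve the score equations (19)–(21). The following Python sketch does this for simulated data; it is a minimal illustration of ours (a derivative-free optimizer on a log-σ parametrization), not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def mp_objective(params, x_M, x, gamma):
    """Negative of the criterion in (18); minimizing it gives the MP estimates."""
    alpha, beta, log_sigma = params
    sigma = np.exp(log_sigma)          # parametrize sigma > 0 through its logarithm
    r = (x - alpha - beta * x_M) / sigma
    return -np.sum(sigma ** (-gamma / (gamma + 1)) * np.exp(-0.5 * gamma * r**2))

def mp_estimate(x_M, x, gamma, start=None):
    if start is None:                  # crude least-squares starting point
        beta0, alpha0 = np.polyfit(x_M, x, 1)
        sigma0 = np.std(x - alpha0 - beta0 * x_M)
        start = np.array([alpha0, beta0, np.log(sigma0)])
    res = minimize(mp_objective, start, args=(x_M, x, gamma), method="Nelder-Mead")
    alpha, beta, log_sigma = res.x
    return alpha, beta, np.exp(log_sigma)

# simulated data with a few gross outliers
rng = np.random.default_rng(0)
x_M = rng.normal(0.0, 1.0, 200)
x = 1.0 + 2.0 * x_M + rng.normal(0.0, 0.5, 200)
x[:5] += 10.0                          # contamination
print(mp_estimate(x_M, x, gamma=0.5))  # close to (1, 2, 0.5) despite the outliers
```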
We can also write that $\hat{\theta} = (\hat{\alpha}, \hat{\beta}, \hat{\sigma})$ is a solution of
$$\sum_{j=1}^n \Psi(Z_j, \hat{\theta}) = 0 \quad \text{or} \quad \int \Psi(z, \hat{\theta}) \, dP_n(z) = 0,$$
where
$$\Psi(z, \theta) = \left( \phi\!\left( \frac{x - \alpha - \beta x_M}{\sigma} \right), \; \phi\!\left( \frac{x - \alpha - \beta x_M}{\sigma} \right) x_M, \; \chi\!\left( \frac{x - \alpha - \beta x_M}{\sigma} \right) \right)^T,$$
with $z = (x_M, x)$, $\theta = (\alpha, \beta, \sigma)$, $\phi(t) = \exp(-\frac{\gamma}{2} t^2)\, t$ and $\chi(t) = \exp(-\frac{\gamma}{2} t^2) \left[ t^2 - \frac{1}{\gamma+1} \right]$.
When the measure Q corresponding to the data pertains to the theoretical model, hence $Q = P_{\theta_0}$, it holds that
$$\int \Psi(z, \theta_0) \, dP_{\theta_0}(z) = 0.$$
Thus, we can consider θ ^ = ( α ^ , β ^ , σ ^ ) as a Z-estimator of θ 0 = ( α 0 , β 0 , σ 0 ) , which allows for adapting in the present context asymptotic results from the general theory of Z-estimators (see [21]).
Remark 1.
In the case when the density p M is known, by replacing Q with the empirical measure P n in Equation (17), a new class of estimators of ( α 0 , β 0 , σ 0 ) can be obtained. These estimators can also be written under the form of Z-estimators, using the same reasoning as above. The results of Theorems 1–4 below could be adapted for these new estimators, and moreover all the influence functions of these estimators would be redescending bounded. However, in practice, the density of the index return is not known. Therefore, we will work with the class of minimum pseudodistance estimators as defined above.

3.2. Asymptotic Properties

In order to prove the consistency of the estimators, we use their definition (22) as Z-estimators.

3.2.1. Consistency

Theorem 1.
Assume that, for any $\varepsilon > 0$, the following separability condition on the solution holds:
$$\inf_{\theta \in M} \left\| \int \Psi(z, \theta) \, dP_{\theta_0}(z) \right\| > 0 = \left\| \int \Psi(z, \theta_0) \, dP_{\theta_0}(z) \right\|,$$
where $M := \{ \theta \;\text{s.t.}\; \|\theta - \theta_0\| \geq \varepsilon \}$. Then, $\hat{\theta} = (\hat{\alpha}, \hat{\beta}, \hat{\sigma})$ converges in probability to $\theta_0 = (\alpha_0, \beta_0, \sigma_0)$.

3.2.2. Asymptotic Normality

Assume that $Z_1, \dots, Z_n$ are i.i.d. two-dimensional random vectors having the common probability distribution $P_{\theta_0}$. For $\gamma > 0$ fixed, let $\hat{\theta} = (\hat{\alpha}, \hat{\beta}, \hat{\sigma})$ be a sequence of estimators of the unknown parameter $\theta_0 = (\alpha_0, \beta_0, \sigma_0)$, solutions of
$$\sum_{j=1}^n \Psi(Z_j, \hat{\theta}) = 0,$$
where
$$\Psi(z, \theta) = \left( \sigma^2 \phi\!\left( \frac{x - \alpha - \beta x_M}{\sigma} \right), \; \sigma^2 \phi\!\left( \frac{x - \alpha - \beta x_M}{\sigma} \right) x_M, \; \sigma^2 \chi\!\left( \frac{x - \alpha - \beta x_M}{\sigma} \right) \right)^T,$$
with $z = (x_M, x)$, $\theta = (\alpha, \beta, \sigma)$, $\phi(t) = \exp(-\frac{\gamma}{2} t^2)\, t$ and $\chi(t) = \exp(-\frac{\gamma}{2} t^2) \left[ t^2 - \frac{1}{\gamma+1} \right]$. Note that the estimators $\hat{\theta} = (\hat{\alpha}, \hat{\beta}, \hat{\sigma})$ defined by Equations (19)–(21), or equivalently by (22), are also solutions of the system (26). Using the function (27) to define the estimators allows us to obtain asymptotic normality by imposing only the consistency of the estimators, without the supplementary assumptions that are usually imposed in the case of Z-estimators.
Theorem 2.
Assume that $\hat{\theta} \to \theta_0$ in probability. Then,
$$\sqrt{n}\,(\hat{\theta} - \theta_0) \to \mathcal{N}_3\!\left( 0, \; B^{-1} A (B^{-1})^T \right)$$
in distribution, where $A = E\big(\Psi(Z, \theta_0) \Psi(Z, \theta_0)^T\big)$ and $B = E\big(\dot{\Psi}(Z, \theta_0)\big)$, with $\Psi$ defined by (27) and $\dot{\Psi}$ being the matrix with elements $\dot{\Psi}_{ik} = \frac{\partial \Psi_i}{\partial \theta_k}$.
After some calculations, we obtain the asymptotic covariance matrix of $\hat{\theta}$ in the form
$$\frac{\sigma_0^2 (\gamma+1)^3}{(2\gamma+1)^{3/2}} \begin{pmatrix} \dfrac{\mu_M^2 + \sigma_M^2}{\sigma_M^2} & -\dfrac{\mu_M}{\sigma_M^2} & 0 \\[1ex] -\dfrac{\mu_M}{\sigma_M^2} & \dfrac{1}{\sigma_M^2} & 0 \\[1ex] 0 & 0 & \dfrac{3\gamma^2 + 4\gamma + 2}{4(2\gamma+1)} \end{pmatrix}.$$
It follows that β ^ and σ ^ are asymptotically independent; in addition, α ^ and σ ^ are asymptotically independent.
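For reference, a small sketch (with hypothetical input values) that assembles the asymptotic covariance matrix displayed above; dividing by n gives approximate variances usable for Wald-type confidence intervals.

```python
import numpy as np

def asymptotic_covariance(sigma0, mu_M, sigma_M, gamma):
    """Asymptotic covariance of (alpha_hat, beta_hat, sigma_hat), mirroring the display above."""
    factor = sigma0**2 * (gamma + 1) ** 3 / (2 * gamma + 1) ** 1.5
    V = np.array([
        [(mu_M**2 + sigma_M**2) / sigma_M**2, -mu_M / sigma_M**2, 0.0],
        [-mu_M / sigma_M**2, 1.0 / sigma_M**2, 0.0],
        [0.0, 0.0, (3 * gamma**2 + 4 * gamma + 2) / (4 * (2 * gamma + 1))],
    ])
    return factor * V

print(asymptotic_covariance(sigma0=0.5, mu_M=0.01, sigma_M=0.02, gamma=0.5))
```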

3.3. Influence Functions

In order to describe stability properties of the estimators, we use the following well-known concepts from the theory of robust statistics. A map T, defined on a set of probability measures and taking values in the parameter space, is a statistical functional corresponding to an estimator $\hat{\theta}$ of the parameter $\theta$ if $\hat{\theta} = T(P_n)$, $P_n$ being the empirical measure pertaining to the sample. The influence function of T at $P_\theta$ is defined by
$$\mathrm{IF}(z; T, P_\theta) := \left. \frac{\partial T(\widetilde{P}_{\varepsilon z})}{\partial \varepsilon} \right|_{\varepsilon = 0},$$
where $\widetilde{P}_{\varepsilon z} := (1 - \varepsilon) P_\theta + \varepsilon \delta_z$, $\delta_z$ being the Dirac measure putting all its mass at z. As a consequence, the influence function describes the linearized asymptotic bias of a statistic under a single point contamination of the model $P_\theta$. An unbounded influence function implies an unbounded asymptotic bias of a statistic under single point contamination of the model. Therefore, a natural robustness requirement on a statistical functional is the boundedness of its influence function.
For γ > 0 fixed and a given probability measure P, the statistical functionals α ( P ) , β ( P ) and σ ( P ) , corresponding to the minimum pseudodistance estimators α ^ , β ^ and σ ^ , are defined by the solution of the system
$$\int \Psi(z, T(P)) \, dP(z) = 0,$$
with Ψ defined by (23) and T ( P ) : = ( α ( P ) , β ( P ) , σ ( P ) ) , whenever this solution exists.
When P = P θ corresponds to the considered theoretical model, the solution of system (29) is T ( P θ ) = θ = ( α , β , σ ) .
Theorem 3.
The influence functions corresponding to the estimators α ^ , β ^ and σ ^ are respectively given by
$$\mathrm{IF}(x_{M0}, x_0; \alpha, P_\theta) = \sigma (\gamma+1)^{3/2} \, \phi\!\left( \frac{x_0 - \alpha - \beta x_{M0}}{\sigma} \right) \left[ 1 - \frac{(x_{M0} - E(X_M))\, E(X_M)}{\mathrm{Var}(X_M)} \right],$$
$$\mathrm{IF}(x_{M0}, x_0; \beta, P_\theta) = \sigma (\gamma+1)^{3/2} \, \phi\!\left( \frac{x_0 - \alpha - \beta x_{M0}}{\sigma} \right) \frac{x_{M0} - E(X_M)}{\mathrm{Var}(X_M)},$$
$$\mathrm{IF}(x_{M0}, x_0; \sigma, P_\theta) = \frac{\sigma (\gamma+1)^{5/2}}{2} \, \chi\!\left( \frac{x_0 - \alpha - \beta x_{M0}}{\sigma} \right).$$
Since χ is redescending, $\hat{\sigma}$ has a bounded influence function and hence is a redescending B-robust estimator. On the other hand, $\mathrm{IF}(x_{M0}, x_0; \alpha, P_\theta)$ and $\mathrm{IF}(x_{M0}, x_0; \beta, P_\theta)$ tend to infinity only when $x_{M0}$ tends to infinity while the standardized residual $|x_0 - \alpha - \beta x_{M0}|/\sigma$ remains bounded by some k. Hence, these influence functions are bounded with respect to vertical outliers (outlying values of the response) and with respect to leverage points with large residuals: large outliers with respect to $x_M$, or with respect to x, have a reduced influence on the estimates. However, the influence functions are clearly unbounded for $\gamma = 0$, which corresponds to the non-robust maximum likelihood estimators.
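The behavior described above can be checked numerically. The sketch below (our notation and hypothetical parameter values) evaluates the influence function of $\hat{\beta}$ from Theorem 3: for a fixed leverage value $x_{M0}$, the influence shrinks toward zero as the residual grows, reflecting the redescending factor φ.

```python
import numpy as np

def if_beta(x_M0, x_0, alpha, beta, sigma, gamma, E_XM, Var_XM):
    """Influence function of beta_hat at the contamination point (x_M0, x_0)."""
    t = (x_0 - alpha - beta * x_M0) / sigma
    phi = np.exp(-0.5 * gamma * t**2) * t        # phi(t) = exp(-gamma t^2 / 2) t
    return sigma * (gamma + 1) ** 1.5 * phi * (x_M0 - E_XM) / Var_XM

# residual outliers are downweighted: the influence vanishes as x_0 grows
for x0 in (1.0, 5.0, 50.0):
    print(x0, if_beta(x_M0=2.0, x_0=x0, alpha=0.0, beta=1.0, sigma=1.0,
                      gamma=0.5, E_XM=0.0, Var_XM=1.0))
```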

3.4. Equivariance of the Regression Coefficients’ Estimators

If an estimator is equivariant, it transforms "properly" under certain transformations of the data. Rousseeuw and Leroy [22] (p. 116) discuss three important equivariance properties for a regression estimator: regression equivariance, scale equivariance and affine equivariance. These are desirable properties since they allow one to know how the estimates change under different types of transformations of the data. Regression equivariance means that any additional linear dependence is reflected in the regression vector accordingly. Regression equivariance is routinely used when studying regression estimators; it allows one to assume, without loss of generality, any value for the parameter $(\alpha, \beta)$ when proving asymptotic properties or describing Monte Carlo studies. An estimator being scale equivariant means that the fit it produces is independent of the choice of measurement unit for the response variable. Affine equivariance is useful because it means that changing to a different coordinate system for the explanatory variable will not affect the estimate. It is known that the maximum likelihood estimator of the regression coefficients satisfies all three properties. We show that the minimum pseudodistance estimators of the regression coefficients satisfy all three equivariance properties, for all $\gamma > 0$.
Theorem 4.
For all γ > 0 , the minimum pseudodistance estimators ( α ^ , β ^ ) T of the regression coefficients ( α , β ) T are regression equivariant, scale equivariant and affine equivariant.
On the other hand, the objective function in the definition of the estimators depends on data only through the summation
$$\sum_{j=1}^n \sigma^{-\gamma/(\gamma+1)} \exp\left\{ -\frac{\gamma}{2} \left( \frac{X_j - \alpha - \beta X_{Mj}}{\sigma} \right)^2 \right\},$$
which is permutation invariant. Thus, the corresponding estimators of the regression coefficients and of the error standard deviation are permutation invariant, therefore the ordering of data does not affect the estimators.
The minimum pseudodistance estimators are also equivariant with respect to reparametrizations. If θ = ( α , β , σ ) and the model is reparametrized to Υ = Υ ( θ ) with a one-to-one transformation, then the minimum pseudodistance estimator of Υ is simply Υ ^ = Υ ( θ ^ ) , in terms of the minimum pseudodistance estimator θ ^ of θ , for the same γ .

3.5. Robust Portfolios Using Minimum Pseudodistance Estimators

The robust estimation of the parameters $\alpha_i$, $\beta_i$, $\sigma_i$ from the single index model given by (8), using minimum pseudodistance estimators, together with the robust estimation of $\mu_M$ and $\sigma_M$, leads to robust estimates of $\mu$ and $\Sigma$ on the basis of relations (9)–(11). Since we do not model the explanatory variable $X_M$ in a specific way, we estimate $\mu_M$ and the standard deviation $\sigma_M$ using the median and the median absolute deviation, respectively, as robust estimators. Then, the portfolio weights, obtained as solutions of the optimization problems (4) or (7) with robustly estimated input parameters, will also be robust. This methodology leads to new optimal robust portfolios. In the next section, on the basis of real financial data, we illustrate this new methodology and compare it with the traditional method based on maximum likelihood estimators.

4. Applications

4.1. Comparisons of the Minimum Pseudodistance Estimators with Other Robust Estimators for the Linear Regression Model

In order to illustrate the performance of the minimum pseudodistance estimators for the simple linear regression model, we compare them with the least median of squares (LMS) estimator (see [22,23]), with S-estimators (SE) (see [24]) and with the minimum density power divergence (MDPD) estimators (see [25]), which are known to behave well from the robustness point of view.
We considered a data set that comes from astronomy, namely the data from the Hertzsprung–Russell diagram of the star cluster CYG OB1, containing 47 stars in the direction of Cygnus. For these data, the independent variable is the logarithm of the effective temperature at the surface of the star and the dependent variable is the logarithm of its light intensity. The data are given in Rousseeuw and Leroy [22] (p. 27), who underlined that there are two groups of points: the majority, following a steep band, and four stars clearly forming a separate group from the rest of the data. These four stars are known as giants in astronomy. Thus, these outliers are not recording errors, but represent leverage points coming from a different group.
The estimates of the regression coefficients and of the error standard deviation obtained with minimum pseudodistance estimators for several values of γ are given in Table 1, and some of the fitted models are plotted in Figure 1. For comparison, Table 1 also gives estimates obtained with S-estimators based on the Tukey biweight function, taken from [24], as well as estimates obtained with minimum density power divergence methods for several values of the tuning parameter and estimates obtained with the least median of squares method, all taken from [25]. The MLE estimates, given on the first line of Table 1, are significantly affected by the four leverage points. On the other hand, like the robust least median of squares estimator, the robust S-estimators and some minimum density power divergence estimators, the minimum pseudodistance estimators with $\gamma \geq 0.32$ successfully ignore the outliers. In addition, the minimum pseudodistance estimators with $\gamma \geq 0.5$ give robust fits that are closer to the fits generated by the least median of squares estimates or by the S-estimates than to the fits generated by the minimum density power divergence estimates.

4.2. Robust Portfolios Using Minimum Pseudodistance Estimators

In order to illustrate the performance of the proposed robust portfolio optimization method, we considered real data sets for the Russell 2000 index and for 50 stocks from its components. The stocks are listed in Appendix B. We selected daily return data for the Russell 2000 index and for all these stocks from 2 January 2013 to 30 June 2016. The data were retrieved from Yahoo Finance.
The data were divided by quarter, 14 quarters in total for the index and for each stock. For each quarter, on the basis of the data corresponding to the index, we estimated $\mu_M$ and the standard deviation $\sigma_M$ using as robust estimators the median (MED) and, respectively, the median absolute deviation (MAD), defined by
$$\mathrm{MAD} := \frac{1}{0.6745} \cdot \mathrm{MED}\big( |X_i - \mathrm{MED}(X_i)| \big).$$
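A short Python helper (ours) computing these robust location and scale estimates for a vector of index returns:

```python
import numpy as np

def med_mad(returns):
    """Median and median absolute deviation (scaled by 1/0.6745) of the returns."""
    r = np.asarray(returns)
    med = np.median(r)
    return med, np.median(np.abs(r - med)) / 0.6745
```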
We also estimated μ M and σ M classically, using sample mean and sample variance. Then, for each quarter and each of the 50 stocks, we estimated α , β and σ from the regression model using robust minimum pseudodistance estimators, respectively the classical MLE estimators. Then, on the basis of relations (9), (10) and (11), we estimated μ and Σ first using the robust estimates and then the classical estimates, all being previously computed.
Once the input parameters for the portfolio optimization procedure were estimated, for each quarter, we determined efficient frontiers, for both robust estimates and classical estimates. In both cases, the efficient frontier is determined as follows. Firstly, the range of returns is determined as the interval comprised between the return of the portfolio of global minimum risk (variance) and the maximum value of the return of a feasible portfolio, where the feasible region is
$$X = \left\{ w \in \mathbb{R}^N \; : \; w^T e_N = 1, \; w_k \geq 0, \; k \in \{1, \dots, 50\} \right\}$$
and $N = 50$. We trace each efficient frontier in 100 points; therefore, the range of returns is divided, in each case, into ninety-nine sub-intervals with
$$\mu_1 < \mu_2 < \cdots < \mu_{100},$$
where μ 1 is the return of the portfolio of global minimum variance and μ 100 is the maximum return for the feasible region X. We determined μ 1 and μ 100 using robust estimates of μ and Σ (for the robust frontier) and then using classical estimates (for the classical frontier). In each case, 100 optimization problems are solved:
$$\arg\min_{w \in \mathbb{R}^N} S(w) \quad \text{subject to} \quad w_k \geq 0, \; k \in \{1, \dots, 50\}, \quad w^T e_N = 1, \quad R(w) \geq \mu_i,$$
where $i \in \{1, \dots, 100\}$.
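Each of these problems is a quadratic program. A minimal sketch (ours) using scipy's general-purpose SLSQP solver for one target return is shown below; sweeping the target over the grid $\mu_1 < \cdots < \mu_{100}$ and recording the resulting risk–return pairs traces one frontier.

```python
import numpy as np
from scipy.optimize import minimize

def min_variance_weights(mu, Sigma, target_return):
    """Solve min_w w'Sigma w  s.t.  w'e_N = 1, w >= 0, mu'w >= target_return."""
    N = len(mu)
    constraints = [
        {"type": "eq",   "fun": lambda w: np.sum(w) - 1.0},
        {"type": "ineq", "fun": lambda w: mu @ w - target_return},
    ]
    bounds = [(0.0, None)] * N                 # no short selling
    w0 = np.full(N, 1.0 / N)
    res = minimize(lambda w: w @ Sigma @ w, w0, method="SLSQP",
                   bounds=bounds, constraints=constraints)
    return res.x
```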
In Figure 2, for eight quarters (the first four quarters and the last four quarters), we present efficient frontiers corresponding to the optimal minimum variance portfolios based on the robust minimum pseudodistance estimates with $\gamma = 0.5$ and, respectively, based on the classical estimates. On the horizontal axis we consider the portfolio risk (given by the portfolio standard deviation) and on the vertical axis we represent the portfolio return. We notice that, in comparison with the classical method based on MLE, the proposed robust method provides optimal portfolios that have higher returns for the same level of risk (standard deviation). Indeed, for each quarter, the robust frontier is situated above the classical one, the standard deviations of the robust portfolios being smaller than those of the classical portfolios. We obtained similar results for the other quarters and for other choices of the tuning parameter γ corresponding to the minimum pseudodistance estimators.
We also illustrate the empirical performance of the proposed optimal portfolios through an out-of-sample analysis, using the Sharpe ratio as the out-of-sample measure. For this analysis, we apply a "rolling-horizon" procedure as presented in [18]. First, we choose a window over which to perform the estimation. We denote the length of the estimation window by $\tau < T$, where T is the size of the entire data set. Then, using the data in the first estimation window, we compute the weights for the considered portfolios. We repeat this procedure for the next window, by including the data for the next day and dropping the data for the earliest day. We continue doing this until the end of the data set is reached. At the end of this process, we have generated $T - \tau$ portfolio weight vectors for each strategy, which are the vectors $w_t^k$ for $t \in \{\tau, \dots, T-1\}$, k denoting the strategy. For a strategy k, $w_t^k$ has the components $w_{j,t}^k$, where $w_{j,t}^k$ denotes the portfolio weight in asset j chosen at time t.
The out-of-sample return at time $t+1$, corresponding to the strategy k, is defined as $(w_t^k)^T X_{t+1}$, with $X_{t+1} := (X_{1,t+1}, \dots, X_{N,t+1})^T$ representing the data at time $t+1$. For each strategy k, using these out-of-sample returns, the out-of-sample mean and the out-of-sample variance are defined by
$$\hat{\mu}_k = \frac{1}{T - \tau} \sum_{t=\tau}^{T-1} (w_t^k)^T X_{t+1} \quad \text{and} \quad (\hat{\sigma}_k)^2 = \frac{1}{T - \tau - 1} \sum_{t=\tau}^{T-1} \left( (w_t^k)^T X_{t+1} - \hat{\mu}_k \right)^2$$
and the out-of-sample Sharpe ratio is defined by
$$\widehat{SR}_k = \frac{\hat{\mu}_k}{\hat{\sigma}_k}.$$
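The rolling-horizon computation of (35) and (36) can be sketched as follows (our illustration, assuming returns is a (T, N) array of daily asset returns and weight_rule is any function mapping an estimation window to a weight vector):

```python
import numpy as np

def out_of_sample_sharpe(returns, tau, weight_rule):
    """Out-of-sample Sharpe ratio over the T - tau rolling windows."""
    T = returns.shape[0]
    oos = []
    for t in range(tau, T):
        w = weight_rule(returns[t - tau:t])   # weights chosen from the estimation window
        oos.append(w @ returns[t])            # realized portfolio return on the next day
    oos = np.asarray(oos)
    return oos.mean() / oos.std(ddof=1)       # hat{mu}_k / hat{sigma}_k
```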
In this example, we considered the data set corresponding to quarters 13 and 14. The size of the entire data set was $T = 126$ and the length of the estimation window was $\tau = 63$ points. For the data from the first window, classical and robust efficient frontiers were traced, following all the steps explained in the first part of this subsection. More precisely, we considered the classical efficient frontier corresponding to the optimal minimum variance portfolios based on MLE and three robust frontiers, corresponding to the optimal minimum variance portfolios using robust minimum pseudodistance estimates with $\gamma = 1$, $\gamma = 1.2$ and $\gamma = 1.5$, respectively. Then, on each frontier, we chose the optimal portfolio associated with the maximal value of the ratio between the portfolio return and the portfolio standard deviation. These four optimal portfolios represent the strategies that we compared in the out-of-sample analysis. For each of these portfolios, we computed the out-of-sample return for the next day. Then, we repeated all these procedures for the next window, and so on until the end of the data set was reached. In the spirit of [18], Section 5, using (35) and (36), we computed out-of-sample means, out-of-sample variances and out-of-sample Sharpe ratios for each strategy. The out-of-sample means and out-of-sample variances were annualized, and we also considered a benchmark rate of 1.5%. In this way, we obtained the following values for the out-of-sample Sharpe ratio: $\widehat{SR} = 0.22$ for the optimal portfolio based on MLE, $\widehat{SR} = 0.74$ for the optimal portfolio based on minimum pseudodistance estimates with $\gamma = 1$, $\widehat{SR} = 0.71$ for the optimal portfolio based on minimum pseudodistance estimates with $\gamma = 1.2$ and $\widehat{SR} = 0.29$ for the optimal portfolio based on minimum pseudodistance estimates with $\gamma = 1.5$. In Figure 3, we illustrate the efficient frontiers for windows 7 and 8, as well as the optimal portfolios chosen on each frontier.
This example shows that the optimal minimum variance portfolios based on robust minimum pseudodistance estimates in the single index model may attain higher Sharpe ratios than the traditional optimal minimum variance portfolios given by the single index model using MLE.
The obtained numerical results show that, for the single index model, the presented robust technique for portfolio optimization yields better results than the classical method based on MLE, in the sense that it leads to larger returns for the same value of risk in the case when outliers or atypical observations are present in the data set. The considered data sets contain such outliers. This is often the case for the considered problem, since outliers frequently occur in asset returns data. However, when there are no outliers in the data set, the classical method based on MLE is more efficient than the robust ones and therefore may lead to better results.

5. Conclusions

When outliers or atypical observations are present in the data set, the new portfolio optimization method based on robust minimum pseudodistance estimates yields better results than the classical single index method based on MLE estimates, in the sense that it leads to larger returns for smaller risks. In the literature, there exist various methods for robust estimation in regression models. In the present paper, we proposed a method based on the minimum pseudodistance approach, which requires solving only a simple optimization problem. In addition, from a theoretical point of view, these estimators have attractive properties, such as being redescending robust, consistent, equivariant and asymptotically normally distributed. The comparison with other known robust estimators of the regression parameters, such as the least median of squares estimators, the S-estimators or the minimum density power divergence estimators, shows that the minimum pseudodistance estimators represent an attractive alternative that may be considered in other applications too.

Author Contributions

A.T. designed the methodology, obtained the theoretical results and wrote the paper. A.T. and C.F. conceived the application part. C.F. implemented the methods in MATLAB and obtained the numerical results. Both authors have read and approved the final manuscript.

Acknowledgments

This work was supported by a grant of the Romanian National Authority for Scientific Research, CNCS-UEFISCDI, project number PN-II-RU-TE-2012-3-0007.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of the Results

Proof of Theorem 1.
Since the functions $\phi$ and $\chi$ are redescending bounded functions, for a compact neighborhood $N_{\theta_0}$ of $\theta_0$ it holds that
$$\sup_{\theta \in N_{\theta_0}} \int \left\| \Psi(z, \theta) \right\| dP_{\theta_0}(z) < \infty.$$
Since $\theta \mapsto \Psi(z, \theta)$ is continuous, by the uniform law of large numbers, (A1) implies
$$\sup_{\theta \in N_{\theta_0}} \left\| \int \Psi(z, \theta) \, dP_n(z) - \int \Psi(z, \theta) \, dP_{\theta_0}(z) \right\| \to 0$$
in probability.
Then, (A2) together with assumption (25) ensures the convergence in probability of $\hat{\theta}$ toward $\theta_0$. The arguments are the same as those of van der Vaart [21], Theorem 5.9, p. 46. ☐
Proof of Theorem 2.
First, note that $\Psi$ defined by (27) is twice differentiable with respect to $\theta$ with bounded derivatives. Writing $t := \frac{x - \alpha - \beta x_M}{\sigma}$, the matrix $\dot{\Psi}(z, \theta)$ has the form
$$\begin{pmatrix} -\sigma \phi'(t) & -\sigma \phi'(t)\, x_M & 2\sigma \phi(t) - \sigma \phi'(t)\, t \\ -\sigma \phi'(t)\, x_M & -\sigma \phi'(t)\, x_M^2 & 2\sigma \phi(t)\, x_M - \sigma \phi'(t)\, t\, x_M \\ -\sigma \chi'(t) & -\sigma \chi'(t)\, x_M & 2\sigma \chi(t) - \sigma \chi'(t)\, t \end{pmatrix},$$
with $\phi'(t) = [1 - \gamma t^2] \exp(-\frac{\gamma}{2} t^2)$ and $\chi'(t) = \left[ \frac{3\gamma + 2}{\gamma + 1}\, t - \gamma t^3 \right] \exp(-\frac{\gamma}{2} t^2)$. Since $\phi(t)$, $\chi(t)$, $\phi'(t)$, $\chi'(t)$ are redescending bounded functions, for $\theta = \theta_0$ it holds that
$$\left| \dot{\Psi}_{ik}(z, \theta_0) \right| \leq K(z) \quad \text{with} \quad E(K(Z)) < \infty.$$
In addition, a simple calculation shows that each component $\frac{\partial^2 \Psi_i}{\partial \theta_k \partial \theta_l}$ is a bounded function, since it can be expressed through the functions $\phi(t)$, $\chi(t)$, $\phi'(t)$, $\chi'(t)$, $\phi''(t)$, $\chi''(t)$, which are redescending bounded functions. Moreover, the bounds that can be established for each component $\frac{\partial^2 \Psi_i}{\partial \theta_k \partial \theta_l}$ do not depend on the parameter $\theta$.
For each i, call $\ddot{\Psi}_i$ the matrix with elements $\frac{\partial^2 \Psi_i}{\partial \theta_k \partial \theta_l}$ and $C_n(z, \theta)$ the matrix whose i-th row equals $(\hat{\theta} - \theta_0)^T \ddot{\Psi}_i(z, \theta)$. Using a Taylor expansion, we get
$$0 = \sum_{j=1}^n \Psi(Z_j, \hat{\theta}) = \sum_{j=1}^n \left\{ \Psi(Z_j, \theta_0) + \dot{\Psi}(Z_j, \theta_0)(\hat{\theta} - \theta_0) + \frac{1}{2} C_n(Z_j, \tilde{\theta}_j)(\hat{\theta} - \theta_0) \right\},$$
where the $\tilde{\theta}_j$ are intermediate points. Therefore,
$$0 = A_n + (B_n + \bar{C}_n)(\hat{\theta} - \theta_0)$$
with
$$A_n = \frac{1}{n} \sum_{j=1}^n \Psi(Z_j, \theta_0), \quad B_n = \frac{1}{n} \sum_{j=1}^n \dot{\Psi}(Z_j, \theta_0), \quad \bar{C}_n = \frac{1}{2n} \sum_{j=1}^n C_n(Z_j, \tilde{\theta}_j),$$
i.e., $\bar{C}_n$ is the matrix whose i-th row equals $(\hat{\theta} - \theta_0)^T \bar{\ddot{\Psi}}_i$, where
$$\bar{\ddot{\Psi}}_i = \frac{1}{2n} \sum_{j=1}^n \ddot{\Psi}_i(Z_j, \tilde{\theta}_j),$$
which is bounded by a constant that does not depend on $\theta$, according to the arguments mentioned above. Since $\hat{\theta} - \theta_0 \to 0$ in probability, this implies that $\bar{C}_n \to 0$ in probability.
We have
$$\sqrt{n}\,(\hat{\theta} - \theta_0) = -(B_n + \bar{C}_n)^{-1} \sqrt{n}\, A_n.$$
Note that, for $j = 1, \dots, n$, the vectors $\Psi(Z_j, \theta_0)$ are i.i.d. with mean zero and covariance matrix A, and the matrices $\dot{\Psi}(Z_j, \theta_0)$ are i.i.d. with mean B. Hence, when $n \to \infty$, using (A3), the law of large numbers implies that $B_n \to B$ in probability, and therefore $B_n + \bar{C}_n \to B$ in probability, B being nonsingular. The multivariate central limit theorem implies $\sqrt{n}\, A_n \to \mathcal{N}_3(0, A)$ in distribution.
Then,
$$\sqrt{n}\,(\hat{\theta} - \theta_0) \to \mathcal{N}_3\!\left( 0, \; B^{-1} A (B^{-1})^T \right)$$
in distribution, according to the multivariate Slutsky lemma. ☐
Proof of Theorem 3.
The system (29) can be written as
$$\int \phi\!\left( \frac{x - \alpha(P) - \beta(P)\, x_M}{\sigma(P)} \right) dP(x_M, x) = 0, \quad \int \phi\!\left( \frac{x - \alpha(P) - \beta(P)\, x_M}{\sigma(P)} \right) x_M \, dP(x_M, x) = 0, \quad \int \chi\!\left( \frac{x - \alpha(P) - \beta(P)\, x_M}{\sigma(P)} \right) dP(x_M, x) = 0.$$
We consider the contaminated model $\widetilde{P}_{\varepsilon, x_{M0}, x_0} := (1 - \varepsilon) P_\theta + \varepsilon\, \delta_{(x_{M0}, x_0)}$, where $\delta_{(x_{M0}, x_0)}$ is the Dirac measure putting all its mass at the point $(x_{M0}, x_0)$, which we simply denote here by $\widetilde{P}_\varepsilon$. Then, it holds that
$$(1 - \varepsilon) \int \phi\!\left( \frac{x - \alpha(\widetilde{P}_\varepsilon) - \beta(\widetilde{P}_\varepsilon)\, x_M}{\sigma(\widetilde{P}_\varepsilon)} \right) dP_\theta(x_M, x) + \varepsilon\, \phi\!\left( \frac{x_0 - \alpha(\widetilde{P}_\varepsilon) - \beta(\widetilde{P}_\varepsilon)\, x_{M0}}{\sigma(\widetilde{P}_\varepsilon)} \right) = 0,$$
$$(1 - \varepsilon) \int \phi\!\left( \frac{x - \alpha(\widetilde{P}_\varepsilon) - \beta(\widetilde{P}_\varepsilon)\, x_M}{\sigma(\widetilde{P}_\varepsilon)} \right) x_M \, dP_\theta(x_M, x) + \varepsilon\, \phi\!\left( \frac{x_0 - \alpha(\widetilde{P}_\varepsilon) - \beta(\widetilde{P}_\varepsilon)\, x_{M0}}{\sigma(\widetilde{P}_\varepsilon)} \right) x_{M0} = 0,$$
$$(1 - \varepsilon) \int \chi\!\left( \frac{x - \alpha(\widetilde{P}_\varepsilon) - \beta(\widetilde{P}_\varepsilon)\, x_M}{\sigma(\widetilde{P}_\varepsilon)} \right) dP_\theta(x_M, x) + \varepsilon\, \chi\!\left( \frac{x_0 - \alpha(\widetilde{P}_\varepsilon) - \beta(\widetilde{P}_\varepsilon)\, x_{M0}}{\sigma(\widetilde{P}_\varepsilon)} \right) = 0.$$
Differentiating the first equation with respect to ε and evaluating the derivative at $\varepsilon = 0$, we obtain
$$\int \phi'\!\left( \frac{x - \alpha - \beta x_M}{\sigma} \right) \left[ -\frac{1}{\sigma} \Big( \mathrm{IF}(x_{M0}, x_0; \alpha, P_\theta) + x_M\, \mathrm{IF}(x_{M0}, x_0; \beta, P_\theta) \Big) - \frac{x - \alpha - \beta x_M}{\sigma^2}\, \mathrm{IF}(x_{M0}, x_0; \sigma, P_\theta) \right] dP_\theta(x_M, x) + \phi\!\left( \frac{x_0 - \alpha - \beta x_{M0}}{\sigma} \right) = 0.$$
After some calculations, we obtain the relation
$$-\frac{1}{\sigma (\gamma+1)^{3/2}}\, \mathrm{IF}(x_{M0}, x_0; \alpha, P_\theta) - \frac{E(X_M)}{\sigma (\gamma+1)^{3/2}}\, \mathrm{IF}(x_{M0}, x_0; \beta, P_\theta) + \phi\!\left( \frac{x_0 - \alpha - \beta x_{M0}}{\sigma} \right) = 0.$$
Similarly, differentiating Equations (A11) and (A12) with respect to ε and evaluating the derivatives at $\varepsilon = 0$, we get
$$-\frac{E(X_M)}{\sigma (\gamma+1)^{3/2}}\, \mathrm{IF}(x_{M0}, x_0; \alpha, P_\theta) - \frac{E(X_M^2)}{\sigma (\gamma+1)^{3/2}}\, \mathrm{IF}(x_{M0}, x_0; \beta, P_\theta) + \phi\!\left( \frac{x_0 - \alpha - \beta x_{M0}}{\sigma} \right) x_{M0} = 0$$
and
$$-\frac{2}{\sigma (\gamma+1)^{5/2}}\, \mathrm{IF}(x_{M0}, x_0; \sigma, P_\theta) + \chi\!\left( \frac{x_0 - \alpha - \beta x_{M0}}{\sigma} \right) = 0.$$
Solving the system formed by Equations (A13)–(A15), we find the expressions of the influence functions. ☐
Proof of Theorem 4.
In the following, we simply denote by $X_{Mj}$ the vector $(1, X_{Mj})^T$. Then,
$$(\hat{\alpha}, \hat{\beta})^T\big( \{ (X_{Mj}, X_j) : j = 1, \dots, n \} \big) = \underset{(\alpha, \beta)^T}{\arg} \max_{(\alpha, \beta, \sigma)} \sum_{j=1}^n \sigma^{-\gamma/(\gamma+1)} \exp\left\{ -\frac{\gamma}{2} \left( \frac{X_j - X_{Mj}^T (\alpha, \beta)^T}{\sigma} \right)^2 \right\}.$$
For any two-dimensional column vector v, we have
$$\begin{aligned} (\hat{\alpha}, \hat{\beta})^T\big( \{ (X_{Mj}, X_j + X_{Mj}^T v) : j = 1, \dots, n \} \big) &= \underset{(\alpha, \beta)^T}{\arg} \max_{(\alpha, \beta, \sigma)} \sum_{j=1}^n \sigma^{-\gamma/(\gamma+1)} \exp\left\{ -\frac{\gamma}{2} \left( \frac{X_j + X_{Mj}^T v - X_{Mj}^T (\alpha, \beta)^T}{\sigma} \right)^2 \right\} \\ &= \underset{(\alpha, \beta)^T}{\arg} \max_{(\alpha, \beta, \sigma)} \sum_{j=1}^n \sigma^{-\gamma/(\gamma+1)} \exp\left\{ -\frac{\gamma}{2} \left( \frac{X_j - X_{Mj}^T ((\alpha, \beta)^T - v)}{\sigma} \right)^2 \right\} \\ &= \underset{((\alpha, \beta)^T - v)}{\arg} \max_{((\alpha, \beta)^T - v, \sigma)} \sum_{j=1}^n \sigma^{-\gamma/(\gamma+1)} \exp\left\{ -\frac{\gamma}{2} \left( \frac{X_j - X_{Mj}^T ((\alpha, \beta)^T - v)}{\sigma} \right)^2 \right\} + v \\ &= (\hat{\alpha}, \hat{\beta})^T\big( \{ (X_{Mj}, X_j) : j = 1, \dots, n \} \big) + v, \end{aligned}$$
which shows that $(\hat{\alpha}, \hat{\beta})^T$ is regression equivariant.
For any constant $c \neq 0$, we have
$$\begin{aligned} (\hat{\alpha}, \hat{\beta})^T\big( \{ (X_{Mj}, c X_j) : j = 1, \dots, n \} \big) &= \underset{(\alpha, \beta)^T}{\arg} \max_{(\alpha, \beta, \sigma)} \sum_{j=1}^n \sigma^{-\gamma/(\gamma+1)} \exp\left\{ -\frac{\gamma}{2} \left( \frac{c X_j - X_{Mj}^T (\alpha, \beta)^T}{\sigma} \right)^2 \right\} \\ &= \underset{(\alpha, \beta)^T}{\arg} \max_{(\alpha, \beta, \sigma)} \sum_{j=1}^n c^{-\gamma/(\gamma+1)} \left( \frac{\sigma}{c} \right)^{-\gamma/(\gamma+1)} \exp\left\{ -\frac{\gamma}{2} \left( \frac{X_j - X_{Mj}^T ((\alpha, \beta)^T / c)}{\sigma / c} \right)^2 \right\} \\ &= c \cdot \underset{(\alpha/c, \beta/c)^T}{\arg} \max_{(\alpha/c, \beta/c, \sigma/c)} \sum_{j=1}^n c^{-\gamma/(\gamma+1)} \left( \frac{\sigma}{c} \right)^{-\gamma/(\gamma+1)} \exp\left\{ -\frac{\gamma}{2} \left( \frac{X_j - X_{Mj}^T ((\alpha, \beta)^T / c)}{\sigma / c} \right)^2 \right\} \\ &= c \cdot (\hat{\alpha}, \hat{\beta})^T\big( \{ (X_{Mj}, X_j) : j = 1, \dots, n \} \big). \end{aligned}$$
This implies that the estimator $(\hat{\alpha}, \hat{\beta}) = (\hat{\alpha}, \hat{\beta})\big( \{ (X_{Mj}, X_j) : j = 1, \dots, n \} \big)$ is scale equivariant.
Now, for any nonsingular two-dimensional square matrix A, we get
$$\begin{aligned} (\hat{\alpha}, \hat{\beta})^T\big( \{ (A^T X_{Mj}, X_j) : j = 1, \dots, n \} \big) &= \underset{(\alpha, \beta)^T}{\arg} \max_{(\alpha, \beta, \sigma)} \sum_{j=1}^n \sigma^{-\gamma/(\gamma+1)} \exp\left\{ -\frac{\gamma}{2} \left( \frac{X_j - X_{Mj}^T A (\alpha, \beta)^T}{\sigma} \right)^2 \right\} \\ &= A^{-1} \underset{A (\alpha, \beta)^T}{\arg} \max_{(A (\alpha, \beta)^T, \sigma)} \sum_{j=1}^n \sigma^{-\gamma/(\gamma+1)} \exp\left\{ -\frac{\gamma}{2} \left( \frac{X_j - X_{Mj}^T (A (\alpha, \beta)^T)}{\sigma} \right)^2 \right\} \\ &= A^{-1} \cdot (\hat{\alpha}, \hat{\beta})^T\big( \{ (X_{Mj}, X_j) : j = 1, \dots, n \} \big), \end{aligned}$$
which shows the affine equivariance of the estimator $(\hat{\alpha}, \hat{\beta}) = (\hat{\alpha}, \hat{\beta})\big( \{ (X_{Mj}, X_j) : j = 1, \dots, n \} \big)$. ☐

Appendix B. The 50 Stocks and Their Abbreviations

  • Asbury Automotive Group, Inc. (ABG)
  • Arctic Cat Inc. (ACAT)
  • American Eagle Outfitters, Inc. (AEO)
  • AK Steel Holding Corporation (AKS)
  • Albany Molecular Research, Inc. (AMRI)
  • The Andersons, Inc. (ANDE)
  • ARMOUR Residential REIT, Inc. (ARR)
  • BJ’s Restaurants, Inc. (BJRI)
  • Brooks Automation, Inc. (BRKS)
  • Caleres, Inc. (CAL)
  • Cincinnati Bell Inc. (CBB)
  • Calgon Carbon Corporation (CCC)
  • Coeur Mining, Inc. (CDE)
  • Cohen & Steers, Inc. (CNS)
  • Cray Inc. (CRAY)
  • Cirrus Logic, Inc. (CRUS)
  • Covenant Transportation Group, Inc. (CVTI)
  • EarthLink Holdings Corp. (ELNK)
  • Gray Television, Inc. (GTN)
  • Triple-S Management Corporation (GTS)
  • Getty Realty Corp. (GTY)
  • Hecla Mining Company (HL)
  • Harmonic Inc. (HLIT)
  • Ligand Pharmaceuticals Incorporated (LGND)
  • Louisiana-Pacific Corporation (LPX)
  • Lattice Semiconductor Corporation (LSCC)
  • ManTech International Corporation (MANT)
  • MiMedx Group, Inc. (MDXG)
  • Medifast, Inc. (MED)
  • Mentor Graphics Corporation (MENT)
  • Mistras Group, Inc. (MG)
  • Mesa Laboratories, Inc. (MLAB)
  • Meritor, Inc. (MTOR)
  • Monster Worldwide, Inc. (MWW)
  • Nektar Therapeutics (NKTR)
  • Osiris Therapeutics, Inc. (OSIR)
  • PennyMac Mortgage Investment Trust (PMT)
  • Paratek Pharmaceuticals, Inc. (PRTK)
  • Repligen Corporation (RGEN)
  • Rigel Pharmaceuticals, Inc. (RIGL)
  • Schnitzer Steel Industries, Inc. (SCHN)
  • comScore, Inc. (SCOR)
  • Safeguard Scientifics, Inc. (SFE)
  • Silicon Graphics International (SGI)
  • Sagent Pharmaceuticals, Inc. (SGNT)
  • Semtech Corporation (SMTC)
  • Sapiens International Corporation N.V. (SPNS)
  • Sarepta Therapeutics, Inc. (SRPT)
  • Take-Two Interactive Software, Inc. (TTWO)
  • Park Sterling Corporation (PSTB)

References

  1. Sharpe, W.F. A simplified model for portfolio analysis. Manag. Sci. 1963, 9, 277–293. [Google Scholar] [CrossRef]
  2. Alexander, G.J.; Sharpe, W.F.; Bailey, J.V. Fundamentals of Investments; Prentice-Hall: Upper Saddle River, NJ, USA, 2000. [Google Scholar]
  3. Pardo, L. Statistical Inference Based on Divergence Measures; Chapman & Hall: Boca Raton, FL, USA, 2006. [Google Scholar]
  4. Basu, A.; Shioya, H.; Park, C. Statistical Inference: The Minimum Distance Approach; CRC Press: Boca Raton, FL, USA, 2011. [Google Scholar]
  5. Basu, A.; Harris, I.R.; Hjort, N.L.; Jones, M.C. Robust and efficient estimation by minimizing a density power divergence. Biometrika 1998, 85, 549–559. [Google Scholar] [CrossRef]
  6. Jones, M.C.; Hjort, N.L.; Harris, I.R.; Basu, A. A comparison of related density-based minimum divergence estimators. Biometrika 2001, 88, 865–873. [Google Scholar] [CrossRef]
  7. Broniatowski, M.; Keziou, A. Parametric estimation and tests through divergences and the duality technique. J. Multivar. Anal. 2009, 100, 16–36. [Google Scholar] [CrossRef]
  8. Toma, A.; Leoni-Aubin, S. Robust tests based on dual divergence estimators and saddlepoint approximations. J. Multivar. Anal. 2010, 101, 1143–1155. [Google Scholar] [CrossRef]
  9. Toma, A.; Broniatowski, M. Dual divergence estimators and tests: Robustness results. J. Multivar. Anal. 2011, 102, 20–36. [Google Scholar] [CrossRef]
  10. Fujisawa, H.; Eguchi, S. Robust parameter estimation with a small bias against heavy contamination. J. Multivar. Anal. 2008, 99, 2053–2081. [Google Scholar] [CrossRef]
  11. Broniatowski, M.; Vajda, I. Several applications of divergence criteria in continuous families. Kybernetika 2012, 48, 600–636. [Google Scholar]
  12. Broniatowski, M.; Toma, A.; Vajda, I. Decomposable pseudodistances and applications in statistical estimation. J. Stat. Plan. Inference 2012, 142, 2574–2585. [Google Scholar] [CrossRef]
  13. Markowitz, H.M. Portfolio selection. J. Finance 1952, 7, 77–91. [Google Scholar]
  14. Fabozzi, F.J.; Huang, D.; Zhou, G. Robust portfolios: contributions from operations research and finance. Ann. Oper. Res. 2010, 176, 191–220. [Google Scholar] [CrossRef]
  15. Vaz-de Melo, B.; Camara, R.P. Robust multivariate modeling in finance. Int. J. Manag. Finance 2005, 4, 12–23. [Google Scholar] [CrossRef]
  16. Perret-Gentil, C.; Victoria-Feser, M.P. Robust Mean-Variance Portfolio Selection. FAME Research Paper, No. 140. 2005. Available online: papers.ssrn.com/sol3/papers.cfm?abstract_id=721509 (accessed on 28 February 2018).
  17. Welsch, R.E.; Zhou, X. Application of robust statistics to asset allocation models. Revstat. Stat. J. 2007, 5, 97–114. [Google Scholar]
  18. DeMiguel, V.; Nogales, F.J. Portfolio selection with robust estimation. Oper. Res. 2009, 57, 560–577. [Google Scholar] [CrossRef]
  19. Toma, A.; Leoni-Aubin, S. Robust portfolio optimization using pseudodistances. PLoS ONE 2015, 10, 1–26. [Google Scholar] [CrossRef] [PubMed]
  20. Toma, A.; Leoni-Aubin, S. Optimal robust M-estimators using Renyi pseudodistances. J. Multivar. Anal. 2013, 115, 359–373. [Google Scholar] [CrossRef]
  21. Van der Vaart, A. Asymptotic Statistics; Cambridge University Press: New York, NY, USA, 1998. [Google Scholar]
  22. Rousseeuw, P.J.; Leroy, A.M. Robust Regression and Outlier Detection; John Wiley & Sons: Hoboken, NJ, USA, 2005. [Google Scholar]
  23. Andersen, R. Modern Methods for Robust Regression; SAGE Publications, Inc.: Los Angeles, CA, USA, 2008. [Google Scholar]
  24. Rousseeuw, P.J.; Yohai, V. Robust regression by means of S-estimators. In Robust and Nonlinear Time Series Analysis; Franke, J., Hardle, W., Martin, D., Eds.; Springer: New York, NY, USA, 1984; pp. 256–272. ISBN 978-0-387-96102-6. [Google Scholar]
  25. Ghosh, A.; Basu, A. Robust estimations for independent, non-homogeneous observations using density power divergence with applications to linear regression. Electron. J. Stat. 2013, 7, 2420–2456. [Google Scholar] [CrossRef]
Figure 1. Plots of the Hertzsprung–Russell data and fitted regression lines using MLE, minimum density power divergence (MDPD) methods for several values of γ, minimum pseudodistance (MP) methods for several values of γ, S-estimators (SE) and the least median of squares (LMS) method.
Figure 2. Efficient frontiers, classical (MLE) vs. robust corresponding to γ = 0.5 (RE), for eight quarters (the first four quarters and the last four quarters).
Figure 3. Efficient frontiers, classical (MLE) vs. robust corresponding to γ = 1 (RE), and optimal portfolios chosen on frontiers, for the windows 7 (left) and 8 (right).
Table 1. The parameter estimates for the linear regression model for the Hertzsprung–Russell data using several minimum pseudodistance (MP) methods, several minimum density power divergence (MDPD) methods, the least median of squares (LMS) method, S-estimators and the MLE method. γ represents the tuning parameter.
MLE Estimates
α       β       σ
6.79    −0.41   0.55

MP Estimates
γ       α       β       σ
0.01    6.79    −0.41   0.55
0.1     6.81    −0.41   0.56
0.25    6.86    −0.42   0.58
0.3     6.88    −0.42   0.59
0.31    6.89    −0.43   0.59
0.32    −6.81   2.66    0.39
0.35    −7.16   2.74    0.38
0.4     −7.62   2.85    0.38
0.5     −8.17   2.97    0.37
0.75    −8.65   3.08    0.38
1       −8.84   3.12    0.39
1.2     −8.94   3.15    0.40
1.5     −9.08   3.18    0.41
2       −9.31   3.23    0.43

MDPD Estimates
γ       α       β       σ
0.1     6.78    −0.41   0.60
0.25    −5.16   2.30    0.42
0.5     −7.22   2.76    0.40
0.8     −7.89   2.91    0.40
1       −8.03   2.95    0.41

S-Estimates
α       β       σ
−9.59   3.28    —

LMS Estimates
α       β       σ
−12.30  3.90    —
