
Model Selection Criteria Using Divergences

by Aida Toma 1,2
1 Department of Applied Mathematics, Bucharest Academy of Economic Studies, Piața Romană 6, Bucharest 010374, Romania
2 "Gh. Mihoc-C. Iacob" Institute of Mathematical Statistics and Applied Mathematics, Romanian Academy, Calea 13 Septembrie 13, Bucharest 050711, Romania
Entropy 2014, 16(5), 2686-2698; https://doi.org/10.3390/e16052686
Submission received: 1 April 2014 / Revised: 12 May 2014 / Accepted: 13 May 2014 / Published: 14 May 2014

Abstract

In this note, we introduce some divergence-based model selection criteria. These criteria are defined by estimators of the expected overall discrepancy between the true unknown model and the candidate model, using dual representations of divergences and associated minimum divergence estimators. It is shown that the proposed criteria are asymptotically unbiased. The influence functions of these criteria are also derived and some comments on robustness are provided.

1. Introduction

The minimum divergence approach is a useful technique in statistical inference. In recent years, the literature dedicated to the divergence-based statistical methods has grown substantially and the monographs of Pardo [1] and Basu et al. [2] are important references that present developments and applications in this field of research. Minimum divergence estimators and related methods have received considerable attention in statistical inference because of their ability to reconcile efficiency and robustness. Among others, Beran [3], Tamura and Boos [4], Simpson [5,6] and Toma [7] proposed families of parametric estimators minimizing the Hellinger distance between a nonparametric estimator of the observations density and the model. They showed that those estimators are both asymptotically efficient and robust. Generalizing earlier work based on the Hellinger distance, Lindsay [8] and Basu and Lindsay [9] have investigated minimum divergence estimators, for both discrete and continuous models. Some families of estimators based on approximate divergence criteria have also been considered; see Basu et al. [10]. Broniatowski and Keziou [11] have introduced a minimum divergence estimation method based on a dual representation of the divergence between probability measures. Their estimators, called minimum dual divergence estimators, are defined in a unified way for both continuous and discrete models. They do not require any prior smoothing and include the classical maximum likelihood estimators as a benchmark. Robustness properties of these estimators have been studied in [12,13].

In this paper we apply estimators of divergences in dual form and corresponding minimum dual divergence estimators, as presented by Broniatowski and Keziou [11], in the context of model selection.

Model selection is the task of choosing the best model among a set of candidate models. A model selection criterion can be regarded as an approximately unbiased estimator of the expected overall discrepancy, a nonnegative quantity that measures the distance between the true unknown model and a fitted approximating model. The candidate model attaining the smallest value of the criterion is then selected.

Many model selection criteria have been proposed so far. Classical criteria based on least squares or on the log-likelihood include Mallows' Cp, cross-validation (CV), the Akaike information criterion (AIC), which is based on the well-known Kullback–Leibler divergence, the Bayesian information criterion (BIC) and a general class of criteria (GIC) that also estimates the Kullback–Leibler divergence. These criteria were proposed by Mallows [14], Stone [15], Akaike [16], Schwarz [17] and Konishi and Kitagawa [18], respectively. Robust versions of classical model selection criteria, which are not strongly affected by outliers, were first proposed by Ronchetti [19] and Ronchetti and Staudte [20]. Other references on this topic can be found in Maronna et al. [21]. Among the recent proposals for model selection, we recall the criteria presented by Karagrigoriou et al. [22] and the divergence information criteria (DIC) introduced by Mattheou et al. [23]. The DIC criteria use the density power divergences introduced by Basu et al. [10].

In the present paper, we apply the same methodology used for AIC, and also for DIC, to a general class of divergences including the Cressie–Read divergences [24] in order to obtain model selection criteria. These criteria also use dual forms of the divergences and minimum dual divergence estimators. We show that the criteria are asymptotically unbiased and compute the corresponding influence functions.

The paper is organized as follows. In Section 2 we recall the duality formula for divergences, as well as the definitions of associated dual divergence estimators and minimum dual divergence estimators, together with their asymptotic properties, all these being necessary in the next section where we define new criteria for model selection. In Section 3, we apply the same methodology used for AIC to the divergences in dual form in order to develop criteria for model selection. We define criteria based on estimators of the expected overall discrepancy and prove their asymptotic unbiasedness. The influence functions of the proposed criteria are also derived. In Section 4 we present some conclusions.

2. Minimum Dual Divergence Estimators

2.1. Examples of Divergences

Let φ be a non-negative convex function defined from (0, ∞) onto [0, ∞] and satisfying φ(1) = 0. Extend φ at 0 by setting φ(0) = lim_{x↓0} φ(x). Let (X, B) be a measurable space and P be a probability measure (p.m.) defined on (X, B). Following Rüschendorf [25], for any p.m. Q absolutely continuous (a.c.) w.r.t. P, the divergence between Q and P is defined by

$$ D(Q, P) := \int \varphi\Big(\frac{dQ}{dP}\Big)\, dP. $$

When Q is not a.c. w.r.t. P, we set D(Q, P) = ∞. We refer to Liese and Vajda [26] for an overview on the origin of the concept of divergence in statistics.

A commonly used family of divergences is the so-called “power divergences” or Cressie–Read divergences. This family is defined by the class of functions

$$ x \in \mathbb{R}_+^* \mapsto \varphi_\gamma(x) := \frac{x^\gamma - \gamma x + \gamma - 1}{\gamma(\gamma - 1)} $$

for γ ∈ ℝ \ {0, 1}, together with φ_0(x) := −log x + x − 1 and φ_1(x) := x log x − x + 1, with the conventions φ_γ(0) = lim_{x↓0} φ_γ(x) and φ_γ(∞) = lim_{x→∞} φ_γ(x) for any γ ∈ ℝ. The Kullback–Leibler divergence (KL) is associated with φ_1, the modified Kullback–Leibler divergence (KLm) with φ_0, the χ² divergence with φ_2, the modified χ² divergence (χ²m) with φ_{−1} and the Hellinger distance with φ_{1/2}. We refer to [11] for the modified versions of the χ² and KL divergences.
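As a quick numerical illustration (this sketch is ours, not part of the original paper), the following Python snippet evaluates φ_γ and the resulting divergence D(Q, P) for two discrete distributions; for γ = 1 it reproduces the Kullback–Leibler divergence.

```python
import numpy as np

def phi_gamma(x, gamma):
    """Cressie-Read generator phi_gamma; gamma = 0 gives KLm, gamma = 1 gives KL."""
    x = np.asarray(x, dtype=float)
    if gamma == 0:
        return -np.log(x) + x - 1.0
    if gamma == 1:
        return x * np.log(x) - x + 1.0
    return (x**gamma - gamma * x + gamma - 1.0) / (gamma * (gamma - 1.0))

def divergence(q, p, gamma):
    """D(Q, P) = sum_i phi_gamma(q_i / p_i) * p_i for discrete distributions Q and P."""
    q, p = np.asarray(q, dtype=float), np.asarray(p, dtype=float)
    return float(np.sum(phi_gamma(q / p, gamma) * p))

q = np.array([0.2, 0.5, 0.3])
p = np.array([0.3, 0.4, 0.3])
print(divergence(q, p, 1.0))        # Kullback-Leibler divergence KL(Q, P)
print(np.sum(q * np.log(q / p)))    # direct KL formula; the two values agree
print(divergence(q, p, 0.5))        # gamma = 1/2, related to the Hellinger distance
```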

Some applied models using divergence and entropy measures can be found in Toma and Leoni-Aubin [27], Kallberg et al. [28], Preda et al. [29] and Basu et al. [2], among others.

2.2. Dual Form of a Divergence and Minimum Divergence Estimators

Let {F_θ, θ ∈ Θ} be an identifiable parametric model, where Θ is a subset of ℝ^p. We assume that for any θ ∈ Θ, F_θ has density f_θ with respect to some dominating σ-finite measure λ. Consider the problem of estimating the unknown true value of the parameter θ_0 on the basis of an i.i.d. sample X_1, …, X_n with law F_{θ_0}.

In the following, D(f_θ, f_{θ_0}) denotes the divergence between f_θ and f_{θ_0}, namely

$$ D(f_\theta, f_{\theta_0}) := \int \varphi\Big(\frac{f_\theta}{f_{\theta_0}}\Big) f_{\theta_0}\, d\lambda. $$

Using a Fenchel duality technique, Broniatowski and Keziou [11] have proved a dual representation of divergences. The main interest of this duality formula is that it leads to a wide variety of estimators, obtained by plugging the empirical measure of the data into the formula, without making use of any grouping or smoothing.

We consider divergences defined through differentiable functions φ, which we assume to satisfy the following condition: (C.0) There exists 0 < δ < 1 such that, for all c ∈ [1 − δ, 1 + δ], there exist numbers c_1, c_2, c_3 such that

$$ \varphi(cx) \le c_1 \varphi(x) + c_2 |x| + c_3, \qquad \forall x. $$

Condition (C.0) holds for all power divergences, including KL and KLm divergences.

Assuming that D(f_θ, f_{θ_0}) is finite and that the function φ satisfies condition (C.0), the following dual representation holds

$$ D(f_\theta, f_{\theta_0}) = \sup_{\alpha \in \Theta} \int m(\alpha, \theta, x)\, f_{\theta_0}(x)\, dx, $$ (5)

with

$$ m(\alpha, \theta, x) := \int \dot\varphi\Big(\frac{f_\theta(z)}{f_\alpha(z)}\Big) f_\theta(z)\, dz - \Big\{ \dot\varphi\Big(\frac{f_\theta(x)}{f_\alpha(x)}\Big) \frac{f_\theta(x)}{f_\alpha(x)} - \varphi\Big(\frac{f_\theta(x)}{f_\alpha(x)}\Big) \Big\}, $$ (6)

where φ̇ denotes the derivative of φ; the supremum in Equation (5) is uniquely attained at α = θ_0, independently of θ.

We mention that the dual representation Equation (5) of divergences has been obtained independently by Liese and Vajda [30].
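To make the dual representation concrete, here is a small Python sketch (our own illustration, not code from the paper) that evaluates m(α, θ, x) of Equation (6) by numerical integration; the normal location model with unit variance and the Hellinger case γ = 1/2 are arbitrary illustrative choices.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

GAMMA = 0.5  # Hellinger case; any Cressie-Read gamma outside {0, 1} works with these formulas

def phi(x):
    """Cressie-Read generator phi_gamma."""
    return (x**GAMMA - GAMMA * x + GAMMA - 1.0) / (GAMMA * (GAMMA - 1.0))

def phi_dot(x):
    """Derivative of phi_gamma."""
    return (x**(GAMMA - 1.0) - 1.0) / (GAMMA - 1.0)

def f(theta, z):
    """Candidate density: normal location model with unit variance (illustrative choice)."""
    return norm.pdf(z, loc=theta, scale=1.0)

def m(alpha, theta, x):
    """m(alpha, theta, x) from the dual representation, Equation (6)."""
    # Finite integration range, wide enough for the location parameters used below,
    # chosen to avoid underflow of both densities far in the tails.
    integral, _ = quad(lambda z: phi_dot(f(theta, z) / f(alpha, z)) * f(theta, z),
                       -25.0, 25.0)
    ratio = f(theta, x) / f(alpha, x)
    return integral - (phi_dot(ratio) * ratio - phi(ratio))
```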

Naturally, for fixed θ, an estimator of the divergence D(f_θ, f_{θ_0}) is obtained by replacing Equation (5) by its sample analogue. This estimator is exactly

$$ \widehat{D}(f_\theta, f_{\theta_0}) := \sup_{\alpha \in \Theta} \frac{1}{n} \sum_{i=1}^n m(\alpha, \theta, X_i), $$ (7)

the supremum being attained for

$$ \widehat{\alpha}(\theta) := \arg\sup_{\alpha \in \Theta} \frac{1}{n} \sum_{i=1}^n m(\alpha, \theta, X_i). $$ (8)

Formula (8) defines a class of estimators of the parameter θ0 called dual divergence estimators. Further, since

$$ \inf_{\theta \in \Theta} D(f_\theta, f_{\theta_0}) = D(f_{\theta_0}, f_{\theta_0}) = 0 $$ (9)

and since this infimum is uniquely attained at θ = θ_0, a natural definition of estimators of the parameter θ_0, called minimum dual divergence estimators, is provided by

$$ \widehat{\theta} := \arg\inf_{\theta \in \Theta} \widehat{D}(f_\theta, f_{\theta_0}) = \arg\inf_{\theta \in \Theta} \sup_{\alpha \in \Theta} \frac{1}{n} \sum_{i=1}^n m(\alpha, \theta, X_i). $$ (10)

For more details on the dual representation of divergences and associated minimum dual divergence estimators, we refer to Broniatowski and Keziou [11].
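Continuing the sketch above, the estimators in Equations (7), (8) and (10) can be approximated by a nested one-dimensional optimization; the search bounds, the optimizer and the simulated sample below are illustrative assumptions, not choices made in the paper.

```python
from scipy.optimize import minimize_scalar

def D_hat(theta, sample):
    """Estimated divergence (7): sup over alpha of the sample mean of m(alpha, theta, X_i)."""
    sample = np.asarray(sample, dtype=float)
    objective = lambda alpha: -np.mean(m(alpha, theta, sample))
    res = minimize_scalar(objective, bounds=(-5.0, 5.0), method="bounded")
    return -res.fun, res.x              # (estimated divergence, alpha_hat(theta))

def theta_hat(sample):
    """Minimum dual divergence estimator (10): arg inf over theta of the estimated divergence."""
    res = minimize_scalar(lambda t: D_hat(t, sample)[0],
                          bounds=(-5.0, 5.0), method="bounded")
    return res.x

rng = np.random.default_rng(0)
sample = rng.normal(loc=1.0, scale=1.0, size=200)
print(theta_hat(sample))                # should be close to the true location parameter 1.0
```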

2.3. Asymptotic Properties

Broniatowski and Keziou [11] have proved both the weak and the strong consistency, as well as the asymptotic normality, for the classes of estimators α̂(θ), α̂(θ̂) and θ̂. Here, we briefly recall those asymptotic results that will be used in the next sections. The following conditions are considered.

(C.1) The estimates θ̂ and α̂(θ) exist.

(C.2) sup_{α,θ∈Θ} | (1/n) Σ_{i=1}^n m(α, θ, X_i) − ∫ m(α, θ, x) f_{θ_0}(x) dx | tends to 0 in probability.

(a) for any positive ε, there exists some positive η such that, for any α ∈ Θ with ‖α − θ_0‖ > ε and for all θ ∈ Θ, it holds that ∫ m(α, θ, x) f_{θ_0}(x) dx < ∫ m(θ_0, θ, x) f_{θ_0}(x) dx − η.

(b) there exists some neighborhood N_{θ_0} of θ_0 such that, for any positive ε, there exists some positive η such that, for all α ∈ N_{θ_0} and all θ ∈ Θ satisfying ‖θ − θ_0‖ > ε, it holds that ∫ m(α, θ_0, x) f_{θ_0}(x) dx < ∫ m(α, θ, x) f_{θ_0}(x) dx − η.

(C.3) There exist a neighborhood N_{θ_0} of θ_0 and a positive function H with ∫ H(x) f_{θ_0}(x) dx finite, such that for all α ∈ N_{θ_0}, |m(α, θ_0, x)| ≤ H(x).

(C.4) There exists a neighborhood N_{θ_0} of θ_0 such that the first and the second order partial derivatives with respect to α and θ of φ̇(f_θ(x)/f_α(x)) f_θ(x) are dominated on N_{θ_0} × N_{θ_0} by some λ-integrable functions. The third order partial derivatives with respect to α and θ of m(α, θ, x) are dominated on N_{θ_0} × N_{θ_0} by some P_{θ_0}-integrable functions (where P_{θ_0} is the probability measure corresponding to the law F_{θ_0}).

(C.5) The integrals ∫ ‖(∂/∂α) m(θ_0, θ_0, x)‖² f_{θ_0}(x) dx, ∫ ‖(∂/∂θ) m(θ_0, θ_0, x)‖² f_{θ_0}(x) dx, ∫ ‖(∂²/∂α²) m(θ_0, θ_0, x)‖ f_{θ_0}(x) dx, ∫ ‖(∂²/∂θ²) m(θ_0, θ_0, x)‖ f_{θ_0}(x) dx and ∫ ‖(∂²/∂θ∂α) m(θ_0, θ_0, x)‖ f_{θ_0}(x) dx are finite, and the Fisher information matrix I(θ_0) := ∫ (ḟ_{θ_0}(z) ḟ_{θ_0}^t(z) / f_{θ_0}(z)) dz is nonsingular, t denoting transposition.

Proposition 1

Assume that conditions (C.1)–(C.3) hold. Then

(a) sup_{θ∈Θ} ‖α̂(θ) − θ_0‖ tends to 0 in probability.

(b) θ̂ converges to θ_0 in probability. If (C.1)–(C.5) are fulfilled, then

(c) √n(θ̂ − θ_0) and √n(α̂(θ̂) − θ_0) converge in distribution to a centered p-variate normal random variable with covariance matrix I(θ_0)^{−1}.

For discussions and examples about the fulfillment of conditions (C.1)–(C.5), we refer to Broniatowski and Keziou [11].

3. Model Selection Criteria

In this section, we apply the same methodology used for AIC to the divergences in dual form in order to develop model selection criteria. Consider a random sample X1, …, Xn from the distribution with density g (the true model) and a candidate model fθ from a parametric family of models (fθ) indexed by an unknown parameter θ ∈ Θ, where Θ is a subset of ℝp. We use divergences satisfying (C.0) and denote for simplicity the divergence D (fθ, g) between fθ and the true density g by Wθ.

3.1. The Expected Overall Discrepancy

The target theoretical quantity that will be approximated by an asymptotically unbiased estimator is given by

$$ E\big[W_{\widehat{\theta}}\big] = E\big[\, W_\theta \,\big|\, \theta = \widehat{\theta}\,\big] $$

where θ̂ is a minimum dual divergence estimator defined by Equation (10). The same divergence is used for both W_θ and θ̂. The quantity E[W_θ̂] can be viewed as the average distance between g and (f_θ) and is called the expected overall discrepancy between g and (f_θ).
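Continuing the toy sketches above (our illustration; θ_0 is known here only because the data are simulated), E[W_θ̂] can be approximated by Monte Carlo: draw repeated samples, compute θ̂ for each, evaluate W_θ̂ = D(f_θ̂, f_{θ_0}) by numerical integration and average.

```python
THETA0 = 1.0   # true parameter, known here only because this is a simulation

def W(theta):
    """W_theta = D(f_theta, f_theta0) for the toy normal location model used above."""
    integrand = lambda z: phi(f(theta, z) / f(THETA0, z)) * f(THETA0, z)
    return quad(integrand, -25.0, 25.0)[0]

values = []
for rep in range(10):                       # a handful of replications keeps the sketch fast
    s = rng.normal(loc=THETA0, scale=1.0, size=100)
    values.append(W(theta_hat(s)))
print(np.mean(values))                      # Monte Carlo approximation of E[W_thetahat]
```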

The next lemma gives the gradient vector and the Hessian matrix of W_θ and is useful for evaluating the expected overall discrepancy E[W_θ̂] through a Taylor expansion. We denote by ḟ_θ and f̈_θ the first and the second order derivatives of f_θ with respect to θ, respectively. We assume the following conditions, which allow differentiation under the integral sign.

(C.6) There exists a neighborhood Nθ of θ such that

$$ \sup_{u \in N_\theta} \int \Big\| \frac{\partial}{\partial u} \Big[ \varphi\Big(\frac{f_u}{g}\Big) \Big] \Big\|\, g\, d\lambda < \infty. $$

(C.7) There exists a neighborhood Nθ of θ such that

$$ \sup_{u \in N_\theta} \int \Big\| \frac{\partial}{\partial u} \Big[ \dot\varphi\Big(\frac{f_u}{g}\Big) \dot f_u \Big] \Big\|\, d\lambda < \infty. $$

Lemma 1

Assume that conditions (C.6) and (C.7) hold. Then, the gradient vector ∂W_θ/∂θ of W_θ is given by

$$ \int \dot\varphi\Big(\frac{f_\theta}{g}\Big)\, \dot f_\theta\, d\lambda $$

and the Hessian matrix ∂²W_θ/∂θ² is given by

$$ \int \Big[ \ddot\varphi\Big(\frac{f_\theta}{g}\Big) \frac{\dot f_\theta\, \dot f_\theta^t}{g} + \dot\varphi\Big(\frac{f_\theta}{g}\Big) \ddot f_\theta \Big]\, d\lambda. $$

The proof of this Lemma is straightforward, therefore it is omitted.

In particular, when using Cressie–Read divergences, the gradient vector ∂W_θ/∂θ of W_θ is given by

$$ \frac{1}{\gamma-1} \int \Big(\frac{f_\theta(z)}{g(z)}\Big)^{\gamma-1} \dot f_\theta(z)\, dz, \quad \text{if } \gamma \in \mathbb{R}\setminus\{0,1\} $$
$$ -\int \frac{g(z)}{f_\theta(z)}\, \dot f_\theta(z)\, dz, \quad \text{if } \gamma = 0 $$
$$ \int \log\Big(\frac{f_\theta(z)}{g(z)}\Big)\, \dot f_\theta(z)\, dz, \quad \text{if } \gamma = 1 $$

and the Hessian matrix ∂²W_θ/∂θ² is given by

$$ \int \Big(\frac{f_\theta(z)}{g(z)}\Big)^{\gamma-2} \frac{\dot f_\theta(z)\, \dot f_\theta^t(z)}{g(z)}\, dz + \frac{1}{\gamma-1} \int \Big(\frac{f_\theta(z)}{g(z)}\Big)^{\gamma-1} \ddot f_\theta(z)\, dz, \quad \text{if } \gamma \in \mathbb{R}\setminus\{0,1\} $$
$$ \int \frac{g(z)}{f_\theta^2(z)}\, \dot f_\theta(z)\, \dot f_\theta^t(z)\, dz - \int \frac{g(z)}{f_\theta(z)}\, \ddot f_\theta(z)\, dz, \quad \text{if } \gamma = 0 $$
$$ \int \log\Big(\frac{f_\theta(z)}{g(z)}\Big)\, \ddot f_\theta(z)\, dz + \int \frac{\dot f_\theta(z)\, \dot f_\theta^t(z)}{f_\theta(z)}\, dz, \quad \text{if } \gamma = 1. $$

When the true model g belongs to the parametric model (f_θ), hence g = f_{θ_0}, the gradient vector and the Hessian matrix of W_θ evaluated at θ = θ_0 simplify to

$$ \Big[\frac{\partial}{\partial\theta} W_\theta\Big]_{\theta=\theta_0} = 0 $$ (22)
$$ \Big[\frac{\partial^2}{\partial\theta^2} W_\theta\Big]_{\theta=\theta_0} = \ddot\varphi(1)\, I(\theta_0). $$ (23)
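For the power divergences this simplification can be checked directly (a short verification added here for completeness): differentiating φ_γ twice gives

$$ \ddot\varphi_\gamma(x) = x^{\gamma-2}, \qquad \ddot\varphi_0(x) = \frac{1}{x^2}, \qquad \ddot\varphi_1(x) = \frac{1}{x}, $$

so φ̈_γ(1) = 1 for every γ and the Hessian in Equation (23) reduces to the Fisher information matrix I(θ_0).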

The hypothesis that the true model g belongs to the parametric family (fθ) is the assumption made by Akaike [16]. Although this assumption is questionable in practice, it is useful because it provides the basis for the evaluation of the expected overall discrepancy (see also [23]).

Proposition 2

When the true model g belongs to the parametric model (f_θ), assuming that conditions (C.6) and (C.7) are fulfilled for g = f_{θ_0} and θ = θ_0, the expected overall discrepancy is given by

$$ E\big[W_{\widehat{\theta}}\big] = W_{\theta_0} + \frac{\ddot\varphi(1)}{2}\, E\big[(\widehat{\theta} - \theta_0)^t I(\theta_0)(\widehat{\theta} - \theta_0)\big] + E[R_n], $$ (24)

where R_n = o(‖θ̂ − θ_0‖²) and θ_0 is the true value of the parameter.

Proof

By applying a Taylor expansion to W_θ around the true parameter θ_0 and taking θ = θ̂, on the basis of Equations (22) and (23), we obtain

$$ W_{\widehat{\theta}} = W_{\theta_0} + \frac{\ddot\varphi(1)}{2}\, (\widehat{\theta} - \theta_0)^t I(\theta_0)(\widehat{\theta} - \theta_0) + R_n. $$

Taking expectations on both sides, Equation (24) is proved.

3.2. Estimation of the Expected Overall Discrepancy

In this section we construct an asymptotically unbiased estimator of the expected overall discrepancy, under the hypothesis that the true model g belongs to the parametric family (fθ).

For a given θ ∈ Θ, a natural estimator of Wθ is

$$ Q_\theta := \sup_{\alpha \in \Theta} \frac{1}{n} \sum_{i=1}^n m(\alpha, \theta, X_i), $$

where m (α, θ, x) is given by formula (6), which can also be expressed as

$$ Q_\theta = \frac{1}{n} \sum_{i=1}^n m(\widehat{\alpha}(\theta), \theta, X_i), $$

using the sample analogue of the dual representation of the divergence.

The following conditions allow differentiation under the integral sign for the integral term of Q_θ.

(C.8) There exists a neighborhood Nθ of θ such that

$$ \sup_{u \in N_\theta} \int \Big\| \frac{\partial}{\partial u} \Big[ \dot\varphi\Big(\frac{f_u}{f_{\widehat{\alpha}(u)}}\Big) f_u \Big] \Big\|\, d\lambda < \infty. $$

(C.9) There exists a neighborhood Nθ of θ such that

$$ \sup_{u \in N_\theta} \int \Big\| \frac{\partial^2}{\partial u^2} \Big[ \dot\varphi\Big(\frac{f_u}{f_{\widehat{\alpha}(u)}}\Big) f_u \Big] \Big\|\, d\lambda < \infty. $$

Lemma 2

Under (C.8) and (C.9), the gradient vector and the Hessian matrix of Qθ are

$$ \frac{\partial}{\partial\theta} Q_\theta = \frac{1}{n} \sum_{i=1}^n \frac{\partial}{\partial\theta} m(\widehat{\alpha}(\theta), \theta, X_i) $$ (30)
$$ \frac{\partial^2}{\partial\theta^2} Q_\theta = \frac{1}{n} \sum_{i=1}^n \frac{\partial^2}{\partial\theta^2} m(\widehat{\alpha}(\theta), \theta, X_i). $$ (31)

Proof

Since

$$ Q_\theta = \frac{1}{n} \sum_{i=1}^n m(\widehat{\alpha}(\theta), \theta, X_i), $$

differentiation yields

$$ \frac{\partial}{\partial\theta} Q_\theta = \Big[\frac{\partial \widehat{\alpha}(\theta)}{\partial\theta}\Big]^t \frac{1}{n} \sum_{i=1}^n \frac{\partial}{\partial\alpha} m(\widehat{\alpha}(\theta), \theta, X_i) + \frac{1}{n} \sum_{i=1}^n \frac{\partial}{\partial\theta} m(\widehat{\alpha}(\theta), \theta, X_i). $$

Note that, by its very definition, α̂(θ) is a solution of the equation

$$ \frac{1}{n} \sum_{i=1}^n \frac{\partial}{\partial\alpha} m(\alpha, \theta, X_i) = 0 $$

taken with respect to α, therefore

$$ \frac{\partial}{\partial\theta} Q_\theta = \frac{1}{n} \sum_{i=1}^n \frac{\partial}{\partial\theta} m(\widehat{\alpha}(\theta), \theta, X_i). $$

On the other hand,

$$ \frac{\partial^2}{\partial\theta^2} Q_\theta = \frac{\partial}{\partial\theta} \Big[ \frac{1}{n} \sum_{i=1}^n \frac{\partial}{\partial\theta} m(\widehat{\alpha}(\theta), \theta, X_i) \Big] $$
$$ = \frac{1}{n} \sum_{i=1}^n \frac{\partial^2}{\partial\theta^2} m(\widehat{\alpha}(\theta), \theta, X_i). $$

Proposition 3

Under conditions (C.1)–(C.3) and (C.8)–(C.9), and assuming that the integrals ∫ ‖(∂²/∂θ²) m(θ_0, θ_0, x)‖ f_{θ_0}(x) dx, ∫ ‖(∂³/∂θ²∂α) m(θ_0, θ_0, x)‖ f_{θ_0}(x) dx and ∫ ‖(∂³/∂θ³) m(θ_0, θ_0, x)‖ f_{θ_0}(x) dx are finite, the gradient vector and the Hessian matrix of Q_θ evaluated at θ = θ̂ satisfy

$$ \Big[\frac{\partial}{\partial\theta} Q_\theta\Big]_{\theta=\widehat{\theta}} = 0 $$ (38)
$$ \Big[\frac{\partial^2}{\partial\theta^2} Q_\theta\Big]_{\theta=\widehat{\theta}} = \ddot\varphi(1)\, I(\theta_0) + o_P(1). $$ (39)

Proof

By the very definition of θ̂, the equality (38) is verified. For the second relation, we take θ = θ̂ in Equation (31) and obtain

$$ \Big[\frac{\partial^2}{\partial\theta^2} Q_\theta\Big]_{\theta=\widehat{\theta}} = \frac{1}{n} \sum_{i=1}^n \frac{\partial^2}{\partial\theta^2} m(\widehat{\alpha}(\widehat{\theta}), \widehat{\theta}, X_i). $$

A Taylor expansion of (1/n) Σ_{i=1}^n (∂²/∂θ²) m(α, θ, X_i), as a function of (α, θ), around (θ_0, θ_0) yields

$$ \frac{1}{n} \sum_{i=1}^n \frac{\partial^2}{\partial\theta^2} m(\widehat{\alpha}(\widehat{\theta}), \widehat{\theta}, X_i) = \frac{1}{n} \sum_{i=1}^n \frac{\partial^2}{\partial\theta^2} m(\theta_0, \theta_0, X_i) + (\widehat{\alpha}(\widehat{\theta}) - \theta_0)^t\, \frac{1}{n} \sum_{i=1}^n \frac{\partial^3}{\partial\alpha\,\partial\theta^2} m(\theta_0, \theta_0, X_i) + (\widehat{\theta} - \theta_0)^t\, \frac{1}{n} \sum_{i=1}^n \frac{\partial^3}{\partial\theta^3} m(\theta_0, \theta_0, X_i) + o_P\big(\|\widehat{\alpha}(\widehat{\theta}) - \theta_0\| + \|\widehat{\theta} - \theta_0\|\big). $$

Using the fact that ∫ ‖(∂²/∂θ²) m(θ_0, θ_0, x)‖² f_{θ_0}(x) dx is finite, the weak law of large numbers leads to

$$ \frac{1}{n} \sum_{i=1}^n \frac{\partial^2}{\partial\theta^2} m(\theta_0, \theta_0, X_i) = \int \frac{\partial^2}{\partial\theta^2} m(\theta_0, \theta_0, x)\, f_{\theta_0}(x)\, dx + o_P(1) = \ddot\varphi(1)\, I(\theta_0) + o_P(1). $$

Then, since α̂(θ̂) − θ_0 = o_P(1) and θ̂ − θ_0 = o_P(1), and taking into account that ∫ ‖(∂³/∂θ²∂α) m(θ_0, θ_0, x)‖ f_{θ_0}(x) dx and ∫ ‖(∂³/∂θ³) m(θ_0, θ_0, x)‖ f_{θ_0}(x) dx are finite, we deduce that

$$ \frac{1}{n} \sum_{i=1}^n \frac{\partial^2}{\partial\theta^2} m(\widehat{\alpha}(\widehat{\theta}), \widehat{\theta}, X_i) = \ddot\varphi(1)\, I(\theta_0) + o_P(1). $$

Thus we obtain Equation (39).

In the following, we suppose that conditions of Proposition 1, Proposition 2 and Proposition 3 are all satisfied. These conditions allow obtaining an asymptotically unbiased estimator of the expected overall discrepancy.

Proposition 4

When the true model g belongs to the parametric model (f_θ), the expected overall discrepancy evaluated at θ̂ is given by

$$ E\big[W_{\widehat{\theta}}\big] = E\big[\, Q_{\widehat{\theta}} + \ddot\varphi(1)\, (\widehat{\theta} - \theta_0)^t I(\theta_0)(\widehat{\theta} - \theta_0) + R_n \,\big], $$

where R_n = o(‖θ_0 − θ̂‖²).

Proof

A Taylor expansion of Q_θ around θ̂ yields

$$ Q_\theta = Q_{\widehat{\theta}} + (\theta - \widehat{\theta})^t \Big[\frac{\partial}{\partial\theta} Q_\theta\Big]_{\theta=\widehat{\theta}} + \frac{1}{2} (\theta - \widehat{\theta})^t \Big[\frac{\partial^2}{\partial\theta^2} Q_\theta\Big]_{\theta=\widehat{\theta}} (\theta - \widehat{\theta}) + o\big(\|\theta - \widehat{\theta}\|^2\big) $$

and using Proposition 3, we have

$$ Q_\theta = Q_{\widehat{\theta}} + \frac{1}{2} (\theta - \widehat{\theta})^t \big[\ddot\varphi(1)\, I(\theta_0) + o_P(1)\big] (\theta - \widehat{\theta}) + o\big(\|\theta - \widehat{\theta}\|^2\big). $$

Taking θ = θ0, for large n, it holds

$$ Q_{\theta_0} = Q_{\widehat{\theta}} + \frac{\ddot\varphi(1)}{2}\, (\theta_0 - \widehat{\theta})^t I(\theta_0)(\theta_0 - \widehat{\theta}) + R_n $$

and consequently

$$ E\big[Q_{\theta_0}\big] = E\Big[\, Q_{\widehat{\theta}} + \frac{\ddot\varphi(1)}{2}\, (\widehat{\theta} - \theta_0)^t I(\theta_0)(\widehat{\theta} - \theta_0) + R_n \,\Big], $$

where R_n = o(‖θ_0 − θ̂‖²).

According to Proposition 2 it holds

$$ E\big[W_{\widehat{\theta}}\big] = W_{\theta_0} + \frac{\ddot\varphi(1)}{2}\, E\big[(\widehat{\theta} - \theta_0)^t I(\theta_0)(\widehat{\theta} - \theta_0)\big] + E[R_n]. $$

Note that

$$ E\big[Q_{\theta_0}\big] = W_{\theta_0} + o(1). $$

Then, combining Equation (48) with Equations (49) and (47), we get

$$ E\big[W_{\widehat{\theta}}\big] = E\big[\, Q_{\widehat{\theta}} + \ddot\varphi(1)\, (\widehat{\theta} - \theta_0)^t I(\theta_0)(\widehat{\theta} - \theta_0) + R_n \,\big]. $$

Proposition 4 shows that an asymptotically unbiased estimator of the expected overall discrepancy is given by

$$ Q_{\widehat{\theta}} + \ddot\varphi(1)\, (\widehat{\theta} - \theta_0)^t I(\theta_0)(\widehat{\theta} - \theta_0). $$

According to Proposition 1, √n(θ̂ − θ_0) is asymptotically distributed as N_p(0, I(θ_0)^{−1}). Consequently, n(θ̂ − θ_0)^t I(θ_0)(θ̂ − θ_0) has approximately a χ²_p distribution. Then, taking into account that n · o(‖θ̂ − θ_0‖²) = o_P(1), an asymptotically unbiased estimator of n times the expected overall discrepancy evaluated at θ̂ is provided by

$$ n\, Q_{\widehat{\theta}} + \ddot\varphi(1)\, p. $$
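As a rough illustration of how this criterion could be computed (continuing the toy sketches above; D_hat and theta_hat are our own illustrative helpers, not code from the paper), one evaluates n·Q_θ̂ + φ̈(1)·p for each candidate model and prefers the smallest value.

```python
def criterion(sample, p=1, phi_ddot_at_1=1.0):
    """n * Q_thetahat + phi''(1) * p; for Cressie-Read divergences phi''(1) = 1."""
    t_hat = theta_hat(sample)            # minimum dual divergence estimator
    q_hat, _ = D_hat(t_hat, sample)      # Q_thetahat, the estimated divergence at theta_hat
    return len(sample) * q_hat + phi_ddot_at_1 * p

print(criterion(sample, p=1))            # smaller values indicate a preferable candidate model
```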

3.3. Influence Functions

In the following, we compute the influence function of the statistic Q_θ̂. As is well known, the influence function is a useful tool for describing the robustness of an estimator. Recall that a map T, defined on a set of distribution functions and taking values in the parameter space, is a statistical functional corresponding to an estimator θ̂ of the parameter θ if θ̂ = T(F_n), where F_n is the empirical distribution function associated with the sample. The influence function of T at F_θ is defined by

$$ \mathrm{IF}(x; T, F_\theta) := \frac{\partial T(\widetilde{F}_{\varepsilon,x})}{\partial \varepsilon}\Big|_{\varepsilon = 0} $$

where F̃_{ε,x} := (1 − ε) F_θ + ε δ_x, ε > 0, δ_x being the Dirac measure putting all mass at x. Whenever the influence function is bounded with respect to x, the corresponding estimator is called robust (see [31]).

Since

$$ Q_{\widehat{\theta}} = \frac{1}{n} \sum_{i=1}^n m(\widehat{\alpha}(\widehat{\theta}), \widehat{\theta}, X_i), $$

the statistical functional corresponding to Q_θ̂, which we denote by U(·), is defined by

$$ U(F) := \int m\big(T_{V(F)}(F),\, V(F),\, y\big)\, dF(y), $$

where T_θ(F) is the statistical functional associated with the estimator α̂(θ) and V(F) is the statistical functional associated with the estimator θ̂.

Proposition 5

The influence function of Q_θ̂ is

$$ \mathrm{IF}(x; U, F_{\theta_0}) = \ddot\varphi(1)\, \frac{\dot f_{\theta_0}(x)}{f_{\theta_0}(x)}. $$ (60)

Proof

For the contaminated model F̃_{ε,x} := (1 − ε) F_{θ_0} + ε δ_x, it holds

$$ U(\widetilde{F}_{\varepsilon,x}) = (1-\varepsilon) \int m\big(T_{V(\widetilde{F}_{\varepsilon,x})}(\widetilde{F}_{\varepsilon,x}),\, V(\widetilde{F}_{\varepsilon,x}),\, y\big)\, dF_{\theta_0}(y) + \varepsilon\, m\big(T_{V(\widetilde{F}_{\varepsilon,x})}(\widetilde{F}_{\varepsilon,x}),\, V(\widetilde{F}_{\varepsilon,x}),\, x\big). $$

Differentiation with respect to ε, at ε = 0, yields

$$ \mathrm{IF}(x; U, F_{\theta_0}) = \Big[\frac{\partial}{\partial\varepsilon} T_{V(\widetilde{F}_{\varepsilon,x})}(\widetilde{F}_{\varepsilon,x})\Big|_{\varepsilon=0}\Big]^t \int \frac{\partial}{\partial\alpha} m(\theta_0, \theta_0, y)\, dF_{\theta_0}(y) + \mathrm{IF}(x; V, F_{\theta_0})^t \int \frac{\partial}{\partial\theta} m(\theta_0, \theta_0, y)\, dF_{\theta_0}(y) - \int m(\theta_0, \theta_0, y)\, dF_{\theta_0}(y) + m(\theta_0, \theta_0, x). $$

Note that m(θ_0, θ_0, y) = 0 for any y and ∫ (∂/∂α) m(θ_0, θ_0, y) dF_{θ_0}(y) = 0. Also, some straightforward calculations give

$$ \int \frac{\partial}{\partial\theta} m(\theta_0, \theta_0, y)\, dF_{\theta_0}(y) = \ddot\varphi(1)\, I(\theta_0). $$

On the other hand, according to the results presented in [12], the influence function of the minimum dual divergence estimator is

$$ \mathrm{IF}(x; V, F_{\theta_0}) = I(\theta_0)^{-1}\, \frac{\dot f_{\theta_0}(x)}{f_{\theta_0}(x)}. $$

Consequently, we obtain Equation (60).

Note that, for Cressie–Read divergences, it holds

$$ \mathrm{IF}(x; U, F_{\theta_0}) = \frac{\dot f_{\theta_0}(x)}{f_{\theta_0}(x)}, $$

irrespective of the divergence used, since φ̈_γ(1) = 1 for any γ.

In general, IF(x; U, F_{θ_0}) is not bounded with respect to x; therefore the statistic Q_θ̂ is not robust in the sense of a bounded influence function.
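For instance (an illustration not given in the paper), in the normal location model with unit variance,

$$ f_\theta(x) = \frac{1}{\sqrt{2\pi}}\, e^{-(x-\theta)^2/2}, \qquad \frac{\dot f_{\theta_0}(x)}{f_{\theta_0}(x)} = x - \theta_0, $$

so IF(x; U, F_{θ_0}) = φ̈(1)(x − θ_0) grows linearly in x, and an arbitrarily large outlier has an arbitrarily large effect on Q_θ̂.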

4. Conclusions

The dual representation of divergences and the corresponding minimum dual divergence estimators are useful tools in statistical inference. The theoretical results presented here show that, in the context of model selection, these tools provide asymptotically unbiased criteria. The criteria are not robust in the sense of a bounded influence function, but this fact does not exclude the stability of the criteria with respect to other robustness measures. The computation of Q_θ̂ could lead to serious difficulties, for example when choosing among various regression models; such difficulties stem from the double optimization in the criterion. Therefore, from a computational point of view, other existing model selection criteria could be preferred. On the other hand, efficient computational techniques for handling such a double optimization could also work in favor of the new criteria. These problems represent a topic for future research.

Acknowledgments

The author thanks the referees for a careful reading of the paper and for the suggestions leading to an improved version of the paper. This work was supported by a grant of the Romanian National Authority for Scientific Research, CNCS-UEFISCDI, project number PN-II-RU-TE-2012-3-0007.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Pardo, L. Statistical Inference Based on Divergence Measures; Chapman & Hall: Boca Raton, FL, USA, 2006.
2. Basu, A.; Shioya, H.; Park, C. Statistical Inference: The Minimum Distance Approach; Chapman & Hall: Boca Raton, FL, USA, 2011.
3. Beran, R. Minimum Hellinger distance estimates for parametric models. Ann. Stat. 1977, 5, 445–463.
4. Tamura, R.N.; Boos, D.D. Minimum Hellinger distance estimation for multivariate location and covariance. J. Am. Stat. Assoc. 1986, 81, 223–229.
5. Simpson, D.G. Minimum Hellinger distance estimation for the analysis of count data. J. Am. Stat. Assoc. 1987, 82, 802–807.
6. Simpson, D.G. Hellinger deviance tests: Efficiency, breakdown points, and examples. J. Am. Stat. Assoc. 1989, 84, 104–113.
7. Toma, A. Minimum Hellinger distance estimators for multivariate distributions from Johnson system. J. Stat. Plan. Inference 2008, 183, 803–816.
8. Lindsay, B.G. Efficiency versus robustness: The case of minimum Hellinger distance and related methods. Ann. Stat. 1994, 22, 1081–1114.
9. Basu, A.; Lindsay, B.G. Minimum disparity estimation for continuous models: Efficiency, distributions and robustness. Ann. Inst. Stat. Math. 1994, 46, 683–705.
10. Basu, A.; Harris, I.R.; Hjort, N.L.; Jones, M.C. Robust and efficient estimation by minimising a density power divergence. Biometrika 1998, 85, 549–559.
11. Broniatowski, M.; Keziou, A. Parametric estimation and tests through divergences and duality technique. J. Multivar. Anal. 2009, 100, 16–36.
12. Toma, A.; Broniatowski, M. Dual divergence estimators and tests: Robustness results. J. Multivar. Anal. 2011, 102, 20–36.
13. Toma, A.; Leoni-Aubin, S. Robust tests based on dual divergence estimators and saddlepoint approximations. J. Multivar. Anal. 2010, 101, 1143–1155.
14. Mallows, C.L. Some comments on Cp. Technometrics 1973, 15, 661–675.
15. Stone, M. Cross-validatory choice and assessment of statistical predictions. J. R. Stat. Soc. Ser. B 1974, 36, 111–147.
16. Akaike, H. Information theory and an extension of the maximum likelihood principle. In Proceedings of the Second International Symposium on Information Theory, Akademiai Kiado, Budapest, 1973; Petrov, B.N., Csaki, I.F., Eds.; pp. 267–281.
17. Schwarz, G. Estimating the dimension of a model. Ann. Stat. 1978, 6, 461–464.
18. Konishi, S.; Kitagawa, G. Generalised information criteria in model selection. Biometrika 1996, 83, 875–890.
19. Ronchetti, E. Robust model selection in regression. Stat. Probab. Lett. 1985, 3, 21–23.
20. Ronchetti, E.; Staudte, R.G. A robust version of Mallows' Cp. J. Am. Stat. Assoc. 1994, 89, 550–559.
21. Maronna, R.A.; Martin, R.D.; Yohai, V.J. Robust Statistics: Theory and Methods; Wiley: New York, NY, USA, 2006.
22. Karagrigoriou, A.; Mattheou, K.; Vonta, F. On asymptotic properties of AIC variants with applications. Open J. Stat. 2011, 1, 105–109.
23. Mattheou, K.; Lee, S.; Karagrigoriou, A. A model selection criterion based on the BHHJ measure of divergence. J. Stat. Plan. Inference 2009, 139, 228–235.
24. Cressie, N.; Read, T.R.C. Multinomial goodness of fit tests. J. R. Stat. Soc. Ser. B 1984, 46, 440–464.
25. Rüschendorf, L. On the minimum discrimination information theorem. Stat. Decis. 1984, 1, 163–283.
26. Liese, F.; Vajda, I. Convex Statistical Distances; BSB Teubner: Leipzig, Germany, 1987.
27. Toma, A.; Leoni-Aubin, S. Portfolio selection using minimum pseudodistance estimators. Econ. Comput. Econ. Cybern. Stud. Res. 2013, 46, 117–132.
28. Kallberg, D.; Leonenko, N.; Seleznjev, O. Statistical inference for Rényi entropy functionals. In Conceptual Modelling and Its Theoretical Foundations; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7260, pp. 36–51.
29. Preda, V.; Dedu, S.; Sheraz, M. New measure selection for Hunt-Devolder semi-Markov regime switching interest rate models. Physica A 2014, 407, 350–359.
30. Liese, F.; Vajda, I. On divergences and informations in statistics and information theory. IEEE Trans. Inf. Theory 2006, 52, 4394–4412.
31. Hampel, F.R.; Ronchetti, E.; Rousseeuw, P.J.; Stahel, W. Robust Statistics: The Approach Based on Influence Functions; Wiley: New York, NY, USA, 1986.
