Article

A Weighted Generalized Maximum Entropy Estimator with a Data-driven Weight

Department of Agricultural Economics, Texas A&M University, College Station, TX 77843-2124, USA
Entropy 2009, 11(4), 917-930; https://doi.org/10.3390/e11040917
Submission received: 24 September 2009 / Accepted: 16 November 2009 / Published: 26 November 2009
(This article belongs to the Special Issue Maximum Entropy)

Abstract

The method of Generalized Maximum Entropy (GME), proposed in Golan, Judge and Miller (1996), is an information-theoretic approach that is robust to the multicollinearity problem. It uses an objective function that is the sum of the entropies of the coefficient distributions and the disturbance distributions. This method can be generalized to the weighted GME (W-GME), where different weights are assigned to the two entropies in the objective function. We propose a data-driven method to select the weights in the entropy objective function, using least squares cross validation to derive the optimal weights. Monte Carlo simulations demonstrate that the proposed W-GME estimator is comparable to, and often outperforms, the conventional GME estimator, which places equal weights on the entropies of the coefficient and disturbance distributions.

1. Introduction

Jaynes’ Principle of Maximum Entropy provides a method of constructing distributions based on limited information. This approach, and its generalization through minimization of the cross entropy by Kullback, Leibler and others, has found widespread application in various fields of science. See, for example, [1] and references therein. As illustrated by Jaynes’ famous die problem, this principle provides a solution to the “ill-posed” inverse problem. Golan, Judge and Miller ([2], GJM henceforth) generalize this method to the regression framework. In particular, they reparameterize the coefficients and disturbances in a linear regression model as discrete random variables on bounded supports. The sum of the entropies of the coefficient and disturbance distributions is maximized subject to model consistency constraints. The coefficients of interest are then calculated as the expectations of the random variables on the prescribed supports under the distributions derived from the entropy maximization. They further generalize this so-called Generalized Maximum Entropy (GME) method to a weighted one, in which different weights are assigned to the entropies of the coefficient and disturbance distributions.
Although the specifications of the coefficient and disturbance supports can be guided by non-sample information and preliminary estimates, there is no clear guidance on how to select the weights placed on the entropies of the coefficient and disturbance distributions. In this study, we propose a data-driven method of selecting this weight parameter, which balances the two components of the entropy maximization objective function in an automatic, objective manner. We use least squares cross validation in our implementation of the proposed method. The results are shown to improve on the conventional GME estimator under various scenarios.

2. Generalized Maximum Entropy Estimator

In this section, we briefly review the literature on information entropy, the principle of maximum entropy and its applications to possibly ill-posed inverse problems. We then discuss the generalized maximum entropy estimator for linear regressions and its statistical properties.

2.1. The ME principle

Let $X$ be a random variable with possible outcome values $x_k$, $k = 1, \ldots, K$, and probabilities $p_k$ such that $\sum_{k=1}^{K} p_k = 1$. Reference [3] defined the information entropy of the distribution of probabilities $p = \{p_k\}_{k=1}^{K}$ as the measure
$$H(p) = -\sum_{k=1}^{K} p_k \log p_k$$
where $0 \log 0 \equiv 0$. The entropy measures the uncertainty of a distribution and reaches its maximum when $p_1 = p_2 = \cdots = p_K = 1/K$ or, in other words, when the probabilities are uniform.
Reference [4] proposed using the entropy concept in choosing the unknown distribution of probabilities. Under what Jaynes called the maximum entropy principle, one chooses the distribution for which the information (data) is just sufficient to determine the probability assignment. More precisely, one chooses the distribution, among those distributions consistent with known information, that maximizes the entropy. This maximum entropy formulation that is based on the work of [3] and [4] has been extended by [5], [6] and many others who are identified in the collection of papers in [7]. Axiomatic arguments for the justification of the ME principle have been made by [1], [8], [9] and [10]. See GJM for an in-depth review of this literature.
Suppose that $E[X] = y$. According to the ME principle, one can construct a density of $X$ by maximizing
$$H(p) = -p' \log p$$
subject to the data consistency and normalization-additivity requirements
$$y = X'p, \qquad p'\mathbf{1} = 1$$
where $X$ and $p$ are $K \times 1$ vectors, and $\mathbf{1}$ is a $K \times 1$ vector of ones. The analytical solution to the entropy maximization problem can be obtained from the Lagrangian function
$$L = -p'\log p + \lambda(y - X'p) + \mu(1 - p'\mathbf{1})$$
with optimality conditions
$$\partial L/\partial p = -\log \hat{p} - \mathbf{1} - X\hat{\lambda} - \hat{\mu}\mathbf{1} = 0, \qquad \partial L/\partial \lambda = y - X'\hat{p} = 0, \qquad \partial L/\partial \mu = 1 - \hat{p}'\mathbf{1} = 0$$
We can then solve for $\hat{p}$ in terms of $\hat{\lambda}$ to get
$$\hat{p} = \exp(-X\hat{\lambda}) / \Omega(\hat{\lambda})$$
where
$$\Omega(\hat{\lambda}) = \sum_k \exp(-x_k \hat{\lambda})$$
is a normalization factor that converts the relative probabilities into absolute probabilities.
Solution (1) establishes a unique non-linear relation between $\hat{p}$ and $y$ through $\hat{\lambda}$. Unlike conventional regression methods such as the least squares estimator, the ME method can be used for inference in so-called ill-posed problems. For instance, consider Jaynes’ famous die problem. Suppose that one is given a six-sided die that can take on the values $k = 1, 2, \ldots, 6$ and is asked to estimate the probability of each possible outcome, given that the average outcome from a large number of independent rolls of the die was $y$. The ME formulation of this problem is as follows:
$$\max_p \; H(p) = -\sum_{k=1}^{6} p_k \log p_k$$
subject to
$$\sum_{k=1}^{6} k\, p_k = y, \qquad \sum_{k=1}^{6} p_k = 1$$
This is an inverse problem with one observation (the mean) and six unknowns, and thus it is clearly ill-posed. Using the ME framework, one is able to assign a unique probability to each possible outcome. For example, when the average outcome is 3.5, the ME method assigns equal weights to all six outcomes. If the average outcome is larger/smaller than 3.5, the ME method “tilts” the distribution smoothly such that the weight on each side of the die increases/decreases with the number of dots on it.
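To make the mechanics concrete, the following is a minimal numerical sketch of the die problem, assuming SciPy's general-purpose SLSQP solver is acceptable; the function name max_entropy_die is illustrative and not part of the original formulation.

```python
# Minimal sketch of Jaynes' die problem: maximize entropy subject to a given
# mean y and the additivity constraint sum(p_k) = 1. Assumes NumPy and SciPy.
import numpy as np
from scipy.optimize import minimize

def max_entropy_die(y, K=6):
    k = np.arange(1, K + 1)

    def neg_entropy(p):
        p = np.clip(p, 1e-12, 1.0)      # handle 0*log(0) = 0 numerically
        return np.sum(p * np.log(p))    # minimizing this maximizes H(p)

    constraints = (
        {"type": "eq", "fun": lambda p: p.sum() - 1.0},  # additivity
        {"type": "eq", "fun": lambda p: k @ p - y},      # mean consistency
    )
    res = minimize(neg_entropy, np.full(K, 1.0 / K), bounds=[(0.0, 1.0)] * K,
                   constraints=constraints, method="SLSQP")
    return res.x

print(max_entropy_die(3.5))  # roughly uniform weights 1/6
print(max_entropy_die(4.5))  # distribution tilted toward higher faces
```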

2.2. The GME estimator

GJM generalizes the ME solution to the inverse problem to the regression framework. Consider the linear model
$$y = X\beta + e$$
where y is a T-dimensional vector of observables, X is a T × K design matrix, and β is a K-dimensional vector of unknown parameters. The unobservable disturbance vector e may represent one or more sources of noise in the observed system, including sample and non-sample errors in the data, randomness in the behavior of the economic agents, and specification or modeling errors.
GJM reparameterize model (2) such that the elements of $\beta$ are represented by expectations of random variables with compact supports. In particular, one can parameterize $\beta_k$ as a discrete random variable with a compact support and $M$ possible outcomes $z_k = [z_{k1}, \ldots, z_{kM}]'$, where $2 \le M < \infty$, and $z_{k1}$ and $z_{kM}$ are the plausible extreme values (lower and upper bounds) of $\beta_k$. We can express $\beta_k$ as the convex combination
$$\beta_k = z_k' p_k$$
where $p_k = [p_{k1}, \ldots, p_{kM}]'$ is an $M$-dimensional vector of positive weights that sum to one. Further, these convex combinations may be assembled in matrix form so that $\beta$ may be written as
$$\beta = Zp = \begin{bmatrix} z_1' & 0 & \cdots & 0 \\ 0 & z_2' & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & z_K' \end{bmatrix} \begin{bmatrix} p_1 \\ p_2 \\ \vdots \\ p_K \end{bmatrix}$$
where $Z$ is a $K \times KM$ matrix and $p$ is a $KM$-dimensional vector of weights.
Further assuming that $e$ is a random vector with finite location and scale parameters, one can represent the uncertainty about the outcome of the error process by treating each $e_t$ as a finite and discrete random variable with $2 \le J < \infty$ possible outcomes. Suppose that there exist sets of error bounds, $v_{t1}$ and $v_{tJ}$, for each $e_t$, so that $1 - \Pr[v_{t1} < e_t < v_{tJ}]$ may be made arbitrarily small. One can then write
$$e_t = v_t' w_t$$
where $v_t = [v_{t1}, \ldots, v_{tJ}]'$ is a finite support for $e_t$, and $w_t = [w_{t1}, \ldots, w_{tJ}]'$ is a $J$-dimensional vector of positive weights that sum to one. The $T$ unknown disturbances may be written in matrix form as
$$e = Vw = \begin{bmatrix} v_1' & 0 & \cdots & 0 \\ 0 & v_2' & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & v_T' \end{bmatrix} \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_T \end{bmatrix}$$
where $V$ is a $T \times TJ$ matrix and $w$ is a $TJ$-dimensional vector of weights, which are strictly positive and sum to one for each $t$.
Using the reparameterized unknowns, $\beta = Zp$ and $e = Vw$, one can rewrite model (2) as
$$y = X\beta + e = XZp + Vw$$
The Generalized Maximum Entropy (GME) estimator is then defined by
$$\max_{p, w} \; H(p, w) = -p' \log p - w' \log w$$
subject to
$$y = XZp + Vw, \qquad 1_K = (I_K \otimes 1_M') p, \qquad 1_T = (I_T \otimes 1_J') w$$
This optimization problem can be solved using the Lagrangian method. The Lagrangian equation takes the form
$$L = H(p, w) + \lambda'[y - XZp - Vw] + \theta'[1_K - (I_K \otimes 1_M') p] + \tau'[1_T - (I_T \otimes 1_J') w]$$
where $\lambda$, $\theta$ and $\tau$ are $T \times 1$, $K \times 1$ and $T \times 1$ vectors of Lagrange multipliers, respectively. Solving the first-order conditions yields
$$\hat{p}_{km} = \frac{\exp(-z_{km} x_k' \hat{\lambda})}{\Omega_k(\hat{\lambda})}, \qquad \hat{w}_{tj} = \frac{\exp(-v_{tj} \hat{\lambda}_t)}{\Psi_t(\hat{\lambda})}$$
where $x_k$ denotes the $k$-th column of $X$ and
$$\Omega_k(\hat{\lambda}) = \sum_{m=1}^{M} \exp(-z_{km} x_k' \hat{\lambda}), \qquad \Psi_t(\hat{\lambda}) = \sum_{j=1}^{J} \exp(-v_{tj} \hat{\lambda}_t)$$
Furthermore, this constrained optimization problem can be rewritten as an unconstrained one, in which the objective function takes the form
$$L(\lambda) = y'\lambda + \sum_{k=1}^{K} \log \Omega_k(\lambda) + \sum_{t=1}^{T} \log \Psi_t(\lambda) \equiv M(\lambda)$$
The minimal value function, $M(\lambda)$, may be interpreted as a constrained expected log-likelihood function. This dual version of the GME problem simplifies the estimation considerably. The analytical gradient of the dual problem,
$$\nabla_\lambda M(\lambda) = y - XZp - Vw$$
is simply the model consistency constraint. The Hessian matrix of $M(\lambda)$ takes the form
$$\nabla_{\lambda \lambda'} M(\lambda) = -XZ \nabla_{\lambda'} p(\lambda) - V \nabla_{\lambda'} w(\lambda) = X \Sigma_Z(\lambda) X' + \Sigma_V(\lambda)$$
where $\Sigma_Z(\lambda)$ and $\Sigma_V(\lambda)$ are covariance matrices for the distributions $p(\lambda)$ and $w(\lambda)$, respectively. Both covariance matrices are strictly positive definite for any interior solution $(\hat{p}, \hat{w})$, which ensures the uniqueness of the solution.
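To illustrate the dual formulation, the following is a minimal sketch of a GME estimator that minimizes $M(\lambda)$ with its analytical gradient, assuming uniform priors, a common coefficient support for every $\beta_k$ and a common error support for every $e_t$; the function name gme and the argument names z_support and v_support are illustrative, not from the paper.

```python
# Minimal sketch of GME estimation via the unconstrained dual M(lambda).
# Assumes NumPy/SciPy; z_support (length M) and v_support (length J) are the
# common coefficient and error supports (an illustrative simplification).
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def gme(y, X, z_support, v_support):
    T, K = X.shape

    def weights(lam):
        # p_km prop. to exp(-z_m x_k' lam), w_tj prop. to exp(-v_j lam_t)
        a = -np.outer(X.T @ lam, z_support)                      # (K, M)
        b = -np.outer(lam, v_support)                            # (T, J)
        p = np.exp(a - logsumexp(a, axis=1, keepdims=True))
        w = np.exp(b - logsumexp(b, axis=1, keepdims=True))
        return p, w

    def dual(lam):
        a = -np.outer(X.T @ lam, z_support)
        b = -np.outer(lam, v_support)
        return y @ lam + logsumexp(a, axis=1).sum() + logsumexp(b, axis=1).sum()

    def grad(lam):
        p, w = weights(lam)
        # gradient is the model-consistency residual y - XZp - Vw
        return y - X @ (p @ z_support) - w @ v_support

    res = minimize(dual, np.zeros(T), jac=grad, method="BFGS")
    p, _ = weights(res.x)
    return p @ z_support            # beta_hat: expectations over the supports
```

Given the positive definiteness noted above, this dual is convex in λ, so a standard quasi-Newton routine suffices for the sketch.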

2.3. Statistical properties of GME

Under some mild regularity conditions, GJM establish large sample properties of the GME estimation. They also analyze its small sample properties, both analytically for some special cases and numerically using Monte Carlo simulations.
The noise term, $Vw$, effectively “loosens” the model constraints for a given set of observations, and thus an interior solution is more likely. On the other hand, because of the presence of $\Sigma_V(\lambda)$, which is positive definite, in the Hessian matrix (5), the GME estimator behaves like the ridge estimator in the sense that all coefficients are shrunk toward zero. Consider, for simplicity, the case where $\mathrm{var}(e) = \sigma^2 I_T$ and $X$ is orthogonal. The approximate covariance matrix of the GME estimate $\hat{\beta}$ is
$$\sigma^2 \Sigma_Z (\Sigma_Z + \Sigma_V)^{-2} \Sigma_Z$$
The finite sample performance of this estimator clearly depends on the specification of the error support $V$. Intuitively, the wider $V$ is, the larger the degree of shrinkage toward zero. GJM proposed using the 3σ rule for the error support, where σ refers to the standard deviation of the disturbance. In practice, σ is replaced by a consistent estimate, such as one based on the OLS residuals.
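As an illustration of this rule, the sketch below builds a symmetric error support from the OLS residual standard error; the function name three_sigma_support and the choice of J = 5 support points are illustrative assumptions.

```python
# Minimal sketch of the 3-sigma error-support rule with sigma replaced by
# the OLS residual standard error. Assumes NumPy.
import numpy as np

def three_sigma_support(y, X, J=5):
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_ols
    sigma_hat = resid.std(ddof=X.shape[1])
    # J symmetric points spanning [-3*sigma_hat, 3*sigma_hat]
    return np.linspace(-3.0 * sigma_hat, 3.0 * sigma_hat, J)
```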
A second factor that may influence the finite sample performance of the GME estimator is the specification of the coefficient support, $Z$. The restrictions imposed on the parameter space through $Z$ reflect prior knowledge about the unknown parameters. However, such knowledge is not always available, and researchers may want to entertain a variety of plausible bounds on β. As the parameter supports are widened, the GME risk functions shift modestly upward, reflecting the weaker constraints on the parameter space. Hence, if one’s knowledge is minimal, wide bounds may be used, without extreme risk consequences, to ensure that $Z$ contains β. Intuitively, widening the bounds increases the impact of the data and decreases the impact of the support. On the other hand, narrowing the parameter supports improves the risk only as long as the true parameter vector is well in the interior of the support. GJM conducted Monte Carlo simulations on the impact of $Z$ by using different supports and found modest impacts of varying the parameter support on the estimation.
For both the coefficient and error supports, we need to select the number of points, M and J, respectively. Since the variances of the distributions p ( λ ) and w ( λ ) depend on the specifications of the supports, the dimension of the supports may affect the sampling properties of the estimator. Adding more points to the support of Z should decrease the variance of the associated point estimator. On the other hand, it increases the computational burden of the optimization problem. GJM reported an experiment showing that the estimator improves as the number of support points M increases for small and modest M. The greatest improvement is observed when M is increased from three to five.
GJM demonstrate various merits of the GME estimator, especially its resistance to the multicollinearity problem. The implementation of the GME estimation, however, requires several “human” decisions which are not required in the OLS. GJM provide some guidance on the specification of these factors. First, non-sample information can be useful. For example, it is not uncommon in practice that the sign, range or approximate magnitude of the coefficients in question is known a priori. This information provides useful guidance on the specification of the coefficient support. Similarly, non-sample information regarding the error distribution is sometimes available. For instance, it is well known that the error distributions in financial studies have fat tails. Accordingly, one can use a wider error support than the usual 3σ rule implies.
A second useful principle is adaptation. Generally, any consistent estimator can provide useful information on the coefficients and the distribution of the disturbances. Thus, one can tailor the specification of the coefficient and error supports based on preliminary consistent estimates. For example, to use the usual 3σ rule, one can replace σ with a consistent estimate. In the spirit of adaptive estimation, one can further tailor the error support such that it reflects characterizations of the error distribution, such as skewness, fat-tailedness, and so on.
Lastly, the maximum entropy problem can be further generalized to the minimum cross entropy problem. The cross entropy, or Kullback-Leibler information criterion, for two distributions $p$ and $q$ (with a common support) is defined as
$$D(p, q) = \sum_k p_k \log(p_k / q_k)$$
The cross entropy measures the discrepancy between $p$ and $q$. If, in addition to the model consistency requirement, prior information is available in the form of probability distributions on the discrete supports for the coefficients and disturbances, it can be incorporated into the estimation by minimizing the cross entropy subject to the model consistency and additivity constraints. The ME principle is a special case of the minimum cross entropy principle, with the prior distributions set to uniform.

3. The Weighted GME Estimator with a Data-driven Weight

3.1. The weighted GME estimator (W-GME)

As discussed above, the specifications of the coefficient and error supports may affect the GME estimation results. In addition, the specification of the dual loss objective function (3) can also influence the estimator. By accounting for the unknown signal and noise components in the consistency relations, the GME estimates of the unknown parameters β and disturbances e are jointly determined. As a result, the entropy-based objective function reflects statistical losses in both the sample space (prediction) and the parameter space (precision). It is noted, however, that the objective function (3) implicitly places equal weights on the parameter and error entropies.
To avoid arbitrarily assigning weights to the two loss components, GJM suggested a weighted GME (W-GME) estimator with the following objective function
$$H(p, w; \gamma) = -(1 - \gamma)\, p' \log p - \gamma\, w' \log w$$
where $\gamma \in (0, 1)$ controls the weights given to the two entropies. The corresponding unconstrained weighted GME(γ) objective function is
$$M(\lambda; \gamma) = y'\lambda + (1 - \gamma) \sum_{k=1}^{K} \log \Omega_k(\lambda; \gamma) + \gamma \sum_{t=1}^{T} \log \Psi_t(\lambda; \gamma)$$
One can then show that
$$\hat{p}_{km} = \frac{\exp\!\big(-z_{km} x_k' \hat{\lambda} / (1 - \gamma)\big)}{\Omega_k(\hat{\lambda}; \gamma)}, \qquad \hat{w}_{tj} = \frac{\exp\!\big(-v_{tj} \hat{\lambda}_t / \gamma\big)}{\Psi_t(\hat{\lambda}; \gamma)}$$
where $\hat{\lambda}$ is a function of γ, and
$$\Omega_k(\hat{\lambda}; \gamma) = \sum_{m=1}^{M} \exp\!\big(-z_{km} x_k' \hat{\lambda} / (1 - \gamma)\big), \qquad \Psi_t(\hat{\lambda}; \gamma) = \sum_{j=1}^{J} \exp\!\big(-v_{tj} \hat{\lambda}_t / \gamma\big)$$
GJM illustrated that the entropy optimization results are affected by γ. Furthermore, they reported that the effect of the weight on the estimation results cannot be determined unambiguously even for some very simple cases.
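To show how the weight enters the computation, the following is a minimal sketch of the weighted dual $M(\lambda; \gamma)$, under the same simplifying assumptions as the GME sketch in Section 2.2 (uniform priors, common supports); the function name wgme is illustrative.

```python
# Minimal sketch of W-GME estimation via the weighted dual M(lambda; gamma).
# Assumes NumPy/SciPy and common supports z_support, v_support as before.
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def wgme(y, X, gamma, z_support, v_support):
    T, K = X.shape

    def parts(lam):
        a = -np.outer(X.T @ lam, z_support) / (1.0 - gamma)   # coefficient part
        b = -np.outer(lam, v_support) / gamma                 # disturbance part
        return a, b

    def dual(lam):
        a, b = parts(lam)
        return (y @ lam
                + (1.0 - gamma) * logsumexp(a, axis=1).sum()
                + gamma * logsumexp(b, axis=1).sum())

    def grad(lam):
        a, b = parts(lam)
        p = np.exp(a - logsumexp(a, axis=1, keepdims=True))
        w = np.exp(b - logsumexp(b, axis=1, keepdims=True))
        return y - X @ (p @ z_support) - w @ v_support        # consistency residual

    res = minimize(dual, np.zeros(T), jac=grad, method="BFGS")
    a, _ = parts(res.x)
    p = np.exp(a - logsumexp(a, axis=1, keepdims=True))
    return p @ z_support                                      # beta_hat(gamma)
```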

3.2. W-GME with a data-driven weight

GJM show that one can use non-sample information and preliminary estimates to aid the specification of the coefficient and error supports. The prior distributions on these supports can be further “tilted” exponentially by non-uniform prior distributions incorporated through the minimum cross entropy framework. On the other hand, there is no clear guidance on the selection of γ in the W-GME problem. In this section, we propose a data-driven method to select γ for the W-GME estimator.
Specifically, we use the method of least squares cross-validation (LSCV), which is widely used in nonparametric estimation. This method is implemented as follows:
  • Given the coefficient support $Z$, disturbance support $V$, and weight $\gamma \in (0, 1)$, estimate β using the W-GME method (6) on $T - 1$ observations, with the $t$-th observation omitted, for $t = 1, \ldots, T$. Denote each estimate $\hat{\beta}_{-t}(\gamma)$. For simplicity, we use uniform prior distributions for $Z$ and $V$.
  • Calculate the squared prediction error $\hat{s}_t(\gamma) = (y_t - x_t' \hat{\beta}_{-t}(\gamma))^2$ for each $t$.
  • Select γ such that it minimizes the sum of the squared prediction errors $\sum_{t=1}^{T} \hat{s}_t(\gamma)$.
The LSCV method is known to produce asymptotically consistent estimators for the optimal smoothing or regularization parameter ([11], [12] and [13]). By doing so, we allow the data to specify the weight for the coefficient uncertainty and the disturbance uncertainty.
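A minimal sketch of this LSCV weight selection is given below; it reuses the illustrative wgme() function sketched in Section 3.1 and an illustrative uniform grid over (0, 1) (Note 2 describes the log-scale grid actually used in the simulations).

```python
# Minimal sketch of leave-one-out LSCV selection of gamma, reusing the
# illustrative wgme() sketch from Section 3.1. Assumes NumPy.
import numpy as np

def lscv_gamma(y, X, z_support, v_support, grid=None):
    T = len(y)
    grid = np.linspace(0.05, 0.95, 19) if grid is None else grid
    cv = []
    for gamma in grid:
        sse = 0.0
        for t in range(T):
            keep = np.arange(T) != t                        # omit observation t
            beta_t = wgme(y[keep], X[keep], gamma, z_support, v_support)
            sse += (y[t] - X[t] @ beta_t) ** 2              # squared prediction error
        cv.append(sse)
    return grid[int(np.argmin(cv))]                         # data-driven weight
```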

4. Monte Carlo Simulations

To investigate the finite sample performance of the proposed W-GME method, we conducted a set of Monte Carlo simulations. The purpose of these experiments is to compare the W-GME with the conventional GME, which places equal weights on the entropy of the coefficient distribution and that of the disturbance distribution. It is not intended as an investigation of the GME method itself, where careful selection of the supports for the coefficients and the disturbances is crucial to its performance. For simplicity, we choose not to use non-sample information in the specification of the coefficient and error supports, and to use the simple GME approach, i.e., uniform prior distributions in the minimum cross entropy framework.
Following GJM’s Monte Carlo simulation setup, we investigate the performance of the W-GME on linear models where the design matrices vary in degree of multicollinearity. Recall that the GME is similar to the ridge regression as a robust estimator against multicollinearity. Thus we are interested in its performance in the presence of multicollinearity.
We measure a matrix’s degree of multicollinearity using its condition number, defined as the ratio between its largest and smallest eigenvalues. Let $X$ be a $T \times 4$ matrix generated randomly from an i.i.d. standard normal distribution. To form a design matrix with a desired condition number, $\kappa(X'X) = \mu$, the singular value decomposition $X = QLR'$ was computed. The singular values in $L$ were then replaced with the vector
$$a = \left[\sqrt{\frac{2}{1+\mu}},\; 1,\; 1,\; \sqrt{\frac{2\mu}{1+\mu}}\right]$$
which has length $K = 4$. The new design matrix, $X_a = QL_aR'$, is characterized by $\kappa(X_a' X_a) = \mu$, so the condition number may be specified a priori.¹ We then set
$$y = X_a \beta + e$$
where $\beta = [2, 1, 3, 2]'$ and $e$ is a vector of $T$ i.i.d. random errors.
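The sketch below reproduces this construction, assuming the square-root scaling of the replaced singular values given above; the function name make_design is illustrative.

```python
# Minimal sketch of generating a T x 4 design matrix with condition number
# kappa(X'X) = mu by replacing the singular values of a random normal matrix.
import numpy as np

def make_design(T, mu, rng=None):
    rng = np.random.default_rng(rng)
    X = rng.standard_normal((T, 4))
    Q, _, Rt = np.linalg.svd(X, full_matrices=False)        # X = Q diag(L) R'
    a = np.array([np.sqrt(2.0 / (1.0 + mu)), 1.0, 1.0,
                  np.sqrt(2.0 * mu / (1.0 + mu))])          # new singular values
    return Q @ np.diag(a) @ Rt

Xa = make_design(T=30, mu=50, rng=0)
eig = np.linalg.eigvalsh(Xa.T @ Xa)
print(eig.max() / eig.min())                                # approximately 50
```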
In the Monte Carlo simulations, we consider three estimators: the OLS, the GME and the W-GME. We consider sample sizes $n = 30$ and $n = 50$, with 500 samples generated for each case. We use the LSCV to select the optimal weight γ; in particular, we use a line search over the interval $(0, 1)$ to locate the γ that minimizes the sum of squared prediction errors.²
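For reference, a small sketch of the log-scale grid described in Note 2, assuming γ = exp(ρ) over 16 equally spaced values of ρ:

```python
# Minimal sketch of the 16-point log-scale search grid from Note 2.
import numpy as np

rho = np.linspace(np.log(0.01), np.log(0.99), 16)   # equally spaced in logs
gamma_grid = np.exp(rho)                            # values in (0, 1), denser near 0
```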

4.1. Regressions with normal errors

First, we assume that $e$ are i.i.d. standard normal random errors. For both GME estimators, we use a five-point coefficient support $Z = [-z, -z/2, 0, z/2, z]$ for $z = 10, 20, 30, 50, 100$, respectively. For the error support, we set $V = [-\hat{\sigma}, -\hat{\sigma}/2, 0, \hat{\sigma}/2, \hat{\sigma}] \times 3$, where $\hat{\sigma}$ is the standard error of the OLS residuals. We report the Mean Squared Errors (MSE) of the coefficient estimates, $||\hat{\beta} - \beta||^2$, in Table 1. When $\mu = 1$, the MSE of the OLS is close to 4, its theoretical value, in all cases. Not surprisingly, when $X$ is orthogonal, the OLS outperforms both GME estimators, which are shrinkage estimators and thus biased. On the other hand, in most cases where $\mu > 1$, the two GME estimators have smaller MSEs than the OLS does. This is consistent with the famous Stein phenomenon that the OLS is dominated by some shrinkage estimators in multiple linear regressions.
Comparing the GME and W-GME, we note that when $z = 10$, i.e., when the coefficient support is defined on $[-10, 10]$, the MSEs of the GME are smaller than those of the W-GME. This result suggests that when a relatively precise coefficient support is used, the GME estimator has a smaller risk. Intuitively, with a narrow support for the coefficients that covers the true values, the coefficients can be estimated precisely regardless of the choice of the weight γ in a weighted GME framework. On the other hand, the potential benefit of the W-GME is largely offset by the additional variation entailed by the data-driven selection of the entropy weight γ. However, in practice, the improvement due to narrow coefficient supports is only obtainable if the supports contain the true unknown coefficient values. Without prior or non-sample information, using a narrow coefficient support increases the risk of missing the true values and rendering the estimator inconsistent.
When $z \ge 20$, the W-GME outperforms the GME considerably. Furthermore, the performance of the W-GME relative to that of the GME improves with both the width of the coefficient support and the condition number. The average ratios between the MSEs of the W-GME and those of the GME across the two sample sizes are [1.11, 0.87, 0.78, 0.69, 0.70] for $z = [10, 20, 30, 50, 100]$, respectively, while these ratios are [1.00, 0.87, 0.76, 0.69] across the condition numbers $\mu = [1, 10, 20, 50]$. In addition, it is noted that the performance of the W-GME appears to stabilize for $z \ge 50$. In other words, its performance is affected little when a wide coefficient support is widened further. In contrast, the MSE of the GME increases with the coefficient support and reaches the level of the OLS for $z \ge 50$. Given that a narrow coefficient support increases the risk of inconsistency, the stability of the W-GME over a wide range of coefficient supports is highly desirable.
Table 1. MSE of regressions with normal errors.

| z   | κ  | n=30: OLS | n=30: W-GME ($\hat{\gamma}$) | n=30: GME | n=50: OLS | n=50: W-GME ($\hat{\gamma}$) | n=50: GME |
|-----|----|-----------|------------------------------|-----------|-----------|------------------------------|-----------|
| 10  | 1  | 3.84      | 4.06 (0.26)                  | 3.57      | 3.84      | 4.26 (0.24)                  | 3.67      |
| 10  | 10 | 7.38      | 5.97 (0.24)                  | 5.37      | 6.57      | 5.77 (0.23)                  | 4.91      |
| 10  | 20 | 10.83     | 7.29 (0.24)                  | 6.78      | 10.63     | 7.06 (0.22)                  | 6.49      |
| 10  | 50 | 19.85     | 8.28 (0.23)                  | 7.99      | 21.29     | 8.15 (0.23)                  | 7.50      |
| 20  | 1  | 3.94      | 4.25 (0.08)                  | 4.03      | 4.07      | 4.28 (0.08)                  | 4.18      |
| 20  | 10 | 8.35      | 6.81 (0.11)                  | 7.59      | 7.55      | 6.55 (0.09)                  | 7.27      |
| 20  | 20 | 13.00     | 8.72 (0.11)                  | 10.69     | 13.85     | 8.63 (0.10)                  | 11.13     |
| 20  | 50 | 27.58     | 12.81 (0.14)                 | 16.05     | 25.91     | 11.88 (0.12)                 | 16.27     |
| 30  | 1  | 3.86      | 4.00 (0.04)                  | 3.92      | 4.10      | 4.42 (0.04)                  | 4.53      |
| 30  | 10 | 7.73      | 6.07 (0.05)                  | 7.49      | 8.11      | 6.73 (0.05)                  | 8.21      |
| 30  | 20 | 13.31     | 8.29 (0.07)                  | 12.11     | 12.35     | 7.90 (0.06)                  | 11.33     |
| 30  | 50 | 26.89     | 12.67 (0.08)                 | 21.09     | 26.70     | 12.75 (0.08)                 | 21.01     |
| 50  | 1  | 4.16      | 3.89 (0.02)                  | 4.28      | 3.92      | 3.98 (0.02)                  | 4.35      |
| 50  | 10 | 7.44      | 5.55 (0.02)                  | 7.58      | 8.70      | 6.91 (0.02)                  | 9.90      |
| 50  | 20 | 13.14     | 8.19 (0.03)                  | 12.98     | 12.78     | 8.21 (0.03)                  | 13.58     |
| 50  | 50 | 28.40     | 13.87 (0.05)                 | 26.65     | 31.14     | 15.29 (0.04)                 | 28.83     |
| 100 | 1  | 4.02      | 3.82 (0.02)                  | 4.15      | 3.74      | 4.39 (0.01)                  | 4.74      |
| 100 | 10 | 7.71      | 6.04 (0.02)                  | 7.93      | 8.03      | 6.64 (0.01)                  | 8.81      |
| 100 | 20 | 12.21     | 7.93 (0.02)                  | 12.14     | 12.77     | 8.32 (0.02)                  | 13.47     |
| 100 | 50 | 26.33     | 13.52 (0.03)                 | 26.38     | 26.98     | 12.84 (0.02)                 | 27.20     |
Next we turn our attention to the empirically determined weight $\hat{\gamma}$ in the weighted entropy objective function. For each experiment, the average $\hat{\gamma}$ is reported in parentheses for the W-GME estimator. We observe two noteworthy features. First, $\hat{\gamma}$ generally increases with μ. Recall that γ is the weight placed on the entropy of the disturbance distributions. Thus, the more severe the “ill-posed” problem is, the larger the weight selected by the LSCV. In other words, the data-driven method automatically relaxes the model consistency constraints as the underlying linear inverse problem associated with the OLS becomes more problematic. Second, $\hat{\gamma}$ decreases with the width of the coefficient support across all condition numbers. Intuitively, the wider the coefficient support, the weaker the restrictions imposed by the GME estimation procedure, and correspondingly the smaller the need to regulate the entropy, or uncertainty, of the disturbance distribution.³
Lastly, we note that the overall performance of the estimators in question remains quite stable when the sample size is increased from 30 to 50. The average ratios, across all cases, of the MSE of the W-GME to that of the GME are 0.834 and 0.828 for $n = 30$ and 50, respectively. It is well known that data-driven methods normally require a sizeable sample to attain their theoretical advantages. Nonetheless, our results demonstrate that the W-GME can outperform the GME with quite small sample sizes under various scenarios.
Table 2. MSE of regressions with χ²(4) errors.

| z   | κ  | n=30: OLS | n=30: W-GME ($\hat{\gamma}$) | n=30: GME | n=50: OLS | n=50: W-GME ($\hat{\gamma}$) | n=50: GME |
|-----|----|-----------|------------------------------|-----------|-----------|------------------------------|-----------|
| 10  | 1  | 3.98      | 3.41 (0.42)                  | 2.81      | 4.08      | 3.43 (0.44)                  | 2.73      |
| 10  | 10 | 8.07      | 5.64 (0.41)                  | 4.18      | 7.75      | 5.06 (0.45)                  | 3.67      |
| 10  | 20 | 12.17     | 6.50 (0.40)                  | 4.99      | 12.16     | 5.88 (0.43)                  | 4.69      |
| 10  | 50 | 29.33     | 7.70 (0.44)                  | 5.66      | 27.46     | 7.09 (0.44)                  | 5.55      |
| 20  | 1  | 4.03      | 3.54 (0.18)                  | 3.18      | 3.92      | 3.39 (0.19)                  | 2.92      |
| 20  | 10 | 8.53      | 5.66 (0.19)                  | 5.40      | 7.79      | 5.26 (0.20)                  | 4.83      |
| 20  | 20 | 12.12     | 7.27 (0.18)                  | 6.76      | 12.58     | 7.16 (0.21)                  | 6.08      |
| 20  | 50 | 28.15     | 11.22 (0.22)                 | 9.23      | 28.61     | 10.82 (0.22)                 | 8.29      |
| 30  | 1  | 4.21      | 3.62 (0.11)                  | 3.44      | 4.34      | 3.60 (0.10)                  | 3.46      |
| 30  | 10 | 8.09      | 5.66 (0.12)                  | 5.82      | 8.41      | 5.85 (0.12)                  | 6.05      |
| 30  | 20 | 13.60     | 7.58 (0.12)                  | 9.00      | 13.76     | 7.91 (0.13)                  | 8.76      |
| 30  | 50 | 26.99     | 11.45 (0.15)                 | 13.06     | 25.87     | 10.95 (0.15)                 | 11.57     |
| 50  | 1  | 4.11      | 3.46 (0.05)                  | 3.35      | 3.90      | 3.23 (0.04)                  | 3.14      |
| 50  | 10 | 8.58      | 5.48 (0.05)                  | 6.79      | 7.76      | 5.36 (0.05)                  | 6.01      |
| 50  | 20 | 13.15     | 6.97 (0.06)                  | 10.27     | 13.34     | 7.39 (0.06)                  | 9.89      |
| 50  | 50 | 27.81     | 11.16 (0.08)                 | 17.84     | 28.01     | 11.28 (0.08)                 | 17.47     |
| 100 | 1  | 3.90      | 3.02 (0.02)                  | 3.45      | 4.00      | 2.94 (0.01)                  | 3.25      |
| 100 | 10 | 7.55      | 4.70 (0.02)                  | 6.35      | 8.54      | 5.12 (0.02)                  | 6.65      |
| 100 | 20 | 13.62     | 6.86 (0.02)                  | 11.49     | 13.52     | 6.94 (0.03)                  | 10.81     |
| 100 | 50 | 28.27     | 11.29 (0.04)                 | 22.14     | 29.26     | 12.03 (0.04)                 | 21.93     |
Table 3. MSE of regressions with t(3) errors.

| z   | κ  | n=30: OLS | n=30: W-GME ($\hat{\gamma}$) | n=30: GME | n=50: OLS | n=50: W-GME ($\hat{\gamma}$) | n=50: GME |
|-----|----|-----------|------------------------------|-----------|-----------|------------------------------|-----------|
| 10  | 1  | 3.84      | 4.09 (0.47)                  | 3.61      | 3.36      | 4.71 (0.44)                  | 4.02      |
| 10  | 10 | 7.01      | 5.68 (0.44)                  | 4.98      | 6.35      | 6.47 (0.42)                  | 5.49      |
| 10  | 20 | 10.65     | 6.41 (0.44)                  | 5.47      | 10.31     | 7.31 (0.42)                  | 5.91      |
| 10  | 50 | 20.90     | 7.03 (0.44)                  | 6.28      | 20.58     | 7.98 (0.43)                  | 6.93      |
| 20  | 1  | 3.84      | 4.11 (0.21)                  | 4.54      | 3.70      | 6.91 (0.18)                  | 6.92      |
| 20  | 10 | 7.46      | 6.07 (0.21)                  | 6.30      | 7.18      | 8.12 (0.20)                  | 8.95      |
| 20  | 20 | 11.02     | 6.83 (0.21)                  | 7.48      | 11.11     | 9.70 (0.20)                  | 10.69     |
| 20  | 50 | 24.31     | 9.98 (0.24)                  | 9.37      | 24.20     | 11.64 (0.22)                 | 12.03     |
| 30  | 1  | 4.60      | 4.33 (0.21)                  | 5.39      | 3.59      | 6.23 (0.18)                  | 6.50      |
| 30  | 10 | 7.83      | 5.91 (0.21)                  | 7.82      | 7.19      | 9.00 (0.20)                  | 10.77     |
| 30  | 20 | 12.03     | 7.73 (0.21)                  | 9.84      | 11.38     | 9.71 (0.20)                  | 13.32     |
| 30  | 50 | 23.92     | 10.75 (0.24)                 | 13.89     | 25.65     | 15.82 (0.22)                 | 19.98     |
| 50  | 1  | 5.64      | 5.29 (0.05)                  | 7.15      | 3.78      | 8.32 (0.04)                  | 8.76      |
| 50  | 10 | 7.86      | 5.74 (0.05)                  | 9.28      | 7.80      | 12.54 (0.05)                 | 17.37     |
| 50  | 20 | 13.41     | 8.18 (0.06)                  | 13.91     | 11.89     | 9.69 (0.05)                  | 16.53     |
| 50  | 50 | 24.07     | 11.05 (0.08)                 | 21.30     | 25.82     | 20.27 (0.08)                 | 32.39     |
| 100 | 1  | 3.96      | 4.01 (0.02)                  | 5.19      | 3.80      | 11.75 (0.01)                 | 12.88     |
| 100 | 10 | 6.92      | 5.33 (0.02)                  | 8.68      | 8.31      | 14.55 (0.02)                 | 19.93     |
| 100 | 20 | 14.25     | 7.78 (0.02)                  | 17.64     | 11.64     | 17.64 (0.02)                 | 25.50     |
| 100 | 50 | 32.09     | 14.24 (0.03)                 | 33.82     | 28.35     | 41.85 (0.04)                 | 59.32     |

4.2. Regressions with non-normal errors

Next we investigate the performance of the proposed estimator when the errors are generated from non-normal distributions. Using the same sample design outlined above, we generated the errors instead from a $\chi^2(4)$ and a $t(3)$ distribution. The $\chi^2(4)$ errors were centered by subtracting the mean (i.e., 4), and all drawings were scaled to have unit variance by dividing each by the associated standard deviation ($\sqrt{8}$ for the $\chi^2(4)$ errors and $\sqrt{3}$ for the $t(3)$ errors). Under the $\chi^2$ error distribution, we set the disturbance support to $V = [-\hat{\sigma}, -\hat{\sigma}/2, 0, \hat{\sigma}, 2\hat{\sigma}] \times 3$ to account for the skewness of the $\chi^2$ distribution.⁴ When the disturbance terms were generated from the $t$ distribution, instead of using the 3σ rule, we set $V = [-\hat{\sigma}, -\hat{\sigma}/2, 0, \hat{\sigma}/2, \hat{\sigma}] \times 5$ to account for the fat-tailedness of the error distribution.
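As a small illustration, the error supports described above can be written as follows, assuming sigma_hat is the OLS residual standard error; the function names are illustrative.

```python
# Minimal sketch of the error supports for the non-normal designs.
import numpy as np

def chi2_error_support(sigma_hat):
    # asymmetric support, scaled by 3, to accommodate right skewness
    return 3.0 * sigma_hat * np.array([-1.0, -0.5, 0.0, 1.0, 2.0])

def t_error_support(sigma_hat):
    # wider symmetric support, scaled by 5, to accommodate fat tails
    return 5.0 * sigma_hat * np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
```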
The estimation results for the $\chi^2$ case are reported in Table 2. The overall pattern is similar to that with normal errors. With narrow coefficient supports, the GME has a smaller MSE than the W-GME. On the other hand, when $z \ge 30$, the W-GME outperforms the GME, and the performance gap generally increases with the condition number. The performance is quite similar between $n = 30$ and $n = 50$. On the other hand, the average γ is larger than that for the normal error case, indicating a heavier penalty on the uncertainty in the error distribution when it is skewed.
Lastly, Table 3 reports the results for t error distributions. With n = 30 , the overall pattern is again similar to those of the first two cases. A noteworthy difference is that the MSEs for the two GME estimators increase substantially when the sample size is raised to 50. In contrast, the OLS does not seem to be affected by the change in sample size. Nonetheless, except when the condition number is small or a very wide coefficient support is used (i.e., z = 100 ), the W-GME still outperforms the OLS. We also note that the weight γ is larger than that chosen under normal errors.

5. Concluding Remarks

The Generalized Maximum Entropy (GME) estimator is a robust estimator that is resistant to multicollinearity. Like other robust estimators, it requires the specification of some “tuning” parameters. In particular, it requires users to specify discrete supports for the coefficients and disturbances. In the more general weighted GME framework, one also needs to specify a parameter that determines the relative weight placed on the entropies of the coefficient and error distributions. Although the specifications of the coefficient and error supports can be guided by non-sample information and preliminary estimates, there is no clear guidance on the selection of the weight in a weighted GME estimation.
In this study, we have presented a weighted GME estimator with a data-driven weight. The conventional GME estimator places equal weights on the entropies of the coefficient and disturbance distributions. Instead, we propose to use the method of least squares cross validation to select this weight in a data-driven manner. We demonstrate numerically that the proposed W-GME estimator provides superior performance under various scenarios. Combining the data-driven selection of the weight parameter with automatic specification of the supports for the coefficients and errors, to achieve adaptiveness and further improvement, is an interesting topic for future studies.
Notes

  • 1. A high condition number indicates a high degree of multicollinearity, and vice versa. A condition number of one signifies that the columns of the matrix in question are orthogonal to each other.
  • 2. We searched over an equally spaced grid $\rho = [\log(0.01), \log(0.01) + h, \log(0.01) + 2h, \ldots, \log(0.99)]$, where $h = (\log(0.99) - \log(0.01))/15$, and γ is set to $\exp(\rho)$.
  • 3. Recall that the GME estimator implicitly assumes a uniform prior distribution over the error support. Since we use a symmetric error support centered at zero, a uniform distribution over this support implies a zero prior mean for each disturbance. A wider coefficient support means less restrictive constraints on β and thus a smaller $e = y - x'\beta$ in absolute value. With the error terms more likely to be close to zero, there is less need to regulate the entropy of the error term distributions.
  • 4.A non-uniform prior distribution u = [ 4 / 15 , 4 / 15 , 1 / 5 , 2 / 15 , 2 / 15 ] is used for the error support such that the prior distribution is centered at zero.

References

  1. Skilling, J. The axioms of maximum entropy. In Maximum Entropy and Bayesian Methods in Science and Engineering; Skilling, J., Ed.; Kluwer: Dordrecht, The Netherlands, 1989; pp. 173–187. [Google Scholar]
  2. Golan, A.; Judge, G.; Miller, D. Maximum Entropy Econometrics: Robust Estimation with Limited Data; Wiley: Chichester, UK, 1996. [Google Scholar]
  3. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  4. Jaynes, E.T. Information theory and statistical mechanics. Phys. Rev. 1957, 106, 620–630. [Google Scholar] [CrossRef]
  5. Kullback, S. Information Theory and Statistics; John Wiley: New York, NY, USA, 1959. [Google Scholar]
  6. Levine, R.D. An information theoretic approach to inversion problems. J. Phys. A-Math. Gen. 1980, 13, 91–108. [Google Scholar] [CrossRef]
  7. Levine, R.D.; Tribus, M. The Maximum Entropy Formalism; MIT Press: Cambridge, MA, USA, 1979. [Google Scholar]
  8. Shore, J.E.; Johnson, R.W. Axiomatic derivation of the principle of maximum entropy and the principle of minimum cross-entropy. IEEE Trans. Inform. Theory 1980, 26, 26–37. [Google Scholar] [CrossRef]
  9. Jaynes, E.T. Prior information and ambiguity in inverse problems. In Inverse Problems; McLaughlin, D.W., Ed.; SIAM Proceedings, American Mathematical Society: Providence, RI, USA, 1984; pp. 151–166. [Google Scholar]
  10. Csiszár, I. Why least squares and maximum entropy? An axiomatic approach to inference for linear inverse problems. Ann. Stat. 1991, 19, 2032–2066. [Google Scholar] [CrossRef]
  11. Hall, P. Large sample optimality of least squares cross-validation in density estimation. Ann. Stat. 1984, 11, 1156–1174. [Google Scholar]
  12. Stone, C.J. An asymptotically optimal window selection rule for kernel density estimates. Ann. Stat. 1984, 12, 1285–1297. [Google Scholar] [CrossRef]
  13. Hall, P.; Marron, J.S. Extent to which least-squares cross-validation minimises integrated square error in nonparametric density estimation. Probab. Theory Rel. 1987, 74, 567–581. [Google Scholar] [CrossRef]
