Article

A Maximum Entropy Procedure to Solve Likelihood Equations

Department of Developmental and Social Psychology, University of Padova, 35131 Padova, Italy
* Author to whom correspondence should be addressed.
Entropy 2019, 21(6), 596; https://doi.org/10.3390/e21060596
Submission received: 20 May 2019 / Revised: 13 June 2019 / Accepted: 14 June 2019 / Published: 15 June 2019
(This article belongs to the Special Issue Information-Theoretical Methods in Data Mining)

Abstract

In this article, we provide initial findings regarding the problem of solving likelihood equations by means of a maximum entropy (ME) approach. Unlike standard procedures that require setting the score function of the maximum likelihood problem equal to zero, we propose an alternative strategy where the score is instead used as an external informative constraint on the maximization of the convex Shannon's entropy function. The problem involves the reparameterization of the score parameters as expected values of discrete probability distributions whose probabilities need to be estimated. This leads to a simpler situation where the parameters are searched in a smaller (hyper) simplex space. We assessed our proposal by means of empirical case studies and a simulation study, the latter involving the most critical case of logistic regression under data separation. The results suggested that the maximum entropy reformulation of the score problem solves the likelihood equation problem. Similarly, when maximum likelihood estimation is difficult, as in the case of logistic regression under separation, the maximum entropy proposal achieved results (numerically) comparable to those obtained by Firth's bias-corrected approach. Overall, these first findings reveal that a maximum entropy solution can be considered as an alternative technique for solving the likelihood equations.

1. Introduction

Maximum likelihood is one of the most widely used tools of modern statistics. As a result of its attractive properties, it is useful and well suited for a wide class of statistical problems, including modeling, testing, and parameter estimation [1,2]. In the case of regular and correctly-specified models, maximum likelihood provides a simple and elegant means of choosing the best asymptotically normal estimators. Generally, the maximum likelihood workflow proceeds by first defining the statistical model which is thought to generate the sample data, together with the associated likelihood function. Then, the likelihood is differentiated with respect to the parameters of interest to obtain the likelihood equations (score), which are set to zero and solved to find the final estimates. In the simplest cases, the maximum likelihood solutions are expressed in closed form. However, analytic expressions are not always available for more complex problems, and researchers need to solve the likelihood equations numerically. A broad class of such procedures includes Newton-like algorithms, such as the Newton–Raphson, Fisher-scoring, and quasi Newton–Raphson algorithms [3]. However, when the sample size is small, or when the optimization is no longer convex as in the case of more sophisticated statistical models, the standard version of Newton–Raphson may not be optimal. In this case, robust versions should instead be used [4]. A typical example of such a situation is logistic regression for binary data, where maximum likelihood estimates may no longer be available, for instance, when the binary outcome variable can be perfectly or partially separated by a linear combination of the covariates [5]. As a result, the Newton–Raphson algorithm is unstable, with inconsistent or infinite estimates. Other examples include small sample sizes, large numbers of covariates, and multicollinearity among the regressor variables [6]. Different proposals have been made to overcome these drawbacks, many of which are based on iterative adjustments of the Newton–Raphson algorithm (e.g., see [7,8]), penalized maximum likelihood (e.g., see [9]), or the homotopy-based method (e.g., see [10]). Among them, bias-corrected methods guarantee the existence of finite maximum likelihood estimates by removing the first-order bias [11], whereas homotopy Newton–Raphson algorithms, which are mostly based on Adomian's decomposition, ensure more robust numerical convergence in finding roots of the score function (e.g., see [12]).
Maximum entropy (ME)-based methods have a long history in statistical modeling and inference (e.g., for a recent review see [13]). Since the seminal work by Golan et al. [14], there have been many applications of maximum entropy to the problem of parameter estimation in statistics, including autoregressive models [15], multinomial models [16], spatial autoregressive models [17], structural equation models [18], the co-clustering problem [19], and fuzzy linear regressions [20]. What all these works have in common is an elegant estimation method that avoids strong parametric assumptions on the model being used (e.g., error distribution). In a different direction, maximum entropy has also been widely adopted in many optimization problems, including queueing systems, transportation, portfolio optimization, image reconstruction, and spectral analysis (for a comprehensive review see [21,22]). In all these cases, maximum entropy is instead used as a pure mathematical solver engine for complex or ill-posed problems, such as those encountered when dealing with differential equations [23], oversampled data [24], and data decomposition [25].
The aim of this article is to introduce a maximum entropy-based technique to solve likelihood equations as they appear in many standard statistical models. The idea relies upon the use of Jaynes' classical ME principle as a mathematical optimization tool [22,23,26]. In particular, instead of maximizing the likelihood function and solving the corresponding score, we propose a solution where the score is used as the data constraint of the estimation problem. The solution involves two steps: (i) reparametrizing the parameters as discrete probability distributions and (ii) maximizing Shannon's entropy function w.r.t. the unknown probability mass points constrained by the score equation. Thus, parameter estimation is reformulated as recovering probabilities in a (hyper) simplex space, with the searching surface being always regular and convex. In this context, the score equation represents all the available information about the statistical problem and is used to identify a feasible region for estimating the model parameters. In this sense, our proposal differs from other ME-based procedures for statistical estimation (e.g., see [27]). Instead, our intent is to offer an alternative technique to solve the score functions of parametric, regular, and correctly specified statistical models, where inference is still based on maximum likelihood theory.
The remainder of this article is organized as follows. Section 2 presents our proposal and describes its main characteristics by means of simple numerical examples. Section 3 describes the results of a simulation study where the ME method is assessed in the typical case of logistic regression under separation. Finally, Section 4 provides a general discussion of findings, comments, and suggestions for further investigations. Complementary materials, like the datasets and scripts used throughout the article, are available to download at https://github.com/antcalcagni/ME-score, whereas the list of symbols and abbreviations adopted hereafter is available in Table 1.

2. A Maximum Entropy Solution to Score Equations

Let $\mathbf{y} = \{y_1, \ldots, y_n\}$ be a random sample of independent observations from the parametric model $\mathcal{M} = \{f(y;\boldsymbol{\theta}) : \boldsymbol{\theta} \in \Theta,\ y \in \mathcal{Y}\}$, with $f(y;\boldsymbol{\theta})$ being a density function parameterized over $\boldsymbol{\theta}$, $\Theta \subseteq \mathbb{R}^J$ the parameter space with $J$ being the number of parameters, and $\mathcal{Y}$ the sample space. Let
$$l(\boldsymbol{\theta}) = \sum_{i=1}^{n} \ln f(y_i; \boldsymbol{\theta})$$
be the log-likelihood of the model and
$$U(\boldsymbol{\theta}) = \nabla_{\boldsymbol{\theta}}\, l(\boldsymbol{\theta}) = \left(\partial l/\partial\theta_1, \ldots, \partial l/\partial\theta_j, \ldots, \partial l/\partial\theta_J\right)$$
the score equation. In the regular case, the maximum likelihood estimate (MLE) $\hat{\boldsymbol{\theta}}$ of the unknown vector of parameters $\boldsymbol{\theta}$ is the solution of the score equation $U(\boldsymbol{\theta}) = \mathbf{0}_J$. In simple cases, $\hat{\boldsymbol{\theta}}$ has a closed-form expression but, more often, a numerical solution is required, for instance by using iterative algorithms like Newton–Raphson and expectation-maximization.
In the maximum likelihood setting, our proposal is instead to solve $U(\boldsymbol{\theta}) = \mathbf{0}_J$ by means of a maximum entropy approach (for a brief introduction, see [28]). This involves a two-step formulation of the problem, where $\boldsymbol{\theta}$ is first reparameterized as a convex combination of a numerical support with some predefined points and probabilities. Next, a non-linear programming (NLP) problem is set up with the objective of maximizing the entropy of the unknown probabilities subject to some feasible constraints. More formally, let
$$\tilde{\boldsymbol{\theta}} = \left(\mathbf{z}_1^T\mathbf{p}_1, \ldots, \mathbf{z}_j^T\mathbf{p}_j, \ldots, \mathbf{z}_J^T\mathbf{p}_J\right)^T \qquad (1)$$
be the reparameterized $J \times 1$ vector of parameters of the model $\mathcal{M}$, where $\mathbf{z}_j$ is a user-defined $K \times 1$ vector of (finite) support points, whereas $\mathbf{p}_j$ is a $K \times 1$ vector of unknown probabilities obeying $\mathbf{p}_j^T\mathbf{1}_K = 1$. Note that the arrays $\mathbf{z}_1, \ldots, \mathbf{z}_J$ must be chosen so as to cover the natural range of the model parameters. Thus, for instance, in the case of estimating the population mean $\mu \in \mathbb{R}$ of a normal model $\mathcal{N}(\mu, \sigma^2)$ with $\sigma^2$ known, $\mathbf{z}_\mu = (-d, \ldots, 0, \ldots, d)^T$ with $d$ as large as possible. In practice, as the observations $\mathbf{y}$ are available, the support vector can be defined using sample information, i.e., $\mathbf{z}_\mu = \left(\min(\mathbf{y}), \ldots, \max(\mathbf{y})\right)^T$. Similarly, in the case of estimating the parameter $\pi \in [0,1]$ of the Binomial model $\text{Bin}(\pi, n)$, the support vector is $\mathbf{z}_\pi = (0, \ldots, 1)^T$. The choice of the number of points $K$ of $\mathbf{z}$ can be made via sensitivity analysis, although it has been shown that $K \in \{5, 7, 11\}$ is usually enough for many regular problems (e.g., see [27,29]). Readers may refer to [27,30] for further details.
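As a concrete illustration of the support construction just described, the short R sketch below builds a K-point support from the observed data range for a location parameter and over [0, 1] for a probability parameter; the helper function and its defaults are ours, not part of the original proposal.

```r
# Build a K-point support vector z for a reparameterized parameter.
# For a location parameter the support spans the observed data range;
# for a probability parameter it spans the unit interval [0, 1].
make_support <- function(K, lower, upper) {
  seq(lower, upper, length.out = K)
}

set.seed(123)
y    <- rnorm(20, mean = 3, sd = 1)            # some sample data
z_mu <- make_support(K = 7, min(y), max(y))    # support for a Normal mean
z_pi <- make_support(K = 5, 0, 1)              # support for a Binomial probability
```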
Under the reparameterization in Equation (1), $U(\boldsymbol{\theta}) = \mathbf{0}_J$ is solved via the following NLP problem:
$$\begin{aligned} \underset{(\mathbf{p}_1,\ldots,\mathbf{p}_J)}{\text{maximize}}\quad & H(\mathbf{p}_1,\ldots,\mathbf{p}_J)\\ \text{subject to:}\quad & U(\tilde{\boldsymbol{\theta}}) = \mathbf{0}_J\\ & \mathbf{p}_1^T\mathbf{1}_K = 1,\ \ldots,\ \mathbf{p}_J^T\mathbf{1}_K = 1, \end{aligned} \qquad (2)$$
where $H(\mathbf{p}) = -\sum_{j=1}^{J}\mathbf{p}_j^T\log\mathbf{p}_j$ is the Shannon's entropy function, whereas the score equation $U(\tilde{\boldsymbol{\theta}})$ has been rewritten in terms of the reparameterized parameters $\tilde{\boldsymbol{\theta}}$. The problem requires recovering $K \times J$ quantities, which are defined in a (convex) hyper-simplex region with $J$ (non-)linear equality constraints $U(\tilde{\theta}_1), \ldots, U(\tilde{\theta}_J)$ (consistency constraints) and $J$ linear equality constraints $\mathbf{p}_1^T\mathbf{1}_K = 1, \ldots, \mathbf{p}_J^T\mathbf{1}_K = 1$ (normalization constraints). The latter ensure that the recovered quantities $\hat{\mathbf{p}}_1, \ldots, \hat{\mathbf{p}}_J$ are still probabilities. Note that closed-form solutions for the ME-score problem do not exist in general and solutions need to be attained numerically.
In the following examples, we show how the ME-score problem can be formulated in the simple cases of estimating the parameters of normal, Poisson, and gamma models (Examples 1–3), as well as in the more complex case of estimating the parameters of a logistic regression (Example 4).

2.1. Example 1: The Normal Case

Consider the case of estimating the location parameter $\mu \in \mathbb{R}$ of a Normal density function with $\sigma^2$ known. In particular, let
y = ( 2.61 , 4.18 , 3.40 , 3.73 , 3.63 , 2.41 , 3.76 , 3.93 , 4.66 , 1.59 , 4.51 , 2.77 ) T
be a sample of n = 12 drawn from a population with Normal density N ( μ , σ 0 2 ) with σ 0 2 = 1 known. Our objective is to estimate μ using the information of y . Let
$$l(\mu) = -(2\sigma_0^2)^{-1}\,\|\mathbf{y} - \mu\mathbf{1}_n\|^2$$
be the log-likelihood of the model where constant terms have been dropped and
$$U(\mu) = (\sigma_0^2)^{-1}\left(\mathbf{y}^T\mathbf{1}_n - n\mu\right)$$
be the corresponding score w.r.t. $\mu$. To define the ME-score problem associated with solving $U(\mu) = 0$, first let $\mu_{\text{ME}} = \mathbf{z}^T\mathbf{p}$, with $\mathbf{z}$ and $\mathbf{p}$ being $K \times 1$ vectors of support points and unknown probabilities, respectively. In this example,
$$\mathbf{z} = \left(1.59, 2.10, 2.61, 3.13, 3.64, 4.15, 4.66\right)^T$$
with $K = 7$, $z_1 = \min(\mathbf{y})$, and $z_K = \max(\mathbf{y})$. Given the optimization problem in (2), in this case $\mathbf{p}$ can be recovered via the Lagrangean method, as follows. Let
$$\mathcal{L}(\lambda_0, \lambda_1, \mathbf{p}) = -\mathbf{p}^T\log\mathbf{p} - \lambda_0\left(1 - \mathbf{p}^T\mathbf{1}_K\right) - \lambda_1\,(\sigma_0^2)^{-1}\left(\mathbf{y}^T\mathbf{1}_n - n(\mathbf{z}^T\mathbf{p})\right)$$
be the Lagrangean function, with  λ 0 and λ 1 being the usual Lagrangean multipliers. The Lagrangean system of the problem is
$$\frac{\partial\mathcal{L}(\lambda_0,\lambda_1,\mathbf{p})}{\partial\mathbf{p}} = -\log(\mathbf{p}) - \mathbf{1}_K + \lambda_0\mathbf{1}_K + \lambda_1\, n(\sigma_0^2)^{-1}\mathbf{z} = \mathbf{0}_K \qquad (4)$$
$$\frac{\partial\mathcal{L}(\lambda_0,\lambda_1,\mathbf{p})}{\partial\lambda_0} = -\left(1 - \mathbf{p}^T\mathbf{1}_K\right) = 0 \qquad (5)$$
$$\frac{\partial\mathcal{L}(\lambda_0,\lambda_1,\mathbf{p})}{\partial\lambda_1} = -(\sigma_0^2)^{-1}\left(\mathbf{y}^T\mathbf{1}_n - n(\mathbf{z}^T\mathbf{p})\right) = 0. \qquad (6)$$
Solving for $\mathbf{p}$ in Equation (4) and using Equation (6), we get the general solution of the ME-score problem:
$$\hat{\mathbf{p}} = \frac{\exp\left(\mathbf{z}\,\hat{\lambda}_1\, n(\sigma_0^2)^{-1}\right)}{\exp\left(\mathbf{z}\,\hat{\lambda}_1\, n(\sigma_0^2)^{-1}\right)^T\mathbf{1}_K}, \qquad (7)$$
where the quantity in the denominator is the normalization constant. Note that the solution in Equation (7) depends on the Lagrangean multiplier $\hat{\lambda}_1$, which needs to be determined numerically [31]. In this particular example, we estimated the unknown Lagrangean multiplier using a grid-search approach, yielding $\hat{\lambda}_1 = 0.024$. The final solution is
$$\hat{\mathbf{p}} = \left(0.087, 0.101, 0.117, 0.136, 0.159, 0.185, 0.215\right)^T$$
with $\hat{\mu}_{\text{ME}} = \mathbf{z}^T\hat{\mathbf{p}} = 3.432$, which corresponds to the maximum likelihood estimate $\hat{\mu}_{\text{ML}} = \frac{1}{n}\mathbf{y}^T\mathbf{1}_n = 3.432$, as expected.
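The grid-search step above can be reproduced with a few lines of R; the grid bounds and step size are our own choices, while the data, the support, and Equation (7) are those of this example.

```r
y  <- c(2.61, 4.18, 3.40, 3.73, 3.63, 2.41, 3.76, 3.93, 4.66, 1.59, 4.51, 2.77)
n  <- length(y)
s2 <- 1                                      # sigma_0^2, known
z  <- seq(min(y), max(y), length.out = 7)    # support vector z

# Probabilities implied by a candidate multiplier lambda_1 (Equation (7))
p_of <- function(l1) { w <- exp(z * l1 * n / s2); w / sum(w) }

# Score constraint evaluated at mu = z'p
score <- function(l1) (sum(y) - n * sum(z * p_of(l1))) / s2

# Grid search for the multiplier that zeroes the score constraint
grid    <- seq(-1, 1, by = 1e-4)
lambda1 <- grid[which.min(abs(sapply(grid, score)))]
p_hat   <- p_of(lambda1)
c(lambda1 = lambda1, mu_ME = sum(z * p_hat), mu_ML = mean(y))  # both means ~ 3.432
```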

2.2. Example 2: The Poisson Case

Consider the simple case of estimating the parameter $\lambda \in \mathbb{R}^+$ of a Poisson density function. Let
y = ( 5 , 7 , 7 , 4 , 4 , 8 , 15 , 7 , 7 , 4 , 7 , 3 , 8 , 5 , 4 , 7 ) T
be a sample of $n = 16$ drawn from a Poisson density $\text{Pois}(\lambda)$ and $U(\lambda) = -n + (\mathbf{y}^T\mathbf{1}_n)/\lambda$ be the score of the model. The reparameterized Poisson parameter is $\lambda_{\text{ME}} = \mathbf{z}^T\mathbf{p}$, with the support defined as follows:
$$\mathbf{z} = \left(0.00, 3.75, 7.50, 11.25, 15.00\right)^T,$$
where $K = 5$ and $z_K = \max(\mathbf{y})$. Note that, since the Poisson parameter $\lambda$ is bounded below by zero, we can set $z_1 = 0$. Unlike in the previous case, we cannot determine $\hat{\mathbf{p}}$ analytically. For this reason, we need to solve the ME-score problem:
$$\begin{aligned} \underset{\mathbf{p}}{\text{maximize}}\quad & -\mathbf{p}^T\log(\mathbf{p})\\ \text{subject to:}\quad & \mathbf{p}^T\mathbf{1}_K = 1\\ & -n + (\mathbf{y}^T\mathbf{1}_n)/(\mathbf{z}^T\mathbf{p}) = 0, \end{aligned}$$
via the augmented Lagrangean adaptive barrier algorithm as implemented in the function constrOptim.nl of the R package alabama [32]. The algorithm converged successfully in a few iterations. The recovered probabilities are as follows:
$$\hat{\mathbf{p}} = \left(0.184, 0.256, 0.283, 0.247, 0.034\right)^T$$
with $\hat{\lambda}_{\text{ME}} = \mathbf{z}^T\hat{\mathbf{p}} = 6.375$, which is equal to the maximum likelihood solution $\hat{\lambda}_{\text{ML}} = \frac{1}{n}\mathbf{y}^T\mathbf{1}_n = 6.375$, as expected.
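The sketch below shows one way to pass this NLP to alabama::constrOptim.nl; the uniform starting values and the small positivity bound on the probabilities are our own choices and are not prescribed by the method.

```r
library(alabama)

y <- c(5, 7, 7, 4, 4, 8, 15, 7, 7, 4, 7, 3, 8, 5, 4, 7)
n <- length(y)
z <- seq(0, max(y), length.out = 5)          # support for lambda

neg_entropy <- function(p) sum(p * log(p))   # minimize -H(p), i.e., maximize H(p)

heq <- function(p) {                         # equality constraints (zero at the solution)
  c(sum(p) - 1,                              # normalization constraint
    -n + sum(y) / sum(z * p))                # Poisson score at lambda = z'p
}

hin <- function(p) p - 1e-8                  # keep the probabilities strictly positive

fit   <- constrOptim.nl(par = rep(1/5, 5), fn = neg_entropy, heq = heq, hin = hin)
p_hat <- fit$par
c(lambda_ME = sum(z * p_hat), lambda_ML = mean(y))   # both ~ 6.375
```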

2.3. Example 3: The Gamma Case

Consider the following random sample
y = ( 0.09 , 0.35 , 0.98 , 0.20 , 0.44 , 0.13 , 0.25 , 0.48 , 0.09 , 0.45 , 0.03 , 0.06 , 0.18 , 0.26 , 0.79 , 0.36 , 0.26 ) T
drawn from a Gamma density $\text{Ga}(\alpha, \rho)$, with $\alpha \in \mathbb{R}^+$ being the shape parameter and $\rho \in \mathbb{R}^+$ the rate parameter. The log-likelihood of the model is as follows:
$$l(\alpha, \rho) = (\alpha - 1)\log(\mathbf{y})^T\mathbf{1}_n - \left(\mathbf{y}^T\mathbf{1}_n\right)\rho + n\alpha\log(\rho) - n\log\Gamma(\alpha),$$
where $\Gamma(\cdot)$ is the well-known gamma function. The corresponding score functions are
$$U(\alpha) = \log(\mathbf{y})^T\mathbf{1}_n + n\log(\rho) - n\psi(\alpha), \qquad U(\rho) = -\mathbf{y}^T\mathbf{1}_n + n\alpha\rho^{-1},$$
with $\psi(\alpha) = \frac{\partial}{\partial\alpha}\log\Gamma(\alpha)$ being the digamma function, i.e., the derivative of the logarithm of the gamma function evaluated at $\alpha$. The reparameterized gamma parameters are defined as usual, $\tilde{\alpha}_{\text{ME}} = \mathbf{z}_\alpha^T\mathbf{p}_\alpha$ and $\tilde{\rho}_{\text{ME}} = \mathbf{z}_\rho^T\mathbf{p}_\rho$, whereas the supports can be determined as $\mathbf{z}_\alpha = (0, \ldots, \bar{\alpha} + \delta)^T$ and $\mathbf{z}_\rho = (0, \ldots, \bar{\rho} + \delta)^T$, with $\delta$ being a positive constant. Note that the upper limits of the supports can be chosen according to the following approximations: $\bar{\alpha} = 1/(2M)$ and $\bar{\rho} = \bar{\alpha}/\bar{y}$, with $M = \log(\bar{y}) - \sum_i\log(y_i)/n$ and $\bar{y} = \sum_i y_i/n$ [33]. In the current example, the supports for the parameters are:
$$\mathbf{z}_\alpha = \left(0.00, 1.12, 2.24, 3.35, 4.47\right)^T \quad\text{and}\quad \mathbf{z}_\rho = \left(0.00, 1.91, 3.82, 5.73, 7.64\right)^T,$$
where $K = 5$, $\bar{\alpha} = 1.47$, $\bar{\rho} = 4.64$, and $\delta = 3$. The ME-score problem for the gamma case is
$$\begin{aligned} \underset{(\mathbf{p}_\alpha,\,\mathbf{p}_\rho)}{\text{maximize}}\quad & -\mathbf{p}_\alpha^T\log(\mathbf{p}_\alpha) - \mathbf{p}_\rho^T\log(\mathbf{p}_\rho)\\ \text{subject to:}\quad & \mathbf{p}_\alpha^T\mathbf{1}_K = 1\\ & \mathbf{p}_\rho^T\mathbf{1}_K = 1\\ & -\mathbf{y}^T\mathbf{1}_n + n\left(\mathbf{z}_\alpha^T\mathbf{p}_\alpha\right)\left(\mathbf{z}_\rho^T\mathbf{p}_\rho\right)^{-1} = 0\\ & \log(\mathbf{y})^T\mathbf{1}_n + n\log\left(\mathbf{z}_\rho^T\mathbf{p}_\rho\right) - n\psi\left(\mathbf{z}_\alpha^T\mathbf{p}_\alpha\right) = 0, \end{aligned}$$
which is solved via the augmented Lagrangean adaptive barrier algorithm. The algorithm required a few iterations to converge, and the recovered probabilities are as follows:
p ^ α = ( 0.290 , 0.261 , 0.222 , 0.164 , 0.063 ) T and p ^ ρ = ( 0.058 , 0.138 , 0.208 , 0.270 , 0.327 ) T .
The estimated parameters under the ME-score formulation are $\hat{\alpha}_{\text{ME}} = 1.621$ and $\hat{\rho}_{\text{ME}} = 5.103$, which are equal to the maximum likelihood solutions $\hat{\alpha}_{\text{ML}} = 1.621$ and $\hat{\rho}_{\text{ML}} = 5.103$.
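The rough support bounds used above can be reproduced in a few lines of R; the numbers match those reported in the text ($\bar{\alpha} \approx 1.47$, $\bar{\rho} \approx 4.64$).

```r
y <- c(0.09, 0.35, 0.98, 0.20, 0.44, 0.13, 0.25, 0.48, 0.09,
       0.45, 0.03, 0.06, 0.18, 0.26, 0.79, 0.36, 0.26)

M         <- log(mean(y)) - mean(log(y))   # M = log(y_bar) - sum_i log(y_i)/n
alpha_bar <- 1 / (2 * M)                   # approximate shape, about 1.47
rho_bar   <- alpha_bar / mean(y)           # approximate rate, about 4.64

delta   <- 3
z_alpha <- seq(0, alpha_bar + delta, length.out = 5)   # (0.00, ..., 4.47)
z_rho   <- seq(0, rho_bar + delta,   length.out = 5)   # (0.00, ..., 7.64)
```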

2.4. Example 4: Logistic Regression

In what follows, we show the ME-score formulation for logistic regression. We will consider both the simple situations involving no separation, where maximum likelihood estimates can be easily computed, and those unfortunate situations in which separation occurs. Note that in the latter case, maximum likelihood estimates are no longer available without resorting to the use of a bias-reduction iterative procedure [7]. Formally, the logistic regression model with $p$ continuous predictors is as follows:
$$\pi_i = \left(1 + \exp(-\mathbf{X}_i\boldsymbol{\beta})\right)^{-1}, \qquad y_i \sim \text{Bin}(\pi_i, 1), \qquad (10)$$
where X is an n × p matrix containing predictors, β is a p × 1 vector of model parameters, and  y is an n × 1 vector of observed responses. Here, the standard maximum likelihood solutions β ^ are usually attained numerically, e.g., using Newton–Raphson like algorithms [5].
No separation case. As an illustration of the ME-score problem in the optimal situation where no separation occurs, we consider Finney's classical data on vasoconstriction in the skin of the digits (see Table 2) [34].
In Finney's case, the goal is to predict the vasoconstriction response as a function of volume and rate, according to the following linear term [34]:
$$\text{logit}(\pi_i) = \beta_0 + \beta_1\log\text{Volume}_i + \beta_2\log\text{Rate}_i,$$
with $\text{logit} : [0,1] \to \mathbb{R}$ being the inverse of the logistic function. In the maximum entropy framework, the model parameters can be reformulated as follows:
$$\boldsymbol{\beta}_{\text{ME}} = \left(\mathbf{z}^T \otimes \mathbf{I}_{p+1}\right)\text{vec}(\mathbf{P}^T), \qquad (12)$$
where $\mathbf{z}$ is a $K \times 1$ vector of support points, $\mathbf{I}_{p+1}$ is the identity matrix of order $p+1$ (including the intercept term), $\mathbf{P}$ is a $(p+1) \times K$ matrix of probabilities associated with the $p$ parameters plus the intercept, $\otimes$ is the Kronecker product, whereas $\text{vec}(\cdot)$ is the linear operator that transforms a matrix into a column vector. Note that in this example $p = 2$ and $K = 7$, whereas the support $\mathbf{z} = (-10, \ldots, 0, \ldots, 10)^T$ is defined to be the same for both predictors and the intercept (the bounds of the support have been chosen to reflect the maximal variation allowed by the logistic function). Finally, the ME-score problem for Finney's logistic regression is:
$$\begin{aligned} \underset{\text{vec}(\mathbf{P})}{\text{maximize}}\quad & -\text{vec}(\mathbf{P})^T\log\left(\text{vec}(\mathbf{P})\right)\\ \text{subject to:}\quad & \mathbf{P}\,\mathbf{1}_K = \mathbf{1}_{p+1}\\ & \mathbf{X}^T(\mathbf{y} - \boldsymbol{\pi}) = \mathbf{0}_{p+1}, \end{aligned} \qquad (13)$$
where $\mathbf{X}$ is the $n \times (p+1)$ matrix containing the variables rate and volume together with a column of ones for the intercept term, and $\boldsymbol{\pi} = \left(1 + \exp(-\mathbf{X}\boldsymbol{\beta}_{\text{ME}})\right)^{-1}$, with $\boldsymbol{\beta}_{\text{ME}}$ being defined as in Equation (12). Solutions for $\hat{\mathbf{P}}$ were obtained via the augmented Lagrangean adaptive barrier algorithm, which yielded the following estimates:
$$\hat{\mathbf{P}} = \begin{pmatrix} 0.000 & 0.004 & 0.062 & 0.159 & 0.220 & 0.263 & 0.293 \\ 0.000 & 0.001 & 0.099 & 0.178 & 0.224 & 0.247 & 0.251 \\ 0.205 & 0.201 & 0.190 & 0.170 & 0.137 & 0.085 & 0.013 \end{pmatrix},$$
where the third row of $\hat{\mathbf{P}}$ refers to the intercept term. The final estimated coefficients are
$$\hat{\beta}_0^{\text{ME}} = -2.875, \qquad \hat{\beta}_1^{\text{ME}} = 5.179, \qquad \hat{\beta}_2^{\text{ME}} = 4.562,$$
which are the same as those obtained in the original analysis by Pregibon [34].
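A hedged R sketch of the ME-score problem in (13) is given below. It assumes the Finney data of Table 2 are stored in a data frame named finney with columns volume, rate, and y (this name and layout are ours); the starting values and positivity bound are, again, our own choices.

```r
library(alabama)

# finney: data frame with columns volume, rate, y (Table 2); the name is an assumption
X <- cbind(log(finney$volume), log(finney$rate), 1)   # log predictors plus intercept column
y <- finney$y
K <- 7
J <- ncol(X)
z <- seq(-10, 10, length.out = K)                     # common support for all coefficients

beta_of <- function(pvec) {                           # beta_ME = (z^T Kronecker I_J) vec(P^T)
  P <- matrix(pvec, nrow = J, byrow = TRUE)           # J x K matrix of probabilities
  as.vector(P %*% z)                                  # equivalently P z
}

neg_entropy <- function(pvec) sum(pvec * log(pvec))

heq <- function(pvec) {
  P  <- matrix(pvec, nrow = J, byrow = TRUE)
  pr <- 1 / (1 + exp(-X %*% beta_of(pvec)))           # fitted probabilities
  c(rowSums(P) - 1,                                   # normalization constraints
    as.vector(t(X) %*% (y - pr)))                     # score constraints X'(y - pi) = 0
}

hin <- function(pvec) pvec - 1e-8                     # probabilities strictly positive

fit <- constrOptim.nl(par = rep(1/K, J * K), fn = neg_entropy, heq = heq, hin = hin)
beta_of(fit$par)   # ME estimates, ordered as (beta_1, beta_2, beta_0) given the columns of X
```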
Separation case. As a typical example of data under separation, we consider the classical Fisher iris dataset [35]. As generally known, the dataset contains fifty measurements of length and width (in centimeters) of sepal and petal variables for three species of iris, namely setosa, versicolor, and virginica [36]. For the sake of simplicity, we keep a subset of the whole dataset containing two species of iris (i.e., setosa and virginica) with sepal length and width variables only. Inspired by the work of Lesaffre and Albert [35], we study a model where the response variable is a binary classification of iris, with  Y = 0 indicating the class virginica and Y = 1 the class setosa, whereas petal length and width are predictors of Y. The logistic regression for the iris data assumes the following linear term:
$$\text{logit}(\pi_i) = \beta_0 + \beta_1\,\text{length}_i + \beta_2\,\text{width}_i,$$
where the model parameters can be reformulated as in Equation (12), with $K = 7$, $p = 2$, and $\mathbf{z}$ centered around zero with bounds $z_1 = -25$ and $z_K = 25$. The ME-score problem for the iris dataset is the same as in (13), and it is solved using the augmented Lagrangean adaptive barrier algorithm. The recovered $\hat{\mathbf{P}}$ is
$$\hat{\mathbf{P}} = \begin{pmatrix} 0.228 & 0.226 & 0.215 & 0.190 & 0.137 & 0.001 & 0.001 \\ 0.000 & 0.039 & 0.040 & 0.158 & 0.218 & 0.257 & 0.285 \\ 0.000 & 0.000 & 0.000 & 0.037 & 0.210 & 0.329 & 0.426 \end{pmatrix},$$
where the intercept term is reported in the third row of the matrix. The estimates of the model coefficients are reported in Table 3 (ME, first column). For the sake of comparison, Table 3 also reports the estimates obtained by solving the score of the model via the bias-corrected Newton–Raphson (NRF, second column) and Newton–Raphson (NR, third column) algorithms. The NRF algorithm uses Firth's correction of the score function [7] as implemented in the R package logistf [37]. As expected, the NR algorithm fails to converge and reports divergent estimates. By contrast, the NRF procedure converges to non-divergent solutions. Interestingly, the maximum entropy solutions are closer to the NRF estimates, although they differ in magnitude.

3. Simulation Study

Having examined the ME-score problem with numerical examples for both simple and more complex cases, in this section, we will numerically investigate the behavior of the maximum entropy solutions for the most critical case of logistic regression under separation.
Design. Two factors were systematically varied in a complete two-factorial design:
(i)
the sample size n at three levels: 15, 50, 200;
(ii)
the number of predictors p (excluding the intercept) at three levels: 1, 5, 10.
The levels of n and p were chosen to represent the most common cases of simple, medium, and complex models, as those usually encountered in many social research studies.
Procedure. Consider the logistic regression model in Equation (10), and let $n_k$ and $p_k$ denote generic levels of the factors $n$ and $p$. The following procedure was repeated Q = 10,000 times for each of the $3 \times 3 = 9$ combinations of the simulation design:
  • Generate the matrix of predictors $\mathbf{X}_{n_k \times (1+p_k)} = [\mathbf{1}_{n_k} \,|\, \tilde{\mathbf{X}}_{n_k \times p_k}]$, where $\tilde{\mathbf{X}}_{n_k \times p_k}$ is drawn from the multivariate standard normal distribution $\mathcal{N}(\mathbf{0}_{p_k}, \mathbf{I}_{p_k})$, whereas the column vector of ones $\mathbf{1}_{n_k}$ stands for the intercept term;
  • Generate the vector of coefficients $\boldsymbol{\beta}_{1+p_k}$ from the multivariate centered normal distribution $\mathcal{N}(\mathbf{0}_{1+p_k}, \sigma\mathbf{I}_{1+p_k})$, where $\sigma = 2.5$ was chosen to cover the natural range of variability allowed by the logistic equation;
  • Compute the vector $\boldsymbol{\pi}_{n_k}$ via Equation (10) using $\mathbf{X}_{n_k \times (1+p_k)}$ and $\boldsymbol{\beta}_{1+p_k}$;
  • For $q = 1, \ldots, Q$, generate the vector of response variables $\mathbf{y}^{(q)}_{n_k}$ from the binomial distribution $\text{Bin}(\boldsymbol{\pi}_{n_k})$, with $\boldsymbol{\pi}_{n_k}$ being fixed;
  • For $q = 1, \ldots, Q$, estimate the vector of parameters $\hat{\boldsymbol{\beta}}^{(q)}_{1+p_k}$ by means of the Newton–Raphson (NR), bias-corrected Newton–Raphson (NRF), and ME-score (ME) algorithms.
The entire procedure involves a total of 10,000 × 3 × 3 = 90,000 new datasets, as well as an equivalent number of model parameters. For the NR and NRF algorithms, we used the glm and logistf routines of the R packages stats [38] and logistf [37], respectively. By contrast, the ME-score problem was solved via the augmented Lagrangean adaptive barrier algorithm implemented in the constrOptim.nl routine of the R package alabama [32]. Convergence of the algorithms was checked using the built-in criteria of glm, logistf, and constrOptim.nl. For each of the generated datasets $\{\mathbf{y}, \mathbf{X}\}_{q=1,\ldots,Q}$, the occurrence of separation was checked using a linear programming-based routine that detects infinite estimates in the maximum likelihood solution [39,40]. The whole simulation procedure was performed on a (remote) HPC machine with 16 Intel Xeon E5-2630L v3 1.80 GHz CPUs and 16 × 4 GB of RAM.
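A single replicate of this procedure can be sketched in R roughly as follows; the seed, the chosen n and p, and the use of sd = 2.5 in rnorm are our own simplifications of the actual simulation code, and the ME fit (not shown) proceeds as in the sketch of Section 2.4.

```r
library(logistf)

set.seed(1)
n <- 50
p <- 5

# Design matrix: standard normal predictors plus an intercept column
Xtilde <- matrix(rnorm(n * p), n, p)
colnames(Xtilde) <- paste0("x", 1:p)
X <- cbind(1, Xtilde)

beta <- rnorm(p + 1, mean = 0, sd = 2.5)      # population coefficients (sigma = 2.5)
prob <- 1 / (1 + exp(-drop(X %*% beta)))      # Equation (10)
y    <- rbinom(n, size = 1, prob = prob)

dat <- data.frame(y = y, Xtilde)

fit_nr  <- glm(y ~ ., data = dat, family = binomial)   # Newton-Raphson (NR)
fit_nrf <- logistf(y ~ ., data = dat)                   # Firth's bias-corrected fit (NRF)

rbind(true = beta, NR = coef(fit_nr), NRF = fit_nrf$coefficients)
```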
Measures. The simulation results were evaluated by considering the averaged bias of the parameters, $\hat{B} = \frac{1}{Q}\sum_{k=1}^{Q}\left(\boldsymbol{\beta}^{(k)} - \hat{\boldsymbol{\beta}}^{(k)}\right)^T\mathbf{1}$, its squared version $\hat{B}^2$ (the square here is element-wise), and the averaged variance of the estimates, $\hat{V} = \frac{1}{Q}\sum_{k=1}^{Q}\text{Var}\left(\hat{\boldsymbol{\beta}}^{(k)}\right)$. These were then combined to form the mean square error (MSE) of the estimates, $\text{MSE} = \hat{V} + \hat{B}^2$. The relative bias $RB = (\hat{\beta}_j^{(k)} - \beta_{j0})/|\beta_{j0}|$ was also computed for each predictor $j = 1, \ldots, J$ ($\beta_{j0}$ indicates the population parameter). The measures were computed for each of the three algorithms and for all the combinations of the simulation design.
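Under the assumption that the Q estimated coefficient vectors for one design cell are stored row-wise in a matrix est, with beta0 the corresponding population vector, these measures reduce to a few lines of R (a sketch with per-coefficient summaries; the sign convention is estimate minus truth):

```r
# est:   Q x (1+p) matrix of estimates, one row per replication
# beta0: vector of length 1+p with the population coefficients
me_measures <- function(est, beta0) {
  dev  <- sweep(est, 2, beta0)            # estimate minus truth, per replication
  bias <- colMeans(dev)                   # averaged bias per coefficient
  vars <- apply(est, 2, var)              # variance of the estimates
  mse  <- vars + bias^2                   # MSE = variance + squared bias
  rb   <- sweep(dev, 2, abs(beta0), "/")  # relative bias per replication
  list(bias = bias, variance = vars, MSE = mse, RB = rb)
}
```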
Results. Table 4 reports the proportions of separation present in the data for each level of the simulation design, along with the proportions of non-convergence for the three algorithms. As expected, NR failed to converge when severe separation occurred, for instance, in the case of small samples and a large number of predictors. By contrast, for the NRF and ME algorithms, the convergence criteria were always met. The results of the simulation study with regard to bias, variance, and mean square error (MSE) are reported in Table 5 and Figure 1. In general, the MSE of the three algorithms decreased almost linearly with increasing sample sizes and number of predictors. As expected, the NR algorithm showed higher MSE than NRF and ME, except in the simplest case of n = 200 and p = 1. Unlike the NR algorithm, with increasing model complexity (p > 1), ME showed performance similar to NRF for both medium (n = 50) and large (n = 200) sample sizes. Interestingly, in the most complex scenario, involving a large sample (n = 200) and higher model complexity (p = 10), the ME algorithm outperformed NRF in terms of MSE. To further investigate the relationship between NRF and ME, we focused on the latter conditions and analyzed the behavior of ME and NRF in terms of relative bias (RB, see Figure 2). Both the ME and NRF algorithms showed RB distributions centered about zero. Except for the condition n = 200, p = 10, where ME showed smaller variance than NRF, both algorithms showed similar variance in the estimates of the parameters. Finally, we also computed the ratio of over- to under-estimation r as the ratio between the number of positive and negative RB values, obtaining the following results: r_ME = 1.18 (over-estimation: 54%) and r_NRF = 0.96 (over-estimation: 49%) for the case n = 200, p = 5, and r_ME = 1.12 (over-estimation: 53%) and r_NRF = 0.91 (over-estimation: 47%) for the case n = 200, p = 10.
Overall, the results suggest the following points:
  • In the simplest cases with no separation (i.e., n = 50, p = 1; n = 200, p = 1; n = 200, p = 5), the ME solutions to the maximum likelihood equations were the same as those provided by the standard Newton–Raphson (NR) algorithm and its bias-corrected version (NRF). In all these cases, the bias of the estimates approximated zero (see Table 5);
  • In the cases of separation, ME showed performance comparable to NRF, which is known to provide the most efficient estimates for the logistic model under separation: bias and MSE decreased as functions of sample size and number of predictors, with MSE being lower for ME than for NRF in the cases n = 200, p = 5 and n = 200, p = 10;
  • In the most complex scenarios, with a large sample and higher model complexity (n = 200, p = 5; n = 200, p = 10), the ME and NRF algorithms showed similar relative bias, with the ME estimates being less variable than the NRF estimates in the n = 200, p = 10 condition. The ME algorithm tended to over-estimate the population parameters, whereas NRF tended to under-estimate them.

4. Discussion and Conclusions

We have described a new approach to solving the problem $U(\boldsymbol{\theta}) = \mathbf{0}_J$ in order to get $\hat{\boldsymbol{\theta}}$ in the context of maximum likelihood theory. Our proposal took advantage of the maximum entropy principle to set up a non-linear programming problem where $U(\boldsymbol{\theta})$ was not solved directly, but was instead used as an informative constraint on the maximization of Shannon's entropy. Thus, the parameter $\boldsymbol{\theta}$ was not searched over the parameter space $\Theta \subseteq \mathbb{R}^J$; rather, it was reparameterized as a convex combination of a known vector $\mathbf{z}$, which indicates the finite set of possible values for $\boldsymbol{\theta}$, and a vector of unknown probabilities $\mathbf{p}$, which instead needs to be estimated. In so doing, we converted the problem $U(\boldsymbol{\theta}) = \mathbf{0}_J$ from one of numerical mathematics into one of inference, where $U(\boldsymbol{\theta})$ was treated as one of the many pieces of (external) information we may have. As a result, the maximum entropy solution did not require either the computation of the Hessian of second-order derivatives of $l(\boldsymbol{\theta})$ (or the expectation of the Fisher information matrix) or the definition of initial values $\boldsymbol{\theta}_0$, as is required by Newton-like algorithms. In contrast, the maximum entropy solution revolved around the reduction of the initial uncertainty: as one adds pieces of external information (constraints), a departure from the initial uniform distribution $\mathbf{p}$ results, implying a reduction of the uncertainty about $\boldsymbol{\theta}$; a solution is found when no further reduction can be enforced given the set of constraints. We used a set of empirical cases and a simulation study to assess the maximum entropy solution to the score problem. In cases where Newton–Raphson is no longer correct for $\boldsymbol{\theta}$ (e.g., logistic regression under separation), the ME-score formulation showed results (numerically) comparable with those obtained using the bias-corrected Newton–Raphson, in the sense of having the same or even smaller mean square errors (MSE). Broadly speaking, these first findings suggest that the ME-score formulation can be considered as a valid alternative for solving $U(\boldsymbol{\theta}) = \mathbf{0}_J$, although further in-depth investigations need to be conducted to formally evaluate the statistical properties of the ME-score solution.
Nevertheless, we would like to stress that the maximum entropy approach has here been used to build a solver for maximum likelihood equations [22,23,26]. In this sense, standard errors, confidence levels, and other likelihood-based quantities can be computed using the usual asymptotic properties of maximum likelihood theory. However, attention should be paid to the definition of the support points $\mathbf{z}$, since they need to be sufficiently large to include the true (hypothesized) parameters we are looking for. Relatedly, our proposal differs from other methods, such as generalized maximum entropy (GME) or generalized cross entropy (GCE) [20,27], in two important respects. First, the ME-score formulation does not provide a class of estimators for the parameters of statistical models. By contrast, GME and GCE are estimators belonging to the exponential family, which can be used in many cases as alternatives to maximum likelihood estimators [28]. Secondly, the ME-score formulation does not provide an inferential framework for $\boldsymbol{\theta}$. While GME and GCE use information theory to provide the basis for inference and model evaluation (e.g., using Lagrangean multipliers and normalized entropy indices), the ME-score formulation focuses on the problem of finding roots of $U(\boldsymbol{\theta}) = \mathbf{0}_J$. Finally, an open issue which deserves greater consideration in future investigations is the examination of how the ME-score solution can be considered in light of the well-known maximum entropy–likelihood duality [41].
Some advantages of the ME-score solution over Newton-like algorithms may include the following: (i) model parameters are searched in a smaller and simpler space because of the convex reparameterization required for $\boldsymbol{\theta}$; (ii) the function to be maximized requires neither the computation of second-order derivatives of $l(\boldsymbol{\theta})$ nor the search for good initial values $\boldsymbol{\theta}_0$; (iii) additional information on the parameters, such as dominance relations among them, can be added to the ME-score formulation in terms of inequality constraints (e.g., $\theta_j < \theta_t$, $j \neq t$). Furthermore, the ME-score formulation may be extended to include a priori probability distributions on $\boldsymbol{\theta}$. While in the current proposal the elements of $\mathbf{z}_j$ have the same probability of occurring, the Kullback–Leibler entropy might be used to form a Kullback–Leibler-score problem, where $\mathbf{z} = (\mathbf{z}_1, \ldots, \mathbf{z}_J)^T$ are adequately weighted by known vectors of probabilities $\mathbf{w} = (\mathbf{w}_1, \ldots, \mathbf{w}_J)^T$. This would offer, for instance, another opportunity to deal with cases involving penalized likelihood estimation.
In conclusion, we think that this work yielded initial findings on the solution of likelihood equations from a maximum entropy perspective. To our knowledge, this is the first time that maximum entropy has been used to define a solver for the score function. We believe this contribution will be of interest to all researchers working at the intersection of information theory, data mining, and applied statistics.

Author Contributions

Conceptualization, A.C. and L.F.; methodology, A.C.; software, A.C., M.P.; formal analysis, A.C. and M.P.; visualization, M.P.; writing—original draft preparation, A.C.; writing—review and editing, A.C., L.F., G.A., M.P.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cox, D.R. Principles of Statistical Inference; Cambridge University Press: Cambridge, UK, 2006.
  2. Stigler, S.M. The epic story of maximum likelihood. Stat. Sci. 2007, 22, 598–620.
  3. Tanner, M.A. Tools for Statistical Inference; Springer: Berlin, Germany, 2012.
  4. Commenges, D.; Jacqmin-Gadda, H.; Proust, C.; Guedj, J. A Newton-like algorithm for likelihood maximization: The robust-variance scoring algorithm. arXiv 2006, arXiv:math/0610402.
  5. Albert, A.; Anderson, J.A. On the existence of maximum likelihood estimates in logistic regression models. Biometrika 1984, 71, 1–10.
  6. Shen, J.; Gao, S. A solution to separation and multicollinearity in multiple logistic regression. J. Data Sci. 2008, 6, 515.
  7. Firth, D. Bias reduction of maximum likelihood estimates. Biometrika 1993, 80, 27–38.
  8. Kenne Pagui, E.; Salvan, A.; Sartori, N. Median bias reduction of maximum likelihood estimates. Biometrika 2017, 104, 923–938.
  9. Gao, S.; Shen, J. Asymptotic properties of a double penalized maximum likelihood estimator in logistic regression. Stat. Probabil. Lett. 2007, 77, 925–930.
  10. Abbasbandy, S.; Tan, Y.; Liao, S. Newton-homotopy analysis method for nonlinear equations. Appl. Math. Comput. 2007, 188, 1794–1800.
  11. Cordeiro, G.M.; McCullagh, P. Bias correction in generalized linear models. J. R. Stat. Soc. Ser. B Methodol. 1991, 53, 629–643.
  12. Wu, T.M. A study of convergence on the Newton-homotopy continuation method. Appl. Math. Comput. 2005, 168, 1169–1174.
  13. Golan, A. Foundations of Info-Metrics: Modeling and Inference with Imperfect Information; Oxford University Press: Oxford, UK, 2017.
  14. Golan, A.; Judge, G.; Robinson, S. Recovering information from incomplete or partial multisectoral economic data. Rev. Econ. Stat. 1994, 76, 541–549.
  15. Golan, A.; Judge, G.; Karp, L. A maximum entropy approach to estimation and inference in dynamic models or counting fish in the sea using maximum entropy. J. Econ. Dyn. Control 1996, 20, 559–582.
  16. Golan, A.; Judge, G.; Perloff, J.M. A maximum entropy approach to recovering information from multinomial response data. J. Am. Stat. Assoc. 1996, 91, 841–853.
  17. Marsh, T.L.; Mittelhammer, R.C. Generalized maximum entropy estimation of a first order spatial autoregressive model. In Spatial and Spatiotemporal Econometrics; Emerald Group Publishing Limited: Bingley, UK, 2004; pp. 199–234.
  18. Ciavolino, E.; Al-Nasser, A.D. Comparing generalised maximum entropy and partial least squares methods for structural equation models. J. Nonparametric Stat. 2009, 21, 1017–1036.
  19. Banerjee, A.; Dhillon, I.; Ghosh, J.; Merugu, S.; Modha, D.S. A generalized maximum entropy approach to Bregman co-clustering and matrix approximation. J. Mach. Learn. Res. 2007, 8, 1919–1986.
  20. Ciavolino, E.; Calcagnì, A. A Generalized Maximum Entropy (GME) estimation approach to fuzzy regression model. Appl. Soft Comput. 2016, 38, 51–63.
  21. Kapur, J.N. Maximum-Entropy Models in Science and Engineering; John Wiley & Sons: Hoboken, NJ, USA, 1989.
  22. Fang, S.C.; Rajasekera, J.R.; Tsao, H.S.J. Entropy Optimization and Mathematical Programming; Springer Science & Business Media: Berlin, Germany, 2012; Volume 8.
  23. El-Wakil, S.; Elhanbaly, A.; Abdou, M. Maximum entropy method for solving the collisional Vlasov equation. Phys. A Stat. Mech. Appl. 2003, 323, 213–228.
  24. Bryan, R. Maximum entropy analysis of oversampled data problems. Eur. Biophys. J. 1990, 18, 165–174.
  25. Calcagnì, A.; Lombardi, L.; Sulpizio, S. Analyzing spatial data from mouse tracker methodology: An entropic approach. Behav. Res. Methods 2017, 49, 2012–2030.
  26. Sukumar, N. Construction of polygonal interpolants: A maximum entropy approach. Int. J. Numer. Methods Eng. 2004, 61, 2159–2181.
  27. Golan, A.; Judge, G.G.; Miller, D. Maximum Entropy Econometrics: Robust Estimation with Limited Data; Wiley: New York, NY, USA, 1996.
  28. Golan, A.; Judge, G.; Miller, D. The maximum entropy approach to estimation and inference. In Applying Maximum Entropy to Econometric Problems; Emerald Group Publishing Limited: Bingley, UK, 1997; pp. 3–24.
  29. Papalia, R.B. A composite generalized cross-entropy formulation in small samples estimation. Econom. Rev. 2008, 27, 596–609.
  30. Ciavolino, E.; Calcagnì, A. A generalized maximum entropy (GME) approach for crisp-input/fuzzy-output regression model. Qual. Quant. 2014, 48, 3401–3414.
  31. Golan, A. Maximum entropy, likelihood and uncertainty: A comparison. In Maximum Entropy and Bayesian Methods; Springer: Berlin, Germany, 1998; pp. 35–56.
  32. Varadhan, R. alabama: Constrained Nonlinear Optimization; R package version 2015.3-1; 2015.
  33. Choi, S.C.; Wette, R. Maximum likelihood estimation of the parameters of the gamma distribution and their bias. Technometrics 1969, 11, 683–690.
  34. Pregibon, D. Logistic regression diagnostics. Ann. Stat. 1981, 9, 705–724.
  35. Lesaffre, E.; Albert, A. Partial separation in logistic discrimination. J. R. Stat. Soc. Ser. B 1989, 51, 109–116.
  36. Fisher, R.A. The use of multiple measurements in taxonomic problems. Ann. Eugen. 1936, 7, 179–188.
  37. Heinze, G.; Ploner, M. logistf: Firth's Bias-Reduced Logistic Regression; R package version 1.23; 2018.
  38. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2019.
  39. Konis, K. Linear Programming Algorithms for Detecting Separated Data in Binary Logistic Regression Models. Ph.D. Thesis, Department of Statistics, University of Oxford, Oxford, UK, 2007. Available online: https://ora.ox.ac.uk/objects/uuid:8f9ee0d0-d78e-4101-9ab4-f9cbceed2a2a (accessed on 14 June 2019).
  40. Konis, K. safeBinaryRegression: Safe Binary Regression; R package version 0.1-3; 2013.
  41. Brown, L.D. Fundamentals of Statistical Exponential Families: With Applications in Statistical Decision Theory; Lecture Notes–Monograph Series; Institute of Mathematical Statistics: Hayworth, CA, USA, 1986; Volume 9.
Figure 1. Simulation study: averaged bias, squared averaged bias, and mean squared error (MSE) for the Newton–Raphson (NR), bias-corrected Newton–Raphson (NRF), and maximum entropy (ME) algorithms. Note that the number of predictors p is represented column-wise (outside), whereas the sample size n is reported on the x-axis (inside). The measures are plotted on a logarithmic scale.
Figure 2. Simulation study: relative bias for NRF and ME algorithms in the conditions N = 200 P = 5 (A) and N = 200 P = 10 (B). Note that plots are paired vertically by predictor. Rate of over-estimation (under-estimation): (A) ME = 0.54 (0.46), NRF = 0.49 (0.51); (B) ME = 0.53 (0.47), NRF = 0.47 (0.53).
Table 1. List of symbols and abbreviations used throughout the manuscript.
ME: maximum entropy
NR: Newton–Raphson algorithm
NRF: bias-corrected Newton–Raphson algorithm
$\mathbf{y}$: sample of observations
$\mathcal{Y}$: sample space
$\boldsymbol{\theta}$: $J \times 1$ vector of parameters
$\hat{\boldsymbol{\theta}}$: estimated vector of parameters
$\tilde{\boldsymbol{\theta}}$: reparameterized vector of parameters under ME
$f(y;\boldsymbol{\theta})$: density function
$l(\boldsymbol{\theta})$: log-likelihood function
$U(\boldsymbol{\theta})$, $U(\tilde{\boldsymbol{\theta}})$: score function
$\mathbf{z}$: $K \times 1$ vector of finite support points for $\tilde{\boldsymbol{\theta}}$
$\mathbf{p}$: $K \times 1$ vector of unknown probabilities for $\tilde{\boldsymbol{\theta}}$
$\hat{\mathbf{p}}$: vector of estimated probabilities for $\tilde{\boldsymbol{\theta}}$
Table 2. Finney’s data on vasoconstriction in the skin of the digits. The response Y indicates the occurrence ( Y = 1 ) or non-occurrence ( Y = 0 ) of the vasoconstriction.
Volume   Rate    Y
3.70     0.825   1
3.50     1.090   1
1.25     2.500   1
0.75     1.500   1
0.80     3.200   1
0.70     3.500   1
0.60     0.750   0
1.10     1.700   0
0.90     0.750   0
0.90     0.450   0
0.80     0.570   0
0.55     2.750   0
0.60     3.000   0
1.40     2.330   1
0.75     3.750   1
2.30     1.640   1
3.20     1.600   1
0.85     1.415   1
1.70     1.060   0
Table 3. Estimates for the iris logistic regression: ME (maximum entropy), NRF (bias-corrected Newton–Raphson), NR (Newton–Raphson). Note that the NRF algorithm implements Firth’s bias correction [7].
         ME         NRF       NR
β_0      17.892     12.539    445.917
β_1      −10.091    −6.151    −166.637
β_2      12.229     6.890     140.570
Table 4. Simulation study: proportions of separation occurred in the data and non-convergence (nc) rates for NR, NRF, ME algorithms.
n     p    Separation   nc NR   nc NRF   nc ME
15    1    0.333        0.085   0.000    0.000
50    1    0.002        0.002   0.000    0.000
200   1    0.000        0.000   0.000    0.000
15    5    0.976        0.237   0.000    0.000
50    5    0.771        0.771   0.000    0.000
200   5    0.000        0.000   0.000    0.000
15    10   1.000        0.002   0.000    0.000
50    10   0.949        0.950   0.000    0.000
200   10   0.013        0.013   0.000    0.000
Table 5. Simulation study: averaged bias, squared averaged bias, and MSE for NR, NRF, ME algorithms.
            NR                                NRF                          ME
n    p    B^      V^       B^2     MSE       B^     V^     B^2   MSE      B^     V^    B^2   MSE
15   1    −5.54   236.70   30.67   267.36    0.22   0.35   0.05  0.40     −1.17  6.28  1.37  7.64
50   1    −0.13   3.42     0.02    3.44      −0.00  1.41   0.00  1.41     −0.12  1.99  0.01  2.00
200  1    0.03    0.11     0.00    0.11      0.00   0.10   0.00  0.10     0.03   0.11  0.00  0.11
15   5    10.68   1553.37  113.98  1667.33   −1.22  3.00   1.50  4.49     0.20   5.32  0.04  5.36
50   5    7.46    1918.18  55.65   1973.78   −0.44  2.20   0.20  2.39     −0.11  1.45  0.01  1.46
200  5    0.24    1.58     0.06    1.64      0.01   0.50   0.00  0.50     0.12   0.42  0.02  0.44
15   10   −0.97   177.40   0.95    178.35    −0.13  4.82   0.02  4.84     −0.38  8.10  0.14  8.24
50   10   2.80    1490.39  7.83    1498.20   −0.07  1.23   0.00  1.23     −0.02  1.53  0.00  1.53
200  10   0.66    15.29    0.43    15.72     0.02   0.86   0.00  0.86     0.10   0.48  0.01  0.50
