
Special Issue "Maximum Entropy and Its Application"

A special issue of Entropy (ISSN 1099-4300).

Deadline for manuscript submissions: closed (31 March 2014)

Special Issue Editor

Guest Editor
Dr. Dawn E. Holmes

Department of Statistics and Applied Probability, University of California, USA
Interests: Bayesian networks; machine learning; data mining; knowledge discovery; the foundations of Bayesianism

Special Issue Information

Submission

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. Papers will be published continuously (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are refereed through a peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed Open Access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs).

Published Papers (18 papers)


Research

Jump to: Review, Other

Open Access Article A Note of Caution on Maximizing Entropy
Entropy 2014, 16(7), 4004-4014; doi:10.3390/e16074004
Received: 4 May 2014 / Revised: 10 July 2014 / Accepted: 15 July 2014 / Published: 17 July 2014
Cited by 1
Abstract
The Principle of Maximum Entropy is often used to update probabilities due to evidence instead of performing Bayesian updating using Bayes’ Theorem, and its use often has efficacious results. However, in some circumstances the results seem unacceptable and unintuitive. This paper discusses some of these cases, and discusses how to identify some of the situations in which this principle should not be used. The paper starts by reviewing three approaches to probability, namely the classical approach, the limiting frequency approach, and the Bayesian approach. It then introduces maximum entropy and shows its relationship to the three approaches. Next, through examples, it shows that maximizing entropy sometimes can stand in direct opposition to Bayesian updating based on reasonable prior beliefs. The paper concludes that if we take the Bayesian approach that probability is about reasonable belief based on all available information, then we can resolve the conflict between the maximum entropy approach and the Bayesian approach that is demonstrated in the examples. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application)
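As a concrete illustration of the kind of constraint-based updating this abstract contrasts with Bayes' Theorem, the sketch below solves Jaynes' classic die problem: among all distributions on {1, ..., 6} with a prescribed mean, find the one of maximum entropy. This example is not taken from the paper; the target mean of 4.5 is the standard illustrative choice, and the function name is ours.

```python
import math

# Jaynes' die problem: among all distributions on `values` with a
# prescribed mean, the maximum entropy solution has the Gibbs form
# p_i proportional to exp(lam * x_i). We find lam by bisection,
# since the constrained mean is monotone increasing in lam.
def maxent_die(values, target_mean, lo=-50.0, hi=50.0, tol=1e-12):
    def mean_for(lam):
        w = [math.exp(lam * v) for v in values]
        z = sum(w)
        return sum(v * wi for v, wi in zip(values, w)) / z

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w]

faces = [1, 2, 3, 4, 5, 6]
p = maxent_die(faces, 4.5)  # skewed toward high faces, unlike the uniform
```

With no mean constraint the answer is the uniform distribution; with the constraint, probability mass shifts monotonically toward the larger faces, which is exactly the behavior a Bayesian update from a strong prior need not reproduce.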
Open Access Article Hierarchical Geometry Verification via Maximum Entropy Saliency in Image Retrieval
Entropy 2014, 16(7), 3848-3865; doi:10.3390/e16073848
Received: 5 April 2014 / Revised: 16 June 2014 / Accepted: 30 June 2014 / Published: 14 July 2014
Cited by 1
Abstract
We propose a new geometric verification method for image retrieval—Hierarchical Geometry Verification via Maximum Entropy Saliency (HGV)—which aims to filter redundant matches while retaining information about the retrieval target that lies partly outside the salient regions, using hierarchical saliency, and to fully explore the geometric context of all visual words in images. First, we obtain hierarchical salient regions of a query image based on the maximum entropy principle and label visual features with salient tags. The tags added to the feature descriptors are used to compute the saliency matching score, and the scores are regarded as the weight information in the geometry verification step. Second, we define a spatial pattern as a triangle composed of three matched features and evaluate the similarity between every two spatial patterns. Finally, we sum all spatial matching scores with weights to generate the final ranking list. Experimental results show that Hierarchical Geometry Verification based on Maximum Entropy Saliency not only improves retrieval accuracy, but also reduces the time consumption of the full retrieval. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application)
Open Access Article A Maximum Entropy Fixed-Point Route Choice Model for Route Correlation
Entropy 2014, 16(7), 3635-3654; doi:10.3390/e16073635
Received: 19 March 2014 / Revised: 6 June 2014 / Accepted: 23 June 2014 / Published: 30 June 2014
Abstract
In this paper we present a stochastic route choice model for transit networks that explicitly addresses route correlation due to overlapping alternatives. The model is based on a multi-objective mathematical programming problem, the optimality conditions of which generate an extension to the Multinomial Logit models. The proposed model considers a fixed point problem for treating correlations between routes, which can be solved iteratively. We estimated the new model on the Santiago (Chile) Metro network and compared the results with other route choice models that can be found in the literature. The new model has better explanatory and predictive power than many other alternative models, correctly capturing the correlation factor. Our methodology can be extended to private transport networks. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application)
Open Access Article Duality of Maximum Entropy and Minimum Divergence
Entropy 2014, 16(7), 3552-3572; doi:10.3390/e16073552
Received: 28 April 2014 / Revised: 19 June 2014 / Accepted: 24 June 2014 / Published: 26 June 2014
Cited by 7
Abstract
We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross and diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized to be totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory for the maximum likelihood method under the maximum entropy model in terms of the Boltzmann-Gibbs-Shannon entropy is given. We discuss the duality in detail for Tsallis entropy as a typical example. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application)
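For reference, the Tsallis entropy named above as the typical example is the standard one-parameter family (textbook definition, not reproduced from the paper):

```latex
S_q(p) = \frac{1}{q-1}\Bigl(1 - \sum_i p_i^{\,q}\Bigr), \qquad q > 0,\; q \neq 1,
```

which recovers the Boltzmann-Gibbs-Shannon entropy $-\sum_i p_i \log p_i$ in the limit $q \to 1$.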
Open Access Article A Maximum Entropy Method for a Robust Portfolio Problem
Entropy 2014, 16(6), 3401-3415; doi:10.3390/e16063401
Received: 27 March 2014 / Revised: 9 June 2014 / Accepted: 17 June 2014 / Published: 20 June 2014
Cited by 2
Abstract
We propose a continuous maximum entropy method to investigate the robust optimal portfolio selection problem for the market with transaction costs and dividends. This robust model aims to maximize the worst-case portfolio return in the case that all of the asset returns lie within some prescribed intervals. A numerical optimal solution to the problem is obtained by using a continuous maximum entropy method. Furthermore, some numerical experiments indicate that the robust model in this paper can result in better portfolio performance than a classical mean-variance model. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application)
Open Access Article Density Reconstructions with Errors in the Data
Entropy 2014, 16(6), 3257-3272; doi:10.3390/e16063257
Received: 2 May 2014 / Revised: 3 June 2014 / Accepted: 9 June 2014 / Published: 12 June 2014
Cited by 3
Abstract
The maximum entropy method was originally proposed as a variational technique to determine probability densities from the knowledge of a few expected values. The applications of the method beyond its original role in statistical physics are manifold. An interesting feature of the method is its potential to incorporate errors in the data. Here, we examine two possible ways of doing that. The two approaches have different intuitive interpretations, and one of them allows for error estimation. Our motivating example comes from the field of risk analysis, but the statement of the problem might as well come from any branch of applied sciences. We apply the methodology to a problem consisting of the determination of a probability density from a few values of its numerically-determined Laplace transform. This problem can be mapped onto a problem consisting of the determination of a probability density on [0, 1] from the knowledge of a few of its fractional moments up to some measurement errors stemming from insufficient data. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application)
Open Access Article Reaction Kinetics Path Based on Entropy Production Rate and Its Relevance to Low-Dimensional Manifolds
Entropy 2014, 16(6), 2904-2943; doi:10.3390/e16062904
Received: 22 December 2013 / Revised: 13 May 2014 / Accepted: 14 May 2014 / Published: 26 May 2014
Cited by 1
Abstract
The equation that approximately traces the trajectory in the concentration phase space of chemical kinetics is derived based on the rate of entropy production. The equation coincides with the true chemical kinetics equation to first order in a variable that characterizes the degree of quasi-equilibrium for each reaction, and the equation approximates the trajectory along at least final part of one-dimensional (1-D) manifold of true chemical kinetics that reaches equilibrium in concentration phase space. Besides the 1-D manifold, each higher dimensional manifold of the trajectories given by the equation is an approximation to that of true chemical kinetics when the contour of the entropy production rate in the concentration phase space is not highly distorted, because the Jacobian and its eigenvectors for the equation are exactly the same as those of true chemical kinetics at equilibrium; however, the path or trajectory itself is not necessarily an approximation to that of true chemical kinetics in manifolds higher than 1-D. The equation is for the path of steepest descent that sufficiently accounts for the constraints inherent in chemical kinetics such as element conservation, whereas the simple steepest-descent-path formulation whose Jacobian is the Hessian of the entropy production rate cannot even approximately reproduce any part of the 1-D manifold of true chemical kinetics except for the special case where the eigenvector of the Hessian is nearly identical to that of the Jacobian of chemical kinetics. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application)
Open Access Article A Maximum Entropy Approach to Assess Debonding in Honeycomb Aluminum Plates
Entropy 2014, 16(5), 2869-2889; doi:10.3390/e16052869
Received: 22 March 2014 / Revised: 12 May 2014 / Accepted: 21 May 2014 / Published: 23 May 2014
Cited by 2
Abstract
Honeycomb sandwich structures are used in a wide variety of applications. Nevertheless, due to manufacturing defects or impact loads, these structures can be subject to imperfect bonding or debonding between the skin and the honeycomb core. The presence of debonding reduces the bending stiffness of the composite panel, which causes detectable changes in its vibration characteristics. This article presents a new supervised learning algorithm to identify debonded regions in aluminum honeycomb panels. The algorithm uses a linear approximation method handled by a statistical inference model based on the maximum-entropy principle. The merits of this new approach are twofold: training is avoided and data is processed in a period of time that is comparable to the one of neural networks. The honeycomb panels are modeled with finite elements using a simplified three-layer shell model. The adhesive layer between the skin and core is modeled using linear springs, the rigidities of which are reduced in debonded sectors. The algorithm is validated using experimental data of an aluminum honeycomb panel under different damage scenarios. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application)
Open Access Article The Impact of the Prior Density on a Minimum Relative Entropy Density: A Case Study with SPX Option Data
Entropy 2014, 16(5), 2642-2668; doi:10.3390/e16052642
Received: 31 March 2014 / Revised: 7 May 2014 / Accepted: 9 May 2014 / Published: 14 May 2014
Abstract
We study the problem of finding probability densities that match given European call option prices. To allow prior information about such a density to be taken into account, we generalise the algorithm presented in Neri and Schneider (Appl. Math. Finance 2013) to find the maximum entropy density of an asset price to the relative entropy case. This is applied to study the impact of the choice of prior density in two market scenarios. In the first scenario, call option prices are prescribed at only a small number of strikes, and we see that the choice of prior, or indeed its omission, yields notably different densities. The second scenario is given by CBOE option price data for S&P500 index options at a large number of strikes. Prior information is now considered to be given by calibrated Heston, Schöbel–Zhu or Variance Gamma models. We find that the resulting digital option prices are essentially the same as those given by the (non-relative) Buchen–Kelly density itself. In other words, in a sufficiently liquid market, the influence of the prior density seems to vanish almost completely. Finally, we study variance swaps and derive a simple formula relating the fair variance swap rate to entropy. Then we show, again, that the prior loses its influence on the fair variance swap rate as the number of strikes increases. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application)
Open Access Article A Relevancy, Hierarchical and Contextual Maximum Entropy Framework for a Data-Driven 3D Scene Generation
Entropy 2014, 16(5), 2568-2591; doi:10.3390/e16052568
Received: 16 January 2014 / Revised: 16 April 2014 / Accepted: 4 May 2014 / Published: 9 May 2014
Abstract
We introduce a novel Maximum Entropy (MaxEnt) framework that can generate 3D scenes by incorporating objects’ relevancy, hierarchical and contextual constraints in a unified model. This model is formulated by a Gibbs distribution, under the MaxEnt framework, that can be sampled to generate plausible scenes. Unlike existing approaches, which represent a given scene by a single And-Or graph, the relevancy constraint (defined as the frequency with which a given object exists in the training data) requires our approach to sample from multiple And-Or graphs, allowing variability in terms of objects’ existence across synthesized scenes. Once an And-Or graph is sampled from the ensemble, the hierarchical constraints are employed to sample the Or-nodes (style variations) and the contextual constraints are subsequently used to enforce the corresponding relations that must be satisfied by the And-nodes. To illustrate the proposed methodology, we use desk scenes that are composed of objects whose existence, styles and arrangements (position and orientation) can vary from one scene to the next. The relevancy, hierarchical and contextual constraints are extracted from a set of training scenes and utilized to generate plausible synthetic scenes that in turn satisfy these constraints. After applying the proposed framework, scenes that are plausible representations of the training examples are automatically generated. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application)
Open Access Article Parameter Estimation for Spatio-Temporal Maximum Entropy Distributions: Application to Neural Spike Trains
Entropy 2014, 16(4), 2244-2277; doi:10.3390/e16042244
Received: 19 February 2014 / Revised: 28 March 2014 / Accepted: 8 April 2014 / Published: 22 April 2014
Cited by 1
Abstract
We propose a numerical method to learn maximum entropy (MaxEnt) distributions with spatio-temporal constraints from experimental spike trains. This is an extension of two papers, [10] and [4], which proposed the estimation of parameters where only spatial constraints were taken into account. The extension we propose allows one to properly handle memory effects in spike statistics, for large-sized neural networks. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application)
Open Access Article A Bayesian Approach to the Balancing of Statistical Economic Data
Entropy 2014, 16(3), 1243-1271; doi:10.3390/e16031243
Received: 16 January 2014 / Revised: 12 February 2014 / Accepted: 17 February 2014 / Published: 26 February 2014
Cited by 1
Abstract
This paper addresses the problem of balancing statistical economic data, when data structure is arbitrary and both uncertainty estimates and a ranking of data quality are available. Using a Bayesian approach, the prior configuration is described as a multivariate random vector and the balanced posterior is obtained by application of relative entropy minimization. The paper shows that conventional data balancing methods, such as generalized least squares, weighted least squares and biproportional methods are particular cases of the general method described here. As a consequence, it is possible to determine the underlying assumptions and range of application of each traditional method. In particular, the popular biproportional method is found to assume that all source data has the same relative uncertainty. Finally, this paper proposes a simple linear iterative method that generalizes the biproportional method to the data balancing problem with arbitrary data structure, uncertainty estimates and multiple data quality levels. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application)
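The biproportional method that this paper generalizes can be sketched as the classical RAS (iterative proportional fitting) update: alternately rescale rows and columns of a nonnegative matrix until its margins match the targets. The sketch below is a generic textbook RAS, not the paper's generalized method, and the matrix and margin targets are hypothetical.

```python
def ras_balance(matrix, row_targets, col_targets, iters=200):
    """Biproportional (RAS) balancing: iteratively scale rows, then
    columns, of a nonnegative matrix so its margins match the targets.
    Requires sum(row_targets) == sum(col_targets) for convergence."""
    m = [row[:] for row in matrix]
    for _ in range(iters):
        # Row scaling step
        for i, target in enumerate(row_targets):
            s = sum(m[i])
            if s > 0:
                m[i] = [x * target / s for x in m[i]]
        # Column scaling step
        for j, target in enumerate(col_targets):
            s = sum(m[i][j] for i in range(len(m)))
            if s > 0:
                for i in range(len(m)):
                    m[i][j] *= target / s
    return m

# Hypothetical 2x2 prior table balanced to row sums [4, 6], column sums [5, 5].
balanced = ras_balance([[1.0, 2.0], [3.0, 4.0]], [4.0, 6.0], [5.0, 5.0])
```

RAS can be read as relative entropy minimization with respect to the prior table under margin constraints, which is why it appears as a special case of the paper's general framework.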
Open Access Article Fluctuations, Entropic Quantifiers and Classical-Quantum Transition
Entropy 2014, 16(3), 1178-1190; doi:10.3390/e16031178
Received: 23 December 2013 / Revised: 18 February 2014 / Accepted: 19 February 2014 / Published: 25 February 2014
Cited by 2
Abstract
We show that a special entropic quantifier, called the statistical complexity, becomes maximal at the transition between super-Poisson and sub-Poisson regimes. This acquires important connotations given the fact that these regimes are usually associated with, respectively, classical and quantum processes. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application)
Open Access Article A Relationship between the Ordinary Maximum Entropy Method and the Method of Maximum Entropy in the Mean
Entropy 2014, 16(2), 1123-1133; doi:10.3390/e16021123
Received: 16 December 2013 / Revised: 11 February 2014 / Accepted: 13 February 2014 / Published: 24 February 2014
Abstract
There are two entropy-based methods to deal with linear inverse problems, which we shall call the ordinary method of maximum entropy (OME) and the method of maximum entropy in the mean (MEM). Not only does MEM use OME as a stepping stone, it also allows for greater generality: first, because it allows one to include convex constraints in a natural way, and second, because it allows one to incorporate and estimate (additive) measurement errors from the data. Here we shall see both methods in action in a specific example. We shall solve the discretized version of the problem by two variants of MEM and directly with OME. We shall see that OME is actually a particular instance of MEM, when the reference measure is a Poisson measure. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application)
Open Access Article Generalized Maximum Entropy Analysis of the Linear Simultaneous Equations Model
Entropy 2014, 16(2), 825-853; doi:10.3390/e16020825
Received: 20 November 2013 / Revised: 17 January 2014 / Accepted: 28 January 2014 / Published: 12 February 2014
Cited by 1
Abstract
A generalized maximum entropy estimator is developed for the linear simultaneous equations model. Monte Carlo sampling experiments are used to evaluate the estimator’s performance in small and medium sized samples, suggesting contexts in which the current generalized maximum entropy estimator is superior in mean square error to two and three stage least squares. Analytical results are provided relating to asymptotic properties of the estimator and associated hypothesis testing statistics. Monte Carlo experiments are also used to provide evidence on the power and size of test statistics. An empirical application is included to demonstrate the practical implementation of the estimator. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application)
Open Access Article Modelling and Simulation of Seasonal Rainfall Using the Principle of Maximum Entropy
Entropy 2014, 16(2), 747-769; doi:10.3390/e16020747
Received: 9 December 2013 / Revised: 10 January 2014 / Accepted: 28 January 2014 / Published: 10 February 2014
Cited by 5
Abstract
We use the principle of maximum entropy to propose a parsimonious model for the generation of simulated rainfall during the wettest three-month season at a typical location on the east coast of Australia. The model uses a checkerboard copula of maximum entropy to model the joint probability distribution for total seasonal rainfall and a set of two-parameter gamma distributions to model each of the marginal monthly rainfall totals. The model allows us to match the grade correlation coefficients for the checkerboard copula to the observed Spearman rank correlation coefficients for the monthly rainfalls and, hence, provides a model that correctly describes the mean and variance for each of the monthly totals and also for the overall seasonal total. Thus, we avoid the need for a posteriori adjustment of simulated monthly totals in order to correctly simulate the observed seasonal statistics. Detailed results are presented for the modelling and simulation of seasonal rainfall in the town of Kempsey on the mid-north coast of New South Wales. Empirical evidence from extensive simulations is used to validate this application of the model. A similar analysis for Sydney is also described. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application)
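The gamma-marginal part of this model is easy to sketch: fit a two-parameter gamma distribution to monthly totals by the method of moments. The sketch below uses synthetic data; the shape 2.0 and scale 60.0 are hypothetical, not estimates for Kempsey or Sydney, and the copula step that couples the months is omitted.

```python
import random

def fit_gamma_moments(samples):
    """Method-of-moments fit of a two-parameter gamma distribution:
    mean = alpha * beta and variance = alpha * beta**2 give
    alpha = mean**2 / var and beta = var / mean."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    alpha = mean * mean / var  # shape
    beta = var / mean          # scale
    return alpha, beta

random.seed(42)
# Hypothetical monthly rainfall totals (mm), drawn from Gamma(2.0, 60.0).
obs = [random.gammavariate(2.0, 60.0) for _ in range(5000)]
alpha, beta = fit_gamma_moments(obs)
```

With a large synthetic sample, the recovered shape and scale land close to the generating values, which is the sanity check one would run before plugging fitted marginals into the seasonal model.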

Review

Jump to: Research, Other

Open Access Review Maximum Entropy in Drug Discovery
Entropy 2014, 16(7), 3754-3768; doi:10.3390/e16073754
Received: 28 April 2014 / Revised: 28 May 2014 / Accepted: 27 June 2014 / Published: 7 July 2014
Cited by 4
Abstract
Drug discovery applies multidisciplinary approaches either experimentally, computationally or both ways to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes’ pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application)

Other

Jump to: Research, Review

Open Access Concept Paper Entropy Estimation of Disaggregate Production Functions: An Application to Northern Mexico
Entropy 2014, 16(3), 1349-1364; doi:10.3390/e16031349
Received: 6 January 2014 / Revised: 12 February 2014 / Accepted: 26 February 2014 / Published: 3 March 2014
Cited by 1
Abstract
This paper demonstrates a robust maximum entropy approach to estimating flexible-form farm-level multi-input/multi-output production functions using minimally specified disaggregated data. Since our goal is to address policy questions, we emphasize the model’s ability to reproduce characteristics of the existing production system and predict outcomes of policy changes at a disaggregate level. Measurement of distributional impacts of policy changes requires use of farm-level models estimated across a wide spectrum of sizes and types, which is often difficult with traditional econometric methods due to data limitations. We use a two-stage approach to generate observation-specific shadow values for incompletely priced inputs. We then use the shadow values and nominal input prices to estimate crop-specific production functions using generalized maximum entropy (GME) to capture individual heterogeneity of the production environment while replicating observed inputs and outputs to production. The two-stage GME approach can be implemented with small data sets. We demonstrate this methodology in an empirical application to a small cross-section data set for Northern Rio Bravo, Mexico and estimate production functions for small family farms and moderate commercial farms. The estimates show considerable distributional differences resulting from policies that change water subsidies in the region or shift price supports to direct payments. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application)

Journal Contact

MDPI AG
Entropy Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
entropy@mdpi.com
Tel.: +41 61 683 77 34
Fax: +41 61 302 89 18