Maximum Entropy and Its Application

A special issue of Entropy (ISSN 1099-4300).

Deadline for manuscript submissions: closed (31 March 2014) | Viewed by 109424

Special Issue Editor

Department of Statistics and Applied Probability, University of California, Santa Barbara, CA 93106-3110, USA
Interests: Bayesian networks; machine learning; data mining; knowledge discovery; the foundations of Bayesianism

Special Issue Information

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (18 papers)


Research


Article
A Note of Caution on Maximizing Entropy
by Richard E. Neapolitan and Xia Jiang
Entropy 2014, 16(7), 4004-4014; https://doi.org/10.3390/e16074004 - 17 Jul 2014
Cited by 2 | Viewed by 6468
Abstract
The Principle of Maximum Entropy is often used to update probabilities in light of evidence instead of performing Bayesian updating with Bayes' Theorem, and its use often yields good results. However, in some circumstances the results seem unacceptable and unintuitive. This paper discusses some of these cases and shows how to identify situations in which the principle should not be used. The paper starts by reviewing three approaches to probability, namely the classical approach, the limiting frequency approach, and the Bayesian approach. It then introduces maximum entropy and shows its relationship to the three approaches. Next, through examples, it shows that maximizing entropy can sometimes stand in direct opposition to Bayesian updating based on reasonable prior beliefs. The paper concludes that if we take the Bayesian approach that probability is about reasonable belief based on all available information, then we can resolve the conflict between the maximum entropy approach and the Bayesian approach that is demonstrated in the examples.
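As a concrete (and entirely invented, for illustration only) instance of the kind of entropy maximization under constraints that the paper scrutinizes, the sketch below computes the maximum-entropy distribution over a six-sided die subject to a mean constraint, locating the Lagrange multiplier by bisection:

```python
import numpy as np

def maxent_die(target_mean, tol=1e-10):
    """Maximum-entropy distribution over die faces 1..6 with a fixed mean.

    The solution has the exponential-family form p_k ∝ exp(lam * k);
    the multiplier lam is found by bisection, since the induced mean
    is monotone increasing in lam.
    """
    faces = np.arange(1, 7)

    def mean_for(lam):
        w = np.exp(lam * faces)
        return (w / w.sum()) @ faces

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = np.exp(lam * faces)
    return w / w.sum()

p = maxent_die(4.5)
print(np.round(p, 4))               # probabilities tilted toward the high faces
print(float(p @ np.arange(1, 7)))   # mean constraint met: ≈ 4.5
```

With `target_mean = 3.5` the routine recovers the uniform distribution, the unconstrained entropy maximizer; the paper's point is that such updates need not agree with a Bayesian posterior under reasonable priors.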
(This article belongs to the Special Issue Maximum Entropy and Its Application)

Article
Hierarchical Geometry Verification via Maximum Entropy Saliency in Image Retrieval
by Hongwei Zhao, Qingliang Li and Pingping Liu
Entropy 2014, 16(7), 3848-3865; https://doi.org/10.3390/e16073848 - 14 Jul 2014
Cited by 6 | Viewed by 5142
Abstract
We propose a new geometric verification method for image retrieval, Hierarchical Geometry Verification via Maximum Entropy Saliency (HGV), which aims to filter out redundant matches, to retain information about the retrieval target even when it lies partly outside the salient regions of the hierarchical saliency map, and to fully exploit the geometric context of all visual words in images. First, we obtain hierarchical salient regions of a query image based on the maximum entropy principle and label visual features with salient tags. The tags added to the feature descriptors are used to compute the saliency matching score, and the scores are regarded as the weight information in the geometry verification step. Second, we define a spatial pattern as a triangle composed of three matched features and evaluate the similarity between every two spatial patterns. Finally, we sum all spatial matching scores with weights to generate the final ranking list. Experimental results show that Hierarchical Geometry Verification based on Maximum Entropy Saliency can not only improve retrieval accuracy, but also reduce the time consumption of full retrieval.

Article
A Maximum Entropy Fixed-Point Route Choice Model for Route Correlation
by Louis De Grange, Sebastián Raveau and Felipe González
Entropy 2014, 16(7), 3635-3654; https://doi.org/10.3390/e16073635 - 30 Jun 2014
Cited by 1 | Viewed by 4895
Abstract
In this paper we present a stochastic route choice model for transit networks that explicitly addresses route correlation due to overlapping alternatives. The model is based on a multi-objective mathematical programming problem, the optimality conditions of which generate an extension to the Multinomial Logit models. The proposed model considers a fixed-point problem for treating correlations between routes, which can be solved iteratively. We estimated the new model on the Santiago (Chile) Metro network and compared the results with other route choice models that can be found in the literature. The new model has better explanatory and predictive power than many other alternative models, correctly capturing the correlation factor. Our methodology can be extended to private transport networks.
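For readers unfamiliar with the baseline being extended, a minimal sketch of plain Multinomial Logit choice probabilities follows; the route utilities and scale parameter are invented, and the paper's fixed-point correction for route correlation is deliberately omitted:

```python
import numpy as np

def mnl_probabilities(utilities, theta=1.0):
    """Multinomial Logit route-choice probabilities.

    P_i = exp(theta * V_i) / sum_j exp(theta * V_j), computed with the
    usual max-subtraction trick for numerical stability.
    """
    v = theta * np.asarray(utilities, dtype=float)
    v -= v.max()              # stability: shift does not change the ratios
    w = np.exp(v)
    return w / w.sum()

# Three routes; if the first two overlap heavily, plain MNL over-predicts
# their combined share -- the motivation for a correlation correction.
print(mnl_probabilities([-10.0, -10.1, -12.0]))
```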

Article
Duality of Maximum Entropy and Minimum Divergence
by Shinto Eguchi, Osamu Komori and Atsumi Ohara
Entropy 2014, 16(7), 3552-3572; https://doi.org/10.3390/e16073552 - 26 Jun 2014
Cited by 13 | Viewed by 6272
Abstract
We discuss a special class of generalized divergence measures defined by generator functions. Any divergence measure in the class separates into the difference between a cross entropy and a diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory of the maximum likelihood method under the maximum entropy model in terms of the Boltzmann-Gibbs-Shannon entropy is given. We discuss the duality in detail for Tsallis entropy as a typical example.
Article
A Maximum Entropy Method for a Robust Portfolio Problem
by Yingying Xu, Zhuwu Wu, Long Jiang and Xuefeng Song
Entropy 2014, 16(6), 3401-3415; https://doi.org/10.3390/e16063401 - 20 Jun 2014
Cited by 15 | Viewed by 5950
Abstract
We propose a continuous maximum entropy method to investigate the robust optimal portfolio selection problem for the market with transaction costs and dividends. This robust model aims to maximize the worst-case portfolio return in the case that all of the asset returns lie within some prescribed intervals. A numerical optimal solution to the problem is obtained by using a continuous maximum entropy method. Furthermore, some numerical experiments indicate that the robust model in this paper can result in better portfolio performance than a classical mean-variance model.
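The abstract does not spell out the algorithm, but one standard "entropy smoothing" device for such worst-case (max-min) problems replaces the nonsmooth minimum by a log-sum-exp surrogate; the sketch below, with invented scenario returns, shows the idea:

```python
import numpy as np

def softmin(values, p=50.0):
    """Entropy-smoothed minimum: -(1/p) * log(sum(exp(-p * v))).

    Always a lower bound on min(values), and converges to it as
    p -> infinity; the smooth surrogate makes the robust max-min
    portfolio problem amenable to ordinary gradient methods.
    """
    v = np.asarray(values, dtype=float)
    m = v.min()
    # subtract the min before exponentiating to avoid overflow
    return m - np.log(np.exp(-p * (v - m)).sum()) / p

scenario_returns = np.array([0.02, 0.05, -0.01])
print(softmin(scenario_returns, p=200.0))   # slightly below min(...) = -0.01
```

This is a generic smoothing technique under the stated assumptions, not necessarily the authors' exact formulation.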

Article
Density Reconstructions with Errors in the Data
by Erika Gomes-Gonçalves, Henryk Gzyl and Silvia Mayoral
Entropy 2014, 16(6), 3257-3272; https://doi.org/10.3390/e16063257 - 12 Jun 2014
Cited by 13 | Viewed by 5521
Abstract
The maximum entropy method was originally proposed as a variational technique to determine probability densities from the knowledge of a few expected values. The applications of the method beyond its original role in statistical physics are manifold. An interesting feature of the method is its potential to incorporate errors in the data. Here, we examine two possible ways of doing that. The two approaches have different intuitive interpretations, and one of them allows for error estimation. Our motivating example comes from the field of risk analysis, but the statement of the problem might as well come from any branch of the applied sciences. We apply the methodology to a problem consisting of the determination of a probability density from a few values of its numerically-determined Laplace transform. This problem can be mapped onto a problem consisting of the determination of a probability density on [0, 1] from the knowledge of a few of its fractional moments, up to some measurement errors stemming from insufficient data.
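The reconstruction step described above can be sketched numerically; the following toy version (grid, constraint functions and target fractional moments are all invented, and plain dual gradient descent is used in place of the authors' procedure) fits a maximum-entropy density on [0, 1] to two fractional moments:

```python
import numpy as np

def maxent_density(grid, constraints, targets, iters=5000, lr=2.0):
    """Discretized maximum-entropy density matching expected values.

    The density has the form p(x) ∝ exp(-sum_j lam_j * g_j(x)); the
    multipliers are found by gradient descent on the convex dual, whose
    gradient is (achieved moments - target moments).
    constraints: array of shape (m, n), row j holding g_j on the grid.
    """
    lam = np.zeros(len(targets))
    dx = grid[1] - grid[0]
    for _ in range(iters):
        w = np.exp(-lam @ constraints)
        p = w / (w.sum() * dx)                       # normalized density on the grid
        moments = (constraints * p).sum(axis=1) * dx
        lam += lr * (moments - targets)              # dual gradient step
    return p

x = np.linspace(1e-3, 1.0, 400)
G = np.vstack([x**0.5, x**1.5])      # fractional moments E[x^0.5], E[x^1.5] as "data"
p = maxent_density(x, G, np.array([0.6, 0.35]))
```

Error handling as discussed in the paper would relax the exact moment-matching above; this sketch enforces the constraints exactly.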

Article
Reaction Kinetics Path Based on Entropy Production Rate and Its Relevance to Low-Dimensional Manifolds
by Shinji Kojima
Entropy 2014, 16(6), 2904-2943; https://doi.org/10.3390/e16062904 - 26 May 2014
Cited by 2 | Viewed by 6190
Abstract
The equation that approximately traces the trajectory in the concentration phase space of chemical kinetics is derived based on the rate of entropy production. The equation coincides with the true chemical kinetics equation to first order in a variable that characterizes the degree of quasi-equilibrium for each reaction, and it approximates the trajectory along at least the final part of the one-dimensional (1-D) manifold of true chemical kinetics that reaches equilibrium in concentration phase space. Besides the 1-D manifold, each higher-dimensional manifold of the trajectories given by the equation is an approximation to that of true chemical kinetics when the contour of the entropy production rate in the concentration phase space is not highly distorted, because the Jacobian and its eigenvectors for the equation are exactly the same as those of true chemical kinetics at equilibrium; however, the path or trajectory itself is not necessarily an approximation to that of true chemical kinetics in manifolds higher than 1-D. The equation describes a path of steepest descent that properly accounts for the constraints inherent in chemical kinetics, such as element conservation, whereas the simple steepest-descent-path formulation, whose Jacobian is the Hessian of the entropy production rate, cannot even approximately reproduce any part of the 1-D manifold of true chemical kinetics except for the special case where the eigenvector of the Hessian is nearly identical to that of the Jacobian of chemical kinetics.

Article
A Maximum Entropy Approach to Assess Debonding in Honeycomb Aluminum Plates
by Viviana Meruane, Valentina Del Fierro and Alejandro Ortiz-Bernardin
Entropy 2014, 16(5), 2869-2889; https://doi.org/10.3390/e16052869 - 23 May 2014
Cited by 15 | Viewed by 6647
Abstract
Honeycomb sandwich structures are used in a wide variety of applications. Nevertheless, due to manufacturing defects or impact loads, these structures can be subject to imperfect bonding or debonding between the skin and the honeycomb core. The presence of debonding reduces the bending stiffness of the composite panel, which causes detectable changes in its vibration characteristics. This article presents a new supervised learning algorithm to identify debonded regions in aluminum honeycomb panels. The algorithm uses a linear approximation method handled by a statistical inference model based on the maximum-entropy principle. The merits of this new approach are twofold: training is avoided, and data is processed in a period of time comparable to that of neural networks. The honeycomb panels are modeled with finite elements using a simplified three-layer shell model. The adhesive layer between the skin and core is modeled using linear springs, the rigidities of which are reduced in debonded sectors. The algorithm is validated using experimental data from an aluminum honeycomb panel under different damage scenarios.

Article
The Impact of the Prior Density on a Minimum Relative Entropy Density: A Case Study with SPX Option Data
by Cassio Neri and Lorenz Schneider
Entropy 2014, 16(5), 2642-2668; https://doi.org/10.3390/e16052642 - 14 May 2014
Cited by 6 | Viewed by 5984
Abstract
We study the problem of finding probability densities that match given European call option prices. To allow prior information about such a density to be taken into account, we generalise the algorithm presented in Neri and Schneider (Appl. Math. Finance 2013) for finding the maximum entropy density of an asset price to the relative entropy case. This is applied to study the impact of the choice of prior density in two market scenarios. In the first scenario, call option prices are prescribed at only a small number of strikes, and we see that the choice of prior, or indeed its omission, yields notably different densities. The second scenario is given by CBOE option price data for S&P 500 index options at a large number of strikes. Prior information is now considered to be given by calibrated Heston, Schöbel–Zhu or Variance Gamma models. We find that the resulting digital option prices are essentially the same as those given by the (non-relative) Buchen–Kelly density itself. In other words, in a sufficiently liquid market, the influence of the prior density seems to vanish almost completely. Finally, we study variance swaps and derive a simple formula relating the fair variance swap rate to entropy. Then we show, again, that the prior loses its influence on the fair variance swap rate as the number of strikes increases.

Article
A Relevancy, Hierarchical and Contextual Maximum Entropy Framework for a Data-Driven 3D Scene Generation
by Mesfin Dema and Hamed Sari-Sarraf
Entropy 2014, 16(5), 2568-2591; https://doi.org/10.3390/e16052568 - 09 May 2014
Cited by 33 | Viewed by 5558
Abstract
We introduce a novel Maximum Entropy (MaxEnt) framework that can generate 3D scenes by incorporating objects' relevancy, hierarchical and contextual constraints in a unified model. This model is formulated by a Gibbs distribution, under the MaxEnt framework, that can be sampled to generate plausible scenes. Unlike existing approaches, which represent a given scene by a single And-Or graph, the relevancy constraint (defined as the frequency with which a given object exists in the training data) requires our approach to sample from multiple And-Or graphs, allowing variability in terms of objects' existence across synthesized scenes. Once an And-Or graph is sampled from the ensemble, the hierarchical constraints are employed to sample the Or-nodes (style variations), and the contextual constraints are subsequently used to enforce the corresponding relations that must be satisfied by the And-nodes. To illustrate the proposed methodology, we use desk scenes that are composed of objects whose existence, styles and arrangements (position and orientation) can vary from one scene to the next. The relevancy, hierarchical and contextual constraints are extracted from a set of training scenes and utilized to generate plausible synthetic scenes that in turn satisfy these constraints. After applying the proposed framework, scenes that are plausible representations of the training examples are automatically generated.

Article
Parameter Estimation for Spatio-Temporal Maximum Entropy Distributions: Application to Neural Spike Trains
by Hassan Nasser and Bruno Cessac
Entropy 2014, 16(4), 2244-2277; https://doi.org/10.3390/e16042244 - 22 Apr 2014
Cited by 16 | Viewed by 8709
Abstract
We propose a numerical method to learn maximum entropy (MaxEnt) distributions with spatio-temporal constraints from experimental spike trains. This is an extension of two papers, [10] and [4], which proposed the estimation of parameters where only spatial constraints were taken into account. The extension we propose allows one to properly handle memory effects in spike statistics, for large-sized neural networks.

Article
A Bayesian Approach to the Balancing of Statistical Economic Data
by João F. D. Rodrigues
Entropy 2014, 16(3), 1243-1271; https://doi.org/10.3390/e16031243 - 26 Feb 2014
Cited by 19 | Viewed by 5220
Abstract
This paper addresses the problem of balancing statistical economic data, when data structure is arbitrary and both uncertainty estimates and a ranking of data quality are available. Using a Bayesian approach, the prior configuration is described as a multivariate random vector and the balanced posterior is obtained by application of relative entropy minimization. The paper shows that conventional data balancing methods, such as generalized least squares, weighted least squares and biproportional methods are particular cases of the general method described here. As a consequence, it is possible to determine the underlying assumptions and range of application of each traditional method. In particular, the popular biproportional method is found to assume that all source data has the same relative uncertainty. Finally, this paper proposes a simple linear iterative method that generalizes the biproportional method to the data balancing problem with arbitrary data structure, uncertainty estimates and multiple data quality levels.
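The biproportional method mentioned above is the classical RAS / iterative proportional fitting scheme, which can be sketched in a few lines; the prior table and target margins here are invented for illustration:

```python
import numpy as np

def biproportional_balance(table, row_totals, col_totals, iters=1000, tol=1e-9):
    """Biproportional (RAS / iterative proportional fitting) balancing.

    Alternately rescales the rows and columns of a positive prior table
    until its margins match the targets -- the relative-entropy-minimizing
    posterior under margin constraints when all entries share the same
    relative uncertainty, as the paper notes.
    """
    t = np.asarray(table, dtype=float).copy()
    for _ in range(iters):
        t *= (row_totals / t.sum(axis=1))[:, None]   # fix row margins
        t *= (col_totals / t.sum(axis=0))[None, :]   # fix column margins
        if np.abs(t.sum(axis=1) - row_totals).max() < tol:
            break
    return t

prior = np.array([[1.0, 2.0], [3.0, 4.0]])
balanced = biproportional_balance(prior, np.array([4.0, 6.0]), np.array([5.0, 5.0]))
print(balanced)   # row sums ≈ (4, 6), column sums ≈ (5, 5)
```

Note that the scheme requires the row and column totals to sum to the same grand total; the paper's generalization relaxes the equal-uncertainty assumption.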
Article
Fluctuations, Entropic Quantifiers and Classical-Quantum Transition
by Flavia Pennini and Angelo Plastino
Entropy 2014, 16(3), 1178-1190; https://doi.org/10.3390/e16031178 - 25 Feb 2014
Cited by 2 | Viewed by 4951
Abstract
We show that a special entropic quantifier, called the statistical complexity, becomes maximal at the transition between super-Poisson and sub-Poisson regimes. This acquires important connotations given the fact that these regimes are usually associated with, respectively, classical and quantum processes.

Article
A Relationship between the Ordinary Maximum Entropy Method and the Method of Maximum Entropy in the Mean
by Henryk Gzyl and Enrique Ter Horst
Entropy 2014, 16(2), 1123-1133; https://doi.org/10.3390/e16021123 - 24 Feb 2014
Cited by 2 | Viewed by 5378
Abstract
There are two entropy-based methods to deal with linear inverse problems, which we shall call the ordinary method of maximum entropy (OME) and the method of maximum entropy in the mean (MEM). Not only does MEM use OME as a stepping stone, it also allows for greater generality: first, because it allows one to include convex constraints in a natural way, and second, because it allows one to incorporate and estimate (additive) measurement errors from the data. Here we shall see both methods in action in a specific example. We shall solve the discretized version of the problem by two variants of MEM and directly with OME. We shall see that OME is actually a particular instance of MEM, when the reference measure is a Poisson measure.

Article
Generalized Maximum Entropy Analysis of the Linear Simultaneous Equations Model
by Thomas L. Marsh, Ron Mittelhammer and Nicholas Scott Cardell
Entropy 2014, 16(2), 825-853; https://doi.org/10.3390/e16020825 - 12 Feb 2014
Cited by 8 | Viewed by 5331
Abstract
A generalized maximum entropy estimator is developed for the linear simultaneous equations model. Monte Carlo sampling experiments are used to evaluate the estimator's performance in small and medium-sized samples, suggesting contexts in which the current generalized maximum entropy estimator is superior in mean square error to two- and three-stage least squares. Analytical results are provided relating to the asymptotic properties of the estimator and associated hypothesis testing statistics. Monte Carlo experiments are also used to provide evidence on the power and size of the test statistics. An empirical application is included to demonstrate the practical implementation of the estimator.
Article
Modelling and Simulation of Seasonal Rainfall Using the Principle of Maximum Entropy
by Jonathan Borwein, Phil Howlett and Julia Piantadosi
Entropy 2014, 16(2), 747-769; https://doi.org/10.3390/e16020747 - 10 Feb 2014
Cited by 14 | Viewed by 7891
Abstract
We use the principle of maximum entropy to propose a parsimonious model for the generation of simulated rainfall during the wettest three-month season at a typical location on the east coast of Australia. The model uses a checkerboard copula of maximum entropy to model the joint probability distribution for total seasonal rainfall and a set of two-parameter gamma distributions to model each of the marginal monthly rainfall totals. The model allows us to match the grade correlation coefficients for the checkerboard copula to the observed Spearman rank correlation coefficients for the monthly rainfalls and, hence, provides a model that correctly describes the mean and variance for each of the monthly totals and also for the overall seasonal total. Thus, we avoid the need for a posteriori adjustment of simulated monthly totals in order to correctly simulate the observed seasonal statistics. Detailed results are presented for the modelling and simulation of seasonal rainfall in the town of Kempsey on the mid-north coast of New South Wales. Empirical evidence from extensive simulations is used to validate this application of the model. A similar analysis for Sydney is also described.
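A rough simulation in the spirit of this model can be sketched with a Gaussian copula standing in for the checkerboard copula of maximum entropy; the shapes, scales and correlation matrix below are invented, not Kempsey estimates:

```python
import numpy as np

def correlated_gamma_months(shapes, scales, corr, n, rng=None):
    """Monthly rainfall totals with gamma marginals and rank correlation.

    Couples independent gamma samples through the ranks of correlated
    normal draws: each column keeps an exact gamma marginal while the
    joint rank structure is inherited from the normals (a Gaussian-copula
    stand-in for the paper's checkerboard copula).
    """
    rng = np.random.default_rng(rng)
    z = rng.multivariate_normal(np.zeros(len(shapes)), corr, size=n)
    out = np.empty_like(z)
    for j, (k, th) in enumerate(zip(shapes, scales)):
        g = np.sort(rng.gamma(k, th, size=n))     # sorted gamma sample
        ranks = z[:, j].argsort().argsort()       # rank of each normal draw
        out[:, j] = g[ranks]                      # gamma values in the same rank order
    return out

corr = np.array([[1.0, 0.6, 0.3],
                 [0.6, 1.0, 0.6],
                 [0.3, 0.6, 1.0]])
sim = correlated_gamma_months([2.0, 2.5, 2.0], [40.0, 50.0, 45.0], corr, 5000, rng=0)
```

Summing the three columns gives simulated seasonal totals whose mean and variance reflect both the marginals and the imposed correlation, which is the point of the paper's construction.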

Review


Review
Maximum Entropy in Drug Discovery
by Chih-Yuan Tseng and Jack Tuszynski
Entropy 2014, 16(7), 3754-3768; https://doi.org/10.3390/e16073754 - 07 Jul 2014
Cited by 7 | Viewed by 6255
Abstract
Drug discovery applies multidisciplinary approaches, experimental, computational or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, it has been used not only as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.

Other


Concept Paper
Entropy Estimation of Disaggregate Production Functions: An Application to Northern Mexico
by Richard E. Howitt and Siwa Msangi
Entropy 2014, 16(3), 1349-1364; https://doi.org/10.3390/e16031349 - 03 Mar 2014
Cited by 5 | Viewed by 5193
Abstract
This paper demonstrates a robust maximum entropy approach to estimating flexible-form farm-level multi-input/multi-output production functions using minimally specified disaggregated data. Since our goal is to address policy questions, we emphasize the model's ability to reproduce characteristics of the existing production system and predict outcomes of policy changes at a disaggregate level. Measurement of distributional impacts of policy changes requires use of farm-level models estimated across a wide spectrum of sizes and types, which is often difficult with traditional econometric methods due to data limitations. We use a two-stage approach to generate observation-specific shadow values for incompletely priced inputs. We then use the shadow values and nominal input prices to estimate crop-specific production functions using generalized maximum entropy (GME) to capture individual heterogeneity of the production environment while replicating observed inputs and outputs to production. The two-stage GME approach can be implemented with small data sets. We demonstrate this methodology in an empirical application to a small cross-section data set for Northern Rio Bravo, Mexico and estimate production functions for small family farms and moderate commercial farms. The estimates show considerable distributional differences resulting from policies that change water subsidies in the region or shift price supports to direct payments.
