Article

An Information Approach to the Dynamics in Farm Income: Implications for Farmland Markets

by Matthew J. Salois 1 and Charles B. Moss 2,*
1 Department of Food Economics and Marketing, University of Reading, PO Box 237, Reading, Berkshire, RG6 6AR, UK
2 Food and Resource Economics Department, University of Florida, 1155 McCarty Hall, Gainesville, FL 32611, USA
* Author to whom correspondence should be addressed.
Entropy 2011, 13(1), 38-52; https://doi.org/10.3390/e13010038
Submission received: 9 November 2010 / Revised: 17 December 2010 / Accepted: 22 December 2010 / Published: 24 December 2010
(This article belongs to the Special Issue Advances in Statistical Mechanics)

Abstract:
The valuation of farmland is a perennial issue for agricultural policy, given its importance in the farm investment portfolio. Despite the significance of farmland values to farmer wealth, prediction remains a difficult task. This study develops a dynamic information measure to examine the informational content of farmland values and farm income in explaining the distribution of farmland values over time.
PACS Codes:
89.65.Gh; 89.70.Cf

1. Introduction

The topic of real estate bubbles has gained prominence amidst the recent housing market crash and financial crisis. The related issue of bubbles in the rural land market is equally important given that farmland is the most important asset in the farm business and in the farm household investment portfolio. Volatility in farmland values generates potential economic hardship, especially for communities dependent upon agriculture for economic security [1]. Yet the valuation of farmland is not well understood and remains a problematic exercise [2].
The dynamics of farmland pricing are only partially explained by market fundamentals in the long-run, with the relationship breaking down in the short-run. As indicated in Schmitz [3], farmland values appear to be in long-run equilibrium, but there is significant correlation in the short-run errors. In other words, farmland markets appear to be efficient in the long-run, but are not weakly efficient in the short-run. Copeland [4] defines a weakly efficient market (or weak-form efficiency) as the case where investors cannot earn excess returns using trading rules based on historical price and return information. Ingersoll [5] gives a similar definition based on random walks. Predicting farmland markets is thus important not only to the rural community but also for formulating policy responses during economic turmoil.
Decomposing the information content of asset values is key to understanding the dynamics of farmland prices. Measuring the informational content of asset values originates with the work of Theil and Leenders [6] and Fama [7], who calculate the informational content of stock market prices using an information measure based on the entropy measure of Shannon [8]. Since then, use of entropy to forecast financial market volumes has become a valuable exercise [9,10,11,12,13]. We depart from the standard analysis in this paper to examine the informational content of changes in relative asset values and allow for a regional decomposition of the information measure.
In particular, we are interested in the dynamic information content in farm returns and we extend the information measure in Moss, Mishra, and Erickson [14] to incorporate persistence into the entropy measure. Specifically, we let the signal in the previous year's information measure decay as the number of lags increases. This allows us to obtain a measure of the loss of information over time. This loss of information can also be interpreted as the additional entropy between time periods. Thus the larger this measure is, the greater the dispersion in information between the two sets of information measures.
In the boom-bust cycle debate, one issue is the amount of new information contained in each year's income. Similar to the variance bounds formulations [15,16,17], if changes in farmland values are determined solely by changes in income, then the information in changes in farmland values cannot exceed the amount of information contained in changes in income. The idea parallels the variance bounds argument: because asset values are derived from income, the variance in income places a bound on the variance in asset values.

2. Information Theory and Economics

Information theory, originating with Shannon [8], brought a technical and precise definition of information to the field of statistics. The technical notion of information states that outcomes conflicting with prior expectations should be given more weight than outcomes conforming to prior expectations. Shannon popularized the notion of entropy as the expected information from a distribution, and developed a quantified measure of information. The optimal measure of the amount of information, as developed by Shannon, is the entropy of an outcome or an event (or a signal in Shannon terminology) and is expressed as
J = -\sum_{i=1}^{N} p_i \ln(p_i)    (1)
where J is the measure of entropy and p_i is the probability that a given event or signal will occur. As p_i → 1, ln(p_i) → 0, meaning a signal that is almost certain to occur contains no information. The weighted average of the information in each signal that could be received is the total amount of information in the signal. Entropy was proposed by Shannon as a way of measuring the information contained in a message that causes a change in prior expectations or probabilities. More generally, entropy measures the uncertainty or volatility of a random variable or distribution. Shannon named the measure entropy because of its similarity with the concept of thermodynamic entropy. Davis [18] is credited with introducing information theory to the econometrics literature, while Theil [19] popularized the use of information theory (see [10] for a discussion). Golan [20] offers an excellent introduction to entropy measures and their uses in econometrics.
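To make Equation 1 concrete, the short Python sketch below computes the entropy of a discrete distribution; the probabilities are illustrative only and are not taken from the paper.

```python
import numpy as np

def shannon_entropy(p):
    """Entropy J = -sum_i p_i ln(p_i) of a discrete distribution, in nits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # terms with p_i = 0 contribute nothing
    return -np.sum(p * np.log(p))

# A nearly certain signal carries almost no information ...
print(shannon_entropy([0.99, 0.01]))   # ~0.056 nits
# ... while a uniform distribution maximizes entropy.
print(shannon_entropy([0.5, 0.5]))     # ln(2) ~ 0.693 nits
```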
Kullback and Leibler [21] generalize Shannon entropy and develop relative entropy, or cross-entropy, which measures how two distributions differ from each other. Specifically, cross-entropy measures the discrepancy or inequality between two distributions, and is thus often referred to as a measure of information inequality. The Kullback-Leibler function is also interpreted as a measure of the difference in information content between distributions. Many generalizations of Shannon entropy exist, but the Kullback-Leibler function provides a meaningful information quantity that serves as the basis of the empirical application in this paper.
Kullback-Leibler cross-entropy is basically a measure of unpredictability or uncertainty, measuring the divergence between two densities. Given N mutually exclusive events E = {E1,…,EN} each event has an associated probability of occurrence. The prior probabilities, xi, are the probabilities of an event occurring before a message is received and the posterior probabilities, yi, are the updated, or conditional, probabilities given the information content of the message. Formally stated then, cross-entropy measures the unpredictability of an event Ei given the event's prior and posterior probabilities of occurrence.
The value of the information contained in the message is proportional to the inequality between the prior and posterior distributions, since a greater discrepancy implies a more unexpected event. For example, if event E1 has a prior probability of 0.95 and a message is received resulting in an updated posterior probability of 0.05, then the message is informative, since the probability of the event occurring went from high to low. However, if the message results in an updated posterior probability of 0.94, then the message is uninformative (or at least contains no new information), since the probabilities remain mostly unchanged. This is similar to answering the question "what is the information gain between x and y" or "what is the distance or divergence between x and y".
Cross-entropy, I(y:x), is written as the logarithmic measure of the discrepancy or inequality between the prior and posterior probability distributions
I(y:x) = \sum_{i=1}^{N} y_i \ln\left(\frac{y_i}{x_i}\right) \geq 0    (2)
The cross-entropy, I(y:x), is a measure of the gain or loss in information as a result of the change from the prior probabilities to the posterior probabilities. When the logarithm in Equation 2 has base 2, information is measured in binary digits, or bits. Often the natural log is used, in which case information is measured in nits, where 1 nit is equal to 1.443 bits. Because the logarithm is a concave function, I(y:x) is always non-negative, meaning the value of the information in the message is never negative. The cross-entropy measure is also monotonic: the greater the information in the signal, the larger the value of the inequality. The more valuable the information in the message, the greater the discrepancy, or information inequality, between the prior and the posterior. If I(y:x) = 0, then no discrepancy exists between the prior and the posterior and the message contains no information. If I(y:x) → ∞, then the discrepancy is so large that the information is infinitely valuable, or the receiver of the message is "infinitely" surprised by the information contained within that message [18].
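A minimal Python sketch of Equation 2, using the illustrative prior and posterior probabilities from the example above (a prior of 0.95 revised to 0.05 or to 0.94); the numbers serve only to show that a large revision of beliefs produces a large cross-entropy.

```python
import numpy as np

def cross_entropy(y, x):
    """Kullback-Leibler cross-entropy I(y:x) = sum_i y_i ln(y_i / x_i), in nits."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    mask = y > 0                       # terms with y_i = 0 contribute nothing
    return np.sum(y[mask] * np.log(y[mask] / x[mask]))

prior         = [0.95, 0.05]           # before the message: event E1 very likely
informative   = [0.05, 0.95]           # message reverses the assessment
uninformative = [0.94, 0.06]           # message barely changes the assessment

print(cross_entropy(informative, prior))    # ~2.65 nits: large revision, much information
print(cross_entropy(uninformative, prior))  # ~0.001 nits: almost no information
```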
The cross-entropy measure has useful aggregation properties that allow decomposition of the total entropy into a between-group information measure and a within-group information measure. Suppose that the N mutually exclusive events E = {E_1, ..., E_N} can be aggregated into G ≤ N sets of events, S = {S_1, ..., S_G}, so that each E_i belongs to exactly one S_g, where g = 1, ..., G. The prior and posterior probabilities can be aggregated so that
X_g = \sum_{i \in S_g} x_i    (3)
Y_g = \sum_{i \in S_g} y_i    (4)
which are the sums of the prior and posterior probabilities, respectively, of the events in set S_g.
The cross-entropy measure I(y:x) in Equation 2 can be applied to each group S_g. The within-group cross-entropy, I_g, is
I_g(y:x) = \sum_{i \in S_g} \frac{y_i}{Y_g} \ln\left(\frac{y_i / Y_g}{x_i / X_g}\right)    (5)
The between-group cross-entropy, I_0, is
I_0(y:x) = \sum_{g=1}^{G} Y_g \ln\left(\frac{Y_g}{X_g}\right)    (6)
The total cross-entropy is equal to the sum of the average within-group cross-entropy, \sum_g Y_g I_g, and the between-group cross-entropy, I_0(y:x),
I(y:x) = I_0(y:x) + \sum_{g=1}^{G} Y_g I_g(y:x)    (7)
The between-group entropy measures the information inequality across groups, whilst the within-group entropy measures the information inequality across events of set S_g. The average within-group inequality, given by \bar{I} = \sum_g Y_g I_g, is a weighted average of the individual within-group inequalities.
Note that no implications about the content of the message are provided by the cross-entropy measure. The interpretation of cross-entropy as a measure of information depends on the prior and posterior probability distributions involved, as well as the context of the problem under consideration [23]. The terms x_i, y_i, X_g, and Y_g can be given an interpretation other than probabilities as long as they satisfy the properties of probabilities: non-negativity and summing to unity. For example, consider the states of the United States and define x_i as state i's population divided by total U.S. population and y_i as per capita income of state i divided by total U.S. per capita income. The terms x_i and y_i now have the interpretation of shares, which satisfy the properties of probabilities. Moreover, the cross-entropy measure in Equation 2 then has the interpretation of a state income-inequality measure. The income-inequality measure takes on positive values when per capita incomes among the states differ and reduces to zero in the instance of no income inequality. The aggregate decomposition in Equations 3 through 7 can be used to define regions of the U.S. and compare between-region and within-region income inequality.
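The decomposition in Equations 3 through 7 can be checked numerically. The sketch below (Python) uses hypothetical population and income shares for four "states" grouped into two "regions"; all numbers are invented for illustration and are not data from the paper.

```python
import numpy as np

def cross_entropy(y, x):
    """Equation 2: sum_i y_i ln(y_i / x_i)."""
    return np.sum(y * np.log(y / x))

# Hypothetical shares for four states: x = population shares, y = income shares.
x = np.array([0.30, 0.20, 0.25, 0.25])
y = np.array([0.35, 0.25, 0.22, 0.18])
groups = [[0, 1], [2, 3]]            # two regions, each holding two states

# Aggregated shares (Equations 3 and 4), between-group term (Equation 6),
# and the average within-group term from Equations 5 and 7.
X = np.array([x[g].sum() for g in groups])
Y = np.array([y[g].sum() for g in groups])
I0 = np.sum(Y * np.log(Y / X))
within = sum(Y[k] * cross_entropy(y[g] / Y[k], x[g] / X[k])
             for k, g in enumerate(groups))

print(cross_entropy(y, x))           # total inequality, ~0.0225
print(I0 + within)                   # identical by the decomposition in Equation 7
```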

3. Entropy Model and Data

The starting point is a typical farmland pricing formula based on the net present value framework for land price determination with rational expectations. Specifically, the model explains the value of land based on changes in asset valuation over time using the differential model of farmland values proposed by Schmitz [3]:
\Delta V_t = \frac{-E[CF_t \mid \Omega_{t-1}] + r_t V_t}{(1 + r_t)} + \sum_{i=1}^{\infty} \frac{E[CF_{t+i} \mid \Omega_t] - E[CF_{t+i} \mid \Omega_{t-1}]}{\prod_{j=0}^{i} (1 + r_{t+j})}    (8)
where V_t is the farmland value per acre at time t, ΔV_t = V_t − V_{t−1} is the difference in farmland values, E[CF_t | Ω_{t−1}] is the expected value of cash flows to farmland (typically Ricardian rent) given the information available at time t−1, where the information set is denoted Ω_{t−1}, and r_t is the effective discount rate (or opportunity cost of capital) at time t. Schmitz assumes that
\gamma_t = \sum_{i=1}^{\infty} \frac{E[CF_{t+i} \mid \Omega_t] - E[CF_{t+i} \mid \Omega_{t-1}]}{\prod_{j=0}^{i} (1 + r_{t+j})}    (9)
is white noise, meaning that no information remains in the residual term. However, Schmitz rejects the hypothesis that the residuals are white noise using a Ljung-Box test. Thus, while farmland appears to be appropriately priced in the long-run, the series contains significant information, which could support the notion of rational bubbles.
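Schmitz's rejection of white-noise residuals rests on a Ljung-Box test. The sketch below shows how such a test might be run in Python with statsmodels; the residual series here is simulated (with an AR(1) structure) only to keep the snippet self-contained, and stands in for an estimated γ_t.

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(0)
# Placeholder residual series standing in for the estimated gamma_t;
# an AR(1) structure is injected so the null of white noise should be rejected.
e = rng.normal(size=200)
gamma = np.empty_like(e)
gamma[0] = e[0]
for t in range(1, len(e)):
    gamma[t] = 0.6 * gamma[t - 1] + e[t]

# Ljung-Box Q statistics at several lags; small p-values reject white noise.
print(acorr_ljungbox(gamma, lags=[4, 8, 12], return_df=True))
```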
To measure the persistence in farm returns, this paper proposes an extension of the information measure used by Moss, Mishra, and Erickson [14]. Specifically, we let p_it be the share of farm revenues this year and p_{i,t−1} be the share of farm revenues last year. Therefore, a measure of the new information I_{1t} in γ_t from Equations 8 and 9 is
I_{1t} = \sum_{i=1}^{n} p_{it} \ln\left(\frac{p_{it}}{p_{i,t-1}}\right)    (10)
This information inequality measures the relative persistence in the spatial value of land prices and represents a dynamic information inequality. The lagged share can be thought of as a dynamic probability.
Next, we consider the decay of the information in the signal by computing the information in the second lagged shares p_{i,t−2}, denoted I_{2t}, defined as
I_{2t} = \sum_{i=1}^{n} p_{it} \ln\left(\frac{p_{it}}{p_{i,t-2}}\right)    (11)
Taking the difference between Equations 11 and 10 yields a dynamic information inequality that measures the loss of information (additional entropy between data points):
\Delta_{21} I_t = \sum_{i=1}^{n} p_{it} \left[ \ln(p_{it}) - \ln(p_{i,t-2}) - \ln(p_{it}) + \ln(p_{i,t-1}) \right] = \sum_{i=1}^{n} p_{it} \ln\left(\frac{p_{i,t-1}}{p_{i,t-2}}\right)    (12)
The larger the number, the greater the dispersion in information between the two information sets. If the value is positive, there is an information loss: the information measure at the first lag is smaller than at the second lag, so the information inequality increases with the length of the lag. Equations 10, 11, and 12 can be similarly derived for the total value of farmland as well as for a cross-inequality between total land values and net value added (where net value added is the prior probability and total land value is the posterior).
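A sketch of how Equations 10 through 12 might be computed from a panel of shares (Python); the small shares matrix is hypothetical and stands in for the state-level revenue shares p_it.

```python
import numpy as np

def info_measure(p_t, p_lag):
    """I_kt = sum_i p_it ln(p_it / p_i,t-k), as in Equations 10 and 11."""
    return np.sum(p_t * np.log(p_t / p_lag))

# Hypothetical shares for three states in years t-2, t-1, and t (each column sums to 1).
p = np.array([[0.50, 0.48, 0.45],
              [0.30, 0.31, 0.33],
              [0.20, 0.21, 0.22]])
p_t, p_t1, p_t2 = p[:, 2], p[:, 1], p[:, 0]

I1 = info_measure(p_t, p_t1)                 # new information relative to the first lag
I2 = info_measure(p_t, p_t2)                 # new information relative to the second lag
delta_21 = I2 - I1                           # Equation 12: loss of information between lags
print(I1, I2, delta_21)
print(np.sum(p_t * np.log(p_t1 / p_t2)))     # equals delta_21 by the algebra in Equation 12
```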
Data are published by the National Agricultural Statistics Service (NASS) of the U.S. Department of Agriculture for 1950 through 2008. Farm real estate values are obtained from the Agricultural Land Values and Cash Rents publication. Land in farms is obtained from the Farms, Land in Farms, and Livestock publication. Farm real estate values are defined as the per acre dollar value of all land and buildings used for agricultural production. Land in farms is defined as the total acres of farmland, in thousands of acres, for each state. The total value of farm real estate is computed by multiplying the per acre dollar real estate value by the total number of acres of farmland for each state. Net value added is used in place of the more traditional net farm income for describing farm revenues. Net value added (NVA) includes the net returns to all equity and non-equity holders and thus represents the contribution of agriculture to the overall economic activity of the United States.
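The data construction described above might be coded as follows (Python with pandas); the column names and values are hypothetical placeholders for the NASS series, which are not reproduced here.

```python
import pandas as pd

# Hypothetical state-level records for one year; column names are illustrative only.
df = pd.DataFrame({
    "state":          ["A", "B", "C"],
    "value_per_acre": [3000.0, 2200.0, 1500.0],   # farm real estate value, $/acre
    "acres_1000":     [10.0, 25.0, 40.0],         # land in farms, thousand acres
    "nva":            [800.0, 1200.0, 900.0],     # net value added
})

# Total farm real estate value = per-acre value x acres.
df["total_value"] = df["value_per_acre"] * df["acres_1000"]

# Shares used as the 'probabilities' in the information measures.
df["value_share"] = df["total_value"] / df["total_value"].sum()
df["nva_share"] = df["nva"] / df["nva"].sum()
print(df[["state", "value_share", "nva_share"]])
```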

4. Results

The results for the information change across time periods are presented in Table 1 for the ten Economic Research Regions. In this study we use the traditional regions: the Northeast includes Connecticut, Delaware, Maine, Maryland, Massachusetts, New Jersey, New York, Pennsylvania, Rhode Island, and Vermont; the Lake States are Michigan, Minnesota, and Wisconsin; the Corn Belt includes Illinois, Indiana, Iowa, Missouri, and Ohio; the Northern Plains are Kansas, Nebraska, North Dakota, and South Dakota; the Appalachian region includes Kentucky, North Carolina, Tennessee, Virginia, and West Virginia; the Southeast states are Alabama, Florida, Georgia, and South Carolina; the Delta States are Arkansas, Louisiana, and Mississippi; the Southern Plains states are Oklahoma and Texas; the Mountain region includes Arizona, Colorado, Idaho, Montana, New Mexico, Nevada, Utah, and Wyoming; and the Pacific states are California, Oregon, and Washington. The results in this table indicate whether income (net value added) contains dynamic information. At the mean and median there is little change in the information set for value added. In other words, the dynamic probability does not change the prediction very much. Moreover, many of the information measures are negative, indicating that the first lag may contain more noise than the second lag. Thus, even if the mean and median are positive, a negative first quartile (Q1) suggests that the most recent information may simply add noise. This oscillation is somewhat consistent with the dynamic adjustment found by Burt [24].
Table 1. Dynamic Information Inequality in Value Added.

Statistic      Δ21I    Δ31I    Δ41I    Δ51I       Δ21I    Δ31I    Δ41I    Δ51I
               Northeast                           Lake States
Min           −3.92   −4.89   −3.67   −4.53      −1.98   −5.47   −5.79   −5.62
Q1            −0.31   −0.15   −0.04    0.05      −0.30   −0.40   −0.28   −0.19
Median         0.04    0.14    0.47    0.50       0.02   −0.01    0.00    0.17
Q3             0.69    0.76    1.00    0.95       0.36    0.52    0.59    0.57
Max            2.97    2.83    4.38    5.26       5.58    5.51    2.31    4.49
Mean           0.18    0.24    0.54    0.63       0.12    0.16    0.03    0.20
Std. Dev.      1.08    1.15    1.16    1.34       1.10    1.45    1.13    1.33
               Corn Belt                           Northern Plains
Min           −4.57   −3.06   −2.90   −2.66     −11.23   −3.85   −5.98   −5.33
Q1            −0.96   −0.32   −0.42   −0.22      −0.65   −0.21   −0.23   −0.17
Median        −0.10    0.07    0.19    0.13      −0.16    0.11    0.32    0.29
Q3             0.33    0.50    0.60    0.67       0.37    0.65    0.97    1.44
Max            3.06    3.12    3.18    3.09       5.71    7.18   12.10    7.51
Mean          −0.09    0.15    0.15    0.22      −0.15    0.43    0.62    0.58
Std. Dev.      1.21    1.05    1.02    1.06       2.48    1.80    3.01    1.92
               Appalachia                          Southeast
Min           −1.79   −1.15   −1.36   −0.96      −4.41   −4.78   −4.47   −4.75
Q1            −0.29   −0.04   −0.06   −0.01      −0.44   −0.29    0.22   −0.09
Median        −0.08    0.19    0.36    0.40      −0.13    0.12    0.24    0.35
Q3             0.20    0.67    1.06    1.19       0.10    0.62    1.08    0.78
Max            1.30    2.07    3.33    4.17       4.82    2.86    3.95    4.53
Mean          −0.06    0.27    0.53    0.70      −0.11    0.10    0.45    0.45
Std. Dev.      0.58    0.63    0.97    1.07       1.17    1.09    1.37    1.29
               Delta States                        Southern Plains
Min           −3.05   −2.63   −1.88   −1.73      −1.42   −1.59   −1.33   −1.08
Q1            −0.43   −0.31   −0.09   −0.32      −0.22   −0.19   −0.15   −0.12
Median        −0.05    0.02    0.04    0.09      −0.01    0.00    0.03    0.05
Q3             0.08    0.47    0.19    0.46       0.12    0.27    0.30    0.42
Max            2.74    2.91    3.01    3.66       1.36    2.12    1.87    1.82
Mean          −0.16    0.04    0.09    0.16      −0.02    0.09    0.09    0.16
Std. Dev.      0.83    0.98    0.81    0.92       0.50    0.66    0.61    0.60
               Mountain                            Pacific States
Min           −9.18   −8.83   −9.46   −9.67      −0.59   −0.40   −0.21   −0.36
Q1            −0.95    0.05   −0.12    0.04      −0.14   −0.10   −0.07   −0.06
Median        −0.14    0.44    0.41    0.52      −0.03    0.01    0.06    0.04
Q3             0.28    1.42    1.48    2.34       0.07    0.18    0.25    0.34
Max            8.36    5.93    7.77    9.45       0.37    0.69    0.82    1.22
Mean          −0.41    0.81    1.01    1.19      −0.04    0.06    0.12    0.16
Std. Dev.      2.10    2.13    2.69    2.90       0.21    0.23    0.23    0.32
               Regional Inequality                 Overall Inequality
Min           −3.36   −2.39   −0.91   −2.08      −4.54   −2.76   −1.46   −2.17
Q1            −0.41   −0.12   −0.08    0.11      −0.88    0.04    0.05    0.24
Median        −0.17    0.22    0.30    0.48      −0.19    0.40    0.59    0.89
Q3             0.04    0.49    0.63    1.06       0.20    0.94    0.99    1.58
Max            2.60    2.75    1.80    3.15       3.34    3.11    3.02    4.69
Mean          −0.18    0.28    0.30    0.58      −0.26    0.49    0.60    0.96
Std. Dev.      0.87    0.89    0.63    0.93       1.18    1.10    0.92    1.14
A typical result for the information measure reported in Table 1 is that the change in information is initially negative, becoming positive in the second difference. Typically, the change in information becomes increasingly positive for the remaining two differences. For example, the mean and median Δ21I_t are both negative for the Corn Belt (−0.09 and −0.10, respectively), while the mean and median Δ31I_t are positive (0.15 and 0.07, respectively). This may be the result of optimizing behavior at the farm level. Specifically, higher profits resulting from exogenous factors such as weather or demand shocks may result in increased plantings that reduce relative profitability in the intermediate run. Alternatively, the loss of information may simply be the result of uncertainty caused by short-run noise. In almost all cases, the first quartile remains negative, indicating that at least 25 percent of the time the new information does not reduce the information measure. Further, the first quartile is only negative for Δ21I_t in the Corn Belt. Thus, the newer information on net value added is informative at least 75 percent of the time.
The value of the change in information contains nonparametric information about the decomposition of changes in income into permanent and transitory components. Specifically, if the returns follow a random walk, the change in information would be large and positive (especially if the innovations were uncorrelated within a particular region). That is, new information would result in differences between the prior and posterior shares of income. However, as the returns become less highly autocorrelated, the information measure decays because the returns would not depart significantly from their original shares. That is, the distribution of returns across states would not diverge because of permanent shocks. As the length of the change increases, the short-run (transitory) shocks average out while the long-run (persistent or permanent) information across states in a region remains. Alternatively, in the parlance of nonstationarity, the transitory effect yields a bounded variance while the permanent effect grows arithmetically over time. Following the implication of Equation 8, transitory variation cannot explain permanent changes (in farmland values).
Thus, the mean and median changes in inequality, along with the asymmetry of the change in inequality, contain information about the relative persistence of changes in income. Across all regions, only the Northeast possesses both a positive initial change in information and a positive asymmetry in the information change (i.e., the median less the first quartile is 0.35 while the third quartile less the median is 0.65). Hence, the income changes in the Northeast contain the greatest quantity of information about the distribution of future income (at least initially). In the long-run the Mountain region contains the greatest median information at 0.52. The Lake States, Corn Belt, Delta States, Southern Plains, and Pacific States all have small long-run changes in information. The differences in the change in long-run information may be due to several factors. One temptation may be to equate the significance of farm policy with the lack of information, since the Lake States, Corn Belt, Delta States, and Southern Plains are reliant on program crops (specifically, corn, cotton, soybeans, and wheat) while the Northeast is less dependent on agricultural policy (with the exception of dairy payments in New York). However, other factors such as farm size and weather risk should also be considered in future studies.
The results for the information change across time periods based on total farmland values are presented in Table 2. The results in this table examine the value of the dynamic information in land values. Looking at the change in the relative asset values yields some significant changes in the information set. For example, the median for the Northeast increases from 0.07 to 0.16 to 0.23 to 0.40, so there are noteworthy changes in the information measure. More recent values contain more information than lagged values. The consistency of this result across regions and lags indicates that more recent information always yields more information regarding the total value of farmland.
Table 2. Dynamic Information Inequality in Land Values.

Statistic      Δ21I    Δ31I    Δ41I    Δ51I       Δ21I    Δ31I    Δ41I    Δ51I
               Northeast                           Lake States
Min           −0.06   −0.05   −0.09   −0.05      −0.16   −0.05   −0.03   −0.13
Q1             0.03    0.10    0.16    0.22       0.01    0.01    0.02    0.09
Median         0.07    0.16    0.23    0.40       0.04    0.09    0.16    0.16
Q3             0.14    0.35    0.53    0.93       0.13    0.18    0.29    0.45
Max            1.51    2.33    2.22    2.12       1.30    1.60    1.40    1.05
Mean           0.13    0.28    0.42    0.57       0.12    0.18    0.25    0.29
Std. Dev.      1.08    0.37    0.45    0.49       0.24    0.30    0.34    0.31
               Corn Belt                           Northern Plains
Min           −0.05   −0.09   −0.13   −0.12      −0.08   −0.11   −0.02   −0.02
Q1             0.01    0.02    0.04    0.02       0.00    0.01    0.02    0.04
Median         0.04    0.07    0.09    0.09       0.02    0.05    0.07    0.11
Q3             0.08    0.15    0.21    0.23       0.06    0.09    0.15    0.23
Max            0.87    0.82    0.73    0.61       0.28    0.51    0.62    0.87
Mean           0.08    0.11    0.15    0.13       0.04    0.08    0.12    0.15
Std. Dev.      0.14    0.16    0.18    0.16       0.06    0.12    0.14    0.16
               Appalachia                          Southeast
Min           −0.12   −0.02   −0.04    0.00      −0.32   −0.39   −0.33   −0.27
Q1             0.01    0.03    0.09    0.10       0.02    0.05    0.09    0.19
Median         0.04    0.07    0.13    0.19       0.06    0.11    0.22    0.30
Q3             0.07    0.14    0.24    0.30       0.14    0.35    0.46    0.70
Max            0.39    0.51    0.62    0.94       1.21    1.27    1.63    2.81
Mean           0.05    0.11    0.17    0.23       0.11    0.24    0.40    0.54
Std. Dev.      0.07    0.11    0.14    0.19       0.20    0.31    0.46    0.63
               Delta States                        Southern Plains
Min           −0.05   −0.08   −0.21   −0.34      −0.04   −0.08   −0.08   −0.08
Q1             0.00    0.01    0.01    0.02       0.00    0.00    0.00    0.01
Median         0.03    0.04    0.07    0.09       0.01    0.02    0.03    0.05
Q3             0.07    0.12    0.16    0.20       0.04    0.07    0.11    0.15
Max            0.62    0.77    0.95    1.38       0.59    1.11    1.65    2.10
Mean           0.05    0.09    0.13    0.16       0.04    0.08    0.13    0.17
Std. Dev.      0.10    0.17    0.21    0.28       0.11    0.21    0.32    0.40
               Mountain                            Pacific States
Min           −1.94   −1.14   −1.01   −1.83      −0.03   −0.08   −0.06   −0.07
Q1             0.03    0.06    0.10    0.14       0.01    0.02    0.03    0.07
Median         0.06    0.15    0.19    0.39       0.04    0.08    0.16    0.22
Q3             0.09    0.37    0.58    0.76       0.11    0.32    0.48    0.53
Max            5.24    5.25    5.47    5.50       0.49    0.65    0.81    1.11
Mean           0.15    0.33    0.51    0.68       0.08    0.16    0.25    0.35
Std. Dev.      0.75    0.85    1.02    1.18       0.10    0.18    0.25    0.34
               Regional Inequality                 Overall Inequality
Min           −0.13   −0.12   −0.01    0.04      −0.30   −0.19   −0.03   −0.07
Q1             0.03    0.07    0.14    0.20       0.07    0.16    0.27    0.41
Median         0.05    0.13    0.19    0.30       0.11    0.24    0.40    0.57
Q3             0.13    0.26    0.46    0.61       0.19    0.54    0.83    1.03
Max            0.67    1.12    2.16    2.80       1.05    1.62    2.60    3.58
Mean           0.11    0.24    0.38    0.52       0.18    0.40    0.62    0.84
Std. Dev.      0.14    0.28    0.43    0.56       0.23    0.39    0.56    0.69
Given that the information measures are greater than zero for the median and mean throughout Table 2, the question is whether the information measures are increasing or decreasing over time. In all cases, Δj1I > Δ21I for j > 2 in each region. Hence, the information in the sample declines over time, or information loss occurs in farmland values. The next question is whether this information loss is occurring at an increasing or decreasing rate. To measure this concept we adopt the measure of logarithmic concavity proposed by Hansen [25]. Specifically, a positive sequence a_n is logarithmically concave if
a_n^2 \geq a_{n+1} a_{n-1}, \quad n = 1, 2, \ldots    (13)
and logarithmically convex if
a_n^2 \leq a_{n+1} a_{n-1}, \quad n = 1, 2, \ldots    (14)
To apply this concept to the information measures, we compute
t_1 = (\Delta_{31} I)^2 - \Delta_{21} I \, \Delta_{41} I, \qquad t_2 = (\Delta_{41} I)^2 - \Delta_{31} I \, \Delta_{51} I    (15)
If t_1, t_2 ≥ 0 then the information measure is concave, while if t_1, t_2 ≤ 0 the measure is convex. Applying these rules to the median and mean information in Table 2, we see that the median of the information measure is concave in the Corn Belt, Appalachia, and Southeast regions, and the mean of the information measure is concave in the Northeast, Corn Belt, Appalachia, Southeast, and Delta regions. Further, neither measure is convex in any region. Hence, the information loss is increasing, but at a decreasing rate.
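The concavity check in Equation 15 is a simple calculation once the four information changes are in hand; the sketch below (Python) applies it to the Corn Belt medians reported in Table 2.

```python
# Median information changes for the Corn Belt from Table 2: Δ21I, Δ31I, Δ41I, Δ51I.
d21, d31, d41, d51 = 0.04, 0.07, 0.09, 0.09

t1 = d31**2 - d21 * d41    # Equation 15
t2 = d41**2 - d31 * d51

print(t1, t2)              # ~0.0013 and ~0.0018, both >= 0, so the sequence is log-concave
```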
As in our discussion of the changes in the information in returns from Table 1, the change in information of farmland values carries information about the decomposition of these changes into permanent and transitory components. Falk [26] provides evidence that farmland values are nonstationary. Hence, errors tend to be permanent. This result is largely borne out in Table 2 in that both the mean and median information measures are uniformly positive for every region. In fact, the first quartile is non-negative for every region at every time period. Again, mimicking the results in Table 1, the median measure of the change in information is highest for the Northeast (increasing to 0.40 for the fourth change) and Mountain (increasing to 0.39 for the fourth change) regions and smallest for the Corn Belt (reaching a maximum of 0.09), Delta States (reaching a maximum of 0.09), and Southern Plains (reaching a maximum of 0.05). In this case, the size of the change in information measures the similarity between farmland in each region. Specifically, if dispersion between farmland shares is permanent because of nonstationarity, then when the errors are highly correlated (the Wiener increment in the random walk is correlated) the farmland values do not wander far from each other. This correlation may be an artifact of cointegration. Thus, the Corn Belt, Delta States, and Southern Plains are composed of states with similar farmland value changes over time. However, there is more dispersion in the Northeast and the Mountain regions. This result is plausible because of the number of states in each region and the dispersion of crops and practices.
Table 3 gives the difference in information on current land values based on changes in lagged returns. The results presented here examine the dynamic information in net value added in predicting land values. Thus, the results indicate how the predictive value of returns in explaining current land values changes over time. The results for this measure are fairly diverse across regions. For example, the mean information is consistently positive for the Northeast (reaching a maximum of 0.16), Southeast (reaching a maximum of 0.27), and Southern Plains (reaching a maximum of 0.09) regions, implying that more recent information on returns is informative in explaining farmland values on average. In addition, the median is consistently positive for the Northeast region. However, the first quartile for each distribution is consistently negative. Alternatively, the mean and median information measures for the Northern Plains, Appalachia, Mountain States, and Pacific States are always negative. Building on Equation 15, the information measure is concave for both the mean and median in the Northeast region.
The critical insight from Shiller [16] is that the variance of an asset price (such as a stock price) cannot exceed the variance of the return sequence (such as the dividends) on which the price of that asset is based. Hence, the variance of asset values (farmland) must be bounded by the variance of its returns series (returns to farmland). Table 3 indicates that information on the dispersion in returns does not significantly explain the dispersion of farmland values. Further, the interquartile ranges of the change in information in returns in predicting farmland values (in Table 3) are much larger than the interquartile ranges of the change in information in farmland values (in Table 2). For example, the interquartile ranges for the change in information for the Corn Belt are 0.07, 0.13, 0.17, and 0.21 for changes in farmland value, compared with 2.03, 1.52, 1.29, and 1.54 for the information in farmland returns in predicting values. Thus, the dispersion in farmland values is smaller than the dispersion in returns, consistent with the insight from Shiller.
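The interquartile ranges quoted above follow directly from the quartiles reported in Tables 2 and 3; a short check for the Corn Belt:

```python
# Corn Belt quartiles: (Q1, Q3) pairs for Δ21I, Δ31I, Δ41I, Δ51I.
values_q = [(0.01, 0.08), (0.02, 0.15), (0.04, 0.21), (0.02, 0.23)]      # Table 2 (land values)
cross_q  = [(-1.22, 0.81), (-0.87, 0.65), (-0.74, 0.55), (-0.81, 0.73)]  # Table 3 (cross-inequality)

print([round(q3 - q1, 2) for q1, q3 in values_q])  # [0.07, 0.13, 0.17, 0.21]
print([round(q3 - q1, 2) for q1, q3 in cross_q])   # [2.03, 1.52, 1.29, 1.54]
```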
Table 3. Dynamic Information Cross Inequality Between Land Values and Value Added.

Statistic      Δ21I    Δ31I    Δ41I    Δ51I       Δ21I    Δ31I    Δ41I    Δ51I
               Northeast                           Lake States
Min           −6.21   −6.77   −5.03   −4.94      −4.99   −4.97   −4.53   −4.99
Q1            −1.21   −1.00   −0.91   −1.10      −0.67   −0.70   −0.61   −0.96
Median         0.04    0.18    0.22    0.15      −0.06   −0.10   −0.12   −0.17
Q3             0.81    1.23    0.98    1.10       0.62    0.56    0.48    0.47
Max            4.96    4.07    5.48    5.53       4.55    4.58    4.01    5.28
Mean           0.01    0.04    0.12    0.16      −0.10   −0.15   −0.18   −0.21
Std. Dev.      1.92    1.94    1.90    2.30       1.63    1.50    1.55    1.70
               Corn Belt                           Northern Plains
Min           −3.48   −3.12   −2.65   −3.75      −5.84   −4.84   −9.24   −6.72
Q1            −1.22   −0.87   −0.74   −0.81      −0.98   −0.86   −0.95   −0.93
Median        −0.07    0.00    0.00   −0.12      −0.03   −0.05   −0.09   −0.10
Q3             0.81    0.65    0.55    0.73       1.04    0.50    0.92    0.77
Max            3.00    3.77    2.25    2.67       9.80    9.48    9.51    9.24
Mean          −0.03   −0.04   −0.02   −0.01      −0.06   −0.05   −0.09   −0.03
Std. Dev.      1.26    1.27    1.11    1.30       2.15    2.24    2.67    2.47
               Appalachia                          Southeast
Min           −5.93   −8.95  −12.79  −13.71      −7.96   −7.74   −7.11   −8.35
Q1            −1.50   −2.10   −2.70   −3.23      −0.64   −0.51   −0.62   −0.54
Median        −0.18   −0.43   −0.51   −0.76       0.12    0.11   −0.03   −0.16
Q3             0.85    0.99    0.74    0.76       0.55    0.72    0.73    0.99
Max            9.56    9.67   13.34   16.24       7.44    6.35    5.76    7.45
Mean          −0.33   −0.53   −0.69   −0.77       0.05    0.07    0.14    0.27
Std. Dev.      2.44    3.60    4.70    5.51       2.14    1.99    1.82    2.44
               Delta States                        Southern Plains
Min           −4.53   −4.75   −3.47   −4.57      −2.56   −2.44   −2.40   −0.98
Q1            −0.46   −0.59   −0.62   −0.51      −0.12   −0.15   −0.10   −0.11
Median        −0.03   −0.08   −0.05   −0.15       0.00    0.00    0.01    0.02
Q3             0.30    0.35    0.36    0.31       0.19    0.16    0.20    0.24
Max            4.77    5.81    3.64    5.40       2.55    2.85    2.81    2.65
Mean          −0.06   −0.08   −0.14   −0.15       0.02    0.04    0.05    0.09
Std. Dev.      1.16    1.40    1.05    1.21       0.60    0.66    0.64    0.54
               Mountain                            Pacific States
Min          −12.42  −16.38  −19.80  −20.01      −1.42   −2.15   −2.39   −2.35
Q1            −1.31   −1.67   −2.57   −2.54      −0.27   −0.27   −0.33   −0.54
Median        −0.20   −0.35   −0.19    0.13      −0.09   −0.06   −0.06   −0.06
Q3             0.57    0.82    1.45    1.09       0.18    0.15    0.15    0.25
Max           15.93   16.35   14.27   17.84       0.85    0.72    0.83    1.00
Mean          −0.22   −0.42   −0.42   −0.31      −0.08   −0.11   −0.13   −0.13
Std. Dev.      3.33    4.12    4.55    4.87       0.43    0.47    0.54    0.64
               Regional Inequality                 Overall Inequality
Min           −4.21   −3.85   −3.30   −3.44      −5.50   −4.79   −4.43   −4.24
Q1            −0.57   −1.00   −0.80   −1.06      −0.85   −1.48   −1.26   −1.42
Median        −0.17   −0.26   −0.13    0.02      −0.25   −0.26    0.01   −0.20
Q3             0.67    0.50    0.58    0.73       0.72    0.66    0.72    1.15
Max            3.69    4.34    3.51    3.66       4.39    4.13    4.95    4.65
Mean          −0.11   −0.16   −0.18   −0.13      −0.18   −0.28   −0.31   −0.22
Std. Dev.      1.35    1.51    1.32    1.55       1.66    1.84    1.75    2.02

5. Conclusions

This paper examines the change in information in net value added to farmland and in farmland values over time, and the relationship between the two. Results indicate that new information increases the entropy in the short-run, but reduces the entropy in the signal in the intermediate run. This loss of short-run information may be the result of random shocks that do not persist or of producer responses to market changes, a finding consistent with Burt [24]. However, changes in information are consistently positive (even at the first quartile) for farmland values. Hence, more recent data on farmland values are relatively more informative than recent data on net value added.
The results for the information in farmland values over time are fairly uniform. The minimum and first quartiles of the information measures are negative while the median, third quartile, maximum, and mean are generally positive. In addition, most are increasing, though by varying magnitudes. This result may contain information about the differential value of information by region. For example, the Mountain states have a very high maximum value, much larger than the other regions. The mean and median information for the Northeast and Southeast are consistently positive. However, the same values are consistently negative for the Northern Plains, Appalachia, Mountain States, and Pacific States. This result is consistent with Schmitz [3] in that short-run variations in net value added (or net returns to farmland) may contain significant noise.
The results are less consistent for the lagged information of returns in predicting farmland values. For example, while the means and medians are negative for many regions, they are positive for the Northeast, Southeast, and Southern Plains. The maximum values are also quite a bit higher than for the information contained in lagged farmland values; moreover, the range is generally wider. Returning to the possibility of excess volatility, the data support the contention that the variation in farmland values is smaller than the variation in net value added. Thus, farmland prices may be more consistent with variance bounds than common stocks under the formulation of Shiller [16].

References

  1. Power, G.J.; Turvey, C.G. US rural land value bubbles. Appl. Econom. Lett. 2010, 17, 649–656. [Google Scholar] [CrossRef]
  2. Goodwin, B.K.; Mishra, A.K.; Ortalo-Magne, F.N. What’s Wrong with Our Models of Agricultural Land Values. Am. J. Agr. Econom. 2003, 85, 744–752. [Google Scholar] [CrossRef]
  3. Schmitz, A. Boom/Bust Cycles and Ricardian Rent. Am. J. Agr. Econom. 1995, 77, 1110–1125. [Google Scholar] [CrossRef]
  4. Copeland, T.E.; Weston, J.F. Financial Theory and Corporate Policy, 3rd ed.; Addison Wesley: Reading, PA, USA, 1988; p. 332. [Google Scholar]
  5. Ingersoll, J.E., Jr. Theory of Financial Decision Making; Rowman & Littlefield Publishers: Lanham, MD, USA, 1987; pp. 229–230. [Google Scholar]
  6. Theil, H.; Leenders, C.T. Tomorrow on the Amsterdam Stock Exchange. J. Bus. 1965, 38, 227–284. [Google Scholar] [CrossRef]
  7. Fama, E.F. Tomorrow on the New York Stock Exchange. J. Bus. 1965, 38, 285–299. [Google Scholar] [CrossRef]
  8. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Techn. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  9. Molgedey, L.; Ebeling, W. Local order, entropy and predictability of financial time series. Eur. Phys. J. B 2000, 107, 733–737. [Google Scholar] [CrossRef]
  10. Maasoumi, E.; Racine, J. Entropy and predictability of stock market returns. J. Econom. 2002, 107, 291–312. [Google Scholar] [CrossRef]
  11. Robertson, J.; Tallman, E.; Whiteman, C. Forecasting using relative entropy. J. Money Credit Bank. 2005, 37, 383–401. [Google Scholar] [CrossRef]
  12. Bentes, S.; Menezes, R.; Mendes, D.A. Long memory and volatility clustering: Is the empirical evidence consistent across stock markets? Phys. A 2008, 387, 3826–3830. [Google Scholar] [CrossRef]
  13. de Souza, J.; Moyano, L.; Querios, S.D. On Statistical Properties of Traded Volume in Financial Markets. Eur. Phys. J. B 2008, 50, 165–168. [Google Scholar] [CrossRef]
  14. Moss, C.B.; Mishra, A.K.; Erickson, K. Next Year on the U.S. Farmland Market: An Informational Approach. Appl. Econom. 2007, 39, 581–585. [Google Scholar] [CrossRef]
  15. Campbell, J.Y.; Shiller, R.J. The Dividend-Price Ratio and Expectations of Future Dividends and Discount Factors. Rev. Finan. Stud. 1988, 1, 195–228. [Google Scholar] [CrossRef]
  16. Shiller, R.J. Do Stock Prices Move Too Much to be Justified by Subsequent Changes in Dividends? Am. Econom. Rev. 1981, 71, 421–436. [Google Scholar]
  17. Shiller, R.J. From Efficient Markets Theory to Behavioral Finance. J. Econom. Perspect. 2003, 17, 83–104. [Google Scholar] [CrossRef]
  18. Davis, H.T. The Theory of Econometrics; The Principia Press: Bloomington, IA, USA, 1941. [Google Scholar]
  19. Theil, H. Economics and Information Theory; North Holland: Amsterdam, The Netherlands, 1967. [Google Scholar]
  20. Golan, A. Information and entropy econometrics: A review and synthesis. Found. Trend. Econom. 2006, 2, 1–145. [Google Scholar] [CrossRef]
  21. Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Statist. 1951, 22, 79–86. [Google Scholar] [CrossRef]
  22. Soofi, E.S.; Retzer, J.J. Information indices: Unification and applications. J. Econom. 2002, 107, 17–40. [Google Scholar] [CrossRef]
  23. Soofi, E.S. Capturing the intangible concept of information. J. Am. Statist. Assoc. 1994, 89, 1243–1254. [Google Scholar] [CrossRef]
  24. Burt, O.R. Econometric Modeling of the Capitalization Formula for Farmland Prices. Am. J. Agr. Econom. 1986, 68, 10–26. [Google Scholar] [CrossRef]
  25. Hansen, B.G. On Log-Concave and Log-Convex Infinitely Divisible Sequences and Densities. Ann. Probab. 1988, 16, 1832–1839. [Google Scholar] [CrossRef]
  26. Falk, B. Formally Testing the Present Value Model of Farmland Prices. Am. J. Agr. Econom. 1991, 73, 1–10. [Google Scholar] [CrossRef]
