Article

Subset-Continuous-Updating GMM Estimators for Dynamic Panel Data Models

by Richard A. Ashley 1 and Xiaojin Sun 2
1 Department of Economics, Virginia Tech, Blacksburg, VA 24060, USA
2 Department of Economics and Finance, University of Texas at El Paso, El Paso, TX 79968, USA
* Author to whom correspondence should be addressed.
Econometrics 2016, 4(4), 47; https://doi.org/10.3390/econometrics4040047
Submission received: 25 May 2016 / Revised: 23 November 2016 / Accepted: 25 November 2016 / Published: 30 November 2016
(This article belongs to the Special Issue Recent Developments in Panel Data Methods)

Abstract

The two-step GMM estimators of Arellano and Bond (1991) and Blundell and Bond (1998) for dynamic panel data models have been widely used in empirical work; however, neither of them performs well in small samples with weak instruments. The continuous-updating GMM estimator proposed by Hansen, Heaton, and Yaron (1996) is in principle able to reduce the small-sample bias, but it involves high-dimensional optimizations when the number of regressors is large. This paper proposes a computationally feasible variation on these standard two-step GMM estimators by applying the idea of continuous-updating to the autoregressive parameter only, given the fact that the absolute value of the autoregressive parameter is less than unity as a necessary requirement for the data-generating process to be stationary. We show that our subset-continuous-updating method does not alter the asymptotic distribution of the two-step GMM estimators, and it therefore retains consistency. Our simulation results indicate that the subset-continuous-updating GMM estimators outperform their standard two-step counterparts in finite samples in terms of the estimation accuracy on the autoregressive parameter and the size of the Sargan-Hansen test.

1. Introduction

In recent decades, dynamic panel data models with unobserved individual-specific heterogeneity have been widely used to investigate the dynamics of economic activities. Several estimators have been suggested for estimating the model parameters. A standard estimation procedure is to first-difference the model, so as to eliminate the unobserved heterogeneity, and then base GMM estimation on the implied moment conditions, in which endogenous differences of the variables are instrumented by their lagged levels. This is the well-known Arellano-Bond estimator, or first-difference (DIF) GMM estimator (see Arellano and Bond [1]). The DIF GMM estimator was found to be inefficient since it does not make use of all available moment conditions (see Ahn and Schmidt [2]); it also has very poor finite-sample properties in dynamic panel data models with highly persistent series and large variations in the fixed effects relative to the idiosyncratic errors (see Blundell and Bond [3]), since the instruments in those cases become less informative.
To improve the performance of the DIF GMM estimator, Blundell and Bond [3] propose taking into consideration extra moment conditions from the level equation that rely on certain restrictions on the initial observations, as suggested by Arellano and Bover [4]. The resulting system (SYS) GMM estimator has been shown to perform much better than the DIF GMM estimator in terms of finite sample bias and mean squared error, as well as with regard to coefficient estimator standard errors since the instruments used for the level equation are still informative as the autoregressive coefficient approaches unity (see Blundell and Bond [3] and Blundell, Bond, and Windmeijer [5]). As a result, the SYS GMM estimator has been widely used for estimation of production functions, demand for addictive goods, empirical growth models, etc. However, it was pointed out later on (see Hayakawa [6] and Bun and Windmeijer [7]) that the weak instruments problem still remains in the SYS GMM estimator. Since the increase in the length of the panel leads to a quadratic increase in the number of instruments, the two-step DIF and SYS GMM estimators are both biased due to many weak moment conditions; see Newey and Windmeijer [8].
The work by Hansen, Heaton, and Yaron [9] suggests that the continuous-updating GMM estimator has smaller bias than the standard two-step GMM estimator. However, it involves high-dimensional optimizations when the number of regressors is large. Given the fact that the absolute value of the autoregressive parameter must be less than unity as a necessary requirement for the data-generating process to be stationary, we propose a computationally feasible variation on the two-step DIF and SYS GMM estimators, in which the idea of continuous-updating is applied solely to the autoregressive parameter; these two new estimators are denoted “SCUDIF” and “SCUSYS” below. Following the jackknife interpretation of the continuous-updating estimator in the work of Donald and Newey [10], we show that the subset-continuous-updating method that we propose in this paper does not alter the asymptotic distribution of the two-step GMM estimators, and it hence retains consistency. It is computationally advantageous relative to the continuous-updating estimator in that it replaces a relatively high-dimensional optimization over unbounded intervals by a one-dimensional optimization limited to the stationary domain $(-1, 1)$ of the autoregressive parameter. We conduct Monte Carlo experiments and show that the proposed subset-continuous-updating versions of the DIF and SYS GMM estimators outperform their standard two-step counterparts in small samples in terms of the estimation accuracy on the autoregressive parameter and the rejection frequency of the Sargan-Hansen test.
The layout of the paper is as follows: Section 2 describes the model specification and our proposed subset-continuous-updating method; Section 3 describes the Monte Carlo experiments and presents the results; and Section 4 concludes the paper.

2. Subset-Continuous-Updating GMM Estimator

Consider a linear panel data model with one dynamic dependent variable $y_{it}$, additional explanatory variables $X_{it} = (x_{it1}, \ldots, x_{itK})$, unobserved individual-specific fixed effects $\mu_i$, and idiosyncratic errors $\nu_{it}$:

$$ y_{it} = \theta y_{i,t-1} + X_{it}\beta + u_{it}, \quad \text{where } u_{it} = \mu_i + \nu_{it}, \tag{1} $$

for $i = 1, \ldots, N$ and $t = 2, \ldots, T$, where $N$ is large and $T$ is small. Here, $\theta$ is the autoregressive parameter, and we make the familiar assumption that it satisfies $|\theta| < 1$ to ensure the stationarity of the model; $\beta$ is a $K$-dimensional column vector of the remaining coefficients. As Blundell, Bond, and Windmeijer [5] argue, this model specification is sufficient to cover most cases that researchers would encounter in linear dynamic panel applications.
While our discussion applies to the general dynamic panel data model in Equation (1), for expositional clarity we consider the special case with a single additional explanatory variable $x_{it}$, i.e., $K = 1$:

$$ y_{it} = \theta y_{i,t-1} + \beta x_{it} + u_{it}, \quad \text{where } u_{it} = \mu_i + \nu_{it}, \tag{2} $$

for $i = 1, \ldots, N$ and $t = 2, \ldots, T$. We also follow Blundell, Bond, and Windmeijer [5] in allowing for persistence and endogeneity in the explanatory variable $x_{it}$:
$$ x_{it} = \rho x_{i,t-1} + \tau \mu_i + \lambda \nu_{it} + e_{it}, \tag{3} $$

where $\rho$ captures the persistence of $x_{it}$, and $\tau$ and $\lambda$ determine the correlation of $x_{it}$ with the individual effects $\mu_i$ and the idiosyncratic errors $\nu_{it}$, respectively.

We assume, at the outset, that $\mu_i$, $\nu_{it}$, and $e_{it}$ have the following properties:
$$ E(\mu_i) = 0, \quad E(\nu_{it}) = 0, \quad E(e_{it}) = 0 \quad \text{for } i = 1, \ldots, N \text{ and } t = 2, \ldots, T, $$
$$ E(\nu_{it}\mu_i) = 0, \quad E(e_{it}\mu_i) = 0 \quad \text{for } i = 1, \ldots, N \text{ and } t = 2, \ldots, T, $$
$$ E(\nu_{it}\nu_{is}) = 0, \quad E(e_{it}e_{is}) = 0 \quad \text{for } i = 1, \ldots, N \text{ and } t \neq s, $$
$$ E(\nu_{it}e_{is}) = 0 \quad \text{for } i = 1, \ldots, N \text{ and all } t, s. $$
Furthermore, we impose mean-stationarity restrictions on the initial conditions:

$$ x_{i1} = \frac{\tau}{1-\rho}\,\mu_i + \epsilon_{i1} \quad \text{for } i = 1, \ldots, N, \tag{8} $$
$$ y_{i1} = \frac{1}{1-\theta}\left(1 + \frac{\beta\tau}{1-\rho}\right)\mu_i + \xi_{i1} \quad \text{for } i = 1, \ldots, N, \tag{9} $$
and

$$ E(\epsilon_{i1}) = E(\mu_i \epsilon_{i1}) = E(\xi_{i1}) = E(\mu_i \xi_{i1}) = 0 \quad \text{for } i = 1, \ldots, N, $$
$$ E(\epsilon_{i1}\nu_{it}) = E(\xi_{i1}\nu_{it}) = 0 \quad \text{for } i = 1, \ldots, N \text{ and } t = 2, \ldots, T. $$
Under these conditions, we consider both the DIF GMM estimator of Arellano and Bond [1] and the SYS GMM estimator of Blundell and Bond [3], which are derived from the following moment conditions:
$$ E[g(w, \theta_0, \beta_0)] = 0, $$

where $w$ denotes the data, $(\theta_0, \beta_0)$ are the true parameter values, and

$$ g = Z_d'\,\Delta u \ \text{ for the DIF GMM estimator}, \qquad g = Z_s'\, p \ \text{ for the SYS GMM estimator}. $$
In the above equations, $Z_d'$ is the $m_d \times N(T-2)$ matrix $(Z_{d1}', Z_{d2}', \ldots, Z_{dN}')$ and $Z_s'$ is the $m_s \times 2N(T-2)$ matrix $(Z_{s1}', Z_{s2}', \ldots, Z_{sN}')$, where the number of instruments for the DIF GMM estimator is $m_d = (T-2)(T-1)$ and the instrument count for the SYS GMM estimator is $m_s = m_d + 2(T-2)$ in the case of $K = 1$; $\Delta u$ is the $N(T-2) \times 1$ vector $(\Delta u_1', \Delta u_2', \ldots, \Delta u_N')'$ and $p$ is the $2N(T-2) \times 1$ vector $(p_1', p_2', \ldots, p_N')'$ with

$$ \Delta u_i = \begin{pmatrix} \Delta u_{i3} \\ \Delta u_{i4} \\ \vdots \\ \Delta u_{iT} \end{pmatrix}, \qquad u_i = \begin{pmatrix} u_{i3} \\ u_{i4} \\ \vdots \\ u_{iT} \end{pmatrix}, \qquad p_i = \begin{pmatrix} \Delta u_i \\ u_i \end{pmatrix}. $$
The instrument matrix for the differenced equation is $Z_{di} = (Z_{di}^y, Z_{di}^x)$, where

$$ Z_{di}^y = \begin{pmatrix} y_{i1} & 0 & 0 & \cdots & 0 & \cdots & 0 \\ 0 & y_{i1} & y_{i2} & \cdots & 0 & \cdots & 0 \\ \vdots & & & \ddots & & & \vdots \\ 0 & 0 & 0 & \cdots & y_{i1} & \cdots & y_{i,T-2} \end{pmatrix}, $$

and $Z_{di}^x$ is similarly defined. The instrument matrix for the system equations, $Z_{si}$, is block diagonal, with $Z_{di}$ and $Z_{li}$ on the main diagonal and zeros otherwise, i.e.,

$$ Z_{si} = \begin{pmatrix} Z_{di} & 0 \\ 0 & Z_{li} \end{pmatrix}, $$

where $Z_{li} = (Z_{li}^y, Z_{li}^x)$ is the instrument matrix for the level equation, with

$$ Z_{li}^y = \begin{pmatrix} \Delta y_{i2} & 0 & \cdots & 0 \\ 0 & \Delta y_{i3} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \Delta y_{i,T-1} \end{pmatrix}, $$

and $Z_{li}^x$ is defined similarly.
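To make the instrument structure concrete, the following numpy sketch (our own illustration, not the authors' code; the function names are ours) builds the $Z_{di}^y$ and $Z_{li}^y$ blocks for a single unit and assembles $Z_{di}$ and $Z_{si}$ in the $K = 1$ case.

```python
import numpy as np
from scipy.linalg import block_diag

def dif_instruments(v):
    """Z_di-style block for one unit from the levels v = (v_{i1}, ..., v_{iT}):
    the row for period t (t = 3, ..., T) holds v_{i1}, ..., v_{i,t-2}."""
    T = len(v)
    Z = np.zeros((T - 2, (T - 2) * (T - 1) // 2))
    col = 0
    for t in range(3, T + 1):
        Z[t - 3, col:col + t - 2] = v[:t - 2]
        col += t - 2
    return Z

def lev_instruments(v):
    """Z_li-style block for one unit: diag(Delta v_{i2}, ..., Delta v_{i,T-1})."""
    dv = np.diff(v)            # Delta v_{i2}, ..., Delta v_{iT}
    return np.diag(dv[:-1])    # drop Delta v_{iT}

# Illustrative usage with T = 6 and random series for one unit:
rng = np.random.default_rng(0)
y_i, x_i = rng.standard_normal(6), rng.standard_normal(6)
Z_di = np.hstack([dif_instruments(y_i), dif_instruments(x_i)])   # (T-2) x m_d
Z_li = np.hstack([lev_instruments(y_i), lev_instruments(x_i)])
Z_si = block_diag(Z_di, Z_li)                                    # 2(T-2) x m_s
print(Z_di.shape, Z_si.shape)  # (4, 20) and (8, 28): m_d = 20, m_s = 28 at T = 6
```

Stacking $Z_{di}'\Delta u_i$ (and, for the system, $Z_{si}' p_i$) across units then yields the per-unit moment contributions $g_i$ used below.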
In words, the DIF GMM estimator is obtained from the moment conditions where endogenous differences of the variables are instrumented by their lagged levels and the SYS GMM estimator utilizes further moment conditions where endogenous level variables are instrumented by their lagged differences. The validity of these moment conditions is tested by the Sargan-Hansen test of overidentifying restrictions (see Sargan [11] and Hansen [12]).
Let $w_i$ ($i = 1, \ldots, N$) denote the $i$-th observation and $g_i(\theta, \beta) = g(w_i, \theta, \beta)$. The sample first and second moments of the moment function are given by:

$$ \hat{g}(\theta, \beta) = \frac{1}{N}\sum_{i=1}^{N} g_i(\theta, \beta), $$
$$ \hat{\Omega}(\theta, \beta) = \frac{1}{N}\sum_{i=1}^{N} g_i(\theta, \beta)\, g_i(\theta, \beta)'. $$
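In code, each of these sample moments is a one-liner; the short helper below (ours, for illustration) assumes the per-unit moment contributions have already been stacked into an $N \times m$ array.

```python
import numpy as np

def sample_moments(g):
    """g: N x m array whose i-th row is g_i(theta, beta)'.
    Returns g_hat (length-m mean vector) and Omega_hat (m x m
    second-moment matrix), as defined in the two displays above."""
    g = np.asarray(g)
    return g.mean(axis=0), g.T @ g / g.shape[0]
```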
Two-Step GMM Estimator: The standard two-step GMM estimator is the solution to the following minimization problem:
$$ \left(\hat{\theta}, \hat{\beta}\right) = \operatorname*{argmin}_{\theta, \beta}\; \hat{g}(\theta, \beta)'\, \hat{\Omega}\!\left(\tilde{\theta}, \tilde{\beta}\right)^{-1} \hat{g}(\theta, \beta), \tag{16} $$

where $(\tilde{\theta}, \tilde{\beta})$ is a preliminary estimator, e.g., the first-step estimator.¹ The first-order conditions are:

$$ \hat{g}_{\theta}'\, \hat{\Omega}\!\left(\tilde{\theta}, \tilde{\beta}\right)^{-1} \hat{g} = 0, $$
$$ \hat{g}_{\beta}'\, \hat{\Omega}\!\left(\tilde{\theta}, \tilde{\beta}\right)^{-1} \hat{g} = 0, $$

where $\hat{g} = \hat{g}(\hat{\theta}, \hat{\beta})$, $\hat{g}_{\theta} = \partial\hat{g}(\hat{\theta}, \hat{\beta})/\partial\theta$, and $\hat{g}_{\beta} = \partial\hat{g}(\hat{\theta}, \hat{\beta})/\partial\beta$.
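Because the DIF and SYS moment conditions are linear in the parameters, the minimization in Equation (16) has a closed-form solution at each step. The sketch below is our own generic implementation (function and argument names are ours): it writes the sample moment as $\hat{g}(\phi) = d - D\phi$ with $\phi = (\theta, \beta')'$, performs the two GMM steps, and returns the Sargan-Hansen J statistic as a by-product.

```python
import numpy as np

def linear_gmm_step(d, D, W):
    """Minimize (d - D @ phi)' W (d - D @ phi) over phi.
    d: (m,) constant part of the sample moment; D: (m, K+1) Jacobian;
    W: (m, m) weighting matrix."""
    return np.linalg.solve(D.T @ W @ D, D.T @ W @ d)

def two_step_gmm(d, D, g_of, W1):
    """Standard two-step GMM estimator.  W1 is the first-step weight
    (e.g. the W_d or W_s of footnote 1); g_of(phi) must return the
    N x m array of per-unit moments g_i(phi) so that Omega_hat can be
    evaluated at the first-step estimate."""
    phi1 = linear_gmm_step(d, D, W1)
    g1 = g_of(phi1)
    N = g1.shape[0]
    Omega_inv = np.linalg.inv(g1.T @ g1 / N)
    phi2 = linear_gmm_step(d, D, Omega_inv)
    g_hat = d - D @ phi2
    J = N * g_hat @ Omega_inv @ g_hat   # Sargan-Hansen statistic, df = m - (K+1)
    return phi2, J
```

For the DIF moments, for instance, $d$ would be the sample average of $Z_{di}'\Delta y_i$ and the columns of $D$ the sample averages of $Z_{di}'\Delta y_{i,-1}$ and $Z_{di}'\Delta x_i$.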
Subset-Continuous-Updating GMM Estimator: Motivated by the fact that the autoregressive parameter θ in a stationary dynamic panel data model lies in the bounded interval $(-1, 1)$, we propose to apply the idea of continuous-updating of Hansen, Heaton, and Yaron [9] solely to this bounded parameter θ. The subset-continuous-updating GMM estimator is obtained as the solution to the following minimization problem:
$$ \hat{\beta}(\theta) = \operatorname*{argmin}_{\beta}\; \hat{g}(\theta, \beta)'\, \hat{\Omega}\!\left(\theta, \tilde{\beta}\right)^{-1} \hat{g}(\theta, \beta) \quad \text{conditional on } \theta, \tag{19} $$

and

$$ \hat{\theta} = \operatorname*{argmin}_{\theta}\; \hat{g}\!\left(\theta, \hat{\beta}(\theta)\right)'\, \hat{\Omega}\!\left(\theta, \hat{\beta}(\theta)\right)^{-1} \hat{g}\!\left(\theta, \hat{\beta}(\theta)\right), \tag{20} $$
where $\tilde{\beta}$ is a preliminary estimator.² The first-order condition with respect to θ is:

$$ \left(\hat{g}_{\theta} + \hat{g}_{\beta}\frac{\partial\hat{\beta}}{\partial\theta}\right)'\hat{\Omega}\!\left(\hat{\theta}, \hat{\beta}(\hat{\theta})\right)^{-1}\hat{g} \;-\; \hat{g}'\,\hat{\Omega}\!\left(\hat{\theta}, \hat{\beta}(\hat{\theta})\right)^{-1}\hat{\Lambda}\,\hat{\Omega}\!\left(\hat{\theta}, \hat{\beta}(\hat{\theta})\right)^{-1}\hat{g} = 0, \tag{21} $$
where

$$ \hat{\Lambda} = \frac{1}{N}\sum_{i=1}^{N} \hat{g}_i \left(\hat{g}_{\theta,i} + \hat{g}_{\beta,i}\frac{\partial\hat{\beta}}{\partial\theta}\right)', $$

and $\hat{g} = \hat{g}(\hat{\theta}, \hat{\beta})$, $\hat{g}_{\theta} = \partial\hat{g}(\hat{\theta}, \hat{\beta})/\partial\theta$, $\hat{g}_{\beta} = \partial\hat{g}(\hat{\theta}, \hat{\beta})/\partial\beta$, $\hat{g}_i = g(w_i, \hat{\theta}, \hat{\beta})$, $\hat{g}_{\theta,i} = \partial g(w_i, \hat{\theta}, \hat{\beta})/\partial\theta$, and $\hat{g}_{\beta,i} = \partial g(w_i, \hat{\theta}, \hat{\beta})/\partial\beta$.
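A minimal sketch of the resulting procedure for the $K = 1$ case follows. The code and its interface are our own assumptions, not the authors' implementation: `g_of(theta, beta)` is assumed to return the $N \times m$ array of per-unit moments, which is linear in β for fixed θ, and `beta_tilde` is the preliminary estimate used in the inner weighting matrix.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def subset_cu_gmm(g_of, beta_tilde):
    """Subset-continuous-updating estimator of Equations (19)-(20),
    sketched for a scalar beta (K = 1)."""

    def omega(theta, beta):
        g = g_of(theta, beta)
        return g.T @ g / g.shape[0]

    def beta_hat(theta):
        # Inner step, Equation (19): since g is linear in beta given theta,
        # write g_hat(theta, b) = a - D * b and solve in closed form.
        W = np.linalg.inv(omega(theta, beta_tilde))
        a = g_of(theta, 0.0).mean(axis=0)
        D = a - g_of(theta, 1.0).mean(axis=0)
        return (D @ W @ a) / (D @ W @ D)

    def profile_objective(theta):
        # Outer step, Equation (20): the weighting matrix is re-evaluated
        # at (theta, beta_hat(theta)) -- the continuous-updating feature.
        b = beta_hat(theta)
        g_hat = g_of(theta, b).mean(axis=0)
        return g_hat @ np.linalg.inv(omega(theta, b)) @ g_hat

    # One-dimensional search restricted (numerically) to the stationary
    # interval (-1, 1); footnote 2 of the paper instead starts a bounded
    # search from the two-step estimate of theta.
    res = minimize_scalar(profile_objective, bounds=(-0.999, 0.999), method="bounded")
    return res.x, beta_hat(res.x)
```

The one-dimensional bounded search is the whole computational cost of the method beyond the standard two-step estimator, since the inner step remains in closed form.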
We call the solution to the above minimization problem a subset-continuous-updating estimator to reflect the fact that we are applying the continuous-updating idea of Hansen, Heaton, and Yaron [9] to a subset of the model parameters. Given the linearity of the DIF and SYS GMM moment conditions, the minimization problem in Equation (19) yields a closed-form solution for $\hat{\beta}$ as a function of θ. The $\hat{\beta}$ estimator is of course consistent conditional on a consistent estimator $\hat{\theta}$. The $\hat{\theta}$ estimator is consistent according to the jackknife interpretation in the work of Donald and Newey [10]. More specifically, consider the regression of $\hat{g}_{\theta,i} + \hat{g}_{\beta,i}\,\partial\hat{\beta}/\partial\theta$ on $\hat{g}_i$, and let

$$ \hat{B} = \left(\frac{1}{N}\sum_{i=1}^{N}\hat{g}_i\hat{g}_i'\right)^{-1}\left(\frac{1}{N}\sum_{i=1}^{N}\hat{g}_i\left(\hat{g}_{\theta,i} + \hat{g}_{\beta,i}\frac{\partial\hat{\beta}}{\partial\theta}\right)'\right) = \hat{\Omega}\!\left(\hat{\theta}, \hat{\beta}(\hat{\theta})\right)^{-1}\hat{\Lambda} $$
denote the matrix of regression coefficients. The corresponding vector of residuals is

$$ \hat{\eta}_i = \hat{g}_{\theta,i} + \hat{g}_{\beta,i}\frac{\partial\hat{\beta}}{\partial\theta} - \hat{B}\hat{g}_i. $$
By definition, the residual $\hat{\eta}_i$ is orthogonal to the regressor $\hat{g}_i$, i.e.,

$$ \frac{1}{N}\sum_{i=1}^{N}\hat{\eta}_i\,\hat{g}_i' = 0. $$
Then, we can rewrite the first-order condition with respect to θ in Equation (21) as:

$$ \left(\hat{g}_{\theta} + \hat{g}_{\beta}\frac{\partial\hat{\beta}}{\partial\theta} - \hat{B}\hat{g}\right)'\hat{\Omega}\!\left(\hat{\theta}, \hat{\beta}(\hat{\theta})\right)^{-1}\hat{g} = 0, $$
which simplifies to essentially the same equation given by Donald and Newey [10] (at the bottom of page 240):

$$ \frac{1}{N}\sum_{i=1}^{N}\left(\frac{1}{N}\sum_{j \neq i}^{N}\hat{\eta}_j\right)'\hat{\Omega}\!\left(\hat{\theta}, \hat{\beta}(\hat{\theta})\right)^{-1}\hat{g}_i = 0. \tag{27} $$
Let $\hat{A}_i$ denote the vector of coefficients multiplying $\hat{g}_i$ in Equation (27):

$$ \hat{A}_i = \left(\frac{1}{N}\sum_{j \neq i}^{N}\hat{\eta}_j\right)'\hat{\Omega}\!\left(\hat{\theta}, \hat{\beta}(\hat{\theta})\right)^{-1}. $$
This term converges to the same limit for all $i = 1, \ldots, N$ as $N \to \infty$, since $\hat{g}$ converges to zero in probability, i.e., $\hat{g} = o_p(1)$. As Donald and Newey [10] point out, Equation (27) is simply a modification of the usual interpretation of a GMM estimator that allows “a linear combination coefficient for each observation, which excludes its own observation from the Jacobian of the moments.” Given that $\hat{A}_i$ has the same limit, this modification does not change the asymptotic distribution of the estimator, which implies that our subset-continuous-updating estimator retains consistency.
While we use a scalar parameter β for expositional clarity here, our subset-continuous-updating method applies to a K-dimensional vector of parameters β without any extra computational burden, because the continuous-updating is imposed on the scalar parameter θ only; see Equation (20). Applied to the dynamic panel data model, the resulting SCUDIF and SCUSYS GMM estimators have exactly the same first-order asymptotic distributions as their two-step counterparts. They are computationally advantageous relative to the continuous-updating estimator in that they replace a $(K+1)$-dimensional numerical optimization over the unbounded domain of $(\theta, \beta)$ with a one-dimensional optimization over the bounded domain $\theta \in (-1, 1)$. It is worth noting that our subset-continuous-updating estimator can easily be extended to the AR(2) case, where necessary. This extension leads to a two-dimensional, instead of one-dimensional, optimization, which is more computationally burdensome, but the optimization is at least still limited to the well-defined, bounded region of the parameter space corresponding to stationary dynamics. Per Box and Jenkins [13], this AR(2) stationary region is a triangle.
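For that AR(2) extension, the two-dimensional search would be confined to the Box-Jenkins stationarity triangle; a simple membership check (our own helper, for illustration) is:

```python
def ar2_is_stationary(theta1, theta2):
    """True if y_t = theta1*y_{t-1} + theta2*y_{t-2} + error is stationary:
    (theta1, theta2) must lie in the triangle
    theta1 + theta2 < 1,  theta2 - theta1 < 1,  |theta2| < 1."""
    return (theta1 + theta2 < 1) and (theta2 - theta1 < 1) and (abs(theta2) < 1)
```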

3. Monte Carlo Experiments

In this section, we conduct Monte Carlo experiments to compare the performance of our subset-continuous-updating estimators with the standard two-step estimators in finite samples. We consider the model and assumptions specified in Section 2. Without loss of generality, we consider one additional explanatory variable beyond the lagged dependent variable, i.e., we restrict K = 1 . This restriction is made for expositional clarity here, as our proposed estimation method applies to models with multiple explanatory variables ( K > 1 ) without any extra computational burden because the continuous-updating is imposed solely on the bounded scalar parameter θ. The model specification is described by Equations (2) and (3).
For each Monte Carlo replication, $\mu_i$, $\nu_{it}$, and $e_{it}$ are drawn from normal distributions with zero means and standard deviations $\sigma_\mu$, $\sigma_\nu$, and $\sigma_e$, respectively. The initial observations are drawn from the mean-stationary distribution as in Equations (8) and (9).³ We then generate the data $x_{it}$ and $y_{it}$ and discard the first 30 observations before selecting our sample (a simulation sketch follows the parameter list below). We keep the following parameters fixed across the various Monte Carlo simulations:
$$ \beta = 1, \quad \tau = 0.25, \quad \sigma_\nu^2 = 1, \quad \sigma_e^2 = 0.16. $$
These parameter values are taken from Blundell, Bond, and Windmeijer [5]. The parameters that are varied in the Monte Carlo experiments include:
  • θ = {0.5, 0.8} for moderate vs. high persistence in $y_{it}$,
  • ρ = {0.5, 0.8} for moderate vs. high persistence in $x_{it}$,
  • λ = {−0.1, −0.4} for a low vs. high level of idiosyncratic-error endogeneity in $x_{it}$,
  • $\sigma_\mu^2$ = {1/4, 4} for a small vs. large variance of the fixed effects relative to the idiosyncratic errors.
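The data-generating process can be summarized in the following simulation sketch (our own code, following Equations (2)-(3), the mean-stationary initial conditions of footnote 3, and the burn-in described above; function and argument names are ours):

```python
import numpy as np

def simulate_panel(N, T, theta, rho, lam, sigma_mu2,
                   beta=1.0, tau=0.25, sigma_nu2=1.0, sigma_e2=0.16,
                   burn=30, seed=0):
    """Generate (y, x) arrays of shape (N, T) from Equations (2)-(3),
    starting from the mean-stationary initial values of footnote 3 and
    discarding the first `burn` simulated periods."""
    rng = np.random.default_rng(seed)
    mu = rng.normal(0.0, np.sqrt(sigma_mu2), N)
    total = burn + T
    nu = rng.normal(0.0, np.sqrt(sigma_nu2), (N, total))
    e = rng.normal(0.0, np.sqrt(sigma_e2), (N, total))
    x = np.empty((N, total))
    y = np.empty((N, total))
    eps1 = lam * nu[:, 0] + e[:, 0]
    x[:, 0] = tau / (1.0 - rho) * mu + eps1
    y[:, 0] = (1.0 + beta * tau / (1.0 - rho)) / (1.0 - theta) * mu \
              + beta * eps1 + nu[:, 0]
    for t in range(1, total):
        x[:, t] = rho * x[:, t - 1] + tau * mu + lam * nu[:, t] + e[:, t]
        y[:, t] = theta * y[:, t - 1] + beta * x[:, t] + mu + nu[:, t]
    return y[:, burn:], x[:, burn:]

# One design cell from the experiments below: N = 500, T = 4,
# moderate persistence, low endogeneity, small variance ratio.
y, x = simulate_panel(N=500, T=4, theta=0.5, rho=0.5, lam=-0.1, sigma_mu2=0.25)
```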
Following Blundell, Bond, and Windmeijer [5], we fix the sample size N at 500 and consider T = {4, 8}. We further consider an even longer panel, T = 12. These results are presented in Table 1, Table 2 and Table 3, respectively, where we compare our subset-continuous-updating estimators (denoted SCUDIF and SCUSYS) to the standard two-step estimators of Arellano and Bond [1] and Blundell and Bond [3] (denoted DIF and SYS) from five perspectives: (1) the estimation accuracy, quantified by median absolute errors (MAE); (2) the sampling standard deviations across all simulation repetitions (SD); (3) the Windmeijer [14] corrected standard errors (SE);⁴ (4) the size of the two-tailed t-test of the null hypothesis that the parameter equals its true value, at the 5% significance level; and (5) the rejection frequency of the 5% Sargan-Hansen test (denoted RFSH). All results are obtained from 10,000 Monte Carlo repetitions. The following findings are worth noting.
Firstly, we find evidence that the standard two-step DIF and SYS GMM estimates of θ are sensitive to the variance ratio $\sigma_\mu^2/\sigma_\nu^2$, which is consistent with the results of Hayakawa [6] and Bun and Windmeijer [7]. In particular, for any given combination of θ, ρ, and λ, the DIF and SYS GMM estimates of the autoregressive parameter become more biased as the variance ratio increases. In most cases, the bias in the β estimates also increases with the variance ratio. For example, with $(\theta, \rho, \lambda) = (0.8, 0.8, -0.4)$ and T = 4, the MAEs of $\hat{\theta}_{DIF}$ and $\hat{\theta}_{SYS}$ both roughly double when the variance ratio increases from 1/4 to 4, while the MAEs of $\hat{\beta}_{DIF}$ and $\hat{\beta}_{SYS}$ increase by factors of roughly two and four-thirds, respectively. In contrast, while the variance ratio also affects the estimation accuracy of the SCUDIF and SCUSYS GMM estimators, the MAE values for these estimators deteriorate less quickly than do the corresponding values for the standard two-step DIF and SYS GMM estimators.
Secondly, compared to the standard two-step DIF and SYS GMM estimators, our proposed subset-continuous-updating counterparts are noticeably less biased in estimating the autoregressive parameter, especially in the case of large variance ratios and relatively long panels, where the problem of too many weak instruments becomes more prominent. For example, with $(\theta, \rho, \lambda) = (0.8, 0.8, -0.4)$, $\sigma_\mu^2/\sigma_\nu^2 = 4$, and T = 12, the MAE of $\hat{\theta}_{SCUDIF}$ is two-thirds that of $\hat{\theta}_{DIF}$, and the MAE of $\hat{\theta}_{SCUSYS}$ is only one-fourth that of $\hat{\theta}_{SYS}$. Summarizing the θ MAE results in Table 1, Table 2 and Table 3, the following table reports, for the large-variance-ratio cases ($\sigma_\mu^2/\sigma_\nu^2 = 4$), the frequency with which our proposed subset-continuous-updating estimators have a smaller MAE than the corresponding two-step estimators across the different $(\theta, \rho, \lambda)$ combinations:
Estimation accuracy on θ          T = 4     T = 8     T = 12
SCUDIF outperforms DIF            75.0%     100.0%    75.0%
SCUSYS outperforms SYS            87.5%     100.0%    100.0%
With regard to the β estimation, we do not observe any clear pattern in the relative performance of these two sets of estimators. For example, with $(\theta, \rho, \lambda) = (0.8, 0.8, -0.4)$, $\sigma_\mu^2/\sigma_\nu^2 = 4$, and T = 12, SCUDIF and SCUSYS improve on DIF and SYS, respectively, in estimating β. However, when $x_{it}$ becomes less persistent, i.e., ρ = 0.5, SCUDIF still significantly outperforms DIF, but SCUSYS performs worse than SYS in terms of the estimation accuracy on β.
Turning to the size estimates for the t-tests, we find that none of the standard two-step or the subset-continuous-updating estimators consistently yields well-sized t-tests, either for θ or for β. For the DIF and SYS tests, this is a bias problem: the Windmeijer [14] corrected standard errors are generally good estimates of the sampling standard deviations of these standard two-step estimators, but this is no guarantee of a well-sized t-test when the parameter estimates suffer substantial biases. For example, with $(\theta, \rho, \lambda) = (0.8, 0.5, -0.4)$ and $\sigma_\mu^2/\sigma_\nu^2 = 4$, the empirical rejection frequencies of the DIF t-tests of $H_0: \theta = 0.8$ and $H_0: \beta = 1$ are both greater than 50% at all of the panel lengths considered. The SCUDIF and SCUSYS estimators are less biased, but their standard error estimates are somewhat downward biased; hence, these tests are also over-sized. These results suggest that bootstrap or simulation-based methods are necessary for credible structural parameter inference with samples of the lengths considered here, regardless of which of these estimators one chooses. We note, however, that the more precise estimation of θ provided by the SCUDIF and SCUSYS estimators remains advantageous when such simulation-based inference is used.
Lastly and most importantly, the standard two-step estimators tend to over-reject in the Sargan-Hansen test, whereas our subset-continuous-updating counterparts are generally well-sized. Given that the endogenous variable $x_{it}$ and the dependent variable $y_{it}$ are both mean stationary, lagged levels and lagged differences of $x_{it}$ and $y_{it}$ are valid instruments for the first-difference equation and the level equation, respectively. Thus, presuming that the model is in all other respects well-specified, the null hypothesis of the Sargan-Hansen test is satisfied. At every panel length that we consider, T = {4, 8, 12}, the rejection frequency of the Sargan-Hansen test for our subset-continuous-updating estimators lies around 5%, whereas it sometimes exceeds 15% when the standard two-step estimators are adopted, for instance with θ = 0.8, $\sigma_\mu^2/\sigma_\nu^2 = 4$, and T = {8, 12}. In the worst-case scenario, $(\theta, \rho, \lambda) = (0.8, 0.5, -0.1)$, $\sigma_\mu^2/\sigma_\nu^2 = 4$, and T = 8, the rejection frequency is even higher than 25%. In other words, the Sargan-Hansen test associated with the standard two-step SYS GMM estimator has roughly a 25 percent chance of incorrectly rejecting a true null hypothesis. In contrast, the 5% Sargan-Hansen test associated with the subset-continuous-updating SYS GMM estimator has a reasonable rejection frequency (size) of 6.2%.
In conclusion, the subset-continuous-updating method that we propose in this paper is shown to improve the estimation accuracy on the autoregressive parameter and the size of the Sargan-Hansen test in a dynamic panel data model when the variance ratio becomes large. No extra computational burden is incurred when we apply this method to models with multiple explanatory variables.

4. Conclusions

The two-step GMM estimators of Arellano and Bond [1] and Blundell and Bond [3] for dynamic panel data models have been widely used in empirical work. However, neither of them performs well in small samples with weak instruments. The continuous-updating GMM estimator proposed by Hansen, Heaton, and Yaron [9] is in principle able to reduce the small-sample bias, but it involves high-dimensional optimizations when the number of regressors is large. Given the fact that the absolute value of the autoregressive parameter is less than unity for a dynamic panel data model to be stationary, we propose a computationally feasible variation on the standard two-step GMM estimators by applying the idea of continuous-updating to the autoregressive parameter only. We show that our subset-continuous-updating method does not alter the asymptotic distribution of the two-step GMM estimators and hence retains consistency. According to our Monte Carlo simulation results, the subset-continuous-updating GMM estimators for dynamic panel data models outperform their standard two-step counterparts in finite samples in terms of the estimation accuracy on the autoregressive parameter and the size of the Sargan–Hansen test.

Acknowledgments

We would like to thank editor Kerry Patterson and two anonymous referees for constructive comments.

Author Contributions

Overall, both authors contributed equally to this project.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. M. Arellano, and S. Bond. “Some tests of specification for panel data: Monte Carlo evidence and an application to employment equations.” Rev. Econ. Stud. 58 (1991): 277–297. [Google Scholar] [CrossRef]
  2. S.C. Ahn, and P. Schmidt. “Efficient estimation of models for dynamic panel data.” J. Econom. 68 (1995): 5–27. [Google Scholar] [CrossRef]
  3. R. Blundell, and S. Bond. “Initial conditions and moment restrictions in dynamic panel data models.” J. Econom. 87 (1998): 115–143. [Google Scholar] [CrossRef]
  4. M. Arellano, and O. Bover. “Another look at the instrumental variable estimation of error-component models.” J. Econom. 68 (1995): 29–51. [Google Scholar] [CrossRef]
  5. R. Blundell, S. Bond, and F. Windmeijer. “Estimation in Dynamic Panel Data Models: Improving on the Performance of the Standard GMM Estimator.” In Nonstationary Panels, Panel Cointegration and Dynamic Panels. Edited by B.H. Baltagi, T.B. Fomby and R.C. Hill. (Book Series: Advances in Econometrics); Bingley, UK: Emerald Group Publishing Limited, 2001, Volume 15, pp. 53–91. [Google Scholar]
  6. K. Hayakawa. “Small sample bias properties of the system GMM estimator in dynamic panel data models.” Econ. Lett. 95 (2007): 32–38. [Google Scholar] [CrossRef]
  7. M.J.G. Bun, and F. Windmeijer. “The weak instrument problem of the system GMM estimator in dynamic panel data models.” Econom. J. 13 (2010): 95–126. [Google Scholar] [CrossRef]
  8. W.K. Newey, and F. Windmeijer. “Generalized method of moments with many weak moment conditions.” Econometrica 77 (2009): 687–719. [Google Scholar]
  9. L.P. Hansen, J. Heaton, and A. Yaron. “Finite-sample properties of some alternative GMM estimators.” J. Bus. Econ. Stat. 14 (1996): 262–280. [Google Scholar]
  10. S.G. Donald, and W.K. Newey. “A Jackknife Interpretation of the Continuous Updating Estimator.” Econ. Lett. 67 (2000): 239–243. [Google Scholar] [CrossRef]
  11. J.D. Sargan. “The estimation of economic relationships using instrumental variables.” Econometrica 26 (1958): 393–415. [Google Scholar] [CrossRef]
  12. L.P. Hansen. “Large sample properties of generalized method of moments estimators.” Econometrica 50 (1982): 1029–1054. [Google Scholar] [CrossRef]
  13. G.E.P. Box, and G.M. Jenkins. Time Series Analysis: Forecasting and Control. San Francisco, CA, USA: Holden-Day, 1976. [Google Scholar]
  14. F. Windmeijer. “A finite sample correction for the variance of linear efficient two-step GMM estimators.” J. Econom. 126 (2005): 25–51. [Google Scholar] [CrossRef]
  • 1. To obtain a consistent first-step estimator, we use
    $$ W_d = \left(\frac{1}{N}\sum_{i=1}^{N} Z_{di}' H_d Z_{di}\right)^{-1} \quad \text{and} \quad W_s = \left(\frac{1}{N}\sum_{i=1}^{N} Z_{si}' H_s Z_{si}\right)^{-1}, $$
    in place of $\hat{\Omega}(\tilde{\theta}, \tilde{\beta})^{-1}$ in Equation (16), as suggested by Arellano and Bond [1] and Blundell, Bond, and Windmeijer [5] for the DIF and SYS GMM estimators, respectively, where $H_d$ is a $(T-2)$-square matrix that has twos on the main diagonal, minus ones on the first subdiagonals, and zeros otherwise, i.e.,
    $$ H_d = \begin{pmatrix} 2 & -1 & 0 & \cdots & 0 \\ -1 & 2 & -1 & \cdots & 0 \\ 0 & -1 & 2 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 2 \end{pmatrix}, $$
    and $H_s$ is the matrix
    $$ H_s = \begin{pmatrix} H_d & 0 \\ 0 & I_{T-2} \end{pmatrix}. $$
    (A short construction sketch for $H_d$ and $H_s$ appears after these notes.)
  • 2. In the Monte Carlo experiments, we conduct a bounded optimization over θ ∈ (−1, 1), using the two-step DIF or SYS GMM estimate of θ as the starting value and iterating until a convergence criterion is met. We set both the step tolerance and the function tolerance to a relatively small number, $10^{-8}$.
  • 3. After we draw $\mu_i$, $\nu_{it}$, and $e_{it}$ from the normal distributions, we simply set the initial values of $x_{it}$ and $y_{it}$ to
    $$ x_{i1} = \frac{\tau}{1-\rho}\,\mu_i + \epsilon_{i1}, \quad \text{where } \epsilon_{i1} = \lambda\nu_{i1} + e_{i1}, $$
    $$ y_{i1} = \frac{1}{1-\theta}\left(1 + \frac{\beta\tau}{1-\rho}\right)\mu_i + \xi_{i1}, \quad \text{where } \xi_{i1} = \beta(\lambda\nu_{i1} + e_{i1}) + \nu_{i1}, $$
    to ensure mean stationarity.
  • 4. The standard error estimates for the two-step DIF and SYS GMM estimators of θ and β are calculated as in Windmeijer [14]. The same correction is applied to the SCUDIF and SCUSYS GMM estimates of β conditional on θ. The standard error estimates for the SCUDIF and SCUSYS GMM estimators of θ are instead obtained from the Hessian matrix of the one-dimensional optimization problem.
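The $H_d$ and $H_s$ matrices of footnote 1 can be built in a few lines; the helper below is our own sketch, not the authors' code.

```python
import numpy as np
from scipy.linalg import block_diag

def first_step_weight_cores(T):
    """H_d: (T-2) x (T-2) with 2 on the diagonal and -1 on the first
    sub- and super-diagonals; H_s: block-diagonal with H_d and I_{T-2}."""
    Hd = 2 * np.eye(T - 2) - np.eye(T - 2, k=1) - np.eye(T - 2, k=-1)
    Hs = block_diag(Hd, np.eye(T - 2))
    return Hd, Hs
```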
Table 1. Monte Carlo results, N = 500 and T = 4.
λ = −0.1; left-hand block: σμ²/σν² = 1/4, right-hand block: σμ²/σν² = 4
(θ, ρ) | MAE(θ) SD(θ) SE(θ) Size(θ) MAE(β) SD(β) SE(β) Size(β) RFSH | MAE(θ) SD(θ) SE(θ) Size(θ) MAE(β) SD(β) SE(β) Size(β) RFSH
( 0.5 , 0.5 ) DIF0.06250.09140.09225.50.23410.34790.34635.45.20.12400.17070.17218.70.35430.50180.50577.25.7
SCUDIF0.06270.09450.066516.70.23280.35380.31807.44.80.12870.20040.144725.80.36790.57600.338423.04.1
SYS0.03660.05250.05255.40.15650.23210.23094.94.60.06520.07490.075317.40.22030.30250.30199.710.2
SCUSYS0.03770.05460.037518.40.15690.23270.22825.34.40.05150.07920.049926.80.21910.31960.29789.56.0
( 0.5 , 0.8 ) DIF0.05930.08720.08775.50.28110.42850.43094.54.90.11390.15980.16128.60.51460.78630.77996.96.2
SCUDIF0.05920.09170.064216.90.28590.44560.351911.14.30.13160.21410.148234.00.60211.04790.407440.64.1
SYS0.03350.04880.04856.00.10370.15920.16324.24.30.05680.06390.061220.00.19860.22170.221419.014.2
SCUSYS0.03400.05090.034619.00.10420.16050.16054.74.10.04620.07670.040033.70.21240.25650.224922.210.3
( 0.8 , 0.5 ) DIF0.14230.20100.20588.50.32430.46240.47267.26.20.33360.36470.370219.50.65840.77670.765617.57.2
SCUDIF0.14420.17780.223014.90.29500.44310.311416.54.50.20000.34460.437726.50.48620.75550.312939.54.3
SYS0.03880.05930.05787.20.14750.22230.22704.44.50.08360.06520.060548.00.16950.25290.26714.912.8
SCUSYS0.04220.06830.041923.60.15080.22680.22135.54.10.07380.14660.098048.90.20660.34660.28779.05.7
( 0.8 , 0.8 ) DIF0.10730.15920.15986.70.42050.64860.64796.05.40.22020.27910.282712.60.84221.13801.133411.24.9
SCUDIF0.11240.16580.178618.10.41790.70140.337528.84.00.20000.32460.368432.60.78721.35990.366956.62.6
SYS0.03010.04610.04596.40.09100.14330.14504.34.70.05860.04570.042944.30.11380.17520.18714.89.5
SCUSYS0.03300.05390.033823.50.09350.15280.14006.04.10.05420.08930.038953.60.15230.27170.205113.45.5
λ = −0.4; left-hand block: σμ²/σν² = 1/4, right-hand block: σμ²/σν² = 4
(θ, ρ) | MAE(θ) SD(θ) SE(θ) Size(θ) MAE(β) SD(β) SE(β) Size(β) RFSH | MAE(θ) SD(θ) SE(θ) Size(θ) MAE(β) SD(β) SE(β) Size(β) RFSH
( 0.5 , 0.5 ) DIF0.09740.12880.131910.00.26230.33490.340712.37.80.19090.19610.202022.40.46260.47500.487222.811.6
SCUDIF0.09360.15000.103718.20.24830.38350.224124.55.30.17760.25750.189334.20.42150.60520.233449.14.9
SYS0.04520.06680.06666.10.12370.17120.17276.95.20.09210.10270.098123.50.17530.24890.24749.910.5
SCUSYS0.04830.07180.047920.40.12350.17430.15749.94.90.07350.12420.067833.80.18640.29120.208216.04.2
( 0.5 , 0.8 ) DIF0.06310.08880.08997.60.25010.34090.34728.66.60.11440.14040.142514.40.42600.53650.542614.07.7
SCUDIF0.06260.09930.068217.90.23900.37870.236120.75.20.11160.18900.128333.90.40300.70970.255643.94.0
SYS0.03310.04840.04845.50.07050.10570.10594.74.80.06110.06810.063922.80.12650.16370.165615.510.8
SCUSYS0.03490.05130.034519.70.07130.10730.09976.64.60.04590.08370.041532.70.13710.20800.161815.85.5
( 0.8 , 0.5 ) DIF0.29410.27780.265930.40.63480.59280.565531.612.90.50760.30540.299653.71.07080.66560.633553.010.3
SCUDIF0.20000.27540.296121.60.38770.60990.212242.95.80.22160.37710.425533.10.61690.81820.197060.85.9
SYS0.05030.08090.07826.90.12540.18610.18615.44.90.09660.06980.064354.10.12380.19310.20593.98.4
SCUSYS0.05930.09500.062428.20.13120.19920.156713.34.00.10530.18530.110359.90.18450.37840.215315.83.3
( 0.8 , 0.8 ) DIF0.10670.13500.136013.00.43770.54730.544413.98.50.21150.19410.192329.40.84410.79320.775328.79.2
SCUDIF0.10340.14910.136021.70.39890.61640.234144.34.60.20000.27050.228838.10.67281.11540.241967.73.3
SYS0.02720.04290.04285.90.06010.09400.09453.55.20.05400.04590.043042.00.08200.13000.13786.48.4
SCUSYS0.03090.04990.031723.60.06190.10290.08378.14.60.05100.09430.038553.00.11670.27540.150216.44.1
Note: Statistics are based on 10,000 Monte Carlo replications. MAE stands for the median absolute error; SD is the sampling standard deviation across 10,000 simulation replications and SE stands for the average standard error estimate. Size (in percent) is the rejection frequency of the two-tailed t-test, and RFSH (in percent) is the rejection frequency of the 5% Sargan-Hansen test. DIF and SYS stand for the standard two-step Arellano-Bond and Blundell-Bond GMM estimators. Their subset-continuous-updating counterparts are denoted SCUDIF and SCUSYS. The total number of instruments is m d = 6 for DIF and SCUDIF and m s = 10 for SYS and SCUSYS.
Table 2. Monte Carlo results, N = 500 and T = 8.
λ = −0.1; left-hand block: σμ²/σν² = 1/4, right-hand block: σμ²/σν² = 4
(θ, ρ) | MAE(θ) SD(θ) SE(θ) Size(θ) MAE(β) SD(β) SE(β) Size(β) RFSH | MAE(θ) SD(θ) SE(θ) Size(θ) MAE(β) SD(β) SE(β) Size(β) RFSH
( 0.5 , 0.5 ) DIF0.02510.03450.03477.10.09690.13040.12897.65.50.03530.04430.04438.90.10800.13810.13819.15.6
SCUDIF0.02420.03610.024519.30.09430.13050.12637.75.20.03190.04790.032732.40.10110.13970.12889.25.0
SYS0.01670.02440.02415.80.07780.11160.11016.15.40.03830.03490.032022.60.11740.14350.134012.711.0
SCUSYS0.01710.02520.017119.70.07730.11170.10936.25.30.01970.02940.020529.40.10170.14210.13608.15.6
( 0.5 , 0.8 ) DIF0.02350.03250.03287.00.08800.12030.11976.95.20.03190.04140.04158.50.10980.14440.14318.15.3
SCUDIF0.02290.03430.023821.80.08370.12110.11118.14.70.03050.04600.029139.60.10090.15060.116713.04.7
SYS0.01580.02280.02256.20.05120.07600.07515.74.80.04620.03000.027540.20.14930.09620.090941.917.1
SCUSYS0.01610.02370.016221.70.05130.07630.07446.24.80.02000.02780.017839.70.14410.11350.105631.29.6
( 0.8 , 0.5 ) DIF0.05680.05840.059513.80.13220.14650.145412.66.40.10230.08000.082022.80.19520.17380.171519.47.0
SCUDIF0.04310.06500.043932.40.10690.15200.124612.35.00.06580.09940.054252.40.13060.19210.125820.34.6
SYS0.01940.02640.02638.70.07570.10830.10736.35.00.08130.02750.024080.20.08700.11370.11248.026.0
SCUSYS0.01940.02810.018829.60.07620.10900.10656.94.90.02900.04770.022752.00.09760.13650.12868.36.2
( 0.8 , 0.8 ) DIF0.04100.04560.045511.50.13470.15910.155310.85.60.06660.06240.061717.60.20740.21370.204916.05.8
SCUDIF0.03400.05310.034534.80.11400.17520.106521.84.50.05430.08640.041755.20.16560.27540.110239.34.0
SYS0.01540.02010.02019.30.04470.06750.06725.15.60.05730.01960.016881.50.06420.06990.069314.422.4
SCUSYS0.01490.02220.014531.20.04620.06950.06736.15.00.02370.04030.013059.70.10030.09630.090123.08.4
λ = −0.4; left-hand block: σμ²/σν² = 1/4, right-hand block: σμ²/σν² = 4
(θ, ρ) | MAE(θ) SD(θ) SE(θ) Size(θ) MAE(β) SD(β) SE(β) Size(β) RFSH | MAE(θ) SD(θ) SE(θ) Size(θ) MAE(β) SD(β) SE(β) Size(β) RFSH
( 0.5 , 0.5 ) DIF0.04470.04490.045415.80.15140.10070.100733.89.30.06370.05310.053422.50.18260.11020.109639.99.6
SCUDIF0.03410.04990.034222.80.10780.10590.084828.37.30.04130.06150.039540.60.11500.11900.085031.86.9
SYS0.02160.03130.03095.90.09370.08090.080421.66.70.06240.04960.043731.50.08100.11210.102510.710.9
SCUSYS0.02260.03340.022223.10.08710.08170.075222.56.40.02660.04020.027335.60.08370.10720.091714.74.1
( 0.5 , 0.8 ) DIF0.02440.03180.03188.30.08680.08790.085316.96.10.03130.03770.037410.70.10260.09780.095719.26.8
SCUDIF0.02240.03380.023422.90.07410.08970.072818.85.40.02720.04090.025340.40.07900.10160.073922.76.0
SYS0.01540.02220.02206.10.04050.05230.05079.15.00.04970.03190.028542.20.07820.07400.068725.214.2
SCUSYS0.01610.02360.016123.80.04050.05230.04989.94.90.01840.02810.018537.60.06090.08400.075511.65.2
( 0.8 , 0.5 ) DIF0.16270.08200.079754.40.35600.16150.152665.913.50.23290.09740.094170.20.47880.19260.178277.014.0
SCUDIF0.06650.09940.060742.90.14770.18330.084844.26.90.08950.13020.065357.60.17250.23410.085150.86.0
SYS0.02570.03700.03587.90.09360.08690.085718.96.30.09400.02870.024884.10.05910.08050.07927.217.5
SCUSYS0.02810.04050.024938.80.09030.08910.073726.95.70.04520.07550.030561.70.09880.12740.090021.14.1
( 0.8 , 0.8 ) DIF0.04320.03880.038319.40.16450.12880.124826.67.50.06420.04830.046827.50.23010.16230.152434.17.8
SCUDIF0.02840.04370.027737.80.10120.13950.072034.75.40.03760.05890.029453.80.12590.18640.073044.65.3
SYS0.01360.01880.01838.90.03480.04790.04717.25.20.05220.01900.016578.00.05520.05020.049921.716.9
SCUSYS0.01400.02060.013333.90.03620.04920.044510.24.90.01950.03620.013554.70.06280.08810.069516.05.4
Note: Statistics are based on 10,000 Monte Carlo replications. MAE stands for the median absolute error; SD is the sampling standard deviation across 10,000 simulation replications and SE stands for the average standard error estimate. Size (in percent) is the rejection frequency of the two-tailed t-test, and RFSH (in percent) is the rejection frequency of the 5% Sargan-Hansen test. DIF and SYS stand for the standard two-step Arellano-Bond and Blundell-Bond GMM estimators. Their subset-continuous-updating counterparts are denoted SCUDIF and SCUSYS. The total number of instruments is m d = 42 for DIF and SCUDIF and m s = 54 for SYS and SCUSYS.
Table 3. Monte Carlo results, N = 500 and T = 12.
λ = −0.1; left-hand block: σμ²/σν² = 1/4, right-hand block: σμ²/σν² = 4
(θ, ρ) | MAE(θ) SD(θ) SE(θ) Size(θ) MAE(β) SD(β) SE(β) Size(β) RFSH | MAE(θ) SD(θ) SE(θ) Size(θ) MAE(β) SD(β) SE(β) Size(β) RFSH
( 0.5 , 0.5 ) DIF0.01660.02310.02307.20.07520.08960.087912.04.50.02050.02650.02668.50.08130.09180.090012.84.8
SCUDIF0.01710.02570.017521.60.07380.08940.087211.94.40.01970.02990.019537.00.07700.09180.088412.24.5
SYS0.01250.01770.01755.90.06200.08220.08087.94.30.03270.02340.021035.80.08070.09880.092412.18.1
SCUSYS0.01330.01930.013322.00.06160.08220.08018.24.30.01450.02090.014535.70.07330.09800.09368.93.9
( 0.5 , 0.8 ) DIF0.01560.02190.02176.90.05300.06880.06848.75.20.01920.02510.02508.10.05900.07490.073910.25.3
SCUDIF0.01670.02460.017325.20.05170.06880.06698.75.00.01970.02900.017444.10.05530.07600.069410.55.1
SYS0.01190.01670.01646.50.03700.05450.05365.84.90.04300.02180.019557.70.11600.06830.063246.313.0
SCUSYS0.01260.01840.012626.80.03680.05450.05276.15.00.01550.02140.013045.70.11340.07670.070538.07.5
( 0.8 , 0.5 ) DIF0.03610.03350.033617.60.09530.09140.092116.56.10.05410.04100.041424.40.11490.09770.097821.06.3
SCUDIF0.02670.03990.026037.90.07530.09330.086113.05.20.03470.05270.027053.80.07910.10220.086615.15.1
SYS0.01360.01870.01877.70.06120.07950.07918.34.70.07600.02030.018091.70.06630.08250.08169.922.1
SCUSYS0.01390.02040.013634.30.06100.07970.07868.44.60.01750.02710.014352.00.07010.09480.09088.94.6
( 0.8 , 0.8 ) DIF0.02420.02570.025512.30.07180.08170.078912.85.20.03430.03170.031318.10.09580.09640.091116.75.2
SCUDIF0.02020.03070.019239.20.05850.08650.064716.04.60.02700.04190.019756.60.07210.11090.065823.74.4
SYS0.01110.01430.01429.80.03290.04890.04905.14.60.05450.01390.012394.80.06300.04760.048525.322.6
SCUSYS0.01080.01620.010536.70.03320.04940.04905.54.50.01680.02580.010259.90.08740.06480.062929.57.0
λ = −0.4; left-hand block: σμ²/σν² = 1/4, right-hand block: σμ²/σν² = 4
(θ, ρ) | MAE(θ) SD(θ) SE(θ) Size(θ) MAE(β) SD(β) SE(β) Size(β) RFSH | MAE(θ) SD(θ) SE(θ) Size(θ) MAE(β) SD(β) SE(β) Size(β) RFSH
( 0.5 , 0.5 ) DIF0.03190.02940.029118.20.14930.06380.062865.69.10.03930.03210.031923.20.15910.06600.064668.09.4
SCUDIF0.02270.03460.023026.60.12470.06660.057756.98.00.02510.03860.021944.60.12530.07000.057956.77.7
SYS0.01580.02270.02256.20.11070.05820.057149.06.20.05340.03400.029145.10.04980.07380.06747.99.0
SCUSYS0.01760.02570.017327.50.10270.05920.054447.76.00.01900.02850.018043.20.07140.06960.062722.53.3
( 0.5 , 0.8 ) DIF0.01480.02100.02066.70.06240.04720.045529.16.20.01680.02300.02268.00.06670.04950.047430.16.4
SCUDIF0.01600.02380.015928.40.05790.04800.043429.16.10.01740.02640.015745.00.05950.05090.043930.35.8
SYS0.01170.01650.01617.40.03720.03710.036017.75.20.04710.02260.019962.70.04760.04990.045418.410.8
SCUSYS0.01300.01860.012531.20.03630.03700.035717.45.20.01410.02120.013143.90.03680.05290.04888.54.3
( 0.8 , 0.5 ) DIF0.10160.04560.044463.90.25550.08510.082088.013.40.12970.05220.050274.10.29670.09540.089591.514.1
SCUDIF0.03860.05800.032746.50.12870.09620.057657.28.30.04480.06960.031559.20.13310.11040.057757.68.3
SYS0.01770.02560.02527.40.11470.06140.061046.77.00.08970.02090.018494.80.03840.05720.05645.418.4
SCUSYS0.02100.03020.017045.20.10620.06380.053650.76.60.02540.04220.017458.90.07900.07540.062826.14.1
( 0.8 , 0.8 ) DIF0.02310.02150.021018.10.09470.06310.058737.36.70.02940.02470.023823.20.11030.07100.064741.66.6
SCUDIF0.01700.02550.015340.90.06190.06900.043135.45.70.02010.03030.014655.20.06730.08000.043439.05.5
SYS0.00950.01310.01278.80.03210.03620.034713.54.80.04950.01350.011992.30.04700.03580.035328.917.1
SCUSYS0.01050.01540.009440.20.03290.03690.033117.34.90.01220.02080.009254.60.03830.05500.048210.94.3
Note: Statistics are based on 10,000 Monte Carlo replications. MAE stands for the median absolute error; SD is the sampling standard deviation across 10,000 simulation replications and SE stands for the average standard error estimate. Size (in percent) is the rejection frequency of the two-tailed t-test, and RFSH (in percent) is the rejection frequency of the 5% Sargan-Hansen test. DIF and SYS stand for the standard two-step Arellano-Bond and Blundell-Bond GMM estimators. Their subset-continuous-updating counterparts are denoted SCUDIF and SCUSYS. The total number of instruments is m d = 110 for DIF and SCUDIF and m s = 130 for SYS and SCUSYS.
