Article

Higher Order Bias Correcting Moment Equation for M-Estimation and Its Higher Order Efficiency

Department of Economics, Michigan State University, 486W Circle Dr, East Lansing, MI 48824, USA
Econometrics 2016, 4(4), 48; https://doi.org/10.3390/econometrics4040048
Submission received: 21 September 2016 / Revised: 15 November 2016 / Accepted: 23 November 2016 / Published: 8 December 2016

Abstract

This paper studies an alternative bias correction for the M-estimator, which is obtained by correcting the moment equations in the spirit of Firth (1993). In particular, this paper compares the stochastic expansions of the analytically-bias-corrected estimator and the alternative estimator and finds that the third-order stochastic expansions of these two estimators are identical. This implies that, at least in terms of the third-order stochastic expansion, we cannot improve on the simple one-step bias correction by using the bias correction of the moment equations. This finding suggests that the comparison between the one-step bias correction and the method of correcting the moment equations or the fully-iterated bias correction should be based on stochastic expansions higher than the third order.
JEL Classification:
C10

1. Introduction

Asymptotic bias corrections are pursued to make estimators closer to the true values. There are several ways of achieving this goal, including analytical corrections, jackknife and bootstrap methods (see, e.g., Quenouille (1956) [1], Hall (1992) [2], Shao and Tu (1995) [3], MacKinnon and Smith (1998) [4], Andrews (2002) [5], Hahn and Newey (2004) [6], Bun and Carree (2005) [7], Bao and Ullah (2007) [8,9], Bao (2013) [10] and Yang (2015) [11]). This variety of bias correction methods raises the question of whether one method is preferable to the others, at least on asymptotic efficiency grounds (e.g., see Hahn et al. (2004) [12]). For maximum likelihood (ML) estimation, Hahn et al. (2004) [12] show that the method of bias correction does not affect the higher order efficiency of any estimator that is first-order efficient in parametric or semiparametric models. The ML estimator is a special case of the M-estimator, and this paper extends their intuition to a general class of M-estimators.1
Specifically, this paper considers an alternative bias correction for the M-estimator, which is achieved by correcting the moment equations in the spirit of Firth (1993) [13]. In particular, we compare the stochastic expansions of the analytically-bias-corrected estimator (referred to as the one-step bias correction) and the alternative estimator and find that the third-order stochastic expansions of these two estimators are identical. This is a stronger result than comparing higher order variances, since it implies that these two estimators not only have the same higher order variances, but also agree on further properties of their stochastic expansions.2 We do not consider other bias correction methods, such as bootstrap and jackknife methods, in this paper.
In the literature (see Hahn and Newey (2004) [6] and Fernandez-Val (2004) [14] for nonlinear panel data models), it has been noted that removing the bias directly from the moment equations has the attractive feature that it does not use pre-estimated parameters that are not bias corrected, though this alternative approach requires more intensive computation.3 Because the analytically-bias-corrected estimator is a two-step estimator, for which an initial estimator needs to be plugged in, while the bias-corrected moment equations estimator is a one-step estimator that does not need an initial estimator, the higher order asymptotic equivalence of these two estimators is not obvious. This paper, however, shows that at least for the third-order stochastic expansion, there is no benefit of using the bias correction of the moment equations over the simple one-step bias correction in the context of M-estimators. This finding suggests that the comparison between the one-step bias correction and the method of correcting the moment equations should be based on stochastic expansions higher than the third order.
Examples of M-estimation include maximum likelihood estimation (MLE), least squares and instrumental variable (IV) estimation. Many other useful estimators can also fit into the M-estimation framework with an appropriate definition of the moment equations. These include some cases of the generalized method of moments (GMM; see examples in Rilstone et al. (1996) [15]) and two-step estimators (Newey (1984) [16]). We note that the generalized empirical likelihood (GEL) can also fit into this framework. This suggests that Firth (1993)'s [13] approach of correcting the moment equations can be an alternative to Newey and Smith (2004)'s [17] approach to obtaining the higher order bias and variance terms of GEL.
Our paper is organized as follows. In Section 2, we derive the higher order stochastic expansion of the M-estimator and consider the one-step bias correction. Section 3 introduces the bias-corrected moment equations estimator and derives its higher order stochastic expansion. Section 4 discusses the higher order efficiency properties of several analytically-bias-corrected estimators. We conclude in Section 5. Primitive conditions for the validity of the higher order stochastic expansions and mathematical details are gathered in Appendix A and Appendix B.

2. Higher Order Expansion for the M-Estimator

Consider a moment condition:
$$E[s(z_i, \theta_0)] = 0 \qquad (1)$$
where $s(z_i, \theta)$ is a known $k \times 1$ vector-valued function of the data, $\theta \in \Theta \subset \mathbb{R}^k$ is a parameter vector, and $z_i$ includes both endogenous and exogenous variables. The M-estimator is obtained by solving:
$$\frac{1}{n}\sum_{i=1}^{n} s(z_i, \hat{\theta}) = 0. \qquad (2)$$
Examples of this class of estimators include MLE, least squares and IV estimation. In the MLE, $s(z_i, \theta)$ is the single-observation score function. For the linear or nonlinear regression model $y_i = f(X_i; \theta_0) + \varepsilon_i$, we set $s(z_i, \theta) = \frac{\partial f(X_i;\theta)}{\partial \theta}\left(y_i - f(X_i;\theta)\right)$ and $z_i = (y_i, X_i')'$ for a known function $f(\cdot)$. In the linear IV model, we have $s(z_i, \theta) = w_i(y_i - X_i'\theta)$ and $z_i = (y_i, X_i', w_i')'$ for some instruments $w_i$ with $\dim(w_i) = \dim(\theta)$. Two-step estimators, such as two-stage least squares, feasible generalized least squares (GLS) and Heckman (1979) [18]'s two-step estimator, also fit into this framework (see Newey (1984) [16]). Rilstone et al. (1996) [15] provide some special cases of GMM estimators that can be put into the M-estimation framework, but the examples are not restricted to those. Partly motivated by this wide applicability, we study the stochastic expansion and the bias correction of the M-estimator.
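As a concrete sketch of solving (2) (our illustration, not code from the paper), the following solves the sample moment equation by Newton iteration for an exponential likelihood, whose single-observation score is $s(z,\theta) = 1/\theta - z$; for this score the M-estimator has the closed form $\hat{\theta} = 1/\bar{z}$, which is used only as a check:

```python
import random

# Illustrative sketch (not the paper's code): M-estimation for an
# exponential model with rate theta, where the single-observation score
# is s(z, theta) = 1/theta - z, so that E[s(z_i, theta_0)] = 0.

def s(z, theta):
    return 1.0 / theta - z          # moment function s(z_i, theta)

def ds(z, theta):
    return -1.0 / theta ** 2        # derivative of s with respect to theta

def m_estimate(data, theta=1.0, tol=1e-12, max_iter=100):
    """Solve (1/n) sum_i s(z_i, theta) = 0 by Newton iteration."""
    n = len(data)
    for _ in range(max_iter):
        g = sum(s(z, theta) for z in data) / n       # sample moment
        dg = sum(ds(z, theta) for z in data) / n     # its derivative
        step = g / dg
        theta -= step
        if abs(step) < tol:
            break
    return theta

random.seed(0)
theta0 = 2.0
data = [random.expovariate(theta0) for _ in range(200)]
theta_hat = m_estimate(data)
zbar = sum(data) / len(data)
assert abs(theta_hat - 1.0 / zbar) < 1e-8   # matches the closed form 1/zbar
```

Any moment function satisfying (1) can be swapped in for `s`; only the score and its derivative change.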
We obtain the higher order stochastic expansion of the M-estimator using the iterative approach of Rilstone et al. (1996) [15] up to a certain order. This approach is analytically convenient and straightforward to implement, since the estimators are expressed as functions of sums of random variables. The Edgeworth expansion, whose validity is derived in Bhattacharya and Ghosh (1978) [19], can be considered as an alternative, but the stochastic expansion approach is much simpler. Moreover, the main purpose of this paper is to compare several estimators based on the higher order variance (the $O(n^{-1})$ variance). Noting that rankings based on the higher order variances in a third-order stochastic expansion are equivalent to rankings based on the variances of an Edgeworth expansion, as shown in Pfanzagl and Wefelmeyer (1978) [20] and Ghosh et al. (1980) [21] and as discussed in Rothenberg (1984) [22], it suffices to use the simple stochastic expansions for our purposes.
Here, we borrow Rilstone et al. (1996) [15]'s notation. We denote the matrix of $\upsilon$-th order partial derivatives of a matrix $A(\theta)$ by $\nabla^{\upsilon} A(\theta)$. Specifically, if $A(\theta)$ is a $k \times 1$ vector function, $\nabla A(\theta)$ is the usual Jacobian whose $l$-th row contains the partial derivatives of the $l$-th element of $A(\theta)$. $\nabla^{\upsilon} A(\theta)$ (a $k \times k^{\upsilon}$ matrix) is defined recursively, such that the $j$-th element of the $l$-th row of $\nabla^{\upsilon} A(\theta)$ is the $1 \times k$ vector $a_{lj}^{\upsilon}(\theta) = \partial a_{lj}^{\upsilon-1}(\theta)/\partial \theta'$, where $a_{lj}^{\upsilon-1}$ is the $j$-th element of the $l$-th row of $\nabla^{\upsilon-1} A(\theta)$. We use $\otimes$ to denote the usual Kronecker product. Using this Kronecker product, we can express $\nabla^{\upsilon} A(\theta) = \partial^{\upsilon} A(\theta) / (\partial\theta' \otimes \cdots \otimes \partial\theta')$, with a $\upsilon$-fold Kronecker product of $\partial\theta'$. Finally, we use the matrix norm $\|A\| = \sqrt{\mathrm{tr}(A'A)}$ for a matrix $A$.
We first derive the higher order stochastic expansion of the M-estimator and consider the one-step bias correction here. In the next section, we introduce the bias-corrected moment equations estimator and derive its higher order stochastic expansion. Then, we compare these two approaches.
Before we derive the second-order expansion of the M-estimator to obtain the second-order bias analytically, we introduce simplifying notation. Let $H_1(\theta) = E[\nabla s(z_i, \theta)]$, $H_2(\theta) = E[\nabla^2 s(z_i, \theta)]$, $Q(\theta) = -\left(E[\nabla s(z_i, \theta)]\right)^{-1}$, and write $H_1 = H_1(\theta_0)$, $H_2 = H_2(\theta_0)$, $Q = Q(\theta_0)$. Let $\hat{H}_1(\theta) = \frac{1}{n}\sum_{i=1}^{n} \nabla s(z_i, \theta)$, $\hat{H}_2(\theta) = \frac{1}{n}\sum_{i=1}^{n} \nabla^2 s(z_i, \theta)$, $\hat{Q}(\theta) = -(\hat{H}_1(\theta))^{-1}$, $\hat{H}_1 = \hat{H}_1(\theta_0)$, $\hat{H}_2 = \hat{H}_2(\theta_0)$ and $\hat{Q} = \hat{Q}(\theta_0)$. Furthermore, define $J \equiv \frac{1}{\sqrt{n}}\sum_{i=1}^{n} s(z_i, \theta_0)$, $V \equiv \frac{1}{\sqrt{n}}\sum_{i=1}^{n} \left( \nabla s(z_i, \theta_0) - E[\nabla s(z_i, \theta_0)] \right)$ and $W \equiv \frac{1}{\sqrt{n}}\sum_{i=1}^{n} \left( \nabla^2 s(z_i, \theta_0) - E[\nabla^2 s(z_i, \theta_0)] \right)$.
Lemma 1.
(Rilstone et al. (1996) [15]) Suppose $\{z_i\}_{i=1}^{n}$ are i.i.d.; $\theta_0$ is in the interior of $\Theta$, and is the only $\theta \in \Theta$ satisfying (1); and the M-estimator $\hat{\theta}$ defined in (2) is consistent. Further suppose that: (i) $s(z, \theta)$ is $\kappa$-times continuously differentiable in a neighborhood of $\theta_0$, denoted by $\Theta_0 \subset \Theta$, for all $z \in \mathcal{Z} \equiv \mathrm{Support}(z_i)$, $\kappa \geq 3$, with probability one; (iia) $\nabla^{\upsilon} s(z, \theta)$ is integrable for each fixed $\theta \in \Theta_0$, $\upsilon = \{0, 1, 2, \ldots, \kappa\}$, $\kappa \geq 3$; (iib) $E[\nabla^3 s(z, \theta)]$ is continuous and bounded at $\theta_0$; (iii) $\frac{1}{n}\sum_{i=1}^{n} \nabla^{\upsilon} s(z_i, \bar{\theta}) - E[\nabla^{\upsilon} s(z_i, \theta_0)] = o_p(1)$ for $\bar{\theta} = \theta_0 + o_p(1)$ and $\upsilon = 1, 2$; (iv) $\frac{1}{\sqrt{n}}\sum_{i=1}^{n} \left( \nabla^2 s(z_i, \bar{\theta}) - H_2(\bar{\theta}) \right) - \frac{1}{\sqrt{n}}\sum_{i=1}^{n} \left( \nabla^2 s(z_i, \theta_0) - H_2(\theta_0) \right) = o_p(1)$ for $\bar{\theta} = \theta_0 + o_p(1)$; (v) $Q(\theta_0)$ exists, i.e., $E[\nabla s(z_i, \theta_0)]$ is nonsingular; (vi) $J = O_p(1)$; (vii) $V = O_p(1)$; (viii) $W = O_p(1)$. Then, we have $\sqrt{n}(\hat{\theta} - \theta_0) = QJ + O_p\!\left(\frac{1}{\sqrt{n}}\right)$, and moreover, $\sqrt{n}(\hat{\theta} - \theta_0) = QJ + \frac{1}{\sqrt{n}}\, Q\left( VQJ + \frac{1}{2} H_2 (QJ \otimes QJ) \right) + O_p(n^{-1})$.
This lemma and the following Lemma 2 are available in Rilstone et al. (1996) [15], but we reproduce them since some of their results are useful for deriving our new results. From Lemma 1, the higher order bias of $\hat{\theta}$ is obtained as:
$$\mathrm{Bias}(\hat{\theta}) \approx \frac{1}{n}\, Q\left( E[VQJ] + \frac{1}{2} H_2\, E[QJ \otimes QJ] \right).$$
Defining $d_i(\theta) = Q(\theta) s(z_i, \theta)$ and $v_i(\theta) = \nabla s(z_i, \theta) - E[\nabla s(z_i, \theta)]$ and letting $d_i = d_i(\theta_0)$ and $v_i = v_i(\theta_0)$, it is not difficult to see that $Q\left( E[VQJ] + \frac{1}{2} H_2 E[QJ \otimes QJ] \right) = Q\left( E[v_i d_i] + \frac{1}{2} H_2 E[d_i \otimes d_i] \right)$, as shown below. In this regard, we will write $B(\theta) \equiv Q(\theta)\left( E[v_i(\theta) d_i(\theta)] + \frac{1}{2} H_2(\theta) E[d_i(\theta) \otimes d_i(\theta)] \right)$.
Lemma 2.
(Rilstone et al. (1996) [15]) Suppose (1) holds and $\{z_i\}_{i=1}^{n}$ are i.i.d. Then, $E[VQJ] + \frac{1}{2} H_2 E[QJ \otimes QJ] = E[v_i d_i] + \frac{1}{2} H_2 E[d_i \otimes d_i]$, where $d_i = Q s(z_i, \theta_0)$ and $v_i = \nabla s(z_i, \theta_0) - E[\nabla s(z_i, \theta_0)]$.
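As a worked check of this bias object (our illustration, under the sign convention $Q(\theta) = -\left(E[\nabla s(z_i,\theta)]\right)^{-1}$ used in this reconstruction), consider the exponential likelihood with density $f(z;\theta) = \theta e^{-\theta z}$ and score $s(z,\theta) = 1/\theta - z$:

```latex
% Exponential example: s(z,\theta) = 1/\theta - z, E[z_i] = 1/\theta_0,
% \mathrm{Var}(z_i) = 1/\theta_0^2.
\nabla s(z,\theta) = -\theta^{-2} \ (\text{nonrandom}), \qquad
H_2 = E[\nabla^2 s(z_i,\theta_0)] = 2\theta_0^{-3}, \qquad
Q = -\big(E[\nabla s(z_i,\theta_0)]\big)^{-1} = \theta_0^{2}.
% Since \nabla s is nonrandom, v_i = 0; and d_i = Q s(z_i,\theta_0), so
d_i = \theta_0 - \theta_0^{2} z_i, \qquad
E[d_i \otimes d_i] = \theta_0^{4}\,\mathrm{Var}(z_i) = \theta_0^{2}.
% Hence the second-order bias of \hat\theta = 1/\bar{z} is
\mathrm{Bias}(\hat\theta) \approx \frac{1}{n}\, Q\Big( E[v_i d_i]
  + \tfrac{1}{2} H_2\, E[d_i \otimes d_i] \Big)
  = \frac{\theta_0^{2}}{n}\Big( 0 + \theta_0^{-3}\,\theta_0^{2} \Big)
  = \frac{\theta_0}{n},
% matching the exact result E[1/\bar{z}] - \theta_0 = \theta_0/(n-1).
```

so the one-step correction in this model is approximately $\hat{\theta}(1 - 1/n)$.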
Thus, we can eliminate the second-order bias of the M-estimator $\hat{\theta}$ by subtracting a consistent estimator of the bias.4 Now, let $\hat{\theta}_{bc}$ denote the bias-corrected estimator of this sort, defined by:
$$\hat{\theta}_{bc} = \hat{\theta} - \frac{1}{n} \hat{B}(\hat{\theta}) \qquad (3)$$
where the function $\hat{B}(\theta)$, a consistent estimator of $B(\theta)$, is constructed as:
$$\hat{B}(\theta) = \hat{Q}(\theta)\left( \frac{1}{n}\sum_{i=1}^{n} \hat{v}_i(\theta) \hat{d}_i(\theta) + \frac{1}{2} \hat{H}_2(\theta)\, \frac{1}{n}\sum_{i=1}^{n} \hat{d}_i(\theta) \otimes \hat{d}_i(\theta) \right) \qquad (4)$$
for $\hat{d}_i(\theta) = \hat{Q}(\theta) s(z_i, \theta)$ and $\hat{v}_i(\theta) = \nabla s(z_i, \theta)$. In particular, we can replace $\hat{\theta}$ in $\hat{B}(\hat{\theta})$ with any $\sqrt{n}$-consistent estimator of $\theta_0$. In this sense, $\hat{\theta}_{bc}$ is a two-step estimator.
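To make the construction concrete, the following sketch (ours; a scalar illustration, not the paper's code) evaluates $\hat{B}(\hat{\theta})$ from (4) and forms $\hat{\theta}_{bc}$ from (3) for the exponential score $s(z,\theta) = 1/\theta - z$, assuming the convention $\hat{Q}(\theta) = -(\hat{H}_1(\theta))^{-1}$:

```python
import random

# Scalar illustration under stated assumptions: exponential score
# s(z,theta) = 1/theta - z, so grad s = -1/theta^2, grad^2 s = 2/theta^3,
# and Q_hat(theta) = -(H1_hat)^{-1} = theta^2.

def bias_term(data, theta):
    """Sample analogue B_hat(theta) of the second-order bias term, as in (4)."""
    n = len(data)
    H1 = -1.0 / theta ** 2                      # (1/n) sum grad s_i (nonrandom here)
    H2 = 2.0 / theta ** 3                       # (1/n) sum grad^2 s_i
    Q = -1.0 / H1                               # Q_hat(theta) = theta^2
    d = [Q * (1.0 / theta - z) for z in data]   # d_i = Q_hat * s(z_i, theta)
    v_term = sum(H1 * di for di in d) / n       # (1/n) sum v_i d_i, v_i = grad s_i
    d2_term = 0.5 * H2 * sum(di * di for di in d) / n
    return Q * (v_term + d2_term)

random.seed(0)
theta0, n = 2.0, 200
data = [random.expovariate(theta0) for _ in range(n)]
zbar = sum(data) / n
theta_hat = 1.0 / zbar                          # M-estimator (closed form here)
theta_bc = theta_hat - bias_term(data, theta_hat) / n   # Equation (3)
# the correction shrinks theta_hat, whose second-order bias is upward
assert 0.0 < theta_hat - theta_bc < 0.1
```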
To characterize the higher order efficiency based on the higher order variance (the $O(n^{-1})$ variance) of the bias-corrected estimators, we need to expand the M-estimator to the third order. We use some additional simplifying terms: $H_3(\theta) = E[\nabla^3 s(z, \theta)]$, $\hat{H}_3(\theta) = \frac{1}{n}\sum_{i=1}^{n} \nabla^3 s(z_i, \theta)$, $H_3 = H_3(\theta_0)$ and $W_3 \equiv \frac{1}{\sqrt{n}}\sum_{i=1}^{n} \left( \nabla^3 s(z_i, \theta_0) - E[\nabla^3 s(z_i, \theta_0)] \right)$. Furthermore, we write:
$$a_{1/2} = QJ, \qquad a_1 = Q\left( V a_{1/2} + \frac{1}{2} H_2 (a_{1/2} \otimes a_{1/2}) \right),$$
$$a_{3/2} = Q V a_1 + \frac{1}{2} Q W (a_{1/2} \otimes a_{1/2}) + \frac{1}{2} Q H_2 \left( a_{1/2} \otimes a_1 + a_1 \otimes a_{1/2} \right) + \frac{1}{6} Q H_3 (a_{1/2} \otimes a_{1/2} \otimes a_{1/2})$$
for ease of notation. We obtain:
Lemma 3.
Suppose $\{z_i\}_{i=1}^{n}$ are i.i.d.; $\theta_0$ is in the interior of $\Theta$ and is the only $\theta \in \Theta$ satisfying (1); and the M-estimator $\hat{\theta}$ that solves (2) is consistent. Further suppose that: (i) $s(z, \theta)$ is $\kappa$-times continuously differentiable in a neighborhood of $\theta_0$, denoted by $\Theta_0 \subset \Theta$, for all $z \in \mathcal{Z}$, $\kappa \geq 4$, with probability one; (iia) $\nabla^{\upsilon} s(z, \theta)$ is integrable for each fixed $\theta \in \Theta_0$, $\upsilon = \{0, 1, 2, \ldots, \kappa\}$, $\kappa \geq 4$; (iib) $E[\nabla^4 s(z, \theta)]$ is continuous and bounded at $\theta_0$; (iii) $\frac{1}{\sqrt{n}}\sum_{i=1}^{n} \left( \nabla^3 s(z_i, \bar{\theta}) - H_3(\bar{\theta}) \right) - \frac{1}{\sqrt{n}}\sum_{i=1}^{n} \left( \nabla^3 s(z_i, \theta_0) - H_3(\theta_0) \right) = o_p(1)$ for $\bar{\theta} = \theta_0 + o_p(1)$; (iv) $Q$ is nonsingular; (v) $J = O_p(1)$; (vi) $V = O_p(1)$; (vii) $W = O_p(1)$; (viii) $W_3 = O_p(1)$; (ix) $\sqrt{n}(\hat{\theta} - \theta_0) = a_{1/2} + \frac{1}{\sqrt{n}} a_1 + O_p\!\left(\frac{1}{n}\right)$. Then, we have $\sqrt{n}(\hat{\theta} - \theta_0) = a_{1/2} + \frac{1}{\sqrt{n}} a_1 + \frac{1}{n} a_{3/2} + O_p(n^{-3/2})$.
Note that the conditions in Lemma 3 are all standard regularity conditions.
In the following section, we propose an alternative one-step estimator that eliminates the second-order bias by adjusting the moment equations, inspired by Firth (1993) [13].

3. Bias-Corrected Moment Equation

Here, we consider an alternative higher order bias reduced estimator that solves bias-corrected moment equations. This idea was proposed in Firth (1993) [13] for the ML with a fixed number of parameters and exploited in Hahn and Newey (2004) [6] and Fernandez-Val (2004) [14] for nonlinear panel data models with individual specific effects. We refer to this estimator as Firth's estimator.
To be precise, consider:
$$0 = \frac{1}{n}\sum_{i=1}^{n} s(z_i, \theta) - \frac{1}{n} c(\theta)$$
for a known function c ( θ ) that is given by:
$$c(\theta) = Q(\theta)^{-1} B(\theta) = \frac{1}{2} H_2(\theta)\, E\left[ Q(\theta) s(z_i, \theta) \otimes Q(\theta) s(z_i, \theta) \right] + E\left[ \nabla s(z_i, \theta)\, Q(\theta) s(z_i, \theta) \right]. \qquad (5)$$
This correction term $c(\theta)$ is obtained following Firth (1993) [13], using the bias term of the M-estimator. In the ML context, Firth (1993) [13] shows that by adjusting the score function (he refers to this as a modified score function) with a correction term defined as the product of the Fisher information matrix and the bias term, one can obtain a bias-corrected ML estimator. $c(\theta)$ has the same interpretation in the ML, since $-Q(\theta)^{-1}$ is the expected Hessian matrix, and hence, $Q(\theta)^{-1}$ is the Fisher information in the ML. Therefore, (5) is a generalization of Firth (1993) [13]'s approach to the M-estimation. In general, $c(\theta)$ contains population terms, and hence, to implement this alternative estimator, we need to estimate the function $c(\theta)$. We use a sample analogue of (5):
$$\hat{c}(\theta) = \hat{Q}(\theta)^{-1} \hat{B}(\theta) = \frac{1}{2} \hat{H}_2(\theta)\, \frac{1}{n}\sum_{i=1}^{n} \left[ \hat{Q}(\theta) s(z_i, \theta) \otimes \hat{Q}(\theta) s(z_i, \theta) \right] + \frac{1}{n}\sum_{i=1}^{n} \nabla s(z_i, \theta)\, \hat{Q}(\theta) s(z_i, \theta). \qquad (6)$$
Now, we estimate θ 0 by solving:
$$0 = \frac{1}{n}\sum_{i=1}^{n} s(z_i, \theta) - \frac{1}{n} \hat{c}(\theta), \qquad (7)$$
and claim that the solution of this modified moment condition eliminates the second-order bias of $\hat{\theta}$, which solves the original moment condition (2).
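As a numerical illustration (ours, not the paper's code), the following sketch solves the modified moment Equation (7) by bisection for the exponential score $s(z,\theta) = 1/\theta - z$ and checks that the solution $\theta^*$ is numerically close to the one-step corrected estimator; the convention $\hat{Q}(\theta) = -(\hat{H}_1(\theta))^{-1} = \theta^2$ is assumed:

```python
import random

# Illustration under stated assumptions: exponential score s(z,theta) = 1/theta - z,
# grad s = -1/theta^2, grad^2 s = 2/theta^3, Q_hat(theta) = theta^2.

random.seed(1)
theta0, n = 2.0, 200
data = [random.expovariate(theta0) for _ in range(n)]
zbar = sum(data) / n
S2 = sum((z - zbar) ** 2 for z in data) / n

def c_hat(theta):
    """Sample analogue (6): 0.5*H2*mean[(Q s_i)^2] + mean[grad s_i * Q s_i]."""
    Q = theta ** 2
    H2 = 2.0 / theta ** 3
    m2 = sum((1.0 / theta - z) ** 2 for z in data) / n
    return 0.5 * H2 * Q ** 2 * m2 + (-1.0 / theta ** 2) * Q * (1.0 / theta - zbar)

def modified_moment(theta):
    """Left-hand side of (7): (1/n) sum s(z_i, theta) - c_hat(theta)/n."""
    return (1.0 / theta - zbar) - c_hat(theta) / n

# bisection: the modified moment is positive below the root, negative above
lo, hi = 0.5, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if modified_moment(mid) > 0.0:
        lo = mid
    else:
        hi = mid
theta_star = 0.5 * (lo + hi)

theta_hat = 1.0 / zbar
theta_bc = theta_hat - theta_hat ** 3 * S2 / n   # one-step correction at theta_hat
assert abs(modified_moment(theta_star)) < 1e-6   # (7) is solved
assert abs(theta_star - theta_bc) < 1e-3         # near-identical estimators
```

The two corrected estimators differ only at higher order here, which previews the equivalence result established below.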
Assumption 1.
(i) $\{z_i\}_{i=1}^{n}$ are i.i.d.; (ii) $s(z, \theta)$ is $\kappa$-times continuously differentiable in a neighborhood of $\theta_0$, denoted by $\Theta_0$, for all $z \in \mathcal{Z}$, $\kappa \geq 4$; (iii) $E\left[\sup_{\theta \in \Theta_0} \|\nabla^{\upsilon} s(z, \theta)\|^2\right] < \infty$, $\upsilon = \{0, 1, 2, \ldots, \kappa\}$, $\kappa \geq 4$; (iv) $\Theta$ is compact; (v) $\theta_0$ is in the interior of $\Theta$ and is the only $\theta \in \Theta$ satisfying (1); (vi) $E\left[\|\nabla^{\bar{\upsilon}} s(z, \theta_0)\|^4\right] < \infty$ for $\bar{\upsilon} = \{0, 1, 2, \ldots, \bar{\kappa}\}$, $\bar{\kappa} \geq 3$.
Assumption 2.
For $\theta \in \Theta_0$, $E\left[\partial s(z_i, \theta)/\partial \theta'\right]$ is nonsingular.
Alternatively, we can assume the following instead of Assumption 1.
Assumption 3.
(i) $\{z_i\}_{i=1}^{n}$ are i.i.d.; (ii) $\nabla^{\upsilon} s(z, \theta)$ satisfies the Lipschitz condition in $\theta$:
$$\|\nabla^{\upsilon} s(z, \theta_1) - \nabla^{\upsilon} s(z, \theta_2)\| \leq B_{\upsilon}(z) \|\theta_1 - \theta_2\| \quad \forall\, \theta_1, \theta_2 \in \Theta_0$$
for some function $B_{\upsilon}(\cdot): \mathcal{Z} \to \mathbb{R}$ with $E\left[|B_{\upsilon}(\cdot)|^{2t+\delta}\right] < \infty$, $\upsilon = \{0, 1, 2, \ldots, \kappa\}$, with positive integer $t \geq 2$, for some $\delta > 0$ and $\kappa \geq 4$, in a neighborhood of $\theta_0$; (iii) $E\left[\sup_{\theta \in \Theta_0} \|\nabla^{\upsilon} s(z, \theta)\|^{2t+\delta}\right] < \infty$, $\upsilon = \{0, 1, 2, \ldots, \kappa\}$, $\kappa \geq 4$, with positive integer $t \geq 2$ and for some $\delta > 0$; (iv) $\Theta$ is bounded; (v) $\theta_0$ is in the interior of $\Theta$ and is the only $\theta \in \Theta$ satisfying (1).
Under Assumptions 1 and 2 or Assumptions 3 and 2, the following three conditions are satisfied (see Lemma A.9 in Appendix A).
Condition 1.
(i) $\hat{c}(\theta_0) = O_p(1)$; (ii) $\hat{c}(\theta_0) = c(\theta_0) + O_p\!\left(\frac{1}{\sqrt{n}}\right)$.
Condition 2.
$\nabla \hat{c}(\theta) = O_p(1)$ in the $n^{-1/2}$ neighborhood of $\theta_0$.
Condition 3.
$\nabla^2 \hat{c}(\theta) = O_p(1)$ in the $n^{-1/2}$ neighborhood of $\theta_0$.
Note that these three conditions are required to control the estimation error in $\hat{c}(\theta)$ in the stochastic expansions. Now, we are ready to present one of our main results.
Proposition 1.
Suppose $\theta^*$ solves (7), where $\hat{c}(\theta)$ is given by (6), and that $\theta^*$ is a consistent estimator of $\theta_0$. Further suppose that Conditions 1–3 and Conditions (i)–(viii) in Lemma 1 are satisfied. Then, we have:
$$\sqrt{n}(\theta^* - \theta_0) = QJ + \frac{1}{\sqrt{n}}\, Q\left( VQJ + \frac{1}{2} H_2 (QJ \otimes QJ) - c(\theta_0) \right) + O_p\!\left(\frac{1}{n}\right),$$
where $c(\theta_0) = \frac{1}{2} H_2 E\left[ Q s(z_i, \theta_0) \otimes Q s(z_i, \theta_0) \right] + E\left[ \nabla s(z_i, \theta_0)\, Q s(z_i, \theta_0) \right]$, and hence, the second-order bias of $\theta^*$ is $\mathrm{Bias}(\theta^*) \approx \frac{1}{n} E\left[ Q\left( VQJ + \frac{1}{2} H_2 (QJ \otimes QJ) - c(\theta_0) \right) \right] = 0$.
We conclude that the second-order bias can be eliminated by adjusting the moment equations as in (7), which is therefore a proper alternative to the analytic bias correction (3).

4. Higher Order Efficiency

Asymptotic bias corrections can provide estimators that have better bias properties in finite samples. There are several ways of achieving bias correction, including the analytical corrections that we focus on in this paper, as well as the jackknife and bootstrap methods. This abundance of bias correction methods raises the question of which method is preferable to the others, at least on asymptotic efficiency grounds. For ML estimation, Hahn et al. (2004) [12] show that the method of bias correction does not affect the higher order efficiency of any bias-corrected estimator that is first-order efficient. Although the ML estimator is a special case of the M-estimators we consider, it is not trivial to conjecture that the same equivalence result will hold for a general class of M-estimators, because the equivalence in the ML case may rely on specific properties of the ML estimator. In this section, we formally extend the equivalence result to a general M-estimator.
We compare the higher order efficiency of several first-order efficient bias-corrected estimators by comparing their higher order variances, which are defined as the $O\!\left(\frac{1}{n}\right)$ variance in a third-order stochastic expansion of an estimator.

4.1. Third-Order Expansion of the One-Step Bias-Corrected Estimator

To compare with the estimator of interest $\theta^*$ in (7), first we consider the one-step bias-corrected estimator $\hat{\theta}_{bc}$ defined in (3) as $\hat{\theta}_{bc} = \hat{\theta} - \frac{1}{n}\hat{B}(\hat{\theta})$ and observe that $\hat{B}(\hat{\theta}) = \hat{Q}(\hat{\theta})\hat{c}(\hat{\theta})$ from (4) and (6). We also consider its infeasible version $\hat{\theta}_b = \hat{\theta} - \frac{1}{n} B(\hat{\theta})$, where the function $B(\hat{\theta})$ is constructed as $B(\hat{\theta}) = Q(\hat{\theta}) c(\hat{\theta})$, provided that both $\hat{B}(\hat{\theta})$ and $B(\hat{\theta})$ are consistent estimators of the higher order bias term $B(\theta_0) = Q(\theta_0) c(\theta_0)$. Note that for some $\tilde{\theta}$ between $\hat{\theta}$ and $\theta_0$, a first-order Taylor expansion gives us:
$$c(\hat{\theta}) - c(\theta_0) = \nabla c(\tilde{\theta})(\hat{\theta} - \theta_0) = O_p(1)\, O_p\!\left(1/\sqrt{n}\right) = o_p(1)$$
under Condition 2 and because $\hat{\theta} - \theta_0 = O_p\!\left(\frac{1}{\sqrt{n}}\right)$. Furthermore, we have:
$$\|\hat{c}(\hat{\theta}) - c(\theta_0)\| \leq \|\hat{c}(\hat{\theta}) - c(\hat{\theta})\| + \|c(\hat{\theta}) - c(\theta_0)\| \leq \sup_{\theta \in \Theta_0} \|\hat{c}(\theta) - c(\theta)\| + \|c(\hat{\theta}) - c(\theta_0)\| = o_p(1) + o_p(1) = o_p(1)$$
by the triangle inequality, Lemma A.7 (in Appendix A) and the continuity of $c(\theta)$ at $\theta_0$ (applying the Slutsky theorem); hence, both $B(\hat{\theta})$ and $\hat{B}(\hat{\theta})$ are indeed consistent estimators of the higher order bias, noting that $Q(\hat{\theta}) = Q(\theta_0) + o_p(1)$ by the continuity of $Q(\theta)$ at $\theta_0$ and $\hat{Q}(\hat{\theta}) = Q(\theta_0) + o_p(1)$.
Now, from the result of Lemma 3 and a second-order Taylor expansion of $B(\hat{\theta})$, it follows that:
$$\sqrt{n}(\hat{\theta}_b - \theta_0) = \sqrt{n}(\hat{\theta} - \theta_0) - \frac{1}{\sqrt{n}} B(\hat{\theta}) = a_{1/2} + \frac{1}{\sqrt{n}} a_1 + \frac{1}{n} a_{3/2} + O_p(n^{-3/2}) - \frac{1}{\sqrt{n}} B(\theta_0) - \frac{1}{\sqrt{n}} \nabla B(\theta_0)(\hat{\theta} - \theta_0) - \frac{1}{2\sqrt{n}} \nabla^2 B(\tilde{\theta}) \left( (\hat{\theta} - \theta_0) \otimes (\hat{\theta} - \theta_0) \right)$$
where $\tilde{\theta}$ is a point between $\hat{\theta}$ and $\theta_0$, and hence:
$$\sqrt{n}(\hat{\theta}_b - \theta_0) = a_{1/2} + \frac{1}{\sqrt{n}} \left( a_1 - B(\theta_0) \right) + \frac{1}{n} \left( a_{3/2} - \nabla B(\theta_0) a_{1/2} \right) + O_p(n^{-3/2}), \qquad (8)$$
since $\sqrt{n}(\hat{\theta} - \theta_0) = a_{1/2} + O_p\!\left(\frac{1}{\sqrt{n}}\right)$ and $\nabla^2 B(\tilde{\theta}) = \nabla^2 B(\theta_0) + o_p(1) = O_p(1)$ by the Slutsky theorem, from which we conclude that $\frac{1}{2\sqrt{n}} \nabla^2 B(\tilde{\theta}) \left( (\hat{\theta} - \theta_0) \otimes (\hat{\theta} - \theta_0) \right) = O_p(n^{-3/2})$.
Now, similarly for $\hat{\theta}_{bc}$, we obtain:
$$\sqrt{n}(\hat{\theta}_{bc} - \theta_0) = \sqrt{n}(\hat{\theta} - \theta_0) - \frac{1}{\sqrt{n}} \hat{B}(\hat{\theta}) = a_{1/2} + \frac{1}{\sqrt{n}} a_1 + \frac{1}{n} a_{3/2} + O_p(n^{-3/2}) - \frac{1}{\sqrt{n}} \hat{B}(\theta_0) - \frac{1}{\sqrt{n}} \nabla \hat{B}(\theta_0)(\hat{\theta} - \theta_0) - \frac{1}{2\sqrt{n}} \nabla^2 \hat{B}(\tilde{\theta}) \left( (\hat{\theta} - \theta_0) \otimes (\hat{\theta} - \theta_0) \right). \qquad (9)$$
Then, applying the following three results (that hold under Assumptions 1 and 2 or 3 and 2, as shown in Lemma A.12 in Appendix A):
Condition 4.
$\hat{B}(\theta_0) = B(\theta_0) + O_p\!\left(1/\sqrt{n}\right)$,
Condition 5.
$\nabla \hat{B}(\theta_0) = \nabla B(\theta_0) + O_p\!\left(1/\sqrt{n}\right)$,
Condition 6.
$\nabla^2 \hat{B}(\theta) = O_p(1)$ in the neighborhood of $\theta_0$,
we obtain:
$$\sqrt{n}(\hat{\theta}_{bc} - \theta_0) = a_{1/2} + \frac{1}{\sqrt{n}} \left( a_1 - B(\theta_0) \right) + \frac{1}{n} \left( a_{3/2} - \nabla B(\theta_0) a_{1/2} - \sqrt{n}(\hat{B}(\theta_0) - B(\theta_0)) \right) + O_p(n^{-3/2}) \qquad (10)$$
noting that $\frac{1}{\sqrt{n}} \nabla \hat{B}(\theta_0)(\hat{\theta} - \theta_0) = \frac{1}{n} \nabla \hat{B}(\theta_0) a_{1/2} + O_p(n^{-3/2}) = \frac{1}{n} \nabla B(\theta_0) a_{1/2} + O_p(n^{-3/2})$ by Condition 5, and that $\frac{1}{2\sqrt{n}} \nabla^2 \hat{B}(\tilde{\theta}) \left( (\hat{\theta} - \theta_0) \otimes (\hat{\theta} - \theta_0) \right) = O_p(n^{-3/2})$ by Condition 6 and the fact that $\hat{\theta} - \theta_0 = O_p\!\left(1/\sqrt{n}\right)$.

4.2. Third-Order Expansion of the Bias-Corrected Moment Equations Estimator

Now, we derive the higher order expansion of the proposed bias-corrected estimator $\theta^*$ up to the third order. For this, we need to verify an additional condition below, which is satisfied under Assumptions 1 and 2 or 3 and 2 with $\kappa \geq 5$, as shown in Lemmas A.10 and A.11 in Appendix A.
Condition 7.
(i) $\nabla \hat{c}(\theta_0) = \nabla c(\theta_0) + O_p\!\left(\frac{1}{\sqrt{n}}\right)$; (ii) $\nabla^3 \hat{c}(\theta) = O_p(1)$ in the $n^{-1/2}$ neighborhood of $\theta_0$.
Recall that $c(\theta) = Q(\theta)^{-1} B(\theta)$ and $\hat{c}(\theta) = \hat{Q}(\theta)^{-1} \hat{B}(\theta)$, and we obtain:
Proposition 2.
Suppose $\theta^*$ solves (7), where $\hat{c}(\theta)$ is given in (6), and that $\theta^*$ is consistent. Further, suppose that Conditions 1–7 and Conditions (i)–(viii) in Lemma 3 are satisfied, and assume $\sqrt{n}(\theta^* - \theta_0) = a_{1/2} + \frac{1}{\sqrt{n}} \left( a_1 - B(\theta_0) \right) + O_p\!\left(\frac{1}{n}\right)$. Then, we have:
$$\sqrt{n}(\theta^* - \theta_0) = a_{1/2} + \frac{1}{\sqrt{n}} \left( a_1 - B(\theta_0) \right) + \frac{1}{n} \left( a_{3/2} - \nabla B(\theta_0) a_{1/2} - \sqrt{n}(\hat{B}(\theta_0) - B(\theta_0)) \right) + O_p(n^{-3/2}). \qquad (11)$$
Comparing (10) and (11), we therefore conclude that $\sqrt{n}(\theta^* - \theta_0)$ and $\sqrt{n}(\hat{\theta}_{bc} - \theta_0)$ are identical up to $O_p\!\left(\frac{1}{n}\right)$ order terms. This implies that $\theta^*$ and $\hat{\theta}_{bc}$ at least agree on their higher order variances, as we discuss in the following section.

4.3. Higher Order Variances

For a three-term stochastic expansion of an estimator $\check{\theta}$, such as:
$$\sqrt{n}(\check{\theta} - \theta_0) = T_{1/2} + \frac{1}{\sqrt{n}} T_1 + \frac{1}{n} T_{3/2} + O_p(n^{-3/2}),$$
the higher order variance is given by:
$$\Lambda_{\check{\theta}} \equiv \Sigma + \frac{1}{n} \Xi,$$
with $\Sigma = \mathrm{Var}[T_{1/2}]$ and $\Xi = \mathrm{Var}[T_1] + E\left[ \left( \sqrt{n}\, T_1 + T_{3/2} \right) T_{1/2}' \right] + E\left[ T_{1/2} \left( \sqrt{n}\, T_1 + T_{3/2} \right)' \right]$. Then, from the third-order stochastic expansions of the bias-corrected estimators derived in (8), (10) and (11), we can obtain the higher order variances of the three alternative estimators, denoted by $\Lambda_{\hat{\theta}_b}$, $\Lambda_{\hat{\theta}_{bc}}$ and $\Lambda_{\theta^*}$, respectively, as:5
$$\Lambda_{\hat{\theta}_b} = E\left[ a_{1/2} a_{1/2}' \right] + \frac{1}{n} E\left[ \left( a_1 - B(\theta_0) \right) \left( a_1 - B(\theta_0) \right)' \right] + \frac{1}{n} E\left[ a_{1/2} \left( a_{3/2} - \nabla B(\theta_0) a_{1/2} \right)' \right] + \frac{1}{n} E\left[ \left( a_{3/2} - \nabla B(\theta_0) a_{1/2} \right) a_{1/2}' \right] + \frac{1}{n} E\left[ \sqrt{n}\, a_{1/2} \left( a_1 - B(\theta_0) \right)' \right] + \frac{1}{n} E\left[ \sqrt{n} \left( a_1 - B(\theta_0) \right) a_{1/2}' \right]$$
$$\Lambda_{\hat{\theta}_{bc}} = \Lambda_{\hat{\theta}_b} - \frac{1}{n} E\left[ a_{1/2}\, \sqrt{n} \left( \hat{B}(\theta_0) - B(\theta_0) \right)' \right] - \frac{1}{n} E\left[ \sqrt{n} \left( \hat{B}(\theta_0) - B(\theta_0) \right) a_{1/2}' \right] \qquad (12)$$
$$\Lambda_{\theta^*} = \Lambda_{\hat{\theta}_{bc}}. \qquad (13)$$
First, note that the result of (12) reveals that the higher order variance of $\hat{\theta}_{bc}$ has additional terms compared with that of $\hat{\theta}_b$ (the infeasible estimator), because we use the sample analogue of the second-order bias, unless $E\left[ a_{1/2}\, \sqrt{n} \left( \hat{B}(\theta_0) - B(\theta_0) \right)' \right] = 0$. These additional terms reflect the cost of estimating the bias term $\hat{B}(\cdot)$ in the analytic bias correction approach. Now, the result of (13) tells us that the higher order variances of the two alternative bias-corrected estimators are the same, so on the grounds of this comparison of higher order variances, we find that the bias correction method that adjusts the moment equations does not improve over the analytic bias correction.
Indeed, more remarkably, comparing the third-order expansions of (10) and (11), we further find:
$$n^{3/2}(\hat{\theta}_{bc} - \theta^*) = o_p(1). \qquad (14)$$
This is a stronger result than just comparing the higher order variances, because it implies that these two estimators not only have the same higher order variance, but also agree on further properties of their stochastic expansions. In the literature, it has been argued that removing the bias directly from the moment equations has the attractive feature that it does not use pre-estimated parameters that are not bias corrected, though this alternative approach requires more intensive computation, since it requires solving a nonlinear equation. However, in view of the result (14), this paper concludes that at least for the third-order stochastic expansion comparison, there is no benefit of using such a bias correction of the moment equations over the simple bias-corrected estimator.

4.4. Further Comparison of Alternative Bias Corrections

To have a better understanding of the equivalence result (14), here we compare several versions of bias-corrected estimators that are infeasible by their nature. First, let $\theta_1^*$ be the solution of $0 = \frac{1}{n}\sum_{i=1}^{n} s(z_i, \theta) - \frac{1}{n} c(\theta)$, where $c(\theta)$ is known. We also define two other bias-corrected estimators:
$$\hat{\theta}_2 = \hat{\theta} - \frac{1}{n} \hat{Q}(\hat{\theta}) c(\hat{\theta}) \quad \text{and} \quad \hat{\theta}_3 = \hat{\theta} - \frac{1}{n} Q(\hat{\theta}) \hat{c}(\hat{\theta}),$$
so for $\hat{\theta}_2$, $c(\theta)$ is known, but $Q(\theta)$ is estimated, while for $\hat{\theta}_3$, $c(\theta)$ is estimated, but $Q(\theta)$ is known. For these estimators, we obtain the following results.6
$$\sqrt{n}(\theta_1^* - \theta_0) = \sqrt{n}(\hat{\theta}_b - \theta_0) - \frac{1}{n} Q V B(\theta_0) + O_p(n^{-3/2}),$$
$$\sqrt{n}(\theta_1^* - \theta_0) = \sqrt{n}(\hat{\theta}_2 - \theta_0) + O_p(n^{-3/2}),$$
$$\sqrt{n}(\theta^* - \theta_0) = \sqrt{n}(\hat{\theta}_3 - \theta_0) + \frac{1}{n} Q V B(\theta_0) + O_p(n^{-3/2}),$$
$$\sqrt{n}(\theta^* - \theta_0) = \sqrt{n}(\hat{\theta}_{bc} - \theta_0) + O_p(n^{-3/2}).$$
The results illustrate that using $\hat{Q}(\cdot)$ rather than $Q(\cdot)$ in the bias correction term plays a critical role in equating the stochastic expansions (up to the third order) of the one-step bias-corrected estimator and the bias-corrected moment equations estimator. To see the point, compare $\theta_1^*$ with $\hat{\theta}_2$ and compare $\theta^*$ with $\hat{\theta}_{bc}$: in the former pair, $c(\cdot)$ is known, while in the latter, $c(\cdot)$ is also estimated.
Next, we consider the possible iteration of the bias correction. Hahn and Newey (2004) [6] discuss the relationship between the bias correction of moment equations and the iterated bias correction. The iteration idea is that one can update $\hat{B}(\cdot)$ several times using the previous estimate of $\theta$. To be precise, denoting $\hat{B}(\theta)$ as a function of $\theta$, we can write the one-step bias-corrected estimator as $\hat{\theta}_{bc}^1 = \hat{\theta} - \hat{B}(\hat{\theta})/n$. The $k$-th iteration gives us $\hat{\theta}_{bc}^k = \hat{\theta} - \hat{B}(\hat{\theta}_{bc}^{k-1})/n$ for $k \geq 2$, where $\hat{\theta}_{bc}^1 = \hat{\theta}_{bc}$. If we iterate this procedure until convergence, we will obtain $\hat{\theta}_{bc}^{\infty} = \hat{\theta} - \hat{B}(\hat{\theta}_{bc}^{\infty})/n$, which implies that $\hat{\theta}_{bc}^{\infty}$ solves (note $\hat{B}(\theta) = \hat{Q}(\theta) \hat{c}(\theta)$):
$$0 = \hat{Q}(\theta)^{-1}(\hat{\theta} - \theta) - \frac{1}{n} \hat{c}(\theta) = \frac{1}{n}\sum_{i=1}^{n} s(z_i, \hat{\theta}) + \hat{Q}(\theta)^{-1}(\hat{\theta} - \theta) - \frac{1}{n} \hat{c}(\theta), \qquad (15)$$
where the second equality is from the definition of $\hat{\theta}$ in (2). Observing that $\hat{Q}(\theta)^{-1} = -\frac{1}{n}\sum_{i=1}^{n} \nabla s(z_i, \theta)$, if $s(z_i, \theta)$ is linear in $\theta$, then Equation (15) is the same as Equation (7) for the bias-corrected moment equations; hence, $\hat{\theta}_{bc}^{\infty}$ is exactly the same as $\theta^*$. Otherwise, (15) is an approximation of (7). From this, we conclude that the fully-iterated bias-corrected estimator $\hat{\theta}_{bc}^{\infty}$ can be interpreted as the solution to an approximation of the bias-corrected moment Equation (7). Similarly to (9), for $\tilde{\tilde{\theta}}$ between $\hat{\theta}_{bc}^{\infty}$ and $\theta_0$, we can show that:
$$\sqrt{n}(\hat{\theta}_{bc}^{\infty} - \theta_0) = \sqrt{n}(\hat{\theta} - \theta_0) - \frac{1}{\sqrt{n}} \hat{B}(\hat{\theta}_{bc}^{\infty}) = a_{1/2} + \frac{1}{\sqrt{n}} a_1 + \frac{1}{n} a_{3/2} - \frac{1}{\sqrt{n}} \hat{B}(\theta_0) - \frac{1}{\sqrt{n}} \nabla \hat{B}(\theta_0)(\hat{\theta}_{bc}^{\infty} - \theta_0) - \frac{1}{2\sqrt{n}} \nabla^2 \hat{B}(\tilde{\tilde{\theta}}) \left( (\hat{\theta}_{bc}^{\infty} - \theta_0) \otimes (\hat{\theta}_{bc}^{\infty} - \theta_0) \right) + O_p(n^{-3/2}) = a_{1/2} + \frac{1}{\sqrt{n}} \left( a_1 - B(\theta_0) \right) + \frac{1}{n} \left( a_{3/2} - \nabla B(\theta_0) a_{1/2} - \sqrt{n}(\hat{B}(\theta_0) - B(\theta_0)) \right) + O_p(n^{-3/2})$$
using Conditions 4, 5 and 6 and $\sqrt{n}(\hat{\theta}_{bc}^{\infty} - \theta_0) = QJ + O_p\!\left(1/\sqrt{n}\right)$. This result confirms that $\sqrt{n}(\hat{\theta}_{bc}^{\infty} - \hat{\theta}_{bc}) = O_p(n^{-3/2})$, which actually holds for all $\hat{\theta}_{bc}^k$ ($k \geq 2$).
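A small numerical check (ours; the same hedged exponential-score illustration as above, with $\hat{Q}(\theta) = \theta^2$ assumed) iterates the correction $\hat{\theta}_{bc}^k = \hat{\theta} - \hat{B}(\hat{\theta}_{bc}^{k-1})/n$ to convergence and confirms that the fully-iterated estimator stays extremely close to the one-step one:

```python
import random

# Illustration under stated assumptions: exponential score s(z,theta) = 1/theta - z,
# grad s = -1/theta^2, grad^2 s = 2/theta^3, Q_hat(theta) = theta^2.

random.seed(2)
theta0, n = 2.0, 200
data = [random.expovariate(theta0) for _ in range(n)]
zbar = sum(data) / n
theta_hat = 1.0 / zbar                          # uncorrected M-estimator

def B_hat(theta):
    """Sample analogue of the bias term B(theta), as in (4)."""
    Q = theta ** 2
    H2 = 2.0 / theta ** 3
    d = [Q * (1.0 / theta - z) for z in data]   # d_i = Q_hat s_i
    v_term = sum((-1.0 / theta ** 2) * di for di in d) / n
    d2_term = 0.5 * H2 * sum(di * di for di in d) / n
    return Q * (v_term + d2_term)

one_step = theta_hat - B_hat(theta_hat) / n     # theta_bc^1
theta_k = one_step
converged = False
for k in range(2, 100):                          # theta_bc^k, k = 2, 3, ...
    theta_next = theta_hat - B_hat(theta_k) / n
    converged = abs(theta_next - theta_k) < 1e-13
    theta_k = theta_next
    if converged:
        break

# the fully-iterated estimator is a fixed point of the update ...
assert abs(theta_k - (theta_hat - B_hat(theta_k) / n)) < 1e-12
# ... and differs from the one-step correction only at higher order
assert abs(theta_k - one_step) < 5e-3
```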
From this equivalence of the higher order expansions of $\hat{\theta}_{bc}$ and $\hat{\theta}_{bc}^{\infty}$ at least up to the third-order term, one would expect that the higher order expansion of $\theta^*$ is equivalent to that of $\hat{\theta}_{bc}$ at least up to the third order, and we have verified that this intuition is correct. However, as observed in some Monte Carlo examples of Hahn and Newey (2004) [6] and Fernandez-Val (2004) [14], the iterative bias correction can lower the bias in small samples, and so can the bias correction of the moment equations. This suggests that the comparison between the one-step bias correction and the method of correcting the moment equations (or the fully-iterated bias correction) should be based on stochastic expansions higher than the third order. The comparison of the two alternative bias-corrected estimators over fourth or even higher order stochastic expansions can be challenging and is beyond the scope of this paper.

5. Conclusions

This paper considers an alternative bias correction for the M-estimator, which is achieved by correcting the moment equations in the spirit of Firth (1993) [13]. In particular, this paper compares the stochastic expansions of the analytically-bias-corrected estimator and the alternative estimator and finds that the third-order stochastic expansions of these two estimators are identical. This implies that these two estimators not only have the same higher order variances, but also agree on further properties of their stochastic expansions.
We conclude that at least in terms of the third-order stochastic expansion, we cannot improve on the simple one-step bias correction by using the bias correction of the moment equations. The intuition is that the fully-iterated bias-corrected estimator can be interpreted as the solution of an approximation to the bias-corrected moment equations, and the iteration will not improve the asymptotic properties in general; neither will the alternative estimator. We have verified this intuition in this paper.

Supplementary Materials

The following are available online at https://www.mdpi.com/2225-1146/4/4/48/s1, Technical Lemmas and Proofs.

Acknowledgments

I am truly grateful to Jinyong Hahn for his advice and encouragement. I also gratefully acknowledge three anonymous referees, whose comments and suggestions greatly improved the paper.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A. Technical Lemmas and Proofs

Some preliminary lemmas and their proofs, which are useful for deriving the main results presented in the paper, are available in the Supplementary Material.

Appendix B. Proofs of the Main Lemmas and Propositions

This section collects proofs for the main results in the paper.

Appendix B.1. Proposition 1

Proof. 
By the first-order Taylor series approximation of (7), we have:
0 = 1 n i = 1 n s z i , θ 0 + 1 n i = 1 n s z i , θ ˜ ( θ * θ 0 ) 1 n c ^ ( θ 0 ) 1 n c ^ θ ˜ ( θ * θ 0 )
for θ ˜ between θ * and θ 0 and, hence:
$$
\begin{aligned}
\sqrt{n}\,(\theta^{*}-\theta_0)
&= -\left[\frac{1}{n}\sum_{i=1}^{n}\nabla s(z_i,\tilde{\theta}) - \frac{1}{n}\nabla\hat{c}(\tilde{\theta})\right]^{-1}\left[\frac{1}{\sqrt{n}}\sum_{i=1}^{n} s(z_i,\theta_0) - \frac{1}{\sqrt{n}}\hat{c}(\theta_0)\right] \\
&= -\left[E\left[\nabla s(z_i,\theta_0)\right] + o_p(1) + O_p\!\left(\frac{1}{n}\right)\right]^{-1}\left[\frac{1}{\sqrt{n}}\sum_{i=1}^{n} s(z_i,\theta_0) + O_p\!\left(\frac{1}{\sqrt{n}}\right)\right] \\
&= QJ + o_p(1), \qquad \text{(B1)}
\end{aligned}
$$
by Conditions 1(i), 2 and $\tilde{\theta} = \theta_0 + o_p(1)$, provided that $\frac{1}{n}\sum_{i=1}^{n}\nabla s(z_i,\bar{\theta}) - E\left[\nabla s(z_i,\theta_0)\right] = o_p(1)$ for $\bar{\theta} = \theta_0 + o_p(1)$. This confirms that the estimator has the same first-order asymptotic distribution as $\sqrt{n}(\hat{\theta}-\theta_0)$. Recalling $\hat{H}_1(\theta) \equiv \frac{1}{n}\sum_{i=1}^{n}\nabla s(z_i,\theta)$ and $H_1(\theta_0)\,(=-Q^{-1}) \equiv E\left[\nabla s(z_i,\theta_0)\right]$, we can rewrite (B1) as:
$$
\begin{aligned}
\sqrt{n}\,(\theta^{*}-\theta_0)
&= -\left[H_1(\theta_0) - \frac{1}{n}\nabla\hat{c}(\theta_0)\right]^{-1}\left[\frac{1}{\sqrt{n}}\sum_{i=1}^{n} s(z_i,\theta_0) - \frac{1}{\sqrt{n}}\hat{c}(\theta_0)\right] \\
&\quad - \left(\left[\hat{H}_1(\tilde{\theta}) - \frac{1}{n}\nabla\hat{c}(\tilde{\theta})\right]^{-1} - \left[H_1(\theta_0) - \frac{1}{n}\nabla\hat{c}(\theta_0)\right]^{-1}\right)\left[\frac{1}{\sqrt{n}}\sum_{i=1}^{n} s(z_i,\theta_0) - \frac{1}{\sqrt{n}}\hat{c}(\theta_0)\right] \\
&= -\left[H_1(\theta_0) + O_p\!\left(\frac{1}{n}\right)\right]^{-1}\left[\frac{1}{\sqrt{n}}\sum_{i=1}^{n} s(z_i,\theta_0) + O_p\!\left(\frac{1}{\sqrt{n}}\right)\right] \\
&\quad - \left(\left[\hat{H}_1(\tilde{\theta}) + O_p\!\left(\frac{1}{n}\right)\right]^{-1} - \left[H_1(\theta_0) + O_p\!\left(\frac{1}{n}\right)\right]^{-1}\right)\left[\frac{1}{\sqrt{n}}\sum_{i=1}^{n} s(z_i,\theta_0) + O_p\!\left(\frac{1}{\sqrt{n}}\right)\right] \\
&= -\left(H_1(\theta_0)^{-1} + O_p\!\left(\frac{1}{\sqrt{n}}\right)\right)\left[\frac{1}{\sqrt{n}}\sum_{i=1}^{n} s(z_i,\theta_0) + O_p\!\left(\frac{1}{\sqrt{n}}\right)\right] \\
&\quad - \left(\hat{H}_1(\tilde{\theta})^{-1} - H_1(\theta_0)^{-1} + O_p\!\left(\frac{1}{\sqrt{n}}\right)\right)\left[\frac{1}{\sqrt{n}}\sum_{i=1}^{n} s(z_i,\theta_0) + O_p\!\left(\frac{1}{\sqrt{n}}\right)\right] \\
&= -H_1(\theta_0)^{-1}\frac{1}{\sqrt{n}}\sum_{i=1}^{n} s(z_i,\theta_0) - \left(\hat{H}_1(\tilde{\theta})^{-1} - H_1(\theta_0)^{-1}\right)\frac{1}{\sqrt{n}}\sum_{i=1}^{n} s(z_i,\theta_0) + O_p\!\left(\frac{1}{\sqrt{n}}\right) \\
&= -H_1(\theta_0)^{-1}\frac{1}{\sqrt{n}}\sum_{i=1}^{n} s(z_i,\theta_0) + O_p\!\left(\frac{1}{\sqrt{n}}\right),
\end{aligned}
$$
where the second equality is by Condition 2, and the last equality follows from $\hat{H}_1(\tilde{\theta})^{-1} - H_1(\theta_0)^{-1} = O_p(1/\sqrt{n})$ and $\frac{1}{\sqrt{n}}\sum_{i=1}^{n} s(z_i,\theta_0) = O_p(1)$; hence, we have $\sqrt{n}(\theta^{*}-\theta_0) = QJ + O_p(1/\sqrt{n})$. This implies that $\theta^{*}$ and $\hat{\theta}$ have the same first-order asymptotics. In order to analyze the higher-order asymptotic distribution, we make a second-order Taylor series expansion:
$$
\begin{aligned}
0 ={}& \frac{1}{n}\sum_{i=1}^{n} s(z_i,\theta_0) + \frac{1}{n}\sum_{i=1}^{n}\nabla s(z_i,\theta_0)\,(\theta^{*}-\theta_0) + \frac{1}{2}\,\frac{1}{n}\sum_{i=1}^{n}\nabla^2 s(z_i,\tilde{\theta})\left[(\theta^{*}-\theta_0)\otimes(\theta^{*}-\theta_0)\right] \\
&- \frac{1}{n}\hat{c}(\theta_0) - \frac{1}{n}\nabla\hat{c}(\theta_0)\,(\theta^{*}-\theta_0) - \frac{1}{2n}\nabla^2\hat{c}(\tilde{\theta})\left[(\theta^{*}-\theta_0)\otimes(\theta^{*}-\theta_0)\right]. \qquad \text{(B2)}
\end{aligned}
$$
We rewrite (B2) as:
$$
\begin{aligned}
0 &= \frac{1}{\sqrt{n}}J + \left(-Q^{-1} + \frac{1}{\sqrt{n}}V\right)(\theta^{*}-\theta_0) + \frac{1}{2}\left(H_2 + \frac{1}{\sqrt{n}}W\right)\left[(\theta^{*}-\theta_0)\otimes(\theta^{*}-\theta_0)\right] \\
&\quad - \frac{1}{n}\hat{c}(\theta_0) - \frac{1}{n}\nabla\hat{c}(\theta_0)(\theta^{*}-\theta_0) - \frac{1}{2n}\nabla^2\hat{c}(\tilde{\theta})\left[(\theta^{*}-\theta_0)\otimes(\theta^{*}-\theta_0)\right] + O_p(n^{-3/2}) \\
&= \frac{1}{\sqrt{n}}J + \left(-Q^{-1} + \frac{1}{\sqrt{n}}V\right)(\theta^{*}-\theta_0) + \frac{1}{2}\left(H_2 + \frac{1}{\sqrt{n}}W\right)\left[(\theta^{*}-\theta_0)\otimes(\theta^{*}-\theta_0)\right] - \frac{1}{n}\hat{c}(\theta_0) + O_p(n^{-3/2}) \qquad \text{(B3)}
\end{aligned}
$$
since (a) $\frac{1}{n}\nabla\hat{c}(\theta_0)(\theta^{*}-\theta_0) = O_p(n^{-3/2})$ by Condition 2 and $\theta^{*} = \theta_0 + O_p(1/\sqrt{n})$ from (B1), noting $J = O_p(1)$, and since (b):
$$
\left\|\frac{1}{2n}\nabla^2\hat{c}(\tilde{\theta})\left[(\theta^{*}-\theta_0)\otimes(\theta^{*}-\theta_0)\right]\right\| \le \frac{1}{2n}\left\|\nabla^2\hat{c}(\tilde{\theta})\right\|\left\|\theta^{*}-\theta_0\right\|^{2} = O(n^{-1})\,O_p(1)\,O_p(n^{-1}) = O_p(n^{-2})
$$
by Condition 3 and $\theta^{*} = \theta_0 + O_p(1/\sqrt{n})$.
From (B3), by observing that θ * and θ ^ have the same first-order asymptotics, we obtain:
$$
\begin{aligned}
\sqrt{n}\,(\theta^{*}-\theta_0) &= QJ + \frac{1}{\sqrt{n}}\,Q\left(VQJ + \frac{1}{2}H_2\left[QJ\otimes QJ\right] - \hat{c}(\theta_0)\right) + O_p\!\left(\frac{1}{n}\right) \\
&= QJ + \frac{1}{\sqrt{n}}\,Q\left(VQJ + \frac{1}{2}H_2\left[QJ\otimes QJ\right] - c(\theta_0)\right) + O_p\!\left(\frac{1}{n}\right),
\end{aligned}
$$
as in Lemma 1. The second equality comes from Condition 1(ii) ($\hat{c}(\theta_0) = c(\theta_0) + O_p(1/\sqrt{n})$), and thus the second-order bias satisfies $\mathrm{Bias}(\theta^{*}) \equiv \frac{1}{n}E\left[Q\left(VQJ + \frac{1}{2}H_2\left[QJ\otimes QJ\right] - c(\theta_0)\right)\right] = 0$, since (noting $Q \equiv Q(\theta_0)$ and $H_2 \equiv H_2(\theta_0)$):
$$
E\left[VQJ + \frac{1}{2}H_2\left[QJ\otimes QJ\right]\right] = E\left[\nabla s(z_i,\theta_0)\,Q(\theta_0)\,s(z_i,\theta_0)\right] + \frac{1}{2}H_2\,E\left[Q(\theta_0)s(z_i,\theta_0)\otimes Q(\theta_0)s(z_i,\theta_0)\right] = c(\theta_0)
$$
by the definition of $c(\theta)$ in (5) and Lemma 2. ☐
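The formula $c(\theta_0) = E\left[\nabla s\,Qs\right] + \frac{1}{2}H_2 E\left[Qs\otimes Qs\right]$ used in this proof can be checked numerically in a scalar model. The sketch below is our own construction, not from the paper: for an exponential sample with $\theta = \log(\text{rate})$, the simulated bias coefficient $B(\theta_0) = Qc(\theta_0)$ and the simulated scaled bias of the MLE should both be near the exact value $1/2$:

```python
import numpy as np

# Hedged numerical check (not from the paper) of the second-order bias formula
# B(theta_0) = Q * ( E[grad_s * Q * s] + 0.5 * H2 * E[(Q s)^2] ) in the model
# x ~ Exponential with rate exp(theta), score s(x, theta) = 1 - exp(theta) * x.
rng = np.random.default_rng(2)
lam, n = 1.5, 25
theta0 = np.log(lam)

# Monte Carlo approximations of the population objects at theta0.
x = rng.exponential(1 / lam, size=1_000_000)
s = 1 - lam * x                   # score terms, mean zero
ds = -lam * x                     # d s / d theta; in this model d^2 s / d theta^2 = ds too
H1 = ds.mean()                    # E[grad s], approximately -1
Q = -1 / H1                       # Q = -H1^{-1}, approximately 1
H2 = ds.mean()                    # E[grad^2 s]; equals E[grad s] in this model
c = np.mean(ds * Q * s) + 0.5 * H2 * np.mean((Q * s) ** 2)
B = Q * c                         # bias coefficient; exact value is 1/2 here

# Compare with the simulated bias of the MLE theta_hat = -log(mean(x)).
samples = rng.exponential(1 / lam, size=(100_000, n))
theta_hat = -np.log(samples.mean(axis=1))
print(B, n * (theta_hat.mean() - theta0))  # both close to 0.5
```

The agreement reflects the proposition: subtracting $c(\theta)/n$ inside the moment equation removes exactly this $B(\theta_0)/n$ bias term.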

Appendix B.2. Lemma 3

Proof. 
Consider a higher-order Taylor expansion of (2) around the true value $\theta_0$, up to the third order:
$$
0 = \frac{1}{n}\sum_{i=1}^{n} s(z_i,\theta_0) + \frac{1}{n}\sum_{i=1}^{n}\nabla s(z_i,\theta_0)\,(\hat{\theta}-\theta_0) + \frac{1}{2}\,\frac{1}{n}\sum_{i=1}^{n}\nabla^2 s(z_i,\theta_0)\left[(\hat{\theta}-\theta_0)\otimes(\hat{\theta}-\theta_0)\right] + \frac{1}{6}\,\frac{1}{n}\sum_{i=1}^{n}\nabla^3 s(z_i,\tilde{\theta})\left[(\hat{\theta}-\theta_0)\otimes(\hat{\theta}-\theta_0)\otimes(\hat{\theta}-\theta_0)\right],
$$
where $\tilde{\theta}$ lies between $\theta_0$ and $\hat{\theta}$. Now, by the stochastic equicontinuity Condition (iii) and $\tilde{\theta} = \theta_0 + o_p(1)$, we have:
$$
\left(\frac{1}{n}\sum_{i=1}^{n}\nabla^3 s(z_i,\tilde{\theta}) - H_3(\tilde{\theta})\right) - \left(\frac{1}{n}\sum_{i=1}^{n}\nabla^3 s(z_i,\theta_0) - H_3(\theta_0)\right) = o_p\!\left(\frac{1}{\sqrt{n}}\right)
$$
and hence:
$$
\begin{aligned}
\left(\frac{1}{n}\sum_{i=1}^{n}\nabla^3 s(z_i,\tilde{\theta}) - \frac{1}{n}\sum_{i=1}^{n}\nabla^3 s(z_i,\theta_0)\right)&\left[(\hat{\theta}-\theta_0)\otimes(\hat{\theta}-\theta_0)\otimes(\hat{\theta}-\theta_0)\right] \\
&= \left(H_3(\tilde{\theta}) - H_3(\theta_0) + o_p\!\left(\tfrac{1}{\sqrt{n}}\right)\right)\left[(\hat{\theta}-\theta_0)\otimes(\hat{\theta}-\theta_0)\otimes(\hat{\theta}-\theta_0)\right] \\
&= \left(\nabla E\left[\nabla^3 s(z_i,\theta)\right]\Big|_{\theta=\tilde{\tilde{\theta}}}\,(\tilde{\theta}-\theta_0) + o_p\!\left(\tfrac{1}{\sqrt{n}}\right)\right)\left[(\hat{\theta}-\theta_0)\otimes(\hat{\theta}-\theta_0)\otimes(\hat{\theta}-\theta_0)\right] \\
&= \left(E\left[\nabla^4 s(z_i,\theta_0)\right](\tilde{\theta}-\theta_0) + o_p\!\left(\tfrac{1}{\sqrt{n}}\right)\right)\left[(\hat{\theta}-\theta_0)\otimes(\hat{\theta}-\theta_0)\otimes(\hat{\theta}-\theta_0)\right] = O_p(n^{-2}),
\end{aligned}
$$
applying the mean value theorem, where $\tilde{\tilde{\theta}}$ lies between $\tilde{\theta}$ and $\theta_0$, and using standard results on differentiating inside the integral. The second-to-last equality follows from the continuity of $E\left[\nabla^4 s(z_i,\theta)\right]$ at $\theta_0$ and from $\tilde{\tilde{\theta}} = \theta_0 + o_p(1)$. We thus obtain:
$$
0 = \frac{1}{n}\sum_{i=1}^{n} s(z_i,\theta_0) + \frac{1}{n}\sum_{i=1}^{n}\nabla s(z_i,\theta_0)\,(\hat{\theta}-\theta_0) + \frac{1}{2}\,\frac{1}{n}\sum_{i=1}^{n}\nabla^2 s(z_i,\theta_0)\left[(\hat{\theta}-\theta_0)\otimes(\hat{\theta}-\theta_0)\right] + \frac{1}{6}\,\frac{1}{n}\sum_{i=1}^{n}\nabla^3 s(z_i,\theta_0)\left[(\hat{\theta}-\theta_0)\otimes(\hat{\theta}-\theta_0)\otimes(\hat{\theta}-\theta_0)\right] + O_p(n^{-2}). \qquad \text{(B4)}
$$
Now, note:
$$
\frac{1}{n}\sum_{i=1}^{n}\nabla s(z_i,\theta_0) = -Q^{-1} + \frac{1}{\sqrt{n}}V, \qquad
\frac{1}{n}\sum_{i=1}^{n}\nabla^2 s(z_i,\theta_0) = H_2 + \frac{1}{\sqrt{n}}W, \qquad
\frac{1}{n}\sum_{i=1}^{n}\nabla^3 s(z_i,\theta_0) = H_3 + \frac{1}{\sqrt{n}}W_3,
$$
with $W_3 \equiv \frac{1}{\sqrt{n}}\sum_{i=1}^{n}\left(\nabla^3 s(z_i,\theta_0) - E\left[\nabla^3 s(z_i,\theta_0)\right]\right) = O_p(1)$.
We then rewrite (B4) as:
$$
0 = \frac{1}{\sqrt{n}}J + \left(-Q^{-1} + \frac{1}{\sqrt{n}}V\right)(\hat{\theta}-\theta_0) + \frac{1}{2}\left(H_2 + \frac{1}{\sqrt{n}}W\right)\left[(\hat{\theta}-\theta_0)\otimes(\hat{\theta}-\theta_0)\right] + \frac{1}{6}\left(H_3 + \frac{1}{\sqrt{n}}W_3\right)\left[(\hat{\theta}-\theta_0)\otimes(\hat{\theta}-\theta_0)\otimes(\hat{\theta}-\theta_0)\right] + O_p(n^{-2}). \qquad \text{(B5)}
$$
Now, note that we can expand:
$$
\left(-Q^{-1} + \frac{1}{\sqrt{n}}V\right)^{-1} = \left(I - \frac{1}{\sqrt{n}}QV\right)^{-1}(-Q) = -Q + O_p\!\left(\frac{1}{\sqrt{n}}\right) = -Q - \frac{1}{\sqrt{n}}QVQ + O_p(n^{-1}) = -Q - \frac{1}{\sqrt{n}}QVQ - \frac{1}{n}QVQVQ + O_p(n^{-3/2}) \qquad \text{(B6)}
$$
depending on the order of approximation needed. Plugging (B6) into (B5) and inspecting the orders, we have:
$$
\begin{aligned}
\hat{\theta}-\theta_0
&= -\left(-Q^{-1}+\frac{1}{\sqrt{n}}V\right)^{-1}\frac{1}{\sqrt{n}}J - \frac{1}{2}\left(-Q^{-1}+\frac{1}{\sqrt{n}}V\right)^{-1}\left(H_2+\frac{1}{\sqrt{n}}W\right)\left[(\hat{\theta}-\theta_0)\otimes(\hat{\theta}-\theta_0)\right] \\
&\quad - \frac{1}{6}\left(-Q^{-1}+\frac{1}{\sqrt{n}}V\right)^{-1}\left(H_3+\frac{1}{\sqrt{n}}W_3\right)\left[(\hat{\theta}-\theta_0)\otimes(\hat{\theta}-\theta_0)\otimes(\hat{\theta}-\theta_0)\right] + O_p(n^{-2}) \\
&= \left(Q+\frac{1}{\sqrt{n}}QVQ+\frac{1}{n}QVQVQ+O_p(n^{-3/2})\right)\frac{1}{\sqrt{n}}J + \frac{1}{2}\left(Q+\frac{1}{\sqrt{n}}QVQ+O_p(n^{-1})\right)\left(H_2+\frac{1}{\sqrt{n}}W\right)\left[(\hat{\theta}-\theta_0)\otimes(\hat{\theta}-\theta_0)\right] \\
&\quad + \frac{1}{6}\left(Q+O_p\!\left(\frac{1}{\sqrt{n}}\right)\right)\left(H_3+\frac{1}{\sqrt{n}}W_3\right)\left[(\hat{\theta}-\theta_0)\otimes(\hat{\theta}-\theta_0)\otimes(\hat{\theta}-\theta_0)\right] + O_p(n^{-2}) \qquad \text{(B7)} \\
&= \frac{1}{\sqrt{n}}QJ + \frac{1}{n}QVQJ + \frac{1}{n^{3/2}}QVQVQJ + \frac{1}{2}\left(QH_2+\frac{1}{\sqrt{n}}QW+\frac{1}{\sqrt{n}}QVQH_2\right)\left[(\hat{\theta}-\theta_0)\otimes(\hat{\theta}-\theta_0)\right] \\
&\quad + \frac{1}{6}QH_3\left[(\hat{\theta}-\theta_0)\otimes(\hat{\theta}-\theta_0)\otimes(\hat{\theta}-\theta_0)\right] + O_p(n^{-2}). \qquad \text{(B8)}
\end{aligned}
$$
Now, plugging $\sqrt{n}(\hat{\theta}-\theta_0) = a_{1/2} + O_p(1/\sqrt{n})$ or $\sqrt{n}(\hat{\theta}-\theta_0) = a_{1/2} + \frac{1}{\sqrt{n}}a_1 + O_p(1/n)$ into (B8), depending on the orders required, we obtain:
$$
\begin{aligned}
\hat{\theta}-\theta_0
&= \frac{1}{\sqrt{n}}QJ + \frac{1}{n}QVQJ + \frac{1}{n^{3/2}}QVQVQJ \\
&\quad + \frac{1}{2n}\left(QH_2+\frac{QW}{\sqrt{n}}+\frac{QVQH_2}{\sqrt{n}}\right)\left[\left(a_{1/2}+\frac{a_1}{\sqrt{n}}+O_p\!\left(\tfrac{1}{n}\right)\right)\otimes\left(a_{1/2}+\frac{a_1}{\sqrt{n}}+O_p\!\left(\tfrac{1}{n}\right)\right)\right] \\
&\quad + \frac{1}{6n^{3/2}}QH_3\left[\left(a_{1/2}+O_p\!\left(\tfrac{1}{\sqrt{n}}\right)\right)\otimes\left(a_{1/2}+O_p\!\left(\tfrac{1}{\sqrt{n}}\right)\right)\otimes\left(a_{1/2}+O_p\!\left(\tfrac{1}{\sqrt{n}}\right)\right)\right] + O_p(n^{-2}) \\
&= \frac{1}{\sqrt{n}}QJ + \frac{1}{n}QVQJ + \frac{1}{n^{3/2}}QVQVQJ + \frac{1}{2n}\left(QH_2+\frac{QW}{\sqrt{n}}+\frac{QVQH_2}{\sqrt{n}}\right)\left[\left(a_{1/2}+\frac{a_1}{\sqrt{n}}\right)\otimes\left(a_{1/2}+\frac{a_1}{\sqrt{n}}\right)\right] + \frac{1}{6n^{3/2}}QH_3\left[a_{1/2}\otimes a_{1/2}\otimes a_{1/2}\right] + O_p(n^{-2}) \\
&= \frac{1}{\sqrt{n}}QJ + \frac{1}{n}QVQJ + \frac{1}{n^{3/2}}QVQVQJ + \frac{1}{2n}QH_2\left[a_{1/2}\otimes a_{1/2}\right] \\
&\quad + \frac{1}{2n^{3/2}}\left(QH_2\left[a_{1/2}\otimes a_1 + a_1\otimes a_{1/2}\right] + \left(QW+QVQH_2\right)\left[a_{1/2}\otimes a_{1/2}\right]\right) + \frac{1}{6n^{3/2}}QH_3\left[a_{1/2}\otimes a_{1/2}\otimes a_{1/2}\right] + O_p(n^{-2}). \qquad \text{(B9)}
\end{aligned}
$$
Finally, rearranging (B9) according to the orders, we have:
$$
\begin{aligned}
\hat{\theta}-\theta_0
&= \frac{1}{\sqrt{n}}QJ + \frac{1}{n}\left(QVQJ + \frac{1}{2}QH_2\left[a_{1/2}\otimes a_{1/2}\right]\right) + \frac{1}{n^{3/2}}\left(QV\left(QVQJ + \frac{1}{2}QH_2\left[a_{1/2}\otimes a_{1/2}\right]\right) + \frac{1}{2}QW\left[a_{1/2}\otimes a_{1/2}\right]\right) \\
&\quad + \frac{1}{n^{3/2}}\left(\frac{1}{2}QH_2\left[a_{1/2}\otimes a_1 + a_1\otimes a_{1/2}\right] + \frac{1}{6}QH_3\left[a_{1/2}\otimes a_{1/2}\otimes a_{1/2}\right]\right) + O_p(n^{-2}) \\
&= \frac{1}{\sqrt{n}}a_{1/2} + \frac{1}{n}a_1 + \frac{1}{n^{3/2}}\left(QVa_1 + \frac{1}{2}QW\left[a_{1/2}\otimes a_{1/2}\right] + \frac{1}{2}QH_2\left[a_{1/2}\otimes a_1 + a_1\otimes a_{1/2}\right] + \frac{1}{6}QH_3\left[a_{1/2}\otimes a_{1/2}\otimes a_{1/2}\right]\right) + O_p(n^{-2}).
\end{aligned}
$$
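The leading term $a_{1/2} = QJ$ of this expansion can be illustrated by simulation. The following sketch uses our own model choice, not one from the paper: the remainder $\sqrt{n}(\hat{\theta}-\theta_0) - QJ$ should shrink at rate $1/\sqrt{n}$, so multiplying its magnitude by $\sqrt{n}$ gives a roughly constant value across sample sizes:

```python
import numpy as np

# Hedged Monte Carlo sketch (not from the paper) of the leading expansion term
# sqrt(n)(theta_hat - theta_0) = a_{1/2} + O_p(1/sqrt(n)), with a_{1/2} = QJ,
# in the scalar model x ~ Exponential(rate exp(theta)), theta = log(rate).
rng = np.random.default_rng(3)
lam = 1.5
theta0 = np.log(lam)

scaled_err = {}
for n in (50, 200, 800):
    x = rng.exponential(1 / lam, size=(20_000, n))
    theta_hat = -np.log(x.mean(axis=1))              # MLE of theta = log(rate)
    J = np.sqrt(n) * (1 - lam * x).mean(axis=1)      # (1/sqrt n) sum_i s(z_i, theta0)
    err = np.sqrt(n) * (theta_hat - theta0) - J      # Q = 1 here, so a_{1/2} = J
    scaled_err[n] = np.sqrt(n) * np.abs(err).mean()  # roughly constant across n
print(scaled_err)
```

The stability of the scaled remainder is consistent with the $O_p(1/\sqrt{n})$ second term of the expansion; the higher-order terms in Lemma 3 refine this remainder further.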
 ☐

Appendix B.3. Proposition 2

Proof. 
Now, consider a third-order Taylor series expansion of $0 = \frac{1}{n}\sum_{i=1}^{n} s(z_i,\theta^{*}) - \frac{1}{n}\hat{c}(\theta^{*})$:
$$
\begin{aligned}
0 ={}& \frac{1}{n}\sum_{i=1}^{n} s(z_i,\theta_0) + \frac{1}{n}\sum_{i=1}^{n}\nabla s(z_i,\theta_0)\,(\theta^{*}-\theta_0) + \frac{1}{2}\,\frac{1}{n}\sum_{i=1}^{n}\nabla^2 s(z_i,\theta_0)\left[(\theta^{*}-\theta_0)\otimes(\theta^{*}-\theta_0)\right] \\
&+ \frac{1}{6}\,\frac{1}{n}\sum_{i=1}^{n}\nabla^3 s(z_i,\tilde{\theta})\left[(\theta^{*}-\theta_0)\otimes(\theta^{*}-\theta_0)\otimes(\theta^{*}-\theta_0)\right] \\
&- \frac{1}{n}\hat{c}(\theta_0) - \frac{1}{n}\nabla\hat{c}(\theta_0)\,(\theta^{*}-\theta_0) - \frac{1}{2}\,\frac{1}{n}\nabla^2\hat{c}(\theta_0)\left[(\theta^{*}-\theta_0)\otimes(\theta^{*}-\theta_0)\right] \\
&- \frac{1}{6}\,\frac{1}{n}\nabla^3\hat{c}(\tilde{\theta})\left[(\theta^{*}-\theta_0)\otimes(\theta^{*}-\theta_0)\otimes(\theta^{*}-\theta_0)\right].
\end{aligned}
$$
From this, similarly as (B4) to (B5), we obtain:
$$
0 = \frac{1}{\sqrt{n}}J + \left(-Q^{-1}+\frac{1}{\sqrt{n}}V\right)(\theta^{*}-\theta_0) + \frac{1}{2}\left(H_2+\frac{1}{\sqrt{n}}W\right)\left[(\theta^{*}-\theta_0)\otimes(\theta^{*}-\theta_0)\right] + \frac{1}{6}\left(H_3+\frac{1}{\sqrt{n}}W_3\right)\left[(\theta^{*}-\theta_0)\otimes(\theta^{*}-\theta_0)\otimes(\theta^{*}-\theta_0)\right] - \frac{1}{n}\hat{c}(\theta_0) - \frac{1}{n}\nabla\hat{c}(\theta_0)(\theta^{*}-\theta_0) + O_p(n^{-2}),
$$
since $\frac{1}{2}\,\frac{1}{n}\nabla^2\hat{c}(\theta_0)\left[(\theta^{*}-\theta_0)\otimes(\theta^{*}-\theta_0)\right] = O_p(n^{-2})$ by Condition 3 and $\theta^{*} = \theta_0 + O_p(1/\sqrt{n})$, and since:
$$
\left\|\frac{1}{6}\,\frac{1}{n}\nabla^3\hat{c}(\tilde{\theta})\left[(\theta^{*}-\theta_0)\otimes(\theta^{*}-\theta_0)\otimes(\theta^{*}-\theta_0)\right]\right\| \le \frac{1}{6}\,\frac{1}{n}\left\|\nabla^3\hat{c}(\tilde{\theta})\right\|\left\|\theta^{*}-\theta_0\right\|^{3} = O(n^{-1})\,O_p(1)\,O_p(n^{-3/2}) = O_p(n^{-5/2})
$$
by Condition 7(ii) and $\theta^{*} = \theta_0 + O_p(1/\sqrt{n})$. Similarly to (B7), we obtain:
$$
\begin{aligned}
\theta^{*}-\theta_0
&= \left(Q+\frac{1}{\sqrt{n}}QVQ+\frac{1}{n}QVQVQ+O_p(n^{-3/2})\right)\frac{1}{\sqrt{n}}J + \frac{1}{2}\left(Q+\frac{1}{\sqrt{n}}QVQ+O_p(n^{-1})\right)\left(H_2+\frac{1}{\sqrt{n}}W\right)\left[(\theta^{*}-\theta_0)\otimes(\theta^{*}-\theta_0)\right] \\
&\quad + \frac{1}{6}\left(Q+O_p\!\left(\frac{1}{\sqrt{n}}\right)\right)\left(H_3+\frac{1}{\sqrt{n}}W_3\right)\left[(\theta^{*}-\theta_0)\otimes(\theta^{*}-\theta_0)\otimes(\theta^{*}-\theta_0)\right] \\
&\quad - \frac{1}{n}\left(Q+\frac{1}{\sqrt{n}}QVQ+O_p(n^{-1})\right)\hat{c}(\theta_0) - \frac{1}{n}\left(Q+O_p\!\left(\frac{1}{\sqrt{n}}\right)\right)\nabla\hat{c}(\theta_0)(\theta^{*}-\theta_0) + O_p(n^{-2}) \\
&= \frac{1}{\sqrt{n}}QJ + \frac{1}{n}QVQJ + \frac{1}{n^{3/2}}QVQVQJ + \frac{1}{2}\left(QH_2+\frac{1}{\sqrt{n}}QW+\frac{1}{\sqrt{n}}QVQH_2\right)\left[(\theta^{*}-\theta_0)\otimes(\theta^{*}-\theta_0)\right] \\
&\quad + \frac{1}{6}QH_3\left[(\theta^{*}-\theta_0)\otimes(\theta^{*}-\theta_0)\otimes(\theta^{*}-\theta_0)\right] - \frac{1}{n}\left(Q+\frac{1}{\sqrt{n}}QVQ\right)\hat{c}(\theta_0) - \frac{1}{n}Q\nabla\hat{c}(\theta_0)(\theta^{*}-\theta_0) + O_p(n^{-2}). \qquad \text{(B11)}
\end{aligned}
$$
Now, replacing $\sqrt{n}(\theta^{*}-\theta_0) = a_{1/2} + O_p(1/\sqrt{n})$ or $\sqrt{n}(\theta^{*}-\theta_0) = a_{1/2} + \frac{1}{\sqrt{n}}\left(a_1 - Qc(\theta_0)\right) + O_p(1/n)$ in (B11), depending on the orders required, we obtain:
$$
\begin{aligned}
\theta^{*}-\theta_0
&= \frac{1}{\sqrt{n}}QJ + \frac{1}{n}QVQJ + \frac{1}{n^{3/2}}QVQVQJ \\
&\quad + \frac{1}{2n}\left(QH_2+\frac{QW}{\sqrt{n}}+\frac{QVQH_2}{\sqrt{n}}\right)\left[\left(a_{1/2}+\frac{a_1-Qc(\theta_0)}{\sqrt{n}}+O_p(n^{-1})\right)\otimes\left(a_{1/2}+\frac{a_1-Qc(\theta_0)}{\sqrt{n}}+O_p(n^{-1})\right)\right] \\
&\quad + \frac{1}{6n^{3/2}}QH_3\left[\left(a_{1/2}+O_p\!\left(\tfrac{1}{\sqrt{n}}\right)\right)\otimes\left(a_{1/2}+O_p\!\left(\tfrac{1}{\sqrt{n}}\right)\right)\otimes\left(a_{1/2}+O_p\!\left(\tfrac{1}{\sqrt{n}}\right)\right)\right] \\
&\quad - \frac{1}{n}\left(Q+\frac{1}{\sqrt{n}}QVQ+O_p(n^{-1})\right)\hat{c}(\theta_0) - \frac{1}{n^{3/2}}Q\nabla\hat{c}(\theta_0)\left(a_{1/2}+O_p\!\left(\tfrac{1}{\sqrt{n}}\right)\right) + O_p(n^{-2}) \\
&= \frac{1}{\sqrt{n}}QJ + \frac{1}{n}QVQJ + \frac{1}{n^{3/2}}QVQVQJ + \frac{1}{2n}\left(QH_2+\frac{QW}{\sqrt{n}}+\frac{QVQH_2}{\sqrt{n}}\right)\left[\left(a_{1/2}+\frac{a_1-Qc(\theta_0)}{\sqrt{n}}\right)\otimes\left(a_{1/2}+\frac{a_1-Qc(\theta_0)}{\sqrt{n}}\right)\right] \\
&\quad + \frac{1}{6n^{3/2}}QH_3\left[a_{1/2}\otimes a_{1/2}\otimes a_{1/2}\right] - \frac{1}{n}\left(Q+\frac{1}{\sqrt{n}}QVQ\right)\hat{c}(\theta_0) - \frac{1}{n^{3/2}}Q\nabla c(\theta_0)\,a_{1/2} + O_p(n^{-2}),
\end{aligned}
$$
where we replaced $\hat{c}(\theta_0)$ with $c(\theta_0) + O_p(1/\sqrt{n})$ from Condition 7(i). Rearranging terms according to the orders, we have:
$$
\begin{aligned}
\theta^{*}-\theta_0
&= \frac{1}{\sqrt{n}}QJ + \frac{1}{n}\left(QVQJ + \frac{1}{2}QH_2\left[a_{1/2}\otimes a_{1/2}\right] - Q\hat{c}(\theta_0)\right) \\
&\quad + \frac{1}{n^{3/2}}\left(QVQVQJ + \frac{1}{2}QH_2\left[a_{1/2}\otimes\left(a_1-Qc(\theta_0)\right) + \left(a_1-Qc(\theta_0)\right)\otimes a_{1/2}\right] + \frac{1}{2}\left(QW+QVQH_2\right)\left[a_{1/2}\otimes a_{1/2}\right] + \frac{1}{6}QH_3\left[a_{1/2}\otimes a_{1/2}\otimes a_{1/2}\right] - QVQ\hat{c}(\theta_0) - Q\nabla c(\theta_0)\,a_{1/2}\right) + O_p(n^{-2}) \\
&= \frac{1}{\sqrt{n}}QJ + \frac{1}{n}\left(QVQJ + \frac{1}{2}QH_2\left[a_{1/2}\otimes a_{1/2}\right] - Qc(\theta_0)\right) \\
&\quad + \frac{1}{n^{3/2}}\left(QVa_1 + \frac{1}{2}QH_2\left[a_{1/2}\otimes\left(a_1-Qc(\theta_0)\right) + \left(a_1-Qc(\theta_0)\right)\otimes a_{1/2}\right] + \frac{1}{2}QW\left[a_{1/2}\otimes a_{1/2}\right] + \frac{1}{6}QH_3\left[a_{1/2}\otimes a_{1/2}\otimes a_{1/2}\right] - QV\left(Qc(\theta_0)+O_p\!\left(\tfrac{1}{\sqrt{n}}\right)\right) - Q\nabla c(\theta_0)\,a_{1/2} - \sqrt{n}\,Q\left(\hat{c}(\theta_0)-c(\theta_0)\right)\right) + O_p(n^{-2}) \\
&= \frac{1}{\sqrt{n}}QJ + \frac{1}{n}\left(a_1 - Qc(\theta_0)\right) + \frac{1}{n^{3/2}}\left(a_{3/2} - \frac{1}{2}QH_2\left[a_{1/2}\otimes Qc(\theta_0) + Qc(\theta_0)\otimes a_{1/2}\right] - QVQc(\theta_0) - Q\nabla c(\theta_0)\,a_{1/2} - \sqrt{n}\,Q\left(\hat{c}(\theta_0)-c(\theta_0)\right)\right) + O_p(n^{-2}), \qquad \text{(B12)}
\end{aligned}
$$
noting that $\nabla\hat{c}(\theta_0) = \nabla c(\theta_0) + O_p(1/\sqrt{n})$.
Now, we rewrite the higher-order expansion of $\theta^{*}$ in terms of $B(\theta)$, recalling that $Q(\theta)^{-1}B(\theta) = c(\theta)$, and hence:
$$
\nabla c(\theta) = Q(\theta)^{-1}\nabla B(\theta) - \mathrm{vec}^{*}\!\left(B(\theta)\right)'\,\nabla H_1(\theta) \qquad \text{(B13)}
$$
from Remark A.2 in Appendix A. From (B12), note:
$$
\begin{aligned}
\sqrt{n}\,(\theta^{*}-\theta_0) &= a_{1/2} + \frac{1}{\sqrt{n}}\left(a_1 - Qc(\theta_0)\right) + \frac{1}{n}\left(a_{3/2} - \frac{1}{2}QH_2\left[a_{1/2}\otimes Qc(\theta_0) + Qc(\theta_0)\otimes a_{1/2}\right] - QVQc(\theta_0) - Q\nabla c(\theta_0)\,a_{1/2}\right) \\
&\quad - \frac{1}{n}\,Q\,\sqrt{n}\left(\hat{c}(\theta_0)-c(\theta_0)\right) + O_p(n^{-3/2}) \qquad \text{(B14)}
\end{aligned}
$$
from (11), and also note that:
$$
\begin{aligned}
&\frac{1}{2}QH_2\left[a_{1/2}\otimes Qc(\theta_0) + Qc(\theta_0)\otimes a_{1/2}\right] + QVQc(\theta_0) + Q\nabla c(\theta_0)\,a_{1/2} \\
&\quad = \frac{1}{2}QH_2\left[a_{1/2}\otimes B(\theta_0) + B(\theta_0)\otimes a_{1/2}\right] + QVB(\theta_0) + Q\left(Q(\theta_0)^{-1}\nabla B(\theta_0) - \mathrm{vec}^{*}\!\left(B(\theta_0)\right)'\nabla H_1(\theta_0)\right)a_{1/2} \\
&\quad = Q\left(\frac{1}{2}H_2\left[a_{1/2}\otimes B(\theta_0) + B(\theta_0)\otimes a_{1/2}\right] - \mathrm{vec}^{*}\!\left(B(\theta_0)\right)'\nabla H_1(\theta_0)\,a_{1/2}\right) + \nabla B(\theta_0)\,a_{1/2} + QVB(\theta_0) \qquad \text{(B15)}
\end{aligned}
$$
from (B13) and $B(\theta) = Q(\theta)c(\theta)$. We claim that:
$$
\frac{1}{2}H_2\left[a_{1/2}\otimes B(\theta_0) + B(\theta_0)\otimes a_{1/2}\right] - \mathrm{vec}^{*}\!\left(B(\theta_0)\right)'\nabla H_1(\theta_0)\,a_{1/2} = 0, \qquad \text{(B16)}
$$
which simplifies (B15) to $\nabla B(\theta_0)\,a_{1/2} + QVB(\theta_0)$. This is obvious when $\dim(\theta_0) = 1$, since:
$$
\frac{1}{2}H_2\left[a_{1/2}\otimes B(\theta_0) + B(\theta_0)\otimes a_{1/2}\right] = H_2\,B(\theta_0)\,a_{1/2}
$$
and $\mathrm{vec}^{*}\!\left(B(\theta_0)\right)'\nabla H_1(\theta_0)\,a_{1/2} = B(\theta_0)H_2\,a_{1/2}$, noting $\nabla H_1(\theta_0) = H_2$ in the scalar case. To verify this for the general case with $\dim(\theta_0) = k$, we note $\mathrm{vec}(AB) = (I\otimes A)\mathrm{vec}(B) = (B'\otimes I)\mathrm{vec}(A)$, and hence:
$$
\frac{1}{2}H_2\left[a_{1/2}\otimes B(\theta_0) + B(\theta_0)\otimes a_{1/2}\right] = \frac{1}{2}H_2\left[\mathrm{vec}\!\left(B(\theta_0)a_{1/2}'\right) + \mathrm{vec}\!\left(a_{1/2}B(\theta_0)'\right)\right] = \frac{1}{2}H_2\left[\left(I\otimes B(\theta_0)\right)a_{1/2} + \left(B(\theta_0)\otimes I\right)a_{1/2}\right] = \frac{1}{2}H_2\left[I\otimes B(\theta_0) + B(\theta_0)\otimes I\right]a_{1/2}.
$$
Finally, after some tedious algebra, we find $\frac{1}{2}H_2\left[I\otimes B(\theta_0) + B(\theta_0)\otimes I\right] = \mathrm{vec}^{*}\!\left(B(\theta_0)\right)'\nabla H_1(\theta_0)$, which establishes (B16). Therefore, we can rewrite (B14) as:
$$
\sqrt{n}\,(\theta^{*}-\theta_0) = a_{1/2} + \frac{1}{\sqrt{n}}\left(a_1 - B(\theta_0)\right) + \frac{1}{n}\left(a_{3/2} - \nabla B(\theta_0)\,a_{1/2} - \sqrt{n}\left(\hat{B}(\theta_0) - B(\theta_0)\right)\right) + \frac{1}{n}\left(\sqrt{n}\left(\hat{Q}(\theta_0) - Q(\theta_0)\right)\hat{c}(\theta_0) - QVB(\theta_0)\right) + O_p(n^{-3/2}).
$$
Now, we have $\sqrt{n}\left(\hat{Q}(\theta_0) - Q(\theta_0)\right)\hat{c}(\theta_0) - QVB(\theta_0) = O_p(1/\sqrt{n})$, and hence:
$$
\sqrt{n}\,(\theta^{*}-\theta_0) = a_{1/2} + \frac{1}{\sqrt{n}}\left(a_1 - B(\theta_0)\right) + \frac{1}{n}\left(a_{3/2} - \nabla B(\theta_0)\,a_{1/2} - \sqrt{n}\left(\hat{B}(\theta_0) - B(\theta_0)\right)\right) + O_p(n^{-3/2}).
$$
This completes the proof. ☐

Appendix C. Higher Order Variances

Here, we derive the analytic forms of the higher-order variances for several alternative estimators. Note that $E\left[\left(a_1-B(\theta_0)\right)\left(a_1-B(\theta_0)\right)'\right] = E\left[a_1a_1'\right] - B(\theta_0)B(\theta_0)'$, $E\left[\sqrt{n}\,a_{1/2}\left(a_1-B(\theta_0)\right)'\right] = E\left[\sqrt{n}\,a_{1/2}a_1'\right]$ and $E\left[a_{1/2}\left(a_{3/2}-\nabla B(\theta_0)a_{1/2}\right)'\right] = E\left[a_{1/2}a_{3/2}'\right] - E\left[a_{1/2}a_{1/2}'\right]\nabla B(\theta_0)'$, from $E[a_1] = B(\theta_0)$ and $E[a_{1/2}] = 0$, and hence:
$$
\begin{aligned}
\Lambda_{\hat{\theta}_b} ={}& E\left[a_{1/2}a_{1/2}'\right] + \frac{1}{n}\left(E\left[\sqrt{n}\,a_1 a_{1/2}'\right] + E\left[\sqrt{n}\,a_{1/2}a_1'\right]\right) \\
&+ \frac{1}{n}\left(E\left[a_1 a_1'\right] + E\left[a_{3/2}a_{1/2}'\right] + E\left[a_{1/2}a_{3/2}'\right] - B(\theta_0)B(\theta_0)' - E\left[a_{1/2}a_{1/2}'\right]\nabla B(\theta_0)' - \nabla B(\theta_0)\,E\left[a_{1/2}a_{1/2}'\right]\right).
\end{aligned}
$$
Rilstone et al. (1996) [15] derive the second-order mean squared error (MSE) of the M-estimator that solves the moment condition (2). Proposition 3.4 in Rilstone et al. (1996) [15] implies that:
$$
\Lambda_{\hat{\theta}_b} = \gamma_1 + \frac{1}{\sqrt{n}}\left(\gamma_2 + \gamma_2'\right) + \frac{1}{n}\left(\gamma_3 + \gamma_4 + \gamma_4'\right) - \frac{1}{n}\left(B(\theta_0)B(\theta_0)' + \gamma_1\nabla B(\theta_0)' + \nabla B(\theta_0)\,\gamma_1\right) + O\!\left(n^{-2}\right),
$$
where (denoting the expectation of a function A ( θ ) as A ( θ ) ¯ = E [ A ( θ ) ] for notational convenience):
γ 1 = d 1 d 1 ¯ , γ 2 = Q v 1 d 1 d 1 ¯ + 1 2 H 2 d 1 d 1 d 1 ¯ γ 3 = Q v 1 d 1 d 2 V 2 ¯ + v 1 d 2 d 1 v 2 ¯ + v 1 d 2 d 2 v 1 ¯ Q + Q H 2 d 1 d 1 ¯ d 2 d 2 ¯ + d 1 d 2 d 1 d 2 ¯ + d 1 d 2 d 2 d 1 ¯ H 2 Q Q v 1 d 1 d 2 d 2 ¯ + v 1 d 2 d 1 d 2 ¯ + v 1 d 2 d 2 d 1 ¯ H 2 Q Q H 2 d 1 d 1 d 2 v 2 ¯ + d 1 d 2 d 1 v 2 ¯ + d 1 d 2 d 2 v 1 ¯ Q γ 4 = Q v 1 Q v 1 d 2 d 2 ¯ + v 1 Q v 2 d 1 d 2 ¯ + v 1 Q v 2 d 2 d 1 ¯ + 1 2 Q v 1 Q H 2 d 1 d 2 d 2 ¯ + v 1 Q H 2 d 2 d 1 d 2 ¯ + v 1 Q H 2 d 2 d 2 d 1 ¯ + 1 2 Q w 1 d 1 d 2 d 2 ¯ + w 1 d 2 d 1 d 2 ¯ + w 1 d 2 d 2 d 1 ¯ + 1 2 Q H 2 d 1 Q v 1 d 2 d 2 ¯ + d 1 Q v 2 d 1 d 2 ¯ + d 1 Q v 2 d 2 d 1 ¯ + 1 4 Q H 2 d 1 Q H 2 d 1 d 2 d 2 ¯ + d 1 Q H 2 d 2 d 1 d 2 ¯ + d 1 Q H 2 d 2 d 2 d 1 ¯ + 1 2 Q H 2 Q V 1 d 1 d 2 d 2 ¯ + Q V 1 d 2 d 1 d 2 ¯ + Q V 1 d 2 d 2 d 1 ¯ + 1 4 Q H 2 Q H 2 d 1 d 1 d 2 d 2 ¯ + Q H 2 d 1 d 2 d 1 d 2 ¯ + Q H 2 d 1 d 2 d 2 d 1 ¯ + 1 6 Q H 3 d 1 d 1 d 2 d 2 ¯ + d 1 d 2 d 1 d 2 ¯ + d 1 d 2 d 2 d 1 ¯ .
for d i = Q s ( z i , θ 0 ) , v i = s ( z i , θ 0 ) E s ( z i , θ 0 ) and w i = 2 s ( z i , θ 0 ) E 2 s ( z i , θ 0 ) . We also note B ( θ 0 ) = Q v 1 d 1 ¯ + 1 2 H 2 d 1 d 1 ¯ from Lemma 2. Finally, we derive B ( θ 0 ) as follows. Noting v e c * s ( z i , θ 0 ) Q ( θ 0 ) = v e c * s ( z i , θ 0 ) Q Q H 2 from Remark A.5 in Appendix A, we can show:
c ( θ 0 ) = 1 2 v e c * d 1 d 1 ¯ H 2 ( θ ) θ = θ 0 + 1 2 H 2 e 1 * d 1 ¯ + v e c * d 1 Q H 2 * d 1 ¯ + 1 2 H 2 d 1 e 1 ¯ + d 1 v e c * d 1 Q H 2 ¯ + s 1 e 1 + v e c * d 1 Q H 2 ¯ + v e c * d 1 s 1 ( θ ) θ = θ 0 ¯
where e 1 = Q s ( z 1 , θ 0 ) , s 1 ( θ ) = s ( z 1 , θ ) and s 1 = s ( z 1 , θ 0 ) . Combining this result with B ( θ 0 ) = Q ( θ 0 ) c ( θ 0 ) + v e c * c ( θ 0 ) Q ( θ 0 ) and B ( θ 0 ) = Q v 1 d 1 ¯ + 1 2 H 2 d 1 d 1 ¯ , we obtain:
B ( θ 0 ) = Q ( θ 0 ) c ( θ 0 ) + v e c * c ( θ 0 ) Q ( θ 0 ) = Q ( θ 0 ) c ( θ 0 ) + v e c * c ( θ 0 ) Q Q H 2 = Q ( θ 0 ) c ( θ 0 ) + v e c * B ( θ 0 ) Q H 2 = 1 2 Q v e c * d 1 d 1 ¯ H 2 ( θ ) θ = θ 0 + 1 2 Q H 2 e 1 * d 1 ¯ + v e c * d 1 Q H 2 * d 1 ¯ + 1 2 Q H 2 d 1 e 1 ¯ + d 1 v e c * d 1 Q H 2 ¯ + e 1 e 1 + v e c * d 1 Q H 2 ¯ + Q v e c * d 1 s 1 ( θ ) θ = θ 0 ¯ + v e c * d 1 v 1 ¯ + 1 2 d 1 d 1 ¯ H 2 Q Q H 2 .

References

  1. M. Quenouille. “Notes on Bias in Estimation.” Biometrika 43 (1956): 353–360. [Google Scholar] [CrossRef]
  2. P. Hall. The Bootstrap and Edgeworth Expansion. New York, NY, USA: Springer, 1992. [Google Scholar]
  3. J. Shao, and D. Tu. The Jackknife and Bootstrap. New York, NY, USA: Springer, 1995. [Google Scholar]
  4. J. MacKinnon, and A. Smith. “Approximate Bias Correction in Econometrics.” J. Econom. 85 (1998): 205–230. [Google Scholar] [CrossRef]
  5. D.W.K. Andrews. “Higher-Order Improvements of a Computationally Attractive k-Step Bootstrap for Extremum Estimators.” Econometrica 70 (2002): 119–162. [Google Scholar] [CrossRef]
  6. J. Hahn, and W.K. Newey. “Jackknife and Analytical Bias Reduction for Nonlinear Panel Models.” Econometrica 72 (2004): 1295–1319. [Google Scholar] [CrossRef]
  7. M.J.G. Bun, and M.A. Carree. “Bias-corrected Estimation in Dynamic Panel Data Models.” J. Bus. Econ. Stat. 23 (2005): 200–210. [Google Scholar] [CrossRef]
  8. Y. Bao, and A. Ullah. “Finite Sample Properties of Maximum Likelihood Estimator in Spatial Models.” J. Econom. 137 (2007): 396–413. [Google Scholar] [CrossRef]
  9. Y. Bao, and A. Ullah. “The Second-order Bias and Mean Squared Error of Estimators in Time-series Models.” J. Econom. 140 (2007): 650–669. [Google Scholar] [CrossRef]
  10. Y. Bao. “Finite Sample Bias of the QMLE in Spatial Autoregressive Models.” Econom. Theory 29 (2013): 68–88. [Google Scholar] [CrossRef]
  11. Z. Yang. “A General Method for Third-order Bias and Variance Corrections on a Nonlinear Estimator.” J. Econom. 186 (2015): 178–200. [Google Scholar] [CrossRef]
  12. J. Hahn, G. Kuersteiner, and W. Newey. Higher Order Efficiency of Bias Corrections. Working Paper. 2004. Available online: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.421.1735&rep=rep1&type=pdf (accessed on 2 March 2016).
  13. D. Firth. “Bias Reduction of Maximum Likelihood Estimation.” Biometrika 80 (1993): 27–38. [Google Scholar] [CrossRef]
  14. I. Fernandez-Val. “Bias Correction in Panel Data Models with Individual Specific Parameters.” Ph.D. Thesis, MIT, Cambridge, MA, USA, 2004. [Google Scholar]
  15. P. Rilstone, V.K. Srivastava, and A. Ullah. “The Second-order Bias and Mean Squared Error of Nonlinear Estimators.” J. Econom. 75 (1996): 369–395. [Google Scholar] [CrossRef]
  16. W.K. Newey. “A Method of Moments Interpretation of Sequential Estimators.” Econom. Lett. 14 (1984): 201–206. [Google Scholar] [CrossRef]
  17. W.K. Newey, and R. Smith. “Higher Order Properties of GMM and Generalized Empirical Likelihood Estimators.” Econometrica 72 (2004): 219–255. [Google Scholar] [CrossRef]
  18. J. Heckman. “Sample Selection Bias as a Specification Error.” Econometrica 47 (1979): 153–161. [Google Scholar] [CrossRef]
  19. R.N. Bhattacharya, and J.K. Ghosh. “On the Validity of the Formal Edgeworth Expansion.” Ann. Math. Stat. 6 (1978): 434–451. [Google Scholar] [CrossRef]
  20. J. Pfanzagl, and W. Wefelmeyer. “A Third-Order Optimum Property of the Maximum Likelihood Estimator.” J. Multivar. Anal. 8 (1978): 1–29. [Google Scholar] [CrossRef]
  21. J.K. Ghosh, B.K. Sinha, and H.S. Wieand. “Second Order Efficiency of the MLE with Respect to Any Bowl Shaped Loss Function.” Ann. Stat. 8 (1980): 506–521. [Google Scholar] [CrossRef]
  22. T.J. Rothenberg. “Approximating the Distributions of Econometric Estimators and Test Statistics.” In Handbook of Econometrics V2. Edited by Z. Griliches and M.D. Intriligator. Amsterdam, The Netherlands: North Holland Publishing Co., 1984, pp. 881–935. [Google Scholar]
  1. This possible extension was noted in Hahn and Newey (2004) [6].
  2. This is subject to some caveats, such as the existence of moments and other negligible remainder terms in the stochastic expansions.
  3. Note that the bias correction problem in nonlinear panel data models is the correction for the first-order bias due to the incidental parameters, while the bias correction in this paper is for the second-order bias.
  4. The fact that the estimating equation is the sum of n independent terms allows one to simply estimate the bias terms using their sample analogues. This approach does not have a direct extension to cases where the estimating function takes a more general form.
  5. The analytic forms of these variances are provided in Appendix C.
  6. Detailed derivations are available upon request.

Share and Cite

MDPI and ACS Style

Kim, K.I. Higher Order Bias Correcting Moment Equation for M-Estimation and Its Higher Order Efficiency. Econometrics 2016, 4, 48. https://doi.org/10.3390/econometrics4040048


