Article

Lasso Maximum Likelihood Estimation of Parametric Models with Singular Information Matrices

¹ School of Economics, Shanghai University of Finance and Economics, Shanghai 200433, China
² Key Laboratory of Mathematical Economics (SUFE), Ministry of Education, Shanghai 200433, China
³ Department of Economics, The Ohio State University, Columbus, OH 43210, USA
* Author to whom correspondence should be addressed.
Econometrics 2018, 6(1), 8; https://doi.org/10.3390/econometrics6010008
Submission received: 1 December 2017 / Revised: 13 February 2018 / Accepted: 13 February 2018 / Published: 22 February 2018

Abstract: An information matrix of a parametric model that is singular at a certain true value of the parameter vector is irregular. In such an irregular case, the maximum likelihood estimator usually has a rate of convergence slower than the $\sqrt{n}$-rate attained in a regular case. We propose to estimate such models by an adaptive lasso penalized maximum likelihood, with an information criterion to select the tuning parameter involved. We show that the penalized maximum likelihood estimator has the oracle properties. The method can implement model selection and estimation simultaneously, and the estimator always has the usual $\sqrt{n}$-rate of convergence.
JEL Classification:
C13; C18; C51; C52

1. Introduction

It has long been noted that some parametric models may have singular information matrices but still be identifiable. For example, Silvey (1959) finds that the score statistic in a single-parameter identifiable model can be zero for all data, and Cox and Hinkley (1974) notice that a zero score can arise in the estimation of variance component parameters. Zero or linearly dependent scores imply that the information matrix is singular. Other examples include, among others, parametric mixture models that include one homogeneous distribution (Kiefer 1982), simultaneous equations models (Sargan 1983), the sample selection model (Lee and Chesher 1986), the stochastic frontier function model (Lee 1993), and a finite mixture model (Chen 1995).
Some authors have considered the asymptotic distribution of the maximum likelihood estimator (MLE) in irregular cases with singular information matrices. Cox and Hinkley (1974) show that the asymptotic distribution of the MLE of variance components can be found after a power reparameterization. Lee (1993) derives the asymptotic distribution of the MLE for parameters in a stochastic frontier function model with a singular information matrix by several reparameterizations, so that the transformed model has a nonsingular information matrix. Rotnitzky et al. (2000) consider a general parametric model where the information matrix has rank one less than the number of parameters, and derive the asymptotic distribution of the MLE by reparameterizations and by investigating higher-order Taylor expansions of the first order conditions. Typically, the MLEs of some components of the parameter vector in the irregular case may have a slower than $\sqrt{n}$-rate of convergence and non-normal asymptotic distributions, while the MLE in the regular case has the $\sqrt{n}$-rate of convergence and is asymptotically normally distributed. As a result, for inference purposes, one may need to first test whether the parameter vector takes a certain value at which the information matrix is singular.
We consider the case where the irregularity of a singular information matrix occurs when a subvector of the parameter vector takes a specific true value, while the information matrix at any other value is nonsingular. For example, a zero true value of a variance parameter in the stochastic frontier function model, and zero true values of a correlation coefficient and of the coefficients of variables in the selection equation of a sample selection model, lead to singular information matrices (Lee and Chesher 1986). For such a model, if the true value of the subvector is known and imposed in the model, the restricted model will usually have a nonsingular information matrix for the remaining parameters and the MLE has the usual $\sqrt{n}$-rate of convergence. This reminds us of the oracle properties of the lasso in linear regressions, i.e., it may select the correct model with probability approaching one (w.p.a.1.) and the resulting estimator satisfies the properties as if we knew the true model (Fan and Li 2001). In this paper, we propose to estimate an irregular parametric model by a penalized maximum likelihood (PML) which appends a lasso penalty term to the likelihood function. Without loss of generality, we consider the situation where the information matrix is singular at a zero true value $\theta_{20}$ of a subvector $\theta_2$ of the parameter vector $\theta$.¹ We expect that a PML with oracle properties for parametric models can avoid the slow rate of convergence and nonstandard asymptotic distribution of the irregular case. We penalize $\theta_2$ through its Euclidean norm, as for the group lasso (Yuan and Lin 2006), since the interest is in whether the whole vector $\theta_2$, rather than its individual components, is zero. The penalty term is constructed to be adaptive by using an initial consistent estimator, as for the adaptive lasso (Zou 2006) and adaptive group lasso (Wang and Leng 2008), so that the PML can have the oracle properties. In the irregular case, the initial estimate used to construct the adaptive penalty term has a slower rate of convergence than in the literature, but the lasso approach can still be applied if the tuning parameter is properly selected. We prove the oracle properties under regularity conditions. Consequently, the PML can implement model selection and estimation simultaneously. Because the model with $\theta_{20} \neq 0$ and the restricted one with $\theta_{20} = 0$ imposed both have nonsingular information matrices, the PML estimator (PMLE) always has the $\sqrt{n}$-rate of convergence and standard asymptotic distributions.
The PML criterion function has a tuning parameter in the penalty term. In the asymptotic analysis, the tuning parameter is assumed to have a certain order so that the PML can have the oracle properties. In finite samples, the tuning parameter needs to be chosen. For least squares shrinkage methods, the generalized cross validation (GCV) and information criteria such as the Akaike information criterion (AIC) and Bayesian information criterion (BIC) are often used. While the GCV and AIC cannot identify the true model consistently (Wang et al. 2007), the BIC can (Wang and Leng 2007; Wang et al. 2007; Wang et al. 2009). Zhang et al. (2010) propose a general information criterion (GIC) that nests the AIC and BIC and show its consistency in model selection. Following Zhang et al. (2010), we propose to choose the tuning parameter by optimizing an information criterion. We show that the procedure is consistent in model selection under regularity conditions. Because of the irregularity in the model, the proposed information criterion can be different from the traditional AIC, BIC and GIC.
Jin and Lee (2017) show that, in a matrix exponential spatial specification model, the covariance matrix of the gradient vector of the nonlinear two stage least squares (N2SLS) criterion function can be singular when a subvector of the parameter vector has true value zero. They consider the penalized lasso N2SLS estimation of that model. This paper generalizes the lasso method to the ML estimation of the several models with singular information matrices cited above. For the model in Jin and Lee (2017), the true parameter vector is in the interior of the parameter space. However, for some of the irregular models cited above, the true parameter vector is on the boundary of the parameter space. We thus also consider the boundary case in this paper.
The PML approach proposed in this paper can be applied to all of the parametric models with singular information matrices mentioned above, e.g., the sample selection model and the stochastic frontier function model. Since the PMLE has the $\sqrt{n}$-rate of convergence for the components which are not super-consistently estimated, we expect the PMLE to outperform the unrestricted MLE in finite samples for such models in the irregular case, e.g., in terms of smaller root mean squared errors and shorter confidence intervals.
The rest of the paper is organized as follows. Section 2 presents the PML estimation procedure for general parametric models with singular information matrices. Section 3 discusses specifically the PMLEs for the sample selection model and stochastic frontier function model. Section 4 reports some Monte Carlo results. Section 5 concludes. In Appendix A, we derive the asymptotic distribution of the MLE of the sample selection model in the irregular case. Proofs are in Appendix B.

2. PMLE for Parametric Models

Let the data $(y_1, \dots, y_n)$ be i.i.d. with the probability density function (pdf) $f(y;\theta_0)$, a member of the family of pdfs $f(y;\theta)$, $\theta \in \Theta$, if the $y$'s are continuous random variables. If the $y$'s are discrete, $f(y;\theta)$ will be a probability mass function. Furthermore, if the $y$'s are mixed continuous and discrete random variables, $f(y;\theta)$ will be a mixed probability mass and density function. Assumption 1 is a standard condition for the consistency of the MLE (Newey and McFadden 1994).
Assumption 1.
Suppose that $y_i$, $i = 1, \dots, n$, are i.i.d. with pdf (or mixed probability mass and density function) $f(y_i;\theta_0)$ and (i) if $\theta \neq \theta_0$ then $f(y_i;\theta) \neq f(y_i;\theta_0)$ with probability one; (ii) $\theta_0 \in \Theta$, which is compact; (iii) $\ln f(y_i;\theta)$ is continuous at each $\theta \in \Theta$ with probability one; (iv) $E[\sup_{\theta\in\Theta} |\ln f(y;\theta)|] < \infty$.
Rothenberg (1971) shows that, if the information matrix of a parametric model has constant rank in an open neighborhood of the true parameter vector, then local identification of the parameters is equivalent to nonsingularity of the information matrix at the true parameter vector. Local identification is necessary but not sufficient for global identification. For the examples in the introduction, the information matrix of a parametric model is singular when the true parameter vector takes a certain value, but it is nonsingular at other values. Thus, the result in Rothenberg (1971) does not apply, but the parameters may still be identifiable in all cases.
We consider the case where the information matrix of the likelihood function is singular at $\theta_0$, with a subvector $\theta_{20}$ of $\theta_0$ being zero. We propose to estimate $\theta = (\theta_1', \theta_2')'$ by maximizing the following penalized likelihood function:
$$Q_n(\theta) = \big[L_n(\theta) - \lambda_n \|\tilde\theta_{2n}\|^{-\mu} \|\theta_2\|\big]\, I(\tilde\theta_{2n} \neq 0) + L_n(\theta_1, 0)\, I(\tilde\theta_{2n} = 0), \qquad (1)$$
where $L_n(\theta) = \frac{1}{n}\sum_{i=1}^n l_i(\theta)$ is the log likelihood function divided by $n$ with $l_i(\theta) = \ln f(y_i;\theta)$, $\lambda_n > 0$ is a tuning parameter, $\mu > 0$ is a constant, $\tilde\theta_{2n}$ is an initial consistent estimator of $\theta_2$, which can be the MLE or any other consistent estimator, $\|\cdot\|$ denotes the Euclidean norm, and $I(\cdot)$ is the indicator function. The PMLE $\hat\theta_n$ maximizes (1).
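For concreteness, the maximization of (1) can be sketched in a few lines of code. The following Python fragment only illustrates the structure of the criterion (the paper's own experiments are in MATLAB, and all names below are our hypothetical choices, not the authors' code); it treats the log likelihood as a black-box function and uses a derivative-free optimizer, since the penalty is not differentiable at $\theta_2 = 0$:

```python
import numpy as np
from scipy.optimize import minimize

def pml_objective(theta, log_lik, idx2, theta2_init, lam, mu):
    """Penalized likelihood Q_n(theta) of equation (1)."""
    norm_init = np.linalg.norm(theta2_init)
    if norm_init == 0.0:
        theta = theta.copy()
        theta[idx2] = 0.0          # initial estimate is zero: impose theta_2 = 0
        return log_lik(theta)
    penalty = lam * norm_init ** (-mu) * np.linalg.norm(theta[idx2])
    return log_lik(theta) - penalty

def pmle(log_lik, theta_init, idx2, lam, mu):
    """Maximize Q_n starting from an initial consistent estimate theta_init."""
    theta2_init = theta_init[idx2]
    res = minimize(lambda t: -pml_objective(t, log_lik, idx2, theta2_init, lam, mu),
                   theta_init, method="Nelder-Mead")
    theta_hat = res.x
    if np.linalg.norm(theta2_init) == 0.0:
        theta_hat[idx2] = 0.0      # keep the restricted solution exactly at zero
    return theta_hat
```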
Assumption 2.
$\tilde\theta_{2n} = \theta_{20} + o_p(1)$.
The initial estimator $\tilde\theta_{2n}$ can be exactly zero, especially when $\theta_{20}$ is on the boundary of the parameter space, e.g., for a zero variance parameter in the stochastic frontier function model of Section 3.2. With a zero value of $\tilde\theta_{2n}$, the PMLE of $\theta_2$ in (1) is set to zero and the PMLE equals the restricted MLE with the restriction $\theta_2 = 0$ imposed. The tuning parameter $\lambda_n$ needs to be positive and to tend to zero as the sample size increases.
Assumption 3.
$\lambda_n > 0$ and $\lambda_n = o(1)$.
We have the consistency of $\hat\theta_n$ as long as $\lambda_n$ goes to zero as $n$ goes to infinity, as required in Assumption 3.
Proposition 1.
Under Assumptions 1–3, $\hat\theta_n = \theta_0 + o_p(1)$.
The convergence rate of $\hat\theta_n$ can be derived under regularity conditions. Let $\Theta = \Theta_1 \times \Theta_2$, where $\Theta_1$ and $\Theta_2$ are, respectively, the parameter spaces of $\theta_1$ and $\theta_2$. We investigate the case where $\theta_{20}$ is on the boundary of $\Theta_2$ as well as the case where $\theta_{20}$ is in the interior $\operatorname{int}(\Theta_2)$ of $\Theta_2$. The remaining true parameters $\theta_{10}$ are always in the interior of $\Theta_1$. The following regularity condition is required.
Assumption 4.
(i) $\theta_0 = (\theta_{10}', \theta_{20}')' \in \Theta_1 \times \Theta_2$, where $\Theta_1$ and $\Theta_2$ are compact convex subsets of a finite dimensional Euclidean space $\mathbb{R}^k$; (ii) $\theta_{10} \in \operatorname{int}(\Theta_1)$; (iii) $\Theta_2 = [0, \zeta)$ for some $\zeta > 0$ if $\theta_2 \in \mathbb{R}^1$, and $\theta_{20} \in \operatorname{int}(\Theta_2)$ if $\theta_2 \in \mathbb{R}^{k_2}$ with $k_2 \geq 2$; (iv) $f(y_i;\theta)$ is twice continuously differentiable and $f(y;\theta) > 0$ on $S$, where $S = N(\theta_0) \cap (\Theta_1 \times \Theta_2)$ with $N(\theta_0)$ being an open neighborhood of $\theta_0$ in $\mathbb{R}^k$; (v) $\sup_{\theta\in S} \int \|\partial f(y;\theta)/\partial\theta\|\,dy < \infty$ and $\sup_{\theta\in S} \int \|\partial^2 f(y;\theta)/\partial\theta\,\partial\theta'\|\,dy < \infty$; (vi) $E\big(\frac{\partial l_i(\theta_0)}{\partial\theta}\frac{\partial l_i(\theta_0)}{\partial\theta'}\big)$ exists and is nonsingular when $\theta_{20} \neq 0$, and $E\big(\frac{\partial l_i(\theta_0)}{\partial\theta_1}\frac{\partial l_i(\theta_0)}{\partial\theta_1'}\big)$ exists and is nonsingular when $\theta_{20} = 0$; (vii) $E\big(\sup_{\theta\in S} \big\|\frac{\partial^2 l_i(\theta)}{\partial\theta\,\partial\theta'}\big\|\big) < \infty$.
In the literature, several irregular models have parameters on the boundary: the model on simplified components of variances in Cox and Hinkley (1974, p. 117), the mixture model in Kiefer (1982) and the stochastic frontier function model in Aigner et al. (1977).² For these models, a scalar parameter $\theta_2$ is always nonnegative but irregularity occurs when $\theta_{20} = 0$ on the boundary. True parameters other than $\theta_{20}$ are in the interior of their parameter spaces. We thus assume that $\theta_{20}$ is a scalar when it can be on the boundary of its parameter space.³ Conditions (iv)–(vii) in Assumption 4 are standard. Note that, for the partial derivative with respect to $\theta_2$ at a $\theta_{20}$ on the boundary, only perturbations within $\Theta_2$ are considered, as for the (left/right) partial derivatives in Andrews (1999). The convexity of $\Theta_1$ and $\Theta_2$ makes such derivatives well-defined, and convexity is relevant when the mean value theorem is applied to the log likelihood function.
For our main focus in this paper, at $\theta_{20} = 0$, the information matrix is singular. However, our lasso estimation method is also applicable to regular models where the information matrix might be nonsingular even at $\theta_{20} = 0$. The following proposition provides such generality.
Proposition 2.
Under Assumptions 1–4, if $E\big(\frac{\partial l_i(\theta_0)}{\partial\theta}\frac{\partial l_i(\theta_0)}{\partial\theta'}\big)$ exists and is nonsingular, then $\hat\theta_n = \theta_0 + O_p(n^{-1/2} + \lambda_n)$.
Proposition 2 derives the rate of convergence of the PMLE $\hat\theta_n$ in the case of a nonsingular information matrix. When $\theta_{20} \neq 0$, we have assumed in Assumption 4 that the information matrix is nonsingular. When $\theta_{20} = 0$, Proposition 2 is relevant in the event that the PML is formulated with a reparameterized model that has a nonsingular information matrix and the reparameterized unknown parameters are represented by $\theta$.
We now consider whether the PMLE has the sparsity property, i.e., whether $\hat\theta_{2n}$ is equal to zero w.p.a.1. when $\theta_{20} = 0$. For the lasso penalty function, $\lambda_n$ and the initial consistent estimate $\tilde\theta_n$ are required to have certain orders of convergence for the sparsity property.
Assumption 5.
Suppose that $\tilde\theta_n - \theta_0 = O_p(n^{-s})$, where $0 < s \leq 1/2$. The tuning parameter sequence $\lambda_n$ is selected to satisfy either
(i) $\lambda_n$ converges to zero such that $\lambda_n n^{\mu s} \to \infty$ as $n \to \infty$; or
(ii) if $E\big(\frac{\partial l_i(\theta_0)}{\partial\theta}\frac{\partial l_i(\theta_0)}{\partial\theta'}\big)$ exists and is nonsingular, $\lambda_n$ is selected to have at most the order $O(n^{-1/2})$ such that $\lambda_n n^{\mu s + 1/2} \to \infty$ as $n \to \infty$.
According to Rotnitzky et al. (2000), in the case that the information matrix is singular with rank one less than the number of parameters $k$, there exists a reparameterization such that the MLE of one of the transformed parameter components converges at a rate slower than $\sqrt{n}$, while the remaining $k-1$ transformed components converge at the $\sqrt{n}$-rate. As a result, some components of the MLE in terms of the original parameter vector have a slower than $\sqrt{n}$-rate of convergence, while the remaining components may have the $\sqrt{n}$-rate. In this case, for $\tilde\theta_n$ as a whole, $s < 1/2$ in Assumption 5 if $\tilde\theta_n$ is the MLE. Assumption 5 (i) can be satisfied if $\lambda_n$ is selected to have a relatively slow rate of convergence to 0. The condition differs from that in the literature due to the irregularity issue we are considering. In the case that the PML is formulated with a reparameterized model that has a nonsingular information matrix and $\theta$ represents the reparameterized unknown parameter vector, Assumption 5 (ii) is relevant, with $s = 1/2$ if $\tilde\theta_n$ is the MLE.
The oracle properties of the PMLE, including the sparsity property, are presented in Proposition 3.⁴ When $\theta_{20} = 0$, the PMLE $\hat\theta_{2n}$ of $\theta_2$ equals zero w.p.a.1., and $\hat\theta_{1n}$ has the same asymptotic distribution as that of the MLE as if we knew $\theta_{20} = 0$.
Proposition 3.
Under Assumptions 1–5, if $\theta_{20} = 0$, then $\lim_{n\to\infty} P(\hat\theta_{2n} = 0) = 1$, and $\sqrt{n}(\hat\theta_{1n} - \theta_{10}) \xrightarrow{d} N\Big(0, -\Big(E\frac{\partial^2 l(\theta_0)}{\partial\theta_1\,\partial\theta_1'}\Big)^{-1}\Big)$.
We next turn to the case with $\theta_{20} \neq 0$. The consistency of $\hat\theta_n$ for $\theta_0$ in Proposition 1 guarantees that $P(\hat\theta_{2n} \neq 0)$ goes to 1 if $\theta_{20} \neq 0$. By Proposition 2, in order that $\hat\theta_n$ can converge to $\theta_0$ with $\sqrt{n}$-consistency and without a first order asymptotic impact of $\lambda_n$ when $\theta_{20} \neq 0$, we need to select $\lambda_n$ to converge to zero at the order $o(n^{-1/2})$.
Assumption 6.
$\lambda_n = o(n^{-1/2})$.
Assumptions 5 and 6 need to be coordinated with each other, as they impose opposite requirements. By taking $\lambda_n = O(n^{-\tau})$ for some $\tau > 1/2$, Assumption 6 holds. Assumption 5 (i) can then be satisfied if we take $\mu$ large enough that $\mu s > \tau > 1/2$. For such a $\tau$ to exist, it is necessary to take $\mu > 1/(2s)$ for a given $s$. For the regular case in Assumption 5 (ii), the value of $\mu$ is relatively more flexible.
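As a concrete numerical illustration of this coordination (the numbers here are ours, chosen only for this example): with $s = 1/6$ and $\mu = 4$, taking $\tau = 0.6$ gives
$$\lambda_n = n^{-0.6} = o(n^{-1/2}), \qquad \lambda_n n^{\mu s} = n^{-0.6}\, n^{2/3} = n^{1/15} \to \infty,$$
so Assumptions 5 (i) and 6 hold simultaneously; this is consistent with the necessary requirement $\mu > 1/(2s) = 3$.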
Proposition 4.
Under Assumptions 1–4 and 6, if $\theta_{20} \neq 0$, then $\hat\theta_n - \theta_0 = O_p(n^{-1/2})$. Furthermore, as $\theta_0 \in \operatorname{int}(\Theta)$, $\sqrt{n}(\hat\theta_n - \theta_0) \xrightarrow{d} N\Big(0, -\Big(E\frac{\partial^2 l(\theta_0)}{\partial\theta\,\partial\theta'}\Big)^{-1}\Big)$.
We next consider the selection of the tuning parameter $\lambda_n$. To make explicit the dependence of the PMLE $\hat\theta_n$ on a tuning parameter $\lambda$, denote the PMLE $\hat\theta_\lambda = \arg\max_{\theta\in\Theta}\big\{[L_n(\theta) - \lambda \|\tilde\theta_{2n}\|^{-\mu}\|\theta_2\|]\, I(\tilde\theta_{2n} \neq 0) + L_n(\theta_1, 0)\, I(\tilde\theta_{2n} = 0)\big\}$ for a given $\lambda$.⁵ Let $\Lambda = [0, \lambda_{\max}]$ be an interval from which the tuning parameter $\lambda$ is selected, where $\lambda_{\max}$ is a finite positive number. We propose to select the tuning parameter that maximizes the following information criterion:
$$H_n(\lambda) = L_n(\hat\theta_\lambda) + \Gamma_n I(\hat\theta_{2\lambda} = 0), \qquad (2)$$
where $\{\Gamma_n\}$ is a positive sequence of constants, and $\hat\theta_{2\lambda}$ is the PMLE of $\theta_2$ for a given $\lambda$. That is, given $\Gamma_n$, the selected tuning parameter is $\hat\lambda_n = \arg\max_{\lambda\in\Lambda} H_n(\lambda)$. The term $\Gamma_n$ is an extra bonus for setting $\theta_2$ to zero. Some conditions on $\Gamma_n$ are also invoked.
Assumption 7.
$\Gamma_n > 0$, $\Gamma_n \to 0$ and $n^{2s}\Gamma_n \to \infty$ as $n \to \infty$.
To balance the order requirements $\Gamma_n \to 0$ and $n^{2s}\Gamma_n \to \infty$, $\Gamma_n$ can be taken to be $O(n^{-s})$. As this order changes with $s$, the information criterion in (2) can be different from the traditional ones such as the AIC, BIC and Hannan-Quinn information criterion.
Let $\{\bar\lambda_n\}$ be an arbitrary sequence of tuning parameters which satisfies Assumptions 3, 5 and 6, e.g., $\bar\lambda_n = n^{-\mu s/2 - 1/4}$, where $\mu$ is chosen such that $\mu s > 1/2$. Define $\Lambda_n = \{\lambda \in \Lambda: \hat\theta_{2\lambda} = 0 \text{ if } \theta_{20} \neq 0, \text{ and } \hat\theta_{2\lambda} \neq 0 \text{ if } \theta_{20} = 0\}$. In Proposition 5, we let the initial estimator $\tilde\theta_n$ be the MLE.
Proposition 5.
Under Assumptions 1–7, $P\big(\sup_{\lambda\in\Lambda_n} H_n(\lambda) < H_n(\bar\lambda_n)\big) \to 1$ as $n \to \infty$.
Proposition 5 states that model selection by the tuning parameter selection procedure is consistent. It implies that any $\lambda$ in $\Lambda_n$, i.e., any tuning parameter that fails to identify the true model, would not be selected asymptotically by the information criterion in (2) as the optimal tuning parameter in $\Lambda$, because such a $\lambda$ is less favorable than any $\bar\lambda_n$, which can identify the true model asymptotically.
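In practice, the maximization of (2) over $\Lambda$ can be carried out by a grid search. The following Python sketch is illustrative only; it reuses the hypothetical pmle function from the sketch above and follows the paper's convention of treating estimates below a small threshold as zero:

```python
import numpy as np

def select_lambda(log_lik, theta_init, idx2, mu, gamma_n, lam_grid, tol=1e-5):
    """Pick the lambda on lam_grid maximizing H_n(lambda) in equation (2)."""
    best = (None, -np.inf, None)
    for lam in lam_grid:
        theta_hat = pmle(log_lik, theta_init, idx2, lam, mu)
        if np.max(np.abs(theta_hat[idx2])) < tol:   # treat tiny estimates as zero
            theta_hat[idx2] = 0.0
        h = log_lik(theta_hat) + gamma_n * float(np.all(theta_hat[idx2] == 0.0))
        if h > best[1]:
            best = (lam, h, theta_hat)
    return best[0], best[2]
```

Here gamma_n plays the role of $\Gamma_n$, which should have the order $O(n^{-s})$ as discussed above.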

3. Examples

In this section, we illustrate the PMLEs of the sample selection model as well as the stochastic frontier function model. In the irregular case, the true parameter vector is in the interior of its parameter space for the sample selection model, but it is on the boundary for the stochastic frontier function model.

3.1. The Sample Selection Model

We consider the sample selection model in Lee and Chesher (1986), which can have a singular information matrix. The model is as follows:
$$y_i = x_i'\beta + \epsilon_i, \qquad y_i^* = z_i'\gamma - u_i, \qquad i = 1, \dots, n, \qquad (3)$$
where $n$ is the sample size, $(x_i', z_i')$ is the $i$th observation on the exogenous variables, and the vectors $(\epsilon_i, u_i)'$, for $i = 1, \dots, n$, are independently distributed as the bivariate normal $N\Big(0, \begin{pmatrix} \sigma^2 & \rho\sigma \\ \rho\sigma & 1 \end{pmatrix}\Big)$. The variable $y_i^*$ is not observed, but a binary indicator $I_i$ is observed, with $I_i = 1$ if and only if $y_i^* \geq 0$ and $I_i = 0$ otherwise. The variable $y_i$ is only observed when $I_i = 1$. Let $\theta = (\beta', \sigma^2, \gamma', \rho)'$, $\beta = (\beta_1, \beta_2')'$ and $\gamma = (\gamma_1, \gamma_2')'$, where $\beta_1$ and $\gamma_1$ are, respectively, the coefficients of the intercept terms in the outcome and selection equations. According to Lee and Chesher (1986), when $x_i$ contains an intercept term but the true values of $\gamma_2$ and the correlation coefficient $\rho$ are zero, elements of the score vector are linearly dependent and the information matrix is singular.⁶ For this model, the true parameter vector $\theta_0$ which causes the irregularity is in the interior of the parameter space.
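The log likelihood of this model, stated as (A1) in Appendix A, can be coded directly. The following is a minimal Python sketch (our own illustrative transcription of (A1), not the authors' code; y may take any placeholder value where $I_i = 0$):

```python
import numpy as np
from scipy.stats import norm

def sel_log_lik(theta, y, x, z, d):
    """L_n(theta) for model (3): theta = (beta', sigma^2, gamma', rho)'."""
    kx, kz = x.shape[1], z.shape[1]
    beta = theta[:kx]
    sig2 = theta[kx]
    gamma = theta[kx + 1: kx + 1 + kz]
    rho = theta[kx + 1 + kz]
    sig = np.sqrt(sig2)
    zg = z @ gamma
    eps = np.where(d == 1, y - x @ beta, 0.0)   # eps only matters when d = 1
    a = (zg - rho * eps / sig) / np.sqrt(1.0 - rho ** 2)
    ll = ((1 - d) * norm.logcdf(-zg)            # log(1 - Phi(z'gamma)) when d = 0
          + d * (-0.5 * np.log(2 * np.pi * sig2) - eps ** 2 / (2 * sig2)
                 + norm.logcdf(a)))
    return ll.mean()
```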
We derive the asymptotic distribution of the MLE in this irregular case in Appendix A.⁷ Let $\tilde\theta_n$ be the MLE of $\theta$. It is shown that, for $(\gamma_{20}', \rho_0) \neq 0$, all components of $\tilde\theta_n$ have the usual $\sqrt{n}$-rate of convergence and are asymptotically normal. However, at $(\gamma_{20}', \rho_0) = 0$, $n^{1/6}\tilde\rho_n$ has the same asymptotic distribution as that of $(n^{1/2}\tilde r_n)^{1/3}$, where $\tilde r_n$ is the estimate of a transformed parameter and $n^{1/2}\tilde r_n$ is asymptotically normal; $n^{1/6}(\tilde\beta_{1n} - \beta_{10})$ has the same asymptotic distribution as that of $\sigma_0\psi_0(n^{1/2}\tilde r_n)^{1/3}$, where $\psi_0 = \phi(\gamma_{10})/\Phi(\gamma_{10})$; and $n^{1/3}(\tilde\sigma_n^2 - \sigma_0^2)$ has the same asymptotic distribution as that of $\sigma_0^2\psi_0(\psi_0 + \gamma_{10})(n^{1/2}\tilde r_n)^{2/3}$; while $\sqrt{n}(\tilde\beta_{2n}' - \beta_{20}', \tilde\gamma_n' - \gamma_0')$ is asymptotically normal. Thus, $\tilde\rho_n$, $\tilde\beta_{1n}$ and $\tilde\sigma_n^2$ have slower than $\sqrt{n}$-rates of convergence, but $\tilde\beta_{2n}$ and $\tilde\gamma_n$ have the usual $\sqrt{n}$-rate of convergence.
Let $\theta_1 = (\beta', \sigma^2, \gamma_1)'$ and $\theta_2 = (\gamma_2', \rho)'$. The PML criterion function for model (3) with the MLE $\tilde\theta_{2n}$ is
$$[L_n(\theta) - \lambda_n \|\tilde\theta_{2n}\|^{-\mu}\|\theta_2\|]\, I(\tilde\theta_{2n} \neq 0) + L_n(\theta_1, 0)\, I(\tilde\theta_{2n} = 0). \qquad (4)$$
Since $\tilde\gamma_{2n} = O_p(n^{-1/2})$ and $\tilde\rho_n = O_p(n^{-1/6})$, Assumptions 5 (i) and 6 hold when $\mu$ is greater than 3. By Assumption 7, in the information criterion function (2), $\Gamma_n$ should satisfy $\Gamma_n \to 0$ and $n^{1/3}\Gamma_n \to \infty$ as $n \to \infty$.
According to the discussion in deriving the asymptotic distribution of the MLE via reparameterizations, the criterion function for the PMLE can alternatively be formulated with the function $L_{n3}(\eta, r)$ of the transformed parameters as
$$[L_{n3}(\eta, r) - \lambda_n \|\tilde\omega_n\|^{-\mu_1}\|\omega\|]\, I(\tilde\omega_n \neq 0) + L_{n3}(\eta_1, 0)\, I(\tilde\omega_n = 0) = [L_n(\theta) - \lambda_n \|\tilde\omega_n\|^{-\mu_1}\|\omega\|]\, I(\tilde\theta_{2n} \neq 0) + L_n(\theta_1, 0)\, I(\tilde\theta_{2n} = 0), \qquad (5)$$
where $\eta = (\beta_1 - \sigma_0\psi_0\rho, \beta_2', \sigma^2 - \rho^2\sigma_0^2\psi_0(\psi_0 + \gamma_{10}), \gamma')' = (\varpi', \gamma_2')'$, $r = \rho^3$, and $\omega = (\gamma_2', r)'$. While $\gamma_2$ enters the penalty terms of (4) and (5) in the same way, this is not the case for $\rho$: it is $\rho$ in (4) but $\rho^3$ in (5). Since $L_{n3}(\eta, r)$ has a nonsingular information matrix, by Proposition 2, the PMLE has the order $O_p(n^{-1/2} + \lambda_n)$, which is $O_p(n^{-1/2})$ under the assumption $\lambda_n = o(n^{-1/2})$. Then $\lambda_n n^{\mu_1 s + 1/2} = \lambda_n n^{(\mu_1+1)/2} \to \infty$ as $n \to \infty$ in Assumption 5 (ii) will be relevant. Thus, for the PML criterion function (5), as long as $\mu_1 > 0$, no further condition on $\mu_1$ is needed. Furthermore, Assumption 7 for $\Gamma_n$ in the information criterion function (2), with $\Gamma_n \to 0$ and $n\Gamma_n \to \infty$ as $n \to \infty$, is relevant, and we can take $\Gamma_n = O(n^{-1/2})$.

3.2. The Stochastic Frontier Function Model

Consider the following stochastic frontier function model:
$$y_i = x_i'\beta - u_i + v_i, \qquad i = 1, \dots, n, \qquad (6)$$
where $x_i$ is a $k$-dimensional vector of exogenous variables which contains a constant term, the disturbance $u_i \geq 0$ represents technical inefficiency, $v_i$ represents uncontrollable disturbance, and $u_i$ and $v_i$ are independent. Following the literature, $u_i$ is assumed to be half normal with the pdf
$$h(u) = \frac{2}{\sqrt{2\pi}\,\sigma_1}\exp\Big(-\frac{u^2}{2\sigma_1^2}\Big), \qquad u \geq 0, \qquad (7)$$
and $v_i \sim N(0, \sigma_2^2)$. As in Aigner et al. (1977), let $\delta = \sigma_1/\sigma_2$ and $\sigma^2 = \sigma_1^2 + \sigma_2^2$. For a random sample of size $n$, the log likelihood function divided by $n$ is
$$L_n(\theta) = \ln(2) - \frac{1}{2}\ln(2\pi) - \frac{1}{2}\ln(\sigma^2) - \frac{1}{2n\sigma^2}\sum_{i=1}^n (y_i - x_i'\beta)^2 + \frac{1}{n}\sum_{i=1}^n \ln\Big[1 - \Phi\Big(\frac{\delta(y_i - x_i'\beta)}{\sigma}\Big)\Big], \qquad (8)$$
where $\theta = (\beta', \sigma^2, \delta)'$. In this model, $\delta$ is nonnegative and, in the irregular case, the true parameter $\delta_0 = 0$ lies on the boundary, which represents the absence of technical inefficiency. According to Lee (1993), when $\delta_0 = 0$, the information matrix is singular and the MLE of $\delta$ has the slow convergence rate $n^{1/6}$; when $\delta_0 \neq 0$, the information matrix has full rank and the MLE has the $\sqrt{n}$-rate of convergence. The asymptotic distribution of the MLE when $\delta_0 = 0$ is derived by transforming the model into one with a nonsingular information matrix via several reparameterizations. Thus, the PML estimation can be formulated similarly to the sample selection model, using either the original model or the transformed model. Note that, in finite samples, the MLE of $\delta$, regardless of whether $\delta_0 = 0$ or not, can be zero with positive probability. A necessary and sufficient condition for the MLE of $\delta$ to be zero is $\sum_{i=1}^n \hat\epsilon_i^3 \geq 0$, where the $\hat\epsilon_i$'s are the least squares residuals (Lee 1993).
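The log likelihood (8) and the zero-MLE condition just stated are equally direct to code. A minimal Python sketch (illustrative only; the residual-moment check follows the condition above as we have reconstructed it):

```python
import numpy as np
from scipy.stats import norm

def sf_log_lik(theta, y, x):
    """L_n(theta) of equation (8): theta = (beta', sigma^2, delta)'."""
    k = x.shape[1]
    beta, sig2, delta = theta[:k], theta[k], theta[k + 1]
    eps = y - x @ beta
    return (np.log(2.0) - 0.5 * np.log(2.0 * np.pi) - 0.5 * np.log(sig2)
            - np.mean(eps ** 2) / (2.0 * sig2)
            + np.mean(norm.logcdf(-delta * eps / np.sqrt(sig2))))

def mle_delta_is_zero(y, x):
    """True when the MLE of delta is zero: sum of cubed OLS residuals >= 0."""
    e = y - x @ np.linalg.lstsq(x, y, rcond=None)[0]
    return np.sum(e ** 3) >= 0.0
```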

4. Monte Carlo

In this section, we report results from some Monte Carlo experiments for both the sample selection model and the stochastic frontier function model. The code files are written and run in MATLAB.

4.1. The Sample Selection Model

For the sample selection model, in the experiments, there are two exogenous variables in $x_i$: one is an intercept term and the other is drawn randomly from the standard normal distribution. The true vector of coefficients of $x_i$ is $(1, 1)'$. There are also two exogenous variables in $z_i$: an intercept term with true coefficient 1, and a variable randomly drawn from the standard normal distribution, whose true coefficient is 2, 0.5 or 0. Two values of $\sigma_0^2$, 2 and 0.5, are considered. The true $\rho_0$ is 0.7, −0.7, 0.3, −0.3 or 0. In the information criterion function (2) for the tuning parameter selection, $\mu$ is set to 4 and $\Gamma_n = 0.26 n^{-1/2}$.⁸ An estimate is regarded as zero if it is smaller than $10^{-5}$ in absolute value. The number of Monte Carlo repetitions is 1000. The sample sizes considered are $n = 200$ and 600.
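For replication purposes, one draw from this design can be sketched as follows (Python; our illustrative code, not the authors' MATLAB files):

```python
import numpy as np

def draw_selection_sample(n, beta0, gamma0, sig2_0, rho0, rng):
    """One Monte Carlo sample from model (3) under the Section 4.1 design."""
    x = np.column_stack([np.ones(n), rng.standard_normal(n)])
    z = np.column_stack([np.ones(n), rng.standard_normal(n)])
    sig = np.sqrt(sig2_0)
    cov = np.array([[sig2_0, rho0 * sig], [rho0 * sig, 1.0]])
    eps, u = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    d = (z @ np.asarray(gamma0) - u >= 0).astype(float)     # selection indicator
    y = np.where(d == 1, x @ np.asarray(beta0) + eps, 0.0)  # y unobserved if d = 0
    return y, x, z, d

# Example: one of the designs described in the text.
rng = np.random.default_rng(0)
y, x, z, d = draw_selection_sample(200, (1.0, 1.0), (1.0, 2.0), 2.0, 0.7, rng)
```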
Table 1 reports the probabilities that the PMLEs select the right model, i.e., the probabilities of the PMLEs of $\theta_2$ being zero when $\theta_{20} = 0$, and being nonzero when $\theta_{20} \neq 0$. We use PMLE-o and PMLE-t to denote the PMLEs obtained from the criterion functions formulated using, respectively, the original and transformed likelihood functions. When $\gamma_{20} = 2$ or 0.5, with the sample size $n = 200$, the probabilities are 1 or very close to 1; with the sample size $n = 600$, all probabilities are 1. When $\gamma_{20} = 0$ and $\rho_0 = 0$, the PMLEs estimate $\theta_2 = (\gamma_2, \rho)'$ as zero with high probabilities, higher than 95% for the PMLE-o and higher than 69% for the PMLE-t. The PMLE-o has higher probabilities of estimating $\theta_2$ as zero than the PMLE-t. As the sample size increases from 200 to 600, the correct model selection probabilities of the PMLE-o increase while those of the PMLE-t decrease. When $\gamma_{20} = 0$ but $\rho_0 \neq 0$, the PMLEs estimate $\theta_2$ as nonzero with very low probabilities. With $\gamma_{20} = 0$, we see that $\sigma_0\psi_0\frac{\partial L_n(\alpha_0,\rho)}{\partial\beta_1} + 2\rho\sigma_0^2\psi_0(\psi_0 + \gamma_{10})\frac{\partial L_n(\alpha_0,\rho)}{\partial\sigma^2} + \frac{\partial L_n(\alpha_0,\rho)}{\partial\rho} = O(\rho^2)$. Thus, the scores are approximately linearly dependent when $|\rho|$ is small. In finite samples, even though $\rho_0 \neq 0$, the identification can be weak and the MLE behaves similarly to that in the case with $\rho_0 = 0$, which has large bias and variance, as seen from Tables 4 and 5 below. As a result, the PMLEs, which use the MLEs to construct the penalty terms, have low probabilities of estimating $\theta_2$ to be non-zero.
Table 2 presents the biases, standard errors (SE) and root mean squared errors (RMSE) of the estimates when $\gamma_{20} = 2$. For a nonzero true parameter value, the biases, SEs and RMSEs are divided by the absolute value of the true parameter value. The upper panel is for the sample size $n = 200$. The restricted MLE, denoted MLE-r, usually has the largest bias, because it imposes the wrong restriction $\theta_2 = 0$. The MLE, PMLE-o and PMLE-t have almost identical summary statistics. Their biases and SEs are relatively low, e.g., the biases of $\rho$ are all below or equal to 0.012, or 2.5% for a nonzero true $\rho_0$, and the SEs are all below or equal to 0.246. As the SEs dominate the biases, the RMSEs have magnitudes similar to those of the SEs. As the value of $\rho_0$ changes, the biases, SEs and RMSEs do not change much. When $\sigma_0^2$ decreases from 2 to 0.5, all estimates of $\beta_1$, $\beta_2$ and $\sigma^2$ tend to have smaller biases and SEs, but those of $\gamma_1$, $\gamma_2$ and $\rho$ show little change. As the sample size increases to 600, all estimates have smaller biases, SEs and RMSEs.
Table 3 reports the biases, SEs and RMSEs of the estimates when $\gamma_{20} = 0.5$. The patterns are similar to those in Table 2. With a smaller $\gamma_{20}$, the biases and SEs of $\beta_2$, $\gamma_1$ and $\gamma_2$ tend to be smaller, but those of $\beta_1$, $\sigma^2$ and $\rho$ are larger.
Table 4 reports the biases, SEs and RMSEs when $\gamma_{20} = 0$ but $\rho_0 \neq 0$. We observe that the MLE has relatively large biases and SEs. For $n = 200$, the biases of $\rho$ can be as high as 0.46 in absolute value, or higher than 100%, and the SEs can be as high as 0.72. While the biases of the MLE are usually smaller than those of the MLE-r, the SEs are usually much larger, especially for $\beta_1$, $\sigma^2$ and $\rho$. In terms of the RMSEs, the MLE does not show an advantage over the MLE-r. The biases of the PMLE-o are usually smaller than those of the MLE-r and larger than those of the MLE, but the SEs of the PMLE-o are generally smaller than those of the MLE. The PMLE-t has smaller biases than the PMLE-o but larger SEs in most cases, more similar to the MLE. That is consistent with Table 1, since the PMLE-t estimates $\theta_2$ as nonzero with higher probabilities. The RMSEs of the PMLEs are usually smaller than those of the MLE but larger than those of the MLE-r. In this case, even though the PML methods do not provide good probabilities of selecting the non-zero models, the shrinkage feature of the lasso does deliver smaller RMSEs than those of the unconstrained MLE.
The results for $\gamma_{20} = 0$ and $\rho_0 = 0$ are reported in Table 5. As expected, the MLE-r usually has the smallest biases, SEs and RMSEs, since it imposes the correct restriction $\theta_2 = 0$. The biases, SEs and RMSEs of the PMLEs lie between those of the MLE-r and the MLE. The PMLE-o of $\beta_1$, $\sigma^2$, $\gamma_2$ and $\rho$ has significantly smaller biases, SEs and RMSEs than the MLE. The biases, SEs and RMSEs of the PMLE-t are smaller than those of the MLE, but larger than those of the PMLE-o, since it estimates $\theta_2$ as nonzero with higher probabilities. Note that the MLEs of $\beta_1$, $\sigma^2$ and $\rho$ have relatively very large SEs, and the MLE of $\sigma^2$ has very large biases, which can exceed 50%. With a smaller $\sigma_0^2$, the estimates generally have smaller biases, SEs and RMSEs. As $n$ increases to 600, the summary statistics of the PMLE-o become very similar to those of the MLE-r, and all estimates have smaller biases, SEs and RMSEs in general.

4.2. The Stochastic Frontier Function Model

In the Monte Carlo experiments for the stochastic frontier function model, there are three explanatory variables in $x$: the first is the intercept term, the second is randomly drawn from the standard normal distribution, and the third is randomly drawn from the centered chi-squared distribution $\chi^2(2) - 2$. The true coefficient vector $\beta_0$ of the explanatory variables is $(1, 1, 1)'$. We fix $\sigma_{20}^2 = 1$, thus $\sigma_0^2 = \delta_0^2 + 1$, where $\delta_0$ is 2, 1, 0.5, 0.25, 0.1 or 0. For the PML criterion function (1) using the original likelihood function, $\mu$ is set to 4, and $\Gamma_n$ in the information criterion (2) is taken to be $\Gamma_n = 0.1 n^{-1/2}$, chosen in a way similar to that for the sample selection model. For the PML criterion function using the transformed likelihood function as in (5), $3\mu_1 = 4$ and $\Gamma_n = 0.1 n^{-1/2}$.
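A corresponding sketch of one draw from this frontier design (again our illustrative Python, with $u_i$ generated as the absolute value of a normal variate, which is half normal):

```python
import numpy as np

def draw_frontier_sample(n, beta0, delta0, rng):
    """One Monte Carlo sample from model (6) with sigma_2^2 = 1, sigma_1 = delta0."""
    x = np.column_stack([np.ones(n), rng.standard_normal(n),
                         rng.chisquare(2, size=n) - 2.0])
    u = delta0 * np.abs(rng.standard_normal(n))   # half normal inefficiency
    v = rng.standard_normal(n)                    # N(0, 1) noise, sigma_2^2 = 1
    y = x @ np.asarray(beta0) - u + v
    return y, x

rng = np.random.default_rng(0)
y, x = draw_frontier_sample(200, (1.0, 1.0, 1.0), 0.5, rng)
```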
Table 6 reports the probabilities that the PMLEs select the right model. For the sample size $n = 200$, when $\delta_0 = 2$, both the PMLE-o and PMLE-t estimate $\delta$ as nonzero with probabilities higher than 80%. However, when $\delta_0 = 1$, 0.5, 0.25 or 0.1, the PMLEs estimate $\delta$ as nonzero with very low probabilities. With $\delta_0 = 0$, the PMLEs estimate $\delta$ as zero with probabilities higher than 85%. There is a weak identification issue for the stochastic frontier function model similar to that for the sample selection model: $\sigma_0\psi_0\frac{\partial L_n(\theta_{10},\delta)}{\partial\beta_1} + 2\sigma_0^2\psi_0^2\,\delta\,\frac{\partial L_n(\theta_{10},\delta)}{\partial\sigma^2} + \frac{\partial L_n(\theta_{10},\delta)}{\partial\delta} = O(\delta^2)$, where $\theta_{10} = (\beta_0', \sigma_0^2)'$ and $\psi_0 = \phi(0)/[1 - \Phi(0)]$. Thus, when $\delta_0$ is nonzero but small, the MLE and thus the PMLEs can perform poorly, as can be seen from Table 7. When the sample size increases from 200 to 600, the probabilities for $\delta_0 = 2$ and $\delta_0 = 0$ increase, but the others decrease, except that of the PMLE-o with $\delta_0 = 1$.
Table 7 presents the biases, SEs and RMSEs of the MLE, PMLE-o, PMLE-t and the MLE-r with the restriction $\delta = 0$ imposed, even though $\delta_0 \neq 0$. Since the MLE-r imposes the wrong restriction, it has very large biases for $\beta_1$, $\sigma^2$ and $\delta$, but it generally has the smallest SEs. The MLE, PMLE-o and PMLE-t of $\beta_2$ and $\beta_3$ have similar features. For $\delta_0 = 2$, 1 and 0.5, the biases of the PMLEs of $\beta_1$, $\sigma^2$ and $\delta$ are generally larger than those of the MLE, but are smaller than those of the MLE-r. The SEs of the PMLEs are larger than those of the MLE for $\delta_0 = 2$ and 1, but are smaller for smaller values of $\delta_0$. For $\delta_0 = 0.25$ and 0.1, even though the PMLEs estimate $\delta$ as zero with high probabilities, they have smaller biases, SEs and RMSEs than the MLE in almost all cases. As the sample size $n$ increases, all estimates have smaller SEs, the MLE has smaller biases, but the MLE-r and PMLEs may have smaller or larger biases.
The biases, SEs and RMSEs of the estimators when $\delta_0 = 0$ are presented in Table 8. All estimation methods have similar summary statistics for $\beta_2$ and $\beta_3$. For the other parameters, the MLE-r has the smallest biases, SEs and RMSEs, since it imposes the correct restriction $\delta = 0$. The PMLEs have much smaller biases, SEs and RMSEs than the MLE. The biases, SEs and RMSEs of the PMLE-o are smaller than those of the PMLE-t. As the sample size increases to 600, the summary statistics of the PMLE-o become very close to those of the MLE-r. For all estimates, we observe smaller biases, SEs and RMSEs at the larger sample size.

5. Conclusions

In this paper, we investigate the estimation of parametric models with singular information matrices by the PML based on the adaptive lasso (group lasso). An irregular model has a singular information matrix when a subvector $\theta_{20}$ of the true parameter vector $\theta_0$ is zero, but its information matrices at other parameter values are nonsingular. In addition, if we knew that $\theta_{20}$ is zero, the restricted model would always have a nonsingular information matrix. We show that the PMLEs have the oracle properties. Consequently, the PMLEs always have the $\sqrt{n}$-rate of convergence, no matter whether $\theta_{20} = 0$ or not, while the MLEs usually have slower than $\sqrt{n}$-rates of convergence and their asymptotic distributions might not be normal when $\theta_{20} = 0$. The PML can conduct model selection and estimation simultaneously. As examples, we consider the PMLEs of the sample selection model and the stochastic frontier function model, which can be formulated with both the original structural parameters of interest and transformed parameters. Our Monte Carlo results show that the PMLE formulated with the original parameters generally performs well and outperforms the reparameterized one in terms of smaller RMSEs.

Acknowledgments

Fei Jin gratefully acknowledges the financial support from the National Natural Science Foundation of China (No. 71501119).

Author Contributions

The authors have contributed equally to this work.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. MLE of the Sample Selection Model

In this section, we derive the asymptotic distribution of the MLE of the sample selection model (3). The irregularity of the information matrix occurs at $\rho_0 = 0$, which is in the interior of the range of the correlation coefficient. So for this model, the true parameter vector $\theta_0$ of interest is in the interior of the compact parameter space $\Theta$. In addition, we assume that the exogenous variables $x_i$ and $z_i$ are uniformly bounded, the empirical distribution of $(x_i', z_i')$ converges to a limiting distribution, and the matrices $\lim_{n\to\infty} \frac{1}{n}\sum_{i=1}^n x_i x_i'$ and $\lim_{n\to\infty} \frac{1}{n}\sum_{i=1}^n z_i z_i'$ exist and are positive definite. These assumptions are strong enough to establish the asymptotic properties in this section.
The log likelihood function of model (3) divided by $n$ is
$$L_n(\theta) = \frac{1}{n}\sum_{i=1}^n \bigg\{(1 - I_i)\ln\big(1 - \Phi(z_i'\gamma)\big) - \frac{1}{2} I_i \ln(2\pi\sigma^2) - \frac{1}{2\sigma^2} I_i (y_i - x_i'\beta)^2 + I_i \ln\Phi\Big(\frac{1}{\sqrt{1-\rho^2}}\Big(z_i'\gamma - \frac{\rho(y_i - x_i'\beta)}{\sigma}\Big)\Big)\bigg\}, \qquad (A1)$$
where $\theta = (\beta', \sigma^2, \gamma', \rho)'$ and $\Phi(\cdot)$ is the standard normal distribution function. The first order derivatives of $L_n(\theta)$ are
$$\frac{\partial L_n(\theta)}{\partial\beta} = \frac{1}{n\sigma^2}\sum_{i=1}^n I_i x_i\Big[\epsilon_i(\beta) + \frac{\sigma\rho}{(1-\rho^2)^{1/2}}\psi_i(\theta)\Big], \qquad (A2)$$
$$\frac{\partial L_n(\theta)}{\partial\sigma^2} = \frac{1}{2n\sigma^2}\sum_{i=1}^n I_i\Big[\frac{\epsilon_i^2(\beta)}{\sigma^2} - 1 + \frac{\rho}{\sigma(1-\rho^2)^{1/2}}\psi_i(\theta)\epsilon_i(\beta)\Big], \qquad (A3)$$
$$\frac{\partial L_n(\theta)}{\partial\gamma} = \frac{1}{n}\sum_{i=1}^n z_i\Big[\frac{1}{(1-\rho^2)^{1/2}} I_i \psi_i(\theta) - (1 - I_i)\frac{\phi_i}{1 - \Phi_i}\Big], \qquad (A4)$$
$$\frac{\partial L_n(\theta)}{\partial\rho} = \frac{1}{n(1-\rho^2)^{3/2}}\sum_{i=1}^n I_i \psi_i(\theta)\Big[\rho z_i'\gamma - \frac{\epsilon_i(\beta)}{\sigma}\Big], \qquad (A5)$$
where $\epsilon_i(\beta) = y_i - x_i'\beta$, $\phi_i = \phi(z_i'\gamma)$, $\Phi_i = \Phi(z_i'\gamma)$ and $\psi_i(\theta) = \phi\big((1-\rho^2)^{-1/2}(z_i'\gamma - \frac{\rho}{\sigma}\epsilon_i(\beta))\big)\big/\Phi\big((1-\rho^2)^{-1/2}(z_i'\gamma - \frac{\rho}{\sigma}\epsilon_i(\beta))\big)$, with $\phi(\cdot)$ being the standard normal pdf. It is known that the variance-covariance matrix of a vector of random variables is positive definite if and only if there is no linear relation among the components of the random vector (Rao 1973, p. 107). Under the assumed regularity conditions, one can easily show that, when $\rho_0 \neq 0$, the gradients (A2)–(A5) at $\theta_0$ are linearly independent w.p.a.1., and hence the limiting matrix of $\frac{1}{n}\mathcal{I}_n(\theta_0)$, where $\mathcal{I}_n(\theta_0)$ is the information matrix with sample size $n$, is positive definite. Thus, there is no irregularity in the model when $\rho_0 \neq 0$, and the MLE is $\sqrt{n}$-consistent and asymptotically normal.
However, when $\rho_0 = 0$, together with certain values of $\gamma_0$, there are irregularities in the model. With $\rho_0 = 0$, the first order derivatives are
$$\frac{\partial L_n(\theta_0)}{\partial\beta} = \frac{1}{n\sigma_0^2}\sum_{i=1}^n I_i x_i \epsilon_i, \qquad (A6)$$
$$\frac{\partial L_n(\theta_0)}{\partial\sigma^2} = \frac{1}{2n\sigma_0^4}\sum_{i=1}^n I_i(\epsilon_i^2 - \sigma_0^2), \qquad (A7)$$
$$\frac{\partial L_n(\theta_0)}{\partial\gamma} = \frac{1}{n}\sum_{i=1}^n \frac{[I_i - \Phi(z_i'\gamma_0)]\,\phi(z_i'\gamma_0)}{\Phi(z_i'\gamma_0)[1 - \Phi(z_i'\gamma_0)]}\, z_i, \qquad (A8)$$
$$\frac{\partial L_n(\theta_0)}{\partial\rho} = -\frac{1}{n\sigma_0}\sum_{i=1}^n \frac{\phi(z_i'\gamma_0)}{\Phi(z_i'\gamma_0)} I_i \epsilon_i. \qquad (A9)$$
These derivatives are linearly independent as long as $x$ and $\phi(z'\gamma_0)/\Phi(z'\gamma_0)$ are linearly independent, which will usually be the case if $z$ contains some relevant continuous exogenous variables with nonzero coefficients. However, when the non-intercept variables in $z$ have coefficients equal to zero, $\phi(z_i'\gamma_0)/\Phi(z_i'\gamma_0)$ is a constant for all $i$, and the first component of $\frac{\partial L_n(\alpha_0, 0)}{\partial\beta}$ and $\frac{\partial L_n(\alpha_0, 0)}{\partial\rho}$ are linearly dependent, as $x$ contains an intercept term. It follows that the information matrix must be singular. We consider this irregularity below. Let $x_i = (1, x_{2i}')'$, $\beta = (\beta_1, \beta_2')'$ with $\beta_1$ being a scalar, $\gamma = (\gamma_1, \gamma_2')'$ with $\gamma_1$ being the coefficient of the intercept term in the selection equation, $\alpha = (\beta', \sigma^2, \gamma_1)'$, $\theta_2 = (\gamma_2', \rho)'$, $\theta = (\alpha', \theta_2')'$, and $\theta_{20} = 0$. Then,
$$\frac{\partial L_n(\theta_0)}{\partial\rho} + \sigma_0\psi_0\frac{\partial L_n(\theta_0)}{\partial\beta_1} = 0, \qquad (A10)$$
where $\psi_0 = \phi(\gamma_{10})/\Phi(\gamma_{10})$. Furthermore, the submatrix of the information matrix corresponding to $\alpha$ with sample size $n$ is
$$\Xi_n = E\Big(n^2\frac{\partial L_n(\theta_0)}{\partial\alpha}\frac{\partial L_n(\theta_0)}{\partial\alpha'}\Big) = \begin{pmatrix} \frac{\Phi(\gamma_{10})}{\sigma_0^2}\sum_{i=1}^n x_i x_i' & 0 & 0 \\ 0 & \frac{n\Phi(\gamma_{10})}{2\sigma_0^4} & 0 \\ 0 & 0 & \frac{n\,\phi^2(\gamma_{10})}{\Phi(\gamma_{10})[1-\Phi(\gamma_{10})]} \end{pmatrix}. \qquad (A11)$$
The limit of $\Xi_n/n$ has full rank under the assumed regularity conditions. Thus, the rank of the information matrix is one less than the total number of parameters. This sample selection model (3) has irregularities similar to those of the stochastic frontier function model in Lee (1993), with the exception that the true parameter vector is not on the boundary of a parameter space. The asymptotic distribution of its MLE can be derived similarly. The method in Rotnitzky et al. (2000) can also be used, but the method in Lee (1993) is simpler for this particular model.
Consider the transformation of $(\alpha', \theta_2')'$ to $(\xi', \theta_2')'$ defined by $\xi = \alpha - \rho K_1$, where $K_1 = (\sigma_0\psi_0, 0_{1\times(k_x+1)})'$ with $k_x$ being the number of variables in $x$.⁹ At $\rho_0 = 0$, $\xi_0 = \alpha_0$. Define $L_{n1}(\xi, \theta_2)$ by
$$L_{n1}(\xi, \theta_2) = L_n(\xi + \rho K_1, \theta_2),$$
which is the log likelihood divided by $n$ in terms of $\xi$ and $\theta_2$. Then
$$\frac{\partial L_{n1}(\xi_0, 0)}{\partial\xi} = \frac{\partial L_n(\alpha_0, 0)}{\partial\alpha},$$
and by (A10),
$$\frac{\partial L_{n1}(\xi_0, 0)}{\partial\rho} = \frac{\partial L_n(\alpha_0, 0)}{\partial\rho} + \sigma_0\psi_0\frac{\partial L_n(\alpha_0, 0)}{\partial\beta_1} = 0. \qquad (A12)$$
Thus, the derivative of $L_{n1}(\xi, \theta_2)$ with respect to $\rho$ at $(\xi_0', 0)'$ is zero. This derivative can be interpreted as the residual vector $\frac{\partial L_n(\alpha_0,0)}{\partial\rho} - E\big[\frac{\partial L_n(\alpha_0,0)}{\partial\rho}\frac{\partial L_n(\alpha_0,0)}{\partial\beta_1}\big]\Big(E\big[\big(\frac{\partial L_n(\alpha_0,0)}{\partial\beta_1}\big)^2\big]\Big)^{-1}\frac{\partial L_n(\alpha_0,0)}{\partial\beta_1}$ of the minimum mean square regression of $\frac{\partial L_n(\alpha_0,0)}{\partial\rho}$ on $\frac{\partial L_n(\alpha_0,0)}{\partial\beta_1}$. The linear dependence relation (A10) implies that the residual vector must be zero and $E\big[\frac{\partial L_n(\alpha_0,0)}{\partial\rho}\frac{\partial L_n(\alpha_0,0)}{\partial\beta_1}\big]\Big(E\big[\big(\frac{\partial L_n(\alpha_0,0)}{\partial\beta_1}\big)^2\big]\Big)^{-1} = -\sigma_0\psi_0$. Furthermore, we see that
$$\frac{\partial^2 L_{n1}(\xi_0, 0)}{\partial\rho^2} = \frac{\partial^2 L_n(\alpha_0, 0)}{\partial\rho^2} + 2\sigma_0\psi_0\frac{\partial^2 L_n(\alpha_0, 0)}{\partial\rho\,\partial\beta_1} + \sigma_0^2\psi_0^2\frac{\partial^2 L_n(\alpha_0, 0)}{\partial\beta_1^2} = \frac{\psi_0(\psi_0 + \gamma_{10})}{n\sigma_0^2}\sum_{i=1}^n I_i(\sigma_0^2 - \epsilon_i^2). \qquad (A13)$$
Then by (A13) and (A7),
$$\frac{\partial^2 L_{n1}(\xi_0, 0)}{\partial\rho^2} + 2\sigma_0^2\psi_0(\psi_0 + \gamma_{10})\frac{\partial L_{n1}(\xi_0, 0)}{\partial\xi_{k_x+1}} = 0, \qquad (A14)$$
where $\xi_{k_x+1}$ denotes the $(k_x+1)$th component of $\xi$. This is a second irregularity of the model. Following Lee (1993) and Rotnitzky et al. (2000), consider the transformation of $(\kappa', \rho)'$ to $(\eta', \rho)'$ defined by $\eta = \kappa - \frac{1}{2}\rho^2 K_2$, where $\kappa = (\xi', \gamma_2')'$ and $K_2 = [0_{1\times k_x}, 2\sigma_0^2\psi_0(\psi_0 + \gamma_{10}), 0_{1\times(k_z+1)}]'$ with $k_z$ being the dimension of $\gamma_2$, and the function $L_{n2}(\eta, \rho)$ defined by
$$L_{n2}(\eta, \rho) = L_{n1}\big(\eta + \tfrac{1}{2}\rho^2 K_2, \rho\big). \qquad (A15)$$
Then
$$\frac{\partial L_{n2}(\eta, \rho)}{\partial\eta} = \frac{\partial L_{n1}(\kappa, \rho)}{\partial\kappa}, \qquad (A16)$$
$$\frac{\partial L_{n2}(\eta, \rho)}{\partial\rho} = \rho K_2'\frac{\partial L_{n1}(\kappa, \rho)}{\partial\kappa} + \frac{\partial L_{n1}(\kappa, \rho)}{\partial\rho}, \qquad (A17)$$
$$\frac{\partial^2 L_{n2}(\eta, \rho)}{\partial\rho^2} = \rho^2 K_2'\frac{\partial^2 L_{n1}(\kappa, \rho)}{\partial\kappa\,\partial\kappa'}K_2 + 2\rho K_2'\frac{\partial^2 L_{n1}(\kappa, \rho)}{\partial\rho\,\partial\kappa} + K_2'\frac{\partial L_{n1}(\kappa, \rho)}{\partial\kappa} + \frac{\partial^2 L_{n1}(\kappa, \rho)}{\partial\rho^2}. \qquad (A18)$$
At $\rho_0 = 0$, $\eta_0 = \kappa_0$. By (A12) and the linear dependence relation in (A14),
$$\frac{\partial L_{n2}(\eta_0, 0)}{\partial\eta} = \frac{\partial L_{n1}(\kappa_0, 0)}{\partial\kappa}, \qquad (A19)$$
$$\frac{\partial L_{n2}(\eta_0, 0)}{\partial\rho} = 0, \qquad (A20)$$
and
$$\frac{\partial^2 L_{n2}(\eta_0, 0)}{\partial\rho^2} = 0. \qquad (A21)$$
Since the first and second order derivatives of $L_{n2}(\eta, \rho)$ with respect to $\rho$ at $(\eta_0', 0)'$ are zero, it is necessary to investigate the third order derivative of $L_{n2}(\eta, \rho)$ with respect to $\rho$ at $(\eta_0', 0)'$. By (A18) and (A10),
$$\frac{\partial^3 L_{n2}(\eta_0, 0)}{\partial\rho^3} = 3K_2'\frac{\partial^2 L_{n1}(\kappa_0, 0)}{\partial\rho\,\partial\kappa} + \frac{\partial^3 L_{n1}(\kappa_0, 0)}{\partial\rho^3}. \qquad (A22)$$
Note that $3K_2'\frac{\partial^2 L_{n1}(\kappa_0,0)}{\partial\rho\,\partial\kappa} = 6\sigma_0^2\psi_0(\psi_0 + \gamma_{10})\frac{\partial^2 L_{n1}(\kappa_0,0)}{\partial\rho\,\partial\kappa_{k_x+1}}$. Since $\frac{\partial L_{n1}(\kappa,\rho)}{\partial\rho} = \sigma_0\psi_0\frac{\partial L_n(\alpha,\rho)}{\partial\beta_1} + \frac{\partial L_n(\alpha,\rho)}{\partial\rho}$, $\frac{\partial^2 L_{n1}(\kappa,\rho)}{\partial\rho\,\partial\kappa_{k_x+1}} = \sigma_0\psi_0\frac{\partial^2 L_n(\alpha,\rho)}{\partial\beta_1\,\partial\sigma^2} + \frac{\partial^2 L_n(\alpha,\rho)}{\partial\rho\,\partial\sigma^2}$, $\frac{\partial^2 L_{n1}(\kappa,\rho)}{\partial\rho^2} = \sigma_0^2\psi_0^2\frac{\partial^2 L_n(\alpha,\rho)}{\partial\beta_1^2} + 2\sigma_0\psi_0\frac{\partial^2 L_n(\alpha,\rho)}{\partial\beta_1\,\partial\rho} + \frac{\partial^2 L_n(\alpha,\rho)}{\partial\rho^2}$, and $\frac{\partial^3 L_{n1}(\kappa,\rho)}{\partial\rho^3} = \sigma_0^3\psi_0^3\frac{\partial^3 L_n(\alpha,\rho)}{\partial\beta_1^3} + 3\sigma_0^2\psi_0^2\frac{\partial^3 L_n(\alpha,\rho)}{\partial\beta_1^2\,\partial\rho} + 3\sigma_0\psi_0\frac{\partial^3 L_n(\alpha,\rho)}{\partial\beta_1\,\partial\rho^2} + \frac{\partial^3 L_n(\alpha,\rho)}{\partial\rho^3}$, it is straightforward to show that
$$3K_2'\frac{\partial^2 L_{n1}(\kappa_0, 0)}{\partial\rho\,\partial\kappa} = -\frac{3}{n}\psi_0^2(\psi_0 + \gamma_{10})\sum_{i=1}^n I_i\frac{\epsilon_i}{\sigma_0}, \qquad (A23)$$
and
$$\frac{\partial^3 L_{n1}(\kappa_0, 0)}{\partial\rho^3} = \frac{1}{n}\psi_0(1 - 2\psi_0^2 - 3\psi_0\gamma_{10} - \gamma_{10}^2)\sum_{i=1}^n I_i\Big[\Big(\frac{\epsilon_i}{\sigma_0}\Big)^3 - 3\frac{\epsilon_i}{\sigma_0}\Big]. \qquad (A24)$$
Then
$$\frac{\partial^3 L_{n2}(\eta_0, 0)}{\partial\rho^3} = -\frac{3}{n}\psi_0^2(\psi_0 + \gamma_{10})\sum_{i=1}^n I_i\frac{\epsilon_i}{\sigma_0} + \frac{1}{n}\psi_0(1 - 2\psi_0^2 - 3\psi_0\gamma_{10} - \gamma_{10}^2)\sum_{i=1}^n I_i\Big[\Big(\frac{\epsilon_i}{\sigma_0}\Big)^3 - 3\frac{\epsilon_i}{\sigma_0}\Big]. \qquad (A25)$$
Thus, $\frac{\partial^3 L_{n2}(\eta_0,0)}{\partial\rho^3}$ is not linearly dependent on $\frac{\partial L_{n2}(\eta_0,0)}{\partial\eta}$. Under this circumstance, as in Rotnitzky et al. (2000), the asymptotic distribution of the MLE can be derived by investigating higher-order Taylor expansions of the first order condition of $L_{n2}(\eta, \rho)$. For the stochastic frontier function model, Lee (1993) shows that the asymptotic distribution of the MLE can be derived by considering one more reparameterization. We employ the approach in Lee (1993).¹⁰ Note that a Taylor expansion of $\frac{\partial L_{n2}(\eta_0,\rho)}{\partial\rho}$ around $\rho = 0$ up to the second order yields
$$\frac{\partial L_{n2}(\eta_0, \rho)}{\partial\rho} = \frac{\partial L_{n2}(\eta_0, 0)}{\partial\rho} + \frac{\partial^2 L_{n2}(\eta_0, 0)}{\partial\rho^2}\rho + \frac{1}{2}\frac{\partial^3 L_{n2}(\eta_0, 0)}{\partial\rho^3}\rho^2 + o(\rho^2) = \frac{1}{2}\frac{\partial^3 L_{n2}(\eta_0, 0)}{\partial\rho^3}\rho^2 + o(\rho^2), \qquad (A26)$$
where the second equality follows by (A20) and (A21). Consider the transformation of $(\eta', \rho)'$ to $(\eta', r)'$ defined by
$$r = \rho^3,$$
and the function $L_{n3}(\eta, r)$ defined by
$$L_{n3}(\eta, r) = L_{n2}(\eta, r^{1/3}).$$
It follows that
$$\frac{\partial L_{n3}(\eta, r)}{\partial\eta} = \frac{\partial L_{n2}(\eta, \rho)}{\partial\eta}, \quad\text{and}\quad \frac{\partial L_{n3}(\eta, r)}{\partial r} = \frac{1}{3\rho^2}\frac{\partial L_{n2}(\eta, \rho)}{\partial\rho}.$$
Hence,
$$\frac{\partial L_{n3}(\eta_0, 0)}{\partial\eta} = \frac{\partial L_{n2}(\eta_0, 0)}{\partial\eta}, \quad\text{and}\quad \frac{\partial L_{n3}(\eta_0, 0)}{\partial r} = \frac{1}{6}\frac{\partial^3 L_{n2}(\eta_0, 0)}{\partial\rho^3}. \qquad (A27)$$
From (A27) and (A25), $\frac{\partial L_{n3}(\eta_0,0)}{\partial\eta}$ and $\frac{\partial L_{n3}(\eta_0,0)}{\partial r}$ are linearly independent. Then the information matrix for $L_{n3}(\eta, r)$ is nonsingular and the MLE $(\tilde\eta_n', \tilde r_n)'$ has the asymptotic distribution
$$\sqrt{n}\,(\tilde\eta_n' - \eta_0', \tilde r_n)' \xrightarrow{d} N\big(0, \lim_{n\to\infty}\Omega_n^{-1}\big), \qquad (A28)$$
where
$$\Omega_n = \begin{pmatrix} \frac{\Phi(\gamma_{10})}{n\sigma_0^2}\sum_{i=1}^n x_i x_i' & 0 & 0 & -\frac{\psi_0^2(\psi_0+\gamma_{10})\Phi(\gamma_{10})}{2n\sigma_0}\sum_{i=1}^n x_i \\ 0 & \frac{\Phi(\gamma_{10})}{2\sigma_0^4} & 0 & 0 \\ 0 & 0 & \frac{\phi^2(\gamma_{10})}{n\Phi(\gamma_{10})[1-\Phi(\gamma_{10})]}\sum_{i=1}^n z_i z_i' & 0 \\ -\frac{\psi_0^2(\psi_0+\gamma_{10})\Phi(\gamma_{10})}{2n\sigma_0}\sum_{i=1}^n x_i' & 0 & 0 & \frac{1}{12}\Phi(\gamma_{10})\big[3\psi_0^4(\psi_0+\gamma_{10})^2 + 2\psi_0^2(1 - 2\psi_0^2 - 3\psi_0\gamma_{10} - \gamma_{10}^2)^2\big] \end{pmatrix}. \qquad (A29)$$
The complete transformation for the model is
$$\eta_1 = \beta_1 - \sigma_0\psi_0\rho, \quad \eta_2 = \beta_2, \quad \eta_3 = \sigma^2 - \rho^2\sigma_0^2\psi_0(\psi_0 + \gamma_{10}), \quad \eta_4 = \gamma, \quad r = \rho^3.$$
The inverse transformation is
$$\beta_1 = \eta_1 + \sigma_0\psi_0 r^{1/3}, \qquad (A30)$$
$$\beta_2 = \eta_2, \qquad (A31)$$
$$\sigma^2 = \eta_3 + r^{2/3}\sigma_0^2\psi_0(\psi_0 + \gamma_{10}), \qquad (A32)$$
$$\gamma = \eta_4, \qquad (A33)$$
$$\rho = r^{1/3}. \qquad (A34)$$
With the asymptotic distribution of $(\tilde\eta_n', \tilde r_n)'$ in (A28), the asymptotic distribution of the MLE $(\tilde\beta_n', \tilde\sigma_n^2, \tilde\gamma_n', \tilde\rho_n)'$ of the original parameters can then be derived from the inverse transformations (A30)–(A34) by Slutsky's theorem and the continuous mapping theorem. From (A34), $\tilde\rho_n = \tilde r_n^{1/3}$. By the matrix inverse formula in a block form, $\sqrt{n}\,\tilde r_n$ is asymptotically normal $N\big(0, 6\big\{\Phi(\gamma_{10})\psi_0^2(1 - 2\psi_0^2 - 3\psi_0\gamma_{10} - \gamma_{10}^2)^2\big\}^{-1}\big)$. It then follows that $n^{1/6}\tilde\rho_n = (n^{1/2}\tilde r_n)^{1/3}$ is asymptotically distributed as the cube root of a normal variable, and $\tilde\rho_n$ converges in distribution at a much slower rate.¹¹ Since
$$n^{1/6}(\tilde\beta_{1n} - \beta_{10}) = n^{1/6}(\tilde\eta_{1n} - \eta_{10}) + \sigma_0\psi_0(n^{1/2}\tilde r_n)^{1/3} = \sigma_0\psi_0(n^{1/2}\tilde r_n)^{1/3} + o_p(1),$$
the MLE $\tilde\beta_{1n}$ has the same rate of convergence as $\tilde\rho_n$, and the asymptotic distribution of $n^{1/6}(\tilde\beta_{1n} - \beta_{10})$ is the same as that of $\sigma_0\psi_0(n^{1/2}\tilde r_n)^{1/3}$. Similarly, as
$$n^{1/3}(\tilde\sigma_n^2 - \sigma_0^2) = n^{1/3}(\tilde\eta_{3n} - \eta_{30}) + \sigma_0^2\psi_0(\psi_0 + \gamma_{10})(n^{1/2}\tilde r_n)^{2/3} = \sigma_0^2\psi_0(\psi_0 + \gamma_{10})(n^{1/2}\tilde r_n)^{2/3} + o_p(1),$$
$n^{1/3}(\tilde\sigma_n^2 - \sigma_0^2)$ has the same asymptotic distribution as $\sigma_0^2\psi_0(\psi_0 + \gamma_{10})(n^{1/2}\tilde r_n)^{2/3}$. Both $\tilde\beta_{1n}$ and $\tilde\sigma_n^2$ converge in distribution at slower rates and are not asymptotically normally distributed: $n^{1/6}(\tilde\beta_{1n} - \beta_{10})$ is asymptotically distributed as the cube root of a normal variable and is asymptotically proportional to $n^{1/6}\tilde\rho_n$, while $n^{1/3}(\tilde\sigma_n^2 - \sigma_0^2)$ is asymptotically distributed as the $2/3$ power of a normal variable. The remaining estimates $\tilde\beta_{2n}$ and $\tilde\gamma_n$, however, have the usual order $O_p(n^{-1/2})$, and $\sqrt{n}\begin{pmatrix} \tilde\beta_{2n} - \beta_{20} \\ \tilde\gamma_n - \gamma_0 \end{pmatrix} = \sqrt{n}\begin{pmatrix} \tilde\eta_{2n} - \eta_{20} \\ \tilde\eta_{4n} - \eta_{40} \end{pmatrix}$ is asymptotically normally distributed. From the information matrix in (A29), the joint asymptotic distribution of $\tilde\beta_n$, $\tilde\sigma_n^2$, $\tilde\gamma_n$, and $\tilde\rho_n$ can also be derived.

Appendix B. Proofs

Proof of Proposition 1.
When $\theta_{20} \neq 0$, by Assumption 2, $\tilde\theta_{2n} = \theta_{20} + o_p(1)$ and $\|\tilde\theta_{2n}\|^{-\mu} = O_p(1)$. Then, w.p.a.1.,
$$Q_n(\hat\theta_n) = L_n(\hat\theta_n) - \lambda_n\|\tilde\theta_{2n}\|^{-\mu}\|\hat\theta_{2n}\| \geq L_n(\theta_0) - \lambda_n\|\tilde\theta_{2n}\|^{-\mu}\|\theta_{20}\|.$$
When $\theta_{20} = 0$, if $\tilde\theta_{2n} \neq 0$,
$$Q_n(\hat\theta_n) = L_n(\hat\theta_n) - \lambda_n\|\tilde\theta_{2n}\|^{-\mu}\|\hat\theta_{2n}\| \geq L_n(\theta_0) - \lambda_n\|\tilde\theta_{2n}\|^{-\mu}\|\theta_{20}\| = L_n(\theta_0);$$
if $\tilde\theta_{2n} = 0$, $Q_n(\hat\theta_n) = L_n(\hat\theta_{1n}, 0) \geq L_n(\theta_0)$. Thus, w.p.a.1., for any $\delta > 0$,
$$Q_n(\hat\theta_n) > L_n(\theta_0) - \frac{\delta}{3}.$$
By Lemma 2.4 in Newey and McFadden (1994), $\sup_{\theta\in\Theta} |L_n(\theta) - E\,l_i(\theta)| = o_p(1)$ under Assumption 1. Hence, w.p.a.1.,
$$E\,l_i(\hat\theta_n) \geq L_n(\hat\theta_n) - \frac{\delta}{3} \geq Q_n(\hat\theta_n) - \frac{\delta}{3} > L_n(\theta_0) - \frac{2\delta}{3} > E\,l_i(\theta_0) - \delta.$$
Let $N$ be any relatively open subset of $\Theta$ containing $\theta_0$. As $\Theta \cap N^c$ is compact and $E\,l_i(\theta)$ is uniquely maximized at $\theta_0$, for some $\theta^* \in \Theta \cap N^c$, $\sup_{\theta\in\Theta\cap N^c} E\,l_i(\theta) = E\,l_i(\theta^*) < E\,l_i(\theta_0)$. Therefore, choosing $\delta = E\,l_i(\theta_0) - \sup_{\theta\in\Theta\cap N^c} E\,l_i(\theta)$, it follows that w.p.a.1. $E\,l_i(\hat\theta_n) > \sup_{\theta\in\Theta\cap N^c} E\,l_i(\theta)$. Thus, the consistency of $\hat\theta_n$ follows. ☐
Proof of Proposition 2.
Let $\alpha_n = n^{-1/2} + \lambda_n$. As in Fan and Li (2001), we show that, for any given $\epsilon > 0$, there exists a large enough constant $C$ such that
$$P\Big\{\sup_{\|u\| = C} Q_n(\theta_0 + \alpha_n u) < Q_n(\theta_0)\Big\} \geq 1 - \epsilon. \qquad (A35)$$
We consider the two cases $\theta_{20} \neq 0$ and $\theta_{20} = 0$ separately.
(i) $\theta_{20} \neq 0$. Note that Taylor's theorem still holds when some parameters are on the boundary (Andrews 1999, Theorem 6), as the parameter space is convex. Then by a Taylor expansion in $u$ around 0, w.p.a.1.,
$$Q_n(\theta_0 + \alpha_n u) - Q_n(\theta_0) = \alpha_n\frac{\partial L_n(\theta_0)}{\partial\theta'}u + \frac{1}{2}\alpha_n^2 u'\frac{\partial^2 L_n(\theta_0 + \alpha_n\bar u)}{\partial\theta\,\partial\theta'}u - \alpha_n\lambda_n\|\tilde\theta_{2n}\|^{-\mu}\|\theta_{20}\|^{-1}\theta_{20}'u_2 - \frac{1}{2}\alpha_n^2\lambda_n\|\tilde\theta_{2n}\|^{-\mu}u_2'\big[-\|\theta_{20} + \alpha_n\bar u_2\|^{-3}(\theta_{20} + \alpha_n\bar u_2)(\theta_{20} + \alpha_n\bar u_2)' + \|\theta_{20} + \alpha_n\bar u_2\|^{-1}I_p\big]u_2,$$
where $u_2$ is the subvector of $u$ consisting of its last $p$ elements, and $\bar u$ lies between $u$ and 0. The first term on the r.h.s., excluding $u$, has the order $O_p(n^{-1/2}\alpha_n) = O_p(\alpha_n^2)$. As $\frac{\partial^2 L_n(\theta_0 + \alpha_n\bar u)}{\partial\theta\,\partial\theta'} = E\frac{\partial^2 l_i(\theta_0)}{\partial\theta\,\partial\theta'} + o_p(1)$, the second term on the r.h.s., excluding $u$ and $u'$, has the order $O_p(\alpha_n^2)$. The third term on the r.h.s., excluding $u_2$, has the order $O_p(\lambda_n\alpha_n) = O_p(\alpha_n^2)$, since $\tilde\theta_{2n} = \theta_{20} + o_p(1)$ and $\theta_{20} \neq 0$. By the Cauchy-Schwarz inequality, the fourth term on the r.h.s. is bounded by $\alpha_n^2\lambda_n\|\tilde\theta_{2n}\|^{-\mu}\|u_2\|^2\|\theta_{20} + \alpha_n\bar u_2\|^{-1} = O_p(\lambda_n\alpha_n^2) = o_p(\alpha_n^2)$. Since $E\frac{\partial^2 l_i(\theta_0)}{\partial\theta\,\partial\theta'}$ is negative definite, for a sufficiently large $C$, the second term dominates the other terms. Thus, (A35) holds.
(ii) $\theta_{20} = 0$. If $\tilde\theta_{2n} = 0$, then $Q_n(\theta) = L_n(\theta_1, 0)$ and the PMLE becomes the restricted MLE with $\theta_2 = 0$ imposed. Thus, $\hat\theta_n - \theta_0 = O_p(n^{-1/2})$. If $\tilde\theta_{2n} \neq 0$, then $Q_n(\theta) = L_n(\theta) - \lambda_n\|\tilde\theta_{2n}\|^{-\mu}\|\theta_2\|$ and
$$Q_n(\theta_0 + \alpha_n u) - Q_n(\theta_0) = L_n(\theta_0 + \alpha_n u) - \lambda_n\|\tilde\theta_{2n}\|^{-\mu}\alpha_n\|u_2\| - L_n(\theta_0) \leq L_n(\theta_0 + \alpha_n u) - L_n(\theta_0).$$
Expanding $L_n(\theta_0 + \alpha_n u) - L_n(\theta_0)$ by Taylor's theorem as in (i), we see that (A35) holds.
Equation (A35) implies that there exists a local maximum in the ball $\{\theta_0 + \alpha_n u: \|u\| \leq C\}$ with probability at least $1 - \epsilon$. Furthermore, for a given $\epsilon > 0$, because $\hat\theta_n$ is a consistent estimator of $\theta_0$ by Proposition 1, there exists a small ball with radius $\delta > 0$ such that $P(\|\hat\theta_n - \theta_0\| \leq \delta) \geq 1 - \epsilon$. So one may choose $C$ such that the small ball is a subset of $\{\theta_0 + \alpha_n u: \|u\| \leq C\}$ and (A35) holds. Because $Q_n(\hat\theta_n) \geq Q_n(\theta_0)$, this implies that $\hat\theta_n \in \{\theta_0 + \alpha_n u: \|u\| \leq C\}$ w.p.a.1. Then the result in the proposition holds. ☐
Proof of Proposition 3.
From the construction of $Q_n(\theta)$ in (1), if the initial estimate $\tilde\theta_{2n} = 0$, $\hat\theta_{2n}$ is set to zero. So it is sufficient to consider $\tilde\theta_{2n} \neq 0$. If $\hat\theta_{2n} \neq 0$, we have the first order condition
$$\frac{\partial L_n(\hat\theta_n)}{\partial\theta_2} - \lambda_n\|\tilde\theta_{2n}\|^{-\mu}\frac{\hat\theta_{2n}}{\|\hat\theta_{2n}\|} = 0. \qquad (A36)$$
By a first order Taylor expansion, $\frac{\partial L_n(\hat\theta_n)}{\partial\theta_2} = \frac{\partial L_n(\theta_0)}{\partial\theta_2} + \frac{\partial^2 L_n(\check\theta_n)}{\partial\theta_2\,\partial\theta'}(\hat\theta_n - \theta_0)$, where $\check\theta_n$ lies between $\theta_0$ and $\hat\theta_n$. Let $T$ be a relatively compact neighborhood of $\theta_0$ contained in $S$. Under Assumption 4, by Lemma 2.4 in Newey and McFadden (1994), $\sup_{\theta\in T}\big\|\frac{\partial L_n(\theta)}{\partial\theta_2} - E\frac{\partial l_i(\theta)}{\partial\theta_2}\big\| = o_p(1)$ and $\sup_{\theta\in T}\big\|\frac{\partial^2 L_n(\theta)}{\partial\theta_2\,\partial\theta'} - E\frac{\partial^2 l_i(\theta)}{\partial\theta_2\,\partial\theta'}\big\| = o_p(1)$, and $E\frac{\partial l_i(\theta)}{\partial\theta_2}$ and $E\frac{\partial^2 l_i(\theta)}{\partial\theta_2\,\partial\theta'}$ are continuous on $T$. For $L_n(\theta)$ on $S$, Lemma 3.6 in Newey and McFadden (1994) holds and $E\frac{\partial l(\theta_0)}{\partial\theta} = 0$. Then $\frac{\partial L_n(\theta_0)}{\partial\theta} = O_p(n^{-1/2})$, as its variance has the order $O(n^{-1})$. As $T$ is compact, $\frac{\partial^2 L_n(\check\theta_n)}{\partial\theta_2\,\partial\theta'} = O_p(1)$. Thus, $\frac{\partial L_n(\hat\theta_n)}{\partial\theta_2} = o_p(1)$. Furthermore, if the information matrix is nonsingular, by Proposition 2, $\hat\theta_n - \theta_0 = O_p(n^{-1/2} + \lambda_n)$ and $\frac{\partial L_n(\hat\theta_n)}{\partial\theta_2} = O_p(n^{-1/2} + \lambda_n)$. Since $\hat\theta_{2n} \neq 0$, there must be some component $\hat\theta_{2n,j}$ of $\hat\theta_{2n} = (\hat\theta_{2n,1}, \dots, \hat\theta_{2n,p})'$, where $p$ is the length of $\theta_2$, such that $|\hat\theta_{2n,j}| = \max\{|\hat\theta_{2n,i}|: 1 \leq i \leq p\}$. Then $|\hat\theta_{2n,j}|/\|\hat\theta_{2n}\| \geq 1/\sqrt{p} > 0$. Under Assumption 5 (i), the first term on the l.h.s. of (A36) has the order $o_p(1)$, but the maximum of the components in absolute value of the second term goes to infinity w.p.a.1.; then (A36) cannot hold with a positive probability. Under Assumption 5 (ii), the first term on the l.h.s. of (A36) multiplied by $n^{1/2}$ has the order $O_p(1)$, but the maximum of the components in absolute value of the second term multiplied by $n^{1/2}$ goes to infinity w.p.a.1.; then (A36) cannot hold with a positive probability either. Hence, $P(\hat\theta_{2n} = 0) \to 1$ as $n \to \infty$.
Since $\lim_{n\to\infty} P(\hat\theta_{2n} = 0) = 1$, w.p.a.1. we have the first order condition $\frac{\partial L_n(\hat\theta_{1n}, 0)}{\partial\theta_1} = 0$. By the mean value theorem,
$$0 = \frac{\partial L_n(\theta_0)}{\partial\theta_1} + \frac{\partial^2 L_n(\bar\theta_{1n}, 0)}{\partial\theta_1\,\partial\theta_1'}(\hat\theta_{1n} - \theta_{10}),$$
where $\bar\theta_{1n}$ lies between $\hat\theta_{1n}$ and $\theta_{10}$. Thus,
$$\sqrt{n}(\hat\theta_{1n} - \theta_{10}) = -\Big(\frac{\partial^2 L_n(\bar\theta_{1n}, 0)}{\partial\theta_1\,\partial\theta_1'}\Big)^{-1}\sqrt{n}\,\frac{\partial L_n(\theta_0)}{\partial\theta_1}.$$
Under Assumption 4, $\frac{\partial^2 L_n(\bar\theta_{1n}, 0)}{\partial\theta_1\,\partial\theta_1'} = E\big(\frac{\partial^2 l_i(\theta_0)}{\partial\theta_1\,\partial\theta_1'}\big) + o_p(1)$ and the information matrix equality $-E\big(\frac{\partial^2 l_i(\theta_0)}{\partial\theta_1\,\partial\theta_1'}\big) = E\big(\frac{\partial l_i(\theta_0)}{\partial\theta_1}\frac{\partial l_i(\theta_0)}{\partial\theta_1'}\big)$ holds; thus the result in the proposition follows. ☐
Proof of Proposition 4.
When $\theta_{20} \neq 0$, by Proposition 2, $\hat\theta_n = \theta_0 + O_p(n^{-1/2})$ under Assumption 6, and also $\tilde\theta_{2n} \neq 0$ w.p.a.1. As $\theta_0 \in \operatorname{int}(\Theta)$, we have the first order condition
$$\frac{\partial L_n(\hat\theta_n)}{\partial\theta} - \lambda_n\|\tilde\theta_{2n}\|^{-\mu}\|\hat\theta_{2n}\|^{-1}\begin{pmatrix} 0 \\ \hat\theta_{2n}\end{pmatrix} = 0.$$
Applying the mean value theorem to the first term on the l.h.s. yields
$$\frac{\partial L_n(\theta_0)}{\partial\theta} + \frac{\partial^2 L_n(\bar\theta_n)}{\partial\theta\,\partial\theta'}(\hat\theta_n - \theta_0) - \lambda_n\|\tilde\theta_{2n}\|^{-\mu}\|\hat\theta_{2n}\|^{-1}\begin{pmatrix} 0 \\ \hat\theta_{2n}\end{pmatrix} = 0,$$
where $\bar\theta_n$ lies between $\hat\theta_n$ and $\theta_0$. As in the proof of Proposition 3, $E\frac{\partial l(\theta_0)}{\partial\theta} = 0$ and $\frac{\partial L_n(\theta_0)}{\partial\theta} = O_p(n^{-1/2})$. The second term on the l.h.s. has the order $O_p(n^{-1/2})$. By Assumption 6, the third term on the l.h.s. has the order $o_p(n^{-1/2})$. Thus,
$$\sqrt{n}(\hat\theta_n - \theta_0) = -\Big(\frac{\partial^2 L_n(\bar\theta_n)}{\partial\theta\,\partial\theta'}\Big)^{-1}\sqrt{n}\,\frac{\partial L_n(\theta_0)}{\partial\theta} + o_p(1).$$
Since $\frac{\partial^2 L_n(\bar\theta_n)}{\partial\theta\,\partial\theta'} = E\big(\frac{\partial^2 l_i(\theta_0)}{\partial\theta\,\partial\theta'}\big) + o_p(1)$ and the information matrix equality $-E\big(\frac{\partial^2 l_i(\theta_0)}{\partial\theta\,\partial\theta'}\big) = E\big(\frac{\partial l_i(\theta_0)}{\partial\theta}\frac{\partial l_i(\theta_0)}{\partial\theta'}\big)$ holds, $\sqrt{n}(\hat\theta_n - \theta_0)$ has the asymptotic distribution in the proposition. ☐
Proof of Proposition 5.
We consider the following two cases separately: (1) $\theta_{20} \neq 0$, but $\hat\theta_{2\lambda} = 0$; (2) $\theta_{20} = 0$, but $\hat\theta_{2\lambda} \neq 0$.
Case 1: $\theta_{20} \neq 0$, but $\hat\theta_{2\lambda} = 0$. Let $\check\theta_n = (\check\theta_{1n}', 0)'$ be the restricted MLE with the restriction $\theta_2 = 0$ imposed, where $\check\theta_{1n} = \arg\max_{\theta_1\in\Theta_1} L_n(\theta_1, 0)$. As $\theta_{20} \neq 0$, $\bar\theta \equiv \operatorname{plim}_{n\to\infty}\check\theta_n \neq \theta_0$. Then $E\,l(\bar\theta) < E\,l(\theta_0)$. By the setting of Case 1 and the definition of $\check\theta_n$, since $\Gamma_n \to 0$ as $n \to \infty$, $H_n(\lambda) = L_n(\hat\theta_\lambda) + \Gamma_n \leq L_n(\check\theta_n) + \Gamma_n = E\,l(\check\theta_n) + o_p(1) = E\,l(\bar\theta) + o_p(1)$. Furthermore, by Proposition 2, $\hat\theta_{\bar\lambda_n} = \theta_0 + o_p(1)$. Then w.p.a.1., $H_n(\bar\lambda_n) = L_n(\hat\theta_{\bar\lambda_n}) = E\,l(\hat\theta_{\bar\lambda_n}) + o_p(1) = E\,l(\theta_0) + o_p(1)$. Hence, $P\big(\sup_{\{\lambda\in\Lambda:\, \theta_{20}\neq 0,\ \hat\theta_{2\lambda} = 0\}} H_n(\lambda) < H_n(\bar\lambda_n)\big) \to 1$ as $n \to \infty$.
Case 2: $\theta_{20} = 0$, but $\hat\theta_{2\lambda} \neq 0$. As $\hat\theta_{2\lambda} \neq 0$, $H_n(\lambda) = L_n(\hat\theta_\lambda)$. By the definition of the MLE $\tilde\theta_n$, $L_n(\hat\theta_\lambda) \leq L_n(\tilde\theta_n)$. By Proposition 3, $P(\hat\theta_{2\bar\lambda_n} = 0) \to 1$ as $n \to \infty$, and $\hat\theta_{1\bar\lambda_n} = \theta_{10} + O_p(n^{-1/2})$. Then w.p.a.1., $H_n(\bar\lambda_n) = L_n(\hat\theta_{1\bar\lambda_n}, 0) + \Gamma_n$. By a first order Taylor expansion (Andrews 1999, Theorem 6), w.p.a.1.,
$$n^{2s}[H_n(\lambda) - H_n(\bar\lambda_n)] \leq n^{2s}[L_n(\tilde\theta_n) - L_n(\theta_0)] - n^{2s}[L_n(\hat\theta_{1\bar\lambda_n}, 0) - L_n(\theta_0)] - n^{2s}\Gamma_n = n^{2s}\frac{\partial L_n(\theta_0)}{\partial\theta'}(\tilde\theta_n - \theta_0) + \frac{1}{2}\, n^s(\tilde\theta_n - \theta_0)'\frac{\partial^2 L_n(\ddot\theta_n)}{\partial\theta\,\partial\theta'}\,n^s(\tilde\theta_n - \theta_0) - n^{2s}\frac{\partial L_n(\theta_0)}{\partial\theta_1'}(\hat\theta_{1\bar\lambda_n} - \theta_{10}) - \frac{1}{2}n^{2s}(\hat\theta_{1\bar\lambda_n} - \theta_{10})'\frac{\partial^2 L_n(\breve\theta_n)}{\partial\theta_1\,\partial\theta_1'}(\hat\theta_{1\bar\lambda_n} - \theta_{10}) - n^{2s}\Gamma_n,$$
where $\ddot\theta_n$ lies between $\theta_0$ and $\tilde\theta_n$, and $\breve\theta_n$ lies between $\theta_0$ and $\hat\theta_{\bar\lambda_n}$. As in the proof of Proposition 3, $\sup_{\theta\in T}\big\|\frac{\partial L_n(\theta)}{\partial\theta} - E\frac{\partial l(\theta)}{\partial\theta}\big\| = o_p(1)$, $\sup_{\theta\in T}\big\|\frac{\partial^2 L_n(\theta)}{\partial\theta\,\partial\theta'} - E\frac{\partial^2 l(\theta)}{\partial\theta\,\partial\theta'}\big\| = o_p(1)$ and $\frac{\partial L_n(\theta_0)}{\partial\theta} = O_p(n^{-1/2})$. Then the first term on the r.h.s. has the order $O_p(n^{s-1/2}) = O_p(1)$; the second term has the order $O_p(1)$, since $\frac{\partial^2 L_n(\ddot\theta_n)}{\partial\theta\,\partial\theta'} = E\frac{\partial^2 l(\ddot\theta_n)}{\partial\theta\,\partial\theta'} + o_p(1) = E\frac{\partial^2 l(\theta_0)}{\partial\theta\,\partial\theta'} + o_p(1) = O_p(1)$; the third term has the order $O_p(n^{2s-1}) = O_p(1)$; the fourth term has the order $O_p(n^{2s-1}) = O_p(1)$; and the last term goes to minus infinity as $n \to \infty$. Hence, $P\big(\sup_{\{\lambda\in\Lambda:\, \theta_{20} = 0,\ \hat\theta_{2\lambda} \neq 0\}} H_n(\lambda) < H_n(\bar\lambda_n)\big) \to 1$ as $n \to \infty$.
Combining the results in the above two cases, we have the result in the proposition. ☐
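The selection rule analyzed in this proof can be coded directly: maximize $H_n(\lambda) = L_n(\hat\theta_\lambda) + \Gamma_n \mathbf{1}\{\hat\theta_{2\lambda} = 0\}$ over a grid of tuning parameters. A schematic sketch follows; the routine `pmle`, returning the PMLE of $\theta_2$ and the maximized average log-likelihood at a given $\lambda$, is a hypothetical helper, and $\Gamma_n = 0.26\,n^{-1/2}$ follows the calibration described in footnote 8.

```python
# Schematic grid search for the tuning parameter using the information
# criterion of Proposition 5.
import numpy as np

def select_lambda(pmle, n, lam_grid, k=0.26):
    gamma_n = k / np.sqrt(n)
    h_values = []
    for lam in lam_grid:
        theta2_hat, avg_loglik = pmle(lam)        # hypothetical fitting routine
        restricted = np.all(theta2_hat == 0.0)    # did the penalty zero out theta_2?
        h_values.append(avg_loglik + gamma_n * restricted)
    return lam_grid[int(np.argmax(h_values))]
```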

References

  1. Aigner, Dennis, C. A. Knox Lovell, and Peter Schmidt. 1977. Formulation and estimation of stochastic frontier production function models. Journal of Econometrics 6: 21–37.
  2. Andrews, Donald W. K. 1999. Estimation when a parameter is on a boundary. Econometrica 67: 1341–83.
  3. Chen, Jiahua. 1995. Optimal rate of convergence for finite mixture models. Annals of Statistics 23: 221–33.
  4. Cox, David R., and David V. Hinkley. 1974. Theoretical Statistics. London: Chapman and Hall.
  5. Fan, Jianqing, and Runze Li. 2001. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96: 1348–60.
  6. Goldfeld, Stephen M., and Richard E. Quandt. 1975. Estimation in a disequilibrium model and the value of information. Journal of Econometrics 3: 325–48.
  7. Jin, Fei, and Lung-Fei Lee. 2017. Irregular N2SLS and LASSO estimation of the matrix exponential spatial specification model. Journal of Econometrics, forthcoming.
  8. Kiefer, Nicholas M. 1982. A Remark on the Parameterization of a Model for Heterogeneity. Working Paper No. 278. Ithaca: Department of Economics, Cornell University.
  9. Lee, Lung-Fei. 1993. Asymptotic distribution of the maximum likelihood estimator for a stochastic frontier function model with a singular information matrix. Econometric Theory 9: 413–30.
  10. Lee, Lung-Fei, and Andrew Chesher. 1986. Specification testing when score test statistics are identically zero. Journal of Econometrics 31: 121–49.
  11. Newey, Whitney K., and Daniel McFadden. 1994. Large sample estimation and hypothesis testing. In Handbook of Econometrics. Edited by Robert F. Engle and Daniel L. McFadden. Amsterdam: Elsevier, vol. 4, chapter 36, pp. 2111–245.
  12. Quandt, Richard E. 1978. Tests of the equilibrium vs. disequilibrium hypotheses. International Economic Review 19: 435–52.
  13. Rao, Calyampudi Radhakrishna. 1973. Linear Statistical Inference and Its Applications. New York: John Wiley and Sons.
  14. Rothenberg, Thomas J. 1971. Identification in parametric models. Econometrica 39: 577–91.
  15. Rotnitzky, Andrea, David R. Cox, Matteo Bottai, and James Robins. 2000. Likelihood-based inference with singular information matrix. Bernoulli 6: 243–84.
  16. Sargan, John D. 1983. Identification and lack of identification. Econometrica 51: 1605–33.
  17. Silvey, Samuel D. 1959. The Lagrangian multiplier test. Annals of Mathematical Statistics 30: 389–407.
  18. Wang, Hansheng, and Chenlei Leng. 2007. Unified LASSO estimation by least squares approximation. Journal of the American Statistical Association 102: 1039–48.
  19. Wang, Hansheng, and Chenlei Leng. 2008. A note on adaptive group lasso. Computational Statistics and Data Analysis 52: 5277–86.
  20. Wang, Hansheng, Bo Li, and Chenlei Leng. 2009. Shrinkage tuning parameter selection with a diverging number of parameters. Journal of the Royal Statistical Society, Series B 71: 671–83.
  21. Wang, Hansheng, Runze Li, and Chih-Ling Tsai. 2007. Tuning parameter selectors for the smoothly clipped absolute deviation method. Biometrika 94: 553–68.
  22. Yuan, Ming, and Yi Lin. 2006. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society, Series B 68: 49–67.
  23. Zhang, Yiyun, Runze Li, and Chih-Ling Tsai. 2010. Regularization parameter selections via generalized information criterion. Journal of the American Statistical Association 105: 312–23.
  24. Zou, Hui. 2006. The adaptive lasso and its oracle properties. Journal of the American Statistical Association 101: 1418–29.
1
A model with the new parameter $\eta = \theta_2 - \theta_{20}$ can be considered in the case of a nonzero $\theta_{20}$.
2
As pointed out by an anonymous referee, our PML approach can also be applied to interesting economic models such as disequilibrium models and structural change models. For a market possibly in disequilibrium, an equilibrium is characterized by a parameter value on the boundary (Goldfeld and Quandt 1975; Quandt 1978). Structural changes can also be characterized by parameters on the boundary. Thus, our PML approach can be applied in those models with singular information matrices.
3
This implies that $\theta_0 \in \operatorname{int}(\Theta)$ when $\theta_{20} \neq 0$, which simplifies the later presentation of the asymptotic distribution of the PMLE. In the case that $\theta_2 \in \mathbb{R}^{k_2}$ with $k_2 \ge 2$ and $\theta_{20}$ is allowed to be on the boundary of $\Theta_2$, when $\theta_{20} \neq 0$, some components of $\theta_{20}$ can still be on the boundaries of their parameter spaces, and the asymptotic distributions of their PMLEs will then be nonstandard.
4
Proposition 2 is proved in the case of a nonsingular information matrix, similar to that in Fan and Li (2001). The method cannot be used in the case of a singular information matrix. However, the sparsity property can still be established by using only the consistency of $\hat\theta_n$ under Assumption 5 (i).
5
As before, when $\tilde\theta_{2n} = 0$, the PMLE of $\theta_2$ is $\hat\theta_{2\lambda} = 0$.
6
Another irregular case is that $z_i$ consists of only a constant term and dichotomous explanatory variables, and $x_i$ contains the same set of dichotomous explanatory variables and their interaction terms. For this case, the reparameterization process discussed in Appendix A to derive the asymptotic distribution of the MLE also applies.
7
The method is similar to that in Lee (1993) for the stochastic frontier function model.
8
In theory, the information criterion (2) can achieve model selection consistency as long as $\Gamma_n$ satisfies the order requirement in Assumption 7. However, the finite sample performance depends on the choice of $\Gamma_n$. From the proof of Proposition 5, when $\theta_{20} \neq 0$, for large enough $n$, $\Gamma_n$ should be smaller than the difference between the values of the expected log density at the true parameter vector and at the probability limit of the restricted MLE with the restriction $\theta_2 = 0$ imposed. When $\theta_{20} = 0$, $\Gamma_n$ should be larger than the difference between the values of the log likelihood divided by $n$ at the MLE and at the restricted MLE. For $\theta_{20} = 0$, $\sigma_0^2 = 2$ and $n = 200$, we compute the second difference 1000 times, and set $\Gamma_n = k n^{-1/2}$ to be the sample mean plus 2 times the standard error, which yields $k = 0.26$. We then set $\Gamma_n = 0.26\, n^{-1/2}$ in all cases and for all sample sizes. We also tried setting $\Gamma_n = k n^{-1/2}$ to be the sample mean plus zero to four times the standard error. The results are relatively sensitive to the choice of $k$. We leave the theoretical study of the choice of the constant in $\Gamma_n$ to future research.
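A sketch of that calibration, as we read it, is below. The simulation and fitting routines are hypothetical placeholders; whether "standard error" refers to the standard deviation of the simulated differences or to the standard error of their mean is ambiguous in the description, and the standard deviation is used here.

```python
# Calibrate k in Gamma_n = k * n^(-1/2): under theta_20 = 0, simulate the gap
# between the maximized average log-likelihoods of the full and restricted
# models, then choose k so that Gamma_n typically dominates that gap.
import numpy as np

def calibrate_k(simulate_data, fit_mle, fit_restricted, n=200, reps=1000):
    diffs = np.empty(reps)
    for r in range(reps):
        data = simulate_data(n)                      # data generated with theta_20 = 0
        diffs[r] = fit_mle(data) - fit_restricted(data)
    gamma_n = diffs.mean() + 2 * diffs.std(ddof=1)   # mean plus two "standard errors"
    return gamma_n * np.sqrt(n)                      # k such that Gamma_n = k / sqrt(n)
```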
9
For the reparameterization in Lee (1993), the parameters $\sigma$ and $\psi$ in $K_1$ are not taken to be the true values. Both methods work. The method here might be simpler in computation.
10
In Rotnitzky et al. (2000), for a general model, it is possible that the order of the first non-zero derivative with respect to the first component (last component in this paper) is either odd or even after proper reparameterizations. If the order is even, there is a need to analyze the sign of the MLE. In our case, the order is odd and the asymptotic distribution of the MLE can be derived by considering one more reparameterization.
11
Note that we cannot use the delta method because $r^{1/3}$ is not differentiable at $r = 0$.
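To spell out the alternative step (a sketch under the assumption that the relevant estimator $\hat r$ of a parameter with true value $r = 0$ has a nondegenerate limit $W$ after $n^{1/2}$ scaling): the continuous mapping theorem replaces the delta method, since $x \mapsto x^{1/3}$ is continuous everywhere,
$$ n^{1/6}\, \hat r^{\,1/3} = \big( n^{1/2}\, \hat r \big)^{1/3} \xrightarrow{d} W^{1/3}. $$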
Table 1. Probabilities that the PMLEs of the sample selection model select the right model.
| $\gamma_{20} = 2$ | $\gamma_{20} = 0.5$ | $\gamma_{20} = 0$ |
| PMLE-o | PMLE-t | PMLE-o | PMLE-t | PMLE-o | PMLE-t |
n = 200
$\sigma_0^2 = 2$, $\rho_0 = 0.7$ | 1.000 | 1.000 | 0.999 | 0.999 | 0.058 | 0.222
$\sigma_0^2 = 2$, $\rho_0 = -0.7$ | 1.000 | 1.000 | 1.000 | 1.000 | 0.072 | 0.241
$\sigma_0^2 = 2$, $\rho_0 = 0.3$ | 1.000 | 1.000 | 0.999 | 0.999 | 0.045 | 0.196
$\sigma_0^2 = 2$, $\rho_0 = -0.3$ | 1.000 | 1.000 | 0.997 | 0.999 | 0.043 | 0.200
$\sigma_0^2 = 2$, $\rho_0 = 0$ | 1.000 | 1.000 | 0.999 | 0.999 | 0.955 | 0.808
$\sigma_0^2 = 0.5$, $\rho_0 = 0.7$ | 1.000 | 1.000 | 1.000 | 1.000 | 0.051 | 0.191
$\sigma_0^2 = 0.5$, $\rho_0 = -0.7$ | 1.000 | 1.000 | 1.000 | 1.000 | 0.050 | 0.209
$\sigma_0^2 = 0.5$, $\rho_0 = 0.3$ | 1.000 | 1.000 | 0.998 | 1.000 | 0.054 | 0.216
$\sigma_0^2 = 0.5$, $\rho_0 = -0.3$ | 1.000 | 1.000 | 0.997 | 0.997 | 0.035 | 0.166
$\sigma_0^2 = 0.5$, $\rho_0 = 0$ | 1.000 | 1.000 | 0.996 | 0.998 | 0.964 | 0.809
n = 600
$\sigma_0^2 = 2$, $\rho_0 = 0.7$ | 1.000 | 1.000 | 1.000 | 1.000 | 0.014 | 0.310
$\sigma_0^2 = 2$, $\rho_0 = -0.7$ | 1.000 | 1.000 | 1.000 | 1.000 | 0.007 | 0.333
$\sigma_0^2 = 2$, $\rho_0 = 0.3$ | 1.000 | 1.000 | 1.000 | 1.000 | 0.004 | 0.255
$\sigma_0^2 = 2$, $\rho_0 = -0.3$ | 1.000 | 1.000 | 1.000 | 1.000 | 0.003 | 0.273
$\sigma_0^2 = 2$, $\rho_0 = 0$ | 1.000 | 1.000 | 1.000 | 1.000 | 0.996 | 0.692
$\sigma_0^2 = 0.5$, $\rho_0 = 0.7$ | 1.000 | 1.000 | 1.000 | 1.000 | 0.008 | 0.275
$\sigma_0^2 = 0.5$, $\rho_0 = -0.7$ | 1.000 | 1.000 | 1.000 | 1.000 | 0.011 | 0.244
$\sigma_0^2 = 0.5$, $\rho_0 = 0.3$ | 1.000 | 1.000 | 1.000 | 1.000 | 0.002 | 0.228
$\sigma_0^2 = 0.5$, $\rho_0 = -0.3$ | 1.000 | 1.000 | 1.000 | 1.000 | 0.001 | 0.214
$\sigma_0^2 = 0.5$, $\rho_0 = 0$ | 1.000 | 1.000 | 1.000 | 1.000 | 0.997 | 0.755
PMLE-o and PMLE-t denote the penalized maximum likelihood estimators (PMLEs) obtained from the criterion functions formulated using, respectively, the original and transformed likelihood functions. When $\theta_{20} \neq 0$, the numbers in the table are the probabilities that the PMLEs of $\theta_2$ are non-zero; when $\theta_{20} = 0$, the numbers are the probabilities that the PMLEs of $\theta_2$ are zero.
Table 2. The biases, standard errors (SE) and root mean squared errors (RMSE) of the estimators when $\gamma_{20} = -2$ in the sample selection model.
n, $\sigma_0^2$, $\rho_0$ | Estimator | $\beta_1$ | $\beta_2$ | $\sigma^2$ | $\gamma_1$ | $\gamma_2$ | $\rho$
200, 2, 0.7 | MLE-r | −0.344[0.134]0.369 | −0.003[0.135]0.135 | −0.163[0.260]0.307 | −0.001[0.091]0.091 | −2.000[0.000]2.000 | −0.700[0.000]0.700
MLE | 0.011[0.164]0.164 | −0.003[0.129]0.129 | −0.022[0.301]0.302 | −0.004[0.132]0.132 | 0.054[0.269]0.274 | 0.002[0.146]0.146
PMLE-o | 0.011[0.164]0.165 | −0.003[0.129]0.129 | −0.022[0.302]0.302 | −0.004[0.132]0.132 | 0.053[0.269]0.274 | 0.003[0.146]0.146
PMLE-t | 0.011[0.164]0.164 | −0.003[0.129]0.129 | −0.022[0.301]0.302 | −0.004[0.132]0.132 | 0.054[0.269]0.274 | 0.002[0.146]0.146
200, 2, −0.7 | MLE-r | 0.359[0.140]0.385 | 0.000[0.138]0.138 | −0.161[0.252]0.299 | −0.002[0.088]0.088 | −2.000[0.000]2.000 | 0.700[0.000]0.700
MLE | 0.001[0.171]0.171 | 0.000[0.130]0.130 | −0.017[0.296]0.296 | −0.001[0.134]0.134 | 0.046[0.264]0.268 | −0.004[0.153]0.154
PMLE-o | 0.001[0.171]0.171 | 0.000[0.130]0.130 | −0.017[0.296]0.296 | −0.001[0.134]0.134 | 0.046[0.264]0.268 | −0.004[0.153]0.153
PMLE-t | 0.001[0.171]0.171 | 0.000[0.130]0.130 | −0.017[0.296]0.296 | −0.001[0.134]0.134 | 0.046[0.264]0.268 | −0.004[0.153]0.153
200, 2, 0.3 | MLE-r | −0.146[0.142]0.204 | −0.015[0.145]0.146 | −0.055[0.283]0.288 | −0.000[0.089]0.089 | −2.000[0.000]2.000 | −0.300[0.000]0.300
MLE | 0.002[0.187]0.187 | −0.014[0.145]0.146 | −0.017[0.297]0.297 | −0.004[0.132]0.132 | 0.053[0.273]0.278 | −0.007[0.231]0.231
PMLE-o | 0.002[0.187]0.187 | −0.014[0.145]0.146 | −0.017[0.297]0.297 | −0.004[0.133]0.133 | 0.053[0.273]0.278 | −0.007[0.230]0.231
PMLE-t | 0.002[0.187]0.187 | −0.014[0.145]0.146 | −0.017[0.296]0.297 | −0.004[0.133]0.133 | 0.053[0.273]0.278 | −0.006[0.231]0.231
200, 2, −0.3 | MLE-r | 0.151[0.142]0.208 | −0.001[0.143]0.143 | −0.054[0.285]0.290 | −0.002[0.086]0.086 | −2.000[0.000]2.000 | 0.300[0.000]0.300
MLE | 0.000[0.189]0.189 | −0.002[0.144]0.144 | −0.016[0.297]0.298 | 0.002[0.127]0.127 | 0.050[0.264]0.269 | 0.003[0.225]0.225
PMLE-o | 0.000[0.189]0.189 | −0.002[0.144]0.144 | −0.016[0.297]0.298 | 0.002[0.127]0.127 | 0.050[0.264]0.269 | 0.003[0.225]0.225
PMLE-t | 0.000[0.189]0.189 | −0.002[0.144]0.144 | −0.016[0.297]0.298 | 0.002[0.127]0.127 | 0.050[0.264]0.269 | 0.003[0.225]0.225
200, 2, 0 | MLE-r | 0.002[0.141]0.141 | −0.005[0.140]0.140 | −0.051[0.278]0.283 | −0.002[0.088]0.088 | −2.000[0.000]2.000 | 0.000[0.000]0.000
MLE | 0.003[0.186]0.186 | −0.005[0.142]0.142 | −0.036[0.281]0.283 | −0.005[0.133]0.133 | 0.065[0.277]0.285 | 0.004[0.238]0.238
PMLE-o | 0.003[0.185]0.186 | −0.005[0.142]0.142 | −0.036[0.281]0.283 | −0.006[0.133]0.133 | 0.065[0.277]0.285 | 0.003[0.238]0.238
PMLE-t | 0.003[0.185]0.186 | −0.005[0.142]0.142 | −0.036[0.281]0.283 | −0.005[0.133]0.133 | 0.065[0.277]0.285 | 0.004[0.238]0.238
200, 0.5, 0.7 | MLE-r | −0.174[0.064]0.186 | 0.003[0.069]0.069 | −0.039[0.066]0.076 | 0.001[0.091]0.091 | −2.000[0.000]2.000 | −0.700[0.000]0.700
MLE | 0.004[0.082]0.082 | 0.004[0.066]0.066 | −0.003[0.076]0.076 | 0.001[0.132]0.132 | 0.066[0.280]0.287 | 0.012[0.142]0.143
PMLE-o | 0.004[0.082]0.082 | 0.004[0.066]0.066 | −0.003[0.075]0.075 | 0.001[0.132]0.132 | 0.067[0.279]0.287 | 0.012[0.142]0.142
PMLE-t | 0.004[0.082]0.082 | 0.004[0.066]0.066 | −0.003[0.075]0.076 | 0.001[0.132]0.132 | 0.067[0.279]0.287 | 0.012[0.142]0.142
200, 0.5, −0.7 | MLE-r | 0.177[0.069]0.190 | −0.003[0.070]0.070 | −0.042[0.070]0.081 | 0.003[0.090]0.090 | −2.000[0.000]2.000 | 0.700[0.000]0.700
MLE | 0.002[0.082]0.082 | −0.004[0.066]0.066 | −0.008[0.079]0.080 | 0.001[0.126]0.126 | 0.067[0.262]0.270 | −0.006[0.137]0.137
PMLE-o | 0.001[0.082]0.082 | −0.004[0.066]0.066 | −0.008[0.079]0.080 | 0.001[0.126]0.126 | 0.067[0.262]0.270 | −0.006[0.137]0.137
PMLE-t | 0.002[0.082]0.082 | −0.004[0.066]0.066 | −0.008[0.079]0.080 | 0.001[0.126]0.126 | 0.067[0.262]0.270 | −0.006[0.137]0.137
200, 0.5, 0.3 | MLE-r | −0.077[0.072]0.105 | 0.006[0.074]0.074 | −0.017[0.068]0.070 | −0.000[0.089]0.089 | −2.000[0.000]2.000 | −0.300[0.000]0.300
MLE | 0.000[0.096]0.096 | 0.006[0.073]0.074 | −0.007[0.071]0.072 | 0.000[0.132]0.132 | 0.042[0.264]0.267 | 0.008[0.220]0.220
PMLE-o | 0.000[0.096]0.096 | 0.006[0.073]0.074 | −0.007[0.071]0.072 | 0.000[0.132]0.132 | 0.042[0.264]0.267 | 0.008[0.220]0.220
PMLE-t | 0.000[0.096]0.096 | 0.006[0.073]0.074 | −0.007[0.071]0.072 | 0.000[0.132]0.132 | 0.042[0.264]0.267 | 0.008[0.220]0.220
200, 0.5, −0.3 | MLE-r | 0.074[0.073]0.103 | −0.002[0.074]0.074 | −0.019[0.068]0.071 | 0.001[0.091]0.091 | −2.000[0.000]2.000 | 0.300[0.000]0.300
MLE | −0.001[0.094]0.094 | −0.001[0.075]0.075 | −0.010[0.071]0.072 | −0.000[0.130]0.130 | 0.060[0.281]0.288 | 0.004[0.224]0.224
PMLE-o | −0.001[0.094]0.094 | −0.002[0.075]0.075 | −0.010[0.071]0.072 | −0.000[0.130]0.130 | 0.060[0.281]0.288 | 0.003[0.223]0.223
PMLE-t | −0.001[0.094]0.094 | −0.002[0.075]0.075 | −0.010[0.071]0.072 | −0.000[0.130]0.130 | 0.060[0.281]0.288 | 0.003[0.223]0.223
200, 0.5, 0 | MLE-r | −0.001[0.071]0.071 | −0.007[0.075]0.076 | −0.011[0.071]0.072 | −0.005[0.086]0.086 | −2.000[0.000]2.000 | 0.000[0.000]0.000
MLE | −0.001[0.092]0.092 | −0.007[0.076]0.077 | −0.007[0.072]0.073 | −0.001[0.135]0.135 | 0.066[0.279]0.287 | −0.001[0.246]0.246
PMLE-o | −0.001[0.092]0.092 | −0.007[0.076]0.077 | −0.007[0.072]0.073 | −0.001[0.135]0.135 | 0.066[0.279]0.287 | −0.001[0.246]0.246
PMLE-t | −0.001[0.092]0.092 | −0.007[0.076]0.077 | −0.007[0.072]0.073 | −0.001[0.135]0.135 | 0.066[0.279]0.287 | −0.001[0.246]0.246
600, 2, 0.7 | MLE-r | −0.356[0.079]0.364 | 0.000[0.078]0.078 | −0.137[0.159]0.210 | −0.000[0.050]0.050 | −2.000[0.000]2.000 | −0.700[0.000]0.700
MLE | 0.000[0.093]0.093 | 0.001[0.072]0.072 | −0.004[0.180]0.180 | 0.005[0.069]0.069 | 0.008[0.147]0.148 | 0.005[0.080]0.080
PMLE-o | 0.000[0.093]0.093 | 0.001[0.072]0.072 | −0.004[0.180]0.180 | 0.005[0.069]0.069 | 0.008[0.147]0.147 | 0.005[0.080]0.080
PMLE-t | 0.000[0.093]0.093 | 0.001[0.072]0.072 | −0.004[0.180]0.180 | 0.006[0.069]0.069 | 0.008[0.147]0.147 | 0.005[0.080]0.080
600, 2, −0.7 | MLE-r | 0.351[0.076]0.360 | −0.005[0.078]0.078 | −0.138[0.154]0.207 | 0.002[0.051]0.052 | −2.000[0.000]2.000 | 0.700[0.000]0.700
MLE | −0.001[0.091]0.091 | −0.006[0.073]0.073 | −0.010[0.175]0.175 | 0.002[0.073]0.073 | 0.011[0.148]0.149 | −0.002[0.080]0.080
PMLE-o | −0.001[0.091]0.091 | −0.006[0.073]0.073 | −0.009[0.175]0.175 | 0.002[0.073]0.073 | 0.011[0.148]0.149 | −0.002[0.080]0.080
PMLE-t | −0.001[0.091]0.091 | −0.006[0.073]0.073 | −0.009[0.175]0.175 | 0.002[0.073]0.073 | 0.011[0.148]0.149 | −0.002[0.080]0.080
600, 2, 0.3 | MLE-r | −0.158[0.081]0.178 | −0.000[0.082]0.082 | −0.031[0.165]0.168 | −0.003[0.052]0.052 | −2.000[0.000]2.000 | −0.300[0.000]0.300
MLE | −0.003[0.104]0.104 | 0.001[0.081]0.081 | −0.003[0.170]0.170 | −0.001[0.075]0.075 | 0.019[0.158]0.159 | 0.006[0.124]0.124
PMLE-o | −0.003[0.104]0.104 | 0.001[0.081]0.081 | −0.003[0.170]0.170 | −0.001[0.075]0.075 | 0.019[0.158]0.159 | 0.006[0.124]0.124
PMLE-t | −0.003[0.104]0.104 | 0.001[0.081]0.081 | −0.003[0.170]0.170 | −0.001[0.075]0.075 | 0.019[0.158]0.159 | 0.006[0.124]0.124
600, 2, −0.3 | MLE-r | 0.151[0.084]0.173 | −0.000[0.082]0.082 | −0.040[0.163]0.168 | −0.000[0.051]0.051 | −2.000[0.000]2.000 | 0.300[0.000]0.300
MLE | −0.001[0.107]0.107 | −0.001[0.081]0.081 | −0.012[0.168]0.169 | −0.002[0.071]0.071 | 0.018[0.159]0.160 | −0.002[0.126]0.126
PMLE-o | −0.001[0.107]0.107 | −0.001[0.081]0.081 | −0.012[0.168]0.169 | −0.002[0.071]0.071 | 0.018[0.159]0.160 | −0.002[0.126]0.126
PMLE-t | −0.001[0.107]0.107 | −0.001[0.081]0.081 | −0.012[0.168]0.169 | −0.002[0.071]0.071 | 0.018[0.159]0.160 | −0.002[0.126]0.126
600, 2, 0 | MLE-r | −0.005[0.081]0.081 | 0.002[0.084]0.084 | −0.007[0.162]0.162 | −0.002[0.050]0.050 | −2.000[0.000]2.000 | 0.000[0.000]0.000
MLE | −0.005[0.108]0.108 | 0.002[0.084]0.084 | −0.003[0.162]0.162 | −0.003[0.074]0.075 | 0.013[0.151]0.151 | 0.000[0.131]0.131
PMLE-o | −0.005[0.108]0.108 | 0.002[0.084]0.084 | −0.003[0.162]0.162 | −0.003[0.074]0.075 | 0.013[0.151]0.151 | 0.000[0.131]0.131
PMLE-t | −0.005[0.108]0.108 | 0.002[0.084]0.084 | −0.003[0.162]0.162 | −0.003[0.074]0.075 | 0.013[0.151]0.151 | 0.000[0.131]0.131
600, 0.5, 0.7 | MLE-r | −0.176[0.039]0.180 | 0.001[0.038]0.038 | −0.033[0.039]0.051 | −0.000[0.050]0.050 | −2.000[0.000]2.000 | −0.700[0.000]0.700
MLE | 0.002[0.047]0.047 | 0.001[0.036]0.036 | −0.000[0.043]0.043 | −0.000[0.071]0.071 | 0.014[0.144]0.145 | 0.005[0.078]0.078
PMLE-o | 0.002[0.047]0.047 | 0.001[0.036]0.036 | −0.000[0.043]0.043 | −0.000[0.071]0.071 | 0.014[0.144]0.145 | 0.005[0.078]0.078
PMLE-t | 0.002[0.047]0.047 | 0.001[0.036]0.036 | −0.000[0.043]0.043 | −0.000[0.071]0.071 | 0.014[0.144]0.145 | 0.005[0.078]0.078
600, 0.5, −0.7 | MLE-r | 0.178[0.040]0.182 | −0.000[0.039]0.039 | −0.034[0.039]0.052 | 0.000[0.053]0.053 | −2.000[0.000]2.000 | 0.700[0.000]0.700
MLE | −0.000[0.048]0.048 | −0.000[0.037]0.037 | −0.001[0.045]0.045 | 0.002[0.074]0.074 | 0.016[0.147]0.148 | −0.005[0.080]0.080
PMLE-o | −0.000[0.048]0.048 | −0.000[0.037]0.037 | −0.001[0.045]0.045 | 0.002[0.074]0.075 | 0.016[0.147]0.148 | −0.005[0.080]0.080
PMLE-t | −0.000[0.048]0.048 | −0.000[0.037]0.037 | −0.001[0.045]0.045 | 0.002[0.074]0.075 | 0.016[0.147]0.148 | −0.005[0.080]0.080
600, 0.5, 0.3 | MLE-r | −0.075[0.042]0.085 | 0.002[0.041]0.041 | −0.009[0.041]0.042 | 0.002[0.053]0.053 | −2.000[0.000]2.000 | −0.300[0.000]0.300
MLE | 0.000[0.053]0.053 | 0.002[0.041]0.041 | −0.002[0.042]0.042 | −0.001[0.076]0.076 | 0.023[0.155]0.156 | −0.001[0.124]0.124
PMLE-o | 0.000[0.053]0.053 | 0.002[0.041]0.041 | −0.002[0.042]0.042 | −0.001[0.076]0.076 | 0.023[0.155]0.156 | −0.001[0.124]0.124
PMLE-t | 0.000[0.053]0.053 | 0.002[0.041]0.041 | −0.002[0.042]0.042 | −0.001[0.076]0.076 | 0.023[0.155]0.156 | −0.001[0.124]0.124
600, 0.5, −0.3 | MLE-r | 0.076[0.039]0.085 | 0.000[0.041]0.041 | −0.012[0.041]0.043 | −0.002[0.052]0.052 | −2.000[0.000]2.000 | 0.300[0.000]0.300
MLE | 0.001[0.051]0.051 | 0.000[0.040]0.040 | −0.005[0.043]0.043 | −0.005[0.074]0.075 | 0.019[0.156]0.157 | 0.002[0.121]0.121
PMLE-o | 0.001[0.051]0.051 | 0.000[0.040]0.040 | −0.005[0.043]0.043 | −0.005[0.074]0.075 | 0.019[0.156]0.157 | 0.002[0.121]0.121
PMLE-t | 0.001[0.051]0.051 | 0.000[0.040]0.040 | −0.005[0.043]0.043 | −0.005[0.074]0.075 | 0.019[0.156]0.157 | 0.002[0.121]0.121
600, 0.5, 0 | MLE-r | −0.001[0.040]0.040 | 0.001[0.041]0.041 | −0.003[0.041]0.041 | −0.002[0.052]0.052 | −2.000[0.000]2.000 | 0.000[0.000]0.000
MLE | −0.000[0.052]0.052 | 0.001[0.041]0.041 | −0.002[0.041]0.041 | −0.005[0.074]0.074 | 0.015[0.146]0.147 | 0.001[0.129]0.129
PMLE-o | −0.000[0.052]0.052 | 0.001[0.041]0.041 | −0.002[0.041]0.041 | −0.005[0.074]0.074 | 0.015[0.146]0.147 | 0.001[0.129]0.129
PMLE-t | −0.000[0.052]0.052 | 0.001[0.041]0.041 | −0.002[0.041]0.041 | −0.005[0.074]0.074 | 0.015[0.146]0.147 | 0.001[0.129]0.129
MLE-r denotes the restricted maximum likelihood estimator (MLE) with the restriction $\theta_2 = 0$ imposed; PMLE-o and PMLE-t denote the PMLEs obtained from the criterion functions formulated using, respectively, the original and transformed likelihood functions. The three numbers in each cell are bias[SE]RMSE. $(\beta_{10}, \beta_{20}, \gamma_{10}) = (1, 1, 1)$.
Table 3. The biases, SEs and RMSEs of the estimators when $\gamma_{20} = -0.5$ in the sample selection model.
n, $\sigma_0^2$, $\rho_0$ | Estimator | $\beta_1$ | $\beta_2$ | $\sigma^2$ | $\gamma_1$ | $\gamma_2$ | $\rho$
200, 2, 0.7 | MLE-r | −0.704[0.123]0.715 | 0.001[0.125]0.125 | −0.532[0.204]0.570 | 0.002[0.090]0.090 | −0.500[0.000]0.500 | −0.700[0.000]0.700
MLE | −0.014[0.323]0.323 | 0.003[0.120]0.120 | 0.027[0.483]0.484 | 0.003[0.094]0.094 | 0.006[0.098]0.098 | −0.035[0.217]0.220
PMLE-o | −0.015[0.324]0.325 | 0.003[0.120]0.120 | 0.028[0.487]0.487 | 0.003[0.094]0.094 | 0.006[0.099]0.099 | −0.035[0.218]0.221
PMLE-t | −0.015[0.324]0.324 | 0.003[0.120]0.120 | 0.027[0.483]0.484 | 0.003[0.094]0.094 | 0.006[0.099]0.099 | −0.035[0.218]0.221
200, 2, −0.7 | MLE-r | 0.705[0.124]0.716 | 0.003[0.125]0.125 | −0.533[0.208]0.572 | −0.004[0.093]0.093 | −0.500[0.000]0.500 | 0.700[0.000]0.700
MLE | 0.009[0.306]0.306 | 0.001[0.123]0.123 | 0.030[0.508]0.509 | −0.003[0.097]0.097 | 0.012[0.104]0.104 | 0.031[0.207]0.209
PMLE-o | 0.009[0.308]0.308 | 0.001[0.123]0.123 | 0.031[0.510]0.511 | −0.002[0.097]0.097 | 0.012[0.104]0.104 | 0.031[0.207]0.209
PMLE-t | 0.009[0.306]0.306 | 0.001[0.123]0.123 | 0.030[0.508]0.509 | −0.003[0.097]0.097 | 0.012[0.104]0.104 | 0.031[0.207]0.209
200, 2, 0.3 | MLE-r | −0.305[0.140]0.336 | 0.003[0.142]0.142 | −0.126[0.270]0.297 | 0.000[0.088]0.088 | −0.500[0.000]0.500 | −0.300[0.000]0.300
MLE | 0.014[0.404]0.404 | 0.003[0.142]0.142 | 0.124[0.486]0.501 | 0.002[0.092]0.092 | 0.006[0.102]0.102 | −0.009[0.324]0.324
PMLE-o | 0.017[0.410]0.410 | 0.002[0.142]0.142 | 0.130[0.506]0.523 | 0.002[0.092]0.092 | 0.006[0.103]0.103 | −0.007[0.325]0.325
PMLE-t | 0.014[0.404]0.404 | 0.003[0.142]0.142 | 0.123[0.486]0.501 | 0.002[0.092]0.092 | 0.006[0.103]0.103 | −0.009[0.324]0.324
200, 2, −0.3 | MLE-r | 0.301[0.139]0.331 | 0.002[0.142]0.142 | −0.124[0.273]0.300 | 0.002[0.089]0.089 | −0.500[0.000]0.500 | 0.300[0.000]0.300
MLE | −0.014[0.421]0.421 | 0.002[0.142]0.142 | 0.125[0.472]0.489 | 0.002[0.094]0.094 | 0.010[0.107]0.107 | 0.012[0.332]0.333
PMLE-o | −0.014[0.420]0.421 | 0.002[0.142]0.142 | 0.125[0.472]0.488 | 0.002[0.094]0.094 | 0.009[0.109]0.110 | 0.012[0.332]0.332
PMLE-t | −0.015[0.421]0.421 | 0.002[0.142]0.142 | 0.125[0.472]0.489 | 0.002[0.094]0.094 | 0.009[0.107]0.108 | 0.012[0.332]0.332
200, 2, 0 | MLE-r | 0.004[0.144]0.144 | 0.005[0.147]0.147 | −0.051[0.275]0.280 | −0.004[0.092]0.093 | −0.500[0.000]0.500 | 0.000[0.000]0.000
MLE | 0.007[0.446]0.446 | 0.006[0.148]0.149 | 0.124[0.401]0.419 | −0.002[0.098]0.098 | 0.010[0.105]0.105 | 0.001[0.370]0.370
PMLE-o | 0.007[0.446]0.446 | 0.006[0.148]0.149 | 0.123[0.400]0.419 | −0.002[0.098]0.098 | 0.010[0.106]0.106 | 0.002[0.369]0.369
PMLE-t | 0.007[0.446]0.446 | 0.006[0.148]0.149 | 0.123[0.400]0.419 | −0.002[0.098]0.098 | 0.010[0.106]0.106 | 0.002[0.369]0.369
200, 0.5, 0.7 | MLE-r | −0.356[0.059]0.361 | 0.002[0.064]0.064 | −0.130[0.054]0.141 | −0.000[0.090]0.090 | −0.500[0.000]0.500 | −0.700[0.000]0.700
MLE | −0.009[0.158]0.158 | 0.002[0.065]0.065 | 0.012[0.127]0.127 | −0.001[0.097]0.097 | 0.016[0.103]0.104 | −0.033[0.227]0.229
PMLE-o | −0.009[0.158]0.158 | 0.002[0.065]0.065 | 0.012[0.127]0.127 | −0.001[0.097]0.097 | 0.016[0.103]0.104 | −0.033[0.227]0.229
PMLE-t | −0.009[0.158]0.158 | 0.002[0.065]0.065 | 0.012[0.127]0.127 | −0.001[0.097]0.097 | 0.016[0.103]0.104 | −0.033[0.227]0.229
200, 0.5, −0.7 | MLE-r | 0.350[0.063]0.356 | −0.000[0.062]0.062 | −0.133[0.056]0.144 | −0.003[0.088]0.088 | −0.500[0.000]0.500 | 0.700[0.000]0.700
MLE | 0.009[0.147]0.147 | −0.000[0.060]0.060 | 0.001[0.125]0.125 | −0.001[0.093]0.093 | 0.017[0.102]0.104 | 0.038[0.204]0.207
PMLE-o | 0.010[0.151]0.151 | −0.000[0.061]0.061 | 0.002[0.125]0.126 | −0.001[0.093]0.093 | 0.017[0.103]0.104 | 0.039[0.210]0.214
PMLE-t | 0.009[0.147]0.147 | −0.000[0.060]0.060 | 0.001[0.125]0.125 | −0.001[0.093]0.093 | 0.017[0.102]0.104 | 0.038[0.204]0.207
200, 0.5, 0.3 | MLE-r | −0.145[0.070]0.161 | −0.000[0.068]0.068 | −0.035[0.068]0.076 | 0.005[0.090]0.090 | −0.500[0.000]0.500 | −0.300[0.000]0.300
MLE | 0.006[0.212]0.212 | −0.000[0.069]0.069 | 0.027[0.123]0.126 | 0.003[0.096]0.096 | 0.007[0.109]0.110 | −0.028[0.338]0.339
PMLE-o | 0.006[0.212]0.212 | −0.000[0.069]0.069 | 0.028[0.123]0.126 | 0.003[0.096]0.096 | 0.006[0.111]0.111 | −0.028[0.338]0.339
PMLE-t | 0.006[0.212]0.212 | −0.000[0.069]0.069 | 0.028[0.123]0.126 | 0.003[0.096]0.096 | 0.007[0.109]0.110 | −0.028[0.338]0.339
200, 0.5, −0.3 | MLE-r | 0.152[0.068]0.167 | 0.003[0.070]0.070 | −0.032[0.065]0.072 | 0.004[0.088]0.088 | −0.500[0.000]0.500 | 0.300[0.000]0.300
MLE | 0.009[0.203]0.203 | 0.003[0.071]0.071 | 0.025[0.105]0.108 | 0.003[0.092]0.092 | 0.010[0.106]0.106 | 0.036[0.331]0.333
PMLE-o | 0.010[0.202]0.202 | 0.003[0.071]0.071 | 0.024[0.105]0.108 | 0.003[0.092]0.092 | 0.009[0.108]0.108 | 0.036[0.330]0.332
PMLE-t | 0.010[0.202]0.202 | 0.003[0.071]0.071 | 0.024[0.105]0.108 | 0.003[0.092]0.092 | 0.009[0.108]0.108 | 0.036[0.330]0.332
200, 0.5, 0 | MLE-r | −0.000[0.072]0.072 | −0.001[0.070]0.070 | −0.010[0.071]0.072 | −0.001[0.086]0.086 | −0.500[0.000]0.500 | 0.000[0.000]0.000
MLE | 0.004[0.216]0.216 | −0.002[0.071]0.071 | 0.032[0.104]0.108 | −0.001[0.090]0.090 | 0.005[0.107]0.107 | 0.007[0.360]0.360
PMLE-o | 0.003[0.219]0.219 | −0.002[0.071]0.071 | 0.033[0.107]0.112 | −0.001[0.090]0.090 | 0.004[0.110]0.110 | 0.006[0.363]0.363
PMLE-t | 0.005[0.215]0.216 | −0.002[0.071]0.071 | 0.031[0.104]0.108 | −0.001[0.090]0.090 | 0.005[0.108]0.109 | 0.008[0.360]0.360
600, 2, 0.7 | MLE-r | −0.707[0.072]0.711 | −0.004[0.069]0.069 | −0.506[0.124]0.521 | 0.001[0.050]0.050 | −0.500[0.000]0.500 | −0.700[0.000]0.700
MLE | −0.003[0.172]0.172 | −0.004[0.066]0.066 | 0.015[0.287]0.288 | 0.001[0.052]0.052 | −0.000[0.059]0.059 | −0.010[0.106]0.106
PMLE-o | −0.003[0.172]0.172 | −0.004[0.066]0.066 | 0.015[0.287]0.288 | 0.001[0.052]0.052 | −0.000[0.059]0.059 | −0.010[0.106]0.106
PMLE-t | −0.003[0.172]0.172 | −0.004[0.066]0.066 | 0.015[0.287]0.288 | 0.001[0.052]0.052 | −0.000[0.059]0.059 | −0.010[0.106]0.106
600, 2, −0.7 | MLE-r | 0.709[0.071]0.712 | 0.002[0.070]0.070 | −0.518[0.124]0.533 | −0.002[0.050]0.050 | −0.500[0.000]0.500 | 0.700[0.000]0.700
MLE | 0.010[0.166]0.166 | 0.002[0.069]0.069 | −0.009[0.279]0.280 | −0.002[0.052]0.052 | 0.005[0.056]0.057 | 0.012[0.106]0.106
PMLE-o | 0.010[0.166]0.166 | 0.002[0.069]0.069 | −0.009[0.279]0.280 | −0.002[0.052]0.052 | 0.005[0.056]0.057 | 0.012[0.106]0.106
PMLE-t | 0.010[0.166]0.166 | 0.002[0.069]0.069 | −0.009[0.279]0.280 | −0.002[0.052]0.052 | 0.005[0.056]0.057 | 0.012[0.106]0.106
600, 2, 0.3 | MLE-r | −0.303[0.079]0.313 | 0.000[0.081]0.081 | −0.100[0.163]0.191 | −0.001[0.050]0.050 | −0.500[0.000]0.500 | −0.300[0.000]0.300
MLE | 0.009[0.231]0.231 | 0.001[0.081]0.081 | 0.044[0.239]0.243 | 0.000[0.052]0.052 | 0.003[0.058]0.058 | −0.001[0.196]0.196
PMLE-o | 0.009[0.231]0.231 | 0.001[0.081]0.081 | 0.044[0.239]0.243 | 0.000[0.052]0.052 | 0.003[0.058]0.058 | −0.001[0.196]0.196
PMLE-t | 0.009[0.231]0.231 | 0.001[0.081]0.081 | 0.044[0.239]0.243 | 0.000[0.052]0.052 | 0.003[0.058]0.058 | −0.001[0.196]0.196
600, 2, −0.3 | MLE-r | 0.305[0.082]0.316 | −0.001[0.079]0.079 | −0.104[0.158]0.189 | −0.000[0.053]0.053 | −0.500[0.000]0.500 | 0.300[0.000]0.300
MLE | 0.012[0.229]0.229 | −0.000[0.079]0.079 | 0.028[0.228]0.229 | −0.000[0.055]0.055 | 0.002[0.057]0.057 | 0.018[0.196]0.197
PMLE-o | 0.012[0.229]0.229 | −0.000[0.079]0.079 | 0.028[0.228]0.229 | −0.000[0.055]0.055 | 0.002[0.057]0.057 | 0.018[0.196]0.197
PMLE-t | 0.012[0.229]0.229 | −0.000[0.079]0.079 | 0.028[0.228]0.229 | −0.000[0.055]0.055 | 0.002[0.057]0.057 | 0.018[0.196]0.197
600, 2, 0 | MLE-r | 0.002[0.082]0.082 | 0.001[0.082]0.082 | −0.011[0.154]0.155 | 0.004[0.050]0.051 | −0.500[0.000]0.500 | 0.000[0.000]0.000
MLE | −0.003[0.226]0.226 | 0.001[0.082]0.082 | 0.035[0.176]0.180 | 0.003[0.051]0.052 | 0.001[0.061]0.061 | −0.005[0.205]0.205
PMLE-o | −0.003[0.226]0.226 | 0.001[0.082]0.082 | 0.035[0.176]0.180 | 0.003[0.051]0.052 | 0.001[0.061]0.061 | −0.005[0.205]0.205
PMLE-t | −0.003[0.226]0.226 | 0.001[0.082]0.082 | 0.035[0.176]0.180 | 0.003[0.051]0.052 | 0.001[0.061]0.061 | −0.005[0.205]0.205
600, 0.5, 0.7 | MLE-r | −0.351[0.036]0.353 | 0.000[0.035]0.035 | −0.127[0.031]0.130 | −0.001[0.050]0.050 | −0.500[0.000]0.500 | −0.700[0.000]0.700
MLE | −0.001[0.084]0.084 | −0.001[0.034]0.034 | 0.002[0.070]0.070 | −0.001[0.053]0.053 | 0.002[0.059]0.059 | −0.013[0.109]0.110
PMLE-o | −0.001[0.084]0.084 | −0.001[0.034]0.034 | 0.002[0.070]0.070 | −0.001[0.053]0.053 | 0.002[0.059]0.059 | −0.013[0.109]0.110
PMLE-t | −0.001[0.084]0.084 | −0.001[0.034]0.034 | 0.002[0.070]0.070 | −0.001[0.053]0.053 | 0.002[0.059]0.059 | −0.013[0.109]0.110
600, 0.5, −0.7 | MLE-r | 0.353[0.036]0.355 | 0.001[0.036]0.036 | −0.128[0.030]0.131 | −0.002[0.051]0.051 | −0.500[0.000]0.500 | 0.700[0.000]0.700
MLE | 0.002[0.081]0.081 | 0.001[0.034]0.034 | 0.001[0.069]0.069 | −0.002[0.054]0.054 | 0.003[0.058]0.058 | 0.010[0.101]0.101
PMLE-o | 0.002[0.081]0.081 | 0.001[0.034]0.034 | 0.001[0.069]0.069 | −0.002[0.054]0.054 | 0.003[0.058]0.058 | 0.010[0.101]0.101
PMLE-t | 0.002[0.081]0.081 | 0.001[0.034]0.034 | 0.001[0.069]0.069 | −0.002[0.054]0.054 | 0.003[0.058]0.058 | 0.010[0.101]0.101
600, 0.5, 0.3 | MLE-r | −0.149[0.040]0.154 | −0.002[0.039]0.039 | −0.025[0.039]0.047 | 0.002[0.051]0.051 | −0.500[0.000]0.500 | −0.300[0.000]0.300
MLE | 0.006[0.114]0.114 | −0.002[0.039]0.039 | 0.010[0.057]0.058 | 0.002[0.054]0.054 | 0.003[0.059]0.059 | −0.000[0.188]0.188
PMLE-o | 0.006[0.114]0.114 | −0.002[0.039]0.039 | 0.010[0.057]0.058 | 0.002[0.054]0.054 | 0.003[0.059]0.059 | −0.000[0.188]0.188
PMLE-t | 0.006[0.114]0.114 | −0.002[0.039]0.039 | 0.010[0.057]0.058 | 0.002[0.054]0.054 | 0.003[0.059]0.059 | −0.000[0.188]0.188
600, 0.5, −0.3 | MLE-r | 0.152[0.039]0.157 | −0.002[0.040]0.040 | −0.026[0.040]0.047 | 0.001[0.053]0.053 | −0.500[0.000]0.500 | 0.300[0.000]0.300
MLE | 0.003[0.110]0.110 | −0.002[0.040]0.040 | 0.006[0.056]0.057 | 0.000[0.055]0.055 | 0.002[0.059]0.060 | 0.014[0.188]0.188
PMLE-o | 0.003[0.110]0.110 | −0.002[0.040]0.040 | 0.006[0.056]0.057 | 0.000[0.055]0.055 | 0.002[0.059]0.060 | 0.014[0.188]0.188
PMLE-t | 0.003[0.110]0.110 | −0.002[0.040]0.040 | 0.006[0.056]0.057 | 0.000[0.055]0.055 | 0.002[0.059]0.060 | 0.014[0.188]0.188
600, 0.5, 0 | MLE-r | 0.001[0.040]0.040 | 0.000[0.042]0.042 | −0.004[0.041]0.042 | −0.001[0.052]0.052 | −0.500[0.000]0.500 | 0.000[0.000]0.000
MLE | −0.003[0.119]0.119 | 0.000[0.042]0.042 | 0.008[0.047]0.047 | −0.001[0.053]0.053 | 0.002[0.060]0.060 | −0.007[0.212]0.213
PMLE-o | −0.003[0.119]0.119 | 0.000[0.042]0.042 | 0.008[0.047]0.047 | −0.001[0.053]0.053 | 0.002[0.060]0.060 | −0.007[0.212]0.213
PMLE-t | −0.003[0.119]0.119 | 0.000[0.042]0.042 | 0.008[0.047]0.047 | −0.001[0.053]0.053 | 0.002[0.060]0.060 | −0.007[0.212]0.213
MLE-r denotes the restricted MLE with the restriction $\theta_2 = 0$ imposed; PMLE-o and PMLE-t denote the PMLEs obtained from the criterion functions formulated using, respectively, the original and transformed likelihood functions. The three numbers in each cell are bias[SE]RMSE. $(\beta_{10}, \beta_{20}, \gamma_{10}) = (1, 1, 1)$.
Table 4. The biases, SEs and RMSEs of the estimators when $\gamma_{20} = 0$ and $\rho_0 \neq 0$ in the sample selection model.
n, $\sigma_0^2$, $\rho_0$ | Estimator | $\beta_1$ | $\beta_2$ | $\sigma^2$ | $\gamma_1$ | $\gamma_2$ | $\rho$
200, 2, 0.7 | MLE-r | −0.792[0.118]0.801 | −0.002[0.121]0.121 | −0.647[0.199]0.677 | −0.001[0.091]0.091 | 0.000[0.000]0.000 | −0.700[0.000]0.700
MLE | −0.482[0.884]1.007 | −0.001[0.124]0.124 | 0.210[0.666]0.698 | −0.002[0.092]0.092 | −0.004[0.096]0.096 | −0.459[0.698]0.835
PMLE-o | −0.743[0.333]0.814 | −0.001[0.122]0.122 | −0.546[0.510]0.747 | −0.002[0.099]0.099 | −0.000[0.036]0.036 | −0.666[0.215]0.699
PMLE-t | −0.665[0.521]0.845 | −0.001[0.122]0.122 | −0.376[0.639]0.741 | −0.001[0.091]0.091 | −0.001[0.050]0.050 | −0.606[0.382]0.717
200, 2, −0.7 | MLE-r | 0.786[0.114]0.794 | 0.004[0.115]0.115 | −0.649[0.191]0.676 | −0.001[0.089]0.089 | 0.000[0.000]0.000 | 0.700[0.000]0.700
MLE | 0.420[0.867]0.963 | 0.004[0.116]0.117 | 0.213[0.650]0.684 | −0.001[0.090]0.090 | −0.001[0.098]0.098 | 0.421[0.687]0.806
PMLE-o | 0.735[0.326]0.804 | 0.004[0.115]0.115 | −0.550[0.462]0.718 | −0.000[0.090]0.090 | −0.002[0.043]0.043 | 0.664[0.226]0.701
PMLE-t | 0.648[0.538]0.842 | 0.004[0.116]0.116 | −0.361[0.636]0.731 | −0.001[0.089]0.089 | −0.003[0.055]0.055 | 0.598[0.396]0.718
200, 2, 0.3 | MLE-r | −0.343[0.136]0.369 | 0.008[0.139]0.139 | −0.158[0.263]0.307 | 0.000[0.093]0.093 | 0.000[0.000]0.000 | −0.300[0.000]0.300
MLE | −0.302[1.047]1.089 | 0.008[0.143]0.143 | 0.908[0.847]1.242 | −0.000[0.093]0.093 | 0.005[0.098]0.099 | −0.270[0.719]0.768
PMLE-o | −0.340[0.332]0.475 | 0.009[0.141]0.141 | −0.071[0.555]0.559 | 0.000[0.099]0.099 | −0.001[0.037]0.037 | −0.296[0.180]0.347
PMLE-t | −0.326[0.569]0.656 | 0.008[0.141]0.141 | 0.145[0.789]0.802 | 0.000[0.093]0.093 | −0.001[0.051]0.051 | −0.289[0.362]0.463
200, 2, −0.3 | MLE-r | 0.340[0.142]0.368 | 0.001[0.138]0.138 | −0.161[0.261]0.307 | 0.002[0.089]0.089 | 0.000[0.000]0.000 | 0.300[0.000]0.300
MLE | 0.347[1.029]1.086 | 0.001[0.142]0.142 | 0.878[0.827]1.206 | 0.002[0.091]0.091 | 0.001[0.102]0.102 | 0.304[0.712]0.774
PMLE-o | 0.353[0.285]0.454 | 0.001[0.139]0.139 | −0.095[0.466]0.476 | 0.001[0.094]0.094 | 0.001[0.041]0.041 | 0.309[0.164]0.350
PMLE-t | 0.376[0.567]0.680 | 0.001[0.139]0.139 | 0.142[0.777]0.790 | 0.002[0.090]0.090 | −0.000[0.054]0.054 | 0.323[0.361]0.484
200, 0.5, 0.7 | MLE-r | −0.397[0.061]0.402 | −0.001[0.060]0.060 | −0.161[0.048]0.168 | 0.001[0.091]0.091 | 0.000[0.000]0.000 | −0.700[0.000]0.700
MLE | −0.240[0.425]0.488 | −0.001[0.061]0.061 | 0.037[0.158]0.163 | 0.001[0.091]0.091 | 0.002[0.100]0.100 | −0.464[0.679]0.822
PMLE-o | −0.378[0.152]0.408 | −0.001[0.060]0.060 | −0.142[0.111]0.180 | 0.001[0.094]0.094 | 0.002[0.040]0.040 | −0.673[0.192]0.700
PMLE-t | −0.341[0.242]0.418 | −0.001[0.060]0.060 | −0.105[0.148]0.181 | 0.001[0.091]0.091 | 0.002[0.054]0.054 | −0.617[0.347]0.708
200, 0.5, −0.7 | MLE-r | 0.397[0.059]0.401 | 0.004[0.060]0.060 | −0.166[0.048]0.173 | −0.001[0.093]0.093 | 0.000[0.000]0.000 | 0.700[0.000]0.700
MLE | 0.241[0.434]0.496 | 0.004[0.060]0.061 | 0.039[0.161]0.166 | −0.001[0.093]0.093 | 0.001[0.097]0.097 | 0.463[0.692]0.833
PMLE-o | 0.382[0.139]0.407 | 0.004[0.060]0.060 | −0.151[0.096]0.179 | −0.001[0.094]0.094 | 0.001[0.040]0.040 | 0.679[0.180]0.702
PMLE-t | 0.342[0.247]0.422 | 0.004[0.060]0.060 | −0.107[0.148]0.182 | −0.000[0.093]0.093 | 0.002[0.054]0.054 | 0.618[0.364]0.717
200, 0.5, 0.3 | MLE-r | −0.171[0.070]0.184 | 0.002[0.071]0.071 | −0.042[0.066]0.078 | 0.004[0.093]0.093 | 0.000[0.000]0.000 | −0.300[0.000]0.300
MLE | −0.173[0.518]0.547 | 0.002[0.072]0.072 | 0.220[0.209]0.304 | 0.004[0.094]0.094 | −0.003[0.104]0.104 | −0.308[0.715]0.779
PMLE-o | −0.167[0.174]0.241 | 0.002[0.071]0.071 | −0.017[0.139]0.140 | 0.003[0.095]0.095 | −0.001[0.043]0.043 | −0.294[0.198]0.354
PMLE-t | −0.158[0.288]0.328 | 0.002[0.071]0.071 | 0.037[0.192]0.196 | 0.004[0.093]0.093 | −0.002[0.055]0.056 | −0.285[0.374]0.470
200, 0.5, −0.3 | MLE-r | 0.167[0.071]0.181 | 0.002[0.069]0.069 | −0.039[0.068]0.079 | −0.003[0.088]0.088 | 0.000[0.000]0.000 | 0.300[0.000]0.300
MLE | 0.157[0.517]0.540 | 0.002[0.070]0.070 | 0.222[0.216]0.309 | −0.003[0.089]0.089 | 0.002[0.100]0.100 | 0.285[0.711]0.766
PMLE-o | 0.164[0.154]0.225 | 0.002[0.069]0.069 | −0.020[0.144]0.145 | −0.002[0.090]0.090 | 0.000[0.034]0.034 | 0.295[0.159]0.336
PMLE-t | 0.174[0.269]0.320 | 0.002[0.069]0.069 | 0.028[0.196]0.198 | −0.003[0.088]0.088 | −0.000[0.046]0.046 | 0.306[0.335]0.454
600, 2, 0.7 | MLE-r | −0.786[0.068]0.789 | −0.002[0.070]0.070 | −0.634[0.114]0.645 | 0.001[0.051]0.051 | 0.000[0.000]0.000 | −0.700[0.000]0.700
MLE | −0.322[0.645]0.721 | −0.002[0.070]0.070 | 0.001[0.415]0.415 | 0.001[0.051]0.051 | 0.002[0.053]0.053 | −0.317[0.561]0.644
PMLE-o | −0.771[0.147]0.785 | −0.001[0.070]0.070 | −0.616[0.202]0.648 | 0.001[0.061]0.061 | 0.000[0.011]0.011 | −0.688[0.100]0.695
PMLE-t | −0.581[0.466]0.745 | −0.002[0.070]0.070 | −0.378[0.452]0.589 | 0.001[0.051]0.051 | 0.001[0.029]0.029 | −0.531[0.384]0.656
600, 2, −0.7 | MLE-r | 0.788[0.068]0.790 | 0.002[0.069]0.069 | −0.637[0.114]0.647 | −0.001[0.049]0.049 | 0.000[0.000]0.000 | 0.700[0.000]0.700
MLE | 0.281[0.623]0.683 | 0.002[0.069]0.069 | 0.007[0.408]0.408 | −0.001[0.050]0.050 | −0.000[0.050]0.050 | 0.280[0.539]0.607
PMLE-o | 0.779[0.116]0.787 | 0.001[0.069]0.069 | −0.627[0.164]0.648 | 0.000[0.048]0.048 | 0.000[0.005]0.005 | 0.694[0.075]0.698
PMLE-t | 0.572[0.465]0.737 | 0.002[0.069]0.069 | −0.378[0.435]0.576 | −0.001[0.049]0.049 | −0.000[0.029]0.029 | 0.522[0.388]0.651
600, 2, 0.3 | MLE-r | −0.339[0.079]0.348 | 0.001[0.082]0.082 | −0.132[0.152]0.201 | 0.001[0.052]0.052 | 0.000[0.000]0.000 | −0.300[0.000]0.300
MLE | −0.326[0.844]0.904 | 0.001[0.083]0.083 | 0.565[0.508]0.760 | 0.001[0.052]0.052 | 0.002[0.057]0.057 | −0.290[0.626]0.690
PMLE-o | −0.341[0.106]0.357 | 0.002[0.082]0.082 | −0.127[0.174]0.216 | 0.001[0.060]0.060 | 0.000[0.009]0.009 | −0.301[0.046]0.304
PMLE-t | −0.330[0.504]0.602 | 0.001[0.083]0.083 | 0.113[0.516]0.528 | 0.001[0.052]0.052 | 0.002[0.029]0.029 | −0.294[0.358]0.463
600, 2, −0.3 | MLE-r | 0.343[0.075]0.351 | −0.002[0.082]0.082 | −0.122[0.151]0.194 | 0.001[0.052]0.052 | 0.000[0.000]0.000 | 0.300[0.000]0.300
MLE | 0.311[0.838]0.894 | −0.002[0.082]0.082 | 0.564[0.509]0.760 | 0.001[0.052]0.052 | −0.002[0.055]0.055 | 0.279[0.621]0.681
PMLE-o | 0.342[0.102]0.357 | −0.002[0.081]0.081 | −0.118[0.183]0.217 | 0.001[0.054]0.054 | −0.000[0.007]0.007 | 0.300[0.042]0.303
PMLE-t | 0.335[0.492]0.595 | −0.002[0.081]0.081 | 0.111[0.499]0.511 | 0.001[0.052]0.052 | −0.001[0.032]0.032 | 0.296[0.352]0.460
600, 0.5, 0.7 | MLE-r | −0.394[0.035]0.396 | −0.000[0.036]0.036 | −0.159[0.028]0.161 | −0.001[0.051]0.051 | 0.000[0.000]0.000 | −0.700[0.000]0.700
MLE | −0.159[0.323]0.360 | −0.000[0.036]0.036 | −0.001[0.107]0.107 | −0.001[0.051]0.051 | 0.002[0.054]0.054 | −0.312[0.552]0.634
PMLE-o | −0.390[0.064]0.395 | −0.000[0.036]0.036 | −0.156[0.044]0.162 | −0.001[0.056]0.056 | 0.000[0.009]0.009 | −0.693[0.076]0.697
PMLE-t | −0.304[0.222]0.377 | −0.000[0.036]0.036 | −0.102[0.111]0.151 | −0.001[0.051]0.051 | −0.001[0.028]0.028 | −0.554[0.362]0.662
600, 0.5, −0.7 | MLE-r | 0.394[0.033]0.396 | −0.000[0.034]0.034 | −0.158[0.028]0.160 | −0.002[0.050]0.050 | 0.000[0.000]0.000 | 0.700[0.000]0.700
MLE | 0.148[0.313]0.347 | −0.000[0.034]0.034 | −0.002[0.106]0.106 | −0.002[0.050]0.050 | 0.002[0.055]0.055 | 0.293[0.535]0.610
PMLE-o | 0.389[0.067]0.394 | −0.001[0.034]0.034 | −0.154[0.046]0.161 | −0.003[0.050]0.050 | −0.000[0.011]0.011 | 0.691[0.088]0.697
PMLE-t | 0.302[0.210]0.368 | −0.001[0.034]0.034 | −0.107[0.106]0.151 | −0.002[0.050]0.050 | 0.000[0.027]0.027 | 0.549[0.339]0.645
600, 0.5, 0.3 | MLE-r | −0.169[0.041]0.174 | −0.003[0.040]0.040 | −0.030[0.039]0.050 | −0.000[0.050]0.050 | 0.000[0.000]0.000 | −0.300[0.000]0.300
MLE | −0.154[0.413]0.441 | −0.004[0.040]0.040 | 0.138[0.125]0.186 | −0.000[0.051]0.051 | −0.000[0.055]0.055 | −0.281[0.616]0.677
PMLE-o | −0.169[0.048]0.176 | −0.003[0.040]0.040 | −0.030[0.042]0.052 | −0.001[0.053]0.053 | 0.000[0.006]0.006 | −0.300[0.034]0.302
PMLE-t | −0.163[0.235]0.285 | −0.004[0.040]0.040 | 0.023[0.120]0.122 | −0.000[0.050]0.050 | 0.000[0.027]0.027 | −0.293[0.336]0.446
600, 0.5, −0.3 | MLE-r | 0.170[0.039]0.174 | −0.000[0.040]0.040 | −0.031[0.039]0.050 | −0.001[0.051]0.051 | 0.000[0.000]0.000 | 0.300[0.000]0.300
MLE | 0.148[0.422]0.447 | −0.001[0.040]0.040 | 0.145[0.127]0.193 | −0.001[0.051]0.051 | 0.000[0.053]0.053 | 0.268[0.627]0.682
PMLE-o | 0.170[0.044]0.176 | −0.000[0.040]0.040 | −0.030[0.041]0.051 | −0.002[0.052]0.052 | 0.000[0.002]0.002 | 0.301[0.028]0.302
PMLE-t | 0.166[0.225]0.280 | −0.000[0.040]0.040 | 0.018[0.116]0.118 | −0.001[0.051]0.051 | 0.000[0.023]0.023 | 0.295[0.323]0.438
MLE-r denotes the restricted MLE with the restriction $\theta_2 = 0$ imposed; PMLE-o and PMLE-t denote the PMLEs obtained from the criterion functions formulated using, respectively, the original and transformed likelihood functions. The three numbers in each cell are bias[SE]RMSE. $(\beta_{10}, \beta_{20}, \gamma_{10}) = (1, 1, 1)$.
Table 5. The biases, SEs and RMSEs of the estimators when $\gamma_{20} = 0$ and $\rho_0 = 0$ in the sample selection model.
n, $\sigma_0^2$ | Estimator | $\beta_1$ | $\beta_2$ | $\sigma^2$ | $\gamma_1$ | $\gamma_2$ | $\rho$
200, 2 | MLE-r | 0.003[0.141]0.141 | −0.005[0.136]0.137 | −0.040[0.284]0.287 | 0.003[0.092]0.092 | 0.000[0.000]0.000 | 0.000[0.000]0.000
MLE | 0.000[1.076]1.076 | −0.004[0.138]0.138 | 1.062[0.887]1.383 | 0.002[0.093]0.093 | −0.001[0.100]0.100 | −0.004[0.712]0.712
PMLE-o | 0.001[0.319]0.319 | −0.004[0.137]0.137 | 0.041[0.521]0.522 | 0.001[0.098]0.098 | −0.001[0.041]0.041 | 0.000[0.176]0.176
PMLE-t | 0.024[0.581]0.581 | −0.005[0.137]0.137 | 0.271[0.801]0.845 | 0.003[0.092]0.092 | −0.001[0.053]0.053 | 0.014[0.359]0.359
200, 0.5 | MLE-r | 0.004[0.071]0.072 | 0.000[0.073]0.073 | −0.014[0.074]0.075 | −0.002[0.086]0.086 | 0.000[0.000]0.000 | 0.000[0.000]0.000
MLE | 0.012[0.535]0.535 | −0.001[0.074]0.074 | 0.261[0.232]0.349 | −0.002[0.087]0.087 | −0.002[0.101]0.101 | 0.012[0.709]0.709
PMLE-o | 0.001[0.156]0.156 | 0.000[0.074]0.074 | 0.005[0.135]0.135 | −0.003[0.089]0.089 | −0.001[0.031]0.031 | −0.004[0.164]0.164
PMLE-t | 0.009[0.290]0.290 | 0.000[0.074]0.074 | 0.061[0.200]0.209 | −0.002[0.086]0.086 | −0.002[0.048]0.048 | 0.006[0.353]0.353
600, 2 | MLE-r | 0.002[0.082]0.082 | −0.006[0.081]0.081 | −0.018[0.167]0.168 | −0.001[0.051]0.051 | 0.000[0.000]0.000 | 0.000[0.000]0.000
MLE | −0.014[0.864]0.864 | −0.006[0.082]0.082 | 0.713[0.537]0.893 | −0.001[0.051]0.051 | −0.001[0.056]0.056 | −0.011[0.623]0.623
PMLE-o | 0.002[0.115]0.115 | −0.006[0.081]0.081 | −0.011[0.211]0.211 | −0.001[0.057]0.057 | −0.000[0.008]0.008 | 0.000[0.049]0.049
PMLE-t | 0.017[0.539]0.539 | −0.006[0.081]0.081 | 0.261[0.547]0.606 | −0.001[0.051]0.051 | 0.000[0.032]0.032 | 0.010[0.375]0.375
600, 0.5 | MLE-r | 0.001[0.041]0.041 | 0.002[0.041]0.041 | −0.003[0.040]0.040 | −0.001[0.051]0.051 | 0.000[0.000]0.000 | 0.000[0.000]0.000
MLE | 0.025[0.437]0.438 | 0.001[0.041]0.041 | 0.185[0.134]0.229 | −0.001[0.051]0.051 | −0.002[0.056]0.056 | 0.033[0.629]0.630
PMLE-o | −0.000[0.053]0.053 | 0.002[0.041]0.041 | −0.002[0.046]0.046 | −0.001[0.053]0.053 | 0.000[0.007]0.007 | −0.001[0.046]0.046
PMLE-t | 0.013[0.250]0.250 | 0.001[0.041]0.041 | 0.057[0.131]0.143 | −0.001[0.051]0.051 | 0.000[0.028]0.028 | 0.015[0.346]0.346
MLE-r denotes the restricted MLE with the restriction $\theta_2 = 0$ imposed; PMLE-o and PMLE-t denote the PMLEs obtained from the criterion functions formulated using, respectively, the original and transformed likelihood functions. The three numbers in each cell are bias[SE]RMSE. $(\beta_{10}, \beta_{20}, \gamma_{10}) = (1, 1, 1)$.
Table 6. Probabilities that the PMLEs of the stochastic frontier function model select the right model.
| n = 200 | n = 600 |
| PMLE-o | PMLE-t | PMLE-o | PMLE-t |
$\delta_0 = 2$ | 0.822 | 0.838 | 0.991 | 0.991
$\delta_0 = 1$ | 0.170 | 0.289 | 0.196 | 0.271
$\delta_0 = 0.5$ | 0.071 | 0.184 | 0.025 | 0.082
$\delta_0 = 0.25$ | 0.054 | 0.132 | 0.012 | 0.065
$\delta_0 = 0.1$ | 0.050 | 0.159 | 0.015 | 0.059
$\delta_0 = 0$ | 0.961 | 0.856 | 0.990 | 0.925
PMLE-o and PMLE-t denote the PMLEs obtained from the criterion functions formulated using, respectively, the original and transformed likelihood functions. When $\delta_0 \neq 0$, the numbers in the table are the probabilities that the PMLEs of $\delta$ are non-zero; when $\delta_0 = 0$, the numbers are the probabilities that the PMLEs of $\delta$ are zero.
Table 7. The biases, SEs and RMSEs of the estimators when $\delta_0 \neq 0$ in the stochastic frontier function model.
n, $\delta_0$ | Estimator | $\beta_1$ | $\beta_2$ | $\beta_3$ | $\sigma^2$ | $\delta$
200, 2 | MLE-r | −1.595[0.112]1.599 | 0.002[0.114]0.114 | −0.001[0.057]0.057 | −2.574[0.264]2.588 | −2.000[0.000]2.000
MLE | −0.034[0.301]0.303 | 0.002[0.110]0.110 | −0.002[0.055]0.055 | −0.050[0.996]0.998 | 0.115[0.724]0.733
PMLE-o | −0.235[0.662]0.703 | 0.002[0.111]0.111 | −0.002[0.056]0.056 | −0.291[1.348]1.379 | −0.093[1.047]1.051
PMLE-t | −0.215[0.640]0.675 | 0.002[0.111]0.111 | −0.002[0.055]0.055 | −0.266[1.319]1.345 | −0.072[1.021]1.024
200, 1 | MLE-r | −0.795[0.082]0.799 | 0.002[0.082]0.082 | 0.001[0.041]0.041 | −0.657[0.134]0.671 | −1.000[0.000]1.000
MLE | −0.136[0.426]0.447 | 0.002[0.082]0.082 | 0.001[0.042]0.042 | −0.050[0.522]0.524 | −0.077[0.657]0.661
PMLE-o | −0.602[0.438]0.744 | 0.002[0.082]0.082 | 0.001[0.042]0.042 | −0.434[0.536]0.690 | −0.684[0.713]0.988
PMLE-t | −0.499[0.484]0.695 | 0.002[0.082]0.082 | 0.001[0.042]0.042 | −0.343[0.561]0.657 | −0.546[0.756]0.932
200, 0.5 | MLE-r | −0.395[0.073]0.401 | 0.002[0.070]0.070 | 0.000[0.039]0.039 | −0.178[0.106]0.207 | −0.500[0.000]0.500
MLE | −0.014[0.380]0.380 | 0.002[0.071]0.071 | 0.000[0.039]0.039 | 0.106[0.363]0.378 | 0.068[0.600]0.604
PMLE-o | −0.324[0.267]0.420 | 0.002[0.071]0.071 | 0.000[0.039]0.039 | −0.107[0.284]0.304 | −0.373[0.470]0.600
PMLE-t | −0.242[0.341]0.418 | 0.002[0.071]0.071 | 0.000[0.039]0.039 | −0.045[0.326]0.330 | −0.251[0.559]0.613
200, 0.25 | MLE-r | −0.199[0.071]0.211 | −0.003[0.071]0.071 | −0.001[0.034]0.034 | −0.052[0.102]0.115 | −0.250[0.000]0.250
MLE | 0.120[0.362]0.382 | −0.003[0.071]0.071 | −0.002[0.034]0.034 | 0.177[0.329]0.373 | 0.235[0.572]0.618
PMLE-o | −0.147[0.232]0.275 | −0.003[0.071]0.071 | −0.002[0.034]0.034 | −0.002[0.244]0.244 | −0.158[0.389]0.420
PMLE-t | −0.093[0.288]0.302 | −0.003[0.071]0.071 | −0.002[0.034]0.034 | 0.037[0.271]0.273 | −0.075[0.472]0.478
200, 0.1 | MLE-r | −0.079[0.073]0.108 | −0.002[0.071]0.071 | 0.002[0.037]0.037 | −0.018[0.105]0.107 | −0.100[0.000]0.100
MLE | 0.240[0.355]0.429 | −0.002[0.071]0.071 | 0.002[0.037]0.037 | 0.208[0.314]0.377 | 0.391[0.573]0.694
PMLE-o | −0.032[0.214]0.216 | −0.002[0.071]0.071 | 0.002[0.037]0.037 | 0.027[0.229]0.231 | −0.013[0.384]0.384
PMLE-t | 0.046[0.296]0.299 | −0.002[0.071]0.071 | 0.002[0.037]0.037 | 0.085[0.278]0.291 | 0.108[0.503]0.514
600, 2 | MLE-r | −1.595[0.066]1.596 | −0.004[0.065]0.066 | 0.001[0.033]0.033 | −2.558[0.151]2.563 | −2.000[0.000]2.000
MLE | −0.007[0.142]0.142 | −0.003[0.061]0.061 | 0.000[0.031]0.031 | −0.016[0.540]0.541 | 0.038[0.349]0.351
PMLE-o | −0.017[0.204]0.204 | −0.004[0.061]0.061 | 0.000[0.031]0.031 | −0.028[0.582]0.583 | 0.028[0.390]0.391
PMLE-t | −0.017[0.204]0.204 | −0.004[0.061]0.061 | 0.000[0.031]0.031 | −0.028[0.582]0.583 | 0.028[0.390]0.391
600, 1 | MLE-r | −0.796[0.047]0.797 | 0.004[0.048]0.049 | 0.001[0.025]0.025 | −0.640[0.079]0.645 | −1.000[0.000]1.000
MLE | −0.073[0.288]0.297 | 0.004[0.048]0.048 | 0.000[0.025]0.025 | −0.036[0.350]0.352 | −0.062[0.417]0.422
PMLE-o | −0.597[0.406]0.722 | 0.004[0.048]0.048 | 0.000[0.025]0.025 | −0.438[0.431]0.614 | −0.717[0.577]0.921
PMLE-t | −0.536[0.433]0.689 | 0.004[0.048]0.048 | 0.000[0.025]0.025 | −0.387[0.445]0.590 | −0.639[0.605]0.880
600, 0.5 | MLE-r | −0.397[0.042]0.399 | −0.002[0.043]0.043 | −0.000[0.022]0.022 | −0.165[0.063]0.176 | −0.500[0.000]0.500
MLE | −0.062[0.316]0.322 | −0.002[0.043]0.043 | −0.000[0.022]0.022 | 0.047[0.248]0.252 | −0.040[0.449]0.451
PMLE-o | −0.375[0.142]0.401 | −0.002[0.043]0.043 | −0.000[0.022]0.022 | −0.145[0.141]0.202 | −0.466[0.215]0.513
PMLE-t | −0.336[0.210]0.396 | −0.002[0.043]0.043 | −0.000[0.022]0.022 | −0.118[0.177]0.212 | −0.410[0.309]0.513
600, 0.25 | MLE-r | −0.200[0.041]0.204 | 0.001[0.042]0.042 | 0.001[0.021]0.021 | −0.046[0.059]0.075 | −0.250[0.000]0.250
MLE | 0.065[0.289]0.296 | 0.001[0.042]0.042 | 0.001[0.021]0.021 | 0.107[0.202]0.229 | 0.121[0.414]0.432
PMLE-o | −0.190[0.101]0.215 | 0.001[0.042]0.042 | 0.001[0.021]0.021 | −0.037[0.100]0.107 | −0.234[0.149]0.277
PMLE-t | −0.158[0.170]0.232 | 0.001[0.042]0.042 | 0.001[0.021]0.021 | −0.017[0.131]0.132 | −0.187[0.247]0.310
600, 0.1 | MLE-r | −0.080[0.040]0.089 | −0.003[0.041]0.041 | −0.001[0.020]0.020 | −0.011[0.058]0.059 | −0.100[0.000]0.100
MLE | 0.187[0.295]0.350 | −0.003[0.041]0.041 | −0.001[0.020]0.020 | 0.145[0.205]0.251 | 0.279[0.427]0.510
PMLE-o | −0.067[0.110]0.129 | −0.003[0.041]0.041 | −0.001[0.020]0.020 | −0.000[0.100]0.100 | −0.079[0.169]0.187
PMLE-t | −0.039[0.172]0.176 | −0.003[0.041]0.041 | −0.001[0.020]0.020 | 0.018[0.132]0.133 | −0.037[0.258]0.261
MLE-r denotes the restricted MLE with the restriction $\delta = 0$ imposed; PMLE-o and PMLE-t denote the PMLEs obtained from the criterion functions formulated using, respectively, the original and transformed likelihood functions. The three numbers in each cell are bias[SE]RMSE. $\beta_0 = (1, 1, 1)$. Corresponding to $\delta_0 = 2$, 1, 0.5, 0.25 and 0.1, the true value of $\sigma^2$ is $\sigma_0^2 = 5$, 2, 1.25, 1.0625 and 1.01.
Table 8. The biases, SEs and RMSEs of the estimators when $\delta_0 = 0$ in the stochastic frontier function model.
n, $\delta_0$ | Estimator | $\beta_1$ | $\beta_2$ | $\beta_3$ | $\sigma^2$ | $\delta$
200, 0 | MLE-r | −0.000[0.074]0.074 | −0.001[0.073]0.073 | −0.001[0.037]0.037 | −0.016[0.100]0.101 | 0.000[0.000]0.000
MLE | 0.302[0.347]0.460 | −0.001[0.073]0.073 | −0.002[0.037]0.037 | 0.191[0.295]0.351 | 0.462[0.549]0.718
PMLE-o | 0.037[0.198]0.202 | −0.001[0.073]0.073 | −0.002[0.037]0.037 | 0.018[0.202]0.203 | 0.067[0.337]0.344
PMLE-t | 0.109[0.278]0.298 | −0.001[0.073]0.073 | −0.002[0.037]0.037 | 0.069[0.248]0.257 | 0.178[0.459]0.492
600, 0 | MLE-r | 0.001[0.040]0.040 | −0.001[0.041]0.042 | −0.001[0.022]0.022 | −0.002[0.057]0.057 | 0.000[0.000]0.000
MLE | 0.268[0.292]0.396 | −0.001[0.042]0.042 | −0.001[0.022]0.022 | 0.153[0.206]0.257 | 0.377[0.419]0.564
PMLE-o | 0.009[0.093]0.093 | −0.001[0.042]0.042 | −0.001[0.022]0.022 | 0.005[0.089]0.089 | 0.014[0.138]0.139
PMLE-t | 0.049[0.178]0.185 | −0.001[0.042]0.042 | −0.001[0.022]0.022 | 0.031[0.132]0.135 | 0.072[0.262]0.272
MLE-r denotes the restricted MLE with the restriction $\delta = 0$ imposed; PMLE-o and PMLE-t denote the PMLEs obtained from the criterion functions formulated using, respectively, the original and transformed likelihood functions. The three numbers in each cell are bias[SE]RMSE. $\beta_0 = (1, 1, 1)$ and $\sigma_0^2 = 1$.
