Article

Two-Step Lasso Estimation of the Spatial Weights Matrix

Spatial Economics and Econometrics Centre (SEEC), Heriot-Watt University, Edinburgh, Scotland EH14 4AS, UK
* Author to whom correspondence should be addressed.
Econometrics 2015, 3(1), 128-155; https://doi.org/10.3390/econometrics3010128
Submission received: 31 October 2014 / Accepted: 11 February 2015 / Published: 9 March 2015
(This article belongs to the Special Issue Spatial Econometrics)

Abstract
The vast majority of spatial econometric research relies on the assumption that the spatial network structure is known a priori. This study considers a two-step estimation strategy for estimating the $n(n-1)$ interaction effects in a spatial autoregressive panel model where the spatial dimension is potentially large. The identifying assumption is approximate sparsity of the spatial weights matrix. The proposed estimation methodology exploits the Lasso estimator and mimics two-stage least squares (2SLS) to account for endogeneity of the spatial lag. The developed two-step estimator is of more general interest; it may be used in applications where the number of endogenous regressors and the number of instrumental variables are larger than the number of observations. We derive convergence rates for the two-step Lasso estimator. Our Monte Carlo simulation results show that the two-step estimator is consistent and successfully recovers the spatial network structure for reasonable sample sizes, $T$.
JEL classifications:
C23; C33; C52

1. Introduction

This study proposes an estimator, based on the Lasso, for an approximately sparse spatial weights matrix in a high-dimensional setting. The vast majority of spatial econometric research relies on the assumption that the spatial weights matrix, $W_n$, which measures the strength of interactions between units, is known a priori. In applied work, researchers often need to select between standard specifications such as the binary contiguity matrix, the inverse distance matrix or other matrices based on some observable notion of distance. The choice of spatial weights has been a focus of criticism of spatial econometric methods, since estimation results depend heavily on the researcher's specification of the spatial weights matrix [1,2,3]. Furthermore, a pre-defined weights matrix does not provide insights into the drivers of socio-economic interactions and general equilibrium effects in a network, but only allows for measuring the general strength of interactions, which is reflected in the size of the spatial autoregressive coefficient.
The shortcomings of employing pre-specified spatial weights are well known. Pinkse et al. [4] made one of the first attempts to conduct inference in a setting where the spatial weights matrix is not known a priori; the authors propose a semi-parametric estimator which relies on observable distance measures. Bhattacharjee and Jensen-Butler [5] consider estimation of the spatial weights matrix from the spatial autocovariance matrix in spatial panel models, and show that $W_n$ is only partially identified. Intuitively, the main issue is that, in contrast to autocovariances, spatial weights relate to the direction and strength of causation between spatial units. Since there are twice as many spatial weights as there are autocovariances, further assumptions are required for identification. Bhattacharjee and Jensen-Butler [5] propose an estimator that provides exact identification under the assumption that the spatial weights matrix is symmetric and $n$ is fixed.1 Estimation of the spatial weights matrix in a low-dimensional, small-$n$ panel, under different structural assumptions on the autocovariances or using moment conditions, is discussed in [7,8].
The aforementioned literature focuses on a low-dimensional context where typically $n \ll T$. In contrast, Bailey et al. [9] consider sparsity of the spatial weights matrix as an alternative identification assumption in a large-$T$ panel setting with the spatial error model. They apply a multiple testing procedure to the matrix of spatial autocorrelation coefficients in order to identify the non-zero interactions, and place weights of $+1$, $-1$ or zero, depending on whether the autocorrelations are significantly positive, significantly negative or insignificant, respectively. There are also a few previous studies which apply Lasso-type estimators to high-dimensional spatial panel models and assume sparsity.2 Manresa [13] considers a non-autoregressive panel model with spatially lagged exogenous regressors; hence, the model does not suffer from simultaneity and the Lasso estimator can be used for dimensionality reduction. Souza [14] and Lam and Souza [15] consider a spatial autoregressive model with additional spatial lags of exogenous regressors. Souza [14] discusses several exclusion restrictions that allow for identification, but require prior knowledge about the network structure. Lam and Souza [15] propose a Lasso-type estimator for spatial weights under the assumption that the error variance decays to zero as $T$ increases, which may be a strong assumption in some applications. By contrast, the method proposed here does not require prior knowledge about the network structure and does not rely on variance decay, but instead exploits exogenous regressors as instruments.
This study explores the estimation of the spatial weights matrix in a panel data setting where T, the number of time periods, is large. The spatial autoregressive or spatial lag model is given by
$$y_{it} = \sum_{j=1}^{n} w_{ij} y_{jt} + x_{it}'\beta_i + e_{it}, \quad t = 1,\dots,T;\; i = 1,\dots,n \qquad (1.1)$$
where $y_{it}$ is the response variable, $x_{it} = (x_{1,it}, x_{2,it},\dots,x_{K,it})'$ is the vector of exogenous regressors and $\beta_i$ is the $K \times 1$ parameter vector with $K \geq 1$. The error term is assumed to be independently distributed, but is allowed to be heteroskedastic and non-Gaussian. $w_{ij}$ is the $(i,j)$th element of the $n \times n$ spatial weights matrix, denoted by $W_n$, and measures the strength of spill-over effects from unit $j$ to unit $i$. The spatial weights matrix has zeros on the diagonal, i.e., $w_{ii} = 0$ for all $i$.3 The first term on the right-hand side is often referred to as the spatial lag, analogous to a temporal lag in time-series models. The spatial autoregressive panel model is a natural extension of cross-sectional spatial autoregressive models as introduced by [16,17]. Spatio-temporal panel models, such as the spatial autoregressive model in (1.1), have recently attracted much attention; see, for example, [18,19,20,21,22].4
Estimation of the above model poses two major challenges when $W_n$ is treated as unknown. First, the model suffers from reverse causality, as the response variable appears on both the left and the right-hand side of the equation; it is well known that ordinary least squares (OLS) is inconsistent in the presence of endogeneity. Second, the model is not identified unless the number of parameters, $p := n(n-1) + Kn$, is smaller than the number of observations, $nT$, or further assumptions are made. The identification assumption considered here is sparsity of the weights matrix, which requires that each unit is affected by only a limited number of other units. Specifically, the number of units affecting a specific unit $i$ is assumed to be much smaller than $T$, but we explicitly allow for $p > nT$.
The proposed estimation method is a two-step procedure based on the Lasso estimator introduced by Tibshirani [24]. The Lasso is a regularization technique which can, under the sparsity assumption, deal with high-dimensional settings where the number of regressors is large relative to the number of observations. The $\ell_1$-penalization employed by the Lasso sets some of the coefficient estimates to exactly zero, making the Lasso estimator attractive for model selection. The $\ell_1$-penalty behaves similarly to the $\ell_0$-penalty, as used in the Akaike information criterion and Bayesian information criterion [25,26], but is computationally more attractive due to its convex form. The Lasso is a popular and well-established technique, but its theoretical properties have only recently become better understood. Recent theoretical contributions include [27,28,29,30,31,32,33,34,35].
Conceptually, identification of a spatial weights matrix requires suitably dealing with the endogeneity inherent in model (1.1). Lam and Souza [15] address this issue by assuming that the error variance asymptotically decays to zero. By contrast, we address endogeneity using instruments. The estimation methodology proceeds in two steps. In the first step, relevant instruments are identified by the Lasso and predictions for $y_{1t},\dots,y_{nt}$ are obtained. In the second step, the regression model in (1.1) is estimated, but the spatial lag on the right-hand side is replaced with the predictions from the first step; that is, the second-step Lasso selects the neighbours affecting $y_{it}$. The procedure is conceptually based on two-stage least squares (2SLS), but employs the Lasso for selecting relevant instruments in the first step and relevant spatial lags in the second step. Figure 1 visualizes the spatial autoregressive model in (1.1) for $n = 2$ and motivates the choice of instruments exploited to identify the spatial weights: in the regression equation with $y_{1t}$ as the dependent variable, we can exploit $x_{2t}$ as instruments for $y_{2t}$, and vice versa.
Figure 1. The Spatial Autoregressive Model for $n = 2$.
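To illustrate this identification logic numerically, the following is a minimal sketch for the $n = 2$ case of Figure 1, assuming a simple hypothetical data-generating process: OLS on the structural equation is biased because the spatial lag is endogenous, while 2SLS with $x_{2t}$ as the excluded instrument recovers $w_{12}$.

```python
# Minimal sketch of the identification idea in Figure 1 (hypothetical DGP):
# with n = 2, x_2t is excluded from the y_1t equation and can therefore
# instrument the endogenous spatial lag y_2t.
import numpy as np

rng = np.random.default_rng(0)
T, w12, w21, b = 5000, 0.4, 0.3, 1.0

x1, x2 = rng.normal(size=T), rng.normal(size=T)
e1, e2 = rng.normal(size=T), rng.normal(size=T)

# Reduced form: solve the two-equation simultaneous system for y1, y2.
d = 1.0 - w12 * w21
y1 = (b * x1 + e1 + w12 * (b * x2 + e2)) / d
y2 = (b * x2 + e2 + w21 * (b * x1 + e1)) / d

# OLS of y1 on (y2, x1) is inconsistent: y2 is correlated with e1.
X = np.column_stack([y2, x1])
ols = np.linalg.lstsq(X, y1, rcond=None)[0]

# 2SLS: first stage projects y2 on the instruments (x1, x2);
# the fitted values replace y2 in the second stage.
Z = np.column_stack([x1, x2])
y2_hat = Z @ np.linalg.lstsq(Z, y2, rcond=None)[0]
X2 = np.column_stack([y2_hat, x1])
tsls = np.linalg.lstsq(X2, y1, rcond=None)[0]

print("true (w12, beta1):", (w12, b))
print("OLS: ", ols.round(3))   # w12 estimate biased
print("2SLS:", tsls.round(3))  # close to (0.4, 1.0)
```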
We also consider the post-Lasso OLS estimator due to Belloni and Chernozhukov [31], which applies ordinary least squares (OLS) to the model selected by the Lasso and aims at reducing the Lasso shrinkage bias. Although the estimation methodology relies on large T asymptotics, Monte Carlo results suggest that the two-step Lasso estimator is able to recover the spatial network structure if T is as small as 50–100. The estimator may be combined with established large T panel estimators such as the Common Correlated Effects estimator [36,37], which controls for strong cross-sectional dependence, and can be extended to dynamic models including temporal lags of the dependent variable as regressors.
Finally, this study is also related to the emerging literature on high-dimensional methods which allow the number of endogenous regressors to be larger than the sample size. The Self-Tuning Instrumental Variable (STIV) estimator due to Gautier and Tsybakov [38] is a generalization of the Dantzig estimator [39] that allows for many endogenous regressors. The focused generalized method of moments (FGMM) estimator developed in Fan and Liao [40] extends shrinkage GMM estimators, as in, e.g., [41], to high-dimensional settings. The two-step Lasso estimator considered in this study is conceptually similar to Lin et al. [42], who apply two-step penalized least squares to genetic data. We improve upon Lin et al. [42] in that our approach allows for approximate sparsity and non-Gaussian errors, and uses the sharper penalty level proposed by Belloni et al. [30]. However, our main contribution is to point out that a simple two-step Lasso estimation method can be employed to estimate the spatial weights matrix. The approach does not require any prior knowledge about the network structure, beyond the sparsity assumption and a set of exogenous regressors.
The article is organized as follows. In Section 2, we consider a general setting where the number of endogenous regressors and the number of instruments are allowed to be larger than the number of observations; the two-step estimator may thus be of more general interest for applications with endogeneity in high dimensions. Section 3 applies the proposed two-step estimator to the spatial autoregressive model in (1.1). In Section 4, we present Monte Carlo results demonstrating the performance of the two-step Lasso for estimating the spatial weights matrix. Finally, Section 5 concludes.
Notation. The $\ell_q$-norm of a vector $a \in \mathbb{R}^M$ is defined as $\|a\|_q = (\sum_{m=1}^M |a_m|^q)^{1/q}$ for $q = 1, 2$. The number of non-zero elements in $a$ is denoted by $\|a\|_0$, and $\|a\|_\infty$ is the largest element of $a$ in absolute value. We use $((\cdot))$ to denote the typical element of a matrix, e.g., $A = ((a_{ij}))$. The Frobenius norm of $A$ is $\|A\|_F = (\sum_{i,j} |a_{ij}|^2)^{1/2}$. Let $\|A\|_1$ be the entry-wise $\ell_1$-norm, i.e., $\|A\|_1 = \sum_{i,j} |a_{ij}|$. The support operator is $\mathrm{supp}(a) = \{m \in \{1,\dots,M\} : a_m \neq 0\}$. If $V$ is a set, then $\bar V$ is the complement of $V$. $a_V$ is the vector with elements $a_m 1\{m \in V\}$ for $m = 1,\dots,M$, where $1\{\cdot\}$ is the indicator function. The typical element of $A_V$ is $a_{ij}1\{j \in V\}$. We use $x_T \lesssim_P z_T$ to denote $x_T = O_P(z_T)$ and $a \lesssim b$ to denote $a \leq cb$ for some constant $c > 0$.

2. Two-Step Lasso Estimator

In this section, we develop a two-step estimation procedure that allows the number of possibly endogenous regressors, as well as the number of instruments, to be larger than the sample size. The identifying assumption is approximate sparsity. Section 3 presents the spatial autoregressive model as an application of this setting. The two-step estimator may be of interest in, for example, cross-country growth regressions, where the number of regressors is large relative to the number of countries and endogeneity is a potential issue. Furthermore, endogeneity in high dimensions may arise when the aim is to find a sparse linear approximation to a complex non-parametric data-generating process; see the earnings regressions in [43].
The structural equation and first-step equations are given by
$$y_t = x_t'\beta^\ast + e_t, \qquad (2.1)$$
$$x_{tj} = z_t'\pi_j^\ast + u_{tj}, \quad j = 1,\dots,p. \qquad (2.2)$$
$y_t$ is the outcome variable and $x_t$ is a $p$-dimensional vector of regressors. For notational consistency with Section 3, we use $t = 1,\dots,T$ to denote distinct units or repeated observations over time. Without loss of generality, we assume that the first $\bar p$ regressors are endogenous, i.e., $E[e_t|x_{tj}] \neq 0$ for $j = 1,\dots,\bar p$ with $\bar p \in \{1,\dots,p\}$. The remaining $p - \bar p$ regressors are exogenous; hence, we allow the set of exogenous regressors to be empty. We assume the existence of $L \geq p$ instruments, $z_t$, which satisfy the exclusion restrictions $E[e_t|z_t] = E[u_{tj}|z_t] = 0$ for $j = 1,\dots,p$. If a regressor $x_{tj}$ is exogenous, it serves as an instrument for itself; hence, $z_{tj} = x_{tj}$ for $j > \bar p$. The error terms $e_t$ and $u_{tj}$ are independently distributed, but possibly heteroskedastic and non-Gaussian. The interest lies in obtaining a sparse approximation of $\beta^\ast$. While the model in (2.1)–(2.2) assumes that the conditional expectation functions are linear, the framework may easily be generalized to a non-parametric data-generating process as in, for example, Bickel et al. [27].

2.1. First-Step Estimation

The aim of the first step is to estimate the conditional expectation function $x_{tj}^\ast := E[x_{tj}|z_t] = z_t'\pi_j^\ast$ for $j = 1,\dots,\bar p$, where $x_{tj}^\ast$ represents the optimal instrument. Note that $x_{tj}^\ast = x_{tj}$ if $x_{tj}$ is exogenous, which corresponds to $j = \bar p + 1,\dots,p$. If $L > T$, OLS estimation of the first-step equations in (2.2) is not feasible, as the Gram matrix $T^{-1}Z'Z$ with $Z = ((z_{tj}))$ is singular. The Lasso can achieve consistency in a high-dimensional setting where $L > T$ under the assumption of sparsity and further regularity conditions stated below. Exact sparsity requires that the number of non-zero elements in $\pi_j^\ast$, i.e., $\|\pi_j^\ast\|_0$, is small relative to the sample size. This assumption is too strong in most applications, as $\pi_j^\ast$ may have many elements that are negligible but not exactly zero. Instead, we assume the existence of a sparse parameter vector $\pi_j^0$ that approximates the true parameter vector $\pi_j^\ast$. Specifically, as in [30], we assume that, for each endogenous regressor $j$, the number of instruments necessary for approximating the conditional expectation function is smaller than the sample size and the associated approximation error $a_{tj}(z_t) = x_{tj}^\ast - z_t'\pi_j^0$ converges as specified below.5
Assumption 2.1. 
Consider the model in (2.2). There exists a parameter vector $\pi_j^0$ for all $j = 1,\dots,\bar p$ such that
$$E[x_{tj}|z_t] = z_t'\pi_j^0 + a_{tj}(z_t), \quad s_1 := \max_{1\leq j\leq\bar p}\|\pi_j^0\|_0 \ll T, \quad A_{s_1} := \max_{1\leq j\leq\bar p}\sqrt{\frac{1}{T}\sum_{t=1}^T a_{tj}^2} \lesssim_P \sqrt{\frac{s_1}{T}}$$
The target parameter $\pi_j^0$ can be motivated as the solution to the infeasible oracle program that penalizes the number of non-zero parameters [31]. Under homoskedasticity, we can write the oracle objective function as
$$\min_{\pi_j} \frac{1}{T}\|X_j - Z\pi_j\|_2^2 + \frac{\sigma^2}{T}\|\pi_j\|_0$$
where $X_j$ is the $j$th column of the matrix $X = ((x_{tj}))$. The second term represents the noise level, and $\sqrt{s_1/T}$ is the convergence rate of the oracle, which knows the true model.
The first-step Lasso estimator for endogenous regressor j is defined as
$$\hat\pi_j = \arg\min_{\pi_j} \frac{1}{T}\|X_j - Z\pi_j\|_2^2 + \frac{\lambda_1}{T}\|\hat\Upsilon_{1j}\pi_j\|_1$$
The first term is the residual sum of squares; the second term penalizes the absolute size of the coefficients and thus shrinks the coefficient estimates towards zero, with the degree of shrinkage increasing in the penalty level $\lambda_1$. The Lasso nests OLS when $\lambda_1 = 0$, while $\lambda_1 = \infty$ yields the null model. $\hat\Upsilon_{1j}$ is a diagonal matrix of penalty loadings which accounts for heteroskedasticity and may be set to the identity matrix under homoskedasticity [30]. The Lasso predictions $\hat X_j := Z\hat\pi_j$ replace $X_j$ in the second step to address endogeneity. For the exogenous regressors, we set $\hat X_j = X_j$.
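Because the loadings enter only through the product $\hat\Upsilon_{1j}\pi_j$, a weighted Lasso can be computed with any standard solver by rescaling the columns of $Z$. A minimal sketch, assuming scikit-learn and a given vector of loadings; the function name is ours, and `alpha = lam/(2*T)` maps the objective above onto scikit-learn's $(1/(2T))\|\cdot\|_2^2 + \alpha\|\cdot\|_1$ parameterization:

```python
# Sketch: weighted (heteroskedasticity-robust) Lasso via column rescaling.
# Penalizing |gamma_l * pi_l| is equivalent to a standard Lasso on the
# rescaled design Z[:, l] / gamma_l, with coefficients rescaled back.
import numpy as np
from sklearn.linear_model import Lasso

def weighted_lasso(Xj, Z, lam, loadings):
    T = Z.shape[0]
    Z_tilde = Z / loadings            # loadings: (L,) vector of gamma_l
    model = Lasso(alpha=lam / (2 * T), fit_intercept=False)
    model.fit(Z_tilde, Xj)
    return model.coef_ / loadings     # undo the rescaling
```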
The penalty level $\lambda_1$ may be selected by cross-validation in order to minimize the prediction error, as originally suggested by Tibshirani [24]. Since the primary purpose of our study is not prediction, but recovery of the spatial network structure, we follow an alternative approach that originates from Bickel et al. [27]. The penalty level is chosen as the smallest value that, with high probability, overrules the random part of the data-generating process, which is represented by the score vector $S_{1j} = -\frac{2}{T}\hat\Upsilon_{1j}^{-1}Z'u_j$, i.e.,
$$\frac{\lambda_1}{T} \geq c \max_{1\leq j\leq\bar p}\|S_{1j}\|_\infty \quad \text{with } c > 1 \qquad (2.3)$$
The event in (2.3) plays a crucial role in the derivation of non-asymptotic bounds and convergence rates. Belloni et al. [30], using the moderate deviation theory in [44], show that, as $T \to \infty$, $P(c\max_{1\leq j\leq\bar p}\|S_{1j}\|_\infty > \lambda_1/T) = o(1)$, where
$$\lambda_1 = 2c\sqrt{T}\,\Phi^{-1}\!\left(1 - \alpha/(2L\bar p)\right) \quad \text{with } \log(1/\alpha) \lesssim \log(\max(L\bar p, T)) \qquad (2.4)$$
under possibly non-Gaussian and heteroskedastic errors. Note that the term $\bar p$ in (2.4) accounts for the number of Lasso regressions in the first step and $L$ is the number of instruments. $c$ is a constant greater than, but close to, 1. For applied work, Belloni et al. [45] suggest setting $c = 1.1$ and $\alpha = \min(1/T, 0.05)$.
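For illustration, the penalty level in (2.4) is straightforward to compute; a minimal sketch assuming scipy, using the suggested $c = 1.1$ and $\alpha = \min(1/T, 0.05)$:

```python
# Sketch: the penalty level in (2.4) with the rule-of-thumb constants
# c = 1.1 and alpha = min(1/T, 0.05) suggested in the text.
import numpy as np
from scipy.stats import norm

def penalty_level(T, L, p_bar, c=1.1):
    a = min(1.0 / T, 0.05)
    return 2 * c * np.sqrt(T) * norm.ppf(1 - a / (2 * L * p_bar))

print(penalty_level(T=100, L=50, p_bar=50))
```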
The optimal penalty loadings are given by
$$\Upsilon_{1j}^0 = \mathrm{diag}(\gamma_{1j,1},\dots,\gamma_{1j,l},\dots,\gamma_{1j,L}), \quad \gamma_{1j,l} = \sqrt{\frac{1}{T}\sum_t z_{tl}^2 u_{tj}^2} \qquad (2.5)$$
but are infeasible as $u_{tj}$ is unobserved. Using the iterative algorithm in Appendix A.2, we can construct asymptotically valid penalty loadings, $\hat\Upsilon_{1j}$, that are in the probability limit at least as large as the optimal penalty loadings [30].
The properties of the Lasso estimator depend crucially on the Gram matrix $T^{-1}Z'Z$. As stated above, OLS is not feasible if $L > T$, as the Gram matrix is singular, which implies that the minimum eigenvalue is zero:
$$\min_{\delta\neq 0} \frac{\|Z\delta\|_2}{\sqrt{T}\,\|\delta\|_2} = 0$$
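This singularity is easy to verify numerically; a small sketch with numpy:

```python
# Sketch: with more instruments than observations (L > T), the Gram matrix
# Z'Z/T is rank-deficient, so its minimum eigenvalue is exactly zero and
# OLS is infeasible.
import numpy as np

rng = np.random.default_rng(0)
T, L = 50, 200
Z = rng.normal(size=(T, L))
gram = Z.T @ Z / T                      # L x L, but rank at most T
eigvals = np.linalg.eigvalsh(gram)
print(eigvals.min())                    # ~0 up to floating-point error
print(np.linalg.matrix_rank(gram))      # 50, not 200
```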
Bickel et al. [27] introduce the restricted eigenvalue
$$\min_{\|\delta_{\bar\Omega_{1j}}\|_1 \leq C\|\delta_{\Omega_{1j}}\|_1,\ \delta\neq 0} \frac{\|Z\delta\|_2}{\sqrt{T}\,\|\delta_{\Omega_{1j}}\|_2}$$
which is defined as the minimum over the restricted set $\|\delta_{\bar\Omega_{1j}}\|_1 \leq C\|\delta_{\Omega_{1j}}\|_1$, where $\Omega_{1j} = \mathrm{supp}(\pi_j^0)$ and $C$ is a positive constant. The condition $\|\delta_{\bar\Omega_{1j}}\|_1 \leq C\|\delta_{\Omega_{1j}}\|_1$ holds with high probability and, when it does not hold, it is not required to bound the prediction error norm (see Appendix A.1).
Definition 1. 
Let $C$ and $\bar\kappa$ be positive constants and let $\Omega$ denote the active set. We say that the restricted eigenvalue condition holds for $M$ if, as $T \to \infty$,
$$\kappa_C(M) := \min_{\|\delta_{\bar\Omega}\|_1 \leq C\|\delta_\Omega\|_1,\ \delta\neq 0} \frac{\sqrt{s}\,\|M\delta\|_2}{\sqrt{T}\,\|\delta_\Omega\|_1} \geq \bar\kappa > 0, \quad s := \|\delta\|_0$$
In the above definition of the restricted eigenvalue, the $\ell_2$-norm in the denominator is replaced with the $\ell_1$-norm using the Cauchy-Schwarz inequality, which allows us to relate the $\ell_1$-parameter norm to the $\ell_2$-prediction norm. The restricted eigenvalue is closely related to the compatibility constant [46]. Bühlmann and Van de Geer [28] provide an extensive overview of related conditions and their relationships. The restricted eigenvalue condition holds under general conditions; see, e.g., [27,31,47]. One sufficient condition is the restricted sparse eigenvalue condition, which requires that any appropriate sub-matrix of the Gram matrix has positive and finite eigenvalues [27].
To accommodate heteroskedasticity, we also define the weighted restricted eigenvalue condition [30],
$$\kappa_C^\omega(M) := \min_{\|\Upsilon^0\delta_{\bar\Omega}\|_1 \leq C\|\Upsilon^0\delta_\Omega\|_1,\ \delta\neq 0} \frac{\sqrt{s}\,\|M\delta\|_2}{\sqrt{T}\,\|\Upsilon^0\delta_\Omega\|_1}$$
where $\Upsilon^0$ are the optimal penalty loadings as in (2.5). If the restricted eigenvalue condition holds, the weighted restricted eigenvalue is positive as long as the optimal penalty loadings are bounded away from zero and bounded from above, which we maintain in the following.
With respect to the first-step equations in (2.2), we explicitly state the restricted eigenvalue condition as follows:
Assumption 2.2. 
The Restricted Eigenvalue Condition holds for $Z$.
Under Assumptions 2.1 and 2.2, with the penalty level set as in (2.4) and asymptotically valid penalty loadings $\hat\Upsilon_{1j}$, Theorem 1 in [30] implies that the $\ell_2$-prediction error norm of the Lasso estimator has the following rate of convergence:
$$\max_{1\leq j\leq\bar p} \frac{1}{\sqrt T}\|Z\hat\pi_j - Z\pi_j^\ast\|_2 \lesssim_P \sqrt{\frac{s_1\log(\max(L\bar p, T))}{T}} \qquad (2.6)$$
We do not reproduce the proof of Theorem 1 in [30]. However, our main result in Theorem 1 below is a generalization in that we account for the prediction error that arises from the first-step Lasso estimation. The proof is provided in Appendix A.1. The convergence rate in (2.6) is slower than the oracle rate of $\sqrt{s_1/T}$ by a factor of $\sqrt{\log(\max(L\bar p, T))}$, which can be interpreted as the cost of not knowing the active set of $\pi_j^0$.

2.2. Second-Step Estimation

Since the second step is infeasible by OLS if $p > T$, we require, as in the first step, approximate sparsity and assumptions on the Gram matrix.
Assumption 2.3. 
Consider the model in (2.1). There exists a parameter vector $\beta^0$ such that
$$E[y_t|z_t] = x_t'\beta^0 + r_t(z_t), \quad s_2 := \|\beta^0\|_0 \ll T, \quad R_{s_2} := \sqrt{\frac{1}{T}\sum_{t=1}^T r_t^2} \lesssim_P \sqrt{\frac{s_2}{T}}, \quad \|\beta^0\|_2 \lesssim_P \sqrt{s_2}$$
Assumption 2.4. 
The Restricted Eigenvalue Condition holds for $\hat X$.
Assumption 2.3 is similar to Assumption 2.1, but additionally assumes $\|\beta^0\|_2 \lesssim \sqrt{s_2}$, which allows us to simplify the expressions for the convergence rates. Assumption 2.4 could also be written in terms of the optimal instrument matrix $X^\ast$. Specifically, Assumption 2.4 holds if the restricted eigenvalue condition holds for $X^\ast$ and $\hat X - X^\ast$ is small, as specified in [28], Corollary 6.8.
For identification of $\beta^0$ we also require, as is standard in the IV/GMM literature, that the matrix $\Pi^0 = (\pi_1^0,\dots,\pi_p^0)$ has full column rank.
Assumption 2.5. 
$\mathrm{rank}(\Pi^0) = p$.
The second-step Lasso estimator uses the predictions $\hat X$ as regressors and is defined as
$$\hat\beta = \arg\min_\beta \frac{1}{T}\|y - \hat X\beta\|_2^2 + \frac{\lambda_2}{T}\|\hat\Upsilon_2\beta\|_1$$
where the penalty level is set to
$$\lambda_2 = 2c\sqrt{T}\,\Phi^{-1}\!\left(1 - \alpha/(2p)\right) \quad \text{with } \log(1/\alpha) \lesssim \log(\max(L\bar p, T)) \qquad (2.7)$$
and the penalty loadings are estimated using the algorithm in Appendix A.2.
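To fix ideas, a minimal end-to-end sketch of the two steps, assuming scikit-learn, identity penalty loadings (the homoskedastic simplification) and, for brevity, that all regressors are endogenous; `two_step_lasso` is our illustrative name, and `alpha = lam/(2*T)` again maps onto scikit-learn's Lasso objective:

```python
# Sketch of the generic two-step Lasso for (2.1)-(2.2).
# Step 1: Lasso of each endogenous regressor on the instruments Z gives
# fitted values x_hat_j. Step 2: Lasso of y on the fitted values.
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import Lasso

def two_step_lasso(y, X_endog, Z, c=1.1):
    T, p_bar = X_endog.shape
    L = Z.shape[1]
    a = min(1.0 / T, 0.05)

    lam1 = 2 * c * np.sqrt(T) * norm.ppf(1 - a / (2 * L * p_bar))   # (2.4)
    first = Lasso(alpha=lam1 / (2 * T), fit_intercept=False)
    X_hat = np.column_stack([first.fit(Z, X_endog[:, j]).predict(Z)
                             for j in range(p_bar)])

    lam2 = 2 * c * np.sqrt(T) * norm.ppf(1 - a / (2 * p_bar))       # (2.7)
    second = Lasso(alpha=lam2 / (2 * T), fit_intercept=False)
    second.fit(X_hat, y)
    return second.coef_, X_hat
```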
The crucial difference to the first-step Lasso estimation is that $X^\ast$ is unobservable; we instead use $\hat X$, an estimate that in general deviates from the optimal instrument $X^\ast$. For the two-step Lasso estimator, we consider the prediction bound $\frac{1}{\sqrt T}\|\hat X\hat\beta - X^\ast\beta^\ast\|_2$, where predictions obtained using the unknown optimal instrument and the unknown true parameter vector $\beta^\ast$ serve as the reference point. Note that, using the triangle inequality,
$$\frac{1}{\sqrt T}\|\hat X\hat\beta - X^\ast\beta^\ast\|_2 = \frac{1}{\sqrt T}\|(\hat X\hat\beta - \hat X\beta^0) + (\hat X\beta^0 - X^\ast\beta^0) + (X^\ast\beta^0 - X^\ast\beta^\ast)\|_2 \leq \frac{1}{\sqrt T}\|\hat X\hat\beta - \hat X\beta^0\|_2 + \frac{1}{\sqrt T}\|\hat V\beta^0\|_2 + R_{s_2}$$
where we define $\hat V = \hat X - X^\ast$ with typical element $\hat v_{tj} = z_t'\hat\pi_j - z_t'\pi_j^\ast$. The bound for the third term is stated in Assumption 2.3. The convergence rate for the second term follows from the prediction norm rate of the first-step Lasso in (2.6). The bound for the first term is derived in Appendix A.1. Combining the three bounds, we have the following result.
Theorem 1. 
Consider the model in (2.1)–(2.2). Suppose Assumptions 2.1–2.5 hold, asymptotically valid penalty loadings are used, and the penalty levels $\lambda_1$ and $\lambda_2$ are set as in (2.4) and (2.7). Then,
$$\frac{1}{\sqrt T}\|\hat X\hat\beta - X^\ast\beta^\ast\|_2 \lesssim_P \sqrt{\frac{s_2^2 s_1\log(\max(L\bar p, T))}{T}}$$
Furthermore, if $s_1$ and $s_2$ do not depend on $T$, then
$$\|\hat\beta - \beta^0\|_1 \lesssim_P \sqrt{\frac{\log(\max(L\bar p, T))}{T}}$$
The proof is provided in Appendix A.1. As expected, the convergence rate of the $\ell_2$-prediction norm depends on the degree of sparsity in the first-step and second-step equations. The second part of the theorem is relevant for the spatial panel model in the next section, where the sparsity levels ($s_1$ and $s_2$) and the dimensions of the problem ($L$ and $p$) depend on the number of units, $n$, but not on the time dimension, $T$.

3. The Spatial Autoregressive Model

This section applies the proposed two-step Lasso procedure to the spatial lag model in (1.1). In Section 3.2, we discuss two extensions to the two-step Lasso estimator; namely, the post-Lasso and thresholded post-Lasso.

3.1. Two-Step Lasso

The structural and reduced form equations can be written as
$$y_i = \sum_{j=1}^{n} w_{ij}^\ast y_j + X_i\beta_i^\ast + e_i \qquad (3.1)$$
$$y_j = \sum_{s=1}^{n} X_s\pi_{j,s}^\ast + u_j, \quad i,j = 1,\dots,n \qquad (3.2)$$
where $y_i = (y_{i1}, y_{i2},\dots,y_{iT})'$, $w_{jj}^\ast = 0$, and $X_i = (x_{i1},\dots,x_{iT})'$ is the $T \times K$ matrix of exogenous regressors. We assume that $e_{it}$ is independently distributed across $t$, i.e., $E[e_{it}e_{is}] = 0$ for $t \neq s$. The $\ast$-superscripts indicate that we interpret the parameters as the true parameter values.
It is evident that the spatial model in (3.1)–(3.2) is an application of the more general model in (2.1)–(2.2). Specifically, the right-hand side regressors $y_1,\dots,y_n$ correspond to the endogenous regressors and $X_i$ corresponds to the exogenous regressors in (2.1). Furthermore, the set of exogenous instruments is given by $X_1,\dots,X_n$.
The choice of instruments is closely related to Kelejian and Prucha [48]. To identify the spatial autoregressive parameter, they suggest the use of first and higher order spatial lags of exogenous regressors as instruments for the endogenous spatial lag. As discussed in the Introduction, we use $X_j$ as instruments in order to identify $w_{ij}^\ast$, which represents the causal impact of $y_j$ on $y_i$. Therefore, for identification, we require contemporaneous exogeneity across space:
Assumption 3.1. 
$E[e_{it}|x_{jt}] = 0$ for all $i,j = 1,\dots,n$ and $t = 1,\dots,T$.
In many applications, estimation of (3.1)–(3.2) by 2SLS is not feasible, as there are $n - 1 + K$ and $nK$ regressors on the respective right-hand sides, both of which are potentially larger than $T$. In order to exploit the Lasso estimator, we require sparsity as in Section 2.
Assumption 3.2. 
(a) Consider the model in (3.1). There exists a parameter vector $w_i^0 = (w_{i1}^0,\dots,w_{in}^0)'$ for all $i = 1,\dots,n$ with $w_{ii}^0 = 0$ such that
$$E[y_{it}|x_{1t},\dots,x_{nt}] = \sum_{j=1}^{n} w_{ij}^0 y_{jt}^\ast + x_{it}'\beta_i^0 + a_{it}, \quad s_1 + K \ll T, \quad A_{s_1} := \max_{1\leq i\leq n}\sqrt{\frac{1}{T}\sum_{t=1}^T a_{it}^2} \lesssim_P \sqrt{\frac{s_1}{T}}$$
where $y_{jt}^\ast = E[y_{jt}|x_{1t},\dots,x_{nt}]$, $s_1 := \max_{1\leq i\leq n}\|w_i^0\|_0$ and $K = \|\beta_i^0\|_0$.
(b) Consider the model in (3.2). There exists a parameter vector $\pi_{i,j}^0$ for all $i = 1,\dots,n$ such that
$$E[y_{it}|x_{1t},\dots,x_{nt}] = \sum_{j\neq i} x_{jt}'\pi_{i,j}^0 + x_{it}'\pi_{i,i}^0 + r_{it}, \quad s_2 + K \ll T, \quad R_{s_2} := \max_{1\leq i\leq n}\sqrt{\frac{1}{T}\sum_{t=1}^T r_{it}^2} \lesssim_P \sqrt{\frac{s_2}{T}}$$
where $s_2 := \max_{1\leq i\leq n}\sum_{j\neq i}\|\pi_{i,j}^0\|_0$ and $K = \|\pi_{i,i}^0\|_0$.
To simplify the exposition and without loss of generality, we assume that $\beta_i^0$ does not include any zero elements, implying that all regressors in $x_{it}$ are relevant determinants of the dependent variable $y_{it}$. The assumption also guarantees identification as long as $K \geq 1$.
The sparsity assumptions in Assumption 3.2 (a) and Assumption 3.2 (b) are related. To see this, consider the case where $n = 2$. Then, the reduced form equations are given by
$$y_1 = X_1\pi_{1,1} + X_2\pi_{1,2} + u_1, \qquad y_2 = X_1\pi_{2,1} + X_2\pi_{2,2} + u_2$$
with $\pi_{2,1} = \frac{w_{21}}{1 - w_{12}w_{21}}\beta_1$, $\pi_{1,2} = \frac{w_{12}}{1 - w_{12}w_{21}}\beta_2$, $\pi_{1,1} = \frac{1}{1 - w_{12}w_{21}}\beta_1$ and $\pi_{2,2} = \frac{1}{1 - w_{12}w_{21}}\beta_2$, where we assume $|w_{12}| < 1$ and $|w_{21}| < 1$. It becomes evident that, if $\pi_{1,2} = 0$, then $w_{12} = 0$ must hold, given that $\|\beta_2\|_0 = K$. That is, sparsity of the $\pi_{i,j}$ parameter vectors as specified in Assumption 3.2 (b) implies sparsity of the $W_n$ matrix in Assumption 3.2 (a) if $n = 2$.
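As a consistency check on these expressions, the reduced-form coefficients can be reproduced by inverting $I_2 - W$ symbolically; a minimal sketch using sympy (purely illustrative):

```python
# Sketch: verifying the n = 2 reduced-form coefficients by inverting
# (I - W) symbolically, given y = W y + B x + u with B = diag(beta1, beta2).
import sympy as sp

w12, w21, b1, b2 = sp.symbols('w12 w21 beta1 beta2')
W = sp.Matrix([[0, w12], [w21, 0]])
B = sp.diag(b1, b2)                      # y_i depends on its own x_i only
Pi = sp.simplify((sp.eye(2) - W).inv() * B)
print(Pi)
# [[beta1/(1 - w12*w21), beta2*w12/(1 - w12*w21)],
#  [beta1*w21/(1 - w12*w21), beta2/(1 - w12*w21)]]
```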
We maintain the following basic assumptions regarding the spatial weights matrix.
Assumption 3.3. 
(a) The spatial weights matrix, $W_n^0 = ((w_{ij}^0;\ i,j = 1,\dots,n))$, is $n \times n$ with zeros on the diagonal, $w_{ii}^0 = 0$. (b) The spatial weights matrix is time-invariant. (c) The row sums are bounded in absolute value, i.e., $\max_i \sum_j |w_{ij}^0| < 1$.
Assumption 3.3 (a) is standard. Assumption 3.3 (b) is required as the identification strategy exploits variation over time to identify the weights matrix; it is standard in the spatial panel econometrics literature (see, e.g., [20]) and corresponds to parameter stability over time in time series.6 Assumption 3.3 (c) can be interpreted similarly to the stationarity condition in time-series econometrics. Assumptions (a) and (c) ensure that $I_n - W_n^0$ is invertible, where $I_n$ is the identity matrix of dimension $n$; invertibility of $I_n - W_n^0$ is required to derive the reduced form equations in (3.2). The assumptions differ from standard assumptions in the spatial econometrics literature in two respects; cf. [48,49]. First, we do not make use of a spatial autoregressive coefficient, as the spatial autoregressive coefficient and the spatial weights are not separately identified. Second, we do not apply row-standardization as commonly employed. This is because some of the spatial weights can be negative, a possibility ruled out in a large part of the literature that measures spatial weights by distances or contiguity. Applications that allow for an estimated spatial weights matrix show that negative spatial weights are common in practice; see, for example, [7,8,9]. Furthermore, we stress that Assumption 3.3 does not impose any structure on the spatial weights matrix, such as symmetry, and we allow the interaction effects to be positive or negative.
In order to write the first and second-step Lasso estimators compactly, we introduce some notation. Let $\bar X = (X_1,\dots,X_n)$ and let $\pi_i = (\pi_{i,1}',\dots,\pi_{i,n}')'$ denote the corresponding parameter vector. The first-step Lasso estimator solves
$$\min_{\pi_i} \|y_i - \bar X\pi_i\|_2^2 + \lambda_1\|\hat\Upsilon_{1,i}\pi_i\|_1$$
Let $\hat y_i$ denote the first-step predictions from Lasso estimation and let $\hat Y = (\hat y_1,\dots,\hat y_n)$. Furthermore, define $w_i = (w_{i1},\dots,w_{in})'$ with $w_{ii} = 0$, which is the $i$th row of the spatial weights matrix. This allows us to write the second-step matrix of regressors as $\hat G_i = (\hat Y, X_i)$ and define the corresponding parameter vector $\theta_i = (w_i', \beta_i')'$. The second-step Lasso solves
$$\min_{\theta_i} \|y_i - \hat G_i\theta_i\|_2^2 + \lambda_2\|\hat\Upsilon_{2,i}\theta_i\|_1$$
We require both $\bar X$ and $\hat G_i$ to be well-behaved as stated in Assumption 3.4.7
Assumption 3.4. 
The Restricted Eigenvalue Condition holds for $\bar X$ and $\hat G_i$ for all $i = 1,\dots,n$.
The penalty levels are set to
$$\lambda_1 = 2c\sqrt{T}\,\Phi^{-1}\!\left(1 - \alpha/(2n^2K)\right) \qquad (3.3)$$
$$\lambda_2 = 2c\sqrt{T}\,\Phi^{-1}\!\left(1 - \alpha/(2n(n-1+K))\right) \quad \text{with } \log(1/\alpha) \lesssim \log(\max(n^2K, T)) \qquad (3.4)$$
Note that there are $nK$ and $n - 1 + K$ penalized regressors in the first and second step, respectively, and $n$ Lasso regressions in each step. The penalty loadings are again estimated using the algorithm in Appendix A.2.
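Putting the pieces together, a compact sketch of the spatial two-step Lasso, assuming scikit-learn, identity penalty loadings (the homoskedastic simplification) and the `alpha = lam/(2*T)` mapping used earlier; `spatial_two_step` is an illustrative name, and dropping column $i$ of $\hat Y$ imposes $w_{ii} = 0$:

```python
# Sketch of the spatial two-step Lasso in Section 3.1. Y is T x n;
# X_list[i] is the T x K matrix of unit i's exogenous regressors;
# X_bar stacks all of them (T x nK).
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import Lasso

def spatial_two_step(Y, X_list, c=1.1):
    T, n = Y.shape
    K = X_list[0].shape[1]
    X_bar = np.hstack(X_list)
    a = min(1.0 / T, 0.05)
    lam1 = 2 * c * np.sqrt(T) * norm.ppf(1 - a / (2 * n**2 * K))          # (3.3)
    lam2 = 2 * c * np.sqrt(T) * norm.ppf(1 - a / (2 * n * (n - 1 + K)))   # (3.4)

    # First step: predict each y_i from all exogenous regressors.
    m1 = Lasso(alpha=lam1 / (2 * T), fit_intercept=False)
    Y_hat = np.column_stack([m1.fit(X_bar, Y[:, i]).predict(X_bar)
                             for i in range(n)])

    # Second step: regress y_i on (Y_hat_{-i}, X_i); the coefficients on
    # Y_hat_{-i} fill row i of the estimated weights matrix.
    W_hat = np.zeros((n, n))
    for i in range(n):
        others = [j for j in range(n) if j != i]
        G = np.hstack([Y_hat[:, others], X_list[i]])
        m2 = Lasso(alpha=lam2 / (2 * T), fit_intercept=False)
        m2.fit(G, Y[:, i])
        W_hat[i, others] = m2.coef_[:n - 1]
    return W_hat
```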
The convergence rates of the two-step Lasso estimator follow from Theorem 1. However, while the general setting in Section 2 allows s 1 , s 2 and the number of first and second-step variables to depend on T, we can assume in the spatial panel setting that s 1 , s 2 and n are independent of T. Therefore, we obtain the following convergence rates.
Corollary 1. 
Consider the model in (3.1)–(3.2). Suppose Assumptions 3.1–3.4 hold. Suppose asymptotically valid penalty loadings are used and the penalty levels are set as in (3.3) and (3.4). Then,
$$\max_i \frac{1}{\sqrt T}\|\hat G_i\hat\theta_i - G_i^\ast\theta_i^\ast\|_2 \lesssim_P \sqrt{\frac{\log(\max(n^2K, T))}{T}}, \qquad \max_i \|\hat\theta_i - \theta_i^0\|_1 \lesssim_P \sqrt{\frac{\log(\max(n^2K, T))}{T}}$$

3.2. Post-Lasso and Thresholded Post-Lasso

The shrinkage of the Lasso estimator induces a downward bias which can be addressed by the post-Lasso estimator. The post-Lasso estimator treats the Lasso as a genuine model selector and applies OLS to the set of regressors for which the Lasso coefficient estimate is non-zero. In other words, post-Lasso is OLS applied to the model selected by the Lasso. Formally, the first and second-step post-Lasso estimator of the spatial autoregressive model are defined as
$$\tilde\pi_i = \arg\min_{\pi_i} \|y_i - \bar X\pi_i\|_2^2 \;\; \text{s.t.} \;\; \mathrm{supp}(\pi_i) \subseteq \mathrm{supp}(\hat\pi_i), \qquad \tilde\theta_i = \arg\min_{\theta_i} \|y_i - \hat G_i\theta_i\|_2^2 \;\; \text{s.t.} \;\; \mathrm{supp}(\theta_i) \subseteq \mathrm{supp}(\hat\theta_i)$$
The thresholded post-Lasso addresses the issue that the Lasso estimator often selects too many variables and that, despite the $\ell_1$-penalization, many coefficient estimates are very small, but not exactly zero. The thresholded post-Lasso applies OLS to all spatial lags for which the post-Lasso estimate is larger than a pre-defined threshold τ.8 While it is in general difficult to select and justify a specific threshold, in the spatial autoregressive model we can use the knowledge that $-1 < w_{ij} < 1$ and assume that interaction effects smaller than, for example, 0.05 in absolute value are negligible. For formal results on the post-Lasso and thresholded Lasso, see Belloni and Chernozhukov [31].
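A minimal sketch of both refits for a single second-step equation, assuming numpy; the function name and interface are our illustration of the procedure described above:

```python
# Sketch: post-Lasso OLS and thresholded post-Lasso for one equation.
# `G` is the T x (n-1+K) regressor matrix for unit i and `coef` the
# second-step Lasso estimate; tau is the illustrative 0.05 threshold.
import numpy as np

def thresholded_post_lasso(y, G, coef, tau=0.05):
    # Post-Lasso: OLS on the support selected by the Lasso.
    active = np.flatnonzero(coef)
    post = np.zeros_like(coef)
    if active.size:
        post[active] = np.linalg.lstsq(G[:, active], y, rcond=None)[0]
    # Thresholding: refit OLS on coefficients with |post| >= tau.
    keep = np.flatnonzero(np.abs(post) >= tau)
    out = np.zeros_like(coef)
    if keep.size:
        out[keep] = np.linalg.lstsq(G[:, keep], y, rcond=None)[0]
    return post, out
```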

4. Monte Carlo Simulation

This Monte Carlo study9 explores the finite sample performance of the proposed two-step Lasso estimator for estimating the spatial autoregressive model
$$y_{it} = \sum_{j=1,\,j\neq i}^{n} w_{ij} y_{jt} + \eta_i + x_{it}'\beta + \varepsilon_{it}, \quad t = 1,\dots,T;\; i = 1,\dots,n \qquad (4.1)$$
We consider two different spatial weights matrices. Specification 1 is given by
$$w_{ij} = \begin{cases} 1 & \text{if } |j - i| = 1 \\ 0 & \text{otherwise} \end{cases} \quad \text{for } i,j = 1,\dots,n \qquad (4.2)$$
and specification 2 is given by
$$w_{ij} = \begin{cases} 1 & \text{if } j - i = 1 \\ 0 & \text{otherwise} \end{cases} \quad \text{for } i,j = 1,\dots,n \qquad (4.3)$$
Subsequently, a row-standardization is applied such that each row sum equals $\bar w$. The row-standardization ensures that the strength of spill-over effects is constant across $i$. The strength of spatial interactions is determined by $\bar w$, which corresponds to the spatial autoregressive coefficient.
The spatial weights matrix in specification 1 has non-zeros on the sub-diagonal and super-diagonal. Thus, the weights matrix is symmetric and the number of non-zero elements is $2(n-1)$. In specification 2, only the super-diagonal elements are non-zero, implying $n - 1$ non-zero elements. The structure in (4.3) corresponds to the extreme case where there are only one-way spatial effects. Specification 2 is, in our view, more challenging than specification 1, as the triangular structure makes it difficult to identify the direction of causal effects. Note that the spatial weights matrix is in principle identified if it is known to be triangular or symmetric; the challenge here, however, is to estimate the spatial weights matrix without any such prior knowledge. We stress that the estimation strategy does not depend on any particular structure of the spatial weights matrix, but only requires sparsity.
The parameter vector $\beta$ is a $K$-dimensional vector of ones. Hence, $\beta$ is constant across $i$, although the estimation method allows for spatial heterogeneity in $\beta$. The exogenous regressors and the spatial fixed effect $\eta_i$ are drawn from the standard normal distribution, i.e., $x_{k,it} \sim N(0,1)$ for $k = 1,\dots,K$. The idiosyncratic error is drawn as $\varepsilon_{it} \sim N(0, \sigma_{it}^2)$ where
$$\sigma_{it}^2 = \frac{(1 + x_{it}'\beta)^2}{\frac{1}{nT}\sum_{i,t}(1 + x_{it}'\beta)^2}$$
which induces conditional heteroskedasticity.
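For concreteness, a sketch of this data-generating process, assuming numpy; the function and its interface are our illustration of the design described above:

```python
# Sketch of the Monte Carlo DGP (4.1)-(4.3): banded weights, rows scaled to
# sum to w_bar, and conditionally heteroskedastic errors as in the text.
import numpy as np

def simulate(n=30, T=50, K=1, w_bar=0.7, spec=1, seed=0):
    rng = np.random.default_rng(seed)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            nbr = abs(j - i) == 1 if spec == 1 else j - i == 1
            W[i, j] = 1.0 if nbr else 0.0
    # Row-standardize so each (non-empty) row sums to w_bar.
    W *= w_bar / np.maximum(W.sum(axis=1, keepdims=True), 1)

    beta = np.ones(K)
    eta = rng.normal(size=n)                       # spatial fixed effects
    X = rng.normal(size=(T, n, K))                 # x_{k,it} ~ N(0,1)
    xb = X @ beta                                  # T x n
    sig2 = (1 + xb)**2 / np.mean((1 + xb)**2)      # heteroskedastic variance
    eps = rng.normal(size=(T, n)) * np.sqrt(sig2)
    # Reduced form: y_t = (I - W)^{-1} (eta + x_t'beta + eps_t)
    A = np.linalg.inv(np.eye(n) - W)
    Y = (eta + xb + eps) @ A.T
    return Y, X, W
```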
We consider four estimators:10 (a) the two-step Lasso introduced in Section 3.1; (b) the two-step post-Lasso from Section 3.2; (c) the two-step thresholded post-Lasso with a threshold of $\tau = 0.05$; and (d) the oracle estimator. The oracle estimator has full knowledge of which weights are non-zero and applies 2SLS to the true model; it is infeasible, as the true model is in general unknown, and only serves as a benchmark. The penalty levels are defined as in (3.3)–(3.4) with $c = 1.1$ and $\alpha = \min(1/T, 0.05)$.11 The penalty loadings are estimated by the algorithm in Appendix A.2.
We consider a range of different settings. Specifically, $n \in \{30, 50, 70\}$, $T \in \{50, 100, 500\}$, $K = 1$ and $\bar w \in \{0.5, 0.7, 0.9\}$. We have also considered $K = 2$, which results in noticeable performance improvements; however, we do not report those results and focus on $K = 1$, the minimum requirement for identification. The number of Monte Carlo replications is 1000.
Table 1. Monte Carlo results: Specification 1. (a) Two-step Lasso; (b) two-step post-Lasso; (c) thresholded post-Lasso with $\tau = 0.05$; (d) oracle estimator.
(a) Two-step Lasso
w̄   | N  | T   | False neg. | False pos. | Bias mean | Bias median | RMSE
0.70 | 30 | 50  | 19.23 | 19.01 | 0.04335 | 0.04266 | 0.04335
0.70 | 30 | 100 | 14.62 | 21.22 | 0.03812 | 0.03747 | 0.03812
0.70 | 30 | 500 | 5.30  | 24.82 | 0.02310 | 0.02305 | 0.02310
0.70 | 50 | 50  | 13.96 | 15.13 | 0.02855 | 0.02826 | 0.02855
0.70 | 50 | 100 | 8.86  | 17.58 | 0.02653 | 0.02626 | 0.02653
0.70 | 50 | 500 | 3.14  | 22.13 | 0.01725 | 0.01718 | 0.01725
0.70 | 70 | 50  | 11.93 | 12.35 | 0.02132 | 0.02130 | 0.02132
0.70 | 70 | 100 | 6.09  | 15.36 | 0.02071 | 0.02055 | 0.02071
0.70 | 70 | 500 | 1.89  | 20.39 | 0.01454 | 0.01446 | 0.01454
0.90 | 30 | 50  | 2.07  | 14.99 | 0.02360 | 0.02346 | 0.02360
0.90 | 30 | 100 | 0.97  | 14.92 | 0.02002 | 0.01992 | 0.02002
0.90 | 30 | 500 | 0.18  | 18.35 | 0.01372 | 0.01365 | 0.01372
0.90 | 50 | 50  | 1.03  | 12.69 | 0.01495 | 0.01489 | 0.01495
0.90 | 50 | 100 | 0.29  | 13.11 | 0.01354 | 0.01354 | 0.01354
0.90 | 50 | 500 | 0.03  | 15.24 | 0.00935 | 0.00931 | 0.00935
0.90 | 70 | 50  | 0.52  | 10.37 | 0.01048 | 0.01044 | 0.01048
0.90 | 70 | 100 | 0.15  | 12.17 | 0.01058 | 0.01054 | 0.01058
0.90 | 70 | 500 | 0.01  | 13.60 | 0.00759 | 0.00758 | 0.00759

(b) Two-step post-Lasso
w̄   | N  | T   | False neg. | False pos. | Bias mean | Bias median | RMSE
0.70 | 30 | 50  | 8.83 | 11.55 | 0.03649 | 0.03606 | 0.03649
0.70 | 30 | 100 | 5.64 | 11.89 | 0.03090 | 0.03083 | 0.03090
0.70 | 30 | 500 | 2.18 | 12.69 | 0.02029 | 0.02022 | 0.02029
0.70 | 50 | 50  | 4.81 | 9.38  | 0.02453 | 0.02432 | 0.02453
0.70 | 50 | 100 | 1.90 | 10.12 | 0.02187 | 0.02185 | 0.02187
0.70 | 50 | 500 | 0.58 | 10.80 | 0.01519 | 0.01519 | 0.01519
0.70 | 70 | 50  | 3.84 | 7.65  | 0.01829 | 0.01828 | 0.01829
0.70 | 70 | 100 | 0.87 | 8.96  | 0.01746 | 0.01741 | 0.01746
0.70 | 70 | 500 | 0.17 | 9.78  | 0.01277 | 0.01273 | 0.01277
0.90 | 30 | 50  | 0.76 | 9.83  | 0.01823 | 0.01806 | 0.01823
0.90 | 30 | 100 | 0.25 | 7.93  | 0.01267 | 0.01254 | 0.01267
0.90 | 30 | 500 | 0.12 | 7.62  | 0.00794 | 0.00787 | 0.00794
0.90 | 50 | 50  | 0.29 | 9.27  | 0.01364 | 0.01349 | 0.01364
0.90 | 50 | 100 | 0.04 | 7.94  | 0.00947 | 0.00944 | 0.00947
0.90 | 50 | 500 | 0.01 | 6.07  | 0.00513 | 0.00512 | 0.00513
0.90 | 70 | 50  | 0.17 | 7.26  | 0.00931 | 0.00928 | 0.00931
0.90 | 70 | 100 | 0.02 | 8.26  | 0.00854 | 0.00852 | 0.00854
0.90 | 70 | 500 | 0.00 | 5.50  | 0.00409 | 0.00408 | 0.00409

(c) Thresholded post-Lasso (τ = 0.05)
w̄   | N  | T   | False neg. | False pos. | Bias mean | Bias median | RMSE
0.70 | 30 | 50  | 10.04 | 6.36 | 0.02561 | 0.02523 | 0.02561
0.70 | 30 | 100 | 6.40  | 6.44 | 0.02165 | 0.02145 | 0.02165
0.70 | 30 | 500 | 2.36  | 6.60 | 0.01368 | 0.01360 | 0.01368
0.70 | 50 | 50  | 5.71  | 4.85 | 0.01621 | 0.01609 | 0.01621
0.70 | 50 | 100 | 2.28  | 5.21 | 0.01434 | 0.01430 | 0.01434
0.70 | 50 | 500 | 0.65  | 5.36 | 0.00968 | 0.00969 | 0.00968
0.70 | 70 | 50  | 4.59  | 3.84 | 0.01200 | 0.01197 | 0.01200
0.70 | 70 | 100 | 1.13  | 4.48 | 0.01114 | 0.01110 | 0.01114
0.70 | 70 | 500 | 0.20  | 4.69 | 0.00791 | 0.00790 | 0.00791
0.90 | 30 | 50  | 1.57  | 5.00 | 0.01316 | 0.01298 | 0.01316
0.90 | 30 | 100 | 0.42  | 3.54 | 0.00923 | 0.00915 | 0.00923
0.90 | 30 | 500 | 0.13  | 2.35 | 0.00528 | 0.00525 | 0.00528
0.90 | 50 | 50  | 1.06  | 4.30 | 0.00877 | 0.00871 | 0.00877
0.90 | 50 | 100 | 0.12  | 3.25 | 0.00620 | 0.00615 | 0.00620
0.90 | 50 | 500 | 0.01  | 1.54 | 0.00310 | 0.00308 | 0.00310
0.90 | 70 | 50  | 0.56  | 3.14 | 0.00589 | 0.00584 | 0.00589
0.90 | 70 | 100 | 0.09  | 3.24 | 0.00513 | 0.00510 | 0.00513
0.90 | 70 | 500 | 0.00  | 1.21 | 0.00230 | 0.00229 | 0.00230

(d) Oracle estimator
w̄   | N  | T   | Bias mean | Bias median | RMSE
0.70 | 30 | 50  | 0.02742 | 0.02444 | 0.02742
0.70 | 30 | 100 | 0.01915 | 0.01779 | 0.01915
0.70 | 30 | 500 | 0.00762 | 0.00753 | 0.00762
0.70 | 50 | 50  | 0.01533 | 0.01462 | 0.01533
0.70 | 50 | 100 | 0.01135 | 0.01057 | 0.01135
0.70 | 50 | 500 | 0.00455 | 0.00453 | 0.00455
0.70 | 70 | 50  | 0.01114 | 0.01055 | 0.01114
0.70 | 70 | 100 | 0.00816 | 0.00771 | 0.00816
0.70 | 70 | 500 | 0.00325 | 0.00323 | 0.00325
0.90 | 30 | 50  | 0.02576 | 0.02358 | 0.02576
0.90 | 30 | 100 | 0.02039 | 0.01887 | 0.02039
0.90 | 30 | 500 | 0.01250 | 0.01237 | 0.01250
0.90 | 50 | 50  | 0.01562 | 0.01418 | 0.01562
0.90 | 50 | 100 | 0.01216 | 0.01132 | 0.01216
0.90 | 50 | 500 | 0.00751 | 0.00744 | 0.00751
0.90 | 70 | 50  | 0.01087 | 0.01020 | 0.01087
0.90 | 70 | 100 | 0.00860 | 0.00816 | 0.00860
0.90 | 70 | 500 | 0.00533 | 0.00532 | 0.00533
“False neg.” denotes the false negative rate in %. “False pos.” denotes the false positive rate in %. RMSE denotes the root-mean-square error. The bias is defined in (4.4). The false negative and false positive rates are 0% for the oracle estimator by construction. The oracle estimator is infeasible in practice and serves only as a reference point. The number of replications is 1000. The number of exogenous regressors is $K = 1$. See the description in the main text.
Table 1 and Table 2 report the following statistics to assess the performance of the estimators. “False negative” is the average percentage of non-zero elements falsely identified as zero. “False positive” is the average percentage of zero elements falsely identified as non-zero. Furthermore, let $\hat W^{(i)}$ be the estimate of the spatial weights matrix from the $i$th Monte Carlo iteration. The bias is defined as
$$\widehat{\mathrm{bias}}^{(i)} = \frac{1}{n(n-1)}\|\hat W^{(i)} - W_n^\ast\|_1 \qquad (4.4)$$
where $\|\cdot\|_1$ denotes the entry-wise $\ell_1$-norm. The average and median bias across iterations are reported, as well as the root-mean-square error (RMSE). Note that the false negative and false positive rates are 0% for the oracle estimator by construction. We do not report the bias for the estimation of $\beta$, since estimation of $\beta$ is a standard problem.
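The reported statistics are simple functions of $\hat W$ and $W_n$; a sketch for a single replication, assuming numpy:

```python
# Sketch: the evaluation metrics reported in Tables 1 and 2 for one
# replication -- false negative/positive rates (in %) and the bias (4.4).
import numpy as np

def metrics(W_hat, W_true):
    n = W_true.shape[0]
    off = ~np.eye(n, dtype=bool)                  # off-diagonal mask
    true_nz = (W_true != 0) & off
    est_nz = (W_hat != 0) & off
    false_neg = 100 * np.mean(~est_nz[true_nz])   # missed non-zeros
    false_pos = 100 * np.mean(est_nz[~true_nz & off])  # spurious selections
    bias = np.abs(W_hat - W_true)[off].sum() / (n * (n - 1))
    return false_neg, false_pos, bias
```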
Table 2. Monte Carlo results: Specification 2. (a) Two-step Lasso; (b) two-step post-Lasso; (c) thresholded post-Lasso with $\tau = 0.05$; (d) oracle estimator.
(a) Two-step Lasso
w̄   | N  | T   | False neg. | False pos. | Bias mean | Bias median | RMSE
0.50 | 30 | 50  | 41.60 | 19.36 | 0.05258 | 0.04915 | 0.05258
0.50 | 30 | 100 | 32.82 | 20.11 | 0.04119 | 0.03952 | 0.04119
0.50 | 30 | 500 | 7.80  | 18.61 | 0.01904 | 0.01877 | 0.01904
0.50 | 50 | 50  | 39.98 | 15.89 | 0.03820 | 0.03512 | 0.03820
0.50 | 50 | 100 | 30.54 | 16.98 | 0.03055 | 0.02976 | 0.03055
0.50 | 50 | 500 | 7.26  | 17.07 | 0.01497 | 0.01483 | 0.01497
0.50 | 70 | 50  | 40.38 | 12.76 | 0.02885 | 0.02730 | 0.02885
0.50 | 70 | 100 | 28.60 | 14.98 | 0.02477 | 0.02447 | 0.02477
0.50 | 70 | 500 | 6.90  | 15.95 | 0.01290 | 0.01284 | 0.01290

(b) Two-step post-Lasso
w̄   | N  | T   | False neg. | False pos. | Bias mean | Bias median | RMSE
0.50 | 30 | 50  | 36.30 | 13.00 | 0.05679 | 0.05388 | 0.05679
0.50 | 30 | 100 | 25.70 | 13.08 | 0.04681 | 0.04572 | 0.04681
0.50 | 30 | 500 | 4.46  | 13.10 | 0.02784 | 0.02779 | 0.02784
0.50 | 50 | 50  | 32.20 | 10.46 | 0.03943 | 0.03689 | 0.03943
0.50 | 50 | 100 | 20.85 | 10.80 | 0.03353 | 0.03326 | 0.03353
0.50 | 50 | 500 | 3.22  | 11.66 | 0.02276 | 0.02274 | 0.02276
0.50 | 70 | 50  | 32.13 | 8.41  | 0.02944 | 0.02814 | 0.02944
0.50 | 70 | 100 | 17.51 | 9.37  | 0.02653 | 0.02642 | 0.02653
0.50 | 70 | 500 | 2.32  | 10.59 | 0.01958 | 0.01956 | 0.01958

(c) Thresholded post-Lasso (τ = 0.05)
w̄   | N  | T   | False neg. | False pos. | Bias mean | Bias median | RMSE
0.50 | 30 | 50  | 37.53 | 7.81 | 0.03843 | 0.03722 | 0.03843
0.50 | 30 | 100 | 26.45 | 8.03 | 0.03247 | 0.03218 | 0.03247
0.50 | 30 | 500 | 4.54  | 7.72 | 0.01811 | 0.01806 | 0.01811
0.50 | 50 | 50  | 33.53 | 6.00 | 0.02573 | 0.02498 | 0.02573
0.50 | 50 | 100 | 21.68 | 6.36 | 0.02257 | 0.02252 | 0.02257
0.50 | 50 | 500 | 3.31  | 6.72 | 0.01441 | 0.01438 | 0.01441
0.50 | 70 | 50  | 33.43 | 4.72 | 0.01946 | 0.01899 | 0.01946
0.50 | 70 | 100 | 18.29 | 5.40 | 0.01762 | 0.01758 | 0.01762
0.50 | 70 | 500 | 2.40  | 5.99 | 0.01224 | 0.01223 | 0.01224

(d) Oracle estimator
w̄   | N  | T   | Bias mean | Bias median | RMSE
0.50 | 30 | 50  | 0.01335 | 0.01186 | 0.01335
0.50 | 30 | 100 | 0.00934 | 0.00861 | 0.00934
0.50 | 30 | 500 | 0.00351 | 0.00348 | 0.00351
0.50 | 50 | 50  | 0.00794 | 0.00724 | 0.00794
0.50 | 50 | 100 | 0.00559 | 0.00524 | 0.00559
0.50 | 50 | 500 | 0.00210 | 0.00208 | 0.00210
0.50 | 70 | 50  | 0.00561 | 0.00512 | 0.00561
0.50 | 70 | 100 | 0.00388 | 0.00371 | 0.00388
0.50 | 70 | 500 | 0.00149 | 0.00149 | 0.00149
See notes in Table 1.

4.1. Specification 1

The first specification in (4.2) defines a sparse, symmetric matrix. Across all $n$ and $\bar w$, the performance of the two-step Lasso improves in terms of false negative rate and bias as $T$ increases. For example, if $n = 70$, $T = 50$ and $\bar w = 0.7$, in which case the model cannot be estimated by 2SLS, on average more than 88.0% of the non-zero spatial weights are identified by the Lasso. When $\bar w = 0.9$, this rate increases to 99.4%. However, the false positive rate of the Lasso estimator is high at approximately 10%–25% and remains high as $T$ increases. This is in line with the known phenomenon that the Lasso estimator often selects too many variables; see, e.g., [28].
The two-step post-Lasso estimator shows substantial performance improvement over the two-step Lasso. The bias is smaller across all $T$ and $n$, suggesting that post-Lasso OLS estimation successfully addresses the shrinkage bias arising from $\ell_1$-penalization. Moreover, the two-step post-Lasso also dominates the two-step Lasso in terms of false negative and false positive rates. This is consistent with [30,31], who show that the post-Lasso often performs at least as well as the Lasso. However, the false positive rate is still relatively high at 5%–13% and does not seem to decrease with $T$. The thresholded post-Lasso, which sets post-Lasso estimates below 0.05 in absolute value equal to zero, improves upon the post-Lasso in that it has a lower false positive rate. While we do not recommend $\tau = 0.05$ as a general threshold, the thresholded post-Lasso reveals that many ‘falsely positive’ post-Lasso estimates are close to, but not exactly, zero, which explains the high false positive rate. As expected, the oracle estimator, which knows the true model, exhibits the lowest bias across all $n$ and $T$.
Notice that both the false negative and the false positive rates decrease with $n$. The decrease in the false positive rate arises because the number of zero weights increases with $n$ as a proportion of the total number of off-diagonal elements in $W_n$. The same situation holds in many real spatial applications, where the number of neighbours of a region is bounded. In turn, such boundedness is necessary for spatial stationarity; see Assumption 3.3 and the spatial granularity condition in [37]. Note that, with large $n$, standard least squares methods would not work because of high-dimensionality, which underlines an important advantage of the Lasso-based methods proposed in this article.
Figure 2 shows how often each $w_{ij}$ is identified as non-zero by the estimators for $n = T = 50$. It can be seen that the two-step procedures successfully recover the spatial structure in (4.2). Note that weights to the left of the sub-diagonal and to the right of the super-diagonal (i.e., $w_{13}$, $w_{24}$, $w_{31}$, etc.) are falsely selected slightly more often than other weights. This is likely due to indirect effects, resulting in spatial spill-over. For example, $w_{13}$ is selected slightly more often than other zero elements because $y_{3t}$ affects $y_{1t}$ through $y_{2t}$.
Figure 2. Recovery of Spatial Weights Matrix (N = 50, T = 50): Specification 1. (a) Two-step Lasso; (b) two-step post-Lasso; (c) two-step post-Lasso with $\tau = 0.05$.

4.2. Specification 2

As expected, the performance under specification 2 is not as satisfactory as under specification 1. Table 2 shows that the false negative rate and the bias decrease in $T$ for all three Lasso-based estimators. As in specification 1, the two-step post-Lasso outperforms the two-step Lasso in terms of the false negative rate. The thresholded post-Lasso mainly differs from the two-step post-Lasso in that its false positive rate is lower. Figure 3 and Figure 4 show the selection frequency for $n = T = 50$ and $n = 50$, $T = 500$. For $T = 50$, it can clearly be seen that the elements on the sub-diagonal are selected more often than other zero elements, stressing the difficulty of identifying the direction of the effects in small samples. This problem diminishes with $T$ and is negligible for $T = 500$; see Figure 4.
Figure 3. Recovery of Spatial Weights Matrix (N = 50, T = 50): Specification 2. (a) Two-step Lasso; (b) two-step post-Lasso; (c) two-step post-Lasso with $\tau = 0.05$.
Figure 4. Recovery of Spatial Weights Matrix (N = 50, T = 500): Specification 2. (a) Two-step Lasso; (b) two-step post-Lasso; (c) two-step post-Lasso with $\tau = 0.05$.
Overall, the two-step Lasso performs well in recovering the network structure, even for the more challenging specification 2. However, we observe that the two-step Lasso selects too many spatial lags in small samples, although the performance improves substantially with T in terms of bias and false negative rate. The two-step post-Lasso outperforms the two-step Lasso in terms of bias and selection performance.

5. Conclusions

The identification of interaction effects is crucial for understanding how individuals, firms and regions interact. However, to date there is still a lack of methods that allow the estimation of interaction effects, particularly when the spatial dimension is large. Thus, most applied spatial econometric research uses ad hoc specifications to incorporate interaction effects. The lack of estimation strategies may also explain why interaction effects in socio-economic processes are often ignored.
We propose a two-step procedure based on the Lasso estimator that accounts for reverse causality and allows estimating interaction effects between units in a spatial autoregressive panel model without requiring any prior knowledge about the network structure. The identifying assumption is sparsity. The two-step estimator can be implemented based on fast algorithms available for the Lasso estimator; e.g., [50]. The estimation methodology is attractive for applied research as the Lasso estimator also serves as a model selector and, hence, is relatively robust to misspecification.
We have derived convergence rates for a general two-step Lasso estimator which allows the number of endogenous regressors and the number of instruments to be larger than the sample size. We then applied the two-step estimator to the spatial autoregressive panel model. Monte Carlo results confirm that the estimation method recovers the structure of the spatial weights matrix even if $T$ is as small as 50–100. However, our Monte Carlo results show a tendency towards over-selection of spatial weights. The two-step post-Lasso estimator, which in each step applies OLS to the model selected by the Lasso, outperforms the two-step Lasso in terms of bias, false positive rate and false negative rate.
The use of the two-step Lasso raises several issues shared with other Lasso-type estimators. Controlling uncertainty and conducting inference with the Lasso is challenging and remains an area of ongoing research. Recent contributions include a Lasso significance test due to Lockhart et al. [51] and the sample splitting approaches proposed by Wasserman and Roeder [52] and Meinshausen et al. [53], which allow for controlling the false discovery rate. Earlier seminal work on the asymptotic distribution of shrinkage estimators includes Fan and Li [54], which introduces the SCAD penalty, and Knight and Fu [55]. In addition, the choice of an optimal penalty level is an important issue. Penalized estimators typically select the penalty level with a view to optimizing predictive performance, which may not be appropriate if the purpose is structure recovery. The penalty level used here is not based on cross-validation or other commonly employed model selection criteria and is therefore not directly subject to this criticism. Specifically, we follow Bickel et al. [27] and Belloni et al. [30] in choosing the smallest penalty level that dominates the noise of the problem. Our Monte Carlo results show that the proposed method works quite well in the structure discovery context.
This work suggests several lines of future research. First, given that the two-step post-Lasso outperforms the two-step Lasso, formal results for the two-step post-Lasso are required. Second, the methodology can be extended to the square-root Lasso and square-root post-Lasso. The main advantage of the square-root Lasso is that the optimal penalty level does not depend on the unknown error variance [56,57]. Hence, further performance improvements seem possible. Third, instead of relying on a two-step Lasso estimation method, an alternative estimation strategy may be based on the recent work by Fan and Liao [40] or Gautier and Tsybakov [38] who allow for endogeneity in high dimensions. These one-step procedures potentially facilitate accounting for uncertainty in model selection and estimation. These ideas are retained for future work.

Acknowledgments

We thank Tapabrata Maiti and Mark Schaffer for helpful suggestions. The comments and constructive criticism by two anonymous referees helped us revise and improve the paper substantially. Their contributions are gratefully acknowledged. Achim Ahrens gratefully acknowledges the support of an ESRC postgraduate research scholarship.

Author Contributions

Achim Ahrens is the main author of the manuscript. Arnab Bhattacharjee contributed to the formulation of the problem and to the writing and revision of the manuscript in an advisory capacity.

A. Appendix

A.1. Proof of Theorem 1

Setting. First, we summarize the setting and introduce some notation. We can write the model in (2.1)–(2.2) as
$$y = X\beta^\ast + e, \qquad X = Z\Pi^\ast + U$$
Thus, the reduced form equation for $y$ is given by
$$y = Z\Pi^\ast\beta^\ast + U\beta^\ast + e$$
In Assumption 2.3, we assume approximate sparsity:
$$y = X^\ast\beta^0 + X^\ast(\beta^\ast - \beta^0) + \varepsilon = X^\ast\beta^0 + r + \varepsilon$$
where $X^\ast = Z\Pi^\ast$, $\varepsilon = U\beta^\ast + e$, $r = X^\ast(\beta^\ast - \beta^0)$ with $\frac{1}{\sqrt T}\|r\|_2 = R_{s_2}$, and $\beta^0$ is the target parameter vector. As $X^\ast$ is unknown, we use $\hat X = X^\ast + \hat V$ in the second step:
$$y = (\hat X - \hat V)\beta^0 + r + \varepsilon = \hat X\beta^0 - \hat V\beta^0 + r + \varepsilon = \hat X\beta^0 + r + m + \varepsilon$$
where $m := -\hat V\beta^0$ is the vector of first-step prediction errors weighted by the target parameter vector. Recall that the second-step Lasso estimator solves
$$\min_\beta \frac{1}{T}\|y - \hat X\beta\|_2^2 + \frac{\lambda_2}{T}\|\hat\Upsilon_2\beta\|_1$$
Let $Q(\beta) = \frac{1}{T}\|y - \hat X\beta\|_2^2$ and $\hat\delta = \hat\beta - \beta^0$. Furthermore, define the active set $\Omega_2 = \mathrm{supp}(\beta^0)$ with $|\Omega_2| = s_2$.
The general approach in the following steps is based on Belloni et al. [30] and Bickel et al. [27], but accounts for the prediction error from the first step, $\frac{1}{\sqrt T}\|m\|_2$.
Non-asymptotic $\ell_2$-prediction norm bound. In this step, we bound $\frac{1}{\sqrt T}\|\hat X\hat\delta\|_2$ and treat $\frac{1}{\sqrt T}\|m\|_2$ as given. The convergence rate of $\frac{1}{\sqrt T}\|m\|_2$ is derived in the next step.
By optimality of the Lasso estimate $\hat\beta$,
$$Q(\hat\beta) + \frac{\lambda_2}{T}\|\hat\Upsilon_2\hat\beta\|_1 \leq Q(\beta^0) + \frac{\lambda_2}{T}\|\hat\Upsilon_2\beta^0\|_1 \;\Longrightarrow\; Q(\hat\beta) - Q(\beta^0) \leq \frac{\lambda_2}{T}\left(\|\hat\Upsilon_2\beta^0\|_1 - \|\hat\Upsilon_2\hat\beta\|_1\right) \qquad (A.1)$$
where
$$\|\hat\Upsilon_2\beta^0\|_1 - \|\hat\Upsilon_2\hat\beta\|_1 = \|\hat\Upsilon_2\beta^0_{\Omega_2}\|_1 - \|\hat\Upsilon_2\hat\beta_{\Omega_2}\|_1 - \|\hat\Upsilon_2\hat\beta_{\bar\Omega_2}\|_1 \leq \|\hat\Upsilon_2\hat\delta_{\Omega_2}\|_1 - \|\hat\Upsilon_2\hat\delta_{\bar\Omega_2}\|_1 \qquad (A.2)$$
using $\beta^0 = \beta^0_{\Omega_2}$, $\hat\beta_{\bar\Omega_2} = \hat\delta_{\bar\Omega_2}$ and $\|\hat\Upsilon_2\beta^0_{\Omega_2}\|_1 - \|\hat\Upsilon_2\hat\beta_{\Omega_2}\|_1 \leq \|\hat\Upsilon_2\hat\beta_{\Omega_2} - \hat\Upsilon_2\beta^0_{\Omega_2}\|_1$ by the reverse triangle inequality. Furthermore,
$$TQ(\hat\beta) - TQ(\beta^0) = \|y - \hat X\hat\beta\|_2^2 - \|y - \hat X\beta^0\|_2^2 = -2y'\hat X\hat\delta + \hat\beta'\hat X'\hat X\hat\beta - \beta^{0\prime}\hat X'\hat X\beta^0$$
Subtracting $\frac{1}{T}\|\hat X\hat\delta\|_2^2$ from both sides gives
$$Q(\hat\beta) - Q(\beta^0) - \frac{1}{T}\|\hat X\hat\delta\|_2^2 = -\frac{2}{T}\varepsilon'\hat X\hat\delta - \frac{2}{T}r'\hat X\hat\delta - \frac{2}{T}m'\hat X\hat\delta \qquad (A.3)$$
$$= -\frac{2}{T}\varepsilon'\hat X(\hat\Upsilon_2)^{-1}\hat\Upsilon_2\hat\delta - \frac{2}{T}r'\hat X\hat\delta - \frac{2}{T}m'\hat X\hat\delta$$
$$\overset{(i)}{\geq} -\frac{2}{T}\varepsilon'\hat X(\hat\Upsilon_2)^{-1}\hat\Upsilon_2\hat\delta - 2R_{s_2}\frac{1}{\sqrt T}\|\hat X\hat\delta\|_2 - \frac{2}{\sqrt T}\|m\|_2\,\frac{1}{\sqrt T}\|\hat X\hat\delta\|_2$$
$$\overset{(ii)}{\geq} -\|S_2\|_\infty\|\hat\Upsilon_2\hat\delta\|_1 - 2R_{s_2}\frac{1}{\sqrt T}\|\hat X\hat\delta\|_2 - \frac{2}{\sqrt T}\|m\|_2\,\frac{1}{\sqrt T}\|\hat X\hat\delta\|_2$$
$$\overset{(iii)}{\geq} -\frac{\lambda_2}{cT}\|\hat\Upsilon_2\hat\delta\|_1 - 2R_{s_2}\frac{1}{\sqrt T}\|\hat X\hat\delta\|_2 - \frac{2}{\sqrt T}\|m\|_2\,\frac{1}{\sqrt T}\|\hat X\hat\delta\|_2 \qquad (A.4)$$
where (i) uses the Cauchy-Schwarz inequality and the definitions $R_{s_2} = \frac{1}{\sqrt T}\|r\|_2$ and $S_2 := \frac{2}{T}(\hat\Upsilon_2)^{-1}\hat X'\varepsilon$, (ii) uses Hölder's inequality, and (iii) uses $\lambda_2 \geq cT\|S_2\|_\infty$, which holds as $T \to \infty$. Note that, by substituting for $S_2$, we have eliminated the random component. Combining (A.1), (A.2) and (A.4) yields
$$\frac{1}{T}\|\hat X\hat\delta\|_2^2 \leq 2R_{s_2}\frac{1}{\sqrt T}\|\hat X\hat\delta\|_2 + \frac{2}{\sqrt T}\|m\|_2\frac{1}{\sqrt T}\|\hat X\hat\delta\|_2 + \frac{\lambda_2}{cT}\left(\|\hat\Upsilon_2\hat\delta_{\Omega_2}\|_1 + \|\hat\Upsilon_2\hat\delta_{\bar\Omega_2}\|_1\right) + \frac{\lambda_2}{T}\left(\|\hat\Upsilon_2\hat\delta_{\Omega_2}\|_1 - \|\hat\Upsilon_2\hat\delta_{\bar\Omega_2}\|_1\right)$$
$$\leq 2R_{s_2}\frac{1}{\sqrt T}\|\hat X\hat\delta\|_2 + \frac{2}{\sqrt T}\|m\|_2\frac{1}{\sqrt T}\|\hat X\hat\delta\|_2 + \left(1 + \frac{1}{c}\right)\frac{\lambda_2}{T}\|\hat\Upsilon_2\hat\delta_{\Omega_2}\|_1 - \left(1 - \frac{1}{c}\right)\frac{\lambda_2}{T}\|\hat\Upsilon_2\hat\delta_{\bar\Omega_2}\|_1$$
$$\leq \left(2R_{s_2} + \frac{2}{\sqrt T}\|m\|_2\right)\frac{1}{\sqrt T}\|\hat X\hat\delta\|_2 + \left(1 + \frac{1}{c}\right)\frac{\lambda_2}{T}\,u\,\|\Upsilon_2^0\hat\delta_{\Omega_2}\|_1 - \left(1 - \frac{1}{c}\right)\frac{\lambda_2}{T}\,\ell\,\|\Upsilon_2^0\hat\delta_{\bar\Omega_2}\|_1 \qquad (A.5)$$
with $0 < \ell \leq 1 \leq u$. The last step assumes that $\hat\Upsilon_2$ is asymptotically valid; specifically, there exist constants $u$ and $\ell$ such that $\ell\,\Upsilon_2^0 \leq \hat\Upsilon_2 \leq u\,\Upsilon_2^0$, where $\ell \to_P 1$ and $u \to_P \bar u$ with $\bar u \geq 1$ [30].
We distinguish between two cases. Case A: if $\frac{1}{\sqrt{T}}\|\hat{X}\hat{\delta}\|_2 \leq 2R_{s_2} + \frac{2}{\sqrt{T}}\|m\|_2$, the bound below is established by assumption. Case B: if $\frac{1}{\sqrt{T}}\|\hat{X}\hat{\delta}\|_2 > 2R_{s_2} + \frac{2}{\sqrt{T}}\|m\|_2$, the above inequality yields
$$ \|\Upsilon^0_2\hat{\delta}_{\bar{\Omega}_2}\|_1 \leq c_0\,\|\Upsilon^0_2\hat{\delta}_{\Omega_2}\|_1 $$
where $c_0 = u(c+1)/(\ell(c-1))$, which allows us to invoke the weighted restricted eigenvalue condition:
$$ \frac{1}{\sqrt{T}}\|\hat{X}(\hat{\beta} - \beta^0)\|_2 \leq 2R_{s_2} + \frac{2}{\sqrt{T}}\|m\|_2 + \Big(1+\frac{1}{c}\Big)\frac{u\,\lambda_2\sqrt{s_2}}{T\,\kappa^{\omega}_{c_0}(\hat{X})} \qquad (A.6) $$
This establishes the non-asymptotic $\ell_2$-prediction norm bound, but takes the prediction error $\frac{1}{\sqrt{T}}\|m\|_2$ from the first step as given. Note that if $m = 0$, we arrive at the bound in Lemma 6 of Belloni et al. [30].
Convergence rate of $\frac{1}{\sqrt{T}}\|m\|_2$. In this step, we derive the convergence rate for $\frac{1}{\sqrt{T}}\|m\|_2$. We have
$$ \|m\|_2 = \|\hat{V}\beta^0\|_2 = \|\hat{V}_{\Omega_2}\beta^0_{\Omega_2}\|_2 \leq \|\hat{V}_{\Omega_2}\|_F\,\|\beta^0\|_2 = \|\beta^0\|_2\Big(\sum_{j\in\Omega_2}\sum_i \hat{v}_{ij}^2\Big)^{1/2} = \|\beta^0\|_2\Big(\sum_{j\in\Omega_2}\|\hat{V}_j\|_2^2\Big)^{1/2} \leq \|\beta^0\|_2\,\sqrt{s_2}\,\max_{j\in\Omega_2}\|\hat{V}_j\|_2 \qquad (A.7) $$
where $\hat{V}_j = Z\hat{\pi}_j - Z\pi_j$. By Theorem 1 in Belloni et al. [30],
$$ \max_j \frac{1}{\sqrt{T}}\|\hat{V}_j\|_2 \lesssim_P \sqrt{\frac{s_1\log(\max(L\bar{p}, T))}{T}} \qquad (A.8) $$
Substituting (A.8) into (A.7) and assuming $\|\beta^0\|_2 \lesssim \sqrt{s_2}$,
$$ \frac{1}{\sqrt{T}}\|m\|_2 = \frac{1}{\sqrt{T}}\|\hat{V}\beta^0\|_2 \lesssim_P \sqrt{\frac{s_2^2\,s_1\log(\max(L\bar{p}, T))}{T}} $$
Convergence rate of the $\ell_2$-prediction norm bound. The non-asymptotic $\ell_2$-prediction bound and the convergence rate for $\frac{1}{\sqrt{T}}\|m\|_2$ allow us to derive the $\ell_2$-prediction norm convergence rate. Note that $\lambda_2 \lesssim \sqrt{T\log(L\bar{p}/\alpha)}$ and $R_{s_2} \lesssim_P \sqrt{s_2/T}$ by assumption. By (A.6) and substituting the convergence rate of $\frac{1}{\sqrt{T}}\|\hat{V}\beta^0\|_2$,
$$ \frac{1}{\sqrt{T}}\|\hat{X}(\hat{\beta} - \beta^0)\|_2 \lesssim_P \sqrt{\frac{s_2}{T}} + \sqrt{\frac{s_2^2 s_1\log(\max(L\bar{p},T))}{T}} + \sqrt{\frac{s_2\log(\max(L\bar{p},T))}{T}} \lesssim_P \sqrt{\frac{s_2^2 s_1\log(\max(L\bar{p},T))}{T}} $$
However, we want to bound the deviation of $\hat{X}\hat{\beta}$ from $\bar{X}\beta$. Hence, we apply the triangle inequality:
$$ \frac{1}{\sqrt{T}}\|\hat{X}\hat{\beta} - \bar{X}\beta\|_2 = \frac{1}{\sqrt{T}}\big\|(\hat{X}\hat{\beta} - \hat{X}\beta^0) + (\hat{X}\beta^0 - \bar{X}\beta^0) + (\bar{X}\beta^0 - \bar{X}\beta)\big\|_2 \leq \frac{1}{\sqrt{T}}\|\hat{X}\hat{\beta} - \hat{X}\beta^0\|_2 + \frac{1}{\sqrt{T}}\|\hat{V}\beta^0\|_2 + R_{s_2} \lesssim_P \sqrt{\frac{s_2^2 s_1\log(\max(L\bar{p},T))}{T}} $$
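The decay of the prediction error can also be checked numerically. The following R sketch uses an illustrative, exactly sparse design (so that $r = 0$) and, for simplicity, replaces the plug-in penalty level with cross-validation via cv.glmnet; all design parameters below are hypothetical choices, not the Monte Carlo design used in the paper.

```r
## Rough numerical check of the l2-prediction norm convergence: the reported
## error (1/sqrt(T)) * || Xhat betahat - Xbar beta ||_2 should shrink with T.
library(glmnet)
set.seed(1)

pred_error <- function(T, p1 = 50, p2 = 50, s1 = 3, s2 = 3) {
  Z  <- matrix(rnorm(T * p1), T, p1)
  Pi <- matrix(0, p1, p2)
  for (j in seq_len(p2)) Pi[sample(p1, s1), j] <- 1   # s1 instruments per regressor
  beta <- c(rep(1, s2), rep(0, p2 - s2))              # s2 active regressors
  X <- Z %*% Pi + matrix(rnorm(T * p2), T, p2)        # X = Z Pi + U
  y <- X %*% beta + rnorm(T)                          # y = X beta + e
  Xhat <- sapply(seq_len(p2), function(j)             # first-step fitted values
    as.numeric(predict(cv.glmnet(Z, X[, j]), newx = Z, s = "lambda.min")))
  yhat <- as.numeric(predict(cv.glmnet(Xhat, y), newx = Xhat, s = "lambda.min"))
  sqrt(sum((yhat - Z %*% Pi %*% beta)^2) / T)         # error vs. infeasible target
}

sapply(c(200, 400, 800), pred_error)  # should fall roughly like 1/sqrt(T)
```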
Non-asymptotic $\ell_1$-parameter norm bound. Again, we distinguish between two cases. Case A: $\|\Upsilon^0_2\hat{\delta}_{\bar{\Omega}_2}\|_1 \leq 2c_0\|\Upsilon^0_2\hat{\delta}_{\Omega_2}\|_1$. Then, we can use the definition of the weighted restricted eigenvalue:
$$ \|\Upsilon^0_2\hat{\delta}\|_1 \leq (1+2c_0)\|\Upsilon^0_2\hat{\delta}_{\Omega_2}\|_1 \leq (1+2c_0)\frac{\sqrt{s_2}}{\kappa^{\omega}_{2c_0}(\hat{X})}\frac{1}{\sqrt{T}}\|\hat{X}\hat{\delta}\|_2 \qquad (A.9) $$
Case B: if $\|\Upsilon^0_2\hat{\delta}_{\bar{\Omega}_2}\|_1 > 2c_0\|\Upsilon^0_2\hat{\delta}_{\Omega_2}\|_1$, then, by (A.5), $2R_{s_2} + \frac{2}{\sqrt{T}}\|m\|_2 \geq \frac{1}{\sqrt{T}}\|\hat{X}\hat{\delta}\|_2$ must hold. Also, from (A.5),
$$
\begin{aligned}
\|\Upsilon^0_2\hat{\delta}_{\bar{\Omega}_2}\|_1
&\leq \Big(2R_{s_2} + \frac{2}{\sqrt{T}}\|m\|_2 - \frac{1}{\sqrt{T}}\|\hat{X}\hat{\delta}\|_2\Big)\frac{1}{\sqrt{T}}\|\hat{X}\hat{\delta}\|_2\,\frac{T}{\lambda_2}\,\frac{c}{\ell(c-1)} + c_0\|\Upsilon^0_2\hat{\delta}_{\Omega_2}\|_1\\
&\leq \Big(2R_{s_2} + \frac{2}{\sqrt{T}}\|m\|_2\Big)^2\frac{T}{4\lambda_2}\,\frac{c}{\ell(c-1)} + \frac{1}{2}\|\Upsilon^0_2\hat{\delta}_{\bar{\Omega}_2}\|_1\\
\Longrightarrow\;\; \|\Upsilon^0_2\hat{\delta}_{\bar{\Omega}_2}\|_1 &\leq \Big(2R_{s_2} + \frac{2}{\sqrt{T}}\|m\|_2\Big)^2\frac{T}{2\lambda_2}\,\frac{c}{\ell(c-1)}
\end{aligned}
$$
where the second step uses $\max_{x\geq 0}\, x(2a-x) \leq a^2$ (which holds since $x(2a-x) = a^2 - (a-x)^2$), together with the Case B assumption, which implies $c_0\|\Upsilon^0_2\hat{\delta}_{\Omega_2}\|_1 < \frac{1}{2}\|\Upsilon^0_2\hat{\delta}_{\bar{\Omega}_2}\|_1$. In addition, by the Case B assumption,
$$ \|\Upsilon^0_2\hat{\delta}_{\Omega_2}\|_1 < \frac{1}{2c_0}\|\Upsilon^0_2\hat{\delta}_{\bar{\Omega}_2}\|_1 \;\Longrightarrow\; \|\Upsilon^0_2\hat{\delta}\|_1 < \Big(1+\frac{1}{2c_0}\Big)\|\Upsilon^0_2\hat{\delta}_{\bar{\Omega}_2}\|_1 \;\Longrightarrow\; \|\Upsilon^0_2\hat{\delta}\|_1 < \Big(1+\frac{1}{2c_0}\Big)\Big(2R_{s_2}+\frac{2}{\sqrt{T}}\|m\|_2\Big)^2\frac{T}{2\lambda_2}\,\frac{c}{\ell(c-1)} \qquad (A.10) $$
Combining (A.9) and (A.10),
$$
\begin{aligned}
\|\Upsilon^0_2\hat{\delta}\|_1 &\leq (1+2c_0)\frac{\sqrt{s_2}}{\kappa^{\omega}_{2c_0}(\hat{X})}\frac{1}{\sqrt{T}}\|\hat{X}\hat{\delta}\|_2 + \Big(1+\frac{1}{2c_0}\Big)\Big(2R_{s_2}+\frac{2}{\sqrt{T}}\|m\|_2\Big)^2\frac{T}{2\lambda_2}\,\frac{c}{\ell(c-1)}\\
&\leq 3c_0\frac{\sqrt{s_2}}{\kappa^{\omega}_{2c_0}(\hat{X})}\frac{1}{\sqrt{T}}\|\hat{X}\hat{\delta}\|_2 + \frac{3c_0 T}{4\lambda_2}\Big(2R_{s_2}+\frac{2}{\sqrt{T}}\|m\|_2\Big)^2\\
&= 3c_0\frac{\sqrt{s_2}}{\kappa^{\omega}_{2c_0}(\hat{X})}\frac{1}{\sqrt{T}}\|\hat{X}\hat{\delta}\|_2 + \frac{3c_0 T}{4\lambda_2}\Big(4R_{s_2}^2 + \frac{4}{T}\|m\|_2^2 + 8R_{s_2}\frac{1}{\sqrt{T}}\|m\|_2\Big)
\end{aligned}
$$
where we use that $c/(\ell(c-1)) \leq c_0$, $1+2c_0 \leq 3c_0$ and $1 + 1/(2c_0) \leq 3/2$.
$\ell_1$-parameter norm convergence rate. In the last step, we derive the $\ell_1$-convergence rate. We assume, as stated in the Theorem, that $s_1$ and $s_2$ do not depend on $T$. This assumption may be strong in general, but is reasonable in the spatial autoregressive panel model, where $s_1$ and $s_2$ are determined by $n$ and not by $T$. Substituting the rates derived above,
$$ \|\Upsilon^0_2\hat{\delta}\|_1 \lesssim_P \sqrt{\frac{s_1 s_2^3\log(\max(L\bar{p},T))}{T}} + \frac{s_2}{\sqrt{T\log(\max(L\bar{p},T))}} + s_1 s_2^2\sqrt{\frac{\log(\max(L\bar{p},T))}{T}} + \sqrt{\frac{s_1 s_2^3}{T}} \;\Longrightarrow\; \|\Upsilon^0_2\hat{\delta}\|_1 \lesssim_P \sqrt{\frac{\log(\max(L\bar{p},T))}{T}} $$
Lastly,
$$ \|\hat{\beta} - \beta^0\|_1 = \|(\Upsilon^0_2)^{-1}\Upsilon^0_2\hat{\delta}\|_1 \leq \|(\Upsilon^0_2)^{-1}\|_\infty\,\|\Upsilon^0_2\hat{\delta}\|_1 \lesssim_P \sqrt{\frac{\log(\max(L\bar{p},T))}{T}} $$

A.2. Algorithm for Estimating Penalty Loadings

The algorithm is reproduced from Algorithm A.1 in Belloni et al. [30].
Algorithm 2. Consider the model $E[y_t|x_t] = x_t'\beta^0$ for $t = 1,\ldots,T$, where $x_t$ is a $p$-dimensional vector and $\beta^0$ is the target value. The initial and refined penalty loadings are given by
$$ \text{initial: } \hat{\gamma}_j = \sqrt{\frac{1}{T}\sum_{t=1}^T x_{tj}^2\,(y_t - \bar{y})^2}\,, \qquad \text{refined: } \hat{\gamma}_j = \sqrt{\frac{1}{T}\sum_{t=1}^T x_{tj}^2\,\hat{e}_t^2} $$
where $\bar{y} = T^{-1}\sum_{t=1}^T y_t$. Specify the number of iterations $K$ and proceed as follows: (1) Obtain the Lasso or post-Lasso estimate $\hat{\beta}$ using the initial penalty loadings and the optimal penalty level $\lambda$. (2) Obtain the Lasso or post-Lasso residuals $\hat{e}_t = y_t - x_t'\hat{\beta}$ and update the Lasso or post-Lasso estimate $\hat{\beta}$ using the refined penalty loadings. (3) Repeat the second step $K$ times.
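A minimal R sketch of the Lasso variant of Algorithm 2, again based on glmnet [50], is given below. The penalty level follows the Gaussian plug-in rule $\lambda = 2c\sqrt{T}\,\Phi^{-1}(1 - \alpha/(2p))$ of Belloni et al. [30], with illustrative values $c = 1.1$ and $\alpha = 0.1$; all object names are hypothetical. Two caveats: glmnet internally rescales penalty.factor to sum to $p$, so the loadings enter only in relative terms, and the post-Lasso variant would additionally refit the selected coefficients by OLS in each iteration.

```r
## Hedged sketch of Algorithm 2 (Lasso variant) with iterated penalty loadings.
library(glmnet)

iterated_lasso <- function(y, x, K = 2, c = 1.1, alpha0 = 0.1) {
  T <- nrow(x); p <- ncol(x)
  lambda <- 2 * c * sqrt(T) * qnorm(1 - alpha0 / (2 * p))  # plug-in penalty level
  ## initial loadings: gamma_j = sqrt( (1/T) sum_t x_tj^2 (y_t - ybar)^2 )
  gamma <- sqrt(colMeans(x^2 * (y - mean(y))^2))
  fit <- glmnet(x, y, alpha = 1, lambda = lambda / (2 * T),
                penalty.factor = gamma, standardize = FALSE)
  for (k in seq_len(K)) {
    ## refined loadings: gamma_j = sqrt( (1/T) sum_t x_tj^2 ehat_t^2 )
    e <- as.numeric(y - predict(fit, newx = x))
    gamma <- sqrt(colMeans(x^2 * e^2))
    fit <- glmnet(x, y, alpha = 1, lambda = lambda / (2 * T),
                  penalty.factor = gamma, standardize = FALSE)
  }
  fit
}
```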

Conflicts of Interest

The authors declare no conflict of interest.
  • 1. See [6] for a similar approach.
  • 2. The Lasso has also been applied in the GIS literature, where the focus is on estimation of a spatial model where spatial dependence is a function of geographic distance; see, for example, Huang et al. [10] and Wheeler [11]. Likewise, Seya et al. [12] assume a known spatial weights matrix and apply the Lasso for spatial filtering. Spatial filtering differs from our approach in that filtering treats the spatial weights as nuisance parameters, whereas we focus on the recovery of the spatial dependence structure.
  • 3. We implicitly set the spatial autoregressive parameter, which is commonly employed in spatial models, equal to one, since $w_{ij}$ and the spatial autoregressive parameter are not separately identified [5].
  • 4. See [23] for an overview.
  • 5. The subscripts “1” and “2” indicate, where appropriate, that the corresponding terms refer to the first and second step, respectively.
  • 6. Whether this assumption is reasonable depends on the application. If there is a regime change at a known date, the model can be estimated for each sub-period separately, assuming that parameter stability holds within each sub-period and that the time dimension is sufficiently large.
  • 7. To simplify the exposition, the first and second-step Lasso also applies a penalty to $\beta_i$ and $\pi_{i,i}$, although we assume $\|\pi_{i,i}\|_0 = \|\beta_i\|_0 = K$ for identification. For better performance in finite samples, we recommend that the coefficients $\beta_i$ and $\pi_{i,i}$ are not penalized.
  • 8. The thresholded Lasso estimators considered in [31] apply the threshold to the Lasso estimates, whereas we apply the threshold to the post-Lasso estimates.
  • 9. We are grateful to two anonymous referees who suggested useful extensions to our Monte Carlo simulations.
  • 10. The Lasso estimations were conducted in R based on the package glmnet by Friedman et al. [50]. The code for the two-step Lasso, two-step post-Lasso and thresholded post-Lasso is available on request.
  • 11. We have also considered, among others, $c = 1.01$ and $\alpha = 0.05/\log(T)$ and did not find significant performance differences.

References

  1. G. Arbia, and B. Fingleton. “New spatial econometric techniques and applications in regional science.” Papers Reg. Sci. 87 (2008): 311–317.
  2. R. Harris, J. Moffat, and V. Kravtsova. “In search of ‘W’.” Spat. Econ. Anal. 6 (2011): 249–270.
  3. L. Corrado, and B. Fingleton. “Where is the economics in spatial econometrics?” J. Reg. Sci. 52 (2012): 210–239.
  4. J. Pinkse, M.E. Slade, and C. Brett. “Spatial price competition: A semiparametric approach.” Econometrica 70 (2002): 1111–1153.
  5. A. Bhattacharjee, and C. Jensen-Butler. “Estimation of the spatial weights matrix under structural constraints.” Reg. Sci. Urban Econ. 43 (2013): 617–634.
  6. M. Beenstock, and D. Felsenstein. “Nonparametric estimation of the spatial connectivity matrix using spatial panel data.” Geogr. Anal. 44 (2012): 386–397.
  7. A. Bhattacharjee, and S. Holly. “Structural interactions in spatial panels.” Empir. Econ. 40 (2011): 69–94.
  8. A. Bhattacharjee, and S. Holly. “Understanding interactions in social networks and committees.” Spat. Econ. Anal. 8 (2013): 23–53.
  9. N. Bailey, S. Holly, and M.H. Pesaran. “A two stage approach to spatiotemporal analysis with strong and weak cross-sectional dependence.” J. Appl. Econom., 2014, in press.
  10. H.C. Huang, N.J. Hsu, D.M. Theobald, and F.J. Breidt. “Spatial Lasso with applications to GIS model selection.” J. Comput. Graph. Stat. 19 (2010): 963–983.
  11. D.C. Wheeler. “Simultaneous coefficient penalization and model selection in geographically weighted regression: The geographically weighted lasso.” Environ. Plan. A 41 (2009): 722–742.
  12. H. Seya, D. Murakami, M. Tsutsumi, and Y. Yamagata. “Application of Lasso to the eigenvector selection problem in eigenvector-based spatial filtering.” Geogr. Anal., 2014.
  13. E. Manresa. “Estimating the structure of social interactions using panel data.” Unpublished work, CEMFI, Madrid, Spain, 2014.
  14. P.C. Souza. “Estimating networks: Lasso for spatial weights.” Unpublished work, Department of Statistics, London School of Economics and Political Science, London, UK, 2012.
  15. C. Lam, and P.C. Souza. “Regularization for spatial panel time series using adaptive Lasso.” Unpublished work, Department of Statistics, London School of Economics and Political Science, London, UK, 2013.
  16. A.D. Cliff, and J.K. Ord. Spatial Autocorrelation: Monographs in Spatial and Environmental Systems Analysis. London, UK: Pion Ltd., 1973.
  17. L. Anselin. Spatial Econometrics: Methods and Models. New York, NY, USA: Springer, 1988.
  18. M. Kapoor, H.H. Kelejian, and I.R. Prucha. “Panel data models with spatially correlated error components.” J. Econom. 140 (2007): 97–130.
  19. L.F. Lee, and J. Yu. “Some recent developments in spatial panel data models.” Reg. Sci. Urban Econ. 40 (2010): 255–271.
  20. L.F. Lee, and J. Yu. “Estimation of spatial autoregressive panel data models with fixed effects.” J. Econom. 154 (2010): 165–185.
  21. L.F. Lee, and J. Yu. “Efficient GMM estimation of spatial dynamic panel data models with fixed effects.” J. Econom. 180 (2014): 174–197.
  22. J. Mutl, and M. Pfaffermayr. “The Hausman test in a Cliff and Ord panel model.” Econom. J. 14 (2011): 48–76.
  23. J. Elhorst. Spatial Econometrics: From Cross-Sectional Data to Spatial Panels. New York, NY, USA: Springer, 2014.
  24. R. Tibshirani. “Regression shrinkage and selection via the Lasso.” J. R. Stat. Soc. Ser. B (Methodological) 58 (1996): 267–288.
  25. H. Akaike. “A new look at the statistical model identification.” IEEE Trans. Autom. Control 19 (1974): 716–723.
  26. G. Schwarz. “Estimating the dimension of a model.” Ann. Stat. 6 (1978): 461–464.
  27. P.J. Bickel, Y. Ritov, and A.B. Tsybakov. “Simultaneous analysis of Lasso and Dantzig selector.” Ann. Stat. 37 (2009): 1705–1732.
  28. P. Bühlmann, and S. van de Geer. Statistics for High-Dimensional Data. New York, NY, USA: Springer, 2011.
  29. P. Zhao, and B. Yu. “On model selection consistency of Lasso.” J. Mach. Learn. Res. 7 (2006): 2541–2563.
  30. A. Belloni, D. Chen, V. Chernozhukov, and C. Hansen. “Sparse models and methods for optimal instruments with an application to eminent domain.” Econometrica 80 (2012): 2369–2429.
  31. A. Belloni, and V. Chernozhukov. “Least squares after model selection in high-dimensional sparse models.” Bernoulli 19 (2013): 521–547.
  32. F. Bunea, A. Tsybakov, and M. Wegkamp. “Sparsity oracle inequalities for the Lasso.” Electron. J. Stat. 1 (2007): 169–194.
  33. M.J. Wainwright. “Sharp thresholds for high-dimensional and noisy sparsity recovery using L1-constrained quadratic programming.” IEEE Trans. Inf. Theory 55 (2009): 2183–2202.
  34. S. van de Geer. “High-dimensional generalized linear models and the Lasso.” Ann. Stat. 36 (2008): 614–645.
  35. N. Meinshausen, and B. Yu. “Lasso-type recovery of sparse representations for high-dimensional data.” Ann. Stat. 37 (2009): 246–270.
  36. M.H. Pesaran. “Estimation and inference in large heterogeneous panels with a multifactor error structure.” Econometrica 74 (2006): 967–1012.
  37. M.H. Pesaran, and E. Tosetti. “Large panels with common factors and spatial correlation.” J. Econom. 161 (2011): 182–202.
  38. E. Gautier, and A.B. Tsybakov. “High-dimensional instrumental variables regression and confidence sets.” Unpublished work, Toulouse School of Economics, Toulouse, France, 2014.
  39. E. Candes, and T. Tao. “The Dantzig selector: Statistical estimation when p is much larger than n.” Ann. Stat. 35 (2007): 2313–2351.
  40. J. Fan, and Y. Liao. “Endogeneity in high dimensions.” Ann. Stat. 42 (2014): 872–917.
  41. M. Caner. “Lasso-type GMM estimator.” Econom. Theory 25 (2009): 270–290.
  42. W. Lin, R. Feng, and H. Li. “Regularization methods for high-dimensional instrumental variables regression with an application to genetical genomics.” J. Am. Stat. Assoc., 2014.
  43. A. Belloni, and V. Chernozhukov. “High dimensional sparse econometric models: An introduction.” In Inverse Problems and High-Dimensional Estimation. Edited by P. Alquier, E. Gautier and G. Stoltz. Berlin/Heidelberg, Germany: Springer, 2011, pp. 121–156.
  44. B.Y. Jing, Q.M. Shao, and Q. Wang. “Self-normalized Cramér-type large deviations for independent random variables.” Ann. Probab. 31 (2003): 2167–2215.
  45. A. Belloni, V. Chernozhukov, and C. Hansen. “Inference on treatment effects after selection among high-dimensional controls.” Rev. Econ. Stud. 81 (2014): 608–650.
  46. S. van de Geer, and P. Bühlmann. “On the conditions used to prove oracle results for the Lasso.” Electron. J. Stat. 3 (2009): 1360–1392.
  47. G. Raskutti, M.J. Wainwright, and B. Yu. “Restricted eigenvalue properties for correlated Gaussian designs.” J. Mach. Learn. Res. 11 (2010): 2241–2259.
  48. H.H. Kelejian, and I.R. Prucha. “A generalized spatial two-stage least squares procedure for estimating a spatial autoregressive model with autoregressive disturbances.” J. Real Estate Financ. Econ. 17 (1998): 99–121.
  49. H.H. Kelejian, and I.R. Prucha. “A generalized moments estimator for the autoregressive parameter in a spatial model.” Int. Econ. Rev. 40 (1999): 509–533.
  50. J. Friedman, T. Hastie, and R. Tibshirani. “Regularization paths for generalized linear models via coordinate descent.” J. Stat. Softw. 33 (2010): 1–22.
  51. R. Lockhart, J. Taylor, R.J. Tibshirani, and R. Tibshirani. “A significance test for the Lasso.” Ann. Stat. 42 (2014): 413–468.
  52. L. Wasserman, and K. Roeder. “High-dimensional variable selection.” Ann. Stat. 37 (2009): 2178–2201.
  53. N. Meinshausen, L. Meier, and P. Bühlmann. “p-Values for high-dimensional regression.” J. Am. Stat. Assoc. 104 (2009): 1671–1681.
  54. J. Fan, and R. Li. “Variable selection via nonconcave penalized likelihood and its oracle properties.” J. Am. Stat. Assoc. 96 (2001): 1348–1360.
  55. K. Knight, and W. Fu. “Asymptotics for Lasso-type estimators.” Ann. Stat. 28 (2000): 1356–1378.
  56. A. Belloni, V. Chernozhukov, and L. Wang. “Square-root lasso: Pivotal recovery of sparse signals via conic programming.” Biometrika 98 (2011): 791–806.
  57. A. Belloni, V. Chernozhukov, and L. Wang. “Pivotal estimation via square-root Lasso in nonparametric regression.” Ann. Stat. 42 (2014): 757–788.
