
Table of Contents

Econometrics, Volume 4, Issue 4 (December 2016)


Editorial


Open Access Editorial: Editorial Announcement
Econometrics 2016, 4(4), 40; doi:10.3390/econometrics4040040
Received: 8 October 2016 / Revised: 8 October 2016 / Accepted: 9 October 2016 / Published: 10 October 2016
Abstract: I am pleased to announce that, following my retirement on 30 September 2016, Marc Paolella will become Editor-in-Chief (EiC) of Econometrics.

Research


Open Access Article: The Status of Bridge Principles in Applied Econometrics
Econometrics 2016, 4(4), 50; doi:10.3390/econometrics4040050
Received: 4 September 2016 / Revised: 30 November 2016 / Accepted: 2 December 2016 / Published: 17 December 2016
Abstract: The paper begins with a figurative representation of the contrast between present-day and formal applied econometrics. An explication of the status of bridge principles in applied econometrics follows. To illustrate the concepts used in the explication, the paper presents a simultaneous-equation model of the equilibrium configurations of a perfectly competitive commodity market. With artificially generated data, I carry out two empirical analyses of such a market that contrast the prescriptions of formal econometrics in the tradition of Ragnar Frisch with the commands of present-day econometrics in the tradition of Trygve Haavelmo. At the end, I demonstrate that the bridge principles I use in the formal-econometric analysis are valid in the Real World, that is, in the world in which my data reside.

Open Access Article: Oil Price and Economic Growth: A Long Story?
Econometrics 2016, 4(4), 41; doi:10.3390/econometrics4040041
Received: 30 August 2016 / Revised: 30 September 2016 / Accepted: 7 October 2016 / Published: 28 October 2016
Abstract: This study investigates changes in the relationship between oil prices and the US economy from a long-term perspective. Although neither of the two series (oil price and GDP growth rates) presents structural breaks in mean, we identify different volatility periods in both of them, separately. From a multivariate perspective, we do not observe a significant effect between changes in oil prices and GDP growth when considering the full period. However, we find a significant relationship in some subperiods by carrying out a rolling analysis and by investigating the presence of structural breaks in the multivariate framework. Finally, we obtain evidence, by means of a time-varying VAR, that the impact of the oil price shock on GDP growth has declined over time. We also observe that the negative effect is greater at the time of large oil price increases, supporting previous evidence of nonlinearity in the relationship.
(This article belongs to the Special Issue Unit Roots and Structural Breaks)

Open Access Article: Social Networks and Choice Set Formation in Discrete Choice Models
Econometrics 2016, 4(4), 42; doi:10.3390/econometrics4040042
Received: 19 December 2015 / Revised: 4 October 2016 / Accepted: 14 October 2016 / Published: 27 October 2016
Abstract: The discrete choice literature has evolved from the analysis of a choice of a single item from a fixed choice set to the incorporation of a vast array of more complex representations of preferences and choice set formation processes into choice models. Modern discrete choice models include rich specifications of heterogeneity, multi-stage processing for choice set determination, dynamics, and other elements. However, discrete choice models still largely represent socially isolated choice processes: individuals are not affected by the preferences or choices of other individuals. There is a developing literature on the impact of social networks on preferences or the utility function in a random utility model, but little examination of such processes for choice set formation. There is also emerging evidence in the marketplace of the influence of friends on choice sets and choices. In this paper, we develop discrete choice models that incorporate formal social network structures into the choice set formation process in a two-stage random utility framework. We assess models where peers may affect not only the alternatives that individuals consider or include in their choice sets, but also consumption choices. We explore the properties of our models and evaluate the extent of “errors” in the assessment of preferences, economic welfare measures, and market shares if network effects are present but are not accounted for in the econometric model. Our results shed light on the importance of the evaluation of peer or network effects on the inclusion/exclusion of alternatives in a random utility choice framework.
(This article belongs to the Special Issue Discrete Choice Modeling)

Open Access Article: Testing Cross-Sectional Correlation in Large Panel Data Models with Serial Correlation
Econometrics 2016, 4(4), 44; doi:10.3390/econometrics4040044
Received: 23 July 2016 / Revised: 12 October 2016 / Accepted: 19 October 2016 / Published: 4 November 2016
Abstract: This paper considers the problem of testing cross-sectional correlation in large panel data models with serially-correlated errors. It finds that existing tests for cross-sectional correlation encounter size distortions when serial correlation is present in the errors. To control the size, this paper proposes a modification of Pesaran’s Cross-sectional Dependence (CD) test to account for serial correlation of an unknown form in the error term. We derive the limiting distribution of this test as N, T → ∞. The test is distribution-free and allows for unknown forms of serial correlation in the errors. Monte Carlo simulations show that the test has good size and power for large panels when serial correlation in the errors is present.
(This article belongs to the Special Issue Recent Developments in Panel Data Methods)
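As a point of reference for the entry above, Pesaran's original CD statistic (the quantity the paper modifies) can be computed from a T × N array of panel residuals as in the sketch below. The function name and demo data are illustrative assumptions; the paper's serial-correlation-robust adjustment is not reproduced here.

```python
import numpy as np

def pesaran_cd(residuals):
    """Pesaran's CD statistic from a (T x N) array of panel residuals:
    CD = sqrt(2T / (N(N-1))) * sum_{i<j} rho_ij, where rho_ij is the
    sample correlation between the residual series of units i and j."""
    T, N = residuals.shape
    corr = np.corrcoef(residuals, rowvar=False)  # N x N correlation matrix
    pairs = np.triu_indices(N, k=1)              # all pairs with i < j
    return np.sqrt(2.0 * T / (N * (N - 1))) * corr[pairs].sum()

# Under the null of no cross-sectional dependence (with i.i.d. errors),
# CD is asymptotically N(0, 1); strong dependence inflates it.
rng = np.random.default_rng(0)
cd_independent = pesaran_cd(rng.standard_normal((200, 30)))
```

The paper's contribution is precisely that this statistic is mis-sized when the errors are serially correlated, motivating the modified version derived there.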
Open Access Article: Panel Cointegration Testing in the Presence of Linear Time Trends
Econometrics 2016, 4(4), 45; doi:10.3390/econometrics4040045
Received: 15 May 2016 / Revised: 28 September 2016 / Accepted: 20 October 2016 / Published: 1 November 2016
Abstract: We consider a class of panel tests of the null hypothesis of no cointegration and of cointegration. All tests under investigation rely on single equations estimated by least squares, and they may be residual-based or not. We focus on test statistics computed from regressions with intercept only (i.e., without detrending) and with at least one of the regressors (integrated of order 1) being dominated by a linear time trend. In such a setting, often encountered in practice, the limiting distributions and critical values provided for the “with intercept only” situation are not correct. It is demonstrated that their usage results in size distortions growing with the panel size N. Moreover, we show which distributions are appropriate, and how correct critical values can be obtained from the literature.
(This article belongs to the Special Issue Recent Developments in Cointegration)
Open Access Article: Generalized Information Matrix Tests for Detecting Model Misspecification
Econometrics 2016, 4(4), 46; doi:10.3390/econometrics4040046
Received: 29 December 2015 / Revised: 13 September 2016 / Accepted: 26 October 2016 / Published: 15 November 2016
Abstract: Generalized Information Matrix Tests (GIMTs) have recently been used for detecting the presence of misspecification in regression models in both randomized controlled trials and observational studies. In this paper, a unified GIMT framework is developed for the purpose of identifying, classifying, and deriving novel model misspecification tests for finite-dimensional smooth probability models. These GIMTs include previously published as well as newly developed information matrix tests. To illustrate the application of the GIMT framework, we derived and assessed the performance of new GIMTs for binary logistic regression. Although all GIMTs exhibited good level and power performance for the larger sample sizes, GIMT statistics with fewer degrees of freedom and derived using log-likelihood third derivatives exhibited improved level and power performance.
(This article belongs to the Special Issue Recent Developments of Specification Testing)

Open Access Article: Subset-Continuous-Updating GMM Estimators for Dynamic Panel Data Models
Econometrics 2016, 4(4), 47; doi:10.3390/econometrics4040047
Received: 25 May 2016 / Revised: 23 November 2016 / Accepted: 25 November 2016 / Published: 30 November 2016
Abstract: The two-step GMM estimators of Arellano and Bond (1991) and Blundell and Bond (1998) for dynamic panel data models have been widely used in empirical work; however, neither of them performs well in small samples with weak instruments. The continuous-updating GMM estimator proposed by Hansen, Heaton, and Yaron (1996) is in principle able to reduce the small-sample bias, but it involves high-dimensional optimizations when the number of regressors is large. This paper proposes a computationally feasible variation on these standard two-step GMM estimators by applying the idea of continuous updating to the autoregressive parameter only, exploiting the fact that the absolute value of the autoregressive parameter must be less than unity for the data-generating process to be stationary. We show that our subset-continuous-updating method does not alter the asymptotic distribution of the two-step GMM estimators, and it therefore retains consistency. Our simulation results indicate that the subset-continuous-updating GMM estimators outperform their standard two-step counterparts in finite samples in terms of the estimation accuracy on the autoregressive parameter and the size of the Sargan-Hansen test.
(This article belongs to the Special Issue Recent Developments in Panel Data Methods)
Open Access Article: Higher Order Bias Correcting Moment Equation for M-Estimation and Its Higher Order Efficiency
Econometrics 2016, 4(4), 48; doi:10.3390/econometrics4040048
Received: 21 September 2016 / Revised: 15 November 2016 / Accepted: 23 November 2016 / Published: 8 December 2016
Abstract: This paper studies an alternative bias correction for the M-estimator, which is obtained by correcting the moment equations in the spirit of Firth (1993). In particular, this paper compares the stochastic expansions of the analytically-bias-corrected estimator and the alternative estimator and finds that the third-order stochastic expansions of these two estimators are identical. This implies that, at least in terms of the third-order stochastic expansion, we cannot improve on the simple one-step bias correction by using the bias correction of moment equations. This finding suggests that the comparison between the one-step bias correction and the method of correcting the moment equations or the fully-iterated bias correction should be based on stochastic expansions higher than the third order.
Open Access Article: Estimation of Dynamic Panel Data Models with Stochastic Volatility Using Particle Filters
Econometrics 2016, 4(4), 39; doi:10.3390/econometrics4040039
Received: 6 May 2016 / Revised: 18 September 2016 / Accepted: 26 September 2016 / Published: 9 October 2016
Abstract: Time-varying volatility is common in macroeconomic data and has been incorporated into macroeconomic models in recent work. Dynamic panel data models have become increasingly popular in macroeconomics to study common relationships across countries or regions. This paper estimates dynamic panel data models with stochastic volatility by maximizing an approximate likelihood obtained via Rao-Blackwellized particle filters. Monte Carlo studies reveal the good and stable performance of our particle filter-based estimator. When the volatility of volatility is high, or when regressors are absent but stochastic volatility exists, our approach can outperform both the maximum likelihood estimator that neglects stochastic volatility and generalized method of moments (GMM) estimators.
(This article belongs to the Special Issue Recent Developments in Panel Data Methods)
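To illustrate the particle-filter likelihood idea mentioned in the entry above, here is a minimal bootstrap particle filter for a univariate stochastic-volatility model, a simpler cousin of the Rao-Blackwellized filters used in the paper. The model specification, function name, and tuning values are illustrative assumptions, not the authors' code.

```python
import numpy as np

def sv_bootstrap_loglik(y, mu, phi, sigma, n_particles=500, seed=0):
    """Approximate log-likelihood of a basic stochastic-volatility model
        y_t = exp(h_t / 2) * eps_t,                      eps_t ~ N(0, 1)
        h_t = mu + phi * (h_{t-1} - mu) + sigma * eta_t, eta_t ~ N(0, 1)
    via a bootstrap particle filter."""
    rng = np.random.default_rng(seed)
    # initialize particles from the stationary distribution of h_t
    h = mu + sigma / np.sqrt(1.0 - phi**2) * rng.standard_normal(n_particles)
    loglik = 0.0
    for yt in y:
        # propagate particles through the state equation
        h = mu + phi * (h - mu) + sigma * rng.standard_normal(n_particles)
        # weight by the measurement density N(0, exp(h_t))
        logw = -0.5 * (np.log(2 * np.pi) + h + yt**2 * np.exp(-h))
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())  # log of the average weight
        # multinomial resampling to avoid weight degeneracy
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        h = h[idx]
    return loglik
```

Maximizing this approximate log-likelihood over (mu, phi, sigma) is the spirit of the estimation approach; the paper's panel setting additionally Rao-Blackwellizes out the linear states.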
Open Access Article: Testing for the Equality of Integration Orders of Multiple Series
Econometrics 2016, 4(4), 49; doi:10.3390/econometrics4040049
Received: 15 July 2016 / Revised: 23 November 2016 / Accepted: 25 November 2016 / Published: 15 December 2016
Abstract: Testing for the equality of integration orders is an important topic in time series analysis because it constitutes an essential step in testing for (fractional) cointegration in the bivariate case. For the multivariate case, there are several versions of cointegration, and the version given in Robinson and Yajima (2002) has received much attention. In this definition, a time series vector is partitioned into several sub-vectors, and the elements in each sub-vector have the same integration order. Furthermore, this time series vector is said to be cointegrated if cointegration exists within any of the sub-vectors. Under such a circumstance, testing for the equality of integration orders constitutes an important problem. However, for multivariate fractionally integrated series, most tests focus on stationary and invertible series and become invalid in the presence of cointegration. Hualde (2013) overcomes these difficulties with a residual-based test for a bivariate time series. For the multivariate case, one possible extension of this test involves testing over an array of bivariate series, which becomes computationally challenging as the dimension of the time series increases. In this paper, a one-step residual-based test is proposed to deal with the multivariate case and overcome the computational issue. Under certain regularity conditions, the test statistic has an asymptotic standard normal distribution under the null hypothesis of equal integration orders and diverges to infinity under the alternative. As reported in a Monte Carlo experiment, the proposed test has satisfactory size and power.
(This article belongs to the Special Issue Unit Roots and Structural Breaks)

Review


Open Access Review: Pair-Copula Constructions for Financial Applications: A Review
Econometrics 2016, 4(4), 43; doi:10.3390/econometrics4040043
Received: 1 September 2016 / Revised: 4 October 2016 / Accepted: 18 October 2016 / Published: 29 October 2016
Abstract: This survey reviews the large and growing literature on the use of pair-copula constructions (PCCs) in financial applications. Using a PCC, multivariate data that exhibit complex patterns of dependence can be modeled using bivariate copulae as simple building blocks. Hence, this model represents a very flexible way of constructing higher-dimensional copulae. In this paper, we survey inference methods and goodness-of-fit tests for such models, as well as empirical applications of PCCs in finance and economics.
(This article belongs to the Special Issue Recent Developments in Copula Models)
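The "bivariate building block" idea in the entry above can be made concrete with a small sketch: a three-dimensional D-vine copula density assembled from Gaussian pair-copulas. The Gaussian choice and all function names are illustrative assumptions; PCCs allow an arbitrary bivariate family in each position of the vine.

```python
import math
from statistics import NormalDist

_nd = NormalDist()

def gaussian_pair_copula_density(u, v, rho):
    """Density c(u, v; rho) of the bivariate Gaussian copula,
    a typical building block of a pair-copula construction."""
    z1, z2 = _nd.inv_cdf(u), _nd.inv_cdf(v)
    r2 = rho * rho
    return (1.0 / math.sqrt(1.0 - r2)) * math.exp(
        (2.0 * rho * z1 * z2 - r2 * (z1 * z1 + z2 * z2)) / (2.0 * (1.0 - r2))
    )

def h_func(u, v, rho):
    """Conditional distribution ('h-function') of the Gaussian copula,
    h(u | v; rho) = P(U <= u | V = v); it yields the conditional
    pseudo-observations fed into the next tree of the vine."""
    z1, z2 = _nd.inv_cdf(u), _nd.inv_cdf(v)
    return _nd.cdf((z1 - rho * z2) / math.sqrt(1.0 - rho * rho))

def dvine3_copula_density(u1, u2, u3, rho12, rho23, rho13_2):
    """Three-dimensional D-vine copula density built from bivariate
    pair-copulas: c12 * c23 * c13|2."""
    return (
        gaussian_pair_copula_density(u1, u2, rho12)
        * gaussian_pair_copula_density(u2, u3, rho23)
        * gaussian_pair_copula_density(
            h_func(u1, u2, rho12), h_func(u3, u2, rho23), rho13_2
        )
    )
```

Replacing any of the three Gaussian blocks with, say, a Clayton or Gumbel density is exactly the flexibility the review emphasizes.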

Journal Contact

MDPI AG
Econometrics Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
Tel. +41 61 683 77 34
Fax: +41 61 302 89 18