Econometrics
http://www.mdpi.com/journal/econometrics
Latest open access articles published in Econometrics at http://www.mdpi.com/journal/econometrics

Econometrics, Vol. 5, Pages 21: Bayesian Inference for Latent Factor Copulas and Application to Financial Risk Forecasting
http://www.mdpi.com/2225-1146/5/2/21
Factor modeling is a popular strategy to induce sparsity in multivariate models as they scale to higher dimensions. We develop Bayesian inference for a recently proposed latent factor copula model, which utilizes a pair copula construction to couple the variables with the latent factor. We use adaptive rejection Metropolis sampling (ARMS) within Gibbs sampling for posterior simulation: Gibbs sampling enables application to Bayesian problems, while ARMS is an adaptive strategy that replaces traditional Metropolis-Hastings updates, which typically require careful tuning. Our simulation study shows favorable performance of our proposed approach both in terms of sampling efficiency and accuracy. We provide an extensive application example using historical data on European financial stocks that forecasts portfolio Value at Risk (VaR) and Expected Shortfall (ES).

Econometrics 2017, 5(2), 21; doi: 10.3390/econometrics5020021 (Article, published 2017-05-23). Authors: Benedikt Schamberger, Lutz Gruber, Claudia Czado.

Econometrics, Vol. 5, Pages 20: Copula-Based Factor Models for Multivariate Asset Returns
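The "within Gibbs" structure described in the latent factor copula abstract above can be sketched in a few lines. This is a hypothetical toy example, not the authors' sampler: the target is a bivariate normal rather than a copula posterior, and the coordinate updates use plain random-walk Metropolis where the paper substitutes ARMS to avoid manual proposal tuning.

```python
import math
import random

# Toy Metropolis-within-Gibbs sampler on a bivariate normal target with
# correlation rho. The paper replaces the random-walk update below with
# ARMS, which adapts the proposal automatically; this sketch only
# illustrates the "one coordinate at a time" Gibbs structure.
def log_target(x, y, rho=0.5):
    # log density of a correlated bivariate normal, up to an additive constant
    return -(x * x - 2 * rho * x * y + y * y) / (2 * (1 - rho ** 2))

def metropolis_within_gibbs(n_iter=5000, step=1.0, seed=1):
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    draws = []
    for _ in range(n_iter):
        for coord in (0, 1):                      # Gibbs sweep over coordinates
            cur = x if coord == 0 else y
            prop = cur + rng.gauss(0.0, step)     # random-walk proposal
            cur_lp = log_target(x, y)
            new_lp = log_target(prop, y) if coord == 0 else log_target(x, prop)
            if math.log(rng.random()) < new_lp - cur_lp:   # MH accept/reject
                if coord == 0:
                    x = prop
                else:
                    y = prop
        draws.append((x, y))
    return draws
```

Swapping the inner random-walk step for an adaptive scheme such as ARMS leaves this outer loop unchanged, which is the sense in which the combination removes the need for careful tuning.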
http://www.mdpi.com/2225-1146/5/2/20
Recently, several copula-based approaches have been proposed for modeling stationary multivariate time series. All of them are based on vine copulas, and they differ in the choice of the regular vine structure. In this article, we consider a copula autoregressive (COPAR) approach to model the dependence of unobserved multivariate factors resulting from two dynamic factor models. However, the proposed methodology is general and applicable to several factor models as well as to other copula models for stationary multivariate time series. An empirical study illustrates the forecasting superiority of our approach for constructing an optimal portfolio of U.S. industrial stocks in the mean-variance framework.

Econometrics 2017, 5(2), 20; doi: 10.3390/econometrics5020020 (Article, published 2017-05-17). Authors: Eugen Ivanov, Aleksey Min, Franz Ramsauer.

Econometrics, Vol. 5, Pages 19: Maximum Likelihood Estimation of the I(2) Model under Linear Restrictions
http://www.mdpi.com/2225-1146/5/2/19
Estimation of the I(2) cointegrated vector autoregressive (CVAR) model is considered. Without further restrictions, estimation of the I(1) model is by reduced-rank regression (Anderson (1951)). Maximum likelihood estimation of I(2) models, on the other hand, always requires iteration. This paper presents a new triangular representation of the I(2) model. This is the basis for a new estimation procedure of the unrestricted I(2) model, as well as the I(2) model with linear restrictions imposed.

Econometrics 2017, 5(2), 19; doi: 10.3390/econometrics5020019 (Article, published 2017-05-15). Author: Jurgen Doornik.

Econometrics, Vol. 5, Pages 18: The Univariate Collapsing Method for Portfolio Optimization
http://www.mdpi.com/2225-1146/5/2/18
The univariate collapsing method (UCM) for portfolio optimization obtains the predictive mean and a risk measure, such as the variance or expected shortfall, of the univariate pseudo-return series generated from a given set of portfolio weights and a multivariate set of assets of interest, and, via simulation or optimization, repeats this process until the desired portfolio weight vector is obtained. The UCM is well-known conceptually, straightforward to implement, and possesses several advantages over the use of multivariate models, but, among other things, has been criticized for being too slow. As such, it does not feature prominently in asset allocation and receives little attention in the academic literature. This paper proposes the use of fast model estimation methods combined with new heuristics for sampling, based on easily-determined characteristics of the data, to accelerate and optimize the simulation search. An extensive empirical analysis confirms the viability of the method.

Econometrics 2017, 5(2), 18; doi: 10.3390/econometrics5020018 (Article, published 2017-05-05). Author: Marc Paolella.

Econometrics, Vol. 5, Pages 17: Selecting the Lag Length for the MGLS Unit Root Tests with Structural Change: A Warning Note for Practitioners Based on Simulations
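The collapsing step in the UCM abstract above is easy to sketch. The following is a hypothetical minimal illustration, not the paper's procedure: it scores random long-only weight vectors by predictive mean minus empirical expected shortfall, and the paper's actual contribution (fast model estimation and data-driven sampling heuristics) is not reproduced.

```python
import random

# Minimal sketch of the univariate collapsing idea: for each candidate
# weight vector, collapse the multivariate return panel into a univariate
# pseudo-return series, score it, and keep the best candidate.
def pseudo_returns(returns, weights):
    # returns: list of T rows (one list of asset returns per period)
    return [sum(w * r for w, r in zip(weights, row)) for row in returns]

def expected_shortfall(series, alpha=0.1):
    s = sorted(series)                     # ascending: worst returns first
    k = max(1, int(len(s) * alpha))        # size of the lower tail
    return -sum(s[:k]) / k                 # ES reported as a positive loss

def ucm_search(returns, n_candidates=200, alpha=0.1, seed=0):
    rng = random.Random(seed)
    n_assets = len(returns[0])
    best, best_score = None, float("-inf")
    for _ in range(n_candidates):
        raw = [rng.random() for _ in range(n_assets)]
        w = [x / sum(raw) for x in raw]    # long-only weights summing to 1
        r = pseudo_returns(returns, w)
        mean = sum(r) / len(r)
        score = mean - expected_shortfall(r, alpha)   # mean-ES trade-off
        if score > best_score:
            best, best_score = w, score
    return best, best_score
```

The criticism of slowness is visible here: every candidate weight vector requires a fresh pass over the data (and, in the full method, a fresh univariate model fit), which is what the paper's heuristics aim to accelerate.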
http://www.mdpi.com/2225-1146/5/2/17
This is a simulation-based warning note for practitioners who use the MGLS unit root tests in the context of structural change using different lag length selection criteria. With T = 100, we find severe oversize problems when using some criteria, while other criteria produce undersizing behavior. In view of this dilemma, we do not recommend using these tests. While such behavior tends to disappear when T = 250, it is important to note that most empirical applications use smaller sample sizes such as T = 100 or T = 150. The ADF-GLS test does not present an oversizing or undersizing problem. The only disadvantage of the ADF-GLS test arises in the presence of negative MA(1) correlation, in which case the MGLS tests are preferable, although in all other cases they are very undersized. When there is a break in the series, selecting the breakpoint using the Supremum method greatly improves the results relative to the Infimum method.

Econometrics 2017, 5(2), 17; doi: 10.3390/econometrics5020017 (Article, published 2017-04-16). Authors: Ricardo Quineche, Gabriel Rodríguez.

Econometrics, Vol. 5, Pages 16: Copula–Based vMEM Specifications versus Alternatives: The Case of Trading Activity
http://www.mdpi.com/2225-1146/5/2/16
We discuss several multivariate extensions of the Multiplicative Error Model to take into account dynamic interdependence and contemporaneously correlated innovations (vector MEM or vMEM). We suggest copula functions to link Gamma marginals of the innovations, in a specification where past values and conditional expectations of the variables can be simultaneously estimated. Results with realized volatility, volumes and number of trades of the JNJ stock show that significantly superior realized volatility forecasts are delivered with a fully interdependent vMEM relative to a single equation. Alternatives involving log-Normal or semiparametric formulations produce substantially equivalent results.

Econometrics 2017, 5(2), 16; doi: 10.3390/econometrics5020016 (Article, published 2017-04-12). Authors: Fabrizio Cipollini, Robert Engle, Giampiero Gallo.

Econometrics, Vol. 5, Pages 15: A Simple Test for Causality in Volatility
http://www.mdpi.com/2225-1146/5/1/15
An early development in testing for causality (technically, Granger non-causality) in the conditional variance (or volatility) associated with financial returns was the portmanteau statistic for non-causality in the variance of Cheng and Ng (1996). A subsequent development was the Lagrange Multiplier (LM) test of non-causality in the conditional variance by Hafner and Herwartz (2006), who provided simulation results to show that their LM test was more powerful than the portmanteau statistic for sample sizes of 1000 and 4000 observations. While the LM test for causality proposed by Hafner and Herwartz (2006) is an interesting and useful development, it is nonetheless arbitrary. In particular, the specification on which the LM test is based does not rely on an underlying stochastic process, so the alternative hypothesis is also arbitrary, which can affect the power of the test. The purpose of the paper is to derive a simple test for causality in volatility that provides regularity conditions arising from the underlying stochastic process, namely a random coefficient autoregressive process, and a test for which the (quasi-) maximum likelihood estimates have valid asymptotic properties under the null hypothesis of non-causality. The simple test is intuitively appealing as it is based on an underlying stochastic process, is sympathetic to Granger’s (1969, 1988) notion of time series predictability, is easy to implement, and has a regularity condition that is not available in the LM test.

Econometrics 2017, 5(1), 15; doi: 10.3390/econometrics5010015 (Article, published 2017-03-20). Authors: Chia-Lin Chang, Michael McAleer.

Econometrics, Vol. 5, Pages 14: Accuracy and Efficiency of Various GMM Inference Techniques in Dynamic Micro Panel Data Models
http://www.mdpi.com/2225-1146/5/1/14
Studies employing Arellano-Bond and Blundell-Bond generalized method of moments (GMM) estimation for linear dynamic panel data models are growing exponentially in number. However, it is hard for researchers to make a reasoned choice between many different possible implementations of these estimators and associated tests. By simulation, the effects are examined of many options regarding: (i) reducing, extending or modifying the set of instruments; (ii) specifying the weighting matrix in relation to the type of heteroskedasticity; (iii) using (robustified) 1-step or (corrected) 2-step variance estimators; (iv) employing 1-step or 2-step residuals in Sargan-Hansen overall or incremental overidentification restrictions tests. This is all done for models in which some regressors may be either strictly exogenous, predetermined or endogenous. Surprisingly, particular asymptotically optimal and relatively robust weighting matrices are found to be superior in finite samples to ostensibly more appropriate versions. Most of the variants of tests for overidentification and coefficient restrictions show serious deficiencies. The variance of the individual effects is shown to be a major determinant of the poor quality of most asymptotic approximations; therefore, the accurate estimation of this nuisance parameter is investigated. A modification of GMM is found to have some potential when the cross-sectional heteroskedasticity is pronounced and the time-series dimension of the sample is not too small. Finally, all techniques are employed to actual data and lead to insights which differ considerably from those published earlier.

Econometrics 2017, 5(1), 14; doi: 10.3390/econometrics5010014 (Article, published 2017-03-20). Authors: Jan Kiviet, Milan Pleus, Rutger Poldermans.

Econometrics, Vol. 5, Pages 13: Goodness-of-Fit Tests for Copulas of Multivariate Time Series
http://www.mdpi.com/2225-1146/5/1/13
In this paper, we study the asymptotic behavior of the sequential empirical process and the sequential empirical copula process, both constructed from residuals of multivariate stochastic volatility models. Applications for the detection of structural changes and specification tests of the distribution of innovations are discussed. It is also shown that if the stochastic volatility matrices are diagonal, which is the case if the univariate time series are estimated separately instead of being jointly estimated, then the empirical copula process behaves as if the innovations were observed, a remarkable property. As a by-product, one also obtains the asymptotic behavior of rank-based measures of dependence applied to residuals of these time series models.

Econometrics 2017, 5(1), 13; doi: 10.3390/econometrics5010013 (Article, published 2017-03-17). Author: Bruno Rémillard.

Econometrics, Vol. 5, Pages 12: Testing for a Structural Break in a Spatial Panel Model
http://www.mdpi.com/2225-1146/5/1/12
We consider the problem of testing for a structural break in the spatial lag parameter in a spatial autoregressive panel model. We propose a likelihood ratio test of the null hypothesis of no break against the alternative hypothesis of a single break. The limiting distribution of the test is derived under the null when both the number of individual units N and the number of time periods T are large, or N is fixed and T is large. The asymptotic critical values of the test statistic can be obtained analytically. We also propose a break-date estimator that can be employed to determine the location of the break point following evidence against the null hypothesis. We present Monte Carlo evidence to show that the proposed procedure performs well in finite samples. Finally, we consider an empirical application of the test on budget spillovers and interdependence in fiscal policy within the U.S. states.

Econometrics 2017, 5(1), 12; doi: 10.3390/econometrics5010012 (Article, published 2017-03-06). Author: Aparna Sengupta.

Econometrics, Vol. 5, Pages 11: Structural Breaks, Inflation and Interest Rates: Evidence from the G7 Countries
http://www.mdpi.com/2225-1146/5/1/11
This study reconsiders the common unit root/co-integration approach to test for the Fisher effect for the economies of the G7 countries. We first show that nominal interest and inflation rates are better represented as I(0) variables. We then use the Bai–Perron procedure to show the existence of structural changes in the Fisher equation. After considering these breaks, we find very limited evidence of a total Fisher effect, as the transmission coefficient of the expected inflation rates to nominal interest rates is very different from one.

Econometrics 2017, 5(1), 11; doi: 10.3390/econometrics5010011 (Article, published 2017-02-17). Authors: Jesús Clemente, María Gadea, Antonio Montañés, Marcelo Reyes.

Econometrics, Vol. 5, Pages 10: A Note on Identification of Bivariate Copulas for Discrete Count Data
http://www.mdpi.com/2225-1146/5/1/10
Copulas have enjoyed increased usage in many areas of econometrics, including applications with discrete outcomes. However, Genest and Nešlehová (2007) present evidence that copulas for discrete outcomes are not identified, particularly when those discrete outcomes follow count distributions. This paper confirms the Genest and Nešlehová result using a series of simulation exercises. The paper then proceeds to show that those identification concerns diminish if the model has a regression structure such that the exogenous variable(s) generate additional variation in the outcomes and thus more completely cover the outcome domain.

Econometrics 2017, 5(1), 10; doi: 10.3390/econometrics5010010 (Article, published 2017-02-15). Authors: Pravin Trivedi, David Zimmer.

Econometrics, Vol. 5, Pages 8: Endogeneity, Time-Varying Coefficients, and Incorrect vs. Correct Ways of Specifying the Error Terms of Econometric Models
http://www.mdpi.com/2225-1146/5/1/8
Using the net effect of all relevant regressors omitted from a model to form its error term is incorrect because the coefficients and error term of such a model are non-unique. Non-unique coefficients cannot possess consistent estimators. Uniqueness can be achieved if, instead, one uses certain “sufficient sets” of (relevant) regressors omitted from each model to represent the error term. In this case, the unique coefficient on any non-constant regressor takes the form of the sum of a bias-free component and omitted-regressor biases. Measurement-error bias can also be incorporated into this sum. We show that if our procedures are followed, accurate estimation of bias-free components is possible.

Econometrics 2017, 5(1), 8; doi: 10.3390/econometrics5010008 (Article, published 2017-02-03). Authors: P.A.V.B. Swamy, Jatinder Mehta, I-Lok Chang.

Econometrics, Vol. 5, Pages 9: A Fast Algorithm for the Computation of HAC Covariance Matrix Estimators
http://www.mdpi.com/2225-1146/5/1/9
This paper considers the algorithmic implementation of the heteroskedasticity and autocorrelation consistent (HAC) estimation problem for covariance matrices of parameter estimators. We introduce a new algorithm, mainly based on the fast Fourier transform, and show via computer simulation that our algorithm is up to 20 times faster than well-established alternative algorithms. The cumulative effect is substantial if the HAC estimation problem has to be solved repeatedly. Moreover, the bandwidth parameter has no impact on this performance. We provide a general description of the new algorithm as well as code for a reference implementation in R.

Econometrics 2017, 5(1), 9; doi: 10.3390/econometrics5010009 (Article, published 2017-01-25). Authors: Jochen Heberle, Cristina Sattarhoff.

Econometrics, Vol. 5, Pages 6: Between Institutions and Global Forces: Norwegian Wage Formation Since Industrialisation
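The FFT idea underlying the HAC algorithm abstract above can be sketched as follows. This is a hypothetical Python illustration, not the authors' R reference implementation: all sample autocovariances of a demeaned score series are computed in O(T log T) with one FFT/inverse-FFT pair (after zero-padding to avoid circular overlap), then combined with Bartlett kernel weights into a scalar HAC variance.

```python
import numpy as np

# Compute all T sample autocovariances of x at once via the FFT.
def autocovariances_fft(x):
    x = np.asarray(x, dtype=float) - np.mean(x)
    T = len(x)
    n = 2 * T                               # zero-pad to length >= 2T
    f = np.fft.rfft(x, n)
    acov = np.fft.irfft(f * np.conj(f), n)[:T] / T
    return acov                             # acov[j] = sum_t x_t x_{t-j} / T

# Bartlett-kernel HAC variance: weighted sum of the autocovariances.
def hac_variance_bartlett(x, bandwidth):
    acov = autocovariances_fft(x)
    w = 1.0 - np.arange(1, bandwidth + 1) / (bandwidth + 1)   # Bartlett weights
    return acov[0] + 2.0 * np.sum(w * acov[1:bandwidth + 1])
```

Note how the FFT step is independent of the bandwidth: all lags are computed in one pass, which is consistent with the abstract's remark that the bandwidth parameter has no impact on the algorithm's speed.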
http://www.mdpi.com/2225-1146/5/1/6
This paper reviews the development of labour market institutions in Norway, shows how labour market regulation has been related to the macroeconomic development, and presents dynamic econometric models of nominal and real wages. Single equation and multi-equation models are reported. The econometric modelling uses a new data set with historical time series of wages and prices, unemployment and labour productivity. Impulse indicator saturation is used to achieve robust estimation of focus parameters, and the breaks are interpreted in the light of the historical overview. A relatively high degree of constancy of the key parameters of the wage setting equation is documented, over a considerably longer historical time period than earlier studies have done. The evidence is consistent with the view that the evolving system of collective labour market regulation over long periods has delivered a certain necessary level of coordination of wage and price setting. Nevertheless, there is also evidence that global forces have been at work for a long time, in a way that links real wages to productivity trends in the same way as in countries with very different institutions and macroeconomic development.

Econometrics 2017, 5(1), 6; doi: 10.3390/econometrics5010006 (Article, published 2017-01-12). Author: Ragnar Nymoen.

Econometrics, Vol. 5, Pages 7: Acknowledgement to Reviewers of Econometrics in 2016
http://www.mdpi.com/2225-1146/5/1/7
The editors of Econometrics would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2016. [...]

Econometrics 2017, 5(1), 7; doi: 10.3390/econometrics5010007 (Editorial, published 2017-01-11). Author: Econometrics Editorial Office.

Econometrics, Vol. 5, Pages 5: Fractional Unit Root Tests Allowing for a Structural Change in Trend under Both the Null and Alternative Hypotheses
http://www.mdpi.com/2225-1146/5/1/5
This paper considers testing procedures for the null hypothesis of a unit root process against the alternative of a fractional process, called a fractional unit root test. We extend the Lagrange Multiplier (LM) tests of Robinson (1994) and Tanaka (1999), which are locally best invariant and uniformly most powerful, to allow for a slope change in trend with or without a concurrent level shift under both the null and alternative hypotheses. We show that the limit distribution of the proposed LM tests is standard normal. Finite sample simulation experiments show that the tests have good size and power. As an empirical analysis, we apply the tests to the Consumer Price Indices of the G7 countries.

Econometrics 2017, 5(1), 5; doi: 10.3390/econometrics5010005 (Article, published 2017-01-08). Authors: Seong Chang, Pierre Perron.

Econometrics, Vol. 5, Pages 1: Business Cycle Estimation with High-Pass and Band-Pass Local Polynomial Regression
http://www.mdpi.com/2225-1146/5/1/1
Filters constructed on the basis of standard local polynomial regression (LPR) methods have been used in the literature to estimate the business cycle. We provide a frequency domain interpretation of the contrast filter obtained by the difference of a series and its long-run LPR component and show that it operates as a kind of high-pass filter, so that it provides a noisy estimate of the cycle. We alternatively propose band-pass local polynomial regression methods aimed at isolating the cyclical component. Results are compared to standard high-pass and band-pass filters. Procedures are illustrated using the US GDP series.

Econometrics 2017, 5(1), 1; doi: 10.3390/econometrics5010001 (Article, published 2017-01-05). Author: Luis Álvarez.

Econometrics, Vol. 5, Pages 4: Consistency of Trend Break Point Estimator with Underspecified Break Number
http://www.mdpi.com/2225-1146/5/1/4
This paper discusses the consistency of trend break point estimators when the number of breaks is underspecified. The consistency of break point estimators in a simple location model with level shifts has been well documented by researchers under various settings, including extensions such as allowing a time trend in the model. Despite the consistency of break point estimators of level shifts, there are few papers on the consistency of trend shift break point estimators in the presence of an underspecified break number. The simulation study and asymptotic analysis in this paper show that the trend shift break point estimator does not converge to the true break points when the break number is underspecified. In the case of two trend shifts, the inconsistency problem worsens if the magnitudes of the breaks are similar and the breaks are either both positive or both negative. The limiting distribution for the trend break point estimator is developed and closely approximates the finite sample performance.

Econometrics 2017, 5(1), 4; doi: 10.3390/econometrics5010004 (Article, published 2017-01-05). Author: Jingjing Yang.

Econometrics, Vol. 5, Pages 3: Regime Switching Vine Copula Models for Global Equity and Volatility Indices
http://www.mdpi.com/2225-1146/5/1/3
For nearly every major stock market there exist equity and implied volatility indices. These play important roles within finance: be it as a benchmark, a measure of general uncertainty or a way of investing or hedging. It is well known in the academic literature that correlations and higher moments between different indices tend to vary in time. However, to the best of our knowledge, no one has yet considered a global setup including both equity and implied volatility indices of various continents, and allowing for a changing dependence structure. We aim to close this gap by applying Markov-switching R-vine models to investigate the existence of different, global dependence regimes. In particular, we identify times of “normal” and “abnormal” states within a data set consisting of North-American, European and Asian indices. Our results confirm the existence of points in time at which global regime switching between two different R-vine structures takes place.

Econometrics 2017, 5(1), 3; doi: 10.3390/econometrics5010003 (Article, published 2017-01-04). Authors: Holger Fink, Yulia Klimova, Claudia Czado, Jakob Stöber.

Econometrics, Vol. 5, Pages 2: Fixed-b Inference for Testing Structural Change in a Time Series Regression
http://www.mdpi.com/2225-1146/5/1/2
This paper addresses tests for structural change in a weakly dependent time series regression. The cases of full structural change and partial structural change are considered. Heteroskedasticity-autocorrelation (HAC) robust Wald tests based on nonparametric covariance matrix estimators are explored. Fixed-b theory is developed for the HAC estimators which allows fixed-b approximations for the test statistics. For the case of the break date being known, the fixed-b limits of the statistics depend on the break fraction and the bandwidth tuning parameter as well as on the kernel. When the break date is unknown, supremum, mean and exponential Wald statistics are commonly used for testing the presence of the structural break. Fixed-b limits of these statistics are obtained and critical values are tabulated. A simulation study compares the finite sample properties of existing tests and proposed tests.

Econometrics 2016, 5(1), 2; doi: 10.3390/econometrics5010002 (Article, published 2016-12-30). Authors: Cheol-Keun Cho, Timothy Vogelsang.

Econometrics, Vol. 4, Pages 50: The Status of Bridge Principles in Applied Econometrics
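The sup-Wald idea from the fixed-b abstract above, for the simplest case of an unknown break in the mean with a trimmed candidate window, can be sketched as follows. This hypothetical illustration uses a plain homoskedastic variance estimator, whereas the paper's statistics use HAC estimators with fixed-b critical values.

```python
import numpy as np

# Sup-Wald scan for a single break in the mean of a series: for each
# candidate break date k in a trimmed range, the Wald statistic tests
# equality of the pre- and post-break means; the supremum over all
# candidates is the test statistic.
def sup_wald_mean_break(y, trim=0.15):
    y = np.asarray(y, dtype=float)
    T = len(y)
    stats = []
    for k in range(int(trim * T), int((1 - trim) * T)):
        m1, m2 = y[:k].mean(), y[k:].mean()
        resid = np.concatenate([y[:k] - m1, y[k:] - m2])
        s2 = resid @ resid / (T - 2)              # pooled residual variance
        var_diff = s2 * (1.0 / k + 1.0 / (T - k))
        stats.append((m2 - m1) ** 2 / var_diff)   # Wald statistic at break k
    return max(stats)
```

Because the supremum is taken over many correlated statistics, its null distribution is nonstandard, which is why tabulated critical values (fixed-b ones in the paper) are needed rather than a chi-squared cutoff.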
http://www.mdpi.com/2225-1146/4/4/50
The paper begins with a figurative representation of the contrast between present-day and formal applied econometrics. An explication of the status of bridge principles in applied econometrics follows. To illustrate the concepts used in the explication, the paper presents a simultaneous-equation model of the equilibrium configurations of a perfectly competitive commodity market. With artificially generated data I carry out two empirical analyses of such a market that contrast the prescriptions of formal econometrics in the tradition of Ragnar Frisch with the commands of present-day econometrics in the tradition of Trygve Haavelmo. At the end I demonstrate that the bridge principles I use in the formal-econometric analysis are valid in the Real World, that is, in the world in which my data reside.

Econometrics 2016, 4(4), 50; doi: 10.3390/econometrics4040050 (Article, published 2016-12-17). Author: Bernt Stigum.

Econometrics, Vol. 4, Pages 49: Testing for the Equality of Integration Orders of Multiple Series
http://www.mdpi.com/2225-1146/4/4/49
Testing for the equality of integration orders is an important topic in time series analysis because it constitutes an essential step in testing for (fractional) cointegration in the bivariate case. For the multivariate case, there are several versions of cointegration, and the version given in Robinson and Yajima (2002) has received much attention. In this definition, a time series vector is partitioned into several sub-vectors, and the elements in each sub-vector have the same integration order. Furthermore, this time series vector is said to be cointegrated if there exists a cointegration in any of the sub-vectors. Under such a circumstance, testing for the equality of integration orders constitutes an important problem. However, for multivariate fractionally integrated series, most tests focus on stationary and invertible series and become invalid under the presence of cointegration. Hualde (2013) overcomes these difficulties with a residual-based test for a bivariate time series. For the multivariate case, one possible extension of this test involves testing for an array of bivariate series, which becomes computationally challenging as the dimension of the time series increases. In this paper, a one-step residual-based test is proposed to deal with the multivariate case that overcomes the computational issue. Under certain regularity conditions, the test statistic has an asymptotic standard normal distribution under the null hypothesis of equal integration orders and diverges to infinity under the alternative. As reported in a Monte Carlo experiment, the proposed test possesses satisfactory sizes and powers.

Econometrics 2016, 4(4), 49; doi: 10.3390/econometrics4040049 (Article, published 2016-12-15). Authors: Man Wang, Ngai Chan.

Econometrics, Vol. 4, Pages 48: Higher Order Bias Correcting Moment Equation for M-Estimation and Its Higher Order Efficiency
http://www.mdpi.com/2225-1146/4/4/48
This paper studies an alternative bias correction for the M-estimator, which is obtained by correcting the moment equations in the spirit of Firth (1993). In particular, this paper compares the stochastic expansions of the analytically-bias-corrected estimator and the alternative estimator and finds that the third-order stochastic expansions of these two estimators are identical. This implies that at least in terms of the third-order stochastic expansion, we cannot improve on the simple one-step bias correction by using the bias correction of moment equations. This finding suggests that the comparison between the one-step bias correction and the method of correcting the moment equations or the fully-iterated bias correction should be based on the stochastic expansions higher than the third order.

Econometrics 2016, 4(4), 48; doi: 10.3390/econometrics4040048 (Article, published 2016-12-08). Author: Kyoo Kim.

Econometrics, Vol. 4, Pages 47: Subset-Continuous-Updating GMM Estimators for Dynamic Panel Data Models
http://www.mdpi.com/2225-1146/4/4/47
The two-step GMM estimators of Arellano and Bond (1991) and Blundell and Bond (1998) for dynamic panel data models have been widely used in empirical work; however, neither of them performs well in small samples with weak instruments. The continuous-updating GMM estimator proposed by Hansen, Heaton, and Yaron (1996) is in principle able to reduce the small-sample bias, but it involves high-dimensional optimizations when the number of regressors is large. This paper proposes a computationally feasible variation on these standard two-step GMM estimators by applying the idea of continuous-updating to the autoregressive parameter only, given the fact that the absolute value of the autoregressive parameter is less than unity as a necessary requirement for the data-generating process to be stationary. We show that our subset-continuous-updating method does not alter the asymptotic distribution of the two-step GMM estimators, and it therefore retains consistency. Our simulation results indicate that the subset-continuous-updating GMM estimators outperform their standard two-step counterparts in finite samples in terms of the estimation accuracy on the autoregressive parameter and the size of the Sargan-Hansen test.

Econometrics 2016, 4(4), 47; doi: 10.3390/econometrics4040047 (Article, published 2016-11-30). Authors: Richard Ashley, Xiaojin Sun.

Econometrics, Vol. 4, Pages 46: Generalized Information Matrix Tests for Detecting Model Misspecification
http://www.mdpi.com/2225-1146/4/4/46
Generalized Information Matrix Tests (GIMTs) have recently been used for detecting the presence of misspecification in regression models in both randomized controlled trials and observational studies. In this paper, a unified GIMT framework is developed for the purpose of identifying, classifying, and deriving novel model misspecification tests for finite-dimensional smooth probability models. These GIMTs include previously published as well as newly developed information matrix tests. To illustrate the application of the GIMT framework, we derived and assessed the performance of new GIMTs for binary logistic regression. Although all GIMTs exhibited good level and power performance for the larger sample sizes, GIMT statistics with fewer degrees of freedom and derived using log-likelihood third derivatives exhibited improved level and power performance.

Econometrics 2016, 4(4), 46; doi: 10.3390/econometrics4040046 (Article, published 2016-11-15). Authors: Richard Golden, Steven Henley, Halbert White, T. Kashner.

Econometrics, Vol. 4, Pages 44: Testing Cross-Sectional Correlation in Large Panel Data Models with Serial Correlation
http://www.mdpi.com/2225-1146/4/4/44
This paper considers the problem of testing cross-sectional correlation in large panel data models with serially-correlated errors. It finds that existing tests for cross-sectional correlation encounter size distortions with serial correlation in the errors. To control the size, this paper proposes a modification of Pesaran’s Cross-sectional Dependence (CD) test to account for serial correlation of an unknown form in the error term. We derive the limiting distribution of this test as N, T → ∞. The test is distribution free and allows for unknown forms of serial correlation in the errors. Monte Carlo simulations show that the test has good size and power for large panels when serial correlation in the errors is present.

Econometrics 2016, 4(4), 44; doi: 10.3390/econometrics4040044 (Article, published 2016-11-04). Authors: Badi Baltagi, Chihwa Kao, Bin Peng.

Econometrics, Vol. 4, Pages 45: Panel Cointegration Testing in the Presence of Linear Time Trends
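Pesaran's CD statistic, the starting point of the cross-sectional correlation abstract above, can be sketched in a few lines: average the pairwise correlations of the residuals across cross-sectional units and scale so that the statistic is asymptotically standard normal under independence. The function name is illustrative, and the paper's serial-correlation-robust modification is not shown.

```python
import numpy as np

# Pesaran's CD statistic from a T x N residual matrix (column i holds the
# residuals of unit i). Under cross-sectional independence (and absent
# serial correlation), CD is asymptotically N(0, 1).
def cd_statistic(resid):
    e = np.asarray(resid, dtype=float)
    T, N = e.shape
    corr = np.corrcoef(e, rowvar=False)       # N x N pairwise correlations
    upper = corr[np.triu_indices(N, k=1)]     # rho_ij for i < j
    return np.sqrt(2.0 * T / (N * (N - 1))) * upper.sum()
```

The size distortion the paper documents arises because serial correlation in each unit's errors changes the variance of the pairwise correlations, so the scaling above is no longer correct; the proposed modification adjusts for this.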
http://www.mdpi.com/2225-1146/4/4/45
We consider a class of panel tests of the null hypothesis of no cointegration and cointegration. All tests under investigation rely on single equations estimated by least squares, and they may be residual-based or not. We focus on test statistics computed from regressions with intercept only (i.e., without detrending) and with at least one of the regressors (integrated of order 1) being dominated by a linear time trend. In such a setting, often encountered in practice, the limiting distributions and critical values provided for and applied with the situation “with intercept only” are not correct. It is demonstrated that their usage results in size distortions growing with the panel size N. Moreover, we show what the appropriate distributions are, and how correct critical values can be obtained from the literature.Econometrics2016-11-0144Article10.3390/econometrics4040045452225-11462016-11-01doi: 10.3390/econometrics4040045Uwe HasslerMehdi Hosseinkouchack<![CDATA[Econometrics, Vol. 4, Pages 43: Pair-Copula Constructions for Financial Applications: A Review]]>
http://www.mdpi.com/2225-1146/4/4/43
This survey reviews the large and growing literature on the use of pair-copula constructions (PCCs) in financial applications. Using a PCC, multivariate data that exhibit complex patterns of dependence can be modeled using bivariate copulae as simple building blocks. Hence, this model represents a very flexible way of constructing higher-dimensional copulae. In this paper, we survey inference methods and goodness-of-fit tests for such models, as well as empirical applications of the PCCs in finance and economics.Econometrics2016-10-2944Review10.3390/econometrics4040043432225-11462016-10-29doi: 10.3390/econometrics4040043Kjersti Aas<![CDATA[Econometrics, Vol. 4, Pages 41: Oil Price and Economic Growth: A Long Story?]]>
http://www.mdpi.com/2225-1146/4/4/41
This study investigates changes in the relationship between oil prices and the US economy from a long-term perspective. Although neither of the two series (oil price and GDP growth rates) presents structural breaks in mean, we identify different volatility periods in both of them, separately. From a multivariate perspective, we do not observe a significant effect between changes in oil prices and GDP growth when considering the full period. However, we find a significant relationship in some subperiods by carrying out a rolling analysis and by investigating the presence of structural breaks in the multivariate framework. Finally, we obtain evidence, by means of a time-varying VAR, that the impact of the oil price shock on GDP growth has declined over time. We also observe that the negative effect is greater at the time of large oil price increases, supporting previous evidence of nonlinearity in the relationship.Econometrics2016-10-2844Article10.3390/econometrics4040041412225-11462016-10-28doi: 10.3390/econometrics4040041María GadeaAna Gómez-LoscosAntonio Montañés<![CDATA[Econometrics, Vol. 4, Pages 42: Social Networks and Choice Set Formation in Discrete Choice Models]]>
http://www.mdpi.com/2225-1146/4/4/42
The discrete choice literature has evolved from the analysis of a choice of a single item from a fixed choice set to the incorporation of a vast array of more complex representations of preferences and choice set formation processes into choice models. Modern discrete choice models include rich specifications of heterogeneity, multi-stage processing for choice set determination, dynamics, and other elements. However, discrete choice models still largely represent socially isolated choice processes — individuals are not affected by the preferences or choices of other individuals. There is a developing literature on the impact of social networks on preferences or the utility function in a random utility model, but little examination of such processes for choice set formation. There is also emerging evidence in the marketplace of the influence of friends on choice sets and choices. In this paper, we develop discrete choice models that incorporate formal social network structures into the choice set formation process in a two-stage random utility framework. We assess models where peers may affect not only the alternatives that individuals consider or include in their choice sets, but also consumption choices. We explore the properties of our models and evaluate the extent of “errors” in assessment of preferences, economic welfare measures and market shares if network effects are present, but are not accounted for in the econometric model. Our results shed light on the importance of the evaluation of peer or network effects on inclusion/exclusion of alternatives in a random utility choice framework.Econometrics2016-10-2744Article10.3390/econometrics4040042422225-11462016-10-27doi: 10.3390/econometrics4040042Bruno WichmannMinjie ChenWiktor Adamowicz<![CDATA[Econometrics, Vol. 4, Pages 40: Editorial Announcement]]>
http://www.mdpi.com/2225-1146/4/4/40
I am pleased to announce that, following my retirement on the 30th September 2016, Marc Paolella will become Editor-in-Chief (EiC) of Econometrics.Econometrics2016-10-1044Editorial10.3390/econometrics4040040402225-11462016-10-10doi: 10.3390/econometrics4040040Kerry Patterson<![CDATA[Econometrics, Vol. 4, Pages 39: Estimation of Dynamic Panel Data Models with Stochastic Volatility Using Particle Filters]]>
http://www.mdpi.com/2225-1146/4/4/39
Time-varying volatility is common in macroeconomic data and has been incorporated into macroeconomic models in recent work. Dynamic panel data models have become increasingly popular in macroeconomics to study common relationships across countries or regions. This paper estimates dynamic panel data models with stochastic volatility by maximizing an approximate likelihood obtained via Rao-Blackwellized particle filters. Monte Carlo studies reveal the good and stable performance of our particle filter-based estimator. When the volatility of volatility is high, or when regressors are absent but stochastic volatility exists, our approach can outperform both the maximum likelihood estimator that neglects stochastic volatility and generalized method of moments (GMM) estimators.Econometrics2016-10-0944Article10.3390/econometrics4040039392225-11462016-10-09doi: 10.3390/econometrics4040039Wen Xu<![CDATA[Econometrics, Vol. 4, Pages 38: Econometric Information Recovery in Behavioral Networks]]>
http://www.mdpi.com/2225-1146/4/3/38
In this paper, we suggest an approach to recovering behavior-related, preference-choice network information from observational data. We model the process as a self-organized, behavior-based random exponential network-graph system. To address the unknown nature of the sampling model in recovering behavior-related network information, we use the Cressie-Read (CR) family of divergence measures and the corresponding information-theoretic entropy basis, for estimation, inference, model evaluation, and prediction. Examples are included to clarify how entropy-based information-theoretic methods are directly applicable to recovering the behavioral network probabilities in this fundamentally underdetermined, ill-posed inverse recovery problem.Econometrics2016-09-1443Article10.3390/econometrics4030038382225-11462016-09-14doi: 10.3390/econometrics4030038George Judge<![CDATA[Econometrics, Vol. 4, Pages 37: Generalized Fractional Processes with Long Memory and Time Dependent Volatility Revisited]]>
http://www.mdpi.com/2225-1146/4/3/37
In recent years, fractionally-differenced processes have received a great deal of attention due to their flexibility in financial applications with long memory. This paper revisits the class of generalized fractionally-differenced processes generated by Gegenbauer polynomials and the ARMA structure (GARMA) with both long memory and time-dependent innovation variance. We establish the existence and uniqueness of second-order solutions. We also extend this family by allowing the innovations to follow GARCH and stochastic volatility (SV) processes. Under certain regularity conditions, we give asymptotic results for the approximate maximum likelihood estimator for the GARMA-GARCH model. We discuss a Monte Carlo likelihood method for the GARMA-SV model and investigate finite sample properties via Monte Carlo experiments. Finally, we illustrate the usefulness of this approach using monthly inflation rates for France, Japan and the United States.Econometrics2016-09-0543Article10.3390/econometrics4030037372225-11462016-09-05doi: 10.3390/econometrics4030037M. PeirisManabu Asai<![CDATA[Econometrics, Vol. 4, Pages 36: Nonparametric Regression with Common Shocks]]>
http://www.mdpi.com/2225-1146/4/3/36
This paper considers a nonparametric regression model for cross-sectional data in the presence of common shocks. Common shocks are allowed to be very general in nature; they do not need to be finite dimensional with a known (small) number of factors. I investigate the properties of the Nadaraya-Watson kernel estimator and determine how general the common shocks can be while still obtaining meaningful kernel estimates. Restrictions on the common shocks are necessary because kernel estimators typically manipulate conditional densities, and conditional densities do not necessarily exist in the present case. By appealing to disintegration theory, I provide sufficient conditions for the existence of such conditional densities and show that the estimator converges in probability to the Kolmogorov conditional expectation given the sigma-field generated by the common shocks. I also establish the rate of convergence and the asymptotic distribution of the kernel estimator.Econometrics2016-09-0143Article10.3390/econometrics4030036362225-11462016-09-01doi: 10.3390/econometrics4030036Eduardo Souza-Rodrigues<![CDATA[Econometrics, Vol. 4, Pages 35: Special Issues of Econometrics: Celebrated Econometricians]]>
http://www.mdpi.com/2225-1146/4/3/35
Econometrics is pleased to announce the commissioning of a new series of Special Issues dedicated to celebrated econometricians of our time.[...]Econometrics2016-08-1743Editorial10.3390/econometrics4030035352225-11462016-08-17doi: 10.3390/econometrics4030035 <em>Econometrics</em> Editorial Office<![CDATA[Econometrics, Vol. 4, Pages 34: Jump Variation Estimation with Noisy High Frequency Financial Data via Wavelets]]>
http://www.mdpi.com/2225-1146/4/3/34
This paper develops a method to improve the estimation of jump variation using high frequency data in the presence of market microstructure noise. Accurate estimation of jump variation is in high demand, as it is an important component of volatility in finance for portfolio allocation, derivative pricing and risk management. The method has a two-step procedure with detection and estimation. In Step 1, we detect the jump locations by performing wavelet transformation on the observed noisy price processes. Since wavelet coefficients are significantly larger at the jump locations than elsewhere, we calibrate the wavelet coefficients through a threshold and declare jump points if the absolute wavelet coefficients exceed the threshold. In Step 2, we estimate the jump variation by averaging noisy price processes at each side of a declared jump point and then taking the difference between the two averages. Specifically, for each jump location detected in Step 1, we get two averages from the observed noisy price processes, one before the detected jump location and one after it, and then take their difference to estimate the jump variation. Theoretically, we show that the two-step procedure based on average realized volatility processes can achieve a convergence rate close to O_P(n^{-4/9}), which is better than the convergence rate O_P(n^{-1/4}) for the procedure based on the original noisy process, where n is the sample size. Numerically, the method based on average realized volatility processes indeed performs better than that based on the price processes. Empirically, we study the distribution of jump variation using Dow Jones Industrial Average stocks and compare the results using the original price process and the average realized volatility processes.Econometrics2016-08-1643Article10.3390/econometrics4030034342225-11462016-08-16doi: 10.3390/econometrics4030034Xin ZhangDonggyu KimYazhen Wang<![CDATA[Econometrics, Vol.
4, Pages 33: Econometrics Best Paper Award 2016]]>
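The detect-then-estimate idea in the jump variation paper above can be illustrated with a toy version: a Haar-type statistic (difference of local averages) stands in for the wavelet transform, applied to a pure noise-plus-jump path. The paper works with averaged realized volatility processes and a full wavelet decomposition; all names and tuning constants here are assumptions.

```python
import numpy as np

def detect_jumps(p, win=25, k=5.0):
    """Step 1: flag t where the Haar-type statistic
       stat[t] = mean(p[t:t+win]) - mean(p[t-win:t])
    exceeds a threshold built from a robust (MAD-based) scale of the
    first differences.  Step 2: within each run of flagged points, take
    the argmax of |stat| as the jump location and stat there as the
    estimated jump size (difference of the two side averages)."""
    n = len(p)
    stat = np.zeros(n)
    for t in range(win, n - win):
        stat[t] = p[t:t + win].mean() - p[t - win:t].mean()
    scale = np.median(np.abs(np.diff(p))) / 0.6745   # robust scale of increments
    flagged = np.abs(stat) > k * scale * np.sqrt(2.0 / win)
    jumps, t = [], 0
    while t < n:
        if flagged[t]:
            start = t
            while t < n and flagged[t]:
                t += 1
            loc = start + int(np.argmax(np.abs(stat[start:t])))
            jumps.append((loc, stat[loc]))
        else:
            t += 1
    return jumps

# Pure microstructure noise plus a single jump of size 1 at t = 500.
rng = np.random.default_rng(3)
p = 0.01 * rng.standard_normal(1000)
p[500:] += 1.0
jumps = detect_jumps(p)
```

Averaging over `win` observations on each side shrinks the noise in the size estimate by roughly a factor of sqrt(win), which is the intuition behind the improved convergence rate in the paper.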
http://www.mdpi.com/2225-1146/4/3/33
n/aEconometrics2016-08-0143Editorial10.3390/econometrics4030033332225-11462016-08-01doi: 10.3390/econometrics4030033Kerry Patterson<![CDATA[Econometrics, Vol. 4, Pages 32: Measuring the Distance between Sets of ARMA Models]]>
http://www.mdpi.com/2225-1146/4/3/32
A distance between pairs of sets of autoregressive moving average (ARMA) processes is proposed. Its main properties are discussed. The paper also shows how the proposed distance finds application in time series analysis. In particular it can be used to evaluate the distance between portfolios of ARMA models or the distance between vector autoregressive (VAR) models.Econometrics2016-07-1543Article10.3390/econometrics4030032322225-11462016-07-15doi: 10.3390/econometrics4030032Umberto Triacca<![CDATA[Econometrics, Vol. 4, Pages 31: Market Microstructure Effects on Firm Default Risk Evaluation]]>
http://www.mdpi.com/2225-1146/4/3/31
Default probability is a fundamental variable determining the creditworthiness of a firm and equity volatility estimation plays a key role in its evaluation. Assuming a structural credit risk modeling approach, we study the impact of choosing different non-parametric equity volatility estimators on default probability evaluation, when market microstructure noise is considered. A general stochastic volatility framework with jumps for the underlying asset dynamics is defined inside a Merton-like structural model. To estimate the volatility risk component of a firm we use high-frequency equity data: market microstructure noise is introduced as a direct effect of observing noisy high-frequency equity prices. A Monte Carlo simulation analysis is conducted to (i) test the performance of alternative non-parametric equity volatility estimators in their capability of filtering out the microstructure noise and backing out the true unobservable asset volatility; (ii) study the effects of different non-parametric estimation techniques on default probability evaluation. The impact of the non-parametric volatility estimators on risk evaluation is not negligible: a sensitivity analysis defined for alternative values of the leverage parameter and average jumps size reveals that the characteristics of the dataset are crucial in determining the proper estimator to consider from a credit risk perspective.Econometrics2016-07-0843Article10.3390/econometrics4030031312225-11462016-07-08doi: 10.3390/econometrics4030031Flavia BarsottiSimona Sanfelici<![CDATA[Econometrics, Vol. 4, Pages 30: Estimation of Gini Index within Pre-Specified Error Bound]]>
http://www.mdpi.com/2225-1146/4/3/30
The Gini index is a widely used measure of economic inequality. This article develops a theory and methodology for constructing a confidence interval for the Gini index with a specified confidence coefficient and a specified width without assuming any specific distribution of the data. Fixed sample size methods cannot simultaneously achieve both specified confidence coefficient and fixed width. We develop a purely sequential procedure for interval estimation of the Gini index with a specified confidence coefficient and a specified margin of error. Optimality properties of the proposed method, namely first-order asymptotic efficiency and asymptotic consistency, are proved under mild moment assumptions on the distribution of the data.Econometrics2016-06-2443Article10.3390/econometrics4030030302225-11462016-06-24doi: 10.3390/econometrics4030030Bhargab ChattopadhyayShyamal De<![CDATA[Econometrics, Vol. 4, Pages 29: Evaluating Eigenvector Spatial Filter Corrections for Omitted Georeferenced Variables]]>
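A simplified sketch of a purely sequential interval for the Gini index: keep sampling until a normal-approximation 95% interval reaches the prescribed width, here with a jackknife standard error standing in for the asymptotic variance. This is only an illustration under assumed names; the paper's procedure and its optimality proofs are more refined.

```python
import numpy as np

def gini(x):
    """Sample Gini index via the sorted-data formula
    G = sum_i (2i - n - 1) x_(i) / (n^2 * mean(x))."""
    xs = np.sort(np.asarray(x, dtype=float))
    n = len(xs)
    idx = np.arange(1, n + 1)
    return np.sum((2 * idx - n - 1) * xs) / (n * n * xs.mean())

def sequential_gini_ci(draw, half_width=0.05, n0=50, step=25, max_n=5000):
    """Sample until the 95% interval half-width z*se <= half_width,
    with se estimated by the leave-one-out jackknife."""
    z = 1.959964  # 95% normal quantile
    x = draw(n0)
    while True:
        n = len(x)
        g = gini(x)
        loo = np.array([gini(np.delete(x, i)) for i in range(n)])
        se = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
        if z * se <= half_width or n >= max_n:
            return g, (g - z * se, g + z * se), n
        x = np.concatenate([x, draw(step)])   # draw `step` more observations

# Exponential incomes have true Gini index 1/2.
rng = np.random.default_rng(2)
g, (lo, hi), n = sequential_gini_ci(lambda m: rng.exponential(size=m))
```

The stopping rule itself determines the final sample size, which is the defining feature of a purely sequential procedure.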
http://www.mdpi.com/2225-1146/4/2/29
The Ramsey regression equation specification error test (RESET) furnishes a diagnostic for omitted variables in a linear regression model specification (i.e., the null hypothesis is no omitted variables). Integer powers of fitted values from a regression analysis are introduced as additional covariates in a second regression analysis. The former regression model can be considered restricted, whereas the latter model can be considered unrestricted; this first model is nested within this second model. A RESET significance test is conducted with an F-test using the error sums of squares and the degrees of freedom for the two models. For georeferenced data, eigenvectors can be extracted from a modified spatial weights matrix, and included in a linear regression model specification to account for the presence of nonzero spatial autocorrelation. The intuition underlying this methodology is that these synthetic variates function as surrogates for omitted variables. Accordingly, a restricted regression model without eigenvectors should indicate an omitted variables problem, whereas an unrestricted regression model with eigenvectors should result in a failure to reject the RESET null hypothesis. This paper furnishes eleven empirical examples, covering a wide range of spatial attribute data types, that illustrate the effectiveness of eigenvector spatial filtering in addressing the omitted variables problem for georeferenced data as measured by the RESET.Econometrics2016-06-2142Article10.3390/econometrics4020029292225-11462016-06-21doi: 10.3390/econometrics4020029Daniel GriffithYongwan Chun<![CDATA[Econometrics, Vol. 4, Pages 28: Testing Symmetry of Unknown Densities via Smoothing with the Generalized Gamma Kernels]]>
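The RESET construction described in the abstract above can be sketched directly: fit the restricted model, augment it with integer powers of the fitted values, and form the F statistic from the two error sums of squares. Illustrative code, not from the paper; the statistic is compared against an F(q, n − k − q) distribution.

```python
import numpy as np

def reset_F(y, X, powers=(2, 3)):
    """Ramsey RESET F statistic:
      F = ((SSR_r - SSR_u) / q) / (SSR_u / (n - k - q)),
    where the unrestricted model adds yhat^p, p in `powers`,
    to the restricted regressors X."""
    n, k = X.shape
    yhat = X @ np.linalg.lstsq(X, y, rcond=None)[0]
    ssr_r = np.sum((y - yhat) ** 2)                       # restricted SSR
    Z = np.column_stack([X] + [yhat ** p for p in powers])
    resid_u = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    ssr_u = np.sum(resid_u ** 2)                          # unrestricted SSR
    q = len(powers)
    return ((ssr_r - ssr_u) / q) / (ssr_u / (n - k - q))

rng = np.random.default_rng(1)
x = rng.standard_normal(500)
X = np.column_stack([np.ones(500), x])
eps = rng.standard_normal(500)

F_lin = reset_F(1.0 + 2.0 * x + eps, X)            # correct specification: small F
F_quad = reset_F(1.0 + 2.0 * x + x**2 + eps, X)    # omitted quadratic: large F
```

In the eigenvector-filtering application, the same comparison is run with and without the spatial eigenvectors among the regressors, so a failure to reject after filtering is evidence that the eigenvectors absorb the omitted georeferenced variables.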
http://www.mdpi.com/2225-1146/4/2/28
This paper improves a kernel-smoothed test of symmetry through combining it with a new class of asymmetric kernels called the generalized gamma kernels. It is demonstrated that the improved test statistic has a normal limit under the null of symmetry and is consistent under the alternative. A test-oriented smoothing parameter selection method is also proposed to implement the test. Monte Carlo simulations indicate superior finite-sample performance of the test statistic. It is worth emphasizing that the performance is grounded on the first-order normal limit and a small number of observations, despite a nonparametric convergence rate and a sample-splitting procedure of the test.Econometrics2016-06-1742Article10.3390/econometrics4020028282225-11462016-06-17doi: 10.3390/econometrics4020028Masayuki HirukawaMari Sakudo<![CDATA[Econometrics, Vol. 4, Pages 26: Removing Specification Errors from the Usual Formulation of Binary Choice Models]]>
http://www.mdpi.com/2225-1146/4/2/26
We develop a procedure for removing four major specification errors from the usual formulation of binary choice models. The model that results from this procedure is different from the conventional probit and logit models. This difference arises as a direct consequence of our relaxation of the usual assumption that omitted regressors constituting the error term of a latent linear regression model do not introduce omitted regressor biases into the coefficients of the included regressors.Econometrics2016-06-0342Article10.3390/econometrics4020026262225-11462016-06-03doi: 10.3390/econometrics4020026P.A.V.B. SwamyI-Lok ChangJatinder MehtaWilliam GreeneStephen HallGeorge Tavlas<![CDATA[Econometrics, Vol. 4, Pages 27: Continuous and Jump Betas: Implications for Portfolio Diversification]]>
http://www.mdpi.com/2225-1146/4/2/27
Using high-frequency data, we decompose the time-varying beta for stocks into beta for continuous systematic risk and beta for discontinuous systematic risk. Estimated discontinuous betas for S&P 500 constituents between 2003 and 2011 generally exceed the corresponding continuous betas. We demonstrate how continuous and discontinuous betas decrease with portfolio diversification. Using an equiweighted broad market index, we assess the speed of convergence of continuous and discontinuous betas in portfolios of stocks as the number of holdings increases. We show that discontinuous risk dissipates faster with fewer stocks in a portfolio compared to its continuous counterpart.Econometrics2016-06-0142Article10.3390/econometrics4020027272225-11462016-06-01doi: 10.3390/econometrics4020027Vitali AlexeevMardi DungeyWenying Yao<![CDATA[Econometrics, Vol. 4, Pages 25: Stable-GARCH Models for Financial Returns: Fast Estimation and Tests for Stability]]>
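The continuous/discontinuous beta decomposition can be illustrated with a toy truncation rule: classify market return increments beyond a robust threshold as jumps, then compute separate realized betas from the jump and non-jump increments. The paper's estimators are more refined; all constants and loadings below are assumptions for the simulation.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10_000
dt = 1.0 / n
rm_cont = 0.2 * np.sqrt(dt) * rng.standard_normal(n)   # continuous market part
jump_idx = np.arange(500, n, 1000)                     # 10 deterministic jump times
rm_jump = np.zeros(n)
rm_jump[jump_idx] = 0.05 * (-1.0) ** np.arange(len(jump_idx))
rm = rm_cont + rm_jump                                 # observed market returns

# Stock loads 0.8 on continuous risk and 1.5 on jump risk, plus idiosyncratic noise.
r = 0.8 * rm_cont + 1.5 * rm_jump + 0.1 * np.sqrt(dt) * rng.standard_normal(n)

# Truncation: increments beyond 4 robust (MAD-based) sd's count as jumps.
sd = np.median(np.abs(rm)) / 0.6745
is_jump = np.abs(rm) > 4 * sd

# Realized betas on the two sets of increments.
beta_c = np.sum(r[~is_jump] * rm[~is_jump]) / np.sum(rm[~is_jump] ** 2)
beta_j = np.sum(r[is_jump] * rm[is_jump]) / np.sum(rm[is_jump] ** 2)
```

With many continuous increments but only a handful of jump increments, `beta_j` is estimated from far fewer observations, which mirrors why jump betas behave differently under diversification.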
http://www.mdpi.com/2225-1146/4/2/25
A fast method for estimating the parameters of a stable-APARCH not requiring likelihood or iteration is proposed. Several powerful tests for the (asymmetric) stable Paretian distribution with tail index 1 < α < 2 are used for assessing the appropriateness of the stable assumption as the innovations process in stable-GARCH-type models for daily stock returns. Overall, there is strong evidence against the stable as the correct innovations assumption for all stocks and time periods, though for many stocks and windows of data, the stable hypothesis is not rejected.Econometrics2016-05-0542Article10.3390/econometrics4020025252225-11462016-05-05doi: 10.3390/econometrics4020025Marc Paolella<![CDATA[Econometrics, Vol. 4, Pages 24: Bayesian Bandwidth Selection for a Nonparametric Regression Model with Mixed Types of Regressors]]>
http://www.mdpi.com/2225-1146/4/2/24
This paper develops a sampling algorithm for bandwidth estimation in a nonparametric regression model with continuous and discrete regressors under an unknown error density. The error density is approximated by the kernel density estimator of the unobserved errors, while the regression function is estimated using the Nadaraya-Watson estimator admitting continuous and discrete regressors. We derive an approximate likelihood and posterior for bandwidth parameters, followed by a sampling algorithm. Simulation results show that the proposed approach typically leads to better accuracy of the resulting estimates than cross-validation, particularly for smaller sample sizes. This bandwidth estimation approach is applied to a nonparametric regression model of the Australian All Ordinaries returns and to kernel density estimation of gross domestic product (GDP) growth rates among Organisation for Economic Co-operation and Development (OECD) and non-OECD countries.Econometrics2016-04-2242Article10.3390/econometrics4020024242225-11462016-04-22doi: 10.3390/econometrics4020024Xibin ZhangMaxwell KingHan Shang<![CDATA[Econometrics, Vol. 4, Pages 23: Building a Structural Model: Parameterization and Structurality]]>
http://www.mdpi.com/2225-1146/4/2/23
A specific concept of structural model is used as a background for discussing the structurality of its parameterization. Conditions for a structural model to be also causal are examined. Difficulties and pitfalls arising from the parameterization are analyzed. In particular, pitfalls when considering alternative parameterizations of the same model are shown to have led to ungrounded conclusions in the literature. Discussions of observationally equivalent models related to different economic mechanisms are used to make clear the connection between an economically meaningful parameterization and an economically meaningful decomposition of a complex model. The design of economic policy is used for drawing some practical implications of the proposed analysis.Econometrics2016-04-1242Article10.3390/econometrics4020023232225-11462016-04-12doi: 10.3390/econometrics4020023Michel MouchartRenzo Orsi<![CDATA[Econometrics, Vol. 4, Pages 22: Distribution of Budget Shares for Food: An Application of Quantile Regression to Food Security]]>
http://www.mdpi.com/2225-1146/4/2/22
This study examines, using quantile regression, the linkage between food security and efforts to enhance smallholder coffee producer incomes in Rwanda. Even though smallholder coffee producer incomes in Rwanda have increased, inhabitants of these areas still experience stunting and wasting. This study examines whether the distribution of the income elasticity for food is the same for coffee and noncoffee growing provinces. We find that the share of expenditures on food is statistically different in coffee growing and noncoffee growing provinces. Thus, the increase in expenditure on food is smaller for coffee growing provinces than for noncoffee growing provinces.Econometrics2016-04-0842Article10.3390/econometrics4020022222225-11462016-04-08doi: 10.3390/econometrics4020022Charles MossJames OehmkeAlexandre LyambabajeAndrew Schmitz<![CDATA[Econometrics, Vol. 4, Pages 21: Unit Root Tests: The Role of the Univariate Models Implied by Multivariate Time Series]]>
http://www.mdpi.com/2225-1146/4/2/21
In cointegration analysis, it is customary to test the hypothesis of unit roots separately for each single time series. In this note, we point out that this procedure may imply large size distortion of the unit root tests if the DGP is a VAR. It is well-known that univariate models implied by a VAR data generating process necessarily have a finite order MA component. This feature may explain why an MA component has often been found in univariate ARIMA models for economic time series. Thereby, it has important implications for unit root tests in univariate settings given the well-known size distortion of popular unit root tests in the presence of a large negative coefficient in the MA component. In a small simulation experiment, considering several popular unit root tests and the ADF sieve bootstrap unit root tests, we find that, besides the well-known size distortion effect, there can be substantial differences in size distortion according to which univariate time series is tested for the presence of a unit root.Econometrics2016-04-0742Article10.3390/econometrics4020021212225-11462016-04-07doi: 10.3390/econometrics4020021Nunzio CappuccioDiego Lubian<![CDATA[Econometrics, Vol. 4, Pages 20: Recovering the Most Entropic Copulas from Preliminary Knowledge of Dependence]]>
http://www.mdpi.com/2225-1146/4/2/20
This paper provides a new approach to recover relative entropy measures of contemporaneous dependence from limited information by constructing the most entropic copula (MEC) and its canonical form, namely the most entropic canonical copula (MECC). The MECC can effectively be obtained by maximizing Shannon entropy to yield a proper copula such that known dependence structures of data (e.g., measures of association) are matched to their empirical counterparts. In fact the problem of maximizing the entropy of copulas is the dual to the problem of minimizing the Kullback-Leibler cross entropy (KLCE) of joint probability densities when the marginal probability densities are fixed. Our simulation study shows that the proposed MEC estimator can potentially outperform many other copula estimators in finite samples.Econometrics2016-03-2942Article10.3390/econometrics4020020202225-11462016-03-29doi: 10.3390/econometrics4020020Ba ChuStephen Satchell<![CDATA[Econometrics, Vol. 4, Pages 19: A Method for Measuring Treatment Effects on the Treated without Randomization]]>
http://www.mdpi.com/2225-1146/4/2/19
This paper contributes to the literature on the estimation of causal effects by providing an analytical formula for individual specific treatment effects and an empirical methodology that allows us to estimate these effects. We derive the formula from a general model with minimal restrictions, unknown functional form and true unobserved variables such that it is a credible model of the underlying real world relationship. Subsequently, we manipulate the model in order to put it in an estimable form. In contrast to other empirical methodologies, which derive average treatment effects, we derive an analytical formula that provides estimates of the treatment effects on each treated individual. We also provide an empirical example that illustrates our methodology.Econometrics2016-03-2542Article10.3390/econometrics4020019192225-11462016-03-25doi: 10.3390/econometrics4020019P.A.V.B. SwamyStephen HallGeorge TavlasI-Lok ChangHeather GibsonWilliam GreeneJatinder Mehta<![CDATA[Econometrics, Vol. 4, Pages 17: Bayesian Calibration of Generalized Pools of Predictive Distributions]]>
http://www.mdpi.com/2225-1146/4/1/17
Decision-makers often consult different experts to build reliable forecasts on variables of interest. Combining more opinions and calibrating them to maximize the forecast accuracy is consequently a crucial issue in several economic problems. This paper applies a Bayesian beta mixture model to derive a combined and calibrated density function using random calibration functionals and random combination weights. In particular, it compares the application of linear, harmonic and logarithmic pooling in the Bayesian combination approach. The three combination schemes, i.e., linear, harmonic and logarithmic, are studied in simulation examples with multimodal densities and an empirical application with a large database of stock data. All of the experiments show that in a beta mixture calibration framework, the three combination schemes are substantially equivalent, achieving calibration, and no clear preference for one of them appears. The financial application shows that the linear pooling together with beta mixture calibration achieves the best results in terms of calibrated forecast.Econometrics2016-03-1641Article10.3390/econometrics4010017172225-11462016-03-16doi: 10.3390/econometrics4010017Roberto CasarinGiulia MantoanFrancesco Ravazzolo<![CDATA[Econometrics, Vol. 4, Pages 16: The Evolving Transmission of Uncertainty Shocks in the United Kingdom]]>
http://www.mdpi.com/2225-1146/4/1/16
This paper investigates whether the impact of uncertainty shocks on the U.K. economy has changed over time. To this end, we propose an extended time-varying VAR model that simultaneously allows the estimation of a measure of uncertainty and its time-varying impact on key macroeconomic and financial variables. We find that the impact of uncertainty shocks on these variables has declined over time. The timing of the change coincides with the introduction of inflation targeting in the U.K.Econometrics2016-03-1441Article10.3390/econometrics4010016162225-11462016-03-14doi: 10.3390/econometrics4010016Haroon Mumtaz<![CDATA[Econometrics, Vol. 4, Pages 15: Timing Foreign Exchange Markets]]>
http://www.mdpi.com/2225-1146/4/1/15
To improve short-horizon exchange rate forecasts, we employ foreign exchange market risk factors as fundamentals, and Bayesian treed Gaussian process (BTGP) models to handle non-linear, time-varying relationships between these fundamentals and exchange rates. Forecasts from the BTGP model conditional on the carry and dollar factors dominate random walk forecasts on accuracy and economic criteria in the Meese-Rogoff setting. Superior market timing ability for large moves, more than directional accuracy, drives the BTGP’s success. We explain how, through a model averaging Monte Carlo scheme, the BTGP is able to simultaneously exploit smoothness and rough breaks in between-variable dynamics. Either feature in isolation is unable to consistently outperform benchmarks throughout the full span of time in our forecasting exercises. Trading strategies based on ex ante BTGP forecasts deliver the highest out-of-sample risk-adjusted returns for the median currency, as well as for both predictable, traded risk factors.Econometrics2016-03-1141Article10.3390/econometrics4010015152225-11462016-03-11doi: 10.3390/econometrics4010015Samuel MaloneRobert GramacyEnrique ter Horst<![CDATA[Econometrics, Vol. 4, Pages 14: Return and Risk of Pairs Trading Using a Simulation-Based Bayesian Procedure for Predicting Stable Ratios of Stock Prices]]>
http://www.mdpi.com/2225-1146/4/1/14
We investigate the direct connection between the uncertainty related to estimated stable ratios of stock prices and risk and return of two pairs trading strategies: a conditional statistical arbitrage method and an implicit arbitrage one. A simulation-based Bayesian procedure is introduced for predicting stable stock price ratios, defined in a cointegration model. Using this class of models and the proposed inferential technique, we are able to connect estimation and model uncertainty with risk and return of stock trading. In terms of methodology, we show the effect that using an encompassing prior, which is shown to be equivalent to a Jeffreys’ prior, has under an orthogonal normalization for the selection of pairs of cointegrated stock prices and further, its effect for the estimation and prediction of the spread between cointegrated stock prices. We distinguish between models with a normal and Student t distribution since the latter typically provides a better description of daily changes of prices on financial markets. As an empirical application, stocks are used that are ingredients of the Dow Jones Composite Average index. The results show that normalization has little effect on the selection of pairs of cointegrated stocks on the basis of Bayes factors. However, the results stress the importance of the orthogonal normalization for the estimation and prediction of the spread—the deviation from the equilibrium relationship—which leads to better results in terms of profit per capital engagement and risk than using a standard linear normalization.
Econometrics 2016, 4(1), 14; ISSN 2225-1146; published 2016-03-10; doi: 10.3390/econometrics4010014. Authors: David Ardia, Lukasz Gatarek, Lennart Hoogerheide, Herman van Dijk.
<![CDATA[Econometrics, Vol. 4, Pages 13: Bayesian Nonparametric Measurement of Factor Betas and Clustering with Application to Hedge Fund Returns]]>
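The pairs trading ingredients just described (a stable price ratio and the resulting spread) can be sketched in a few lines. This is a minimal frequentist stand-in using a plain OLS linear normalization; the abstract's simulation-based Bayesian procedure and its orthogonal normalization are not reproduced here, and the z-score entry rule is a generic illustration, not the paper's strategy.

```python
import numpy as np

def cointegration_spread(p1, p2):
    """OLS estimate of gamma in p1 = gamma * p2 + spread (linear normalization)."""
    gamma = float(np.dot(p2, p1) / np.dot(p2, p2))
    return gamma, p1 - gamma * p2

def pairs_signal(spread, k=2.0):
    """Classic z-score rule: short the spread above +k sd, long below -k sd."""
    z = (spread - spread.mean()) / spread.std()
    return np.where(z > k, -1, np.where(z < -k, 1, 0))

# Simulated cointegrated pair: p2 is a random walk, p1 tracks 1.5 * p2.
rng = np.random.default_rng(0)
p2 = 100 + np.cumsum(rng.normal(0, 1, 500))
p1 = 1.5 * p2 + rng.normal(0, 1, 500)
gamma, spread = cointegration_spread(p1, p2)
signals = pairs_signal(spread)
```

Because the pair is cointegrated by construction, the estimated ratio recovers the true value closely and the spread is mean-reverting around zero.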
http://www.mdpi.com/2225-1146/4/1/13
We define a dynamic and self-adjusting mixture of Gaussian Graphical Models to cluster financial returns, and provide a new method for extraction of nonparametric estimates of dynamic alphas (excess return) and betas (to a choice set of explanatory factors) in a multivariate setting. This approach, as well as the outputs, has a dynamic, nonstationary and nonparametric form, which circumvents the problem of model risk and parametric assumptions that the Kalman filter and other widely used approaches rely on. The by-product of clusters, used for shrinkage and information borrowing, can be of use to determine relationships around specific events. This approach exhibits a smaller Root Mean Squared Error than traditionally used benchmarks in financial settings, which we illustrate through simulation. As an illustration, we use hedge fund index data, and find that our estimated alphas are, on average, 0.13% per month higher (1.6% per year) than alphas estimated through Ordinary Least Squares. The approach exhibits fast adaptation to abrupt changes in the parameters, as seen in our estimated alphas and betas, which exhibit high volatility, especially in periods which can be identified as times of stressful market events, a reflection of the dynamic positioning of hedge fund portfolio managers.
Econometrics 2016, 4(1), 13; ISSN 2225-1146; published 2016-03-08; doi: 10.3390/econometrics4010013. Authors: Urbi Garay, Enrique ter Horst, German Molina, Abel Rodriguez.
<![CDATA[Econometrics, Vol. 4, Pages 12: Evolutionary Sequential Monte Carlo Samplers for Change-Point Models]]>
http://www.mdpi.com/2225-1146/4/1/12
Sequential Monte Carlo (SMC) methods are widely used for non-linear filtering purposes. However, the SMC scope encompasses wider applications, such as estimating static model parameters, to the point that it is becoming a serious alternative to Markov chain Monte Carlo (MCMC) methods. Not only do SMC algorithms draw posterior distributions of static or dynamic parameters but additionally they provide an estimate of the marginal likelihood. The tempered and time (TNT) algorithm, developed in this paper, combines (off-line) tempered SMC inference with on-line SMC inference for drawing realizations from many sequential posterior distributions without experiencing a particle degeneracy problem. Furthermore, it introduces a new MCMC rejuvenation step that is generic, automated and well-suited for multi-modal distributions. As this update relies on the wide heuristic optimization literature, numerous extensions are readily available. The algorithm is notably appropriate for estimating change-point models. As an example, we compare several change-point GARCH models through their marginal log-likelihoods over time.
Econometrics 2016, 4(1), 12; ISSN 2225-1146; published 2016-03-08; doi: 10.3390/econometrics4010012. Author: Arnaud Dufays.
<![CDATA[Econometrics, Vol. 4, Pages 11: Parallelization Experience with Four Canonical Econometric Models Using ParMitISEM]]>
http://www.mdpi.com/2225-1146/4/1/11
This paper presents the parallel computing implementation of the MitISEM algorithm, labeled Parallel MitISEM. The basic MitISEM algorithm provides an automatic and flexible method to approximate a non-elliptical target density using adaptive mixtures of Student-t densities, where only a kernel of the target density is required. The approximation can be used as a candidate density in Importance Sampling or Metropolis Hastings methods for Bayesian inference on model parameters and probabilities. We present and discuss four canonical econometric models using a Graphics Processing Unit and a multi-core Central Processing Unit version of the MitISEM algorithm. The results show that the parallelization of the MitISEM algorithm on Graphics Processing Units and multi-core Central Processing Units is straightforward and fast to program using MATLAB. Moreover, the speed performance of the Graphics Processing Unit version is much higher than that of the Central Processing Unit version.
Econometrics 2016, 4(1), 11; ISSN 2225-1146; published 2016-03-07; doi: 10.3390/econometrics4010011. Authors: Nalan Baştürk, Stefano Grassi, Lennart Hoogerheide, Herman van Dijk.
<![CDATA[Econometrics, Vol. 4, Pages 18: Spatial Econometrics: A Rapidly Evolving Discipline]]>
http://www.mdpi.com/2225-1146/4/1/18
Spatial econometrics has a relatively short history in the landscape of scientific thought. Indeed, the term “spatial econometrics” was introduced only forty years ago during the general address delivered by Jean Paelinck to the annual meeting of the Dutch Statistical Association in May 1974 (see [1]). [...]
Econometrics 2016, 4(1), 18 (Editorial); ISSN 2225-1146; published 2016-03-07; doi: 10.3390/econometrics4010018. Author: Giuseppe Arbia.
<![CDATA[Econometrics, Vol. 4, Pages 10: Sequentially Adaptive Bayesian Learning for a Nonlinear Model of the Secular and Cyclical Behavior of US Real GDP]]>
http://www.mdpi.com/2225-1146/4/1/10
There is a one-to-one mapping between the conventional time series parameters of a third-order autoregression and the more interpretable parameters of secular half-life, cyclical half-life and cycle period. The latter parameterization is better suited to interpretation of results using both Bayesian and maximum likelihood methods and to expression of a substantive prior distribution using Bayesian methods. The paper demonstrates how to approach both problems using the sequentially adaptive Bayesian learning (SABL) algorithm and software, which eliminates virtually all of the substantial technical overhead required in conventional approaches and produces results quickly and reliably. The work utilizes methodological innovations in SABL including optimization of irregular and multimodal functions and production of the conventional maximum likelihood asymptotic variance matrix as a by-product.
Econometrics 2016, 4(1), 10; ISSN 2225-1146; published 2016-03-02; doi: 10.3390/econometrics4010010. Author: John Geweke.
<![CDATA[Econometrics, Vol. 4, Pages 8: Volatility Forecasting: Downside Risk, Jumps and Leverage Effect]]>
http://www.mdpi.com/2225-1146/4/1/8
We provide empirical evidence of volatility forecasting in relation to asymmetries present in the dynamics of both return and volatility processes. Using recently-developed methodologies to detect jumps from high frequency price data, we estimate the size of positive and negative jumps and propose a methodology to estimate the size of jumps in the quadratic variation. The leverage effect is separated into continuous and discontinuous effects, and past volatility is separated into “good” and “bad”, as well as into continuous and discontinuous risks. Using a long history of the S&amp;P 500 price index, we find that the continuous leverage effect lasts about one week, while the discontinuous leverage effect disappears after one day. “Good” and “bad” continuous risks both characterize the volatility persistence, while “bad” jump risk is much more informative than “good” jump risk in forecasting future volatility. The volatility forecasting model proposed is able to capture many empirical stylized facts while still remaining parsimonious in terms of the number of parameters to be estimated.
Econometrics 2016, 4(1), 8; ISSN 2225-1146; published 2016-02-23; doi: 10.3390/econometrics4010008. Authors: Francesco Audrino, Yujia Hu.
<![CDATA[Econometrics, Vol. 4, Pages 9: Computational Complexity and Parallelization in Bayesian Econometric Analysis]]>
http://www.mdpi.com/2225-1146/4/1/9
Challenging statements have appeared in recent years in the literature on advances in computational procedures. [...]
Econometrics 2016, 4(1), 9 (Editorial); ISSN 2225-1146; published 2016-02-22; doi: 10.3390/econometrics4010009. Authors: Nalan Baştürk, Roberto Casarin, Francesco Ravazzolo, Herman van Dijk.
<![CDATA[Econometrics, Vol. 4, Pages 7: Multiple Discrete Endogenous Variables in Weakly-Separable Triangular Models]]>
http://www.mdpi.com/2225-1146/4/1/7
We consider a model in which an outcome depends on two discrete treatment variables, where one treatment is given before the other. We formulate a three-equation triangular system with weak separability conditions. Without assuming assignment is random, we establish the identification of an average structural function using two-step matching. We also consider decomposing the effect of the first treatment into direct and indirect effects, which are shown to be identified by the proposed methodology. We allow for both of the treatment variables to be non-binary and do not appeal to an identification-at-infinity argument.
Econometrics 2016, 4(1), 7; ISSN 2225-1146; published 2016-02-04; doi: 10.3390/econometrics4010007. Authors: Sung Jun, Joris Pinkse, Haiqing Xu, Neşe Yıldız.
<![CDATA[Econometrics, Vol. 4, Pages 6: Functional-Coefficient Spatial Durbin Models with Nonparametric Spatial Weights: An Application to Economic Growth]]>
http://www.mdpi.com/2225-1146/4/1/6
This paper considers a functional-coefficient spatial Durbin model with nonparametric spatial weights. Applying the series approximation method, we estimate the unknown functional coefficients and spatial weighting functions via a nonparametric two-stage least squares (or 2SLS) estimation method. To further improve estimation accuracy, we also construct a second-step estimator of the unknown functional coefficients by a local linear regression approach. Some Monte Carlo simulation results are reported to assess the finite sample performance of our proposed estimators. We then apply the proposed model to re-examine national economic growth by augmenting the conventional Solow economic growth convergence model with unknown spatial interactive structures of the national economy, as well as country-specific Solow parameters, where the spatial weighting functions and Solow parameters are allowed to be a function of geographical distance and the countries’ openness to trade, respectively.
Econometrics 2016, 4(1), 6; ISSN 2225-1146; published 2016-02-03; doi: 10.3390/econometrics4010006. Authors: Mustafa Koroglu, Yiguo Sun.
<![CDATA[Econometrics, Vol. 4, Pages 5: Acknowledgement to Reviewers of Econometrics in 2015]]>
http://www.mdpi.com/2225-1146/4/1/5
The editors of Econometrics would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2015. [...]
Econometrics 2016, 4(1), 5 (Editorial); ISSN 2225-1146; published 2016-01-25; doi: 10.3390/econometrics4010005. Author: Econometrics Editorial Office.
<![CDATA[Econometrics, Vol. 4, Pages 4: A Conditional Approach to Panel Data Models with Common Shocks]]>
http://www.mdpi.com/2225-1146/4/1/4
This paper studies the effects of common shocks on the OLS estimators of the slope parameters in linear panel data models. The shocks are assumed to affect both the errors and some of the explanatory variables. In contrast to existing approaches, which rely on using results on martingale difference sequences, our method relies on conditional strong laws of large numbers and conditional central limit theorems for conditionally-heterogeneous random variables.
Econometrics 2016, 4(1), 4; ISSN 2225-1146; published 2016-01-12; doi: 10.3390/econometrics4010004. Authors: Giovanni Forchini, Bin Peng.
<![CDATA[Econometrics, Vol. 4, Pages 3: Forecasting Value-at-Risk under Different Distributional Assumptions]]>
http://www.mdpi.com/2225-1146/4/1/3
Financial asset returns are known to be conditionally heteroskedastic and generally non-normally distributed, fat-tailed and often skewed. These features must be taken into account to produce accurate forecasts of Value-at-Risk (VaR). We provide a comprehensive look at the problem by considering the impact that different distributional assumptions have on the accuracy of both univariate and multivariate GARCH models in out-of-sample VaR prediction. The set of analyzed distributions comprises the normal, the Student-t, the Multivariate Exponential Power and their corresponding skewed counterparts. The accuracy of the VaR forecasts is assessed by implementing standard statistical backtesting procedures used to rank the different specifications. The results show the importance of allowing for heavy tails and skewness in the distributional assumption, with the skew-Student outperforming the others across all tests and confidence levels.
Econometrics 2016, 4(1), 3; ISSN 2225-1146; published 2016-01-11; doi: 10.3390/econometrics4010003. Authors: Manuela Braione, Nicolas Scholtes.
<![CDATA[Econometrics, Vol. 4, Pages 1: How Credible Are Shrinking Wage Elasticities of Married Women Labour Supply?]]>
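The core mechanics behind the abstract above (a GARCH conditional variance recursion combined with a distributional quantile) can be illustrated compactly. This is a sketch with fixed, assumed parameters rather than the paper's estimated univariate and multivariate specifications; only the normal and a standardized Student-t innovation are shown.

```python
import numpy as np
from scipy import stats

def garch_var(returns, omega, alpha, beta, level=0.01, dist="normal", nu=5):
    """One-step-ahead VaR from a GARCH(1,1) recursion with fixed
    (assumed, not estimated) parameters; returns the loss quantile."""
    sigma2 = float(np.var(returns))            # initialize at sample variance
    for r in returns:
        sigma2 = omega + alpha * r**2 + beta * sigma2
    if dist == "normal":
        q = stats.norm.ppf(level)
    else:                                       # standardized Student-t: unit variance
        q = stats.t.ppf(level, nu) * np.sqrt((nu - 2) / nu)
    return q * np.sqrt(sigma2)

rng = np.random.default_rng(1)
rets = rng.normal(0, 0.01, 250)
var_norm = garch_var(rets, omega=1e-6, alpha=0.05, beta=0.90)
var_t = garch_var(rets, omega=1e-6, alpha=0.05, beta=0.90, dist="t")
```

At the 1% level the Student-t quantile is more extreme than the normal one, so the t-based VaR is the more conservative of the two, which mirrors the heavy-tail argument in the abstract.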
http://www.mdpi.com/2225-1146/4/1/1
This paper delves into the well-known phenomenon of shrinking wage elasticities for married women in the US over recent decades. The results of a novel model experimental approach via sample data ordering unveil considerable heterogeneity across different wage groups. Yet, surprisingly constant wage elasticity estimates are maintained within certain wage groups over time. In addition to those constant wage elasticity estimates, we find that the composition of working women into different wage groups has changed considerably, resulting in shrinking wage elasticity estimates at the aggregate level. These findings would be impossible to obtain had we not dismantled and discarded the instrumental variable estimation route.
Econometrics 2015, 4(1), 1; ISSN 2225-1146; published 2015-12-25; doi: 10.3390/econometrics4010001. Authors: Duo Qin, Sophie van Huellen, Qing-Chao Wang.
<![CDATA[Econometrics, Vol. 4, Pages 2: Interpretation and Semiparametric Efficiency in Quantile Regression under Misspecification]]>
http://www.mdpi.com/2225-1146/4/1/2
Allowing for misspecification in the linear conditional quantile function, this paper provides a new interpretation and the semiparametric efficiency bound for the quantile regression parameter β ( τ ) in Koenker and Bassett (1978). The first result on interpretation shows that under a mean-squared loss function, the probability limit of the Koenker–Bassett estimator minimizes a weighted distribution approximation error, defined as \(F_{Y}(X'\beta(\tau)|X) - \tau\), i.e., the deviation of the conditional distribution function, evaluated at the linear quantile approximation, from the quantile level. The second result implies that the Koenker–Bassett estimator semiparametrically efficiently estimates the quantile regression parameter that produces parsimonious descriptive statistics for the conditional distribution. Therefore, quantile regression shares the attractive features of ordinary least squares: interpretability and semiparametric efficiency under misspecification.
Econometrics 2015, 4(1), 2; ISSN 2225-1146; published 2015-12-24; doi: 10.3390/econometrics4010002. Author: Ying-Ying Lee.
<![CDATA[Econometrics, Vol. 3, Pages 864-887: Non-Parametric Estimation of Intraday Spot Volatility: Disentangling Instantaneous Trend and Seasonality]]>
http://www.mdpi.com/2225-1146/3/4/864
We provide a new framework for modeling trends and periodic patterns in high-frequency financial data. Seeking adaptivity to ever-changing market conditions, we enlarge the Fourier flexible form into a richer functional class: both our smooth trend and the seasonality are non-parametrically time-varying and evolve in real time. We provide the associated estimators and use simulations to show that they behave adequately in the presence of jumps and heteroskedastic and heavy-tailed noise. A study of exchange rate returns sampled from 2010 to 2013 suggests that failing to factor in the seasonality’s dynamic properties may lead to misestimation of the intraday spot volatility.
Econometrics 2015, 3(4), 864-887; ISSN 2225-1146; published 2015-12-18; doi: 10.3390/econometrics3040864. Authors: Thibault Vatter, Hau-Tieng Wu, Valérie Chavez-Demoulin, Bin Yu.
<![CDATA[Econometrics, Vol. 3, Pages 825-863: Bootstrap Tests for Overidentification in Linear Regression Models]]>
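The Fourier flexible form that the abstract above starts from can be sketched in its static version: a trigonometric design matrix fit by least squares to an intraday series. The paper's contribution, letting the trend and seasonal coefficients evolve in real time, is not reproduced here; this shows only the baseline being generalized.

```python
import numpy as np

def fourier_design(tod, order):
    """Fourier flexible form regressors for an intraday pattern; tod in [0, 1)."""
    cols = [np.ones_like(tod)]
    for k in range(1, order + 1):
        cols.append(np.cos(2 * np.pi * k * tod))
        cols.append(np.sin(2 * np.pi * k * tod))
    return np.column_stack(cols)

def fit_seasonality(tod, y, order=3):
    """Least-squares fit of the seasonal shape (static coefficients)."""
    X = fourier_design(tod, order)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ beta

tod = np.linspace(0, 1, 288, endpoint=False)   # e.g. 5-minute intraday bins
y = 0.5 + 2.0 * np.cos(2 * np.pi * tod)        # known seasonal shape, noise-free
fitted = fit_seasonality(tod, y)
```

Since the target lies in the span of the basis, the static fit recovers it exactly; with real data the residual would contain the time-varying components the paper models.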
http://www.mdpi.com/2225-1146/3/4/825
We study the finite-sample properties of tests for overidentifying restrictions in linear regression models with a single endogenous regressor and weak instruments. Under the assumption of Gaussian disturbances, we derive expressions for a variety of test statistics as functions of eight mutually independent random variables and two nuisance parameters. The distributions of the statistics are shown to have an ill-defined limit as the parameter that determines the strength of the instruments tends to zero and as the correlation between the disturbances of the structural and reduced-form equations tends to plus or minus one. This makes it impossible to perform reliable inference near the point at which the limit is ill-defined. Several bootstrap procedures are proposed. They alleviate the problem and allow reliable inference when the instruments are not too weak. We also study their power properties.
Econometrics 2015, 3(4), 825-863; ISSN 2225-1146; published 2015-12-09; doi: 10.3390/econometrics3040825. Authors: Russell Davidson, James MacKinnon.
<![CDATA[Econometrics, Vol. 3, Pages 797-824: Forecast Combination under Heavy-Tailed Errors]]>
http://www.mdpi.com/2225-1146/3/4/797
Forecast combination has been proven to be a very important technique to obtain accurate predictions for various applications in economics, finance, marketing and many other areas. In many applications, forecast errors exhibit heavy-tailed behaviors for various reasons. Unfortunately, to our knowledge, little has been done to obtain reliable forecast combinations for such situations. The familiar forecast combination methods, such as simple average, least squares regression or those based on the variance-covariance of the forecasts, may perform very poorly due to the fact that outliers tend to occur, and they make these methods have unstable weights, leading to non-robust forecasts. To address this problem, in this paper, we propose two nonparametric forecast combination methods. One is specially proposed for the situations in which the forecast errors are strongly believed to have heavy tails that can be modeled by a scaled Student’s t-distribution; the other is designed for relatively more general situations when there is a lack of strong or consistent evidence on the tail behaviors of the forecast errors due to a shortage of data and/or an evolving data-generating process. Adaptive risk bounds of both methods are developed. They show that the resulting combined forecasts yield near optimal mean forecast errors relative to the candidate forecasts. Simulations and a real example demonstrate their superior performance in that they indeed tend to have significantly smaller prediction errors than the previous combination methods in the presence of forecast outliers.
Econometrics 2015, 3(4), 797-824; ISSN 2225-1146; published 2015-11-23; doi: 10.3390/econometrics3040797. Authors: Gang Cheng, Sicong Wang, Yuhong Yang.
<![CDATA[Econometrics, Vol. 3, Pages 761-796: Testing in a Random Effects Panel Data Model with Spatially Correlated Error Components and Spatially Lagged Dependent Variables]]>
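The fragility of the equal-weight average under heavy-tailed forecast errors, the motivating problem in the abstract above, is easy to demonstrate. The median combination below is only a generic robust stand-in, not either of the paper's two proposed nonparametric methods.

```python
import numpy as np

rng = np.random.default_rng(2)
target = np.zeros(200)                       # true values to be forecast

# Three candidate forecasts; the third has heavy-tailed (Cauchy) errors.
F = np.column_stack([
    rng.normal(0, 1, 200),
    rng.normal(0, 1, 200),
    rng.standard_cauchy(200),
])

comb_mean = F.mean(axis=1)                   # equal-weight simple average
comb_median = np.median(F, axis=1)           # outlier-robust alternative

mse_mean = float(np.mean((comb_mean - target) ** 2))
mse_median = float(np.mean((comb_median - target) ** 2))
```

A single extreme Cauchy draw inflates the average's error quadratically, while the cross-sectional median simply ignores it, which is the instability of combination weights the abstract points to.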
http://www.mdpi.com/2225-1146/3/4/761
We propose a random effects panel data model with both spatially correlated error components and spatially lagged dependent variables. We focus on diagnostic testing procedures and derive Lagrange multiplier (LM) test statistics for a variety of hypotheses within this model. We first construct the joint LM test for both the individual random effects and the two spatial effects (spatial error correlation and spatial lag dependence). We then provide LM tests for the individual random effects and for the two spatial effects separately. In addition, in order to guard against local model misspecification, we derive locally adjusted (robust) LM tests based on the Bera and Yoon principle (Bera and Yoon, 1993). We conduct a small Monte Carlo simulation to show the good finite sample performances of these LM test statistics and revisit the cigarette demand example in Baltagi and Levin (1992) to illustrate our testing procedures.
Econometrics 2015, 3(4), 761-796; ISSN 2225-1146; published 2015-11-09; doi: 10.3390/econometrics3040761. Authors: Ming He, Kuan-Pin Lin.
<![CDATA[Econometrics, Vol. 3, Pages 733-760: Forecasting Interest Rates Using Geostatistical Techniques]]>
http://www.mdpi.com/2225-1146/3/4/733
Geostatistical spatial models are widely used in many applied fields to forecast data observed on continuous three-dimensional surfaces. We propose to extend their use to finance and, in particular, to forecasting yield curves. We present the results of an empirical application where we apply the proposed method to forecast Euro Zero Rates (2003–2014) using the Ordinary Kriging method based on the anisotropic variogram. Furthermore, a comparison with other recent methods for forecasting yield curves is proposed. The results show that the model is characterized by good levels of predictive accuracy and is competitive with the other forecasting models considered.
Econometrics 2015, 3(4), 733-760; ISSN 2225-1146; published 2015-11-09; doi: 10.3390/econometrics3040733. Authors: Giuseppe Arbia, Michele Di Marcantonio.
<![CDATA[Econometrics, Vol. 3, Pages 719-732: Counterfactual Distributions in Bivariate Models—A Conditional Quantile Approach]]>
http://www.mdpi.com/2225-1146/3/4/719
This paper proposes a methodology to incorporate bivariate models in numerical computations of counterfactual distributions. The proposal is to extend the works of Machado and Mata (2005) and Melly (2005) using the grid method to generate pairs of random variables. This contribution allows incorporating the effect of intra-household decision making in counterfactual decompositions of changes in income distribution. An application using data from five Latin American countries shows that this approach substantially improves the goodness of fit to the empirical distribution. However, the exercise of decomposition is less conclusive about the performance of the method, which essentially depends on the sample size and the accuracy of the regression model.
Econometrics 2015, 3(4), 719-732; ISSN 2225-1146; published 2015-11-09; doi: 10.3390/econometrics3040719. Authors: Javier Alejo, Nicolás Badaracco.
<![CDATA[Econometrics, Vol. 3, Pages 709-718: Measurement Errors Arising When Using Distances in Microeconometric Modelling and the Individuals’ Position Is Geo-Masked for Confidentiality]]>
http://www.mdpi.com/2225-1146/3/4/709
In many microeconometric models we use distances. For instance, in modelling the individual behavior in labor economics or in health studies, the distance from a relevant point of interest (such as a hospital or a workplace) is often used as a predictor in a regression framework. However, in order to preserve confidentiality, spatial micro-data are often geo-masked, thus reducing their quality and dramatically distorting the inferential conclusions. In particular in this case, a measurement error is introduced in the independent variable which negatively affects the properties of the estimators. This paper studies these negative effects, discusses their consequences, and suggests possible interpretations and directions to data producers, end users, and practitioners.
Econometrics 2015, 3(4), 709-718; ISSN 2225-1146; published 2015-10-29; doi: 10.3390/econometrics3040709. Authors: Giuseppe Arbia, Giuseppe Espa, Diego Giuliani.
<![CDATA[Econometrics, Vol. 3, Pages 698-708: Is Benford’s Law a Universal Behavioral Theory?]]>
http://www.mdpi.com/2225-1146/3/4/698
In this paper, we consider the question and present evidence as to whether or not Benford’s exponential first significant digit (FSD) law reflects a fundamental principle behind the complex and nondeterministic nature of large-scale physical and behavioral systems. As a behavioral example, we focus on the FSD distribution of Australian micro income data and use information theoretic entropy methods to investigate the degree that corresponding empirical income distributions are consistent with Benford’s law.
Econometrics 2015, 3(4), 698-708; ISSN 2225-1146; published 2015-10-22; doi: 10.3390/econometrics3040698. Authors: Sofia Villas-Boas, Qiuzi Fu, George Judge.
<![CDATA[Econometrics, Vol. 3, Pages 667-697: A Joint Specification Test for Response Probabilities in Unordered Multinomial Choice Models]]>
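The two ingredients of the FSD analysis described above, extracting first significant digits and the Benford reference distribution P(d) = log10(1 + 1/d), can be computed in a few lines. This sketch covers only those ingredients, not the paper's information theoretic entropy comparison.

```python
import numpy as np

def first_significant_digit(x):
    """First significant digit (FSD) of each positive number."""
    x = np.asarray(x, dtype=float)
    # Scale each value into [1, 10) and truncate to its leading digit.
    return (x / 10.0 ** np.floor(np.log10(x))).astype(int)

def benford_pmf():
    """Benford's FSD law: P(d) = log10(1 + 1/d) for d = 1, ..., 9."""
    d = np.arange(1, 10)
    return np.log10(1.0 + 1.0 / d)

digits = first_significant_digit([0.042, 7.0, 365.0])
pmf = benford_pmf()
```

An empirical FSD histogram built with `first_significant_digit` can then be compared against `benford_pmf()`; the first digit 1 is the most likely under Benford's law, with probability log10(2), about 30.1%.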
http://www.mdpi.com/2225-1146/3/3/667
Estimation results obtained by parametric models may be seriously misleading when the model is misspecified or poorly approximates the true model. This study proposes a test that jointly tests the specifications of multiple response probabilities in unordered multinomial choice models. The test statistic is asymptotically chi-square distributed, consistent against a fixed alternative and able to detect a local alternative approaching the null at a rate slower than the parametric rate. We show that rejection regions can be calculated by a simple parametric bootstrap procedure, when the sample size is small. The size and power of the tests are investigated by Monte Carlo experiments.
Econometrics 2015, 3(3), 667-697; ISSN 2225-1146; published 2015-09-16; doi: 10.3390/econometrics3030667. Author: Masamune Iwasawa.
<![CDATA[Econometrics, Vol. 3, Pages 654-666: On Bootstrap Inference for Quantile Regression Panel Data: A Monte Carlo Study]]>
http://www.mdpi.com/2225-1146/3/3/654
This paper evaluates bootstrap inference methods for quantile regression panel data models. We propose to construct confidence intervals for the parameters of interest using percentile bootstrap with pairwise resampling. We study three different bootstrapping procedures. First, the bootstrap samples are constructed by resampling only from cross-sectional units with replacement. Second, the temporal resampling is performed from the time series. Finally, a more general resampling scheme, which considers sampling from both the cross-sectional and temporal dimensions, is introduced. The bootstrap algorithms are computationally attractive and easy to use in practice. We evaluate the performance of the bootstrap confidence interval by means of Monte Carlo simulations. The results show that the bootstrap methods have good finite sample performance for both location and location-scale models.
Econometrics 2015, 3(3), 654-666; ISSN 2225-1146; published 2015-09-10; doi: 10.3390/econometrics3030654. Authors: Antonio Galvao, Gabriel Montes-Rojas.
<![CDATA[Econometrics, Vol. 3, Pages 633-653: A New Family of Consistent and Asymptotically-Normal Estimators for the Extremal Index]]>
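The first of the three resampling schemes described above, drawing whole cross-sectional units with replacement while keeping each unit's time series intact, can be sketched as a percentile bootstrap. For simplicity the statistic here is a pooled OLS slope rather than a quantile regression coefficient, so this illustrates the resampling scheme only, not the paper's full procedure.

```python
import numpy as np

def cross_section_bootstrap(y, x, stat, n_boot=500, seed=0):
    """Percentile bootstrap resampling whole cross-sectional units with
    replacement (units in rows, time periods in columns)."""
    rng = np.random.default_rng(seed)
    n = y.shape[0]
    draws = [stat(y[idx], x[idx])
             for idx in (rng.integers(0, n, size=n) for _ in range(n_boot))]
    return np.percentile(draws, [2.5, 97.5])   # 95% percentile interval

def pooled_slope(y, x):
    """Pooled no-intercept OLS slope across all units and periods."""
    return float((x * y).sum() / (x * x).sum())

rng = np.random.default_rng(3)
x = rng.normal(0, 1, (50, 20))                 # 50 units, 20 periods
y = 2.0 * x + rng.normal(0, 0.5, (50, 20))     # true slope = 2
lo, hi = cross_section_bootstrap(y, x, pooled_slope)
```

Because each bootstrap draw reuses complete unit histories, any within-unit temporal dependence is preserved, which is the rationale for this scheme in panel settings.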
http://www.mdpi.com/2225-1146/3/3/633
The extremal index (θ) is the key parameter for extending extreme value theory results from i.i.d. to stationary sequences. One important property of this parameter is that its inverse determines the degree of clustering in the extremes. This article introduces a novel interpretation of the extremal index as a limiting probability characterized by two Poisson processes and a simple family of estimators derived from this new characterization. Unlike most estimators for θ in the literature, this estimator is consistent, asymptotically normal and very stable across partitions of the sample. Further, we show in an extensive simulation study that this estimator outperforms in finite samples the logs, blocks and runs estimation methods. Finally, we apply this new estimator to test for clustering of extremes in monthly time series of unemployment growth and inflation rates and conclude that runs of large unemployment rates are more prolonged than periods of high inflation.
Econometrics 2015, 3(3), 633-653; ISSN 2225-1146; published 2015-08-28; doi: 10.3390/econometrics3030633. Author: Jose Olmo.
<![CDATA[Econometrics, Vol. 3, Pages 610-632: Right on Target, or Is it? The Role of Distributional Shape in Variance Targeting]]>
http://www.mdpi.com/2225-1146/3/3/610
Estimation of GARCH models can be simplified by augmenting quasi-maximum likelihood (QML) estimation with variance targeting, which reduces the degree of parameterization and facilitates estimation. We compare the two approaches and investigate, via simulations, how non-normality features of the return distribution affect the quality of estimation of the volatility equation and corresponding value-at-risk predictions. We find that most GARCH coefficients and associated predictions are more precisely estimated when no variance targeting is employed. Bias properties are exacerbated for a heavier-tailed distribution of standardized returns, while the distributional asymmetry has little or moderate impact, these phenomena tending to be more pronounced under variance targeting. Some effects further intensify if one uses ML based on a leptokurtic distribution in place of normal QML. The sample size also has a more favorable effect on estimation precision when no variance targeting is used. Thus, if computational costs are not prohibitive, variance targeting should probably be avoided.
Econometrics 2015, 3(3), 610-632; ISSN 2225-1146; published 2015-08-10; doi: 10.3390/econometrics3030610. Authors: Stanislav Anatolyev, Stanislav Khrapov.
<![CDATA[Econometrics, Vol. 3, Pages 590-609: A Kolmogorov-Smirnov Based Test for Comparing the Predictive Accuracy of Two Sets of Forecasts]]>
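The variance targeting device compared above is simple to state in code: the GARCH(1,1) intercept is pinned so that the implied unconditional variance matches the sample variance, leaving only the two dynamic coefficients free. This sketch shows the constraint itself, not the QML estimation being compared; the parameter values are illustrative.

```python
import numpy as np

def variance_targeted_omega(returns, alpha, beta):
    """Variance targeting for GARCH(1,1): choose omega so the implied
    unconditional variance omega / (1 - alpha - beta) equals the sample
    variance. Only (alpha, beta) then remain to be estimated."""
    return float(np.var(returns)) * (1.0 - alpha - beta)

rng = np.random.default_rng(4)
rets = rng.normal(0, 0.01, 1000)
alpha, beta = 0.05, 0.90                      # illustrative dynamic coefficients
omega = variance_targeted_omega(rets, alpha, beta)
implied_uncond_var = omega / (1.0 - alpha - beta)
```

The constraint holds by construction, which is exactly why targeting reduces the parameterization; the abstract's point is that this convenience can cost estimation precision.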
http://www.mdpi.com/2225-1146/3/3/590
This paper introduces a complementary statistical test for distinguishing between the predictive accuracy of two sets of forecasts. We propose a non-parametric test founded upon the principles of the Kolmogorov-Smirnov (KS) test, referred to as the KS Predictive Accuracy (KSPA) test. The KSPA test is able to serve two distinct purposes. Initially, the test seeks to determine whether there exists a statistically significant difference between the distributions of forecast errors, and secondly it exploits the principles of stochastic dominance to determine whether the forecasts with the lower error also report a stochastically smaller error than forecasts from a competing model, and thereby enables distinguishing between the predictive accuracy of forecasts. We perform a simulation study for the size and power of the proposed test and report the results for different noise distributions, sample sizes and forecasting horizons. The simulation results indicate that the KSPA test is correctly sized, and robust in the face of varying forecasting horizons and sample sizes along with significant accuracy gains reported especially in the case of small sample sizes. Real world applications are also considered to illustrate the applicability of the proposed KSPA test in practice.
Econometrics 2015, 3(3), 590-609; ISSN 2225-1146; published 2015-08-04; doi: 10.3390/econometrics3030590. Authors: Hossein Hassani, Emmanuel Silva.
<![CDATA[Econometrics, Vol. 3, Pages 577-589: A Spectral Model of Turnover Reduction]]>
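The first of the two purposes described above, testing whether two forecast-error distributions differ, can be sketched with a two-sample KS test on absolute errors. This covers only that first stage; the one-sided stochastic dominance step of the KSPA test is not reproduced, and the error series here are simulated for illustration.

```python
import numpy as np
from scipy import stats

def ks_error_comparison(errors_a, errors_b):
    """Two-sample KS test on absolute forecast errors of two models.
    A small p-value indicates the error distributions differ (the
    two-sided stage of the KSPA idea)."""
    return stats.ks_2samp(np.abs(np.asarray(errors_a)),
                          np.abs(np.asarray(errors_b)))

rng = np.random.default_rng(5)
e_small = rng.normal(0, 1, 300)               # model with smaller errors
e_large = rng.normal(0, 3, 300)               # model with larger errors
stat_diff, p_diff = ks_error_comparison(e_small, e_large)
stat_same, p_same = ks_error_comparison(e_small, e_small)
```

With clearly different error scales the test rejects decisively, while comparing a model against itself yields no evidence of a difference, consistent with a correctly sized test.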
http://www.mdpi.com/2225-1146/3/3/577
We give a simple explicit formula for turnover reduction when a large number of alphas are traded on the same execution platform and trades are crossed internally. We model turnover reduction via alpha correlations. Then, for a large number of alphas, turnover reduction is related to the largest eigenvalue and the corresponding eigenvector of the alpha correlation matrix.
Econometrics 2015, 3(3), 577-589; doi: 10.3390/econometrics3030577; published 2015-07-29. Author: Zura Kakushadze.
<![CDATA[Econometrics, Vol. 3, Pages 561-576: A Note on the Asymptotic Normality of the Kernel Deconvolution Density Estimator with Logarithmic Chi-Square Noise]]>
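The largest eigenvalue and eigenvector of the alpha correlation matrix, which the abstract above relates to turnover reduction, can be computed with a toy power iteration; this sketch does not reproduce the paper's formula and the matrix below is illustrative.

```python
def power_iteration(C, iters=200):
    """Approximate the largest eigenvalue and eigenvector of a correlation matrix C."""
    n = len(C)
    v = [1.0 / n] * n                 # arbitrary starting vector
    lam = 0.0
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)  # Rayleigh-style scale estimate
        v = [x / lam for x in w]      # renormalize to max-abs 1
    return lam, v

C = [[1.0, 0.5], [0.5, 1.0]]  # a 2x2 correlation matrix
lam, v = power_iteration(C)   # largest eigenvalue is 1.5
```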
http://www.mdpi.com/2225-1146/3/3/561
This paper studies the asymptotic normality of the kernel deconvolution estimator when the noise distribution is logarithmic chi-square; both independent and identically distributed observations and strong mixing observations are considered. The dependent case of the result is applied to obtain the pointwise asymptotic distribution of the deconvolution volatility density estimator in discrete-time stochastic volatility models.
Econometrics 2015, 3(3), 561-576; Short Note; doi: 10.3390/econometrics3030561; published 2015-07-21. Author: Yang Zu.
<![CDATA[Econometrics, Vol. 3, Pages 532-560: New Graphical Methods and Test Statistics for Testing Composite Normality]]>
http://www.mdpi.com/2225-1146/3/3/532
Several graphical methods for testing univariate composite normality from an i.i.d. sample are presented. They are endowed with correct simultaneous error bounds and yield size-correct tests. As all are based on the empirical CDF, they are also consistent for all alternatives. For one test, called the modified stabilized probability test, or MSP, a highly simplified computational method is derived, which delivers the test statistic and also a highly accurate p-value approximation, essentially instantaneously. The MSP test is demonstrated to have higher power against asymmetric alternatives than the well-known and powerful Jarque-Bera test. A further size-correct test, based on combining two test statistics, is shown to have yet higher power. The methodology employed is fully general and can be applied to any i.i.d. univariate continuous distribution setting.
Econometrics 2015, 3(3), 532-560; doi: 10.3390/econometrics3030532; published 2015-07-15. Author: Marc Paolella.
<![CDATA[Econometrics, Vol. 3, Pages 525-531: Efficient Estimation in Heteroscedastic Varying Coefficient Models]]>
http://www.mdpi.com/2225-1146/3/3/525
This paper considers statistical inference for the heteroscedastic varying coefficient model. We propose an estimator of the coefficient functions that is more efficient than the conventional local-linear estimator. We establish asymptotic normality of the proposed estimator and conduct simulations to illustrate the performance of the proposed method.
Econometrics 2015, 3(3), 525-531; doi: 10.3390/econometrics3030525; published 2015-07-15. Authors: Chuanhua Wei, Lijie Wan.
<![CDATA[Econometrics, Vol. 3, Pages 494-524: Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects]]>
http://www.mdpi.com/2225-1146/3/3/494
We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of common parameters when some true exogenous regressors are excluded. We propose a data dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian Model Averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.
Econometrics 2015, 3(3), 494-524; doi: 10.3390/econometrics3030494; published 2015-07-10. Author: Guangjie Li.
<![CDATA[Econometrics, Vol. 3, Pages 466-493: A New Approach to Model Verification, Falsification and Selection]]>
http://www.mdpi.com/2225-1146/3/3/466
This paper shows that a qualitative analysis, i.e., an assessment of the consistency of a hypothesized sign pattern for structural arrays with the sign pattern of the estimated reduced form, can always provide decisive insight into a model’s validity both in general and compared to other models. Qualitative analysis can show that it is impossible for some models to have generated the data used to estimate the reduced form, even though standard specification tests might show the model to be adequate. A partially specified structural hypothesis can be falsified by estimating as few as one reduced form equation. Zero restrictions in the structure can themselves be falsified. It is further shown how the information content of the hypothesized structural sign patterns can be measured using a commonly applied concept of statistical entropy. The lower the hypothesized structural sign pattern’s entropy, the more a priori information it proposes about the sign pattern of the estimated reduced form. As a hypothesized structural sign pattern’s entropy decreases, it becomes more subject to Type I error and less subject to Type II error. Three cases illustrate the approach taken here.
Econometrics 2015, 3(3), 466-493; doi: 10.3390/econometrics3030466; published 2015-06-29. Authors: Andrew Buck, George Lady.
<![CDATA[Econometrics, Vol. 3, Pages 443-465: Bayesian Approach to Disentangling Technical and Environmental Productivity]]>
http://www.mdpi.com/2225-1146/3/2/443
This paper models the firm’s production process as a system of simultaneous technologies for desirable and undesirable outputs. Desirable outputs are produced by transforming inputs via the conventional transformation function, whereas (consistent with the material balance condition) undesirable outputs are by-produced via the so-called “residual generation technology”. By separating the production of undesirable outputs from that of desirable outputs, not only do we ensure that undesirable outputs are not modeled as inputs and thus satisfy costly disposability, but we are also able to differentiate between the traditional (desirable-output-oriented) technical productivity and the undesirable-output-oriented environmental, or so-called “green”, productivity. To measure the latter, we derive a Solow-type Divisia environmental productivity index which, unlike conventional productivity indices, allows crediting the ceteris paribus reduction in undesirable outputs. Our index also provides a meaningful way to decompose environmental productivity into environmental technological and efficiency changes.
Econometrics 2015, 3(2), 443-465; doi: 10.3390/econometrics3020443; published 2015-06-16. Authors: Emir Malikov, Subal Kumbhakar, Efthymios Tsionas.
<![CDATA[Econometrics, Vol. 3, Pages 412-442: Strategic Interaction Model with Censored Strategies]]>
http://www.mdpi.com/2225-1146/3/2/412
In this paper, we develop a new model of a static game of incomplete information with a large number of players. The model has two key distinguishing features. First, the strategies are subject to threshold effects, and can be interpreted as dependent censored random variables. Second, in contrast to most of the existing literature, our inferential theory relies on a large number of players, rather than a large number of independent repetitions of the same game. We establish existence and uniqueness of the pure strategy equilibrium, and prove that the censored equilibrium strategies satisfy a near-epoch dependence property. We then show that the normal maximum likelihood and least squares estimators of this censored model are consistent and asymptotically normal. Our model can be useful in a wide variety of settings, including investment, R&amp;D, labor supply, and social interaction applications.
Econometrics 2015, 3(2), 412-442; doi: 10.3390/econometrics3020412; published 2015-06-01. Author: Nazgul Jenish.
<![CDATA[Econometrics, Vol. 3, Pages 376-411: Asymptotic Distribution and Finite Sample Bias Correction of QML Estimators for Spatial Error Dependence Model]]>
http://www.mdpi.com/2225-1146/3/2/376
In studying the asymptotic and finite sample properties of quasi-maximum likelihood (QML) estimators for spatial linear regression models, much attention has been paid to the spatial lag dependence (SLD) model; little has been given to its companion, the spatial error dependence (SED) model. In particular, the effect of spatial dependence on the convergence rate of the QML estimators has not been formally studied, and methods for correcting finite sample bias of the QML estimators have not been given. This paper fills these gaps. Of the two, bias correction is particularly important for applications of this model, as it can lead to much improved inferences for the regression coefficients. Contrary to common perceptions, both the large and small sample behaviors of the QML estimators for the SED model can differ from those for the SLD model in terms of the rate of convergence and the magnitude of bias. Monte Carlo results show that the bias can be severe, and the proposed bias correction procedure is very effective.
Econometrics 2015, 3(2), 376-411; doi: 10.3390/econometrics3020376; published 2015-05-21. Authors: Shew Liu, Zhenlin Yang.
<![CDATA[Econometrics, Vol. 3, Pages 355-375: A Jackknife Correction to a Test for Cointegration Rank]]>
http://www.mdpi.com/2225-1146/3/2/355
This paper investigates the performance of a jackknife correction to a test for cointegration rank in a vector autoregressive system. The limiting distributions of the jackknife-corrected statistics are derived and the critical values of these distributions are tabulated. Based on these critical values the finite sample size and power properties of the jackknife-corrected tests are compared with the usual rank test statistic as well as statistics involving a small sample correction and a Bartlett correction, in addition to a bootstrap method. The simulations reveal that all of the corrected tests can provide finite sample size improvements, while maintaining power, although the bootstrap procedure is the most robust across the simulation designs considered.
Econometrics 2015, 3(2), 355-375; doi: 10.3390/econometrics3020355; published 2015-05-20. Author: Marcus Chambers.
<![CDATA[Econometrics, Vol. 3, Pages 339-354: The Seasonal KPSS Test: Examining Possible Applications with Monthly Data and Additional Deterministic Terms]]>
http://www.mdpi.com/2225-1146/3/2/339
Finite-sample studies of seasonal stationarity tests are notably scarcer in the literature than those of seasonal unit root tests. Although using both seasonal stationarity and seasonal unit root tests is advisable for correctly determining the most appropriate form of the trend in a seasonal time series, such joint use is rarely noted in the relevant studies on this topic. Recently, the seasonal KPSS test, with a null hypothesis of no seasonal unit roots and based on quarterly data, has been introduced in the literature. The asymptotic theory of the seasonal KPSS test depends on whether the data have been filtered by a preliminary regression; more specifically, one may extract deterministic components, such as the mean and trend, from the data before testing. In this paper, we examine the effects of de-trending on the properties of the seasonal KPSS test in finite samples, and a sketch of the test’s limit theory is provided. Moreover, a Monte Carlo study is conducted to analyze the behavior of the test for a monthly time series; this frequency is of particular interest because, as noted above, the test was originally introduced for quarterly data. Overall, the results indicate that the seasonal KPSS test preserves its good size and power properties. Furthermore, our results corroborate those reported elsewhere in the literature for conventional stationarity tests, suggesting that nonparametric corrections of residual variances may lead to better in-sample properties of the seasonal KPSS test. Finally, the seasonal KPSS test is applied to a monthly series of the United States (US) consumer price index, in which we identify a number of seasonal unit roots. [1]
[1] Table 1 in this paper is copyrighted and initially published by JMASM in 2012, Volume 11, Issue 1, pp. 69–77, ISSN: 1538–9472, JMASM Inc., PO Box 48023, Oak Park, MI 48237, USA, ea@jmasm.com.
Econometrics 2015, 3(2), 339-354; doi: 10.3390/econometrics3020339; published 2015-05-13. Author: Ghassen Montasser.
<![CDATA[Econometrics, Vol. 3, Pages 317-338: The SAR Model for Very Large Datasets: A Reduced Rank Approach]]>
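For intuition about the KPSS family of statistics discussed above, a minimal non-seasonal, level-stationarity version can be sketched as follows; this is an illustrative simplification using a lag-0 long-run variance estimate, not the seasonal test studied in the article.

```python
def kpss_level(y):
    """KPSS-type level-stationarity statistic: scaled sum of squared partial sums
    of demeaned residuals, normalized by a naive (lag-0) long-run variance."""
    n = len(y)
    mean = sum(y) / n
    e = [v - mean for v in y]        # demeaned residuals
    s = 0.0
    partial_sq = 0.0
    for v in e:
        s += v
        partial_sq += s * s          # accumulate squared partial sums
    var = sum(v * v for v in e) / n  # lag-0 long-run variance estimate
    return partial_sq / (n * n * var)
```

The seasonal variant applies the same partial-sum idea at seasonal frequencies, and the article's nonparametric variance corrections replace the naive lag-0 estimate used here.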
http://www.mdpi.com/2225-1146/3/2/317
The SAR model is widely used in spatial econometrics to model Gaussian processes on a discrete spatial lattice, but for large datasets, fitting it becomes computationally prohibitive, and hence its usefulness can be limited. A computationally efficient alternative is the spatial random effects (SRE) model, and in this article, we calibrate it to the SAR model of interest using a generalisation of the Moran operator that allows for heteroskedasticity and an asymmetric SAR spatial dependence matrix. In general, spatial data have a measurement-error component, which we model, and we use restricted maximum likelihood to estimate the SRE model covariance parameters; the required computational time is only of the order of the size of the dataset. Our implementation is demonstrated using mean usual weekly income data from the 2011 Australian Census.
Econometrics 2015, 3(2), 317-338; doi: 10.3390/econometrics3020317; published 2015-05-11. Authors: Sandy Burden, Noel Cressie, David Steel.
<![CDATA[Econometrics, Vol. 3, Pages 289-316: Selection Criteria in Regime Switching Conditional Volatility Models]]>
http://www.mdpi.com/2225-1146/3/2/289
A large number of nonlinear conditional heteroskedastic models have been proposed in the literature, and model selection is crucial to any statistical data analysis. In this article, we investigate whether the most commonly used selection criteria lead to the choice of the right specification in a regime switching framework. We focus on two types of models: the Logistic Smooth Transition GARCH and the Markov-Switching GARCH models. Simulation experiments reveal that information criteria and loss functions can lead to misspecification; BIC sometimes indicates the wrong regime switching framework. Depending on the data generating process used in the experiments, great care is needed when choosing a criterion.
Econometrics 2015, 3(2), 289-316; doi: 10.3390/econometrics3020289; published 2015-05-11. Author: Thomas Chuffart.
<![CDATA[Econometrics, Vol. 3, Pages 265-288: Nonparametric Regression Estimation for Multivariate Null Recurrent Processes]]>
http://www.mdpi.com/2225-1146/3/2/265
This paper discusses nonparametric kernel regression with the regressor being a \(d\)-dimensional \(\beta\)-null recurrent process in the presence of conditional heteroscedasticity. We show that the mean function estimator is consistent with convergence rate \(\sqrt{n(T)h^{d}}\), where \(n(T)\) is the number of regenerations for a \(\beta\)-null recurrent process, and that the limiting distribution (with proper normalization) is normal. Furthermore, we show that the two-step estimator of the volatility function is consistent. The finite sample performance of the estimator is quite reasonable when the leave-one-out cross-validation method is used for bandwidth selection. We apply the proposed method to study the relationship of the Federal funds rate with the 3-month and 5-year T-bill rates and find that the relationship is nonlinear. Furthermore, the in-sample and out-of-sample performance of the nonparametric model is far better than that of the linear model.
Econometrics 2015, 3(2), 265-288; doi: 10.3390/econometrics3020265; published 2015-04-14. Authors: Biqing Cai, Dag Tjøstheim.
<![CDATA[Econometrics, Vol. 3, Pages 240-264: Detecting Location Shifts during Model Selection by Step-Indicator Saturation]]>
http://www.mdpi.com/2225-1146/3/2/240
To capture location shifts in the context of model selection, we propose selecting significant step indicators from a saturating set added to the union of all of the candidate variables. The null retention frequency and approximate non-centrality of a selection test are derived using a ‘split-half’ analysis, the simplest specialization of a multiple-path block-search algorithm. Monte Carlo simulations, extended to sequential reduction, confirm the accuracy of nominal significance levels under the null and show retentions when location shifts occur, improving the non-null retention frequency compared to the corresponding impulse-indicator saturation (IIS)-based method and the lasso.
Econometrics 2015, 3(2), 240-264; doi: 10.3390/econometrics3020240; published 2015-04-14. Authors: Jennifer Castle, Jurgen Doornik, David Hendry, Felix Pretis.
<![CDATA[Econometrics, Vol. 3, Pages 233-239: A Pitfall in Using the Characterization of Granger Non-Causality in Vector Autoregressive Models]]>
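The saturating set of step indicators described above can be illustrated with a toy constructor (a hypothetical helper, not the authors' multi-path block-search algorithm): each candidate regressor is the indicator 1{t >= j} for a potential break date j.

```python
def step_indicators(n):
    """Build the saturating set of step indicators 1{t >= j} for a sample of
    size n, one row per candidate break date j = 1, ..., n-1."""
    return [[1 if t >= j else 0 for t in range(n)] for j in range(1, n)]
```

In a split-half analysis, roughly half of these columns would be added to the candidate set at a time, with significant indicators retained across both halves.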
http://www.mdpi.com/2225-1146/3/2/233
It is well known that in a vector autoregressive (VAR) model Granger non-causality is characterized by a set of restrictions on the VAR coefficients. This characterization has been derived under the assumption of non-singularity of the covariance matrix of the innovations. This note shows that if this assumption is violated, then the characterization of Granger non-causality in a VAR model fails to hold. In these situations Granger non-causality test results must be interpreted with caution.
Econometrics 2015, 3(2), 233-239; doi: 10.3390/econometrics3020233; published 2015-04-09. Author: Umberto Triacca.