Econometrics doi: 10.3390/econometrics5040054

Authors: Yoontae Jeon Thomas McCurdy

Forecasting correlations between stocks and commodities is important for diversification across asset classes and other risk management decisions. Correlation forecasts are affected by model uncertainty, the sources of which can include uncertainty about changing fundamentals and associated parameters (model instability), structural breaks, and nonlinearities due, for example, to regime switching. We use approaches that weight historical data according to their predictive content. Specifically, we estimate two alternative models, ‘time-varying weights’ and ‘time-varying window’, in order to maximize the value of past data for forecasting. Our empirical analyses reveal that these approaches yield correlation forecasts superior to those of several benchmark models.
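The idea of weighting historical data by its predictive content can be illustrated with a simple exponentially weighted correlation estimator — a hypothetical stand-in, not the authors' ‘time-varying weights’ or ‘time-varying window’ models; the `decay` parameter and the simulated series are illustrative choices:

```python
import numpy as np

def weighted_correlation(x, y, decay=0.97):
    """Correlation estimate that down-weights older observations.

    Observation t (t = 0 is the oldest) gets weight decay**(T-1-t), so
    recent data dominate; `decay` is an illustrative tuning parameter,
    not part of the models estimated in the paper.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    T = len(x)
    w = decay ** np.arange(T - 1, -1, -1.0)
    w /= w.sum()
    mx, my = w @ x, w @ y
    cov = w @ ((x - mx) * (y - my))
    return cov / np.sqrt((w @ (x - mx) ** 2) * (w @ (y - my) ** 2))

# Simulated stock and commodity returns sharing a common factor.
rng = np.random.default_rng(0)
z = rng.standard_normal(500)
stock = z + 0.1 * rng.standard_normal(500)
commodity = 0.5 * z + rng.standard_normal(500)
print(weighted_correlation(stock, commodity))
```

Shrinking `decay` shortens the effective estimation window, which is the crude analogue of letting the data choose how much history is informative.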

Econometrics doi: 10.3390/econometrics5040053

Authors: Tristan Skolrud

The Fourier Flexible form provides a global approximation to an unknown data generating process. In terms of limiting function specification error, this form is preferable to functional forms based on second-order Taylor series expansions. The Fourier Flexible form is a truncated Fourier series expansion appended to a second-order expansion in logarithms. By replacing the logarithmic expansion with a Box-Cox transformation, we show that the Fourier Flexible form can reduce approximation error by 25% on average in the tails of the data distribution. The new functional form allows for nested testing of a larger set of commonly implemented functional forms.
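A one-dimensional sketch of the Fourier Flexible idea — a second-order polynomial part augmented with truncated Fourier terms, fitted by plain least squares. The target function is invented for illustration, and the full form works with logarithmic or Box-Cox transformed inputs and multivariate interactions, neither of which is reproduced here:

```python
import numpy as np

def fourier_flexible_design(x, n_terms=3):
    """Design matrix: second-order polynomial part plus a truncated
    Fourier expansion (one-variable sketch only)."""
    x = np.asarray(x, float)
    # Rescale the regressor to [0, 2*pi] so the Fourier terms span the sample.
    s = 2 * np.pi * (x - x.min()) / (x.max() - x.min())
    cols = [np.ones_like(x), x, x ** 2]
    for k in range(1, n_terms + 1):
        cols += [np.cos(k * s), np.sin(k * s)]
    return np.column_stack(cols)

x = np.linspace(0.1, 3.0, 200)
f = np.log(x) + 0.5 * np.sin(4 * x)      # stand-in "unknown" function
X = fourier_flexible_design(x, n_terms=4)
coef, *_ = np.linalg.lstsq(X, f, rcond=None)
print(np.sum((X @ coef - f) ** 2))        # residual sum of squares
```

Because the polynomial-only model is nested in the Fourier-augmented one, the appended trigonometric terms can only reduce the in-sample approximation error — the global-approximation property motivating the form.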

Econometrics doi: 10.3390/econometrics5040052

Authors: Jinyong Hahn Ruoyao Shi

We examine properties of permutation tests in the context of synthetic control. Permutation tests are frequently used methods of inference for synthetic control when the number of potential control units is small. We analyze the permutation tests from a repeated sampling perspective and show that the size of permutation tests may be distorted. Several alternative methods are discussed.

Econometrics doi: 10.3390/econometrics5040049

Authors: Jurgen Doornik Rocco Mosconi Paolo Paruolo

This paper provides some test cases, called circuits, for the evaluation of Gaussian likelihood maximization algorithms of the cointegrated vector autoregressive model. Both I(1) and I(2) models are considered. The performance of algorithms is compared first in terms of effectiveness, defined as the ability to find the overall maximum. The next step is to compare their efficiency and reliability across experiments. The aim of the paper is to commence a collective learning project by the profession on the actual properties of algorithms for cointegrated vector autoregressive model estimation, in order to improve their quality and, as a consequence, also the reliability of empirical research.

Econometrics doi: 10.3390/econometrics5040051

Authors: Yingjie Dong Yiu-Kuen Tse

We propose a new method to implement the Business Time Sampling (BTS) scheme for high-frequency financial data. We compute a time-transformation (TT) function using the intraday integrated volatility estimated by a jump-robust method. The BTS transactions are obtained using the inverse of the TT function. Using our sampled BTS transactions, we test the semi-martingale hypothesis of the stock log-price process and estimate the daily realized volatility. Our method improves the normality approximation of the standardized business-time return distribution. Our Monte Carlo results show that the integrated volatility estimates using our proposed sampling strategy provide smaller root mean-squared error.
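The sampling scheme can be sketched as follows, with cumulative squared returns standing in for the jump-robust integrated-volatility estimate used in the paper: the normalized cumulative sum plays the role of the time-transformation (TT) function, and sampling points are obtained at its inverse on an equally spaced grid:

```python
import numpy as np

def business_time_indices(returns, n_samples):
    """Indices at which to sample so that each business-time interval
    carries (roughly) equal integrated volatility.

    Cumulative squared returns serve as a naive TT function here; the
    paper estimates intraday integrated volatility jump-robustly.
    """
    tt = np.cumsum(np.asarray(returns, float) ** 2)  # monotone TT function
    tt /= tt[-1]                                     # normalize to (0, 1]
    grid = np.linspace(0.0, 1.0, n_samples + 1)[1:]  # equal business-time steps
    return np.searchsorted(tt, grid)                 # inverse of the TT function

# A calm morning followed by a volatile afternoon: business-time sampling
# should place most sampling points in the volatile half of the day.
rng = np.random.default_rng(1)
vol = np.where(np.arange(1000) < 500, 0.5, 2.0)
r = vol * rng.standard_normal(1000)
print(business_time_indices(r, 10))
```

Equal steps in business time translate into calendar-time sampling that is sparse when volatility is low and dense when it is high, which is what improves the normality of the standardized business-time returns.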

Econometrics doi: 10.3390/econometrics5040050

Authors: Martin Ravallion

On the presumption that poorer people tend to work less, it is often claimed that standard measures of inequality and poverty are overestimates. The paper points to a number of reasons to question this claim. It is shown that, while the labor supplies of American adults have a positive income gradient, the heterogeneity in labor supplies generates considerable horizontal inequality. Using equivalent incomes to adjust for effort can reveal either higher or lower inequality depending on the measurement assumptions. With only a modest allowance for leisure as a basic need, the effort-adjusted poverty rate in terms of equivalent incomes rises.

Econometrics doi: 10.3390/econometrics5040048

Authors: Alain Hecq Sean Telg Lenard Lieb

This paper investigates the effect of seasonal adjustment filters on the identification of mixed causal-noncausal autoregressive models. By means of Monte Carlo simulations, we find that standard seasonal filters induce spurious autoregressive dynamics on white noise series, a phenomenon already documented in the literature. Using a symmetric argument, we show that those filters also generate a spurious noncausal component in the seasonally adjusted series, but preserve (although amplify) the existence of causal and noncausal relationships. This result has important implications for modelling economic time series driven by expectation relationships. We consider inflation data on the G7 countries to illustrate these results.

Econometrics doi: 10.3390/econometrics5040047

Authors: Andras Fulop Jun Yu

We develop a new model where the dynamic structure of the asset price, after the fundamental value is removed, is subject to two different regimes. One regime reflects the normal period where the asset price divided by the dividend is assumed to follow a mean-reverting process around a stochastic long run mean. The second regime reflects the bubble period with explosive behavior. Stochastic switches between two regimes and non-constant probabilities of exit from the bubble regime are both allowed. A Bayesian learning approach is employed to jointly estimate the latent states and the model parameters in real time. An important feature of our Bayesian method is that we are able to deal with parameter uncertainty and at the same time, to learn about the states and the parameters sequentially, allowing for real time model analysis. This feature is particularly useful for market surveillance. Analysis using simulated data reveals that our method has good power properties for detecting bubbles. Empirical analysis using price-dividend ratios of S&P 500 highlights the advantages of our method.

Econometrics doi: 10.3390/econometrics5040045

Authors: Apostolos Serletis

William (Bill) Barnett is an eminent econometrician and macroeconomist. [...]

Econometrics doi: 10.3390/econometrics5040046

Authors: Umberto Triacca

The contribution of this paper is to investigate a particular form of lack of invariance of causality statements to changes in the conditioning information sets. Consider a discrete-time three-dimensional stochastic process z = (x, y1, y2)′. We want to study causality relationships between the variables in y = (y1, y2)′ and x. Suppose that in a bivariate framework, we find that y1 Granger causes x and y2 Granger causes x, but these relationships vanish when the analysis is conducted in a trivariate framework. Thus, the causal links, established in a bivariate setting, seem to be spurious. Is this conclusion always correct? In this note, we show that the causal links, in the bivariate framework, might well not be ‘genuinely’ spurious: they could be reflecting causality from the vector y to x. Paradoxically, in this case, it is the non-causality in the trivariate system that is misleading.
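A minimal numpy sketch of the Granger non-causality F-test behind such comparisons — one call conditioning on a single series (the bivariate setting) and one conditioning on both (the trivariate setting). The data-generating process is invented for illustration and is not the example constructed in the note:

```python
import numpy as np

def granger_f_test(x, drivers, p=1):
    """F-statistic for the null that the series in `drivers` do not
    Granger-cause x.

    Restricted model: x_t on a constant and p lags of x; unrestricted
    model adds p lags of each driver. Plain OLS, for illustration only
    (no small-sample refinements, no lag selection).
    """
    x = np.asarray(x, float)
    T = len(x)
    lags = lambda s: np.column_stack([s[p - j - 1:T - j - 1] for j in range(p)])
    y = x[p:]
    Xr = np.column_stack([np.ones(T - p), lags(x)])
    Xu = np.column_stack([Xr] + [lags(np.asarray(d, float)) for d in drivers])
    rss = lambda X: np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
    q = p * len(drivers)
    return ((rss(Xr) - rss(Xu)) / q) / (rss(Xu) / (len(y) - Xu.shape[1]))

# Invented DGP: a common factor w drives x with one lag; y1 and y2 are
# noisy observations of w, so each appears to Granger-cause x.
rng = np.random.default_rng(2)
w = rng.standard_normal(600)
x = np.empty(600)
x[0] = 0.0
x[1:] = w[:-1] + 0.3 * rng.standard_normal(599)
y1 = w + 0.3 * rng.standard_normal(600)
y2 = w + 0.3 * rng.standard_normal(600)
print(granger_f_test(x, [y1]), granger_f_test(x, [y1, y2]))
```

Comparing the statistic across conditioning sets is exactly the exercise whose interpretation the note warns about: a drop in significance when moving from the bivariate to the trivariate system need not mean the bivariate links were spurious.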

Econometrics doi: 10.3390/econometrics5040044

Authors: Antoni Espasa Eva Senra

The Bulletin of EU & US Inflation and Macroeconomic Analysis (BIAM) is a monthly publication that has been reporting real-time analysis and forecasts for inflation and other macroeconomic aggregates for the Euro Area, the US and Spain since 1994. The BIAM inflation forecasting methodology rests on working with useful disaggregation schemes, using leading indicators when possible and applying outlier correction. The paper relates this methodology to corresponding topics in the literature and discusses the design of disaggregation schemes. It concludes that those schemes would be useful if they were formulated according to economic, institutional and statistical criteria, aiming to end up with a set of components with very different statistical properties for which valid single-equation models could be built. The BIAM assessment of each new observation is based on (a) an evaluation of the forecasting errors (innovations) at the components’ level, which provides information on which sectors they come from and allows, when required, for the appropriate correction in the specific models; and (b) an update of the path forecast with its corresponding fan chart. Finally, we show that BIAM real-time Euro Area inflation forecasts compare successfully with the consensus from the ECB Survey of Professional Forecasters, one and two years ahead.

Econometrics doi: 10.3390/econometrics5030043

Authors: Ronald Butler Marc Paolella

A new method for determining the lag order of the autoregressive polynomial in regression models with autocorrelated normal disturbances is proposed. It is based on a sequential testing procedure using conditional saddlepoint approximations and permits the desire for parsimony to be explicitly incorporated, unlike penalty-based model selection methods. Extensive simulation results indicate that the new method is usually competitive with, and often better than, common model selection methods.

Econometrics doi: 10.3390/econometrics5030042

Authors: Econometrics Editorial Office

With the goal of encouraging and motivating young researchers in the field of econometrics, last year the journal Econometrics accepted applications and nominations for the 2017 Young Researcher Award.[...]

Econometrics doi: 10.3390/econometrics5030041

Authors: Jae Kim In Choi

This paper re-evaluates key past results of unit root tests, emphasizing that the use of a conventional level of significance is not in general optimal due to the test having low power. The decision-based significance levels for popular unit root tests, chosen using the line of enlightened judgement under a symmetric loss function, are found to be much higher than conventional ones. We also propose simple calibration rules for the decision-based significance levels for a range of unit root tests. At the decision-based significance levels, many time series in Nelson and Plosser’s (1982) (extended) data set are judged to be trend-stationary, including real income variables, employment variables and money stock. We also find that nearly all real exchange rates covered in Elliott and Pesavento’s (2006) study are stationary; and that most of the real interest rates covered in Rapach and Weber’s (2004) study are stationary. In addition, using a specific loss function, the U.S. nominal interest rate is found to be stationary under economically sensible values of relative loss and prior belief for the null hypothesis.

Econometrics doi: 10.3390/econometrics5030038

Authors: P. Owen

Empirical studies of the determinants of cross-country differences in long-run development are characterized by the ingenious nature of the instruments used. However, scepticism remains about their ability to provide a valid basis for causal inference. This paper examines whether explicit consideration of the statistical adequacy of the underlying reduced form, which provides an embedding framework for the structural equations, can usefully complement economic theory as a basis for assessing instrument choice in the fundamental determinants literature. Diagnostic testing of the reduced forms in influential studies reveals evidence of model misspecification, with parameter non-constancy and spatial dependence of the residuals being almost ubiquitous. This feature, surprisingly not previously identified, potentially undermines the inferences drawn about the structural parameters, such as the quantitative and statistical significance of different fundamental determinants.

Econometrics doi: 10.3390/econometrics5030040

Authors: Andreas Hetland Simon Hetland

The primary contribution of this paper is to establish that the long-swings behavior observed in the market price of Danish housing since the 1970s can be understood by studying the interplay between short-term expectation formation and long-run equilibrium conditions. We introduce an asset market model for housing based on uncertainty rather than risk, which under mild assumptions allows for other forms of forecasting behavior than rational expectations. We test the theory via an I(2) cointegrated VAR model and find that the long-run equilibrium for the housing price corresponds closely to the predictions from the theoretical framework. Additionally, we corroborate previous findings that housing markets are well characterized by short-term momentum forecasting behavior. Our conclusions have wider relevance, since housing prices play a role in the wider Danish economy, and other developed economies, through wealth effects.

Econometrics doi: 10.3390/econometrics5030039

Authors: Jennifer Castle David Hendry Andrew Martinez

Economic policy agencies produce forecasts with accompanying narratives, and base policy changes on the resulting anticipated developments in the target variables. Systematic forecast failure, defined as large, persistent deviations of the outturns from the numerical forecasts, can make the associated narrative false, which would in turn question the validity of the entailed policy implementation. We establish when systematic forecast failure entails failure of the accompanying narrative, which we call forediction failure, and when that in turn implies policy invalidity. Most policy regime changes involve location shifts, which can induce forediction failure unless the policy variable is super exogenous in the policy model. We propose a step-indicator saturation test to check in advance for invariance to policy changes. Systematic forecast failure, or a lack of invariance, previously justified by narratives reveals such stories to be economic fiction.

Econometrics doi: 10.3390/econometrics5030037

Authors: Haci Karatas

The wage curve for Turkey is revisited, taking into account spatial spillovers of regional unemployment rates, using individual-level data for the period 2004–2013 at the 26 NUTS-2 regions and employing FE-2SLS models. The unemployment elasticity of real wages is −0.07 without excluding any group of workers, unlike previous studies. There is strong evidence of spatial effects of the unemployment rate of contiguous regions on the wage level, which are larger, in absolute value, than the effect of the own-region unemployment rate: −0.087 and −0.056, respectively. Male workers are slightly more responsive to the own-region unemployment rate than female workers, whereas female workers are more responsive to the neighboring regions’ unemployment rate. Furthermore, using group-specific unemployment rates in the estimation of the wage curve for various groups, we find that the unemployment elasticity of pay for female workers becomes smaller and loses its significance, whereas the elasticity for male workers changes only slightly. The findings in this paper suggest that individual wages are more responsive to the unemployment rates of proximate regions than to that of an individual’s own region, and that wage curve estimates are sensitive to the use of group-specific unemployment rates.

Econometrics doi: 10.3390/econometrics5030036

Authors: Søren Johansen Morten Tabor

A state space model with an unobserved multivariate random walk and a linear observation equation is studied. The purpose is to find out when the extracted trend cointegrates with its estimator, in the sense that a linear combination is asymptotically stationary. It is found that this result holds for the linear combination of the trend that appears in the observation equation. If identifying restrictions are imposed on either the trend or its coefficients in the linear observation equation, it is shown that there is cointegration between the identified trend and its estimator, if and only if the estimators of the coefficients in the observation equations are consistent at a faster rate than the square root of sample size. The same results are found if the observations from the state space model are analysed using a cointegrated vector autoregressive model. The findings are illustrated by a small simulation study.

Econometrics doi: 10.3390/econometrics5030035

Authors: Massimiliano Caporin Francesco Poli

We retrieve news stories and earnings announcements of the S&P 100 constituents from two professional news providers, along with ten macroeconomic indicators. We also gather data from Google Trends about these firms’ assets as an index of retail investors’ attention. Thus, we create an extensive and innovative database that contains precise information with which to analyze the link between news and asset price dynamics. We detect the sentiment of news stories using a dictionary of sentiment-related words and negations and propose a set of more than five thousand information-based variables that provide natural proxies for the information used by heterogeneous market players. We first shed light on the impact of information measures on daily realized volatility and select them by penalized regression. Then, we perform a forecasting exercise and show that the model augmented with news-related variables provides superior forecasts.

Econometrics doi: 10.3390/econometrics5030033

Authors: Junrong Liu Robin Sickles E. Tsionas

This paper considers a linear panel data model with time varying heterogeneity. Bayesian inference techniques organized around Markov chain Monte Carlo (MCMC) are applied to implement new estimators that combine smoothness priors on unobserved heterogeneity and priors on the factor structure of unobserved effects. The latter have been addressed in a non-Bayesian framework by Bai (2009) and Kneip et al. (2012), among others. Monte Carlo experiments are used to examine the finite-sample performance of our estimators. An empirical study of efficiency trends in the largest banks operating in the U.S. from 1990 to 2009 illustrates our new estimators. The study concludes that scale economies in intermediation services have been largely exploited by these large U.S. banks.

Econometrics doi: 10.3390/econometrics5030034

Authors: Jean-David Fermanian

Copula models have become very popular and well studied among the scientific community.[...]

Econometrics doi: 10.3390/econometrics5030032

Authors: P.A.V.B. Swamy Stephen Hall George Tavlas Peter von zur Muehlen

We appreciate the effort and thoughtfulness of Raunig’s (2017) attempted critique of Swamy et al. (2015).[...]

Econometrics doi: 10.3390/econometrics5030031

Authors: Burkhard Raunig

Swamy et al. (2015) argue that valid instruments cannot exist when a structural model is misspecified. This note shows that this is not true in general. In simple examples valid instruments can exist and can help to estimate parameters of interest.

Econometrics doi: 10.3390/econometrics5030030

Authors: Katarina Juselius

A theory-consistent CVAR scenario describes a set of testable regularities one should expect to see in the data if the basic assumptions of the theoretical model are empirically valid. Using this method, the paper demonstrates that all basic assumptions about the shock structure and steady-state behavior of an imperfect-knowledge-based model for exchange rate determination can be formulated as testable hypotheses on common stochastic trends and cointegration. The model obtains remarkable support for almost every testable hypothesis and is able to adequately account for the long persistent swings in the real exchange rate.

Econometrics doi: 10.3390/econometrics5030029

Authors: Leonardo Salazar

The long and persistent swings in the real exchange rate have for a long time puzzled economists. Recent models built on imperfect knowledge economics seem to provide a theoretical explanation for this persistence. Empirical results, based on a cointegrated vector autoregressive (CVAR) model, provide evidence of error-increasing behavior in prices and interest rates, which is consistent with the persistence observed in the data. The movements in the real exchange rate are compensated by movements in the interest rate spread, which restores the equilibrium in the product market when the real exchange rate moves away from its long-run benchmark value. Fluctuations in the copper price also explain the deviations of the real exchange rate from its long-run equilibrium value.

Econometrics doi: 10.3390/econometrics5030028

Authors: H. Boswijk Paolo Paruolo

Likelihood ratio tests of over-identifying restrictions on the common trends loading matrices in I(2) VAR systems are discussed. It is shown how hypotheses on the common trends loading matrices can be translated into hypotheses on the cointegration parameters. Algorithms for (constrained) maximum likelihood estimation are presented, and asymptotic properties sketched. The techniques are illustrated using the analysis of the PPP and UIP between Switzerland and the US.

Econometrics doi: 10.3390/econometrics5020027

Authors: Mikael Juselius Moshe Kim

The ability to distinguish between sustainable and excessive debt developments is crucial for securing economic stability. By studying US private sector credit loss dynamics, we show that this distinction can be made based on a measure of the incipient aggregate liquidity constraint, the financial obligations ratio. Specifically, as this variable rises, the interaction between credit losses and the business cycle increases, albeit with different intensity depending on whether the problems originate in the household or the business sector. This occurs 1–2 years before each recession in the sample. Our results have implications for macroprudential policy and countercyclical capital-buffers.

Econometrics doi: 10.3390/econometrics5020024

Authors: Paula Simões M. Carvalho Sandra Aleixo Sérgio Gomes Isabel Natário

The Portuguese National Health Line, LS24, is an initiative of the Portuguese Health Ministry which seeks to improve accessibility to health care and to rationalize the use of existing resources by directing users to the most appropriate institutions of the national public health services. This study aims to describe and evaluate the use of LS24. Since for LS24 data, the location attribute is an important source of information to describe its use, this study analyses the number of calls received, at a municipal level, under two different spatial econometric approaches. This analysis is important for future development of decision support indicators in a hospital context, based on the economic impact of the use of this health line. Considering the discrete nature of data, the number of calls to LS24 in each municipality is better modelled by a Poisson model, with some possible covariates: demographic, socio-economic information, characteristics of the Portuguese health system and development indicators. In order to explain model spatial variability, the data autocorrelation can be explained in a Bayesian setting through different hierarchical log-Poisson regression models. A different approach uses an autoregressive methodology, also for count data. A log-Poisson model with a spatial lag autocorrelation component is further considered, better framed under a Bayesian paradigm. With this empirical study we find strong evidence for a spatial structure in the data and obtain similar conclusions with both perspectives of the analysis. This supports the view that the addition of a spatial structure to the model improves estimation, even in the case where some relevant covariates have been included.

Econometrics doi: 10.3390/econometrics5020026

Authors: Ostap Okhrin Anastasija Tetereva

This paper introduces the concept of the realized hierarchical Archimedean copula (rHAC). The proposed approach inherits the ability of the copula to capture the dependencies among financial time series, and combines it with additional information contained in high-frequency data. The considered model does not suffer from the curse of dimensionality, and is able to accurately predict high-dimensional distributions. This flexibility is obtained by using a hierarchical structure in the copula. The time variability of the model is provided by daily forecasts of the realized correlation matrix, which is used to estimate the structure and the parameters of the rHAC. Extensive simulation studies show the validity of the estimator based on this realized correlation matrix, and its performance, in comparison to the benchmark models. The application of the estimator to one-day-ahead Value at Risk (VaR) prediction using high-frequency data exhibits good forecasting properties for a multivariate portfolio.

Econometrics doi: 10.3390/econometrics5020025

Authors: Massimo Franchi Søren Johansen

It is well known that inference on the cointegrating relations in a vector autoregression (CVAR) is difficult in the presence of a near unit root. The test for a given cointegration vector can have rejection probabilities under the null, which vary from the nominal size to more than 90%. This paper formulates a CVAR model allowing for multiple near unit roots and analyses the asymptotic properties of the Gaussian maximum likelihood estimator. Then two critical value adjustments suggested by McCloskey (2017) for the test on the cointegrating relations are implemented for the model with a single near unit root, and it is found by simulation that they eliminate the serious size distortions, with a reasonable power for moderate values of the near unit root parameter. The findings are illustrated with an analysis of a number of different bivariate DGPs.

Econometrics doi: 10.3390/econometrics5020023

Authors: Fabrizio Durante Enrico Foscolo Alex Weissensteiner

We analyze the interdependence between the government yield spread and stock returns of the banking sector in Italy during the years 2003–2015. In a first step, we find that the Spearman’s rank correlation between the yield spread and the Italian banking system changed significantly after September 2008. According to this finding, we split the time window in two sub-periods. While we show that the dependence between the banking industry and changes in the yield spread increased significantly in the second time interval, we find no contagion effects from changes in the yield spread to returns of the banking system.

Econometrics doi: 10.3390/econometrics5020022

Authors: Pierre Perron

This special issue deals with problems related to unit roots and structural change, and the interplay between the two.[...]

Econometrics doi: 10.3390/econometrics5020021

Authors: Benedikt Schamberger Lutz Gruber Claudia Czado

Factor modeling is a popular strategy to induce sparsity in multivariate models as they scale to higher dimensions. We develop Bayesian inference for a recently proposed latent factor copula model, which utilizes a pair copula construction to couple the variables with the latent factor. We use adaptive rejection Metropolis sampling (ARMS) within Gibbs sampling for posterior simulation: Gibbs sampling enables application to Bayesian problems, while ARMS is an adaptive strategy that replaces traditional Metropolis-Hastings updates, which typically require careful tuning. Our simulation study shows favorable performance of our proposed approach both in terms of sampling efficiency and accuracy. We provide an extensive application example using historical data on European financial stocks that forecasts portfolio Value at Risk (VaR) and Expected Shortfall (ES).

Econometrics doi: 10.3390/econometrics5020020

Authors: Eugen Ivanov Aleksey Min Franz Ramsauer

Recently, several copula-based approaches have been proposed for modeling stationary multivariate time series. All of them are based on vine copulas, and they differ in the choice of the regular vine structure. In this article, we consider a copula autoregressive (COPAR) approach to model the dependence of unobserved multivariate factors resulting from two dynamic factor models. However, the proposed methodology is general and applicable to several factor models as well as to other copula models for stationary multivariate time series. An empirical study illustrates the forecasting superiority of our approach for constructing an optimal portfolio of U.S. industrial stocks in the mean-variance framework.

Econometrics doi: 10.3390/econometrics5020019

Authors: Jurgen Doornik

Estimation of the I(2) cointegrated vector autoregressive (CVAR) model is considered. Without further restrictions, estimation of the I(1) model is by reduced-rank regression (Anderson (1951)). Maximum likelihood estimation of I(2) models, on the other hand, always requires iteration. This paper presents a new triangular representation of the I(2) model. This is the basis for a new estimation procedure of the unrestricted I(2) model, as well as the I(2) model with linear restrictions imposed.

Econometrics doi: 10.3390/econometrics5020018

Authors: Marc Paolella

The univariate collapsing method (UCM) for portfolio optimization proceeds as follows: for a given set of portfolio weights and a multivariate set of assets of interest, obtain the predictive mean and a risk measure, such as variance or expected shortfall, of the resulting univariate pseudo-return series; then, via simulation or optimization, repeat this process until the desired portfolio weight vector is obtained. The UCM is well-known conceptually, straightforward to implement, and possesses several advantages over the use of multivariate models but, among other things, has been criticized for being too slow. As such, it does not feature prominently in asset allocation and receives little attention in the academic literature. This paper proposes the use of fast model estimation methods, combined with new heuristics for sampling based on easily determined characteristics of the data, to accelerate and optimize the simulation search. An extensive empirical analysis confirms the viability of the method.
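A minimal sketch of the collapsing step, using random long-only weights and sample variance as the risk measure; the paper's predictive risk measures, fast estimation methods and sampling heuristics are not reproduced here, and the naive random search below is only a stand-in for the simulation search:

```python
import numpy as np

def ucm_search(returns, n_draws=2000, seed=0):
    """Univariate collapsing by naive random search: draw candidate
    weight vectors, collapse the multivariate returns into a univariate
    pseudo-return series, and keep the weights with the smallest sample
    variance (standing in for the predictive risk measure in the text).
    """
    rng = np.random.default_rng(seed)
    T, k = returns.shape
    best_w, best_risk = None, np.inf
    for _ in range(n_draws):
        w = rng.dirichlet(np.ones(k))   # long-only weights summing to one
        risk = (returns @ w).var()      # risk of the collapsed series
        if risk < best_risk:
            best_w, best_risk = w, risk
    return best_w, best_risk

# Three independent assets; the first has much lower volatility, so the
# search should load most of the weight on it.
rng = np.random.default_rng(5)
R = rng.standard_normal((500, 3)) * np.array([0.5, 2.0, 2.0])
w, risk = ucm_search(R)
print(np.round(w, 2), round(float(risk), 3))
```

Each candidate weight vector is evaluated through a single univariate series, which is what makes the method conceptually simple — and why accelerating the per-candidate estimation matters so much for its speed.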

Econometrics doi: 10.3390/econometrics5020017

Authors: Ricardo Quineche Gabriel Rodríguez

This is a simulation-based warning note for practitioners who use the M-GLS unit root tests in the context of structural change with different lag-length selection criteria. With T = 100, we find severe oversize problems when using some criteria, while other criteria produce undersizing behavior. In view of this dilemma, we do not recommend using these tests. While such behavior tends to disappear when T = 250, it is important to note that most empirical applications use smaller sample sizes, such as T = 100 or T = 150. The ADF-GLS test does not present an oversizing or undersizing problem. The only disadvantage of the ADF-GLS test arises in the presence of negative MA(1) correlation, in which case the M-GLS tests are preferable; in all other cases, however, the M-GLS tests are very undersized. When there is a break in the series, selecting the breakpoint using the Supremum method greatly improves the results relative to the Infimum method.
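The kind of size simulation underlying such warnings can be sketched with a plain Dickey-Fuller regression — no GLS detrending, no lag selection and no structural break, so this is a simplified stand-in for the M-GLS and ADF-GLS tests studied in the paper, not a replication:

```python
import numpy as np

def df_tstat(y):
    """Dickey-Fuller t-statistic: regress the first difference of y on a
    constant and the lagged level."""
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones(len(ylag)), ylag])
    beta = np.linalg.lstsq(X, dy, rcond=None)[0]
    e = dy - X @ beta
    s2 = e @ e / (len(dy) - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

def rejection_rate(T=100, reps=2000, crit=-2.86, seed=0):
    """Empirical size: fraction of unit-root (null-true) samples rejected
    at `crit`, the asymptotic 5% critical value for the constant case."""
    rng = np.random.default_rng(seed)
    stats = [df_tstat(np.cumsum(rng.standard_normal(T))) for _ in range(reps)]
    return float(np.mean(np.array(stats) < crit))

print(rejection_rate())  # should land in the neighborhood of the nominal 5%
```

An oversized test is one whose empirical rejection rate under the null noticeably exceeds the nominal level; running such a loop across sample sizes and lag-selection rules is exactly how the note's warnings are generated.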

Econometrics doi: 10.3390/econometrics5020016

Authors: Fabrizio Cipollini Robert Engle Giampiero Gallo

We discuss several multivariate extensions of the Multiplicative Error Model to take into account dynamic interdependence and contemporaneously correlated innovations (vector MEM or vMEM). We suggest copula functions to link Gamma marginals of the innovations, in a specification where past values and conditional expectations of the variables can be simultaneously estimated. Results with realized volatility, volumes and number of trades of the JNJ stock show that significantly superior realized volatility forecasts are delivered with a fully interdependent vMEM relative to a single equation. Alternatives involving log–Normal or semiparametric formulations produce substantially equivalent results.

]]>Econometrics doi: 10.3390/econometrics5010015

Authors: Chia-Lin Chang Michael McAleer

An early development in testing for causality (technically, Granger non-causality) in the conditional variance (or volatility) associated with financial returns was the portmanteau statistic for non-causality in the variance of Cheng and Ng (1996). A subsequent development was the Lagrange Multiplier (LM) test of non-causality in the conditional variance by Hafner and Herwartz (2006), who provided simulation results to show that their LM test was more powerful than the portmanteau statistic for sample sizes of 1000 and 4000 observations. While the LM test for causality proposed by Hafner and Herwartz (2006) is an interesting and useful development, it is nonetheless arbitrary. In particular, the specification on which the LM test is based does not rely on an underlying stochastic process, so the alternative hypothesis is also arbitrary, which can affect the power of the test. The purpose of the paper is to derive a simple test for causality in volatility that provides regularity conditions arising from the underlying stochastic process, namely a random coefficient autoregressive process, and a test for which the (quasi-) maximum likelihood estimates have valid asymptotic properties under the null hypothesis of non-causality. The simple test is intuitively appealing as it is based on an underlying stochastic process, is sympathetic to Granger’s (1969, 1988) notion of time series predictability, is easy to implement, and has a regularity condition that is not available in the LM test.

]]>Econometrics doi: 10.3390/econometrics5010014

Authors: Jan Kiviet Milan Pleus Rutger Poldermans

Studies employing Arellano-Bond and Blundell-Bond generalized method of moments (GMM) estimation for linear dynamic panel data models are growing exponentially in number. However, it is hard for researchers to make a reasoned choice between the many different possible implementations of these estimators and associated tests. By simulation, we examine the effects of many options regarding: (i) reducing, extending or modifying the set of instruments; (ii) specifying the weighting matrix in relation to the type of heteroskedasticity; (iii) using (robustified) 1-step or (corrected) 2-step variance estimators; (iv) employing 1-step or 2-step residuals in Sargan-Hansen overall or incremental overidentification restrictions tests. This is all done for models in which some regressors may be either strictly exogenous, predetermined or endogenous. Surprisingly, particular asymptotically optimal and relatively robust weighting matrices are found to be superior in finite samples to ostensibly more appropriate versions. Most of the variants of tests for overidentification and coefficient restrictions show serious deficiencies. The variance of the individual effects is shown to be a major determinant of the poor quality of most asymptotic approximations; therefore, the accurate estimation of this nuisance parameter is investigated. A modification of GMM is found to have some potential when the cross-sectional heteroskedasticity is pronounced and the time-series dimension of the sample is not too small. Finally, all techniques are applied to actual data and lead to insights which differ considerably from those published earlier.

]]>Econometrics doi: 10.3390/econometrics5010013

Authors: Bruno Rémillard

In this paper, we study the asymptotic behavior of the sequential empirical process and the sequential empirical copula process, both constructed from residuals of multivariate stochastic volatility models. Applications for the detection of structural changes and specification tests of the distribution of innovations are discussed. It is also shown that if the stochastic volatility matrices are diagonal, which is the case if the univariate time series are estimated separately instead of being jointly estimated, then the empirical copula process behaves as if the innovations were observed; a remarkable property. As a by-product, one also obtains the asymptotic behavior of rank-based measures of dependence applied to residuals of these time series models.

]]>Econometrics doi: 10.3390/econometrics5010012

Authors: Aparna Sengupta

We consider the problem of testing for a structural break in the spatial lag parameter in a spatial autoregressive panel model. We propose a likelihood ratio test of the null hypothesis of no break against the alternative hypothesis of a single break. The limiting distribution of the test is derived under the null when both the number of individual units N and the number of time periods T are large, or when N is fixed and T is large. The asymptotic critical values of the test statistic can be obtained analytically. We also propose a break-date estimator that can be employed to determine the location of the break point following evidence against the null hypothesis. We present Monte Carlo evidence to show that the proposed procedure performs well in finite samples. Finally, we consider an empirical application of the test on budget spillovers and interdependence in fiscal policy within the U.S. states.

]]>Econometrics doi: 10.3390/econometrics5010011

Authors: Jesús Clemente María Gadea Antonio Montañés Marcelo Reyes

This study reconsiders the common unit root/co-integration approach to testing for the Fisher effect in the economies of the G7 countries. We first show that nominal interest and inflation rates are better represented as I(0) variables. We then use the Bai-Perron procedure to show the existence of structural changes in the Fisher equation. After considering these breaks, we find very limited evidence of a total Fisher effect, as the transmission coefficient of expected inflation rates to nominal interest rates is very different from one.

]]>Econometrics doi: 10.3390/econometrics5010010

Authors: Pravin Trivedi David Zimmer

Copulas have enjoyed increased usage in many areas of econometrics, including applications with discrete outcomes. However, Genest and Nešlehová (2007) present evidence that copulas for discrete outcomes are not identified, particularly when those discrete outcomes follow count distributions. This paper confirms the Genest and Nešlehová result using a series of simulation exercises. The paper then proceeds to show that those identification concerns diminish if the model has a regression structure such that the exogenous variable(s) generates additional variation in the outcomes and thus more completely covers the outcome domain.

]]>Econometrics doi: 10.3390/econometrics5010008

Authors: P.A.V.B. Swamy Jatinder Mehta I-Lok Chang

Using the net effect of all relevant regressors omitted from a model to form its error term is incorrect because the coefficients and error term of such a model are non-unique. Non-unique coefficients cannot possess consistent estimators. Uniqueness can be achieved if, instead, one uses certain “sufficient sets” of (relevant) regressors omitted from each model to represent the error term. In this case, the unique coefficient on any non-constant regressor takes the form of the sum of a bias-free component and omitted-regressor biases. Measurement-error bias can also be incorporated into this sum. We show that if our procedures are followed, accurate estimation of bias-free components is possible.

]]>Econometrics doi: 10.3390/econometrics5010009

Authors: Jochen Heberle Cristina Sattarhoff

This paper considers the algorithmic implementation of the heteroskedasticity and autocorrelation consistent (HAC) estimation problem for covariance matrices of parameter estimators. We introduce a new algorithm, mainly based on the fast Fourier transform, and show via computer simulation that our algorithm is up to 20 times faster than well-established alternative algorithms. The cumulative effect is substantial if the HAC estimation problem has to be solved repeatedly. Moreover, the bandwidth parameter has no impact on this performance. We provide a general description of the new algorithm as well as code for a reference implementation in R.
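The core idea, computing all sample autocovariances in a single FFT pass and then applying kernel weights, can be sketched as follows. This is a hypothetical illustration with a Bartlett kernel for a scalar series, not the authors' R reference implementation for full covariance matrices:

```python
import numpy as np

def hac_lrv_fft(u, bandwidth):
    """Bartlett-kernel long-run variance of a series, with all
    autocovariances computed in one pass via the FFT."""
    u = np.asarray(u, float) - np.mean(u)
    T = len(u)
    nfft = 2 ** int(np.ceil(np.log2(2 * T)))   # zero-pad to avoid wrap-around
    f = np.fft.rfft(u, nfft)
    # inverse transform of the periodogram gives gamma_0 .. gamma_{T-1}
    acov = np.fft.irfft(f * np.conj(f), nfft)[:T] / T
    w = 1 - np.arange(1, bandwidth + 1) / (bandwidth + 1)  # Bartlett weights
    return acov[0] + 2 * np.sum(w * acov[1:bandwidth + 1])

# sanity check: for white noise the long-run variance is close to the variance
rng = np.random.default_rng(2)
u = rng.standard_normal(5000)
lrv = hac_lrv_fft(u, bandwidth=10)
```

The FFT pass costs O(T log T) regardless of the bandwidth, which is consistent with the abstract's observation that the bandwidth parameter has essentially no impact on running time.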

]]>Econometrics doi: 10.3390/econometrics5010006

Authors: Ragnar Nymoen

This paper reviews the development of labour market institutions in Norway, shows how labour market regulation has been related to the macroeconomic development, and presents dynamic econometric models of nominal and real wages. Single-equation and multi-equation models are reported. The econometric modelling uses a new data set with historical time series of wages and prices, unemployment and labour productivity. Impulse indicator saturation is used to achieve robust estimation of focus parameters, and the breaks are interpreted in the light of the historical overview. A relatively high degree of constancy of the key parameters of the wage-setting equation is documented, over a considerably longer historical period than in earlier studies. The evidence is consistent with the view that the evolving system of collective labour market regulation has, over long periods, delivered a certain necessary level of coordination of wage and price setting. Nevertheless, there is also evidence that global forces have been at work for a long time, linking real wages to productivity trends in the same way as in countries with very different institutions and macroeconomic development.

]]>Econometrics doi: 10.3390/econometrics5010007

Authors: Econometrics Editorial Office

The editors of Econometrics would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2016.[...]

]]>Econometrics doi: 10.3390/econometrics5010005

Authors: Seong Chang Pierre Perron

This paper considers testing procedures for the null hypothesis of a unit root process against the alternative of a fractional process, called a fractional unit root test. We extend the Lagrange Multiplier (LM) tests of Robinson (1994) and Tanaka (1999), which are locally best invariant and uniformly most powerful, to allow for a slope change in trend with or without a concurrent level shift under both the null and alternative hypotheses. We show that the limit distribution of the proposed LM tests is standard normal. Finite sample simulation experiments show that the tests have good size and power. As an empirical analysis, we apply the tests to the Consumer Price Indices of the G7 countries.

]]>Econometrics doi: 10.3390/econometrics5010001

Authors: Luis Álvarez

Filters constructed on the basis of standard local polynomial regression (LPR) methods have been used in the literature to estimate the business cycle. We provide a frequency domain interpretation of the contrast filter obtained by the difference of a series and its long-run LPR component and show that it operates as a kind of high-pass filter, so that it provides a noisy estimate of the cycle. We alternatively propose band-pass local polynomial regression methods aimed at isolating the cyclical component. Results are compared to standard high-pass and band-pass filters. Procedures are illustrated using the US GDP series.
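The contrast-filter idea, subtracting a long-run local polynomial component from the series, can be illustrated with an off-the-shelf LPR smoother. The Savitzky-Golay filter, the window length, and the simulated series below are assumptions chosen for illustration, not the authors' band-pass construction:

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(3)
t = np.arange(400)
trend = 0.01 * t                       # long-run component
cycle = np.sin(2 * np.pi * t / 40)     # "business cycle" at period 40
x = trend + cycle + 0.1 * rng.standard_normal(400)

# long-run LPR component: local cubic fit over a wide window
lpr_trend = savgol_filter(x, window_length=101, polyorder=3)

# contrast filter: the series minus its long-run LPR component,
# which acts as a (noisy) high-pass estimate of the cycle
contrast = x - lpr_trend
```

Because the contrast filter passes all frequencies above the trend band, including the noise, it delivers a noisy cycle estimate; the band-pass LPR methods proposed in the paper aim to isolate the cyclical band directly.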

]]>Econometrics doi: 10.3390/econometrics5010004

Authors: Jingjing Yang

This paper discusses the consistency of trend break point estimators when the number of breaks is underspecified. The consistency of break point estimators in a simple location model with level shifts has been well documented by researchers under various settings, including extensions such as allowing a time trend in the model. Despite the consistency of break point estimators of level shifts, there are few papers on the consistency of trend shift break point estimators in the presence of an underspecified break number. The simulation study and asymptotic analysis in this paper show that the trend shift break point estimator does not converge to the true break points when the break number is underspecified. In the case of two trend shifts, the inconsistency problem worsens if the magnitudes of the breaks are similar and the breaks are either both positive or both negative. The limiting distribution for the trend break point estimator is developed and closely approximates the finite sample performance.

]]>Econometrics doi: 10.3390/econometrics5010003

Authors: Holger Fink Yulia Klimova Claudia Czado Jakob Stöber

For nearly every major stock market there exist equity and implied volatility indices. These play important roles within finance: be it as a benchmark, a measure of general uncertainty or a way of investing or hedging. It is well known in the academic literature that correlations and higher moments between different indices tend to vary over time. However, to the best of our knowledge, no one has yet considered a global setup including both equity and implied volatility indices of various continents, and allowing for a changing dependence structure. We aim to close this gap by applying Markov-switching R-vine models to investigate the existence of different global dependence regimes. In particular, we identify times of “normal” and “abnormal” states within a data set consisting of North-American, European and Asian indices. Our results confirm the existence of joint points in time at which global regime switching between two different R-vine structures takes place.

]]>Econometrics doi: 10.3390/econometrics5010002

Authors: Cheol-Keun Cho Timothy Vogelsang

This paper addresses tests for structural change in a weakly dependent time series regression. The cases of full structural change and partial structural change are considered. Heteroskedasticity-autocorrelation (HAC) robust Wald tests based on nonparametric covariance matrix estimators are explored. Fixed-b theory is developed for the HAC estimators which allows fixed-b approximations for the test statistics. For the case of the break date being known, the fixed-b limits of the statistics depend on the break fraction and the bandwidth tuning parameter as well as on the kernel. When the break date is unknown, supremum, mean and exponential Wald statistics are commonly used for testing the presence of the structural break. Fixed-b limits of these statistics are obtained and critical values are tabulated. A simulation study compares the finite sample properties of existing tests and proposed tests.

]]>Econometrics doi: 10.3390/econometrics4040050

Authors: Bernt Stigum

The paper begins with a figurative representation of the contrast between present-day and formal applied econometrics. An explication of the status of bridge principles in applied econometrics follows. To illustrate the concepts used in the explication, the paper presents a simultaneous-equation model of the equilibrium configurations of a perfectly competitive commodity market. With artificially generated data, I carry out two empirical analyses of such a market that contrast the prescriptions of formal econometrics in the tradition of Ragnar Frisch with the commands of present-day econometrics in the tradition of Trygve Haavelmo. At the end, I demonstrate that the bridge principles I use in the formal-econometric analysis are valid in the Real World, that is, in the world in which my data reside.

]]>Econometrics doi: 10.3390/econometrics4040049

Authors: Man Wang Ngai Chan

Testing for the equality of integration orders is an important topic in time series analysis because it constitutes an essential step in testing for (fractional) cointegration in the bivariate case. For the multivariate case, there are several versions of cointegration, and the version given in Robinson and Yajima (2002) has received much attention. In this definition, a time series vector is partitioned into several sub-vectors, and the elements in each sub-vector have the same integration order. Furthermore, this time series vector is said to be cointegrated if there exists a cointegration in any of the sub-vectors. Under such a circumstance, testing for the equality of integration orders constitutes an important problem. However, for multivariate fractionally integrated series, most tests focus on stationary and invertible series and become invalid under the presence of cointegration. Hualde (2013) overcomes these difficulties with a residual-based test for a bivariate time series. For the multivariate case, one possible extension of this test involves testing for an array of bivariate series, which becomes computationally challenging as the dimension of the time series increases. In this paper, a one-step residual-based test is proposed to deal with the multivariate case that overcomes the computational issue. Under certain regularity conditions, the test statistic has an asymptotic standard normal distribution under the null hypothesis of equal integration orders and diverges to infinity under the alternative. As reported in a Monte Carlo experiment, the proposed test possesses satisfactory sizes and powers.

]]>Econometrics doi: 10.3390/econometrics4040048

Authors: Kyoo Kim

This paper studies an alternative bias correction for the M-estimator, which is obtained by correcting the moment equations in the spirit of Firth (1993). In particular, this paper compares the stochastic expansions of the analytically-bias-corrected estimator and the alternative estimator and finds that the third-order stochastic expansions of these two estimators are identical. This implies that at least in terms of the third-order stochastic expansion, we cannot improve on the simple one-step bias correction by using the bias correction of moment equations. This finding suggests that the comparison between the one-step bias correction and the method of correcting the moment equations or the fully-iterated bias correction should be based on the stochastic expansions higher than the third order.

]]>Econometrics doi: 10.3390/econometrics4040047

Authors: Richard Ashley Xiaojin Sun

The two-step GMM estimators of Arellano and Bond (1991) and Blundell and Bond (1998) for dynamic panel data models have been widely used in empirical work; however, neither of them performs well in small samples with weak instruments. The continuous-updating GMM estimator proposed by Hansen, Heaton, and Yaron (1996) is in principle able to reduce the small-sample bias, but it involves high-dimensional optimizations when the number of regressors is large. This paper proposes a computationally feasible variation on these standard two-step GMM estimators by applying the idea of continuous-updating to the autoregressive parameter only, given the fact that the absolute value of the autoregressive parameter is less than unity as a necessary requirement for the data-generating process to be stationary. We show that our subset-continuous-updating method does not alter the asymptotic distribution of the two-step GMM estimators, and it therefore retains consistency. Our simulation results indicate that the subset-continuous-updating GMM estimators outperform their standard two-step counterparts in finite samples in terms of the estimation accuracy on the autoregressive parameter and the size of the Sargan-Hansen test.

]]>Econometrics doi: 10.3390/econometrics4040046

Authors: Richard Golden Steven Henley Halbert White T. Kashner

Generalized Information Matrix Tests (GIMTs) have recently been used for detecting the presence of misspecification in regression models in both randomized controlled trials and observational studies. In this paper, a unified GIMT framework is developed for the purpose of identifying, classifying, and deriving novel model misspecification tests for finite-dimensional smooth probability models. These GIMTs include previously published as well as newly developed information matrix tests. To illustrate the application of the GIMT framework, we derived and assessed the performance of new GIMTs for binary logistic regression. Although all GIMTs exhibited good level and power performance for the larger sample sizes, GIMT statistics with fewer degrees of freedom and derived using log-likelihood third derivatives exhibited improved level and power performance.

]]>Econometrics doi: 10.3390/econometrics4040044

Authors: Badi Baltagi Chihwa Kao Bin Peng

This paper considers the problem of testing cross-sectional correlation in large panel data models with serially-correlated errors. It finds that existing tests for cross-sectional correlation encounter size distortions when there is serial correlation in the errors. To control the size, this paper proposes a modification of Pesaran’s Cross-sectional Dependence (CD) test to account for serial correlation of an unknown form in the error term. We derive the limiting distribution of this test as N, T → ∞. The test is distribution free and allows for unknown forms of serial correlation in the errors. Monte Carlo simulations show that the test has good size and power for large panels when serial correlation in the errors is present.
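For reference, the unmodified Pesaran CD statistic that the proposed test builds on can be computed as below; the simulated residuals are illustrative, and the paper's serial-correlation adjustment is not reproduced:

```python
import numpy as np

def cd_statistic(E):
    """Pesaran's CD statistic from a (T x N) residual matrix.
    Under the null of no cross-sectional correlation (and no serial
    correlation) it is asymptotically standard normal."""
    T, N = E.shape
    R = np.corrcoef(E, rowvar=False)           # N x N pairwise correlations
    iu = np.triu_indices(N, k=1)               # the N(N-1)/2 upper pairs
    return np.sqrt(2 * T / (N * (N - 1))) * R[iu].sum()

rng = np.random.default_rng(4)
E = rng.standard_normal((200, 30))             # independent units: CD ~ N(0, 1)
cd = cd_statistic(E)

common = rng.standard_normal((200, 1))         # a strong common factor
cd_alt = cd_statistic(E + common)              # cross-correlated: CD explodes
```

The paper's contribution is precisely to retain this statistic's standard normal limit when the errors are serially correlated, which the plain version above does not guarantee.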

]]>Econometrics doi: 10.3390/econometrics4040045

Authors: Uwe Hassler Mehdi Hosseinkouchack

We consider a class of panel tests of the null hypothesis of no cointegration, and of the null of cointegration. All tests under investigation rely on single equations estimated by least squares, and they may be residual-based or not. We focus on test statistics computed from regressions with intercept only (i.e., without detrending) and with at least one of the regressors (integrated of order 1) being dominated by a linear time trend. In such a setting, often encountered in practice, the limiting distributions and critical values provided for, and applied in, the “intercept only” situation are not correct. It is demonstrated that their usage results in size distortions growing with the panel size N. Moreover, we show which distributions are appropriate, and how correct critical values can be obtained from the literature.

]]>Econometrics doi: 10.3390/econometrics4040043

Authors: Kjersti Aas

This survey reviews the large and growing literature on the use of pair-copula constructions (PCCs) in financial applications. Using a PCC, multivariate data that exhibit complex patterns of dependence can be modeled using bivariate copulae as simple building blocks. Hence, this model represents a very flexible way of constructing higher-dimensional copulae. In this paper, we survey inference methods and goodness-of-fit tests for such models, as well as empirical applications of the PCCs in finance and economics.

]]>Econometrics doi: 10.3390/econometrics4040041

Authors: María Gadea Ana Gómez-Loscos Antonio Montañés

This study investigates changes in the relationship between oil prices and the US economy from a long-term perspective. Although neither of the two series (oil price and GDP growth rates) presents structural breaks in mean, we identify different volatility periods in both of them, separately. From a multivariate perspective, we do not observe a significant effect between changes in oil prices and GDP growth when considering the full period. However, we find a significant relationship in some subperiods by carrying out a rolling analysis and by investigating the presence of structural breaks in the multivariate framework. Finally, we obtain evidence, by means of a time-varying VAR, that the impact of the oil price shock on GDP growth has declined over time. We also observe that the negative effect is greater at the time of large oil price increases, supporting previous evidence of nonlinearity in the relationship.

]]>Econometrics doi: 10.3390/econometrics4040042

Authors: Bruno Wichmann Minjie Chen Wiktor Adamowicz

The discrete choice literature has evolved from the analysis of a choice of a single item from a fixed choice set to the incorporation of a vast array of more complex representations of preferences and choice set formation processes into choice models. Modern discrete choice models include rich specifications of heterogeneity, multi-stage processing for choice set determination, dynamics, and other elements. However, discrete choice models still largely represent socially isolated choice processes: individuals are not affected by the preferences or choices of other individuals. There is a developing literature on the impact of social networks on preferences or the utility function in a random utility model, but little examination of such processes for choice set formation. There is also emerging evidence in the marketplace of the influence of friends on choice sets and choices. In this paper, we develop discrete choice models that incorporate formal social network structures into the choice set formation process in a two-stage random utility framework. We assess models where peers may affect not only the alternatives that individuals consider or include in their choice sets, but also consumption choices. We explore the properties of our models and evaluate the extent of “errors” in the assessment of preferences, economic welfare measures and market shares if network effects are present but are not accounted for in the econometric model. Our results shed light on the importance of evaluating peer or network effects on the inclusion/exclusion of alternatives in a random utility choice framework.

]]>Econometrics doi: 10.3390/econometrics4040040

Authors: Kerry Patterson

I am pleased to announce that, following my retirement on the 30th September 2016, Marc Paolella will become Editor-in-Chief (EiC) of Econometrics.

]]>Econometrics doi: 10.3390/econometrics4040039

Authors: Wen Xu

Time-varying volatility is common in macroeconomic data and has been incorporated into macroeconomic models in recent work. Dynamic panel data models have become increasingly popular in macroeconomics to study common relationships across countries or regions. This paper estimates dynamic panel data models with stochastic volatility by maximizing an approximate likelihood obtained via Rao-Blackwellized particle filters. Monte Carlo studies reveal the good and stable performance of our particle filter-based estimator. When the volatility of volatility is high, or when regressors are absent but stochastic volatility exists, our approach can outperform both the maximum likelihood estimator that neglects stochastic volatility and generalized method of moments (GMM) estimators.

]]>Econometrics doi: 10.3390/econometrics4030038

Authors: George Judge

In this paper, we suggest an approach to recovering behavior-related, preference-choice network information from observational data. We model the process as a self-organized, behavior-based, random exponential network-graph system. To address the unknown nature of the sampling model in recovering behavior-related network information, we use the Cressie-Read (CR) family of divergence measures and the corresponding information theoretic entropy basis for estimation, inference, model evaluation, and prediction. Examples are included to clarify how entropy-based information theoretic methods are directly applicable to recovering the behavioral network probabilities in this fundamentally underdetermined, ill-posed inverse recovery problem.

]]>Econometrics doi: 10.3390/econometrics4030037

Authors: M. Peiris Manabu Asai

In recent years, fractionally-differenced processes have received a great deal of attention due to their flexibility in financial applications with long memory. This paper revisits the class of generalized fractionally-differenced processes generated by Gegenbauer polynomials and the ARMA structure (GARMA) with both long memory and time-dependent innovation variance. We establish the existence and uniqueness of second-order solutions. We also extend this family to allow the innovations to follow GARCH and stochastic volatility (SV) processes. Under certain regularity conditions, we give asymptotic results for the approximate maximum likelihood estimator for the GARMA-GARCH model. We discuss a Monte Carlo likelihood method for the GARMA-SV model and investigate finite sample properties via Monte Carlo experiments. Finally, we illustrate the usefulness of this approach using monthly inflation rates for France, Japan and the United States.

]]>Econometrics doi: 10.3390/econometrics4030036

Authors: Eduardo Souza-Rodrigues

This paper considers a nonparametric regression model for cross-sectional data in the presence of common shocks. Common shocks are allowed to be very general in nature; they do not need to be finite dimensional with a known (small) number of factors. I investigate the properties of the Nadaraya-Watson kernel estimator and determine how general the common shocks can be while still obtaining meaningful kernel estimates. Restrictions on the common shocks are necessary because kernel estimators typically manipulate conditional densities, and conditional densities do not necessarily exist in the present case. By appealing to disintegration theory, I provide sufficient conditions for the existence of such conditional densities and show that the estimator converges in probability to the Kolmogorov conditional expectation given the sigma-field generated by the common shocks. I also establish the rate of convergence and the asymptotic distribution of the kernel estimator.

]]>Econometrics doi: 10.3390/econometrics4030035

Authors: <em>Econometrics</em> Editorial Office

Econometrics is pleased to announce the commissioning of a new series of Special Issues dedicated to celebrated econometricians of our time.[...]

]]>Econometrics doi: 10.3390/econometrics4030034

Authors: Xin Zhang Donggyu Kim Yazhen Wang

This paper develops a method to improve the estimation of jump variation using high frequency data in the presence of market microstructure noise. Accurate estimation of jump variation is in high demand, as it is an important component of volatility in finance for portfolio allocation, derivative pricing and risk management. The method is a two-step procedure of detection and estimation. In Step 1, we detect the jump locations by performing a wavelet transformation on the observed noisy price processes. Since wavelet coefficients are significantly larger at the jump locations than elsewhere, we calibrate the wavelet coefficients through a threshold and declare jump points where the absolute wavelet coefficients exceed the threshold. In Step 2, we estimate the jump variation by averaging the noisy price processes on each side of a declared jump point and then taking the difference between the two averages. Specifically, for each jump location detected in Step 1, we compute two averages from the observed noisy price processes, one before the detected jump location and one after it, and then take their difference to estimate the jump variation. Theoretically, we show that the two-step procedure based on average realized volatility processes can achieve a convergence rate close to O_P(n^(-4/9)), which is better than the rate O_P(n^(-1/4)) for the procedure based on the original noisy process, where n is the sample size. Numerically, the method based on average realized volatility processes indeed performs better than that based on the price processes. Empirically, we study the distribution of jump variation using Dow Jones Industrial Average stocks and compare the results using the original price process and the average realized volatility processes.
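A stylized version of the two-step procedure can be sketched with Haar-like coefficients (differences of local averages); the window width, the robust threshold rule, and the simulated price path below are illustrative assumptions, not the paper's calibrated wavelet method:

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 2000, 25
true_loc, true_size = 1200, 1.5
price = np.cumsum(0.01 * rng.standard_normal(n))   # efficient log-price
price[true_loc:] += true_size                      # one true jump
noisy = price + 0.05 * rng.standard_normal(n)      # microstructure noise

# Step 1 (detection): Haar-like coefficients = difference of the local
# averages on each side of a point; large values flag candidate jumps
means = np.convolve(noisy, np.ones(k) / k, mode="valid")
d = means[k:] - means[:-k]                         # boundary at index i + k
thr = 6 * np.median(np.abs(d - np.median(d)))      # robust MAD-type threshold
cand = np.flatnonzero(np.abs(d) > thr) + k         # candidate jump locations

# Step 2 (estimation): at the strongest candidate, the jump size is the
# difference between the average noisy price after and before the point
loc = cand[np.argmax(np.abs(d[cand - k]))]
size_hat = noisy[loc:loc + k].mean() - noisy[loc - k:loc].mean()
```

Averaging on each side of the declared jump is what suppresses the microstructure noise in the size estimate; the paper's averaging of realized volatility processes plays the analogous role at a higher level.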

]]>Econometrics doi: 10.3390/econometrics4030033

Authors: Kerry Patterson

Econometrics has had a distinguished start publishing over 92 articles since 2013, with 76,475 downloads.[...]

]]>Econometrics doi: 10.3390/econometrics4030032

Authors: Umberto Triacca

A distance between pairs of sets of autoregressive moving average (ARMA) processes is proposed. Its main properties are discussed. The paper also shows how the proposed distance finds application in time series analysis. In particular it can be used to evaluate the distance between portfolios of ARMA models or the distance between vector autoregressive (VAR) models.

]]>Econometrics doi: 10.3390/econometrics4030031

Authors: Flavia Barsotti Simona Sanfelici

Default probability is a fundamental variable determining the creditworthiness of a firm, and equity volatility estimation plays a key role in its evaluation. Assuming a structural credit risk modeling approach, we study the impact of choosing different non-parametric equity volatility estimators on default probability evaluation when market microstructure noise is considered. A general stochastic volatility framework with jumps for the underlying asset dynamics is defined inside a Merton-like structural model. To estimate the volatility risk component of a firm we use high-frequency equity data: market microstructure noise is introduced as a direct effect of observing noisy high-frequency equity prices. A Monte Carlo simulation analysis is conducted to (i) test the performance of alternative non-parametric equity volatility estimators in their ability to filter out the microstructure noise and back out the true unobservable asset volatility; (ii) study the effects of different non-parametric estimation techniques on default probability evaluation. The impact of the non-parametric volatility estimators on risk evaluation is not negligible: a sensitivity analysis over alternative values of the leverage parameter and average jump size reveals that the characteristics of the dataset are crucial in determining the proper estimator to consider from a credit risk perspective.

]]>Econometrics doi: 10.3390/econometrics4030030

Authors: Bhargab Chattopadhyay Shyamal De

The Gini index is a widely used measure of economic inequality. This article develops a theory and methodology for constructing a confidence interval for the Gini index with a specified confidence coefficient and a specified width, without assuming any specific distribution of the data. Fixed-sample-size methods cannot simultaneously achieve both a specified confidence coefficient and a fixed width. We develop a purely sequential procedure for interval estimation of the Gini index with a specified confidence coefficient and a specified margin of error. Optimality properties of the proposed method, namely first-order asymptotic efficiency and asymptotic consistency, are proved under mild moment assumptions on the distribution of the data.
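
As a toy illustration of a fixed-width sequential scheme, the sketch below computes the Gini index from sorted data and keeps sampling one observation at a time until a normal-approximation interval (with a jackknife variance estimate) is narrower than a target margin. The stopping rule, pilot size, normal quantile and exponential data are simplifying assumptions, not the authors' procedure:

```python
import math, random

def gini(xs):
    """Gini index via the sorted-data formula 2*sum(i*x_(i))/(n*sum x) - (n+1)/n."""
    xs = sorted(xs)
    n = len(xs)
    return 2 * sum((i + 1) * x for i, x in enumerate(xs)) / (n * sum(xs)) - (n + 1) / n

def jackknife_se(xs):
    """Leave-one-out jackknife standard error of the Gini estimator."""
    n = len(xs)
    loo = [gini(xs[:i] + xs[i + 1:]) for i in range(n)]
    mean = sum(loo) / n
    return math.sqrt((n - 1) / n * sum((g - mean) ** 2 for g in loo))

def sequential_gini_ci(draw, d=0.06, z=1.96, pilot=30, max_n=300):
    """Sample until the half-width z*se drops below the margin of error d."""
    xs = [draw() for _ in range(pilot)]
    while z * jackknife_se(xs) > d and len(xs) < max_n:
        xs.append(draw())
    g, half = gini(xs), z * jackknife_se(xs)
    return g - half, g + half, len(xs)

random.seed(1)
lo, hi, n = sequential_gini_ci(lambda: random.expovariate(1.0))
print(lo, hi, n)  # true Gini of the unit exponential distribution is 0.5
```

The sample size at which the loop stops is itself random, which is the defining feature of a purely sequential procedure.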

]]>Econometrics doi: 10.3390/econometrics4020029

Authors: Daniel Griffith Yongwan Chun

The Ramsey regression equation specification error test (RESET) furnishes a diagnostic for omitted variables in a linear regression model specification (i.e., the null hypothesis is no omitted variables). Integer powers of fitted values from a regression analysis are introduced as additional covariates in a second regression analysis. The former regression model can be considered restricted, whereas the latter model can be considered unrestricted; this first model is nested within this second model. A RESET significance test is conducted with an F-test using the error sums of squares and the degrees of freedom for the two models. For georeferenced data, eigenvectors can be extracted from a modified spatial weights matrix, and included in a linear regression model specification to account for the presence of nonzero spatial autocorrelation. The intuition underlying this methodology is that these synthetic variates function as surrogates for omitted variables. Accordingly, a restricted regression model without eigenvectors should indicate an omitted variables problem, whereas an unrestricted regression model with eigenvectors should result in a failure to reject the RESET null hypothesis. This paper furnishes eleven empirical examples, covering a wide range of spatial attribute data types, that illustrate the effectiveness of eigenvector spatial filtering in addressing the omitted variables problem for georeferenced data as measured by the RESET.
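
The restricted-versus-unrestricted F-test that RESET performs can be sketched in a few lines. The pure-Python OLS solver, the powers (2, 3) of the fitted values and the simulated data are illustrative choices, and this toy version involves no spatial filtering:

```python
import random

def solve(A, b):
    """Gaussian elimination with partial pivoting for the normal equations."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def ols(X, y):
    """Return (fitted values, residual sum of squares) from OLS on X."""
    k = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    beta = solve(XtX, Xty)
    fitted = [sum(b * v for b, v in zip(beta, row)) for row in X]
    rss = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))
    return fitted, rss

def reset_F(X, y, powers=(2, 3)):
    """RESET F statistic: restricted model X vs. X augmented with yhat^p."""
    n = len(y)
    fitted, rss_r = ols(X, y)
    Xu = [row + [f ** p for p in powers] for row, f in zip(X, fitted)]
    _, rss_u = ols(Xu, y)
    q, k_u = len(powers), len(Xu[0])
    return ((rss_r - rss_u) / q) / (rss_u / (n - k_u))

random.seed(2)
x = [random.uniform(0, 2) for _ in range(200)]
X = [[1.0, xi] for xi in x]
y_lin = [1 + 2 * xi + random.gauss(0, 0.3) for xi in x]            # correct model
y_quad = [1 + 2 * xi + xi ** 2 + random.gauss(0, 0.3) for xi in x]  # omitted x^2
print(reset_F(X, y_lin), reset_F(X, y_quad))
```

With the omitted quadratic term the F statistic is large and the null of no omitted variables is rejected; with the correctly specified linear model it stays near the F(2, n-4) range of values under the null.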

]]>Econometrics doi: 10.3390/econometrics4020028

Authors: Masayuki Hirukawa Mari Sakudo

This paper improves a kernel-smoothed test of symmetry through combining it with a new class of asymmetric kernels called the generalized gamma kernels. It is demonstrated that the improved test statistic has a normal limit under the null of symmetry and is consistent under the alternative. A test-oriented smoothing parameter selection method is also proposed to implement the test. Monte Carlo simulations indicate superior finite-sample performance of the test statistic. It is worth emphasizing that the performance is grounded on the first-order normal limit and a small number of observations, despite a nonparametric convergence rate and a sample-splitting procedure of the test.

]]>Econometrics doi: 10.3390/econometrics4020026

Authors: P.A.V.B. Swamy I-Lok Chang Jatinder Mehta William Greene Stephen Hall George Tavlas

We develop a procedure for removing four major specification errors from the usual formulation of binary choice models. The model that results from this procedure is different from the conventional probit and logit models. This difference arises as a direct consequence of our relaxation of the usual assumption that omitted regressors constituting the error term of a latent linear regression model do not introduce omitted regressor biases into the coefficients of the included regressors.

]]>Econometrics doi: 10.3390/econometrics4020027

Authors: Vitali Alexeev Mardi Dungey Wenying Yao

Using high-frequency data, we decompose the time-varying beta for stocks into a beta for continuous systematic risk and a beta for discontinuous systematic risk. Estimated discontinuous betas for S&P 500 constituents between 2003 and 2011 generally exceed the corresponding continuous betas. We demonstrate how continuous and discontinuous betas decrease with portfolio diversification. Using an equiweighted broad market index, we assess the speed of convergence of continuous and discontinuous betas in portfolios of stocks as the number of holdings increases. We show that discontinuous risk dissipates faster with fewer stocks in a portfolio compared to its continuous counterpart.
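
The basic building block is a realized beta from high-frequency returns: the realized covariance of the stock with the market divided by the market's realized variance. The sketch below shows only this building block; the paper further splits it into continuous and jump betas using jump detection, which this toy computation omits:

```python
def realized_beta(stock, market):
    """Realized covariance with the market over the market's realized variance."""
    cov = sum(s * m for s, m in zip(stock, market))
    var = sum(m * m for m in market)
    return cov / var

mkt = [0.01, -0.005, 0.002, 0.007, -0.012]   # intraday market returns (toy)
stk = [1.5 * m for m in mkt]                 # stock that moves 1.5x the market
print(realized_beta(stk, mkt))               # ≈ 1.5
```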

]]>Econometrics doi: 10.3390/econometrics4020025

Authors: Marc Paolella

A fast method for estimating the parameters of a stable-APARCH model, not requiring likelihood or iteration, is proposed. Several powerful tests for the (asymmetric) stable Paretian distribution with tail index 1 < α < 2 are used for assessing the appropriateness of the stable assumption as the innovations process in stable-GARCH-type models for daily stock returns. Overall, there is strong evidence against the stable as the correct innovations assumption for all stocks and time periods, though for many stocks and windows of data, the stable hypothesis is not rejected.

]]>Econometrics doi: 10.3390/econometrics4020024

Authors: Xibin Zhang Maxwell King Han Shang

This paper develops a sampling algorithm for bandwidth estimation in a nonparametric regression model with continuous and discrete regressors under an unknown error density. The error density is approximated by the kernel density estimator of the unobserved errors, while the regression function is estimated using the Nadaraya-Watson estimator admitting continuous and discrete regressors. We derive an approximate likelihood and posterior for bandwidth parameters, followed by a sampling algorithm. Simulation results show that the proposed approach typically leads to better accuracy of the resulting estimates than cross-validation, particularly for smaller sample sizes. This bandwidth estimation approach is applied to a nonparametric regression model of the Australian All Ordinaries returns and the kernel density estimation of gross domestic product (GDP) growth rates among Organisation for Economic Co-operation and Development (OECD) and non-OECD countries.
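
A minimal sketch of the Nadaraya-Watson estimator with one continuous and one discrete regressor: a Gaussian kernel for the continuous variable and an Aitchison-Aitken-type kernel (weight 1 when categories match, lambda otherwise) for the discrete one. The bandwidths are fixed by hand here, whereas the paper's contribution is precisely to sample them:

```python
import math

def nw_estimate(x0, z0, xs, zs, ys, h=0.3, lam=0.1):
    """Nadaraya-Watson: kernel-weighted average of y at the point (x0, z0)."""
    num = den = 0.0
    for x, z, y in zip(xs, zs, ys):
        w = math.exp(-0.5 * ((x - x0) / h) ** 2)  # continuous (Gaussian) kernel
        w *= 1.0 if z == z0 else lam              # discrete (Aitchison-Aitken-type) kernel
        num += w * y
        den += w
    return num / den

# toy data: y = x + 1{z == 1}, no noise
xs = [i / 10 for i in range(50)]
zs = [i % 2 for i in range(50)]
ys = [x + (1.0 if z == 1 else 0.0) for x, z in zip(xs, zs)]
print(nw_estimate(2.0, 1, xs, zs, ys))
```

At (x0, z0) = (2.0, 1) the true regression value is 3.0; the estimate is slightly shrunk toward the z = 0 observations because lam > 0 lets them contribute a small weight.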

]]>Econometrics doi: 10.3390/econometrics4020023

Authors: Michel Mouchart Renzo Orsi

A specific concept of structural model is used as a background for discussing the structurality of its parameterization. Conditions for a structural model to be also causal are examined. Difficulties and pitfalls arising from the parameterization are analyzed. In particular, pitfalls arising when considering alternative parameterizations of the same model are shown to have led to ungrounded conclusions in the literature. Discussions of observationally equivalent models related to different economic mechanisms are used to make clear the connection between an economically meaningful parameterization and an economically meaningful decomposition of a complex model. The design of economic policy is used to draw some practical implications of the proposed analysis.

]]>Econometrics doi: 10.3390/econometrics4020022

Authors: Charles Moss James Oehmke Alexandre Lyambabaje Andrew Schmitz

This study uses quantile regression to examine the linkage between food security and efforts to enhance smallholder coffee producer incomes in Rwanda. Even though smallholder coffee producer incomes in Rwanda have increased, inhabitants of these areas still experience stunting and wasting. This study examines whether the distribution of the income elasticity for food is the same for coffee and noncoffee growing provinces. We find that the share of expenditures on food is statistically different in coffee growing and noncoffee growing provinces. Thus, the increase in expenditure on food is smaller for coffee growing provinces than for noncoffee growing provinces.

]]>Econometrics doi: 10.3390/econometrics4020021

Authors: Nunzio Cappuccio Diego Lubian

In cointegration analysis, it is customary to test the hypothesis of unit roots separately for each single time series. In this note, we point out that this procedure may imply large size distortion of the unit root tests if the DGP is a VAR. It is well known that univariate models implied by a VAR data generating process necessarily have a finite-order MA component. This feature may explain why an MA component has often been found in univariate ARIMA models for economic time series. It thereby has important implications for unit root tests in univariate settings, given the well-known size distortion of popular unit root tests in the presence of a large negative coefficient in the MA component. In a small simulation experiment, considering several popular unit root tests and the ADF sieve bootstrap unit root tests, we find that, besides the well-known size distortion effect, there can be substantial differences in size distortion according to which univariate time series is tested for the presence of a unit root.

]]>Econometrics doi: 10.3390/econometrics4020020

Authors: Ba Chu Stephen Satchell

This paper provides a new approach to recover relative entropy measures of contemporaneous dependence from limited information by constructing the most entropic copula (MEC) and its canonical form, namely the most entropic canonical copula (MECC). The MECC can effectively be obtained by maximizing Shannon entropy to yield a proper copula such that known dependence structures of data (e.g., measures of association) are matched to their empirical counterparts. In fact the problem of maximizing the entropy of copulas is the dual to the problem of minimizing the Kullback-Leibler cross entropy (KLCE) of joint probability densities when the marginal probability densities are fixed. Our simulation study shows that the proposed MEC estimator can potentially outperform many other copula estimators in finite samples.

]]>Econometrics doi: 10.3390/econometrics4020019

Authors: P.A.V.B. Swamy Stephen Hall George Tavlas I-Lok Chang Heather Gibson William Greene Jatinder Mehta

This paper contributes to the literature on the estimation of causal effects by providing an analytical formula for individual specific treatment effects and an empirical methodology that allows us to estimate these effects. We derive the formula from a general model with minimal restrictions, unknown functional form and true unobserved variables such that it is a credible model of the underlying real world relationship. Subsequently, we manipulate the model in order to put it in an estimable form. In contrast to other empirical methodologies, which derive average treatment effects, we derive an analytical formula that provides estimates of the treatment effects on each treated individual. We also provide an empirical example that illustrates our methodology.

]]>Econometrics doi: 10.3390/econometrics4010017

Authors: Roberto Casarin Giulia Mantoan Francesco Ravazzolo

Decision-makers often consult different experts to build reliable forecasts on variables of interest. Combining multiple opinions and calibrating them to maximize the forecast accuracy is consequently a crucial issue in several economic problems. This paper applies a Bayesian beta mixture model to derive a combined and calibrated density function using random calibration functionals and random combination weights. In particular, it compares the application of linear, harmonic and logarithmic pooling in the Bayesian combination approach. The three combination schemes, i.e., linear, harmonic and logarithmic, are studied in simulation examples with multimodal densities and an empirical application with a large database of stock data. All of the experiments show that in a beta mixture calibration framework, the three combination schemes are substantially equivalent, achieving calibration, and no clear preference for one of them appears. The financial application shows that linear pooling together with beta mixture calibration achieves the best results in terms of calibrated forecasts.
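
The three pooling schemes compared in the paper can be sketched on a grid for two expert densities: linear pooling is a weighted sum, harmonic pooling a weighted harmonic mean, and logarithmic pooling a weighted geometric mean, each renormalized. The equal weights and normal experts are illustrative choices, and the beta mixture calibration step is omitted:

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def pool(ps, qs, w=0.5, scheme="linear"):
    """Combine two densities evaluated on a common grid; renormalize on the grid."""
    if scheme == "linear":
        dens = [w * p + (1 - w) * q for p, q in zip(ps, qs)]
    elif scheme == "harmonic":
        dens = [1.0 / (w / p + (1 - w) / q) for p, q in zip(ps, qs)]
    else:  # logarithmic
        dens = [p ** w * q ** (1 - w) for p, q in zip(ps, qs)]
    total = sum(dens)
    return [d / total for d in dens]

grid = [i / 100 for i in range(-500, 501)]     # grid on [-5, 5], step 0.01
ps = [normal_pdf(x, -1.0, 1.0) for x in grid]  # expert 1
qs = [normal_pdf(x, 1.0, 1.0) for x in grid]   # expert 2
lin = pool(ps, qs, scheme="linear")
har = pool(ps, qs, scheme="harmonic")
log = pool(ps, qs, scheme="logarithmic")
```

Logarithmic pooling of two normals is again (proportional to) a normal, here centered between the two experts, while linear pooling keeps the mixture shape; this is the kind of difference the calibration step is shown to wash out.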

]]>Econometrics doi: 10.3390/econometrics4010016

Authors: Haroon Mumtaz

This paper investigates if the impact of uncertainty shocks on the U.K. economy has changed over time. To this end, we propose an extended time-varying VAR model that simultaneously allows the estimation of a measure of uncertainty and its time-varying impact on key macroeconomic and financial variables. We find that the impact of uncertainty shocks on these variables has declined over time. The timing of the change coincides with the introduction of inflation targeting in the U.K.

]]>Econometrics doi: 10.3390/econometrics4010015

Authors: Samuel Malone Robert Gramacy Enrique ter Horst

To improve short-horizon exchange rate forecasts, we employ foreign exchange market risk factors as fundamentals, and Bayesian treed Gaussian process (BTGP) models to handle non-linear, time-varying relationships between these fundamentals and exchange rates. Forecasts from the BTGP model conditional on the carry and dollar factors dominate random walk forecasts on accuracy and economic criteria in the Meese-Rogoff setting. Superior market timing ability for large moves, more than directional accuracy, drives the BTGP’s success. We explain how, through a model averaging Monte Carlo scheme, the BTGP is able to simultaneously exploit smoothness and rough breaks in between-variable dynamics. Either feature in isolation is unable to consistently outperform benchmarks throughout the full span of time in our forecasting exercises. Trading strategies based on ex ante BTGP forecasts deliver the highest out-of-sample risk-adjusted returns for the median currency, as well as for both predictable, traded risk factors.

]]>Econometrics doi: 10.3390/econometrics4010014

Authors: David Ardia Lukasz Gatarek Lennart Hoogerheide Herman van Dijk

We investigate the direct connection between the uncertainty related to estimated stable ratios of stock prices and risk and return of two pairs trading strategies: a conditional statistical arbitrage method and an implicit arbitrage one. A simulation-based Bayesian procedure is introduced for predicting stable stock price ratios, defined in a cointegration model. Using this class of models and the proposed inferential technique, we are able to connect estimation and model uncertainty with risk and return of stock trading. In terms of methodology, we show the effect that using an encompassing prior, which is shown to be equivalent to a Jeffreys’ prior, has under an orthogonal normalization for the selection of pairs of cointegrated stock prices and further, its effect for the estimation and prediction of the spread between cointegrated stock prices. We distinguish between models with a normal and Student t distribution since the latter typically provides a better description of daily changes of prices on financial markets. As an empirical application, stocks are used that are ingredients of the Dow Jones Composite Average index. The results show that normalization has little effect on the selection of pairs of cointegrated stocks on the basis of Bayes factors. However, the results stress the importance of the orthogonal normalization for the estimation and prediction of the spread—the deviation from the equilibrium relationship—which leads to better results in terms of profit per capital engagement and risk than using a standard linear normalization.

]]>Econometrics doi: 10.3390/econometrics4010013

Authors: Urbi Garay Enrique ter Horst German Molina Abel Rodriguez

We define a dynamic and self-adjusting mixture of Gaussian Graphical Models to cluster financial returns, and provide a new method for extraction of nonparametric estimates of dynamic alphas (excess return) and betas (to a choice set of explanatory factors) in a multivariate setting. This approach, as well as the outputs, has a dynamic, nonstationary and nonparametric form, which circumvents the problem of model risk and parametric assumptions that the Kalman filter and other widely used approaches rely on. The by-product of clusters, used for shrinkage and information borrowing, can be of use to determine relationships around specific events. This approach exhibits a smaller Root Mean Squared Error than traditionally used benchmarks in financial settings, which we illustrate through simulation. As an illustration, we use hedge fund index data, and find that our estimated alphas are, on average, 0.13% per month higher (1.6% per year) than alphas estimated through Ordinary Least Squares. The approach exhibits fast adaptation to abrupt changes in the parameters, as seen in our estimated alphas and betas, which exhibit high volatility, especially in periods which can be identified as times of stressful market events, a reflection of the dynamic positioning of hedge fund portfolio managers.

]]>Econometrics doi: 10.3390/econometrics4010012

Authors: Arnaud Dufays

Sequential Monte Carlo (SMC) methods are widely used for non-linear filtering purposes. However, the scope of SMC encompasses wider applications, such as estimating static model parameters, so much so that it is becoming a serious alternative to Markov chain Monte Carlo (MCMC) methods. Not only do SMC algorithms draw posterior distributions of static or dynamic parameters, but they additionally provide an estimate of the marginal likelihood. The tempered and time (TNT) algorithm, developed in this paper, combines (off-line) tempered SMC inference with on-line SMC inference for drawing realizations from many sequential posterior distributions without experiencing a particle degeneracy problem. Furthermore, it introduces a new MCMC rejuvenation step that is generic, automated and well suited for multi-modal distributions. As this update relies on the wide heuristic optimization literature, numerous extensions are readily available. The algorithm is notably appropriate for estimating change-point models. As an example, we compare several change-point GARCH models through their marginal log-likelihoods over time.

]]>Econometrics doi: 10.3390/econometrics4010011

Authors: Nalan Baştürk Stefano Grassi Lennart Hoogerheide Herman van Dijk

This paper presents the parallel computing implementation of the MitISEM algorithm, labeled Parallel MitISEM. The basic MitISEM algorithm provides an automatic and flexible method to approximate a non-elliptical target density using adaptive mixtures of Student-t densities, where only a kernel of the target density is required. The approximation can be used as a candidate density in Importance Sampling or Metropolis Hastings methods for Bayesian inference on model parameters and probabilities. We present and discuss four canonical econometric models using a Graphics Processing Unit and a multi-core Central Processing Unit version of the MitISEM algorithm. The results show that the parallelization of the MitISEM algorithm on Graphics Processing Units and multi-core Central Processing Units is straightforward and fast to program using MATLAB. Moreover, the speed performance of the Graphics Processing Unit version is much higher than that of the Central Processing Unit version.

]]>Econometrics doi: 10.3390/econometrics4010018

Authors: Giuseppe Arbia

Spatial econometrics has a relatively short history in scientific thought. Indeed, the term “spatial econometrics” was introduced only forty years ago, during the general address delivered by Jean Paelinck to the annual meeting of the Dutch Statistical Association in May 1974 (see [1]). [...]

]]>Econometrics doi: 10.3390/econometrics4010010

Authors: John Geweke

There is a one-to-one mapping between the conventional time series parameters of a third-order autoregression and the more interpretable parameters of secular half-life, cyclical half-life and cycle period. The latter parameterization is better suited to interpretation of results using both Bayesian and maximum likelihood methods and to expression of a substantive prior distribution using Bayesian methods. The paper demonstrates how to approach both problems using the sequentially adaptive Bayesian learning (SABL) algorithm and software, which eliminates virtually all of the substantial technical overhead required in conventional approaches and produces results quickly and reliably. The work utilizes methodological innovations in SABL, including optimization of irregular and multimodal functions and production of the conventional maximum likelihood asymptotic variance matrix as a by-product.
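
The one-to-one mapping can be sketched directly: an AR(3) with one real root and a complex conjugate pair factors as (z - r)(z - c)(z - c̄), and the half-lives and period are functions of the root moduli and phase. The specific half-lives of 8 and 3 periods and the cycle period of 12 below are illustrative values, not the paper's:

```python
import cmath, math

def ar3_from_halflives(hl_secular, hl_cycle, period):
    """Map (secular half-life, cyclical half-life, cycle period) to the
    AR(3) coefficients in y_t = phi1*y_{t-1} + phi2*y_{t-2} + phi3*y_{t-3}."""
    r = 0.5 ** (1.0 / hl_secular)           # real root: |r|^hl = 1/2
    m = 0.5 ** (1.0 / hl_cycle)             # modulus of the complex pair
    theta = 2 * math.pi / period            # phase sets the cycle period
    c = m * cmath.exp(1j * theta)
    # z^3 - phi1 z^2 - phi2 z - phi3 = (z - r)(z - c)(z - conj(c))
    phi1 = r + 2 * c.real
    phi2 = -(2 * r * c.real + abs(c) ** 2)
    phi3 = r * abs(c) ** 2
    return phi1, phi2, phi3

def halflives_from_roots(r, c):
    """Inverse map: recover the interpretable parameters from the roots."""
    hl_secular = math.log(0.5) / math.log(abs(r))
    hl_cycle = math.log(0.5) / math.log(abs(c))
    period = 2 * math.pi / abs(cmath.phase(c))
    return hl_secular, hl_cycle, period

phi = ar3_from_halflives(8.0, 3.0, 12.0)
print(phi)
```

The round trip is exact: feeding the roots implied by the chosen half-lives back through `halflives_from_roots` returns the original interpretable parameters, which is the sense in which the two parameterizations are equivalent.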

]]>Econometrics doi: 10.3390/econometrics4010008

Authors: Francesco Audrino Yujia Hu

We provide empirical evidence of volatility forecasting in relation to asymmetries present in the dynamics of both return and volatility processes. Using recently developed methodologies to detect jumps from high-frequency price data, we estimate the size of positive and negative jumps and propose a methodology to estimate the size of jumps in the quadratic variation. The leverage effect is separated into continuous and discontinuous effects, and past volatility is separated into “good” and “bad”, as well as into continuous and discontinuous risks. Using a long history of the S&P 500 price index, we find that the continuous leverage effect lasts about one week, while the discontinuous leverage effect disappears after one day. “Good” and “bad” continuous risks both characterize the volatility persistence, while “bad” jump risk is much more informative than “good” jump risk in forecasting future volatility. The volatility forecasting model proposed is able to capture many empirical stylized facts while still remaining parsimonious in terms of the number of parameters to be estimated.
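
The “good”/“bad” risk split is commonly operationalized with realized semivariances: squared high-frequency returns summed separately over positive and negative moves. A minimal sketch (the paper additionally separates continuous and jump components, which this toy version does not):

```python
def realized_semivariances(returns):
    """Split realized variance into its positive- and negative-return halves."""
    rs_pos = sum(r * r for r in returns if r > 0)  # "good" volatility
    rs_neg = sum(r * r for r in returns if r < 0)  # "bad" volatility
    return rs_pos, rs_neg

rets = [0.01, -0.02, 0.005, -0.001, 0.03]  # toy intraday returns
pos, neg = realized_semivariances(rets)
print(pos, neg, pos + neg)  # the two halves sum to the realized variance
```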

]]>Econometrics doi: 10.3390/econometrics4010009

Authors: Nalan Baştürk Roberto Casarin Francesco Ravazzolo Herman van Dijk

Challenging statements have appeared in recent years in the literature on advances in computational procedures.[...]

]]>Econometrics doi: 10.3390/econometrics4010007

Authors: Sung Jun Joris Pinkse Haiqing Xu Neşe Yıldız

We consider a model in which an outcome depends on two discrete treatment variables, where one treatment is given before the other. We formulate a three-equation triangular system with weak separability conditions. Without assuming assignment is random, we establish the identification of an average structural function using two-step matching. We also consider decomposing the effect of the first treatment into direct and indirect effects, which are shown to be identified by the proposed methodology. We allow for both of the treatment variables to be non-binary and do not appeal to an identification-at-infinity argument.

]]>Econometrics doi: 10.3390/econometrics4010006

Authors: Mustafa Koroglu Yiguo Sun

This paper considers a functional-coefficient spatial Durbin model with nonparametric spatial weights. Applying the series approximation method, we estimate the unknown functional coefficients and spatial weighting functions via a nonparametric two-stage least squares (or 2SLS) estimation method. To further improve estimation accuracy, we also construct a second-step estimator of the unknown functional coefficients by a local linear regression approach. Some Monte Carlo simulation results are reported to assess the finite sample performance of our proposed estimators. We then apply the proposed model to re-examine national economic growth by augmenting the conventional Solow economic growth convergence model with unknown spatial interactive structures of the national economy, as well as country-specific Solow parameters, where the spatial weighting functions and Solow parameters are allowed to be a function of geographical distance and the countries’ openness to trade, respectively.

]]>Econometrics doi: 10.3390/econometrics4010005

Authors: Econometrics Editorial Office

The editors of Econometrics would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2015. [...]

]]>