
Table of Contents

Econometrics, Volume 5, Issue 4 (December 2017)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.

Editorial


Open Access Editorial: An Interview with William A. Barnett
Econometrics 2017, 5(4), 45; doi:10.3390/econometrics5040045
Received: 30 September 2017 / Revised: 30 September 2017 / Accepted: 30 September 2017 / Published: 17 October 2017
PDF Full-text (5050 KB) | HTML Full-text | XML Full-text
Abstract
William (Bill) Barnett is an eminent econometrician and macroeconomist. [...] Full article

Research


Open Access Article: Twenty-Two Years of Inflation Assessment and Forecasting Experience at the Bulletin of EU & US Inflation and Macroeconomic Analysis
Econometrics 2017, 5(4), 44; doi:10.3390/econometrics5040044
Received: 9 June 2017 / Revised: 8 September 2017 / Accepted: 27 September 2017 / Published: 6 October 2017
PDF Full-text (2254 KB) | HTML Full-text | XML Full-text
Abstract
The Bulletin of EU & US Inflation and Macroeconomic Analysis (BIAM) is a monthly publication that has been reporting real-time analysis and forecasts for inflation and other macroeconomic aggregates for the Euro Area, the US and Spain since 1994. The BIAM inflation forecasting methodology rests on working with useful disaggregation schemes, using leading indicators when possible and applying outlier correction. The paper relates this methodology to corresponding topics in the literature and discusses the design of disaggregation schemes. It concludes that such schemes are useful when they are formulated according to economic, institutional and statistical criteria, with the aim of obtaining a set of components with very different statistical properties for which valid single-equation models can be built. The BIAM assessment that follows each new observation is based on (a) an evaluation of the forecasting errors (innovations) at the component level, which indicates the sectors they come from and allows, when required, for the appropriate correction in the specific models, and (b) an update of the path forecast with its corresponding fan chart. Finally, we show that BIAM real-time Euro Area inflation forecasts compare favourably, one and two years ahead, with the consensus from the ECB Survey of Professional Forecasters. Full article
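
The sketch below illustrates, under heavily simplified assumptions, the disaggregation-and-monitoring idea described in the abstract: single-equation AR(1) models fitted to simulated components, a weighted aggregate forecast, and a check of each component's innovation when a new observation arrives. Component names, weights and the AR(1) specification are illustrative, not BIAM's.

```python
# Minimal sketch: component-level AR(1) forecasts, a weighted aggregate,
# and innovation monitoring when a new observation arrives. All names,
# weights and dynamics are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

def simulate_ar1(phi, sigma, n, mu=2.0):
    x = np.empty(n)
    x[0] = mu
    for t in range(1, n):
        x[t] = mu * (1 - phi) + phi * x[t - 1] + rng.normal(0, sigma)
    return x

def fit_ar1(x):
    """OLS of x_t on (1, x_{t-1}); returns (intercept, slope) and residual std."""
    y, lag = x[1:], x[:-1]
    X = np.column_stack([np.ones_like(lag), lag])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return beta, resid.std(ddof=2)

# Components with hypothetical weights and very different persistence.
weights = {"core": 0.6, "energy": 0.25, "food": 0.15}
persistence = {"core": 0.9, "energy": 0.3, "food": 0.6}
history = {k: simulate_ar1(persistence[k], 0.4, 240) for k in weights}

# One-step-ahead forecast per component and for the weighted aggregate.
forecasts, sigmas = {}, {}
for k, x in history.items():
    (c, phi), s = fit_ar1(x)
    forecasts[k], sigmas[k] = c + phi * x[-1], s
aggregate_forecast = sum(weights[k] * forecasts[k] for k in weights)

# When the new month arrives, inspect innovations component by component.
new_obs = {k: history[k][-1] + rng.normal(0, 0.4) for k in weights}  # stand-in data
for k in weights:
    innovation = new_obs[k] - forecasts[k]
    flag = "CHECK" if abs(innovation) > 2 * sigmas[k] else "ok"
    print(f"{k:7s} forecast={forecasts[k]:5.2f} innovation={innovation:+.2f} [{flag}]")
print(f"aggregate forecast: {aggregate_forecast:.2f}")
```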

Open Access Article: Non-Causality Due to Included Variables
Econometrics 2017, 5(4), 46; doi:10.3390/econometrics5040046
Received: 25 May 2017 / Revised: 5 October 2017 / Accepted: 5 October 2017 / Published: 15 October 2017
PDF Full-text (205 KB) | HTML Full-text | XML Full-text
Abstract
The contribution of this paper is to investigate a particular form of lack of invariance of causality statements to changes in the conditioning information sets. Consider a discrete-time three-dimensional stochastic process z = (x, y1, y2). We want to study causality relationships between the variables in y = (y1, y2) and x. Suppose that, in a bivariate framework, we find that y1 Granger causes x and y2 Granger causes x, but these relationships vanish when the analysis is conducted in a trivariate framework. Thus, the causal links established in the bivariate setting seem to be spurious. Is this conclusion always correct? In this note, we show that the causal links in the bivariate framework might well not be 'genuinely' spurious: they could be reflecting causality from the vector y to x. Paradoxically, in this case, it is the non-causality in the trivariate system that is misleading. Full article
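
The sketch below shows the mechanics of comparing bivariate and trivariate Granger tests on the same data. The VAR(1) data-generating process is an arbitrary illustration and does not reproduce the specific configurations analysed in the paper; the F-test is implemented directly with OLS to keep the comparison transparent.

```python
# Minimal sketch: Granger-causality F-tests for x in bivariate and trivariate
# conditioning sets. The simulated VAR(1) is an arbitrary illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def granger_f_test(x, causing, conditioning=None, p=2):
    """F-test of H0: lags of the `causing` columns add no predictive power for x
    in a regression that already contains x's own lags and lags of `conditioning`."""
    T = len(x)
    def lags(v2d):
        return np.column_stack([v2d[p - j:T - j, k]
                                for k in range(v2d.shape[1])
                                for j in range(1, p + 1)])
    y = x[p:]
    X_r = np.column_stack([np.ones(T - p), lags(x[:, None])])
    if conditioning is not None:
        X_r = np.column_stack([X_r, lags(conditioning)])
    X_u = np.column_stack([X_r, lags(causing)])
    def ssr(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        e = y - X @ beta
        return e @ e
    q = X_u.shape[1] - X_r.shape[1]          # number of restrictions
    df = len(y) - X_u.shape[1]
    F = ((ssr(X_r) - ssr(X_u)) / q) / (ssr(X_u) / df)
    return F, stats.f.sf(F, q, df)

# Simulate a trivariate VAR(1) for z = (x, y1, y2); coefficients are made up.
T = 500
A = np.array([[0.5, 0.2, 0.2],
              [0.0, 0.4, 0.3],
              [0.0, 0.3, 0.4]])
z = np.zeros((T, 3))
for t in range(1, T):
    z[t] = A @ z[t - 1] + rng.normal(0, 1, 3)
x, y1, y2 = z[:, 0], z[:, 1], z[:, 2]

tests = [("y1 -> x, bivariate",            y1[:, None], None),
         ("y2 -> x, bivariate",            y2[:, None], None),
         ("y1 -> x, trivariate (y2 in)",   y1[:, None], y2[:, None]),
         ("y2 -> x, trivariate (y1 in)",   y2[:, None], y1[:, None]),
         ("(y1, y2) -> x, joint",          z[:, 1:],    None)]
for label, causing, cond in tests:
    F, pval = granger_f_test(x, causing, cond)
    print(f"{label:30s} F={F:6.2f}  p={pval:.3f}")
```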
Open Access Article: Bayesian Analysis of Bubbles in Asset Prices
Econometrics 2017, 5(4), 47; doi:10.3390/econometrics5040047
Received: 14 July 2017 / Revised: 11 September 2017 / Accepted: 11 September 2017 / Published: 23 October 2017
PDF Full-text (394 KB) | HTML Full-text | XML Full-text
Abstract
We develop a new model in which the dynamic structure of the asset price, after the fundamental value is removed, is subject to two different regimes. One regime reflects the normal period, where the asset price divided by the dividend is assumed to follow a mean-reverting process around a stochastic long-run mean. The second regime reflects the bubble period, with explosive behavior. Stochastic switches between the two regimes and non-constant probabilities of exit from the bubble regime are both allowed. A Bayesian learning approach is employed to jointly estimate the latent states and the model parameters in real time. An important feature of our Bayesian method is that we are able to deal with parameter uncertainty and, at the same time, to learn about the states and the parameters sequentially, allowing for real-time model analysis. This feature is particularly useful for market surveillance. Analysis using simulated data reveals that our method has good power properties for detecting bubbles. Empirical analysis using price-dividend ratios of the S&P 500 highlights the advantages of our method. Full article
(This article belongs to the Special Issue Celebrated Econometricians: Peter Phillips)
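
The sketch below simulates the kind of two-regime state dynamics the abstract describes: mean reversion around a stochastic long-run mean in the normal regime and locally explosive behavior in the bubble regime, with Markov switching between regimes. All parameter values are assumptions; the paper's sequential Bayesian learning step is not reproduced.

```python
# Minimal sketch of the two-regime dynamics for a (log) price-dividend ratio.
# Parameters are illustrative; only the data-generating process is shown.
import numpy as np

rng = np.random.default_rng(7)

T = 600
p_enter, p_exit = 0.02, 0.10          # assumed regime transition probabilities
phi_normal, phi_bubble = 0.95, 1.03   # mean-reverting vs. locally explosive roots
mu, sigma_mu, sigma = 3.5, 0.01, 0.05

state = np.zeros(T, dtype=int)        # 0 = normal, 1 = bubble
longrun = np.full(T, mu)              # stochastic long-run mean (random walk)
y = np.full(T, mu)                    # log price-dividend ratio (illustrative)

for t in range(1, T):
    # Markov switch between regimes.
    u = rng.random()
    state[t] = (1 if u < p_enter else 0) if state[t - 1] == 0 else (0 if u < p_exit else 1)
    longrun[t] = longrun[t - 1] + rng.normal(0, sigma_mu)
    phi = phi_normal if state[t] == 0 else phi_bubble
    y[t] = longrun[t] + phi * (y[t - 1] - longrun[t]) + rng.normal(0, sigma)

print(f"fraction of periods in the bubble regime: {state.mean():.2%}")
```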

Open Access Article: Do Seasonal Adjustments Induce Noncausal Dynamics in Inflation Rates?
Econometrics 2017, 5(4), 48; doi:10.3390/econometrics5040048
Received: 12 June 2017 / Revised: 27 September 2017 / Accepted: 17 October 2017 / Published: 31 October 2017
PDF Full-text (884 KB) | HTML Full-text | XML Full-text
Abstract
This paper investigates the effect of seasonal adjustment filters on the identification of mixed causal-noncausal autoregressive models. By means of Monte Carlo simulations, we find that standard seasonal filters induce spurious autoregressive dynamics on white noise series, a phenomenon already documented in the literature. Using a symmetric argument, we show that those filters also generate a spurious noncausal component in the seasonally adjusted series, but preserve (although amplify) the existence of causal and noncausal relationships. This result has important implications for modelling economic time series driven by expectation relationships. We consider inflation data for the G7 countries to illustrate these results. Full article
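
The sketch below reproduces, in miniature, the kind of Monte Carlo evidence mentioned in the abstract: a symmetric two-sided linear filter (a crude stand-in for an X-11-type seasonal adjustment, with made-up weights) is applied to white noise, and an autoregressive fit then picks up spurious dynamics.

```python
# Minimal sketch: a symmetric two-sided filter applied to white noise induces
# serial dependence that an AR fit reads as spurious dynamics. The kernel
# weights are illustrative, not the actual X-11 filter.
import numpy as np

rng = np.random.default_rng(3)

def fit_ar(x, p=2):
    """OLS estimates of an AR(p) fitted to x (intercept omitted for brevity)."""
    y = x[p:]
    X = np.column_stack([x[p - j:len(x) - j] for j in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

T, reps, p = 400, 200, 2
half = np.array([1.0, 0.9, 0.7, 0.4, 0.2, 0.1])
weights = np.concatenate([half[::-1], half[1:]])   # symmetric two-sided kernel
weights /= weights.sum()

raw_coefs, filt_coefs = [], []
for _ in range(reps):
    e = rng.normal(0, 1, T)
    e_filtered = np.convolve(e, weights, mode="same")
    raw_coefs.append(fit_ar(e, p))
    filt_coefs.append(fit_ar(e_filtered, p))

print("mean AR(2) coefficients, raw white noise:      ", np.mean(raw_coefs, axis=0).round(3))
print("mean AR(2) coefficients, after symmetric filter:", np.mean(filt_coefs, axis=0).round(3))
```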

Open Access Article: Formula I(1) and I(2): Race Tracks for Likelihood Maximization Algorithms of I(1) and I(2) Cointegrated VAR Models
Econometrics 2017, 5(4), 49; doi:10.3390/econometrics5040049
Received: 1 July 2017 / Revised: 4 October 2017 / Accepted: 15 October 2017 / Published: 20 November 2017
PDF Full-text (638 KB) | HTML Full-text | XML Full-text
Abstract
This paper provides some test cases, called circuits, for the evaluation of Gaussian likelihood maximization algorithms of the cointegrated vector autoregressive model. Both I(1) and I(2) models are considered. The performance of algorithms is compared first in terms of effectiveness, defined as the ability to find the overall maximum. The next step is to compare their efficiency and reliability across experiments. The aim of the paper is to commence a collective learning project by the profession on the actual properties of algorithms for cointegrated vector autoregressive model estimation, in order to improve their quality and, as a consequence, also the reliability of empirical research. Full article
(This article belongs to the Special Issue Recent Developments in Cointegration)
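
The sketch below illustrates the general idea of a "race track" comparison: several optimizers are run from the same set of starting values on an objective with a known global optimum, recording effectiveness (how often the optimum is found) and run time. The multimodal test function is a generic stand-in, not one of the paper's I(1)/I(2) likelihood circuits.

```python
# Minimal sketch: compare optimizers on a multimodal test objective, recording
# how often each reaches the known global optimum and how long it takes.
import time
import numpy as np
from scipy.optimize import minimize

def objective(theta):
    # Rastrigin-style surface with global minimum 0 at the origin.
    theta = np.asarray(theta)
    return 10 * len(theta) + np.sum(theta**2 - 10 * np.cos(2 * np.pi * theta))

rng = np.random.default_rng(0)
starts = rng.uniform(-3, 3, size=(20, 2))
global_opt, tol = 0.0, 1e-4

for method in ["Nelder-Mead", "BFGS", "L-BFGS-B", "Powell"]:
    hits, t0 = 0, time.perf_counter()
    for x0 in starts:
        res = minimize(objective, x0, method=method)
        if res.fun - global_opt < tol:
            hits += 1
    elapsed = time.perf_counter() - t0
    print(f"{method:12s} found the global optimum {hits:2d}/20 times ({elapsed:.2f}s total)")
```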

Open Access Feature Paper Article: Inequality and Poverty When Effort Matters
Econometrics 2017, 5(4), 50; doi:10.3390/econometrics5040050
Received: 25 August 2017 / Revised: 21 October 2017 / Accepted: 23 October 2017 / Published: 6 November 2017
PDF Full-text (2100 KB) | HTML Full-text | XML Full-text
Abstract
On the presumption that poorer people tend to work less, it is often claimed that standard measures of inequality and poverty are overestimates. The paper points to a number of reasons to question this claim. It is shown that, while the labor supplies of American adults have a positive income gradient, the heterogeneity in labor supplies generates considerable horizontal inequality. Using equivalent incomes to adjust for effort can reveal either higher or lower inequality depending on the measurement assumptions. With only a modest allowance for leisure as a basic need, the effort-adjusted poverty rate in terms of equivalent incomes rises. Full article
(This article belongs to the Special Issue Econometrics and Income Inequality)
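
The sketch below shows one illustrative way to fold leisure, treated as a basic need, into an equivalent-income calculation and to compare the resulting poverty rate with the one based on raw incomes. The wage and hours distributions, the valuation of the leisure shortfall at the individual's own wage, and the poverty line are all assumptions, not the paper's specification.

```python
# Minimal sketch: effort-adjusted equivalent incomes with leisure as a basic
# need. All distributions and thresholds are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(11)
n = 10_000

wage = rng.lognormal(mean=2.5, sigma=0.6, size=n)                  # hourly wage
hours = np.clip(rng.normal(35 + 5 * np.log(wage), 10, n), 0, 80)   # positive income gradient
income = wage * hours * 52 / 1000                                  # annual income, thousands

leisure = 112 - hours          # weekly non-work waking hours (rough)
leisure_need = 60              # modest basic-need allowance for leisure (assumed)
# Deduct, at the own wage, any shortfall of leisure below the basic need.
equivalent_income = income - wage * np.maximum(leisure_need - leisure, 0) * 52 / 1000

poverty_line = np.percentile(income, 20)                           # arbitrary relative line
print(f"poverty rate, raw income:        {np.mean(income < poverty_line):.1%}")
print(f"poverty rate, equivalent income: {np.mean(equivalent_income < poverty_line):.1%}")
```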

Open Access Article: Business Time Sampling Scheme with Applications to Testing Semi-Martingale Hypothesis and Estimating Integrated Volatility
Econometrics 2017, 5(4), 51; doi:10.3390/econometrics5040051
Received: 3 August 2017 / Revised: 28 September 2017 / Accepted: 17 October 2017 / Published: 13 November 2017
PDF Full-text (574 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
We propose a new method to implement the Business Time Sampling (BTS) scheme for high-frequency financial data. We compute a time-transformation (TT) function using the intraday integrated volatility estimated by a jump-robust method. The BTS transactions are obtained using the inverse of the TT function. Using our sampled BTS transactions, we test the semi-martingale hypothesis of the stock log-price process and estimate the daily realized volatility. Our method improves the normality approximation of the standardized business-time return distribution. Our Monte Carlo results show that the integrated volatility estimates using our proposed sampling strategy provide smaller root mean-squared error. Full article
(This article belongs to the Special Issue Volatility Modeling)
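
The sketch below illustrates the time-transformation step described in the abstract: cumulative intraday volatility is normalized into a TT function and inverted to obtain sampling times that are equally spaced in business time. The U-shaped intraday volatility curve is simulated; the paper's jump-robust estimation of integrated volatility is not implemented here.

```python
# Minimal sketch: build a time-transformation (TT) function from cumulative
# intraday volatility and invert it to get business-time sampling points.
# The intraday volatility curve is simulated, not estimated from data.
import numpy as np

seconds = np.arange(0, 6.5 * 3600)                  # one 6.5-hour trading day
u = seconds / seconds[-1]
spot_vol = 1.0 + 8.0 * (u - 0.5) ** 2               # U-shaped intraday volatility (illustrative)

# TT function: cumulative integrated volatility, normalized to [0, 1].
tt = np.cumsum(spot_vol)
tt /= tt[-1]

# Invert the TT function: equally spaced business-time grid -> calendar times.
n_samples = 78                                      # e.g. one sample per 5 "business minutes"
business_grid = np.linspace(0, 1, n_samples + 1)[1:]
calendar_times = np.interp(business_grid, tt, seconds)

print("first five BTS sampling times (seconds after the open):",
      np.round(calendar_times[:5], 1))
```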

Open Access Article: Synthetic Control and Inference
Econometrics 2017, 5(4), 52; doi:10.3390/econometrics5040052
Received: 27 October 2016 / Revised: 1 November 2017 / Accepted: 3 November 2017 / Published: 28 November 2017
PDF Full-text (267 KB) | XML Full-text
Abstract
We examine properties of permutation tests in the context of synthetic control. Permutation tests are frequently used methods of inference for synthetic control when the number of potential control units is small. We analyze the permutation tests from a repeated sampling perspective and show that the size of permutation tests may be distorted. Several alternative methods are discussed. Full article
(This article belongs to the Special Issue Recent Developments in Panel Data Methods)
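
The sketch below implements the standard permutation ("placebo") procedure that the paper scrutinizes: a synthetic control is fitted for the treated unit and for each control unit in turn, a post/pre-treatment RMSPE ratio is computed for each, and the treated unit's rank among the ratios gives the p-value. The simulated panel and the simplex-constrained least-squares fit are illustrative; the paper's alternative methods are not shown.

```python
# Minimal sketch: synthetic control with simplex-constrained weights and the
# rank-based permutation (placebo) p-value. Data are simulated for illustration.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
J, T0, T1 = 20, 30, 10                       # controls, pre- and post-treatment periods
T = T0 + T1
common = np.cumsum(rng.normal(0, 1, T))
Y = common + rng.normal(0, 1, (J + 1, T))    # unit 0 plays the treated unit
Y[0, T0:] += 2.0                             # treatment effect (illustrative)

def synth_weights(y_treated_pre, Y_donors_pre):
    """Simplex-constrained least squares: weights >= 0 summing to one."""
    k = Y_donors_pre.shape[0]
    obj = lambda w: np.sum((y_treated_pre - w @ Y_donors_pre) ** 2)
    cons = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]
    res = minimize(obj, np.full(k, 1.0 / k), bounds=[(0, 1)] * k,
                   constraints=cons, method="SLSQP")
    return res.x

def rmspe_ratio(idx):
    donors = [j for j in range(J + 1) if j != idx]
    w = synth_weights(Y[idx, :T0], Y[donors, :T0])
    gap = Y[idx] - w @ Y[donors]
    pre = np.sqrt(np.mean(gap[:T0] ** 2))
    post = np.sqrt(np.mean(gap[T0:] ** 2))
    return post / pre

ratios = np.array([rmspe_ratio(i) for i in range(J + 1)])
p_value = np.mean(ratios >= ratios[0])       # rank-based permutation p-value
print(f"treated unit's RMSPE ratio: {ratios[0]:.2f}, permutation p-value: {p_value:.3f}")
```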
Open Access Article: Reducing Approximation Error in the Fourier Flexible Functional Form
Econometrics 2017, 5(4), 53; doi:10.3390/econometrics5040053
Received: 29 August 2017 / Revised: 16 November 2017 / Accepted: 22 November 2017 / Published: 4 December 2017
PDF Full-text (1494 KB) | HTML Full-text | XML Full-text
Abstract
The Fourier Flexible form provides a global approximation to an unknown data generating process. In terms of limiting function specification error, this form is preferable to functional forms based on second-order Taylor series expansions. The Fourier Flexible form is a truncated Fourier series expansion appended to a second-order expansion in logarithms. By replacing the logarithmic expansion with a Box-Cox transformation, we show that the Fourier Flexible form can reduce approximation error by 25% on average in the tails of the data distribution. The new functional form allows for nested testing of a larger set of commonly implemented functional forms. Full article
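
The sketch below builds a one-regressor Fourier flexible form in which the second-order expansion is taken in a Box-Cox transform of the regressor rather than in its logarithm, and fits it by OLS. The target function, the Box-Cox parameter, the rescaling of the regressor onto [0, 2*pi] and the truncation order are illustrative choices, not the paper's specification.

```python
# Minimal sketch: Fourier flexible form in one regressor with a Box-Cox
# second-order expansion, estimated by OLS. All choices are illustrative.
import numpy as np

rng = np.random.default_rng(2)

def box_cox(x, lam):
    return np.log(x) if lam == 0 else (x**lam - 1) / lam

n, lam, order = 500, 0.5, 3
x = rng.uniform(0.5, 5.0, n)
y = np.exp(0.3 * x) / (1 + x) + rng.normal(0, 0.05, n)      # unknown "true" function + noise

b = box_cox(x, lam)
s = 2 * np.pi * (x - x.min()) / (x.max() - x.min() + 1e-9)  # rescale x into [0, 2*pi)

# Design matrix: constant, Box-Cox second-order expansion, truncated Fourier terms.
cols = [np.ones(n), b, b**2]
for k in range(1, order + 1):
    cols += [np.cos(k * s), np.sin(k * s)]
X = np.column_stack(cols)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta
print(f"in-sample RMSE of the Box-Cox Fourier flexible form: "
      f"{np.sqrt(np.mean((y - fitted) ** 2)):.4f}")
```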
