Econometrics, Volume 8, Issue 2 (June 2020) – 15 articles

Cover Story: We investigate the marginal predictive content of small versus large jump variation. In our portfolios, sorting on signed small jump variation leads to greater value-weighted return differentials between stocks than sorting on either signed total jump variation or signed large jump variation. The benefit of signed small jump variation investing is driven by stock selection within an industry, rather than industry bets. Signed jump variation also has stronger predictive power than either upside or downside jump variation. One reason large and small jump variation have differing marginal predictive content is that the predictive content of signed large jump variation is negligible when controlling for either signed total jump variation or realized skewness. By contrast, signed small jump variation carries unique information for predicting future returns.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for published papers, which are available in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and use the free Adobe Reader to open it.
20 pages, 863 KiB  
Article
Gini Index Estimation within Pre-Specified Error Bound: Application to Indian Household Survey Data
by Francis Bilson Darku, Frank Konietschke and Bhargab Chattopadhyay
Econometrics 2020, 8(2), 26; https://doi.org/10.3390/econometrics8020026 - 18 Jun 2020
Cited by 2 | Viewed by 6197
Abstract
The Gini index, a widely used economic inequality measure, is computed using data whose designs involve clustering and stratification, generally known as complex household surveys. For complex household surveys, we develop two novel procedures for estimating the Gini index with a pre-specified error bound and confidence level. The two proposed approaches are based on the concept of sequential analysis, which is known to be economical in the sense that it yields an optimal cluster size that reduces the project cost (that is, the total sampling cost) while achieving the pre-specified error bound and confidence level under reasonable assumptions. Some large-sample properties of the proposed procedures are examined without assuming any specific distribution. Empirical illustrations of both procedures are provided using the consumption expenditure data obtained by the National Sample Survey (NSS) Organization in India. Full article
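As a point of reference only, the textbook Gini index for a simple random sample can be computed as in the sketch below; this ignores the clustering, stratification, survey weights and sequential stopping rules that the paper's procedures are designed to handle, and the data are purely illustrative.

```python
import numpy as np

def gini(incomes):
    """Sample Gini index for a simple random sample.

    Note: this is a textbook estimator; it ignores the survey weights,
    clustering and stratification handled by the paper's procedures.
    """
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)
    # G = 2 * sum_i i * x_(i) / (n * sum_i x_i) - (n + 1) / n
    return 2.0 * np.sum(ranks * x) / (n * np.sum(x)) - (n + 1.0) / n

# Purely illustrative data (not the NSS consumption expenditure data used in the paper)
rng = np.random.default_rng(0)
sample = rng.lognormal(mean=10.0, sigma=0.8, size=5_000)
print(round(gini(sample), 3))
```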
26 pages, 1706 KiB  
Article
Tornado Occurrences in the United States: A Spatio-Temporal Point Process Approach
by Fernanda Valente and Márcio Laurini
Econometrics 2020, 8(2), 25; https://doi.org/10.3390/econometrics8020025 - 11 Jun 2020
Cited by 11 | Viewed by 5615
Abstract
In this paper, we analyze tornado occurrences in the United States. To perform inference for the spatio-temporal point process, we adopt a dynamic representation of the log-Gaussian Cox process. This representation is based on the decomposition of the intensity function into trend, cycle, and spatial-effect components. In this model, spatial effects are also represented by a dynamic functional structure, which allows us to analyze possible changes in the spatio-temporal distribution of tornado occurrences due to possible changes in climate patterns. The model was estimated using Bayesian inference through the Integrated Nested Laplace Approximation. We use data from the Storm Prediction Center's Severe Weather Database between 1954 and 2018, and the results provide evidence, from new perspectives, that trends in annual tornado occurrences in the United States have remained relatively constant, supporting previously reported findings. Full article
(This article belongs to the Collection Econometric Analysis of Climate Change)
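For readers unfamiliar with the model class, a log-Gaussian Cox process of the kind described here specifies the point-process intensity through an additive decomposition of its logarithm; in generic notation (not the authors' exact specification),

\[
\lambda(s,t) = \exp\{\mu_t + c_t + \xi(s,t)\},
\]

where \mu_t is a long-run trend, c_t a cyclical component and \xi(s,t) a spatially structured Gaussian field, each given a Gaussian Markov random field representation so that INLA can be used for estimation.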
15 pages, 664 KiB  
Article
Maximum-Likelihood Estimation in a Special Integer Autoregressive Model
by Robert C. Jung and Andrew R. Tremayne
Econometrics 2020, 8(2), 24; https://doi.org/10.3390/econometrics8020024 - 8 Jun 2020
Cited by 1 | Viewed by 3807
Abstract
The paper is concerned with estimation and application of a special stationary integer autoregressive model where multiple binomial thinnings are not independent of one another. Parameter estimation in such models has hitherto been accomplished using the method of moments or nonlinear least squares, but not maximum likelihood. We obtain the conditional distribution needed to implement maximum likelihood. The sampling performance of the new estimator is compared to that of extant ones by reporting the results of some simulation experiments. An application to a stock-type data set of financial counts is provided, and the conditional distribution is used both to compare two competing models and for forecasting. Full article
(This article belongs to the Special Issue Discrete-Valued Time Series: Modelling, Estimation and Forecasting)
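For context, the integer autoregressive class referred to here builds on binomial thinning; a generic first-order model (not the paper's special variant, in which the thinning operations are dependent) is

\[
X_t = \alpha \circ X_{t-1} + \varepsilon_t, \qquad \alpha \circ X_{t-1} = \sum_{i=1}^{X_{t-1}} B_i, \quad B_i \overset{iid}{\sim} \mathrm{Bernoulli}(\alpha),
\]

with \varepsilon_t an i.i.d. count-valued innovation independent of the thinning.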
15 pages, 1659 KiB  
Article
Bayesian Model Averaging with the Integrated Nested Laplace Approximation
by Virgilio Gómez-Rubio, Roger S. Bivand and Håvard Rue
Econometrics 2020, 8(2), 23; https://doi.org/10.3390/econometrics8020023 - 1 Jun 2020
Cited by 16 | Viewed by 5382
Abstract
The integrated nested Laplace approximation (INLA) for Bayesian inference is an efficient approach to estimating the posterior marginal distributions of the parameters and latent effects of Bayesian hierarchical models that can be expressed as latent Gaussian Markov random fields (GMRFs). The GMRF representation allows the associated software, R-INLA, to estimate the posterior marginals in a fraction of the time taken by typical Markov chain Monte Carlo algorithms. INLA can be extended by means of Bayesian model averaging (BMA) to increase the range of models that it can fit, namely those that are latent GMRFs conditional on some parameters. In this paper, we review the use of BMA with INLA and present a new example based on spatial econometrics models. Full article
(This article belongs to the Special Issue Bayesian and Frequentist Model Averaging)
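The BMA step described in the abstract mixes conditional posterior marginals over a discrete set of models; in generic notation,

\[
\pi(x_i \mid y) = \sum_{k=1}^{K} \pi(x_i \mid y, M_k)\, P(M_k \mid y),
\]

where each conditional marginal \pi(x_i \mid y, M_k) is obtained from a separate R-INLA fit and the posterior model probabilities P(M_k \mid y) provide the averaging weights.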
24 pages, 333 KiB  
Article
Sovereign Risk Indices and Bayesian Theory Averaging
by Alex Lenkoski and Fredrik L. Aanes
Econometrics 2020, 8(2), 22; https://doi.org/10.3390/econometrics8020022 - 29 May 2020
Cited by 1 | Viewed by 3454
Abstract
In economic applications, model averaging has found principal use in examining the validity of various theories related to observed heterogeneity in outcomes such as growth, development, and trade. Though often easy to articulate, these theories are imperfectly captured quantitatively. A number of different proxies are often collected for a given theory, and the uneven nature of this collection requires care when employing model averaging. Furthermore, if valid, these theories ought to be relevant outside of any single narrowly focused outcome equation. We propose a methodology that treats theories as represented by latent indices, with these latent processes controlled by model averaging at the proxy level. To achieve generalizability of the theory index, our framework assumes a collection of outcome equations. We accommodate a flexible set of generalized additive models, enabling non-Gaussian outcomes to be included. Furthermore, selection of relevant theories also occurs at the outcome level, allowing theories to be differentially valid. Our focus is on creating a set of theory-based indices directed at understanding a country's potential risk of macroeconomic collapse. These Sovereign Risk Indices are calibrated across a set of different "collapse" criteria, including default on sovereign debt, heightened potential for high unemployment or inflation, and dramatic swings in foreign exchange values. The goal of this exercise is to render a portable set of country/year theory indices which can find more general use in the research community. Full article
(This article belongs to the Special Issue Bayesian and Frequentist Model Averaging)
29 pages, 540 KiB  
Article
BACE and BMA Variable Selection and Forecasting for UK Money Demand and Inflation with Gretl
by Marcin Błażejowski, Jacek Kwiatkowski and Paweł Kufel
Econometrics 2020, 8(2), 21; https://doi.org/10.3390/econometrics8020021 - 22 May 2020
Cited by 2 | Viewed by 4917
Abstract
In this paper, we apply Bayesian averaging of classical estimates (BACE) and Bayesian model averaging (BMA) as automatic modeling procedures for two well-known macroeconometric models: UK demand for narrow money and long-term inflation. Empirical results verify the correctness of BACE and BMA selection and exhibit similar or better forecasting performance compared with a non-pooling approach. As a benchmark, we use Autometrics, an algorithm for automatic model selection. Our study is implemented in easy-to-use gretl packages, which support parallel processing, automate numerical calculations, and allow for efficient computation. Full article
(This article belongs to the Special Issue Bayesian and Frequentist Model Averaging)
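A common way to form BACE weights is via the Schwarz (BIC) approximation to the marginal likelihood; a generic version (the exact weighting implemented in the gretl packages may differ in detail) is

\[
P(M_j \mid y) \approx \frac{P(M_j)\exp(-\mathrm{BIC}_j/2)}{\sum_{k} P(M_k)\exp(-\mathrm{BIC}_k/2)},
\]

so that classical (OLS-type) estimates from each candidate model are averaged using these approximate posterior model probabilities.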
36 pages, 1528 KiB  
Article
Triple the Gamma—A Unifying Shrinkage Prior for Variance and Variable Selection in Sparse State Space and TVP Models
by Annalisa Cadonna, Sylvia Frühwirth-Schnatter and Peter Knaus
Econometrics 2020, 8(2), 20; https://doi.org/10.3390/econometrics8020020 - 20 May 2020
Cited by 32 | Viewed by 7310
Abstract
Time-varying parameter (TVP) models are very flexible in capturing gradual changes in the effect of explanatory variables on the outcome variable. However, in particular when the number of explanatory variables is large, there is a known risk of overfitting and poor predictive performance, since the effect of some explanatory variables is constant over time. We propose a new prior for variance shrinkage in TVP models, called the triple gamma. The triple gamma prior encompasses a number of priors that have been suggested previously, such as the Bayesian Lasso, the double gamma prior and the Horseshoe prior. We present the desirable properties of such a prior and its relationship to Bayesian model averaging for variance selection. The features of the triple gamma prior are then illustrated in the context of time-varying parameter vector autoregressive models, both for a simulated dataset and for a set of macroeconomic variables in the Euro Area. Full article
(This article belongs to the Special Issue Bayesian and Frequentist Model Averaging)
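The shrinkage problem is easiest to see in the state-space form of a TVP regression; in generic notation (not the paper's exact parameterization),

\[
y_t = x_t^{\prime}\beta_t + \varepsilon_t, \qquad \beta_t = \beta_{t-1} + w_t, \quad w_t \sim N\big(0, \mathrm{diag}(\theta_1,\dots,\theta_d)\big),
\]

so that shrinking a process variance \theta_j toward zero effectively turns the j-th coefficient into a constant; the triple gamma is a prior on these \theta_j.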
52 pages, 4274 KiB  
Article
New Evidence of the Marginal Predictive Content of Small and Large Jumps in the Cross-Section
by Bo Yu, Bruce Mizrach and Norman R. Swanson
Econometrics 2020, 8(2), 19; https://doi.org/10.3390/econometrics8020019 - 19 May 2020
Cited by 3 | Viewed by 4999
Abstract
We investigate the marginal predictive content of small versus large jump variation, when forecasting one-week-ahead cross-sectional equity returns, building on Bollerslev et al. (2020). We find that sorting on signed small jump variation leads to greater value-weighted return differentials between stocks in our highest- and lowest-quintile portfolios (i.e., high–low spreads) than when either signed total jump or signed large jump variation is sorted on. It is shown that the benefit of signed small jump variation investing is driven by stock selection within an industry, rather than industry bets. Investors prefer stocks with a high probability of having positive jumps, but they also tend to overweight safer industries. Also, consistent with the findings in Scaillet et al. (2018), upside (downside) jump variation negatively (positively) predicts future returns. However, signed (large/small/total) jump variation has stronger predictive power than both upside and downside jump variation. One reason large and small (signed) jump variation have differing marginal predictive contents is that the predictive content of signed large jump variation is negligible when controlling for either signed total jump variation or realized skewness. By contrast, signed small jump variation has unique information for predicting future returns, even when controlling for these variables. By analyzing earnings announcement surprises, we find that large jumps are closely associated with “big” news. However, while such news-related information is embedded in large jump variation, the information is generally short-lived, and dissipates too quickly to provide marginal predictive content for subsequent weekly returns. Finally, we find that small jumps are more likely to be diversified away than large jumps and tend to be more closely associated with idiosyncratic risks. This indicates that small jumps are more likely to be driven by liquidity conditions and trading activity. Full article
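The sorting variables build on realized semivariances computed from high-frequency returns r_{t,i}; in notation common to this literature (the small/large split additionally truncates returns at a threshold, and details may differ from the paper's exact definitions),

\[
RS_t^{+} = \sum_i r_{t,i}^2\, \mathbf{1}\{r_{t,i} > 0\}, \qquad RS_t^{-} = \sum_i r_{t,i}^2\, \mathbf{1}\{r_{t,i} < 0\}, \qquad SJ_t = RS_t^{+} - RS_t^{-},
\]

with signed small (large) jump variation obtained by restricting the sums to returns whose absolute size lies below (above) a truncation threshold.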
24 pages, 2606 KiB  
Article
Forecast Accuracy Matters for Hurricane Damage
by Andrew B. Martinez
Econometrics 2020, 8(2), 18; https://doi.org/10.3390/econometrics8020018 - 14 May 2020
Cited by 16 | Viewed by 7273
Abstract
I analyze damage from hurricane strikes on the United States since 1955. Using machine learning methods to select the most important drivers of damage, I show that large errors in a hurricane's predicted landfall location result in higher damage. This relationship holds across a wide range of model specifications and when controlling for ex-ante uncertainty and potential endogeneity. Using a counterfactual exercise, I find that the cumulative reduction in damage from forecast improvements since 1970 is about $82 billion, which exceeds the U.S. government's spending on the forecasts and private willingness to pay for them. Full article
(This article belongs to the Collection Econometric Analysis of Climate Change)
15 pages, 418 KiB  
Article
Bayesian Model Averaging Using Power-Expected-Posterior Priors
by Dimitris Fouskakis and Ioannis Ntzoufras
Econometrics 2020, 8(2), 17; https://doi.org/10.3390/econometrics8020017 - 11 May 2020
Cited by 1 | Viewed by 4398
Abstract
This paper focuses on Bayesian model averaging (BMA) using the power-expected-posterior prior in objective Bayesian variable selection under normal linear models. We derive a BMA point estimate of a predicted value and present strategies for computing and evaluating prediction accuracy. We compare the performance of our method with that of similar approaches in a simulated and a real data example from economics. Full article
(This article belongs to the Special Issue Bayesian and Frequentist Model Averaging)
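The BMA point estimate of a predicted value referred to in the abstract has the generic form

\[
\hat{y}_{\mathrm{new}} = \sum_{\gamma} P(M_\gamma \mid y)\, \mathrm{E}[y_{\mathrm{new}} \mid y, M_\gamma],
\]

with the power-expected-posterior prior entering through both the posterior model probabilities and the model-specific predictive means.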
16 pages, 303 KiB  
Article
Are Some Forecasters’ Probability Assessments of Macro Variables Better Than Those of Others?
by Michael P. Clements
Econometrics 2020, 8(2), 16; https://doi.org/10.3390/econometrics8020016 - 6 May 2020
Cited by 6 | Viewed by 3945
Abstract
We apply a bootstrap test to determine whether some forecasters are able to make superior probability assessments to those of others. In contrast to some findings in the literature for point predictions, there is evidence that some individuals really are better than others. The testing procedure controls for the different economic conditions the forecasters may face, given that each individual responds to only a subset of the surveys. One possible explanation for the different findings for point predictions and histograms is explored: newcomers may make less accurate histogram forecasts than experienced respondents, given the greater complexity of the task. Full article
(This article belongs to the Special Issue Celebrated Econometricians: David Hendry)
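As a stylized illustration of this kind of resampling test (not the author's exact procedure, which also controls for which surveys each respondent answered), one can ask whether the dispersion of forecasters' average scores exceeds what permuting scores within each survey, under the null of equal ability, would produce:

```python
import numpy as np

def equal_skill_test(scores, n_perm=999, seed=0):
    """Permutation-style test of the null that all forecasters are equally good.

    scores: 2-D array, rows = surveys, columns = forecasters; entries are
    probability scores (lower = better), NaN where a forecaster skipped a survey.
    Test statistic: variance across forecasters of their mean scores. Under the
    null, scores within a given survey are exchangeable across forecasters.
    """
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)

    def stat(s):
        return np.nanvar(np.nanmean(s, axis=0))

    observed = stat(scores)
    exceed = 0
    for _ in range(n_perm):
        shuffled = scores.copy()
        for row in shuffled:                     # permute within each survey
            obs = ~np.isnan(row)
            row[obs] = rng.permutation(row[obs])
        exceed += stat(shuffled) >= observed
    return (1 + exceed) / (1 + n_perm)           # Monte Carlo p-value

# Toy example: 40 surveys, 12 forecasters, forecaster 0 genuinely better (lower scores)
rng = np.random.default_rng(1)
toy = rng.normal(loc=1.0, scale=0.2, size=(40, 12))
toy[:, 0] -= 0.15
print(equal_skill_test(toy))       # a small p-value suggests unequal ability
```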
22 pages, 496 KiB  
Article
Improved Average Estimation in Seemingly Unrelated Regressions
by Ali Mehrabani and Aman Ullah
Econometrics 2020, 8(2), 15; https://doi.org/10.3390/econometrics8020015 - 27 Apr 2020
Cited by 6 | Viewed by 4995
Abstract
In this paper, we propose an efficient weighted average estimator in Seemingly Unrelated Regressions. This average estimator shrinks a generalized least squares (GLS) estimator towards a restricted GLS estimator, where the restrictions represent possible parameter homogeneity specifications. The shrinkage weight is inversely proportional to a weighted quadratic loss function. The approximate bias and second moment matrix of the average estimator, based on large-sample approximations, are provided. We give the conditions under which the average estimator dominates the GLS estimator on the basis of their mean squared errors. We illustrate our estimator by applying it to a cost system for United States (U.S.) commercial banks over the period from 2000 to 2018. Our results indicate that, on average, most of the banks have been operating under increasing returns to scale. We find that over recent years scale economies have been a plausible reason for the growth in the average size of banks, and the tendency toward increasing scale is likely to continue. Full article
(This article belongs to the Special Issue Bayesian and Frequentist Model Averaging)
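The average estimator has a Stein-type structure; writing \hat\beta for the unrestricted GLS estimator and \tilde\beta for the restricted GLS estimator, a generic version (the paper's exact weighting may differ) is

\[
\hat\beta_A = \tilde\beta + \Big(1 - \frac{\tau}{D}\Big)\big(\hat\beta - \tilde\beta\big), \qquad D = (\hat\beta - \tilde\beta)^{\prime} W (\hat\beta - \tilde\beta),
\]

so that the weight placed on the restricted estimator is inversely proportional to a weighted quadratic distance D between the two estimators, with \tau a shrinkage constant and W a weighting matrix.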
35 pages, 1330 KiB  
Article
Balanced Growth Approach to Tracking Recessions
by Marta Boczoń and Jean-François Richard
Econometrics 2020, 8(2), 14; https://doi.org/10.3390/econometrics8020014 - 23 Apr 2020
Viewed by 4462
Abstract
In this paper, we propose a hybrid version of Dynamic Stochastic General Equilibrium models with an emphasis on parameter invariance and tracking performance at times of rapid change (recessions). We interpret hypothetical balanced growth ratios as moving targets for economic agents that rely upon an Error Correction Mechanism to adjust to changes in target ratios driven by an underlying state Vector AutoRegressive process. Our proposal is illustrated by an application to a pilot Real Business Cycle model for the US economy from 1948 to 2019. An extensive recursive validation exercise over the last 35 years, covering three recessions, is used to highlight its parameter invariance, tracking, and 1- to 3-step-ahead forecasting performance, which outperform those of an unconstrained benchmark Vector AutoRegressive model. Full article
(This article belongs to the Special Issue Celebrated Econometricians: David Hendry)
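The adjustment mechanism in the abstract can be sketched as an error-correction equation in which agents close part of the gap between actual and target (balanced-growth) ratios each period; a stylized single-equation version (not the paper's full system) is

\[
\Delta y_t = \alpha\,(y^{*}_{t-1} - y_{t-1}) + \Gamma\, \Delta y_{t-1} + \varepsilon_t,
\]

where y^{*}_t denotes the moving target ratios, themselves driven by an underlying state Vector AutoRegressive process, and \alpha governs the speed of adjustment.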
22 pages, 2908 KiB  
Article
Bayesian Model Averaging and Prior Sensitivity in Stochastic Frontier Analysis
by Kamil Makieła and Błażej Mazur
Econometrics 2020, 8(2), 13; https://doi.org/10.3390/econometrics8020013 - 20 Apr 2020
Cited by 4 | Viewed by 4967
Abstract
This paper discusses Bayesian model averaging (BMA) in Stochastic Frontier Analysis and investigates inference sensitivity to prior assumptions made about the scale parameter of (in)efficiency. We turn our attention to the "standard" prior specifications for the popular normal-half-normal and normal-exponential models. To facilitate formal model comparison, we propose a model that nests both sampling models and generalizes the symmetric term of the compound error. Within this setup it is possible to develop coherent priors for the model parameters in an explicit way. We analyze how sensitive the posterior characteristics of technology, stochastic parameters, latent variables and, especially, the models' posterior probabilities (which are crucial for adequate inference pooling) are to different prior specifications for the aforementioned scale parameter. We find that using incoherent priors on the scale parameter of inefficiency has (i) virtually no impact on the technology parameters; (ii) some impact on inference about the stochastic parameters and latent variables; and (iii) substantial impact on marginal data densities, which are crucial in BMA. Full article
(This article belongs to the Special Issue Bayesian and Frequentist Model Averaging)
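The sampling models being compared share the standard composed-error frontier structure; in generic notation,

\[
y_i = x_i^{\prime}\beta + v_i - u_i, \qquad u_i \ge 0,
\]

where v_i is the symmetric noise term and u_i the one-sided inefficiency term, specified as half-normal or exponential; the scale parameter of u_i is the object of the prior-sensitivity analysis.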
26 pages, 702 KiB  
Article
Simultaneous Indirect Inference, Impulse Responses and ARMA Models
by Lynda Khalaf and Beatriz Peraza López
Econometrics 2020, 8(2), 12; https://doi.org/10.3390/econometrics8020012 - 2 Apr 2020
Cited by 1 | Viewed by 5195
Abstract
A two-stage simulation-based framework is proposed to derive identification-robust confidence sets by applying Indirect Inference, in the context of Autoregressive Moving Average (ARMA) processes for finite samples. The resulting objective functions are treated as test statistics, which are inverted rather than optimized, via the Monte Carlo test method. Simulation studies illustrate accurate size and good power. Projected impulse-response confidence bands are simultaneous by construction and exhibit robustness to parameter identification problems. The persistence of shocks to oil prices and returns is analyzed via impulse-response confidence bands. Our findings support the usefulness of impulse responses as an empirically relevant transformation of the confidence set. Full article
(This article belongs to the Special Issue Resampling Methods in Econometrics)
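The Monte Carlo test method used here replaces optimization by test inversion; schematically, for each candidate parameter value \theta_0 the statistic is simulated N times under \theta_0 and a simulation-based p-value is computed as

\[
\hat{p}_N(\theta_0) = \frac{1 + \#\{\, j : S_j(\theta_0) \ge S_0(\theta_0) \,\}}{N + 1},
\]

with the identification-robust confidence set collecting all \theta_0 that are not rejected at level \alpha; impulse-response bands are then obtained by projecting this set through the impulse-response transformation.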