Econometrics doi: 10.3390/econometrics12010007
Authors: Chaoyi Chen Thanasis Stengos Jianhan Zhang
This paper investigates the relationship between public debt and economic growth in the context of a panel kink regression with latent group structures. The proposed model allows us to explore the heterogeneous threshold effects of public debt on economic growth based on unknown group patterns. We propose a least squares estimator and demonstrate the consistency of estimating group structures. The finite sample performance of the proposed estimator is evaluated by simulations. Our findings reveal that the nonlinear relationship between public debt and economic growth is characterized by a heterogeneous threshold level, which varies among different groups, and highlight that the mixed results found in previous studies may stem from the assumption of a homogeneous threshold effect.
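As background, a panel kink (continuous threshold) regression with a latent group-specific threshold can be written as in the sketch below; the notation, including the group index g_i, is illustrative and not necessarily the authors' exact specification.

```latex
% Generic panel kink regression with a latent group-specific threshold (sketch).
\begin{equation}
  y_{it} = \beta_{1,g_i}\,(d_{it}-\gamma_{g_i})\,\mathbf{1}\{d_{it}\le\gamma_{g_i}\}
         + \beta_{2,g_i}\,(d_{it}-\gamma_{g_i})\,\mathbf{1}\{d_{it}>\gamma_{g_i}\}
         + \mathbf{x}_{it}'\boldsymbol{\delta} + \mu_i + \varepsilon_{it}
\end{equation}
% y_it: growth, d_it: public debt ratio, gamma_{g_i}: the kink point of unit i's (unknown) group g_i.
% The regression function is continuous in d_it but changes slope at the group-specific kink.
```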
Econometrics doi: 10.3390/econometrics12010006
Authors: Benedikt M. Pötscher Leopold Sögner Martin Wagner
This Special Issue was organized in relation to the fifth Vienna Workshop on High-Dimensional Time Series in Macroeconomics and Finance, which took place at the Institute for Advanced Studies in Vienna on 9 June and 10 June 2022 [...]
Econometrics doi: 10.3390/econometrics12010005
Authors: João Pedro Coli de Souza Monteneri Nacinben Márcio Laurini
This study introduces a multivariate extension to the class of stochastic volatility models, employing integrated nested Laplace approximations (INLA) for estimation. Bayesian methods for estimating stochastic volatility models through Markov Chain Monte Carlo (MCMC) can become computationally burdensome or inefficient as the dataset size and problem complexity increase. Furthermore, issues related to chain convergence can also arise. In light of these challenges, this research aims to establish a computationally efficient approach for estimating multivariate stochastic volatility models. We propose a multifactor formulation estimated using the INLA methodology, enabling an approach that leverages sparse linear algebra and parallelization techniques. To evaluate the effectiveness of our proposed model, we conduct in-sample and out-of-sample empirical analyses of stock market index return series. Furthermore, we provide a comparative analysis with models estimated using MCMC, demonstrating the computational efficiency and goodness of fit improvements achieved with our approach.
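For orientation, the univariate building block of a stochastic volatility model is sketched below; the paper's multivariate multifactor formulation stacks several such latent log-volatility factors, and the Gaussian AR(1) latent structure is what makes the model a latent Gaussian model amenable to INLA-type approximation.

```latex
% Basic stochastic volatility building block (sketch).
\begin{align}
  y_t &= \exp(h_t/2)\,\varepsilon_t, & \varepsilon_t &\sim \mathcal{N}(0,1),\\
  h_t &= \mu + \phi\,(h_{t-1}-\mu) + \eta_t, & \eta_t &\sim \mathcal{N}(0,\sigma_\eta^2)
\end{align}
% y_t: demeaned return, h_t: latent log-volatility following a stationary Gaussian AR(1).
```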
Econometrics doi: 10.3390/econometrics12010004
Authors: Christa Hangl
Software investments can significantly contribute to corporate success by optimising productivity, stimulating creativity, elevating customer satisfaction, and equipping organisations with the essential resources to adapt and thrive in a rapidly changing market. This paper examines whether software investments have an impact on the economic success of the companies listed on the Austrian Traded Prime market (ATX companies). A literature review and qualitative content analysis are performed to answer the research questions. For testing hypotheses, a longitudinal study is conducted. Over a ten-year period, the consolidated financial statements of the businesses under review are evaluated. A panel data analysis supports the evaluation of the data. This study offers notable distinctions from other research that has investigated the correlation between digitalisation and economic success. In contrast to prior studies that relied on surveys to assess the level of digitalisation, this study obtained the required data by conducting a comprehensive examination of the annual reports of all the organisations included in the analysis. The regression analysis of all businesses revealed no correlation between software expenditures and economic success. The regression models were subsequently calculated independently for financial and non-financial companies. The correlation between software investments and economic success in both industries is evident.
Econometrics doi: 10.3390/econometrics12010003
Authors: Yong Bao
This paper proposes estimating linear dynamic panels by explicitly exploiting the endogeneity of lagged dependent variables and expressing the cross-moments between the endogenous lagged dependent variables and disturbances in terms of model parameters. These moments, when recentered, form the basis for model estimation. The resulting estimator’s asymptotic properties are derived under different asymptotic regimes (large number of cross-sectional units or long time spans), stability conditions (with or without a unit root), and error characteristics (homoskedasticity or heteroskedasticity of different forms). Monte Carlo experiments show that it has very good finite-sample performance.
Econometrics doi: 10.3390/econometrics12010002
Authors: Md Samsul Alam Alessandra Amendola Vincenzo Candila Shahram Dehghan Jabarabadi
The introduction of Bitcoin as a distributed peer-to-peer digital cash in 2008, and its first recorded real transaction in 2010, served the function of a medium of exchange, transforming the financial landscape by offering a decentralized, peer-to-peer alternative to conventional monetary systems. This study investigates the intricate relationship between cryptocurrencies and monetary policy, with a particular focus on their long-term volatility dynamics. We enhance the GARCH-MIDAS (Mixed Data Sampling) framework by adopting the SB-GARCH-MIDAS (Structural Break Mixed Data Sampling) model to analyze the daily returns of three prominent cryptocurrencies (Bitcoin, Binance Coin, and XRP) alongside monthly monetary policy data from the USA and South Africa, allowing for the potential presence of a structural break in monetary policy, which provides us with two GARCH-MIDAS models. The most recent data observation for all samples is 30 June 2022, although it is essential to acknowledge that the sample time ranges vary due to differences in cryptocurrency data accessibility. Our research incorporates model confidence set (MCS) procedures and assesses model performance using various metrics, including the AIC, BIC, MSE, and QLIKE, supplemented by comprehensive residual diagnostics. Notably, our analysis reveals that the SB-GARCH-MIDAS model outperforms the others in forecasting cryptocurrency volatility. Furthermore, we find that, in contrast to their younger counterparts, the long-term volatility of older cryptocurrencies is sensitive to structural breaks in the exogenous variables. Our study sheds light on the diversification within the cryptocurrency space, shaped by technological characteristics and temporal considerations, and provides practical insights, emphasizing the importance of incorporating monetary policy in assessing cryptocurrency volatility. The implications of our study extend to portfolio management with dynamic consideration, offering valuable insights for investors and decision-makers and underscoring the significance of considering both cryptocurrency types and the economic context of host countries.
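For context, the core of a GARCH-MIDAS model is the multiplicative split of conditional variance into a short-run GARCH component and a long-run component driven by low-frequency (here, monetary policy) variables. The sketch below follows the standard Engle, Ghysels and Sohn (2013) formulation; the SB variant studied in the paper additionally allows a structural break in the long-run component, and the parameterization shown here is illustrative.

```latex
% Standard GARCH-MIDAS decomposition (sketch); i indexes days within the
% low-frequency period t (e.g., a month), X_t is the monthly monetary-policy variable.
\begin{align}
  r_{i,t} &= \mu + \sqrt{\tau_t\, g_{i,t}}\,\varepsilon_{i,t}, \qquad \varepsilon_{i,t}\sim\mathcal{N}(0,1),\\
  g_{i,t} &= (1-\alpha-\beta) + \alpha\,\frac{(r_{i-1,t}-\mu)^2}{\tau_t} + \beta\, g_{i-1,t},\\
  \log \tau_t &= m + \theta \sum_{k=1}^{K} \varphi_k(\omega)\, X_{t-k}
\end{align}
% g_{i,t}: unit-mean short-run GARCH component; tau_t: MIDAS-filtered long-run component.
```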
Econometrics doi: 10.3390/econometrics12010001
Authors: Peter Roth
Throughout its lifespan, a journal goes through many phases—and Econometrics (Econometrics Homepage n [...]
Econometrics doi: 10.3390/econometrics11040028
Authors: Mohitosh Kejriwal Linh Nguyen Xuewen Yu
This paper presents a new approach to constructing multistep combination forecasts in a nonstationary framework with stochastic and deterministic trends. Existing forecast combination approaches in the stationary setup typically target the in-sample asymptotic mean squared error (AMSE), relying on its approximate equivalence with the asymptotic forecast risk (AFR). Such equivalence, however, breaks down in a nonstationary setup. This paper develops combination forecasts based on minimizing an accumulated prediction errors (APE) criterion that directly targets the AFR and remains valid whether the time series is stationary or not. We show that the performance of APE-weighted forecasts is close to that of the optimal, infeasible combination forecasts. Simulation experiments are used to demonstrate the finite sample efficacy of the proposed procedure relative to Mallows/Cross-Validation weighting schemes that target the AMSE, as well as to underscore the importance of accounting for both persistence and lag order uncertainty. An application to forecasting US macroeconomic time series confirms the simulation findings and illustrates the benefits of employing the APE criterion for real as well as nominal variables at both short and long horizons. A practical implication of our analysis is that the degree of persistence can play an important role in the choice of combination weights.
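As a rough illustration of the mechanics, combination weights can be formed from accumulated out-of-sample prediction errors; the inverse-APE weighting rule below is an assumption made for the sketch and is not necessarily the weighting scheme derived in the paper.

```python
import numpy as np

def ape_weights(y, forecasts, burn_in=20):
    """Combination weights from accumulated one-step-ahead prediction errors (APE).

    y         : (T,) array of realized values
    forecasts : (T, M) array of one-step-ahead forecasts from M candidate models
    Returns weights proportional to the inverse of each model's APE (illustrative rule).
    """
    sq_err = (y[burn_in:, None] - forecasts[burn_in:, :]) ** 2
    ape = sq_err.sum(axis=0)          # accumulated squared prediction errors per model
    inv = 1.0 / ape
    return inv / inv.sum()

# usage: given forecasts F with shape (T, M), the combined forecast is F @ ape_weights(y, F)
```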
Econometrics doi: 10.3390/econometrics11040027
Authors: Willi Semmler Gabriel R. Padró Rosario Levent Koçkesen
Some financial disruptions that started in California, U.S., in March 2023, resulting in the closure of several medium-size U.S. banks, shed new light on the role of liquidity in business cycle dynamics. In the normal path of the business cycle, liquidity and output mutually interact. Small shocks generally lead to mean reversion through market forces, as a low degree of liquidity dissipation does not significantly disrupt the economic dynamics. However, larger shocks and greater liquidity dissipation arising from runs on financial institutions and contagion effects can trigger tipping points, financial disruptions, and economic downturns. The latter poses severe challenges for Central Banks, which, during normal times, usually maintain a hands-off approach with soft regulation and monitoring, allowing the market to operate. However, in severe times of liquidity dissipation, they must swiftly restore liquidity flows and rebuild trust in stability to avoid further disruptions and meltdowns. In this paper, we present a nonlinear model of the liquidity–macro interaction and econometrically explore those types of dynamic features with data from the U.S. economy. Guided by a theoretical model, we use nonlinear econometric methods of the Smooth Transition Regression type to study those features, which suggest guidelines for further regulation, monitoring, and institutional enforcement of rules.
Econometrics doi: 10.3390/econometrics11040026
Authors: Sylvia Frühwirth-Schnatter Darjus Hosszejni Hedibert Freitas Lopes
Despite the popularity of factor models with simple loading matrices, little attention has been given to formally address the identifiability of these models beyond standard rotation-based identification such as the positive lower triangular (PLT) constraint. To fill this gap, we review the advantages of variance identification in simple factor analysis and introduce the generalized lower triangular (GLT) structures. We show that the GLT assumption is an improvement over PLT without compromise: GLT is also unique but, unlike PLT, a non-restrictive assumption. Furthermore, we provide a simple counting rule for variance identification under GLT structures, and we demonstrate that within this model class, the unknown number of common factors can be recovered in an exploratory factor analysis. Our methodology is illustrated for simulated data in the context of post-processing posterior draws in sparse Bayesian factor analysis.
Econometrics doi: 10.3390/econometrics11040025
Authors: Julie Le Gallo Marc-Alexandre Sénégas
We provide new analytical results for the implementation of the Hausman specification test statistic in a standard panel data model, comparing the version based on the estimators computed from the untransformed random effects model specification under Feasible Generalized Least Squares and the one computed from the quasi-demeaned model estimated by Ordinary Least Squares. We show that the quasi-demeaned model cannot provide a reliable magnitude when implementing the Hausman test in a finite sample setting, although it is the most common approach used to produce the test statistic in econometric software. The difference between the Hausman statistics computed under the two methods can be substantial and even lead to opposite conclusions for the test of orthogonality between the regressors and the individual-specific effects. Furthermore, this difference remains important even with large cross-sectional dimensions as it mainly depends on the within-between structure of the regressors and on the presence of a significant correlation between the individual effects and the covariates in the data. We propose to supplement the test outcomes that are provided in the main econometric software packages with some metrics to address the issue at hand.
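For reference, the textbook Hausman statistic that both computational routes target compares the fixed-effects and random-effects estimates of the time-varying regressors; the helper below is a generic sketch of that statistic, not the paper's specific FGLS-based or quasi-demeaned-OLS-based implementation.

```python
import numpy as np
from scipy import stats

def hausman(b_fe, V_fe, b_re, V_re):
    """Generic Hausman test comparing fixed-effects and random-effects estimates.

    b_fe, b_re : (k,) coefficient vectors for the time-varying regressors
    V_fe, V_re : (k, k) estimated covariance matrices of those coefficients
    """
    diff = b_fe - b_re
    V = V_fe - V_re                      # can fail to be positive definite in finite samples
    stat = float(diff @ np.linalg.solve(V, diff))
    p_value = stats.chi2.sf(stat, df=len(diff))
    return stat, p_value
```

The difference matrix V_fe - V_re need not be well behaved in finite samples, which is one well-known reason different ways of computing the statistic can yield noticeably different magnitudes.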
Econometrics doi: 10.3390/econometrics11040024
Authors: Minkun Kim David Lindberg Martin Crane Marija Bezbradica
In actuarial practice, the modeling of total losses tied to a certain policy is a nontrivial task due to complex distributional features. In the recent literature, the application of the Dirichlet process mixture for insurance loss has been proposed to eliminate the risk of model misspecification biases. However, the effect of covariates as well as missing covariates in the modeling framework is rarely studied. In this article, we propose novel connections among a covariate-dependent Dirichlet process mixture, log-normal convolution, and missing covariate imputation. As a generative approach, our framework models the joint distribution of the outcome and covariates, which allows us to impute missing covariates under the assumption of missingness at random. The performance is assessed by applying our model to several insurance datasets of varying size and data missingness from the literature, and the empirical results demonstrate the benefit of our model compared with the existing actuarial models, such as the Tweedie-based generalized linear model, generalized additive model, or multivariate adaptive regression spline.
Econometrics doi: 10.3390/econometrics11040023
Authors: Alecos Papadopoulos
We derive a new matrix statistic for the Hausman test for endogeneity in cross-sectional Instrumental Variables estimation that incorporates heteroskedasticity in a natural way and does not use a generalized inverse. A Monte Carlo study examines the performance of the statistic for different heteroskedasticity-robust variance estimators and different skedastic situations. We find that the test statistic performs well as regards empirical size in almost all cases; however, as regards empirical power, how one corrects for heteroskedasticity matters. We also compare its performance with that of the Wald statistic from the augmented regression setup that is often used for the endogeneity test, and we find that the choice between them may depend on the desired significance level of the test.
Econometrics doi: 10.3390/econometrics11030022
Authors: Dean Fantazzini Yufeng Xiao
Detecting pump-and-dump schemes involving cryptoassets with high-frequency data is challenging due to imbalanced datasets and the early occurrence of unusual trading volumes. To address these issues, we propose constructing synthetic balanced datasets using resampling methods and flagging a pump-and-dump from the moment of public announcement up to 60 min beforehand. We validated our proposals using data from Pumpolymp and the CryptoCurrency eXchange Trading Library to identify 351 pump signals relative to the Binance crypto exchange in 2021 and 2022. We found that the most effective approach was using the original imbalanced dataset with pump-and-dumps flagged 60 min in advance, together with a random forest model with data segmented into 30-s chunks and regressors computed with a moving window of 1 h. Our analysis revealed that a better balance between sensitivity and specificity could be achieved by simply selecting an appropriate probability threshold, such as setting the threshold close to the observed prevalence in the original dataset. Resampling methods were useful in some cases, but threshold-independent measures were not affected. Moreover, detecting pump-and-dumps in real-time involves high-dimensional data, and the use of resampling methods to build synthetic datasets can be time-consuming, making them less practical.
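To make the threshold idea concrete, here is a minimal sketch (with synthetic data standing in for the 30-second-chunk features and pump flags) of fitting a random forest on an imbalanced sample and moving the classification threshold to the observed prevalence instead of the default 0.5; it illustrates the general technique, not the authors' exact feature pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic imbalanced data: ~2% positives, mimicking the rarity of pump-and-dump events.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))
y = (rng.random(5000) < 0.02).astype(int)
X[y == 1] += 0.8                              # give the rare class a weak signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=300, n_jobs=-1, random_state=0).fit(X_tr, y_tr)

# Set the probability threshold near the observed prevalence of the positive class,
# trading specificity for sensitivity without resampling the training data.
threshold = y_tr.mean()
y_hat = (rf.predict_proba(X_te)[:, 1] >= threshold).astype(int)
print("flagged share:", y_hat.mean(), "true share:", y_te.mean())
```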
Econometrics doi: 10.3390/econometrics11030021
Authors: Emil Palikot
I study the relationship between competition and innovation, focusing on the distinction between product and process innovations. By considering product innovation, I expand upon earlier research on the relationship between competition and innovation, which focused on process innovations. New products allow firms to differentiate themselves from one another. I demonstrate that the competition level that creates the most innovation incentive is higher for process innovation than product innovation. I also provide empirical evidence that supports these results. Using the community innovation survey, I first show that an inverted U-shape characterizes the relationship between competition and both process and product innovations. The optimal competition level for promoting innovation is higher for process innovation.
Econometrics doi: 10.3390/econometrics11030020
Authors: Subal C. Kumbhakar Jingfang Zhang Gudbrand Lien
In this study, we leverage geographical coordinates and firm-level panel data to uncover variations in production across different locations. Our approach involves using a semiparametric proxy variable regression estimator, which allows us to define and estimate a customized production function for each firm and its corresponding location. By employing kernel methods, we estimate the nonparametric functions that determine the model’s parameters based on latitude and longitude. Furthermore, our model incorporates productivity components that consider various factors that influence production. Unlike spatially autoregressive-type production functions that assume a uniform technology across all locations, our approach estimates technology and productivity at both the firm and location levels, taking into account their specific characteristics. To handle endogenous regressors, we incorporate a proxy variable identification technique, distinguishing our method from geographically weighted semiparametric regressions. To investigate the heterogeneity in production technology and productivity among Norwegian grain farmers, we apply our model to a sample of farms using panel data spanning from 2001 to 2020. Through this analysis, we provide empirical evidence of regional variations in both technology and productivity among Norwegian grain farmers. Finally, we discuss the suitability of our approach for addressing the heterogeneity in this industry.
Econometrics doi: 10.3390/econometrics11030019
Authors: Bilel Sanhaji Julien Chevallier
Using the capital asset pricing model, this article critically assesses the relative importance of computing ‘realized’ betas from high-frequency returns for Bitcoin and Ethereum—the two major cryptocurrencies—against their classic counterparts using the 1-day and 5-day return-based betas. The sample includes intraday data from 15 May 2018 until 17 January 2023. The microstructure noise is present until 4 min in the BTC and ETH high-frequency data. Therefore, we opt for a conservative choice with a 60 min sampling frequency. Considering 250 trading days as a rolling-window size, we obtain rolling betas < 1 for Bitcoin and Ethereum with respect to the CRIX market index, which could enhance portfolio diversification (at the expense of maximizing returns). We flag the minimal tracking errors at the hourly and daily frequencies. The dispersion of rolling betas is higher for the weekly frequency and is concentrated towards values of β > 0.8 for BTC (β > 0.65 for ETH). The weekly frequency is thus revealed as being less precise for capturing the ‘pure’ systematic risk for Bitcoin and Ethereum. For Ethereum in particular, the availability of high-frequency data tends to produce, on average, a more reliable inference. In the age of financial data feed immediacy, our results strongly suggest that pension fund managers, hedge fund traders, and investment bankers include ‘realized’ versions of CAPM betas in their dashboard of indicators for portfolio risk estimation. Sensitivity analyses cover jump detection in BTC/ETH high-frequency data (up to 25%). We also include several jump-robust estimators of realized volatility, where realized quadpower volatility prevails.
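As a reminder of the underlying quantity, a ‘realized’ beta over a window is the realized covariance between asset and market returns divided by the realized market variance. The plain-NumPy sketch below shows that computation on pre-sampled (e.g., 60-minute) returns with an illustrative rolling window; it is not the paper's exact estimator or data handling.

```python
import numpy as np

def realized_beta(asset_ret, market_ret):
    """Realized beta over one window: realized covariance over realized market variance."""
    return np.sum(asset_ret * market_ret) / np.sum(market_ret ** 2)

def rolling_realized_beta(asset_ret, market_ret, window=250):
    """Rolling realized betas over consecutive windows of pre-sampled returns."""
    return np.array([
        realized_beta(asset_ret[i - window:i], market_ret[i - window:i])
        for i in range(window, len(asset_ret) + 1)
    ])

# usage with synthetic returns standing in for an asset and the market index
rng = np.random.default_rng(0)
m = rng.normal(scale=0.01, size=2000)
a = 0.8 * m + rng.normal(scale=0.01, size=2000)      # an asset with true beta of about 0.8
print(rolling_realized_beta(a, m)[-1])
```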
Econometrics doi: 10.3390/econometrics11030018
Authors: Manabu Asai
Despite the growing interest in realized stochastic volatility models, their estimation techniques, such as simulated maximum likelihood (SML), are computationally intensive. Based on the realized volatility equation, this study demonstrates that, in a finite sample, the quasi-maximum likelihood estimator based on the Kalman filter is competitive with the two-step SML estimator, which is less efficient than the SML estimator. Regarding empirical results for the S&P 500 index, the quasi-likelihood ratio tests favored the two-factor realized asymmetric stochastic volatility model with the standardized t distribution among alternative specifications, and an analysis on out-of-sample forecasts prefers the realized stochastic volatility models, rejecting the model without the realized volatility measure. Furthermore, the forecasts of alternative RSV models are statistically equivalent for the data covering the global financial crisis.
Econometrics doi: 10.3390/econometrics11020017
Authors: Mateusz Szysz Andrzej Torój
In some NUTS 2 (Nomenclature of Territorial Units for Statistics) regions of Europe, the COVID-19 pandemic triggered an increase in mortality of several dozen percent, and of only a few percent in others. Based on data on 189 regions from 19 European countries, we identified the factors responsible for these differences, both within and between countries. Due to the spatial nature of the virus diffusion and to account for unobservable country-level and sub-national characteristics, we used spatial econometric tools to estimate two types of models, explaining (i) the number of cases per 10,000 inhabitants and (ii) the percentage increase in the number of deaths compared to the 2016–2019 average in individual regions (mostly NUTS 2) in 2020. We used two weight matrices simultaneously, accounting for both types of spatial autocorrelation: linked to geographical proximity and adherence to the same country. For the feature selection, we used Bayesian Model Averaging. The number of reported cases is negatively correlated with the share of risk groups in the population (60+ years old, older people reporting chronic lower respiratory disease, and high blood pressure) and the level of society’s belief that the positive health effects of restrictions outweighed the economic losses. It is positively correlated with GDP per capita (PPS) and the percentage of people employed in industry. By contrast, mortality (per number of infections) has been limited through high-quality healthcare. Additionally, we noticed that the later the pandemic first hit a region, the lower the death toll there was, even controlling for the number of infections.
Econometrics doi: 10.3390/econometrics11020016
Authors: Mahmoud Arayssi Ali Fakih Nathir Haimoun
Skills utilization is an important factor affecting labor productivity and job satisfaction. This paper examines the effects of skills mismatch, nepotism, and gender discrimination on wages and job satisfaction in MENA workplaces. Gender discrimination implies social costs for firms due to higher turnover rates and lower retention levels. Young females suffer disproportionately from this compared to their male counterparts, resulting in a wider gender gap in the labor market at multiple levels. We find that the skill mismatch problem appears to be more significant among specific demographic groups, such as females, immigrants, and ethnic minorities; it is also negatively correlated with job satisfaction and wages. We bridge the literature gap on youth skill mismatch’s main determinants, including nepotism, by showing evidence from some developing countries. Given the implied social costs associated with these practices and their impact on the labor market, we have compiled a list of policy recommendations that the government and relevant stakeholders should take to reduce these problems in the workplace. In doing so, we provide a guide to address MENA’s skill mismatch and improve overall job satisfaction.
Econometrics doi: 10.3390/econometrics11020015
Authors: Jarosław Gruszka Janusz Szwabiński
The parametric estimation of stochastic differential equations (SDEs) has been the subject of intense study for several decades. The Heston model, for instance, is based on two coupled SDEs and is often used in financial mathematics for the dynamics of asset prices and their volatility. Calibrating it to real data would be very useful in many practical scenarios. It is very challenging, however, since the volatility is not directly observable. In this paper, a complete estimation procedure of the Heston model without and with jumps in the asset prices is presented. Bayesian regression combined with the particle filtering method is used as the estimation framework. Within the framework, we propose a novel approach to handle jumps in order to neutralise their negative impact on the estimates of the key parameters of the model. An improvement in the sampling in the particle filtering method is discussed as well. Our analysis is supported by numerical simulations of the Heston model to investigate the performance of the estimators. In addition, a practical follow-along recipe is given to allow finding adequate estimates from any given data.
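For readers who want to generate test paths like those used in such numerical experiments, a basic Euler discretization of the (jump-free) Heston model is sketched below; the parameter values and the full-truncation treatment of the variance are illustrative choices, not the paper's estimation procedure.

```python
import numpy as np

def simulate_heston(S0=100.0, v0=0.04, mu=0.05, kappa=2.0, theta=0.04,
                    xi=0.3, rho=-0.7, T=1.0, n=252, seed=0):
    """Euler scheme for the Heston model with full truncation of the variance process."""
    rng = np.random.default_rng(seed)
    dt = T / n
    S = np.empty(n + 1); v = np.empty(n + 1)
    S[0], v[0] = S0, v0
    for t in range(n):
        z1 = rng.standard_normal()
        z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal()
        v_pos = max(v[t], 0.0)                      # truncate so the variance stays usable
        v[t + 1] = v[t] + kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2
        S[t + 1] = S[t] * np.exp((mu - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
    return S, v

# usage: prices, variances = simulate_heston(); such paths can feed a particle filter.
```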
Econometrics doi: 10.3390/econometrics11020014
Authors: Tamás Szabados
The aim of this paper is to give a multidimensional version of the classical one-dimensional case of smooth spectral density. A spectral density with smooth eigenvalues and H∞ eigenvectors gives an explicit method to factorize the spectral density and compute the Wold representation of a weakly stationary time series. A formula, similar to the Kolmogorov–Szegő formula, is given for the covariance matrix of the innovations. These results are important for giving the best linear predictions of the time series. The results are applicable when the rank of the process is smaller than the dimension of the process, which occurs frequently in many current applications, including econometrics.
Econometrics doi: 10.3390/econometrics11020013
Authors: Chengyu Li Luyi Shen Guoqi Qian
Time-series data, which exhibit a low signal-to-noise ratio, non-stationarity, and non-linearity, are commonly seen in high-frequency stock trading, where the objective is to increase the likelihood of profit by taking advantage of tiny discrepancies in prices and trading on them quickly and in huge quantities. For this purpose, it is essential to apply a trading method that is capable of fast and accurate prediction from such time-series data. In this paper, we developed an online time series forecasting method for high-frequency trading (HFT) by integrating three neural network deep learning models, i.e., long short-term memory (LSTM), gated recurrent unit (GRU), and transformer; and we abbreviate the new method to online LGT or O-LGT. The key innovation underlying our method is its efficient storage management, which enables super-fast computing. Specifically, when computing the forecast for the immediate future, we only use the output calculated from the previous trading data (rather than the previous trading data themselves) together with the current trading data. Thus, the computing only involves updating the current data into the process. We evaluated the performance of O-LGT by analyzing high-frequency limit order book (LOB) data from the Chinese market. It shows that, in most cases, our model achieves a similar speed with a much higher accuracy than the conventional fast supervised learning models for HFT. However, with a slight sacrifice in accuracy, O-LGT is approximately 12 to 64 times faster than the existing high-accuracy neural network models for LOB data from the Chinese market.
Econometrics doi: 10.3390/econometrics11020012
Authors: Lars Arne Jordanger Dag Tjøstheim
The ordinary spectrum is restricted in its applications, since it is based on the second-order moments (auto- and cross-covariances). Alternative approaches to spectrum analysis have been investigated based on other measures of dependence. One such approach was developed for univariate time series by the authors of this paper using the local Gaussian auto-spectrum based on the local Gaussian auto-correlations. This makes it possible to detect local structures in univariate time series that look similar to white noise when investigated by the ordinary auto-spectrum. In this paper, the local Gaussian approach is extended to a local Gaussian cross-spectrum for multivariate time series. The local Gaussian cross-spectrum has the desirable property that it coincides with the ordinary cross-spectrum for Gaussian time series, which implies that it can be used to detect non-Gaussian traits in the time series under investigation. In particular, if the ordinary spectrum is flat, then peaks and troughs of the local Gaussian spectrum can indicate nonlinear traits, which potentially might reveal local periodic phenomena that are undetected in an ordinary spectral analysis.
Econometrics doi: 10.3390/econometrics11020011
Authors: Dietmar Bauer
When using vector autoregressive (VAR) models for approximating time series, a key step is the selection of the lag length. Often this is performed using information criteria, even if a theoretical justification is lacking in some cases. For stationary processes, the asymptotic properties of the corresponding estimators are well documented in great generality in the book by Hannan and Deistler (1988). If the data-generating process is not a finite-order VAR, the selected lag length typically tends to infinity as a function of the sample size. For invertible vector autoregressive moving average (VARMA) processes, this growth is typically roughly proportional to log T. The same approach for lag length selection is also followed in practice for more general processes, for example, unit root processes. In the I(1) case, the literature suggests that the behavior is analogous to the stationary case. For I(2) processes, no such results are currently known. This note closes this gap, concluding that information-criteria-based lag length selection for I(2) processes indeed shows properties similar to those in the stationary case.
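In practice, the lag selection procedure whose asymptotic behavior the note studies looks like the following; the sketch uses statsmodels' VAR order selection on synthetic data purely to illustrate the mechanics of information-criteria-based selection.

```python
import numpy as np
from statsmodels.tsa.api import VAR

# Synthetic bivariate integrated series as a stand-in for the data of interest.
rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal((500, 2)), axis=0)

model = VAR(y)
selection = model.select_order(maxlags=12)   # compares AIC, BIC, HQIC and FPE across lag lengths
print(selection.summary())
results = model.fit(maxlags=12, ic='aic')    # fit with the AIC-selected lag length
print("selected lag order:", results.k_ar)
```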
Econometrics doi: 10.3390/econometrics11020010
Authors: Paul Haimerl Tobias Hartl
The COVID-19 pandemic is characterized by a recurring sequence of peaks and troughs. This article proposes a regime-switching unobserved components (UC) approach to model the trend of COVID-19 infections as a function of this ebb and flow pattern. Estimated regime probabilities indicate the prevalence of either an infection up- or down-turning regime for every day of the observational period. This method provides an intuitive real-time analysis of the state of the pandemic as well as a tool for identifying structural changes ex post. We find that when applied to U.S. data, the model closely tracks regime changes caused by viral mutations, policy interventions, and public behavior.
Econometrics doi: 10.3390/econometrics11010009
Authors: Gianluca Cubadda Alain Hecq Elisa Voisin
This paper proposes concepts and methods to investigate whether the bubble patterns observed in individual time series are common among them. Having established the conditions under which common bubbles are present within the class of mixed causal–noncausal vector autoregressive models, we suggest statistical tools to detect the common locally explosive dynamics in a Student t-distribution maximum likelihood framework. The performances of both likelihood ratio tests and information criteria were investigated in a Monte Carlo study. Finally, we evaluated the practical value of our approach via an empirical application on three commodity prices.
Econometrics doi: 10.3390/econometrics11010008
Authors: Nick James Max Menzies Jennifer Chan
This paper proposes a new method for financial portfolio optimization based on reducing simultaneous asset shocks across a collection of assets. This may be understood as an alternative approach to risk reduction in a portfolio based on a new mathematical quantity. First, we apply recently introduced semi-metrics between finite sets to determine the distance between time series’ structural breaks. Then, we build on the classical portfolio optimization theory of Markowitz and use this distance between asset structural breaks for our penalty function, rather than portfolio variance. Our experiments are promising: on synthetic data, we show that our proposed method does indeed diversify among time series with highly similar structural breaks and enjoys advantages over existing metrics between sets. On real data, experiments illustrate that our proposed optimization method performs well relative to nine other commonly used options, producing the second-highest returns, the lowest volatility, and second-lowest drawdown. The main implication for this method in portfolio management is reducing simultaneous asset shocks and potentially sharp associated drawdowns during periods of highly similar structural breaks, such as a market crisis. Our method adds to a considerable literature of portfolio optimization techniques in econometrics and could complement these via portfolio averaging.
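The optimization step can be pictured as a standard Markowitz-type program in which the variance penalty is swapped for a penalty built from distances between the assets' structural-break sets. The sketch below uses a generic symmetric penalty matrix as a placeholder for that quantity and scipy's constrained solver, so it shows the structure of the problem rather than the paper's exact semi-metric.

```python
import numpy as np
from scipy.optimize import minimize

def optimize_weights(expected_returns, penalty_matrix, lam=1.0):
    """Long-only weights maximizing expected return minus a quadratic pairwise penalty.

    penalty_matrix[i, j] is a symmetric penalty between assets i and j (here a placeholder
    for a similarity measure derived from distances between structural-break sets).
    """
    n = len(expected_returns)
    objective = lambda w: -(w @ expected_returns) + lam * (w @ penalty_matrix @ w)
    constraints = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
    bounds = [(0.0, 1.0)] * n
    res = minimize(objective, np.full(n, 1.0 / n), bounds=bounds, constraints=constraints)
    return res.x

# toy example: assets 0 and 2 have very similar break structure (high mutual penalty)
mu = np.array([0.08, 0.05, 0.06])
P = np.array([[1.0, 0.2, 0.9], [0.2, 1.0, 0.3], [0.9, 0.3, 1.0]])
print(optimize_weights(mu, P).round(3))
```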
Econometrics doi: 10.3390/econometrics11010007
Authors: Marianna Bolla Dongze Ye Haoyu Wang Renyuan Ma Valentin Frappier William Thompson Catherine Donner Máté Baranyi Fatma Abdelkhalek
A causal vector autoregressive (CVAR) model is introduced for weakly stationary multivariate processes, combining a recursive directed graphical model for the contemporaneous components and a vector autoregressive model longitudinally. Block Cholesky decomposition with varying block sizes is used to solve the model equations and estimate the path coefficients along a directed acyclic graph (DAG). If the DAG is decomposable, i.e., the zeros form a reducible zero pattern (RZP) in its adjacency matrix, then covariance selection is applied that assigns zeros to the corresponding path coefficients. Real-life applications are also considered, where for the optimal order p≥1 of the fitted CVAR(p) model, order selection is performed with various information criteria.
Econometrics doi: 10.3390/econometrics11010006
Authors: Hui-Ching Chuang Jau-er Chen
In this study, we explore the effect of industry distress on recovery rates by using the unconditional quantile regression (UQR). The UQR provides better interpretative and thus policy-relevant information on the predictive effect of the target variable than the conditional quantile regression. To deal with a broad set of macroeconomic and industry variables, we use the lasso-based double selection to estimate the predictive effects of industry distress and select relevant variables. Our sample consists of 5334 debt and loan instruments in Moody’s Default and Recovery Database from 1990 to 2017. The results show that industry distress decreases recovery rates from 15.80% to 2.94% for the 15th to 55th percentile range and slightly increases the recovery rates in the lower and the upper tails. The UQR provides quantitative measurements of the downturn loss given default that the Basel Capital Accord requires.
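Operationally, an unconditional quantile regression is usually implemented as a recentered influence function (RIF) regression in the spirit of Firpo, Fortin and Lemieux (2009): compute the RIF of the outcome at the chosen quantile and regress it on the covariates by OLS. The sketch below shows that step only, with a kernel density estimate for the density at the quantile; the lasso-based double selection used in the paper is not shown.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import gaussian_kde

def rif_quantile_regression(y, X, tau=0.5):
    """Unconditional quantile (RIF) regression of y on X at the tau-th quantile."""
    q = np.quantile(y, tau)
    f_q = gaussian_kde(y)(q)[0]                       # density of y at its tau-quantile
    rif = q + (tau - (y <= q).astype(float)) / f_q    # recentered influence function
    return sm.OLS(rif, sm.add_constant(X)).fit()

# toy example: the slope on X[:, 0] is the unconditional quantile partial effect
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = 0.5 * X[:, 0] + rng.normal(size=500)
print(rif_quantile_regression(y, X, tau=0.25).params.round(3))
```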
Econometrics doi: 10.3390/econometrics11010005
Authors: Anthony D. Hall Annastiina Silvennoinen Timo Teräsvirta
This paper proposes a methodology for building Multivariate Time-Varying STCC–GARCH models. The novel contributions in this area are the specification tests related to the correlation component, the extension of the general model to allow for additional correlation regimes, and a detailed exposition of the systematic, improved modelling cycle required for such nonlinear models. There is an R-package that includes the steps in the modelling cycle. Simulations demonstrate the robustness of the recommended model building approach. The modelling cycle is illustrated using daily return series for Australia’s four largest banks.
Econometrics doi: 10.3390/econometrics11010004
Authors: Maksat Jumamyradov Benjamin M. Craig Murat Munkin William Greene
Health preference research (HPR) is the subfield of health economics dedicated to understanding the value of health and health-related objects using observational or experimental methods. In a discrete choice experiment (DCE), the utility of objects in a choice set may differ systematically between persons due to interpersonal heterogeneity (e.g., brand-name medication, generic medication, no medication). To allow for interpersonal heterogeneity, choice probabilities may be described using logit functions with fixed individual-specific parameters. However, in practice, a study team may ignore heterogeneity in health preferences and estimate a conditional logit (CL) model. In this simulation study, we examine the effects of omitted variance and correlations (i.e., omitted heterogeneity) in logit parameters on the estimation of the coefficients, willingness to pay (WTP), and choice predictions. The simulated DCE results show that CL estimates may have been biased depending on the structure of the heterogeneity that we used in the data generation process. We also found that these biases in the coefficients led to a substantial difference in the true and estimated WTP (i.e., up to 20%). We further found that CL and true choice probabilities were similar to each other (i.e., difference was less than 0.08) regardless of the underlying structure. The results imply that, under preference heterogeneity, CL estimates may differ from their true means, and these differences can have substantive effects on the WTP estimates. More specifically, CL WTP estimates may be underestimated due to interpersonal heterogeneity, and a failure to recognize this bias in HPR indirectly underestimates the value of treatment, substantially reducing quality of care. These findings have important implications in health economics because CL remains widely used in practice.
Econometrics doi: 10.3390/econometrics11010003
Authors: Econometrics Editorial Office Econometrics Editorial Office
High-quality academic publishing is built on rigorous peer review [...]
Econometrics doi: 10.3390/econometrics11010002
Authors: Graziano Moramarco
We propose an approach for jointly measuring global macroeconomic uncertainty and bilateral spillovers of uncertainty between countries using a global vector autoregressive (GVAR) model. Over the period 2000Q1–2020Q4, our global index is able to summarize a variety of uncertainty measures, such as financial-market volatility, economic-policy uncertainty, survey-forecast-based measures and econometric measures of macroeconomic uncertainty, showing major peaks during both the global financial crisis and the COVID-19 pandemic. Global spillover effects are quantified through a novel GVAR-based decomposition of country-level uncertainty into the contributions from all countries in the global model. We show that this approach produces estimates of uncertainty spillovers which are strongly related to the structure of the global economy.
Econometrics doi: 10.3390/econometrics11010001
Authors: Omar Abbara Mauricio Zevallos
In this paper, we propose a new method for estimating and forecasting asymmetric stochastic volatility models. The proposal is based on dynamic linear models with Markov switching written as state space models. Then, the likelihood is calculated through Kalman filter outputs and the estimates are obtained by the maximum likelihood method. Monte Carlo experiments are performed to assess the quality of estimation. In addition, a backtesting exercise with the real-life time series illustrates that the proposed method is a quick and accurate alternative for forecasting value-at-risk.
Econometrics doi: 10.3390/econometrics10040037
Authors: Marc Hallin
For more than half a century, Manfred Deistler has been contributing to the construction of the rigorous theoretical foundations of the statistical analysis of time series and more general stochastic processes. Half a century of unremitting activity is not easily summarized in a few pages. In this short note, we chose to concentrate on a relatively little-known aspect of Manfred’s contribution that nevertheless had quite an impact on the development of one of the most powerful tools of contemporary time series and econometrics: dynamic factor models.
Econometrics doi: 10.3390/econometrics10040036
Authors: Francesco Giancaterini Alain Hecq Claudio Morana
This paper proposes strategies to detect time reversibility in stationary stochastic processes by using the properties of mixed causal and noncausal models. It shows that they can also be used for non-stationary processes when the trend component is computed with the Hodrick–Prescott filter rendering a time-reversible closed-form solution. This paper also links the concept of an environmental tipping point to the statistical property of time irreversibility and assesses fourteen climate indicators. We find evidence of time irreversibility in greenhouse gas emissions, global temperature, global sea levels, sea ice area, and some natural oscillation indices. While not conclusive, our findings urge the implementation of correction policies to avoid the worst consequences of climate change and not miss the opportunity window, which might still be available, despite closing quickly.
Econometrics doi: 10.3390/econometrics10040035
Authors: Brian D. O. Anderson Manfred Deistler Marco Lippi
A survey is provided dealing with the formulation of modelling problems for dynamic factor models, and the various algorithm possibilities for solving these modelling problems. Emphasis is placed on understanding requirements for the handling of errors, noting the relevance of the proposed application of the model, be it for example prediction or business cycle determination. Mixed frequency problems are also considered, in which certain entries of an underlying vector process are only available for measurement at a submultiple frequency of the original process. Certain classes of processes are shown to be generically identifiable, and others not to have this property.
Econometrics doi: 10.3390/econometrics10040034
Authors: Merlin Keller Guillaume Damblin Alberto Pasanisi Mathieu Schumann Pierre Barbillon Fabrizio Ruggeri Eric Parent
In this paper, we present a case study aimed at determining a billing plan that ensures customer loyalty and provides a profit for the energy company, whose point of view is taken in the paper. The energy provider promotes new contracts for residential buildings, in which customers pay a fixed rate chosen in advance, based on an overall energy consumption forecast. For such a purpose, we consider a practical Bayesian framework for the calibration and validation of a computer code used to forecast the energy consumption of a building. On the basis of power field measurements, collected from an experimental building cell in a given period of time, the code is calibrated, effectively reducing the epistemic uncertainty affecting the most relevant parameters of the code (albedo, thermal bridge factor, and convective coefficient). The validation is carried out by testing the goodness of fit of the code with respect to the field measurements, and then propagating the posterior parametric uncertainty through the code, obtaining probabilistic forecasts of the average electrical power delivered inside the cell in a given period of time. Finally, Bayesian decision-making methods are used to choose the optimal fixed rate (for the energy provider) of the contract, in order to balance short-term benefits with customer retention. We identify three significant contributions of the paper. First of all, the case study data had never been analyzed from a Bayesian viewpoint, which is relevant here not only for estimating the parameters but also for properly assessing the uncertainty about the forecasts. Furthermore, the study of optimal policies for energy providers in this framework is new, to the best of our knowledge. Finally, we propose a Bayesian posterior predictive p-value for validation.
Econometrics doi: 10.3390/econometrics10040033
Authors: Neil R. Ericsson Mohammed H. I. Dore Hassan Butt
Structural breaks have attracted considerable attention recently, especially in light of the financial crisis, Great Recession, the COVID-19 pandemic, and war. While structural breaks pose significant econometric challenges, machine learning provides an incisive tool for detecting and quantifying breaks. The current paper presents a unified framework for analyzing breaks; and it implements that framework to test for and quantify changes in precipitation in Mauritania over 1919–1997. These tests detect a decline of one third in mean rainfall, starting around 1970. Because water is a scarce resource in Mauritania, this decline—with adverse consequences on food production—has potential economic and policy consequences.
Econometrics doi: 10.3390/econometrics10040032
Authors: Irwan Susanto Nur Iriawan Heri Kuswanto
This paper proposes enhanced studies of a model consisting of a finite mixture framework of generalized linear models (GLMs) with gamma-distributed responses, estimated using the Bayesian approach coupled with the Markov Chain Monte Carlo (MCMC) method. The log-link function, which relates the mean and linear predictors of the model, is implemented to ensure non-negative values of the predicted gamma-distributed responses. The simulation-based inferential process related to the Bayesian MCMC method is carried out using the Gibbs sampler algorithm. The performance of the proposed model is assessed through two real-data applications on the gross domestic product per capita at purchasing power parity and the annual household income per capita. Graphical posterior predictive checks are carried out to verify the adequacy of the fitted model for the observed data. The predictive accuracy of this model is compared with that of other Bayesian models using the widely applicable information criterion (WAIC). We find that the Bayesian mixture of GLMs with gamma-distributed responses performs properly when appropriate prior distributions are applied and has better predictive accuracy than the Bayesian mixture of linear regression models and the Bayesian gamma regression model.
Econometrics doi: 10.3390/econometrics10030031
Authors: Robert C. Jung Stephanie Glaser
This paper proposes a new spatial lag regression model which addresses global spatial autocorrelation arising from cross-sectional dependence between counts. Our approach offers an intuitive interpretation of the spatial correlation parameter as a measurement of the impact of neighbouring observations on the conditional expectation of the counts. It allows for flexible likelihood-based inference based on different distributional assumptions using standard numerical procedures. In addition, we advocate the use of data-coherent diagnostic tools in spatial count regression models. The application revisits a data set on the location choice of single unit start-up firms in the manufacturing industry in the US.
Econometrics doi: 10.3390/econometrics10030030
Authors: Jian Kang Johan Stax Jakobsen Annastiina Silvennoinen Timo Teräsvirta Glen Wade
We construct a parsimonious test of constancy of the correlation matrix in the multivariate conditional correlation GARCH model, where the GARCH equations are time-varying. The alternative to constancy is that the correlations change deterministically as a function of time. The alternative is a covariance matrix, not a correlation matrix, so the test may be viewed as a general test of stability of a constant correlation matrix. The size of the test in finite samples is studied by simulation. An empirical example involving daily returns of 26 stocks included in the Dow Jones stock index is given.
Econometrics doi: 10.3390/econometrics10030029
Authors: Shiyun Cao Qiankun Zhou
In this paper, we consider the estimation of a dynamic panel data model with non-stationary multi-factor error structures. We adopted the common correlated effect (CCE) estimation and established the asymptotic properties of the CCE and common correlated effects mean group (CCEMG) estimators, as N and T tend to infinity. The results show that both the CCE and CCEMG estimators are consistent and the CCEMG estimator is asymptotically normally distributed. The theoretical findings were supported for small samples by an extensive simulation study, showing that the CCE estimators are robust to a wide variety of data generation processes. Empirical findings suggest that the CCE estimation is widely applicable to models with non-stationary factors. The proposed procedure is also illustrated by an empirical application to analyze the U.S. cigar dataset.
Econometrics doi: 10.3390/econometrics10030028
Authors: Antonio Pacifico
This paper improves the existing literature on the shrinkage of high-dimensional model and parameter spaces through Bayesian priors and Markov chain algorithms. A hierarchical semiparametric Bayes approach is developed to overcome the limitations and misspecification involved in compressed regression models. Methodologically, a multicountry large structural Panel Vector Autoregression is compressed through a robust model averaging to select the best subset across all possible combinations of predictors, where robust stands for the use of mixtures of proper conjugate priors. Concerning dynamic analysis, volatility changes and conditional density forecasts are addressed, ensuring accurate predictive performance and capability. Empirical and simulated experiments are developed to highlight and discuss the functioning of the estimating procedure and its forecasting accuracy.
Econometrics doi: 10.3390/econometrics10020027
Authors: Diogo de Prince Emerson Fernandes Marçal Pedro L. Valls Pereira
In this paper, we address whether using a disaggregated series or combining an aggregated and disaggregated series improves the forecasting of the aggregated series compared to using the aggregated series alone. We used econometric techniques, such as the weighted lag adaptive least absolute shrinkage and selection operator, Exponential Triple Smoothing (ETS), and the Autometrics algorithm, to forecast industrial production in Brazil one to twelve months ahead. This is the novelty of the work, as is the use of the average multi-horizon Superior Predictive Ability (aSPA) and uniform multi-horizon Superior Predictive Ability (uSPA) tests, used to select the best forecasting model by combining different horizons. Our sample covers the period from January 2002 to February 2020. In terms of mean squared error, the disaggregated ETS has better forecast performance for horizons of more than one month ahead, while the aggregated ETS has better forecasting ability for horizons of one and two months. The aggregated ETS forecast does not contain information that is useful for forecasting industrial production in Brazil beyond the information already found in the disaggregated ETS forecast between two and twelve months ahead.
Econometrics doi: 10.3390/econometrics10020026
Authors: Esam Mahdi Ameena Al-Abdulla
In this paper, we investigate the relationship between the RavenPack news-based indices associated with the coronavirus outbreak (Panic, Sentiment, Infodemic, and Media Coverage) and the returns of two commodities: Bitcoin and gold. We utilized the novel quantile-on-quantile approach to uncover the dependence between the news-based indices associated with the coronavirus outbreak and Bitcoin and gold returns. Our results reveal that the daily levels of positive and negative shocks in the indices induced by pandemic news asymmetrically affect the bearish and bullish markets for Bitcoin and gold, and that fear sentiment induced by coronavirus-related news plays a greater role in driving the values of Bitcoin and gold than the other indices. We find that both commodities, Bitcoin and gold, can serve as a hedge against pandemic-related news. In general, the COVID-19 pandemic-related news encourages people to invest in gold and Bitcoin.
Econometrics doi: 10.3390/econometrics10020025
Authors: Paweł Miłobędzki
I use the data on the COVID-19 pandemic maintained by Our World in Data to estimate a nonstationary dynamic panel exhibiting the dynamics of confirmed deaths, infections, and vaccinations per million population in the European Union countries in the period of January–July 2021. Having the data aggregated on a weekly basis, I demonstrate that a model which allows for heterogeneous short-run dynamics and common long-run marginal effects is superior to one allowing only for either homogeneous or heterogeneous responses. The analysis shows that the long-run marginal death effects with respect to confirmed infections and vaccinations are positive and negative, respectively, as expected. Since the estimate of the former effect is about 71.67 times greater than that of the latter, only mass vaccinations can prevent the number of deaths from being large in the long run. Success in achieving this is easier for countries with a large negative estimated individual death effect (Cyprus, Denmark, Ireland, Portugal, Estonia, Lithuania) than for those with a large but positive death effect (Bulgaria, Hungary, Slovakia). The estimates of the speed of convergence to the long-run equilibrium relationship for individual countries are all negative. For some countries (Bulgaria, Denmark, Estonia, Greece, Hungary, Slovakia), they differ in magnitude from the average for the whole EU, while for others (Croatia, Ireland, Lithuania, Poland, Portugal, Romania, Spain), they do not.
Econometrics doi: 10.3390/econometrics10020024
Authors: Rocco Mosconi Paolo Paruolo
This Special Issue collects contributions related to the advances in the theory and practice of Econometrics induced by the research of Katarina Juselius and Søren Johansen, whom this Special Issue aims to celebrate [...]
Econometrics doi: 10.3390/econometrics10020023
Authors: Mikio Ito Akihiko Noda Tatsuma Wada
A multivariate, non-Bayesian, regression-based, or feasible generalized least squares (GLS)-based approach is proposed to estimate time-varying VAR parameter models. Although it has been known that the Kalman-smoothed estimate can be alternatively estimated using GLS for univariate models, we assess the accuracy of the feasible GLS estimator compared with commonly used Bayesian estimators. Unlike the maximum likelihood estimator often used together with the Kalman filter, it is shown that the possibility of the pile-up problem occurring is negligible. In addition, this approach enables us to deal with stochastic volatility models, models with a time-dependent variance–covariance matrix, and models with non-Gaussian errors that allow us to deal with abrupt changes or structural breaks in time-varying parameters.
Econometrics doi: 10.3390/econometrics10020022
Authors: Duo Qin Sophie van Huellen Qing Chao Wang Thanos Moraitis
Aggregate financial conditions indices (FCIs) are constructed to fulfil two aims: (i) The FCIs should resemble non-model-based composite indices in that their composition is adequately invariant for concatenation during regular updates; (ii) the concatenated FCIs should outperform financial variables conventionally used as leading indicators in macro models. Both aims are shown to be attainable once an algorithmic modelling route is adopted to combine leading indicator modelling with the principles of partial least-squares (PLS) modelling, supervised dimensionality reduction, and backward dynamic selection. Pilot results using US data confirm the traditional wisdom that financial imbalances are more likely to induce macro impacts than routine market volatilities. They also shed light on why the popular route of principal-component based factor analysis is ill-suited for the two aims.
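As a small illustration of the supervised dimensionality-reduction idea that distinguishes this route from principal-component factor analysis, the sketch below extracts partial least-squares components from a block of (synthetic) financial indicators supervised by a target variable; the paper's full algorithmic route additionally involves leading-indicator modelling and backward dynamic selection, which are not shown.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Synthetic stand-in: 40 financial indicators and one macro target variable.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))
y = X[:, :3] @ np.array([0.5, -0.3, 0.2]) + 0.5 * rng.normal(size=300)

pls = PLSRegression(n_components=2).fit(X, y)
components = pls.transform(X)   # supervised low-dimensional factors (candidate FCI building blocks)
print("in-sample R^2 of the PLS fit:", round(pls.score(X, y), 3))
```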
Econometrics doi: 10.3390/econometrics10020020
Authors: Rocco Mosconi Paolo Paruolo
This article was prepared for the Special Issue ‘Celebrated Econometricians: Katarina Juselius and Søren Johansen’ of Econometrics. It is based on material recorded on 30–31 October 2018 in Copenhagen. It explores Katarina Juselius’ research, and discusses inter alia the following issues: equilibrium; short and long-run behaviour; common trends; adjustment; integral and proportional control mechanisms; model building and model comparison; breaks, crisis, learning; univariate versus multivariate modelling; mentoring and the gender gap in Econometrics.
]]>Econometrics doi: 10.3390/econometrics10020021
Authors: Rocco Mosconi Paolo Paruolo
This article was prepared for the Special Issue “Celebrated Econometricians: Katarina Juselius and Søren Johansen” of Econometrics. It is based on material recorded on 30 October 2018 in Copenhagen. It explores Søren Johansen’s research, and discusses inter alia the following issues: estimation and inference for nonstationary time series of the I(1), I(2) and fractional cointegration types; survival analysis; statistical modelling; likelihood; econometric methodology; the teaching and practice of Statistics and Econometrics.
]]>Econometrics doi: 10.3390/econometrics10020019
Authors: Chenglong Ye Lin Zhang Mingxuan Han Yanjia Yu Bingxin Zhao Yuhong Yang
This paper aims to better predict highly skewed auto insurance claims by combining candidate predictions. We analyze a version of the Kangaroo Auto Insurance company data and study the effects of combining different methods using five measures of prediction accuracy. The results show the following. First, when there is an outstanding (in terms of Gini Index) prediction among the candidates, the “forecast combination puzzle” phenomenon disappears. The simple average method performs much worse than the more sophisticated model combination methods, indicating that combining different methods could help us avoid performance degradation. Second, the choice of the prediction accuracy measure is crucial in defining the best candidate prediction for “low frequency and high severity” (LFHS) data. For example, mean square error (MSE) does not distinguish well between model combination methods, as the values are close. Third, the performances of different model combination methods can differ drastically. We propose using a new model combination method, named ARM-Tweedie, for such LFHS data; it benefits from an optimal rate of convergence and exhibits a desirable performance in several measures for the Kangaroo data. Fourth, overall, model combination methods improve the prediction accuracy for auto insurance claim costs. In particular, Adaptive Regression by Mixing (ARM), ARM-Tweedie, and constrained Linear Regression can improve forecast performance when there are only weak learners or when no dominant learner exists.
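A small illustrative sketch of two of the combination schemes discussed above, a simple average versus a constrained linear combination of candidate predictions, is given below. The skewed target and the three candidate predictors are simulated toy data, and the sum-to-one constraint is imposed by renormalising non-negative least-squares weights rather than by the paper's ARM-type weighting, so this is only a schematic, not the authors' procedure.

```python
# Toy sketch, not the paper's ARM/ARM-Tweedie procedure: simple average vs. a
# constrained (non-negative, renormalised) linear combination of candidate predictions.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(9)
n = 500
y = rng.gamma(shape=0.3, scale=100.0, size=n)  # skewed "claim cost" target (simulated)
preds = np.column_stack([
    y + rng.normal(0, 40, n),                  # strong candidate
    y + rng.normal(0, 80, n),                  # weaker candidate
    rng.permutation(y),                        # uninformative candidate
])

simple_avg = preds.mean(axis=1)

w, _ = nnls(preds, y)                          # non-negative least squares
w = w / w.sum()                                # crude way to impose the sum-to-one constraint
combo = preds @ w

def mse(p):
    return float(np.mean((y - p) ** 2))

print("simple average MSE:", mse(simple_avg))
print("constrained combination MSE:", mse(combo), "weights:", np.round(w, 2))
```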
]]>Econometrics doi: 10.3390/econometrics10020018
Authors: Gaetano Perone
The COVID-19 pandemic is a serious threat to all of us. It has caused an unprecedented shock to the world’s economy, and it has interrupted the lives and livelihood of millions of people. In the last two years, a large body of literature has attempted to forecast the main dimensions of the COVID-19 outbreak using a wide set of models. In this paper, I forecast the short- to mid-term cumulative deaths from COVID-19 in 12 hard-hit big countries around the world as of 20 August 2021. The data used in the analysis were extracted from the Our World in Data COVID-19 dataset. Both non-seasonal and seasonal autoregressive integrated moving averages (ARIMA and SARIMA) were estimated. The analysis showed that: (i) ARIMA/SARIMA forecasts were sufficiently accurate in both the training and test set by always outperforming the simple alternative forecasting techniques chosen as benchmarks (Mean, Naïve, and Seasonal Naïve); (ii) SARIMA models outperformed ARIMA models in 46 out of 48 metrics (in forecasting future values), i.e., on 95.8% of all the considered forecast accuracy measures (mean absolute error [MAE], mean absolute percentage error [MAPE], mean absolute scaled error [MASE], and the root mean squared error [RMSE]), suggesting a clear seasonal pattern in the data; and (iii) the forecasted values from SARIMA models fitted the observed (real-time) data very well for the period 21 August 2021–19 September 2021 for almost all the countries analyzed. This article shows that SARIMA can be safely used for both the short- and medium-term predictions of COVID-19 deaths. Thus, this approach can help government authorities to monitor and manage the huge pressure that COVID-19 is exerting on national healthcare systems.
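As a hedged illustration of the workflow described above, and not the paper's country-specific specifications, the sketch below fits an ARIMA and a weekly-seasonal SARIMA model to a simulated cumulative series, then compares hold-out accuracy with MAE, MAPE, and RMSE; the orders (1,2,1) and (1,1,1,7) are assumptions made purely for the example.

```python
# Minimal ARIMA vs. SARIMA comparison on a toy cumulative series (assumed orders).
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
# toy daily increments with a weekly cycle, cumulated to mimic cumulative deaths
daily = 50 + 10 * np.sin(2 * np.pi * np.arange(300) / 7) + rng.normal(0, 5, 300)
y = pd.Series(np.cumsum(daily))
train, test = y[:270], y[270:]

def fit_forecast(order, seasonal_order=(0, 0, 0, 0)):
    res = SARIMAX(train, order=order, seasonal_order=seasonal_order).fit(disp=False)
    return res.forecast(steps=len(test))

arima_fc = fit_forecast(order=(1, 2, 1))                                 # assumed order
sarima_fc = fit_forecast(order=(1, 2, 1), seasonal_order=(1, 1, 1, 7))   # assumed weekly seasonality

def accuracy(actual, pred):
    err = actual.to_numpy() - pred.to_numpy()
    return {"MAE": np.mean(np.abs(err)),
            "MAPE": 100 * np.mean(np.abs(err / actual.to_numpy())),
            "RMSE": np.sqrt(np.mean(err ** 2))}

print("ARIMA ", accuracy(test, arima_fc))
print("SARIMA", accuracy(test, sarima_fc))
```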
]]>Econometrics doi: 10.3390/econometrics10020017
Authors: Niraj Poudyal Aris Spanos
The primary objective of this paper is to revisit DSGE models with a view to bringing out their key weaknesses, including statistical misspecification, non-identification of deep parameters, substantive inadequacy, weak forecasting performance, and potentially misleading policy analysis. It is argued that most of these weaknesses stem from failing to distinguish between statistical and substantive adequacy and secure the former before assessing the latter. The paper untangles the statistical from the substantive premises of inference to delineate the above-mentioned issues and propose solutions. The discussion revolves around a typical DSGE model using US quarterly data. It is shown that this model is statistically misspecified and that, when respecified to arrive at a statistically adequate model, it gives rise to the Student’s t VAR model. This statistical model is shown to (i) provide a sound basis for testing the DSGE overidentifying restrictions as well as probing the identifiability of the deep parameters, (ii) suggest ways to meliorate its substantive inadequacy, and (iii) give rise to reliable forecasts and policy simulations.
]]>Econometrics doi: 10.3390/econometrics10020016
Authors: Katarina Juselius
A theory-consistent CVAR scenario describes a set of testable regularities capturing basic assumptions of the theoretical model. Using this concept, the paper considers a standard model for exchange rate determination with forward-looking expectations and shows that all assumptions about the model’s shock structure and steady-state behavior can be formulated as testable hypotheses on common stochastic trends and cointegration. The basic stationarity assumptions of the monetary model failed to obtain empirical support. They were too restrictive to explain the observed long persistent swings in the real exchange rate, the real interest rates, and the inflation and interest rate differentials.
]]>Econometrics doi: 10.3390/econometrics10020015
Authors: Piero C. Kauffmann Hellinton H. Takada Ana T. Terada Julio M. Stern
Most factor-based forecasting models for the term structure of interest rates depend on a fixed number of factor loading functions that have to be specified in advance. In this study, we relax this assumption by building a yield curve forecasting model that learns new factor decompositions directly from data for an arbitrary number of factors, combining a Gaussian linear state-space model with a neural network that generates smooth yield curve factor loadings. In order to control the model complexity, we define prior distributions with a shrinkage effect over the model parameters, and we present how to obtain computationally efficient maximum a posteriori numerical estimates using the Kalman filter and automatic differentiation. An evaluation of the model’s performance on 14 years of historical data of the Brazilian yield curve shows that the proposed technique was able to obtain better overall out-of-sample forecasts than traditional approaches, such as the dynamic Nelson and Siegel model and its extensions.
]]>Econometrics doi: 10.3390/econometrics10020014
Authors: Vassilios Bazinas Bent Nielsen
We propose a method to explore the causal transmission of an intervention through two endogenous variables of interest. We refer to the intervention as a catalyst variable. The method is based on the reduced-form system formed from the conditional distribution of the two endogenous variables given the catalyst. The method combines elements from instrumental variable analysis and Cholesky decomposition of structural vector autoregressions. We give conditions for uniqueness of the causal transmission.
]]>Econometrics doi: 10.3390/econometrics10020013
Authors: Jorge González Chapela
Misclassification of a binary response variable and nonrandom sample selection are data issues frequently encountered by empirical researchers. For cases in which both issues feature simultaneously in a data set, we formulate a sample selection model for a misclassified binary outcome in which the conditional probabilities of misclassification are allowed to depend on covariates. Assuming the availability of validation data, the pseudo-maximum likelihood technique can be used to estimate the model. The performance of the estimator accounting for misclassification and sample selection is compared to that of estimators offering partial corrections. An empirical example illustrates the proposed framework.
]]>Econometrics doi: 10.3390/econometrics10010012
Authors: Yiannis Karavias Elias Tzavalis Haotian Zhang
Missing data or missing values are a common phenomenon in applied panel data research and of great interest for panel data unit root testing. The standard approach in the literature is to balance the panel by removing units and/or trimming a common time period for all units. However, this approach can be costly in terms of lost information. Instead, existing panel unit root tests could be extended to the case of unbalanced panels, but this is often difficult because the missing observations affect the bias correction which is usually involved. This paper contributes to the literature in two ways: first, it extends two popular panel unit root tests to allow for missing values; second, it employs asymptotic local power functions to analytically study the impact of various missing-value methods on power. We find that zeroing out the missing observations is the method that results in the greatest test power, and that this result holds for all deterministic component specifications, such as intercepts, trends and structural breaks.
]]>Econometrics doi: 10.3390/econometrics10010011
Authors: Andreas Lichtenberger Joao Paulo Braga Willi Semmler
The green bond market is emerging as an impactful financing mechanism in climate change mitigation efforts. The effectiveness of the financial market for this transition to a low-carbon economy depends on attracting investors and removing financial market roadblocks. This paper investigates the differential bond performance of green vs non-green bonds with (1) a dynamic portfolio model that integrates negative as well as positive externality effects and via (2) econometric analyses of aggregate green bond and corporate energy time-series indices; as well as a cross-sectional set of individual bonds issued between 1 January 2017, and 1 October 2020. The asset pricing model demonstrates that, in the long-run, the positive externalities of green bonds benefit the economy through positive social returns. We use a deterministic and a stochastic version of the dynamic portfolio approach to obtain model-driven results and evaluate those through our empirical evidence using harmonic estimations. The econometric analysis of this study focuses on volatility and the risk–return performance (Sharpe ratio) of green and non-green bonds, and extends recent econometric studies that focused on yield differentials of green and non-green bonds. A modified Sharpe ratio analysis, cross-sectional methods, harmonic estimations, bond pairing estimations, as well as regression tree methodology, indicate that green bonds tend to show lower volatility and deliver superior Sharpe ratios (while the evidence for green premia is mixed). As a result, green bond investment can protect investors and portfolios from oil price and business cycle fluctuations, and stabilize portfolio returns and volatility. Policymakers are encouraged to make use of the financial benefits of green instruments and increase the financial flows towards sustainable economic activities to accelerate a low-carbon transition.
]]>Econometrics doi: 10.3390/econometrics10010010
Authors: David Pacini
This note studies the criterion for identifiability in parametric models based on the minimization of the Hellinger distance and exhibits its relationship to the identifiability criterion based on the Fisher matrix. It shows that the Hellinger distance criterion serves to establish identifiability of parameters of interest, or lack of it, in situations where the criterion based on the Fisher matrix does not apply, like in models where the support of the observed variables depends on the parameter of interest or in models with irregular points of the Fisher matrix. Several examples illustrating this result are provided.
]]>Econometrics doi: 10.3390/econometrics10010009
Authors: Szabolcs Blazsek Alvaro Escribano
We use data on the following climate variables for the period of the last 798 thousand years: global ice volume (Ice_t), atmospheric carbon dioxide level (CO2_t), and Antarctic land surface temperature (Temp_t). Those variables are cyclical and are driven by the following strongly exogenous orbital variables: eccentricity of the Earth’s orbit, obliquity, and precession of the equinox. We introduce score-driven ice-age models which use robust filters of the conditional mean and variance, generalizing the updating mechanism and solving the misspecification of a recent climate–econometric model (benchmark ice-age model). The score-driven models control for omitted exogenous variables and extreme events, using more general dynamic structures and heteroskedasticity. We find that the score-driven models improve the performance of the benchmark ice-age model. We provide out-of-sample forecasts of the climate variables for the last 100 thousand years. We show that during the last 10–15 thousand years of the forecasting period, for which humanity influenced the Earth’s climate, (i) the forecasts of Ice_t are above the observed Ice_t, (ii) the forecasts of the CO2_t level are below the observed CO2_t, and (iii) the forecasts of Temp_t are below the observed Temp_t. The forecasts for the benchmark ice-age model are reinforced by the score-driven models.
]]>Econometrics doi: 10.3390/econometrics10010008
Authors: Florian Wozny
This paper studies the performance of machine learning predictions for the counterfactual analysis of air transport. It is motivated by the dynamic and universally regulated international air transport market, where ex post policy evaluations usually lack counterfactual control scenarios. As an empirical example, this paper studies the impact of the COVID-19 pandemic on airfares in 2020 as the difference between predicted and actual airfares. Airfares are important from a policy makers’ perspective, as air transport is crucial for mobility. From a methodological point of view, airfares are also of particular interest given their dynamic character, which makes them challenging for prediction. This paper adopts a novel multi-step prediction technique with walk-forward validation to increase the transparency of the model’s predictive quality. For the analysis, the universe of worldwide airline bookings is combined with detailed airline information. The results show that machine learning with walk-forward validation is powerful for the counterfactual analysis of airfares.
]]>Econometrics doi: 10.3390/econometrics10010007
Authors: Econometrics Editorial Office Econometrics Editorial Office
Rigorous peer-reviews are the basis of high-quality academic publishing [...]
]]>Econometrics doi: 10.3390/econometrics10010006
Authors: Gianmaria Niccodemi Tom Wansbeek
In linear regression analysis, the estimator of the variance of the estimator of the regression coefficients should take into account the clustered nature of the data, if present, since using the standard textbook formula will in that case lead to a severe downward bias in the standard errors. This idea of a cluster-robust variance estimator (CRVE) generalizes to clusters the classical heteroskedasticity-robust estimator. Its justification is asymptotic in the number of clusters. Although an improvement, a considerable bias could remain when the number of clusters is low, the more so when regressors are correlated within cluster. In order to address these issues, two improved methods were proposed; one method, which we call CR2VE, was based on bias-reduced linearization, while the other, CR3VE, can be seen as a jackknife estimator. The latter is unbiased under very strict conditions, in particular equal cluster size. To relax this condition, we introduce in this paper CR3VE-λ, a generalization of CR3VE where the cluster size is allowed to vary freely between clusters. We illustrate the performance of CR3VE-λ through simulations and we show that, especially when cluster sizes vary widely, it can outperform the other commonly used estimators.
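For context, the sketch below computes the basic cluster-robust "sandwich" variance estimator (CRVE) that CR2VE, CR3VE, and the proposed CR3VE-λ refine; the clustered data are simulated and no small-sample correction is applied, so this is a baseline illustration rather than the paper's estimator.

```python
# Basic cluster-robust variance estimator (CRVE) on simulated clustered data.
import numpy as np

rng = np.random.default_rng(1)
G, n_g, k = 20, 30, 3                       # clusters, cluster size, regressors (incl. constant)
cluster = np.repeat(np.arange(G), n_g)
X = np.column_stack([np.ones(G * n_g), rng.normal(size=(G * n_g, k - 1))])
u = rng.normal(size=G)[cluster] + rng.normal(size=G * n_g)   # within-cluster correlated errors
y = X @ np.array([1.0, 0.5, -0.2]) + u

beta = np.linalg.solve(X.T @ X, X.T @ y)    # OLS coefficients
resid = y - X @ beta
XtX_inv = np.linalg.inv(X.T @ X)

meat = np.zeros((k, k))
for g in range(G):
    Xg, ug = X[cluster == g], resid[cluster == g]
    sg = Xg.T @ ug                          # cluster score
    meat += np.outer(sg, sg)

V_crve = XtX_inv @ meat @ XtX_inv           # sandwich formula, no finite-sample correction
print("cluster-robust SEs:", np.sqrt(np.diag(V_crve)))
```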
]]>Econometrics doi: 10.3390/econometrics10010005
Authors: Ron Mittelhammer George Judge Miguel Henry
In this paper, we introduce a flexible and widely applicable nonparametric entropy-based testing procedure that can be used to assess the validity of simple hypotheses about a specific parametric population distribution. The testing methodology relies on the characteristic function of the population probability distribution being tested and is attractive in that, regardless of the null hypothesis being tested, it provides a unified framework for conducting such tests. The testing procedure is also computationally tractable and relatively straightforward to implement. In contrast to some alternative test statistics, the proposed entropy test is free from user-specified kernel and bandwidth choices, idiosyncratic and complex regularity conditions, and/or choices of evaluation grids. Several simulation exercises were performed to document the empirical performance of our proposed test, including a regression example that is illustrative of how, in some contexts, the approach can be applied to composite hypothesis-testing situations via data transformations. Overall, the testing procedure exhibits notable promise, showing appreciably increasing power as the sample size increases for a number of alternative distributions when contrasted with hypothesized null distributions. Possible general extensions of the approach to composite hypothesis-testing contexts, and directions for future work, are also discussed.
]]>Econometrics doi: 10.3390/econometrics10010004
Authors: Chung-Yim Yiu Ka-Shing Cheung
The age–period–cohort problem has been studied for decades but without resolution. There have been many suggested solutions to make the three effects estimable, but these solutions mostly exploit non-linear specifications. Yet, these approaches may suffer from misspecification or omitted variable bias. This paper is a practice-oriented study that aims to empirically disentangle age–period–cohort effects by providing external information on the actual depreciation of the housing structure rather than taking age as a proxy. It is based on appraisals of the improvement values of properties in New Zealand to estimate the age-depreciation effect. This research method provides a novel means of solving the identification problem of the age, period, and cohort trilemma. Based on about half a million housing transactions from 1990 to 2019 in the Auckland Region of New Zealand, the results show that traditional hedonic price models using age and time dummy variables can result, ceteris paribus, in unreasonable positive depreciation rates. The use of the improvement values model can help improve the accuracy of home value assessment and reduce estimation biases. This method also has important practical implications for property valuations.
]]>Econometrics doi: 10.3390/econometrics10010003
Authors: Philip Hans Franses Max Welz
We propose a simple and reproducible methodology to create a single equation forecasting model (SEFM) for low-frequency macroeconomic variables. Our methodology is illustrated by forecasting annual real GDP growth rates for 52 African countries, where the data are obtained from the World Bank and start in 1960. The models include lagged growth rates of other countries, as well as a cointegration relationship to capture potential common stochastic trends. With a few selection steps, our methodology quickly arrives at a reasonably small forecasting model per country. Compared with benchmark models, the single equation forecasting models seem to perform quite well.
]]>Econometrics doi: 10.3390/econometrics10010002
Authors: Jennifer L. Castle Jurgen A. Doornik David F. Hendry
By its emissions of greenhouse gases, economic activity is the source of climate change which affects pandemics that in turn can impact badly on economies. Across the three highly interacting disciplines in our title, time-series observations are measured at vastly different data frequencies: very low frequency at 1000-year intervals for paleoclimate, through annual, monthly to intra-daily for current climate; weekly and daily for pandemic data; annual, quarterly and monthly for economic data, and seconds or nano-seconds in finance. Nevertheless, there are important commonalities to economic, climate and pandemic time series. First, time series in all three disciplines are subject to non-stationarities from evolving stochastic trends and sudden distributional shifts, as well as data revisions and changes to data measurement systems. Next, all three have imperfect and incomplete knowledge of their data generating processes from changing human behaviour, so must search for reasonable empirical modeling approximations. Finally, all three need forecasts of likely future outcomes to plan and adapt as events unfold, albeit again over very different horizons. We consider how these features shape the formulation and selection of forecasting models to tackle their common data features yet distinct problems.
]]>Econometrics doi: 10.3390/econometrics10010001
Authors: Myoung-Jin Keay
This paper presents a method for estimating the average treatment effects (ATE) of an exponential endogenous switching model where the coefficients of covariates in the structural equation are random and correlated with the binary treatment variable. The estimating equations are derived under some mild identifying assumptions. We find that the ATE is identified, although each coefficient in the structural model may not be. Tests assessing the endogeneity of treatment and for model selection are provided. Monte Carlo simulations show that, in large samples, the proposed estimator has a smaller bias and a larger variance than the methods that do not take the random coefficients into account. The approach is applied to health insurance data from Oregon.
]]>Econometrics doi: 10.3390/econometrics9040047
Authors: Martin Huber
The estimation of the causal effect of an endogenous treatment based on an instrumental variable (IV) is often complicated by the non-observability of the outcome of interest due to attrition, sample selection, or survey non-response. To tackle the latter problem, the latent ignorability (LI) assumption imposes that attrition/sample selection is independent of the outcome conditional on the treatment compliance type (i.e., how the treatment behaves as a function of the instrument), the instrument, and possibly further observed covariates. As a word of caution, this note formally discusses the strong behavioral implications of LI in rather standard IV models. We also provide an empirical illustration based on the Job Corps experimental study, in which the sensitivity of the estimated program effect to LI and alternative assumptions about outcome attrition is investigated.
]]>Econometrics doi: 10.3390/econometrics9040046
Authors: David H. Bernstein Andrew B. Martinez
The COVID-19 pandemic resulted in the most abrupt changes in U.S. labor force participation and unemployment since the Second World War, with different consequences for men and women. This paper models the U.S. labor market to help to interpret the pandemic’s effects. After replicating and extending Emerson’s (2011) model of the labor market, we formulate a joint model of male and female unemployment and labor force participation rates for 1980–2019 and use it to forecast into the pandemic to understand the pandemic’s labor market consequences. Gender-specific differences were particularly large at the pandemic’s outset; lower labor force participation persists.
]]>Econometrics doi: 10.3390/econometrics9040045
Authors: Xin Jin Jia Liu Qiao Yang
This paper suggests a new approach to evaluate realized covariance (RCOV) estimators via their predictive power on return density. By jointly modeling returns and RCOV measures under a Bayesian framework, the predictive density of returns and ex-post covariance measures are bridged. The forecast performance of a covariance estimator can be assessed according to its improvement in return density forecasting. Empirical applications to equity data show that several RCOV estimators consistently perform better than others and emphasize the importance of RCOV selection in covariance modeling and forecasting.
]]>Econometrics doi: 10.3390/econometrics9040044
Authors: Kimon Ntotsis Alex Karagrigoriou Andreas Artemiou
When it comes to variable interpretation, multicollinearity is among the biggest issues that must be surmounted, especially in this new era of Big Data Analytics. Since even moderate multicollinearity can prevent proper interpretation, special diagnostics must be recommended and implemented for identification purposes. Nonetheless, in the areas of econometrics and statistics, among other fields, these diagnostics are controversial with regard to how successful they are. It has been remarked that they frequently fail to perform proper model assessment due to information complexity, resulting in model misspecification. This work proposes and investigates a robust and easily interpretable methodology, termed the Elastic Information Criterion, capable of capturing multicollinearity rather accurately and effectively and thus providing a proper model assessment. The performance is investigated via simulated and real data.
]]>Econometrics doi: 10.3390/econometrics9040043
Authors: Zheng Fang Jianying Xie Ruiming Peng Sheng Wang
Climate finance is growing in popularity as a means of addressing the challenges of climate change, because it directs funding and resources to emitting entities and promotes green manufacturing. In this study, we take PM2.5, PM10, SO2, NO2, CO, and O3 as the target pollutants in the atmosphere and use a deep neural network to enhance the regression analysis of the relationship between air pollution and the stock prices of the targeted manufacturers. We also conduct a time series analysis of air pollution and heavy-industry manufacturing in China, as the country faces serious air pollution problems. Our study uses a two-dimensional Convolutional Long Short-Term Memory (ConvLSTM2D) network to extract features from air pollution data and enhance the time series regression for the financial market. The main contribution of our paper is the discovery of a feature that affects stock prices in the financial market, particularly for companies that are strongly exposed to the local environment. By considering this environmental factor, we obtain a more accurate model than traditional time series approaches for stock price prediction. The experimental results suggest a negative linear relationship between air pollution and the stock market, demonstrating that air pollution has a negative effect on the financial market. This encourages manufacturers to improve their emission recycling and to invest in green manufacturing; otherwise, the resulting drop in stock price will hamper the company's funding process.
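A minimal sketch of the kind of architecture named above is given below, assuming a hypothetical grid of six pollutant channels over an 8x8 spatial grid and twelve time steps; the layer sizes and the single dense regression head are illustrative assumptions made for the example, not the authors' network.

```python
# Illustrative ConvLSTM2D feature extractor feeding a dense regression head (toy data).
import numpy as np
import tensorflow as tf

T, rows, cols, channels = 12, 8, 8, 6       # 12 time steps, 8x8 grid, 6 pollutants (assumed)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(T, rows, cols, channels)),
    tf.keras.layers.ConvLSTM2D(filters=16, kernel_size=(3, 3), activation="tanh"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),               # next-period stock price/return target
])
model.compile(optimizer="adam", loss="mse")

# toy inputs and targets purely to show the expected shapes
X = np.random.rand(100, T, rows, cols, channels).astype("float32")
y = np.random.rand(100, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
print(model.predict(X[:3], verbose=0).ravel())
```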
]]>Econometrics doi: 10.3390/econometrics9040042
Authors: Albert Okunade Ahmad Reshad Osmani Toluwalope Ayangbayi Adeyinka Kevin Okunade
Obesity, as a health and social problem with rising prevalence and soaring economic cost, is increasingly drawing scholarly and public policy attention. While many studies have suggested that infant breastfeeding protects against childhood obesity, empirical evidence on this causal relationship is fragile. Using the health capital development theory, this study exploited multiple data sources from the U.S. and a three-way error components model (ECM) with a jackknife resampling plan to estimate the effect of in-hospital breastfeeding initiation and breastfeeding for durations of 3, 6, and 12 months on the prevalence of obesity during teenage years. The main finding was that a 1% rise in the in-hospital breastfeeding initiation rate reduces the teenage obesity prevalence rate by 1.7% (9.6% of a standard deviation). The magnitude of this effect declines as the infant breastfeeding duration lengthens—e.g., the 12-month infant breastfeeding duration rate is associated with a 0.53% (3.7% of a standard deviation) reduction in obesity prevalence in the teenage years (9th to 12th grades). The study findings agree with both the behavioral and physiological theories on the long-term effects of breastfeeding, and have timely implications for public policies promoting infant breastfeeding to reduce the economic burden of teenage and later adult-stage obesity prevalence rates.
]]>Econometrics doi: 10.3390/econometrics9040041
Authors: Mustafa Salamh Liqun Wang
Many financial and economic time series exhibit nonlinear patterns or relationships. However, most statistical methods for time series analysis are developed for mean-stationary processes that require transformation, such as differencing of the data. In this paper, we study a dynamic regression model with nonlinear, time-varying mean function, and autoregressive conditionally heteroscedastic errors. We propose an estimation approach based on the first two conditional moments of the response variable, which does not require specification of error distribution. Strong consistency and asymptotic normality of the proposed estimator are established under a strong-mixing condition, so that the results apply to both stationary and mean-nonstationary processes. Moreover, the proposed approach is shown to be superior to the commonly used quasi-likelihood approach and the efficiency gain is significant when the (conditional) error distribution is asymmetric. We demonstrate through a real data example that the proposed method can identify a more accurate model than the quasi-likelihood method.
]]>Econometrics doi: 10.3390/econometrics9040040
Authors: Kjartan Kloster Osmundsen Tore Selland Kleppe Roman Liesenfeld Atle Oglend
We propose a State-Space Model (SSM) for commodity prices that combines the competitive storage model with a stochastic trend. This approach fits into the economic rationality of storage decisions and adds to previous deterministic trend specifications of the storage model. For a Bayesian posterior analysis of the SSM, which is nonlinear in the latent states, we used a Markov chain Monte Carlo algorithm based on the particle marginal Metropolis–Hastings approach. An empirical application to four commodity markets showed that the stochastic trend SSM is favored over deterministic trend specifications. The stochastic trend SSM identifies structural parameters that differ from those for deterministic trend specifications. In particular, the estimated price elasticities of demand are typically larger under the stochastic trend SSM.
]]>Econometrics doi: 10.3390/econometrics9040039
Authors: J. Eduardo Vera-Valdés
This paper used cross-sectional aggregation as the inspiration for a model with long-range dependence that arises in actual data. One of the advantages of our model is that it is less brittle than fractionally integrated processes. In particular, we showed that the antipersistent phenomenon is not present for the cross-sectionally aggregated process. We proved that this has implications for estimators of long-range dependence in the frequency domain, which will be misspecified for nonfractional long-range-dependent processes with negative degrees of persistence. As an application, we showed how we can approximate a fractionally differenced process using theoretically-motivated cross-sectional aggregated long-range-dependent processes. An example with temperature data showed that our framework provides a better fit to the data than the fractional difference operator.
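The classic mechanism the paper builds on can be illustrated in a few lines: aggregating many AR(1) processes whose coefficients are drawn from a distribution concentrating mass near one yields an aggregate series with slowly decaying autocorrelations. The Beta(2, 0.5) choice below is only an illustrative assumption, not the paper's specification.

```python
# Toy illustration of cross-sectional aggregation generating long-range dependence.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(2)
N, T = 300, 2000
phis = rng.beta(2.0, 0.5, size=N)            # AR(1) coefficients with mass near one (assumed)

agg = np.zeros(T)
for phi in phis:
    eps = rng.normal(size=T)
    agg += lfilter([1.0], [1.0, -phi], eps)  # simulate x_t = phi * x_{t-1} + eps_t
agg /= N

# Slowly decaying sample autocorrelations are the signature of long-range dependence.
acf = [np.corrcoef(agg[:-k], agg[k:])[0, 1] for k in range(1, 41)]
print(np.round(acf[:10], 3))
```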
]]>Econometrics doi: 10.3390/econometrics9040038
Authors: J. M. Calabuig E. Jiménez-Fernández E. A. Sánchez-Pérez S. Manzanares
One of the main challenges posed by the healthcare crisis generated by COVID-19 is to avoid hospital collapse. The occupation of hospital beds by patients diagnosed with COVID-19 implies the diversion or suspension of their use for other specialities. Therefore, it is useful to have information that allows efficient management of future hospital occupancy. This article presents a robust and simple model to show certain characteristics of the evolution of the dynamic process of bed occupancy by patients with COVID-19 in a hospital by means of an adaptation of Kaplan-Meier survival curves. To check this model, the evolution of the COVID-19 hospitalization process of two hospitals between 11 March and 15 June 2020 is analyzed. The information provided by the Kaplan-Meier curves allows forecasts of hospital occupancy in subsequent periods. The results show an average deviation of 2.45 patients between predictions and actual occupancy in the period analyzed.
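A minimal sketch of the idea, assuming hypothetical admission records and using the lifelines library: treating length of stay as the "survival" time, the Kaplan-Meier curve gives the probability that a bed occupied at admission is still occupied d days later, which can then be scaled by the size of an incoming cohort to forecast occupancy.

```python
# Kaplan-Meier view of hospital bed occupancy on simulated length-of-stay data.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(3)
length_of_stay = rng.gamma(shape=2.0, scale=5.0, size=200)  # hypothetical days in hospital
discharged = rng.random(200) < 0.9                          # ~10% still hospitalised (censored)

kmf = KaplanMeierFitter()
kmf.fit(durations=length_of_stay, event_observed=discharged)

# Expected number of beds still occupied d days after a cohort of 50 admissions
cohort = 50
for d in (3, 7, 14):
    print(f"day {d}: about {cohort * float(kmf.predict(d)):.1f} beds still occupied")
```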
]]>Econometrics doi: 10.3390/econometrics9040037
Authors: C. Vladimir Rodríguez-Caballero J. Eduardo Vera-Valdés
This paper tests if air pollution serves as a carrier for SARS-CoV-2 by measuring the effect of daily exposure to air pollution on its spread, using panel data models that incorporate a possible commonality between municipalities. We show that the contemporary exposure to particle matter is not the main driver behind the increasing number of cases and deaths in the Mexico City Metropolitan Area. Remarkably, we also find that the cross-dependence between municipalities in the Mexican region is highly correlated to public mobility, which plays the leading role behind the rhythm of contagion. Our findings are particularly revealing given that the Mexico City Metropolitan Area did not experience a decrease in air pollution during COVID-19 induced lockdowns.
]]>Econometrics doi: 10.3390/econometrics9040036
Authors: Chad Fulton Kirstin Hubrich
We analyze real-time forecasts of US inflation over 1999Q3–2019Q4 and subsamples, investigating whether and how forecast accuracy and robustness can be improved with additional information such as expert judgment, additional macroeconomic variables, and forecast combination. The forecasts include those from the Federal Reserve Board’s Tealbook, the Survey of Professional Forecasters, dynamic models, and combinations thereof. While simple models remain hard to beat, additional information does improve forecasts, especially after 2009. Notably, forecast combination improves forecast accuracy over simpler models and robustifies against bad forecasts; aggregating forecasts of inflation’s components can improve performance compared to forecasting the aggregate directly; and judgmental forecasts, which may incorporate larger and more timely datasets in conjunction with model-based forecasts, improve forecasts at short horizons.
]]>Econometrics doi: 10.3390/econometrics9040035
Authors: Michael Creel
This paper studies method of simulated moments (MSM) estimators that are implemented using Bayesian methods, specifically Markov chain Monte Carlo (MCMC). Motivation and theory for the methods are provided by Chernozhukov and Hong (2003). The paper shows, experimentally, that confidence intervals using these methods may have coverage which is far from the nominal level, a result which has parallels in the literature that studies overidentified GMM estimators. A neural network may be used to reduce the dimension of an initial set of moments to the minimum number that maintains identification, as in Creel (2017). When MSM-MCMC estimation and inference is based on such moments, and using a continuously updating criterion function, confidence intervals have statistically correct coverage in all cases studied. The methods are illustrated by application to several test models, including a small DSGE model, and to a jump-diffusion model for returns of the S&P 500 index.
]]>Econometrics doi: 10.3390/econometrics9030034
Authors: S. Yanki Kalfa Jaime Marquez
The three golden rules of econometrics are “test, test, and test” (Hendry 1980, p. 403). The current paper applies that approach to model the forecasts of the Federal Open Market Committee (FOMC) over 1992–2019 and to forecast those forecasts themselves. Monetary policy is forward-looking, and as part of the FOMC’s effort toward transparency, the FOMC publishes its (forward-looking) economic projections. The overall views on the economy of the FOMC participants, as characterized by the median of their projections for inflation, unemployment, and the Fed’s policy rate, are themselves predictable by information publicly available at the time of the FOMC’s meeting. Their projections also communicate systematic behavior on the part of the FOMC’s participants.
]]>Econometrics doi: 10.3390/econometrics9030033
Authors: Philippe Goulet Coulombe Maximilian Göbel
Stips et al. (2016) use information flows (Liang (2008, 2014)) to establish causality from various forcings to global temperature. We show that the formulas being used hinge on a simplifying assumption that is nearly always rejected by the data. We propose the well-known forecast error variance decomposition based on a Vector Autoregression as an adequate measure of information flow, and find that most results in Stips et al. (2016) cannot be corroborated. Then, we discuss which modeling choices (e.g., the choice of CO2 series and assumptions about simultaneous relationships) may help in extracting credible estimates of causal flows and the transient climate response simply by looking at the joint dynamics of two climatic time series.
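The alternative measure advocated above, the forecast error variance decomposition (FEVD) from a Vector Autoregression, can be obtained directly from standard software; the sketch below uses two simulated series standing in for a forcing and a temperature series, so the numbers are purely illustrative and the lag length is an assumption.

```python
# FEVD from a bivariate VAR on simulated "forcing" and "temperature" series.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(4)
T = 400
forcing = np.cumsum(rng.normal(size=T)) * 0.05           # toy forcing series
temp = 0.3 * forcing + rng.normal(scale=0.2, size=T)     # toy temperature response
data = pd.DataFrame({"forcing": forcing, "temperature": temp}).diff().dropna()

res = VAR(data).fit(4)        # assumed lag order for the illustration
fevd = res.fevd(10)           # 10-step-ahead forecast error variance decomposition
fevd.summary()                # share of each variable's forecast error variance due to each shock
```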
]]>Econometrics doi: 10.3390/econometrics9030032
Authors: Dimitrios V. Vougas
There is no available Prais–Winsten algorithm for regression with AR(2) or higher order errors, and the one with AR(1) errors is not fully justified or is implemented incorrectly (thus being inefficient). This paper addresses both issues, providing an accurate, computationally fast, and inexpensive generic zig-zag algorithm.
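For orientation, a minimal sketch of an iterative ("zig-zag") Prais–Winsten estimator for the AR(1) case is given below: it alternates between GLS on the transformed data and re-estimation of rho from the OLS residuals. The AR(2)-and-higher generalisation the paper develops would replace the transformation of the initial observation(s); the data here are simulated and the convergence tolerance is an arbitrary choice.

```python
# Iterative Prais-Winsten estimation of a regression with AR(1) errors (simulated data).
import numpy as np

rng = np.random.default_rng(5)
T = 300
x = rng.normal(size=T)
u = np.zeros(T)
for t in range(1, T):
    u[t] = 0.6 * u[t - 1] + rng.normal()     # AR(1) errors with true rho = 0.6
y = 1.0 + 2.0 * x + u
X = np.column_stack([np.ones(T), x])

rho = 0.0
for _ in range(100):
    # Prais-Winsten transformation (keeps the first observation, unlike Cochrane-Orcutt)
    Xs = np.vstack([np.sqrt(1 - rho**2) * X[0], X[1:] - rho * X[:-1]])
    ys = np.concatenate([[np.sqrt(1 - rho**2) * y[0]], y[1:] - rho * y[:-1]])
    beta = np.linalg.lstsq(Xs, ys, rcond=None)[0]
    resid = y - X @ beta
    rho_new = (resid[1:] @ resid[:-1]) / (resid[:-1] @ resid[:-1])
    if abs(rho_new - rho) < 1e-8:
        rho = rho_new
        break
    rho = rho_new

print("beta:", np.round(beta, 3), "rho:", round(rho, 3))
```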
]]>Econometrics doi: 10.3390/econometrics9030031
Authors: Massimo Franchi Paolo Paruolo
This paper discusses the notion of cointegrating space for linear processes integrated of any order. It first shows that the notions of (polynomial) cointegrating vectors and of root functions coincide. Second, it discusses how the cointegrating space can be defined (i) as a vector space of polynomial vectors over complex scalars, (ii) as a free module of polynomial vectors over scalar polynomials, or finally (iii) as a vector space of rational vectors over rational scalars. Third, it shows that a canonical set of root functions can be used as a basis of the various notions of cointegrating space. Fourth, it reviews results on how to reduce polynomial bases to minimal order—i.e., minimal bases. The application of these results to Vector AutoRegressive processes integrated of order 2 is found to imply the separation of polynomial cointegrating vectors from non-polynomial ones.
]]>Econometrics doi: 10.3390/econometrics9030030
Authors: Fragiskos Archontakis Rocco Mosconi
We showcase the impact of Katarina Juselius and Søren Johansen’s contribution to econometrics using bibliometric data on citations from 1989 to 2017, extracted from the Web of Science (WoS) database. Our purpose is to analyze the impact of KJ and SJ’s ideas on applied and methodological research in econometrics. To this aim, starting from WoS data, we derived two composite indices whose purpose is to disentangle the authors’ impact on applied research from their impact on methodological research. As of 2017, the number of applied citing papers per quarter had not yet reached the peak; conversely, the peak in the methodological literature seems to have been reached around 2000, although the shape of the trajectory is very flat after the peak. We analyzed the data using a multivariate dynamic version of the well known Bass model. Our estimates suggest that the methodological literature is mainly driven by “innovators”, whereas “imitators” are relatively more important in the applied literature: this might explain the different location of the peaks. We also find that, in the literature referring to KJ and SJ, the “cross-fertilization” between methodological and applied research is statistically significant and bi-directional.
]]>Econometrics doi: 10.3390/econometrics9030029
Authors: Federico Bandi Alex Maynard Hyungsik Roger Moon Benoit Perron
Peter Phillips has had a tremendous impact on econometric theory and practice [...]
]]>Econometrics doi: 10.3390/econometrics9030028
Authors: Vincenzo Candila
Recently, the world of cryptocurrencies has experienced an undoubted increase in interest. Since the first cryptocurrency appeared in 2009 in the aftermath of the Great Recession, the popularity of digital currencies has, year by year, risen continuously. As of February 2021, there are more than 8525 cryptocurrencies with a market value of approximately USD 1676 billion. These particular assets can be used to diversify the portfolio as well as for speculative actions. For this reason, investigating the daily volatility and co-volatility of cryptocurrencies is crucial for investors and portfolio managers. In this work, the interdependencies among a panel of the most traded digital currencies are explored and evaluated from statistical and economic points of view. Taking advantage of the monthly Google queries (which appear to be the factors driving the price dynamics) on cryptocurrencies, we adopted a mixed-frequency approach within the Dynamic Conditional Correlation (DCC) model. In particular, we introduced the Double Asymmetric GARCH–MIDAS model in the DCC framework.
]]>Econometrics doi: 10.3390/econometrics9030027
Authors: Arifatus Solikhah Heri Kuswanto Nur Iriawan Kartika Fithriasari
We generalize the Gaussian Mixture Autoregressive (GMAR) model to the Fisher’s z Mixture Autoregressive (ZMAR) model for modeling nonlinear time series. The model consists of a mixture of K-component Fisher’s z autoregressive models with the mixing proportions changing over time. This model can capture time series with both heteroskedasticity and multimodal conditional distribution, using Fisher’s z distribution as an innovation in the MAR model. The ZMAR model is classified as a nonlinear-in-the-level (or mode) model because the mode of the Fisher’s z distribution is stable in its location parameter, whether symmetric or asymmetric. Using the Markov Chain Monte Carlo (MCMC) algorithm, e.g., the No-U-Turn Sampler (NUTS), we conducted a simulation study to investigate the model performance compared to the GMAR model and Student t Mixture Autoregressive (TMAR) model. The models are applied to the daily IBM stock prices and the monthly Brent crude oil prices. The results show that the proposed model outperforms the existing ones, as indicated by the Pareto-Smoothed Importance Sampling Leave-One-Out cross-validation (PSIS-LOO) minimum criterion.
]]>Econometrics doi: 10.3390/econometrics9030026
Authors: Jennifer L. Castle Jurgen A. Doornik David F. Hendry
We investigate forecasting in models that condition on variables for which future values are unknown. We consider the role of the significance level because it guides the binary decisions whether to include or exclude variables. The analysis is extended by allowing for a structural break, either in the first forecast period or just before. Theoretical results are derived for a three-variable static model, but generalized to include dynamics and many more variables in the simulation experiment. The results show that the trade-off for selecting variables in forecasting models in a stationary world, namely that variables should be retained if their noncentralities exceed unity, still applies in settings with structural breaks. This provides support for model selection at looser than conventional settings, albeit with many additional features explaining the forecast performance, and with the caveat that retaining irrelevant variables that are subject to location shifts can worsen forecast performance.
]]>Econometrics doi: 10.3390/econometrics9020025
Authors: Yuanyuan Deng Hugo Benítez-Silva
Medicare is one of the largest federal social insurance programs in the United States and the secondary payer for Medicare beneficiaries covered by employer-provided health insurance (EPHI). However, an increasing number of individuals are delaying their Medicare enrollment when they first become eligible at age 65. Using administrative data from the Medicare Current Beneficiary Survey (MCBS), this paper estimates the effects of EPHI, employment, and delays in Medicare enrollment on Medicare costs. Given the administrative nature of the data, we are able to disentangle and estimate the Medicare as secondary payer (MSP) effect and the work effects on Medicare costs, as well as to construct delay enrollment indicators. Using Heckman’s sample selection model, we estimate that MSP and being employed are associated with a lower probability of observing positive Medicare spending and a lower level of Medicare spending. This paper quantifies annual savings of $5.37 billion from MSP and being employed. Delays in Medicare enrollment generate additional annual savings of $10.17 billion. Owing to the links between employment, health insurance coverage, and Medicare costs presented in this research, our findings may be of interest to policy makers who should take into account the consequences of reforms on the Medicare system.
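A minimal sketch of the Heckman two-step logic referred to above, using simulated data rather than the MCBS records: a probit for whether positive spending is observed, followed by an outcome regression on the selected sample with the inverse Mills ratio added as a regressor. Variable names and coefficient values are assumptions made for the example.

```python
# Heckman two-step selection correction on simulated data.
import numpy as np
from scipy.stats import norm
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 2000
z = rng.normal(size=n)                       # selection covariate (e.g., employment status)
x = rng.normal(size=n)                       # outcome covariate
e_sel, e_out = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n).T
selected = (0.5 + 1.0 * z + e_sel) > 0       # positive spending observed
y = 2.0 + 1.5 * x + e_out                    # latent spending equation

# Step 1: probit for selection, then the inverse Mills ratio
probit = sm.Probit(selected.astype(int), sm.add_constant(z)).fit(disp=0)
xb = sm.add_constant(z) @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)

# Step 2: outcome regression on the selected sample with the Mills ratio added
X2 = sm.add_constant(np.column_stack([x[selected], imr[selected]]))
ols = sm.OLS(y[selected], X2).fit()
print(ols.params)                            # constant, slope on x, selection-correction term
```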
]]>Econometrics doi: 10.3390/econometrics9020024
Authors: Hildegart Ahumada Magdalena Cornejo
We analyze the influence of climate change on soybean yields in a multivariate time-series framework for a major soybean producer and exporter—Argentina. Long-run relationships are found in partial systems involving climatic, technological, and economic factors. Automatic model selection simplifies dynamic specification for a model of soybean yields and permits encompassing tests of different economic hypotheses. Soybean yields adjust to disequilibria that reflect technological improvements to seed and crop practices. Climatic effects include (a) a positive effect from increased CO2 concentrations, which may capture accelerated photosynthesis, and (b) a negative effect from high local temperatures, which could increase with continued global warming.
]]>Econometrics doi: 10.3390/econometrics9020023
Authors: Yixiao Jiang
This paper investigates the incentive of credit rating agencies (CRAs) to bias ratings using a semiparametric, ordered-response model. The proposed model explicitly takes conflicts of interest into account and allows the ratings to depend flexibly on risk attributes through a semiparametric index structure. Asymptotic normality for the estimator is derived after using several bias correction techniques. Using Moody’s rating data from 2001 to 2016, I found that firms related to Moody’s shareholders were more likely to receive better ratings. Such favorable treatments were more pronounced in investment grade bonds compared with high yield bonds, with the 2007–2009 financial crisis being an exception. Parametric models, such as the ordered-probit, failed to identify this heterogeneity of the rating bias across different bond categories.
]]>Econometrics doi: 10.3390/econometrics9020022
Authors: Kajal Lahiri Zulkarnain Pulungan
Following recent econometric developments, we use self-assessed general health on a Likert scale conditioned by several objective determinants to measure health disparity between non-Hispanic Whites and minority groups in the United States. A statistical decomposition analysis is conducted to determine the contributions of socio-demographic and neighborhood characteristics in generating disparities. Whereas 72% of the health disparity between Whites and Blacks is attributable to Blacks’ relatively worse socio-economic and demographic characteristics, the corresponding figure is only 50% for Hispanics and 65% for American Indian and Alaska Natives. The role of a number of factors, including per capita income and income inequality, varies across the groups. Interestingly, “blackness” of a county is associated with better health for all minority groups, but it affects Whites negatively. Our findings suggest that public health initiatives to eliminate health disparity should be targeted differently for different racial/ethnic groups by focusing on the most vulnerable within each group.
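The explained/unexplained split underlying such decompositions can be illustrated with a Blinder–Oaxaca-type calculation on a linear toy outcome. The paper's decomposition is applied to an ordered, Likert-scale health measure, so the sketch below is only a schematic of the accounting identity, with simulated covariates and coefficients.

```python
# Two-group Blinder-Oaxaca-style decomposition on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 1000
# toy covariates (e.g., income, education) for two groups A and B
X_A = np.column_stack([rng.normal(1.0, 1, n), rng.normal(0.5, 1, n)])
X_B = np.column_stack([rng.normal(0.4, 1, n), rng.normal(0.1, 1, n)])
y_A = 3.0 + X_A @ np.array([0.8, 0.5]) + rng.normal(size=n)
y_B = 2.6 + X_B @ np.array([0.7, 0.4]) + rng.normal(size=n)

def fit(X, y):
    return sm.OLS(y, sm.add_constant(X)).fit().params

b_A, b_B = fit(X_A, y_A), fit(X_B, y_B)
xbar_A = np.r_[1, X_A.mean(axis=0)]
xbar_B = np.r_[1, X_B.mean(axis=0)]

gap = y_A.mean() - y_B.mean()
explained = (xbar_A - xbar_B) @ b_B        # differences in characteristics
unexplained = xbar_A @ (b_A - b_B)         # differences in coefficients
print(f"gap={gap:.3f}, explained={explained:.3f}, unexplained={unexplained:.3f}, "
      f"explained share={explained / gap:.1%}")
```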
]]>Econometrics doi: 10.3390/econometrics9020021
Authors: Manabu Asai Chia-Lin Chang Michael McAleer Laurent Pauwels
This paper derives the statistical properties of a two-step approach to estimating multivariate rotated GARCH-BEKK (RBEKK) models. From the definition of RBEKK, the unconditional covariance matrix is estimated in the first step to rotate the observed variables in order to have the identity matrix for its sample covariance matrix. In the second step, the remaining parameters are estimated by maximizing the quasi-log-likelihood function. For this two-step quasi-maximum likelihood (2sQML) estimator, this paper shows consistency and asymptotic normality under weak conditions. While second-order moments are needed for the consistency of the estimated unconditional covariance matrix, the existence of the finite sixth-order moments is required for the convergence of the second-order derivatives of the quasi-log-likelihood function. This paper also shows the relationship between the asymptotic distributions of the 2sQML estimator for the RBEKK model and variance targeting quasi-maximum likelihood estimator for the VT-BEKK model. Monte Carlo experiments show that the bias of the 2sQML estimator is negligible and that the appropriateness of the diagonal specification depends on the closeness to either the diagonal BEKK or the diagonal RBEKK models. An empirical analysis of the returns of stocks listed on the Dow Jones Industrial Average indicates that the choice of the diagonal BEKK or diagonal RBEKK models changes over time, but most of the differences between the two forecasts are negligible.
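The first step of the two-step procedure described above is a simple rotation; the sketch below, on simulated returns, estimates the unconditional covariance matrix and rotates the data by its inverse square root so that the rotated series has an identity sample covariance matrix. The second-step quasi-likelihood estimation of the conditional dynamics is not shown.

```python
# First step of the RBEKK procedure: rotate returns to identity sample covariance (toy data).
import numpy as np

rng = np.random.default_rng(7)
T, k = 1000, 3
raw = rng.multivariate_normal(np.zeros(k), [[1.0, 0.3, 0.2],
                                             [0.3, 1.5, 0.4],
                                             [0.2, 0.4, 2.0]], size=T)

S = np.cov(raw, rowvar=False, bias=True)     # unconditional (sample) covariance
vals, vecs = np.linalg.eigh(S)
S_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
rotated = raw @ S_inv_sqrt

print(np.round(np.cov(rotated, rowvar=False, bias=True), 3))  # approximately the identity matrix
```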
]]>Econometrics doi: 10.3390/econometrics9020020
Authors: Antonio Pacifico
This paper improves a standard Structural Panel Bayesian Vector Autoregression model in order to jointly deal with issues of endogeneity, because of omitted factors and unobserved heterogeneity, and volatility, because of policy regime shifts and structural changes. Bayesian methods are used to select the best model solution for examining if international spillovers come from multivariate volatility, time variation, or contemporaneous relationship. An empirical application among Central-Eastern and Western Europe economies is conducted to describe the performance of the methodology, with particular emphasis on the Great Recession and post-crisis periods. A simulated example is also addressed to highlight the performance of the estimating procedure. Findings from evidence-based forecasting are also addressed to evaluate the impact of an ongoing pandemic crisis on the global economy.
]]>