Article

DSGE Estimation Using Generalized Empirical Likelihood and Generalized Minimum Contrast

by Gilberto Boaretto 1 and Márcio Poletti Laurini 2,*
1 Department of Economics, Rio de Janeiro State University (UERJ), Rio de Janeiro 20550-900, Brazil
2 Department of Economics, School of Economics, Business Administration and Accounting at Ribeirão Preto (FEA-RP/USP), University of São Paulo, Ribeirão Preto 14040-905, Brazil
* Author to whom correspondence should be addressed.
Entropy 2025, 27(2), 141; https://doi.org/10.3390/e27020141
Submission received: 12 December 2024 / Revised: 25 January 2025 / Accepted: 28 January 2025 / Published: 30 January 2025

Abstract:
We investigate the performance of estimators of the generalized empirical likelihood and minimum contrast families in the estimation of dynamic stochastic general equilibrium models, with particular attention to the robustness properties under misspecification. From a Monte Carlo experiment, we found that (i) the empirical likelihood estimator—as well as its version with smoothed moment conditions—and Bayesian inference obtained, in that order, the best performances, including misspecification cases; (ii) continuous updating empirical likelihood, minimum Hellinger distance, exponential tilting estimators, and their smoothed versions exhibit intermediate comparative performance; (iii) the performance of exponentially tilted empirical likelihood, exponential tilting Hellinger distance, and their smoothed versions was seriously compromised by atypical estimates; (iv) smoothed and non-smoothed estimators exhibit very similar performances; and (v) the generalized method of moments, especially in the over-identified case, and maximum likelihood estimators performed worse than their competitors.

1. Introduction

Economic and statistical models, especially dynamic stochastic general equilibrium (DSGE) models, are susceptible to misspecification problems since they consist of simplifications (sometimes remarkably strong) of reality. The omission of relevant variables, incorrect functional forms, invalid distributional assumptions, and incompleteness of systems of relations are common and often coincide in estimation and forecasting procedures [1]. Thus, it is desirable to employ estimation methods that perform well in the presence of misspecification. Here, robustness means insensitivity to small deviations from the assumptions adopted, as defined by [2].
Misspecification problems can be global (non-local) or local in nature. In moment-based estimation, global misspecification occurs when no parameter value is compatible with the moment restrictions, regardless of the sample size. Local misspecification occurs when these conditions are violated in part of the sample, with the violation disappearing asymptotically [3]. See Appendix A for details, and Appendix B for a more detailed discussion of local and global misspecification in moment-based estimation. In likelihood-based estimation, consistency can be significantly affected by the validity of distributional assumptions; the quasi-maximum likelihood estimator may or may not be consistent for particular parameters of interest [4,5]. In this approach, we can treat this case as global misspecification. The authors of [6] and other papers introduce local misspecification through a nuisance parameter that contaminates the sample and whose effect disappears asymptotically. Depending on the estimation approach, both local and global misspecification problems are usually present in DSGE estimation.
In the current literature on DSGE modeling, we observe two distinct approaches, as highlighted by [7]. The first, which uses the likelihood principle, emerged in [8] and is based on approximations of the model's policy functions obtained by linearizing the equilibrium conditions around the steady state. This technique allows for evaluating the likelihood function using the Kalman filter or a particle filter. The parameters are estimated either by classical inference, maximizing the likelihood function (maximum likelihood—ML), or by Bayesian inference (BI), combining the likelihood function and a prior distribution to obtain the parameters' posterior distribution.
The moment-based approach, in turn, uses a set of moment conditions generated from the model's first-order conditions (FOCs). The first works to employ GMM in DSGE estimation were [9,10,11]. Among the advantages of GMM estimation over ML estimation (and BI) are that it (i) requires fewer assumptions about the data distribution and (ii) requires less computational capacity and, consequently, less estimation time [7,12]. Ref. [13] concluded that moment-based methods, more specifically GMM and the simulated method of moments (SMM), present better results than the ML estimator in terms of estimation speed and robustness to specification problems. Ref. [12] showed that GMM and SMM estimators obtained more accurate estimates for model parameters, even in small samples.
This paper aims to analyze the performance of moment-based estimators belonging to the generalized empirical likelihood (GEL) and generalized minimum contrast (GMC) families in DSGE estimation, using GMM, ML, and BI as benchmarks. GEL and GMC methods generalize moment-based estimation by nonparametrically estimating the distribution of the process, obtaining better finite-sample properties in terms of bias as well as additional robustness properties. Thus, it is interesting to analyze whether these methods can bring gains in the estimation of DSGE models, since these models are commonly estimated from small samples and under relevant simplifications and restrictions.
In minimum contrast estimation, entropy measures, especially those rooted in Kullback–Leibler divergence, are often used to quantify the “distance” between an empirical distribution and a theoretical model. This approach is valuable when likelihood functions are complex or unspecified, allowing for the use of contrast functions like entropy to approximate them [14,15]. By minimizing the “contrast” (e.g., the divergence) between observed and model-based distributions, estimators can achieve a form of alignment that captures the underlying statistical structure with minimal assumptions.
Entropy concepts, particularly in generalized empirical likelihood (GEL), serve a dual role. They adjust likelihood estimates by introducing penalties for models that diverge from empirical data, thereby helping to align model-based probabilities more closely with observed data. GEL methods use entropy-based divergences, such as Kullback–Leibler divergence, to balance fit with the need for generalization, reducing overfitting by choosing model parameters that satisfy the given constraints [16]. This information-theoretic approach to estimation, encompassing methods like GEL and minimum contrast, emphasizes entropy’s role in making robust inferences that respect both model and data uncertainties.
Our paper extends other contributions on the employment of GEL/GMC estimators in the estimation of DSGE and economic models in general. Two moment-based estimators were considered in DSGE estimation by [7]. The author estimated a DSGE model using GMM and exponentially tilted empirical likelihood (ETEL), a GEL/GMC estimator. ETEL did not obtain results as close to the true values as the GMM estimator, and, in addition, the ETEL results presented a high standard deviation. Both estimators presented difficulties in identifying the curvature parameter of the utility function. The author encouraged further research in this field due to the advantages presented by empirical likelihood, such as (i) the direct use of the equilibrium conditions of the model, since it is not necessary to compute the policy functions; (ii) the flexibility in assuming distributions for the stochastic process of the economy; and (iii) the preservation of the nonlinear structure of the equilibrium conditions of the model.
Several applications of moment-based GEL/GMC estimators have been made in finance. Among the covered topics, we highlight the following: portfolio selection based on the empirical likelihood and Hellinger distance [17]; asset pricing models under misspecification and robustness analysis [18]; estimation of the discretized stochastic differential equation for interest rates [19]; dealing with hard assumptions of Black–Scholes option pricing [20]; obtaining measures of the risk of loss on a specific portfolio, such as value at risk and expected shortfall [21]; improvement in estimation precision for portfolio optimization [22]; and portfolio efficiency tests with conditioning information in the presence of data contamination [23]. These papers explored the good robustness and other properties of estimators belonging to the class of GEL/GMC estimators under misspecification and data contamination. They found promising results both in simulations and empirical applications.
Ref. [18] highlighted the possibilities of robustness analysis in the estimation of misspecified asset pricing models using GEL/GMC methods. Ref. [19] found that GEL/GMC estimators (mainly the ETEL) outperform the GMM, in terms of bias and mean squared error, in the estimation of stochastic differential equations for interest rates. Ref. [23] showed that GEL estimators perform better than the GMM in portfolio efficiency tests with conditioning information in the presence of data contamination, such as heavy tails and outliers. Ref. [24] applied adjusted empirical likelihood to make robust inferences about the Sharpe ratio in asset pricing and [25] analyzed the properties of the maximum empirical likelihood, maximum empirical exponential likelihood, and maximum log Euclidean likelihood estimators to estimate spatial autocorrelation models.
We deal with a real business cycle (RBC) model that can be considered the core of current DSGE models. We verify by means of Monte Carlo experiments if the studied estimators generate satisfactory results in terms of bias and variance measures in situations where the estimated model is correctly specified. We also emphasize the robustness analysis under both local and global misspecification situations. While in the correctly specified model productivity shocks follow a normal distribution, in misspecified models, we generate productivity shocks following a Student’s t-distribution or a normal distribution with the inclusion of single or several outliers.
The objective of this article is to analyze moment condition-based methods applied to the estimation of DSGE models, with a particular focus on extending the use of the generalized method of moments (GMM) to alternative approaches grounded in moment conditions. Specifically, this study evaluates the strengths and limitations of methods such as empirical likelihood and generalized minimum contrast, considering their performance under both correctly specified and misspecified model settings.
Among the main results of our paper for the estimation of a DSGE model are the following: (i) the empirical likelihood (EL) estimator, as well as its version with smoothed moment conditions (SEL), and Bayesian inference (BI) obtained, in this order, the best performances, even in misspecification cases; (ii) continuous updating empirical likelihood (CUE), minimum Hellinger distance (HD), exponential tilting (ET), and their smoothed versions presented intermediate comparative performance; (iii) ETEL, exponential tilting Hellinger distance (ETHD) estimators, and their smoothed versions were compromised by the occurrence of atypical estimates; (iv) smoothed and non-smoothed versions of the GEL/GMC estimators exhibited very similar performances; and (v) GMM, especially in the over-identified case, and ML estimators performed worse than their competitors.
These experiments show some cases of real problems that may affect the DSGE estimation. Thus, our study contributes to the still limited literature on the robust estimation of DSGE models. As these models have high importance in the analysis and conduct of economic policies, our study contributes some recommendations about using estimation methods in situations of both correct and incorrect specification.
This paper is structured in four additional sections besides this introduction and the Appendix. In Section 2, the economic model is presented. Section 3 presents the estimators and Monte Carlo design. Section 4 discusses the results. Section 5 concludes this paper. Lastly, the Appendix shows the derivation of the moment conditions used in the moment-based approach and the definitions of local and global misspecification.

2. Model

This section presents the economic model used in this paper. We work with the "one consumer–one producer" version of the RBC model, similar to the model proposed by [26] and the standard stochastic growth model used by [12]. This version has a price vector (real wage and interest rate) and a firm maximization problem. We consider a utility function in which consumption preferences are characterized by constant relative risk aversion (CRRA) and leisure (labor disutility) is linear, as defined in the model used by [12]. This model presents both the core components and the estimation difficulties of DSGE models while having few parameters, which facilitates the analysis of results.
  • Preferences
Suppose that the economy's population consists of identical agents and that the representative consumer's preferences are characterized by the instantaneous utility function

$$u(c_t, n_t) = \frac{c_t^{1-\gamma}}{1-\gamma} + b(1 - n_t), \qquad \gamma, b > 0$$

where $c_t$ is consumption in period t, $n_t$ is labor in period t (so $1 - n_t$ is leisure time), γ is the CRRA parameter, and b measures the disutility of labor (or utility of leisure). Time endowment and population size are constant and normalized to 1.
  • Production
Suppose there exists a single perishable good in the economy, $y_t$, whose production can be described by the function

$$y_t = f(k_t, n_t, z_t) = z_t k_t^{\alpha} n_t^{1-\alpha} \tag{1}$$

where $\alpha \in (0, 1)$ is a parameter, $k_t$ is the capital stock in period t, and $z_t$ is an exogenous productivity shock that occurred in t. Since this function is homogeneous of degree one, it exhibits constant returns to scale.
  • Laws of motion and feasibility
Consider a technological level described by

$$\ln z_t = \rho \ln z_{t-1} + \varepsilon_t, \qquad \varepsilon_t \sim D(0, \sigma^2) \tag{2}$$

where $\rho \in (-1, 1)$ indicates that the technology follows a stationary process, and $\varepsilon_t$ is an independent and identically distributed (iid) shock following a generic distribution D with mean zero and variance $\sigma^2$.
Suppose that the capital stock evolves according to

$$k_{t+1} = i_t + (1 - \delta) k_t, \qquad k_0 > 0 \tag{3}$$

where $\delta \in [0, 1]$ is the depreciation rate and $i_t$ is the investment in period t. In this way, the economy must respect the feasibility condition expressed by

$$c_t + i_t = w_t n_t + r_t k_{t+1} + \pi_t \tag{4}$$

where $w_t$ is the real wage, $r_t$ is the real interest rate, and $\pi_t$ is firm profit. Note that the prices $w_t$ and $r_t$ are expressed in units of the consumption good.
  • Maximization problems, optimality conditions, and competitive equilibrium
In this economy, the representative consumer solves the following maximization problem

$$\max_{\{c_t, n_t, k_{t+1}\}_{t=0}^{\infty}} E_0 \sum_{t=0}^{\infty} \beta^t u(c_t, n_t)$$

subject to (1)–(4). The representative firm, in turn, solves the following static maximization problem

$$\max_{y_t, n_t, k_{t+1}} \pi_t$$

with $\pi_t = y_t - w_t n_t - r_t k_{t+1}$, subject to (1)–(3). The optimal choices of consumption and labor supply are given by the first-order conditions

$$c_t^{\gamma} = \frac{w_t}{b}$$

$$c_t^{-\gamma} = \beta E_t \left[ (1 - \delta + r_{t+1})\, c_{t+1}^{-\gamma} \right]$$

which are the intratemporal relation between consumption and labor and the Euler equation for consumption, respectively. The firm's first-order conditions are given by

$$r_t = \alpha z_t k_t^{\alpha - 1} n_t^{1-\alpha}, \qquad w_t = (1 - \alpha) z_t k_t^{\alpha} n_t^{-\alpha}.$$
The competitive equilibrium of this economy is given by the sequence of prices $\{w_t, r_t\}_{t=0}^{\infty}$ and the sequence of allocations $\{c_t, k_{t+1}, n_t\}_{t=0}^{\infty}$ such that, given prices, consumer utility and firm profit are maximized and the goods, capital, and labor markets clear.
  • Steady state and policy functions
After obtaining steady-state values, the optimal allocation sequence is obtained from the policy functions. Ref. [27] points out that, even with the computational advancement, to obtain the policy functions, it is necessary to resort to some approximation when the model is more complicated and the dimension of state variables increases. From the log-linearized FOC equations, we proceed by writing the model in matrix form, relating all variables contemporaneously, as well as the lagged and leading values, and obtaining the policy functions [27]. Among the procedures that fulfill this task, [28,29] stand out. For further details, see [27,30,31].
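To make the steady-state computation concrete, the following sketch solves the model's steady state in closed form from the FOCs above (with z = 1 in the steady state, the Euler equation pins down the interest rate). The calibration values for α, β, δ, γ, and b are hypothetical, chosen only for illustration:

```python
# steady state of the RBC model with z = 1; calibration values are illustrative
alpha, beta, delta, gamma, b = 0.33, 0.99, 0.025, 2.0, 2.5

r = 1.0 / beta - 1.0 + delta                 # Euler: 1 = beta * (1 - delta + r)
k_n = (alpha / r) ** (1.0 / (1.0 - alpha))   # from r = alpha * (k/n)^(alpha - 1)
w = (1.0 - alpha) * k_n ** alpha             # from w = (1 - alpha) * (k/n)^alpha
c = (w / b) ** (1.0 / gamma)                 # intratemporal FOC: c^gamma = w / b
n = c / (k_n ** alpha - delta * k_n)         # resource constraint: c + delta*k = y
k = k_n * n
y = k ** alpha * n ** (1.0 - alpha)
```

By construction, the resulting allocation satisfies the steady-state Euler equation, the firm's FOCs, and the resource constraint c + δk = y.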

3. Methodology

3.1. Estimators

3.1.1. Generalized Method of Moments

Ref. [32] derived the generalized method of moments (GMM) estimator and demonstrated its large-sample properties. Let $h(x_t, \theta_0)$ be a vector of moment conditions, where $\theta_0$ is the vector of true parameters and $x_t$ are random variables; in this case, $E[h(x_t, \theta_0)] = 0$. The sample mean of $h(x_t, \theta)$ generates the sample moment condition $g(x_t, \theta) \equiv \frac{1}{T} \sum_{t=1}^{T} h(x_t, \theta)$, where θ is a vector of unknown parameters belonging to the parametric space Θ, and T indicates the sample size. From this, in the over-identified case, we define the GMM estimator by

$$\hat{\theta}_{GMM} = \arg\min_{\theta \in \Theta} \; g(x_t, \theta)'\, W\, g(x_t, \theta)$$
where W is a positive-definite weighting matrix whose optimal form corresponds to the inverse of the asymptotic variance matrix [33]. However, the asymptotic variance matrix is a function of parameters and, therefore, must be estimated. Refs. [34,35] define heteroscedasticity and autocorrelation-consistent (HAC) estimators for asymptotic variance.
In this paper, we use the two-step GMM (2SGMM) estimator. Iterative and continuous-updating versions presented optimization and convergence problems and, in general, did not prove computationally reliable in DSGE estimation. The 2SGMM procedure, initially proposed by [32], starts from an initial weighting matrix W, usually the identity matrix. From this first step, the HAC matrix $\hat{S}(\hat{\theta}_1)$ is calculated, where $\hat{\theta}_1$ is the parameter vector estimated in the first step. The second step uses the weighting matrix $W = \hat{S}(\hat{\theta}_1)^{-1}$, and $\hat{\theta}_2$ is obtained by minimizing the GMM objective function. If the system is over-identified, that is, if the number of moment conditions is greater than the number of parameters, we can employ the J test proposed by [32], whose null hypothesis is a well-specified model (namely, valid moment conditions).
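As an illustration of the two-step procedure, the sketch below estimates a single parameter from two moment conditions (an over-identified toy problem, not the DSGE moments of this paper); the data-generating process and search bounds are assumptions for the example, and an iid covariance estimate stands in for the HAC matrix that dependent data would require:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(42)
x = rng.normal(2.0, 1.0, 500)    # toy data: N(theta, 1) with theta = 2

def moments(theta, x):
    # h(x_t, theta): two moment conditions for one parameter (over-identified)
    return np.column_stack([x - theta, (x - theta) ** 2 - 1.0])

def gmm_objective(theta, x, W):
    g = moments(theta, x).mean(axis=0)      # sample moment g(x_t, theta)
    return g @ W @ g

def two_step_gmm(x):
    # step 1: identity weighting matrix
    W = np.eye(2)
    theta1 = minimize_scalar(gmm_objective, args=(x, W),
                             bounds=(0.0, 4.0), method="bounded").x
    # step 2: optimal W is the inverse moment covariance (iid case; HAC in general)
    S = np.cov(moments(theta1, x).T, bias=True)
    return minimize_scalar(gmm_objective, args=(x, np.linalg.inv(S)),
                           bounds=(0.0, 4.0), method="bounded").x

theta_hat = two_step_gmm(x)
```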

3.1.2. Generalized Empirical Likelihood and Generalized Minimum Contrast

Generalized Empirical Likelihood (GEL) and EL, ET, and CUE Estimators

The generalized empirical likelihood (GEL) class, initially proposed by [36], is a unifying framework encompassing estimators with a common structure. These estimators are asymptotically equivalent to 2SGMM and have better higher-order asymptotic properties than the latter, as well as better performance in small-sample cases [37,38,39].
Let ρ(υ) be a function whose domain is a convex set Υ containing zero. The GEL estimator is expressed by the saddle point problem

$$\hat{\theta}_{GEL} = \arg\min_{\theta \in \Theta} \sup_{\lambda \in \Lambda} \sum_{t=1}^{T} \rho(\lambda' g(x_t, \theta)) \tag{7}$$

where $\Lambda = \{\lambda : \lambda' g(x_t, \theta) \in \Upsilon\}$ [37,40]. The choice of the function ρ(υ) defines the following estimators: (1) the empirical likelihood (EL) of [41,42,43]: $\rho(\upsilon) = \ln(1 - \upsilon)$; (2) the exponential tilting (ET) of [44,45]: $\rho(\upsilon) = -\exp(\upsilon)$; and (3) the GMM continuous updating estimator (CUE) of [46]: $\rho(\upsilon) = -(1 + \upsilon)^2 / 2$ [37,39,47].
Empirical likelihood is a nonparametric method that constructs a likelihood function directly from the data. Given a set of moment conditions, this method avoids assuming a parametric form for the likelihood and yields estimators with asymptotically normal distributions. One notable property of EL is its Bartlett correctability, which allows for the adjustment of confidence intervals to improve their higher-order accuracy. However, EL can be sensitive to model misspecification, particularly when the moment conditions involve unbounded functions, as this can lead to a loss of $\sqrt{n}$ consistency in estimation. Exponential tilting modifies the empirical likelihood framework by introducing exponential weights to observations. ET enhances robustness to model misspecification, maintaining $\sqrt{n}$ convergence rates even when the assumed model is not perfectly specified. However, unlike EL, ET is not Bartlett-correctable, limiting its ability to achieve higher-order accuracy in confidence intervals.
The generalized method of moments (GMM) continuous updating estimator (CUE) offers certain theoretical advantages, such as asymptotic efficiency under correct specification, by incorporating an optimal weighting matrix that depends on the parameter estimate θ . However, it is prone to several numerical instabilities, which can limit its practical implementation. These instabilities arise from the nonlinearity of the objective function, sensitivity to the choice of instruments, and properties of the weighting matrix, as discussed by [46].
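The saddle point problem in (7) can be sketched numerically for the EL case, ρ(υ) = ln(1 − υ), with the just-identified mean moment g(x_t, θ) = x_t − θ (a toy setup, not this paper's DSGE moments). The inner problem is concave in λ and must keep 1 − λ′g_t positive:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
x = rng.normal(1.0, 1.0, 300)    # toy sample; moment is g_t = x_t - theta

def el_profile(theta, x):
    # inner problem: sup over lambda of sum(rho(lambda * g_t)), rho(v) = log(1 - v),
    # restricted so that 1 - lambda * g_t > 0 for every observation
    g = x - theta
    lo = 1.0 / g.min() + 1e-6    # assumes g.min() < 0
    hi = 1.0 / g.max() - 1e-6    # assumes g.max() > 0
    res = minimize_scalar(lambda lam: -np.sum(np.log(1.0 - lam * g)),
                          bounds=(lo, hi), method="bounded")
    return -res.fun              # value of the inner sup

# outer minimization over theta; bounds keep g_t with both signs
b_lo, b_hi = np.percentile(x, [10, 90])
theta_hat = minimize_scalar(lambda th: el_profile(th, x),
                            bounds=(b_lo, b_hi), method="bounded").x
```

In this just-identified case, the outer minimizer coincides with the sample mean, at which the optimal λ is zero.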

Generalized Minimum Contrast (GMC) and Its Relationship with GEL

The GEL class can be considered a special case of the generalized minimum contrast (GMC) class of estimators, which generalizes the ideas contained in [48,49]. Following [38], consider a general divergence function between two probability measures, P and Q, expressed by

$$D(P, Q) = \int \phi\!\left(\frac{dP}{dQ}\right) dQ \tag{8}$$

where φ(·) is a convex function. One of these probability measures usually follows a nonparametric data distribution, and the other corresponds to a statistical distribution associated with some model. Let $x_t \in \mathbb{R}^n$ be iid random variables, where n is the number of random variables. Consider a model that follows the moment conditions $E[g(x_t, \theta)] = \int g(x, \theta)\, d\mu = 0$, $\theta \in \Theta \subset \mathbb{R}^q$, where $g \in \mathbb{R}^q$ is a known function, q is the number of parameters, and Θ is a parametric space. Let M be the set of all probability measures on $\mathbb{R}^n$ and define

$$\mathcal{P} = \bigcup_{\theta \in \Theta} \left\{ P \in M : \int g(x_t, \theta)\, dP = 0 \right\},$$

that is, $\mathcal{P}$ represents the set of all probability measures compatible with the moment restrictions. The model $\mathcal{P}$ is correctly specified only if it includes the true probability measure μ. Thus, the GMC optimization problem is given by

$$\inf_{\theta \in \Theta} \inf_{P \in \mathcal{P}} D(P, \mu). \tag{9}$$
Following [37,50], the minimum contrast problem in (9) can be rewritten using a contrast function $h_T(p_t)$:

$$\hat{\theta}_{MC} = \arg\min_{\theta, p_t} \sum_{t=1}^{T} h_T(p_t)$$

Assuming, as in [51], a family of discrepancy functions given by

$$h_T(p_t) = \frac{[\gamma(\gamma + 1)]^{-1} \left[ (T p_t)^{\gamma + 1} - 1 \right]}{T}, \tag{10}$$

where γ is an indexing parameter, we obtain the estimators of the GEL family, which also belong to the GMC family, by assigning specific values to γ: (1) EL: γ = 0; (2) ET: γ = −1; and (3) CUE: γ = 1 [37,47,50].

Computational Implementation of the GEL/GMC Estimators

After the algebraic development of the optimization problem, Ref. [38] arrives at the following result, which defines the GMC estimator for the sample case:

$$\hat{\theta}_{GMC} = \arg\min_{\theta \in \Theta} \inf_{\{p_t\}} \left\{ \frac{1}{T} \sum_{t=1}^{T} \phi(T p_t) : \sum_{t=1}^{T} p_t = 1; \; \sum_{t=1}^{T} p_t g(\theta, x_t) = 0 \right\},$$

where φ is a convex function. A convenient analogue for computational implementation, deriving from the duality theorem presented in [52], is given by

$$\hat{\theta}_{GMC} = \arg\min_{\theta \in \Theta} \max_{\lambda,\, \gamma} \left[ \lambda - \frac{1}{T} \sum_{t=1}^{T} \phi^*\!\left(\lambda + \gamma' g(\theta, x_t)\right) \right] \tag{11}$$

where λ and γ are Lagrange multipliers and φ* is the convex conjugate of φ (for a convex function f(x), its convex conjugate $f^*$ is given by $f^*(y) = \sup_x [xy - f(x)]$ [38]). After the algebraic development of Equation (11), we obtain an expression that is equivalent to Equation (7), which defines the GEL estimator. Thus, the GEL and GMC estimators share similar properties, such as the same asymptotic distribution, the possibility of using the objective function value for inference, and similar arguments for conducting overidentification tests [38].

Exponentially Tilted Empirical Likelihood (ETEL)

Ref. [47] proposed an estimator, called exponentially tilted empirical likelihood (ETEL), that merges the EL estimator, which has good asymptotic properties in the case of correctly specified models, with the ET estimator, which has robust behavior under global misspecification. We can define it by

$$\hat{\theta}_{ETEL} = \arg\min_{\theta \in \Theta} \frac{1}{T} \sum_{t=1}^{T} \tilde{h}(\hat{p}_t(\theta))$$

where $\hat{p}_t(\theta)$ is the solution of

$$\min_{\{p_t\}_{t=1}^{T}} \left\{ \frac{1}{T} \sum_{t=1}^{T} h(p_t) : \sum_{t=1}^{T} p_t = 1; \; \sum_{t=1}^{T} p_t g(x_t, \theta) = 0 \right\}$$

with $\tilde{h}(p_t) = -\ln(T p_t)$ and $h(p_t) = T p_t \ln(T p_t)$.
This estimator exhibits the advantages of both estimators that define it; that is, it possesses the same low bias and the same variance as the EL under correct specification and avoids the difficulties of the EL under global misspecification because it contains the ET in its structure [3,47]. More specifically, ETEL uses ET to obtain $\hat{p}_t(\theta)$ and EL to obtain $\hat{\theta}$ [19].
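The ET step that ETEL uses to obtain the implied probabilities $\hat{p}_t(\theta)$ can be sketched via its dual problem: λ minimizes $(1/T)\sum_t \exp(\lambda' g_t)$, and $p_t \propto \exp(\lambda' g_t)$. The moment function and the evaluation point θ below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 200)    # toy sample

def et_probabilities(g):
    # ET implied probabilities: p_t proportional to exp(lambda' g_t), where
    # lambda solves the dual problem min_lambda (1/T) sum_t exp(lambda' g_t)
    lam = minimize(lambda l: np.mean(np.exp(g @ l)),
                   x0=np.zeros(g.shape[1]), method="BFGS").x
    w = np.exp(g @ lam)
    return w / w.sum()

theta = 0.3                      # hypothetical evaluation point
g = (x - theta).reshape(-1, 1)   # moment condition g_t = x_t - theta
p = et_probabilities(g)          # satisfies sum p_t g_t ≈ 0 and sum p_t = 1
```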

Minimum Hellinger Distance Estimator (HD)

According to [53], the motivation for seeking estimation robust to small perturbations is that the data may deviate from the distribution considered in the modeling. The Hellinger distance can be used to measure the divergence between distributions, as elucidated by (8), and it is defined by

$$H(P_\theta, P) = \int \left( p_\theta^{1/2}(x) - p^{1/2}(x) \right)^2 dx,$$

where $P_\theta$ and P are probability measures with densities $p_\theta$ and p, respectively.
Ref. [54] discusses the use of estimators based on the minimization of the Hellinger distance for parametric and nonparametric procedures, which are asymptotically similar to ML and robust to deviations from the correct specification. Ref. [53] obtains a computationally convenient estimator with minimax robustness properties by associating the minimization of the Hellinger distance with moment conditions. The minimum Hellinger distance (HD) estimator is semiparametrically efficient, is robust in neighborhoods of the true $P_\theta$, and can be expressed by

$$\hat{\theta}_{HD} = \arg\min_{\theta \in \Theta} H(P_\theta, \hat{P}) = \arg\min_{\theta \in \Theta} \int \left( p_\theta^{1/2}(x) - \hat{p}^{1/2}(x) \right)^2 dx,$$

where $\hat{p}$ is a nonparametric density estimator for p, such as a kernel estimator, and $\hat{P}$ is the corresponding estimator for the probability measure of x. This estimator is asymptotically equivalent to ML and, thus, efficient if the model's hypotheses are satisfied. Moreover, setting γ = −1/2 in (10) yields the HD estimator [50]; therefore, it belongs to the GEL/GMC families [53].
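A minimal sketch of the HD estimator for a location parameter, assuming a N(θ, 1) parametric family and a Gaussian kernel density estimate for $\hat{p}$ (both choices are illustrative; the integral is approximated by a Riemann sum on a grid):

```python
import numpy as np
from scipy.stats import gaussian_kde, norm
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)
x = rng.normal(1.5, 1.0, 400)    # toy sample with true location 1.5

kde = gaussian_kde(x)            # nonparametric estimate p_hat
grid = np.linspace(x.min() - 3.0, x.max() + 3.0, 2000)
p_hat = kde(grid)

def hellinger_sq(theta):
    # squared Hellinger distance between N(theta, 1) and the KDE, on the grid
    p_theta = norm.pdf(grid, loc=theta, scale=1.0)
    diff = np.sqrt(p_theta) - np.sqrt(p_hat)
    return (diff ** 2).sum() * (grid[1] - grid[0])

theta_hd = minimize_scalar(hellinger_sq, bounds=(-2.0, 5.0), method="bounded").x
```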

Exponential Tilting Hellinger Distance (ETHD)

Ref. [3] merged the HD estimator, which has good asymptotic properties and good performance under correct specification and local misspecification, with the ET estimator, which has good properties under global misspecification. The result is the exponential tilting Hellinger distance (ETHD) estimator, which is efficient under correct specification and robust to both local and global misspecification. The ETHD estimator is given by

$$\hat{\theta}_{ETHD} = \arg\min_{\theta \in \Theta} H(\hat{p}(\theta), \hat{p})$$

where $\hat{p}(\theta)$ is the solution of

$$\min_{\{p_t\}_{t=1}^{T}} \left\{ \frac{1}{T} \sum_{t=1}^{T} p_t \log(T p_t) : \sum_{t=1}^{T} p_t g(x_t, \theta) = 0; \; \sum_{t=1}^{T} p_t = 1 \right\}$$

Note that the ETHD combines the discrepancy function H(·) of the HD defined in (8) with the implied probabilities of the ET [3].

Smoothed Generalized Empirical Likelihood (SGEL) and SEL, SET, SCUE, SETEL, SHD, and SETHD Estimators

Smoothing techniques are integral to enhancing the performance of moment condition-based methods, particularly when dealing with non-smooth or discontinuous moment functions. Such irregularities can cause instability in the objective functions, complicating the optimization process. By applying smoothing, the objective functions become numerically differentiable and exhibit properties akin to convex functions, thereby facilitating more reliable optimization.

In scenarios involving discrete variables, such as count data, or when moment conditions are associated with nonparametric functions like kernel regressions, smoothing plays a crucial role. It mitigates abrupt changes or high variability in the empirical likelihood calculations, leading to more stable and accurate estimations.

Moreover, smoothing addresses the bias–variance tradeoff inherent in statistical estimations. By averaging over noisy or volatile moment conditions, smoothing reduces the variance of the estimated parameters. However, this process may introduce a slight bias. Careful adjustment of smoothing parameters, such as kernel bandwidth, allows for control over this tradeoff, optimizing the balance between bias and variance.

When dealing with non-independent and identically distributed (non-iid) data, smoothing becomes even more critical. Non-iid data often exhibit dependencies or heteroscedasticity, which can violate the assumptions underlying traditional GEL methods. Smoothing techniques help in regularizing these complexities, ensuring that the moment conditions remain applicable and the estimations remain consistent and efficient. For instance, in time-series analysis or panel data models where observations are dependent, smoothing can alleviate issues arising from such dependencies, leading to more robust inference.
The estimators of the smoothed generalized empirical likelihood (SGEL) family, that is, GEL estimators with smoothed moment conditions (the distinction between GEL and SGEL is present in [40,55]), depicted in [36,40,55], can deal with time-dependent data and with heteroscedastic and serially correlated moment conditions by replacing $g(x_t, \theta)$ with a smoothed version of the moment conditions given by

$$g^{\omega}(x_t, \theta) = \sum_{j=-m}^{m} \omega(j)\, g(x_{t-j}, \theta)$$

where ω(j) is a weighting function normalized to sum to unity, defined from a kernel function, similar to what occurs in the HAC estimators proposed by [34,35], and m is a truncation lag that reflects the serial correlation order in $g(x_t, \theta)$. Using smoothed moment conditions can improve performance in terms of bias even in the case of iid data [40]. Using moment conditions of the form $\sum_{t=1}^{T} p_t g^{\omega}(x_t, \theta) = 0$, we obtain versions with smoothed moments of the EL, ET, CUE, ETEL, HD, and ETHD estimators, named SEL, SET, SCUE, SETEL, SHD, and SETHD.
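The smoothed moment conditions amount to a moving weighted average of the raw moments. The flat (truncated uniform) kernel and the edge-padding boundary treatment below are illustrative choices; in practice, ω(j) would come from the kernel implied by the HAC estimator:

```python
import numpy as np

def smooth_moments(g, m):
    # g_omega(x_t, theta) = sum_{j=-m}^{m} omega(j) * g(x_{t-j}, theta),
    # here with flat weights omega(j) = 1 / (2m + 1), which sum to one
    T = g.shape[0]
    weights = np.full(2 * m + 1, 1.0 / (2 * m + 1))
    g_pad = np.pad(g, ((m, m), (0, 0)), mode="edge")   # simple boundary handling
    out = np.empty_like(g, dtype=float)
    for t in range(T):
        out[t] = weights @ g_pad[t: t + 2 * m + 1]
    return out

rng = np.random.default_rng(3)
g = rng.normal(size=(100, 2))    # illustrative raw moment contributions
g_s = smooth_moments(g, m=2)     # smoothing lowers the variance of the moments
```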

Specification Tests for GEL/GMC Estimators

While the J test is usually used to verify the GMM specification in over-identified cases, in GEL and GMC estimation, we can also use the Lagrange multiplier (LM) and likelihood ratio (LR) tests, which consider the Lagrange multipliers of Equation (7). The J, LM, and LR statistics are asymptotically equivalent; thus, under the null, the LM and LR tests have a chi-square asymptotic distribution, as does the J test [39].

3.1.3. Maximum Likelihood and Bayesian Inference

Likelihood-based methods maximize a likelihood function to obtain the parameters. We rewrite the model in state-space representation to employ this approach in DSGE estimation. To avoid stochastic singularity, there must be at least as many structural shocks as observable variables. The method has many optimality properties if the model is correctly specified; for more details, see Canova [30]. Since we have a single structural shock in our RBC model, we use only one observable variable (output $y_t$) in the estimation process. In situations like this, it is common to add measurement errors to the observation equations as artificial shocks; however, to maintain the same model in all estimates, we use a single observable variable.
Our state-space representation based on the log-linearized solution of the model is given by
$$\tilde{y}_t = H' \tilde{\xi}_t \qquad (12)$$
$$\tilde{\xi}_t = F(\theta)\, \tilde{\xi}_{t-1} + G(\theta)\, \varepsilon_t \qquad (13)$$
$$\varepsilon_t \sim N(0, \sigma^2) \qquad (14)$$
where $\tilde{y}_t$ denotes the vector of observable variables as deviations from their steady-state values (in our case, only output), and $\tilde{\xi}_t$ is a $(1 \times (m + n))$ vector of all variables, also as deviations from steady-state values, with $m$ and $n$ indicating the number of observable and state variables, respectively; that is, $\tilde{\xi}_t = [\tilde{y}_t, \tilde{z}_t]'$, where $\tilde{z}_t$ is the vector of state variables (only the technological shock).
The matrix $H$ in Equation (12) maps the state vector into the observable variables, and the matrices $F(\theta)$ and $G(\theta)$ in Equation (13) contain nonlinear relationships between the parameters obtained from the log-linearized solution. Since the stochastic process in Equation (14) follows a normal distribution, we use the Kalman filter to evaluate the likelihood function. We then estimate the parameters either by classical inference, maximizing the likelihood function, or by Bayesian inference, combining the likelihood function with a prior distribution to obtain the posterior distribution of the parameters.

Maximum Likelihood (ML)

The ML estimator of $\theta$ is given by
$$\hat{\theta}_{ML} = \arg\max_{\theta \in \Theta} \ell(\theta)$$
where $\ell(\theta)$ denotes the log-likelihood function defined by
$$\ell(\theta) = \log L(\tilde{y}_t \mid \theta) = -\frac{T}{2}\ln(2\pi) - \frac{1}{2}\sum_{t=1}^{T}\log\big|H' P_{t|t-1} H\big| - \frac{1}{2}\sum_{t=1}^{T}(\tilde{y}_t - H'\tilde{\xi}_{t|t-1})'\big(H' P_{t|t-1} H\big)^{-1}(\tilde{y}_t - H'\tilde{\xi}_{t|t-1}) \qquad (15)$$
where $P_{t|t-1} = E\big[(\tilde{\xi}_t - \tilde{\xi}_{t|t-1})(\tilde{\xi}_t - \tilde{\xi}_{t|t-1})'\big]$ is the variance–covariance matrix of the state prediction error. The log-likelihood (15) can be expressed in terms of the prediction error decomposition. For more details about DSGE estimation using the ML estimator, see [30].
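The prediction-error decomposition can be sketched as follows (Python; the function name, the large diffuse-style initialization of $P_{0|-1}$, and the scalar-observation setup mirroring our single observable are our own illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def kalman_loglik(y, H, F, Q):
    """Gaussian log-likelihood of y_t = H' xi_t, xi_t = F xi_{t-1} + eps_t,
    eps_t ~ N(0, Q), via the prediction-error decomposition (scalar y_t)."""
    n = F.shape[0]
    xi = np.zeros(n)            # xi_{t|t-1}, started at the steady state
    P = 10.0 * np.eye(n)        # P_{t|t-1}; large initial variance (assumption)
    ll = 0.0
    for t in range(len(y)):
        v = y[t] - H @ xi                   # prediction error
        Sv = H @ P @ H                      # its variance, H' P_{t|t-1} H
        ll += -0.5 * (np.log(2 * np.pi) + np.log(Sv) + v * v / Sv)
        FPH = F @ P @ H
        K = FPH / Sv                        # Kalman gain
        xi = F @ xi + K * v                 # state prediction update
        P = F @ P @ F.T + Q - np.outer(FPH, FPH) / Sv
    return float(ll)
```

Maximizing this function over $\theta$, which enters through $F(\theta)$ and $G(\theta)$ (hence $Q$), yields the ML estimator; the same likelihood also enters the Bayesian posterior.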

Bayesian Inference (BI)

This method combines the likelihood function and a prior distribution to generate a posterior distribution of the parameters using Bayes' rule
$$\pi(\theta \mid \tilde{y}_t) = \frac{L(\tilde{y}_t \mid \theta)\, \pi(\theta)}{\int L(\tilde{y}_t \mid \theta)\, \pi(\theta)\, d\theta}$$
where $\pi(\theta \mid \tilde{y}_t)$ is the posterior distribution of $\theta$, $L(\tilde{y}_t \mid \theta)$ is the likelihood function, $\pi(\theta)$ is the prior distribution, and $\int L(\tilde{y}_t \mid \theta)\, \pi(\theta)\, d\theta$ is the marginal likelihood [56]. Typical inference objectives, such as the mean of the posterior distribution, involve the calculation of $\int g(\theta)\, \pi(\theta \mid \tilde{y}_t)\, d\theta$, where $g(\theta)$ represents a function of interest.
To find marginal distributions, we need to solve multiple integrals whose solutions usually require numerical methods. The main method used in the Bayesian estimation of DSGE models is Markov chain Monte Carlo (MCMC), which constructs a Markov chain in $\theta$ that converges to the posterior distribution of interest. The original algorithm used to perform numerical integration via MCMC was developed by [57] and refined by [58]. The so-called random walk Metropolis–Hastings (RWMH) algorithm generates a sequence of proposals that follows a random walk process and yields draws from the posterior density after an initial burn-in. For more detail about RWMH-MCMC, see [30,31,56,59].
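A minimal RWMH sketch in Python (the function name, proposal scale, and the standard normal toy target below are our own illustration, not the paper's implementation):

```python
import numpy as np

def rwmh(log_post, theta0, scale, n_draws, burn_in, seed=0):
    """Random walk Metropolis-Hastings: propose theta' = theta + scale * z,
    z ~ N(0, I), accept with prob. min(1, exp(lp(theta') - lp(theta)))."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    draws, accepted = [], 0
    for _ in range(n_draws):
        prop = theta + scale * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # MH acceptance step
            theta, lp = prop, lp_prop
            accepted += 1
        draws.append(theta.copy())
    return np.array(draws[burn_in:]), accepted / n_draws
```

For example, `rwmh(lambda th: -0.5 * float(th @ th), np.zeros(1), 1.0, 5500, 1500)` targets a standard normal density with the chain length and burn-in used in our experiments; in the DSGE application, `log_post` is the Kalman-filter log-likelihood plus the log prior.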

3.2. Monte Carlo Design

The RBC model presented in Section 2 has seven structural parameters. Table 1 describes these parameters and their values in the simulations. The parameters $\alpha$ and $\delta$ were calibrated, that is, in the estimation they were replaced by their true values (exact calibration). Thus, we have five parameters to estimate: $\theta = \{\beta, \gamma, b, \rho, \sigma\}$. The coefficient $b$ was chosen so that the hours-worked ratio in the steady state is approximately $\frac{1}{3}$, as in [12]. The coefficient $\sigma$ was defined to generate endogenous variables at reasonable intervals, as in [7]. The other parameter values are recurrent in the literature.
We defined four data-generating processes (DGPs) to estimate a log-linearized model with normally distributed productivity shock. The four DGPs considered are as follows:
  • DGP I: The log-linear model in Section 2 with normally distributed productivity shock—Equation (2). Just for this DGP, the estimated model is correctly specified.
  • DGP II: The log-linear model in Section 2 with productivity shock following Student’s t-distribution with 4 degrees of freedom. This DGP admits extreme events; in this case, the estimated model has a global misspecification, since the contamination affects a fixed proportion of the sample.
  • DGP III: The log-linear model in Section 2 with normally distributed productivity shock contaminated by a single positive outlier of magnitude equal to 5 standard deviations in the middle of the sample (fixed outlier). The estimated model has a local misspecification in this case since the contamination disappears asymptotically.
  • DGP IV: The log-linear model in Section 2 with normally distributed productivity shock contaminated by multiple outliers. The position of the outliers is drawn from a uniform distribution enabling up to 5% of positive outliers (magnitude equal to 3 standard deviations) and up to 5% of negative outliers (magnitude equal to −3 standard deviations). This is another case in which the estimated model is globally misspecified.
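The four shock processes can be sketched as follows (Python; the function name, the unit-variance rescaling of the t(4) draws, and the independent 5% Bernoulli outlier positions are our own reading of the design, not code from the paper):

```python
import numpy as np

def simulate_shocks(dgp, T=200, sigma=0.007, seed=0):
    """Productivity innovations for the four Monte Carlo designs (sketch)."""
    rng = np.random.default_rng(seed)
    eps = sigma * rng.standard_normal(T)                 # DGP I: Gaussian
    if dgp == 2:                                         # DGP II: Student-t(4)
        eps = sigma * rng.standard_t(df=4, size=T) / np.sqrt(4.0 / (4.0 - 2.0))
    elif dgp == 3:                                       # DGP III: one +5 sd outlier
        eps[T // 2] += 5.0 * sigma
    elif dgp == 4:                                       # DGP IV: up to 5% outliers of each sign
        eps += 3.0 * sigma * (rng.uniform(size=T) < 0.05)
        eps -= 3.0 * sigma * (rng.uniform(size=T) < 0.05)
    return eps
```

The simulated innovations then drive the log-linearized model to produce the observable series used in estimation.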
All the procedures were performed using the software R. For simulations of the log-linearized DGPs, we used the package gEcon version 1.0.2. This package uses the method proposed by [29] to obtain the model solution. For all the moment-based estimations, we used the package gmm version 1.6-2, described in [60], and for both the ML and BI estimations, we used the package gEcon_estimation. The results were generated from 2000 sample replications with 200 observations each, corresponding to 50 quarters of data, as performed by [13]. The initialization of all the estimators was performed using the true values of the parameters so that the estimator performances could be compared.
In the moment-based estimations, we consider both just- and over-identified cases. The moment conditions used in the estimations are presented in Appendix A. For the just-identified case, we used the first five moment conditions—Equations (A1), (A2), and (A6)–(A8). In the over-identified case, we used all seven moment conditions—the previous equations plus Equations (A9) and (A10). The priors used in the Bayesian inference were the following: $\beta \sim \mathrm{Beta}(0.98, 0.01^2)$, $\gamma \sim N(3.2, 0.5^2)$, $b \sim N(3.2, 1^2)$, $\rho \sim N(0.9, 0.05^2)$, and $\sigma \sim \mathrm{InvGamma}(0.007, 0.005^2)$. Finally, lower and upper limits were placed in the estimator algorithms (including BI) according to the model’s constraints, that is, $0 < \beta < 1$, $\gamma > 0$, $b > 0$, $-1 < \rho < 1$, and $\sigma > 0$.
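The Beta and InvGamma priors above appear to be parametrized by mean and standard deviation, as is common in DSGE software; under that assumption (ours, not stated in the text), the shape parameters implied by Beta(0.98, 0.01²) can be recovered as follows (`beta_from_mean_sd` is a hypothetical helper):

```python
def beta_from_mean_sd(mean, sd):
    """Map a (mean, sd) prior specification to Beta(a, b) shape parameters,
    using mean = a/(a+b) and var = ab/((a+b)^2 (a+b+1))."""
    nu = mean * (1.0 - mean) / sd**2 - 1.0   # nu = a + b
    return mean * nu, (1.0 - mean) * nu

a, b = beta_from_mean_sd(0.98, 0.01)   # the prior for beta in our design
```

Here $a \approx 191.1$ and $b \approx 3.9$, so the implied prior mean is exactly 0.98 with standard deviation 0.01.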

4. Results

In this section, we show tables and figures that summarize the main results of this paper. Table 2 presents the correlations between the moment conditions. Note that some correlations are considerably high even between the original moment conditions—the case of conditions $g_1(x_t, \theta)$ and $g_3(x_t, \theta)$, for example. The correlation is greater than 50% between $g_1(x_t, \theta)$ and $g_7(x_t, \theta)$ and between $g_2(x_t, \theta)$ and $g_6(x_t, \theta)$ in all the DGPs. Despite this, the moment conditions appear to have been informative enough to identify the parameters for most estimators. It should be noted that one difficulty arising from the use of artificial moment conditions is the possibility of stochastic singularity between moment conditions, which can have serious implications for the computational implementation.
In Table 3, we show the results of the J, LM, and LR tests for the validity of the moment conditions of the over-identified moment-based estimators. We observe that while the J test rejected the null of correct specification at the 5% significance level in between 26.40% and 33.45% of the replications for the GMM, the same test rejected in almost all the replications for the GEL/GMC estimators. For the latter, the LM and LR tests rejected, at the same significance level, 63.25% to 69.80% and 75.25% to 79.80% of the specifications, respectively. Therefore, even under correct specification, the tests over-rejected the null hypothesis. In an experiment not reported here, increasing the sample from 200 to 2000 observations reduced the 5% rejection ratio of the J test for the GMM to 1.55% under DGP I. That is, the test exhibits a noticeable small-sample performance problem. For the tests based on the GEL/GMC estimators, on the other hand, there was no improvement in performance as the sample size increased.
In econometric analysis, tests such as the likelihood ratio (LR), Lagrange multiplier (LM), and J tests play a central role in evaluating model specification and the validity of moment conditions, particularly in the context of the generalized empirical likelihood (GEL) and generalized method of moments (GMM) estimators. However, these tests often display notable limitations in finite samples, which can lead to an over-rejection of the null hypothesis even when the model is correctly specified.
The J test, widely used to assess the validity of moment conditions, is prone to over-rejecting the null hypothesis under certain conditions. This tendency is often linked to the choice of instruments and the variability inherent in small samples. Over-rejection can lead to misleading conclusions about model misspecification, even when the specified model aligns well with the data. Additionally, in such cases, parameter estimates derived from the GMM may exhibit downward median bias, further complicating statistical inference. Finite-sample studies of the generalized method of moments (GMM) highlight important tradeoffs and challenges. Ref. [61] observed that using short lags for instruments tends to yield nearly asymptotically optimal estimates, while longer lags introduce bias and misleading confidence intervals. He also noted that tests for overidentifying restrictions perform well in small samples but are slightly biased toward accepting the null hypothesis. Ref. [62] found that the J test exhibits minimal size distortion in some cases but is biased toward over-rejection in others, with parameter estimates often showing downward median bias in cases of over-rejection. Ref. [46] emphasized the importance of how moment conditions are weighted, with continuous updating estimators generally showing less bias but sometimes resulting in fat-tailed sample distributions. This affects confidence intervals and the reliability of overidentifying restriction tests.
Similarly, LR and LM tests are sensitive to sample size, with their performance in finite samples often reflecting size distortions. These distortions increase the likelihood of Type I errors, where valid models are incorrectly rejected. Empirical evidence has demonstrated that the sensitivity of these tests to minor deviations in the data or sampling variability is particularly problematic when dealing with small samples, as is often the case in applied econometrics.
The frequent over-rejection of valid models by these tests raises important concerns, as the results may not necessarily indicate genuine model misspecification but rather reflect the limitations of the test methodologies in finite samples. This issue underscores the need for caution in interpreting the outcomes of LR, LM, and J tests, especially in small-sample contexts. To address these challenges, researchers can employ strategies to mitigate the effects of finite-sample limitations. For example, bootstrap methods are often used to generate more accurate critical values that account for the specific characteristics of the sample, reducing the likelihood of over-rejection. Additionally, the careful selection of instruments and the use of alternative testing procedures designed to perform better in small samples can enhance the robustness of inference.
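The bootstrap strategy mentioned above can be sketched as follows (Python; the function name and the simple iid resampling of moment rows are illustrative assumptions on our part; a block bootstrap would be needed for dependent data):

```python
import numpy as np

def bootstrap_j_critical(G, n_boot=999, level=0.95, seed=0):
    """Bootstrap critical value for the J statistic: resample rows of the
    (T x k) moment matrix, recentred at the sample mean so the null of
    valid moments holds in the bootstrap world, then take the level-quantile."""
    rng = np.random.default_rng(seed)
    T = G.shape[0]
    Gc = G - G.mean(axis=0)              # recentred moments
    stats = np.empty(n_boot)
    for i in range(n_boot):
        Gb = Gc[rng.integers(0, T, size=T)]
        gbar = Gb.mean(axis=0)
        S = np.cov(Gb, rowvar=False)
        stats[i] = T * gbar @ np.linalg.solve(S, gbar)
    return float(np.quantile(stats, level))
```

Using such sample-specific critical values in place of the asymptotic chi-square quantile is one way to temper the finite-sample over-rejection discussed above.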
For the parameters, the results are summarized in Table 4, Table 5, Table 6, Table 7 and Table 8 and in Figure 1, Figure 2, Figure 3, Figure 4 and Figure 5. To facilitate the description of the results, each parameter appears in one table and one figure, separately distinguishing the DGP considered. In the tables, divided into just- and over-identified cases (for moment-based methods), we have the mean of estimates, median of estimates, bias given by the difference between mean and true parameter, mean squared error (MSE), and mean absolute error (MAE). The figures show the distributions of the parameters generated by each estimator considering 2000 replications. As a matter of space, the distributions generated by both just- and over-identified moment-based methods were superimposed (continuous black and dotted gray lines, respectively). We use the optimization method nlminb to implement the GMM and GEL/GMC estimators. For Bayesian inference (BI), the RWMH-MCMC was constructed with a chain size of 5500 (burn-in of 1500) and maximization routine csminwel. Increases in the size of the chain did not lead to an improvement in results.
The smoothed and non-smoothed versions of the GEL/GMC estimators presented very similar performances, making it impossible to differentiate them. Thus, at several points throughout the text, we refer to both versions by adding (S) before the estimator’s name. Another general highlight is that the occurrence of extreme estimates for ETEL, ETHD, and their smoothed versions (Figure 1, Figure 2, Figure 3, Figure 4 and Figure 5) deteriorates the mean, MSE, and MAE of these estimators but not the median, since the median is more robust to the presence of atypical values (Table 4, Table 5, Table 6, Table 7 and Table 8). This behavior was present in the estimation of all the parameters, although it appeared in different ways: estimates at the limit of the parametric space (parameter β), estimates far from the true values (γ and σ), and both problems (b and ρ). The performance problem of these estimators could not be overcome by using other optimization methods available in the gmm package.
The parameters β, γ, and b were those for which the estimators analyzed in this paper returned estimates closest to the true values, except for (i) ETEL, ETHD, and their smoothed versions, for the reason highlighted in the previous paragraph; (ii) the GMM, which, even considering the median, obtained inferior performance for γ and b in the over-identified case; and (iii) ML in the case of γ. In Figure 2 and Figure 3, we see that the γ and b distributions generated by the over-identified GMM were right-skewed and widely dispersed, corroborating the poor performance reflected in the statistics in Table 5 and Table 6. The GMM’s estimation of the parameters ρ and σ was poor in both the just- and over-identified cases.
In general, the parameters ρ and σ were the most difficult to estimate, and the good performance of the (S)EL in obtaining accurate estimates for both should be highlighted. In the case of ρ, a problem with other estimators may derive from the true value (0.9) being close to 1, the upper limit of the parametric space. Most of the σ estimates were biased downward, moving closer to the true value under misspecification, something expected given that the inclusion of extreme values increases data variability. Thus, this result should be treated as a coincidence, not a sign of robustness.
Specifically for parameter β, different experiments generated similar results, and the addition of deviations from a normal distribution (of the productivity shock) led to increases in the MSE and MAE of the estimators—most notably in the case of multiple outliers (DGP IV) and less so in the case of a single outlier (DGP III) (Table 4). The lowest MSEs were recorded by BI, the GMM, (S)CUE, and (S)EL, while the lowest MAEs were for BI and (S)EL. The problem with (S)ETEL and (S)ETHD was due to a considerable probability mass close to zero, the lower limit of the parametric space (Figure 1). Even under correct specification (DGP I), ML presented poorer performance than the GMM, GEL/GMC, and BI due to the concentration of estimates around 0.9, which may be associated with the existence of a local maximum in the objective function of this estimator.
The estimation of γ is usually the most difficult since this parameter controls the curvature of the utility function, the primary source of nonlinearity in the model, as pointed out by [7,12]. For this parameter, ML and the over-identified GMM presented the worst performance in terms of bias, MSE, and MAE (Table 5). On the other hand, the just-identified GMM presented good performance, with a low MSE and MAE. (S)EL, (S)CUE, and (S)ET delivered the best results in terms of bias, MSE, and MAE, while (S)ETEL and (S)ETHD were among the worst; estimates of the latter concentrated around the true value (1.8) and 0.75. ML suffered a similar problem, with estimates concentrated around 1.8 and 30, the latter quite far from the true value, delivering a greater bias, MSE, and MAE.
(S)EL and ML obtained good performance in estimating parameter b in all the DGPs considered. A clear improvement in the (S)EL was observed from the just-identified to the over-identified case. This improvement did not occur with the (S)CUE, whose performance eventually worsened, but not enough for it to leave the list of estimators with the best results (Table 6). The just-identified GMM also generated reasonable estimates compared to the others, especially for data generated by DGP I, DGP III, and DGP II, in this order. Even with an informative prior, BI delivered less accurate estimates for parameter b than ML in terms of bias, MSE, and MAE, occupying an intermediate position in the performance ranking for this parameter. The over-identified GMM delivered the worst results, while the results of (S)ETEL and (S)ETHD were compromised by atypical estimates (Figure 3).
The results for ρ presented remarkable peculiarities because almost all the estimators provided estimates concentrated around 0.9 (the true value) and 1 (Figure 4). ET, CUE, ETEL, HD, ETHD, and their smoothed versions (two-thirds of the analyzed estimators) presented this peculiarity. Despite not sharing this behavior, the over-identified GMM obtained the largest bias; likewise, the just-identified GMM obtained one of the largest biases, with mean and median values close to one in both cases. The main positive performances were registered by EL and SEL, which delivered estimates concentrated only around the true value and the lowest MSE and MAE among all the estimators (Table 7). Despite reporting values close to the true one, ML generated a reasonably wide distribution around this value. BI, in turn, despite delivering a good MSE, returned more biased estimates than ML, and its MAE was at least twice that of the (S)EL in all the analyzed cases, even with an informative prior.
We impose a positivity restriction on σ since this parameter appears squared in the moment conditions, allowing for two solutions: the true value 0.007 and its opposite, −0.007. Such a feature helps explain why (S)ET, (S)CUE, (S)HD, (S)ETEL, (S)ETHD, and, to a lesser extent, the GMM reported values close to zero (Figure 5). In a sense, the maximization algorithm may have “looked” toward the other admissible value (−0.007) and was prevented from reaching it by the lower limit imposed. Several GEL/GMC estimators delivered atypical results due to estimates much higher than the true value in some replications. The ML distribution concentrated around 0.006 and 0.01 in the estimation using data from DGP I and around 0.008 and 0.013 in the case of DGP IV. In the correctly specified case (DGP I) and the case of contamination by a single sample-centered outlier (DGP III), BI delivered one of the best results in terms of bias, MSE, and MAE, followed closely by (S)EL (Table 8). In the misspecified estimations using DGP II and DGP IV, the (S)EL estimator tended to outperform BI in terms of bias, MSE, and MAE. Overall, (S)EL can be considered more robust than BI in the estimation of σ, since it delivered more accurate estimates with a smaller MSE and MAE.
In general, the (S)EL estimator, in both just- and over-identified cases, and BI obtained the best performances in terms of bias, MSE, and MAE, in both correctly specified and misspecified models. Good performance of the EL in a correctly specified model was expected since it has good asymptotic properties in this situation. Because it incorporates prior information, BI generates good results, mainly due to the good configuration of the prior distribution—which was the case since both the prior and initialization of all the estimators were defined from the true values of the parameters. In addition, it is worth noting that, despite the intermediate comparative performance, the performances of the (S)CUE, (S)HD, and (S)ET estimators were shown to be good tools for DSGE estimation.
The GMM, mainly in the over-identified case, and ML presented performances considerably below a good part of their competitors due to inaccurate estimates. The non-zero empirical moment conditions of the over-identified GMM tend to compromise its performance when the data are exposed to disturbances. Along this line, the GEL/GMC estimators have the advantage of always satisfying their constraints, because their implicit probability weighting acts directly on the observations rather than on the moment conditions, as in the GMM. The ML estimator delivered estimates concentrated at points other than the true values, which may be associated with the presence of a local maximum in its objective function, showing that, despite using full information, this method can be reasonably unstable.
The inferior results of the GMM can be explained by the fact that the generalized empirical likelihood and generalized minimum contrast methods, along with their smoothed variants, offer significant advantages over GMM estimators. One key advantage is that GEL methods do not require the estimation of a weight matrix, which is essential in the over-identified GMM. Estimating this matrix can be computationally complex and may introduce additional errors, whereas GEL inherently incorporates the moment conditions without relying on such a step, simplifying the process and reducing potential inaccuracies. Smoothed versions of GEL, such as the smoothed empirical likelihood (SEL), further improve estimation by achieving higher-order accuracy in finite samples. Furthermore, GEL methods are nonparametric, making them more flexible and requiring fewer assumptions about the underlying distribution of errors.
The finite-sample properties of the generalized method of moments (GMM) and generalized empirical likelihood (GEL) are further influenced by the methodology used to estimate the GMM parameters, particularly when employing two-step and iterated GMM approaches. These methodologies introduce additional sources of bias and inefficiency, which can significantly impact the reliability of GMM estimators in finite samples.
In the two-step GMM, the estimation process is divided into two stages. The first step involves obtaining an initial estimate of the parameters, often using an identity weighting matrix or some simple approximation. This preliminary estimate is then used to calculate the optimal weighting matrix in the second stage, which minimizes the asymptotic variance of the estimator. While this approach is asymptotically efficient in theory, in finite samples the second-stage weighting matrix depends on the first-stage estimates, introducing a form of feedback bias. This bias arises because the variability in the first-stage estimates propagates into the second stage, amplifying the estimation error. The magnitude of this bias grows with the number of moment conditions, as the estimation of the weight matrix becomes less stable when the dimensionality of the moment conditions is high relative to the sample size.
The iterated GMM aims to mitigate this issue by repeatedly updating the parameter estimates and the weighting matrix until convergence. This iterative procedure improves asymptotic efficiency and reduces the dependence on the initial choice of the weighting matrix. However, in finite samples, the iterated GMM does not fully eliminate the bias introduced by the staged estimation process. The iterated updates can exacerbate finite-sample sensitivity, particularly when the moment conditions are weakly informative or when the sample size is small relative to the number of moments. Furthermore, the iterated GMM can become computationally burdensome, and convergence to a global optimum is not guaranteed in nonlinear settings, adding further practical limitations.
In contrast, generalized empirical likelihood (GEL) methods, such as empirical likelihood (EL), exponential tilting (ET), and continuous updating estimator (CUE), avoid these staged estimation processes entirely. GEL constructs its objective function directly based on the likelihood principle, which ensures that the weighting of the moment conditions is determined endogenously. This approach eliminates the need for pre-estimating a weighting matrix, thereby avoiding the feedback bias inherent in the two-step and iterated GMM. GEL’s continuous updating structure is particularly advantageous in finite samples, as it incorporates all the moment conditions simultaneously without relying on intermediate steps or iterative updates. The CUE, a variant of GEL, further optimizes the empirical likelihood function directly over both the parameters and the moment conditions, offering an efficient and bias-robust alternative to the iterated GMM.
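The contrast between the staged and continuous updating objectives can be sketched as follows (Python; the function names and the toy moment function in the usage note are our own illustration, not code from the paper):

```python
import numpy as np

def two_step_gmm_obj(theta, g, W):
    """Second-step GMM objective with a FIXED first-stage weight matrix W:
    any error in W propagates into the parameter estimate."""
    gbar = g(theta).mean(axis=0)
    return float(gbar @ W @ gbar)

def cue_obj(theta, g):
    """Continuous updating objective: the weight matrix S(theta)^{-1} is
    re-evaluated at every theta, so no staged estimation is required."""
    G = g(theta)
    gbar = G.mean(axis=0)
    S = np.cov(G, rowvar=False)
    return float(gbar @ np.linalg.solve(S, gbar))
```

For example, with over-identifying moments $g(\theta) = [x - \theta,\; (x - \theta)^2 - 1]$ for data $x \sim N(\theta_0, 1)$, minimizing `cue_obj` over θ yields the CUE without any preliminary weight-matrix step, while `two_step_gmm_obj` inherits whatever error the first stage left in `W`.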
The smoothing techniques applied in GEL also contribute to improved numerical stability, especially in situations where the moment functions are non-smooth or involve discontinuities. This is especially useful in complex models or irregular data structures, where optimization using traditional methods can be challenging. By addressing these issues, GEL and its smoothed variants provide a reliable and efficient framework for estimation, outperforming the GMM in terms of robustness, flexibility, and finite-sample accuracy.
The performance of ET- and HD-based estimators—(S)ETEL and (S)ETHD—was strongly compromised by atypical estimates in some replications. These estimators could deliver good results mainly when comparing the performance of different estimators under specification problems—especially the ETHD estimator that presents good theoretical properties under both correct specification and misspecification. However, it should be emphasized that other studies that consider other sources of misspecification must be made. In addition, other analyses that consider different initialization can be made to analyze the performance of these estimators in other situations. It would be interesting to analyze the choice of optimization methods used in the implementation of estimators, something outside the scope of this paper.
The exponentially tilted empirical likelihood (ETEL) estimator combines the strengths of empirical likelihood (EL) and exponential tilting (ET) to achieve desirable properties under both correct and incorrect model specifications. Ref. [47] demonstrated that ETEL maintains the low bias of EL in correctly specified models while avoiding EL’s issues under misspecification. However, its finite-sample performance can be significantly compromised by atypical estimates caused by extreme values. This behavior often results in estimates clustering at the boundaries of the parametric space or diverging substantially from true values, highlighting its lack of robustness to outliers. Small sample sizes exacerbate this issue, increasing estimator variance and limiting its reliability, which underscores the need for regularization techniques to mitigate these limitations.
Similarly, the exponentially tilted Hellinger distance (ETHD) estimator was developed to address the lack of robustness of traditional Hellinger distance (HD) estimators under global model misspecification. Ref. [3] showed that while HD estimators are efficient under correct specification, they fail to maintain $\sqrt{n}$-consistency when the model is globally misspecified. ETHD integrates ET to enhance robustness against such misspecifications; however, its finite-sample performance is similarly vulnerable to extreme values, leading to parameter estimates that deviate significantly from the true values. Additionally, ETHD’s sensitivity to tuning parameters and data characteristics, such as contamination and the underlying distribution of the estimating functions, further challenges its practical application.
One of the difficulties related to using moment-based methods, as highlighted by [7], is obtaining a sufficient number of moment conditions for estimating DSGE models with many parameters. Resorting to the definition of artificial moment conditions can generate problems in estimation, making it difficult to use those methods. Another limitation of moment-based methods is the impossibility of recovering latent variables as performed in state-space representation used by ML and BI. However, some GEL/GMC estimators can obtain good results even in situations of misspecification, as found in this paper.
Bayesian inference is a powerful tool for estimating dynamic stochastic general equilibrium (DSGE) models, offering a structured approach to incorporate prior knowledge and quantify parameter uncertainty. However, its effectiveness is closely tied to the specification of prior distributions. In situations where the available data are not sufficiently informative, the choice of priors can significantly influence the posterior distributions, potentially leading to results that reflect the imposed priors more than the underlying data. This issue is particularly pronounced in small-sample contexts, where limited data exacerbate the dominance of prior assumptions. Consequently, the reliability of the estimation outcomes may be compromised, as the posterior inferences may be more indicative of the prior configurations than the empirical evidence. Therefore, careful consideration and sensitivity analysis of prior choices are essential to ensure robust and credible parameter estimates in Bayesian estimation of DSGE models.
About BI, despite the advantages of using prior information already mentioned in [13], it should be emphasized that if the data are not informative enough, the prior configuration can largely dominate the posterior distribution. Thus, sensitivity tests involving prior distribution are fundamental to guarantee the robustness of results. In addition, another disadvantage of BI is the time required to obtain the posterior distribution, which is usually considered high.

5. Conclusions

The estimation of dynamic stochastic general equilibrium (DSGE) models involves a tradeoff between computational efficiency and the thoroughness of parameter inference. Generalized empirical likelihood (GEL) and generalized method of moments (GMM) estimators are often favored for their computational efficiency. These methods rely on moment conditions derived from the model, allowing for parameter estimation without the need to solve the full likelihood function, which simplifies computations and reduces processing time. However, this efficiency may come at the cost of statistical efficiency, particularly in small samples, where GEL and GMM estimators can exhibit higher variance compared to maximum likelihood (ML) estimators.
In contrast, methods like maximum likelihood (ML) and Bayesian inference (BI) provide a more comprehensive framework for parameter estimation by utilizing the full likelihood function. Bayesian estimation, in particular, offers a systematic approach to incorporate prior information and quantify parameter uncertainty through posterior distributions. This process typically involves Markov chain Monte Carlo (MCMC) methods, which, while powerful, are computationally intensive. The need to sample from complex, high-dimensional posterior distributions can lead to substantial computational times, posing challenges for large-scale models or real-time policy analysis.
To evaluate the performance of different moment-based estimators belonging to generalized empirical likelihood (GEL) and generalized minimum contrast (GMC) families in the estimation of DSGE models, we performed a Monte Carlo analysis considering different data-generating processes to verify the results under both correct and incorrect specifications. As a benchmark, we consider the generalized method of moments (GMM), maximum likelihood (ML), and Bayesian (BI) estimators.
The main results were the following: (i) the just- and over-identified empirical likelihood (EL) estimator, as well as its smoothed version (SEL), and Bayesian inference (BI) obtained, in that order, the best performances in terms of bias, MSE, and MAE, both when the estimated model is correctly specified and when it suffers from specification problems; (ii) continuous updating empirical likelihood (CUE), minimum Hellinger distance (HD), exponential tilting (ET), and their smoothed versions presented intermediate comparative performance; (iii) the performance of exponentially tilted empirical likelihood (ETEL), exponential tilting Hellinger distance (ETHD), and their smoothed versions was strongly compromised by atypical estimates (far from the true values) in some replications; (iv) smoothed and non-smoothed versions of the GEL/GMC estimators showed very similar performances, making it difficult to distinguish between them; and (v) the GMM estimator, especially in the over-identified case, and the ML estimator presented poor performances due to inaccurate estimates.
In general, the performance of some GEL/GMC estimators, specifically EL and its version with smoothed moment conditions, was comparable (and in some cases even superior) to that of the Bayesian estimator and superior to the GMM and ML estimators. We emphasize that EL delivering estimates as good as those of the Bayesian method is an outstanding result, because the latter has the advantage of incorporating prior information. However, difficulties associated with defining a sufficient number of informative moment conditions may make the use of GEL/GMC (and GMM) estimators unfeasible. Since GEL/GMC estimators always satisfy their restrictions, they are considerably more advantageous than the over-identified GMM. On the other hand, Bayesian inference has drawbacks such as the prior distribution dominating the final result in small samples and the long time required for estimation. Thus, GEL/GMC estimators can be good tools for DSGE estimation, given their robustness in the misspecification context and their easy and fast computational implementation.
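The EL estimator admits a saddle-point (dual) representation, $\hat{\theta} = \arg\min_\theta \max_\lambda \sum_i \log(1 + \lambda' g_i(\theta))$, which avoids optimizing directly over the $n$ implied probabilities. The sketch below applies it to the simplest possible moment condition, $g_i(\theta) = x_i - \theta$, solving the inner problem by bisection; this is a textbook illustration on assumed simulated data, not the DSGE implementation used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(2.0, 1.0, size=300)        # simulated data, true mean 2

def el_profile(theta):
    """Profile criterion R(theta) = max_lambda sum_i log(1 + lambda * g_i(theta))."""
    g = x - theta                          # scalar moment condition
    if g.min() >= 0 or g.max() <= 0:       # zero outside convex hull: infeasible theta
        return np.inf
    # lambda must keep 1 + lambda * g_i > 0 for all i
    lo, hi = -1.0 / g.max() + 1e-8, -1.0 / g.min() - 1e-8
    lam = 0.0
    for _ in range(100):                   # bisection on the concave inner problem's FOC
        lam = 0.5 * (lo + hi)
        if np.sum(g / (1.0 + lam * g)) > 0:
            lo = lam
        else:
            hi = lam
    return np.sum(np.log1p(lam * g))

# Outer step: minimize the profile over a grid of candidate means
grid = np.linspace(1.0, 3.0, 401)
theta_el = grid[np.argmin([el_profile(t) for t in grid])]
```

For this just-identified problem the EL estimate coincides with the sample mean; the reweighting of the data through the implied probabilities only becomes binding under over-identification.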
The estimation of parameters using methods such as the generalized method of moments (GMM), continuous updating generalized empirical likelihood (GEL), and generalized minimum contrast estimators (GMC) is often challenged by numerical and optimization instabilities. These challenges primarily arise from the complexity of the optimization landscapes, sensitivity to initial conditions, and the potential presence of multiple local minima. Addressing these issues is critical for enhancing the reliability and robustness of these estimators.
In the GMM, numerical instabilities frequently stem from the estimation of the optimal weighting matrix. When the number of moment conditions is large relative to the sample size, the sample covariance matrix of the moments can become ill-conditioned, leading to unreliable parameter estimates. Two-step and iterated GMM procedures exacerbate these issues, as the feedback loop between the initial parameter estimates and the subsequent weighting matrices amplifies estimation errors. Furthermore, convergence to suboptimal solutions is a recurring problem, particularly in high-dimensional settings or when the moment conditions are weak.
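The ill-conditioning described above is easy to reproduce. In the sketch below, assuming illustrative polynomial-type moments, the sample covariance matrix of the moments degrades rapidly as the number of moments grows relative to the sample size:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40                                      # small sample
x = rng.standard_normal(n)

def moment_cov(m):
    # Centered polynomial-type moments g_i = (x_i, x_i^2, ..., x_i^m),
    # whose sample covariance feeds the GMM weighting matrix
    G = np.column_stack([x**k for k in range(1, m + 1)])
    G = G - G.mean(axis=0)
    return G.T @ G / n

cond_few = np.linalg.cond(moment_cov(2))    # few moments: well conditioned
cond_many = np.linalg.cond(moment_cov(12))  # many moments vs. n: near singular
```

Inverting the near-singular matrix amplifies sampling noise, which is one channel through which two-step GMM estimates become unreliable.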
GEL methods, such as the continuous updating estimator (CUE), are designed to address some of the shortcomings of the GMM by jointly estimating parameters and the weighting matrix. However, GEL is not immune to optimization difficulties. These methods often involve solving non-convex optimization problems that are highly sensitive to initial conditions, which increases the risk of converging to local optima. The computational complexity of GEL grows with the dimensionality of the parameter space and the number of moment conditions, further complicating its practical application in large-scale problems.
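A minimal sketch of the CUE logic, assuming a toy over-identified problem (one parameter, two moment conditions): unlike two-step GMM, the weighting matrix is recomputed at every candidate parameter value.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(400)              # true mean 0, unit variance

def cue_objective(theta):
    # Over-identified toy problem: two moments for one parameter,
    # g1 = x - theta and g2 = (x - theta)^2 - 1 (unit variance imposed)
    G = np.column_stack([x - theta, (x - theta)**2 - 1.0])
    gbar = G.mean(axis=0)
    Gc = G - gbar
    S = Gc.T @ Gc / x.size                # weighting matrix recomputed at this theta
    return gbar @ np.linalg.solve(S, gbar)

grid = np.linspace(-1.0, 1.0, 401)
theta_cue = grid[np.argmin([cue_objective(t) for t in grid])]
```

Because $S(\theta)$ moves with $\theta$, the objective surface can be non-convex; the grid search above sidesteps the sensitivity to starting values only because the problem is one-dimensional.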
In the case of GMC, which minimizes a contrast function between the empirical and model-implied distributions, the choice of contrast function plays a crucial role. Poorly chosen contrast functions can lead to optimization surfaces that are flat or irregular, making it difficult for numerical algorithms to converge reliably. The estimation of implied probabilities adds another layer of complexity, as these probabilities often require iterative numerical procedures that can be prone to convergence issues, particularly when starting values are poorly selected.
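For intuition on implied probabilities, the sketch below computes exponential-tilting (ET) implied probabilities, $p_i \propto \exp(\lambda g_i(\theta))$ with $\lambda$ chosen so that the reweighted moment is exactly zero, for an assumed scalar moment condition; the data and candidate parameter are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)
x = rng.normal(1.0, 1.0, size=200)
theta = 1.2                               # candidate value, away from the sample mean

g = x - theta                             # assumed scalar moment condition
lo, hi = -20.0, 20.0
lam = 0.0
for _ in range(150):                      # bisection: sum_i p_i g_i is increasing in lam
    lam = 0.5 * (lo + hi)
    p = np.exp(lam * g)
    p /= p.sum()
    if np.sum(p * g) < 0:
        lo = lam
    else:
        hi = lam

p = np.exp(lam * g)
p /= p.sum()                              # ET implied probabilities at theta
```

The iteration is trivially stable here, but with vector moments the tilting parameter is found by a multivariate solver, and the convergence problems mentioned above appear when that inner solver is started far from the solution.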
The accuracy of parameter estimation in these methods depends heavily on the specification of moment conditions and the correct computation of implied probabilities. Misspecified moment conditions result in biased or inconsistent estimators, while errors in computing implied probabilities can further distort parameter estimates. In particular, the iterative procedures used in GEL and GMC for computing implied probabilities may fail to converge in high-dimensional parameter spaces, complicating the estimation process.
To address these challenges, researchers have proposed various strategies. Regularization techniques, for example, can stabilize the estimation of the weighting matrix in the GMM, especially when dealing with a large number of moment conditions [63]. By incorporating penalty terms into the optimization objective, regularization reduces overfitting and enhances numerical stability. Robust optimization algorithms [64], including quasi-Newton methods, simulated annealing, and adaptive learning rate techniques, are also beneficial for GEL and GMC methods, as they mitigate sensitivity to initial conditions and improve the likelihood of converging to a global optimum. The careful selection of instruments, particularly in the GMM, enhances efficiency and stability by reducing the dimensionality of the problem and mitigating weak identification issues. Additionally, the adoption of advanced numerical methods and high-performance computing strategies, such as parallel processing, can alleviate the computational burden associated with these estimation procedures.
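A ridge-type regularization of the weighting-matrix input can be sketched as follows; the shrinkage scale $\tau$ below is an illustrative choice, not a recommendation from [63].

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 50, 20
# Twenty moment series driven by only five factors: nearly collinear moments
G = rng.standard_normal((n, 5)) @ rng.standard_normal((5, m))
G += 0.01 * rng.standard_normal((n, m))            # small idiosyncratic noise
Gc = G - G.mean(axis=0)
S = Gc.T @ Gc / n                                  # near-singular moment covariance

tau = 0.1 * np.trace(S) / m                        # illustrative ridge scale
S_reg = S + tau * np.eye(m)                        # regularized version

cond_raw = np.linalg.cond(S)
cond_reg = np.linalg.cond(S_reg)
```

The penalty floors the small eigenvalues, so inverting `S_reg` no longer amplifies the noise directions, at the cost of a small bias in the weighting.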
Moment-based estimators can be valuable tools for estimating parameters in dynamic stochastic general equilibrium (DSGE) models due to their flexibility and minimal reliance on distributional assumptions. However, these methods have notable limitations, particularly when dealing with the latent variables that are central to DSGE models.
These estimators rely on aligning theoretical model-implied moments with empirical data moments, but they do not directly estimate latent variables, such as structural shocks or unobservable state variables. This indirect approach can lead to inefficiencies in parameter estimation, especially in cases where latent variables significantly influence the dynamics of the model. Moreover, the inability to explicitly account for latent components introduces challenges in identifying structural parameters, as the available moment conditions may not fully capture the model’s underlying dynamics. This limitation is exacerbated when moment conditions are misspecified, potentially leading to biased or inconsistent parameter estimates.
To address these challenges, moment-based estimators can be augmented with filtering techniques, such as the Kalman filter or particle filters, which are well suited to handling latent variables. For linear DSGE models with Gaussian assumptions, the Kalman filter can be used to estimate the state-space representation, efficiently handling the unobserved components of the model. By combining moment-based methods with the Kalman filter, researchers can iteratively estimate structural parameters while accounting for the latent states, improving the robustness of the estimation.
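The Kalman recursions can be sketched for an assumed scalar AR(1) state observed with noise; the predict/update steps below are standard, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed scalar state space:
#   z_t = rho * z_{t-1} + w_t,  w_t ~ N(0, q)   (latent state)
#   y_t = z_t + v_t,            v_t ~ N(0, r)   (observation)
rho, q, r, T = 0.95, 0.1**2, 0.5**2, 500
z = np.zeros(T)
for t in range(1, T):
    z[t] = rho * z[t-1] + np.sqrt(q) * rng.standard_normal()
y = z + np.sqrt(r) * rng.standard_normal(T)

# Standard Kalman recursions (predict, then update)
z_filt = np.empty(T)
m_t, P_t = 0.0, 1.0                        # initial state mean and variance
for t in range(T):
    m_pred = rho * m_t                     # predicted state mean
    P_pred = rho**2 * P_t + q              # predicted state variance
    K = P_pred / (P_pred + r)              # Kalman gain
    m_t = m_pred + K * (y[t] - m_pred)     # update with the observation
    P_t = (1.0 - K) * P_pred
    z_filt[t] = m_t

mse_filter = np.mean((z_filt - z)**2)
mse_raw = np.mean((y - z)**2)              # using observations directly
```

The filtered mean tracks the latent state far better than the raw observations, which is exactly the information a moment-based estimator discards when latent variables enter the moment conditions.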
In the case of nonlinear DSGE models or those involving non-Gaussian features, particle filters provide a flexible alternative. These filters approximate the posterior distribution of latent variables through sequential Monte Carlo methods using the estimated fixed parameters. Combining the GMM, GEL and GMC with particle filters allows for a more comprehensive treatment of latent variables, although this approach requires careful attention to computational demands and convergence properties. By integrating moment-based estimators with filtering techniques, it is possible to overcome the limitations of traditional moment-based estimators in the context of DSGE models. This combined approach enhances the ability to estimate structural parameters accurately while accounting for the latent variables that are integral to the model’s structure, leading to more reliable and efficient results.
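A bootstrap particle filter for the same kind of assumed scalar state space can be sketched as follows (fixed parameters, multinomial resampling; all constants illustrative). The same propagate/weight/resample loop applies unchanged when the transition or measurement equations are nonlinear or non-Gaussian.

```python
import numpy as np

rng = np.random.default_rng(6)

# Assumed scalar state space, simulated as ground truth
rho, q, r, T, N = 0.9, 0.2**2, 0.5**2, 300, 2000
z = np.zeros(T)
for t in range(1, T):
    z[t] = rho * z[t-1] + np.sqrt(q) * rng.standard_normal()
y = z + np.sqrt(r) * rng.standard_normal(T)

particles = rng.standard_normal(N)         # initial particle cloud
z_pf = np.empty(T)
for t in range(T):
    # propagate through the transition equation at fixed parameters
    particles = rho * particles + np.sqrt(q) * rng.standard_normal(N)
    # weight by the measurement density and normalize
    w = np.exp(-0.5 * (y[t] - particles)**2 / r)
    w /= w.sum()
    z_pf[t] = np.sum(w * particles)        # filtered mean
    # multinomial resampling to fight weight degeneracy
    particles = particles[rng.choice(N, size=N, p=w)]

mse_pf = np.mean((z_pf - z)**2)
mse_raw = np.mean((y - z)**2)
```

With enough particles the filter approximates the optimal (here, Kalman) solution; the cost grows with the particle count and the state dimension, which is the computational demand noted above.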

Author Contributions

Conceptualization, G.B. and M.P.L.; methodology, G.B. and M.P.L.; software, G.B. and M.P.L.; validation, G.B. and M.P.L.; formal analysis, G.B. and M.P.L.; writing—original draft preparation, G.B. and M.P.L.; writing—review and editing, G.B. and M.P.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by CAPES, CNPq (310646/2021-9) and FAPESP (2023/02538-0).

Data Availability Statement

The data used were simulated in a Monte Carlo study.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Moment Conditions

From the first-order conditions (5) and (6), we derive the first two moment conditions:
$$g_1(x_t, \theta) \equiv b - w_t c_t^{-\gamma} \tag{A1}$$
$$g_2(x_t, \theta) \equiv \beta \left( 1 - \delta + r_{t+1} \right) \left( \frac{c_{t+1}}{c_t} \right)^{-\gamma} - 1. \tag{A2}$$
Applying the expectation operator to the technology level Equation (2), we obtain
$$\mathbb{E}\left[ \ln z_t - \rho \ln z_{t-1} \right] = 0 \tag{A3}$$
and, using $\varepsilon_t \sim (0, \sigma^2)$, we know that
$$\mathbb{E}\left[ (\ln z_t - \rho \ln z_{t-1})^2 - \sigma^2 \right] = 0. \tag{A4}$$
Although $z_t$ is an unobservable variable, we can use the production function defined in (1) to express the technology level as a function of output $y_t$, capital stock $k_t$, and hours worked $n_t$:
$$y_t = z_t k_t^{\alpha} n_t^{1-\alpha} \implies z_t = y_t k_t^{-\alpha} n_t^{-(1-\alpha)} \implies \ln z_t = \ln y_t - \alpha \ln k_t - (1-\alpha) \ln n_t. \tag{A5}$$
We define two new moment conditions by combining (A3) and (A4) with (A5):
$$g_3(x_t, \theta) \equiv \ln y_t - \rho \ln y_{t-1} - \alpha \left( \ln k_t - \rho \ln k_{t-1} \right) - (1-\alpha)\left( \ln n_t - \rho \ln n_{t-1} \right) \tag{A6}$$
$$g_4(x_t, \theta) \equiv \left[ \ln y_t - \rho \ln y_{t-1} - \alpha \left( \ln k_t - \rho \ln k_{t-1} \right) - (1-\alpha)\left( \ln n_t - \rho \ln n_{t-1} \right) \right]^2 - \sigma^2. \tag{A7}$$
Since parameter $\alpha$ is calibrated, moment conditions (A6) and (A7) identify $\rho$ and $\sigma$. However, with four moment conditions generated from model equations, we are not yet able to identify $\beta$, $\gamma$, and $b$. In this way, we resort to the definition of artificial moment conditions, as in [7]. Using the original moment conditions (A1) and (A2), we write the following three artificial moment conditions:
$$g_5(x_t, \theta) \equiv \left( g_1(x_t, \theta) \right)^2 \odot \left( g_2(x_t, \theta) \right)^2 \tag{A8}$$
$$g_6(x_t, \theta) \equiv \left( g_1(x_t, \theta) \right)^2 \odot g_2(x_t, \theta) \tag{A9}$$
$$g_7(x_t, \theta) \equiv g_1(x_t, \theta) \odot \left( g_2(x_t, \theta) \right)^2, \tag{A10}$$
where $\odot$ denotes the Hadamard (element-wise) product.
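Assuming illustrative positive series for the observables (they merely stand in for $w_t$, $c_t$, $r_t$, $y_t$, $k_t$, $n_t$ and are not model-generated data), the seven moment conditions can be evaluated as in the following sketch:

```python
import numpy as np

rng = np.random.default_rng(10)
T = 100
# Hypothetical positive series standing in for wages, consumption, the
# interest rate, output, capital, and hours (not model-generated data)
w = np.exp(rng.normal(0.0, 0.1, T))
c = np.exp(rng.normal(0.0, 0.1, T))
r = 0.03 + 0.005 * rng.random(T)
y = np.exp(rng.normal(0.0, 0.1, T))
k = np.exp(rng.normal(2.0, 0.1, T))
n_hours = np.exp(rng.normal(-1.0, 0.1, T))

def moment_conditions(theta, alpha=0.33, delta=0.025):
    """Evaluate g_1, ..., g_7 at each t for parameter vector theta."""
    beta, gamma, b, rho, sigma = theta
    g1 = b - w[:-1] * c[:-1]**(-gamma)                                    # (A1)
    g2 = beta * (1.0 - delta + r[1:]) * (c[1:] / c[:-1])**(-gamma) - 1.0  # (A2)
    lz = np.log(y) - alpha * np.log(k) - (1.0 - alpha) * np.log(n_hours)  # (A5)
    g3 = lz[1:] - rho * lz[:-1]                                           # (A6)
    g4 = g3**2 - sigma**2                                                 # (A7)
    g5 = g1**2 * g2**2        # artificial moments: element-wise products
    g6 = g1**2 * g2
    g7 = g1 * g2**2
    return np.column_stack([g1, g2, g3, g4, g5, g6, g7])

G = moment_conditions((0.98, 1.8, 3.2, 0.9, 0.007))
```

The resulting $T - 1 \times 7$ array is exactly the input that the GMM, GEL, and GMC criteria average over $t$.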

Appendix B. Local and Global Misspecification in Econometric Models

Misspecification in econometric models occurs when the assumed model differs from the true data-generating process (DGP). This discrepancy can arise from incorrect functional forms, omitted variables, improper distributional assumptions, or errors in specifying moment conditions. The impact of misspecification on estimators depends on its nature, which can be broadly classified into local and global misspecification. Understanding these concepts is crucial for evaluating the robustness of estimators, i.e., their ability to deliver reliable results despite such discrepancies.
Local misspecification refers to small, asymptotically negligible deviations from the assumed model. These deviations are often treated as perturbations that do not drastically alter the asymptotic properties of estimators. Formally, if the true moment condition is
$$\mathbb{E}[g(X, \theta)] = 0,$$
then a locally misspecified model assumes
$$\mathbb{E}[g(X, \theta)] = \delta_n,$$
where $\delta_n \to 0$ as the sample size $n \to \infty$. Such deviations vanish asymptotically, allowing estimators to remain consistent under suitable regularity conditions.
Examples of local misspecification include minor inaccuracies in specifying transition probabilities or utility functions in dynamic models. Estimators such as the generalized method of moments (GMM) and generalized empirical likelihood (GEL) methods often exhibit robustness to local misspecification. For instance, smoothing techniques used in GEL estimators can handle perturbations in moment conditions, ensuring consistent and efficient parameter estimates.
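The vanishing effect of local misspecification can be illustrated by letting the drift shrink at rate $1/\sqrt{n}$: the method-of-moments error then disappears as the sample grows. The simulation below is a sketch with illustrative constants.

```python
import numpy as np

rng = np.random.default_rng(7)

# Moment condition E[x - theta] = 0 at theta0, with local drift
# delta_n = c / sqrt(n), so E[g(X, theta0)] = delta_n -> 0 as n grows
theta0, c = 0.0, 5.0

def mean_abs_error(n, reps=200):
    errs = np.empty(reps)
    for i in range(reps):
        x = rng.normal(theta0 + c / np.sqrt(n), 1.0, size=n)
        errs[i] = abs(x.mean() - theta0)     # method-of-moments estimate
    return errs.mean()

err_small = mean_abs_error(25)      # delta_n = 1.0
err_large = mean_abs_error(2500)    # delta_n = 0.1
```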
Global misspecification refers to significant deviations where the assumed model structure fundamentally differs from the true DGP. In this case, the moment conditions are violated, such that
$$\mathbb{E}[g(X, \theta)] = \delta,$$
where $\delta \neq 0$ does not vanish as $n \to \infty$. Global misspecification typically leads to biased and inconsistent estimators because the foundational assumptions required for estimation are invalid.
An example of global misspecification is the application of a linear regression model to data with an inherently nonlinear relationship. Such structural errors require adopting more flexible modeling approaches, such as semiparametric or nonparametric methods, to accurately capture the underlying data dynamics.
The robustness of an estimator refers to its ability to produce reliable and consistent results despite potential model misspecification. Estimators robust to local misspecification maintain their consistency and efficiency despite small deviations in the assumed model. For example, GMM estimators adapt to local perturbations in the moment conditions through their weighting matrix, preserving asymptotic efficiency. GEL and GMC methods, such as the smoothed empirical likelihood (SEL), achieve robustness by incorporating smoothing techniques, which reduce the sensitivity to minor discrepancies in the moment conditions.
Global misspecification poses a greater challenge, as it often results in bias and inconsistency. Estimators robust to global misspecification rely on minimizing strict parametric assumptions. For instance, quasi-maximum likelihood estimators (QMLEs) are robust to certain forms of global misspecification, as they rely on the first two moments of the data rather than full distributional assumptions. Over-identified GMM estimators can detect global misspecification through overidentifying restriction tests. However, if the misspecification persists, the resulting estimates may still be biased. For a detailed discussion of misspecification problems in econometric models and moment-based estimators, see [37,65,66,67]. For an analysis of these issues specifically in the context of rational expectations models, refer to [68].
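The overidentifying-restrictions (J) test mentioned above can be sketched on a toy problem: with valid moments the statistic stays small (approximately $\chi^2$ with one degree of freedom here), while under global misspecification it diverges with the sample size. All moments and data below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

def j_statistic(x):
    # Moments: g1 = x - theta, g2 = (x - theta)^2 - 1 (unit variance imposed);
    # J = n * gbar' S^{-1} gbar at the CUE minimizer (grid search for simplicity)
    def obj(theta):
        G = np.column_stack([x - theta, (x - theta)**2 - 1.0])
        gbar = G.mean(axis=0)
        S = np.cov(G.T)
        return x.size * gbar @ np.linalg.solve(S, gbar)
    grid = np.linspace(x.mean() - 1.0, x.mean() + 1.0, 201)
    return min(obj(t) for t in grid)

x_ok = rng.normal(0.0, 1.0, size=500)    # both moments valid
x_bad = rng.normal(0.0, 2.0, size=500)   # true variance is 4: g2 globally wrong

j_ok, j_bad = j_statistic(x_ok), j_statistic(x_bad)
```

No choice of $\theta$ can set both moment means near zero in the misspecified case, so the quadratic form grows linearly in $n$ and the test rejects.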

References

  1. Maasoumi, E. How to live with misspecification if you must. J. Econom. 1990, 44, 67–86. [Google Scholar] [CrossRef]
  2. Huber, P.J.; Ronchetti, E.M. Robust Statistics; Wiley: Hoboken, NJ, USA, 2009. [Google Scholar]
  3. Antoine, B.; Dovonon, P. Robust estimation with exponentially tilted Hellinger distance. J. Econom. 2021, 224, 330–344. [Google Scholar] [CrossRef]
  4. White, H. Maximum Likelihood Estimation of Misspecified Models. Econometrica 1982, 50, 1–25. [Google Scholar] [CrossRef]
  5. Newey, W.K. Maximum Likelihood Specification Testing and Conditional Moment Tests. Econometrica 1985, 53, 1047–1070. [Google Scholar] [CrossRef]
  6. Bera, A.K.; Yoon, M.J. Specification testing with locally misspecified alternatives. Econom. Theory 1993, 9, 649–658. [Google Scholar] [CrossRef]
  7. Riscado, S. On the Estimation of Dynamic Stochastic General Equilibrium Models: An Empirical Likelihood Approach. In DSGE Models in Macroeconomics; Emerald Group Publishing: Bingley, UK, 2012; pp. 387–419. [Google Scholar]
  8. Sargent, T.J. Two models of measurements and the investment accelerator. J. Political Econ. 1989, 97, 251–287. [Google Scholar] [CrossRef]
  9. Christiano, L.J.; Eichenbaum, M. Current real-business-cycle theories and aggregate labor-market fluctuations. Am. Econ. Rev. 1992, 82, 430–450. [Google Scholar]
  10. Burnside, C.; Eichenbaum, M.; Rebelo, S. Labor Hoarding and the Business Cycle. J. Political Econ. 1993, 101, 245–273. [Google Scholar] [CrossRef]
  11. Braun, R.A. Tax disturbances and real economic activity in the postwar United States. J. Monet. Econ. 1994, 33, 441–462. [Google Scholar] [CrossRef]
  12. Ruge-Murcia, F.J. Generalized Method of Moments estimation of DSGE models. In Handbook of Research Methods and Applications in Empirical Macroeconomics; Edward Elgar Publishing: Northampton, UK, 2013; pp. 464–485. [Google Scholar]
  13. Ruge-Murcia, F.J. Methods to estimate dynamic stochastic general equilibrium models. J. Econ. Dyn. Control 2007, 31, 2599–2636. [Google Scholar] [CrossRef]
  14. Bera, A.K.; Bilias, Y. The MM, ME, ML, EL, EF and GMM approaches to estimation: A synthesis. J. Econom. 2002, 107, 51–86. [Google Scholar] [CrossRef]
  15. Golan, A. Information and Entropy Econometrics—Editor’s View. J. Econom. 2002, 107, 1–15. [Google Scholar] [CrossRef]
  16. Wen, K.; Wu, X. Generalized Empirical Likelihood-Based Kernel Estimation of Spatially Similar Densities. In Advances in Info-Metrics: Information and Information Processing across Disciplines; Oxford University Press: Oxford, UK, 2020. [Google Scholar]
  17. Haley, M.R.; McGee, M.K. “KLICing” there and back again: Portfolio selection using the empirical likelihood divergence and Hellinger distance. J. Empir. Financ. 2011, 18, 341–352. [Google Scholar] [CrossRef]
  18. Almeida, C.; Garcia, R. Assessing misspecified asset pricing models with empirical likelihood estimators. J. Econom. 2012, 170, 519–537. [Google Scholar] [CrossRef]
  19. Laurini, M.P.; Hotta, L.K. Generalized moment estimation of stochastic differential equations. Comput. Stat. 2016, 31, 1169–1202. [Google Scholar] [CrossRef]
  20. Zhong, X.; Cao, J.; Jin, Y.; Zheng, W. On empirical likelihood option pricing. J. Risk 2017, 19, 41–53. [Google Scholar] [CrossRef]
  21. Yan, Z.; Zhang, J. Adjusted empirical likelihood for value at risk and expected shortfall. Commun.-Stat.-Theory Methods 2017, 46, 2580–2591. [Google Scholar] [CrossRef]
  22. Post, T.; Karabatı, S.; Arvanitis, S. Portfolio optimization based on stochastic dominance and empirical likelihood. J. Econom. 2018, 206, 167–186. [Google Scholar] [CrossRef]
  23. Vigo-Pereira, C.; Laurini, M. Portfolio Efficiency Tests with Conditioning Information—Comparing GMM and GEL Estimators. Entropy 2022, 24, 1705. [Google Scholar] [CrossRef] [PubMed]
  24. Fu, Y.; Wang, H.; Wong, A. Adjusted Empirical Likelihood Method in the Presence of Nuisance Parameters with Application to the Sharpe Ratio. Entropy 2018, 20, 316. [Google Scholar] [CrossRef]
  25. Perevodchikov, E.V.; Marsh, T.L.; Mittelhammer, R.C. Information Theory Estimators for the First-Order Spatial Autoregressive Model. Entropy 2012, 14, 1165–1185. [Google Scholar] [CrossRef]
  26. Hansen, G.D. Indivisible labor and the business cycle. J. Monet. Econ. 1985, 16, 309–327. [Google Scholar] [CrossRef]
  27. McCandless, G. The ABCs of RBCs; Harvard University Press: Cambridge, MA, USA; London, UK, 2008. [Google Scholar]
  28. Blanchard, O.J.; Kahn, C.M. The Solution of Linear Difference Models Under Rational Expectations. Econometrica 1980, 48, 1305–1311. [Google Scholar] [CrossRef]
  29. Sims, C.A. Solving linear rational expectations models. Comput. Econ. 2002, 20, 1–20. [Google Scholar] [CrossRef]
  30. Canova, F. Methods for Applied Macroeconomic Research; Princeton University Press: Princeton, NJ, USA, 2011. [Google Scholar]
  31. DeJong, D.N.; Dave, C. Structural Macroeconometrics; Princeton University Press: Princeton, NJ, USA, 2011. [Google Scholar]
  32. Hansen, L.P. Large sample properties of generalized method of moments estimators. Econometrica 1982, 50, 1029–1054. [Google Scholar] [CrossRef]
  33. Hamilton, J.D. Time Series Analysis; Princeton University Press: Princeton, NJ, USA, 1994. [Google Scholar]
  34. Newey, W.K.; West, K.D. Hypothesis testing with efficient method of moments estimation. Int. Econ. Rev. 1987, 28, 777–787. [Google Scholar] [CrossRef]
  35. Andrews, D.W.K. Heteroskedasticity and autocorrelation consistent covariance matrix estimation. Econometrica 1991, 59, 817–858. [Google Scholar] [CrossRef]
  36. Smith, R.J. Alternative semi-parametric likelihood approaches to generalised method of moments estimation. Econ. J. 1997, 107, 503–519. [Google Scholar] [CrossRef]
  37. Newey, W.K.; Smith, R.J. Higher order properties of GMM and generalized empirical likelihood estimators. Econometrica 2004, 72, 219–255. [Google Scholar] [CrossRef]
  38. Kitamura, Y. Empirical Likelihood Methods in Econometrics. In Advances in Economics and Econometrics: Theory and Applications, Ninth World Congress; Cambridge University Press: Cambridge, UK, 2007; pp. 174–237. [Google Scholar]
  39. Smith, R.J. GEL criteria for moment condition models. Econom. Theory 2011, 27, 1192–1235. [Google Scholar] [CrossRef]
  40. Anatolyev, S.; Gospodinov, N. Methods for Estimation and Inference in Modern Econometrics; CRC Press: Boca Raton, FL, USA, 2011. [Google Scholar]
  41. Owen, A.B. Empirical likelihood ratio confidence intervals for a single functional. Biometrika 1988, 75, 237–249. [Google Scholar] [CrossRef]
  42. Qin, J.; Lawless, J. Empirical likelihood and general estimating equations. Ann. Stat. 1994, 22, 300–325. [Google Scholar] [CrossRef]
  43. Imbens, G.W. One-step estimators for over-identified generalized method of moments models. Rev. Econ. Stud. 1997, 64, 359–383. [Google Scholar] [CrossRef]
  44. Kitamura, Y.; Stutzer, M. An information-theoretic alternative to generalized method of moments estimation. Econometrica 1997, 65, 861–874. [Google Scholar] [CrossRef]
  45. Imbens, G.; Johnson, P.; Spady, R. Information Theoretic Approaches to Inference in Moment Condition Models. Econometrica 1998, 66, 333–357. [Google Scholar] [CrossRef]
  46. Hansen, L.P.; Heaton, J.; Yaron, A. Finite-sample properties of some alternative GMM estimators. J. Bus. Econ. Stat. 1996, 14, 262–280. [Google Scholar] [CrossRef]
  47. Schennach, S.M. Point estimation with exponentially tilted empirical likelihood. Ann. Stat. 2007, 35, 634–672. [Google Scholar] [CrossRef]
  48. Wolfowitz, J. The minimum distance method. Ann. Math. Stat. 1957, 28, 78–88. [Google Scholar] [CrossRef]
  49. Bickel, P.; Klaassen, C.; Ritov, Y.; Wellner, J. Efficient and Adaptive Estimation for Semiparametric Models; Johns Hopkins Press: Baltimore, MD, USA, 1993. [Google Scholar]
  50. Corcoran, S.A. Bartlett adjustment of empirical discrepancy statistics. Biometrika 1998, 85, 967–972. [Google Scholar] [CrossRef]
  51. Cressie, N.; Read, T.R.C. Multinomial goodness-of-fit tests. J. R. Stat. Soc. Ser. B (Methodological) 1984, 46, 440–464. [Google Scholar] [CrossRef]
  52. Borwein, J.M.; Lewis, A.S. Duality relationships for entropy-like minimization problems. SIAM J. Control. Optim. 1991, 29, 325–338. [Google Scholar] [CrossRef]
  53. Kitamura, Y.; Otsu, T.; Evdokimov, K. Robustness, infinitesimal neighborhoods, and moment restrictions. Econometrica 2013, 81, 1185–1201. [Google Scholar] [CrossRef]
  54. Beran, R. Minimum Hellinger distance estimates for parametric models. Ann. Stat. 1977, 5, 445–463. [Google Scholar] [CrossRef]
  55. Anatolyev, S. GMM, GEL, serial correlation, and asymptotic bias. Econometrica 2005, 73, 983–1002. [Google Scholar] [CrossRef]
  56. Cantore, C.; Gabriel, V.J.; Levine, P.; Pearlman, J.; Yang, B. The science and art of DSGE modelling: I—Construction and Bayesian estimation. In Handbook of Research Methods and Applications in Empirical Macroeconomics; Edward Elgar Publishing: Northampton, UK, 2013; pp. 411–440. [Google Scholar]
  57. Metropolis, N.; Rosenbluth, A.W.; Rosenbluth, M.N.; Teller, A.H.; Teller, E. Equation of state calculations by fast computing machines. J. Chem. Phys. 1953, 21, 1087–1092. [Google Scholar] [CrossRef]
  58. Hastings, W.K. Monte Carlo sampling methods using Markov chains and their applications. Biometrika 1970, 57, 97–109. [Google Scholar] [CrossRef]
  59. Miao, J. Economic Dynamics in Discrete Time; MIT Press: Cambridge, MA, USA, 2014. [Google Scholar]
  60. Chaussé, P. Computing generalized method of moments and generalized empirical likelihood with R. J. Stat. Softw. 2010, 34, 1–35. [Google Scholar] [CrossRef]
  61. Tauchen, G. Statistical Properties of Generalized Method-of-Moments Estimators of Structural Parameters Obtained from Financial Market Data. J. Bus. Econ. Stat. 1986, 4, 397–416. [Google Scholar] [CrossRef]
  62. Kocherlakota, N.R. On tests of representative consumer asset pricing models. J. Monet. Econ. 1990, 26, 285–304. [Google Scholar] [CrossRef]
  63. Cui, L.; Feng, G.; Hong, Y. Regularized GMM for time-varying models with applications to asset pricing. Int. Econ. Rev. 2024, 65, 851–883. [Google Scholar] [CrossRef]
  64. Beyer, H.G.; Sendhoff, B. Robust optimization—A comprehensive survey. Comput. Methods Appl. Mech. Eng. 2007, 196, 3190–3218. [Google Scholar] [CrossRef]
  65. White, H. Estimation, Inference and Specification Analysis; Econometric Society Monographs, Cambridge University Press: Cambridge, UK, 1994. [Google Scholar]
  66. Aguirre-Torres, V.; Toribio, M.D. Efficient Method of Moments in Misspecified i.i.d. Models. Econom. Theory 2004, 20, 513–534. [Google Scholar] [CrossRef]
  67. Andrews, D.W.K.; Cheng, X. Estimation and Inference with Weak, Semi-Strong, and Strong Identification. Econometrica 2012, 80, 2153–2211. [Google Scholar] [CrossRef]
  68. Chen, X.; Hansen, L.P.; Hansen, P.G. Robust inference for moment condition models without rational expectations. J. Econom. 2024, 243, 105653. [Google Scholar] [CrossRef]
Figure 1. Distribution of β ^ . (a) DGP I: normal distribution; (b) DGP II: Student’s t-distribution; (c) DGP III: normal distribution with single centered outlier; and (d) DGP IV: normal distribution with multiple outliers. Notes: True value: β = 0.98 . Continuous black line: over-identified case. Dashed line: just-identified case.
Figure 2. Distribution of γ ^ . (a) DGP I: normal distribution; (b) DGP II: Student’s t-distribution; (c) DGP III: normal distribution with single centered outlier; and (d) DGP IV: normal distribution with multiple outliers. Notes: True value: γ = 1.8 . Continuous black line: over-identified case. Dashed line: just-identified case.
Figure 3. Distribution of b ^ . (a) DGP I: normal distribution; (b) DGP II: Student’s t-distribution; (c) DGP III: normal distribution with single centered outlier; and (d) DGP IV: normal distribution with multiple outliers. Notes: True value: b = 3.2 . Continuous black line: over-identified case. Dashed line: just-identified case.
Figure 4. Distribution of ρ ^ . (a) DGP I: normal distribution; (b) DGP II: Student’s t-distribution; (c) DGP III: normal distribution with single centered outlier; and (d) DGP IV: normal distribution with multiple outliers. Notes: True value: ρ = 0.9 . Continuous black line: over-identified case. Dashed line: just-identified case.
Figure 5. Distribution of σ ^ . (a) DGP I: normal distribution; (b) DGP II: Student’s t-distribution; (c) DGP III: normal distribution with single centered outlier; and (d) DGP IV: normal distribution with multiple outliers. Notes: True value: σ = 0.007 . Continuous black line: over-identified case. Dashed line: just-identified case.
Table 1. Description and true values of the parameters.

| Parameter | Description | Value | Type |
|---|---|---|---|
| $\alpha$ | Output elasticity of capital | 0.33 | Calibrated |
| $\delta$ | Capital depreciation rate | 0.025 | Calibrated |
| $\beta$ | Intertemporal discount factor of the utility function | 0.98 | Estimated |
| $\gamma$ | Consumption relative risk aversion | 1.8 | Estimated |
| $b$ | Weight of leisure in the utility function | 3.2 | Estimated |
| $\rho$ | Autoregressive coefficient of the technological process | 0.9 | Estimated |
| $\sigma$ | Standard deviation of the productivity shock | 0.007 | Estimated |
Table 2. Correlation between moment conditions.

DGP I: normal distribution

| | $g_1$ | $g_2$ | $g_3$ | $g_4$ | $g_5$ | $g_6$ | $g_7$ |
|---|---|---|---|---|---|---|---|
| $g_1$ | 1.0000 | | | | | | |
| $g_2$ | −0.1244 | 1.0000 | | | | | |
| $g_3$ | 0.6197 | −0.0355 | 1.0000 | | | | |
| $g_4$ | 0.0346 | −0.1543 | 0.0717 | 1.0000 | | | |
| $g_5$ | −0.0055 | −0.0180 | 0.0205 | 0.1859 | 1.0000 | | |
| $g_6$ | −0.2333 | 0.6505 | −0.1311 | −0.1252 | −0.0166 | 1.0000 | |
| $g_7$ | 0.5734 | −0.2909 | 0.3468 | 0.0810 | 0.0070 | −0.3548 | 1.0000 |

DGP II: Student’s t-distribution

| | $g_1$ | $g_2$ | $g_3$ | $g_4$ | $g_5$ | $g_6$ | $g_7$ |
|---|---|---|---|---|---|---|---|
| $g_1$ | 1.0000 | | | | | | |
| $g_2$ | −0.1214 | 1.0000 | | | | | |
| $g_3$ | 0.6062 | −0.0351 | 1.0000 | | | | |
| $g_4$ | 0.0146 | −0.2063 | 0.1296 | 1.0000 | | | |
| $g_5$ | −0.0314 | −0.0165 | 0.0124 | 0.2505 | 1.0000 | | |
| $g_6$ | −0.2044 | 0.6579 | −0.1067 | −0.1697 | −0.0193 | 1.0000 | |
| $g_7$ | 0.5727 | −0.2031 | 0.3401 | 0.0321 | −0.0375 | −0.2268 | 1.0000 |

DGP III: normal distribution with single, centered outlier

| | $g_1$ | $g_2$ | $g_3$ | $g_4$ | $g_5$ | $g_6$ | $g_7$ |
|---|---|---|---|---|---|---|---|
| $g_1$ | 1.0000 | | | | | | |
| $g_2$ | −0.1287 | 1.0000 | | | | | |
| $g_3$ | 0.6178 | −0.0422 | 1.0000 | | | | |
| $g_4$ | 0.1445 | −0.1096 | 0.1725 | 1.0000 | | | |
| $g_5$ | 0.1483 | −0.0674 | 0.1159 | 0.2332 | 1.0000 | | |
| $g_6$ | −0.2323 | 0.6572 | −0.1365 | −0.1119 | −0.0923 | 1.0000 | |
| $g_7$ | 0.5698 | −0.2654 | 0.3527 | 0.1533 | 0.3185 | −0.3145 | 1.0000 |

DGP IV: normal distribution with multiple outliers

| | $g_1$ | $g_2$ | $g_3$ | $g_4$ | $g_5$ | $g_6$ | $g_7$ |
|---|---|---|---|---|---|---|---|
| $g_1$ | 1.0000 | | | | | | |
| $g_2$ | −0.1136 | 1.0000 | | | | | |
| $g_3$ | 0.6076 | −0.0291 | 1.0000 | | | | |
| $g_4$ | 0.0127 | −0.2140 | 0.0872 | 1.0000 | | | |
| $g_5$ | −0.0389 | −0.0080 | 0.0071 | 0.2143 | 1.0000 | | |
| $g_6$ | −0.2153 | 0.6551 | −0.1131 | −0.1634 | −0.0024 | 1.0000 | |
| $g_7$ | 0.5700 | −0.2477 | 0.3365 | 0.0615 | −0.0562 | −0.2866 | 1.0000 |

Notes: Each matrix reports the mean, across replications, of the correlations between the moment conditions computed from the data of each DGP, considering the true value of the parameters.
Table 3. Over-identification J, LM, and LR tests—moment-based methods.

| Method | Statistic | DGP I | DGP II | DGP III | DGP IV |
|---|---|---|---|---|---|
| GMM | p-value of the J test (mean) | 0.3973 | 0.4355 | 0.3948 | 0.4469 |
| GMM | p-value of the J test < 0.05 | 33.45% | 26.40% | 30.55% | 26.85% |
| GEL/GMC | p-value of the J test (mean) | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| GEL/GMC | p-value of the J test < 0.05 | 99.35% | 99.95% | 99.55% | 99.75% |
| GEL/GMC | p-value of the LM test (mean) | 0.1100 | 0.0905 | 0.1131 | 0.1108 |
| GEL/GMC | p-value of the LM test < 0.05 | 64.55% | 69.80% | 63.25% | 66.85% |
| GEL/GMC | p-value of the LR test (mean) | 0.0603 | 0.0589 | 0.0609 | 0.0718 |
| GEL/GMC | p-value of the LR test < 0.05 | 79.80% | 77.30% | 78.20% | 75.25% |
Notes: The null hypothesis of the J, LM, and LR tests is correct specification (the moment conditions are valid).
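For reference, Hansen's J statistic is J = T ḡ(θ̂)′ S⁻¹ ḡ(θ̂), distributed χ² with m − k degrees of freedom under correct specification. A minimal sketch using the simple iid variance estimator (a HAC estimator would replace S in time-series applications):

```python
import numpy as np

def hansen_j(G, n_params):
    """Hansen's J statistic for over-identifying restrictions, from a
    T x m matrix of moment conditions evaluated at the estimated parameters.
    Returns the statistic and its chi-square degrees of freedom, m - k."""
    G = np.asarray(G, dtype=float)
    T, m = G.shape
    gbar = G.mean(axis=0)                    # sample means of the moments
    S = (G - gbar).T @ (G - gbar) / T        # variance of the moment conditions
    J = T * gbar @ np.linalg.solve(S, gbar)  # J = T * gbar' S^{-1} gbar
    return J, m - n_params
```

For example, with m = 3 moments and k = 1 parameter, J is compared against a χ²(2) distribution, whose 5% critical value is about 5.99.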
Table 4. Statistics of β̂.

Just-identified case

| DGP | Statistic | GMM | EL | ET | CUE | ETEL | HD | ETHD | SEL | SET | SCUE | SETEL | SHD | SETHD | ML | BI |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| DGP I | Mean | 0.980202 | 0.980078 | 0.980189 | 0.980193 | 0.948072 | 0.980139 | 0.949710 | 0.980113 | 0.980186 | 0.980201 | 0.947588 | 0.980171 | 0.948761 | 0.969253 | 0.981141 |
| DGP I | Median | 0.980256 | 0.980000 | 0.980210 | 0.980211 | 0.980161 | 0.980173 | 0.980162 | 0.980000 | 0.980210 | 0.980204 | 0.980176 | 0.980173 | 0.980186 | 0.985471 | 0.981258 |
| DGP I | Bias | 0.000202 | 0.000078 | 0.000189 | 0.000193 | −0.031928 | 0.000139 | −0.030290 | 0.000113 | 0.000186 | 0.000201 | −0.032412 | 0.000171 | −0.031239 | −0.010747 | 0.001141 |
| DGP I | MSE | 0.000025 | 0.000032 | 0.000026 | 0.000025 | 0.031716 | 0.000028 | 0.029810 | 0.000031 | 0.000026 | 0.000025 | 0.032200 | 0.000028 | 0.030771 | 0.001354 | 0.000003 |
| DGP I | MAE | 0.003931 | 0.003603 | 0.003966 | 0.003971 | 0.036061 | 0.003996 | 0.034450 | 0.003514 | 0.003964 | 0.003967 | 0.036545 | 0.003980 | 0.035406 | 0.025420 | 0.001502 |
| DGP II | Mean | 0.980317 | 0.980663 | 0.980307 | 0.980314 | 0.950247 | 0.980324 | 0.948073 | 0.980701 | 0.980340 | 0.980315 | 0.946758 | 0.980360 | 0.944806 | 0.963870 | 0.981688 |
| DGP II | Median | 0.980272 | 0.980000 | 0.980229 | 0.980299 | 0.980245 | 0.980228 | 0.980174 | 0.980000 | 0.980210 | 0.980319 | 0.980164 | 0.980266 | 0.980219 | 0.979883 | 0.981800 |
| DGP II | Bias | 0.000317 | 0.000663 | 0.000307 | 0.000314 | −0.029753 | 0.000324 | −0.031927 | 0.000701 | 0.000340 | 0.000315 | −0.033242 | 0.000360 | −0.035194 | −0.016130 | 0.001688 |
| DGP II | MSE | 0.000051 | 0.000056 | 0.000052 | 0.000051 | 0.02863 | 0.000054 | 0.029954 | 0.000052 | 0.000051 | 0.000052 | 0.032034 | 0.000052 | 0.033277 | 0.001581 | 0.000004 |
| DGP II | MAE | 0.005673 | 0.005058 | 0.005693 | 0.005744 | 0.035698 | 0.005714 | 0.037844 | 0.004733 | 0.005644 | 0.005750 | 0.039032 | 0.005661 | 0.041033 | 0.027738 | 0.001845 |
| DGP III | Mean | 0.982165 | 0.981722 | 0.982152 | 0.982186 | 0.951919 | 0.982170 | 0.952232 | 0.981664 | 0.982168 | 0.982187 | 0.951929 | 0.982182 | 0.952254 | 0.970715 | 0.981311 |
| DGP III | Median | 0.982222 | 0.980124 | 0.982183 | 0.982235 | 0.982044 | 0.982150 | 0.982072 | 0.980067 | 0.982201 | 0.982235 | 0.982016 | 0.982177 | 0.982067 | 0.985982 | 0.981383 |
| DGP III | Bias | 0.002165 | 0.001722 | 0.002152 | 0.002186 | −0.028081 | 0.002170 | −0.027768 | 0.001664 | 0.002168 | 0.002187 | −0.028071 | 0.002182 | −0.027746 | −0.009285 | 0.001311 |
| DGP III | MSE | 0.000030 | 0.000034 | 0.000030 | 0.000030 | 0.029804 | 0.000032 | 0.029292 | 0.000033 | 0.000030 | 0.000030 | 0.029804 | 0.000032 | 0.029291 | 0.001227 | 0.000003 |
| DGP III | MAE | 0.004334 | 0.003932 | 0.004349 | 0.004393 | 0.034583 | 0.004391 | 0.034297 | 0.003813 | 0.004346 | 0.004389 | 0.034573 | 0.004392 | 0.034260 | 0.024026 | 0.001565 |
| DGP IV | Mean | 0.979943 | 0.980186 | 0.97992 | 0.979946 | 0.940299 | 0.979952 | 0.940896 | 0.980216 | 0.979970 | 0.979951 | 0.934493 | 0.979966 | 0.934347 | 0.964714 | 0.981702 |
| DGP IV | Median | 0.979914 | 0.980000 | 0.979961 | 0.979980 | 0.980000 | 0.980000 | 0.980000 | 0.980000 | 0.979975 | 0.979980 | 0.980000 | 0.980002 | 0.979998 | 0.980867 | 0.981757 |
| DGP IV | Bias | −0.000057 | 0.000186 | −0.000080 | −0.000054 | −0.039701 | −0.000048 | −0.039104 | 0.000216 | −0.000030 | −0.000049 | −0.045507 | −0.000034 | −0.045653 | −0.015286 | 0.001702 |
| DGP IV | MSE | 0.000062 | 0.000068 | 0.000064 | 0.000063 | 0.037253 | 0.000066 | 0.035875 | 0.000064 | 0.000062 | 0.000063 | 0.043270 | 0.000067 | 0.042567 | 0.001558 | 0.000004 |
| DGP IV | MAE | 0.006210 | 0.005552 | 0.006241 | 0.006286 | 0.045813 | 0.006164 | 0.045107 | 0.005208 | 0.006173 | 0.006271 | 0.051552 | 0.006173 | 0.051570 | 0.027472 | 0.001832 |

Over-identified case (ML and BI not reported)

| DGP | Statistic | GMM | EL | ET | CUE | ETEL | HD | ETHD | SEL | SET | SCUE | SETEL | SHD | SETHD |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| DGP I | Mean | 0.980038 | 0.980161 | 0.980218 | 0.980197 | 0.944520 | 0.980174 | 0.945539 | 0.980157 | 0.980226 | 0.980194 | 0.943096 | 0.980195 | 0.944580 |
| DGP I | Median | 0.979968 | 0.980000 | 0.980292 | 0.980261 | 0.980219 | 0.980211 | 0.980182 | 0.980000 | 0.980293 | 0.980260 | 0.980231 | 0.980215 | 0.980202 |
| DGP I | Bias | 0.000038 | 0.000161 | 0.000218 | 0.000197 | −0.035480 | 0.000174 | −0.034461 | 0.000157 | 0.000226 | 0.000194 | −0.036904 | 0.000195 | −0.035420 |
| DGP I | MSE | 0.000029 | 0.000026 | 0.000025 | 0.000025 | 0.035297 | 0.000026 | 0.034340 | 0.000026 | 0.000025 | 0.000025 | 0.036730 | 0.000025 | 0.035307 |
| DGP I | MAE | 0.004316 | 0.003291 | 0.003911 | 0.003963 | 0.039577 | 0.003925 | 0.038645 | 0.003274 | 0.003904 | 0.003963 | 0.041014 | 0.003906 | 0.039606 |
| DGP II | Mean | 0.980187 | 0.980143 | 0.980303 | 0.980307 | 0.953273 | 0.980288 | 0.950850 | 0.980192 | 0.980312 | 0.980276 | 0.946421 | 0.980293 | 0.945478 |
| DGP II | Median | 0.980179 | 0.980000 | 0.980298 | 0.980197 | 0.980215 | 0.980239 | 0.980209 | 0.980000 | 0.980218 | 0.980230 | 0.980203 | 0.980204 | 0.980209 |
| DGP II | Bias | 0.000187 | 0.000143 | 0.000303 | 0.000307 | −0.026727 | 0.000288 | −0.02915 | 0.000192 | 0.000312 | 0.000276 | −0.033579 | 0.000293 | −0.034522 |
| DGP II | MSE | 0.000058 | 0.000048 | 0.000052 | 0.000050 | 0.026950 | 0.000055 | 0.029351 | 0.000043 | 0.000052 | 0.000050 | 0.033630 | 0.000053 | 0.034634 |
| DGP II | MAE | 0.006142 | 0.004083 | 0.005652 | 0.005625 | 0.032777 | 0.005698 | 0.035260 | 0.003846 | 0.005616 | 0.005662 | 0.039508 | 0.005637 | 0.040545 |
| DGP III | Mean | 0.981652 | 0.981628 | 0.982162 | 0.982186 | 0.960641 | 0.982159 | 0.960445 | 0.981600 | 0.982146 | 0.982192 | 0.958674 | 0.982148 | 0.959024 |
| DGP III | Median | 0.981750 | 0.980052 | 0.982144 | 0.982225 | 0.982045 | 0.982106 | 0.982121 | 0.980038 | 0.982095 | 0.982226 | 0.982023 | 0.982088 | 0.982086 |
| DGP III | Bias | 0.001652 | 0.001628 | 0.002162 | 0.002186 | −0.019359 | 0.002159 | −0.019555 | 0.001600 | 0.002146 | 0.002192 | −0.021326 | 0.002148 | −0.020976 |
| DGP III | MSE | 0.000032 | 0.000032 | 0.000030 | 0.000030 | 0.021252 | 0.000031 | 0.021338 | 0.000031 | 0.000029 | 0.000030 | 0.023181 | 0.000031 | 0.022781 |
| DGP III | MAE | 0.004499 | 0.003670 | 0.004319 | 0.004357 | 0.025802 | 0.004326 | 0.026085 | 0.003625 | 0.004305 | 0.004364 | 0.027727 | 0.004320 | 0.027511 |
| DGP IV | Mean | 0.979724 | 0.980046 | 0.979963 | 0.979913 | 0.940565 | 0.979977 | 0.941524 | 0.980121 | 0.979986 | 0.979909 | 0.936183 | 0.980000 | 0.936240 |
| DGP IV | Median | 0.979470 | 0.980000 | 0.979926 | 0.979982 | 0.979991 | 0.980000 | 0.979978 | 0.980000 | 0.979973 | 0.979943 | 0.979997 | 0.980000 | 0.979997 |
| DGP IV | Bias | −0.000276 | 0.000046 | −0.000037 | −0.000087 | −0.039435 | −0.000023 | −0.038476 | 0.000121 | −0.000014 | −0.000091 | −0.043817 | 0.000000 | −0.043760 |
| DGP IV | MSE | 0.000072 | 0.000053 | 0.000063 | 0.000063 | 0.039045 | 0.000062 | 0.038090 | 0.000051 | 0.000062 | 0.000063 | 0.043397 | 0.000062 | 0.043382 |
| DGP IV | MAE | 0.006778 | 0.004224 | 0.006206 | 0.006229 | 0.045453 | 0.006120 | 0.044554 | 0.004069 | 0.006133 | 0.006259 | 0.049747 | 0.006091 | 0.049777 |

Notes: True value: β = 0.98. MSE = mean squared error. MAE = mean absolute error.
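The summary statistics reported in Tables 4–8 follow the usual Monte Carlo definitions: bias is the mean estimate minus the true value, MSE the mean squared deviation from the true value, and MAE the mean absolute deviation. A minimal sketch (the function name and the illustrative draws of β̂ are hypothetical):

```python
import numpy as np

def mc_summary(estimates, true_value):
    """Summarize Monte Carlo estimates of a scalar parameter."""
    est = np.asarray(estimates, dtype=float)
    err = est - true_value
    return {
        "mean": est.mean(),
        "median": np.median(est),
        "bias": err.mean(),           # mean(estimate) - true value
        "mse": np.mean(err ** 2),     # mean squared error
        "mae": np.mean(np.abs(err)),  # mean absolute error
    }

# Hypothetical draws of beta-hat around the true value 0.98
stats = mc_summary([0.979, 0.981, 0.980, 0.982], true_value=0.98)
```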
Table 5. Statistics of γ̂.

Just-identified case

| DGP | Statistic | GMM | EL | ET | CUE | ETEL | HD | ETHD | SEL | SET | SCUE | SETEL | SHD | SETHD | ML | BI |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| DGP I | Mean | 1.799693 | 1.799975 | 1.792883 | 1.801694 | 1.517973 | 1.789484 | 1.520552 | 1.799885 | 1.792690 | 1.801730 | 1.517999 | 1.788053 | 1.521383 | 8.831374 | 1.792836 |
| DGP I | Median | 1.799847 | 1.800000 | 1.798517 | 1.800003 | 1.797567 | 1.798218 | 1.797667 | 1.800000 | 1.798568 | 1.800000 | 1.797610 | 1.798259 | 1.797722 | 1.800939 | 1.798497 |
| DGP I | Bias | −0.000307 | −0.000025 | −0.007117 | 0.001694 | −0.282027 | −0.010516 | −0.279448 | −0.000115 | −0.007310 | 0.001730 | −0.282001 | −0.011947 | −0.278617 | 7.031374 | −0.007164 |
| DGP I | MSE | 0.000159 | 0.000457 | 0.005423 | 0.000744 | 0.302007 | 0.005682 | 0.297097 | 0.000390 | 0.005747 | 0.000827 | 0.304036 | 0.005929 | 0.296932 | 201.512589 | 0.008871 |
| DGP I | MAE | 0.009969 | 0.008081 | 0.030347 | 0.015673 | 0.296491 | 0.032892 | 0.291800 | 0.007773 | 0.030736 | 0.015950 | 0.298052 | 0.033481 | 0.291862 | 7.037228 | 0.075397 |
| DGP II | Mean | 1.790962 | 1.802843 | 1.784633 | 1.799995 | 1.562523 | 1.787709 | 1.543238 | 1.802797 | 1.785611 | 1.799233 | 1.557556 | 1.786721 | 1.557903 | 11.134495 | 1.765028 |
| DGP II | Median | 1.798876 | 1.800000 | 1.796364 | 1.799511 | 1.795352 | 1.796417 | 1.796424 | 1.800000 | 1.796654 | 1.799148 | 1.795845 | 1.796586 | 1.796864 | 1.802347 | 1.771187 |
| DGP II | Bias | −0.009038 | 0.002843 | −0.015367 | −0.000005 | −0.237477 | −0.012291 | −0.256762 | 0.002797 | −0.014389 | −0.000767 | −0.242444 | −0.013279 | −0.242097 | 9.334495 | −0.034972 |
| DGP II | MSE | 0.009343 | 0.002608 | 0.007473 | 0.001485 | 0.252253 | 0.008455 | 1.003001 | 0.002351 | 0.007396 | 0.002006 | 0.256492 | 0.009235 | 0.256017 | 267.848853 | 0.010563 |
| DGP II | MAE | 0.022471 | 0.014061 | 0.040824 | 0.021177 | 0.258409 | 0.041654 | 0.276881 | 0.013076 | 0.042633 | 0.024315 | 0.262812 | 0.046638 | 0.263166 | 9.343180 | 0.080309 |
| DGP III | Mean | 1.804236 | 1.803202 | 1.797763 | 1.805942 | 1.617643 | 1.795489 | 1.615771 | 1.803007 | 1.797153 | 1.805796 | 1.617781 | 1.795471 | 1.616305 | 9.788699 | 1.791399 |
| DGP III | Median | 1.804486 | 1.800012 | 1.80042 | 1.803724 | 1.799930 | 1.800569 | 1.799717 | 1.800010 | 1.800323 | 1.803240 | 1.799989 | 1.800384 | 1.799834 | 1.801151 | 1.797475 |
| DGP III | Bias | 0.004236 | 0.003202 | −0.002237 | 0.005942 | −0.182357 | −0.004511 | −0.184229 | 0.003007 | −0.002847 | 0.005796 | −0.182219 | −0.004529 | −0.183695 | 7.988699 | −0.008601 |
| DGP III | MSE | 0.000193 | 0.000811 | 0.005644 | 0.000598 | 0.196766 | 0.006879 | 0.196158 | 0.000833 | 0.005667 | 0.000689 | 0.196022 | 0.007093 | 0.196446 | 228.820304 | 0.008040 |
| DGP III | MAE | 0.010722 | 0.009778 | 0.034589 | 0.015518 | 0.203114 | 0.036877 | 0.203922 | 0.009648 | 0.034742 | 0.016012 | 0.202060 | 0.037444 | 0.204179 | 7.991051 | 0.071078 |
| DGP IV | Mean | 1.789161 | 1.800556 | 1.787464 | 1.801799 | 1.516480 | 1.786322 | 1.516122 | 1.800591 | 1.786802 | 1.801877 | 1.519030 | 1.782599 | 1.515419 | 11.041091 | 1.765867 |
| DGP IV | Median | 1.798031 | 1.800000 | 1.796905 | 1.799235 | 1.795199 | 1.796815 | 1.79529 | 1.800000 | 1.796737 | 1.799215 | 1.795556 | 1.796533 | 1.795410 | 1.802214 | 1.770485 |
| DGP IV | Bias | −0.010839 | 0.000556 | −0.012536 | 0.001799 | −0.283520 | −0.013678 | −0.283878 | 0.000591 | −0.013198 | 0.001877 | −0.280970 | −0.017401 | −0.284581 | 9.241091 | −0.034133 |
| DGP IV | MSE | 0.012034 | 0.001221 | 0.006067 | 0.001660 | 0.300445 | 0.007903 | 0.300181 | 0.001583 | 0.006921 | 0.002633 | 0.298337 | 0.008742 | 0.299022 | 264.771045 | 0.009900 |
| DGP IV | MAE | 0.024569 | 0.013409 | 0.040151 | 0.023620 | 0.303472 | 0.042948 | 0.304317 | 0.013033 | 0.042679 | 0.027369 | 0.301683 | 0.046423 | 0.303688 | 9.243935 | 0.078666 |

Over-identified case (ML and BI not reported)

| DGP | Statistic | GMM | EL | ET | CUE | ETEL | HD | ETHD | SEL | SET | SCUE | SETEL | SHD | SETHD |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| DGP I | Mean | 0.975127 | 1.799331 | 1.790890 | 1.799096 | 1.545488 | 1.791174 | 1.543650 | 1.799398 | 1.790358 | 1.799081 | 1.544086 | 1.792024 | 1.540940 |
| DGP I | Median | 0.802189 | 1.800000 | 1.798327 | 1.799436 | 1.797238 | 1.798465 | 1.797255 | 1.800000 | 1.798364 | 1.799348 | 1.797353 | 1.798663 | 1.797329 |
| DGP I | Bias | −0.824873 | −0.000669 | −0.009110 | −0.000904 | −0.254512 | −0.008826 | −0.256350 | −0.000602 | −0.009642 | −0.000919 | −0.255914 | −0.007976 | −0.259060 |
| DGP I | MSE | 0.846990 | 0.000386 | 0.003447 | 0.000529 | 0.268875 | 0.004006 | 0.273178 | 0.000380 | 0.003687 | 0.000545 | 0.270587 | 0.004033 | 0.276147 |
| DGP I | MAE | 0.838655 | 0.007503 | 0.026360 | 0.013817 | 0.264018 | 0.029932 | 0.264879 | 0.007348 | 0.026980 | 0.014069 | 0.265488 | 0.030121 | 0.267831 |
| DGP II | Mean | 0.942684 | 1.800607 | 1.785142 | 1.787255 | 1.597989 | 1.785453 | 1.598327 | 1.800606 | 1.784015 | 1.785070 | 1.589154 | 1.783984 | 1.587769 |
| DGP II | Median | 0.760290 | 1.800000 | 1.796400 | 1.797115 | 1.795194 | 1.796195 | 1.795475 | 1.800000 | 1.796418 | 1.796708 | 1.795443 | 1.796340 | 1.795643 |
| DGP II | Bias | −0.857316 | 0.000607 | −0.014858 | −0.012745 | −0.202011 | −0.014547 | −0.201673 | 0.000606 | −0.015985 | −0.014930 | −0.210846 | −0.016016 | −0.212231 |
| DGP II | MSE | 0.862640 | 0.001114 | 0.005384 | 0.003326 | 0.211625 | 0.005592 | 0.208465 | 0.001104 | 0.005685 | 0.003911 | 0.219813 | 0.006165 | 0.219402 |
| DGP II | MAE | 0.860122 | 0.009364 | 0.034491 | 0.027901 | 0.219019 | 0.035744 | 0.215103 | 0.008969 | 0.036143 | 0.030250 | 0.227426 | 0.036803 | 0.226450 |
| DGP III | Mean | 1.025254 | 1.803290 | 1.794459 | 1.803422 | 1.614435 | 1.794884 | 1.614503 | 1.803445 | 1.794349 | 1.803941 | 1.609770 | 1.794074 | 1.609564 |
| DGP III | Median | 0.901899 | 1.800022 | 1.800329 | 1.802420 | 1.799357 | 1.800497 | 1.799078 | 1.800019 | 1.800276 | 1.802420 | 1.799118 | 1.800410 | 1.799221 |
| DGP III | Bias | −0.774746 | 0.003290 | −0.005541 | 0.003422 | −0.185565 | −0.005116 | −0.185497 | 0.003445 | −0.005651 | 0.003941 | −0.190230 | −0.005926 | −0.190436 |
| DGP III | MSE | 0.760407 | 0.000518 | 0.003611 | 0.000688 | 0.194829 | 0.003873 | 0.194965 | 0.000500 | 0.003689 | 0.000718 | 0.199175 | 0.003993 | 0.200495 |
| DGP III | MAE | 0.782170 | 0.008422 | 0.028404 | 0.014609 | 0.199120 | 0.029348 | 0.199320 | 0.008289 | 0.028818 | 0.014991 | 0.202940 | 0.029664 | 0.204362 |
| DGP IV | Mean | 0.898878 | 1.799869 | 1.781688 | 1.788705 | 1.546246 | 1.785396 | 1.551558 | 1.800094 | 1.781010 | 1.785889 | 1.538394 | 1.782554 | 1.542250 |
| DGP IV | Median | 0.745955 | 1.800000 | 1.796789 | 1.797067 | 1.793437 | 1.796818 | 1.794452 | 1.800000 | 1.796731 | 1.797058 | 1.794026 | 1.796687 | 1.794737 |
| DGP IV | Bias | −0.901122 | −0.000131 | −0.018312 | −0.011295 | −0.253754 | −0.014604 | −0.248442 | 0.000094 | −0.018990 | −0.014111 | −0.261606 | −0.017446 | −0.257750 |
| DGP IV | MSE | 0.929671 | 0.000673 | 0.009571 | 0.002538 | 0.260248 | 0.007289 | 0.258372 | 0.000634 | 0.009999 | 0.003348 | 0.269271 | 0.008165 | 0.269135 |
| DGP IV | MAE | 0.902122 | 0.009113 | 0.039962 | 0.026311 | 0.264210 | 0.037998 | 0.260744 | 0.008555 | 0.041524 | 0.029143 | 0.272321 | 0.041261 | 0.270135 |

Notes: True value: γ = 1.8. MSE = mean squared error. MAE = mean absolute error.
Table 6. Statistics of b̂.

Just-identified case

| DGP | Statistic | GMM | EL | ET | CUE | ETEL | HD | ETHD | SEL | SET | SCUE | SETEL | SHD | SETHD | ML | BI |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| DGP I | Mean | 3.199884 | 3.199865 | 3.193806 | 3.202063 | 3.413509 | 3.190354 | 3.462055 | 3.199669 | 3.193654 | 3.202062 | 3.414318 | 3.188867 | 3.461894 | 3.200015 | 3.202415 |
| DGP I | Median | 3.200125 | 3.200000 | 3.199589 | 3.200475 | 3.200059 | 3.199401 | 3.200000 | 3.200000 | 3.199560 | 3.200475 | 3.200100 | 3.199290 | 3.199999 | 3.200000 | 3.201663 |
| DGP I | Bias | −0.000116 | −0.000135 | −0.006194 | 0.002063 | 0.213509 | −0.009646 | 0.262055 | −0.000331 | −0.006346 | 0.002062 | 0.214318 | −0.011133 | 0.261894 | 0.000015 | 0.002415 |
| DGP I | MSE | 0.000146 | 0.000506 | 0.006782 | 0.000740 | 0.828345 | 0.006190 | 1.072016 | 0.000450 | 0.007245 | 0.000795 | 0.833280 | 0.006363 | 1.072546 | 0.000077 | 0.004821 |
| DGP I | MAE | 0.009543 | 0.007671 | 0.031598 | 0.015050 | 0.419455 | 0.033241 | 0.493976 | 0.007374 | 0.032265 | 0.015234 | 0.421944 | 0.033766 | 0.493878 | 0.000675 | 0.054850 |
| DGP II | Mean | 3.193172 | 3.202944 | 3.186207 | 3.201210 | 3.465549 | 3.189102 | 3.462018 | 3.202640 | 3.187141 | 3.200331 | 3.464307 | 3.188125 | 3.463299 | 3.199069 | 3.202298 |
| DGP II | Median | 3.199940 | 3.200000 | 3.199230 | 3.200403 | 3.200053 | 3.199283 | 3.200083 | 3.200000 | 3.199033 | 3.200015 | 3.200027 | 3.199096 | 3.200060 | 3.200001 | 3.202706 |
| DGP II | Bias | −0.006828 | 0.002944 | −0.013793 | 0.001210 | 0.265549 | −0.010898 | 0.262018 | 0.002640 | −0.012859 | 0.000331 | 0.264307 | −0.011875 | 0.263299 | −0.000931 | 0.002298 |
| DGP II | MSE | 0.005595 | 0.002546 | 0.007847 | 0.001503 | 0.697829 | 0.008611 | 0.681449 | 0.002331 | 0.008440 | 0.001915 | 0.730616 | 0.009693 | 0.708161 | 0.001380 | 0.004512 |
| DGP II | MAE | 0.020851 | 0.013128 | 0.041392 | 0.020983 | 0.389054 | 0.041827 | 0.385872 | 0.012139 | 0.043785 | 0.023493 | 0.399738 | 0.047224 | 0.396107 | 0.001667 | 0.053449 |
| DGP III | Mean | 3.195213 | 3.197159 | 3.18985 | 3.197132 | 3.276241 | 3.187668 | 3.280302 | 3.197210 | 3.189151 | 3.196983 | 3.276958 | 3.187688 | 3.280464 | 3.200411 | 3.205173 |
| DGP III | Median | 3.195590 | 3.199981 | 3.197738 | 3.197364 | 3.197940 | 3.197496 | 3.197652 | 3.199990 | 3.197682 | 3.197467 | 3.197975 | 3.197585 | 3.197748 | 3.200000 | 3.204919 |
| DGP III | Bias | −0.004787 | −0.002841 | −0.010150 | −0.002868 | 0.076241 | −0.012332 | 0.080302 | −0.002790 | −0.010849 | −0.003017 | 0.076958 | −0.012312 | 0.080464 | 0.000411 | 0.005173 |
| DGP III | MSE | 0.000189 | 0.000788 | 0.006016 | 0.000523 | 0.650395 | 0.007434 | 0.765980 | 0.000820 | 0.006031 | 0.000605 | 0.659612 | 0.007678 | 0.775130 | 0.002022 | 0.004634 |
| DGP III | MAE | 0.010496 | 0.008899 | 0.035644 | 0.014690 | 0.319243 | 0.038253 | 0.351515 | 0.008853 | 0.035874 | 0.015123 | 0.321441 | 0.039104 | 0.353929 | 0.002041 | 0.053944 |
| DGP IV | Mean | 3.194011 | 3.201326 | 3.190618 | 3.204899 | 3.463513 | 3.189162 | 3.469584 | 3.201103 | 3.189636 | 3.205004 | 3.461549 | 3.185192 | 3.466104 | 3.200931 | 3.204142 |
| DGP IV | Median | 3.200960 | 3.200000 | 3.199360 | 3.201902 | 3.200007 | 3.199550 | 3.200118 | 3.200000 | 3.199380 | 3.201667 | 3.200000 | 3.199016 | 3.200034 | 3.200001 | 3.204788 |
| DGP IV | Bias | −0.005989 | 0.001326 | −0.009382 | 0.004899 | 0.263513 | −0.010838 | 0.269584 | 0.001103 | −0.010364 | 0.005004 | 0.261549 | −0.014808 | 0.266104 | 0.000931 | 0.004142 |
| DGP IV | MSE | 0.007760 | 0.001182 | 0.006176 | 0.001472 | 0.813066 | 0.007779 | 0.827678 | 0.001618 | 0.006836 | 0.002460 | 0.828457 | 0.008719 | 0.844462 | 0.000950 | 0.004668 |
| DGP IV | MAE | 0.022339 | 0.012600 | 0.038946 | 0.022482 | 0.452491 | 0.041931 | 0.463055 | 0.012175 | 0.041362 | 0.026344 | 0.454318 | 0.045508 | 0.466301 | 0.001952 | 0.054291 |

Over-identified case (ML and BI not reported)

| DGP | Statistic | GMM | EL | ET | CUE | ETEL | HD | ETHD | SEL | SET | SCUE | SETEL | SHD | SETHD |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| DGP I | Mean | 2.464474 | 3.199268 | 3.191114 | 3.199251 | 3.314573 | 3.191413 | 3.272090 | 3.199254 | 3.190531 | 3.199261 | 3.314343 | 3.192280 | 3.271027 |
| DGP I | Median | 2.311940 | 3.200000 | 3.199466 | 3.199917 | 3.199976 | 3.199405 | 3.199872 | 3.200000 | 3.199416 | 3.199969 | 3.199949 | 3.199415 | 3.199850 |
| DGP I | Bias | −0.735526 | −0.000732 | −0.008886 | −0.000749 | 0.114573 | −0.008587 | 0.072090 | −0.000746 | −0.009469 | −0.000739 | 0.114343 | −0.007720 | 0.071027 |
| DGP I | MSE | 0.656621 | 0.000392 | 0.003665 | 0.000532 | 0.659170 | 0.004101 | 0.620576 | 0.000388 | 0.003899 | 0.000543 | 0.667537 | 0.004193 | 0.627244 |
| DGP I | MAE | 0.749113 | 0.007005 | 0.026886 | 0.013665 | 0.350254 | 0.029572 | 0.327359 | 0.006917 | 0.027364 | 0.013814 | 0.353666 | 0.029908 | 0.330982 |
| DGP II | Mean | 2.433100 | 3.200452 | 3.186176 | 3.188174 | 3.393048 | 3.186660 | 3.347449 | 3.200266 | 3.185303 | 3.185941 | 3.398548 | 3.185294 | 3.347984 |
| DGP II | Median | 2.279735 | 3.200000 | 3.198486 | 3.198995 | 3.199612 | 3.198932 | 3.199654 | 3.200000 | 3.198605 | 3.198209 | 3.199465 | 3.198911 | 3.199654 |
| DGP II | Bias | −0.766900 | 0.000452 | −0.013824 | −0.011826 | 0.193048 | −0.013340 | 0.147449 | 0.000266 | −0.014697 | −0.014059 | 0.198548 | −0.014706 | 0.147984 |
| DGP II | MSE | 0.673377 | 0.001020 | 0.004908 | 0.003183 | 0.654895 | 0.005498 | 0.555907 | 0.001005 | 0.005434 | 0.003705 | 0.681973 | 0.006158 | 0.583170 |
| DGP II | MAE | 0.769747 | 0.008557 | 0.033594 | 0.027396 | 0.346905 | 0.034744 | 0.304845 | 0.008126 | 0.035366 | 0.029322 | 0.359956 | 0.035966 | 0.318940 |
| DGP III | Mean | 2.502882 | 3.197154 | 3.18606 | 3.194563 | 3.215490 | 3.186630 | 3.183700 | 3.197525 | 3.186041 | 3.195114 | 3.219098 | 3.185896 | 3.191682 |
| DGP III | Median | 2.387652 | 3.199984 | 3.197326 | 3.196381 | 3.197470 | 3.197393 | 3.197174 | 3.199985 | 3.197486 | 3.196587 | 3.197487 | 3.197393 | 3.197343 |
| DGP III | Bias | −0.697118 | −0.002846 | −0.013940 | −0.005437 | 0.015490 | −0.013370 | −0.016300 | −0.002475 | −0.013959 | −0.004886 | 0.019098 | −0.014104 | −0.008318 |
| DGP III | MSE | 0.594862 | 0.000472 | 0.003889 | 0.000701 | 0.611752 | 0.004217 | 0.595892 | 0.000429 | 0.003979 | 0.000756 | 0.614552 | 0.004335 | 0.602757 |
| DGP III | MAE | 0.703704 | 0.007846 | 0.029354 | 0.014481 | 0.304345 | 0.030416 | 0.287495 | 0.007619 | 0.029846 | 0.014909 | 0.307208 | 0.030780 | 0.291867 |
| DGP IV | Mean | 2.397272 | 3.200627 | 3.184836 | 3.191540 | 3.395873 | 3.188472 | 3.369161 | 3.200856 | 3.184097 | 3.188642 | 3.402470 | 3.185503 | 3.365204 |
| DGP IV | Median | 2.269591 | 3.200000 | 3.198878 | 3.199785 | 3.199529 | 3.199172 | 3.199620 | 3.200000 | 3.199077 | 3.199663 | 3.199588 | 3.199040 | 3.199674 |
| DGP IV | Bias | −0.802728 | 0.000627 | −0.015164 | −0.008460 | 0.195873 | −0.011528 | 0.169161 | 0.000856 | −0.015903 | −0.011358 | 0.202470 | −0.014497 | 0.165204 |
| DGP IV | MSE | 0.720260 | 0.000677 | 0.007963 | 0.002452 | 0.873043 | 0.007532 | 0.794660 | 0.000655 | 0.008475 | 0.003186 | 0.873846 | 0.008382 | 0.794879 |
| DGP IV | MAE | 0.803583 | 0.008563 | 0.038011 | 0.025349 | 0.440137 | 0.036970 | 0.408439 | 0.008090 | 0.039594 | 0.028180 | 0.444492 | 0.040060 | 0.413237 |

Notes: True value: b = 3.2. MSE = mean squared error. MAE = mean absolute error.
Table 7. Statistics of ρ̂.

Just-identified case

| DGP | Statistic | GMM | EL | ET | CUE | ETEL | HD | ETHD | SEL | SET | SCUE | SETEL | SHD | SETHD | ML | BI |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| DGP I | Mean | 0.977691 | 0.905463 | 0.962346 | 0.970118 | 0.930628 | 0.955653 | 0.911981 | 0.905313 | 0.962104 | 0.970083 | 0.929392 | 0.954724 | 0.909244 | 0.885215 | 0.882414 |
| DGP I | Median | 1.000000 | 0.900010 | 0.994497 | 0.999792 | 0.903631 | 0.984479 | 0.932966 | 0.900008 | 0.994402 | 0.999711 | 0.904164 | 0.983456 | 0.931873 | 0.890047 | 0.883143 |
| DGP I | Bias | 0.077691 | 0.005463 | 0.062346 | 0.070118 | 0.030628 | 0.055653 | 0.011981 | 0.005313 | 0.062104 | 0.070083 | 0.029392 | 0.054724 | 0.009244 | −0.014785 | −0.017586 |
| DGP I | MSE | 0.022264 | 0.000752 | 0.008883 | 0.016600 | 0.010309 | 0.005672 | 0.044984 | 0.000732 | 0.008707 | 0.016132 | 0.012004 | 0.005912 | 0.049510 | 0.001486 | 0.000795 |
| DGP I | MAE | 0.100184 | 0.008380 | 0.068069 | 0.090834 | 0.049667 | 0.057399 | 0.078945 | 0.008138 | 0.067771 | 0.089397 | 0.050531 | 0.057942 | 0.080793 | 0.027179 | 0.022399 |
| DGP II | Mean | 0.983091 | 0.907387 | 0.967626 | 0.979253 | 0.891216 | 0.964043 | 0.842339 | 0.906715 | 0.967469 | 0.979118 | 0.895209 | 0.964862 | 0.846937 | 0.889877 | 0.880486 |
| DGP II | Median | 1.000000 | 0.900043 | 0.996573 | 1.000000 | 0.957732 | 0.993501 | 0.969795 | 0.900025 | 0.996124 | 1.000000 | 0.955016 | 0.994403 | 0.961090 | 0.894430 | 0.881699 |
| DGP II | Bias | 0.083091 | 0.007387 | 0.067626 | 0.079253 | −0.008784 | 0.064043 | −0.057661 | 0.006715 | 0.067469 | 0.079118 | −0.004791 | 0.064862 | −0.053063 | −0.010123 | −0.019514 |
| DGP II | MSE | 0.014973 | 0.000569 | 0.008469 | 0.013388 | 0.085466 | 0.006586 | 0.167418 | 0.000547 | 0.008481 | 0.012713 | 0.081810 | 0.006852 | 0.162462 | 0.001148 | 0.000867 |
| DGP II | MAE | 0.096762 | 0.009289 | 0.072392 | 0.094864 | 0.109091 | 0.066234 | 0.161276 | 0.008889 | 0.072837 | 0.094106 | 0.103914 | 0.068112 | 0.154297 | 0.025078 | 0.023813 |
| DGP III | Mean | 0.983464 | 0.908279 | 0.966591 | 0.973547 | 0.937479 | 0.961133 | 0.928809 | 0.907856 | 0.966431 | 0.973083 | 0.935635 | 0.961078 | 0.929809 | 0.886046 | 0.881534 |
| DGP III | Median | 1.000000 | 0.900023 | 0.995427 | 0.999768 | 0.941139 | 0.991040 | 0.967216 | 0.900018 | 0.995646 | 0.999730 | 0.936131 | 0.990763 | 0.963602 | 0.889886 | 0.882761 |
| DGP III | Bias | 0.083464 | 0.008279 | 0.066591 | 0.073547 | 0.037479 | 0.061133 | 0.028809 | 0.007856 | 0.066431 | 0.073083 | 0.035635 | 0.061078 | 0.029809 | −0.013954 | −0.018466 |
| DGP III | MSE | 0.013369 | 0.000791 | 0.009222 | 0.014312 | 0.011114 | 0.005897 | 0.037216 | 0.000762 | 0.009285 | 0.014579 | 0.011180 | 0.005861 | 0.033622 | 0.001299 | 0.000786 |
| DGP III | MAE | 0.095871 | 0.010349 | 0.071642 | 0.091117 | 0.056678 | 0.062294 | 0.075437 | 0.009907 | 0.072016 | 0.090623 | 0.056398 | 0.062083 | 0.073001 | 0.026841 | 0.022367 |
| DGP IV | Mean | 0.979796 | 0.909221 | 0.968372 | 0.974015 | 0.854999 | 0.963270 | 0.813782 | 0.908104 | 0.968334 | 0.975644 | 0.868764 | 0.962662 | 0.826830 | 0.888880 | 0.880070 |
| DGP IV | Median | 1.000000 | 0.900084 | 0.996954 | 1.000000 | 0.943716 | 0.993773 | 0.954769 | 0.900052 | 0.996397 | 1.000000 | 0.938282 | 0.993448 | 0.951269 | 0.893572 | 0.881925 |
| DGP IV | Bias | 0.079796 | 0.009221 | 0.068372 | 0.074015 | −0.045001 | 0.063270 | −0.086218 | 0.008104 | 0.068334 | 0.075644 | −0.031236 | 0.062662 | −0.073170 | −0.011120 | −0.019930 |
| DGP IV | MSE | 0.020388 | 0.000718 | 0.009075 | 0.020190 | 0.140881 | 0.008171 | 0.208962 | 0.000642 | 0.007490 | 0.017339 | 0.117756 | 0.008243 | 0.190189 | 0.001177 | 0.000858 |
| DGP IV | MAE | 0.099718 | 0.011255 | 0.073655 | 0.099071 | 0.138513 | 0.067489 | 0.184734 | 0.010266 | 0.072735 | 0.097025 | 0.124287 | 0.067656 | 0.170615 | 0.024973 | 0.023359 |

Over-identified case (ML and BI not reported)

| DGP | Statistic | GMM | EL | ET | CUE | ETEL | HD | ETHD | SEL | SET | SCUE | SETEL | SHD | SETHD |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| DGP I | Mean | 0.971423 | 0.907334 | 0.955125 | 0.966771 | 0.923309 | 0.953181 | 0.927752 | 0.907392 | 0.955498 | 0.969082 | 0.921761 | 0.953352 | 0.926929 |
| DGP I | Median | 1.000000 | 0.900018 | 0.988313 | 0.999078 | 0.900245 | 0.983957 | 0.901254 | 0.900019 | 0.989163 | 0.999187 | 0.900253 | 0.984206 | 0.901551 |
| DGP I | Bias | 0.071423 | 0.007334 | 0.055125 | 0.066771 | 0.023309 | 0.053181 | 0.027752 | 0.007392 | 0.055498 | 0.069082 | 0.021761 | 0.053352 | 0.026929 |
| DGP I | MSE | 0.032857 | 0.000579 | 0.005692 | 0.021110 | 0.016934 | 0.005429 | 0.017052 | 0.000582 | 0.005694 | 0.017261 | 0.017468 | 0.005416 | 0.018445 |
| DGP I | MAE | 0.110686 | 0.008741 | 0.057917 | 0.091781 | 0.049096 | 0.055535 | 0.050273 | 0.008743 | 0.058081 | 0.089548 | 0.049865 | 0.055515 | 0.051434 |
| DGP II | Mean | 0.984962 | 0.905206 | 0.961616 | 0.974132 | 0.898238 | 0.957846 | 0.910704 | 0.905271 | 0.961614 | 0.970946 | 0.891291 | 0.957539 | 0.906623 |
| DGP II | Median | 1.000000 | 0.900014 | 0.992815 | 0.998447 | 0.930982 | 0.989510 | 0.935006 | 0.900012 | 0.993100 | 0.998875 | 0.926941 | 0.989264 | 0.935006 |
| DGP II | Bias | 0.084962 | 0.005206 | 0.061616 | 0.074132 | −0.001762 | 0.057846 | 0.010704 | 0.005271 | 0.061614 | 0.070946 | −0.008709 | 0.057539 | 0.006623 |
| DGP II | MSE | 0.015531 | 0.000503 | 0.006219 | 0.011436 | 0.064198 | 0.005920 | 0.050394 | 0.000495 | 0.006357 | 0.015586 | 0.072985 | 0.005999 | 0.055186 |
| DGP II | MAE | 0.099427 | 0.007250 | 0.064257 | 0.085275 | 0.090168 | 0.060907 | 0.079972 | 0.007173 | 0.064724 | 0.089913 | 0.095598 | 0.061396 | 0.083465 |
| DGP III | Mean | 0.979996 | 0.908979 | 0.960402 | 0.975061 | 0.928549 | 0.956217 | 0.938197 | 0.908497 | 0.960488 | 0.975037 | 0.927877 | 0.955824 | 0.937631 |
| DGP III | Median | 1.000000 | 0.900032 | 0.992275 | 0.999567 | 0.913514 | 0.988456 | 0.928359 | 0.900028 | 0.992414 | 0.999541 | 0.912103 | 0.986951 | 0.928539 |
| DGP III | Bias | 0.079996 | 0.008979 | 0.060402 | 0.075061 | 0.028549 | 0.056217 | 0.038197 | 0.008497 | 0.060488 | 0.075037 | 0.027877 | 0.055824 | 0.037631 |
| DGP III | MSE | 0.019546 | 0.000743 | 0.005994 | 0.012375 | 0.014513 | 0.005457 | 0.009831 | 0.000704 | 0.006001 | 0.011825 | 0.013927 | 0.005418 | 0.010165 |
| DGP III | MAE | 0.102501 | 0.010903 | 0.062288 | 0.087863 | 0.054327 | 0.057096 | 0.051131 | 0.010423 | 0.062382 | 0.087803 | 0.054147 | 0.056790 | 0.051730 |
| DGP IV | Mean | 0.981881 | 0.906336 | 0.963176 | 0.972500 | 0.886856 | 0.957106 | 0.895314 | 0.906244 | 0.961497 | 0.970139 | 0.887712 | 0.956900 | 0.896953 |
| DGP IV | Median | 1.000000 | 0.900020 | 0.994575 | 0.999072 | 0.940234 | 0.991419 | 0.939497 | 0.900022 | 0.993835 | 0.999204 | 0.932461 | 0.990061 | 0.935243 |
| DGP IV | Bias | 0.081881 | 0.006330 | 0.063176 | 0.072500 | −0.013144 | 0.057106 | −0.004686 | 0.006244 | 0.061497 | 0.070139 | −0.012288 | 0.056900 | −0.003047 |
| DGP IV | MSE | 0.019329 | 0.000571 | 0.007360 | 0.014045 | 0.084768 | 0.007660 | 0.069426 | 0.000520 | 0.008019 | 0.018071 | 0.078494 | 0.006440 | 0.066003 |
| DGP IV | MAE | 0.102318 | 0.008507 | 0.067331 | 0.087282 | 0.105202 | 0.063787 | 0.097084 | 0.008102 | 0.068038 | 0.090569 | 0.101745 | 0.062860 | 0.094515 |

Notes: True value: ρ = 0.9. MSE = mean squared error. MAE = mean absolute error.
Table 8. Statistics of σ̂.

Just-identified case

| DGP | Statistic | GMM | EL | ET | CUE | ETEL | HD | ETHD | SEL | SET | SCUE | SETEL | SHD | SETHD | ML | BI |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| DGP I | Mean | 0.003810 | 0.006412 | 0.004104 | 0.004245 | 0.018494 | 0.004453 | 0.017820 | 0.006408 | 0.004076 | 0.004246 | 0.018021 | 0.004495 | 0.017997 | 0.008057 | 0.006648 |
| DGP I | Median | 0.004663 | 0.006949 | 0.004769 | 0.004748 | 0.006991 | 0.005109 | 0.006979 | 0.006949 | 0.004760 | 0.004759 | 0.006992 | 0.005130 | 0.006982 | 0.006863 | 0.006621 |
| DGP I | Bias | −0.003190 | −0.000588 | −0.002896 | −0.002755 | 0.011494 | −0.002547 | 0.010820 | −0.000592 | −0.002924 | −0.002754 | 0.011021 | −0.002505 | 0.010997 | 0.001057 | −0.000352 |
| DGP I | MSE | 0.000026 | 0.000004 | 0.000020 | 0.000020 | 0.017209 | 0.000020 | 0.016273 | 0.000005 | 0.000020 | 0.000020 | 0.015871 | 0.000020 | 0.016284 | 0.000013 | 0.000000 |
| DGP I | MAE | 0.003670 | 0.000904 | 0.003283 | 0.003270 | 0.014925 | 0.003024 | 0.014594 | 0.000910 | 0.003291 | 0.003246 | 0.014387 | 0.003016 | 0.014745 | 0.002164 | 0.000451 |
| DGP II | Mean | 0.007011 | 0.007122 | 0.005744 | 0.006869 | 0.027836 | 0.005742 | 0.020027 | 0.007111 | 0.005821 | 0.006797 | 0.035098 | 0.005889 | 0.021075 | 0.012112 | 0.009219 |
| DGP II | Median | 0.006827 | 0.007000 | 0.006928 | 0.006832 | 0.007000 | 0.006970 | 0.006997 | 0.007000 | 0.006942 | 0.006837 | 0.007000 | 0.006971 | 0.006998 | 0.012059 | 0.009062 |
| DGP II | Bias | 0.000011 | 0.000122 | −0.001256 | −0.000131 | 0.020836 | −0.001258 | 0.013027 | 0.000111 | −0.001179 | −0.000203 | 0.028098 | −0.001111 | 0.014075 | 0.005112 | 0.002219 |
| DGP II | MSE | 0.000010 | 0.000002 | 0.000016 | 0.000019 | 0.027185 | 0.000016 | 0.020234 | 0.000002 | 0.000017 | 0.000014 | 0.040630 | 0.000019 | 0.020671 | 0.000048 | 0.000006 |
| DGP II | MAE | 0.001267 | 0.000638 | 0.002563 | 0.001665 | 0.024443 | 0.002591 | 0.016931 | 0.000656 | 0.002621 | 0.001658 | 0.031609 | 0.002709 | 0.017945 | 0.005135 | 0.002219 |
| DGP III | Mean | 0.004237 | 0.006413 | 0.004107 | 0.004577 | 0.020065 | 0.004313 | 0.01824 | 0.006436 | 0.004104 | 0.004609 | 0.017441 | 0.004335 | 0.019308 | 0.008654 | 0.007018 |
| DGP III | Median | 0.005104 | 0.006954 | 0.004953 | 0.005102 | 0.006988 | 0.005243 | 0.006968 | 0.006959 | 0.004947 | 0.005108 | 0.006989 | 0.005274 | 0.006970 | 0.007645 | 0.007024 |
| DGP III | Bias | −0.002763 | −0.000587 | −0.002893 | −0.002423 | 0.013065 | −0.002687 | 0.011240 | −0.000564 | −0.002896 | −0.002391 | 0.010441 | −0.002665 | 0.012308 | 0.001654 | 0.000018 |
| DGP III | MSE | 0.000017 | 0.000003 | 0.000021 | 0.000017 | 0.019542 | 0.000019 | 0.017779 | 0.000003 | 0.000021 | 0.000017 | 0.014448 | 0.000019 | 0.019712 | 0.000019 | 0.000000 |
| DGP III | MAE | 0.002999 | 0.000802 | 0.003292 | 0.002905 | 0.016525 | 0.003060 | 0.015196 | 0.000789 | 0.003308 | 0.002886 | 0.013876 | 0.003053 | 0.016259 | 0.002397 | 0.000358 |
| DGP IV | Mean | 0.006987 | 0.007145 | 0.005701 | 0.006995 | 0.021887 | 0.005697 | 0.017672 | 0.007121 | 0.005708 | 0.006927 | 0.025074 | 0.005787 | 0.017221 | 0.012070 | 0.009144 |
| DGP IV | Median | 0.006846 | 0.007000 | 0.006930 | 0.006872 | 0.007000 | 0.006967 | 0.007000 | 0.007000 | 0.006907 | 0.006885 | 0.007000 | 0.006980 | 0.007000 | 0.012140 | 0.009071 |
| DGP IV | Bias | −0.000013 | 0.000145 | −0.001299 | −0.000005 | 0.014887 | −0.001303 | 0.010672 | 0.000121 | −0.001292 | −0.000073 | 0.018074 | −0.001213 | 0.010221 | 0.005070 | 0.002144 |
| DGP IV | MSE | 0.000021 | 0.000003 | 0.000017 | 0.000032 | 0.023086 | 0.000018 | 0.017269 | 0.000003 | 0.000014 | 0.000026 | 0.028071 | 0.000018 | 0.016581 | 0.000055 | 0.000005 |
| DGP IV | MAE | 0.001266 | 0.000671 | 0.002529 | 0.001641 | 0.018523 | 0.002535 | 0.014474 | 0.000692 | 0.002412 | 0.001604 | 0.021612 | 0.002600 | 0.013984 | 0.005082 | 0.002144 |

Over-identified case (ML and BI not reported)

| DGP | Statistic | GMM | EL | ET | CUE | ETEL | HD | ETHD | SEL | SET | SCUE | SETEL | SHD | SETHD |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| DGP I | Mean | 0.005106 | 0.006275 | 0.004553 | 0.004665 | 0.020639 | 0.004451 | 0.017078 | 0.006279 | 0.004540 | 0.004593 | 0.020817 | 0.004459 | 0.016549 |
| DGP I | Median | 0.004763 | 0.006932 | 0.005216 | 0.004805 | 0.006981 | 0.005185 | 0.006981 | 0.006935 | 0.005210 | 0.004800 | 0.006984 | 0.005200 | 0.006983 |
| DGP I | Bias | −0.001894 | −0.000725 | −0.002447 | −0.002335 | 0.013639 | −0.002549 | 0.010078 | −0.000721 | −0.002460 | −0.002407 | 0.013817 | −0.002541 | 0.009549 |
| DGP I | MSE | 0.000018 | 0.000003 | 0.000015 | 0.000024 | 0.017406 | 0.000016 | 0.012898 | 0.000003 | 0.000015 | 0.000020 | 0.016300 | 0.000016 | 0.011297 |
| DGP I | MAE | 0.002544 | 0.000844 | 0.002719 | 0.003029 | 0.016779 | 0.002834 | 0.013127 | 0.000837 | 0.002758 | 0.002966 | 0.016854 | 0.002841 | 0.012563 |
| DGP II | Mean | 0.006964 | 0.007104 | 0.006187 | 0.006851 | 0.028620 | 0.006100 | 0.021861 | 0.007102 | 0.006228 | 0.007027 | 0.032394 | 0.006130 | 0.024782 |
| DGP II | Median | 0.006704 | 0.007000 | 0.006994 | 0.006858 | 0.007000 | 0.006995 | 0.007000 | 0.007000 | 0.006992 | 0.006859 | 0.007000 | 0.006995 | 0.006999 |
| DGP II | Bias | −0.000036 | 0.000104 | −0.000813 | −0.000149 | 0.021620 | −0.000900 | 0.014861 | 0.000102 | −0.000772 | 0.000027 | 0.025394 | −0.000870 | 0.017782 |
| DGP II | MSE | 0.000008 | 0.000002 | 0.000012 | 0.000011 | 0.031451 | 0.000014 | 0.021158 | 0.000002 | 0.000013 | 0.000023 | 0.038390 | 0.000014 | 0.026533 |
| DGP II | MAE | 0.001091 | 0.000562 | 0.002035 | 0.001488 | 0.024862 | 0.002199 | 0.018187 | 0.000557 | 0.002138 | 0.001644 | 0.028764 | 0.002284 | 0.021316 |
| DGP III | Mean | 0.005304 | 0.006376 | 0.004549 | 0.004785 | 0.020496 | 0.004729 | 0.015126 | 0.006389 | 0.004582 | 0.004782 | 0.018792 | 0.004743 | 0.013350 |
| DGP III | Median | 0.005089 | 0.006949 | 0.005410 | 0.005162 | 0.006988 | 0.005595 | 0.006982 | 0.006949 | 0.005430 | 0.005163 | 0.006989 | 0.005598 | 0.006983 |
| DGP III | Bias | −0.001696 | −0.000624 | −0.002451 | −0.002215 | 0.013496 | −0.002271 | 0.008126 | −0.000611 | −0.002418 | −0.002218 | 0.011792 | −0.002257 | 0.006350 |
| DGP III | MSE | 0.000010 | 0.000002 | 0.000015 | 0.000013 | 0.018838 | 0.000014 | 0.011542 | 0.000002 | 0.000015 | 0.000012 | 0.014532 | 0.000014 | 0.007820 |
| DGP III | MAE | 0.002098 | 0.000740 | 0.002732 | 0.002550 | 0.016568 | 0.002594 | 0.011308 | 0.000736 | 0.002726 | 0.002538 | 0.014832 | 0.002570 | 0.009542 |
| DGP IV | Mean | 0.007122 | 0.007073 | 0.006165 | 0.006803 | 0.035439 | 0.006127 | 0.015132 | 0.007093 | 0.006184 | 0.006956 | 0.038059 | 0.006052 | 0.023901 |
| DGP IV | Median | 0.006817 | 0.007000 | 0.006996 | 0.006869 | 0.007000 | 0.007000 | 0.007000 | 0.007000 | 0.006997 | 0.006886 | 0.007000 | 0.006999 | 0.007000 |
| DGP IV | Bias | 0.000122 | 0.000073 | −0.000835 | −0.000197 | 0.028439 | −0.000873 | 0.008132 | 0.000093 | −0.000816 | −0.000044 | 0.031059 | −0.000948 | 0.016901 |
| DGP IV | MSE | 0.000012 | 0.000002 | 0.000012 | 0.000014 | 0.040050 | 0.000014 | 0.009311 | 0.000002 | 0.000014 | 0.000020 | 0.046360 | 0.000013 | 0.024994 |
| DGP IV | MAE | 0.001075 | 0.000548 | 0.001931 | 0.001405 | 0.031775 | 0.002118 | 0.011626 | 0.000561 | 0.002087 | 0.001464 | 0.034464 | 0.002218 | 0.020501 |

Notes: True value: σ = 0.007. MSE = mean squared error. MAE = mean absolute error.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Boaretto, G.; Laurini, M.P. DSGE Estimation Using Generalized Empirical Likelihood and Generalized Minimum Contrast. Entropy 2025, 27, 141. https://doi.org/10.3390/e27020141