Article

Bayesian Treatments for Panel Data Stochastic Frontier Models with Time Varying Heterogeneity

1 Enterprise Risk Solutions, Moody’s Analytics Inc., San Francisco, CA 94105, USA
2 Department of Economics, Rice University, Houston, TX 77005, USA
3 Department of Economics, Lancaster University Management School, Lancaster LA1 4YX, UK
* Author to whom correspondence should be addressed.
Econometrics 2017, 5(3), 33; https://doi.org/10.3390/econometrics5030033
Submission received: 22 May 2017 / Revised: 12 June 2017 / Accepted: 21 June 2017 / Published: 28 July 2017
(This article belongs to the Special Issue Recent Developments in Panel Data Methods)

Abstract
This paper considers a linear panel data model with time varying heterogeneity. Bayesian inference techniques organized around Markov chain Monte Carlo (MCMC) are applied to implement new estimators that combine smoothness priors on unobserved heterogeneity and priors on the factor structure of unobserved effects. The latter have been addressed in a non-Bayesian framework by Bai (2009) and Kneip et al. (2012), among others. Monte Carlo experiments are used to examine the finite-sample performance of our estimators. An empirical study of efficiency trends in the largest banks operating in the U.S. from 1990 to 2009 illustrates our new estimators. The study concludes that scale economies in intermediation services have been largely exploited by these large U.S. banks.
JEL Classification:
C23; C11; G21; D24

1. Introduction

In this paper, we consider two panel data models with unobserved heterogeneous time-varying effects; one with individual effects treated as random functions of time, and the other with common factors whose number is unknown and whose effects are firm-specific. This paper has two distinctive features and can be considered as a generalization of traditional panel data models. First, the individual effects that are assumed to be heterogeneous across units, as well as to be time varying, are treated non-parametrically, following the spirit of the models of Bai (2009, 2013), Li et al. (2011), Kneip et al. (2012), Ahn et al. (2013), and Bai and Carrion-i-Silverstre (2013). Second, we develop methods that allow us to interpret the effects as measures of technical efficiency in the spirit of the structural productivity approaches of Olley and Pakes (1996) and non-structural approaches from the stochastic frontier literature (Kumbhakar and Lovell 2000; Fried et al. 2008). Levinsohn and Petrin (2003), Kim et al. (2016), and Ackerberg et al. (2015) have provided rationales for various treatments for the endogeneity of inputs and the appropriate instruments or control functions to deal with the potential endogeneity of inputs and of technical change based on variants of the Olley-Pakes basic model set up. Although we do not explicitly address entry/exit in this paper, we do address dynamics, as well as the potential endogeneity of inputs and the correlation of technical efficiency with input choice (Amsler et al. 2016). The general factor structure we utilize can pick up potential nonlinear selection effects that may be introduced when using a balanced panel of firms. Our dynamic heterogeneity estimators could be interpreted as general controls for any mis-specified factors, such as selectivity due to entry/exit, that are correlated with the regressors and could ultimately bias slope coefficients. Olley and Pakes (1996) utilize series expansions and kernel smoothers to model such selectivity. Our second estimator instead utilizes a general factor structure, which is a series expansion with a different set of basis functions than those used in the polynomial expansions employed by Olley-Pakes. Alternatively, we can interpret the effects based on a panel stochastic frontier production specification that formally models productive efficiency as a stochastic shortfall in production, given the input use. Van den Broeck et al. (1994) formulate a Bayesian approach under a random effects composed error model, while Koop et al. (1997) and Osiewalski and Steel (1998) provided extensions to the fixed effect model utilizing Gibbs sampling and Bayesian numerical methods, but these studies assumed that the individual effects were time invariant. Comparisons between the Bayes and classical stochastic frontier estimators have been made by Kim and Schmidt (2000). The estimators we consider are specified in the same spirit as Tsionas (2006), who assumed that the effects evolve log-linearly. We do not force the time-varying effects to follow a specific parametric functional form and utilize Bayesian integration methods and a Markov chain-based sampler to provide the slope parameter and heterogeneous individual effects inferences based on estimators of the posterior means of the model parameters.
The paper is organized as follows. Section 2 describes the first model setup and parameter priors. Section 3 introduces the second model and the corresponding Bayesian inferences, followed by Section 4, which presents the Monte Carlo simulation results. The estimation of the translog distance function is briefly discussed and the empirical results from the Bayesian estimation of the multi-output/multi-input technology employed by the U.S. banking industry in providing intermediation services are presented in Section 5. Section 6 provides the concluding remarks.

2. Model 1: A Panel Data Model with Nonparametric Time Effects

Our first model is based on a balanced design with $T$ observations for $n$ individual units. Observations in the panel can be represented in the form $(y_{it}, x_{it})$, $i = 1,\ldots,n$; $t = 1,\ldots,T$, where the index $i$ denotes the $i$th individual unit, and the index $t$ denotes the $t$th time period.
A panel data model with heterogeneous time-varying effects is:
$$y_{it} = x_{it}\beta + \gamma_{it} + v_{it} \qquad (1)$$
where $y_{it}$ is the response variable, $x_{it}$ is a $1\times p$ vector of the explanatory variables, $\beta$ is a $p\times 1$ vector of parameters, and $\gamma_{it}$ is a nonconstant and unknown individual effect. We make the standard assumption that the measurement error $v_{it} \sim \text{NID}(0,\sigma^2)$. The time-varying heterogeneity is assumed to be independent across units. This assumption is quite reasonable in many applications, particularly in production/cost stochastic frontier models where the effects measure technical efficiency levels. A firm’s efficiency level primarily relies on its own factors, such as its executives’ managerial skills, firm size, and operational structure, and should thus be heterogeneous across firms. These factors usually change over time, as does the firm’s efficiency level.
For the ith unit, the model is:
$$Y_i = X_i\beta + \gamma_i + v_i, \quad i = 1,\ldots,n \qquad (2)$$
where Y i ,   X i , and γ i contain the stacked vectors of dimension T for cross-section i.
When interpreting the effects as firm efficiencies, as is done in stochastic frontier analysis (Pitt and Lee 1981; Schmidt and Sickles 1984), the estimation of time-varying technical efficiency levels is as important as that of the slope parameters.
A difference between our model and many other Bayesian approaches in the literature is that no functional form for the prior distribution of the unobserved heterogeneous individual effects is imposed. Instead of resorting to the classical nonparametric regression techniques (Kneip et al. 2012), a Markov chain Monte Carlo (MCMC) algorithm is implemented to estimate the model. We can consider this to be a generalization of Koop and Poirier (2004) in the case of panel data, including both individual-specific and time-varying effects. Moreover, our model does not rely on the restrictive conjugate prior formulation for the time varying individual-effects.
A Bayesian analysis of the panel data model set up above requires specification of the prior distributions over the parameters (γ, β, σ) and computation of the posterior using a Bayesian learning process:
$$p(\beta,\gamma,\sigma \mid Y, X, \omega) \propto p(\beta,\sigma,\gamma)\, l(Y, X; \beta,\gamma,\sigma). \qquad (3)$$
The prior for the individual effect γ i is not assumed to follow a normal distribution; instead, it is only assumed that the first-order or second-order difference of γi follows a normal prior.
$$p(\gamma) \propto \prod_{i=1}^{n}\exp\left(-\frac{\gamma_i' Q \gamma_i}{2\omega^2}\right) = \exp\left(-\frac{1}{2\omega^2}\,\gamma'(I_n\otimes Q)\gamma\right) \qquad (4)$$
where $Q = D'D$, and $D$ is the $(T-1)\times T$ matrix whose elements are $D_{tt} = 1$ for $t = 1,\ldots,T-1$, $D_{t-1,t} = -1$ for all $t = 2,\ldots,T$, and zero otherwise. The information implied by this prior is that $\gamma_{i,t} - \gamma_{i,t-1} \sim N(0,\omega^2)$, or $D\gamma_i \sim \text{IID } N(0,\omega^2 I_{T-1})$. $\omega$ is the smoothness parameter that indexes the degree of smoothness. $\omega$ can be considered as a hyper-parameter, or it can be assumed to have its own prior, which is explained later in this section. Provided the continuity and first-order differentiability of $\gamma_{i,t}$, this assumption says that the first derivative of the time-varying function $\gamma_{i,t}$ in (4) is a smooth function of time. One can assume second-order differentiability instead, which is implied by the condition that $\gamma_{it} - 2\gamma_{i,t-1} + \gamma_{i,t-2} \sim N(0,\omega^2)$, or $D^{(2)}\gamma_i \sim \text{IID } N(0,\omega^2 I_{T-2})$ and $Q = D^{(2)\prime} D^{(2)}$.
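As a concrete illustration of the smoothness prior, the short sketch below builds the first- and second-order difference operators $D$ and $D^{(2)}$ and the corresponding $Q$ matrices with numpy; the helper name difference_matrix and the choice of $T$ are our own illustrative assumptions.

```python
import numpy as np

def difference_matrix(T, order=1):
    """(T - order) x T matrix whose rows take first or second differences of a length-T series."""
    return np.diff(np.eye(T), n=order, axis=0)

T = 20
D1 = difference_matrix(T, order=1)   # D gamma_i ~ N(0, omega^2 I_{T-1}) under the first-order prior
Q1 = D1.T @ D1                       # Q = D'D entering the prior kernel in (4)
D2 = difference_matrix(T, order=2)   # second-order smoothness alternative
Q2 = D2.T @ D2                       # Q = D^(2)' D^(2)
```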
A non-informative distribution is assumed for the joint prior of the slope parameter β and the unknown variance term σ2.
$$p(\beta,\sigma) \propto \sigma^{-1} \qquad (5)$$
This is equivalent to assuming that the prior distribution is uniform on ( β , log σ ) .
With the assumptions of the priors above, the joint prior of the model parameters is:
$$p(\beta,\sigma,\gamma) \propto \sigma^{-1}\prod_{i=1}^{n}\exp\left(-\frac{\gamma_i' Q\gamma_i}{2\omega^2}\right) = \sigma^{-1}\exp\left(-\frac{1}{2\omega^2}\,\gamma'(I_n\otimes Q)\gamma\right) \qquad (6)$$
The corresponding sample likelihood function is:
$$l(Y, X; \beta,\gamma,\sigma) \propto \sigma^{-nT}\exp\left\{-\frac{1}{2\sigma^2}(Y - X\beta - \gamma)'(Y - X\beta - \gamma)\right\} \qquad (7)$$
The likelihood is formed as the product over the $nT$ independent disturbance terms, which follow the normal distribution for the idiosyncratic error, assumed to be NID(0, $\sigma^2$). Applying Bayes’ theorem, the probability density function is updated utilizing the information from the data to form the joint posterior distribution given by:
$$p(\beta,\gamma,\sigma \mid Y, X,\omega) \propto \sigma^{-(nT+1)}\exp\left\{-\frac{1}{2\sigma^2}(Y - X\beta - \gamma)'(Y - X\beta - \gamma)\right\}\times\exp\left\{-\frac{1}{2\omega^2}\,\gamma'(I_n\otimes Q)\gamma\right\} \qquad (8)$$
The model in (1) and (2) is identified provided we have a proper prior for the $\gamma_i$'s. To accomplish this, we use (4) with a proper prior for $\omega$: $p(\omega) \propto \omega^{-(\bar n+1)}\exp\left(-\frac{\bar q}{2\omega^2}\right)$, $\bar n \geq 0$, $\bar q > 0$ (see (17) below), where $\bar q$ is the sum of squares with $\bar n$ observations. The “non-informative” case is to let $\bar n, \bar q \to 0$. We use $\bar n = 1$ and $\bar q = 10^{-6}$ following standard practice (Geweke 1993). The posterior is well-defined and integrable. Such issues have been dealt with by Koop and Poirier (2004), whose spline method is equivalent to the difference prior we adopt.
To proceed with further inference, we need to solve this analytically. However, the joint posterior distribution does not have a standard form and taking draws directly from it is problematic. Therefore, we utilize Gibbs sampling to perform Bayesian inference. The Gibbs sampler is commonly used in such situations because of the desirable result that iterative sampling from the conditional distributions will lead to a sequence of random variables converging to the joint distribution. A general discussion on the use of Gibbs sampling is provided by Gelfand and Smith (1990), who compare the Gibbs sampler with alternative sampling-based algorithms. A more detailed discussion is given in Gelman et al. (2003). Gibbs sampling is well-adapted to sampling the posterior distributions for our model since it is possible to derive the collection of distributions.
The Gibbs sampling algorithm we employ generates a sequence of random samples from the conditional posterior distributions of each block of parameters, in turn conditional on the current values of the other blocks of parameters, and it thus generates a sequence of samples that constitute a Markov Chain, where the stationary distribution of that Markov chain is the desired joint distribution of all the parameters.
In order to derive the conditional posterior distributions of β, γ, and σ, we first rewrite the joint posterior in (8) as:
$$p(Y \mid \beta,\gamma,\sigma) \propto \sigma^{-nT}\exp\left\{-\frac{1}{2\sigma^2}(Y - X\beta-\gamma)'(Y - X\beta-\gamma)\right\} \propto \sigma^{-nT}\exp\left\{-\frac{1}{2\sigma^2}\left[(Y - X\hat\beta-\gamma)'(Y - X\hat\beta-\gamma) + (\beta-\hat\beta)'(X'X)(\beta-\hat\beta)\right]\right\} \qquad (9)$$
where $\hat\beta = (X'X)^{-1}X'(Y - \gamma)$.
The joint posterior can be rewritten as:
$$p(\beta,\gamma,\sigma \mid Y,X,\omega) \propto \sigma^{-(nT+1)}\exp\left\{-\frac{1}{2\omega^2}\,\gamma'(I_n\otimes Q)\gamma\right\}\times\exp\left\{-\frac{1}{2\sigma^2}\left[(Y - X\hat\beta-\gamma)'(Y - X\hat\beta-\gamma) + (\beta-\hat\beta)'(X'X)(\beta-\hat\beta)\right]\right\}. \qquad (10)$$
From (10), the conditional distribution of $\beta$ can be shown to follow the multivariate normal distribution with mean $\hat\beta$ and covariance matrix $\sigma^2(X'X)^{-1}$:
$$p(\beta \mid Y, X, \gamma,\sigma,\omega) \propto \exp\left\{-\frac{1}{2\sigma^2}(\beta-\hat\beta)'(X'X)(\beta-\hat\beta)\right\} \qquad (11)$$
The conditional distribution of $\beta$, therefore, is given by:
$$\beta \mid \sigma,\gamma,\omega, Y, X \sim f_k\!\left(\beta \mid \hat\beta,\ \sigma^2(X'X)^{-1}\right) \qquad (12)$$
In order to derive the conditional distribution of the individual effect γ i , we rewrite the joint posterior distribution as:
$$p(\beta,\gamma,\sigma \mid Y,X,\omega) \propto \sigma^{-(nT+1)}\exp\left\{-\frac{1}{2\sigma^2}(\gamma - Y + X\beta)'(\gamma - Y + X\beta) - \frac{1}{2\omega^2}\,\gamma'(I_n\otimes Q)\gamma\right\} \propto \sigma^{-(nT+1)}\exp\left\{-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(\gamma_i - Y_i + X_i\beta)'(\gamma_i - Y_i + X_i\beta) - \frac{1}{2\omega^2}\sum_{i=1}^{n}\gamma_i'Q\gamma_i\right\} \qquad (13)$$
Under the assumption that the effects, the γ i s are independent across units, the conditional posterior distribution of γ i | β , σ , ω , { γ j , j i } , Y , X is the same as that of γ i | β , σ , ω , Y , X , and is distributed as a multivariate normal.
$$\gamma_i \mid \beta,\sigma,\omega,\{\gamma_j, j\neq i\}, Y, X \ \sim\ \gamma_i \mid \beta,\sigma,\omega, Y, X \ \propto\ \phi_T\!\left(\gamma_i \mid \hat\gamma_i,\ \sigma^2\omega^2 V\right) \qquad (14)$$
where the mean $\hat\gamma_i = \omega^2 V(Y_i - X_i\beta)$ and the covariance matrix is $\sigma^2\omega^2 V$ with $V = (\sigma^2 Q + \omega^2 I_T)^{-1}$, for $i = 1,\ldots,n$. The detailed derivation is presented in Appendix A.
The conditional posterior distribution for σ 2 is given below in (15). It is clear that the sum of the squared residuals ( Y X β γ ) ( Y X β γ ) / σ 2 has a conditional chi-squared distribution with nT degrees of freedom, as shown in (16):
$$p(\sigma^2 \mid \beta,\gamma, Y, X,\omega) \propto (\sigma^2)^{-nT/2-1}\exp\left\{-\frac{1}{2\sigma^2}(Y - X\beta-\gamma)'(Y - X\beta-\gamma)\right\} \qquad (15)$$
$$\frac{(Y - X\beta-\gamma)'(Y - X\beta-\gamma)}{\sigma^2}\,\Big|\,\beta,\gamma,\omega, Y, X \ \sim\ \chi^2_{nT}. \qquad (16)$$
If the smoothing parameter $\omega$ is also assumed to follow its own prior instead of being treated as a constant, then its conditional posterior distribution can also be derived. Suppose that $\bar q/\omega^2 \sim \chi^2_{\bar n}$, where $\bar n, \bar q \geq 0$ are hyper-parameters that control the prior degree of smoothness that is imposed on the $\gamma_{it}$'s. Then, the conditional posterior distribution of $\omega^2$ is derived as:
$$\frac{\bar q + \sum_{i=1}^{n}\gamma_i'Q\gamma_i}{\omega^2}\,\Big|\,\beta,\sigma,\gamma, Y, X \ \sim\ \frac{\bar q + \sum_{i=1}^{n}\gamma_i'Q\gamma_i}{\omega^2}\,\Big|\,\gamma, Y, X \ \sim\ \chi^2_{\bar n + n} \qquad (17)$$
Generally, small values of the prior “sum of squares” q ¯ / n ¯ correspond to smaller values of ω and thus a higher degree of smoothness. Alternatively, we can choose the smoothing parameter ω using cross validation, which in a Bayesian context is similar to cross validation for tuning parameters in classical nonparametric regression. We choose the smoothing parameter ω so that the marginal likelihood (obtained as in Perrakis et al. 2014) is maximized.
A Gibbs sampler is then used to draw observations from the conditional posteriors based on (11) through (17). Draws from these conditional posteriors will eventually converge to the joint posterior in (8). Since the conditional posterior distribution of β follows the multivariate normal distribution displayed in (12), it will be straightforward to sample from it. For the individual effects γ i , sampling is also straightforward since its conditional posterior follows a multivariate normal distribution with a mean vector γ ^ i and covariance matrix σ 2 ω 2 V , as expressed in (14).
Finally, to draw samples from the conditional posterior distribution of the unobserved measurement error variance $\sigma^2$, we take two simple steps. First, we draw directly from the distribution of $(Y - X\beta - \gamma)'(Y - X\beta - \gamma)/\sigma^2$, which is shown in (16) to follow a chi-squared distribution with $nT$ degrees of freedom. Next, we assign the value $(Y - X\beta - \gamma)'(Y - X\beta - \gamma)/Chirnd$ to $\sigma^2$, where $Chirnd$ is the randomly generated variable drawn from the $\chi^2_{nT}$ in the first step.
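To make the sampling scheme concrete, the following is a minimal numpy sketch of one possible implementation of the Gibbs sampler described above, cycling through the conditionals (12), (14), (16), and (17). The function name gibbs_model1, the stacking of $Y$ by unit, the pooled OLS starting values, and the default settings are illustrative assumptions rather than part of the paper.

```python
import numpy as np

def gibbs_model1(Y, X, T, n_draws=5000, n_bar=1.0, q_bar=1e-6, seed=0):
    """Gibbs sampler sketch for Model 1. Y is the (n*T,) response stacked by unit; X is (n*T, p)."""
    rng = np.random.default_rng(seed)
    nT, p = X.shape
    n = nT // T
    D = np.diff(np.eye(T), axis=0)                # first-difference operator
    Q = D.T @ D
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ Y                      # start at pooled OLS
    gamma = np.zeros(nT)
    sigma2, omega2 = 1.0, 1.0
    draws = []
    for _ in range(n_draws):
        # beta | rest ~ N(beta_hat, sigma2 (X'X)^{-1}), beta_hat = (X'X)^{-1} X'(Y - gamma)   (12)
        beta_hat = XtX_inv @ X.T @ (Y - gamma)
        beta = rng.multivariate_normal(beta_hat, sigma2 * XtX_inv)
        # gamma_i | rest ~ N(omega2 V (Y_i - X_i beta), sigma2 omega2 V), V = (sigma2 Q + omega2 I)^{-1}   (14)
        V = np.linalg.inv(sigma2 * Q + omega2 * np.eye(T))
        resid = (Y - X @ beta).reshape(n, T)
        for i in range(n):
            gamma[i*T:(i+1)*T] = rng.multivariate_normal(omega2 * V @ resid[i], sigma2 * omega2 * V)
        # sigma2 | rest: RSS / sigma2 ~ chi2(nT)   (16)
        rss = np.sum((Y - X @ beta - gamma) ** 2)
        sigma2 = rss / rng.chisquare(nT)
        # omega2 | rest: (q_bar + sum_i gamma_i' Q gamma_i) / omega2 ~ chi2(n_bar + n)   (17)
        g = gamma.reshape(n, T)
        ssq = q_bar + np.einsum('it,ts,is->', g, Q, g)
        omega2 = ssq / rng.chisquare(n_bar + n)
        draws.append((beta.copy(), sigma2, omega2))
    return draws
```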

3. Model 2: A Panel Data Model with Factors

We next consider a somewhat different specification for the panel data model, wherein the effects are treated as a linear combination of unknown basis functions or factors:
$$y_{it} = \alpha_i + x_{it}\beta + \phi_t\gamma_i + v_{it} = x_{it}\beta + \sum_{g=1}^{G}\phi_{tg}\gamma_{ig} + v_{it}. \qquad (18)$$
Here, ϕ t is a 1 × G vector of common factors, γ i is a G × 1 vector of individual-specific factor loadings, and α i represents the firm-specific and time invariant effects. For these effects, we retain the Schmidt and Sickles (1984) interpretation of the fixed effects as measures of unit specific time invariant productivity (inefficiency), but we embed it in a Bayesian framework using the Bayesian Fixed Effects Specification (BFES) of Koop et al. (1997). Following their model specification, the BFES is characterized by marginal prior independence between the individual effects. Therefore, the effects are assumed not to be linked across firms, as would be the case for the spatial stochastic frontier considered by Glass et al. (2016).
As for measuring the inefficiency, the essence of the Schmidt and Sickles (1984) device in the Bayesian context is that, during the $s$th of the total $S$ (MCMC) iterations or paths, inefficiency is constructed as the difference of the individual effect from the maximum effect across firms: $u_i^{(s)} = \alpha_i^{(s)} - \max_{j=1,\ldots,n}\alpha_j^{(s)}$. Thus, one counts the most efficient firm in the sample as 100% efficient. However, there is uncertainty as to which firm we should use for benchmarking and this is resolved by averaging, $\hat u_i = S^{-1}\sum_{s=1}^{S}u_i^{(s)}$, to account for both the parameter uncertainty, as well as the uncertainty regarding the best performing firm. The efficiency level of the most efficient firm in the sample approaches 1 when $S\to\infty$. This method has much in common with the Cornwell et al. (1990) (CSS) estimator of time and firm specific productivity effects. The difference is that at each path of the Gibbs sampler, we have new draws for the $\alpha_i$'s, a new value for $\max_{j=1,\ldots,n}\alpha_j^{(s)}$, and thus a new value for $u_i^{(s)}$. While CSS have one set of estimates and therefore a single firm to use as the benchmark, in the Bayesian approach, we have draws from the posterior of the $\alpha_i$'s. There is also uncertainty as to which firm is the benchmark since we are simulating from the finite sample distribution of the $\alpha_i$'s and thus, we re-compute $\max_{j=1,\ldots,n}\alpha_j^{(s)}$ and the value of $u_i^{(s)}$ each time.
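In code, the benchmarking and averaging step reduces to a few operations on the matrix of posterior draws. The sketch below assumes the draws of the $\alpha_i$'s are stored in an $S\times n$ array; the function name is our own.

```python
import numpy as np

def efficiency_from_draws(alpha_draws):
    """alpha_draws: (S, n) array of MCMC draws of the unit effects alpha_i.
    Returns posterior-mean inefficiencies u_hat_i <= 0 and efficiency levels exp(u_hat_i)."""
    u = alpha_draws - alpha_draws.max(axis=1, keepdims=True)   # u_i^(s) = alpha_i^(s) - max_j alpha_j^(s)
    u_hat = u.mean(axis=0)                                     # average over the S MCMC paths
    return u_hat, np.exp(u_hat)
```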
The method can be extended to the case in which the time effects are nonlinear, e.g., where $\alpha_{it} = \sum_{l=1}^{L}\omega_{il}t^{l}$. With this specification, we can allow for a firm-specific polynomial trend, where $\omega_{il}$ represents the firm-specific coefficients. Of course other covariates can also be included in the time effects if so desired.
The model can be written for the ith unit as:
$$\underset{(T\times 1)}{Y_i} = \alpha_i\iota_T + \underset{(T\times k)}{X_i}\,\underset{(k\times 1)}{\beta} + \underset{(T\times G)}{\Phi}\,\underset{(G\times 1)}{\gamma_i} + \underset{(T\times 1)}{v_i}, \quad i = 1, 2,\ldots, n \qquad (19)$$
or for the t th time period as:
$$\underset{(n\times 1)}{Y_t} = \underset{(n\times 1)}{\alpha} + \underset{(n\times k)}{X_t}\,\underset{(k\times 1)}{\beta} + \underset{(n\times G)}{\Gamma}\,\underset{(G\times 1)}{\phi_t} + \underset{(n\times 1)}{v_t}, \quad t = 1, 2,\ldots, T \qquad (20)$$
where $\Phi = [\phi_1'\ \cdots\ \phi_T']'$ and $\Gamma = [\gamma_1\ \cdots\ \gamma_n]'$. If we set $\phi_{t1} \equiv 1$, then $\gamma_{i1}$ acts as an individual-specific intercept. Effectively, the first column of $\Phi$ contains ones. The model for all observations can be written as $Y = X\beta + (I_n\otimes\Phi)\gamma + v = X\beta + (I_T\otimes\Gamma)\phi + v$, where $\gamma = \mathrm{vec}(\Gamma)$ and $\phi = \mathrm{vec}(\Phi)$.
This model setting follows that in Kneip et al. (2012), and it satisfies the following structural assumption, which is Assumption 1 from Kneip et al:
Assumption 1:
For some fixed $L \in \{0, 1, 2,\ldots\}$ with $L < T$, there exists an $L$-dimensional space $\mathcal{L}_T$ with $\{\phi_i(1), \phi_i(2),\ldots,\phi_i(T)\} \in \mathcal{L}_T$ such that the time-varying individual effect $\phi_i(t) = \phi_t\gamma_i$ holds with probability 1.
We define the priors similarly to Model 1. Regarding the slope parameter $\beta$ and the variance of the noise term $\sigma$, we continue to assume a non-informative prior: $p(\beta,\sigma) \propto \sigma^{-1}$. For the common factors, it is reasonable to assume that:
$$p(\phi_1,\phi_2,\ldots,\phi_T) \propto \exp\left(-\frac{\sum_{t=1}^{T}(\phi_t-\phi_{t-1})(\phi_t-\phi_{t-1})'}{2\omega^2}\right) = \exp\left(-\frac{1}{2\omega^2}\,\mathrm{tr}\,\Phi' Q\Phi\right). \qquad (21)$$
This prior is consistent with the presence of common factors that evolve smoothly over time. The degree of smoothness is controlled by the parameter ω and by setting ϕ 0 = 0 . Smoothness in this context then comes from the specification of the random walk prior above as essentially a spline.
For the loadings, we assume $\gamma_i \sim \text{IID } N_G(\bar\gamma,\Sigma)$. An alternative that we do not pursue but which may attenuate the proliferation of factors would be to stochastically constrain the loadings to approach zero in the following sense: if $\Gamma_{(n\times G)} = [\gamma^{(1)},\ldots,\gamma^{(G)}]$, then $\gamma^{(1)} \sim N_n(\bar\gamma,\psi^2 I_n)$ and $\gamma^{(g)} \sim N_n(\alpha^{g}\bar\gamma,\lambda^{g}\psi^2 I_n)$ for $g = 1,\ldots,G$, where $\alpha$, $\lambda$ are parameters between zero and one. The posterior kernel distribution is:
$$p(\beta,\sigma,\phi,\gamma \mid Y, X) \propto \sigma^{-(nT+1)}\exp\left[-\frac{\sum_{i=1}^{n}\sum_{t=1}^{T}(y_{it}-x_{it}\beta-\phi_t\gamma_i)^2}{2\sigma^2}-\frac{\sum_{t=2}^{T}(\phi_t-\phi_{t-1})(\phi_t-\phi_{t-1})'}{2\omega^2}\right]\prod_{i=1}^{n}p(\gamma_i \mid \zeta) \qquad (22)$$
where $\zeta$ denotes any hyper-parameters that are present in the prior of the $\gamma_i$'s. When $\gamma_i \sim \text{IID } N_G(\bar\gamma,\Sigma)$, we have:
$$p(\beta,\sigma,\phi,\gamma,\bar\gamma,\Sigma \mid Y,X) \propto \sigma^{-(nT+1)}\exp\left[-\frac{\sum_{i=1}^{n}\sum_{t=1}^{T}(y_{it}-x_{it}\beta-\phi_t\gamma_i)^2}{2\sigma^2}-\frac{\sum_{t=1}^{T}(\phi_t-\phi_{t-1})(\phi_t-\phi_{t-1})'}{2\omega^2}\right] \times |\Sigma|^{-n/2}\exp\left[-\frac{1}{2}\sum_{i=1}^{n}(\gamma_i-\bar\gamma)'\Sigma^{-1}(\gamma_i-\bar\gamma)\right] p(\bar\gamma,\Sigma) \qquad (23)$$
where $p(\bar\gamma,\Sigma)$ denotes the prior on the hyper-parameters. A reasonable choice is $p(\bar\gamma \mid \Sigma) \propto$ constant and $p(\Sigma) \propto |\Sigma|^{-(\bar\nu+1)/2}\exp\left(-\frac{1}{2}\mathrm{tr}\,\bar A\Sigma^{-1}\right)$, which leads to:
$$p(\beta,\sigma,\phi,\gamma,\bar\gamma,\Sigma \mid Y,X) \propto \sigma^{-(nT+1)}|\Sigma|^{-(n+\bar\nu+1)/2}\exp\left[-\frac{\sum_{i=1}^{n}\sum_{t=1}^{T}(y_{it}-x_{it}\beta-\phi_t\gamma_i)^2}{2\sigma^2}-\frac{\sum_{t=1}^{T}(\phi_t-\phi_{t-1})(\phi_t-\phi_{t-1})'}{2\omega^2}-\frac{1}{2}\mathrm{tr}(A\Sigma^{-1})\right] \qquad (24)$$
where $A = \bar A + \sum_{i=1}^{n}(\gamma_i-\bar\gamma)(\gamma_i-\bar\gamma)'$.
In order to proceed with Bayesian inference, we again use the Gibbs sampling algorithm. For our Model 2 specification, the implementation of Gibbs sampling is rather straightforward since we can analytically derive the conditional posteriors for the parameters in which we are interested. In what follows, we use the notation $Y := Y - \alpha\otimes\iota_T$. The conditional posteriors are:
$$\beta \mid \sigma,\phi,\gamma,\bar\gamma,\Sigma, Y, X \ \sim\ N_k\!\left(\bar\beta,\ \sigma^2(X'X)^{-1}\right), \quad \text{where } \bar\beta = (X'X)^{-1}X'\big(Y - (I_n\otimes\Phi)\gamma\big) \qquad (25)$$
$$\frac{\big(Y - X\beta - (I_n\otimes\Phi)\gamma\big)'\big(Y - X\beta - (I_n\otimes\Phi)\gamma\big)}{\sigma^2}\,\Big|\,\beta,\gamma,\phi,\bar\gamma,\Sigma \ \sim\ \chi^2_{nT} \qquad (26)$$
$$\bar\gamma \mid \beta,\sigma,\phi,\gamma,\Sigma, Y, X \ \sim\ \bar\gamma \mid \gamma,\Sigma, Y, X \ \sim\ N_G\!\left(n^{-1}\textstyle\sum_{i=1}^{n}\gamma_i,\ n^{-1}\Sigma\right) \qquad (27)$$
$$\gamma_i \mid \beta,\sigma,\bar\gamma,\Sigma, Y, X \ \sim\ N_G\!\left(\hat\gamma_i,\ \sigma^2(\Phi'\Phi+\sigma^2\Sigma^{-1})^{-1}\right) \qquad (28)$$
where $\hat\gamma_i = (\Phi'\Phi+\sigma^2\Sigma^{-1})^{-1}(\Phi' e_i + \sigma^2\Sigma^{-1}\bar\gamma)$, $e_i = y_i - X_i\beta$, for each $i = 1,\ldots,n$,
$$\phi_t \mid \beta,\sigma,\gamma,\bar\gamma,\Sigma, Y, X, \{\phi_\tau,\tau\neq t\} \ \sim\ N_G\!\left(\hat\phi_t,\ \sigma^2\omega^2(\omega^2\Gamma'\Gamma+2\sigma^2 I_G)^{-1}\right) \qquad (29)$$
where $\hat\phi_t = (\omega^2\Gamma'\Gamma+2\sigma^2 I_G)^{-1}\big(\omega^2\Gamma' e_t + \sigma^2(\phi_{t-1}+\phi_{t+1})\big)$, $e_t = y_t - X_t\beta$, for each $t = 1,\ldots,T$.
Using a Gibbs sampler, we draw observations from the conditional posteriors (25) to (29). Draws from the conditional posteriors will eventually converge to the joint posterior (24). The conditional posterior distribution of $\beta$ follows the multivariate normal (25) and it is straightforward to sample from that distribution. To draw samples from the conditional posterior distribution of the unobserved measurement error variance $\sigma^2$, we first draw directly from the distribution of $(Y - X\beta - (I_n\otimes\Phi)\gamma)'(Y - X\beta - (I_n\otimes\Phi)\gamma)/\sigma^2$, which is shown in (26) to follow a chi-squared distribution with $nT$ degrees of freedom, and then assign the value $(Y - X\beta - (I_n\otimes\Phi)\gamma)'(Y - X\beta - (I_n\otimes\Phi)\gamma)/Chirnd$ to $\sigma^2$, where $Chirnd$ is the generated random variable that follows $\chi^2_{nT}$ in the first step.
For the mean parameter $\bar\gamma$, sampling is also straightforward since its conditional posterior follows a multivariate normal distribution. The variance matrix $\Sigma$ follows an inverted Wishart distribution. For the unknown factor loadings $\gamma_i$ and the corresponding common factors $\phi_t$, we can draw directly from the multivariate normal distributions in (28) and (29). Finally, the individual firm effects $\alpha_i$ can be drawn using the procedure in Koop et al. (1997). This involves standard computations as the Bayesian fixed effects are drawn from normal posterior conditional distributions. The difficult distributional issues involved in deriving the analytical finite sample distribution of the parameters and estimates of relative efficiency are resolved through the MCMC procedure used to generate $u_i^{(s)} = \alpha_i^{(s)} - \max_{j=1,\ldots,n}\alpha_j^{(s)}$, a fact that has been noted by Koop et al. (1997). $\alpha_i^{(s)}$ can be calculated from the posterior draws of $\beta^{(s)}$ and $\gamma_i^{(s)}$, and thus $u_i^{(s)}$ can also be calculated.
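The loading and factor blocks are the only non-standard steps, so we sketch them below. This is a schematic numpy implementation of the draws in (28) and (29) for fixed values of the other blocks; the residual matrix E, the boundary convention for $\phi_0$ and $\phi_{T+1}$, and the function name are our own simplifying assumptions.

```python
import numpy as np

def draw_loadings_and_factors(E, Phi, Gamma, sigma2, omega2, Sigma_inv, gamma_bar, rng):
    """One sweep over the loading block (28) and the factor block (29).
    E is the (n, T) matrix of current residuals e_it = y_it - x_it beta (fixed effects removed)."""
    n, T = E.shape
    G = Phi.shape[1]
    # gamma_i | rest ~ N_G(gamma_hat_i, sigma2 (Phi'Phi + sigma2 Sigma^{-1})^{-1})
    A_inv = np.linalg.inv(Phi.T @ Phi + sigma2 * Sigma_inv)
    for i in range(n):
        mean_i = A_inv @ (Phi.T @ E[i] + sigma2 * Sigma_inv @ gamma_bar)
        Gamma[i] = rng.multivariate_normal(mean_i, sigma2 * A_inv)
    # phi_t | rest ~ N_G(phi_hat_t, sigma2 omega2 (omega2 Gamma'Gamma + 2 sigma2 I_G)^{-1})
    B_inv = np.linalg.inv(omega2 * Gamma.T @ Gamma + 2.0 * sigma2 * np.eye(G))
    for t in range(T):
        phi_prev = Phi[t - 1] if t > 0 else np.zeros(G)        # phi_0 = 0 by assumption
        phi_next = Phi[t + 1] if t < T - 1 else np.zeros(G)    # boundary convention (our simplification)
        mean_t = B_inv @ (omega2 * Gamma.T @ E[:, t] + sigma2 * (phi_prev + phi_next))
        Phi[t] = rng.multivariate_normal(mean_t, sigma2 * omega2 * B_inv)
    return Gamma, Phi
```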
In our discussion of Model 2, we have treated the number of finite factors ($G$) as known. However, we can also utilize Bayesian techniques to develop inferences on $G$. Classical inferential approaches have been proposed by Bai and Ng (2007), Onatski (2009), and Kneip et al. (2012). We consider models with at most $L$ finite factors, $G = 1, 2,\ldots,L$. Suppose $p(\theta,\Gamma_G)$ and $L(\theta,\Gamma_G; Y, G)$ denote the prior and likelihood, respectively, of a model with $G$ factors, where $\theta$ is the vector of parameters common to all models (such as $\beta$ and $\sigma$) and $\Gamma_G$ denotes a vector of parameters related to the factors and their loadings, $\phi$ and $\gamma$. The marginal likelihood is $M_G(Y) = \int\!\!\int L(\theta,\Gamma_G; Y, G)\,p(\theta,\Gamma_G)\,d\Gamma_G\,d\theta$. For models with different numbers of factors, say $G$ and $G'$, we can consider the Bayes factor in favor of the first model and against the second:
$$BF = \frac{\int\!\!\int L(\theta,\Gamma_{G}; Y, G)\,p(\theta,\Gamma_{G})\,d\Gamma_{G}\,d\theta}{\int\!\!\int L(\theta,\Gamma_{G'}; Y, G')\,p(\theta,\Gamma_{G'})\,d\Gamma_{G'}\,d\theta} = \frac{M_{G}(Y)}{M_{G'}(Y)} \qquad (30)$$
Computation of the marginal likelihood requires the computation of the integral in the numerator P ( θ | Y , G ) with respect to ϕ and γ . As this is not available analytically, we adopt the following approach.
We first specify:
$$P(\theta \mid Y, G) = \int L(\theta,\Gamma_G; Y, G)\,p(\theta,\Gamma_G)\,d\Gamma_G = \int \frac{L(\theta,\Gamma_G; Y, G)\,p(\theta,\Gamma_G)}{q(\Gamma_G)}\,q(\Gamma_G)\,d\Gamma_G, \qquad (31)$$
where $q(\Gamma_G)$ is a convenient importance sampling density. We factor the importance density as $q(\Gamma_G) = \prod_{t=1}^{T}q_t^{\phi}(\phi_t)\prod_{i=1}^{n}q_i^{\gamma}(\gamma_i)$, where $q_t^{\phi}$ and $q_i^{\gamma}$ are univariate densities. The densities are chosen to be univariate Student’s t-distributions with five degrees of freedom, with parameters matched to the posterior mean and standard deviation of MCMC draws for $\phi$ and $\gamma$, respectively. The integral is then calculated using standard importance sampling, which is quite robust. The standard deviations are multiplied by constants $h_{\phi}$ and $h_{\gamma}$, which are selected so that the importance weights are as close to uniform as possible. We use 100 random pairs in the interval 0.1 to 10 and select the values of $h$ for which the Kolmogorov-Smirnov test statistic is the lowest. We truncate the weights to their 99.5% confidence interval, but in very few instances was this found necessary, as extreme values are rarely observed. There is evidence that changing the degrees of freedom of the Student’s t provides some improvement, but we did not pursue this further as the final results for the Bayes factors were not found to differ significantly.
Given marginal likelihoods $M_g(Y)$, $g = 1,\ldots,G$, the posterior model probabilities can be estimated as:
$$p_g(Y) = \frac{M_g(Y)}{\sum_{g'=1}^{G} M_{g'}(Y)}, \quad g = 1,\ldots,G \qquad (32)$$
The posterior model probabilities summarize the evidence in favor of a model with a given number of factors.
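For readers who want to replicate the factor-count comparison, the sketch below illustrates the two ingredients: an importance-sampling estimate of a log marginal likelihood using independent Student-t(5) proposals fitted to MCMC draws, and the conversion of log marginal likelihoods into posterior model probabilities. The function names, the log_joint callback (which must return the log of likelihood times prior at a stacked vector of factor and loading values), and the treatment of $\theta$ as fixed are illustrative simplifications of the procedure described above, not the authors' exact implementation.

```python
import numpy as np
from scipy import stats

def marginal_likelihood_is(log_joint, phi_draws, gamma_draws, n_is=5000, df=5,
                           h_phi=1.0, h_gamma=1.0, rng=None):
    """Importance-sampling estimate of the log of the integral in (31), holding theta fixed.
    Proposals are independent univariate t(df) densities fitted to the MCMC draws."""
    rng = rng or np.random.default_rng(0)
    loc = np.concatenate([phi_draws.mean(0).ravel(), gamma_draws.mean(0).ravel()])
    scale = np.concatenate([h_phi * phi_draws.std(0).ravel(), h_gamma * gamma_draws.std(0).ravel()])
    d = loc.size
    samples = loc + scale * rng.standard_t(df, size=(n_is, d))
    log_q = stats.t.logpdf(samples, df, loc=loc, scale=scale).sum(axis=1)
    log_w = np.array([log_joint(z) for z in samples]) - log_q   # log of (L * p) / q
    m = log_w.max()                                             # log-sum-exp average of the weights
    return m + np.log(np.mean(np.exp(log_w - m)))

def posterior_model_probs(log_marglik):
    """Posterior model probabilities p_g = M_g / sum_g' M_g' from log marginal likelihoods (eq. (32))."""
    log_marglik = np.asarray(log_marglik)
    w = np.exp(log_marglik - log_marglik.max())
    return w / w.sum()
```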

4. Monte Carlo Simulations

In order to illustrate the model and examine the finite sample performance of the new Bayesian estimators with nonparametric individual effects (BE1) and with the factor model specification for the individual effects (BE2), we carry out a series of Monte Carlo experiments. The performance of the Bayesian estimators is compared with the parametric time-variant estimator of Battese and Coelli (1992) (BC), the within (CSSW) and random effects GLS (CSSG) estimators proposed by Cornwell et al. (1990), and the Kneip et al. (2012) (KSS) estimator that utilizes a combination of nonparametric regression techniques (smoothing splines) and factor analysis (Bada and Liebl 2014) to model the time-varying unit specific effects. The BC estimates are based on model (1), where the time-varying effects are given by $\gamma_{it} = e^{-\eta(t-T)}u_i$. The temporal pattern of firm-specific effects $\gamma_{it}$ depends on the sign of $\eta$. The time-invariant case corresponds to $\eta = 0$. The disturbances $u_i$ are i.i.d. and are assumed to follow a non-negative truncated normal distribution. Estimation of the BC model is carried out by parametric MLE. The CSSW and CSSG estimates are also based on model (1) and specify the time-varying effects as $\gamma_{it} = \theta_{i1} + \theta_{i2}t + \theta_{i3}t^2$. Derivations of the within, GLS, and efficient Hausman-Taylor type IV estimators can be found in Cornwell et al. (1990). The KSS estimator requires a bit more discussion. Kneip et al. (2012) assume that $\gamma_{it}$ is a linear combination of basis functions, $\gamma_{it} = \sum_{r=1}^{L}\zeta_{ir}g_r(t)$. In the first step of their three-step procedure, they obtain estimates of the slope parameters and nonparametric approximations to $\gamma_{it}$ by a least squares regression of $Y$ on $X$ and an approximation of the effects using smoothing splines. In the second step, they obtain the empirical covariance matrix of the residuals, and in the third step they determine the basis functions and corresponding coefficients. Details can be found in Kneip et al. (2012). Point estimates and standard errors for BE1 and BE2 are posterior moments whose derivation we detailed in Section 2 and Section 3. We averaged the point estimates and the standard deviations of the parameter estimates from all of the simulated paths.
We consider a panel data model with two regressors written as $y_{it} = \beta_1 x_{it}^{(1)} + \beta_2 x_{it}^{(2)} + \gamma_{it} + v_{it}$. We generate samples of size n = 50, 100, 200, with T = 20, 50. In each experiment, the regressors $x_{it}^{(j)}$ $(j = 1, 2)$ are randomly drawn from a standard multivariate normal distribution $N(0, I_p)$. The i.i.d. disturbance term $v_{it}$ is drawn from a standardized $N(0,1)$. Time-varying individual effects are generated by four different DGPs, which specify the effects as following a unit specific quadratic function of a time trend (DGP1), a random walk (DGP2), an oscillating function given by a linear combination of sine and cosine functions (DGP3), and finally a simple additive mixture of the previous data generating processes (DGP4). The parameterizations are:
$$\text{DGP 1:}\quad \gamma_{it} = \theta_{i0} + \theta_{i1}(t/T) + \theta_{i2}(t/T)^2$$
$$\text{DGP 2:}\quad \gamma_{it} = \phi_i r_t$$
$$\text{DGP 3:}\quad \gamma_{it} = \nu_{i1}(t/T)\cos(4\pi t/T) + \nu_{i2}(t/T)\sin(4\pi t/T)$$
$$\text{DGP 4:}\quad \gamma_{it} = \theta_{i0} + \theta_{i1}(t/T) + \theta_{i2}(t/T)^2 + \nu_{i1}(t/T)\cos(4\pi t/T) + \nu_{i2}(t/T)\sin(4\pi t/T).$$
Here $\theta_{ij}$ $(j = 0, 1, 2)$ and $\phi_i$ are i.i.d. $N(0,1)$, $r_{t+1} = r_t + \delta_t$ with $\delta_t \sim \text{i.i.d. } N(0,1)$, and $\nu_{ij}$ $(j = 1, 2) \sim \text{i.i.d. } N(0,1)$.
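A compact way to generate the four designs is sketched below; the function simulate_effects is our own illustrative helper, and DGP4 follows the displayed formula (the quadratic plus the oscillating components).

```python
import numpy as np

def simulate_effects(n, T, dgp, rng=None):
    """Draw the time-varying effects gamma_{it} for DGP1-DGP4 as an (n, T) array."""
    rng = rng or np.random.default_rng(0)
    tT = np.arange(1, T + 1) / T                                          # t/T
    theta = rng.standard_normal((n, 3))
    quad = theta[:, [0]] + theta[:, [1]] * tT + theta[:, [2]] * tT**2     # DGP1 component
    r = np.cumsum(rng.standard_normal(T))                                 # random walk r_t
    walk = rng.standard_normal((n, 1)) * r                                # DGP2 component: phi_i * r_t
    nu = rng.standard_normal((n, 2))
    osc = nu[:, [0]] * tT * np.cos(4 * np.pi * tT) + nu[:, [1]] * tT * np.sin(4 * np.pi * tT)  # DGP3
    if dgp == 1:
        return quad
    if dgp == 2:
        return walk
    if dgp == 3:
        return osc
    if dgp == 4:
        return quad + osc        # DGP4 as displayed above (quadratic plus oscillating components)
    raise ValueError("dgp must be 1, 2, 3, or 4")
```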
Gibbs sampling was implemented using 55,000 iterations with a burn-in period of 5000 samples. We retain only every 10th draw to mitigate the impact of autocorrelation between successive samples from the Markov chain. With regard to the selection of the number of factors, the Gibbs samplers for all DGPs rely on MCMC simulation from models with $G$ ranging from one to eight. The true number of factors is 3, 2, 1, and 6 for the four respective DGPs.
The simulation results for all the DGPs are displayed in Table 1, Table 2, Table 3 and Table 4. Estimates and standard errors of the slope coefficients β1 and β2 are presented in the upper panel of each table, while estimates of the individual effects γit and their normalized MSE are displayed in the lower panel of each table. The normalized MSE of the individual effects γit is calculated as:
$$R(\hat\gamma_{it},\gamma_{it}) = \frac{\sum_{i=1}^{n}\sum_{t=1}^{T}(\hat\gamma_{it}-\gamma_{it})^2}{\sum_{i=1}^{n}\sum_{t=1}^{T}\gamma_{it}^2}$$
Since we have not analyzed the role of correlated effects in these experiments, estimates of the slope parameters should be consistent for CSSW, CSSG, and KSS. Moreover, the BC model utilizes parametric MLE based on i.i.d. normally distributed random disturbances and thus should also yield consistent slope parameter estimates. Results from the four different specifications of the effects clearly demonstrate that point estimates of the slope coefficients for BC, CSSW, CSSG, and KSS are comparable across the various dgps, although variances will of course be smaller for estimators that do a better job of modeling the effects. The BC estimator does a poor job of estimating the effects since the specification we utilize assumes that the effects have the same temporal pattern for the different units. Generalizations of the BC estimator are available that allow the effects to be functions of selected regressors that may change over units, but we do not utilize these extensions in our experiments. Since DGP1 is consistent with the assumptions for the time-varying effects in the CSS model (we use the version of the CSS estimator utilized in the Cornwell et al. (1990) application wherein the unit specific effects were given by a second-order polynomial in the time trend), it is no surprise that the CSSW and CSSG estimators have the best performance compared with the other estimators for this dgp. However, it is also clear from the results of Table 1 that the Bayesian estimators are comparable to those of the CSSW, CSSG, and KSS estimators in terms of the estimates of individual effects. Moreover, for the sample sizes of n = 50, T = 50, and n = 100, T = 50, the Bayesian estimators provide more accurate estimates of individual effects than the KSS estimator. This implies that the performance of the Bayesian estimators is quite effective in estimating the time-varying effects of the smoothed-curve forms, like the second-order polynomials. It is not surprising that the mean squared errors of the Bayesian estimators are consistently much lower than those of the BC estimator for all sample sizes.
DGP2 considers the case where the individual effects are generated by a random walk and the results for these experiments are shown in Table 2. CSSW and CSSG are over-parameterized, as they assume that the individual effects are quadratic functions of the time trend, and they perform relatively poorly for this DGP. BE1 and BE2 are mostly data driven and impose no functional forms on the temporal pattern of the individual effects. For this relatively simple random walk specification, they outperform the other estimators that rely on functional form assumptions and also have a better estimation performance in terms of the MSE of the individual effects than KSS. DGP3 characterizes significant time variations in the individual effects. As we can see from Table 3, the BE1 and BE2 estimators have a comparable performance to the KSS estimator and outperform it again for experiments with relatively large panels (n = 100 and 200). The other estimators, whose effects rely on parametric assumptions of simple functional forms, are largely dominated by the Bayesian estimators.
DGP4 is a mixture of the scenarios for the time-varying effects used in DGP1–DGP3. Table 4 indicates that BE1 and BE2 outperform the BC, CSSW, and CSSG estimators in terms of the MSE of the individual effects and are comparable to KSS.
As we have pointed out, for all the DGPs, the slope parameter estimates are comparable across the six different estimators. However, this is not the case for the individual effects. This is a drawback for the estimation of technical efficiency or efficiency change since such measures are usually based on an unobserved latent variable that is estimated using some function of the model residuals. For example, the individual effects correspond to the technical efficiencies in stochastic frontier analysis. Our new Bayesian estimators would thus appear to be excellent candidates among competing estimators for modeling a production or cost frontier, and it is to this topic that we now turn in our empirical model of banking efficiency.

5. Empirical Application: Efficiency Analysis of the U.S. Banking Industry

5.1. Empirical Models

In this section, the two Bayesian estimators we have introduced are used to estimate temporal changes in the efficiency levels of 40 of the top 50 banks in the U.S. ranked by their book value of assets. We consider only 40 of these banks due to missing observations and other data anomalies. The empirical model is borrowed from Inanoglu et al. (2015), who use a suite of econometric specifications, including time-invariant panel data models, time-variant models, and quantile regression methods, to examine issues of “too big to fail” in the banking industry. In our illustration of the new Bayesian estimators, we will only compare the results across the different time-varying stochastic frontier panel estimators we discussed in the last section, along with modifications in the Bayesian estimators, to deal with potential issues of endogeneity. The estimators we utilize are based on different assumptions on the functional form of the time varying effects and provide various treatments for the unobserved heterogeneity, but they are all based on (1), which characterizes a single output with panel data assuming unobserved individual effects.
We will estimate a second order approximation in logs (the translog specification) to a multi-output/multi-input distance function (Caves et al. 1982). The output-distance function D O ( Y , X ) is non-decreasing, homogeneous, and convex in multiple outputs Y and non-increasing and quasi-convex in multiple inputs X . The translog output distance function takes the following form:
$$y_{1it} = \eta_{it} + \sum_{j=2}^{m}\gamma_j y^*_{jit} + \frac{1}{2}\sum_{j=2}^{m}\sum_{l=2}^{m}\gamma_{jl}y^*_{jit}y^*_{lit} + \sum_{k=1}^{n}\delta_k x_{kit} + \frac{1}{2}\sum_{k=1}^{n}\sum_{p=1}^{n}\delta_{kp}x_{kit}x_{pit} + \sum_{j=2}^{m}\sum_{k=1}^{n}\theta_{jk}y^*_{jit}x_{kit} + v_{it}, \quad i = 1,\ldots,N;\ t = 1,\ldots,T.$$
Here $y^*_{jit} = \ln(Y_{jit}/Y_{1it})$, $j = 2,\ldots,m$; $x_{kit} = \ln(X_{kit})$; and the normalization $\sum_{j=1}^{m}\gamma_j = 1$ results from the homogeneity property of the output distance function in outputs. If we denote $Z = [x_{NT\times n},\ y^*_{NT\times(m-1)},\ x\cdot x_{NT\times(n(n+1)/2)},\ y^*\cdot y^*_{NT\times((m-1)m/2)},\ x\cdot y^*_{NT\times((m-1)n)}]$, then the model can be written as a simple re-parameterized version of (1):
$$y_{1,it} = z_{it}\beta^* + \gamma^*_{it} + v^*_{it}$$
To allow for the endogeneity of the regressors in z we use the model:
$$z_{it} = \Pi z_{i,t-1} + \varepsilon_{it}$$
where $[v_{it}, \varepsilon_{it}']' \sim N(0,\Sigma)$. That is, we complete the model with a panel VAR reduced form for the potentially endogenous variables. The likelihood and the posterior distributions are straightforward to derive using the methods we discussed in Section 2 and Section 3. Moreover, since the output distance function is bounded from above by 1, the logarithmic transformation used in specifying the estimating equation in (34) provides a natural justification for the bounded support of the unobserved technical efficiency term specified in the stochastic frontier literature for the unit specific time-varying effects $\gamma^*_{it}$.
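As a first look at the reduced form in the display above, the sketch below fits $\Pi$ and the innovation covariance by pooled least squares; in the paper the panel VAR block is handled jointly within the Bayesian system, so this is only an illustrative check, and the array layout is our own assumption.

```python
import numpy as np

def panel_var_reduced_form(Z):
    """Pooled least-squares fit of the reduced form z_it = Pi z_{i,t-1} + eps_it.
    Z is an (n, T, q) array of the potentially endogenous regressors.
    Returns Pi_hat (q x q) and the residual covariance of eps."""
    n, T, q = Z.shape
    Z_lag = Z[:, :-1, :].reshape(-1, q)     # z_{i,t-1}, stacked over i and t = 2..T
    Z_cur = Z[:, 1:, :].reshape(-1, q)      # z_{i,t}
    B, *_ = np.linalg.lstsq(Z_lag, Z_cur, rcond=None)
    eps = Z_cur - Z_lag @ B
    return B.T, np.cov(eps, rowvar=False)   # Pi_hat = B' so that z_t ~ Pi_hat z_{t-1}
```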
The elasticities of the distance function with respect to the input and output variables are:
$$s_p = \delta_p + \sum_{k=1}^{n}\delta_{kp}x_k + \sum_{j=2}^{m}\theta_{pj}y^*_j, \quad p = 1, 2,\ldots,n$$
$$r_j = \gamma_j + \sum_{l=2}^{m}\gamma_{jl}y^*_l + \sum_{k=1}^{n}\theta_{kj}x_k, \quad j = 2,\ldots,m.$$
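A small helper makes the elasticity formulas operational; the argument names and array shapes are our own illustrative conventions for the translog coefficients.

```python
import numpy as np

def distance_elasticities(delta, delta2, theta, gamma1, gamma2, x, y_star):
    """Input and output elasticities (the two formulas above) at a point (x, y_star) in logs.
    delta: (n_in,), delta2: (n_in, n_in) symmetric, gamma1: (n_out-1,), gamma2: (n_out-1, n_out-1),
    theta: (n_out-1, n_in). These are hypothetical containers for the translog coefficients."""
    s = delta + delta2 @ x + theta.T @ y_star   # s_p = delta_p + sum_k delta_kp x_k + sum_j theta_pj y*_j
    r = gamma1 + gamma2 @ y_star + theta @ x    # r_j = gamma_j + sum_l gamma_jl y*_l + sum_k theta_kj x_k
    return s, r

# At the sample geometric mean of variables normalized by their geometric means,
# x = 0 and y_star = 0, so the elasticities reduce to the first-order coefficients.
```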
The individual effects are transformed into relative efficiency levels using the standard order statistics device in Schmidt and Sickles (1984):
$$TE_{it} = \exp\left\{\gamma^*_{it} - \max_{i=1,\ldots,n}\gamma^*_{it}\right\}.$$
For the BC estimator, technical efficiency levels can differ, but parsimony is achieved by assuming that all firms have the same temporal pattern.
Clearly, the levels of efficiency can vary substantially for the methods that use the order statistics (the firm with the largest effect) to benchmark the most efficient firm and thus the relative efficiencies of the remaining firms. Typically, this impact is mitigated by data trimming, but with only 40 firms in our study, we decided to avoid doing so when presenting the results below. The BC estimator has no such potential drawback.

5.2. Data

The dataset analyzed is a balanced panel of 40 out of the top 50 U.S. commercial banks based on the yearly data of their Book Value of Assets from 1990 through 2009. The panel size is thus 40 by 20. Missing observations and data anomalies reduced the sample from 50 to 40 firms. The data is merged on a pro-forma basis wherein the non-surviving bank’s data is represented as part of the surviving bank going back in time. The three output and six input variables used to estimate the translog output distance function are: Real Estate Loans (“REL”), Commercial and Industrial Loans (“CIL”), Consumer Loans (“CL”), Premises & Fixed Assets (“PFA”) , Number of Employees (“NOE”), Purchased Funds (“PF”), Savings Accounts (“SA”), Certificates of Deposit (“CD”), and Demand Deposits (“DD”). Additionally, three types of risk proxies are considered: Credit Risk (“CR”), which is approximated by the Gross Charge-off Ratio; Liquidity Risk (“LR”), which is proxied by the Liquidity Ratio; and Market Risk (“MR”), which is proxied by the standard deviation of Trading Returns.

5.3. Empirical Results

The input and output elasticities evaluated at the geometric mean of the sample are displayed in Table 5. For the BE2 estimates, the BF for two factors versus one factor is 35.12, while the BF for three versus one factor is 2.23 and the BF for four versus one factor is 1.10. For the KSS estimates, we use the procedure outlined in Kneip et al. (2012, pp. 607–8) with α = 0.05. The KSS procedure estimates two factors. Thus, we use two factors in our empirical illustration for both BE2 and KSS. BE1* and BE2* are the Bayesian estimators corrected for the endogeneity of the terms that are interacted with the endogenous multiple outputs. From Table 5, we can see that the magnitudes and signs of the elasticity estimates across the different models are comparable, except for Demand Deposits, where CSSW gives a significantly lower estimate than all of the other estimators. All of the estimators suggest decreasing returns to scale except BC. However, the returns-to-scale estimate suggested by BC is 1.0165, which is not significantly different from 1. Alternatively, we can say that there is no evidence of increasing returns to scale based on the estimation results. The largest US banks appear to have fully exploited their scale economies in generating intermediation services.
Variations in the temporal pattern of the individual effects are displayed in Figure 1. The BC estimator provides higher efficiency estimates, but also efficiencies that decline through the sample period, while all the other estimators find efficiencies of similar magnitudes that increase slightly and then decline, anticipating the meltdowns of financial institutions beginning around 2007 that led to the Great Recession.
As we can see from the last row in Table 5 and in Figure 1, the scale of the average technical efficiency levels ranges from around 0.63 to 0.73. Turning our attention to the estimated temporal pattern of the technical efficiencies using the Bayesian estimators, we notice that the BE1 and BE2 models display similar trends, but the efficiency levels suggested by BE2 are consistently higher than those by BE1. The same pattern exists for the estimators BE1* and BE2*. The efficiencies are higher when endogeneity is considered in the model. The Bayesian estimators all display an initial slowly increasing pattern in the 1990s, and a decreasing one in the 2000s. The increasing trend in efficiency levels at the beginning of 1990s is probably because of the increased competitive pressure in the financial industry due to the deregulations introduced in the 1980s. The decreasing trend in efficiency levels started before the Great Recession, perhaps because the financial institutions were taking on riskier activities and became less focused on their traditional roles as financial intermediaries when the global pool of fixed-income securities substantially increased.
In order to evaluate the significance of endogeneity in the model, we have calculated the Bayes factors (BF) in favor of the model with endogeneity; these are reported in Table 6. The Bayes factors are higher in recent years than in early years and are clearly in favor of endogeneity, as all of them exceed 3.5.

6. Conclusions

This paper has proposed a Bayesian approach to treat time-varying heterogeneity in a panel data stochastic frontier model setting. We introduce two new models: one with nonparametric time effects and one with effects that are driven by a number of unknown common factors. In both of the models, we do not impose parametric assumptions on the individual effects other than smoothness and we utilize the Gibbs sampler to implement our Bayesian inferences. The Monte Carlo experiments indicate that the new Bayesian estimators tend to outperform the non-Bayesian alternatives we consider, including the BC, CSS, and the KSS models, under various data generating processes. The new Bayesian estimators are used to analyze the temporal pattern of the technical efficiencies of the largest 40 U.S. banks from 1990 to 2009. The results indicate that the largest banks experienced a decrease in the efficiency with which they provided intermediation services around the time of the Great Recession.

Acknowledgments

The authors would like to thank seminar participants at the University of Gothenburg (Gothenburg, Sweden), the International Panel Data Conference XIX (London, 4–5 July 2013), University of Rochester (Rochester, NY, USA), NY Camp Econometrics XI (Syracuse University, 8–10 April 2016), and ETH Zurich/KOF Swiss Economic Institute and the University of Zurich (Zurich, Switzerland) for helpful comments. We are indebted to comments and criticisms from the Editors and three anonymous referees. The usual caveat applies.

Author Contributions

All authors contributed equally to the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

A.1. Detailed Derivation of the Conditional Posterior Distribution of γ i | β , σ , ω , Y , X

$$
\begin{aligned}
p(\gamma_i \mid \beta,\sigma,\omega,Y,X) &\propto \sigma^{-(nT+1)}\exp\left\{-\frac{1}{2\sigma^2}(\gamma_i - Y_i + X_i\beta)'(\gamma_i - Y_i + X_i\beta) - \frac{1}{2\omega^2}\gamma_i'Q\gamma_i\right\}\\
&\propto \exp\left\{-\frac{1}{2}\left[\gamma_i'(\sigma^{-2}I_T + \omega^{-2}Q)\gamma_i - (Y_i - X_i\beta)'\sigma^{-2}\gamma_i - \gamma_i'\sigma^{-2}(Y_i - X_i\beta)\right]\right\}\\
&= \exp\left\{-\frac{1}{2}\left[\gamma_i'(\sigma^{2}\omega^{2}V)^{-1}\gamma_i - (Y_i - X_i\beta)'\omega^{2}V(\sigma^{2}\omega^{2}V)^{-1}\gamma_i - \gamma_i'(\sigma^{2}\omega^{2}V)^{-1}\omega^{2}V(Y_i - X_i\beta)\right]\right\}\\
&\propto \exp\left\{-\frac{1}{2}\left(\gamma_i - \omega^{2}V(Y_i - X_i\beta)\right)'(\sigma^{2}\omega^{2}V)^{-1}\left(\gamma_i - \omega^{2}V(Y_i - X_i\beta)\right)\right\}\\
&= \exp\left\{-\frac{1}{2}(\gamma_i - \hat\gamma_i)'(\sigma^{2}\omega^{2}V)^{-1}(\gamma_i - \hat\gamma_i)\right\}
\end{aligned}
$$
where $\hat\gamma_i = \omega^2 V(Y_i - X_i\beta)$ and $V = (\omega^2 I_T + \sigma^2 Q)^{-1}$.

A.2. Derivations of the Posterior Distribution of the Smoothing Parameter ω

If the smoothing parameter is assumed to follow the prior distribution $\bar q/\omega^2 \sim \chi^2_{\bar n}$, or equivalently:
$$p(\omega) \propto \left(\frac{\bar q}{\omega^2}\right)^{\bar n/2-1}\exp\left\{-\frac{\bar q}{2\omega^2}\right\}\omega^{-3} \propto \left(\frac{\bar q}{\omega^2}\right)^{\bar n/2+1/2}\exp\left\{-\frac{\bar q}{2\omega^2}\right\}$$
then the joint posterior will take the form below:
$$p(\beta,\gamma,\sigma,\omega \mid Y,X,\bar n,\bar q) \propto \sigma^{-(nT+1)}\exp\left\{-\frac{1}{2\sigma^2}(Y-X\beta-\gamma)'(Y-X\beta-\gamma)\right\}\times\exp\left\{-\frac{1}{2\omega^2}\gamma'(I_n\otimes Q)\gamma\right\}\times\left(\frac{\bar q}{\omega^2}\right)^{\bar n/2+1/2}\exp\left\{-\frac{\bar q}{2\omega^2}\right\}$$
Therefore, the conditional posterior distribution of ω can be derived through the following:
$$
\begin{aligned}
p(\omega \mid \beta,\gamma,\sigma,Y,X,\bar n,\bar q) &\propto \exp\left\{-\frac{1}{2\omega^2}\gamma'(I_n\otimes Q)\gamma\right\}\times\left(\frac{\bar q}{\omega^2}\right)^{\bar n/2+1/2}\exp\left\{-\frac{\bar q}{2\omega^2}\right\}\\
&\propto \left(\frac{\bar q}{\omega^2}\right)^{\bar n/2+1/2}\exp\left\{-\frac{\bar q + \sum_{i=1}^{n}\gamma_i'Q\gamma_i}{2\omega^2}\right\}\\
&\propto \left(\frac{\bar q + \sum_{i=1}^{n}\gamma_i'Q\gamma_i}{\omega^2}\right)^{\bar n/2+1/2}\exp\left\{-\frac{\bar q + \sum_{i=1}^{n}\gamma_i'Q\gamma_i}{2\omega^2}\right\}
\end{aligned}
$$
Therefore, the transformation of the smoothing parameter, $\dfrac{\bar q + \sum_{i=1}^{n}\gamma_i'Q\gamma_i}{\omega^2}$, follows $\chi^2_{\bar n}$.
Table A1. The slope parameter estimates for the translog distance function (standard errors in parentheses below each estimate).
Model | BC | CSSW | CSSG | KSS | BE1 | BE2 | Model | BC | CSSW | CSSG | KSS | BE1 | BE2
CIL0.2673940.2116250.2052960.3200240.2299740.262958PF*CD−0.028282−0.023417−0.023420−0.024378−0.011996−0.012073
(0.015604)(0.014842)(0.004009)(0.016490)(0.014168)(0.013784) (0.018853)(0.011323)(0.007121)(0.009834)(0.011182)(0.019844)
CL0.1023950.1616580.1693030.1331700.1518140.127101PF*DD−0.114018−0.017305−0.024098−0.004148−0.015062−0.101234
(0.012878)(0.012244)(0.003398)(0.011736)(0.010868)(0.010493) (0.015688)(0.009595)(0.006507)(0.008484)(0.008648)(0.017367)
PFA−0.126714−0.106713−0.124307−0.044849−0.122111−0.050466SA*CD−0.141683−0.033535−0.059756−0.067716−0.055219−0.167241
(0.031169)(0.026743)(0.008180)(0.023470)(0.024393)(0.027912) (0.031271)(0.021438)(0.012105)(0.019169)(0.019877)(0.033330)
NOE−0.151782−0.274994−0.273066−0.219497−0.152019−0.066570SA*DD−0.0067030.0537470.0619600.0749330.0367160.001257
(0.035151)(0.035071)(0.009826)(0.030924)(0.028075)(0.030319) (0.030736)(0.021559)(0.011642)(0.019549)(0.019763)(0.032234)
PF−0.108846−0.057149−0.062796−0.067891−0.057049−0.138704CD*DD−0.097991−0.105554−0.098626−0.057446−0.092207−0.119194
(0.010370)(0.006407)(0.003582)(0.007493)(0.005713)(0.010614) (0.033377)(0.020702)(0.013151)(0.017910)(0.018797)(0.036201)
SA−0.305845−0.102552−0.141275−0.128912−0.170044−0.304152CIL*CIL0.2393730.1979440.2073410.1897050.2274650.287373
(0.023115)(0.017762)(0.005433)(0.022026)(0.014980)(0.016878) (0.024646)(0.018416)(0.006345)(0.015932)(0.017940)(0.019379)
CD−0.293822−0.242206−0.249235−0.152578−0.236288−0.286715CL*CL0.1133350.0451830.0527870.0168820.0421510.084385
(0.019988)(0.013899)(0.007184)(0.014208)(0.013410)(0.020398) (0.013263)(0.010120)(0.004141)(0.009309)(0.008506)(0.012036)
DD−0.029454−0.005520−0.029726−0.032132−0.025869−0.063642CIL*CL−0.065016−0.045951−0.048370−0.040675−0.032145−0.058542
(0.018062)(0.014840)(0.005759)(0.014345)(0.013910)(0.016542) (0.014523)(0.011902)(0.004307)(0.010305)(0.010754)(0.012337)
PFA*PFA−0.058407−0.076124−0.064595−0.0271700.054452−0.116818CIL*PFA−0.030027−0.040296−0.030441−0.048343−0.000616−0.046465
(0.105551)(0.081836)(0.035034)(0.067810)(0.079399)(0.097951) (0.040094)(0.029056)(0.012041)(0.025079)(0.030007)(0.036245)
NOE*NOE−0.350934−0.263410−0.254762−0.194616−0.317941−0.647887CIL*NOE0.2279560.0329560.0370510.0683120.0080930.245908
(0.175695)(0.111222)(0.049618)(0.096139)(0.103994)(0.170920) (0.043142)(0.032479)(0.016995)(0.028018)(0.031391)(0.046998)
PF*PF−0.030317−0.017777−0.021905−0.019072−0.028032−0.050570CIL*PF0.0369910.0662310.0669140.0425240.0446170.053115
(0.009275)(0.005224)(0.003626)(0.004633)(0.004308)(0.009989) (0.011928)(0.007423)(0.004722)(0.006691)(0.007033)(0.013064)
SA*SA0.0572660.1111050.0886120.1169560.0374920.051178CIL*SA−0.197701−0.045638−0.058945−0.056339−0.043310−0.213425
(0.039891)(0.031775)(0.015405)(0.030207)(0.026234)(0.043752) (0.021737)(0.015992)(0.007671)(0.014339)(0.016593)(0.020533)
CD*CD0.0189570.0639580.0767930.1045560.0650660.034873CIL*CD0.0331690.0408770.0409020.0198510.0361810.021669
(0.054680)(0.033776)(0.020720)(0.028297)(0.027536)(0.057257) (0.022888)(0.013701)(0.009409)(0.011750)(0.012349)(0.025882)
DD*DD0.0080940.002895−0.012089−0.012085−0.001555−0.083644CIL*DD−0.106948−0.048120−0.056765−0.016793−0.042321−0.137545
(0.039544)(0.025435)(0.014893)(0.021706)(0.020225)(0.043597) (0.022452)(0.016474)(0.008142)(0.014246)(0.014006)(0.021770)
PFA*NOE−0.1124250.0863990.0430340.0705330.0005830.082582CL*PFA0.0487470.0379700.0321060.0395040.0070600.066231
(0.102859)(0.079549)(0.036284)(0.066471)(0.071651)(0.102329) (0.027867)(0.020247)(0.010162)(0.017601)(0.021441)(0.028297)
PFA*PF−0.0238710.0019250.0058020.0148070.0324130.004723CL*NOE−0.134762−0.080639−0.079836−0.073912−0.026040−0.121342
(0.024511)(0.014311)(0.009119)(0.012065)(0.012272)(0.025037) (0.033544)(0.023290)(0.012057)(0.020149)(0.026992)(0.034329)
PFA*SA0.1811000.0657750.0794670.0561010.0698030.194046CL*PF0.024490−0.023625−0.022260−0.016442−0.018719−0.002387
(0.043433)(0.033537)(0.015360)(0.029335)(0.032844)(0.043665) (0.009238)(0.005687)(0.003329)(0.004883)(0.005182)(0.009809)
PFA*CD−0.191012−0.036207−0.035895−0.098737−0.156555−0.235290CL*SA0.0520080.0623640.0646400.0638390.0449660.064692
(0.053517)(0.035121)(0.020674)(0.029706)(0.032240)(0.055515) (0.014866)(0.011813)(0.005851)(0.010214)(0.010948)(0.015289)
PFA*DD0.079834−0.017070−0.019489−0.060602−0.0221350.000665CL*CD0.0146550.0019930.0064270.0001140.006044−0.007961
(0.048869)(0.030405)(0.016956)(0.026224)(0.027307)(0.049371) (0.017719)(0.011504)(0.006786)(0.009937)(0.011207)(0.017886)
NOE*PF0.1377280.0983420.0994050.0597330.0493690.116370CL*DD−0.0579720.0256620.0212660.027790−0.001980−0.044490
(0.031395)(0.019230)(0.012957)(0.016994)(0.016952)(0.035380) (0.015838)(0.011538)(0.005404)(0.009853)(0.008936)(0.016080)
NOE*SA−0.121524−0.118107−0.112644−0.115822−0.068949−0.119951CR0.2171150.6975730.6227770.6611930.6506410.273590
(0.065179)(0.049241)(0.024082)(0.042867)(0.042545)(0.067800) (0.207247)(0.113233)(0.089828)(0.096325)(0.074863)(0.232643)
NOE*CD0.4179430.1457440.1489290.1836210.2451120.478729LR1.1036880.2726720.3035680.3595010.6017311.185407
(0.083620)(0.050044)(0.029691)(0.042715)(0.045040)(0.084171) (0.174812)(0.176180)(0.057171)(0.158809)(0.152083)(0.191323)
NOE*DD0.1791060.0835290.0842230.0573470.0981750.329448MR−0.002988−0.001070−0.000878−0.0004660.000008−0.004905
(0.064194)(0.037821)(0.022992)(0.032270)(0.031018)(0.067150) (0.002167)(0.001070)(0.000974)(0.000906)(0.000728)(0.002496)
PF*SA0.031111−0.034051−0.028221−0.022330−0.0129280.021033
(0.016180)(0.010140)(0.006517)(0.008749)(0.009524)(0.017828)

References

  1. Ackerberg, Daniel A., Kevin Caves, and Garth Frazer. 2015. Identification properties of recent production function estimators. Econometrica 83: 2411–51. [Google Scholar] [CrossRef]
  2. Ahn, Seung C., Young H. Lee, and Peter Schmidt. 2013. Panel data models with multiple time-varying individual effects. Journal of Econometrics 174: 1–14. [Google Scholar] [CrossRef]
  3. Amsler, Christine, Artem Prokhorov, and Peter Schmidt. 2016. Endogeneity in stochastic frontier models. Journal of Econometrics 190: 280–88. [Google Scholar] [CrossRef]
  4. Bada, Oualid, and Dominik Liebl. 2014. Phtt: Panel data analysis with heterogeneous time trends in R. Journal of Statistical Software 59: 1–33. [Google Scholar] [CrossRef]
  5. Bai, Jushan. 2009. Panel data models with interactive fixed effects. Econometrica 77: 1229–79. [Google Scholar]
  6. Bai, Jushan. 2013. Fixed-effects dynamic panel models, a factor analytical method. Econometrica 81: 285–314. [Google Scholar]
  7. Bai, Jushan, and Josep Lluís Carrion-i-Silvestre. 2013. Testing panel cointegration with dynamic common factors that are correlated with the regressors. Econometrics Journal 16: 222–49. [Google Scholar] [CrossRef] [Green Version]
  8. Bai, Jushan, and Serena Ng. 2007. Determining the number of primitive shocks in factor models. Journal of Business and Economic Statistics 25: 52–60. [Google Scholar] [CrossRef]
  9. Battese, G.E., and T.J. Coelli. 1992. Frontier production functions, technical efficiency and panel data: With application to paddy farmers in India. Journal of Productivity Analysis 3: 153–69. [Google Scholar] [CrossRef]
  10. Caves, Douglas W., Laurits R. Christensen, and W. Erwin Diewert. 1982. The economic theory of index numbers and the measurement of input, output, and productivity. Econometrica 50: 1393–414. [Google Scholar] [CrossRef]
  11. Cornwell, Christopher, Peter Schmidt, and Robin C. Sickles. 1990. Production frontiers with cross-sectional and time-series variation in efficiency levels. Journal of Econometrics 46: 185–200. [Google Scholar] [CrossRef]
  12. Fried, Harold O., C. A. Knox Lovell, and Shelton S. Schmidt. 2008. The Measurement of Productive Efficiency and Productivity Growth. New York: Oxford University Press. [Google Scholar]
  13. Gelfand, Alan E., and Adrian F.M. Smith. 1990. Sampling-based approaches to calculating marginal densities. Journal of the American Statistical Association 85: 398–409. [Google Scholar] [CrossRef]
  14. Gelman, Andrew, John B. Carlin, Hal S. Stern, David B. Dunson, Aki Vehtari, and Donald B. Rubin. 2003. Bayesian Data Analysis. Boca Raton: Chapman & Hall/CRC. [Google Scholar]
  15. Geweke, John. 1993. Bayesian treatment of the independent Student-t linear model. Journal of Applied Econometrics 8: S19–S40. [Google Scholar] [CrossRef]
  16. Glass, Anthony J., Karligash Kenjegalieva, and Robin C. Sickles. 2016. Spatial autoregressive and spatial Durbin stochastic frontier models for panel data. Journal of Econometrics 190: 289–300. [Google Scholar] [CrossRef] [Green Version]
  17. Inanoglu, Hulusi, Michael Jacobs Jr., Junrong Liu, and Robin C. Sickles. 2015. Analyzing bank efficiency: Are "too-big-to-fail" banks efficient? In Handbook of Post-Crisis Financial Modeling. Edited by Emmanuel Haven, Philip Molyneux, John O. S. Wilson, Sergei Fedotov and Meryem Duygun. London: Palgrave Macmillan, pp. 110–46. [Google Scholar]
  18. Kim, Yangseon, and Peter Schmidt. 2000. A review and empirical comparison of Bayesian and classical approaches to inference on efficiency levels in stochastic frontier models with panel data. Journal of Productivity Analysis 14: 91–118. [Google Scholar] [CrossRef]
  19. Kim, Kyoo il, Amil Petrin, and Suyong Song. 2016. Estimating production functions with control functions when capital is measured with error. Journal of Econometrics 190: 267–79. [Google Scholar] [CrossRef]
  20. Kneip, Alois, Robin C. Sickles, and Wonho Song. 2012. A new panel data treatment for heterogeneity in time trends. Econometric Theory 28: 590–628. [Google Scholar] [CrossRef]
  21. Koop, Gary, Jacek Osiewalski, and Mark F.J. Steel. 1997. Bayesian efficiency analysis through individual effects: Hospital cost frontiers. Journal of Econometrics 76: 77–105. [Google Scholar] [CrossRef]
  22. Koop, Gary, and Dale J. Poirier. 2004. Bayesian variants of some classical semiparametric regression techniques. Journal of Econometrics 123: 259–82. [Google Scholar] [CrossRef] [Green Version]
  23. Kumbhakar, Subal C., and C. A. Knox Lovell. 2000. Stochastic Frontier Analysis. New York: Cambridge University Press. [Google Scholar]
  24. Levinsohn, James, and Amil Petrin. 2003. Estimating production functions using inputs to control for unobservables. The Review of Economic Studies 70: 317–41. [Google Scholar] [CrossRef]
  25. Li, Degui, Jia Chen, and Jiti Gao. 2011. Non-parametric time-varying coefficient panel data models with fixed effects. Econometrics Journal 14: 387–408. [Google Scholar] [CrossRef]
  26. Olley, G. Steven, and Ariel Pakes. 1996. The dynamics of productivity in the telecommunications equipment industry. Econometrica 64: 1263–97. [Google Scholar] [CrossRef]
  27. Onatski, Alexei. 2009. Testing hypotheses about the number of factors in large factor models. Econometrica 77: 1447–79. [Google Scholar]
  28. Osiewalski, Jacek, and Mark F.J. Steel. 1998. Numerical tools for the Bayesian analysis of stochastic frontier models. Journal of Productivity Analysis 10: 103–17. [Google Scholar] [CrossRef]
  29. Perrakis, Konstantinos, Ioannis Ntzoufras, and Efthymios G. Tsionas. 2014. On the use of marginal posteriors in marginal likelihood estimation via importance-sampling. Computational Statistics and Data Analysis 77: 54–69. [Google Scholar] [CrossRef]
  30. Pitt, Mark M., and Lung-Fei Lee. 1981. The measurement and sources of technical inefficiency in the Indonesian weaving industry. Journal of Development Economics 9: 43–64. [Google Scholar] [CrossRef]
  31. Schmidt, Peter, and Robin C. Sickles. 1984. Production frontiers and panel data. Journal of Business and Economic Statistics 2: 367–74. [Google Scholar] [CrossRef]
  32. Tsionas, Efthymios G. 2006. Inference in dynamic stochastic frontier models. Journal of Applied Econometrics 21: 669–76. [Google Scholar] [CrossRef]
  33. Van den Broeck, Julien, Gary Koop, Jacek Osiewalski, and Mark F.J. Steel. 1994. Stochastic frontier models: A Bayesian perspective. Journal of Econometrics 61: 273–303. [Google Scholar] [CrossRef]
1. Prior model probabilities are assumed to be equal, so that the Bayes factor equals the posterior odds ratio. As an anonymous referee pointed out, one could instead specify prior model probabilities that favor a small g; we regard this as an interesting question for future work. For example, an exponential prior with p(g) ∝ exp(−ag) and a = 1 could be used.
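As an illustrative sketch only (the log marginal likelihoods below are placeholder values, not results from the paper), the following snippet shows how equal prior model probabilities make the posterior odds coincide with the Bayes factor, and how an exponential prior kernel p(g) ∝ exp(−ag) with a = 1 could be encoded:

```python
import numpy as np

# Hypothetical log marginal likelihoods for two competing specifications
# (placeholder values for illustration only).
log_ml_model_1 = -1234.5
log_ml_model_2 = -1238.1

# Bayes factor in favor of model 1.
bf_12 = np.exp(log_ml_model_1 - log_ml_model_2)

# With equal prior model probabilities, the posterior odds equal the Bayes factor.
prior_1, prior_2 = 0.5, 0.5
posterior_odds = bf_12 * (prior_1 / prior_2)
posterior_prob_1 = posterior_odds / (1.0 + posterior_odds)

# Exponential prior on g, p(g) proportional to exp(-a*g) with a = 1,
# which places more prior mass on small values of g.
a = 1.0
def log_prior_g(g, a=a):
    """Log of the exponential prior kernel exp(-a*g), up to an additive constant."""
    return -a * g

print(bf_12, posterior_prob_1, log_prior_g(2.0))
```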
2. The estimation results for all first-order and second-order terms are displayed in Table A1 in Appendix A. Because the data are geometric-mean corrected (each observation is divided by its geometric sample mean), the second-order terms in the elasticities in (37) and (38) vanish when evaluated at the geometric mean of the sample.
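The geometric-mean correction itself is straightforward to reproduce. A minimal sketch with illustrative simulated data (not the bank data used in the paper) follows; it shows that the logged, corrected variables are centered at zero, which is why the second-order translog terms drop out of the elasticities at the sample geometric mean:

```python
import numpy as np

def geometric_mean_correct(X):
    """Divide each column of a positive data matrix by its geometric sample mean."""
    gmean = np.exp(np.log(X).mean(axis=0))
    return X / gmean

# Illustrative positive data: 100 observations on two regressors.
rng = np.random.default_rng(0)
X = rng.lognormal(mean=1.0, sigma=0.3, size=(100, 2))
logX = np.log(geometric_mean_correct(X))

# After the correction the logged variables have mean zero, so in a translog
# specification an elasticity of the form b1 + b11*log(x1) + b12*log(x2)
# reduces to b1 when evaluated at the geometric mean of the sample.
print(np.allclose(logX.mean(axis=0), 0.0))  # True
```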
Figure 1. Temporal pattern of changes in the average efficiencies (%) for all estimators.
Table 1. Monte Carlo simulations for DGP1.
Mean Squared Error for the Individual Effects

n     T     BC       CSSW     CSSG     KSS      BE1      BE2
50    20    0.7284   0.0012   0.0012   0.0039   0.0053   0.0671
50    50    0.9371   0.0005   0.0005   0.1255   0.0021   0.0323
100   20    0.8222   0.0008   0.0008   0.0033   0.0031   0.0183
100   50    0.8245   0.0003   0.0003   0.0220   0.0018   0.0115
200   20    0.8451   0.0008   0.0008   0.0023   0.0027   0.0101
200   50    0.8823   0.0003   0.0003   0.0021   0.0011   0.0083

Estimate and Standard Error for the Slope Coefficients

                          T = 20                                                T = 50
        BC       CSSW     CSSG     KSS      BE1      BE2      BC       CSSW     CSSG     KSS      BE1      BE2
n = 50
EST1    0.5250   0.4961   0.4965   0.4954   0.4981   0.5017   0.5105   0.4991   0.4992   0.4999   0.5013   0.4998
SE1     0.0130   0.0033   0.0032   0.0029   0.0057   0.0042   0.0073   0.0021   0.0020   0.0019   0.0035   0.0027
EST2    0.4856   0.4949   0.4948   0.4919   0.4985   0.5020   0.4969   0.5048   0.5047   0.5053   0.5001   0.5002
SE2     0.0139   0.0035   0.0033   0.0031   0.0055   0.0041   0.0073   0.0020   0.0020   0.0018   0.0032   0.0024
n = 100
EST1    0.4973   0.5018   0.5013   0.5045   0.5023   0.5002   0.4843   0.4999   0.4998   0.4991   0.4999   0.5001
SE1     0.0099   0.0023   0.0022   0.0021   0.0032   0.0027   0.0066   0.0014   0.0014   0.0014   0.0023   0.0018
EST2    0.5047   0.5009   0.5012   0.5022   0.5016   0.5001   0.4995   0.5001   0.5000   0.4990   0.5003   0.5001
SE2     0.0098   0.0022   0.0022   0.0020   0.0032   0.0028   0.0066   0.0014   0.0014   0.0013   0.0022   0.0017
n = 200
EST1    0.4936   0.5013   0.5015   0.5009   0.5000   0.4981   0.5000   0.5007   0.5007   0.5000   0.5012   0.5001
SE1     0.0071   0.0016   0.0016   0.0016   0.0027   0.0022   0.0042   0.0010   0.0010   0.0010   0.0019   0.0015
EST2    0.4983   0.5016   0.5020   0.5019   0.5002   0.4993   0.5027   0.4972   0.4972   0.4969   0.5003   0.5004
SE2     0.0071   0.0016   0.0016   0.0016   0.0027   0.0022   0.0042   0.0010   0.0010   0.0010   0.0018   0.0014
Table 2. Monte Carlo simulations for DGP2.
Mean Squared Error for the Individual Effects

n     T     BC       CSSW     CSSG     KSS      BE1      BE2
50    20    0.9202   0.1266   0.1266   0.0182   0.0071   0.0048
50    50    0.9052   0.2996   0.2996   0.0238   0.0053   0.0025
100   20    0.8588   0.4553   0.4553   0.0531   0.0040   0.0037
100   50    0.9884   0.1065   0.1065   0.0046   0.0028   0.0013
200   20    0.9183   0.6376   0.6375   0.0706   0.0022   0.0027
200   50    0.9526   0.0616   0.0616   0.0028   0.0009   0.0008

Estimate and Standard Error for the Slope Coefficients

                          T = 20                                                T = 50
        BC       CSSW     CSSG     KSS      BE1      BE2      BC       CSSW     CSSG     KSS      BE1      BE2
n = 50
EST1    0.4786   0.4857   0.4904   0.5059   0.5010   0.4993   0.4820   0.4811   0.4938   0.4972   0.5052   0.4983
SE1     0.0460   0.0308   0.0298   0.0136   0.0262   0.0037   0.0243   0.0230   0.0227   0.0059   0.0177   0.0029
EST2    0.4664   0.4414   0.4854   0.4599   0.5031   0.4992   0.4840   0.4660   0.4848   0.4988   0.5001   0.4999
SE2     0.0491   0.0326   0.0314   0.0146   0.0261   0.0035   0.0241   0.0226   0.0225   0.0059   0.0174   0.0028
n = 100
EST1    0.4854   0.4818   0.4898   0.5065   0.4997   0.5002   0.5137   0.5360   0.5089   0.4950   0.5101   0.4987
SE1     0.0200   0.0195   0.0188   0.0075   0.0163   0.0028   0.0415   0.0257   0.0254   0.0055   0.0128   0.0018
EST2    0.5005   0.4996   0.5115   0.5101   0.4993   0.5001   0.4482   0.5283   0.5143   0.5127   0.5002   0.4992
SE2     0.0198   0.0189   0.0186   0.0073   0.0164   0.0029   0.0415   0.0256   0.0254   0.0055   0.0130   0.0018
n = 200
EST1    0.5051   0.4995   0.5015   0.4864   0.4927   0.5013   0.4274   0.5097   0.4968   0.5018   0.5032   0.5011
SE1     0.0169   0.0175   0.0171   0.0067   0.0120   0.0021   0.0527   0.0202   0.0200   0.0032   0.0078   0.0013
EST2    0.4895   0.4898   0.5147   0.4951   0.4901   0.5020   0.3996   0.4930   0.5015   0.5042   0.5021   0.5031
SE2     0.0170   0.0175   0.0171   0.0067   0.0121   0.0020   0.0531   0.0204   0.0202   0.0033   0.0077   0.0014
Table 3. Monte Carlo simulations for DGP3.
Mean Squared Error for the Individual Effects

n     T     BC       CSSW     CSSG     KSS      BE1      BE2
50    20    3.3477   0.8816   0.8816   0.0130   0.0244   0.0356
50    50    3.3639   0.8469   0.8468   0.0082   0.0134   0.0152
100   20    3.5102   0.8309   0.8303   0.0123   0.0116   0.0282
100   50    3.7625   0.8357   0.8356   0.0072   0.0028   0.0053
200   20    3.8433   0.8335   0.8333   0.0121   0.0083   0.0116
200   50    3.8513   0.8393   0.8392   0.0063   0.0014   0.0019

Estimate and Standard Error for the Slope Coefficients

                          T = 20                                                T = 50
        BC       CSSW     CSSG     KSS      BE1      BE2      BC       CSSW     CSSG     KSS      BE1      BE2
n = 50
EST1    0.5277   0.5250   0.4994   0.4989   0.5012   0.5002   0.4868   0.4871   0.4976   0.5005   0.4991   0.5001
SE1     0.0188   0.0203   0.0197   0.0029   0.0081   0.0038   0.0122   0.0122   0.0120   0.0018   0.0041   0.0025
EST2    0.4905   0.4998   0.5062   0.4930   0.4981   0.4997   0.5259   0.5255   0.5207   0.5052   0.4994   0.5003
SE2     0.0198   0.0215   0.0207   0.0031   0.0078   0.0035   0.0121   0.0120   0.0119   0.0018   0.0042   0.0023
n = 100
EST1    0.4816   0.4768   0.4998   0.5030   0.4961   0.4992   0.4877   0.4863   0.4972   0.4986   0.4995   0.5002
SE1     0.0132   0.0139   0.0134   0.0021   0.0058   0.0025   0.0076   0.0077   0.0076   0.0013   0.0022   0.0017
EST2    0.4907   0.4816   0.5088   0.5028   0.4971   0.4985   0.5024   0.5118   0.5089   0.4993   0.4990   0.5004
SE2     0.0131   0.0135   0.0133   0.0021   0.0057   0.0024   0.0076   0.0077   0.0076   0.0013   0.0023   0.0018
n = 200
EST1    0.5120   0.5103   0.5110   0.5016   0.5012   0.5011   0.4976   0.5012   0.4962   0.4999   0.4981   0.5052
SE1     0.0088   0.0091   0.0089   0.0016   0.0042   0.0013   0.0055   0.0054   0.0054   0.0010   0.0015   0.0011
EST2    0.4885   0.4892   0.5019   0.5029   0.5015   0.5014   0.4874   0.4883   0.4957   0.4973   0.4992   0.4994
SE2     0.0088   0.0091   0.0089   0.0016   0.0041   0.0012   0.0055   0.0055   0.0054   0.0010   0.0016   0.0012
Table 4. Monte Carlo simulations for DGP4.
Mean Squared Error for the Individual Effects

n     T     BC       CSSW     CSSG     KSS      BE1      BE2
50    20    0.8042   0.2161   0.2161   0.0030   0.0130   0.0445
50    50    0.9478   0.2056   0.2056   0.0890   0.0045   0.0141
100   20    0.8770   0.1382   0.1382   0.0026   0.0112   0.0291
100   50    0.8626   0.1337   0.1337   0.0193   0.0028   0.0055
200   20    0.8764   0.1301   0.1301   0.0020   0.0098   0.0108
200   50    0.9111   0.1445   0.1445   0.0015   0.0015   0.0021

Estimate and Standard Error for the Slope Coefficients

                          T = 20                                                T = 50
        BC       CSSW     CSSG     KSS      BE1      BE2      BC       CSSW     CSSG     KSS      BE1      BE2
n = 50
EST1    0.5521   0.5250   0.5329   0.4995   0.4922   0.4951   0.5031   0.4871   0.4901   0.4999   0.5051   0.5010
SE1     0.0233   0.0203   0.0197   0.0030   0.0039   0.0031   0.0148   0.0122   0.0120   0.0019   0.0032   0.0025
EST2    0.4788   0.4998   0.5014   0.4907   0.5011   0.4977   0.5201   0.5255   0.5246   0.5053   0.5001   0.5003
SE2     0.0248   0.0215   0.0207   0.0031   0.0036   0.0032   0.0147   0.0120   0.0119   0.0018   0.0031   0.0028
n = 100
EST1    0.4732   0.4768   0.4713   0.5017   0.5001   0.5052   0.4720   0.4863   0.4867   0.4985   0.5003   0.5001
SE1     0.0169   0.0139   0.0134   0.0022   0.0031   0.0027   0.0103   0.0077   0.0076   0.0014   0.0021   0.0017
EST2    0.4880   0.4816   0.4836   0.5018   0.5000   0.5041   0.5077   0.5118   0.5117   0.4998   0.5002   0.5003
SE2     0.0167   0.0135   0.0133   0.0021   0.0032   0.0025   0.0103   0.0077   0.0076   0.0014   0.0020   0.0015
n = 200
EST1    0.5029   0.5103   0.5112   0.5011   0.5032   0.5001   0.4891   0.5012   0.5012   0.5003   0.4991   0.5002
SE1     0.0116   0.0091   0.0089   0.0016   0.0028   0.0020   0.0069   0.0054   0.0054   0.0010   0.0018   0.0014
EST2    0.4892   0.4892   0.4934   0.5021   0.5013   0.5020   0.4940   0.4883   0.4886   0.4970   0.4987   0.5013
SE2     0.0116   0.0091   0.0089   0.0016   0.0025   0.0022   0.0070   0.0055   0.0054   0.0010   0.0017   0.0013
Table 5. Estimation results.
Model     BC        CSSW      CSSG      KSS       BE1       BE2       BE1*      BE2*
PFA       −0.1267   −0.1067   −0.1243   −0.0448   −0.1221   −0.0505   −0.0972   −0.0555
NOE       −0.1518   −0.2750   −0.2731   −0.2195   −0.1520   −0.0666   −0.1145   −0.0424
PF        −0.1088   −0.0571   −0.0628   −0.0679   −0.0570   −0.1387   −0.0930   −0.1003
SA        −0.3058   −0.1026   −0.1413   −0.1289   −0.1700   −0.3042   −0.1030   −0.2542
CD        −0.2938   −0.2422   −0.2492   −0.1526   −0.2363   −0.2867   −0.1541   −0.2012
DD        −0.0295   −0.0055   −0.0297   −0.0321   −0.0259   −0.0636   −0.0715   −0.0335
REL       0.6302    0.6267    0.6254    0.5468    0.6182    0.6099    0.4103    0.4242
CIL       0.2674    0.2116    0.2053    0.3200    0.2300    0.2630    0.2415    0.2208
CL        0.1024    0.1617    0.1693    0.1332    0.1518    0.1271    0.1013    0.1212
RTS       1.0165    0.7891    0.8804    0.6459    0.7634    0.9102    0.6333    0.6871
Avg. TE   0.7576    0.6094    0.6608    0.5552    0.4584    0.6937    0.7944    0.7889
Table 6. Estimated efficiencies (evaluated at means) and Bayes factor in favor of endogeneity.
        Without Endogeneity       With Endogeneity
Year    BE1       BE2             BE1*      BE2*            BF
1990    0.6216    0.7125          0.7189    0.7192          3.672
1992    0.5915    0.7317          0.6179    0.7003          3.855
1994    0.6718    0.7106          0.7283    0.7146          3.781
1996    0.7103    0.7325          0.7781    0.7613          4.038
1998    0.7317    0.7716          0.7925    0.7845          4.129
2000    0.7612    0.7815          0.8105    0.8006          4.217
2003    0.7120    0.7451          0.8023    0.7942          5.333
2005    0.7101    0.7222          0.7945    0.7943          5.885
2007    0.6787    0.7104          0.7824    0.7745          6.452
2009    0.6513    0.6817          0.7748    0.7663          6.812
Note: For the Bayes factor (BF) calculation, see Perrakis et al. (2014).
