
Autoregressive Lag-Order Selection Using Conditional Saddlepoint Approximations

1 Department of Statistical Science, Southern Methodist University, Dallas, TX 75275-0332, USA
2 Department of Banking and Finance, University of Zurich, Zurich 8032, Switzerland
* Author to whom correspondence should be addressed.
Econometrics 2017, 5(3), 43; https://doi.org/10.3390/econometrics5030043
Submission received: 8 May 2017 / Revised: 11 August 2017 / Accepted: 6 September 2017 / Published: 19 September 2017
(This article belongs to the Special Issue Celebrated Econometricians: Peter Phillips)

Abstract

A new method for determining the lag order of the autoregressive polynomial in regression models with autocorrelated normal disturbances is proposed. It is based on a sequential testing procedure using conditional saddlepoint approximations and permits the desire for parsimony to be explicitly incorporated, unlike penalty-based model selection methods. Extensive simulation results indicate that the new method is usually competitive with, and often better than, common model selection methods.
JEL Classification:
C1

1. Introduction

We consider the performance of a new method for selecting the appropriate lag order p of an autoregressive (AR) model for the residuals of a linear regression. Our focus is on the small-sample performance as compared to competing methods, and, as such, we concentrate on the underlying small-sample distribution theory employed, instead of considerations of consistency and performance in an asymptotic framework.
The problem of ARMA model specification has a long history, and we omit a literature review, though we note the book by Choi (1992), dedicated to the topic. Less well documented is the use of small-sample distributional approximations using saddlepoint techniques. Saddlepoint methodology started with the seminal contributions of Daniels (1954, 1987) and continued with those of Barndorff-Nielsen and Cox (1979), Skovgaard (1987), Reid (1988), Daniels and Young (1991) and Kolassa (1996). It has been showcased in the book-length treatments of Field and Ronchetti (1990), Jensen (1995), Kolassa (2006), and Butler (2007), the latter showing the enormous variety of problems in statistical inference that are amenable to its use.
The first uses of saddlepoint approximations for inference concerning serial correlation are Daniels (1956) and McGregor (1960). This was followed by Phillips (1978), who also used such methods for simultaneous systems in Holly and Phillips (1979). Saddlepoint approximations have also been used for computing confidence intervals and unit root inference in first-order autoregressive models; see e.g., Broda et al. (2007), which builds on the methodology in Andrews (1993); and Broda et al. (2009). Further work on construction of confidence intervals in the near unit-root case can be found in Elliott and Stock (2001), Andrews and Guggenberger (2014) and Phillips (2014). Related work can be found in Ploberger and Phillips (2003), Leeb and Pötscher (2005) and Phillips (2008). Peter Phillips continues to work on the selection of the autoregressive order; see e.g., Han et al. (2017) for its study in the panel data case.
We restrict attention herein to the stationary setting and do not explicitly consider the unit root case. The time series $Y = (Y_1, \ldots, Y_T)'$ is assumed to have distribution $Y \sim \mathrm{N}(X\beta, \sigma^2 \Psi^{-1})$, where $X$ is a full-rank, $T \times k$ matrix of exogenous variables and $\Psi^{-1}$ is the covariance matrix corresponding to a stationary AR($p$) model with parameter vector $\phi = (\phi_1, \ldots, \phi_p)'$. Values $\beta$, $\sigma^2$, $\phi$ and $p$ are fixed but unknown. Below (in Section 6), we extend to an ARMA framework, such that, in addition, the MA parameters $\theta = (\theta_1, \ldots, \theta_q)'$ are unknown, but $q$ is assumed known.
The two most common approaches used to determine the appropriate AR lag order are (i) assessing the lag at which the sample partial autocorrelation "cuts off"; and (ii) use of popular penalty-based model selection criteria such as AIC and SBC/BIC (see, e.g., Konishi and Kitagawa 2008; Brockwell and Davis 1991, sct. 9.3; McQuarrie and Tsai 1998; Burnham and Anderson 2003). As emphasized in Chatfield (2001, pp. 31, 91), interpreting a sample correlogram is one of the hardest tasks in time series analysis, and its use for model determination is, at best, difficult, often involving considerable experience and subjective judgment. The fact that the distributions of the sample autocorrelation function (SACF) and the sample partial ACF (SPACF) of the regression residuals are (especially for smaller sample sizes) highly dependent on the $X$ matrix makes matters significantly more complicated. In comparison, the application of penalty-based model selection criteria is virtually automatic, requiring only the choice of which criterion to use. The criticism that their use involves model estimation and, thus, far more calculation is, with modern computing power, no longer relevant.
A different, albeit seldom used, identification methodology involves sequential testing procedures. One approach sequentially tests
$$H_m: \phi_m = 0, \quad H_{m-1}: \phi_m = \phi_{m-1} = 0, \quad \ldots, \quad H_1: \phi_m = \cdots = \phi_1 = 0.$$
Testing stops when the first hypothesis is rejected (and all remaining hypotheses are then also rejected). Tests of the $k$th null hypothesis can be based on a scaled sum of squared values of the SPACF, or on numerous variations thereof, all of which are asymptotically $\chi^2$ distributed; see Choi (1992, chp. 6) and the references therein for a detailed discussion.
The use of sequential testing procedures in this context is not new. For example, Jenkins and Alavi (1981) and Tiao and Box (1981) propose methods based on the asymptotic distribution of the SPACF under the null of white noise. More generally, Pötscher (1983) considers determination of optimal values of $p$ and $q$ by a sequence of Lagrange multiplier tests. In particular, for a given choice of maximal orders, $P$ and $Q$, and a chain of $(p,q)$-values $(p_0, q_0) = (0, 0), (p_1, q_1), \ldots, (p_K, q_K) = (P, Q)$, such that either $p_{i+1} = p_i$ and $q_{i+1} = q_i + 1$, or $p_{i+1} = p_i + 1$ and $q_{i+1} = q_i$, $i = 0, 1, \ldots, K$, with $K = P + Q$, a sequence of Lagrange multiplier tests is performed for each possible chain. The optimal orders are obtained when the test does not reject for the first time. As noted by Pötscher (1983, p. 876), “strong consistency of the estimators is achieved if the significance levels of all the tests involved tend to zero with increasing [sample] size…”. This forward search procedure is superficially similar to the method proposed herein, and also requires specification of a sequence of significance levels. Our method differs in two important regards. First, near-exact small-sample distribution theory is employed by use of conditional saddlepoint approximations; and second, we explicitly allow for, and account for, a mean term in the form of a regression $X\beta$.
Besides the inherent problems involved in sequential testing procedures, such as controlling overall type I errors and the possible tendency to over-fit (as in backward regression procedures), the reliance on asymptotic distributions can be detrimental when working with small samples and unknown mean term X β . This latter problem could be overcome by using exact small-sample distribution theory and a sequence of point optimal tests in conjunction with some prior knowledge of the AR coefficients; see, e.g., King and Sriananthakumar (2015) and the references therein.
Compared to sequential hypothesis testing procedures, penalty-based model selection criteria have the arguable advantage that there is no model preference via a null hypothesis, and that the order in which calculations are performed is irrelevant (see, for example, Granger et al. 1995). On the other hand, as is forcefully and elegantly argued in Zellner (2001) in a general econometric modeling context, it is worthwhile to have an ordering of possible models in terms of complexity, with higher probabilities assigned to simpler models. Moreover, Zellner (2001, sct. 3) illustrates the concept with the choice of ARMA models, discouraging the use of MA components in favor of pure AR processes, even if it entails more parameters, because “counting parameters” is not necessarily a measure of complexity (see also Keuzenkamp and McAleer 1997, p. 554). This agrees precisely with the general findings of Makridakis and Hibon (2000, p. 458), who state “Statistically sophisticated or complex models do not necessarily produce more accurate forecasts than simpler ones”. With such a modeling approach, the aforementioned disadvantages of sequential hypothesis testing procedures become precisely its advantages. In particular, one is able to specify error rates on individual hypotheses corresponding to models of differing complexity.
In this paper, we present a sequential hypothesis testing procedure for computing the lag length that, in comparison to the somewhat ad hoc sequential methods mentioned above, operationalizes the uniformly most powerful unbiased (UMPU) test of Anderson (1971, pp. 34–46, 260–76). It makes use of a conditional saddlepoint approximation to the (otherwise completely intractable) distribution of the $m$th sample autocorrelation given those of orders $1, 2, \ldots, m-1$. While exact calculation of the required distribution is not possible (and is not even straightforward via simulation, because of the required conditioning), Section 4 provides evidence that the saddlepoint approximation is, for practical purposes, virtually exact in this context.
The remainder of the paper is outlined as follows. Section 2 and Section 3 briefly outline the required distribution theory of the sample autocorrelation function and the UMPU test, respectively. Section 4 illustrates the performance of the proposed method in the null case of no autocorrelation, while Section 5 compares the performance of the new and existing order selection methods for several autoregressive structures. Section 6 proposes an extension of the method to handle AR lag selection in the presence of ARMA disturbances. Section 7 provides a performance comparison when the Gaussianity assumption is violated. Section 8 provides concluding remarks.

2. The Distribution of the Autocorrelation Function

Define the $T \times T$ matrix $A_l$ such that its $(i,j)$th element is given by $a_{ij} = \mathbb{I}(|i-j| = l)/2$, where $\mathbb{I}(\cdot)$ denotes the indicator function. Then, for a covariance stationary, mean-zero vector $\epsilon = (\epsilon_1, \ldots, \epsilon_T)'$, the ratio of quadratic forms
$$R_l = \frac{\epsilon' A_l \epsilon}{\epsilon' \epsilon} = \frac{\hat\gamma(l)}{\hat\gamma(0)}, \qquad \hat\gamma(l) = T^{-1} \sum_{t=1}^{T-l} \epsilon_t \epsilon_{t+l}, \tag{1}$$
is the usual estimator of the $l$th-lag autocorrelation coefficient. The sample autocorrelation function with $m$ components, hereafter SACF, is then given by the vector $\mathbf{R}_m = (R_1, \ldots, R_m)'$ with joint density denoted by $f_{\mathbf{R}_m}$.
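As a quick numerical check of (1), the following sketch (Python with NumPy; our own illustration, separate from the authors' Matlab code mentioned later) builds $A_l$ and confirms that the ratio of quadratic forms reproduces the direct estimator $\hat\gamma(l)/\hat\gamma(0)$:

```python
import numpy as np

def A(l, T):
    """T x T matrix with (i, j) entry 1/2 if |i - j| = l, else 0."""
    idx = np.arange(T)
    return 0.5 * (np.abs(idx[:, None] - idx[None, :]) == l)

def sacf_quadform(eps, m):
    """Sample ACF (r_1, ..., r_m) via the ratio-of-quadratic-forms representation (1)."""
    T = len(eps)
    denom = eps @ eps
    return np.array([eps @ A(l, T) @ eps / denom for l in range(1, m + 1)])

# check against the direct estimator gamma_hat(l) / gamma_hat(0)
rng = np.random.default_rng(0)
eps = rng.standard_normal(30)
gam = lambda l: np.sum(eps[:len(eps) - l] * eps[l:]) / len(eps)
direct = np.array([gam(l) / gam(0) for l in range(1, 5)])
assert np.allclose(sacf_quadform(eps, 4), direct)
```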
Recall that a function $\gamma: \mathbb{Z} \to \mathbb{R}$ is the autocovariance function of a weakly stationary time series if and only if $\gamma$ is even, i.e., $\gamma(h) = \gamma(-h)$ for all $h \in \mathbb{Z}$, and is positive semi-definite. See, e.g., Brockwell and Davis (1991, p. 27) for proof. Next recall that a symmetric matrix is positive definite if and only if all the leading principal minors are positive; see, e.g., Abadir and Magnus (2005, p. 223). As such, the support of $\mathbf{R}_m$ is given by
$$\mathcal{R}_m = \left\{ r = (r_1, r_2, \ldots, r_m)' : |\mathbf{R}_i| > 0,\ i = 1, \ldots, m \right\}, \tag{2}$$
where $\mathbf{R}_i$ is the band matrix given by
$$\mathbf{R}_i = \begin{pmatrix} 1 & r_1 & \cdots & r_i \\ r_1 & 1 & \cdots & r_{i-1} \\ \vdots & \vdots & \ddots & \vdots \\ r_i & r_{i-1} & \cdots & 1 \end{pmatrix}.$$
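Membership in $\mathcal{R}_m$ can be checked directly from this characterization; a minimal sketch (our own, testing the positive-determinant condition on each $\mathbf{R}_i$):

```python
import numpy as np
from scipy.linalg import toeplitz

def in_support(r):
    """True if r = (r_1, ..., r_m) satisfies |R_i| > 0 for i = 1, ..., m."""
    r = np.asarray(r, dtype=float)
    for i in range(1, len(r) + 1):
        Ri = toeplitz(np.concatenate(([1.0], r[:i])))  # (i+1) x (i+1) band matrix
        if np.linalg.det(Ri) <= 0:
            return False
    return True

print(in_support([0.5, 0.2]))   # True
print(in_support([0.9, 0.5]))   # False: violates r_2 > 2 r_1^2 - 1
```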
Assume for the moment that there are no regression effects and let $\epsilon \sim \mathrm{N}_T(0, \Omega^{-1})$ with $\Omega^{-1} > 0$, i.e., $\Omega$ is positive definite. While no tractable expression for $f_{\mathbf{R}_m}$ appears to exist, a saddlepoint approximation to the density of $\mathbf{R}_m$ at $r = (r_1, \ldots, r_m)'$ is shown in Butler and Paolella (1998) to be given by
$$\hat f_{\mathbf{R}_m}(r) = (2\pi)^{-m/2}\, |\Omega|^{1/2}\, |\hat H_\Omega|^{-1/2}\, |\hat P_\Omega|^{-1/2}\, \big(\operatorname{tr} \hat P_\Omega^{-1}\big)^m, \tag{3}$$
where
$$\hat P_\Omega = \hat P_\Omega(\hat s) = \Omega + 2\,(r'\hat s)\, I_T - 2 \sum_{i=1}^m \hat s_i A_i, \tag{4}$$
and $\hat H_\Omega = \hat H_\Omega(\hat s)$ with $(i,j)$th element given by
$$\hat h_{ij} = -\frac{1}{2} \frac{\partial^2}{\partial \hat s_i\, \partial \hat s_j} \log |\hat P_\Omega(\hat s)| = 2 \operatorname{tr}\!\left[ \hat P_\Omega^{-1} (A_i - r_i I_T)\, \hat P_\Omega^{-1} (A_j - r_j I_T) \right],$$
$i, j = 1, \ldots, m$. The saddlepoint vector $\hat s = (\hat s_1, \ldots, \hat s_m)'$ solves
$$0 = -\frac{1}{2} \frac{\partial}{\partial \hat s_i} \log |\hat P_\Omega(\hat s)| = \operatorname{tr}\!\left[ \hat P_\Omega^{-1} (A_i - r_i I_T) \right], \qquad i = 1, \ldots, m, \tag{5}$$
and, in general, needs to be obtained numerically. In the null setting for which $\Omega = I_T$, $\operatorname{tr} \hat P_I^{-1} = T$, so that the last factor in (3) is just $T^m$. Special cases for which explicit solutions to (5) exist are given in Butler and Paolella (1998).
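As a rough indication of how (3)-(5) can be put on a computer, the following sketch solves the saddlepoint equation (5) with a generic root-finder and then evaluates the density (3). It is our own minimal translation of the formulas (reusing A(l, T) from the earlier sketch), not the authors' implementation, and omits the numerical safeguards a production version would need:

```python
import numpy as np
from scipy.optimize import fsolve

def P_hat(s, r, Omega, A_list):
    # equation (4): P(s) = Omega + 2 (r's) I_T - 2 sum_i s_i A_i
    T = Omega.shape[0]
    return Omega + 2 * (r @ s) * np.eye(T) - 2 * sum(si * Ai for si, Ai in zip(s, A_list))

def saddle_eqs(s, r, Omega, A_list):
    # equation (5): tr[ P^{-1} (A_i - r_i I_T) ] = 0, i = 1, ..., m
    T = Omega.shape[0]
    Pinv = np.linalg.inv(P_hat(s, r, Omega, A_list))
    return [np.trace(Pinv @ (Ai - ri * np.eye(T))) for Ai, ri in zip(A_list, r)]

def f_hat(r, Omega, A_list):
    # saddlepoint density (3) at r = (r_1, ..., r_m)
    m, T = len(r), Omega.shape[0]
    s = fsolve(saddle_eqs, np.zeros(m), args=(r, Omega, A_list))
    P = P_hat(s, r, Omega, A_list)
    Pinv = np.linalg.inv(P)
    # Hessian entries: h_ij = 2 tr[ P^{-1}(A_i - r_i I) P^{-1}(A_j - r_j I) ]
    B = [Pinv @ (Ai - ri * np.eye(T)) for Ai, ri in zip(A_list, r)]
    H = 2.0 * np.array([[np.trace(Bi @ Bj) for Bj in B] for Bi in B])
    return ((2 * np.pi) ** (-m / 2) * np.sqrt(np.linalg.det(Omega))
            / np.sqrt(np.linalg.det(H) * np.linalg.det(P))
            * np.trace(Pinv) ** m)

T, m = 20, 2
A_list = [A(l, T) for l in range(1, m + 1)]   # A(l, T) as defined above
print(f_hat(np.array([0.3, 0.1]), np.eye(T), A_list))   # null case Omega = I_T
```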
With respect to the calculation of $\Omega$ corresponding to a stationary, invertible ARMA($p,q$) process, the explicit method in McLeod (1975) could be used, though more convenient in matrix-based computing platforms are the matrix expressions given in Mittnik (1988) and Van der Leeuw (1994). Code for the latter, as well as for computing the CACF test, is available upon request.
The extension of (3) for use with regression residuals is not immediately possible because the covariance matrix of $\hat\epsilon$ is not full rank, and a canonical reduction of the residual vector is required. To this end, let
$$y \sim \mathrm{N}(X\beta, \Psi^{-1}), \tag{6}$$
where $X$ is a full-rank $T \times k$ matrix of exogenous variables. Denote the OLS residual vector as $\hat\epsilon = My = M\epsilon$, where $M = I_T - X(X'X)^{-1}X'$. As $M$ is an orthogonal projection matrix, it can be expressed as $M = G'G$, where $G$ is $(T-k) \times T$ and such that $GG' = I_{T-k}$ and $GX = 0$. Then
$$R_l = \frac{\hat\epsilon' A_l \hat\epsilon}{\hat\epsilon' \hat\epsilon} = \frac{y' M A_l M y}{y' M y} = \frac{y' G' G A_l G' G y}{y' G' G y} = \frac{w' \tilde A_l w}{w' w}, \tag{7}$$
where $w = Gy$ and $\tilde A_l = G A_l G'$ is a $(T-k) \times (T-k)$ symmetric matrix. For example, $G$ could consist of an orthogonal basis for the $T-k$ eigenvectors of $M$ corresponding to the unit eigenvalues. By setting $\Omega^{-1} = G \Psi^{-1} G'$, approximation (3) becomes valid using $w \sim \mathrm{N}(0, \Omega^{-1})$ and $G A_l G'$ in place of $\epsilon$ and $A_l$, respectively. Note that, in the null case with $y \sim \mathrm{N}(X\beta, \sigma^2 I_T)$, $\Omega^{-1} = \sigma^2 I_{T-k}$.
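One concrete way to obtain $G$ is, as suggested above, from the eigenvectors of $M$ belonging to its unit eigenvalues; a sketch under that choice (our own illustration):

```python
import numpy as np

def canonical_reduction(X):
    """Return G with G G' = I_{T-k}, G X = 0 and M = G'G, for M = I - X(X'X)^{-1}X'."""
    T, k = X.shape
    M = np.eye(T) - X @ np.linalg.solve(X.T @ X, X.T)
    eigval, eigvec = np.linalg.eigh(M)    # eigenvalues are 0 (k times) and 1 (T-k times)
    return eigvec[:, eigval > 0.5].T      # (T-k) x T, rows span the unit eigenspace

T = 30
X = np.column_stack([np.ones(T), np.arange(1, T + 1)])   # constant and trend
G = canonical_reduction(X)
assert np.allclose(G @ G.T, np.eye(T - 2)) and np.allclose(G @ X, 0)
```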

3. Conditional Distributions and UMPU Tests

Anderson (1971, sct. 6.3.2) has shown for the regression model with circular AR($m$) errors (so that the disturbances are treated circularly, $\epsilon_t \equiv \epsilon_{t+T}$) and the columns of $X$ restricted to Fourier regressors, i.e.,
$$y_t = \beta_1 + \sum_{s=1}^{(k-1)/2} \left[ \beta_{2s} \cos\!\left( \frac{2\pi s t}{T} \right) + \beta_{2s+1} \sin\!\left( \frac{2\pi s t}{T} \right) \right] + \epsilon_t, \tag{8}$$
that the uniformly most powerful unbiased (UMPU) test of AR($m-1$) versus AR($m$) disturbances rejects for values of $r_m$ falling sufficiently far out in either tail of the conditional density
$$f_{R_m \mid \mathbf{R}_{m-1}}(r_m \mid r_{m-1}), \tag{9}$$
where $r_{m-1} = (r_1, \ldots, r_{m-1})'$ denotes the observed value of the vector of random variables $\mathbf{R}_{m-1}$. A p-value can be computed as $\min(\tau_m, 1 - \tau_m)$, where
$$\tau_1 = \Pr(R_1 < r_1) \qquad \text{and} \qquad \tau_m = \Pr(R_m < r_m \mid \mathbf{R}_{m-1} = r_{m-1}), \quad m > 1. \tag{10}$$
As in the well-studied $m = 1$ case (cf. Durbin and Watson (1950, 1971) and the references therein), the optimality of the test breaks down in the non-circular model and/or with arbitrary exogenous $X$, but it provides strong motivation for an approximately UMPU test in the general setting considered here. This is particularly so for economic time series, as they typically exhibit seasonal (i.e., cyclical) behavior that can mimic the Fourier regressors in (8) (Dubbelman et al. 1978; King 1985, p. 32).
Following the methodology outlined in Barndorff-Nielsen and Cox (1979), Butler and Paolella (1998) derive a conditional double saddlepoint density for (9), computed as the ratio of two single approximations:
$$\hat f_{R_m \mid \mathbf{R}_{m-1}}(r_m \mid r_{m-1}; \Omega) = \frac{\hat f_{\mathbf{R}_m}(r_{m-1}, r_m; \Omega)}{\hat f_{\mathbf{R}_{m-1}}(r_{m-1}; \Omega)} = \left( \frac{|\hat H_{m-1}|\, |\hat P_{m-1}|}{2\pi\, |\hat H_m|\, |\hat P_m|} \right)^{1/2} \frac{\big(\operatorname{tr} \hat P_m^{-1}\big)^m}{\big(\operatorname{tr} \hat P_{m-1}^{-1}\big)^{m-1}}, \tag{11}$$
where $\hat H_{m-1}$ and $\hat P_{m-1}$ are the $\hat H_\Omega$ and $\hat P_\Omega$ values, respectively, associated with the $(m-1)$-dimensional saddlepoint $\hat s_{m-1}$ of the denominator determined by $r_{m-1}$; likewise for $\hat H_m$ and $\hat P_m$, and explicit dependence on $\Omega$ has been suppressed.
Thus, $\tau_m$ in (10) for $m > 1$ can be computed as
$$\tau_m = \frac{\int_{\mathcal{L}_m} \hat f_{R_m \mid \mathbf{R}_{m-1}}(r)\, dr_m}{\int_{\mathcal{M}_m} \hat f_{R_m \mid \mathbf{R}_{m-1}}(r)\, dr_m} = \frac{\int_{\mathcal{L}_m} \hat f_{\mathbf{R}_m}(r_{m-1}, r_m; \Omega)\, dr_m}{\int_{\mathcal{M}_m} \hat f_{\mathbf{R}_m}(r_{m-1}, r_m; \Omega)\, dr_m}, \tag{12}$$
where $\mathcal{L}_m = \mathcal{M}_m \cap (-1, r_m]$ and $\mathcal{M}_m$ denotes the conditional support of $R_m$ given $\mathbf{R}_{m-1} = r_{m-1}$. It is given by
$$\mathcal{M}_m = \{ r_m : -1 \le r_{\min} < r_m < r_{\max} \le 1 \}, \tag{13}$$
where $r_{\min}$ and $r_{\max}$ are such that, for $r_m$ outside these values, $r_m$ does not correspond to the ACF of a stationary AR($m$) process. These values can be found as follows. Assume that $r_{m-1}$ lies in the support of the distribution of $\mathbf{R}_{m-1}$. From (2), the range of support for $R_m$ is determined by the inequality constraint $|\mathbf{R}_m| > 0$. Expressing the determinant in block form,
$$|\mathbf{R}_m| = |\mathbf{R}_{m-1}| \left( 1 - r' \mathbf{R}_{m-1}^{-1} r \right),$$
and, as $|\mathbf{R}_{m-1}| > 0$ by assumption, $r_m$ ranges over
$$\left\{ r_m : 1 - r' \mathbf{R}_{m-1}^{-1} r > 0 \right\}.$$
Letting $\mathbf{R}_{m-1}^{-1} = (w_{ij})$, $i, j = 1, \ldots, m$, and using the fact that $\mathbf{R}_{m-1}$ is both symmetric and persymmetric (i.e., symmetric in the northeast-to-southwest diagonal), the range of $r_m$ is given by
$$1 - \sum_{i=1}^m \sum_{j=1}^m w_{ij} r_i r_j = 1 - w_{mm} r_m^2 - 2 r_m \sum_{i=1}^{m-1} w_{im} r_i - \sum_{i=1}^{m-1} \sum_{j=1}^{m-1} w_{ij} r_i r_j > 0.$$
This expression is quadratic in $r_m$, so that its two ordered roots can be taken as the values of $r_{\min}$ and $r_{\max}$, respectively. For $m = 2$, this yields $2 r_1^2 - 1 < r_2 < 1$; for $m = 3$,
$$\frac{(r_1 + r_2)^2}{r_1 + 1} - 1 \;<\; r_3 \;<\; \frac{(r_1 - r_2)^2}{r_1 - 1} + 1.$$
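The bounds are immediate to compute numerically as the roots of this quadratic; a small sketch (our own, with $W = \mathbf{R}_{m-1}^{-1}$ as above):

```python
import numpy as np
from scipy.linalg import toeplitz

def conditional_support(r_prev):
    """(r_min, r_max) for r_m given r_prev = (r_1, ..., r_{m-1})."""
    r_prev = np.asarray(r_prev, dtype=float)
    W = np.linalg.inv(toeplitz(np.concatenate(([1.0], r_prev))))  # m x m
    a = W[-1, -1]                              # coefficient of r_m^2
    b = 2.0 * (W[:-1, -1] @ r_prev)            # coefficient of r_m
    c = r_prev @ W[:-1, :-1] @ r_prev - 1.0    # constant term
    roots = np.sort(np.roots([a, b, c]))       # a r^2 + b r + c < 0 between the roots
    return roots[0], roots[1]

print(conditional_support([0.5]))   # m = 2: recovers (2 * 0.5**2 - 1, 1) = (-0.5, 1.0)
```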
While computation of (12) is straightforward, it is preferable to derive an approximation to $\tau_m$ similar in spirit to the Lugannani and Rice (1980) saddlepoint approximation to the cdf of a univariate random variable. This method begins with the cumulative integral of the conditional saddlepoint density over $(-1, r_m]$, as a portion of $\mathcal{M}_m$, the support of $R_m$ given $\mathbf{R}_{m-1} = r_{m-1}$, as specified in the first equality of
$$\tau_m = \int_{\mathcal{M}_m \cap (-1, r_m]} \hat f_{R_m \mid \mathbf{R}_{m-1}}(r)\, dr = \int_{-\infty}^{w_0} h(w)\, \phi(w)\, dw. \tag{14}$$
A change of variable in the integration from $dr$ to $dw$ allows the integration to be rewritten as in the second equality of (14). Here, $\phi(w)$ is the standard normal density, $h(w)$ is the remaining portion of the integrand required for equality to hold in (14), and $w_0$ is the image of $r_m$ under the mapping $r \mapsto w$, given in (16) below. Temme's method applied to the right-hand side of (14) leads to the expression
$$\Pr(R_m \le r_m \mid \mathbf{R}_{m-1} = r_{m-1}; \Omega) \approx \Phi(w_0) + \phi(w_0) \left( \frac{1}{w_0} - \frac{1}{v_0} \right), \tag{15}$$
for $\hat s_m \ne 0$, where
$$w_0 = \operatorname{sgn}(\hat s_m) \sqrt{ \log\big( |\hat P_m| / |\hat P_{m-1}| \big) }, \tag{16}$$
$$v_0 = \hat s_m \left( \frac{|\hat H_m|}{|\hat H_{m-1}|} \right)^{1/2} \left( \frac{\operatorname{tr} \hat P_{m-1}^{-1}}{\operatorname{tr} \hat P_m^{-1}} \right)^{m-1}, \tag{17}$$
and $\Phi$ and $\phi$ denote the cdf and pdf of the standard normal distribution, respectively. Details of this derivation are given in Butler and Paolella (1998) and Butler (2007, scts. 12.5.1 and 12.5.5).
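Once the saddlepoint quantities for the $m$- and $(m-1)$-dimensional problems are available (e.g., from a routine such as the density sketch in Section 2), (15)-(17) reduce to a few lines; a schematic version in our own notation:

```python
import numpy as np
from scipy.stats import norm

def tail_prob(s_m, detP_m, detP_m1, detH_m, detH_m1, trPinv_m, trPinv_m1, m):
    """Pr(R_m <= r_m | R_{m-1} = r_{m-1}) via (15)-(17); valid for s_m != 0
    (a limiting form is needed near s_m = 0, as in standard Lugannani-Rice usage)."""
    w0 = np.sign(s_m) * np.sqrt(np.log(detP_m / detP_m1))                      # (16)
    v0 = s_m * np.sqrt(detH_m / detH_m1) * (trPinv_m1 / trPinv_m) ** (m - 1)   # (17)
    return norm.cdf(w0) + norm.pdf(w0) * (1.0 / w0 - 1.0 / v0)                 # (15)
```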
In general, it is well known that the middle integral in (14) is less accurate than the normalized ratio in (12); see Butler (2007, eq. 2.4.6). However, quite remarkably, the Temme argument applied to (14) most often makes (15) more accurate than the normalized ratio in (12). A full and definitive explanation of this tendency remains elusive, although a partial asymptotic explanation is discussed in Butler (2007, sct. 2.3.2). In this setting, both (15) and (12) indeed yield very similar results, differing only in the third significant digit. It should be noted that the resulting p-values, or even the conditional distribution itself, cannot easily be obtained via simulation, so that it is difficult to check the accuracy of (12) and (15). In the next two sections, we show a way that lends support to the correctness (and usefulness) of the methods.

4. Properties of the Testing Procedure Under the Null

We first wish to assess the accuracy of the conditional saddlepoint approximation to (9) when $\Psi^{-1} = I_T$ in (6), i.e., there is no autocorrelation in the disturbance terms of the linear model $y = X\beta + \epsilon$. In this case, we expect $\tau_i \stackrel{\text{iid}}{\sim} \mathrm{Unif}(0, 1)$. This was empirically tested by computing $\tau_1$, $\tau_2$, $\tau_3$ and $\tau_4$ in (10), based on (15), using observed values $r_1, \ldots, r_4$, for 10,000 replications of $T$-length time series, each consisting of $T$ independent standard normal simulated random variables, $T = 15$ and $T = 30$, but with mean removal, i.e., taking $X = \mathbf{1}$. Histograms of the resulting $\tau_i$, as shown in Figure 1, are in agreement with the uniform assumption. Furthermore, the absolute sample correlations between each pair of the $\tau_i$ were all less than 0.02 for $T = 15$ and less than 0.013 for $T = 30$.
These results are in stark contrast to the empirical distribution of the “t-statistics” and the associated p-values of the maximum likelihood estimates (MLEs). For 500 simulated series, the model $y_t = \mu + \varepsilon_t$, $\varepsilon_t = \phi_1 \varepsilon_{t-1} + \phi_2 \varepsilon_{t-2} + \phi_3 \varepsilon_{t-3} + u_t$, $u_t \stackrel{\text{iid}}{\sim} \mathrm{N}(0, \sigma^2)$, $t = 1, \ldots, T$, was estimated using exact maximum likelihood, with approximate standard errors of the parameters obtained by numerically evaluating the Hessian at the MLE. Figure 2 shows the empirical distribution of the $\tau_i = F(t_i)$, where $t_i$ is the ratio of the MLE of $\phi_i$ to its corresponding approximate standard error, $i = 1, 2, 3$, and $F(\cdot)$ refers to the cdf of the Student’s t distribution with $T - 4$ degrees of freedom.¹ The top two rows correspond to $T = 15$ and $T = 30$; while somewhat better for $T = 30$, it is clear that the usual distributional assumption on the MLE t-statistics does not hold. The last two rows correspond to $T = 100$ and $T = 200$, for which the asymptotic distribution is adequate.

5. Properties of the Testing Procedure Under the Alternative

5.1. Implementation and Penalty-Based Criteria

One way of implementing the p-values computed from (15) for selecting the autoregressive lag order $p$ is to let it be the largest value $j \in \{1, \ldots, m\}$ such that $\tau_j < c$ or $\tau_j > 1 - c$, or to set it to zero if no such extreme $\tau_j$ occurs. Hereafter, we refer to this as the conditional ACF test or, in short, CACF. The effectiveness of this strategy will clearly be quite dependent on the choices of $m$ and $c$. We will see that it is, unfortunately and like all selection criteria, also highly dependent on the actual, unknown autoregressive coefficients.
Another possible strategy, say, the alternative CACF test, is, for a given $c$ and $m$, to start by testing an AR(1) specification, checking whether $\tau_1 < c$ or $\tau_1 > 1 - c$. If this is not the case, then one declares the lag order to be $p = 0$. If instead $\tau_1$ is in the critical region, then one continues to the AR(2) case, inspecting whether $\tau_2 < c$ or $\tau_2 > 1 - c$. If not, $p = 1$ is chosen; if so, then $\tau_3$ is computed, etc., continuing sequentially for $j = 1, \ldots, m$, and stopping when either the null at lag $j$ is not rejected, in which case $p = j - 1$ is returned, or when $j = m$. Below, we will only investigate the performance of the first strategy; both are sketched in code after this paragraph. We note that the two strategies will have different small-sample properties that clearly depend on the true $p$ and the coefficients of the AR($p$) model (as well as the sample size $T$ and the choices of $m$ and $c$). In particular, assuming in both cases that $m > p$, the alternative CACF test could perform better if $m$ is chosen substantially larger than the true $p$.
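A minimal rendering of both strategies, taking the vector $(\tau_1, \ldots, \tau_m)$ as given (our own sketch; the computation of the $\tau_j$ themselves is as in Section 3):

```python
import numpy as np

def select_p(tau, c=0.025):
    """First strategy: largest j with tau_j < c or tau_j > 1 - c, else 0.
    c may also be a vector of per-lag sizes, as used in Section 5.5."""
    tau = np.asarray(tau)
    extreme = np.flatnonzero((tau < c) | (tau > 1 - c))
    return int(extreme[-1]) + 1 if extreme.size else 0

def select_p_sequential(tau, c=0.025):
    """Alternative strategy: stop at the first non-rejection."""
    for j, t in enumerate(tau, start=1):
        if c <= t <= 1 - c:
            return j - 1
    return len(tau)

print(select_p([0.01, 0.40, 0.99, 0.50]))             # 3
print(select_p_sequential([0.01, 0.40, 0.99, 0.50]))  # 1
```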
While penalty function methods also require an upper limit $m$, the CACF has the extra “tuning parameter” $c$, which can be seen as either a blessing or a curse. A natural value might be $c = 0.025$, so that, under the null of zero autocorrelation, $p$ assumes a particular wrong value with approximate probability 0.05, and $p = 0$ is chosen with approximate probability $1 - 0.05m$, i.e.,
$$\Pr(\text{choose } p = 0 \mid \text{white noise}) = \Pr\big( \tau_j \in (0.025, 0.975),\ j = 1, \ldots, m \big) = (1 - 0.05)^m \approx 1 - 0.05m, \tag{18}$$
from the binomial expansion. This value of $c$ will be used for two of the three comparisons below, while the last one demonstrates that a higher value of $c$ is advisable.
The CACF will be compared to the use of the following popular penalty-based measures, such that the lag order is chosen based upon the value of j for which they are minimized:
$$\mathrm{AIC} = \ln \hat\sigma^2 + \frac{2z}{T}, \qquad \mathrm{AICC} = \ln \hat\sigma^2 + \frac{T + z}{T - z - 2}, \qquad \mathrm{SBC} = \ln \hat\sigma^2 + \frac{z \ln T}{T},$$
$$\mathrm{FPE} = \hat\sigma^2\, \frac{T + z}{T - z}, \qquad \mathrm{HQ} = \ln \hat\sigma^2 + \frac{2 z \ln \ln T}{T},$$
where $\hat\sigma^2$ denotes the MLE of the innovation variance and $z = j + k$ denotes the number of estimated parameters (not including $\sigma^2$, but including $k$, the number of regressors). Observe that both our CACF method and the use of penalty criteria assume the model regressor matrix is correctly specified, and condition on it; neither method assumes that the regression coefficients are known, and these instead need to be estimated, along with the autoregressive parameters. Original references and ample discussion of these selection criteria can be found in the survey books of Choi (1992, chp. 3), McQuarrie and Tsai (1998) and Konishi and Kitagawa (2008), as well as Lütkepohl (2005) for the vector autoregressive case.
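In code, the criteria and the induced lag choice are immediate; a sketch (our own, assuming the innovation-variance MLEs for each candidate $j$ have already been computed):

```python
import numpy as np

def criteria(sigma2_hat, z, T):
    """The five penalty-based measures as displayed above; z = j + k."""
    ls = np.log(sigma2_hat)
    return {"AIC":  ls + 2 * z / T,
            "AICC": ls + (T + z) / (T - z - 2),
            "SBC":  ls + z * np.log(T) / T,
            "FPE":  sigma2_hat * (T + z) / (T - z),
            "HQ":   ls + 2 * z * np.log(np.log(T)) / T}

def select_by(name, sigma2_hats, k, T):
    """Pick the lag order j = 0, ..., m minimizing the named criterion;
    sigma2_hats[j] is the variance MLE of the AR(j) fit."""
    vals = [criteria(s2, j + k, T)[name] for j, s2 in enumerate(sigma2_hats)]
    return int(np.argmin(vals))
```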

5.2. Comparison with AR(1) Models

For each method, the optimal AR lag orders among the choices $p = 0$ through $p = 4$ were determined for each of 100 simulated mean-zero AR(1) series of length $T = 30$ and AR parameter $\phi$, using $\phi = 0, \pm 0.1, \pm 0.3, \ldots, \pm 0.9$,² as well as the two regression models, constant ($X = \mathbf{1}$) and constant-trend ($X = [\mathbf{1}\ t]$), where $t = (1, 2, \ldots, T)'$. For the CACF method, values $c = 0.025$ and $m = 4$ were used.
Figure 3 and Figure 4 show the results for the two regression cases. In both cases, the CACF method dominates in the null model, while for small absolute values of $\phi$, the CACF underselects more than the penalty-based criteria. For $|\phi| \ge 0.5$, the CACF dominates, though SBC is not far off in the $X = \mathbf{1}$ case. For the constant-trend model, the CACF is clearly preferred. The relative improvement of the CACF performance in the constant-trend model shows the benefit of explicitly taking the regressors into account when computing tail probabilities in (15) via (3).
A potential concern with the CACF method is what happens if $m$ is much larger than the true $p$. To investigate this, we stay with the AR(1) example, but consider only the case with $\phi = 0.5$, and use three choices of $m$, namely 2, 6 and 10. We first do this with a larger sample size of $T = 60$ and no $X$ matrix, which should convey an advantage to the penalty-based measures relative to the CACF. For the latter, we use only the AICC and BIC. Figure 5 shows the results, based on 1,000 replications. With $m = 2$, all three methods are very accurate, with CACF and BIC being about equal with respect to the probability of choosing the correct $p$ of one, and slightly beating the AICC. With $m = 6$, the BIC dominates. The nature of the CACF methodology is such that, when $m$ is much larger than $p$, the probability of overfitting (choosing $p$ too high) will increase, according to the choice of $c$. With $m = 10$, this is apparent. In this case, the BIC is superior, and also substantially stronger than the more liberal AICC.
We now conduct a similar exercise, but using conditions for which the CACF method was designed, namely a smaller sample size of T = 30 and a more substantial regressor matrix of an intercept and time trend regression, i.e., X = [ 1 , t ] . Figure 6 shows the results. For m = 2 , the CACF clearly outperforms the penalty-based criteria, while for m = 6 , which is substantially larger than the true p = 1 , the CACF chooses the correct p with the highest probability of the three selection methods, though the AICC is very close. For the very large m = 10 (which, for T = 30 , might be deemed inappropriate), CACF and BIC perform about the same with respect to the probability of choosing the correct p of one, while AICC dominates. Thus, in this somewhat extreme case (with T = 30 and m = 10 ), the CACF still performs competitively, due to its nearly exact small sample distribution theory and the presence of an X matrix.

5.3. Comparison with AR(2) Models

The effectiveness of the simple lag order determination strategy is now investigated for the AR(2) model, but by applying it to cases for which the penalty-based criteria will have a comparative advantage, namely with more observations ( T = 60 ) and either no regressors or just a constant. We also include the constant-trend case ( X = [ 1 , t ] ) used above in the AR(1) simulation. Based on 1000 replications, six different AR(2) parameter constellations are considered. As before, the CACF method uses c = 0.025 and the highest lag order computed is m = 4 .
The simulation results of the CACF test are summarized in Table 1. Some of these results are also given in the collection of more detailed tables in Appendix A. In particular, they are shown in the top left sections of Table A1 (corresponding to no $X$ matrix) and Table A2 ($X = \mathbf{1}$). There, the magnitudes of the roots of the AR(2) polynomial are also shown, labeled $\xi_1$ and $\xi_2$, the latter being omitted to indicate complex conjugate roots (in which case $\xi_1 = \xi_2$).
The first model corresponds to the null case $p = 0$, in which the error rate for false selection of $p$ should be (given the use of $c = 0.025$) about 5% for each false candidate. This is indeed the case: with $m = 4$, the probability of correctly choosing $p = 0$ should be roughly 0.80, as seen in the boldface entries. For the remaining non-null models, quite different lag selection characteristics were observed, with the choice of $p = 2$ ranging between 40% and 91% among the five AR(2) models in the known mean case; 33% and 91% in the constant but unknown ($X = \mathbf{1}$) case; and 31% and 90% in the constant-trend ($X = [\mathbf{1}, t]$) case. Observe how, as expected from the small-sample theory, as the $X$ matrix increases in complexity, there is relatively little effect on the performance of the method for a given AR(2) parametrization. However, the choice of the latter does have a very strong impact on its performance. For example, the fifth model is such that the CACF method chooses $p = 1$ more often than the correct $p = 2$.
The comparative results using the penalty-based methods are shown in Table A3 through Table A7 under the heading “For AR(p) Models”. We will discuss only the $X = \mathbf{1}$ case in detail. For the null model (i.e., $p = 0$), denoted 1.0x, where the “x” indicates that an $X$ matrix was used, the CACF outperforms all other criteria by a wide margin, with 80.3% of the runs resulting in $p = 0$, compared with the second best, SBC, with 66.2%. For model 2.0x, all the model selection criteria performed well, with the CACF and SBC resulting in 91.0% and 91.6%, respectively. Similarly, all criteria performed relatively poorly for models 3.0x and 4.0x, but particularly the CACF, which was worst (with $p = 2$ occurring 54.4% and 53.1% of the time, respectively), while AICC and HQ were the best and resulted in virtually the same $p = 2$ values for the two models (70% and 65%). Similar results hold for models 5.0x and 6.0x, for which the CACF again performs relatively poorly.
The unfortunate and, perhaps unexpected, fact that the performance of the new method and the penalty-based criteria highly depend on the true model parameters is not new; see, for example, Rahman and King (1999) and the references therein.

5.4. Higher Order AR(p) Processes

Clearly, as $p$ increases, it becomes difficult to cover the whole parameter space in a simulation study. We (rather arbitrarily) consider AR(4) models with parameters $\phi_1 = 0.4$, $\phi_2 = -0.3$, $\phi_3 = 0.2$, and $\phi_4$ taking the six values $-0.1$ through $-0.6$, for which the maximum moduli of the roots of the AR polynomial are 0.616, 0.679, 0.747, 0.812, 0.864 and 0.908, respectively. This is done for two sample sizes, $T = 30$ and $T = 60$, and, in an attempt to use a more complicated design matrix that is typical in econometric applications, an $X$ matrix corresponding to an intercept-trend model with structural break, i.e., for $T = 30$ (shown transposed),
$$X' = \begin{pmatrix} 1 & 1 & 1 & \cdots & 1 & 16 \cdots & 1 \\ 1 & 2 & 3 & \cdots & 16 & \cdots & 30 \\ 0 & 0 & 0 & \cdots & 1 & \cdots & 1 \\ 0 & 0 & 0 & \cdots & 1 & \cdots & 15 \end{pmatrix},$$
with rows corresponding to the intercept, the trend, a level-shift dummy that is zero through $t = 15$ and one thereafter, and a post-break trend.
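For concreteness, this design matrix is easy to generate; a sketch (our reading of the displayed $X$, with the break after $t = 15$):

```python
import numpy as np

def break_design(T=30, tb=15):
    """Intercept, trend, level-shift dummy, and post-break trend."""
    t = np.arange(1, T + 1)
    return np.column_stack([np.ones(T), t,
                            (t > tb).astype(float),
                            np.maximum(0, t - tb).astype(float)])

X = break_design()
print(X[[0, 15, 29]])   # rows t = 1, 16, 30: [1,1,0,0], [1,16,1,1], [1,30,1,15]
```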
As mentioned in the beginning of Section 5, the choices of c and m are not obvious, and ideally would be purely data driven. The optimal value of m in conjunction with penalty-based criteria is still an open issue; see, for example, the references in Choi (1992, p. 72) to work by E. J. Hannan and colleagues. The derivation of a theoretically based optimal choice of m for the CACF is particularly difficult because the usual appeal to asymptotic arguments is virtually irrelevant in this small-sample setting. At this point, we have little basis for their choices, except to say (precisely as others have) that m should grow with the sample size.
It is not clear if the optimal value of $c$ should vary with sample size. What we can say is that its choice depends on the purpose of the analysis. For example, consider the findings of Fomby and Guilkey (1978), who demonstrated that, when measuring the performance of $\hat\beta$ based on mean squared error, the optimal size of the Durbin-Watson test when used in conjunction with a pretest estimator for the AR(1) term should be much higher than the usual 5%, with 50% being their overall recommendation. This will, of course, increase the risk of model over-fitting, but that can be somewhat controlled in this context by a corresponding reduction in $m$. (Recall that the probability of not rejecting the null hypothesis of no autocorrelation is approximately $100(1 - 2cm)\%$ under white noise.)
For the trials in the AR(4) case, we take m = 7 to provide some room for over-fitting (but which admittedly might be considered somewhat high for only T = 30 observations). With this m, use of c = 0.125 proved to be a good choice for all the runs with T = 30 . For this sample size, the top panels of Figure 7 and Figure 8 show the outcomes for all criteria for each of the six values of ϕ 4 . For all the criteria, the probability of choosing p = 4 increases as ϕ 4 increases in magnitude.
To assist the interpretation of these plots, we computed the simple measures $L_1 = \sum_i |\hat p_i - 4|$ and $L_2 = \sum_i (\hat p_i - 4)^2$ for each criterion. In all six cases, the CACF was the best according to both $L_1$ and $L_2$. The lower panels of Figure 7 and Figure 8 show just the CACF and the criterion that was second best according to $L_2$. For $\phi_4 = -0.1, \ldots, -0.4$, the FPE was second best, while for $\phi_4 = -0.5$ and $-0.6$, HQ and AICC were second best, respectively.
This is also brought out in Table 2, which shows the $L_2$ measures for the two extreme cases $\phi_4 = -0.1$ and $-0.6$. Observe how, in the former, the SBC and AICC perform significantly worse than the other criteria, while for $\phi_4 = -0.6$, they are second and third best. This example clearly demonstrates the utility of the new CACF method, and also emphasizes how the performance of the various penalty-based criteria is highly dependent on the true autoregressive model parameters.
The results for the $T = 60$ case were not as unambiguous. For each of the six models, a value of $c$ could be found that rendered the CACF method either the best or a close second best. These ranged from $c = 0.125$ for $\phi_4 = -0.1$ to $c = 0.01$ for $\phi_4 = -0.6$. When using a constant value of $c$ between these two extremes, such as $c = 0.05$, the AIC and FPE performed better for $\phi_4 = -0.1$, while SBC and AICC were better for $\phi_4 = -0.6$.

5.5. Parsimonious Model Selection

The frequentist order selection methods, such as the sequential one proposed herein and (even more so) the penalty-based methods, have the initial appeal of being objective. However, the infusion of “prior information”, most often in the form of non-quantitative beliefs and opinions, is ubiquitous when building models with real data; the well-known quote from Edward Leamer (“There are two things you are better off not watching in the making: sausages and econometric estimates”) immediately comes to mind. Indeed, the empirical case studies by Pankratz (1983) make clear the desire for parsimonious models and attempt to “downplay” the significance of spikes in the correlograms or p-values of estimates at higher order lags that are not deemed “sensible”. Moreover, any experienced modeler has a sense, explicit or not, of the maximally acceptable number of model parameters allowed with respect to the available sample size, and is (usually) well aware of the dangers of overfitting.³
Letting the level of a test decrease with increasing sample size is common and good practice in many testing situations; for discussion, see Lehmann (1958), Sanathanan (1974), and Lehmann (1986, p. 70). When the level of the test is fixed and the sample size is allowed to increase, the power can increase to unacceptably high values, forcing the type I and type II errors out of balance. Such balance is important in the model selection context, because the consequences of the two error types are not as asymmetric as they would be in the more traditional testing problem. Furthermore, there may be a desire to have the relative values of the two error types reflect the desire for parsimony. In the context of our sequential test procedure, such preferences would suggest using smaller levels when testing for larger lag $m$, and increasingly larger levels as the lag order tested becomes smaller. Thus, at each stage of the testing, balance may be achieved between the two types of errors to reflect the desired parsimony.
This frequentist “trap” could be avoided by conducting an explicit Bayesian analysis instead. While genuine Bayesian methods have been pursued (see, for example, Zellner 1971; Schervish and Tsay 1988; Chib 1993; Le et al. 1996; Brooks et al. 2003; and the references therein), their popularity is quite limited. The CACF method proposed herein provides a straightforward way of incorporating the preference for low order models, while still being as “objective” as any frequentist hypothesis testing procedure can be, for which the size of the test needs to be chosen by the analyst. This is achieved by letting the tuning parameter $c$ be a vector with different values for different lags. For example, with the AR(4) case considered previously, the optimal choice would be $c = [0.5, 0.5, 0.5, 0.5, 0, 0, 0]$. While certainly such flexibility could be abused to arrive at virtually any desired model, it makes sense to let the elements of $c$ decrease in a certain manner if there is a preference for low-order, parsimonious models. Notice also that, if seasonal effects are expected, then the $c$ vector could also be chosen to reflect this.
To illustrate, we applied the simple linear sequence
$$c = [0.175, 0.15, 0.125, 0.10, 0.075, 0.05, 0.025] \tag{19}$$
to the $T = 60$ case. This resulted in the CACF method being the best for all values of $\phi_4$ except the last, $\phi_4 = -0.6$, in which case it was slightly outperformed by the SBC. As was expected, when using (19) for the time series with $T = 30$, the CACF remained superior in all cases, and by an even larger margin.

6. Mixed ARMA Models

Virtually undisputed in time series modeling is the sizeable increase in difficulty for identifying the orders in the mixed, i.e., ARMA, case. It is interesting to note that, in the multivariate time series case, the theoretical and applied literature is dominated by strict AR models (vector autoregressions), whether for prediction purposes or for causality testing. Under the assumption that the true data generating process does not actually belong to the ARMA class, it can be argued that, for forecasting purposes, purely autoregressive structures will often be adequate, if not preferred; see also Zellner (2001) and the references therein. Along these lines, it is also noteworthy that the thorough book on regression and time series model selection by McQuarrie and Tsai (1998) only considers order selection for autoregressive models in both the univariate and multivariate case. Their only use for an MA(1) model is to demonstrate AR lag selection in misspecified models.
Nevertheless, it is of interest to know how the CACF method performs in the presence of mixed models. While the use of penalty-based criteria for mixed model selection is straightforward, it is not readily apparent how the CACF method can be extended to allow for moving average structures. This section presents a way of proceeding.

6.1. Known Moving Average Structure

The middle left and lower left panels of Table A1 and Table A2 show the CACF results when using the same AR structure as previously, but with a known moving average structure, i.e., (15) is calculated taking Ψ 1 corresponding to an MA(1) model. As with all model results in Table A1 and Table A2, the sample size is T = 60 . This restrictive assumption allows a clearer comparison of methods without the burden of MA order selection or estimation; this will be relaxed in the next section, so that the added effect of MA parameter estimation can be better seen.
Models 1.5 through 6.5 take the MA parameter θ to be 0.5, while models 1.9 through 6.9 use θ = 0.9 . For the null models 1.5 and 1.9, only a slight degradation of performance is seen. For the other models, on average, the probability of selecting p > 2 increases markedly as θ increases, drastically so for model 6.
The results can be compared to the entries in Table A3 through Table A7 labeled “For ARMA(p,1) Models”, for which the penalty-based criteria were computed for the five models ARMA($i$,1), $i = 0, \ldots, 4$. (Notice that the comparison is not entirely fair, because the CACF method has the benefit of knowing the MA polynomial, not just its length.) Consider the $\theta = 0.5$ cases first. For the null model 1.5x, the CACF performed as expected under the null hypothesis, with 78.8% choices of $p = 0$ and about 5% for each of the other choices. It was, however, outperformed by the SBC, with 86% $p = 0$ choices and an ever-diminishing probability as $p$ increases, which agrees with the SBC’s known preference for low model orders. The $p = 0$ choices for the other criteria were between 54.2% (AIC and FPE) and 70.6% (HQ). For model 2.5x, in terms of the number of $p = 2$ choices, CACF, AIC and FPE were virtually tied at about 63%, while SBC, HQ and AICC gave 87%, 77.4% and 71.6%, respectively. The CACF for models 3.5x and 4.5x performed (somewhat unexpectedly in light of previous results) relatively well: for model 3.5x, CACF resulted in 61.2% choices of $p = 2$, while the SBC was the worst, with 37.4%. The others were all about 43%. Similarly for model 4.5x: CACF (56.8%), AIC (38.2%), SBC (25.8%), AICC (37.8%), HQ (33.8%) and FPE (38.2%). Model 5.5x resulted in (approximately) CACF, AIC and FPE (50%), and SBC, AICC and HQ (60.0%). For model 6.5x, the CACF was disastrous, with only 25.8% for $p = 2$, while the other criteria ranged from 71.2% (SBC) down to 59.2% (AIC).
As before, the performance of all the criteria is highly dependent on the true model parameters. However, what becomes apparent from this study is the great disparity in performance: for a particular model, the CACF may rank best and SBC worst, while for a different parameter constellation, precisely the opposite may be the case. That the SBC is often among the best performers agrees with the findings of Koreisha and Yoshimoto (1991) and the references therein.
The results for the θ = 0.9 cases do not differ remarkably from the θ = 0.5 cases, except that for models 3.9x and 4.9x, all the penalty-based criteria were much closer to (and occasionally slightly better) than the CACF.

6.2. Iterative Scheme for ARMA( p , q ) Models with q Known

Assume, somewhat more realistically than before, that the regression error terms follow a stationary ARMA($p,q$) process with $q$ known, but the actual MA parameters $\theta = (\theta_1, \ldots, \theta_q)'$ and $p$ are not known. A possible method for eliciting $p$ using the sequential test (15) is as follows. Iterate the following two steps, starting with $p_1 = 0$: (i) estimate an ARMA($p_i, q$) model to obtain $\hat\theta$; (ii) compute $\tau_1, \ldots, \tau_m$ with $\Psi^{-1}$ corresponding to the MA($q$) model with parameters $\hat\theta$, from which $p_{i+1}$ is determined. Iteration stops when $p_{i+1} = p_i$ (or $i$ exceeds some preset value, $I$), and $p_{i+1}$ is set as before for a given value of $c$. The choice of $m$ and $c$ will clearly be critical to the performance of this method; simulation, as detailed next, will be necessary to determine its usefulness.
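In outline (our own sketch; estimate_arma, ma_covariance and cacf_select are hypothetical helpers standing in for an ARMA MLE routine, the construction of $\Psi^{-1}$ for an MA($q$) process with given coefficients, and the CACF rule of Section 5.1, respectively):

```python
def select_p_arma(y, X, q, m=4, c=0.025, max_iter=5):
    """Iterative scheme: alternate between estimating theta under the current p
    (step i) and re-running the CACF selection under the implied MA(q)
    covariance (step ii), until p stabilizes."""
    p = 0
    for _ in range(max_iter):
        theta_hat = estimate_arma(y, X, p, q)         # hypothetical: ARMA(p, q) MLE
        Psi_inv = ma_covariance(theta_hat, len(y))    # hypothetical: MA(q) Psi^{-1}
        p_new = cacf_select(y, X, Psi_inv, m=m, c=c)  # tau_1, ..., tau_m -> p
        if p_new == p:
            return p                                  # converged
        p = p_new
    return None                                       # no convergence within max_iter
```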
The right panels of Table A1 and Table A2 show the results when applied to the same 500 simulated time series of length T = 60 as previously used, but with the aforementioned iterative scheme applied with m = 4 and c = 0.025 (also as before), I = 5 and q = 1 . Unexpectedly, for the null model 1, p = 0 was actually chosen more frequently than under the known θ cases, for each choice of θ . Not surprisingly, for the θ = 0 cases, the iterative schemes resulted in poorer performance compared to their known θ counterparts, with model under-selection (i.e., p < 2 ) occurring more frequently, drastically so for models 3 and 4. For θ = 0.5 , models 3 and 4 again suffer from under-selection, though less so than with θ = 0 , while the remaining models exhibit an overall mild improvement in lag order selection. In the θ = 0.9 case, performance is about the same whether θ is known or not for all 6 models, though for model 6, high over-selection in the θ known case is reversed to high under-selection for θ not known.
The number of iterations required until convergence and the probability of not converging (given in the column labeled 6 + ) also depends highly on the true model parameters; models for which the true AR polynomial roots were smallest exhibited the fastest convergence. Nevertheless, in most cases, one or two iterations were enough, and the probability of non-convergence appears quite low. (The few cases that did not converge were discarded from the analysis.) While certainly undesirable, if the iterative scheme does not converge, it does not mean that the results are not useful. Most often, the iterations bounced back and forth between two choices, from which a decision could be made based on “the subjective desire for parsimony” and/or inspection of the actual p-values, which could be very close to the cutoff values, themselves having been arbitrarily chosen.
To more fairly compare the CACF results with the penalty-based criteria, the latter were evaluated using the 10 models AR($i$) and ARMA($i$,1), $i = 0, \ldots, 4$, the results of which are in Table A3 through Table A7 under the heading “Among both Sets”. To keep the analysis short, Table 3 presents only the percentage of correct $p$ choices for two criteria (the CACF and the first or second best). The CACF was best 6 times, SBC and AICC were each best 5 times, while HQ and FPE were each best once. It must be emphasized that these numbers are a very rough reduction of the performance data. For example, Table 3 shows that cases 2.0x and 2.5x were extremely close, while 6.0x, 3.5x, 4.5x, 5.5x and 1.9x were reasonably close. A fair summary appears to be that:
  • in most cases, either the CACF, SBC or AICC will be the best;
  • each of these can perform (sometimes considerably) better than the other two for certain parameter constellations;
  • each can perform relatively poorly for certain parameter constellations.

7. Performance in the Non–Gaussian Setting

All our findings above are based on the assumptions that the true model is known to be a linear regression with correctly specified $X$ matrix; error terms are from a Gaussian AR($p$) process; tuning parameter $m$ is chosen such that $m \ge p$; and parameters $p$, $a_1, \ldots, a_p$, $\beta$, and $\sigma^2$ are fixed but unknown. We now modify this by assuming, similarly, that the true data generating process is $y = X\beta + \epsilon$, with $\epsilon_t = \phi \epsilon_{t-1} + U_t$ a stationary AR(1) process, but now such that $U_t \stackrel{\text{iid}}{\sim} t_\nu(0, \sigma)$, $t = 1, \ldots, T$, i.e., Student’s t with $\nu$ degrees of freedom, location zero, and scale $\sigma > 0$.
Figure 9 and Figure 10 depict the results for the CACF, AICC, and SBC methods, having used a grid of $\phi$-values, true $p = 1$, a sample size of $T = 50$, $m = 5$, $c = 0.025$, and four different values of the degrees of freedom parameter $\nu$, with all methods falsely assuming Gaussianity. Figure 9 is for the known mean case, while Figure 10 assumes the constant and time-trend model $X = [\mathbf{1}, t]$.
We see that none of the methods are substantially affected by use of even very heavy-tailed innovation sequences, notably the CACF, which explicitly uses the normality assumption in the small-sample distribution theory. However, note from the top left panel of Figure 9 (which corresponds to $\nu = 1$, or Cauchy innovations) that, for $p = \phi = 0$, instead of $0.774 = (1 - 0.05)^m \approx 1 - 0.05m = 0.75$ from (18), the null of $p = 0$ is chosen about 86% of the time, while from the third and fourth rows (for $\nu = 5$ and $\nu = 200$), it is about 78%. This was to be expected: in the non-Gaussian case, (18) is no longer tenable, as (i) the $\tau_j$ are no longer independent; and (ii) their marginal distributions under the null will no longer be precisely (at least up to the accuracy allowed for by the saddlepoint approximation) $\mathrm{Unif}(0, 1)$. In particular, the latter violation is such that the empirically obtained quantiles of the $\tau_j$ under the null no longer match their theoretical ones, and, as seen, the probability that $\tau_j$ falls outside the range, say, $(0.025, 0.975)$, is smaller than 0.05. This results in the probability of choosing $p = 0$ when it is true being larger than the nominal 0.774.
Similarly, the choice of p = 1 when p = ϕ = 0 should occur about 5% of the time for the CACF method, but, from the right panels of Figure 9, it is lower than this, decreasing as ν decreases. However, for ν = 5 , it is already very close to the nominal of 5%. Interestingly, with respect to choosing p = 0 , the behavior of the CACF for all choices of ν is virtually identical to the AICC near ϕ = 0 , while as | ϕ | grows, the behavior of the CACF coincides with that of the SBC. (Note that this behavior is precisely what we do not want: Ideally, for ϕ = 0 , the method would always choose p = 0 , while for ϕ 0 , the method would never choose p = 0 .)
Figure 10 is similar to Figure 9, but having used $X = [\mathbf{1}, t]$. Observe how, for all values of $\nu$, unlike the known mean case, the performance of the AICC and SBC is no longer symmetric about $\phi = 0$, but the CACF is still virtually symmetric. Fascinatingly, we see from the left panels of Figure 10 that the CACF probability of choosing $p = 0$ virtually coincides with that of the AICC for $\phi \ge 0$, while for $\phi < -0.2$, it virtually coincides with that of the SBC.

8. Conclusions

We have operationalized the UMPU test for sequential lag order selection in regression models with autoregressive disturbances. This is made possible by using a saddlepoint approximation to the joint density of the sample autocorrelation function based on ordinary least squares residuals from arbitrary exogenous regressors and with an arbitrary (covariance stationary) variance-covariance matrix. Simulation results verify that, compared to the popular penalty-based model selection methods, the new method fares very well precisely in situations that are both difficult and common in practice: when faced with small samples and an unknown mean term. With respect to the mean term, the superiority of the new method increases as the complexity of the exogenous regressor matrix $X$ increases; this is because the saddlepoint approximation explicitly incorporates the $X$ matrix, differing from the approximation developed in Durbin (1980), which only takes account of its size, or the standard asymptotic results for the sample ACF and sample partial ACF, which completely ignore the regressor matrix.
The simulation study also verifies a known (but—we believe—not well-known) fact that the small sample performances of penalty-based criteria such as SBC and (corrected) AIC are highly dependent on the actual autoregressive model parameters. The same result was found to hold true for the new CACF method as well. Autoregressive parameter constellations were found for which CACF was greatly superior to all other methods considered, but also for which CACF ranked among the worst performers. Based on the use of a wide variety of parameter sets, we conclude that the new CACF method, the SBC and the corrected AIC, in that order, are the preferred methods, although, as mentioned, their comparative performance is highly dependent on the true model parameters.
An aspect of the new CACF method that greatly enhances its ability and is not applicable with penalty-based model selection methods is the use of different sizes for the sequential tests. This allows an objective way of incorporating prior notions of preferring low order, parsimoniously parameterized models. This was demonstrated using a linear regression model with AR(4) disturbance terms; the results highly favor the use of the new method in conjunction with a simple, arbitrarily chosen, linear sequence of size values. More research should be conducted into finding optimal, sample-size driven choices of this sequence.
Finally, the method was extended to select the autoregressive order when faced with ARMA disturbances. This was found to perform satisfactorily, both in terms of numerical convergence and of order selection.
Matlab programs to compute the CACF test and some of the examples in the paper are available from the second author.

Acknowledgments

The authors are grateful to the four editors Federico Bandi, Alex Maynard, Hyungsik Roger Moon, and Benoit Perron of this special issue in honor of Peter Phillips, and two anonymous referees, for excellent suggestions that have improved our final version of this manuscript.

Author Contributions

Both authors contributed equally to the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Full Tables

Table A1. Simulation results of the CACF method for the model with known mean. (The case with regressor matrix $X = \mathbf{1}$ is shown in Table A2 below.) The sample size is $T = 60$. Shown are the percentages of the 500 replications that chose AR lag length 0, 1, 2, 3, or 4, for six ARMA(2,1) models with AR parameters $a_1$ and $a_2$ and MA parameter $\theta$. The first column, “Model”, specifies which of the six AR(2) parameter constellations is used and, after the decimal point, the value of the MA coefficient (e.g., 1.5 refers to AR model number 1 with MA coefficient 0.5; similarly for 1.9). Values $\xi_1$ and $\xi_2$ are the magnitudes of the roots of the AR polynomial; a missing $\xi_2$ indicates complex conjugate roots, in which case $\xi_1 = \xi_2$. Columns labeled “Using $\hat\theta = \theta$” assume that the MA polynomial is known; $\theta = 0$ thus corresponds to AR lag selection with no MA structure. Columns labeled “Iterative Scheme” correspond to AR lag selection when $\theta$ is not known; see Section 6 for details. Boldface numbers indicate the percentage of times the true AR lag order was chosen.
Model | a1 | a2 | ξ1 | ξ2 | θ | Using θ̂ = θ: p = 0, 1, 2, 3, 4 (%) | Iterative scheme: p = 0, 1, 2, 3, 4 (%) | Required iterations: 1, 2, 3, 4, 5, 6+ (%)
1.0 | 0 | 0 | 0 | · | 0 | 81.5, 4.5, 4.3, 5.2, 4.5 | 86.6, 0.4, 5.3, 3.4, 4.2 | 85.4, 11.6, 1.2, 0.4, 0.0, 1.4
2.0 | 1.2 | -0.8 | 0.8944 | · | 0 | 0.0, 0.0, 91.4, 5.0, 3.6 | 0.6, 0.0, 87.0, 6.1, 6.4 | 0.5, 76.7, 18.7, 0.5, 0.0, 3.5
3.0 | 0.7 | -0.3 | 0.5477 | · | 0 | 0.1, 35.2, 56.0, 4.2, 4.5 | 71.2, 6.8, 15.5, 3.6, 3.0 | 68.0, 19.4, 7.7, 0.2, 0.2, 4.5
4.0 | 0.4 | -0.3 | 0.5477 | · | 0 | 10.7, 27.0, 53.5, 3.7, 5.1 | 71.4, 2.7, 19.6, 2.9, 3.3 | 68.5, 23.0, 4.0, 0.2, 0.2, 4.0
5.0 | 1.4 | -0.45 | 0.9000 | 0.5000 | 0 | 0.0, 52.5, 40.2, 2.6, 4.7 | 0.0, 30.8, 53.8, 12.8, 2.6 | 0.0, 67.7, 25.6, 2.4, 0.0, 4.9
6.0 | -0.3 | 0.55 | 0.9066 | 0.6066 | 0 | 0.1, 5.3, 69.1, 11.7, 13.8 | 0.0, 0.6, 80.6, 6.5, 12.4 | 0.0, 87.6, 2.7, 1.6, 0.0, 8.1
1.5 | 0 | 0 | 0 | · | 0.5 | 78.2, 4.6, 7.2, 5.0, 5.0 | 90.6, 2.5, 3.3, 1.2, 2.4 | 89.5, 7.3, 2.0, 0.0, 0.0, 1.2
2.5 | 1.2 | -0.8 | 0.8944 | · | 0.5 | 14.4, 0.0, 55.0, 22.4, 8.2 | 0.3, 0.0, 69.3, 22.6, 7.8 | 0.3, 81.0, 15.7, 0.0, 0.0, 3.0
3.5 | 0.7 | -0.3 | 0.5477 | · | 0.5 | 1.4, 5.6, 63.4, 19.2, 10.4 | 17.7, 8.5, 52.0, 14.2, 7.5 | 17.5, 59.9, 20.8, 1.0, 0.0, 0.8
4.5 | 0.4 | -0.3 | 0.5477 | · | 0.5 | 12.6, 3.6, 59.6, 14.6, 9.6 | 50.7, 1.2, 36.8, 5.9, 5.5 | 50.3, 39.7, 8.8, 0.4, 0.0, 0.8
5.5 | 1.4 | -0.45 | 0.9000 | 0.5000 | 0.5 | 0.0, 15.1, 68.9, 9.0, 7.0 | 0.0, 20.8, 73.3, 4.1, 1.7 | 0.0, 82.8, 15.6, 0.0, 0.0, 1.6
6.5 | -0.3 | 0.55 | 0.9066 | 0.6066 | 0.5 | 0.0, 0.0, 24.7, 54.7, 20.6 | 3.3, 0.0, 33.9, 41.3, 21.4 | 2.9, 70.8, 11.8, 1.8, 0.0, 12.7
1.9 | 0 | 0 | 0 | · | 0.9 | 76.1, 5.6, 7.2, 5.6, 5.4 | 80.3, 4.3, 7.1, 3.9, 4.5 | 79.5, 18.7, 0.8, 0.0, 0.0, 1.0
2.9 | 1.2 | -0.8 | 0.8944 | · | 0.9 | 0.2, 0.0, 61.5, 24.9, 13.3 | 0.3, 0.0, 58.8, 27.0, 14.0 | 0.3, 90.9, 7.1, 0.0, 0.0, 1.7
3.9 | 0.7 | -0.3 | 0.5477 | · | 0.9 | 4.8, 8.2, 51.8, 19.9, 15.3 | 5.6, 6.8, 45.4, 23.7, 18.5 | 5.5, 83.4, 8.7, 0.0, 0.0, 2.4
4.9 | 0.4 | -0.3 | 0.5477 | · | 0.9 | 24.6, 1.8, 44.0, 15.8, 13.8 | 26.1, 1.8, 37.4, 16.8, 17.8 | 25.9, 65.5, 7.6, 0.0, 0.0, 1.0
5.9 | 1.4 | -0.45 | 0.9000 | 0.5000 | 0.9 | 0.0, 14.1, 67.7, 12.0, 6.3 | 0.0, 13.9, 66.2, 11.9, 7.9 | 0.0, 92.9, 4.6, 0.7, 0.0, 2.0
6.9 | -0.3 | 0.55 | 0.9066 | 0.6066 | 0.9 | 5.6, 1.2, 14.1, 29.7, 49.3 | 51.1, 35.1, 4.8, 4.5, 4.5 | 48.3, 35.0, 9.6, 1.5, 0.0, 5.6
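As a cross-check on the ξ columns, the root magnitudes can be recomputed directly from (a1, a2). The following short Python sketch (our own illustration, not code from the paper) reproduces the values above:

    import numpy as np

    # Magnitudes of the roots of the AR(2) characteristic polynomial
    # z^2 - a1*z - a2 = 0, for the five nontrivial (a1, a2) pairs in Table A1.
    models = {2: (1.2, -0.8), 3: (0.7, -0.3), 4: (0.4, -0.3),
              5: (1.4, -0.45), 6: (-0.3, 0.55)}
    for m, (a1, a2) in models.items():
        xi = np.sort(np.abs(np.roots([1.0, -a1, -a2])))[::-1]
        print(m, np.round(xi, 4))
    # Models 2-4 give complex conjugate pairs (xi1 = xi2 = 0.8944 or 0.5477);
    # model 5 gives 0.9 and 0.5; model 6 gives 0.9066 and 0.6066.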
Table A2. Same as Table A1 but for the linear model with X = 1 . Model numbers are now followed by an “x” to indicate the use of the regressor matrix to model the unknown mean.
Model | a1 | a2 | ξ1 | ξ2 | θ | Using θ̂ = θ: p = 0, 1, 2, 3, 4 (%) | Iterative scheme: p = 0, 1, 2, 3, 4 (%) | Required iterations: 1, 2, 3, 4, 5, 6+ (%)
1.0x | 0 | 0 | 0 | · | 0 | 80.3, 3.9, 5.0, 6.0, 4.8 | 87.2, 0.8, 4.3, 2.9, 4.9 | 86.0, 10.8, 1.8, 0.0, 0.0, 1.4
2.0x | 1.2 | -0.8 | 0.8944 | · | 0 | 0.0, 0.0, 91.0, 5.3, 3.7 | 0.5, 0.0, 87.5, 5.7, 6.3 | 0.5, 81.4, 14.6, 1.3, 0.0, 2.1
3.0x | 0.7 | -0.3 | 0.5477 | · | 0 | 0.0, 35.9, 54.4, 4.8, 4.9 | 69.1, 10.7, 14.3, 1.9, 4.0 | 66.5, 20.2, 9.1, 0.4, 0.0, 3.8
4.0x | 0.4 | -0.3 | 0.5477 | · | 0 | 10.0, 28.3, 53.1, 4.8, 3.8 | 76.2, 2.7, 16.3, 1.7, 3.1 | 73.2, 19.1, 3.6, 0.2, 0.0, 3.8
5.0x | 1.4 | -0.45 | 0.9000 | 0.5000 | 0 | 0.0, 59.9, 33.0, 2.9, 4.2 | 0.0, 38.2, 52.9, 8.8, 0.0 | 0.0, 60.5, 26.3, 2.6, 0.0, 10.5
6.0x | -0.3 | 0.55 | 0.9066 | 0.6066 | 0 | 0.3, 6.5, 66.7, 12.5, 14.0 | 0.0, 0.0, 79.0, 7.6, 13.4 | 0.0, 87.9, 1.7, 1.2, 0.0, 9.3
1.5x | 0 | 0 | 0 | · | 0.5 | 78.8, 4.8, 7.0, 4.6, 4.8 | 90.8, 2.0, 2.9, 2.0, 2.3 | 90.0, 6.9, 2.2, 0.0, 0.0, 1.0
2.5x | 1.2 | -0.8 | 0.8944 | · | 0.5 | 0.0, 0.0, 62.3, 28.4, 9.3 | 0.3, 0.0, 64.2, 27.8, 7.7 | 0.3, 80.4, 16.6, 0.0, 0.0, 2.7
3.5x | 0.7 | -0.3 | 0.5477 | · | 0.5 | 1.0, 7.4, 61.2, 22.3, 8.0 | 15.7, 12.2, 48.7, 17.0, 6.4 | 15.6, 59.4, 23.4, 1.0, 0.0, 0.8
4.5x | 0.4 | -0.3 | 0.5477 | · | 0.5 | 12.4, 4.6, 56.8, 18.8, 7.4 | 54.4, 1.8, 31.7, 7.7, 4.4 | 54.2, 36.3, 8.6, 0.0, 0.0, 0.4
5.5x | 1.4 | -0.45 | 0.9000 | 0.5000 | 0.5 | 0.0, 23.7, 49.5, 9.7, 17.2 | 0.0, 33.3, 61.1, 1.9, 3.7 | 0.0, 87.3, 9.1, 1.8, 0.0, 1.8
6.5x | -0.3 | 0.55 | 0.9066 | 0.6066 | 0.5 | 0.0, 0.0, 25.8, 50.3, 23.9 | 2.9, 0.0, 36.2, 34.6, 26.2 | 2.5, 69.9, 14.0, 1.2, 0.0, 14.4
1.9x | 0 | 0 | 0 | · | 0.9 | 79.8, 4.0, 7.2, 4.0, 5.0 | 83.0, 3.4, 6.9, 3.0, 3.6 | 82.7, 15.5, 1.4, 0.0, 0.0, 0.4
2.9x | 1.2 | -0.8 | 0.8944 | · | 0.9 | 0.0, 0.2, 59.6, 27.9, 12.2 | 0.0, 0.0, 59.2, 28.7, 12.0 | 0.0, 91.4, 5.8, 0.0, 0.0, 2.8
3.9x | 0.7 | -0.3 | 0.5477 | · | 0.9 | 4.3, 10.1, 46.9, 26.0, 12.8 | 5.0, 9.4, 42.4, 27.6, 15.7 | 4.9, 83.4, 9.6, 0.0, 0.0, 2.0
4.9x | 0.4 | -0.3 | 0.5477 | · | 0.9 | 28.1, 2.6, 39.1, 19.4, 10.8 | 28.7, 2.6, 33.0, 21.1, 14.6 | 28.2, 63.6, 6.6, 0.0, 0.0, 1.6
5.9x | 1.4 | -0.45 | 0.9000 | 0.5000 | 0.9 | 0.0, 19.4, 57.3, 13.6, 9.7 | 0.0, 23.9, 59.7, 11.9, 4.5 | 0.0, 87.0, 10.1, 0.0, 0.0, 2.9
6.9x | -0.3 | 0.55 | 0.9066 | 0.6066 | 0.9 | 3.9, 1.9, 13.5, 24.7, 56.0 | 48.3, 35.7, 6.0, 3.6, 6.3 | 46.1, 38.9, 9.2, 0.9, 0.2, 4.6
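The simulation design behind Tables A1 and A2 can be emulated with standard tools. The following is a minimal sketch (not the authors' code), assuming statsmodels' convention that the AR lag polynomial is supplied with negated coefficients:

    import numpy as np
    from statsmodels.tsa.arima_process import arma_generate_sample

    # Draw one series of length T = 60 from model 2.5 of Table A1
    # (a1 = 1.2, a2 = -0.8, theta = 0.5); the full lag polynomials are
    # passed, with AR coefficients entering with a minus sign.
    np.random.seed(1)
    a1, a2, theta, T = 1.2, -0.8, 0.5, 60
    y = arma_generate_sample(ar=[1, -a1, -a2], ma=[1, theta],
                             nsample=T, burnin=200)
    print(y[:5])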
Table A3. Simulation results of the AIC method for the model with known mean (left panel) and unknown but constant mean (right panel). Percentage of the 500 replications that chose AR lag length 0, 1, 2, 3, or 4, based on the AIC model selection criterion, for the same six ARMA(2,1) models with parameters shown in Table A1 and Table A2. Columns "For AR(p) Models" ignore the MA structure present in models 1.5 to 6.5 and 1.9 to 6.9 and select among AR(0) through AR(4); columns "For ARMA(p,1) Models" enforce the MA structure and compare ARMA(0,1) through ARMA(4,1); and columns "Among Both Sets" use all 10 models for comparison.
Model | No X: AR(p) | No X: ARMA(p,1) | No X: both sets | X = 1: AR(p) | X = 1: ARMA(p,1) | X = 1: both sets (each group: % choosing p = 0, 1, 2, 3, 4)
1.0 | 52.2, 33.8, 5.2, 5.4, 3.4 | 65.8, 12.0, 10.0, 7.2, 5.0 | 66.0, 18.6, 5.0, 5.6, 4.8 | 56.2, 29.4, 6.2, 5.2, 3.0 | 51.0, 16.8, 16.0, 9.2, 7.0 | 63.4, 18.6, 7.4, 6.0, 4.6
2.0 | 0.0, 0.0, 77.0, 12.8, 10.2 | 0.2, 0.0, 71.0, 14.8, 14.0 | 0.2, 0.0, 72.0, 13.4, 14.4 | 0.0, 0.0, 76.2, 12.6, 11.2 | 0.2, 0.0, 57.2, 24.4, 18.2 | 0.0, 0.0, 62.4, 22.2, 15.4
3.0 | 1.2, 13.2, 66.6, 12.8, 6.2 | 34.4, 19.8, 27.6, 9.4, 8.8 | 25.2, 12.0, 46.0, 9.2, 7.6 | 2.2, 12.0, 66.8, 12.0, 7.0 | 28.4, 12.6, 30.6, 16.6, 11.8 | 22.4, 7.6, 43.6, 15.4, 11.0
4.0 | 10.8, 8.6, 62.6, 11.0, 7.0 | 44.6, 7.0, 31.0, 10.8, 6.6 | 36.4, 8.6, 39.4, 9.0, 6.6 | 12.8, 6.6, 61.4, 11.0, 8.2 | 32.6, 7.2, 33.6, 16.8, 9.8 | 30.2, 6.8, 39.0, 15.4, 8.6
5.0 | 0.0, 4.4, 69.8, 17.0, 8.8 | 0.0, 37.2, 40.8, 10.6, 11.4 | 0.0, 22.8, 54.2, 14.4, 8.6 | 0.0, 2.6, 71.8, 15.8, 9.8 | 0.0, 28.0, 37.0, 20.8, 14.2 | 0.0, 15.6, 50.0, 23.0, 11.4
6.0 | 0.0, 0.2, 73.8, 15.8, 10.2 | 0.0, 13.8, 55.0, 17.2, 14.0 | 0.0, 5.8, 65.6, 15.6, 13.0 | 0.0, 0.6, 70.4, 17.8, 11.2 | 0.0, 18.2, 40.8, 21.8, 19.2 | 0.0, 9.4, 51.2, 21.0, 18.4
1.5 | 9.6, 26.0, 38.2, 17.0, 9.2 | 62.6, 15.8, 10.4, 4.6, 6.6 | 56.4, 15.4, 16.0, 6.2, 6.0 | 11.4, 21.6, 41.2, 14.2, 11.6 | 54.2, 11.4, 16.4, 7.0, 11.0 | 50.0, 10.2, 21.6, 6.8, 11.4
2.5 | 0.0, 0.0, 4.4, 44.8, 50.8 | 0.4, 0.6, 67.8, 16.2, 15.0 | 0.4, 0.6, 56.8, 22.8, 19.4 | 0.0, 0.0, 5.2, 39.0, 55.8 | 0.4, 0.2, 62.8, 14.8, 21.8 | 0.4, 0.2, 52.2, 18.8, 28.4
3.5 | 0.0, 0.0, 24.0, 46.8, 29.2 | 2.6, 28.2, 43.6, 13.6, 12.0 | 2.4, 25.0, 36.4, 21.6, 14.6 | 0.0, 0.0, 25.2, 39.2, 35.6 | 4.4, 21.0, 42.4, 11.4, 20.8 | 4.2, 19.4, 34.4, 17.4, 24.6
4.5 | 0.2, 0.6, 26.6, 41.8, 30.8 | 31.2, 9.0, 38.8, 10.0, 11.0 | 28.8, 5.8, 32.8, 19.6, 13.0 | 0.2, 0.6, 29.8, 35.0, 34.4 | 26.0, 6.2, 38.2, 10.0, 19.6 | 23.6, 4.4, 33.4, 16.0, 22.6
5.5 | 0.0, 0.2, 11.2, 46.4, 42.2 | 0.0, 10.0, 59.0, 16.8, 14.2 | 0.0, 9.0, 51.0, 21.8, 18.2 | 0.0, 0.2, 13.2, 37.6, 49.0 | 0.0, 7.2, 53.0, 13.4, 26.4 | 0.0, 7.0, 45.0, 16.6, 31.4
6.5 | 4.2, 1.0, 26.8, 40.8, 27.2 | 0.4, 2.4, 65.8, 18.2, 13.2 | 4.0, 1.8, 52.6, 25.8, 15.8 | 6.0, 1.6, 23.4, 46.2, 22.8 | 0.4, 2.2, 59.2, 23.6, 14.6 | 5.8, 1.6, 47.0, 30.6, 15.0
1.9 | 0.2, 1.2, 14.2, 30.4, 54.0 | 59.4, 19.4, 11.2, 5.4, 4.6 | 59.4, 18.6, 11.2, 5.8, 5.0 | 0.4, 1.0, 15.8, 24.4, 58.4 | 61.4, 16.4, 10.0, 5.8, 6.4 | 61.4, 15.8, 10.0, 5.8, 7.0
2.9 | 0.0, 0.0, 0.0, 9.8, 90.2 | 0.0, 0.6, 70.2, 18.0, 11.2 | 0.0, 0.6, 69.8, 18.2, 11.4 | 0.0, 0.0, 0.2, 7.2, 92.6 | 0.0, 0.4, 69.0, 16.8, 13.8 | 0.0, 0.4, 68.6, 16.8, 14.2
3.9 | 0.0, 0.0, 1.4, 17.6, 81.0 | 3.8, 23.8, 50.2, 15.4, 6.8 | 3.8, 23.8, 50.0, 15.2, 7.2 | 0.0, 0.0, 2.0, 13.2, 84.8 | 5.2, 22.6, 49.6, 13.8, 8.8 | 5.2, 22.6, 49.6, 13.8, 8.8
4.9 | 0.0, 0.0, 1.8, 20.6, 77.6 | 15.4, 15.0, 49.0, 14.2, 6.4 | 15.4, 14.8, 48.8, 14.4, 6.6 | 0.0, 0.0, 2.0, 16.4, 81.6 | 15.4, 13.0, 50.2, 11.8, 9.6 | 15.4, 12.8, 50.2, 12.0, 9.6
5.9 | 0.0, 0.0, 0.2, 16.4, 83.4 | 0.0, 7.2, 65.2, 18.4, 9.2 | 0.0, 7.0, 64.6, 19.2, 9.2 | 0.0, 0.6, 0.6, 12.0, 86.8 | 0.0, 9.8, 61.4, 17.2, 11.6 | 0.0, 9.8, 61.2, 17.4, 11.6
6.9 | 2.2, 71.2, 12.4, 9.2, 5.0 | 11.4, 54.2, 16.4, 9.2, 8.8 | 7.2, 60.6, 15.6, 10.2, 6.4 | 4.0, 67.2, 14.2, 8.4, 6.2 | 17.2, 40.8, 20.2, 13.4, 8.4 | 11.2, 47.0, 22.4, 11.8, 7.6
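The pure-AR columns of Tables A3 through A7 can be emulated with standard software. Below is a hedged sketch using statsmodels' ar_select_order in place of the authors' implementation; the series drawn here is white noise purely for illustration:

    import numpy as np
    from statsmodels.tsa.ar_model import ar_select_order

    # Fit AR(0) through AR(4) and keep the order minimizing AIC;
    # trend='n' corresponds to the known (zero) mean, i.e., no X matrix.
    np.random.seed(0)
    y = np.random.randn(60)  # placeholder series; the tables use ARMA(2,1) draws
    sel = ar_select_order(y, maxlag=4, ic='aic', trend='n')
    print(sel.ar_lags)  # selected lags; an empty result corresponds to p = 0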
Table A4. Simulation results of the SBC method for the model with known mean (left panel) and unknown but constant mean (right panel). See Table A3 for comments.
Model | No X: AR(p) | No X: ARMA(p,1) | No X: both sets | X = 1: AR(p) | X = 1: ARMA(p,1) | X = 1: both sets (each group: % choosing p = 0, 1, 2, 3, 4)
1.0 | 62.6, 34.0, 2.8, 0.6, 0.0 | 91.6, 5.4, 1.8, 1.0, 0.2 | 80.4, 16.4, 2.4, 0.8, 0.0 | 66.2, 31.2, 2.4, 0.2, 0.0 | 81.4, 11.4, 4.4, 1.8, 1.0 | 78.8, 16.4, 3.6, 0.8, 0.4
2.0 | 0.0, 0.0, 93.2, 4.6, 2.2 | 0.8, 0.8, 88.6, 6.4, 3.4 | 0.6, 0.0, 91.6, 5.2, 2.6 | 0.0, 0.0, 92.0, 5.6, 2.4 | 0.6, 0.4, 78.6, 16.2, 4.2 | 0.4, 0.2, 87.2, 9.8, 2.4
3.0 | 2.0, 34.2, 58.8, 3.8, 1.2 | 66.4, 15.2, 16.4, 1.8, 0.2 | 47.8, 15.8, 34.2, 1.8, 0.4 | 3.4, 28.4, 63.2, 3.6, 1.4 | 63.0, 10.8, 20.0, 4.8, 1.4 | 44.8, 12.2, 38.0, 3.4, 1.6
4.0 | 15.4, 22.6, 57.8, 2.6, 1.6 | 79.0, 4.0, 15.2, 1.4, 0.4 | 64.8, 5.4, 27.8, 1.4, 0.6 | 17.8, 18.6, 60.0, 2.0, 1.6 | 68.0, 4.2, 22.2, 4.6, 1.0 | 58.2, 4.6, 33.0, 3.6, 0.6
5.0 | 0.0, 7.8, 83.4, 6.8, 2.0 | 0.0, 63.6, 32.8, 2.2, 1.4 | 0.0, 30.0, 63.6, 5.0, 1.4 | 0.0, 6.4, 85.0, 6.4, 2.2 | 0.0, 55.6, 32.8, 9.8, 1.8 | 0.0, 24.0, 65.4, 8.4, 2.2
6.0 | 0.0, 1.4, 90.6, 6.6, 1.4 | 0.0, 28.4, 61.0, 7.4, 3.2 | 0.0, 8.6, 83.4, 6.2, 1.8 | 0.2, 4.0, 88.6, 6.0, 1.2 | 0.0, 37.4, 43.8, 13.0, 5.8 | 0.2, 15.0, 73.6, 8.2, 3.0
1.5 | 11.8, 51.0, 31.0, 4.8, 1.4 | 91.0, 8.0, 0.8, 0.2, 0.0 | 79.6, 14.6, 4.8, 0.8, 0.2 | 14.8, 46.4, 33.6, 3.2, 2.0 | 86.0, 6.0, 5.4, 1.8, 0.8 | 76.8, 10.4, 9.6, 2.0, 1.2
2.5 | 0.0, 0.0, 16.4, 59.0, 24.6 | 0.6, 0.6, 89.6, 6.6, 2.6 | 0.6, 0.6, 76.2, 17.4, 5.2 | 0.0, 0.0, 20.0, 50.4, 29.6 | 0.4, 0.2, 87.0, 6.8, 5.6 | 0.4, 0.2, 75.6, 15.6, 8.2
3.5 | 0.0, 0.2, 50.6, 37.6, 11.6 | 10.6, 48.8, 35.0, 3.6, 2.0 | 9.0, 40.4, 36.8, 11.0, 2.8 | 0.0, 0.0, 55.8, 30.4, 13.8 | 13.8, 40.8, 37.4, 3.8, 4.2 | 12.8, 31.0, 42.6, 8.8, 4.8
4.5 | 0.2, 1.0, 57.6, 33.0, 8.2 | 64.4, 7.4, 24.2, 2.6, 1.4 | 57.6, 5.0, 27.6, 7.0, 2.8 | 0.4, 0.8, 62.4, 26.0, 10.4 | 62.6, 5.8, 25.8, 2.6, 3.2 | 55.4, 3.8, 31.6, 5.2, 4.0
5.5 | 0.0, 0.6, 32.6, 50.0, 16.8 | 0.0, 29.8, 63.2, 4.4, 2.6 | 0.0, 24.6, 54.2, 18.0, 3.2 | 0.0, 0.2, 39.2, 40.4, 20.2 | 0.0, 23.6, 63.4, 4.2, 8.8 | 0.0, 21.0, 57.2, 12.6, 9.2
6.5 | 6.6, 3.0, 49.0, 33.8, 7.6 | 4.0, 7.4, 78.8, 8.4, 1.4 | 6.6, 4.2, 68.2, 18.6, 2.4 | 9.8, 3.2, 42.6, 39.2, 5.2 | 5.4, 8.2, 71.2, 12.4, 2.8 | 9.4, 5.2, 58.8, 24.6, 2.0
1.9 | 0.4, 10.0, 35.8, 28.8, 25.0 | 84.4, 11.6, 3.2, 0.6, 0.2 | 84.6, 11.4, 3.0, 0.6, 0.4 | 0.8, 8.8, 38.4, 23.2, 28.8 | 85.8, 10.8, 2.2, 0.8, 0.4 | 86.0, 10.6, 1.8, 1.0, 0.6
2.9 | 0.0, 0.0, 1.0, 27.0, 72.0 | 0.2, 1.2, 85.2, 9.8, 3.6 | 0.2, 1.2, 84.8, 10.4, 3.4 | 0.0, 0.0, 1.4, 20.6, 78.0 | 0.0, 1.0, 87.6, 8.4, 3.0 | 0.0, 1.0, 87.0, 8.8, 3.2
3.9 | 0.0, 0.0, 9.2, 35.6, 55.2 | 5.2, 43.2, 45.6, 5.2, 0.8 | 5.2, 43.2, 45.2, 5.2, 1.2 | 0.0, 0.0, 9.8, 28.6, 61.6 | 5.8, 41.4, 48.4, 3.4, 1.0 | 5.8, 41.4, 48.2, 3.4, 1.2
4.9 | 0.0, 0.0, 10.6, 38.8, 50.6 | 31.8, 22.4, 41.4, 3.0, 1.4 | 31.4, 22.2, 41.0, 3.6, 1.8 | 0.0, 0.0, 12.4, 31.0, 56.6 | 32.8, 18.0, 44.8, 3.0, 1.4 | 32.4, 17.8, 44.4, 3.6, 1.8
5.9 | 0.0, 0.0, 2.4, 32.2, 65.4 | 0.0, 11.8, 79.0, 8.0, 1.2 | 0.0, 11.6, 77.6, 9.0, 1.8 | 0.0, 0.8, 2.8, 24.2, 72.2 | 0.0, 15.4, 74.8, 8.2, 1.6 | 0.0, 15.4, 74.0, 8.4, 2.2
6.9 | 2.8, 89.8, 5.4, 1.4, 0.6 | 30.4, 60.0, 6.6, 1.6, 1.4 | 10.6, 81.6, 5.6, 1.2, 1.0 | 6.2, 85.6, 5.8, 1.6, 0.8 | 37.2, 43.4, 12.0, 5.0, 2.4 | 17.6, 69.0, 8.8, 3.2, 1.4
Table A5. Simulation results of the AICC method for the model with known mean (left panel) and unknown but constant mean (right panel). See Table A3 for comments.
Model | No X: AR(p) | No X: ARMA(p,1) | No X: both sets | X = 1: AR(p) | X = 1: ARMA(p,1) | X = 1: both sets (each group: % choosing p = 0, 1, 2, 3, 4)
1.0 | 53.8, 34.4, 5.4, 4.8, 1.6 | 71.6, 12.0, 8.4, 5.8, 2.2 | 69.0, 18.8, 5.0, 5.4, 1.8 | 58.6, 30.8, 5.6, 3.8, 1.2 | 58.2, 16.2, 14.2, 7.2, 4.2 | 66.8, 18.6, 7.0, 5.4, 2.2
2.0 | 0.0, 0.0, 80.6, 11.2, 8.2 | 0.4, 0.0, 76.0, 13.2, 10.4 | 0.2, 0.0, 77.4, 12.6, 9.8 | 0.0, 0.0, 81.8, 10.0, 8.2 | 0.6, 0.0, 64.0, 23.4, 12.0 | 0.4, 0.0, 68.6, 19.4, 11.6
3.0 | 1.2, 15.8, 67.6, 10.6, 4.8 | 40.6, 20.0, 26.8, 7.4, 5.2 | 28.8, 13.4, 46.6, 6.2, 5.0 | 2.2, 14.2, 70.0, 9.2, 4.4 | 35.4, 13.8, 29.8, 12.6, 8.4 | 25.6, 8.8, 46.8, 11.4, 7.4
4.0 | 11.0, 9.4, 64.6, 10.2, 4.8 | 52.0, 6.2, 29.0, 7.8, 5.0 | 40.4, 7.6, 39.2, 7.6, 5.2 | 13.2, 7.8, 65.2, 8.6, 5.2 | 42.2, 6.2, 34.0, 11.8, 5.8 | 35.2, 6.0, 40.4, 12.6, 5.8
5.0 | 0.0, 4.6, 74.8, 13.8, 6.8 | 0.0, 41.8, 40.8, 9.2, 8.2 | 0.0, 24.8, 57.8, 10.6, 6.8 | 0.0, 3.4, 77.0, 13.2, 6.4 | 0.0, 33.2, 39.4, 18.2, 9.2 | 0.0, 17.8, 55.4, 19.2, 7.6
6.0 | 0.0, 0.2, 78.2, 13.0, 8.6 | 0.0, 16.4, 58.8, 14.8, 10.0 | 0.0, 6.0, 70.4, 12.8, 10.8 | 0.0, 0.8, 76.0, 14.4, 8.8 | 0.0, 21.2, 43.0, 21.0, 14.8 | 0.0, 10.6, 57.4, 17.8, 14.2
1.5 | 9.8, 31.4, 39.2, 13.4, 6.2 | 69.2, 15.8, 9.0, 2.6, 3.4 | 60.8, 16.0, 14.2, 5.2, 3.8 | 11.8, 27.0, 43.4, 10.6, 7.2 | 61.2, 12.0, 14.2, 5.6, 7.0 | 57.0, 11.0, 18.6, 6.0, 7.4
2.5 | 0.0, 0.0, 6.0, 48.4, 45.6 | 0.4, 0.6, 74.8, 14.4, 9.8 | 0.4, 0.6, 62.4, 22.0, 14.6 | 0.0, 0.0, 6.6, 42.6, 50.8 | 0.4, 0.2, 71.6, 13.0, 14.8 | 0.4, 0.2, 58.0, 19.2, 22.2
3.5 | 0.0, 0.0, 28.0, 48.0, 24.0 | 3.4, 33.0, 45.0, 10.6, 8.0 | 3.2, 28.4, 37.8, 19.2, 11.4 | 0.0, 0.0, 32.8, 41.0, 26.2 | 5.6, 26.8, 44.4, 9.0, 14.2 | 4.8, 23.6, 37.4, 16.6, 17.6
4.5 | 0.2, 0.6, 31.8, 42.0, 25.4 | 37.2, 10.0, 37.4, 7.2, 8.2 | 32.6, 6.2, 34.0, 17.8, 9.4 | 0.2, 0.6, 36.2, 35.6, 27.4 | 32.8, 7.4, 37.8, 7.4, 14.6 | 29.6, 4.8, 35.8, 13.2, 16.6
5.5 | 0.0, 0.2, 13.8, 49.2, 36.8 | 0.0, 13.0, 61.2, 15.2, 10.6 | 0.0, 11.8, 52.0, 21.6, 14.6 | 0.0, 0.2, 18.2, 40.8, 40.8 | 0.0, 9.6, 60.0, 10.8, 19.6 | 0.0, 9.2, 50.8, 16.2, 23.8
6.5 | 4.2, 1.4, 31.0, 42.2, 21.2 | 0.6, 4.2, 70.2, 16.2, 8.8 | 4.0, 2.2, 57.2, 25.2, 11.4 | 6.6, 2.2, 27.6, 46.6, 17.0 | 0.8, 3.4, 65.2, 21.6, 9.0 | 6.4, 2.8, 51.0, 29.2, 10.6
1.9 | 0.2, 1.8, 16.8, 33.2, 48.0 | 65.0, 19.0, 9.4, 4.0, 2.6 | 65.0, 18.0, 9.4, 4.6, 3.0 | 0.4, 1.2, 19.4, 27.2, 51.8 | 69.0, 16.6, 8.4, 2.8, 3.2 | 68.8, 16.4, 8.2, 2.8, 3.8
2.9 | 0.0, 0.0, 0.0, 12.6, 87.4 | 0.0, 0.6, 73.0, 17.8, 8.6 | 0.0, 0.6, 72.8, 18.0, 8.6 | 0.0, 0.0, 0.2, 10.6, 89.2 | 0.0, 0.6, 75.0, 15.8, 8.6 | 0.0, 0.6, 74.8, 15.8, 8.8
3.9 | 0.0, 0.0, 2.0, 21.0, 77.0 | 4.0, 27.8, 52.2, 12.4, 3.6 | 4.0, 27.8, 52.2, 12.2, 3.8 | 0.0, 0.0, 2.6, 16.6, 80.8 | 5.2, 26.2, 52.6, 11.2, 4.8 | 5.2, 26.2, 52.6, 11.2, 4.8
4.9 | 0.0, 0.0, 2.6, 24.0, 73.4 | 15.6, 16.0, 51.6, 12.0, 4.8 | 15.6, 16.0, 51.2, 12.0, 5.2 | 0.0, 0.0, 2.8, 19.8, 77.4 | 17.4, 15.2, 52.0, 10.2, 5.2 | 17.4, 15.2, 52.0, 10.2, 5.2
5.9 | 0.0, 0.0, 0.8, 20.0, 79.2 | 0.0, 7.4, 69.2, 16.4, 7.0 | 0.0, 7.2, 68.4, 17.2, 7.2 | 0.0, 0.6, 0.6, 14.8, 84.0 | 0.0, 10.8, 66.8, 15.4, 7.0 | 0.0, 10.8, 66.2, 15.6, 7.4
6.9 | 2.2, 74.0, 11.8, 8.0, 4.0 | 13.6, 57.0, 16.0, 7.4, 6.0 | 7.6, 65.8, 14.8, 7.8, 4.0 | 4.4, 73.0, 11.2, 7.4, 4.0 | 21.0, 43.6, 18.2, 11.2, 6.0 | 12.8, 52.8, 18.4, 10.6, 5.4
Table A6. Simulation results of the HQ method for the model with known mean (left panel) and unknown but constant mean (right panel). See Table A3 for comments.
Model | No X: AR(p) | No X: ARMA(p,1) | No X: both sets | X = 1: AR(p) | X = 1: ARMA(p,1) | X = 1: both sets (each group: % choosing p = 0, 1, 2, 3, 4)
1.0 | 58.0, 34.6, 4.0, 2.8, 0.6 | 80.4, 9.0, 5.6, 3.4, 1.6 | 74.0, 18.2, 4.0, 3.2, 0.6 | 61.8, 30.6, 4.4, 2.8, 0.4 | 65.6, 15.0, 10.4, 5.6, 3.4 | 71.0, 18.8, 5.2, 3.8, 1.2
2.0 | 0.0, 0.0, 84.4, 9.6, 6.0 | 0.6, 0.4, 80.0, 11.2, 7.8 | 0.2, 0.0, 83.2, 10.2, 6.4 | 0.0, 0.0, 84.6, 8.8, 6.6 | 0.6, 0.2, 68.2, 21.4, 9.6 | 0.4, 0.0, 73.2, 17.0, 9.4
3.0 | 1.4, 20.2, 68.0, 7.8, 2.6 | 50.8, 18.2, 23.2, 5.0, 2.8 | 36.4, 14.2, 44.0, 3.8, 1.6 | 2.6, 17.0, 70.2, 7.0, 3.2 | 45.4, 12.0, 27.4, 10.2, 5.0 | 33.2, 8.8, 45.8, 9.0, 3.2
4.0 | 12.6, 13.2, 64.8, 6.0, 3.4 | 62.0, 5.2, 24.2, 5.2, 3.4 | 48.2, 6.4, 38.2, 4.2, 3.0 | 15.0, 10.0, 65.0, 5.8, 4.2 | 51.6, 5.0, 30.0, 9.2, 4.2 | 43.0, 4.8, 40.0, 8.4, 3.8
5.0 | 0.0, 5.6, 79.0, 10.4, 5.0 | 0.0, 49.0, 38.4, 6.6, 6.0 | 0.0, 26.4, 60.0, 8.4, 5.2 | 0.0, 4.2, 80.6, 10.4, 4.8 | 0.0, 38.2, 38.4, 16.2, 7.2 | 0.0, 19.8, 58.4, 16.4, 5.4
6.0 | 0.0, 0.4, 82.2, 11.0, 6.4 | 0.0, 19.6, 60.2, 12.6, 7.6 | 0.0, 6.8, 75.0, 11.2, 7.0 | 0.0, 1.4, 80.4, 11.8, 6.4 | 0.0, 25.0, 44.2, 19.0, 11.8 | 0.0, 11.4, 63.4, 15.0, 10.2
1.5 | 10.0, 40.4, 35.6, 9.8, 4.2 | 78.6, 12.6, 6.4, 1.0, 1.4 | 70.0, 16.0, 9.2, 2.8, 2.0 | 13.0, 35.2, 39.6, 7.2, 5.0 | 70.6, 10.2, 10.8, 4.0, 4.4 | 64.4, 12.0, 15.8, 3.2, 4.6
2.5 | 0.0, 0.0, 7.8, 52.0, 40.2 | 0.6, 0.6, 79.0, 12.0, 7.8 | 0.6, 0.6, 67.2, 20.0, 11.6 | 0.0, 0.0, 9.4, 45.6, 45.0 | 0.4, 0.2, 77.4, 10.6, 11.4 | 0.4, 0.2, 64.6, 17.4, 17.4
3.5 | 0.0, 0.0, 34.4, 45.4, 20.2 | 4.2, 39.0, 41.8, 9.2, 5.8 | 3.8, 32.8, 38.8, 16.6, 8.0 | 0.0, 0.0, 38.2, 39.2, 22.6 | 7.0, 30.6, 43.2, 7.8, 11.4 | 6.6, 25.0, 40.2, 14.8, 13.4
4.5 | 0.2, 0.6, 41.2, 39.8, 18.2 | 46.6, 9.4, 33.6, 5.0, 5.4 | 40.8, 6.4, 33.8, 13.4, 5.6 | 0.2, 0.6, 44.6, 32.8, 21.8 | 42.6, 7.0, 33.8, 5.6, 11.0 | 37.6, 5.0, 34.4, 11.2, 11.8
5.5 | 0.0, 0.4, 18.8, 50.2, 30.6 | 0.0, 18.4, 62.4, 11.2, 8.0 | 0.0, 16.4, 53.0, 21.0, 9.6 | 0.0, 0.2, 22.6, 41.8, 35.4 | 0.0, 14.0, 59.4, 9.0, 17.6 | 0.0, 12.8, 50.8, 15.6, 20.8
6.5 | 5.2, 2.2, 36.2, 40.0, 16.4 | 1.2, 5.0, 75.0, 13.0, 5.8 | 5.0, 3.4, 60.8, 23.2, 7.6 | 7.2, 2.4, 32.2, 44.2, 14.0 | 2.0, 5.0, 67.0, 18.2, 7.8 | 7.2, 3.4, 53.8, 27.2, 8.4
1.9 | 0.2, 3.8, 20.0, 33.6, 42.4 | 73.8, 16.0, 6.8, 1.8, 1.6 | 73.8, 16.0, 6.4, 2.0, 1.8 | 0.6, 3.0, 22.8, 27.6, 46.0 | 74.8, 14.4, 7.0, 1.8, 2.0 | 74.4, 14.2, 7.2, 1.8, 2.4
2.9 | 0.0, 0.0, 0.2, 16.8, 83.0 | 0.0, 0.8, 77.6, 15.2, 6.4 | 0.0, 0.8, 77.2, 15.6, 6.4 | 0.0, 0.0, 0.2, 13.2, 86.6 | 0.0, 0.6, 78.2, 13.6, 7.6 | 0.0, 0.6, 78.0, 13.6, 7.8
3.9 | 0.0, 0.0, 3.8, 25.0, 71.2 | 4.6, 32.2, 50.2, 10.2, 2.8 | 4.6, 32.2, 50.2, 10.0, 3.0 | 0.0, 0.0, 3.8, 19.4, 76.8 | 5.4, 30.0, 52.2, 8.2, 4.2 | 5.4, 30.0, 52.2, 8.2, 4.2
4.9 | 0.0, 0.0, 4.2, 28.0, 67.8 | 19.4, 19.0, 50.0, 8.0, 3.6 | 19.4, 18.8, 49.6, 8.2, 4.0 | 0.0, 0.0, 4.8, 22.6, 72.6 | 20.0, 16.2, 51.6, 8.2, 4.0 | 20.0, 16.2, 51.2, 8.4, 4.2
5.9 | 0.0, 0.0, 0.8, 22.6, 76.6 | 0.0, 8.6, 72.0, 14.2, 5.2 | 0.0, 8.2, 71.4, 14.8, 5.6 | 0.0, 0.8, 0.8, 17.4, 81.0 | 0.0, 11.4, 68.0, 15.2, 5.4 | 0.0, 11.4, 67.0, 15.4, 6.2
6.9 | 2.6, 80.0, 8.8, 6.0, 2.6 | 18.8, 60.0, 12.2, 5.6, 3.4 | 9.0, 72.0, 9.8, 6.2, 3.0 | 5.2, 78.2, 8.2, 5.8, 2.6 | 25.0, 43.4, 16.6, 10.6, 4.4 | 14.6, 58.0, 14.4, 9.0, 4.0
Table A7. Simulation results of the FPE method for the model with known mean (left panel) and unknown but constant mean (right panel). See Table A3 for comments.
Model | No X: AR(p) | No X: ARMA(p,1) | No X: both sets | X = 1: AR(p) | X = 1: ARMA(p,1) | X = 1: both sets (each group: % choosing p = 0, 1, 2, 3, 4)
1.0 | 52.2, 33.8, 5.2, 5.4, 3.4 | 66.2, 12.2, 9.8, 7.2, 4.6 | 66.0, 18.8, 5.0, 5.6, 4.6 | 56.2, 29.4, 6.2, 5.2, 3.0 | 51.0, 16.8, 16.0, 9.2, 7.0 | 63.4, 18.6, 7.4, 6.0, 4.6
2.0 | 0.0, 0.0, 77.0, 12.8, 10.2 | 0.2, 0.0, 71.0, 14.8, 14.0 | 0.2, 0.0, 72.0, 13.4, 14.4 | 0.0, 0.0, 76.2, 12.6, 11.2 | 0.2, 0.0, 57.4, 24.4, 18.0 | 0.0, 0.0, 62.4, 22.2, 15.4
3.0 | 1.2, 13.2, 66.6, 12.8, 6.2 | 34.4, 19.8, 27.8, 9.4, 8.6 | 25.2, 12.0, 46.2, 9.2, 7.4 | 2.2, 12.0, 67.0, 12.2, 6.6 | 28.6, 12.6, 30.6, 16.4, 11.8 | 22.4, 7.6, 44.0, 15.2, 10.8
4.0 | 10.8, 8.6, 62.6, 11.0, 7.0 | 44.6, 7.0, 31.0, 10.8, 6.6 | 36.4, 8.6, 39.4, 9.0, 6.6 | 12.8, 6.6, 61.4, 11.0, 8.2 | 32.8, 7.2, 33.6, 16.6, 9.8 | 30.2, 6.8, 39.0, 15.4, 8.6
5.0 | 0.0, 4.4, 69.8, 17.0, 8.8 | 0.0, 37.2, 40.8, 10.6, 11.4 | 0.0, 22.8, 54.2, 14.4, 8.6 | 0.0, 2.6, 72.0, 15.8, 9.6 | 0.0, 28.2, 37.2, 20.8, 13.8 | 0.0, 15.8, 50.4, 22.8, 11.0
6.0 | 0.0, 0.2, 73.8, 15.8, 10.2 | 0.0, 13.8, 55.2, 17.0, 14.0 | 0.0, 5.8, 65.8, 15.6, 12.8 | 0.0, 0.6, 70.4, 17.8, 11.2 | 0.0, 18.2, 40.8, 22.0, 19.0 | 0.0, 9.4, 51.2, 21.0, 18.4
1.5 | 9.6, 26.0, 38.2, 17.0, 9.2 | 62.6, 15.8, 10.4, 4.6, 6.6 | 56.4, 15.4, 16.0, 6.2, 6.0 | 11.4, 21.6, 41.4, 14.2, 11.4 | 54.4, 11.4, 16.6, 7.0, 10.6 | 50.0, 10.2, 21.8, 6.8, 11.2
2.5 | 0.0, 0.0, 4.4, 45.0, 50.6 | 0.4, 0.6, 67.8, 16.2, 15.0 | 0.4, 0.6, 57.0, 22.8, 19.2 | 0.0, 0.0, 5.2, 39.0, 55.8 | 0.4, 0.2, 63.2, 15.0, 21.2 | 0.4, 0.2, 52.4, 18.8, 28.2
3.5 | 0.0, 0.0, 24.2, 46.8, 29.0 | 2.6, 28.4, 43.6, 13.8, 11.6 | 2.4, 25.2, 36.4, 21.6, 14.4 | 0.0, 0.0, 25.2, 39.6, 35.2 | 4.4, 21.0, 42.6, 11.4, 20.6 | 4.2, 19.4, 34.6, 17.6, 24.2
4.5 | 0.2, 0.6, 26.8, 41.8, 30.6 | 31.2, 9.0, 38.8, 10.0, 11.0 | 28.8, 5.8, 32.8, 19.6, 13.0 | 0.2, 0.6, 30.0, 34.8, 34.4 | 26.2, 6.2, 38.2, 10.0, 19.4 | 23.6, 4.4, 33.4, 16.0, 22.6
5.5 | 0.0, 0.2, 11.2, 46.6, 42.0 | 0.0, 10.0, 59.4, 16.6, 14.0 | 0.0, 9.0, 51.2, 21.6, 18.2 | 0.0, 0.2, 13.4, 37.8, 48.6 | 0.0, 7.2, 53.2, 13.2, 26.4 | 0.0, 7.0, 45.4, 16.4, 31.2
6.5 | 4.2, 1.0, 26.8, 40.8, 27.2 | 0.4, 2.4, 66.0, 18.0, 13.2 | 4.0, 1.8, 52.6, 25.8, 15.8 | 6.0, 1.6, 23.4, 46.2, 22.8 | 0.4, 2.2, 59.4, 23.6, 14.4 | 5.8, 1.6, 47.2, 30.6, 14.8
1.9 | 0.2, 1.2, 14.2, 30.6, 53.8 | 59.4, 19.4, 11.2, 5.4, 4.6 | 59.4, 18.6, 11.2, 5.8, 5.0 | 0.4, 1.0, 15.8, 24.6, 58.2 | 61.4, 16.4, 10.0, 5.8, 6.4 | 61.4, 15.8, 10.0, 5.8, 7.0
2.9 | 0.0, 0.0, 0.0, 9.8, 90.2 | 0.0, 0.6, 70.6, 17.8, 11.0 | 0.0, 0.6, 70.2, 18.0, 11.2 | 0.0, 0.0, 0.2, 7.2, 92.6 | 0.0, 0.4, 69.2, 16.8, 13.6 | 0.0, 0.4, 68.8, 16.8, 14.0
3.9 | 0.0, 0.0, 1.4, 17.6, 81.0 | 3.8, 23.8, 50.2, 15.4, 6.8 | 3.8, 23.8, 50.0, 15.2, 7.2 | 0.0, 0.0, 2.0, 13.2, 84.8 | 5.2, 22.6, 49.8, 13.6, 8.8 | 5.2, 22.6, 49.8, 13.6, 8.8
4.9 | 0.0, 0.0, 1.8, 20.8, 77.4 | 15.4, 15.0, 49.0, 14.2, 6.4 | 15.4, 14.8, 48.8, 14.4, 6.6 | 0.0, 0.0, 2.0, 16.4, 81.6 | 15.4, 13.2, 50.4, 11.6, 9.4 | 15.4, 13.0, 50.4, 11.8, 9.4
5.9 | 0.0, 0.0, 0.2, 16.4, 83.4 | 0.0, 7.2, 65.4, 18.4, 9.0 | 0.0, 7.0, 64.8, 19.2, 9.0 | 0.0, 0.6, 0.6, 12.2, 86.6 | 0.0, 9.8, 61.4, 17.4, 11.4 | 0.0, 9.8, 61.2, 17.6, 11.4
6.9 | 2.2, 71.2, 12.4, 9.2, 5.0 | 11.4, 54.6, 16.2, 9.2, 8.6 | 7.2, 60.6, 15.6, 10.2, 6.4 | 4.0, 67.4, 14.2, 8.4, 6.0 | 17.2, 40.8, 20.2, 13.4, 8.4 | 11.2, 47.2, 22.4, 11.6, 7.6

References

  1. Abadir, Karim M., and Jan R. Magnus. 2005. Matrix Algebra. Cambridge: Cambridge University Press. [Google Scholar]
  2. Anderson, Theodore W. 1971. The Statistical Analysis of Time Series. New York: John Wiley & Sons. [Google Scholar]
  3. Andrews, Donald W. K. 1993. Exactly Median-Unbiased Estimation of First Order Autoregressive/Unit Root Models. Econometrica 61: 139–65. [Google Scholar] [CrossRef]
  4. Andrews, Donald W. K., and Patrik Guggenberger. 2014. A Conditional-Heteroskedasticity-Robust Confidence Interval for the Autoregressive Parameter. The Review of Economics and Statistics 96: 376–81. [Google Scholar] [CrossRef]
  5. Barndorff-Nielsen, O. E., and D. R. Cox. 1979. Edgeworth and Saddlepoint Approximations with Statistical Applications (with discussion). Journal of the Royal Statistical Society, Series B 41: 279–312. [Google Scholar]
  6. Brockwell, Peter J., and Richard A. Davis. 1991. Time Series: Theory and Methods, 2nd ed. Berlin: Springer. [Google Scholar]
  7. Broda, Simon A., Kai Carstensen, and Marc S. Paolella. 2007. Bias-Adjusted Estimation in the ARX(1) Model. Computational Statistics & Data Analysis 51: 3355–67. [Google Scholar]
  8. Broda, Simon A., Kai Carstensen, and Marc S. Paolella. 2009. Assessing and Improving the Performance of Nearly Efficient Unit Root Tests in Small Samples. Econometric Reviews 28: 468–94. [Google Scholar] [CrossRef]
  9. Brooks, Stephen P., Paolo Giudici, and G. O. Roberts. 2003. Efficient Construction of Reversible Jump Markov Chain Monte Carlo Proposal Distributions (with discussion). Journal of the Royal Statistical Society, Series B 65: 3–55. [Google Scholar] [CrossRef]
  10. Burnham, Kenneth P., and David R. Anderson. 2003. Model Selection and Multi-Model Inference, 2nd ed. New York: Springer. [Google Scholar]
  11. Butler, Ronald W. 2007. An Introduction to Saddlepoint Methods. Cambridge: Cambridge University Press. [Google Scholar]
  12. Butler, Ronald W., and Marc S. Paolella. 1998. Approximate Distributions for the Various Serial Correlograms. Bernoulli 4: 497–518. [Google Scholar] [CrossRef]
  13. Chatfield, Christopher. 2001. Time-Series Forecasting. Boca Raton: Chapman & Hall. [Google Scholar]
  14. Chib, Siddhartha. 1993. Bayes Estimation of Regressions with Autoregressive Errors: A Gibbs Sampling Approach. Journal of Econometrics 58: 275–94. [Google Scholar] [CrossRef]
  15. Choi, Byoung-Seon. 1992. ARMA Model Identification. New York: Springer. [Google Scholar]
  16. Daniels, H. E. 1954. Saddlepoint Approximation in Statistics. The Annals of Mathematical Statistics 25: 631–50. [Google Scholar] [CrossRef]
  17. Daniels, H. E. 1956. The Approximate Distribution of Serial Correlation Coefficients. Biometrika 43: 169–85. [Google Scholar] [CrossRef]
  18. Daniels, H. E. 1987. Tail Probability Approximation. International Statistical Review 55: 37–48. [Google Scholar] [CrossRef]
  19. Daniels, H. E., and G. A. Young. 1991. Saddlepoint Approximation for the Studentized Mean, with an Application to the Bootstrap. Biometrika 78: 169–79. [Google Scholar] [CrossRef]
  20. Dubbelman, C., A. S. Louter, and A. P. J. Abrahamse. 1978. On Typical Characteristics of Economic Time Series and the Relative Qualities of Five Autocorrelation Tests. Journal of Econometrics 8: 295–306. [Google Scholar] [CrossRef]
  21. Durbin, J. 1980. The Approximate Distribution of Serial Correlation Coefficients Calculated from Residuals on Fourier Series. Biometrika 67: 335–50. [Google Scholar] [CrossRef]
  22. Durbin, J., and G. S. Watson. 1950. Testing for Serial Correlation in Least Squares Regression. I. Biometrika 37: 409–28. [Google Scholar] [PubMed]
  23. Durbin, J., and G. S. Watson. 1971. Testing for Serial Correlation in Least Squares Regression. III. Biometrika 58: 1–19. [Google Scholar] [CrossRef]
  24. Elliott, Graham, and James H. Stock. 2001. Confidence Interval for Autoregressive Coefficients Near One. Journal of Econometrics 103: 155–81. [Google Scholar] [CrossRef]
  25. Field, Christopher, and Elvezio Ronchetti. 1990. Small Sample Asymptotics. Lecture Notes—Monograph Series, Volume 13. Hayward: Institute of Mathematical Statistics. [Google Scholar]
  26. Fomby, Thomas B., and David K. Guilkey. 1978. On Choosing the Optimal Level of Significance for the Durbin–Watson Test and the Bayesian Alternative. Journal of Econometrics 8: 203–13. [Google Scholar] [CrossRef]
  27. Granger, Clive W. J., Maxwell L. King, and Halbert White. 1995. Comments on Testing Economic Theories and the Use of Model Selection Criteria. Journal of Econometrics 67: 173–87. [Google Scholar] [CrossRef]
  28. Han, Chirok, Peter C. B. Phillips, and Donggyu Sul. 2017. Lag Length Selection in Panel Autoregression. Econometric Reviews 36: 225–40. [Google Scholar] [CrossRef]
  29. Holly, Alberto, and Peter C. B. Phillips. 1979. A Saddlepoint Approximation to the Distribution of the k–Class Estimator of a Coefficient in a Simultaneous System. Econometrica 47: 1527–47. [Google Scholar] [CrossRef]
  30. Jenkins, Gwilym M., and Athar S. Alavi. 1981. Some Aspects of Modelling and Forecasting Multivariate Time Series. Journal of Time Series Analysis 2: 1–47. [Google Scholar] [CrossRef]
  31. Jensen, Jens L. 1995. Saddlepoint Approximations. Oxford: Oxford University Press. [Google Scholar]
  32. Keuzenkamp, Hugo A., and Michael McAleer. 1997. The Complexity of Simplicity. In 11th Biennial Conference on Modelling and Simulation. Edited by P. Binning, H. Bridgman, M. McAleer and B. Williams. Special issue, Mathematics and Computers in Simulation 43: 553–61. Available online: http://www.sciencedirect.com/science/article/pii/S037847549700044X (accessed on 8 May 2017).
  33. King, Maxwell L. 1985. A Point Optimal Test for Autoregressive Disturbances. Journal of Econometrics 27: 21–37. [Google Scholar] [CrossRef]
  34. King, Maxwell L., and Sivagowry Sriananthakumar. 2015. Point Optimal Testing: A Survey of the Post 1987 Literature. Model Assisted Statistics and Applications 10: 179–96. [Google Scholar] [CrossRef]
  35. Kolassa, John E. 1996. Higher-Order Approximations to Conditional Distribution Functions. Annals of Statistics 24: 353–64. [Google Scholar] [CrossRef]
  36. Kolassa, John E. 2006. Series Approximation Methods in Statistics, 3rd ed. New York: Springer. [Google Scholar]
  37. Konishi, Sadanori, and Genshiro Kitagawa. 2008. Information Criteria and Statistical Modeling. New York: Springer. [Google Scholar]
  38. Koreisha, Sergio, and Gary Yoshimoto. 1991. A Comparison among Identification Procedures for Autoregressive Moving Average Models. International Statistical Review 59: 37–57. [Google Scholar] [CrossRef]
  39. Le, Nhu D., Adrian E. Raftery, and R. Douglas Martin. 1996. Robust Bayesian Model Selection for Autoregressive Processes with Additive Outliers. Journal of the American Statistical Association 91: 123–31. [Google Scholar] [CrossRef]
  40. Leeb, Hannes, and Benedikt M. Pötscher. 2005. Model Selection and Inference: Facts and Fiction. Econometric Theory 21: 21–59. [Google Scholar] [CrossRef]
  41. Lehmann, E. L. 1958. Significance Level and Power. The Annals of Mathematical Statistics 29: 1167–76. [Google Scholar] [CrossRef]
  42. Lehmann, E. L. 1986. Testing Statistical Hypotheses, 2nd ed. New York: John Wiley & Sons. [Google Scholar]
  43. Lugannani, Robert, and Stephen O. Rice. 1980. Saddlepoint Approximations for the Distribution of Sums of Independent Random Variables. Advances in Applied Probability 12: 475–90. [Google Scholar] [CrossRef]
  44. Lütkepohl, Helmut. 2005. New Introduction to Multiple Time Series Analysis. Berlin: Springer. [Google Scholar]
  45. Makridakis, Spyros, and Michele Hibon. 2000. The M3 Competition: Results, Conclusions and Implications. International Journal of Forecasting 17: 567–70. [Google Scholar] [CrossRef]
  46. McGregor, J. R. 1960. An Approximate Test for Serial Correlation in Polynomial Regression. Biometrika 47: 111–19. [Google Scholar] [CrossRef]
  47. McLeod, Ian. 1975. Derivation of the Theoretical Autocovariance Function of Autoregressive–Moving Average Time Series. Applied Statistics 24: 255–56, Correction: 1977, 26: 194. [Google Scholar] [CrossRef]
  48. McQuarrie, Allan D. R., and Chih-Ling Tsai. 1998. Regression and Time Series Model Selection. River Edge: World Scientific. [Google Scholar]
  49. Mittnik, Stefan. 1988. Derivation of the Theoretical Autocovariance and Autocorrelation Function of Autoregressive Moving Average Processes. Communications in Statistics—Theory and Methods 17: 3825–31. [Google Scholar] [CrossRef]
  50. Pankratz, Alan. 1983. Forecasting with Univariate Box–Jenkins Models: Concepts and Cases. New York: John Wiley & Sons. [Google Scholar]
  51. Phillips, Peter C. B. 1978. Edgeworth and Saddlepoint Approximations in a First Order Non-Circular Autoregression. Biometrika 65: 91–98. [Google Scholar] [CrossRef]
  52. Phillips, Peter C. B. 2008. Unit Root Model Selection. Journal of the Japan Statistical Society 38: 65–74. [Google Scholar] [CrossRef]
  53. Phillips, Peter C. B. 2014. On Confidence Intervals for Autoregressive Roots and Predictive Regression. Econometrica 82: 1177–95. [Google Scholar] [CrossRef]
  54. Ploberger, Werner, and Peter C. B. Phillips. 2003. Empirical Limits for Time Series Econometric Models. Econometrica 71: 627–73. [Google Scholar] [CrossRef]
  55. Pötscher, B. M. 1983. Order Estimation in ARMA Models by Lagrangian Multiplier Tests. The Annals of Statistics 11: 872–85. [Google Scholar] [CrossRef]
  56. Rahman, M. S., and Maxwell L. King. 1999. Improved Model Selection Criterion. Communications in Statistics—Simulation and Computation 28: 51–71. [Google Scholar] [CrossRef]
  57. Reid, N. 1988. Saddlepoint Methods and Statistical Inference (with discussion). Statistical Science 3: 213–38. [Google Scholar] [CrossRef]
  58. Sanathanan, Lalitha. 1974. Critical Power Function and Decision Making. Journal of the American Statistical Association 69: 398–402. [Google Scholar] [CrossRef]
  59. Schervish, M. J., and R. S. Tsay. 1988. Bayesian Modeling and Forecasting in Autoregressive Models. In Bayesian Analysis of Time Series and Dynamic Models. Edited by J. C. Spall. New York: Marcel Dekker, pp. 23–52. [Google Scholar]
  60. Skovgaard, Ib M. 1987. Saddlepoint Expansions for Conditional Distributions. Journal of Applied Probability 24: 275–87. [Google Scholar]
  61. Tiao, G. C., and G. E. P. Box. 1981. Modelling Multiple Time Series with Applications. Journal of the American Statistical Association 76: 802–16. [Google Scholar]
  62. Van der Leeuw, Jan. 1994. The Covariance Matrix of ARMA Errors in Closed Form. Journal of Econometrics 63: 397–405. [Google Scholar] [CrossRef]
  63. Zellner, Arnold. 1971. An Introduction to Bayesian Inference in Econometrics. New York: John Wiley & Sons. [Google Scholar]
  64. Zellner, Arnold. 2001. Keep it Sophisticatedly Simple. In Simplicity, Inference and Modelling. Edited by A. Zellner, H. A. Keuzenkamp and M. McAleer. Cambridge: Cambridge University Press, pp. 242–62. [Google Scholar]
Notes
1. The ad hoc use of the Student's t distribution was found to be slightly better than the standard normal for the smaller sample sizes.
2. Negative values of ϕ were also considered; the results essentially paralleled their positive counterparts.
3. The ill effects of data dredging and autoregressive model over-fitting are discussed at length in Chatfield (2001, chp. 8) and the references therein.
Figure 1. Histograms of τ 1 , , τ 4 , based on 10 , 000 replications with true data being iid normal, and taking X = 1 . Top (bottom) panels are for T = 15 ( T = 30 ).
Figure 2. Empirical distribution of F ( t i ) , i = 1 , 2 , 3 (left, middle and right panels) where t i is the ratio of the MLE of ϕ i to its corresponding approximate standard error and F ( · ) is the Student’s t cdf with T 4 degrees of freedom. Rows from top to bottom correspond to T = 15 , T = 30 , T = 100 and T = 200 , respectively.
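The construction behind Figure 2 amounts to a probability-integral-transform check. A minimal sketch, with synthetic t-ratios standing in for the actual MLE-based ratios (which require the fitted model):

    import numpy as np
    from scipy import stats

    # Given t-ratios (MLE of phi_i over its approximate standard error),
    # apply the Student's t cdf with T - 4 degrees of freedom; values close
    # to uniform on (0,1) support the distributional approximation.
    T = 30
    t_stats = np.random.standard_t(T - 4, size=1000)  # stand-in for real ratios
    u = stats.t.cdf(t_stats, df=T - 4)
    print(np.histogram(u, bins=10, range=(0.0, 1.0))[0])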
Figure 3. Performance of the various methods in the AR(1) case using X = 1 , T = 30 , m = 4 , and c = 0.025 .
Figure 4. Similar to Figure 3, but based on X = [ 1 , t ] .
Figure 5. Histograms corresponding to the chosen value of AR lag length p, based on the CACF, AICC and BIC, using three values of tuning parameter m (2, 6 and 10, from left to right), and 1000 replications. True model is Gaussian AR(1) with parameter ϕ = 0.5 , sample size T = 60 , and known mean (no X matrix). The CACF method uses c = 0.025 .
Figure 6. Similar to Figure 5, but for sample size T = 30 and X = [ 1 , t ] .
Figure 7. For an AR(4) process with T = 30 and ϕ4 = 0.1 (left), 0.2 (middle) and 0.3 (right). Top panels show all criteria, with SPA = CACF. Bottom panels show just the CACF and the 1st- or 2nd-best penalty-based criterion: if the CACF was best, it is given first in the legend, otherwise second.
Figure 8. Same as Figure 7 but for ϕ 4 = 0.4 (left), 0.5 (middle) and 0.6 (right).
Figure 9. Performance of the three indicated AR order selection methods as a function of autoregressive parameter ϕ , for sample size T = 50 , known mean (denoted by X = [ ] ) and m = 5 , when the true AR order is p = 1 and (falsely) assuming Gaussianity. The true innovation sequence consists of i.i.d. Student’s t ( ν ) realizations, with df = ν indicated in the titles (from top to bottom, ν = 1 , ν = 2 , ν = 5 , and ν = 200 ). Left (right) panels indicate the percentage of the 1000 replications that resulted in choosing p = 0 ( p = 1 ).
Figure 10. Same as Figure 9 but having used X = [ 1 , t ] .
Table 1. Simulation results of the CACF method for sample size T = 60: percentages of the 1000 replications that chose AR lag length 0, 1, 2, 3, or 4, for six AR(2) models, with AR parameters a1 and a2 as indicated. The models, under the column denoted M, are labeled as 1.0, etc., with the zero after the decimal point indicating a true MA(1) coefficient of θ = 0. (Table A1 and Table A2 show the results for nonzero θ.) The panel denoted "No X" indicates the known mean case, while "X = 1" and "X = [1, t]" refer to the cases with unknown but constant mean, and with constant and time trend, respectively. Boldface numbers indicate the percentage of times the true AR lag order was chosen.
M | a1 | a2 | No X: p = 0, 1, 2, 3, 4 (%) | X = 1: p = 0, 1, 2, 3, 4 (%) | X = [1, t]: p = 0, 1, 2, 3, 4 (%)
1.0 | 0 | 0 | 81.5, 4.5, 4.3, 5.2, 4.5 | 80.3, 3.9, 5.0, 6.0, 4.8 | 82.0, 3.1, 5.3, 4.9, 4.7
2.0 | 1.2 | -0.8 | 0.0, 0.0, 91.4, 5.0, 3.6 | 0.0, 0.0, 91.0, 5.3, 3.7 | 0.0, 0.1, 89.5, 5.8, 4.6
3.0 | 0.7 | -0.3 | 0.1, 35.2, 56.0, 4.2, 4.5 | 0.0, 35.9, 54.4, 4.8, 4.9 | 0.1, 37.7, 54.0, 3.9, 4.3
4.0 | 0.4 | -0.3 | 10.7, 27.0, 53.5, 3.7, 5.1 | 10.0, 28.3, 53.1, 4.8, 3.8 | 11.6, 26.4, 54.3, 3.8, 3.9
5.0 | 1.4 | -0.45 | 0.0, 52.5, 40.2, 2.6, 4.7 | 0.0, 59.9, 33.0, 2.9, 4.2 | 0.0, 57.6, 31.3, 5.6, 5.5
6.0 | -0.3 | 0.55 | 0.1, 5.3, 69.1, 11.7, 13.8 | 0.3, 6.5, 66.7, 12.5, 14.0 | 0.0, 8.1, 66.1, 10.8, 15.0
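The three mean specifications in Table 1 amount to working either with the series itself or with least-squares residuals from the indicated regressor matrix. A small illustration of forming such residuals (our own sketch, assuming plain OLS projection; the methods compared in the table handle X internally):

    import numpy as np

    # Residuals under the three designs: no X (known zero mean), X = 1
    # (unknown constant mean), and X = [1, t] (constant plus linear trend).
    T = 60
    y = np.random.randn(T)
    t = np.arange(1.0, T + 1)
    designs = {"no X": None, "X = 1": np.ones((T, 1)),
               "X = [1, t]": np.column_stack([np.ones(T), t])}
    for label, X in designs.items():
        e = y if X is None else y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
        print(label, round(float(e.mean()), 4))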
Table 2. Summary measures for the two extreme AR(4) models. The measure $L_2 = \sum_i (\hat{p}_i - 4)^2$, and $L_2$ scaled by its minimum observed value, for the two model cases ϕ4 = 0.1 and 0.6.
ϕ4 = 0.1 | CACF | FPE | AIC | HQ | SBC | AICC
L2 | 458 | 600 | 607 | 638 | 895 | 918
L2/458 | 1.000 | 1.311 | 1.325 | 1.393 | 1.954 | 2.005
ϕ4 = 0.6 | CACF | AICC | SBC | HQ | FPE | AIC
L2 | 324 | 415 | 460 | 461 | 487 | 512
L2/324 | 1.000 | 1.280 | 1.420 | 1.423 | 1.503 | 1.580
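To make the arithmetic of the summary measure concrete, $L_2$ aggregates the squared deviations of each replication's selected order from the true order 4. A toy computation with hypothetical selections (invented solely for illustration):

    import numpy as np

    # L2 = sum_i (p_hat_i - 4)^2 over replications, true order = 4.
    p_hat = np.array([4, 4, 3, 2, 4, 1, 4, 4])
    L2 = int(np.sum((p_hat - 4) ** 2))
    print(L2)  # 0+0+1+4+0+9+0+0 = 14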
Table 3. Performance summary. Selected information from Table A2 and Table A3 through Table A7. For each model (with X = 1), the percentage of correct p choices (0 for models 1.0x, 1.5x, 1.9x and 2 for the rest) is shown for the CACF and for the penalty-based criterion that was either 1st or 2nd best. Entries take the form criterion:percentage, where criterion 0 is the CACF, 1 is AIC, 2 is SBC, 3 is AICC, 4 is HQ and 5 is FPE. The criterion with the larger percentage is given first.
Model | Best two criteria
1.0x | 0:87, 2:79
2.0x | 0:88, 2:87
3.0x | 3:47, 0:14
4.0x | 3:40, 0:16
5.0x | 2:65, 0:53
6.0x | 0:79, 2:74
1.5x | 0:91, 2:77
2.5x | 4:65, 0:64
3.5x | 0:49, 2:43
4.5x | 3:36, 0:32
5.5x | 0:61, 2:57
6.5x | 2:59, 0:36
1.9x | 2:86, 0:83
2.9x | 2:87, 0:59
3.9x | 3:53, 0:42
4.9x | 3:52, 0:33
5.9x | 2:74, 0:60
6.9x | 5:22, 0:6
