*Article* **Asymptotic Properties of Estimators for Seasonally Cointegrated State Space Models Obtained Using the CVA Subspace Method**

**Dietmar Bauer \* and Rainer Buschmeier**

Department of Business Administration and Economics, Bielefeld University, Universitaetsstrasse 25, 33615 Bielefeld, Germany; RBuschmeier@uni-bielefeld.de

**\*** Correspondence: Dietmar.Bauer@uni-bielefeld.de

**Abstract:** This paper investigates the asymptotic properties of estimators obtained from the so-called CVA (canonical variate analysis) subspace algorithm proposed by Larimore (1983) in the case where the data are generated by a minimal state space system containing unit roots at the seasonal frequencies such that the yearly difference is a stationary vector autoregressive moving average (VARMA) process. The empirically most important special cases of such data generating processes are the I(1) case as well as the case of seasonally integrated quarterly or monthly data. However, datasets with a higher sampling rate, such as hourly, daily or weekly observations, are increasingly available, for example for electricity consumption. In these cases the vector error correction representation (VECM) of the vector autoregressive (VAR) model is not very helpful, as it demands the parameterization of one matrix per seasonal unit root. Even for weekly series this amounts to 52 matrices using yearly periodicity; for hourly data this is prohibitive. For such processes estimation using quasi-maximum likelihood maximization is extremely hard, since the Gaussian likelihood typically has many local maxima while the parameter space is often high-dimensional. Additionally, estimating a large number of models to test hypotheses on the cointegrating rank at the various unit roots becomes practically impossible, for example for weekly data. This paper shows that in this setting CVA provides consistent estimators of the transfer function generating the data, making it a valuable initial estimator for subsequent quasi-likelihood maximization. Furthermore, the paper proposes new tests for the cointegrating rank at the seasonal frequencies, which are easy to compute and numerically robust, making the method suitable for automatic modeling.
A simulation study demonstrates that for processes of moderate to large dimension the new tests may outperform traditional tests based on long VAR approximations in sample sizes typically found in quarterly macroeconomic data. Further simulations show that the unit root tests are robust with respect to different distributions of the innovations as well as with respect to GARCH-type conditional heteroskedasticity. Moreover, an application to Kaggle data on hourly electricity consumption by different American providers demonstrates the usefulness of the method in practice. Therefore the CVA algorithm provides a very useful initial guess for subsequent quasi-maximum likelihood estimation and also delivers relevant information on the cointegrating ranks at the different unit root frequencies. It is thus a useful tool, for example (but not only) in automatic modeling applications where a large number of time series involving a substantial number of variables need to be modeled in parallel.

**Keywords:** cointegration; subspace algorithms; VARMA models; seasonality

**JEL Classification:** C13; C32

#### **1. Introduction**

Many time series show seasonal patterns that, according to [1] for example, cannot be modeled appropriately using seasonal dummies because they exhibit a slowly trending behavior typical for unit root processes.

**Citation:** Bauer, D.; Buschmeier, R. Asymptotic Properties of Estimators for Seasonally Cointegrated State Space Models Obtained Using the CVA Subspace Method. *Entropy* **2021**, *23*, 436. https://doi.org/10.3390/ e23040436

Academic Editor: Christian H. Weiss

Received: 19 February 2021 Accepted: 31 March 2021 Published: 8 April 2021


**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

To model such processes in the vector autoregressive (VAR) framework, Ref. [2] (abbreviated as JS in the following) extend the error correction representation for seasonally integrated autoregressive processes pioneered by [3] to the multivariate case. This vector error correction formulation (VECM) models the yearly differences of a process observed $S$ times per year. The model includes systems having unit roots at some or all of the possible locations $z_j = \exp(\frac{2\pi j}{S}i)$, $j = 0, \dots, S-1$, of seasonal unit roots. In JS all unit roots are assumed to be simple such that the process of yearly differences is stationary.

In this setting JS propose an estimator for the autoregressive polynomial subject to restrictions on its rank (the so-called cointegrating rank) at the unit roots *zj* based on an iterative scheme focusing on a pair of complex-conjugated unit roots (or the unit roots *zj* = 1 or *zj* = −1 respectively) at a time. The main idea here is the reformulation of the model using the so called vector error correction representation. Beside estimators JS also derived likelihood ratio tests for the cointegrating rank at the various unit roots.

Refs. [4,5] propose simpler estimation schemes based on complex reduced rank regression (cRRR in the following). They also show that their numerically simpler algorithm leads to test statistics for the cointegrating rank that are asymptotically equivalent to the quasi maximum likelihood tests of JS. These schemes still typically alternate between cRRR problems corresponding to different unit roots until convergence, although a one-step version estimating only once at each unit root exists. Ref. [6] provides updating equations for quasi maximum likelihood estimation in situations where constraints on the parameters prohibit focusing on one unit root at a time.

The leading case here is that of quarterly data ($S = 4$), where potential unit roots are located at $\pm 1$ and $\pm i$, implying that the VECM representation contains four potentially rank-restricted matrices. However, time series of much higher sampling frequency, such as hourly, daily or weekly observations, are increasingly available. In such cases it is unrealistic that all unit roots are present. If a unit root is not present, the corresponding matrix in the VECM is of full rank. Therefore, in situations with only a few unit roots present, the VECM requires a large number of parameters to be estimated. Also, in cases with a long period length (such as, for example, hourly data with yearly cycles) usage of the VECM involves the estimation of coefficient matrices at all lags spanning at least one year.
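A back-of-the-envelope count illustrates this growth. The sketch below is our own illustrative helper (the name `vecm_unit_root_params` is not from the paper); it counts only the entries of the $S$ potentially rank-restricted $s \times s$ VECM blocks, one per seasonal unit root, ignoring any additional lag coefficients:

```python
def vecm_unit_root_params(s: int, S: int) -> int:
    """Free entries in the S potentially rank-restricted s x s VECM blocks,
    one per seasonal unit root z_j, j = 0, ..., S-1 (full-rank case)."""
    return S * s * s

# Quarterly data: S = 4; weekly data with yearly periodicity: S = 52;
# hourly data with yearly periodicity: S = 24 * 365 = 8760.
print(vecm_unit_root_params(4, 4))     # quarterly, 4 variables: 64
print(vecm_unit_root_params(4, 52))    # weekly, 4 variables: 832
print(vecm_unit_root_params(4, 8760))  # hourly, 4 variables: 140160
```

Even for a modest four-variable system, the hourly case involves over a hundred thousand entries in these blocks alone, which is why the VECM route becomes impractical at high sampling rates.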

In general, for processes of moderate to large dimension the VAR framework involves the estimation of a large number of parameters, which can potentially be avoided by using the more flexible vector autoregressive moving average (VARMA) or the (in a sense equivalent) state space framework. This setting has been used in empirical research for the modeling of electricity markets; see the survey [7] for a long list of contributions. In particular, ref. [8] use the model described below without formal verification of the asymptotic theory for the quasi maximum likelihood estimation.

Recently, ref. [9] show that in the setting of dynamic factor models, typically used for observation processes of high dimension, the common assumption that the factors are generated using a vector autoregression jointly with the assumption that the idiosyncratic component is white noise (or more generally generated using a VAR or VARMA model independent of the factors) leads to a VARMA process. Also a number of papers (see for example [10–12]) show that in their empirical application the usage of VARMA models instead of approximations using the VAR model leads to superior prediction performance. This, jointly with the fact that the linearization of dynamic stochastic general equilibrium models (DSGE) leads to state space models, see e.g., [13], has fuelled recent interest in VARMA—and thus state space—modeling in particular in macroeconomics, see for example [14].

In this respect, quasi maximum likelihood estimation is the most often used approach for inference. Due to the typically highly non-convex nature of the quasi likelihood function (using the Gaussian density) in the VARMA setting, the criterion function shows many local maxima where the optimization can easily get stuck. Randomization alone does not solve the problem efficiently, as typically the parameter space is high-dimensional causing problems of the curse of dimensionality type.

Moreover, VARMA modeling requires a full specification of the state space unit root structure of the process, see [15]. The state space unit root structure specifies the number of common trends at each seasonal frequency (see below for definitions). For data of weekly or higher sampling frequency it is unlikely that the state space unit root structure is known prior to estimation. Testing all possible combinations is numerically infeasible in many cases.

As an attractive alternative in this respect, the class of subspace algorithms is investigated in this paper. One particular member of this class, the so-called canonical variate analysis (CVA) introduced by [16] (in the literature the algorithm is often called canonical correlation analysis, CCA), has been shown to provide system estimators which (under the assumption of known system order) are asymptotically equivalent to quasi maximum likelihood estimation (using the Gaussian likelihood) in the stationary case [17]. CVA shares a number of robustness properties with VAR estimators in the stationary case: [18] shows that CVA produces consistent estimators of the underlying transfer function in situations where the innovations are conditionally heteroskedastic processes of considerable generality. Ref. [19] shows that CVA provides consistent estimators of the transfer function even for stationary fractionally integrated processes, if the order of the system tends to infinity as a function of the sample size at a sufficient rate.

In the I(1) case, [20] introduce a heuristic adaptation of the algorithm using the assumption of known cointegrating rank in order to show consistency of the corresponding transfer function estimators. However, the specification of the cointegrating rank is no easy task in itself, and in case of misspecification the properties of this approach are unclear. Ref. [21] states without proof that the original CVA algorithm also delivers consistent estimates in the I(1) case without the need to impose the true cointegrating rank.

Furthermore, for I(1) processes [20] propose various tests for the cointegrating rank and compare them to tests in the Johansen framework, showing superior finite sample performance in particular for multivariate data sets with a large dimension of the modeled process.

This paper builds on these results and shows that CVA can also be used in the seasonally integrated case. The main contributions of the paper are:


The derivation of the asymptotic properties of the estimators is complemented by a simulation study and an application, both demonstrating the potential of CVA and one of the suggested tests. Jointly our results imply that CVA constitutes a very reasonable initial estimate for subsequent quasi likelihood maximization in the VARMA case. Moreover the method provides valuable information on the number of unit roots present in the process, which can be used for subsequent investigation at the very least by providing upper bounds on the number of common trends present at each unit root frequency. Contrary to the JS approach in the VAR framework these tests can be performed in parallel for all unit roots, eliminating the interdependence of the results inherent in the VECM representation. Moreover, they do not use the VECM representation involving a large number of parameters in the case of high sampling rates.

These properties make CVA a useful tool in automatic modeling of multivariate (with a substantial number of variables) seasonally (co-)integrated processes.

The paper is organized as follows: in the next section the model set and the main assumptions of the paper are presented. The estimation methods are described in Section 3. Section 4 states the consistency results. Inference on the cointegrating ranks is proposed in Section 5. Data preprocessing is discussed in Section 6. The simulations are contained in Section 7, while Section 8 discusses an application to real world data. Section 9 concludes the paper. Appendix A contains supporting material, Appendix C provides the proofs of the main results of this paper, which are based on preliminary results presented in Appendix B.

Throughout the paper we use the symbols $o(g_T)$ and $O(g_T)$ to denote orders of almost sure convergence, where $T$ denotes the sample size: $x_T = o(g_T)$ if $x_T/g_T \to 0$ almost surely and $x_T = O(g_T)$ if $x_T/g_T$ is bounded almost surely for large enough $T$ (that is, there exists a constant $M < \infty$ such that $\limsup_{T\to\infty} x_T/g_T \le M$ a.s.). Furthermore, $o_P(g_T)$ and $O_P(g_T)$ denote the corresponding in-probability versions.

#### **2. Model Set and Assumptions**

In this paper state space processes $(y_t)_{t\in\mathbb{Z}}$, $y_t \in \mathbb{R}^s$, are considered, defined as the solutions to the following equations for given white noise $(\varepsilon_t)_{t\in\mathbb{Z}}$, $\varepsilon_t \in \mathbb{R}^s$, $\mathbb{E}\varepsilon_t = 0$, $\mathbb{E}\varepsilon_t\varepsilon_t' = \Omega > 0$:

$$\begin{array}{rcl} x_{t+1} &=& Ax_t + K\varepsilon_t, \\ y_t &=& Cx_t + \varepsilon_t. \end{array} \tag{1}$$

Here $x_t \in \mathbb{R}^n$ denotes the unobserved state and $A \in \mathbb{R}^{n\times n}$, $C \in \mathbb{R}^{s\times n}$ and $K \in \mathbb{R}^{n\times s}$ define the state space system, typically written as the tuple $(A, C, K)$.

In this paper we consider without restriction of generality only minimal state space systems in innovations representation. For a minimal system the integer $n$ is called the order of the system. As is well known (cf. e.g., [22]), minimal systems are only identified up to the choice of the basis of the state space. Two minimal systems $(A, C, K)$ and $(\tilde A, \tilde C, \tilde K)$ are observationally equivalent if and only if there exists a nonsingular matrix $\mathcal{T} \in \mathbb{R}^{n\times n}$ such that $A = \mathcal{T}\tilde A\mathcal{T}^{-1}$, $C = \tilde C\mathcal{T}^{-1}$, $K = \mathcal{T}\tilde K$. For two observationally equivalent systems the impulse response sequences $k_0 = I_s$, $k_{j+1} = CA^jK = \tilde C\tilde A^j\tilde K$, $j = 0, 1, \dots$, coincide.
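The invariance of the impulse response under a state basis change can be checked numerically. The following is a minimal sketch with randomly drawn matrices (purely illustrative, not part of the estimation procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random system (A, C, K) with s = 2 outputs and n = 3 states.
n, s = 3, 2
A = 0.5 * rng.standard_normal((n, n))
C = rng.standard_normal((s, n))
K = rng.standard_normal((n, s))

# An observationally equivalent system obtained via a state basis change T.
T = rng.standard_normal((n, n))
Ti = np.linalg.inv(T)
A2, C2, K2 = T @ A @ Ti, C @ Ti, T @ K

def impulse(A, C, K, lags):
    """Impulse responses k_0 = I_s, k_{j+1} = C A^j K."""
    k, Aj = [np.eye(C.shape[0])], np.eye(A.shape[0])
    for _ in range(lags):
        k.append(C @ Aj @ K)
        Aj = A @ Aj
    return np.array(k)

# Both parameterizations generate the same transfer function.
assert np.allclose(impulse(A, C, K, 10), impulse(A2, C2, K2, 10))
```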

Ref. [15] shows that the structure of the Jordan normal form of the matrix $A$ determines the properties (such as stationarity) of the solutions to (1) for $t \in \mathbb{Z}$. Eigenvalues of $A$ on the unit circle lead to unit root processes in the sense of [15], who also define a *state space unit root structure* indicating the location and multiplicity of unit roots. A process $(y_t)_{t\in\mathbb{Z}}$ with state space unit root structure $\Omega_S = \{(0,(c_0)), (2\pi/S,(c_1)), \dots, (\pi,(c_{S/2}))\}$ for some even integer $S$ is called multi frequency I(1) (in short MFI(1)). (Even $S$ is chosen because it simplifies the notation by implying that $S/2$ also is an integer and $z = -1$ is a seasonal unit root. By adjusting the notation appropriately all results hold true for odd $S$ as well.)

If, moreover, such a process is observed for *S* periods per year, it is called *seasonal MFI(1)*. In this case the canonical form in [15] takes the following form:

$$\begin{array}{rcl}
A &=& \mathrm{diag}(A_0, A_1, \dots, A_{S/2}, A_\bullet),\\
A_0 &=& I_{c_0},\\
A_j &=& \begin{bmatrix}\cos(\omega_j)I_{c_j} & \sin(\omega_j)I_{c_j}\\ -\sin(\omega_j)I_{c_j} & \cos(\omega_j)I_{c_j}\end{bmatrix},\quad 0 < j < S/2,\\
A_{S/2} &=& -I_{c_{S/2}},\\
C &=& \begin{bmatrix} C_0 \mid C_{1,R} & C_{1,I} \mid \dots \mid C_{S/2-1,R} & C_{S/2-1,I} \mid C_{S/2} \mid C_\bullet \end{bmatrix},\\
K &=& \begin{bmatrix} K_{0,R}' \mid K_{1,R}' & K_{1,I}' \mid \dots \mid K_{S/2-1,R}' & K_{S/2-1,I}' \mid K_{S/2}' \mid K_\bullet' \end{bmatrix}'
\end{array} \tag{2}$$

where $\omega_j = 2\pi j/S$, $j = 0, \dots, S/2$, denote the unit root frequencies and $C_{j,R} \in \mathbb{R}^{s\times c_j}$, $C_{j,I} \in \mathbb{R}^{s\times c_j}$, $K_{j,R} \in \mathbb{R}^{c_j\times s}$, $K_{j,I} \in \mathbb{R}^{c_j\times s}$, where $0 \le c_j \le s$, $0 \le j \le S/2$. Furthermore, for $C_{j,\mathbb{C}} := C_{j,R} - iC_{j,I}$ it holds that $C_{j,\mathbb{C}}'C_{j,\mathbb{C}} = I_{c_j}$ and $K_{j,\mathbb{C}} = K_{j,R} + iK_{j,I}$ is of full row rank and positive upper triangular ($C_{0,I} = C_{S/2,I} = 0$, $K_{0,I} = K_{S/2,I} = 0$); see [15] for details. Finally $|\lambda_{max}(A_\bullet)| < 1$, where $\lambda_{max}(A)$ denotes an eigenvalue of the matrix $A$ with maximal modulus. The stable subsystem $(A_\bullet, C_\bullet, K_\bullet)$ is assumed to be in echelon canonical form (see [22]).
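The block structure of $A$ in (2) can be assembled numerically. The sketch below is our own illustrative code (the helper names and the chosen multiplicities are assumptions, not from the paper); it also verifies that each seasonal block $A_j$ satisfies $A_j^S = I$:

```python
import numpy as np

def block_diag(*Ms):
    """Stack square blocks along the diagonal."""
    dim = sum(M.shape[0] for M in Ms)
    out = np.zeros((dim, dim))
    i = 0
    for M in Ms:
        k = M.shape[0]
        out[i:i + k, i:i + k] = M
        i += k
    return out

def seasonal_A(S, c, A_stable):
    """A = diag(A_0, A_1, ..., A_{S/2}, A_stable) with unit root
    multiplicities c = [c_0, ..., c_{S/2}] (illustrative)."""
    blocks = [np.eye(c[0])]                     # A_0 = I, unit root z = 1
    for j in range(1, S // 2):                  # rotation blocks, z = exp(+-i w_j)
        w, cj = 2 * np.pi * j / S, c[j]
        blocks.append(np.block([[np.cos(w) * np.eye(cj),  np.sin(w) * np.eye(cj)],
                                [-np.sin(w) * np.eye(cj), np.cos(w) * np.eye(cj)]]))
    blocks.append(-np.eye(c[S // 2]))           # A_{S/2} = -I, unit root z = -1
    return block_diag(*blocks, A_stable)

# Quarterly example: S = 4, one unit root at each frequency, stable part 0.5*I.
A = seasonal_A(4, [1, 1, 1], 0.5 * np.eye(2))
n_unit = 1 + 2 + 1                              # sum of delta_j * c_j
assert np.allclose(np.linalg.matrix_power(A[:n_unit, :n_unit], 4), np.eye(n_unit))
```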

Using this notation the assumptions on the data generating process (dgp) in this paper can be stated as follows:

**Assumption 1.** $(y_t)_{t\in\mathbb{Z}}$ *has a minimal state space representation* $(A_\circ, C_\circ, K_\circ)$, $A_\circ \in \mathbb{R}^{n\times n}$, *of the form* (2) *with minimal* $(A_{\circ,\bullet}, C_{\circ,\bullet}, K_{\circ,\bullet})$, $A_{\circ,\bullet} \in \mathbb{R}^{n_\bullet\times n_\bullet}$, *in echelon canonical form where* $c = n - n_\bullet > 0$.

*Furthermore the stability assumption* $|\lambda_{max}(A_{\circ,\bullet})| < 1$ *and the strict minimum-phase condition* $\rho_0 := |\lambda_{max}(A_\circ - K_\circ C_\circ)| < 1$ *hold.*

*The state at time* $t = 1$ *is given by* $x_1 = [x_{1,0}', \dots, x_{1,S/2}', x_{1,\bullet}']'$ *where* $x_{1,j} \in \mathbb{R}^{\delta_j c_j}$ *(for* $\delta_j = 2$, $0 < j < S/2$, *and* $\delta_j = 1$ *else) is deterministic and* $x_{1,\bullet} = \sum_{j=1}^{\infty} A_{\circ,\bullet}^{j-1} K_{\circ,\bullet}\varepsilon_{1-j}$ *is such that* $(x_{t,\bullet})_{t\in\mathbb{Z}}$ *is stationary.*

*The noise process* $(\varepsilon_t)_{t\in\mathbb{Z}}$ *is assumed to be a strictly stationary ergodic martingale difference sequence with respect to the filtration* $\mathcal{F}_t$ *with zero conditional mean* $\mathbb{E}(\varepsilon_t|\mathcal{F}_{t-1}) = 0$*, deterministic conditional variance* $\mathbb{E}(\varepsilon_t\varepsilon_t'|\mathcal{F}_{t-1}) = \Omega > 0$ *and finite fourth moments.*

Due to the block diagonal form of *A* the state equations are in a convenient form such that partitioning the state vector accordingly as

$$\mathbf{x}\_{t} = \begin{pmatrix} \mathbf{x}\_{t,0} \\ \mathbf{x}\_{t,1} \\ \vdots \\ \mathbf{x}\_{t,S/2} \\ \mathbf{x}\_{t,\bullet} \end{pmatrix},\tag{3}$$

the blocks $(x_{t,j})_{t\in\mathbb{Z}}$, $x_{t,j} \in \mathbb{R}^{\delta_j c_j}$, for $c_j > 0$ are unit root processes with state space unit root structure $\{(\omega_j,(c_j))\}$. Finally $(x_{t,\bullet})_{t\in\mathbb{Z}}$ is stationary due to the assumptions on $x_{1,\bullet}$. If $(\tilde y_t)_{t\in\mathbb{N}}$ denotes a different solution to the state space equations corresponding to $\tilde x_1$, then (for $t > 1$)

$$\begin{array}{rcl}
\tilde y_t - y_t &=& CA^{t-1}(\tilde x_1 - x_1)\\
&=& \displaystyle\sum_{j=0}^{S/2} C_j A_j^{t-1}(\tilde x_{1,j} - x_{1,j}) + C_\bullet A_\bullet^{t-1}(\tilde x_{1,\bullet} - x_{1,\bullet}).
\end{array}$$

Note that $C_j A_j^{t-1} z_{12} = \cos(\omega_j t)z_1 + \sin(\omega_j t)z_2$, $0 < j < S/2$ (for appropriate vectors $z_{12}$, $z_1$, $z_2$),

$$C_0 A_0^{t-1} = C_0, \quad C_{S/2}A_{S/2}^{t-1} = (-1)^{t-1}C_{S/2}.$$

Therefore the sum $\sum_{j=0}^{S/2} C_j A_j^{t-1}(\tilde x_{1,j} - x_{1,j})$ can be modeled using a constant and seasonal dummies. The term $C_\bullet A_\bullet^{t-1}(\tilde x_{1,\bullet} - x_{1,\bullet})$ tends to zero at an exponential rate as $t \to \infty$ and hence does not influence the asymptotics.

Assumption 1 implies that the yearly difference

$$\begin{array}{rcl} y_t - y_{t-S} &=& CA^S x_{t-S} + \varepsilon_t + \displaystyle\sum_{i=1}^{S} CA^{i-1}K\varepsilon_{t-i} - Cx_{t-S} - \varepsilon_{t-S}\\
&=& (CA^S - C)x_{t-S} + v_t = (C_\bullet A_\bullet^S - C_\bullet)x_{t-S,\bullet} + v_t \end{array}$$

is a stationary VARMA process, where $v_t = \varepsilon_t + \sum_{i=1}^{S} CA^{i-1}K\varepsilon_{t-i} - \varepsilon_{t-S}$, since $A_j^S = I_{\delta_j c_j}$. Thus a process satisfying Assumption 1 is a unit root process in the sense of [15]. Note that we do not assume that all unit roots are present, such that the spectral density of the stationary process $(y_t - y_{t-S})_{t\in\mathbb{Z}}$ may contain zeros due to overdifferencing; hence the process is potentially not stably invertible. The special form of $A_0$ implies that $I(1)$ processes are a special case of our dgp while $I(d)$, $d > 1$, $d \in \mathbb{N}$, processes are not allowed for.
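The cancellation $A_j^S = I_{\delta_j c_j}$ behind this argument can be verified numerically: the columns of $CA^S - C$ corresponding to the unit-root part of the state vanish, so the yearly difference only loads on $x_{t-S,\bullet}$. The following is an illustrative sketch with an assumed quarterly system and a randomly drawn observation matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
S = 4

# Quarterly A as in (2): z = 1, rotation by pi/2 (z = +-i), z = -1, stable block.
A = np.zeros((6, 6))
A[0, 0] = 1.0
A[1:3, 1:3] = np.array([[np.cos(np.pi / 2), np.sin(np.pi / 2)],
                        [-np.sin(np.pi / 2), np.cos(np.pi / 2)]])
A[3, 3] = -1.0
A[4:, 4:] = 0.5 * np.eye(2)        # stable part A_bullet

C = rng.standard_normal((3, 6))    # arbitrary observation matrix

D = C @ np.linalg.matrix_power(A, S) - C
assert np.allclose(D[:, :4], 0)    # unit-root columns cancel: A_j^S = I
assert not np.allclose(D[:, 4:], 0)  # stable columns generally do not
```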

#### **3. Canonical Variate Analysis**

The main idea of CVA is that, given the state, the system equations (1) are linear in the system matrices. Therefore, based on an estimate of the state sequence, the system can be estimated using least squares regression. The estimate of the state is based on the following equation (for details see for example [17]):

Let $Y_{t,f}^+ := [y_t', y_{t+1}', \dots, y_{t+f-1}']'$ denote the vector of stacked observations for some integer $f \ge n$ and let $E_{t,f}^+ := [\varepsilon_t', \varepsilon_{t+1}', \dots, \varepsilon_{t+f-1}']'$. Further define $Y_{t,p}^- := [y_{t-1}', \dots, y_{t-p}']'$. Then (for $t > p$)

$$\begin{array}{rcl} Y_{t,f}^+ &=& \mathcal{O}_f x_t + \mathcal{E}_f E_{t,f}^+ = \mathcal{O}_f\mathcal{K}_p Y_{t,p}^- + \mathcal{O}_f(A_\circ - K_\circ C_\circ)^p x_{t-p} + \mathcal{E}_f E_{t,f}^+\\
&=& \beta_1 Y_{t,p}^- + N_{t,f}^+ \end{array} \tag{4}$$

where $\mathcal{K}_p := [K_\circ, \bar A_\circ K_\circ, \bar A_\circ^2 K_\circ, \dots, \bar A_\circ^{p-1}K_\circ]$ for $\bar A_\circ := A_\circ - K_\circ C_\circ$ and $\mathcal{O}_f := [C_\circ', A_\circ'C_\circ', \dots, (A_\circ^{f-1})'C_\circ']'$. The strict minimum-phase assumption implies $\bar A_\circ^p \to 0$ for $p \to \infty$.

Let $\langle a_t, b_t\rangle := T^{-1}\sum_{t=p+1}^{T-f+1} a_t b_t'$ for sequences $(a_t)_{t\in\mathbb{N}}$ and $(b_t)_{t\in\mathbb{N}}$. Then an estimate of $\beta_1$ is obtained from the reduced rank regression (RRR) $Y_{t,f}^+ = \beta_1 Y_{t,p}^- + N_{t,f}^+$ under the rank constraint $\mathrm{rank}(\beta_1) = n$. This results in the estimate $\hat{\mathcal{O}}_f\hat{\mathcal{K}}_p := [(\hat\Xi_f^+)^{-1}\hat U_n\hat S_n][\hat V_n'(\hat\Xi_p^-)^{-1}]$ of $\beta_1$ using the singular value decomposition (SVD)

$$\hat\Xi_f^+\hat\beta_1\hat\Xi_p^- = \hat U\hat S\hat V' = \hat U_n\hat S_n\hat V_n' + \hat R_n.$$

Here $\hat\beta_1 = \langle Y_{t,f}^+, Y_{t,p}^-\rangle\langle Y_{t,p}^-, Y_{t,p}^-\rangle^{-1}$ denotes the unrestricted least squares estimate of $\beta_1$ and the weighting matrices are defined as

$$\hat\Xi_f^+ := \langle Y_{t,f}^+, Y_{t,f}^+\rangle^{-1/2}, \quad \hat\Xi_p^- := \langle Y_{t,p}^-, Y_{t,p}^-\rangle^{1/2}. \tag{5}$$

Here the symmetric matrix square root is used. The particular choice, however, is not of importance and other square roots such as Cholesky factors could be used. $\hat U_n \in \mathbb{R}^{fs\times n}$ denotes the matrix whose columns are the left singular vectors corresponding to the $n$ largest singular values, which are the diagonal entries of $\hat S_n := \mathrm{diag}(\hat\sigma_1, \hat\sigma_2, \dots, \hat\sigma_n)$, $\hat\sigma_1 \ge \dots \ge \hat\sigma_n > 0$, and $\hat V_n \in \mathbb{R}^{ps\times n}$ contains the corresponding right singular vectors as its columns. $\hat R_n$ denotes the approximation error.

The system estimate $(\hat A, \hat C, \hat K)$ is then obtained using the estimated state $\hat x_t := \hat{\mathcal{K}}_p Y_{t,p}^-$, $t = p+1, \dots, T+1$, via regression in the system equations.
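The steps above (stacking, whitening, SVD truncation, state estimation, regression) can be sketched as follows. This is a minimal illustrative implementation of the described procedure, not the authors' code; it omits refinements such as mean or seasonal dummy corrections:

```python
import numpy as np

def cva(y, f, p, n):
    """Sketch of the CVA steps described in the text.
    y: (T, s) data array; f, p: future/past horizons; n: assumed order."""
    T, s = y.shape
    rows = range(p, T - f + 1)   # indices t with a full past and future window
    # Stacked future Y+_{t,f} and past Y-_{t,p} (most recent past first).
    Yf = np.column_stack([np.concatenate([y[t + i] for i in range(f)]) for t in rows])
    Yp = np.column_stack([np.concatenate([y[t - i] for i in range(1, p + 1)]) for t in rows])
    N = Yf.shape[1]
    Gff, Gpp, Gfp = Yf @ Yf.T / N, Yp @ Yp.T / N, Yf @ Yp.T / N

    def sqrtm(M):
        w, V = np.linalg.eigh(M)           # symmetric matrix square root
        return V @ np.diag(np.sqrt(w)) @ V.T

    Wf = np.linalg.inv(sqrtm(Gff))         # Xi_f^+
    Wp = sqrtm(Gpp)                        # Xi_p^-
    beta1 = Gfp @ np.linalg.inv(Gpp)       # unrestricted LS estimate
    U, sv, Vt = np.linalg.svd(Wf @ beta1 @ Wp)
    Kp = Vt[:n] @ np.linalg.inv(Wp)        # estimated K_p
    x = Kp @ Yp                            # (n, N) estimated states
    # Regressions in the system equations y_t = C x_t + e_t, x_{t+1} = A x_t + K e_t.
    yt = np.column_stack([y[t] for t in rows])
    C = yt @ x.T @ np.linalg.inv(x @ x.T)
    e = yt - C @ x
    Z = np.vstack([x[:, :-1], e[:, :-1]])
    AK = x[:, 1:] @ Z.T @ np.linalg.inv(Z @ Z.T)
    return AK[:, :n], C, AK[:, n:], sv     # A, C, K, singular values
```

The singular values returned are the sample canonical correlations between past and future, which also feed the rank tests discussed later in the paper.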

In the algorithm a specific decomposition of the rank $n$ matrix $\hat{\mathcal{O}}_f\hat{\mathcal{K}}_p$ into the two factors $\hat{\mathcal{O}}_f$ and $\hat{\mathcal{K}}_p$ is chosen such that $\hat{\mathcal{K}}_p\hat\Xi_p^-(\hat\Xi_p^-)'\hat{\mathcal{K}}_p' = I_n$. Every other decomposition of $\hat{\mathcal{O}}_f\hat{\mathcal{K}}_p$ produces an estimated state sequence in a different coordinate system, leading to a different observationally equivalent representation of the same transfer function estimator. Therefore, with respect to consistency of the transfer function estimator it is sufficient to show that there exists a factorization of $\hat{\mathcal{O}}_f\hat{\mathcal{K}}_p$ leading to convergent system matrix estimators $(\tilde A, \tilde C, \tilde K)$, even if this factorization cannot be used in actual computations, as it requires information not known at the time of estimation.

In order to generate a consistent initial guess for subsequent quasi likelihood optimization in the set of all state space systems corresponding to processes with state space unit root structure $\Omega_S := \{(\omega_0,(c_0)), \dots, (\omega_{S/2},(c_{S/2}))\}$, however, we will derive a realizable (for known integers $c_j$ and matrices $E_j$ such that $E_j'C_{\circ,j,\mathbb{C}} = I_{c_j}$) consistent system estimate. To this end note that consistency of the transfer function implies (see for example [23]) that the eigenvalues $\tilde\lambda_l$ of $\hat A$ converge (in a specific sense) to the eigenvalues $\lambda_j$ of $A_\circ$. Therefore, transforming $\hat A$ into complex Jordan normal form (where $\hat A$ is almost surely diagonalizable) and ordering the eigenvalues such that the groups of eigenvalues $\tilde\lambda_l(j)$, $l = 1, \dots, c_j$, converging to $\lambda_j$ are grouped together, we obtain a realizable system $(\check A, \check C, \check K)$ where the diagonal blocks of the block diagonal matrix $\check A$ corresponding to the unit roots converge to a diagonal matrix with the eigenvalues $z_j$ on the diagonal:

$$\check A_{j,\mathbb{C}} = \begin{bmatrix} \tilde\lambda_1(j) & 0 & \dots & 0\\ 0 & \tilde\lambda_2(j) & \ddots & \vdots\\ \vdots & \ddots & \ddots & 0\\ 0 & \dots & 0 & \tilde\lambda_{c_j}(j)\end{bmatrix} \rightarrow A_{j,\mathbb{C}} = \begin{bmatrix} z_j & 0 & \dots & 0\\ 0 & z_j & \ddots & \vdots\\ \vdots & \ddots & \ddots & 0\\ 0 & \dots & 0 & z_j\end{bmatrix}.$$

Replacing $\check A_{j,\mathbb{C}}$ by the limit $A_{j,\mathbb{C}}$ and transforming the estimates to the real Jordan normal form, we obtain estimates $(\breve A, \check C, \check K)$ corresponding to unit root processes with state space unit root structure $\Omega_S$.
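This replacement of the estimated eigenvalues nearest to the unit roots by their limits can be sketched as follows. The code is our own illustrative sketch under the assumption that the multiplicities $c_j$ are known; the subsequent normalization via the matrices $E_j$ is omitted:

```python
import numpy as np

def snap_unit_roots(A_hat, S, c):
    """Replace the c_j eigenvalues of A_hat closest to each unit root
    z_j = exp(2*pi*i*j/S) (and its conjugate) by the unit root itself.
    c: dict mapping frequency index j -> multiplicity c_j (assumed known)."""
    lam, V = np.linalg.eig(A_hat)          # A_hat assumed diagonalizable
    lam = np.asarray(lam, dtype=complex).copy()
    targets = []
    for j, cj in c.items():
        z = np.exp(2j * np.pi * j / S)
        targets.append((z, cj))
        if 0 < j < S / 2:                  # conjugate unit root as well
            targets.append((np.conj(z), cj))
    used = np.zeros(len(lam), dtype=bool)
    for z, cj in targets:
        d = np.abs(lam - z)
        d[used] = np.inf                   # never snap an eigenvalue twice
        idx = np.argsort(d)[:cj]
        lam[idx] = z
        used[idx] = True
    # Snapping conjugate pairs symmetrically keeps the product real.
    return (V @ np.diag(lam) @ np.linalg.inv(V)).real
```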

Note, however, that this representation does not necessarily converge, as perturbation analysis only implies convergence of the eigenspaces. Therefore, in a final step the estimate $(\breve A, \check C, \check K)$ is transformed such that we obtain convergence of the system matrix estimates. In the class of observationally equivalent systems with the matrix

$$A_{\mathbb{C}} = \mathrm{diag}(A_{0,\mathbb{C}}, A_{1,\mathbb{C}}, \overline{A_{1,\mathbb{C}}}, \dots, A_{S/2-1,\mathbb{C}}, \overline{A_{S/2-1,\mathbb{C}}}, A_{S/2,\mathbb{C}}, A_\bullet), \quad A_{j,\mathbb{C}} = I_{c_j}z_j,$$

block diagonal transformations of the form $\mathcal{T} = \mathrm{diag}(\mathcal{T}_0, \mathcal{T}_1, \overline{\mathcal{T}_1}, \dots, \mathcal{T}_{S/2}, I)$ do not change the matrix $\breve A_{\mathbb{C}}$. Here the basis of the stable subsystem can be chosen such that the corresponding transformed $(\breve A_\bullet, \breve C_\bullet, \breve K_\bullet)$ is uniquely defined using an overlapping echelon form (see [22], Section 2.6). The impact of such transformations on the blocks of $C$ is given by $\check C_{j,\mathbb{C}}\mathcal{T}_j^{-1}$. Therefore, if for each $j = 0, \dots, S/2$ a matrix $E_j \in \mathbb{C}^{s\times c_j}$ is known such that $E_j'C_{\circ,j,\mathbb{C}} \in \mathbb{C}^{c_j\times c_j}$ is nonsingular, the restriction $E_j'\breve C_{j,\mathbb{C}} = I_{c_j}$ picks a unique representative $(\breve A, \breve C, \breve K)$ of the class of systems observationally equivalent to $(\breve A, \check C, \check K)$.

Note that this estimate $(\breve A, \breve C, \breve K)$ is realizable if the integers $c_j$ (needed to identify the $c_j$ eigenvalues of $\hat A$ closest to $z_j$), the matrices $E_j$ (needed to fix a basis for $x_{t,j}$) and the index corresponding to the overlapping echelon form for the stable part are known. Furthermore, this estimate corresponds to a process with state space unit root structure $\Omega_S$ and hence can be used as a starting value for quasi likelihood maximization.

Finally, in this section it should be noted that the estimate of the state $\hat{x}_t$ here mainly serves the purpose of obtaining an estimator for the state space system. Based on this estimate, Kalman filtering techniques can be used to obtain different estimates of the state sequence. The relation between these different estimates is unclear, and so is their usage for inference. For this paper the state estimates $\hat{x}_t$ are only an intermediate step in the CVA algorithm.

#### **4. Asymptotic Properties of the System Estimators**

As follows from the last section, the central step in the CVA procedure is an RRR problem involving stationary and nonstationary components. The asymptotic properties of the solution to such RRR problems are derived in Theorem 3.2 of [24]. Using these results the following theorem can be proved (see Appendix C.1):
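As a minimal illustration of an RRR step of this kind, the sketch below solves a generic reduced rank regression of "future" on "past" observations by whitening with the sample covariances and truncating the SVD; the dimensions, the simulated stationary data, and all variable names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated rank-deficient regression: Yf_t = B Yp_t + noise, rank(B) = n.
T, df, dp, n = 500, 3, 4, 2
B_true = rng.standard_normal((df, n)) @ rng.standard_normal((n, dp))
Yp = rng.standard_normal((T, dp))
Yf = Yp @ B_true.T + 0.1 * rng.standard_normal((T, df))

Sff = Yf.T @ Yf / T
Spp = Yp.T @ Yp / T
Sfp = Yf.T @ Yp / T

# Whiten both sides (Wf' Sff Wf = I, Wp' Spp Wp = I), then SVD and truncate.
Wf = np.linalg.cholesky(np.linalg.inv(Sff))
Wp = np.linalg.cholesky(np.linalg.inv(Spp))
U, sv, Vt = np.linalg.svd(Wf.T @ Sfp @ Wp)        # sv: canonical correlations

# Rank-n coefficient estimate and the associated estimated state components.
B_rrr = np.linalg.inv(Wf.T) @ U[:, :n] @ np.diag(sv[:n]) @ Vt[:n, :] @ Wp.T
x_hat = Yp @ Wp @ Vt[:n, :].T                     # analogous to the CVA state
```

The singular values of the whitened cross-covariance are the sample canonical correlations; in the nonstationary setting of the paper the leading ones converge to unity, which is what the tests of Section 5 exploit.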

**Theorem 1.** *Let the process $(y_t)_{t \in \mathbb{Z}}$ be generated according to Assumption 1. Let $(\hat{A}, \hat{C}, \hat{K})$ denote the CVA estimators of the system matrices using the assumption of correctly specified order $n$, with $f \geq n$ finite and not depending on the sample size, and $p = o((\log T)^{\bar{a}})$ for some real $0 < \bar{a} < \infty$, $p \geq -d \log T / \log \rho_0$ for some $d > 1$, where $0 < \rho_0 = |\lambda_{max}(\mathcal{A}_\circ - \mathcal{K}_\circ \mathcal{C}_\circ)| < 1$. Let $(\mathcal{A}_\circ, \mathcal{C}_\circ, \mathcal{K}_\circ)$ be in the form given in (2), where $(\mathcal{A}_{\circ,\bullet}, \mathcal{C}_{\circ,\bullet}, \mathcal{K}_{\circ,\bullet})$ is in echelon canonical form and for each $j = 0, \dots, S/2$ there exists a row selector matrix $E_j \in \mathbb{R}^{s \times c_j}$ such that $E_j' \mathcal{C}_{\circ,j,\mathbb{C}}$ is non-singular. Then for some integer $a < \infty$:*

*(I) $\hat{C} \hat{A}^j \hat{K} - \mathcal{C}_\circ \mathcal{A}_\circ^j \mathcal{K}_\circ = O_P((\log T)^a / \sqrt{T})$ for each $j \geq 0$.*

*(II) Using $D_x = \text{diag}(T^{-1} I_c, T^{-1/2} I_{n-c})$, where $c = \sum_{j=0}^{S/2} c_j \delta_j$, we have*

$$(\breve{\mathcal{A}} - \mathcal{A}_\circ)D_x^{-1} = O_P((\log T)^a), \quad \sqrt{T}(\breve{\mathcal{K}} - \mathcal{K}_\circ) = O_P((\log T)^a), \quad (\breve{\mathcal{C}} - \mathcal{C}_\circ)D_x^{-1} = O_P((\log T)^a).$$

*(III) If the noise is assumed to be an iid sequence, then results (I) and (II) hold almost surely.*

Besides stating consistency in the seasonal integration case, the theorem also improves on the results of [20] in the I(1) case by showing that no adaptation of CVA is needed in order to obtain consistent estimators of the impulse response sequence or the system matrices. Note that this consistency result for the impulse response sequence concerns both the short- and the long-run dynamics. In particular it implies that short-run prediction coefficients are consistent. Moreover, in contrast to [20], the theorem establishes strong rather than weak consistency. (II) establishes orders of convergence which, however, apply only to a transformed system that requires knowledge of the integers $c_j$ and matrices $E_j$ to be realized. No tight bounds for the integer $a$ are derived, since they do not seem to be of much value.

Note that the assumptions on the innovations rule out conditionally heteroskedastic processes. However, since the proof mostly relies on convergence properties for covariance estimators for stationary processes and continuous mapping theorems for integrated processes, it appears likely that the results can be extended to conditionally heteroskedastic processes as well. For the stationary cases this follows directly from the arguments in [18], while for integrated processes results (using different assumptions on the innovations) given for example in [25] can be used. The conditions of [25] hold for example in a large number of GARCH type processes, see [26]. The combination of the different sets of assumptions on the innovations is not straightforward, however, and hence would further complicate the proofs. We refrain from including them.

It is worth pointing out that due to the block diagonal structure of $\mathcal{A}_\circ$ the result $(\breve{\mathcal{C}} - \mathcal{C}_\circ)D_x^{-1} = O_P((\log T)^a)$ implies consistency of the blocks $\breve{C}_j$ corresponding to the unit root $z_j$ (or the corresponding complex pair) of order almost $T^{-1}$. Using the complex valued canonical form this implies consistent estimation of $\mathcal{C}_{\circ,j,\mathbb{C}}$ by the corresponding $\breve{C}_{j,\mathbb{C}}$. In the canonical form this matrix determines the cointegrating relations (both the static as well as the dynamic ones; for details see [15]) as the unitary complement to this matrix. It thus follows that CVA delivers estimators for the cointegrating relations at the various unit roots that are (super-)consistent. In fact, the proof can be extended to show convergence in distribution of $(\breve{\mathcal{C}} - \mathcal{C}_\circ)D_x^{-1}$. This distribution could be used in order to derive tests for cointegrating relations. However, preliminary simulations indicate that these estimates and hence the corresponding tests are not optimal and can be improved upon by quasi maximum likelihood estimation in the VARMA setting initialized by the CVA estimates. Therefore we refrain from presenting these results.

Note that the assumptions impose the restriction $\rho_0 > 0$, excluding VAR systems. This is done solely for stating a uniform lower bound on the increase of $p$ as a function of $T$. This bound is related to the lag length selection achieved using BIC, see [27]. In the VAR case the lag length estimator using BIC will converge to the true order and thus remain finite. All results hold true if in the VAR case a fixed (that is, independent of the sample size) $p \geq n$ is used.

#### **5. Inference Based on the Subspace Estimators**

Besides consistency of the impulse response sequence, also the specification of the integers $c_0, \dots, c_{S/2}$ is of interest. First, following [20] this information can be obtained by detecting the unity singular values in the RRR step of the procedure. Second, from the system representation (2) it is clear that the location of the unit roots is determined by the eigenvalues of $\mathcal{A}_\circ$ on the unit circle: the integers $c_j$ denote the number of eigenvalues at the corresponding locations on the unit circle (provided the eigenvalues are simple). Due to perturbation theory (see for example Lemma A2) we know that the eigenvalues of $\hat{\mathcal{A}}$ will converge (for $T \to \infty$) to the eigenvalues of $\mathcal{A}_\circ$, and the distribution of the mean of all eigenvalues of $\hat{\mathcal{A}}$ converging to an eigenvalue of $\mathcal{A}_\circ$ can be derived based on the distribution of the estimation error $\hat{\mathcal{A}} - \mathcal{A}_\circ$. This can be used to derive tests on the number of eigenvalues at a particular location on the unit circle. Third, if $n \leq s$ the state process is a VAR(1) process and hence in some cases allows for inference on the number of cointegrating relations and thus also on the integers $c_j$ as outlined in [4]. Tests based on these three arguments will be discussed below.

**Theorem 2.** *Under the assumptions of Theorem 1 the test statistic $T \sum_{i=1}^{c}(1 - \hat{\sigma}_i^2)$ converges in distribution to the random variable*

$$Z = \operatorname{tr}\left[\mathbb{E}(\tilde{\varepsilon}_{t,\perp}\tilde{\varepsilon}_{t,\perp}')\left(\int_0^1 W(r)W(r)'\,dr\right)^{-1}\right]$$

*where $\tilde{\varepsilon}_{t,\perp} = \tilde{\varepsilon}_{t,1} - \mathbb{E}(\tilde{\varepsilon}_{t,1}\tilde{\varepsilon}_{t,\bullet}')(\mathbb{E}\tilde{\varepsilon}_{t,\bullet}\tilde{\varepsilon}_{t,\bullet}')^{-1}\tilde{\varepsilon}_{t,\bullet}$ (for the definition of $\tilde{\varepsilon}_{t,1}$ and $\tilde{\varepsilon}_{t,\bullet}$ see the proof in Appendix C.2) and where $W(r)$ denotes a $c$-dimensional Brownian motion with variance*

$$\sum\_{i=0}^{S-1} \mathcal{A}\_u^i \mathcal{K}\_u \Omega \mathcal{K}\_u' (\mathcal{A}\_u^i)'$$

*with $\mathcal{A}_u$ denoting the $c \times c$ leading submatrix of $\mathcal{A}$ and $\mathcal{K}_u$ denoting the submatrix of $\mathcal{K}$ composed of the first $c$ rows, such that $(\mathcal{A}_u, \mathcal{C}_u, \mathcal{K}_u)$ denotes the unstable subsystem.*

The theorem is proved in Appendix C.2, where also the many nuisance parameters of the limiting random variable are explained and defined. The proof also corrects an error in Theorem 4 of [20], where the wrong distribution is given since the second order terms were neglected.

As the distribution is not pivotal and in particular contains information that is unknown when performing the RRR step, it is not of much interest for direct application. Nevertheless the order of convergence allows for the derivation of simple consistent estimators of the number of common trends: let $\hat{c}_T$ denote the number of singular values calculated in the RRR that exceed $\sqrt{1 - h(T)/T}$ for arbitrary $h(T) \to \infty$, $h(T) < T$, $h(T)/T \to 0$ for $T \to \infty$. Then it is a direct consequence of Theorem 2 in combination with $\hat{\sigma}_j \to \sigma_j < 1$, $j > c$, that $\hat{c}_T \to c$ in probability, implying consistent estimation of $c$. Based on these results also estimators for $c$ could be derived, for example along the lines of [28]. However, as [29] shows, such estimators have not performed well in simulations and thus are not considered subsequently.
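A minimal sketch of this counting rule follows; the choice $h(T) = \log T$ and the singular values below are illustrative assumptions.

```python
import numpy as np

# Count the canonical correlations from the RRR step that exceed the
# threshold sqrt(1 - h(T)/T), here with the illustrative choice h(T) = log(T).
def count_common_trends(singular_values, T):
    threshold = np.sqrt(1.0 - np.log(T) / T)
    return int(np.sum(np.asarray(singular_values) > threshold))

T = 1000
sv = [0.9999, 0.9995, 0.8, 0.5]   # two values very close to one
c_hat = count_common_trends(sv, T)
```

For $T = 1000$ the threshold is roughly $0.9965$, so the rule counts the two near-unity singular values and returns $\hat{c}_T = 2$.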

The singular values do not provide information on the location of the unit roots. This additional information is contained in the eigenvalues of the matrix $\mathcal{A}_\circ$:

**Theorem 3.** *Under the assumptions of Theorem 1 let $\hat{\lambda}_i(m)$, $i = 1, \dots, c_m$ denote the $c_m$ eigenvalues of $\hat{\mathcal{A}}$ closest to the unit root $z_m$, $|z_m| = 1$. Then defining $\hat{\mu}_m = \sum_{i=1}^{c_m}(\hat{\lambda}_i(m) - z_m)$ we obtain*

$$T\hat{\mu}_m \stackrel{d}{\rightarrow} \operatorname{tr}\left[\left(\int_0^1 B(r)B(r)'\,dr\right)^{-1}\int_0^1 B(r)\,dB(r)'\right],$$

*where $B(r)$ denotes a $c_m$-dimensional Brownian motion with zero expectation and variance $I_{c_m}$ for $z_m = \pm 1$ and a complex Brownian motion with expectation zero and variance equal to the identity matrix else.*

*Further, if $\tilde{\mathcal{A}} := \langle x_{t+1}, x_t \rangle \langle x_t, x_t \rangle^{-1}$ using the true state $x_t$ and $\tilde{\mu}_m = \sum_{i=1}^{c_m}(\tilde{\lambda}_i(m) - z_m)$, where $\tilde{\lambda}_i(m)$, $i = 1, \dots, c_m$ denote the $c_m$ eigenvalues of $\tilde{\mathcal{A}}$ closest to $z_m$, then $T(\hat{\mu}_m - \tilde{\mu}_m) = o_P(1)$.*

Therefore the estimated eigenvalues can be used in order to obtain a test on the number of common trends at a particular frequency for each frequency separately. The test distribution is obtained as the limit to

$$T \operatorname{tr}\left[\left\langle \mathcal{K}_{\circ,m,\mathbb{C}}\,\varepsilon_t,\; x_{t,m,\mathbb{C}}\right\rangle \left\langle x_{t,m,\mathbb{C}},\; x_{t,m,\mathbb{C}}\right\rangle^{-1}\right]$$

where $x_{t,m,\mathbb{C}} = z_m x_{t-1,m,\mathbb{C}} + \mathcal{K}_{\circ,m,\mathbb{C}}\varepsilon_{t-1}$, $x_{1,m,\mathbb{C}} = 0$. The distribution thus does not depend on the presence of other unit roots or stationary components of the state. Furthermore it can be seen that it is independent of the noise variance and the matrix $\mathcal{K}_{\circ,m,\mathbb{C}}$. Hence critical values are easily obtained from simulations. Also note that the limiting distribution is identical for all complex unit roots.
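Since the limit only involves standard Brownian motions, critical values can be tabulated by Monte Carlo. The sketch below approximates draws from $\operatorname{tr}[(\int B B'\,dr)^{-1}\int B\,dB']$ by a scaled random walk; the discretization step count, replication number, and restriction to a real unit root with $c_m = 1$ are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)

# Monte Carlo approximation of tr[(int B B' dr)^{-1} int B dB'] via a
# discretized Brownian motion (illustrative step and replication counts).
def simulate_limit(c_m, steps=400, reps=2000):
    stats = np.empty(reps)
    for r in range(reps):
        dB = rng.standard_normal((steps, c_m)) / np.sqrt(steps)
        B = np.cumsum(dB, axis=0)
        Blag = np.vstack([np.zeros((1, c_m)), B[:-1]])
        M = Blag.T @ Blag / steps        # approximates int B B' dr
        IB = Blag.T @ dB                 # approximates int B dB'
        stats[r] = np.trace(np.linalg.solve(M, IB))
    return stats

draws = simulate_limit(c_m=1)
# a 5% critical value for a statistic based on T*|mu_hat_m| would use |draws|
crit = np.quantile(np.abs(draws), 0.95)
```

For $c_m = 1$ and $z_m = 1$ this reproduces the familiar Dickey-Fuller-type coefficient distribution, which is skewed towards negative values.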

Therefore, for each seasonal unit root location $z_m$ we can order the eigenvalues of the estimated matrix $\hat{\mathcal{A}}$ by increasing distance to $z_m$. Then, starting from the hypothesis $H_0: c_m = \bar{c}$ (for a reasonable $\bar{c}$ obtained, e.g., from a plot of the eigenvalues), one can perform the test with statistic $T\hat{\mu}_m$. If the test rejects, the hypothesis $H_0: c_m = \bar{c} - 1$ is tested, and so on, until the hypothesis is not rejected anymore or $H_0: c_m = 1$ is reached. This is then the last test. If $H_0$ is rejected again, no unit root is found at this location. Otherwise we do not have evidence against $c_m = 1$. In any case, the system needs to be estimated only once and the calculation of the test statistics is easy even for all seasonal unit roots jointly.
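The sequential procedure can be sketched as follows; the eigenvalues, the critical values in `crit`, and the use of $T|\hat{\mu}_m|$ as the rejection criterion are placeholders for illustration.

```python
import numpy as np

# Top-down sequence of tests at a unit root location z_m (illustrative
# critical values; a real application would use simulated quantiles).
def sequential_rank_test(eigvals, z_m, T, crit):
    lam = np.asarray(eigvals)
    lam = lam[np.argsort(np.abs(lam - z_m))]     # order by distance to z_m
    c_bar = len(crit)                            # initial hypothesis H0: c_m = c_bar
    for c in range(c_bar, 0, -1):
        stat = T * np.abs(np.sum(lam[:c] - z_m))   # T * |mu_hat_m|
        if stat <= crit[c - 1]:                  # cannot reject H0: c_m = c
            return c
    return 0                                     # all hypotheses rejected

T = 500
eig = np.array([1j * (1 - 1.0 / T), 0.6 + 0.3j, -0.2])
c_m = sequential_rank_test(eig, 1j, T, crit=[3.0, 3.0])
```

With one eigenvalue very close to $i$ and the remaining ones far away, the hypothesis $c_m = 2$ is rejected while $c_m = 1$ is not, so the sketch returns $c_m = 1$.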

The third option for obtaining tests is to use the tests derived in [4] based on the JS framework for VARs. In the case $n \leq s$ the state process $x_{t+1} = \mathcal{A}x_t + \mathcal{K}\varepsilon_t$ is a seasonally integrated VAR(1) process (for $n > s$ the noise variance is singular). The corresponding VECM representation equals

$$p(L)\mathbf{x}_{t} = \sum_{m=1}^{S} \left(I_{n} - \mathcal{A}z_m\right) \mathbf{X}_{t-1}^{(m)} + \mathcal{K}\varepsilon_{t-1} = \sum_{m=1}^{S} \alpha_{m}\beta_{m}'\mathbf{X}_{t-1}^{(m)} + \mathcal{K}\varepsilon_{t-1}$$

where $z_m = \exp(\frac{2\pi m}{S}i)$, $m = 1, \dots, S$, and

$$p(L) = 1 - L^S, \qquad p_m(L) = \frac{p(L)}{1 - \overline{z_m}L}, \qquad p_t := p(L)\mathbf{x}_t = \mathbf{x}_t - \mathbf{x}_{t-S}, \qquad \mathbf{X}_t^{(m)} = -\frac{p_m(L)}{p_m(z_m)z_m}\mathbf{x}_t.$$
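To make the filters concrete, the following sketch builds the coefficients of $p_m(L)$ for $S = 4$ and checks the Lagrange-type identity $\sum_{m=1}^{S} p_m(L)L/(p_m(z_m)z_m) = L^S$ that underlies the decomposition above; the scalar implementation is an illustrative assumption.

```python
import numpy as np

S = 4
z = [np.exp(2j * np.pi * m / S) for m in range(1, S + 1)]   # z_1, ..., z_S

def pm_coeffs(m):
    # coefficients of p_m(L) = p(L) / (1 - conj(z_m) L), using the
    # factorization p(L) = 1 - L^S = prod_k (1 - conj(z_k) L)
    poly = np.array([1.0 + 0j])
    for k in range(S):
        if k != m:
            poly = np.convolve(poly, np.array([1.0, -np.conj(z[k])]))
    return poly

def poly_at(c, v):
    # evaluate a lag polynomial with coefficients c at the point v
    return sum(cf * v ** j for j, cf in enumerate(c))

# The regressor X_t^{(m)} applies the filter -p_m(L)/(p_m(z_m) z_m) to x_t.
# Summing the lag-1 versions of these filters over m must give the shift L^S.
total = np.zeros(S + 1, dtype=complex)
for m in range(S):
    filt = np.convolve(pm_coeffs(m), [0.0, 1.0])            # multiply by L
    total += filt / (poly_at(pm_coeffs(m), z[m]) * z[m])
```

The recovered coefficient vector equals that of $L^S$, confirming that the $\mathbf{X}_{t-1}^{(m)}$ terms exactly absorb the seasonal lag in the VECM representation.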

Note that in this VAR(1) setting no additional stationary regressors of the form $p(L)\mathbf{x}_{t-j}$ occur. Also no seasonal dummies are needed, but they could be added to the equation. In this setting [4] suggests to use the eigenvalues $\hat{\lambda}_i$ (ordered with increasing modulus) of the matrix (the superscript $(\cdot)^{\pi}$ denotes the residuals with respect to the remaining regressors $\mathbf{X}_{t-1}^{(j)}$, $j \neq m$)

$$
\langle \mathbf{X}_{t-1}^{(m),\pi}, p_t^{\pi} \rangle \langle p_t^{\pi}, p_t^{\pi} \rangle^{-1} \langle p_t^{\pi}, \mathbf{X}_{t-1}^{(m),\pi} \rangle \langle \mathbf{X}_{t-1}^{(m),\pi}, \mathbf{X}_{t-1}^{(m),\pi} \rangle^{-1}
$$

as the basis for a test statistic

$$
\hat{\mathcal{C}}_m := -\delta_m \sum_{i=1}^{c_m} \log(1 - \hat{\lambda}_i),
$$

where *δ<sup>m</sup>* = 2 for complex unit roots and *δ<sup>m</sup>* = 1 for real unit roots. In the *I*(1) case this leads to the familiar Johansen trace test, for seasonal unit roots a different asymptotic distribution is obtained.

**Theorem 4.** *Under the assumptions of Theorem 1 let $\hat{\mathcal{C}}_m$ be calculated based on the estimated state and let $\tilde{\mathcal{C}}_m$ denote the same statistic based on the true state. Then for $n \leq s$ it holds that $\hat{\mathcal{C}}_m - \tilde{\mathcal{C}}_m = o_P(T^{-1})$ and*

$$T\hat{\mathcal{C}}_{m} \stackrel{d}{\rightarrow} \operatorname{tr}\left[\int dB(r)B(r)' \left(\int B(r)B(r)'\,dr\right)^{-1} \int B(r)\,dB(r)'\right]$$

*where B*(*r*) *is a real Brownian motion for zm* = ±1 *or a complex Brownian motion else.*

Thus again under the null hypothesis the test statistic based on the estimated state and the one based on the true state reject jointly asymptotically with probability one. Therefore for *n* ≤ *s* the tests of JS can be used to obtain information on the number of common cycles, ignoring the fact that the estimated state is used in place of the true state process.

After presenting three distinct approaches for providing information on the number and location of unit roots, the question arises which one to use in practice. In the following a number of ideas are given in this respect.

The criterion based on the singular values given in Theorem 2 is of limited information as it only provides the overall number of unit roots. Since the limiting distribution is not pivotal it cannot be used for tests, and the choice of the cutoff value $h(T)$ is somewhat arbitrary. Nevertheless, using a relatively large value one obtains a useful upper bound on $c$ which can be included in the typical sequential procedures for tests for $c_j$.

Using the results of Theorem 4 has the advantage of using a framework that is well known to many researchers. It is remarkable that in terms of the asymptotic distributions there is no difference involved in using the estimated state in place of the true state. The assumption *n* ≤ *s*, however, is somewhat restrictive except in situations with a large *s*.

Finally the results of Theorem 3 provide simple to use tests for all unit roots, independently of the specification of the model for the remaining unit roots. Again it is remarkable that, under the null, inference is identical for known and for estimated state.

Since our estimators are not quasi maximum likelihood estimators, the question of a comparison with the usual likelihood ratio tests arises. For VAR models, simulation exercises documented in Section 7 below demonstrate that there are situations where the proposed tests outperform tests in the VAR framework. Comparisons with tests in the state space framework (or equivalently in the VARMA framework) are complicated by the fact that no results are currently available in the literature for this framework. One difference, however, is that quasi likelihood ratio tests in the VARMA setting require a full specification of the $c_j$ values for all unit roots. This introduces interdependencies such that the tests for one unit root depend on the specification of the cointegrating rank at the other roots. The interdependencies can be broken by performing tests based on alternative specifications for each unit root. The test based on Theorem 3 does not require this but can be performed based on the same estimate $\hat{A}$. This is seen as an advantage.

The question of the comparison of the empirical size in finite samples as well as power to local alternatives between the CVA based tests and tests based on quasi-likelihood ratios is left as a research question.

#### **6. Deterministic Terms**

Up to now it has been assumed that no deterministic terms appear in the model contrary to common practice. In the VAR framework dealing with trends is complicated by the usage of the VECM representation, see e.g., [30]. In the state space framework used in this paper, however, deterministic terms are easily incorporated.

**Theorem 5.** *Let the process $(y_t)_{t\in\mathbb{Z}}$ be generated according to Assumption 1 and assume that the process $(\tilde{y}_t)_{t\in\mathbb{Z}}$ is observed, where $\tilde{y}_t = y_t + \Phi d_t$ with*

$$d\_t = \begin{bmatrix} 1, & \cos(\frac{2\pi}{S}t), & \sin(\frac{2\pi}{S}t), & \cdots & (-1)^t \end{bmatrix}' \in \mathbb{R}^S$$

*and $\Phi \in \mathbb{R}^{s \times S}$.*

*Then if the* CVA *estimation is applied to*

$$\tilde{y}_t^{\pi} := \tilde{y}_t - \left(\sum_{t=1}^{T}\tilde{y}_t d_t'\right)\left(\sum_{t=1}^{T}d_t d_t'\right)^{-1}d_t, \quad t = 1, \dots, T,$$

*the results of Theorem 1 hold, i.e., the system is estimated consistently and the orders of convergence for the transformed system $(\breve{A}, \breve{C}, \breve{K})$ hold true.*

*Furthermore the convergence in distribution results in Theorems 2–4 hold true, where in the limits the Brownian motions $B(r)$ occurring in the distributions must be replaced by their demeaned versions $B(r) - \int_0^1 B(s)\,ds$.*

In this sense the results are robust to some operations typically termed preprocessing of data such as demeaning and deseasonalizing using seasonal dummies. More general preprocessing steps such as detrending or the extraction of more general deterministic terms analogous to [30] can be investigated along the same lines.
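As an illustration of this preprocessing step, the sketch below removes the deterministic terms by regressing on the trigonometric basis $d_t$; the choice $S = 4$, the random data, and the loading matrix `Phi` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Remove an intercept and seasonal terms via least squares before CVA.
S, T, s = 4, 200, 2
t = np.arange(1, T + 1)
D = np.column_stack([np.ones(T),
                     np.cos(2 * np.pi * t / S),
                     np.sin(2 * np.pi * t / S),
                     (-1.0) ** t])                 # rows are d_t' (T x S)

y = rng.standard_normal((T, s))                    # stand-in for the stochastic part
Phi = np.array([[1.0, 0.5, -0.3, 2.0],
                [0.2, 0.0, 1.0, -1.0]])            # illustrative loadings
y_obs = y + D @ Phi.T                              # observed series y_tilde

# projection residuals y_tilde^pi used as input for CVA
coef, *_ = np.linalg.lstsq(D, y_obs, rcond=None)
y_pi = y_obs - D @ coef
```

The residuals are exactly orthogonal to the deterministic regressors, so the same CVA steps can be applied to `y_pi` without modification.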

#### **7. Simulations**

The estimation of the seasonal cointegration ranks and spaces is usually carried out via quasi maximum likelihood methods that originated from the VAR model class. Typical estimators in this setting are those of [2,4,5,31]. In the first two experiments we focus on the estimation of the cointegrating spaces and the specification of the cointegration ranks in the classical situation of quarterly data and show that there are certain situations in which CVA estimators and the test in Theorem 3 possess finite sample properties superior to those of the methods above. In a third experiment the test performance is evaluated for a daily sampling rate. Moreover, the prediction accuracy of CVA is investigated as well as its robustness to innovations exhibiting behaviors often encountered at such higher sampling rates. All simulations are carried out using 1000 replications.

To investigate the practical usefulness of the proposed procedures we generate quarterly data using two VAR dgps of dimension *s* = 2 first and then two more general VARMA dgps with *s* = 8. Each pair contains dgps with different state space unit root structures

$$\{(0,(1)),(\pi/2,(c\_{\pi/2})),(\pi,(1))\},\quad c\_{\pi/2} = 1,2.$$

From all four dgps samples of size *T* ∈ {50, 100, 200, 500} are generated with initial values set to zero. Although none of the dgps contains deterministics, the data is adjusted for a constant and quarterly seasonal dummies as in [5]. For reasons of comparability, the adjustment for deterministic terms is done before estimation.

In the third experiment we generate daily data with dimension *s* = 4 from a state space system including unit roots corresponding to weekly frequencies (that is a period length of seven days). In the simulations we use several years of data (excluding new year's day to account for 52 weeks of seven days each). The first 200 observations are discarded to include the effects of different starting values. In this example the focus lies on a comparison of the prediction accuracy. Furthermore we investigate the robustness of the test procedures to conditional heteroskedasticity of the GARCH type as well as to non-normality of the innovations.

To assess the performance of specifying the cointegrating rank at unit root *z* using CVA, the following test statistic is constructed from the results in Theorem 3

$$\Lambda(c) = T|(\frac{1}{c}\sum\_{i=1}^{c}\hat{\lambda}\_i) - z|\,. \tag{6}$$

Here $\hat{\lambda}_1, \dots, \hat{\lambda}_n$ are the eigenvalues of $\hat{A}$ ordered increasingly according to the distance from $z$. Note that a similar test in [20] only uses the $c$-th largest eigenvalue, whereas here the average over the nearest $c$ eigenvalues is taken. Critical values have been obtained by simulation using large sample sizes (sample size 2000 (JS) and 5000 (CVA), 10,000 replications).
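A direct implementation of (6) is straightforward; the eigenvalues and the choices of $z$, $c$, and $T$ below are illustrative.

```python
import numpy as np

# Test statistic (6): average the c eigenvalues of A_hat nearest to z and
# scale the distance to z by the sample size T.
def Lambda(eigvals, z, c, T):
    lam = np.asarray(eigvals)
    lam = lam[np.argsort(np.abs(lam - z))]   # order by distance from z
    return T * np.abs(np.mean(lam[:c]) - z)

eig = np.array([0.999, -0.97, 0.02 + 0.98j, 0.02 - 0.98j, 0.5])
stat = Lambda(eig, 1.0, 1, 500)
```

Here the eigenvalue nearest to $z = 1$ is $0.999$, so the statistic equals $500 \cdot 0.001 = 0.5$.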

In our first two experiments usage of $\Lambda(c)$ is compared with variants of the likelihood ratio test from [2] (JS), [4] ($Q_1$), and [5] ($Q_2$, $Q_3$). $Q_1$ is Cubadda's trace test for complex-valued data, $Q_2$ takes the information at frequency $\pi/2$ into account when the analysis is carried out at frequency $3\pi/2$, and $Q_3$ iterates between $\pi/2$ and $3\pi/2$ in the alternating reduced rank regression (ARR) of [5]. For the procedure of [2] the likelihood maximization at frequency $\pi/2$ is carried out using numerical optimization (BFGS) with initial values obtained from an unrestricted regression.

All tests are evaluated by comparing the percentages of correctly detected common trends, or *hit rates*, with 0.95, the hit rate to be expected from a nominal significance level of 0.05. The testing procedure employed for all tests is the same: at each of the frequencies it is started from a null hypothesis of *s* unit roots against less than *s* unit roots. In case of rejection, *s* − 1 unit roots are tested versus less than *s* − 1 and so on, until there are zero unit roots under the alternative.

For the first two experiments the estimation performance of CVA for the simultaneous estimation of the seasonal cointegrating spaces is compared with the maximum likelihood estimates of [2,4,31] (cRRR), and also with an iterative procedure (Generalized ARR or GARR) of [5]. The comparison is carried out by means of the gap metric, measuring the distance between the true and the estimated cointegrating space as in [32]. The smaller the mean gap over all replications, the better is the estimation performance. Throughout a difference between two mean gaps or two hit rates is considered statistically significant if it is larger than twice the Monte Carlo standard error.

For all procedures used in this section, an AR lag length has to be chosen first. For CVA this can be done using the AIC as in ([33], Section 5), as is done in the third experiment.

In the first two experiments, where sample sizes are rather small, we estimate the lag length via minimization of the corrected AIC (AICc) ([34], p. 432), $\hat{k}_{AICc}$, which benefits the simulation results. For larger sample sizes the two criteria lead to the same choices. Due to the quarterly data we work with, the lag length is then chosen as $\hat{k} = \max\{\hat{k}_{AICc}, 4\}$.

Other information criteria could be chosen here. An anonymous referee also suggested the application of the Modified Akaike Information Criterion (MAIC) of [35], proposed there for the I(1)-case. In an attempt to apply it to the seasonally integrated case considered here, it performed considerably worse than the AICc. Thus we refrain from using the MAIC in the following and also omit the results of that attempt. They can be obtained from the authors upon request.

For CVA the truncation indices $f$ and $p$ are chosen as $\hat{f} = \hat{p} = 2\hat{k}$ ([33], Section 5). The system order $n$ is estimated by minimizing ([33], Section 5)

$$SVC(n) = \hat{\sigma}_{n+1}^2 + 2ns \frac{\log T}{T} \,. \tag{7}$$

Here $\hat{\sigma}_i$ denotes the $i$-th largest singular value from the singular value decomposition of $\hat{\Xi}_f^{+} \hat{\beta}_1 \hat{\Xi}_p^{-}$ (Step 2 of CVA). Note that selecting the number of states by $SVC$ is made less influential insofar as $\hat{n} = \max\{c_0 + 2c_{\pi/2} + c_{\pi}, \hat{n}_{SVC}\}$, where $\hat{n}_{SVC}$ denotes the SVC estimated system order.
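In code the order selection can be sketched as follows; the singular values are made up for illustration, and the final line shows the lower bound $\hat{n} \ge c_0 + 2c_{\pi/2} + c_{\pi}$ for the case $c_0 = c_{\pi/2} = c_{\pi} = 1$.

```python
import numpy as np

# SVC(n) = sigma_hat_{n+1}^2 + 2 n s log(T)/T, with sigma_{n+1} = 0 for
# n equal to the number of computed singular values.
def svc_order(sv, s, T):
    sv = np.asarray(sv, dtype=float)
    pen = 2.0 * s * np.log(T) / T
    crit = [(sv[n] ** 2 if n < len(sv) else 0.0) + pen * n
            for n in range(len(sv) + 1)]
    return int(np.argmin(crit))

sv = [0.99, 0.95, 0.60, 0.08, 0.03]   # illustrative singular values
n_svc = svc_order(sv, s=2, T=1000)
n_hat = max(1 + 2 * 1 + 1, n_svc)     # n_hat = max{c_0 + 2 c_{pi/2} + c_pi, n_SVC}
```

With these values the penalty term outweighs the fourth squared singular value, so the criterion selects $\hat{n}_{SVC} = 3$, which the lower bound then raises to $\hat{n} = 4$.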

In Section 7.1 we start with the two VAR dgps and find that the likelihood-based procedures are mostly superior. Continuing with the VARMA dgps in Section 7.2, CVA performs better: it is superior for the smaller sample sizes in terms of size and gap, and better for all sample sizes in terms of power. Section 7.3 evaluates the performance of the tests for unit roots for larger sample sizes together with the prediction performance in this setting. We find that the tests are robust to the distribution of the innovations as well as to conditional heteroskedasticity of the GARCH type. Furthermore the empirical size of the tests lies close to the nominal size already for moderate sample sizes, where the tests also show almost perfect power properties.

#### *7.1. VAR Processes*

The VAR dgps considered in this paper are given by

$$X\_t = \Pi\_1 X\_{t-1} + \Pi\_2 X\_{t-2} + \Pi\_3 X\_{t-3} + \Pi\_4 X\_{t-4} + \varepsilon\_t, \qquad \varepsilon\_t \sim \mathcal{N}\left( \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 & 0.5 \\ 0.5 & 1 \end{bmatrix} \right) \tag{8}$$

where $(\varepsilon_t)_{t\in\mathbb{Z}}$ is white noise and the coefficient matrices are

$$\begin{aligned} \Pi\_1 &= \begin{bmatrix} \gamma & 0 \\ 0 & 0 \end{bmatrix}, \Pi\_2 = \begin{bmatrix} -0.4 & 0.4 - \gamma \\ 0 & 0 \end{bmatrix}, \\ \Pi\_3 &= \begin{bmatrix} -\gamma & 0 \\ 0 & 0 \end{bmatrix}, \Pi\_4 = \begin{bmatrix} 0.6 - (\gamma/10) & 0.4 + \gamma \\ 0 & 1 \end{bmatrix}. \end{aligned}$$

This dgp is adopted from [5] with a slight adjustment to Π4. The corresponding VECM representation in the notation of [5] equals

$$\mathbf{X}_{0,t} = \begin{bmatrix} -0.2\\ 0 \end{bmatrix} \begin{bmatrix} 1+\gamma/8 & -1 \end{bmatrix} \mathbf{X}_{1,t-1} + \begin{bmatrix} 0.2\\ 0 \end{bmatrix} \begin{bmatrix} 1+\gamma/8 & -1 \end{bmatrix} \mathbf{X}_{2,t-1} + \begin{bmatrix} \gamma\\ 0 \end{bmatrix} \begin{bmatrix} 1+0.05L & -L \end{bmatrix} \mathbf{X}_{3,t-1} + \varepsilon_{t}.$$

As can be seen from Table 1, the dgps possess unit roots at frequencies 0, $\pi$, and $\pi/2$, where $c_{\pi/2} = 2\,[1]$ for $\gamma = 0\,[0.2]$, respectively. Note that in all cases the order of integration equals 1, while the number of common cycles at $\pi/2$ is varied.
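The dgp (8) can be simulated directly. The sketch below (sample size and seed are arbitrary choices) also forms the companion matrix, whose unit-modulus eigenvalues reproduce the unit roots at frequencies 0, $\pi$, and $\pi/2$ reported in Table 1:

```python
import numpy as np

rng = np.random.default_rng(8)

g = 0.2                                   # gamma = 0.2, i.e. c_{pi/2} = 1
P1 = np.array([[g, 0.0], [0.0, 0.0]])
P2 = np.array([[-0.4, 0.4 - g], [0.0, 0.0]])
P3 = np.array([[-g, 0.0], [0.0, 0.0]])
P4 = np.array([[0.6 - g / 10, 0.4 + g], [0.0, 1.0]])
L = np.linalg.cholesky(np.array([[1.0, 0.5], [0.5, 1.0]]))

T = 200
X = np.zeros((T + 4, 2))                  # zero initial values
for t in range(4, T + 4):
    X[t] = (P1 @ X[t-1] + P2 @ X[t-2] + P3 @ X[t-3] + P4 @ X[t-4]
            + L @ rng.standard_normal(2))
X = X[4:]

# companion matrix of the VAR(4); its eigenvalues include 1, -1 and +/- i
comp = np.zeros((8, 8))
comp[:2, 0:2], comp[:2, 2:4], comp[:2, 4:6], comp[:2, 6:8] = P1, P2, P3, P4
comp[2:, :6] = np.eye(6)
ev = np.linalg.eigvals(comp)
```

The second equation of the system reduces to $x_{2,t} = x_{2,t-4} + \varepsilon_{2,t}$, which contributes the unit roots at all four quarterly frequencies.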


**Table 1.** Eigenvalues of the coefficient matrix of the companion form.

Table 2 exhibits the hit rates from the application of the different test statistics. At frequencies 0 and $\pi$, $\Lambda$ is compared with the trace test of Johansen (J; based on [31] for the unit root $z = -1$), whereas at $\pi/2$ it is competing with JS, $Q_1$, $Q_2$, and $Q_3$. All competitors are likelihood-based tests, which is the term we use when comparing $\Lambda$ to them as a whole.

**Table 2.** Hit rates for the different tests (VAR dgp). Twice the maximum (over all entries) Monte Carlo standard error is 0.005.


The results for 0 and *π* are very similar for both dgps in that Λ scores more hits than the likelihood-based tests when the sample size is small, *T* ∈ {50, 100}. Convergence of its finite sample distribution is slower than for the other test statistics, however, as J is closer to 0.95 from *T* = 200 on. For *T* = 500 the distribution of Λ only seems to have converged to its asymptotic distribution when *cπ*/2 = 2 at frequency 0, whereas convergence of the likelihood-based tests has occurred in all cases.

At $\pi/2$ the likelihood ratio test of JS strictly dominates all implementations of [5] for all sample sizes and both dgps. It strictly dominates the CVA-based test procedure as well, with one apparent exception: when $c_{\pi/2} = 1$ and $T = 50$, $\Lambda$ scores slightly, but significantly, more hits than the likelihood ratio test of JS. Surprisingly, $\Lambda$ is drastically worse for $T = 100$ with only 8.7%, only to be back up at 85% for $T = 200$.

The behavior of $\Lambda$ is explained by $z_5$ and $z_6$ being close to $\pm i$ when $c_{\pi/2} = 1$, cf. Table 1. For future reference we will call the corresponding roots *false unit roots*.

For $T = 50$ the estimates of the eigenvalues corresponding to actual unit roots are not very close to $\pm i$, in contrast to the false unit roots. Thus the latter are mistaken for actual unit roots (cf. the first panel in Figure 1), leading to a hit rate of 81.1%, which is even larger than the rates at 0 and $\pi$. As the sample size increases, the eigenvalue estimates of the true unit roots become more and more accurate, visible from the second and third panels in Figure 1. Accordingly they can be detected correctly more often. Unfortunately, however, for $T = 100$ the false unit roots continue to be detected, such that often two instead of just one unit root are found by $\Lambda$, resulting in a hit rate of only 8.7%. For $T \in \{200, 500\}$ $\Lambda$ is able to distinguish the false unit roots from the true ones, and the detection rate approaches the asymptotic rate, with 85.5% and 92.7%, respectively.

**Figure 1.** Eigenvalues around *z* = *i* of 1000 replications when *γ* = 0.2 (*cπ*/2 = 1).

When the VAR dgp without false unit roots and $c_{\pi/2} = 2$ is considered, the hit rates of $\Lambda$ at $\pi/2$ are monotonically increasing in the sample size again. The rates are smaller than those of the likelihood-based tests, however, and also clearly worse than those of $\Lambda$ at 0 and $\pi$, cf. Table 2 again.

Taken together, at frequencies 0 and *π* which correspond to real-valued unit roots, the use of Λ was advantageous for *T* = 50. It also scored more hits for *T* = 100 and *cπ*/2 = 1. For higher sample sizes the likelihood-based tests clearly dominate Λ at these two frequencies. At *π*/2 this superiority of the likelihood-based tests for all sample sizes and both dgps continues. The example also points to a general weakness: if the sample size is low and *false unit roots* are present, it can be difficult for Λ to distinguish them from actual unit roots.

#### *7.2. VARMA Processes*

The second setup consists of VARMA data generated by a state space system (*Ar*, *Cr*, *Kr*), *r* = 1, 2, as in (1), where the matrices *A*<sup>1</sup> and *A*<sup>2</sup> are constructed as in (2) and are taken to be

$$A\_1 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 \end{bmatrix}, \qquad A\_2 = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & -1 & 0 \end{bmatrix}. \tag{9}$$

These two choices yield the same state space unit root structures as those of the two VAR dgps with *cπ*/2 = 1 and *cπ*/2 = 2 for *A*<sup>1</sup> and *A*<sup>2</sup>, respectively. The other two system matrices *K<sub>r</sub>* ∈ ℝ<sup>(2+2*r*)×*s*</sup> and *C<sub>r</sub>* ∈ ℝ<sup>*s*×(2+2*r*)</sup> with *s* = 8 are drawn randomly from a standard normal distribution in each replication, and (*ε<sub>t</sub>*)<sub>*t*∈ℤ</sub> is multivariate normal white noise with an identity covariance matrix.
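To make the simulation design concrete, the following is a minimal sketch (our illustration, not the authors' code) of generating one replication of this dgp; `simulate_ss` is a hypothetical helper, and the matrix draws and noise follow the description above:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ss(A, C, K, T):
    """Simulate y_t = C x_t + eps_t, x_{t+1} = A x_t + K eps_t
    with standard normal white noise and zero initial state."""
    n, s = K.shape
    x = np.zeros(n)
    y = np.empty((T, C.shape[0]))
    for t in range(T):
        eps = rng.standard_normal(s)
        y[t] = C @ x + eps
        x = A @ x + K @ eps
    return y

# A_1 from (9): unit-root blocks at z = 1, z = -1 and a rotation block at z = +/- i
A1 = np.array([[1, 0, 0, 0],
               [0, -1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, -1, 0]], dtype=float)
s = 8                                 # output dimension as in the experiment
K1 = rng.standard_normal((4, s))      # drawn anew in each replication
C1 = rng.standard_normal((s, 4))
y = simulate_ss(A1, C1, K1, T=200)
```

All eigenvalues of *A*<sub>1</sub> lie on the unit circle, so every state component is nonstationary.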

Note that these systems are within the VARMA model class such that the dgp is contained in the VAR setting only by increasing the lag length as a function of the sample size. While superiority of the CVA approach in such a setting might be expected, this is far from obvious. Moreover, using a long VAR approximation is the industry norm in such situations.

From the hit rates in Table 3 it can be seen that the combination of large *s*, small *T*, and a minimal lag length of four renders the likelihood-based tests useless at all frequencies, with hit rates below ten percent for *T* = 50. Λ, in contrast, does not suffer from this problem and is already close to 95% for this sample size. Only when *T* = 200 do the likelihood-based tests appear to work, exhibiting hit rates close to 95%.


**Table 3.** Hit rates for the different tests (VARMA dgp). Twice the maximum (over all entries) Monte Carlo standard error is 0.005.

For all tests alike, however, it is striking that hit rates move away from 95% when *T* = 500. This behavior is most pronounced for Λ, e.g., from *T* = 200 to *T* = 500 its hit rate drops from 93.1% to 82.4% at 0 when *A*<sup>2</sup> is used. This phenomenon is a consequence of the fact that *f* and *k* in the algorithm are chosen in a data-dependent way. An inspection of how the hit rates depend on *f* and *k*, and a comparison with the actually selected *f̂*, *k̂*, reveals that for *T* = 500 too large values of *f* and *k* are chosen too often, leaving room for improvement in the hit rates, cf. Figure 2. The figure stresses an important point: the performance of the unit root tests is heavily influenced by the selected lag lengths for all procedures. We tested a number of different information criteria in this respect. AICc turned out to be the best criterion overall, but not uniformly. Figure 2 indicates advantages of BIC over AIC for this example, as BIC on average selects smaller lag lengths, associated here with higher hit rates.
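Since the hit rates hinge on the lag lengths chosen by information criteria, the following sketch (ours, using a plain least-squares AR fit as a stand-in for the actual procedures) shows how AIC, BIC and AICc could be computed over candidate lag lengths:

```python
import numpy as np

def ar_ic(y, p_max):
    """AIC, BIC and AICc over AR lag lengths 1..p_max for a univariate
    series (least-squares fit, Gaussian likelihood up to constants)."""
    T = len(y)
    out = {}
    for p in range(1, p_max + 1):
        # regressors y_{t-1},...,y_{t-p} for t = p,...,T-1
        X = np.column_stack([y[p - i - 1:T - i - 1] for i in range(p)])
        Y = y[p:]
        coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ coef
        Teff = T - p
        sigma2 = resid @ resid / Teff
        ll = -0.5 * Teff * np.log(sigma2)   # log-likelihood up to constants
        k = p + 1                            # AR coefficients + innovation variance
        out[p] = (-2 * ll + 2 * k,                           # AIC
                  -2 * ll + k * np.log(Teff),                # BIC
                  -2 * ll + 2 * k * Teff / (Teff - k - 1))   # AICc
    return out

# illustration on an AR(1) series; BIC typically selects the smaller lag
rng = np.random.default_rng(0)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * y[t - 1] + rng.standard_normal()
ics = ar_ic(y, 10)
best_bic = min(ics, key=lambda p: ics[p][1])
```

AICc adds a small-sample penalty on top of AIC, which is why it can behave differently from both AIC and BIC in short samples.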

To study the power of the different procedures, the transition dynamics *Ar* in (9) are multiplied by *ρ* ∈ {0.8, 0.85, 0.9, 0.95} so that the systems do not contain unit roots at any of the frequencies. Here empirical power is defined as the frequency of choosing zero common trends. This is why for *ρ* = 1, when common trends are in fact present in our specifications, the empirical power values plotted in Figure 3 are not equal to the actual size, which could be defined as one minus the hit rate: our measure of empirical power only counts the false test conclusion of zero common trends, but there are of course multiple ways in which the testing procedure could conclude falsely.

**Figure 2.** Relationship between hit rates and chosen values of *f* and *k*, illustration for the VARMA dgp using *A*2. The lower x-axes show *f* or *k*, above are the choice frequencies of the selection criteria.

**Figure 3.** Empirical power of the different test procedures (VARMA dgp with *A*2). Twice the Monte Carlo standard error is 0.005.

As expected, rejection of the null hypothesis is easiest when *ρ* is small and is very difficult when it is close to 1, cf. Figure 3 for the case of *A*2.

Further, there are almost no differences among the likelihood-based tests over all combinations of sample size and frequency; only for *T* = 100 is JS significantly worse than the *Qi*, *i* = 1, 2, 3, at *π*/2. It is also clearly visible at all frequencies that the likelihood-based tests possess no or only very limited power when *T* = 50 and *T* = 100, respectively. Λ, in contrast, is clearly more powerful in these cases. As the sample size increases to *T* = 200, the power of each test improves, yet Λ remains the most powerful option. Only for *T* = 500 have the differences almost vanished, with small, but significant, advantages for Λ at 0 and *π*.

The results are the same when *A*<sup>1</sup> is used and *cπ*/2 = 1 and all of the differences described here are statistically significant.

Next the estimation performance of CVA is evaluated by calculating the gaps between the true and the estimated cointegrating spaces. At all frequencies these gaps are compared with those of the GARR procedure of [5], which cycles through the frequencies. At *π*/2 CVA and GARR are also compared with our implementation of JS and cRRR of [4], whereas at 0 and *π* they are also compared with the usual Johansen procedure. All estimates are conditional on the true state space unit root structure in the sense that the minimal number of states used is greater than or equal to the number of unit roots over all frequencies. Other than imposing this minimum state dimension, the estimation of the order using *SVC* is not influenced. The likelihood-based procedures, on the other hand, take the unit root structure as given, i.e., do not perform CI rank testing for this estimation exercise.
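One common way to measure the gap between two subspaces is the spectral norm of the difference of the orthogonal projectors onto them, equivalently the sine of the largest principal angle. A minimal sketch (ours; the paper's exact gap definition is not reproduced here), assuming equal-dimensional column spaces:

```python
import numpy as np

def gap(S1, S2):
    """Gap between the column spaces of S1 and S2 (equal dimension):
    spectral norm of the difference of the orthogonal projectors,
    i.e., the sine of the largest principal angle between the spaces."""
    Q1, _ = np.linalg.qr(S1)
    Q2, _ = np.linalg.qr(S2)
    P1 = Q1 @ Q1.T.conj()
    P2 = Q2 @ Q2.T.conj()
    return np.linalg.norm(P1 - P2, 2)
```

The measure is 0 for identical spaces, 1 for orthogonal ones, and is invariant to the choice of basis within each space.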

From the results in Table 4 it can be noted first that the likelihood-based procedures show mostly equal mean gaps. Only at *π*/2 for *T* = 50 and both dgps does JS possess significantly larger gaps than cRRR and GARR; the other differences are not statistically significant. Thus it does not matter in our example whether the iterative procedure is used or not.

Second, CVA is again superior for *T* = 50 where it exhibits mean gaps that are significantly smaller than those of the other estimators at all frequencies. This advantage is turned around for higher sample sizes, though: mean gaps are smaller for the likelihood-based procedures when *T* ∈ {100, 200, 500} and *A*<sup>2</sup> is used, if only slightly. When *A*<sup>1</sup> is used instead, mean gaps do not differ significantly from each other at *π*/2 when *T* > 50 and at 0, *π* when *T* = 100 and those of CVA are only very modestly worse when *T* ∈ {200, 500} at 0, *π*.


**Table 4.** Mean gaps between estimated and true cointegrating spaces (VARMA dgp). 2*MCse* denotes twice the maximal Monte Carlo standard error for the corresponding row.

Thus, when it comes to estimating the cointegrating spaces, CVA is superior for *T* = 50 and equally good or only slightly worse than the likelihood-based procedures for higher sample sizes. For the systems analyzed, decreasing *cπ*/2 leads to gaps that are smaller for all methods and these improvements are slightly larger for CVA than for the other estimators.

#### *7.3. Robustness of Unit Root Tests for Daily Data*

In this last simulation example we examine the robustness of the proposed procedures, in terms of test performance and prediction accuracy, with respect to the innovation distribution and the presence of GARCH-type conditional heteroskedasticity, as these features are often observed in data of higher sampling frequency, for example in financial applications. While our asymptotic results do not depend on the distribution of the innovations (subject to the assumptions), the assumptions do not cover GARCH effects. Nevertheless, the theory in [25,26] suggests that the tests might be robust also in this respect.

We generate a state space system of order *n* = 8 using the matrix *A* = [*A<sub>i,j</sub>*]<sub>*i*,*j*=1,...,8</sub> where *A<sub>i,i+1</sub>* = 1 for *i* = 1, ..., 6, *A*<sub>7,1</sub> = 1, *A*<sub>8,8</sub> = 0.8 and *A<sub>i,j</sub>* = 0 otherwise. This implies that the eigenvalues of this matrix are *λ<sub>j</sub>* = exp(2*πij*/7), *j* = 1, ..., 7, and *λ*<sub>8</sub> = 0.8. Therefore the corresponding process has state space unit root structure

$$((0, (1)), (2\pi/7, (1)), (4\pi/7, (1)), (6\pi/7, (1))).$$

The entries of the matrices *C* and *K* are chosen as independent standard normally distributed random variables as before.

A process (*y<sub>t</sub>*)<sub>*t*=1,...,*T*</sub> is generated by filtering an independent identically distributed innovation process (*ε<sub>t</sub>*)<sub>*t*=−199,...,*T*+1</sub> through the system (*A*, *C*, *K*). The first 200 observations are discarded; the last ones are used for validation purposes. A total of 1000 replications is generated, where in each replication a different system is chosen.
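As an illustration (ours, not the authors' code), the transition matrix *A* described above can be constructed directly and its eigenvalue claim checked numerically:

```python
import numpy as np

n = 8
A = np.zeros((n, n))
for i in range(6):        # A_{i,i+1} = 1 for i = 1,...,6 (1-based indexing)
    A[i, i + 1] = 1.0
A[6, 0] = 1.0             # A_{7,1} = 1 closes the 7-cycle
A[7, 7] = 0.8             # stable real pole

# the 7-cycle is a cyclic permutation of size 7, so its eigenvalues are the
# 7th roots of unity exp(2*pi*i*j/7); the remaining eigenvalue is 0.8
eig = np.sort_complex(np.linalg.eigvals(A))
```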

With the generated data three different estimates are obtained: First, an autoregressive model (called AR in the following) is estimated, with lag length chosen using AIC with maximal lag length √*T*. Second, an autoregressive model with large lag length (called ARlong) is estimated. This estimate is used to hint at the behavior of an autoregression using a lag length equal to a full year, which would correspond to estimating a VECM without rank restrictions when accounting for yearly differences. The third method consists of the CVA estimates, where *f* = *p* = 2*k̂*<sub>AIC</sub> is chosen. The order is estimated by minimizing SVC; however, we correct for orders smaller than *n* = 7, which would limit the possibilities of finding all unit roots.

First, we compare the prediction accuracy of the three methods for two different distributions of the innovations: Besides the standard normal distribution, the Student *t*-distribution with *v* = 5 degrees of freedom (scaled to unit variance) is used. This distribution shows considerably heavier tails than the normal distribution but is nevertheless covered by our assumptions.

Figure 4 provides the results for out-of-sample one day ahead mean absolute prediction error (over all coordinates) for the sample sizes *T* = 364 days (one year), *T* = 1092 (3 years) and *T* = 3276 (nine years). The long AR model is estimated with lag lengths of 8 weeks for the smallest sample size, 10 weeks for the medium sample size and 12 weeks for the largest sample size.

**Figure 4.** Mean of absolute value of one day ahead prediction error over all four components. CVA (blue), AR (red) and long AR (black). Dash-dot lines refer to the t-distribution.

In the figure the results for the normally distributed innovations are presented as well as those for the *t*-distributed residuals (scaled to unit variance). It can be seen that for the two larger sample sizes the mean absolute error of the residuals for CVA is smaller in all cases; the maximal standard error of the estimated means over 1000 replications for *T* = 1092 and *T* = 3276 amounts to 0.05, which allows the conclusion that CVA performs better for these sample sizes. For the smallest sample size, by contrast, results are mixed. For CVA the results for the heavy tailed distribution are much worse in this case than for the normal distribution, while for the larger sample sizes the differences are small. For *T* = 364 there are no statistically significant differences between the three methods: CVA seems to suffer more from a few very large errors (using root mean square errors, the CVA results are worse for *T* = 364 in comparison; using the 95% percentiles, CVA performs best also for the smallest sample size). This results in a standard error over the replications of the mean absolute error for *T* = 364 of 0.18 for normally distributed innovations and 3.4 for *t*-distributed innovations. The long AR models are clearly worse than the two other approaches, even though we are still far from using a full year as the lag length.

With regard to the unit root tests we investigate results for the tests of the hypotheses *H*<sup>0</sup> : *cm* = 1 versus *H*<sup>1</sup> : *cm* = 0 at all frequencies 2*πm*/364, *m* = 0, ..., 363. The data generating process features unit roots with *cm* = 1 at the seven frequencies 2*πk*/7, *k* = 0, ..., 6. Therefore the tests should not reject at these frequencies, but should reject at all others.

Consequently we compare the minimum of the non-rejection rates over the seven unit root frequencies (called empirical size below) as well as the minimum of the rejection rates over the non-unit root frequencies *ω<sub>j</sub>* = 2*πj*/364, *j* ≠ 52*k*, *k* = 0, 1, 2, ..., 6 (called empirical power below).

For the larger sample sizes the empirical size is practically 95% while the empirical power is 100%. For *T* = 364 we obtain an empirical size of 90% for the normal distribution and 91.5% for the t-distribution. The worst empirical power equals 89.3% (normal) and 87.6% (t-distribution). Hence even for one year of data the discrimination properties of the unit root tests are good and we do not observe differences between the normal distribution for the innovations and the heavy tailed t-distribution.

Finally we compare the empirical size and power of the tests for the various unit roots for smaller sample sizes *T* ∈ {104, 208, 312, 416, 520}. For the experiments we consider univariate GARCH models of the form

$$\varepsilon\_{t,i} = h\_{t,i}\,\eta\_{t,i}, \qquad h\_{t,i}^2 = 1 + \alpha\,\varepsilon\_{t-1,i}^2 + \beta\,h\_{t-1,i}^2, \qquad i = 1, \dots, 4,$$

where (*η<sub>t,i</sub>*)<sub>*t*∈ℤ</sub> is independent and identically standard normally distributed and *α*, *β* ≥ 0 are real numbers. It follows that the component processes (*ε<sub>t,i</sub>*)<sub>*t*∈ℤ</sub> show conditional heteroskedasticity, the persistence of which is governed by *α* + *β*. Here 0 < *α* + *β* < 1 implies stationarity while *α* + *β* = 1 implies persistent conditional heteroskedasticity, usually termed I-GARCH. We include five different processes for the innovations:


For the five different sample sizes 1000 replications of the estimates using the CVA algorithm are obtained. For each estimate we calculate the test statistic for testing *H*<sub>0</sub> : *cm* = 1 versus *H*<sub>1</sub> : *cm* = 0 for *m* = 0, ..., 363, corresponding to the unit roots *z<sub>m</sub>* = exp(2*πim*/364). This set of unit roots contains all seven unit roots exp(2*πik*/7), *k* = 0, ..., 6.
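The univariate GARCH(1,1) recursion displayed above can be simulated as follows (our sketch; parameter values are illustrative, not the five processes used in the experiment):

```python
import numpy as np

def simulate_garch(T, alpha, beta, s=4, rng=None):
    """Simulate s independent GARCH(1,1) components following
    eps_{t,i} = h_{t,i} * eta_{t,i},
    h^2_{t,i} = 1 + alpha * eps^2_{t-1,i} + beta * h^2_{t-1,i},
    with iid standard normal eta."""
    rng = rng or np.random.default_rng(0)
    eps = np.zeros((T, s))
    h2 = np.ones(s)                       # initial conditional variance
    for t in range(T):
        e = np.sqrt(h2) * rng.standard_normal(s)
        eps[t] = e
        h2 = 1.0 + alpha * e**2 + beta * h2
    return eps

# alpha + beta = 0.9 < 1: stationary, unconditional variance 1/(1-0.9) = 10
eps = simulate_garch(1000, alpha=0.2, beta=0.7)
```

Setting *α* + *β* = 1 instead would yield the I-GARCH case with persistent conditional heteroskedasticity.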

Figure 5 provides the mean over the 1000 replications of the test statistics Λ(1) for *z<sub>j</sub>*, *j* = 0, ..., 363, and the five sample sizes. It can be seen that the test Λ(1) is able to pinpoint the seven unit roots present in the data generating process fairly accurately even for sample size *T* = 104. The zoom on the region around the unit root frequency 2*π*/7 shows that the mean value is larger than the cutoff value of the test (the dashed horizontal line) for the adjacent frequency 2*π* · 53/364 already for *T* = 312.

(**a**) Mean of unit root test statistics. (**b**) Zoom of mean unit root tests. **Figure 5.** Results of the unit root tests for all seasonal unit roots jointly.

Table 5 lists the minimum of the achieved percentages of non-rejections of the test statistic for the seven unit root frequencies as well as the maximum over all non-unit root frequencies. It can be seen that for all GARCH models for *T* = 312 the test rejects unit roots at all non unit root frequencies every time, while the empirical size is close to the nominal 5%. For small sample sizes the tests are slightly undersized while for *T* = 208 a slight oversizing is observed. The two larger sample sizes are omitted as the tests perform perfectly there.



It follows from the examples presented in this subsection that the test is robust also in small samples with respect to heavy tailed distributions of the innovations (subject to the assumptions). Furthermore also a remarkable robustness with respect to GARCH-type conditional heteroskedasticity is observed.

#### **8. Application**

In this section we apply CVA to the modeling of electricity consumption using a data set from [36]. The dataset contains hourly consumption data (in megawatts) from a number of US regions, scraped from the webpage of PJM Interconnection LLC, a regional transmission organization. The number of regions has changed over time; thus the data set contains many missing values. It also contains data aggregated into regions called east and west, which are not used subsequently.

In order to avoid problems with missing values, we restrict the analysis to four regions for which data over the same time period is available: American Electric Power (AEP; in the following printed in blue), the Dayton Power and Light Company (DAYTON; black), Dominion Virginia Power (DOM; red) and Duquesne Light Co. (DUQ; green). We use data from 1 May 2005 until 31 July 2018. In this period only 3 data points are missing for the four regions; they are imputed by interpolation of the corresponding previous values. One observation in this sample is an obvious outlier, which is corrected for analogously.

The data is split into an estimation sample covering observations up to the end of 2016 (102,291 observations on 4263 days) and a validation sample containing data from 2017 and 2018 (13,845 observations on 577 days). The data is equally sampled but contains two-hour segments when switching from winter to summer time or back. Table 6 contains some summary statistics.



Figure 6 provides an overview of the data: Panel (a) shows the full data on an hourly basis, while (b) presents aggregation to daily frequency. Panel (c) zooms in on a two year stretch of daily consumption. Panel (d) finally provides hourly data for the first month in the validation data. The figures clearly document strong daily, weekly and yearly patterns. From these figures it appears that these seasonal fluctuations are somewhat regular with changes throughout time. It is hence not clear whether a fixed seasonal pattern is appropriate. Also note that the sampling frequency is on an hourly basis such that a year roughly covers 8760 observations.

(**c**) Log of daily consumption from 2010 to 2012 (**d**) Log of hourly consumption on first month of validation set

**Figure 6.** Electricity consumption data.

In the following we estimate (on the estimation part) and compare (on the validation part) a number of different models, first for the full hourly data set and afterwards for the aggregated daily data. As a benchmark we will use univariate AR models including deterministic seasonal patterns for daily, weekly and yearly variations. Subsequently we estimate models using CVA including different sets of such seasonal patterns.

First, fixed periodic patterns are estimated using dummy variables. We model the natural logarithm of consumption (to reduce problems due to heteroskedasticity) and include dummies for weekdays and hours as well as sine and cosine terms corresponding to the first 20 Fourier frequencies with respect to annual periodicity. The corresponding results are shown in Figure 7. It is obvious that there is quite some periodic variation. Also, the four data sets show very similar patterns, as expected.
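To illustrate how such a deterministic design could be set up (our sketch, not the authors' code; the exact dummy coding and the period length 365.25 · 24 are assumptions), the following builds a regressor matrix with hour-of-day dummies, weekday dummies and the first 20 annual Fourier pairs:

```python
import numpy as np

def seasonal_design(T, hours, weekdays, n_fourier=20, period=365.25 * 24):
    """Regressor matrix: constant, 23 hour-of-day dummies (hour 0 as base),
    6 weekday dummies (day 0 as base) and sine/cosine pairs for the first
    n_fourier annual Fourier frequencies."""
    cols = [np.ones(T)]
    for h in range(1, 24):
        cols.append((hours == h).astype(float))
    for d in range(1, 7):
        cols.append((weekdays == d).astype(float))
    t = np.arange(T)
    for k in range(1, n_fourier + 1):
        cols.append(np.sin(2 * np.pi * k * t / period))
        cols.append(np.cos(2 * np.pi * k * t / period))
    return np.column_stack(cols)

# deseasonalizing by OLS would then amount to
#   beta, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
#   resid = np.log(y) - X @ beta
```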

After the extraction of these deterministic terms the next step is univariate autoregressive (AR) modeling. Figure 8 shows the BIC values of AR models of lag lengths zero to 800 for the four series as well as the BIC of a multivariate AR model for the same number of lags. The chosen values are given in Table 6.

(**a**) Yearly fluctuation (**b**) Weekly fluctuation (**c**) Daily fluctuation **Figure 7.** Periodic patterns from dummy variables.

**Figure 8.** BIC values for univariate models and multivariate model (dashed line; divided by four to fit).

The BIC curve is extremely flat for the univariate models. Noticeable drops in BIC occur around lag 24 (one day), 144 (six days), 168 (one week), 336 (two weeks), 504 (three weeks). BIC selects large lag lengths from 529 (DUQ) up to 554 (DOM). AIC selects lag lengths close to the maximum allowed with a minimum at 772 lags. The BIC pattern of the multivariate model differs in that the two drops at two and three weeks are missing. Instead, the optimal BIC value is obtained at lag 194, well below the optimal lag lengths in the univariate cases. AIC here opts for lag length 531, just over 22 days.

Subsequently CVA is applied with *f* = *k̂*<sub>BIC</sub>, *p* = *k̂*<sub>AIC</sub> as estimated for the multivariate model. This differs from the usual recommendation of *f* = *p* = 2*k̂*<sub>AIC</sub> in order to avoid numerical problems with huge matrices. The order is chosen according to SVC, resulting in *n̂* = 240. The corresponding model is termed Mod 1 in the following. Note that this configuration of *f*, *n̂* does not fulfill the requirements of our asymptotic theory. The bound *f* ≥ *n* ensures that the matrix O<sub>*f*</sub> has full column rank; generically this will already be the case for *f s* ≥ *n*, leading to a less restrictive assumption. In practice, too low a value of *f* will be detected by *n̂* being estimated close to the maximum, which is not the case here.

As a second model we only use weekday dummies but neglect the other deterministic terms. Again AIC (*k̂*<sub>AIC</sub> = 531) and BIC (*k̂*<sub>BIC</sub> = 195) are used to determine the optimal lag length in the multivariate AR model. The corresponding CVA estimated model uses *n̂* = 245 according to SVC, resulting in Mod 2.

The third model uses only a constant as deterministic term. Again similar AIC (555) and BIC (195) lag lengths are selected. A state space model, Mod 3, is estimated using CVA with *n̂* = 209.

Figure 9 provides information on the results. Panel (a) shows the coefficients of the univariate AR models. It can be seen that lags around one day and one to three weeks play the biggest role for all four datasets. Panel (b) shows that the multivariate models lead to better one step ahead predictions in terms of the root mean square error (RMSE). Mod 1 and Mod 2 show practically equivalent out of sample prediction error for all four data sets, while Mod 3 delivers the best out of sample fit for all four regions.

In particular in financial applications, data of high sampling frequency shows persistent behaviour, also in terms of conditional heteroskedasticity, as well as heavy tailed distributions of the innovations. For our data sets Figure 10 below provides some information in this respect for the residuals according to Mod 3. Panel (a) provides a plot of the residuals in the year 2018 (contained in the validation period). It can be seen that large deviations occur occasionally, while otherwise the residuals vary in a tight band around 0. The kernel density estimates for the normalized (to unit variance) residuals on the full validation data set in panel (b) show the typical heavy tailed distributions. Panel (c) contains an ACF plot for the four regions, again calculated using the full validation sample. It demonstrates that the model successfully eliminates almost all autocorrelation, with only a few ACF values occurring outside the confidence interval. Panel (d) provides the ACF plot for the squared innovations to examine GARCH-type effects. While GARCH-effects are clearly visible, the ACF drops to zero fast with occasional positive values (except maybe for the Duquesne data).

Applying the eigenvalue based test Λ(1) for *c* = 1 and all Fourier frequencies *ω<sub>j</sub>* = 2*πj*/(365 · 24), we find that for Mod 2 and Mod 3 the largest p-value is obtained for *ω*<sub>365</sub>, corresponding to a period length of one day, with 0.0187 for Mod 2 (test statistic 6.6) and 0.02 for Mod 3 (test statistic 6.5). This implies that the unit root at frequency *ω*<sub>365</sub> is not rejected at a significance level of 1%, but is rejected at 5%. All other unit roots are rejected at every usual significance level. For Mod 1 the test statistic for *ω*<sub>365</sub> equals 41.2, corresponding to a p-value of practically 0. This implies that on top of a deterministic daily pattern the series show strong persistence at the daily period. Excluding the hourly dummies pulls the roots closest to *ω*<sub>365</sub> closer to the unit circle, resulting in insignificant unit root tests, and improves the one step ahead forecasts. Including the dummies weakens the evidence of a unit root while leading to worse predictions.

The analysis is repeated with data aggregated to daily sampling frequency. The aggregation reduces the required lag lengths, as is visible from Table 6 in the univariate case, and hence we use CVA with the recommended *f* = *p* = 2*k̂*<sub>AIC</sub>. Besides the univariate models, in this case also a naive model predicting today's consumption by yesterday's consumption is used. Three multivariate models are estimated: Mod 1 contains weekday dummies and sine and cosine terms for the first twenty Fourier frequencies corresponding to a period of one year. Mod 2 only contains the weekday dummies, while Mod 3 only uses the constant. Figure 11 provides the out-of-sample RMSE for one day ahead predictions (panel (a)) and seven days ahead predictions (panel (b)).

(**a**) RMSE of one day ahead predictions (**b**) RMSE of seven day ahead predictions

**Figure 11.** Results for the daily datasets.

It can be seen that both Mod 1 and Mod 2 beat the univariate AR models in terms of one step ahead prediction error, while Mod 3 performs better for seven days ahead prediction. Mod 1 performs on par with Mod 2 for one step ahead prediction but performs better in predicting seven steps ahead. In Figure 12 poles and zeros of the three estimated state space models are plotted. Here the poles (marked with 'x') are the eigenvalues of the matrix *A*. These are the inverses of the determinantal roots of the autoregressive matrix polynomial in the equivalent VARMA representation. The zeros are the inverses of the zeros of the determinant of the MA polynomial. We can see that for Mod 3, with only a constant, poles close to the unit circle at frequencies 2*πj*/7, *j* = 1, ..., 6, arise to capture the weekly pattern. The other two models only show one pole close to the unit circle, a real pole almost at *z* = 1. The pole corresponding to Mod 1 is closer to the unit circle than the one for Mod 2 (see (b)).
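For a state space model in innovation form with transfer function k(z) = I + C(zI − A)⁻¹K, the poles are the eigenvalues of A and the zeros are the eigenvalues of A − KC; a minimal sketch (ours) of the pole/zero computation under this assumption:

```python
import numpy as np

def poles_zeros(A, K, C):
    """Poles and zeros of k(z) = I + C (zI - A)^{-1} K in innovation form:
    poles are the eigenvalues of A, zeros those of A - K C."""
    return np.linalg.eigvals(A), np.linalg.eigvals(A - K @ C)
```

As a scalar check: for A = 0.5, K = C = 1 the transfer function is (z + 0.5)/(z − 0.5), with pole 0.5 and zero −0.5.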

For Mod 3 we obtain p-values for the tests of the three complex unit roots of 0.05 (*ω* = 2*π*/7), 0.165 (4*π*/7) and 0.01 (6*π*/7), which are hence all not statistically significant at significance level *α* = 0.01. The corresponding test for *z* = 1 shows a p-value of 0.004, providing evidence against the null hypothesis of the root being present. For Mod 1 the *p*-value of the test for *z* = 1 is 0.28 and hence we cannot reject the null. Mod 2 provides a *p*-value of 0.023 and hence only weak evidence for the presence of the unit root. This can also be seen from the distance of the nearest pole from the point *z* = 1 in Figure 12.

Jointly this indicates that the location and strength of persistence due to the estimated roots is influenced by the presence of deterministic terms: if the deterministic terms are not included in the model, the cyclical patterns are generated by poles situated close to the unit circle.

The decision whether unit roots exist on top of the deterministic seasonality is not easy in all cases: for the daily data the locations of the poles indicate that deterministic seasonality is enough to capture weekly fluctuations, while a unit root at *z* = 1 appears to be needed to capture yearly variations. For hourly data there is evidence that the daily cycle is best captured with a unit root at frequency *ω*<sub>365</sub>; this leads to the best predictive fit. Finally note that temporal aggregation from hourly to daily data implies that the frequency *ω*<sub>365</sub> of the hourly data aliases to the frequency *ω* = 0 in the daily data. Therefore the stronger evidence of a unit root at *z* = 1 found in daily data might be a consequence of the unit root at frequency *ω*<sub>365</sub> found for hourly data, compare [37].
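The aliasing argument can be illustrated numerically (our sketch): a complex seasonal random walk at *ω*<sub>365</sub> = 2*π*/24 in hourly time satisfies an ordinary random-walk recursion at *z* = 1 when sampled every 24 hours, since *z*<sup>24</sup> = exp(2*πi*) = 1:

```python
import numpy as np

rng = np.random.default_rng(1)
omega = 2 * np.pi * 365 / (365 * 24)   # hourly frequency omega_365 = 2*pi/24
z = np.exp(1j * omega)                  # unit root with period one day

# complex seasonal random walk x_{t+1} = z x_t + eps_t in hourly time
T = 24 * 400
eps = rng.standard_normal(T)
x = np.zeros(T, dtype=complex)
for t in range(1, T):
    x[t] = z * x[t - 1] + eps[t]

# sampling every 24th observation: the autoregressive root becomes
# z**24 = exp(2*pi*i) = 1, i.e., an ordinary unit root at z = 1
daily = x[::24]
```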

The system matrix estimates as well as the evidence in support of unit roots at *ω*<sup>365</sup> for hourly data and *z* = 1 for daily data that we obtain from the CVA modeling can be taken as starting points in subsequent quasi maximum likelihood estimation.

#### **9. Conclusions**

In this paper the asymptotic properties of CVA estimators for seasonally integrated unit root processes are investigated. The main results can be summarized as follows:


Because of the promising performance of CVA, and in particular its robustness, it can be recommended as a simple way to extract information on the number of common trends from the estimated matrix of transition dynamics. This information can be used to reduce the uncertainty in a subsequent likelihood ratio analysis, where quasi maximum likelihood estimates can be obtained starting from the CVA estimates. Since the CVA estimates can be computed quickly for a range of orders, they constitute a valuable starting point for the empirical modeling of time series potentially featuring seasonal cointegration. Moreover, they can also be used in situations where the number of seasons is large or even unclear, as in hourly data sets, as demonstrated in the case study.

**Author Contributions:** Conceptualization, D.B. and R.B.; methodology, D.B.; software, R.B.; formal analysis, D.B. and R.B.; writing—original draft preparation, D.B. and R.B.; writing—review and editing, D.B. and R.B.; visualization, D.B. and R.B.; supervision, D.B. Both authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation—Projektnummer 276051388) which is gratefully acknowledged. We acknowledge support for the publication costs by the Deutsche Forschungsgemeinschaft and the Open Access Publication Fund of Bielefeld University.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Appendix A. Supporting Material**

*Appendix A.1. Complex Valued Canonical Form*

In addition to the real valued canonical form (2) we will also use the corresponding complex valued representation, obtained by transforming each block corresponding to the unit root *z<sub>j</sub>* = cos(*ω<sub>j</sub>*) + *i* sin(*ω<sub>j</sub>*) with the transformation matrix

$$\mathcal{T}\_j = \begin{bmatrix} I\_{c\_j} & iI\_{c\_j} \\ I\_{c\_j} & -iI\_{c\_j} \end{bmatrix}$$

leading to the triple of system matrices in the *j*-th block as:

$$\mathcal{A}\_{j,\mathbb{C}} = \begin{bmatrix} \overline{z\_j}I\_{c\_j} & 0\\ 0 & z\_j I\_{c\_j} \end{bmatrix}, \quad \mathcal{K}\_{j,\mathbb{C}} = \begin{bmatrix} K\_{j,\mathbb{C}}\\ \overline{K\_{j,\mathbb{C}}} \end{bmatrix}, \quad \mathcal{C}\_{j,\mathbb{C}} = \begin{bmatrix} C\_{j,\mathbb{C}}/2 & \overline{C\_{j,\mathbb{C}}}/2 \end{bmatrix},$$

such that

$$x\_{t+1,j,\mathbb{C}} = \overline{z\_j}\, x\_{t,j,\mathbb{C}} + K\_{j,\mathbb{C}}\, \varepsilon\_t, \qquad x\_{t,j} = \mathcal{T}\_j^{-1} \begin{bmatrix} x\_{t,j,\mathbb{C}} \\ \overline{x\_{t,j,\mathbb{C}}} \end{bmatrix}.$$
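As a quick numerical check (ours, assuming the real block has the rotation form from (2)) that the transformation indeed maps the real block to the complex diagonal form, consider the example *z<sub>j</sub>* = *i* with *c<sub>j</sub>* = 1:

```python
import numpy as np

omega = np.pi / 2                         # unit root z_j = i as an example
z = np.exp(1j * omega)
# real rotation block of the canonical form for c_j = 1
Aj = np.array([[np.cos(omega), np.sin(omega)],
               [-np.sin(omega), np.cos(omega)]])
Tj = np.array([[1, 1j],
               [1, -1j]])                 # T_j with I_{c_j} = 1

# T_j A_j T_j^{-1} should equal diag(conj(z_j), z_j)
Ac = Tj @ Aj @ np.linalg.inv(Tj)
```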

**Lemma A1.** *Let $x_t = [x_{t,0}', x_{t,1}', \dots, x_{t,S/2}', x_{t,\bullet}']'$ where $x_{t,j}$ is generated according to $x_{t+1,j} = \mathcal{A}_j x_{t,j} + \mathcal{K}_j\varepsilon_t$, $t \in \mathbb{N}$, with $\mathcal{A}_j$ as in (2) and $\mathcal{K}_j = [\mathcal{K}_{j,R}', \mathcal{K}_{j,I}']' \in \mathbb{R}^{\delta_j c_j \times s}$, using the iid white noise process $(\varepsilon_t)_{t\in\mathbb{N}}$, where $x_{0,j}$ is deterministic. Further let $(x_{t,\bullet})_{t\in\mathbb{N}}$ denote the stationary solution to the equation $x_{t+1,\bullet} = \mathcal{A}_\bullet x_{t,\bullet} + \mathcal{K}_\bullet\varepsilon_t$ such that $M_\bullet = \mathbb{E}x_{t,\bullet}x_{t,\bullet}' > 0$.*

*(I) Then using $Q_T = \sqrt{(\log\log T)/T}$, for $u_t = \sum_{i=0}^{q}\phi_i\varepsilon_{t+i}$ for arbitrary $q \in \mathbb{N}$, $q < \infty$, and coefficients $\phi_i$, $i = 0, \dots, q$, we have*

$$
\begin{aligned}
\langle x_{t,\bullet}, u_t \rangle &= O(Q_T) &,&\quad \langle u_{t-j}, u_t \rangle - \mathbb{E}u_{t-j}u_t' = O(Q_T),\\
\langle x_{t,j}, x_{t,\bullet} \rangle &= O(\log T) &,&\quad \langle x_{t,j}, u_t \rangle = O(\log T),\\
\langle x_{t,j}, x_{t,k} \rangle / T &= O(\log\log T) &,&\quad j, k = 0, \dots, S/2.
\end{aligned}
$$

*If $(\varepsilon_t)_{t\in\mathbb{Z}}$ only fulfills Assumption 1 then the order bounds hold in probability rather than almost surely.*

*(II) Furthermore for* 0 < *j*, *k* < *S*/2

$$
\begin{aligned}
\langle x_{t,j,\mathbb{C}}, \varepsilon_t \rangle &\stackrel{d}{\to} \frac{1}{2}\int_0^1 W_j\,d\overline{B}_{j,\mathbb{C}}' =: M_j, &\quad
\langle x_{t,j,\mathbb{C}}, x_{t,k,\mathbb{C}} \rangle/T &\stackrel{d}{\to} \begin{cases} \frac{1}{2}\int_0^1 W_j\overline{W}_j' =: N_j &, j = k,\\ 0 &, j \neq k, \end{cases}\\
\langle x_{t,j}, \varepsilon_t \rangle &\stackrel{d}{\to} \frac{1}{2}\begin{bmatrix} \int_0^1 (W_{j,R}\,dB_{j,R}' + W_{j,I}\,dB_{j,I}')\\ \int_0^1 (W_{j,I}\,dB_{j,R}' - W_{j,R}\,dB_{j,I}') \end{bmatrix}, &\quad
\langle x_{t,k}, x_{t,j} \rangle/T &\stackrel{d}{\to} \begin{cases} \frac{1}{2}\begin{bmatrix} \int_0^1 (W_{k,R}W_{k,R}' + W_{k,I}W_{k,I}') & \int_0^1 (W_{k,R}W_{k,I}' - W_{k,I}W_{k,R}')\\ -\int_0^1 (W_{k,R}W_{k,I}' - W_{k,I}W_{k,R}') & \int_0^1 (W_{k,R}W_{k,R}' + W_{k,I}W_{k,I}') \end{bmatrix} &, j = k,\\ 0 &, j \neq k, \end{cases}
\end{aligned}
$$

*where $W_j = W_{j,R} + iW_{j,I} = K_{j,\mathbb{C}}B_{j,\mathbb{C}}$, $K_{j,\mathbb{C}} = K_{j,R} + iK_{j,I}$, $B_{j,\mathbb{C}} = B_{j,R} + iB_{j,I}$, and $B_{j,R}$, $B_{j,I}$ are two independent Brownian motions with covariance matrix $\Omega$. For $j = 0$ and $j = S/2$ the results hold analogously:*

$$
\begin{aligned}
\langle x_{t,0}, \varepsilon_t \rangle &\stackrel{d}{\to} \int_0^1 W_{0,R}\,dB_{0,R}' &,&\quad \langle x_{t,0}, x_{t,0} \rangle/T \stackrel{d}{\to} \int_0^1 W_{0,R}W_{0,R}',\\
\langle x_{t,S/2}, \varepsilon_t \rangle &\stackrel{d}{\to} \int_0^1 W_{S/2,R}\,dB_{S/2,R}' &,&\quad \langle x_{t,S/2}, x_{t,S/2} \rangle/T \stackrel{d}{\to} \int_0^1 W_{S/2,R}W_{S/2,R}'.
\end{aligned}
$$

**Proof.** Most evaluations in (I) are standard, see for example Lemma 4 in [38]. (II) follows from the results in Section 4 of [2] for the complex valued representations or [39] for the corresponding real case.
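The different stochastic orders in Lemma A1 can be illustrated by simulation: a state component with a seasonal unit root has a sample second moment of order $T$, while a stationary component has a bounded one. A minimal sketch with a fixed seed; all parameter values are illustrative:

```python
import numpy as np

# A state with a unit root at omega = pi/2 versus a stable AR(1) state.
rng = np.random.default_rng(0)
T = 20000
eps = rng.standard_normal(T)

z = np.exp(1j * np.pi / 2)           # seasonal unit root (quarterly frequency)
x_unit = np.zeros(T, dtype=complex)  # x_{t+1} = z x_t + eps_t
x_stat = np.zeros(T)                 # x_{t+1} = 0.5 x_t + eps_t
for t in range(T - 1):
    x_unit[t + 1] = z * x_unit[t] + eps[t]
    x_stat[t + 1] = 0.5 * x_stat[t] + eps[t]

ms_unit = np.mean(np.abs(x_unit) ** 2)  # of order T: diverges with sample size
ms_stat = np.mean(x_stat ** 2)          # of order 1: converges to the variance
print(ms_unit, ms_stat)
```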

*Appendix A.2. Perturbation of Eigendecompositions*

**Lemma A2** (Rayleigh–Schrödinger expansion)**.** *Let $\hat{A}_t = A - \delta A_t$ where $\|\delta A_t\| \to 0$ and where $A = U\Lambda U^{-1} \in \mathbb{R}^{n\times n}$, $\Lambda = \mathrm{diag}(\lambda_1 I_{c_1}, \dots, \lambda_J I_{c_J})$, $\sum_{j=1}^{J} c_j = n$, is diagonalizable. $U = [U_1, \dots, U_J] \in \mathbb{C}^{n\times n}$ is a nonsingular matrix such that for $U^{-1} = [V_1, \dots, V_J]'$ we have $V_j'U_j = I_{c_j}$. Then for each circle $B(\lambda_j, \delta)$ around $\lambda_j$ not containing any other eigenvalue of $A$ there exist, from some $t$ onwards, matrices $\hat{U}_{t,j}$ and $\hat{B}_{t,j}$ such that $\hat{A}_t\hat{U}_{t,j} = \hat{U}_{t,j}\hat{B}_{t,j}$, where the eigenvalues of $\hat{B}_{t,j}$ are the eigenvalues of $\hat{A}_t$ contained in $B(\lambda_j, \delta)$.*

*Then $\hat{U}_{t,j} = \sum_{k=0}^{\infty} Z_k$, $\hat{B}_{t,j} = \sum_{k=0}^{\infty} C_k$ where*

$$
\begin{aligned}
Z_0 &= U_j &,&\quad C_0 = \lambda_j I_{c_j},\\
Z_k &= \Sigma\Big(\delta A_t Z_{k-1} + \sum_{i=1}^{k-1} Z_{k-i}C_i\Big) &,&\quad C_k = -V_j'\delta A_t Z_{k-1}.
\end{aligned}
$$

*Here $\Sigma = U(\Lambda - I_n\lambda_j)^+U^{-1}$ where $\mathrm{diag}(s_1, \dots, s_n)^+ = \mathrm{diag}(s_1^+, \dots, s_n^+)$ and $x^+ = 1/x$ for $x \neq 0$ and zero else, that is, $(\Lambda - I_n\lambda_j)^+$ denotes a quasi-inverse. Furthermore for $\rho = \|\delta A_t\| < 1$ we obtain $\|C_k\| \leq \mu_C\rho^k$, $\|Z_k\| \leq \mu_Z\rho^k$, $k \geq 0$.*

The results follow directly from Section 2.9 of [23], see in particular Proposition 2.9.1 and the discussion below it. Further note that the results hold for each root separately, and hence the restriction $j = 1$ needs to hold only for the investigated root for the results to apply. Finally note that a second order approximation $\hat{U}_{t,j} = Z_0 + Z_1 + Z_2$ and $\hat{B}_{t,j} = C_0 + C_1 + C_2$ is accurate to the order $o(\|\delta A_t\|^2)$.
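A minimal numerical sketch of the first order terms: for a diagonal $A$ (so $U = V = I$) the approximation $C_0 + C_1 = \lambda_j - V_j'\delta A_t U_j$ reproduces the perturbed eigenvalue up to an error of order $\|\delta A_t\|^2$. All values below are illustrative:

```python
import numpy as np

# First order Rayleigh-Schroedinger check for A_hat = A - dA with A diagonal.
A = np.diag([1.0, 2.0, 3.0])
dA = 1e-3 * np.array([[0.5, 0.2, -0.1],
                      [0.3, -0.4, 0.2],
                      [-0.2, 0.1, 0.6]])
A_hat = A - dA

j = 0                                  # investigate the root lambda_1 = 1
lam = A[j, j]
first_order = lam - dA[j, j]           # C_0 + C_1 = lambda_j - V_j' dA U_j

lam_exact = np.linalg.eigvals(A_hat)
lam_exact = lam_exact[np.argmin(np.abs(lam_exact - lam))].real

err0 = abs(lam_exact - lam)            # zeroth order error, of order ||dA||
err1 = abs(lam_exact - first_order)    # first order error, of order ||dA||^2
print(err0, err1)
```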

#### *Appendix A.3. Random Transformation of Systems*

**Lemma A3.** *Let the assumptions of Theorem 1 hold and use the same notation as given there. Let $(\tilde{\mathcal{A}}, \tilde{\mathcal{C}}, \tilde{\mathcal{K}})$ denote a sequence of systems converging a.s. to $(\mathcal{A}, \mathcal{C}, \mathcal{K})$ such that $\|\sqrt{T}(\tilde{\mathcal{A}} - \mathcal{A})D_x^{-1}\| = O((\log T)^a)$, $\|\sqrt{T}(\tilde{\mathcal{K}} - \mathcal{K})\| = O((\log T)^a)$, $\|\sqrt{T}(\tilde{\mathcal{C}} - \mathcal{C})D_x^{-1}\| = O((\log T)^a)$, and let $\mathcal{A}_0 = S_0\mathcal{A}S_0^{-1} = \mathrm{diag}(\mathcal{A}_{0,11}, \mathcal{A}_{0,22})$, $\mathcal{K}_0 = S_0\mathcal{K}$, $\mathcal{C}_0 = \mathcal{C}S_0^{-1}$. Further let*

$$\mathcal{S}\_T = \begin{bmatrix} \mathcal{S}\_{T,11} & \mathcal{S}\_{T,12} \\ 0 & \mathcal{S}\_{T,22} \end{bmatrix} \to \mathcal{S}\_0.$$

*such that $\|\sqrt{T}(S_T - S_0)D_x^{-1}\| = O((\log T)^a)$. Let $\Delta_S = \sqrt{T}(S_T - S_0)D_x^{-1}$, $\Delta_{\mathcal{A}} = \sqrt{T}(\tilde{\mathcal{A}} - \mathcal{A})D_x^{-1}$ and denote the sequence of transformed systems as $(\hat{\mathcal{A}}, \hat{\mathcal{C}}, \hat{\mathcal{K}}) = (S_T\tilde{\mathcal{A}}S_T^{-1}, \tilde{\mathcal{C}}S_T^{-1}, S_T\tilde{\mathcal{K}})$. Let the block entries of $S_0$ be denoted as $S_{ij}$ and the blocks of $\Delta_S$ be denoted as $\Delta_{S,ij}$. Then:*

$$
\begin{aligned}
T(\hat{\mathcal{A}}_{11} - \mathcal{A}_{0,11}) &= (\Delta_{S,11}\mathcal{A}_{11} - \mathcal{A}_{0,11}\Delta_{S,11} + S_{11}\Delta_{\mathcal{A},11} + S_{12}\Delta_{\mathcal{A},21})S_{11}^{-1} + o(1),\\
\sqrt{T}(\hat{\mathcal{A}}_{12} - \mathcal{A}_{0,12}) &= (S_{11}\Delta_{\mathcal{A},12} + S_{12}\Delta_{\mathcal{A},22})S_{22}^{-1} + \Delta_{S,12}S_{22}^{-1}\mathcal{A}_{0,22} - \mathcal{A}_{0,11}\Delta_{S,12}S_{22}^{-1} + o(1),\\
T(\hat{\mathcal{A}}_{21} - \mathcal{A}_{0,21}) &= S_{22}\Delta_{\mathcal{A},21}S_{11}^{-1} + o(1),\\
\sqrt{T}(\hat{\mathcal{A}}_{22} - \mathcal{A}_{0,22}) &= \Delta_{S,22}S_{22}^{-1}\mathcal{A}_{0,22} + S_{22}\Delta_{\mathcal{A},22}S_{22}^{-1} - \mathcal{A}_{0,22}\Delta_{S,22}S_{22}^{-1} + o(1),\\
\sqrt{T}(\hat{\mathcal{K}} - \mathcal{K}_0) &= \begin{bmatrix} \Delta_{S,12}\mathcal{K}_2 + S_{11}\sqrt{T}(\tilde{\mathcal{K}}_1 - \mathcal{K}_1) + S_{12}\sqrt{T}(\tilde{\mathcal{K}}_2 - \mathcal{K}_2)\\ \Delta_{S,22}\mathcal{K}_2 + S_{22}\sqrt{T}(\tilde{\mathcal{K}}_2 - \mathcal{K}_2) \end{bmatrix} + o(1),\\
\sqrt{T}(\hat{\mathcal{C}} - \mathcal{C}_0)D_x^{-1} &= \sqrt{T}(\tilde{\mathcal{C}} - \mathcal{C})D_x^{-1}\begin{bmatrix} S_{11}^{-1} & 0\\ 0 & S_{22}^{-1} \end{bmatrix} - \mathcal{C}_0\begin{bmatrix} \Delta_{S,11}S_{11}^{-1} & \Delta_{S,12}S_{22}^{-1}\\ 0 & \Delta_{S,22}S_{22}^{-1} \end{bmatrix} + o(1).
\end{aligned}
$$

**Proof.** The proof follows from straightforward computations using the various orders of convergence by neglecting higher order terms.

#### **Appendix B. Reduced Rank Regression with Integrated Variables**

The main results of this paper are based on a more general result documented in [24] (henceforth called BRRR). BRRR uses a slightly different setting and in particular a different dgp. The following lemma provides the essence of the results of BRRR that will be used below.

**Lemma A4.** *Let $(y_t)_{t\in\mathbb{N}}$, $(z_t^r)_{t\in\mathbb{N}}$, $(z_t^u)_{t\in\mathbb{N}}$, $y_t \in \mathbb{R}^s$, $z_t^r \in \mathbb{R}^m$, $z_t^u \in \mathbb{R}^l$, be three processes related via*

$$y_t = b_r z_t^r + b_u z_t^u + u_t$$

*where the zero mean stationary process $(u_t)_{t\in\mathbb{N}}$ is such that $\mathbb{E}u_t(z_t^r)' = 0$, $\mathbb{E}u_t(z_t^u)' = 0$, $\mathbb{E}u_tu_t' > 0$ and where $n = \mathrm{rank}(b_r) < \min(s, m)$, that is, $b_r$ is of reduced rank.*

*Further assume that there exist square nonsingular matrices $\mathcal{T}_y \in \mathbb{R}^{s\times s}$, $\mathcal{T}_r \in \mathbb{R}^{m\times m}$, $\mathcal{T}_u \in \mathbb{R}^{l\times l}$ such that*

$$\tilde{y}_t = \mathcal{T}_y y_t = (\mathcal{T}_y b_r \mathcal{T}_r^{-1})(\mathcal{T}_r z_t^r) + (\mathcal{T}_y b_u \mathcal{T}_u^{-1})(\mathcal{T}_u z_t^u) + \mathcal{T}_y u_t = \tilde{b}_r\tilde{z}_t + \tilde{b}_u\tilde{z}_t^u + \tilde{u}_t$$

*such that with $c_\bullet = n - c$ we have*

$$
\tilde{b}_r = \begin{bmatrix} I_c & 0 & 0\\ 0 & 0 & \tilde{b}_{r,\bullet} \end{bmatrix}, \quad \tilde{b}_{r,\bullet} = \tilde{O}_\bullet\Gamma_\bullet', \quad \tilde{O}_\bullet \in \mathbb{R}^{(s-c)\times c_\bullet}, \quad \Gamma_\bullet \in \mathbb{R}^{m_\bullet\times c_\bullet}.
$$

*Here the partitioning corresponds to $\tilde{z}_t = [\tilde{z}_{t,1}', \tilde{z}_{t,2}', \tilde{z}_{t,\bullet}']'$ where $\tilde{z}_{t,1} \in \mathbb{R}^c$, $\tilde{z}_{t,2} \in \mathbb{R}^{m-c-m_\bullet}$ are MFI(1) processes and $(\tilde{z}_{t,\bullet})_{t\in\mathbb{N}}$, $\tilde{z}_{t,\bullet} \in \mathbb{R}^{m_\bullet}$, is stationary; $\tilde{z}_t^u = [(\tilde{z}_{t,1}^u)', (\tilde{z}_{t,\bullet}^u)']'$ where $(\tilde{z}_{t,1}^u)_{t\in\mathbb{N}}$ is an MFI(1) process and $(\tilde{z}_{t,\bullet}^u)_{t\in\mathbb{N}}$ is stationary, and where the following bounds hold ($\tilde{z}_{t,:} = [\tilde{z}_{t,1}', \tilde{z}_{t,2}']'$):*

$$
\begin{aligned}
\|\langle \tilde{u}_t, \tilde{u}_t \rangle\| &= O(1) &,&\quad \|\langle \tilde{u}_t, \tilde{z}_{t,\bullet} \rangle\| = O(Q_T) &,&\quad \|\langle \tilde{u}_t, \tilde{z}_{t,\bullet}^u \rangle\| = O(Q_T),\\
\|\langle \tilde{u}_t, \tilde{u}_t \rangle - \mathbb{E}\tilde{u}_t\tilde{u}_t'\| &= O(Q_T) &,&\quad \|\langle \tilde{u}_t, \tilde{z}_{t,:} \rangle\| = O(\log T) &,&\quad \|\langle \tilde{u}_t, \tilde{z}_{t,1}^u \rangle\| = O(\log T),\\
\hat{M}_\bullet &= \left\langle \begin{bmatrix} \tilde{z}_{t,\bullet}\\ \tilde{z}_{t,\bullet}^u \end{bmatrix}, \begin{bmatrix} \tilde{z}_{t,\bullet}\\ \tilde{z}_{t,\bullet}^u \end{bmatrix} \right\rangle &,&\quad \|\hat{M}_\bullet^{-1}\| = O(1) &,&\quad \|\hat{M}_\bullet\| = O(1), \quad M_\bullet > 0,\\
\hat{M}_1 &= \left\langle \begin{bmatrix} \tilde{z}_{t,:}\\ \tilde{z}_{t,1}^u \end{bmatrix}, \begin{bmatrix} \tilde{z}_{t,:}\\ \tilde{z}_{t,1}^u \end{bmatrix} \right\rangle &,&\quad \|\hat{M}_1\|/T = O(\log\log T) &,&\quad \|(\hat{M}_1)^{-1}\| = O(Q_T^2),\\
\left\|\left\langle \begin{bmatrix} \tilde{z}_{t,\bullet}\\ \tilde{z}_{t,\bullet}^u \end{bmatrix}, \begin{bmatrix} \tilde{z}_{t,:}\\ \tilde{z}_{t,1}^u \end{bmatrix} \right\rangle\right\| &= O(\log T) &,&\quad \|\hat{M}_\bullet - M_\bullet\| = O(Q_T).
\end{aligned}
$$

*Then the reduced rank regression estimator $\hat{b}_{RRR} = [\hat{b}_{r,RRR}, \hat{b}_{u,RRR}]$ maximizing the Gaussian likelihood subject to $\mathrm{rank}(b_r) = n = c + c_\bullet$ is consistent: $\|\hat{b}_{RRR} - b\| = O((\log T)^a/\sqrt{T})$ for some $a < \infty$. Furthermore $\tilde{b}_{RRR,r} - \tilde{b}_r = [O((\log T)^a/T), O((\log T)^a/\sqrt{T})]$ with $\tilde{b}_{RRR,r} = \mathcal{T}_y\hat{b}_{RRR,r}\mathcal{T}_r^{-1}$, where the second block has $m_\bullet$ columns and corresponds to the stationary components of the regressor vector.*

**Proof.** The lemma slightly extends the results of BRRR by imposing high level assumptions instead of low level assumptions on the data generating process. The proof hence consists of adjusting the proof in BRRR. In the following we only indicate where arguments in BRRR need to be replaced; a detailed proof would replicate much of the arguments in BRRR and hence is omitted.

The representation of Theorem 3.1 in BRRR is contained in the assumptions. Then consistency follows from examining the proof of the first part of Theorem 3.2 in BRRR: essential for the norm bounds are Lemma A.1 (I) and (III). The norm bounds stated under point (I) are directly assumed in this lemma except for the filtered version using *nt* in place of *xt*. Instead, here the results for *nt* which are needed in the proof of Theorem 3.2 of BRRR are directly assumed. (III) then follows. Lemmas A.3–A.5 in BRRR do not depend on the assumptions on the various processes and hence continue to hold. Then the proof for consistency in Appendix A.3.1 of BRRR only uses these norm bounds referring also to [38] (which is also only based on the norm bounds contained in the assumptions of this lemma) and hence continues to hold.
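For concreteness, the Gaussian reduced rank regression estimator referred to in Lemma A4 can be computed in closed form from the sample second moments via a whitened singular value decomposition. The following sketch omits the additional regressor $z_t^u$ and all unit root features; the simulated system and all names are illustrative:

```python
import numpy as np

def matpow(S, p):
    """Symmetric matrix power via the eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * w ** p) @ V.T

def rrr(Y, Z, n):
    """Gaussian reduced rank regression: rank-n estimate of b in y_t = b z_t + u_t.
    Y is T x s, Z is T x m; returns the s x m rank-n coefficient estimate."""
    T = Y.shape[0]
    Syy, Szz, Syz = Y.T @ Y / T, Z.T @ Z / T, Y.T @ Z / T
    M = matpow(Syy, -0.5) @ Syz @ matpow(Szz, -0.5)
    U, _, _ = np.linalg.svd(M)
    # project the OLS estimate onto the n leading whitened directions
    P = matpow(Syy, 0.5) @ U[:, :n] @ U[:, :n].T @ matpow(Syy, -0.5)
    return P @ Syz @ np.linalg.inv(Szz)

# simulate a rank-1 coefficient matrix and recover it
rng = np.random.default_rng(1)
T, s, m, n = 5000, 3, 4, 1
b = np.outer([1.0, -0.5, 0.25], [0.8, 0.4, -0.2, 0.1])   # rank 1
Z = rng.standard_normal((T, m))
Y = Z @ b.T + 0.1 * rng.standard_normal((T, s))
b_hat = rrr(Y, Z, n)
print(np.round(b_hat, 3))
```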

#### **Appendix C. Proofs of the Theorems**

*Appendix C.1. Proof of Theorem 1*

For proving consistency of the transfer function estimators it is sufficient to find a (possibly) random matrix $\tilde{S}_T$ such that the least squares estimates $(\tilde{\mathcal{A}}, \tilde{\mathcal{C}}, \tilde{\mathcal{K}})$ of one representation $(\mathcal{A}, \mathcal{C}, \mathcal{K})$ of the true system obtained using $\tilde{x}_t := \tilde{S}_T\hat{x}_t$ converge (a.s.) to $(\mathcal{A}, \mathcal{C}, \mathcal{K})$. This will be done in two steps: First a particular basis (which is not realizable in practice) is chosen such that $\tilde{\mathcal{K}}_p - \mathcal{K}_p = o(1)$ sufficiently fast, such that in the second step the regressions in the system equations based on the resulting state estimate $\tilde{x}_t$ are consistent. The derivation of the first step will also provide an approximation of the error term which can be used to derive the asymptotic distribution.

#### Appendix C.1.1. Proof of Theorem 1 (I)

The central step in CVA is the solution of the RRR problem. The following proof draws heavily on the results of [24] (henceforth called BRRR), collected in Lemma A4 for easier reference. As in BRRR, in order to derive the asymptotic properties we first transform the vectors to separate stationary and nonstationary terms. To achieve the separation let $Z_t = [y_{t-1}', y_{t-2}', \dots, y_{t-S}']' \in \mathbb{R}^{sS}$. Then for $p = kS$ we obtain

$$Y\_{t,p}^- = \begin{pmatrix} y\_{t-1} \\ y\_{t-2} \\ \vdots \\ y\_{t-S} \\ y\_{t-S-1} \\ \vdots \\ y\_{t-kS} \end{pmatrix} = \begin{pmatrix} Z\_t \\ Z\_{t-S} \\ \vdots \\ Z\_{t-(k-1)S} \end{pmatrix}.$$

It is easy to see that for each $j$ the process $(Z_{rS-j})_{r\in\mathbb{N}}$ is an $I(1)$ process. Moreover, the strict minimum-phase condition for $(\mathcal{A}_\circ, \mathcal{C}_\circ, \mathcal{K}_\circ)$ implies that the strict minimum-phase condition also holds for the system corresponding to $(Z_{rS-j})_{r\in\mathbb{N}}$.
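The effect of the annual sub-sampling can be illustrated on a seasonal random walk $y_t = y_{t-S} + \varepsilon_t$: the annual difference is white noise and every seasonal subsample of the stacked process is an ordinary random walk, hence $I(1)$. A small sketch for quarterly data; all values are illustrative:

```python
import numpy as np

# Seasonal random walk: y_t = y_{t-S} + eps_t, i.e., (1 - L^S) y_t = eps_t.
rng = np.random.default_rng(2)
S, T = 4, 400
eps = rng.standard_normal(T)
y = np.zeros(T)
for t in range(S, T):
    y[t] = y[t - S] + eps[t]

# The annual difference is white noise.
annual_diff = y[S:] - y[:-S]

# Stack Z_t = [y_{t-1}, ..., y_{t-S}]' sampled annually: each coordinate of
# (Z_{rS})_r is one seasonal subsample, i.e., an ordinary random walk.
Z = np.column_stack([y[S - 1 - j::S] for j in range(S)])
print(Z.shape)
```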

Define the transformation $\mathcal{T}_S := [\mathcal{O}_{S,1}, \mathcal{O}_{S,\perp}]$ where $\mathcal{O}_{S,1} \in \mathbb{R}^{sS\times c}$ denotes the matrix containing the first $c$ columns of $\mathcal{O}_S$ for the system $(\mathcal{A}_\circ, \mathcal{C}_\circ, \mathcal{K}_\circ)$ in the canonical form. Further, $\mathcal{O}_{S,\perp}$ is a block column of an orthonormal matrix such that $\mathcal{O}_{S,\perp}'\mathcal{O}_{S,1} = 0$. Then the argument of [20] shows that in $\mathcal{T}_S Z_t$ the first $c$ components are integrated while the remaining $sS - c$ components are stationary. Then consider for $p = kS < t \leq T - f + 1$ (using $\mathcal{O}_{S,1}^\dagger = (\mathcal{O}_{S,1}'\mathcal{O}_{S,1})^{-1}\mathcal{O}_{S,1}'$)

$$
\tilde{\boldsymbol{z}}\_{t,p} := \begin{bmatrix}
\mathcal{O}\_{\boldsymbol{S},1}^{\dagger} \mathcal{O}\_{\boldsymbol{S}} (\mathbf{x}\_{t} - \boldsymbol{\mathcal{A}}\_{\circ}^{p} \mathbf{x}\_{t-p}) \\
\mathcal{O}\_{\boldsymbol{S},\perp}^{\dagger} \mathbf{Z}\_{t} \\
\mathcal{O}\_{\boldsymbol{S},1}^{\dagger} (\mathbf{Z}\_{t} - \mathbf{Z}\_{t-\mathcal{S}}) \\
\mathcal{O}\_{\boldsymbol{S},\perp}^{\prime} \mathbf{Z}\_{t-\mathcal{S}} \\
\vdots \\
\mathcal{O}\_{\boldsymbol{S},1}^{\dagger} (\mathbf{Z}\_{t-(k-2)\mathcal{S}} - \mathbf{Z}\_{t-(k-1)\mathcal{S}}) \\
\mathcal{O}\_{\boldsymbol{S},\perp}^{\prime} \mathbf{Z}\_{t-(k-1)\mathcal{S}}
\end{bmatrix}, \quad \tilde{\mathcal{Y}}\_{t} := \begin{bmatrix}
\mathcal{O}\_{f,1}^{\dagger} \\
\mathcal{O}\_{f,\perp}^{\dagger}
\end{bmatrix} \mathbf{Y}\_{t,f}^{+}.\tag{A1}
$$

Here $\mathcal{O}_{f,\perp}$ is a matrix such that $\mathcal{O}_{f,\perp}'\mathcal{O}_{f,1} = 0$, $\mathcal{O}_{f,\perp}'\mathcal{O}_{f,\perp} = I$. Obviously $\tilde{z}_{t,p}$ is a linear transformation of $Y_{t,p}^-$ and $\tilde{y}_t$ of $Y_{t,f}^+$. It can be shown that the linear transformation is nonsingular such that there is a one-to-one relation between $Y_{t,p}^-$ and $\tilde{z}_{t,p}$. In $\tilde{z}_{t,p}$ and $\tilde{y}_t$ only the first $c$ components are unit root processes, the remaining components being stationary.
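An orthonormal complement such as $\mathcal{O}_{f,\perp}$ (or $\mathcal{O}_{S,\perp}$) can be computed from a full singular value decomposition of the block of interest. A minimal sketch with illustrative dimensions:

```python
import numpy as np

# Orthonormal complement of the column space of O_1 via a full SVD.
rng = np.random.default_rng(3)
sS, c = 12, 2
O_1 = rng.standard_normal((sS, c))       # e.g., first c columns of O_S

U, _, _ = np.linalg.svd(O_1, full_matrices=True)
O_perp = U[:, c:]                        # sS x (sS - c), orthonormal columns

print(np.linalg.norm(O_perp.T @ O_1))    # numerically zero: O_perp' O_1 = 0
```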

For $p \neq kS$ the final $p - kS$ block rows of $\tilde{z}_{t,p}$ are defined as $y_{t-(k-1)S-j} - y_{t-kS-j}$, $j = 1, \dots, p - kS$. Clearly these components are also stationary.

Partition $\tilde{z}_{t,p} = [\tilde{z}_{t,1}', \tilde{z}_{t,\bullet}']'$, $\tilde{z}_{t,1} \in \mathbb{R}^c$, into its first $c$ and the remaining coordinates (omitting the subscript $p$ on the right hand side for notational convenience). Similarly partition $\tilde{y}_t = [\tilde{y}_{t,1}', \tilde{y}_{t,\bullet}']'$, $\tilde{y}_{t,1} \in \mathbb{R}^c$. Using these transformed quantities, $Y_{t,f}^+ = \beta_1 Y_{t,p}^- + N_{t,f}^+$ can be written as

$$\tilde{y}_t = \tilde{b}_1\tilde{z}_{t,p} + \tilde{N}_{t,f,p}^+ = \begin{bmatrix} \tilde{y}_{t,1}\\ \tilde{y}_{t,\bullet} \end{bmatrix} = \begin{bmatrix} I_c & 0\\ 0 & \tilde{b}_{\bullet,p} \end{bmatrix}\begin{bmatrix} \tilde{z}_{t,1}\\ \tilde{z}_{t,\bullet} \end{bmatrix} + \mathcal{O}_f\bar{\mathcal{A}}_\circ^p x_{t-p} + \begin{bmatrix} \tilde{\varepsilon}_{t,1}\\ \tilde{\varepsilon}_{t,\bullet} \end{bmatrix} \tag{A2}$$

where

$$
\tilde{b}_1 = \begin{bmatrix} I_c & 0\\ 0 & \tilde{b}_{\bullet,p} \end{bmatrix}, \quad \tilde{b}_{\bullet,p} = \mathbb{E}\tilde{y}_{t,\bullet}\tilde{z}_{t,\bullet}'\left(\mathbb{E}\tilde{z}_{t,\bullet}\tilde{z}_{t,\bullet}'\right)^{-1} = O_{\bullet,p}\Gamma_{\bullet,p}', \quad \tilde{b}_1 = O_p\Gamma_p'
$$

and where $\tilde{b}_{\bullet,p}$ is of rank $n - c$, providing a representation of the form given in Theorem 3.1 of BRRR except that the error term $\tilde{N}_{t,f,p}^+$ (defined by the equation) is not white. Finally, (A2) also defines the sub-blocks $\tilde{\varepsilon}_{t,i}$ of $\tilde{N}_{t,f,p}^+$, which are hence linear combinations of $E_{t,f}^+$ and therefore typically MA($f$) processes. Note that $\tilde{z}_{t,1}, \tilde{z}_{t,\bullet}, \tilde{y}_{t,\bullet}$ depend on the choice of $f$ and $p$, which is not reflected in the notation.

Here $\left(\mathbb{E}\tilde{z}_{t,\bullet}\tilde{z}_{t,\bullet}'\right)^{-1}$ and $\mathbb{E}\tilde{y}_{t,\bullet}\tilde{z}_{t,\bullet}'$ are worth a remark: for $p = kS$ the results of [20] can be used directly to obtain upper and lower bounds for the norms of these matrices uniformly in $k \in \mathbb{N}$. For $p \neq kS$ the additional rows in $\tilde{z}_{t,\bullet}$ add entries to $\mathbb{E}\tilde{y}_{t,\bullet}\tilde{z}_{t,\bullet}'$ that are of order $O(\lambda^p)$ for some $0 < \lambda < 1$, as $y_t - y_{t-S}$ is a VARMA process. Similarly, the smallest eigenvalue of $\mathbb{E}\tilde{z}_{t,\bullet}\tilde{z}_{t,\bullet}'$ can be bounded from below based on arguments for $p = kS$ following [20], which in turn refer to Theorem 6.6.10 of [22]. The additional terms for $p \neq kS$ correspond to backward innovations with nonsingular covariance matrix, thus also leading to a lower bound on the smallest eigenvalue uniformly in $k$. (The backward innovations representation for a stationary VARMA process $(y_t)_{t\in\mathbb{Z}}$ equals $y_t = \sum_{j=1}^{\infty} k_j^b y_{t+j} + \varepsilon_t^b$ and can be obtained from the complex conjugate of the spectral density. Nonsingularity of the spectral density implies that the backward innovations $\varepsilon_t^b$ have nonsingular covariance matrix. This implies a lower bound on the accuracy with which components of $y_{t-(k-1)S-j}$ can be predicted based on $y_{t-i}$, $i \leq (k-1)S$.)

Furthermore, the strict minimum-phase assumption for the state space representation $(\mathcal{A}_\circ, \mathcal{C}_\circ, \mathcal{K}_\circ)$ of the process $(y_t)_{t\in\mathbb{Z}}$ implies the strict minimum-phase assumption for the sub-sampled process $(Z_{kS+j})_{k\in\mathbb{Z}}$. Thus the arguments of [20] show that $[\tilde{b}_{\bullet,p}, 0] \to \tilde{b}_{\bullet,\infty}$ where the norm of the difference is of order $O(\|\bar{\mathcal{A}}_\circ^p\|)$. The increase of $p$ as a function of the sample size jointly with the strict minimum-phase assumption implies that $O(\|\bar{\mathcal{A}}_\circ^p\|) = o(T^{-1})$. This also implies that $\tilde{\mathcal{O}}_f\bar{\mathcal{A}}_\circ^p x_{t-p} = o_P(T^{-1/2})$.
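The requirement on the growth of $p$ as a function of $T$ can be made concrete: choosing $p \geq -d\log T/\log\rho_0$ with $d > 1$ yields $\rho_0^p \leq T^{-d} = o(T^{-1})$. A small numeric sketch, where the value of $\rho_0$ is illustrative:

```python
import math

# rho0 bounds the spectral radius of A-bar (illustrative value); d > 1.
rho0, d = 0.8, 1.5
Ts = [100, 1000, 10000]
ps = [math.ceil(-d * math.log(T) / math.log(rho0)) for T in Ts]
decay = [rho0 ** p for p in ps]
# rho0^p <= T^(-d), hence T * rho0^p -> 0 for d > 1
print(list(zip(Ts, ps, decay)))
```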

Correspondingly, there exists a limiting decomposition $\tilde{b}_{\bullet,\infty} = O_\bullet\Gamma_\bullet'$ such that $\Gamma_\bullet'S_\bullet = I_{n-c}$, where $S_\bullet$ denotes a selector matrix whose columns contain vectors of the canonical basis of $\mathbb{R}^\infty$. Since $[\mathcal{K}_\circ, (\mathcal{A}_\circ - \mathcal{K}_\circ\mathcal{C}_\circ)\mathcal{K}_\circ, (\mathcal{A}_\circ - \mathcal{K}_\circ\mathcal{C}_\circ)^2\mathcal{K}_\circ, \dots, (\mathcal{A}_\circ - \mathcal{K}_\circ\mathcal{C}_\circ)^{n-1}\mathcal{K}_\circ]$ is of full row rank, it can be assumed that $S_\bullet$ only has nonzero entries in its first $ns$ rows. Denoting the submatrix of the first $ps$ rows by $S_{p,2}$, then also $[\Gamma_\bullet']_{1:p}S_{p,2} = I_{n-c}$, where $[\cdot]_{1:p}$ denotes the first $p$ block columns of a matrix. This fixes a unique decomposition of $\tilde{b}_\bullet$ and hence $O_\bullet$ and $\Gamma_\bullet$ do not depend on $p$. Convergence of $\tilde{b}_{\bullet,p}$ to $\tilde{b}_\bullet$ jointly with the lower bound on $p(T)$ then implies convergence of order $o(T^{-1})$ of $O_{\bullet,p}$ to $O_\bullet$ and of $\Gamma_{\bullet,p}'$ to $[\Gamma_\bullet']_{1:p}$, using the decomposition of $\tilde{b}_{\bullet,p}$ such that $\Gamma_{\bullet,p}'S_{p,2} = I_{n-c}$. Correspondingly $O_p \to O$ and $\Gamma_p' - [\Gamma']_{1:p} \to 0$.

Therefore the reduced rank regression in the CVA procedure shows the same structure as investigated in Lemma A4, with the differences that $\tilde{z}_{t,2}$ and $\tilde{z}_t^u$ do not occur and that $\tilde{z}_{t,\bullet}$ has increasing dimension as a function of the sample size. The next lemma therefore extends the results of BRRR to the RRR used in CVA:

In the following we will use a generic *<sup>a</sup>* <sup>∈</sup> <sup>N</sup> in statements like *<sup>O</sup>*((log *<sup>T</sup>*)*a*), not necessarily the same in each occurrence. In this sense e.g., the product of two terms that are *O*((log *T*)*a*) is again taken to be *O*((log *T*)*a*).

**Lemma A5.** *Let the assumptions of Theorem 1 hold where additionally* (*εt*)*t*∈<sup>Z</sup> *is iid. Introduce the notation*

$$D_z = \mathrm{diag}(T^{-1/2}I_c, I_{ps-c}), \quad D_y = \mathrm{diag}(T^{-1/2}I_c, I_{fs-c}), \quad D_x = \mathrm{diag}(T^{-1/2}I_c, I_{n-c}).$$

*Let G*¯ *<sup>p</sup> denote a solution to*

$$(D_z\langle\tilde{z}_{t,p}, \tilde{y}_t\rangle D_y)(D_y\langle\tilde{y}_t, \tilde{y}_t\rangle D_y)^{-1}(D_y\langle\tilde{y}_t, \tilde{z}_{t,p}\rangle D_z)\bar{G}_p = (D_z\langle\tilde{z}_{t,p}, \tilde{z}_{t,p}\rangle D_z)\bar{G}_p\bar{R}^2$$

*using the notation of* (A1)*, where $\bar{R}^2 \to \Theta^2 = \mathrm{diag}(I_c, \Theta_\bullet) \in \mathbb{R}^{n\times n}$ and where $\bar{G}_p$ is normalized such that $\bar{G}_{1,1,p} = I_c$, $\bar{G}_{\bullet,2,p}'S_{p,2} = I_{n-c}$ for a selector matrix $S_{p,2}$. Further let*

$$
\bar{\Gamma}_p = \begin{bmatrix} I_c & 0\\ 0 & \bar{\Gamma}_{\bullet,p} \end{bmatrix}, \quad \bar{\Gamma}_{\bullet,p}'S_{p,2} = I_{n-c}
$$

*denote the solution to the decoupled problem where the stationary and the nonstationary subproblem are separated:*

$$
\begin{pmatrix}
\langle \tilde{z}_{t,1}, \tilde{y}_{t,1} \rangle \langle \tilde{y}_{t,1}, \tilde{y}_{t,1} \rangle^{-1} \langle \tilde{y}_{t,1}, \tilde{z}_{t,1} \rangle \bar{\Gamma}_{1,1,p} \\
\langle \tilde{z}_{t,\bullet}, \tilde{y}_{t,\bullet} \rangle \langle \tilde{y}_{t,\bullet}, \tilde{y}_{t,\bullet} \rangle^{-1} \langle \tilde{y}_{t,\bullet}, \tilde{z}_{t,\bullet} \rangle \bar{\Gamma}_{\bullet,p}
\end{pmatrix} = \begin{pmatrix}
\langle \tilde{z}_{t,1}, \tilde{z}_{t,1} \rangle \bar{\Gamma}_{1,1,p} \Theta_1 \\
\langle \tilde{z}_{t,\bullet}, \tilde{z}_{t,\bullet} \rangle \bar{\Gamma}_{\bullet,p} \Theta_\bullet
\end{pmatrix}.
$$

*(I) Then if $f \geq n$ is fixed independent of $T$ and $p \geq -d\log T/\log\rho_0$, $d > 1$, $p = o((\log T)^a)$ for some $a < \infty$, the a.s. results of Lemma A.6 (I)–(III) and Lemma A.7 of [24] hold true with $(\log T)^3$ replaced by $(\log T)^a$ for some integer $a < \infty$. In particular $\|\bar{G}_p - \bar{\Gamma}_p\| = O((\log T)^a/T^{1/2})$. (II) Using the notation $\delta G_p := \bar{G}_p - \bar{\Gamma}_p$ define*

$$\tilde{S}_T := \begin{bmatrix} I_c & -\sqrt{T}\,\delta G_{\bullet,1,p}'\langle\tilde{z}_{t,\bullet}, \tilde{z}_{t,\bullet}\rangle\Gamma_{\bullet,p}^\dagger\\ 0 & I_{n-c} \end{bmatrix}, \quad \Gamma_{\bullet,p}^\dagger := \Gamma_{\bullet,p}(\Gamma_{\bullet,p}'\langle\tilde{z}_{t,\bullet}, \tilde{z}_{t,\bullet}\rangle\Gamma_{\bullet,p})^{-1}.$$

*Then for $\tilde{\Gamma}_p := \tilde{S}_T D_x^{-1}\bar{G}_p'D_z$ and*

$$
\Gamma' = \begin{bmatrix} I & 0 \\ 0 & \Gamma'\_{\bullet} \end{bmatrix},
$$

*we obtain $\tilde{\Gamma}_p - [\Gamma']_{1:p} = [O((\log T)^a/T), O((\log T)^a/T^{1/2})]$ where the partitioning corresponds to the partitioning of $\tilde{z}_{t,p}$ into $\tilde{z}_{t,1}$ and $\tilde{z}_{t,\bullet}$. Here $\Gamma_\bullet'$ denotes the right factor of $\tilde{b}_{\bullet,\infty} = O_\bullet\Gamma_\bullet'$ such that $[\Gamma_\bullet']_{1:p}S_{p,2} = I_{n-c}$ holds.*

*(III) Let the assumptions of Theorem 1 hold. Then $\hat{Z}_T := T\,\mathrm{vec}\!\left((\tilde{\Gamma}_p - [\Gamma']_{1:p})\begin{bmatrix} I_c\\ 0 \end{bmatrix}\right)$ converges in distribution.*
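The generalized eigenvalue problem of Lemma A5 is the sample canonical correlation computation at the core of CVA. The following sketch solves $\langle \tilde{z}, \tilde{y}\rangle\langle \tilde{y}, \tilde{y}\rangle^{-1}\langle \tilde{y}, \tilde{z}\rangle G = \langle \tilde{z}, \tilde{z}\rangle G R^2$ by whitening, without the unit root scaling matrices; data and names are illustrative:

```python
import numpy as np

def cca_directions(Zm, Ym, n):
    """Solve <z,y><y,y>^{-1}<y,z> G = <z,z> G R^2 for the n leading
    directions G and the squared canonical correlations R^2."""
    T = Zm.shape[0]
    Szz, Syy, Szy = Zm.T @ Zm / T, Ym.T @ Ym / T, Zm.T @ Ym / T
    w, V = np.linalg.eigh(Szz)
    Szz_half_inv = (V / np.sqrt(w)) @ V.T          # Szz^{-1/2}
    # whitened symmetric problem: ordinary eigendecomposition
    M = Szz_half_inv @ Szy @ np.linalg.solve(Syy, Szy.T) @ Szz_half_inv
    lam, W = np.linalg.eigh(M)
    idx = np.argsort(lam)[::-1][:n]
    G = Szz_half_inv @ W[:, idx]                   # generalized eigenvectors
    return G, lam[idx]

rng = np.random.default_rng(4)
T = 3000
z = rng.standard_normal((T, 3))
y = np.column_stack([z[:, 0] + 0.1 * rng.standard_normal(T),
                     rng.standard_normal(T)])
G, R2 = cca_directions(z, y, 2)
print(np.round(R2, 2))
```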

**Proof.** (I) First consider the entries of the vectors $\tilde{y}_{t,\bullet}$ and $\tilde{z}_{t,\bullet}$ (see (A1)) more closely. Since in

$$\mathcal{O}_{f,\perp}'Y_{t,f}^+ = \mathcal{O}_{f,\perp}'(\mathcal{O}_{f,\bullet}x_{t,\bullet} + \mathcal{E}_f E_{t,f}^+)$$

the nonstationary directions are filtered by definition, *y*˜*t*,• is stationary and does not depend on *T*.

Further, $\tilde{z}_{t,\bullet}$ is also stationary for fixed $p$, as the nonstationary directions are either filtered by pre-multiplication with $\mathcal{O}_{S,\perp}'$ or by the yearly differencing $Z_t - Z_{t-S}$.

Therefore we obtain from stationary theory for fixed *p* = *kS* that

$$\left\|\mathbb{E}\tilde{y}_{t,\bullet}\tilde{z}_{t,\bullet}'(\mathbb{E}\tilde{z}_{t,\bullet}\tilde{z}_{t,\bullet}')^{-1} - \langle\tilde{y}_{t,\bullet}, \tilde{z}_{t,\bullet}\rangle\langle\tilde{z}_{t,\bullet}, \tilde{z}_{t,\bullet}\rangle^{-1}\right\| = o(1).$$

Here $\sup_p\|(\mathbb{E}\tilde{z}_{t,\bullet}\tilde{z}_{t,\bullet}')^{-1}\| < \infty$ has been discussed before. Now $\mathbb{E}\tilde{y}_{t,\bullet}\tilde{z}_{t,\bullet}'(\mathbb{E}\tilde{z}_{t,\bullet}\tilde{z}_{t,\bullet}')^{-1} = \tilde{\beta}_{\bullet,p} + o(T^{-1/2}) = O_{\bullet,p}[\Gamma_\bullet']_{1:p} + o(T^{-1/2})$, where the $o(T^{-1/2})$ term appears due to neglecting $\tilde{\mathcal{O}}_f\bar{\mathcal{A}}^p x_{t-p}$. It follows that $\det\left[(\tilde{\beta}_{\bullet,p}S_{p,2})'(\tilde{\beta}_{\bullet,p}S_{p,2})\right] = \det[O_{\bullet,p}'O_{\bullet,p}] > 0$ and

hence $\|\hat\beta_{\bullet,p} - \tilde\beta_{\bullet,p}\|_{Fr} = o(1)$ implies $\lim_{T\to\infty}\det\big[(\hat\beta_{\bullet,p}\mathcal{S}_{p,2})'(\hat\beta_{\bullet,p}\mathcal{S}_{p,2})\big] > 0$ a.s. where $\hat\beta_{\bullet,p} := \langle\tilde y_{t,\bullet},\tilde z_{t,\bullet}\rangle\langle\tilde z_{t,\bullet},\tilde z_{t,\bullet}\rangle^{-1}$. Since $\hat{\mathcal{O}}_{\bullet,p}\bar\Gamma'_{\bullet,p} - \tilde\beta_{\bullet,p} = o(1)$ due to consistency, also

$$\lim\_{T \to \infty} \det \left[ (\hat{\mathcal{O}}\_{\bullet, p} \bar{\Gamma}\_{\bullet, p}^{\prime} \mathcal{S}\_{p, 2})^{\prime} (\hat{\mathcal{O}}\_{\bullet, p} \bar{\Gamma}\_{\bullet, p}^{\prime} \mathcal{S}\_{p, 2}) \right] = \lim\_{T \to \infty} \det \hat{\mathcal{O}}\_{\bullet, p}^{\prime} \hat{\mathcal{O}}\_{\bullet, p} \det (\bar{\Gamma}\_{\bullet, p}^{\prime} \mathcal{S}\_{p, 2})^2 > 0 \quad \text{a.s.}$$

Since $\bar\Gamma_{\bullet,p} - \Gamma_{\bullet,p} = o(1)$ due to the definition of $\bar\Gamma_{\bullet,p}$ and the continuity of the solution of the eigenvalue problem, it follows that $\hat{\mathcal{O}}_{\bullet,p} - \mathcal{O}_{\bullet,p} = o(1)$ and therefore $\limsup_T \det\hat{\mathcal{O}}'_{\bullet,p}\hat{\mathcal{O}}_{\bullet,p} > 0$. As in Lemma 6 of [40] it can be shown that $\Gamma'_{\bullet,p} - [\Gamma'_\bullet]_{1:p} = o(T^{-1})$ and $\mathcal{O}_{\bullet,p} = \mathcal{O}_\bullet + o(T^{-1})$ for the range of $p$ given in Theorem 1, since these matrices correspond to a stationary problem. Hence the chosen normalization of $\bar\Gamma_{\bullet,p}$ can be used a.s.

Next, in order to obtain the convergence of $\bar G_p$ to $\bar\Gamma_p$, Lemma A.6 of BRRR is slightly extended to the current situation (for details and notation see there). Lemma A.6 of BRRR contains three parts: BRRR(I) gives bounds on the error in the matrices (with $l_T = \log T$)

$$\begin{aligned}
\delta_{yz} &= \begin{bmatrix} \frac{1}{T}\langle\tilde y_{t,1},\tilde z_{t,1}\rangle & \frac{1}{\sqrt T}\langle\tilde y_{t,1},\tilde z_{t,\bullet}\rangle\\[2pt] \frac{1}{\sqrt T}\langle\tilde y_{t,\bullet},\tilde z_{t,1}\rangle & \langle\tilde y_{t,\bullet},\tilde z_{t,\bullet}\rangle\end{bmatrix} - \begin{bmatrix} \frac{1}{T}\langle\tilde z_{t,1},\tilde z_{t,1}\rangle & 0\\[2pt] 0 & \langle\tilde y_{t,\bullet},\tilde z_{t,\bullet}\rangle\end{bmatrix} = \begin{bmatrix} O(\tfrac{l_T^a}{T}) & O(\tfrac{l_T^a}{\sqrt T})\\[2pt] O(\tfrac{l_T^a}{\sqrt T}) & 0\end{bmatrix},\\[4pt]
\delta_{yy} &= \begin{bmatrix} \frac{1}{T}\langle\tilde y_{t,1},\tilde y_{t,1}\rangle & \frac{1}{\sqrt T}\langle\tilde y_{t,1},\tilde y_{t,\bullet}\rangle\\[2pt] \frac{1}{\sqrt T}\langle\tilde y_{t,\bullet},\tilde y_{t,1}\rangle & \langle\tilde y_{t,\bullet},\tilde y_{t,\bullet}\rangle\end{bmatrix} - \begin{bmatrix} \frac{1}{T}\langle\tilde z_{t,1},\tilde z_{t,1}\rangle & 0\\[2pt] 0 & \langle\tilde y_{t,\bullet},\tilde y_{t,\bullet}\rangle\end{bmatrix} = \begin{bmatrix} O(\tfrac{l_T^a}{T}) & O(\tfrac{l_T^a}{\sqrt T})\\[2pt] O(\tfrac{l_T^a}{\sqrt T}) & 0\end{bmatrix},\\[4pt]
\delta_{zz} &= \begin{bmatrix} \frac{1}{T}\langle\tilde z_{t,1},\tilde z_{t,1}\rangle & \frac{1}{\sqrt T}\langle\tilde z_{t,1},\tilde z_{t,\bullet}\rangle\\[2pt] \frac{1}{\sqrt T}\langle\tilde z_{t,\bullet},\tilde z_{t,1}\rangle & \langle\tilde z_{t,\bullet},\tilde z_{t,\bullet}\rangle\end{bmatrix} - \begin{bmatrix} \frac{1}{T}\langle\tilde z_{t,1},\tilde z_{t,1}\rangle & 0\\[2pt] 0 & \langle\tilde z_{t,\bullet},\tilde z_{t,\bullet}\rangle\end{bmatrix} = \begin{bmatrix} 0 & O(\tfrac{l_T^a}{\sqrt T})\\[2pt] O(\tfrac{l_T^a}{\sqrt T}) & 0\end{bmatrix}.
\end{aligned}$$

BRRR(II) deals with $J = \bar Q - \bar\Phi =$

$$D_z\langle\tilde z_t,\tilde y_t\rangle D_y\big(D_y\langle\tilde y_t,\tilde y_t\rangle D_y\big)^{-1}D_y\langle\tilde y_t,\tilde z_t\rangle D_z - \begin{bmatrix} \frac{1}{T}\langle\tilde z_{t,1},\tilde z_{t,1}\rangle & 0\\ 0 & \langle\tilde z_{t,\bullet},\tilde y_{t,\bullet}\rangle\langle\tilde y_{t,\bullet},\tilde y_{t,\bullet}\rangle^{-1}\langle\tilde y_{t,\bullet},\tilde z_{t,\bullet}\rangle \end{bmatrix}$$

and BRRR(III) shows that there exists a solution *G*¯ *<sup>p</sup>* converging to a solution Γ¯ *<sup>p</sup>* of the separated problem.

For showing the orders of convergence of $\delta_{zz}$ the arguments are unchanged except for noting that in $\langle\tilde z_{t,1},\tilde z_{t,\bullet}\rangle$ the number of columns increases as a function of the sample size. Since the a.s. bounds on the entries of this expression hold uniformly (as follows straightforwardly from the arguments of Lemma A.1 of BRRR), this does not change the arguments. With respect to $\delta_{yz}$ note that now $\tilde y_t = \tilde\beta'\tilde z_{t,p} + \tilde\varepsilon_t + \tilde{\mathcal{O}}_f\bar A^p x_{t-p}$. Due to the increase of $p$ as a function of the sample size, $\bar A^p = o(T^{-1-\epsilon})$ for small enough $\epsilon > 0$ and therefore $\tilde{\mathcal{O}}_f\bar A^p x_{t-p} = o(T^{-1/2-\epsilon/2})$ since $x_t = o(T^{(1+\epsilon)/2})$ (uniformly in $1\le t\le T$), whether $(x_t)_{t\in\mathbb{Z}}$ is a unit root process or stationary. Hence $\langle\tilde{\mathcal{O}}_f\bar A^p x_{t-p},\tilde{\mathcal{O}}_f\bar A^p x_{t-p}\rangle = o(1)$. Further, $\langle\tilde{\mathcal{O}}_f\bar A^p x_{t-p},\tilde\varepsilon_t\rangle = o(T^{-1/2})$ follows from $\langle x_{t-p},\tilde\varepsilon_t\rangle = O(\log T)$ (see Lemma A.1 (I)). This shows that the additional term is always of lower order and can be neglected. The remaining arguments follow exactly as in the proof of Lemma A.6 of BRRR. The proof of Lemma A.7 of BRRR only uses the order bounds derived above and hence follows immediately. This shows (I).

(II) Using the definition of $\tilde{\mathcal{S}}_T$ we obtain:

$$
\begin{split}
\tilde\Gamma'_p &= \tilde{\mathcal{S}}_T\tilde D_x^{-1}\bar G'_p\tilde D_z = \tilde{\mathcal{S}}_T\begin{bmatrix} I_c & \sqrt T\,\delta G'_{\bullet,1,p}\\ \delta G'_{1,2,p}/\sqrt T & \bar G'_{\bullet,2,p}\end{bmatrix}\\
&= \begin{bmatrix} I_c - \delta G'_{\bullet,1,p}\langle\tilde z_{t,\bullet},\tilde z_{t,\bullet}\rangle\Gamma^{\dagger}_{\bullet,p}\delta G'_{1,2,p} & \sqrt T\,\delta G'_{\bullet,1,p}\big(I - \langle\tilde z_{t,\bullet},\tilde z_{t,\bullet}\rangle\Gamma^{\dagger}_{\bullet,p}\bar G'_{\bullet,2,p}\big)\\ \delta G'_{1,2,p}/\sqrt T & \bar G'_{\bullet,2,p}\end{bmatrix}.
\end{split}
$$

From (I) and Lemma A.7 of BRRR, $\delta G_{\bullet,1,p} = O((\log T)^a/T^{1/2})$, $\delta G_{1,2,p} = O((\log T)^a/T^{1/2})$ and $\bar G_{\bullet,2,p} - \bar\Gamma_{\bullet,p} = o((\log T)^a/T^{1/2})$. Finally

$$\delta G'_{\bullet,1,p}\big(I - \langle\tilde z_{t,\bullet},\tilde z_{t,\bullet}\rangle\Gamma^{\dagger}_{\bullet,p}\bar G'_{\bullet,2,p}\big) = \delta G'_{\bullet,1,p}\big(I - \langle\tilde z_{t,\bullet},\tilde z_{t,\bullet}\rangle\Gamma^{\dagger}_{\bullet,p}\Gamma'_{\bullet,p}\big) + O((\log T)^a/T) = O((\log T)^a/T)$$

as in the proof of Lemma A.7 of BRRR. Using Lemma A.5 (III) of BRRR with $\hat\Xi_f = \langle\tilde y_{t,\bullet},\tilde y_{t,\bullet}\rangle^{-1/2}$ it follows that $\bar\Gamma_{\bullet,p} - \Gamma_{\bullet,p} = O((\log T)^a T^{-1/2})$. Since $\bar G_{\bullet,2,p} - \bar\Gamma_{\bullet,p} = o((\log T)^a/T^{1/2})$, the same rate of convergence holds for $\bar G_{\bullet,2,p} - \Gamma_{\bullet,p} = O((\log T)^a/T^{1/2})$. It follows that $\tilde\Gamma'_p - [\Gamma']_{1:p} = [O((\log T)^a/T),\ O((\log T)^a/T^{1/2})]$.

(III) From above we have

$$T(\tilde\Gamma'_p - [\Gamma']_{1:p})\begin{bmatrix} I_c \\ 0 \end{bmatrix} = \begin{bmatrix} -\sqrt T\,\delta G'_{\bullet,1,p}\langle\tilde z_{t,\bullet},\tilde z_{t,\bullet}\rangle\Gamma^{\dagger}_{\bullet,p}\,\sqrt T\,\delta G'_{1,2,p} \\ \sqrt T\,\delta G'_{1,2,p} \end{bmatrix} + o_P(1). \tag{A3}$$

Now from the proof of Lemma A.7 of BRRR we obtain

$$[\sqrt{T}\delta G\_{\bullet,1,p}]' = \Xi O\_{\bullet} (I - \Theta\_{\bullet}^2)^{-1} \bar{\Gamma}'\_{\bullet,p} + o\_P(1).$$

Furthermore using the expression given in Lemma A.7 of BRRR:

$$\begin{split}
\sqrt T\,\delta G_{1,2,p} &= \sqrt T\,Z_{11}^{-1}\big[\delta^{1\bullet}_{zz}\Gamma_{\bullet,p}\Theta^2_\bullet - f_{1,\bullet}\Gamma_{\bullet,p}\big](I-\Theta^2_\bullet)^{-1} + o_P(1)\\
&= \sqrt T\,Z_{11}^{-1}\big[\delta^{1\bullet}_{zz}\Gamma_{\bullet,p}(\Theta^2_\bullet - I) + [\delta^{1\bullet}_{zz} - f_{1,\bullet}]\Gamma_{\bullet,p}\big](I-\Theta^2_\bullet)^{-1} + o_P(1)\\
&= -Z_{11}^{-1}\langle\tilde z_{t,1},x_{t,\bullet}\rangle - Z_{11}^{-1}\sqrt T\,[f_{1,\bullet} - \delta^{1\bullet}_{zz}]\Gamma_{\bullet,p}(I-\Theta^2_\bullet)^{-1} + o_P(1)\\
&= -Z_{11}^{-1}\langle\tilde z_{t,1},x_{t,\bullet}\rangle - Z_{11}^{-1}\mathbb{E}\tilde z_{t,1}\tilde z'_{t,\bullet}\big(\mathbb{E}\tilde y_{t,\bullet}\tilde y'_{t,\bullet}\big)^{-1}\mathbb{E}\tilde y_{t,\bullet}x'_{t,\bullet}(I-\Theta^2_\bullet)^{-1} + o_P(1).
\end{split}$$

This shows the result.

The transformations in the representation lead to an estimator $\bar G$ taking the place of $\hat K_p$. Using $\tilde{\mathcal{S}}_T$ as defined in Lemma A5, the corresponding estimator $\tilde\Gamma'_p = \tilde{\mathcal{S}}_T\tilde D_x^{-1}\bar G'_p\tilde D_z$ fulfills $\tilde\Gamma'_p - \Gamma'_p = [O((\log T)^a/T),\ O((\log T)^a/\sqrt T)]$.

Based on this result let $(\mathcal{A},\mathcal{C},\mathcal{K})$ denote the realization of the true transfer function in the state basis corresponding to $\Gamma_p$ where $\Gamma'_p\mathcal{S}_p = I_n$, and let $(\tilde{\mathcal{A}},\tilde{\mathcal{C}},\tilde{\mathcal{K}})$ denote the (unfeasible) CVA estimates using $\tilde x_t := \tilde\Gamma'_p\tilde z_{t,p}$. The next lemma then provides the main ingredients for the rest of the proofs:

**Lemma A6.** *Let the assumptions of Theorem 1 hold and define* $D_x := \mathrm{diag}(I_c T^{-1}, I_{n-c}T^{-1/2})$*. Then there exists an integer* $a < \infty$ *such that*

$$(\tilde{\mathcal{A}} - \mathcal{A})D\_x^{-1} = O((\log T)^a), \quad (\tilde{\mathcal{C}} - \mathcal{C})D\_x^{-1} = O((\log T)^a), \quad (\tilde{\mathcal{K}} - \mathcal{K}) = O((\log T)^a/T^{1/2}).$$

**Proof.** First note that the regression of $Y^+_{t,f}$ onto $Y^-_{t,p}$ includes time points $t = p+1,\dots,T-f+1$, whereas for estimating the system matrices we can use $\hat x_t$, $t = p+1,\dots,T+1$ and $y_t$, $t = p+1,\dots,T$. Thus in this proof we use $\langle a_t, b_t\rangle^T_{p+1} := T^{-1}\sum_{t=p+1}^{T} a_t b'_t$ instead of $\langle a_t, b_t\rangle = T^{-1}\sum_{t=p+1}^{T-f+1} a_t b'_t$.

The following orders of convergence are straightforward to derive using the results of Lemma A1, $\bar A^p = o(T^{-1})$, $(\tilde\Gamma'_p - [\Gamma']_{1:p})\tilde D_z^{-1} = O((\log T)^a)$ and $\tilde x_t - x_t = (\tilde\Gamma'_p - [\Gamma']_{1:p})\tilde z_{t,p} - \bar A^p x_{t-p}$, $t > p$, according to Lemma A5 and Lemma A1, for the range of $p$ given in Theorem 1:

$$\begin{aligned}
\langle\varepsilon_t,\tilde x_t - x_t\rangle^T_{p+1} &= O(p(\log T)^a/T), & \tilde D_z\langle\tilde z_{t,p},\tilde x_t - x_t\rangle^T_{p+1} &= O(p(\log T)^a/T^{1/2}),\\
\tilde D_z\langle\tilde z_{t+1,p},\tilde x_t - x_t\rangle^T_{p+1} &= O(p(\log T)^a/T^{1/2}), & \tilde D_x\langle\tilde x_t,\tilde x_t - x_t\rangle^T_{p+1} &= O(p(\log T)^a/T^{1/2}),\\
\langle\tilde x_t - x_t,\tilde x_t - x_t\rangle^T_{p+1} &= O(p^2(\log T)^a/T).
\end{aligned}$$

Using these orders of convergence we obtain

$$
\tilde D_x\langle\tilde x_t,\tilde x_t\rangle^T_{p+1}\tilde D_x = \tilde D_x\langle x_t,x_t\rangle^T_{p+1}\tilde D_x + O(p^2(\log T)^a/T^{1/2}) > 0 \quad \text{a.s.}
$$

From Lemma A1 also $\big(\tilde D_x\langle\tilde x_t,\tilde x_t\rangle^T_{p+1}\tilde D_x\big)^{-1} = \big(\tilde D_x\langle x_t,x_t\rangle^T_{p+1}\tilde D_x\big)^{-1}(1 + o(1)) = O((\log T)^a)$. Therefore

$$\begin{split}
(\tilde{\mathcal{C}} - \mathcal{C})D_x^{-1} &= \sqrt T\big(\langle\varepsilon_t,\tilde x_t\rangle^T_{p+1} - \mathcal{C}\langle\tilde x_t - x_t,\tilde x_t\rangle^T_{p+1}\big)\tilde D_x\big(\tilde D_x\langle\tilde x_t,\tilde x_t\rangle^T_{p+1}\tilde D_x\big)^{-1}\\
&= \sqrt T\,\langle\varepsilon_t,x_t\rangle^T_{p+1}\tilde D_x\big(\tilde D_x\langle x_t,x_t\rangle^T_{p+1}\tilde D_x + o(1)\big)^{-1}\\
&\quad - \sqrt T\,\langle\tilde x_t - x_t,x_t\rangle^T_{p+1}\tilde D_x\big(\tilde D_x\langle x_t,x_t\rangle^T_{p+1}\tilde D_x\big)^{-1} + o(1) = O(p(\log T)^a).
\end{split} \tag{A4}$$

This in particular establishes consistency for the estimate $\tilde{\mathcal{C}}$. Next, analogously (using the notation $\delta x_t = \tilde x_t - x_t$) we obtain $(\tilde{\mathcal{A}} - \mathcal{A})D_x^{-1} =$

$$\begin{split}
&\sqrt T\,\langle\tilde x_{t+1} - \mathcal{A}\tilde x_t,\tilde x_t\rangle^T_{p+1}\tilde D_x\big(\tilde D_x\langle\tilde x_t,\tilde x_t\rangle^T_{p+1}\tilde D_x\big)^{-1}\\
&= \sqrt T\,\big\langle(\tilde x_{t+1} - x_{t+1}) + (x_{t+1} - \mathcal{A}x_t) + \mathcal{A}(x_t - \tilde x_t),\tilde x_t\big\rangle^T_{p+1}\tilde D_x\big(\tilde D_x\langle x_t,x_t\rangle^T_{p+1}\tilde D_x + o(1)\big)^{-1}\\
&= \sqrt T\,\big(\langle\delta x_{t+1},x_t\rangle^T_{p+1} - \mathcal{A}\langle\delta x_t,x_t\rangle^T_{p+1} + \mathcal{K}\langle\varepsilon_t,x_t\rangle^T_{p+1}\big)\tilde D_x\big(\tilde D_x\langle x_t,x_t\rangle^T_{p+1}\tilde D_x\big)^{-1} + o(1)
\end{split} \tag{A5}$$

and therefore consistency for $\tilde{\mathcal{A}}$ is established. Finally note that for

$$\hat\varepsilon_t = y_t - \tilde{\mathcal{C}}\tilde x_t = \varepsilon_t + \mathcal{C}(x_t - \tilde x_t) + (\mathcal{C} - \tilde{\mathcal{C}})\tilde x_t$$

it follows that $\langle\hat\varepsilon_t,\hat\varepsilon_t\rangle^T_{p+1} = \Omega + O(p^2(\log T)^a/T^{1/2})$. Furthermore, since $\hat\varepsilon_t$ denotes the residuals of the regression of $y_t$ onto $\tilde x_t$, it follows that $\langle\hat\varepsilon_t,\tilde x_t\rangle^T_{p+1} = 0$. From this we obtain

√ *<sup>T</sup>*(K−K ˜ ) = <sup>√</sup> *<sup>T</sup>*(*x*˜*t*+<sup>1</sup> − K*ε*ˆ*t*,*ε*ˆ*t<sup>T</sup> p*+1(*ε*ˆ*t*,*ε*ˆ*t<sup>T</sup> p*+1)−<sup>1</sup> <sup>=</sup> <sup>√</sup> *T* (*x*˜*t*+<sup>1</sup> <sup>−</sup> *xt*+1) − A*δxt* <sup>+</sup> <sup>K</sup>(*ε<sup>t</sup>* <sup>−</sup> *<sup>ε</sup>*ˆ*t*),*ε*ˆ*t<sup>T</sup> p*+1 (*ε*ˆ*t*,*ε*ˆ*t<sup>T</sup> p*+1)−<sup>1</sup> <sup>=</sup> <sup>√</sup> *T δxt*+<sup>1</sup> − A*δxt* <sup>+</sup> <sup>K</sup>(*ε<sup>t</sup>* <sup>−</sup> *<sup>ε</sup>*ˆ*t*),*ε*ˆ*t<sup>T</sup> p*+1 Ω−1(1 + *o*(1)) (A6) <sup>=</sup> <sup>√</sup> *T δxt*+<sup>1</sup> − A*δxt* <sup>+</sup> <sup>K</sup>(*ε<sup>t</sup>* <sup>−</sup> *<sup>ε</sup>*ˆ*t*),*εt<sup>T</sup> p*+1 Ω−1(1 + *o*(1)) + *o*(1) = √ *<sup>T</sup>δxt*+1,*εt<sup>T</sup> p*+1 <sup>Ω</sup>−<sup>1</sup> <sup>+</sup> *<sup>o</sup>*(1) = <sup>√</sup> *<sup>T</sup>*(Γ˜ *<sup>p</sup>* − Γ *p*)*z*¯*t*+1,*p*,*εt<sup>T</sup> p*+1 Ω−<sup>1</sup> + *o*(1) = *O*(*p*(log *T*)*<sup>a</sup>* ).

These expressions do not only show consistency of a specific order, but also give the relevant highest order terms for the asymptotic distribution, which are used below. As $\hat C\hat A^j\hat K = \tilde{\mathcal{C}}\tilde{\mathcal{A}}^j\tilde{\mathcal{K}} \to \mathcal{C}\mathcal{A}^j\mathcal{K} = \mathcal{C}_\circ\mathcal{A}^j_\circ\mathcal{K}_\circ$, Lemma A6 establishes consistency for the impulse response sequence $\hat C\hat A^j\hat K$ (thus proving Theorem 1 (I)) as well as, jointly with $p = O((\log T)^a)$, the rate of convergence $O((\log T)^a/T^{1/2})$ for the not realizable choice of the basis and the impulse response sequence $\mathcal{C}\mathcal{A}^j\mathcal{K}$.
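The three regressions analyzed in this lemma can be illustrated numerically. The following sketch (all names hypothetical; it assumes an estimated state sequence from the first CVA step is already available) mirrors the steps: $y_t$ on $\tilde x_t$ for $\mathcal{C}$, $\tilde x_{t+1}$ on $\tilde x_t$ for $\mathcal{A}$, and $\tilde x_{t+1}$ on the residuals $\hat\varepsilon_t$ for $\mathcal{K}$:

```python
import numpy as np

def estimate_ACK(y, xhat):
    """Sketch of the second CVA step: least-squares regressions that
    produce (A, C, K) from an estimated state sequence xhat."""
    # y: (T, s) observations; xhat: (T + 1, n) estimated state
    T = y.shape[0]
    X, Xp = xhat[:T, :], xhat[1:T + 1, :]
    Sxx_inv = np.linalg.inv(X.T @ X)
    C = y.T @ X @ Sxx_inv                         # regression of y_t on x_t
    eps = y - X @ C.T                             # residuals eps-hat_t
    A = Xp.T @ X @ Sxx_inv                        # regression of x_{t+1} on x_t
    K = Xp.T @ eps @ np.linalg.inv(eps.T @ eps)   # regression of x_{t+1} on eps-hat_t
    return A, C, K, eps

def impulse_response(A, C, K, j):
    """Impulse response C A^j K, whose consistency Lemma A6 establishes."""
    return C @ np.linalg.matrix_power(A, j) @ K
```

By construction the residuals are orthogonal to the regressor, mirroring $\langle\hat\varepsilon_t,\tilde x_t\rangle^T_{p+1} = 0$ used in the derivation of (A6); because of this orthogonality, estimating $\mathcal{A}$ and $\mathcal{K}$ separately is equivalent to one joint regression of $\tilde x_{t+1}$ on $(\tilde x_t,\hat\varepsilon_t)$.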

#### Appendix C.1.2. Proof of Theorem 1 (II)

In order to arrive at the canonical representation $(\check A,\check C,\check K)$ two steps are performed: first the reordered Jordan normal form is calculated, afterwards the matrices $\tilde C_{j,\mathbb{C}}$ are transformed such that $E'_j\check C_{j,\mathbb{C}} = I_{c_j}$ holds. We will follow these steps below.

In the first step a transformation matrix $\hat U$ needs to be found such that $\hat U\tilde{\mathcal{A}}\hat U^{-1}$ is in reordered Jordan normal form. In this respect $\tilde{\mathcal{A}}$ and $\mathcal{A}$ are used in Lemma A2. Accordingly $\hat U_t = [\hat U_{t,1},\dots,\hat U_{t,S/2},\hat U_{t,\bullet}]$ can be defined such that $V'_j\hat U_{t,j} = I_{c_j}$, where $U\in\mathbb{R}^{n\times n}$ corresponds to the transformation from $\mathcal{A}$ to $\mathcal{A}_\circ$ as given in the theorem. An appropriate choice of $\tilde z_{t,1}$ leads to $U = I_n$. Furthermore the basis in the space spanned by the columns of $\hat U_{t,\bullet}$, where $\hat U'_{t,j}\hat U_{t,\bullet} = 0$, can be chosen such that $[0, I]\hat U_{t,\bullet} = I$ for large enough $T$.

A first order approximation according to Lemma A2 then leads to

$$\hat U_{t,j} = U_j + Z_1 + O(\|\tilde{\mathcal{A}} - \mathcal{A}\|^2) = U_j - \Sigma(\tilde{\mathcal{A}} - \mathcal{A})U_j + O(\|\tilde{\mathcal{A}} - \mathcal{A}\|^2),$$

for $j = 0,\dots,S/2$. Consequently $\hat U_{t,j} - U_j = O((\log T)^a T^{-1})$ and thus also $\hat U_t - U = O((\log T)^a T^{-1})$, so the order of convergence for the transformed system $(\hat A,\hat C,\hat K)$ is unchanged. In a second step an upper triangular transformation matrix $\tilde U$ can be found transforming $(\hat A,\hat C,\hat K)$ such that $\tilde A$ corresponds to the reordered Jordan normal form. Due to the upper block triangularity of this transform we can apply Lemma A3 to show that the order of convergence remains identical.

For the second step note that Lemma A3 provides the required terms: an application to the block diagonal transformation $\mathcal{S}_T = \mathrm{diag}(E'_1\tilde C_{1,\mathbb{C}},\dots,E'_{S/2}\tilde C_{S/2,\mathbb{C}},\mathcal{S}_{T,\bullet})$, where $\mathcal{S}_{T,\bullet}$ transforms the stationary subsystem to echelon form, concludes the proof.

#### Appendix C.1.3. Proof of Theorem 1 (III)

The only argument that uses the iid assumption is the almost sure convergence of $(\tilde D_x\langle x_t, x_t\rangle\tilde D_x)^{-1}$. Weakening the assumptions on the noise implies that this order of convergence still holds in probability, while the almost sure version cannot be shown with the tools of this paper. This concludes the proof of Theorem 1.

#### *Appendix C.2. Proof of Theorem 2*

Using the notation introduced in (A1),

$$\hat X = \tilde D_y\langle\tilde y_t,\tilde z_{t,p}\rangle\tilde D_z\big(\tilde D_z\langle\tilde z_{t,p},\tilde z_{t,p}\rangle\tilde D_z\big)^{-1}\tilde D_z\langle\tilde z_{t,p},\tilde y_t\rangle\tilde D_y\big(\tilde D_y\langle\tilde y_t,\tilde y_t\rangle\tilde D_y\big)^{-1} \to X_\circ = \begin{bmatrix} I_c & 0\\ 0 & X_{\circ,\bullet}\end{bmatrix}$$

for a suitable matrix $X_{\circ,\bullet}$. The eigenvalues of $\hat X$ are the squares of the singular values of the RRR problem in the first step of CVA. Therefore

$$\begin{aligned}
T\sum_{i=1}^{c}(1 - \hat\sigma_i^2) &= -T\,\mathrm{tr}\Big[U'_c(\hat X - X_\circ)\big[U_c - (X_\circ - I)^\dagger(\hat X - X_\circ)U_c\big]\Big] + o_P(1)\\
&= -T\,\mathrm{tr}\Big[\Delta X_{11} - \Delta X_{12}(X_{\circ,\bullet} - I)^\dagger\Delta X_{21}\Big] + o_P(1)
\end{aligned}$$

according to a second order approximation in the Rayleigh–Schrödinger expansion (Lemma A2). Now, in the current situation we obtain $(I - \hat X)\begin{bmatrix} I \\ 0 \end{bmatrix} =$

$$\begin{split}
&= \Big(\tilde D_y\langle\tilde y_t,\tilde y_t\rangle\tilde D_y - \tilde D_y\langle\tilde y_t,\tilde z_{t,p}\rangle\tilde D_z\big(\tilde D_z\langle\tilde z_{t,p},\tilde z_{t,p}\rangle\tilde D_z\big)^{-1}\tilde D_z\langle\tilde z_{t,p},\tilde y_t\rangle\tilde D_y\Big)\big(\tilde D_y\langle\tilde y_t,\tilde y_t\rangle\tilde D_y\big)^{-1}\begin{bmatrix} I \\ 0 \end{bmatrix}\\
&= \Big(\tilde D_y\langle\tilde\varepsilon_t,\tilde\varepsilon_t\rangle\tilde D_y - \tilde D_y\langle\tilde\varepsilon_t,\tilde z_{t,p}\rangle\tilde D_z\big(\tilde D_z\langle\tilde z_{t,p},\tilde z_{t,p}\rangle\tilde D_z\big)^{-1}\tilde D_z\langle\tilde z_{t,p},\tilde\varepsilon_t\rangle\tilde D_y\Big)\big(\tilde D_y\langle\tilde y_t,\tilde y_t\rangle\tilde D_y\big)^{-1}\begin{bmatrix} I \\ 0 \end{bmatrix}.
\end{split}$$

Furthermore $\langle\tilde\varepsilon_t,\tilde z_{t,p}\rangle\tilde D_z\big(\tilde D_z\langle\tilde z_{t,p},\tilde z_{t,p}\rangle\tilde D_z\big)^{-1}\tilde D_z\langle\tilde z_{t,p},\tilde\varepsilon_t\rangle = O_P(T^{-1})$ and

$$\big(\tilde D_y\langle\tilde y_t,\tilde y_t\rangle\tilde D_y\big)^{-1}\begin{bmatrix} I \\ 0 \end{bmatrix} = \begin{bmatrix} I \\ -\langle\tilde y_{t,\bullet},\tilde y_{t,\bullet}\rangle^{-1}\langle\tilde y_{t,\bullet},\tilde y_{t,1}\rangle/\sqrt T \end{bmatrix}\big(\langle\tilde y^\pi_{t,1},\tilde y^\pi_{t,1}\rangle/T\big)^{-1}$$

where $\tilde y^\pi_{t,1} = \tilde y_{t,1} - \langle\tilde y_{t,1},\tilde y_{t,\bullet}\rangle\langle\tilde y_{t,\bullet},\tilde y_{t,\bullet}\rangle^{-1}\tilde y_{t,\bullet}$. From this we get, using $\mathbb{E}\tilde\varepsilon_{t,\bullet}\tilde\varepsilon'_{t,\bullet} = \mathbb{E}\tilde y_{t,\bullet}\tilde y'_{t,\bullet} - X_{\circ,\bullet}\mathbb{E}\tilde y_{t,\bullet}\tilde y'_{t,\bullet}$:

$$\begin{split}
T(I_c - \hat X_{11}) &= \Big(\langle\tilde\varepsilon_{t,1},\tilde\varepsilon_{t,1}\rangle - \langle\tilde\varepsilon_{t,1},\tilde\varepsilon_{t,\bullet}\rangle\langle\tilde y_{t,\bullet},\tilde y_{t,\bullet}\rangle^{-1}\langle\tilde y_{t,\bullet},\tilde y_{t,1}\rangle\Big)\big(\langle\tilde y^\pi_{t,1},\tilde y^\pi_{t,1}\rangle/T\big)^{-1} + o_P(1),\\
\sqrt T\,\Delta X_{21} &= \Big(-\langle\tilde\varepsilon_{t,\bullet},\tilde\varepsilon_{t,1}\rangle + (I - X_{\circ,\bullet})\langle\tilde y_{t,\bullet},\tilde y_{t,1}\rangle\Big)\big(\langle\tilde y^\pi_{t,1},\tilde y^\pi_{t,1}\rangle/T\big)^{-1} + o_P(1),\\
\sqrt T\,\Delta X_{12} &= -\mathbb{E}\tilde\varepsilon_{t,1}\tilde\varepsilon'_{t,\bullet}\big(\mathbb{E}\tilde y_{t,\bullet}\tilde y'_{t,\bullet}\big)^{-1} + o_P(1).
\end{split}$$

Thus $T\sum_{i=1}^{c}(1 - \hat\sigma_i^2) =$

$$\begin{split}
&= \mathrm{tr}\Big[\Big(\langle\tilde\varepsilon_{t,1},\tilde\varepsilon_{t,1}\rangle - \mathbb{E}\tilde\varepsilon_{t,1}\tilde\varepsilon'_{t,\bullet}\big(\mathbb{E}\tilde y_{t,\bullet}\tilde y'_{t,\bullet}\big)^{-1}(I - X_{\circ,\bullet})^{-1}\mathbb{E}\tilde\varepsilon_{t,\bullet}\tilde\varepsilon'_{t,1}\Big)\big(\langle\tilde y_{t,1},\tilde y_{t,1}\rangle/T\big)^{-1}\Big] + o_P(1)\\
&= \mathrm{tr}\Big[\Big(\langle\tilde\varepsilon_{t,1},\tilde\varepsilon_{t,1}\rangle - \mathbb{E}\tilde\varepsilon_{t,1}\tilde\varepsilon'_{t,\bullet}\big(\mathbb{E}\tilde\varepsilon_{t,\bullet}\tilde\varepsilon'_{t,\bullet}\big)^{-1}\mathbb{E}\tilde\varepsilon_{t,\bullet}\tilde\varepsilon'_{t,1}\Big)\big(\langle\tilde y_{t,1},\tilde y_{t,1}\rangle/T\big)^{-1}\Big] + o_P(1) \stackrel{d}{\to} Z.
\end{split}$$
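The statistic analyzed in this proof is a function of the canonical correlations between future and past. A minimal numerical sketch (function name and interface hypothetical, not the paper's implementation) of how $T\sum_{i=1}^{c}(1-\hat\sigma_i^2)$ can be computed from two data matrices:

```python
import numpy as np

def cva_test_statistic(Yf, Zp, c):
    """Compute T * sum_{i=1}^{c} (1 - sigma_i^2) from the c largest
    squared canonical correlations between Yf (future) and Zp (past).
    A sketch of the rank test statistic, not the paper's full procedure."""
    T = Yf.shape[0]
    # whiten both blocks via Cholesky factors of their sample covariances
    Ly = np.linalg.cholesky(Yf.T @ Yf / T)
    Lz = np.linalg.cholesky(Zp.T @ Zp / T)
    H = np.linalg.solve(Ly, Yf.T @ Zp / T)
    H = np.linalg.solve(Lz, H.T).T               # Ly^{-1} <Yf,Zp> Lz^{-T}
    sigma = np.linalg.svd(H, compute_uv=False)   # canonical correlations, descending
    return T * np.sum(1.0 - sigma[:c] ** 2)
```

Under the null of $c$ unit roots the $c$ largest canonical correlations tend to one, so the statistic stays bounded, which is the behaviour the limit $Z$ above describes.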

*Appendix C.3. Proof of Theorem 3*

The proof of Theorem 3 follows the same path as the proof of Theorem 1. In (A5) it was shown that the asymptotic distribution of $T(\tilde{\mathcal{A}}_{11} - \mathcal{A}_{\circ,11})$ depends on

$$
\langle\tilde x_{t+1,j} - x_{t+1,j},\, x_{t,k}\rangle, \quad \langle\tilde x_{t,j} - x_{t,j},\, x_{t,k}\rangle, \quad \langle\varepsilon_t,\, x_{t,j}\rangle, \quad \langle x_{t,k},\, x_{t,j}\rangle/T
$$

for *j*, *k* = 0, ..., *S*/2. Note that

$$\delta x_{t+i} = \tilde x_{t+i} - x_{t+i} = (\tilde\Gamma'_p - [\Gamma']_{1:p})\tilde z_{t+i,p} + o_P(T^{-1})$$

for $i = 0, 1$. Then the results of Lemma A5 show that the first $c$ columns of $(\tilde\Gamma'_p - [\Gamma']_{1:p})$ converge to a random variable (below denoted as $Z_\Gamma$) when multiplied by $T$, while the remaining columns converge in distribution when multiplied by $\sqrt T$. Therefore

$$T(\delta \mathbf{x}\_{t+l}, \mathbf{x}\_{t,k}) = T(\Gamma\_p' - [\Gamma']\_{1:p}) \frac{\langle \mathbf{z}\_{t+l,p}, \mathbf{x}\_{t,k} \rangle}{T} + o\_P(1) \\ = T(\Gamma\_p' - [\Gamma']\_{1:p}) \begin{bmatrix} I\_\zeta \\ 0 \end{bmatrix} \frac{\langle \mathcal{Z}\_{t+l,1}, \mathbf{x}\_{t,k} \rangle}{T} + o\_P(1).$$

Due to the definition (A1), $\tilde z_{t,1} = [x_{t,j}]_{j=0,\dots,S/2} + o(T^{-1})$ and hence (using $\mathcal{A}_\circ = \mathrm{diag}(\mathcal{A}_{\circ,u},\mathcal{A}_{\circ,\bullet})$)

$$
\langle\tilde z_{t+1,1}, x_{t,k}\rangle/T = \mathcal{A}_{\circ,u}\langle\tilde z_{t,1}, x_{t,k}\rangle/T + o(1).
$$

Considering now the complex-valued representation and using the notation

$$\Delta\Gamma_1 := T(\tilde\Gamma'_p - [\Gamma']_{1:p})\begin{bmatrix} I_c \\ 0 \end{bmatrix}, \quad \mathcal{S}_j = \big[\,0_{c_j\times\sum_{i<j}c_i},\ I_{c_j},\ 0_{c_j\times\sum_{i>j}c_i}\,\big]$$

where $\mathcal{S}_j\tilde z_{t,1} = x_{t,j,\mathbb{C}}$, it follows that the contribution of these two terms to the limiting distribution of the diagonal block corresponding to the unit root $z_j$ amounts to (using $\langle x_{t,j,\mathbb{C}}, x_{t,k,\mathbb{C}}\rangle/T \to 0$ for $k \neq j$ and $\delta x_{t,j,\mathbb{C}} = \tilde x_{t,j,\mathbb{C}} - x_{t,j,\mathbb{C}}$)

$$\begin{split}
&T\big(\langle\delta x_{t+1,j,\mathbb{C}}, x_{t,j,\mathbb{C}}\rangle - \mathcal{A}_{\circ,jj}\langle\delta x_{t,j,\mathbb{C}}, x_{t,j,\mathbb{C}}\rangle\big)\\
&= \mathcal{S}_j\Delta\Gamma_1\mathcal{A}_{\circ,u}\frac{\langle\tilde z_{t,1}, x_{t,j,\mathbb{C}}\rangle}{T} - \mathcal{A}_{\circ,jj}\mathcal{S}_j\Delta\Gamma_1\frac{\langle\tilde z_{t,1}, x_{t,j,\mathbb{C}}\rangle}{T} + o_P(1)\\
&= \mathcal{S}_j\Delta\Gamma_1\mathcal{S}'_j\mathcal{A}_{\circ,jj}\frac{\langle x_{t,j,\mathbb{C}}, x_{t,j,\mathbb{C}}\rangle}{T} - \mathcal{A}_{\circ,jj}\mathcal{S}_j\Delta\Gamma_1\mathcal{S}'_j\frac{\langle x_{t,j,\mathbb{C}}, x_{t,j,\mathbb{C}}\rangle}{T} + o_P(1)\\
&= \mathcal{S}_j\Delta\Gamma_1\mathcal{S}'_j\,\bar z_j\frac{\langle x_{t,j,\mathbb{C}}, x_{t,j,\mathbb{C}}\rangle}{T} - \bar z_j\mathcal{S}_j\Delta\Gamma_1\mathcal{S}'_j\frac{\langle x_{t,j,\mathbb{C}}, x_{t,j,\mathbb{C}}\rangle}{T} + o_P(1) = o_P(1).
\end{split}$$

Therefore, for the diagonal blocks in (A5) these two terms do not contribute and the asymptotic distribution is determined by

$$T\langle\mathcal{K}_{\circ,j,\mathbb{C}}\varepsilon_t,\, x_{t,j,\mathbb{C}}\rangle\langle x_{t,j,\mathbb{C}},\, x_{t,j,\mathbb{C}}\rangle^{-1}$$

for which the asymptotic results are provided in Lemma A1. This also shows that estimating the state does not change the asymptotic distribution in the diagonal blocks, as the impact of $\tilde\Gamma_p - \Gamma_p$ is of lower order.

In order to derive the distribution of the sum of the eigenvalues, note that, as in the proof of Theorem 2, according to Lemma A2 the sum of the eigenvalues of $\tilde{\mathcal{A}}$ converging to $z_j$ obeys the following second order approximation:

$$
\begin{aligned}
\sum_{i=1}^{c_j} (\hat\lambda_i - z_j) &= \mathrm{tr}\Big[ U_j' (\hat{\mathcal{A}} - \mathcal{A}_{\circ}) \big[ U_j - (\mathcal{A}_{\circ} - z_j I_n)^{\dagger} (\hat{\mathcal{A}} - \mathcal{A}_{\circ}) U_j \big] \Big] + o_P(T^{-1}) \\
&= \mathrm{tr}\big[ \hat{\mathcal{A}}_{jj} - z_j I_{c_j} \big] + o_P(T^{-1})
\end{aligned}
$$

since $(\hat{\mathcal{A}} - \mathcal{A}_{\circ}) U_j = O_P((\log T)^a T^{-1})$ in this case, implying that the second order terms vanish. Thus we obtain the asymptotic distribution under the null hypothesis as the limiting distribution of

$$T\, \mathrm{tr}\big[ \langle \mathcal{K}_{\circ,j,\mathbb{C}}\, \varepsilon_t, x_{t,j,\mathbb{C}} \rangle \langle x_{t,j,\mathbb{C}}, x_{t,j,\mathbb{C}} \rangle^{-1} \big].$$

It is easy to verify that this test statistic is pivotal for complex and real unit roots. This proves Theorem 3.
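As a purely illustrative aside, the algebraic form of this statistic can be sketched numerically: with sample moments $\langle a_t, b_t \rangle = T^{-1}\sum_t a_t b_t'$, the statistic is $T$ times the trace of the product of a cross moment with an inverted second moment. The following sketch uses simulated placeholder inputs (the arrays `K_eps` and `x`, standing in for $\mathcal{K}_{\circ,j,\mathbb{C}}\varepsilon_t$ and $x_{t,j,\mathbb{C}}$, as well as all dimensions are hypothetical and not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
T, c = 500, 2  # sample size and dimension of the unit root block (illustrative)

# Hypothetical placeholder inputs: K_eps stands for K_{o,j,C} eps_t,
# x for the unit root state component x_{t,j,C}.
K_eps = rng.standard_normal((T, c))
x = np.cumsum(rng.standard_normal((T, c)), axis=0)  # random-walk state

def smom(a, b):
    """Sample moment <a_t, b_t> = T^{-1} sum_t a_t b_t'."""
    return a.T @ b / a.shape[0]

# T tr[ <K eps_t, x_t> <x_t, x_t>^{-1} ]
stat = T * np.trace(smom(K_eps, x) @ np.linalg.inv(smom(x, x)))
print(stat)
```

The point of the sketch is only that the statistic requires two moment matrices and one matrix inversion, which is what makes it cheap and numerically robust to compute.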

#### *Appendix C.4. Proof of Theorem 4*

The result for $\tilde C_m$ can be shown using the results of [4]. As the eigenvalues are insensitive to changes of the basis, we can assume without loss of generality that the only unit root components in $\mathcal{T} X_t^{(m)}$ are contained in the first $c_m$ rows:

$$c\_t^{(m)} := \mathcal{T}X\_t^{(m)} = \begin{bmatrix} c\_{t,u}^{(m)} \\ c\_{t,\bullet}^{(m)} \end{bmatrix}, \quad \mathcal{D}\_{\mathfrak{c}} = \begin{bmatrix} T^{-1}I\_{\mathfrak{c}\_m} & 0 \\ 0 & I\_{n-\mathfrak{c}\_m} \end{bmatrix}.$$

Due to the filtering, $c_{t,\bullet}^{(m)}$ is stationary while $c_{t,u}^{(m)}$ contains the unit root $z_m$. Then the relevant matrix $\hat X_m$ can be written as

$$\hat X_m := \langle c_{t-1}^{\pi}, p_t^{\pi} \rangle \langle p_t^{\pi}, p_t^{\pi} \rangle^{-1} \langle p_t^{\pi}, c_{t-1}^{\pi} \rangle \langle c_{t-1}^{\pi}, c_{t-1}^{\pi} \rangle^{-1}.$$

Since $p_t = \mathcal{K}\varepsilon_{t-1} + \sum_{j=1, j \neq m}^{S} \alpha_j \beta_j' X_{t-1}^{(j)} + [0, \tilde\alpha_m] c_{t-1}^{(m)}$, we consequently have $p_t^{\pi} = \mathcal{K}\varepsilon_{t-1}^{\pi} + [0, \tilde\alpha_m] c_{t-1}^{\pi}$. Therefore, for the three components of $\hat X_m$ we obtain, with appropriate definitions of the random variables $S_m$, $T_m$ and using standard asymptotics,

$$
\begin{aligned}
\langle p_t^{\pi}, p_t^{\pi} \rangle &= \langle \mathcal{K}\varepsilon_{t-1}^{\pi} + \tilde\alpha_m c_{t-1,\bullet}^{\pi}, \mathcal{K}\varepsilon_{t-1}^{\pi} + \tilde\alpha_m c_{t-1,\bullet}^{\pi} \rangle \to \mathcal{K}\, \mathbb{E}\big[\varepsilon_{t-1}\varepsilon_{t-1}'\big] \mathcal{K}' + \tilde\alpha_m \mathbb{E}\big[c_{t-1,\bullet}^{\Pi}(c_{t-1,\bullet}^{\Pi})'\big] \tilde\alpha_m' > 0, \\
\langle p_t^{\pi}, c_{t-1}^{\pi} \rangle &= \langle \mathcal{K}\varepsilon_{t-1}^{\pi} + \tilde\alpha_m c_{t-1,\bullet}^{\pi}, c_{t-1}^{\pi} \rangle \xrightarrow{d} \big[ S_m,\; \tilde\alpha_m \mathbb{E}\big[c_{t-1,\bullet}^{\Pi}(c_{t-1,\bullet}^{\Pi})'\big] \big], \\
\langle p_t^{\pi}, c_{t-1}^{\pi} \rangle \langle c_{t-1}^{\pi}, c_{t-1}^{\pi} \rangle^{-1} \tilde D_c^{-1} &= [0, \tilde\alpha_m] + \mathcal{K} \langle \varepsilon_{t-1}^{\pi}, c_{t-1}^{\pi} \rangle \langle c_{t-1}^{\pi}, c_{t-1}^{\pi} \rangle^{-1} \tilde D_c^{-1}, \quad \mathcal{K} \langle \varepsilon_{t-1}^{\pi}, c_{t-1}^{\pi} \rangle \langle c_{t-1}^{\pi}, c_{t-1}^{\pi} \rangle^{-1} \tilde D_c^{-1} \xrightarrow{d} [T_m, 0].
\end{aligned}
$$

Correspondingly, the first block column $\hat X_{m,u}$ of $\hat X_m$ converges to zero such that $T \hat X_{m,u}$ converges in distribution, while the second block column converges in probability without normalization. This shows that

$$T \sum_{i=1}^{c_m} \hat\lambda_i = T\, \mathrm{tr}\Big[ U_m' (\hat X_m - X_m) \big[ U_m - X_m^{\dagger} (\hat X_m - X_m) U_m \big] \Big] + o_P(1) = \mathrm{tr}\big[ T \hat X_{m,uu} \big] + o_P(1)$$

converges in distribution. The limit is given in [4].
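The eigenvalue computation underlying this statistic can likewise be sketched numerically: $\hat X_m$ is a product of four sample moment matrices whose eigenvalues are squared canonical correlations, and the statistic scales the sum of the $c_m$ smallest of them by $T$. The following sketch uses simulated placeholder series for $p_t^{\pi}$ and $c_{t-1}^{\pi}$ (all names and dimensions are illustrative assumptions, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
T, n, c_m = 400, 3, 1  # illustrative sample size, state dimension, tested rank

# Placeholder series standing in for p_t^pi and c_{t-1}^pi.
p = rng.standard_normal((T, n))
c = np.column_stack([np.cumsum(rng.standard_normal(T)),   # unit root component
                     rng.standard_normal((T, n - 1))])    # stationary components

def smom(a, b):
    """Sample moment <a_t, b_t> = T^{-1} sum_t a_t b_t'."""
    return a.T @ b / a.shape[0]

# X_m-hat = <c,p><p,p>^{-1}<p,c><c,c>^{-1}; its eigenvalues are the
# squared sample canonical correlations between c_{t-1} and p_t.
Xhat = (smom(c, p) @ np.linalg.inv(smom(p, p))
        @ smom(p, c) @ np.linalg.inv(smom(c, c)))
lams = np.sort(np.linalg.eigvals(Xhat).real)
stat = T * lams[:c_m].sum()  # T times the sum of the c_m smallest eigenvalues
print(stat)
```

Only eigenvalues of a small $n \times n$ matrix are needed, which is why the test remains feasible even when many unit root frequencies have to be examined.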

For the case of the estimated state, note that the difference between the estimated and the true state is given as

$$\tilde x_t - x_t = \tilde\Gamma_p' z_{t,p} - \Gamma_p' z_{t,p} - \bar{\mathcal{A}}^p x_{t-p} = (\tilde\Gamma_p - \Gamma_p)' z_{t,p} - \bar{\mathcal{A}}^p x_{t-p}.$$

The strict minimum-phase assumption and the assumption on the increase of $p = p(T)$ imply that the second term is of order $o_P(T^{-1})$ and can be neglected. Furthermore,

$$(\tilde\Gamma_p - \Gamma_p)' z_{t,p} = (\tilde\Gamma_p - \Gamma_p)' \mathcal{D}_z^{-1} \mathcal{D}_z z_{t,p}, \quad (\tilde\Gamma_p - \Gamma_p)' \mathcal{D}_z^{-1} = O_P(T^{-1/2}).$$

Using this it can be concluded that

$$
\begin{aligned}
\langle \tilde p_t, \tilde p_t \rangle &= \langle p_t, p_t \rangle + O_P(T^{-1/2}), & \langle \tilde p_t, \tilde c_{t-1,\bullet}^{(m)} \rangle &= \langle p_t, c_{t-1,\bullet}^{(m)} \rangle + O_P(T^{-1/2}), \\
\langle \tilde p_t, \tilde c_{t-1,u}^{(m)} \rangle &= \langle p_t, c_{t-1,u}^{(m)} \rangle + O_P(T^{-1/2}), & \langle \tilde c_{t,u}^{(m)}, \tilde c_{t,u}^{(k)} \rangle &= \langle c_{t,u}^{(m)}, c_{t,u}^{(k)} \rangle + O_P(1).
\end{aligned}
$$

These equations imply that the difference between the expression using the true state and the one using the estimated state converges to zero, implying that the two tests accept and reject jointly asymptotically under the null hypothesis.

#### **References**

