*Article* **Cointegration and Adjustment in the CVAR(∞) Representation of Some Partially Observed CVAR(1) Models**

#### **Søren Johansen**

Department of Economics, University of Copenhagen, Øster Farimagsgade 5, building 26, DK-1353 Copenhagen K, Denmark; Soren.Johansen@econ.ku.dk; Tel.: +45-35323071

Received: 18 September 2018; Accepted: 8 January 2019; Published: 10 January 2019

**Abstract:** A multivariate CVAR(1) model for some observed variables and some unobserved variables is analysed using its infinite order CVAR representation of the observations. Cointegration and adjustment coefficients in the infinite order CVAR are found as functions of the parameters in the CVAR(1) model. Conditions for weak exogeneity for the cointegrating vectors in the approximating finite order CVAR are derived. The results are illustrated by two simple examples of relevance for modelling causal graphs.

**Keywords:** adjustment coefficients; cointegrating coefficients; CVAR; causal models

**JEL Classification:** C32

#### **1. Introduction**

In a conceptual exploration of long-run causal order, Hoover (2018) applies the CVAR(1) model for the processes $X_t = (x_{1t}, \ldots, x_{pt})'$ and $T_t = (T_{1t}, \ldots, T_{mt})'$ to model a causal graph. The process $(X_t', T_t')'$ is a solution to the equations

$$\begin{cases} \Delta X_{t+1} = MX_t + CT_t + \varepsilon_{t+1} \\ \Delta T_{t+1} = \eta_{t+1} \end{cases} \tag{1}$$

where the error terms $\varepsilon_t$ are independent identically distributed (i.i.d.) Gaussian variables with mean 0 and variance $\Omega_\varepsilon = \mathrm{diag}(\omega_{11}, \ldots, \omega_{pp}) > 0$, and are independent of the errors $\eta_t$, which are i.i.d. Gaussian with mean 0 and variance $\Omega_\eta$.

Thus, the stochastic trends $T_t$ are nonstationary random walks, and conditions will be given below for $X_t$ to be $I(1)$, that is, nonstationary with $\Delta X_t$ stationary. This implies that $MX_t + CT_t$ is stationary, so that $X_t$ and $T_t$ cointegrate.

The entry $M_{ij} \neq 0$ means that $x_j$ causes $x_i$, which is written $x_j \to x_i$, and $C_{ij} \neq 0$ means that $T_j \to x_i$; it is further assumed that $M_{ii} \neq 0$. Note that the model assumes that there are no causal links from $X_t$ to $T_t$, so that $T_t$ is strongly exogenous.

A simple example for three variables, *x*1, *x*2, *x*3, and a trend *T*, is the graph

$$T \to x_1 \to x_2 \to x_3,$$

where the matrices are given by

$$M = \begin{pmatrix} * & 0 & 0 \\ * & * & 0 \\ 0 & * & * \end{pmatrix}, \quad C = \begin{pmatrix} * \\ 0 \\ 0 \end{pmatrix},$$

*Econometrics* **2019**, *7*, 2

where ∗ indicates a nonzero coefficient.
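As an illustration, the chain model can be simulated directly from Equation (1). The following is a minimal numerical sketch; the coefficient values are hypothetical and chosen only to match the zero pattern of the displayed $M$ and $C$, subject to the stability condition on $I_p + M$ discussed next.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical coefficients for the chain T -> x1 -> x2 -> x3;
# the zero pattern matches the displayed M and C.
M = np.array([[-0.5,  0.0,  0.0],
              [ 0.3, -0.4,  0.0],
              [ 0.0,  0.2, -0.6]])
C = np.array([[0.8],
              [0.0],
              [0.0]])
p, m, n = 3, 1, 500

# Stability: all eigenvalues of I_p + M in the open unit disk.
assert np.all(np.abs(np.linalg.eigvals(np.eye(p) + M)) < 1)

X = np.zeros((n + 1, p))
T = np.zeros((n + 1, m))
for t in range(n):
    # Equation (1): Delta X_{t+1} = M X_t + C T_t + eps_{t+1},
    #               Delta T_{t+1} = eta_{t+1}.
    X[t + 1] = X[t] + M @ X[t] + C @ T[t] + 0.1 * rng.standard_normal(p)
    T[t + 1] = T[t] + 0.1 * rng.standard_normal(m)
```

In such a simulation, $X_t$ inherits the random walk in $T_t$, while $MX_t + CT_t$ is stationary, in line with the solution (2) below.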

Provided that $I_p + M$ has all eigenvalues in the open unit disk, it is seen that

$$MX_{t+1} + CT_{t+1} = (I_p + M)(MX_t + CT_t) + M\varepsilon_{t+1} + C\eta_{t+1}$$

determines a stationary process defined for all *t*. We define a nonstationary solution to (1) for *t* = 0, 1, . . . by

$$X_t = -M^{-1}C\sum_{i=1}^t \eta_i + M^{-1}\sum_{i=0}^\infty (I_p + M)^i (M\varepsilon_{t-i} + C\eta_{t-i}) \text{ and } T_t = \sum_{i=1}^t \eta_i. \tag{2}$$

Note that the starting values are

$$X_0 = M^{-1}\sum_{i=0}^{\infty}(I_p + M)^i(M\varepsilon_{-i} + C\eta_{-i}) \text{ and } T_0 = 0.$$

It is seen that $\Delta X_{t+1}$, $\Delta T_{t+1}$ and $MX_t + CT_t$ are stationary processes for all $t$, and that $(X_t', T_t')'$ is a solution to Equation (1). In the following, we assume that $(X_t', T_t')'$ is defined by (2) for $t = 0, 1, \ldots$

The paper by Hoover gives a detailed and general discussion of the problems of recovering causal structures from nonstationary observations $X_t$, or subsets of $X_t$, when $T_t$ is unobserved. That is, $X_t = (X_{1t}', X_{2t}')'$, where the observations $X_{1t}$ are $p_1$-dimensional and the unobserved processes $X_{2t}$ and $T_t$ are $p_2$- and $m$-dimensional respectively, $p = p_1 + p_2$. It is assumed that there are at least as many observations as trends, that is, $p_1 \ge m$.

Model (1) is therefore rewritten as

$$\begin{aligned} \Delta X_{1,t+1} &= M_{11}X_{1t} + M_{12}X_{2t} + C_1T_t + \varepsilon_{1,t+1}, \\ \Delta X_{2,t+1} &= M_{21}X_{1t} + M_{22}X_{2t} + C_2T_t + \varepsilon_{2,t+1}, \\ \Delta T_{t+1} &= \eta_{t+1}. \end{aligned} \tag{3}$$

Note that there is now a causal link from the observed process $X_{1t}$ to the unobserved process $X_{2t}$ if $M_{21} \neq 0$.

It follows from (3) that $X_{1t}$ is $I(1)$ and cointegrated with $p_1 - m$ cointegrating vectors $\beta$, see Theorem 1. Therefore, $\Delta X_{1t}$ has an infinite order autoregressive representation, see (Johansen and Juselius 2014, Lemma 2), which is written as

$$\Delta X_{1,t+1} = \alpha\beta'X_{1t} + \sum_{i=1}^{\infty} \Gamma_i \Delta X_{1,t+1-i} + \nu_{t+1}^{\beta}, \tag{4}$$

where the operator norm $||\Gamma_i|| = \lambda_{\max}^{1/2}(\Gamma_i'\Gamma_i)$ is $O(\rho^i)$ for some $0 < \rho < 1$. The matrices $\alpha$ and $\beta$ are $p_1 \times (p_1 - m)$ of rank $p_1 - m$, and $\nu_{t+1}^{\beta} = \Delta X_{1,t+1} - E(\Delta X_{1,t+1}|\mathcal{F}_t^{\beta})$, where $\mathcal{F}_t^{\beta} = \sigma(\Delta X_{1s}, s \le t, \beta'X_{1t})$. Thus, $X_{1t}$ is not measurable with respect to $\mathcal{F}_t^{\beta}$, but $\beta'X_{1t}$ is measurable with respect to $\mathcal{F}_t^{\beta}$. Here, the prediction errors $\nu_{t+1}^{\beta}$ are i.i.d. $N_{p_1}(0, \Sigma)$, where $\Sigma$ is calculated below. The representation of $X_{1t}$, similar to (2), is

$$X_{1t} = \beta_\perp(\alpha_\perp'\Gamma\beta_\perp)^{-1}\alpha_\perp'\sum_{i=1}^t \nu_i^{\beta} + \sum_{i=0}^\infty C_i\nu_{t-i}^{\beta}, \quad t = 0, 1, \ldots \tag{5}$$

where $\Gamma = I_{p_1} - \sum_{i=1}^{\infty}\Gamma_i$ and $||C_i|| = O(\rho^i)$. Here, $\beta_\perp$ is a $p_1 \times m$ matrix of full rank for which $\beta'\beta_\perp = 0$, and similarly for $\alpha_\perp$. This shows that $X_{1t}$ is a cointegrated $I(1)$ process, that is, $X_{1t}$ is nonstationary, while $\beta'X_{1t}$ and $\Delta X_{1t}$ are stationary.

A statistical analysis, including estimation of $\alpha$, $\beta$, and $\Gamma$, can be conducted for the observations $X_{1t}$, $t = 1, \ldots, T$, using an approximating finite order CVAR; see Saikkonen (1992) and Saikkonen and Lütkepohl (1996).

Hoover (2018) investigates, in particular, whether weak exogeneity for *β* in the approximating finite order CVAR, that is, a zero row in *α*, is a useful tool for finding the causal structure in the graph.

The present note solves the problem of finding expressions for the parameters $\alpha$ and $\beta$ in the CVAR($\infty$) model (4) for the observations $X_{1t}$, as functions of the parameters in model (3), and finds conditions on these for the presence of a zero row in $\alpha$, and hence weak exogeneity for $\beta$ in the approximating finite order CVAR.

#### **2. The Assumptions and Main Results**

First, some definitions and assumptions are given; then the main results on $\alpha$ and $\beta$ are presented and proved in Theorems 1 and 2. These results rely on Theorem A1 on the solution of an algebraic Riccati equation, which is given and proved in Appendix A.

In the following, a $k \times k$ matrix is called stable if all eigenvalues are contained in the open unit disk. If $A$ is a $k_1 \times k_2$ matrix of rank $k \le \min(k_1, k_2)$, an orthogonal complement, $A_\perp$, is defined as a $k_1 \times (k_1 - k)$ matrix of rank $k_1 - k$ for which $A_\perp'A = 0$. If $k_1 = k$, $A_\perp = 0$. Note that $A_\perp$ is only defined up to multiplication from the right by a $(k_1 - k) \times (k_1 - k)$ matrix of full rank. Throughout, $E_t(\cdot)$ and $Var_t(\cdot)$ denote conditional expectation and variance given the sigma-field $\mathcal{F}_{0,t} = \sigma\{X_{1s}, 0 \le s \le t\}$ generated by the observations.

#### **Assumption 1.** *In Equation (3), it is assumed that*

*(i)* $\varepsilon_{1t}$, $\varepsilon_{2t}$, and $\eta_t$ *are mutually independent and i.i.d. Gaussian with mean zero and variances* $\Omega_1$, $\Omega_2$, *and* $\Omega_\eta$, *where* $\Omega_1$ *and* $\Omega_2$ *are diagonal matrices,*

*(ii)* $I_{p_1} + M_{11}$, $I_{p_2} + M_{22}$ *and* $I_p + M$ *are stable,*

*(iii)* $C_{1.2} = C_1 - M_{12}M_{22}^{-1}C_2$ *has full rank* $m$.

*Let* $(X_{1t}', X_{2t}', T_t')'$, $t = 0, 1, \ldots, n$, *be the solution to (3) given in (2), such that* $\Delta X_t$ *and* $MX_t + CT_t$ *are stationary.*

Assumption 1(ii) on $M_{11}$, $M_{22}$ and $M$ is taken from Hoover (2018) to ensure that, for instance, the process $X_t$ given by the equations $X_t = (I_p + M)X_{t-1} + \text{input}$ is stationary if the input is stationary, so that the nonstationarity of $X_t$ in model (3) is created by the trends $T_t$, and not by the own dynamics of $X_t$ as given by $M$. It follows from this assumption that $M$ is nonsingular, because $I_p + M$ is stable, and similarly for $M_{11}$ and $M_{22}$. Moreover, $M_{11.2} = M_{11} - M_{12}M_{22}^{-1}M_{21}$ is nonsingular because

$$\det M = \det M_{22}\det M_{11.2} \neq 0.$$

*The Main Results*

The first result on *β* is a simple consequence of model (3).

**Theorem 1.** *Assumption 1 implies that the cointegrating rank is $r = p_1 - m$, and that the coefficients $\beta$ and $\beta_\perp$ in the CVAR($\infty$) representation for $X_{1t}$, see (4), are given for $p_1 > m$ as*

$$\beta_\perp = M_{11.2}^{-1}C_{1.2} \text{ and } \beta = M_{11.2}'(C_{1.2})_\perp. \tag{6}$$

*For $p_1 = m$, $\beta_\perp$ has rank $p_1$, and there is no cointegration: $\alpha = \beta = 0$.*

**Proof of Theorem 1.** From the model Equation (3), it follows, by eliminating $X_{2t}$ from the first two equations, that

$$\Delta X_{1,t+1} - M_{12}M_{22}^{-1}\Delta X_{2,t+1} = M_{11.2}X_{1t} + C_{1.2}T_t + \varepsilon_{1,t+1} - M_{12}M_{22}^{-1}\varepsilon_{2,t+1}.$$

Solving for the nonstationary terms gives

$$M_{11.2}X_{1t} + C_{1.2}T_t = \Delta X_{1,t+1} - M_{12}M_{22}^{-1}\Delta X_{2,t+1} - \varepsilon_{1,t+1} + M_{12}M_{22}^{-1}\varepsilon_{2,t+1}. \tag{7}$$

Multiplying by $\beta'M_{11.2}^{-1}$, it is seen that $\beta'X_{1t}$ is stationary if $\beta'M_{11.2}^{-1}C_{1.2} = 0$. By Assumption 1(iii), $C_{1.2}$ has rank $m$, so that $\beta$ has rank $p_1 - m$, which proves (6).
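The formulas in (6) are easy to check numerically. The following sketch uses hypothetical parameter values with $p_1 = 2$, $p_2 = 1$ and $m = 1$; it computes $M_{11.2}$, $C_{1.2}$, then $\beta_\perp$ and $\beta$, and verifies that $\beta'M_{11.2}^{-1}C_{1.2} = 0$.

```python
import numpy as np

# Hypothetical partitioned CVAR(1) parameters (p1 = 2, p2 = 1, m = 1).
M11 = np.array([[-0.5, 0.0], [0.0, -0.6]])
M12 = np.array([[0.0], [0.4]])
M21 = np.array([[0.3, 0.0]])
M22 = np.array([[-0.5]])
C1 = np.array([[0.8], [0.0]])
C2 = np.array([[0.0]])

def orth_complement(A):
    """Columns spanning the orthogonal complement of span(A)."""
    u, s, _ = np.linalg.svd(A, full_matrices=True)
    rank = int(np.sum(s > 1e-10))
    return u[:, rank:]

M112 = M11 - M12 @ np.linalg.inv(M22) @ M21   # M_{11.2}
C12 = C1 - M12 @ np.linalg.inv(M22) @ C2      # C_{1.2}

beta_perp = np.linalg.inv(M112) @ C12          # Equation (6)
beta = M112.T @ orth_complement(C12)           # one valid normalization

# beta' X_{1t} is stationary because beta' M_{11.2}^{-1} C_{1.2} = 0.
print(np.allclose(beta.T @ np.linalg.inv(M112) @ C12, 0.0))  # True
```

Since $\beta$ is only identified up to a full-rank transformation, any basis of the orthogonal complement of $C_{1.2}$ gives an equivalent choice.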

The result for $\alpha$ is more involved and is given in Theorem 2. The proof is a further analysis of (7) and involves, first, the representation of $X_{1t}$ in terms of a sum of prediction errors $\nu_t^{\beta} = \Delta X_{1t} - E(\Delta X_{1t}|\mathcal{F}_{t-1}^{\beta})$, see (5), and, second, a representation of $E(T_t|\mathcal{F}_{0,t}) = E(T_t|X_{10}, \ldots, X_{1t})$ as a weighted sum of the prediction errors $\nu_{0t} = \Delta X_{1t} - E(\Delta X_{1t}|\mathcal{F}_{0,t-1})$. The second representation requires a result from control theory on the solution of an algebraic Riccati equation, together with some results based on the Kalman filter for the calculation of the conditional mean and variance of the unobserved processes $X_{2t}$, $T_t$ given the observations $X_{1s}$, $0 \le s \le t$. These are collected as Theorem A1 in Appendix A.

For the discussion of these results, it is useful to reformulate (3) by defining the unobserved variables and errors

$$T_t^* = \begin{pmatrix} X_{2t} \\ T_t \end{pmatrix}, \quad \eta_t^* = \begin{pmatrix} \varepsilon_{2t} \\ \eta_t \end{pmatrix}, \quad \Omega^* = Var(\eta_t^*) = \begin{pmatrix} \Omega_2 & 0 \\ 0 & \Omega_\eta \end{pmatrix}, \tag{8}$$

and the matrices

$$Q^* = \begin{pmatrix} I_{p_2} + M_{22} & C_2 \\ 0 & I_m \end{pmatrix}, \quad M_{21}^* = \begin{pmatrix} M_{21} \\ 0 \end{pmatrix}, \quad C^* = (M_{12}; C_1). \tag{9}$$

Then, (3) becomes

$$\begin{aligned} X_{1,t+1} &= (I_{p_1} + M_{11})X_{1t} + C^*T_t^* + \varepsilon_{1,t+1}, \\ T_{t+1}^* &= M_{21}^*X_{1t} + Q^*T_t^* + \eta_{t+1}^*. \end{aligned} \tag{10}$$

One can then show, see Theorem A1, that based on properties of the Gaussian distribution, recursions can be found for $E_t = E_t(T_t^*) = E(T_t^*|\mathcal{F}_{0,t})$ and $V_t = Var_t(T_t^*) = Var(T_t^*|\mathcal{F}_{0,t})$, using the matrices in (8) and (9), by the equations

$$V_{t+1} = Q^*V_tQ^{*\prime} + \Omega^* - Q^*V_tC^{*\prime}(C^*V_tC^{*\prime} + \Omega_1)^{-1}C^*V_tQ^{*\prime}, \tag{11}$$

$$E_{t+1} = M_{21}^*X_{1t} + Q^*E_t + Q^*V_tC^{*\prime}(C^*V_tC^{*\prime} + \Omega_1)^{-1}\nu_{0,t+1}. \tag{12}$$

It then follows from results from control theory that $V = \lim_{t\to\infty} Var_t(T_t^*)$ exists and satisfies the algebraic Riccati equation

$$V = Q^*VQ^{*\prime} + \Omega^* - Q^*VC^{*\prime}(C^*VC^{*\prime} + \Omega_1)^{-1}C^*VQ^{*\prime}. \tag{13}$$

Moreover, the prediction errors $\nu_{0t} = \Delta X_{1t} - E(\Delta X_{1t}|\mathcal{F}_{0,t-1})$ are independent $N_{p_1}(0, \Sigma_t)$ for $\Sigma_t = C^*V_tC^{*\prime} + \Omega_1$, and the prediction errors $\nu_t^{\beta} = \Delta X_{1t} - E(\Delta X_{1t}|\mathcal{F}_{t-1}^{\beta})$ are independent identically distributed $N_{p_1}(0, \Sigma)$ for $\Sigma = C^*VC^{*\prime} + \Omega_1$. Finally, $E_t(T_t)$ has the representation in the prediction errors $\nu_{0i}$,

$$E_t(T_t) = E_0(T_0) + (0; I_m)\sum_{i=1}^t V_iC^{*\prime}\Sigma_i^{-1}\nu_{0i}, \tag{14}$$

where *E*0(*T*0) = *E*(*T*0|*X*10) = 0.
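The recursion (11) and its fixed point (13) are straightforward to iterate numerically. The following is a minimal sketch with hypothetical parameter values ($p_1 = 2$, $p_2 = 1$, $m = 1$) and, for simplicity, the starting value $V_0 = 0$ rather than the conditional variance of the initial state.

```python
import numpy as np

# Hypothetical parameter values (p1 = 2, p2 = 1, m = 1).
M22 = np.array([[-0.5]])
M12 = np.array([[0.0], [0.4]])
C1 = np.array([[0.8], [0.0]])
C2 = np.array([[0.0]])
Omega1 = np.diag([0.1, 0.1])
Omega2 = np.array([[0.1]])
Omega_eta = np.array([[0.05]])
p2, m = 1, 1

# Matrices from (8) and (9).
Q = np.block([[np.eye(p2) + M22, C2],
              [np.zeros((m, p2)), np.eye(m)]])     # Q*
Cs = np.hstack([M12, C1])                          # C*
Omega_star = np.block([[Omega2, np.zeros((p2, m))],
                       [np.zeros((m, p2)), Omega_eta]])

V = np.zeros((p2 + m, p2 + m))   # simplified starting value V_0 = 0
for _ in range(2000):
    K = Q @ V @ Cs.T @ np.linalg.inv(Cs @ V @ Cs.T + Omega1)
    V_next = Q @ V @ Q.T + Omega_star - K @ Cs @ V @ Q.T   # recursion (11)
    if np.max(np.abs(V_next - V)) < 1e-12:
        V = V_next
        break
    V = V_next

# At convergence, V satisfies the algebraic Riccati equation (13).
resid = V - (Q @ V @ Q.T + Omega_star
             - Q @ V @ Cs.T @ np.linalg.inv(Cs @ V @ Cs.T + Omega1)
             @ Cs @ V @ Q.T)
print(np.max(np.abs(resid)))
```

With the limit $V$ and $\Sigma = C^*VC^{*\prime} + \Omega_1$ in hand, the coefficients in Theorem 2 below can be evaluated directly.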


Comparing the representation (5) for *X*1*<sup>t</sup>* and (14) for *Et*(*Tt*) gives a more precise relation between the coefficients of the nonstationary terms in (7). The main result of the paper is to show how this leads to expressions for the coefficients *α* and *α*<sup>⊥</sup> as functions of the parameters in model (3).

**Theorem 2.** *Assumption 1 implies that the coefficients $\alpha$ and $\alpha_\perp$ in the CVAR($\infty$) representation of $X_{1t}$ are given for $p_1 > m$ as*

$$\alpha_\perp = \Sigma^{-1}(M_{12}V_{2T} + C_1V_{TT}), \quad \alpha = \Sigma(M_{12}V_{2T} + C_1V_{TT})_\perp, \tag{15}$$

*where*

$$\Sigma = Var(\nu_t^{\beta}) = C^*VC^{*\prime} + \Omega_1 = (M_{12}; C_1)\begin{pmatrix} V_{22} & V_{2T} \\ V_{T2} & V_{TT} \end{pmatrix}(M_{12}; C_1)' + \Omega_1. \tag{16}$$

**Proof of Theorem 2.** The left hand side of (7) has two nonstationary terms. The observation $X_{1t}$ is represented in (5) in terms of a random walk in the prediction errors $\nu_i^{\beta}$, plus a stationary term, and $T_t$ is a random walk in $\eta_i$. Calculating the conditional expectation given the sigma-field $\mathcal{F}_{0,t}$, $T_t$ is replaced by $E_t(T_t)$, which in (14) is represented as a weighted sum of $\nu_{0i}$. Thus, the conditional expectation of (7) gives

$$M_{11.2}X_{1t} + C_{1.2}E_t(T_t) = E_t(\Delta X_{1,t+1} - M_{12}M_{22}^{-1}\Delta X_{2,t+1}), \tag{17}$$

where the right hand side is bounded in mean:

$$E|E_t(\Delta X_{1,t+1} - M_{12}M_{22}^{-1}\Delta X_{2,t+1})| \le c\{E|\Delta X_{1,t+1}| + E|\Delta X_{2,t+1}|\} \le c.$$

Setting $t = [nu]$ and dividing by $n^{1/2}$, it follows from (5) that

$$n^{-1/2}X_{1[nu]} \xrightarrow{\mathcal{D}} \beta_\perp(\alpha_\perp'\Gamma\beta_\perp)^{-1}\alpha_\perp'W_\nu(u), \tag{18}$$

where $W_\nu(u)$ is the Brownian motion generated by the i.i.d. prediction errors $\nu_t^{\beta}$.

From (14), it can be proved that

$$n^{-1/2}E_{[nu]}(T_{[nu]}) = (0; I_m)n^{-1/2}\sum_{t=1}^{[nu]}V_tC^{*\prime}\Sigma_t^{-1}\nu_{0t} \xrightarrow{\mathcal{D}} (0; I_m)VC^{*\prime}\Sigma^{-1}W_\nu(u). \tag{19}$$

This follows by replacing $V_t, \Sigma_t$ by $V, \Sigma$, because for $\delta_t' = V_tC^{*\prime}\Sigma_t^{-1} - VC^{*\prime}\Sigma^{-1} \to 0$, it holds that

$$Var\Big(n^{-1/2}\sum_{t=1}^{[nu]}\delta_t'\nu_{0t}\Big) = n^{-1}\sum_{t=1}^{[nu]}\delta_t'\Sigma_t\delta_t \to 0, \quad n \to \infty.$$

Next, we can replace $\nu_{0t}$ by $\nu_t^{\beta}$ as follows. For $t = 0, 1, \ldots$ the sum

$$\alpha\beta'X_{1t} + \sum_{i=1}^{t}\Gamma_i\Delta X_{1,t+1-i} = \alpha\beta'X_{1t} + \Gamma_1\Delta X_{1t} + \dots + \Gamma_t\Delta X_{11}$$

is measurable with respect to both $\mathcal{F}_t^{\beta}$ and $\mathcal{F}_{0,t}$, such that

$$\nu_{0,t+1} - \nu_{t+1}^{\beta} = -E\Big(\sum_{i=t+1}^{\infty}\Gamma_i\Delta X_{1,t+1-i}\Big|\mathcal{F}_{0,t}\Big) + \sum_{i=t+1}^{\infty}\Gamma_i\Delta X_{1,t+1-i}.$$

Then

$$E|\nu_{0,t+1} - \nu_{t+1}^{\beta}| \le c\sum_{i=t+1}^{\infty}\rho^i E|\Delta X_{1,t+1-i}| = O(\rho^t),$$


and therefore

$$E\Big|n^{-1/2}\sum_{t=1}^{[nu]}(\nu_{t+1}^{\beta} - \nu_{0,t+1})\Big| \le n^{-1/2}\sum_{t=1}^{[nu]}E|\nu_{t+1}^{\beta} - \nu_{0,t+1}| \le cn^{-1/2}\sum_{t=1}^{[nu]}\rho^t \to 0, \quad n \to \infty,$$

which proves (19).

Finally, setting $t = [nu]$ and normalizing (17) by $n^{-1/2}$, it follows that in the limit

$$M_{11.2}\beta_\perp(\alpha_\perp'\Gamma\beta_\perp)^{-1}\alpha_\perp'W_\nu(u) + C_{1.2}(0; I_m)VC^{*\prime}\Sigma^{-1}W_\nu(u) = 0 \text{ for } u \in [0, 1].$$

This relation shows that the coefficient of $W_\nu(u)$ is zero, so that $\alpha_\perp$ can be chosen as

$$\alpha_\perp = \Sigma^{-1}C^*V(0; I_m)' = \Sigma^{-1}(M_{12}V_{2T} + C_1V_{TT}),$$

and therefore $\alpha = \Sigma(M_{12}V_{2T} + C_1V_{TT})_\perp$, which proves (15).

#### **3. Two Examples of Simplifying Assumptions**

It follows from Theorem 2 that, in order to investigate a zero row in $\alpha$, the matrix $V$ is needed. This is easy to calculate from the recursion (11) for a given value of the parameters, but the properties of $V$ are more difficult to evaluate. In general, $\alpha$ does not contain a zero row, but if $M_{12}V_{2T} = 0$, the expressions for $\alpha$ and $\alpha_\perp$ simplify, so that simple conditions on $M_{12}$ and $C_1$ imply a zero row in $\alpha$ and hence give weak exogeneity in the statistical analysis of the approximating finite order CVAR. This extra condition, $M_{12}V_{2T} = 0$, implies that

$$\Sigma = (M_{12}; C_1)V(M_{12}; C_1)' + \Omega_1 = M_{12}V_{22}M_{12}' + C_1V_{TT}C_1' + \Omega_1,$$

and

$$(M_{12}V_{2T} + C_1V_{TT})_\perp = (C_1V_{TT})_\perp = C_{1\perp},$$

such that *α* simplifies to

$$\alpha = (M_{12}V_{22}M_{12}' + C_1V_{TT}C_1' + \Omega_1)C_{1\perp} = (M_{12}V_{22}M_{12}' + \Omega_1)C_{1\perp}.$$

Thus, a condition for a zero row in *α* is

$$e_i'\alpha = e_i'M_{12}V_{22}M_{12}'C_{1\perp} + \omega_ie_i'C_{1\perp} = 0, \tag{20}$$

because $\Omega_1 = \mathrm{diag}(\omega_1, \ldots, \omega_{p_1})$. This is simple to check by inspecting the matrices $M_{12}$ and $C_{1\perp}$ in model (3). Below, two cases are given where such a simple solution is available.

**Case 1** ($M_{12} = 0$)**.** *If the unobserved process $X_{2t}$ does not cause the observations $X_{1t}$, then $M_{12} = 0$. Therefore, $M_{12}V_{2T} = 0$, and from (20) it follows that*

$$e_i'\alpha = \omega_ie_i'C_{1\perp}.$$

*Thus, $\alpha$ has a zero row if $C_{1\perp}$ has a zero row.*

*An example of $M_{12} = 0$ is the chain $T \to x_1 \to x_2 \to x_3$, where $X_1 = \{x_1, x_2, x_3\}$ is observed and $X_2$ is empty, and hence $M_{12} = 0$ and $C_2 = 0$. Then, because $T \to x_1$,*

$$C_1 = \begin{pmatrix} * \\ 0 \\ 0 \end{pmatrix}, \quad C_{1\perp} = \begin{pmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 1 \end{pmatrix}.$$

*Thus, the first row of $C_{1\perp}$ is a zero row, such that $x_1$ is weakly exogenous.*
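Case 1 can be checked with a small numerical sketch: with $M_{12} = 0$, the simplified expression above reduces to $\alpha = \Omega_1 C_{1\perp}$, so a zero row of $C_{1\perp}$ gives a zero row of $\alpha$. The nonzero entries of $C_1$ and $\Omega_1$ below are hypothetical illustration values.

```python
import numpy as np

# Case 1: M12 = 0, chain T -> x1 -> x2 -> x3, all of x1, x2, x3 observed.
C1 = np.array([[0.8], [0.0], [0.0]])   # only T -> x1; 0.8 is hypothetical
C1_perp = np.array([[0.0, 0.0],
                    [1.0, 0.0],
                    [0.0, 1.0]])       # satisfies C1' C1_perp = 0
Omega1 = np.diag([0.1, 0.2, 0.3])      # hypothetical diagonal error variances

assert np.allclose(C1.T @ C1_perp, 0.0)

# With M12 = 0, alpha = Omega1 C1_perp, so the first row of alpha is zero:
# x1 is weakly exogenous.
alpha = Omega1 @ C1_perp
print(alpha[0])  # [0. 0.]
```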

To formulate the next case, a definition of strong orthogonality of two matrices is introduced.

**Definition 1.** *Let $A$ be a $k \times k_1$ matrix and $B$ a $k \times k_2$ matrix. Then, $A$ and $B$ are called strongly orthogonal if $A'DB = 0$ for all diagonal matrices $D$, or equivalently if $A_{ji}B_{j\ell} = 0$ for all $i, j, \ell$.*

Thus, if $A_{ji} \neq 0$, then row $j$ of $B$ is zero, and if $B_{j\ell} \neq 0$, then row $j$ of $A$ is zero. A simple example is

$$A = \begin{pmatrix} * & * \\ 0 & * \\ 0 & 0 \end{pmatrix}, \quad B = \begin{pmatrix} 0 \\ 0 \\ * \end{pmatrix}.$$

Thus, the definition means that if two matrices are strongly orthogonal, it is due to the positions of the zeros and not to linear combinations of nonzero numbers being zero.
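Since $A'DB = \sum_j d_j A_{j\cdot}'B_{j\cdot}$, strong orthogonality is a pure zero-pattern condition: no row index $j$ may carry a nonzero entry in both matrices. A minimal sketch of this check (the example matrices mirror the display above, with hypothetical nonzero values):

```python
import numpy as np

def strongly_orthogonal(A, B, tol=1e-12):
    """Definition 1: A_{ji} B_{jl} = 0 for all i, j, l, i.e. no row in which
    both A and B have a nonzero entry."""
    row_nonzero_A = np.any(np.abs(A) > tol, axis=1)
    row_nonzero_B = np.any(np.abs(B) > tol, axis=1)
    return not np.any(row_nonzero_A & row_nonzero_B)

# Zero patterns as in the displayed example: nonzero rows do not overlap.
A = np.array([[1.0, 2.0], [0.0, 3.0], [0.0, 0.0]])
B = np.array([[0.0], [0.0], [4.0]])
print(strongly_orthogonal(A, B))         # True
print(strongly_orthogonal(A, A[:, :1]))  # False: the first row is nonzero in both
```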

Thus, in particular, if $M_{12}$ and $C_1$ are strongly orthogonal, and if $T$ causes a variable in $X_1$, then $X_2$ does not cause that variable. The expression for $V$ simplifies in the following case.

**Lemma 1.** *If $C_2 = 0$ and $M_{12}'\Omega_1^{-1}C_1 = 0$, then $Q^* = \mathrm{blockdiag}(I_{p_2} + M_{22}; I_m)$ and $V_{2T} = 0$, such that $V = \mathrm{blockdiag}(V_{22}; V_{TT})$.*

**Proof of Lemma 1.** We first prove that $V_t$ is block diagonal for $t = 0$. From (2), it follows that

$$\begin{pmatrix} X_{10} \\ X_{20} \end{pmatrix} = M^{-1}\sum_{i=0}^{\infty}(I_p + M)^i(M\varepsilon_{-i} + C\eta_{-i}) \text{ and } T_0 = 0.$$

Thus, if $\Phi$ denotes the variance of $(X_{10}', X_{20}')'$, then

$$V_0 = Var\Big(\begin{pmatrix} X_{20} \\ T_0 \end{pmatrix}\Big|X_{10}\Big) = \begin{pmatrix} \Phi_{22.1} & 0 \\ 0 & 0 \end{pmatrix},$$

and hence block diagonal. Assume, therefore, that $V_t = \mathrm{blockdiag}(V_{t22}; V_{tTT})$ and consider the expression for $V_{t+1}$, see (11). In this expression, $Q^*$ is block diagonal (because $C_2 = 0$), so $Q^*V_tQ^{*\prime}$ and $\Omega^*$ are block diagonal, and the same holds for $Q^*V_t^{1/2}$. Thus, it is enough to show that

$$V_t^{1/2}C^{*\prime}(C^*V_tC^{*\prime} + \Omega_1)^{-1}C^*V_t^{1/2}$$

is block diagonal. To simplify the notation, define the normalized matrices

$$\check{M} = \Omega_1^{-1/2}M_{12}V_{t22}^{1/2} \text{ and } \check{C} = \Omega_1^{-1/2}C_1V_{tTT}^{1/2}.$$

Then, by assumption,

$$\check{M}'\check{C} = V_{t22}^{1/2}M_{12}'\Omega_1^{-1}C_1V_{tTT}^{1/2} = 0,$$

so that, using $V_{t2T} = 0$,

$$V_t^{1/2}C^{*\prime}(C^*V_tC^{*\prime} + \Omega_1)^{-1}C^*V_t^{1/2} = (\check{M}, \check{C})'(\check{M}\check{M}' + \check{C}\check{C}' + I_{p_1})^{-1}(\check{M}, \check{C}).$$

A direct calculation shows that

$$(\check{M}\check{M}' + \check{C}\check{C}' + I_{p_1})^{-1} = I_{p_1} - \check{M}(I_{p_2} + \check{M}'\check{M})^{-1}\check{M}' - \check{C}(I_m + \check{C}'\check{C})^{-1}\check{C}',$$

and that

$$\check{M}'\{I_{p_1} - \check{M}(I_{p_2} + \check{M}'\check{M})^{-1}\check{M}' - \check{C}(I_m + \check{C}'\check{C})^{-1}\check{C}'\}\check{C} = 0,$$

such that $(\check{M}, \check{C})'(\check{M}\check{M}' + \check{C}\check{C}' + I_{p_1})^{-1}(\check{M}, \check{C})$ is block diagonal.

Then, $V_t^{1/2}C^{*\prime}(C^*V_tC^{*\prime} + \Omega_1)^{-1}C^*V_t^{1/2}$, and hence $V_{t+1}$, are block diagonal. Taking the limit for $t \to \infty$, it is seen that $V$ is also block diagonal.

**Case 2** ($C_2 = 0$, and $M_{12}$ and $C_1$ are strongly orthogonal)**.** *Because $C_2 = 0$ and $M_{12}'\Omega_1^{-1}C_1 = 0$, Lemma 1 shows that $V_{2T} = 0$, so that the condition $M_{12}V_{2T} = 0$ and (20) hold. Moreover, strong orthogonality also implies that $M_{12}'C_1 = 0$, such that $M_{12} = C_{1\perp}\xi$ for some $\xi$. Hence*

$$e_i'\alpha = e_i'M_{12}V_{22}M_{12}'C_{1\perp} + \omega_ie_i'C_{1\perp} = e_i'C_{1\perp}(\xi V_{22}M_{12}'C_{1\perp} + \omega_iI_{p_1-m}), \tag{21}$$

*and therefore a zero row in $C_{1\perp}$ gives a zero row in $\alpha$.*

*Consider again the chain $T \to x_1 \to x_2 \to x_3$, but assume now that $x_2$ is not observed. Thus, $X_1 = \{x_1, x_3\}$ and $X_2 = \{x_2\}$. Here, $T$ causes $x_1$, and $x_2$ causes $x_3$, so that*

$$M_{12} = \begin{pmatrix} 0 \\ * \end{pmatrix}, \quad C_1 = \begin{pmatrix} * \\ 0 \end{pmatrix}, \quad C_2 = 0.$$

*Note that $M_{12}'DC_1 = 0$ for all diagonal $D$, because $T$ and $X_2$ cause disjoint subsets of $X_1$. This, together with $C_2 = 0$, implies that $V$ is block diagonal and that (21) holds. Thus, $x_i$ is weakly exogenous, $e_i'\alpha = 0$, if*

$$e_i'C_{1\perp} = e_i'\begin{pmatrix} 0 \\ * \end{pmatrix} = 0.$$

#### **4. Conclusions**

This paper investigates the problem of finding adjustment and cointegrating coefficients for the infinite order CVAR representation of a partially observed simple CVAR(1) model. The main tools are some classical results for the solution of the algebraic Riccati equation, and the results are exemplified by an analysis of CVAR(1) models for causal graphs in two cases where simple conditions for weak exogeneity are derived in terms of the parameters of the CVAR(1) model.

#### **Funding:** This research received no external funding.

**Acknowledgments:** The author would like to thank Kevin Hoover for long discussions on the problem and its solution, and Massimo Franchi for reading a first version of the paper and for pointing out the excellent book by Lancaster and Rodman, and two anonymous referees who helped clarify some of the proofs.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **Appendix A.**

The next theorem shows how the Kalman filter can be used to calculate $Var_t(T_t^*)$ and $E_t(T_t^*)$, using the same technique as for the common trends model, and proves the existence of the limit of $V_t$. The last result follows from the theory of the algebraic Riccati equation, see Lancaster and Rodman (1995), in the following LR(1995).

**Theorem A1.** *Let $X_{1t}$ and $T_t^*$ be given by model (10) and let Assumption 1 be satisfied. Then, $V_t = Var_t(T_t^*)$ and $E_t = E_t(T_t^*)$ are given recursively, using the starting values $E_0$ and $V_0$, by*

$$V_{t+1} = Q^*V_tQ^{*\prime} + \Omega^* - Q^*V_tC^{*\prime}\Sigma_t^{-1}C^*V_tQ^{*\prime}, \tag{A1}$$

$$E_{t+1} = M_{21}^*X_{1t} + Q^*E_t + Q^*V_tC^{*\prime}\Sigma_t^{-1}\nu_{0,t+1}, \tag{A2}$$

*where*

$$\Sigma_t = C^*V_tC^{*\prime} + \Omega_1, \tag{A3}$$


*and the prediction errors*

$$\nu_{0,t+1} = X_{1,t+1} - E_t(X_{1,t+1}) \tag{A4}$$

*are independent $N_{p_1}(0, \Sigma_t)$.*

*The sequence $V_t$, starting with $V_0$, converges to a finite positive limit $V$, which satisfies the algebraic Riccati equation*

$$V = Q^*VQ^{*\prime} + \Omega^* - Q^*VC^{*\prime}\Sigma^{-1}C^*VQ^{*\prime}, \quad \Sigma = C^*VC^{*\prime} + \Omega_1. \tag{A5}$$

*Furthermore,*

$$Q^* - Q^*VC^{*\prime}\Sigma^{-1}C^* \tag{A6}$$

*is stable, and Et*(*Tt*) *satisfies the equation*

$$E_{t+1}(T_{t+1}) = E_t(T_t) + (0; I_m)V_tC^{*\prime}\Sigma_t^{-1}\nu_{0,t+1}. \tag{A7}$$

**Proof of Theorem A1.** The variance $V_t = Var_t(T_t^*)$ can be calculated recursively, using the properties of the Gaussian distribution, as

$$\begin{split} Var_{t+1}(T_{t+1}^*) &= Var_t(T_{t+1}^*|X_{1,t+1}) \\ &= Var_t(T_{t+1}^*) - Cov_t(T_{t+1}^*; X_{1,t+1})Var_t(X_{1,t+1})^{-1}Cov_t(X_{1,t+1}; T_{t+1}^*). \end{split} \tag{A8}$$

From the model Equation (10), it follows that

$$Var_t(T_{t+1}^*) = Var_t\{M_{21}^*X_{1t} + Q^*T_t^* + \eta_{t+1}^*\} = Q^*Var_t(T_t^*)Q^{*\prime} + \Omega^*, \tag{A9}$$

$$Cov_t(T_{t+1}^*; X_{1,t+1}) = Cov_t\{T_{t+1}^*; (I_{p_1} + M_{11})X_{1t} + C^*T_t^* + \varepsilon_{1,t+1}\} = Q^*Var_t(T_t^*)C^{*\prime}, \tag{A10}$$

$$Var_t(X_{1,t+1}) = Var_t\{(I_{p_1} + M_{11})X_{1t} + C^*T_t^* + \varepsilon_{1,t+1}\} = C^*Var_t(T_t^*)C^{*\prime} + \Omega_1. \tag{A11}$$

Then, (A8)–(A11) give the recursion for $V_t = Var_t(T_t^*)$ in (A1). Similarly, for the conditional mean, it is seen that

$$\begin{aligned} E\_{t+1}(T\_{t+1}^\*) &= E\_t(T\_{t+1}^\*|X\_{1,t+1}) = E\_t(T\_{t+1}^\*) + \text{Cov}\_t(T\_{t+1}^\*; X\_{1,t+1})\text{Var}\_t(X\_{1,t+1})^{-1}\nu\_{0,t+1}, \\ E\_t(T\_{t+1}^\*) &= M\_{21}^\*X\_{1t} + Q^\*E\_t(T\_t^\*), \end{aligned}$$

which implies (A2) with prediction error *ν*<sub>0,*t*+1</sub> = Δ*X*<sub>1,*t*+1</sub> − *E<sub>t</sub>*(Δ*X*<sub>1,*t*+1</sub>).

Note that (A1) is the usual recursion from the Kalman filter equations for the state space model obtained from (10) for *M*<sup>∗</sup><sub>21</sub> = 0, see Durbin and Koopman (2012). Note also, however, that (A2) is not the usual recursion from the common trends model, because of the first term containing *M*<sup>∗</sup><sub>21</sub>. It is seen from (A1) that if *V<sub>t</sub>* converges to *V*, then *V* has to satisfy the algebraic Riccati equation (A5), and Σ is given as indicated.

The result that *Vt* converges to a finite positive limit follows from LR (1995, Theorem 17.5.3), where the assumptions, in the present notation, are

*a*.1 (*Q*∗; *Ip*2+*m*) is controllable,

*a*.2 (*Q*∗; *Ip*2+*m*) is stabilizable,

*a*.3 (*C*∗; *Q*∗) is detectable.

Before verifying these assumptions, some definitions from control theory are given, which are needed for checking the conditions of the results in LR (1995).

Let *A* be a *k* × *k* matrix and *B* be a *k* × *k*<sup>1</sup> matrix.

*d*.1 The pair {*A*, *B*} is called *controllable* if

$$\text{rank}(B; AB; \dots; A^{k-1}B) = k,$$

see LR (1995, (4.1.3)).
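Definition *d*.1 translates directly into a rank computation on the controllability matrix. A minimal sketch (the matrices *A* and *B* below are hypothetical examples, not taken from the paper):

```python
import numpy as np

def is_controllable(A, B):
    """Check rank(B, AB, ..., A^{k-1}B) = k for A of order k (definition d.1)."""
    k = A.shape[0]
    blocks = [B]
    for _ in range(k - 1):
        blocks.append(A @ blocks[-1])  # next block column A^j B
    return np.linalg.matrix_rank(np.hstack(blocks)) == k

# With B = I the controllability matrix contains the identity as its first block,
# so (A, I) is controllable for any A -- the situation in assumption a.1.
A = np.array([[0.3, 1.0],
              [0.0, 1.0]])
print(is_controllable(A, np.eye(2)))                 # True
print(is_controllable(A, np.array([[1.0], [0.0]])))  # False
```

The second call fails because the input matrix *B* = (1, 0)′ cannot reach the second state component.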

*d*.2 The pair {*A*; *B*} is *stabilizable* if there is a *k*<sup>1</sup> × *k* matrix *K*, such that *A* + *BK* is stable LR(1995, page 90, line 5-).

*d*.3 Finally, {*B*; *A*} is *detectable* means that {*A*′; *B*′} is stabilizable, LR (1995, page 91, line 6-).

The first assumption, *a*.1, is easy to check: the pair (*Q*∗; *Ip*2+*m*) is controllable, see *d*.1, means that

$$\text{rank}(I\_{p\_2+m}; Q^\* I\_{p\_2+m}; \dots; Q^{\*p\_2+m-1} I\_{p\_2+m}) = p\_2 + m,$$

which holds trivially, because the first block is the identity matrix.

The second assumption, *a*.2, follows because controllability implies stabilizability, see LR (1995, Theorem 4.4.2).

Finally, *d*.3 shows that (*C*∗; *Q*∗) detectable means (*Q*∗′; *C*∗′) stabilizable, and LR (1995, Theorem 4.5.6 (b)), see also Hautus (1969), shows that (*Q*∗′; *C*∗′) is stabilizable, if and only if

$$\text{rank}(Q^{\*\prime} - \lambda I\_{p\_2+m}; \mathbb{C}^{\*\prime}) = \text{rank}\begin{pmatrix} M\_{12} & \mathbb{C}\_1 \\ I\_{p\_2} + M\_{22} - \lambda I\_{p\_2} & \mathbb{C}\_2 \\ 0 & I\_m - \lambda I\_m \end{pmatrix} = p\_2 + m \text{ for all } |\lambda| \ge 1.$$
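This Hautus-type rank condition can also be checked numerically: it suffices to test the rank at the eigenvalues *λ* of the matrix with |*λ*| ≥ 1, since for every other *λ* the block *A* − *λI* already has full rank. A sketch with hypothetical matrices (not the *Q*∗, *C*∗ of the paper):

```python
import numpy as np

def is_stabilizable(A, B, tol=1e-9):
    """Hautus test: rank(A - lam*I, B) = n for every eigenvalue lam of A
    with |lam| >= 1; stable eigenvalues need not be checked."""
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        if abs(lam) >= 1 - tol:
            M = np.hstack([A - lam * np.eye(n), B])
            if np.linalg.matrix_rank(M, tol=tol) < n:
                return False
    return True

# A has a unit root; stabilizability requires B to act on the unstable mode.
A = np.array([[0.5, 0.0],
              [0.0, 1.0]])
print(is_stabilizable(A, np.array([[0.0], [1.0]])))  # True
print(is_stabilizable(A, np.array([[1.0], [0.0]])))  # False
```

In the proof this check is carried out analytically: the rank condition is verified separately at *λ* = 1 and for |*λ*| > 1.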

For *λ* = 1, the last block of rows, (1 − *λ*)*I<sub>m</sub>*, vanishes, and using *C*<sub>1.2</sub> = *C*<sub>1</sub> − *M*<sub>12</sub>*M*<sub>22</sub><sup>−1</sup>*C*<sub>2</sub> and Assumption 1, it follows that

$$\begin{aligned} \text{rank}(M(1)) &= \text{rank}\begin{pmatrix} M\_{12} & \mathbb{C}\_1 \\ M\_{22} & \mathbb{C}\_2 \end{pmatrix} = \text{rank}\begin{pmatrix} 0 & \mathbb{C}\_{1,2} \\ M\_{22} & \mathbb{C}\_2 \end{pmatrix} \\ &= \text{rank}(\mathbb{C}\_{1,2}) + \text{rank}(M\_{22}) = m + p\_2. \end{aligned}$$

For |*λ*| > 1, using Assumption 1(ii), it is seen that

$$\text{rank}(M(\lambda)) = \text{rank}(I\_{p\_2} + M\_{22} - \lambda I\_{p\_2}) + \text{rank}(I\_m - \lambda I\_m) = p\_2 + m,$$

because *λ* is not an eigenvalue of the stable matrix *Ip*<sup>2</sup> + *M*22, when |*λ*| > 1.

Thus, (*Q*∗′; *C*∗′) is stabilizable, and assumptions *a*.1, *a*.2, and *a*.3 hold, so that LR (1995, Theorem 17.5.3) applies. This proves that the limit *V* = lim<sub>*t*→∞</sub> *V<sub>t</sub>* exists and that the matrix in (A6) is stable.

Multiplying (A2) by (0; *I<sub>m</sub>*), it is seen, using (0; *I<sub>m</sub>*)*Q*<sup>∗</sup> = (0; *I<sub>m</sub>*) and (0; *I<sub>m</sub>*)*M*<sup>∗</sup><sub>21</sub> = 0, that a recursion for *E<sub>t</sub>*(*T<sub>t</sub>*) is given by (A7).

#### **References**


Durbin, James, and Siem Jan Koopman. 2012. *Time Series Analysis by State Space Methods*, 2nd ed. Oxford: Oxford University Press.

Hautus, M. L. J. 1969. Controllability and observability conditions of linear autonomous systems. *Indagationes Mathematicae* 31: 443–48.

Lancaster, Peter, and Leiba Rodman. 1995. *Algebraic Riccati Equations*. Oxford: Clarendon Press.


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
