Abstract
This paper discusses the notion of cointegrating space for linear processes integrated of any order. It first shows that the notions of (polynomial) cointegrating vectors and of root functions coincide. Second, it discusses how the cointegrating space can be defined (i) as a vector space of polynomial vectors over complex scalars, (ii) as a free module of polynomial vectors over scalar polynomials, or finally (iii) as a vector space of rational vectors over rational scalars. Third, it shows that a canonical set of root functions can be used as a basis of the various notions of cointegrating space. Fourth, it reviews results on how to reduce polynomial bases to minimal order—i.e., minimal bases. The application of these results to Vector AutoRegressive processes integrated of order 2 is found to imply the separation of polynomial cointegrating vectors from non-polynomial ones.
1. Introduction
In their seminal paper, Engle and Granger (1987) introduced the notion of cointegration and of cointegrating (CI) rank for processes integrated of order 1, or I(1). They did this in the following way:1
Definition: The components of the vector x_t are said to be co-integrated of order d, b, denoted x_t ∼ CI(d, b), if (i) all components of x_t are I(d); (ii) there exists a vector α (≠ 0) so that z_t = α′x_t ∼ I(d − b), b > 0. The vector α is called the co-integrating vector.
[...] If x_t has p components, then there may be more than one co-integrating vector α. It is clearly possible for several equilibrium relations to govern the joint behavior of the variables. In what follows, it will be assumed that there are exactly r linearly independent co-integrating vectors, with r ≤ p − 1, which are gathered together into the p × r array α. By construction the rank of α will be r which will be called the “co-integrating rank” of x_t.
Engle and Granger (1987) did not define explicitly the notion of cointegrating space, but just the cointegrating rank, which corresponds to its dimension; explicit mention of the cointegrating space was first made in Johansen (1988).
The Granger representation theorem in Engle and Granger (1987) showed that the cointegration matrix needs to be orthogonal to the Moving Average (MA) impact matrix of . More precisely, for , the MA impact matrix has rank equal to and representation , where is a basis of the orthogonal complement of the space spanned by the columns of and a is full column rank.
Johansen (1991, 1992) stated the appropriate conditions under which the Granger representation theorem holds for I(1) and I(2) Vector AutoRegressive processes (VAR) , where the AR impact matrix has rank equal to and rank factorization , with and of full column rank. He defined the cointegrating space as the vector space generated by the column vectors in over the field of real numbers .
Johansen (1991) noted that is uniquely defined2 by the rank factorization , but the choice of basis is arbitrary, i.e., is not identified. Hypotheses that do not constrain are hence untestable. He proposed likelihood ratio tests on and described asymptotic properties of a just-identified version of . Later Johansen (1995) discussed the choice of basis as an econometric identification problem of a system of simultaneous equations (SSE) of cointegrating relations describing the long-run equilibria in the process. He discussed identification using linear restrictions, along the lines of the classical identification problem of SSE studied in econometrics since the early days of the Cowles Commission.
The observation in Johansen (1988) that the cointegrating vectors formed a vector space was an important breakthrough. For instance, it addressed the question: ‘How many cointegrating vectors should one estimate in a given system of dimension p?’. A proper answer is in fact: A set of r linearly independent vectors, spanning the cointegrating space , i.e., a basis of .
Similarly, when assuming that a set of p interest rates is described by an I(1) process, the notion of cointegrating space enables one to discuss questions like ‘How should one test that all interest rate spreads are stationary?’. In fact, if all interest rate differentials were stationary, then one should have cointegrating rank , which gives a first testable hypothesis on the cointegrating rank. Moreover there is no need to test that all possible interest rate differentials are stationary: if the cointegrating rank has been found to be , one can test that the cointegrating space is spanned by any set of r linearly independent contrasts between pairs of interest rates. If the cointegrating rank is found to be , one may still want to test the restriction that the cointegrating space is a subspace of the linear space spanned by all contrasts.
These questions, and many more, found clear answers thanks to the introduction of the notion of cointegrating space. The recognition that the set of cointegrating vectors forms a vector space was then instrumental to represent any cointegrating vector as a linear combination of the ones in a basis of the vector space.
The notion of cointegrating space, together with the complementary notion of attractor space, has been recently discussed in the context of functional time series for infinite dimensional Hilbert space valued AR processes with unit roots, see Beare et al. (2017), Beare and Seo (2020), Franchi and Paruolo (2020), and for infinite dimensional Banach space valued AR processes with unit roots, see Seo (2019).
For systems with variables integrated of order d, I(d), Granger and Lee (1989) and Engle and Yoo (1991) introduced the related notions of multicointegration and polynomial cointegration; see also Engsted and Johansen (2000). However, no proper discussion of cointegrating spaces or of their corresponding bases has been proposed in the literature for higher order systems.
The present paper closes this gap, making use of classical concepts in local spectral theory, see Gohberg et al. (1993). A central role is played by canonical systems of root functions, which have already been exploited in Franchi and Paruolo (2011, 2016) to characterize the inversion of a matrix function, and used in Franchi and Paruolo (2019) to derive the generalization of the Granger-Johansen representation theorem for I(d) processes.
In order to simplify exposition, this paper focuses on unit roots at a single point on the unit circle, indexed by its frequency ω. When ω differs from 0 and π, the resulting matrices are complex-valued, and the symbol F is taken to indicate the complex numbers ℂ. For ω equal to 0 or π, F is taken instead to indicate the real numbers ℝ. Unit roots at distinct seasonal frequencies different from 0 have been considered e.g., in Hylleberg et al. (1990), Gregoir (1999), Johansen and Schaumburg (1998), Bauer and Wagner (2012). Several of these papers paired the frequencies ω and 2π − ω when ω differs from 0 and π to obtain real coefficient matrices for Equilibrium Correction (EC) representations; in order to keep exposition as simple as possible, this is not attempted in the present paper.
To the best of the authors’ knowledge, local spectral theory tools are employed here for the first time to discuss the definition of cointegrating space for I(d) processes and related bases. It is observed that several candidate cointegrating spaces exist, corresponding to different choices of the sets of vectors and scalars. The sets of vectors are chosen here to be either the set of polynomial vectors or the one of rational vectors, while the sets of scalars are taken to be (i) the field F, (ii) the ring of polynomials with coefficients in F, or (iii) the field of rational functions with coefficients in F. The resulting spaces are either vector spaces, in cases (i) and (iii), or a free module in case (ii). The relationship among their bases is discussed following Forney (1975), whose results are used to derive a polynomial basis of minimal degree—i.e., a minimal basis.
The focus of this paper is on the parsimonious representation of the set of cointegrating vectors. As noted by a referee, the present results may find application also in the parametrization and estimation of EC systems. This, however, is beyond the scope of the present paper.
The rest of the paper is organised as follows. Section 2 provides the motivation for the paper. Section 3 reports definitions of integration and cointegration in systems, where the cointegrating vectors are allowed to be vector functions; here and its powers are associated with the difference operator and its powers. Section 4 defines root functions and canonical systems of root functions and Section 5 discusses possible definitions of the cointegration space. Section 6 discusses how to derive bases for the various notions of cointegrating space from VAR coefficients. Section 7 discusses minimal bases using results in Forney (1975) and Section 8 applies these results in order to obtain a minimal basis in the VAR case. Section 9 concludes; Appendix A reports background results.
2. Motivation
This section motivates the study of the representation of cointegrating vectors in terms of bases of suitable spaces, for systems integrated of order two, which are more formally introduced in Section 3 below. Let be a vector process, and let Δ and L be the (0-frequency) difference and lag operators. Assume that is integrated of order 2, I(2), with nonstationary for and stationary for .
Mosconi and Paruolo (2017) consider the identification problem for the following cointegrating SSE with variables
The first set of polynomial vectors has coefficient of order 0 (i.e., that multiplies ) and coefficient of order 1 (i.e., that multiplies ). The last polynomial vectors have 0 coefficients of order 0 and and coefficients of order 1. They discussed identification of the SSE with respect to transformations corresponding to pre-multiplication of (or ecm) by a block triangular, nonsingular matrix of the form
where are blocks of real coefficients, , with and nonsingular square matrices.
They show that has the same structure as in terms of the null coefficient of order 0 in the last equations, as well as the same block as the coefficient of order 0 in the first and as the coefficient of order 1 in the last rows. More precisely,
- is replaced by , a set of linear combinations of ,
- is replaced by a set of linear combinations of and ,
- is replaced by a set of linear combinations of , and .
Remark 1
(F-linear combinations). Note that the Q linear combinations have scalars taken from , and that any CI vector can be obtained as a linear combination with coefficients in F of the rows in , called in the following ‘F-linear combinations’.
The main motivation to study the notion of cointegration space for processes with comes from the following observation.
Remark 2
(-linear combinations). The set of CI vectors obtained as F-linear combinations of the rows in can be also obtained by considering the alternative set of cointegrating vectors
and choosing linear combinations with scalar in the set of polynomials , where has the form for some finite n.
To show that the set of -linear combinations of is the same as the set of F-linear combinations of , it is sufficient to show that the rows of can be obtained as -linear combinations of the rows in , possibly up to terms of the type which generate stationary processes by definition.
Note first that is common to and . In order to obtain in from one needs to select the scalar Δ from and multiply it by . Similarly, in order to obtain in one only needs to select the scalar and multiply it by to obtain . Because is stationary by the assumption that is I(2), the term can be discarded, and this completes the argument.
The take-away from Remark 2 is that, if one allows the set of multiplicative scalars to contain polynomials, i.e., if one moves from F-linear combinations to -linear combinations, then one can reduce the number of rows needed to generate the set of CI vectors: in fact has rows, while the number of rows in is .
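The following sympy sketch illustrates this reduction with a purely hypothetical pair of row vectors (unrelated to the specific matrices of the motivating system): the row obtained by multiplying a cointegrating vector by Δ is not an F-multiple of it, so it must be listed as a separate generator when only F-scalars are allowed, but it is generated by the single original row once polynomial scalars are admitted.

```python
import sympy as sp

z = sp.symbols('z')
Delta = 1 - z                              # the 0-frequency difference, written in z

# Hypothetical illustration (not the beta / beta-tilde of the motivating SSE):
v1 = sp.Matrix([[1, -1]])                  # a cointegrating row vector
v2 = (Delta * v1).applyfunc(sp.expand)     # Delta times v1, also order reducing

ratio = sp.cancel(v2[0, 0] / v1[0, 0])
print(v2, ratio, ratio.is_constant(z))
# Matrix([[1 - z, z - 1]]), 1 - z, False:
# over the field F both v1 and v2 are needed as generators, while over the
# polynomial scalars (denoted F[z] in this sketch) the single row v1 generates
# v2 = (1 - z) * v1.
```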
The previous discussion shows that the two sets, F and , could be used as possible sets of scalars in taking linear combinations. The first one, F, is a field (i.e., a division ring), the second one, , is a ring but not a field, because its elements generally lack multiplicative inverses.
Given that vector spaces require the set of scalars to be a field, one may also consider another possible set of scalars, namely , the set of rational functions of the type with , and not identically equal to 0, indicated as . This leads to three possible choices for the set of scalars: (i) the field F, (ii) the ring and (iii) the field . The rest of the paper discusses the relative merits of each.
The above discussion focused on unit roots at frequency ω = 0, which are associated with the long-run behavior of the process. When data are observed every month or quarter, seasonal unit roots, seasonal cointegration and seasonal error correction have been shown to be useful notions, see Hylleberg et al. (1990). For instance, in the case of quarterly series, the relevant seasonal unit roots are at −1 and at ±i, where i is the imaginary unit. These roots lie on the unit circle at the seasonal frequencies π/2, π and 3π/2, while ω = 0 corresponds to the long-run unit root at z = 1.
Johansen and Schaumburg (1998) showed that the conditions under which a VAR process allows for seasonal integration (and cointegration) of order 1 are of the same type as for roots at , except that expansions of the VAR polynomial are performed around each , see their Theorem 3. They also provided the corresponding EC form in their Corollary 2; see also Bauer and Wagner (2012) and the discussion in Remark 9 below.
In general, the conditions for integration of any order d at a point on the unit circle can be shown to be of the same type. This paper hence considers the generic case of a linear process with a generic root on the unit circle , and discusses the notions of cointegration, root functions and minimal bases in this general context. This makes it possible to show that the present results hold for generic frequency , .
Incidentally, the results presented below in Section 6 state the generalization of the Granger and the Johansen Representation Theorems presented in Franchi and Paruolo (2019) for a generic unit root at any frequency .
3. Setup and Definitions
This section introduces notation and basic definitions of integrated and cointegrated processes.
3.1. Linear Processes
Assume that is an i.i.d. sequence, called a noise process,3 with and where is the indicator function, and define the linear process , where is a nonstochastic vector and is a matrix function, with coefficient matrices . Note that the matrices are defined by an expansion of around . The term is nonstochastic, i.e., , and can contain deterministic terms. Because , one sees that , and hence in the following is often written as .
The matrix function is assumed to be finite when z is inside the open disk , , in with center at 0 and radius , i.e., is assumed analytic on . Here and in the following indicates the modulus and indicates the open disk with center and radius . In this paper is assumed to be regular on , i.e., can lose rank only at a finite number of isolated points in .
Because of analyticity of , it can be expanded around any interior point of . In particular, define the point on the unit circle at frequency , , and observe that it lies inside because . Hence one can expand as on , . Note that the matrices are defined by an expansion of around , but that the dependence of on is not included in the notation for simplicity. The analysis of the properties of is done locally around on , .
Similarly to , one can consider a scalar function of z, say, or a vector function taken to be analytic on , . This means that has representation around and similarly for . A special case is when is a polynomial of degree k, , which corresponds to setting all for . Another special case is given by rational functions with and polynomials, where and is not a root of . Similarly for .
3.2. Integration
The following definition specifies the class of processes as a subset of all linear processes built from the noise sequence , and introduces the notion of processes using the difference operator at frequency , . To simplify notation, the dependence of on the lag operator L is left implicit. Observe also that, because , in the analytic expansions can be expressed as , where corresponds to the operator .
Next, the definition of order of integration is introduced; this is defined as the difference between two nonnegative integer exponents and of in the representation that links the process with its driving linear process . This definition allows for processes integrated of negative order.
Definition 1
(Integrated processes at frequency ). Let be analytic on , , and let be a noise process. If , satisfies , then is called a linear process; if, in addition,
then is said to be integrated of order zero at frequency , indicated .
Let be finite non-negative integers; if satisfies where , then is said to be integrated of order at frequency , indicated ; in this case has representation
where is analytic on , , and .
Remark 3
Remark 4
(Mean-0 linear process). The linear process in Definition 1 can have any expectation , which, however, does not play any role in the definition of the process. Hence, one can assume that in Definition 1 without loss of generality.
Remark 5
( in Definition 1). Assume with analytic on , , and , . Then and Definition 1 implies that is . This example shows that the presence of in Equation (2) allows one to concentrate attention on the stochastic part of the process .
Remark 6
Remark 7
(Example of ). As an example, consider the process with . Setting one finds that Equation (2) is satisfied with , i.e., that the process is . Selecting any other frequency , one sees that Equation (2) is satisfied for , i.e., that the order of integration is 0, i.e., for . This illustrates the fact that a process may have different orders of integration at different frequencies.
Remark 8
( versus ). Consider the process defined only for , which satisfies for . Consider another process satisfying the same equation for with for . The process is according to Definition 1, and it is suggested to extend this qualification to , because it coincides with the process on the non-negative integers, for .
Remark 9
(One or more frequencies). Definition 1 of integration refers to a single frequency ω, but it can be used to cover multiple frequencies. In fact, consider the ‘ARMA process with unit root structure’, as defined in Bauer and Wagner (2012), i.e., a process satisfying where for a (finite) set of frequencies , with a stationary ARMA process with . They call , the ‘unit root structure’ of , see their Definition 2. This can be obtained using Definition 1 for each in turn, noting that being ARMA corresponds to a rational , which is a special case of the definition above.
Hylleberg et al. (1990), Gregoir (1999), Johansen and Schaumburg (1998), Bauer and Wagner (2012) consider to be real-valued, which implies that integration frequencies are ‘paired’, so that if is a unit root of the process, so is ; this implies that in this case one can pair frequencies with and rearrange coefficients so as to obtain real coefficient matrices in EC representations. This is not done in this paper for reasons of simplicity.
Remark 10
(Relation with other definitions). The definition of an (respectively an ) process in the present Definition 1 coincides with Definition 3.2 (respectively Definition 3.3) in Johansen (1996) when setting (respectively and ). The present definition also agrees with Definitions 2.1 and 2.2 of integration in Gregoir (1999), both for positive and negative orders and any frequency ω. The definition also agrees with the one in Franchi and Paruolo (2019) when applied to vector processes.
Remark 11
(Entries in ). When ω differs from 0 or π, the point has a nonzero imaginary part; hence the matrix in (1) has complex entries and the coefficient matrices in the expansion are complex even when the coefficients in the expansion around are real.
Following Gregoir (1999), the summation operator at frequency is defined as
Basic properties of the operator are proved in Gregoir (1999); these include
where is any sequence over .
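As a numerical illustration, the following Python sketch implements one possible convention for the difference and summation operators at frequency ω and verifies that applying the difference operator to the summation of a sequence returns the sequence itself; the specific forms of Δ_ω and S_ω below are assumptions made for the sketch, not the paper's own formulas.

```python
import numpy as np

# Assumed convention for this sketch (not taken from the paper):
#   (Delta_omega y)_t = y_t - exp(i*omega) * y_{t-1}            (difference at frequency omega)
#   (S_omega x)_t     = sum_{s=0}^{t} exp(i*omega*(t-s)) * x_s  (summation at frequency omega)
# With zero initial values, Delta_omega S_omega x = x.

def S_omega(x, omega):
    """Summation operator at frequency omega (cumulation with rotating weights)."""
    return np.array([np.sum(np.exp(1j * omega * (k - np.arange(k + 1))) * x[:k + 1])
                     for k in range(len(x))])

def Delta_omega(y, omega):
    """Difference operator at frequency omega, with zero initial value."""
    y_lag = np.concatenate(([0.0 + 0.0j], y[:-1]))
    return y - np.exp(1j * omega) * y_lag

omega = np.pi / 2                      # a seasonal (quarterly) frequency
rng = np.random.default_rng(0)
x = rng.standard_normal(50) + 1j * rng.standard_normal(50)

print(np.allclose(Delta_omega(S_omega(x, omega), omega), x))   # True
```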
Remark 12
(Simplifications of and initial values). Take in (2), which in this case reads with . Applying the operator on both sides one obtains .4 If one assigns the initial value of equal to , one obtains , which corresponds to the cancellation of from both sides of (2). The same reasoning applies for generic to the cancellation of from both sides of (2). This shows that one can simplify powers of from both sides of (2) by properly assigning initial values; this cancellation is always implicitly performed in the following, in line with preference for minimal values of as discussed in Remark 6.
3.3. Cointegration
Cointegration is the property of (possibly polynomial) linear combinations of to have a lower order of integration with respect to the original order of integration of at frequency . Specifically, consider a nonzero row vector function , analytic on a disk , . As in Engle and Granger (1987), the idea is to call cointegrating if has lower order of integration than , excluding cases such as where by itself does not reduce the order of integration.
This leads to the following definition.
Definition 2
(Cointegrating vector at frequency ). Let be as in Definition 1, i.e.,
where , is analytic on , , and , see (2); let also be a row vector function, analytic on with . Then is called a cointegrating vector at frequency ω if for some , i.e.,
where is analytic on , , and . Given Equation (2), Equation (7) is equivalent to the condition
The positive integer is called the order of the cointegrating vector of at . is said to be cointegrated at frequency ω if any cointegrating vector can be replaced by without decreasing the order s in (8); otherwise is said to be multicointegrated at frequency ω.
Remark 13
( has full rank on , , except at ). Because cointegrating vectors are by definition different from zero at , is cointegrated at frequency ω if and only if has reduced rank. Moreover, because is regular on , the point is isolated, i.e., has full rank on , , except at .
Remark 14
(Entries in cointegrating vectors). Similarly to Remark 11, the coefficient vectors in the expansion are in general complex. Note that does not satisfy the definition because the requirement is not satisfied.
Remark 15
(d and s). Recall that d (the order of integration) is the difference between the exponents of on the l.h.s. and r.h.s. of (2). When pre-multiplied by , the exponent on the r.h.s. decreases by s and the difference of the exponents on the l.h.s. and r.h.s. of (7) becomes . Because , this can only happen if factors from , see (8). The condition guarantees that no remaining additional power of can be factored from using .
Remark 16
(Examples of cointegration vectors). Take with chosen in , and note that this implies in (7). This shows that the definition contains the definition of cointegrating vectors as a special case.
The usual definition of cointegration, see Definition 3.4 in Johansen (1996), considers a process and defines cointegrated with cointegrating vector if “can be made stationary by a suitable choice of initial distribution”. The following proposition clarifies that his definition coincides with the one in this paper.
Proposition 1
(Relation with Definition 3.4 in Johansen (1996)). is a cointegrating vector in the sense of Definition 3.4 in Johansen (1996) if and only if Definition 2 is satisfied with and , , .
Proof.
For simplicity and without loss of generality, set and omit the subscript . Assume Definition 2 is satisfied with and , , and , i.e.,
see Remark 12, and set . Applying to both sides of Equation (9) one finds . Note that is stationary for any , and hence the initial values can be chosen equal to , so as to obtain , a stationary process.
Conversely, assume that is a cointegrating vector in the sense of Definition 3.4 in Johansen (1996). Because , one has , see Definition 1, with analytic on a disk , , which admits expansion around 1, where is analytic on the same disc. A necessary and sufficient condition for cointegration in the sense of Definition 3.4 in Johansen (1996) is that as shown in Johansen (1988) Equation (17); see also Engle and Granger (1987, p. 256).5 Hence one finds with , which is analytic on , , and hence also on , . By Corollary 1 below, one has that satisfies with finite and . This shows that Definition 2 is satisfied with , , and . □
Remark 17
( can have negative order of integration). Johansen (1996) makes the following observation just after his Definition 3.4: “Note that need not be I(0)”, which recognises that can have negative order of integration. This is indeed the case when s exceeds d in Definition 2, because the resulting order of integration is then negative.
Remark 18
(Relation to other definitions in the literature). The definition of cointegration in Engle and Granger (1987) reported in the introduction is a special case of the present one with a constant vector and , under the additional requirement that all variables are integrated of the same order. For more details on this for the case , see Franchi and Paruolo (2019). When and , Definition 2 covers the definitions of multicointegration and polynomial cointegration in Granger and Lee (1989), Engle and Yoo (1991), Johansen (1996). When and for where n is the number of seasons, the definition covers seasonal cointegration in Hylleberg et al. (1990), Johansen and Schaumburg (1998).
Example 1
(I(1) VAR). Following Johansen (1988), consider with analytic on . Assume also that has only solutions outside , , or at , where ‘det’ indicates the determinant of a matrix. Here and in the following, let indicate a basis of the orthogonal complement of the linear space spanned by the columns of the matrix a. Moreover for a full-column-rank matrix a is the orthogonal projection matrix onto . Johansen (1991) (see his Equations (4.3) and (4.4) in Theorem 4.1) showed that for to be I(1) at frequency , a set of necessary and sufficient conditions is:
- (i)
- with full column rank matrices of dimension , ,
- (ii)
- of maximal rank .
In this case satisfies (2) for , and taken to be any row vector in with .
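As a numerical check of conditions (i) and (ii), the following Python sketch uses the familiar error-correction parametrization Δx_t = Πx_{t−1} + Γ_1Δx_{t−1} + ε_t with Γ = I − Γ_1, a commonly used special case assumed here for concreteness (the example above works with a general analytic A(z), and the root location assumption is taken as given). All matrices are made up for illustration.

```python
import numpy as np

# Made-up numerical check of the I(1) conditions (i)-(ii) of Example 1 in the ECM
# parametrization Delta x_t = Pi x_{t-1} + Gamma_1 Delta x_{t-1} + eps_t, with
# Gamma = I - Gamma_1 (the root location assumption is taken as given).

def orth_complement(a):
    """Basis of the orthogonal complement of span(a), via the SVD."""
    p, r = a.shape
    u, _, _ = np.linalg.svd(a, full_matrices=True)
    return u[:, r:]

p, r = 3, 1
alpha = np.array([[1.0], [0.0], [0.5]])
beta  = np.array([[1.0], [-1.0], [0.0]])
Pi = alpha @ beta.T
Gamma = np.eye(p) - 0.2 * np.eye(p)        # Gamma = I - Gamma_1 with Gamma_1 = 0.2 I

cond_i  = np.linalg.matrix_rank(Pi) == r                           # reduced rank r
cond_ii = np.linalg.matrix_rank(orth_complement(alpha).T @ Gamma
                                @ orth_complement(beta)) == p - r  # full rank p - r
print(cond_i, cond_ii)   # True True: the rank conditions for I(1) hold
```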
Example 2
(I(2) VAR). Following Johansen (1992), consider the same VAR process as in Example 1. Johansen (1992) showed that for to be I(2) at frequency , a set of necessary and sufficient conditions is:
- (i)
- with full column rank matrices of dimension , ,
- (ii)
- with full column rank matrices of dimension , ,
- (iii)
- of maximal rank .
In this case satisfies (2) for , and taken to be any row vector obtained by linear combinations of the rows in and . The notion of cointegrating space for I(2) processes is discussed in detail below, where is called the ‘multicointegrating coefficient’.
4. Root Functions, Cointegrating Vectors and Canonical Systems
This section introduces root functions and canonical systems of root functions, and their connection to cointegrating vectors, as defined in Definition 2 above.
4.1. Root Functions
Let be cointegrated at frequency , i.e., see Definition 2,
where and has full rank on , , except at , see Remark 13.
The following definition of (left) root functions is taken from Gohberg et al. (1993); this definition is given in a neighborhood of .
Definition 3
(Root function). A row vector function analytic on is called a root function of at if and if
The positive integer s is called the order of the root function at .
Observe that is and analytic on , .
Remark 19
(Factoring the difference operator). Definition 3 characterizes root functions by their ability to factor powers of from . Note that, because here , one can write as where corresponds to the difference operator and can be absorbed in without affecting its property that .
Remark 20
(Local analysis). Note first that cannot be identically 0 in Definition 3, because is assumed to be regular. Next take for example the matrix which has full rank on , except at the two points and , where it has rank 1.
Take first the point at ; in this case one could choose a disk with any , on which is analytic and full rank except at . One can verify that a root function is , which satisfies with . The same can be repeated for the other point , choosing a different disk with any , and a root function equal to .
The implication of this example is that one can have multiple separated points where has reduced rank, and apply the above definition to each point separately, using a different disk D for each point. In other words, the discussion of cointegration in this paper is local to a single unit root.
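The following sympy sketch mimics this local analysis with a made-up 2×2 matrix function that loses rank at the two isolated points z = 1 and z = −1; the matrix and the candidate root functions are illustrative assumptions, not the remark's own example.

```python
import sympy as sp

z = sp.symbols('z')

# Made-up illustration of the local analysis in Remark 20: A(z) loses rank at the
# two isolated points z = 1 and z = -1, and each point has its own disk and its
# own root function.
A = sp.Matrix([[1 - z, 0],
               [0, 1 + z]])
phi_1  = sp.Matrix([[1, 0]])    # candidate root function at z = 1
phi_m1 = sp.Matrix([[0, 1]])    # candidate root function at z = -1

print(phi_1 * A)    # Matrix([[1 - z, 0]]):  (1 - z) factors out, order 1 at z = 1
print(phi_m1 * A)   # Matrix([[0, z + 1]]):  (1 + z) factors out, order 1 at z = -1
```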
Remark 21
(Order). A root function factorises from , and s indicates the order. The condition guarantees that in the analytic expansion , the first term is not the null vector. Note that the condition makes sure that one cannot extract additional factors of from using .
It is immediate to see that a cointegrating vector is a root function of and vice versa, as stated in the following theorem.
Theorem 1
(Cointegrating vectors and root functions). is a cointegrating vector at frequency ω if and only if is a root function of at , and the order of the cointegrating vector and of the root function coincide.
Proof.
Observe that any root function satisfies Definition 2 of cointegrating vectors and vice versa, including the definition of their order. □
Results in Gohberg et al. (1993) show that the order of a root function is finite, because it is bounded by the order of as a zero of , a result that is reported in the next proposition.
Proposition 2
(Bound on the order of a root function). The order of a root function of at is at most equal to the order of as a zero of , which is finite because is regular.
Proof.
See Gohberg et al. (1993). □
Corollary 1
(Bound on the order of a cointegrating vector). The order of any cointegrating vector at frequency ω is finite.
Proof.
This follows from Proposition 2 because cointegrating vectors and root functions coincide by Theorem 1. □
4.2. Canonical Systems of Root Functions
Next, canonical systems of root functions for at are introduced, see Gohberg et al. (1993). Choose a root function of highest order . Since the orders of the root functions are bounded by Proposition 2, such a function exists. Next proceed iteratively over , choosing the next root function to be of the highest order such that is linearly independent from . Because , this process ends with m root functions .
Note that the columns in span the finite dimensional space , so that one can choose vectors that span its orthogonal complement. This construction leads to the following definition.
Definition 4
((Extended) canonical system of root functions). Let and be constructed as above; then
are called a canonical system of root functions (respectively an extended canonical system of root functions) of at of orders (respectively ) with .
Such a canonical system of root functions is not unique. To see this, one can show that the first row vector in (11) can be replaced by a combination of and , called , and the canonical system of root functions containing would still satisfy the definition. More specifically, define and observe that, by Definition 3, , with , . Hence one has
where . Because , , one has unless . However, this last case is ruled out because it would contradict the fact that is maximal. Hence . This shows that satisfies the definition of root function of order , and hence it can replace in (11).
While a canonical system of root functions (and also an extended canonical system of root functions) is not unique, the orders are uniquely determined by at , see Lemma 1.1 in Gohberg et al. (1993); they are called the partial multiplicities of at .
Finally, consider the local Smith factorization of at , see Gohberg et al. (1993), i.e., the factorization
where is uniquely defined and contains the partial multiplicities of at ; the matrices are analytic and invertible in a neighbourhood of and are non-unique. is called the local Smith form of at .6
Remark 22
(Extended canonical system of root functions in the I(1) VAR case). In the VAR case, see Example 1, the orders of an extended canonical system of root functions of at 1 are and a possible choice of an extended canonical system of root functions corresponding to these unique orders is given by the p rows in .
Remark 23
(Extended canonical system of root functions in the I(2) VAR case). In the I(2) VAR case, see Example 2, the orders of an extended canonical system of root functions of at 0 are and a possible choice of an extended canonical system of root functions corresponding to these unique orders is given by the p rows in .
5. Cointegrating Spaces
Let be a canonical system of root functions of at , see Definition 4. Appendix A.2 shows that with are well defined sets of (generalized) root functions. This section argues that one could take any of them as a definition of ‘cointegrating space’ for multicointegrated systems. Note that
so that the three definitions of cointegrating space are naturally nested. Remark that is a vector space over F, is a free module over the ring of polynomials in z (which contains ) and is a vector space over the field of rational functions of z (which contains and hence ). Finally note the central role played by the canonical system of root functions as a basis for these different spaces, which differ in the set of scalars chosen for linear combinations.
5.1. The Cointegrating Space as a Vector Space over F
The cointegrating space , where , is a vector space. In fact, the set of all F-linear combinations of produces a vector space, because is closed under multiplication by a scalar in F by Proposition A1 and with respect to vector addition, as a special case of Proposition A2.
In order to discuss the cointegrating spaces and , the notion of generalized cointegrating vector is introduced, as the counterpart of the notion of generalized root function, see Definition A1.
Definition 5
(Generalized cointegrating vector at frequency ). Let and be a cointegrating vector at frequency ω and order s, see Definition 2; then
is called a generalized cointegrating vector at frequency ω with order s and exponent n.
5.2. The Cointegrating Space as a Free Module over
Consider next . is the polynomial ring formed as the set of polynomials in z with coefficients in F. As is well known, is a ring but not a field (division ring), see e.g., Hungerford (1980), because polynomials, unlike rational functions, lack multiplicative inverses. The following proposition summarizes that is a free module over the ring of polynomials.
Proposition 3
( is a -module). Consider , where is a canonical system of root functions of at with coefficients in F, and where is the ring of polynomials in z with coefficients in F; then is closed with respect to the vector sum, and it is closed under multiplication by a scalar polynomial in ; hence is a module over the ring of polynomials.
Proof.
By Propositions A1 and A2, is closed under addition and under multiplication by a scalar polynomial in . One needs to verify that, see e.g., Definition IV.1.1 in Hungerford (1980), for and
where · indicates multiplication by a scalar. The distributive properties in (13) are seen to be satisfied. This proves the statement. □
5.3. The Cointegrating Space as a Vector Space over
Finally consider . The set of scalars is the field of rational functions in z with coefficients in F. As is well known, is a field (division ring), see e.g., Hungerford (1980).
Remark 24
(Rational vectors without poles at ). Take to be a rational vector, i.e., of the form where is a monic polynomial and is a vector polynomial, with and relatively prime, see Example A1. If has no root equal to , then is an analytic function on , , see Remark A1 and Lemma A1; hence a special case of an analytic vector function is a rational vector with denominator without roots equal to .
Remark 25
(Rational vectors with poles at ). If has one root equal to with multiplicity m, then has a pole of order m, and it is not an analytic function on some , ; hence Definition 2 cannot be applied, because it requires to be analytic. However, one could remove the pole of order m by defining , and use Definition 2 on , which is an analytic function, as done in Definition 5.
Remark 26
(Representation for generic rational vectors). In the following, when dealing with rational vectors of the type , it is sufficient to consider the case where does not have a root at , thanks to Definition 5. In fact, let be decomposed as with and ; in this representation, is a root of if and only if and it is not a root if and only if . By Remark 24, is a (generalized) cointegrating vector if and only if is a cointegrating vector. Hence Definition 5 allows one to concentrate on the case where the denominator has no root at .
The following proposition summarizes that is a vector space over the field of rational functions.
Proposition 4
( is a vector space over ). Let where is a canonical system of root functions of at with coefficients in F, where is the field of rational functions in z with coefficients in F; then is closed with respect to the vector sum, and under multiplication by a scalar rational function in , and is a vector space over the field of rational functions.
Proof.
is closed with respect to multiplication by a rational function in , see Proposition A1, and with respect to vector addition, see Proposition A2. One can verify for and , that the distributive equalities in (13) are satisfied. Because is a field, is a vector space over . □
6. The Local Rank Factorization
This section shows how to explicitly obtain a canonical system of root functions or an extended canonical system of root functions for a generic VAR process
with analytic for all , , having roots at and at z with , see Remarks 1 and 2.
The derivation of the Granger representation theorem involves the inversion of the matrix function
in . This includes the case of matrix polynomials , in which the degree of is finite, k say, with for .7
The inversion of around the singular point yields an inverse with a pole of some order at ; an explicit condition on the coefficients in (15) for to have a pole of given order d is described in Theorem 2 below; this is indicated as the pole condition in the following. Under the pole condition, has Laurent expansion around given by
Note that and is expanded around . In the following, the coefficients are called the Laurent coefficients. The first d of them, , make up the principal part and characterize the singularity of at .
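To fix ideas, the following sympy sketch inverts two made-up 2×2 matrix polynomials whose inverses have a pole of order 1 and of order 2 at z = 1, mirroring the I(1) and I(2) cases of the Laurent expansion above; the matrices are illustrative assumptions only.

```python
import sympy as sp

z = sp.symbols('z')

# Made-up examples (not from the paper): the inverse of A1 has a pole of order 1
# at z = 1, the inverse of A2 a pole of order 2, mirroring the I(1) and I(2)
# cases of the Laurent expansion of the inverse around the unit root.
A1 = sp.Matrix([[1 - z, 1], [0, 1]])
A2 = sp.Matrix([[(1 - z)**2, 0], [0, 1]])

for A in (A1, A2):
    Ainv = A.inv().applyfunc(sp.cancel)
    # in these examples the only denominator factor is (z - 1), so the pole order
    # at z = 1 equals the largest denominator degree across the entries of A^{-1}
    pole_order = max(sp.Poly(sp.denom(e), z).degree() for e in Ainv)
    print("det:", sp.factor(A.det()), " pole order at z = 1:", pole_order)
```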
Theorem 2
(pole condition). Consider defined in (15); let , and define , by the rank factorization . Moreover, for define , by the rank factorization
where denotes the orthogonal projection onto the space spanned by the columns of x and
Finally, let
Then, a necessary and sufficient condition for to have an inverse with pole of order at – called the pole condition – is that
Observe that because , one has ; hence if and only if
This corresponds to the condition in Howlett (1982, Theorem 3) and to the condition in Johansen (1991, Theorem 4.1). Similarly, one has if and only if ,
which corresponds to the condition in Johansen (1992, Theorem 3).
Theorem 2 is thus a generalization of Johansen's and conditions and shows that, in order to have a pole of order d in the inverse, one needs rank conditions on : The first are reduced rank conditions, , that establish that the order of the pole is greater than j; the last one is a full rank condition, , that establishes that the order of the pole is exactly equal to d. These requirements make up the pole condition.
Theorem 3
(Local Smith factorization). Consider and the other related quantities defined in Theorem 2; for , define the matrix functions as follows
and define the matrix functions and as follows
Then are analytic and invertible on , , and is the local Smith form of at , . Moreover one can choose the factors for the local Smith factorization of defined in (16), see (12), as
Theorem 3 shows that the local rank factorization (lrf) fully characterizes the elements of the local Smith factorization of at . In fact, the values of j with in the lrf provide the distinct partial multiplicities of at and gives the number of partial multiplicities that are equal to a given j; this characterizes the local Smith form . Moreover, it also provides the construction of an extended canonical system of root functions.
Remark that the j-th block of rows in can be written as
where and have full row rank; here denotes the corresponding block of rows in . This shows that are root functions of order of .
The next result presents the Triangular representation as proved in Franchi and Paruolo (2019, Corollary 4.6).
Proposition 5
Observe that the canonical system of root functions in (23) is not unique and not of minimal polynomial order, as discussed in the next section. The following example applies the above concepts in the VAR case.
Example 3
(I(2) VAR example continued). Consider Example 2. Applying truncation to the rows of , see Propositions 5 and A3, one finds that the columns in are root functions of at of order at least . Consider now one row in for some matrix A; this root function is of order 2 by Remark 23, and its truncation to degree 1, i.e., to the corresponding row of is still of order 2 by Propositions 5 and A3. Finally consider one row in , which gives a root function of order at least 1; its truncation to a polynomial of degree 0 gives the corresponding row of , which has order at least 1 by Propositions 5 and A3. In fact the rows of give root functions of order equal to 1 or to 2, when the corresponding entries in in are equal to 0, as discussed below.
7. Minimal Bases
This section describes the algorithm of Forney (1975) to reduce the basis to minimal order, using the generic notation of in place of . The generic basis is assumed to be rational and of dimension . This algorithm exploits the nesting . In the following, the j-th row of is indicated as , which is the j-th element of the basis, . Various modifications of the original basis are indicated as for .
Definition 6
(Degree of ). If is a polynomial basis, the degree of its j-th row, indicated as , is defined as the maximum degree of its elements, and the degree v of is defined as , i.e., the sum of the degrees of its rows.
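A minimal sympy illustration of Definition 6 with a made-up polynomial basis: the row degrees are the maximal degrees of the row entries, and the degree v of the basis is their sum.

```python
import sympy as sp

z = sp.symbols('z')

# Made-up polynomial basis; Definition 6: row degree = max degree of the entries
# in the row, degree v of the basis = sum of the row degrees.
B = sp.Matrix([[1, z],
               [z**2, 1 + z]])
row_deg = [max(sp.Poly(e, z).degree() for e in B.row(i)) for i in range(B.rows)]
print(row_deg, sum(row_deg))   # [1, 2], 3
```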
The reduction algorithm proposed by Forney (1975, pp. 497–98) consists of the following 3 steps.
- Step 1
- If is not polynomial, multiply each row by its least common denominator to obtain a polynomial basis .
- Step 2
- Reduce row orders in by taking -linear combinations.
- Step 3
- Reduce to a basis with a full-row-rank high order coefficient matrix, i.e., a “row proper” basis.
Remark 27
(Spaces and algorithm). Step 1 works on , Step 2 works on , Step 3 uses F-linear combinations on with appropriate square polynomial matrices .
7.1. Step 1
If is polynomial, the algorithm sets ; otherwise is rational, and its j-th row has representation , where is a polynomial row vector and is a scalar polynomial, and and are relatively prime. The first step consists in computing , where is a square polynomial matrix of dimension r.
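A minimal sympy sketch of Step 1, with a made-up rational basis: each row is multiplied by the least common denominator of its entries, which yields a polynomial basis spanning the same rational row space.

```python
import sympy as sp
from functools import reduce

z = sp.symbols('z')

# Made-up rational basis B (two rows); Step 1 clears denominators row by row.
B = sp.Matrix([[1 / (1 - z), z / (1 - z)],
               [1,           z**2      ]])

rows = []
for i in range(B.rows):
    row = B.row(i)
    lcd = reduce(sp.lcm, [sp.denom(sp.together(e)) for e in row])  # row's least common denominator
    rows.append((lcd * row).applyfunc(sp.cancel))                  # polynomial row, same span
B1 = sp.Matrix.vstack(*rows)
print(B1, all(e.is_polynomial(z) for e in B1))   # the new basis is polynomial
```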
7.2. Step 2
The second step reduces the degree of the rows in . This involves finding specific points , , at which . To find them, one can calculate the greatest common divisor of all minors of order r of . If the greatest common divisor equals 1, this step is complete, and the algorithm sets ; otherwise one computes the zeros of , say, where , . The following substep is then applied to each root sequentially, .
Denote by the current basis; this will be replaced by at the end of this substep. For , one has . For , all minors of order r of vanish, which means that is singular, i.e., it has reduced rank and rank factorization , say, where are full column rank. Let be one row in . Indicate by the set of its non-zero coefficients, and let be the maximal degree of rows in with nonzero coefficient in .
This substep consists of replacing row of with , which is still a polynomial vector. In fact , so that has representation with a polynomial vector, so that . This defines in terms of as where Q is a square matrix, equal to except for row , equal to , and where is a diagonal matrix equal to except for having in its -th position on the diagonal. Note that Q is nonsingular, because . The same procedure is applied to each row of .
This substep is repeated for all , . The condition on the minors is then recalculated and the substep repeated for the new roots, until the greatest common divisor of all minors of is 1. When this is the case, Step 2 sets .
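The check at the heart of Step 2 can be sketched in sympy by computing the greatest common divisor of all minors of order r of a polynomial basis; its roots are the points at which a row can be replaced so as to lower the degree. The basis below is a made-up illustration.

```python
import sympy as sp
from functools import reduce
from itertools import combinations

z = sp.symbols('z')

# Made-up polynomial basis (r = 2 rows, 2 columns); Step 2 inspects the gcd of
# all r x r minors: its roots are the points where the basis loses rank and a
# degree reduction is possible.
B1 = sp.Matrix([[1, z],
                [1, z**2]])
r, p = B1.shape
minors = [B1[:, list(cols)].det() for cols in combinations(range(p), r)]
g = reduce(sp.gcd, minors)
print(sp.factor(g), sp.solve(g, z))   # z*(z - 1), roots [0, 1]: the substep applies at each root
```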
7.3. Step 3
The last step operates on the high order coefficient matrix, repeating the following substep. Let indicate at the beginning of the substep, which will be replaced by at the end of it. Let be the order of the i-th row of , indicated as . The high-order matrix is defined as the matrix whose i-th row collects the coefficients of the highest-degree term of the i-th row of .
A necessary and sufficient condition for to be of full rank is that the order of is equal to the maximum order of its minors. If this is not the case, is singular, i.e., it has rank factorization with and a of full column rank. Hence one can choose a vector as one row in for which one has .
As before, let and define . Let also , note that for and let . Row in is replaced by
where s in the last expression in the first line is defined as and .
The central expression in (24) shows that is polynomial because in the exponents of . In order to see that the degree of is also lower than , one can note that the high order coefficient in (25), which corresponds to in (24), equals . This implies that the order of is lower than , and that replacing row of with reduces the order of the vector.
This defines in terms of as where N is a square matrix, equal to except for row , equal to . Note that N is nonsingular, because . This process is repeated for all the rows in . Next set and repeat until the high order coefficient matrix has full rank. When this is the case, Step 3 sets .
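A sympy sketch of the Step 3 check follows: build the high order coefficient matrix from the leading coefficients of each row and test whether it has full row rank; if it does not, an F-linear combination of rows lowers a row degree. The basis is a made-up illustration.

```python
import sympy as sp

z = sp.symbols('z')

# Made-up polynomial basis whose high order coefficient matrix is rank deficient,
# so Step 3 applies: subtracting row 1 from row 2 lowers the degree of row 2.
B2 = sp.Matrix([[1 + z, z],
                [2 + z, z]])

def high_order_matrix(B):
    """Matrix of leading (highest-degree) coefficients, row by row."""
    rows = []
    for i in range(B.rows):
        polys = [sp.Poly(e, z) for e in B.row(i)]
        d = max(q.degree() for q in polys)                    # degree of row i
        rows.append([q.coeff_monomial(z**d) for q in polys])
    return sp.Matrix(rows)

H = high_order_matrix(B2)
print(H, H.rank())                                     # Matrix([[1, 1], [1, 1]]), rank 1 < 2
print((B2.row(1) - B2.row(0)).applyfunc(sp.expand))    # Matrix([[1, 0]]): row degree drops to 0
```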
8. From a Canonical System of Root Functions to a Minimal Basis for VAR
This section applies the algorithm of Forney reviewed in Section 7 to in (23) to reduce the basis to minimal order in the VAR example at frequency . This application leads to the separation of the cases of
- (i)
- non-polynomial cointegrating relations reducing the order of integration from 2 to 0;
- (ii)
- polynomial cointegrating relations reducing the order of integration from 2 to 0.
The process of obtaining minimal bases does not lead to a unique choice of basis; this leaves open the choice of how to further restrict the basis to obtain uniqueness. Forney (1975) obtains uniqueness by requiring the minimal basis to be in upper echelon form. Other sets of restrictions can also be considered. For the sake of brevity, the restrictions on how to obtain a unique minimal basis are not further discussed here.
8.1. Step 1 in VAR
Consider the triangular representation of an system, see (23):
and apply the algorithm of Forney (1975) to . Because is already polynomial, one has .
8.2. Step 2 in VAR
Next consider Step 2, and set . One wishes to find some zero and some corresponding so as to have . Denoting , one hence needs to find the pair such that
where u is a scalar. Note that is not a possible zero of (27), because is of full row rank, so that . Post-multiplying (27) by the square non-singular matrix one finds
Hence, partitioning as where is , one finds that the second set of equations gives and the first one, substituting the expression of given in Theorem 3, implies
where in (29); note also that has been simplified in (30). This proves the following proposition.
Proposition 6
(Step 2 condition in ). A necessary and sufficient condition for Step 2 to be non-empty is that (29), (30) hold simultaneously, i.e., that is a non-zero eigenvalue—left eigenvector pair of , and the left eigenvector is orthogonal to . If this is the case, for each pair one has
Observe that from (27), using and with , one finds
where the last equality follows from (31). This shows that under the necessary and sufficient condition in Proposition 6, there is a linear combination of where one can factor out of , which reduces the order from 1 to 0. Here , which has degree equal to 1, is replaced by , which has degree 0. Note that from (31) the new cointegrating relation is in the span of .
This can be done for all pairs . Let be all the pairs satisfying the assumptions of Proposition 6, , and let . Choose also as some matrix such that is square and nonsingular; many matrices satisfy this criterion, including . The output of Step 2 can be expressed as the following choice of :
Remark 28
(CI(2,2) cointegration). This step brings out from some cointegrating relations that map the I(2) variables directly to I(0) without the help of first differences Δ.
8.3. Step 3 in VAR
Consider in (33) and its high order coefficient matrix
Step 3 requires finding a nonzero matrix such that . Recall that is square and nonsingular; hence if and only if, partitioning as one has
This equality can be written as
Remark 29
(Further degree reductions). Equation (35) requires to be orthogonal to the remaining part of the multicointegrating coefficient in the direction of . In addition also needs to satisfy (34). For some configurations of dimensions, (34) could be solvable for in terms of other quantities; in this case (34) would not impose further restrictions.
Let also be any complementary matrix such that is square and nonsingular; one possible choice of is . The output of Step 3 can be expressed as the following choice of
Remark 30
(Minimal basis). This step brings out from some other cointegrating relations that map the I(2) variables directly to I(0) without the help of first differences Δ. Equation (36) shows how the canonical system of root functions can be reduced to minimal order.
Example 4
(Multicointegration coefficient in the span of ). Consider the special case when the multicointegrating coefficient satisfies , i.e., it has components only in the direction of . This special case is relevant, because while with for .
One can see that in this case the conditions in Proposition 6 are not satisfied. In fact (29) cannot hold, as . Step 2 is hence empty, and this implies that the rows including are missing in (33) and (36).
Applying Step 3, Equation (34) is always satisfied by the choice , because . Equation (35) then reads , which is satisfied if and only if has reduced rank. In this case, let the rank factorization be , with ψ and η of full column rank. One can then let and choose , so that
There are several examples of this separation in the I(2) VAR literature; for example Kongsted (2005) discusses this when .
9. Conclusions
This paper discusses the notion of cointegrating space for general processes. The notion of cointegrating space was formally introduced in the literature by Johansen (1988) for the case of I(1) VAR systems. The definition of the cointegrating space is simplest in the case without multicointegration, because there is no need to consider vector polynomials in the lag operator.
Engle and Yoo (1991) introduced the notion of polynomial cointegrating vectors in parallel with the related one of multicointegration in Granger and Lee (1989). However, the literature has not yet discussed the notion of cointegrating space in the general polynomial case; this paper fills this gap.
In this context, this paper recognises that cointegrating vectors are in general root functions, which have been analysed at length in the mathematical and engineering literature, see e.g., Gohberg et al. (1993). This makes it possible to characterise a number of properties of cointegrating vectors.
Canonical systems of root functions are found to provide a basis of several notions of cointegration space in the multicointegrated case. The extended local rank factorization of Franchi and Paruolo (2016) can be used to explicitly derive a canonical system of root functions. This result is constructive, as it gives an explicit way to derive such a basis from the VAR polynomial.
The canonical system of root functions constructed in this way is not necessarily of minimal polynomial degree, however. The three-step procedure of Forney (1975) to reduce this basis to minimal degree is reviewed and restated in terms of rank factorizations. The application of this procedure to I(2) VAR systems is shown to separate the polynomial and the non-polynomial cointegrating vectors.
Author Contributions
The authors contributed equally to the paper. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Conflicts of Interest
The authors declare no conflict of interest.
Appendix A
Appendix A.1. Scalar, Vector, Matrix Analytic Functions
Consider a rational function , defined as with and polynomials, where . One can ask when is analytic on , . The following remark states that this is the case provided is not a root of .
Remark A1
(Rational scalars can be analytic on ). Let be a rational function, i.e., with and polynomial; assume also in addition that has no root equal to . Then is analytic on , for some . In fact, let be the degree of , and decompose as , where are the roots of with multiplicity , using the factor theorem for polynomials, see e.g., Barbeau (1989, p. 56). Then each term has an analytic representation on , , see e.g., Lemma A1 below. Note that this generates an infinite tail in , i.e., is not polynomial in this case (unless ).
Lemma A1
(The inverse of a polynomial is analytic away from its roots). Let be the distinct roots of a polynomial with multiplicities , , and let be another point, distinct from ; then one can pick some radius δ with such that is analytic on .
Proof.
The polynomial can be decomposed as . Next consider each term in the product and observe that where . Define and note that for , so that for and
Hence is analytic on for any , and as a consequence also on with . This implies that is analytic on . □
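A quick sympy check of the expansion used in the proof: the reciprocal of a linear factor is analytic around any point distinct from its root, with a geometric-series expansion whose radius of convergence is the distance to that root; the numbers below are illustrative.

```python
import sympy as sp

z = sp.symbols('z')

# Illustration of Lemma A1: 1/(z - z1) is analytic around z0 != z1; its Taylor
# expansion around z0 converges on |z - z0| < |z1 - z0|.  Values are made up.
z0, z1 = 0, 2
print(sp.series(1 / (z - z1), z, z0, 4))   # -1/2 - z/4 - z**2/8 - z**3/16 + O(z**4)
```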
Similarly, consider a vector function with rational entries. The denominator polynomials in all entries can be collected in a single one, the least common denominator, and hence has representation where is a monic polynomial and is a vector polynomial, and and are relatively prime. The same applies to rational matrix functions .
Example A1
(The least common denominator of bivariate rational vectors). The least common denominator can be illustrated as follows. Take a rational row vector , where are (nonzero) polynomials ; then one can find a polynomial with lowest degree such that where are polynomials ; is the least common multiple of the denominators, i.e., the least common denominator, and one has
with where are still polynomials, so that is a vector polynomial. The vector polynomial and the scalar polynomial are relatively prime, because there is no scalar polynomial that divides both and all the elements in . The polynomials in and can still be divided by a scalar in F, so can be assumed to be monic.
Remark A2
(Rational vector and matrices). The analytic vector functions and analytic matrix functions can be generated as rational vectors or matrices, as long as their denominator polynomial has no root equal to . When has one root equal to with multiplicity , this implies that or have a pole of order at , and or are not analytic on a disk centered around .
Appendix A.2. Spans of Canonical Systems of Root Functions
This section considers linear combinations of canonical systems of root functions with coefficients in F, and . Attention is first given to multiplication of a root function by a rational or polynomial scalar; next generic linear combinations of canonical systems of root functions in are considered.
In order to discuss results, the notion of generalized root function is introduced first.
Definition A1
(Generalized root function). Let and be a root function of at and order s, see Definition 3; then
is called a generalized root function of at with order s and exponent n.
Observe that this is in line with Definition 5 of generalized cointegrating vectors for rational vectors. The reason for the introduction of the notion of generalized root function is provided by the next proposition.
Proposition A1
(Multiplication by a scalar). Let be a root function for of order s on , . Then
- (i)
- if , , then is a root function on of order s;
- (ii)
- if , , then is a generalized root function on of order s and exponent ;
- (iii)
- if , , then is a generalized root function on of order s and exponent .
Proof.
Consider first case (iii), where are relatively prime polynomials, . If has root then it admits representation with and , . Hence where with and . The factor has exponent , which can be positive, negative or 0; because are relatively prime polynomials, , either or or . The factor is a generalized root function of order s, because is a root function of order s and the scalar factor satisfies , so that . This shows that is a generalized root function of order s and exponent .
Case (ii): set in the proof of case (iii), and note that the exponent is , which is either 0 or positive.
Case (i): set , in the proof of case (iii), and note that the exponent is 0. □
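As a numerical illustration of the exponent appearing in this proof, one can check with a computer algebra system that a rational scalar whose numerator and denominator have roots of different orders at the point of interest factors into a power of the local factor times a function that is finite and nonzero there. All scalars below are hypothetical and not taken from the paper:

```python
# Minimal sketch (hypothetical scalars): the exponent of Proposition A1 is the
# difference between the root orders of numerator and denominator at z0, and
# the remaining factor q(z) is finite and nonzero at z0.
import sympy as sp

z = sp.symbols('z')
z0 = 1
a = (z - 1)**2 * (z + 2)         # numerator: root of order 2 at z0 = 1
b = (z - 1) * (z - 3)            # denominator: root of order 1 at z0 = 1
p = a / b                        # rational scalar multiplying the root function

n = 2 - 1                        # exponent n = 2 - 1 = 1
q = sp.cancel(p / (z - z0)**n)   # p(z) = (z - z0)^n q(z)
print(q.subs(z, z0))             # -3/2: finite and nonzero at z0
```

Multiplying a root function of order s by such a scalar would therefore yield, under these hypothetical choices, a generalized root function of order s and exponent 1 in the sense of Definition A1.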
Remark A3
(A generalized root function is meromorphic). A generalized root function is analytic on except for the possibility of a pole at the isolated point , i.e., it is a meromorphic function on .
Remark A4
(A generalized root function can be analytic). When the exponent n of is zero, the generalized root function coincides with the root function . When the exponent n of is positive, then the generalized root function has a zero at . In both cases is analytic. So a generalized root function can be analytic (with or without a zero at ).
Remark A5
(Generalized root function and cointegration). Observe that Definition A1 implies the following: given a meromorphic function , check whether it has a root or a pole at ; this function is a generalized root function if, after removing the pole or the zero at by multiplying it by , where n is the order of the root or of the pole, the resulting function is a root function, i.e., a cointegrating vector. This is in line with Definition 5.
Attention is now turned to linear combinations of a canonical system of root functions . The scalars of the linear combination can be in F, or . The main result in Proposition A2 below is that -linear combinations of generate a generalized root function possibly with a zero at , while -linear combinations of generate a generalized root function possibly with a pole or a zero at .
In the following, let be a vector with elements in F. Let also be the set of non-zero entries in v, , with the cardinality of and the ordered set of indices in , , . Similarly, let be a vector with polynomial elements in with its ordered set of indices of nonzero elements in , and let finally be a vector with rational elements in with as its ordered set of indices of nonzero elements in .
Proposition A2
(Linear combinations). Let be a canonical system of root functions of on a disc , with orders ; let also , and be nonzero vectors; one has:
- (i)
- is a root function of order ;
- (ii)
- is a generalized root function, with exponent where is the order of as a zero of , and with order ;
- (iii)
- is a generalized root function, possibly with a pole or a zero at , with exponent where is the order of as a pole or as a zero of , and with order .
Proof.
By definition , analytic on and . One finds with because F is a field, and hence it is closed under multiplication. Hence is a polynomial with coefficient vectors in , of the same form as each , and one finds that
where , and . Note that because is a nonzero vector, the set is not empty. Next observe that , as otherwise this would contradict the property of to be of maximal order and linearly independent from the previous root functions for . This shows that is a root function of order s.
Consider , where by Proposition A1.(i), one has that is a generalized root function with representation , say, with and a root function of order . Let , and note that with . In order to show that , let be the set of indices with , and observe that where by construction and by the definition of root function. If , this would imply that there is a nonzero linear combination of equal to 0, i.e., that is not of full row rank, which contradicts the construction in Definition 4. This implies that , and that is a generalized root function of order q.
Next, because is a root function of order one has
where . Finally, in order to prove that the order of the generalized root function is s, one needs to show that . Let be the set of indices with , and observe that where and as above. If , then there exists a nonzero linear combination of equal to , which would imply the existence of a root function of higher order obtained by combining the rows in with index , which contradicts the fact that the orders are chosen to be maximal. This implies that the order of the generalized root function is equal to s.
The proof is the same as in . Note that here may be negative. □
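The order statement in case (i) can be illustrated numerically. The functions below are hypothetical stand-ins for the products of the root functions with the underlying matrix function (each vanishing at the unit root with its own order), not objects from the paper; the check confirms that an F-linear combination with nonzero coefficients has a zero of the minimal order:

```python
# Minimal sketch (hypothetical data): an F-linear combination of functions with
# zeros of orders s1 and s2 at z = 1 has a zero of order min(s1, s2) there.
import sympy as sp

z = sp.symbols('z')
f1 = (1 - z)**2 * (1 + z)   # zero of order s1 = 2 at z = 1
f2 = (1 - z) * (2 + z)      # zero of order s2 = 1 at z = 1

v1, v2 = 3, -5              # nonzero scalar (F-linear) coefficients
h = sp.expand(v1 * f1 + v2 * f2)

# order of the zero of h at z = 1: smallest k with nonzero k-th derivative
order = next(k for k in range(10)
             if sp.simplify(sp.diff(h, z, k).subs(z, 1)) != 0)
print(order)                # 1 = min(s1, s2)
```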
Remark A6
(Closure with respect to linear combinations). Proposition A2 shows that -linear combinations and -linear combinations of a canonical system of root functions produce generalized root functions. Note that is itself a set of generalized root functions (with exponent 0). Hence, in this sense, generalized root functions are closed under -linear combinations and -linear combinations.
Remark A7
(Spans). Indicate the set of G-linear combinations as , where . It is simple to observe that
Remark A8
(Role of characteristics of canonical system of root functions). The proof of Proposition A2 reveals that, in order to conclude that a - or -linear combination of is a generalized root function, the property that is of full row rank plays a crucial role. In fact, when reaching the equality where q is the exponent of the linear combination, one can show that by making use of this property only, without using the maximal orders of the root functions in . This proves the following corollary.
Corollary A1
(Linear combinations of a set of root functions). Replace the canonical system of root functions in Proposition A2 with a set of m root functions for on , such that is of full row rank; then the - or -linear combinations and are generalized root functions with the same exponents as in Proposition A2.
Appendix A.3. Truncations of Root Functions
This section discusses how the truncation of a root function still delivers a root function, possibly of lower order. The main implication of this property is that one can take any element in for and obtain other root functions by truncation, thus enlarging the set of root functions that can be generated from .
Let be a root function of order s of on , , and indicate the truncation of to a polynomial of degree r as ; the remainder is called the tail of . The following proposition clarifies that one can modify the tail of a root function without affecting its property of factoring some power of from . One special case is that one can delete the tail after the order s of the root function without changing its order.
Proposition A3
(Truncations). Let be a root function of order s for on , , , and let be a vector function, analytic on ; then
- (i)
- for an integer , the row vector defined in (A3) is still a root function on of order ;
- (ii)
- if one chooses , in the definition (A3) of with proportional to the tail of , a special case of is that the truncation of to the polynomial of order ℓ is also a root function on of order ;
- (iii)
- finally if is a root function of order s in a canonical system of root functions of at , then its truncation to a polynomial of degree is still a root function of at on of order s.
Proof.
By definition one has with . Hence, setting , one finds
where . If , then is a root function of order q. If, instead, , then is a root function of order n greater than q; in any case , with n finite by Proposition 2.
Choose in (A3), so that . The statement follows from .
The coefficients and generate the same coefficients for in the convolution , where for by definition of order s, see (8). This implies that is a root function of order at least s. However, because root functions in a canonical system of root functions are chosen of maximal order, the order of is equal to s. This completes the proof. □
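A small numerical check of the truncation property: truncating a root function of order s to its Taylor polynomial of degree s - 1 around the root leaves the order unchanged. The matrix function and root function below are hypothetical and chosen only for illustration:

```python
# Minimal sketch (hypothetical A(z) and root function): truncating a root
# function of order s = 1 to degree s - 1 = 0 around z0 preserves the order.
import sympy as sp

z = sp.symbols('z')
z0 = 1
A = sp.Matrix([[1 - z, 0], [0, 1]])
beta = sp.Matrix([[1 + (z - z0)**2, (1 - z)**3]])   # root function of order 1

def order_at(v, point, max_order=10):
    """Smallest k such that some entry of v has nonzero k-th derivative at point."""
    for k in range(max_order):
        if any(sp.simplify(sp.diff(e, z, k).subs(z, point)) != 0 for e in v):
            return k
    return max_order

print(order_at(beta * A, z0))        # 1: order of the original root function

# truncation of beta to its Taylor polynomial of degree 0 around z0
beta_trunc = beta.applyfunc(lambda e: e.subs(z, z0))
print(order_at(beta_trunc * A, z0))  # 1: the truncated vector keeps order 1
```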
Remark A9
(Truncated cointegrating vectors). Proposition A3 implies that truncating a cointegrating vector to order preserves the cointegrating property, but not necessarily the order s.
Remark A10
(Cointegrating vectors in VAR can be chosen not to be polynomial). Consider Example 1, where the orders of integration of (polynomial) linear combinations can be either 1 or 0. In this case, root functions are of order at most , and Proposition A3 ensures that the root functions can be truncated to order , i.e., to non-polynomial linear combinations.
Remark A11
(A generic process may have polynomial cointegration relations). Consider now the generic case of an I(1) process. The orders of integration of (polynomial) linear combinations can be , say, with . In this case, root functions are of order at most , and Proposition A3 ensures that the root functions can be truncated to order d. If , this may require polynomial linear combinations also in the case.
Remark A12
(Polynomial cointegration vectors in VAR can be chosen of order at most 1). Consider Example 2, where the orders of integration of (polynomial) linear combinations can be either 2, 1, or 0. In this case, root functions are of order at most , and Proposition A3 ensures that the root functions can be truncated to order , i.e., to polynomial linear combinations of order 1.
Remark A13
(Multicointegrated systems require polynomial cointegration relations). As shown in the previous three remarks, multicointegrated systems in general require polynomial linear combinations to be considered.
Notes
1. See Engle and Granger (1987, pp. 253–54). Here N in their notation is replaced by p and with for consistency with the rest of the paper.
2. The following notation is employed: indicates either the field of real numbers or the field of complex numbers and if a matrix is written in terms of its columns, indicates the column span of A with coefficients in F, i.e., and denotes the row span of with coefficients in F, i.e., , where indicates the conjugate transpose of A. Hence if and only if , i.e., the spaces coincide, but the former contains column vectors while the latter contains row vectors. Here the row form is employed.
3. could be taken to be non-autocorrelated instead of i.i.d. with no major changes in the results in the paper.
4. This result is usually stated as where is a generic constant, see e.g., Hannan and Deistler (1988), Equation (1.2.15).
5. In fact, substituting , one finds , and applying to both sides gives where is stationary. The term is a bilateral random walk (Franchi and Paruolo 2019), a nonstationary process, so that the l.h.s. can be made stationary if and only if the coefficient loading is 0.
6. Theorem 3 provides two constructions of the local Smith factorization.
7. In this case is analytic for all .
8. In the first sentence in Definition 3.1 of Franchi and Paruolo (2019) ‘’ should read ‘’. The results of Franchi and Paruolo (2019, Theorem 3.3) are applied setting there equal to here.
9. The present statement follows from Franchi and Paruolo (2019, Theorem 3.5) with and there equal to and here.
References
- Barbeau, Edward J. 1989. Polynomials. Berlin and Heidelberg: Springer.
- Bauer, Dietmar, and Martin Wagner. 2012. A State Space Canonical Form for Unit Root Processes. Econometric Theory 28: 1313–49.
- Beare, Brendan, and Won-ki Seo. 2020. Representation of I(1) and I(2) autoregressive Hilbertian processes. Econometric Theory 36: 773–802.
- Beare, Brendan, Juwon Seo, and Won-ki Seo. 2017. Cointegrated Linear Processes in Hilbert Space. Journal of Time Series Analysis 38: 1010–27.
- Engle, Robert F., and Clive W. J. Granger. 1987. Co-integration and Error Correction: Representation, Estimation, and Testing. Econometrica 55: 251–76.
- Engle, Robert F., and Sam B. Yoo. 1991. Cointegrated economic time series: An overview with new results. In Long-Run Economic Relations: Readings in Cointegration. Edited by Robert Engle and Clive Granger. Oxford: Oxford University Press, pp. 237–66.
- Engsted, Tom, and Søren Johansen. 2000. Granger’s Representation Theorem and Multicointegration. In Cointegration, Causality and Forecasting: Festschrift in Honour of Clive Granger. Edited by R. F. Engle and H. White. Oxford: Oxford University Press, pp. 200–12.
- Forney, G. David, Jr. 1975. Minimal bases of rational vector spaces, with applications to multivariable linear systems. SIAM Journal on Control 13: 493–520.
- Franchi, Massimo, and Paolo Paruolo. 2011. Inversion of regular analytic matrix functions: Local Smith form and subspace duality. Linear Algebra and Its Applications 435: 2896–912.
- Franchi, Massimo, and Paolo Paruolo. 2016. Inverting a matrix function around a singularity via local rank factorization. SIAM Journal on Matrix Analysis and Applications 37: 774–97.
- Franchi, Massimo, and Paolo Paruolo. 2019. A general inversion theorem for cointegration. Econometric Reviews 38: 1176–201.
- Franchi, Massimo, and Paolo Paruolo. 2020. Cointegration in functional autoregressive processes. Econometric Theory 36: 803–39.
- Gohberg, Israel, Marinus A. Kaashoek, and Frederik Van Schagen. 1993. On the local theory of regular analytic matrix functions. Linear Algebra and Its Applications 182: 9–25.
- Granger, Clive W. J., and Tae-Hwy Lee. 1989. Investigation of production, sales and inventory relationships using multicointegration and non-symmetric error correction models. Journal of Applied Econometrics 4: S145–59.
- Gregoir, Stéphane M. 1999. Multivariate time series with various hidden unit roots, Part I. Econometric Theory 15: 435–68.
- Hannan, Edward J., and Manfred Deistler. 1988. The Statistical Theory of Linear Systems. Hoboken: John Wiley & Sons.
- Howlett, Phil G. 1982. Input retrieval in finite dimensional linear systems. Journal of the Australian Mathematical Society (Series B) 23: 357–82.
- Hungerford, Thomas W. 1980. Algebra. Berlin and Heidelberg: Springer.
- Hylleberg, Svend, Robert F. Engle, Clive W. J. Granger, and Sam B. Yoo. 1990. Seasonal integration and cointegration. Journal of Econometrics 44: 215–38.
- Johansen, Søren. 1988. Statistical Analysis of Cointegration Vectors. Journal of Economic Dynamics and Control 12: 231–54.
- Johansen, Søren. 1991. Estimation and Hypothesis Testing of Cointegration Vectors in Gaussian Vector Autoregressive Models. Econometrica 59: 1551–80.
- Johansen, Søren. 1992. A representation of vector autoregressive processes integrated of order 2. Econometric Theory 8: 188–202.
- Johansen, Søren. 1995. Identifying restrictions of linear equations with applications to simultaneous equations and cointegration. Journal of Econometrics 69: 111–32.
- Johansen, Søren. 1996. Likelihood-Based Inference in Cointegrated Vector Auto-Regressive Models. Oxford: Oxford University Press.
- Johansen, Søren, and E. Schaumburg. 1998. Likelihood analysis of seasonal cointegration. Journal of Econometrics 88: 301–39.
- Kongsted, Hans Christian. 2005. Testing the nominal-to-real transformation. Journal of Econometrics 124: 205–25.
- Mosconi, Rocco, and Paolo Paruolo. 2017. Identification conditions in simultaneous systems of cointegrating equations with integrated variables of higher order. Journal of Econometrics 198: 271–76.
- Seo, Won-ki. 2019. Cointegration and Representation of Cointegrated Autoregressive Processes in Banach Spaces. arXiv preprint arXiv:1712.08748v4.