Article

VARMA Models with Single- or Mixed-Frequency Data: New Conditions for Extended Yule–Walker Identification

by
Celina Pestano-Gabino
*,
Concepción González-Concepción
and
María Candelaria Gil-Fariña
Department of Applied Economics and Quantitative Methods, Universidad de La Laguna (ULL), 38200 San Cristóbal de La Laguna, Spain
*
Author to whom correspondence should be addressed.
Mathematics 2024, 12(2), 244; https://doi.org/10.3390/math12020244
Submission received: 5 December 2023 / Revised: 9 January 2024 / Accepted: 10 January 2024 / Published: 11 January 2024

Abstract: This paper deals with the identifiability of VARMA models with VAR order greater than or equal to the MA order, in the context of mixed-frequency data (MFD) using extended Yule–Walker equations. The main contribution is that necessary and sufficient conditions for identifiability in the single-frequency data case are expressed in an original way and yield new results in the MFD case. We also provide two counterexamples that answer an open question in this topic about whether certain sufficient conditions are necessary for identifiability. Therefore, this paper expands the set of models that can be identified with MFD using extended Yule–Walker equations. The main idea is that with MFD, some autocovariance blocks are not available from observed variables and, in some cases, the new conditions in this paper can be used to reconstruct all the non-available covariance blocks from available covariance blocks.

1. Introduction

In time series models, developing statistically efficient and computationally quick methods for identifying and estimating VAR or VARMA models with single-frequency data (SFD) or mixed-frequency data (MFD) is an important task. In this paper, “SFD means that all the variables of a model are observed at the same discrete-time frequency at which the model operates, and MFD means that some of the variables are observed at the same discrete-time frequency at which the model operates, and others are observed at one or more lower frequencies” [1].
Linear algebra tools are extensively used to study transfer functions, Yule–Walker equations and Hankel matrices associated with identifying and estimating VAR or VARMA models (see, for instance, [1,2,3,4,5,6,7,8,9,10,11,12,13]). In particular, the extended Yule–Walker method (XYW) was proposed by [5], and it is considered in several later papers, in particular in [2,3,8] for estimating VAR models from available covariance matrices of MFD. Two of the principal and parallel strands of this literature which extend the method to the case of VARMA models are [1,2]. The first considers both exact and generic identification and the second considers only exact identification. They make both common and differing assumptions involving the parameters of a VARMA model and prove that their assumptions are, as a whole, sufficient to identify all of the parameters of a VARMA model with MFD. In particular, [1] questions whether its “conditions as a whole are necessary for identification”. This question has motivated our work, and our aim is to expand the methodology to more identifiable models.
This paper is focused on identification. In econometrics and statistics, identification means the coefficients (parameters) of a model are determined uniquely from data covariances (and higher moments, depending on the data distribution) under assumed conditions for the coefficients of a model.
Our aim is to provide conditions for the identifiability of VARMA models that cannot be identified by following the procedure in [1] and, in this regard, to complement it. We use a subset of conditions in [1] and some more, and we prove with two counterexamples that the whole set of conditions in [1] is not necessary. The main results are obtained from subsystems of extended Yule–Walker equations. Therefore, we expand the set of models identified with extended Yule–Walker methods.
Because with MFD some autocovariance blocks are not available from observed variables, the main idea of our work is to provide conditions that, in addition to ensuring that the model with SFD is identifiable, allow rebuilding the unknown blocks from available covariance blocks. This thus yields all the complete autocovariances. As is well known, there is a bijection between the covariance and the corresponding spectral density of the process (see, for instance, [6]), meaning we can ensure the identifiability of the model.
Section 2 summarizes and comments on the sufficient conditions used in [1] to identify VARMA models with SFD and MFD. In Section 3, we prove our main theoretical results. With a suitable change, our paper expands the applicability of XYW methods to more identifiable models. Section 4 tests and illustrates our main insight with two counterexamples. We close this paper with Conclusions, References and an Appendix with a MATLAB subroutine used in a counterexample.

2. The Six Sufficient Conditions for Identification in [1]

Zadrozny, in [1], considers the VARMA(r, q) model
yt = A1yt−1 + … + Aryt−r + B0εt + B1εt−1 + … + Bqεt−q,        (1)
where yt denotes an n × 1 vector of observed variables; p = max{r, q + 1}; Ai for i = 1, …, p and Bj for j = 0, 1, …, p − 1 are n × n matrices, Ar ≠ 0, Bq ≠ 0, Ai = 0 if i = r + 1, …, p; Bi = 0 if i = q + 1, …, p − 1; and εt denotes an n × 1 white noise vector.
In order to express the conditions in [1], and others in this paper, we need the following notation:
a(z) = I − A1z − … − Apzp, b(z) = B0 + B1z + … + Bqzq, where z ∈ ℂ,

$$F = \begin{pmatrix} A_1 & I_n & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ A_{p-1} & 0 & \cdots & I_n \\ A_p & 0 & \cdots & 0 \end{pmatrix} \text{ is } np \times np, \quad G = \begin{pmatrix} B_0 \\ \vdots \\ B_{p-1} \end{pmatrix} \text{ is } np \times n, \quad H = (I_n, 0_{n \times n}, \ldots, 0_{n \times n}) \text{ is } n \times np,$$

CL(F, G) = [G FG F2G … FL−1G] is np × nL and
OL(F, H) = [Ht (HF)t (HF2)t … (HFL−1)t]t is nL × np, for L = 1, 2, …
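These constructions can be sketched in NumPy (a transcription of ours, not from the paper; the helper names `companion_F`, `ctrb` and `obsv` are our own). The block layout of F is the one under which Ki = HFiG reproduces the transfer function coefficients of model (1):

```python
import numpy as np

def companion_F(A_blocks, n):
    """F (np x np): AR blocks A_1..A_p in the first block column,
    identity blocks I_n on the block superdiagonal."""
    p = len(A_blocks)
    F = np.zeros((n * p, n * p))
    for i, Ai in enumerate(A_blocks):
        F[i * n:(i + 1) * n, :n] = Ai
        if i < p - 1:
            F[i * n:(i + 1) * n, (i + 1) * n:(i + 2) * n] = np.eye(n)
    return F

def ctrb(F, G, L):
    """C_L(F, G) = [G  FG  F^2G ... F^{L-1}G]  (np x nL)."""
    blocks, B = [], G
    for _ in range(L):
        blocks.append(B)
        B = F @ B
    return np.hstack(blocks)

def obsv(F, H, L):
    """O_L(F, H) = [H; HF; HF^2; ...; HF^{L-1}]  (nL x np)."""
    blocks, R = [], H
    for _ in range(L):
        blocks.append(R)
        R = R @ F
    return np.vstack(blocks)

# Quick check with the VARMA(3, 1) values of Counterexample 1 (B2 = 0):
n, p = 2, 3
A1 = np.array([[0.0, 0.5], [0.5, 0.0]])
A2 = np.array([[0.25, 0.0], [0.0, 0.25]])
A3 = np.array([[0.5, 0.25], [0.25, 0.125]])
B0, B1, B2 = np.eye(2), np.array([[0.5, 0.5], [0.5, 0.0]]), np.zeros((2, 2))
F = companion_F([A1, A2, A3], n)
G = np.vstack([B0, B1, B2])
H = np.hstack([np.eye(n), np.zeros((n, n * (p - 1)))])
```

With this layout, K0 = HG = B0, K1 = HFG = A1B0 + B1 and K2 = HF2G = A1K1 + A2K0 + B2, i.e., the usual impulse-response recursion of model (1).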
Assuming that the VARMA(r, q) model is stationary and that Σ = E(εtεtt) is positive definite:
K(z) = a−1(z)b(z) = ∑i=0∞ Kizi is the transfer function,
Ci = E(ytyt−it) = ∑j=0∞ Ki+jΣKjt, for i ∈ ℤ, is the i-th population covariance matrix.
To study mixed-frequency data cases (with stock variables), we consider n = n1 + n2 variables; the first n1 variables are high-frequency variables observed in every period and, given N ∈ ℕ, the last n2 variables are low-frequency variables observed only for t ∈ {0, N, 2N, 3N, …}. Furthermore, we consider the following partition:
$$C_i = \begin{pmatrix} C_i^{ff} & C_i^{fs} \\ C_i^{sf} & C_i^{ss} \end{pmatrix},$$
where Ciff, Cifs, Cisf and Ciss are n1 × n1, n1 × n2, n2 × n1 and n2 × n2 blocks, respectively.
Note that all the covariance blocks are available from variables observed with MFD, except Ciss if i ≠ kN for any integer k. However, the XYW method in [1] only considers the first n1 columns in each Ci. Therefore, let us denote:
C̃i as the first n1 columns of Ci, for i ∈ ℤ;
H1 as the matrix with the first n1 rows of H;
H2 as the matrix with the last n2 rows of H.
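As a small bookkeeping aid (ours, not part of the paper), the availability pattern of the covariance blocks under stock-variable MFD sampling can be encoded as follows:

```python
def available_blocks(i, N):
    """Which blocks of C_i are estimable from stock MFD observations:
    only the ss block pairs two low-frequency variables, so it needs
    the lag i to be a multiple of the sampling ratio N."""
    return {"ff": True, "fs": True, "sf": True, "ss": i % N == 0}
```

For example, with N = 3 the block C2ss is not available while C3ss is; the goal of Section 3 is precisely to rebuild the missing ss blocks from the available ones.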
Zadrozny, in [1], proves that under the following six sufficient conditions, a VARMA model (1) is identified by the population covariance of its variables observed with MFD.
Condition I: VARMA model (1) is stationary, i.e., det a(z) ≠ 0 if |z| ≤ 1.
Condition II: VARMA model (1) is regular with B0 lower triangular and non-singular and ∑ = In.
Condition III: VARMA model (1) is miniphase, i.e., det b(z) ≠ 0 if |z| < 1.
Condition IV: rank Cnp(F, G) = np.
Condition V: rank CL(F, VH1t) = np and rank OL(F, H1) = np, for sufficiently large L, where V = ∑k=0∞ FkGGt(Ft)k = [Cnp(F, G)…][Cnp(F, G)…]t, which exists because the model is stationary.
Note on Condition V: Condition V in [1] (p. 441) reads: “VARMA model (2.1) is observable for sufficiently large L, for the MFD being considered”. Condition V as written above can be read in [1] (p. 445): “Section 3 and Section 4 proved that parameters of VARMA model (2.1) are identified (…) for MFD if CL(F, VH1t) and OL(F, H1) have full rank, (…) for sufficiently large L”, because to identify VAR parameters with the specific procedure in [1] (pp. 441–442), rank D1 = np is necessary, and therefore so is the full rank of CL(F, VH1t). (We have taken into account that matrices D1 and E1 in [1] (p. 442) must be written without a subscript in the second H̃.) Moreover, to identify VMA parameters with the specific procedure in [1] (pp. 442–444), the full rank of OL(F, H1) is necessary.
Condition VI: The nq × nq matrix
$$\begin{pmatrix} B_1B_0^{-1} & I_n & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ B_{q-1}B_0^{-1} & 0 & \cdots & I_n \\ B_qB_0^{-1} & 0 & \cdots & 0 \end{pmatrix}$$
is diagonalizable, i.e., it has a linearly independent set of eigenvectors.
Remark 1. 
Zadrozny, in [1], proves that Conditions I, II, III and IV are sufficient for identifiability with SFD. However, if Conditions I, II and III hold, Condition IV is sufficient but not necessary in the SFD case (see Counterexamples 1 and 2).
Deistler et al. in [6] prove that if q > r, a VARMA(r, q) model with MFD is not identifiable. Next, we will show that rank OL (F, H1) = np in Condition V excludes not only the case q > r, but also the case q = r.
Lemma 1. 
If q ≥ r, then rank OL(F, H1) < np.
Proof. 
Consider that if q ≥ r, then p = q + 1 and Ar+1 = … = Ap = 0.
On the one hand, the first n(p − r) columns of the n(p − r) × np matrix
(Ht (HF)t (HF2)t … (HFp−r−1)t)t
form a lower triangular matrix with ones on the diagonal.
Therefore, its rank is exactly n(p − r). As a result,
rank (H1t (H1F)t (H1F2)t … (H1Fp−r−1)t)t = n1(p − r).        (2)
On the other hand, considering the nr × np matrix ((HFp−r)t … (HFp−1)t)t, it is easy to see that the submatrix formed by its last nr columns is lower triangular with ones on the diagonal, and therefore,
rank ((HFp−r)t … (HFp−1)t)t = nr.        (3)
Moreover, if i ≥ p, HFi = A1HFi−1 + … + ArHFi−r and, from (3), rank ((HFp−r)t … (HFL−1)t)t = nr.
As a consequence:
rank ((H1Fp−r)t … (H1FL−1)t)t ≤ nr if L > p.        (4)
From (2) and (4), rank OL(F, H1) ≤ n1(pr) + nr = n1pn1r + (n1 + n2)r = n1p + n2r < np and Lemma 1 has been proven. □
Note that if we do not have the hypothesis from Lemma 1, i.e., if q < r, then p = r and we do not have (2). Therefore, rank OL(F, H1) could be equal to np.
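Lemma 1 is easy to confirm numerically. The sketch below (ours; the A1 values are hypothetical) builds F for a toy VARMA(1, 1), so q = r = 1, p = 2 and A2 = 0, and checks the rank bound of OL(F, H1) for n1 = 1:

```python
import numpy as np

n, n1, p, r = 2, 1, 2, 1
A1 = np.array([[0.5, 0.25], [0.1, 0.3]])   # hypothetical, stationary AR block
F = np.block([[A1, np.eye(n)],
              [np.zeros((n, n)), np.zeros((n, n))]])   # A2 = 0 since q = r
H1 = np.array([[1.0, 0.0, 0.0, 0.0]])      # first n1 rows of H
L = 12
OL = np.vstack([H1 @ np.linalg.matrix_power(F, k) for k in range(L)])
# Lemma 1 bound: rank O_L(F, H1) <= n1*p + n2*r = 3 < np = 4
```

Making L larger does not help: by the Cayley–Hamilton theorem the row space of OL stops growing, exactly as the proof of Lemma 1 shows.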

3. Reconstructing Missing Blocks in Autocovariance Matrices

Our goal in this work is to extend the set of models that can be identified with Yule–Walker methods in the MFD case. We provide new conditions to identify the VARMA model in the MFD case from an original form of rewriting necessary and sufficient conditions in the SFD case. These conditions are expressed based on the parameters of the model. We will demonstrate with two counterexamples that the sufficient conditions in [1] are not necessary to identify MFD models. Thus, we consider one of the questions opened in [1] to be resolved.
In our proposal, we treat the cases r > q and r = q separately, giving new conditions to replace the corresponding ones in [1]. In addition, the fourth and fifth conditions are different in each of the two cases.

3.1. Case I: r > q

In this case, our main result will be Theorem 1. We first introduce the necessary notation, the new conditions and some preliminary results (Hanzon’s Theorem and Lemma 2).
Let us denote the following matrices:
$$G^* = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ B_0 \\ \vdots \\ B_q \end{pmatrix} \text{ and } X = \begin{pmatrix} C_q \\ C_{q-1} \\ \vdots \\ C_{q-r+1} \end{pmatrix} \text{ are } nr \times n, \quad F^* = \begin{pmatrix} A_1 & A_2 & \cdots & A_r \\ I_n & 0 & \cdots & 0 \\ \vdots & \ddots & & \vdots \\ 0 & \cdots & I_n & 0 \end{pmatrix} \text{ is } nr \times nr,$$

$$\theta = \begin{pmatrix} H_1F^* \\ \vdots \\ H_1(F^*)^{nr} \\ H_2(F^*)^{kN-q} \\ H_2(F^*)^{kN-q}(F^*)^{N} \\ \vdots \\ H_2(F^*)^{kN-q}(F^*)^{(nr-1)N} \end{pmatrix} \text{ is } n^2r \times nr, \quad J = \begin{pmatrix} C_{q+1}^{ff} & C_{q+1}^{fs} \\ \vdots & \vdots \\ C_{q+nr}^{ff} & C_{q+nr}^{fs} \\ C_{kN}^{sf} & C_{kN}^{ss} \\ C_{(k+1)N}^{sf} & C_{(k+1)N}^{ss} \\ \vdots & \vdots \\ C_{(k+nr-1)N}^{sf} & C_{(k+nr-1)N}^{ss} \end{pmatrix} \text{ is } n^2r \times n,$$

where k is an integer such that kN > q.
θb is the submatrix formed with columns of θ, such that the ith column of θ is in θb if i ∉ {1, …, n1, n + 1, …, n + n1, 2n + 1, …, 2n + n1, …, (r − 1)n + 1, …, (r − 1)n + n1} and also the ith row of X is not a row of Cj or C−j with j ∈ {0, N, 2N, 3N, …}.
θa is the submatrix of θ with the columns that are not in θb.
Below we set out the new conditions. Note that although [1] refers to a state space model associated with the VARMA model, we only need to note the algebraic properties that hold in certain matrices constructed from the parameters of the VARMA model.
Condition ii: The VARMA model is regular with B0 = I and Σ positive definite.
Condition iv.1: rank Cnr(F, G*) = nr.
Condition v.1: rank Cnr(F, V*(Ft)r−q−1H1t) = nr, where
V* = ∑k=0∞ FkG*Σ(G*)t(Ft)k = [Cnp(F, G*Σ1/2)…][Cnp(F, G*Σ1/2)…]t, which exists because the model is stationary.
Condition vi: θb is full column rank.
We will also make use of [11] (Theorem 3.1.3.2–1 (iii), Theorem 3.1.2.3–29 (iii) and Corollary 2.4.3–25), which we summarize in the following theorem that we call Hanzon’s Theorem.
Hanzon’s Theorem: Considering K−i = 0 if i > 0, for i ∈ ℤ and j, h ∈ ℕ, we denote
$$M_{i,j,h} = \begin{pmatrix} K_i & K_{i+1} & \cdots & K_{i+h-1} \\ K_{i+1} & K_{i+2} & \cdots & K_{i+h} \\ \vdots & \vdots & & \vdots \\ K_{i+j-1} & K_{i+j} & \cdots & K_{i+j+h-2} \end{pmatrix} \text{ and } Q_{i,j,h} = \begin{pmatrix} C_i & C_{i+1} & \cdots & C_{i+h-1} \\ C_{i+1} & C_{i+2} & \cdots & C_{i+h} \\ \vdots & \vdots & & \vdots \\ C_{i+j-1} & C_{i+j} & \cdots & C_{i+j+h-2} \end{pmatrix}.$$
If model (1) is stationary, regular and miniphase, the following conditions are satisfied for any orders r and q:
  • rank Mq−r+1,∞,∞ = rank Mq−r+1,r,nr
  • rank Qq−r+1,r,∞ = rank Qq−r+1,r,nr
  • rank Mq−r+1,r,nr = rank Qq−r+1,r,nr
As a consequence, we can deduce the following Lemma.
Lemma 2. 
Suppose r > q and that Conditions I, II or ii, and III hold. Then Condition IV implies Condition iv.1.
Proof. 
We can easily see that
Ki = HFiG for i ≥ 0
and M0,r,nr = Or(F, H) Cnr(F, G). Taking into account that Or(F, H) is full column rank nr and, from Condition IV, Cnr(F, G) is full row rank nr, then rank M0,r,nr = nr.
By Hanzon’s Theorem, rank M0,r,nr = rank M0,r,∞ = nr and rank Mq−r+1,r,nr = rank Mq−r+1,r,∞. Since all the columns of M0,r,∞ are columns of the matrix Mq−r+1,r,∞, rank Mq−r+1,r,∞ ≥ rank M0,r,∞. Since the matrix has nr rows, rank Mq−r+1,r,∞ = nr. By Hanzon’s Theorem, rank Mq−r+1,r,nr = rank Mq−r+1,r,∞ = nr.
We can easily see that
Kj = HFr−q−1+jG* for j ≥ q − r + 1, (note that q − r + 1 ≤ 0)
and Mq−r+1,r,nr = Or(F, H) Cnr(F, G*). Considering that Mq−r+1,r,nr is full rank and Or(F, H) is full column rank nr, then Cnr(F, G*) is full row rank nr, i.e., Condition iv.1 holds. □
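The factorization M0,r,nr = Or(F, H) Cnr(F, G) at the heart of this proof can be checked numerically. The following sketch (ours, with hypothetical VARMA(2, 1) parameter values) builds the block Hankel matrix of impulse responses and verifies the factorization:

```python
import numpy as np

def hankel_M(K, i, j, h, n):
    """M_{i,j,h}: block (a, b) holds K_{i+a+b}, with K_m = 0 for m < 0."""
    Z = np.zeros((n, n))
    return np.block([[K[i + a + b] if i + a + b >= 0 else Z
                      for b in range(h)] for a in range(j)])

n, r, p = 2, 2, 2                      # hypothetical VARMA(2, 1): p = r = 2
A1 = np.array([[0.3, 0.1], [0.0, 0.2]])
A2 = np.array([[0.1, 0.0], [0.05, 0.1]])
B0, B1 = np.eye(n), np.array([[0.4, 0.0], [0.1, 0.2]])
F = np.block([[A1, np.eye(n)], [A2, np.zeros((n, n))]])
G = np.vstack([B0, B1])
H = np.hstack([np.eye(n), np.zeros((n, n))])
K = {m: H @ np.linalg.matrix_power(F, m) @ G for m in range(r + n * r)}
M = hankel_M(K, 0, r, n * r, n)                 # M_{0,r,nr}
Or = np.vstack([H @ np.linalg.matrix_power(F, a) for a in range(r)])
Cnr = np.hstack([np.linalg.matrix_power(F, b) @ G for b in range(n * r)])
```

Block (a, b) of Or Cnr equals HFa Fb G = Ka+b, which is exactly the (a, b) block of M0,r,nr.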
In light of the above, we can now state the following Theorem.
Theorem 1. 
If r > q and Conditions I, II or ii, and III hold:
(a) 
Condition iv.1 is necessary and sufficient for identifiability of the VARMA(r, q) model (1) in the SFD case.
(b) 
Conditions iv.1, v.1 and vi are sufficient for identifiability of the VARMA(r, q) model (1) in the MFD case.
Proof. 
For j ≥ 0, Cq−r+1+j = ∑i=0∞ Ki+q−r+1+jΣKit = ∑i=0∞ HFi+jG*Σ(G*)t(Ft)i+r−q−1Ht = HFj (∑i=0∞ FiG*Σ(G*)t(Ft)i) (Ft)r−q−1Ht.
Therefore, denoting V* = ∑i=0∞ FiG*Σ(G*)t(Ft)i,
Cq−r+1+j = HFjV*(Ft)r−q−1Ht for j ≥ 0.
Taking into account that Mqr+1,r,nr = Or(F, H) Cnr(F, G*), Or(F, H) is full column rank nr and, from Condition iv.1, we have that Cnr(F, G*) is full row rank nr, then rank Mq−r+1,r,nr = nr.
From Hanzon’s Theorem, rank Mq−r+1,r,nr = rank Qq−r+1,r,nr = nr, and therefore, under Conditions I, II or ii, III and iv.1, a VARMA model is identified with population covariance of its variables observed with SFD. Note that (A1, A2, …, Ar) can be uniquely identified from the autocovariance matrices of the process, solving the following system:
(Ar ⋯ A1) Qq−r+1,r,nr = (Cq+1 Cq+2 … Cq+nr).        (5)
Taking into account that Qq−r+1,r,nr = Or(F, H) Cnr(F, V*(Ft)r−q−1Ht), rank Qq−r+1,r,nr = nr and Or(F, H) is full column rank nr, then
Cnr(F, V*(Ft)r−q−1Ht) is full row rank nr.
If we substitute H1t for the second Ht in the autocovariances,
C̃q−r+1+j = HFjV*(Ft)r−q−1H1t for j ≥ 0.
Denoting
$$\tilde{Q}_{q-r+1,r,L} = \begin{pmatrix} \tilde{C}_{q-r+1} & \tilde{C}_{q-r+2} & \cdots & \tilde{C}_{q-r+L} \\ \tilde{C}_{q-r+2} & \tilde{C}_{q-r+3} & \cdots & \tilde{C}_{q-r+L+1} \\ \vdots & \vdots & & \vdots \\ \tilde{C}_{q} & \tilde{C}_{q+1} & \cdots & \tilde{C}_{q+L-1} \end{pmatrix}$$
we have
Q̃q−r+1,r,L = Or(F, H) CL(F, V*(Ft)r−q−1H1t).
Keeping the Cayley–Hamilton theorem in mind, rank CL(F, V*(Ft)r−q−1H1t) does not change when L > nr.
Condition v.1 implies rank Q̃q−r+1,r,nr = nr, and therefore (A1, A2, …, Ar) can be uniquely identified from available autocovariance matrices of the process in the MFD case, solving the system (Ar ⋯ A1) Q̃q−r+1,r,nr = (C̃q+1 C̃q+2 … C̃q+nr) as follows:
(Ar ⋯ A1) = (C̃q+1 C̃q+2 … C̃q+nr) Q̃q−r+1,r,nrt (Q̃q−r+1,r,nr Q̃q−r+1,r,nrt)−1.
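This step is an ordinary normal-equations computation. A minimal sketch (ours, with synthetic full-row-rank data standing in for Q̃q−r+1,r,L and the C̃ row) shows that the formula recovers the AR coefficient row exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, L = 2, 2, 8                          # toy sizes (hypothetical)
A_row = rng.normal(size=(n, n * r))        # stands in for (A_r ... A_1)
Q = rng.normal(size=(n * r, L))            # stands in for Q~_{q-r+1,r,L}, full row rank
C_row = A_row @ Q                          # stands in for (C~_{q+1} ... C~_{q+L})
# (A_r ... A_1) = C_row Q^t (Q Q^t)^{-1}
A_hat = C_row @ Q.T @ np.linalg.inv(Q @ Q.T)
```

Full row rank of Q (the content of Condition v.1) is exactly what makes QQt invertible and the solution unique.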
The last part of the proof of this theorem aims to show that, once (A1, A2, ..., Ar) are calculated, if Condition vi holds, we can uniquely reconstruct the unknown blocks of the autocovariance matrices.
Note that H(F*)iX = Cq+i for i > 0, i.e., H1(F*)iX = (Cq+iff Cq+ifs) and H2(F*)iX = (Cq+isf Cq+iss). Therefore, J = θX and, in particular, J2 = θX2, where J2 and X2 are the submatrices formed by the last n2 columns of J and X, respectively.
Note that, keeping the Cayley–Hamilton theorem in mind, θ has been defined such that its rank does not change when some rows of H(F*)i are added to θ, for some i > nr. Moreover, J has been defined only with available autocovariance blocks. If we rearrange the rows of X2 such that θX2 = (θa θb)(X2at X2bt)t, we make it such that X2b is the submatrix containing the unknown blocks. To calculate X2b, we solve the following system of linear equations:
J2 − θaX2a = θbX2b,        (6)
and by Condition vi, (6) has as its unique solution
X2b = (θbtθb)−1θbt(J2 − θaX2a).
As a consequence, we have Ciss for i = q − r + 1, …, q, and we can calculate Ciss for i > q, considering that Ci = A1Ci−1 + … + ArCi−r for i > q.
Therefore, Theorem 1 has been proven. □
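The final reconstruction step is again a linear least-squares solve. A sketch with synthetic matrices (ours; θa, θb, X2a and X2b are random stand-ins, with θb of full column rank as required by Condition vi):

```python
import numpy as np

rng = np.random.default_rng(1)
rows, ca, cb, n2 = 12, 4, 3, 2             # hypothetical sizes
theta_a = rng.normal(size=(rows, ca))
theta_b = rng.normal(size=(rows, cb))      # full column rank (Condition vi)
X2a = rng.normal(size=(ca, n2))            # known covariance rows
X2b_true = rng.normal(size=(cb, n2))       # unknown ss blocks, stacked
J2 = theta_a @ X2a + theta_b @ X2b_true    # available blocks: J2 = theta X2
# unique solution of J2 - theta_a X2a = theta_b X2b:
X2b = np.linalg.inv(theta_b.T @ theta_b) @ theta_b.T @ (J2 - theta_a @ X2a)
```

When θb loses column rank, the system is underdetermined and the missing ss blocks cannot be pinned down, which is why Condition vi is needed.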
The following Corollary 1 is a consequence of section (a) of the previous Theorem and of the Theorem in [9].
Corollary 1. 
Suppose r > q and that Conditions I, ii and III hold. In this case: rank(Ar⁝Bq) = n and (a(z), b(z)) is left coprime iff rank Cnr(F, G*) = nr.
Note that, unlike [1], we can consider rank Ar < n or rank Bq < n.

3.2. Case 2: r = q

In this section, our main result will be Theorem 2.
For this case, neither G nor G* allow us to state sufficient conditions similar to those in Theorem 1. Therefore, we consider the following nr × n matrix:
$$G^{**} = \begin{pmatrix} A_1 + B_1 \\ \vdots \\ A_r + B_r \end{pmatrix},$$
and, considering p = r, we state the following conditions, which change with respect to Case 1.
Condition iv.2: rank Cnr(F, G**) = nr.
Condition v.2: rank Cnr(F, (G**ΣB̃0t + FV**H1t)) = nr, where B̃0t denotes the first n1 columns of B0t and V** = ∑i=0∞ FiG**Σ(G**)t(Ft)i, which exists because the model is stationary.
We are in a position to state the following theorem.
Theorem 2. 
If r = q and Conditions I, II or ii, and III hold:
(a) 
Condition iv.2 is necessary and sufficient for identifiability of the VARMA(r, r) model (1) in the SFD case.
(b) 
Conditions iv.2, v.2 and vi are sufficient for identifiability of the VARMA(r, r) model (1) in the MFD case.
Proof. 
The proof of Theorem 2 is similar to that of Theorem 1, except for some specific details, because with G**, we have that
Kj+1 = HFjG** if j ≥ 0,
Cj = HFj−1G**ΣB0t + HFj (∑i=0∞ FiG**Σ(G**)t(Ft)i) Ht = HFj−1(G**ΣB0t + FV**Ht) for j ≥ 1, with V** = ∑i=0∞ FiG**Σ(G**)t(Ft)i.
Taking into account that
M1,r,nr = Or(F, H) Cnr(F, G**) and Q1,r,nr = Or(F, H) Cnr(F, (G**ΣB0t + FV**Ht)),
Conditions I, II or ii, III and iv.2 imply the full rank nr of the matrices M1,r,nr, Q1,r,nr and Cnr(F, (G**ΣB0t + FV**Ht)).
If in the autocovariances we substitute H1t for Ht and B̃0t for B0t:
C̃j = HFj−1(G**ΣB̃0t + FV**H1t) for j ≥ 1,
$$\tilde{Q}_{1,r,nr} = \begin{pmatrix} \tilde{C}_1 & \tilde{C}_2 & \cdots & \tilde{C}_{nr} \\ \tilde{C}_2 & \tilde{C}_3 & \cdots & \tilde{C}_{nr+1} \\ \vdots & \vdots & & \vdots \\ \tilde{C}_r & \tilde{C}_{r+1} & \cdots & \tilde{C}_{nr+r-1} \end{pmatrix}, \text{ and then}$$
Q̃1,r,nr = Or(F, H) Cnr(F, (G**ΣB̃0t + FV**H1t)).
Condition v.2 implies rank Q̃1,r,nr = nr, and therefore (A1, A2, …, Ar) can be uniquely identified from the autocovariance matrices of the process, solving (5).
The last part of the proof of this theorem is identical to that of Theorem 1. Therefore, Theorem 2 has been proven. □
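The key identity behind Theorem 2, Kj+1 = HFjG**, can be checked numerically. A sketch for a toy VARMA(1, 1) with B0 = I (our hypothetical values; for r = p = 1 the companion F collapses to A1):

```python
import numpy as np

n = 2
A1 = np.array([[0.3, 0.1], [0.2, 0.1]])   # hypothetical, stationary
B1 = np.array([[0.5, 0.0], [0.1, 0.4]])
F, H = A1, np.eye(n)                      # r = p = 1
Gss = A1 + B1                             # G** = (A_1 + B_1)
# impulse responses of the VARMA(1,1): K_0 = I, K_1 = A1 + B1, K_j = A1 K_{j-1}
K = [np.eye(n), A1 + B1]
for _ in range(4):
    K.append(A1 @ K[-1])
```

Each Kj+1 coincides with H Fj G**, which is why G** can replace G* when r = q.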
As a consequence of section (a) of the previous Theorem and of the Theorem in [9], we give the following Corollary 2.
Corollary 2. 
Suppose r = q and that Conditions I, ii and III hold. In this case, rank(Ar⁝Br) = n and (a(z), b(z)) is left coprime iff rank Cnr(F, G**) = nr.
Note that the results in this section consider certain blocks available in the autocovariance matrices that [1] ignores. In particular, if r > q + 1, we use Ciff and Cisf for i = q − r + 1, …, 1 in Q̃q−r+1,r,L solving (5), and Cifs for i = q + 1, …, q + nr and i ∈ {kN, (k + 1)N, …, (k + nr − 1)N} in X2a solving (6).

4. Counterexamples

In Counterexample 1, the conditions in Theorem 1 hold, and thus the VARMA model is identified with MFD. However, Condition IV in [1] does not hold. Therefore, it is not necessary for identifiability in the SFD case. We remark that rank Ar < n, but rank(Ar⁝Bq) = n.
Counterexample 1. 
Consider the VARMA(3, 1) model with A0 = B0 = I,
$$A_1 = \begin{pmatrix} 0 & 1/2 \\ 1/2 & 0 \end{pmatrix}, \quad A_2 = \begin{pmatrix} 1/4 & 0 \\ 0 & 1/4 \end{pmatrix}, \quad A_3 = \begin{pmatrix} 1/2 & 1/4 \\ 1/4 & 1/8 \end{pmatrix}, \quad B_1 = \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & 0 \end{pmatrix}$$
and E(εtεtt) = In.
We have q = 1, r = 3, nr = 6 and we consider n1 = 1.
The model is identifiable in the SFD case because
$$\operatorname{rank} M_{q-r+1,r,nr} = \operatorname{rank} \begin{pmatrix} K_{-1} & K_0 & K_1 & \cdots & K_4 \\ K_0 & K_1 & K_2 & \cdots & K_5 \\ K_1 & K_2 & K_3 & \cdots & K_6 \end{pmatrix} = 6.$$
Note that Conditions I, ii, III, iv.1 hold.
However, Condition IV is not satisfied because rank [G FG … Fnp−1G] = 5 ≠ 6 and, as a consequence, rank CL(F, VH1t) < np and Condition V does not hold. Therefore, A1, A2 and A3 could not be uniquely calculated using the procedure in [1].
We prove that Condition v.1 holds (with MATLAB, see Appendix A) as follows:
(i)
First, we computed Ci (i = 0, 1, 2, 3) by solving the Yule–Walker equations:
C0 − A1C−1 − A2C−2 − A3C−3 = I + B1K1t
C1 − A1C0 − A2C−1 − A3C−2 = B1
C2 − A1C1 − A2C0 − A3C−1 = 0
C3 − A1C2 − A2C1 − A3C0 = 0
where K1 = B1 + A1. We then computed Ci = A1Ci−1 + A2Ci−2 + A3Ci−3 for i ≥ 4.
(ii)
Second, we obtained that rank Q̃q−r+1,r,nr = 6.
(iii)
Taking into account that Q̃q−r+1,r,L = Or(F, H) CL(F, V*(Ft)r−q−1H1t), Or(F, H) is full column rank nr, CL(F, V*(Ft)r−q−1H1t) has nr rows and rank Q̃q−r+1,r,nr = 6 = nr, we can affirm that rank CL(F, V*(Ft)r−q−1H1t) = nr = 6; i.e., Condition v.1 holds.
Therefore, A1, A2 and A3 can be uniquely determined by solving (5).
Regarding Condition vi, in this example, θ has 6 columns, where θb is the submatrix with the 2nd and 6th columns of θ and rank θb = 2, i.e., Condition vi holds.
Taking into account that X = (C1t C0t C−1t)t, from (6), we can identify C−122 and C122. As a consequence, C0 and C1 are complete. Finally, the unknown Ci22 for i > 1 can be identified considering that Ci = A1Ci−1 + A2Ci−2 + A3Ci−3 for i > 1. Therefore, this model is identified in the MFD case.
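The algebra behind this counterexample can be probed directly. The NumPy sketch below (ours; the parameter values are those printed above) verifies the shift identity FG* = G, the factorization Mq−r+1,r,nr = Or(F, H) Cnr(F, G*), and the rank deficiency of Cnp(F, G) that makes Condition IV fail: A3 is singular, so (F, G) carries an uncontrollable zero eigenvalue.

```python
import numpy as np

n, r, q = 2, 3, 1                          # Counterexample 1: p = r = 3, np = 6
A1 = np.array([[0.0, 0.5], [0.5, 0.0]])
A2 = np.array([[0.25, 0.0], [0.0, 0.25]])
A3 = np.array([[0.5, 0.25], [0.25, 0.125]])
B0, B1 = np.eye(2), np.array([[0.5, 0.5], [0.5, 0.0]])
I2, Z = np.eye(2), np.zeros((2, 2))
F = np.block([[A1, I2, Z], [A2, Z, I2], [A3, Z, Z]])
G = np.vstack([B0, B1, Z])                 # G  = (B0; B1; B2) with B2 = 0
Gs = np.vstack([Z, B0, B1])                # G* = (0; B0; Bq): r - q - 1 = 1 zero block
H = np.hstack([I2, Z, Z])
Kj = lambda m: H @ np.linalg.matrix_power(F, m) @ G if m >= 0 else Z
M = np.block([[Kj(q - r + 1 + a + b) for b in range(n * r)] for a in range(r)])
Or = np.vstack([H @ np.linalg.matrix_power(F, a) for a in range(r)])
Cnr_Gs = np.hstack([np.linalg.matrix_power(F, b) @ Gs for b in range(n * r)])
Cnp_G = np.hstack([np.linalg.matrix_power(F, b) @ G for b in range(n * r)])
```

A left vector of the form (0, 0, w3) with w3A3 = 0 annihilates every column of Cnp(F, G), which is the structural reason the XYW procedure of [1] breaks down here while Theorem 1 still applies.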
In the following example, Condition V does not hold because r = q (Lemma 1). However, the conditions in Theorem 2 hold and therefore the VARMA model is identified with MFD.
Counterexample 2. 
Consider the VARMA(1, 1) model where A0 = B0 = I, $A_1 = \begin{pmatrix} 1/2 & 1/4 \\ 1 & 1/2 \end{pmatrix}$, $B_1 = \begin{pmatrix} 1 & 4 \\ 1/4 & 1 \end{pmatrix}$ and E(εtεtt) = In.
We have r = q = 1, nr = 2 and we consider n1 = 1. The autocovariance matrices are
$$C_0 = \begin{pmatrix} 4753/256 & 1025/128 \\ 1025/128 & 949/64 \end{pmatrix}, \quad C_1 = \begin{pmatrix} 201/32 & 275/64 \\ 229/16 & 51/32 \end{pmatrix}, \quad C_2 = \begin{pmatrix} 7/16 & 7/4 \\ 7/8 & 7/2 \end{pmatrix}, \quad C_i = 0$$
for i > 2.
Note that Conditions I, ii, III and iv.2 hold.
Regarding Condition v.2, note that
Q̃1,r,nr = Or(F, H) Cnr(F, (G**ΣB̃0t + FV**H1t)) = (C̃1 C̃2) = $\begin{pmatrix} 201/32 & 7/16 \\ 229/16 & 7/8 \end{pmatrix}$. Due to the fact that rank (C̃1 C̃2) = nr = 2, Or(F, H) has nr columns and Cnr(F, (G**ΣB̃0t + FV**H1t)) has nr rows, then rank Cnr(F, (G**ΣB̃0t + FV**H1t)) = nr = 2; i.e., Condition v.2 holds.
Therefore, A1 is uniquely determined by solving (5).
Regarding Condition vi, taking into account that HF* = A1 is a submatrix of θ and θb is the second column of θ, then rank θb = 1 and Condition vi holds. Taking into account that X = C1, from (6), we can identify C122. Since C0 and C1 are complete, the unknown Ci22 for i > 1 can be identified considering that Ci = A1Ci−1 for i > 1. Therefore, this model is identified in the MFD case.

5. Conclusions

In this work, we have helped to expand the set of VARMA models identified by extended Yule–Walker methods. This paper provides new necessary and sufficient conditions in the single-frequency data case, and sufficient conditions in the mixed-frequency data case. The main results are embodied in two theorems, two corollaries and two counterexamples. The two counterexamples allow us to affirm that there are identifiable models for which the sufficient conditions for identifiability in [1] do not hold.

Author Contributions

Conceptualization, C.P.-G.; methodology, C.P.-G. and M.C.G.-F.; software, C.P.-G. and C.G.-C.; validation, C.G.-C. and M.C.G.-F.; formal analysis, C.P.-G. and C.G.-C.; writing—original draft, C.P.-G. and M.C.G.-F.; writing—review & editing, C.G.-C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Ministerio de Ciencia, Innovación y Universidades/Fondo Europeo de Desarrollo Regional, grant number PID2019-104928RB-I00 (MICINN/FEDER, UE).

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

We greatly appreciate the comments of P. Zadrozny, which have contributed significantly to improving our early results.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

% Numerical evidence: Condition v.1 holds in Counterexample 1
% Software: MATLAB R2022b. Free version in https://es.mathworks.com/products/matlab.html (accessed on 5 December 2023)
% We write the following linear system, AX = B, to calculate C0, C1, C2 and C3
A = [1 0 0 0 0 0.50 0 0 0.25 0 0 0 0.50 0.25 0 0
0 1 0 0 0 0 0 0.50 0 0 0.25 0 0 0 0.50 0.25
0 0 1 0 0.50 0 0 0 0 0.25 0 0 0.25 0.1250 0 0
0 0 0 1 0 0 0.50 0 0 0 0 0.25 0 0 0.25 0.125
0 0 0.50 0 1.25 0 0 0 0.50 0.25 0 0 0 0 0 0
0 0 0 0.50 0 1 0.25 0 0 0 0.50 0.25 0 0 0 0
0.50 0 0 0 0 0.25 1 0 0.25 0.125 0 0 0 0 0 0
0 0.50 0 0 0 0 0 1.25 0 0 0.25 0.125 0 0 0 0
0.25 0 0 0 0.50 0.25 0.50 0 1 0 0 0 0 0 0 0
0 0.25 0 0 0 0 0.50 0.75 0 1 0 0 0 0 0 0
0 0 0.25 0 0.75 0.125 0 0 0 0 1 0 0 0 0 0
0 0 0 0.25 0 0.50 0.25 0.125 0 0 0 1 0 0 0 0
0.50 0 0.25 0 0.25 0 0 0 0 0 0.50 0 1 0 0 0
0 0.50 0 0.25 0 0.25 0 0 0 0 0 0.50 0 1 0 0
0.25 0 0.125 0 0 0 0.25 0 0.50 0 0 0 0 0 1 0
0 0.25 0 0.125 0 0 0 0.25 0 0.50 0 0 0 0 0 1]
B = [1.25 0 0.25 1 0.5 0.5 0.5 0 0 0 0 0 0 0 0 0]
X = inv(A)*B'
C0 = [X(1) X(2);X(3) X(4)]
C1 = [X(5) X(6);X(7) X(8)]
C2 = [X(9) X(10);X(11) X(12)]
C3 = [X(13) X(14);X(15) X(16)]
% Calculations to obtain the autocovariance C4, C5 and C6
A3 = [1/2 1/4
1/4 1/8]
A2 = [1/4 0
0 1/4]
A1 = [0 1/2
1/2 0]
C4 = −A3*C1−A2*C2−A1*C3
C5 = −A3*C2−A2*C3−A1*C4
C6 = −A3*C3−A2*C4−A1*C5
% Rank of Qs
Qs = [C1' C0 C1 C2 C3 C4
C0 C1 C2 C3 C4 C5
C1 C2 C3 C4 C5 C6 ]
RangoQs = rank(Qs)
singularvaluesQs = svd(Qs)
% Considering only the odd columns in Qs
QsOdd = [Qs(:, 1) Qs(:, 3) Qs(:, 5) Qs(:, 7) Qs(:, 9) Qs(:, 11)]
RangoQsOdd = rank(QsOdd)
singularvaluesQsOdd = svd(QsOdd)
% Considering only the even columns in Qs
QsEven = [Qs(:, 2) Qs(:, 4) Qs(:, 6) Qs(:, 8) Qs(:, 10) Qs(:, 12)]
RangoQsEven = rank(QsEven)
singularvaluesQsEven = svd(QsEven)
%%%%%%
A =
1.0000 0 0 0 0 0.5000 0 0 0.2500 0 0 0 0.5000 0.2500 0 0
0 1.0000 0 0 0 0 0 0.5000 0 0 0.2500 0 0 0 0.5000 0.2500
0 0 1.0000 0 0.5000 0 0 0 0 0.2500 0 0 0.2500 0.1250 0 0
0 0 0 1.0000 0 0 0.5000 0 0 0 0 0.2500 0 0 0.2500 0.1250
0 0 0.5000 0 1.2500 0 0 0 0.5000 0.2500 0 0 0 0 0 0
0 0 0 0.5000 0 1.0000 0.2500 0 0 0 0.5000 0.2500 0 0 0 0
0.5000 0 0 0 0 0.2500 1.0000 0 0.2500 0.1250 0 0 0 0 0 0
0 0.5000 0 0 0 0 0 1.2500 0 0 0.2500 0.1250 0 0 0 0
0.2500 0 0 0 0.5000 0.2500 0.5000 0 1.0000 0 0 0 0 0 0 0
0 0.2500 0 0 0 0 0.5000 0.7500 0 1.0000 0 0 0 0 0 0
0 0 0.2500 0 0.7500 0.1250 0 0 0 0 1.0000 0 0 0 0 0
0 0 0 0.2500 0 0.5000 0.2500 0.1250 0 0 0 1.0000 0 0 0 0
0.5000 0 0.2500 0 0.2500 0 0 0 0 0 0.5000 0 1.0000 0 0 0
0 0.5000 0 0.2500 0 0.2500 0 0 0 0 0 0.5000 0 1.0000 0 0
0.2500 0 0.1250 0 0 0 0.2500 0 0.5000 0 0 0 0 0 1.0000 0
0 0.2500 0 0.1250 0 0 0 0.2500 0 0.5000 0 0 0 0 0 1.0000
B =
1.2500 0 0.2500 1.0000 0.5000 0.5000 0.5000 0 0 0 0 0 0 0 0 0
X =
1.7457
0.1876
0.1876
1.2901
0.5771
0.2847
−0.2865
0.0682
−0.6529
0.0452
−0.5153
−0.4018
−0.8064
−0.2866
−0.0618
−0.2478
C0 =
1.7457 0.1876
0.1876 1.2901
C1 =
0.5771 0.2847
−0.2865 0.0682
C2 =
−0.6529 0.0452
−0.5153 −0.4018
C3 =
−0.8064 −0.2866
−0.0618 −0.2478
A3 =
0.5000 0.2500
0.2500 0.1250
A2 =
0.2500 0
0 0.2500
A1 =
0 0.5000
0.5000 0
C4 =
−0.0228 −0.0468
0.4236 0.1640
C5 =
0.4451 0.0675
0.2545 0.1243
C6 =
0.2971 0.1548
−0.1191 0.0279
Qs =
0.5771 −0.2865 1.7457 0.1876 0.5771 0.2847 −0.6529 0.0452 −0.8064 −0.2866 −0.0228 −0.0468
0.2847 0.0682 0.1876 1.2901 −0.2865 0.0682 −0.5153 −0.4018 −0.0618 −0.2478 0.4236 0.1640
1.7457 0.1876 0.5771 0.2847 −0.6529 0.0452 −0.8064 −0.2866 −0.0228 −0.0468 0.4451 0.0675
0.1876 1.2901 −0.2865 0.0682 −0.5153 −0.4018 −0.0618 −0.2478 0.4236 0.1640 0.2545 0.1243
0.5771 0.2847 −0.6529 0.0452 −0.8064 −0.2866 −0.0228 −0.0468 0.4451 0.0675 0.2971 0.1548
−0.2865 0.0682 −0.5153 −0.4018 −0.0618 −0.2478 0.4236 0.1640 0.2545 0.1243 −0.1191 0.0279
RangoQs =
6
singularvaluesQs =
2.9541
2.5268
1.2708
1.0259
0.2446
0.0917
QsOdd =
0.5771 1.7457 0.5771 −0.6529 −0.8064 −0.0228
0.2847 0.1876 −0.2865 −0.5153 −0.0618 0.4236
1.7457 0.5771 −0.6529 −0.8064 −0.0228 0.4451
0.1876 −0.2865 −0.5153 −0.0618 0.4236 0.2545
0.5771 −0.6529 −0.8064 −0.0228 0.4451 0.2971
−0.2865 −0.5153 −0.0618 0.4236 0.2545 −0.1191
RangoQsOdd =
6
singularvaluesQsOdd =
2.7937
2.2169
0.5019
0.2229
0.0897
0.0383
QsEven =
−0.2865 0.1876 0.2847 0.0452 −0.2866 −0.0468
0.0682 1.2901 0.0682 −0.4018 −0.2478 0.1640
0.1876 0.2847 0.0452 −0.2866 −0.0468 0.0675
1.2901 0.0682 −0.4018 −0.2478 0.1640 0.1243
0.2847 0.0452 −0.2866 −0.0468 0.0675 0.1548
0.0682 −0.4018 −0.2478 0.1640 0.1243 0.0279
RangoQsEven =
6
singularvaluesQsEven =
1.5738
1.4737
0.3418
0.1908
0.1172
0.0195

References

  1. Zadrozny, P.A. Extended Yule-Walker identification of VARMA models with single-or mixed-frequency data. J. Econom. 2016, 193, 438–446. [Google Scholar] [CrossRef]
  2. Anderson, B.D.; Deistler, M.; Felsenstein, E.; Koelbl, L. The structure of multivariate AR and ARMA systems: Regular and singular systems; the single and the mixed frequency case. J. Econom. 2016, 192, 366–373. [Google Scholar] [CrossRef]
  3. Anderson, B.D.; Deistler, M.; Felsenstein, E.; Funovits, B.; Koelbl, L.; Zamani, M. Multivariate AR systems and mixed frequency data: G-identifiability and estimation. Econom. Theory 2016, 32, 793–826. [Google Scholar] [CrossRef]
  4. Boularouk, Y.; Djeddour, K. New approximation for ARMA parameters estimate. Math. Comput. Simul. 2015, 118, 116–122. [Google Scholar] [CrossRef]
  5. Chen, B.; Zadrozny, P.A. An extended Yule-Walker method for estimating a vector autoregressive model with mixed-frequency data. Adv. Econom. 1998, 13, 47–73. [Google Scholar]
  6. Deistler, M.; Koelbl, L.; Anderson, B.D. Non-identifiability of VMA and VARMA systems in the mixed frequency case. Econom. Stat. 2017, 4, 31–38. [Google Scholar] [CrossRef]
  7. Funovits, B.; Braumann, A. Identifiability of Structural Singular Vector Autoregressive Models. arXiv 2019, arXiv:1910.04096. [Google Scholar]
  8. Koelbl, L.; Deistler, M. A new approach for estimating VAR systems in the mixed-frequency case. Stat. Pap. 2018, 61, 1203–1212. [Google Scholar] [CrossRef]
  9. Hannan, E.J. The identification of vector mixed autoregressive-moving average systems. Biometrika 1969, 56, 223–225. [Google Scholar]
  10. Hannan, E.J.; Deistler, M. The Statistical Theory of Linear Systems; Society for Industrial and Applied Mathematics (SIAM): Philadelphia, PA, USA, 2012. [Google Scholar]
  11. Hanzon, B. Identifiability, Recursive Identification and Spaces of Linear Dynamical Systems; Centrum Wiskunde & Informatica (CWI): Amsterdam, The Netherlands, 1989; Tracts 63, 64; ISBN 9061963710. [Google Scholar]
  12. Hong-Zhi, Z.; Zhao-Guo, C.; Hannan, E.J. A Note on ARMA Estimation. J. Time Ser. Anal. 1983, 4, 9–17. [Google Scholar] [CrossRef]
  13. Yin, H.; Zhifang, Z.; Ding, F. Model order determination using the Hankel matrix of Impulse responses. Appl. Math. Lett. 2011, 24, 797–802. [Google Scholar] [CrossRef]

