Article

Causal Vector Autoregression Enhanced with Covariance and Order Selection

1 Department of Stochastics, Budapest University of Technology and Economics, 1111 Budapest, Hungary
2 Department of Computer Science, University of Southern California, Los Angeles, CA 90007, USA
3 Committee on Computational and Applied Mathematics, University of Chicago, Chicago, IL 60637, USA
4 Department of Statistics, Yale University, New Haven, CT 06520, USA
5 UFR Sciences and Techniques, Nantes University, 44035 Nantes, France
6 Lindner College of Business, University of Cincinnati, Cincinnati, OH 45221, USA
7 Data Science and Analytics Institute, University of Oklahoma, Norman, OK 73019, USA
8 Department of Statistics, Mathematics, and Insurance, Faculty of Commerce, Assiut University, Assiut Governorate 71515, Egypt
* Author to whom correspondence should be addressed.
Econometrics 2023, 11(1), 7; https://doi.org/10.3390/econometrics11010007
Submission received: 22 August 2022 / Revised: 17 February 2023 / Accepted: 20 February 2023 / Published: 24 February 2023
(This article belongs to the Special Issue High-Dimensional Time Series in Macroeconomics and Finance)

Abstract

A causal vector autoregressive (CVAR) model is introduced for weakly stationary multivariate processes, combining a recursive directed graphical model for the contemporaneous components and a vector autoregressive model longitudinally. Block Cholesky decomposition with varying block sizes is used to solve the model equations and estimate the path coefficients along a directed acyclic graph (DAG). If the DAG is decomposable, i.e., the zeros form a reducible zero pattern (RZP) in its adjacency matrix, then covariance selection is applied that assigns zeros to the corresponding path coefficients. Real-life applications are also considered, where, for the optimal order p ≥ 1 of the fitted CVAR(p) model, order selection is performed with various information criteria.

1. Introduction

The purpose of the present paper is to connect graphical modeling tools and time series models via path coefficient estimation. In statistics, path analysis was established by the geneticist Wright (1934) about a century ago, but he used complicated entrywise calculations with partial correlations. Taking these partial correlations of a pair of variables in a multidimensional data set, conditioned on another set of variables, makes things overly complicated, as the conditioning set changes from step to step. A bit later, in econometrics, structural equation modeling (SEM) was developed; its prominent author Haavelmo (1943) later received the Nobel Prize for it. The maximum likelihood estimation (MLE) of the parameters in the Gaussian case was elaborated by Joreskog (1977). At the same time, Wold (1985), the inventor of partial least squares regression (PLS), used matrix calculations, and Kiiveri et al. (1984) already used block matrix decompositions when dividing their variables into endogenous and exogenous ones. However, none of these authors consistently applied block LDL (a variant of the Cholesky) decomposition algorithms alone, without resorting to partial correlations. Furthermore, they did not consider time series.
Here, we give a rigorous block matrix approach to these problems, which originated in statistics and time series analysis. Furthermore, we enhance the usual and structural vector autoregressive (VAR and SVAR) models, discussed, e.g., in Deistler and Scherrer (2019) and Deistler and Scherrer (2022), with a causal component that acts between the coordinates contemporaneously. Therefore, we call our causal vector autoregressive model CVAR. Joint effects between the contemporaneous components are also considered in the SVAR models of Keating (1996); Lütkepohl (2005); and Kilian and Lütkepohl (2017), but only a recursive ordering of the variables is used there, and no specific structure of the underlying directed graph is investigated. Though in Wold (1960) a causal chain model is introduced with an exogenous and a lagged endogenous variable, Gaussian Markov processes and usual regression estimates are used in the context of econometric problems. This research is also inspired by the paper of Wermuth (1980), where a recursive ordering of the variables is crucial, without using any time component.
Eichler (2006) introduces causality as a fundamental tool for the empirical investigation of dynamic interactions in multivariate time series. He also discusses the differences between structural and Granger causality. The former appears in SVAR models (see Geweke (1984)), whereas the latter first appears in Wiener (1956), then in Granger (1969), and is sometimes called Wiener–Granger causality. Without causality between the contemporaneous components, our model in the Gaussian case also resembles that of Eichler (2012), where the error term (shock) can have correlated components. Our higher order recursive VAR model can be transformed into a model like this, but the price is losing the recursive structure. The VAR model of Brillinger (1996) has a structure similar to ours with uncorrelated error terms, but no further benefits of recursive models, such as the RZP induced by the underlying DAG, are discussed. Sims (1980) investigates the use of different types of structural equations and autoregressive models in macroeconomics, without suggesting numerical algorithms. However, historically, this survey paper was among the first to point out the differences between the then-existing macroeconomic models and to distinguish endogenous and exogenous variables. The method of the most recent paper, Bazinas and Nielsen (2022), is based on the reduced form system and is constructed from the conditional distribution of two endogenous variables, given a catalyst or multiple catalysts; lagged effects are assessed without having a longer time series, and stationarity is not assumed.
Throughout the paper, second order processes are considered that can be assumed to asymptotically follow a multivariate Gaussian distribution. In Section 2, the different types of VAR models are compared, and a novel CVAR model is introduced, combining a recursive graphical model contemporaneously and a VAR(p) model longitudinally. In Section 3, the models are described in detail, together with algorithms for the parameter estimation. In Section 3.1, the unrestricted CVAR(p) model is introduced, while in Section 3.2, the restricted cases are treated, with some prescribed zeros in the path coefficients. The relation to covariance selection and decomposability is discussed too. In Section 4, applications to real life data are presented together with information criteria for order selection (optimal choice of p). The results and estimation schemes are summarized in Section 5; finally, in Section 6, conclusions and further perspectives are presented. The proofs of the theorems and the detailed description of the algorithms are to be found in Appendix A, and the pseudocodes in Appendix B. To illustrate the CVAR model and related algorithms, supporting Python files and notebooks are provided, together with some additional tables and figures. These are included in the Supplementary Material.

2. Materials and Methods

First, the different-purpose VAR(p) models for the d-dimensional, weakly stationary process {X_t} are listed and compared. The first two models are known in the literature, whereas the last two are our contributions, for which block matrix decomposition based algorithms are introduced in Section 3 and illustrated in Section 4 on real life data.
  • Reduced form VAR(p) model: for a given integer p ≥ 1, it is
    X_t + M_1 X_{t-1} + \cdots + M_p X_{t-p} = V_t, \quad t = p+1, p+2, \ldots,   (1)
    where V_t is white noise: it is uncorrelated with X_{t-1}, ..., X_{t-p}, it has zero expectation and covariance matrix Σ (not necessarily diagonal, but positive definite), and the matrices M_j satisfy the stability conditions (see Deistler and Scherrer (2019)). (Sometimes, X_t is isolated on the left-hand side.) V_t is called the innovation, i.e., the error term of the (added value to the) best one-step ahead linear prediction of X_t with its past, which (in the case of a VAR(p) model) can be carried out with the p-lag long past X_{t-1}, ..., X_{t-p}.
    Here, the ordering of the components of X_t does not matter: if it is changed (with some permutation of {1, ..., d}), then clearly the rows and columns of the matrices M_j, as well as the rows and columns of Σ, are permuted accordingly.
  • Structural form SVAR(p) model: for a given integer p ≥ 1, it is
    A X_t + B_1 X_{t-1} + \cdots + B_p X_{t-p} = U_t, \quad t = p+1, p+2, \ldots,   (2)
    where the white noise term U_t is uncorrelated with X_{t-1}, ..., X_{t-p}, and it has zero expectation with uncorrelated components, i.e., with positive definite, diagonal covariance matrix Δ. A is a d × d upper triangular matrix with 1s along its main diagonal, whereas B_1, ..., B_p are d × d matrices; see also Lütkepohl (2005). The components of U_t are called structural shocks; they are mutually uncorrelated and assigned to the individual variables.
    Here, the ordering of the components of X_t does matter: if it is changed (with some permutation of {1, ..., d}), then the matrices A, B_j, and Δ cannot be obtained in a simple way; they profoundly change under the given permutation.
    However, there is a one-to-one correspondence between the reduced and structural models; since A is invertible, Equation (1) can be obtained from Equation (2) (and vice versa):
    X_t + A^{-1} B_1 X_{t-1} + \cdots + A^{-1} B_p X_{t-p} = A^{-1} U_t, \quad t = p+1, p+2, \ldots,
    where M_j = A^{-1} B_j, V_t = A^{-1} U_t, and Σ = A^{-1} Δ (A^{-1})^T; further, |Σ| = |Δ| as |A| = 1.
  • Causal CVAR(p) unrestricted model: it also obeys Equation (2), but here the ordering of the components follows a causal ordering, given, e.g., by an expert's knowledge. This is a recursive ordering along a "complete" DAG, where the permutation (labeling) of the graph nodes (assigned to the components of X_t) is such that X_{t,i} can be caused by X_{t,j} whenever i < j, which corresponds to a j → i directed edge. Here, the causal effects are meant contemporaneously and are reflected by the upper triangular structure of the matrix A.
    It is important that, in any ordering of the jointly Gaussian variables, a Bayesian network (in other words, a Gaussian directed graphical model) can be constructed, in which every node (variable) is regressed linearly on the variables corresponding to higher label nodes. The partial regression coefficients behave like path coefficients, as also used in SEM. If the DAG is complete, then there are no zero constraints imposed on the partial regression coefficients. Here, building the DAG just aims at finding a sensible ordering of the variables.
  • Causal CVAR(p) restricted model: here, an incomplete DAG is built, based on partial correlations.
    First, we build an undirected graph: do not connect i and j if the partial correlation coefficient of X_i and X_j, eliminating the effect of the other variables, is 0 (theoretically), or less than a threshold (practically). Such an undirected graphical model is called a Markov random field (MRF). It is known (see Rao (1973) and Lauritzen (2004)) that partial correlations can be calculated from the concentration matrix (inverse of the covariance matrix). However, here the upper left block of the inverse of the large block matrix, containing the first p autocovariance matrices, is used. If this undirected graph is triangulated, then, in a convenient (so-called perfect) ordering of the nodes, the zeros of the adjacency matrix form an RZP. We can find such a (not necessarily unique) ordering of the nodes with the maximal cardinality search (MCS) algorithm, together with the cliques and separators of a so-called junction tree (JT); see Lauritzen (2004), Koller and Friedman (2009), and Bolla et al. (2019). In this ordering (labeling) of the nodes, a DAG can also be constructed which is Markov equivalent to the undirected one (it has no so-called sink V configuration); for further details, see Section 3.2.
    Having an RZP in the restricted CVAR model, we use the incomplete DAG for estimation. With the covariance selection method of Dempster (1972), the starting concentration matrix is re-estimated by imposing zero constraints on its entries in the RZP positions (symmetrically). By the theory (see, e.g., Bolla et al. (2019)), this results in zero entries of A in the no-edge positions.
Note that the unrestricted CVAR model can use an incomplete DAG as well, where the labeling of its nodes follows the perfect labeling of the undirected graph; still, the parameter matrices A and B_j are "full" in the sense that no zeros of A are guaranteed in the no-edge positions of the graph. Their entries are just considered as path coefficients of the contemporaneous and lagged effects, respectively. On the contrary, in the restricted CVAR model, action is taken to introduce zero entries in A in the no-edge positions. If the desired zeros form an RZP, the covariance selection has a closed form (see Lauritzen (2004)). In the lack of an RZP, the covariance selection still works, but it needs an infinite (convergent) iteration, called iterative proportional scaling (IPS); see Lauritzen (2004) and Bolla et al. (2019). Another possibility is to moralize the DAG (connect parents that are not connected and thus eliminate the sink Vs) and work with the so-obtained undirected graph.
Then, in both the unrestricted and restricted CVAR(p) models, an order selection is initiated to choose the optimal p, based on information criteria such as AIC, BIC, AICC, and HQ, where only the number of parameters differs in the two cases. Actually, in the restricted case, the product-moments are calculated only within the cliques, and since the separators are subsets of them, we can reduce the computational complexity of our algorithm; the reduction is substantial when the number of nodes is "large".
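To make the one-to-one correspondence between the structural (causal) and reduced forms concrete, the following minimal numpy sketch converts the parameter matrices A, B_1, ..., B_p, Δ of Equation (2) into M_1, ..., M_p and Σ of Equation (1) via M_j = A^{-1}B_j and Σ = A^{-1}ΔA^{-T}. The function name and interface are ours, for illustration only; they are not part of the paper's supplementary code.

```python
import numpy as np

def structural_to_reduced(A, Bs, Delta):
    """Convert the structural/causal parameters (A, [B_1,...,B_p], Delta)
    into the reduced form (M_1,...,M_p, Sigma):
    M_j = A^{-1} B_j  and  Sigma = A^{-1} Delta A^{-T}."""
    A_inv = np.linalg.inv(A)
    Ms = [A_inv @ B for B in Bs]
    Sigma = A_inv @ Delta @ A_inv.T
    return Ms, Sigma
```

Since |A| = 1, the determinants |Σ| and |Δ| coincide, which can serve as a quick numerical check of such a conversion.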

3. Results

3.1. The Unrestricted Causal VAR(p) Model

The directed Gaussian graphical model of Wermuth (1980) does not consider time development; it is, in fact, a CVAR(0) model. In addition, note that at this point, the ordering of the jointly Gaussian variables is not relevant, since in any recursive ordering of them (encoded in A ), a Gaussian directed graphical model (in other words, a Gaussian Bayesian network) can be constructed, where every variable is regressed linearly with the higher label ones.
To illustrate the p > 0 case, we first introduce the unrestricted CVAR(1) model. This has a special interest, as it can be used for longitudinal data spanning a short time interval, or adapted to the situation when the components of X_{t-1} represent the exogenous and those of X_t the endogenous variables.
Let {X_t} be a d-dimensional, weakly stationary process with real valued components of zero expectation and covariance matrix function C(h), h = 0, ±1, ±2, ...; C(−h) = C^T(h). All deterministic and random vectors are column vectors, and so C(h) = E(X_t X_{t+h}^T) does not depend on t, by weak stationarity. The CVAR(1) model equation is
A X_t + B X_{t-1} = U_t, \quad t = 1, 2, \ldots,   (3)
where A is a d × d upper triangular matrix with 1s along its main diagonal, B is a d × d matrix, and the white noise random vector U_t is uncorrelated with (in the Gaussian case, independent of) X_{t-1}, has zero expectation, and has covariance matrix Δ = diag(δ_1, ..., δ_d).
Let C_2 denote the covariance matrix of the stacked random vector (X_t^T, X_{t-1}^T)^T which, in block matrix form, is as follows:
C_2 = \begin{pmatrix} C(0) & C^T(1) \\ C(1) & C(0) \end{pmatrix}.   (4)
It is symmetric and positive definite if the process is of full rank regular (which means that its spectral density matrix is of full rank, see Bolla and Szabados (2021)) that is assumed in the sequel. It is well known that the inverse of C 2 , the so-called concentration matrix K , has the block-matrix form
K = \begin{pmatrix} C^{-1}(1|0) & -C^{-1}(1|0)\, C^T(1)\, C^{-1}(0) \\ -C^{-1}(0)\, C(1)\, C^{-1}(1|0) & C^{-1}(0) + C^{-1}(0)\, C(1)\, C^{-1}(1|0)\, C^T(1)\, C^{-1}(0) \end{pmatrix},
where C(1|0) = C(0) − C^T(1) C^{-1}(0) C(1) is the conditional covariance matrix C(t | t−1) of the distribution of X_t, given X_{t-1}; by weak stationarity, it does not depend on t either; therefore, it is denoted by C(1|0). In addition, C_2 is positive definite if and only if both C(0) and C(1|0) are positive definite.
Observe that C(1|0) = A^{-1} Δ (A^{-1})^T is the covariance matrix of the innovation V_t = A^{-1} U_t. Therefore, the left upper block of K contains its inverse, which is A^T Δ^{-1} A.
Theorem 1.
The parameter matrices A, B, and Δ of model Equation (3) can be obtained by the block LDL decomposition of the (positive definite) concentration matrix K (inverse of the covariance matrix C_2 in Equation (4)) of the 2d-dimensional Gaussian random vector (X_t^T, X_{t-1}^T)^T. If K = L D L^T is this (unique) decomposition with block triangular matrix L and block diagonal matrix D, then they have the form
L = \begin{pmatrix} A^T & O_{d \times d} \\ B^T & I_{d \times d} \end{pmatrix}, \qquad D = \begin{pmatrix} Δ^{-1} & O_{d \times d} \\ O_{d \times d} & C^{-1}(0) \end{pmatrix},
where the d × d upper triangular matrix A with 1s along its main diagonal, the d × d matrix B, and the diagonal matrix Δ of model Equation (3) can be retrieved from them.
The proof of this theorem together with the detailed description of the algorithm is to be found in Appendix A.1 and Appendix A.2 of Appendix A.
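As an illustration of Theorem 1, the following Python sketch recovers A, B, and Δ from the concentration matrix K = C_2^{-1}: a scalar Cholesky factorization of the upper left block K_{11} = A^T Δ^{-1} A yields A and Δ, and the lower left block K_{21} = B^T Δ^{-1} A then yields B. The function name and interface are ours; this is a hedged sketch, not the authors' supplementary code.

```python
import numpy as np

def cvar1_parameters(K, d):
    """Recover A, B, Delta of the CVAR(1) model from the 2d x 2d concentration
    matrix K = C_2^{-1} (assumed positive definite), as in Theorem 1."""
    K11 = K[:d, :d]                     # equals A^T Delta^{-1} A
    K21 = K[d:, :d]                     # equals B^T Delta^{-1} A
    R = np.linalg.cholesky(K11).T       # upper triangular, R = Delta^{-1/2} A
    sqrt_delta_inv = np.diag(R)         # diagonal entries of Delta^{-1/2}
    A = R / sqrt_delta_inv[:, None]     # unit upper triangular
    Delta = np.diag(1.0 / sqrt_delta_inv**2)
    B = (K21 @ np.linalg.inv(A) @ Delta).T   # from K21 = B^T Delta^{-1} A
    return A, B, Delta
```

In practice, K is obtained by inverting the block matrix Ĉ_2 built from the sample autocovariances Ĉ(0) and Ĉ(1).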
The above model naturally generalizes to the following recursive CVAR(p) model: for a given integer p ≥ 1,
A X_t + B_1 X_{t-1} + \cdots + B_p X_{t-p} = U_t, \quad t = p+1, p+2, \ldots,   (6)
where the white noise term U_t is uncorrelated with X_{t-1}, ..., X_{t-p}; it has zero expectation and covariance matrix Δ = diag(δ_1, ..., δ_d). A is a d × d upper triangular matrix with 1s along its main diagonal, whereas B_1, ..., B_p are d × d matrices.
Here, we have to perform the block Cholesky decomposition of the inverse covariance matrix of (X_t^T, X_{t-1}^T, ..., X_{t-p}^T)^T, i.e., the inverse of the matrix
C_{p+1} = \begin{pmatrix} C(0) & C^T(1) & C^T(2) & \cdots & C^T(p) \\ C(1) & C(0) & C^T(1) & \cdots & C^T(p-1) \\ C(2) & C(1) & C(0) & \cdots & C^T(p-2) \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ C(p) & C(p-1) & C(p-2) & \cdots & C(0) \end{pmatrix}.   (7)
This is a symmetric, positive definite block Toeplitz matrix with (p + 1) × (p + 1) blocks, each of which is a d × d matrix. Again, C^T(h) = C(−h), and it is well known that the inverse matrix C_{p+1}^{-1} has the following block matrix form:
  • Upper left block: C^{-1}(p | 0, ..., p−1);
  • Upper right block: −C^{-1}(p | 0, ..., p−1) C^T(1, ..., p) C_p^{-1};
  • Lower left block: −C_p^{-1} C(1, ..., p) C^{-1}(p | 0, ..., p−1);
  • Lower right block: C_p^{-1} + C_p^{-1} C(1, ..., p) C^{-1}(p | 0, ..., p−1) C^T(1, ..., p) C_p^{-1},
where C(p | 0, ..., p−1) = C(0) − C^T(1, ..., p) C_p^{-1} C(1, ..., p) is the conditional covariance matrix C(t | t−1, ..., t−p) of the distribution of X_t given X_{t-1}, ..., X_{t-p}; due to stationarity, it does not depend on t either; therefore, it is denoted by C(p | 0, ..., p−1). Furthermore, C^T(1, ..., p) = (C^T(1), ..., C^T(p)) is a d × pd matrix and C(1, ..., p) is a pd × d matrix. In addition, C_{p+1} is positive definite if and only if both C_p and C(p | 0, ..., p−1) are positive definite.
Theorem 2.
The parameter matrices A, B_1, ..., B_p, and Δ of model Equation (6) can be obtained by the block LDL decomposition of the (positive definite) concentration matrix K (inverse of the covariance matrix C_{p+1} in Equation (7)) of the (p + 1)d-dimensional Gaussian random vector (X_t^T, X_{t-1}^T, ..., X_{t-p}^T)^T. If K = L D L^T is this (unique) decomposition with block triangular matrix L and block diagonal matrix D, then they have the form
L = \begin{pmatrix} A^T & O_{d \times pd} \\ B^T & I_{pd \times pd} \end{pmatrix}, \qquad D = \begin{pmatrix} Δ^{-1} & O_{d \times pd} \\ O_{pd \times d} & C_p^{-1} \end{pmatrix},
where the d × d upper triangular matrix A with 1s along its main diagonal, the d × pd matrix B = (B_1 ... B_p) (the transpose of B^T, partitioned into blocks), and the diagonal matrix Δ of model Equation (6) can be retrieved from them.
The proof of this theorem together with the detailed description of the algorithm is to be found in Appendix A.3 and Appendix A.4 of Appendix A.
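The same two-block computation covers the general CVAR(p) case, since only the first d rows and columns of K play a special role. A possible end-to-end sketch, under our own naming conventions and reusing the cvar1_parameters() helper from the previous sketch, builds Ĉ_{p+1} from the sample autocovariances, inverts it, and splits B = (B_1 ... B_p):

```python
import numpy as np

def fit_cvar(X, p):
    """Unrestricted CVAR(p) estimation sketch: build the block Toeplitz matrix
    C_{p+1} of Equation (7) from sample autocovariances, invert it, and recover
    A, B_1,...,B_p, Delta as in Theorem 2.  X is an n x d data matrix with rows
    ordered in time; reuses cvar1_parameters() from the earlier sketch."""
    n, d = X.shape
    Xc = X - X.mean(axis=0)
    # sample autocovariances C(h) = E X_t X_{t+h}^T
    C = [Xc[: n - h].T @ Xc[h:] / n for h in range(p + 1)]
    Cp1 = np.zeros(((p + 1) * d, (p + 1) * d))
    for i in range(p + 1):
        for j in range(p + 1):
            blk = C[i - j] if i >= j else C[j - i].T
            Cp1[i * d:(i + 1) * d, j * d:(j + 1) * d] = blk
    K = np.linalg.inv(Cp1)
    A, B, Delta = cvar1_parameters(K, d)   # B is d x pd here
    Bs = np.split(B, p, axis=1)            # B = (B_1 ... B_p)
    return A, Bs, Delta
```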

3.2. The Restricted Causal VAR(p) Model

First, we again consider the p = 1 case. Assume that we have a causal ordering of the coordinates X_1, ..., X_d of X such that X_j can be the cause of X_i whenever i < j. We can think of the X_i's as the nodes of a graph in a directed graphical model (Bayesian network), and their labeling corresponds to a topological ordering of the nodes in the underlying DAG. Thus, i < j can imply a j → i edge, and then we say that X_j is a parent (cause) of X_i. (X_i can have multiple parents, at most d − i of them.) This is the case, for example, when asset prices or relative returns of different assets or currencies (on the same day) influence each other in a certain (recursive) order. Now, restricted cases are analyzed, when only certain arrows (causes) are present, but the DAG is connected. In particular, only certain asset prices influence some others on a DAG contemporaneously, but not all possible directed edges are present. In this case, a covariance selection technique can be initiated to re-estimate the covariance matrix so that the partial regression coefficients in the no-edge positions are zeros.
Our DAG is sometimes given by an expert's knowledge, but usually it is built from an undirected graph, when we also require that the so-constructed DAG be Markov equivalent to its undirected skeleton. Then, the DAG must not contain a sink V configuration. By a sink V configuration, a triplet j → h ← i is understood, where i is not connected to j (h < i < j); see Figure 1.
Here, we include a short description on the relation between directed and undirected graphical models, with emphases on the Gaussian case, based on Lauritzen (2004) and Bolla et al. (2019). Directed and undirected models have many properties in common, and under some conditions, there are important correspondences between them.
Let X ∼ N_d(μ, Σ) be a d-variate non-degenerate Gaussian random vector with expectation μ and positive definite, symmetric d × d covariance matrix Σ. The also positive definite, symmetric matrix Σ^{-1}, with entries σ^{ij}, is called the concentration matrix; it appears in the joint density, and its zero entries indicate conditional independences between two components of X, given the remaining ones. Mostly, the variables are already centered, so μ = 0 is assumed.
Let us form an undirected graph G on the node set V = {1, ..., d}, where V corresponds to the components of X, and the edges are drawn according to the rule
i ∼ j ⟺ σ^{ij} ≠ 0, \quad i ≠ j.   (9)
This is called an undirected Gaussian graphical model, which is a special Markov Random Field (MRF). To establish conditional independence statements, we use the following facts.
Proposition 1.
Let X = (X_1, ..., X_d)^T ∼ N_d(0, Σ) be a random vector, and let V := {1, ..., d} denote the index set of the variables, d ≥ 3. Assume that Σ is positive definite. Then,
r_{X_i X_j | X_{V∖{i,j}}} = \frac{-σ^{ij}}{\sqrt{σ^{ii} σ^{jj}}}, \quad i ≠ j,
where r_{X_i X_j | X_{V∖{i,j}}} denotes the partial correlation coefficient between X_i and X_j after eliminating the effect of the remaining variables X_{V∖{i,j}}. Furthermore,
σ^{ii} = 1 / \mathrm{Var}(X_i | X_{V∖{i}}), \quad i = 1, ..., d,
is the reciprocal of the conditional (residual) variance of X_i, given the other variables X_{V∖{i}}.
Definition 1.
Let X ∼ N_d(0, Σ) be a random vector with Σ positive definite. Consider the regression plane
E(X_i | X_{V∖{i}} = x_{V∖{i}}) = \sum_{j ∈ V∖{i}} β_{ji·V∖{i}} \, x_j,
where the x_j's are the coordinates of x_{V∖{i}}. Then, we call the coefficient β_{ji·V∖{i}} the partial regression coefficient of X_j when regressing X_i on X_{V∖{i}}, j ∈ V∖{i}.
Proposition 2.
β_{ji·V∖{i}} = -\frac{σ^{ij}}{σ^{ii}}, \quad j ∈ V∖{i}.
Corollary 1.
An important consequence of Propositions 1 and 2 is that
β_{ji·V∖{i}} = r_{X_i X_j | X_{V∖{i,j}}} \sqrt{\frac{σ^{jj}}{σ^{ii}}} = r_{X_i X_j | X_{V∖{i,j}}} \sqrt{\frac{\mathrm{Var}(X_i | X_{V∖{i}})}{\mathrm{Var}(X_j | X_{V∖{j}})}}, \quad j ∈ V∖{i}.
(The formula is analogous to that of unconditional regression.) Thus, only those variables X_j whose partial correlation with X_i (after eliminating the effect of the remaining variables) is not 0 enter into the regression of X_i on the other variables.
To form the edges, instead of Equation (9), for i ≠ j, we have to test the following statistical hypothesis and draw an edge if we can reject H_0 at a "small enough" significance level:
H_0 : r_{X_i X_j | X_{V∖{i,j}}} = 0,
i.e., X_i and X_j are conditionally independent, given the remaining variables. Equivalently, H_0 means that β_{ji·V∖{i}} = 0 and β_{ij·V∖{j}} = 0, or simply, σ^{ij} = σ^{ji} = 0 (Σ > 0 is assumed).
To test H_0 in this form, several exact tests are known, usually based on likelihood ratio tests. The following test uses the empirical partial correlation coefficient, denoted by r̂_{X_i X_j | X_{V∖{i,j}}}, and the statistic
B = 1 − (r̂_{X_i X_j | X_{V∖{i,j}}})^2 = \frac{|S_{V∖{i,j}}| \cdot |S_V|}{|S_{V∖{i}}| \cdot |S_{V∖{j}}|},
where S is the sample size (n) times the empirical covariance matrix of the variables in the subscript (its entries are the product-moments).
It can be proven that, under H_0, the test statistic
t = \sqrt{n − d} \cdot \sqrt{\frac{1}{B} − 1} = \sqrt{n − d} \cdot \frac{|r̂_{X_i X_j | X_{V∖{i,j}}}|}{\sqrt{1 − (r̂_{X_i X_j | X_{V∖{i,j}}})^2}}
is distributed as Student's t with n − d degrees of freedom. Therefore, we reject H_0 for large values of |t|, or equivalently, for large values of |r̂_{X_i X_j | X_{V∖{i,j}}}|.
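For building the undirected graph in practice, the partial correlations and the above t-test can be computed directly from the (estimated) concentration matrix. The sketch below is our own illustration; it assumes scipy is available for the Student t distribution, and the variable names are not taken from the paper.

```python
import numpy as np
from scipy import stats

def partial_corr_tests(C0, n):
    """Pairwise partial correlations given all remaining variables, computed
    from the concentration matrix C0^{-1}, with the t-statistics and two-sided
    p-values of the exact test (df = n - d).  Diagonal entries are set to zero
    (they are not used for edge decisions)."""
    K = np.linalg.inv(C0)
    d = K.shape[0]
    dK = np.sqrt(np.diag(K))
    r = -K / np.outer(dK, dK)            # r_{ij | rest} = -k_ij / sqrt(k_ii k_jj)
    np.fill_diagonal(r, 0.0)
    tstat = np.sqrt(n - d) * r / np.sqrt(1.0 - r**2)
    pval = 2.0 * stats.t.sf(np.abs(tstat), df=n - d)
    return r, tstat, pval
```

An edge {i, j} is then drawn whenever the corresponding p-value is below the chosen significance level or, equivalently, whenever |r̂| exceeds a threshold, as done in Section 4.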
In the directed model (Bayesian network), the nodes of the graph G correspond to random variables X 1 , , X d , whereas the directed edges to causal dependences between them. In the case of a DAG G with node-set V = { 1 , , d } , there are no directed cycles, and therefore, there exists a recursive ordering (labeling) of the nodes such that, for every directed edge j i , the relation i < j holds.
Let par(i) ⊆ {i+1, ..., d} denote the set of the parents of i and, for any A ⊆ V, use the notation x_A = {x_i : i ∈ A} and X_A = {X_i : i ∈ A}. To draw the edges, the directed pairwise Markov property is used: for i < j, there is no j → i directed edge whenever X_i and X_j are conditionally independent, given X_{par(i)}. In notation,
X_i ⫫ X_j | X_{par(i)} \quad \text{for } j ∈ {i+1, ..., d} ∖ par(i), \quad i = 1, ..., d−1.
In the case of a non-degenerate Gaussian distribution, by the Hammersley–Clifford theorem, the following undirected factorization property is also equivalent to the undirected pairwise Markov property (9) that defines the graph. It means the factorization of the joint density of the components of X, for any state configuration x = (x_1, ..., x_d), as follows:
f(x) = \frac{1}{Z} \prod_{C ∈ 𝒞} Ψ_C(x_C),
where Z > 0 is a normalizing constant, and the non-negative compatibility functions Ψ_C are assigned to the cliques of G. By a clique, we mean a maximal complete subgraph. (Note that, in graph theory, it is sometimes called a maximal clique.) The above factorization is far from unique, but in special (so-called decomposable) models, the forthcoming Equation (10) gives an explicit formula for the compatibility functions.
In addition, even if the underlying graph is undirected, a decomposable structure of it gives a (not necessarily unique) so-called perfect ordering of the nodes, in which order directed edges can be drawn. Conversely, a decomposable directed graph (with no sink V configurations) can be made undirected by disregarding the orientation of the edges.
Decomposable graphs are of special interest with regard to exact MLE. There are several equivalent characterizations of a decomposable graph, based on Wermuth (1980); Lauritzen (2004); Bolla et al. (2019):
  • G is triangulated (in other words, chordal), i.e., every cycle in G of length at least four has a chord.
  • G has a perfect numbering of its nodes such that, in this labeling, ne(i) ∩ {i+1, ..., d} is a complete subgraph, where ne(i) is the set of neighbors of i, for i = 1, ..., d. This is also called a single node elimination ordering (see Wainwright (2015)), and it is obtainable with the maximal cardinality search (MCS) algorithm of Tarjan and Yannakakis (1984); see also Koller and Friedman (2009).
  • G has the following running intersection property: we can number its cliques to form a so-called perfect sequence C_1, ..., C_k, where each combination of the subgraphs induced by H_{j−1} = C_1 ∪ ... ∪ C_{j−1} and C_j is a decomposition (j = 2, ..., k), i.e., the necessarily complete subgraph S_j = H_{j−1} ∩ C_j is a separator. More precisely, S_j is a node cutset between the disjoint node subsets H_{j−1} ∖ S_j and R_j = C_j ∖ S_j = H_j ∖ H_{j−1}. This sequence of cliques is also called a junction tree (JT).
    Here, any clique C_j is the disjoint union of R_j (called the residual), the nodes of which are not contained in any C_i, i < j, and of S_j (called the separator) with the following property: there is an i* ∈ {1, ..., j−1} such that
    S_j = C_j ∩ \Big( \bigcup_{i=1}^{j−1} C_i \Big) = C_j ∩ C_{i*}.
    This (not necessarily unique) C_{i*} is called a parent clique of C_j. Here, S_1 = ∅ and R_1 = C_1. Furthermore, if such an ordering is possible, a version may be found in which any prescribed set is the first one. Note that the junction tree is indeed a tree with nodes C_1, ..., C_k and with one fewer edges, which are the separators S_2, ..., S_k.
  • There is a labeling of the nodes such that the adjacency matrix contains a reducible zero pattern (RZP). This means that there is an index set I ⊆ {(i, j) : 1 ≤ i < j ≤ d} which is reducible in the sense that, for each (i, j) ∈ I and h = 1, ..., i−1, we have (h, i) ∈ I or (h, j) ∈ I or both.
    Indeed, this convenient labeling is a perfect numbering of the nodes.
  • The following Markov chain property also holds: f(x_{R_j} | x_{C_1 ∪ ... ∪ C_{j−1}}) = f(x_{R_j} | x_{S_j}).
    Therefore, if we have a perfect sequence C_1, ..., C_k of the cliques with separators S_1 = ∅, S_2, ..., S_k, then, for any state configuration x, we have the following factorized form of the density:
    f(x) = \frac{\prod_{j=1}^{k} f(x_{C_j})}{\prod_{j=2}^{k} f(x_{S_j})} = \prod_{j=1}^{k} f(x_{R_j} | x_{S_j}).   (10)
To find the structure in which one of the equivalent criteria of decomposability holds, we can use the MCS method of Tarjan and Yannakakis (1984) and Koller and Friedman (2009). The simple MCS gives label d to an arbitrary node. Then, the nodes are labeled consecutively, from d down to 1, choosing as the next node to label one with a maximum number of previously labeled neighbors, breaking ties arbitrarily. (Note that Lauritzen (2004) labels the nodes in the reverse order.) The MCS ordering is far from unique, and this simple version is not always capable of finding the JT structure behind a triangulated graph in one run; then another run is needed. There are also variants of this algorithm which are applicable to non-triangulated graphs too and are capable of triangulating them by adding a minimum number of edges.
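A minimal implementation of the simple MCS described above may look as follows; the graph is given as an adjacency dictionary, ties are broken arbitrarily, and the returned ordering runs from label d down to 1. This is our own sketch, not the authors' code.

```python
def maximal_cardinality_search(adj):
    """Simple MCS: label an arbitrary node d, then repeatedly label the
    unlabelled node with the most already-labelled neighbours.
    adj: dict mapping each node to the set of its neighbours.
    Returns the nodes in decreasing label order (first element gets label d)."""
    unlabelled = set(adj)
    weight = {v: 0 for v in adj}
    order = []
    while unlabelled:
        v = max(unlabelled, key=lambda u: weight[u])   # ties broken arbitrarily
        order.append(v)
        unlabelled.remove(v)
        for u in adj[v]:
            if u in unlabelled:
                weight[u] += 1
    return order
```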
In the unrestricted model, no restrictions on the upper-diagonal entries of A were made. In practice, we have a sample, all the autocovariance matrices are estimated, and the resulting A, B matrices are calculated from them. Usually, statistical hypothesis testing precedes this procedure, during which it can be found that certain partial correlations (closely related to the entries of K) do not differ significantly from zero. Then, we naturally want to introduce zeros for the corresponding entries of A. For this, the method of covariance selection of Dempster (1972) was elaborated; see also Lauritzen (2004) and Wermuth (1980). First, we give a more general definition of the notion of an RZP.
Definition 2.
Let M be a symmetric or an upper triangular matrix with real entries. We say that M has a reducible zero pattern (RZP) with respect to the index set I ⊆ {(i, j) : 1 ≤ i < j ≤ d} if, for each (i, j) ∈ I and h = 1, ..., i−1, we have (h, i) ∈ I or (h, j) ∈ I or both.
In view of this, we can find a relation between the zeros of A in the CVAR(1) model and those of the inverse covariance matrix.
Proposition 3.
If the upper triangular matrix A of model Equation (3) has an RZP with respect to the index set I, then the upper left d × d block of K = C_2^{-1} has an RZP with respect to I. Conversely, if K has an RZP with respect to the index set I, then it is inherited by A.
Proof. 
In the forward direction, the proof follows from Equation (A4) of Appendix A.2 in Appendix A. Indeed, in the presence of an index set I giving an RZP in A, for 1 ≤ j < i ≤ d: if l_{ij} = a_{ji} = 0, then k_{ij} = 0, since either l_{ih} = a_{hi} = 0 or l_{jh} = a_{hj} = 0 (or both) for h = 1, ..., j−1, and these are the intrinsic entries of the summation in Equation (A4).
In the backward direction, if k_{ij} = 0, then l_{ij} = a_{ji} = 0 too, because of the Markov equivalence of the DAG and its undirected skeleton in the decomposable case. The presence of the RZP guarantees decomposability (see the equivalent characterizations of decomposability above). Furthermore, by the nested structure of the block LDL decomposition, the entry l_{ij} is a partial regression coefficient that is zero at the same time as the corresponding partial correlation coefficient and the entry k_{ij} of K (see Proposition 1 and Corollary 1).    □
Note that, in both directions, the other matrix (K or A) may have additional zeros. Consequently, if we have causal relations between the contemporaneous components of X_t, and the so-constructed DAG has an RZP, then this RZP is inherited by the left upper block of K, which is C^{-1}(1|0). Therefore, we further improve the covariance selection model of Dempster (1972) by introducing zero entries into the sample conditional covariance matrix. Actually, fixing the zero entries in the left upper block of K, we re-estimate the matrix C_2.
In the possession of a sample, there are exact MLEs developed for this purpose, for an i.i.d. sample (see Bolla et al. (2019), Lauritzen (2004)). Note that here we do not have an i.i.d. sample, but a serially correlated sample. However, by ergodicity, for “large” n, this method also works and gives an asymptotic MLE, akin to the product-moment estimates.
For estimation purposes, we use the empirical partial correlation coefficients and, based on them, the above exact test to check whether they differ significantly from 0 or not. In Theorem 5.3 of Lauritzen (2004), it is proved that, based on an i.i.d. sample, under the covariance selection model, the MLE of the mean vector is the sample mean X̄, and the restricted covariance matrix Σ* = (σ*_{ij}) can be estimated as follows. The entries in the edge positions are estimated as in the saturated model (no restrictions):
σ̂*_{ij} = \frac{1}{n} s_{ij}, \quad {i, j} ∈ E,
where S = (s_{ij}) = \sum_{ℓ=1}^{n} (X_ℓ − X̄)(X_ℓ − X̄)^T is the usual product-moment estimate. The other entries (in the no-edge positions) of Σ* are free, but they must satisfy the model conditions: after taking the inverse K of Σ* with these undetermined entries, we obtain the same number of equations for them from k_{ij} = 0 whenever {i, j} ∉ E. To solve these, there are numerical algorithms at our disposal, for instance, the iterative proportional scaling (IPS); see Lauritzen (2004), p. 134. An infinite iteration is needed because, in general, there is no explicit solution for the MLE. However, the fixed point of this iteration gives a unique positive definite matrix K̂.
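For completeness, a sketch of the IPS iteration in the Gaussian case is given below: each sweep adjusts the clique submatrix of K so that the implied marginal covariance on that clique matches the empirical one, while the prescribed zeros outside the cliques are preserved. This follows the standard Gaussian update K_{CC} ← (S̄_{CC})^{-1} + K_{CD} K_{DD}^{-1} K_{DC} with D = V ∖ C; it is our own illustration under that assumption, not the paper's code.

```python
import numpy as np

def ips_covariance_selection(emp_cov, cliques, n_sweeps=100):
    """Iterative proportional scaling for Gaussian covariance selection.
    emp_cov: d x d empirical covariance matrix; cliques: list of index lists.
    Returns an estimate of the restricted concentration matrix K."""
    d = emp_cov.shape[0]
    K = np.eye(d)                                   # any positive definite start
    for _ in range(n_sweeps):
        for C in cliques:
            C = np.asarray(sorted(C))
            D = np.setdiff1d(np.arange(d), C)
            new_block = np.linalg.inv(emp_cov[np.ix_(C, C)])
            if D.size:                              # correction from the rest of K
                K_CD = K[np.ix_(C, D)]
                new_block = new_block + K_CD @ np.linalg.inv(K[np.ix_(D, D)]) @ K_CD.T
            K[np.ix_(C, C)] = new_block
    return K
```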
In the decomposable case, there is no need to run the IPS; an explicit estimate can be given as follows. Recall that, if the Gaussian graphical model is decomposable (its concentration graph G is decomposable), then the cliques, together with their separators (with possible multiplicities), form a JT structure. Denote by 𝒞 the set of cliques and by 𝒮 the set of separators in G. Then, direct density estimates, using (10), are available. In particular, the MLE of K can be calculated from the product-moment estimates applied to subsets of the variables corresponding to the cliques and separators.
Let n be the size of the sample from the underlying d-variate normal distribution, and assume that n > d. For a clique C ∈ 𝒞, let [S_C]^V denote n times the empirical covariance matrix of the variables {X_i : i ∈ C}, complemented with zero entries to form a d × d (symmetric, positive semidefinite) matrix. Likewise, for a separator S ∈ 𝒮, let [S_S]^V denote n times the empirical covariance matrix of the variables {X_i : i ∈ S}, complemented with zero entries to form a d × d (symmetric, positive semidefinite) matrix. Then, the MLE of the mean vector is the sample average (as usual), while that of the concentration matrix is
K̂ = n \Big( \sum_{C ∈ 𝒞} [S_C^{-1}]^V − \sum_{S ∈ 𝒮} [S_S^{-1}]^V \Big);
see Proposition 5.9 of Lauritzen (2004). This proposition states that the above MLE exists with probability one if and only if n is greater than the maximum clique size.
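In the decomposable case, the explicit estimate above translates directly into code: invert the product-moment submatrix of every clique and separator, pad with zeros to full size, and combine them with the signs of the displayed formula. The sketch below uses 0-based column indices and our own naming.

```python
import numpy as np

def decomposable_mle_concentration(S, cliques, separators, n):
    """MLE of the concentration matrix in a decomposable Gaussian graphical
    model: K_hat = n * (sum_C [S_C^{-1}]^V - sum_S [S_S^{-1}]^V),
    where S is the d x d product-moment matrix (n times the empirical
    covariance) and n exceeds the largest clique size."""
    d = S.shape[0]

    def padded_inverse(idx):
        idx = np.asarray(sorted(idx))
        out = np.zeros((d, d))
        out[np.ix_(idx, idx)] = np.linalg.inv(S[np.ix_(idx, idx)])
        return out

    K_hat = sum(padded_inverse(C) for C in cliques)
    K_hat = K_hat - sum(padded_inverse(Ssep) for Ssep in separators)
    return n * K_hat
```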
However, here we use a serially correlated sample as follows. Assume that the cliques of the node set {1, ..., d} of X_t are C_1, ..., C_k, to which a last clique is added, formed by the components X_{t−1,1}, ..., X_{t−1,d} of X_{t−1}. If C_1, ..., C_k form a JT in this ordering, the joint density of X_t and X_{t−1} factorizes as
f(x_t, x_{t−1}) = f(x_{t−1}) \prod_{j=1}^{k} f(x_{t,R_j} | x_{t,S_j}, x_{t−1}).
For covariance selection, we include the lag 1 variables X_{t−1,1}, ..., X_{t−1,d} too. Therefore, the new cliques and separators are
C_j′ := C_j ∪ {X_{t−1,1}, ..., X_{t−1,d}}, \quad j = 1, ..., k,
and
S_j′ := S_j ∪ {X_{t−1,1}, ..., X_{t−1,d}}, \quad j = 2, ..., k.
Having this, we are able to re-estimate the 2d × 2d matrix K, the inverse of C_2 in (4), for our VAR(1) model as follows:
K̂ = (n − 1) \Big( \sum_{j=1}^{k} [S_{C_j′}^{-1}]^{2d} − \sum_{j=2}^{k} [S_{S_j′}^{-1}]^{2d} \Big),   (13)
where the matrix S_{C′} is the product-moment estimate based on the (n − 1)-element serially correlated sample with the following variables:
X_{t,i}, \ i ∈ C, \quad \text{and} \quad X_{t−1,1}, ..., X_{t−1,d}, \quad t = 2, ..., n;
furthermore, [M_{C′}]^{2d} denotes the 2d × 2d matrix comprising the entries of M in the |C′| × |C′| block corresponding to C′, and zeros otherwise. By the properties of the LDL decomposition, these zeros carry over to zeros of A.
In the financial example of Section 4, the cliques and separators of the forthcoming Equation (14) are used. There, the left upper 8 × 8 block of the estimate of K combines the clique and separator contributions as
K̂ = [K̂_{{3,4,5,6,7,8}}] + [K̂_{{2,3,5,6,7}}] + [K̂_{{1,4,5}}] − [K̂_{{3,5,6,7}}] − [K̂_{{4,5}}],
where, for an index set C, [K̂_C] denotes the 8 × 8 matrix whose (i, j) entry is k̂^C_{ij} for i, j ∈ C and zero otherwise, the entries k̂^C_{ij} coming from the inverses of the corresponding product-moment submatrices as in Equation (13).
Restricted cases in the p > 1 scenario can be treated similarly. Here, too, the existence of an RZP in the DAG on d nodes is equivalent to the existence of an RZP in the left upper d × d corner of the concentration matrix C p + 1 1 . From the model equations, it is obvious that
X_{t,i} = −\sum_{j=i+1}^{d} a_{ij} X_{t,j} − \sum_{h=1}^{p} \sum_{j=1}^{d} b_{h,ij} X_{t−h,j} + U_{t,i},
where X_{t,i} is the ith coordinate of X_t. By weak stationarity, it follows that the entries of the matrices A and B_h = (b_{h,ij})_{i,j=1}^{d} are partial regression coefficients as follows:
a_{ij} = −β_{X_{t,i} X_{t,j} · {X_{t,i+1}, ..., X_{t,d}, X_{t−1,1}, ..., X_{t−1,d}, ..., X_{t−p,1}, ..., X_{t−p,d}}}, \quad 1 ≤ i < j ≤ d;
b_{h,ij} = −β_{X_{t,i} X_{t−h,j} · {X_{t,i+1}, ..., X_{t,d}, X_{t−1,1}, ..., X_{t−1,d}, ..., X_{t−p,1}, ..., X_{t−p,d}}}, \quad 1 ≤ i, j ≤ d, \ h = 1, ..., p.
Since the conditioning set changes from equation to equation, it is easier to use the block LDL decompositions here, without spelling out the exact meaning of the coefficients.
Considering the components of X_t, X_{t−1}, ..., X_{t−p} as nodes of the expanded graph, the joint density of X_t, X_{t−1}, ..., X_{t−p} factorizes as
f(x_t, x_{t−1}, ..., x_{t−p}) = f(x_{t−1}, ..., x_{t−p}) \, f(x_t | x_{t−1}, ..., x_{t−p}) = f(x_{t−1}, ..., x_{t−p}) \prod_{i=1}^{d} f(x_{t,i} | x_{t,par(i)}, x_{t−1}, ..., x_{t−p}).
Now, assume that the cliques of the node set {1, ..., d} of X_t are C_1, ..., C_k, and they form a JT with residuals R_1, ..., R_k and separators S_1, ..., S_k (with the understanding that S_1 = ∅ and R_1 = C_1). Combining the preceding density with this, we obtain the following factorization:
f(x_t, x_{t−1}, ..., x_{t−p}) = f(x_{t−1}, ..., x_{t−p}) \prod_{j=1}^{k} f(x_{t,R_j} | x_{t,S_j}, x_{t−1}, ..., x_{t−p}).
Covariance selection can be carried out similarly to the p = 1 case, but here the zero entries of the left upper d × d block of C_{p+1}^{-1} provide the zero entries of A. For this purpose, the (n − p)-element sample is used with the following coordinates:
X_{t,i}, \ i ∈ C, \quad \text{and} \quad X_{t−1,1}, ..., X_{t−1,d}, ..., X_{t−p,1}, ..., X_{t−p,d},
for t = p+1, ..., n, when we calculate the product-moment estimate S_{C′} with C′ = C ∪ {X_{t−1}, ..., X_{t−p}}. For more details, see the explanation after Equation (13) and Section 4.
Again, here the covariance selection is carried out based only on a serially correlated, and not an independent, sample. However, when n is "large", ergodicity considerations (see, e.g., Bolla and Szabados (2021)) justify this relaxation of the original algorithm. In addition, by the theory of Brockwell and Davis (1991) (p. 424), it is guaranteed that the Yule–Walker equations have a stable stationary solution for the VAR(p) model whenever the starting covariance matrix of (p + 1) × (p + 1) blocks is positive definite; we assume this in our theorems. In this case, the empirical versions are also positive definite (almost surely as n → ∞), and the covariance selection also gives a positive definite estimate. Thus, the estimated parameter matrices provide a stable VAR model in view of the standard theory and ergodicity if n is "large".
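Putting the pieces together, a possible sketch of the restricted CVAR(1) estimation on a serially correlated sample is shown below: the lag 1 variables are appended to every clique and separator as in Equation (13), the concentration matrix of the stacked vector (X_t^T, X_{t−1}^T)^T is re-estimated by covariance selection, and the block LDL step of Theorem 1 then produces A (with the prescribed zeros), B, and Δ. It reuses decomposable_mle_concentration() and cvar1_parameters() from the earlier sketches; cliques and separators are given as 0-based index lists, and the empty S_1 is omitted. All naming is ours.

```python
import numpy as np

def fit_restricted_cvar1(X, cliques, separators):
    """Restricted CVAR(1) sketch on an n x d data matrix X (rows in time order)."""
    n, d = X.shape
    Z = np.hstack([X[1:], X[:-1]])        # rows (X_t^T, X_{t-1}^T), t = 2,...,n
    Z = Z - Z.mean(axis=0)
    S = Z.T @ Z                           # product-moment matrix of the n-1 sample
    lag = list(range(d, 2 * d))           # indices of the lag 1 block
    cliques_ext = [sorted(C) + lag for C in cliques]
    separators_ext = [sorted(Ssep) + lag for Ssep in separators]
    K_hat = decomposable_mle_concentration(S, cliques_ext, separators_ext, n - 1)
    return cvar1_parameters(K_hat, d)     # A has zeros in the no-edge positions
```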

4. Applications with Order Selection

4.1. Financial Data

We used the data communicated in the paper Akbilgic et al. (2014) on daily relative returns of eight different asset prices, spanning 534 days. The multivariate time series was found to be stationary and nearly Gaussian.
First, we applied the unrestricted CVAR(p) model. We constructed a DAG by making the undirected graph on eight nodes directed. The undirected graph was constructed by testing statistical hypotheses for the partial correlations of the pairs of the variables conditioned on all the others. As the test statistic is increasing in the absolute value of the partial correlation in question, a threshold 0.04 for the latter one was used that corresponds to significance level α = 0.008851 of the partial correlation test. Table 1 contains the partial correlations based on C 1 ( 0 ) .
Since the graph was triangulated, with the MCS algorithm, we were able to (not necessarily uniquely) label the nodes so that the adjacency matrix of this undirected graph had an RZP:
1: NIK (stock market return index of Japan),
2: EU (MSCI European index),
3: ISE (Istanbul stock exchange national 100 index),
4: EM (MSCI emerging markets index),
5: BVSP (stock market return index of Brazil),
6: DAX (stock market return index of Germany),
7: FTSE (stock market return index of the UK),
8: SP (Standard & Poor's 500 return index).
If this is considered as the topological labeling of the DAG, where directed edges point from a higher label node to a lower label one, then the so-obtained directed graph is Markov equivalent to its undirected skeleton; see Figure 2a,b. However, the RZP is used only in the restricted case; in the unrestricted case, only the DAG ordering of the variables is used.
We ran the VAR(p) algorithm with p = 1, 2, 3, 4, 5 and found that the A matrices do not change much with increasing p, and likewise for B_1. The B_2, ..., B_5 matrices have relatively "small" entries. Consequently, contemporaneous effects and one-day lags are the most important. This is also supported by the order selection investigations below. For the p = 1 and p = 2 cases, see Table 2, Table 3, Table 4, Table 5 and Table 6, respectively. The p = 3, 4, 5 cases are represented by tables in the Supplementary Material.
Then, we considered the restricted CVAR(1) model. Here, we want to introduce structural zeros into the matrix A. Now, the matrix C^{-1}(1|0), the left upper 8 × 8 corner of C_2^{-1}, is used for covariance selection. Figure 2b shows this DAG with the significant path coefficients above the arrows, based on Table 7.
The ordering of the variables is the same as in the unrestricted case, but the RZP is a bit different. The decomposable structure has the following cliques and separators:
C_1 = {BVSP, DAX, EM, FTSE, ISE, SP} = {3, 4, 5, 6, 7, 8},
C_2 = {BVSP, DAX, EU, FTSE, ISE} = {2, 3, 5, 6, 7},
C_3 = {BVSP, EM, NIK} = {1, 4, 5},
S_2 = {BVSP, DAX, FTSE, ISE} = {3, 5, 6, 7},
S_3 = {BVSP, EM} = {4, 5},   (14)
where the parent clique of both C_2 and C_3 is C_1. Note that the node sets in the second braces are the same as in the first, but listed by increasing labels, so that the JT structure is easier to see. The 16 × 16 matrix K̂ is estimated by covariance selection, using the lag 1 variables too; see the corresponding table in the Supplementary Material.
The matrices A and B were estimated via the algorithm for the LDL decomposition of K̂. Here, the zeros of the left upper 8 × 8 block of K̂ necessarily result in zeros of A in the same positions. The upper-diagonal entries of A and the entries of B are considered as path coefficients which represent the contemporaneous and 1-day lagged effects of the assets on the others, respectively; see Table 7 and Table 8.
In the VAR(2) situation, the graph, constructed from C^{-1}(2|1,0), is the same, has the same JT with 3 cliques, and has the same RZP as the one based on C^{-1}(1|0). This is in accordance with our former observation that the effects of the assets on the others at lags of 2 or more days are negligible compared to the 1-day lag effect (the order selection below also supports this).
Here, the 24 × 24 matrix K ^ was estimated by adapting Equation (13) to the 3 d × 3 d situation, by using both the lag 1 and lag 2 variables for covariance selection. This is to be found in the Supplementary Material. The estimated A , B 1 , and B 2 matrices are shown in Table 9, Table 10 and Table 11.
Summarizing, in the p = 1 and p = 2 cases, when we took into consideration the lag 1 and 2 variables, respectively, in the graph building, we obtained the same graph with the same threshold for the partial correlation coefficients as in the CVAR(0) case.
To find the optimal order p, information criteria are suggested; see, e.g., Box et al. (2015); Brockwell and Davis (1991). Here, the following criteria will be used: the AIC (Akaike information criterion), the AICC (bias corrected version of the AIC), the BIC (Bayesian information criterion), and the HQ (Hannan and Quinn's criterion). Each criterion can be decomposed into two terms: an information term that quantifies the information brought by the model (via the likelihood) and a penalization term that penalizes a too "large" number of parameters, in order to avoid over-fitting. It can be proven that the AIC has a positive probability of overspecification, and the BIC is strongly consistent but sometimes underspecifies the true model. The explicit forms of AIC, BIC, and HQ, which are to be minimized with respect to p, are as follows:
AIC(p) = \ln|Δ̂| + \frac{2(p d^2 + d^2)}{n − p} = \sum_{j=1}^{d} \ln δ̂_j + \frac{2(p d^2 + d^2)}{n − p},
BIC(p) = \ln|Δ̂| + \frac{(p d^2 + d^2) \ln(n − p)}{n − p} = \sum_{j=1}^{d} \ln δ̂_j + \frac{(p d^2 + d^2) \ln(n − p)}{n − p},
HQ(p) = \ln|Δ̂| + \frac{2(p d^2 + d^2) \ln(\ln(n − p))}{n − p} = \sum_{j=1}^{d} \ln δ̂_j + \frac{2(p d^2 + d^2) \ln(\ln(n − p))}{n − p},
where Δ ^ is the estimated error covariance matrix Δ .
The AICC (Akaike Information Criterion Corrected) is a bias-corrected version of Akaike’s AIC, which is an estimate of the Kullback–Leibler index of the fitted model relative to the true model and needs further explanation. Here,
AICC(p) = −2 \ln L(Â, B̂_1, ..., B̂_p, Δ̂) + \text{penalty}(p),
where the first term is −2 times the log-likelihood function, evaluated at the parameter estimates of Theorems 1 and 2, whereas the second term penalizes the model complexity. The model parameters A, B_1, ..., B_p, and Δ are estimated by the block Cholesky decomposition of the estimated inverse covariance matrix C_{p+1}^{-1} of the Gaussian random vector (X_t^T, X_{t−1}^T, ..., X_{t−p}^T)^T; see the Algorithms of Appendix A.2 and Appendix A.4. This is a moment estimation, but since our underlying distribution is multivariate Gaussian, which belongs to the exponential family, asymptotically it is also an MLE (for "large" n) that satisfies the moment matching equations; see Wainwright and Jordan (2008). Of course, the matrices Â, B̂_1, ..., B̂_p, and Δ̂ also depend on p, but for simplicity, we do not denote this dependence. More exactly,
L(Â, B̂_1, ..., B̂_p, Δ̂) = (2π)^{−\frac{(n−p)d}{2}} |Δ̂|^{−\frac{n−p}{2}} e^{−\frac{1}{2} \sum_{t=p+1}^{n} U_t^T Δ̂^{−1} U_t} = (2π)^{−\frac{(n−p)d}{2}} \Big( \prod_{j=1}^{d} δ̂_j \Big)^{−\frac{n−p}{2}} e^{−\frac{1}{2} \sum_{t=p+1}^{n} \sum_{j=1}^{d} (U_{tj}^2 / δ̂_j)},
where
U_t = Â (X_t − X̂_t)
and
X̂_t = −Â^{−1} B̂_1 X_{t−1} − \cdots − Â^{−1} B̂_p X_{t−p},
for t = p+1, ..., n.
In the unrestricted model, the complexity term (see Brockwell and Davis (1991)) is
\frac{2(p d^2 + d^2)(n − p) d}{(n − p) d − p d^2 − d^2 − 1}.
Therefore,
AICC(p) = (n − p) d \ln(2π) + (n − p) \ln|Δ̂| + \sum_{t=p+1}^{n} U_t^T Δ̂^{−1} U_t + \frac{2(p d^2 + d^2)(n − p) d}{(n − p) d − p d^2 − d^2 − 1} = (n − p) d \ln(2π) + (n − p) \sum_{j=1}^{d} \ln δ̂_j + \sum_{t=p+1}^{n} \sum_{j=1}^{d} \frac{U_{tj}^2}{δ̂_j} + \frac{2(p d^2 + d^2)(n − p) d}{(n − p) d − p d^2 − d^2 − 1}.
In the restricted model, the penalization term depends on the cardinalities of the cliques C 1 , , C k that are the same for all p. The penalization terms for the four criteria are
\text{penalty}_{AIC}(p) = \frac{2 \big( p d^2 + \sum_{j=1}^{k} |C_j|^2 − \sum_{j=2}^{k} |S_j|^2 \big)}{n − p},
\text{penalty}_{BIC}(p) = \frac{\big( p d^2 + \sum_{j=1}^{k} |C_j|^2 − \sum_{j=2}^{k} |S_j|^2 \big) \ln(n − p)}{n − p},
\text{penalty}_{HQ}(p) = \frac{2 \big( p d^2 + \sum_{j=1}^{k} |C_j|^2 − \sum_{j=2}^{k} |S_j|^2 \big) \ln(\ln(n − p))}{n − p},
\text{penalty}_{AICC}(p) = \frac{2 \big( p d^2 + \sum_{j=1}^{k} |C_j|^2 − \sum_{j=2}^{k} |S_j|^2 \big)(n − p) d}{(n − p) d − p d^2 − \sum_{j=1}^{k} |C_j|^2 − 1}.
The cliques are usually of "small" size, which can reduce the computational complexity, in particular when the number of variables d is much "larger" than the clique sizes. Furthermore, the separators are intersections of cliques, so the number of product-moments calculated within them can be subtracted.
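In the unrestricted case, the AIC, BIC, and HQ formulas above can be evaluated directly from the estimated error variances δ̂_j; a possible sketch, reusing fit_cvar() from the illustration in Section 3.1 and with our own naming, is:

```python
import numpy as np

def order_selection(X, p_max):
    """Evaluate AIC, BIC and HQ of the unrestricted CVAR(p) model for
    p = 1,...,p_max; the optimal order minimizes the chosen criterion."""
    n, d = X.shape
    scores = {}
    for p in range(1, p_max + 1):
        A, Bs, Delta = fit_cvar(X, p)
        log_det = float(np.sum(np.log(np.diag(Delta))))   # ln |Delta_hat|
        k = p * d**2 + d**2                                # number of parameters
        scores[p] = {
            "AIC": log_det + 2 * k / (n - p),
            "BIC": log_det + k * np.log(n - p) / (n - p),
            "HQ": log_det + 2 * k * np.log(np.log(n - p)) / (n - p),
        }
    return scores
```

The restricted versions only replace the parameter count k by pd² + Σ_{j=1}^{k}|C_j|² − Σ_{j=2}^{k}|S_j|², as in the penalty terms above.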
All of these criteria are tested, for both the restricted and unrestricted CVAR ( p ) models, using the financial data above for p = 1 , 2 , , 9 . The results for the unrestricted case are shown in Table 12.
Observe that, in the unrestricted case, the AIC reaches its minimum at p = 2, whereas the AICC, BIC, and HQ reach theirs at p = 1. This is in accordance with our previous experience that the parameter matrices did not change much after the first or second day.
In the restricted case (see Table 13), every criterion except the AIC suggests that the best model is obtained with p = 1; thus, the parameter matrices did not change much after the first day. The AIC, which seems to overspecify the model, was lowest at p = 4, i.e., the last workday after the first workday. In addition, these criteria showed only a minuscule decrease in the restricted case, probably because the clique sizes were not significantly smaller than d.
Componentwise predictions of X t with RMSEs and figures are shown in the Supplementary Material.

4.2. IMR (Infant Mortality Rate) Longitudinal Data

Here, we used the longitudinal data of six indicators (components of X t ), spanning 21 years (1995–2015) from the World Bank in the case of Egypt:
1: IMR (Infant mortality rate),
2: MMR (Maternal mortality ratio),
3: HepB (Hepatitis-B immunization),
4: GDP (Gross domestic product per capita),
5: OPExp (Out-of-pocket health expenditure as % of HExp),
6: HExp (Current health expenditure as % of GDP).
For more details about these indicators, see Abdelkhalek and Bolla (2020). Through the CVAR(p) model, we show the contemporaneous and lagged time effects between the components. Since the sample size is small, we investigate only the CVAR(1) model in the unrestricted and restricted situations. Furthermore, the variables are measured on different scales; thus, we use the autocorrelations which are the autocovariances of the standardized variables. We distinguish between two working hypotheses with respect to two different ordering of the variables given by an expert:
  • Case 1: { IMR ,   MMR ,   HepB ,   OPExp ,   HExp ,   GDP } .
  • Case 2: { IMR ,   MMR ,   HepB ,   GDP ,   OPExp ,   HExp } .
In the unrestricted CVAR(1) model, both orderings work, but we present only Case 1. (The estimated matrices A and B are mostly the same in both cases, but the entries are interchanged with respect to the ordering of the variables.) The entries of matrix A (see Table 14) represent the contemporaneous effects (path coefficients) between the components at time t. The MMR has the largest contemporaneous inverse causal effect on the IMR, i.e., an increase in the MMR caused a decrease in the IMR by 1.13 . Matrix B (see Table 15), on the other hand, indicates the path coefficients of the one time lag causal effect of X t 1 on the current X t components. An increase in the IMR at a one-year time lag caused an increase in the IMR at the current time by 0.29 . All other path coefficients in the matrices A , B can be explained likewise.
In the restricted CVAR(1) model, the graph structure is important. We consider only Case 2 that provides the RZP and corresponds to the ordering of (15). Note that the so-obtained DAG is Markov equivalent to its undirected skeleton. The decomposable structure of the JT has two cliques and only one separator as follows:
C_1 = {IMR, MMR, HepB, GDP, HExp} = {1, 2, 3, 4, 6},  C_2 = {OPExp, HExp} = {5, 6},  S_2 = {HExp} = {6}.
In this case, the (n − 1)-element sample including the lag-1 variables is used to estimate the 12 × 12 matrix $\hat{K}$ with covariance selection; see Equation (13). Then, the LDL algorithm was applied to the so-obtained $\hat{K}$ (shown in the Supplementary Material) to estimate the model parameters A, B. Unlike the unrestricted model, here there are prescribed zero entries in $\hat{K}$ and A. Specifically, the zeros in the left upper 6 × 6 corner of $\hat{K}$ necessarily result in zeros of the estimated matrix A in the same positions. Similarly to the unrestricted situation, the non-zero upper-diagonal entries of A (see Table 16) represent the path coefficients of the contemporaneous causal effects within X_t, while the entries of the matrix B (see Table 17) represent the one time lag causal effects of the X_{t−1} components on the X_t components.
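The closed-form covariance-selection estimate for a decomposable model can be sketched as follows: the zero-padded inverses of the sample covariances over the cliques are summed and those over the separators subtracted (cf. Lauritzen 2004; Dempster 1972). How the contemporaneous cliques are extended with the lag-1 coordinates to form the 12 × 12 matrix follows Equation (13) of the main text and is not reproduced here; the function name and interface below are ours.

```python
import numpy as np

def decomposable_concentration(S, cliques, separators):
    """K_hat = sum of zero-padded inverses of clique sub-covariances minus
    those of the separator sub-covariances; S is the full sample covariance,
    cliques/separators are lists of index lists into its rows and columns."""
    K_hat = np.zeros_like(S)
    for idx in cliques:
        sub = np.ix_(idx, idx)
        K_hat[sub] += np.linalg.inv(S[sub])
    for idx in separators:
        sub = np.ix_(idx, idx)
        K_hat[sub] -= np.linalg.inv(S[sub])
    return K_hat
```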

5. Discussion

The main contribution of our paper is the introduction of causality in VAR models by means of graphical modeling tools. SVAR models are known in the literature, but there the upper triangular matrix A serves rather as an alternative way of solving the Yule–Walker equations than as a causal ordering of the contemporaneous effects.
Our unrestricted CVAR model does this job, where the recursive ordering of the variables follows a DAG ordering in the directed graphical model contemporaneously, and the entries of A are treated like path coefficients of SEM. In addition, the white noise process U t of structural shocks (see Equation (6)) is obtained from the process V t of innovations in the reduced form (see Equation (1)) and has an econometric interpretation. The structural shocks are mutually uncorrelated, and they are assigned to the individual variables. They also represent unanticipated changes in the observed econometric variables. However, they are not just orthogonalized innovations, but here the labeling of the nodes and the graph skeleton behind the matrix A also matters.
In the unrestricted case, the following estimation scheme is used. The DAG is built partly by expert knowledge and partly by starting with an undirected Gaussian graphical model, using known algorithms (e.g., MCS) to find a triangulated graph and a (not necessarily unique) perfect labeling of the nodes, in which ordering the directed and undirected models are Markov equivalent to each other (there are no sink V configurations in the DAG). However, here the Markov equivalence is not important: even if the undirected graph is not triangulated, and the DAG contains sink Vs, the DAG ordering (given, e.g., by an expert) can be used to estimate the A and B matrices, which are full in the sense that no zero constraints for their entries are assumed at the beginning. After having the DAG ordering, we apply the block LDL decomposition for the estimated block matrix C 2 or C p + 1 , and retrieve the estimated parameter matrices by Theorem 1 or Theorem 2.
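As a small illustration of this scheme (not part of the supplementary code; the helper name and data layout are our assumptions), once A and B = (B_1 ⋯ B_p) have been retrieved, the structural shocks can be recovered residual-wise; their sample covariance should be approximately the diagonal matrix Δ:

```python
import numpy as np

def structural_shocks(X, A, B, p):
    """U_t = A X_t + B_1 X_{t-1} + ... + B_p X_{t-p} for t = p+1, ..., n,
    where X is the n x d data matrix with columns in the causal (DAG) order
    and B is the d x pd matrix (B_1 ... B_p)."""
    n, d = X.shape
    U = np.empty((n - p, d))
    for t in range(p, n):
        lagged = np.concatenate([X[t - i] for i in range(1, p + 1)])
        U[t - p] = A @ X[t] + B @ lagged
    return U            # np.cov(U, rowvar=False) should be nearly diagonal
```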
It is in the restricted CVAR model that zero constraints are imposed on the entries of A (in the given DAG ordering). For this purpose, we re-estimate the covariance matrix (the big block matrix, whose size depends on the order p of the model) such that the entries in the left upper block of its inverse are zeros in the no-edge positions. For this, the method of covariance selection is at our disposal, which works for Gaussian variables even if the prescribed zeros in the inverse covariance matrix do not form an RZP (the RZP is just the property of decomposable models). In this case, our algorithm first applies known algorithms (e.g., MCS) to find the JT structure of the graph (which is equivalent to having an RZP). The estimation scheme is enhanced with covariance selection, for which there are closed-form estimates in the decomposable case. Actually, we use an improved version of covariance selection that needs higher order autocovariances too and relaxes the independence of the sample entries, which are only serially correlated; this is supported by ergodicity arguments if n is “large”. Note that, in the lack of an RZP, the covariance selection still works, but it needs the iterative IPS algorithm, which converges only in the limit.
Since the necessary product-moment estimates include only variables belonging to the cliques and separators, and the separators are intersections of the cliques, this fact can reduce the computational complexity of the restricted CVAR model compared to the unrestricted one. The information criteria, applied to select the optimal order p, also take into consideration the number of relevant parameters to be estimated.

6. Conclusions and Further Perspectives

Our algorithm is also applicable to longitudinal data instead of time series. The p = 0 case resolves the problem posed in Wermuth (1980), and the p = 1 case is also applicable to solve a SEM with endogenous and exogenous variables.
As a further perspective, lagged causalities could also be introduced, with some upper triangular matrices B_i. For example, if the previous time observations influence the present time ones, and the order of causalities is the same as that of the contemporaneous ones, then B_1 is also upper triangular. This problem can be solved by running the block Cholesky decomposition with 2d singleton blocks and treating only the other blocks “en block”.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/econometrics11010007/s1. For illustrating the CVAR model and related algorithms, there are supporting Python and notebook files uploaded, together with some additional tables and figures: CVAR.py, CVAR_example.ipynb, CVARtables.pdf, CVARfigures.pdf.

Author Contributions

Conceptualization, M.B. (Marianna Bolla), D.Y.; methodology, M.B. (Marianna Bolla), M.B. (Máté Baranyi), F.A. and V.F.; software, D.Y., H.W., R.M. and V.F.; validation, W.T., C.D.; formal analysis, F.A., V.F.; investigation, D.Y.; resources, M.B. (Máté Baranyi); data curation, M.B. (Máté Baranyi), F.A.; writing—original draft preparation, M.B. (Marianna Bolla); writing—review and editing, M.B. (Marianna Bolla), M.B. (Máté Baranyi); visualization, D.Y., V.F. and F.A.; supervision, M.B. (Marianna Bolla); project administration, M.B. (Marianna Bolla). All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The third-party financial dataset analyzed in the current study is available in the UCI Machine Learning Repository, and was collected by the authors of Akbilgic et al. (2014). The dataset is available in: Dua, D. and Graff, C. (2019). UCI Machine Learning Repository, Irvine, CA: University of California, School of Information and Computer Science, https://archive.ics.uci.edu/ml/datasets/ISTANBUL+STOCK+EXCHANGE (accessed on 1 August 2022). The World Bank Data on infant mortality rates are available on https://data.worldbank.org/indicator (accessed on 1 August 2022); see also Abdelkhalek and Bolla (2020).

Acknowledgments

The research was carried out under the auspices of the Budapest Semesters in Mathematics program, in the framework of an undergraduate online research course in summer 2021, with the participation of US undergraduate students. Two PhD students of the corresponding author also participated. In particular, Fatma Abdelkhalek’s work was funded by a scholarship under the Stipendium Hungaricum program between Egypt and Hungary, whereas Valentin Frappier’s internship by the Erasmus program of the EU.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
VAR: Vector AutoRegression
SVAR: Structural Vector AutoRegression
CVAR: Causal Vector AutoRegression
SEM: Structural Equation Modeling
DAG: Directed Acyclic Graph
JT: Junction Tree
MCS: Maximal Cardinality Search
IPS: Iterative Proportional Scaling
RZP: Reducible Zero Pattern
MRF: Markov Random Field
AIC: Akaike Information Criterion
AICC: Akaike Information Criterion Corrected
BIC: Bayesian Information Criterion
HQ: Hannan and Quinn's criterion
MLE: Maximum Likelihood Estimate
PLS: Partial Least Squares regression
RMSE: Root Mean Square Error
IMR: Infant Mortality Rate
LDL: variant of the Cholesky decomposition of a symmetric, positive semidefinite matrix as L (lower triangular) × D (diagonal) × L^T

Appendix A. Proofs of the Main Theorems

Appendix A.1. Proof of Theorem 1

First of all, note that the block Cholesky decomposition applies to K partitioned symmetrically into (d + 1) × (d + 1) blocks of sizes 1, …, 1, d, where the number of singleton blocks (of size 1) is d. (In the p = 0 case, all the blocks are singletons, so the standard LDL decomposition is applicable.) Therefore, in the main diagonal of the resulting L, we have d ones and I_{d×d} (the d × d identity matrix); see the forthcoming Equation (A3). In other words, the last d rows (and columns) are treated “en block”; this is why here indeed the block LDL (variant of the block Cholesky) decomposition is applicable.
Let us compute the inverse of the matrix L D L T with block matrices L and D partitioned as in Equation (5). For the time being, we only assume that A is a d × d upper triangular matrix with 1s along its main diagonal, B is d × d , and the diagonal matrix Δ has positive diagonal entries. We will use the computation rule of the inverse of a symmetrically partitioned block matrix Rózsa (1991), which is applicable due to the fact that | A | = 1 , so the matrix A is invertible:
$$
(LDL^T)^{-1} =
\begin{pmatrix} A & B \\ O_{d\times d} & I_{d\times d} \end{pmatrix}^{-1}
\begin{pmatrix} \Delta^{-1} & O_{d\times d} \\ O_{d\times d} & C^{-1}(0) \end{pmatrix}^{-1}
\begin{pmatrix} A^T & O_{d\times d} \\ B^T & I_{d\times d} \end{pmatrix}^{-1}
=
\begin{pmatrix} A^{-1} & -A^{-1}B \\ O_{d\times d} & I_{d\times d} \end{pmatrix}
\begin{pmatrix} \Delta & O_{d\times d} \\ O_{d\times d} & C(0) \end{pmatrix}
\begin{pmatrix} (A^T)^{-1} & O_{d\times d} \\ -B^T(A^T)^{-1} & I_{d\times d} \end{pmatrix}
=
\begin{pmatrix} A^{-1}\Delta(A^{-1})^T + A^{-1}B\,C(0)\,B^T(A^{-1})^T & -A^{-1}B\,C(0) \\ -C(0)\,B^T(A^T)^{-1} & C(0) \end{pmatrix}.
$$
Now, we are going to prove that the above matrix equals $C_2$ if and only if A, B, Δ satisfy the model equations. Comparing the blocks to those of (4), the right bottom block is C(0) in both expressions. Comparing the left bottom blocks, we obtain $-C(0)B^T(A^T)^{-1} = C(1)$, and so $B^T = -C^{-1}(0)C(1)A^T$ and $B = -A\,C^T(1)\,C^{-1}(0)$ should hold for B. This is in accordance with the model equation. Indeed, (3) is equivalent to
$$ B X_{t-1} = -A X_t + U_t, $$
which, after multiplying with $X_{t-1}^T$ from the right and taking expectations, yields $B\,C(0) = -A\,C^T(1)$, which in turn provides
$$ B = -A\,C^T(1)\,C^{-1}(0). \qquad (\mathrm{A1}) $$
By symmetry, the same applies to the right upper block. As for the left upper block,
$$ A^{-1}\Delta(A^T)^{-1} + A^{-1}B\,C(0)\,B^T(A^T)^{-1} = C(0) $$
should hold. Multiplying this equation with A from the left and with A T from the right, we obtain the equivalent equation
$$ \Delta = A\,C(0)\,A^T - B\,C(0)\,B^T. \qquad (\mathrm{A2}) $$
This is in accordance with Equation (3), which implies
$$ \mathbb{E}\,(A X_t + B X_{t-1})(A X_t + B X_{t-1})^T = A\,C(0)\,A^T + A\,C^T(1)\,B^T + B\,C(1)\,A^T + B\,C(0)\,B^T = \Delta. $$
Combining this with Equation (A1), we obtain
$$
\begin{aligned}
\Delta &= A\,C(0)\,A^T + A\,C^T(1)\,B^T + B\,C(1)\,A^T + B\,C(0)\,B^T \\
&= A\,C(0)\,A^T - A\,C^T(1)\,C^{-1}(0)\,C(1)\,A^T - A\,C^T(1)\,C^{-1}(0)\,C(1)\,A^T + A\,C^T(1)\,C^{-1}(0)\,C(0)\,C^{-1}(0)\,C(1)\,A^T \\
&= A\,C(0)\,A^T - A\,C^T(1)\,C^{-1}(0)\,C(1)\,A^T = A\,C(0)\,A^T - B\,C(0)\,B^T,
\end{aligned}
$$
which also satisfies (A2).
Summarizing, we have proved that, under the model equations, $(LDL^T)^{-1} = C_2$, or equivalently, $LDL^T = K$ indeed holds. In view of the uniqueness of the block LDL decomposition (under positive definiteness of the involved matrices), this finishes the proof.
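As a quick numerical illustration of Theorem 1 (not part of the proof), one can pick admissible A, B, Δ, construct the implied C(0) and C(1) from the model equations, and check the factorization. The construction of C(0) via a discrete Lyapunov equation and the block arrangement of C_2 below are our assumptions for this sketch.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, block_diag

rng = np.random.default_rng(0)
d = 3
A = np.eye(d) + np.triu(0.3 * rng.standard_normal((d, d)), 1)  # unit upper triangular
B = 0.2 * rng.standard_normal((d, d))
Delta = np.diag(rng.uniform(0.5, 1.5, size=d))

F = -np.linalg.solve(A, B)                      # reduced form: X_t = F X_{t-1} + A^{-1} U_t
Q = np.linalg.solve(A, Delta) @ np.linalg.inv(A).T
C0 = solve_discrete_lyapunov(F, Q)              # C(0) = F C(0) F^T + Q
C1 = C0 @ F.T                                   # C(1) = E[X_{t-1} X_t^T]

C2 = np.block([[C0, C1.T], [C1, C0]])           # covariance of (X_t, X_{t-1})
K = np.linalg.inv(C2)
L = np.block([[A.T, np.zeros((d, d))], [B.T, np.eye(d)]])
D = block_diag(np.linalg.inv(Delta), np.linalg.inv(C0))
print(np.allclose(L @ D @ L.T, K))              # True: the claim of Theorem 1
```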

Appendix A.2. Algorithm for the Block LDL Decomposition of Appendix A.1

By the preliminary assumptions, K and so D are positive definite; therefore, Δ has positive diagonal entries. To apply the protocol of the block Cholesky decomposition, which gives the theoretically guaranteed unique solution, it is worth writing the above matrices according to the blocks as follows. The matrix L has the partitioned form
$$
L = \begin{pmatrix}
1 & 0 & 0 & \cdots & 0 & 0 \\
\ell_{21} & 1 & 0 & \cdots & 0 & 0 \\
\ell_{31} & \ell_{32} & 1 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
\ell_{d1} & \ell_{d2} & \cdots & \ell_{d,d-1} & 1 & 0 \\
\ell_{d+1,1} & \ell_{d+1,2} & \cdots & \ell_{d+1,d-1} & \ell_{d+1,d} & I_{d\times d}
\end{pmatrix},
$$
where the 2 d × 2 d lower triangular matrix L is also lower triangular with respect to its blocks which are partly scalars, partly vectors, and partly matrices as follows:
$$
\ell_{ij} = \begin{cases}
a_{ji}, & j = 1,\dots,d-1;\ i = j+1,\dots,d;\\
1, & i = j = 1,\dots,d;\\
0, & i = 1,\dots,d;\ j = i+1,\dots,2d.
\end{cases}
$$
Furthermore, the vectors $\ell_{d+1,j}$ are d × 1 for j = 1, …, d, and comprise the column vectors of the d × d matrix $B^T$. The matrix in the bottom right block is the d × d identity $I_{d\times d}$, and above it, the zero entries can be arranged into the d × d zero matrix $O_{d\times d}$.
The 2 d × 2 d block-diagonal matrix D in partitioned form is
$$
D = \begin{pmatrix}
\delta_1^{-1} & 0 & 0 & \cdots & 0 & \mathbf{0} \\
0 & \delta_2^{-1} & 0 & \cdots & 0 & \mathbf{0} \\
0 & 0 & \delta_3^{-1} & \cdots & 0 & \mathbf{0} \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & \delta_d^{-1} & \mathbf{0} \\
\mathbf{0} & \mathbf{0} & \mathbf{0} & \cdots & \mathbf{0} & C^{-1}(0)
\end{pmatrix},
$$
where the d × 1 vectors 0 comprise O d × d in the left bottom, and the entries comprise the inverse of the d × d positive definite matrix C ( 0 ) in the right bottom block. We perform the following multiplications of block matrices, also using formulas Golub and Van Loan (2012); Rózsa (1991) for their inverses and the algorithm proposed in Nocedal and Wright (1999) to obtain the recursion of the block LDL decomposition that goes on as follows:
  • Outer cycle (column-wise). For $j = 1, \dots, d$: $\delta_j^{-1} = k_{jj} - \sum_{h=1}^{j-1} \ell_{jh}\,\delta_h^{-1}\,\ell_{jh}$ (with the reservation that $\delta_1^{-1} = k_{11}$);
  • Inner cycle (row-wise). For $i = j+1, \dots, d$:
    $$ \ell_{ij} = \Big( k_{ij} - \sum_{h=1}^{j-1} \ell_{ih}\,\delta_h^{-1}\,\ell_{jh} \Big)\, \delta_j $$
    and
    $$ \ell_{d+1,j} = \Big( k_{d+1,j} - \sum_{h=1}^{j-1} \ell_{d+1,h}\,\delta_h^{-1}\,\ell_{jh} \Big)\, \delta_j $$
    (with the reservation that, in the j = 1 case, the summand is zero), where $k_{d+1,j}$ for j = 1, …, d is the d × 1 vector in the bottom left block of K.
Note that the last step of the outer cycle, when j = d + 1 , formally would be
$$
C^{-1}(0) = K_{d+1,d+1} - \sum_{h=1}^{d} \ell_{d+1,h}\,\delta_h^{-1}\,\ell_{d+1,h}^T = K_{d+1,d+1} - \sum_{h=1}^{d} \delta_h^{-1}\,\ell_{d+1,h}\,\ell_{d+1,h}^T,
$$
where $\ell_{d+1,h}$ for h = 1, …, d are d × 1 vectors, and $K_{d+1,d+1}$ is the bottom right d × d block of the 2d × 2d concentration matrix K; but it need not be performed, as it is in accordance with Theorem 1. Then, no inner cycle follows and the recursion ends in one run.
It is obvious that the above decomposition has a nested structure, so, for the first d rows of L, only its previous rows or preceding entries in the same row enter into the calculation, as if we performed the standard LDL decomposition of K. Therefore, $\ell_{ij} = a_{ji}$ for j = 1, …, d − 1, i = j + 1, …, d, which are the partial regression coefficients akin to those offered by the standard LDL decomposition $K = \tilde{L}\tilde{D}\tilde{L}^T$, so the first d rows of $\tilde{L}$ and L are the same, and the first d rows of $\tilde{D}$ and D are the same too.
When the process terminates after finding the first d rows of L, we consider the blocks “en block” and obtain the matrix $B = (\ell_{d+1,1}, \dots, \ell_{d+1,d})^T$.
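The recursion above can be transcribed into a few lines of NumPy. The sketch below (with our naming, not part of the supplementary code) treats the first d coordinates as singleton blocks, stops after the d-th column, and reads off A, B and the diagonal of Δ; it works verbatim for the general partition of Appendix A.4, where the trailing block has size pd.

```python
import numpy as np

def block_ldl(K, d, p=1):
    """Block LDL recursion of Appendices A.2/A.4: K is the (p+1)d x (p+1)d
    concentration matrix; returns A (unit upper triangular, d x d),
    B (d x pd, i.e., (B_1 ... B_p)) and the diagonal of Delta."""
    m = (p + 1) * d
    L = np.eye(m)
    D_inv = np.zeros(d)                       # D_inv[j] = delta_j^{-1}
    for j in range(d):                        # outer cycle (column-wise)
        D_inv[j] = K[j, j] - sum(L[j, h] ** 2 * D_inv[h] for h in range(j))
        for i in range(j + 1, m):             # inner cycle (all rows below j)
            L[i, j] = (K[i, j]
                       - sum(L[i, h] * D_inv[h] * L[j, h] for h in range(j))) / D_inv[j]
    A = L[:d, :d].T                           # ell_{ij} = a_{ji}
    B = L[d:, :d].T                           # the bottom left block of L is B^T
    return A, B, 1.0 / D_inv                  # delta_j = 1 / (delta_j^{-1})
```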

Appendix A.3. Proof of Theorem 2

Note that here the block Cholesky decomposition applies to K partitioned symmetrically into (d + 1) × (d + 1) blocks of sizes 1, …, 1, pd, with d singleton blocks. (Therefore, in the main diagonal of L, we have d ones and $I_{pd\times pd}$.) The d × pd matrix B, the transpose of $B^T$, will contain the coefficient matrices of Equation (6) in its blocks, namely
$$ B = (B_1 \ \cdots \ B_p). $$
The proof proceeds similarly to that in Appendix A.1; however, for completeness, and in order to be able to formulate the algorithm, we spell it out here. Let us compute the inverse of the matrix $LDL^T$ with block matrices L and D partitioned as in Equation (8). For the time being, we only assume that A is a d × d upper triangular matrix with 1s along its main diagonal, B is d × pd, and the diagonal matrix Δ has positive diagonal entries. We can again use the computation rule of the inverse of symmetrically partitioned block matrices, since the matrix A is invertible.
$$
(LDL^T)^{-1} =
\begin{pmatrix} A & B \\ O_{pd\times d} & I_{pd\times pd} \end{pmatrix}^{-1}
\begin{pmatrix} \Delta^{-1} & O_{d\times pd} \\ O_{pd\times d} & C_p^{-1} \end{pmatrix}^{-1}
\begin{pmatrix} A^T & O_{d\times pd} \\ B^T & I_{pd\times pd} \end{pmatrix}^{-1}
=
\begin{pmatrix} A^{-1} & -A^{-1}B \\ O_{pd\times d} & I_{pd\times pd} \end{pmatrix}
\begin{pmatrix} \Delta & O_{d\times pd} \\ O_{pd\times d} & C_p \end{pmatrix}
\begin{pmatrix} (A^T)^{-1} & O_{d\times pd} \\ -B^T(A^T)^{-1} & I_{pd\times pd} \end{pmatrix}
=
\begin{pmatrix} A^{-1}\Delta(A^{-1})^T + A^{-1}B\,C_p\,B^T(A^{-1})^T & -A^{-1}B\,C_p \\ -C_p\,B^T(A^T)^{-1} & C_p \end{pmatrix}.
$$
Now, we are going to prove that the above matrix equals $C_{p+1}$ if and only if A, B, Δ satisfy the model equations. Comparing the blocks to those of (7), the right bottom block is $C_p$ in both expressions. Comparing the left bottom blocks, we obtain $-C_p B^T(A^T)^{-1} = C(1,\dots,p)$, and so $B^T = -C_p^{-1} C(1,\dots,p) A^T$ and $B = -A\,C^T(1,\dots,p)\,C_p^{-1}$ should hold for B. It is in accordance with the model equation: indeed, (6) is equivalent to
$$ B_1 X_{t-1} + \cdots + B_p X_{t-p} = -A X_t + U_t, $$
which, after multiplying with $X_{t-1}^T, \dots, X_{t-p}^T$ from the right and taking expectation, in concise form yields $B\,C_p = -A\,C^T(1,\dots,p)$, which in turn provides
$$ B = -A\,C^T(1,\dots,p)\,C_p^{-1}. \qquad (\mathrm{A5}) $$
By symmetry, it also applies to the right upper block. As for the left upper block,
$$ A^{-1}\Delta(A^T)^{-1} + A^{-1}B\,C_p\,B^T(A^T)^{-1} = C(0) $$
should hold. Multiplying this equation with A from the left and with A T from the right, we obtain the equivalent equation
$$ \Delta = A\,C(0)\,A^T - B\,C_p\,B^T. \qquad (\mathrm{A6}) $$
This is in accordance with Equation (6) that implies
$$ \mathbb{E}\,(A X_t + B_1 X_{t-1} + \cdots + B_p X_{t-p})(A X_t + B_1 X_{t-1} + \cdots + B_p X_{t-p})^T = A\,C(0)\,A^T + A\,C^T(1,\dots,p)\,B^T + B\,C(1,\dots,p)\,A^T + B\,C_p\,B^T = \Delta. $$
Combining this with Equation (A5), we have
$$
\begin{aligned}
\Delta &= A\,C(0)\,A^T + A\,C^T(1,\dots,p)\,B^T + B\,C(1,\dots,p)\,A^T + B\,C_p\,B^T \\
&= A\,C(0)\,A^T - A\,C^T(1,\dots,p)\,C_p^{-1}\,C(1,\dots,p)\,A^T - A\,C^T(1,\dots,p)\,C_p^{-1}\,C(1,\dots,p)\,A^T + A\,C^T(1,\dots,p)\,C_p^{-1}\,C_p\,C_p^{-1}\,C(1,\dots,p)\,A^T \\
&= A\,C(0)\,A^T - A\,C^T(1,\dots,p)\,C_p^{-1}\,C(1,\dots,p)\,A^T = A\,C(0)\,A^T - B\,C_p\,B^T,
\end{aligned}
$$
which also satisfies (A6).
Summarizing, we have proved that, under the model equations, $(LDL^T)^{-1} = C_{p+1}$, or equivalently, $LDL^T = K$ indeed holds. In view of the uniqueness of the block LDL decomposition (under positive definiteness of the involved matrices), this finishes the proof.

Appendix A.4. Algorithm for the Block LDL Decomposition of Appendix A.3

Again, the protocol of the block Cholesky decomposition is applied to the involved matrices in block partitioned form. Here,
$$
L = \begin{pmatrix}
1 & 0 & 0 & \cdots & 0 & 0 \\
\ell_{21} & 1 & 0 & \cdots & 0 & 0 \\
\ell_{31} & \ell_{32} & 1 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
\ell_{d1} & \ell_{d2} & \cdots & \ell_{d,d-1} & 1 & 0 \\
\ell_{d+1,1} & \ell_{d+1,2} & \cdots & \ell_{d+1,d-1} & \ell_{d+1,d} & I_{pd\times pd}
\end{pmatrix},
$$
where the ( p + 1 ) d × ( p + 1 ) d lower triangular matrix L is also lower triangular with respect to its blocks which are partly scalars, partly vectors, and partly matrices as follows:
$$
\ell_{ij} = \begin{cases}
a_{ji}, & j = 1,\dots,d-1;\ i = j+1,\dots,d;\\
1, & i = j = 1,\dots,d;\\
0, & i = 1,\dots,d;\ j = i+1,\dots,(p+1)d.
\end{cases}
$$
Furthermore, the vectors $\ell_{d+1,j}$ are pd × 1 for j = 1, …, d, and comprise the column vectors of the pd × d matrix $B^T$. The matrix in the bottom right block is the pd × pd identity, and above it, the zero entries can be arranged into the d × pd zero matrix.
The ( p + 1 ) d × ( p + 1 ) d block-diagonal matrix D in partitioned form is
$$
D = \begin{pmatrix}
\delta_1^{-1} & 0 & 0 & \cdots & 0 & \mathbf{0} \\
0 & \delta_2^{-1} & 0 & \cdots & 0 & \mathbf{0} \\
0 & 0 & \delta_3^{-1} & \cdots & 0 & \mathbf{0} \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & \delta_d^{-1} & \mathbf{0} \\
\mathbf{0} & \mathbf{0} & \mathbf{0} & \cdots & \mathbf{0} & C_p^{-1}
\end{pmatrix},
$$
where the pd × 1 vectors $\mathbf{0}$ comprise $O_{pd\times d}$ in the left bottom, and the matrix $C_p^{-1}$ stands in the right bottom block. With the multiplication rules of block matrices and their inverses, the recursion of the block LDL decomposition goes on as follows:
  • Outer cycle (column-wise). For $j = 1, \dots, d$: $\delta_j^{-1} = k_{jj} - \sum_{h=1}^{j-1} \ell_{jh}\,\delta_h^{-1}\,\ell_{jh}$ (with the reservation that $\delta_1^{-1} = k_{11}$);
  • Inner cycle (row-wise). For $i = j+1, \dots, d$:
    $$ \ell_{ij} = \Big( k_{ij} - \sum_{h=1}^{j-1} \ell_{ih}\,\delta_h^{-1}\,\ell_{jh} \Big)\, \delta_j $$
    and
    $$ \ell_{d+1,j} = \Big( k_{d+1,j} - \sum_{h=1}^{j-1} \ell_{d+1,h}\,\delta_h^{-1}\,\ell_{jh} \Big)\, \delta_j $$
    (with the reservation that, in the j = 1 case, the summand is zero), where $k_{d+1,j}$ for j = 1, …, d are pd × 1 vectors in the bottom left block of K.
The recursion ends in one run.
The above decomposition is again a nested one, so for the first d rows of L, only its previous rows or preceding entries in the same row enter into the calculation, as if we performed the usual LDL decomposition of K. Therefore, $\ell_{ij} = a_{ji}$ for j = 1, …, d − 1, i = j + 1, …, d, which are the negatives of the partial regression coefficients akin to those offered by the standard LDL decomposition $K = \tilde{L}\tilde{D}\tilde{L}^T$, so the first d rows of $\tilde{L}$ and L are the same, and the first d rows of $\tilde{D}$ and D are the same too. When the process terminates, we consider the blocks “en block” and obtain the pd × d matrix $B^T = (\ell_{d+1,1}, \dots, \ell_{d+1,d})$.
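For illustration only (our helper, assuming the block_ldl sketch given after Appendix A.2), the d × pd output can be split column-blockwise into the individual lag-coefficient matrices of Equation (6):

```python
def split_lags(B, d, p):
    """Split the d x pd matrix B = (B_1 ... B_p) returned by the block LDL
    sketch into the individual d x d lag-coefficient matrices B_1, ..., B_p."""
    return [B[:, i * d:(i + 1) * d] for i in range(p)]
```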

Appendix B. Pseudocodes

In practice, the n × d data matrix $D = (X_1, \dots, X_n)^T$ is given, where $X_t$ is a serially correlated sample of the underlying d-dimensional stacked random vector $X = (X^1, \dots, X^d)^T$ at time t ∈ {1, …, n}, n > d. To construct a CVAR model, the first step is to construct an undirected graph G and a causal ordering of its nodes (the d observed variables). The algorithm below is a general procedure for this step. Notice that we only consider triangulated (thus decomposable) graphs in this section (see Note 1).
Algorithm A1 will fail if the specified threshold r * does not lead to a triangulated graph. Therefore, it is recommended that users manually inspect the initial partial correlations (step 3) and repeat the graph construction step (step 4) with various reasonable thresholds. If there is an expert’s advice on the causal structure (in the form of a causal ordering or a junction tree) of the variables in a dataset, the users may also skip Algorithm A1 and build a CVAR model directly using the following algorithms.
Algorithm A1: Constructing an undirected graph and a causal ordering of variables
Input  : D, the n × d data matrix
    p, order of the CVAR model
    r*, threshold for the partial correlation statistical test
Output: undirected graph G and its perfect ordering
1. Compute the block Toeplitz matrix $C_{p+1}$ as in Equation (7), using the autocovariance matrices C(h) for h = 0, 1, …, p.
2. Compute the concentration matrix $K = C_{p+1}^{-1} = (\sigma^{ij})$.
3. Compute the partial correlation coefficient $r_{ij}$ for $X^i$ and $X^j$ conditioned on all other variables up to lag p. By Proposition 1,
$$ r_{ij} = -\frac{\sigma^{ij}}{\sqrt{\sigma^{ii}\,\sigma^{jj}}} \qquad (1 \le i < j \le d). $$
4. Construct an undirected graph G = (V, E) based on the partial correlations such that V = {1, …, d} and E = {(i, j) : i < j, |r_{ij}| ≥ r*}.
5. If G is triangulated, proceed to the next step; otherwise, terminate with an appropriate warning (then choose another r*).
6. Apply MCS (Maximal Cardinality Search) on G to obtain a perfect elimination ordering (see Algorithm 9.3 of Koller and Friedman (2009)).
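A minimal Python sketch of steps 2–6 (our naming and a direct, non-optimized MCS loop; not part of the supplementary code):

```python
import numpy as np
import networkx as nx

def build_graph(K, d, r_star):
    """Steps 2-6 of Algorithm A1: partial correlations from the concentration
    matrix K of the stacked vector, thresholding at r_star, a chordality check,
    and maximal cardinality search for an ordering of the d nodes."""
    diag = np.diag(K)[:d]
    R = -K[:d, :d] / np.sqrt(np.outer(diag, diag))    # r_ij of Proposition 1
    G = nx.Graph()
    G.add_nodes_from(range(d))
    G.add_edges_from((i, j) for i in range(d) for j in range(i + 1, d)
                     if abs(R[i, j]) >= r_star)
    if not nx.is_chordal(G):
        raise ValueError("G is not triangulated; choose another r_star")
    order = []                         # MCS: always visit the node with the most
    while len(order) < d:              # already-visited neighbours
        rest = [v for v in G.nodes if v not in order]
        order.append(max(rest, key=lambda v: len(set(G[v]) & set(order))))
    return G, order                    # for chordal G, this is a perfect ordering
```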
Last but not least, to find the optimal order p for the CVAR model (i.e., order selection), we recommend repeating Algorithm A2 or A3 for different ps (e.g., for p = 1 , , 10 ) and then comparing various information criteria of the resulting models (as illustrated in Section 4).
Algorithm A2: Constructing an unrestricted CVAR model
Input  : D: n × d data matrix, or the existing K from Algorithm A1
    p: order of the CVAR model
    $(i_1, \dots, i_d)$, causal ordering of the d observed variables
Output: parameter matrices $A, B_1, \dots, B_p$
1. Reorder the columns of D according to the causal ordering.
2. Compute the concentration matrix K for the reordered D (or reorder the rows and columns of the existing K from Algorithm A1).
3. Run Appendix A.4 with (K, p) to obtain the parameter matrices $A, B_1, \dots, B_p$ of the unrestricted CVAR(p) model.
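The same steps in code, reusing the block_ldl sketch from the Appendix; the block Toeplitz construction assumes the autocovariance convention C(h) = E[X_{t−h} X_t^T], and the function names are ours:

```python
import numpy as np

def autocov(X, h):
    """Sample autocovariance C(h) of the n x d data matrix X."""
    n = X.shape[0]
    Xc = X - X.mean(axis=0)
    return Xc[:n - h].T @ Xc[h:] / (n - h)

def block_toeplitz(X, p):
    """Estimated C_{p+1}: covariance of the stacked vector (X_t, ..., X_{t-p})."""
    d = X.shape[1]
    C = np.empty(((p + 1) * d, (p + 1) * d))
    for i in range(p + 1):
        for j in range(p + 1):
            blk = autocov(X, i - j) if i >= j else autocov(X, j - i).T
            C[i * d:(i + 1) * d, j * d:(j + 1) * d] = blk
    return C

# Algorithm A2 in brief, with `order` the causal ordering of the d columns:
# X_ord = X[:, order]
# K = np.linalg.inv(block_toeplitz(X_ord, p))
# A, B, delta = block_ldl(K, X_ord.shape[1], p)   # block_ldl: sketch after Appendix A.2
```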
Algorithm A3: Constructing a restricted CVAR model
Input  : D: n × d data matrix
    p: order of the CVAR model
    G, (undirected) chordal graph of the observed variables
    $(i_1, \dots, i_d)$, causal ordering of the observed variables
Output: parameter matrices $\hat{A}, \hat{B}_1, \dots, \hat{B}_p$
1. Re-label the nodes in G according to the causal ordering.
2. Build a JT (junction tree) based on G and the causal (i.e., perfect elimination) ordering using a JT algorithm (e.g., NetworkX.junction_tree(G) in Hagberg et al. (2008)).
3. Apply covariance selection (as in Equation (13)) to obtain a re-estimated concentration matrix $\hat{K}$ using the JT from step 2.
4. Run Appendix A.4 with ($\hat{K}$, p) to obtain the parameter matrices $\hat{A}, \hat{B}_1, \dots, \hat{B}_p$ of the restricted CVAR(p) model.
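A schematic version using the helpers sketched earlier (block_toeplitz, decomposable_concentration, block_ldl). The clique and separator index sets below refer to coordinates of the stacked vector; how the contemporaneous cliques are extended with the lagged coordinates follows Equation (13) of the main text and is not reproduced here.

```python
def restricted_cvar(X, p, cliques, separators):
    """Steps 3-4 of Algorithm A3 in schematic form; the junction tree (step 2)
    is assumed to have been turned into clique/separator index sets of the
    stacked vector beforehand."""
    d = X.shape[1]
    C_full = block_toeplitz(X, p)                                      # estimated C_{p+1}
    K_hat = decomposable_concentration(C_full, cliques, separators)    # step 3, cf. Eq. (13)
    return block_ldl(K_hat, d, p)                                      # step 4: A_hat, B_hat, Delta
```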

Note

1. Please see the main text for suggestions on handling graphs that are not triangulated, when moralization and the IPS algorithm are needed. The default r* threshold is usually set according to a significance level (e.g., α = 0.05) for the partial correlation test. This can be changed based on the sample size and the effect size.

References

  1. Abdelkhalek, Fatma, and Marianna Bolla. 2020. Application of Structural Equation Modeling to Infant Mortality Rate in Egypt. In Demography of Population Health, Aging and Health Expenditures. Edited by Christos H. Skiadas and Charilaos Skiadas. Cham: Springer, pp. 89–99. [Google Scholar]
  2. Akbilgic, Oguz, Hamparsum Bozdogan, and M. Erdal Balaban. 2014. A Novel Hybrid RBF Neural Networks Model as a Forecaster. Statistics and Computing 24: 365–75. [Google Scholar] [CrossRef]
  3. Bazinas, Vassilios, and Bent Nielsen. 2022. Causal Transmission in Reduced-Form Models. Econometrics 10: 14. [Google Scholar] [CrossRef]
  4. Bolla, Marianna, Fatma Abdelkhalek, and Máté Baranyi. 2019. Graphical models, regression graphs, and recursive linear regression in a unified way. Acta Scientiarum Mathematicarum (Szeged) 85: 9–57. [Google Scholar] [CrossRef]
  5. Bolla, Marianna, and Tamás Szabados. 2021. Multidimensional Stationary Time Series: Dimension Reduction and Prediction. New York: CRC Press, Taylor and Francis Group. [Google Scholar]
  6. Box, George EP, Gwilym M. Jenkins, Gregory C. Reinsel, and Greta M. Ljung. 2015. Time series Analysis: Forecasting and Control. New York: Wiley. [Google Scholar]
  7. Brillinger, David R. 1996. Remarks concerning graphical models for time series and point processes. Revista de Econometria 16: 1–23. [Google Scholar] [CrossRef] [Green Version]
  8. Brockwell, Peter J., and Richard A. Davis. 1991. Time Series: Theory and Methods. Berlin/Heidelberg: Springer. [Google Scholar]
  9. Deistler, Manfred, and Wolfgang Scherrer. 2019. Vector Autoregressive Moving Average Models. In Handbook of Statistics. Berlin/Heidelberg: Springer, vol. 41. [Google Scholar]
  10. Deistler, Manfred, and Wolfgang Scherrer. 2022. Time Series Models. Cham: Springer Nature. [Google Scholar]
  11. Dempster, Arthur P. 1972. Covariance selection. Biometrics 28: 157–75. [Google Scholar] [CrossRef]
  12. Eichler, Michael. 2006. Graphical modelling of dynamic relationships in multivariate time series. In Handbook of Time Series Analysis. Edited by Schelter Björn, Winterhalder Matthias and Timmer Jens. Berlin/Heidelberg: Wiley-VCH Berlin. [Google Scholar]
  13. Eichler, Michael. 2012. Graphical modelling of multivariate time series. Probability Theory Related Fields 153: 233–68. [Google Scholar] [CrossRef] [Green Version]
  14. Geweke, John. 1984. Inference and causality in economic time series models. In Handbook of Econometrics. Amsterdam: Elsevier, vol. 2. [Google Scholar]
  15. Golub, Gene H., and Charles F. Van Loan. 2012. Matrix Computations. Baltimore: JHU Press. [Google Scholar]
  16. Granger, Clive W. J. 1969. Investigating causal relations by econometric models and cross-spectral methods. Econometrica 37: 424–38. [Google Scholar] [CrossRef]
  17. Haavelmo, Trygve. 1943. The statistical implications of a system of simultaneous equations. Econometrica 11: 1–12. [Google Scholar] [CrossRef]
  18. Hagberg, Aric, Pieter Swart, and Daniel S Chult. 2008. Exploring network structure, dynamics, and function using NetworkX. Paper presented at the 7th Python in Science Conference (SciPy2008), Pasadena, CA, USA, August 19–24; Edited by Varoquaux Gäel, Vaught Travis and Millman Jarrod. Los Alamos: Los Alamos National Lab (LANL), pp. 11–15. [Google Scholar]
  19. Jöreskog, Karl G. 1977. Structural equation models in the social sciences. Specification, estimation and testing. In Applications of Statistics. Edited by Pathak R. Krishnaiah. Amsterdam: North-Holland Publishing Co., pp. 265–87. [Google Scholar]
  20. Keating, John W. 1996. Structural information in recursive VAR orderings. Journal of Economic Dynamics and Control 20: 1557–80. [Google Scholar] [CrossRef]
  21. Kiiveri, Harri, Terry P. Speed, and John B. Carlin. 1984. Recursive causal models. Journal of the Australian Mathematical Society 36: 30–52. [Google Scholar] [CrossRef] [Green Version]
  22. Kilian, Lutz, and Helmut Lütkepohl. 2017. Structural Vector Autoregressive Analysis. Cambridge: Cambridge University Press. [Google Scholar]
  23. Koller, Daphne, and Nir Friedman. 2009. Probabilistic Graphical Models. Principles and Techniques. Cambridge: MIT Press. [Google Scholar]
  24. Lauritzen, Steffen L. 2004. Graphical Models. Oxford Statistical Science Series; Oxford: Clarendon Press, Oxford University Press, reprint with corr. edition. [Google Scholar]
  25. Lütkepohl, Helmut. 2005. New Introduction to Multiple Time Series Analysis. Berlin/Heidelberg: Springer. [Google Scholar]
  26. Nocedal, Jorge, and Stephen J. Wright. 1999. Numerical Optimization. Berlin/Heidelberg: Springer. [Google Scholar]
  27. Rao, Calyampudi Radhakrishna. 1973. Linear Statistical Inference and its Applications. New York: Wiley. [Google Scholar]
  28. Rózsa, Pál. 1991. Linear Algebra and Its Applications. Budapest: Műszaki Kiadó. (In Hungarian) [Google Scholar]
  29. Sims, Christopher A. 1980. Macroeconomics and reality. Econometrica 48: 1–48. [Google Scholar] [CrossRef] [Green Version]
  30. Tarjan, Robert E., and Mihalis Yannakakis. 1984. Simple Linear-Time Algorithms to Test Chordality of Graphs, Test Acyclicity of Hypergraphs, and Selectively Reduce Acyclic Hypergraphs. SIAM Journal on Computing 13: 566–79. [Google Scholar] [CrossRef]
  31. Wainwright, Martin J., and Michael I. Jordan. 2008. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning 1: 1–305. [Google Scholar] [CrossRef] [Green Version]
  32. Wainwright, Martin J. 2015. Graphical Models and Message-Passing Algorithms: Some Introductory Lectures. In Mathematical Foundations of Complex Networked Information Systems. Lecture Notes in Mathematics 2141. Edited by Fagnani Fabio, Sophie M. Fosson and Ravazzi Chiara. Cham: Springer. [Google Scholar]
  33. Wermuth, Nanny. 1980. Recursive equations, covariance selection, and path analysis. Journal of the American Statistical Association 75: 963–72. [Google Scholar] [CrossRef]
  34. Wiener, Norbert. 1956. The theory of prediction. In Modern Mathematics for Engineers. Edited by E. F. Beckenback. New York: McGraw–Hill. [Google Scholar]
  35. Wold, Herman O. A. 1960. A generalization of causal chain models. Econometrica 28: 444–63. [Google Scholar] [CrossRef]
  36. Wold, Herman O. A. 1985. Partial least squares. In Encyclopedia of Statistical Sciences. Edited by Samuel Kotz, Norman L. Johnson and C. R. Read. New York: Wiley. [Google Scholar]
  37. Wright, Sewall. 1934. The method of path coefficients. The Annals of Mathematical Statistics 5: 161–215. [Google Scholar] [CrossRef]
Figure 1. Triplet sink V.
Figure 2. Graphical models fitted to the financial dataset.
Table 1. Partial correlation coefficients from C 1 ( 0 ) . Entries marked by asterisk are less than 0.04 in absolute value (i.e., they correspond to no-edge positions in the graph), and the corresponding significance is α = 0.008851 .
NIKEUISEEMBVSPDAXFTSESP
NIK 0.016 *0.035 *0.522−0.260−0.019 *−0.0760.024 *
EU0.016 * 0.2170.034 *0.0670.6870.7470.018 *
ISE0.035 *0.217 0.358−0.157−0.077−0.0590.034 *
EM0.5220.034 *0.358 0.5460.0480.086−0.184
BVSP−0.2600.067−0.1570.546 −0.093−0.0450.533
DAX−0.019 *0.687−0.0770.048−0.093 −0.2030.191
FTSE−0.0760.747−0.0590.086−0.045−0.203 0.057
SP0.024 *0.018 *0.034 *−0.1840.5330.1910.057
Table 2. A matrix for the unrestricted Financial VAR(1) model (rounded to 4 decimals).
NIKEUISEEMBVSPDAXFTSESP
NIK10.02640.0042−0.89020.20300.01700.0781−0.0336
EU01−0.0418−0.0146−0.0239−0.3746−0.5255−0.0033
ISE001−0.95180.1613−0.1658−0.3129−0.1413
EM0001−0.3507−0.1182−0.24640.1077
BVSP00001−0.0129−0.2782−0.6375
DAX000001−0.8102−0.2336
FTSE0000001−0.6100
SP00000001
Table 3. B matrix for the unrestricted Financial VAR(1) model (rounded to 4 decimals).
NIK 1 EU 1 ISE 1 EM 1 BVSP 1 DAX 1 FTSE 1 SP 1
NIK0.1845−0.1685−0.08740.08520.06350.0205−0.1236−0.2798
EU−0.01310.1219−0.00440.0291−0.0124−0.0393−0.09790.0011
ISE0.06770.2811−0.06570.2473−0.2940−0.05430.0098−0.1442
EM−0.0016−0.0569−0.01590.1076−0.0917−0.09450.0875−0.1071
BVSP−0.01400.07040.0142−0.10460.1397−0.14970.1188−0.0812
DAX−0.00340.2021−0.0342−0.0044−0.0352−0.0476−0.0670−0.0673
FTSE0.0293−0.0168−0.01090.0420−0.11290.21410.0805−0.2641
SP0.04170.2603−0.02610.0112−0.0026−0.0709−0.28500.1240
Table 4. A matrix for the unrestricted Financial VAR(2) model (rounded to 4 decimals).
NIKEUISEEMBVSPDAXFTSESP
NIK1−0.01140.0103−0.88220.19950.02330.0856−0.0214
EU01−0.0426−0.0110−0.0240−0.3745−0.5137−0.0128
ISE001−0.97880.1701−0.1669−0.3139−0.1361
EM0001−0.3450−0.1154−0.23750.0922
BVSP00001−0.0047−0.2655−0.6601
DAX000001−0.8120−0.2339
FTSE0000001−0.6320
SP00000001
Table 5. B 1 matrix for the unrestricted Financial VAR(2) model (rounded to 4 decimals).
NIK 1 EU 1 ISE 1 EM 1 BVSP 1 DAX 1 FTSE 1 SP 1
NIK0.2063−0.1826−0.11060.10630.07310.0187−0.1502−0.2580
EU−0.00370.1364−0.00100.0232−0.0150−0.0371−0.0996−0.0107
ISE0.04090.2476−0.07710.2274−0.2772−0.04470.0331−0.1284
EM0.0489−0.0200−0.00300.1360−0.1150−0.09960.0468−0.1162
BVSP−0.00660.09310.0261−0.10910.1312−0.15730.1161−0.0935
DAX−0.01230.2146−0.03190.0073−0.0406−0.0536−0.0727−0.0694
FTSE0.08520.00190.02750.0145−0.11170.23770.1035−0.3427
SP0.05300.2759−0.0565−0.00330.0024−0.0945−0.31060.1789
Table 6. B 2 matrix for the unrestricted Financial VAR(2) model (rounded to 4 decimals).
NIK 2 EU 2 ISE 2 EM 2 BVSP 2 DAX 2 FTSE 2 SP 2
NIK−0.0402−0.1695−0.04100.01560.0998−0.04060.1367−0.0091
EU0.00170.0771−0.00650.00540.00370.0192−0.0762−0.0394
ISE−0.0142−0.1725−0.0276−0.00880.03890.11670.08260.0357
EM−0.00540.0650−0.03220.1155−0.0695−0.0959−0.0162−0.0270
BVSP−0.04230.0332−0.04490.2878−0.0717−0.0221−0.0381−0.0120
DAX−0.03720.01770.01300.0658−0.0360−0.0108−0.02020.0059
FTSE0.04910.3107−0.08200.06930.02990.0153−0.0840−0.3038
SP0.0447−0.06280.0804−0.18240.07850.0133−0.17750.1284
Table 7. A matrix for the restricted Financial VAR(1) model (rounded to 4 decimal places).
NIKEUISEEMBVSPDAXFTSESP
NIK100−0.81930.2080000
EU01−0.04210−0.0269−0.3782−0.52970
ISE001−0.93860.1653−0.1675−0.3161−0.1477
EM0001−0.3419−0.1184−0.24640.0997
BVSP00001−0.0130−0.2729−0.6423
DAX000001−0.8102−0.2336
FTSE0000001−0.6104
SP00000001
Table 8. B matrix for the restricted Financial VAR(1) model (rounded to 4 decimal places).
NIK 1 EU 1 ISE 1 EM 1 BVSP 1 DAX 1 FTSE 1 SP 1
NIK0.1811−0.1797−0.08560.08420.0739−0.0058−0.1146−0.2662
EU−0.01310.1213−0.00460.0304−0.0130−0.0415−0.09690.0002
ISE0.06760.2814−0.06580.2483−0.2941−0.05670.0120−0.1472
EM−0.0016−0.0567−0.01580.1067−0.0908−0.09510.0890−0.1085
BVSP−0.01390.07040.0142−0.10410.1391−0.14880.1195−0.0828
DAX−0.00340.2019−0.0342−0.0046−0.0353−0.0474−0.0669−0.0672
FTSE0.0292−0.0171−0.01090.0419−0.11300.21420.0807−0.2642
SP0.04170.2608−0.02610.0115−0.0026−0.0713−0.28530.1239
Table 9. A matrix for the restricted Financial VAR(2) model (rounded to 4 decimals).
NIKEUISEEMBVSPDAXFTSESP
NIK100−0.81910.2076000
EU01−0.04230−0.0293−0.3811−0.51920
ISE001−0.96620.1790−0.1713−0.3112−0.1470
EM0001−0.3361−0.1153−0.23720.0835
BVSP00001−0.0069−0.2544−0.6664
DAX000001−0.8128−0.2336
FTSE0000001−0.6319
SP00000001
Table 10. B 1 matrix for the restricted Financial VAR(2) model (rounded to 4 decimals).
NIK 1 EU 1 ISE 1 EM 1 BVSP 1 DAX 1 FTSE 1 SP 1
NIK0.2009−0.1869−0.10980.10890.0824−0.0079−0.1493−0.2428
EU−0.00380.1387−0.00130.0260−0.0153−0.0410−0.1027−0.0086
ISE0.03530.2865−0.07500.2479−0.2741−0.06390.0101−0.1418
EM0.0494−0.0218−0.00270.1338−0.1144−0.09900.0500−0.1177
BVSP−0.01070.12020.0276−0.09470.1327−0.16740.0987−0.1030
DAX−0.01100.2072−0.03220.0034−0.0412−0.0503−0.0677−0.0675
FTSE0.08240.01760.02810.0224−0.11040.23090.0928−0.3463
SP0.05060.2898−0.05600.00400.0037−0.1010−0.31990.1760
Table 11. B 2 matrix for the restricted Financial VAR(2) model (rounded to 4 decimals).
NIK 2 EU 2 ISE 2 EM 2 BVSP 2 DAX 2 FTSE 2 SP 2
NIK−0.0455−0.1847−0.03910.02640.0906−0.04860.14270.0089
EU0.00170.0755−0.00580.00470.00330.0179−0.0765−0.0370
ISE−0.0161−0.1634−0.0290−0.00210.03520.11130.08210.0313
EM−0.00560.0659−0.03300.1189−0.0701−0.0959−0.0167−0.0283
BVSP−0.04300.0415−0.04560.2906−0.0729−0.0258−0.0389−0.0168
DAX−0.03690.01630.01300.0656−0.0356−0.0100−0.02030.0064
FTSE0.04850.3142−0.08200.07160.02900.0128−0.0845−0.3054
SP0.0442−0.06060.0805−0.18250.07780.0117−0.17730.1281
Table 12. Order selection criteria for the unrestricted Financial CVAR ( p ) model, bold-face values represent minimum of each criterion.
p      AIC       AICC          BIC       HQ
1      −76.81    −33,222.68    −76.07    −76.52
2      −76.85    −33,173.98    −75.60    −76.36
3      −76.84    −33,095.75    −75.08    −76.15
4      −76.83    −33,011.98    −74.55    −75.94
5      −76.77    −32,893.23    −73.97    −75.67
6      −76.69    −32,766.33    −73.37    −75.39
7      −76.58    −32,612.38    −72.74    −75.08
8      −76.48    −32,457.38    −72.11    −74.77
9      −76.41    −32,316.33    −71.52    −74.49
Table 13. Order selection criteria for the restricted Financial CVAR ( p ) model; bold-face values represent the minimum of each criterion.
p      AIC       AICC          BIC       HQ
1      −76.87    −33,239.11    −76.19    −76.60
2      −76.91    −33,190.27    −75.71    −76.44
3      −76.93    −33,129.67    −75.22    −76.26
4      −77.00    −33,084.90    −74.77    −76.13
5      −76.94    −32,969.37    −74.19    −75.86
6      −76.92    −32,869.31    −73.65    −75.64
7      −76.81    −32,718.36    −73.02    −75.33
8      −76.80    −32,612.63    −72.49    −75.11
9      −76.78    −32,495.56    −71.94    −74.88
Table 14. A matrix for the IMR unrestricted VAR(1) model of Case 1 (rounded to 4 decimals).
IMRMMRHepBOPExpHExpGDP
IMR1.0−1.1259−0.01610.00030.0176−0.1348
MMR0.01.00000.35940.0492−0.06840.7135
HepB0.00.00001.0000−0.16260.2510−0.8196
OPExp0.00.00000.00001.0000−0.6876−0.4229
HExp0.00.00000.00000.00001.00000.6749
GDP0.00.00000.00000.00000.00001.0000
Table 15. B matrix for the IMR unrestricted VAR(1) model of Case 1 (rounded to 4 decimals).
IMR-1MMR-1HepB-1OPExp-1HExp-1GDP-1
IMR0.2986−0.3589−0.00760.0042−0.0115−0.0639
MMR−0.0149−0.7469−0.23580.05400.0193−0.5577
HepB13.0541−15.7658−1.1915−0.35060.2170−1.7902
OPExp7.1616−8.0906−0.2994−0.1038−0.0215−0.7720
HExp1.6861−2.9922−0.4650−0.1566−0.0681−1.0913
GDP−11.067413.21820.32540.3204−0.30991.2129
Table 16. A matrix for the IMR restricted VAR(1) model of Case 2 (rounded to 4 decimals).
IMRMMRHepBGDPOPExpHExp
IMR1.00.07360.00520.01110.00.0040
MMR0.01.00000.02370.11800.0−0.0116
HepB0.00.00001.0000−0.34180.00.0236
GDP0.00.00000.00001.00000.0−0.0035
OPExp0.00.00000.00000.00001.0−0.7854
HExp0.00.00000.00000.00000.01.0000
Table 17. B matrix for the IMR restricted VAR(1) model of Case 2 (rounded to 4 decimals).
IMR-1MMR-1HepB-1GDP-1OPExp-1HExp-1
IMR−0.9198−0.0592−0.00210.0053−0.0013−0.0049
MMR−1.01750.2511−0.00020.0607−0.00470.0039
HepB3.6850-4.4606−0.9058−0.4230−0.0811−0.0950
GDP0.5849−0.44530.0640−0.85730.08960.1086
OPExp3.6432−3.7486−0.1413−0.43800.0273−0.0926
HExp1.9561−3.4687−0.5221−0.6302−0.2298−0.1171