### 3.2.1. Pearson Correlation

This is the traditional approach to investigating the relationship between two variables; there are numerous studies of these parameters, for example Stigler (1986), Snedecor and Cochran ([1980] 1989), and Galton (1889). In this paper, we briefly present the relevant formula as well as the significance level used for testing. The parameter *ρ* represents the product–moment correlation coefficient, estimated as

$$\hat{\rho} = \frac{\sum_{i=1}^{n} w_i (x_i - \overline{x})(y_i - \overline{y})}{\sqrt{\sum_{i=1}^{n} w_i (x_i - \overline{x})^2} \sqrt{\sum_{i=1}^{n} w_i (y_i - \overline{y})^2}} \tag{1}$$

Here, *w<sub>i</sub>* denotes the weight, and $\overline{x}$ and $\overline{y}$ are the means of *x* and *y*, respectively. We then report the unadjusted significance level used for testing:

$$p = 2 \cdot \operatorname{ttail}\left( n-2,\ |\hat{\rho}|\,\frac{\sqrt{n-2}}{\sqrt{1-\hat{\rho}^{2}}} \right) \tag{2}$$

where ttail(·, ·) denotes the upper-tail probability of Student's *t* distribution with *n* − 2 degrees of freedom.

One benefit of the Pearson correlation is that it is easy to calculate; however, it captures only linear dependence between two variables. Therefore, we also employ the further investigations described below.
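To make Equations (1) and (2) concrete, the following minimal sketch computes the weighted product–moment correlation and its unadjusted two-sided significance level; the simulated data and the `weighted_pearson` helper are illustrative assumptions, not the estimation routine actually used in this study.

```python
import numpy as np
from scipy import stats

def weighted_pearson(x, y, w):
    """Weighted correlation (Equation (1)) and its two-sided p-value (Equation (2))."""
    x, y, w = map(np.asarray, (x, y, w))
    xbar = np.average(x, weights=w)              # weighted mean of x
    ybar = np.average(y, weights=w)              # weighted mean of y
    num = np.sum(w * (x - xbar) * (y - ybar))
    den = np.sqrt(np.sum(w * (x - xbar) ** 2)) * np.sqrt(np.sum(w * (y - ybar) ** 2))
    rho = num / den
    n = len(x)
    t_stat = abs(rho) * np.sqrt(n - 2) / np.sqrt(1 - rho ** 2)
    p_value = 2 * stats.t.sf(t_stat, df=n - 2)   # 2 * upper-tail probability of Student's t
    return rho, p_value

# Illustrative usage: with equal weights this reduces to the ordinary Pearson correlation
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 0.5 * x + rng.normal(size=100)
print(weighted_pearson(x, y, np.ones_like(x)))
```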

### 3.2.2. Vector Autoregressive Model (VAR)

We refer to the studies of Lütkepohl (2005) and Greene (2008) to briefly explain the VAR model as a linear regression with no constraints placed on the coefficients. The VAR(p) model with exogenous variables is written as

$$\mathbf{y}_{t} = A\,\mathbf{Y}_{t-1} + B_{0}\,\mathbf{x}_{t} + \mathbf{u}_{t} \tag{3}$$

Here, *y<sub>t</sub>* is the (*K* × 1) matrix of endogenous variables; *A* is the (*K* × *Kp*) matrix of coefficients on the lagged values *Y<sub>t−1</sub>*; *B*<sub>0</sub> is the matrix of coefficients on *x<sub>t</sub>*; *x<sub>t</sub>* is the (*M* × 1) matrix of exogenous variables; and *u<sub>t</sub>* is the (*K* × 1) matrix of white-noise innovations. Finally, *Y<sub>t</sub>* is the (*Kp* × 1) matrix

$$\mathbf{Y}_{t} = \begin{pmatrix} \mathbf{y}_{t} \\ \vdots \\ \mathbf{y}_{t-p+1} \end{pmatrix}.$$

The matrix *x<sub>t</sub>* also contains the intercept terms of the VAR model; it is therefore empty only when the model includes neither exogenous variables nor intercept terms. In summary, a VAR is a model in which each of the *K* variables is regressed as a linear function of *p* of its own lagged values, *p* lags of each of the other (*K* − 1) variables, and possibly exogenous variables. A VAR model with *p* lags is accordingly denoted VAR(p).
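As a brief illustration of Equation (3), the reduced-form VAR(p) with exogenous regressors can be estimated, for example, with statsmodels; the simulated series, the single exogenous variable, and the choice p = 2 below are placeholders rather than the data or lag order used in this paper.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Placeholder data: K = 2 endogenous series and M = 1 exogenous series
rng = np.random.default_rng(1)
T = 200
endog = pd.DataFrame({"y1": rng.normal(size=T), "y2": rng.normal(size=T)})
exog = pd.DataFrame({"x1": rng.normal(size=T)})

# y_t = A Y_{t-1} + B_0 x_t + u_t, estimated equation by equation with OLS
results = VAR(endog, exog=exog).fit(2)   # p = 2 lags
print(results.summary())                 # estimated coefficients in A and B_0
print(results.sigma_u)                   # estimated covariance matrix of the innovations u_t
```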

### 3.2.3. Structural Vector Autoregressive Model (SVAR)

In this study, we briefly introduce the theoretical framework of the Structural Vector Autoregressive Model (SVAR) following Amisano and Giannini (2012). Assume that we have the full system of interacting variables as follows:

$$y_{t} = \delta_{1} + b_{12}\theta_{t} + \gamma_{11}y_{t-1} + \gamma_{12}\theta_{t-1} + \varepsilon_{1t} \tag{4}$$

$$\theta_{t} = \delta_{2} + b_{21}y_{t} + \gamma_{21}y_{t-1} + \gamma_{22}\theta_{t-1} + \varepsilon_{2t} \tag{5}$$


Therefore, we can capture the mutual interaction between these variables in our model. In matrix notation, we have

$$\begin{bmatrix} 1 & -b_{12} \\ -b_{21} & 1 \end{bmatrix} \begin{bmatrix} y_{t} \\ \theta_{t} \end{bmatrix} = \begin{bmatrix} \delta_{1} \\ \delta_{2} \end{bmatrix} + \begin{bmatrix} \gamma_{11} & \gamma_{12} \\ \gamma_{21} & \gamma_{22} \end{bmatrix} \begin{bmatrix} y_{t-1} \\ \theta_{t-1} \end{bmatrix} + \begin{bmatrix} \varepsilon_{1t} \\ \varepsilon_{2t} \end{bmatrix} \tag{6}$$

Compactly, letting *B*, Δ, and Γ denote the coefficient matrices in Equation (6) and *ξ<sub>t</sub>* the vector of endogenous variables, we can write

$$B\xi_{t} = \Delta + \Gamma \xi_{t-1} + \varepsilon_{t} \tag{7}$$


This SVAR can be rewritten in VAR form by premultiplication by *B*<sup>−1</sup>:

$$\xi_{t} = A_{0} + A_{1}\xi_{t-1} + u_{t}$$

Here, *u<sub>t</sub>* = *B*<sup>−1</sup>*ε<sub>t</sub>*, *A*<sub>0</sub> = *B*<sup>−1</sup>Δ, and *A*<sub>1</sub> = *B*<sup>−1</sup>Γ. Since the *ε<sub>t</sub>* generated by the SVAR are assumed to be i.i.d. white noise, the *u<sub>t</sub>* from the VAR have the following characteristics: (i) zero mean, (ii) constant variance, (iii) no individual autocorrelation and, most importantly, (iv) correlation between their individual components that differs from 0 unless *b*<sub>12</sub> = *b*<sub>21</sub> = 0. Finally, our calculation will examine

$$u_{t} = B^{-1}\varepsilon_{t} = \begin{pmatrix} \dfrac{\varepsilon_{1t} + b_{12}\varepsilon_{2t}}{1 - b_{12}b_{21}} \\[2ex] \dfrac{b_{21}\varepsilon_{1t} + \varepsilon_{2t}}{1 - b_{12}b_{21}} \end{pmatrix}$$
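A minimal numerical sketch of this premultiplication by *B*<sup>−1</sup>, using arbitrary illustrative values for the structural coefficients (they are not estimates from this study):

```python
import numpy as np

# Arbitrary illustrative structural coefficients
b12, b21 = 0.3, 0.4
B = np.array([[1.0, -b12],
              [-b21, 1.0]])          # contemporaneous matrix from Equation (6)
Delta = np.array([0.1, 0.2])         # structural intercepts (delta_1, delta_2)
Gamma = np.array([[0.5, 0.1],
                  [0.2, 0.4]])       # structural lag coefficients (gamma_ij)

B_inv = np.linalg.inv(B)
A0 = B_inv @ Delta                   # reduced-form intercepts A_0 = B^{-1} Delta
A1 = B_inv @ Gamma                   # reduced-form lag matrix  A_1 = B^{-1} Gamma

eps_t = np.array([0.05, -0.02])      # one draw of the structural shocks epsilon_t
u_t = B_inv @ eps_t                  # reduced-form innovations u_t = B^{-1} epsilon_t
print(A0, A1, u_t, sep="\n")
```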

There are also two types of SVAR restrictions: short-run and long-run. For the short-run restrictions, the Cholesky decomposition follows the study of Sims (1980), whereas the long-run SVAR follows Amisano and Giannini (2012), including their methodology for checking local identification.
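For the short-run case, a minimal sketch of the Cholesky step: assuming a recursive (lower-triangular) ordering in the spirit of Sims (1980), the Cholesky factor of the innovation covariance matrix gives the contemporaneous impact matrix. The covariance values below are illustrative only.

```python
import numpy as np

# Illustrative covariance matrix of the reduced-form innovations u_t
Sigma_u = np.array([[1.00, 0.35],
                    [0.35, 0.80]])

# Lower-triangular Cholesky factor P with Sigma_u = P P'
P = np.linalg.cholesky(Sigma_u)

# Orthogonalised (structural) shocks e_t = P^{-1} u_t have identity covariance
u_t = np.array([0.6, -0.3])
e_t = np.linalg.solve(P, u_t)
print(P)
print(e_t)
```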
