The objective of the study is to find out whether the volume of gold demand plays a significant role in influencing international benchmark gold prices. The international gold prices are fixed by the LBMA and are known as the AM fix and PM fix prices. As the largest consumers in the world gold markets, do India and China play a significant role in the price fixation? The countries included in the sample are India, the USA, Japan, China, Europe and the Middle East. Countries from Europe and the Middle East are grouped into a single variable called Europe and the Middle East, as individual countries’ gold demand is minuscule in the world gold markets. The total gold consumption of these countries represents around 78% of the world’s total gold consumption. Quarterly gold demand data and LBMA AM and PM fix prices are included in the analysis for the period from 1994 to 2020. The analysis treats the LBMA AM and PM fix prices as the dependent variables and gold demand as the independent variables. The data are sourced from the World Gold Council and LBMA websites.
The quarterly data are converted into monthly data by using the Cubic Spline Method, which removes the seasonal effect. The preliminary analysis is carried out through graphs and descriptive statistics. The stationarity properties of the variables are examined through the ADF and PP unit root tests, and the optimum lag length is obtained from the VAR lag length selection criteria. The Johansen Cointegration test is used to determine the long-run relationship between international gold prices and gold demand. The VAR Granger Causality/Block Exogeneity test, Impulse Response Function, and Variance Decomposition tools are used to examine the relationships between the variables. The following hypotheses are considered and tested in the study:
3.1. Johansen Cointegration Test
This study uses the Johansen Cointegration method (Johansen 1988, 1991; Johansen and Juselius 1990), because this method is suitable for testing the long-run relationship of more than two variables. There are two test statistics for cointegration under the Johansen approach, which are formulated as
λ_trace(r) = −T ∑_{i=r+1}^{g} ln(1 − λ̂_i)  (1)

λ_max(r, r + 1) = −T ln(1 − λ̂_{r+1})  (2)

where r is the number of cointegrating vectors under the null hypothesis and λ̂_i is the estimated value for the ith ordered eigenvalue from the Π matrix. Intuitively, the larger λ̂_i is, the larger and more negative ln(1 − λ̂_i) will be and, hence, the larger will be the test statistic. Each eigenvalue will have associated with it a different cointegrating vector, which will be its eigenvector. A significantly non-zero eigenvalue indicates a significant cointegrating vector.

λ_trace is a joint test where the null is that the number of cointegrating vectors is less than or equal to r against an unspecified or general alternative that there are more than r. It starts with g eigenvalues, and then successively the largest is removed. λ_trace = 0 when all the λ_i = 0, for i = 1, …, g. λ_max conducts separate tests on each eigenvalue, and has as its null hypothesis that the number of cointegrating vectors is r against an alternative of r + 1.
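As a minimal sketch (with hypothetical eigenvalues and sample size, not values estimated in this study), the two Johansen statistics above can be computed directly from the ordered eigenvalues of the Π matrix:

```python
import math

def johansen_trace(eigenvalues, T, r):
    # lambda_trace(r) = -T * sum_{i=r+1..g} ln(1 - lambda_i)
    # eigenvalues must be ordered largest first; index r corresponds to lambda_{r+1}
    return -T * sum(math.log(1.0 - lam) for lam in eigenvalues[r:])

def johansen_max(eigenvalues, T, r):
    # lambda_max(r, r+1) = -T * ln(1 - lambda_{r+1})
    return -T * math.log(1.0 - eigenvalues[r])

# Hypothetical ordered eigenvalues from an estimated Pi matrix (g = 3)
eigs = [0.25, 0.10, 0.02]
T = 100  # hypothetical sample size
for r in range(len(eigs)):
    print(f"r = {r}: trace = {johansen_trace(eigs, T, r):.2f}, "
          f"max = {johansen_max(eigs, T, r):.2f}")
```

Each statistic is compared with its critical value, and testing proceeds sequentially from r = 0 upward until the null is first not rejected.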
3.2. Granger Causality Block Exogeneity Wald Test
It is likely that, when a VAR includes many lags of variables, it will be difficult to see which sets of variables have significant effects on each dependent variable and which do not. In order to address this issue, tests are usually conducted that restrict all the lags of a particular variable to zero. A block exogeneity test is useful for detecting whether to incorporate a variable into a VAR. Given the aforementioned distinction between causality and exogeneity, this multivariate generalization of the Granger causality test should actually be called a “block causality” test. In any event, the issue is to determine whether lags of one variable, say w_t, Granger-cause any other of the variables in the system. In the three-variable case with w_t, y_t, and z_t, the test is whether lags of w Granger-cause either y or z in the system. In essence, the block exogeneity test restricts all lags of w in the y_t and z_t equations to be equal to zero. This cross-equation restriction is properly tested using the likelihood ratio test given by Equation (3). Estimate the y_t and z_t equations using lagged values of {y_t}, {z_t}, and {w_t} and calculate Σ_u. Re-estimate excluding the lagged values of {w_t} and calculate Σ_r. Next, form the likelihood ratio statistic:

(T − c)(ln|Σ_r| − ln|Σ_u|)  (3)

This statistic has a chi-square distribution with degrees of freedom equal to 2p (since p lagged values of {w_t} are excluded from each equation). Here, c = 3p + 1, since the unrestricted y_t and z_t equations each contain p lags of {y_t}, {z_t}, and {w_t} plus a constant.
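The likelihood ratio computation can be sketched as follows, using hypothetical 2 × 2 residual covariance matrices rather than estimates from the study’s data (all numbers are illustrative assumptions):

```python
import math

def det2(m):
    # Determinant of a 2 x 2 matrix given as [[a, b], [c, d]]
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def block_exogeneity_lr(sigma_r, sigma_u, T, p):
    # LR = (T - c) * (ln|Sigma_r| - ln|Sigma_u|), with c = 3p + 1 parameters
    # per unrestricted equation; degrees of freedom = 2p
    c = 3 * p + 1
    lr = (T - c) * (math.log(det2(sigma_r)) - math.log(det2(sigma_u)))
    return lr, 2 * p

# Hypothetical residual covariance matrices from the y and z equations
sigma_u = [[1.00, 0.20], [0.20, 0.80]]  # unrestricted (lags of w included)
sigma_r = [[1.10, 0.25], [0.25, 0.90]]  # restricted (lags of w excluded)
lr, df = block_exogeneity_lr(sigma_r, sigma_u, T=100, p=4)
```

A value of lr exceeding the chi-square critical value with df degrees of freedom rejects block exogeneity, i.e., lags of w do help explain y and z.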
3.3. VAR Impulse Response Function
Impulse response analysis is another way of inspecting and evaluating the impact of shocks across the system. In other words, impulse responses trace out the responsiveness of the dependent variables in the VAR to shocks to each of the variables. So, for each variable from each equation separately, a unit shock is applied to the error, and the effects upon the VAR system over time are noted. Thus, if there are g variables in a system, a total of g² impulse responses could be generated. While persistence measures focus on the long-run properties of shocks, the impulse response traces the evolutionary path of the impact over time.
Impulse response analysis, together with variance decomposition, forms innovation accounting for sources of information and information transmission in a multivariate dynamic system. The way that this is achieved in practice is by expressing the VAR model as a VMA—that is, the vector autoregressive model is written as a vector moving average. Provided that the system is stable, the shock should gradually die away. Considering the following vector autoregression (VAR) process:
y_t = A_0 + A_1 y_{t−1} + A_2 y_{t−2} + … + A_k y_{t−k} + u_t  (4)

where y_t is an n × 1 vector of variables, A_0 is an n × 1 vector of intercepts, A_τ (τ = 1, …, k) are n × n matrices of coefficients, and u_t is an n-dimensional vector of white noise processes with E(u_t) = 0, the covariance matrix E(u_t u_t′) = Σ_u being non-singular for all t, and E(u_t u_s′) = 0 for t ≠ s. Without losing generality, exogenous variables other than lagged y_t are omitted for simplicity. A stationary VAR process of Equation (4) can be shown to have an MA representation of the following form:
y_t = C + ∑_{τ=0}^{∞} B_τ u_{t−τ}  (5)

where C = E(y_t) = (I − A_1 − … − A_k)^{−1} A_0, and B_τ can be computed from A_τ recursively as B_τ = ∑_{s=1}^{τ} B_{τ−s} A_s, τ = 1, 2, …, with B_0 = I and B_τ = 0 for τ < 0.
The MA coefficients in Equation (5) can be used to examine the interaction between variables. For example, b_{ij,τ}, the ijth element of B_τ, is interpreted as the reaction, or impulse response, of the ith variable to a shock τ periods ago in the jth variable, provided that the effect is isolated from the influence of other shocks in the system. Therefore, a seemingly crucial problem in the study of impulse response is to isolate the effect of a shock on a variable of interest from the influence of all other shocks, which is achieved mainly through orthogonalisation.
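The recursion for the MA coefficient matrices can be sketched for a hypothetical bivariate VAR(1); the coefficient matrix below is an illustrative assumption, not an estimate from this study:

```python
import numpy as np

def ma_coefficients(A, horizon):
    # B_0 = I; B_tau = sum_{s=1..min(tau,k)} B_{tau-s} A_s; B_tau = 0 for tau < 0
    n = A[0].shape[0]
    B = [np.eye(n)]
    for tau in range(1, horizon + 1):
        B.append(sum(B[tau - s] @ A[s - 1] for s in range(1, min(tau, len(A)) + 1)))
    return B

# Hypothetical stationary VAR(1): y_t = A0 + A1 y_{t-1} + u_t
A1 = np.array([[0.5, 0.1],
               [0.2, 0.3]])
B = ma_coefficients([A1], horizon=3)
# B[tau][i, j] is the response of variable i to a unit shock in variable j
# tau periods earlier (before orthogonalisation)
```

For a VAR(1) the recursion collapses to B_τ = A_1^τ; higher-order VARs simply contribute more terms to each sum.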
Orthogonalisation per se is straightforward and simple. The covariance matrix Σ_u, in general, has non-zero off-diagonal elements. Orthogonalisation is a transformation that results in a set of new residuals or innovations ν_t satisfying E(ν_t ν_t′) = I. The procedure is to choose any non-singular matrix G of transformation for ν_t = G u_t so that E(ν_t ν_t′) = G Σ_u G′ = I. In the process of transformation or orthogonalisation, u_t is replaced by ν_t = G u_t and B_τ is replaced by B_τ G^{−1}, and Equation (5) becomes:

y_t = C + ∑_{τ=0}^{∞} B_τ G^{−1} ν_{t−τ}  (6)
Suppose that there is a unit shock to, for example, the jth variable at time 0, that there is no further shock afterwards, and that there are no shocks to any other variables. Then, after k periods, y_t will evolve to the level:

y_k = C + ∑_{τ=0}^{k} B_τ G^{−1} e(j)  (7)

where e(j) is a selecting vector with its jth element being one and all other elements being zero. The accumulated impact is the summation of the coefficient matrices from time 0 to k. This is made possible because the covariance matrix of the transformed residuals is a unit matrix I with off-diagonal elements being zero. Impulse response is usually exhibited graphically based on Equation (7). A shock to each of the n variables in the system results in n impulse response functions and graphs, so there is a total of n × n graphs showing these impulse response functions.
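The text leaves G general; one common concrete choice (an assumption here, not specified in the text) is the inverse of the Cholesky factor of the residual covariance matrix, which makes the transformed innovations satisfy the required unit covariance:

```python
import numpy as np

# Hypothetical residual covariance matrix Sigma_u with a non-zero off-diagonal
sigma_u = np.array([[1.0, 0.4],
                    [0.4, 0.5]])

P = np.linalg.cholesky(sigma_u)  # Sigma_u = P P'
G = np.linalg.inv(P)             # one valid choice of the transformation G

# nu_t = G u_t then has covariance G Sigma_u G' = I, as orthogonalisation requires
identity = G @ sigma_u @ G.T
```

Note that this Cholesky-based choice makes the orthogonalised impulse responses depend on the ordering of the variables in the system.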
3.4. VAR Variance Decomposition
Variance decompositions offer a slightly different method for examining VAR system dynamics. They give the proportion of the movements in the dependent variables that are due to their ‘own’ shocks, versus shocks to the other variables. A shock to the ith variable will directly affect that variable, of course, but it will also be transmitted to all of the other variables in the system through the dynamic structure of the VAR. Variance decompositions determine how much of the s-step-ahead forecast error variance of a given variable is explained by innovations to each explanatory variable. In practice, it is usually observed that own series shocks explain most of the (forecast) error variance of the series in a VAR. To some extent, impulse responses and variance decompositions offer very similar information.
Since the residuals have been orthogonalised, variance decomposition is straightforward. The k-period ahead forecast errors in Equation (5) or (6) are:

∑_{τ=0}^{k−1} B_τ u_{t+k−τ} = ∑_{τ=0}^{k−1} B_τ G^{−1} ν_{t+k−τ}  (8)

The covariance matrix of the k-period ahead forecast errors is:

∑_{τ=0}^{k−1} B_τ Σ_u B_τ′ = ∑_{τ=0}^{k−1} (B_τ G^{−1})(B_τ G^{−1})′  (9)
The right-hand side of Equation (9) just reminds the reader that the outcome of variance decomposition will be the same irrespective of G. The choice or derivation of matrix G only matters when the impulse response function is concerned to isolate the effect from the influence of other sources.
The variance of forecast errors attributed to a shock to the jth variable can be picked out by a selecting vector e(j), with the jth element being one and all other elements being zero:

∑_{τ=0}^{k−1} (B_τ G^{−1}) e(j) e(j)′ (B_τ G^{−1})′  (10)
Furthermore, the effect on the ith variable due to a shock to the jth variable, or the contribution to the ith variable’s forecast error by a shock to the jth variable, can be picked out by a second selecting vector e(i), with the ith element being one and all other elements being zero:

∑_{τ=0}^{k−1} e(i)′ (B_τ G^{−1}) e(j) e(j)′ (B_τ G^{−1})′ e(i)  (11)
In relative terms, the contribution is expressed as a percentage of the total variance:

[∑_{τ=0}^{k−1} e(i)′ (B_τ G^{−1}) e(j) e(j)′ (B_τ G^{−1})′ e(i)] / [∑_{τ=0}^{k−1} e(i)′ B_τ Σ_u B_τ′ e(i)] × 100%  (12)

which sums up to 100 percent when added across j = 1, …, n.
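Putting the pieces together, a minimal sketch of the forecast error variance decomposition for a hypothetical bivariate VAR(1) with a Cholesky-based G (all numbers are illustrative assumptions, not estimates from this study):

```python
import numpy as np

def fevd(A1, sigma_u, k):
    # Percentage of variable i's k-step forecast error variance attributable to
    # orthogonalised shocks in variable j, taking G = P^{-1} with Sigma_u = P P'
    P = np.linalg.cholesky(sigma_u)   # then B_tau G^{-1} = B_tau P
    n = A1.shape[0]
    B = np.eye(n)
    num = np.zeros((n, n))            # num[i, j] = sum_tau (e_i' B_tau P e_j)^2
    for _ in range(k):
        theta = B @ P
        num += theta ** 2
        B = B @ A1                    # B_tau = A1^tau for a VAR(1)
    total = num.sum(axis=1, keepdims=True)  # row totals: the forecast error variance
    return 100.0 * num / total

A1 = np.array([[0.5, 0.1],
               [0.2, 0.3]])
sigma_u = np.array([[1.0, 0.4],
                    [0.4, 0.5]])
shares = fevd(A1, sigma_u, k=8)  # each row sums to 100 percent
```

As the text notes, own-series shocks typically dominate each row at short horizons.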