**5. Application**

Due to the complexity of the proposed model, the likelihood function may be flat in the neighbourhood of its maxima, so optimization by traditional gradient-based procedures may fail. An alternative is to use Bayesian methods. Some references on Bayesian procedures for the family of ARCH processes are Geweke (1989), Kleibergen and van Dijk (1993), Geweke (1994) and Bauwens and Lubrano (1998).

**Figure 3.** Autocorrelation functions of the Euro-Dollar returns and squared returns after seasonal adjustment.

**Figure 4.** Euro-Dollar returns, absolute returns, squared returns and histogram after removing the seasonal pattern.

It is well known that when the analytical expressions of the full conditional distributions are known we can use Gibbs sampling. However, if the conditional distributions are not of known form, we need to modify the algorithm or use another one, such as Metropolis-Hastings. A further alternative is the Griddy-Gibbs sampler of Ritter and Tanner (1992).

Griddy-Gibbs sampling can be used when the full conditional distribution of at least one parameter does not have a known closed form but has an analytical expression that can be evaluated on a grid of points. We evaluate this expression on the grid and, by numerical integration, approximate the conditional distribution function, from which random variates can be generated by inversion; see Davis and Rabinowitz (1975).
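A minimal sketch of a single Griddy-Gibbs draw may help fix ideas (our illustration, not the authors' implementation; the function and argument names such as `log_cond_density` are hypothetical): evaluate the log conditional density on the grid, build the cumulative distribution by trapezoidal integration, then invert it by interpolation.

```python
import numpy as np

def griddy_gibbs_draw(log_cond_density, lo, hi, n_grid=50, rng=None):
    """Draw one value of a parameter whose full conditional density can
    only be evaluated pointwise (no closed-form sampler available).

    log_cond_density : callable giving the log of the (unnormalized)
                       full conditional at a scalar point.
    lo, hi           : current grid window.
    n_grid           : number of grid points (the text suggests 50).
    """
    rng = np.random.default_rng() if rng is None else rng
    grid = np.linspace(lo, hi, n_grid)
    logp = np.array([log_cond_density(x) for x in grid])
    p = np.exp(logp - logp.max())  # subtract max for numerical stability
    # Trapezoidal cumulative integral -> unnormalized CDF on the grid.
    cdf = np.concatenate(
        ([0.0], np.cumsum((p[1:] + p[:-1]) / 2 * np.diff(grid)))
    )
    cdf /= cdf[-1]                 # normalize to a proper CDF
    u = rng.uniform()
    return np.interp(u, cdf, grid)  # inverse-CDF draw by interpolation
```

For example, with `log_cond_density = lambda x: -(x - 2)**2 / 2` (a Gaussian conditional) and window `(-3, 7)`, repeated calls produce draws with mean near 2 and standard deviation near 1.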

A practical issue with Griddy-Gibbs sampling is the choice of the window and of the number of points at which the desired function is evaluated numerically. An inadequate grid can cause errors in the parameter estimates. In general, a grid of 50 points seems adequate for a good evaluation.

To reduce variance, we compute the conditional mean

$$\frac{1}{N}\sum_{n=1}^{N} E\left(\theta_i \mid \theta_1^n, \dots, \theta_{i-1}^n, \theta_{i+1}^n, \dots, \theta_k^n, y\right)$$

instead of $\sum_{n=1}^{N} \theta_i^n / N$ to estimate the posterior mean $E(\theta_i \mid y)$. Here $\theta_i^n$ denotes the value of the parameter $\theta_i$ at iteration $n$, and $k$ is the number of parameters.
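This variance-reduction idea can be illustrated on a toy Gibbs sampler of our own (not the paper's model): a bivariate normal with correlation $\rho$, where the conditional mean $E(x \mid y) = \rho y$ is available in closed form, so both the raw draws and the conditional means estimate $E(x) = 0$.

```python
import numpy as np

# Toy illustration (ours): Gibbs sampling on a bivariate standard normal
# with correlation rho.  The average of the conditional means E(x | y^n)
# estimates E(x) = 0 just as the average of the draws x^n does, but
# typically with smaller Monte Carlo variance.
rho = 0.9
rng = np.random.default_rng(1)
N = 5000
x = y = 0.0
draws_x, cond_means = [], []
for _ in range(N):
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))  # draw x | y
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))  # draw y | x
    draws_x.append(x)                             # raw draw of x
    cond_means.append(rho * y)                    # E(x | y): no sampling noise

raw_estimate = np.mean(draws_x)    # plain ergodic average
rb_estimate = np.mean(cond_means)  # conditional-mean average
```

Both estimates are close to the true value 0; across repeated runs the conditional-mean version fluctuates less.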

An important fact is that aggregated returns generally have larger magnitudes than non-aggregated returns, so the coefficients of the components with larger aggregations take smaller values. For this reason, we use the impacts defined above to study the contribution of each component to the model.

In order to establish the number of components in the PHARCH model, we use information on financial-market behavior based on the behavior of the traders. We thus consider five components, described in Table 1, corresponding to information arriving at the market at rates of 15 min, 1 h, 1 day, 1 week and 1 month.

This means that we need to estimate the parameters of a PHARCH(5) process with aggregations 1, 4, 96, 480 and 1920 as follows.

$$\begin{aligned}
r_t & = \sigma_t \varepsilon_t \\
\sigma_t^2 & = C_0 + C_1 r_{t-1}^2 + C_2 \left( r_{t-1} + \dots + r_{t-4} \right)^2 + C_3 \left( r_{t-1} + \dots + r_{t-96} \right)^2 \\
& \quad + C_4 \left( r_{t-1} + \dots + r_{t-480} \right)^2 + C_5 \left( r_{t-1} + \dots + r_{t-1920} \right)^2
\end{aligned} \tag{11}$$

where $C_j \geq 0$, $j = 1, \dots, 5$, $C_0 > 0$, and $\varepsilon_t \sim t(0, 1, v)$.
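A direct transcription of the variance recursion in (11) might look as follows (a sketch; the function and argument names are ours, not from the paper):

```python
import numpy as np

def pharch5_sigma2(r, t, C, aggs=(1, 4, 96, 480, 1920)):
    """Conditional variance of the PHARCH(5) process in Eq. (11):
    sigma_t^2 = C0 + sum_j C_j * (r_{t-1} + ... + r_{t-aggs[j-1]})^2.

    r    : 1-D array of returns.
    t    : time index, with t >= max(aggs).
    C    : coefficients (C0, C1, ..., C5) with C0 > 0, Cj >= 0.
    """
    s2 = C[0]
    for j, m in enumerate(aggs, start=1):
        partial = r[t - m:t].sum()   # r_{t-1} + ... + r_{t-m}
        s2 += C[j] * partial**2
    return s2
```

For instance, with `r` all ones and `C = (0.1, 0.2, 0.5, 0, 0, 0)`, the partial sums are 1 and 4, giving `0.1 + 0.2 * 1 + 0.5 * 16 = 8.3`.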

The number of parameters to estimate is seven because we consider $\varepsilon_t \sim t(0, 1, v)$, $v > 2$. We used an autoregressive process to filter the data, taking into account the information given by the autocorrelation function of the returns shown in Figure 3.

We consider non-informative priors, that is, uniform distributions on the parameter space: $C_0, C_1, \dots, C_5 \sim U(0, 1)$ and $v \sim U(3, ct)$, where $U$ denotes the uniform distribution and $ct$ is a large number; in particular, we used $ct = 50$.
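Under these flat priors the log-posterior is, up to a constant, the log-likelihood restricted to the prior box. A sketch (ours, not the authors' code) is given below; it assumes that $t(0, 1, v)$ denotes a Student-$t$ scaled to unit variance, which is one possible reading of the notation.

```python
import numpy as np
from scipy.stats import t as student_t

def log_posterior(params, r, aggs=(1, 4, 96, 480, 1920)):
    """Log-posterior of the PHARCH(5) model under the flat priors
    C0..C5 ~ U(0,1), v ~ U(3, 50).  params = (C0, ..., C5, v).
    Assumption: eps_t is Student-t rescaled to unit variance."""
    C, v = params[:6], params[6]
    # Flat priors: zero density (log = -inf) outside the box.
    if not (0 < C[0] < 1 and all(0 <= c < 1 for c in C[1:]) and 3 < v < 50):
        return -np.inf
    m = max(aggs)
    scale = np.sqrt((v - 2) / v)   # unit-variance scaling (our assumption)
    logp = 0.0
    for t_idx in range(m, len(r)):
        s2 = C[0] + sum(C[j + 1] * r[t_idx - a:t_idx].sum() ** 2
                        for j, a in enumerate(aggs))
        sig = np.sqrt(s2)
        # Density of r_t = sigma_t * eps_t: (1/sigma) f(r/sigma).
        logp += student_t.logpdf(r[t_idx] / sig, df=v, scale=scale) - np.log(sig)
    return logp
```

This is the function the Griddy-Gibbs sampler evaluates on the grid, one coordinate at a time with the other parameters held fixed.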

Estimates using maximum likelihood (ML) are shown in Table 2, and the corresponding impacts are shown in Figure 5. The optimizer used was simulated annealing; see Belisle (1992). Several problems were faced in the ML estimation because in some situations the optimizer did not converge. Sometimes this can be solved by using initial values near the optimum, but such values are rarely available in practice; hence the need for alternative procedures.
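For illustration, a simulated-annealing run can be set up with SciPy's `dual_annealing` (a related variant, not the exact algorithm of Belisle, 1992) on a multimodal toy objective of our own; the appeal is that no gradients are needed, which helps on flat or multimodal likelihood surfaces.

```python
import numpy as np
from scipy.optimize import dual_annealing

# Toy multimodal objective (ours), standing in for a negative
# log-likelihood with several local minima.
def bumpy(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2 + np.sin(5 * x[0]) ** 2

# Bounded global search by simulated annealing; no starting point or
# gradient information is required.
res = dual_annealing(bumpy, bounds=[(-5, 5), (-5, 5)], seed=0)
```

The sine term creates local minima that trap gradient-based optimizers started far from the solution; the annealing run still locates the global basin near $x_1 = -2$.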

As we can see, the impact of the components decreases for larger aggregations. This is natural, because intraday traders dominate the market. Note also that the weekly component has an impact similar to the monthly one, meaning that both contribute similar weight to predicting volatility. The results show that an impact can be significant even when the corresponding parameter is small. These estimates will serve as a benchmark for the Griddy-Gibbs estimates.

In the Griddy-Gibbs sampling we use a moving window: at each iteration a new window is defined as a function of the mean, mode and standard deviation of the draws.
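One possible concrete rule is sketched below (our illustration; the paper does not reproduce its exact formula): center the next window between the sample mean and a histogram-based mode, with half-width proportional to the standard deviation.

```python
import numpy as np

def update_window(draws, k=3.0):
    """One possible moving-window rule for the Griddy-Gibbs grid (a
    sketch combining mean, mode and standard deviation, as the text
    describes; the exact rule used by the authors may differ).
    Returns the (lo, hi) bounds of the next grid window."""
    draws = np.asarray(draws)
    mean, sd = draws.mean(), draws.std()
    # Crude mode estimate: midpoint of the fullest histogram bin.
    counts, edges = np.histogram(draws, bins=20)
    mode = 0.5 * (edges[counts.argmax()] + edges[counts.argmax() + 1])
    center = 0.5 * (mean + mode)
    return center - k * sd, center + k * sd
```

Recentering the grid this way keeps the 50 evaluation points concentrated where the conditional density actually has mass, avoiding the estimation errors an inadequate fixed grid can cause.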


**Table 1.** Component description of PHARCH process for Euro-Dollar.


**Table 2.** Parameter estimation by maximum likelihood using the simulated annealing optimizer.


Table 3 shows the results of the estimation of a parsimonious HARCH(5) model for the Euro-Dollar, using this selection criterion for a moving window for each parameter, on which the conditional density is computed. The density was evaluated at 50 grid points.

We used both the unconditional and the conditional mean at each iteration to calculate the parameter estimates. As expected, the conditional method was faster than the unconditional one, but the difference was very small.

We see that the values obtained by the two methods (maximum likelihood and Griddy-Gibbs sampling) are practically the same.

Figure 6 shows the convergence of the parameters at each Griddy-Gibbs iteration step; convergence is fast.

**Figure 5.** Impact of the components estimated by maximum likelihood.

**Figure 6.** Convergence of the parameters using Griddy-Gibbs sampler.

Now, we compare PHARCH modeling with GARCH modeling. In Figures 7–9 we present a residual analysis after the fitting of a GARCH model. In Figures 10–12 we have the corresponding graphs for the PHARCH(5) fitting. We see a slightly better fit of the PHARCH model. If we use the prediction mean squared error (PMSE) as a criterion for comparison, we obtain the values 15.58 and 15.20 for GARCH and PHARCH modeling, respectively, using the standardized residuals and 1000 values for the prediction period.
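The PMSE criterion used above is simply the mean squared difference between actual and predicted values over the hold-out period, for example:

```python
import numpy as np

def pmse(actual, predicted):
    """Prediction mean squared error over the hold-out period, the
    criterion used to compare the GARCH and PHARCH fits."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return np.mean((actual - predicted) ** 2)
```

For the comparison in the text it would be applied to the 1000 standardized residuals of the prediction period for each model; a lower value indicates better out-of-sample performance.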


**Table 3.** Estimated parameters for the PHARCH(5) model with aggregations 1, 4, 96, 480 and 1920 for the Euro-Dollar series, using Griddy-Gibbs sampling.

**Figure 7.** Euro-Dollar residuals, absolute residuals, squared residuals and histogram of the residuals after fitting a GARCH process.

**Figure 8.** Autocorrelation and partial autocorrelation functions of the residuals, absolute residuals and squared residuals after GARCH fitting.


**Figure 9.** QQ-Plot of GARCH Residuals.

**Figure 10.** Euro-Dollar residuals, absolute residuals, squared residuals and histogram of the residuals after fitting a PHARCH process.

**Figure 11.** Autocorrelation and partial autocorrelation functions of the residuals, absolute residuals and squared residuals after PHARCH fitting.


**Figure 12.** QQ-plot of PHARCH residuals.
