*2.2. Maximum Likelihood Estimation*

The logarithm of the likelihood function for the model (6) is given by

$$\mathcal{L}(\boldsymbol{\theta}) = \sum\_{t=1}^{n} \mathcal{L}\_t(\boldsymbol{\theta}), \tag{7}$$

where

$$\mathcal{L}\_t(\boldsymbol{\theta}) = -\frac{1}{2}\log|\boldsymbol{\Sigma}| + \log\{g(\delta\_t)\} = \log k\_p(\eta) - \frac{1}{2}\log|\boldsymbol{\Sigma}| - \frac{1}{2\eta}(1 + \eta p)\log(1 + c(\eta)\delta\_t)$$

is the contribution from the *t*th return to the log-likelihood, *t* = 1, 2, . . . , *n*.
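As a concrete illustration, the per-observation contribution $\mathcal{L}\_t$ can be evaluated numerically. The sketch below treats the univariate case ($p = 1$) and *assumes* the parametrization $c(\eta) = \eta$, i.e., a standard Student-*t* with $\nu = 1/\eta$ degrees of freedom, together with the corresponding normalizing constant $k\_1(\eta)$; both quantities are fixed by model (6) in the paper, so this choice is an assumption here, not a restatement.

```python
import math

def loglik_t(y, x, alpha, beta, sigma2, eta):
    """Per-observation log-likelihood L_t for p = 1.

    Assumes c(eta) = eta (standard Student-t with nu = 1/eta), so that
    k_1(eta) = Gamma((1+eta)/(2*eta)) * sqrt(eta/pi) / Gamma(1/(2*eta)).
    """
    eps = y - alpha - beta * x      # residual epsilon_t
    delta = eps * eps / sigma2      # squared Mahalanobis distance delta_t
    log_kp = (math.lgamma((1 + eta) / (2 * eta))
              - math.lgamma(1 / (2 * eta))
              + 0.5 * math.log(eta / math.pi))
    return (log_kp - 0.5 * math.log(sigma2)
            - (1 + eta) / (2 * eta) * math.log(1 + eta * delta))
```

As $\eta \to 0$ the expression approaches the Gaussian log-density, which provides a quick sanity check on the implementation.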

From (7), the score function is given by

$$\mathcal{U}(\boldsymbol{\theta}) = \sum\_{t=1}^{n} \mathcal{U}\_t(\boldsymbol{\theta}), \tag{8}$$

where $\mathcal{U}\_t(\boldsymbol{\theta}) = (\mathcal{U}\_{t\alpha}^T, \mathcal{U}\_{t\beta}^T, \mathcal{U}\_{t\sigma}^T, \mathcal{U}\_{t\eta})^T$ with

$$\begin{aligned} \mathcal{U}\_{t\alpha} &= \omega\_t \boldsymbol{\Sigma}^{-1} \boldsymbol{\epsilon}\_t, \\ \mathcal{U}\_{t\beta} &= x\_t\, \mathcal{U}\_{t\alpha}, \\ \mathcal{U}\_{t\sigma} &= -\frac{1}{2} \mathcal{D}\_p^T \operatorname{vec}\Big( \boldsymbol{\Sigma}^{-1} - \omega\_t \boldsymbol{\Sigma}^{-1} \boldsymbol{\epsilon}\_t \boldsymbol{\epsilon}\_t^T \boldsymbol{\Sigma}^{-1} \Big), \quad \text{and} \\ \mathcal{U}\_{t\eta} &= \frac{1}{2\eta^2} \{c(\eta)p - \beta(\eta) - \omega\_t c(\eta)\delta\_t + \log(1 + c(\eta)\delta\_t)\}, \end{aligned}$$
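The score components above can be checked numerically against finite differences of $\mathcal{L}\_t$. The sketch below does this for $p = 1$, again *assuming* $c(\eta) = \eta$ (standard Student-*t* with $\nu = 1/\eta$), with the weight $\omega\_t$ as defined in the text following the score equations; the log-likelihood is restated so the block is self-contained.

```python
import math

def loglik_t(y, x, alpha, beta, sigma2, eta):
    # Per-observation log-likelihood L_t for p = 1, assuming c(eta) = eta.
    eps = y - alpha - beta * x
    delta = eps * eps / sigma2
    log_kp = (math.lgamma((1 + eta) / (2 * eta))
              - math.lgamma(1 / (2 * eta))
              + 0.5 * math.log(eta / math.pi))
    return (log_kp - 0.5 * math.log(sigma2)
            - (1 + eta) / (2 * eta) * math.log(1 + eta * delta))

def score_t(y, x, alpha, beta, sigma2, eta):
    # Score contributions (U_t_alpha, U_t_beta, U_t_sigma) for p = 1.
    eps = y - alpha - beta * x
    delta = eps * eps / sigma2
    # omega_t = (1 + eta p) c(eta) / (eta (1 + c(eta) delta_t)) with c = eta, p = 1
    omega = (1 + eta) / (1 + eta * delta)
    u_alpha = omega * eps / sigma2               # omega_t Sigma^{-1} eps_t
    u_beta = x * u_alpha                         # x_t U_t_alpha
    u_sigma = -0.5 * (1.0 / sigma2 - omega * eps * eps / sigma2 ** 2)
    return u_alpha, u_beta, u_sigma
```

Agreement between the analytic components and central differences of `loglik_t` is a useful guard against sign and parametrization errors.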

where

$$\omega\_t = \frac{1 + \eta p}{\eta}\,\frac{c(\eta)}{1 + c(\eta)\delta\_t}, \quad t = 1, \ldots, n, \qquad \beta(\eta) = \psi\left(\frac{1 + \eta p}{2\eta}\right) - \psi\left(\frac{1}{2\eta}\right),$$

*ψ*(*x*) is the digamma function, and $\mathcal{D}\_p$ is the duplication matrix; see Magnus and Neudecker (2007). It is difficult to obtain the maximum likelihood (ML) estimators directly from $\mathcal{U}(\boldsymbol{\theta}) = \mathbf{0}$. The EM algorithm has frequently been suggested for obtaining ML estimators in statistical models under the *t*-distribution, mainly because it leads to a simple implementation of an iteratively reweighted estimation procedure. As is well known, the *t*-distribution is a scale mixture of normal distributions (see Lange et al. 1989), which considerably facilitates the implementation of the EM algorithm. Based on the complete-data log-likelihood function, we obtain the expressions for the ML estimates (see Liu and Rubin 1995; Shoham 2002; Xie et al. 2007). Thus, in our case, as shown in Appendix B, the ML estimates of *α*, *β*, **Σ** and *η* are obtained as solutions of the following equations:

$$\hat{\boldsymbol{\alpha}} = \bar{\mathbf{y}}\_{\omega} - \hat{\boldsymbol{\beta}} \bar{x}\_{\omega}, \qquad \hat{\boldsymbol{\beta}} = \frac{\sum\_{t=1}^{n} \omega\_{t} (x\_{t} - \bar{x}\_{\omega}) (\mathbf{y}\_{t} - \bar{\mathbf{y}}\_{\omega})}{\sum\_{t=1}^{n} \omega\_{t} (x\_{t} - \bar{x}\_{\omega})^{2}}, \qquad \hat{\boldsymbol{\Sigma}} = \frac{1}{n} \sum\_{t=1}^{n} \omega\_{t} \hat{\boldsymbol{\epsilon}}\_{t} \hat{\boldsymbol{\epsilon}}\_{t}^{T}, \tag{9}$$

and

$$\hat{\eta}^{-1} = \frac{2}{a + \log a - 1} + 0.0416 \left\{ 1 + \operatorname{erf} \left( 0.6594 \log \left( \frac{2.1971}{a + \log a - 1} \right) \right) \right\},$$

where $\hat{\boldsymbol{\epsilon}}\_t = \mathbf{y}\_t - \hat{\boldsymbol{\alpha}} - \hat{\boldsymbol{\beta}} x\_t$; $\bar{\mathbf{y}}\_{\omega} = \sum\_{t=1}^{n} \omega\_t \mathbf{y}\_t / \sum\_{t=1}^{n} \omega\_t$; $\bar{x}\_{\omega} = \sum\_{t=1}^{n} \omega\_t x\_t / \sum\_{t=1}^{n} \omega\_t$; $\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int\_0^x \exp(-u^2)\,du$ is the error function; and $a = -(1/n) \sum\_{t=1}^{n} (v\_{t2} - v\_{t1})$, with

$$v\_{t1} = \frac{1 + p\eta}{1 + c(\eta)\delta\_t} \qquad \text{and} \qquad v\_{t2} = \psi\left(\frac{1 + p\eta}{2\eta}\right) - \log\left(\frac{1 + c(\eta)\delta\_t}{2\eta}\right),$$

for *t* = 1, ... , *n*. The iterative process given by Equation (9) was implemented in the **R** language. Note that under the normal model *ωt* = 1, *t* = 1, ..., *n*, so the estimators in (9) reduce to the ML estimators of *α*, *β* and **Σ** for the normal case. Under the *t*-model, the exact marginal distributions of $\hat{\boldsymbol{\alpha}}$, $\hat{\boldsymbol{\beta}}$ and $\hat{\boldsymbol{\Sigma}}$ are particularly difficult to obtain, whereas under the normal distribution the estimators of *α*, *β* and **Σ** have known exact marginal distributions (see Campbell et al. 1997).
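A minimal sketch of one pass of the iterative scheme, for $p = 1$ and under the *assumption* $c(\eta) = \eta$ (standard Student-*t*): the E-step computes the weights $\omega\_t$, and the M-step applies Equation (9). The degrees-of-freedom update via the closed-form approximation above is sketched as a separate function of the statistic $a$; inside the loop $\eta$ is held fixed here because computing $a$ requires the digamma function, which Python's standard library does not provide.

```python
import math

def nu_shoham(a):
    # Closed-form approximation (Shoham 2002) for the degrees-of-freedom
    # update: returns nu = 1/eta, given the statistic a defined in the text.
    b = a + math.log(a) - 1.0
    return 2.0 / b + 0.0416 * (1.0 + math.erf(0.6594 * math.log(2.1971 / b)))

def em_step(x, y, alpha, beta, sigma2, eta):
    # One pass of Equation (9) for p = 1, assuming c(eta) = eta.
    # E-step: weights omega_t = (1 + eta) / (1 + eta * delta_t).
    w = [(1 + eta) / (1 + eta * (yt - alpha - beta * xt) ** 2 / sigma2)
         for xt, yt in zip(x, y)]
    # M-step: weighted least-squares updates of alpha, beta and sigma2.
    sw = sum(w)
    xbar = sum(wt * xt for wt, xt in zip(w, x)) / sw
    ybar = sum(wt * yt for wt, yt in zip(w, y)) / sw
    beta_new = (sum(wt * (xt - xbar) * (yt - ybar) for wt, xt, yt in zip(w, x, y))
                / sum(wt * (xt - xbar) ** 2 for wt, xt in zip(w, x)))
    alpha_new = ybar - beta_new * xbar
    sigma2_new = sum(wt * (yt - alpha_new - beta_new * xt) ** 2
                     for wt, xt, yt in zip(w, x, y)) / len(x)
    return alpha_new, beta_new, sigma2_new
```

Because $\omega\_t$ decreases with $\delta\_t$, observations far from the fitted line are downweighted at each pass, which is the robustness mechanism of the *t*-model relative to ordinary (normal-model) least squares.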
