**4. Simulation**

In this section, the performance of the preliminary test and shrinkage SUR ridge estimators of *β* are investigated via Monte Carlo simulations. We generate the response from the following model:

$$
\begin{pmatrix} \mathbf{Y}\_1 \\ \mathbf{Y}\_2 \\ \vdots \\ \mathbf{Y}\_M \end{pmatrix} = \begin{pmatrix} \mathbf{X}\_1 & \mathbf{0} & \dots & \mathbf{0} \\ \mathbf{0} & \mathbf{X}\_2 & \dots & \mathbf{0} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{0} & \mathbf{0} & \dots & \mathbf{X}\_M \end{pmatrix} \begin{pmatrix} \boldsymbol{\beta}\_1 \\ \boldsymbol{\beta}\_2 \\ \vdots \\ \boldsymbol{\beta}\_M \end{pmatrix} + \begin{pmatrix} \boldsymbol{\varepsilon}\_1 \\ \boldsymbol{\varepsilon}\_2 \\ \vdots \\ \boldsymbol{\varepsilon}\_M \end{pmatrix}
$$

The explanatory variables are generated from a multivariate normal distribution $\mathrm{MVN}_{p_i}(\mathbf{0}, \boldsymbol{\Sigma}_x)$, and the random errors are generated from $\mathrm{MVN}_M(\mathbf{0}, \boldsymbol{\Sigma}_\varepsilon)$. We summarize the simulation details as follows:
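As an illustration, the data-generating step can be sketched in Python. This is a minimal sketch, not the authors' code; in particular, the AR(1)-type covariance structure $\rho^{|i-j|}$ used for both $\boldsymbol{\Sigma}_x$ and $\boldsymbol{\Sigma}_\varepsilon$ is an assumption suggested by the $\rho_x$ and $\rho_\varepsilon$ values reported with the figures.

```python
import numpy as np

rng = np.random.default_rng(42)

M, T = 2, 100                 # number of equations, sample size
p = [5, 5]                    # regressors per equation
rho_x, rho_eps = 0.5, 0.5     # correlation levels (assumed AR(1) structure)

def ar1_cov(dim, rho):
    """Covariance matrix with (i, j) entry rho^{|i-j|} (assumed structure)."""
    idx = np.arange(dim)
    return rho ** np.abs(idx[:, None] - idx[None, :])

# True coefficients; the last two components (the nuisance part beta_2)
# are zero under the null hypothesis (Delta = 0)
beta = [np.array([1.0, 1.0, 1.0, 0.0, 0.0]) for _ in range(M)]

# Explanatory variables: rows of X_i drawn from MVN_{p_i}(0, Sigma_x)
X = [rng.multivariate_normal(np.zeros(p[i]), ar1_cov(p[i], rho_x), size=T)
     for i in range(M)]

# Errors: each time point's M-vector drawn from MVN_M(0, Sigma_eps),
# so errors are contemporaneously correlated across equations (SUR structure)
E = rng.multivariate_normal(np.zeros(M), ar1_cov(M, rho_eps), size=T)

# Responses, equation by equation
Y = [X[i] @ beta[i] + E[:, i] for i in range(M)]
```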


Here, we use Δ values between zero and two and set *α* = 0.05. We also consider nuisance parameters *β*2 of lengths two and four, respectively. The corresponding restriction matrices are:

$$\begin{array}{rcl} \mathbf{R}\_{i} &=& \begin{bmatrix} 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}, \quad \mathbf{r}\_{i} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \text{ if } p\_{i} = 5, \text{ and} \\ \mathbf{R}\_{i} &=& \begin{bmatrix} 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}, \quad \mathbf{r}\_{i} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \text{ if } p\_{i} = 7, \end{array}$$

where the number of equations is *M* = 2 or *M* = 3. Hence, $\mathbf{R} = \mathrm{diag}(\mathbf{R}_1, \mathbf{R}_2)$ and $\mathbf{R} = \mathrm{diag}(\mathbf{R}_1, \mathbf{R}_2, \mathbf{R}_3)$, respectively. Furthermore, $\mathbf{r} = (\mathbf{r}_1', \mathbf{r}_2')'$ and $\mathbf{r} = (\mathbf{r}_1', \mathbf{r}_2', \mathbf{r}_3')'$, respectively.
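These restriction matrices and their block-diagonal assembly can be built directly. The sketch below uses only NumPy and mirrors the $p_i = 5$ case with two zero restrictions per equation; it is an illustration, not the authors' implementation.

```python
import numpy as np

def restriction_matrix(p_i, q_i):
    """R_i = [0 | I_{q_i}]: selects the last q_i coefficients of beta_i."""
    return np.hstack([np.zeros((q_i, p_i - q_i)), np.eye(q_i)])

def block_diag(blocks):
    """Minimal block-diagonal assembly with NumPy only."""
    rows = sum(b.shape[0] for b in blocks)
    cols = sum(b.shape[1] for b in blocks)
    out = np.zeros((rows, cols))
    r = c = 0
    for b in blocks:
        out[r:r + b.shape[0], c:c + b.shape[1]] = b
        r += b.shape[0]
        c += b.shape[1]
    return out

# p_i = 5 with q_i = 2 zero restrictions, for M = 3 equations
R_blocks = [restriction_matrix(5, 2) for _ in range(3)]
R = block_diag(R_blocks)      # stacked restrictions R = diag(R_1, R_2, R_3)
r = np.zeros(R.shape[0])      # r = 0: the tested restrictions are beta_2i = 0
```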

The performance of each estimator is evaluated using the relative mean squared error (RMSE) criterion. The RMSE of an estimator $\widehat{\boldsymbol{\beta}}^{*}$ with respect to $\widehat{\boldsymbol{\beta}}^{\mathrm{RR}}$ is defined as follows:

$$\text{RMSE} \left( \widehat{\boldsymbol{\beta}}^{*} \right) = \frac{\text{MSE} \left( \widehat{\boldsymbol{\beta}}^{\text{RR}} \right)}{\text{MSE} \left( \widehat{\boldsymbol{\beta}}^{*} \right)},$$

where $\widehat{\boldsymbol{\beta}}^{*}$ is one of the listed estimators. An RMSE larger than one indicates that the estimator in question is superior to $\widehat{\boldsymbol{\beta}}^{\mathrm{RR}}$.
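In a Monte Carlo study, this criterion is computed from empirical MSEs over the replications. A minimal sketch (the helper names are hypothetical):

```python
import numpy as np

def mse(estimates, beta_true):
    """Empirical MSE over replications: mean squared estimation error,
    where `estimates` has one estimated coefficient vector per row."""
    estimates = np.asarray(estimates)
    return np.mean(np.sum((estimates - beta_true) ** 2, axis=1))

def rmse(estimates_star, estimates_rr, beta_true):
    """RMSE of a candidate estimator relative to the ridge benchmark;
    a value above one favours the candidate."""
    return mse(estimates_rr, beta_true) / mse(estimates_star, beta_true)
```

For example, if the benchmark's estimates sit at unit distance in every coordinate while the candidate's sit at half that distance, the candidate's RMSE is four, correctly flagging it as superior.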

Table 1 provides notations and a symbol key for the benefit of the reader.


**Table 1.** Values and explanations of the symbols.

We plot the simulation results in Figures 1 and 2. The simulation results for some other parameter configurations were also obtained, but are not included here for the sake of brevity.

**Figure 1.** RMSE of the estimators as a function of Δ when *M* = 2, *T* = 100, *ρx* = 0.5, 0.9, and *ρε* = 0.5, 0.9. FME, full model estimator; RE, restricted estimator; PTE, preliminary test estimator; PSE, positive-rule Stein-type estimator.

**Figure 2.** RMSE of the estimators as a function of Δ when *M* = 3, *T* = 100, *ρx* = 0.5, 0.9, and *ρε* = 0.5, 0.9.

According to these results:

1. When Δ = 0, which means that the null hypothesis is true and the restrictions are consistent with the data, the RE always performs competitively compared to the other estimators. The PTE mostly outperforms the SE and PSE when *pi* = 5, while it loses efficiency relative to the PSE when *pi* = 7. The SE may perform worse than the FME due to its sign problem, as indicated in Section 3.

