*4.4. Discussion of Simulation Results*

As mentioned previously, in the conventional hypothesis testing scenario for comparing nested models, Riedle, Neath and Cavanaugh [1] established that the uncorrected BDCP approximates the *p*-value derived from the likelihood ratio test. Therefore, when the null candidate model is correctly specified, both the uncorrected BDCP and the *p*-value have a *Uniform*(0, 1) distribution. This behavior is displayed in Table 1, where, for large sample sizes, the mean and median of the BDCP distribution are around 0.5. This is a problematic feature of the uncorrected BDCP and of *p*-values, since neither measure reliably favors the null model in settings where the null is true. In contrast, for large sample sizes, both the BDCPk and BDCPb values are close to 1, which clearly favors the null model.
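To make the uniformity claim concrete, the following minimal sketch simulates the likelihood ratio test for nested Gaussian linear models under a correctly specified null. The design (predictors, coefficients, and the helper `lrt_pvalue`) is a hypothetical illustration, not a reproduction of our simulation sets; it shows only that, when the null is true, the *p*-values center near 0.5, mirroring the behavior of the uncorrected BDCP.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def lrt_pvalue(y, X_null, X_alt):
    """p-value of the likelihood ratio test comparing nested Gaussian
    linear models (error variance profiled out of the likelihood)."""
    n = len(y)
    rss = lambda X: np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
    # For Gaussian linear models, -2 log LR reduces to n * log(RSS0 / RSS1)
    lr_stat = n * np.log(rss(X_null) / rss(X_alt))
    df = X_alt.shape[1] - X_null.shape[1]
    return stats.chi2.sf(lr_stat, df)

n, reps = 500, 2000
pvals = np.empty(reps)
for r in range(reps):
    x1, x2 = rng.normal(size=(2, n))
    y = 1.0 + 0.5 * x1 + rng.normal(size=n)        # null model is true
    X_null = np.column_stack([np.ones(n), x1])
    X_alt = np.column_stack([np.ones(n), x1, x2])  # overspecified alternative
    pvals[r] = lrt_pvalue(y, X_null, X_alt)

print(np.mean(pvals), np.median(pvals))  # both close to 0.5 under the null
```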

Table 2 shows the results for the setting where the alternative model is correctly specified, while the null is underspecified. Here, we would expect all the discrepancy probabilities to be close to 0, as seen for the sample size *N* = 500. However, for smaller sample sizes, i.e., *N* = 25 and *N* = 50, we observe larger values for the discrepancy probabilities. In fact, for *N* = 25, the BDCPb is 0.89 and, with a mean and median close to 0.5, the uncorrected BDCP behaves much as it does when the null is true. This phenomenon is expected within the framework of model selection, where additional explanatory variables are worthwhile only if the sample size is sufficient to adequately estimate their effects. If the sample size is too small to construct reliable estimates, then it is best to choose smaller models, even at the expense of model misspecification.
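This small-sample effect can be seen in a simple power calculation: with a real but modest extra effect, the likelihood ratio test rarely favors the larger model at *N* = 25 but almost always does at *N* = 500. The effect size and design below are hypothetical choices for illustration, not the configuration of Set 2.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def reject_rate(n, beta2=0.3, reps=2000, alpha=0.05):
    """Fraction of likelihood ratio tests that favor the larger model
    when the extra predictor has a real but modest effect (beta2)."""
    hits = 0
    for _ in range(reps):
        x1, x2 = rng.normal(size=(2, n))
        y = 1.0 + 0.5 * x1 + beta2 * x2 + rng.normal(size=n)
        rss = lambda X: np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
        X0 = np.column_stack([np.ones(n), x1])       # underspecified null
        X1 = np.column_stack([np.ones(n), x1, x2])   # correct alternative
        stat = n * np.log(rss(X0) / rss(X1))
        hits += stats.chi2.sf(stat, 1) < alpha
    return hits / reps

for n in (25, 50, 500):
    print(n, reject_rate(n))  # power grows with n; at n = 25 the
                              # smaller model is usually retained
```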

The results in Tables 1 and 3–6 show that, when estimating the KLDCP with a small sample size (*N* = 25 to *N* = 100), the BDCPb performs at least as well as, and often better than, the BDCPk. For large sample sizes, both corrections perform comparably across all simulation sets.

For discrepancy estimation, Tables 7–10 show that, across all sample sizes, *kb* overcorrects for the bias of the discrepancy approximation, and the overcorrection is more pronounced for small sample sizes. It is worth noting that this evident overestimation by the BDb is accompanied by a superior bias reduction in the corresponding KLDCP estimator. For instance, Table 7 shows substantial overestimation by the BDb compared to the BDk, especially in the small-sample settings. However, the corresponding estimator of the KLDCP, displayed in Table 1, exhibits less bias for the BDCPb than for the BDCPk.
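The *kb* correction itself is developed earlier in the paper; the sketch below illustrates only the generic bootstrap bias-correction recipe that such corrections build on, applied to a simple plug-in estimator (the maximum likelihood variance, which is biased downward). The function names and the target estimator are illustrative and are not the KLD-based quantities studied here.

```python
import numpy as np

rng = np.random.default_rng(2)

def ml_var(s):
    """Plug-in (maximum likelihood) variance estimate, divisor n."""
    return np.mean((s - s.mean()) ** 2)

def bootstrap_bias_correct(sample, estimator, B=200):
    """Generic bootstrap bias correction: estimate the bias as
    mean_b(theta*_b) - theta_hat and subtract it from theta_hat,
    giving 2 * theta_hat - mean_b(theta*_b)."""
    theta_hat = estimator(sample)
    boot = [estimator(rng.choice(sample, size=len(sample), replace=True))
            for _ in range(B)]
    return 2.0 * theta_hat - np.mean(boot)

# The ML variance estimator is biased downward by sigma^2 / n; the
# bootstrap correction removes most of that bias in small samples.
n, reps = 10, 2000
raw = np.empty(reps); corrected = np.empty(reps)
for r in range(reps):
    x = rng.normal(size=n)                 # true variance = 1
    raw[r] = ml_var(x)
    corrected[r] = bootstrap_bias_correct(x, ml_var)
print(raw.mean(), corrected.mean())        # ~0.90 vs ~0.99
```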

Finally, Tables 11 and 12 show that, across all sample sizes, the correction by *kb* markedly reduces the bias relative to the correction by *k*. Thus, in the setting where the mean structure is correctly specified for the null model and overspecified for the alternative, but both models are misspecified with respect to the error distribution, the bootstrap-based correction clearly outperforms the simple correction by *k*.

In most cases, however, the bias reductions resulting from the *kb* and the *k* corrections are comparable. Therefore, our simulation studies suggest that if the null and/or the alternative models are misspecified, then correcting by either *kb* or *k* will generally yield comparable estimators of the expected KLDCP.


**Table 1.** Distribution approximations for Set 1, where the null model is correctly specified, while the alternative model is overspecified.

**Table 2.** Distribution approximations for Set 2, where the null model is underspecified, while the alternative model is correctly specified.



**Table 3.** Distribution approximations for Set 3, where the null and alternative models are underspecified, but the null model is closer to the true data-generating model.

**Table 4.** Distribution approximations for Set 4, where the null and alternative models are equally underspecified.



**Table 5.** Distribution approximations for Set 5, where the null and alternative models are misspecified with respect to the error distribution. Here, the errors are generated from a Student's t distribution.

**Table 6.** Distribution approximations for Set 6, where the null and alternative models are misspecified with respect to the error distribution. Here, the errors are generated from a mixture of normal distributions.



**Table 7.** Expected value of the KLD, its bootstrap estimate, and the bias of the corrected bootstrap estimates for the null and alternative models in Set 1. Here, the null model is correctly specified, while the alternative model is overspecified.

**Table 8.** Expected value of the KLD, its bootstrap estimate, and the bias of the corrected bootstrap estimates for the null and alternative models in Set 2. Here, the null model is underspecified, while the alternative model is correctly specified.


**Table 9.** Expected value of the KLD, its bootstrap estimate, and the bias of the corrected bootstrap estimates for the null and alternative models in Set 3. Here, the null and alternative models are underspecified, but the null model is closer to the true data-generating model.



**Table 10.** Expected value of the KLD, its bootstrap estimate, and the bias of the corrected bootstrap estimates for the null and alternative models in Set 4. Here, the null and alternative models are equally underspecified.


**Table 11.** Expected value of the KLD, its bootstrap estimate, and the bias of the corrected bootstrap estimates for the null and alternative models in Set 5. Here, the null and alternative models are misspecified with respect to the error distribution, and the errors are generated from a Student's t distribution.



**Table 12.** Expected value of the KLD, its bootstrap estimate, and the bias of the corrected bootstrap estimates for the null and alternative models in Set 6. Here, the null and alternative models are misspecified with respect to the error distribution, and the errors are generated from a mixture of normal distributions.

