*3.5. Model Evaluation*

A model evaluation metric quantifies a predictive model's performance. Evaluation typically involves training a model on a dataset, using the model to make predictions on a "test dataset" not used during training, and then comparing the predictions with the expected values in the test dataset. Different authors use different metrics to compare their models. Table 2 shows the evaluation metrics used in this study. In all formulas, *y<sub>t</sub>*, *ŷ<sub>t</sub>*, and *T* denote the target value, the model output, and the size of the test dataset in out-of-sample or out-of-fold prediction, respectively.

**Table 2.** Common types of evaluation metrics.
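As an illustration of how such metrics are computed from out-of-sample predictions, the following sketch implements two commonly used examples, MAE and RMSE (chosen here as assumptions, since the table entries themselves are not reproduced above); the function names and sample values are hypothetical.

```python
import math

def mae(y_true, y_pred):
    """Mean absolute error: (1/T) * sum |y_t - yhat_t| over the test set."""
    T = len(y_true)
    return sum(abs(y - yh) for y, yh in zip(y_true, y_pred)) / T

def rmse(y_true, y_pred):
    """Root mean squared error: sqrt((1/T) * sum (y_t - yhat_t)^2)."""
    T = len(y_true)
    return math.sqrt(sum((y - yh) ** 2 for y, yh in zip(y_true, y_pred)) / T)

# y_true: target values y_t from a held-out test set of size T;
# y_pred: corresponding model outputs yhat_t (illustrative numbers only).
y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5, 0.0, 2.0, 8.0]
print(mae(y_true, y_pred))   # 0.5
print(rmse(y_true, y_pred))  # ~0.612
```

Both metrics aggregate the per-observation error over the same *T* test points; they differ only in that RMSE squares the errors first, which penalizes large deviations more heavily.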

