*2.3. Machine Learning Figures of Merit*

To evaluate the performance of the representative model, the following metrics are used: accuracy (ACC), true positive rate (TPR), true negative rate (TNR), false negative rate (FNR), and false positive rate (FPR). These measures are computed as follows:

$$\text{Accuracy}\,(\text{ACC}) = \frac{\text{TP} + \text{TN}}{\text{TN} + \text{TP} + \text{FN} + \text{FP}} \tag{1}$$

$$\text{Sensitivity}\,(\text{TPR}) = \frac{\text{TP}}{\text{TP} + \text{FN}} \tag{2}$$

$$\text{Specificity} \,(\text{TNR}) = \frac{\text{TN}}{\text{TN} + \text{FP}} \tag{3}$$

$$\text{Fallout}\,(\text{FPR}) = \frac{\text{FP}}{\text{TN} + \text{FP}} \tag{4}$$

$$\text{False Negative Rate}\,(\text{FNR}) = \frac{\text{FN}}{\text{TP} + \text{FN}} \tag{5}$$

where TP and FP denote the numbers of correct and incorrect predictions, respectively, of outcomes belonging to the considered output class, whereas TN and FN denote the numbers of correct and incorrect predictions, respectively, of outcomes belonging to any other output class [30].
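The five figures of merit above follow directly from the four confusion-matrix counts. A minimal sketch of their computation (function and variable names are illustrative, not from the original study):

```python
def classification_metrics(tp, tn, fp, fn):
    """Compute the figures of merit in Eqs. (1)-(5) from confusion-matrix counts."""
    return {
        "ACC": (tp + tn) / (tp + tn + fp + fn),  # Eq. (1), accuracy
        "TPR": tp / (tp + fn),                   # Eq. (2), sensitivity
        "TNR": tn / (tn + fp),                   # Eq. (3), specificity
        "FPR": fp / (tn + fp),                   # Eq. (4), fallout
        "FNR": fn / (tp + fn),                   # Eq. (5), false negative rate
    }

# Hypothetical counts: 80 TP, 90 TN, 10 FP, 20 FN
metrics = classification_metrics(tp=80, tn=90, fp=10, fn=20)
print(metrics["ACC"])  # 0.85
```

Note that TPR + FNR = 1 and TNR + FPR = 1, so in practice reporting one metric from each pair determines the other.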

The receiver operating characteristic (ROC) curve is a graphical representation of the performance of a binary classification model: it shows the trade-off between the true positive rate and the false positive rate as the decision threshold varies. A diagonal line in the ROC plot indicates that the test has no discriminatory ability, while a curve above the diagonal indicates better-than-chance discrimination. The area under the ROC curve (AUC) measures the overall ability of the test to discriminate between the presence and absence of a condition: an AUC of 1.0 indicates perfect discrimination, and an AUC of 0.5 indicates no discriminatory ability [57].
