4.5.1. Accuracy

Classification accuracy is the simplest performance metric and is commonly used with balanced datasets (i.e., datasets in which the number of samples per class is balanced). Accuracy is defined as the number of correct predictions divided by the total number of predictions, and is computed by comparing the annotated ground-truth data with the predicted results:

$$\text{Accuracy} = \frac{tp + tn}{tp + tn + fp + fn} \tag{2}$$

where *tp* represents the true positives, *tn* the true negatives, *fp* the false positives, and *fn* the false negatives. Note that, if unbalanced data are considered (i.e., the number of samples per class is not balanced), an alternative metric, known as balanced accuracy, should be computed instead. The balanced accuracy is obtained by normalizing *tp* and *tn* by the number of positive and negative samples, respectively, and then averaging the two normalized values, as indicated in Equation (3):

$$\text{Balanced accuracy} = \frac{TP + TN}{2} \tag{3}$$

where *TP* represents the normalized true positives (i.e., *tp*/(*tp* + *fn*)) and *TN* the normalized true negatives (i.e., *tn*/(*tn* + *fp*)). However, a fair performance comparison between algorithms should not rely solely on accuracy, as Red AI tends to do.
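
To make the difference between the two metrics concrete, the following is a minimal Python sketch (not part of the original work) that computes Equations (2) and (3) from the entries of a binary confusion matrix; the variable names and the toy example are illustrative assumptions.

```python
import numpy as np

def accuracy(tp, tn, fp, fn):
    # Equation (2): correct predictions over all predictions.
    return (tp + tn) / (tp + tn + fp + fn)

def balanced_accuracy(tp, tn, fp, fn):
    # Equation (3): tp and tn are first normalized by the number of
    # positive (tp + fn) and negative (tn + fp) samples, then averaged.
    tpr = tp / (tp + fn)  # normalized true positives (sensitivity)
    tnr = tn / (tn + fp)  # normalized true negatives (specificity)
    return (tpr + tnr) / 2

# Illustrative unbalanced problem: 90 positive and 10 negative samples,
# with a trivial model that always predicts the positive class.
y_true = np.array([1] * 90 + [0] * 10)
y_pred = np.ones_like(y_true)

tp = int(np.sum((y_true == 1) & (y_pred == 1)))
tn = int(np.sum((y_true == 0) & (y_pred == 0)))
fp = int(np.sum((y_true == 0) & (y_pred == 1)))
fn = int(np.sum((y_true == 1) & (y_pred == 0)))

print(accuracy(tp, tn, fp, fn))           # 0.9, despite a useless model
print(balanced_accuracy(tp, tn, fp, fn))  # 0.5, i.e., chance-level performance
```

On such unbalanced data, the plain accuracy of Equation (2) rewards the majority-class bias, whereas the balanced accuracy of Equation (3) exposes the chance-level behavior, which is why the latter should be reported in that setting.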
