*3.4. Cross-Validation*

The training set was further used for cross-validation, which provides a more general indication of classification performance than a single train/test split. In this work, cross-validation was implemented via the *k*-fold method: the training set was partitioned into *k* equally sized subsets (folds). The classifier was then trained *k* times, each time using *k* − 1 folds as training data while the remaining fold was held out for testing. The accuracy scores obtained on the *k* held-out folds were averaged to estimate the overall performance of the classifier. Because cross-validation removes the need for a separate validation set, a larger share of the data could be allocated to training. A schematic of the *k*-fold method is depicted in Figure 4.

**Figure 4.** The *k*-fold cross-validation procedure with *k* folds.
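The procedure described above can be sketched in a few lines of Python. This is a minimal illustration, not the implementation used in this work: the `fit` and `accuracy` callables stand in for an arbitrary classifier's training and scoring routines, and the majority-class baseline in the usage example is purely hypothetical.

```python
from collections import Counter

def k_fold_split(n_samples, k):
    """Partition the indices 0..n_samples-1 into k (near-)equal folds."""
    folds = [[] for _ in range(k)]
    for i in range(n_samples):
        folds[i % k].append(i)
    return folds

def cross_validate(X, y, k, fit, accuracy):
    """Train on k-1 folds, test on the held-out fold, and average
    the k accuracy scores (the k-fold procedure of Figure 4)."""
    folds = k_fold_split(len(X), k)
    scores = []
    for held_out in folds:
        train_idx = [i for f in folds if f is not held_out for i in f]
        model = fit([X[i] for i in train_idx], [y[i] for i in train_idx])
        scores.append(accuracy(model,
                               [X[i] for i in held_out],
                               [y[i] for i in held_out]))
    return sum(scores) / k

# Illustrative usage with a hypothetical majority-class "classifier":
# fit() returns the most frequent training label, accuracy() is the
# fraction of held-out labels matching that prediction.
fit = lambda X, y: Counter(y).most_common(1)[0][0]
accuracy = lambda model, X, y: sum(yi == model for yi in y) / len(y)

X = list(range(10))
y = [0, 0, 0, 1, 0, 1, 0, 0, 1, 0]
mean_acc = cross_validate(X, y, k=5, fit=fit, accuracy=accuracy)
```

Every sample appears in exactly one held-out fold, so each data point is used for testing exactly once and for training *k* − 1 times.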
