4.2.3. ELM

The architecture of the ELM is the same as that used previously [23]: one input layer, one hidden layer, and one output layer, with the ReLU activation function in the first layer and the sigmoid function in the hidden layer. Overall, the ELM model achieved high accuracy and higher recall rates than those reported in Table 4 for every feature set (see Table 6). Compared with the SVM models, the *Base* ELM model yielded a lower recall rate, although a higher F1-score. In contrast, the *Robust Base* feature set yielded a higher recall rate with the ELM model than with the SVM model. Even though the *Robust Base* feature set spans a low-dimensional space, all three metrics (i.e., accuracy, recall, and F1-score) exceeded those of the *Base* feature set. Using the feature sets that include the novel features increased these metrics while simultaneously improving the robustness of the model.
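For readers unfamiliar with the training procedure, the sketch below illustrates the core idea of an ELM: the hidden-layer weights are drawn at random and never updated, and only the output weights are solved in closed form via the Moore-Penrose pseudoinverse. This is a minimal illustration, not the authors' implementation (which follows [23]); the single sigmoid hidden layer matches the text, but the class name, hidden-layer size, and toy data are assumptions made for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ELM:
    """Minimal single-hidden-layer ELM sketch (illustrative only)."""

    def __init__(self, n_hidden, random_state=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(random_state)

    def _hidden(self, X):
        # Random projection followed by the sigmoid activation.
        return sigmoid(X @ self.W + self.b)

    def fit(self, X, y):
        n_features = X.shape[1]
        # Hidden weights and biases are drawn once and never trained.
        self.W = self.rng.standard_normal((n_features, self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        # Closed-form least-squares solution for the output weights.
        self.beta = np.linalg.pinv(H) @ y
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Toy usage with +/-1 labels, thresholded at zero:
X = np.random.default_rng(1).standard_normal((200, 10))
y = np.sign(X[:, 0] + 0.1 * X[:, 1])
model = ELM(n_hidden=64).fit(X, y)
pred = np.sign(model.predict(X))
print("training accuracy:", (pred == y).mean())
```

Because only the output weights are fitted, and in a single linear-algebra step, training an ELM is typically much faster than iteratively optimizing a comparable feed-forward network.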

**Table 6.** Model performance—ELM.

