*3.6. Neural Network*

A multi-layer perceptron (MLP) architecture was implemented in the present study. The model was a deep neural network with three hidden layers of 512, 256, and 128 neurons and a one-dimensional output. The output was fed to a sigmoid function and thresholded at 0.5 to obtain a binary prediction: when the sigmoid output was > 0.5 the label 1 was assigned, otherwise the label 0 was assigned. Rectified linear units (ReLU) provided the non-linearity between consecutive hidden layers. In the experiments, stochastic gradient descent was employed as the optimization algorithm and binary cross entropy as the loss function. Finally, the MLP was trained with an early stopping technique: the network was trained for a maximum of 100 epochs, stopping when the accuracy on the validation set did not increase for 10 consecutive epochs. The best-performing learned parameters were then used to evaluate the model's performance.
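For concreteness, a minimal PyTorch sketch of the architecture and training procedure described above is given below. The input dimensionality, learning rate, and data loaders are illustrative assumptions, as these details are not specified in the text; the layer sizes, activation, loss, optimizer, and early-stopping criterion follow the description.

```python
import copy

import torch
import torch.nn as nn


class MLP(nn.Module):
    """Three hidden layers (512, 256, 128) with ReLU, one-dimensional output.

    The model returns raw logits; sigmoid + 0.5 threshold is applied at
    prediction time, as described in the text.
    """

    def __init__(self, input_dim: int):  # input_dim is an assumption
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 512),
            nn.ReLU(),
            nn.Linear(512, 256),
            nn.ReLU(),
            nn.Linear(256, 128),
            nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)  # raw logits, shape (batch,)


def train(model, train_loader, val_loader, max_epochs=100, patience=10, lr=1e-2):
    # BCEWithLogitsLoss fuses the sigmoid and binary cross entropy in one
    # numerically stable step; lr is an assumed value, not reported above.
    criterion = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)

    best_acc = 0.0
    best_state = copy.deepcopy(model.state_dict())
    epochs_without_improvement = 0

    for epoch in range(max_epochs):
        model.train()
        for x, y in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y.float())
            loss.backward()
            optimizer.step()

        # Validation accuracy with the 0.5 decision threshold
        # (sigmoid(logit) > 0.5 is equivalent to logit > 0).
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for x, y in val_loader:
                preds = (torch.sigmoid(model(x)) > 0.5).long()
                correct += (preds == y.long()).sum().item()
                total += y.numel()
        val_acc = correct / total

        # Early stopping: keep the best parameters so far, stop after
        # `patience` epochs without improvement on the validation set.
        if val_acc > best_acc:
            best_acc = val_acc
            best_state = copy.deepcopy(model.state_dict())
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break

    # Restore the best-performing learned parameters for evaluation.
    model.load_state_dict(best_state)
    return model
```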
