2.2.3. Multilayer Perceptron

A multilayer perceptron (MLP) is a type of artificial neural network. It consists of an input layer that receives the data, one or more hidden layers that process the data, and an output layer that produces the classification response [49]. The network is trained via backpropagation, an optimization technique in which a cost function (measuring the difference between predictions and true values) is minimized. Each neuron computes a linear combination of its inputs, defined by two sets of parameters, namely, weights and biases, and passes the result through an activation function. This makes the method flexible: both linear and nonlinear systems can be fitted, and the trained network is inexpensive to evaluate, which is useful when online predictions are needed.
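The training procedure described above can be sketched in Python with scikit-learn's `MLPClassifier`, which fits the weights and biases by minimizing a cross-entropy cost via backpropagation. The synthetic dataset and the chosen architecture (one hidden layer of 20 neurons) are illustrative assumptions, not the configuration used in this work.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic binary classification data (illustrative only).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# One hidden layer of 20 neurons; weights and biases are fitted by
# minimizing the cost function via backpropagation.
mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
mlp.fit(X_train, y_train)

# Classification accuracy on the held-out test set.
accuracy = mlp.score(X_test, y_test)
```

After fitting, `mlp.predict` can serve new samples directly, which is the scenario referred to above as online prediction.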

Major drawbacks of this algorithm are its strong dependence on hyperparameters (e.g., the number of neurons and the number of hidden layers) and the presence of local minima in the cost function when hidden layers are used. This means that, as more hidden layers are added to increase accuracy, the risk of converging to a solution far from the global optimum grows. A deeper description and the advantages of this algorithm can be found in [51].
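Because of this hyperparameter sensitivity, the architecture is typically selected by cross-validated search rather than fixed in advance. A minimal sketch using scikit-learn's `GridSearchCV` is shown below; the candidate layer sizes and the synthetic data are assumptions made for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

# Synthetic data standing in for the real problem (illustrative only).
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Candidate architectures: one or two hidden layers of varying width.
param_grid = {"hidden_layer_sizes": [(10,), (20,), (10, 10)]}

# 3-fold cross-validation scores each architecture and keeps the best.
search = GridSearchCV(
    MLPClassifier(max_iter=1000, random_state=0),
    param_grid,
    cv=3,
)
search.fit(X, y)
best_layers = search.best_params_["hidden_layer_sizes"]
```

Restarting training from several random initializations (e.g., varying `random_state`) is a common complementary way to reduce the risk of settling in a poor local minimum.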
