*3.4. Multi-Layer Perceptron*

The most common ANN model is the multi-layer perceptron (MLP). In an MLP, input values are transformed by an activation function *f*, which gives the output value of each neuron. The MLP is made up of several layers: one input layer, one or more hidden layers, and one output layer. Parameters such as the number of input variables, the number of hidden layers, the activation function, and the learning rate play an important role in the design of the neural network architecture. The MLP is shown in Figure 3. Neurons in the hidden and output layers apply activation functions, whereas neurons in the input layer only receive the input dataset and apply no activation function. The inputs are multiplied by their weights and summed as follows:

$$f(x_i) = \sum_{i=1}^{n} (w_i x_i) + \text{bias} \tag{13}$$

**Figure 3.** Schematic algorithm of multi-layer perceptron (MLP).
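The weighted sum in Equation (13) can be sketched in plain Python; the variable names and sample values below are illustrative, not taken from the source:

```python
def weighted_sum(inputs, weights, bias):
    # Equation (13): multiply each input by its weight, sum, and add the bias
    return sum(w * x for w, x in zip(weights, inputs)) + bias

# Example with two inputs: 0.5*0.8 + (-1.0)*0.2 + 0.1
z = weighted_sum(inputs=[0.5, -1.0], weights=[0.8, 0.2], bias=0.1)
```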

The most commonly applied activation function is the logistic (sigmoid) function, given in the following equation:

$$f(x_i) = \frac{1}{1 + e^{-x_i}} \tag{14}$$

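As a hedged sketch, Equation (14) and a forward pass through one hidden layer and one output layer can be combined as follows; the layer sizes, weights, and biases are illustrative assumptions, not values from the source:

```python
import math

def sigmoid(z):
    # Logistic activation, Equation (14): maps any real value into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def layer_forward(inputs, weights, biases):
    # One MLP layer: each neuron computes the weighted sum of Equation (13)
    # and then applies the sigmoid activation of Equation (14)
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Illustrative two-input network: a hidden layer with two neurons and one
# output neuron; the input layer applies no activation, as described above
hidden = layer_forward([0.5, -1.0],
                       weights=[[0.8, 0.2], [-0.4, 0.9]],
                       biases=[0.1, -0.1])
output = layer_forward(hidden, weights=[[1.0, -1.0]], biases=[0.0])
```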