2.2.1. Conventional Deep ANN/Multilayer Perceptron (MLP)

The multilayer perceptron builds on the perceptron, introduced by Rosenblatt in 1958 as the basic building block of neural networks, and consists of a number of perceptrons arranged in layers [46–59]. In a multilayer perceptron, an input layer receives the data and an output layer determines and predicts the output value. Between the input and output layers there is a selected number of hidden layers, which form the main processing engine of the MLP [42–46,56–59].
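The layered structure described above can be sketched as a simple forward pass: each hidden layer applies a weighted sum followed by an activation, and the output layer produces the prediction. The layer sizes, weight values, tanh hidden activation, and linear output layer below are illustrative assumptions, not taken from the text.

```python
import math
import numpy as np

def mlp_forward(x, weights, biases):
    """Forward pass through an MLP: tanh hidden layers, linear output layer."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.tanh(W @ a + b)      # hidden layer: activation(weighted sum + bias)
    return weights[-1] @ a + biases[-1]  # output layer (linear, for illustration)

# Illustrative network: 3 inputs -> 2 hidden units -> 1 output
weights = [np.array([[0.5, -0.2, 0.1],
                     [0.3,  0.8, -0.5]]),
           np.array([[1.0, -1.0]])]
biases = [np.array([0.1, -0.1]), np.array([0.0])]

y = mlp_forward(np.array([1.0, 2.0, 3.0]), weights, biases)
```

With these example values the hidden pre-activations are 0.5 and 0.3, so the single output equals tanh(0.5) − tanh(0.3).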

As shown in Figure 2, the MLP is a simple feed-forward neural network. Equation (1) gives the output of a single perceptron, or neuron [46,59].

$$output = f\left(\sum_{i=1}^{n} x_i w_i + b\right),\tag{1}$$

where $x_i$ is the $i$-th input to the neuron, $w_i$ is the weight on the corresponding connection, $b$ is the bias, $n$ is the number of inputs, and $f(\cdot)$ is the activation function, for instance the tanh activation function [16,47,59].
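Equation (1) can be computed directly; the sketch below is a minimal implementation with illustrative input, weight, and bias values, using tanh as the activation.

```python
import math

def neuron_output(x, w, b, f=math.tanh):
    """Eq. (1): activation applied to the weighted sum of inputs plus bias."""
    return f(sum(xi * wi for xi, wi in zip(x, w)) + b)

# Illustrative values: weighted sum = 1.0*0.2 + 0.5*(-0.4) + 0.1 = 0.1
y = neuron_output([1.0, 0.5], [0.2, -0.4], 0.1)
```

The result is tanh(0.1); swapping `f` for another activation (e.g. a sigmoid) changes only the final squashing step.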

**Figure 2.** Multilayer perceptron network with single output.
