2.1.2. Architecture of the Network

In this stage, the network's architecture must be determined by specifying the number of input, hidden, and output layers, along with the number of neurons in each [15]. In this study, an MLP (multilayer perceptron) network is used, in which the output of each layer serves as the input vector for the next layer, and each neuron is connected to every neuron in the previous layer. Each neuron computes a weighted sum of the previous layer's outputs (the net input) and passes the result through a transfer function. The tangent sigmoid is regarded as one of the most useful transfer functions in this case and has been widely used by experts [56–61]; it was therefore adopted as the transfer function here. The final network in this research is a multilayer perceptron with 14 input variables in an input layer, one hidden layer, and one output layer. The schematic structure of the designed neural network is illustrated in Figure 2.
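The forward pass described above can be sketched as follows. This is a minimal illustration, not the study's actual implementation: the hidden-layer size (here 10 neurons) and the random weights are assumptions, since the text specifies only the 14 input variables and the layer count. Each hidden neuron forms the weighted sum of all previous-layer outputs (its net input) and applies the tangent sigmoid transfer function.

```python
import numpy as np

def tansig(x):
    # Tangent sigmoid (hyperbolic tangent) transfer function.
    return np.tanh(x)

rng = np.random.default_rng(0)

n_inputs, n_hidden, n_outputs = 14, 10, 1  # hidden size is an assumption

# Randomly initialised weights and biases (illustrative only).
W1 = rng.normal(size=(n_hidden, n_inputs))
b1 = rng.normal(size=n_hidden)
W2 = rng.normal(size=(n_outputs, n_hidden))
b2 = rng.normal(size=n_outputs)

def forward(x):
    # Hidden layer: net input (weighted sum plus bias) passed through tansig.
    h = tansig(W1 @ x + b1)
    # The hidden layer's output is the input vector for the output layer.
    return W2 @ h + b2

x = rng.normal(size=n_inputs)  # one sample with 14 input variables
y = forward(x)
print(y.shape)  # (1,)
```

Because tansig squashes each net input into (-1, 1), inputs are typically normalised before training so that the hidden neurons do not saturate.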
