*3.2. ANN*

An ANN is an information-processing system inspired by the structure and function of biological neural networks. ANNs consist of interconnected artificial neurons and learn the relationship between inputs and outputs through these connections, which enables nonlinear modeling. The multilayer perceptron (MLP) is the most commonly used ANN architecture; it consists of an input layer, one or more hidden layers, and an output layer, as shown in Figure 5 [35]. As expressed in Equation (6), every connection between neurons carries a weight and every neuron has a bias: the outputs of the previous layer are scaled by the connection weights and shifted by the bias.

$$y\_j = f(net\_j) = f\left(\sum\_{i=1}^n w\_i x\_i + b\_j\right) \tag{6}$$

where *x* is the input value (*i* = 1, ..., *n*), *w* is the connection weight between neurons, *b* is the bias of the neuron, *f* is the activation function, and *y* is the output value. An MLP is generally trained using the backpropagation algorithm, wherein the interconnection weights of the network are repeatedly adjusted to minimize an error measure, commonly defined as the root mean square error (RMSE) [36]. Previous studies have reported that the Levenberg–Marquardt algorithm is well suited for this purpose because it yields consistent results for the majority of ANNs [37].
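As a minimal sketch of Equation (6), the following computes a single neuron's output with NumPy. The function name `neuron_output` and the choice of tanh as the activation function are illustrative assumptions, not taken from the source.

```python
import numpy as np

def neuron_output(x, w, b, f=np.tanh):
    # Eq. (6): y = f(sum_i w_i * x_i + b), with f a chosen activation
    return f(np.dot(w, x) + b)

# Example inputs, weights, and bias (arbitrary illustrative values)
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.1, 0.4, -0.2])
b = 0.05
y = neuron_output(x, w, b)  # tanh(0.05 - 0.4 - 0.4 + 0.05) = tanh(-0.7)
```

Stacking such neurons into layers, with each layer's outputs feeding the next layer's inputs, yields the MLP of Figure 5.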

**Figure 5.** Processing of an input sample through an artificial neural network.
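To illustrate the training procedure described above, the sketch below fits a one-hidden-layer MLP to a toy target with plain gradient-descent backpropagation, tracking the RMSE each epoch. This is a simplified stand-in: the network size, learning rate, and sine target are arbitrary assumptions, and practical tools typically use the Levenberg–Marquardt algorithm rather than plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task (assumed for illustration): approximate y = sin(x)
X = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
T = np.sin(X)

# One hidden layer of 8 tanh neurons, linear output neuron
W1 = rng.normal(0.0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)

lr = 0.05
rmse_log = []
for epoch in range(2000):
    # Forward pass: Eq. (6) applied layer by layer
    H = np.tanh(X @ W1 + b1)      # hidden-layer outputs
    Y = H @ W2 + b2               # network output
    E = Y - T
    rmse_log.append(np.sqrt(np.mean(E ** 2)))

    # Backpropagation: gradient of the mean squared error
    dY = 2.0 * E / len(X)
    dW2 = H.T @ dY;  db2 = dY.sum(axis=0)
    dH = (dY @ W2.T) * (1.0 - H ** 2)   # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dH;  db1 = dH.sum(axis=0)

    # Weight update: repeatedly adjust weights to reduce the error
    W2 -= lr * dW2;  b2 -= lr * db2
    W1 -= lr * dW1;  b1 -= lr * db1
```

After training, `rmse_log` should show the RMSE decreasing across epochs, which is the behavior the training procedure in the text aims for.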
