*2.6. Artificial Neural Networks*

Artificial neural networks (ANNs) provide a powerful and fast tool for classification problems. Figure 1 shows the basic design of an ANN with two layers, where each neuron produces a single number. The inputs are multiplied by their corresponding weights and summed, and a bias term is added to the intermediate result; this weighted sum is then transformed through a nonlinear activation function to obtain the output of the neuron. Several activation functions are viable; in this work, based on an empirical analysis, a hyperbolic tangent function was used. The ANN used during experimentation follows the multilayer perceptron architecture: a feed-forward network composed of an input layer, one or more hidden layers, and one output layer. The number of hidden layers and of neurons in each layer depends on the problem at hand; in this case, the inputs correspond to the signal-derived features, i.e., homogeneity and kurtosis. The size of the hidden layer is set heuristically by experimentation, whereas the number of output neurons equals the number of distinct categories being recognized. The ANN employed in this experimentation was trained using a Levenberg–Marquardt backpropagation scheme [24].
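The neuron operation described above (weighted sum, bias, tanh activation) and the resulting feed-forward pass can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the layer sizes, feature values, and random weights are hypothetical, and the Levenberg–Marquardt training step is not shown.

```python
import numpy as np

def neuron_forward(x, w, b):
    """Single artificial neuron: weighted sum of inputs plus bias,
    passed through a hyperbolic tangent activation (cf. Figure 1b)."""
    return np.tanh(np.dot(w, x) + b)

def mlp_forward(x, layers):
    """Feed-forward pass through a multilayer perceptron.
    `layers` is a list of (W, b) pairs, one per layer; each layer
    applies its weight matrix and bias, then the tanh activation."""
    a = x
    for W, b in layers:
        a = np.tanh(W @ a + b)
    return a

# Hypothetical configuration: 2 inputs (homogeneity, kurtosis),
# one hidden layer of 4 neurons, 3 output classes. Weights are
# random placeholders standing in for trained parameters.
rng = np.random.default_rng(0)
layers = [
    (rng.standard_normal((4, 2)), rng.standard_normal(4)),  # hidden layer
    (rng.standard_normal((3, 4)), rng.standard_normal(3)),  # output layer
]
x = np.array([0.8, 2.1])  # illustrative feature vector
out = mlp_forward(x, layers)
print(out.shape)  # one activation per output category
```

Because tanh squashes each neuron's output into (-1, 1), the class scores stay bounded regardless of the weight magnitudes; the predicted category would be the output neuron with the largest activation.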

**Figure 1.** (**a**) Typical scheme of an ANN, (**b**) operation of an artificial neuron in a layer.
