*Artificial Neural Networks*

ANNs consist of several artificial McCulloch–Pitts neurons. The structure of such a neuron is shown in Figure 2b. The input values *x*<sub>i</sub> of the neuron are multiplied by weighting factors *w*<sub>i</sub>. The sum of the weighted input values is adjusted by an offset *b* (*also: bias*) and mapped to the output *y* via a transfer function *f* [33]. The output thus follows

$$y = f(\vec{w}^\mathsf{T}\vec{x} + b). \tag{25}$$
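As an illustration of Equation (25), a minimal Python sketch (assuming NumPy; all numerical values are hypothetical and chosen only for demonstration) evaluates a single artificial neuron:

```python
import numpy as np

def neuron_output(w, x, b, f):
    """Output of a single artificial neuron, Equation (25): y = f(w^T x + b)."""
    return f(np.dot(w, x) + b)

# Hypothetical values for illustration only.
w = np.array([0.4, -0.2, 0.7])   # weighting factors w_i
x = np.array([1.0, 2.0, 0.5])    # input values x_i
b = 0.1                          # offset (bias)

y = neuron_output(w, x, b, np.tanh)  # tanh chosen as an example transfer function
print(y)
```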

Various transfer functions exist, and their suitability depends on the problem at hand. An overview of numerous transfer functions and their respective characteristics is given in [34].
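For illustration, the sketch below implements three widely used transfer functions (sigmoid, hyperbolic tangent, and rectified linear unit). This is a small, exemplary selection and is not meant to reproduce the overview in [34]:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))   # squashes input to (0, 1)

def tanh(z):
    return np.tanh(z)                 # squashes input to (-1, 1), zero-centered

def relu(z):
    return np.maximum(0.0, z)         # linear for z > 0, zero otherwise

z = np.linspace(-3, 3, 7)
for f in (sigmoid, tanh, relu):
    print(f.__name__, f(z))
```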

Based on the representation of a single artificial neuron, an ANN can be realized in various architectures. In this work, the feed-forward architecture shown in Figure 2a is considered. It consists of several layers, each of which is fully connected to the following layer: an input layer and an output layer, which describe the input and output variables of the network, and one or more hidden layers. The latter are composed of an arbitrary number of artificial neurons [35].
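A minimal sketch of such a layer-wise forward pass, assuming NumPy and a hypothetical 3–4–2 layer layout with random parameters, might look as follows; leaving the output layer linear is an assumption for illustration, since the output transfer function is problem-dependent:

```python
import numpy as np

def forward(x, layers, f):
    """Forward pass through fully connected layers.

    layers: list of (W, b) pairs; W has shape (n_out, n_in), b shape (n_out,).
    f is applied in the hidden layers; the output layer is kept linear here.
    """
    for W, b in layers[:-1]:
        x = f(W @ x + b)      # Equation (25), applied to all neurons of a layer
    W, b = layers[-1]
    return W @ x + b

# Hypothetical 3-4-2 network with random parameters, for illustration only.
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 3)), rng.standard_normal(4)),
          (rng.standard_normal((2, 4)), rng.standard_normal(2))]

y = forward(np.array([1.0, 2.0, 0.5]), layers, np.tanh)
print(y)
```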

The weighting factors *w*<sub>i</sub> and offsets *b* of the individual neurons are determined by training the network with a suitable training method. An overview of various training methods is given in [36].
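As one example of such a training method (not necessarily the one used here), the sketch below applies plain gradient descent with a squared-error loss to a single tanh neuron; the training data, learning rate, and iteration count are hypothetical:

```python
import numpy as np

# Minimal gradient-descent sketch for a single neuron with a tanh transfer
# function and squared-error loss; one of many possible training methods.
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))             # 50 hypothetical training samples
t = np.tanh(X @ np.array([0.4, -0.2, 0.7]))  # targets from a known reference neuron

w, b, lr = np.zeros(3), 0.0, 0.1

for epoch in range(200):
    z = X @ w + b
    y = np.tanh(z)
    err = y - t
    dz = err * (1.0 - y**2)        # chain rule through the tanh transfer function
    w -= lr * X.T @ dz / len(X)    # gradient step on the weighting factors
    b -= lr * dz.mean()            # gradient step on the offset

print(w, b)  # w approaches [0.4, -0.2, 0.7], b approaches 0
```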

**Figure 2.** Representation of the feed-forward ANN used in this work (**a**) and of the artificial neuron structure (**b**).
