2.2.2. Recurrent Neural Network (RNN)

RNNs are neural networks that contain one or more feedback loops [47]. RNNs can use their internal memory to process sequences of inputs [48]. In conventional feed-forward neural networks, all inputs and outputs are considered independent of each other; the output is not fed back to the network as an input. In RNNs, by contrast, the output can be fed back together with the input and considered in future decisions [47,48]. The basic architecture of an RNN is shown in Figure 3.

**Figure 3.** Basic recurrent neural network (RNN).

In Figure 3, the RNN consists of an input ($x_t$), a hidden state ($h_t$), and an output ($y_t$), where $W_x$, $W_y$, and $W_h$ are weight matrices. The most important part of an RNN is the hidden state $h_t$, a vector that can have an arbitrary dimension [48].

$$h_t = F_W(h_{t-1}, x_t), \tag{2}$$

$$h_t = \tanh\left(W_h h_{t-1} + W_x x_t\right), \tag{3}$$

$$y_t = W_y h_t. \tag{4}$$

Figure 3 also shows the relationships between these functions in the RNN. Here, $h_{t-1}$ is the previous hidden state, which carries information from the previous time step, and $F_W$ is an activation function, as shown in Equation (3) [47,48].
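A minimal NumPy sketch of Equations (2)–(4) may help make the recurrence concrete. The dimensions, random weight initialization, and the function name `rnn_step` are illustrative assumptions, not taken from the cited works; the hidden state $h_t$ is simply carried from one time step to the next.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_x, W_h, W_y):
    """One RNN time step.

    Implements h_t = tanh(W_h h_{t-1} + W_x x_t)  (Equation (3))
    and        y_t = W_y h_t                      (Equation (4)).
    """
    h_t = np.tanh(W_h @ h_prev + W_x @ x_t)  # new hidden state
    y_t = W_y @ h_t                          # output at time t
    return h_t, y_t

# Hypothetical dimensions for illustration: 4-dimensional inputs,
# an 8-dimensional hidden state, and 2-dimensional outputs.
rng = np.random.default_rng(0)
input_dim, hidden_dim, output_dim = 4, 8, 2
W_x = rng.normal(size=(hidden_dim, input_dim))
W_h = rng.normal(size=(hidden_dim, hidden_dim))
W_y = rng.normal(size=(output_dim, hidden_dim))

# Unroll over a short input sequence; h_t carries memory across steps,
# which is what distinguishes the RNN from a feed-forward network.
h_t = np.zeros(hidden_dim)
for x_t in rng.normal(size=(5, input_dim)):
    h_t, y_t = rnn_step(x_t, h_t, W_x, W_h, W_y)
```

Note that the same weight matrices $W_x$, $W_h$, and $W_y$ are reused at every time step; only the hidden state changes as the sequence is processed.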
