*4.1. Stacked*

The main idea behind stacked ensembles is to combine a set of trained base models by training another model (the meta-model) on their outputs; the meta-model thus learns its predictions from the predictions of the base models (Algorithm 1). In our implementation we feed the model outputs into a DNN with two hidden dense layers (Figure 6).

**Figure 6.** The Stacked Ensemble Architecture.

### **Algorithm 1** Stacked Ensemble Pseudo-code
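The pseudocode itself is not reproduced here. As a minimal sketch of the general stacking procedure it describes, the following uses two hypothetical base learners (simple least-squares fits on feature subsets) and a linear meta-model in place of the paper's DNN; all names and data are illustrative assumptions:

```python
import numpy as np

# Illustrative data (not from the paper).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * rng.normal(size=200)

# Step 1: train the base models (here: least-squares fits on feature subsets,
# standing in for the trained models of the ensemble).
w1, *_ = np.linalg.lstsq(X[:, :2], y, rcond=None)
w2, *_ = np.linalg.lstsq(X[:, 2:], y, rcond=None)

# Step 2: collect the base-model predictions and concatenate them,
# so each training example becomes a vector of base-model outputs.
P = np.column_stack([X[:, :2] @ w1, X[:, 2:] @ w2])

# Step 3: train the meta-model on the concatenated predictions
# (a linear model here; the paper uses a two-hidden-layer DNN).
w_meta, *_ = np.linalg.lstsq(P, y, rcond=None)

# The ensemble prediction is the meta-model applied to the base outputs.
y_hat = P @ w_meta
```

The key point the algorithm captures is that the meta-model is fit on the base models' outputs, not on the raw features.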


The outputs of the base models are merged with the concatenation function of Keras (https://keras.io/). The concatenation takes the fixed-size output tensor of each model as input and produces a single tensor, which is then fed to a fully connected layer. A second fully connected layer follows, analogous to the final dense layer of each base model.
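This wiring can be sketched with the Keras functional API. The base-model architecture, input dimension, and layer sizes below are illustrative assumptions, not the paper's configuration; only the overall pattern (concatenate base-model outputs, then two dense layers) follows the text:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical base model: dimensions and activations are assumptions.
def make_base_model(input_dim, n_classes, name):
    inp = keras.Input(shape=(input_dim,))
    h = layers.Dense(8, activation="relu")(inp)
    out = layers.Dense(n_classes, activation="softmax")(h)
    return keras.Model(inp, out, name=name)

base_models = [make_base_model(4, 3, f"base_{i}") for i in range(3)]

# Stacked ensemble: concatenate the fixed-size output tensors of the
# base models into a single tensor, then pass it through two dense layers.
inp = keras.Input(shape=(4,))
merged = layers.Concatenate()([m(inp) for m in base_models])
x = layers.Dense(16, activation="relu")(merged)
x = layers.Dense(16, activation="relu")(x)
out = layers.Dense(3, activation="softmax")(x)
stacked = keras.Model(inp, out, name="stacked_ensemble")
```

In practice the base models would already be trained, and their weights would typically be frozen while the meta-model layers are fit.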
