*3.3. Experimental Approach*

Models for the three techniques were developed using the respective approaches described below.

The OP-ELM models were trained by tuning the model dimensions: a different number of hidden nodes was used in each experiment. Optimal pruning using the leave-one-out (LOO) method determined the final model dimensions. Various dimensions were investigated, and the model with the lowest errors in each experiment was captured and is presented in the results section.
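The exact pruning procedure is not specified here, so the following is only a minimal illustrative sketch of the OP-ELM idea: a random sigmoid hidden layer is drawn, the closed-form (PRESS) LOO error is computed for the linear output weights, and hidden neurons are greedily pruned while the LOO error improves. All function names and the greedy pruning order are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_hidden(X, W, b):
    # Random sigmoid hidden layer of an ELM
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def loo_mse(H, y):
    # PRESS statistic: closed-form leave-one-out MSE for a linear output layer
    P = H @ np.linalg.pinv(H.T @ H) @ H.T      # hat matrix
    residuals = y - P @ y
    return np.mean((residuals / (1.0 - np.diag(P))) ** 2)

def train_op_elm(X, y, n_hidden):
    # Draw random hidden weights, then greedily prune neurons while LOO improves
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = elm_hidden(X, W, b)
    keep = list(range(n_hidden))
    best = loo_mse(H[:, keep], y)
    improved = True
    while improved and len(keep) > 1:
        improved = False
        for j in list(keep):
            trial = [k for k in keep if k != j]
            err = loo_mse(H[:, trial], y)
            if err < best:
                best, keep, improved = err, trial, True
                break
    beta = np.linalg.pinv(H[:, keep]) @ y      # output weights on pruned layer
    return W, b, keep, beta, best
```

Because the LOO error is available in closed form, each candidate model dimension can be scored without an explicit validation split, which is what makes the pruning step cheap.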

The LSTM-RNN models were trained with different numbers of stacked hidden LSTM units; the range of hidden units tested was kept consistent across all experiments. As with the OP-ELM, the performance results for the model with the lowest UCLF forecast errors were captured.
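To make "stacked hidden LSTM units" concrete, the sketch below shows a forward pass through a stack of LSTM layers with a linear read-out, implemented from scratch in NumPy. The cell equations are the standard LSTM gates; the layer sizes, names, and read-out are illustrative assumptions (training is omitted), not the configuration used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class LSTMCell:
    # One LSTM layer: four gates computed from the concatenation [x_t, h_{t-1}]
    def __init__(self, n_in, n_hidden):
        self.n_hidden = n_hidden
        scale = 1.0 / np.sqrt(n_in + n_hidden)
        self.W = rng.normal(scale=scale, size=(4, n_in + n_hidden, n_hidden))
        self.b = np.zeros((4, n_hidden))

    def step(self, x, h, c):
        z = np.concatenate([x, h])
        i = sigmoid(z @ self.W[0] + self.b[0])   # input gate
        f = sigmoid(z @ self.W[1] + self.b[1])   # forget gate
        o = sigmoid(z @ self.W[2] + self.b[2])   # output gate
        g = np.tanh(z @ self.W[3] + self.b[3])   # candidate cell state
        c = f * c + i * g
        h = o * np.tanh(c)
        return h, c

def stacked_forecast(series, cells, out_w):
    # Feed a univariate series through stacked LSTM layers; linear read-out
    states = [(np.zeros(c.n_hidden), np.zeros(c.n_hidden)) for c in cells]
    for x_t in series:
        inp = np.array([x_t])
        for k, cell in enumerate(cells):
            h, c = cell.step(inp, *states[k])
            states[k] = (h, c)
            inp = h                              # output feeds the next layer
    return float(inp @ out_w)                    # one-step-ahead forecast
```

Varying the number of `LSTMCell` instances in the stack (and their hidden sizes) corresponds to the model-dimension sweep described above.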

Single-layered DBN models were developed with the number of hidden units varied per model, from a minimum of four to a maximum of sixteen.
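A single-layered DBN reduces to one restricted Boltzmann machine (RBM). As an illustrative sketch only (the training schedule, learning rate, and epoch count below are assumptions), the RBM can be trained with one-step contrastive divergence (CD-1) and its hidden-layer width swept over the 4-to-16 range mentioned above:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_rbm(V, n_hidden, epochs=50, lr=0.05):
    # CD-1 training of a single binary RBM layer on data matrix V (rows in [0,1])
    n_vis = V.shape[1]
    W = rng.normal(scale=0.1, size=(n_vis, n_hidden))
    a = np.zeros(n_vis)        # visible biases
    b = np.zeros(n_hidden)     # hidden biases
    for _ in range(epochs):
        ph = sigmoid(V @ W + b)                        # positive phase
        h = (rng.random(ph.shape) < ph).astype(float)  # sample hidden units
        pv = sigmoid(h @ W.T + a)                      # reconstruction
        ph2 = sigmoid(pv @ W + b)                      # negative phase
        W += lr * (V.T @ ph - pv.T @ ph2) / len(V)
        a += lr * (V - pv).mean(axis=0)
        b += lr * (ph - ph2).mean(axis=0)
    return W, a, b

def reconstruction_error(V, W, a, b):
    # Mean squared error between the data and its one-step reconstruction
    ph = sigmoid(V @ W + b)
    pv = sigmoid(ph @ W.T + a)
    return float(np.mean((V - pv) ** 2))
```

Running `train_rbm` for each `n_hidden` in the sweep and comparing forecast errors mirrors the model-selection procedure used for the other two techniques.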

**Figure 5.** The South African UCLF (MW—normalized): (**a**) UCLF for a period between January 2010 and December 2019; (**b**) monthly periodicity of UCLF between January 2018 and December 2019; (**c**) weekly periodicity for June–July 2019 and November–December 2019.


**Figure 6.** Variables used in the different experiments conducted per technique.

The aggregation ensemble approach was used to combine the three techniques, two at a time. The various parameters of each technique were tuned and combined to form different ensemble models, and the forecast results with the lowest errors were captured per experiment. For each technique and experiment, the other hyperparameters, such as the learning rate and the number of layers, were kept the same. The effect of optimizing these hyperparameters can be investigated in future work.
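The aggregation step itself can be sketched as a weighted combination of the per-technique forecast series. The equal-weight default and the function name below are assumptions; the source does not specify the aggregation weights.

```python
def aggregate(forecasts, weights=None):
    # Aggregation ensemble: weighted combination of per-technique forecast
    # series (equal weights by default). `forecasts` is a list of
    # equal-length forecast lists, one per technique.
    if weights is None:
        weights = [1.0 / len(forecasts)] * len(forecasts)
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to one"
    horizon = len(forecasts[0])
    return [sum(w * f[t] for w, f in zip(weights, forecasts))
            for t in range(horizon)]

# Hypothetical two-technique ensemble over a two-step horizon:
combined = aggregate([[100.0, 110.0], [120.0, 130.0]])  # -> [110.0, 120.0]
```

Pairing the techniques two at a time simply means calling `aggregate` with the corresponding pair of forecast series for each ensemble model.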
