### 2.2.6. Evolutionary DAE (EvoDAE)

Similarly, we implemented a DAE trained by the WWO evolutionary algorithm. WWO is first applied layer by layer to minimize the reconstruction error in Equation (13) of each autoencoder, and is then applied to minimize the RMSE of the whole DAE. The EvoDAE uses the same structure (including the top-level Gaussian mixture model) as the DAE.
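The two-stage scheme can be sketched as follows. This is a minimal illustration, not the paper's implementation: a generic (1+1) evolution strategy stands in for WWO, a single autoencoder layer stands in for the full stack, a plain linear output layer stands in for the top-level Gaussian mixture model, and all sizes, data, and function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Toy training data (sizes and target are hypothetical).
n, d, h = 128, 6, 3
X = rng.random((n, d))
y = X.sum(axis=1, keepdims=True)

def evolve(theta, loss, iters=400, sigma=0.05):
    """Generic (1+1) evolution strategy; in EvoDAE, WWO plays this role."""
    best, best_f = theta, loss(theta)
    for _ in range(iters):
        cand = best + rng.normal(0.0, sigma, best.shape)
        f = loss(cand)
        if f < best_f:
            best, best_f = cand, f
    return best, best_f

# Stage 1: minimize the reconstruction error of one autoencoder layer
# (tied weights: the decoder reuses the transposed encoder matrix).
def recon_loss(theta):
    W = theta[:d * h].reshape(d, h)
    b = theta[d * h:d * h + h]
    c = theta[d * h + h:]
    Z = sigmoid(X @ W + b)
    return float(np.mean((sigmoid(Z @ W.T + c) - X) ** 2))

theta1, _ = evolve(rng.normal(0.0, 0.1, d * h + h + d), recon_loss)

# Stage 2: minimize the RMSE of the whole network, re-evolving the
# pretrained encoder weights together with the output layer.
def rmse_loss(theta):
    W = theta[:d * h].reshape(d, h)
    b = theta[d * h:d * h + h]
    v = theta[d * h + h:d * h + 2 * h]
    v0 = theta[-1]
    Z = sigmoid(X @ W + b)
    return float(np.sqrt(np.mean((Z @ v[:, None] + v0 - y) ** 2)))

theta2 = np.concatenate([theta1[:d * h + h], rng.normal(0.0, 0.1, h + 1)])
rmse0 = rmse_loss(theta2)
theta2, rmse = evolve(theta2, rmse_loss)
```

Because the (1+1) strategy accepts only improving candidates, the final RMSE never exceeds the RMSE at the start of stage 2; WWO pursues the same objective with a population-based search.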

### 2.2.7. Deep Denoising Autoencoder (DDAE)

A denoising autoencoder is a variant of the basic autoencoder. It first randomly adds some noise to an initial input vector **x** to form a corrupted version **x̃**, then encodes **x̃** to a hidden representation **z**, which is finally decoded to a reconstruction **x̂**. The aim of denoising-autoencoder training is to reconstruct a clean "repaired" **x** from the corrupted **x̃**, which can still be expressed by Equation (13). The key difference is that **z** is a deterministic mapping of the corrupted **x̃**, and is therefore the result of a stochastic mapping of the original **x**.
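The corrupt-encode-decode cycle can be sketched in a few lines. This is a minimal sketch under assumptions not stated in the text: additive Gaussian noise as the corruption process, a single tied-weight sigmoid layer, gradient descent on the mean squared reconstruction error, and toy data of hypothetical size.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Toy clean inputs x (all sizes here are hypothetical).
n, d, h = 256, 8, 4
X = rng.random((n, d))

# One denoising autoencoder with tied weights.
W = rng.normal(0.0, 0.1, (d, h))
b = np.zeros(h)
c = np.zeros(d)

def reconstruct(X_in):
    return sigmoid(sigmoid(X_in @ W + b) @ W.T + c)

mse0 = float(np.mean((reconstruct(X) - X) ** 2))  # error before training

lr, noise_std = 0.5, 0.1
for _ in range(300):
    # Corrupt the clean input x to x~ (additive Gaussian noise here).
    X_tilde = X + rng.normal(0.0, noise_std, X.shape)
    Z = sigmoid(X_tilde @ W + b)       # z: deterministic mapping of x~
    X_hat = sigmoid(Z @ W.T + c)       # reconstruction x^
    # The loss compares x^ with the CLEAN x, not with x~.
    d2 = (X_hat - X) * X_hat * (1.0 - X_hat)
    d1 = (d2 @ W) * Z * (1.0 - Z)
    W -= lr * (X_tilde.T @ d1 + d2.T @ Z) / n   # tied-weight gradient
    b -= lr * d1.sum(axis=0) / n
    c -= lr * d2.sum(axis=0) / n

mse = float(np.mean((reconstruct(X) - X) ** 2))
```

The essential point is visible in the loop: the encoder sees the noisy **x̃**, while the reconstruction error is always measured against the clean **x**.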

Similarly, a DDAE [22] consists of a stack of denoising autoencoders, and its training likewise has two stages: the first trains each denoising autoencoder layer by layer, and the second trains the whole DDAE to minimize the RMSE over the training set. For our prediction problem, the DDAE model uses the same structure (including the top-level Gaussian mixture model) as the DAE.
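The greedy layer-wise first stage can be sketched as follows. This is an illustrative sketch, not the method of [22]: the layer widths, corruption level, and gradient-descent training of each denoising autoencoder are assumptions, and the second stage (fine-tuning the whole stack, here topped by the Gaussian mixture model in the paper) is only indicated in a comment.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def train_dae(X, h, epochs=150, lr=0.5, noise=0.1):
    """Train one tied-weight denoising autoencoder by gradient descent."""
    n, d = X.shape
    W = rng.normal(0.0, 0.1, (d, h))
    b, c = np.zeros(h), np.zeros(d)
    for _ in range(epochs):
        Xt = X + rng.normal(0.0, noise, X.shape)  # corrupt the layer input
        Z = sigmoid(Xt @ W + b)
        Xh = sigmoid(Z @ W.T + c)
        d2 = (Xh - X) * Xh * (1.0 - Xh)           # target is the CLEAN input
        d1 = (d2 @ W) * Z * (1.0 - Z)
        W -= lr * (Xt.T @ d1 + d2.T @ Z) / n
        b -= lr * d1.sum(axis=0) / n
        c -= lr * d2.sum(axis=0) / n
    return W, b

# Stage 1: greedy layer-wise pretraining of the stack.
X = rng.random((256, 8))
layers, inp = [], X
for h in (6, 4):                     # hypothetical layer widths
    W, b = train_dae(inp, h)
    layers.append((W, b))
    inp = sigmoid(inp @ W + b)       # this layer's codes feed the next layer

def encode(X_in):
    for W, b in layers:
        X_in = sigmoid(X_in @ W + b)
    return X_in

codes = encode(X)
# Stage 2 would fine-tune all layers jointly (plus the top-level predictor)
# to minimize the RMSE over the training set.
```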
