*2.4. Performance Metrics*

To analyze the integration of the DDM, in addition to the active methods, a passive method was also used, which consisted of retraining the algorithms every 24 h regardless of whether there was a change in the data distribution. These methods were compared for each algorithm using four performance metrics: mean absolute percentage error (MAPE), mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R<sup>2</sup>).
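The passive strategy described above can be sketched as a simple loop over an hourly data stream, refitting on a sliding window every 24 h with no drift check. This is a minimal illustration only; the `fit_model` callable, the window length, and the stream interface are hypothetical placeholders, not the paper's implementation.

```python
from collections import deque

RETRAIN_INTERVAL_H = 24   # retraining period stated in the text
WINDOW_H = 24 * 7         # assumed training-window length (one week); illustrative only

def run_passive(stream, fit_model):
    """Yield a prediction for each hourly sample, refitting every 24 h.

    `stream` yields one sample per hour; `fit_model` takes a list of
    recent samples and returns a callable predictor (both hypothetical).
    """
    window = deque(maxlen=WINDOW_H)
    model = None
    for hour, sample in enumerate(stream):
        if model is not None:
            yield model(sample)          # predict with the current model
        window.append(sample)            # then observe the true sample
        if hour % RETRAIN_INTERVAL_H == RETRAIN_INTERVAL_H - 1:
            model = fit_model(list(window))  # retrain unconditionally
```

An active method would differ only in the last two lines: instead of a fixed schedule, a drift detector such as the DDM would decide when to retrain.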

MAPE measures the accuracy of the estimated values relative to the real values, expressed as a percentage [43], and is determined according to Equation (2).

$$\text{MAPE} = \frac{\sum\_{i=1}^{n} \left| \frac{y\_i - \hat{y}\_i}{y\_i} \right|}{n} \times 100\% \tag{2}$$

MAE is used to assess how close the estimates are to the real results. It is determined by averaging the absolute differences between the estimated values and the real values [44], as shown in Equation (3).

$$\text{MAE} = \frac{\sum\_{i=1}^{n} |y\_i - \hat{y}\_i|}{n} \tag{3}$$

RMSE evaluates the differences between the real and estimated values [45] and is determined according to Equation (4):

$$\text{RMSE} = \sqrt{\frac{\sum\_{i=1}^{n} (y\_i - \hat{y}\_i)^2}{n}} \tag{4}$$

R<sup>2</sup> is a statistical measure of the proportion of the variance in the real values that is explained by the model's estimates (i.e., the degree of linear relationship between the real and estimated values) [46], and is determined according to Equation (5).

$$R^2 = 1 - \frac{\sum\_{i=1}^{n} \left( y\_i - \hat{y}\_i \right)^2}{\sum\_{i=1}^{n} \left( y\_i - \overline{y} \right)^2} \tag{5}$$

where *y<sub>i</sub>* is the real value, *ŷ<sub>i</sub>* is the estimated value, *ȳ* is the average of the real values, and *n* is the total number of estimations.
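As a concrete illustration, Equations (2)–(5) can be computed in a few lines. This is a minimal NumPy sketch (the function name is ours, not part of the study):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Compute MAPE, MAE, RMSE, and R^2 as defined in Equations (2)-(5)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    errors = y_true - y_pred
    mape = np.mean(np.abs(errors / y_true)) * 100  # Eq. (2); assumes no zeros in y_true
    mae = np.mean(np.abs(errors))                  # Eq. (3)
    rmse = np.sqrt(np.mean(errors ** 2))           # Eq. (4)
    # Eq. (5): 1 - residual sum of squares / total sum of squares
    r2 = 1 - np.sum(errors ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return {"MAPE": mape, "MAE": mae, "RMSE": rmse, "R2": r2}
```

For example, `regression_metrics([100, 200, 300], [110, 190, 330])` gives a MAPE of about 8.33%, an MAE of about 16.67, an RMSE of about 19.15, and an R<sup>2</sup> of 0.945.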

These metrics were chosen to provide an overall view of model performance. MAPE was selected because it is easy to interpret, since it is expressed as a percentage; however, because of its limitations (for example, it is undefined when real values are zero), it was accompanied by the MAE, which shows how much error is expected from the forecast on average, helping to determine which models perform better. In turn, because the MAE does not distinguish large errors from small ones, it was combined with the RMSE, which penalizes large errors more heavily. Finally, R<sup>2</sup> was selected to assess how well the data fit the models.
