To compare with other methods, we selected 12 prediction methods and tested them on the same experimental data. Therefore, in this paper, we analyze the experimental results in terms of two aspects: prediction efficiency and prediction performance.
4.2.2. Accuracy Analysis
To compare the advantages and disadvantages of various prediction methods, we choose BR, SVR, gated recurrent unit (GRU), LSTM, EEMD-BR, EEMD-SVR, EEMD-GRU, EEMD-LSTM, eEEMD-BR, eEEMD-SVR, eEEMD-GRU, and eEEMD-LSTM to predict the same experimental data. In the experiment, the penalty coefficient of SVR is set to 1.0. The specific parameters of the eEEMD-LSTM model are shown in
Table 2.
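To make the role of the SVR penalty coefficient concrete, the sketch below evaluates the primal objective of a linear epsilon-SVR, in which C weights the epsilon-insensitive data-fit term against the regularizer. This is a minimal illustration, not the paper's implementation; the toy data, `eps` value, and helper names are assumptions.

```python
import numpy as np

def svr_objective(w, b, X, y, C=1.0, eps=0.1):
    """Primal objective of a linear epsilon-SVR:
    0.5*||w||^2 + C * sum of epsilon-insensitive errors.
    C is the penalty coefficient (set to 1.0 in the experiments)."""
    residuals = np.abs(X @ w + b - y)
    hinge = np.maximum(0.0, residuals - eps)  # epsilon-insensitive loss
    return 0.5 * (w @ w) + C * hinge.sum()

# Toy check: a larger C weights the data-fit term more heavily.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.05 * rng.normal(size=20)
w0, b0 = np.zeros(3), 0.0
loose = svr_objective(w0, b0, X, y, C=1.0)
tight = svr_objective(w0, b0, X, y, C=10.0)
```

With the same (poor) weights, increasing C from 1.0 to 10.0 raises the objective, since every violation of the epsilon tube is penalized ten times as strongly.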
GRU retains the essential architecture of LSTM: a single update gate controls both the forgetting factor and the update of the state unit, dynamically adjusting the time scale and forgetting behavior of different units in the network [22]. Including every comparison method in a single figure would hurt its clarity, so the GRU, EEMD-GRU, and eEEMD-GRU methods are omitted from the figure analysis of this article, and only the evaluation metric results of these three methods are reported in
Table 3.
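The gating mechanism described above can be sketched as a single GRU step in NumPy: the update gate z both retains the old hidden state (forgetting) and admits the new candidate (updating), while the reset gate r modulates how much history the candidate sees. This is a generic GRU cell for illustration, with randomly initialized weights, not the paper's trained network.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, W, U, b):
    """One GRU step. W, U, b hold input, recurrent, and bias parameters
    for the update (z), reset (r), and candidate (n) transforms."""
    z = sigmoid(W["z"] @ x + U["z"] @ h + b["z"])        # update gate
    r = sigmoid(W["r"] @ x + U["r"] @ h + b["r"])        # reset gate
    n = np.tanh(W["n"] @ x + U["n"] @ (r * h) + b["n"])  # candidate state
    return (1.0 - z) * n + z * h  # one gate handles both forget and update

# Toy usage: input size 4, hidden size 3, random weights.
rng = np.random.default_rng(1)
W = {k: rng.normal(scale=0.1, size=(3, 4)) for k in "zrn"}
U = {k: rng.normal(scale=0.1, size=(3, 3)) for k in "zrn"}
b = {k: np.zeros(3) for k in "zrn"}
h = np.zeros(3)
for t in range(5):
    h = gru_cell(rng.normal(size=4), h, W, U, b)
```

Because the new state is a convex combination of the bounded candidate and the previous state, the hidden activations stay within (-1, 1).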
To further compare the prediction performance of the different methods, EEMD and eEEMD are each combined with the BR, SVR, and LSTM models, yielding six combined methods for wind power prediction.
Figure 7 shows the distributions of prediction errors for the EEMD- and eEEMD-based variants of each model. The errors shown in
Figure 7 are the absolute errors of each data point after inverse normalization, and they fluctuate around zero. The figure compares the frequency distributions of prediction errors across methods. In particular,
Figure 7a compares the BR, EEMD-BR, and eEEMD-BR methods, while
Figure 7b,c compare the error frequency distributions of the SVR-related and LSTM-related methods, respectively.
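The per-point errors plotted in these histograms can be reproduced as follows: undo the min-max normalization on both predictions and targets, then take absolute differences. The scaling bounds and toy values below are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Hypothetical min-max bounds taken from the training data (illustrative).
p_min, p_max = 0.0, 1500.0  # wind power range in kW

def inverse_normalize(x, lo, hi):
    """Undo min-max normalization x' = (x - lo) / (hi - lo)."""
    return x * (hi - lo) + lo

# Normalized true values and model predictions (toy data).
y_true_n = np.array([0.20, 0.55, 0.80])
y_pred_n = np.array([0.22, 0.50, 0.79])

# Absolute error of each point after inverse normalization,
# as plotted in the error-distribution histograms.
abs_err = np.abs(inverse_normalize(y_pred_n, p_min, p_max)
                 - inverse_normalize(y_true_n, p_min, p_max))
# abs_err -> [30., 75., 15.] (in kW, under the assumed bounds)
```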
It can be clearly seen from
Figure 7 that, regardless of the base model, introducing EEMD effectively reduces the error: the frequency of errors close to zero increases significantly. Furthermore, the eEEMD-optimized methods concentrate the errors even closer to zero, and the eEEMD-LSTM model has the highest error frequency near zero.
This phenomenon is attributed to the fact that the IMFs obtained from EEMD decomposition have more distinct time-series characteristics than the original dataset; the combined model therefore faces a simpler prediction task, and its prediction error is smaller than that of a single model. By applying differentiated training to the reconstructed components, combining eEEMD with each model achieves a further, significant reduction in prediction error. Our findings indicate that differentiated training may alleviate under-learning and over-learning to some extent. The eEEMD-LSTM model shows the largest improvement in prediction error, while eEEMD-SVR shows the smallest. This suggests that the benefit of the optimization algorithm varies across base methods, but the overall approach remains effective in improving the prediction results.
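The decompose-then-predict idea can be sketched as: split the series into components, forecast each component with its own model, and sum the component forecasts. In this minimal sketch a moving-average trend/residual split stands in for EEMD (which would yield several IMFs plus a residue), and a least-squares AR(1) fit stands in for the per-component predictors; both substitutions are assumptions for illustration only.

```python
import numpy as np

def toy_decompose(x, window=5):
    """Stand-in for EEMD: split the series into a smooth trend
    (moving average) and a high-frequency residual. Real EEMD
    would produce several IMFs plus a residue."""
    kernel = np.ones(window) / window
    trend = np.convolve(x, kernel, mode="same")
    return [x - trend, trend]  # [high-frequency component, trend]

def ar1_forecast(c):
    """Fit c[t] ~ a * c[t-1] by least squares; predict one step ahead."""
    a = (c[:-1] @ c[1:]) / (c[:-1] @ c[:-1])
    return a * c[-1]

rng = np.random.default_rng(2)
t = np.arange(200)
series = np.sin(0.1 * t) + 0.1 * rng.normal(size=200)

# Predict each component separately, then sum the component forecasts.
components = toy_decompose(series)
forecast = sum(ar1_forecast(c) for c in components)
```

The components sum exactly back to the original series, so the combined forecast is a well-defined reconstruction of the one-step prediction.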
To further analyze the forecast trends of the 9 methods, the predicted values of each model are compared with the actual values. The overall forecast trend of the 9 methods on data 1 is shown in
Figure 8, and a partial, zoomed-in view is shown in
Figure 9.
The results of
Figure 8 and
Figure 9 indicate that the prediction performance of a single model is relatively poor. Among the single models, the fitting curve of the BR model deviates the most, while the LSTM model fits best. Although the EEMD decomposition algorithm improves the predictive performance of a single model, the overall prediction is still not ideal. Combining the eEEMD method with each model improves the prediction effect more markedly than EEMD does, and among all eEEMD-based methods, eEEMD-LSTM achieves the best prediction. Additionally,
Figure 8 and
Figure 9 show the prediction trends for the 9 methods, which are consistent with the overall error variation shown in
Figure 7.
The experimental results in
Table 3 were calculated using RMSE, MAE, and MAPE; for all three metrics, smaller values indicate better prediction. The metrics are computed on the normalized data.
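The three metrics have standard definitions, sketched below in NumPy; the toy arrays are illustrative, not values from Table 3.

```python
import numpy as np

def rmse(y, yhat):
    """Root mean squared error."""
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def mae(y, yhat):
    """Mean absolute error."""
    return float(np.mean(np.abs(y - yhat)))

def mape(y, yhat):
    """Mean absolute percentage error; assumes y contains no zeros."""
    return float(np.mean(np.abs((y - yhat) / y)) * 100.0)

# Toy normalized targets and predictions.
y = np.array([0.4, 0.5, 0.8])
yhat = np.array([0.5, 0.5, 0.6])
err_mae = mae(y, yhat)    # (0.1 + 0.0 + 0.2) / 3 = 0.1
err_mape = mape(y, yhat)  # mean(25%, 0%, 25%) ~= 16.67%
```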
Table 3 mainly compares the evaluation metrics of BR, SVR, GRU, and LSTM on the 3 datasets when EEMD and eEEMD are applied, respectively.
Table 3 shows that the standalone BR model has the weakest prediction performance, while LSTM is the most effective of the single models.
In terms of RMSE, EEMD-BR improves by up to 57.9%, EEMD-SVR by 15.7% on average, and EEMD-LSTM by at least 15.1%. For MAE, EEMD-BR improves by up to 58.2%, EEMD-SVR by 15.2% on average, EEMD-GRU by 21.28% on average, and EEMD-LSTM by at least 21.7%. Finally, for MAPE, EEMD-BR improves by up to 67.4%, EEMD-SVR by 83.6% on average, and EEMD-LSTM by at least 23.0%.
When the methods use eEEMD decomposition, eEEMD-SVR shows the most stable improvement in prediction effectiveness, eEEMD-BR shows the most significant improvement, and eEEMD-LSTM produces the best overall prediction results.
Considering RMSE, eEEMD-BR improves by at least 18.6%, and eEEMD-LSTM by up to 34.2%. For MAE, eEEMD-BR improves by at least 17.9%, eEEMD-SVR by 6.8% on average, and eEEMD-LSTM by up to 22.6%. Finally, for MAPE, eEEMD-BR improves by at least 1.3%, eEEMD-SVR by 83.6% on average, eEEMD-GRU by 1.72%, and eEEMD-LSTM by up to 52.1%.
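The improvement percentages quoted throughout are relative reductions in an error metric, which can be computed as follows. The two RMSE values in the example are hypothetical, chosen only so that the result reproduces the 34.2% figure quoted above; they are not the paper's actual metric values.

```python
def improvement(base_err, new_err):
    """Relative reduction of an error metric, in percent:
    100 * (base - new) / base. Positive means the new method is better."""
    return 100.0 * (base_err - new_err) / base_err

# Hypothetical RMSE values: a drop from 0.190 to 0.125
# corresponds to a 34.2% improvement.
gain = improvement(0.190, 0.125)
```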
The predictive performance of a method should not be judged from a single evaluation metric but analyzed from multiple perspectives. The proposed eEEMD-LSTM method is not the strongest on the MAPE metric for data 1 and data 3, but it still outperforms most models there; on all other metrics, eEEMD-LSTM is better than the other models. Analyzing the prediction results of the different models leads to the conclusion that the proposed method offers better prediction performance than the other methods.
To compare the evaluation metrics of the different methods more clearly, the specific values are visualized. The error comparison is shown in
Figure 10 below.
The histograms for the relevant evaluation metrics are shown in
Figure 10. In particular, the red columns depict the RMSE1 of the nine methods (BR, EEMD-BR, eEEMD-BR, SVR, EEMD-SVR, eEEMD-SVR, LSTM, EEMD-LSTM, and eEEMD-LSTM) with data 1. Similarly, the green columns (MAE1) and purple columns (MAPE1) indicate the MAE and MAPE of each model with data 1, respectively. The yellow (RMSE2), blue (MAE2), and pink (MAPE2) columns show the corresponding evaluations with data 2, while the dark green (RMSE3), cyan (MAE3), and orange (MAPE3) columns represent the corresponding evaluations with data 3. Based on
Figure 10, the eEEMD-LSTM method proposed in the present study exhibits the minimum evaluation error, irrespective of the evaluation metric employed.
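A grouped-bar comparison of this kind can be produced as sketched below. The metric values here are random placeholders, the filename is an assumption, and only the data-1 metrics are drawn; the paper's Table 3 numbers (and the remaining six colored series) would replace them.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; the figure is only saved to file
import matplotlib.pyplot as plt
import numpy as np

methods = ["BR", "EEMD-BR", "eEEMD-BR", "SVR", "EEMD-SVR",
           "eEEMD-SVR", "LSTM", "EEMD-LSTM", "eEEMD-LSTM"]
rng = np.random.default_rng(3)
# Placeholder metric values; Table 3 numbers would go here.
rmse1, mae1, mape1 = (rng.uniform(0.05, 0.3, size=9) for _ in range(3))

x = np.arange(len(methods))
w = 0.25  # bar width for the three grouped metrics
fig, ax = plt.subplots(figsize=(10, 4))
ax.bar(x - w, rmse1, w, color="red", label="RMSE1")
ax.bar(x, mae1, w, color="green", label="MAE1")
ax.bar(x + w, mape1, w, color="purple", label="MAPE1")
ax.set_xticks(x, methods, rotation=45, ha="right")
ax.legend()
fig.tight_layout()
fig.savefig("metric_comparison.png")
```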