*3.3. Ensemble Method*

Experience shows that no single training algorithm in machine learning is the best and most accurate for all applications. Each algorithm is a particular model built on certain assumptions, which are sometimes met and sometimes violated; therefore, no single algorithm can succeed in every situation. Ensemble methods were introduced to address this problem. The primary motivation for developing ensemble methods is to reduce the error rate: the forecasting error of an ensemble of techniques is typically much lower than that of a single model. When independent and diverse classifiers are combined, the likelihood of reaching the correct decision increases, provided that each classifier performs better than a random guess.

Hansen and Salamon [49] studied the deployment of multiple models. They proved that, for N independent classifiers each with an error probability e < 0.5, the overall ensemble error E decreases monotonically with N. Conversely, the overall performance degrades significantly when dependent classifiers are used. The methodology consists of two consecutive steps: the training and testing phases. As shown in Figure 3, several predictive models are produced from training samples in the training phase; these predictive models are then combined to make predictions in the testing phase.
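The effect proved by Hansen and Salamon can be illustrated numerically: under the independence assumption, the probability that a majority vote of N classifiers is wrong is a binomial tail, which shrinks as N grows whenever each classifier's error rate e is below 0.5. A minimal sketch (the function name `ensemble_error` and the example error rate 0.3 are illustrative choices, not values from this research):

```python
from math import comb

def ensemble_error(n_classifiers, error_rate):
    """Probability that a majority vote of n independent classifiers,
    each with the given error rate, is wrong (odd n assumed)."""
    k_min = n_classifiers // 2 + 1  # votes needed for a correct majority
    p_correct_majority = sum(
        comb(n_classifiers, k)
        * (1 - error_rate) ** k
        * error_rate ** (n_classifiers - k)
        for k in range(k_min, n_classifiers + 1)
    )
    return 1 - p_correct_majority

# Error of the majority vote shrinks with N when e < 0.5.
for n in (1, 5, 11, 21):
    print(n, round(ensemble_error(n, 0.3), 4))
```

Note that the argument cuts both ways: if each classifier is worse than random (e > 0.5), the same formula shows the majority vote is worse than any single classifier.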

**Figure 3.** Ensemble method flowchart.

Some popular ensemble methods are Boosting, Bagging, and Blending; the Bagging approach is used in this research. There are two main reasons to choose an ensemble model: performance and robustness. An ensemble model can make better forecasts than any single constituent model, and it reduces the spread (variance) of the estimates, making the model's accuracy more robust.
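As a concrete illustration of the Bagging idea, the sketch below trains several decision stumps, each on a bootstrap resample of a toy 1-D dataset, and combines them by majority vote. All names (`fit_stump`, `bagging_fit`, etc.), the stump base learner, and the toy data are illustrative assumptions, not the models used in this research:

```python
import random
from collections import Counter

def fit_stump(xs, ys):
    """Pick the threshold (and direction) on a 1-D feature that
    minimises the number of training errors."""
    best = (None, None, len(ys) + 1)  # (threshold, sign, errors)
    for t in sorted(set(xs)):
        for sign in (1, -1):  # predict class 1 above or below the threshold
            preds = [int((x > t) if sign == 1 else (x <= t)) for x in xs]
            errs = sum(p != y for p, y in zip(preds, ys))
            if errs < best[2]:
                best = (t, sign, errs)
    return best[:2]

def predict_stump(stump, x):
    t, sign = stump
    return int((x > t) if sign == 1 else (x <= t))

def bagging_fit(xs, ys, n_models=25, seed=0):
    """Train each stump on a bootstrap resample (sampling with
    replacement) of the training set."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_models):
        idx = [rng.randrange(len(xs)) for _ in range(len(xs))]
        models.append(fit_stump([xs[i] for i in idx], [ys[i] for i in idx]))
    return models

def bagging_predict(models, x):
    """Combine the stumps' votes by simple majority."""
    votes = Counter(predict_stump(m, x) for m in models)
    return votes.most_common(1)[0][0]

# Toy data: class 1 roughly when x > 5, with two mislabeled points.
xs = [1, 2, 3, 4, 4.5, 5.5, 6, 7, 8, 9]
ys = [0, 0, 0, 1, 0,   1,   0, 1, 1, 1]
models = bagging_fit(xs, ys)
print([bagging_predict(models, x) for x in (2.0, 8.0)])
```

Because each stump sees a different bootstrap sample, the individual thresholds vary, but the majority vote averages out this variability, which is the variance-reduction effect described above.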
