*3.3. Ensemble*

In the ensemble method, numerous weak learners are combined to form a single learner with strong predictive power. In this study, regression trees were used as the weak learners, and the boosting method was used to combine them. Boosting generally performs somewhat better than bagging and yields higher predictive power. In addition, boosting compensates for the shortcomings of the previous regression tree by assigning weights that reduce the errors between its predicted and observed values when the next tree is trained. The least-squares method was used to calculate the error. Least-squares boosting is a sequential ensemble method that builds decision trees one after another, each compensating for the errors of the previous tree, as shown in Figure 6; it is defined as Equation (7) [38,39]:

$$F_T(\mathbf{x}) = \sum_{t=1}^{T} f_t(\mathbf{x}) \tag{7}$$

where **x** is the input variable, *f<sub>t</sub>* is the weak learner, and *F<sub>T</sub>*(**x**) is the strong learner.
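As a concrete illustration, the following Python sketch implements this sequential residual-fitting scheme with shallow regression trees. The number of trees, tree depth, and learning rate are illustrative assumptions, not the settings used in this study.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def ls_boost_fit(X, y, n_trees=100, max_depth=3, learning_rate=0.1):
    """Least-squares boosting: fit shallow regression trees sequentially,
    each to the residual errors left by the trees before it."""
    trees, residual = [], y.astype(float).copy()
    for _ in range(n_trees):
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residual)                        # weak learner f_t targets the current residual
        residual -= learning_rate * tree.predict(X)  # errors left for the next tree to compensate
        trees.append(tree)
    return trees

def ls_boost_predict(trees, X, learning_rate=0.1):
    """Strong learner F_T(x): the sum of the weak learners, as in Equation (7)."""
    return learning_rate * sum(tree.predict(X) for tree in trees)

# Toy usage on synthetic data (illustrative only)
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
y_hat = ls_boost_predict(ls_boost_fit(X, y), X)
```

Here each scaled tree plays the role of one weak learner *f<sub>t</sub>*, so summing their predictions reproduces the additive form of Equation (7).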

**Figure 6.** Procedure of the ensemble method for an input sample.

Each weak learner produces a hypothesis *h*(**x**<sub>i</sub>) as the output for each training-set sample. At each iteration step *t*, one weak learner is selected and assigned a coefficient α<sub>t</sub> so as to minimize the total training error of the resulting *t*-step boosted learner, which is defined as follows:

$$E_t = \sum_i E\left[F_{t-1}(\mathbf{x}_i) + \alpha_t h(\mathbf{x}_i)\right] \tag{8}$$

where *F<sub>t−1</sub>*(**x**) is the boosted learner built up to the previous training stage, *E*(*F*) is the error function, and *f<sub>t</sub>*(**x**) = α<sub>t</sub>*h*(**x**) is the weak learner currently considered for addition to the final learner. At each iteration of the training process, a weight equal to the current error *E*(*F<sub>t−1</sub>*(**x**<sub>i</sub>)) on the training data set is carried over to the next step, in order to compensate for the errors.
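When *E* is the squared-error function, the coefficient α<sub>t</sub> that minimizes Equation (8) has a closed-form solution, since minimizing ∑<sub>i</sub>(*y<sub>i</sub>* − *F<sub>t−1</sub>*(**x**<sub>i</sub>) − α<sub>t</sub>*h*(**x**<sub>i</sub>))² over α<sub>t</sub> gives α<sub>t</sub> = ∑<sub>i</sub> *r<sub>i</sub>h*(**x**<sub>i</sub>) / ∑<sub>i</sub> *h*(**x**<sub>i</sub>)², where *r<sub>i</sub>* is the current residual. The short Python sketch below computes one such boosting step; the function and variable names are hypothetical and chosen only for illustration.

```python
import numpy as np

def boosting_step(y, F_prev, h_pred):
    """One iteration of Equation (8) under squared error.

    y      : observed target values y_i
    F_prev : current ensemble output F_{t-1}(x_i)
    h_pred : candidate weak learner's predictions h(x_i)
    Returns the coefficient alpha_t and the updated output F_t(x_i).
    """
    residual = y - F_prev                          # current errors to be compensated
    alpha = residual @ h_pred / (h_pred @ h_pred)  # closed-form minimizer of the squared error
    return alpha, F_prev + alpha * h_pred          # F_t = F_{t-1} + alpha_t * h
```

The residual computed here is what carries the previous stage's error forward, which is how each new weak learner ends up trained to compensate for the errors of the boosted learner before it.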
