4.1.2. Holdout and 10-Fold Cross-Validations for Overall Accuracy Rate

Figures 4 and 5 show the classification accuracy rate for each activity for the ensemble method with base learners SVM and RF, using the holdout and 10-fold cross-validation methods, respectively, for dataset 1.
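As background for these comparisons, the Random subspace technique trains each base learner on a random subset of the feature columns and combines their predictions by majority vote. A minimal sketch, with a hypothetical 1-nearest-neighbour base learner standing in for SVM/RF (the learner choice, class names, and data here are illustrative, not from the paper):

```python
import random

def sq_dist(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

class OneNN:
    """Hypothetical 1-nearest-neighbour base learner (stand-in for SVM/RF)."""
    def fit(self, X, y):
        self.X, self.y = X, y
        return self
    def predict(self, x):
        i = min(range(len(self.X)), key=lambda j: sq_dist(self.X[j], x))
        return self.y[i]

class RandomSubspace:
    """Train each base learner on a random subset of the feature columns,
    then classify by majority vote over the ensemble members."""
    def __init__(self, n_learners=10, subspace=0.5, seed=0):
        self.n_learners, self.subspace = n_learners, subspace
        self.rng = random.Random(seed)
    def fit(self, X, y):
        n_feat = len(X[0])
        k = max(1, int(self.subspace * n_feat))
        self.members = []
        for _ in range(self.n_learners):
            cols = self.rng.sample(range(n_feat), k)  # random feature subset
            Xs = [[row[c] for c in cols] for row in X]
            self.members.append((cols, OneNN().fit(Xs, y)))
        return self
    def predict(self, x):
        votes = [m.predict([x[c] for c in cols]) for cols, m in self.members]
        return max(set(votes), key=votes.count)  # majority vote

# Toy usage: two well-separated activity classes
X = [[0, 0, 1], [0, 1, 0], [5, 5, 6], [6, 5, 5]]
y = ["A1", "A1", "A2", "A2"]
clf = RandomSubspace(n_learners=5, subspace=0.67, seed=1).fit(X, y)
print(clf.predict([0, 1, 1]))  # close to the A1 cluster
```

Because each member sees a different feature subset, the ensemble's errors are decorrelated, which is why the method benefits strong but diverse base learners such as SVM and RF.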

As shown in Figure 4, for activities A1, A2, A3, and A6, the Random subspace classifier with SVM under the holdout method produced a superior accuracy rate, ranging from 98.4% to 99.9%, compared to RF, which ranged from 97.0% to 99.4%. On the other hand, RF produced better results than SVM for activities A4 and A5, with accuracy rates of 97.1% and 96.9%.
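The per-activity accuracy rates reported here can be read as the fraction of each activity's instances that are classified correctly (i.e., per-class recall); a minimal sketch under that assumption, with illustrative labels:

```python
from collections import defaultdict

def per_activity_accuracy(y_true, y_pred):
    """Fraction of each activity's instances that are correctly classified
    (per-class recall -- one common reading of 'accuracy rate per activity')."""
    correct, total = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    return {a: correct[a] / total[a] for a in total}

# Toy example with two activities
y_true = ["A1", "A1", "A1", "A4", "A4"]
y_pred = ["A1", "A1", "A4", "A4", "A4"]
print(per_activity_accuracy(y_true, y_pred))  # {'A1': 0.666..., 'A4': 1.0}
```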

**Figure 4.** Accuracy rate of each activity holdout method dataset 1.


**Figure 5.** Accuracy rate of each activity 10 fold cross validation method dataset 1.

Results from Figure 5 indicate that the accuracy rate for activities A1, A2, A3, and A6 of the Random subspace classifier with SVM using 10-fold cross-validation is higher, at 98.5% to 99.8%, compared to RF at 97.3% to 99.5%. The accuracy rate for activities A4 and A5 is 97.4% and 97.5% for SVM and 98.1% for RF.

Tables 5 and 6 show the overall accuracy rate of all other ensemble methods with base learners SVM and RF using the holdout method and the 10-fold cross-validation method for dataset 1.


**Table 5.** Overall performance evaluation ensemble methods on the holdout method.

Results in Table 5 demonstrate that the SVM base learner has significantly greater accuracy than RF with Bagging (93.83%, *p* = 0.028). There were no significant differences for the other ensemble methods using the holdout method, nor for any method using the 10-fold cross-validation method, as shown in Table 6 for dataset 1.
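The paper does not name the statistical test behind these comparisons, but the recurring value *p* = 0.028 is what a two-sided Wilcoxon signed-rank test (normal approximation) yields for six paired per-activity accuracies that all favour one learner. A sketch under that assumption, with hypothetical difference values:

```python
import math

def wilcoxon_signed_rank_p(diffs):
    """Two-sided Wilcoxon signed-rank p-value via the normal approximation.
    `diffs` are paired differences (e.g. per-activity accuracy, SVM - RF);
    ties in |diff| are not handled in this minimal sketch."""
    d = [x for x in diffs if x != 0]
    n = len(d)
    ranked = sorted(range(n), key=lambda i: abs(d[i]))
    w_plus = sum(rank + 1 for rank, i in enumerate(ranked) if d[i] > 0)
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mean) / sd
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability

# Six hypothetical per-activity differences, all in SVM's favour
p = wilcoxon_signed_rank_p([0.014, 0.009, 0.005, 0.011, 0.020, 0.006])
print(round(p, 3))  # 0.028
```

When one learner wins on all six activities, W+ = 21 regardless of the magnitudes, so any such set of differences reproduces *p* ≈ 0.028; this matches the identical *p*-values across Tables 5, 11, and 12.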


**Table 6.** Overall performance evaluation ensemble classifiers for 10-fold cross-validation method.

## *4.2. Performance Evaluation of Dataset 2*

4.2.1. Holdout Method for Precision, Recall, F-measure, and ROC Evaluation

Table 7 presents the performance evaluation of the Random subspace classifier with SVM and RF as base learners, the best-performing classifier under the holdout method.


**Table 7.** Performance evaluation of each activity with random subspace classifier on the holdout method.

As shown in Table 7, the Random subspace classifier evaluated with the holdout method obtained the best precision in activity A6, with 100% for both SVM and RF. For the other activities, SVM shows better precision, between 96.7% and 99.8%, compared to RF, which obtained 95.8% to 98.8%. For recall, SVM achieved 100% compared to 99.8% for RF in activity A6. SVM's recall for activities A1, A2, A3, and A4 ranges from 97% to 100%, compared to RF, which obtained 95.8% to 99%. However, RF produced a better recall of 97.2% compared to 97% for SVM in activity A5. The F-measure evaluation shows that SVM obtained 100% compared to 99.9% for RF in activity A6. For activity A4, SVM and RF obtained 97% and 95.9%, respectively, the lowest F-measure compared to activities A1, A2, A3, and A5. Table 8 shows the ROC of each activity of Random subspace with SVM and Random subspace with RF using the holdout method for dataset 2.
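The precision, recall, and F-measure figures above are per-class, one-vs-rest quantities. A minimal sketch of how they are computed from raw label sequences (the labels here are illustrative):

```python
def prf_per_class(y_true, y_pred, cls):
    """Precision, recall, and F-measure for one activity class,
    computed one-vs-rest from the true and predicted label sequences."""
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy example: one A4 instance is confused as A6
y_true = ["A6", "A6", "A4", "A4", "A4"]
y_pred = ["A6", "A6", "A4", "A4", "A6"]
print(prf_per_class(y_true, y_pred, "A4"))  # (1.0, 0.666..., 0.8)
```

The F-measure is the harmonic mean of precision and recall, which is why A4, the activity with the weakest recall in Table 7, also shows the lowest F-measure.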

As shown in Table 8, the ROC evaluation reached 1.000 in activities A1, A2, A3, and A6 for both base learners. For activities A4 and A5, the ROC result for RF was 0.999 in each case, compared to 0.995 and 0.998 for SVM.
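The ROC values in Tables 8 and 10 are one-vs-rest areas under the curve. The AUC can be computed directly via its rank (Mann-Whitney) formulation: the probability that a random instance of the target activity receives a higher score than a random instance of any other activity. A sketch with hypothetical scores:

```python
def auc_ovr(y_true, scores, cls):
    """One-vs-rest ROC AUC via the rank (Mann-Whitney) formulation:
    the probability that a random instance of `cls` scores higher than
    a random instance of any other activity; ties count as half."""
    pos = [s for t, s in zip(y_true, scores) if t == cls]
    neg = [s for t, s in zip(y_true, scores) if t != cls]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical scores for class A6 (e.g. ensemble vote fractions)
y_true = ["A6", "A6", "A4", "A5"]
scores = [0.9, 0.8, 0.3, 0.8]
print(auc_ovr(y_true, scores, "A6"))  # 0.875
```

An AUC of 1.000, as reported for several activities, means every instance of that activity outscored every non-instance, i.e., a perfect ranking.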

**Table 8.** ROC for each activity of the random subspace classifier on holdout method.


4.2.2. 10-Fold Cross Validation for Precision, Recall, F-measure, and ROC Evaluation

The cross-validation method uses k = 10 folds for all activities of dataset 2. Table 9 presents the performance evaluation of Random subspace, the best-performing classifier under the 10-fold cross-validation method.
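The k-fold procedure partitions the data into k folds, trains on k − 1 folds, tests on the held-out fold, and averages the per-fold accuracy. A minimal sketch with k = 10 and a hypothetical majority-label classifier (the classifier and data are illustrative only):

```python
import random

def k_fold_accuracy(X, y, fit_predict, k=10, seed=0):
    """Shuffle once, split into k folds, train on k-1 folds, test on the
    held-out fold, and average the per-fold accuracy."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    accs = []
    for f in folds:
        held = set(f)
        train = [i for i in idx if i not in held]
        preds = fit_predict([X[i] for i in train], [y[i] for i in train],
                            [X[i] for i in f])
        accs.append(sum(p == y[i] for p, i in zip(preds, f)) / len(f))
    return sum(accs) / k

# Hypothetical classifier that always predicts the majority training label
def majority(train_X, train_y, test_X):
    lab = max(set(train_y), key=train_y.count)
    return [lab] * len(test_X)

X = [[i] for i in range(20)]
y = ["A1"] * 14 + ["A2"] * 6
print(k_fold_accuracy(X, y, majority, k=10, seed=1))  # 0.7
```

Because every instance is tested exactly once, 10-fold cross-validation typically gives a lower-variance estimate than a single holdout split, which is consistent with the slightly higher and more stable figures reported for it here.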


**Table 9.** Performance evaluation for each activity of the random subspace classifier on 10-fold cross-validation method.

From Table 9, the precision results for the Random subspace classifier on the 10-fold cross-validation method show that SVM and RF both obtained 99.9% for activity A1 and 100% for activity A6. For activities A2, A3, A4, and A5, SVM obtained between 97.9% and 99.9% and RF between 95.6% and 98.5%. Recall reached 100% for activities A1 and A6 using SVM, compared to RF, which achieved 98.4% and 99.8%. For activities A2, A3, A4, and A5, SVM gained between 98% and 99.8%, whereas RF obtained between 95.2% and 99.2%. The F-measure for activities A1 and A6 with SVM was 100%, compared to RF, which obtained between 98.7% and 99.9%. For activities A2, A3, A4, and A5, SVM obtained between 98% and 99.7%, compared to RF, which produced results between 96.1% and 98.3%.

Table 10 shows the ROC of each activity of Random subspace with SVM and Random subspace with RF using the 10-fold cross-validation method for dataset 2.


**Table 10.** ROC for each activity of the random subspace classifier on 10-fold cross validation method.

As shown in Table 10, the ROC result for activities A1, A2, A3, and A6 for SVM is 1.000, whereas RF produced 1.000 only for A6. SVM recorded 0.999 for activities A4 and A5, while RF achieved 0.998 for activity A4 and 0.999 for A1, A2, A3, and A5.

As can be seen in Tables 7–10, the overall performance in each activity of the Random subspace classifier with SVM as base learner is better than with RF for both the holdout and 10-fold cross-validation methods. The 10-fold cross-validation evaluation gives better results than holdout for both base learners.

Representative ROC curves for SVM and RF are shown in Figures 6 and 7 based on Table 10 for activities A1, A2 and A6.

**Figure 6.** ROC graph activity A1 (**a**), A2 (**b**) and A6 (**c**) of Random subspace with SVM classifier using 10 fold cross validation method dataset 2.

**Figure 7.** ROC graph activity A1 (**a**), A2 (**b**) and A6 (**c**) of Random subspace with RF classifier using 10-fold cross validation method dataset 2.

As can be seen in Figures 6 and 7, SVM produced ROC results of 1.000 for activities A1, A2, and A6, whereas RF produced 1.000 for A6 and 0.999 for A1 and A2.

4.2.3. Holdout and 10-Fold Cross-Validations for Overall Accuracy Rate Classification

Figures 8 and 9 show the classification accuracy rate for each activity for the ensemble method with base learners SVM and RF using the holdout method and the 10-fold cross-validation method for dataset 2.

**Figure 8.** Accuracy rate of each activity of holdout method using dataset 2.

**Figure 9.** Accuracy rate of each activity of 10-fold cross validation method using dataset 2.

As shown in Figure 8, the accuracy rate of activity A6 is 100% for Random subspace classifier with SVM and RF using the holdout method. For activities A1, A2, A3, A4 and A5, the accuracy rate for SVM is between 99% and 100% compared to RF which obtained results between 98.6% and 99.5%.

As shown in Figure 9, the accuracy rate of activity A6 is 100% for Random subspace classifier with SVM and RF using the 10-fold cross validation method. For activities A1, A2, A3, A4 and A5, the accuracy rate for SVM is between 99.9% and 100% compared to RF which obtained results between 98.7% and 99.6%.

Tables 11 and 12 present the overall classification accuracy rate of the other ensemble methods with base learners SVM and RF on the holdout method and the 10-fold cross-validation method for dataset 2.

Results in Table 11 show that the SVM base learner has a significantly greater accuracy rate than RF, with the same probability value (*p*-value), when using Bagging (98.54%, *p* = 0.028), END (98.61%, *p* = 0.028), and Random subspace (98.74%, *p* = 0.028). In this case, Random subspace gives the highest accuracy rate in the holdout method. There was no significant difference in accuracy between SVM and RF when employing Adaboost and RF for the holdout method because of the higher *p*-value. From Table 12, the SVM base learner demonstrates a significantly greater accuracy rate than RF, with Random subspace giving the highest accuracy rate (99.22%, *p* = 0.028). END (99.20%, *p* = 0.028), Adaboost (99.17%, *p* = 0.028), and Bagging (99.07%, *p* = 0.028) also have significantly greater overall accuracy rates than with RF as base learner in the 10-fold cross-validation method.


**Table 11.** Overall performance evaluation of ensemble classifiers for the holdout method.

**Table 12.** Overall performance evaluation of ensemble methods for the 10-fold cross-validation.

