**3. Results**

#### *3.1. Recognition of Simple ADL*

Simple ADL recognition with the IBk method achieved around 80% accuracy across the different combinations of motion and magnetic sensors, as presented in Table 2.
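The core of the IBk method is instance-based k-nearest-neighbour voting: a new sample is labelled by the majority class among its k closest training samples. The following is a minimal sketch of that idea in plain Python; it is not the Weka implementation, and the feature vectors (hypothetical mean and standard deviation of accelerometer magnitude) are invented for illustration only.

```python
from collections import Counter
import math

def euclidean(a, b):
    # straight-line distance between two feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ibk_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest neighbours.
    `train` is a list of (feature_vector, label) pairs."""
    neighbours = sorted(train, key=lambda item: euclidean(item[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Hypothetical (mean, std dev) features for two ADL classes
train = [
    ((0.1, 0.05), "standing"),
    ((0.2, 0.04), "standing"),
    ((1.1, 0.90), "walking"),
    ((1.3, 0.85), "walking"),
]
print(ibk_predict(train, (1.2, 0.8), k=3))  # → walking
```

Because IBk stores the training instances directly, the "training" stage is essentially free, which is consistent with the fast training times reported later in this section.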

**Table 2.** ADL recognition using the Instance Based k-nearest neighbour (IBk) method implemented with Weka software.


AdaBoost is a binary classifier that combines weak classifiers to improve the recognition of different events. The algorithm was applied to the identification of each ADL individually. The results of simple ADL identification with the AdaBoost with decision stump method implemented with Weka software are presented in Table 3, verifying that all of the ADL were recognised with an accuracy between 25.61% (going downstairs, recognised with the accelerometer and magnetometer) and 98.44% (standing, recognised with the accelerometer, magnetometer, and gyroscope).
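The boosting loop behind this method can be sketched compactly: each round fits a decision stump (a one-feature threshold test) to weighted samples, then up-weights the samples the stump misclassified. This is an illustrative simplification in plain Python, not the Weka `AdaBoostM1` implementation; the one-vs-all task and feature values are hypothetical.

```python
import math

def stump_predict(s, x):
    # s = (feature index, threshold, polarity); returns +1 or -1
    f, t, p = s
    return p if x[f] >= t else -p

def train_adaboost(X, y, rounds=5):
    """Simplified AdaBoost with decision stumps; y holds +1/-1 labels."""
    n = len(X)
    w = [1.0 / n] * n                      # sample weights
    ensemble = []                          # (alpha, stump) pairs
    # candidate stumps: every observed value as a threshold, both polarities
    candidates = [(f, x[f], p) for x in X for f in range(len(X[0])) for p in (1, -1)]
    for _ in range(rounds):
        # pick the stump with the lowest weighted error
        best = min(candidates, key=lambda s: sum(
            wi for wi, xi, yi in zip(w, X, y) if stump_predict(s, xi) != yi))
        err = sum(wi for wi, xi, yi in zip(w, X, y) if stump_predict(best, xi) != yi)
        err = min(max(err, 1e-10), 1 - 1e-10)      # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)    # stump weight in the ensemble
        ensemble.append((alpha, best))
        # up-weight the samples this stump got wrong, then renormalise
        w = [wi * math.exp(-alpha * yi * stump_predict(best, xi))
             for wi, xi, yi in zip(w, X, y)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    return 1 if sum(a * stump_predict(s, x) for a, s in ensemble) >= 0 else -1

# Hypothetical one-vs-all task: +1 = "standing", -1 = any other ADL
X = [(0.1,), (0.2,), (0.3,), (1.0,), (1.2,), (1.4,)]
y = [1, 1, 1, -1, -1, -1]
model = train_adaboost(X, y)
print([predict(model, x) for x in X])  # → [1, 1, 1, -1, -1, -1]
```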


**Table 3.** Accuracies of ADL recognition using the AdaBoost with the decision stump method implemented with Weka software.

In addition, Table 4 details the values behind the accuracies in Table 3, presenting the True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN) counts. As this recognition was performed as binary classification, i.e., each class was compared against all records, we verified that the TP and TN values were higher than the others, supporting the reliability of the method.
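For reference, the accuracies in Table 3 follow directly from these four counts. A minimal helper showing the relation (the counts below are hypothetical, chosen only to illustrate the arithmetic):

```python
def binary_metrics(tp, tn, fp, fn):
    """Derive standard metrics from binary confusion-matrix counts."""
    total = tp + tn + fp + fn
    return {
        "accuracy": (tp + tn) / total,
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }

# Hypothetical counts for one one-vs-all ADL classifier
m = binary_metrics(tp=1800, tn=150, fp=40, fn=10)
print(round(m["accuracy"], 4))  # → 0.975
```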

**Table 4.** Confusion matrix values of ADL recognition using the AdaBoost with the decision stump method implemented with Weka software (TP = True Positive; TN = True Negative; FP = False Positive; FN = False Negative).


Moreover, the results of the recognition of simple ADL with AdaBoost with the decision tree implemented with the SMILE framework are presented in Table 5, verifying that all of the ADL presented an accuracy between 83.79% and 99.55% using the different combinations of motion and magnetic sensors.

Additionally, Table 6 details the values behind the accuracies in Table 5, presenting the True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN) counts. As this recognition was performed as binary classification, i.e., each class was compared against all records, we verified that the sum of TP and TN was 2000, which corresponds to the number of samples per activity; however, the method reported a high number of FP.

Finally, the results previously obtained for the recognition of simple ADL with the DNN method implemented with the Deeplearning4j framework are presented in Table 7, verifying that all of the ADL showed an accuracy between 66.70% and 99.35% using the different combinations of motion and magnetic sensors.
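A DNN maps the extracted sensor features to class scores through stacked fully connected layers with non-linear activations. The following is a minimal forward-pass sketch in plain Python; the real Deeplearning4j network's architecture and trained weights are not shown in this section, so the toy 2-2-2 layout and all weights below are hypothetical.

```python
def relu(v):
    # element-wise rectified linear activation
    return [max(0.0, x) for x in v]

def dense(x, W, b):
    # fully connected layer; W has shape input_dim x output_dim
    return [sum(xi * wij for xi, wij in zip(x, col)) + bj
            for col, bj in zip(zip(*W), b)]

def forward(x, layers):
    """Feed-forward pass: ReLU on hidden layers, raw scores at the output."""
    for i, (W, b) in enumerate(layers):
        x = dense(x, W, b)
        if i < len(layers) - 1:
            x = relu(x)
    return x

# Toy 2-2-2 network with hand-picked illustrative weights:
# two input features, one hidden layer, two output class scores.
layers = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),   # hidden layer
    ([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0]),    # output layer
]
print(forward((1.0, 0.0), layers))  # → [1.0, 0.0]
```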


**Table 5.** Accuracies of ADL identification using AdaBoost with the decision tree implemented with the SMILE framework.

**Table 6.** Confusion matrix values of ADL identification using AdaBoost with the decision tree implemented with the SMILE framework (TP = True Positive; TN = True Negative; FP = False Positive; FN = False Negative).


**Table 7.** Accuracies of ADL identification using the DNN method.


#### *3.2. Recognition of Environments*

The IBk method for the recognition of environments using microphone data reported an average accuracy of 41.43%, as presented in Table 8. The remaining results, presented in Table 9, show that the AdaBoost with decision stump method implemented with Weka software achieved an accuracy between 10.36% and 91.78%, that the AdaBoost with decision tree implemented with the SMILE framework reported an accuracy between 88.74% and 99.08%, and that the DNN method implemented with the Deeplearning4j framework presented an accuracy between 19.90% and 98.00%.

In addition, Table 10 details the values behind the accuracies in Table 9, presenting the True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN) counts. As this recognition was performed as binary classification, i.e., each class was compared against all records, we verified that the TP values were higher in the recognition of bar, library, hall, and street, whereas the remaining classes were recognised mainly through their TN values.


**Table 8.** Recognition of environments using the IBk method implemented with Weka software.

**Table 9.** Accuracies of recognition of environments using the AdaBoost and DNN methods.


**Table 10.** Confusion matrix values of the recognition of environments using AdaBoost with the decision stump implemented with Weka software (TP = True Positive; TN = True Negative; FP = False Positive; FN = False Negative).


Furthermore, Table 11 details the values behind the accuracies in Table 9, presenting the True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN) counts. As this recognition was performed as binary classification, i.e., each class was compared against all records, we verified that the TP values were higher in the recognition of bar, library, hall, and street, whereas the remaining classes were recognised mainly through their TN values.


**Table 11.** Confusion matrix values of the recognition of environments using AdaBoost with the decision tree implemented with the SMILE framework (TP = True Positive; TN = True Negative; FP = False Positive; FN = False Negative).

#### *3.3. Recognition of Activities without Motion*

Table 12 presents the results of the recognition of activities without motion with the IBk method, which reported an accuracy between 99.27% and 100% using the data acquired from the accelerometer, magnetometer, gyroscope, GPS receiver, and the previously identified environment.

**Table 12.** Accuracies of the recognition of activities without motion using the IBk method implemented with Weka software.


Furthermore, the results of the recognition of activities without motion with the AdaBoost with decision stump method implemented with Weka software are presented in Tables 13 and 14, verifying that the events were recognised with an accuracy between 98.32% and 100% using the data acquired from the accelerometer, magnetometer, gyroscope, GPS receiver, and the previously identified environment.

**Table 13.** Accuracies of the recognition of activities without motion using the AdaBoost with the decision stump method implemented with Weka software for motion and magnetic sensors after the recognition of the environment.


**Table 14.** Accuracies of the recognition of activities without motion using the AdaBoost with the decision stump method implemented with Weka software for motion, magnetic, and location sensors after the recognition of the environment.


Additionally, Tables 15 and 16 detail the values behind Tables 13 and 14, presenting the True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN) counts. As this recognition was performed as binary classification, i.e., each class was compared against all records, we verified that the TP and TN values were higher than the others, supporting the reliability of the method.

**Table 15.** Confusion matrix values of the recognition of activities without motion using the AdaBoost with the decision stump method implemented with Weka software for motion and magnetic sensors after the recognition of the environment (TP = True Positive; TN = True Negative; FP = False Positive; FN = False Negative).



**Table 16.** Confusion matrix values of the recognition of activities without motion using the AdaBoost with the decision stump method implemented with Weka software for motion, magnetic, and location sensors after the recognition of the environment (TP = True Positive; TN = True Negative; FP = False Positive; FN = False Negative).

Additionally, the results of the recognition of activities without motion with the AdaBoost with decision tree implemented with the SMILE framework are presented in Tables 17 and 18, verifying that the events were recognised with an accuracy between 98.50% and 100% using the data acquired from the accelerometer, magnetometer, gyroscope, GPS receiver, and the previously identified environment.

**Table 17.** Accuracies of the recognition of activities without motion using the AdaBoost with the decision tree implemented with the SMILE framework for motion and magnetic sensors after the recognition of the environment.


**Table 18.** Accuracies of the recognition of activities without motion using the AdaBoost with the decision tree implemented with the SMILE framework for motion, magnetic, and location sensors after the recognition of the environment.


Tables 19 and 20 detail the values behind Tables 17 and 18, presenting the True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN) counts. As this recognition was performed as binary classification, i.e., each class was compared against all records, we verified that the TP and TN values were higher than the others, supporting the reliability of the method.

**Table 19.** Confusion matrix values of the recognition of activities without motion using the AdaBoost with the decision tree implemented with the SMILE framework for motion and magnetic sensors after the recognition of the environment (TP = True Positive; TN = True Negative; FP = False Positive; FN = False Negative).


**Table 20.** Confusion matrix values of the recognition of activities without motion using the AdaBoost with the decision tree implemented with the SMILE framework for motion, magnetic, and location sensors after the recognition of the environment (TP = True Positive; TN = True Negative; FP = False Positive; FN = False Negative).


Finally, the results of the recognition of activities without motion using the DNN method implemented with the Deeplearning4j framework are presented in Tables 21 and 22, verifying that the events were recognised with an accuracy between 79.55% and 98.50% using the data acquired from the accelerometer, magnetometer, gyroscope, GPS receiver, and the previously identified environment.

**Table 21.** Accuracies of the recognition of activities without motion using the DNN method for motion and magnetic sensors after the recognition of the environment.


Based on the results reported, Table 23 presents the average of the results obtained with the different algorithms. As shown, the best results were achieved with the IBk method (99.68%) and AdaBoost with the decision tree as the weak classifier (94.05%).

The training stage was faster with IBk and with AdaBoost with the decision tree than with the previously implemented DNN method; these methods were also simpler to implement and more efficient.


**Table 22.** Accuracies of the recognition of activities without motion using the DNN method for motion, magnetic, and location sensors after the recognition of the environment.


**Table 23.** Average of the accuracy of each implemented method.

Given the limitations of mobile devices, these methods should be integrated into the ADL and environment recognition framework to improve the results provided to the user. The results showed that the recognition of ADL and their environments is possible with the AdaBoost, IBk, and DNN methods. This creates opportunities for a personal digital life coach that monitors different lifestyles, which is relevant to a broad population because mobile devices are widely used and offer possibilities to improve quality of life.

#### **4. Discussion and Conclusions**

The implementations of DNN, IBk, AdaBoost with the decision stump, and AdaBoost with the decision tree were performed successfully on the previously acquired dataset, which was based on the data received from the accelerometer, magnetometer, gyroscope, GPS receiver, and microphone. The framework was composed of data acquisition, data processing, data cleaning, feature extraction, data fusion, and data classification modules, to recognise eight ADL and nine environments.

In general, the overall accuracies of the methods depended on the number of sensors and resources available during data acquisition, so the framework should adapt to the number of sensors available in each mobile device. The methods with an accuracy higher than 90% were the IBk method and AdaBoost with the decision tree as the weak classifier.

The AdaBoost and IBk methods reported the best results because they were less susceptible to overfitting than the DNN method. Notably, one of the reasons for this was AdaBoost's use of a weak classifier, which improved the discrimination of some results.

According to the previously proposed structure of a framework for the recognition of ADL and environments [2,17–25], the main focus of this study was the data classification module, taking into account the implementations of the other modules performed in previous studies. The DNN method had been implemented previously and reported reliable results; still, for the recognition of environments from acoustic data, the results were below expectations, and the method demanded substantial processing resources. For the validation of the different implemented methods, we performed cross-validation with 10 folds.
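The 10-fold validation scheme partitions the samples into ten folds, each serving once as the test set while the remaining nine form the training set; the reported accuracy is the average over the ten runs. A minimal sketch of the fold construction (the round-robin assignment below is one simple strategy, not necessarily the exact split used by Weka or SMILE):

```python
def kfold_indices(n, k=10):
    """Yield (train, test) index lists for k-fold cross-validation.
    Samples are assigned to folds round-robin; each fold is the test
    set exactly once."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i, test in enumerate(folds):
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test

splits = list(kfold_indices(20, k=10))
print(len(splits))      # → 10
print(splits[0][1])     # → [0, 10]
```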

Following the tests of the different methods for the recognition of simple ADL, the best results were achieved with AdaBoost with the decision tree implemented with the SMILE framework, reporting an overall accuracy of 91.33% with all combinations of sensors, although with a high number of FP. For the recognition of environments, the best method was also AdaBoost with the decision tree implemented with the SMILE framework, reporting an overall accuracy of 99.87%, although it did not correctly recognise two environments; in contrast, the AdaBoost with decision stump method implemented with Weka software failed to recognise five environments correctly, reporting an overall accuracy of 32.04%. Finally, in the recognition of activities without motion, the results obtained with AdaBoost with the decision tree implemented with the SMILE framework were the same as those obtained with the DNN method (99.87%).

As future work, these methods should be integrated during the development of the framework for the identification of ADL and their environments, adapting the approach to all the sensors available on mobile devices.

**Author Contributions:** Conceptualization, methodology, software, validation, formal analysis, investigation, writing, original draft preparation, and writing, review and editing: J.M.F., I.M.P., G.M., N.M.G., E.Z., P.L., F.F.-R., S.S., and L.X. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work is funded by FCT/MCTES through national funds and when applicable co-funded EU funds under the project **UIDB/EEA/50008/2020** (*Este trabalho é financiado pela FCT/MCTES através de fundos nacionais e quando aplicável cofinanciado por fundos comunitários no âmbito do projeto UIDB/EEA/50008/2020*).

**Acknowledgments:** This work is funded by FCT/MCTES through national funds and when applicable co-funded EU funds under the project **UIDB/EEA/50008/2020** (*Este trabalho é financiado pela FCT/MCTES através de fundos nacionais e quando aplicável cofinanciado por fundos comunitários no âmbito do projeto UIDB/EEA/50008/2020*). This article is based on work from COST Action IC1303 - AAPELE - Architectures, Algorithms and Protocols for Enhanced Living Environments, and COST Action CA16226 - SHELD-ON- Indoor living space improvement: Smart Habitat for the Elderly, supported by COST (European Cooperation in Science and Technology). More information at www.cost.eu.

**Conflicts of Interest:** The authors declare no conflicts of interest.
