**1. Introduction**

The use of mobile devices during daily activities is increasing [1]. These devices embed several types of sensors, including the accelerometer, magnetometer, gyroscope, Global Positioning System (GPS) receiver, and microphone, which allow the acquisition of various data related to the user [2,3]. These sensors enable the creation of intelligent systems that improve quality of life. Monitoring older adults or people with chronic diseases is one of the most important applications. Such systems can also support sports activities and stimulate the practice of physical activity in teenagers [4]. The development of these systems falls within the research on Ambient Assisted Living (AAL) systems and Enhanced Living Environments (ELE) [5–10].

The automatic recognition of Activities of Daily Living (ADL) is widely researched [11–16]. The previously proposed framework [2,17–25] was tested and validated with different types of Artificial Neural Networks (ANN) [26–28], verifying that the best results were achieved with Deep Neural Networks (DNN). The proposed framework allows the recognition of eight ADL (walking, running, standing, going upstairs, going downstairs, watching television, sleeping, and driving), other activities without motion, and nine environments (bar, classroom, gym, hall, kitchen, library, street, bedroom, and living room). This framework uses the sensors available in mobile devices [29,30], and the reported accuracies vary with the sensors available. The proposed architecture is composed of data acquisition, data processing, data fusion, and data classification modules. The classification module is divided into three stages: (i) the recognition of simple ADL (running, standing, walking, going upstairs, going downstairs, and other activities without motion) with the accelerometer, gyroscope, and magnetometer; (ii) the recognition of environments (bar, classroom, gym, hall, kitchen, library, street, bedroom, and living room) with the microphone data; and (iii) the recognition of activities without motion (sleeping, watching television, driving, and other activities without movement).
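
The three-stage classification flow described above can be sketched as follows. All function names, thresholds, and sensor values here are hypothetical placeholders used only to illustrate how the stages chain together; they are not the framework's actual API or decision rules.

```python
# Hypothetical sketch of the framework's three-stage classification flow.
# Function names and thresholds are illustrative stubs, not the real models.

MOTION_ADL = {"walking", "running", "going upstairs", "going downstairs"}

def classify_motion(accel, gyro, magnet):
    """Stage 1: recognise simple ADL from motion sensors (stub)."""
    return "standing" if max(accel) < 1.5 else "walking"

def classify_environment(audio):
    """Stage 2: recognise the environment from microphone data (stub)."""
    return "bedroom" if max(audio) < 0.1 else "street"

def classify_motionless(adl, environment):
    """Stage 3: refine activities without motion using the environment."""
    if adl != "standing":
        return adl
    return {"bedroom": "sleeping",
            "living room": "watching television"}.get(environment, "other")

def recognise(accel, gyro, magnet, audio):
    adl = classify_motion(accel, gyro, magnet)
    env = classify_environment(audio)
    if adl in MOTION_ADL:
        return adl, env
    return classify_motionless(adl, env), env

print(recognise([0.1, 0.2], [], [], [0.01, 0.02]))  # ('sleeping', 'bedroom')
```

The key design point is that the third stage only runs when the motion sensors report no significant movement, so the environment label disambiguates otherwise indistinguishable motionless activities.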

This research builds on the creation of a framework for the recognition of ADL and their environments. However, its main goal is to test ensemble learning methods in order to further improve the recognition accuracy.

The main contribution of this paper is the implementation of different machine learning methods on the same dataset used for the creation of the framework [31], including AdaBoost [32,33] and Instance-Based k-nearest neighbour (IBk) [34], using different Java-based frameworks, namely Weka [35] and Smile [36]. Finally, the results obtained with the different methods are compared in order to select the best method for implementation in the ADL and environment recognition framework.
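
For illustration, IBk is essentially a k-nearest-neighbour classifier with majority voting over the k closest training instances. A minimal self-contained sketch of that idea (not Weka's actual implementation, and using toy feature values) is:

```python
import math
from collections import Counter

def ibk_predict(train, query, k=3):
    """Minimal k-nearest-neighbour (IBk-style) classifier.

    train: list of (feature_vector, label) pairs.
    query: feature vector to classify.
    Returns the majority label among the k closest training instances.
    """
    # Sort training instances by Euclidean distance to the query.
    neighbours = sorted(train, key=lambda fl: math.dist(fl[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Toy two-dimensional features (hypothetical sensor statistics):
train = [([0.1, 0.2], "standing"), ([0.2, 0.1], "standing"),
         ([2.0, 2.1], "walking"), ([2.2, 1.9], "walking")]
print(ibk_predict(train, [0.15, 0.15]))  # standing
```

In Weka, the corresponding classifier is `weka.classifiers.lazy.IBk`, which additionally supports distance weighting and automatic selection of k.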

The results show that the IBk method implemented with Weka outperformed the other methods, achieving around 77.68% accuracy in the recognition of ADL, 41.43% in the recognition of environments, and 99.73% in the recognition of activities without motion. However, AdaBoost applied with Smile also achieved relevant results, with accuracies between 85.44% (going upstairs) and 99.98% (driving).
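
To make the comparison concrete, AdaBoost builds a strong classifier by training a sequence of weak learners and re-weighting misclassified instances after each round. A minimal sketch with one-dimensional threshold stumps and toy data (illustrative only, not the Smile implementation) is:

```python
import math

def train_adaboost(xs, ys, rounds=10):
    """Minimal AdaBoost with threshold stumps on a 1-D feature.

    xs: feature values; ys: labels in {-1, +1}.
    Returns a list of (threshold, polarity, alpha) weak learners.
    """
    n = len(xs)
    w = [1.0 / n] * n
    model = []
    for _ in range(rounds):
        best = None  # (weighted error, threshold, polarity, predictions)
        for t in sorted(set(xs)):
            for pol in (1, -1):
                preds = [pol if x >= t else -pol for x in xs]
                err = sum(wi for wi, p, y in zip(w, preds, ys) if p != y)
                if best is None or err < best[0]:
                    best = (err, t, pol, preds)
        err, t, pol, preds = best
        if err >= 0.5:          # weak learner is no better than chance
            break
        err = max(err, 1e-10)   # avoid division by zero
        alpha = 0.5 * math.log((1 - err) / err)
        model.append((t, pol, alpha))
        # Re-weight: boost the weight of misclassified instances.
        w = [wi * math.exp(-alpha * y * p) for wi, y, p in zip(w, ys, preds)]
        total = sum(w)
        w = [wi / total for wi in w]
    return model

def predict(model, x):
    score = sum(alpha * (pol if x >= t else -pol) for t, pol, alpha in model)
    return 1 if score >= 0 else -1

xs = [0.5, 1.0, 1.5, 4.0, 4.5, 5.0]
ys = [-1, -1, -1, 1, 1, 1]
model = train_adaboost(xs, ys)
print(predict(model, 0.8), predict(model, 4.2))  # -1 1
```

The per-round weight alpha makes more accurate stumps count more in the final vote, which is why boosting can turn many weak threshold rules into an accurate ensemble.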

Section 2 presents the different methods implemented. Section 3 presents the results and the comparative study. Finally, Section 4 presents the discussion and conclusions.
