Multi-User Activity Recognition Using Plot Images Based on Ambiental Sensors
Abstract
1. Introduction
- Ambiental sensors;
- Wearable sensors.
- Simultaneous activities;
- Collaborative activities;
- Concurrent activities.
- Validating the proposed method on plot images generated from the Aruba CASAS dataset, as in our previous study, and comparing the results with those reported in [13].
- Generating new images based on the proposed method from our study [13] but using the Kyoto Multiresident ADL Activities dataset provided by the Center for Advanced Studies in Adaptive Systems (CASAS).
- Using a new data representation method and comparing the results with the previous method.
- Optimizing the activity recognition neural network and testing several parameters to increase recognition accuracy.
- Recognizing daily activities through a method that can interpret continuous data flows and is therefore compatible with real-time systems.
- Image recognition algorithms are very efficient.
- The graphic representation can take into account the spatial aspect of the activity by depicting it in the form of a map.
- New features can be introduced to improve the graphic representation, such as the temporal spectrum rather than the actual activity execution time (a minimal sketch of the map-style representation follows this list).
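To make the map-style representation concrete, the sketch below renders one window of binary sensor events as a small plot image. It is an illustration only: the sensor IDs and floor-plan coordinates are hypothetical, and matplotlib stands in for the ScottPlot .NET library used in our implementation.

```python
# Minimal sketch: turn one window of binary-sensor events into a map-like image.
# SENSOR_XY is a hypothetical sensor-to-floor-plan mapping, not the CASAS layout.
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display required
import matplotlib.pyplot as plt

SENSOR_XY = {"M001": (1.0, 4.0), "M002": (2.5, 4.0), "M003": (2.5, 1.5)}

def window_to_image(events, path):
    """events: list of (sensor_id, state) pairs observed in one time window."""
    xs, ys = [], []
    for sensor_id, state in events:
        if state == "ON" and sensor_id in SENSOR_XY:
            x, y = SENSOR_XY[sensor_id]
            xs.append(x)
            ys.append(y)
    fig, ax = plt.subplots(figsize=(2.27, 2.27), dpi=100)  # ~227 x 227 pixels
    ax.plot(xs, ys, "o-")  # points plus connecting lines between fired sensors
    ax.set_xlim(0, 5)
    ax.set_ylim(0, 5)
    ax.axis("off")  # the classifier consumes pixels, not axes
    fig.savefig(path)
    plt.close(fig)

window_to_image([("M001", "ON"), ("M002", "ON"), ("M003", "ON")], "window_0001.png")
```

Fixing the axis limits keeps every image on the same spatial scale, so the network sees consistent sensor positions across windows.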
2. Related Works
2.1. Sources of Data Acquisition
2.2. Image Representation of Dataset
2.3. Image Recognition Methods
3. Dataset and Data Representation
3.1. Dataset Information
3.2. Density Map Graphic Representation Method
Algorithm 1 Generating density map plot algorithm
Require: n ▹ data item count from the input dataset file
Ensure: one density map image per activity group g
for each of the n data items do
    b ← the current item marks an activity begin
    if b = true then
        begin a new activity group g
    else
        create the target path and folder
        map the activity group g items to matrix PlotMatrix
        plot PlotMatrix using the Heatmap plot type and save it as a PNG image
    end if
end for
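A compact Python sketch of this procedure is given below. The grid size, the sensor-to-cell mapping, and the matplotlib heatmap rendering are illustrative assumptions; our implementation generates the plots with ScottPlot and the CASAS sensor layout.

```python
# Minimal sketch of Algorithm 1, assuming each sensor maps to a fixed matrix
# cell; cell values count sensor activations within one activity group.
import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
from pathlib import Path

# Hypothetical sensor -> (row, col) positions on a small grid.
SENSOR_CELL = {"M001": (0, 0), "M002": (0, 1), "M003": (1, 1)}

def save_density_map(activity_group, label, index, out_dir="density_plots"):
    """activity_group: sensor IDs fired during one activity; label: class name."""
    plot_matrix = np.zeros((2, 2))
    for sensor_id in activity_group:
        if sensor_id in SENSOR_CELL:
            plot_matrix[SENSOR_CELL[sensor_id]] += 1  # accumulate activations
    target = Path(out_dir) / label  # one folder per activity class
    target.mkdir(parents=True, exist_ok=True)
    fig, ax = plt.subplots()
    ax.imshow(plot_matrix, cmap="hot")  # heatmap-style density rendering
    ax.axis("off")
    fig.savefig(target / f"{index:05d}.png")
    plt.close(fig)

save_density_map(["M001", "M002", "M002"], label="Sweeping", index=1)
```

Writing each image into a folder named after its activity class lets the training pipeline infer labels directly from the directory structure.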
3.3. Coordinates of Sensor Locations Graphical Representation Method
Algorithm 2 Generating scatter plot algorithm
Require: n ▹ data item count from the input dataset file
Ensure: one scatter plot image per activity group g
for each of the n data items do
    for each item in the current activity group do
        collect the item's sensor coordinates
    end for
    if the activity group is complete then
        create the target path and folder
        map the activity group g items to a list of 2D points
        plot the points using the Scatter plot type and save the image as a PNG
    end if
end for
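The sketch below mirrors this procedure in Python. As before, the coordinates and the matplotlib scatter call are stand-ins for the ScottPlot-based implementation, and the (timestamp, sensor ID) event format is a simplification of the CASAS log lines.

```python
# Minimal sketch of Algorithm 2: one scatter image per activity group, with
# each event mapped to a hypothetical 2D sensor location.
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
from pathlib import Path

SENSOR_XY = {"M001": (1.0, 4.0), "M002": (2.5, 4.0), "M003": (2.5, 1.5)}

def save_scatter(events, label, index, out_dir="scatter_plots"):
    """events: list of (timestamp_s, sensor_id) pairs for one activity group."""
    points = [SENSOR_XY[s] for _, s in events if s in SENSOR_XY]
    target = Path(out_dir) / label
    target.mkdir(parents=True, exist_ok=True)
    fig, ax = plt.subplots()
    if points:
        xs, ys = zip(*points)
        ax.scatter(xs, ys)
        ax.plot(xs, ys, linewidth=0.5)  # connecting lines preserve event order
    ax.set_xlim(0, 5)
    ax.set_ylim(0, 5)
    ax.axis("off")
    fig.savefig(target / f"{index:05d}.png")
    plt.close(fig)

save_scatter([(0.0, "M001"), (1.2, "M002"), (2.9, "M003")], label="Checkers", index=1)
```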
3.3.1. Scatter Plot Data Image Generation
3.3.2. Temporal Scatter Plot Data Image Generation
4. Image Recognition Method
5. Results
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
CASAS | Center for Advanced Studies in Adaptive Systems |
HAR | Human Activity Recognition |
UWB | Ultra-Wideband |
RFID | Radio-Frequency Identification |
DT | Decision Tree |
RFo | Random Forest |
ET | Extra Trees |
GRU | Gated Recurrent Unit |
CNN | Convolutional Neural Network |
DNN | Deep Neural Network |
LSTM | Long Short-Term Memory |
CPU | Central Processing Unit |
GPU | Graphics Processing Unit |
CSV | Comma-Separated Values |
References
- Pavliuk, O.; Mishchuk, M.; Strauss, C. Transfer Learning Approach for Human Activity Recognition Based on Continuous Wavelet Transform. Algorithms 2023, 16, 77. [Google Scholar] [CrossRef]
- Webber, J.; Mehbodniya, A.; Arafa, A.; Alwakeel, A. Improved Human Activity Recognition Using Majority Combining of Reduced-Complexity Sensor Branch Classifiers. Electronics 2022, 11, 392. [Google Scholar] [CrossRef]
- Park, H.; Kim, N.; Lee, G.H.; Choi, J.K. MultiCNN-FilterLSTM: Resource-efficient sensor-based human activity recognition in IoT applications. Future Gener. Comput. Syst. 2023, 139, 196–209. [Google Scholar] [CrossRef]
- Czekaj, Ł.; Kowalewski, M.; Domaszewicz, J.; Kitłowski, R.; Szwoch, M.; Duch, W. Real-Time Sensor-Based Human Activity Recognition for eFitness and eHealth Platforms. Sensors 2024, 24, 3891. [Google Scholar] [CrossRef]
- Li, Q.; Gravina, R.; Li, Y.; Alsamhi, S.H.; Sun, F.; Fortino, G. Multi-user activity recognition: Challenges and opportunities. Inf. Fusion 2020, 63, 121–135. [Google Scholar] [CrossRef]
- Bernardo, J.B.L.; Taparugssanagorn, A.; Miyazaki, H.; Pati, B.M.; Thapa, U. Robust Human Activity Recognition for Intelligent Transportation Systems Using Smartphone Sensors: A Position-Independent Approach. Appl. Sci. 2024, 14, 10461. [Google Scholar] [CrossRef]
- Li, Y.; Yang, G.; Su, Z.; Li, S.; Wang, Y. Human activity recognition based on multienvironment sensor data. Inf. Fusion 2023, 91, 47–63. [Google Scholar] [CrossRef]
- Rizk, H.; Elmogy, A.; Rihan, M.; Yamaguchi, H. MultiSenseX: A Sustainable Solution for Multi-Human Activity Recognition and Localization in Smart Environments. AI 2025, 6, 6. [Google Scholar] [CrossRef]
- Herfandi, H.; Sitanggang, O.S.; Nasution, M.R.A.; Nguyen, H.; Jang, Y.M. Real-Time Patient Indoor Health Monitoring and Location Tracking with Optical Camera Communications on the Internet of Medical Things. Appl. Sci. 2024, 14, 1153. [Google Scholar] [CrossRef]
- Dang, X.; Fan, K.; Li, F.; Tang, Y.; Gao, Y.; Wang, Y. Multi-Person Action Recognition Based on Millimeter-Wave Radar Point Cloud. Appl. Sci. 2024, 14, 7253. [Google Scholar] [CrossRef]
- Papadakis, A.; Spyrou, E. A Multi-Modal Egocentric Activity Recognition Approach towards Video Domain Generalization. Sensors 2024, 24, 2491. [Google Scholar] [CrossRef] [PubMed]
- Javadi, S.; Riboni, D.; Borzì, L.; Zolfaghari, S. Graph-Based Methods for Multimodal Indoor Activity Recognition: A Comprehensive Survey. IEEE Trans. Comput. Soc. Syst. 2025; Early Access. [Google Scholar] [CrossRef]
- Alexan, A.; Alexan, A.; Oniga, Ş. Single user activity recognition with Density Activity Abstraction Graphics Algorithm. In Proceedings of the 2022 IEEE 2nd Conference on Information Technology and Data Science (CITDS), Debrecen, Hungary, 16–18 May 2022; pp. 7–12. [Google Scholar] [CrossRef]
- Najeh, H.; Lohr, C.; Leduc, B. Real-Time Human Activity Recognition on Embedded Equipment: A Comparative Study. Appl. Sci. 2024, 14, 2377. [Google Scholar] [CrossRef]
- Ronao, C.A.; Cho, S.B. Human activity recognition with smartphone sensors using deep learning neural networks. Expert Syst. Appl. 2016, 59, 235–244. [Google Scholar] [CrossRef]
- Manjarres, J.; Narvaez, P.; Gasser, K.; Percybrooks, W.; Pardo, M. Physical Workload Tracking Using Human Activity Recognition with Wearable Devices. Sensors 2020, 20, 39. [Google Scholar] [CrossRef] [PubMed]
- He, Z.; Sun, Y.; Zhang, Z. Human Activity Recognition Based on Deep Learning Regardless of Sensor Orientation. Appl. Sci. 2024, 14, 3637. [Google Scholar] [CrossRef]
- Anguita-Molina, M.Á.; Cardoso, P.J.S.; Rodrigues, J.M.F.; Medina-Quero, J.; Polo-Rodríguez, A. Multi-Occupancy Activity Recognition Based on Deep Learning Models Fusing UWB Localisation Heatmaps and Nearby-Sensor Interaction. IEEE Internet Things J. 2025; Early Access. [Google Scholar] [CrossRef]
- Lee, J.S.; Choi, S.; Kwon, O. Identifying multiuser activity with overlapping acoustic data for mobile decision making in smart home environments. Expert Syst. Appl. 2017, 81, 299–308. [Google Scholar] [CrossRef]
- Jain, V.; Gupta, G.; Gupta, M.; Sharma, D.K.; Ghosh, U. Ambient intelligence-based multimodal human action recognition for autonomous systems. ISA Trans. 2023, 132, 94–108. [Google Scholar] [CrossRef] [PubMed]
- Iqbal, J.; Tsagarakis, N.; Caldwell, D. Design of a Wearable Direct-driven Optimized Hand Exoskeleton Device. In Proceedings of the International Conference on Advances in Computer-Human Interactions (ACHI), Gosier, Guadeloupe, France, 23–28 February 2011. [Google Scholar]
- Lian, C.; Zhao, Y.; Sun, T.; Shao, J.; Liu, Y.; Fu, C.; Lyu, X.; Zhan, Z. Incorporating image representation and texture feature for sensor-based gymnastics activity recognition. Knowl.-Based Syst. 2025, 311, 113076. [Google Scholar] [CrossRef]
- Zhao, Y.; Shao, J.; Lin, X.; Sun, T.; Li, J.; Lian, C.; Lyu, X.; Si, B.; Zhan, Z. CIR-DFENet: Incorporating cross-modal image representation and dual-stream feature enhanced network for activity recognition. Expert Syst. Appl. 2025, 266, 125912. [Google Scholar] [CrossRef]
- Qingzheng, C.; Qing, T.; Muchao, Z.; Luyao, M. CNN-based gesture recognition using raw numerical gray-scale images of surface electromyography. Biomed. Signal Process. Control 2025, 101, 107176. [Google Scholar] [CrossRef]
- Saeed, U.; Shah, S.Y.; Shah, S.A.; Liu, H.; Alotaibi, A.A.; Althobaiti, T.; Ramzan, N.; Jan, S.U.; Ahmad, J.; Abbasi, Q.H. Multiple Participants’ Discrete Activity Recognition, Well-Controlled Environment Using Universal Software Radio Peripheral Wireless Sensing. Sensors 2022, 22, 809. [Google Scholar] [CrossRef] [PubMed]
- DeSmet, C.; Greeley, C.; Cook, D.J. Hydra-TS: Enhancing Human Activity Recognition with Multiobjective Synthetic Time-Series Data Generation. IEEE Sens. J. 2025, 25, 763–772. [Google Scholar] [CrossRef]
- Singh, D.; Merdivan, E.; Kropf, J.; Holzinger, A. Class imbalance in multi-resident activity recognition: An evaluative study on explainability of deep learning approaches. Univ. Access Inf. Soc. 2024. [Google Scholar] [CrossRef]
- Matsui, T.; Misaki, S.; Sato, Y.; Fujimoto, M.; Suwa, H.; Yasumoto, K. Multi-person Daily Activity Recognition with Non-contact Sensors based on Activity Co-occurrence. In Proceedings of the 2021 Thirteenth International Conference on Mobile Computing and Ubiquitous Network (ICMU), Tokyo, Japan, 17–19 November 2021; pp. 1–8. [Google Scholar] [CrossRef]
- Wang, T.; Cook, D.J.; Fischer, T.R. The Indoor Predictability of Human Mobility: Estimating Mobility with Smart Home Sensors. IEEE Trans. Emerg. Top. Comput. 2023, 11, 182–193. [Google Scholar] [CrossRef] [PubMed]
- Kyoto Multi-User CASAS Dataset. Available online: https://casas.wsu.edu/datasets/adlmr.zip (accessed on 30 January 2025).
- Aruba Single-User CASAS Dataset. Available online: https://casas.wsu.edu/datasets/aruba.zip (accessed on 30 January 2025).
- Singla, G.; Cook, D.J.; Schmitter-Edgecombe, M. Recognizing Independent and Joint Activities Among Multiple Residents in Smart Environments. J. Ambient. Intell. Humaniz. Comput. 2010, 1, 57–63. [Google Scholar] [CrossRef]
- ScottPlot.NET. Available online: https://scottplot.net/ (accessed on 30 January 2025).
- ScottPlot on GitHub. Available online: https://github.com/scottplot/scottplot/ (accessed on 30 January 2025).
- ScottPlot Nuget Package. Available online: https://www.nuget.org/packages/ScottPlot/ (accessed on 30 January 2025).
- Research Source Code. Available online: https://github.com/AncaAlexan/mpdiCASAS (accessed on 30 January 2025).
- Ketkar, N. Deep Learning with Python: A Hands-On Introduction; Apress: Bangalore, India, 2017; ISBN 978-1-4842-2766-4. [Google Scholar] [CrossRef]
Reference Number | Algorithm | Activities | Dataset | Recognition Rate |
---|---|---|---|---|
[25] | Extra Tree | 16 activities | collected during the research | 98% |
[25] | Random Forest | 16 activities | collected during the research | 97% |
[25] | Decision Tree | 16 activities | collected during the research | 90% |
[18] | CNN + GRU | 8 activities | collected during the research | 99% |
[29] | Markov | daily activities | CASAS single user | 83% |
[29] | Markov | daily activities | CASAS multi-user | 11% |
[26] | Hydra-TS | daily activities | CASAS multi-user | 67% |
[27] | LSTM | daily activities | CASAS multi-user | 39% |
[27] | BiLSTM | daily activities | CASAS multi-user | 37% |
[28] | DNN | 16 activities | collected during the research | 66.7% |
Activity Number | Description | Persons | Activity Type |
---|---|---|---|
1 | Fill medication dispenser in the kitchen using items obtained from the cabinet. Return items to the cabinet when done | Person A | Individual |
2 | Hang up clothes in the hallway closet. The clothes are laid out on the couch in the living room | Person B | Individual |
3 | a. Move the couch and coffee table to the other side of the living room b. Request help from Person A | Person A and B | Collaborative |
4 | Sit on the couch and read a magazine | Person B | Individual |
5 | Water plants located around the apartment. Use the watering can located in the hallway closet. Return the watering can to the closet when finished. | Person A | Individual |
6 | Sweep the kitchen floor using the broom and dust pan located in the kitchen closet. Return the tools to the closet when finished. | Person B | Individual |
7 | Play a game of checkers for a maximum of five minutes | Person A and B | Collaborative |
8 | Set out ingredients for dinner in the kitchen | Person A | Individual |
9 | Set dining room table for dinner | Person B | Individual |
10 | Read a magazine on the living room couch | Person A | Individual |
11 | Simulate paying an electric bill. Retrieve a check, a pen, and an envelope from the cupboard underneath the television in the living room. Use the telephone book in the dining room to look up a number for a utility company to confirm the amount on the bill. a. Person B requests help from Person A to find the number for the utility company b. Person A will stop the current task to help and finish a task when done helping | Person A and B | Collaborative |
12 | Gather food for a picnic from the kitchen cupboard and pack them in a picnic basket | Person A | Individual |
13 | Retrieve dishes from a kitchen cabinet. a. Person B requests help from Person A to identify the cabinet in which the dishes are located. b. Person A will stop the current task to help and finish a task when done helping | Person A and B | Collaborative |
14 | Pack supplies in the picnic basket | Person B | Individual |
15 | Pack food in the picnic basket and bring the basket to the front door of the apartment. | Person A | Individual |
Layer Type | Output Shape |
---|---|
rescaling_1 (Rescaling) | (None, 227, 227, 3) |
conv2d (Conv2D) | (None, 227, 227, 16) |
max_pooling2d (MaxPooling2D) | (None, 113, 113, 16)
conv2d_1 (Conv2D) | (None, 113, 113, 32)
max_pooling2d_1 (MaxPooling2D) | (None, 56, 56, 32)
conv2d_2 (Conv2D) | (None, 56, 56, 64)
max_pooling2d_2 (MaxPooling2D) | (None, 28, 28, 64)
flatten (Flatten) | (None, 50176) |
dense (Dense) | (None, 128) |
dense_1 (Dense) | (None, 10) |
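For reference, the layer stack above can be reproduced in Keras (TensorFlow 2.x) as sketched below. The kernel size of 3, the 'same' padding, the ReLU activations, and the compile settings are assumptions chosen so that the printed shapes match the table; they are not taken from the paper.

```python
# Sketch of the CNN from the table above; shapes printed by summary() match it.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(227, 227, 3)),
    layers.Rescaling(1.0 / 255),                               # rescaling_1
    layers.Conv2D(16, 3, padding="same", activation="relu"),   # conv2d
    layers.MaxPooling2D(),                                     # -> 113 x 113 x 16
    layers.Conv2D(32, 3, padding="same", activation="relu"),   # conv2d_1
    layers.MaxPooling2D(),                                     # -> 56 x 56 x 32
    layers.Conv2D(64, 3, padding="same", activation="relu"),   # conv2d_2
    layers.MaxPooling2D(),                                     # -> 28 x 28 x 64
    layers.Flatten(),                                          # 28 * 28 * 64 = 50176
    layers.Dense(128, activation="relu"),
    layers.Dense(10),                                          # one logit per class
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.summary()
```

The flattened size of 50,176 in the table confirms the 28 × 28 × 64 output of the last pooling layer.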
Dataset | Type Number | Single-User/Multi-User | Windowing Size | Extra Parameters | Image Representation | Training Recognition Rate | Validation Recognition Rate |
---|---|---|---|---|---|---|---|
Aruba CASAS | 1 | Single-user | Activity length | - | Density map | 99% | 99% |
Kyoto CASAS | 2 | Multi-user | 3 s | without overlapping | Density map | 54% | 44% |
Kyoto CASAS | 3 | Multi-user | 8 s | with overlapping 5 s | Density map | 55% | 46% |
Kyoto CASAS | 4 | Multi-user | 8 s | with overlapping 5 s, limitation to two activities per image | Density map | 52% | 45% |
Kyoto CASAS | 5 | Multi-user | 30 s | without overlapping, lines and points | Scatter plot | 75% | 73% |
Kyoto CASAS | 6 | Multi-user | 50 s | without overlapping, lines and points | Scatter plot | 79% | 74% |
Kyoto CASAS | 7 | Multi-user | 30 s | without overlapping, lines and points, normalized temporal aspect | Scatter plot | 68% | 49% |
Kyoto CASAS | 8 | Multi-user | 50 s | without overlapping, lines and points, normalized temporal aspect | Scatter plot | 79% | 56% |
Kyoto CASAS | 9 | Multi-user | 10 s | without overlapping, lines and points, normalized temporal aspect | Scatter plot | 45% | 30% |
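The windowing configurations above can be made concrete with a short sketch: an 8 s window with a 5 s overlap advances with a 3 s stride. The (timestamp, sensor ID) event format is a simplification of the CASAS log lines, and the helper below is illustrative rather than the code used in the study.

```python
# Minimal sketch of fixed-size time windowing with optional overlap.
def time_windows(events, window_s, overlap_s=0.0):
    """Yield event lists falling into [start, start + window_s); events must be
    sorted by timestamp. events: list of (timestamp_s, sensor_id) pairs."""
    if not events:
        return
    stride = window_s - overlap_s  # e.g., 8 s window, 5 s overlap -> 3 s stride
    start = events[0][0]
    last = events[-1][0]
    while start <= last:
        window = [e for e in events if start <= e[0] < start + window_s]
        if window:  # skip empty windows instead of emitting blank images
            yield window
        start += stride

events = [(0.5, "M001"), (2.0, "M002"), (6.5, "M003"), (9.0, "M001")]
for w in time_windows(events, window_s=8.0, overlap_s=5.0):
    print(w)
```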
Dataset 1 | Dataset 2 | T-Statistic Value | p-Value |
---|---|---|---|
Scatter Plot 50 s windowing without overlapping, lines and points | Density map 8 s windowing with overlapping 5 s | 16.97 | |
Scatter Plot 50 s windowing without overlapping, lines and points | Scatter Plot 50 s windowing without overlapping, lines and points, normalized temporal aspect | 0.0 | |
Density map 8 s windowing with overlapping 5 s | Scatter Plot 50 s windowing without overlapping, lines and points, normalized temporal aspect | ||
Density map 8 s windowing with overlapping 5 s | Density map 8 s windowing with overlapping 5 s, limitation to two activities per image | 6.70 |
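The t-statistics above correspond to two-sample t-tests between the recognition-rate samples of the paired configurations. A minimal SciPy sketch of such a test is shown below; the per-run accuracy lists are made-up placeholders, not the measured data.

```python
# Two-sample t-test sketch comparing per-run accuracies of two configurations.
from scipy import stats

scatter_50s = [0.79, 0.78, 0.80, 0.79, 0.78]  # placeholder accuracies
density_8s = [0.55, 0.54, 0.56, 0.55, 0.53]   # placeholder accuracies

t_stat, p_value = stats.ttest_ind(scatter_50s, density_8s)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```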
Dataset | Recognition Rate | Validation Rate | Optimization Parameters |
---|---|---|---|
Kyoto CASAS windowing 50 s and scatter representation | 82% | 77% | four convolutional layers added; pooling parameters modified to 128, 256, 512, 512
Kyoto CASAS windowing 50 s and scatter representation | 80% | 75% | two convolutional layers added; pooling parameters modified to 128, 256
Kyoto CASAS windowing 50 s and scatter representation | 83% | 76% | four convolutional layers added; pooling parameters modified to 128, 256, 256, 256; AdamW optimizer
Kyoto CASAS windowing 50 s and scatter representation | 83% | 78% | four convolutional layers added; pooling parameters modified to 128, 256, 256, 256; Adadelta optimizer
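The last two rows differ only in the optimizer used when compiling the deepened network. A minimal sketch of that swap in Keras follows; the learning rates are assumptions, and AdamW requires TensorFlow 2.11 or newer.

```python
# Sketch: compile the same model with AdamW or Adadelta for comparison runs.
import tensorflow as tf

def compile_with(model, optimizer_name):
    optimizers = {
        "adamw": tf.keras.optimizers.AdamW(learning_rate=1e-3),       # assumed LR
        "adadelta": tf.keras.optimizers.Adadelta(learning_rate=1.0),  # assumed LR
    }
    model.compile(
        optimizer=optimizers[optimizer_name],
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )
    return model
```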
Type Number from Table 4/Reference Number | Method | Recognition Rate |
---|---|---|
6 | 7 convolutional layers + AdamW | 83% |
6 | 7 convolutional layers + Adadelta | 83%
8 | 7 convolutional layers | 79% |
[26] | Hydra-TS, an innovative multi-objective generative adversarial network | 67% |
[27] | LSTM | 39% |
[27] | BiLSTM | 39% |
[28] | DNN 2 steps recognition | 66.7% |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).