Improving Inertial Sensor-Based Activity Recognition in Neurological Populations
Abstract
1. Introduction
- Developing a novel framework that converts inertial sensor time-series data into images (activity images).
- Adopting established image data augmentation techniques to artificially enlarge limited datasets and thereby improve HAR in neurological populations, where access to data can be difficult.
- Verifying the proposed approach on public datasets and conducting experimental pilot studies of single sensor-based HAR on limited HS, PD, and SS datasets.
2. Related Work
Inertial Sensor-Based HAR in Neurological Populations
3. Methodology
3.1. Data Normalization and Numerical to Image Conversion (Initial State)
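For concreteness, the conversion can be sketched in Python as below. This is a minimal illustration rather than the authors' exact pipeline: the 128-sample window length, the per-channel min-max normalization, and the stacking of sensor axes as image rows are assumptions; the 224 × 224 output size matches the CNN input size listed in the architecture table below.

```python
# Minimal sketch: turn one windowed segment of multi-channel inertial data
# into an "activity image" (assumed windowing and layout, not the authors'
# exact procedure).
import numpy as np
from PIL import Image

def window_to_image(window: np.ndarray, size=(224, 224)) -> Image.Image:
    """window: (n_samples, n_channels) array, e.g. 128 x 6 (acc + gyro axes)."""
    mins = window.min(axis=0, keepdims=True)
    maxs = window.max(axis=0, keepdims=True)
    norm = (window - mins) / (maxs - mins + 1e-8)   # per-channel min-max to [0, 1]
    grey = (norm.T * 255).astype(np.uint8)          # channels become image rows
    img = Image.fromarray(grey, mode="L").convert("RGB")
    return img.resize(size, Image.BILINEAR)         # match the 224 x 224 CNN input

rng = np.random.default_rng(0)
segment = rng.standard_normal((128, 6))             # stand-in for one IMU window
window_to_image(segment).save("activity_image.png")
```

Resizing the grey-level matrix to the network input size lets unmodified pre-trained image CNNs consume the inertial signal directly.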
3.2. Data Augmentation
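A minimal augmentation sketch, assuming standard image-space transforms (rotation, mirroring, translation, scaling, and intensity jitter); the specific transform choices and parameter values here are illustrative, not the study's settings. The local dataset tables (Section 4.1) imply a ninefold enlargement, i.e., roughly eight augmented variants generated per original activity image.

```python
# Illustrative image-space augmentation of activity images (assumed transforms
# and parameters; 8 variants + 1 original = the ninefold enlargement implied
# by the local dataset tables).
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),                  # small in-plane rotation
    transforms.RandomHorizontalFlip(p=0.5),                 # mirrors the time axis
    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1), scale=(0.9, 1.1)),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),   # intensity jitter
])

original = Image.open("activity_image.png")                 # from the 3.1 sketch
for i in range(8):                                          # 8 variants per image
    augment(original).save(f"activity_image_aug{i}.png")
```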
3.3. HAR via CNN
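The classification step can then be sketched as transfer learning on a pre-trained image CNN. The snippet below uses PyTorch with ResNet18 standing in for the four networks evaluated in the study; the 5 epochs, learning rate of 0.001, and batch size of 32 are taken from the results tables, while the optimizer (Adam), the `activity_images/train` folder layout, and the preprocessing are assumptions for illustration.

```python
# Transfer-learning sketch (PyTorch stand-in; optimizer and data layout assumed).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 4  # walking, ascent, descent, standing in the local datasets

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)   # new classification head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr from the paper; Adam assumed

preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("activity_images/train", transform=preprocess)  # hypothetical path
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model.train()
for epoch in range(5):                 # 5 epochs, as in the results tables
    for images, labels in loader:      # images: (32, 3, 224, 224)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```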
4. Datasets
4.1. Local Datasets
4.2. UCI-HAR and WISDM Independent Benchmarking Datasets
5. Analytical Procedures
6. Results
6.1. UCI-HAR Datasets
6.2. WISDM Datasets
6.3. Local Datasets (HS Model)
6.4. Local Datasets (PD Model)
6.5. Local Datasets (SS Model)
7. Discussion
7.1. Verification of the Results in Public Datasets
7.1.1. UCI-HAR Dataset
7.1.2. WISDM Dataset
7.2. Verification in Local Datasets
7.3. Limitations and Future Work
8. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Ramamurthy, S.R.; Roy, N. Recent trends in machine learning for human activity recognition—A survey. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2018, 8, e1254.
2. Wang, J.; Chen, Y.; Hao, S.; Peng, X.; Hu, L. Deep learning for sensor-based activity recognition: A survey. Pattern Recognit. Lett. 2019, 119, 3–11.
3. Lima, W.S.; Souto, E.; El-Khatib, K.; Jalali, R.; Gama, J. Human activity recognition using inertial sensors in a smartphone: An overview. Sensors 2019, 19, 3213.
4. Giggins, O.M.; Clay, I.; Walsh, L. Physical activity monitoring in patients with neurological disorders: A review of novel body-worn devices. Digit. Biomark. 2017, 1, 14–42.
5. Jiang, W.; Yin, Z. Human activity recognition using wearable sensors by deep convolutional neural networks. In Proceedings of the 23rd ACM International Conference on Multimedia, Brisbane, Australia, 26–30 October 2015; pp. 1307–1310.
6. Ahmadi, M.; O’Neil, M.; Fragala-Pinkham, M.; Lennon, N.; Trost, S. Machine learning algorithms for activity recognition in ambulant children and adolescents with cerebral palsy. J. Neuroeng. Rehabil. 2018, 15, 1–9.
7. Capela, N.; Lemaire, E.; Baddour, N.; Rudolf, M.; Goljar, N.; Burger, H. Evaluation of a smartphone human activity recognition application with able-bodied and stroke participants. J. Neuroeng. Rehabil. 2016, 13, 1–10.
8. Demrozi, F.; Pravadelli, G.; Bihorac, A.; Rashidi, P. Human activity recognition using inertial, physiological and environmental sensors: A comprehensive survey. IEEE Access 2020, 8, 210816–210836.
9. Hussain, Z.; Sheng, M.; Zhang, W.E. Different approaches for human activity recognition: A survey. arXiv 2019, arXiv:1906.05074.
10. Liu, R.; Ramli, A.A.; Zhang, H.; Datta, E.; Henricson, E.; Liu, X. An overview of human activity recognition using wearable sensors: Healthcare and artificial intelligence. arXiv 2021, arXiv:2103.15990.
11. Rast, F.M.; Labruyère, R. Systematic review on the application of wearable inertial sensors to quantify everyday life motor activity in people with mobility impairments. J. Neuroeng. Rehabil. 2020, 17, 1–19.
12. Masum, A.K.M.; Bahadur, E.H.; Shan-A-Alahi, A.; Chowdhury, M.A.U.Z.; Uddin, M.R.; Al Noman, A. Human activity recognition using accelerometer, gyroscope and magnetometer sensors: Deep neural network approaches. In Proceedings of the 2019 10th International Conference on Computing, Communication and Networking Technologies (ICCCNT), Kanpur, India, 6–8 July 2019; pp. 1–6.
13. Jia, R.; Liu, B. Human daily activity recognition by fusing accelerometer and multi-lead ECG data. In Proceedings of the 2013 IEEE International Conference on Signal Processing, Communication and Computing (ICSPCC 2013), Kunming, China, 5–8 August 2013; pp. 1–4.
14. Liu, J.; Chen, J.; Jiang, H.; Jia, W.; Lin, Q.; Wang, Z. Activity recognition in wearable ECG monitoring aided by accelerometer data. In Proceedings of the 2018 IEEE International Symposium on Circuits and Systems (ISCAS), Florence, Italy, 27–30 May 2018; pp. 1–4.
15. Cheng, J.; Chen, X.; Shen, M. A framework for daily activity monitoring and fall detection based on surface electromyography and accelerometer signals. IEEE J. Biomed. Health Inform. 2012, 17, 38–45.
16. Celik, Y.; Stuart, S.; Woo, W.L.; Godfrey, A. Gait analysis in neurological populations: Progression in the use of wearables. Med. Eng. Phys. 2020, 87, 9–29.
17. Şengül, G.; Ozcelik, E.; Misra, S.; Damaševičius, R.; Maskeliūnas, R. Fusion of smartphone sensor data for classification of daily user activities. Multimed. Tools Appl. 2021, 80, 33527–33546.
18. Issa, M.E.; Helmi, A.M.; Al-Qaness, M.A.; Dahou, A.; Elaziz, M.A.; Damaševičius, R. Human activity recognition based on embedded sensor data fusion for the Internet of Healthcare Things. Healthcare 2022, 10, 1084.
19. Şengül, G.; Karakaya, M.; Misra, S.; Abayomi-Alli, O.O.; Damaševičius, R. Deep learning based fall detection using smartwatches for healthcare applications. Biomed. Signal Process. Control 2022, 71, 103242.
20. Trojaniello, D.; Ravaschio, A.; Hausdorff, J.M.; Cereatti, A. Comparative assessment of different methods for the estimation of gait temporal parameters using a single inertial sensor: Application to elderly, post-stroke, Parkinson’s disease and Huntington’s disease subjects. Gait Posture 2015, 42, 310–316.
21. San, P.P.; Kakar, P.; Li, X.-L.; Krishnaswamy, S.; Yang, J.-B.; Nguyen, M.N. Deep learning for human activity recognition. In Big Data Analytics for Sensor-Network Collected Intelligence; Elsevier: Amsterdam, The Netherlands, 2017; pp. 186–204.
22. Lawal, I.A.; Bano, S. Deep human activity recognition with localisation of wearable sensors. IEEE Access 2020, 8, 155060–155070.
23. O’Brien, M.K.; Shawen, N.; Mummidisetty, C.K.; Kaur, S.; Bo, X.; Poellabauer, C.; Kording, K.; Jayaraman, A. Activity recognition for persons with stroke using mobile phone technology: Toward improved performance in a home setting. J. Med. Internet Res. 2017, 19, e184.
24. Albert, M.V.; Toledo, S.; Shapiro, M.; Koerding, K. Using mobile phones for activity recognition in Parkinson’s patients. Front. Neurol. 2012, 3, 158.
25. Zeng, M.; Nguyen, L.T.; Yu, B.; Mengshoel, O.J.; Zhu, J.; Wu, P.; Zhang, J. Convolutional neural networks for human activity recognition using mobile sensors. In Proceedings of the 6th International Conference on Mobile Computing, Applications and Services, Austin, TX, USA, 6–7 November 2014; pp. 197–205.
26. Huynh, T.; Schiele, B. Analyzing features for activity recognition. In Proceedings of the 2005 Joint Conference on Smart Objects and Ambient Intelligence: Innovative Context-Aware Services: Usages and Technologies; ACM: New York, NY, USA, 2005; pp. 159–163.
27. Capela, N.A.; Lemaire, E.D.; Baddour, N. Feature selection for wearable smartphone-based human activity recognition with able bodied, elderly, and stroke patients. PLoS ONE 2015, 10, e0124414.
28. Catal, C.; Tufekci, S.; Pirmit, E.; Kocabag, G. On the use of ensemble of classifiers for accelerometer-based activity recognition. Appl. Soft Comput. 2015, 37, 1018–1022.
29. Shuvo, M.M.H.; Ahmed, N.; Nouduri, K.; Palaniappan, K. A hybrid approach for human activity recognition with support vector machine and 1D convolutional neural network. In Proceedings of the 2020 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), Washington, DC, USA, 13–15 October 2020; pp. 1–5.
30. Huang, W.; Zhang, L.; Teng, Q.; Song, C.; He, J. The convolutional neural networks training with channel-selectivity for human activity recognition based on sensors. IEEE J. Biomed. Health Inform. 2021, 25, 3834–3843.
31. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
32. Ordóñez, F.J.; Roggen, D. Deep convolutional and LSTM recurrent neural networks for multimodal wearable activity recognition. Sensors 2016, 16, 115.
33. Eyobu, O.S.; Han, D.S. Feature representation and data augmentation for human activity classification based on wearable IMU sensor data using a deep LSTM neural network. Sensors 2018, 18, 2892.
34. Shorten, C.; Khoshgoftaar, T.M. A survey on image data augmentation for deep learning. J. Big Data 2019, 6, 60.
35. Perez, L.; Wang, J. The effectiveness of data augmentation in image classification using deep learning. arXiv 2017, arXiv:1712.04621.
36. Huang, J.; Lin, S.; Wang, N.; Dai, G.; Xie, Y.; Zhou, J. TSE-CNN: A two-stage end-to-end CNN for human activity recognition. IEEE J. Biomed. Health Inform. 2019, 24, 292–299.
37. Alawneh, L.; Alsarhan, T.; Al-Zinati, M.; Al-Ayyoub, M.; Jararweh, Y.; Lu, H. Enhancing human activity recognition using deep learning and time series augmented data. J. Ambient Intell. Humaniz. Comput. 2021, 12, 10565–10580.
38. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 2014, 27.
39. Chawla, N.V.; Bowyer, K.W.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic minority over-sampling technique. J. Artif. Intell. Res. 2002, 16, 321–357.
40. Lawal, I.A.; Bano, S. Deep human activity recognition using wearable sensors. In Proceedings of the 12th ACM International Conference on PErvasive Technologies Related to Assistive Environments; ACM: New York, NY, USA, 2019; pp. 45–48.
41. Aslan, M.F.; Sabanci, K.; Durdu, A. A CNN-based novel solution for determining the survival status of heart failure patients with clinical record data: Numeric to image. Biomed. Signal Process. Control 2021, 68, 102716.
42. Ortiz, J.L.R. Human activity dataset generation. In Smartphone-Based Human Activity Recognition; Springer Theses; Springer: Cham, Switzerland, 2015.
43. Zhang, M.; Sawchuk, A.A. USC-HAD: A daily activity dataset for ubiquitous activity recognition using wearable sensors. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing; ACM: New York, NY, USA, 2012; pp. 1036–1043.
44. Anguita, D.; Ghio, A.; Oneto, L.; Perez, X.P.; Ortiz, J.L.R. A public domain dataset for human activity recognition using smartphones. In Proceedings of the 21st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), Bruges, Belgium, 24–26 April 2013; pp. 437–442.
45. Vågeskar, E. Activity Recognition for Stroke Patients. Master’s Thesis, NTNU, Trondheim, Norway, 2017. Available online: http://hdl.handle.net/11250/2468160 (accessed on 23 November 2022).
46. Albert, M.V.; Azeze, Y.; Courtois, M.; Jayaraman, A. In-lab versus at-home activity recognition in ambulatory subjects with incomplete spinal cord injury. J. Neuroeng. Rehabil. 2017, 14, 1–6.
47. Jimale, A.O.; Noor, M.H.M. Subject variability in sensor-based activity recognition. J. Ambient Intell. Humaniz. Comput. 2021, 1–14.
48. Mikołajczyk, A.; Grochowski, M. Data augmentation for improving deep learning in image classification problem. In Proceedings of the 2018 International Interdisciplinary PhD Workshop (IIPhDW), Swinoujscie, Poland, 9–12 May 2018; pp. 117–122.
49. Shao, J.; Hu, K.; Wang, C.; Xue, X.; Raj, B. Is normalization indispensable for training deep neural networks? Adv. Neural Inf. Process. Syst. 2020, 33, 13434–13444.
50. Banos, O.; Galvez, J.-M.; Damas, M.; Pomares, H.; Rojas, I. Window size impact in human activity recognition. Sensors 2014, 14, 6474–6499.
51. Bianco, S.; Cadene, R.; Celona, L.; Napoletano, P. Benchmark analysis of representative deep neural network architectures. IEEE Access 2018, 6, 64270–64277.
52. Mutegeki, R.; Han, D.S. A CNN-LSTM approach to human activity recognition. In Proceedings of the 2020 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), Fukuoka, Japan, 19–21 February 2020; pp. 362–366.
53. Tufek, N.; Yalcin, M.; Altintas, M.; Kalaoglu, F.; Li, Y.; Bahadir, S.K. Human action recognition using deep learning methods on limited sensory data. IEEE Sens. J. 2019, 20, 3101–3112.
54. Canziani, A.; Paszke, A.; Culurciello, E. An analysis of deep neural network models for practical applications. arXiv 2016, arXiv:1605.07678.
55. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
56. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
57. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861.
58. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.-C. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 4510–4520.
59. Ardakani, A.A.; Kanafi, A.R.; Acharya, U.R.; Khadem, N.; Mohammadi, A. Application of deep learning technique to manage COVID-19 in routine clinical practice using CT images: Results of 10 convolutional neural networks. Comput. Biol. Med. 2020, 121, 103795.
60. Kim, J.-E.; Nam, N.-E.; Shim, J.-S.; Jung, Y.-H.; Cho, B.-H.; Hwang, J.J. Transfer learning via deep neural networks for implant fixture system classification using periapical radiographs. J. Clin. Med. 2020, 9, 1117.
61. Galar, M.; Fernandez, A.; Barrenechea, E.; Bustince, H.; Herrera, F. A review on ensembles for the class imbalance problem: Bagging-, boosting-, and hybrid-based approaches. IEEE Trans. Syst. Man Cybern. Part C 2011, 42, 463–484.
62. López, V.; Fernández, A.; García, S.; Palade, V.; Herrera, F. An insight into classification with imbalanced data: Empirical results and current trends on using data intrinsic characteristics. Inf. Sci. 2013, 250, 113–141.
63. Yen, C.-T.; Liao, J.-X.; Huang, Y.-K. Human daily activity recognition performed using wearable inertial sensors combined with deep learning algorithms. IEEE Access 2020, 8, 174105–174114.
64. Li, H.; Trocan, M. Deep learning of smartphone sensor data for personal health assistance. Microelectron. J. 2019, 88, 164–172.
65. Cho, H.; Yoon, S.M. Divide and conquer-based 1D CNN human activity recognition using test data sharpening. Sensors 2018, 18, 1055.
66. Lonini, L.; Gupta, A.; Kording, K.; Jayaraman, A. Activity recognition in patients with lower limb impairments: Do we need training data from each patient? In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 3265–3268.
67. Hu, X.; Chu, L.; Pei, J.; Liu, W.; Bian, J. Model complexity of deep learning: A survey. Knowl. Inf. Syst. 2021, 63, 2585–2619.
Class distribution of the local datasets before augmentation (percentage of each dataset's total in parentheses).

Dataset | Walking | Ascent | Descent | Standing | Total
---|---|---|---|---|---
HS | 50 (25) | 50 (25) | 49 (25) | 50 (25) | 199 (100)
PD | 81 (29) | 64 (23) | 60 (21) | 75 (27) | 280 (100)
SS | 49 (28) | 18 (11) | 31 (18) | 75 (43) | 173 (100)
Class distribution of the local datasets after image augmentation (a ninefold enlargement of each class).

Dataset | Walking | Ascent | Descent | Standing | Total
---|---|---|---|---|---
HS | 450 | 450 | 441 | 450 | 1791
PD | 729 | 576 | 540 | 675 | 2520
SS | 441 | 162 | 279 | 675 | 1557
Pre-trained CNN architectures evaluated in this study.

CNN Architecture | Depth (Layers) | Size (MB) | Parameters (Millions) | Input Image Size
---|---|---|---|---
ResNet18 | 18 | 44 | 11.7 | 224 × 224
ResNet50 | 50 | 96 | 25.6 | 224 × 224
MobileNetV2 | 54 | 13 | 3.5 | 224 × 224
GoogleNet | 22 | 27 | 7.0 | 224 × 224
Class distributions of the UCI-HAR and WISDM benchmark datasets: original counts and the balanced subsets utilized in this study (percentages in parentheses).

Dataset | Subset | Walking | Ascent | Descent | Sitting | Standing | Laying | Jogging | Total
---|---|---|---|---|---|---|---|---|---
UCI-HAR | Original | 1226 (17) | 1073 (15) | 986 (13) | 1286 (17) | 1374 (19) | 1407 (19) | – | 7352
UCI-HAR | Utilized | 500 (16.6) | 500 (16.6) | 500 (16.6) | 500 (16.6) | 500 (16.6) | 500 (16.6) | – | 3000
WISDM | Original | 424,400 (38.6) | 122,869 (11.2) | 100,427 (9.1) | 59,939 (5.5) | 48,395 (4.4) | – | 342,177 (31.2) | 1,098,207
WISDM | Utilized | 500 (16.6) | 500 (16.6) | 500 (16.6) | 500 (16.6) | 500 (16.6) | – | 500 (16.6) | 3000
Classification results on UCI-HAR in the initial and enhanced states. DL-CNN settings: 5 epochs, 3750 iterations, learning rate 0.001, batch size 32.

Pre-trained network | Initial: Acc. (%) | Sens. | Spec. | F1 | MCC | Enhanced: Acc. (%) | Sens. | Spec. | F1 | MCC | Training time (min)
---|---|---|---|---|---|---|---|---|---|---|---
ResNet18 | 93.3 | 0.929 | 0.987 | 0.928 | 0.915 | 96.1 | 0.960 | 0.992 | 0.961 | 0.953 | 89.26
ResNet50 | 91.8 | 0.914 | 0.984 | 0.911 | 0.897 | 97.0 | 0.970 | 0.994 | 0.970 | 0.964 | 165.38
MobileNetV2 | 90.7 | 0.903 | 0.982 | 0.899 | 0.883 | 96.2 | 0.962 | 0.992 | 0.962 | 0.954 | 143.41
GoogleNet | 81.0 | 0.803 | 0.962 | 0.800 | 0.771 | 91.9 | 0.919 | 0.984 | 0.918 | 0.903 | 75.55
UCI-HAR confusion matrices in the initial (left) and enhanced (right) states (rows: true class; columns: predicted class).

True class | Walking | Ascent | Descent | Sitting | Standing | Laying | True class | Walking | Ascent | Descent | Sitting | Standing | Laying
---|---|---|---|---|---|---|---|---|---|---|---|---|---
Walking | 106 | 0 | 0 | 0 | 0 | 0 | Walking | 494 | 0 | 1 | 0 | 0 | 0
Ascent | 0 | 108 | 0 | 0 | 0 | 0 | Ascent | 1 | 547 | 3 | 0 | 0 | 0
Descent | 0 | 1 | 106 | 0 | 0 | 0 | Descent | 1 | 0 | 477 | 0 | 0 | 0
Sitting | 0 | 0 | 0 | 89 | 3 | 12 | Sitting | 0 | 0 | 1 | 465 | 12 | 17
Standing | 1 | 1 | 0 | 14 | 66 | 11 | Standing | 0 | 0 | 0 | 19 | 461 | 11
Laying | 0 | 0 | 0 | 2 | 4 | 76 | Laying | 0 | 0 | 0 | 15 | 8 | 467
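The summary metrics in the results tables follow from such confusion matrices under standard one-vs-rest definitions, macro-averaged over classes. The averaging scheme is inferred rather than stated, but it reproduces the reported values exactly: the initial-state matrix above yields the ResNet50 row of the preceding table (Acc. 91.8%, Sens. 0.914, Spec. 0.984, F1 0.911, MCC 0.897). A minimal NumPy sketch:

```python
# Compute accuracy and macro-averaged one-vs-rest sensitivity, specificity,
# F1 and MCC from a multiclass confusion matrix (rows assumed to be true
# classes, columns predictions).
import numpy as np

def metrics_from_confusion(cm):
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    fn = cm.sum(axis=1) - tp
    fp = cm.sum(axis=0) - tp
    tn = cm.sum() - (tp + fn + fp)
    acc = tp.sum() / cm.sum()
    sens = np.mean(tp / (tp + fn))                       # macro recall
    spec = np.mean(tn / (tn + fp))
    f1 = np.mean(2 * tp / (2 * tp + fp + fn))
    mcc = np.mean((tp * tn - fp * fn) /
                  np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return acc, sens, spec, f1, mcc

# Initial-state UCI-HAR matrix above -> (0.918, 0.914, 0.984, 0.911, 0.897)
cm = [[106, 0, 0, 0, 0, 0],
      [0, 108, 0, 0, 0, 0],
      [0, 1, 106, 0, 0, 0],
      [0, 0, 0, 89, 3, 12],
      [1, 1, 0, 14, 66, 11],
      [0, 0, 0, 2, 4, 76]]
print(metrics_from_confusion(cm))
```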
Classification results on WISDM in the initial and enhanced states. DL-CNN settings: 5 epochs, 3750 iterations, learning rate 0.001, batch size 32.

Pre-trained network | Initial: Acc. (%) | Sens. | Spec. | F1 | MCC | Enhanced: Acc. (%) | Sens. | Spec. | F1 | MCC | Training time (min)
---|---|---|---|---|---|---|---|---|---|---|---
ResNet18 | 83.5 | 0.832 | 0.967 | 0.828 | 0.799 | 95.8 | 0.958 | 0.992 | 0.958 | 0.949 | 72.2
ResNet50 | 86.0 | 0.854 | 0.972 | 0.854 | 0.827 | 95.4 | 0.953 | 0.991 | 0.953 | 0.944 | 163.49
MobileNetV2 | 82.7 | 0.821 | 0.965 | 0.821 | 0.787 | 95.4 | 0.953 | 0.991 | 0.953 | 0.944 | 129.52
GoogleNet | 71.5 | 0.719 | 0.943 | 0.718 | 0.678 | 89.3 | 0.891 | 0.979 | 0.892 | 0.871 | 80.27
WISDM confusion matrices in the initial (left) and enhanced (right) states (rows: true class; columns: predicted class).

True class | Jogging | Walking | Ascent | Descent | Sitting | Standing | True class | Jogging | Walking | Ascent | Descent | Sitting | Standing
---|---|---|---|---|---|---|---|---|---|---|---|---|---
Jogging | 100 | 2 | 0 | 3 | 0 | 1 | Jogging | 488 | 3 | 3 | 1 | 0 | 0
Walking | 0 | 106 | 0 | 2 | 0 | 0 | Walking | 0 | 546 | 0 | 5 | 0 | 0
Ascent | 3 | 4 | 85 | 12 | 0 | 3 | Ascent | 5 | 6 | 453 | 8 | 2 | 4
Descent | 3 | 4 | 8 | 79 | 5 | 5 | Descent | 0 | 6 | 20 | 457 | 4 | 8
Sitting | 0 | 0 | 0 | 1 | 60 | 32 | Sitting | 0 | 0 | 3 | 1 | 468 | 19
Standing | 0 | 1 | 0 | 0 | 10 | 71 | Standing | 0 | 0 | 4 | 2 | 21 | 463
Classification results on the local HS dataset in the initial and enhanced states. DL-CNN settings: 5 epochs, 190 iterations, learning rate 0.001, batch size 32.

Pre-trained network | Initial: Acc. (%) | Sens. | Spec. | F1 | MCC | Enhanced: Acc. (%) | Sens. | Spec. | F1 | MCC
---|---|---|---|---|---|---|---|---|---|---
ResNet18 | 80.0 | 0.821 | 0.936 | 0.803 | 0.753 | 99.7 | 0.997 | 0.999 | 0.997 | 0.996
ResNet50 | 82.5 | 0.827 | 0.942 | 0.822 | 0.765 | 100.0 | 1.000 | 1.000 | 1.000 | 1.000
MobileNetV2 | 85.0 | 0.863 | 0.951 | 0.852 | 0.810 | 97.5 | 0.975 | 0.991 | 0.975 | 0.967
GoogleNet | 42.5 | 0.358 | 0.798 | 0.313 | 0.224 | 95.3 | 0.953 | 0.984 | 0.952 | 0.937
HS confusion matrices in the initial (left) and enhanced (right) states (rows: true class; columns: predicted class).

True class | Ascent | Descent | Walking | Standing | True class | Ascent | Descent | Walking | Standing
---|---|---|---|---|---|---|---|---|---
Ascent | 9 | 2 | 1 | 0 | Ascent | 86 | 0 | 0 | 0
Descent | 2 | 5 | 0 | 0 | Descent | 0 | 95 | 0 | 0
Walking | 0 | 2 | 11 | 0 | Walking | 0 | 0 | 91 | 0
Standing | 0 | 0 | 0 | 8 | Standing | 0 | 0 | 0 | 87
Classification results on the local PD dataset in the initial and enhanced states. DL-CNN settings: 5 epochs, 190 iterations, learning rate 0.001, batch size 32.

Pre-trained network | Initial: Acc. (%) | Sens. | Spec. | F1 | MCC | Enhanced: Acc. (%) | Sens. | Spec. | F1 | MCC
---|---|---|---|---|---|---|---|---|---|---
ResNet18 | 94.6 | 0.949 | 0.982 | 0.947 | 0.929 | 98.8 | 0.987 | 0.996 | 0.987 | 0.983
ResNet50 | 94.6 | 0.940 | 0.981 | 0.945 | 0.928 | 99.0 | 0.989 | 0.997 | 0.990 | 0.986
MobileNetV2 | 92.9 | 0.936 | 0.976 | 0.931 | 0.908 | 99.2 | 0.992 | 0.997 | 0.992 | 0.989
GoogleNet | 89.3 | 0.895 | 0.964 | 0.896 | 0.864 | 97.61 | 0.973 | 0.991 | 0.978 | 0.975
PD confusion matrices in the initial (left) and enhanced (right) states (rows: true class; columns: predicted class).

True class | Ascent | Descent | Walking | Standing | True class | Ascent | Descent | Walking | Standing
---|---|---|---|---|---|---|---|---|---
Ascent | 15 | 1 | 0 | 2 | Ascent | 121 | 1 | 0 | 2
Descent | 1 | 10 | 0 | 0 | Descent | 0 | 99 | 0 | 0
Walking | 0 | 0 | 13 | 0 | Walking | 0 | 0 | 135 | 0
Standing | 0 | 0 | 0 | 14 | Standing | 0 | 0 | 1 | 145
Classification results on the local SS dataset in the initial and enhanced states. DL-CNN settings: 5 epochs, 190 iterations, learning rate 0.001, batch size 32.

Pre-trained network | Initial: Acc. (%) | Sens. | Spec. | F1 | MCC | Enhanced: Acc. (%) | Sens. | Spec. | F1 | MCC
---|---|---|---|---|---|---|---|---|---|---
ResNet18 | 74.3 | 0.690 | 0.917 | 0.643 | 0.591 | 96.2 | 0.944 | 0.987 | 0.948 | 0.936
ResNet50 | 71.4 | 0.667 | 0.903 | 0.629 | 0.558 | 98.1 | 0.968 | 0.993 | 0.973 | 0.967
MobileNetV2 | 74.3 | 0.690 | 0.913 | 0.650 | 0.590 | 97.4 | 0.960 | 0.992 | 0.960 | 0.952
GoogleNet | 65.7 | 0.500 | 0.874 | 0.563 | 0.516 | 79.8 | 0.655 | 0.927 | 0.656 | 0.647
SS confusion matrices in the initial (left) and enhanced (right) states (rows: true class; columns: predicted class).

True class | Ascent | Descent | Walking | Standing | True class | Ascent | Descent | Walking | Standing
---|---|---|---|---|---|---|---|---|---
Ascent | 1 | 1 | 1 | 4 | Ascent | 36 | 0 | 0 | 3
Descent | 0 | 4 | 0 | 1 | Descent | 1 | 51 | 0 | 1
Walking | 0 | 0 | 12 | 0 | Walking | 0 | 0 | 120 | 0
Standing | 1 | 2 | 0 | 8 | Standing | 0 | 1 | 0 | 99
Comparison with prior studies on the public benchmark datasets.

Study | Method | Augmentation | UCI-HAR Acc. (%) | WISDM Acc. (%)
---|---|---|---|---
Alawneh et al. [37] | RNN | Moving average and exponential smoothing | 97.9–80.0 * | 97.13–83.4 *
Huang et al. [36] | CNN | Novel step-detection-based augmentation (not appropriate for passive activities) | – | 95.7–86.4 *
Yen et al. [63] | CNN | NA | 95.99 | –
Jiang and Yin [5] | CNN | NA | 97.59 | –
Li and Trocan [64] | CNN | NA | 95.75 | –
Cho and Yoon [65] | CNN | Data sharpening | 97.62 | –
Proposed framework | CNN | Numeric-to-image conversion + image augmentation | 97.0–93.3 * | 95.8–86.0 *

* Accuracy with–without data augmentation (enhanced–initial state).