Using Visualization to Evaluate the Performance of Algorithms for Multivariate Time Series Classification
Abstract
1. Introduction
2. Related Work
3. Methodology
3.1. Classification Algorithms
3.1.1. The Random Convolutional Kernel Transform (ROCKET)
3.1.2. Residual Network (ResNet)
3.1.3. InceptionTime
3.1.4. One-Dimensional Convolutional Neural Network (CNN-1D)
3.1.5. CNN-1D-LSTM
3.1.6. Transformers
3.2. Datasets
3.2.1. Epilepsy (EPI)
3.2.2. NATOPS
3.2.3. Articulary Word Recognition (AWR)
3.2.4. SelfRegulationSCP1 (SCP1)
3.2.5. PEMS-SF
3.2.6. Heartbeat (HB)
3.2.7. Face Detection (FD)
3.2.8. Duck Duck Geese (DDG)
3.2.9. Finger Movements (FM)
3.2.10. SelfRegulationSCP2 (SCP2)
3.2.11. Hand Movement Direction (HMD)
3.2.12. Ethanol Concentration (EC)
3.2.13. Basic Motions (BM)
3.2.14. Racket Sports (RS)
3.2.15. Libras (LIB)
3.3. Workflow
4. Experimental Results
4.1. Overall Results
4.2. Individual Results
4.2.1. Results for Epilepsy (EPI)
4.2.2. Results for NATOPS
4.2.3. Results for Articulary Word Recognition (AWR)
4.2.4. Results for SelfRegulationSCP1 (SCP1)
4.2.5. Results for PEMS-SF
4.2.6. Results for Heartbeat (HB)
4.2.7. Results for Face Detection (FD)
4.2.8. Results for Duck Duck Geese (DDG)
4.2.9. Results for Finger Movements (FM)
4.2.10. Results for SelfRegulationSCP2 (SCP2)
4.2.11. Results for Hand Movement Direction (HMD)
4.2.12. Results for Ethanol Concentration (EC)
4.2.13. Results for Basic Motions (BM), Racket Sports (RS) and Libras (LIB)
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
Characteristics of the fifteen datasets used in the experiments (D = number of dimensions, L = series length):

Dataset | Train | Test | Dim (D) | Length (L) | Classes | Train × D | (Train × D)/L | Default Acc. (%)
---|---|---|---|---|---|---|---|---
EPI | 137 | 138 | 3 | 206 | 4 | 411 | 1.99 | 26.8
NATOPS | 180 | 180 | 24 | 51 | 6 | 4320 | 84.7 | 26.81
AWR | 275 | 300 | 9 | 144 | 25 | 2475 | 17.18 | 4.00
SCP1 | 268 | 293 | 6 | 896 | 2 | 1608 | 1.79 | 50.2
PEMS | 267 | 173 | 963 | 144 | 7 | 257,121 | 1785.5 | 17.34
HB | 204 | 205 | 61 | 405 | 2 | 12,444 | 30.7 | 72.19
FD | 5890 | 3524 | 144 | 62 | 2 | 848,160 | 13,680 | 50.0
DDG | 50 | 50 | 1345 | 270 | 5 | 67,250 | 249 | 20.0
FM | 316 | 100 | 28 | 50 | 2 | 8848 | 177 | 51.0
SCP2 | 200 | 180 | 7 | 1152 | 2 | 1400 | 1.22 | 50.0
HMD | 160 | 74 | 10 | 400 | 4 | 1600 | 4 | 40.54
EC | 261 | 263 | 3 | 1751 | 4 | 783 | 0.44 | 25.09
BM | 40 | 40 | 6 | 100 | 4 | 240 | 2.4 | 25.0
RS | 151 | 152 | 6 | 30 | 4 | 906 | 30.2 | 28.3
LIB | 180 | 180 | 2 | 45 | 15 | 360 | 8 | 6.7
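The last two numeric columns are derived: Train × D multiplies the training-set size by the number of dimensions, and (Train × D)/L divides that product by the series length. A quick Python check of the arithmetic for a few rows (figures copied from the table above; the table truncates some decimals rather than rounding):

```python
# Recompute the derived columns of the dataset table from (train size, dimensions D, length L).
datasets = {
    # name: (train, dimensions D, length L)
    "EPI":    (137, 3, 206),
    "NATOPS": (180, 24, 51),
    "PEMS":   (267, 963, 144),
    "FD":     (5890, 144, 62),
}

for name, (train, dim, length) in datasets.items():
    train_x_d = train * dim          # Train x D column
    ratio = train_x_d / length       # (Train x D)/L column
    print(f"{name:7s}  Train x D = {train_x_d:>7,d}   (Train x D)/L = {ratio:,.2f}")

# EPI:    Train x D =     411   (Train x D)/L = 2.00    (table shows the truncated 1.99)
# NATOPS: Train x D =   4,320   (Train x D)/L = 84.71
# PEMS:   Train x D = 257,121   (Train x D)/L = 1,785.56
# FD:     Train x D = 848,160   (Train x D)/L = 13,680.00
```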
Classification accuracy (%) of the six algorithms on each dataset:

Dataset | CNN-1D-LSTM | CNN-1D | Transformers | ResNet | InceptionTime | ROCKET
---|---|---|---|---|---|---
EPI | 82.53 ± 1.79 | 83.48 ± 1.44 | 76.45 ± 3.33 | 94.05 ± 4.30 | 93.62 ± 5.11 | 99.20 ± 0.79
NATOPS | 83.99 ± 1.69 | 91.67 ± 1.94 | 77.07 ± 5.09 | 92.88 ± 2.68 | 94.27 ± 1.14 | 88.32 ± 0.64
AWR | 87.92 ± 2.09 | 92.93 ± 1.26 | 48.78 ± 3.19 | 93.36 ± 5.70 | 97.56 ± 2.22 | 99.29 ± 0.18
SCP1 | 77.74 ± 1.51 | 78.58 ± 0.73 | 75.63 ± 2.91 | 73.03 ± 6.40 | 76.79 ± 8.80 | 84.88 ± 1.02
PEMS | 89.70 ± 2.22 | 88.55 ± 4.27 | 78.09 ± 2.78 | 73.80 ± 6.95 | 73.86 ± 6.21 | 81.26 ± 1.44
HB | 71.70 ± 1.21 | 73.56 ± 1.30 | 73.72 ± 2.13 | 57.56 ± 9.77 | 67.36 ± 9.45 | 74.48 ± 0.94
FD | 63.62 ± 0.79 | 61.14 ± 0.46 | 61.38 ± 1.14 | 55.26 ± 1.14 | 65.92 ± 0.94 | 58.67 ± 0.55
DDG | 51.80 ± 5.62 | 62.0 ± 4.42 | 42.6 ± 6.80 | 59.8 ± 4.46 | 61.2 ± 2.35 | 49.4 ± 3.27
FM | 51.79 ± 3.18 | 51.80 ± 2.09 | 50.4 ± 3.02 | 53.6 ± 4.14 | 55.2 ± 2.82 | 55.1 ± 1.28
SCP2 | 53.93 ± 2.67 | 52.72 ± 1.84 | 51.94 ± 2.80 | 51.10 ± 1.75 | 52.77 ± 2.79 | 55.16 ± 2.06
HMD | 34.99 ± 7.63 | 48.78 ± 3.08 | 34.59 ± 3.99 | 30.4 ± 3.89 | 40.80 ± 1.89 | 50.89 ± 3.57
EC | 26.92 ± 1.90 | 27.65 ± 1.52 | 27.97 ± 2.35 | 27.63 ± 2.18 | 28.09 ± 2.79 | 40.83 ± 1.88
BM | 91.49 ± 6.02 | 96.00 ± 2.23 | 62.99 ± 13.96 | 97.0 ± 6.70 | 55.2 ± 2.82 | 100.0 ± 0.00
RS | 74.60 ± 3.24 | 69.99 ± 4.57 | 44.91 ± 12.85 | 88.68 ± 2.0 | 88.28 ± 1.35 | 91.17 ± 0.59
LIB | 16.22 ± 1.59 | 14.79 ± 2.29 | 12.80 ± 2.39 | 81.66 ± 10.51 | 87.44 ± 0.63 | 90.55 ± 0.39
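For orientation, a single cell of this table (for example, the ROCKET result on Basic Motions) can be approximated with an off-the-shelf pipeline such as sktime. The sketch below is illustrative only: the dataset choice, kernel count, random seed, and use of the single default train/test split are assumptions, not the paper's exact protocol, which reports a mean and deviation over repeated runs.

```python
# Minimal sketch: score a ROCKET classifier on one UEA multivariate dataset with sktime.
# The dataset choice, kernel count, and single default split are assumptions for illustration.
from sktime.datasets import load_UCR_UEA_dataset
from sktime.classification.kernel_based import RocketClassifier
from sklearn.metrics import accuracy_score

# Default train/test split of the BasicMotions (BM) dataset from the UEA archive
X_train, y_train = load_UCR_UEA_dataset("BasicMotions", split="train", return_X_y=True)
X_test, y_test = load_UCR_UEA_dataset("BasicMotions", split="test", return_X_y=True)

clf = RocketClassifier(num_kernels=10_000, random_state=0)  # ROCKET transform + ridge classifier
clf.fit(X_train, y_train)

acc = accuracy_score(y_test, clf.predict(X_test))
print(f"BasicMotions test accuracy: {acc:.3f}")
```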