Article

Explaining and Visualizing Embeddings of One-Dimensional Convolutional Models in Human Activity Recognition Tasks

by
Gustavo Aquino
,
Marly Guimarães Fernandes Costa
and
Cícero Ferreira Fernandes Costa Filho
*
R&D Center in Electronic and Information Technology, Federal University of Amazonas, Manaus 69077-000, Brazil
*
Author to whom correspondence should be addressed.
Sensors 2023, 23(9), 4409; https://doi.org/10.3390/s23094409
Submission received: 28 February 2023 / Revised: 7 April 2023 / Accepted: 27 April 2023 / Published: 30 April 2023
(This article belongs to the Section Internet of Things)

Abstract

Human Activity Recognition (HAR) is a complex problem in deep learning, and One-Dimensional Convolutional Neural Networks (1D CNNs) have emerged as a popular approach for addressing it. These networks efficiently learn features from data that can be used to classify human activities with high performance. However, understanding and explaining the features learned by these networks remains a challenge. This paper presents a novel eXplainable Artificial Intelligence (XAI) method for generating visual explanations of the features learned by 1D CNNs during training, utilizing t-Distributed Stochastic Neighbor Embedding (t-SNE). By applying this method, we provide insight into the decision-making process by visualizing the information obtained from the model’s deepest layer before classification. Our results demonstrate that the features learned from one dataset can be applied to differentiate human activities in other datasets. Our trained networks achieved high performance on two public databases, with 0.98 accuracy on the SHO dataset and 0.93 accuracy on the HAPT dataset. The visualization method proposed in this work offers a powerful means to detect bias issues or explain incorrect predictions. This work introduces a new type of XAI application, enhancing the reliability and practicality of CNN models in real-world scenarios.
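The core idea the abstract describes (extracting the embeddings from the deepest layer before the classifier head and projecting them to two dimensions with t-SNE for visual inspection) can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the embedding dimensionality, window count, and synthetic "activity" clusters are hypothetical stand-ins.

```python
# Hedged sketch: visualizing penultimate-layer embeddings of a 1D CNN
# with t-SNE. The embeddings here are synthetic stand-ins for features
# a trained network would produce for accelerometer windows.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Stand-in for 300 windows' embeddings (64-dim features) drawn from
# three synthetic "activity" clusters separated in feature space.
labels = rng.integers(0, 3, size=300)
embeddings = rng.normal(size=(300, 64)) + labels[:, None] * 5.0

# Project to 2-D for visual inspection; perplexity is a tunable choice.
tsne = TSNE(n_components=2, perplexity=30, init="pca", random_state=0)
projection = tsne.fit_transform(embeddings)

print(projection.shape)  # one 2-D point per input window
```

In practice, `embeddings` would come from a forward pass through the trained 1D CNN truncated before its final classification layer, and the 2-D projection would be scatter-plotted colored by activity label to reveal cluster structure, bias, or misclassified samples.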
Keywords: human activity recognition; accelerometer data; deep learning; one-dimensional convolutional neural networks; embeddings; explainable artificial intelligence; embeddings visualization; t-SNE; visualization

