Human Activity Recognition and Machine Learning

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: closed (15 July 2022) | Viewed by 36328

Special Issue Editor


Dr. Muhammad Muaaz
Guest Editor
Department of Information and Communication Technology, University of Agder, Campus Grimstad, Jon Lilletuns vei, 4879 Grimstad, Norway
Interests: machine learning; deep learning; data analysis; wearable computing; physiological and behavioral biometrics; human activity recognition

Special Issue Information

Dear Colleagues,

The aim of human activity recognition is to detect and recognize dynamic human body movements and activities of an individual or a group of individuals based on sensor observations. Accurate and robust human activity recognition is essential for a multitude of applications in human-computer interaction, human-robot coexistence, assistive technologies for wellbeing, fall detection, rehabilitation, sports, augmented reality, human emotion characterization, behavior analysis, and surveillance.
As a result of this vast and growing number of applications, human activity recognition is one of the most widely studied and active research topics. Over the last couple of decades, a wide range of machine learning methods has been applied to automatically recognize human body movements and activities using different types of sensing techniques, including visual, wearable/embedded, and passive sensing modalities. Recently, the proliferation of sensing technologies and data, combined with deep learning techniques, has allowed scientific and research communities to develop accurate and robust methods and algorithms for single- and multi-person human activity recognition.
This Special Issue focuses on papers (including research articles and surveys) that present up-to-date developments and advancements in the field of human activity recognition and its applications.

Dr. Muhammad Muaaz
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • activity recognition
  • wearable sensing
  • passive sensing
  • vision sensing
  • machine learning
  • deep learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)


Research

19 pages, 3907 KiB  
Article
Wi-CAL: A Cross-Scene Human Motion Recognition Method Based on Domain Adaptation in a Wi-Fi Environment
by Zhanjun Hao, Juan Niu, Xiaochao Dang and Danyang Feng
Electronics 2022, 11(16), 2607; https://doi.org/10.3390/electronics11162607 - 20 Aug 2022
Viewed by 1869
Abstract
In recent years, research on Wi-Fi sensing technology has developed rapidly. This technology uses commercial Wi-Fi devices to automatically sense human activities such as lying down, falling, walking, waving, sitting down, and standing up: because the movement of human body parts affects the transmission of Wi-Fi signals, it causes measurable changes in the channel state information (CSI). In the context of monitoring human health indoors through daily behavior, we propose Wi-CAL. More precisely, CSI fingerprints were collected for six events in two indoor locations, and the data augmentation technique Dynamic Time Warping Barycenter Averaging (DBA) was used to expand the data. A feature weighting algorithm is then combined with the convolutional layers to select the CSI features most representative of human actions. Finally, a classification model suitable for multiple scenes is obtained by combining a softmax classifier with a CORrelation ALignment (CORAL) loss. Experiments were carried out on public datasets and on the datasets collected in this paper, both before and after augmentation. The comparative experiments show that our method achieves good recognition performance.
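
As a rough illustration of the domain-alignment ingredient named above, the following Python sketch (ours, not the authors' code; array shapes are assumptions) computes the CORrelation ALignment (CORAL) loss, which penalizes the gap between the second-order statistics of source- and target-scene features.

import numpy as np

def coral_loss(source, target):
    """CORAL loss between two feature batches of shape (n_samples, d)."""
    d = source.shape[1]
    # Covariance matrix of the features in each domain.
    c_s = np.cov(source, rowvar=False)
    c_t = np.cov(target, rowvar=False)
    # Squared Frobenius norm of the covariance gap, normalized by 4*d^2.
    return float(np.sum((c_s - c_t) ** 2) / (4 * d * d))

# Example: two random stand-ins for CSI feature batches from two scenes.
rng = np.random.default_rng(0)
print(coral_loss(rng.normal(size=(64, 32)), rng.normal(0.5, 1.0, size=(64, 32))))

Minimizing this loss jointly with the softmax classification loss encourages features from the two scenes to share statistics, which is the cross-scene adaptation idea the abstract describes.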

14 pages, 2381 KiB  
Article
Multi-Branch Attention-Based Grouped Convolution Network for Human Activity Recognition Using Inertial Sensors
by Yong Li, Luping Wang and Fen Liu
Electronics 2022, 11(16), 2526; https://doi.org/10.3390/electronics11162526 - 12 Aug 2022
Cited by 6 | Viewed by 1932
Abstract
Recently, deep neural networks have become a widely used technology in sensor-based human activity recognition (HAR) and have achieved good results. However, some convolutional neural networks do not further select the extracted features, or cannot process sensor data from different body locations independently and in parallel; consequently, the accuracy of existing networks is not ideal, and similar activities in particular are easily confused, which limits the application of sensor-based HAR. In this paper, we propose a multi-branch neural network based on attention-based convolution. Each branch of the network consists of two layers of attention-based grouped convolution submodules. We introduce a dual attention mechanism, consisting of channel attention and spatial attention, to select the most important features. Sensor data collected at different positions on the human body are separated and fed into different network branches for training and testing independently, and the multi-branch features are finally fused. We test the proposed network on three large datasets: PAMAP2, UT, and OPPORTUNITY. The experimental results show that our method outperforms existing state-of-the-art methods.
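
As a hedged sketch of the dual attention idea (our illustration in PyTorch, not the authors' model; layer sizes are assumptions), channel attention reweights feature maps while spatial attention reweights time steps of the 1D inertial signal:

import torch
import torch.nn as nn

class DualAttention1d(nn.Module):
    """Channel attention followed by spatial (temporal) attention."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        # Channel attention: squeeze over time, excite per channel.
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
        # Spatial attention: convolve the channel-pooled signal over time.
        self.spatial_conv = nn.Sequential(
            nn.Conv1d(1, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x):  # x: (batch, channels, time)
        w_c = self.channel_mlp(x.mean(dim=2)).unsqueeze(2)    # (B, C, 1)
        x = x * w_c
        w_s = self.spatial_conv(x.mean(dim=1, keepdim=True))  # (B, 1, T)
        return x * w_s

# Example: one branch's input, e.g., feature maps from a wrist sensor.
print(DualAttention1d(64)(torch.randn(8, 64, 128)).shape)

In a multi-branch setup, one such module would sit inside each per-sensor-position branch before the branch outputs are fused.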

20 pages, 2189 KiB  
Article
Light-Weight Classification of Human Actions in Video with Skeleton-Based Features
by Włodzimierz Kasprzak and Bartłomiej Jankowski
Electronics 2022, 11(14), 2145; https://doi.org/10.3390/electronics11142145 - 8 Jul 2022
Cited by 4 | Viewed by 2129
Abstract
An approach to human action classification in videos is presented, based on knowledge-aware initial features extracted from human skeleton data and on further processing by convolutional networks. The proposed smart tracking of skeleton joints, approximation of missing joints, and normalization of skeleton data are important steps of the feature extraction. Three neural network models, based on LSTM, Transformer, and CNN, are developed and experimentally verified. The models are trained and tested on the well-known NTU-RGB+D dataset (Shahroudy et al., 2016) in the cross-view mode. The obtained results are competitive with other state-of-the-art methods and verify the efficiency of the proposed feature engineering. The network reaches nearly similar performance with five times fewer trainable parameters than comparable methods, and twenty times fewer than the currently best-performing solutions. Thanks to this light weight, the classifier requires only modest computational resources.
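
To make the normalization step concrete, here is a small Python sketch (assumptions ours: the joint indices and reference bone are illustrative, not the paper's exact recipe) that makes skeleton features invariant to position and body size:

import numpy as np

HIP, SPINE = 0, 1  # assumed joint indices in a (num_joints, 3) skeleton

def normalize_skeleton(joints):
    """joints: (num_joints, 3) array of 3D joint coordinates, one frame."""
    centered = joints - joints[HIP]                      # hip-centered origin
    torso = np.linalg.norm(joints[SPINE] - joints[HIP])  # reference length
    return centered / max(torso, 1e-6)                   # scale-invariant

frame = np.random.rand(25, 3)  # e.g., 25 joints, as in NTU-RGB+D
print(normalize_skeleton(frame)[:3])

Approximating missing joints (e.g., by interpolating from neighboring frames) would precede this step in the pipeline the abstract outlines.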

24 pages, 3741 KiB  
Article
Daily Living Activity Recognition In-The-Wild: Modeling and Inferring Activity-Aware Human Contexts
by Muhammad Ehatisham-ul-Haq, Fiza Murtaza, Muhammad Awais Azam and Yasar Amin
Electronics 2022, 11(2), 226; https://doi.org/10.3390/electronics11020226 - 12 Jan 2022
Cited by 14 | Viewed by 2507
Abstract
Advancements in smart sensing and computing technologies have provided a dynamic opportunity to develop intelligent systems for human activity monitoring, and thus for assisted living. Consequently, many researchers have put their efforts into implementing sensor-based activity recognition systems. However, recognizing people's natural behavior and physical activities across diverse contexts is still a challenging problem, because human physical activities are often confounded by changes in the surrounding environment. In addition to physical activity recognition, it is therefore also vital to model and infer the user's context information to better capture human-environment interactions. This paper proposes a new idea for activity recognition in the wild, which entails modeling and identifying detailed human contexts (such as human activities, behavioral environments, and phone states) using portable accelerometer sensors. The proposed scheme offers a detailed, fine-grained representation of natural human activities with contexts, which is crucial for effectively modeling human-environment interactions in context-aware applications and systems. The proposed idea is validated in a series of experiments and achieves an average balanced accuracy of 89.43%, demonstrating its effectiveness.
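
Since the headline result is a balanced accuracy, a brief sketch of that metric may help (our illustration): it averages per-class recall, so rare contexts count as much as frequent ones.

import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls over the classes present in y_true."""
    recalls = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(recalls))

y_true = np.array([0, 0, 0, 0, 1, 1, 2, 2, 2, 2])
y_pred = np.array([0, 0, 1, 0, 1, 1, 2, 2, 0, 2])
print(balanced_accuracy(y_true, y_pred))  # (0.75 + 1.0 + 0.75) / 3 = 0.8333...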

20 pages, 1825 KiB  
Article
Low-Power On-Chip Implementation of Enhanced SVM Algorithm for Sensors Fusion-Based Activity Classification in Lightweighted Edge Devices
by Juneseo Chang, Myeongjin Kang and Daejin Park
Electronics 2022, 11(1), 139; https://doi.org/10.3390/electronics11010139 - 3 Jan 2022
Cited by 10 | Viewed by 2890
Abstract
Smart homes assist users by providing convenient services based on activity classification with the help of machine learning (ML) technology. However, most conventional high-performance ML algorithms require relatively high power consumption and memory usage due to their complex structure. Moreover, previous studies on lightweight ML/DL models for human activity classification still require relatively high resources for extremely resource-limited embedded systems, making them inapplicable in smart homes' embedded system environments. Therefore, in this study, we propose a low-power, memory-efficient, high-speed ML algorithm for smart home activity data classification that is suitable for extremely resource-constrained environments. We propose interpreting smart home activity data as image data, and hence use the MNIST dataset as a substitute for real-world activity data. The proposed ML algorithm consists of three parts: data preprocessing, training, and classification. In data preprocessing, training data with the same label are grouped into finer-grained clusters. The training process generates hyperplanes by accumulating and thresholding each cluster of preprocessed data. Finally, the classification process labels input data by calculating the similarity between the input and each hyperplane using a bitwise-operation-based error function. We verified our algorithm on 'Raspberry Pi 3' and 'STM32 Discovery board' embedded systems by loading the trained hyperplanes and performing classification on 1000 training samples. Compared to a linear support vector machine implemented in TensorFlow Lite, the proposed algorithm improved memory usage to 15.41%, power consumption to 41.7%, performance up to 50.4%, and power per accuracy to 39.2%. Moreover, compared to a convolutional neural network model, it improved memory usage to 15.41%, power consumption to 61.17%, performance to 57.6%, and power per accuracy to 55.4%.
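
The bitwise error function lends itself to a compact sketch (ours, with invented thresholds and patterns; the paper's actual training procedure differs): binarize an input vector and each stored class "hyperplane", then use XOR plus popcount (Hamming distance) as a cheap dissimilarity score that needs no floating-point hardware.

import numpy as np

def to_bits(vec, threshold=0.5):
    """Pack a thresholded vector into a bit pattern (a Python int)."""
    bits = 0
    for v in vec:
        bits = (bits << 1) | int(v > threshold)
    return bits

def classify(x, class_patterns):
    xb = to_bits(x)
    # Lower Hamming distance (popcount of XOR) means higher similarity.
    errors = {c: bin(xb ^ p).count("1") for c, p in class_patterns.items()}
    return min(errors, key=errors.get)

rng = np.random.default_rng(1)
patterns = {c: to_bits(rng.random(64)) for c in ("walk", "sit", "stand")}
print(classify(rng.random(64), patterns))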

13 pages, 1599 KiB  
Article
IQ-Data-Based WiFi Signal Classification Algorithm Using the Choi-Williams and Margenau-Hill-Spectrogram Features: A Case in Human Activity Recognition
by Yier Lin and Fan Yang
Electronics 2021, 10(19), 2368; https://doi.org/10.3390/electronics10192368 - 28 Sep 2021
Cited by 2 | Viewed by 3035
Abstract
This paper presents a novel approach that applies WiFi-based IQ data and time–frequency images to classify human activities automatically and accurately. The proposed strategy first uses the Choi–Williams distribution transform and the Margenau–Hill spectrogram transform to obtain the time–frequency images, followed by offset and principal component analysis (PCA) feature extraction. The offset features were extracted from the IQ data and from several spectra with maximum energy values in the time domain, while the PCA features were extracted from the whole images and from several information-rich image slices. Finally, a traditional supervised learning classifier was used to label the various activities. The proposed method was validated on twelve thousand experimental samples from four categories of WiFi signals. The results showed that our method is robust to varying numbers of image slices or PCA components over the measured dataset. With a random forest (RF) classifier, our method surpassed alternative classifiers in classification performance, finally obtaining a 91.78% average sensitivity, 91.74% average precision, 91.73% average F1-score, 97.26% average specificity, and 95.89% average accuracy.
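
The final feature-to-label stage maps naturally onto a short scikit-learn sketch (ours; data shapes and parameters are assumptions, and random data stands in for the real time–frequency images):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 1024))  # flattened time-frequency images
y = rng.integers(0, 4, size=400)  # four WiFi-signal categories

# PCA-reduced image features feeding a random forest (RF) classifier.
model = make_pipeline(PCA(n_components=32), RandomForestClassifier(200))
model.fit(X[:300], y[:300])
print(model.score(X[300:], y[300:]))  # chance level on random data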

23 pages, 758 KiB  
Article
Extensible Chatbot Architecture Using Metamodels of Natural Language Understanding
by Rade Matic, Milos Kabiljo, Miodrag Zivkovic and Milan Cabarkapa
Electronics 2021, 10(18), 2300; https://doi.org/10.3390/electronics10182300 - 18 Sep 2021
Cited by 21 | Viewed by 11197
Abstract
In recent years, gradual improvements in communication and connectivity technologies have enabled new technical possibilities for the adoption of chatbots across diverse sectors such as customer service, trade, and marketing. A chatbot is a platform that uses natural language processing, a subfield of artificial intelligence, to find the right answers to users' questions and solve their problems. We propose an advanced chatbot architecture that is extensible and scalable and supports different services for natural language understanding (NLU) as well as different communication channels for user interaction. The paper describes the overall chatbot architecture and provides the corresponding metamodels, together with rules for mapping between the proposed metamodel and two commonly used NLU metamodels. The proposed architecture can easily be extended with new NLU services and communication channels. Finally, two implementations of the proposed chatbot architecture are briefly demonstrated in the case studies of "ADA" and "COVID-19 Info Serbia".
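
The extensibility claim boils down to adapters behind a shared model; a minimal Python sketch (names and interface are ours, not the paper's metamodel) might look like this:

from dataclasses import dataclass, field
from typing import Protocol

@dataclass
class NluResult:  # shared model of what any NLU service must return
    intent: str
    confidence: float
    entities: dict = field(default_factory=dict)

class NluService(Protocol):
    def parse(self, text: str) -> NluResult: ...

class EchoNluService:  # trivial stand-in for a real NLU provider adapter
    def parse(self, text: str) -> NluResult:
        intent = "greeting" if "hello" in text.lower() else "unknown"
        return NluResult(intent=intent, confidence=0.9)

def handle(service: NluService, text: str) -> str:
    result = service.parse(text)
    return f"intent={result.intent} ({result.confidence:.0%})"

print(handle(EchoNluService(), "Hello there"))  # intent=greeting (90%)

Adding a new NLU provider then means writing one adapter that maps its response format into NluResult, without touching the rest of the chatbot.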

25 pages, 4267 KiB  
Article
Deep Learning Methods for 3D Human Pose Estimation under Different Supervision Paradigms: A Survey
by Dejun Zhang, Yiqi Wu, Mingyue Guo and Yilin Chen
Electronics 2021, 10(18), 2267; https://doi.org/10.3390/electronics10182267 - 15 Sep 2021
Cited by 14 | Viewed by 8964
Abstract
The rise of deep learning technology has broadly promoted the practical application of artificial intelligence in production and daily life. In computer vision, many human-centered applications, such as video surveillance, human-computer interaction, and digital entertainment, rely heavily on accurate and efficient human pose estimation techniques. Inspired by the remarkable achievements of learning-based 2D human pose estimation, numerous studies are devoted to 3D human pose estimation via deep learning methods. Against this backdrop, this paper provides an extensive survey of recent literature on deep learning methods for 3D human pose estimation, in order to trace the development of this research area, track the latest research trends, and analyze the characteristics of the devised methods. The literature is reviewed along the general pipeline of 3D human pose estimation, which consists of human body modeling, learning-based pose estimation, and regularization for refinement. Unlike existing reviews of the same topic, this paper focuses on deep learning-based methods. Learning-based pose estimation is discussed in two categories, single-person and multi-person, each further divided by data type into image-based and video-based methods. Moreover, given the significance of data for learning-based methods, this paper also surveys 3D human pose estimation methods according to a taxonomy of supervision forms. Finally, the paper lists the current, widely used datasets and compares the performance of the reviewed methods. From this survey, it can be concluded that each branch of 3D human pose estimation started with fully supervised methods, and that there is still much room for multi-person pose estimation under other forms of supervision, for both images and video. Despite the significant progress of 3D human pose estimation via deep learning, the inherent ambiguity and occlusion problems remain challenging issues that need to be better addressed.
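
As a toy example of one surveyed family (ours, not any specific paper's model), a fully supervised "lifting" network regresses 3D joint positions from detected 2D keypoints:

import torch
import torch.nn as nn

num_joints = 17  # e.g., a Human3.6M-style skeleton
lifter = nn.Sequential(
    nn.Linear(num_joints * 2, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, num_joints * 3))  # (x, y) per joint in, (x, y, z) out

pose_2d = torch.randn(8, num_joints * 2)          # a batch of 2D detections
pose_3d = lifter(pose_2d).view(8, num_joints, 3)  # lifted 3D estimates
print(pose_3d.shape)  # torch.Size([8, 17, 3])

Training such a model requires paired 2D-3D supervision, which is exactly the data bottleneck motivating the other supervision paradigms the survey catalogs.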
