Machine Learning for Human Activity Recognition

A special issue of Journal of Imaging (ISSN 2313-433X).

Deadline for manuscript submissions: closed (31 January 2024)

Special Issue Editor


Dr. Hojjat Salehinejad
Guest Editor
Mayo Clinic, Rochester, MN, USA
Interests: machine learning; signal processing; human–machine interaction

Special Issue Information

Dear Colleagues,

Human activity recognition (HAR) refers to the detection and recognition of human gestures and activities, such as walking, falling, and drawing a circle, in indoor and outdoor environments. HAR has applications in security, health care, smart homes, and human–machine interaction. Wearable sensors (e.g., gyroscopes and accelerometers), cameras (e.g., still images and video), and wireless radio signals (e.g., WiFi signals) are common means of collecting data and sensing the environment.
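For readers new to the field, the classical wearable-sensor pipeline can be summarized in a few lines of code: segment the raw inertial stream into fixed-length windows, extract simple features, and train a classifier. The sketch below is a minimal illustration only; the helper make_windows and the synthetic data are assumptions for this example, not a method from this Special Issue.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def make_windows(signal, labels, win=128, step=64):
        """Segment a (n_samples, 3) accelerometer stream into overlapping windows."""
        X, y = [], []
        for start in range(0, len(signal) - win + 1, step):
            seg = signal[start:start + win]
            # simple per-axis statistics as features
            X.append(np.concatenate([seg.mean(0), seg.std(0), seg.min(0), seg.max(0)]))
            # label each window with its majority activity
            y.append(np.bincount(labels[start:start + win]).argmax())
        return np.array(X), np.array(y)

    # synthetic stand-in for a real recording (shapes and labels are assumptions)
    rng = np.random.default_rng(0)
    signal = rng.normal(size=(10_000, 3))     # tri-axial accelerometer stream
    labels = rng.integers(0, 3, size=10_000)  # e.g., 0 = walking, 1 = falling, 2 = drawing

    X, y = make_windows(signal, labels)
    clf = RandomForestClassifier(n_estimators=100).fit(X, y)
    print(f"training accuracy: {clf.score(X, y):.2f}")  # training-set score, for brevity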

Machine learning, and deep learning in particular, are promising approaches for HAR. These approaches generally have a large number of trainable parameters, require tremendous quantities of labelled training data, need substantial hyper-parameter tuning, and are resource-hungry in both training and inference. These requirements make training and inference difficult for HAR on edge and other resource-limited devices. Pruning, tiny ML models by design, augmentation, and novel representation learning techniques can potentially overcome these challenges.
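As one concrete example of the compression techniques mentioned above, the sketch below applies magnitude pruning to a small 1D CNN of the kind often used for wearable HAR. The architecture and the 50% pruning ratio are arbitrary assumptions for illustration; only the PyTorch pruning utilities themselves are standard.

    import torch.nn as nn
    import torch.nn.utils.prune as prune

    # a small, assumed 1D CNN for tri-axial inertial input and four activity classes
    model = nn.Sequential(
        nn.Conv1d(3, 16, kernel_size=5), nn.ReLU(),
        nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        nn.Linear(16, 4),
    )

    # zero out the 50% smallest-magnitude weights in each prunable layer
    for module in model.modules():
        if isinstance(module, (nn.Conv1d, nn.Linear)):
            prune.l1_unstructured(module, name="weight", amount=0.5)
            prune.remove(module, "weight")  # bake the pruning mask into the weights

    total = sum(p.numel() for p in model.parameters())
    zeros = sum((p == 0).sum().item() for p in model.parameters())
    print(f"overall sparsity: {zeros / total:.2%}")

Note that unstructured sparsity of this kind only reduces latency on hardware or runtimes that exploit it; structured pruning or tiny models by design may be preferable on edge devices.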

This Special Issue of Journal of Imaging aims to feature reports of recent advances in machine learning for HAR and its applications. In particular, innovative methods using graph representation learning, self-supervised, semi-supervised, few-shot, and unsupervised learning, as well as implementation approaches for real-world applications, are welcome.

Dr. Hojjat Salehinejad
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • human activity recognition
  • wearables
  • computer vision
  • channel state information
  • WiFi for detection and sensing
  • Internet of Things
  • integrated sensing
  • reconfigurable intelligent surfaces
  • deep learning
  • graph representation learning
  • semi-supervised learning
  • unsupervised learning
  • few-shot learning

Published Papers (2 papers)


Research

Article
Qualification of the PAVIN Fog and Rain Platform and Its Digital Twin for the Evaluation of a Pedestrian Detector in Fog
by Charlotte Segonne and Pierre Duthon
J. Imaging 2023, 9(10), 211; https://doi.org/10.3390/jimaging9100211 - 3 Oct 2023
Abstract
Vehicles featuring partially automated driving can now be certified within a guaranteed operational design domain. Verification in all kinds of scenarios, including fog, cannot be carried out in real conditions (owing to risk or low occurrence). Simulation tools for adverse weather conditions (e.g., physical, numerical) must therefore be implemented and validated. The aim of this study is to verify what criteria need to be met to obtain sufficient data to test AI-based pedestrian detection algorithms. It presents analyses of both real and numerically simulated data. A novel method for evaluating the test environment, based on a reference detection algorithm, was set up. The following parameters are taken into account: weather conditions, pedestrian variety, the distance of pedestrians to the camera, fog uncertainty, the number of frames, and artificial fog vs. numerically simulated fog. Across all examined elements, the disparity between results derived from real and simulated data is less than 10%. The results obtained provide a basis for validating and improving standards dedicated to the testing and approval of autonomous vehicles.
(This article belongs to the Special Issue Machine Learning for Human Activity Recognition)

Article
Synthesizing Human Activity for Data Generation
by Ana Romero, Pedro Carvalho, Luís Côrte-Real and Américo Pereira
J. Imaging 2023, 9(10), 204; https://doi.org/10.3390/jimaging9100204 - 29 Sep 2023
Abstract
Gathering sufficiently representative data, such as data on human actions, shapes, and facial expressions, is costly and time-consuming, yet such data are required to train robust models. This has led to techniques such as transfer learning and data augmentation; however, these are often insufficient. To address this, we propose a semi-automated mechanism for generating and editing visual scenes in which synthetic humans perform various actions, with features such as background modification and manual adjustment of the 3D avatars that let users create data with greater variability. We also propose a two-fold methodology for evaluating the results obtained with our method: (i) running an action classifier on the output data and (ii) generating masks of the avatars and the actors and comparing them through segmentation. The avatars were robust to occlusion, and their actions were recognizable and faithful to those of their respective input actors. The results also showed that even though the action classifier concentrates on the pose and movement of the synthetic humans, it strongly depends on contextual information to precisely recognize the actions. Generating avatars for complex activities also proved problematic both for action recognition and for the clean, precise formation of the masks.
(This article belongs to the Special Issue Machine Learning for Human Activity Recognition)
