Human Activity Recognition (HAR) in Healthcare, 2nd Edition

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Applied Biosciences and Bioengineering".

Deadline for manuscript submissions: 30 September 2024

Special Issue Editors


Dr. Luigi Bibbò
Guest Editor
Department of Civil, Energy, Environmental and Materials Engineering (DICEAM), Mediterranean University of Reggio Calabria, Reggio Calabria, Italy
Interests: biomedical signal processing and sensors; photonics; optical fibers; MEMS; metamaterials; nanotechnology; artificial intelligence; neural networks; virtual reality; augmented reality; indoor navigation

Prof. Dr. J. Artur Serrano
Guest Editor
Department of Neuromedicine and Movement Science, Faculty of Medicine and Health Sciences, NTNU/Norwegian University of Science and Technology, 7491 Trondheim, Norway
Interests: medical informatics applications; eHealth; social media; learning

Special Issue Information

Dear Colleagues,

Technological advances, including those in the medical field, have improved patients' quality of life. These improvements have led to a growing elderly population with an increasing demand for healthcare, which is difficult to meet given the high cost and limited availability of caregivers. Advances in artificial intelligence, wireless communication systems, and nanotechnologies make it possible to create intelligent human health monitoring systems that avoid hospitalization and contain costs. Human activity recognition (HAR), especially based on data collected through sensors or on images captured by cameras, is fundamental to such health monitoring systems. These systems can provide activity recognition, monitoring of vital signs, traceability, fall detection and safety alarms, and cognitive assistance. The rapid development of the Internet of Things (IoT) supports research on a wide range of automated and interconnected solutions that improve the quality of life and independence of older people. With IoT, it is possible to create innovative solutions in ambient intelligence (AmI) and ambient assisted living (AAL).

Dr. Luigi Bibbò
Prof. Dr. J. Artur Serrano
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • human activity recognition
  • machine learning
  • wearable sensor
  • Internet of Things
  • ambient assisted living
  • ambient intelligence

Published Papers (5 papers)

Research

17 pages, 5957 KiB  
Article
Inertial and Flexible Resistive Sensor Data Fusion for Wearable Breath Recognition
by Mehdi Zabihi, Bhawya, Parikshit Pandya, Brooke R. Shepley, Nicholas J. Lester, Syed Anees, Anthony R. Bain, Simon Rondeau-Gagné and Mohammed Jalal Ahamed
Appl. Sci. 2024, 14(7), 2842; https://doi.org/10.3390/app14072842 - 28 Mar 2024
Abstract
This paper proposes a novel data fusion technique for a wearable multi-sensory patch that integrates an accelerometer and a flexible resistive pressure sensor to accurately capture breathing patterns. It utilizes an accelerometer to detect breathing-related diaphragmatic motion and other body movements, and a flex sensor for muscle stretch detection. The proposed sensor data fusion technique combines inertial and pressure sensors to eliminate nonbreathing body motion-related artifacts, ensuring that the filtered signal exclusively conveys information pertaining to breathing. The fusion technique mitigates the limitations of relying solely on one sensor’s data, providing a more robust and reliable solution for continuous breath monitoring in clinical and home environments. The sensing system was tested against gold-standard spirometry data from multiple participants for various breathing patterns. Experimental results demonstrate the effectiveness of the proposed approach in accurately monitoring breathing rates, even in the presence of nonbreathing-related body motion. The results also demonstrate that the multi-sensor patch presented in this paper can accurately distinguish between varying breathing patterns both at rest and during body movements.
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare, 2nd Edition)
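
The fusion idea in the abstract can be illustrated with a small adaptive-filtering sketch: treat the accelerometer as a motion reference and cancel motion-correlated content from the flex-sensor signal with a normalized LMS (NLMS) filter. This is a hedged reconstruction, not the authors' published pipeline; the signal names, sampling rate, and filter settings below are assumptions.

```python
# Sketch: accelerometer-referenced motion-artifact cancellation for a
# flex/pressure breathing signal, using a normalized LMS adaptive filter.
# Illustrative only -- not the paper's actual fusion algorithm.
import numpy as np

def nlms_motion_cancel(flex, accel, order=16, mu=0.05, eps=1e-6):
    """Suppress motion-correlated content in `flex` using `accel` as reference.

    flex  : 1-D array, flex-sensor signal (breathing + motion artifacts)
    accel : 1-D array, accelerometer magnitude (motion reference)
    Returns the error signal e, i.e. the motion-suppressed breathing estimate.
    """
    n = len(flex)
    w = np.zeros(order)                       # adaptive filter weights
    e = np.zeros(n)                           # output: flex minus predicted motion
    for i in range(order, n):
        x = accel[i - order:i][::-1]          # most recent reference samples
        y = w @ x                             # predicted motion component
        e[i] = flex[i] - y
        w += (mu / (eps + x @ x)) * e[i] * x  # NLMS weight update
    return e

# Synthetic demo: 0.25 Hz breathing plus 1.3 Hz body motion, 50 Hz sampling.
t = np.arange(0, 60, 1 / 50)
motion = 0.8 * np.sin(2 * np.pi * 1.3 * t)
breath = np.sin(2 * np.pi * 0.25 * t)
accel = motion + 0.05 * np.random.randn(t.size)
flex = breath + motion + 0.05 * np.random.randn(t.size)
clean = nlms_motion_cancel(flex, accel)       # approximates `breath`
```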

19 pages, 3672 KiB  
Article
Estimation of Systolic and Diastolic Blood Pressure for Hypertension Identification from Photoplethysmography Signals
by Hygo Sousa De Oliveira, Rafael Albuquerque Pinto, Eduardo James Pereira Souto and Rafael Giusti
Appl. Sci. 2024, 14(6), 2470; https://doi.org/10.3390/app14062470 - 14 Mar 2024
Cited by 1
Abstract
Continuous monitoring plays a crucial role in diagnosing hypertension, characterized by an increase in Arterial Blood Pressure (ABP). The gold-standard method for obtaining ABP involves the uncomfortable and invasive technique of cannulation. Conversely, ABP can be acquired non-invasively by using Photoplethysmography (PPG). This non-invasive approach offers the advantage of continuous BP monitoring outside a hospital setting and can be implemented in cost-effective wearable devices. PPG and ABP signals differ in scale values, which creates a non-linear relationship between them and opens avenues for algorithms capable of detecting non-linear associations. In this study, we introduce the Neural Model of Blood Pressure (NeuBP), which estimates systolic and diastolic values from PPG signals. The problem is treated as a binary classification task, distinguishing between Normotensive and Hypertensive categories. Furthermore, our research investigates NeuBP’s performance in classifying different BP categories, including Normotensive, Prehypertensive, Grade 1 Hypertensive, and Grade 2 Hypertensive cases. We evaluate our proposed method by using data from the publicly available MIMIC-III database. The experimental results demonstrate that NeuBP achieves results comparable to more complex models with fewer parameters. The mean absolute errors for systolic and diastolic values are 5.02 mmHg and 3.11 mmHg, respectively.
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare, 2nd Edition)
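
As a rough illustration of the general PPG-to-BP approach (not the NeuBP architecture itself), the sketch below maps a fixed-length PPG window to systolic/diastolic estimates with a small 1-D CNN and flags hypertension by thresholding the outputs. The layer sizes, window length, and the 140/90 mmHg cut-offs are assumptions; the paper's category boundaries may differ.

```python
# Sketch: 1-D CNN regressing [systolic, diastolic] from a PPG window,
# with hypertension flagged by clinical-style thresholds. Not NeuBP.
import torch
import torch.nn as nn

class PPGToBP(nn.Module):
    def __init__(self, window=625):  # e.g. 5 s of PPG at 125 Hz (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (window // 16), 64), nn.ReLU(),
            nn.Linear(64, 2),          # outputs: [systolic, diastolic] in mmHg
        )

    def forward(self, ppg):            # ppg: (batch, 1, window)
        return self.head(self.features(ppg))

def is_hypertensive(sbp, dbp):
    # Common >=140/90 mmHg cut-offs; an assumption, not the paper's exact rule.
    return (sbp >= 140) | (dbp >= 90)

model = PPGToBP()
pred = model(torch.randn(8, 1, 625))   # dummy batch of PPG windows
flags = is_hypertensive(pred[:, 0], pred[:, 1])
```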

22 pages, 1026 KiB  
Article
Real-Time Human Activity Recognition on Embedded Equipment: A Comparative Study
by Houda Najeh, Christophe Lohr and Benoit Leduc
Appl. Sci. 2024, 14(6), 2377; https://doi.org/10.3390/app14062377 - 12 Mar 2024
Abstract
As living standards improve, the growing demand for energy, comfort, and health monitoring drives the increased importance of innovative solutions. Real-time human activity recognition (HAR) in smart homes is of significant relevance, offering varied applications to improve the quality of life of fragile individuals. These applications include facilitating autonomy at home for vulnerable people, early detection of deviations or disruptions in lifestyle habits, and immediate alerting in the event of critical situations. The first objective of this work is to develop a real-time HAR algorithm for embedded equipment. The proposed approach incorporates dynamic event windowing based on spatio-temporal correlation and knowledge of activity-triggering sensors to recognize activities as new events are recorded. The second objective is to approach the HAR task from the perspective of edge computing. In concrete terms, this involves implementing a HAR algorithm in a “home box”, a low-power, low-cost computer, while guaranteeing performance in terms of accuracy and processing time. To achieve this goal, a HAR algorithm was first developed to perform these recognition tasks in real time. The algorithm was then ported to three hardware architectures for comparison: (i) a NUCLEO-H753ZI microcontroller from STMicroelectronics, using two programming languages, C and MicroPython; (ii) an ESP32 microcontroller, often used for smart-home devices; and (iii) a Raspberry Pi, optimized to maintain activity-classification accuracy within constraints on processing time, memory resources, and energy consumption. The experimental results show that the proposed algorithm can be effectively implemented on resource-constrained hardware, enabling the design of an embedded system for real-time human activity recognition.
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare, 2nd Edition)
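
The following sketch illustrates the event-driven dynamic-windowing idea in simplified form: each incoming sensor event either extends the current window, when it is temporally and spatially correlated with the previous event, or closes the window and starts a new one. The correlation test here (a time-gap threshold plus a shared-room check) is an assumed stand-in for the paper's space-temporal correlation criterion.

```python
# Sketch: event-driven dynamic windowing for smart-home HAR.
# The correlation rule is a simplified assumption, not the paper's algorithm.
from dataclasses import dataclass, field

@dataclass
class Event:
    timestamp: float      # seconds
    sensor_id: str
    room: str

@dataclass
class DynamicWindower:
    max_gap: float = 30.0                  # assumed temporal threshold (s)
    window: list = field(default_factory=list)

    def _correlated(self, ev: Event) -> bool:
        last = self.window[-1]
        return (ev.timestamp - last.timestamp <= self.max_gap
                and ev.room == last.room)

    def push(self, ev: Event):
        """Add an event; return a completed window when one closes, else None."""
        if self.window and not self._correlated(ev):
            done, self.window = self.window, [ev]
            return done                    # hand the closed window to a classifier
        self.window.append(ev)
        return None

# Usage: feed events as they arrive; classify each window that closes.
w = DynamicWindower()
for ev in [Event(0, "motion_kitchen", "kitchen"),
           Event(5, "fridge_door", "kitchen"),
           Event(120, "motion_bedroom", "bedroom")]:
    closed = w.push(ev)
    if closed:
        print([e.sensor_id for e in closed])   # -> window for activity inference
```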

31 pages, 6952 KiB  
Article
Device Position-Independent Human Activity Recognition with Wearable Sensors Using Deep Neural Networks
by Sakorn Mekruksavanich and Anuchit Jitpattanakul
Appl. Sci. 2024, 14(5), 2107; https://doi.org/10.3390/app14052107 - 3 Mar 2024
Abstract
Human activity recognition (HAR) identifies people’s motions and actions in daily life. HAR research has grown with the popularity of internet-connected wearable sensors that capture human movement data to detect activities. Recent deep learning advances have enabled more HAR research and applications using data from wearable devices. However, prior HAR research often focused on a few sensor locations on the body. Recognizing real-world activities poses challenges when device positioning is uncontrolled or initial user training data are unavailable. This research analyzes the feasibility of deep learning models for both position-dependent and position-independent HAR. We introduce an advanced residual deep learning model called Att-ResBiGRU, which excels at accurate position-dependent HAR and delivers excellent performance for position-independent HAR. We evaluate this model using three public HAR datasets: Opportunity, PAMAP2, and REALWORLD16. Comparisons are made to previously published deep learning architectures for addressing HAR challenges. The proposed Att-ResBiGRU model outperforms existing techniques in accuracy, cross-entropy loss, and F1-score across all three datasets. We assess the model using k-fold cross-validation. The Att-ResBiGRU achieves F1-scores of 86.69%, 96.23%, and 96.44% on the PAMAP2, REALWORLD16, and Opportunity datasets, surpassing state-of-the-art models across all datasets. Our experiments and analysis demonstrate the exceptional performance of the Att-ResBiGRU model for HAR applications.
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare, 2nd Edition)
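
A hedged PyTorch sketch of the kind of architecture the name Att-ResBiGRU suggests is given below: bidirectional GRU blocks wrapped in residual connections, followed by attention pooling over time and a linear classifier. The layer sizes, block count, and input shape are assumptions; consult the paper for the actual design.

```python
# Sketch: residual BiGRU blocks with attention pooling for IMU-window
# classification. An assumed reading of "Att-ResBiGRU", not the paper's model.
import torch
import torch.nn as nn

class ResBiGRUBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gru = nn.GRU(dim, dim // 2, batch_first=True, bidirectional=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                    # x: (batch, time, dim)
        out, _ = self.gru(x)                 # bidirectional output has width dim
        return self.norm(x + out)            # residual connection around the BiGRU

class AttBiGRUClassifier(nn.Module):
    def __init__(self, n_channels=6, dim=128, n_blocks=2, n_classes=12):
        super().__init__()
        self.proj = nn.Linear(n_channels, dim)
        self.blocks = nn.Sequential(*[ResBiGRUBlock(dim) for _ in range(n_blocks)])
        self.att = nn.Linear(dim, 1)         # scores each time step
        self.cls = nn.Linear(dim, n_classes)

    def forward(self, x):                    # x: (batch, time, channels)
        h = self.blocks(self.proj(x))
        a = torch.softmax(self.att(h), dim=1)    # attention weights over time
        ctx = (a * h).sum(dim=1)                 # attention-pooled context vector
        return self.cls(ctx)

model = AttBiGRUClassifier()
logits = model(torch.randn(4, 200, 6))  # 4 windows, 200 steps, 6 IMU channels
```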

Other

14 pages, 3354 KiB  
Study Protocol
Protocol for the Development of Automatic Multisensory Systems to Analyze Human Activity for Functional Evaluation: Application to the EYEFUL System
by Paula Obeso-Benítez, Marta Pérez-de-Heredia-Torres, Elisabet Huertas-Hoyas, Patricia Sánchez-Herrera-Baeza, Nuria Máximo-Bocanegra, Sergio Serrada-Tejeda, Marta Marron-Romera, Javier Macias-Guarasa, Cristina Losada-Gutierrez, Sira E. Palazuelos-Cagigas, Jose L. Martin-Sanchez and Rosa M. Martínez-Piédrola
Appl. Sci. 2024, 14(8), 3415; https://doi.org/10.3390/app14083415 - 18 Apr 2024
Abstract
The EYEFUL system represents a pioneering initiative designed to leverage multisensory systems for the automatic evaluation of functional ability and determination of dependency status in people performing activities of daily living. This interdisciplinary effort, bridging the gap between engineering and health sciences, aims to overcome the limitations of current evaluation tools, which often lack objectivity and fail to capture the full range of functional capacity; until now, such evaluation has been derived from subjective reports and observational methods. By integrating wearable sensors and environmental technologies, EYEFUL offers an innovative approach to quantitatively assess an individual’s ability to perform activities of daily living, providing a more accurate and unbiased evaluation of functionality and personal independence. This paper describes the protocol planned for the development of the EYEFUL system, from the initial design of the methodology to the deployment of multisensory systems and the subsequent clinical validation process. The implications of this research are far-reaching, offering the potential to improve clinical evaluations of functional ability and ultimately the quality of life of people with varying levels of dependency. With its emphasis on technological innovation and interdisciplinary collaboration, the EYEFUL system sets a new standard for objective evaluation, highlighting the critical role of advanced screening technologies in addressing the challenges of modern healthcare. We expect that the publication of the protocol will help similar initiatives by providing a structured approach and rigorous validation process.
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare, 2nd Edition)
