
Deep Learning in Visual and Wearable Sensing for Motion Analysis and Healthcare

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biosensors".

Deadline for manuscript submissions: 30 November 2024 | Viewed by 4273

Special Issue Editors


Guest Editor
Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
Interests: medical data science; machine learning; pattern recognition; activity recognition; motion capture; sensor technologies; medical informatics

Guest Editor
Faculty of Information, Media and Electrical Engineering, Institute of Media and Imaging Technology, TH Köln, Köln, Germany
Interests: motion capture; sensor technologies; digital health; machine learning; computer animation

Guest Editor
Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
Interests: biomedical engineering; artificial intelligence; pattern recognition; machine vision; machine learning; medical sensor

Special Issue Information

Dear Colleagues,

We are pleased to announce this Special Issue, which aims to bring together articles investigating the use of deep learning approaches in visual and wearable sensing, e.g., for motion analysis and healthcare applications. The issue is intended to contribute to the field of machine learning and to cover a broad spectrum of applications in the medical domain.

Applications may include (but are not limited to) diagnostics, activity recognition, motion tracking, motion analysis of body parts, and rehabilitation support. As sensor technologies are diverse, we welcome all papers exploring the use of wearable sensors or ambient sensors (such as RGB(D) image/video, millimeter-wave radar, etc.).

Dr. Sebastian Fudickar
Prof. Dr. Björn Krüger
Prof. Dr. Marcin Grzegorzek
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning techniques
  • computer vision
  • wearable sensors
  • accelerometers
  • gyroscopes
  • magnetometers
  • multimodal sensing
  • EMG and force sensors
  • human activity recognition
  • human movement/gait analysis

Published Papers (3 papers)


Research

24 pages, 1193 KiB  
Article
A Systematic Evaluation of Feature Encoding Techniques for Gait Analysis Using Multimodal Sensory Data
by Rimsha Fatima, Muhammad Hassan Khan, Muhammad Adeel Nisar, Rafał Doniec, Muhammad Shahid Farid and Marcin Grzegorzek
Sensors 2024, 24(1), 75; https://doi.org/10.3390/s24010075 - 22 Dec 2023
Viewed by 782
Abstract
This paper addresses the problem of feature encoding for gait analysis using multimodal time series sensory data. In recent years, the dramatic increase in the use of numerous sensors, e.g., inertial measurement units (IMUs), in everyday wearable devices has drawn the interest of the research community to collecting kinematic and kinetic data for gait analysis. The most crucial step in gait analysis is to find an appropriate set of features from continuous time series data that accurately represent human locomotion. This paper presents a systematic assessment of numerous feature extraction techniques. In particular, three different feature encoding techniques are presented to encode multimodal time series sensory data. In the first technique, eighteen different handcrafted features are extracted directly from the raw sensory data. In the second technique, which follows the Bag-of-Visual-Words model, the raw sensory data are encoded using a pre-computed codebook and a locality-constrained linear coding (LLC)-based feature encoding technique. Two different machine learning algorithms are evaluated to assess the effectiveness of the proposed features in encoding the raw sensory data. In the third technique, two end-to-end deep learning models are proposed to automatically extract features from the raw sensory data. A thorough experimental evaluation is conducted on four large sensory datasets and their outcomes are compared. A comparison of the recognition results with current state-of-the-art methods demonstrates the computational efficiency and high efficacy of the proposed feature encoding method. The robustness of the proposed feature encoding technique is also evaluated for the recognition of human daily activities. Additionally, this paper presents a new dataset consisting of the gait patterns of 42 individuals, gathered using IMU sensors.
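
The first of the three encoding strategies, handcrafted features computed directly on raw sensor streams, can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical example assuming windowed IMU data in a NumPy array; the statistics chosen here are illustrative and do not reproduce the paper's exact set of eighteen features.

```python
# Illustrative sketch of handcrafted feature extraction from a windowed,
# multimodal IMU time series (not the paper's exact eighteen-feature set).
import numpy as np

def handcrafted_features(window: np.ndarray) -> np.ndarray:
    """Compute simple per-channel statistics for one sensor window.

    window: array of shape (n_samples, n_channels), e.g. three-axis
    accelerometer and gyroscope streams stacked column-wise.
    """
    mean = window.mean(axis=0)
    std = window.std(axis=0)
    rms = np.sqrt((window ** 2).mean(axis=0))
    minimum = window.min(axis=0)
    maximum = window.max(axis=0)
    # Concatenate the per-channel statistics into one feature vector.
    return np.concatenate([mean, std, rms, minimum, maximum])

# Example: a 2 s window at 100 Hz with 6 IMU channels (synthetic data).
rng = np.random.default_rng(0)
demo_window = rng.normal(size=(200, 6))
print(handcrafted_features(demo_window).shape)  # (30,) = 5 statistics x 6 channels
```

Such fixed-length feature vectors can then be fed to a classifier or compared with codebook-based (LLC) and end-to-end deep learning encodings, as the paper does.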

13 pages, 2668 KiB  
Article
Validity of AI-Based Gait Analysis for Simultaneous Measurement of Bilateral Lower Limb Kinematics Using a Single Video Camera
by Takumi Ino, Mina Samukawa, Tomoya Ishida, Naofumi Wada, Yuta Koshino, Satoshi Kasahara and Harukazu Tohyama
Sensors 2023, 23(24), 9799; https://doi.org/10.3390/s23249799 - 13 Dec 2023
Cited by 2 | Viewed by 1596
Abstract
Accuracy validation of gait analysis using pose estimation with artificial intelligence (AI) remains inadequate, particularly with respect to objective assessments of absolute error and the similarity of waveform patterns. This study aimed to clarify objective measures of absolute error and waveform pattern similarity in gait analysis using pose estimation AI (OpenPose). Additionally, we investigated the feasibility of simultaneously measuring both lower limbs using a single camera placed on one side. We compared motion analysis data from the pose estimation AI, applied to video footage, with data from a synchronized three-dimensional motion analysis device. The comparisons involved the mean absolute error (MAE) and the coefficient of multiple correlation (CMC) for waveform pattern similarity. The MAE ranged from 2.3 to 3.1° on the camera side and from 3.1 to 4.1° on the opposite side, with slightly higher accuracy on the camera side. Moreover, the CMC ranged from 0.936 to 0.994 on the camera side and from 0.890 to 0.988 on the opposite side, indicating "very good to excellent" waveform similarity. Gait analysis using a single camera revealed that the precision on both sides was sufficiently robust for clinical evaluation, while measurement accuracy was slightly superior on the camera side.
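
For readers unfamiliar with the two agreement measures reported above, the sketch below computes an MAE and a coefficient of multiple correlation between two time-normalized joint-angle waveforms (e.g., a pose-estimation curve and a motion-capture reference). The CMC follows a common Kadaba-style formulation for two waveforms and is not necessarily the authors' exact implementation; the demo curves are synthetic.

```python
# Hedged sketch: MAE and a Kadaba-style coefficient of multiple correlation
# (CMC) between two joint-angle waveforms of equal length, e.g. OpenPose-based
# angles vs. a three-dimensional motion-capture reference.
import numpy as np

def mae(a: np.ndarray, b: np.ndarray) -> float:
    """Mean absolute error between two time-normalized angle curves (degrees)."""
    return float(np.mean(np.abs(a - b)))

def cmc(a: np.ndarray, b: np.ndarray) -> float:
    """Coefficient of multiple correlation for two waveforms (1 = identical shape)."""
    y = np.stack([a, b])             # shape: (2 methods, F frames)
    g, f = y.shape
    frame_mean = y.mean(axis=0)      # mean across methods at each frame
    grand_mean = y.mean()
    within = np.sum((y - frame_mean) ** 2) / (g * (f - 1))
    total = np.sum((y - grand_mean) ** 2) / (g * f - 1)
    return float(np.sqrt(1.0 - within / total))

# Example with a synthetic knee-flexion-like curve and a noisy estimate of it.
t = np.linspace(0.0, 1.0, 101)
reference = 30.0 * np.sin(2.0 * np.pi * t) + 30.0
estimate = reference + np.random.default_rng(1).normal(0.0, 2.0, t.size)
print(round(mae(estimate, reference), 2), round(cmc(estimate, reference), 3))
```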

17 pages, 12583 KiB  
Article
Cross-Viewpoint Semantic Mapping: Integrating Human and Robot Perspectives for Improved 3D Semantic Reconstruction
by László Kopácsi, Benjámin Baffy, Gábor Baranyi, Joul Skaf, Gábor Sörös, Szilvia Szeier, András Lőrincz and Daniel Sonntag
Sensors 2023, 23(11), 5126; https://doi.org/10.3390/s23115126 - 27 May 2023
Viewed by 1314
Abstract
Allocentric semantic 3D maps are highly useful for a variety of human–machine interaction tasks, since egocentric viewpoints can be derived by the machine for the human partner. Class labels and map interpretations, however, may differ or could be missing for the participants due to their different perspectives; this is particularly the case when considering the viewpoint of a small robot, which differs significantly from that of a human. In order to overcome this issue, and to establish common ground, we extend an existing real-time 3D semantic reconstruction pipeline with semantic matching across human and robot viewpoints. We use deep recognition networks, which usually perform well from higher (i.e., human) viewpoints but are inferior from lower viewpoints, such as that of a small robot. We propose several approaches for acquiring semantic labels for images taken from unusual perspectives. We start with a partial 3D semantic reconstruction from the human perspective, which we transfer and adapt to the small robot's perspective using superpixel segmentation and the geometry of the surroundings. The quality of the reconstruction is evaluated in the Habitat simulator and in a real environment using a robot car with an RGBD camera. We show that the proposed approach provides high-quality semantic segmentation from the robot's perspective, with accuracy comparable to the original one. In addition, we exploit the gained information to improve the recognition performance of the deep network for the lower viewpoints and show that the small robot alone is capable of generating high-quality semantic maps for the human partner. The computations are close to real time, so the approach enables interactive applications.
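
The core idea of transferring labels from an allocentric semantic map to an unusual (robot) viewpoint can be sketched as follows: project the labelled 3D map points into the robot's camera and let each superpixel adopt the majority label of the points that fall inside it. The Python snippet below is a simplified, hypothetical sketch; the function names, pinhole camera model, and majority-vote scheme are assumptions for illustration and do not reproduce the authors' full pipeline, which also adapts labels using the geometry of the surroundings.

```python
# Hedged sketch of cross-viewpoint label transfer: labelled 3D map points are
# projected into the robot's view and each superpixel takes the majority label
# of the points that fall inside it. Superpixels could come, for instance,
# from skimage.segmentation.slic applied to the robot's RGB image.
import numpy as np

def project_points(points_3d: np.ndarray, K: np.ndarray, T_world_to_cam: np.ndarray):
    """Project Nx3 world points with intrinsics K (3x3) and a 4x4 world-to-camera
    transform. Returns pixel coordinates (u, v) and depth z."""
    homog = np.hstack([points_3d, np.ones((points_3d.shape[0], 1))])
    cam = (T_world_to_cam @ homog.T).T[:, :3]
    z = cam[:, 2]
    uvw = (K @ cam.T).T
    return uvw[:, 0] / z, uvw[:, 1] / z, z

def transfer_labels(points_3d, point_labels, superpixels, K, T_world_to_cam, n_classes):
    """Assign each superpixel the majority class of the projected map points.

    superpixels: HxW integer image of superpixel ids; superpixels that receive
    no projected points default to class 0 in this simplified version.
    """
    h, w = superpixels.shape
    u, v, z = project_points(points_3d, K, T_world_to_cam)
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    votes = np.zeros((superpixels.max() + 1, n_classes), dtype=int)
    for ui, vi, label in zip(u[valid].astype(int), v[valid].astype(int), point_labels[valid]):
        votes[superpixels[vi, ui], label] += 1
    return votes.argmax(axis=1)[superpixels]  # per-pixel semantic label image
```

Voting per superpixel rather than per pixel makes such a transfer more robust to sparse or slightly misaligned projections, which is one motivation for superpixel segmentation in this setting.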

Planned Papers

The list below represents only planned manuscripts. Some of these manuscripts have not been received by the Editorial Office yet. Papers submitted to MDPI journals are subject to peer review.

  • Type of Paper: Article
  • Tentative Title: 3D Semantic Label Transfer and Matching in Human-Robot Collaboration
  • Authors: László Kopácsi (1,2), Benjámin Baffy (2), Gábor Baranyi (2), Joul Skaf (2), Gábor Sörös (3), Szilvia Szeier (2), András Lőrincz (2), Daniel Sonntag (1,4)
  • Abstract: Allocentric semantic 3D maps are highly useful for a variety of human-machine interactions, since egocentric instructions can be derived by the machine for the human partner. Class labels, however, may differ or could be missing for the participants due to their different perspectives. In order to overcome this issue, we extend an existing real-time 3D semantic reconstruction pipeline with semantic matching across human and robot viewpoints. We use deep recognition networks, which usually perform well from higher (i.e., human) viewpoints but are inferior from lower viewpoints, such as those of a small robot. We propose several approaches for acquiring semantic labels for unusual perspectives. We start with a partial semantic reconstruction from the human perspective that we extend to the new, unusual perspective using superpixel segmentation and the geometry of the surroundings. The quality of the reconstruction is evaluated in the Habitat simulator and in a real environment using Intel's small OpenBot robot, which we equipped with an RGBD camera. We show that the proposed approach provides high-quality semantic segmentation from the robot's perspective, with accuracy comparable to the original one. In addition, we exploit the gained information to improve the recognition performance of the deep network for the lower viewpoints and show that the small robot alone is capable of generating high-quality semantic maps for the human partner. Furthermore, as the computations are close to real time, the approach enables interactive applications.