
Human Activity Recognition Using Wearable Sensors for Learning and Teaching

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Wearables".

Deadline for manuscript submissions: closed (31 December 2021)

Special Issue Editors


Prof. Dr. Jochen Kuhn
Guest Editor
Physics Education Research, TU Kaiserslautern, 67663 Kaiserslautern, Germany
Interests: multiple representations; multimedia learning; technology-enhanced learning; AI and education

Dr. Stefan Küchemann
Guest Editor
Physics Education Research, TU Kaiserslautern, 67663 Kaiserslautern, Germany
Interests: multiple representations; representational competence; eye-tracking in education; AI and education

Prof. Dr. Paul Lukowicz
Guest Editor
Embedded Intelligence, DFKI, 67663 Kaiserslautern, Germany
Interests: pervasive and wearable computing; sensor systems; artificial intelligence; embedded systems; self-organization

Prof. Dr. Albrecht Schmidt
Guest Editor
Human-Centered Ubiquitous Media, LMU Munich, 80331 Munich, Germany
Interests: HCI; ubiquitous computing; interaction design; wearable computing

Special Issue Information

Dear Colleagues,

Human activity recognition (HAR) is a well-established, dynamic research area within ubiquitous and wearable computing. As sensors become increasingly ubiquitous and the machine learning methods needed to evaluate the collected data improve, practical, mature applications are rapidly emerging in fields such as Industry 4.0, sports, and healthcare. While the use of sensors has also been extensively studied in education, complex activity recognition (rather than the mere sensing and simple evaluation of sensor signals) is still at a very early stage in that domain. One reason may be that the complexity and variety of bodily activities make it difficult to recognize such activities rapidly and accurately, and, moreover, that learning variables such as cognition or emotion are invisible and must be inferred through human-driven interpretation or validated test instruments. In combination with these interpretation methods, state-of-the-art deep learning models could overcome several problems in HAR, for instance, feature extraction and multi-occupant activities, and thus advance the interpretation of sensor-based HAR data.
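To make the role of such deep models concrete, the following is a minimal sketch of a sensor-based HAR classifier in PyTorch. The windowed tri-axial accelerometer input, the architecture, and all names and hyperparameters are illustrative assumptions, not a method proposed in this Special Issue.

```python
# Minimal sketch of a deep HAR classifier, assuming windowed tri-axial
# accelerometer input of shape (batch, 3 channels, 128 samples).
# All names and hyperparameters are illustrative.
import torch
import torch.nn as nn

class HARConvNet(nn.Module):
    def __init__(self, n_channels: int = 3, n_classes: int = 6):
        super().__init__()
        # Convolutional layers learn features directly from raw signals,
        # replacing hand-crafted feature extraction.
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over time -> fixed-size embedding
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).squeeze(-1))

# Example: classify a batch of 2-second windows sampled at 64 Hz.
model = HARConvNet()
windows = torch.randn(8, 3, 128)  # 8 windows, 3 axes, 128 samples each
logits = model(windows)           # shape: (8, 6) activity scores
```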

This Special Issue aims to address these challenges and discuss the state of the art, difficulties, innovations, and improvements in HAR applications in the context of cognition, emotion, learning, and instruction.

Prof. Dr. Jochen Kuhn
Dr. Stefan Küchemann
Prof. Dr. Paul Lukowicz
Prof. Dr. Albrecht Schmidt
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • human activity recognition
  • wearable sensors
  • cognition
  • emotion
  • learning
  • multimodal data
  • learning analytics

Published Papers (3 papers)


Research

14 pages, 2830 KiB  
Article
Design and Implementation of a Gesture-Aided E-Learning Platform
by Wolfgang Kremser, Stefan Kranzinger and Severin Bernhart
Sensors 2021, 21(23), 8042; https://doi.org/10.3390/s21238042 - 01 Dec 2021
Abstract
In gesture-aided learning (GAL), learners perform specific body gestures while rehearsing the associated learning content. Although this form of embodiment has been shown to benefit learning outcomes, it has not yet been incorporated into e-learning. This work presents a generic system design for an online GAL platform. It comprises five modules for planning, administering, and monitoring remote GAL lessons. To validate the proposed design, a reference implementation for word learning was demonstrated in a field test. Nineteen participants independently took a predefined online GAL lesson and rated their experience on the System Usability Scale and a supplemental questionnaire. To monitor correct gesture execution, the reference implementation recorded the participants’ webcam feeds and uploaded them to the instructor for review. The results from the field test show that the reference implementation is capable of delivering an e-learning experience with GAL elements. Designers of e-learning platforms may use the proposed design to include GAL in their applications. Beyond its original purpose in education, the platform is also useful for collecting and annotating gesture data.
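As a rough illustration of the lesson-plus-review idea described in the abstract, the sketch below pairs learning content with gestures and stores webcam recordings for instructor review. All class, field, and path names are hypothetical and do not reproduce the paper’s actual module design.

```python
# Hypothetical sketch of a GAL lesson: content items paired with gestures,
# plus webcam recordings kept for instructor review. Names are illustrative.
from dataclasses import dataclass, field

@dataclass
class GalItem:
    content: str   # the learning content to rehearse (e.g., a word)
    gesture: str   # identifier of the associated body gesture

@dataclass
class GalLesson:
    title: str
    items: list[GalItem] = field(default_factory=list)
    recordings: dict[str, str] = field(default_factory=dict)  # participant -> video path

    def record_attempt(self, participant: str, video_path: str) -> None:
        # Store the webcam recording so an instructor can later review
        # whether the gesture was executed correctly.
        self.recordings[participant] = video_path

lesson = GalLesson("Vocabulary: animals")
lesson.items.append(GalItem(content="eagle", gesture="flap_arms"))
lesson.record_attempt("p01", "uploads/p01/eagle.mp4")
```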

17 pages, 4600 KiB  
Article
Mobile Eye-Tracking Data Analysis Using Object Detection via YOLO v4
by Niharika Kumari, Verena Ruf, Sergey Mukhametov, Albrecht Schmidt, Jochen Kuhn and Stefan Küchemann
Sensors 2021, 21(22), 7668; https://doi.org/10.3390/s21227668 - 18 Nov 2021
Cited by 17
Abstract
Remote eye tracking has become an important tool for the online analysis of learning processes. Mobile eye trackers can even extend the range of opportunities, compared to stationary eye trackers, to real settings such as classrooms or experimental lab courses. However, the complex and sometimes manual analysis of mobile eye-tracking data often hinders the realization of extensive studies, as it is a very time-consuming process and usually not feasible for real-world situations in which participants move or manipulate objects. In this work, we explore the opportunities to use object recognition models to assign mobile eye-tracking data to real objects during an authentic student lab course. In a comparison of three different Convolutional Neural Networks (CNNs), a Faster Region-Based CNN, You Only Look Once (YOLO) v3, and YOLO v4, we found that YOLO v4, together with an optical flow estimation, provides the fastest results with the highest accuracy for object detection in this setting. The automatic assignment of gaze data to real objects simplifies the time-consuming analysis of mobile eye-tracking data and offers an opportunity for real-time system responses to the user’s gaze. Additionally, we identify and discuss several problems in using object detection for mobile eye-tracking data that need to be considered.
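The core assignment step the abstract describes can be pictured as a point-in-box test: a gaze sample is labeled with the detected object whose bounding box contains it. The sketch below assumes detections (e.g., from a detector such as YOLO v4) are already available as labeled boxes; the hard-coded boxes and all names are assumptions for illustration, not the authors’ code.

```python
# Illustrative sketch: assign a gaze point to the first detected object
# whose bounding box contains it. Detections would come from an object
# detector; here they are hard-coded for demonstration.
from typing import Optional

Box = tuple[str, float, float, float, float]  # label, x_min, y_min, x_max, y_max

def assign_gaze(gaze_x: float, gaze_y: float, detections: list[Box]) -> Optional[str]:
    """Return the label of the first detected box containing the gaze point."""
    for label, x0, y0, x1, y1 in detections:
        if x0 <= gaze_x <= x1 and y0 <= gaze_y <= y1:
            return label
    return None  # gaze fell on no detected object

frame_detections = [
    ("oscilloscope", 120, 80, 420, 300),
    ("notebook", 450, 200, 700, 420),
]
print(assign_gaze(300, 150, frame_detections))  # -> "oscilloscope"
```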

20 pages, 2976 KiB  
Communication
Investigating the Usability of a Head-Mounted Display Augmented Reality Device in Elementary School Children
by Luisa Lauer, Kristin Altmeyer, Sarah Malone, Michael Barz, Roland Brünken, Daniel Sonntag and Markus Peschel
Sensors 2021, 21(19), 6623; https://doi.org/10.3390/s21196623 - 05 Oct 2021
Cited by 9
Abstract
Augmenting reality via head-mounted displays (HMD-AR) is an emerging technology in education. The interactivity provided by HMD-AR devices is particularly promising for learning, but presents a challenge to human activity recognition, especially with children. Recent technological advances in speech and gesture recognition on Microsoft’s HoloLens 2 may address this prevailing issue. In a within-subjects study with 47 elementary school children (2nd to 6th grade), we examined the usability of the HoloLens 2 using a standardized tutorial on multimodal interaction in AR. The overall system usability was rated “good”. However, several behavioral metrics indicated that the interaction modes differed in their efficiency. The results are of major importance for the development of learning applications in HMD-AR, as they partially deviate from previous findings. In particular, the reliable recognition of children’s voice commands that we observed represents a novelty. Furthermore, we found different interaction preferences in HMD-AR among the children. We also found that the use of HMD-AR had a positive effect on children’s activity-related achievement emotions. Overall, our findings can serve as a basis for determining general requirements, possibilities, and limitations of the implementation of educational HMD-AR environments in elementary school classrooms.
