Sensor Technologies for Gesture Recognition Applications in Shared Spaces

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: 20 October 2024 | Viewed by 4844

Special Issue Editors


Guest Editor
Measurement and Sensor Technology, Chemnitz University of Technology, 09126 Chemnitz, Germany
Interests: impedance spectroscopy; physical and chemical sensors based on carbonaceous nanomaterials; energy-aware wireless sensors

Guest Editor
Measurement and Sensor Technology, Chemnitz University of Technology, 09126 Chemnitz, Germany
Interests: machine learning; swarm intelligence for feature selection; embedded systems; gesture recognition

Special Issue Information

Dear Colleagues,

Gesture recognition is a competitive field of research that allows intelligent agents to understand human body language in shared spaces. It relies on sensor technologies to read and interpret hand movements, grasping forces, body activities, motion and posture, and hand sign languages. Gesture recognition integrates artificial intelligence to serve various goals of human–machine interaction and human–human communication in many scenarios in the context of smart cities and hybrid societies. Sensor technologies for gesture recognition in shared spaces can serve applications such as automatic sign language recognition, human–robot interaction, new ways of controlling video games, virtual reality and digital twin applications, and automotive vehicles and smart traffic, among others.

Potential topics include but are not limited to:

  • Wearables and IoT for gesture recognition;
  • Gesture recognition sensor technologies;
  • Feature extraction and selection methods for gesture recognition;
  • Sensors and myographic measurement methods for gesture detection;
  • Wearable sensors for human tracking and gait analysis;
  • Sensors for human posture and movement recognition;
  • Sensors and algorithms for motion detection and tracking;
  • Hand gesture recognition;
  • Sensors and algorithms for sign language recognition;
  • Gesture recognition for remote control, virtual reality, and digital twins;
  • Pattern recognition and machine learning for gesture recognition;
  • Gesture recognition for smart cities;
  • Applications of gesture recognition in shared spaces;
  • Algorithms for gesture recognition and body-attached sensor networks.

Prof. Dr. Olfa Kanoun
Dr. Rim Barioul
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • wearable sensors
  • myography
  • gesture recognition
  • body-attached sensor networks
  • shared spaces
  • intelligent agents
  • sign language
  • IoT
  • motion and posture recognition

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

21 pages, 2094 KiB  
Article
Unsupervised Domain Adaptation for Inter-Session Re-Calibration of Ultrasound-Based HMIs
by Antonios Lykourinas, Xavier Rottenberg, Francky Catthoor and Athanassios Skodras
Sensors 2024, 24(15), 5043; https://doi.org/10.3390/s24155043 - 4 Aug 2024
Viewed by 685
Abstract
Human–Machine Interfaces (HMIs) have gained popularity as they allow for an effortless and natural interaction between the user and the machine by processing information gathered from single or multiple sensing modalities and transcribing user intentions into the desired actions. Their operability depends on frequent periodic re-calibration using newly acquired data, owing to their need to adapt to dynamic environments where test-time data continuously change in unforeseen ways; this factor significantly contributes to their abandonment and remains unexplored by the ultrasound-based (US-based) HMI community. In this work, we conduct a thorough investigation of Unsupervised Domain Adaptation (UDA) algorithms, which utilize unlabeled data for re-calibration, for the re-calibration of US-based HMIs during within-day sessions. Our experimentation led us to propose a CNN-based architecture for simultaneous wrist rotation angle and finger gesture prediction that achieves performance comparable with the state of the art while featuring 87.92% fewer trainable parameters. According to our findings, DANN (a Domain-Adversarial training algorithm), with proper initialization, offers an average 24.99% classification accuracy enhancement compared to the no-re-calibration setting. However, our results suggest that in cases where the experimental setup and the UDA configuration differ, the observed enhancements would be rather small or even unnoticeable. Full article
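The unsupervised re-calibration idea above can be illustrated without any labels from the new session: align the statistics of the new session's features with those of the calibration session, then reuse the existing classifier. Below is a minimal NumPy sketch using CORAL-style second-order alignment, a simpler UDA baseline than the DANN algorithm evaluated in the paper; the function name, shapes, and regularization constant are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def coral_align(X_new, X_cal, eps=1e-5):
    """Map features from a new session onto the calibration session's
    distribution by matching means and covariances (CORAL-style UDA).
    This is an illustrative baseline, not the paper's DANN model."""
    def mat_pow(C, p):
        # symmetric matrix power via eigendecomposition
        w, V = np.linalg.eigh(C)
        return (V * np.clip(w, eps, None) ** p) @ V.T

    d = X_new.shape[1]
    C_new = np.cov(X_new, rowvar=False) + eps * np.eye(d)
    C_cal = np.cov(X_cal, rowvar=False) + eps * np.eye(d)
    # whiten the new-session features, then re-color with calibration statistics
    Z = (X_new - X_new.mean(axis=0)) @ mat_pow(C_new, -0.5)
    return Z @ mat_pow(C_cal, 0.5) + X_cal.mean(axis=0)
```

A classifier trained on the calibration session can then be applied to `coral_align(X_new, X_cal)` directly, using no labels from the new session.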

20 pages, 4716 KiB  
Article
Novel Wearable System to Recognize Sign Language in Real Time
by İlhan Umut and Ümit Can Kumdereli
Sensors 2024, 24(14), 4613; https://doi.org/10.3390/s24144613 - 16 Jul 2024
Viewed by 1100
Abstract
The aim of this study is to develop a practical software solution for the real-time recognition of sign language words using both arms, facilitating communication between hearing-impaired individuals and those who can hear. Several sign language recognition systems have been developed using different technologies, including cameras, armbands, and gloves. However, the system proposed in this study stands out for its practicality, utilizing surface electromyography (muscle activity) and inertial measurement unit (motion dynamics) data from both arms. We address the drawbacks of other methods, such as high costs, low accuracy due to ambient light and obstacles, and complex hardware requirements, which have limited their practical application. Our software runs on different operating systems and uses digital signal processing and machine learning methods developed specifically for this study. For testing, we created a dataset of 80 words selected by their frequency of use in daily life and performed a thorough feature extraction process. We evaluated the recognition performance of various classifiers and parameters and compared the results. The random forest algorithm emerged as the most successful, achieving a remarkable 99.875% accuracy, while the naïve Bayes algorithm had the lowest success rate with 87.625% accuracy. The new system promises to significantly improve communication for people with hearing disabilities and to integrate seamlessly into daily life without compromising user comfort or lifestyle quality. Full article
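The per-channel feature extraction step that precedes classification in systems like the one above can be sketched with a handful of classic sEMG time-domain features; the exact feature set, names, and threshold here are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def time_domain_features(window, thresh=0.0):
    """Classic sEMG time-domain features for one channel window
    (illustrative subset, not the paper's exact feature set)."""
    x = np.asarray(window, dtype=float)
    mav = np.mean(np.abs(x))               # mean absolute value
    rms = np.sqrt(np.mean(x ** 2))         # root mean square
    wl = np.sum(np.abs(np.diff(x)))        # waveform length
    # zero crossings, with an amplitude threshold to suppress noise
    zc = np.sum((x[:-1] * x[1:] < 0) & (np.abs(x[:-1] - x[1:]) > thresh))
    # slope sign changes: consecutive first differences with opposite signs
    d = np.diff(x)
    ssc = np.sum(d[:-1] * d[1:] < 0)
    return {"MAV": mav, "RMS": rms, "WL": wl, "ZC": int(zc), "SSC": int(ssc)}
```

Concatenating such features across all sEMG and IMU channels yields the vectors that a classifier such as random forest then consumes.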

25 pages, 25566 KiB  
Article
Comparative Study of sEMG Feature Evaluation Methods Based on the Hand Gesture Classification Performance
by Hiba Hellara, Rim Barioul, Salwa Sahnoun, Ahmed Fakhfakh and Olfa Kanoun
Sensors 2024, 24(11), 3638; https://doi.org/10.3390/s24113638 - 4 Jun 2024
Viewed by 968
Abstract
Effective feature extraction and selection are crucial for the accurate classification and prediction of hand gestures based on electromyographic signals. In this paper, we systematically compare six filter and wrapper feature evaluation methods and investigate their respective impacts on gesture recognition accuracy. The investigation is based on several benchmark datasets and one real hand gesture dataset, comprising 15 hand force exercises collected from 14 healthy subjects using eight commercial sEMG sensors. A total of 37 time- and frequency-domain features were extracted from each sEMG channel. The benchmark datasets revealed that the minimum Redundancy Maximum Relevance (mRMR) feature evaluation method had the poorest performance, resulting in a decrease in classification accuracy. However, the Recursive Feature Elimination (RFE) method demonstrated the potential to enhance classification accuracy across most of the datasets. It selected a feature subset comprising 65 features, which led to an accuracy of 97.14%. The Mutual Information (MI) method selected 200 features to reach an accuracy of 97.38%. The Feature Importance (FI) method reached a higher accuracy of 97.62% but selected 140 features. Further investigation showed that selecting 65 and 75 features with the RFE method led to an identical accuracy of 97.14%. A thorough examination of the selected features revealed the potential for three additional features from three specific sensors to raise the classification accuracy to 97.38%. These results highlight the importance of employing an appropriate feature selection method to significantly reduce the number of necessary features while maintaining classification accuracy, and they underscore the need for further analysis and refinement to achieve optimal solutions. Full article
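The wrapper idea behind RFE, refitting a model and repeatedly discarding the feature it weights least, can be sketched with a least-squares linear model standing in for the paper's classifier; the scorer, function name, and data are illustrative assumptions.

```python
import numpy as np

def rfe_select(X, y, n_keep):
    """Recursive feature elimination sketch: refit a linear model and drop
    the feature with the smallest absolute weight until n_keep remain.
    (A least-squares scorer stands in for the paper's classifier.)"""
    keep = list(range(X.shape[1]))
    while len(keep) > n_keep:
        Xs = X[:, keep]
        # standardize so weight magnitudes are comparable across features
        Xs = (Xs - Xs.mean(axis=0)) / (Xs.std(axis=0) + 1e-12)
        w, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        keep.pop(int(np.argmin(np.abs(w))))  # eliminate the weakest feature
    return keep
```

Filter methods such as mRMR or MI instead rank features once by a model-free criterion, which is cheaper but, as the results above indicate, can cost accuracy.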

30 pages, 5445 KiB  
Article
End-to-End Ultrasonic Hand Gesture Recognition
by Elfi Fertl, Do Dinh Tan Nguyen, Martin Krueger, Georg Stettinger, Rubén Padial-Allué, Encarnación Castillo and Manuel P. Cuéllar
Sensors 2024, 24(9), 2740; https://doi.org/10.3390/s24092740 - 25 Apr 2024
Viewed by 1214
Abstract
As the number of electronic gadgets in our daily lives increases and most of them require some kind of human interaction, innovative, convenient input methods are in demand. State-of-the-art (SotA) ultrasound-based hand gesture recognition (HGR) systems are limited in terms of robustness and accuracy. This research presents a novel machine learning (ML)-based end-to-end solution for hand gesture recognition with low-cost micro-electromechanical systems (MEMS) ultrasonic transducers. In contrast to prior methods, our ML model processes the raw echo samples directly instead of using pre-processed data. Consequently, the processing flow presented in this work leaves it to the ML model to extract the important information from the echo data. The success of this approach is demonstrated as follows. Four MEMS ultrasonic transducers are placed in three different geometrical arrangements. For each arrangement, different types of ML models are optimized and benchmarked on datasets acquired with the presented custom hardware (HW): convolutional neural networks (CNNs), gated recurrent units (GRUs), long short-term memory (LSTM), vision transformer (ViT), and cross-attention multi-scale vision transformer (CrossViT). The last three ML models reached more than 88% accuracy. The most important finding of this research is that little pre-processing is necessary to obtain high accuracy in ultrasonic HGR for several arrangements of cost-effective and low-power MEMS ultrasonic transducer arrays; even the computationally intensive Fourier transform can be omitted. The presented approach is further compared to HGR systems using other sensor types, such as vision, WiFi, and radar, as well as to state-of-the-art ultrasound-based HGR systems. Direct processing of the sensor signals by a compact model makes ultrasonic hand gesture recognition a truly low-cost and power-efficient input method. Full article
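Processing raw echo samples directly, rather than hand-crafted spectra, essentially means letting learned filters slide over the echo waveform. The first layer of such an end-to-end model can be sketched as a strided 1-D convolution bank; the kernel values, stride, and shapes here are illustrative assumptions, not the trained models from the paper.

```python
import numpy as np

def conv1d_relu(echo, kernels, stride=4):
    """Valid-mode strided 1-D convolution over raw echo samples,
    followed by ReLU; returns a (frames, n_kernels) feature map.
    Illustrative first layer of an end-to-end model."""
    echo = np.asarray(echo, dtype=float)
    k = kernels.shape[1]
    starts = np.arange(0, len(echo) - k + 1, stride)
    windows = np.stack([echo[i:i + k] for i in starts])  # (frames, k)
    return np.maximum(windows @ kernels.T, 0.0)          # ReLU activation
```

Stacking several such layers with a small classifier head gives the CNN family benchmarked in the paper; the recurrent and transformer variants consume the same raw frames without any Fourier transform.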

Planned Papers

The list below presents only planned manuscripts. Some of these manuscripts have not yet been received by the Editorial Office. Papers submitted to MDPI journals are subject to peer review.
