Search Results (397)

Search Parameters:
Keywords = human gesture recognition

17 pages, 3464 KB  
Article
A Novel Hand Motion Intention Recognition Method That Decodes EMG Signals Based on an Improved LSTM
by Tian-Ao Cao, Hongyou Zhou, Zhengkui Chen, Yiwei Dai, Min Fang, Chengze Wu, Lurong Jiang, Yanyun Dai and Jijun Tong
Symmetry 2025, 17(10), 1587; https://doi.org/10.3390/sym17101587 - 23 Sep 2025
Viewed by 127
Abstract
Electromyography (EMG) signals reflect hand motion intention and exhibit a certain degree of amplitude symmetry. Recognition of hand motion intention based on EMG is increasingly being adopted in applications such as rehabilitation, prostheses, and intelligent supply chains; for instance, the motion intentions of humans can be conveyed to logistics equipment, thereby improving the level of intelligence in a supply chain. To enhance the recognition accuracy of multiple hand motion intentions, this paper proposes a hand motion intention recognition method that decodes EMG signals based on an improved long short-term memory (LSTM) network. First, we preprocessed the EMG segments and applied overlapping sliding windows. Second, we adopted an LSTM model and improved it to capture features and predict hand motion intention; specifically, we selected the optimal combination of key hyperparameters for the LSTM model using a genetic algorithm (GA). The proposed method achieved relatively high accuracy in detecting hand motion intention, with average accuracies of 92.0% (five gestures) and 89.7% (seven gestures), while the highest accuracy reached 100.0% (seven gestures). Our work may provide a way to predict the motion intention of the human hand for intention communication. Full article
(This article belongs to the Section Computer)
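The processing chain sketched in this abstract — overlapping sliding windows over preprocessed EMG, an LSTM classifier, and a genetic algorithm (GA) searching over key hyperparameters — can be illustrated with a minimal Python sketch. The window length, stride, candidate hyperparameter ranges, and the tiny GA below are illustrative assumptions, not the settings reported in the paper.

import random
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def sliding_windows(emg, labels, win=200, stride=50):
    # Cut multi-channel EMG (samples x channels) into overlapping windows.
    X, y = [], []
    for s in range(0, len(emg) - win + 1, stride):
        X.append(emg[s:s + win])
        y.append(labels[s + win - 1])  # label the window by its last sample
    return np.stack(X), np.array(y)

def build_lstm(units, lr, n_channels, n_classes):
    model = models.Sequential([
        layers.Input(shape=(None, n_channels)),
        layers.LSTM(units),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

def ga_search(X_tr, y_tr, X_val, y_val, pop_size=4, generations=3):
    # Toy GA over (LSTM units, learning rate): evaluate, keep the fittest, mutate.
    pop = [(random.choice([32, 64, 128]), 10 ** random.uniform(-4, -2))
           for _ in range(pop_size)]
    best = None
    for _ in range(generations):
        scored = []
        for units, lr in pop:
            m = build_lstm(units, lr, X_tr.shape[-1], int(y_tr.max()) + 1)
            m.fit(X_tr, y_tr, epochs=2, batch_size=32, verbose=0)
            acc = m.evaluate(X_val, y_val, verbose=0)[1]
            scored.append((acc, units, lr))
        scored.sort(reverse=True)
        best = scored[0]
        parents = scored[:2]
        pop = [(u, lr) for _, u, lr in parents] + [
            (random.choice([32, 64, 128]),
             parents[0][2] * 10 ** random.uniform(-0.5, 0.5))
            for _ in range(pop_size - 2)]
    return best  # (validation accuracy, units, learning rate)

Under these assumptions, ga_search would be run on windows produced by sliding_windows, and the best (units, learning rate) pair would then be used to train the final classifier on the full training set.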

22 pages, 3399 KB  
Article
Integrating Cross-Modal Semantic Learning with Generative Models for Gesture Recognition
by Shuangjiao Zhai, Zixin Dai, Zanxia Jin, Pinle Qin and Jianchao Zeng
Sensors 2025, 25(18), 5783; https://doi.org/10.3390/s25185783 - 17 Sep 2025
Viewed by 222
Abstract
Radio frequency (RF)-based human activity sensing is an essential component of ubiquitous computing, with WiFi sensing providing a practical and low-cost solution for gesture and activity recognition. However, challenges such as manual data collection, multipath interference, and poor cross-domain generalization hinder real-world deployment. Existing data augmentation approaches often neglect the biomechanical structure underlying RF signals. To address these limitations, we present CM-GR, a cross-modal gesture recognition framework that integrates semantic learning with generative modeling. CM-GR leverages 3D skeletal points extracted from vision data as semantic priors to guide the synthesis of realistic WiFi signals, thereby incorporating biomechanical constraints without requiring extensive manual labeling. In addition, dynamic conditional vectors are constructed from inter-subject skeletal differences, enabling user-specific WiFi data generation without the need for dedicated data collection and annotation for each new user. Extensive experiments on the public MM-Fi dataset and our SelfSet dataset demonstrate that CM-GR substantially improves the cross-subject gesture recognition accuracy, achieving gains of up to 10.26% and 9.5%, respectively. These results confirm the effectiveness of CM-GR in synthesizing personalized WiFi data and highlight its potential for robust and scalable gesture recognition in practical settings. Full article
(This article belongs to the Section Biomedical Sensors)

11 pages, 1005 KB  
Proceeding Paper
Multimodal Fusion for Enhanced Human–Computer Interaction
by Ajay Sharma, Isha Batra, Shamneesh Sharma and Anggy Pradiftha Junfithrana
Eng. Proc. 2025, 107(1), 81; https://doi.org/10.3390/engproc2025107081 - 10 Sep 2025
Viewed by 358
Abstract
Our paper introduces a novel virtual mouse driven by gesture detection, eye-tracking, and voice monitoring. The system uses computer vision and machine learning to let users command and control the mouse pointer with eye motions, voice commands, or hand gestures. Its main goal is to provide an easy and engaging interface for users who want a more natural, hands-free way of interacting with their computers, as well as for those with impairments that limit their bodily motions, such as paralysis. By combining multiple input modalities, the system improves accessibility and usability, offering a flexible solution for a wide range of users. The speech recognition function permits hands-free operation via voice instructions, while the eye-tracking component detects and responds to the user's gaze, providing precise cursor control. Gesture recognition complements these features by letting users execute mouse operations with simple hand movements. This technology not only enhances the user experience for people with impairments but also marks a significant development in human–computer interaction, showing how computer vision and machine learning can be used to build more inclusive and adaptable user interfaces, thereby improving the accessibility and efficiency of computer use for everyone. Full article

23 pages, 15956 KB  
Article
A Photovoltaic Light Sensor-Based Self-Powered Real-Time Hover Gesture Recognition System for Smart Home Control
by Nora Almania, Sarah Alhouli and Deepak Sahoo
Electronics 2025, 14(18), 3576; https://doi.org/10.3390/electronics14183576 - 9 Sep 2025
Viewed by 386
Abstract
Many gesture recognition systems with innovative interfaces have emerged for smart home control. However, these systems tend to be energy-intensive, bulky, and expensive. There is also a lack of real-time demonstrations of gesture recognition and subsequent evaluation of the user experience. Photovoltaic light sensors are self-powered, battery-free, flexible, portable, and easily deployable on various surfaces throughout the home. They enable natural, intuitive, hover-based interaction, which could create a positive user experience. In this paper, we present the development and evaluation of a real-time, hover gesture recognition system that can control multiple smart home devices via a self-powered photovoltaic interface. Five popular supervised machine learning algorithms were evaluated using gesture data from 48 participants. The random forest classifier achieved high accuracies. However, a one-size-fits-all model performed poorly in real-time testing. User-specific random forest models performed well with 10 participants, showing no significant difference in offline and real-time performance and under normal indoor lighting conditions. This paper demonstrates the technical feasibility of using photovoltaic surfaces as self-powered interfaces for gestural interaction systems that are perceived to be useful and easy to use. It establishes a foundation for future work in hover-based interaction and sustainable sensing, enabling human–computer interaction researchers to explore further applications. Full article
(This article belongs to the Special Issue Human-Computer Interaction in Intelligent Systems, 2nd Edition)
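As a rough illustration of the classification stage described here — a random forest over features extracted from photovoltaic sensor traces, with per-user models mirroring the user-specific approach in the abstract — the following scikit-learn sketch is offered; the feature set and data layout are assumptions, not the paper's actual pipeline.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def simple_features(trace):
    # Hypothetical hand-crafted features from one photovoltaic voltage trace.
    return [trace.mean(), trace.std(), trace.min(), trace.max(),
            np.argmin(trace) / len(trace)]  # relative position of the hover dip

def fit_user_model(traces, labels):
    # One random forest per user, echoing the user-specific models in the abstract.
    X = np.array([simple_features(t) for t in traces])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, stratify=labels)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    return clf, clf.score(X_te, y_te)

# Usage sketch: models = {user: fit_user_model(data[user], labels[user]) for user in data}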

10 pages, 2364 KB  
Proceeding Paper
AI-Powered Sign Language Detection Using YOLO-v11 for Communication Equality
by Ivana Lucia Kharisma, Irma Nurmalasari, Yuni Lestari, Salma Dela Septiani, Kamdan and Muchtar Ali Setyo Yudono
Eng. Proc. 2025, 107(1), 83; https://doi.org/10.3390/engproc2025107083 - 8 Sep 2025
Viewed by 294
Abstract
Communication plays a vital role in conveying messages, expressing emotions, and sharing perceptions, and is a fundamental aspect of human interaction with the environment. For individuals with hearing impairments, sign language serves as an essential communication tool, enabling interaction both within the deaf community and with non-deaf individuals. This study aims to bridge this communication gap by developing a sign language recognition system using the deep learning-based YOLO-v11 algorithm. YOLO-v11, a state-of-the-art object detection algorithm, is known for its speed, accuracy, and efficiency. The system uses image recognition to identify hand gestures in American Sign Language (ASL) and translates them into text or speech, facilitating inclusive communication. The model achieved a training accuracy of 94.67% and a testing accuracy of 93.02%, indicating excellent performance in recognizing sign language across the training and testing datasets. In addition, the model is highly reliable in recognizing the classes “Hello”, “I Love You”, “No”, and “Thank You”, with a sensitivity close to or equal to 100%. This research contributes to advancing communication equality for individuals with hearing impairments, promoting inclusivity, and supporting their integration into society. Full article
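A minimal sketch of how a YOLO-v11 detector could be fine-tuned and run on sign-language images is shown below, assuming the Ultralytics implementation of YOLO11; the dataset YAML path, model variant, and image names are placeholders for illustration, and the paper does not specify this exact tooling.

from ultralytics import YOLO  # assumes the Ultralytics YOLO11 implementation

# Fine-tune a pretrained YOLO11 nano model on a hypothetical ASL dataset
# described by a YOLO-format data.yaml (paths and class names are placeholders).
model = YOLO("yolo11n.pt")
model.train(data="asl_signs/data.yaml", epochs=50, imgsz=640)

# Run inference on an image and read back the predicted classes and confidences.
results = model.predict("hello_sign.jpg", conf=0.5)
for box in results[0].boxes:
    print(results[0].names[int(box.cls)], float(box.conf))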

10 pages, 2931 KB  
Proceeding Paper
Dynamic Hand Gesture Recognition Using MediaPipe and Transformer
by Hsin-Hua Li and Chen-Chiung Hsieh
Eng. Proc. 2025, 108(1), 22; https://doi.org/10.3390/engproc2025108022 - 3 Sep 2025
Viewed by 1408
Abstract
We developed a low-cost, high-performance gesture recognition system with a dynamic hand gesture recognition technique based on the Transformer model combined with MediaPipe. The technique accurately extracts hand gesture key points. The system was designed with eight primary gestures: swipe up, swipe down, swipe left, swipe right, thumbs up, OK, click, and enlarge. These gestures serve as alternatives to mouse and keyboard operations, simplifying human–computer interaction interfaces to meet the needs of media system control and presentation switching. The experiment results demonstrated that training deep learning models using the Transformer achieved over 99% accuracy, effectively enhancing recognition performance. Full article
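The two stages named in this abstract — MediaPipe hand-landmark extraction followed by a Transformer classifier over the landmark sequence — might look roughly like the sketch below; the sequence length, model dimensions, and eight-class head are illustrative assumptions rather than the authors' exact configuration.

import cv2
import mediapipe as mp
import torch
import torch.nn as nn

def landmark_sequence(video_path, max_frames=60):
    # Extract 21 (x, y, z) hand landmarks per frame with MediaPipe Hands.
    hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=1)
    cap, seq = cv2.VideoCapture(video_path), []
    while cap.isOpened() and len(seq) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        res = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if res.multi_hand_landmarks:
            lm = res.multi_hand_landmarks[0].landmark
            seq.append([c for p in lm for c in (p.x, p.y, p.z)])  # 63 values per frame
    cap.release()
    return torch.tensor(seq, dtype=torch.float32)  # (frames, 63)

class GestureTransformer(nn.Module):
    # Small Transformer encoder that pools the frame sequence and classifies it.
    def __init__(self, n_classes=8, d_model=64):
        super().__init__()
        self.proj = nn.Linear(63, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                      # x: (batch, frames, 63)
        z = self.encoder(self.proj(x))         # (batch, frames, d_model)
        return self.head(z.mean(dim=1))        # mean-pool over time, then classify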

20 pages, 2732 KB  
Article
Redesigning Multimodal Interaction: Adaptive Signal Processing and Cross-Modal Interaction for Hands-Free Computer Interaction
by Bui Hong Quan, Nguyen Dinh Tuan Anh, Hoang Van Phi and Bui Trung Thanh
Sensors 2025, 25(17), 5411; https://doi.org/10.3390/s25175411 - 2 Sep 2025
Viewed by 565
Abstract
Hands-free computer interaction is a key topic in assistive technology, with camera-based and voice-based systems being the most common methods. Recent camera-based solutions leverage facial expressions or head movements to simulate mouse clicks or key presses, while voice-based systems enable control via speech commands, wake-word detection, and vocal gestures. However, existing systems often suffer from limitations in responsiveness and accuracy, especially under real-world conditions. In this paper, we present 3-Modal Human-Computer Interaction (3M-HCI), a novel interaction system that dynamically integrates facial, vocal, and eye-based inputs through a new signal processing pipeline and a cross-modal coordination mechanism. This approach not only enhances recognition accuracy but also reduces interaction latency. Experimental results demonstrate that 3M-HCI outperforms several recent hands-free interaction solutions in both speed and precision, highlighting its potential as a robust assistive interface. Full article
(This article belongs to the Section Sensing and Imaging)

47 pages, 15579 KB  
Article
Geometric Symmetry and Temporal Optimization in Human Pose and Hand Gesture Recognition for Intelligent Elderly Individual Monitoring
by Pongsarun Boonyopakorn and Mahasak Ketcham
Symmetry 2025, 17(9), 1423; https://doi.org/10.3390/sym17091423 - 1 Sep 2025
Viewed by 529
Abstract
This study introduces a real-time, non-intrusive monitoring system designed to support elderly care through vision-based pose estimation and hand gesture recognition. The proposed framework integrates convolutional neural networks (CNNs), temporal modeling using LSTM networks, and symmetry-aware keypoint analysis to enhance the accuracy and reliability of behavior detection under varied real-world conditions. By leveraging the bilateral symmetry of human anatomy, the system improves the robustness of posture and gesture classification, even in the presence of partial occlusion or variable lighting. A total of 21 hand landmarks and 33 body pose points are used to recognize predefined actions and communication gestures, enabling seamless interaction without wearable devices. Experimental evaluations across four distinct lighting environments confirm a consistent accuracy above 90%, with real-time alerts triggered via IoT messaging platforms. The system’s modular architecture, interpretability, and adaptability make it a scalable solution for intelligent elderly individual monitoring, offering a novel application of spatial symmetry and optimized deep learning in healthcare technology. Full article
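One way to read the "symmetry-aware keypoint analysis" described here is as a feature that compares left- and right-side landmarks after mirroring across the body midline; the sketch below, using the standard 33-point pose layout mentioned in the abstract, is an illustrative assumption rather than the paper's exact formulation.

import numpy as np

# Index pairs of corresponding left/right body landmarks in a 33-point pose layout
# (shoulders, elbows, wrists, hips, knees, ankles), as used by common pose estimators.
MIRROR_PAIRS = [(11, 12), (13, 14), (15, 16), (23, 24), (25, 26), (27, 28)]

def bilateral_asymmetry(pose):
    # pose: (33, 2) array of normalized (x, y) keypoints for one frame.
    # Mirror the right-side points across the body midline (mean shoulder/hip x)
    # and measure how far they land from their left-side counterparts.
    midline_x = pose[[11, 12, 23, 24], 0].mean()
    errors = []
    for left, right in MIRROR_PAIRS:
        mirrored_right = np.array([2 * midline_x - pose[right, 0], pose[right, 1]])
        errors.append(np.linalg.norm(pose[left] - mirrored_right))
    return float(np.mean(errors))  # near 0 for symmetric postures, larger when asymmetric

# Usage sketch: append bilateral_asymmetry(frame_pose) to the per-frame feature
# vector before feeding the sequence to the CNN/LSTM classifier.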

22 pages, 445 KB  
Article
Design of Real-Time Gesture Recognition with Convolutional Neural Networks on a Low-End FPGA
by Rui Policarpo Duarte, Tiago Gonçalves, Gustavo Jacinto, Paulo Flores and Mário Véstias
Electronics 2025, 14(17), 3457; https://doi.org/10.3390/electronics14173457 - 29 Aug 2025
Viewed by 449
Abstract
Hand gesture recognition is used in human–computer interaction, with multiple applications in assistive technologies, virtual reality, and smart systems. While vision-based methods are commonly employed, they are often computationally intensive, sensitive to environmental conditions, and raise privacy concerns. This work proposes a hardware/software co-optimized system for real-time hand gesture recognition using accelerometer data, designed for a portable, low-cost platform. A Convolutional Neural Network from TinyML is implemented on a Xilinx Zynq-7000 SoC-FPGA, utilizing fixed-point arithmetic to minimize computational complexity while maintaining classification accuracy. Additionally, combined architectural optimizations, including pipelining and loop unrolling, are applied to enhance processing efficiency. The final system achieves a 62× speedup over an unoptimized floating-point implementation while reducing power consumption, making it suitable for embedded and battery-powered applications. Full article
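The fixed-point arithmetic mentioned in this abstract can be illustrated with a small quantization sketch; the Q-format (number of fractional bits) and word length below are assumptions, not the configuration used in the Zynq-7000 design.

import numpy as np

def to_fixed(x, frac_bits=8, word_bits=16):
    # Quantize floating-point weights/activations to signed fixed point with
    # frac_bits fractional bits, saturating to the word_bits range.
    scale = 1 << frac_bits
    lo, hi = -(1 << (word_bits - 1)), (1 << (word_bits - 1)) - 1
    return np.clip(np.round(np.asarray(x) * scale), lo, hi).astype(np.int32)

def fixed_mac(a_fx, b_fx, frac_bits=8):
    # Multiply-accumulate in fixed point: products carry 2*frac_bits fractional bits,
    # so shift right once after the accumulation to return to the original format.
    acc = np.sum(a_fx.astype(np.int64) * b_fx.astype(np.int64))
    return int(acc >> frac_bits)

# Example: a 3-tap dot product agrees with its floating-point counterpart.
w, x = np.array([0.5, -0.25, 0.125]), np.array([1.0, 2.0, -4.0])
print(fixed_mac(to_fixed(w), to_fixed(x)) / (1 << 8), np.dot(w, x))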

25 pages, 4202 KB  
Article
Real-Time Paddle Stroke Classification and Wireless Monitoring in Open Water Using Wearable Inertial Nodes
by Vladut-Alexandru Dobra, Ionut-Marian Dobra and Silviu Folea
Sensors 2025, 25(17), 5307; https://doi.org/10.3390/s25175307 - 26 Aug 2025
Viewed by 786
Abstract
This study presents a low-cost wearable system for monitoring and classifying paddle strokes in open-water environments. Building upon our previous work in controlled aquatic and dryland settings, the proposed system consists of ESP32-based embedded nodes equipped with MPU6050 accelerometer–gyroscope sensors. These nodes communicate via the ESP-NOW protocol in a master–slave architecture. With minimal hardware modifications, the system implements gesture classification using Dynamic Time Warping (DTW) to distinguish between left and right paddle strokes. The collected data, including stroke type, count, and motion similarity, are transmitted in real time to a local interface for visualization. Field experiments were conducted on a calm lake using a paddleboard, where users performed a series of alternating strokes. In addition to gesture recognition, the study includes empirical testing of ESP-NOW communication range in the open lake environment. The results demonstrate reliable wireless communication over distances exceeding 100 m with minimal packet loss, confirming the suitability of ESP-NOW for low-latency data transfer in open-water conditions. The system achieved over 80% accuracy in stroke classification and sustained more than 3 h of operational battery life. This approach demonstrates the feasibility of real-time, wearable-based motion tracking for water sports in natural environments, with potential applications in kayaking, rowing, and aquatic training systems. Full article
(This article belongs to the Special Issue Sensors for Human Activity Recognition: 3rd Edition)
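The Dynamic Time Warping (DTW) comparison used here to tell left from right strokes can be sketched as below; the template-matching setup and the use of a single sensor axis are assumptions for illustration, not the authors' exact implementation.

import numpy as np

def dtw_distance(a, b):
    # Classic O(len(a)*len(b)) DTW on 1-D sequences (e.g., one gyroscope axis).
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify_stroke(window, templates):
    # templates: {"left": reference_sequence, "right": reference_sequence}
    return min(templates, key=lambda label: dtw_distance(window, templates[label]))

# Usage sketch: record one clean left and one clean right stroke as templates,
# then label each incoming windowed MPU6050 trace with classify_stroke().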

25 pages, 6468 KB  
Article
Thermal Imaging-Based Lightweight Gesture Recognition System for Mobile Robots
by Xinxin Wang, Xiaokai Ma, Hongfei Gao, Lijun Wang and Xiaona Song
Machines 2025, 13(8), 701; https://doi.org/10.3390/machines13080701 - 8 Aug 2025
Viewed by 439
Abstract
With the rapid advancement of computer vision and deep learning technologies, the accuracy and efficiency of real-time gesture recognition have significantly improved. This paper introduces a gesture-controlled robot system based on thermal imaging sensors. By replacing traditional physical button controls, this design significantly enhances the interactivity and operational convenience of human–machine interaction. First, a thermal imaging gesture dataset is collected using Python3.9. Compared to traditional RGB images, thermal imaging can better capture gesture details, especially in low-light conditions, thereby improving the robustness of gesture recognition. Subsequently, a neural network model is constructed and trained using Keras, and the model is then deployed to a microcontroller. This lightweight model design enables the gesture recognition system to operate on resource-constrained embedded devices, achieving real-time performance and high efficiency. In addition, using a standalone thermal sensor for gesture recognition avoids the complexity of multi-sensor fusion schemes, simplifies the system structure, reduces costs, and ensures real-time performance and stability. The final results demonstrate that the proposed design achieves a model test accuracy of 99.05%. In summary, through its gesture recognition capabilities—featuring high accuracy, low latency, non-contact interaction, and low-light adaptability—this design precisely meets the core demands for “convenient, safe, and natural interaction” in rehabilitation, smart homes, and elderly assistive devices, showcasing clear potential for practical scenario implementation. Full article
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
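A lightweight Keras model of the kind described — a small CNN over low-resolution thermal frames, compact enough to deploy on a microcontroller after conversion — could look like the sketch below; the 32x24 input size (typical of low-cost thermal arrays), the layer widths, and the gesture count are assumptions.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_thermal_cnn(n_gestures=5, input_shape=(24, 32, 1)):
    # Small CNN sized for a low-resolution thermal frame; few parameters so the
    # model stays practical for microcontroller deployment after TFLite conversion.
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(8, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(32, activation="relu"),
        layers.Dense(n_gestures, activation="softmax"),
    ])

model = build_thermal_cnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# After training, the model could be converted for an embedded target, e.g.:
# tflite = tf.lite.TFLiteConverter.from_keras_model(model).convert()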

32 pages, 6323 KB  
Article
Design, Implementation and Evaluation of an Immersive Teleoperation Interface for Human-Centered Autonomous Driving
by Irene Bouzón, Jimena Pascual, Cayetana Costales, Aser Crespo, Covadonga Cima and David Melendi
Sensors 2025, 25(15), 4679; https://doi.org/10.3390/s25154679 - 29 Jul 2025
Viewed by 793
Abstract
As autonomous driving technologies advance, the need for human-in-the-loop systems becomes increasingly critical to ensure safety, adaptability, and public confidence. This paper presents the design and evaluation of a context-aware immersive teleoperation interface that integrates real-time simulation, virtual reality, and multimodal feedback to support remote interventions in emergency scenarios. Built on a modular ROS2 architecture, the system allows seamless transition between simulated and physical platforms, enabling safe and reproducible testing. The experimental results show a high task success rate and user satisfaction, highlighting the importance of intuitive controls, gesture recognition accuracy, and low-latency feedback. Our findings contribute to the understanding of human-robot interaction (HRI) in immersive teleoperation contexts and provide insights into the role of multisensory feedback and control modalities in building trust and situational awareness for remote operators. Ultimately, this approach is intended to support the broader acceptability of autonomous driving technologies by enhancing human supervision, control, and confidence. Full article
(This article belongs to the Special Issue Human-Centred Smart Manufacturing - Industry 5.0)

26 pages, 27333 KB  
Article
Gest-SAR: A Gesture-Controlled Spatial AR System for Interactive Manual Assembly Guidance with Real-Time Operational Feedback
by Naimul Hasan and Bugra Alkan
Machines 2025, 13(8), 658; https://doi.org/10.3390/machines13080658 - 27 Jul 2025
Viewed by 800
Abstract
Manual assembly remains essential in modern manufacturing, yet the increasing complexity of customised production imposes significant cognitive burdens and error rates on workers. Existing Spatial Augmented Reality (SAR) systems often operate passively, lacking adaptive interaction, real-time feedback, and gesture-based control. In response, we present Gest-SAR, a SAR framework that integrates a custom MediaPipe-based gesture classification model to deliver adaptive light-guided pick-to-place assembly instructions and real-time error feedback within a closed-loop interaction. In a within-subject study, ten participants completed standardised Duplo-based assembly tasks using Gest-SAR, paper-based manuals, and tablet-based instructions; performance was evaluated via assembly cycle time, selection and placement error rates, cognitive workload assessed with NASA-TLX, and usability assessed through post-experiment questionnaires. Quantitative results show that Gest-SAR significantly reduces cycle times, averaging 3.95 min compared to paper (mean = 7.89 min, p < 0.01) and tablet (mean = 6.99 min, p < 0.01) instructions. It also achieved seven times lower average error rates while reducing perceived cognitive workload (p < 0.05 for mental demand) compared to the conventional modalities, and 90% of users preferred SAR over the paper and tablet modalities. These outcomes indicate that natural hand-gesture interaction coupled with real-time visual feedback enhances both the efficiency and accuracy of manual assembly. By embedding AI-driven gesture recognition and AR projection into a human-centric assistance system, Gest-SAR advances the collaborative interplay between humans and machines, aligning with Industry 5.0 objectives of resilient, sustainable, and intelligent manufacturing. Full article
(This article belongs to the Special Issue AI-Integrated Advanced Robotics Towards Industry 5.0)

30 pages, 2228 KB  
Article
Controlling Industrial Robotic Arms Using Gyroscopic and Gesture Inputs from a Smartwatch
by Carmen-Cristiana Cazacu, Mihail Hanga, Florina Chiscop, Dragos-Alexandru Cazacu and Costel Emil Cotet
Appl. Sci. 2025, 15(15), 8297; https://doi.org/10.3390/app15158297 - 25 Jul 2025
Viewed by 683
Abstract
This paper presents a novel interface that leverages a smartwatch for controlling industrial robotic arms. By harnessing the gyroscope and advanced gesture recognition capabilities of the smartwatch, our solution facilitates intuitive, real-time manipulation that caters to users ranging from novices to seasoned professionals. A dedicated application is implemented to aggregate sensor data via an open-source library, providing a streamlined alternative to conventional control systems. The experimental setup consists of a smartwatch equipped with a data collection application, a robotic arm, and a communication module programmed in Python. Our aim is to evaluate the practicality and effectiveness of smartwatch-based control in a real-world industrial context. The experimental results indicate that this approach significantly enhances accessibility while concurrently minimizing the complexity typically associated with automation systems. Full article
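The Python communication module described here — turning streamed smartwatch gyroscope readings into incremental motion commands for a robotic arm — can be sketched in a deliberately generic form; the message format, axis mapping, gains, and the send_command() transport below are hypothetical placeholders, since the abstract does not detail them.

import json
import socket

DEADBAND = 0.05      # rad/s below which readings are treated as sensor noise
GAIN = 0.5           # scales angular velocity into a joint jog increment

def gyro_to_jog(gyro, dt):
    # Map smartwatch angular velocity (rad/s, keyed by axis) to small joint-angle deltas.
    return {axis: GAIN * rate * dt if abs(rate) > DEADBAND else 0.0
            for axis, rate in gyro.items()}

def send_command(sock, jog):
    # Hypothetical transport: newline-delimited JSON to a robot-side listener.
    sock.sendall((json.dumps({"type": "jog", "deltas": jog}) + "\n").encode())

# Usage sketch (placeholders): stream (gyro, dt) samples from the watch app and
# forward each sample as a jog command.
# sock = socket.create_connection(("robot-controller.local", 9000))
# send_command(sock, gyro_to_jog({"x": 0.4, "y": -0.1, "z": 0.0}, dt=0.02))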

20 pages, 16450 KB  
Article
A Smart Textile-Based Tactile Sensing System for Multi-Channel Sign Language Recognition
by Keran Chen, Longnan Li, Qinyao Peng, Mengyuan He, Liyun Ma, Xinxin Li and Zhenyu Lu
Sensors 2025, 25(15), 4602; https://doi.org/10.3390/s25154602 - 25 Jul 2025
Viewed by 723
Abstract
Sign language recognition plays a crucial role in enabling communication for deaf individuals, yet current methods face limitations such as sensitivity to lighting conditions, occlusions, and lack of adaptability in diverse environments. This study presents a wearable multi-channel tactile sensing system based on smart textiles, designed to capture subtle wrist and finger motions for static sign language recognition. The system leverages triboelectric yarns sewn into gloves and sleeves to construct a skin-conformal tactile sensor array, capable of detecting biomechanical interactions through contact and deformation. Unlike vision-based approaches, the proposed sensor platform operates independently of environmental lighting or occlusions, offering reliable performance in diverse conditions. Experimental validation on American Sign Language letter gestures demonstrates that the proposed system achieves high signal clarity after customized filtering, leading to a classification accuracy of 94.66%. Experimental results show effective recognition of complex gestures, highlighting the system’s potential for broader applications in human-computer interaction. Full article
(This article belongs to the Special Issue Advanced Tactile Sensors: Design and Applications)
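The "customized filtering" step mentioned in this abstract, applied before classifying the multi-channel triboelectric signals, might resemble the SciPy sketch below; the band edges, sampling rate, and zero-phase filtering choice are assumptions for illustration.

import numpy as np
from scipy.signal import butter, filtfilt

def clean_channels(raw, fs=100.0, low=0.5, high=20.0, order=4):
    # raw: (samples, channels) triboelectric voltages from the glove/sleeve array.
    # Band-pass each channel to suppress baseline drift and high-frequency noise,
    # using zero-phase filtering so gesture onsets are not shifted in time.
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return np.column_stack([filtfilt(b, a, raw[:, ch]) for ch in range(raw.shape[1])])

# Usage sketch: features (e.g., per-channel peak amplitude and energy) would be
# computed from clean_channels(window) and passed to the static-gesture classifier.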