Search Results (2)

Search Parameters:
Keywords = Norwegian Sign Language recognition

15 pages, 910 KB  
Brief Report
Real-Time Norwegian Sign Language Recognition Using MediaPipe and LSTM
by Md. Zia Uddin, Costas Boletsis and Pål Rudshavn
Multimodal Technol. Interact. 2025, 9(3), 23; https://doi.org/10.3390/mti9030023 - 3 Mar 2025
Cited by 4 | Viewed by 4546
Abstract
The application of machine learning models for sign language recognition (SLR) is a well-researched topic. However, many existing SLR systems focus on widely used sign languages, e.g., American Sign Language, leaving underrepresented sign languages such as Norwegian Sign Language (NSL) relatively underexplored. This work presents a preliminary system for recognizing NSL gestures, focusing on the numbers 0 to 10. MediaPipe is used for feature extraction, and Long Short-Term Memory (LSTM) networks are used for temporal modeling. The system achieves a test accuracy of 95%, aligning with existing benchmarks and demonstrating its robustness to variations in signing styles, orientations, and speeds. While challenges such as data imbalance and misclassification of similar gestures (e.g., Signs 3 and 8) were observed, the results underscore the potential of our proposed approach. Future iterations of the system will prioritize expanding the dataset with additional gestures and environmental variations, as well as integrating additional modalities.
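The abstract describes a two-stage pipeline: MediaPipe extracts per-frame hand landmarks, and an LSTM network classifies the resulting landmark sequence into one of the eleven number signs. The sketch below illustrates that idea in Python with MediaPipe Hands and Keras; the sequence length, layer widths, and single-hand assumption are illustrative choices not specified in the abstract, so this is a reconstruction of the general approach rather than the authors' implementation.

```python
# Illustrative MediaPipe + LSTM pipeline; hyperparameters are assumptions.
import numpy as np
import mediapipe as mp
import tensorflow as tf
from tensorflow.keras import layers

SEQ_LEN = 30           # frames per gesture clip (assumption)
NUM_FEATURES = 21 * 3  # 21 hand landmarks, (x, y, z) each
NUM_CLASSES = 11       # NSL numbers 0-10

mp_hands = mp.solutions.hands

def extract_landmarks(rgb_frames):
    """Convert a list of RGB frames into a (len(frames), NUM_FEATURES) array."""
    features = []
    with mp_hands.Hands(static_image_mode=False, max_num_hands=1) as hands:
        for frame in rgb_frames:
            result = hands.process(frame)
            if result.multi_hand_landmarks:
                lm = result.multi_hand_landmarks[0].landmark
                features.append([c for p in lm for c in (p.x, p.y, p.z)])
            else:
                features.append([0.0] * NUM_FEATURES)  # no hand detected
    return np.asarray(features, dtype=np.float32)

# Stacked LSTMs over the landmark sequence, softmax over the 11 signs.
model = tf.keras.Sequential([
    layers.Input(shape=(SEQ_LEN, NUM_FEATURES)),
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(128),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

In use, a clip of SEQ_LEN frames would be passed through extract_landmarks() and then model.predict() to obtain class probabilities over the eleven signs.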

25 pages, 7137 KB  
Article
Comparative Analysis of Image Classification Models for Norwegian Sign Language Recognition
by Benjamin Svendsen and Seifedine Kadry
Technologies 2023, 11(4), 99; https://doi.org/10.3390/technologies11040099 - 15 Jul 2023
Cited by 6 | Viewed by 3648
Abstract
Communication is integral to every human’s life, allowing individuals to express themselves and understand each other. This process can be challenging for the hearing-impaired population, who rely on sign language, because relatively few hearing people are proficient in it. Image classification models can be used to create assistive systems that address this communication barrier. This paper conducts a comprehensive literature review and experiments to identify the state of the art in sign language recognition, and it finds a lack of research on Norwegian Sign Language (NSL). To address this gap, we created a dataset from scratch containing 24,300 images of 27 NSL alphabet signs and performed a comparative analysis of machine learning models, including the Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Convolutional Neural Network (CNN), on the dataset. The models were evaluated on accuracy and computational efficiency. Based on these metrics, our findings indicate that the SVM and CNN were the most effective models, achieving accuracies of 99.9% with high computational efficiency. Consequently, this research aims to contribute to the field of NSL recognition and serve as a foundation for future studies in this area.
(This article belongs to the Special Issue Image and Signal Processing)
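The model comparison the abstract reports (SVM vs. CNN on the 27-class NSL alphabet image dataset) can be sketched as follows. The image size, SVM kernel, CNN layout, and training settings are assumptions chosen for illustration; the paper's exact configurations are not reproduced here.

```python
# Illustrative SVM vs. CNN comparison on a 27-class image dataset.
# IMG_SIZE, the RBF kernel, and the CNN architecture are assumptions.
import tensorflow as tf
from tensorflow.keras import layers
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

IMG_SIZE = 64
NUM_CLASSES = 27  # NSL alphabet signs

def evaluate_svm(x_train, y_train, x_test, y_test):
    # SVM on flattened pixel vectors as a simple baseline representation.
    svm = SVC(kernel="rbf")
    svm.fit(x_train.reshape(len(x_train), -1), y_train)
    preds = svm.predict(x_test.reshape(len(x_test), -1))
    return accuracy_score(y_test, preds)

def build_cnn():
    # Small convolutional network: two conv/pool stages, then a dense head.
    return tf.keras.Sequential([
        layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

def evaluate_cnn(x_train, y_train, x_test, y_test, epochs=10):
    model = build_cnn()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=epochs, verbose=0)
    _, acc = model.evaluate(x_test, y_test, verbose=0)
    return acc
```

Both evaluators return test accuracy on the same split, which is the kind of head-to-head comparison (alongside training and inference cost) the abstract describes.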
