Systematic Review

A Survey of Advancements in Real-Time Sign Language Translators: Integration with IoT Technology

by Maria Papatsimouli, Panos Sarigiannidis and George F. Fragulis *
Department of Electrical and Computer Engineering, University of Western Macedonia, ZEP Campus, 50100 Kozani, Greece
* Author to whom correspondence should be addressed.
Technologies 2023, 11(4), 83; https://doi.org/10.3390/technologies11040083
Submission received: 5 April 2023 / Revised: 14 June 2023 / Accepted: 19 June 2023 / Published: 22 June 2023
(This article belongs to the Collection Technology Advances on IoT Learning and Teaching)

Abstract:
Real-time sign language translation systems are of paramount importance in enabling communication for deaf and hard-of-hearing individuals. This population relies on various communication methods, including sign languages and visual techniques, to interact with others. While assistive technologies, such as hearing aids and captioning, have improved their communication capabilities, a significant communication gap still exists between sign language users and non-users. In order to bridge this gap, numerous sign language translation systems have been developed, encompassing sign language recognition and gesture-based controls. Our research aimed to analyze the advancements in real-time sign language translators developed over the past five years and their integration with IoT technology. By closely examining these technologies, we aimed to attain a deeper comprehension of their practical applications and evolution in the domain of sign language translation. We analyzed the current literature, technical reports, and conference papers on real-time sign language translation systems. Our results offer insights into the current state of the art in real-time sign language translation systems and their integration with IoT technology. We also provide a deep understanding of the recent developments in sign language translation technology and the potential for their fusion with Internet of Things technology to improve communication and promote inclusivity for the deaf and hard-of-hearing population.

1. Introduction

In today’s rapidly evolving world, effective communication is essential for everyone, including those who are deaf or mute. Within this context, sign language serves as a unifying conduit for expression, providing a shared language for this community. Advances in sign language translation methods and software have profoundly enriched the quality of life of those who rely on sign language as their primary means of communication. Notably, the adoption of sign language translation technology has surged in recent years, particularly within the deaf community, where it has become the standard means of facilitating seamless interaction among members. Despite these advancements, however, the communication barrier between sign language users and non-users persists as an ongoing challenge [1].
In recent years, researchers have made significant strides in developing real-time sign language translators that can help bridge this communication gap [2]. These systems rely on computer vision techniques and machine learning algorithms to accurately recognize and translate sign language gestures into spoken language. Nevertheless, accurately translating hand gestures into language remains a complex task, and researchers continue to work toward improving the accuracy of these systems [3].
Furthermore, sign language translation programs are advancing alongside the Internet of Things (IoT), which allows for the seamless integration of multiple devices, sensors, and networks. In this paper, we present an overview of the developments in real-time sign language translation over the last five years and their integration with IoT technologies. We aim to shed light on the state of the art in sign language translation technology while also assessing how the Internet of Things is affecting these systems.
To accomplish this, we examine and analyze the current literature, technical reports, and conference papers relating to real-time sign language translation systems, as well as conduct experiments to evaluate and compare the performance of some of these systems.

According to Johnston [4], language is a complex means of communication that involves the use of established grammatical rules and conventional symbols within a specific community. It is a dynamic system that facilitates the exchange of a wide range of thoughts, feelings, and intentions. Language is a defining characteristic of human culture and society, distinguishing humans from other species. Due to its complexity and historical development, linguists, anthropologists, and others interested in human communication find language an intriguing subject of study.
Arbitrary symbols are used in various forms of communication, such as human language, animal vocalizations, and visual signals such as traffic lights. These symbols aid in conveying people’s thoughts, feelings, and intentions to one another and have context-dependent meanings. Our unique ability to communicate through language sets humans apart from other species. It is a dynamic aspect of human culture and society that is constantly evolving, with new languages emerging and older ones disappearing.
In communication, both spoken words and sign languages serve as vehicles for conveying meaning. However, these elements can have diverse relationships between their form and meaning. Sign languages, for example, have a more visible and direct link between their form and meaning, while spoken languages often exhibit a more arbitrary link between the two. By arbitrary, we mean that the form of a word or sign has no inherent relationship with its meaning. Rather, the meaning of a word or sign arises from societal and cultural conventions. The arbitrariness of spoken language makes it adaptable and flexible, allowing for the introduction of new words and signs to describe new concepts and ideas [5].

1.1. Review Information and Selection Method

The papers were categorized by topic and year of development. Specifically, we selected papers on real-time sign language translators created after 2019, with the aim of summarizing the work published in this period and reviewing the technologies used. Journal articles, technical reports, and conference papers provide a wealth of useful reference material: they are up-to-date, in-depth, specialized, relevant to industry, and offer insights from a variety of fields. Citations from journal papers indicate that a study’s foundations are solid and have undergone expert review, and they supply the background material essential for building on the work of others and advancing human understanding. Technical reports shed light on real-world problems and present workable solutions; they give researchers access to specialized information about cutting-edge technologies and practical applications that might not be found elsewhere, and drawing on them improves communication and collaboration between the academic and business communities. Conference papers, in turn, are invaluable for presenting novel studies and new developments: they frequently introduce new methods, preliminary results, and ongoing projects, and, because conferences bring together researchers from around the world, referencing them exposes scholars to a wide variety of sources and perspectives and promotes multidisciplinary understanding. By drawing on this range of sources, researchers can demonstrate familiarity with the latest results and trends, gain access to specialized knowledge, help bridge the gap between academia and industry, and make a greater impact on the scholarly dialogue.
People with hearing loss face many difficulties in communicating with the hearing community because the latter does not understand sign language. This problem is eased by recognition tools that convert sign language to text or voice, but many sign languages are not supported by these tools. In recent years, several research projects have attempted to translate text into sign language animation. These projects focused on specific languages, mainly English-based sign languages. Furthermore, they used limited vocabularies, and some other devices on the market failed, causing communication problems [6].

1.2. Outline of the Paper

The structure of this paper is as follows: Section 2 presents an overview of sign language, challenges in sign language translation, and real-time translation approaches. Section 3 highlights the Internet of Things (IoT) and wearable computing by providing an overview of these technologies and their advancements. Section 4 addresses augmented reality agents, sign language recognition and translation using AR, as well as IoT and assistive technologies for disabled individuals. Section 5 presents an overview of real-time sign language recognition and of existing sign language recognition systems using IoT. Section 6 discusses the non-unification of sign languages, while Section 7 focuses on the current applications of sign language translators. Finally, the closing sections analyze the performance of different systems and their implications for disabled individuals and sign language users.

2. Sign Language and Real-Time Translation

Sign language translators play a crucial role in bridging the gap between the deaf community and the hearing community: they can express themselves in sign language while also being able to hear spoken language. Deaf people can greatly benefit from their services, since the visual representation of spoken messages and the translation of sign language into spoken English allow them to completely understand and take part in meetings, conferences, and other public events.

With the help of sign language translators, teachers of deaf students can make their classrooms more accessible to students of all backgrounds. Translators sit in the same class as these students and interpret what their teachers and classmates are saying. Professional sign language translators ensure that deaf students have the same access to educational content as their hearing peers. Because of this, deaf students can get an education, have their voices heard, and take an active role in class discussions.

Translators are also valuable allies in the workplace. They break down barriers and create new opportunities for progress by improving communication between employees, employers, and customers. Business meetings, job interviews, conferences, and training sessions are just some of the professional contexts where sign language translators can make a difference, allowing deaf people to demonstrate their skills, make valuable contributions, and pursue rewarding careers.

Furthermore, sign language translators play a crucial role in guaranteeing that people who are deaf or hard of hearing have access to essential services. In the medical field, for example, these experts make sure that patients can ask questions, receive clear answers, and feel confident in the decisions they make regarding their care. A similar service is provided in legal settings, where sign language interpreters make it possible for people who are hard of hearing to access legal resources, learn about their rights, and communicate with attorneys.

Last but not least, sign language interpreters advocate for the rights of the deaf and hard-of-hearing community. By increasing the hearing community’s knowledge of sign language and deaf culture, they refute myths and promote acceptance. By spreading information about accessibility and the need for more widespread sign language services, they can help society as a whole become more accepting of those with different abilities.

2.1. Overview of Sign Language

Sign language serves as a mode of communication for individuals with hearing impairments, encompassing a diverse array of components, such as hand gestures, facial expressions, and body movements, to convey meaning. This language employs a distinct grammatical structure and vocabulary, qualifying it as a comprehensive and autonomous language system. Furthermore, sign language usage and interpretation vary across countries and regions, with diverse communities utilizing distinct signs to express similar meanings. Sign language constitutes a valuable instrument that enhances communication between individuals who are deaf or hard of hearing and the hearing community, and it is constantly evolving and gaining more widespread usage and recognition [7,8].
In their daily interactions, including those with family and friends, people who are deaf or hard of hearing primarily rely on sign languages. It has been established via research that sign languages are actual human languages with unique grammatical structures, vocabularies, and linguistic properties. This insight has boosted awareness of sign languages’ significance and contributed to their continued expansion and development. Studies by authors such as Snoddon [9] and Maalej [10] have demonstrated the linguistic complexity of sign languages and the critical role that they play in enabling communication and fostering community among deaf and hard-of-hearing individuals. However, there are a number of differences between spoken and sign languages [11].
Sign languages are visual methods of communication that use hand gestures, facial expressions, and body language to convey meaning. They not only convey linguistic information but also emotions, making them a highly diverse and expressive mode of communication. The vocabulary and syntax of sign languages can vary significantly between different countries and even regions within the same country, as evidenced by research conducted by scholars such as Stokoe [12] and Christopoulos [13]. In order to use sign language proficiently, several factors come into play, including the recognition of hand gestures, movements, facial expressions, and spatial variations, which are linguistically significant. As demonstrated by research conducted by Maalej [10] and Emmorey [5], interpreting sign language is a complex task that requires a deep understanding of the language and its unique features. These studies emphasize the critical role of recognizing sign languages as legitimate human languages and the significant value that they provide in facilitating communication for deaf and hard-of-hearing individuals.

2.2. Challenges in Sign Language Translation

Interactivity is an essential aspect of sign language software programs, as it allows users to actively participate in the learning process and maintain their focus. Sign language, being a visual language, has numerous applications that can effectively aid information sharing and accessibility for the deaf and hard-of-hearing community.
In recent years, many tools and applications have been developed to assist individuals in learning sign languages and translating signs into spoken language or text, significantly enhancing communication and accessibility for the deaf and hard-of-hearing community. Studies, such as those conducted in [14,15], have demonstrated the importance of sign language in enabling communication for those with hearing difficulties, and they emphasize the need for technology to facilitate access to information.

2.3. Sign Language Translation System Approaches

Sign language recognition systems are typically based on either sensor-based recognition or computer vision technology. Vision-based systems use a camera to capture hand motion as photographs or videos, which are then processed through image analysis; however, higher-resolution cameras require more memory space and processing power, and the background must be free from noise and disturbances for the system to function effectively. Sensor-based systems, on the other hand, require dedicated sensing hardware and advanced processing techniques, which can increase the cost and complexity of the system. Accordingly, the software for sign language recognition can be divided into two approaches: data-glove and visual-based systems [6].

2.3.1. Data Gloves

Sensor-based recognition methods, also known as data-glove approaches, gather crucial information, such as the finger bending degree, wrist position, and hand motion, through sensors incorporated into a glove. This information is then transmitted to a mobile device or application for processing. However, these systems have limitations, such as difficulties in recording fine hand and finger movements and an inability to capture facial expressions, lip movements, and eye movements. Moreover, the accuracy of the data collected by the data glove may be impacted by the external environment [16].
Sign language recognition using the data-glove approach involves an electronic glove equipped with sensors that gather and transmit data. The glove is worn by the signer and provides an input channel that transfers data to a mobile device or application. This approach is popular among sign language translation systems because gathering information, setting up the system, and translating the data are straightforward and require less computing power. However, data gloves can be costly, with some models exceeding USD 9000. Although less expensive alternatives exist, they often have fewer sensors and are more prone to interference, leading to a loss of critical information and decreased accuracy and precision in translation. Moreover, since hands come in varying sizes, a data glove may not fit comfortably for all users. The advantages and disadvantages of utilizing data gloves are summarized in Table 1 [17].
The implementation of smart gloves in sign language recognition technology necessitates several components, such as flex sensors, microcontrollers, and wireless transmitters, to function smoothly and wirelessly. However, the cost of these gloves can pose a challenge, with cheaper alternatives often having restricted sensors, leading to a loss of crucial information and reduced translation accuracy [18].
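As a rough illustration of how such a glove might feed data to a host application, the sketch below reads comma-separated flex-sensor values over a serial link using Python and the pyserial package. The port name, baud rate, and five-value CSV frame format are assumptions made for illustration, not the protocol of any particular commercial glove.

```python
# Minimal sketch: reading flex-sensor frames from a glove microcontroller
# over a serial link. The port name, baud rate, and CSV frame format
# ("thumb,index,middle,ring,pinky" per line) are illustrative assumptions.
import serial  # pyserial

PORT = "/dev/ttyUSB0"   # hypothetical device path
BAUD = 9600

def read_frames(port=PORT, baud=BAUD):
    """Yield one five-finger bend reading per serial line."""
    with serial.Serial(port, baud, timeout=1) as link:
        while True:
            line = link.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue  # timeout or empty line
            try:
                values = [int(v) for v in line.split(",")]
            except ValueError:
                continue  # skip malformed frames
            if len(values) == 5:
                yield values  # raw ADC counts, one per finger

if __name__ == "__main__":
    for frame in read_frames():
        print("bend values:", frame)
```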

2.3.2. Visual-Based Method

A visual-based approach to sign language recognition involves using a camera to capture images and videos used for translation. One of the primary benefits of this approach is its versatility, as it allows for the consideration of facial expressions, head motions, and lip reading in addition to hand movements.
There are two main types of visual-based approaches: hand-crafted shading gloves and light-based skin color recognition. In the hand-crafted shading glove approach, the signer wears a color-coded glove, and information is extracted from the images through color segmentation. These gloves are typically less expensive than smart gloves and often feature distinct shading on each finger and palm. However, the specific techniques employed may vary depending on the system being used [18,19].

2.3.3. Hand Gesture Recognition Process

The process of hand gesture recognition in sign language translation begins by capturing images or videos of the signer’s hand movements using input devices. Next, relevant features are extracted to describe the hand gestures. These hand gestures are then compared to stored data to identify a match. Once the region of interest is detected, the relevant features are retrieved and utilized to provide an output in the form of text, voice, or video to the user [20]. This process is depicted visually in Figure 1.
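The sketch below illustrates this pipeline under some simplifying assumptions: OpenCV captures frames, MediaPipe Hands extracts 21 hand landmarks as features, and a nearest-neighbor lookup against a hypothetical (here empty) template dictionary stands in for the matching stage. A production recognizer would use a trained classifier and a real gesture database.

```python
# Minimal sketch of the capture -> feature extraction -> matching -> output
# pipeline, using OpenCV for capture and MediaPipe Hands for features.
import cv2
import mediapipe as mp
import numpy as np

mp_hands = mp.solutions.hands

def landmarks_to_features(hand_landmarks):
    """Flatten the 21 (x, y) hand landmarks into a feature vector,
    translated so the wrist is at the origin (a crude normalization)."""
    pts = np.array([(lm.x, lm.y) for lm in hand_landmarks.landmark])
    return (pts - pts[0]).flatten()

# Hypothetical templates: feature vectors previously recorded per sign.
TEMPLATES = {}  # e.g. {"HELLO": np.array([...]), ...}

def match(features):
    """Nearest-neighbor match against the stored templates."""
    if not TEMPLATES:
        return None
    return min(TEMPLATES, key=lambda k: np.linalg.norm(TEMPLATES[k] - features))

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_hand_landmarks:
            feats = landmarks_to_features(result.multi_hand_landmarks[0])
            print("recognized:", match(feats))  # text output stage
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
cap.release()
```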

2.4. Real-Time Translation Approaches

Sign language translation systems employ various statistical methods to accomplish their translation tasks, such as example-based methods, finite-state transducers, and other techniques [21,22,23]. While numerous projects have been created to translate spoken language, most of them are limited to specific domains and small to medium-sized vocabularies. Another notable technology in this field is the use of 3D avatar animation, where virtual agents are incorporated into spoken language systems to provide a range of services [24].
However, sign language recognition systems face several limitations, mainly related to the collection of gesture data. In sign language, hand movements, full-body motions, and facial expressions are all essential components of effective translation. The challenge lies in acquiring this information in real-time, processing it accurately, and recognizing it in a way that is simple and portable enough to support the daily needs of people with hearing loss. To be useful, a system must be able to translate a large number of signs in real-time, which requires powerful hardware and, in turn, impedes portability. Additionally, translating dynamic gestures has proven to be a challenge, with only a few algorithms demonstrating successful translation in specific testing environments. These limitations impact the usability and practicality of sign language recognition systems. Furthermore, sign languages have unique grammar structures that must be taken into account in the translation process [16].

3. Internet of Things and Wearable Computing

3.1. Overview of IoT and Wearable Computing

The Internet of Things (IoT) is a vast network of physical objects that are connected to the internet and possess built-in technology to interact with the environment or the external world. These objects can range from household appliances and vehicles to utility systems, city sensors, industrial equipment, clothing, and any other object in the artificial human environment. IoT embodies an overarching concept that encompasses interconnected technologies, devices, objects, and services, forming an intricate and interdependent web of digital and physical entities [25,26,27]. The key characteristics of IoT are [28]:
  • Interconnectivity: IoT enables communication between various devices and systems, resulting in a network of connected objects.
  • Automation: IoT allows for the automatic exchange of data and control of devices without human intervention.
  • Intelligence: IoT devices are equipped with sensors and other technologies that make them capable of collecting, processing, and analyzing data, leading to improved decision making.
  • Real-Time Monitoring: IoT devices can continuously monitor their environment and provide real-time data and insights.
  • Scalability: IoT is capable of scaling to accommodate the growing number of connected devices, making it an attractive technology for a variety of applications.
The characteristics of IoT have led to its widespread adoption in various fields, including healthcare, manufacturing, transportation, and others. IoT is a network of physical objects with built-in technology that allows them to interact with their environment or the outside world through the internet, as described by technology analysts and visionaries [25]. This network of objects, which can range from household appliances to community sensors, industrial equipment, clothing, and more [26], integrates the physical world into computer systems, improving accuracy and reducing costs [29]. IoT includes sensors and actuators that are part of smart systems, such as smart homes, cities, and vehicles [30]. It is a transformative technology that will significantly impact various applications in the future, including smart homes, healthcare systems, smart manufacturing, environment monitoring, and smart logistics [31].
IoT is a convergence of edge computing, pervasive networking, centralized cloud computing, fog computing, and database technologies, enabling devices to manage and transfer data and offering advanced connectivity of devices, systems, and services. As of 2018, there were 22 billion devices connected to IoT worldwide, and the number is projected to reach 50 billion by 2030 [32,33].

3.2. Advancements in IoT and Wearable Technologies

3.2.1. IoT and Communication

The Internet of Things (IoT) is characterized by intelligent devices that can exchange data by communicating with each other. The applications of IoT are extensive, spanning industries such as transportation, healthcare, smart environments, personal and gaming robots, and city information. Various communication technologies and protocols are available for wireless connectivity, including Internet Protocol Version 6 (IPv6), LoWPAN, ZigBee, Bluetooth Low Energy (BLE), Z-Wave, and Near-Field Communication (NFC) for short-range communications, as well as SigFox and Cellular for Low-Power Wide-Area Network (LPWAN) communications [34]. A device that enables communication between individuals who are deaf or mute and those who do not know sign language is a speech-to-text or text-to-speech converter. Such a device can convert speech into text and display it on a wearable device for the deaf or mute person to read. Similarly, converters have been developed to render speech or text as sign language [33].
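As a small sketch of the speech-to-text direction, the snippet below uses the Python SpeechRecognition package to capture audio and transcribe it; the Google Web Speech backend shown here requires network access, and rendering the text on an actual wearable display is stubbed out with a print call.

```python
# Minimal speech-to-text sketch, assuming the SpeechRecognition package
# and a working microphone; the wearable display stage is stubbed.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    print("Listening...")
    audio = recognizer.listen(source)

try:
    text = recognizer.recognize_google(audio)  # cloud speech-to-text
    print("Display on wearable:", text)
except sr.UnknownValueError:
    print("Speech was unintelligible")
```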

3.2.2. Wearable Computing

The term “wearable computing” refers to the utilization of technology through wearable gadgets such as fitness trackers, smartwatches, and smart glasses. With these devices, users can conveniently access, store, and process data and information while they are on the move. Wearable technology offers a multitude of benefits, particularly in healthcare, where it enables more precise and effective monitoring than before. Thanks to customization, healthcare monitoring can be personalized to meet an individual’s specific needs. Apart from healthcare, wearable computing is also utilized in other industries, such as data tracking and entertainment. Over the years, the wearable technology market has grown rapidly and is expected to continue to gain importance in our daily lives [3].

3.2.3. Wearable Devices for Deaf and Hard-of-Hearing People

Deaf and hard-of-hearing individuals often struggle to detect common household sounds, such as doorbells, phones ringing, and crying children, which can make daily life challenging. To help them overcome these obstacles, wearable devices are being developed. In one such system proposed in [35], a transmitter is installed at the door, and a wearable receiver is used to notify hearing-impaired and elderly people of visitors. The transmitter is composed of a Raspberry Pi, RPi camera, switch or doorbell, GSM, and Bluetooth, while the receiver includes a Raspberry Pi, Bluetooth, LCD or screen to display the image and message, and a vibrator to alert the wearer. The visitor’s image and timestamp are stored on a server for future reference. This system minimizes visitor wait time and increases security for deaf and elderly people.
In [36], researchers investigated the detection of sounds in both indoor and outdoor environments. The study demonstrated that the proposed method was more effective than traditional methods, providing real-time results with a success rate of up to 94%. The wearable device, which is composed of a Raspberry Pi, microphone sensor, and vibration motor, is cost-effective and suitable for everyday use by hearing-impaired individuals.
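A minimal sketch of this sound-alert idea is shown below: sample the microphone, compute a short-window RMS level with NumPy, and pulse a vibration motor through a Raspberry Pi GPIO pin when the level crosses a threshold. The pin number, threshold, and use of the sounddevice package are illustrative assumptions rather than the exact design of [36].

```python
# Sketch of a sound-alert wearable: detect loud events via a short-window
# RMS level and pulse a vibration motor on a GPIO pin. Values are illustrative.
import time
import numpy as np
import sounddevice as sd
import RPi.GPIO as GPIO

MOTOR_PIN = 18              # hypothetical GPIO pin driving the motor
THRESHOLD = 0.1             # empirical RMS level; tune per environment
RATE, WINDOW = 16000, 0.25  # 16 kHz audio, 250 ms analysis windows

GPIO.setmode(GPIO.BCM)
GPIO.setup(MOTOR_PIN, GPIO.OUT)

try:
    while True:
        samples = sd.rec(int(RATE * WINDOW), samplerate=RATE,
                         channels=1, dtype="float32")
        sd.wait()
        rms = float(np.sqrt(np.mean(samples ** 2)))
        if rms > THRESHOLD:            # loud event (doorbell, cry, alarm)
            GPIO.output(MOTOR_PIN, GPIO.HIGH)
            time.sleep(0.5)            # short vibration pulse
            GPIO.output(MOTOR_PIN, GPIO.LOW)
finally:
    GPIO.cleanup()
```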

4. Augmented Reality Agents and Assistive Technologies

4.1. Overview of Augmented Reality Agents

Augmented Agents (AAs) are AI programs that use computer vision, NLP, and ML techniques to assist users with tasks requiring human decision making. These virtual entities act as interfaces between users and computer systems, offering valuable assistance in customer service, healthcare, entertainment, and travel and helping people with disabilities. Augmented Agents can respond to user queries, perform complex tasks, and provide advice and recommendations. They understand customer needs, offer medical guidance, create virtual entertainment experiences, provide real-time travel information, analyze data to improve decision making, and detect suspicious activities. By optimizing processes, increasing productivity, and offering personalized insights, Augmented Agents help users in a variety of fields [37], adapting their services to best match the user’s current location and context [38,39,40,41].

4.2. Sign Language Recognition and Translation with AR

Augmented Agents have the potential to revolutionize the way in which people with disabilities interact with and use technology. For people with hearing impairments, Augmented Agents can be used to help them better understand verbal conversations. They can be used to recognize speech and translate it into text, as well as provide visual cues. They can also be used for lipreading, facial recognition, and sign language support. This can be a huge help to deaf and hard-of-hearing people who are trying to communicate effectively.
Augmented Agents can also help people with physical disabilities by providing interactive assistance. This includes providing support for wheelchair users, helping them find accessible routes, and providing recommendations for activities. They can also provide assistance to those in wheelchairs by controlling the motion of the wheelchair. Other uses for Augmented Agents include providing assistance to people with learning disabilities. They can provide personalized educational materials and allow them to access information more easily. They can also be used to provide tailored supportive services, such as assisting in the completion of daily tasks. Overall, the implementation of Augmented Agents has the potential to provide invaluable assistance to people with disabilities and can enable them to live more independently and overcome the difficulties posed by their disabilities.
In [42], the advantages of developing intellectual information technology and augmented reality for individuals with disabilities are discussed. These technologies offer several benefits, including improved access to digital content and services, enhanced quality of life by enabling individuals with physical impairments to control devices, and increased safety by utilizing artificial intelligence (AI) algorithms to identify objects in an image and convert them into sound. Such technologies can be especially helpful for people who are visually impaired or blind, allowing them to access digital content and interact with their environment in ways that would otherwise be impossible.

4.2.1. Benefits and Limitations of Wearable Devices and Augmented Agents

The benefits of wearable computing are numerous. Wearable computers provide users with greater mobility and access to information and tasks while on the move, without requiring a separate device such as a laptop or tablet. Additionally, since these devices are wirelessly connected to other wearables and sensors, they can provide real-time data about their environment as well as feedback on the wearer’s actions, something that is not possible with regular desktop systems. Furthermore, by adapting its behavior to the user’s changing environment over time, such a device can assist people more intelligently than a traditional desktop, which remains stationary in one place. For instance, the benefits of Talking Hands [16] for people with Autism Spectrum Disorder (ASD) and speech impairments include the increased ability to communicate effectively through features such as voice output devices, text systems, and picture exchange communication systems. Thanks to its advanced technology, it offers improved accuracy when translating signs into words or symbols, and it is easy to use for individuals who have difficulty expressing themselves verbally due to their diagnosis.
The limitations of wearable computing devices include their size and weight, as they must remain lightweight to be comfortable for long-term use, but this can limit their processing power compared to traditional desktop systems. Wireless connections can also cause issues related to signal strength and interference that could affect performance over time. Developing intellectual information technology in augmented reality for persons with disabilities can be costly, have accessibility issues due to its complexity, and raise privacy concerns. Talking Hands has limitations, such as the device’s limited technology not accurately translating signs into words or symbols and the difficulty for users who have trouble expressing themselves verbally. The cost could also be a problem since the device is still new to the market. However, creating these apps can also have limitations, such as the need for specialized knowledge and finding ways around audio content, which can be challenging without proper resources. These limitations must be considered when developing and using these technologies, as they can affect their effectiveness and widespread adoption.

4.2.2. Examples of Augmented Agents for People with Disabilities

With “Talking Hands”, a sign language translation device, individuals with Autism Spectrum Disorder (ASD) have access to more effective communication methods [16]. Unlike other sign language translation devices, it offers features such as voice output devices, text systems, and picture exchange communication systems. This device’s accuracy in translating signs into words or symbols is high due to its advanced technology, and its usability is beneficial for individuals with speech impairments caused by an autism spectrum disorder diagnosis. While there may be some limitations and challenges associated with using Talking Hands, analysis suggests that it could have a positive impact on individuals with these conditions. Mobile augmented reality (MAR) applications have become popular for engaging museum and gallery visitors, but most of these apps have been tailored for normal-hearing visitors, leaving the needs of individuals with hearing impairments unaddressed [43]. To create an accessible and engaging MAR app for this population, museums must consider design elements such as aesthetics, curiosity, usability, interaction motivation, and more. By doing so, museums can reach a wider audience and provide new opportunities for learning through interactive experiences, regardless of any limitations posed by audio content or specialized knowledge requirements from developers. Several successful mobile augmented reality apps designed specifically for hearing-impaired users at museums or gallery sites around the world exist. For example, the National Gallery Singapore has developed an app called “Gallery Explorer”, which provides audio descriptions and visual aids to help visitors who have difficulty understanding spoken language. Researchers from University College London (UCL) have also created a project that uses 3D printing technology to create tactile models for museum exhibits. This will help improve engagement among all museum visitors, regardless of their impairment status, increase enjoyment, and enhance learning opportunities.
Intelligent information technology augmented reality for individuals with disabilities involves converting visual images into sound and vice versa. A unified knowledge base can be used to store image concepts as an operating mechanism, and existing methods such as assistive technologies, AI algorithms, and applications specifically designed for individuals with physical impairments have been utilized. The benefits of this technology include increased access to digital content and services and improved quality of life through AR-Books [42], which can be used to help deaf students learn and understand complex topics. This technology is more effective when users are involved in every step through a participatory design philosophy, ensuring that the created products better meet their needs.
AR-Books [44] can also be developed to meet the needs of deaf students by avoiding sign language markers and using 3D model markers instead of 2D images or text. Educators and developers can use the findings from studies on AR-Books to create educational materials that are accessible to everyone, regardless of their ability level. AR has been extended to entertainment, including computer gaming, and can be used for embodied animated agents and immersive content [3,37]. Advanced scenario-authoring techniques in AR can be demonstrated through example applications, and remediation theory can be applied to media influences when producing AR applications. This technology can create interactive experiences with virtual objects or characters that respond to user input in a real-world environment, creating opportunities for more engaging content, such as games, educational simulations, or training scenarios.

4.2.3. IoT, Machine Learning, and Sign Language Applications

The Internet of Things (IoT) represents a revolutionary force that is transforming the modes of communication between humans and machines. One of the critical domains where IoT is exerting its profound influence is sign language. By leveraging advanced technology, IoT is empowering individuals with hearing impairments to learn, communicate, and engage with the world beyond their auditory capabilities. Sign language constitutes a highly intricate linguistic system that necessitates sophisticated technical support to enable its effective usage.
With the aid of IoT, it has become feasible to establish a connection between sign language users and the hearing world. Wearable devices, such as watches, earpieces, and glasses, can serve as a conduit for this connection, communicating with a computer that can instantaneously translate the user’s sign language into spoken words. This enables sign language users to communicate with the external world without relying on a third-party interpreter. Such technological advancements are facilitating greater independence and fuller engagement with the world for sign language users. For instance, IoT allows sign language users to participate in online educational programs and even access online job markets. Moreover, there is evidence to suggest that such technologies are enabling sign language users to achieve educational and professional milestones more efficiently and expeditiously. The use of IoT in sign language is a breakthrough and is having an enormously positive impact on the lives of hearing-impaired people [45]. With the help of this technology, these individuals can interact with the world and participate in various activities and opportunities that they would otherwise be unable to access.
Machine learning (ML) represents a cornerstone for uncovering insights from IoT data. It has gained widespread adoption in a plethora of domains, most notably in healthcare, where it automates medical record keeping, disease prognosis, and patient monitoring. Multiple ML algorithms have been employed for classification and forecasting in healthcare, including Support Vector Machines (SVM), Decision Trees, Naive Bayes Classifier, Random Forest Algorithm, and K-Nearest Neighbors (KNN). Each algorithm harbors unique attributes and constraints, and its efficacy on a given dataset may vary, resulting in different predictive outcomes. This accentuates the necessity of grasping the nuances of these algorithms prior to deploying them in actual healthcare scenarios and of employing ML and IoT technologies for healthcare system trend predictions with prudence. The main challenge lies in discovering the most suitable algorithm for a specific dataset, which necessitates a profound understanding of each algorithm and its performance characteristics.
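To make the comparison concrete, the sketch below cross-validates the algorithms named above on a synthetic stand-in dataset using scikit-learn; real systems would substitute extracted sensor or landmark features for the random data.

```python
# Illustrative comparison of the classifiers named above on synthetic
# stand-in data, using scikit-learn's cross-validation.
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, n_classes=4,
                           n_informative=10, random_state=0)

models = {
    "SVM": SVC(),
    "Decision Tree": DecisionTreeClassifier(),
    "Naive Bayes": GaussianNB(),
    "Random Forest": RandomForestClassifier(),
    "KNN": KNeighborsClassifier(),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:14s} accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```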

4.2.4. Augmented Reality Agents, IoT, and Disabled People

Effective communication is crucial for individuals with neurodevelopmental disorders, as it can impact their self-determination and quality-of-life outcomes [46,47]. Hence, communication skill enhancement is frequently a primary focus in interventions for people with such disorders, with early and intensive intervention being emphasized. However, individuals and families living in rural or remote areas often have limited access to interventions [48,49,50,51]. To overcome this challenge, researchers have explored the feasibility of telehealth service delivery models. They found that some interventions designed for face-to-face delivery can be modified and delivered using telehealth services [52]. In [53], the authors present a comprehensive review of several studies on virtual reality (VR) and augmented reality (AR) communication interventions for individuals with neurodevelopmental disorders associated with communication disabilities. The review concluded that VR/AR interventions can be useful in certain situations. Furthermore, researchers are currently exploring the potential use of additional technologies, such as VR and AR, to deliver communication interventions [54].

5. Related Works in Sign Language Recognition Using Real-Time Approach

Several studies have explored sign language recognition (see Table 2). In a recent paper [55], a novel architecture was proposed that combines a CNN-based image network with a pose estimation network to recognize Bangla symbols. The image network extracts visual features, while the pose estimation network computes the relative positions of key hand points for additional features. The proposed method achieved an accuracy of 91.51%.
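A hedged sketch of this two-branch idea is given below: a small convolutional branch processes the hand image while a multilayer perceptron embeds 2D key-point coordinates, and the two feature vectors are concatenated before classification. The layer sizes, 21-key-point input, and class count are illustrative choices, not the exact architecture of [55].

```python
# Illustrative two-branch network: image CNN features fused with
# key-point (pose) features before classification. PyTorch sketch.
import torch
import torch.nn as nn

class TwoBranchSignNet(nn.Module):
    def __init__(self, num_classes=49, num_keypoints=21):
        super().__init__()
        self.cnn = nn.Sequential(            # image branch
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> 32 features
        )
        self.pose = nn.Sequential(           # pose branch (x, y per key point)
            nn.Linear(num_keypoints * 2, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        self.head = nn.Linear(32 + 32, num_classes)

    def forward(self, image, keypoints):
        fused = torch.cat([self.cnn(image), self.pose(keypoints)], dim=1)
        return self.head(fused)

# Smoke test with dummy inputs: a batch of 4 RGB crops and key-point vectors.
model = TwoBranchSignNet()
logits = model(torch.randn(4, 3, 64, 64), torch.randn(4, 42))
print(logits.shape)  # torch.Size([4, 49])
```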
In another study, an application was developed for students to learn American Sign Language through game learning. The application uses the K-Nearest-Neighbor method for classification and achieved an accuracy of 99.44% [56].
The YSSA, as described in [57], is a wearable device that translates American Sign Language into spoken English in real-time using machine learning algorithms through a smartphone app. The device is lightweight, highly sensitive, highly stretchable, and low-cost, and it works by converting small tensile forces or pressures into electricity through contact electrification. While the machine learning algorithms incorporated into the device achieve high recognition rates, the system relies on a mobile terminal for the audio output.
Another study [58] presents a glove that translates the American Sign Language (ASL) alphabet into text using a computer or smartphone. The glove uses strain sensors, a wearable electronic module, and a Python script to transmit information to the device. The glove can translate all 26 letters of the ASL alphabet and costs less than USD 100.

Related Works in Sign Language Recognition Using IoT

Pooja Dubey and colleagues proposed a system to aid the communication of deaf and mute people [7]. The system requires the user to wear a glove, which is used to recognize their hand gestures. The recognized gestures are then sent over an IoT connection to a database. The IoT-based sign language conversion system has two parts: one for teaching sign language and the other for expressing the message. The sensors are calibrated to accommodate different hand sizes and shapes: the maximum and minimum values of the sensors are recorded, and a specific hand gesture is used to enter the label for the real-time hand position.
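The min/max calibration step can be sketched as follows: record each sensor’s extremes while the user flexes each finger, then normalize live readings to [0, 1] so that one gesture model can serve different hand sizes. The sensor count and data values below are illustrative.

```python
# Minimal sketch of per-sensor min/max calibration and normalization.
import numpy as np

class GloveCalibrator:
    def __init__(self, n_sensors=5):
        self.min = np.full(n_sensors, np.inf)
        self.max = np.full(n_sensors, -np.inf)

    def observe(self, frame):
        """Update per-sensor extremes while the user flexes each finger."""
        self.min = np.minimum(self.min, frame)
        self.max = np.maximum(self.max, frame)

    def normalize(self, frame):
        """Map a raw frame to [0, 1] using the recorded extremes."""
        span = np.maximum(self.max - self.min, 1e-9)  # avoid divide-by-zero
        return np.clip((frame - self.min) / span, 0.0, 1.0)

cal = GloveCalibrator()
for raw in ([200, 180, 190, 185, 210], [620, 640, 600, 610, 650]):
    cal.observe(np.array(raw, dtype=float))
print(cal.normalize(np.array([410, 400, 395, 398, 430], dtype=float)))
```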
The sign language translator using LabVIEW enabled with IoT [61] uses flex sensors and an NI myRIO controller. The data are transmitted from the sensors to the computer using NI LabVIEW software and a Wi-Fi transceiver. Single-server multi-client access over LAN is enabled for IoT. The system acquires data from the fingers when a sign is shown, and the sensors connected to the NI myRIO controller send the data wirelessly to the end computer via Wi-Fi and the TCP protocol. The system analyzes the acquired data and converts them to text and voice output using a text-to-speech converter. The current system produces the output without any delays.
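For comparison, the snippet below sketches the receiving end of such a pipeline in Python rather than LabVIEW: a TCP server accepts recognized signs as newline-delimited text and speaks them with the pyttsx3 offline text-to-speech engine. The port number and line protocol are assumptions for illustration.

```python
# Sketch of a TCP receiver that turns incoming recognized signs into
# text and voice output; port and protocol are illustrative.
import socket
import pyttsx3

HOST, PORT = "0.0.0.0", 5005   # hypothetical LAN endpoint
engine = pyttsx3.init()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.bind((HOST, PORT))
    server.listen(1)
    conn, addr = server.accept()
    print("client connected:", addr)
    with conn, conn.makefile("r", encoding="utf-8") as stream:
        for line in stream:            # one recognized sign per line
            word = line.strip()
            if word:
                print("text output:", word)
                engine.say(word)       # voice output
                engine.runAndWait()
```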
In [17], the system includes the hand gesture model, the OCR model, and the Live Tracking module. The flex sensors record the gestures in the hand gesture model, which are processed by an Arduino. The result is sent to a text-to-speech application to generate the output. In the OCR model, the camera captures the image, which is processed by the Raspberry Pi using existing libraries. Finally, in the Live Tracking module, the system uses GSM for data and voice transmission at varying frequency bands.
In [62], a smart wearable device for a sign interpretation system for both hands using a multiclass classifier was designed. The system used cotton gloves embedded with sensors and an Android-based mobile application for gesture identification. The gloves were comfortable for everyone to use, and the mobile application contained a text-to-speech module for audible communication output.
The Internet of Things (IoT) is an advanced technology that offers enormous promise to improve our lives in many different ways. The benefits of IoT to human life are the focus of ongoing research and development efforts by academic institutions and scientific communities [63]. Artificial intelligence technologies have the potential to greatly improve the lives of people who are deaf or hard of hearing by removing barriers to communication. In light of recent developments in sensing technologies and AI algorithms, a wide range of applications have been created to meet the demands of deaf and hearing-impaired individuals [64,65,66].
The fast expansion of the Internet of Things (IoT) has changed how people interact with technology and created a world that is connected through a variety of intelligent systems and devices. In this context, the importance of sign language translators specifically designed for IoT devices becomes evident. While mobile app translators have their advantages, integrating sign language translators with IoT presents unique opportunities to enhance accessibility, communication, and inclusivity for deaf and hard-of-hearing individuals.
At this point, the significance of sign language translators with IoT is presented, highlighting their seamless integration, hands-free interaction, contextual interpretation, real-time accessibility, and scalability within the IoT ecosystem. First, Internet of Things (IoT) gadgets that incorporate sign language translators make communication incredibly simple and easy. Hard-of-hearing people can interact with others without the need for specific mobile applications by having these translators embedded into their electronic devices, wearable equipment, or home automation systems. With this connection, the user experience is unified and simplified, and the effort of moving between applications is gone, making the entire IoT ecosystem more accessible. Additionally, IoT gadgets are made for easy, hands-free use. There is no longer a requirement for manual input or reliance on text-based communication because of IoT-optimized sign language translators. This hands-free equipment is in sync with the way in which deaf people like to communicate, allowing them to easily exchange information and take control of their surroundings using sign language.
Another major benefit of IoT devices is that they can provide data and services in real-time. By incorporating real-time sign language translators into IoT devices, users will have immediate access to translation services without downloading additional apps. Access to instant translation becomes possible for the deaf and hard-of-hearing community, removing barriers to communication and allowing for more open interactions in the world of the Internet of Things. By connecting sign language interpreters to Internet of Things gadgets, interpreting services can be made available in more places, such as individual homes, businesses, public areas, public transportation, and healthcare centers. As a result of IoT’s widespread presence and ability to expand, people who are deaf or hard of hearing will have greater access to interpretation services in more situations (see Table 3).

6. Non-Unification of Sign Languages

The non-unification of sign languages across different countries creates a communication barrier between people who use different sign languages. This can lead to misunderstandings and difficulties in communication, especially for deaf and speech-impaired individuals who rely on sign language as their primary mode of communication. Translators from spoken language into sign language and vice versa are therefore needed. The lack of sign language uniformity makes it difficult for deaf people to communicate with each other, as there are 300 different sign languages in use worldwide. To overcome this, a translator for sign language can be created to facilitate communication [68].
The aim is to build a gesture recognition system that captures gestures or sign language and converts them into words. In order to construct a distinct, universal sign language, researchers working on various sign languages must collaborate to determine the most frequently used signs.
The study also focuses on identifying common signs used worldwide, with the goal of establishing a universal sign language. The development of a real-time gesture recognition system is suggested, utilizing a camera to capture signing, eliminating image noise, and employing the Gaussian average method for background subtraction. Ultimately, the aim of developing such systems is to enhance communication abilities for hearing-impaired people through the introduction of a unified sign language or a sign language translator, offering a potentially life-changing opportunity for improved communication with others.
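The Gaussian (running) average background-subtraction step mentioned above can be sketched with OpenCV as follows: maintain a per-pixel average of the scene with cv2.accumulateWeighted and flag pixels that deviate from it as the moving hand. The learning rate and threshold values are illustrative.

```python
# Sketch of Gaussian (running) average background subtraction for
# isolating the signing hand; parameter values are illustrative.
import cv2

ALPHA, THRESH = 0.02, 25   # background learning rate, difference threshold

cap = cv2.VideoCapture(0)
background = None

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    if background is None:
        background = gray.astype("float")            # initialize the model
        continue
    cv2.accumulateWeighted(gray, background, ALPHA)  # update running average
    diff = cv2.absdiff(gray, cv2.convertScaleAbs(background))
    _, mask = cv2.threshold(diff, THRESH, 255, cv2.THRESH_BINARY)
    cv2.imshow("foreground (signing hand)", mask)
    if cv2.waitKey(1) & 0xFF == 27:                  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```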
The non-unification of sign languages is a result of various limitations that hinder their standardization and mutual intelligibility across different countries and regions. These limitations arise from geographical and cultural factors, linguistic variation, lack of standardization, limited awareness and resources, and sociopolitical factors.
Sign languages naturally develop within specific communities and are influenced by local culture and language, leading to the existence of distinct sign languages in different regions. This independence and limited interaction between communities, especially in geographically isolated areas, contribute to the lack of unification [69,70,71].
Similar to spoken languages, sign languages exhibit regional and local variations, making communication between sign language users from different regions challenging. Different sign languages or dialects are used by various communities within a country or region, further complicating standardization efforts.
Unlike spoken languages, sign languages have not been standardized to the same extent, primarily due to the absence of a centralized authority, diverse linguistic features across sign languages, and difficulties in capturing sign language structures in written form [68,72,73].
Sign languages have historically received less recognition and support compared to spoken languages, impeding efforts to achieve global unification. The limited availability of educational materials, professional training, and accessibility services further hinders the standardization and widespread adoption of a unified sign language.
Sociopolitical factors also play a role, as the recognition of sign languages as official languages and the establishment of sign language legislation vary across countries. Some regions lack official status or legal protection for sign languages, hindering their development and unification efforts.
International organizations such as the World Federation of the Deaf and the World Association of Sign Language Interpreters encourage cooperation, resource sharing, and the creation of standardized sign language materials to address these limitations. Video communication platforms have enabled interaction between sign language users from different geographical locations [74,75,76].

7. Applications of Sign Language Translators

Automatic Sign Language Recognition (SLR) has become an important research area, drawing the interest of many researchers. Advances in SLR technologies have the potential to create HCI systems that help people with both spoken and sign language skills communicate more easily [77]. Translators of sign languages are crucial for online communication. During video calls and online meetings, they assist in translating sign language so that deaf persons can actively participate in dialogues. Additionally, they offer services for internet conversations and text message transcription and captioning, enabling deaf people to have effective written communication. The compilation of sign language dictionaries with regularly used terminology and phrases is another significant advancement. These dictionaries support precise and consistent communication, which is beneficial for interpreters and those learning sign language. Sign language translators provide more inclusive and efficient communication on digital platforms by enhancing accessibility in online interactions and offering thorough linguistic resources [61,78,79,80].

Sign language interpretation has also become increasingly prominent in the entertainment industry. In movies and TV shows, sign language translators ensure that the dialogue and storyline are accessible to deaf viewers. By translating the spoken language into sign language, they enable deaf people to fully engage with visual narratives. Similarly, sign language interpretation is employed in music concerts and festivals, allowing deaf audience members to experience live performances through sign language translation of the lyrics and musical elements [81,82]. Sign language interpreters are also essential during sporting events. Their presence fosters a sense of inclusion and equal engagement, enabling deaf people to fully grasp the essence of crowd discussions, announcements, and cheers [83,84,85].
Sign language interpretation is also essential in the travel and tourism industry, ensuring that deaf individuals have equal access to cultural experiences. It enables the real-time interpretation of information, historical context, and instructions, allowing them to understand and engage with their surroundings. Travel guides designed specifically for deaf individuals are equally essential tools, enabling deaf travelers to navigate and explore destinations with confidence and independence [86]. Sign language interpreters also facilitate effective communication between deaf people and mental health professionals, allowing them to express their thoughts and emotions without barriers. Sign language interpretation plays a significant role in religious services and events, providing a sense of belonging and spiritual connection, and serves as a facilitator in museums and art galleries, enhancing the understanding and appreciation of exhibits and allowing deaf individuals to immerse themselves fully in the experience.
At public events and community gatherings, interpreters ensure that deaf individuals can engage in performances, speeches, and interactive activities. By incorporating sign language interpretation into these events, we promote inclusivity, enable equal access to information and entertainment, and strengthen community bonds.
Given the growth of social media and online content, sign language interpreters are crucial to ensuring accessibility on digital platforms. They provide sign language interpretation for videos, live streams, and online discussions, enabling deaf people to access and participate in material on social media platforms. Similarly, sign language interpretation at e-sports and gaming events makes the excitement and camaraderie of competitive gaming fully accessible to deaf players and spectators.
Sign language interpretation has expanded its applications beyond human communication. There is growing interest in using sign language to communicate with pets and animals. Sign language interpreters work alongside animal trainers and handlers to establish a common language between humans and animals, facilitating commands, cues, and understanding [87].
Furthermore, the development of sign language recognition technology for smart homes and devices is revolutionizing accessibility for deaf people. Using computer vision and machine learning techniques, these tools translate sign language movements into commands or actions for various smart devices, making it possible for deaf people to control their surroundings and easily access information within their homes. By merging sign language with technology in this way, we create a more inclusive and accessible environment that enhances the lives of deaf individuals and promotes independence in their daily activities [15,59,88,89].
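As an illustration of how a recognized sign might drive a smart-home action, the sketch below forwards classifier output to devices over MQTT, a lightweight publish/subscribe protocol widely used in IoT deployments. The broker address, topic names, and sign-to-command mapping are hypothetical assumptions, not details taken from the surveyed systems.

```python
# Illustrative sketch: relay recognized signs to smart-home devices via MQTT.
# Broker address, topics, and the sign-to-command mapping are hypothetical.
import json
import paho.mqtt.client as mqtt

# Hypothetical mapping from recognized signs to device topics and payloads.
SIGN_TO_COMMAND = {
    "light": ("home/livingroom/light", {"state": "toggle"}),
    "warm": ("home/thermostat", {"delta": +1}),
    "cold": ("home/thermostat", {"delta": -1}),
}

client = mqtt.Client()  # paho-mqtt 1.x constructor style
client.connect("broker.local", 1883)  # hypothetical local broker
client.loop_start()

def handle_sign(sign: str) -> None:
    """Publish the smart-home command associated with a recognized sign."""
    if sign in SIGN_TO_COMMAND:
        topic, payload = SIGN_TO_COMMAND[sign]
        client.publish(topic, json.dumps(payload), qos=1)

# A recognizer (such as the sketch above) would call handle_sign() per sign.
handle_sign("light")
```

Routing commands through a broker decouples the recognizer from individual appliances, so new devices can subscribe to the relevant topics without changing the recognition code.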

8. Conclusions

According to the World Health Organization (WHO), hearing loss affects more than 5% of the world’s population, or 466 million individuals: 432 million adults and 34 million children. The study of advancements in real-time sign language translators and their integration with IoT technology highlights the potential to improve communication and promote inclusion for the deaf and hard-of-hearing community. This survey emphasizes the importance of sign language and other assistive technologies in improving speech comprehension and enabling participation in discussions for those who are hard of hearing or deaf. The latest developments in sign language translation technology, such as gesture-based controls and sign language recognition, play an essential role in bridging communication gaps across diverse cultures and enhancing accessibility for disabled individuals. In addition, this study highlights the practical uses and progress made in the field of sign language translation, particularly the integration of IoT technology for real-time and remote translation of sign language. Incorporating IoT technology into sign language translation systems can significantly improve translation accuracy and speed, making these systems easier to use and more accessible. Furthermore, this study identifies the advantages and disadvantages of wearable devices, augmented reality agents, and other assistive technologies that can complement real-time sign language translators and enhance accessibility for disabled individuals. In conclusion, this study offers insights into the rapid advancements in sign language translation technology and the prospects for combining it with Internet of Things technology to improve communication and promote inclusivity for the deaf and hard-of-hearing community. Continued research and development in this domain may yield more sophisticated and effective sign language translation systems, benefiting not only the deaf and hard-of-hearing population but also the broader public.

9. Future Work

Future work will provide a more comprehensive evaluation of sign language translators built on advanced technology, including a thorough analysis of machine-learning-based translators, which are more user-friendly. A strategy will also be developed for incorporating wearable devices and augmented reality. Moreover, sign language translators can be integrated with various online content management systems, such as [90,91], allowing deaf individuals to take exams and acquire new skills.

Author Contributions

Conceptualization, M.P. and G.F.F.; methodology, M.P.; software, M.P.; validation, M.P., P.S. and G.F.F.; formal analysis, M.P.; writing—original draft preparation, M.P.; writing—review and editing, P.S. and G.F.F.; supervision, G.F.F. All authors have read and agreed to the published version of the manuscript.

Funding

This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No. 957406.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created.

Acknowledgments

This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No. 957406.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sturman, D.; Zeltzer, D. A Survey of Glove-Based Input. IEEE Comput. Graph. Appl. 1994, 14, 30–39. [Google Scholar] [CrossRef]
  2. Wu, Y.; Huang, T. Vision-Based Gesture Recognition: A Review. In Gesture-Based Communication in Human-Computer Interaction; Springer: Berlin/Heidelberg, Germany, 1999; pp. 103–115. [Google Scholar]
  3. Starner, T.; Mann, S.; Rhodes, B.; Levine, J.; Healey, J.; Kirsch, D.; Picard, R.W.; Pentland, A. Augmented Reality through Wearable Computing. Presence Teleoperators Virtual Environ. 1997, 6, 386–398. [Google Scholar] [CrossRef]
  4. Johnston, T.; Schembri, A. Australian Sign Language (Auslan): An Introduction to Sign Language Linguistics; Cambridge University Press: Cambridge, UK, 2007. [Google Scholar] [CrossRef]
  5. Emmorey, K. Language, Cognition, and the Brain: Insights from Sign Language Research; Lawrence Erlbaum Associates Publishers: Mahwah, NJ, USA, 2002. [Google Scholar]
  6. Wijayawickrama, R.; Premachandra, R.; Punsara, T.; Chanaka, A. IoT Based Sign Language Recognition System. Glob. J. Comput. Sci. Technol. 2020, 20, 39–44. [Google Scholar]
  7. Dubey, P.; Shrivastav, M.P. IoT Based Sign Language Conversion. Int. J. Res. Eng. Sci. (IJRES) 2021, 9, 84–89. [Google Scholar]
  8. Sutton-Spence, R.; Woll, B. Linguistics and Sign Linguistics. In The Linguistics of British Sign Language: An Introduction; Cambridge University Press: Cambridge, UK, 1999; pp. 1–21. [Google Scholar] [CrossRef]
  9. Snoddon, K. Review of Wendy Sandler & Diane Lillo-Martin, Sign Language and Linguistic Universals; Cambridge University Press: Cambridge, UK, 2006. Lang. Soc. 2008, 37, 628. [Google Scholar] [CrossRef]
  10. Maalej, Z. Book Review: Language, Cognition, and the Brain: Insights from Sign Language Research. Linguist List 2002. Available online: http://www.linguistlist.org/issues/13/13-1631.html (accessed on 5 March 2023).
  11. Tervoort, B.T. Sign language: The study of deaf people and their language: J.G. Kyle and B. Woll, Cambridge, Cambridge University Press, 1985. ISBN 521 26075. ix+318 pp. Lingua 1986, 70, 205–212. [Google Scholar] [CrossRef]
  12. Stokoe, W.C., Jr. Sign language structure: An outline of the visual communication systems of the American deaf. J. Deaf. Stud. Deaf. Educ. 2005, 10, 3–37. [Google Scholar] [CrossRef] [Green Version]
  13. Christopoulos, C.; Bonvillian, J. Sign Language. J. Commun. Disord. 1985, 18, 1–20. [Google Scholar]
  14. Papatsimouli, M.; Lazaridis, L.; Kollias, K.F.; Skordas, I.; Fragulis, G.F. Speak with Signs: Active Learning Platform for Greek Sign Language, English Sign Language, and Their Translation. In SHS Web of Conferences; EDP Sciences: Les Ulis, France, 2020; Volume 102, p. 01008. [Google Scholar]
  15. Papatsimouli, M.; Kollias, K.F.; Lazaridis, L.; Maraslidis, G.; Michailidis, H.; Sarigiannidis, P.; Fragulis, G.F. Real Time Sign Language Translation Systems: A review study. In Proceedings of the 2022 11th International Conference on Modern Circuits and Systems Technologies (MOCAST), Bremen, Germany, 8–10 June 2022; pp. 1–4. [Google Scholar]
  16. Pezzuoli, F.; Tafaro, D.; Pane, M.; Corona, D.; Corradini, M.L. Development of a New Sign Language Translation System for People with Autism Spectrum Disorder. Adv. Neurodev. Disord. 2020, 4, 439–446. [Google Scholar] [CrossRef]
  17. Shubankar, B.; Chowdhary, M.; Priyaadharshini, M. IoT Device for Disabled People. Procedia Comput. Sci. 2019, 165, 189–195. [Google Scholar] [CrossRef]
  18. Shukor, A.Z.; Miskon, M.F.; Jamaluddin, M.H.; bin Ali, F.; Asyraf, M.F.; bin Bahar, M.B. A new data glove approach for Malaysian sign language detection. Procedia Comput. Sci. 2015, 76, 60–67. [Google Scholar] [CrossRef] [Green Version]
  19. Akmeliawati, R.; Ooi, M.P.L.; Kuang, Y.C. Real-Time Malaysian Sign Language Translation Using Colour Segmentation and Neural Network. In Proceedings of the 2007 IEEE Instrumentation & Measurement Technology Conference IMTC 2007, Warsaw, Poland, 1–3 May 2007; pp. 1–6. [Google Scholar] [CrossRef]
  20. Zhao, S.; Chen, Z.h.; Kim, J.T.; Liang, J.; Zhang, J.; Yuan, Y.B. Real-Time Hand Gesture Recognition Using Finger Segmentation. Sci. World J. 2014, 2014, 267872. [Google Scholar] [CrossRef] [Green Version]
  21. Varea, I.G.; Och, F.J.; Ney, H.; Casacuberta, F. Efficient Integration of Maximum Entropy Lexicon Models within the Training of Statistical Alignment Models. In Machine Translation: From Research to Real Users: 5th Conference of the Association for Machine Translation in the Americas, AMTA 2002, Tiburon, CA, USA, 8–12 October 2002; Richardson, S.D., Ed.; Springer: Berlin/Heidelberg, Germany, 2002; pp. 54–63. [Google Scholar]
  22. Sumita, E.; Akiba, Y.; Doi, T.; Finch, A.; Imamura, K.; Paul, M.; Watanabe, T. A Corpus-Centered Approach to Spoken Language Translation. In Proceedings of the 10th Conference of the European Chapter of the Association for Computational Linguistics, Budapest, Hungary, 12–17 April 2003; pp. 171–174. [Google Scholar]
  23. Casacuberta, F.; Vidal, E. Machine Translation with Inferred Stochastic Finite-State Transducers. Comput. Linguist. 2004, 30, 205–225. [Google Scholar] [CrossRef]
  24. Aarssen, A.; Genis, R.; van der Veeken, E. (Eds.) A Bibliography of Sign Languages, 2008–2017: With an Introduction by Myriam Vermeerbergen and Anna-Lena Nilsson; Brill: Aylesbury, UK, 2018. [Google Scholar] [CrossRef]
  25. Javaid, M.; Khan, I.H. Internet of Things (IoT) Enabled Healthcare Helps to Take the Challenges of COVID-19 Pandemic. J. Oral Biol. Craniofac. Res. 2021, 11, 209–214. [Google Scholar] [CrossRef]
  26. Xu, L.; He, W.; Li, S. Internet of Things in Industries: A Survey. IEEE Trans. Ind. Inform. 2014, 10, 2233–2243. [Google Scholar] [CrossRef]
  27. Bassi, A.; Bauer, M.; Fiedler, M.; Kramp, T.; Van Kranenburg, R.; Lange, S.; Meissner, S. Enabling Things to Talk; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  28. Patel, K.K.; Patel, S.M. Internet of things-IoT: Definition, characteristics, architecture, enabling technologies, application & future challenges. Int. J. Eng. Sci. Comput. 2016, 6, 5. [Google Scholar]
  29. Yang, Y.; Wu, L.; Yin, G.; Li, L.; Zhao, H. A Survey on Security and Privacy Issues in Internet-of-Things. IEEE Internet Things J. 2017, 4, 1250–1258. [Google Scholar] [CrossRef]
  30. Razzaque, M.; Milojevic-Jevric, M.; Palade, A.; Clarke, S. Middleware for Internet of Things: A Survey. IEEE Internet Things J. 2016, 3, 70–95. [Google Scholar] [CrossRef] [Green Version]
  31. Ashton, K. That “Internet of Things” Thing: In the Real World Things Matter More than Ideas. RFID J. 2009, 22, 97–114. [Google Scholar]
  32. Betts, R. Architecting for the Internet of Things; O’Reilly Media: Sebastopol, CA, USA, 2016. [Google Scholar]
  33. Bansal, M.; Garg, S. Internet of Things (IoT) Based Assistive Devices. In Proceedings of the 6th International Conference on Inventive Computation Technologies (ICICT), Coimbatore, India, 20–22 January 2021; pp. 1006–1009. [Google Scholar] [CrossRef]
  34. Al-Sarawi, S.; Anbar, M.; Alieyan, K.; Alzubaidi, M. Internet of Things (IoT) Communication Protocols: Review. In Proceedings of the 8th International Conference on Information Technology (ICIT), Amman, Jordan, 17–18 May 2017; pp. 685–690. [Google Scholar] [CrossRef]
  35. Kshirsagar, S.; Sachdev, S.; Singh, N.; Tiwari, A.; Sahu, S. IoT enabled gesture-controlled home automation for disabled and elderly. In Proceedings of the 2020 Fourth International Conference on Computing Methodologies and Communication (ICCMC), Erode, India, 11–13 March 2020; pp. 821–826. [Google Scholar]
  36. Yağanoğlu, M.; Köse, C. Real-Time Detection of Important Sounds with a Wearable Vibration Based Device for Hearing-Impaired People. Electronics 2018, 7, 50. [Google Scholar] [CrossRef] [Green Version]
  37. Barakonyi, I.; Schmalstieg, D. Augmented Reality Agents in the Development Pipeline of Computer Entertainment. In Entertainment Computing—ICEC 2005: 4th International Conference, Sanda, Japan, 19–21 September 2005; Springer: Berlin/Heidelberg, Germany, 2005; pp. 345–356. [Google Scholar]
  38. Raskar, R.N. Projector-Based Three Dimensional Graphics; The University of North Carolina at Chapel Hill: Chapel Hill, NC, USA, 2002. [Google Scholar]
  39. Raskar, R.; Welch, G.; Cutts, M.; Lake, A.; Stesin, L.; Fuchs, H. The Office of the Future: A Unified Approach to Image-Based Modeling and Spatially Immersive Displays. In Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, New York, NY, USA, 24 July 1998; pp. 179–188. [Google Scholar]
  40. Raskar, R.; Welch, G.; Low, K.L.; Bandyopadhyay, D. Shader Lamps: Animating Real Objects with Image-Based Illumination. In Eurographics Workshop on Rendering Techniques; Springer: Berlin/Heidelberg, Germany, 2001; pp. 89–102. [Google Scholar]
  41. Al-Turjman, F. Artificial Intelligence in IoT; Springer: Berlin/Heidelberg, Germany, 2019. [Google Scholar]
  42. Hrytsyk, V.; Grondzal, A.; Bilenkyj, A. Augmented Reality for People with Disabilities. In Proceedings of the 2015 Xth International Scientific and Technical Conference Computer Sciences and Information Technologies (CSIT), Lviv, Ukraine, 14–17 September 2015; pp. 188–191. [Google Scholar]
  43. Baker, E.J.; Bakar, J.A.A.; Zulkifli, A.N. Mobile Augmented Reality Elements for Museum Hearing Impaired Visitors’ Engagement. J. Telecommun. Electron. Comput. Eng. 2017, 9, 171–178. [Google Scholar]
  44. Zainuddin, N.M.M.; Zaman, H.B.; Ahmad, A. A Participatory Design in Developing Prototype an Augmented Reality Book for Deaf Students. In Proceedings of the 2010 Second International Conference on Computer Research and Development, Kuala Lumpur, Malaysia, 7–10 May 2010; pp. 400–404. [Google Scholar]
  45. Salim, S.; Jamil, M.M.A.; Ambar, R.; Wahab, M.H.A. A Review on Hand Gesture and Sign Language Techniques for Hearing Impaired Person. In Machine Learning Techniques for Smart City Applications: Trends and Solutions; Hemanth, D.J., Ed.; Springer: Cham, Switzerland, 2022; pp. 35–44. [Google Scholar] [CrossRef]
  46. Chamak, B.; Bonniau, B. Trajectories, Long-Term Outcomes and Family Experiences of 76 Adults with Autism Spectrum Disorder. J. Autism Dev. Disord. 2016, 46, 1084–1095. [Google Scholar] [CrossRef] [PubMed]
  47. Johnson, C.J.; Beitchman, J.H.; Brownlie, E. Twenty-Year Follow-up of Children with and without Speech-Language Impairments: Family, Educational, Occupational, and Quality of Life Outcomes. Am. J. Speech-Lang. Pathol. 2010, 19, 51–65. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  48. Rogers, S.J.; Estes, A.; Lord, C.; Vismara, L.; Winter, J.; Fitzpatrick, A.; Guo, M.; Dawson, G. Effects of a Brief Early Start Denver Model (ESDM)–Based Parent Intervention on Toddlers at Risk for Autism Spectrum Disorders: A Randomized Controlled Trial. J. Am. Acad. Child Adolesc. Psychiatry 2012, 51, 1052–1065. [Google Scholar] [CrossRef] [Green Version]
  49. Webb, S.J.; Jones, E.J.; Kelly, J.; Dawson, G. The Motivation for Very Early Intervention for Infants at High Risk for Autism Spectrum Disorders. Int. J. Speech-Lang. Pathol. 2014, 16, 36–42. [Google Scholar] [CrossRef] [Green Version]
  50. Jones, D.; McAllister, L.; Lyle, D. Community-Based Service-Learning: A Rural Australian Perspective on Student and Academic Outcomes of Participation. Int. J. Res. Serv. Learn. Community Engagem. 2016, 4, 181–198. [Google Scholar] [CrossRef]
  51. Vohra, R.; Madhavan, S.; Sambamoorthi, U.; St Peter, C. Access to Services, Quality of Care, and Family Impact for Children with Autism, Other Developmental Disabilities, and Other Mental Health Conditions. Autism Int. J. Res. Pract. 2014, 18, 815–826. [Google Scholar] [CrossRef] [Green Version]
  52. Taylor, L.J.; Maybery, M.T.; Wray, J.; Ravine, D.; Hunt, A.; Whitehouse, A.J. Brief Report: Do the Nature of Communication Impairments in Autism Spectrum Disorders Relate to the Broader Autism Phenotype in Parents? J. Autism Dev. Disord. 2013, 43, 2984–2989. [Google Scholar] [CrossRef]
  53. Bailey, B.; Bryant, L.; Hemsley, B. Virtual reality and augmented reality for children, adolescents, and adults with communication disability and neurodevelopmental disorders: A systematic review. Rev. J. Autism Dev. Disord. 2022, 9, 160–183. [Google Scholar] [CrossRef]
  54. Bryant, L.; Brunner, M.; Hemsley, B. A Review of Virtual Reality Technologies in the Field of Communication Disability: Implications for Practice and Research. Disabil. Rehabil. Assist. Technol. 2020, 15, 365–372. [Google Scholar] [CrossRef]
  55. Abedin, T.; Prottoy, K.S.; Moshruba, A.; Hakim, S.B. Bangla Sign Language Recognition Using Concatenated BdSL Network. arXiv 2021, arXiv:2107.11818. [Google Scholar]
  56. Lee, C.; Ng, K.K.; Chen, C.H.; Lau, H.; Chung, S.; Tsoi, T. American Sign Language Recognition and Training Method with Recurrent Neural Network. Expert Syst. Appl. 2021, 167, 114403. [Google Scholar] [CrossRef]
  57. Zhou, Z.; Chen, K.; Li, X.; Zhang, S.; Wu, Y.; Zhou, Y.; Meng, K.; Sun, C.; He, Q.; Fan, W.; et al. Sign-to-Speech Translation Using Machine-Learning-Assisted Stretchable Sensor Arrays. Nat. Electron. 2020, 3, 571–578. [Google Scholar] [CrossRef]
  58. O’Connor, T.F.; Fach, M.E.; Miller, R.; Root, S.E.; Mercier, P.P.; Lipomi, D.J. The Language of Glove: Wireless Gesture Decoder with Low-Power and Stretchable Hybrid Electronics. PLoS ONE 2017, 12, e0179766. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  59. Rizwan, S.B.; Khan, M.S.Z.; Imran, M. American Sign Language Translation via Smart Wearable Glove Technology. In Proceedings of the 2019 International Symposium on Recent Advances in Electrical Engineering (RAEE), Islamabad, Pakistan, 28–29 August 2019; Volume 4, pp. 1–6. [Google Scholar] [CrossRef]
  60. Heera, S.Y.; Murthy, M.K.; Sravanti, V.S.; Salvi, S. Talking Hands — An Indian Sign Language to Speech Translating Gloves. In Proceedings of the 2017 International Conference on Innovative Mechanisms for Industry Applications (ICIMIA), Bengaluru, India, 21–23 February 2017; pp. 746–751. [Google Scholar] [CrossRef]
  61. Kumar, M.P.; Thilagaraj, M.; Sakthivel, S.; Maduraiveeran, C.; Rajasekaran, M.P.; Rama, S. Sign Language Translator Using LabVIEW Enabled with Internet of Things. In Smart Intelligent Computing and Applications; Springer: Singapore, 2018; Volume 104, pp. 603–612. [Google Scholar] [CrossRef]
  62. P, G.J.; Miss, A.K. IoT Based Sign Language Interpretation System. J. Phys. Conf. Ser. 2019, 1362, 012034. [Google Scholar] [CrossRef]
  63. Farooq, M.S.; Riaz, S.; Abid, A.; Umer, T.; Zikria, Y.B. Role of IoT Technology in Agriculture: A Systematic Literature Review. Electronics 2020, 9, 319. [Google Scholar] [CrossRef] [Green Version]
  64. Papastratis, I.; Chatzikonstantinou, C.; Konstantinidis, D.; Dimitropoulos, K.; Daras, P. Artificial Intelligence Technologies for Sign Language. Sensors 2021, 21, 5843. [Google Scholar] [CrossRef]
  65. Maraslidis, G.S.; Kottas, T.L.; Tsipouras, M.G.; Fragulis, G.F. Design of a Fuzzy Logic Controller for the Double Pendulum Inverted on a Cart. Information 2022, 13, 379. [Google Scholar] [CrossRef]
  66. Kollias, K.F.; Syriopoulou-Delli, C.K.; Sarigiannidis, P.; Fragulis, G.F. The contribution of machine learning and eye-tracking technology in autism spectrum disorder research: A systematic review. Electronics 2021, 10, 2982. [Google Scholar] [CrossRef]
  67. Ambavane, P.; Karjavkar, R.; Pathare, H.; Relekar, S.; Alte, B.; Sharma, N.K. A novel communication system for deaf and dumb people using gesture. In ITM Web of Conferences; EDP Sciences: Les Ulis, France, 2020; Volume 32, p. 02003. [Google Scholar]
  68. Kumar, V.K.; Goudar, R.H.; Desai, V.T. Sign Language Unification: The Need for next Generation Deaf Education. Procedia Comput. Sci. 2015, 48, 673–678. [Google Scholar] [CrossRef] [Green Version]
  69. Al-Fityani, K. Deaf People, Modernity, and a Contentious Effort to Unify Arab Sign Languages. Ph.D. Thesis, University of California San Diego, San Diego, CA, USA, 2010. [Google Scholar]
  70. British Deaf Association. Gestuno: International Sign Language of the Deaf. Revised and Enlarged Book of Signs Agreed and Adopted by the Unification of Signs Commission of the World Federation of the Deaf; British Deaf Association: London, UK, 1975. [Google Scholar]
  71. Fernald, T.B.; Napoli, D.J. Exploitation of Morphological Possibilities in Signed Languages: Comparison of American Sign Language with English. Sign Lang. Linguist. 2000, 3, 3–58. [Google Scholar] [CrossRef]
  72. Kumar Attar, R.; Goyal, V.; Goyal, L. State of the Art of Automation in Sign Language: A Systematic Review. ACM Trans. Asian Low-Resour. Lang. Inf. Process. 2023, 22, 1–80. [Google Scholar] [CrossRef]
  73. Napier, J.; Leeson, L.; Napier, J.; Leeson, L. Sign Language in Action; Springer: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
  74. Neidle, C.; Lee, R.G. Unification, Competition and Optimality in Signed Languages: Aspects of the Syntax of American Sign Language (ASL). Available online: https://www.bu.edu/asllrp/ (accessed on 18 June 2023).
  75. Núñez-Marcos, A.; Perez-de-Viñaspre, O.; Labaka, G. A Survey on Sign Language Machine Translation. Expert Syst. Appl. 2022, 213, 118993. [Google Scholar] [CrossRef]
  76. Wilcox, S.; Xavier, A.N. A Framework for Unifying Spoken Language, Signed Language, and Gesture. Todas as Letras-Revista de Língua e Literatura 2013, 15, 1. [Google Scholar]
  77. Li, K.; Zhou, Z.; Lee, C.H. Sign Transition Modeling and a Scalable Solution to Continuous Sign Language Recognition for Real-World Applications. ACM Trans. Access. Comput. (TACCESS) 2016, 8, 1–23. [Google Scholar] [CrossRef]
  78. Elmahgiubi, M.; Ennajar, M.; Drawil, N.; Elbuni, M.S. Sign Language Translator and Gesture Recognition. In Proceedings of the 2015 Global Summit on Computer & Information Technology (GSCIT), Sousse, Tunisia, 11–13 June 2015. [Google Scholar] [CrossRef]
  79. Jin, C.M.; Omar, Z.; Jaward, M.H. A Mobile Application of American Sign Language Translation via Image Processing Algorithms. In Proceedings of the 2016 IEEE Region 10 Symposium (TENSYMP), Bali, Indonesia, 9–11 May 2016. [Google Scholar] [CrossRef]
  80. Ku, Y.J.; Chen, M.J.; King, C.T. A Virtual Sign Language Translator on Smartphones. In Proceedings of the 2019 Seventh International Symposium on Computing and Networking Workshops (CANDARW), Nagasaki, Japan, 26–29 November 2019; pp. 445–449. [Google Scholar]
  81. Lee, S.; Jo, D.; Kim, K.B.; Jang, J.; Park, W. Wearable Sign Language Translation System Using Strain Sensors. Sens. Actuators A Phys. 2021, 331, 113010. [Google Scholar] [CrossRef]
  82. Madhuri, Y.; Anitha, G.; Anburajan, M. Vision-Based Sign Language Translation Device. In Proceedings of the 2013 International Conference on Information Communication and Embedded Systems (ICICES), Chennai, India, 21–22 February 2013. [Google Scholar] [CrossRef]
  83. Mahesh, M.; Jayaprakash, A.; Geetha, M. Sign Language Translator for Mobile Platforms. In Proceedings of the 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Udupi, India, 13–16 September 2017; pp. 1176–1181. [Google Scholar]
  84. Oliveira, T.; Escudeiro, P.; Escudeiro, N.; Rocha, E.; Barbosa, F.M. Automatic Sign Language Translation to Improve Communication. In Proceedings of the 2019 IEEE Global Engineering Education Conference (EDUCON), Dubai, United Arab Emirates, 8–11 April 2019; pp. 937–942. [Google Scholar]
  85. Parton, B.S. Sign Language Recognition and Translation: A Multidisciplined Approach from the Field of Artificial Intelligence. J. Deaf. Stud. Deaf. Educ. 2006, 11, 94–101. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  86. Rastgoo, R.; Kiani, K.; Escalera, S.; Sabokrou, M. Sign Language Production: A Review. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 3451–3461. [Google Scholar]
  87. Berthet, M.; Coye, C.; Dezecache, G.; Kuhn, J. Animal Linguistics: A Primer. Biol. Rev. 2023, 98, 81–98. [Google Scholar] [CrossRef]
  88. Tolba, M.F.; Elons, A.S. Recent Developments in Sign Language Recognition Systems. In Proceedings of the 2013 8th International Conference on Computer Engineering & Systems (ICCES), Cairo, Egypt, 26–28 November 2013. [Google Scholar] [CrossRef]
  89. Yin, A.; Zhao, Z.; Liu, J.; Jin, W.; Zhang, M.; Zeng, X.; He, X. Simulslt: End-to-end Simultaneous Sign Language Translation. In Proceedings of the 29th ACM International Conference on Multimedia, New York, NY, USA, 20–24 October 2021; pp. 4118–4127. [Google Scholar]
  90. Fragulis, G.F.; Lazaridis, L.; Papatsimouli, M.; Skordas, I.A. ODES: An Online Dynamic Examination System Based on a CMS Wordpress Plugin. In Proceedings of the 2018 South-Eastern European Design Automation, Computer Engineering, Computer Networks and Society Media Conference (SEEDA_CECNSM), Kastoria, Greece, 22–24 September 2018; pp. 1–8. [Google Scholar]
  91. Lazaridis, L.; Papatsimouli, M.; Fragulis, G.F. S.A.T.E.P.: Synchronous-asynchronous Tele-Education Platform. In SEEDA-CECNSM ’16: Proceedings of the SouthEast European Design Automation, Computer Engineering, Computer Networks and Social Media Conference; Association for Computing Machinery: New York, NY, USA, 2016; pp. 92–97. [Google Scholar] [CrossRef]
Figure 1. Hand gesture recognition process.
Table 1. Advantages and disadvantages.

Advantages | Disadvantages
Wireless | Difficult to handle
Portable | Components are expensive
Table 2. Real-time sign language translators.

Paper | Name | Language | Accuracy | Year
[55] | - | B.S.L. | 91.51% | 2021
[56] | LSTM-RN | A.S.L. | 91.82% | 2021
[57] | YSSA | A.S.L. | 98.63% | 2021
[58] | - | A.S.L. | - | 2017
[59] | Dastaana | A.S.L. | 95% | 2019
[60] | Talking Hands | I.S.L. | - | 2017
Table 3. IoT applications for Sign Language.

A/A | Paper | Language | Year
1 | [7] | I.S.L. | 2021
2 | [67] | A.S.L. | 2020
3 | [61] | Sign language to English language | 2019
4 | [17] | E.S.L. | 2019
5 | [62] | I.S.L. alphabet | 2019