A Novel Machine Learning Based Two-Way Communication System for Deaf and Mute
Abstract
1. Introduction
1.1. Sign Language
1.2. Artificial Intelligence
1.2.1. Machine Learning
1.2.2. Deep Learning
1.2.3. Artificial Neural Network
1.2.4. Convolutional Neural Network
1.3. Manuscript Organization
2. Literature Review
Research Questions
3. Research Methodology
3.1. Design Research
3.2. Conduct Research
4. Proposed System Block Diagram and Novel Features
4.1. Novel Features of the Proposed System
4.2. Proposed System Block Diagram
5. Experimental Setup and Results
5.1. System Validation Dataset and Criteria
5.2. Experiment 1—Processing Quality of Acquired Gesture
5.3. Experiment 2—Processing Variation in Gestures
5.4. Experiment 3—Machine Learning Algorithm Repeatability
5.5. Experiment 4—Algorithm Performance
6. Discussion
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Vaidya, O.; Gandhe, S.; Sharma, A.; Bhate, A.; Bhosale, V.; Mahale, R. Design and development of hand gesture based communication device for deaf and mute people. In Proceedings of the IEEE Bombay Section Signature Conference (IBSSC), Mumbai, India, 4–6 December 2020; pp. 102–106.
2. Marin, G.; Dominio, F.; Zanuttigh, P. Hand gesture recognition with Leap Motion and Kinect devices. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 1565–1569.
3. Saleem, M.I.; Otero, P.; Noor, S.; Aftab, R. Full duplex smart system for Deaf & Dumb and normal people. In Proceedings of the Global Conference on Wireless and Optical Technologies (GCWOT), Málaga, Spain, 6–8 October 2020; pp. 1–7.
4. Deb, S.; Suraksha; Bhattacharya, P. Augmented Sign Language Modeling (ASLM) with interaction design on smartphone—An assistive learning and communication tool for inclusive classroom. Procedia Comput. Sci. 2018, 125, 492–500.
5. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861.
6. Rishi, K.; Prarthana, A.; Pravena, K.S.; Sasikala, S.; Arunkumar, S. Two-way sign language conversion for assisting deaf-mutes using neural network. In Proceedings of the 8th International Conference on Advanced Computing and Communication Systems, ICACCS 2022, Coimbatore, India, 25–26 March 2022; pp. 642–646.
7. Anupama, H.S.; Usha, B.A.; Madhushankar, S.; Vivek, V.; Kulkarni, Y. Automated sign language interpreter using data gloves. In Proceedings of the International Conference on Artificial Intelligence and Smart Systems (ICAIS), Coimbatore, India, 25–27 March 2021; pp. 472–476.
8. Kawai, H.; Tamura, S. Deaf-and-mute sign language generation system. Pattern Recognit. 1985, 18, 199–205.
9. Bhadauria, R.S.; Nair, S.; Pal, D.K. A Survey of Deaf Mutes. Med. J. Armed Forces India 2007, 63, 29–32.
10. Sood, A.; Mishra, A. AAWAAZ: A communication system for deaf and dumb. In Proceedings of the 5th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO), Noida, India, 7–9 September 2016; pp. 620–624.
11. Yousaf, K.; Mehmood, Z.; Saba, T.; Rehman, R.; Rashid, M.; Altaf, M.; Shuguang, Z. A Novel Technique for Speech Recognition and Visualization Based Mobile Application to Support Two-Way Communication between Deaf-Mute and Normal Peoples. Wirel. Commun. Mob. Comput. 2018, 2018, 1013234.
12. Raheja, J.L.; Singhal, A.; Chaudhary, A. Android Based Portable Hand Sign Recognition System. arXiv 2015, arXiv:1503.03614.
13. Soni, N.S.; Nagmode, M.S.; Komati, R.D. Online hand gesture recognition & classification for deaf & dumb. In Proceedings of the International Conference on Inventive Computation Technologies (ICICT), Coimbatore, India, 26–27 August 2016; pp. 1–4.
14. Chakrabarti, S. State of deaf children in West Bengal, India: What can be done to improve outcome. Int. J. Pediatr. Otorhinolaryngol. 2018, 110, 37–42.
15. Ameur, S.; Khalifa, A.B.; Bouhlel, M.S. Chronological pattern indexing: An efficient feature extraction method for hand gesture recognition with Leap Motion. J. Vis. Commun. Image Represent. 2020, 70, 102842.
16. Ameur, S.; Khalifa, A.B.; Bouhlel, M.S. A novel hybrid bidirectional unidirectional LSTM network for dynamic hand gesture recognition with Leap Motion. Entertain. Comput. 2020, 35, 100373.
17. Boppana, L.; Ahamed, R.; Rane, H.; Kodali, R.K. Assistive sign language converter for deaf and dumb. In Proceedings of the 2019 International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData), Atlanta, GA, USA, 14–17 July 2019; pp. 302–307.
18. Suharjito; Anderson, R.; Wiryana, F.; Ariesta, M.C.; Kusuma, G.P. Sign Language Recognition Application Systems for Deaf-Mute People: A Review Based on Input-Process-Output. Procedia Comput. Sci. 2017, 116, 441–448.
19. Patwary, A.S.; Zaohar, Z.; Sornaly, A.A.; Khan, R. Speaking system for deaf and mute people with flex sensors. In Proceedings of the 2022 6th International Conference on Trends in Electronics and Informatics, ICOEI 2022, Tirunelveli, India, 28–30 April 2022; pp. 168–173.
20. Sharma, A.; Yadav, A.; Srivastava, S.; Gupta, R. Analysis of movement and gesture recognition using Leap Motion Controller. Procedia Comput. Sci. 2018, 132, 551–556.
21. Salem, N.; Alharbi, S.; Khezendar, R.; Alshami, H. Real-time glove and android application for visual and audible Arabic sign language translation. Procedia Comput. Sci. 2019, 163, 450–459.
22. Zhang, Y.; Min, Y.; Chen, X. Teaching Chinese Sign Language with a Smartphone. Virtual Real. Intell. Hardw. 2021, 3, 248–260.
23. Samonte, M.J.C.; Gazmin, R.A.; Soriano, J.D.S.; Valencia, M.N.O. BridgeApp: An assistive mobile communication application for the deaf and mute. In Proceedings of the 2019 International Conference on Information and Communication Technology Convergence (ICTC), Jeju, Republic of Korea, 16–18 October 2019; pp. 1310–1315.
24. Sobhan, M.; Chowdhury, M.Z.; Ahsan, I.; Mahmud, H.; Hasan, M.K. A communication aid system for deaf and mute using vibrotactile and visual feedback. In Proceedings of the 2019 International Seminar on Application for Technology of Information and Communication (iSemantic), Semarang, Indonesia, 21–22 September 2019; pp. 184–190.
25. KN, S.K.; Sathish, R.; Vinayak, S.; Pandit, T.P. Braille assistance system for visually impaired, blind & deaf-mute people in indoor & outdoor application. In Proceedings of the 2019 4th International Conference on Recent Trends on Electronics, Information, Communication & Technology (RTEICT), Bangalore, India, 17–18 May 2019; pp. 1505–1509.
26. Villagomez, E.B.; King, R.A.; Ordinario, M.J.; Lazaro, J.; Villaverde, J.F. Hand gesture recognition for deaf-mute using Fuzzy-Neural network. In Proceedings of the 2019 IEEE International Conference on Consumer Electronics—Asia (ICCE-Asia), Bangkok, Thailand, 12–14 June 2019; pp. 30–33.
27. Tao, Y.; Huo, S.; Zhou, W. Research on communication APP for deaf and mute people based on face emotion recognition technology. In Proceedings of the 2020 IEEE 2nd International Conference on Civil Aviation Safety and Information Technology (ICCASIT), Weihai, China, 14–16 October 2020; pp. 547–552.
28. Shareef, S.K.; Haritha, I.V.S.L.; Prasanna, Y.L.; Kumar, G.K. Deep learning based hand gesture translation system. In Proceedings of the 2021 5th International Conference on Trends in Electronics and Informatics (ICOEI), Tirunelveli, India, 3–5 June 2021; pp. 1531–1534.
29. Dhruv, A.J.; Bharti, S.K. Real-time sign language converter for mute and deaf people. In Proceedings of the 2021 International Conference on Artificial Intelligence and Machine Vision (AIMV), Gandhinagar, India, 24–26 September 2021; pp. 1–6.
30. Rosero-Montalvo, P.D.; Godoy-Trujillo, P.; Flores-Bosmediano, E.; Carrascal-Garcia, J.; Otero-Potosi, S.; Benitez-Pereira, H.; Peluffo-Ordonez, D.H. Sign language recognition based on intelligent glove using machine learning techniques. In Proceedings of the 2018 IEEE Third Ecuador Technical Chapters Meeting (ETCM), Cuenca, Ecuador, 15–19 October 2018; pp. 1–5.
31. Janeera, D.A.; Raja, K.M.; Pravin, U.K.R.; Kumar, M.K. Neural network based real time sign language interpreter for virtual meet. In Proceedings of the 2021 5th International Conference on Computing Methodologies and Communication (ICCMC), Erode, India, 8–10 April 2021; pp. 1593–1597.
32. Gupta, A.M.; Koltharkar, S.S.; Patel, H.D.; Naik, S. DRISHYAM: An interpreter for deaf and mute using single shot detector model. In Proceedings of the 8th International Conference on Advanced Computing and Communication Systems, ICACCS 2022, Coimbatore, India, 25–26 March 2022; pp. 365–371.
33. Lan, S.; Ye, L.; Zhang, K. Attention-augmented electromagnetic representation of sign language for human-computer interaction in deaf-and-mute community. In Proceedings of the 2021 IEEE USNC-URSI Radio Science Meeting (Joint with AP-S Symposium), Singapore, 4–10 December 2021; pp. 47–48.
34. Telluri, P.; Manam, S.; Somarouthu, S.; Oli, J.M.; Ramesh, C. Low cost flex powered gesture detection system and its applications. In Proceedings of the 2020 Second International Conference on Inventive Research in Computing Applications (ICIRCA), Coimbatore, India, 15–17 July 2020; pp. 1128–1131.
35. Jamdar, V.; Garje, Y.; Khedekar, T.; Waghmare, S.; Dhore, M.L. Inner voice—An effortless way of communication for the physically challenged deaf & mute people. In Proceedings of the 2021 International Conference on Artificial Intelligence and Machine Vision (AIMV), Gandhinagar, India, 24–26 September 2021; pp. 1–5.
36. He, Y.; Kuerban, A.; Yu, Q.; Xie, Q. Design and implementation of a sign language translation system for deaf people. In Proceedings of the 2021 3rd International Conference on Natural Language Processing (ICNLP), Beijing, China, 26–28 March 2021; pp. 150–154.
37. Xia, K.; Lu, W.; Fan, H.; Zhao, Q. A Sign Language Recognition System Applied to Deaf-Mute Medical Consultation. Sensors 2022, 22, 9107.
38. Siddiqui, A.; Zia, M.Y.I.; Otero, P. A universal machine-learning-based automated testing system for consumer electronic products. Electronics 2021, 10, 136.
39. Siddiqui, A.; Zia, M.Y.I.; Otero, P. A Novel Process to Setup Electronic Products Test Sites Based on Figure of Merit and Machine Learning. IEEE Access 2021, 9, 80582–80602.
40. Ronchetti, F.; Quiroga, F.; Estrebou, C.A.; Lanzarini, L.C.; Rosete, A. LSA64: An Argentinian sign language dataset. In Proceedings of the Congreso Argentino de Ciencias de la Computación (CACIC), San Luis, Argentina, 3–10 October 2016; pp. 794–803.
41. Joze, H.R.V.; Koller, O. MS-ASL: A large-scale data set and benchmark for understanding American sign language. In Proceedings of the 30th British Machine Vision Conference 2019, BMVC 2019, Cardiff, UK, 9–12 September 2019.
42. Jie, H.; Zhou, W.; Li, H.; Li, W. Attention-Based 3D-CNNs for Large-Vocabulary Sign Language Recognition. IEEE Trans. Circuits Syst. Video Technol. 2019, 29, 2822–2832.
43. Kagirov, I.; Ivanko, D.; Ryumin, D.; Axyonov, A.; Karpov, A. TheRuSLan: Database of Russian sign language. In Proceedings of the LREC 2020—12th International Conference on Language Resources and Evaluation, Marseille, France, 11–16 May 2020; pp. 6079–6085.
44. Sincan, O.M.; Keles, H.Y. AUTSL: A Large Scale Multi-Modal Turkish Sign Language Dataset and Baseline Methods. IEEE Access 2020, 8, 181340–181355.
45. Dongxu, L.; Opazo, C.R.; Yu, X.; Li, H. Word-Level Deep Sign Language Recognition from Video: A New Large-Scale Dataset and Methods Comparison. 2020. Available online: https://dxli94.github.io/ (accessed on 10 November 2022).
46. Tavella, F.; Schlegel, V.; Romeo, M.; Galata, A.; Cangelosi, A. WLASL-LEX: A Dataset for Recognising Phonological Properties in American Sign Language. arXiv 2022, arXiv:2203.06096.
47. Engineer Ambitiously—NI. Available online: https://www.ni.com/en-gb.html (accessed on 23 October 2022).
48. Kaggle Dataset. Available online: https://www.kaggle.com/datasets/alexalex1211/aslamerican-sign-language (accessed on 23 October 2022).
Comparison of existing systems (DnM: deaf and mute; NDnM: non-deaf-and-mute).

Reference | Hardware | Software/Algorithm | Features/Limitations |
---|---|---|---|
[7] | Gloves, Sensors, Arduino | KNN | Custom-made, requires validation; DnM to NDnM only |
[8] | PC based | None | NDnM to DnM only |
[9] | None | None | Survey only |
[10] | PC based | Image processing | DnM to NDnM only |
[11] | Mobile app | Speech recognition | DnM to NDnM only |
[12] | Mobile app | ANN | DnM to NDnM only |
[13] | PC based | PCA | DnM to NDnM only |
[14] | None | None | Survey only |
[15] | Leap Motion | CPI | DnM to NDnM only |
[16] | Leap Motion | LSTM | DnM to NDnM only |
[17] | Raspberry Pi | Deep learning | DnM to NDnM only |
[18] | PC based | Various | Survey only |
[19] | Gloves, Sensors, Arduino | None | DnM to NDnM only |
[20] | Leap Motion | None | DnM to NDnM only |
[21] | Gloves, Sensors, Mobile app | None | DnM to NDnM only |
[22] | Mobile app | None | DnM to NDnM only |
[23] | Mobile app | None | DnM to NDnM only |
[24] | Mobile app | None | DnM to NDnM to DnM |
[25] | Sensors, Standalone | None | DnM to NDnM only |
[26] | PC, Microcontroller | FNN | DnM to NDnM only |
[27] | Gloves, Microcontroller | None | DnM to NDnM only |
[28] | PC based | Deep learning | DnM to NDnM only |
[29] | PC based | CNN | DnM to NDnM only |
[30] | Gloves, Sensors, Arduino | KNN | DnM to NDnM only |
[31] | PC based | NN | DnM to NDnM only |
[32] | PC based | CNN | DnM to NDnM only |
[37] | Mobile app | MobileNet | DnM to NDnM to DnM |
Letters and Numbers | Accuracy (%) |
---|---|
A to Z | 98.46 |
0 to 9 | 98.9 |
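The figures above are overall classification accuracies for the letter and digit gesture sets. As a minimal illustrative sketch (the function name and the toy labels below are assumptions for demonstration, not taken from the paper), accuracy in percent is simply the fraction of gestures whose predicted class matches the true class:

```python
def accuracy_percent(y_true, y_pred):
    """Percentage of gestures classified correctly."""
    if len(y_true) != len(y_pred):
        raise ValueError("label lists must have equal length")
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return 100.0 * correct / len(y_true)

# Toy example: 3 of 4 letter gestures recognized correctly.
print(accuracy_percent(list("ABCA"), list("ABCB")))  # 75.0
```

In practice such figures would be computed per gesture set (A–Z and 0–9 separately) over a held-out validation split.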
Share and Cite
Saleem, M.I.; Siddiqui, A.; Noor, S.; Luque-Nieto, M.-A.; Otero, P. A Novel Machine Learning Based Two-Way Communication System for Deaf and Mute. Appl. Sci. 2023, 13, 453. https://doi.org/10.3390/app13010453