Systematic Review

Digital Alternative Communication for Individuals with Amyotrophic Lateral Sclerosis: What We Have

by Felipe Fernandes 1,*, Ingridy Barbalho 1, Arnaldo Bispo Júnior 1, Luca Alves 1, Danilo Nagem 1, Hertz Lins 1, Ernano Arrais Júnior 1, Karilany D. Coutinho 1, Antônio H. F. Morais 2, João Paulo Q. Santos 2, Guilherme Medeiros Machado 3, Jorge Henriques 4, César Teixeira 4, Mário E. T. Dourado Júnior 1,5, Ana R. R. Lindquist 1 and Ricardo A. M. Valentim 1

1 Laboratory of Technological Innovation in Health (LAIS), Federal University of Rio Grande do Norte (UFRN), Natal 59010-090, Brazil
2 Advanced Nucleus of Technological Innovation (NAVI), Federal Institute of Rio Grande do Norte (IFRN), Natal 59015-000, Brazil
3 Research Department, ECE-Engineering School, 75015 Paris, France
4 Department of Informatics Engineering, Center for Informatics and Systems of the University of Coimbra, Universidade de Coimbra, 3030-788 Coimbra, Portugal
5 Department of Integrated Medicine, Federal University of Rio Grande do Norte (UFRN), Natal 59010-090, Brazil
* Author to whom correspondence should be addressed.
J. Clin. Med. 2023, 12(16), 5235; https://doi.org/10.3390/jcm12165235
Submission received: 14 July 2023 / Revised: 5 August 2023 / Accepted: 9 August 2023 / Published: 11 August 2023
(This article belongs to the Section Clinical Neurology)

Abstract

Amyotrophic Lateral Sclerosis is a disease that irreversibly compromises the motor system and the functional abilities of the person, causing the progressive loss of the ability to communicate. Tools based on Augmentative and Alternative Communication are essential for promoting autonomy and improving communication, quality of life, and survival. This Systematic Literature Review aimed to provide evidence on eye-image-based Human–Computer Interaction approaches for the Augmentative and Alternative Communication of people with Amyotrophic Lateral Sclerosis. The Systematic Literature Review was conducted following a protocol consisting of search questions, inclusion and exclusion criteria, and quality assessment, to select primary studies published between 2010 and 2021 in six repositories: Science Direct, Web of Science, Springer, IEEE Xplore, ACM Digital Library, and PubMed. After screening, 25 primary studies were evaluated. These studies showcased four low-cost, non-invasive Human–Computer Interaction strategies employed for Augmentative and Alternative Communication in people with Amyotrophic Lateral Sclerosis: Eye-Gaze, featured in 36% of the studies; Eye-Blink and Eye-Tracking, each accounting for 28% of the approaches; and the Hybrid strategy, employed in 8% of the studies. For these approaches, several computational techniques were identified. For a better understanding, a workflow was generated containing the development phases and the respective methods used by each strategy. The results indicate the possibility and feasibility of developing Human–Computer Interaction resources based on eye images for Augmentative and Alternative Communication, tested to date only in control groups. The absence of experimental testing in people with Amyotrophic Lateral Sclerosis reiterates the challenges related to the scalability, efficiency, and usability of these technologies for people with the disease. Although challenges remain, the findings represent important advances in the fields of health sciences and technology, pointing to a promising future with possibilities for a better quality of life.

1. Introduction

Amyotrophic Lateral Sclerosis (ALS) is a progressive and irreversible neurodegenerative disease that affects an individual’s motor neurons. As a result, there is a gradual loss of functionality in voluntary movements, respiratory function, and communication [1,2,3,4,5]. For people with ALS, having access to an ecosystem that integrates multi-professional assistance and assistive technologies, particularly Augmentative and Alternative Communication resources, has been shown to be essential to preserving communication and interaction skills and enhancing quality of life and survival as the disease advances [6,7,8,9,10,11].
As ALS progresses, functional losses intensify, and the communicative process, autonomy, social interaction, and participation are partially or entirely affected. To compensate for these losses in functional and motor abilities, several lines of research have been pursued with a common objective: to improve the quality of life of patients with ALS. Some of this research has been directed toward alternative communication, one of the main issues for ALS patients, since many of them lose the ability to communicate, which can cause social isolation and loss of autonomy [12,13]. Therefore, research in this field is often based on Human–Computer Interaction, to promote Augmentative and Alternative Communication methods using devices or information systems and applications within the scope of assistive technologies [14,15,16,17,18,19,20,21].
In Augmentative and Alternative Communication, there are different mechanisms and paradigms for controlling interfaces based on Human–Computer Interaction. Bioelectric signals, for instance, are widely used and investigated mechanisms in neuroscience, rehabilitation, and Brain–Computer Interface (BCI). In BCI, brain signals and electroencephalography devices are widely used for Human–Computer Interaction, especially when the person with ALS is in a locked-in state and cannot voluntarily move their eyes [22,23,24,25,26,27,28]. In a systematic review, Jaramillo-Yánez et al. [29] showed studies investigating electromyography signals for Human–Computer Interaction through muscle contractions and gesture recognition. Other studies have also suggested that, by applying electro-oculogram-based features, it is possible to control interfaces by capturing predefined eye movements (up, down, left, and right) or blinking [30,31,32,33,34,35].
Despite scientific and technological advances in the field of Augmentative and Alternative Communication using bioelectric signals, it is still a challenge to introduce devices in this category into the home environment for use by individuals with severe motor disabilities, such as people with ALS. These limitations are related to home usability (the ability to handle an instrument), the time necessary to select characters or items on an Augmentative and Alternative Communication interface, and the electrodes that must be attached to the patient, which usually cause fatigue and discomfort and discourage adoption of the resource [12,36,37,38].
Other approaches to Human–Computer Interaction, highlighted in significant areas of Computer Vision and Machine Learning, use computational methods based on image processing. For example, some studies have used the eyes of individuals with a severe motor disability, along with one or more cameras, to capture image data for processing and for defining patterns such as blinking [13,39,40] or pupil movement [41,42,43,44,45,46]. The digitally processed images of the user's eyes thus act as input for a Human–Computer Interaction system.
Fathi et al. [40] presented two categories of image-based techniques: with and without infrared. Methods for eye tracking with infrared are more effective; however, prolonged exposure to infrared may cause damage or discomfort to the eye, and such methods may require specific hardware attached to the individual's head. Methods that use cameras without infrared lights are simpler; however, detecting eye movements or eye blinking is more complex, which may compromise the system's accuracy relative to the desired selection target on the interface [42,47].
In the context of Augmentative and Alternative Communication and image-based mechanisms for people with ALS or other conditions belonging to the so-called locked-in syndrome, it is challenging to craft Human–Computer Interaction solutions for the home environment that are efficient and accessible while requiring low typing effort and low cost. The Augmentative and Alternative Communication devices considered high-performance are usually commercial and come with additional elements, such as infrared lights and sensors for image processing, which makes them expensive and bulky.
In this context, the following question arises: is it possible to build a low-cost resource, using an eye-image-based method that relies on Computer Vision and Machine Learning techniques for Augmentative and Alternative Communication, interaction, and inclusion of people with ALS? To answer this research question, this paper presents, based on the methodological features of a Systematic Literature Review [48,49,50,51,52], an investigation of primary studies that explore eye-based Human–Computer Interaction systems and technologies for Augmentative and Alternative Communication for people with ALS.

2. Materials and Methods

This research was developed based on the systematic review guidelines proposed by Kitchenham [48] and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist [53], and it was registered with PROSPERO (registration no. CRD42021230721) [54]. Initially, as a fundamental part of the protocol, five Research Questions (RQ) were developed (see Table 1 below).
The process of identifying primary studies related to the investigation object of this Systematic Literature Review consisted of searches in six repositories: Science Direct, Web of Science, Springer, IEEE Xplore, ACM Digital Library, and PubMed. Searches in all databases were performed on 18 November 2021. Two search strings (SS01 and SS02) were used in all repositories except PubMed, for which a third search string (SS03), defined from the Medical Subject Headings (MeSH) thesaurus, was considered. The search strings are presented below:
  • SS01: (eye) AND (track OR gaze OR blink OR localization) AND (camera OR webcam) AND (“amyotrophic lateral sclerosis” OR als);
  • SS02: (eye) AND (track OR gaze OR blink OR localization) AND (camera OR webcam) AND (“neuromuscular disease” OR “motor neuron disease”);
  • SS03: see Appendix A.1.
After identifying and defining the initial set of records, screening was performed, to select a subset of eligible primary studies. This process was organized and executed by applying three elementary procedures: (i) Inclusion Criteria—IC; (ii) Exclusion Criteria—EC; and (iii) Quality Assessment Criteria—QA.
In procedure (i), a subset of primary studies was defined based on the Inclusion Criteria (Table 2), applied through the filters made available in the repositories. In procedure (ii), this subset was screened against the Exclusion Criteria (Table 2) by reading the titles, abstracts, and keywords. Rayyan [55], a web application for systematic reviews, assisted in step (ii).
To determine the final set of eligible articles, and to seek answers to the Research Questions (see Table 1), a screening guided by the Quality Assessment Criteria (see Table 3) was performed based on a full reading of the primary studies. An elimination condition (QA01) and an evaluation metric, called score (see Equation (1)), were used for the qualification and ranking of the studies. The score was the arithmetic mean of the weights (w) assigned for each Quality Assessment Criterion. The weight (w), which could take the values 0, 0.5, or 1.0, measured how satisfactorily the article answered a given Quality Assessment Criterion, as shown in Equation (2). Primary articles that scored 0.5 or higher (i.e., 0.5 ≤ score ≤ 1.0) were considered eligible for this Systematic Literature Review. Two reviewers assigned scores, and elementary data from the final set of eligible studies, extracted based on the Research Questions, were summarized in Table 4.
$$\mathrm{score} = \frac{1}{n_{\mathrm{QA}}} \sum_{i=1}^{n_{\mathrm{QA}}} w_{\mathrm{QA}_i} \qquad (1)$$

where:
– $n_{\mathrm{QA}}$: the total number of Quality Assessment Criteria;
– $w_{\mathrm{QA}_i}$: the weight $w$ assigned to the $i$-th Quality Assessment Criterion under analysis (see the possible values in Equation (2)).

$$w_{\mathrm{QA}} = \begin{cases} 1.0, & \text{yes, fully describes,} \\ 0.5, & \text{yes, partially describes,} \\ 0, & \text{does not describe.} \end{cases} \qquad (2)$$
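To make the eligibility computation concrete, the short Python sketch below implements Equations (1) and (2); the five example answers are hypothetical, not taken from any primary study.

```python
# Minimal sketch of the scoring in Equations (1) and (2).
# The possible answers and weights follow Equation (2).
QA_WEIGHTS = {
    "fully describes": 1.0,
    "partially describes": 0.5,
    "does not describe": 0.0,
}

def score(answers):
    """Arithmetic mean of the weights w_QA (Equation (1))."""
    weights = [QA_WEIGHTS[answer] for answer in answers]
    return sum(weights) / len(weights)

# Hypothetical article judged on five Quality Assessment Criteria;
# it is eligible if 0.5 <= score <= 1.0.
answers = ["fully describes", "partially describes", "fully describes",
           "does not describe", "fully describes"]
s = score(answers)
print(f"score = {s:.2f}, eligible: {s >= 0.5}")  # score = 0.70, eligible: True
```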

3. Results

The detailed quantitative results of the protocol execution of this Systematic Literature Review are summarized in Figure 1. After identifying 9084 records and performing screening, consisting of the application of the Inclusion Criteria (8586 studies excluded), Exclusion Criteria (449 studies excluded), and Quality Assessment Criteria (24 studies excluded), a set of 25 primary studies was considered eligible and included in this Systematic Literature Review, to answer the Research Questions (Table 1). The results are organized and presented in the same sequence as the Research Questions. The analysis is based on the data extracted from the 25 eligible articles, briefly described in Table 4, which is organized by Human–Computer Interaction group, score, and year of publication, arranged in descending order.

3.1. Research Question 01

Based on the primary studies, and as shown in Figure 2, the strategies evidenced for Human–Computer Interaction through the eyes fall into four categories: Eye-Gaze; Eye-Blink; Eye-Tracking; and hybrid strategies, which combine some of the previous categories. The Eye-Gaze approach (36% of the studies) was the most prevalent. It generally seeks to estimate gaze direction from pupil movement along the horizontal (left and right) and vertical (up and down) axes, in order to select the target object on the interface [56,57,58,59,60,61,62,63,64].
The Eye-Blink strategy was evidenced in 28% of the studies [65,66,67,68,69,70,71]. In this category, the approach to target selection varies and can be based on the detection/identification of voluntary eye blinking (long eye-blinks) [65], the simulation of analogous mouse clicks (right or left eye-blink) [66,67,68,70,71] and the sequential combination of blinks in a temporal space [69]. With the same percentage, 28% of the primary studies provided Human–Computer Interaction approaches to Eye-Tracking, i.e., identifying and classifying the effective pupil direction [72,73,74,75,76,77,78]. Despite its similarity to Eye-Gaze, this strategy seeks to estimate gaze direction in relation to the image pixels, which is more accurate and goes beyond horizontal and vertical lines. The studies by Zhao et al. [79] and Xu and Lin [80], which belong to the category of hybrid strategies, combine Eye-Gaze and Eye-Blink strategies.

3.2. Research Question 02

The algorithmic techniques explored in the primary studies varied with the Human–Computer Interaction strategies presented in Section 3.1 (Research Question 01) and with the configuration of the environment in which the camera was placed to acquire images/video of the user, as shown in Figure 3, in the step called Video Acquisition. Figure 3 also shows a generic workflow of the procedures (tasks) and the respective Computer Vision or Machine Learning techniques often used to solve the challenges of Human–Computer Interaction through pupil tracking or blink detection.
In the set of primary studies, mainly for the face detection/localization and eye detection/localization steps, the use of Computer Vision algorithmic resources from the Open Source Computer Vision Library (OpenCV) [81] and Dlib [82] was observed. For the face detection/localization task, for example, the authors mentioned the use of techniques based on the Viola–Jones algorithm or the Haar cascade classifier, available in the OpenCV library, and on Dlib’s built-in facial landmark detector [56,58,60,65,70,74,80]. Other sources of computational tools for image processing and Computer Vision were also explored in the face detection/localization step: for example, Singh and Singh [66] and Singh and Singh [67] used the Viola–Jones algorithm from the MathWorks company. Authors Zhang et al. [57] explored face detection with the Machine Learning Kit on iOS and landmark detection with Dlib.
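As an illustration of this face detection/localization step, the following minimal Python sketch uses the Haar cascade classifier shipped with OpenCV, in the spirit of the Viola–Jones-based pipelines reported above; the camera index and single-frame handling are illustrative assumptions rather than a reproduction of any study's implementation.

```python
import cv2

# Load the pre-trained frontal-face Haar cascade bundled with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # Video Acquisition: webcam on a table or built in
ret, frame = cap.read()
if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Viola-Jones-style multi-scale detection over the grayscale frame.
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cap.release()
```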
The eye detection/localization task challenges were mainly related to eye clipping and pupil or iris enhancement. The computational techniques for eye clipping are intrinsically linked to the video acquisition approach, which does not have a Head-Mounted Camera and which aims to delimit or extract the region of interest (the eye). The commonly explored computational techniques were the Viola–Jones algorithm [66,67,74], algorithmic models based on Facial Landmark Points [57,70], and Geometrical Dependencies coupled with Binarization/grayscale/OpenCV strategies [56,63,65,75,78,79].
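A minimal sketch of eye clipping based on Facial Landmark Points is shown below, using Dlib's 68-point landmark model (points 36–41 outline one eye); the input image path and the pre-trained predictor file, which Dlib distributes separately, are assumptions.

```python
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
# shape_predictor_68_face_landmarks.dat must be downloaded from dlib.net.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

image = cv2.imread("frame.png")  # assumed captured frame
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
for face in detector(gray):
    shape = predictor(gray, face)
    # Landmarks 36-41 outline the left eye in the 68-point convention.
    pts = [(shape.part(i).x, shape.part(i).y) for i in range(36, 42)]
    xs, ys = zip(*pts)
    # Region of interest: bounding box around the eye landmarks.
    eye_roi = gray[min(ys):max(ys), min(xs):max(xs)]
```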
For enhancing the pupil (or iris) in an eye image, the robust and versatile technique for detecting circles called the Circular Hough Transform [58,62,63,73,74] is highlighted. Other techniques covering the eye detection/localization task can also be mentioned, such as the Limbus tracking method [59], gradient-based algorithms [60,80], Machine Learning models based on Hierarchical Temporal Memory [61,64], Erosion with a cross-shaped structure element [68], segmentation based on thresholding [69], and the Clustering Method of Unbroken Pixel Lines [76].
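To illustrate, the sketch below applies OpenCV's implementation of the Circular Hough Transform to a cropped eye image to locate the pupil/iris; every parameter value is an assumption that would need tuning to the camera, resolution, and lighting.

```python
import cv2
import numpy as np

eye = cv2.imread("eye_roi.png", cv2.IMREAD_GRAYSCALE)  # assumed eye crop
assert eye is not None, "eye_roi.png not found"
eye = cv2.medianBlur(eye, 5)  # smooth noise before circle detection

# Detect circular shapes (pupil/iris boundary) in the eye region.
circles = cv2.HoughCircles(eye, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=20, minRadius=5, maxRadius=30)
if circles is not None:
    cx, cy, r = np.round(circles[0, 0]).astype(int)  # strongest candidate
    print(f"pupil centre approx. ({cx}, {cy}), radius {r} px")
```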
From the perspective of detecting eye blinking, the authors explored computational models based on Template Matching [65,68], optical flow technique and pixels’ motion analysis [66,67], mathematical formulas to calculate Eye Aspect Ratio (EAR) [70,80] or iris height and width [79], and the 2 Pixel Verification Methodology (black and white: open eye; black and black: closed eye) [69]. In addition to using the Template Matching technique, Missimer and Betke [68] incorporated the Lucas–Kanade optical flow algorithm and finite state machines into the proposed model. Krapic et al. [71] used software called eViacam with integrated implementations of Motion Analysis from the authors [83,84] and computational techniques (not specified) from the OpenCV Library.
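For instance, the Eye Aspect Ratio can be sketched as follows: the ratio of the vertical to the horizontal eye-landmark distances drops toward zero when the eye closes. The p1–p6 point convention and the 0.2 blink threshold are common choices in the EAR literature, not values prescribed by the primary studies.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|) for six eye landmarks."""
    p1, p2, p3, p4, p5, p6 = (np.asarray(p, dtype=float) for p in eye)
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

# Hypothetical landmark coordinates for an open eye.
open_eye = [(0, 0), (3, -2), (6, -2), (9, 0), (6, 2), (3, 2)]
print(round(eye_aspect_ratio(open_eye), 2))  # 0.44
# An EAR below ~0.2 for several consecutive frames can be treated as a blink.
```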
In the set of studies where the purpose was to develop a Human–Computer Interaction strategy based on gaze direction, it is evident that the authors followed the workflow, to identify the pupil/iris in the images in advance, using varied methods such as Circular Hough Transform [58,62,63,73,74] or gradient-based algorithm [60,80], and that they extracted values related to the coordinates that served as input to mathematical models (called Geometrical Dependencies in this work) that calculated and identified the gaze direction [58,60,62,63,73,74,78,79,80]. The authors Eom et al. [56] and Yildiz et al. [62] used Geometrical Dependencies to train and create Machine Learning models, using neural networks and the K-Nearest Neighbor algorithm, respectively.
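A minimal sketch of this geometric idea is given below: the offset of the detected pupil centre from the centre of the clipped eye region is mapped to a coarse gaze direction. The dead zone and the four-way classification are illustrative assumptions, not a method taken from any single study.

```python
def gaze_direction(pupil_xy, roi_size, dead_zone=0.15):
    """Map a pupil centre inside an eye ROI to a coarse gaze direction."""
    px, py = pupil_xy
    w, h = roi_size
    dx = (px - w / 2) / (w / 2)  # normalised horizontal offset in [-1, 1]
    dy = (py - h / 2) / (h / 2)  # normalised vertical offset in [-1, 1]
    if abs(dx) < dead_zone and abs(dy) < dead_zone:
        return "centre"
    if abs(dx) >= abs(dy):
        return "left" if dx < 0 else "right"
    return "up" if dy < 0 else "down"

# Pupil at (12, 20) in a 60x40 px eye region lies well left of centre.
print(gaze_direction((12, 20), (60, 40)))  # left
```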
Other approaches to classify gaze direction can also be mentioned. Zhang et al. [57] used Template Matching. Abe et al. [59] explored the vertical eye-gaze detection method, which is also based on the Limbus tracking technique. Rozado et al. [61] and Rozado et al. [64] combined Machine Learning models based on Hierarchical Temporal Memory with ITU Gaze Tracker (open source library software) and the Needleman–Wunsch algorithm, respectively. In a study by Oyabu et al. [76], the pupil position was defined using the Clustering Method of Unbroken Pixel Lines. Also performing mathematical operations, Aharonson et al. [75] calculated the pupil position using two different algorithms: a parametrical interpolation-based algorithm (called a polynomial) and a model-based algorithm (called a projection). Park and Park [72] built an expert embedded system, Pupil Center Corneal Reflection, to track pupils through hardware with attached adaptive lights and a mathematical model-based program. Kaushik et al. [77] used the EyeScan software.

3.3. Research Question 03

The performance-related evaluation of the computational techniques explored in the primary studies showed promising results in control-group testing. The analysis of the 13 studies that reported the performance of the techniques in percentage terms shows that the average accuracy (Acc) reached 94.12% (std = 4.14; median = 95%) [57,58,61,63,64,65,66,67,68,74,77,79,80]. The approach proposed by Park and Park [72] obtained an accuracy of 1–2°. In addition to an accuracy of 95.17%, Królak and Strumiłło [65] measured Recall and Precision, obtaining 96.91% and 98.13%, respectively. Singh and Singh [67] also measured performance on more than one metric, showing 91.2% Acc and 94.11% Precision. Abe et al. [59] presented the average error in two perspectives of eye-gaze detection: vertical detection (0.56°) and horizontal detection (1.09°). Rahnama-ye-Moqaddam and Vahdat-Nejad [60] reported an average error rate of 5.68%, while Yildiz et al. [62] reported a best-case error rate of 0.98%. All the computational techniques explored are identified in Table 4.
Other approaches to evaluating technology performance for Human–Computer Interaction have also been used. Eom et al. [56] conducted computer experiments with a control group and summed the individual participants' gaze movement errors (vertical and horizontal). Zhang et al. [57] evaluated, in addition to the Eye-Gaze system, the usability of the Augmentative and Alternative Communication software through a Likert-scale questionnaire [85]. Similarly, Krapic et al. [71] conducted usability tests. Rupanagudi et al. [69] evaluated and compared the speed of the proposed algorithm against another approach in the literature. With an evaluation system based on pattern recognition, Rakshita [70] reported the efficiency of the approach (without quantifying it). Saleh and Tarek [73] evaluated their proposal on an interface with six targets representing user needs. Aharonson et al. [75] constructed a table containing each user's mean deviation in degrees. The experimental results in Oyabu et al. [76] were presented as timings from a "click experiment screenshot" system. Kavale et al. [78] showed the performance of the techniques used through images.

3.4. Research Question 04

Based on the Video Acquisition step presented in Figure 3, 76% of the primary studies [56,57,58,59,60,61,65,66,67,68,69,70,71,72,74,76,78,79,80] performed computational experiments using devices for image collection placed on a table or integrated into the computer itself, as in the case of notebooks or smartphones with integrated cameras, which characterizes a Human–Computer Interaction approach where users are free of devices on their body. Alternatively, 24% of the studies explored a Human–Computer Interaction approach where the prototype for image collection, the camera, was mounted on the user’s head [62,63,64,73,75,77].
From a general perspective, 52% of the primary studies proposed Human–Computer Interaction devices equipped with some light source projected onto the user's eye or face, either infrared lights [61,63,64,69,72,73,76,77,78,79,80] or lamps [66,67]. In the category of Human–Computer Interaction strategies based on Eye-Gaze, which accumulated the largest number of studies (nine), five proposed Augmentative and Alternative Communication approaches (approximately 55.6%) using cameras free of additional or body-mounted features [56,57,58,59,60]. Of the other four studies in the same category, three explored Head-Mounted Camera approaches [62,63,64], two of which included infrared [63,64], while Rozado et al. [61] added infrared lights to the camera.
In the Eye-Blink category, all seven studies explored image capture using cameras not mounted on the user’s head [65,66,67,68,69,70,71]. Three studies equipped the cameras with some type of light source projected onto the user’s eye or face, with one being infrared lights [69] and two being lamps [66,67]. Video acquisition in studies belonging to the Eye-Tracking category varied between approaches with Head-Mounted Cameras [73,75,77], two of them with infrared [73,77], and with non-head-mounted cameras equipped with [72,76,78] and without [74] infrared. The authors Zhao et al. [79] and Xu and Lin [80], from the hybrid Human–Computer Interaction strategies category, investigated Augmentative and Alternative Communication techniques from images collected from cameras with infrared.

3.5. Research Question 05

The data extracted from the primary studies to answer this Research Question are summarized in Figure 4, which clearly shows that only one study, by Rahnama-ye-Moqaddam and Vahdat-Nejad [60] (Eye-Gaze), performed experimental tests on a person with ALS. A second study, by Królak and Strumiłło [65], included 12 participants with other (unspecified) disabilities in experimental tests of its Eye-Blink approach. All primary studies performed tests with healthy controls, with an average of 10.76 participants per study (std = 12.3; median = 5).

4. Discussion

This Systematic Literature Review investigated 25 primary studies on image-based Human–Computer Interaction approaches using simple, low-cost cameras for Augmentative and Alternative Communication for people with ALS. Initially, as an answer to the research question, the results point to the possibility and feasibility of developing low-cost technologies for Human–Computer Interaction through eye imagery. However, there are still challenges to be explored in the broad areas of Computer Vision, Machine Learning, and Augmentative and Alternative Communication, related not only to cost but also to the efficiency and usability of eye-based Human–Computer Interaction technologies, particularly in the context of people with ALS. From this perspective, this Systematic Literature Review organized and discussed the main findings in sequence.
The first analysis, related to Human–Computer Interaction strategies, showed four strategies addressed by the primary studies: Eye-Gaze (36%) [56,57,58,59,60,61,62,63,64], Eye-Blink (28%) [65,66,67,68,69,70,71], Eye-Tracking (28%) [72,73,74,75,76,77,78], and Hybrid strategies (8%) [79,80]. It was also observed that 52% of the studies adopted additional features to control the environmental light falling on the user's eye or face; of this group, 11 studies resorted to infrared [61,63,64,69,72,73,76,77,78,79,80] and 2 to fluorescent light [66,67]. In practical terms, directing light beams at the eye aimed to create reflective effects in the pupil region (in the case of infrared lights) or reference points in the pupil/iris/sclera, so as to facilitate image processing and, hence, the detection and classification of gaze direction or eye state (open or closed). Also with a view to improving image processing conditions, gaze motion detection, and device performance, six studies (24%) conducted experiments with the camera attached to the users' heads [62,63,64,73,75,77].
The studies highlighted in this review could become even more relevant if they combined knowledge sharing with delivery of the Augmentative and Alternative Communication resource itself (the final product). Therefore, the goal must be to improve the functional capacities, that is, the autonomy, of people with motor disabilities. In this way, it may be possible to mitigate the effects of social isolation. Moreover, it also promotes the exercise of rights, citizenship, fundamental freedoms, and health care for people with ALS. This aspect is very significant, for it acts directly on health promotion, well-being, and the reduction of inequalities, which may be reflected in the promotion of equity. These factors are even foreseen in the Sustainable Development Goals (SDGs) that are part of the United Nations (UN) 2030 Agenda, particularly SDGs 3 (3.8) and 10 [86]. Therefore, this issue is not only about developing new technologies or simply providing low-cost solutions; it is also about acting, through technological mediation, as research that induces social inclusion, reduces inequalities, and promotes equity, values often not measured in scientific research of a more technological nature.
Although most of the results were obtained in healthy controls, they suggest that investment in research in this field is viable. However, public health policymakers must prioritize research in this area, to ensure that the poorest people diagnosed with ALS have access to assistive technologies that can improve their quality of life. It is not enough to develop new technologies: it is necessary to ensure that people with ALS have access to them, regardless of their social conditions. Securing investment in research in this field is essential, not only for providing access to people with this disease, which is fundamental, but also for ensuring the sustainability and advancement of studies in this area, which is often neglected by industry, as the market is very limited.
ALS is considered rare, and, despite efforts to seek digital health solutions, there are still significant challenges to be tackled: these include the need for more data, studies, and evidence on disease incidence and prevalence, which are essential but scarce pieces of information in the context of global health [12,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107]. There are few records or epidemiological studies in Brazil, and only two studies at the national level have been mentioned in the scientific literature. In 1998, Dietrich-Neto et al. [108] conducted a national survey and reported incidence and prevalence rates of 0.4 and 1.2 per 100,000 inhabitants, respectively. More recently, analyzing the period from 2004 to 2013, Moura et al. [109] estimated the average incidence of ALS at 0.461 cases per 100,000 inhabitants (with a trend of increasing incidence over the years), a rate similar to that of Dietrich-Neto et al. [108]. It is worth mentioning that, in Brazil, until 2019, there was no compulsory notification system or national registry for Amyotrophic Lateral Sclerosis, which may have led to under-reporting [107,110].
To address this problem of under-reporting in Brazil, Barbalho et al. [111] emphasize the National ALS Registry, an applied research project supported by the Brazilian Ministry of Health. According to Barbalho et al. [111], this National Registry is a project still in progress and under implementation throughout Brazil, the goal of which is to continuously map all people with ALS in the country online. Through the National Registry, it will be possible to develop epidemiological studies and analyses that can support the decision making of public authorities in the design of health policies in the context of ALS in Brazil, for example. In this sense, Law Project No. 4691 of 2019 [112] aimed to make the notification of rare diseases mandatory in Brazil, and so the National ALS Registry is a structuring part of this Law Project. Noteworthy in Brazil is the state of Rio Grande do Norte, which is in the Northeast Region of the country, as it was the first Brazilian state to publish Law No. 10,924 of 10 June 2021 [113], which made the notification of ALS compulsory.
There are many challenges in the ALS context, and it is clear that they go beyond the areas of health sciences and technology. However, understanding transdisciplinarity and the appropriate use of these technologies or digital health solutions could significantly improve access to quality health care, reduce inequalities, and improve life quality, especially for people with ALS. Therefore, it is also necessary to consider technologies as tools for society’s social and sustainable development.

5. Conclusions

This paper, through the execution of a Systematic Literature Review protocol, investigated primary studies in the literature and highlighted five relevant points that could directly contribute to development and technological effectiveness in providing eye-image-based Human–Computer Interaction strategies regarding Augmentative and Alternative Communication for people with ALS. The first point showed the Human–Computer Interaction approaches based on eye images: Eye-Gaze (36%) [56,57,58,59,60,61,62,63,64]; Eye-Blink (28%) [65,66,67,68,69,70,71]; Eye-Tracking (28%) [72,73,74,75,76,77,78]; and hybrid strategies (8%) [79,80]. These Human–Computer Interaction approaches are the results of efforts by the scientific community to develop low-cost solutions and to indicate the feasibility of their use as assistive technologies for Augmentative and Alternative Communication for people with ALS or other diseases that compromise functional abilities. The computational resources related to Computer Vision/Machine Learning techniques and to hardware support for image acquisition and enhancement were also examined and described in Table 4, which summarizes the answers to these and the other investigated points.
The computational models identified showed potential for face and eye detection and for eye-movement tracking or eye-state classification (open or closed). However, there were limitations regarding experiments on people with ALS and, in some studies, regarding the methodological depth with which the model structure and application were described. Beyond these limitations, it is important to highlight that the computational techniques have reached an efficiency threshold (regarding performance), i.e., they are well consolidated for Human–Computer Interaction through the eyes. It is worth noting, however, that controlled computational experiments with a small and undiversified number of users may mask the actual results, yielding good test performance for models that do not generalize. These aspects could be explored in further research on approaches that avoid Head-Mounted Cameras or infrared, which could enable tests with people with ALS without causing discomfort.
The purpose of this Systematic Literature Review was to gather findings on eye-image-based Human–Computer Interaction approaches for the Augmentative and Alternative Communication of people with ALS. It is essentially optimistic research, regarding the innovation, development, and availability of low-cost technologies for universal access and significant improvements in the quality of life for people with ALS or other motor disabilities.

Author Contributions

F.F., I.B. and R.A.M.V. contributed to the conception and design of the study; methodology, F.F. and I.B.; collection, organization, and review of the literature, F.F. and I.B.; validation and visualization, F.F.; writing—original draft preparation, F.F.; writing—review and editing, F.F., I.B., A.B.J., H.L. and R.A.M.V.; supervision, R.A.M.V.; project administration, R.A.M.V. All authors contributed to manuscript revision and have read and agreed to the published version of the manuscript.

Funding

The Brazilian Ministry of Health funded the present study through the project Scientific and Technological Development Applied to ALS (Project No. 132/2018), carried out by the Laboratory of Technological Innovation in Health (LAIS) of the Federal University of Rio Grande do Norte (UFRN).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

We kindly thank the Laboratory of Technological Innovation in Health (LAIS) of the Federal University of Rio Grande do Norte (UFRN) and the Ministry of Health, Brazil, for supporting this research.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Appendix A.1

Search String 03 (SS03) for the PubMed repository:
((((((Eye Tracking Technology) OR (Eye-Tracking Technologies) OR (Technology, Eye-Tracking) OR (Eyetracking Technology) OR (Eyetracking Technologies) OR (Gaze-Tracking Technology) OR (Gaze Tracking Technology) OR (Gaze-Tracking Technologies) OR (Eye-Tracking System) OR (Eye Tracking System) OR (Eye-Tracking Systems) OR (Eyetracking System) OR (Eyetracking Systems) OR (Eye Movement Data Analysis) OR (Gaze-Tracking) OR (Gaze Tracking) OR (Gaze-Tracking System) OR (Gaze Tracking System) OR (Gaze-Tracking Systems) OR (Gazetracking System) OR (Gazetracking Systems) OR (Eye-Tracking) OR (Eye Tracking)) OR ((Eye Movement) OR (Movement, Eye) OR (Movements, Eye))) OR ((Eye Movement Measurement) OR (Measurement, Eye Movement) OR (Measurements, Eye Movement))) OR ((Focusing, Ocular) OR (Ocular Focusing) OR (Ocular Fixation) OR (Eye Gaze) OR (Eye Gazes) OR (Gaze, Eye) OR (Gazes, Eye))) OR ((Saccade) OR (Saccadic Eye Movements) OR (Eye Movement, Saccadic) OR (Eye Movements, Saccadic) OR (Movement, Saccadic Eye) OR (Movements, Saccadic Eye) OR (Saccadic Eye Movement) OR (Pursuit, Saccadic) OR (Pursuits, Saccadic) OR (Saccadic Pursuit) OR (Saccadic Pursuits))) AND ((Sclerosis, Amyotrophic Lateral) OR (Gehrig’s Disease) OR (Gehrig Disease) OR (Gehrigs Disease) OR (Charcot Disease) OR (Motor Neuron Disease, Amyotrophic Lateral Sclerosis) OR (Lou Gehrig’s Disease) OR (Lou-Gehrigs Disease) OR (Disease, Lou-Gehrigs) OR (ALS - Amyotrophic Lateral Sclerosis) OR (ALS Amyotrophic Lateral Sclerosis) OR (Lou Gehrig Disease) OR (Amyotrophic Lateral Sclerosis, Guam Form) OR (Amyotrophic Lateral Sclerosis-Parkinsonism-Dementia Complex 1) OR (Amyotrophic Lateral Sclerosis Parkinsonism Dementia Complex 1) OR (Guam Form of Amyotrophic Lateral Sclerosis) OR (Guam Disease) OR (Disease, Guam) OR (Amyotrophic Lateral Sclerosis, Parkinsonism-Dementia Complex of Guam) OR (Amyotrophic Lateral Sclerosis, Parkinsonism Dementia Complex of Guam) OR (Amyotrophic Lateral Sclerosis With Dementia) OR (Dementia With Amyotrophic Lateral Sclerosis)).

References

  1. Goutman, S.A.; Hardiman, O.; Al-Chalabi, A.; Chió, A.; Savelieff, M.G.; Kiernan, M.C.; Feldman, E.L. Recent advances in the diagnosis and prognosis of amyotrophic lateral sclerosis. Lancet Neurol. 2022, 21, 480–493. [Google Scholar] [CrossRef]
  2. Goutman, S.A.; Hardiman, O.; Al-Chalabi, A.; Chió, A.; Savelieff, M.G.; Kiernan, M.C.; Feldman, E.L. Emerging insights into the complex genetics and pathophysiology of amyotrophic lateral sclerosis. Lancet Neurol. 2022, 21, 465–479. [Google Scholar] [CrossRef]
  3. Saadeh, W.; Altaf, M.A.B.; Butt, S.A. A wearable neuro-degenerative diseases detection system based on gait dynamics. In Proceedings of the 2017 IFIP/IEEE International Conference on Very Large Scale Integration (VLSI-SoC), Abu Dhabi, United Arab Emirates, 23–25 October 2017; pp. 1–6. [Google Scholar] [CrossRef]
  4. Hardiman, O.; Al-Chalabi, A.; Chio, A.; Corr, E.M.; Logroscino, G.; Robberecht, W.; Shaw, P.J.; Simmons, Z.; van den Berg, L.H. Amyotrophic lateral sclerosis. Nat. Rev. Dis. Prim. 2017, 3, 17071. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. van Es, M.A.; Hardiman, O.; Chio, A.; Al-Chalabi, A.; Pasterkamp, R.J.; Veldink, J.H.; van den Berg, L.H. Amyotrophic lateral sclerosis. Lancet 2017, 390, 2084–2098. [Google Scholar] [CrossRef] [PubMed]
  6. Londral, A.; Pinto, A.; Pinto, S.; Azevedo, L.; De Carvalho, M. Quality of life in amyotrophic lateral sclerosis patients and caregivers: Impact of assistive communication from early stages. Muscle Nerve 2015, 52, 933–941. [Google Scholar] [CrossRef] [PubMed]
  7. Linse, K.; Aust, E.; Joos, M.; Hermann, A. Communication Matters—Pitfalls and Promise of Hightech Communication Devices in Palliative Care of Severely Physically Disabled Patients With Amyotrophic Lateral Sclerosis. Front. Neurol. 2018, 9, 603. [Google Scholar] [CrossRef] [PubMed]
  8. Linse, K.; Rüger, W.; Joos, M.; Schmitz-Peiffer, H.; Storch, A.; Hermann, A. Usability of eyetracking computer systems and impact on psychological wellbeing in patients with advanced amyotrophic lateral sclerosis. Amyotroph. Lateral Scler. Front. Degener. 2018, 19, 212–219. [Google Scholar] [CrossRef] [PubMed]
  9. Rosa Silva, J.P.; Santiago Júnior, J.B.; dos Santos, E.L.; de Carvalho, F.O.; de França Costa, I.M.P.; de Mendonça, D.M.F. Quality of life and functional independence in amyotrophic lateral sclerosis: A systematic review. Neurosci. Biobehav. Rev. 2020, 111, 1–11. [Google Scholar] [CrossRef]
  10. Gillespie, J.; Przybylak-Brouillard, A.; Watt, C.L. The Palliative Care Information Needs of Patients with Amyotrophic Lateral Sclerosis and their Informal Caregivers: A Scoping Review. J. Pain Symptom Manag. 2021, 62, 848–862. [Google Scholar] [CrossRef]
  11. Howard, I.M.; Burgess, K. Telehealth for Amyotrophic Lateral Sclerosis and Multiple Sclerosis. Phys. Med. Rehabil. Clin. N. Am. 2021, 32, 239–251. [Google Scholar] [CrossRef]
  12. Fernandes, F.; Barbalho, I.; Barros, D.; Valentim, R.; Teixeira, C.; Henriques, J.; Gil, P.; Dourado Júnior, M. Biomedical signals and machine learning in amyotrophic lateral sclerosis: A systematic review. Biomed. Eng. Online 2021, 20, 61. [Google Scholar] [CrossRef]
  13. de Lima Medeiros, P.A.; da Silva, G.V.S.; dos Santos Fernandes, F.R.; Sánchez-Gendriz, I.; Lins, H.W.C.; da Silva Barros, D.M.; Nagem, D.A.P.; de Medeiros Valentim, R.A. Efficient machine learning approach for volunteer eye-blink detection in real-time using webcam. Expert Syst. Appl. 2022, 188, 116073. [Google Scholar] [CrossRef]
  14. Caligari, M.; Godi, M.; Guglielmetti, S.; Franchignoni, F.; Nardone, A. Eye tracking communication devices in amyotrophic lateral sclerosis: Impact on disability and quality of life. Amyotroph. Lateral Scler. Front. Degener. 2013, 14, 546–552. [Google Scholar] [CrossRef] [PubMed]
  15. Hwang, C.S.; Weng, H.H.; Wang, L.F.; Tsai, C.H.; Chang, H.T. An Eye-Tracking Assistive Device Improves the Quality of Life for ALS Patients and Reduces the Caregivers’ Burden. J. Mot. Behav. 2014, 46, 233–238. [Google Scholar] [CrossRef] [PubMed]
  16. Shravani, T.; Sai, R.; Vani Shree, M.; Amudha, J.; Jyotsna, C. Assistive Communication Application for Amyotrophic Lateral Sclerosis Patients. In Computational Vision and Bio-Inspired Computing; Smys, S., Tavares, J.M.R.S., Balas, V.E., Iliyasu, A.M., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 1397–1408. [Google Scholar]
  17. Eicher, C.; Kiselev, J.; Brukamp, K.; Kiemel, D.; Spittel, S.; Maier, A.; Oleimeulen, U.; Greuèl, M. Expectations and Concerns Emerging from Experiences with Assistive Technology for ALS Patients. In Universal Access in Human-Computer Interaction. Theory, Methods and Tools; Antona, M., Stephanidis, C., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 57–68. [Google Scholar]
  18. Sigafoos, J.; Schlosser, R.W.; Lancioni, G.E.; O’Reilly, M.F.; Green, V.A.; Singh, N.N. Assistive Technology for People with Communication Disorders. In Assistive Technologies for People with Diverse Abilities. Autism and Child Psychopathology Series; Lancioni, G.E., Singh, N.N., Eds.; Springer: New York, NY, USA, 2014; Chapter 4; pp. 77–112. [Google Scholar]
  19. Bona, S.; Donvito, G.; Cozza, F.; Malberti, I.; Vaccari, P.; Lizio, A.; Greco, L.; Carraro, E.; Sansone, V.A.; Lunetta, C. The development of an augmented reality device for the autonomous management of the electric bed and the electric wheelchair for patients with amyotrophic lateral sclerosis: A pilot study. Disabil. Rehabil. Assist. Technol. 2019, 16, 513–519. [Google Scholar] [CrossRef] [PubMed]
  20. Santana G., A.; Ortiz C, O.; Acosta, J.F.; Andaluz, V.H. Autonomous Assistance System for People with Amyotrophic Lateral Sclerosis. In IT Convergence and Security 2017; Kim, K.J., Kim, H., Baek, N., Eds.; Springer: Singapore, 2018; pp. 267–277. [Google Scholar]
  21. Elliott, M.A.; Malvar, H.; Maassel, L.L.; Campbell, J.; Kulkarni, H.; Spiridonova, I.; Sophy, N.; Beavers, J.; Paradiso, A.; Needham, C.; et al. Eye-controlled, power wheelchair performs well for ALS patients. Muscle Nerve 2019, 60, 513–519. [Google Scholar] [CrossRef] [Green Version]
  22. Ramakrishnan, J.; Mavaluru, D.; Sakthivel, R.S.; Alqahtani, A.S.; Mubarakali, A.; Retnadhas, M. Brain–computer interface for amyotrophic lateral sclerosis patients using deep learning network. Neural Comput. Appl. 2022, 34, 13439–13453. [Google Scholar] [CrossRef]
  23. Miao, Y.; Yin, E.; Allison, B.Z.; Zhang, Y.; Chen, Y.; Dong, Y.; Wang, X.; Hu, D.; Chchocki, A.; Jin, J. An ERP-based BCI with peripheral stimuli: Validation with ALS patients. Cogn. Neurodynam. 2020, 14, 21–33. [Google Scholar] [CrossRef] [PubMed]
  24. Sorbello, R.; Tramonte, S.; Giardina, M.E.; La Bella, V.; Spataro, R.; Allison, B.; Guger, C.; Chella, A. A Human–Humanoid Interaction Through the Use of BCI for Locked-In ALS Patients Using Neuro-Biological Feedback Fusion. IEEE Trans. Neural Syst. Rehabil. Eng. 2018, 26, 487–497. [Google Scholar] [CrossRef] [PubMed]
  25. Liu, Y.H.; Huang, S.; Huang, Y.D. Motor imagery EEG classification for patients with amyotrophic lateral sclerosis using fractal dimension and Fisher’s criterion-based channel selection. Sensors 2017, 17, 1557. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  26. Vansteensel, M.J.; Pels, E.G.; Bleichner, M.G.; Branco, M.P.; Denison, T.; Freudenburg, Z.V.; Gosselaar, P.; Leinders, S.; Ottens, T.H.; Van Den Boom, M.A.; et al. Fully Implanted Brain–Computer Interface in a Locked-In Patient with ALS. N. Engl. J. Med. 2016, 375, 2060–2066. [Google Scholar] [CrossRef]
  27. Mainsah, B.O.; Collins, L.M.; Colwell, K.A.; Sellers, E.W.; Ryan, D.B.; Caves, K.; Throckmorton, C.S. Increasing BCI communication rates with dynamic stopping towards more practical use: An ALS study. J. Neural Eng. 2015, 12, 016013. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  28. McCane, L.M.; Sellers, E.W.; McFarland, D.J.; Mak, J.N.; Carmack, C.S.; Zeitlin, D.; Wolpaw, J.R.; Vaughan, T.M. Brain-computer interface (BCI) evaluation in people with amyotrophic lateral sclerosis. Amyotroph. Lateral Scler. Front. Degener. 2014, 15, 207–215. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  29. Jaramillo-Yánez, A.; Benalcázar, M.E.; Mena-Maldonado, E. Real-Time Hand Gesture Recognition Using Surface Electromyography and Machine Learning: A Systematic Literature Review. Sensors 2020, 20, 2467. [Google Scholar] [CrossRef]
  30. Tonin, A.; Jaramillo-Gonzalez, A.; Rana, A.; Khalili-Ardali, M.; Birbaumer, N.; Chaudhary, U. Auditory Electrooculogram-based Communication System for ALS Patients in Transition from Locked-in to Complete Locked-in State. Sci. Rep. 2020, 10, 8452. [Google Scholar] [CrossRef] [PubMed]
  31. Zhang, R.; He, S.; Yang, X.; Wang, X.; Li, K.; Huang, Q.; Yu, Z.; Zhang, X.; Tang, D.; Li, Y. An EOG-Based Human-Machine Interface to Control a Smart Home Environment for Patients with Severe Spinal Cord Injuries. IEEE Trans. Biomed. Eng. 2019, 66, 89–100. [Google Scholar] [CrossRef] [PubMed]
  32. Chang, W.D.; Cha, H.S.; Kim, D.Y.; Kim, S.H.; Im, C.H. Development of an electrooculogram-based eye-computer interface for communication of individuals with amyotrophic lateral sclerosis. J. Neuroeng. Rehabil. 2017, 14, 89. [Google Scholar] [CrossRef] [Green Version]
  33. Larson, A.; Herrera, J.; George, K.; Matthews, A. Electrooculography based electronic communication device for individuals with ALS. In Proceedings of the 2017 IEEE Sensors Applications Symposium (SAS), Glassboro, NJ, USA, 13–15 March 2017; pp. 1–5. [Google Scholar] [CrossRef]
  34. Lingegowda, D.R.; Amrutesh, K.; Ramanujam, S. Electrooculography based assistive technology for ALS patients. In Proceedings of the 2017 IEEE International Conference on Consumer Electronics-Asia (ICCE-Asia), Bengaluru, India, 5–7 October 2017; pp. 36–40. [Google Scholar] [CrossRef]
  35. Pinheiro, C.G.; Naves, E.L.; Pino, P.; Losson, E.; Andrade, A.O.; Bourhis, G. Alternative communication systems for people with severe motor disabilities: A survey. Biomed. Eng. Online 2011, 10, 31. [Google Scholar] [CrossRef] [Green Version]
  36. Chaudhary, U.; Vlachos, I.; Zimmermann, J.B.; Espinosa, A.; Tonin, A.; Jaramillo-Gonzalez, A.; Khalili-Ardali, M.; Topka, H.; Lehmberg, J.; Friehs, G.M.; et al. Spelling interface using intracortical signals in a completely locked-in patient enabled via auditory neurofeedback training. Nat. Commun. 2022, 13, 1236. [Google Scholar] [CrossRef]
  37. Singh, H.; Singh, J. Object Acquisition and Selection in Human Computer Interaction Systems: A Review. Int. J. Intell. Syst. Appl. Eng. 2019, 7, 19–29. [Google Scholar] [CrossRef] [Green Version]
  38. Chaudhary, U.; Birbaumer, N.; Ramos-Murguialday, A. Brain—Computer interfaces for communication and rehabilitation. Nat. Rev. Neurol. 2016, 12, 513–525. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  39. Kayadibi, I.; Güraksın, G.E.; Ergün, U.; Özmen Süzme, N. An Eye State Recognition System Using Transfer Learning: AlexNet-Based Deep Convolutional Neural Network. Int. J. Comput. Intell. Syst. 2022, 15, 49. [Google Scholar] [CrossRef]
  40. Fathi, A.; Abdali-Mohammadi, F. Camera-based eye blinks pattern detection for intelligent mouse. Signal Image Video Process. 2015, 9, 1907–1916. [Google Scholar] [CrossRef]
  41. Mu, S.; Shibata, S.; Chun Chiu, K.; Yamamoto, T.; kuan Liu, T. Study on eye-gaze input interface based on deep learning using images obtained by multiple cameras. Comput. Electr. Eng. 2022, 101, 108040. [Google Scholar] [CrossRef]
  42. Hwang, I.S.; Tsai, Y.Y.; Zeng, B.H.; Lin, C.M.; Shiue, H.S.; Chang, G.C. Integration of eye tracking and lip motion for hands-free computer access. Univers. Access Inf. Soc. 2021, 20, 405–416. [Google Scholar] [CrossRef]
  43. Blignaut, P. Development of a gaze-controlled support system for a person in an advanced stage of multiple sclerosis: A case study. Univers. Access Inf. Soc. 2017, 16, 1003–1016. [Google Scholar] [CrossRef]
  44. Chareonsuk, W.; Kanhaun, S.; Khawkam, K.; Wongsawang, D. Face and Eyes mouse for ALS Patients. In Proceedings of the 2016 Fifth ICT International Student Project Conference (ICT-ISPC), Nakhonpathom, Thailand, 27–28 May 2016; pp. 77–80. [Google Scholar] [CrossRef]
  45. Liu, S.S.; Rawicz, A.; Ma, T.; Zhang, C.; Lin, K.; Rezaei, S.; Wu, E. An Eye-Gaze Tracking and Human Computer Interface System for People with ALS and Other Locked-in Diseases. J. Med. Biol. Eng. 2012, 32, 1–3. [Google Scholar]
  46. Liu, Y.; Lee, B.S.; Rajan, D.; Sluzek, A.; McKeown, M.J. CamType: Assistive text entry using gaze with an off-the-shelf webcam. Mach. Vis. Appl. 2019, 30, 407–421. [Google Scholar] [CrossRef]
  47. Holmqvist, K.; Örbom, S.L.; Hooge, I.T.C.; Niehorster, D.C.; Alexander, R.G.; Andersson, R.; Benjamins, J.S.; Blignaut, P.; Brouwer, A.M.; Chuang, L.L.; et al. Eye tracking: Empirical foundations for a minimal reporting guideline. Behav. Res. Methods 2023, 55, 364–416. [Google Scholar] [CrossRef]
  48. Kitchenham, B. Procedures for Performing Systematic Reviews; Technical Report; Keele University, Department of Computer Science, Software Engineering Group and Empirical Software Engineering National ICT Australia Ltd.: Keele, UK, 2004. [Google Scholar]
  49. Brereton, P.; Kitchenham, B.A.; Budgen, D.; Turner, M.; Khalil, M. Lessons from applying the systematic literature review process within the software engineering domain. J. Syst. Softw. 2007, 80, 571–583. [Google Scholar] [CrossRef] [Green Version]
  50. Kitchenham, B.A.; Budgen, D.; Brereton, P. Evidence-Based Software Engineering and Systematic Reviews, 1st ed.; Chapman and Hall/CRC: New York, NY, USA, 2016. [Google Scholar]
  51. Snyder, H. Literature review as a research methodology: An overview and guidelines. J. Bus. Res. 2019, 104, 333–339. [Google Scholar] [CrossRef]
  52. Keele, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering; Technical Report; Keele University and University of Durham: Staffs, UK, 2007. [Google Scholar]
  53. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, 71. [Google Scholar] [CrossRef] [PubMed]
  54. Fernandes, F.; Barbalho, I. Camera-Based Eye Interaction Techniques for Amyotrophic Lateral Sclerosis Individuals: A Systematic Review. PROSPERO. 2021. CRD42021230721. Available online: https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=230721 (accessed on 29 March 2021).
  55. Ouzzani, M.; Hammady, H.; Fedorowicz, Z.; Elmagarmid, A. Rayyan—A web and mobile app for systematic reviews. Syst. Rev. 2016, 5, 210. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  56. Eom, Y.; Mu, S.; Satoru, S.; Liu, T. A Method to Estimate Eye Gaze Direction When Wearing Glasses. In Proceedings of the 2019 International Conference on Technologies and Applications of Artificial Intelligence (TAAI), Kaohsiung, Taiwan, 21–23 November 2019; pp. 1–6. [Google Scholar] [CrossRef]
  57. Zhang, X.; Kulkarni, H.; Morris, M.R. Smartphone-Based Gaze Gesture Communication for People with Motor Disabilities. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI ’17, New York, NY, USA, 6–11 May 2017; pp. 2878–2889. [Google Scholar] [CrossRef]
  58. Aslam, Z.; Junejo, A.Z.; Memon, A.; Raza, A.; Aslam, J.; Thebo, L.A. Optical Assistance for Motor Neuron Disease (MND) Patients Using Real-time Eye Tracking. In Proceedings of the 2019 8th International Conference on Information and Communication Technologies (ICICT), Karachi, Pakistan, 16–17 November 2019; pp. 61–65. [Google Scholar] [CrossRef]
  59. Abe, K.; Ohi, S.; Ohyama, M. Eye-gaze Detection by Image Analysis under Natural Light. In Human-Computer Interaction. Interaction Techniques and Environments; Jacko, J.A., Ed.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 176–184. [Google Scholar]
  60. Rahnama-ye Moqaddam, R.; Vahdat-Nejad, H. Designing a pervasive eye movement-based system for ALS and paralyzed patients. In Proceedings of the 2015 5th International Conference on Computer and Knowledge Engineering (ICCKE), Mashhad, Iran, 29 October 2015; pp. 218–221. [Google Scholar] [CrossRef]
  61. Rozado, D.; Rodriguez, F.B.; Varona, P. Low cost remote gaze gesture recognition in real time. Appl. Soft Comput. 2012, 12, 2072–2084. [Google Scholar] [CrossRef]
  62. Yildiz, M.; Yorulmaz, M. Gaze-Controlled Turkish Virtual Keyboard Application with Webcam. In Proceedings of the 2019 Medical Technologies Congress (TIPTEKNO), Izmir, Turkey, 3–5 October 2019; pp. 1–4. [Google Scholar] [CrossRef]
  63. Nakazawa, N.; Aikawa, S.; Matsui, T. Development of Communication Aid Device for Disabled Persons Using Corneal Surface Reflection Image. In Proceedings of the 2nd International Conference on Graphics and Signal Processing, ICGSP’18, New York, NY, USA, 6–8 October 2018; pp. 16–20. [Google Scholar] [CrossRef]
  64. Rozado, D.; Agustin, J.S.; Rodriguez, F.B.; Varona, P. Gliding and Saccadic Gaze Gesture Recognition in Real Time. ACM Trans. Interact. Intell. Syst. 2012, 1, 1–27. [Google Scholar] [CrossRef]
  65. Królak, A.; Strumiłło, P. Eye-blink detection system for human–computer interaction. Univers. Access Inf. Soc. 2012, 11, 409–419. [Google Scholar] [CrossRef] [Green Version]
  66. Singh, H.; Singh, J. Object acquisition and selection using automatic scanning and eye blinks in an HCI system. J. Multimodal User Interfaces 2019, 13, 405–417. [Google Scholar] [CrossRef]
  67. Singh, H.; Singh, J. Real-time eye blink and wink detection for object selection in HCI systems. J. Multimodal User Interfaces 2018, 12, 55–65. [Google Scholar] [CrossRef]
  68. Missimer, E.; Betke, M. Blink and Wink Detection for Mouse Pointer Control. In Proceedings of the 3rd International Conference on PErvasive Technologies Related to Assistive Environments, PETRA ’10, New York, NY, USA, 23–25 June 2010. [Google Scholar] [CrossRef] [Green Version]
  69. Rupanagudi, S.R.; Bhat, V.G.; Ranjani, B.S.; Srisai, A.; Gurikar, S.K.; Pranay, M.R.; Chandana, S. A simplified approach to assist motor neuron disease patients to communicate through video oculography. In Proceedings of the 2018 International Conference on Communication information and Computing Technology (ICCICT), Mumbai, India, 2–3 February 2018; pp. 1–6. [Google Scholar] [CrossRef]
  70. Rakshita, R. Communication Through Real-Time Video Oculography Using Face Landmark Detection. In Proceedings of the 2018 Second International Conference on Inventive Communication and Computational Technologies (ICICCT), Coimbatore, India, 20–21 April 2018; pp. 1094–1098. [Google Scholar] [CrossRef]
  71. Krapic, L.; Lenac, K.; Ljubic, S. Integrating Blink Click interaction into a head tracking system: Implementation and usability issues. Univers. Access Inf. Soc. 2015, 14, 247–264. [Google Scholar] [CrossRef]
  72. Park, J.H.; Park, J.B. A novel approach to the low cost real time eye mouse. Comput. Stand. Interfaces 2016, 44, 169–176. [Google Scholar] [CrossRef]
  73. Saleh, N.; Tarek, A. Vision-Based Communication System for Patients with Amyotrophic Lateral Sclerosis. In Proceedings of the 2021 3rd Novel Intelligent and Leading Emerging Sciences Conference (NILES), Giza, Egypt, 23–25 October 2021; pp. 19–22. [Google Scholar] [CrossRef]
  74. Atasoy, N.A.; Çavuşoğlu, A.; Atasoy, F. Real-Time motorized electrical hospital bed control with eye-gaze tracking. Turk. J. Electr. Eng. Comput. Sci. 2016, 24, 5162–5172. [Google Scholar] [CrossRef]
  75. Aharonson, V.; Coopoo, V.Y.; Govender, K.L.; Postema, M. Automatic pupil detection and gaze estimation using the vestibulo-ocular reflex in a low-cost eye-tracking setup. SAIEE Afr. Res. J. 2020, 111, 120–124. [Google Scholar] [CrossRef]
  76. Oyabu, Y.; Takano, H.; Nakamura, K. Development of the eye input device using eye movement obtained by measuring the center position of the pupil. In Proceedings of the 2012 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Seoul, Republic of Korea, 14–17 October 2012; pp. 2948–2952. [Google Scholar] [CrossRef]
  77. Kaushik, R.; Arora, T.; Tripathi, R. Design of Eyewriter for ALS Patients through Eyecan. In Proceedings of the 2018 International Conference on Advances in Computing, Communication Control and Networking (ICACCCN), Greater Noida, India, 12–13 October 2018; pp. 991–995. [Google Scholar] [CrossRef]
  78. Kavale, K.; Kokambe, K.; Jadhav, S. taskEYE: A Novel Approach to Help People Interact with Their Surrounding Through Their Eyes. In Proceedings of the 2018 IEEE 18th International Conference on Advanced Learning Technologies (ICALT), Mumbai, India, 9–13 July 2018; pp. 311–313. [Google Scholar] [CrossRef]
  79. Zhao, Q.; Yuan, X.; Tu, D.; Lu, J. Eye moving behaviors identification for gaze tracking interaction. J. Multimodal User Interfaces 2015, 9, 89–104. [Google Scholar] [CrossRef]
  80. Xu, C.L.; Lin, C.Y. Eye-motion detection system for MND patients. In Proceedings of the 2017 IEEE 4th International Conference on Soft Computing & Machine Intelligence (ISCMI), Port Louis, Mauritius, 23–24 November 2017; pp. 99–103. [Google Scholar] [CrossRef]
  81. Bradski, G.R. Computer vision face tracking for use in a perceptual user interface. Intel Technol. J. 1998, 2, 1–15. [Google Scholar]
  82. King, D.E. Dlib-ml: A Machine Learning Toolkit. J. Mach. Learn. Res. 2009, 10, 1755–1758. [Google Scholar]
  83. Grauman, K.; Betke, M.; Gips, J.; Bradski, G. Communication via eye blinks - detection and duration analysis in real time. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2001, Kauai, HI, USA, 8–14 December 2001; Volume 1, pp. I–I. [Google Scholar] [CrossRef] [Green Version]
  84. Chau, M.; Betke, M. Real Time Eye Tracking and Blink Detection with USB Cameras; Technical Report; Boston University Computer Science Department: Boston, MA, USA, 2005. Available online: https://open.bu.edu/handle/2144/1839 (accessed on 20 May 2023).
  85. Likert, R. A technique for the measurement of attitudes. Arch. Psychol. 1932, 22, 55. [Google Scholar]
  86. United Nations. Take Action for the Sustainable Development Goals. New York, NY, USA. Available online: https://www.un.org/sustainabledevelopment/sustainable-development-goals/ (accessed on 10 April 2023).
  87. Papaiz, F.; Dourado, M.E.T.; Valentim, R.A.d.M.; de Morais, A.H.F.; Arrais, J.P. Machine Learning Solutions Applied to Amyotrophic Lateral Sclerosis Prognosis: A Review. Front. Comput. Sci. 2022, 4, 869140. [Google Scholar] [CrossRef]
  88. Gromicho, M.; Leão, T.; Oliveira Santos, M.; Pinto, S.; Carvalho, A.M.; Madeira, S.C.; De Carvalho, M. Dynamic Bayesian networks for stratification of disease progression in amyotrophic lateral sclerosis. Eur. J. Neurol. 2022, 29, 2201–2210. [Google Scholar] [CrossRef] [PubMed]
  89. Tavazzi, E.; Daberdaku, S.; Zandonà, A.; Vasta, R.; Nefussy, B.; Lunetta, C.; Mora, G.; Mandrioli, J.; Grisan, E.; Tarlarini, C.; et al. Predicting functional impairment trajectories in amyotrophic lateral sclerosis: A probabilistic, multifactorial model of disease progression. J. Neurol. 2022, 269, 3858–3878. [Google Scholar] [CrossRef]
  90. Gordon, J.; Lerner, B. Insights into Amyotrophic Lateral Sclerosis from a Machine Learning Perspective. J. Clin. Med. 2019, 8, 1578. [Google Scholar] [CrossRef] [Green Version]
  91. Ahangaran, M.; Chiò, A. AIM in Amyotrophic Lateral Sclerosis. In Artificial Intelligence in Medicine; Lidströmer, N., Ashrafian, H., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 1–13. [Google Scholar] [CrossRef]
  92. Dalgıç, Ö.O.; Wu, H.; Safa Erenay, F.; Sir, M.Y.; Özaltın, O.Y.; Crum, B.A.; Pasupathy, K.S. Mapping of critical events in disease progression through binary classification: Application to amyotrophic lateral sclerosis. J. Biomed. Inform. 2021, 123, 103895. [Google Scholar] [CrossRef] [PubMed]
  93. Ahangaran, M.; Chiò, A.; D’Ovidio, F.; Manera, U.; Vasta, R.; Canosa, A.; Moglia, C.; Calvo, A.; Minaei-Bidgoli, B.; Jahed-Motlagh, M.R. Causal associations of genetic factors with clinical progression in amyotrophic lateral sclerosis. Comput. Methods Programs Biomed. 2022, 216, 106681. [Google Scholar] [CrossRef] [PubMed]
  94. Bede, P.; Murad, A.; Hardiman, O. Pathological neural networks and artificial neural networks in ALS: Diagnostic classification based on pathognomonic neuroimaging features. J. Neurol. 2022, 269, 2440–2452. [Google Scholar] [CrossRef] [PubMed]
  95. Iadanza, E.; Fabbri, R.; Goretti, F.; Nardo, G.; Niccolai, E.; Bendotti, C.; Amedei, A. Machine learning for analysis of gene expression data in fast- and slow-progressing amyotrophic lateral sclerosis murine models. Biocybern. Biomed. Eng. 2022, 42, 273–284. [Google Scholar] [CrossRef]
  96. Thome, J.; Steinbach, R.; Grosskreutz, J.; Durstewitz, D.; Koppe, G. Classification of amyotrophic lateral sclerosis by brain volume, connectivity, and network dynamics. Hum. Brain Mapp. 2022, 43, 681–699. [Google Scholar] [CrossRef]
  97. Greco, A.; Chiesa, M.R.; Da Prato, I.; Romanelli, A.M.; Dolciotti, C.; Cavallini, G.; Masciandaro, S.M.; Scilingo, E.P.; Del Carratore, R.; Bongioanni, P. Using blood data for the differential diagnosis and prognosis of motor neuron diseases: A new dataset for machine learning applications. Sci. Rep. 2021, 11, 3371. [Google Scholar] [CrossRef]
  98. Kocar, T.D.; Behler, A.; Ludolph, A.C.; Müller, H.P.; Kassubek, J. Multiparametric Microstructural MRI and Machine Learning Classification Yields High Diagnostic Accuracy in Amyotrophic Lateral Sclerosis: Proof of Concept. Front. Neurol. 2021, 12, 745475. [Google Scholar] [CrossRef]
  99. Leão, T.; Madeira, S.C.; Gromicho, M.; de Carvalho, M.; Carvalho, A.M. Learning dynamic Bayesian networks from time-dependent and time-independent data: Unraveling disease progression in Amyotrophic Lateral Sclerosis. J. Biomed. Inform. 2021, 117, 103730. [Google Scholar] [CrossRef]
  100. Grollemund, V.; Le Chat, G.; Secchi-Buhour, M.S.; Delbot, F.; Pradat-Peyre, J.F.; Bede, P.; Pradat, P.F. Manifold learning for amyotrophic lateral sclerosis functional loss assessment. J. Neurol. 2021, 268, 825–850. [Google Scholar] [CrossRef] [PubMed]
  101. Kocar, T.D.; Müller, H.P.; Ludolph, A.C.; Kassubek, J. Feature selection from magnetic resonance imaging data in ALS: A systematic review. Ther. Adv. Chronic Dis. 2021, 12, 20406223211051002. [Google Scholar] [CrossRef]
  102. Grollemund, V.; Le Chat, G.; Secchi-Buhour, M.S.; Delbot, F.; Pradat-Peyre, J.F.; Bede, P.; Pradat, P.F. Development and validation of a 1-year survival prognosis estimation model for Amyotrophic Lateral Sclerosis using manifold learning algorithm UMAP. Sci. Rep. 2020, 10, 13378. [Google Scholar] [CrossRef] [PubMed]
  103. Myszczynska, M.A.; Ojamies, P.N.; Lacoste, A.M.B.; Neil, D.; Saffari, A.; Mead, R.; Hautbergue, G.M.; Holbrook, J.D.; Ferraiuolo, L. Applications of machine learning to diagnosis and treatment of neurodegenerative diseases. Nat. Rev. Neurol. 2020, 16, 440–456. [Google Scholar] [CrossRef]
  104. Chen, Q.F.; Zhang, X.H.; Huang, N.X.; Chen, H.J. Identification of Amyotrophic Lateral Sclerosis Based on Diffusion Tensor Imaging and Support Vector Machine. Front. Neurol. 2020, 11, 275. [Google Scholar] [CrossRef] [PubMed]
  105. Grollemund, V.; Pradat, P.F.; Querin, G.; Delbot, F.; Le Chat, G.; Pradat-Peyre, J.F.; Bede, P. Machine Learning in Amyotrophic Lateral Sclerosis: Achievements, Pitfalls, and Future Directions. Front. Neurosci. 2019, 13, 135. [Google Scholar] [CrossRef] [Green Version]
  106. Pinto, S.; Quintarelli, S.; Silani, V. New technologies and Amyotrophic Lateral Sclerosis – Which step forward rushed by the COVID-19 pandemic? J. Neurol. Sci. 2020, 418, 117081. [Google Scholar] [CrossRef] [PubMed]
  107. Barbalho, I.; Valentim, R.; Júnior, M.D.; Barros, D.; Júnior, H.P.; Fernandes, F.; Teixeira, C.; Lima, T.; Paiva, J.; Nagem, D. National registry for amyotrophic lateral sclerosis: A systematic review for structuring population registries of motor neuron diseases. BMC Neurol. 2021, 21, 269. [Google Scholar] [CrossRef]
  108. Dietrich-Neto, F.; Callegaro, D.; Dias-Tosta, E.; Silva, H.A.; Ferraz, M.E.; Lima, J.M.B.D.; Oliveira, A.S.B. Amyotrophic lateral sclerosis in Brazil: 1998 national survey. Arq. Neuro-Psiquiatr. 2000, 58, 607–615. [Google Scholar] [CrossRef] [Green Version]
  109. Moura, M.C.; Casulari, L.A.; Novaes, M.R.C.G. Ethnic and demographic incidence of amyotrophic lateral sclerosis (ALS) in Brazil: A population based study. Amyotroph. Lateral Scler. Front. Degener. 2016, 17, 275–281. [Google Scholar] [CrossRef]
  110. Barbalho, I.M.P.; Fernandes, F.; Barros, D.M.S.; Paiva, J.C.; Henriques, J.; Morais, A.H.F.; Coutinho, K.D.; Coelho Neto, G.C.; Chioro, A.; Valentim, R.A.M. Electronic health records in Brazil: Prospects and technological challenges. Front. Public Health 2022, 10, 963841. [Google Scholar] [CrossRef]
  111. Barbalho, I.M.P.; Fonseca, A.; Fernandes, F.; Henriques, J.; Gil, P.; Nagem, D.; Lindquist, R.; Santos-Lima, T.; Santos, J.P.Q.; Paiva, J.C.; et al. Digital Health Solution for Monitoring and Surveillance of Amyotrophic Lateral Sclerosis in Brazil. Front. Public Health 2023, 11, 1209633. [Google Scholar] [CrossRef]
  112. Brasil. Projeto de Lei N° 4691, De 2019. Atividade Legislativa. Senado Federal. Brasília, DF. Available online: https://www25.senado.leg.br/web/atividade/materias/-/materia/138326 (accessed on 10 July 2023).
  113. Brasil. Lei N° 10924, de 10 de Junho de 2021. Diário Oficial do Rio Grande do Norte. Available online: http://diariooficial.rn.gov.br/dei/dorn3/docview.aspx?id_jor=00000001&data=20210611&id_doc=726286 (accessed on 10 July 2023).
Figure 1. Result of the search and screening process of primary studies for this systematic review.
Figure 2. Strategies for Human–Computer Interaction (HCI) based on eye images.
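Among the computational techniques recurring across these strategies, the Circular Hough Transform (CHT) is the most frequent choice for pupil localization [58,62,63,73,74]. The snippet below is a minimal illustrative sketch of that step using OpenCV, assuming a frame already cropped to the eye region; the parameter values are assumptions and are not taken from any reviewed study.

```python
import cv2
import numpy as np

def locate_pupil(eye_bgr):
    """Return (x, y, r) of the most salient circle in a cropped eye image."""
    gray = cv2.cvtColor(eye_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)  # suppress noise before edge detection
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, dp=1, minDist=gray.shape[0],
        param1=100, param2=20, minRadius=5, maxRadius=40)
    if circles is None:
        return None  # no circular candidate found in this frame
    x, y, r = np.around(circles[0, 0]).astype(int)
    return int(x), int(y), int(r)  # pupil center and radius in pixels
```

In a gaze-based interface, the returned pupil center would then be mapped to a screen position or gaze direction by a calibration step.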
Figure 3. Generic workflow (pipeline) model.
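To make the generic pipeline concrete, the sketch below chains the typical stages reported in the primary studies: webcam acquisition, grayscale pre-processing, Viola–Jones eye detection (used in [65,66,67,74]), and a crude binarization stand-in for feature extraction. It is an illustrative assembly under assumed parameter values, not the implementation of any single study.

```python
import cv2

# Viola-Jones eye detector shipped with the opencv-python package
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)                                # 1. image acquisition (generic webcam)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)       # 2. pre-processing
    for (x, y, w, h) in eye_cascade.detectMultiScale(gray, 1.3, 5):  # 3. eye detection
        roi = gray[y:y + h, x:x + w]
        # 4. feature extraction (placeholder): dark-pixel mask as a crude pupil cue
        _, pupil_mask = cv2.threshold(roi, 40, 255, cv2.THRESH_BINARY_INV)
        # 5./6. classification and AAC command (e.g., gaze direction mapped to a
        # virtual-keyboard selection) would follow here in a full system
cap.release()
```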
Figure 4. The number of subjects used in the primary studies [56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80].
Table 1. Research Questions.
RQ | Description
01 | What strategy is used to establish Human–Computer Interaction based on eye images?
02 | What computational technique is used for processing and classifying eye images (e.g., Computer Vision or Machine Learning)?
03 | What is the performance of the computational techniques explored (evaluated through accuracy, precision, sensitivity, specificity, or error)?
04 | What is the hardware support for image acquisition?
05 | What is the profile of the individuals who underwent the study's experimental tests (healthy controls, ALS, or other diseases)?
Table 2. Inclusion and Exclusion Criteria.
ID | Inclusion Criteria | Exclusion Criteria
01 | Articles published between 2010 and 18 November 2021. | Duplicate articles.
02 | Original and complete research articles published in journals or conferences. | Review articles.
03 | Articles in the areas of technology, engineering, or computer science. | Articles not related to eye-based communication strategies for Human–Computer Interaction using generic cameras.
Table 3. Quality Assessment.
QA | Description | Eliminator
01 | Is the research object of the study a Human–Computer Interaction approach based on eye images for people with ALS or Motor Neuron Disease? | Yes
02 | Does the study describe the approach to image processing? | No
03 | Does the study describe the algorithmic technique's performance (accuracy, precision, sensitivity, specificity, error)? | No
04 | Does the study describe the hardware used for image acquisition? | No
05 | Does the study perform experiments on control groups (healthy people), people with ALS, or other diseases? | No
Table 4. Summary of the main characteristics of the articles included in the systematic review.
Study | Year | Score | HCI | Hardware | Subjects (HC/ALS/OD) | Techniques (Keywords) | Acc | Recall | Precision | Error
Eom et al. [56] | 2019 | 0.8 | Eye-Gaze | Camera | 6/0/0 | Haar-like/binarization/grayscale/NN | A different approach
Zhang et al. [57] | 2017 | 0.8 | Eye-Gaze | iPhone and iPad | 12/0/0 | Fast face alignment/GD/TM | 86% | - | - | -
Aslam et al. [58] | 2019 | 0.7 | Eye-Gaze | Camera | 3/0/0 | Haar-like/CHT | 100% | - | - | -
Abe et al. [59] | 2011 | 0.7 | Eye-Gaze | Camera | 5/0/0 | Limbus Tracking Method | - | - | - | 0.56°/1.09°
Rahnama-ye-Moqaddam and Vahdat-Nejad [60] | 2015 | 0.6 | Eye-Gaze | Camera | 4/1/0 | Haar cascade/GVM/TM | - | - | - | 5.68%
Rozado et al. [61] | 2012 | 0.6 | Eye-Gaze | Camera with IR | 15/0/0 | ITU Gaze Tracker/E-HTM | 98% | - | - | -
Yildiz and Yorulmaz [62] | 2019 | 0.5 | Eye-Gaze | HMC | 1/0/0 | CHT/KNN | - | - | - | 0.98%
Nakazawa et al. [63] | 2018 | 0.5 | Eye-Gaze | HMC with IR | 5/0/0 | CHT | 93.32% | - | - | -
Rozado et al. [64] | 2012 | 0.5 | Eye-Gaze | HMC with IR | 20/0/0 | HTM/Needleman–Wunsch | 95% | - | - | -
Królak and Strumiłło [65] | 2012 | 0.8 | Eye-Blink | Camera | 37/0/12 | Viola–Jones/GD/TM | 95.17% | 96.91% | 98.13% | -
Singh and Singh [66] | 2019 | 0.7 | Eye-Blink | Camera with light source | 10/0/0 | Viola–Jones/PMA | 90% | - | - | -
Singh and Singh [67] | 2018 | 0.7 | Eye-Blink | Camera with light source | 10/0/0 | Viola–Jones/PMA | 91.2% | - | 94.11% | -
Missimer and Betke [68] | 2010 | 0.7 | Eye-Blink | Camera | 20/0/0 | TM/Optical flow algorithm | 96.6% | - | - | -
Rupanagudi et al. [69] | 2018 | 0.6 | Eye-Blink | Camera with IR | 50/0/0 | grayscale/SBT/2PVM | A different approach
Rakshita [70] | 2018 | 0.5 | Eye-Blink | Camera | 1/0/0 | grayscale/FLD/EAR | A different approach
Krapic et al. [71] | 2015 | 0.5 | Eye-Blink | Camera | 12/0/0 | eViacam software | A different approach
Park and Park [72] | 2016 | 0.8 | Eye-Tracking | Camera with IR | 4/0/0 | Pupil Center Corneal Reflection | 1–2° | - | - | -
Saleh and Tarek [73] | 2021 | 0.7 | Eye-Tracking | HMC with IR | 5/0/0 | grayscale/CHT/GD | A different approach
Atasoy et al. [74] | 2016 | 0.7 | Eye-Tracking | Camera | 30/0/0 | Viola–Jones/grayscale/CHT/GD | 90% | - | - | -
Aharonson et al. [75] | 2020 | 0.6 | Eye-Tracking | HMC | 4/0/0 | OpenCV/Polynomial/Projection | A different approach
Oyabu et al. [76] | 2012 | 0.6 | Eye-Tracking | Camera with IR | 5/0/0 | Binarization/CMUPL | A different approach
Kaushik et al. [77] | 2018 | 0.5 | Eye-Tracking | HMC with IR | 1/0/0 | EyeScan software | 95% | - | - | -
Kavale et al. [78] | 2018 | 0.5 | Eye-Tracking | Camera with IR | 1/0/0 | Binarization/GD | A different approach
Zhao et al. [79] | 2015 | 0.8 | Hybrid | Camera with IR | 7/0/0 | Binarization/GD | 92.69% | - | - | -
Xu and Lin [80] | 2017 | 0.7 | Hybrid | Camera with IR | 1/0/0 | FLD/GD | 100% | - | - | -
Abbreviations: HCI (Human–Computer Interaction), HC (healthy controls), ALS (Amyotrophic Lateral Sclerosis), OD (other diseases), Acc (accuracy), HMC (Head-Mounted Camera), IR (infrared), HTM (Hierarchical Temporal Memory), E-HTM (Extended HTM), NN (Neural Network), GVM (Gradient Vector Method), TM (Template Matching), GD (Geometrical Dependencies), CHT (Circular Hough Transform), KNN (K-Nearest Neighbor), PMA (Pixels’ Motion Analysis), SBT (Segmentation Based on Thresholding), 2PVM (2 Pixel Verification Methodology), FLD (Facial Landmark Detector), EAR (Eye Aspect Ratio), POLYNOMIAL (parametric interpolation-based algorithm), PROJECTION (model-based algorithm), CMUPL (Clustering Method of Unbroken Pixel Lines).
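As a worked example of one technique family in Table 4, the sketch below combines a Facial Landmark Detector (FLD) with the Eye Aspect Ratio (EAR), in the spirit of Rakshita [70] and the Dlib toolkit [82]. The landmark-model path and the 0.2 blink threshold are illustrative assumptions, not values reported in the study.

```python
import cv2
import dlib
from math import dist

detector = dlib.get_frontal_face_detector()
# Assumed path to the standard dlib 68-landmark model (distributed separately)
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
LEFT_EYE = range(42, 48)  # landmark indices 42-47 outline the left eye

def eye_aspect_ratio(p):
    # EAR = (|p2-p6| + |p3-p5|) / (2 |p1-p4|): roughly constant while the eye
    # is open and dropping sharply toward zero during a blink
    return (dist(p[1], p[5]) + dist(p[2], p[4])) / (2.0 * dist(p[0], p[3]))

cap = cv2.VideoCapture(0)  # generic webcam, as in most included studies
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        shape = predictor(gray, face)
        pts = [(shape.part(i).x, shape.part(i).y) for i in LEFT_EYE]
        if eye_aspect_ratio(pts) < 0.2:  # assumed blink threshold
            print("blink detected -> trigger selection event")
cap.release()
```

In a complete AAC system, the blink event would drive a scanning keyboard or another selection interface rather than a console print.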