Article

Comparing Several P300-Based Visuo-Auditory Brain-Computer Interfaces for a Completely Locked-in ALS Patient: A Longitudinal Case Study

by Rute Bettencourt 1, Miguel Castelo-Branco 2,3, Edna Gonçalves 4,5, Urbano J. Nunes 1,6,* and Gabriel Pires 1,2,7

1 Institute of Systems and Robotics, University of Coimbra, 3030-290 Coimbra, Portugal
2 Coimbra Institute for Biomedical Imaging and Translational Research, University of Coimbra, 3000-548 Coimbra, Portugal
3 Faculty of Medicine, University of Coimbra, 3000-370 Coimbra, Portugal
4 University Hospital Center of São João, 4200-319 Porto, Portugal
5 Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal
6 Department of Electrical and Computer Engineering, Faculty of Sciences and Technology, University of Coimbra, 3030-290 Coimbra, Portugal
7 Department of Engineering, School of Technology of Tomar, Polytechnic Institute of Tomar, 2300-313 Tomar, Portugal
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(8), 3464; https://doi.org/10.3390/app14083464
Submission received: 18 March 2024 / Revised: 12 April 2024 / Accepted: 15 April 2024 / Published: 19 April 2024
(This article belongs to the Special Issue Brain-Computer Interfaces: Novel Technologies and Applications)

Featured Application

A brain-computer interface for communication with patients in the completely locked-in state.

Abstract

In a completely locked-in state (CLIS), often resulting from traumatic brain injury or neurodegenerative diseases like amyotrophic lateral sclerosis (ALS), patients lose voluntary muscle control, including eye movement, making communication impossible. Brain-computer interfaces (BCIs) offer hope for restoring communication, but achieving reliable communication with these patients remains a challenge. This study details the design, testing, and comparison of nine visuo-auditory P300-based BCIs (combining different visual and auditory stimuli and different visual layouts) with a CLIS patient over ten months. The aim was to evaluate the impact of these stimuli on achieving effective communication. While some interfaces showed promising progress, achieving up to 90% online accuracy in one session, replicating this success in subsequent sessions proved challenging, with the average online accuracy across all sessions being 56.4 ± 15.2%. The intertrial variability in EEG signals and the low discrimination between target and non-target events were the main challenges. Moreover, the lack of communication with the patient made BCI design a challenging blind trial-and-error process. Despite the inconsistency of the results, it was possible to infer that the combination of visual and auditory stimuli had a positive impact, and that there was an improvement over time.

1. Introduction

Brain-computer interfaces (BCIs) interpret brain signals, allowing communication with computers by bypassing the usual neuromuscular pathways [1]. Brain signals can be obtained through various techniques, including electroencephalography (EEG), electrocorticography (ECoG), functional near-infrared spectroscopy (fNIRS), and functional magnetic resonance imaging (fMRI), with EEG being the most commonly used. When a patient loses the ability to move all voluntary muscles except the eyes, they are referred to as being in a locked-in state (LIS) [2,3]. This state is commonly associated with motor neuron degenerative diseases, such as ALS, as well as with stroke and traumatic brain injury. Several studies report that patients in LIS can use BCI and eye-tracking assistive technologies to communicate with people in their environment [4,5]. However, when voluntary eye movements fade and a patient enters a completely locked-in state (CLIS), they lose the ability to control eye-tracking devices, making BCIs the only available means of communication. Unfortunately, BCIs still remain largely ineffective for the CLIS population (see Table 1 for the main results in the current state of the art). Even binary YES/NO communication is very difficult to obtain accurately and systematically. CLIS patients cannot control eye and eyelid movements, leading to eye dryness and, without proper care, the potential development of corneal ulcers. Visual-only BCIs might therefore not provide a viable method of communication, although steady-state visual evoked potential (SSVEP) interfaces proved effective in [6]. In that study, the authors used SSVEPs evoked by LEDs and instructed patients to either attend to or ignore the presented stimuli, detecting user intention through the SSVEP power spectral density (PSD). For one of the patients, who transitioned from LIS to CLIS and lost eye movement control, the LED was positioned approximately 15° from the center of the optic axis to enable attention shifts without requiring the patient to redirect their gaze. Auditory BCIs usually achieve lower classification performance in healthy populations than visual BCIs but remain a possible avenue for patients in CLIS, as the auditory system is believed to remain intact until late stages of the disease. Pires et al. [7] used an auditory and a hybrid audiovisual oddball paradigm with spoken words to try to communicate with a CLIS patient, observing a better neurophysiological signal with the auditory interface than with the hybrid interface, although no effective communication was achieved. Guger et al. [8] used vibrotactile stimulation with two or three stimulators (one on each wrist, associated with yes and no answers, and a third on the shoulder acting as a distractor) in an oddball paradigm to provide a binary communication system. Han et al. [9] used left-hand motor imagery and a mental subtraction task to implement a binary classifier, achieving up to 87.5% online performance with a CLIS patient. Naito et al. [10] tested an fNIRS-based BCI with 17 CLIS patients performing mental tasks associated with yes/no answers. Ardali et al. [11] also used an fNIRS-based BCI to obtain yes/no answers from CLIS patients by associating semantic content with the questions. Chaudhary et al. [12] very recently tested, for the first time, an intracortical BCI with a CLIS patient. The patient was implanted with two 64-microelectrode arrays in the supplementary and primary motor cortex.
The BCI paradigm was based on an auditory neurofeedback strategy for communication. The brain signals allowed differentiation between low-pitched and high-pitched tones, with an accuracy above 80% on most days. This strategy was then employed in a speller interface that allowed the patient to communicate freely. The patient could correctly spell 131 characters per day, a promising result, although the approach has the disadvantage of being highly invasive. For a recent review of BCIs used with patients in LIS and CLIS, please refer to [13].
The state of the art presented above shows that clinical experiments with CLIS patients are still quite incipient, largely exploratory, and currently limited to yes/no detection. Further studies are needed, and these studies should be longitudinal in nature to gain a better understanding of patients’ brain states, including attention spans, fatigue, and variability [14], and to adjust interfaces to the specificities of each patient. The trial-and-error approach in the design process takes time, especially considering the possibility that patients may not fully comprehend the mental tasks proposed [15,16]. Achieving a long-term, at-home BCI system remains a distant goal that requires substantial progress.
In this study, our primary objective was to establish communication with a CLIS patient using a P300-based BCI. We chose a P300 oddball paradigm because this approach has been the most common and reliable for long-term BCI use with patients in late ALS stages, including LIS, and offers the benefit of potentially seamless use when patients transition from LIS to CLIS [17]. Over the course of ten months, we followed the patient while developing and testing various P300-BCI versions that we expected to address communication challenges and overcome the patient’s difficulties. The interfaces were therefore designed with a patient-centered approach, but the iterative design process turned into a blind trial-and-error endeavor due to the patient’s inability to communicate. We developed and tested nine interfaces encompassing different visual, auditory, and hybrid audiovisual stimuli. Some of these interfaces were also tested systematically by a control group of five participants for validation and comparison.

2. Methods

2.1. Participants

Each developed interface underwent pilot testing with healthy volunteers before being evaluated with the patient to ensure proper functionality of the interfaces and the intended effects. Systematic tests involving a control group and a subset of the developed BCIs were then performed for reference and to compare neuronal patterns and performance with those obtained with the patient.

2.1.1. CLIS Patient

The patient was a 54-year-old female with limb-onset ALS, diagnosed 46 months prior to the initial BCI session in February 2023, with a revised ALS functional rating scale (ALSFRS-R) score of 1. At the beginning of the experiments, she had no eyelid or vertical eye movements and could no longer use the eye-tracking device she had used previously. She retained slight horizontal eye movement, interpreted as a ‘Yes’ response, while the absence of movement indicated a ‘No’. Only the patient’s husband could interpret this slight eye movement.
Although she had lost control of the eye-tracking device, it continued to serve as a means for her family to discern whether her eyes were open and to detect attempts at eye movement, which were indicative of communication efforts. The patient entered CLIS some months later. Despite the loss of direct communication, the ERP waveform analysis suggested that the patient retained visual, hearing, and cognitive functions, which motivated us to continue the experiments. Table 2 summarizes the main clinical characteristics of the patient 8 months after the initial BCI session.
This study was approved by the Ethical Committee for Health of the University Hospital Center of S. João, Porto (CHUSJ), complying with the code of Ethics of the Declaration of Helsinki. Informed consent was obtained from the patient’s husband. The decision to participate in this study was made by the patient’s family after meetings with the research team and the medical staff of the CHUSJ palliative care service where the patient was being followed.

2.1.2. Control Group

The control group consisted of five healthy female volunteers (22.0 ± 2.0 years) with normal or corrected vision and hearing. Each participant gave informed consent.

2.2. Data Acquisition

A 16-channel g.USBamp acquisition device (g.tec medical engineering GmbH, Schiedlberg, Austria) was used to acquire EEG signals at a sampling rate of 256 Hz. EEG was acquired with 16 g.Ladybird electrodes (Fz, Cz, C3, C4, CPz, Pz, P3, P4, PO7, PO8, POz, Oz, FPz, FCz, FC1, and FC2) placed in a g.GAMMAcap2 (g.tec medical engineering GmbH, Austria) according to the extended international 10–20 system. The right earlobe was used as reference and AFz as ground. All electrodes were connected to a g.GAMMAbox (g.tec medical engineering GmbH, Austria). Signals were filtered with a band-pass filter between 0.1 and 30 Hz and a notch filter at 50 Hz to eliminate powerline interference. Data were acquired, processed, and classified in real time in the g.tec Highspeed online processing Simulink framework.
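As an illustration, the two filtering stages described above can be reproduced offline with standard filter-design routines. The Python sketch below (using SciPy and synthetic placeholder data, not the real-time Simulink framework used in the study) applies a 0.1–30 Hz Butterworth band-pass and a 50 Hz notch; note that offline zero-phase filtering differs from the causal filtering applied during real-time acquisition.

```python
# Hedged offline sketch of the filtering stage; the study used g.tec's
# real-time Simulink framework, not this code. Data below are synthetic.
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

FS = 256.0                            # sampling rate (Hz)
eeg = np.random.randn(16, 2560)       # 16 channels x 10 s of placeholder EEG

b_bp, a_bp = butter(4, [0.1, 30.0], btype="bandpass", fs=FS)  # 0.1-30 Hz
b_n, a_n = iirnotch(50.0, Q=30.0, fs=FS)                      # 50 Hz notch

eeg_filt = filtfilt(b_bp, a_bp, eeg, axis=1)      # zero-phase band-pass
eeg_filt = filtfilt(b_n, a_n, eeg_filt, axis=1)   # powerline suppression
```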

2.3. Experimental Protocol

The experiments with the patient were conducted at her home, with the patient in a wheelchair in a reclined position. Visits took place in the late morning, when, according to her husband, the patient showed longer attention spans. Each visit lasted no longer than two hours. We made 16 visits to the patient for data acquisition and testing, performing one or more sessions per visit, in which we tested the same or different interfaces. In total, 31 sessions were conducted over the 16 visits.
The developed interfaces comprised visual (text, symbols, and images), auditory (spoken words and white noise), or hybrid (combined visual and auditory) stimuli. The visual stimuli were presented on a screen located approximately 1 m from the patient, in her line of sight, and the auditory stimuli were delivered through earphones. The stimuli differed between interfaces, as explained in Section 2.4.1, but the number of symbols remained the same throughout our experiments. The BCIs evaluated in this study followed a P300 oddball paradigm (1 target and 6 standards (non-targets)) with seven stimuli forming a small Portuguese lexicon: ‘SIM’ (Yes), ‘NÃO’ (No), ‘TOSSE’ (Cough), ‘TV’, ‘AJUDA’ (Help), ‘SONO’ (Sleep), and ‘STOP’, which, according to the patient’s family members, covered the patient’s main needs. The stimuli had a duration of 550 ms and an interstimulus interval (ISI) of 100 ms.
Each online session was preceded by a calibration to obtain training data for fitting the classification model. The calibration process (Figure 1a) consisted of the patient focusing on one word at a time and mentally counting the number of times each word appeared on the screen and/or was spoken. Each target word was repeated 9 times interleaved with the non-target words, according to the oddball paradigm, to induce the target-related potentials. In total, there were 12 words in the training sequence, corresponding to 108 (12 × 9) target trials and 648 (12 × 9 × 6) non-target trials.
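The trial counts above follow directly from the oddball structure: each repetition flashes all seven stimuli once, so one target word with nine repetitions yields 9 target and 54 non-target events. A minimal Python sketch of this event sequence follows (word list taken from the text; the ordering, seed, and function name are illustrative, not the actual Psychtoolbox/Simulink implementation):

```python
import random

WORDS = ["SIM", "NÃO", "TOSSE", "TV", "AJUDA", "SONO", "STOP"]
STIM_MS, ISI_MS = 550, 100           # stimulus duration and ISI from the text

def oddball_sequence(target, n_rep=9, seed=0):
    """One calibration word: n_rep repetitions, each flashing all 7 stimuli."""
    rng = random.Random(seed)
    events = []
    for _ in range(n_rep):
        block = WORDS[:]
        rng.shuffle(block)           # randomized order within each repetition
        events += [(w, w == target) for w in block]
    return events

seq = oddball_sequence("SIM")
n_target = sum(is_t for _, is_t in seq)     # 9 target events
n_nontarget = len(seq) - n_target           # 54 non-target events
# Twelve words in the training sequence give 12*9 = 108 targets and
# 12*54 = 648 non-targets, matching the counts reported above.
```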
During the online session (Figure 1b), the patient was asked to perform a copy-paste task, wherein she was instructed with one word and required to mentally count the number of times the specified word occurred. The number of repetitions for each selection was adjusted to 9, although different numbers of repetitions were tested in some sessions for comparison. Each copy task consisted of 10 selections.
After the fourth visit to the patient, due to the very low BCI performance achieved, two modifications were introduced to the experimental procedure: (1) only the words ‘SIM’ and ‘NÃO’ were used in the training sequence, while the P300 oddball paradigm with the seven symbols remained the same; and (2) in the copy task, we started applying a filter in the classification algorithm, limiting the selections to ‘SIM’ and ‘NÃO’ choices, i.e., transforming the 7-class problem into a binary classification. The oddball paradigm with the seven symbols remained the same as in calibration.
The control group performed the experiments in a laboratory environment, always using the 7-class BCI.

2.4. BCI Paradigm Design: Visual and Auditory Stimuli

The visual and auditory stimuli were implemented using the Psychophysics Toolbox [18,19] in Matlab R2021b, running on a laptop with the Windows 11 operating system. Visual stimulation was displayed on the laptop’s built-in 15.6″ screen to avoid multi-screen desynchronization of the stimuli, and auditory stimulation was delivered through Mi In-Ear Headphones Basic (Xiaomi Communications Co., Ltd., Beijing, China).

2.4.1. Stimuli

The BCI was implemented using g.tec’s Highspeed Simulink framework. The g.USBamp driver set the sampling rate for EEG acquisition and triggered both visual and auditory stimuli. The Psychophysics Toolbox routines for stimulus presentation were embedded in a Level-2 Simulink S-function. The stimuli consisted of text words, spoken words, white-noise audio, standard face images, and family face images.
The audio files were recorded on a Huawei P20 Lite phone with Lexis Audio Editor (version 1.2.158), converted from mono to stereo, and saved in WAV format. The files were then edited in Audacity (Audacity Team, version 3.3.2) to increase the volume and filter out background noise. The white-noise audio file was generated in Matlab R2021b.
The standard face images were obtained from the Radboud Faces Database [20]. Photos of the patient’s family members were provided by her caregiver. The background of these images was removed and replaced with black, leaving only the family members’ faces visible.

2.4.2. BCI Versions and Iterations

The different interfaces were developed according to the patient’s needs for basic communication, with the goal of providing a small lexicon. They were designed iteratively, aiming to overcome the challenges posed by the lack of eye movement, to simplify the interface, and to increase the patient’s attention and engagement. Table 3 lists the iterations of the developed BCIs. In the following section, we describe the iterations and their rationale (videos of each interface are available at https://home.isr.uc.pt/~gpires/videos/BCI4ALL/videos.html (accessed on 11 March 2024), except for the familiar-faces interface, for privacy reasons).
As a starting point, we used the visual interface tested in [7], adjusted to seven Portuguese text words (‘Sim’, ‘Não’, ‘Tosse’, ‘Ajuda’, ‘TV’, ‘Sono’, and ‘Stop’) associated with arrow symbols. The stimuli were highlighted by switching color from gray to green. The symbols were positioned close together to fall within the patient’s foveal region. Given the poor ERP results obtained with this interface in the first sessions, we replaced the text/arrow symbols with larger images covering the full extent of the screen. At flash time, the text word was replaced by an image of an emotionally neutral person. The rationale for using faces as stimuli was to trigger ERPs related to face detection and processing and to increase task engagement [21,22,23]. Given the improvements in ERPs and performance, we then replaced the standard face with 7 images of family members, expecting to further improve task engagement and enhance ERPs.
Given the vision impairments in CLIS patients related to the absence of gaze movements and eye dryness, auditory stimuli were added to the previous interfaces. The auditory stimuli were the 7 spoken words. Purely auditory interfaces and hybrid visuo-auditory interfaces were created.
Expecting to increase selective attention to the ‘SIM’ and ‘NÃO’ options, the previous versions were modified by replacing the spoken words ‘Sono’, ‘Ajuda’, and ‘Stop’ with white-noise audio.
To further improve the visual component of the ERP, a new grid layout was developed, consisting of two rows and four columns with both the words and the familiar and standard faces displayed in a larger size. The distance between the patient and the screen was adjusted to ensure full view of the grid layout in the foveal region.
Given the significant gaze restriction, the last modification to the visual component involved relocating the ‘SIM’ and ‘NÃO’ words from opposite columns to the same column of the layout. This adjustment ensured that the target stimuli fell within a smaller local visual field.
All visual interfaces mentioned above had an audiovisual version with all spoken words and an audiovisual version with three words as white noise and the remaining as spoken words. Figure 2 shows the final visual layout with the ‘AJUDA’ word in the ON state.

2.5. Classification Pipeline

The classification pipeline is similar to that used in [7,24]. A 1 s EEG segment (epoch) is extracted for each event in the oddball paradigm. The segments are averaged according to the number of repetitions used and then normalized to zero mean and unit standard deviation. The 16-channel EEG is then projected with a statistical spatial filter, the Fisher Criterion Beamformer (FCB) [25], onto two projections, aiming to simultaneously increase the discrimination between target and non-target epochs and reduce the spatial dimensionality. The best features of the projections are then selected using the r-squared method and classified with a Naïve Bayes classifier. The BCIs tested with the patient include a postprocessing filter that restricts the selection to the ‘YES’ or ‘NO’ target options. The BCI pipeline from stimulation to classification is depicted in Figure 3. A demonstration of this classification pipeline is available in Matlab and Python at https://github.com/gpiresML/FCB-spatial-filter (accessed on 11 March 2024).
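For readers who prefer a concrete outline, the Python sketch below mirrors the stages just described (epoch normalization, spatial projection, feature selection, Naïve Bayes) on placeholder data. It is a structural illustration only: the flattening step merely stands in for the FCB spatial filter, whose actual implementation is available at the GitHub link above, and f_classif is used as a close analogue of the r-squared ranking.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.naive_bayes import GaussianNB

FS = 256
X = np.random.randn(756, 16, FS)      # epochs x channels x samples (synthetic)
y = np.random.randint(0, 2, 756)      # 1 = target, 0 = non-target (synthetic)

# Normalize each (already repetition-averaged) epoch: zero mean, unit std
X = np.stack([(e - e.mean()) / e.std() for e in X])

# Stand-in for the FCB projection of 16 channels onto 2 discriminative
# directions; here we simply flatten channels x time for brevity.
feats = X.reshape(len(X), -1)

selector = SelectKBest(f_classif, k=200).fit(feats, y)   # r²-style ranking
clf = GaussianNB().fit(selector.transform(feats), y)

# Online, each of the 7 candidate words is scored from its averaged epochs;
# the Yes/No postprocessing filter then keeps only the 'SIM'/'NÃO' scores.
```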

3. Results

3.1. Neurophysiological Analysis

The datasets obtained from each session were pre-processed and analyzed using Matlab R2021b and EEGLAB v2021.1 [26].

3.1.1. CLIS Patient

Figure 4a shows the ERPs for target and non-target events at channel PO8, obtained during the calibration of the session that provided the best online BCI performance (AVNsf, which achieved 90% accuracy for the two-class BCI). The P300 component can be seen in this experiment with an amplitude of −3.06 μV, as well as an N400 ERP with an amplitude of −8.15 μV, which can be associated with semantic and face processing [27]. These two components are also statistically different between target and non-target stimuli, which shows that the patient was perceptually and cognitively aware and performing the task correctly.
Unfortunately, this well-defined and discriminative pattern was not consistent across sessions and interfaces. Figure 5 displays the average ERPs for target and non-target events for all V, A, AV, and AVN conditions at channels Fz and POz, along with the time windows where the target and non-target events are statistically different (green bars, paired t-test, p < 0.05). In contrast to the previous case (involving only one session), the plots in Figure 5 do not exhibit the expected target ERPs. Additionally, we observe that target and non-target ERPs are very similar. Although there are time segments showing statistical significance, they are widely spread and not associated with well-defined peaks. The absence of well-defined patterns in the waveform makes it harder to narrow the analysis down to relevant time windows. The audio component shows an increase in amplitude in the frontal region, which is not observed in the occipitoparietal region. For both the A and V conditions, the time windows where target and non-target events are statistically different are few and small (around 450 ms and 650 ms at POz for the V condition, and around 300 ms at Fz and 450 ms at POz for the A condition). However, when visual and audio stimulation are combined, in both the AV and AVN conditions, the number and width of statistically different time windows increase (a large discriminative window between 500 and 700 ms at POz for the AV condition, and several discriminative windows between 450 and 650 ms for the AVN condition at the same electrode). This indicates that, for this patient, the hybrid interfaces have greater potential for online classification than V-only or A-only stimulation.
To illustrate the intertrial variability of the signal, a representative example, collected from one session with the AVNsf condition, is shown in Figure 6a. The color map displays inconsistent amplitudes across trials, indicating a lack of coherence in the perceptual and cognitive processing of the oddball paradigm. In contrast, in Figure 6b, the color map from one of the control group participants for the same condition shows consistent intertrial responses to the stimuli. The variability was also quantified through the signal-to-noise ratio (SNR). For example, data collected using the AVNsf interface yield an SNR of −16.50 dB for the patient, whereas for the control group the SNR ranges from −13.73 to −3.26 dB with the same interface. The r-squared metric used for feature selection ranged from 0.046 to 0.115 for the patient, compared with 0.193 to 0.491 in the control group.
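To make these metrics concrete, the sketch below computes, on synthetic single-electrode epochs, per-sample significance between target and non-target responses, an ERP signal-to-noise estimate in dB, and the point-biserial r² used for feature ranking. The exact definitions in the study's pipeline may differ; this follows common BCI practice, and an unpaired per-sample t-test is used here for simplicity.

```python
import numpy as np
from scipy.stats import ttest_ind, pointbiserialr

tgt = np.random.randn(108, 256)    # target epochs, one electrode (synthetic)
ntg = np.random.randn(648, 256)    # non-target epochs (synthetic)

# Per-sample tests flag candidate discriminative time windows
t, p = ttest_ind(tgt, ntg, axis=0)
sig_windows = p < 0.05

# SNR in dB: average ERP power vs residual single-trial (noise) power
erp = tgt.mean(axis=0)
snr_db = 10 * np.log10(np.mean(erp**2) / np.mean((tgt - erp)**2))

# Point-biserial r² between class label and amplitude at one time sample
labels = np.r_[np.ones(len(tgt)), np.zeros(len(ntg))]
amps = np.concatenate([tgt[:, 100], ntg[:, 100]])
r, _ = pointbiserialr(labels, amps)
r_squared = r**2
```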

3.1.2. Control Group

The control group ERPs exhibit the typical ERP response for oddball paradigms, with an N200 peak of −4.65 μV and a P300 peak of 4.48 μV at electrode PO8 for the target stimuli in the AVNsf condition, as depicted in Figure 4b. Furthermore, there are statistical differences between the target and non-target stimuli at 70–150 ms, 190–500 ms, and 600–700 ms (paired t-test, p < 0.05). There is no noticeable N400 component, although there is a negative peak around 650 ms.

3.2. Online and Offline BCI Performance

Figure 7 shows the online classification accuracies across sessions for the BCIs tested with the patient. A linear regression was fitted for each tested BCI to infer whether there was any performance tendency (only for interfaces tested in at least three sessions, and discarding the first four sessions, where the seven-class BCIs were tested). Additionally, a regression considering all interfaces together was also obtained. Accuracies of 80% and 90% were obtained in sessions 8, 12, and 9: the first two with the AVNff interface and the last with the AVNsf interface. Overall, 59.1% of all sessions were above chance level once the two-class BCI was employed. The average online classification accuracy for the two-class BCI was 56.4 ± 15.2%.
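As a worked example of what "above chance" can mean for a two-class BCI with ten selections per session (the paper's exact criterion is not detailed here, so this is one common choice, a one-sided binomial test at p < 0.05): 8/10 correct gives p ≈ 0.055, while 9/10 gives p ≈ 0.011.

```python
from scipy.stats import binomtest

def above_chance(n_correct, n_trials=10, p_chance=0.5, alpha=0.05):
    """One-sided binomial test against chance-level responding."""
    res = binomtest(n_correct, n_trials, p_chance, alternative="greater")
    return res.pvalue < alpha

print(above_chance(8))   # False: p = 56/1024 ≈ 0.055
print(above_chance(9))   # True:  p = 11/1024 ≈ 0.011
```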
The classification results vary from session to session within the same interface. For instance, in one session, the highest accuracy was 90% with AVNsf, whereas in the subsequent session, it dropped to 30%. These inconsistencies in results indicate that different factors influence the user’s performance. Additionally, they highlight the difficulty of drawing conclusive comparisons between interfaces based solely on their performance.
Looking at the overall tendency, we observe an increase in classification accuracy over time. Considering the interfaces individually, the use of familiar faces led to an improvement in classification performance with both hybrid AV interfaces. Conversely, the visual-only interfaces showed a decline in classification across sessions.
Figure 8 shows boxplots of the distribution of the classification performance obtained during both the calibration and online operation phases for the two-class BCIs. The results for calibration data were obtained using five-fold cross-validation. The highest median value in online classification was observed with the AVff condition. The highest online accuracy was achieved with the AVNsf interface, reaching 90%, while the same interface also yielded the lowest result of 30%. The classification accuracy obtained from calibration data was significantly higher than that obtained online. This discrepancy highlights a lack of generalization in the classification models, likely due to the very high EEG variability. Therefore, the high classification results observed in offline calibration did not translate into similarly high results in the online sessions. All results were obtained with nine repetitions.
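As a pointer to how such calibration estimates are typically produced, a minimal cross-validation sketch follows (synthetic features standing in for the FCB outputs; this is not the authors' code):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

feats = np.random.randn(120, 20)       # placeholder calibration features
labels = np.random.randint(0, 2, 120)  # placeholder target/non-target labels

scores = cross_val_score(GaussianNB(), feats, labels, cv=5, scoring="accuracy")
print(f"{scores.mean():.2f} ± {scores.std():.2f}")
```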
For the control group, the classification results for the online sessions are presented in Table 4. The AVNsf condition yielded the highest average online classification of 98.0 ± 4.5% with Nrep averaging 2.6 repetitions, while the worst-performing condition was Vsf with 88.0 ± 16.4%. Therefore, these results confirm the expected impact of the different stimulation features.
As noted above, there was a notable disparity in classification performance between calibration and online operation across most of the patient’s sessions. For the control group, however, online classification performance aligned closely with calibration, suggesting that the proposed classification approach effectively handles the relatively stable data of the able-bodied group but struggles to generalize to the patient’s highly variable data. Further research into more effective signal processing and classification approaches that identify feature-invariant spaces is required. Some progress has been made in this domain by exploring the use of Riemannian geometry [28], although additional efforts are needed to overcome this challenge.

4. Discussion

The main goal of this study was to establish an effective communication channel with a CLIS patient. To this end, we tested multiple stimulation modalities with various features and combined different layouts and stimulation strategies. Our aim was to identify an effective combination that allows the exploration of perceptual and cognitive functions believed to be preserved in individuals in CLIS. The rationale behind the modifications implemented in the nine BCI versions was two-fold. Firstly, we aimed to adjust the interfaces to accommodate the perceptual limitations of the patient, which involved testing different visual layouts and stimulus sizes. Secondly, we sought to enhance selective attention and task engagement by introducing stimuli based on images with emotional content and by using auditory white-noise stimuli associated with non-target events to draw attention to target events. The study shows that the different types of BCI stimulation elicited different responses, but also that the responses varied greatly within the same type of stimulation. Both the AV and AVN conditions showed larger discrimination windows than the visual-only and auditory-only conditions. This correlates positively with the online performance, where these conditions also attained better results more often. The visual-only conditions showed lower results, likely influenced by possible deterioration of vision and the patient’s tendency for her eyes to remain closed, impacting BCI performance. Nevertheless, the visual component remained important, especially when combined with auditory stimuli, as the combined performance surpassed that of the visual-only and auditory-only conditions.
The control group achieved very high classification results for all interfaces, requiring a low number of repetitions to control the BCI system. This outcome was anticipated given the simplicity of the BCIs, and it affirms their functionality and the effectiveness of the classification pipeline. The best-performing interface for this group was the AVNsf condition, with 98.0 ± 4.5% accuracy for 2.6 ± 1.5 repetitions in the seven-class BCI, which combined audiovisual stimulation with the white-noise audio component. However, no conclusions about the influence of the white-noise component can be drawn for the CLIS patient, since the corresponding interface yielded both the highest (90%) and lowest (30%) accuracies for the two-class BCI employed. This leads us to conclude that performance heavily depends on the mental state of the participant during the sessions. The performance fluctuation across sessions may be related to variations in attention and vigilance, as well as in motivation and mood, possibly influenced by the antidepressant medication. The significant gap between calibration and online results obtained with the patient underscores the necessity of exploring alternative classification approaches that are more adaptable in contexts of high EEG variability. While the processing methods applied here are well established and have shown effectiveness in healthy populations and in individuals in advanced stages of ALS, who exhibit relatively stable data, they proved ineffective when applied to the highly variable data of the CLIS patient.
The neurophysiological results obtained with the control group clearly show the expected ERPs in oddball paradigms, namely the N200 and P300 components. However, the neural responses observed in the CLIS patient are inconsistent and do not exhibit evident N200 and P300 ERPs. Moreover, there is no clear waveform discrimination between target and non-target responses, as shown in Figure 5. A notable exception comes from the data collected during one session under AVNsf condition, as depicted in Figure 4, where the P300 is evident and indicates discriminative power. The N200 is also present but appears weak and lacks discriminative power. Figure 5 highlights the low discrimination between target and non-target stimuli, with much smaller time windows observed for the CLIS patient compared to the control group.
There are several possible causes contributing to the ERP variability, resulting in atypical average signals and the consequent low SNR and reduced discriminative power. These factors may include fluctuations in attention and vigilance levels, the cognitive workload demanded by the task, and the patient’s limited working memory for sustaining attention on target symbols. These processes, which can be related to reduced executive functions, can lead to high variability in ERP latency and amplitude. Additionally, external factors such as repetitive stimulation, inducing SSVEPs, and distractors from adjacent stimuli may have also contributed to the variability and decrease in target discrimination.
Nevertheless, the ERPs were more consistent than those obtained in our exploratory study [7] involving a CLIS patient. Previous research has shown that, while a unique common pattern in the EEG signal of CLIS patients could not be found, it was common for the EEG signal to show slowing and attenuation of alpha activity, along with significantly altered auditory-evoked potential responses compared to those of healthy individuals [29]. Additionally, an ECoG study involving a late-stage ALS patient [30] revealed that the P300 component of the signal, detectable when the patient was still able to communicate through voluntary muscle control, became undetectable three months after the last communication was made.
Given the highly atypical nature of the patient’s observed ERP waveforms, further methods and studies are required to understand which potential (confounding) factors affect target ERPs and to identify other neural correlates that can help predict BCI performance. To further elucidate these effects, future studies should incorporate BCI paradigms that systematically assess neural responses at different levels, including reflexive, perceptual, and cognitive processes, isolating the effects of different stimulus characteristics. This should occur early on, at the LIS stage, when patients can provide more reliable feedback.
Our study provides valuable insights into potential strategies to enhance the feasibility and efficacy of BCIs for CLIS patients. However, neither the neurophysiological nor the classification results can be generalized to other CLIS patients, given the significant variability in individual characteristics and conditions within this population.
EEG-based BCIs are still not a reliable communication system for at-home, day-to-day use by CLIS patients [17]. Moreover, there are only a handful of successful studies utilizing a functional BCI-based system with CLIS patients. Although we achieved three sessions with an online classification at or above 80% (surpassing the 56.9% achieved in [11] and comparable to the accuracies in [6,8], all achieved with non-invasive techniques), our results were not consistent and cannot be considered successful, as those results were not replicated. Nevertheless, they underscore the potential of the proposed approaches for these patients.
The use of BCIs for communication in patients in CLIS would improve the quality of life of these patients by allowing them to regain a voice in the decision-making process of their caretaking needs, as well as allowing a level of social interaction with their surroundings, family members, and caretakers. In other words, this would represent a qualitative improvement within the goals of personalized medicine.
Longitudinal studies are essential for achieving more effective BCIs and could provide additional insights into the neuronal changes during the transition from LIS to CLIS [31,32]. For this to happen, BCI studies should start once a patient enters LIS, a state in which patients can still communicate with eye blinks, eye movements, or alternative communication tools, such as eye-tracking spellers. Introducing BCI technology while patients are in LIS allows experimenters to directly question a patient’s comprehension of how to use the system, and may help patients become proficient users before entering CLIS. BCI studies starting when the patient enters LIS would also allow a longitudinal assessment of the hypothesis of the extinction of goal-directed thinking [15,31]. This hypothesis states that there is a reduction in arousal, vigilance, and working memory in patients in CLIS due to the lack of social-cognitive interaction. The continuous use of BCIs would allow uninterrupted communication with family members and caretakers, even after losing eye movements, preventing a decline in executive function. In [6], a patient who progressed from LIS to CLIS retained the ability to use the BCI acquired in LIS. Additionally, introducing the interface before CLIS onset would enable patients to offer valuable insights, provide feedback on interface design, and express any concerns or doubts about how the system operates, none of which could be ascertained in our study. This would enable improved personalization. The introduction of BCIs in assistive contexts before an ALS patient even reaches LIS has been explored in [33,34], showing that mental strategies learned early can transfer as the patient reaches more advanced stages. Thus, early BCI introduction in earlier stages of ALS, even in LIS, could help prolong patients’ communication capabilities.
Future work is expected to include larger-scale longitudinal studies, including both LIS and CLIS patients, to address the aforementioned aspects and achieve a more optimized combination of stimulation strategies and classification algorithms tailored to patients’ specific needs. Furthermore, BCI sessions with patients should occur in periods when the patients are known to be awake and vigilant. It is essential to find biomarkers of patients’ vigilance levels as well as performance predictors. With these goals, an automatic vigilance detector is currently under development. Additionally, shorter calibration sessions are required to reduce patient fatigue. Within our group, efforts have been made to reduce calibration times, although so far these have only been tested with healthy participants [28].

Author Contributions

Conceptualization, G.P. and R.B.; methodology, G.P. and R.B.; software, G.P. and R.B.; validation, G.P. and R.B.; formal analysis, G.P., R.B., M.C.-B., E.G. and U.J.N.; writing—original draft preparation, R.B. and G.P.; writing—review and editing, R.B., M.C.-B., E.G., U.J.N. and G.P.; funding acquisition, G.P. and U.J.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Portuguese Foundation for Science and Technology (FCT) under Grants ISR-UC UIDB/00048/2020 (doi: 10.54499/UIDB/00048/2020), B-RELIABLE: PTDC/EEI-AUT/30935/2017 and BCI4ALL2024. Rute Bettencourt was supported by the FCT Ph.D. scholarship 2023.03995.BD (doi: 10.54499/2023.03995.BD).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee for Health of the University Hospital Center of S. João, Porto (CHUSJ) (protocol nr. 162/2023).

Informed Consent Statement

Informed consent was obtained from all subjects or from a legal representative of the subjects involved in the study.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

The authors would like to thank the patient’s family for their availability and support during the experiments.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Birbaumer, N. Breaking the Silence: Brain–Computer Interfaces (BCI) for Communication and Motor Control. Psychophysiology 2006, 43, 517–532.
  2. Bauer, G.; Gerstenbrand, F.; Rumpl, E. Varieties of the Locked-in Syndrome. J. Neurol. 1979, 221, 77–91.
  3. Schnetzer, L.; McCoy, M.; Bergmann, J.; Kunz, A.; Leis, S.; Trinka, E. Locked-in Syndrome Revisited. Ther. Adv. Neurol. Disord. 2023, 16, 17562864231160872.
  4. Kuzma-Kozakiewicz, M.; Andersen, P.M.; Ciecwierska, K.; Vázquez, C.; Helczyk, O.; Loose, M.; Uttner, I.; Ludolph, A.C.; Lulé, D. An Observational Study on Quality of Life and Preferences to Sustain Life in Locked-in State. Neurology 2019, 93, E938–E945.
  5. Linse, K.; Rüger, W.; Joos, M.; Schmitz-Peiffer, H.; Storch, A.; Hermann, A. Usability of Eyetracking Computer Systems and Impact on Psychological Wellbeing in Patients with Advanced Amyotrophic Lateral Sclerosis. Amyotroph. Lateral Scler. Front. Degener. 2018, 19, 212–219.
  6. Okahara, Y.; Takano, K.; Nagao, M.; Kondo, K.; Iwadate, Y.; Birbaumer, N.; Kansaku, K. Long-Term Use of a Neural Prosthesis in Progressive Paralysis. Sci. Rep. 2018, 8, 16787.
  7. Pires, G.; Barbosa, S.; Nunes, U.J.; Gonçalves, E. Visuo-Auditory Stimuli with Semantic, Temporal and Spatial Congruence for a P300-Based BCI: An Exploratory Test with an ALS Patient in a Completely Locked-in State. J. Neurosci. Methods 2022, 379, 109661.
  8. Guger, C.; Spataro, R.; Allison, B.Z.; Heilinger, A.; Ortner, R.; Cho, W.; La Bella, V. Complete Locked-in and Locked-in Patients: Command Following Assessment and Communication with Vibro-Tactile P300 and Motor Imagery Brain-Computer Interface Tools. Front. Neurosci. 2017, 11, 251.
  9. Han, C.H.; Kim, Y.W.; Kim, D.Y.; Kim, S.H.; Nenadic, Z.; Im, C.H. Electroencephalography-Based Endogenous Brain-Computer Interface for Online Communication with a Completely Locked-in Patient. J. Neuroeng. Rehabil. 2019, 16, 18.
  10. Naito, M.; Michioka, Y.; Ozawa, K.; Ito, Y.; Kiguchi, M.; Kanazawa, T. A Communication Means for Totally Locked-in ALS Patients Based on Changes in Cerebral Blood Volume Measured with Near-Infrared Light. IEICE Trans. Inf. Syst. 2007, E90-D, 1028–1037.
  11. Khalili Ardali, M.; Rana, A.; Purmohammad, M.; Birbaumer, N.; Chaudhary, U. Semantic and BCI-Performance in Completely Paralyzed Patients: Possibility of Language Attrition in Completely Locked in Syndrome. Brain Lang. 2019, 194, 93–97.
  12. Chaudhary, U.; Vlachos, I.; Zimmermann, J.B.; Espinosa, A.; Tonin, A.; Jaramillo-Gonzalez, A.; Khalili-Ardali, M.; Topka, H.; Lehmberg, J.; Friehs, G.M.; et al. Spelling Interface Using Intracortical Signals in a Completely Locked-in Patient Enabled via Auditory Neurofeedback Training. Nat. Commun. 2022, 13, 1236.
  13. Rezvani, S.; Hosseini-Zahraei, S.H.; Tootchi, A.; Guger, C.; Chaibakhsh, Y.; Saberi, A.; Chaibakhsh, A. A Review on the Performance of Brain-Computer Interface Systems Used for Patients with Locked-in and Completely Locked-in Syndrome. Cogn. Neurodyn. 2023, 1–25.
  14. Kübler, A.; Furdea, A.; Halder, S.; Hammer, E.M.; Nijboer, F.; Kotchoubey, B. A Brain–Computer Interface Controlled Auditory Event-Related Potential (P300) Spelling System for Locked-In Patients. Ann. N. Y. Acad. Sci. 2009, 1157, 90–100.
  15. Kübler, A.; Birbaumer, N. Brain–Computer Interfaces and Communication in Paralysis: Extinction of Goal Directed Thinking in Completely Paralysed Patients? Clin. Neurophysiol. 2008, 119, 2658–2666.
  16. Murguialday, A.R.; Hill, J.; Bensch, M.; Martens, S.; Halder, S.; Nijboer, F.; Schoelkopf, B.; Birbaumer, N.; Gharabaghi, A. Transition from the Locked in to the Completely Locked-in State: A Physiological Analysis. Clin. Neurophysiol. 2011, 122, 925–933.
  17. Wolpaw, J.R.; Bedlack, R.S.; Reda, D.J.; Ringer, R.J.; Banks, P.G.; Vaughan, T.M.; Heckman, S.M.; McCane, L.M.; Carmack, C.S.; Winden, S.; et al. Independent Home Use of a Brain-Computer Interface by People with Amyotrophic Lateral Sclerosis. Neurology 2018, 91, e258–e267.
  18. Brainard, D.H. The Psychophysics Toolbox. Spat. Vis. 1997, 10, 433–436.
  19. Pelli, D.G. The VideoToolbox Software for Visual Psychophysics: Transforming Numbers into Movies. Spat. Vis. 1997, 10, 437–442.
  20. Langner, O.; Dotsch, R.; Bijlstra, G.; Wigboldus, D.H.; Hawk, S.T.; Van Knippenberg, A.D. Presentation and Validation of the Radboud Faces Database. Cogn. Emotion 2010, 24, 1377–1388.
  21. Kaufmann, T.; Schulz, S.M.; Grünzinger, C.; Kübler, A. Flashing Characters with Famous Faces Improves ERP-Based Brain-Computer Interface Performance. J. Neural Eng. 2011, 8, 56016–56026.
  22. Kaufmann, T.; Schulz, S.M.; Köblitz, A.; Renner, G.; Wessig, C.; Kübler, A. Face Stimuli Effectively Prevent Brain–Computer Interface Inefficiency in Patients with Neurodegenerative Disease. Clin. Neurophysiol. 2013, 124, 893–900.
  23. Zhang, Y.; Zhao, Q.; Jin, J.; Wang, X.; Cichocki, A. A Novel BCI Based on ERP Components Sensitive to Configural Processing of Human Faces. J. Neural Eng. 2012, 9, 026018.
  24. Barbosa, S.; Pires, G.; Nunes, U. Toward a Reliable Gaze-Independent Hybrid BCI Combining Visual and Natural Auditory Stimuli. J. Neurosci. Methods 2016, 261, 47–61.
  25. Pires, G.; Nunes, U.; Castelo-Branco, M. Statistical Spatial Filtering for a P300-Based BCI: Tests in Able-Bodied, and Patients with Cerebral Palsy and Amyotrophic Lateral Sclerosis. J. Neurosci. Methods 2011, 195, 270–281.
  26. Delorme, A.; Makeig, S. EEGLAB: An Open Source Toolbox for Analysis of Single-Trial EEG Dynamics Including Independent Component Analysis. J. Neurosci. Methods 2004, 134, 9–21.
  27. Olivares, E.I.; Iglesias, J.; Saavedra, C.; Trujillo-Barreto, N.J.; Valdés-Sosa, M. Brain Signals of Face Processing as Revealed by Event-Related Potentials. Behav. Neurol. 2015, 2015, 514361.
  28. Cruz, A.; Pires, G.; Nunes, U.J. Spatial Filtering Based on Riemannian Distance to Improve the Generalization of ErrP Classification. Neurocomputing 2022, 470, 236–246.
  29. Khalili-Ardali, M.; Wu, S.; Tonin, A.; Birbaumer, N.; Chaudhary, U. Neurophysiological Aspects of the Completely Locked-in Syndrome in Patients with Advanced Amyotrophic Lateral Sclerosis. Clin. Neurophysiol. 2021, 132, 1064–1076.
  30. Bensch, M.; Martens, S.; Halder, S.; Hill, J.; Nijboer, F.; Ramos, A.; Birbaumer, N.; Bogdan, M.; Kotchoubey, B.; Rosenstiel, W.; et al. Assessing Attention and Cognitive Function in Completely Locked-in State with Event-Related Brain Potentials and Epidural Electrocorticography. J. Neural Eng. 2014, 11, 026006.
  31. Maruyama, Y.; Yoshimura, N.; Rana, A.; Malekshahi, A.; Tonin, A.; Jaramillo-Gonzalez, A.; Birbaumer, N.; Chaudhary, U. Electroencephalography of Completely Locked-in State Patients with Amyotrophic Lateral Sclerosis. Neurosci. Res. 2021, 162, 45–51.
  32. Secco, A.; Tonin, A.; Rana, A.; Jaramillo-Gonzalez, A.; Khalili-Ardali, M.; Birbaumer, N.; Chaudhary, U. EEG Power Spectral Density in Locked-in and Completely Locked-in State Patients: A Longitudinal Study. Cogn. Neurodyn. 2021, 15, 473–480.
  33. Savić, A.M.; Aliakbaryhosseinabadi, S.; Blicher, J.U.; Farina, D.; Mrachacz-Kersting, N.; Došen, S. Online Control of an Assistive Active Glove by Slow Cortical Signals in Patients with Amyotrophic Lateral Sclerosis. J. Neural Eng. 2021, 18, 046085.
  34. Aliakbaryhosseinabadi, S.; Dosen, S.; Savic, A.M.; Blicher, J.; Farina, D.; Mrachacz-Kersting, N. Participant-Specific Classifier Tuning Increases the Performance of Hand Movement Detection from EEG in Patients with Amyotrophic Lateral Sclerosis. J. Neural Eng. 2021, 18, 056023.
Figure 1. Experimental protocol for (a) the calibration and (b) the online classification sessions. The interval between selections is set to 4 s. During online sessions, the number of stimulus repetitions may be adjusted based on the participant’s performance. Each word can be associated with a visual and/or auditory stimulus. Each online session consisted of 10 selections.
Figure 2. Final grid layout of the interface with the standard face as the ON stimulus, used in the AVNsf BCI. In this screenshot, the word in the ON state (target event) is ‘AJUDA’. See demonstration videos.
Figure 3. Overview of the BCI pipeline used in this study. The participant engages in the task with a P300-based BCI using visuo-auditory stimulation. The collected data undergo preprocessing followed by feature extraction, where the most relevant features are selected for classification with a Naïve Bayes classifier across seven possible classes. A Yes/No filter may be used to limit the choice to these two classes, and finally feedback is provided for the detected symbol/word.
Figure 4. (a) ERPs for the target and non-target stimuli obtained in one calibration session for the AVNsf condition for the CLIS patient at electrode PO8. The green bars indicate time windows where target and non-target ERPs were statistically different (paired t-test, p < 0.05); (b) grand average ERPs for the control group under the same condition. EEG data were not normalized for either group.
Figure 5. Patient’s average ERPs for the target and non-target stimuli across all sessions of each interface type (V, A, AV, and AVN) in different brain regions: electrodes Fz (left) and POz (right). In the green regions, there are statistically significant differences between the target and non-target stimuli (paired t-test, p < 0.05). The data were normalized to zero mean and a standard deviation of 1.
Figure 6. ERP color maps obtained in EEGLAB at the POz electrode during one calibration session for the AVNsf condition, shown for both the patient (a) and participant P1 from the control group (b). Each line represents one trial of the session. The average of the trial amplitudes is shown at the bottom of the figure, generating the ERP plot. These color maps provide illustrative examples of the intertrial variability observed in the patient’s data within a single session, compared with a representative participant of the control group. In the patient’s data, intertrial variability is evident in the columns of the color map, where both positive and negative amplitude values occur at the same time point in subsequent trials. Conversely, the ERP color map of participant P1 exhibits consistent amplitudes across trials.
Figure 7. Online classification achieved by the patient for the different interfaces in each session (typically, each visit consisted of two sessions). The overall classification tendency (linear regression fit) for the two-class BCI is depicted with a thick dark blue line, while the classification tendencies for the Vff, AVff, and AVNff conditions are shown in yellow, green, and light blue, respectively. The remaining interfaces were not tested a sufficient number of times to provide reliable tendency fitting. Online operation was always performed with nine repetitions. Starting from the fifth visit, the classification algorithm used a filter that limited the choice to ‘Yes’ or ‘No’ (binary selection) instead of the seven words.
Figure 8. Average classification results obtained for the different interfaces, including both calibration and online sessions. High offline classification results from the calibration sessions (obtained with five-fold cross-validation) did not translate into high classification results in the online sessions. All results were obtained for nine repetitions.
Table 1. BCI studies with CLIS patients. NA = not available. * Considering only 7 of the 17 CLIS patients.

| Authors | Participants | Data Acquisition | Stimuli | Performance (Nr. of Sessions) |
|---|---|---|---|---|
| Naito et al., 2007 [10] | 17 ALS CLIS | fNIRS | Auditory yes/no questions (binary) | 79.6% * (NA) |
| Ardali et al., 2019 [11] | 4 ALS CLIS | fNIRS | Auditory yes/no questions (binary) | 56.9% (46) |
| Pires et al., 2022 [7] | 1 ALS CLIS | EEG | Visuo-auditory P300 paradigm (7-class) | 30% (1) |
| Okahara et al., 2018 [6] | 3 ALS, 1 progressed to CLIS | EEG | Steady-state visual evoked potentials elicited by blue and green LEDs (binary) | 80% (191) |
| Guger et al., 2017 [8] | 3 ALS CLIS | EEG | Vibrotactile P300 paradigm (binary) | 70%, 90% (NA) |
| Han et al., 2019 [9] | 1 ALS CLIS | EEG | Motor imagery (binary) | 87.5% (4) |
| Chaudhary et al., 2022 [12] | 1 ALS CLIS | Intracortical | Auditory neurofeedback (binary with decision tree) | >80% (135) |
Table 2. CLIS patient characteristics.

| Characteristic | Patient |
|---|---|
| Age | 54 |
| Gender | Female |
| ALSFRS-R | 1 |
| Diagnosis/onset | ALS/Limb |
| Time since onset | 57 months |
| Ventilation/nutrition | Tracheotomy/Gastrostomy |
| Movement control | No |
| Hearing | Yes |
| Tactile sensitivity | Yes |
| Understands what is asked | Authors believe so |
| Attention spans | Unknown |
| Medication | Fluoxetine |
Table 3. Iterative design of the BCI interfaces (interfaces highlighted in bold in the original were tested online with the patient).

| Iteration | Interface | Visual Stimulation | Audio Stimulation |
|---|---|---|---|
| 1 | Va | Arrow layout with words ON in green | - |
| 2 | AVa | Arrow layout with words ON in green | Spoken words |
| 2 | AVNa | Arrow layout with words ON in green | 4 spoken words and 3 white noise |
| 3 | Vsf | Standard face in small central layout | - |
| 3 | Vff | Familiar faces in small central layout | - |
| 3 | AVsf | Standard face in small central layout | Spoken words |
| 3 | AVff | Familiar faces in small central layout | Spoken words |
| 4 | AVNsf | Standard face in small central layout | 4 spoken words and 3 white noise |
| 4 | AVNff | Familiar faces in small central layout | 4 spoken words and 3 white noise |
| 5 | Vsf | Standard face in grid layout | - |
| 5 | Vff | Familiar faces in grid layout | - |
| 5 | AVsf | Standard face in grid layout | Spoken words |
| 5 | AVff | Familiar faces in grid layout | Spoken words |
| 5 | AVNsf | Standard face in grid layout | 4 spoken words and 3 white noise |
| 5 | AVNff | Familiar faces in grid layout | 4 spoken words and 3 white noise |
| 6 | Vsf | Standard face in grid layout with ‘SIM’/‘NÃO’ on the same side | - |
| 6 | Vff | Familiar faces in grid layout with ‘SIM’/‘NÃO’ on the same side | - |
| 6 | AVsf | Standard face in grid layout with ‘SIM’/‘NÃO’ on the same side | Spoken words |
| 6 | AVff | Familiar face in grid layout with ‘SIM’/‘NÃO’ on the same side | Spoken words |
| 6 | AVNsf | Standard face in grid layout with ‘SIM’/‘NÃO’ on the same side | 4 spoken words and 3 white noise |
| 6 | AVNff | Familiar faces in grid layout with ‘SIM’/‘NÃO’ on the same side | 4 spoken words and 3 white noise |
| 7 | AVNw | Words in grid layout with ‘SIM’/‘NÃO’ on the same side, words ON in green | 4 spoken words and 3 white noise |
Table 4. Control group online classification performance.

| Participant | Vsf Accuracy (%) | Vsf Nrep | AVsf Accuracy (%) | AVsf Nrep | AVNsf Accuracy (%) | AVNsf Nrep |
|---|---|---|---|---|---|---|
| 1 | 100 | 1 | 100 | 1 | 100 | 1 |
| 2 | 100 | 1 | 100 | 1 | 100 | 1 |
| 3 | 70 | 3 | 80 | 3 | 100 | 3 |
| 4 | 70 | 5 | 80 | 4 | 100 | 4 |
| 5 | 100 | 4 | 100 | 4 | 90 | 4 |
| Mean ± sd | 88.0 ± 16.4 | 2.8 ± 1.8 | 92.0 ± 11.0 | 2.6 ± 1.5 | 98.0 ± 4.5 | 2.6 ± 1.5 |
