Article

Brain–Computer Interface Based on PLV-Spatial Filter and LSTM Classification for Intuitive Control of Avatars

by Kevin Martín-Chinea 1,*, José Francisco Gómez-González 2,* and Leopoldo Acosta 2

1 Department of Industrial Engineering, University of La Laguna, 38071 San Cristóbal de La Laguna, Spain
2 Department of Computer and Systems Engineering, University of La Laguna, 38071 San Cristóbal de La Laguna, Spain
* Authors to whom correspondence should be addressed.
Electronics 2024, 13(11), 2088; https://doi.org/10.3390/electronics13112088
Submission received: 22 April 2024 / Revised: 16 May 2024 / Accepted: 23 May 2024 / Published: 27 May 2024
(This article belongs to the Special Issue EEG Analysis and Brain–Computer Interface (BCI) Technology)

Abstract:
This study investigates the combination of the brain–computer interface (BCI) and virtual reality (VR) in order to improve the user experience and facilitate control learning in a safe environment. In addition, it assesses the applicability of the phase-locking value spatial filtering (PLV-SF) method and a Long Short-Term Memory (LSTM) network in a real-time EEG-based BCI. The PLV-SF has been shown to improve signal quality, and the LSTM exhibits more stable and accurate behavior. Ten healthy volunteers, six men and four women aged 22 to 37 years, performed tasks inside a virtual house, using their decoded EEG states to direct their movements and actions through a commercial, low-cost wireless EEG device together with a virtual reality system. A BCI and VR can be used effectively to enable the intuitive control of virtual environments by immersing users in real-life situations, making the experience engaging, fun, and safe. Control test times decreased significantly from 3.65 min and 7.79 min in the first and second quartiles, respectively, to 2.56 min and 4.28 min. In addition, the three best-performing volunteers completed a free route in an average time of 6.30 min.


1. Introduction

Technology has advanced greatly over the years, with innovations and developments that improve our daily lives; two fields stand out in this regard: brain–computer interfaces (BCIs) and virtual reality (VR).
A BCI is a system that allows a user to control a computer or other device using their brain activity. This technology has the potential to revolutionize the way people interact with their environment by providing a direct link between the human brain and machines. These systems can be especially attractive to people with mobility impairments because they can increase independence and autonomy in daily activities in several ways. There is a body of literature on different proposals. For example, Huang et al. [1] proposed a novel hybrid BCI system based on electroencephalography (EEG) and electrooculogram (EOG) to control a combined wheelchair and robotic arm system through hand motor imagery, eye blinks, and eyebrow-raising movements. Yu et al. [2] presented a BCI system that integrates motor imagery (MI) potential and P300, while Wang et al. [3] proposed a new system that aggregates EOG information. Both systems were designed to implement wheelchair control. Chen et al. [4] explored the use of an EEG-based BCI and a steady-state visual evoked potential (SSVEP) to control an electric wheelchair for people with motor disabilities, including multiple sclerosis and amyotrophic lateral sclerosis. Pawuś and Paszkiel [5] proposed an expert system involving two neural networks that analyze EEG signals from selected electrodes and detect nervous tics, as well as interference from external sources, for application in BCI technology. Wang et al. [6] proposed a BCI system based on SSVEP and EOG to estimate the vigilance state of a person. This model applied a spatiotemporal convolution module with an attention mechanism to explore the spatiotemporal information of EEG features, and a short-term memory module was used to learn the temporal dependencies of EOG features. The review by Naser et al. [7] traced the evolution of EEG-driven wheelchair control, identifying the state of the art, the models adopted in the literature over recent decades, and the limitations these systems present.
VR, a computer-generated simulation of an environment, can create an immersive experience that allows users to feel as if they are physically present in a fictional world. VR is often used for gaming, education, training, and other interactive experiences [8,9,10]. There are different applications, such as in therapy: Emmelkamp and Meyerbröker [11] described its effectiveness in anxiety disorders and post-traumatic stress disorder; Juan et al. [12] studied an application with three serious games for the motor rehabilitation of hand movements; and the review by Ehioghae et al. [13] highlighted VR as a promising way to optimize postoperative recovery in orthopedic surgery patients. Examples of applications that improve the quality of education include the meta-analysis on nursing education by Chen et al. [14], the virtual hydrogen-atom environment for exploring atomic orbitals in 3D space by Suno and Ohno [15], and the various cases of surgical education reviewed by Ntakakis et al. [16].
If both technologies are combined, a system can be obtained that controls an avatar in a virtual reality environment using brain signals to interpret the user’s intentions and movements. Such a system can allow users to learn and improve their control of a BCI by becoming more familiar with it and developing a stronger connection with it in a safe and controlled environment for practicing movements and activities. Some examples follow. Deng et al. [17] developed a modular multi-quadcopter system in a 3D virtual reality scene where an SSVEP-BCI system was applied for swarm control; Vourvopoulos et al. [18] investigated embodied VR feedback to improve the BCI performance of older adults in the stroke age range; and the integration of a BCI based on the SSVEP paradigm into a VR flight simulator was proposed by Zhou et al. [19]. In a VR environment, it is possible to provide visual or auditory feedback when the user successfully performs a task, as Juliano et al. [20] demonstrated in their study of embodiment and as in the REINVENT platform developed by Vourvopoulos et al. [21], in which EEG, electromyography (EMG), and VR were combined to provide feedback and promote the recovery of chronic stroke survivors.
This combination of systems can help users learn to control their brain activity more effectively. Additionally, using a variety of tasks and challenges during training can help users develop a wider range of control over the BCI system. Its application in therapy and rehabilitation can be highly effective in helping patients overcome cognitive and physical challenges, such as brain injuries, stroke, and neurological disorders. The technology allows for personalized and tailored therapy sessions, which can be adjusted in real time to suit the needs of each patient. This can lead to faster and more effective rehabilitation outcomes. Examples are the VR rehabilitation set for stroke patients defined by Karácsony et al. [22], which uses a real-time EEG-based MI BCI for different activations, and the system proposed by Gao et al. [23] for patients with the same pathology; their system combined a BCI, a soft hand rehabilitation glove, and VR to engage more of the cerebral cortex, muscle strength, and muscle tone in addressing hand motor dysfunction.
The current manuscript presents a practical application of a VR environment that incorporates a BCI system using phase-locking value spatial filtering (PLV-SF) [24] and a Long Short-Term Memory (LSTM) neural network for signal classification [25]. In particular, the work of Martín-Chinea et al. [24] detailed a spatial filter based on a graph Laplacian quadratic form, while [25] compared the performance of LSTM neural networks with other classification algorithms commonly used in the literature (support vector machine, discriminant analysis, k-nearest neighbor, and decision tree learner), with the LSTM networks showing improvements of around 30%. This combination of advanced methodologies offers a unique solution for capturing and analyzing brain activity and applying it in VR environments. This study demonstrates the applicability of these complementary methodologies, which aim for high accuracy and reliability, in a real use case. The PLV spatial filtering method enhances the performance of BCIs by improving the signal-to-noise ratio of EEG signals, particularly in noisy or complex environments. It effectively separates relevant signals from background noise and unwanted artifacts, thereby improving the accuracy and reliability of EEG control signals. This innovative approach was previously confined to academic settings; its real-time integration with EEG signals here showcases its practical viability in real-world scenarios. Moreover, the LSTM neural network is a machine learning algorithm that models the temporal dynamics of brain signals and user behavior, which is crucial for accurate and responsive control of BCI and VR systems. LSTM networks can process variable-length input sequences and recognize patterns to make predictions, making them ideal for modeling complex temporal relationships. Other authors, such as Gong et al. [26], have demonstrated their applicability using a model that combines spiking neurons with adaptive LSTM and graph convolution to classify EEG signals. Wang et al. [27] proposed a hybrid 2D CNN-LSTM model for MI EEG classification. Guerrero-Méndez et al. [28] applied various models, such as the Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM)/Bidirectional Long Short-Term Memory (BiLSTM), and a combination of CNN and LSTM, in the same context (MI EEG).
Several aspects of improvement can be identified when applying the methodologies proposed in the cited publications. First, many of them are based on specific systems that do not take into account the evolution of users and how they interact with the system over time. Another crucial aspect is the choice of equipment; in this case, the feasibility of a low-cost commercial wireless system for EEG signal acquisition is highlighted, even though it may present a lower signal-to-noise ratio than clinical systems. In addition, the integration of VR technology is evaluated as a tool to test various methodologies and assess user training in BCI systems in an engaging, fun, and safe way. Therefore, this manuscript details all aspects of this study and presents the results obtained, demonstrating not only the practical applicability of both methods but also the volunteers’ experiences in the developed environment.
The background is presented throughout this introduction. The methods applied are presented below, including the PLV-SF method, the LSTM for cognitive state classification, and the VR task environment. Finally, the findings are presented in the results, with their interpretation in the discussion, followed by the conclusion, which summarizes the key ideas and suggests future directions.

2. Materials and Methods

To assess the effectiveness of both the PLV-SF method and the LSTM as decision-making systems within a VR environment designed for user training and system control enhancement, the materials and methods described in this section were applied. This section provides a detailed description of the methods used in this research, including the signal filtering process, the operation of the system in the virtual environment, and how information about the user experience was obtained. This information will help readers understand and evaluate the methodology used to conduct this study.

2.1. Equipment and Software

An OpenBCI device (OpenBCI Inc., New York, NY, USA), a non-invasive and low-cost EEG device, was utilized in the study, as shown in Figure 1a. It featured a combination of Cyton and Daisy biosensing boards and was used in conjunction with a Python program to record signals at a sampling rate of 125 Hz. The cap employed in the study featured 18 sensors (FP1, FP2, F7, F3, F4, F8, T7, C3, C4, T8, P7, P3, P4, P8, O1, O2, and two reference electrodes at Ref and GND) placed according to the 10–20 system, as shown in Figure 1b. This system was combined with the HTC Vive virtual reality goggles (HTC Corporation, Taipei, Taiwan), which displayed the virtual reality environment. Figure 1c shows an example of a user wearing both devices.
All processing and analyses were carried out using Matlab® (The MathWorks, Inc., Natick, MA, USA) and Fieldtrip (https://www.fieldtriptoolbox.org/, accessed on 21 April 2024) [29], a toolbox developed by the Donders Institute for Brain, Cognition and Behaviour in Nijmegen, The Netherlands in collaboration with other institutes. This toolbox offers advanced preprocessing and analysis methods for magnetoencephalography (MEG), EEG, intracranial electroencephalography (iEEG), and near-infrared spectroscopy (NIRS) recordings. The LSTM was defined and trained using the Matlab Deep Learning Toolbox™, and the virtual environment was created with the Unity game engine (Unity Technologies, San Francisco, CA, USA).

2.2. Participants

Ten healthy volunteers participated in the tests, consisting of 6 men and 4 women ranging in age from 22 to 37 years. Only 4 of them had prior experience using BCI systems.
Written consent was obtained from each volunteer prior to their participation in the study, ensuring their informed agreement to take part in the research procedures and protocols. Additionally, all the ethical and experimental procedures and protocols applied, which involved human subjects, were approved by the Ethics Committee of the University of La Laguna under Approval No. CEIBA2020-0405. This approval underscored our commitment to upholding the welfare and rights of all individuals involved in the study in compliance with ethical principles and regulatory requirements.

2.3. Experimental Training Protocol

The experimental design was conducted in a noise-free room, where the volunteer had to remain comfortably seated in a chair, wearing the EEG device and VR goggles.
The brain signal used to train the classifier was recorded following the protocol in Figure 2. This protocol had two key components, each lasting 20 s with a 10 s break in between. During the initial phase, each participant was asked to simply observe the virtual environment without any active participation; this baseline state was used to record the participant’s EEG at rest. The second part of the experiment corresponded to the EEG signal when the user’s eyes were closed. This period was indicated to the volunteer by an auditory signal (a beep) that told them when to open and close their eyes.
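For illustration, the following minimal Python sketch (ours, not the authors’ acquisition code; segment names and label values are assumptions) annotates such a recording sample by sample at the device’s 125 Hz sampling rate:

```python
import numpy as np

FS = 125  # OpenBCI sampling rate (Hz)

# (segment name, duration in s, label); the break is excluded from
# training, so it is marked None.
protocol = [
    ("rest_eyes_open", 20, 0),
    ("break", 10, None),
    ("eyes_closed", 20, 1),
]

labels = []
for name, seconds, label in protocol:
    labels.extend([label] * (seconds * FS))

print(len(labels) / FS)  # 50.0 s of annotated signal
```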
After conducting the training and generating the classifier, the user interaction was studied through a menu with different action buttons (each corresponding to a specific action: forward movement, left turn, right turn, and backward movement). Three consecutive sessions were conducted where the user was presented with a set of four actions to perform (the first and last sessions had identical action sequences to assess participant learning).
In addition, the three people who obtained the best results in the previous experiments and felt most comfortable using the system performed an extra experiment, in which they had to move the avatar freely around the room in the VR environment, from one point to another.

2.4. Data Preprocessing

The signals were processed in 5 s segments. During the training process, a window of this size was moved along the sequence to build the training dataset. In real-time classification, the signal was acquired every 5 s for continuous processing and analysis. Training and real-time processing shared the same preprocessing. First, a band-pass filter between 1 and 40 Hz was applied.
Secondly, the REBLINCA procedure [30] was used to eliminate the impact of eye blinks on the signal. This process used a frontal sensor (FPZ) as a reference signal to detect blinks and eliminate their influence on the other sensors. From this signal, a blink component was created with a fifth-order Butterworth filter in the 1 to 7 Hz frequency range. In addition, a derived threshold signal was calculated to highlight the rising and falling phases generated by each blink. This signal was normalized to zero mean and unit variance, and a moving average was applied over its square. When the threshold signal identified blink-affected intervals, each channel was corrected by subtracting its weighted blink component, where the weighting was defined by the ratio between these two signals. For more information on the procedure carried out, see [25].
Third, automatic artifact rejection was applied. Each channel was normalized with respect to its mean and standard deviation, a generic signal was created as the mean of all normalized channels, and a threshold of ±3σ around the mean of this average signal was applied to automatically remove artifacts, as described in the FieldTrip tutorial [31].
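The whole chain can be sketched in Python as follows. This is a simplified approximation under our own assumptions (in particular, the blink correction only loosely mimics REBLINCA [30], and FP1 stands in for the frontal reference channel), not the authors’ Matlab/FieldTrip code:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(data, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass; data is (channels, samples)."""
    b, a = butter(order, [lo, hi], btype="band", fs=fs)
    return filtfilt(b, a, data, axis=-1)

def remove_blinks(data, ref_idx, fs, thresh=3.0):
    """Regress a 1-7 Hz frontal blink component out of blink-affected spans."""
    comp = bandpass(data[ref_idx:ref_idx + 1], 1, 7, fs, order=5)[0]
    z = (comp - comp.mean()) / comp.std()                     # zero mean, unit variance
    power = np.convolve(z**2, np.ones(fs) / fs, mode="same")  # moving average of square
    blink = power > thresh                                    # blink-affected samples
    if not blink.any():
        return data.copy()
    cleaned = data.copy()
    for ch in range(data.shape[0]):  # per-channel regression weight
        w = np.dot(data[ch, blink], comp[blink]) / np.dot(comp[blink], comp[blink])
        cleaned[ch, blink] -= w * comp[blink]
    return cleaned

def artifact_keep_mask(data, n_sigma=3.0):
    """Keep samples where the mean z-scored channel stays within +/- n_sigma."""
    z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
    avg = z.mean(axis=0)
    return np.abs(avg - avg.mean()) < n_sigma * avg.std()

fs = 125
eeg = np.random.randn(18, 60 * fs)          # placeholder (channels, samples) recording
eeg = bandpass(eeg, 1, 40, fs)              # step 1: 1-40 Hz band-pass
eeg = remove_blinks(eeg, ref_idx=0, fs=fs)  # step 2: simplified blink correction
eeg = eeg[:, artifact_keep_mask(eeg)]       # step 3: +/-3 sigma artifact rejection
```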

2.5. Feature Extraction

The Morlet wavelet transform [32] was applied to obtain the power evolution in the time–frequency domain. This function was chosen because of its suitability for the analysis of non-stationary signals such as EEG [33,34]. This choice was based on its ability to effectively adapt to changes in the signal as well as its excellent localization in both the time and frequency domains. This allowed us to effectively capture the various temporal and spectral characteristics of the EEG signal.
The analysis focused on distinguishing the eyes-open state from the eyes-closed state. To achieve this differentiation, the alpha band (8–12 Hz) of the EEG signals recorded by the O1 and O2 sensors in the visual area (occipital lobe) [25,35] was used to train the classifier. This band was selected because it is associated with states of visual attention, relaxation, and low cognitive activity [36,37,38], which makes it suitable for identifying the response to visual stimuli or any activity related to visual perception and attention.
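As an illustration of this step, the Python sketch below (ours; the study used Matlab/FieldTrip, and the number of cycles and placeholder data are assumptions) computes the alpha-band power time course of one occipital channel with complex Morlet wavelets:

```python
import numpy as np
from scipy.signal import fftconvolve

def morlet_power(signal, freqs, fs, n_cycles=7):
    """Time-frequency power (n_freqs, n_samples) via complex Morlet wavelets."""
    power = np.empty((len(freqs), signal.size))
    for i, f in enumerate(freqs):
        sigma = n_cycles / (2 * np.pi * f)            # temporal std of the envelope
        t = np.arange(-4 * sigma, 4 * sigma, 1 / fs)  # wavelet support (~±4 sigma)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # unit-energy normalization
        power[i] = np.abs(fftconvolve(signal, wavelet, mode="same")) ** 2
    return power

fs = 125
o1 = np.random.randn(5 * fs)       # placeholder for one 5 s O1 segment
alpha_freqs = np.arange(8, 13)     # alpha band, 8-12 Hz
alpha_power = morlet_power(o1, alpha_freqs, fs).mean(axis=0)  # feature time course
```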

2.6. Phase-Locking Value Spatial Filtering

This process, described in [24], was applied to the alpha-band power spectrum. PLV-SF is a method that uses the spatial dependencies between EEG sensors to reconstruct the signals.
Starting from the set of trials, the temporal power associated with each sensor is calculated from the Morlet wavelet transform, giving a matrix X of dimension n × m, where n is the number of sensors and m is the number of temporal samples. The graph Laplacian matrix L of each trial is then generated from the synchronization metric used, in this case, the phase-locking value (PLV). This synchronization metric quantifies the consistency of the phase difference between two sensors, as given by the following formula [39]:
$$\mathrm{PLV}_{kl} = \left| \left\langle e^{\,i\varphi_{kl}(t)} \right\rangle \right|$$
where $\varphi_{kl}(t)$ is the instantaneous phase difference between sensors $k$ and $l$, obtained using the Hilbert transform, and $\langle \cdot \rangle$ denotes the time average over the interval of interest, which yields a single time-independent value characterizing the whole interval.
The PLV is a metric commonly used to measure synchronization between EEG sensors [40,41] in BCIs because synchronization metrics are less affected by noise than amplitude (synchronization is indicated by a non-random phase or phase-difference distribution [42]). Noise in an EEG signal can directly affect its amplitude, while the relative phase of two signals can remain largely unchanged, since this type of noise generally affects the magnitude of a signal more than its relative phase. In addition, by working with temporal synchronization between different brain regions, synchronization patterns are evaluated that remain informative even in the presence of external noise.
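A minimal sketch of this computation (ours, not the authors’ implementation) for two already band-filtered signals, using the Hilbert transform to obtain the instantaneous phases:

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value of two equal-length, band-filtered 1-D signals."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))  # instantaneous phase difference
    return np.abs(np.mean(np.exp(1j * dphi)))           # |time average of e^{i*dphi}|

fs = 125
t = np.arange(5 * fs) / fs
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 10 * t + 0.3) + 0.5 * np.random.randn(t.size)
print(plv(x, y))  # near 1 for strongly phase-locked signals, near 0 otherwise
```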
PLV-SF is a filtering method that uses this synchronization metric to solve the following convex problem at each sensor, generating a new matrix of equal dimensions with filtered signals:
$$\min_{X} \; \operatorname{tr}\!\left[ X^{T} L X \right] \quad \text{subject to} \quad \left\lVert B \circ X - Y \right\rVert_{F} \leq \varepsilon$$
where $Y$ corresponds to the original matrix without the specific channel to be filtered, $B$ corresponds to a binary matrix in which the channel to be filtered is removed, $\circ$ denotes the Hadamard product, $\varepsilon$ is a small tolerance, and $\lVert \cdot \rVert_{F}$ denotes the Frobenius (or L2) norm for matrices.
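To convey the spirit of this optimization, the sketch below applies the unconstrained per-channel solution: minimizing tr[XᵀLX] over one channel with the others held fixed yields that channel as the PLV-weighted average of its neighbors. This is a simplified illustration under our own assumptions, not the constrained solver of [24]:

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

def plv_spatial_filter(X):
    """Rebuild each channel as the PLV-weighted average of the other channels.

    With L = D - W (W the PLV adjacency matrix), setting the gradient of
    tr[X^T L X] with respect to channel k to zero, holding the rest fixed,
    gives x_k(t) = sum_j W[k, j] * x_j(t) / sum_j W[k, j].
    """
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            W[i, j] = W[j, i] = plv(X[i], X[j])
    X_filtered = np.empty_like(X)
    for k in range(n):
        X_filtered[k] = W[k] @ X / W[k].sum()  # W[k, k] = 0 excludes channel k
    return X_filtered

X = np.random.randn(18, 625)  # placeholder: 18 channels, one 5 s window at 125 Hz
X_f = plv_spatial_filter(X)
```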

2.7. LSTM Classification

An LSTM, a type of artificial neural network (ANN), was selected as the classification algorithm [43]; this selection was based on the article [25], where an LSTM network was compared with other classification algorithms. The comparison focused on classifying the filtered power-spectrum signal from the sensors over the occipital lobe, O1 and O2 on the device used.
The same architecture used in that publication [25] was applied, as shown in Figure 3. It includes a sequence input layer with two neurons, one per channel. The LSTM layer, consisting of eight cells, identifies long-term dependencies between time steps in the sequence data. The fully connected layer connects every neuron in one layer with those in the next, while the SoftMax layer uses the SoftMax activation function to calculate the probability of each class. The final classification layer determines the outcome [44,45].
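For illustration, a rough PyTorch rendering of this architecture is sketched below (the study used the Matlab Deep Learning Toolbox™; beyond the 2 input neurons and 8 LSTM cells, the layer mapping and hyperparameters are our assumptions):

```python
import torch
import torch.nn as nn

class EyeStateLSTM(nn.Module):
    """Sequence input (2 features) -> LSTM (8 cells) -> fully connected -> softmax."""
    def __init__(self, n_features=2, hidden=8, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):             # x: (batch, time, features)
        out, _ = self.lstm(x)         # out: (batch, time, hidden)
        logits = self.fc(out[:, -1])  # classify from the last time step
        return torch.softmax(logits, dim=-1)

model = EyeStateLSTM()
window = torch.randn(1, 5 * 125, 2)  # one 5 s alpha-power segment (O1, O2) at 125 Hz
probs = model(window)                # probabilities for eyes open / eyes closed
```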
The results presented in [25] highlight the outstanding performance of the LSTM for two main reasons. First, the accuracy (acc), sensitivity (sen), specificity (spe), and Matthews correlation coefficient (mcc) metrics showed that the LSTM significantly outperformed the other algorithms evaluated. Second, its performance was especially remarkable when processing continuous sequences: while the classical algorithms showed instability, alternating between different states, the LSTM provided a more consistent and stable classification throughout the entire sequence.

2.8. Virtual Environment

The virtual environment was a simulated space with various rooms and furniture. During training, the user was placed in this environment to learn how to respond to the stimulus protocol; see Figure 4a. As described in Section 2.3, the volunteer was present in the virtual environment but did not interact with it in any way; the volunteer’s task was to open and close their eyes in response to the sound stimuli received.
Once training was complete and the LSTM classification model was generated, the user was placed in the same VR scenario but could interact with the environment through a menu with scrolling buttons; see Figure 4b. The menu highlighted each button for 10 s (the speed of change); after this time, the next button was automatically highlighted. To select a button, the user had to close their eyes (the brain state defined as the action) while the button was highlighted, and the LSTM model had to correctly classify this state (the model works with 5 s segments). The speed of change was chosen based on an earlier paper [25], which showed that classification performance across the studied population stopped differing significantly once the time window reached 5 s. This means that, in the current system, users had a reaction window that gave them two opportunities to select a particular button. To enhance the user experience, the menu played a “beep” sound when buttons changed and a “click” sound, together with a green color change, when a button was selected.
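The selection logic can be summarized with the following hypothetical Python sketch; acquire_5s_window and classify_window are stand-ins for the real acquisition and LSTM pipeline, not part of the published system:

```python
BUTTONS = ["forward", "turn_left", "turn_right", "backward"]

def run_menu(acquire_5s_window, classify_window):
    """Scroll through the buttons until one is selected by an eyes-closed state."""
    while True:
        for button in BUTTONS:
            print(f"highlighting: {button}")      # a beep marks the change in the real UI
            for _ in range(2):                    # 10 s highlight = two 5 s windows
                window = acquire_5s_window()
                if classify_window(window) == "eyes_closed":
                    print(f"selected: {button}")  # click sound + green highlight
                    return button
```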
This menu interface was verified by means of three different tests, in each of which the user had to execute a given sequence of four buttons. As can be seen in Figure 4c, these target buttons were presented to the user above the menu within the VR environment. The first and third tests consisted of the same sequence of buttons so that improvement and execution times could be compared between users.

3. Results

The research findings highlight the positive evaluations at both the system control and user experience levels. Additionally, the results show the applicability of both innovative, state-of-the-art methods (the PLV-SF method and the LSTM classifier) in real-world scenarios.
Table 1 presents the results obtained from the volunteers using the system:
  • Previous experience (Previous Exp) was recorded because prior experience with a BCI system can reduce adaptation time and improve the handling of the technology.
  • Training accuracy (Training Acc) is the benchmark metric of model training performance.
  • The percentage of hits in the test (Test Acc) represents the correct classification at each moment for the classifier when the user is interacting with the menu (to achieve 100% accuracy in the test, the user must select all correct buttons on the first attempt, without selecting any incorrect buttons or omitting the selection of a correct button).
  • Time refers to the minutes the user takes to complete the test.
These last two parameters, time and test accuracy, were recorded for each test (first fixed sequence, random sequence, and last fixed sequence).
Although some users had previous experience from other experiments with various protocols, it was investigated whether their results under this protocol differed from those of users with no previous experience. A Student’s t-test for independent samples was performed on the accuracy and time data obtained in both the first and the second fixed tests. The results showed p-values of 0.185 and 0.685 for accuracy and 0.357 and 0.665 for time, respectively. These results indicate no significant relationship between the users’ experience and their performance in controlling the system, so all volunteers can be considered part of one general population.
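For reference, this comparison can be reproduced from Table 1, as in the following sketch; splitting the Fixed Seq 1 test accuracies by previous experience recovers the reported p = 0.185:

```python
import numpy as np
from scipy.stats import ttest_ind

# Test Acc (%) of Fixed Seq 1 from Table 1, split by previous BCI experience.
experienced = np.array([69.23, 75.00, 85.71, 76.00])           # users 1, 4, 5, 6
novice = np.array([74.51, 80.95, 70.31, 68.57, 63.75, 60.95])  # users 2, 3, 7-10

t_stat, p_value = ttest_ind(experienced, novice)  # independent-samples Student's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")     # p = 0.185, as reported
```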
The distribution of system accuracy (Figure 5—Accuracies) does not vary much. The accuracies remain in a similar range because the proportion of errors is lower than that of successes: when a user attempts to select a button and fails, they must return to the same button, which generates additional hits that compensate for the failure. The times (Figure 5—Times) show an improvement in most users, who present lower values in the second test (2.56 min in quartile 1 and 4.28 min in quartile 2) than in the first (3.65 min in quartile 1 and 7.79 min in quartile 2), meaning that they managed to execute the required commands in less time.
The time results of the second test with fixed commands highlight two outliers. Table 1 shows that these correspond to user 6, whose hits and times both worsened, and to user 10, who, despite recording long times, improved on their time from the first fixed test.
Once the tests were completed, the users answered a series of questions to evaluate their feelings regarding the use of the system, as shown in Table 2.
According to the results presented in Figure 6, eight out of ten users highlighted control as the most crucial part of the system, while only two users emphasized the relevance of the user interface for interaction; the virtual reality (VR) environment itself, although attractive to them, was not a priority. This evaluation of the different parts of the system highlights the importance of control and of the intermediate menu, regardless of the environment used, whether virtual reality or a physical wheelchair. However, virtual reality remains essential for ensuring safe interaction and facilitating user learning. As for the difficulty in performing the tests, only two users reported difficulties in use, and three stated that the time allowed to change the selection from one button to another was not enough to select the desired button (they needed more time to select it correctly). As for the overall ratings, the equipment used received a score of 4 from three users and 5 from seven; the interface received 4 from four users and 5 from six; and the overall system received 4 from four users and 5 from six.
During the execution of the experiments, the system interacted with users to obtain information about their experience and comfort level in performing the tasks. With this information, along with the results in both accuracy and execution time, a new challenge was posed on another date to users 1, 5, and 7. This test consisted of moving from one corner of the room to the other, where the refrigerator was located (the interaction with the menu remained the same as in the previous case). The route was freely selected by the user from two possible options: move forward, turn left, move forward, and move toward the refrigerator; or turn right, move forward, turn left, and move toward the refrigerator. Figure 7 shows the routes taken by each user.
User 1 successfully completed the route using two different paths, while the other two users, because of a loss of control of the system (due to fatigue and frustration), were unable to complete the second circuit.
During the first path, user 1 encountered some false positives and backtracked after reaching the upper right corner but completed it in 6′07″. In the second path, there were a few challenges, including difficulty rotating at one of the corners and in the middle of the room; nonetheless, user 1 eventually reached the target in 9′58″. User 7 successfully reached the target by following the first path despite initially making an error when approaching the fridge; this user quickly corrected the mistake and completed the task in a total time of 4′07″. Finally, user 8 completed the path with a time of 5′01″, encountering only one issue at the start, when they accidentally walked backward in the wrong direction. Despite this setback, user 8 quickly recovered and successfully completed the path.

4. Discussion

This manuscript presents a practical application of a virtual reality environment that incorporates a BCI system working with the PLV-SF filtering method and an LSTM classifier.
Nowadays, learning takes place in both real and virtual environments. Physical environments carry risks associated with unforeseen situations, which has led to an increase in the adoption of virtual environments as a more secure, low-cost alternative. Within these environments, various factors make interaction with the scene resemble interaction with a real one. Moreover, VR offers a three-dimensional perception that enhances the feeling of presence and realism compared to viewing on a 2D screen. Much research has documented comparisons supporting the advantages inherent in this technology [46,47,48,49].
The implementation of BCIs in practical settings still faces limitations. The size and diversity of the sample are key aspects, as they may limit the generalizability of the findings. In this study, despite working with only 10 people, the sample was sought to be diverse and representative of the population. Likewise, this study was based on visual activation, but future research can explore other valid tasks or scenarios to better understand the performance of BCIs in practical environments at the level of usability and user experience. Without a doubt, however, two of the main limitations of BCI systems are reliability and adaptability. As such, it is crucial to have applications like the one presented, which allow people to hone their skills and test the system. Moreover, various specialists have underlined the significance of users acquiring proficiency in the use of BCIs to enhance their dependability. For instance, Tortora et al. [50] demonstrated that a BCI operator was able to enhance their output through repeated use between tests. Eidel and Kübler [51] replicated previous research on the use of tactile stimulation for the control of a BCI. The system used was a P300-based BCI, and users had to navigate a virtual wheelchair through a 3D apartment. The study found significant training effects on both online and offline accuracy, which increased significantly with training from 65% to 86% and from 70% to 95%, respectively. In addition, subjective questionnaire data showed high workload and moderate-to-high satisfaction. Other research, by Chen and Sui [52], analyzed and explored the efficacy of a low-cost neurofeedback training (NFT) system based on a real-time EEG BCI to regulate subjects’ working memory (WM) levels. One NFT group received several sessions with a game feedback interface designed to regulate the alpha band, and EEG data were collected and analyzed. The study found that NFT significantly increased alpha band power in the prefrontal and occipital cortexes and improved WM performance, with lower error rates than in the control group.
Despite the promising results obtained in the present manuscript, the primary limitation of the proposed system lies in the performance time required by the BCI system to classify each user action. Consequently, future research endeavors will focus on exploring alternative paradigms, such as motor imagery, event-related potential, and/or steady-state visually evoked potential, aiming to reduce BCI response times while maintaining an immersive and natural user experience.

5. Conclusions

BCI and VR technology has potential applications in fields such as gaming, education, training, and rehabilitation, where intuitive and immersive experiences are desired. This research demonstrates that the combination of a BCI and VR can be used effectively to enable intuitive control of virtual environments by immersing users in real-life situations, making the experience of learning to control the system and improving with it not only engaging and fun but also completely safe. Furthermore, the applicability of the PLV-SF method and of the LSTM was demonstrated in a real case. Users showed improvements by reducing the time to complete the tests (comparing the first to the last), with times that went from 3.65 min and 7.79 min in the first and second quartiles to 2.56 min and 4.28 min, respectively. In addition, it was demonstrated how the proposed system works in real cases by allowing different users to reach a specific point in the house in an average time of 6 min and 18.25 s. Overall, this study highlights the potential of BCI-VR technology to enhance the user experience and enable more natural interaction in real-world cases.

Author Contributions

Conceptualization, K.M.-C., J.F.G.-G. and L.A.; methodology, K.M.-C., J.F.G.-G. and L.A.; software, K.M.-C.; validation, K.M.-C., J.F.G.-G. and L.A.; formal analysis, K.M.-C., J.F.G.-G. and L.A.; investigation, K.M.-C., J.F.G.-G. and L.A.; resources, K.M.-C., J.F.G.-G. and L.A.; data curation, K.M.-C.; writing—original draft preparation, K.M.-C. and J.F.G.-G.; writing—review and editing, K.M.-C., J.F.G.-G. and L.A.; visualization, K.M.-C. and J.F.G.-G.; supervision, J.F.G.-G. and L.A.; project administration, J.F.G.-G. and L.A.; funding acquisition, K.M.-C., J.F.G.-G. and L.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded under the auspices of Research Project ProID2017010100, supported by the Consejería de Economía, Industria, Comercio y Conocimiento of the Government of the Canary Islands (Spain) and the ERDF (European Regional Development Fund), and Research Project DPI2017-90002-R, supported by the Spanish Ministerio de Economía, Industria y Competitividad, Gobierno de España. The work of K. Martín-Chinea was supported by the Agencia Canaria de Investigación, Innovación y Sociedad de la Información (ACIISI) of the Government of the Canary Islands (Spain) and the European Social Fund (ESP).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the University of La Laguna (Approval no. CEIBA2020-0405).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
BCI: Brain–computer interface
VR: Virtual reality
PLV-SF: Phase-locking value spatial filtering
LSTM: Long Short-Term Memory
EEG: Electroencephalography
EOG: Electrooculogram
SSVEP: Steady-state visual evoked potential
EMG: Electromyography
MI: Motor imagery
MEG: Magnetoencephalography
iEEG: Intracranial electroencephalography
NIRS: Near-infrared spectroscopy
ANN: Artificial neural network
NFT: Neurofeedback training
WM: Working memory

References

  1. Huang, Q.; Zhang, Z.; Yu, T.; He, S.; Li, Y. An EEG-/EOG-Based Hybrid Brain-Computer Interface: Application on Controlling an Integrated Wheelchair Robotic Arm System. Front. Neurosci. 2019, 13, 1243. [Google Scholar] [CrossRef]
  2. Yu, Y.; Zhou, Z.; Liu, Y.; Jiang, J.; Yin, E.; Zhang, N.; Wang, Z.; Liu, Y.; Wu, X.; Hu, D. Self-paced operation of a wheelchair based on a hybrid brain-computer interface combining motor imagery and P300 potential. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 2516–2526. [Google Scholar] [CrossRef] [PubMed]
  3. Wang, H.; Li, Y.; Long, J.; Yu, T.; Gu, Z. An asynchronous wheelchair control by hybrid EEG-EOG brain-computer interface. Cogn. Neurodynamics 2014, 8, 399–409. [Google Scholar] [CrossRef] [PubMed]
  4. Chen, W.; Chen, S.K.; Liu, Y.H.; Chen, Y.J.; Chen, C.S. An Electric Wheelchair Manipulating System Using SSVEP-Based BCI System. Biosensors 2022, 12, 772. [Google Scholar] [CrossRef] [PubMed]
  5. Pawuś, D.; Paszkiel, S. BCI Wheelchair Control Using Expert System Classifying EEG Signals Based on Power Spectrum Estimation and Nervous Tics Detection. Appl. Sci. 2022, 12, 10385. [Google Scholar] [CrossRef]
  6. Wang, K.; Qiu, S.; Wei, W.; Zhang, Y.; Wang, S.; He, H.; Xu, M.; Jung, T.P.; Ming, D. A multimodal approach to estimating vigilance in SSVEP-based BCI. Expert Syst. Appl. 2023, 225, 120177. [Google Scholar] [CrossRef]
  7. Naser, M.Y.; Bhattacharya, S. Towards Practical BCI-Driven Wheelchairs: A Systematic Review Study. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 1030–1044. [Google Scholar] [CrossRef] [PubMed]
  8. Xiong, J.; Hsiang, E.L.; He, Z.; Zhan, T.; Wu, S.T. Augmented reality and virtual reality displays: Emerging technologies and future perspectives. Light. Sci. Appl. 2021, 10, 216. [Google Scholar] [CrossRef] [PubMed]
  9. Al-Ansi, A.M.; Jaboob, M.; Garad, A.; Al-Ansi, A. Analyzing augmented reality (AR) and virtual reality (VR) recent development in education. Soc. Sci. Humanit. Open 2023, 8, 100532. [Google Scholar] [CrossRef]
  10. Demeco, A.; Zola, L.; Frizziero, A.; Martini, C.; Palumbo, A.; Foresti, R.; Buccino, G.; Costantino, C. Immersive Virtual Reality in Post-Stroke Rehabilitation: A Systematic Review. Sensors 2023, 23, 1712. [Google Scholar] [CrossRef]
  11. Emmelkamp, P.M.; Meyerbröker, K. Virtual Reality Therapy in Mental Health. Annu. Rev. Clin. Psychol. 2021, 17, 495–519. [Google Scholar] [CrossRef]
  12. Juan, M.C.; Elexpuru, J.; Dias, P.; Santos, B.S.; Amorim, P. Immersive virtual reality for upper limb rehabilitation: Comparing hand and controller interaction. Virtual Real. 2023, 27, 1157–1171. [Google Scholar] [CrossRef] [PubMed]
  13. Ehioghae, M.; Montoya, A.; Keshav, R.; Vippa, T.K.; Manuk-Hakobyan, H.; Hasoon, J.; Kaye, A.D.; Urits, I. Effectiveness of Virtual Reality-Based Rehabilitation Interventions in Improving Postoperative Outcomes for Orthopedic Surgery Patients. Curr. Pain Headache Rep. 2024, 28, 37–45. [Google Scholar] [CrossRef]
  14. Chen, F.Q.; Leng, Y.F.; Ge, J.F.; Wang, D.W.; Li, C.; Chen, B.; Sun, Z.L. Effectiveness of virtual reality in nursing education: Meta-analysis. J. Med. Internet Res. 2020, 22, e18290. [Google Scholar] [CrossRef] [PubMed]
  15. Suno, H.; Ohno, N. Virtual Hydrogen, a virtual reality education tool in physics and chemistry. Procedia Comput. Sci. 2023, 225, 2283–2291. [Google Scholar] [CrossRef]
  16. Ntakakis, G.; Plomariti, C.; Frantzidis, C.; Antoniou, P.E.; Bamidis, P.D.; Tsoulfas, G. Exploring the use of virtual reality in surgical education. World J. Transplant. 2023, 13, 36–43. [Google Scholar] [CrossRef]
  17. Deng, T.; Huo, Z.; Zhang, L.; Dong, Z.; Niu, L.; Kang, X.; Huang, X. A VR-based BCI interactive system for UAV swarm control. Biomed. Signal Process. Control. 2023, 85, 104944. [Google Scholar] [CrossRef]
  18. Vourvopoulos, A.; Blanco-Mora, D.A.; Aldridge, A.; Jorge, C.; Figueiredo, P.; Badia, S.B.I. Enhancing Motor-Imagery Brain-Computer Interface Training With Embodied Virtual Reality: A Pilot Study With Older Adults. In Proceedings of the 2022 IEEE International Workshop on Metrology for Extended Reality, Artificial Intelligence and Neural Engineering, MetroXRAINE 2022—Proceedings, Rome, Italy, 26–28 October 2022; pp. 157–162. [Google Scholar] [CrossRef]
  19. Zhou, Z.; Zhang, L.; Wei, S.; Zhang, X.; Mao, L. Development and evaluation of BCI for operating VR flight simulator based on desktop VR equipment. Adv. Eng. Inform. 2022, 51, 101499. [Google Scholar] [CrossRef]
  20. Juliano, J.M.; Spicer, R.P.; Vourvopoulos, A.; Lefebvre, S.; Jann, K.; Ard, T.; Santarnecchi, E.; Krum, D.M.; Liew, S.L. Embodiment is related to better performance on a brain–computer interface in immersive virtual reality: A pilot study. Sensors 2020, 20, 1204. [Google Scholar] [CrossRef]
  21. Vourvopoulos, A.; Pardo, O.M.; Lefebvre, S.; Neureither, M.; Saldana, D.; Jahng, E.; Liew, S.L. Effects of a brain-computer interface with virtual reality (VR) neurofeedback: A pilot study in chronic stroke patients. Front. Hum. Neurosci. 2019, 13, 460405. [Google Scholar] [CrossRef]
  22. Karácsony, T.; Hansen, J.P.; Iversen, H.K.; Puthusserypady, S. Brain computer interface for neuro-rehabilitation with deep learning classification and virtual reality feedback. In Proceedings of the 10th Augmented Human International Conference 2019, Reims, France, 11–12 March 2019. [Google Scholar] [CrossRef]
  23. Gao, N.; Chen, P.; Liang, L. BCI–VR-Based Hand Soft Rehabilitation System with Its Applications in Hand Rehabilitation After Stroke. Int. J. Precis. Eng. Manuf. 2023, 24, 1403–1424. [Google Scholar] [CrossRef]
  24. Martin-Chinea, K.; Gómez-González, J.F.; Acosta, L. A New PLV-Spatial Filtering to Improve the Classification Performance in BCI Systems. IEEE Trans. Neural Syst. Rehabil. Eng. 2022, 30, 2275–2282. [Google Scholar] [CrossRef] [PubMed]
  25. Martín-Chinea, K.; Ortega, J.; Gómez-González, J.F.; Pereda, E.; Toledo, J.; Acosta, L. Effect of time windows in LSTM networks for EEG-based BCIs. Cogn. Neurodyn. 2022, 17, 385–398. [Google Scholar] [CrossRef] [PubMed]
  26. Gong, P.; Wang, P.; Zhou, Y.; Zhang, D. A Spiking Neural Network With Adaptive Graph Convolution and LSTM for EEG-Based Brain-Computer Interfaces. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 1440–1450. [Google Scholar] [CrossRef] [PubMed]
  27. Wang, J.; Cheng, S.; Tian, J.; Gao, Y. A 2D CNN-LSTM hybrid algorithm using time series segments of EEG data for motor imagery classification. Biomed. Signal Process. Control. 2023, 83, 104627. [Google Scholar] [CrossRef]
  28. Guerrero-Mendez, C.D.; Blanco-Diaz, C.F.; Ruiz-Olaya, A.F.; López-Delis, A.; Jaramillo-Isaza, S.; Andrade, R.M.; Souza, A.F.D.; Delisle-Rodriguez, D.; Frizera-Neto, A.; Bastos-Filho, T.F. EEG motor imagery classification using deep learning approaches in naïve BCI users. Biomed. Phys. Eng. Express 2023, 9, 45029. [Google Scholar] [CrossRef] [PubMed]
  29. Oostenveld, R.; Fries, P.; Maris, E.; Schoffelen, J.M. FieldTrip: Open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data. Comput. Intell. Neurosci. 2011, 2011, 156869. [Google Scholar] [CrossRef]
  30. Di Flumeri, G.; Arico, P.; Borghini, G.; Colosimo, A.; Babiloni, F. A new regression-based method for the eye blinks artifacts correction in the EEG signal, without using any EOG channel. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS, Orlando, FL, USA, 16–20 August 2016; pp. 3187–3190. [Google Scholar] [CrossRef]
  31. Donders Centre for Cognitive Neuroimaging. FieldTrip: Automatic Artifact Rejection; Donders Centre for Cognitive Neuroimaging: Nijmegen, The Netherlands, 2019. [Google Scholar]
  32. Tallon-Baudry, C.; Bertrand, O. Oscillatory gamma activity in humans and its role in object representation. Trends Cogn. Sci. 1999, 3, 151–162. [Google Scholar] [CrossRef]
  33. Belcher, M.A.; Hwang, I.C.; Bhattacharya, S.; Hairston, W.D.; Metcalfe, J.S. EEG-based prediction of driving events from passenger cognitive state using Morlet Wavelet and Evoked Responses. Transp. Eng. 2022, 8, 100107. [Google Scholar] [CrossRef]
  34. Gosala, B.; Dindayal Kapgate, P.; Jain, P.; Nath Chaurasia, R.; Gupta, M. Wavelet transforms for feature engineering in EEG data processing: An application on Schizophrenia. Biomed. Signal Process. Control 2023, 85, 104811. [Google Scholar] [CrossRef]
  35. Barry, R.J.; Clarke, A.R.; Johnstone, S.J.; Magee, C.A.; Rushby, J.A. EEG differences between eyes-closed and eyes-open resting conditions. Clin. Neurophysiol. 2007, 118, 2765–2773. [Google Scholar] [CrossRef] [PubMed]
  36. Raufi, B.; Longo, L. An Evaluation of the EEG Alpha-to-Theta and Theta-to-Alpha Band Ratios as Indexes of Mental Workload. Front. Neuroinform. 2022, 16, 861967. [Google Scholar] [CrossRef] [PubMed]
  37. Mizokuchi, K.; Tanaka, T.; Sato, T.G.; Shiraki, Y. Alpha band modulation caused by selective attention to music enables EEG classification. Cogn. Neurodyn. 2023. [Google Scholar] [CrossRef]
  38. Mussigmann, T.; Bardel, B.; Lefaucheur, J.P. Resting-state electroencephalography (EEG) biomarkers of chronic neuropathic pain. A systematic review. NeuroImage 2022, 258, 119351. [Google Scholar] [CrossRef] [PubMed]
  39. García-Prieto, J.; Bajo, R.; Pereda, E. Efficient Computation of Functional Brain Networks: Toward Real-Time Functional Connectivity. Front. Neuroinform. 2017, 11, 8. [Google Scholar] [CrossRef] [PubMed]
  40. Esparza-Iaizzo, M.; Vigué-Guix, I.; Ruzzoli, M.; Torralba-Cuello, M.; Soto-Faraco, S. Long-Range α-Synchronization as Control Signal for BCI: A Feasibility Study. eNeuro 2023, 10, 1–14. [Google Scholar] [CrossRef] [PubMed]
  41. Corsi, M.C.; Chevallier, S.; Fallani, F.D.V.; Yger, F. Functional Connectivity Ensemble Method to Enhance BCI Performance (FUCONE). IEEE Trans. Biomed. Eng. 2022, 69, 2826–2838. [Google Scholar] [CrossRef] [PubMed]
  42. Palva, J.M. Encyclopedia of Computational Neuroscience; Springer: Berlin/Heidelberg, Germany, 2020; pp. 1–5. [Google Scholar] [CrossRef]
  43. Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
  44. Kudo, M.; Toyama, J.; Shimbo, M. Multidimensional curve classification using passing-through regions. Pattern Recognit. Lett. 1999, 20, 1103–1111. [Google Scholar] [CrossRef]
  45. Bishop, C. Pattern Recognition and Machine Learning, 1st ed.; Springer: Singapore, 2006; p. 758. [Google Scholar]
  46. Moreno-Lumbreras, D.; Minelli, R.; Villaverde, A.; Gonzalez-Barahona, J.M.; Lanza, M. CodeCity: A comparison of on-screen and virtual reality. Inf. Softw. Technol. 2023, 153, 107064. [Google Scholar] [CrossRef]
  47. Erhardsson, M.; Alt Murphy, M.; Sunnerhagen, K.S. Commercial head-mounted display virtual reality for upper extremity rehabilitation in chronic stroke: A single-case design study. J. Neuroeng. Rehabil. 2020, 17, 154. [Google Scholar] [CrossRef] [PubMed]
  48. Meyer, O.A.; Omdahl, M.K.; Makransky, G. Investigating the effect of pre-training when learning through immersive virtual reality and video: A media and methods experiment. Comput. Educ. 2019, 140, 103603. [Google Scholar] [CrossRef]
  49. Frederiksen, J.G.; Sørensen, S.M.D.; Konge, L.; Svendsen, M.B.S.; Nobel-Jørgensen, M.; Bjerrum, F.; Andersen, S.A.W. Cognitive load and performance in immersive virtual reality versus conventional virtual reality simulation training of laparoscopic surgery: A randomized trial. Surg. Endosc. 2020, 34, 1244–1252. [Google Scholar] [CrossRef] [PubMed]
  50. Tortora, S.; Beraldo, G.; Bettella, F.; Formaggio, E.; Rubega, M.; Del Felice, A.; Masiero, S.; Carli, R.; Petrone, N.; Menegatti, E.; et al. Neural correlates of user learning during long-term BCI training for the Cybathlon competition. J. Neuroeng. Rehabil. 2022, 19, 69. [Google Scholar] [CrossRef] [PubMed]
  51. Eidel, M.; Kübler, A. Wheelchair Control in a Virtual Environment by Healthy Participants Using a P300-BCI Based on Tactile Stimulation: Training Effects and Usability. Front. Hum. Neurosci. 2020, 14, 265. [Google Scholar] [CrossRef]
  52. Chen, X.Y.; Sui, L. Alpha band neurofeedback training based on a portable device improves working memory performance of young people. Biomed. Signal Process. Control 2023, 80, 104308. [Google Scholar] [CrossRef]
Figure 1. (a) OpenBCI device. (b) Sensor distribution of the cap according to the 10–20 distribution system. (c) Example of a user using the BCI system with the VR glasses.
Figure 2. Experimental protocol.
Figure 3. Neural network architecture.
Figure 4. Different interaction scenes. (a) VR environment. (b) VR environment with scroll button menu. (c) VR with the menu and instructions to follow.
Figure 5. Comparison between the two tests with the same static sequence (dots are outliers).
Figure 6. Results of the post-experience user questionnaires.
Figure 7. Routes taken by users in the room. The red triangles define the first path, the blue ones define the second, and the green triangles define the start.
Table 1. Results obtained by the user when interacting with the virtual reality system.
User | Previous Exp | Training Acc (%) | Fixed Seq 1: Test Acc (%) | Fixed Seq 1: Time (min) | Random Seq: Test Acc (%) | Random Seq: Time (min) | Fixed Seq 2: Test Acc (%) | Fixed Seq 2: Time (min)
1 | Yes | 100 | 69.23 | 4.08 | 71.88 | 3.37 | 61.76 | 3.57
2 | No | 94 | 74.51 | 5.48 | 88.46 | 2.77 | 69.23 | 2.72
3 | No | 83 | 80.95 | 2.20 | 74.19 | 3.28 | 64.71 | 3.63
4 | Yes | 100 | 75.00 | 3.82 | 72.22 | 3.85 | 75.00 | 2.48
5 | Yes | 95 | 85.71 | 1.42 | 82.61 | 2.47 | 70.83 | 2.53
6 | Yes | 85 | 76.00 | 8.12 | 75.00 | 2.58 | 68.60 | 9.22
7 | No | 88 | 70.31 | 6.82 | 77.55 | 5.10 | 84.62 | 2.65
8 | No | 99 | 68.57 | 3.60 | 64.71 | 5.42 | 74.42 | 4.50
9 | No | 62 | 63.75 | 8.60 | 67.27 | 5.82 | 80.00 | 1.58
10 | No | 95 | 60.95 | 11.25 | 56.52 | 4.80 | 55.88 | 7.23
mean | — | 90.10 | 72.50 | 5.53 | 73.04 | 3.95 | 70.51 | 4.01
std | — | 11.59 | 7.52 | 3.11 | 9.02 | 1.24 | 8.52 | 2.40
Table 2. Test questions after experience in the virtual environment.
# | Question
1 | What do you consider most important in the system?
2 | Did you have difficulties performing the test? Did the devices, the room, or other elements make it difficult to perform the test?
3 | Was the speed of changes between buttons and selection time adequate for system control?
4 | General evaluation of the use of the applied system (electroencephalograph and virtual reality glasses) (rated from 1 to 5)
5 | Overall rating of the interface (from 1 to 5)
6 | Overall rating of the system (from 1 to 5)
