Article

Applications of Brain Wave Classification for Controlling an Intelligent Wheelchair

by Maria Carolina Avelar 1, Patricia Almeida 1, Brigida Monica Faria 2,3,* and Luis Paulo Reis 1,3

1 Faculty of Engineering, University of Porto (FEUP), Rua Dr. Roberto Frias, s/n, 4200-465 Porto, Portugal
2 ESS, Polytechnic of Porto (ESS-P.PORTO), Rua Dr. António Bernardino de Almeida, 400, 4200-072 Porto, Portugal
3 Artificial Intelligence and Computer Science Laboratory (LIACC—Member of LASI LA), Rua Dr. Roberto Frias, s/n, 4200-465 Porto, Portugal
* Author to whom correspondence should be addressed.
Technologies 2024, 12(6), 80; https://doi.org/10.3390/technologies12060080
Submission received: 9 April 2024 / Revised: 15 May 2024 / Accepted: 20 May 2024 / Published: 3 June 2024

Abstract:
The independence and autonomy of both elderly and disabled people have been a growing concern in today’s society. Wheelchairs have thus proven fundamental for the mobility of people with physical disabilities in the lower limbs, paralysis, or other restrictive conditions. Various adapted sensors can be employed to facilitate the wheelchair’s driving experience. This work develops a proof of concept of a brain–computer interface (BCI) whose final goal is to control an intelligent wheelchair. An event-related (de)synchronization neuro-mechanism is used, which corresponds to a synchronization, or desynchronization, of the mu and beta brain rhythms during the execution, preparation, or imagination of motor actions. Two datasets were used for algorithm development: one from BCI Competition IV (dataset A), acquired through twenty-two Ag/AgCl electrodes and encompassing motor imagery of the right and left hands and the feet; and another (dataset B) obtained in the laboratory using an Emotiv EPOC headset, with the same motor imagery tasks. Regarding feature extraction, several approaches were tested: two versions of the signal’s power spectral density, followed by a filter bank version; the use of the respective frequency coefficients; and, finally, two versions of the well-known filter bank common spatial pattern (FBCSP) method. With the second version of FBCSP, dataset A presented an F1-score of 0.797 and a rather low false positive rate of 0.150. Moreover, the corresponding average kappa score reached 0.693, which is of the same order of magnitude as the 0.57 obtained in the competition. Regarding dataset B, the average F1-score was 0.651, with a kappa score of 0.447 and a false positive rate of 0.471.
However, it should be noted that some subjects from this dataset presented F1-scores of 0.747 and 0.911, suggesting that the motor imagery (MI) aptness of different users may influence their performance. In conclusion, it is possible to obtain promising results using an architecture suitable for a real-time application.

1. Introduction

Independence and autonomy in mobility are two of the most important conditions determining the quality of life of people with disabilities or low mobility [1]. Limited mobility can originate from a broad range of situations, from accidents and disease to the ageing process. Currently, several mobility-related technologies are designed to achieve independent mobility, in particular powered orthoses, prosthetic devices, and exoskeletons. Notwithstanding these devices, wheeled mobility devices remain among the most used assistive devices [2]. According to the World Health Organization (WHO) [3], approximately 10% of the world’s population, or around 740 million people, live with disabilities, and, among those people, almost 10% require a wheelchair. Therefore, it is estimated that about 1% of the total population needs wheelchairs, which translates into 74 million people worldwide [4].
The importance of providing versatile wheelchairs that can be adapted to the most diverse conditions of their users is thus emphasised. Different interfaces are being developed to overcome existing barriers to use; in particular, special attention has been dedicated to voice control techniques, joysticks, and tongue or head movements. However, hand gesture recognition and brain–computer interface (BCI) systems are proving to be interesting methods of wheelchair control due to their accessible price and non-invasiveness. BCIs seem the best option to bridge the user’s will and the wheelchair, as they provide a direct pathway between the “mind” and the external world simply by interpreting the user’s brain activity patterns as corresponding commands [5], thus not requiring any neuro-muscular control capabilities. Furthermore, people desire to be in charge of their motion as much as possible, even if they have lost most of their voluntary muscle control; for them, BCIs are an exceptional option [6]. Brain–computer interfaces provide control and communication between human intention and physical devices by translating the pattern of brain activity into commands [6]. The flow of a BCI consists of the acquisition of information from the brain, followed by data processing, and ends in the output of a control command [7]. Thus, a BCI can usually be conceptually divided into signal acquisition, pre-processing, feature extraction, and classification, where the last three stages interpret the signals gathered by the first.
This paper is structured into six sections, beginning with this introduction. The second section addresses the background and state of the art concerning brain–computer interfaces (BCIs) for acquiring and classifying brain activity. Section 3 details the methods and materials employed in the experimental work. In Section 4, the results obtained from various approaches, including a real-time application, are presented. A discussion of the findings is presented in Section 5, followed by conclusions and suggestions for future work.

2. Background and State of the Art of BCI Brain Activity Acquisition Methods

Understanding the acquisition methods for brain activity is crucial for the development of effective brain–computer interfaces (BCIs). There are several methods available, but the most commonly used and well-established method is electroencephalography (EEG). EEG is favoured for its low cost, convenience, standardized electrode placement, and well-documented acquisition techniques. Additionally, EEG offers known filtering methods to address noise and ocular artefacts, making it an attractive option for BCI applications.

2.1. Signal Acquisition

There are several methods to acquire brain activity that can be fed into a BCI; however, the most used acquisition method is EEG, as it is low-cost and convenient to use. Other factors that make it such an attractive tool are the standardisation of electrode placement, plentiful and well-documented information on acquisition techniques, and being a well-established method with known filtering [8]. Table 1, adapted from [7], compares the different types of methods used to acquire signals for BCI use.
I. Magnetoencephalography (MEG) is a neuro-imaging technique, which uses the magnetic fields created by the natural currents that flow in the brain to map the brain activity. To do that, it uses magnetometers. The cerebral cortex’s sites, which are activated by a stimulus, can be found from the detected magnetic field distribution [9].
II. Near-infrared spectroscopy (NIRS) is a spectroscopic method that uses the near-infrared (NIR) region of the electromagnetic spectrum (from 780 nm to 2500 nm). NIR light can penetrate human tissues; however, it suffers a relatively high attenuation due to the main chromophore haemoglobin (the oxygen transport red blood cell protein), which is presented in the blood. Therefore, when a specific area of the brain is activated, the localised blood volume in that area changes quickly and, if optical imaging is used, it is possible to measure the location and activity of specific regions of the brain. This is due to the continuous tracking of the haemoglobin levels through the determination of optical absorption coefficients [10].
III. Functional magnetic resonance imaging (fMRI), through variations associated with blood flow, can measure brain activity. This technique relies on the fact that cerebral blood flow and neuronal activation are coupled; thus, when an area of the brain is in use, the blood flow to that region also increases [11].
IV. Electrocorticography (ECoG) is a type of electrophysiological monitoring that records activity mainly from the cortical pyramidal cells (neurons). For that, it requires the electrodes to be placed directly on the exposed surface of the brain so that the recorded activity comes directly from the cerebral cortex [12].
V. Micro-electrode arrays (MEAs) are devices that contain multiple microelectrodes; the number can vary from ten to thousands, through which the neural signals are obtained. These arrays function as neural interfaces that connect neurons to electronic circuitry [13].
VI. Functional transcranial Doppler (fTCD) is a technique that uses ultrasound Doppler to measure the velocity of blood flow in the main cerebral arteries during local neural activity [14]. Changes in the velocity of the blood flow are correlated to changes in cerebral oxygen uptake, enabling fTCD to measure brain activity [15].
However, the robustness of existing BCI systems is not yet satisfactory, due to the non-stationary nature of non-invasive EEG signals. If a BCI system is unstable, other techniques should be further developed to improve the overall driving performance [6]. These concerns are usually addressed by improving feature extraction and classification, as the alternative would be to switch to an invasive approach. Although the range of existing commercial headsets is broad, most offer few electrodes, as they are geared towards improving the user’s focus, aiding relaxation, or gaming. The ones with the best characteristics are the Emotiv EPOC, the Emotiv Flex, and the OpenBCI [16]. Although the last two do not restrict the electrode configuration as the Emotiv EPOC does, they are more expensive and complex. The OpenBCI, in turn, does not offer the same freedom of movement and comfort as the Emotiv headsets, which are wireless with battery lives of 12 h (EPOC) and 9 h (Flex) [17]. The cost of an Emotiv is approximately USD 1000, while an OpenBCI can cost more than USD 2000 [16]. Hence, many authors nowadays do not use commercial EEG headsets to obtain the signals that feed the BCI; they prefer to assemble their own EEG set from an amplifier and electrodes, as seen in Table 2.
However, these are usually not wireless options. Nevertheless, a significant group still uses the Emotiv EPOC, as it offers a wider range of electrodes than other commercial options, allowing signals to be obtained from different brain lobes. Several public EEG datasets related to motor imagery are available [50]. They involve recordings of brain activity while subjects imagine performing specific motor tasks, such as moving a limb or making a particular gesture, and are essential for studying motor control, brain–computer interfaces (BCIs), and rehabilitation. Covering a wide range of tasks and experimental paradigms, with varying numbers of subjects, electrode configurations, and recording parameters, each dataset offers insights into different aspects of brain function and behaviour. From collections like the largest SCP dataset of motor imagery, with extensive EEG recordings spanning multiple sessions and participants, to focused datasets like the imagination of right-hand thumb movement, capturing specific motor imagery tasks, these resources [50] are valuable for exploring the neural correlates of motor control, emotion processing, error monitoring, and other cognitive processes.

2.2. Signal Processing

The signal-processing module is divided into different parts [51]. The steps vary depending on whether the stage is training or testing; however, the training steps are broader than the testing ones, and hence these will be the ones discussed. The first step is to pre-process the signal, which is further subdivided into band-pass and spatial filtering; afterwards, the features are extracted and selected. Finally, the classification is carried out, and the performance is evaluated. To perform this, machine learning techniques must be applied; hence, a brief explanation of this concept follows.
Machine learning (ML) is based on data analytics that automates analytical model building. By using algorithms that iteratively learn from data, the computer can find hidden insights without being explicitly programmed where to look. This approach is used when the problem is complex and can be described by many variables; it creates an unknown target function that models the input into the desired output [52]. The learning algorithm receives a set of labelled examples (inputs with corresponding outputs) and learns by comparing its predicted outputs with the correct ones to find errors, modifying the model accordingly. The resulting model can predict future events and, when exposed to new data, can adapt itself. In theory, if the algorithm works properly, the more data there are, the better the predictions become. However, predictions are limited by bias in the algorithm and in the data, which can produce systematically skewed results. Therefore, the complexity of the learning algorithm is critical and should be balanced against the complexity of the data [52].

2.2.1. Pre-Processing

The EEG signal is, per se, very noisy, owing to several factors: the low signal-to-noise ratio (as it is collected from the surface of the individual’s scalp), the low spatial resolution, and other sources such as artefacts or interfering frequencies [53]. Artefact removal involves cancelling or correcting the artefacts without distorting the signal of interest and can be implemented in both the temporal and spatial domains [54]. Pre-processing usually concerns two types of filtering, in the frequency and spatial domains [51]. Band-pass filtering removes some frequencies, or frequency bands, from the signal [53], outputting the frequency range of interest. Spatial filtering combines the original sensor signals, usually linearly, which can result in a signal with a higher signal-to-noise ratio than that of individual sensors [51]; combining the electrodes leads to more discriminative signals [54]. According to Pejas [55], approaches that rely on spatial filtering not only provide more true positives but also allow more flexibility when choosing the electrode placement: since spatial filters that linearly combine signals from different EEG channels can extract and enhance the desired brain activity, it is usually enough to place the electrodes somewhere in the desired area rather than in one exact location.
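As a minimal sketch of these two pre-processing steps, assuming Python with NumPy/SciPy (the 8–30 Hz band, the common average reference as the spatial filter, and the 128 Hz sampling rate are illustrative choices, not specifics taken from the text):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(eeg, fs, low=8.0, high=30.0, order=5):
    """Zero-phase Butterworth band-pass over the mu/beta range."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

def car(eeg):
    """Common average reference: subtract the mean across channels,
    a simple linear spatial filter."""
    return eeg - eeg.mean(axis=0, keepdims=True)

fs = 128                                 # assumed sampling rate (Hz)
eeg = np.random.randn(14, fs * 2)        # 14 channels, one 2 s epoch (synthetic)
filtered = car(bandpass(eeg, fs))
```

Because both steps are linear, their order does not affect the result; more discriminative spatial filters (e.g. CSP, discussed later in the text) replace the simple average with learned channel weights.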

2.2.2. Feature Extraction and Classification

There are different types of features according to the domain from where they are extracted: time, frequency, or spatial. Different methods are used to extract the features from the EEG signal and further classify them so that the control commands can be obtained. Table 3, partly adapted from [56], presents a group of techniques used by different authors. It comprises several examples referring to the different principles: ERPs, SSVEP, ERD/ERS, and Hybrid.
The ERD/ERS neuro-mechanism is widely used and has been producing noticeable results. It corresponds to a change in the power of specific frequency bands while the user is imagining or visualising a certain motor movement. The best combination is obtained with SVMs or NNs as classifiers; authors such as Abiyev et al. [57] and Khare et al. [35] achieved an impressive accuracy of 100%. The extracted features were all in the frequency domain, mostly frequency coefficients, band power, or spatial filtering. SSVEP BCIs can yield excellent outcomes regardless of the classifier. This is probably due to the neuro-mechanism itself, as it is linked to a specific frequency, facilitating the extraction of the feature vector. However, contrary to ERD/ERS BCIs, these require some sort of hardware, usually flashing buttons (each one at a unique frequency rate), which acts as the stimulus for the user. The user focuses on the button representing the desired direction, thereby proportionally amplifying the EEG signal band corresponding to that button’s frequency. The extracted features fall in the frequency domain and regard the power in specific frequency bands (corresponding to the respective button). ERPs are short amplitude deflections in the brain signal that are time-locked to an event. They are identified by the triggering event, direction of deflection, observed location, and latency [7]; that is why these BCIs usually use temporal features, whereas ERD and SSVEP BCIs employ frequency features [6]. BCI performance does not seem to depend on the choice of classifier. Regarding hybrid BCIs, it can be deduced that methods that decompose the signal are preferred for feature extraction, while the classifiers used are mainly SVMs and LDA. It is possible to conclude that the type of extracted features differs depending on the chosen neuro-mechanism.
The same cannot be said for the classifiers, although it is possible to infer that some perform better than others, namely SVMs, NNs, and LDA.
A BCI provides control and communication between human intention and physical devices by translating the pattern of brain activity into commands. The goal is to use this as a way of controlling an IW, which will eventually lead to an increase in the quality of life of people with disorders and limitations. A BCI has three main blocks: signal acquisition, signal processing, and the application of the output commands. The first aims to collect the brain signals and feed them to the signal-processing unit. There are several ways of achieving this, with EEG being the most common, affordable, and well-documented one. To make the system even more accessible and portable for the patient, the EEG headset should be wireless; hence, the Emotiv EPOC was chosen.
Moreover, the aim is to use the Emotiv EPOC headset as a way to record the user’s brain activity, as it is rapidly installed and portable. Although many authors have already proposed several solutions, none of them meet the required criteria to be commercialised, either by the lack of portability or the lack of accuracy. Therefore, the final goal would be to have a portable, comfortable, affordable, and reliable solution for an end-user consumer, so that the system would ideally be prepared for an out-of-the-laboratory application. Thus, this work contributes to the conceptualisation of the BCI system, regarding its architecture and algorithms.

3. Materials and Methods

A motor imagery (MI) neuro-mechanism is proposed, as it allows the user to focus on the path instead of focusing on the user interface, as the last two are stimulus-dependent neuro-mechanisms. Figure 1 presents the overall scheme of the BCI architecture along with its constituent parts.
Three classes for the commands are used, namely left (0), right (1), and neutral (2). The first two correspond to changes in the direction, whereas the last one implies that the subject wishes to maintain the same direction. This choice relies on the fact that the left and right are the basic commands to control a moving device and, since the system is working in a continuum, the necessity of a neutral class to maintain the direction of movement arises. According to Tang et al. [39], some subjects present a better ability to distinguish between the feet and hands, rather than the left hand from the right one. Consequently, three different runs are tested, where the subject can substitute one of the hands for the thought of feet. More specifically, the subject may have a better performance while differentiating the left hand from the feet, and it may be advantageous to use the thought of the feet to turn to the right.
Moreover, the experiments are divided into two main parts: the validation of the concept and the corresponding execution or testing. Regarding the first part, two datasets are used, dataset 2a from the BCI competition IV (dataset A) [62] and another one acquired in our laboratory using the Emotiv EPOC headset (dataset B). Concerning the execution of the algorithm, a real-time acquisition from the headset is attempted and evaluated.

3.1. Datasets

Dataset A contains a four-class MI for different body parts: the left and right hand (LH/RH), feet (F), and tongue. This dataset corresponds to dataset 2a of the BCI competition IV and comprises 2 sessions of 288 trials from 9 different subjects. In each session, there were 6 smaller sessions of 48 trials, each separated by breaks. It also encompasses an evaluation dataset with the same characteristics as the previously described one. For this work, the tongue MI was discarded, as it was not of interest.
The acquisition protocol for each trial can be seen in Figure 2 and it is a sequence composed of a fixation cross (2 s), followed by an arrow representing the desired MI (1.5 s), a period of blank screen for the subject to imagine the asked cue (2.75 s), and it finishes with a break (~2 s). Furthermore, there is a sound alerting for the beginning and ending of the MI period (4 s).
The signals were obtained using 22 Ag/AgCl electrodes, which were positioned following the 10/20 system shown in Figure 3a. These were placed mostly at the central part of the cortex, where the sensorimotor part is located.
The acquisition protocol for dataset B was approximately the same as for dataset A, with two differences inspired by Tang et al. [39], Dharmasena et al. [42], and Stock and Balbinot [63]. More specifically, in the MI cue, the arrows were displayed on the screen for the whole period, as shown in the diagram presented in Figure 4. Furthermore, the indication of the start of a cue was not used to simplify the process. There were three different cues: right hand (right arrow), left hand (left arrow), and foot (down arrow). The set of sessions comprised 360 trials, 120 for each MI.
In total, signals from nine different healthy subjects were acquired, where subjects 4 and 6 are left-handed, while the others are right-handed. All subjects are below 25 years old, except subject 5, who is 51. During acquisition, the subjects were seated comfortably in a chair, in a quiet room, with their hands on the top of the table while looking at the screen; they were also asked to keep their movements, such as eye gazing, sniffing, or coughing, to a minimum. All the procedures were performed under the ethical standards of the 1964 Helsinki Declaration.
The electrode placement can be seen in Figure 3b. Although none of these match the placement for dataset 2a, they still cover part of the central, parietal, and frontal locations of the cortex, which are known for contributing to the MI [14]. However, it is expected that the results will not be as satisfactory, as the centre of the cortex is not covered [61]. It is also important to state that the recorded points may also depend on the format of the subject’s head, as electrode placement on narrower heads will not be the same as for wider ones, because the electrodes in the headset are fixed.

3.2. Data Processing

The two datasets were divided into training and test as follows:
  • Dataset A: the training data supplied by the BCI Competition IV were used as a train and the evaluation one as a test. The duration of the epochs was two seconds, as explained in [62].
  • Dataset B: 100 trials of each MI were used as training data, and the remaining 20 were used for testing. Usually, each subject would do a 20-trial session, which results in 5 sessions for training and 1 session for testing. From each visual cue and motor imagery period, which lasted 5 s, two epochs of two seconds each were extracted, doubling the data and yielding 240 epochs in total for each class.
  • A subject-oriented approach was followed, requiring the model to undergo training specific to each subject before being tested. However, it should be noted that the sessions utilised for testing differed from those used for training purposes.
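The epoching scheme for dataset B (two 2 s epochs from each 5 s MI period) might be sketched as follows; the 128 Hz sampling rate and the choice to skip the first second are assumptions for illustration:

```python
import numpy as np

def extract_epochs(trial, fs, skip=1.0, epoch_len=2.0, n_epochs=2):
    """Split one 5 s MI trial into two consecutive 2 s epochs,
    discarding the first `skip` seconds of the trial."""
    start = int(skip * fs)
    step = int(epoch_len * fs)
    return np.stack([trial[..., start + i * step : start + (i + 1) * step]
                     for i in range(n_epochs)])

fs = 128                                  # assumed sampling rate (Hz)
trial = np.random.randn(14, 5 * fs)       # one 5 s trial, 14 channels (synthetic)
epochs = extract_epochs(trial, fs)        # shape: (2, 14, 256)
```

Applied to the 120 trials per class, this yields the 240 epochs per class mentioned above.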

3.2.1. Pre-Processing and Feature Extraction

Filtering the EEG signal is already enough to remove noise and ocular artefacts, which are the most common. The first comprises high frequencies, which are discarded, as these are not included in the bands of interest. Moreover, ocular artefacts mainly appear in the theta band, which, once again, is not a band of interest for the MI paradigm. Thus, for every feature extraction approach, presented in the next section, a filtering step is always applied to eliminate these artefacts. The main methods for feature extraction regarding the MI paradigm are spatial filtering using the common spatial pattern (CSP) approach and the use of the signal’s frequency band or the frequency coefficients as features. The different approaches were tested, but with some variations. The next steps, feature selection and classification, were the same for all the approaches.
  1. Filter Bank Common Spatial Pattern I
As dataset A is from a competition, the first approach was to develop an algorithm based on the winning method, the filter bank common spatial pattern (FBCSP), as described in [62]. The goal is to find the best band for each user, which results in dividing the alpha and beta bands into nine sub-bands, from 4 Hz to 40 Hz. Although Ang et al. [62] use a Chebyshev II filter, in this work a Butterworth filter of order five and zero phase was applied. This choice rests on the facts that this filter is known for being the flattest in the passband, that zero phase introduces no group distortion, and that order five is a good compromise with respect to speed. The FBCSP algorithm applies the CSP procedure to each sub-band of the signal. The algorithm generates a linear filter, which is used to extract the features that best discriminate between classes, by maximising the ratio between their covariance matrices [64].
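The nine-band decomposition described above could be sketched as follows with SciPy (the band edges and zero-phase order-five Butterworth filter follow the text; the 250 Hz sampling rate of dataset A and the synthetic data are illustrative):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def filter_bank(eeg, fs, order=5):
    """Decompose the signal into the nine 4 Hz-wide FBCSP sub-bands
    covering 4-40 Hz, using a zero-phase (filtfilt) Butterworth filter."""
    bands = [(lo, lo + 4) for lo in range(4, 40, 4)]   # (4,8), (8,12), ... (36,40)
    out = []
    for lo, hi in bands:
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        out.append(filtfilt(b, a, eeg, axis=-1))
    return np.stack(out)                               # (9, channels, samples)

fs = 250                                 # dataset A sampling rate (assumed here)
eeg = np.random.randn(22, fs * 2)        # 22 channels, one 2 s epoch (synthetic)
sub_bands = filter_bank(eeg, fs)
```

CSP would then be trained independently on each of the nine sub-band signals.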
  2. Filter Bank Common Spatial Pattern II
This approach follows the same principles as the first approach but, after obtaining a spatial filter, the average power of Z is computed and used as features.
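A hedged sketch of this feature computation, assuming a CSP filter matrix W has already been trained on one sub-band (the random W below merely stands in for a trained one, and the log of the normalised variance is a common way to express the average power of the projected signal Z):

```python
import numpy as np

def csp_power_features(epoch, W, n_pairs=2):
    """Project the epoch through the spatial filters (Z = W^T X) and use the
    log of the normalised average power (variance) of the first and last
    `n_pairs` components, the most discriminative ones, as features."""
    Z = W.T @ epoch
    keep = np.r_[:n_pairs, -n_pairs:0]       # first and last filter pairs
    var = Z[keep].var(axis=1)                # average power of each component
    return np.log(var / var.sum())

rng = np.random.default_rng(0)
W = rng.standard_normal((22, 22))            # stand-in for a trained CSP matrix
epoch = rng.standard_normal((22, 500))       # one 2 s epoch (synthetic)
feats = csp_power_features(epoch, W)         # 4 features per sub-band
```

Concatenating these features across the nine sub-bands gives the final feature vector for one epoch.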
  3. Power Spectral Density
The signal is filtered using a Butterworth filter, for the reasons previously given, from 4 to 35 Hz, so as to comprise the alpha and beta bands. Afterwards, epochs of two seconds are obtained and normalised; the normalisation consists of centring each channel to have zero mean, by calculating the mean of each epoch for each channel and subtracting it [57]. The Welch method, with a Hanning window, is then applied to obtain the power spectral density of each epoch, which is used as the feature vector. The Welch method consists of dividing the signal into overlapping segments, which are then windowed; the periodogram of each segment, an estimate of the signal’s spectral density, is calculated using the discrete Fourier transform, and the periodograms are averaged. Windowing the segments, for example with the Hanning window, mitigates spectral leakage: the Fourier transform assumes that the signal is periodic, and non-periodic signals lead to sudden transitions with a broad frequency response [65].

Different methods for choosing the most significant features were tested, namely a method based on a mutual information criterion, the ANOVA F test, and the extra trees classifier, which computes the features’ importance. The first measures the dependency between two random variables and relies on non-parametric methods based on entropy estimation, such as K-nearest neighbours, to improve the selection. The second assesses the amount of variability between the class means, in the context of the variation within the classes, to determine whether the mean differences are statistically significant. Finally, the extra trees classifier computes the importance of the features, allowing the irrelevant ones to be discarded. For each of these methods, only the K best features are selected, via a 10-fold cross-validation using 5–70% of the features; the 70% limit is imposed to prevent overfitting.
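The PSD pipeline and the K-best selection step could be sketched with SciPy and scikit-learn as follows (the epoch counts, k = 40, sampling rate, and random data are illustrative; `f_classif` implements the ANOVA F test, and `mutual_info_classif` the mutual information criterion):

```python
import numpy as np
from scipy.signal import welch
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif

fs = 128                                          # assumed sampling rate (Hz)
rng = np.random.default_rng(1)
epochs = rng.standard_normal((60, 14, 2 * fs))    # 60 epochs, 14 channels, 2 s
labels = rng.integers(0, 3, 60)                   # left / right / neutral

# Centre each channel (zero mean per epoch), then Welch PSD with a Hann window
epochs = epochs - epochs.mean(axis=-1, keepdims=True)
freqs, psd = welch(epochs, fs=fs, window="hann", nperseg=fs)
band = (freqs >= 4) & (freqs <= 35)               # alpha + beta range
X = psd[..., band].reshape(len(epochs), -1)       # one feature vector per epoch

# Keep only the K best features; swap f_classif for mutual_info_classif
# to use the mutual-information criterion instead of the ANOVA F test
selector = SelectKBest(f_classif, k=40).fit(X, labels)
X_sel = selector.transform(X)
```

In practice, k would itself be chosen by the 10-fold cross-validation over the 5–70% range described above.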

3.2.2. Classification

The classifiers were trained to differentiate between three different classes. Due to slower computational time and the fact that they might generate overfitting, non-linear classifiers were not used as a first approach [56]. Thus, four classifiers were trained: Gaussian Naive Bayes (GNB), linear discriminant analysis (LDA), linear support vector machines (LSVMs), and logistic regression (LR). Using these four classifiers, different combinations were tested, as represented in Table 4.
When using a single classifier to predict the result, there are two main approaches: predict a class or predict the probability of belonging to each class. In the latter approach, the ideal value for the probability threshold can be obtained through different metrics. The F1-score was the chosen one, as it considers both the precision and the recall of the classifier. When using two classifiers to predict the final command, different approaches were applied, which are further explained:
  • Two classifiers: if both classifiers predict that the class is 0, then the class is 0; the same is applied for classes 1 and 2. However, if they do not agree on the classification, then the trial is classified as 2, in order to decrease the number of false positives, which in this case are the trials misclassified as MI to the left or right;
  • Two classifiers with variable probability: the idea behind this approach is the same as before; however, the output of each classifier is a probability and not a class label. Thus, a threshold is estimated for each one of the classifiers to output a label, and then the same method is applied, as explained for the two classifiers;
  • Ensemble methods: these are well-established methods, widely used to combine different predictions so that a more generalised and robust model can be obtained. They can be divided into two main groups: averaging and boosting. In averaging, the different classifiers are built independently and their outputs are combined afterwards to reduce the variance; in boosting, the classifiers are built sequentially so that each new classifier tries to decrease the bias of the combination. Two ensemble methods were used:
    - Voting classifier: combines the predictions of the different classifiers and outputs a final prediction as the result of a majority vote, which can be hard or soft. Hard: each classifier predicts a class, and the final prediction is the most voted one; if the classifiers have different weights, the final prediction is obtained through a weighted averaging procedure. Soft: each classifier has a weight and predicts the probability of each class, and the final prediction is obtained through a weighted averaging procedure.
    - AdaBoost: considers several initial classifiers, called weak learners, and combines their predictions through a weighted majority vote. This process is repeated and, at each iteration/boost, the sample weights are modified: each sample starts with a weight, and if it is incorrectly classified its weight increases, so that the next classifier focuses on it more, while correctly classified samples have their weights reduced. After several iterations, the overall classifier, or strong learner, is expected to be better than the individual ones.
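Scikit-learn provides both ensemble strategies directly; a minimal sketch on synthetic data (the classifier subset, weights, and parameters are illustrative, not the paper's actual configuration):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import AdaBoostClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(2)
X, y = rng.standard_normal((120, 10)), rng.integers(0, 3, 120)

# Soft voting: each classifier predicts class probabilities, which are
# averaged (optionally weighted) before taking the argmax
vote = VotingClassifier(
    [("lda", LinearDiscriminantAnalysis()),
     ("gnb", GaussianNB()),
     ("lr", LogisticRegression(max_iter=1000))],
    voting="soft").fit(X, y)

# Boosting: weak learners (decision stumps by default) are fitted
# sequentially, reweighting the misclassified samples at each iteration
boost = AdaBoostClassifier(n_estimators=25).fit(X, y)

pred = vote.predict(X)
```

Passing `voting="hard"` instead selects the most-voted class label rather than averaging probabilities.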
To further improve the results, several non-linear classifiers were also tested: K-nearest neighbours (K-NN), kernel support vector machines (KSVMs), decision trees (DTs), neural networks (NNs), and random forests (RFs). The same classifier combinations as for the linear classifiers were tested. Since these classifiers require more data to avoid overfitting, more epochs were extracted from each MI trial, which had a duration of five seconds: two epochs of two seconds each. The first second of the signal was discarded as a precaution, since the image of the arrow could itself act as a stimulus for the intended direction; this interval is also sufficient for the subject to assimilate which MI to perform. The approach of doubling the number of epochs was ultimately also applied to the linear and statistical classifiers, so that both methods used the same amount of data. Table 5 summarises the optimised hyper-parameters for the respective classifiers, along with a brief description of their function. The optimisation was carried out using a grid search with 5-fold cross-validation, with model selection based on the F1-score.
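The epoching and tuning steps just described can be sketched as below, assuming dataset A's 22 channels at 250 Hz; the placeholder features and the choice of a K-NN as the tuned estimator are ours, purely for illustration:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

# --- Epoch extraction: two 2-s epochs per 5-s trial, skipping second 1 ---
def extract_epochs(trial, fs):
    """trial: (channels x samples) array; fs: sampling rate in Hz."""
    s = fs  # samples per second
    return [trial[:, s:3 * s], trial[:, 3 * s:5 * s]]

rng = np.random.default_rng(0)
trial = rng.standard_normal((22, 5 * 250))  # one 5-s trial, 22 ch at 250 Hz
e1, e2 = extract_epochs(trial, 250)         # each epoch is 22 x 500

# --- Hyper-parameter search: grid search, 5-fold CV, selected by F1 ---
X = rng.standard_normal((100, 30))          # placeholder feature vectors
y = rng.integers(0, 3, 100)                 # three classes: left/right/neutral
grid = GridSearchCV(KNeighborsClassifier(),
                    param_grid={"n_neighbors": [3, 5, 7]},
                    cv=5, scoring="f1_macro")
grid.fit(X, y)
```

`scoring="f1_macro"` averages the per-class F1-scores, which matches selecting models by the F1-score in a three-class problem.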

3.2.3. Evaluation

To evaluate the results of the different approaches on the two datasets, the F1-score, the kappa score, and the false positive (FP) rate were used. The F1-score is the harmonic mean of precision and recall; it reaches its best value at 1 (perfect precision and recall) and its worst at 0. The kappa score expresses the level of agreement between two annotators. Although it is not usually used to compare a prediction against a ground truth, it was the only metric provided by the IV BCI Competition. A kappa value at or below 0 denotes a classifier no better than random, while a value near 1 denotes an almost perfect one. Concerning the FP rate, a new metric was developed, since, for the final application, it is more important to penalise FPs from classes 0 and 1 than from class 2. Nevertheless, a high rate of true positives remains desirable, independently of the class. For an FP rate higher than 1, the classifier produces more false positives than true positives; hence, a rate smaller than 1 is desirable. For each subject, the best of the three runs was selected based on the F1-score; for that run, the respective kappa score and FP rate are also presented. In all approaches, the feature selector was the extra trees classifier. Table 6 lists the linear and statistical classifiers (0–3) and the non-linear classifiers (4–8).
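The three metrics can be computed as below. Since the text does not spell out the exact formula of the FP-rate metric, the `fp_rate` function is one plausible reading of its description: false positives counted only for the penalised left/right classes (0 and 1), divided by the total number of true positives, so that values below 1 are desirable:

```python
import numpy as np
from sklearn.metrics import f1_score, cohen_kappa_score

def fp_rate(y_true, y_pred, penalised=(0, 1)):
    """FP/TP ratio counting only false positives for the penalised
    classes (left/right).  NOTE: the paper does not give the formula;
    this is an assumed reading of its description."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    fp = sum(((y_pred == c) & (y_true != c)).sum() for c in penalised)
    tp = (y_pred == y_true).sum()
    return fp / tp if tp else np.inf

y_true = [0, 0, 1, 2, 2, 2]
y_pred = [0, 2, 1, 2, 0, 2]
print(f1_score(y_true, y_pred, average="macro"))
print(cohen_kappa_score(y_true, y_pred))
print(fp_rate(y_true, y_pred))  # 1 penalised FP / 4 TP = 0.25
```

Note that misclassifying a true left/right cue as neutral (class 2) does not raise this FP rate, which matches the stated priority of keeping spurious left/right commands low.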
The first step consisted of applying only the linear and statistical classifiers. Then, with the intent of further improving the approach that presented the best results, the non-linear classifiers, together with the first set, were applied only to that approach, since these classifiers take longer to run and optimise.

3.3. Hardware and Software

Python version 3 was the programming language used for the experimental work related to signal processing and classification, along with Numpy, Pandas, Seaborn, and Scikit-Learn libraries. For real-time testing, an interface was required to deliver the raw data acquired by the headset to Python. Given that the headset is an Emotiv EPOC, the pyemotiv Python library was applied. This library interfaces with the Emotiv EPOC research SDK, provided by the distributor, enabling the output of raw EEG data for the experimental setup [17].

4. Results

This section presents the results obtained from the different approaches previously explained, as well as the outcomes of a real-time application.

4.1. Filter Bank Common Spatial Pattern I—FBCSP I

In this approach, only the linear and statistical classifiers were used to build the different classifier combinations, since the results obtained were not the best.

4.1.1. FBCSP I Approach Using Dataset A

Table 7 presents the F1-scores obtained for the different combinations, using only linear or statistical classifiers. The row “Best” corresponds to the best score for each subject. Most subjects presented a preferred run regardless of the combination, except for subject 1, for whom each of the three runs was selected as the best at least once. The highest average F1-score was obtained by the ensemble voting hard, which is expected, as it corresponds to the majority vote between the two best classifiers; combining their predictions thus yields a more accurate result.
Table 7 also presents the respective kappa scores, corroborating the ensemble voting hard as the best combination. The kappa score of the winner of the IV BCI competition is also shown. However, the competition involved the classification of four classes (left hand, right hand, foot, and tongue), whereas this work considers only three (left, right, and neutral); the competition results are therefore presented only as a qualitative comparison. The obtained kappa value of 0.604 is in the same order of magnitude as the competition result, 0.57, and higher than 0.5, which clearly indicates that the classifiers are not random. Moreover, the FP rate reached its lowest average value for the one probabilistic classifier, whose threshold was chosen by maximising the F1-score. This result is logical: maximising the F1-score implicitly maximises precision and recall, thereby minimising the FP rate. However, the lowest FP rate was expected to belong to the ensemble voting hard, since it was the combination with the highest F1-score. These approaches correspond to different combinations of several classifiers; Table 8 presents the best ones for the different approaches and subjects. The best algorithms correspond to the Gaussian Naive Bayes classifier (0), linear discriminant analysis (1), and logistic regression (3), the latter being unexpected, as LR was seldom mentioned in the literature review. The linear SVM (2) was never picked, suggesting that it is not a good classifier for this dataset with these features, as it is not capable of accurately distinguishing the three classes.

4.1.2. FBCSP I Approach Using Dataset B

Similarly to what was presented for dataset A, Table 9 introduces the F1-score for the best run of each approach. Contrary to A, for several subjects all three runs were selected as the best at least once; only subjects 5, 6, and 7 showed a preference for one or two runs. This already suggests that the extracted features were not strongly indicative of the class. Once again, the best F1-score was obtained by the ensemble voting hard approach, followed by the two classifiers. However, since the F1-score varies from 0 to 1, the result obtained is not satisfactory, as it lies in the bottom half of that range. Similarly, the kappa score of 0.218 is closer to 0 than to 1, indicating a classifier closer to random than to perfect. The FP rate is quite high, reaching almost 1; that is to say, the number of FPs is almost the same as the number of TPs, showing that this approach is not adequate for the ultimate goal of controlling an IW, for which a very low FP rate is mandatory to maintain the driver's safety. Nevertheless, subjects 3 and 1 performed better than the others, presenting scores equivalent to those of dataset A, which corroborates that people have different aptitudes regarding MI [40]. It is also important to consider inter-individual differences, such as distinct brainwave patterns, cognitive abilities, and learning speeds, among others. The large difference between subjects may also be due to the positioning of the headset: since its electrodes are fixed, the motor cortex may be better covered in some subjects than in others.
Table 10 contains the chosen classifiers for the different methods. Similarly to A, LR (3) and LDA (1) presented the best performance. However, the Gaussian Naive Bayes (0) did not perform well enough to be chosen. Once again, the linear SVM (2) was not picked.

4.2. Filter Bank Common Spatial Pattern II—FBCSP II

Next, we describe the implementation of the FBCSP II approach using dataset A, providing detailed analysis and results of the classification performance. Following that, we examine the application of the same approach using dataset B, shedding light on its efficacy and comparative performance metrics.

4.2.1. FBCSP II Approach Using Dataset A

Table 11 presents the F1-scores for dataset A, using only linear or statistical classifiers. Most subjects showed a preference for one run; for others, such as 3 and 5, the preference is unclear, as all three runs were chosen as the best at least once. Globally, the different classifier combinations resulted in roughly the same ranking and behaved as expected. The best F1-score was obtained, once again, by the ensemble voting hard approach, followed by the one classifier approach, which was not anticipated, as one would intuitively expect the output of two classifiers to be more accurate than that of just one. The worst score corresponds to AdaBoost, clearly excluding it as a recommended approach.
The best kappa was obtained by the ensemble voting hard approach, 0.693, which is higher than that of the previous approach, 0.607, and in the same order of magnitude as the 0.57 kappa score of the competition's winner. It was already anticipated that both two-classifier approaches would present a lower FP rate, since they prevent FPs for classes 0 and 1; this is also one of the reasons why their F1-score is slightly lower than for the other approaches. AdaBoost presented the lowest F1-score and kappa, and therefore the highest FP rate. It can be concluded (Table 12) that the best algorithms correspond, once again, to linear discriminant analysis (1) and logistic regression (3). The Gaussian Naive Bayes classifier (0) was chosen a few times, although almost always as the second-best classifier, whereas the linear SVM (2) was never picked. It is therefore possible to conclude that the latter is not a good classifier for this dataset with these features, as it is not capable of accurately distinguishing the three classes.
Given the promising results, non-linear classifiers were also tested. However, contrary to expectations, the results did not improve; they stayed roughly the same. Furthermore, the best combination was no longer the ensemble voting hard but the two classifiers, which was reflected in a lower FP rate. The average kappa score was very close to that previously obtained with the first group of classifiers.

4.2.2. FBCSP II Approach Using Dataset B

Similarly to what was presented for dataset A, Table 13 introduces the F1-score for the best run of each approach. Most subjects showed a preference for one or two runs. The best classifier was AdaBoost, with an average F1-score of 0.504, immediately followed by the ensemble voting soft, with 0.497. Moreover, the highest F1-score was lower than for dataset A but higher than for the previous approach, 0.484. Once again, it is worth emphasising that subjects 1 and 3 presented scores equivalent to those of dataset A.
Furthermore, the best average kappa score, 0.256, is lower than that of dataset A and lower than the competition value; however, it is higher than the 0.218 from FBCSP I. Nevertheless, it is still not a desirable value, as it is too close to 0. As presumed from the previous scores, the FP rate is higher for dataset B than for dataset A. The lowest value, 0.444, was obtained with the one probabilistic classifier approach. The FP rate for the best approach, AdaBoost, was 0.513, which is high but still lower than the best value for FBCSP I, 0.730. Concerning the selected classifiers, it is clear from Table 14 that LDA and LR were preferred over the other two. Moreover, the Gaussian Naive Bayes classifier was not chosen as the best for any of the combinations. Unlike for dataset A, the linear SVM was picked for this dataset, even if only once, which reinforces that LDA and LR are the most suitable classifiers for this approach.
Because the results above were not satisfactory, in the sense that they would not be acceptable for a real application, further approaches had to be tested. As FBCSP II outperformed FBCSP I, non-linear classifiers were added to the previous set, trained, and tried out. Table 15 contains the F1-scores for this method. As expected, the results improved compared with the linear and statistical classifiers alone. Two combinations obtained the same average F1-score, the one classifier and the ensemble voting hard, which was not foreseen, as two classifiers were presumed to predict more accurately than just one.
Regarding the kappa score, it was highest for the ensemble voting hard, better than the score for the linear and statistical classifiers alone, but still lower than for dataset A, which corroborates what was previously stated about the headset used to acquire these signals. The FP rate was lower than before, as anticipated from the rise in the F1-score. Despite these results, it is still important to mention that subjects 1, 3, and 6 produced results comparable to those of dataset A, even if with slightly higher FP rates. Table 16 presents the chosen classifier(s) for each combination: mainly kernel SVMs, followed by neural networks; in some cases, the K-NN was chosen as the second-best classifier. None of the linear or statistical classifiers were chosen, which indicates that this dataset requires more complex models to predict better results.

4.3. Power Spectral Density

The next subsections present the findings and insights from the analysis of results obtained using power spectral density.

4.3.1. Power Spectral Density Approach Using Dataset A

As mentioned before, since the best results with linear classifiers were not obtained with the PSD approach, non-linear strategies were not employed. Moreover, as no single linear classifier produced satisfactory results on its own, combinations thereof were not evaluated either. The obtained F1-score was 0.510, which is much lower than the F1-score of 0.793 for the combination of one classifier and FBCSP II. Moreover, the FP rate is also much higher than the one for FBCSP II, thus endorsing the idea that this approach is not the best for this dataset.

4.3.2. Power Spectral Density Approach Using Dataset B

Similarly to dataset A, only the one classifier approach was tested, due to unsatisfactory results. However, the difference from FBCSP II was not as large as for dataset A. The average F1-score, 0.443, was quite similar to that of dataset A, 0.510, which did not happen for the other approaches. However, the kappa score and the FP rate were worse; the latter presents a high value of 0.844, close to 1, meaning there were almost as many FPs as TPs, which is not the goal.

4.4. Real-Time Application

The subject with the best performance on dataset B was chosen for the real-time testing. The subject was asked to sit still and keep movements to a minimum, as in the training phase. An external person conducted the experiment and asked the subject to imagine a given MI. Every 2 s, an epoch was sent to the system and a class was predicted. The experimenter waited for ten predictions to appear before asking for the next MI, to allow the system to stabilise; for the same reason, the first three outputs after a new MI were discarded. As subject 3 presented the best results, the steps described previously were applied in a real-time scenario, leading to the results in Table 17. The final system applied the FBCSP II approach, which produced the best score. Then, 70% of the features were selected and fed to an ensemble voting hard classifier built from the majority vote between the kernel SVM and the K-NN, with votes weighted 2 to 1, respectively.
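A sketch of the final classifier described above, assuming scikit-learn's `VotingClassifier` with a hard vote weighted 2:1 between a kernel SVM and a K-NN; the synthetic feature data stand in for the FBCSP II features:

```python
from sklearn.ensemble import VotingClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.datasets import make_classification

# Placeholder three-class feature data (left / right / neutral)
X, y = make_classification(n_samples=150, n_features=10, n_informative=5,
                           n_classes=3, random_state=1)

# Hard (majority) vote between a kernel SVM and a K-NN, weighted 2:1
vote = VotingClassifier(
    estimators=[("ksvm", SVC(kernel="rbf")), ("knn", KNeighborsClassifier())],
    voting="hard",
    weights=[2, 1],
)
vote.fit(X, y)
labels = vote.predict(X[:5])  # one predicted class per 2-s epoch
```

Note that with only two hard voters weighted 2:1, the kernel SVM decides every prediction, so the paper's 2-to-1 scheme may have operated on probabilities or used a different tie-breaking rule; this sketch only mirrors the configuration as stated.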
Overall, the results were satisfactory and consistent with the performance analysed previously. Of twenty-two cues, only two were incorrectly classified (shown in red), one for the left and another for the right. The left cue was misclassified as right, which is a problem, since an IW would go in the opposite direction. The right cue was classified as neutral; despite being misclassified, this is less serious, since a hypothetical IW would have maintained the same direction. Moreover, none of the neutral cues were incorrectly classified. Despite some limitations in accuracy, these results show that the system is evolving in the right direction, suggesting that a new headset and a refinement of the algorithm would deliver promising results.

5. Discussion

Overall, dataset A, independently of the approach, produced better results than dataset B; the only exception might be the PSD approaches, where the average F1-scores for both datasets were very similar. There are several possible reasons for this discrepancy. Dataset A was obtained using a stable headset with movable electrodes, so the electrodes always assumed the same known positions across sessions; moreover, the data were acquired by professionals in a carefully controlled environment. Dataset B was collected using an Emotiv EPOC, which has several problems. Its electrodes are fixed, so their absolute positions differ between subjects and even vary from session to session. It did not fit everyone's head, and four subjects were excluded from this dataset due to limitations in positioning the electrodes correctly. Some sensors had begun to oxidise, leading to noisy acquisitions that hampered the already difficult EEG processing, as the signal is very sensitive. The Emotiv EPOC does not cover the motor cortex, which is critical for the tasks in this study; while the literature suggests it can work for parietal and frontal areas, its performance is not optimal for motor cortex tasks. Nevertheless, the Emotiv EPOC was chosen for its balance of cost, ease of use, and functionality, offering a reasonable number of electrodes, wireless operation, extended battery life, and affordability. Because the training process is very time-consuming and this work was merely a proof of concept, only the subject with the best performance on dataset B was chosen for the real-time testing.
Comparing the different approaches, it was expected that the best method would be related to the CSP, as it was the winning method of the competition. This suggests that spatial methods perform better than the others, which may be related to the elimination of artefacts in the bands of interest. Interestingly, FBCSP II produced slightly better scores than FBCSP I, implying that feeding the whole spatially filtered signal to the feature selector works better than feeding a transformed version of the signal filtered by just the columns of the spatial filter. Although the results from the competition are merely qualitative, the results from FBCSP II also indicate that using the extra trees classifier to obtain the features' importances, and the ensemble voting hard, employing LR and GNB, or LDA, to classify the epochs, represents a valuable update. This led to a final average kappa score of 0.69, which is 20% higher than the winner's value of 0.57. While this 20% is not a strict quantitative comparison, the value of 0.69 already suggests that the algorithm is considerably better than a random classifier and can correctly classify the epochs, presenting an F1-score of 0.797 and the smallest FP rate of all the tested approaches, 0.150. Similarly, dataset B also presented better results for FBCSP II than for FBCSP I. Moreover, contrary to dataset A, dataset B improved its results with the use of non-linear classifiers. Fakhruzzaman et al. [66] and Muñoz et al. [61] used the Emotiv EPOC headset and the CSP method as a feature extractor: Fakhruzzaman et al. [66] obtained an average accuracy of 60%, whereas Muñoz et al. [61] obtained 67.5% using the LDA classifier, 68.3% using the SVM, and 96.7% using a Nu-SVC with RBF kernel.
Overall, the 65% result from dataset B with the FBCSP II approach and the ensemble voting hard classifier falls within the range of values presented by these authors, except for the last method of Muñoz et al. [61]. The latter is much higher than the others, suggesting that this classifier is well suited to this type of feature and should be considered for implementation in future work. Furthermore, the signals in dataset B had constraints in the AF3 and AF4 electrodes, which may be important electrodes according to Lin and Lo [60] and Muñoz et al. [61], thus decreasing the obtained accuracy and F1-score. The average accuracy of other authors using the EPOC and the magnitude of frequency components or the power spectral density (the square of the magnitude) as features was 74–100% for Abiyev et al. [57], 70% for Hurtado-Rincon et al. [59], and 86–92% for Lin and Lo [60] and Siribunyaphat and Punsawad [67]. More recent works [49,68,69] have also achieved notable F1-scores using different EEG headsets. This reflects a promising area to explore for controlling intelligent wheelchairs.

6. Conclusions and Future Work

The goal of this work was to decode users' MI intentions, using an Emotiv EPOC headset to acquire the EEG signals. The intentions were left, right, and neutral, to be further translated into control commands for an intelligent wheelchair. This headset imposes higher constraints in terms of accessing data in a less controlled environment; nevertheless, overall, this work produced a proof of concept for future projects and a thorough study of the different algorithms. Although the real-time results are not yet suitable for the actual application, they validate the concept and the architecture developed to connect the different parts of the system. For future work, utilising a broader and more diverse dataset may contribute to enhancing the model's generality. Another interesting direction is applying different methods for noise removal, such as independent component analysis. Additionally, conducting long-term usage and testing across diverse environments will be essential for assessing the system's stability and applicability.

Author Contributions

Conceptualization, L.P.R. and B.M.F.; methodology, M.C.A., P.A., L.P.R. and B.M.F.; software, M.C.A. and P.A.; validation, M.C.A. and P.A.; formal analysis, M.C.A., P.A. and L.P.R.; investigation, M.C.A., P.A. and L.P.R.; resources, L.P.R. and B.M.F.; data curation, M.C.A. and P.A.; writing—original draft preparation, M.C.A. and P.A.; writing—review and editing, L.P.R. and B.M.F.; visualization, M.C.A., P.A., L.P.R. and B.M.F.; supervision, L.P.R. and B.M.F.; project administration, L.P.R. and B.M.F.; funding acquisition, L.P.R. and B.M.F. All authors have read and agreed to the published version of the manuscript.

Funding

This work was financially supported by: Base Funding—UIDB/00027/2020 of the Artificial Intelligence and Computer Science Laboratory (LIACC) funded by national funds through the FCT/MCTES (PIDDAC) and IntellWheels2.0: Intelligent Wheelchair with Flexible Multimodal Interface and Realistic Simulator (POCI-01-0247-FEDER-39898), supported by NORTE 2020, under PT2020.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of Optimizer/LITEC (under project POCI-01-0247-FEDER-39898 approved on 28 May 2019).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data from the BCI competitions are available at https://www.bbci.de/competition/iv/#datasets first accessed on 1 June 2019. Other data are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Davies, A.; Souza, L.H.; Frank, A.O. Changes in the quality of life in severely disabled people following provision of powered indoor/outdoor chairs. Disabil. Rehabil. 2003, 25, 286–290. [Google Scholar] [CrossRef] [PubMed]
  2. Koontz, A.M.; Ding, D.; Jan, Y.; Groot, S.; Hansen, A. Wheeled mobility. BioMed Res. Int. 2015, 2015, 138176. [Google Scholar] [CrossRef] [PubMed]
  3. Guidelines on the Provision of Manual Wheelchairs in Less Resourced Settings. Available online: https://www.who.int/publications/i/item/9789241547482 (accessed on 5 January 2024).
  4. Worldwide Need—Wheelchair Foundation. Available online: https://www.wheelchairfoundation.org/fth/analysis-of-wheelchair-need (accessed on 5 January 2024).
  5. Xin-an, F.; Luzheng, B.; Teng, T.; Ding, H.; Liu, Y. A brain–computer interface-based vehicle destination selection system using p300 and ssvep signals. IEEE Trans. Intell. Transp. Syst. 2015, 16, 274–283. [Google Scholar] [CrossRef]
  6. Luzheng, B.; Xin-An, F.; Liu, Y. EEG-based brain-controlled mobile robots: A survey. IEEE Trans. Hum.-Mach. Syst. 2013, 43, 161–176. [Google Scholar] [CrossRef]
  7. Gürkök, H.; Nijholt, A. Brain–computer interfaces for multimodal interaction: A survey and principles. Int. J. Hum.-Comput. Interact. 2012, 28, 292–307. [Google Scholar] [CrossRef]
  8. Major, T.C.; Conrad, J.M. A survey of brain computer interfaces and their applications. In Proceedings of the SOUTHEASTCON 2014, Lexington, KY, USA, 13–16 March 2014; pp. 1–8. [Google Scholar] [CrossRef]
  9. Hämäläinen, M.; Hari, R.; Ilmoniemi, R.J.; Knuutila, J.; Lounasmaa, O.V. Magnetoencephalography—Theory, instrumentation, and applications to noninvasive studies of the working human brain. Rev. Mod. Phys. 1993, 65, 413. [Google Scholar] [CrossRef]
  10. Ferrari, M.; Quaresima, V. A brief review on the history of human functional near-infrared spectroscopy (fnirs) development and fields of application. Neuroimage 2012, 63, 921–935. [Google Scholar] [CrossRef] [PubMed]
  11. Logothetis, N.K.; Pauls, J.; Augath, M.; Trinath, T.; Oeltermann, A. Neurophysiological investigation of the basis of the fmri signal. Nature 2001, 412, 150. [Google Scholar] [CrossRef] [PubMed]
  12. Hashiguchi, K.; Morioka, T.; Yoshida, F.; Miyagi, Y.; Nagata, S.; Sakata, A.; Sasaki, T. Correlation between scalp-recorded electroencephalographic and electrocorticographic activities during ictal period. Seizure 2007, 16, 238–247. [Google Scholar] [CrossRef]
  13. Buitenweg, J.R.; Rutten, W.L.C.; Marani, E. Geometry-based finite-element modeling of the electrical contact between a cultured neuron and a microelectrode. IEEE Trans. Biomed. Eng. 2003, 50, 501–509. [Google Scholar] [CrossRef]
  14. Trambaiolli, L.R.; Falk, T.H. Hybrid brain–computer interfaces for wheelchair control: A review of existing solutions, their advantages and open challenges. In Smart Wheelchairs and Brain-Computer Interfaces; Elsevier: Amsterdam, The Netherlands, 2018; pp. 229–256. [Google Scholar] [CrossRef]
  15. Hage, B.; Alwatban, M.R.; Barney, E.; Mills, M.; Dodd, M.D.; Truemper, E.J.; Bashford, G.R. Functional transcranial doppler ultrasound for measurement of hemispheric lateralization during visual memory and visual search cognitive tasks. IEEE Trans. Ultrason. Ferroelectr. Freq. Control. 2016, 63, 2001–2007. [Google Scholar] [CrossRef] [PubMed]
  16. Open Bci. Available online: https://openbci.com/ (accessed on 5 January 2024).
  17. Emotiv Epoc. Available online: https://www.emotiv.com/ (accessed on 10 January 2024).
  18. Pires, G.; Castelo-Branco, M.; Nunes, U. Visual p300-based bci to steer a wheelchair: A bayesian approach. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vancouver, BC, Canada, 20–25 August 2008; pp. 658–661. [Google Scholar] [CrossRef]
  19. Zhang, R.; Wang, Q.; Li, K.; He, S.; Qin, S.; Feng, Z.; Chen, Y.; Song, P.; Yang, T.; Zhang, Y. A bci-based environmental control system for patients with severe spinal cord injuries. IEEE Trans. Biomed. Eng. 2017, 64, 1959–1971. [Google Scholar] [CrossRef]
  20. Rebsamen, B.; Burdet, E.; Guan, C.; Teo, C.L.; Zeng, Q.; Ang, M.; Laugier, C. Controlling a wheelchair using a bci with low information transfer rate. In Proceedings of the 2007 IEEE 10th International Conference on Rehabilitation Robotics, Noordwijk, The Netherlands, 13–15 June 2007; pp. 1003–1008. [Google Scholar] [CrossRef]
  21. Iturrate, I.; Antelis, J.M.; Kubler, A.; Minguez, J. A noninvasive brain-actuated wheelchair based on a p300 neurophysiological protocol and automated navigation. IEEE Trans. Robot. 2009, 25, 614–627. [Google Scholar] [CrossRef]
  22. Alqasemi, R.; Dubey, R. A 9-dof wheelchair-mounted robotic arm system: Design, control, brain-computer interfacing, and testing. In Advances in Robot Manipulators; InTech: Rijeka, Croatia, 2010. [Google Scholar] [CrossRef]
  23. Shin, B.; Kim, T.; Jo, S. Non-invasive brain signal interface for a wheelchair navigation. In Proceedings of the International Conference on Control Automation and Systems, Gyeonggi-do, Republic of Korea, 27–30 October 2010. [Google Scholar] [CrossRef]
  24. Lopes, A.C.; Pires, G.; Nunes, U. Assisted navigation for a brain-actuated intelligent wheelchair. Robot. Auton. Syst. 2013, 61, 245–258. [Google Scholar] [CrossRef]
  25. Zhang, R.; Li, Y.; Yan, Y.; Zhang, H.; Wu, S.; Yu, T.; Gu, Z. Control of a wheelchair in an indoor environment based on a brain–computer interface and automated navigation. IEEE Trans. Neural Syst. Rehabil. Eng. 2016, 24, 128–139. [Google Scholar] [CrossRef] [PubMed]
  26. Wang, Y.; Wang, R.; Gao, X.; Hong, B.; Gao, S. A practical vep-based brain-computer interface. IEEE Trans. Neural Syst. Rehabil. Eng. 2006, 14, 234–240. [Google Scholar] [CrossRef] [PubMed]
  27. Dasgupta, S.; Fanton, M.; Pham, J.; Willard, M.; Nezamfar, H.; Shafai, B.; Erdogmus, D. Brain controlled robotic platform using steady state visual evoked potentials acquired by eeg. In Proceedings of the 2010 Conference Record of the Forty Fourth Asilomar Conference on Signals, Systems and Computers (ASILOMAR), Pacific Grove, CA, USA, 7–10 November 2010; pp. 1371–1374. [Google Scholar] [CrossRef]
  28. Prueckl, R.; Guger, C. Controlling a robot with a brain-computer interface based on steady state visual evoked potentials. In Proceedings of the 2010 International Joint Conference on Neural Networks (IJCNN), Barcelona, Spain, 18–23 July 2010; pp. 1–5. [Google Scholar] [CrossRef]
  29. Mandel, C.; Lüth, T.; Laue, T.; Röfer, T.; Gräser, A.; Krieg-Brückner, B. Navigating a smart wheelchair with a brain-computer interface interpreting steady-state visual evoked potentials. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 10–15 October 2009; pp. 1118–1125. [Google Scholar] [CrossRef]
  30. Xu, Z.; Li, J.; Gu, R.; Xia, B. Steady-state visually evoked potential (ssvep)-based brain-computer interface (bci): A low-delayed asynchronous wheelchair control system. In Neural Information Processing; Springer: Berlin/Heidelberg, Germany, 2012; pp. 305–314. [Google Scholar] [CrossRef]
  31. Müller, S.M.; Bastos, T.F.; Filho, M.S. Proposal of a ssvep-bci to command a robotic wheelchair. J. Control. Autom. Electr. Syst. 2013, 24, 97–105. [Google Scholar] [CrossRef]
  32. Diez, P.F.; Müller, S.M.; Mut, V.A.; Laciar, E.; Avila, E.; Bastos, T.F.; Filho, M.S. Commanding a robotic wheelchair with a high-frequency steady-state visual evoked potential based brain–computer interface. Med. Eng. Phys. 2013, 35, 1155–1164. [Google Scholar] [CrossRef]
  33. Duan, J.; Li, Z.; Yang, C.; Xu, P. Shared control of a brain-actuated intelligent wheelchair. In Proceedings of the 11th World Congress on Intelligent Control and Automation (WCICA), Shenyang, China, 29 June–4 July 2014; pp. 341–346. [Google Scholar] [CrossRef]
34. Larsen, E.A. Classification of EEG Signals in a Brain-Computer Interface System. Master’s Thesis, Institutt for Datateknikk og Informasjonsvitenskap, Trondheim, Norway, 2011. [Google Scholar]
  35. Khare, V.; Santhosh, J.; Anand, S.; Bhatia, M. Brain computer interface based real time control of wheelchair using electroencephalogram. Int. J. Soft Comput. Eng. IJSCE 2011, 1, 41–45. [Google Scholar]
36. Choi, K. Control of a vehicle with EEG signals in real-time and system evaluation. Eur. J. Appl. Physiol. 2012, 112, 755–766. [Google Scholar] [CrossRef]
  37. Barbosa, A.O.; Achanccaray, D.R.; Meggiolaro, M.A. Activation of a mobile robot through a brain computer interface. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation (ICRA), Anchorage, AK, USA, 3–7 May 2010; pp. 4815–4821. [Google Scholar] [CrossRef]
  38. Tsui, C.S.; Gan, J.Q.; Roberts, S.J. A self-paced brain–computer interface for controlling a robot simulator: An online event labelling paradigm and an extended kalman filter based algorithm for online training. Med. Biol. Eng. Comput. 2009, 47, 257–265. [Google Scholar] [CrossRef]
  39. Tang, Z.; Sun, S.; Zhang, S.; Chen, Y.; Li, C.; Chen, S. A brain-machine interface based on erd/ers for an upper-limb exoskeleton control. Sensors 2016, 16, 2050. [Google Scholar] [CrossRef]
  40. Bahri, Z.; Abdulaal, S.; Buallay, M. Sub-band-power-based efficient brain computer interface for wheelchair control. In Proceedings of the 2014 World Symposium on Computer Applications & Research (WSCAR), Sousse, Tunisia, 18–20 January 2014; pp. 1–7. [Google Scholar] [CrossRef]
  41. Li, M.; Zhang, Y.; Zhang, H.; Hu, S. An eeg based control system for intelligent wheelchair. In Applied Mechanics and Materials; Trans Tech Publications: Bäch, Switzerland, 2013; Volume 300, pp. 1540–1545. [Google Scholar] [CrossRef]
42. Dharmasena, S.; Lalitharathne, K.; Dissanayake, K.; Sampath, A.; Pasqual, A. Online classification of imagined hand movement using a consumer-grade EEG device. In Proceedings of the 2013 8th IEEE International Conference on Industrial and Information Systems (ICIIS), Peradeniya, Sri Lanka, 17–20 December 2013; pp. 537–541. [Google Scholar] [CrossRef]
43. Batres-Mendoza, P.; Ibarra-Manzano, M.; Guerra-Hernandez, E.; Almanza-Ojeda, D.; Montoro-Sanjose, C.; Romero-Troncoso, R.; Rostro-Gonzalez, H. Improving EEG-based motor imagery classification for real-time applications using the QSA method. Comput. Intell. Neurosci. 2017, 2017, 9817305. [Google Scholar] [CrossRef]
  44. Cao, L.; Li, J.; Ji, H.; Jiang, C. A hybrid brain computer interface system based on the neurophysiological protocol and brain-actuated switch for wheelchair control. J. Neurosci. Methods 2014, 229, 33–43. [Google Scholar] [CrossRef]
  45. Li, J.; Ji, H.; Cao, L.; Zang, D.; Gu, R.; Xia, B.; Wu, Q. Evaluation and application of a hybrid brain computer interface for real wheelchair parallel control with multi-degree of freedom. Int. J. Neural Syst. 2014, 24, 1450014. [Google Scholar] [CrossRef]
  46. Allison, B.Z.; Brunner, C.; Kaiser, V.; Müller-Putz, G.; Neuper, C.; Pfurtscheller, G. Toward a hybrid brain–computer interface based on imagined movement and visual attention. J. Neural Eng. 2010, 7, 026007. [Google Scholar] [CrossRef]
  47. Long, J.; Li, Y.; Wang, H.; Yu, T.; Pan, J. Control of a simulated wheelchair based on a hybrid brain computer interface. In Proceedings of the 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), San Diego, CA, USA, 28 August–1 September 2012; pp. 6727–6730. [Google Scholar] [CrossRef]
  48. Rani, B.J.; Umamakeswari, A. Electroencephalogram-based brain controlled robotic wheelchair. Indian J. Sci. Technol. 2015, 8 (Suppl. 9), 188–197. [Google Scholar] [CrossRef]
  49. Almeida, P.; Faria, B.M.; Reis, L.P. Brain Waves Classification Using a Single-Channel Dry EEG Headset: An Application for Controlling an Intelligent Wheelchair. In Advances in Practical Applications of Agents, Multi-Agent Systems, and Cognitive Mimetics; The PAAMS Collection; Lecture Notes in Computer Science; Mathieu, P., Dignum, F., Novais, P., De la Prieta, F., Eds.; Springer: Cham, Switzerland, 2023; Volume 13955. [Google Scholar] [CrossRef]
  50. Open Bci-Publicly Available EEG Datasets. Available online: https://openbci.com/community/publicly-available-eeg-datasets/ (accessed on 5 February 2024).
51. Lotte, F.; Bougrain, L.; Cichocki, A.; Clerc, M.; Congedo, M.; Rakotomamonjy, A.; Yger, F. A review of classification algorithms for EEG-based brain–computer interfaces: A 10 year update. J. Neural Eng. 2018, 15, 031005. [Google Scholar] [CrossRef]
  52. Bzdok, D.; Krzywinski, M.; Altman, N. Points of significance: Machine learning: A primer. Nat. Methods 2017, 14, 1119–1120. [Google Scholar] [CrossRef]
53. Johansson, M. Novel cluster-based SVM to reduce classification error in noisy EEG data: Towards real-time brain–robot interfaces. Comput. Biol. Med. 2018, 148, 105931. [Google Scholar] [CrossRef]
  54. Singh, A.; Lal, S.; Guesgen, H.W. Architectural review of co-adaptive brain computer interface. In Proceedings of the 2017 4th Asia-Pacific World Congress on Computer Science and Engineering (APWC on CSE), Mana Island, Fiji, 11–13 December 2017; pp. 200–207. [Google Scholar] [CrossRef]
  55. Pejas, J.; El Fray, I.; Hyla, T.; Kacprzyk, J. (Eds.) Advances in Soft and Hard Computing; Springer: Berlin/Heidelberg, Germany, 2018. [Google Scholar] [CrossRef]
  56. Fernández-Rodríguez, A.; Álvarez, F.; Angevin, R. Review of real brain-controlled wheelchairs. J. Neural Eng. 2016, 13, 061001. [Google Scholar] [CrossRef] [PubMed]
  57. Abiyev, A.H.; Akkaya, N.; Aytac, E.; Günsel, I.; Çagman, A. Brain based control of wheelchair. In Proceedings of the International Conference on Artificial Intelligence (ICAI), Las Vegas, NV, USA, 27–30 July 2015; p. 542. Available online: https://api.semanticscholar.org/CorpusID:13468342 (accessed on 9 April 2024).
58. Carrino, F.; Dumoulin, J.; Mugellini, E.; Khaled, O.A.; Ingold, R. A self-paced BCI system to control an electric wheelchair: Evaluation of a commercial, low-cost EEG device. In Proceedings of the 2012 ISSNIP Biosignals and Biorobotics Conference: Biosignals and Robotics for Better and Safer Living (BRC), Manaus, Brazil, 9–11 January 2012; pp. 1–6. [Google Scholar] [CrossRef]
59. Rincon, J.; Jaramillo, S.; Cespedes, Y.; Meza, A.M.; Domínguez, G. Motor imagery classification using feature relevance analysis: An Emotiv-based BCI system. In Proceedings of the 2014 XIX Symposium on Image, Signal Processing and Artificial Vision, Armenia, Colombia, 17–19 September 2014; pp. 1–5. [Google Scholar] [CrossRef]
  60. Lin, J.; Lo, C. Mental commands recognition on motor imagery-based brain computer interface. Int. J. Comput. Consum. Control. 2016, 25, 18–25. [Google Scholar]
61. Muñoz, J.; Ríos, L.H.; Henao, O. Low cost implementation of a motor imagery experiment with BCI system and its use in neurorehabilitation. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, IL, USA, 2014. [Google Scholar] [CrossRef]
62. Ang, K.; Chin, Z.; Wang, C.; Guan, C.; Zhang, H. Filter bank common spatial pattern algorithm on BCI Competition IV Datasets 2a and 2b. Front. Neurosci. 2012, 6, 39. [Google Scholar] [CrossRef] [PubMed]
  63. Stock, V.; Balbinot, A. Movement imagery classification in emotiv cap based system by naïve bayes. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 4435–4438. [Google Scholar] [CrossRef]
64. Kim, P.; Kim, K.; Kim, S. Using common spatial pattern algorithm for unsupervised real-time estimation of fingertip forces from sEMG signals. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 5039–5045. [Google Scholar] [CrossRef]
  65. Siemens PLM Community. Spectral Leakage. Available online: https://community.plm.automation.siemens.com/t5/Testing-Knowledge-Base/Windows-and-Spectral-Leakage/ta-p/432760 (accessed on 5 December 2023).
66. Fakhruzzaman, M.N.; Riksakomara, E.; Suryotrisongko, H. EEG wave identification in human brain with Emotiv EPOC for motor imagery. Procedia Comput. Sci. 2016, 72, 269–276. [Google Scholar] [CrossRef]
  67. Siribunyaphat, N.; Punsawad, Y. Brain–Computer Interface Based on Steady-State Visual Evoked Potential Using Quick-Response Code Pattern for Wheelchair Control. Sensors 2023, 23, 2069. [Google Scholar] [CrossRef] [PubMed]
  68. Ramírez-Arias, F.J.; García-Guerrero, E.E.; Tlelo-Cuautle, E.; Colores-Vargas, J.M.; García-Canseco, E.; López-Bonilla, O.R.; Galindo-Aldana, G.M.; Inzunza-González, E. Evaluation of Machine Learning Algorithms for Classification of EEG Signals. Technologies 2022, 10, 79. [Google Scholar] [CrossRef]
  69. Sabio, J.; Williams, N.S.; McArthur, G.M.; Badcock, N.A. A scoping review on the use of consumer-grade EEG devices for research. PLoS ONE 2024, 19, e0291186. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
Figure 1. Scheme of the BCI architecture and its parts.
Figure 2. Acquisition protocol for dataset A.
Figure 3. Electrodes’ placement, according to the 10/20 system, for both datasets: (a) dataset A; (b) dataset B.
Figure 4. Acquisition protocol for dataset B.
Table 1. Properties of brain activity acquisition methods.

| | EEG | MEG | NIRS | fMRI | ECoG | MEA | fTCD |
|---|---|---|---|---|---|---|---|
| Deployment | Non-invasive | Non-invasive | Non-invasive | Non-invasive | Invasive | Invasive | Non-invasive |
| Measured Activity | Electrical | Magnetic | Hemodynamic | Hemodynamic | Electrical | Electrical | Hemodynamic |
| Temporal Resolution | Good | Good | Low | Low | High | High | High |
| Spatial Resolution | Low | Low | Low | Good | Good | High | Low |
| Portability | High | Low | High | Low | High | High | High |
| Cost | Low | High | Low | High | High | High | |
Table 2. EEG headsets used in the literature.

| Article | EEG Headset | Principle |
|---|---|---|
| [18] | 12 Ag/Cl electrodes | ERP—P300 |
| [19] | NuAmps and 12 electrodes | ERP—P300 |
| [20] | NuAmps and 15 electrodes | ERP—P300 |
| [21] | gTec EEG (16 electrodes and g.USBamp amplifier) | ERP—P300 |
| [22] | 16-channel electrode cap | ERP—P300 |
| [23] | Biopac MP150 EEG system | ERP—P300 |
| [24] | gTec EEG (12 electrodes and g.USBamp amplifier) | ERP—P300 |
| [25] | Neuroscan (15 electrodes’ cap) | ERP—P300 |
| [26] | BioSemi ActiveTwo system, 32 channels | SSVEP |
| [27] | g.USBamp amplifier with g.Butterfly active electrodes | SSVEP |
| [28] | 8 gold electrodes connected to the g.USBamp amplifier | SSVEP |
| [29] | gTec EEG with g.USBamp amplifier | SSVEP |
| [30] | EEG cap and g.USBamp amplifier | SSVEP |
| [31] | BrainNet-36 with 12 channels | SSVEP |
| [32] | BrainNet BNT-36 with 3 channels | SSVEP |
| [33] | 6-channel EEG cap | SSVEP |
| [34] | NeuroSky MindSet | ERD/ERS |
| [35] | Grass Telefactor EEG Twin3 machine | ERD/ERS |
| [36] | G-TEC system with 5 Ag/AgCl electrodes | ERD/ERS |
| [37] | 8-channel EEG cap | ERD/ERS |
| [38] | 5 bipolar EEG channels and a g.tec amplifier | ERD/ERS |
| [39] | ActiveTwo 64-channel EEG system | ERD/ERS |
| [40] | Emotiv EPOC | ERD/ERS |
| [41] | Emotiv EPOC | ERD/ERS |
| [42] | Emotiv EPOC | ERD/ERS |
| [43] | Emotiv EPOC | ERD/ERS |
| [44] | EEG cap—15 electrodes | ERD/ERS and SSVEP |
| [45] | Gtec amplifier (15 channels) | ERD/ERS and SSVEP |
| [46] | g.BSamp amplifier (5 channels) | ERD/ERS and SSVEP |
| [47] | NuAmps device (15 channels) | ERD/ERS and ERP—P300 |
| [48,49] | NeuroSky | ERP—P300; eye blinking (EMG) |
| [5] | SYMPTOM amplifier with 10 electrodes | ERP—P300 and SSVEP |

ERP—event-related potential; ERD—event-related desynchronization; ERS—event-related synchronization; SSVEP—steady-state visual evoked potential; EMG—electromyography.
Table 3. Summary of different authors’ BCIs regarding the used EEG headset, the neuro-mechanism, the extracted features, the classification methods, the outputted commands, and accuracy.

| Article | EEG Headset | Principle | Features | Classifier | Control | Accuracy |
|---|---|---|---|---|---|---|
| [18] | 12 Ag/Cl electrodes | ERP—P300 | Signal averaging and standard deviation | 2-class Bayesian | L/R/F/B (45° or 90°)/S | 95% |
| [19] | NuAmps and 12 electrodes | ERP—P300 | Data vectors of concatenated epochs | BLDA (Bayesian) | 9 destinations | 89.6% |
| [20] | NuAmps and 15 electrodes | ERP—P300 | Raw signal | SVM | 7 locations, an ‘application button’ and lock | 90% |
| [21] | gTec EEG (16 electrodes and g.USBamp amplifier) | ERP—P300 | Moving average technique | LDA | 15 locations, L/R and validate selection | 94% |
| [22] | 16-channel electrode cap | ERP—P300 | Signal averaging | Linear classifier | 6 for the IW (not specified) | 92% |
| [23] | Biopac MP150 EEG system | ERP—P300 | Signal averaging | Linear classifier | F/B/L/R | — |
| [24] | gTec EEG (12 electrodes and g.USBamp amplifier) | ERP—P300 | Optimal statistical spatial filter | Binary Bayesian | F/B/L/R (45° or 90°)/S | 88% |
| [25] | Neuroscan (15 electrodes’ cap) | ERP—P300 | Signal averaging | SVM | 37 locations, validate or delete selection, stop and show extra locations | — |
| [26] | BioSemi ActiveTwo system, 32 channels | SSVEP | Peaks in the frequency magnitude | — | L/R | >95% |
| [27] | g.USBamp amplifier with g.Butterfly active electrodes | SSVEP | Frequency band power (PSD) | SVM | L/R/F/S | 95% |
| [28] | 8 gold electrodes connected to the g.USBamp amplifier | SSVEP | — | LDA | L/R/B/F/S | 90% |
| [29] | gTec EEG with g.USBamp amplifier | SSVEP | Frequency band power (PSD) | Threshold method (not specified) | L/R/B/F | 93.6% |
| [30] | EEG cap and g.USBamp amplifier | SSVEP | CCA | Bayesian | F/L/R/turn on/off | 87% |
| [31] | BrainNet-36 with 12 channels | SSVEP | Frequency band power (PSD) | Decision trees | L/R/F/S | Qualitative evaluation |
| [32] | BrainNet BNT-36 with 3 channels | SSVEP | Frequency band power (PSD) | Statistical maximum | L/R/F/B | 95% |
| [33] | 6-channel EEG cap | SSVEP | FFT and CCA | CCA coefficient | L/R/F/B/S | >90% |
| [34] | NeuroSky MindSet | ERD/ERS | Frequency band power (PSD) | NN | Game | 91% |
| [35] | Grass Telefactor EEG Twin3 machine | ERD/ERS | Coefficients from the wavelets | Radial basis function NN | L/R/F/B/rest | 100% |
| [36] | G-TEC system with 5 Ag/AgCl electrodes | ERD/ERS | Common spatial frequency subspace decomposition (CSFSD) | SVM | L/R/F | 91–95% |
| [37] | 8-channel EEG cap | ERD/ERS | Mean, zero-crossing and energy from different levels of the DWT | ANN | L/R/F/S | 91% |
| [38] | 5 bipolar EEG channels and a g.tec amplifier | ERD/ERS | Logarithmic frequency band power | LDA | L/R | 75% |
| [39] | ActiveTwo 64-channel EEG system | ERD/ERS | Frequency band power (PSD) and CSP | SVM | Exoskeleton control: LH/LF/RH/RF | 84% |
| [40] | Emotiv EPOC | ERD/ERS | PCA and average power of the wavelets’ sub-bands | NN w/BP | L/R/F/B | 91% |
| [42] | Emotiv EPOC | ERD/ERS | — | Emotiv program | L/R/F/S | 70% |
| [56] | Emotiv EPOC | ERD/ERS | Frequency components | SVM | L/R/F/B/S | 100% |
| [56] | Emotiv EPOC | ERD/ERS | Frequency components | NN | L/R/F/B/S | 100% |
| [56] | Emotiv EPOC | ERD/ERS | Frequency components | Bayesian | L/R/F/B/S | 94% |
| [56] | Emotiv EPOC | ERD/ERS | Frequency components | Decision trees | L/R/F/B/S | 74% |
| [42] | Emotiv EPOC | ERD/ERS | Frequency band power (PSD) | LDA | L/R | 70% |
| [43] | Emotiv EPOC | ERD/ERS | Metrics from the EEG signal | Decision trees | L/R | 82% |
| [57] | Emotiv EPOC | ERD/ERS | CSP | SVM | L/R | 60% |
| [58] | Emotiv EPOC | ERD/ERS | — | LDA | L/R | 60% |
| [59] | Emotiv EPOC | ERD/ERS | PSD, Hjorth parameters, CWT and DWT—PCA for feature reduction | K-NN | L/R | 86–92% |
| [60] | Emotiv EPOC | ERD/ERS | Energy distribution from the DWT | SVM | L/R/T/N | 97% |
| [61] | Emotiv EPOC | ERD/ERS | CSP | LDA | L/R | 68% |
| [61] | Emotiv EPOC | ERD/ERS | CSP | SVM | L/R | 68% |
| [61] | Emotiv EPOC | ERD/ERS | CSP | Nu-SVC RBF kernel | L/R | 68% |
| [44] | EEG cap—15 electrodes | ERD/ERS (L/R) and SSVEP | CSP (ERD/ERS); CCA (SSVEP) | SVM (ERD/ERS); canonical correlation coefficient (SSVEP) | L/R/A/DA, maintain a uniform velocity and turn on/off | — |
| [45] | Gtec amplifier (15 channels) | ERD/ERS (L/R) and SSVEP ((de)accelerate) | CSP (ERD/ERS); CCA (SSVEP) | SVM | L/R/A/DA | — |
| [46] | g.BSamp amplifier (5 channels) | ERD/ERS and SSVEP | Frequency band power (PSD) | LDA | L/R | 81% |
| [47,48] | NuAmps device (15 channels) | ERD/ERS and ERP—P300 | CSP | LDA | L/R/A/DA | 100% |
| [48,49] | NeuroSky | ERP—P300 and eye blinking (EMG) | Changes in the level | Threshold | L/R/F/B/S | — |
| [5] | SYMPTOM amplifier with 10 electrodes | ERP—P300 and SSVEP | PCA (ERP); PSD (SSVEP) | LDA | ERP—9 destinations; SSVEP—confirm | 99% |

L—left; R—right; F—forward; B—backward; S—stop; A—accelerate; DA—decelerate; H—hand; F (in LH/LF/RH/RF)—foot; T—tongue; N—no imaging.
Table 4. Testing a combination of classifiers.

| Number of Classifiers | Type |
|---|---|
| 1 | Non-probabilistic |
| 1 | Probabilistic—F1 |
| 2 | Non-probabilistic |
| 2 | Probabilistic—F1 |
| 2—Ensemble | Voting Hard |
| 2—Ensemble | Voting Soft |
| 2—Ensemble | AdaBoost |
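The combinations in Table 4 can be sketched with scikit-learn (an assumption: the paper does not state its toolkit, and the synthetic data and base classifiers here are illustrative only). Hard voting takes the majority of predicted labels, soft voting averages predicted class probabilities, and AdaBoost builds a boosted ensemble:

```python
# Illustrative sketch of the Table 4 ensemble combinations using
# scikit-learn. The synthetic data and base-classifier choices are
# assumptions, not the paper's actual setup.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import AdaBoostClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=22, random_state=0)

# Two probabilistic base classifiers, as required for soft voting.
base = [("lda", LinearDiscriminantAnalysis()),
        ("lr", LogisticRegression(max_iter=1000))]

ensembles = {
    "voting hard": VotingClassifier(base, voting="hard"),  # majority of labels
    "voting soft": VotingClassifier(base, voting="soft"),  # mean of probabilities
    "adaboost": AdaBoostClassifier(n_estimators=50, random_state=0),
}
for name, clf in ensembles.items():
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {score:.3f}")
```

Note that `AdaBoostClassifier` defaults to boosting decision stumps; reproducing the paper's setup would require passing its probabilistic base classifier explicitly.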
Table 5. Optimised hyper-parameters for the different classifiers.

| Supervised Classifier | Hyper-Parameter | Grid Search Space | Description |
|---|---|---|---|
| LR | C | logspace −4 to 6, step size 1 | Regularisation parameter, which has a significant effect on the generalisation performance of the classifier |
| K-NN | n_neighbors | 1 to 50, step size 10 | Number of neighbours to use |
| SVM | kernel | rbf—Gaussian kernel function | Function used to compute the kernel matrix for classification |
| SVM | gamma | logspace −3 to 6, step size 1 | Regularisation parameter used in the RBF kernel, which has a significant impact on the performance of the kernel |
| SVM | C | logspace −3 to 7, step size 1 | Regularisation parameter, which has a significant effect on the generalisation performance of the classifier |
| DT | max_depth | 1 to 20, step size 2 | Maximum depth of the tree; if None, nodes are expanded until all leaves are pure or contain fewer than min_samples_split samples |
| DT | min_samples_split | 10 to 500, step size 20 | Minimum number of samples required to split a node |
| DT | min_samples_leaf | 1 to 10, step size 2 | Minimum number of samples required in a newly created leaf after the split |
| NN | hidden_layers | 5 to 55, step 10 | The i-th element represents the number of neurons in the i-th hidden layer |
| NN | activation | relu—rectified linear unit function | Activation function for the hidden layer |
| NN | solver | adam—stochastic gradient-based optimiser | Solver for weight optimisation |
| NN | learning_rate | constant | Learning rate schedule for weight updates; if ‘constant’, the learning rate is given by learning_rate_init |
| NN | learning_rate_init | logspace −4 to 4, step 1 | Initial learning rate; controls the step size in updating the weights |
| NN | alpha | logspace −4 to 4, step 1 | L2 penalty (regularisation term) parameter |
| RF | n_estimators | 10 to 100, step 20 | Number of trees in the forest |
| RF | max_depth | None or 2 to 10, step size 2 | Maximum depth of the tree; if None, nodes are expanded until all leaves are pure or contain fewer than min_samples_split samples |
| RF | min_samples_split | 10 to 500, step size 20 | Minimum number of samples required to split a node |
| RF | min_samples_leaf | 1 to 10, step size 2 | Minimum number of samples required in a newly created leaf after the split |
LR—logistic regression; K-NN—K-nearest neighbour; SVM—support vector machine; DT—decision tree; NN—neural networks; RF—random forest.
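The grids in Table 5 map directly onto scikit-learn's `GridSearchCV`. A hedged sketch for two of the classifiers (LR and K-NN) follows; the scoring metric, cross-validation scheme, and synthetic data are assumptions, while the search spaces are taken from Table 5:

```python
# Sketch of the Table 5 hyper-parameter search with GridSearchCV.
# Only the grids come from the paper; everything else is illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

searches = {
    # LR: C in logspace -4 to 6, step size 1 (11 values).
    "LR": GridSearchCV(LogisticRegression(max_iter=1000),
                       {"C": np.logspace(-4, 6, 11)}, cv=5),
    # K-NN: n_neighbors from 1 to 50, step size 10.
    "K-NN": GridSearchCV(KNeighborsClassifier(),
                         {"n_neighbors": list(range(1, 51, 10))}, cv=5),
}
for name, gs in searches.items():
    gs.fit(X, y)
    print(name, gs.best_params_, f"{gs.best_score_:.3f}")
```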
Table 6. Labels of the classifiers.

| Number | Name |
|---|---|
| 0 | Gaussian Naive Bayes (GNB) |
| 1 | Linear discriminant analysis (LDA) |
| 2 | Linear support vector machines (LSVMs) |
| 3 | Logistic regression (LR) |
| 4 | K-nearest neighbours (K-NNs) |
| 5 | Kernel support vector machines (KSVMs) |
| 6 | Decision trees (DTs) |
| 7 | Neural networks (NNs) |
| 8 | Random forest (RF) |
Table 7. F1-score, kappa, and FP rate for dataset A and FBCSP I, using the first set of classifiers.

| Metric | Combination | Average | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| F1-score | 1 Classifier | 0.729 | 0.763 | 0.560 | 0.906 | 0.708 | 0.615 | 0.601 | 0.903 | 0.774 | 0.733 |
| | Prob. F1 | 0.729 | 0.797 | 0.663 | 0.883 | 0.714 | 0.603 | 0.575 | 0.896 | 0.662 | 0.768 |
| | 2 Classifiers | 0.712 | 0.782 | 0.496 | 0.920 | 0.679 | 0.610 | 0.546 | 0.848 | 0.776 | 0.751 |
| | Prob. F1 | 0.738 | 0.792 | 0.623 | 0.932 | 0.689 | 0.634 | 0.567 | 0.874 | 0.768 | 0.760 |
| | Soft | 0.730 | 0.843 | 0.597 | 0.950 | 0.692 | 0.563 | 0.592 | 0.848 | 0.749 | 0.735 |
| | Hard | 0.748 | 0.801 | 0.632 | 0.928 | 0.719 | 0.647 | 0.615 | 0.869 | 0.759 | 0.758 |
| | Ada | 0.682 | 0.694 | 0.588 | 0.856 | 0.616 | 0.551 | 0.509 | 0.894 | 0.676 | 0.750 |
| | Best | 0.765 | 0.843 | 0.663 | 0.950 | 0.719 | 0.647 | 0.615 | 0.903 | 0.776 | 0.768 |
| Kappa | 1 Classifier | 0.587 | 0.632 | 0.333 | 0.854 | 0.556 | 0.410 | 0.396 | 0.847 | 0.660 | 0.597 |
| | Prob. F1 | 0.573 | 0.694 | 0.493 | 0.819 | 0.556 | 0.299 | 0.354 | 0.840 | 0.472 | 0.632 |
| | 2 Classifiers | 0.583 | 0.653 | 0.236 | 0.875 | 0.500 | 0.403 | 0.546 | 0.764 | 0.660 | 0.611 |
| | Prob. F1 | 0.590 | 0.674 | 0.438 | 0.896 | 0.535 | 0.382 | 0.347 | 0.792 | 0.646 | 0.604 |
| | Soft | 0.586 | 0.764 | 0.389 | 0.924 | 0.521 | 0.347 | 0.354 | 0.771 | 0.618 | 0.590 |
| | Hard | 0.607 | 0.701 | 0.361 | 0.889 | 0.569 | 0.472 | 0.417 | 0.785 | 0.639 | 0.632 |
| | Ada | 0.535 | 0.542 | 0.493 | 0.785 | 0.424 | 0.326 | 0.264 | 0.840 | 0.514 | 0.625 |
| | Best | 0.656 | 0.764 | 0.493 | 0.924 | 0.569 | 0.472 | 0.546 | 0.847 | 0.660 | 0.632 |
| | Winner | 0.570 | 0.680 | 0.420 | 0.750 | 0.480 | 0.400 | 0.270 | 0.770 | 0.750 | 0.610 |
| FP rate | 1 Classifier | 0.295 | 0.206 | 0.475 | 0.092 | 0.355 | 0.565 | 0.527 | 0.031 | 0.228 | 0.177 |
| | Prob. F1 | 0.271 | 0.134 | 0.294 | 0.111 | 0.178 | 0.878 | 0.423 | 0.123 | 0.200 | 0.098 |
| | 2 Classifiers | 0.264 | 0.090 | 0.613 | 0.071 | 0.250 | 0.415 | 0.607 | 0.066 | 0.144 | 0.119 |
| | Prob. F1 | 0.195 | 0.095 | 0.415 | 0.045 | 0.282 | 0.165 | 0.516 | 0.027 | 0.145 | 0.063 |
| | Soft | 0.334 | 0.126 | 0.578 | 0.044 | 0.361 | 0.582 | 0.699 | 0.098 | 0.280 | 0.242 |
| | Hard | 0.271 | 0.145 | 0.734 | 0.060 | 0.188 | 0.414 | 0.500 | 0.038 | 0.189 | 0.172 |
| | Ada | 0.386 | 0.307 | 0.685 | 0.157 | 0.474 | 0.571 | 0.755 | 0.109 | 0.260 | 0.154 |
| | Best | 0.159 | 0.090 | 0.294 | 0.044 | 0.178 | 0.165 | 0.423 | 0.027 | 0.144 | 0.063 |
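The metrics reported in Tables 7 through 15 (F1-score, Cohen's kappa, and a false-positive measure) can be computed with scikit-learn. The sketch below uses toy labels and one common multi-class convention for the FP rate; the paper's exact averaging and FP definition are not stated here (the dataset B tables contain FP values above 1, so the authors' measure evidently differs from a plain per-class rate):

```python
# Minimal sketch of the evaluation metrics used in the result tables,
# computed with scikit-learn on toy labels. The FP-rate convention shown
# (per-class FP over negatives, averaged) is an assumption.
from sklearn.metrics import cohen_kappa_score, confusion_matrix, f1_score

y_true = [0, 0, 1, 1, 2, 2, 0, 1, 2]   # e.g. left / right / neutral cues
y_pred = [0, 1, 1, 1, 2, 0, 0, 1, 2]

f1 = f1_score(y_true, y_pred, average="macro")
kappa = cohen_kappa_score(y_true, y_pred)

# FP per class = column sum minus diagonal; negatives = total minus row sum.
cm = confusion_matrix(y_true, y_pred)
fp = cm.sum(axis=0) - cm.diagonal()
neg = cm.sum() - cm.sum(axis=1)
fp_rate = (fp / neg).mean()
print(round(f1, 3), round(kappa, 3), round(fp_rate, 3))
```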
Table 8. Best classifiers, from the first set, for each combination for dataset A and FBCSP I.

| Combination | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| 1 Classifier | 0 | 0 | 3 | 3 | 3 | 3 | 3 | 3 | 3 |
| Prob. F1 | 0 | 3 | 3 | 3 | 3 | 0 | 0 | 1 | 3 |
| 2 Classifiers | 3,0 | 3,1 | 3,1 | 1,3 | 0,1 | 3,0 | 0,1 | 3,0 | 3,1 |
| Prob. F1 | 0,1 | 3,1 | 0,3 | 3,1 | 0,3 | 0,3 | 3,0 | 1,3 | 1,0 |
| Soft | 0,3 | 3,1 | 3,1 | 3,0 | 3,1 | 0,3 | 0,3 | 3,0 | 1,3 |
| Hard | 3,0 | 0,1 | 0,3 | 3,0 | 3,0 | 3,0 | 3,0 | 1,3 | 3,1 |

0—Gaussian Naïve Bayesian; 1—linear discriminant analysis; 3—logistic regression. For the two-classifier combinations, each cell lists the pair of combined classifiers.
Table 9. F1-score, kappa, and FP rate for dataset B and FBCSP I, using the first set of classifiers.

| Metric | Combination | Average | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| F1-score | 1 Classifier | 0.444 | 0.507 | 0.386 | 0.636 | 0.400 | 0.375 | 0.405 | 0.456 | 0.390 | 0.439 |
| | Prob. F1 | 0.438 | 0.470 | 0.416 | 0.662 | 0.316 | 0.492 | 0.334 | 0.437 | 0.361 | 0.452 |
| | 2 Classifiers | 0.448 | 0.442 | 0.456 | 0.622 | 0.391 | 0.419 | 0.361 | 0.442 | 0.404 | 0.496 |
| | Prob. F1 | 0.427 | 0.427 | 0.353 | 0.640 | 0.444 | 0.403 | 0.333 | 0.442 | 0.303 | 0.496 |
| | Soft | 0.435 | 0.485 | 0.358 | 0.592 | 0.430 | 0.455 | 0.401 | 0.406 | 0.330 | 0.453 |
| | Hard | 0.484 | 0.597 | 0.401 | 0.626 | 0.410 | 0.547 | 0.386 | 0.470 | 0.483 | 0.440 |
| | Ada | 0.428 | 0.417 | 0.375 | 0.625 | 0.392 | 0.433 | 0.383 | 0.358 | 0.375 | 0.492 |
| | Best | 0.507 | 0.597 | 0.456 | 0.662 | 0.444 | 0.547 | 0.405 | 0.470 | 0.483 | 0.496 |
| Kappa | 1 Classifier | 0.160 | 0.262 | 0.075 | 0.450 | 0.075 | 0.063 | 0.100 | 0.175 | 0.075 | 0.162 |
| | Prob. F1 | 0.150 | 0.175 | 0.000 | 0.488 | 0.075 | 0.250 | 0.012 | 0.125 | 0.050 | 0.175 |
| | 2 Classifiers | 0.133 | 0.162 | 0.188 | 0.438 | 0.037 | 0.125 | 0.050 | 0.000 | 0.137 | 0.063 |
| | Prob. F1 | 0.121 | 0.137 | −0.012 | 0.438 | 0.037 | 0.150 | 0.000 | 0.137 | −0.038 | 0.238 |
| | Soft | 0.146 | 0.225 | 0.037 | 0.387 | 0.137 | 0.175 | 0.113 | 0.075 | −0.012 | 0.175 |
| | Hard | 0.218 | 0.387 | 0.100 | 0.438 | 0.088 | 0.325 | 0.063 | 0.200 | 0.200 | 0.162 |
| | Ada | 0.142 | 0.125 | 0.063 | 0.438 | 0.088 | 0.150 | 0.075 | 0.037 | 0.063 | 0.238 |
| | Best | 0.253 | 0.387 | 0.188 | 0.488 | 0.137 | 0.325 | 0.113 | 0.200 | 0.200 | 0.238 |
| FP rate | 1 Classifier | 0.908 | 0.672 | 1.239 | 0.355 | 1.370 | 0.867 | 0.646 | 0.852 | 1.435 | 0.736 |
| | Prob. F1 | 1.202 | 1.056 | 2.000 | 0.380 | 1.587 | 0.767 | 1.610 | 1.240 | 1.273 | 0.907 |
| | 2 Classifiers | 0.796 | 0.811 | 0.727 | 0.373 | 0.744 | 0.680 | 0.955 | 1.250 | 1.000 | 0.622 |
| | Prob. F1 | 0.888 | 0.804 | 1.949 | 0.240 | 0.209 | 0.596 | 0.975 | 1.176 | 1.486 | 0.559 |
| | Soft | 0.918 | 0.793 | 1.116 | 0.451 | 1.059 | 0.611 | 0.857 | 1.043 | 1.590 | 0.741 |
| | Hard | 0.730 | 0.521 | 1.125 | 0.440 | 0.511 | 0.439 | 1.156 | 0.839 | 0.821 | 0.717 |
| | Ada | 0.916 | 0.720 | 1.222 | 0.373 | 1.426 | 0.692 | 1.065 | 1.302 | 1.000 | 0.441 |
| | Best | 0.543 | 0.521 | 0.727 | 0.240 | 0.209 | 0.439 | 0.646 | 0.839 | 0.821 | 0.441 |
Table 10. Best classifiers, from the first set, for each combination for dataset B and FBCSP I.

| Combination | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| 1 Classifier | 3 | 1 | 3 | 3 | 3 | 3 | 1 | 3 | 3 |
| Prob. F1 | 1 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 |
| 2 Classifiers | 1,3 | 3,1 | 3,1 | 3,1 | 3,1 | 3,1 | 3,1 | 3,1 | 1,3 |
| Prob. F1 | 3,1 | 3,1 | 3,1 | 0,3 | 3,1 | 3,1 | 3,1 | 3,1 | 3,1 |
| Soft | 3,1 | 3,1 | 1,3 | 3,1 | 3,1 | 1,3 | 1,3 | 1,3 | 3,1 |
| Hard | 3,1 | 1,3 | 3,1 | 3,1 | 3,1 | 3,1 | 3,1 | 3,1 | 1,3 |

0—Gaussian Naïve Bayesian; 1—linear discriminant analysis; 3—logistic regression. For the two-classifier combinations, each cell lists the pair of combined classifiers.
Table 11. F1-score, kappa, and FP rate for dataset A and FBCSP II, using the first set of classifiers.

| Metric | Combination | Average | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| F1-score | 1 Classifier | 0.793 | 0.856 | 0.697 | 0.950 | 0.838 | 0.606 | 0.668 | 0.898 | 0.782 | 0.846 |
| | Prob. F1 | 0.792 | 0.854 | 0.643 | 0.928 | 0.849 | 0.629 | 0.663 | 0.923 | 0.788 | 0.855 |
| | 2 Classifiers | 0.785 | 0.840 | 0.643 | 0.933 | 0.840 | 0.604 | 0.686 | 0.859 | 0.842 | 0.817 |
| | Prob. F1 | 0.783 | 0.840 | 0.643 | 0.930 | 0.840 | 0.644 | 0.686 | 0.859 | 0.793 | 0.817 |
| | Soft | 0.787 | 0.878 | 0.682 | 0.921 | 0.825 | 0.662 | 0.608 | 0.888 | 0.805 | 0.818 |
| | Hard | 0.797 | 0.829 | 0.698 | 0.949 | 0.833 | 0.618 | 0.679 | 0.924 | 0.774 | 0.871 |
| | Ada | 0.724 | 0.801 | 0.620 | 0.894 | 0.769 | 0.616 | 0.542 | 0.819 | 0.745 | 0.713 |
| | Best | 0.818 | 0.878 | 0.698 | 0.950 | 0.849 | 0.662 | 0.686 | 0.924 | 0.842 | 0.871 |
| Kappa | 1 Classifier | 0.683 | 0.785 | 0.535 | 0.924 | 0.757 | 0.382 | 0.500 | 0.833 | 0.667 | 0.764 |
| | Prob. F1 | 0.685 | 0.764 | 0.465 | 0.928 | 0.764 | 0.431 | 0.479 | 0.882 | 0.674 | 0.778 |
| | 2 Classifiers | 0.596 | 0.681 | 0.361 | 0.896 | 0.611 | 0.292 | 0.382 | 0.639 | 0.764 | 0.743 |
| | Prob. F1 | 0.658 | 0.757 | 0.458 | 0.889 | 0.750 | 0.396 | 0.521 | 0.757 | 0.681 | 0.715 |
| | Soft | 0.623 | 0.757 | 0.472 | 0.875 | 0.736 | 0.493 | 0.306 | 0.826 | 0.701 | 0.438 |
| | Hard | 0.693 | 0.743 | 0.542 | 0.924 | 0.750 | 0.424 | 0.514 | 0.882 | 0.653 | 0.806 |
| | Ada | 0.586 | 0.701 | 0.431 | 0.840 | 0.653 | 0.424 | 0.313 | 0.729 | 0.618 | 0.569 |
| | Best | 0.720 | 0.785 | 0.542 | 0.928 | 0.764 | 0.493 | 0.521 | 0.882 | 0.764 | 0.806 |
| | Winner | 0.570 | 0.680 | 0.420 | 0.750 | 0.480 | 0.400 | 0.270 | 0.770 | 0.750 | 0.610 |
| FP rate | 1 Classifier | 0.218 | 0.108 | 0.383 | 0.029 | 0.127 | 0.646 | 0.347 | 0.016 | 0.238 | 0.071 |
| | Prob. F1 | 0.161 | 0.038 | 0.317 | 0.040 | 0.066 | 0.403 | 0.255 | 0.025 | 0.237 | 0.065 |
| | 2 Classifiers | 0.192 | 0.153 | 0.532 | 0.035 | 0.131 | 0.395 | 0.276 | 0.024 | 0.132 | 0.050 |
| | Prob. F1 | 0.150 | 0.083 | 0.370 | 0.080 | 0.067 | 0.186 | 0.279 | 0.011 | 0.194 | 0.080 |
| | Soft | 0.238 | 0.127 | 0.529 | 0.081 | 0.152 | 0.336 | 0.581 | 0.042 | 0.197 | 0.097 |
| | Hard | 0.195 | 0.145 | 0.340 | 0.044 | 0.156 | 0.429 | 0.295 | 0.020 | 0.247 | 0.080 |
| | Ada | 0.260 | 0.116 | 0.463 | 0.109 | 0.229 | 0.474 | 0.624 | 0.068 | 0.118 | 0.136 |
| | Best | 0.121 | 0.038 | 0.317 | 0.029 | 0.066 | 0.186 | 0.255 | 0.011 | 0.132 | 0.050 |
Table 12. Best classifiers, from the first set, for each combination for dataset A and FBCSP II.

| Combination | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| 1 Classifier | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 |
| Prob. F1 | 3 | 3 | 1 | 3 | 3 | 3 | 3 | 3 | 3 |
| 2 Classifiers | 3,1 | 3,1 | 3,1 | 3,1 | 3,1 | 3,1 | 3,1 | 3,1 | 3,1 |
| Prob. F1 | 3,0 | 3,1 | 1,3 | 3,1 | 3,1 | 3,1 | 3,1 | 3,1 | 3,1 |
| Soft | 0,3 | 3,0 | 3,1 | 3,0 | 3,0 | 3,1 | 3,1 | 3,0 | 3,0 |
| Hard | 3,0 | 3,1 | 3,1 | 3,0 | 3,0 | 3,1 | 3,1 | 3,0 | 3,1 |

0—Gaussian Naïve Bayesian; 1—linear discriminant analysis; 3—logistic regression. For the two-classifier combinations, each cell lists the pair of combined classifiers.
Table 13. F1-score, kappa, and FP rate for dataset B and FBCSP II, using the first set of classifiers.

| Metric | Combination | Average | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| F1-score | 1 Classifier | 0.461 | 0.612 | 0.451 | 0.712 | 0.438 | 0.515 | 0.479 | 0.472 | 0.000 | 0.471 |
| | Prob. F1 | 0.478 | 0.556 | 0.431 | 0.710 | 0.394 | 0.506 | 0.467 | 0.456 | 0.316 | 0.463 |
| | 2 Classifiers | 0.454 | 0.606 | 0.435 | 0.697 | 0.431 | 0.527 | 0.471 | 0.468 | 0.000 | 0.455 |
| | Prob. F1 | 0.457 | 0.605 | 0.433 | 0.700 | 0.437 | 0.506 | 0.490 | 0.456 | 0.000 | 0.489 |
| | Soft | 0.497 | 0.624 | 0.425 | 0.716 | 0.414 | 0.526 | 0.481 | 0.481 | 0.337 | 0.471 |
| | Hard | 0.486 | 0.612 | 0.433 | 0.705 | 0.408 | 0.515 | 0.422 | 0.472 | 0.332 | 0.471 |
| | Ada | 0.504 | 0.517 | 0.417 | 0.742 | 0.475 | 0.500 | 0.508 | 0.467 | 0.342 | 0.567 |
| | Best | 0.524 | 0.624 | 0.451 | 0.742 | 0.475 | 0.527 | 0.508 | 0.481 | 0.342 | 0.567 |
| Kappa | 1 Classifier | 0.243 | 0.400 | 0.175 | 0.550 | 0.137 | 0.275 | 0.225 | 0.200 | 0.000 | 0.225 |
| | Prob. F1 | 0.200 | 0.262 | 0.100 | 0.513 | 0.088 | 0.225 | 0.200 | 0.188 | 0.000 | 0.225 |
| | 2 Classifiers | 0.232 | 0.387 | 0.150 | 0.513 | 0.125 | 0.288 | 0.213 | 0.200 | 0.012 | 0.200 |
| | Prob. F1 | 0.233 | 0.375 | 0.150 | 0.513 | 0.137 | 0.225 | 0.238 | 0.188 | 0.025 | 0.250 |
| | Soft | 0.244 | 0.425 | 0.137 | 0.563 | 0.113 | 0.288 | 0.225 | 0.213 | 0.013 | 0.225 |
| | Hard | 0.224 | 0.400 | 0.150 | 0.538 | 0.100 | 0.275 | 0.125 | 0.200 | 0.000 | 0.225 |
| | Ada | 0.256 | 0.275 | 0.125 | 0.613 | 0.213 | 0.250 | 0.262 | 0.200 | 0.012 | 0.350 |
| | Best | 0.285 | 0.425 | 0.175 | 0.613 | 0.213 | 0.288 | 0.262 | 0.213 | 0.025 | 0.350 |
| FP rate | 1 Classifier | 0.515 | 0.306 | 0.741 | 0.167 | 0.51 | 0.597 | 0.69 | 0.804 | 0.05 | 0.776 |
| | Prob. F1 | 0.459 | 0.361 | 0.563 | 0.136 | 0.511 | 0.241 | 0.804 | 0.727 | 0.15 | 0.638 |
| | 2 Classifiers | 0.49 | 0.296 | 0.731 | 0.148 | 0.5 | 0.524 | 0.667 | 0.714 | 0.024 | 0.804 |
| | Prob. F1 | 0.444 | 0.373 | 0.563 | 0.173 | 0.549 | 0.241 | 0.695 | 0.618 | 0.071 | 0.717 |
| | Soft | 0.544 | 0.311 | 0.882 | 0.188 | 0.592 | 0.556 | 0.724 | 0.772 | 0.098 | 0.776 |
| | Hard | 0.561 | 0.306 | 0.827 | 0.169 | 0.646 | 0.597 | 0.8 | 0.804 | 0.125 | 0.776 |
| | Ada | 0.513 | 0.468 | 0.68 | 0.124 | 0.404 | 0.55 | 0.77 | 0.804 | 0.293 | 0.529 |
| | Best | 0.385 | 0.296 | 0.563 | 0.124 | 0.404 | 0.241 | 0.667 | 0.618 | 0.024 | 0.529 |
Table 14. Best classifiers, from the first set, for each combination for dataset B and FBCSP II.

| Combination | S1 | S2 | S3 | S4 | S5 | S6 | S7 | S8 | S9 |
|---|---|---|---|---|---|---|---|---|---|
| 1 Classifier | 3 | 1 | 2 | 3 | 1 | 3 | 1 | 1 | 3 |
| Prob. F1 | 3 | 1 | 3 | 3 | 3 | 3 | 1 | 1 | 3 |
| 2 Classifiers | 3,1 | 1,3 | 3,1 | 3,1 | 1,3 | 3,1 | 1,3 | 1,3 | 3,1 |
| Prob. F1 | 3,1 | 1,3 | 3,1 | 3,1 | 3,1 | 3,1 | 1,3 | 1,3 | 3,1 |
| Soft | 3,1 | 1,3 | 3,1 | 3,1 | 1,3 | 3,1 | 1,3 | 1,3 | 3,1 |
| Hard | 3,1 | 1,3 | 3,1 | 3,1 | 1,3 | 1,3 | 1,3 | 1,3 | 3,1 |

1—Linear discriminant analysis; 2—linear support vector machines; 3—logistic regression. For the two-classifier combinations, each cell lists the pair of combined classifiers.
Table 15. F1-score, kappa, and FP rate for dataset B and FBCSP II, using both sets of classifiers.

| Metric | Combination | Average | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| F1-score | 1 Classifier | 0.651 | 0.747 | 0.595 | 0.911 | 0.604 | 0.660 | 0.706 | 0.604 | 0.517 | 0.515 |
| | Prob. F1 | 0.648 | 0.718 | 0.631 | 0.912 | 0.628 | 0.667 | 0.737 | 0.642 | 0.450 | 0.447 |
| | 2 Classifiers | 0.588 | 0.712 | 0.586 | 0.773 | 0.582 | 0.601 | 0.613 | 0.545 | 0.460 | 0.418 |
| | Prob. F1 | 0.599 | 0.712 | 0.560 | 0.771 | 0.619 | 0.650 | 0.658 | 0.623 | 0.374 | 0.418 |
| | Soft | 0.646 | 0.758 | 0.626 | 0.892 | 0.589 | 0.644 | 0.678 | 0.622 | 0.485 | 0.519 |
| | Hard | 0.651 | 0.747 | 0.595 | 0.911 | 0.604 | 0.660 | 0.706 | 0.604 | 0.517 | 0.515 |
| | Ada | 0.668 | 0.758 | 0.631 | 0.912 | 0.628 | 0.667 | 0.737 | 0.642 | 0.517 | 0.519 |
| | Best | 0.651 | 0.747 | 0.595 | 0.911 | 0.604 | 0.660 | 0.706 | 0.604 | 0.517 | 0.515 |
| Kappa | 1 Classifier | 0.426 | 0.463 | 0.375 | 0.863 | 0.387 | 0.488 | 0.525 | 0.387 | 0.125 | 0.225 |
| | Prob. F1 | 0.433 | 0.625 | 0.175 | 0.863 | 0.425 | 0.488 | 0.563 | 0.463 | 0.100 | 0.200 |
| | 2 Classifiers | 0.359 | 0.550 | 0.313 | 0.625 | 0.400 | 0.390 | 0.413 | 0.325 | 0.050 | 0.162 |
| | Prob. F1 | 0.413 | 0.575 | 0.325 | 0.638 | 0.413 | 0.475 | 0.488 | 0.438 | 0.175 | 0.188 |
| | Soft | 0.447 | 0.625 | 0.450 | 0.838 | 0.375 | 0.463 | 0.450 | 0.425 | 0.150 | 0.250 |
| | Hard | 0.443 | 0.600 | 0.375 | 0.863 | 0.387 | 0.488 | 0.538 | 0.387 | 0.125 | 0.225 |
| | Ada | 0.478 | 0.625 | 0.450 | 0.863 | 0.425 | 0.488 | 0.563 | 0.463 | 0.175 | 0.250 |
| | Best | 0.426 | 0.463 | 0.375 | 0.863 | 0.387 | 0.488 | 0.525 | 0.387 | 0.125 | 0.225 |
| FP rate | 1 Classifier | 0.471 | 0.519 | 0.314 | 0.101 | 0.690 | 0.468 | 0.422 | 0.648 | 0.080 | 1.000 |
| | Prob. F1 | 0.411 | 0.322 | 0.111 | 0.101 | 0.595 | 0.506 | 0.400 | 0.506 | 0.125 | 1.036 |
| | 2 Classifiers | 0.314 | 0.190 | 0.200 | 0.078 | 0.486 | 0.178 | 0.288 | 0.561 | 0.068 | 0.774 |
| | Prob. F1 | 0.399 | 0.370 | 0.364 | 0.110 | 0.534 | 0.397 | 0.466 | 0.440 | 0.071 | 0.836 |
| | Soft | 0.471 | 0.300 | 0.461 | 0.121 | 0.700 | 0.506 | 0.500 | 0.581 | 0.135 | 0.933 |
| | Hard | 0.471 | 0.519 | 0.314 | 0.101 | 0.690 | 0.468 | 0.422 | 0.648 | 0.080 | 1.000 |
| | Ada | 0.290 | 0.190 | 0.111 | 0.078 | 0.486 | 0.178 | 0.288 | 0.440 | 0.068 | 0.774 |
| | Best | 0.471 | 0.519 | 0.314 | 0.101 | 0.690 | 0.468 | 0.422 | 0.648 | 0.080 | 1.000 |
Table 16. Best classifiers, from both sets, for each combination for dataset B and FBCSP II.

| Combination | I1 | I2 | I3 | I4 | I5 | I6 | I7 | I8 | I9 |
|---|---|---|---|---|---|---|---|---|---|
| 1 Classifier | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 |
| Prob. F1 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 |
| 2 Classifiers | 5,7 | 5,7 | 5,4 | 5,7 | 5,7 | 5,7 | 5,7 | 5,7 | 5,7 |
| Prob. F1 | 5,7 | 5,4 | 5,4 | 5,7 | 5,7 | 5,7 | 5,7 | 5,4 | 5,7 |
| Soft | 5,7 | 5,4 | 5,4 | 5,8 | 5,7 | 5,7 | 5,7 | 5,4 | 5,4 |
| Hard | 5,7 | 5,7 | 5,4 | 5,7 | 5,7 | 5,7 | 5,7 | 5,4 | 5,7 |

4—K-nearest neighbours; 5—kernel support vector machines; 7—neural networks; 8—random forest. For the two-classifier combinations, each cell lists the pair of combined classifiers (I—individual).
Table 17. Cues and respective outputs from subject 3’s real-time application.

| MI Cue | Output | Majority | % |
|---|---|---|---|
| N | 2112220022 | 2 | 60% |
| R | 1211111111 | 1 | 90% |
| R | 1101211001 | 1 | 60% |
| N | 2222120022 | 2 | 70% |
| N | 2022222002 | 2 | 70% |
| L | 2111110010 | 1 | 30% |
| L | 0001001122 | 0 | 50% |
| N | 2220222222 | 2 | 90% |
| L | 0100010122 | 0 | 50% |
| R | 1011101112 | 1 | 70% |
| N | 2022022202 | 2 | 70% |
| N | 2222222222 | 2 | 100% |
| R | 1012112110 | 1 | 60% |
| L | 0020202022 | 0 | 50% |
| L | 0020001101 | 0 | 60% |
| N | 2122212222 | 2 | 80% |
| N | 2221221202 | 2 | 70% |
| R | 1212120202 | 2 | 30% |
| R | 1111002111 | 1 | 70% |
| N | 2222222222 | 2 | 100% |
| N | 0220220022 | 2 | 60% |
| L | 0001000022 | 0 | 70% |

Output labels: 0—L; 1—R; 2—N. Majority—the most frequent label among the ten window outputs; %—the percentage of outputs matching the cue.
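The decision rule underlying Table 17 can be expressed as a few lines of code: each cue yields ten per-window classifier outputs, and the command is the modal label. A minimal sketch (label coding 0/1/2 for L/R/N as in the table; the function name is illustrative):

```python
# Majority vote over a window of per-trial classifier outputs, as in
# Table 17 (0 - left, 1 - right, 2 - neutral; coding taken from the table).
from collections import Counter

def majority_vote(outputs):
    """Return the most frequent label in a list of classifier outputs."""
    return Counter(outputs).most_common(1)[0][0]

# First row of Table 17: cue N (2), outputs 2112220022.
outputs = [2, 1, 1, 2, 2, 2, 0, 0, 2, 2]
print(majority_vote(outputs))          # modal label
print(outputs.count(2) / len(outputs)) # fraction matching the cue
```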
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
