Article

An Analysis of Traditional Methods and Deep Learning Methods in SSVEP-Based BCI: A Survey

1
School of Information Science and Engineering, Shenyang Ligong University, Shenyang 110159, China
2
Science and Technology Development Corporation, Shenyang Ligong University, Shenyang 110159, China
*
Author to whom correspondence should be addressed.
Electronics 2024, 13(14), 2767; https://doi.org/10.3390/electronics13142767
Submission received: 12 June 2024 / Revised: 2 July 2024 / Accepted: 12 July 2024 / Published: 14 July 2024
(This article belongs to the Special Issue EEG Analysis and Brain–Computer Interface (BCI) Technology)

Abstract

The brain–computer interface (BCI) is a direct communication channel between the human central nervous system and external machines. Neuroelectric signals are collected with electrodes and, after feature extraction and classification, are converted into control signals for external mechanical devices. BCIs based on the steady-state visual evoked potential (SSVEP) offer high classification accuracy, a high information transfer rate, and relatively strong resistance to interference, so they have attracted wide attention and discussion. From k-nearest neighbor (KNN), multilayer perceptron (MLP), and support vector machine (SVM) classification algorithms to the current neural-network-based deep learning classification algorithms, numerous researchers have analyzed and discussed these methods. This article summarizes more than 60 SSVEP- and BCI-related articles published between 2015 and 2023 and provides an in-depth analysis of SSVEP-BCI. This survey can save scholars considerable time in following the progress of SSVEP-BCI and deep learning research, and it serves as a guide for designing and selecting SSVEP-BCI classification algorithms.

1. Introduction

BCI technology realizes human–computer interaction without relying on the peripheral nerves and muscles [1,2]. BCIs are divided into two types: noninvasive and invasive. Because of safety risks, ethical concerns, and other factors, the current mainstream brain–computer interfaces are noninvasive, which offer low risk, a wide range of applications, high user comfort, and a light burden on the user [3]. Among the many ways of acquiring neural electrical signals, electroencephalography (EEG) has clear advantages and is therefore favored by researchers: its temporal resolution is high, the acquisition equipment is simple and easy to operate, and real-time monitoring and online transmission can be realized [4].
At present, several brain-signal paradigms exist for EEG-based brain–computer interfaces, including BCIs based on motor imagery (MI), event-related potentials (ERP), and SSVEP [5]. Among them, SSVEP-BCI has been widely studied and discussed because it requires little user training and offers a high information transfer rate (ITR). SSVEP refers to the response elicited when the subject's retina is stimulated at a fixed frequency, so that the cerebral cortex produces an electrical signal at the same frequency or its harmonics; this signal is collected through an electrode cap on the scalp to enable human–computer interaction. SSVEP-BCI stimuli are currently divided into three frequency bands: low frequency (4~12 Hz), intermediate frequency (12~30 Hz), and high frequency (>30 Hz). Mainstream research is concentrated in the low- and intermediate-frequency bands, which are widely used because they elicit large-amplitude brain responses and support a high information transfer rate [6,7]. The classical system based on feature extraction is shown in Figure 1; an SSVEP-based BCI system comprises five stages. First, the electrical signals of the cerebral cortex are collected with equipment such as electrode caps. The signals are then preprocessed and their features extracted. Next, various classification methods are used to classify and identify the processed signals. Finally, the classified signals are output as commands to external devices such as mechanical arms [8], drones [9], and wheelchairs [10].
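To make this five-stage flow concrete, the following minimal Python sketch strings the stages together. The sampling rate, channel count, stimulus frequencies, and all function bodies are illustrative assumptions rather than the system shown in Figure 1.

```python
import numpy as np

FS = 250  # assumed sampling rate in Hz

def acquire_epoch(n_channels=8, n_samples=FS):
    """Stage 1: acquire one epoch of EEG (simulated here with noise)."""
    return np.random.randn(n_channels, n_samples)

def preprocess(eeg):
    """Stage 2: remove the per-channel mean (real systems also filter, see Section 3.2)."""
    return eeg - eeg.mean(axis=1, keepdims=True)

def extract_features(eeg):
    """Stage 3: amplitude spectrum of each channel via the FFT."""
    return np.abs(np.fft.rfft(eeg, axis=1))

def classify(features, stim_freqs=(8, 10, 12, 15)):
    """Stage 4: pick the stimulus frequency with the largest summed spectral peak."""
    freqs = np.fft.rfftfreq(FS, d=1.0 / FS)
    scores = [features[:, np.argmin(np.abs(freqs - f))].sum() for f in stim_freqs]
    return int(np.argmax(scores))

def send_command(target):
    """Stage 5: forward the decoded target to an external device (stub)."""
    print(f"command for target {target}")

send_command(classify(extract_features(preprocess(acquire_epoch()))))
```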
By collecting more than 60 articles published between 2015 and 2023, this paper summarizes the research progress of SSVEP-BCI. Firstly, it provides an in-depth analysis of three aspects, namely, the research progress of the SSVEP paradigm, of decoding methods, and of systems and applications, so that researchers can quickly grasp SSVEP-BCI through this survey. It then briefly reviews the development history of deep learning, enabling related scholars to better understand how the field evolved and to deepen their knowledge of it. Next, it gives a detailed overview of SSVEP-BCI classification algorithms in terms of traditional classification methods and deep learning classification methods, and compares the two, which is the focus and significance of the present survey. The aim is to allow researchers to clearly understand the traditional KNN, MLP, and SVM classification algorithms as well as the currently popular neural-network-based deep learning classification algorithms, so that they can choose an appropriate classification algorithm for their own research and obtain the results their work requires. This survey thus provides an important reference for future researchers using SSVEP-BCI.

1.1. Literature Search and Inclusion Criteria

Papers related to BCI and SSVEP were collected from literature databases including Web of Science, Google Scholar, Scopus, IEEE Xplore, and PubMed. The search keywords were (1) EEG, (2) signal classification algorithms, and (3) deep learning. After reading and organizing the retrieved papers, a paper was retained if it met the following conditions: (1) it was an SCI-indexed paper ranked in Zone 3 or above; (2) it covered SSVEP-related content; (3) it described feature extraction and signal classification in detail; and (4) it used deep learning classification algorithms to process SSVEP signals. After this layered screening, the 66 retained articles were analyzed and studied in depth. The results are shown in Figure 2.

1.2. Contributions of This Survey

When conducting SSVEP-BCI research, collecting references and understanding the current state of research, the open problems, and future research directions is very important. For the first time, this survey brings together 66 related articles for an in-depth study and analysis of SSVEP-BCI, from the progress of SSVEP-BCI research to the development of deep learning, and then to the various data preprocessing methods, the situations they suit, and the phenomena they target, which greatly saves researchers' time when collecting information and getting to know SSVEP-BCI. Finally, a comparative analysis of traditional classification algorithms and deep learning classification algorithms is conducted to provide design guidelines for SSVEP-BCI researchers and to point out research directions, which is an important strength of this investigation. Data preprocessing and signal classification algorithms are the key links in SSVEP-BCI research and directly determine the control performance. The contributions of this investigation are mainly as follows:
(1)
A detailed overview of the progress of SSVEP, from paradigm to signal decoding and finally to applications, is provided to help researchers better understand the various lines of progress in SSVEP-BCI research.
(2)
We analyze and summarize the current mainstream research directions and content of deep-learning-based SSVEP-BCI and outline the development history of deep learning. The traditional KNN, MLP, and SVM classification algorithms and neural-network-based deep learning classification algorithms are described in detail, and their advantages, disadvantages, and suitable scenarios are compared to provide design guidelines for researchers.
(3)
We point out the deficiencies and difficulties of SSVEP-BCI and the main breakthrough points for future research, and provide an outlook on the future application of SSVEP-BCI based on deep learning classification algorithms.
To summarize, this survey provides rich guidance and a convenient reference for researchers in their related work on SSVEP-BCI.

1.3. The Structure of This Survey

The structure of this survey is shown in Figure 3. Section 2 gives a detailed account of the research progress of SSVEP-based BCI, covering the SSVEP paradigm, decoding methods, and systems and applications. Section 3 details deep-learning-based SSVEP-BCI classification, covering the development history of deep learning, data preprocessing methods in signal acquisition, and SSVEP-BCI classification algorithms. Section 4 then focuses on the current challenges and future directions of SSVEP-based BCI. Finally, Section 5 gives the conclusion.

2. Research Progress of BCI Based on SSVEP

SSVEP is the response of visual cortical neurons induced by a subject's fixation on a periodic visual stimulus. It has attracted wide attention due to its high ITR, strong stability, and wide applicability, and it has been applied to visual field injury detection, identity recognition, and brain-controlled typing interfaces [11,12,13].

2.1. Research Progress of SSVEP Paradigm

Designing effective visual stimuli so that users' EEG signals can distinguish different targets is crucial for realizing SSVEP-BCI. At present, the most widely used coding paradigm is the joint frequency–phase paradigm, that is, mixed frequency and phase coding [14,15]. Owing to its efficient coding, this paradigm occupies a mainstream position in SSVEP-BCI. In recent years, researchers have developed new coding strategies based on it and expanded the coding instruction set. Chen et al. [16] arranged different frequencies in temporal order: using 8 stimulation frequencies, each target was encoded as a sequence of 4 frequencies, achieving 160-target coding. Chen et al. [17] realized 120-target coding using a frequency range of 12~23.9 Hz, a frequency interval of 0.1 Hz, and a phase interval of 0.35π. As shown in Figure 4, blue denotes the stimulus frequency and red the initial phase.
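As an illustration of joint frequency–phase coding, the following sketch generates a 120-target frequency/phase table with the spacing reported in [17] and samples the luminance of one flicker target. The monitor refresh rate, the sinusoidal luminance profile, and the mapping of targets to positions are assumptions.

```python
import numpy as np

# 120 targets spanning 12-23.9 Hz in 0.1 Hz steps, with a 0.35*pi phase step.
n_targets = 120
freqs = 12.0 + 0.1 * np.arange(n_targets)              # 12.0 ... 23.9 Hz
phases = (0.35 * np.pi * np.arange(n_targets)) % (2 * np.pi)

def stimulus_luminance(target, t):
    """Sampled luminance of one flicker target at times t (seconds), in [0, 1]."""
    return 0.5 * (1.0 + np.sin(2 * np.pi * freqs[target] * t + phases[target]))

refresh_rate = 60.0                                     # assumed monitor refresh rate (Hz)
t = np.arange(0, 1.0, 1.0 / refresh_rate)               # one second of frames
frame_values = stimulus_luminance(0, t)                 # luminance sequence for target 0
print(freqs[:3], phases[:3])
```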
The traditional SSVEP-BCI system generally uses high-contrast, strong stimuli for visual presentation, but these easily cause visual fatigue, which is not conducive to long-term use of the system. To enhance user comfort, researchers have studied stimulation from the perspective of peripheral vision. Chen et al. [18] realized 13-target encoding by using four stimulus regions with different frequencies and the positional relationship between each target and the four regions. Zhao et al. [19] treated the intervals between the stimulus regions as targets and realized 40-target coding. Based on the time- and phase-locking characteristics of SSVEP, Jiang et al. [20] implemented a four-target SSVEP-BCI system using 60 Hz flicker-free visual stimulation combined with phase encoding. The recognition accuracy of the system was 87.75% in an online test, demonstrating the feasibility of SSVEP-BCI with flicker-free visual stimulation.
To further increase the number of coding targets, researchers have combined SSVEP with other EEG paradigms or other physiological signals to construct hybrid BCI paradigms. Zhou et al. [21] combined SSVEP with electrooculogram (EOG) signals to construct an asynchronous 12-target BCI system using blink-signal recognition and SSVEP. Jiang et al. [22] developed a paradigm that fuses the EEG response and the pupillary response under low-frequency visual stimulation (0.8~2.12 Hz), which offers a good way to alleviate the problem of "BCI illiteracy".

2.2. Research Progress of Decoding Methods

In order to accurately and efficiently identify SSVEP responses submerged in background EEG noise, recent research results can be divided into three categories according to the amount of training data available in different application scenarios: trained decoding [23], transfer learning decoding [24], and zero-calibration decoding [25].
Most of the early trained algorithms are derived from the canonical correlation analysis (CCA) algorithm. In 2018, researchers introduced task-related component analysis (TRCA) into SSVEP decoding [26,27] to extract task-related components in the EEG, which significantly improved the performance of trained algorithms. In recent years, researchers have further improved decoding performance by introducing adjacent-frequency information [28] and temporal information [29]. Liu et al. [30] proposed the task-discriminant component analysis (TDCA) algorithm, which uses a discriminant model to find a spatial filter common to all frequencies and outperforms the ensemble task-related component analysis (eTRCA) algorithm. Deep learning also shows excellent performance in trained decoding owing to its powerful feature extraction ability. Researchers use convolutional neural network (CNN)-based structures to extract spatial and temporal features in different layers. The structure of a CNN is shown in Figure 5.
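Since most trained methods build on CCA, the following sketch shows plain CCA-based target detection against sinusoidal reference signals. The sampling rate, candidate frequencies, and harmonic count are assumptions, and scikit-learn's CCA is used to obtain the canonical correlations.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 250                              # assumed sampling rate (Hz)
STIM_FREQS = [8.0, 10.0, 12.0, 15.0]  # assumed candidate stimulus frequencies
N_HARMONICS = 3

def reference_signals(freq, n_samples, fs=FS, n_harmonics=N_HARMONICS):
    """Sine/cosine references at the stimulus frequency and its harmonics."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)       # shape: (n_samples, 2 * n_harmonics)

def cca_score(eeg, refs):
    """Largest canonical correlation between an EEG epoch and the reference set."""
    x_c, y_c = CCA(n_components=1).fit_transform(eeg, refs)
    return np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1]

def detect_target(eeg):
    """eeg: array of shape (n_samples, n_channels); returns the best frequency index."""
    scores = [cca_score(eeg, reference_signals(f, eeg.shape[0])) for f in STIM_FREQS]
    return int(np.argmax(scores))

epoch = np.random.randn(2 * FS, 8)     # a fake 2 s, 8-channel epoch
print(detect_target(epoch))
```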
Li et al. [31] proposed a convolutional correlation analysis model (ConvCA) whose decoding accuracy exceeds that of eTRCA. Guney et al. [32] proposed a deep convolutional neural network model (DNN) to further enhance SSVEP decoding accuracy. Considering the temporal characteristics of EEG signals, Zhang et al. [33] proposed a bidirectional Siamese correlation analysis (bi-SiamCA) model and used a recurrent neural network (RNN) to construct the classification model, which significantly improved the decoding performance of deep models. The structure is shown in Figure 6.
A long calibration time limits the application of BCI systems. Therefore, researchers use transfer learning to construct cross-day, cross-device, and cross-subject decoding models, so that decoding can be achieved with much less calibration data. Liu et al. [34] used align-and-pool domain adaptation (ALPHA) to analyze the relationship between the source and target domains at the feature level; by decomposing, aligning, and pooling the subspaces, it improved the within-subject decoding performance of cross-device and cross-day transfer learning. Zhu et al. [35] used ensemble learning to improve decoding performance on cross-session tasks.
In order to give decoding methods the ability to be applied directly to new subjects, researchers have developed zero-calibration decoding methods. Ding et al. [36] proposed the filter bank CNN (FB-tCNN), whose decoding accuracy under zero calibration is significantly higher than that of the traditional training-free CCA method. Chen et al. [37] applied the Transformer architecture to the SSVEP classification task, further improving SSVEP recognition accuracy under zero calibration. Yan et al. [38] improved cross-subject SSVEP recognition accuracy by refining the superposition-averaging step of the cross-subject spatial filter transfer (CSSFT) method. In addition, SSVEP can also be used for human identification: Zhang et al. [39] obtained brain functional connectivity by computing the correlation matrix of the SSVEP spectrum, and the individual recognition accuracy of this method reached 98% on a dataset of 15 subjects.

2.3. Research Progress of System and Application

SSVEP-BCI is mainly used in three directions: communication, control, and condition monitoring. In communication, it is mainly used for telephone dialing [40,41] and text spelling [42,43]. In control, it is mainly used for neural prostheses [44,45] and equipment control [46,47]. In condition monitoring, because the time domain, frequency domain, and spatial domain characteristics of the SSVEP signal can reflect the state of the brain, it is used in passive BCIs to realize implicit attention monitoring [48,49], gaze position monitoring [50,51], visual field damage detection [52,53], and individual identification [54,55]; Figure 7 lists some practical applications of SSVEP-BCI. In view of practical application requirements, researchers have optimized SSVEP-BCI systems in the directions of decoding control strategies, hybrid BCI, and mobile BCI.
In order to improve practicality, researchers have carried out a series of studies on decoding control strategies. Shi et al. [56] realized asynchronous Chinese communication between an amyotrophic lateral sclerosis (ALS) patient and the outside world using a spatiotemporally balanced dynamic window algorithm. Asynchronous communication is more natural and practical in real applications, and researchers have studied asynchronous control methods: Yang et al. [57] proposed a multiwindow asynchronous detection strategy based on spatiotemporal equalization, which improved the asynchronous control performance of the SSVEP-BCI keyboard. In addition, considering that training calibration takes a long time and easily causes fatigue, an online adaptation strategy can be used to improve system performance with little or no training through online machine learning. Wong et al. [58] proposed online adaptive CCA (OACCA), which iteratively updates the algorithm model with online data and achieves better performance than the dynamic stopping strategy.
In addition to improving the practicality of control strategies, researchers have also made improvements from the perspective of wearable devices. To facilitate signal acquisition, behind-the-ear signal acquisition has been proposed. Marinou et al. [59] used the SSVEP response collected behind a single ear for target detection, verifying the feasibility of behind-the-ear SSVEP-BCI. Liang et al. [60] proposed a fast and effective method for estimating the difference in behind-the-ear SSVEP delay between the left and right visual fields, which effectively improved the performance of behind-the-ear SSVEP-BCI systems. To speed up switching between stimulation paradigms and control scenes, Chen et al. [61] used mobile BCI and augmented reality technology to control a manipulator through a 4-target SSVEP-BCI system, achieving an information transfer rate of 14.21 bits/min; Chen et al. [62] increased the number of control commands to 12 targets and achieved an information transfer rate of 67.37 bits/min.

3. SSVEP-BCI Classification Algorithm Based on Deep Learning

In the SSVEP-BCI system, effective extraction of SSVEP features is a key link in the signal processing pipeline, and the quality of feature extraction influences the accuracy with which the classification algorithm controls the system [63,64]. Many algorithms have been applied to SSVEP feature classification: KNN, MLP, and SVM methods, and the currently popular neural-network-based deep learning methods. Summarizing the relevant SSVEP-BCI literature, traditional machine learning classification algorithms and deep learning algorithms are analyzed in turn below.

3.1. The Development Process of Deep Learning

  • Inception: from the 1980s, when the backpropagation (BP) algorithm was proposed, to 2006. The BP algorithm made the training of neural networks simple and feasible [65]. However, due to problems with neural network algorithms and the limits of computing power at the time, very few scientists stayed in the field. Shallow learning methods became the mainstream of the era, and algorithms such as KNN, MLP, and SVM received widespread attention [66,67,68]. These shallow learning methods achieved good results in practice, which kept deep neural networks out of favor.
  • Rapid development period: from 2006 to 2012. In 2006, Hinton and his team [69,70] published a paper in Science proposing dimensionality reduction and layer-by-layer pretraining, which made the practical training of deep networks possible. In the same year, they published another important paper addressing the vanishing-gradient problem in deep network training and proposed the concept of the "deep autoencoder". Since then, research on neural networks has entered the era of deep learning. In 2012, Hinton et al. [71] proposed the "dropout" technique, which effectively reduces overfitting in deep learning, improves generalization, and simplifies network design; this technique helped spark the deep learning research boom.
  • Explosion period: from 2012 to the present. In 2012, Hinton's student Krizhevsky et al. [72] used a convolutional neural network for ImageNet image recognition and achieved performance superior to all traditional methods; from then on, convolutional neural networks began to shine in computer vision. In 2014, Krizhevsky [73] proposed two parallelization methods, data parallelism and model parallelism, which significantly improved the training speed of convolutional neural networks and promoted the development of deep learning frameworks. In 2016, He et al. [74] proposed residual networks, which solved the difficulty of training very deep networks and provided important ideas and methods. In 2017, Vaswani et al. [75] proposed the Transformer model, which revolutionized deep learning, improving not only model performance but also training and inference speed. In 2018, Devlin et al. [76] proposed Bidirectional Encoder Representations from Transformers (BERT), a new language model that has had a significant impact on deep learning in natural language processing. In 2019, Rejer et al. [77] minimized the number of channels used to acquire EEG signals while maintaining high accuracy, which contributed greatly to SSVEP-BCI research. In 2022, Du et al. [78] studied SSVEP for augmented reality (AR) in depth and provided researchers with a comparative analysis of the color of visual stimuli in AR-SSVEP.

3.2. Data Preprocessing

Although deep learning is able to extract features from raw data end to end and perform classification and recognition, proper data preprocessing undoubtedly improves the efficiency with which deep learning extracts useful information from data. Commonly used preprocessing techniques in deep-learning-based SSVEP-BCI include frequency-domain filters, time–frequency transforms, and filter banks.

3.2.1. Frequency Filter

The main components of the SSVEP response are the fundamental, the second harmonic, and the third harmonic, which lie within a certain bandwidth in the frequency domain, whereas much of the noise from ocular and electromyographic activity lies outside this band, so it can be removed with frequency filters [79]. Frequency filters include band-pass filters and notch filters: a band-pass filter removes part of the noise from the original signal, and a 50 Hz notch filter is typically used to remove power-line noise [80]. Band-pass filtering can improve the learning efficiency of deep learning; although filtering generally discards some information, it allows the network to focus on task-relevant information. In general, it is advisable to filter out noise below 1 Hz and low-energy content above 100 Hz.
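A minimal preprocessing sketch along these lines, using SciPy's standard filter design routines; the 250 Hz sampling rate, cut-offs, and filter orders are illustrative assumptions, not the cited papers' settings.

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

FS = 250.0  # assumed sampling rate (Hz)

def bandpass(eeg, low=1.0, high=100.0, order=4, fs=FS):
    """Zero-phase Butterworth band-pass filter applied along the time axis."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

def notch_50hz(eeg, fs=FS, quality=30.0):
    """Zero-phase 50 Hz notch filter to suppress power-line interference."""
    b, a = iirnotch(50.0, quality, fs)
    return filtfilt(b, a, eeg, axis=-1)

raw = np.random.randn(8, int(4 * FS))        # fake 8-channel, 4 s recording
clean = notch_50hz(bandpass(raw))
```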

3.2.2. Time–Frequency Conversion

The time–frequency transform approach considers both time and frequency, allowing a more thorough analysis of how the frequency content of the EEG varies over different time intervals; among these methods, the fast Fourier transform (FFT) is one of the most commonly used and widely applicable. Based on the fundamental frequency of the SSVEP EEG, frequency-domain features within an appropriate frequency range are taken from the spectrum as inputs to the network architecture. In 2017, Kwak et al. [81] used frequency-domain features as inputs to a CNN: they preprocessed the acquired EEG data, transformed the segmented data with the FFT, selected 120 sampling points per channel corresponding to 5–35 Hz, and normalized the data to the range of 0 to 1. In 2019, Zhang et al. [82] used a similar process and also achieved satisfactory results. In the same year, Ravi et al. [83] used a similar process and network to Zhang et al.; the difference is that Ravi et al. used both the real and imaginary parts of the FFT as inputs, thereby taking both amplitude and phase into account and achieving better results than before.
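The following sketch mirrors this style of FFT feature extraction (band selection, optional real/imaginary stacking as in Ravi et al. [83], and normalization to [0, 1]); the segment length and sampling rate are assumptions.

```python
import numpy as np

FS = 250  # assumed sampling rate (Hz)

def fft_features(segment, fs=FS, band=(5.0, 35.0), complex_parts=True):
    """segment: (n_channels, n_samples) EEG; returns per-channel frequency-domain features."""
    spectrum = np.fft.rfft(segment, axis=-1)
    freqs = np.fft.rfftfreq(segment.shape[-1], d=1.0 / fs)
    keep = (freqs >= band[0]) & (freqs <= band[1])
    if complex_parts:                                 # keep amplitude and phase information
        feats = np.concatenate([spectrum.real[:, keep], spectrum.imag[:, keep]], axis=-1)
    else:                                             # magnitude spectrum only
        feats = np.abs(spectrum)[:, keep]
    fmin, fmax = feats.min(), feats.max()
    return (feats - fmin) / (fmax - fmin + 1e-12)     # normalize to the range [0, 1]

segment = np.random.randn(8, 2 * FS)                  # fake 8-channel, 2 s segment
x = fft_features(segment)
```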

3.2.3. Filter Bank

A filter bank (FB) is also a frequently used technique in SSVEP signal preprocessing; by decomposing the signal into multiple subband components, it allows the embedded information to be extracted more efficiently and quickly. In 2017, Islam et al. [84] developed a novel unsupervised binary subband CCA (BsCCA) method by adding a filter bank to canonical correlation analysis (CCA), and demonstrated that BsCCA built on the FB can significantly improve SSVEP-BCI performance. In 2020, Shao et al. [85] proposed filter bank time-localized canonical correlation analysis (FBTCCA) by adding a time-localized structure on top of filter bank canonical correlation analysis (FBCCA), and experimentally showed that FBTCCA outperforms FBCCA, with higher accuracy and information transfer rate in SSVEP-BCI systems. These articles show that filter banks are widely used in the data preprocessing stage of SSVEP, and over time some scholars began to combine FBs with deep learning for SSVEP-BCI feature extraction. In 2021, Zhao et al. [86] studied the classification of SSVEP signals in EEG and proposed combining an FB with a CNN to improve SSVEP classification, using a filter bank with three passbands to extract and separate the SSVEP signals; the experimental results showed that adding the FB significantly improves the performance of the CNN-based SSVEP classification algorithm. In 2023, Xu et al. [87] proposed an FB complex spectral CNN (FB-CCNN) and improved it with an artificial-gradient-descent optimization algorithm; experiments showed leading classification accuracies of 94.85 ± 6.18% and 80.58 ± 14.43% on two open SSVEP datasets, respectively. The structural block diagram of FB-CCNN is shown in Figure 8.
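The sketch below illustrates a filter-bank front end in the spirit of FBCCA: each epoch is decomposed into a few subbands, each subband is scored against sinusoidal references with CCA, and the subband scores are fused with decreasing weights. The passbands, the weighting rule, and the sampling rate are assumptions, not the settings of the cited papers.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.cross_decomposition import CCA

FS = 250.0
SUBBANDS = [(8, 88), (16, 88), (24, 88)]              # three assumed passbands (Hz)

def bandpass(x, low, high, fs=FS, order=4):
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=0)

def references(freq, n_samples, fs=FS, n_harmonics=3):
    t = np.arange(n_samples) / fs
    cols = []
    for h in range(1, n_harmonics + 1):
        cols += [np.sin(2 * np.pi * h * freq * t), np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(cols)

def cca_corr(x, y):
    xc, yc = CCA(n_components=1).fit_transform(x, y)
    return np.corrcoef(xc[:, 0], yc[:, 0])[0, 1]

def fb_score(epoch, freq):
    """epoch: (n_samples, n_channels); weighted sum of squared subband correlations."""
    refs = references(freq, epoch.shape[0])
    score = 0.0
    for n, (low, high) in enumerate(SUBBANDS, start=1):
        weight = n ** -1.25 + 0.25                    # commonly used FBCCA-style weighting
        score += weight * cca_corr(bandpass(epoch, low, high), refs) ** 2
    return score

epoch = np.random.randn(int(2 * FS), 8)               # fake 2 s, 8-channel epoch
best = max([8.0, 10.0, 12.0, 15.0], key=lambda f: fb_score(epoch, f))
print(best)
```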
These studies show that data preprocessing is an indispensable part of deep-learning-based SSVEP-BCI classification. In addition, blink artifacts, eye movement artifacts, and motion artifacts are three common disturbances encountered during data preprocessing that can degrade the quality of SSVEP signals, and scholars have studied how to eliminate them. Blink artifacts can be quickly isolated from the multichannel EEG signal and removed from the original signal with independent component analysis (ICA) [88]; the discrete wavelet transform (DWT) is another method that can effectively eliminate them [89]. For eye movement artifacts, the techniques most used in SSVEP include reducing the frequency of eye movements, which effectively reduces such artifacts, or linearly subtracting the horizontal EOG signal [90]. Motion artifacts can be suppressed by removing motion-related high-frequency components with filters such as low-pass filters; the more advanced CCA is also an excellent solution for effectively removing motion artifacts [91].

3.3. Types and Layers of Network Architecture

After preprocessing the SSVEP signal data, the information in the feature signal becomes specific and clear, and the next step is the selection of the classifier, which affects the accuracy with which the system controls external devices. There are many choices of classifier for SSVEP-BCI; the literature summarized below describes traditional classification algorithms and deep learning classification algorithms separately.

3.3.1. Traditional Classification Algorithm

Before deep learning was widely adopted, most researchers used machine learning algorithms such as KNN, MLP, and SVM as signal classifiers. In 2022, Rizal [92] used KNN as the predictor in the EEG classification and processing stage to classify EEG feature information from patients at different stages, achieving a classification accuracy of 90.74%. In 2020, Yang et al. [93] proposed a new MLP classifier for classifying and identifying the EEG signals of stroke patients; with this algorithm, an average classification accuracy of 76% was achieved. In 2023, Janapati et al. [94] optimized the MLP for classifying SSVEP signal features using a hybrid anchoring-based particle swarm scaled conjugate gradient multilayer perceptron (APS-MLP); this improved MLP has much better robustness and test accuracy than KNN and a plain MLP, which is significant for SSVEP-BCI feature classification. In 2023, Ma et al. [95] proposed a fused feature extraction method (CCA-DTF) used together with an SVM classifier to classify SSVEP-BCI feature signals; the experiments showed that the CCA-DTF-based SVM achieves an average classification accuracy of 94.52% in a 2 s time window with an ITR of 49.23 bits/min, far outperforming the traditional SVM classifier.
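For orientation, the following scikit-learn sketch compares the three traditional classifiers on synthetic feature vectors standing in for extracted SSVEP features (e.g., FFT amplitudes); the hyperparameters and data shapes are arbitrary assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 60))                    # 200 trials x 60 features (stand-in data)
y = rng.integers(0, 4, size=200)                  # 4 stimulus targets

classifiers = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0),
    "SVM": SVC(kernel="rbf", C=1.0),
}
for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(), clf)   # standardize features before classifying
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```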

3.3.2. Deep Learning Classification Algorithms

With the development of deep learning, researchers have found that introducing deep learning models into classification, that is, changing the model structure to classify large-scale data, yields performance superior to traditional machine learning classification algorithms. Commonly used deep learning models generally fall into three categories: fully connected neural networks, RNNs, and CNNs [96].
Artificial neural networks (ANNs) are neural network models composed of large numbers of interconnected neurons with activation functions and weights; trained on large amounts of data, they offer self-adaptation, self-organization, and real-time learning. Fully connected neural networks are an important form of ANN; a simple 4-3-2 ANN structure is shown in Figure 9. Wang et al. [97] improved an ANN by introducing prior knowledge of the task, and the improved model showed high efficiency and applicability in SSVEP target recognition, demonstrating the promise and value of ANNs for this purpose. In 2019, Tarafdar et al. [98] used an ANN classifier to classify SSVEP features of the human brain after coffee consumption, preprocessing the collected signals with CCA and then classifying the extracted features with the ANN; the experiments showed that the ANN achieved 97% classification accuracy on the relevant features.
RNNs introduce recurrent units so that the network can refer to information from previous time steps when processing the current time step. An RNN contains an input layer, a hidden layer, and an output layer, and excels at processing and analyzing sequential data in fields such as language processing and speech recognition. In 2021, Hashemnia et al. [99] compared human EEG data with a designed RNN structure, and the results showed that the RNN captures temporal features for speech recognition in a way similar to the human brain. Samee et al. [100] combined an RNN with bidirectional long short-term memory (BiLSTM) to extract EEG features for distinguishing and diagnosing epileptic seizures; the experiments showed that this RNN-BiLSTM model reaches 98.90% accuracy and achieves accurate classification.
A CNN mainly consists of five parts: the input layer, convolutional layers, activation functions, pooling layers, and fully connected layers, of which the convolutional and pooling layers are the most central. CNNs excel at image recognition; they can learn from large amounts of image data to extract and classify image features and complete recognition and analysis. Li et al. [101] proposed a dilated shuffle convolutional neural network (DSCNN) model for SSVEP-BCI signal classification and classified EEG signals experimentally with it; the results showed that under normal conditions the DSCNN achieves an average accuracy of 96.75%, demonstrating that CNN models are well suited to SSVEP-BCI feature classification. Kwak et al. [81] studied the robustness of SSVEP signal classification and used a CNN classifier to classify SSVEP features in dynamic and static environments, comparing it with traditional CCA and CCA-KNN classifiers; the experimental results show that the CNN classifier is more robust, achieving classification accuracies of 99.28% and 94.03% in static and dynamic conditions, respectively.
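The following PyTorch sketch shows a compact CNN of the kind discussed above, with a temporal convolution followed by a spatial convolution across channels and a linear classifier. The layer sizes, kernel lengths, and the 8-channel, 1 s input are assumptions rather than any cited architecture.

```python
import torch
import torch.nn as nn

class SSVEPConvNet(nn.Module):
    def __init__(self, n_channels=8, n_samples=250, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12)),  # temporal filters
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.Conv2d(16, 32, kernel_size=(n_channels, 1)),          # spatial filters
            nn.BatchNorm2d(32),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 4)),                        # temporal pooling
            nn.Dropout(0.5),
        )
        self.classifier = nn.Linear(32 * (n_samples // 4), n_classes)

    def forward(self, x):                      # x: (batch, 1, n_channels, n_samples)
        x = self.features(x)
        return self.classifier(torch.flatten(x, start_dim=1))

model = SSVEPConvNet()
logits = model(torch.randn(16, 1, 8, 250))     # a fake batch of 16 one-second epochs
print(logits.shape)                            # torch.Size([16, 4])
```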

3.3.3. Comparative Analysis of Classification Algorithms

The KNN classifier is simple in concept and easy to implement: it classifies purely on the basis of distance, applies to a wide range of problems, and handles multiclass classification, but the need to compute the distance between each test sample and all training samples makes it computationally expensive, and within a certain range this cost grows with the achievable classification accuracy. The MLP has strong nonlinear fitting ability and suits complex input–output relationships, and its performance can be improved by increasing the number of hidden layers and neurons. It is mostly used for large-scale data processing and parallel computation, but it easily falls into local minima, its many parameters make tuning complicated, and on highly complex tasks its learning is slow and training time grows sharply. The SVM performs well in high-dimensional spaces, also applies to nonlinear classification with excellent generalization ability, and its computational cost scales with the required accuracy; its computational complexity is much lower than that of KNN and MLP, but it is not suitable for feature classification on large-scale datasets. In 2024, Rajalakshmi et al. [102] classified EEG signals by first band-pass filtering the data and then comparing SVM, KNN, and MLP classifiers on the processed features; the SVM classifier performed best, with a top classification accuracy of 91.67%. In SSVEP-BCI studies, when the EEG dataset is not large, the traditional SVM classifier has an advantage in classifying SSVEP features and achieves higher accuracy than KNN and MLP.
Among deep learning classifiers, the ANN model consists of multiple neurons that receive signals and generate outputs, and its structure is flexible: it can be customized and optimized for different tasks and data types, but it usually requires large amounts of data and computational resources and converges only through repeated iteration. Because its structure is relatively simple, good performance depends on a large number of parameters and substantial computation. The RNN, in contrast, is a neural network designed for sequence data such as time series and language, where it has unique advantages; its complexity grows with sequence length, and because training suffers from vanishing and exploding gradients it is complicated and difficult, so RNNs are mainly used in text categorization and language processing. The structure of the CNN resembles the biological visual system, with convolutional and pooling layers, and suits two- and three-dimensional data such as images and videos; parameter sharing in the convolutional layers and the dimensionality reduction of the pooling layers reduce the number of parameters, lowering computational complexity and improving the generalization ability of the model. In 2021, Choubey et al. [103] used ANN and KNN classifiers to classify patients' EEG signals, and the results showed that the ANN is more accurate than the traditional KNN classifier. In 2022, Joshi et al. [104] studied EEG signals of human emotions and compared RNN and KNN classification of the experimental data; the average accuracy of the RNN was slightly higher, reaching 94.844%. In 2024, Al-Quraishi et al. [105] improved the RNN classification algorithm by adding long short-term memory (LSTM) and gated recurrent unit (GRU) modules and compared it with SVM and KNN classifiers; the improved RNN outperformed the traditional machine learning algorithms in recognizing EEG signals. In 2019, Tocoglu et al. [106] used ANN, RNN, and CNN deep learning classifiers for sentiment analysis of linguistic texts, and the experiments showed that the CNN had the highest classification accuracy. For classifying SSVEP signals, which arise mainly in the occipital cortex and are closely related to visual processing, the CNN is more advantageous than the ANN and RNN among the three deep learning models, and feature classification with CNN classifiers achieves higher accuracy.

4. Challenges and Future Directions of SSVEP-BCI

After collating and summarizing numerous articles, some challenges remain in current SSVEP-BCI, as follows:
  • The background noise in SSVEP signal data is relatively large; external sound, light sources, magnetic fields, electric fields, and other interference may disturb the data acquisition process.
  • The ITR of SSVEP-BCI systems still has room for improvement. For a fixed number of stimulus targets, the information transfer rate mainly depends on the length of the recognition window of the classification algorithm (see the sketch after this list), and very few SSVEP recognition algorithms achieve high efficiency with short time windows.
  • Most current SSVEP-BCIs use low-frequency stimulus targets as the stimulus source, but prolonged use of low-frequency SSVEP-BCIs fatigues users and increases the risk of inducing photosensitive epilepsy.
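The ITR mentioned in the second point is commonly computed with the standard Wolpaw formula; the sketch below makes the dependence on the window length explicit (the example numbers are illustrative).

```python
import math

def itr_bits_per_min(n_targets: int, accuracy: float, window_s: float) -> float:
    """ITR in bits/min: n_targets targets, accuracy P in (1/N, 1], window_s seconds per selection."""
    n, p = n_targets, accuracy
    bits = math.log2(n)
    if 0.0 < p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / window_s

# Example: 40 targets at 90% accuracy; halving the window from 2 s to 1 s doubles the ITR.
print(itr_bits_per_min(40, 0.90, 2.0), itr_bits_per_min(40, 0.90, 1.0))
```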
Currently, researchers mainly focus on CCA and TRCA in decoding control strategies for SSVEP-BCI, developing deep learning algorithms, combining deep learning models with CCA and TRCA, or developing more advanced algorithms and signal processing techniques. This is a valuable research direction that can effectively decode brain signals into control commands. In 2024, Xiong et al. [107] designed a three-dimensional convolutional neural network (3DCNN) model for extracting and decoding SSVEP-BCI data features with deep learning; the experimental results showed that the average classification accuracy of the 3DCNN reached 89.35% and the ITR reached 173.02 bits/min, which offers a valuable approach to the decoding and application of SSVEP-BCI. Most SSVEP-BCI studies use a single BCI system, and fusing SSVEP-BCI with other types of BCI may be a point for in-depth exploration. In 2023, Cui et al. [108] combined high-frequency SSVEP with surface electromyography (sEMG) and proposed a hybrid BCI system; comparative experiments demonstrated that the hybrid BCI performed better than a single BCI, providing a reference for research on hybrid BCIs. Portable and wearable SSVEP-BCI systems are also a key direction for future research, and it is valuable to study how to reduce device size, lower power consumption, and optimize performance in real environments. In 2018, Wang et al. [109] analyzed portable SSVEP-BCI systems and designed a head-mounted SSVEP-based BCI that enables first-person interaction with the environment and control of a vehicle, a meaningful attempt at portable BCI. With the trend toward personalized medicine, it is also worthwhile to study how future SSVEP-BCIs can account for individual differences and be customized and optimized according to the user's EEG characteristics, cognitive ability, and physical condition, so as to develop systems adapted to different users' needs. Beyond traditional communication and control applications, SSVEP-BCI can also be explored in new areas, such as automatic emotion recognition from electrodermal activity, recognition and regulation of emotional activity, cognitive function enhancement, and rehabilitation training, which require researchers to further explore and develop appropriate technical approaches. In 2024, Veeranki et al. [110] studied electrodermal activity (EDA), extracting features with nonlinear signal processing methods and classifying them with several classifiers such as KNN and SVM; the results proved the effectiveness of nonlinear EDA features for emotion recognition, a very valuable attempt in a new application area.

5. Conclusions

This survey provides a detailed study and analysis of the research progress and classification algorithms of SSVEP-BCI. The current research progress is analyzed in terms of SSVEP paradigms, decoding methods, and system applications. In SSVEP paradigm research, the most widely used coding paradigm is the joint frequency–phase paradigm, peripheral-vision stimulus presentation can effectively improve user comfort, and hybrid BCI paradigms can further increase the number of coding targets. Research on SSVEP decoding methods falls mainly into three categories: trained decoding, transfer learning decoding, and zero-calibration decoding. In terms of system applications, SSVEP-BCI is mainly used for communication, control, and condition monitoring, and most researchers optimize systems in the directions of decoding control strategies, hybrid BCI, and mobile BCI to meet different practical requirements.

Although deep learning can extract features from raw data end to end and perform classification and recognition, appropriate data preprocessing undoubtedly improves the efficiency with which deep learning extracts useful information from data. Commonly used preprocessing techniques include frequency-domain filters, time–frequency transforms, and filter banks, and the quality of preprocessing also has a significant impact on model performance. In the section on SSVEP-BCI classification algorithms, traditional classification algorithms and deep learning classification algorithms are described and summarized separately, and their selection and application are compared. Deep learning classification algorithms generally outperform traditional ones on SSVEP-BCI tasks, mainly because deep models automatically learn complex feature representations of the input data, improving classification accuracy and robustness. Despite their excellent performance, the high computational complexity and long training time of deep learning models limit their application in real-time BCI systems. Deep learning models usually require large amounts of labeled data for training, and when the available data are limited, traditional classification algorithms such as SVM can perform somewhat better than deep models.

Four areas stand out as important future research directions: developing lightweight deep learning models to meet the needs of real-time BCI systems; researching unsupervised or semisupervised learning methods to reduce the dependence on labeled data; combining physiological signals from other modalities to improve the performance and stability of SSVEP-BCI systems; and exploring the application of virtual reality and augmented reality in SSVEP-BCI systems. In conclusion, this study saves researchers considerable time in collecting information and provides a comprehensive and convenient reference for SSVEP-BCI research and classification algorithm selection.

Author Contributions

Writing—original draft, J.W. (Jingjing Wang); Writing—review & editing, J.W. (Jiaxuan Wu). All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Defense Industrial Technology Development Program (JCKY2022410C002), and the Basic Research Funds for Undergraduate Universities in Liaoning Province (SYLUGXTD07).

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. He, B.; Yuan, H.; Meng, J.; Gao, S. Brain–computer interfaces. In Neural Engineering; Springer: Berlin/Heidelberg, Germany, 2020; pp. 131–183. [Google Scholar]
  2. Rezeika, A.; Benda, M.; Stawicki, P.; Gembler, F.; Saboor, A.; Volosyak, I. Brain–computer interface spellers: A review. Brain Sci. 2018, 8, 57. [Google Scholar] [CrossRef] [PubMed]
  3. Revuelta Herrero, J.; Lozano Murciego, Á.; López Barriuso, A.; Hernández de la Iglesia, D.; Villarrubia González, G.; Corchado Rodríguez, J.M.; Carreira, R. Non intrusive load monitoring (NILM): A state of the art. In Trends in Cyber-Physical Multi-Agent Systems, The PAAMS Collection, 15th International Conference, PAAMS 2017, Porto, Portugal, 21–23 June 2017; Springer International Publishing: Berlin/Heidelberg, Germany, 2018; pp. 125–138. [Google Scholar]
  4. Cohen, M.X. Where does EEG come from and what does it mean? Trends Neurosci. 2017, 40, 208–218. [Google Scholar] [CrossRef] [PubMed]
  5. Zou, X.L.; Huang, T.J.; Wu, S. Towards a new paradigm for brain-inspired computer vision. Mach. Intell. Res. 2022, 19, 412–424. [Google Scholar] [CrossRef]
  6. Liu, B.; Huang, X.; Wang, Y.; Chen, X.; Gao, X. BETA: A large benchmark database toward SSVEP-BCI application. Front. Neurosci. 2020, 14, 627. [Google Scholar] [CrossRef] [PubMed]
  7. Chen, X.; Wang, Y.; Zhang, S.; Gao, S.; Hu, Y.; Gao, X. A novel stimulation method for multi-class SSVEP-BCI using intermodulation frequencies. J. Neural Eng. 2017, 14, 026013. [Google Scholar] [CrossRef] [PubMed]
  8. Lee, H.W. The study of mechanical arm and intelligent robot. IEEE Access 2020, 8, 119624–119634. [Google Scholar] [CrossRef]
  9. Hassanalian, M.; Abdelkefi, A. Classifications, applications, and design challenges of drones: A review. Prog. Aerosp. Sci. 2017, 91, 99–131. [Google Scholar] [CrossRef]
  10. Leaman, J.; La, H.M. A comprehensive review of smart wheelchairs: Past, present, and future. IEEE Trans. Hum.-Mach. Syst. 2017, 47, 486–499. [Google Scholar] [CrossRef]
  11. Zerafa, R.; Camilleri, T.; Falzon, O.; Camilleri, K.P. To train or not to train? A survey on training of feature extraction methods for SSVEP-based BCIs. J. Neural Eng. 2018, 15, 051001. [Google Scholar] [CrossRef]
  12. Xing, X.; Wang, Y.; Pei, W.; Guo, X.; Liu, Z.; Wang, F.; Chen, H. A high-speed SSVEP-based BCI using dry EEG electrodes. Sci. Rep. 2018, 8, 14708. [Google Scholar] [CrossRef]
  13. Fernández-Rodríguez, Á.; Velasco-Álvarez, F.; Ron-Angevin, R. Review of real brain-controlled wheelchairs. J. Neural Eng. 2016, 13, 061001. [Google Scholar] [CrossRef]
  14. Chen, X.; Liu, B.; Wang, Y.; Gao, X. A spectrally-dense encoding method for designing a high-speed SSVEP-BCI with 120 stimuli. IEEE Trans. Neural Syst. Rehabil. Eng. 2022, 30, 2764–2772. [Google Scholar] [CrossRef] [PubMed]
  15. Chen, X.; Wang, Y.; Nakanishi, M.; Gao, X.; Jung, T.P.; Gao, S. High-speed spelling with a noninvasive brain–computer interface. Proc. Natl. Acad. Sci. USA 2015, 112, E6058–E6067. [Google Scholar] [CrossRef] [PubMed]
  16. Chen, Y.; Yang, C.; Ye, X.; Chen, X.; Wang, Y.; Gao, X. Implementing a calibration-free SSVEP-based BCI system with 160 targets. J. Neural Eng. 2021, 18, 046094. [Google Scholar] [CrossRef] [PubMed]
  17. Chen, Y.; Yang, R.; Huang, M.; Wang, Z.; Liu, X. Single-source to single-target cross-subject motor imagery classification based on multisubdomain adaptation network. IEEE Trans. Neural Syst. Rehabil. Eng. 2022, 30, 1992–2002. [Google Scholar] [CrossRef] [PubMed]
  18. Chen, J.; Wang, Y.; Maye, A.; Hong, B.; Gao, X.; Engel, A.K.; Zhang, D. A spatially-coded visual brain-computer interface for flexible visual spatial information decoding. IEEE Trans. Neural Syst. Rehabil. Eng. 2021, 29, 926–933. [Google Scholar] [CrossRef] [PubMed]
  19. Zhao, X.; Wang, Z.; Zhang, M.; Hu, H. A comfortable steady state visual evoked potential stimulation paradigm using peripheral vision. J. Neural Eng. 2021, 18, 056021. [Google Scholar] [CrossRef] [PubMed]
  20. Jiang, L.; Pei, W.; Wang, Y. A user-friendly SSVEP-based BCI using imperceptible phase-coded flickers at 60 Hz. China Commun. 2022, 19, 1–14. [Google Scholar] [CrossRef]
  21. Zhou, Y.; He, S.; Huang, Q.; Li, Y. A hybrid asynchronous brain-computer interface combining SSVEP and EOG signals. IEEE Trans. Biomed. Eng. 2020, 67, 2881–2892. [Google Scholar] [CrossRef]
  22. Jiang, L.; Li, X.; Pei, W.; Gao, X.; Wang, Y. A hybrid brain-computer interface based on visual evoked potential and pupillary response. Front. Hum. Neurosci. 2022, 16, 834959. [Google Scholar] [CrossRef]
  23. Merel, J.; Carlson, D.; Paninski, L.; Cunningham, J.P. Neuroprosthetic decoder training as imitation learning. PLoS Comput. Biol. 2016, 12, e1004948. [Google Scholar] [CrossRef] [PubMed]
  24. Zhang, P.; Li, W.; Ma, X.; He, J.; Huang, J.; Li, Q. Feature-selection-based transfer learning for intracortical brain–machine interface decoding. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 29, 60–73. [Google Scholar] [CrossRef] [PubMed]
  25. Dong, Y.; Wen, X.; Gao, F.; Gao, C.; Cao, R.; Xiang, J.; Cao, R. Subject-Independent EEG Classification of Motor Imagery Based on Dual-Branch Feature Fusion. Brain Sci. 2023, 13, 1109. [Google Scholar] [CrossRef] [PubMed]
  26. Nakanishi, M.; Wang, Y.; Chen, X.; Wang, Y.T.; Gao, X.; Jung, T.P. Enhancing detection of SSVEPs for a high-speed brain speller using task-related component analysis. IEEE Trans. Biomed. Eng. 2017, 65, 104–112. [Google Scholar] [CrossRef] [PubMed]
  27. Duan, F.; Jia, H.; Sun, Z.; Zhang, K.; Dai, Y.; Zhang, Y. Decoding premovement patterns with task-related component analysis. Cogn. Comput. 2021, 13, 1389–1405. [Google Scholar] [CrossRef]
  28. Wong, C.M.; Wan, F.; Wang, B.; Wang, Z.; Nan, W.; Lao, K.F.; Rosa, A. Learning across multi-stimulus enhances target recognition methods in SSVEP-based BCIs. J. Neural Eng. 2020, 17, 016026. [Google Scholar] [CrossRef] [PubMed]
  29. Yang, C.; Han, X.; Wang, Y.; Saab, R.; Gao, S.; Gao, X. A dynamic window recognition algorithm for SSVEP-based brain–computer interfaces using a spatio-temporal equalizer. Int. J. Neural Syst. 2018, 28, 1850028. [Google Scholar] [CrossRef]
  30. Liu, B.; Chen, X.; Shi, N.; Wang, Y.; Gao, S.; Gao, X. Improving the performance of individually calibrated SSVEP-BCI by task-discriminant component analysis. IEEE Trans. Neural Syst. Rehabil. Eng. 2021, 29, 1998–2007. [Google Scholar] [CrossRef]
  31. Li, M.; Wu, L.; Xu, G.; Duan, F.; Zhu, C. A robust 3D-convolutional neural network-based electroencephalogram decoding model for the intra-individual difference. Int. J. Neural Syst. 2022, 32, 2250034. [Google Scholar] [CrossRef] [PubMed]
  32. Guney, O.B.; Oblokulov, M.; Ozkan, H. A deep neural network for ssvep-based brain-computer interfaces. IEEE Trans. Biomed. Eng. 2021, 69, 932–944. [Google Scholar] [CrossRef]
  33. Zhang, X.; Qiu, S.; Zhang, Y.; Wang, K.; Wang, Y.; He, H. Bidirectional Siamese correlation analysis method for enhancing the detection of SSVEPs. J. Neural Eng. 2022, 19, 046027. [Google Scholar] [CrossRef] [PubMed]
  34. Liu, B.; Chen, X.; Li, X.; Wang, Y.; Gao, X.; Gao, S. Align and pool for EEG headset domain adaptation (ALPHA) to facilitate dry electrode based SSVEP-BCI. IEEE Trans. Biomed. Eng. 2021, 69, 795–806. [Google Scholar] [CrossRef] [PubMed]
  35. Zhu, Y.; Li, Y.; Lu, J.; Li, P. EEGNet with ensemble learning to improve the cross-session classification of SSVEP based BCI from ear-EEG. IEEE Access 2021, 9, 15295–15303. [Google Scholar] [CrossRef]
  36. Ding, W.; Shan, J.; Fang, B.; Wang, C.; Sun, F.; Li, X. Filter bank convolutional neural network for short time-window steady-state visual evoked potential classification. IEEE Trans. Neural Syst. Rehabil. Eng. 2021, 29, 2615–2624. [Google Scholar] [CrossRef] [PubMed]
  37. Chen, J.; Zhang, Y.; Pan, Y.; Xu, P.; Guan, C. A Transformer-based deep neural network model for SSVEP classification. Neural Netw. 2023, 164, 521–534. [Google Scholar] [CrossRef] [PubMed]
  38. Yan, W.; Wu, Y.; Du, C.; Xu, G. An improved cross-subject spatial filter transfer method for SSVEP-based BCI. J. Neural Eng. 2022, 19, 046028. [Google Scholar] [CrossRef] [PubMed]
  39. Zhang, Y.; Shen, H.; Li, M.; Hu, D. Brain biometrics of steady state visual evoked potential functional networks. IEEE Trans. Cogn. Dev. Syst. 2022, 15, 1694–1701. [Google Scholar] [CrossRef]
  40. Seymour, K.M.; Higginson, C.I.; DeGoede, K.M.; Bifano, M.K.; Orr, R.; Higginson, J.S. Cellular telephone dialing influences kinematic and spatiotemporal gait parameters in healthy adults. J. Mot. Behav. 2016, 48, 535–541. [Google Scholar] [CrossRef] [PubMed]
  41. Allison, W. Unplanned obsolescence: Interpreting the automatic telephone dialing system after the smartphone epoch. Mich. L. Rev. 2020, 119, 147. [Google Scholar] [CrossRef]
  42. Sakuntharaj, R.; Mahesan, S. A novel hybrid approach to detect and correct spelling in Tamil text. In Proceedings of the 2016 IEEE international conference on information and automation for sustainability (ICIAfS), Galle, Sri Lanka, 16–19 December 2016; pp. 1–6. [Google Scholar]
  43. Kasmaiee, S.; Kasmaiee, S.; Homayounpour, M. Correcting spelling mistakes in Persian texts with rules and deep learning methods. Sci. Rep. 2023, 13, 19945. [Google Scholar] [CrossRef]
  44. Ciancio, A.L.; Cordella, F.; Barone, R.; Romeo, R.A.; Bellingegni, A.D.; Sacchetti, R.; Zollo, L. Control of prosthetic hands via the peripheral nervous system. Front. Neurosci. 2016, 10, 116. [Google Scholar] [CrossRef] [PubMed]
  45. Liu, X.; Yao, L.; Ng, K.A.; Li, P.; Wang, W.; Je, M.; Xu, Y.P. A power-efficient current-mode neural/muscular stimulator design for peripheral nerve prosthesis. Int. J. Circuit Theory Appl. 2018, 46, 692–706. [Google Scholar] [CrossRef]
  46. Wei, Y.; Li, W.; An, D.; Li, D.; Jiao, Y.; Wei, Q. Equipment and intelligent control system in aquaponics: A review. IEEE Access 2019, 7, 169306–169326. [Google Scholar] [CrossRef]
  47. Zhang, H.; Li, Q.; Meng, S.; Xu, Z.; Lv, C.; Zhou, C. A safety fault diagnosis method on industrial intelligent control equipment. Wirel. Netw. 2024, 30, 4287–4299. [Google Scholar] [CrossRef]
  48. Lee, Y.C.; Lin, W.C.; Cherng, F.Y.; Ko, L.W. A visual attention monitor based on steady-state visual evoked potential. IEEE Trans. Neural Syst. Rehabil. Eng. 2015, 24, 399–408. [Google Scholar] [CrossRef] [PubMed]
  49. Yang, Y.; Xiu, L.; Chen, X.; Yu, G. Do emotions conquer facts? A CCME model for the impact of emotional information on implicit attitudes in the post-truth era. Humanit. Soc. Sci. Commun. 2023, 10, 1–7. [Google Scholar] [CrossRef]
  50. Kuo, Y.W.; Jung, T.P.; Shieh, H.P.D. Polychromatic SSVEP Stimuli with Subtle Flickering Adapted to Brain-Display Interactions. J. Neural Eng. 2017, 14, 016018. [Google Scholar]
  51. Yoon, H.S.; Park, K.R. Cyclegan-based deblurring for gaze tracking in vehicle environments. IEEE Access 2020, 8, 137418–137437. [Google Scholar] [CrossRef]
52. Nakanishi, M.; Wang, Y.T.; Jung, T.P. Detecting Glaucoma with a Portable Brain-Computer Interface for Objective Assessment of Visual Function Loss. JAMA Ophthalmol. 2017, 135, 550–557. [Google Scholar] [CrossRef]
  53. Ouyang, R.; Jin, Z.; Tang, S.; Fan, C.; Wu, X. Low-quality training data detection method of EEG signals for motor imagery BCI system. J. Neurosci. Methods 2022, 376, 109607. [Google Scholar] [CrossRef]
  54. Phothisonothai, M. An investigation of using SSVEP for EEG-based user authentication system. In Proceedings of the 2015 IEEE Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), Hong Kong, China, 16–19 December 2015; pp. 923–926. [Google Scholar]
  55. Zhao, H.; Wang, Y.; Liu, Z.; Pei, W.; Chen, H. Individual identification based on code-modulated visual-evoked potentials. IEEE Trans. Inf. Forensics Secur. 2019, 14, 3206–3216. [Google Scholar] [CrossRef]
  56. Shi, N.; Wang, L.; Chen, Y.; Yan, X.; Yang, C.; Wang, Y.; Gao, X. Steady-state visual evoked potential (SSVEP)-based brain–computer interface (BCI) of Chinese speller for a patient with amyotrophic lateral sclerosis: A case report. J. Neurorestoratol. 2020, 8, 40–52. [Google Scholar] [CrossRef]
  57. Yang, C.; Yan, X.; Wang, Y.; Chen, Y.; Zhang, H.; Gao, X. Spatio-temporal equalization multi-window algorithm for asynchronous SSVEP-based BCI. J. Neural Eng. 2021, 18, 0460b7. [Google Scholar] [CrossRef] [PubMed]
  58. Wong, C.M.; Wang, Z.; Nakanishi, M.; Wang, B.; Rosa, A.; Chen, C.P.; Wan, F. Online adaptation boosts SSVEP-based BCI performance. IEEE Trans. Biomed. Eng. 2021, 69, 2018–2028. [Google Scholar] [CrossRef] [PubMed]
  59. Marinou, A.; Saunders, R.; Casson, A.J. Flexible inkjet printed sensors for behind-the-ear SSVEP EEG monitoring. In Proceedings of the 2020 IEEE International Conference on Flexible and Printable Sensors and Systems (FLEPS), Manchester, UK, 16–19 August 2020; pp. 1–4. [Google Scholar]
  60. Liang, L.; Bin, G.; Chen, X.; Wang, Y.; Gao, S.; Gao, X. Optimizing a left and right visual field biphasic stimulation paradigm for SSVEP-based BCIs with hairless region behind the ear. J. Neural Eng. 2021, 18, 066040. [Google Scholar] [CrossRef] [PubMed]
  61. Chen, X.; Huang, X.; Wang, Y.; Gao, X. Combination of augmented reality based brain-computer interface and computer vision for high-level control of a robotic arm. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 3140–3147. [Google Scholar] [CrossRef] [PubMed]
  62. Chen, L.; Chen, P.; Zhao, S.; Luo, Z.; Chen, W.; Pei, Y.; Yin, E. Adaptive asynchronous control system of robotic arm based on augmented reality-assisted brain–computer interface. J. Neural Eng. 2021, 18, 066005. [Google Scholar] [CrossRef] [PubMed]
  63. Zhang, K.; Xu, G.; Du, C.; Wu, Y.; Zheng, X.; Zhang, S.; Chen, R. Weak feature extraction and strong noise suppression for SSVEP-EEG based on chaotic detection technology. IEEE Trans. Neural Syst. Rehabil. Eng. 2021, 29, 862–871. [Google Scholar] [CrossRef] [PubMed]
  64. Yang, D.; Nguyen, T.H.; Chung, W.Y. A bipolar-channel hybrid brain-computer interface system for home automation control utilizing steady-state visually evoked potential and eye-blink signals. Sensors 2020, 20, 5474. [Google Scholar] [CrossRef]
  65. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
  66. Asikainen, A.; Kolehmainen, M.; Ruuskanen, J.; Tuppurainen, K. Structure-based classification of active and inactive estrogenic compounds by decision tree, LVQ and kNN methods. Chemosphere 2006, 62, 658–673. [Google Scholar] [CrossRef] [PubMed]
  67. Bandyopadhyay, S.; Pal, S.K. Relation between VGA-classifier and MLP: Determination of network architecture. Fundam. Informaticae 1999, 37, 177–199. [Google Scholar] [CrossRef]
  68. Wong, W.T.; Hsu, S.H. Application of SVM and ANN for image retrieval. Eur. J. Oper. Res. 2006, 173, 938–950. [Google Scholar] [CrossRef]
  69. Hinton, G.E.; Osindero, S.; Teh, Y.W. A fast learning algorithm for deep belief nets. Neural Comput. 2006, 18, 1527–1554. [Google Scholar] [CrossRef] [PubMed]
  70. Hinton, G.E.; Salakhutdinov, R.R. Reducing the dimensionality of data with neural networks. Science 2006, 313, 504–507. [Google Scholar] [CrossRef]
  71. Hinton, G.E.; Srivastava, N.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R.R. Improving neural networks by preventing co-adaptation of feature detectors. arXiv 2012, arXiv:1207.0580. [Google Scholar]
  72. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1–9. [Google Scholar] [CrossRef]
  73. Krizhevsky, A. One weird trick for parallelizing convolutional neural networks. arXiv 2014, arXiv:1404.5997. [Google Scholar]
  74. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  75. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30, 1–11. [Google Scholar]
  76. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv 2018, arXiv:1810.04805. [Google Scholar]
  77. Rejer, I.; Cieszyński, Ł. Independent component analysis for a low-channel SSVEP-BCI. Pattern Anal. Appl. 2019, 22, 47–62. [Google Scholar] [CrossRef]
  78. Du, Y.; Zhao, X. Visual stimulus color effect on SSVEP-BCI in augmented reality. Biomed. Signal Process. Control 2022, 78, 103906. [Google Scholar] [CrossRef]
  79. Yan, W.; Du, C.; Wu, Y.; Zheng, X.; Xu, G. SSVEP-EEG denoising via image filtering methods. IEEE Trans. Neural Syst. Rehabil. Eng. 2021, 29, 1634–1643. [Google Scholar] [CrossRef] [PubMed]
  80. Yan, W.; He, B.; Zhao, J.; Wu, Y.; Du, C.; Xu, G. Frequency domain filtering method for SSVEP-EEG preprocessing. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 2079–2089. [Google Scholar] [CrossRef] [PubMed]
  81. Kwak, N.S.; Müller, K.R.; Lee, S.W. A convolutional neural network for steady state visual evoked potential classification under ambulatory environment. PLoS ONE 2017, 12, e0172578. [Google Scholar] [CrossRef] [PubMed]
  82. Zhang, X.; Xu, G.; Mou, X.; Ravi, A.; Li, M.; Wang, Y.; Jiang, N. A convolutional neural network for the detection of asynchronous steady state motion visual evoked potential. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 1303–1311. [Google Scholar] [CrossRef] [PubMed]
83. Ravi, A.; Beni, N.H.; Manuel, J.; Jiang, N. Comparing user-dependent and user-independent training of CNN for SSVEP BCI. J. Neural Eng. 2020, 17, 026028. [Google Scholar] [CrossRef] [PubMed]
  84. Islam, M.R.; Molla, M.K.I.; Nakanishi, M.; Tanaka, T. Unsupervised frequency-recognition method of SSVEPs using a filter bank implementation of binary subband CCA. J. Neural Eng. 2017, 14, 026007. [Google Scholar] [CrossRef]
85. Shao, X.; Lin, M. Filter bank temporally local canonical correlation analysis for short time window SSVEPs classification. Cogn. Neurodyn. 2020, 14, 689–696. [Google Scholar] [CrossRef]
  86. Zhao, D.; Wang, T.; Tian, Y.; Jiang, X. Filter bank convolutional neural network for SSVEP classification. IEEE Access 2021, 9, 147129–147141. [Google Scholar] [CrossRef]
  87. Xu, D.; Tang, F.; Li, Y.; Zhang, Q.; Feng, X. FB-CCNN: A Filter Bank Complex Spectrum Convolutional Neural Network with Artificial Gradient Descent Optimization. Brain Sci. 2023, 13, 780. [Google Scholar] [CrossRef] [PubMed]
  88. Li, R.; Principe, J.C. Blinking artifact removal in cognitive EEG data using ICA. In Proceedings of the 2006 International Conference of the IEEE Engineering in Medicine and Biology Society, New York, NY, USA, 30 August–3 September 2006; pp. 5273–5276. [Google Scholar]
  89. Arab, M.R.; Suratgar, A.A.; Ashtiani, A.R. Electroencephalogram signals processing for topographic brain mapping and epilepsies classification. Comput. Biol. Med. 2010, 40, 733–739. [Google Scholar] [CrossRef] [PubMed]
90. Mannan, M.M.N.; Kim, S.; Jeong, M.Y.; Kamran, M.A. Hybrid EEG–eye tracker: Automatic identification and removal of eye movement and blink artifacts from electroencephalographic signal. Sensors 2016, 16, 241. [Google Scholar] [CrossRef] [PubMed]
  91. Shukla, P.K.; Roy, V.; Shukla, P.K.; Chaturvedi, A.K.; Saxena, A.K.; Maheshwari, M.; Pal, P.R. An advanced EEG motion artifacts eradication algorithm. Comput. J. 2023, 66, 429–440. [Google Scholar] [CrossRef]
  92. Rizal, A.; Hadiyoso, S.; Ramdani, A.Z. FPGA-based implementation for real-time epileptic EEG classification using Hjorth descriptor KNN. Electronics 2022, 11, 3026. [Google Scholar] [CrossRef]
93. Yang, P.; Wang, J.; Zhao, H.; Li, R. MLP with Riemannian covariance for motor imagery based EEG analysis. IEEE Access 2020, 8, 139974–139982. [Google Scholar] [CrossRef]
  94. Janapati, R.; Dalal, V.; Desai, U.; Sengupta, R.; Kulkarni, S.A.; Hemanth, D.J. Classification of Visually Evoked Potential EEG Using Hybrid Anchoring-based Particle Swarm Optimized Scaled Conjugate Gradient Multi-Layer Perceptron Classifier. Int. J. Artif. Intell. Tools 2023, 32, 2340016. [Google Scholar] [CrossRef]
  95. Ma, P.; Dong, C.; Lin, R.; Ma, S.; Liu, H.; Lei, D.; Chen, X. Effect of Local Network Characteristics on the Performance of the SSVEP Brain-Computer Interface. IRBM 2023, 44, 100781. [Google Scholar] [CrossRef]
  96. Xu, D.; Tang, F.; Li, Y.; Zhang, Q.; Feng, X. An analysis of deep learning models in SSVEP-based BCI: A survey. Brain Sci. 2023, 13, 483. [Google Scholar] [CrossRef]
  97. Wang, Z.; Wong, C.M.; Wang, B.; Feng, Z.; Cong, F.; Wan, F. Compact Artificial Neural Network Based on Task Attention for Individual SSVEP Recognition with Less Calibration. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 2525–2534. [Google Scholar] [CrossRef]
  98. Tarafdar, K.K.; Pradhan, B.K.; Nayak, S.K.; Khasnobish, A.; Chakravarty, S.; Ray, S.S.; Pal, K. Data mining based approach to study the effect of consumption of caffeinated coffee on the generation of the steady-state visual evoked potential signals. Comput. Biol. Med. 2019, 115, 103526. [Google Scholar] [CrossRef] [PubMed]
  99. Hashemnia, S.; Grasse, L.; Soni, S.; Tata, M.S. Human EEG and recurrent neural networks exhibit common temporal dynamics during speech recognition. Front. Syst. Neurosci. 2021, 15, 617605. [Google Scholar] [CrossRef] [PubMed]
  100. Samee, N.A.; Mahmoud, N.F.; Aldhahri, E.A.; Rafiq, A.; Muthanna, M.S.A.; Ahmad, I. RNN and BiLSTM fusion for accurate automatic epileptic seizure diagnosis using EEG signals. Life 2022, 12, 1946. [Google Scholar] [CrossRef] [PubMed]
  101. Li, M.; Ma, C.; Dang, W.; Wang, R.; Liu, Y.; Gao, Z. DSCNN: Dilated shuffle CNN model for SSVEP signal classification. IEEE Sens. J. 2022, 22, 12036–12043. [Google Scholar] [CrossRef]
  102. Rajalakshmi, A.; Sridhar, S.S. Classification of yoga, meditation, combined yoga–meditation EEG signals using L-SVM, KNN, and MLP classifiers. Soft Comput. 2024, 28, 4607–4619. [Google Scholar] [CrossRef]
  103. Choubey, H.; Pandey, A. A combination of statistical parameters for the detection of epilepsy and EEG classification using ANN and KNN classifier. Signal Image Video Process. 2021, 15, 475–483. [Google Scholar] [CrossRef]
  104. Joshi, S.; Joshi, F. Human Emotion Classification based on EEG Signals Using Recurrent Neural Network And KNN. arXiv 2022, arXiv:2205.08419. [Google Scholar]
  105. Al-Quraishi, M.S.; Tan, W.H.; Elamvazuthi, I.; Ooi, C.P.; Saad, N.M.; Al-Hiyali, M.I.; Ali, S.S.A. Cortical signals analysis to recognize intralimb mobility using modified RNN and various EEG quantities. Heliyon 2024, 10, e30406. [Google Scholar] [CrossRef] [PubMed]
  106. Tocoglu, M.A.; Ozturkmenoglu, O.; Alpkocak, A. Emotion analysis from Turkish tweets using deep neural networks. IEEE Access 2019, 7, 183061–183069. [Google Scholar] [CrossRef]
  107. Xiong, H.; Song, J.; Liu, J.; Han, Y. Deep transfer learning-based SSVEP frequency domain decoding method. Biomed. Signal Process. Control 2024, 89, 105931. [Google Scholar] [CrossRef]
  108. Cui, H.; Chi, X.; Wang, L.; Chen, X. A High-Rate Hybrid BCI System Based on High-Frequency SSVEP and sEMG. IEEE J. Biomed. Health Inform. 2023, 27, 5688–5698. [Google Scholar] [CrossRef] [PubMed]
  109. Wang, M.; Li, R.; Zhang, R.; Li, G.; Zhang, D. A wearable SSVEP-based BCI system for quadcopter control using head-mounted device. IEEE Access 2018, 6, 26789–26798. [Google Scholar] [CrossRef]
110. Veeranki, Y.R.; Diaz, L.R.M.; Swaminathan, R.; Posada-Quintero, H.F. Non-Linear Signal Processing Methods for Automatic Emotion Recognition using Electrodermal Activity. IEEE Sens. J. 2024, 24, 8079–8093. [Google Scholar] [CrossRef]
Figure 1. The block diagram of the system structure based on SSVEP-BCI.
Figure 2. The 66 pieces of literature studied in depth in this survey: (a) the platform used for the literature search; (b) keywords of the literature.
Figure 3. The structure of the survey.
Figure 4. Character speller for 120-target SSVEP-type BCI systems.
Figure 5. The structure of a CNN (an illustrative code sketch follows this figure list).
Figure 6. The principle block diagram of bi-SiamCA.
Figure 7. Application of the SSVEP-BCI.
Figure 8. Block diagram of the FB-CCNN structure.
Figure 9. A simple 4-3-2 ANN structural model.
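To make the generic architecture summarized in Figure 5 concrete, the following is a minimal, illustrative PyTorch sketch of a CNN classifier for single-trial SSVEP epochs. It is a sketch under stated assumptions rather than any model surveyed above: the electrode count (8), epoch length (250 samples, roughly 1 s at 250 Hz), number of stimulus targets (12), kernel sizes, and layer widths are all placeholder choices made only for demonstration.

```python
# Minimal, illustrative CNN for SSVEP epoch classification (PyTorch).
# All dimensions below are assumptions for demonstration, not values from the survey.
import torch
import torch.nn as nn


class SimpleSSVEPCNN(nn.Module):
    def __init__(self, n_channels: int = 8, n_samples: int = 250, n_targets: int = 12):
        super().__init__()
        self.features = nn.Sequential(
            # Spatial convolution: combines the EEG electrodes into learned spatial filters.
            nn.Conv2d(1, 16, kernel_size=(n_channels, 1)),
            nn.BatchNorm2d(16),
            nn.ELU(),
            # Temporal convolution: learns frequency-selective filters along the time axis.
            nn.Conv2d(16, 32, kernel_size=(1, 11), padding=(0, 5)),
            nn.BatchNorm2d(32),
            nn.ELU(),
            # Downsample in time and regularize before the classifier.
            nn.AvgPool2d(kernel_size=(1, 4)),
            nn.Dropout(0.5),
        )
        self.classifier = nn.Linear(32 * (n_samples // 4), n_targets)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, 1, n_channels, n_samples): one preprocessed EEG epoch per sample.
        z = self.features(x)
        return self.classifier(z.flatten(start_dim=1))


if __name__ == "__main__":
    model = SimpleSSVEPCNN()
    dummy_epochs = torch.randn(4, 1, 8, 250)   # a batch of 4 synthetic epochs
    logits = model(dummy_epochs)
    print(logits.shape)                        # expected: torch.Size([4, 12])
```

The spatial-then-temporal convolution ordering follows common practice in EEG-oriented CNNs: the first layer learns spatial filters across electrodes, and later layers learn temporal filters over the resulting virtual channels. In a real SSVEP pipeline, the band-pass-filtered and epoch-segmented signals described in the preprocessing stage would replace the synthetic tensors used here.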
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
