Systematic Review

Connecting the Brain with Augmented Reality: A Systematic Review of BCI-AR Systems

by Georgios Prapas, Pantelis Angelidis, Panagiotis Sarigiannidis, Stamatia Bibi and Markos G. Tsipouras *

Department of Electrical and Computer Engineering, University of Western Macedonia, Campus ZEP Kozani, 50100 Kozani, Greece

* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(21), 9855; https://doi.org/10.3390/app14219855
Submission received: 21 August 2024 / Revised: 30 September 2024 / Accepted: 17 October 2024 / Published: 28 October 2024
(This article belongs to the Section Applied Neuroscience and Neural Engineering)

Abstract

The increasing integration of brain–computer interfaces (BCIs) with augmented reality (AR) presents new possibilities for immersive and interactive environments, particularly through the use of head-mounted displays (HMDs). Despite the growing interest, a comprehensive understanding of BCI-AR systems is still emerging. This systematic review aims to synthesize existing research on the use of BCIs for controlling AR environments via HMDs, highlighting the technological advancements and challenges in this domain. An extensive search across electronic databases, including IEEE Xplore, PubMed, and Scopus, was conducted following the PRISMA guidelines, resulting in 41 studies eligible for analysis. This review identifies key areas for future research and potential limitations, and offers insights into the evolving trends in BCI-AR systems, contributing to the development of more robust and user-friendly applications.

1. Introduction

A brain–computer interface (BCI) is a system that directly interprets the intentions of a person based on their brain activity [1,2]. It enables users to manipulate or control objects [3] in their environment using only their thoughts. Typically, BCIs establish a direct connection between the electrical signals in the brain and an external device, such as a computer, an electric wheelchair, a head-mounted display, or a robotic limb. These interfaces are primarily used for exploring, mapping, assisting, or enhancing human cognitive or sensory-motor functions. The main components of a brain–computer interface are usually the following (a minimal code sketch follows the list):
  • Brain activity measurement device: This can take the form of a headset, cap, or headband equipped with specialized sensors. These sensors detect and record the signals emitted by the brain.
  • Computer system for processing and analyzing brain activity: The recorded brain signals are processed and analyzed by BCI software. This software employs specialized methods and algorithms to interpret the user’s intended actions based on the brain activity.
  • Application control: Once the system has identified the user’s desired action, it sends a signal to the relevant application or tool to execute that command.
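To make these components concrete, the following minimal sketch wires the three stages together for a generic EEG stream. It is illustrative only: the 250 Hz sampling rate, the 8-channel layout, the alpha-band heuristic, and all function names are assumptions rather than a description of any system reviewed here.

```python
# Minimal end-to-end BCI loop matching the three components listed above.
# Every device parameter and the alpha-band heuristic are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # sampling rate in Hz (assumption)

def acquire_epoch(n_channels=8, n_samples=FS):
    """Stand-in for the measurement device: one second of 'EEG'."""
    return np.random.randn(n_channels, n_samples)

def decode_intent(epoch):
    """Stand-in for the processing stage: band-pass to the alpha band and
    pick the channel with the highest power as a toy 'intended action'."""
    b, a = butter(4, [8, 12], btype="bandpass", fs=FS)
    filtered = filtfilt(b, a, epoch, axis=1)
    return int(np.argmax((filtered ** 2).mean(axis=1)))

def execute_command(command):
    """Stand-in for application control: forward the decoded command."""
    print(f"executing command {command}")

execute_command(decode_intent(acquire_epoch()))
```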
There are many alternative techniques used to measure brain signals, and these can be categorized into invasive, semi-invasive, and non-invasive techniques. Invasive BCIs involve the direct implantation of devices into the brain’s grey matter during neurosurgery. While these devices provide the highest quality of signals, they are prone to issues such as scar tissue formation, which can weaken the signals or trigger an immune response due to the presence of a foreign object in the brain.
Semi-invasive BCIs, on the other hand, are implanted inside the skull but positioned outside the brain’s grey matter. These devices offer better signal resolution compared to non-invasive BCIs. In addition, the risk of scar tissue formation within the brain is lower in semi-invasive BCIs than in fully invasive ones.
The least invasive method uses a set of electrodes attached to the scalp, a technique known as electroencephalography (EEG) [4]. These electrodes can detect and record brain signals. Regardless of the placement of the electrodes, the underlying mechanism remains the same: the electrodes measure small voltage differences generated by neuronal activity. The signal is then amplified and filtered. Although the electrical signal is partially blocked and distorted by the skull, this non-invasive method is more widely accepted due to its relative advantages over the other techniques mentioned. The most important advantage is the safety of the procedure, as the electrodes do not require surgery to be placed [5]. Additionally, non-invasive BCIs are widely accessible and easy to use, making them suitable for a larger population without requiring extensive training. They do not restrict mobility or physical movement, allowing users to engage in various activities while using the interface.
BCI systems examine the brain’s electrical activity, which can be recorded using invasive, semi-invasive, or non-invasive techniques, such as electrodes positioned on the scalp. The signals are amplified and converted into digital form using preprocessing methods, and the relevant features of the signals are extracted, processed, and translated into commands capable of controlling external devices or applications. BCI systems can be categorized into three types: active, reactive, and passive. In active BCI systems, users participate in mental tasks that generate specific patterns of EEG signals. These patterns are then detected by the BCI system. The most commonly used method involves motor imagery (MI), where participants imagine moving body parts without physically carrying out the movements [6]. On the other hand, reactive BCI involves regulating brain activity in response to external stimuli provided by the BCI system. The prevalent paradigm in this category is the P300 speller, where symbols or letters are displayed sequentially on a screen, and participants focus their attention on the desired symbol. Passive BCI [7] involves solely monitoring the EEG activity of users without requiring them to engage in any mental tasks. In passive systems, the EEG activity is not intentionally manipulated for a specific purpose but rather used to extract information such as the user’s emotional state. The BCI focus of this paper is presented in Figure 1.
Augmented reality (AR) is an interactive encounter with the actual surroundings in which computer-generated perceptual information enhances the objects present in the real world. This enhancement can involve multiple senses, such as sight, sound, and touch. AR can be described as a system that combines elements of the real and virtual worlds, allowing for real-time interaction and accurate 3D alignment between virtual and real objects. The additional sensory information can either enhance the natural environment (add virtual content in real-world elements) or mask it (hide or override real-world elements). The AR experience seamlessly blends with the physical world, creating an immersive perception within the real environment.
Smart glasses offer two primary methods for displaying AR content: optical see-through and video see-through. Video see-through systems utilize cameras embedded within the head-mounted device to present video feeds. This is the conventional approach employed by smartphones for AR applications. Video see-through is particularly advantageous when remote experiences are desired, such as controlling a robot to fix something from a distant location or virtually exploring a potential vacation destination. It is also beneficial for utilizing image enhancement systems like night-vision devices. On the other hand, optical see-through systems combine computer-generated imagery with a real-world view seen through the glasses via a semi-transparent mirror. This method is useful in scenarios where concerns arise about potential power failures. An optical see-through solution allows users to maintain visual perception in every situation. Additionally, if high image quality is a priority, portable cameras and fully immersive head-mounted displays cannot match the experience of direct viewing provided by optical see-through technology.
Various review attempts have been made in the literature to demonstrate the brain’s connection with alternative realities. However, most of them focus on virtual reality and distinct applications like patient rehabilitation. A comparative analysis is presented in Table 1 to showcase the existing review attempts.
Lotte et al. [8] conducted a review in 2012, highlighting the existing BCI-VR applications. The articles were categorized according to the neurophysiological signal used to drive the BCI (MI, P300, SSVEP).
Kohli et al. [9] reviewed the use cases of virtual and augmented reality-based BCI applications for smart cities. The review was conducted in 2022, and the papers included were divided into two main categories depending on the type of reality (virtual or augmented).
Angrisani et al. [10] provided a comprehensive picture of the current state-of-the-art SSVEP BCIs in AR environments. The search was conducted on the Scopus database using the AR and SSVEP keywords and covering the last 6 years (2018–2023). Out of the 56 articles retrieved, 20 of them were thoroughly compared based on EEG acquisition, EEG processing, and BCI application.
Nwagu et al. [11] conducted a systematic review focusing on EEG-based BCI applications in immersive environments. The search was performed in four online databases (ACM, IEEE Xplore, PubMed, and Scopus), resulting in 2982 papers. The final number of articles to be assessed was 76, and they covered the last decade (2012 to 2022). The structure of the results contained the following sections: trend by year, application domains, trend by country, features of the VR/AR application, BCI paradigms, EEG acquisition, EEG signal processing, BCI interaction tasks, system evaluation, study findings, and challenges.
This work presents a systematic review of EEG-based BCI applications in AR environments. To the best of our knowledge, this is the first systematic review focusing explicitly on immersive AR environments projected on HMDs. This review spans from 2012 to 2024 and exclusively includes applications involving only healthy participants. A search was performed in three online databases (IEEE Xplore, PubMed, and Scopus) retrieving 730 search results. The final 41 papers included for analysis were divided into three categories based on the BCI paradigm (reactive, passive, and active).
This systematic review investigates the progress and trends in the domain of BCI-AR systems. The primary goal is to conduct a comprehensive analysis of the existing literature and point out crucial discoveries and emerging patterns. The main objective is to identify innovative directions and potential future developments through the synthesis of available knowledge.

2. Research Methodology

A systematic review is an approach that involves the identification, evaluation, and interpretation of all relevant research findings referring to a specific research question or topic area. The primary objective is to synthesize the existing evidence in a reasonable, thorough, and unbiased manner. The authors implemented a comprehensive screening procedure to assess the eligibility of the articles and evaluated the risk of bias in all included studies. Discrepancies among the researchers were addressed through discussions, leading to an agreement.

2.1. Search Strategy

The preferred reporting items for systematic review and meta-analysis (PRISMA) [12] were used to direct the reporting of the search for articles, the extraction of data, and the synthesis of results. A broad search was conducted in the following digital databases: IEEE Xplore (145 results), PubMed (139), and Scopus (446). The search was performed between June 2024 and early July 2024, covering 13 years of publication (2012–2024) to showcase the most recent BCI technology. The search string used to retrieve the relevant literature was the following: ((“BCI” OR “brain-computer interface”) AND (“mixed reality” OR “augmented reality” OR “MR” OR “AR”)). The next step was to exclude duplicate publications using the Rayyan software [13], a free web tool designed to help researchers with systematic reviews.

2.2. Selection Criteria

During the review process, articles were assessed for inclusion based on specific criteria. The first requirement was that the articles described a BCI system designed using EEG technology. Additionally, articles were only considered if the stimuli used in the study were presented on a head-mounted device. Finally, only articles that involved participants who were healthy and had no history of disorders or pathology were included in the analysis.
Also, several exclusion criteria were applied to ensure that the included studies were relevant and met the requirements of the research question. First, review articles, case studies, qualitative research, and any other secondary articles were excluded. Studies that included participants with a pathological history were excluded, as were studies that did not involve mixed or augmented reality stimuli. Additionally, studies that used biological measures other than an electroencephalogram (EEG) as the primary research outcome were excluded from the review process. Finally, full texts that were published in languages other than English were excluded from the review.

2.3. Study Selection

A comprehensive search of databases (presented in Figure 2) and other sources yielded a total of 730 search results. After removing duplicate entries, 356 studies remained for a title and abstract screening, and their eligibility was assessed based on predefined inclusion criteria. From this screening process, 41 papers were identified for further analysis, and their full texts were thoroughly examined. The included papers comprise 15 from conferences and 26 from journals. Among the journal papers, the sources are diversified across 19 different journals, with 13 classified as Q1, 4 as Q2, and 2 as Q3. The papers were divided into three categories based on the type of BCI (active, reactive, or passive).

3. Study Statistics

3.1. Research Attributes

The following subsection presents graphs and statistics pertaining to the following attributes: publication year, number of participants, and BCI paradigm.

3.1.1. Publication Year

Articles included in the analysis were limited to those published from 2012 onward. However, none of the articles identified during the screening process prior to 2014 met the predefined criteria for inclusion. The median publication year (Figure 3) of the selected studies was 2022 (mean = 2020.73; SD = 2.42; range = 2014–2024).

3.1.2. Participants

The mean number of participants across all included articles (Figure 4) was 12.07 (SD = 7.03, range = 1–35).

3.1.3. BCI Paradigm

In this section, an overview of the distribution of studies based on the BCI paradigms is provided. Table 2 summarizes the number of studies categorized under each paradigm, and Figure 5 visualizes this distribution.

3.1.4. EEG Devices

In addition to the participant statistics and BCI paradigms, it is also important to consider the EEG devices used across the studies. These devices vary in terms of electrode count and cost, which affect the precision and accessibility of brain signal recordings. Table 3 provides a comparison of the EEG devices used in the included studies, classified by the number of electrodes and their approximate price.

4. Results

In this section, a comprehensive summary of the studies in the literature is presented. The results section is divided into three categories: reactive, passive, and active BCI. For each category, a brief synopsis of each work is presented along with information about signal preprocessing, feature extraction methods, classification techniques, and evaluation metrics. One study, which employs a hybrid approach by incorporating elements of both active and reactive BCI, has been included in the active BCI category, as it primarily aligns with the characteristics of active BCI.

4.1. Reactive BCI

This category focuses on reactive BCI systems and consists of 32 studies presented in Table 4, which are grouped into three subcategories: home automation and control, human–robot interaction and control, and IoT applications.

4.1.1. Home Automation and Control

Putze et al. [14] developed the HoloSSVEP system, which utilizes the HoloLens HMD’s AR camera combined with an eye tracker to control a smart home. To record the EEG signals, a g.Nautilus headset consisting of three electrodes was employed. The EEG signal was filtered using a bandpass filter between 1 and 35 Hz and then processed with canonical correlation analysis (CCA). To evaluate the experiment, 12 subjects tried to control 4 different systems (office lights, window blinds, a TV, and a music player) with four different control options. Classification accuracy was the evaluation metric applied for this work. Although the accuracy of the system was high, users were concerned about the comfort of wearing two headsets. With a similar objective, Park et al. [15,29] implemented a home appliance control system by combining an EEG-based BCI with the HoloLens AR HMD. They tested four different stimulus types (three under an AR environment and one on an LCD monitor). A BioSemi ActiveTwo system with 33 electrodes was used to record the EEG data. The EEG data were then downsampled to 512 Hz, and a bandpass filter was applied between 2 and 54 Hz. The extended multivariate synchronization index (EMSI) was employed for the classification process. To evaluate their experiment, 17 participants took part in the online experiment, which consisted of controlling three home appliances with four available commands. Classification accuracy and information transfer rate (ITR) were the evaluation metrics for this experiment, with respective values of 92.8% and 37.4 bits/min. In a later attempt, they also integrated an eye tracker based on electrooculograms (EOGs) into their system. The system’s performance and usability were assessed with 13 individuals over the age of 65. The EEG data were collected from 12 electrode positions using a BioSemi ActiveTwo system equipped with 12 active electrodes. Afterward, the data were downsampled to 512 Hz and subjected to bandpass filtering with cutoff frequencies of 2–54 Hz. An EMSI algorithm was used to classify the SSVEP responses. In this experiment, the evaluation criteria used were classification accuracy and ITR, achieving values of 88.8% and 34 bits/min, respectively.
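Several of the home-control studies above detect the attended flicker frequency with canonical correlation analysis. The sketch below shows the common reference-signal formulation of CCA-based SSVEP detection; the sampling rate, candidate frequencies, and harmonic count are assumptions, and scikit-learn’s CCA stands in for whatever implementation the cited authors used.

```python
# CCA-based SSVEP detection sketch: correlate a multi-channel EEG epoch with
# sine/cosine references at each candidate flicker frequency and pick the
# best match. All numeric choices here are illustrative assumptions.
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 250  # sampling rate in Hz (assumption)

def make_reference(freq, n_samples, n_harmonics=2):
    """Sine/cosine reference matrix for one stimulation frequency."""
    t = np.arange(n_samples) / FS
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)  # shape: (n_samples, 2 * n_harmonics)

def classify_ssvep(epoch, freqs=(8.0, 10.0, 12.0, 15.0)):
    """epoch: (n_samples, n_channels) EEG. Returns the index of the candidate
    frequency with the largest first canonical correlation."""
    scores = []
    for f in freqs:
        cca = CCA(n_components=1)
        u, v = cca.fit_transform(epoch, make_reference(f, len(epoch)))
        scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    return int(np.argmax(scores))
```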

4.1.2. Human–Robot Interaction and Control

Si-Mohammed et al. [17] examined the combination of BCI and AR technology through four user studies and validated their approach by controlling a mobile robot through the HoloLens device. To record the EEG data, a g.USB amplifier with six electrodes was used. The multi-class common spatial pattern (CSP) was used to filter the data, and linear discriminant analysis (LDA) was used to classify the signal into one of three classes. The robot was controlled by three directional commands: forward, rightward, and leftward.
Angrisani et al. made several attempts [18,23,31] to integrate BCI with AR. At first, they explored the viability of combining Epson Moverio BT-200 smart glasses with BCI to enhance human–robot interaction in the Industry 4.0 framework. Single-channel Olimex EEG-SMT was employed to acquire the EEG signals, and the fast Fourier transform (FFT) of the EEG signal was calculated and visualized for the frequency range of interest. One participant was tested on the two-class AR-BCI system featuring a simultaneous display of two flickering icons. In a later attempt, they proposed a wearable monitoring system for inspection in the framework of Industry 4.0. They combined Olimex EEG-SMT, using one electrode, with Epson Moverio BT-200 AR HMD (Epson, Suwa, Nagano, Japan). To process the EEG signal, a simple power spectral density (PSD) analysis was first conducted, followed by a digital bandpass finite impulse response (FIR) filter and a fast Fourier transform (FFT) for feature extraction. To evaluate the system, 20 participants tested the prototype and tried to control the system using two commands. The results indicated that the accuracy was better when the acquisition time for the SSVEP signals was higher. In their most recent attempt, they proposed the adoption of machine learning (ML) classifiers in order to improve the performance of highly wearable, single-channel BCIs. The EEG data were collected using the Olimex EEG-SMT (Olimex Ltd., Plovdiv, Bulgaria), featuring one active electrode, while the Epson Moverio BT-200 smart glasses were utilized to display the two flickering targets. To process the EEG signals, an FFT was performed in the frequency domain, followed by the application of a bandpass filter within the time domain, restricting the signal to frequencies between 5 and 25 Hz. For the classification process, the selected ML classifiers were support vector machine (SVM), k-nearest neighbor (KNN), and artificial neural network (ANN). The experiment involved the participation of 20 volunteers, and two evaluation metrics were utilized to assess performance: classification accuracy and acquisition time.
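The single-channel pipelines described above pair spectral features with off-the-shelf classifiers. The sketch below is a hedged reconstruction of that pattern, in the spirit of [31]: FFT magnitudes restricted to a 5–25 Hz band feed interchangeable SVM, KNN, and ANN models. The window length, placeholder data, and scikit-learn estimators are assumptions, not the authors’ exact setup.

```python
# Single-channel SSVEP sketch: FFT magnitude features in 5-25 Hz feeding
# interchangeable classifiers. Data, labels, and window length are placeholders.
import numpy as np
from numpy.fft import rfft, rfftfreq
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

FS = 256  # sampling rate in Hz (assumption)

def spectral_features(epoch):
    """Magnitude spectrum of one single-channel epoch, restricted to 5-25 Hz."""
    spectrum = np.abs(rfft(epoch))
    freqs = rfftfreq(len(epoch), d=1.0 / FS)
    return spectrum[(freqs >= 5) & (freqs <= 25)]

X_epochs = np.random.randn(40, 2 * FS)    # placeholder two-second trials
y = np.random.randint(0, 2, size=40)      # two flickering targets
X = np.array([spectral_features(e) for e in X_epochs])

for clf in (SVC(), KNeighborsClassifier(), MLPClassifier(max_iter=500)):
    clf.fit(X, y)
    print(type(clf).__name__, clf.score(X, y))  # training accuracy only
```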
Chen et al. [22] designed a four-command SSVEP-BCI system combined with HoloLens to control a robotic arm. Nine channels from the Neuracle EEG amplifier were used in this study. EEG signals were downsampled to 250 Hz, and a notch filter at 50 Hz was applied. The FBCCA algorithm was used to classify the data. Twelve subjects participated in the online experiment, and the evaluation metrics employed for this study were the mean classification accuracy, ITR, and time to complete a freely controlled robot movement, with respective results of 93.96%, 14.21 bits/min, and 107.67 s.
Ke et al. [24] aimed to design and evaluate a high-speed online eight-class SSVEP-based BCI in an OST-AR environment and test it by controlling a robotic arm. The proposed hardware consisted of an eight-channel EEG device designed in their laboratory and a HoloLens HMD. The EEG signals were bandpass filtered from 7–90 Hz and notch filtered at 50 Hz. To classify their data, they used extended CCA and ensemble TRCA. A total of 10 subjects took part in the online robot arm control task, resulting in an ITR of 45.57 bits/min.
Fang et al. [30] designed a four-target AR-based BCI-SSVEP for human–robot interaction. The EEG signals were acquired using a Neuroscan (Compumedics Neuroscan, Abbotsford, Australia) device equipped with eight electrodes, while the AR display was facilitated by the utilization of HoloLens 2. For the preprocessing portion, three subfilters with different bandpass ranges were designed, running at 7–17 Hz, 16–32 Hz, and 25–47 Hz, respectively, while for the classification process, the filter bank convolutional neural network (FB-tCNN) was employed. During the cue-guided task involving the robotic arm, all subjects demonstrated a grasping success rate of 87.50%. Additionally, ITR achieved a value of 159.40 bits/min.
De Pace et al. [43] explored the potential of a projected AR system combined with an SSVEP-based BCI to aid human–robot interaction. They employed the NextMind BCI device, which used NeuroTags as flickering visual stimuli in order to enable users to control a robotic arm for pick-and-place tasks. The system was tested with 22 healthy participants, evaluating usability and robustness through metrics such as the System Usability Scale (SUS) and NASA-TLX. The study found that the adaptive positioning of visual stimuli was significantly more effective and preferred over a nonadaptive linear approach.

4.1.3. IoT Applications

Kim et al. [16] investigated the feasibility of an AR-BMI system using grid-shaped (3 × 3) SSVEP flickering stimuli displayed on a HoloLens HMD. A 32-electrode actiCAP was used to record the EEG signals, which were then classified into one of six classes using shrinkage-regularized linear discriminant analysis (shrinkage-rLDA) with 10-fold cross-validation. One subject participated in the evaluation, and the average classification accuracy, which was the evaluation metric for this experiment, was 30.51%.
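The shrinkage-regularized LDA used in [16] corresponds closely to LDA with automatic Ledoit-Wolf shrinkage as available in scikit-learn; the sketch below mirrors the 10-fold cross-validation protocol, while the epoch shapes and labels are placeholder assumptions.

```python
# Shrinkage LDA with 10-fold CV; epoch shapes and labels are placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

X = np.random.randn(120, 32 * 64)       # flattened 32-channel SSVEP epochs
y = np.random.randint(0, 6, size=120)   # six target classes, as in [16]

clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
print(cross_val_score(clf, X, y, cv=10).mean())
```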
Zhao et al. [19] designed four different display layouts and compared the results of displaying them on a HoloLens HMD and on a PC screen. SynAmps2 amplifiers with 64 electrodes were used to record the EEG data. A bandpass filter between 0.5 and 45 Hz was applied, and the signals were sampled at 1000 Hz. Power spectral density (PSD) estimation was used to process the data, and CCA was used for the classification. Ten subjects participated in the experiment, and the evaluation metrics employed were classification accuracy, ITR, and power distribution topography. The results indicated that when the stimulus duration is more than 3 s, AR-SSVEP achieves classification accuracy similar to PC-SSVEP. Apart from the performance variation attributed to display layouts, Zhang et al. [25] investigated the effect of ambient brightness on AR-BCI performance by testing five different light intensities on 18 subjects. The SynAmps2 amplifier, with 64 selected electrodes, and the HoloLens HMD were the selected hardware for this study. In order to process and classify their EEG data, they used FFT, CCA, and FBCCA. To enable the SSVEP recognition algorithm to adjust to varying light intensities, they introduced a novel optimization algorithm called ensemble online adaptive CCA (eOACCA), whose purpose was to enhance the adaptability of the SSVEP recognition algorithm when faced with changes in light intensity. The results indicated that as the light intensity increases, the response intensity of AR-SSVEP gradually decreases, with a corresponding decrease in recognition accuracy. Furthermore, the experimental outcomes showed that the proposed eOACCA algorithm outperforms the FBCCA and CCA algorithms. In the same context, Du and Zhao [39] explored the impact of different visual stimulus colors on the classification accuracy of SSVEP-BCI. The researchers designed interfaces featuring four distinct colors (white, red, green, and blue) and conducted tests both in an AR environment and on a conventional PC screen. The NeuSen W acquisition device with 32 electrodes was combined with the HoloLens HMD. The classification results were found to be affected by both the visual stimulus colors and the duration of stimulation.
Liu et al. [20] designed an AR-BCI system with an eight-class SSVEP stimulus and studied the performance of different algorithms. The hardware used in this study was an eight-channel EEG device developed in their laboratories and the HoloLens HMD. EEG data were filtered by a notch, highpass, and lowpass filter. Extended filter bank canonical correlation analysis (FBCCA) and task-related component analysis (TRCA) were the two algorithms tested. Eight participants took part in the experiment, and the results showed that the extended FBCCA had the best overall performance.
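FBCCA, which performed best in the comparison above, extends plain CCA by scoring several band-pass sub-bands and combining their squared correlations with decaying weights, following the scheme introduced in [3]. The sketch below assumes a reference matrix built as in the earlier CCA example; the sub-band edges and weight parameters are illustrative.

```python
# Filter bank CCA (FBCCA) sketch following the weighting scheme of [3]:
# run CCA on several band-pass sub-bands and sum the weighted squared
# correlations. Sub-band edges below are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.cross_decomposition import CCA

FS = 250  # sampling rate in Hz (assumption)
SUBBANDS = [(8, 88), (16, 88), (24, 88)]  # illustrative filter bank

def cca_corr(x, y):
    """First canonical correlation between two (n_samples, k) matrices."""
    u, v = CCA(n_components=1).fit_transform(x, y)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

def fbcca_score(epoch, y_ref, a=1.25, b=0.25):
    """epoch: (n_samples, n_channels) EEG; y_ref: sine/cosine references for
    one candidate frequency. Returns the weighted sub-band correlation score;
    evaluate per candidate frequency and take the argmax."""
    score = 0.0
    for n, (lo, hi) in enumerate(SUBBANDS, start=1):
        bb, ab = butter(4, [lo, hi], btype="bandpass", fs=FS)
        sub = filtfilt(bb, ab, epoch, axis=0)
        score += (n ** (-a) + b) * cca_corr(sub, y_ref) ** 2
    return score
```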
Kerous and Liarokapis [21] developed a working prototype of BrainChat that featured two-person textual communication. HTC Vive HMD was used in combination with an eight-channel NeuroElectrics Enobio 32. EEG signals were processed and classified with the OpenVibe software. Two subjects tested the prototype system and managed to communicate using their EEG signals.
Heo et al. [26] conducted a study to evaluate the performance of BCI in various postures, including sitting, standing, and walking. They utilized a standard EEG cap with 31 electrodes in combination with the HoloLens HMD. For signal preprocessing, the researchers implemented several filters: a highpass filter to remove frequencies below 0.5 Hz, a lowpass filter to remove frequencies above 50 Hz, and an additional lowpass filter to remove frequencies above 12 Hz. A linear support vector machine (SVM) was utilized for classification. To evaluate their system, six subjects took part in the experiment, and the performance metric employed was the ratio of trials with a correctly predicted target to the total number of trials for each posture. The results showed that there were no significant differences in BCI performance with regard to posture.
Zhang et al. [27] presented a robot grasping experiment designed to verify the applicability of AR-BCI. The Neuracle EEG Recorder, equipped with nine electrodes, was integrated with the HoloLens HMD to create the BCI-AR system. The FBCCA algorithm was used to classify the flickering stimuli. Twelve subjects participated in the online experiment, and they were able to successfully control the robot. The evaluation metrics employed for this study were classification accuracy and ITR. In a different study, Zhang et al. [41] created SSVEP flickering stimulation interfaces featuring four different numbers of stimulus targets to examine the impact of stimulus numbers on SSVEP-BCI within an AR context. SynAmps2 with 64 electrodes was used as the acquisition device, and a bandpass filter between 5 and 90 Hz was applied. The researchers employed CCA, FBCCA, and TRCA for the classification process. The results indicated that recognition accuracy decreased as the number of stimuli increased in the AR-SSVEP setup. In a later work, Zhang et al. [44] developed a BCI system based on SSVEP to enhance the practical application and interaction capabilities of prosthetic hands for disabled patients. The study introduced an asynchronous visual stimulus paradigm using AR with eight control modes (grasp, put down, pinch, point, fist, palm push, hold pen, and initial) and proposed a new pattern recognition algorithm, Center-ECCA-SVM, combining center-extended canonical correlation analysis and a support vector machine. Additionally, an intelligent BCI system switch based on the YOLOv4 deep learning object detection algorithm was proposed to enhance user interaction. The results showed that the AR paradigm significantly improved the average SSVEP spectrum amplitude and SNR compared to the liquid crystal display (LCD) paradigm. The proposed Center-ECCA-SVM classifier achieved high asynchronous pattern recognition accuracy, and the YOLOv4-tiny model demonstrated effective real-time detection of the prosthetic hand. The system’s practicality was validated through real-life task completion, showcasing its effectiveness and user acceptability.
Jang et al. [28] designed a biometric authentication system based on EEG by utilizing the rapid serial visual presentation (RSVP) paradigm with stimuli of photographs of people displayed on AR HMD. During the experimental trial, 10 photographs depicting faces were presented to the subjects in a randomized sequence. Within this set, one photograph portrayed a person familiar to the subject (referred to as the “target”), while the remaining nine photographs displayed faces of individuals unknown to the subject (referred to as “non-targets”). To obtain the EEG data, a 64-channel BioSemi ActiveTwo system was employed. The preprocessing stage consisted of downsampling the signal from 2048 Hz to 512 Hz and applying a bandpass filter from 0.1–50 Hz. To perform the classification stage, the researchers employed four distinct machine learning classifiers: linear SVM (LSVM), k-nearest neighbor (KNN), LDA, and decision tree (DT) models. A total of 20 participants actively participated in the experiment, and the results showcased exceptional performance and accuracy. The evaluation of this experiment involved two key metrics: the amplitude of event-related potentials (ERP) and the latency. These metrics assessed and measured the neural responses and time delays associated with the experimental task.
Apicella et al. [32,35] addressed the adoption of ML classifiers and CNNs to improve the performance of highly wearable single-channel BCIs. The suggested system relied on classifying SSVEPs. They combined a single-channel EEG acquisition device with four different AR HMDs. The signal processing stage involved an FFT in the frequency domain and the application of a bandpass filter between 5 and 25 Hz in the time domain. Finally, the classification process involved three ML classifiers, specifically, SVM, KNN, and ANN. In the initial experiment, 20 subjects participated, while in the subsequent three experiments, there were nine subjects each. Furthermore, the first HMD featured two flickering targets, while the other three HMDs had four targets each. The evaluation metric utilized for this study was classification accuracy. In another attempt to improve the performance of highly wearable reactive BCIs, they proposed the adoption of an innovative algorithm (an ANN with a learnable activation function). In the experimental campaign, 20 volunteers participated, and each volunteer underwent single-channel EEG acquisition. Epson Moverio BT-200 smart glasses were used to display two flickering icons during the experiment. Classification accuracy was the evaluation metric employed for this study, and the results indicated that the ML classifier can outperform other processing strategies, such as CCA.
Sakkalis et al. [33] proposed an AR-based BCI-SSVEP system with three to four commands for wheelchair navigation. The system was composed of Epson Moverio BT-35E smart glasses and a four-channel g.MOBIlab+. The EEG signals were first subjected to a 0.5–100 Hz bandpass filter, and then the relevant features were extracted using CCA. For the classification stage, LDA, KNN, and SVM were employed. In the online experiment, 12 subjects participated, and the evaluation metrics used for assessing the system’s performance were classification accuracy and ITR. In a similar attempt, Mori et al. [45] developed a BCI system to control an electric wheelchair using audiovisual stimuli from MR goggles and virtual sound sources. The system components included an electric wheelchair (YAMAHA JWX-1), MR goggles (HoloLens2), wired earphones (ALPEX HR-3500BK), and four EEG electrodes (Polymate Mini AP108). The classifiers for visual and auditory stimuli were evaluated using leave-one-out cross-validation, achieving average classification accuracies above 70% and 50%, respectively. Online analysis showed target selection accuracies of 37.1% for visual markers and 25.7% for sound, with visual selection significantly higher than chance level. The lower online accuracy was attributed to marker flashing certainty and sound distinction difficulties for some participants.
Huang et al. [34] proposed a protocol for SSVEP-based neurofeedback training to alter attention with emotional biases using a portable AR-BCI. The five participants were instructed to focus their attention on the task-relevant stimulus, which was a semi-transparent Gabor patch, and to disregard the emotional distractor, represented by an angry or sad face. Each stimulus was flickering at a specific frequency (8.57 or 12 Hz). Signal acquisition was performed with an EEG cap containing 16 electrodes. FFT was used to detect the distinct response of visual stimuli tagged with specific frequencies.
Sugino et al. [36] designed an AR-BCI system that detected objects in a 3D space using depth sensors and ML. The EEG signals were acquired using the Polymate Mini AP108, which used two electrodes. On the other hand, for displaying the four stimuli, HoloLens 2 was utilized. Classification accuracy was the evaluation metric employed for this study.
Liu et al. [37] combined computer vision (CV) and AR with a brain-controlled wheelchair. They used five active electrodes to acquire the EEG signals and Epson BT-350 to display the stimuli. Twenty subjects tested the six-command semiautomatic mode, and the performance metrics employed for this experiment were classification accuracy, ITR, and the average time required to reach each designated target.
He et al. [38] developed a fast recognition method based on a separable convolutional neural network (SepCNN) in order to improve the accuracy and ITR of AR-BCI systems. An EEG cap with 32 electrodes was combined with HoloLens HMD in order to display the nine-target experiment. In their comparison, SepCNN was tested against four common ML classifiers: Bayesian LDA, LDA, SVM, and SWLDA. The results revealed that SepCNN demonstrated significant improvements in both ITR and classification accuracy.
Arpaia et al. [40] improved the classification accuracy of SSVEPs by combining the use of FFT and CCA in the time domain. To acquire the EEG data, the researchers employed the Olimex EEG-SMT system, equipped with two active electrodes. For the AR device comparison, they evaluated three different HMDs: Epson Moverio BT-350, Oculus Rift S, and Microsoft HoloLens. The proposed algorithm demonstrated higher performance in terms of classification accuracy compared to the classic CCA method. Among the AR devices evaluated, HoloLens exhibited the best overall performance.
Horii et al. [42] introduced a BMI system designed to determine the user’s focus or attention on an object, enabling them to grasp it within the physical environment. An eight-channel EEG cap was combined with HMZ-T3 HMD to carry out the study. To assess the performance of the experiment, eight subjects took part in the experimental process, and the study utilized classification accuracy as the evaluation metric.

4.1.4. System Commands

The number of commands for the reactive BCI category varies from 2 [18,23,31,35] to 36 [21,41]. The most frequently used number of commands is four, appearing in 10 studies [14,15,19,22,26,29,30,36,39,40]. The authors of [20,24,27,44] designed systems with eight commands, while refs. [17,42] chose three commands for their systems. In the works of [16,37], a six-command system was developed, whereas ref. [25] employed nine commands. Additionally, refs. [32,33] designed systems incorporating three and four commands, while ref. [41] developed a system with 9, 16, 25, and 36 commands.

4.1.5. Signal Processing and Feature Extraction

The majority of the researchers applied a variety of bandpass filters: 1–35 Hz [14]; 2–54 Hz [15,29]; 0.5–45 Hz [19]; 1–20 Hz [21]; 7–90 Hz [24]; 0.1–50 Hz [28]; 7–17 Hz, 16–32 Hz, and 25–47 Hz [30]; 5–25 Hz [31,32,40]; 0.5–100 Hz [33]; 9–25 Hz [35]; 5–20 Hz [36]; 0.5–50 Hz [37]; 0.1–12 Hz [38]; 5–40 Hz [39]; 5–90 Hz [41]; 0.5–60 Hz [42]; 8–40 Hz [44]; and 0.5–30 Hz [45]. The researchers in [20] applied a highpass, a lowpass, and a notch filter, while those in [26] applied a highpass filter above 0.5 Hz and two lowpass filters below 50 and 12 Hz, respectively. Also, refs. [20,22,24] applied a notch filter to remove the powerline noise. The authors of [19,23] calculated the PSD using FFT, while those of [25] applied only FFT. The CSP technique was used by [17].
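A hedged sketch of the recurring band-pass-plus-notch preprocessing is given below; the cutoffs and sampling rate are left as parameters rather than taken from any single study.

```python
# Generic band-pass + notch preprocessing; cutoffs are parameters, not the
# values of any one study.
from scipy.signal import butter, filtfilt, iirnotch

def preprocess(eeg, fs, low, high, notch_hz=50.0):
    """eeg: (n_channels, n_samples). Band-pass to [low, high] Hz, then remove
    powerline interference with a notch filter (cf. [20,22,24])."""
    b, a = butter(4, [low, high], btype="bandpass", fs=fs)
    eeg = filtfilt(b, a, eeg, axis=1)
    bn, an = iirnotch(notch_hz, Q=30.0, fs=fs)
    return filtfilt(bn, an, eeg, axis=1)
```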
To extract the desired features for the classifier, researchers most commonly used CCA [14,32,33,35,41]. FFT was also employed by [18,23,32,42], while stepwise LDA was used by [37]. Finally, ref. [44] employed extended CCA and FFT for feature extraction.

4.1.6. Classification Techniques

Different classification techniques were used in these studies. Since the goal of many researchers was to improve the classification accuracy and the ITR of their systems, they employed more than one classification technique in order to compare the results. More specifically, ref. [20] used Extended-FBCCA and TRCA; Ref. [24] employed extended CCA and extended TRCA; Ref. [25] tested CCA, FBCCA, and eOACCA; Ref. [28] compared LSVM, KNN, LDA, and DT; Refs. [31,32] used SVM, KNN, and ANN; Ref. [38] employed BLDA, LDA, SWLDA, SVM, and SepCNN; Ref. [39] used CCA and FBCCA; Ref. [40] compared CCA with their proposed algorithm; Ref. [41] employed CCA, FBCCA, and TRCA; Ref. [44] utilized Center-ECCA and SVM; and Ref. [45] employed LDA and SVM. As for the rest of the studies that employed a single classification algorithm, KNN was used by [14], EMSI was tested by [15,29], shrinkage-rLDA was used by [16], LDA was employed by [17,21], FBCCA was utilized by [22,27], SVM was used by [26,42], FB-tCNN was employed by [30], FFT was used by [34], and ANN was employed by [35].

4.1.7. Evaluation Metrics

Several evaluation metrics were utilized by the authors in order to measure the effectiveness of their systems. The most common were classification accuracy, reported in 24 studies [14,15,16,19,20,22,23,24,25,27,29,30,31,32,33,35,36,37,38,39,41,42,44,45], and the information transfer rate (ITR), reported in 12 studies [15,19,20,22,24,27,29,30,33,37,38,41]. In addition to these common performance metrics, some authors employed further evaluation metrics (EEG and time metrics) well suited to their respective systems. These included measured brain potentials [18], power distribution topography [19], time to complete a robot movement [22], performance [26], amplitude of ERP [28], acquisition time [31], SSVEP competition scores [34], average time to reach the target [37], the SUS and NASA-TLX questionnaires [43], and time to response [40]. It is worth mentioning that most authors employed more than one of the previously mentioned evaluation metrics for their systems.
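For context, ITR values such as those quoted in this section are conventionally computed with the standard Wolpaw formula, which combines the number of selectable commands N, the classification accuracy p, and the time per selection T into bits per minute: ITR = (60/T) [log2 N + p log2 p + (1 - p) log2((1 - p)/(N - 1))]. The sketch below implements this definition; the example numbers are illustrative, not reproduced from any reviewed study.

```python
# Standard Wolpaw ITR in bits per minute; the example values are assumptions.
import math

def itr_bits_per_min(n_classes, p, t_seconds):
    """n_classes: number of commands; p: accuracy in (0, 1]; t_seconds: time
    needed for one selection."""
    if p >= 1.0:
        bits = math.log2(n_classes)
    else:
        bits = (math.log2(n_classes)
                + p * math.log2(p)
                + (1.0 - p) * math.log2((1.0 - p) / (n_classes - 1)))
    return bits * 60.0 / t_seconds

# A 4-command system at 92.8% accuracy with an assumed 2.7 s per selection:
print(round(itr_bits_per_min(4, 0.928, 2.7), 1))  # ~33.6 bits/min
```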

4.2. Passive BCI

This section is dedicated to passive BCI systems and includes five studies presented in Table 5.
Olivieri et al. [46] proposed a novel AR-BCI framework to train users to regulate their own mental state while performing surgery-like tasks using a robotic system. They combined the Emotiv EPOC with the Sony HMZ-T1 in order to present the AR-BCI scenario. Ten subjects participated in the experiment, in which they operated an AR scalpel based on their measured cognitive state.
Vortmann et al. [47] designed an alignment experiment in order to classify user attention as internally or externally directed. They combined a 16-channel g.Nautilus with the HoloLens to display the alignment task. To preprocess the EEG data, they used a lowpass filter at 50 Hz, a highpass filter at 1 Hz, and a notch filter at 50 Hz. LDA was utilized in the classification process, and the results of the analysis of 15 participants demonstrated that the classifier reliably predicts the type of attention. In another attempt, Vortmann et al. [48] explored the feasibility of categorizing a target as real or virtual by analyzing EEG signals using ML techniques. To acquire the EEG data, a g.Nautilus EEG headset with 16 active electrodes was used. The EEG data were bandpass filtered between 3 and 45 Hz and notch filtered at 50 Hz. The results showed that person-dependent classification based on EEG data is possible and works more reliably than classification based on eye-tracking data.
Sanna et al. [49] developed a BCI–AR user interface based on the NextMind and the HoloLens 2. The experiment was structured into two distinct parts. In the first phase, participants were required to visually identify and select a specific component. Once this selection was made, the second phase involved the assembly of these components using a robotic arm. NASA-TLX and SUS questionnaires were used to evaluate the experiment.
De Massari et al. [50] combined BCI in mixed reality (MR) environments in order to decode users’ mental states. To acquire the EEG data, a 64-channel actiCAP was utilized, while SVM and LDA classifiers were used for the classification process. The results suggest that LDA had the best accuracy.

4.2.1. Signal Processing and Feature Extraction

Different types of filters were used in this category to preprocess the EEG signals. Ref. [47] used a lowpass filter at 50 Hz along with a highpass filter at 1 Hz and a notch filter at 50 Hz. Ref. [48] employed a bandpass filter between 3 and 45 Hz and a notch filter at 50 Hz, while ref. [50] used a spatial filter. Finally, refs. [46,49] did not mention their preprocessing phase.
All of the researchers employed different approaches to extract features for their classifiers. Ref. [47] used PSD, ref. [48] employed FBCSP, and ref. [50] used spectral estimation.

4.2.2. Classification and Evaluation Metrics

LDA was the most-used classifier in this category, employed by [47,50]. CNN was used by [48], while ref. [50] also employed SVM along with LDA.
Quite a few evaluation metrics were used by the authors. Classification accuracy was employed by [47,48,50]. Test trial time was the evaluation metric used in [46], whereas ref. [49] employed SUS and NASA-TLX questionnaires to assess their experiment.

4.3. Active BCI

This category centers on the active BCI systems and consists of four studies presented in Table 6. One of the studies [51] features a hybrid BCI system that integrates active and reactive BCI technologies.
Choi and Jo [51] designed a hybrid BCI-AR system that combines MI and SSVEP to navigate a quadcopter. EEG data were collected using an actiCHamp with six electrodes, and a lowpass filter at 40 Hz was applied during collection. CCA was used for the SSVEP classification, while FBCSP was used for the MI classification. To test their system, two subjects took part in the experiment, and both of them managed to successfully navigate the quadcopter.
Horie et al. [52] developed two games utilizing the beta/alpha ratios of EEG signals as a degree of concentration. A single EEG channel in combination with HoloLens was employed for the two experiments. In the first game, users attempted to hit targets positioned in the mixed reality space using bullets controlled by hand gestures and concentration. Meanwhile, in the second game, the user had to focus on outperforming their opponent. One-way ANOVA was performed to evaluate the experiments.
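The beta/alpha concentration index used in [52] can be approximated with band powers from a Welch periodogram; the band edges in the sketch below are common conventions, and the exact definition in the paper may differ.

```python
# Beta/alpha ratio of one EEG channel as a rough concentration index.
import numpy as np
from scipy.signal import welch

def beta_alpha_ratio(signal, fs):
    """signal: 1-D EEG samples; fs: sampling rate in Hz (assumed integer)."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(fs * 2))
    alpha = psd[(freqs >= 8) & (freqs < 13)].sum()   # assumed alpha band
    beta = psd[(freqs >= 13) & (freqs < 30)].sum()   # assumed beta band
    return beta / alpha  # higher values read as stronger concentration
```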
Ji et al. [53] designed an active AR-BCI system that utilized voluntary eye blinks to control a robot. They developed an eye blink detection algorithm in order to identify the long blink and the double blink from the EEG data while effectively filtering out noise and the normal blink. The results indicated that the blink-based input method of the eye, as opposed to the gesture-based input, resulted in a reduced user input time.
Sun et al. [54] developed a brain-controlled robotic arm system based on MI using MR visual guidance to enhance training efficiency and EEG signal classification accuracy. The system integrated EEG signals for task switching and motor imagery to control the robotic arm, utilizing a combination of CSP and SVM for signal classification. Eight subjects participated in the experiment, using a 64-channel EEG cap with data processed through the actiCHamp amplifier. The results showed a significant improvement in classification accuracy after visually guided training, with an average increase of about 10% in accuracy and a notable enhancement in kappa values.

4.3.1. System Commands

The hybrid BCI system of [51] consists of four commands (one active and three reactive), while ref. [53] designed a system with two active commands. These commands allowed users to interact with the system efficiently, depending on the task complexity and the BCI paradigm employed.

4.3.2. Signal Processing and Feature Extraction

For the active BCI category, a lowpass filter at 40 Hz was applied by [51], while ref. [54] employed a bandpass filter between 13 and 30 Hz. To extract the desired features for the classifier, ref. [54] utilized CSP. The filtering and feature extraction techniques are crucial for enhancing signal quality and improving classification accuracy.
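A minimal sketch of a CSP-plus-SVM chain in the spirit of [54] is shown below, built on MNE’s CSP implementation (which the original authors may not have used); the epoch shapes, labels, and 13–30 Hz pre-filtering are assumptions.

```python
# CSP spatial filtering followed by a linear SVM for two-class motor imagery.
import numpy as np
from mne.decoding import CSP
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder epochs assumed already band-pass filtered to 13-30 Hz:
X = np.random.randn(60, 64, 500)         # (n_trials, n_channels, n_samples)
y = np.random.randint(0, 2, size=60)     # e.g., left- vs right-hand imagery

clf = make_pipeline(CSP(n_components=4, log=True), SVC(kernel="linear"))
print(cross_val_score(clf, X, y, cv=5).mean())  # chance-level on random data
```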

4.3.3. Classification

For the classification process, CCA and FBCSP were employed by [51], SVM was utilized by [54], and ref. [53] used their own proposed algorithm. Each classifier was selected based on its effectiveness in recognizing specific features within the EEG signals.

4.3.4. Evaluation Metrics

To evaluate their systems, the authors of this category employed two different evaluation metrics. Classification accuracy was employed by [51,53,54], while one-way ANOVA was utilized by [52]. These metrics provided insight into system performance and ensured the reliability of the results.

5. Discussion

The present systematic review analyzed research articles that use EEG signals in order to control an AR environment projected on HMDs. The study relies on the results obtained from three established scientific databases: IEEE Xplore, Scopus, and PubMed. The initial section of the review showcases the statistical outcomes of the articles, including the publication year, number of participants, and BCI paradigm. In the following section, the articles are grouped into three categories according to their BCI paradigm (reactive, passive, or active). For each category, a summary of each article is presented, along with an analysis of the number of system commands, feature extraction stage, classification techniques, and evaluation metrics employed.
The review yields numerous significant observations. The vast majority of the researchers (78.04% of the studies included) chose the reactive BCI paradigm. This may be attributed to the nature of reactive BCI techniques, such as SSVEP, which demand minimal to no training, making them well suited for BCI-AR applications. The literature also showed that reactive BCI systems, and more specifically SSVEP systems, provide the best accuracy and ITR. Another observation is the use of EEG caps instead of headbands. Since AR technology requires HMDs to render the environment, most commercial headbands are not a good solution because their shape and size do not allow them to integrate with HMDs. Furthermore, the average number of participants was 12.07, which is relatively small. This could have several explanations. First, most studies were conducted after 2019, coinciding with the COVID-19 pandemic, which made it challenging to gather volunteers. In addition, the physical discomfort of the systems plays a significant role in the limited number of volunteers, since they need to wear both the BCI device and the HMD. Moreover, because BCI-AR technology is still in its early stages of development, researchers primarily focus on the feasibility of the systems.
Regarding the classification process, the researchers utilized several algorithms (Figure 6). It is worth mentioning again that the researchers were aiming to enhance the performance of their systems; therefore, in the majority of the studies, multiple algorithms were employed. SVM and CCA (with its variations) were the most utilized algorithms, each appearing in 11 studies. LDA was another prevalent choice, being utilized in 9 studies. Moreover, neural networks and KNN were employed in 5 studies.
Among the feature extraction methods (Figure 7), CCA was the most frequently used method to extract features for the classifier, being utilized in six studies. FFT ranked as the second most popular choice among researchers, being applied in five studies.
Figure 8 shows the number of system commands for the BCI-AR systems from the literature. Most of the authors designed their systems with up to eight commands. Since most of the studies utilized the reactive BCI paradigm, the system commands were displayed in the HMD. Consequently, as the number of commands increased, the user’s field of view decreased, resulting in a limited view of the environment. Although the mean number of commands is 7.37, the median is much lower, at 4, since two studies [21,41] employed 36 commands for their systems. In summary, for this dataset, the median is a more representative measure of central tendency than the mean due to the presence of outliers.
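As a quick illustration of why the median is preferred here, a hypothetical command-count distribution with a few 36-command outliers reproduces the reported pattern (the actual per-study counts are given in Section 4.1.4):

```python
# Hypothetical command counts (not the actual dataset) showing how two
# 36-command outliers pull the mean far above the median.
import numpy as np

commands = np.array([4] * 10 + [2, 2, 2, 2, 3, 3, 6, 6, 8, 8, 8, 8, 9, 36, 36])
print(np.mean(commands), np.median(commands))  # ~7.2 vs 4.0
```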

Future Trends

Many key points were identified throughout the course of this research. The most common aspect shared among studies was the relatively low number of participants. Since HMDs for AR are still in their early stages of development, they are not very comfortable for the user, especially when they have to be combined with a BCI. Hence, the rapid advancement of AR technology is expected to play a significant role in enhancing the comfort and usability of AR-BCI systems. Another future trend is the adoption of deep learning techniques to enhance the classification accuracy of the systems. Researchers are working on integrating different machine learning algorithms and constructing neural networks to improve the transfer rate of BCI systems. Yet another critical aspect that demands attention is finding the ideal stimulation and acquisition time for EEG signals in reactive BCIs. Various studies [19,20,23] that have examined the correlation between classification accuracy and different stimulation times indicate that an increase in stimulation time is associated with a corresponding rise in classification accuracy. In addition to stimulation time, ref. [39] highlighted the importance of determining the optimal color for visual stimuli. Their research findings indicated that varying stimulus colors can impact classification accuracy. One more important future trend to be considered is the development of hybrid BCI systems. This approach enables researchers to harness the benefits of each BCI paradigm while mitigating their respective limitations. As ref. [51] proposed, combining MI with SSVEP can result in decreasing the training time needed for BCIs relying on MI while expanding the range of usable classes without introducing additional visual complexity.

6. Conclusions

This systematic review presents an overview of BCI-AR systems, including studies from 2012–2024. The 41 studies were grouped into three main categories: reactive BCI, passive BCI, and active BCI. The review provides a summary of the conducted experiments, the obtained results, and the signal processing and classification techniques utilized. It also reveals several important contributions from the existing research on BCI-AR systems. The most significant finding is the consistent use of reactive BCIs, particularly SSVEP, which demonstrates high accuracy and ease of use, making it a valuable paradigm for controlling AR environments. Furthermore, the integration of BCI with AR through HMDs shows considerable potential for creating immersive, hands-free interaction systems. Despite these advances, there are key limitations in the current state of research. A major challenge is the discomfort associated with EEG caps, which not only limits user participation but also affects the long-term feasibility of BCI-AR systems. Using a wearable BCI-AR system that involves two separate devices placed on the head—such as an EEG cap and an HMD—further adds to the discomfort, making these systems less practical for extended use. Looking forward, several promising directions for future research have emerged. One key area involves developing more comfortable, user-friendly BCI devices that can seamlessly integrate with HMDs. The adoption of advanced machine learning and deep learning techniques is also expected to significantly enhance classification accuracy and improve system performance. Additionally, hybrid BCI systems, which combine multiple paradigms, can help increase functionality and expand applications. Addressing these challenges will be essential for advancing the field and unlocking the full potential of BCI-AR technologies.

Author Contributions

Conceptualization, M.G.T.; methodology, G.P., P.A., P.S., S.B. and M.G.T.; software, G.P. and P.S.; validation, G.P. and P.A.; data curation, G.P. and M.G.T.; writing—original draft preparation, G.P.; writing—review and editing, G.P., P.A., P.S., S.B. and M.G.T.; supervision, M.G.T.; project administration, M.G.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work is financed in part by the project “AGROTOUR–New Technologies and Innovative Approaches to Agri-Food and Tourism to Boost Regional Excellence in Western Macedonia” (MIS 5047196), which is implemented under the action “Reinforcement of the Research and Innovation Infrastructure”, supported by the operational program “Competitiveness, Entrepreneurship and Innovation” (NSRF 2014–2020), and co-financed by Greece and the European Union (European Regional Development Fund).

Data Availability Statement

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BCI: Brain–computer interface
AR: Augmented reality
HMD: Head-mounted display
EEG: Electroencephalograph
EOG: Electrooculogram
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
CCA: Canonical correlation analysis
EMSI: Multivariate synchronization index
ITR: Information transfer rate
shrinkage-rLDA: Shrinkage-regularized linear discriminant analysis
CSP: Common spatial pattern
LDA: Linear discriminant analysis
FFT: Fast Fourier transform
FIR: Finite-impulse response
ML: Machine learning
ANN: Artificial neural network
KNN: k-nearest neighbor
SVM: Support vector machine

References

1. Kalcher, J.; Flotzinger, D.; Neuper, C.; Gölly, S.; Pfurtscheller, G. Graz brain-computer interface II: Towards communication between humans and computers based on online classification of three different EEG patterns. Med. Biol. Eng. Comput. 1996, 34, 382–388.
2. Wolpaw, J.R.; Birbaumer, N.; McFarland, D.J.; Pfurtscheller, G.; Vaughan, T.M. Brain–computer interfaces for communication and control. Clin. Neurophysiol. 2002, 113, 767–791.
3. Chen, X.; Wang, Y.; Gao, S.; Jung, T.P.; Gao, X. Filter bank canonical correlation analysis for implementing a high-speed SSVEP-based brain–computer interface. J. Neural Eng. 2015, 12, 046008.
4. Martini, M.L.; Oermann, E.K.; Opie, N.L.; Panov, F.; Oxley, T.; Yaeger, K. Sensor modalities for brain-computer interface technology: A comprehensive literature review. Neurosurgery 2020, 86, E108–E117.
5. Waldert, S. Invasive vs. non-invasive neuronal signals for brain-machine interfaces: Will one prevail? Front. Neurosci. 2016, 10, 295.
6. Pfurtscheller, G.; Neuper, C. Motor imagery and direct brain-computer communication. Proc. IEEE 2001, 89, 1123–1134.
7. Zander, T.O.; Kothe, C. Towards passive brain–computer interfaces: Applying brain–computer interface technology to human–machine systems in general. J. Neural Eng. 2011, 8, 025005.
8. Lotte, F.; Faller, J.; Guger, C.; Renard, Y.; Pfurtscheller, G.; Lécuyer, A.; Leeb, R. Combining BCI with virtual reality: Towards new applications and improved BCI. In Towards Practical Brain-Computer Interfaces: Bridging the Gap from Research to Real-World Applications; Springer: Berlin/Heidelberg, Germany, 2013; pp. 197–220.
9. Kohli, V.; Tripathi, U.; Chamola, V.; Rout, B.K.; Kanhere, S.S. A review on Virtual Reality and Augmented Reality use-cases of Brain Computer Interface based applications for smart cities. Microprocess. Microsyst. 2022, 88, 104392.
10. Angrisani, L.; Arpaia, P.; De Benedetto, E.; Duraccio, L.; Regio, F.L.; Tedesco, A. Wearable Brain-Computer Interfaces based on Steady-State Visually Evoked Potentials and Augmented Reality: A Review. IEEE Sens. J. 2023, 23, 16501–16514.
11. Nwagu, C.; AlSlaity, A.; Orji, R. EEG-Based Brain-Computer Interactions in Immersive Virtual and Augmented Reality: A Systematic Review. Proc. ACM Hum.-Comput. Interact. 2023, 7, 1–33.
12. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Ann. Intern. Med. 2009, 151, 264–269.
13. Rayyan. Intelligent Systematic Review—Rayyan. Available online: https://www.rayyan.ai/ (accessed on 1 June 2023).
14. Putze, F.; Weiß, D.; Vortmann, L.M.; Schultz, T. Augmented reality interface for smart home control using SSVEP-BCI and eye gaze. In Proceedings of the 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), Bari, Italy, 6–9 October 2019; pp. 2812–2817.
15. Park, S.; Cha, H.S.; Im, C.H. Development of an online home appliance control system using augmented reality and an SSVEP-based brain–computer interface. IEEE Access 2019, 7, 163604–163614.
16. Kim, J.W.; Kim, M.N.; Kang, D.H.; Ahn, M.H.; Kim, H.S.; Min, B.K. An online top-down SSVEP-BMI for augmented reality. In Proceedings of the IEEE 2019 7th International Winter Conference on Brain-Computer Interface (BCI), Gangwon, Republic of Korea, 18–20 February 2019; pp. 1–3.
17. Si-Mohammed, H.; Petit, J.; Jeunet, C.; Argelaguet, F.; Spindler, F.; Evain, A.; Roussel, N.; Casiez, G.; Lécuyer, A. Towards BCI-based interfaces for augmented reality: Feasibility, design and evaluation. IEEE Trans. Vis. Comput. Graph. 2018, 26, 1608–1621.
18. Angrisani, L.; Arpaia, P.; Moccaldi, N.; Esposito, A. Wearable augmented reality and brain computer interface to improve human-robot interactions in smart industry: A feasibility study for SSVEP signals. In Proceedings of the IEEE 2018 IEEE 4th International Forum on Research and Technology for Society and Industry (RTSI), Palermo, Italy, 10–13 September 2018; pp. 1–5.
19. Zhao, X.; Liu, C.; Xu, Z.; Zhang, L.; Zhang, R. SSVEP stimulus layout effect on accuracy of brain-computer interfaces in augmented reality glasses. IEEE Access 2020, 8, 5990–5998.
20. Liu, P.; Ke, Y.; Du, J.; Liu, W.; Kong, L.; Wang, N.; An, X.; Ming, D. An SSVEP-BCI in augmented reality. In Proceedings of the IEEE 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; pp. 5548–5551.
21. Kerous, B.; Liarokapis, F. BrainChat-A collaborative augmented reality brain interface for message communication. In Proceedings of the 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct), Nantes, France, 9–13 October 2017; pp. 279–283.
22. Chen, X.; Huang, X.; Wang, Y.; Gao, X. Combination of augmented reality based brain-computer interface and computer vision for high-level control of a robotic arm. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 3140–3147.
23. Angrisani, L.; Arpaia, P.; Esposito, A.; Moccaldi, N. A wearable brain–computer interface instrument for augmented reality-based inspection in industry 4.0. IEEE Trans. Instrum. Meas. 2019, 69, 1530–1539.
24. Ke, Y.; Liu, P.; An, X.; Song, X.; Ming, D. An online SSVEP-BCI system in an optical see-through augmented reality environment. J. Neural Eng. 2020, 17, 016066.
25. Zhang, R.; Cao, L.; Xu, Z.; Zhang, Y.; Zhang, L.; Hu, Y.; Chen, M.; Yao, D. Improving AR-SSVEP Recognition Accuracy Under High Ambient Brightness Through Iterative Learning. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 1796–1806.
26. Heo, D.; Kim, M.; Kim, J.; Choi, Y.J.; Kim, S.P. The Uses of Brain-Computer Interface in Different Postures to Application in Real Life. In Proceedings of the IEEE 2022 10th International Winter Conference on Brain-Computer Interface (BCI), Gangwon, Republic of Korea, 21–23 February 2022; pp. 1–5.
27. Zhang, S.; Chen, Y.; Zhang, L.; Gao, X.; Chen, X. Study on robot grasping system of SSVEP-BCI based on augmented reality stimulus. Tsinghua Sci. Technol. 2022, 28, 322–329.
28. Jang, H.; Park, S.; Woo, J.; Ha, J.; Kim, L. Authentication System Based on Event-related Potentials Using AR Glasses. In Proceedings of the IEEE 2023 11th International Winter Conference on Brain-Computer Interface (BCI), Gangwon, Republic of Korea, 20–22 February 2023; pp. 1–4.
29. Park, S.; Ha, J.; Park, J.; Lee, K.; Im, C.H. Brain-controlled, AR-based Home automation system using SSVEP-based brain-computer interface and EOG-based eye tracker: A feasibility study for the elderly end User. IEEE Trans. Neural Syst. Rehabil. Eng. 2022, 31, 544–553.
30. Fang, B.; Ding, W.; Sun, F.; Shan, J.; Wang, X.; Wang, C.; Zhang, X. Brain-computer interface integrated with augmented reality for human-robot interaction. IEEE Trans. Cogn. Dev. Syst. 2022, 15, 1702–1711.
31. Angrisani, L.; Apicella, A.; Arpaia, P.; De Benedetto, E.; Donato, N.; Duraccio, L.; Giugliano, S.; Prevete, R. A ML-based Approach to Enhance Metrological Performance of Wearable Brain-Computer Interfaces. In Proceedings of the 2022 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), Ottawa, ON, Canada, 16–19 May 2022; pp. 1–5.
32. Apicella, A.; Arpaia, P.; De Benedetto, E.; Donato, N.; Duraccio, L.; Giugliano, S.; Prevete, R. Enhancement of SSVEPs classification in BCI-based wearable instrumentation through machine learning techniques. IEEE Sens. J. 2022, 22, 9087–9094.
33. Sakkalis, V.; Krana, M.; Farmaki, C.; Bourazanis, C.; Gaitatzis, D.; Pediaditis, M. Augmented reality driven steady-state visual evoked potentials for wheelchair navigation. IEEE Trans. Neural Syst. Rehabil. Eng. 2022, 30, 2960–2969.
34. Huang, X.; Mak, J.; Wears, A.; Price, R.B.; Akcakaya, M.; Ostadabbas, S.; Woody, M.L. Using Neurofeedback from Steady-State Visual Evoked Potentials to Target Affect-Biased Attention in Augmented Reality. In Proceedings of the IEEE 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Glasgow, UK, 11–15 July 2022; pp. 2314–2318.
35. Apicella, A.; Arpaia, P.; Cataldo, A.; De Benedetto, E.; Donato, N.; Duraccio, L.; Giugliano, S.; Prevete, R. Adoption of Machine Learning Techniques to Enhance Classification Performance in Reactive Brain-Computer Interfaces. In Proceedings of the 2022 IEEE International Symposium on Medical Measurements and Applications (MeMeA), Messina, Italy, 22–24 June 2022; pp. 1–5.
36. Sugino, M.; Mori, F.; Tanaka, M.; Kotani, K.; Jimbo, Y. Augmented Reality Brain–Computer Interface with Spatial Awareness. IEEJ Trans. Electr. Electron. Eng. 2022, 17, 1820–1822.
37. Liu, K.; Yu, Y.; Liu, Y.; Tang, J.; Liang, X.; Chu, X.; Zhou, Z. A novel brain-controlled wheelchair combined with computer vision and augmented reality. Biomed. Eng. Online 2022, 21, 1–20.
38. He, C.; Du, Y.; Zhao, X. A separable convolutional neural network-based fast recognition method for AR-P300. Front. Hum. Neurosci. 2022, 16, 986928.
39. Du, Y.; Zhao, X. Visual stimulus color effect on SSVEP-BCI in augmented reality. Biomed. Signal Process. Control 2022, 78, 103906.
40. Arpaia, P.; De Benedetto, E.; De Paolis, L.; D’Errico, G.; Donato, N.; Duraccio, L. Performance enhancement of wearable instrumentation for AR-based SSVEP BCI. Measurement 2022, 196, 111188.
41. Zhang, R.; Xu, Z.; Zhang, L.; Cao, L.; Hu, Y.; Lu, B.; Shi, L.; Yao, D.; Zhao, X. The effect of stimulus number on the recognition accuracy and information transfer rate of SSVEP–BCI in augmented reality. J. Neural Eng. 2022, 19, 036010.
42. Horii, S.; Nakauchi, S.; Kitazaki, M. AR-SSVEP for brain-machine interface: Estimating user’s gaze in head-mounted display with USB camera. In Proceedings of the 2015 IEEE Virtual Reality (VR), Arles, France, 23–27 March 2015; pp. 193–194.
43. De Pace, F.; Manuri, F.; Bosco, M.; Sanna, A.; Kaufmann, H. Supporting Human–Robot Interaction by Projected Augmented Reality and a Brain Interface. IEEE Trans. Hum.-Mach. Syst. 2024, 54, 599–608.
44. Zhang, X.; Zhang, T.; Jiang, Y.; Zhang, W.; Lu, Z.; Wang, Y.; Tao, Q. A novel brain-controlled prosthetic hand method integrating AR-SSVEP augmentation, asynchronous control, and machine vision assistance. Heliyon 2024, 10, e26521.
45. Mori, F.; Sugino, M.; Huang, Y.; Kotani, K.; Jimbo, Y. Control of Electric Wheelchair by Brain-Computer Interface Using Mixed Reality and Virtual Sound Source. IEEJ Trans. Electr. Electron. Eng. 2024, 19, 1014–1025.
46. Olivieri, E.; Barresi, G.; Mattos, L.S. BCI-based user training in surgical robotics. In Proceedings of the IEEE 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 4918–4921.
47. Vortmann, L.M.; Kroll, F.; Putze, F. EEG-based classification of internally-and externally-directed attention in an augmented reality paradigm. Front. Hum. Neurosci. 2019, 13, 348.
48. Vortmann, L.M.; Schwenke, L.; Putze, F. Using brain activity patterns to differentiate real and virtual attended targets during augmented reality scenarios. Information 2021, 12, 226.
49. Sanna, A.; Manuri, F.; Fiorenza, J.; De Pace, F. BARI: An Affordable Brain-Augmented Reality Interface to Support Human–Robot Collaboration in Assembly Tasks. Information 2022, 13, 460.
50. De Massari, D.; Pacheco, D.; Malekshahi, R.; Betella, A.; Verschure, P.F.; Birbaumer, N.; Caria, A. Fast mental states decoding in mixed reality. Front. Behav. Neurosci. 2014, 8, 415.
51. Choi, J.; Jo, S. Application of hybrid Brain-Computer Interface with augmented reality on quadcopter control. In Proceedings of the IEEE 2020 8th International Winter Conference on Brain-Computer Interface (BCI), Gangwon, Republic of Korea, 26–28 February 2020; pp. 1–5.
52. Horie, R.; Goto, K.; Ootsuka, Y. Game systems by using a brain computer interface and mixed reality. In Proceedings of the 2018 IEEE 7th Global Conference on Consumer Electronics (GCCE), Nara, Japan, 9–12 October 2018; pp. 431–432.
53. Ji, Z.; Liu, Q.; Xu, W.; Yao, B.; Liu, J.; Zhou, Z. A closed-loop brain-computer interface with augmented reality feedback for industrial human-robot collaboration. Int. J. Adv. Manuf. Technol. 2021, 124, 3083–3098.
54. Sun, H.; Yan, X.; Yang, J.; Li, Q. Development of a brain-controlled robotic arm system for motor imagery based on MR visual guidance. In Proceedings of the 2024 4th International Conference on Bioinformatics and Intelligent Computing, Dalian, China, 12–14 January 2024; pp. 159–164.
Figure 1. BCI focus of this paper.
Figure 2. PRISMA flow chart with search query.
Figure 3. Publication year of the included studies.
Figure 4. Distribution of participants.
Figure 5. Distribution of the studies based on the BCI paradigm.
Figure 6. Classification algorithms employed in the studies.
Figure 7. Feature extraction methods.
Figure 8. Number of system commands.
Table 1. Review articles on AR-BCI technologies.

| Authors | Year | Review Type | Articles Included | Year Range | Reality |
|---|---|---|---|---|---|
| Lotte et al. [8] | 2012 | Non-Systematic | - | - | Virtual |
| Kohli et al. [9] | 2022 | Non-Systematic | - | - | Virtual and Augmented |
| Angrisani et al. [10] | 2023 | Non-Systematic | 20 | 2018–2023 | Augmented |
| Nwagu et al. [11] | 2023 | Systematic | 76 | 2012–2022 | Virtual and Augmented |
| This work | 2024 | Systematic | 41 | 2012–2024 | Augmented |
Table 2. Number of studies for each BCI paradigm.

| BCI Paradigm | Number of Studies |
|---|---|
| Active | 3 |
| Reactive | 32 |
| Passive | 5 |
| Hybrid | 1 |
Table 3. Comparison of EEG devices based on electrode count, price, manufacturer, city, and country.

| Device | Type | Number of Electrodes | Approximate Price (USD) | Manufacturer | City | Country |
|---|---|---|---|---|---|---|
| g.Nautilus | Commercial | 32/64 | $20,000–$30,000 | g.tec | Schiedlberg | Austria |
| Emotiv EPOC | Commercial | 14 | $800–$1000 | Emotiv | San Francisco | USA |
| BioSemi ActiveTwo | Commercial | Up to 256 | $30,000–$60,000+ | BioSemi | Amsterdam | The Netherlands |
| BrainVision actiCHamp | Commercial | 32 to 160 | $25,000–$50,000 | Brain Products GmbH | Gilching | Germany |
| BrainAmp | Commercial | 64 | $20,000–$30,000 | Brain Products GmbH | Gilching | Germany |
| g.USBamp | Commercial | 16/32 | $10,000–$20,000 | g.tec | Schiedlberg | Austria |
| EEG-SMT | Custom | 8 to 32 | $500–$1000 | Olimex | Plovdiv | Bulgaria |
| SynAmps2 | Commercial | 64 | $20,000–$40,000 | Compumedics Neuroscan | Charlotte | USA |
| NeuroElectrics Enobio 32 | Commercial | 32 | $10,000–$15,000 | NeuroElectrics | Barcelona | Spain |
| NeuroElectrics StarStim 8 | Commercial | 8 | $5000–$10,000 | NeuroElectrics | Barcelona | Spain |
| Neuracle EEG | Commercial | 64 | $5000–$15,000 | Neuracle Technology | Changzhou | China |
| Olimex EEG-SMT | Custom | 8 to 32 | $300–$700 | Olimex | Plovdiv | Bulgaria |
| g.MOBIlab+ | Commercial | 4 | $2500–$5000 | g.tec | Schiedlberg | Austria |
| OpenBCI Cyton | Custom | 8 to 16 | $500–$1500 | OpenBCI | Brooklyn | USA |
| Polymate Mini AP108 | Commercial | 8 | $4000–$6000 | Digitex Lab | Tokyo | Japan |
| NextMind | Commercial | 1 | $400–$500 | NextMind | Paris | France |
| B-Bridge | Commercial | 8 | $3000–$5000 | B-Bridge Technology | Cupertino | USA |
Table 4. Studies from the reactive BCI category.

| Reference | Year | Subs | EEG Channels | Commands | Signal Preprocessing | Classification | Feature Extraction | Evaluation |
|---|---|---|---|---|---|---|---|---|
| Putze et al. [14] | 2019 | 12 | 3 | 4 | Band Pass 1–35 Hz | KNN | CCA | Classification Accuracy |
| Park et al. [15] | 2019 | 17 | 3 | 34 | Band Pass 2–54 Hz | EMSI | - | Classification Accuracy, ITR |
| Kim et al. [16] | 2019 | 13 | 2 | 6 | - | shrinkage-rLDA | - | Classification Accuracy |
| Si-Mohammed et al. [17] | 2018 | 24 | 6 | 3 | CSP | LDA | - | - |
| Angrisani et al. [18] | 2018 | 1 | 1 | 2 | - | - | FFT | Measuring Brain Potentials |
| Zhao et al. [19] | 2020 | 10 | 64 | 4 | Band Pass 0.5–45 Hz, PSD | CCA | - | Classification Accuracy, ITR, Power Distribution Topography |
| Liu et al. [20] | 2019 | 8 | 8 | 8 | High Pass, Low Pass and Notch filter | Extended-FBCCA and TRCA | - | Classification Accuracy, ITR |
| Kerous and Liarokapis [21] | 2017 | 2 | 8 | 36 | Band Pass 1–20 Hz | LDA | - | - |
| Chen et al. [22] | 2020 | 12 | 9 | 4 | Notch filter 50 Hz, FFT | FBCCA | - | Classification Accuracy, ITR, Time to complete a robot movement |
| Angrisani et al. [23] | 2019 | 20 | 1 | 2 | PSD, FFT | - | FFT | Classification Accuracy |
| Ke et al. [24] | 2020 | 10 | 8 | 8 | Band Pass 7–90 Hz, Notch filter 50 Hz | extended CCA, ensemble TRCA | - | Classification Accuracy, ITR |
| Zhang et al. [25] | 2023 | 18 | 64 | 9 | FFT | CCA, FBCCA, eOACCA | - | Classification Accuracy |
| Heo et al. [26] | 2022 | 6 | 31 | 4 | High Pass above 0.5 Hz, Low Pass below 50 Hz, Low Pass below 12 Hz | SVM | - | Performance |
| Zhang et al. [27] | 2022 | 12 | 9 | 8 | - | FBCCA | - | Classification Accuracy, ITR |
| Jang et al. [28] | 2023 | 20 | 64 | - | Band Pass 0.1–50 Hz | LSVM, KNN, LDA, DT | - | Amplitude of ERP, Latency |
| Park et al. [29] | 2022 | 13 | 12 | 4 | Band Pass 2–54 Hz | EMSI | - | Classification Accuracy, ITR |
| Fang et al. [30] | 2022 | 6 | 8 | 4 | Band Pass 7–17 Hz, Band Pass 16–32 Hz, Band Pass 25–47 Hz | FB-tCNN | - | Classification Accuracy, ITR |
| Angrisani et al. [31] | 2022 | 20 | 1 | 2 | Band Pass 5–25 Hz | SVM, KNN, ANN | - | Classification Accuracy, Acquisition Time |
| Apicella et al. [32] | 2022 | 9, 20 | 1 | 3, 4 | Band Pass 5–25 Hz | SVM, KNN, ANN | CCA | Classification Accuracy |
| Sakkalis et al. [33] | 2022 | 12 | 4 | 3, 4 | Band Pass 0.5–100 Hz | LDA, KNN, SVM | CCA | Classification Accuracy, ITR |
| Huang et al. [34] | 2022 | 5 | 16 | - | - | FFT | - | SSVEP Competition Scores |
| Apicella et al. [35] | 2022 | 20 | 1 | 2 | Band Pass 9–25 Hz | ANN | FFT, CCA | Classification Accuracy |
| Sugino et al. [36] | 2022 | 4 | 2 | 4 | Band Pass 5–20 Hz | - | - | Classification Accuracy |
| Liu et al. [37] | 2022 | 20 | 5 | 6 | Band Pass 0.5–50 Hz | - | Stepwise LDA | Classification Accuracy, ITR, Average time to reach target |
| He et al. [38] | 2022 | 15 | 32 | 9 | Band Pass 0.1–12 Hz | BLDA, LDA, SWLDA, SVM, SepCNN | - | Classification Accuracy, ITR |
| Du and Zhao [39] | 2022 | 10 | 32 | 4 | Filter 5–40 Hz | CCA, FBCCA | - | Classification Accuracy |
| Arpaia et al. [40] | 2022 | 9 | 2 | 4 | Band Pass 5–25 Hz | CCA, Proposed algorithm | - | Classification Accuracy, Time Response |
| Zhang et al. [41] | 2022 | 13, 5 | 64 | 9, 16, 25, 36 | Band Pass 5–90 Hz | CCA, FBCCA, TRCA | CCA | Classification Accuracy, ITR |
| Horii et al. [42] | 2015 | 8 | 8 | 3 | Band Pass 0.5–60 Hz | SVM | FFT | Classification Accuracy |
| De Pace et al. [43] | 2024 | 22 | 9 | - | - | - | - | SUS, NASA-TLX |
| Zhang et al. [44] | 2024 | 12 | 32 | 8 | Band Pass 8–40 Hz | Center-ECCA-SVM | Extended CCA, FFT | Classification Accuracy |
| Mori et al. [45] | 2024 | 7 | 4 | - | Band Pass 0.5–30 Hz | LDA, SVM | - | Classification Accuracy |
Table 5. Studies from the passive BCI category.

| Reference | Year | Subs | EEG Channels | Commands | Signal Preprocessing | Classification | Feature Extraction | Evaluation Metrics |
|---|---|---|---|---|---|---|---|---|
| Olivieri et al. [46] | 2015 | 10 | 14 | - | - | - | - | Test Trial Time |
| Vortmann et al. [47] | 2019 | 15 | 16 | - | Low Pass at 50 Hz, High Pass at 1 Hz, Notch at 50 Hz | LDA | PSD | Classification Accuracy |
| Vortmann et al. [48] | 2021 | 20 | 16 | - | Band Pass 3–45 Hz, Notch filter 50 Hz | CNN | FBCSP | Classification Accuracy |
| Sanna et al. [49] | 2022 | 10 | 8 | - | - | - | - | SUS, NASA-TLX Questionnaires |
| De Massari et al. [50] | 2014 | 5 | 64 | - | Spatial Filter | LDA, SVM | Spectral Estimation | Classification Accuracy |
Table 6. Studies from the active BCI category.

| Reference | Year | Subs | EEG Channels | Commands | Signal Preprocessing | Classification | Feature Extraction | Evaluation Metrics |
|---|---|---|---|---|---|---|---|---|
| Choi and Jo [51] | 2020 | 2 | 6 | 4 | Low Pass at 40 Hz | CCA, FBCSP | - | Classification Accuracy |
| Horie et al. [52] | 2018 | 14, 16 | 1 | - | - | - | - | One-way ANOVA |
| Ji et al. [53] | 2021 | 12 | 1 | 2 | - | Proposed Algorithm | - | Classification Accuracy |
| Sun et al. [54] | 2024 | 8 | 64 | - | Band Pass 13–30 Hz | SVM | CSP | Classification Accuracy |
