1. Introduction
Authentication—the process of verifying a user’s identity before granting system access—is a fundamental component of information security. Traditionally, authentication methods have relied on three primary factors: something the user knows (e.g., passwords or PINs), something the user has (e.g., security tokens or smart cards), and something the user is (e.g., biometric traits). However, knowledge-based and possession-based methods are susceptible to various attacks, including interception, theft, and social engineering. Even biometric authentication, which leverages inherent physiological or behavioral characteristics such as fingerprints or facial features, can be vulnerable to sophisticated spoofing and presentation attacks [1,2,3].
Recent advances in biometric technologies have expanded the landscape of authentication methods. Some authentication approaches, such as voice recognition, iris scanning, and behavioral biometrics, offer enhanced security, yet each comes with its own limitations regarding accuracy, user acceptance, and vulnerability to forgery. Among these, brain–computer interface (BCI) technology has attracted significant attention as an emerging approach to authentication. BCIs enable direct communication between the human brain and external devices, capturing unique neural activity patterns that are difficult to replicate [1,4,5,6].
Electroencephalography (EEG), a non-invasive technique for recording electrical brain activity, is particularly promising in this context. EEG signals exhibit individual-specific temporal and spectral characteristics, making them highly distinctive and resilient to forgery. Moreover, EEG is sensitive to cognitive and emotional states, further complicating attempts at impersonation [4,6]. These properties position EEG-based authentication as a compelling alternative to traditional biometrics [7,8].
Despite its potential, EEG-based authentication faces several practical challenges. High-density EEG devices can be costly and cumbersome, potentially impacting user comfort and system usability. Furthermore, the non-stationary nature of EEG signals—affected by factors such as user fatigue, mood, or environmental conditions—necessitates advanced signal processing and machine learning techniques to ensure robust and reliable authentication [1,3,9].
BCI technology, by facilitating direct interaction between users and digital systems without muscular input, provides a promising platform for secure authentication. The inherent complexity and uniqueness of brain signals make them an attractive biometric trait for robust identity verification [3]. In this study, we propose and evaluate a comprehensive BCI-based authentication system using a publicly available EEG dataset, the PhysioNet EEG Motor Movement/Imagery Dataset, to assess its generalizability and practical viability. The proposed system operates in two primary phases: registration and authentication. During registration, a user’s EEG signals are recorded, preprocessed, and transformed into feature vectors stored in a secure database. In the authentication phase, new EEG data from the user are similarly processed and compared against the stored templates to verify identity. Access is granted only when a sufficient match is established between the real-time features and the stored data.
The authentication framework comprises four main components: signal acquisition, preprocessing, feature extraction, and classification. Signal acquisition involves recording brainwave activity from users. Preprocessing steps such as noise reduction and artifact removal are applied to enhance data quality. Feature extraction identifies distinctive patterns in the EEG data, which serve as the basis for user differentiation. Finally, a classification model is trained to recognize individual users based on these features, and the resulting models are stored for future authentication attempts.
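To make the two-phase flow concrete, the following Python sketch outlines registration and authentication at a high level. The function names, the in-memory template store, and the cosine-similarity threshold are illustrative assumptions, not the exact implementation used in this work.

```python
import numpy as np

# Illustrative sketch of the registration/authentication flow described above.
# extract_features() stands in for the preprocessing and feature extraction steps.

def extract_features(eeg_segment: np.ndarray) -> np.ndarray:
    """Placeholder for preprocessing + feature extraction (e.g., FLD or DWT)."""
    return eeg_segment.mean(axis=-1)  # stand-in feature vector

template_db: dict[str, np.ndarray] = {}  # secure template storage (simplified)

def register(user_id: str, eeg_segment: np.ndarray) -> None:
    """Registration phase: record, process, and store the user's feature template."""
    template_db[user_id] = extract_features(eeg_segment)

def authenticate(user_id: str, eeg_segment: np.ndarray, threshold: float = 0.9) -> bool:
    """Authentication phase: compare fresh features against the stored template."""
    if user_id not in template_db:
        return False
    probe, template = extract_features(eeg_segment), template_db[user_id]
    similarity = float(np.dot(probe, template) /
                       (np.linalg.norm(probe) * np.linalg.norm(template) + 1e-12))
    return similarity >= threshold
```

In the actual system, the comparison step is performed by a trained classification model rather than a fixed similarity threshold, as described in the following paragraphs.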
The main objectives of this study are twofold: (1) to achieve high authentication accuracy and reliability through robust feature extraction and classification techniques and (2) to enhance user experience by reducing the number of EEG channels required, thereby improving practicality without compromising security. These objectives are demonstrated through two experimental setups, focusing on system performance and usability, respectively.
The remainder of this research is structured as follows: Section 2 reviews the relevant literature, Section 3 details the methodology, Section 4 presents the results and discussion, and Section 5 provides the conclusion and future directions.
2. Literature Review
EEG-based authentication is attracting more interest due to its potential to provide secure, user-specific biometric verification. Several experimental designs have been examined, each utilizing a variety of preprocessing techniques, feature extraction strategies, and classification models to optimize identification performance.
Yu et al. [2] implemented an authentication system based on the P300 BCI paradigm, applying feature extraction through filtering and decimation, and classification using Fisher’s Linear Discriminant (FLD). Using images of individuals as external stimuli, the system achieved an accuracy of 83.1%. Similarly, Nakamura et al. [1] applied average reference preprocessing and extracted power spectral density (PSD) features, which were subsequently classified using a Support Vector Machine (SVM), achieving an accuracy of 94.5%.
Meanwhile, other researchers have employed multiple feature extraction algorithms to enhance recognition accuracy. Kang et al. [5] developed a multimodal feature extraction pipeline incorporating bandpass filtering, Hilbert transform, and notch filtering, achieving 98.93% accuracy using Euclidean distance. Similarly, Zeynali and Seyedarabi [4] conducted a comparative study of three feature extraction techniques: Discrete Fourier Transform, Discrete Wavelet Transform (DWT), and Autoregressive modeling. These features were evaluated with various classifiers, including SVM, Bayesian Networks, and neural networks, achieving accuracies ranging from 84.49% to 92.89%.
Some research has utilized hybrid authentication, such as that of Wu et al. [6], who combined EEG signals with eye-blinking data. EEG features were extracted using Event-Related Potentials and classified with Convolutional Neural Networks (CNNs), while blink data were analyzed using a backpropagation neural network. This multimodal system achieved 97.6% accuracy. Additionally, Zeng et al. [10] explored a bimodal system that integrated facial recognition with EEG signals. They extracted features using FLD and employed Hierarchical Discriminant Component Analysis for classification. Utilizing 195 face images as stimuli, the system achieved an average accuracy of 88.88%.
In addition to visual stimuli, auditory stimuli have also been explored in EEG-based biometric systems. For example, Seha and Hatzinakos [3] examined the use of steady-state Auditory Evoked Potentials, applying Canonical Correlation Analysis for feature extraction and Linear Discriminant Analysis for classification. Their system achieved a classification accuracy of 96.46%, highlighting the potential of auditory stimuli in EEG-based biometrics. Thomas et al. [11] emphasized the significance of high-frequency EEG oscillations, particularly in the gamma band, for user identification. They extracted PSD features and classified them using Mahalanobis distance, achieving an accuracy of 90%.
Several studies have emphasized usability by reducing the number of EEG channels necessary for authentication. Ortega et al. [7] proposed an EEG authentication system using a minimal number of channels. With Principal Component Analysis and Wilcoxon ranking for feature extraction and SVM for classification, the system achieved 100% accuracy on a 13-subject dataset. TajDini et al. [12] proposed an EEG-based authentication system using mental tasks recorded from a reduced number of channels. The study utilized PSD for feature extraction and applied an SVM for classification. Their method achieved an accuracy of 91.6%, demonstrating the feasibility of effective biometric authentication with fewer electrodes.
Waili et al. [13] developed a low-complexity, EEG-based biometric authentication system aimed at enhancing usability in real-world applications. They utilized basic statistical features extracted from raw EEG signals and classified them using a decision tree classifier. The system achieved a reported accuracy of 90.1%, emphasizing feasibility in constrained environments with minimal hardware requirements. Suppiah and Vinod [14] proposed a single-channel EEG biometric system for user identification during relaxed resting states. This system uses PSD and a k-nearest neighbors (k-NN) classifier with majority voting, demonstrating a high identification accuracy of 96.67% across all channels and 97% based on a single channel.
Overall, this body of research highlights a wide variety of effective methodologies. FLD, PSD, and average referencing were among the most frequently used feature extraction and preprocessing techniques, while neural networks and SVMs emerged as the most frequently employed classifiers. The consistent use of multimodal inputs and deep learning architectures reflects a broader trend toward hybrid and adaptive EEG-based authentication systems capable of delivering high performance and robustness. Despite these advancements, existing approaches do not adequately balance the trade-off between security robustness and practical usability in authentication frameworks. Our study aims to bridge this gap by providing a systematic framework that jointly addresses security and usability requirements.
4. Results and Discussion
Traditional classification and deep learning methodologies were applied using different feature extraction methods (FLD and DWT). The classifiers were SVM with an RBF kernel, QDA, k-NN, XGBoost, and MLP. We designed two experiments: Experiment-1 to evaluate the accuracy and performance of the classification models, and Experiment-2 to evaluate system usability based on the number of selected channels. Experiment-2 aimed to enhance the user experience in BCI authentication by minimizing the number of channels or electrodes required to acquire signals. Specifically, we examined the transition from 64 channels to 32 channels and then to 16 channels. The goal was to simplify the BCI authentication process by systematically reducing the number of channels, making it more user-friendly and efficient. Additionally, we compared the results of our model with those of other related studies utilizing the same dataset using machine learning evaluation metrics.
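As an illustration of this setup, the sketch below computes simple per-channel DWT features with PyWavelets and instantiates the evaluated classifier family with scikit-learn and XGBoost. The wavelet choice, decomposition level, sub-band statistics, and hyperparameters are assumptions for demonstration only, not the exact settings used in this study.

```python
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from xgboost import XGBClassifier

def dwt_features(trial: np.ndarray, wavelet: str = "db4", level: int = 4) -> np.ndarray:
    """Flatten per-channel DWT sub-band statistics into a single feature vector.

    `trial` has shape (n_channels, n_samples).
    """
    feats = []
    for channel in trial:
        coeffs = pywt.wavedec(channel, wavelet, level=level)
        for c in coeffs:
            feats.extend([c.mean(), c.std(), np.abs(c).max()])
    return np.asarray(feats)

# Candidate classifiers corresponding to those evaluated in Experiment-1.
classifiers = {
    "SVM (RBF)": SVC(kernel="rbf"),
    "QDA": QuadraticDiscriminantAnalysis(),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "XGBoost": XGBClassifier(),
    "MLP": MLPClassifier(hidden_layer_sizes=(128,), max_iter=500),
}
```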
4.1. Evaluation Metrics
To maintain consistency and relevance in the system evaluation, we conducted a quantitative assessment based on well-established machine learning evaluation metrics for comparison with similar prior studies [5,7,11,14]. This analysis aimed to evaluate the performance, accuracy, and efficacy of our algorithm against established benchmarks in the literature. For model evaluation, the available data were partitioned into training and testing sets, with 80% of the data used for training and 20% for testing. During the training phase, the model was exposed to the training data, allowing it to learn patterns and relationships within that dataset. The model’s performance was then assessed on the testing data, which it had not encountered during training; this approach indicates how well the model generalizes to new, unseen data. The performance of each method was evaluated using machine learning metrics, including accuracy, false acceptance rate (FAR), false rejection rate (FRR), equal error rate (EER), precision, recall, and F1 measure.
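The split itself can be reproduced with a standard utility; in the sketch below, the placeholder data, the random seed, and the stratified option (which keeps every user represented in both partitions) are assumptions made for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder feature matrix X (one row per EEG sample) and user-label vector y
# (10 samples for each of 109 users, mirroring the EEGMMIDB subject count).
X = np.random.randn(1090, 64)
y = np.repeat(np.arange(109), 10)

# 80%/20% split as described above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, random_state=42, stratify=y
)
```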
Accuracy was measured as the percentage of correctly predicted instances among the total number of instances. FAR is the rate at which an unauthorized person is accepted by the system, as shown in Equation (1), where NFA is the number of false acceptances and NIA is the number of impostor attempts. FRR is the rate at which an authorized person is rejected, as shown in Equation (2), where NFR is the number of false rejections and NAA is the number of authenticated attempts. The equal error rate (EER) is shown in Equation (3).
Precision is a critical metric in biometric authentication that measures the system’s ability to identify actual positive instances among the total predicted positive instances. Equation (4) shows the precision calculation, where true positives (TP) are instances the model correctly predicts as positive and false positives (FP) are instances the model incorrectly predicts as positive. Recall is defined as the ratio of true positive predictions to the total number of positive instances. Equation (5) presents recall, where a false negative (FN) is a positive instance that the model incorrectly predicts as negative.
F1 score is computed as the harmonic mean of precision and recall, which then provides a single value for model performance (Equation (6)).
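For reference, the equations cited above are restated below in standard form, consistent with the definitions given in this subsection; the expression for EER, taken at the operating point where FAR and FRR coincide, is a conventional formulation rather than a reproduction of the original equation.

```latex
\begin{align}
\mathrm{FAR} &= \frac{N_{FA}}{N_{IA}} && \text{(1)}\\[2pt]
\mathrm{FRR} &= \frac{N_{FR}}{N_{AA}} && \text{(2)}\\[2pt]
\mathrm{EER} &= \mathrm{FAR} = \mathrm{FRR} \quad \text{at the threshold where the two rates are equal} && \text{(3)}\\[2pt]
\mathrm{Precision} &= \frac{TP}{TP + FP} && \text{(4)}\\[2pt]
\mathrm{Recall} &= \frac{TP}{TP + FN} && \text{(5)}\\[2pt]
F_1 &= 2 \cdot \frac{\mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} && \text{(6)}
\end{align}
```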
4.2. Experiment-1: System Performance
In Experiment-1, we evaluated classification models to achieve high accuracy and reliability. Our study includes a comparison of the results using the DWT and FLD feature extraction methods to assess the performance of machine learning and deep learning classifiers. Each classifier was examined in terms of accuracy, F1 score, recall, and precision, as well as security metrics, such as FAR, FRR, and EER.
Table 3 and Table 4 present the results of the various classification methodologies using the FLD and DWT feature extraction methods. A comparison of the performances with DWT and FLD is illustrated in Figure 4.
QDA: Both the FLD and DWT feature extraction methods yielded notable similarities in the performance of the QDA classifier. Both configurations exhibited exceptional performance, each achieving an accuracy of 99.64%, and both achieved, or came very close to, 100% precision, recall, and F1 scores. In both configurations, the FAR and FRR were extremely low, with values of 0.0035, indicating a minimal risk of misidentification. Regardless of the feature extraction method used, the QDA classifier maintained high reliability, making it an ideal model for sensitive systems. Although both methods exhibited comparable performance, the choice between FLD and DWT may depend on specific application requirements or the nature of the dataset.
k-NN: The integration of FLD with the k-NN classifier demonstrated robust performance in identity classification. With FLD, k-NN achieved an excellent accuracy of 95.79%, allowing for the precise identification of individuals with balanced precision, recall, and F1 scores. The FAR of 0.0365 and FRR of 0.0422 indicate moderate error rates; further optimization could refine the balance between false acceptances and rejections. Conversely, DWT combined with k-NN yielded an accuracy of 83.78% and a precision of 88%. While k-NN had slightly higher error rates with DWT, its balanced EER of 0.0393 underscores its suitability for authentication systems. Overall, the integration of FLD and k-NN offers promising avenues for effective identification, enabling adaptable solutions for various security environments.
XGBoost: With DWT, XGBoost achieved an accuracy of 80.12% and an F1 score of 80%, while maintaining a precision of 81%. The classifier’s error rates (FAR and FRR) remained acceptable for XGBoost with DWT, and an EER of 0.1966 indicates a balanced trade-off between accuracy and error. With FLD, however, XGBoost exhibited an accuracy of 95.79%, indicating excellent performance in identity classification, with balanced precision, recall, and F1 scores of 96%. The FAR of 0.0365 and the FRR of 0.0422 indicate a moderate risk of error, but further optimization could balance false acceptances and rejections. In general, XGBoost with FLD provides superior accuracy and performance for sensitive systems.
MLP: With FLD, MLP achieved a higher accuracy of 92.03%, with balanced precision, recall, and an F1 score of 92% and minimal error rates. MLP with DWT achieved a slightly lower accuracy of 89.58% but still maintained strong performance, with both precision and recall at 90%, indicating robust capabilities and a low rate of incorrect identification. Despite the higher error rates observed in MLP with DWT, the model still presented a low risk of misidentification. These results suggest that, while MLP with FLD offers slightly superior accuracy and lower error rates, MLP with DWT remains a viable option for security-sensitive systems, particularly in contexts where the specific features extracted by DWT prove advantageous.
Simple NN: The performance of simple NN was similar across the FLD and DWT features. With FLD, the simple NN achieved an accuracy of 89.13% and an F1 score of 89%, whereas with DWT, it attained an accuracy of 87.59% and an F1 score of 88%. The simple NN achieved 89% recall and precision with FLD, while 88% recall and precision were obtained with DWT. FAR and FRR, however, exhibited slight variations, with FLD showing lower values than DWT (FLD: FAR = 0.11074, FRR = 0.11090; DWT: FAR = 0.1211, FRR = 0.1244). As evidenced by their EER values, both configurations maintained relatively low risks of individual misidentification (FLD: EER = 0.1082; DWT: EER = 0.1228). In general, both the FLD and DWT configurations of the simple NN are suitable for authentication systems.
CNN: The performance of CNN across the FLD and DWT features was also quite similar. With FLD, CNN achieved an accuracy of 99.41% and an F1 score of 99%, while with DWT, it attained an accuracy of 99.38% and an F1 score of 99%. Both configurations achieved a recall and precision of 99%. In terms of error rates, both the FLD and DWT configurations exhibited minimal risks of individual misidentification, with FLD slightly outperforming DWT in terms of FAR and FRR values (FLD: FAR = 0.0058, FRR = 0.0059; DWT: FAR = 0.0061, FRR = 0.0062). Both configurations demonstrated an optimal balance between false acceptances and false rejections, as evidenced by their EER values (FLD: EER = 0.0059; DWT: EER = 0.0061). Overall, both the FLD and DWT configurations of CNN are considered highly suitable for authentication systems, as they provide exceptional performance and minimal error rates.
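Since the CNN configuration gave the strongest deep learning results, a minimal Keras sketch of a 1D CNN operating on extracted feature vectors is shown below. The layer sizes, kernel widths, and training settings are illustrative assumptions and do not reproduce the exact architecture used in this study.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(n_features: int, n_users: int) -> keras.Model:
    """Illustrative 1D CNN over extracted (FLD or DWT) feature vectors."""
    model = keras.Sequential([
        layers.Input(shape=(n_features, 1)),
        layers.Conv1D(32, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(n_users, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Typical usage: reshape feature vectors to (n_samples, n_features, 1) and train.
# model = build_cnn(n_features=X_train.shape[1], n_users=109)
# model.fit(X_train[..., None], y_train, epochs=30, validation_split=0.1)
```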
To summarize, models employing DWT and FLD feature extraction techniques performed differently across various biometric authentication models. It should be noted that models using the FLD feature extraction generally displayed competitive accuracy, with k-NN achieving the highest accuracy of 80.18%, closely followed by CNN with 78.25%. In contrast, the DWT feature extraction models demonstrated slightly lower accuracy, with CNN leading at 77.85%. Although the QDA models achieved consistent performance across both feature extraction methods, exhibiting accuracies of 75.53%, other models displayed notable variability. Furthermore, the FLD models generally displayed lower FARs, FRRs, and EERs than the DWT models. FLD demonstrated promising performance across multiple biometric authentication models, which underscores the importance of selecting appropriate feature extraction techniques tailored to specific biometric authentication tasks.
4.3. Experiment-2: System Usability and Channel Selection
BCI systems utilize many channels, up to 64 electrodes, to capture brain signals with sufficient spatial resolution. However, this approach presents challenges in terms of system complexity and setup time, as well as user discomfort, thereby affecting the overall user experience. As a result, Experiment-2 was designed to reduce the number of channels while simultaneously maintaining sufficient performance.
All 64 channels were utilized while working with the public dataset in Experiment-1. However, to enhance usability, the number of channels was reduced to 32 and then to 16 to improve the user experience while preserving robust authentication performance. We measured the effect of reducing the channels to 32 and 16 using the evaluation metrics. Two experimental phases were conducted, each utilizing a different subset of channels: in the first phase, 32 channels were used, and in the second phase, 16 channels were used. These channels were carefully selected based on their demonstrated significance and impact in prior research. The 32-channel approach was initially selected based on its use in another study [26], which explored the implications of halving the number of channels.
According to the correlation analysis (Section 3.3), the 16-channel selection included FP1, FP2, and FPZ, all of which were found to be redundant in several studies [7,11,26]. Additionally, the positive impact of selecting these channels was observed in another study [7].
In the set of 32 channels, the following were included: AF3, AF4, AF7, C3, C4, CP1, CP2, CP5, CP6, CZ, F3, F4, F7, F8, FC1, FC2, FC5, FC6, FP1, FP2, FZ, O1, O2, OZ, P3, P4, P7, P8, PO3, PO4, T7, and T8. For the set of 16 channels, the following were selected: AF3, AF7, AFZ, C3, C5, FC1, FC3, FC5, F1, F3, F5, F7, FP1, FPZ, FT7, and FZ.
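A simple way to realize this reduction in software is to index the recorded 64-channel array by channel name, as in the hypothetical helper below; the full 64-channel montage order is assumed to be available from the acquisition software.

```python
import numpy as np

# 16-channel subset used in the second phase of Experiment-2.
SUBSET_16 = ["AF3", "AF7", "AFZ", "C3", "C5", "FC1", "FC3", "FC5",
             "F1", "F3", "F5", "F7", "FP1", "FPZ", "FT7", "FZ"]

def pick_channels(data: np.ndarray, all_names: list[str], keep: list[str]) -> np.ndarray:
    """Return only the rows of `data` (shape: n_channels x n_samples) listed in `keep`."""
    idx = [all_names.index(name) for name in keep]
    return data[idx, :]

# Example (hypothetical names): reduced = pick_channels(raw_64, CHANNEL_NAMES_64, SUBSET_16)
```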
We conducted training sessions for identical models employing FLD and DWT feature extraction by utilizing the 32 selected channels. Table 5 provides a comprehensive summary of the results obtained from these training sessions.
The results demonstrated the performance of various evaluation models with different feature extraction techniques for biometric authentication. The CNN model with FLD achieved the highest accuracy and F1 score of 92.64% and 93%, respectively, indicating superior performance. However, considerable variability in performance across models highlights the importance of model selection. Specifically, models utilizing FLD generally exhibited lower FAR, FRR, and EER values than those using DWT for feature extraction.
We organized training sessions for identical models utilizing both FLD and DWT feature extraction techniques with the 16 selected channels. Table 6 offers a detailed overview of the results derived from these training sessions.
To summarize, this section addressed the issue of increasing usability by systematically reducing the number of channels from 64 to 32 and then to 16 to identify those most relevant for authentication. It was determined that with 32 channels, an accuracy of 92.64% could be achieved, while with 16 channels, an accuracy of 80.18% could be obtained. By reducing the number of channels, the authentication process was streamlined, and the key channels for authentication were highlighted, thereby allowing for the performance and usability of the system to be improved.
4.4. Comparison with Previous Studies
The EEGMMIDB dataset, established in 2000, has maintained a consistent presence in research across a wide range of areas, with positive results. Recent applications of this dataset have extended to the field of authentication, with studies showcasing commendable results, as detailed in Table 7. EEG authentication frameworks have been assessed using this dataset since 2018. For classification, we used machine learning models on EEG signals from the motor imagery tasks performed by subjects and the resting states (REC and ROC) in the dataset. Using EEG data from the motor imagery tasks, we demonstrated the robustness and accuracy of the proposed framework in distinguishing between genuine users and potential attackers. This dataset is thus essential for researchers interested in BCIs and EEG-based authentication, since it provides specific EEG signals linked to distinct mental tasks for developing and evaluating authentication frameworks based on brainwave signals.
Table 7 summarizes the results from previous studies that used the same dataset.
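For readers who wish to reproduce the data setup, the EEGMMIDB recordings can be retrieved programmatically. The sketch below uses MNE-Python as an example loader (our own loading code may differ); runs 1 and 2 of the dataset correspond to the eyes-open and eyes-closed resting baselines.

```python
import mne
from mne.datasets import eegbci

# Download the eyes-open (run 1) and eyes-closed (run 2) baselines for subject 1.
subject = 1
paths = eegbci.load_data(subject, runs=[1, 2])
raws = [mne.io.read_raw_edf(p, preload=True) for p in paths]
raw = mne.concatenate_raws(raws)

# Normalize channel names to the standard 10-10 convention (e.g., "Fp1", "Cz").
eegbci.standardize(raw)
print(raw.info)
```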
Our machine learning models are competitive when compared with prior studies utilizing the same dataset. Notably, our QDA classifier attained an accuracy of 99.64%, exceeding previous benchmarks, wherein the highest accuracy achieved through traditional machine learning was 99.39% [7]. In the deep learning domain, our CNN model achieved an accuracy of 99.41%, compared to studies reporting accuracies ranging from 75% to 98%. Furthermore, our results are very close to those of a study that achieved an accuracy of 99.97% with a 1DCNN-ALSTM [26]. This illustrates the effectiveness and robustness of our deep learning approach in achieving near-optimal accuracy levels.
Although BCI-based authentication systems exhibit high reliability and accuracy, their practical adoption depends heavily on usability and user-friendliness. A critical factor in improving user experience is minimizing the number of signal acquisition channels or electrodes, thereby reducing complexity while maintaining reliable performance.
4.5. Application Interface and Acceptance Test
In this section, we present the BCI-based authentication application, which uses our own data to detect participants’ identities during the rest state. We obtained approval from the Institutional Review Board (IRB) under reference number 24-1295 at the College of Medicine Research Center (CMRC) at King Saud University (KSU) to ensure compliance with ethical standards regarding human subjects. The data reported for this experiment were collected between 5 and 10 May 2023 from 20 participants, all female students at KSU. Participants were asked whether they had any neurological or psychiatric disorders and whether their visual and auditory acuities were normal. None of the participants had prior experience with EEG or BCI devices. All subjects were over the age of 18 and capable of making decisions independently. Sixteen subjects were between the ages of 18 and 25, three between 26 and 35, and one between 36 and 45.
As part of the experiment, the participants were instructed to focus on a specific point to avoid any eye movement or distractions. During the data acquisition process, participants were asked to maintain calmness, relax, and refrain from thinking about anything while placing their hands in a relaxed and comfortable position to prevent muscle movements, which could adversely affect the quality of the data. They were then instructed to abide by the following procedures during the experiment:
Sit comfortably, with hands placed in a relaxed manner.
Focus on a specific point to avoid eye movements or distractions.
Avoid thinking about any matters during the experiment.
Avoid moving hands, feet, or any voluntary muscles during the experiment.
Refrain from swallowing or any other movements that could affect the quality of the signals captured during the experiment.
We built a graphical user interface (GUI) that allows users to record brain signals and use them seamlessly and efficiently during both the registration and authentication phases. A Python script presents a graphical wizard interface built with Tkinter to streamline the BCI user authentication setup process. The wizard comprises multiple steps, guiding users through essential procedures, such as headset preparation and behavioral protocols, to ensure optimal data acquisition quality. Users can navigate between steps using intuitive buttons, including “Next” and “Back.” The wizard also facilitates user registration and login functionalities, with simulated progress indicators that provide real-time feedback on the registration and login processes.
Overall, this wizard aims to enhance the user experience by simplifying the setup procedure and promoting efficient interaction with the BCI system for authentication purposes.
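The stripped-down Tkinter sketch below illustrates the wizard pattern described above (step text, Back/Next navigation, and a progress bar); the step descriptions and window layout are illustrative and do not reproduce the application's exact screens.

```python
import tkinter as tk
from tkinter import ttk

# Illustrative step texts; the real wizard covers headset preparation,
# behavioral protocols, and registration/login.
STEPS = [
    "Step 1: Prepare and moisten the headset electrodes.",
    "Step 2: Sit still, relax, and fixate on the marked point.",
    "Step 3: Press Start to record EEG for registration or login.",
]

class Wizard(tk.Tk):
    def __init__(self):
        super().__init__()
        self.title("BCI Authentication Setup")
        self.index = 0
        self.label = ttk.Label(self, text=STEPS[self.index], wraplength=360, padding=20)
        self.label.pack()
        nav = ttk.Frame(self)
        nav.pack(pady=10)
        ttk.Button(nav, text="Back", command=self.back).pack(side="left", padx=5)
        ttk.Button(nav, text="Next", command=self.next_step).pack(side="left", padx=5)
        self.progress = ttk.Progressbar(self, length=360, maximum=len(STEPS) - 1)
        self.progress.pack(pady=10)

    def back(self):
        self.index = max(0, self.index - 1)
        self.refresh()

    def next_step(self):
        self.index = min(len(STEPS) - 1, self.index + 1)
        self.refresh()

    def refresh(self):
        self.label.config(text=STEPS[self.index])
        self.progress["value"] = self.index

if __name__ == "__main__":
    Wizard().mainloop()
```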
Figure 5 shows the experimental environment with the Emotiv EPOC X EEG device to capture the participants’ brain signals. The Emotiv EPOC X samples EEG signals at a frequency of 128 Hz. This device features 14 channels corresponding to standard international 10–20 system locations, including AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, and AF4.
Figure 6 illustrates a sample recording of a participant’s signal using the EmotivPRO software (version 3.0).
Our prototype application was designed based on [28], which investigated EEG acquisition during the rest state for both open and closed eyes. The recording process was divided into two stages. In the first stage, the participants underwent a one-minute session, which was utilized to train the model. In the second stage, recordings were made for a duration of 10 s to test the model’s performance. A one-minute rest period was allocated between recordings for each participant to prevent the gel on the electrodes from drying out. This methodology aimed to ensure optimal conditions for data collection and minimize any potential sources of interference during the recording process.
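Under these assumptions (128 Hz Emotiv EPOC X data, 14 channels, and a hypothetical 2-s window length that is not specified in the text), the two recordings can be segmented into fixed-length windows for model training and testing as follows.

```python
import numpy as np

FS = 128          # Emotiv EPOC X sampling rate (Hz)
N_CHANNELS = 14   # EPOC X channel count
WINDOW_S = 2      # assumed window length in seconds (illustrative)

def segment(recording: np.ndarray, fs: int = FS, window_s: int = WINDOW_S) -> np.ndarray:
    """Split a (n_channels, n_samples) recording into non-overlapping windows."""
    win = fs * window_s
    n_windows = recording.shape[1] // win
    return np.stack([recording[:, i * win:(i + 1) * win] for i in range(n_windows)])

# One-minute registration recording -> training windows; 10-s login recording -> test windows.
train_rec = np.random.randn(N_CHANNELS, 60 * FS)   # placeholder for real EEG data
test_rec = np.random.randn(N_CHANNELS, 10 * FS)
train_windows = segment(train_rec)   # shape (30, 14, 256) with 2-s windows
test_windows = segment(test_rec)     # shape (5, 14, 256)
```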
The experiments were conducted in a noise-free laboratory at the university to ensure optimal conditions. Participants were instructed to open their eyes intermittently during the experiment in a serene setting, with the aim of inducing a state of rest. To minimize muscle tension, the participants were seated comfortably on chairs with their arms at their sides throughout the study. During the login process, EEG data acquisition took 10 s, as visually represented in the toolbar illustrated in Figure 7. Figure 8 shows the confirmation of granted access.
4.5.1. Application Performance Analyses
The CNN model demonstrated superior accuracy on the EEGMMIDB dataset (as illustrated in Experiment-1 and Experiment-2), which led us to apply the CNN model to our own dataset. Initially, we used the data acquired from the participants during the one-minute period, separating it into training (80%) and validation (20%) sets, following a methodology similar to that used with the EEGMMIDB dataset. A 99.75% accuracy rate was achieved, accompanied by 100% precision, recall, and F1 scores, with a minimal FAR of 0.0025, FRR of 0.0026, and EER of 0.0025. It is evident from these results that the authentication system is highly accurate and reliable.
Another 10-second recording was taken after the application was launched in order to evaluate the prediction model. An accuracy of 99.73% was achieved, accompanied by 100% precision, recall, and F1 scores. This performance shows that the model is reliable and effective in performing accurate authentication within a short period of time.
We conducted latency profiling of our system (Intel Xeon, 2 vCPUs, 13 GB RAM) using the EEGMMIDB dataset (109 users). Table 8 presents a detailed breakdown of offline processing times, FLD feature extraction, and real-time inference. The key metric for real-time application, per-sample classification latency, is low. For the 32-channel model, the total processing latency per sample is under 30 s (29.18 s for CNN inference). This efficiency is consistent across configurations, with 16-channel and 64-channel inference times of 41.21 s and 11.77 s (for the NN classifier), respectively. While the initial feature extraction and model training phases are computationally intensive and performed offline, the optimized inference pipeline demonstrates that our system can operate as a real-time application.
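Per-sample latency of this kind can be measured with a simple timing loop, as in the toy probe below; the placeholder k-NN model stands in for the trained CNN/NN classifiers that Table 8 actually reports.

```python
import time
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Toy stand-in for the trained classifier; in practice the fitted CNN/NN is timed.
X = np.random.randn(500, 64)
y = np.random.randint(0, 10, size=500)
model = KNeighborsClassifier().fit(X, y)

def per_sample_latency(repeats: int = 100) -> float:
    """Average wall-clock time to classify a single feature vector."""
    start = time.perf_counter()
    for _ in range(repeats):
        model.predict(X[:1])
    return (time.perf_counter() - start) / repeats

print(f"Mean per-sample classification latency: {per_sample_latency() * 1000:.2f} ms")
```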
4.5.2. Acceptance Test
After the experiment was completed, the participants were asked to complete a user acceptance survey. The results of this survey show that, of the twenty subjects, seven reported feeling rather stressed during the last week. None of the participants reported feeling tired on the day of the experiment. Regarding the usability of the rest-state (eyes-open) task, participants rated the task as somewhat boring, with four participants rating it as 3 and three participants rating it as 4 on a scale from 1 (Strongly Agree) to 5 (Strongly Disagree). Regarding the attention requirement, one participant found it minimal (rating of 1), while the majority rated it as somewhat attention-demanding (ratings of 3 to 5). Most participants (11 out of 20) could imagine performing the task repeatedly for authentication purposes.
Regarding the usability of the device, after a brief description, most participants (10 out of 20) could imagine putting the headset on by themselves without difficulty. Additionally, the majority (16 out of 20) had a very positive experience with the headset, finding it comfortable to wear for extended periods. All participants expressed no security concerns regarding the use of brainwave authentication.
When asked if they would be willing to use brainwave authentication as their primary authentication method, 11 out of 20 participants responded affirmatively, while 9 were unsure without providing specific reasons. Additionally, none of the participants foresaw any potential problems with these techniques in an authentication system.