Article

EEG-Based Person Identification during Escalating Cognitive Load

Department of Electromagnetic and Biomedical Engineering, Faculty of Electrical Engineering and Information Technology, University of Zilina, 010 26 Zilina, Slovakia
*
Authors to whom correspondence should be addressed.
Sensors 2022, 22(19), 7154; https://doi.org/10.3390/s22197154
Submission received: 25 August 2022 / Revised: 16 September 2022 / Accepted: 17 September 2022 / Published: 21 September 2022

Abstract

With the development of human society, there is an increasing need for reliable person identification and authentication to protect a person’s material and intellectual property. Person identification based on brain signals has captured substantial attention in recent years. These signals are characterized by patterns unique to a specific person and can provide the security and privacy of an individual in biometric identification. This study presents a biometric identification method based on a novel paradigm with escalating cognitive brain load, ranging from relaxing with eyes closed to the end of a serious game comprising three levels of increasing difficulty. The database used contains EEG data from 21 different subjects. Specific patterns of the EEG signals are recognized in the time domain and classified using a 1D Convolutional Neural Network implemented in the MATLAB environment. The ability to identify a person based on the individual tasks corresponding to a given degree of load, as well as on their fusion, is examined by 5-fold cross-validation. Final accuracies of more than 99% and 98% were achieved for the individual tasks and the task fusion, respectively. The reduction of EEG channels is also investigated. The results imply that this approach is suitable for real applications.

1. Introduction

Biometric methods play an irreplaceable role in person identification and verification. With the rising theft of personal access credentials to various services, biometry can be considered substantially more secure. Biometry can be defined as a set of methods designed to identify or verify a person according to an individual’s unique physical (physiological) features or habitual (behavioral) traits, and it is often used for security reasons. Typical physiological characteristics include fingerprint [1], facial image [2], hand geometry [3], iris [4], or retina [5] patterns. An individual’s behavioral attributes can involve handwritten signature [6], voice [7], gait dynamics [8], and others. Falsification of conventional biometric data can be countered by using bioelectric signals, such as electrocardiographic (ECG) [9], electromyographic (EMG) [10], electrooculographic (EOG) [11], and electroencephalographic (EEG) [12] signals, for personal identification and verification. These signals contain unique patterns that are difficult to copy or imitate, thereby preserving the secrecy and privacy of individuals. Consequently, using bioelectric signals as biometric units ensures that the biometric data come from the competent individual who is genuinely attending the registration of the specific system. This is a fundamental requirement for a biometric mechanism to function properly.
Information obtained by acquiring, processing, and analyzing EEG data is utilized in a wide range of applications. The EEG signals reflect the brain’s electrical activity, and the resultant EEG recording represents the summation of the synchronous activity of many neurons with similar spatial localization. This brain activity can be measured non-invasively by surface electrodes placed on the scalp. The primary EEG signal research focuses on the diagnosis, detection, or prediction of diseases such as epilepsy or epileptic seizures [13,14,15], stroke [16,17], schizophrenia [18], Alzheimer’s disease [19], Parkinson’s disease [20], insomnia [21], etc. Further research is oriented toward the possibility of restoration of the ability of human bodily functions through EEG signals within rehabilitation [22,23,24]. The communication path between the brain and the computer can also be used to control the following objects or devices: virtual cursor [25], keyboard [26], wheelchair [27], or intelligent home [28]. The EEG signals can also be beneficial for quantifying neurological biomarkers of sleep stages [29], advanced driver assistance systems [30], lie detection [31], or in the study of the neurological effect of microwave stimulation [32]. Research interest has recently been concentrated on biometric identification and authentication based on these signals. The main advantage is that EEG signals can only be recorded from a living person, thus it is difficult to steal or falsify them [33,34,35].
Several experimental configurations are available for developing a biometric system, differing in paradigm, feature extraction technique, or classification algorithm. One paradigm in biometrics uses the resting state, in which a person remains awake, calm, and relaxed without performing any particular mental task. Others rely on event-related states, including visual or auditory evoked potentials, or on motor or visual imagery while a person performs different tasks. Current studies report high accuracy rates in recognizing people based on EEG signals.
Ma L. et al. [36] investigated a publicly available database. Data were measured by a 64-channel system during the resting state, namely with closed and open eyes. A convolutional neural network was used for the feature extraction and classification of 10 subjects. The highest classification accuracy, 88%, was achieved for open eyes, which exceeded the closed-eyes state in all examined scenarios. Fan Y. et al. [37] proposed a personal identification system combining data augmentation and a convolutional neural network based on resting-state EEG from a public database including 109 subjects. Their system reached an average accuracy of 99.32% using only 14 channels. Sun Y. et al. studied the same public database [38]. In addition to the resting state, four tasks were incorporated both physically and imaginatively, including opening and closing the fists and feet. A 1D Convolutional Long Short-Term Memory Neural Network was proposed for EEG-based biometric identification. The results showed a remarkably high recognition rate of 99.58% when using 16 channels.
Moctezuma L. A. et al. [39] created a system for biometric identification of 27 subjects, who performed 33 repetitions of 5 fictional words in Spanish. Extracted features from 14 channels were based separately on a discrete wavelet transform, nine statistical values, and their six combinations (15 in total). A random forest was utilized for feature classification. The maximum classification accuracy of (95 ± 0.4)% was achieved for all channels and subjects based on the instantaneous energy coefficient for each decomposition level from the discrete wavelet transform per channel. Gui Q. et al. [40] presented an identification and authentication framework based on EEG signals recorded while silently reading select words. The EEG signals were collected from 32 subjects via six channels. The noise level was reduced by averaging one channel and a low-pass filter. Wavelet packet decomposition was used to extract delta, theta, alpha, beta, and gamma frequency bands, and then mean value, standard deviation, and entropy were calculated. The feed-forward multilayer neural network was applied for classification. The classification rate, which recognized one subject or a small group of individuals from others, reached approximately 90%. Jayarathne I. et al. [41] proposed an authentication system based on visualizing four-digit numbers while EEG signals were measured. The EEG signals were acquired from 12 subjects using 14 channels. The alpha and beta frequency bands were investigated, whereas common spatial patterns (CSP) values were used as the main features for classification via the linear discriminant analysis (LDA) algorithm. The maximum achieved accuracy reached 96.97%.
Yap H. Y. et al. [42] dealt with biometry based on two EEG acquisition protocols, namely eyes-closed and visual stimulation by words displayed on an LED screen. They created their own database of eight subjects. The eyes-closed protocol achieved the maximum accuracy of 96.42%, while the visual stimulation protocol attained the highest accuracy of 99.06%. Abbas Seha S. N. and Hatzinakos D. [43] investigated the feasibility of using brainwave responses to auditory stimulation for human biometric recognition. The EEG data for the proposed biometric system were recorded from 21 subjects using 7 channels while listening to 4 acoustic tones. Three different features were evaluated based on the EEG sub-band rhythms’ energy and entropy estimation. Extracted features were classified using discriminant analysis with a maximum recognition rate of 97.18%.
Attalah O. [44] proposed a biometric system for person identification based on EEG signals acquired from 36 subjects during multiple tasks. Four features were based on the mean spectral power estimation in the delta, theta, alpha, and beta EEG frequency sub-bands for each task. For classification, linear discriminant analysis, k-nearest neighbor, and support vector machine were investigated. The highest accuracy was 100% using a k-NN classifier based on multi-tasks for 14 selected channels. Hema C. R. et al. [45] investigated the brain activity of 50 subjects during relaxation, reading, spelling, and math calculation for person identification. A feed-forward neural network and a recurrent neural network were used for classification. The recurrent neural network achieved the highest average accuracy of 95% for the spelling task. Zeynali M. and Seyedarabi H. [46] investigated the performance of an authentication system based on a dataset containing EEG data from seven subjects measured by six channels during five mental activities: resting state, a letter-composition task, a math task, geometric figure rotation, and visual counting. Features were extracted using the discrete Fourier transform, discrete wavelet transform, autoregressive modeling, and entropy. A neural network, a Bayesian network, and a support vector machine were applied for classification. Separately, the influence of a single selected channel on a specific task was examined, where the accuracy rate ranged from 97.07% to 98.3% using a neural network. The O2 channel was considered optimal regardless of the type of mental activity, achieving an accuracy of 95%.
This study presents an EEG-based biometric system that involves escalating the brain’s cognitive load, progressing from relaxation to concentrating on playing a serious game. In this work, we examine the classification of individual tasks (resting state and specific game levels) and the fusion of these tasks separately. Specific EEG channels may provide redundant or suboptimal information. Therefore, an essential point for simplified acquisition, improved measurement comfort, and reduced computational demands is to determine and minimize the number of required channels while maintaining or increasing the accuracy of the subsequent classification. For this reason, the influences of different brain regions, based on different electrode selection combinations, are investigated. The main contributions of this study can be summarized as follows:
  • This study presents a biometric identification method based on a novel paradigm with a unique EEG dataset for person identification, covering the entire spectrum of brain load from the subject relaxing with closed eyes to solving a difficult task.
  • The ability to identify a person with high accuracy was investigated separately for no, low, medium, and high brain loads, as well as for the combination of these loads.
  • This research models the effects of reducing the number of channels, achieving person identification accuracy with relatively few channels that is comparable to studies using a larger number of channels. This can reduce the cost and time of adopting the proposed approach in real-life settings.

2. Materials and Methods

The EEG data was measured by the BIOPAC MP36 acquisition system (BIOPAC Systems Inc., Goleta, CA, USA), including the precise built-in universal amplifiers and 24-bit A/D converters for signal acquisition. One MP36 unit is capable of sensing four channels. This study used two synchronized MP36 units to measure eight channels. A regular EEG cap with 19 electrodes pre-positioned in the International 10–20 montage was used. The cap is made from Lycra-type fabric with recessed tin electrodes. A more detailed description of the measurement system is provided in [47].
The entire presented approach can be summarized into two stages, namely the training and testing stages of the 1D-CNN model, as shown in Figure 1. The training stage includes digital filtering of the raw EEG data (notch and band-pass filter), task selection (resting state, three individual levels, level fusion or game playing, and fusion of all tasks), data segmentation (into segments with a length of 1 s or 1000 samples and half overlap), channel selection (according to recorded scalp regions into eight, four, or two channels), common average re-reference based on available channels, and 1D-CNN model training, validation, and evaluation based on 5-fold CV. The testing stage focuses on an already selected task or task fusion and selected channels. Digital raw data filtering, segmentation, and common average re-reference are preserved. This phase is responsible for determining a person’s identity. Each block is described in the following subsections.

2.1. EEG Data

This study investigates a database consisting of EEG signals from adult subjects previously analyzed in [47]. The EEG database was primarily created to investigate brain activity using spectral analysis while playing a serious game with rising difficulty. In total, EEG measurements of 21 university students (9 males and 12 females) with a mean age of 22.7 years (range: 21–26 years) are included in the database. The EEG signals were measured by eight unipolar channels with a sampling rate of 1 kHz. Individual electrodes were placed on the scalp, as illustrated in Figure 2. The electrode locations include frontal (F3, F4), central (C3, C4), parietal (P3, P4), and occipital (O1, O2) regions in accordance with the international 10–20 scheme. The reference electrode was constituted by connected and grounded auricle electrodes.
The EEG signals were acquired for each subject during the resting state and while playing a serious game. Individual tasks can be divided as follows:
  • Task 1: Close eyes for approximately 60 s;
  • Task 2: The first level (easy) of the serious game;
  • Task 3: The second level (medium) of the serious game;
  • Task 4: The third level (hard) of the serious game. Eight students were unable to complete this level, so the end of the game was subsequently considered the end of this task in all cases.

2.2. Serious Game

A serious game can be defined as the innovative and exciting use of games or gaming elements for purposes more serious than mere “entertainment.” They are beneficial in training cognitive abilities, short- or long-term memory, physical training, and both prevention and physical rehabilitation [47]. The serious game design was focused on the students’ cognitive training in logical thinking. It was realized through logical puzzles with principles based on Boolean algebra. The game is divided into three levels. Each represents a particular difficulty level—easy, medium, and hard—which reflects the escalating cognitive load for the game player.

2.3. Data Pre-Processing

The first filtering stage was composed of a digital notch filter to reduce the 50 Hz powerline interferences and a band-pass filter with cut-off frequencies of 0.5 Hz and 95 Hz to eliminate DC offset and unwanted high-frequency noise (including power line harmonics). The majority of the defined frequency band of the EEG signal was covered [48], including the potentially present gamma band (from 30 Hz to 100 Hz) characteristic for the cognitive load [49]. The band-pass filter was the result of Butterworth low-pass and Butterworth high-pass filter cascading.
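The filtering stage can be sketched in Python with SciPy (the original processing was implemented in MATLAB); the notch quality factor and the 4th filter order are assumptions, as they are not stated above:

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

FS = 1000  # sampling rate in Hz, as in the measurement setup

def preprocess(eeg, fs=FS):
    """Notch out 50 Hz mains interference, then band-pass 0.5-95 Hz
    via a cascade of Butterworth high-pass and low-pass filters."""
    # 50 Hz notch; Q = 30 is an assumed quality factor (not given in the text)
    b_n, a_n = iirnotch(w0=50, Q=30, fs=fs)
    x = filtfilt(b_n, a_n, eeg, axis=-1)
    # Assumed 4th-order Butterworth high-pass (0.5 Hz) and low-pass (95 Hz)
    b_hp, a_hp = butter(4, 0.5, btype="highpass", fs=fs)
    b_lp, a_lp = butter(4, 95, btype="lowpass", fs=fs)
    x = filtfilt(b_hp, a_hp, x, axis=-1)
    return filtfilt(b_lp, a_lp, x, axis=-1)
```

Zero-phase `filtfilt` is used here so the filtering does not shift EEG waveforms in time.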
Subsequently, the common average reference (CAR) method was applied to filtered data to remove the mutual information from all simultaneously recorded channels to increase the signal-to-noise ratio (SNR). It can be calculated as [50]:
$$V_i^{\mathrm{CAR}} = V_i^{\mathrm{ER}} - \frac{1}{N}\sum_{j=1}^{N} V_j^{\mathrm{ER}},$$
where $V_i^{\mathrm{CAR}}$ represents the modified voltage values of the i-th EEG signal (channel) after CAR, $V_i^{\mathrm{ER}}$ represents the voltage between the i-th detecting electrode and the reference electrode, and N is the number of all detecting electrodes.
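The CAR formula above reduces to subtracting the instantaneous cross-channel mean, as in this minimal NumPy sketch (array shapes are illustrative):

```python
import numpy as np

def common_average_reference(eeg):
    """Apply the common average reference.

    eeg: array of shape (n_channels, n_samples), each row referenced to the
    original reference electrode. Returns the CAR re-referenced signals,
    i.e., each channel minus the per-sample mean of all channels."""
    return eeg - eeg.mean(axis=0, keepdims=True)
```

After CAR, the channels sum to zero at every time instant, which removes information common to all electrodes.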

2.4. Dataset Preparation

The EEG signals based on the duration of a particular task were segmented in the time domain using a sliding window into segments (epochs) with a fixed length of 1000 samples (1 s) and a half-overlap of 500 samples (0.5 s) to create a set of data for the subsequent classification process.
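The sliding-window segmentation can be sketched as follows (a minimal NumPy version; array shapes are illustrative):

```python
import numpy as np

def segment(eeg, win=1000, step=500):
    """Cut a (n_channels, n_samples) recording into half-overlapping epochs.

    win  -- window length in samples (1000 samples = 1 s at 1 kHz)
    step -- hop size in samples (500 samples = 0.5 s, i.e., 50% overlap)
    Returns an array of shape (n_segments, n_channels, win)."""
    n_samples = eeg.shape[-1]
    starts = range(0, n_samples - win + 1, step)
    return np.stack([eeg[..., s:s + win] for s in starts])
```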
A 5-fold cross-validation (CV) was used to estimate the predictive performance of the classification model on unknown data. First, the entire dataset was split at a ratio of 0.8 for training data (for 5-fold CV) and 0.2 for test data (for overall testing of the most accurate case). Next, the training data were divided into five subsets of equal size. Four subsets were used as training sets and the remaining subset as the validation set. A total of five training iterations were performed, each corresponding to a unique pair of training and validation sets. The variant of the proposed model with the best accuracy was finally trained and evaluated on the test data. The division of the dataset is depicted in Figure 3. To reduce the bias of the proposed model’s variants, the 5-fold cross-validation was incorporated within the training stage. Subjective bias in the data division was minimized by precise EEG data labeling (resting state, start, and end of individual game levels) in real time by the technician during the data acquisition. In addition, the EEG data acquisition was accompanied by synchronized camera recording, which was used for offline data labeling. Moreover, to reduce the subjective bias of the proposed model, no subjects involved in this study were excluded, even when they could not successfully finish the hardest game level. A more detailed description of the measurement setup and protocol is provided in [47].
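The 80/20 hold-out split followed by 5-fold partitioning of the training data can be sketched at the index level (a simplified version; the random seed and segment-level splitting granularity are assumptions):

```python
import numpy as np

def split_dataset(n_segments, n_folds=5, test_frac=0.2, seed=0):
    """Shuffle segment indices, hold out test_frac for final testing, and
    split the remainder into n_folds (train, validation) index pairs."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_segments)
    n_test = int(round(test_frac * n_segments))
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    folds = []
    for val_part in np.array_split(train_idx, n_folds):
        # each fold validates on one fifth and trains on the rest
        folds.append((np.setdiff1d(train_idx, val_part), val_part))
    return folds, test_idx
```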
The z-score method was used for channel-wise data normalization across all training samples, where the mean value µ is subtracted from each data point x and the result is divided by the standard deviation σ. The normalized value x′ is defined as:
$$x' = \frac{x - \mu}{\sigma},$$
where µ denotes the mean value and σ denotes the standard deviation.
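A channel-wise z-score sketch, under the assumption that the statistics are computed from the training segments only and reused for the test data:

```python
import numpy as np

def zscore_channels(train, test):
    """Channel-wise z-score normalization.

    train, test: arrays of shape (n_segments, n_channels, n_samples).
    The per-channel mean and standard deviation are estimated across all
    training segments and samples, then applied to both sets."""
    mu = train.mean(axis=(0, 2), keepdims=True)
    sigma = train.std(axis=(0, 2), keepdims=True)
    return (train - mu) / sigma, (test - mu) / sigma
```

Reusing the training statistics on the test set avoids leaking test-set information into the normalization.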

2.5. Feature Extraction and Classification

Traditional approaches in EEG-based biometric systems include manually extracted features and conventional classification algorithms such as linear discriminant analysis (LDA) [51,52], k-nearest neighbors (k-NN) [44,53], or support vector machine (SVM) [54,55]. Alternatively, deep learning (DL) algorithms [37,38,56,57,58,59,60,61,62,63] are becoming state-of-the-art in person identification and authentication. These algorithms are able to automate the feature extraction process and eliminate some of the manual human encroachment.
In this work, a one-dimensional convolutional neural network (1D-CNN) is proposed to automatically extract and classify the most unique neurological features that correspond to the individual subjects. Specifically, two variants of the 1D-CNN model were created using Deep Learning Toolbox in MATLAB R2021b (Mathworks, Inc., Natick, MA, USA). The first variant (hereafter referred to as variant A) contains three convolutional blocks, and the second variant (hereafter referred to as variant B) contains an additive fourth convolutional block. The overall network architectures are depicted in Figure 4.
The input layer receives multichannel one-dimensional EEG signals with a segment length of 1000 samples, which are processed independently via multichannel kernel filters in the first convolutional layer. Variant A has three convolutional layers with 32, 64, and 128 filters. Variant B includes four convolutional layers with 32, 64, 128, and 256 filters. Each convolutional kernel performs convolutional operations on local regions with a kernel size of 1 × 5 and extracts certain features from the input data. A Batch Normalization (BN) layer was implemented between each convolutional layer and activation layer. The BN layer can speed up convergence and improve the performance and stability of the neural network. The nonlinear Rectified Linear Unit (ReLU) was applied as the activation function in the convolutional blocks; it sets negative values to 0 and leaves positive values unchanged. A Max Pooling layer was included as the last layer within each convolutional block. These layers were utilized as a downsampling operation, calculating the maximum value over local pooling regions with a size of 1 × 2 and a stride of 1 × 2, which reduced the dimension of the feature maps. Instead of a typical fully connected layer, a Global Average Pooling (GAP) layer is included after the convolutional blocks and produces the temporal average of the feature map from the previous layer. The output layer is a fully connected layer containing 21 neurons, corresponding to the number of output classes, followed by the softmax activation function. The softmax function assigns to each class the probability with which the given input belongs to that class. The class with the highest probability is considered the result of the classification.
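The dimension flow through variant B can be traced with a small helper; “same” convolution padding (so the convolutions preserve the temporal length) is an assumption not stated in the text:

```python
def cnn_output_shapes(input_len=1000, filters=(32, 64, 128, 256), n_classes=21):
    """Trace (layer name, n_filters, temporal length) through the variant B
    stack: each block is conv ('same' padding, length preserved) followed by
    a 1x2 max pool with stride 1x2 (length halved, floor division)."""
    shapes, length = [("input", 1, input_len)], input_len
    for n_filt in filters:
        length //= 2  # conv keeps the length; 1x2 max pool halves it
        shapes.append((f"conv{n_filt}+pool", n_filt, length))
    shapes.append(("global_avg_pool", filters[-1], 1))  # temporal average
    shapes.append(("fc_softmax", n_classes, 1))         # 21 class scores
    return shapes
```

Running this for a 1000-sample segment gives feature-map lengths 500, 250, 125, and 62 through the four blocks, after which GAP collapses the temporal axis entirely.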
The Adam optimization algorithm and cross-entropy loss function are used to train the proposed 1D-CNN model. The model was trained for a maximum of 100 epochs with a mini-batch size of 64 and a constant learning rate of 0.001. The training data were randomly shuffled before every epoch. The output network was considered the network that achieved the best validation loss during 5-fold cross-validation.

2.6. Evaluation Metrics

The classification model performance was evaluated on selected statistical measures: Average Accuracy, Macro Average Precision, Macro Average Recall, and Macro Average F1 score for the multiclass problem. They can be defined as follows [64]:
$$\mathrm{Average\ Accuracy} = \left(\sum_{k=1}^{K} \frac{TP_k + TN_k}{TP_k + TN_k + FP_k + FN_k}\right) \times \frac{100\%}{K},$$
$$\mathrm{Macro\ Average\ Precision} = \left(\sum_{k=1}^{K} \frac{TP_k}{TP_k + FP_k}\right) \times \frac{100\%}{K},$$
$$\mathrm{Macro\ Average\ Recall} = \left(\sum_{k=1}^{K} \frac{TP_k}{TP_k + FN_k}\right) \times \frac{100\%}{K},$$
$$\mathrm{Macro\ Average\ F1\ score} = \frac{2 \times \mathrm{Macro\ Average\ Recall} \times \mathrm{Macro\ Average\ Precision}}{\mathrm{Macro\ Average\ Recall} + \mathrm{Macro\ Average\ Precision}},$$
where TP is a true positive, TN a true negative, FP a false positive, FN a false negative, k is a particular class, and K is the number of all classes. Average Accuracy is the percentage of correct predictions over total observations and was considered the fundamental metric. Macro Average Precision is the average percentage of correctly predicted positive observations relative to all predicted positive observations. Macro Average Recall is the average percentage of correctly predicted positive observations relative to all actual positive observations. Macro Average F1 score is the harmonic mean of Macro Average Precision and Macro Average Recall.
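The macro-averaged metrics can be computed directly from a confusion matrix, as in this sketch (the rows-are-true, columns-are-predicted orientation is an assumption):

```python
import numpy as np

def macro_metrics(conf):
    """Macro-averaged precision, recall, and F1 (in percent) from a KxK
    confusion matrix with rows = true class, columns = predicted class."""
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)               # correct predictions per class
    fp = conf.sum(axis=0) - tp       # other classes predicted as k
    fn = conf.sum(axis=1) - tp       # class k predicted as something else
    precision = np.mean(tp / (tp + fp)) * 100
    recall = np.mean(tp / (tp + fn)) * 100
    f1 = 2 * recall * precision / (recall + precision)
    return precision, recall, f1
```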

3. Results and Discussion

This section compares the performance of both variants of the proposed 1D-CNN model. Table 1, Table 2, Table 3, Table 4 and Table 5 show the results for the individual states, corresponding to the increasing brain load and fusion of these states on the validation subset of data. Variant performances were evaluated based on the achieved Average Accuracy, Macro Average Precision, Macro Average Recall, and Macro Average F1 score. The impact of channel reduction and classification model on person recognition is investigated using a 5-fold CV. For this reason, all the subsequent results are given in the form of mean value ± standard deviation.
The results are shown for all eight channels and for reduced numbers of channels. Systematic channel reduction is performed based on combinations of the individual measured scalp regions. Each region comprises two channels (see Figure 2). In the first step, the number of channels is halved, i.e., reduced to two regions, yielding 6 combinations: frontal (F) + central (C), frontal (F) + occipital (O), frontal (F) + parietal (P), central (C) + occipital (O), central (C) + parietal (P), and parietal (P) + occipital (O). In the second step, the number of channels is limited to a quarter of the original number, where each region is examined individually, i.e., through four combinations. A total of 104 different cases are investigated.
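The systematic enumeration of channel subsets described above can be sketched with itertools:

```python
from itertools import combinations

# region -> electrode pairs, following the 10-20 placement in the text
REGIONS = {"F": ["F3", "F4"], "C": ["C3", "C4"],
           "P": ["P3", "P4"], "O": ["O1", "O2"]}

def channel_configurations():
    """Enumerate the examined channel subsets: all four regions
    (8 channels), every pair of regions (4 channels), and every
    single region (2 channels)."""
    configs = [tuple(REGIONS)]                 # 1 case: all 8 channels
    configs += list(combinations(REGIONS, 2))  # 6 four-channel cases
    configs += [(r,) for r in REGIONS]         # 4 two-channel cases
    return configs
```

This yields the 11 channel configurations evaluated per task.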
Table 1 presents the results for person identification based on the resting state with closed eyes. For all channels, an average accuracy rate of (99.17 ± 0.41)% and (99.46 ± 0.44)% are achieved for variant A and variant B, respectively. In half of the channels, the combination of the central and occipital region shows the highest accuracy of (95.18 ± 1.25)% for variant A and (97.52 ± 1.01)% for variant B. However, a significant decline can be observed when using only one region, where, for the most successful central region, a decrease of up to (34.99 ± 1.64)% is found for variant A and (27.69 ± 0.04)% for variant B compared to the combination of the central + occipital region.
The results of person identification during the first level of the serious game are reported in Table 2. The average accuracy of person identification is (98.99 ± 0.45)% for variant A and (99.61 ± 0.24)% for variant B using eight channels. The highest average accuracy, (94.21 ± 1.32)% for variant A, belongs to the central + occipital region, and (96.71 ± 0.67)% for variant B corresponds to the central + parietal region. For the single region, the average accuracy is less than 65% in all cases.
Table 3 summarizes the results of person identification during the second level of the serious game. As can be seen, the average accuracy of person identification reaches (99.41 ± 0.41)% for variant A and (99.72 ± 0.13)% for variant B when all channels are used. As in the majority of previous results, reducing channels to two regions is most successful for the central + occipital region. The average classification accuracies are (95.62 ± 0.78)% and (97.50 ± 0.73)% for variant A and variant B, respectively. In the case of a single region, the best results can be observed in the occipital region, but with a significant decrease of (61.42 ± 2.05)% for variant A and (67.80 ± 1.71)% for variant B compared to the more examined regions.
The results of person identification according to the third level of the serious game are reflected in Table 4. Including all channels, average accuracy rates of (98.33 ± 0.18)% for variant A and (99.20 ± 0.16)% for variant B are achieved. In half of the channels, the highest average accuracy reached (93.14 ± 0.83)% in the central + parietal region for variant A, and the highest accuracy rate of (95.84 ± 0.45)% corresponds to the frontal + parietal region for variant B. For the single region, the average accuracy does not exceed 65% in all cases.
Furthermore, person identification is investigated based on the fusion of EEG signals recorded while playing the serious game with increasing difficulty (levels 1–3). Based on the present results for the individual levels, three combinations of two scalp regions are selected: frontal + parietal, central + parietal, and central + occipital. Single-region (two-channel) configurations are excluded from further analysis. The values of the individual classification metrics for this scenario are given in Table 5. For all channels, the average accuracy is (98.82 ± 0.29)% and (97.84 ± 0.18)% for variants A and B, respectively. For the two-region cases, the highest precision rate is achieved by the central + parietal regions with a value of (89.88 ± 1.06)% for variant A and by frontal + parietal with a value of (92.58 ± 0.49)% for variant B.
The last evaluated scenario represents a fusion of resting-state and all game levels. The same combination of scalp regions as in the previous case is considered. The average accuracy corresponding to variant A is (98.08 ± 0.30)% and (98.97 ± 0.12)% to variant B for all channels, see Table 6. The highest accuracy for the reduced channels can be observed for the central + parietal combination, namely (91.01 ± 0.77)% and (93.39 ± 0.42)% for variant A and variant B, respectively.
The following figures depict a graphical representation of the aforementioned results in the form of bar graphs with the indicated standard deviation. The comparison of the average accuracy of both model variants for person identification when all eight channels were applied is shown in Figure 5. These results are shown separately for the resting state, playing a particular game level, playing all game levels, and all states together. The subsequent two figures show this metric for regions reduced from four to two (from eight channels to four channels), specifically in Figure 6 for variant A and Figure 7 for variant B. As can be seen, the results indicate that the additive fourth convolutional block, i.e., the increased model complexity, improves the accuracy of person identification among 21 people on the validation subset.
After evaluating the prediction performance of the two variants of the 1D-CNN model with different combinations of input EEG channels, all data from the 5-fold CV are used to train the final model corresponding to each mental task. Based on the previous results, only the hyperparameters of variant B are subsequently considered. The final model is evaluated on the held-out test data subset. The corresponding classification metrics for the final model evaluation of individual tasks and task fusions are presented in Table 7. Using all channels, individual tasks achieve an average accuracy of more than 99%, and task fusions more than 98%, when identifying 21 persons. When the channels are halved, more than 97% average accuracy is obtained for the resting state, level 1, and level 2. In the case of level 3, the accuracy is 95.74%. The lower accuracy of this level compared to levels 1 and 2 can be attributed to its complexity and to the fact that not all subjects were able to complete it; brain activity varied over the duration of level 3 as concentration gradually declined due to frustration over the inability to finish the level. A decrease in accuracy with channel reduction can also be seen for task fusion, specifically 93.33% and 93.84% for the game-playing fusion and the fusion of all tasks, respectively. A graphical representation of the achieved accuracy results is presented in Figure 8.
As a representative example, the normalized confusion matrix for the prediction performance with eight channels and all-task fusion is depicted in Figure 9. The normalized confusion matrix displays the correctly and incorrectly classified observations for each predicted class as percentages of the total observations of that predicted class. Because the matrix is normalized by column, the diagonal cells correspond to the class-wise precision (positive predictive value). Confusion matrices for all investigated cases are enclosed in the Supplementary Material.
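The column normalization described above can be illustrated with a short Python sketch, assuming the raw confusion matrix counts observations with rows as true classes and columns as predicted classes (a toy 2-class matrix is used here):

```python
def column_normalize(cm):
    """Normalize a confusion matrix by predicted class (columns) so that
    each diagonal cell equals the class-wise precision, in percent."""
    n = len(cm)
    out = [[0.0] * n for _ in range(n)]
    for j in range(n):                        # predicted class j
        col_sum = sum(cm[i][j] for i in range(n))
        if col_sum:
            for i in range(n):
                out[i][j] = 100.0 * cm[i][j] / col_sum
    return out

# Toy counts: rows = true class, columns = predicted class
cm = [[40, 5],
      [10, 45]]
norm = column_normalize(cm)
# Diagonal = precision: 40/50 = 80.0 % for class 0, 45/50 = 90.0 % for class 1
```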
The overall results obtained in this study reveal that EEG signals represent an effective biometric identifier for user identification. As shown in Figure 5, Figure 6, Figure 7 and Figure 8, the mental task in the form of playing a serious game yielded slightly higher accuracy than the resting state paradigm. Although some performance degradation was observed when combining individual game levels, fusing tasks, or reducing the number of channels, the overall accuracy was still sustained above 93%.
A brief comparison of the proposed method with existing research is given in Table 8, which summarizes key information from several previous EEG-based person identification studies that utilize deep learning algorithms. Our results are difficult to compare quantitatively with others, as the EEG data used here do not originate from any previously investigated database. It should be noted that, to the best of our knowledge, the paradigm of playing a serious game has not yet been studied for person identification. The performance of the proposed paradigm is competitive with other state-of-the-art studies, and the high person identification accuracy indicates its applicability in biometric systems. Using a relatively simple 1D-CNN architecture and only eight channels, fewer than in many other studies, accuracies above 99% for individual tasks and above 98% for the fusion of these tasks are achieved.
Given the significant reduction in the number of channels (eight or four) and the relatively high accuracy (above 98%), the EEG signal modality is a promising alternative to other biometric signals from the human body. A major advantage of a bioelectric modality such as the EEG is the signal's unique patterns, which would be challenging to copy or imitate in a real-life scenario while operating an EEG-based biometric system. The main cost of this approach lies in the acquisition system, namely its price and the time required for the measurement setup and acquisition protocol. However, with a non-research-grade EEG device, properly selected channels, and a well-designed game, the presented approach could be a cost-effective solution for practical applications.
In the continuation of this research, we plan to create a real biometric application based on the presented procedures and dataset, with the possibility of expanding the number of persons. In addition, tests with repeated game sessions would be conducted to assess the stability of the EEG signals with respect to learning effects and thus demonstrate the suitability and sustainability of the proposed protocol for identification.

4. Conclusions

This study deals with person identification using the EEG signals of 21 subjects recorded during rest and while playing a serious game. The presented protocol has not previously been researched in the context of biometric identification. The EEG records were divided into four task groups: a resting state with eyes closed and three game levels. Person identification was performed for each task independently; a fusion of the three game levels and a fusion of all tasks were also created, giving a total of five situations. The 1D-CNN deep learning algorithm performed both feature extraction in the time domain and classification. Two variants of the classification model were evaluated using a 5-fold CV, the first with three and the second with four convolutional blocks. At the same time, the influence of systematically reducing the number of EEG channels was investigated, gradually assessing the contribution of individual scalp regions to classification accuracy. The resulting model was evaluated on a test data set. The average accuracy exceeded 99% for individual tasks and 98% for task fusion. When the number of channels was halved, the classification accuracy decreased but remained above 97% for the resting state, level 1, and level 2; for level 3 the accuracy was about 95%, and for the task fusions above 93%. The decrease in accuracy for level 3 is probably related to some subjects failing to complete it. The use of only one scalp region was rejected because satisfactory results were not achieved.
The presented approach can provide person identification for multi-level system security and continuous person identification under varying brain load. A biometric system based on changing brain loads is advantageous in real-life scenarios, as a person performing various complex tasks remains identifiable and is therefore not denied access to a particular device. Possible future implementations include identifying and recognizing students during online exams to eliminate cheating, or home-office staff with different access competencies.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/s22197154/s1, Figure S1: The confusion matrix of the final model’s classification accuracy considering all channels and resting state; Figure S2: The confusion matrix of the final model’s classification accuracy considering reduced channels and resting state; Figure S3: The confusion matrix of the final model’s classification considering all channels and Level 1; Figure S4: The confusion matrix of the final model’s classification considering reduced channels and Level 1; Figure S5: The confusion matrix of the final model’s classification considering all channels and Level 2; Figure S6: The confusion matrix of the final model’s classification considering reduced channels and Level 2; Figure S7: The confusion matrix of the final model’s classification considering all channels and Level 3; Figure S8: The confusion matrix of the final model’s classification considering reduced channels and Level 3; Figure S9: The confusion matrix of the final model’s classification considering all channels and all game levels; Figure S10: The confusion matrix of the final model’s classification considering reduced channels and all game levels; Figure S11: The confusion matrix of the final model’s classification accuracy for different cases considering all channels and task fusion; Figure S12: The confusion matrix of the final model’s classification accuracy for different cases considering reduced channels and task fusion.

Author Contributions

Conceptualization, I.K. and B.B.; Methodology, I.K., B.B. and M.S.; Supervision, B.B.; Visualization, I.K. and M.S.; Writing—original draft, I.K.; Writing—review & editing, B.B. and M.S. All authors have read and agreed to the published version of the manuscript.

Funding

This publication has been produced with the support of the Integrated Infrastructure Operational Program for the project: Creation of a Digital Biobank to support the systemic public research infrastructure, ITMS: 313011AFG4, co-financed by the European Regional Development Fund.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kouamo, S.; Tangha, C. Fingerprint Recognition with Artificial Neural Networks: Application to E-Learning. J. Intell. Learn. Syst. Appl. 2016, 8, 39. [Google Scholar] [CrossRef]
  2. Weng, R.; Lu, J.; Tan, Y.P. Robust Point Set Matching for Partial Face Recognition. IEEE Trans. Image Process. 2016, 25, 1163–1176. [Google Scholar] [CrossRef]
  3. Kumar, R. Hand Image Biometric Based Personal Authentication System. Stud. Comput. Intell. 2017, 660, 201–226. [Google Scholar] [CrossRef]
  4. Chirchi, V.R.E.; Waghmare, L.M.; Chirchi, E.R. Iris Biometric Recognition for Person Identification in Security Systems. Int. J. Comput. Appl. 2011, 9, 24. [Google Scholar] [CrossRef]
  5. Suganya, M.; Krishnakumari, K. A Novel Retina Based Biometric Privacy Using Visual Cryptography. Int. J. Comput. Sci. Netw. Secur. 2016, 16, 76. [Google Scholar]
  6. Kurowski, M.; Sroczyński, A.; Bogdanis, G.; Czyżewski, A. An Automated Method for Biometric Handwritten Signature Authentication Employing Neural Networks. Electronics 2021, 10, 456. [Google Scholar] [CrossRef]
  7. Shah, H.N.M.; Ab Rashid, M.Z.; Abdollah, M.F.; Kamarudin, M.N.; Lin, C.K.; Kamis, Z. Biometric Voice Recognition in Security System. Indian J. Sci. Technol. 2014, 7, 104. [Google Scholar] [CrossRef]
  8. Sudha, L.R.; Bhavani, D.R. Biometric Authorization System Using Gait Biometry. arXiv 2011, arXiv:1108.6294. [Google Scholar]
  9. Diab, M.O.; Seif, A.; Sabbah, M.; El-Abed, M.; Aloulou, N. A Review on ECG-Based Biometric Authentication Systems. In Hidden Biometrics; Springer: Singapore, 2020; pp. 17–44. [Google Scholar]
  10. Raurale, S.A.; McAllister, J.; del Rincon, J.M. EMG Biometric Systems Based on Different Wrist-Hand Movements. IEEE Access 2021, 9, 12256–12266. [Google Scholar] [CrossRef]
  11. Abo-Zahhad, M.; Ahmed, S.M.; Abbas, S.N. A Novel Biometric Approach for Human Identification and Verification Using Eye Blinking Signal. IEEE Signal Process. Lett. 2015, 22, 876–880. [Google Scholar] [CrossRef]
  12. Paranjape, R.B.; Mahovsky, J.; Benedicenti, L.; Koles, Z. The Electroencephalogram as a Biometric. Can. Conf. Electr. Comput. Eng. 2001, 2, 1363–1366. [Google Scholar] [CrossRef]
  13. Acharya, U.R.; Molinari, F.; Sree, S.V.; Chattopadhyay, S.; Ng, K.H.; Suri, J.S. Automated Diagnosis of Epileptic EEG Using Entropies. Biomed. Signal Process Control 2012, 7, 401–408. [Google Scholar] [CrossRef] [Green Version]
  14. Tzallas, A.T.; Tsipouras, M.G.; Fotiadis, D.I. Epileptic Seizure Detection in EEGs Using Time-Frequency Analysis. IEEE Trans. Inf. Technol. Biomed. 2009, 13, 703–710. [Google Scholar] [CrossRef] [PubMed]
  15. Tsiouris, Κ.; Pezoulas, V.C.; Zervakis, M.; Konitsiotis, S.; Koutsouris, D.D.; Fotiadis, D.I. A Long Short-Term Memory Deep Learning Network for the Prediction of Epileptic Seizures Using EEG Signals. Comput. Biol. Med. 2018, 99, 24–37. [Google Scholar] [CrossRef]
  16. Hussain, I.; Park, S.J. HealthSOS: Real-Time Health Monitoring System for Stroke Prognostics. IEEE Access 2020, 8, 213574–213586. [Google Scholar] [CrossRef]
  17. Choi, Y.A.; Park, S.J.; Jun, J.A.; Pyo, C.S.; Cho, K.H.; Lee, H.S.; Yu, J.H. Deep Learning-Based Stroke Disease Prediction System Using Real-Time Bio Signals. Sensors 2021, 21, 4269. [Google Scholar] [CrossRef] [PubMed]
  18. Shoeibi, A.; Sadeghi, D.; Moridian, P.; Ghassemi, N.; Heras, J.; Alizadehsani, R.; Khadem, A.; Kong, Y.; Nahavandi, S.; Zhang, Y.D.; et al. Automatic Diagnosis of Schizophrenia in EEG Signals Using CNN-LSTM Models. Front. Neuroinform. 2021, 15, 777977. [Google Scholar] [CrossRef]
  19. Safi, M.S.; Safi, S.M.M. Early Detection of Alzheimer’s Disease from EEG Signals Using Hjorth Parameters. Biomed. Signal Process. Control 2021, 65, 102338. [Google Scholar] [CrossRef]
  20. Oh, S.L.; Hagiwara, Y.; Raghavendra, U.; Yuvaraj, R.; Arunkumar, N.; Murugappan, M.; Acharya, U.R. A Deep Learning Approach for Parkinson’s Disease Diagnosis from EEG Signals. Neural Comput. Appl. 2020, 32, 10927–10933. [Google Scholar] [CrossRef]
  21. Yang, B.; Liu, H. Automatic Identification of Insomnia Based on Single-Channel EEG Labelled with Sleep Stage Annotations. IEEE Access 2020, 8, 104281–104291. [Google Scholar] [CrossRef]
  22. Foong, R.; Tang, N.; Chew, E.; Chua, K.S.G.; Ang, K.K.; Quek, C.; Guan, C.; Phua, K.S.; Kuah, C.W.K.; Deshmukh, V.A.; et al. Assessment of the Efficacy of EEG-Based MI-BCI with Visual Feedback and EEG Correlates of Mental Fatigue for Upper-Limb Stroke Rehabilitation. IEEE Trans. Biomed. Eng. 2020, 67, 786–795. [Google Scholar] [CrossRef] [PubMed]
  23. Lazarou, I.; Nikolopoulos, S.; Petrantonakis, P.C.; Kompatsiaris, I.; Tsolaki, M. EEG-Based Brain–Computer Interfaces for Communication and Rehabilitation of People with Motor Impairment: A Novel Approach of the 21st Century. Front. Hum. Neurosci. 2018, 12, 14. [Google Scholar] [CrossRef] [Green Version]
  24. Lin, B.S.; Chen, J.L.; Hsu, H.C. Novel Upper-Limb Rehabilitation System Based on Attention Technology for Post-Stroke Patients: A Preliminary Study. IEEE Access 2017, 6, 2720–2731. [Google Scholar] [CrossRef]
  25. Bi, L.; Lian, J.; Jie, K.; Lai, R.; Liu, Y. A Speed and Direction-Based Cursor Control System with P300 and SSVEP. Biomed. Signal Process. Control 2014, 14, 126–133. [Google Scholar] [CrossRef]
  26. Nguyen, T.H.; Chung, W.Y. A Single-Channel SSVEP-Based BCI Speller Using Deep Learning. IEEE Access 2019, 7, 1752–1763. [Google Scholar] [CrossRef]
  27. Li, J.; Liang, J.; Zhao, Q.; Li, J.; Hong, K.; Zhang, L. Design of Assistive Wheelchair System Directly Steered by Human Thoughts. Int. J. Neural Syst. 2013, 23, 1350013. [Google Scholar] [CrossRef]
  28. Shukla, P.K.; Chaurasiya, R.K.; Verma, S. Performance Improvement of P300-Based Home Appliances Control Classification Using Convolution Neural Network. Biomed. Signal Process. Control 2021, 63, 102220. [Google Scholar] [CrossRef]
  29. Hussain, I.; Hossain, M.A.; Jany, R.; Bari, M.A.; Uddin, M.; Kamal, A.R.M.; Ku, Y.; Kim, J.-S. Quantitative Evaluation of EEG-Biomarkers for Prediction of Sleep Stages. Sensors 2022, 22, 3079. [Google Scholar] [CrossRef]
  30. Hussain, I.; Young, S.; Park, S.J. Driving-Induced Neurological Biomarkers in an Advanced Driver-Assistance System. Sensors 2021, 21, 6985. [Google Scholar] [CrossRef]
  31. Saini, N.; Bhardwaj, S.; Agarwal, R. Classification of EEG Signals Using Hybrid Combination of Features for Lie Detection. Neural Comput. Appl. 2020, 32, 3777–3787. [Google Scholar] [CrossRef]
  32. Hussain, I.; Young, S.; Kim, C.H.; Benjamin, H.C.M.; Park, S.J. Quantifying Physiological Biomarkers of a Microwave Brain Stimulation Device. Sensors 2021, 21, 1896. [Google Scholar] [CrossRef] [PubMed]
  33. Chan, H.L.; Kuo, P.C.; Cheng, C.Y.; Chen, Y.S. Challenges and Future Perspectives on Electroencephalogram-Based Biometrics in Person Recognition. Front. Neuroinform. 2018, 12, 66. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  34. Wang, M.; Hu, J.; Abbass, H.A. BrainPrint: EEG Biometric Identification Based on Analyzing Brain Connectivity Graphs. Pattern Recognit. 2020, 105, 107381. [Google Scholar] [CrossRef]
  35. Palaniappan, R.; Mandic, D.P. Biometrics from Brain Electrical Activity: A Machine Learning Approach. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 738–742. [Google Scholar] [CrossRef]
  36. Ma, L.; Minett, J.W.; Blu, T.; Wang, W.S.Y. Resting State EEG-Based Biometrics for Individual Identification Using Convolutional Neural Networks. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS, Milan, Italy, 25–29 August 2015. [Google Scholar]
  37. Fan, Y.; Shi, X.; Li, Q. CNN-Based Personal Identification System Using Resting State Electroencephalography. Comput. Intell. Neurosci. 2021, 2021, 1160454. [Google Scholar] [CrossRef]
  38. Sun, Y.; Lo, F.P.W.; Lo, B. EEG-Based User Identification System Using 1D-Convolutional Long Short-Term Memory Neural Networks. Expert Syst. Appl. 2019, 125, 259–267. [Google Scholar] [CrossRef]
  39. Moctezuma, L.A.; Torres-García, A.A.; Villaseñor-Pineda, L.; Carrillo, M. Subjects Identification Using EEG-Recorded Imagined Speech. Expert Syst. Appl. 2019, 118, 201–208. [Google Scholar] [CrossRef]
  40. Gui, Q.; Jin, Z.; Xu, W. Exploring EEG-Based Biometrics for User Identification and Authentication. In Proceedings of the 2014 IEEE Signal Processing in Medicine and Biology Symposium, IEEE SPMB 2014-Proceedings, Philadelphia, PA, USA, 13 December 2015. [Google Scholar]
  41. Jayarathne, I.; Cohen, M.; Amarakeerthi, S. BrainID: Development of an EEG-Based Biometric Authentication System. In Proceedings of the 7th IEEE Annual Information Technology, Electronics and Mobile Communication Conference, IEEE IEMCON 2016, Vancouver, BC, Canada, 13–15 October 2016. [Google Scholar]
  42. Yap, H.Y.; Choo, Y.H.; Mohd Yusoh, Z.I.; Khoh, W.H. Person Authentication Based on Eye-Closed and Visual Stimulation Using EEG Signals. Brain Inf. 2021, 8, 21. [Google Scholar] [CrossRef]
  43. Abbas Seha, S.N.; Hatzinakos, D. A New Approach for EEG-Based Biometric Authentication Using Auditory Stimulation. In Proceedings of the 2019 International Conference on Biometrics, ICB 2019, Crete, Greece, 4–7 June 2019. [Google Scholar]
  44. Attallah, O. Multi-Tasks Biometric System for Personal Identification. In Proceedings of the 22nd IEEE International Conference on Computational Science and Engineering and 17th IEEE International Conference on Embedded and Ubiquitous Computing, CSE/EUC 2019, New York, NY, USA, 1–3 August 2019. [Google Scholar]
  45. Hema, C.R.; Elakkiya, A.; Paulraj, M.P. Biometric Identification Using Electroencephalography. Int. J. Comput. Appl. 2014, 106, 17–22. [Google Scholar]
  46. Zeynali, M.; Seyedarabi, H. EEG-Based Single-Channel Authentication Systems with Optimum Electrode Placement for Different Mental Activities. Biomed. J. 2019, 42, 261–267. [Google Scholar] [CrossRef]
  47. Babusiak, B.; Hostovecky, M.; Smondrk, M.; Huraj, L. Spectral Analysis of Electroencephalographic Data in Serious Games. Appl. Sci. 2021, 11, 2480. [Google Scholar] [CrossRef]
  48. Webster, J.G. Medical Instrumentation: Application and Design, 4th ed.; John Wiley & Sons: New York, NY, USA, 2009. [Google Scholar]
  49. Fitzgibbon, S.P.; Pope, K.J.; MacKenzie, L.; Clark, C.R.; Willoughby, J.O. Cognitive Tasks Augment Gamma EEG Power. Clin. Neurophysiol. 2004, 115, 1802–1809. [Google Scholar] [CrossRef] [PubMed]
  50. Moctezuma, L.A.; Molinas, M. Towards a Minimal EEG Channel Array for a Biometric System Using Resting-State and a Genetic Algorithm for Channel Selection. Sci. Rep. 2020, 10, 14917. [Google Scholar] [CrossRef] [PubMed]
  51. Seha, S.N.A.; Hatzinakos, D. EEG-Based Human Recognition Using Steady-State AEPs and Subject-Unique Spatial Filters. IEEE Trans. Inf. Forensics Secur. 2020, 15, 3901–3910. [Google Scholar] [CrossRef]
  52. Chen, Y.; Atnafu, A.D.; Schlattner, I.; Weldtsadik, W.T.; Roh, M.C.; Kim, H.J.; Lee, S.W.; Blankertz, B.; Fazli, S. A High-Security EEG-Based Login System with RSVP Stimuli and Dry Electrodes. IEEE Trans. Inf. Forensics Secur. 2016, 11, 2635–2647. [Google Scholar] [CrossRef]
  53. Issac, C.M.; Grace Mary Kanaga, E. Probing on Classification Algorithms and Features of Brain Signals Suitable for Cancelable Biometric Authentication. In Proceedings of the 2017 IEEE International Conference on Computational Intelligence and Computing Research, ICCIC 2017, Coimbatore, India, 14–16 December 2017. [Google Scholar]
  54. Di, Y.; An, X.; Zhong, W.; Liu, S.; Ming, D. The Time-Robustness Analysis of Individual Identification Based on Resting-State EEG. Front. Hum. Neurosci. 2021, 15, 403. [Google Scholar] [CrossRef]
  55. Liu, S.; Bai, Y.; Liu, J.; Qi, H.; Li, P.; Zhao, X.; Zhou, P.; Zhang, L.; Wan, B.; Wang, C.; et al. Individual Feature Extraction and Identification on EEG Signals in Relax and Visual Evoked Tasks. In Proceedings of the Communications in Computer and Information Science, Aizu-Wakamatsu, Japan, 16–17 September 2013; Volume 404. [Google Scholar]
  56. Schons, T.; Moreira, G.J.P.; Silva, P.H.L.; Coelho, V.N.; Luz, E.J.S. Convolutional Network for EEG-Based Biometric. In Lecture Notes in Computer Science (including Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Valparaíso, Chile, 7–10 November 2017; Springer: Cham, Switzerland, 2018; Volume 10657 LNCS. [Google Scholar]
  57. Arnau-González, P.; Katsigiannis, S.; Ramzan, N.; Tolson, D.; Arevalillo-Herráez, M. ES1D: A Deep Network for EEG-Based Subject Identification. In Proceedings of the 2017 IEEE 17th International Conference on Bioinformatics and Bioengineering, BIBE 2017, Washington, DC, USA, 23–25 October 2017; Volume 2018-January. [Google Scholar]
  58. Kumar, P.; Saini, R.; Kaur, B.; Roy, P.P.; Scheme, E. Fusion of Neuro-Signals and Dynamic Signatures for Person Authentication. Sensors 2019, 19, 4641. [Google Scholar] [CrossRef]
  59. Wilaiprasitporn, T.; Ditthapron, A.; Matchaparn, K.; Tongbuasirilai, T.; Banluesombatkul, N.; Chuangsuwanich, E. Affective EEG-Based Person Identification Using the Deep Learning Approach. IEEE Trans. Cogn. Dev. Syst. 2020, 12, 486–496. [Google Scholar] [CrossRef]
  60. Maiorana, E. Learning Deep Features for Task-Independent EEG-Based Biometric Verification. Pattern Recognit. Lett. 2021, 143, 122–129. [Google Scholar] [CrossRef]
  61. Kasim, Ö.; Tosun, M. Biometric Authentication from Photic Stimulated EEG Records. Appl. Artif. Intell. 2021, 35, 1407–1419. [Google Scholar] [CrossRef]
  62. Yu, T.; Wei, C.S.; Chiang, K.J.; Nakanishi, M.; Jung, T.P. EEG-Based User Authentication Using a Convolutional Neural Network. In Proceedings of the International IEEE/EMBS Conference on Neural Engineering, NER, San Francisco, CA, USA, 20–23 March 2019; Volume 2019-March. [Google Scholar]
  63. Jijomon, C.M.; Vinod, A.P. Person-Identification Using Familiar-Name Auditory Evoked Potentials from Frontal EEG Electrodes. Biomed. Signal Process. Control 2021, 68, 102739. [Google Scholar] [CrossRef]
  64. Grandini, M.; Bagli, E.; Visani, G. Metrics for Multi-Class Classification: An Overview. arXiv 2020, arXiv:2008.05756. [Google Scholar]
Figure 1. Block diagram of the presented approach to determining person identity. The approach comprises the training and testing stages of the 1D-CNN model, including EEG data pre-processing. The green arrow depicts the transition of data handling from the training to the testing stage.
Figure 2. Unipolar electrode configuration for EEG signal measurement, modified according to [47].
Figure 3. Dataset division according to the 5-fold cross validation.
Figure 4. The proposed 1D-CNN model: (a) Variant A with three convolutional blocks, (b) Variant B with four convolutional blocks.
Figure 5. The average accuracy comparison across the model variants for tasks and task fusion corresponding to the eight considered channels.
Figure 6. The average accuracy comparison for variant A involving tasks and task fusion corresponding to the reduced channels.
Figure 7. The average accuracy comparison for variant B involving tasks and task fusion corresponding to the reduced channels.
Figure 8. The accuracy evaluation for the final model based on the test data for individual tasks and task fusions.
Figure 9. The confusion matrix of the final model’s classification accuracy for different cases considering all channels and task fusion.
Table 1. Classification metrics for the variants of the 1D-CNN model of person identification based on resting state using 5-fold CV method (result format: mean ± standard deviation).
Channels | Variant A: Avg. Accuracy / Macro Precision / Macro Recall / Macro F1 (%) | Variant B: Avg. Accuracy / Macro Precision / Macro Recall / Macro F1 (%)
all | 99.17 ± 0.41 / 99.13 ± 0.37 / 99.32 ± 0.31 / 99.22 ± 0.34 | 99.46 ± 0.44 / 99.37 ± 0.61 / 99.53 ± 0.35 / 99.45 ± 0.44
F, C | 91.19 ± 1.29 / 91.43 ± 1.27 / 91.40 ± 1.63 / 91.41 ± 1.43 | 96.40 ± 0.85 / 96.47 ± 0.76 / 96.46 ± 0.62 / 96.46 ± 0.68
F, O | 93.53 ± 1.52 / 93.68 ± 1.46 / 93.67 ± 1.27 / 93.67 ± 1.36 | 97.32 ± 0.67 / 97.41 ± 0.72 / 97.39 ± 0.59 / 97.40 ± 0.65
F, P | 91.82 ± 1.25 / 92.10 ± 1.67 / 91.84 ± 1.65 / 91.97 ± 1.66 | 96.01 ± 1.94 / 95.93 ± 1.97 / 96.20 ± 1.76 / 96.06 ± 1.86
C, O | 95.18 ± 1.25 / 95.17 ± 1.22 / 95.23 ± 1.24 / 95.20 ± 1.23 | 97.52 ± 1.01 / 97.45 ± 1.02 / 97.58 ± 1.00 / 97.51 ± 1.01
C, P | 93.63 ± 1.02 / 94.04 ± 0.92 / 93.79 ± 1.35 / 93.91 ± 1.09 | 97.37 ± 0.92 / 97.31 ± 0.88 / 97.54 ± 0.84 / 97.42 ± 0.86
P, O | 89.15 ± 2.27 / 89.93 ± 2.06 / 89.15 ± 2.13 / 89.54 ± 2.09 | 95.38 ± 1.39 / 95.47 ± 1.21 / 95.69 ± 1.37 / 95.58 ± 1.29
F | 57.86 ± 2.77 / 56.81 ± 2.93 / 57.29 ± 2.51 / 57.05 ± 2.70 | 64.28 ± 3.45 / 63.50 ± 3.66 / 64.02 ± 3.48 / 63.76 ± 3.57
C | 60.19 ± 2.89 / 61.14 ± 3.88 / 59.69 ± 2.81 / 60.41 ± 3.26 | 69.83 ± 1.05 / 70.40 ± 1.20 / 69.95 ± 0.50 / 70.17 ± 0.71
P | 58.83 ± 4.34 / 61.43 ± 4.01 / 59.05 ± 4.35 / 60.22 ± 4.17 | 67.25 ± 1.11 / 68.64 ± 2.65 / 67.47 ± 1.41 / 68.05 ± 1.84
O | 59.17 ± 2.84 / 58.51 ± 2.95 / 58.96 ± 2.48 / 58.73 ± 2.69 | 65.45 ± 2.06 / 65.94 ± 1.60 / 65.03 ± 1.26 / 65.48 ± 1.41
Table 2. Classification metrics for the variants of the 1D-CNN model of person identification based on level 1 using 5-fold CV method validation data (result format: mean ± standard deviation).
Channels | Variant A: Avg. Accuracy / Macro Precision / Macro Recall / Macro F1 (%) | Variant B: Avg. Accuracy / Macro Precision / Macro Recall / Macro F1 (%)
all | 98.99 ± 0.45 / 99.02 ± 0.45 / 98.92 ± 0.33 / 98.97 ± 0.38 | 99.61 ± 0.24 / 99.53 ± 0.38 / 99.57 ± 0.21 / 99.55 ± 0.27
F, C | 90.55 ± 1.56 / 90.69 ± 1.34 / 89.24 ± 1.79 / 89.96 ± 1.53 | 94.40 ± 1.29 / 94.23 ± 1.73 / 93.58 ± 1.39 / 93.90 ± 1.54
F, O | 92.76 ± 0.90 / 92.36 ± 0.89 / 91.72 ± 0.92 / 92.04 ± 0.90 | 95.99 ± 0.54 / 95.49 ± 0.81 / 95.18 ± 0.56 / 95.33 ± 0.66
F, P | 92.64 ± 0.79 / 92.61 ± 1.03 / 92.73 ± 0.86 / 92.67 ± 0.94 | 96.11 ± 0.68 / 95.88 ± 0.91 / 95.98 ± 0.86 / 95.93 ± 0.88
C, O | 94.21 ± 1.32 / 93.87 ± 1.50 / 93.62 ± 1.66 / 93.74 ± 1.58 | 96.50 ± 0.86 / 96.49 ± 0.94 / 96.07 ± 1.12 / 96.28 ± 1.02
C, P | 92.93 ± 0.92 / 93.53 ± 0.85 / 93.14 ± 0.91 / 93.33 ± 0.88 | 96.71 ± 0.67 / 96.81 ± 0.95 / 96.75 ± 0.40 / 96.78 ± 0.56
P, O | 91.60 ± 0.97 / 91.84 ± 1.09 / 90.19 ± 1.05 / 91.01 ± 1.07 | 95.45 ± 0.27 / 95.47 ± 0.50 / 94.62 ± 0.49 / 95.04 ± 0.49
F | 56.05 ± 1.74 / 51.55 ± 2.15 / 49.87 ± 1.65 / 50.70 ± 1.87 | 61.22 ± 1.80 / 59.36 ± 1.53 / 56.17 ± 2.05 / 57.72 ± 1.75
C | 57.56 ± 1.94 / 56.52 ± 1.93 / 53.62 ± 2.13 / 55.03 ± 2.03 | 64.10 ± 1.56 / 62.56 ± 0.98 / 59.80 ± 0.92 / 61.15 ± 0.95
P | 46.82 ± 3.35 / 45.46 ± 3.64 / 40.92 ± 3.19 / 43.07 ± 3.40 | 48.05 ± 3.86 / 48.76 ± 2.62 / 42.24 ± 4.39 / 45.27 ± 3.28
O | 58.61 ± 2.83 / 57.71 ± 1.92 / 53.25 ± 2.46 / 55.39 ± 2.16 | 64.88 ± 1.50 / 64.93 ± 0.89 / 60.47 ± 1.07 / 62.62 ± 0.97
Table 3. Classification metrics for the variants of the 1D-CNN model of person identification based on level 2 using 5-fold CV method validation data (result format: mean ± standard deviation).
Channels | Variant A: Avg. Accuracy / Macro Precision / Macro Recall / Macro F1 (%) | Variant B: Avg. Accuracy / Macro Precision / Macro Recall / Macro F1 (%)
all | 99.41 ± 0.41 / 99.42 ± 0.49 / 99.24 ± 0.57 / 99.33 ± 0.53 | 99.72 ± 0.13 / 99.67 ± 0.17 / 99.72 ± 0.18 / 99.69 ± 0.17
F, C | 87.73 ± 1.49 / 89.05 ± 0.98 / 85.78 ± 2.18 / 87.38 ± 1.35 | 93.53 ± 0.72 / 93.69 ± 0.59 / 92.27 ± 1.34 / 92.97 ± 0.82
F, O | 93.31 ± 1.01 / 92.63 ± 1.24 / 92.50 ± 1.76 / 92.56 ± 1.45 | 97.32 ± 0.68 / 97.05 ± 0.81 / 97.07 ± 0.94 / 97.06 ± 0.87
F, P | 90.39 ± 1.88 / 90.52 ± 1.51 / 90.17 ± 2.18 / 90.34 ± 1.78 | 96.21 ± 1.22 / 95.68 ± 1.17 / 96.07 ± 1.38 / 95.87 ± 1.27
C, O | 95.62 ± 0.78 / 94.98 ± 0.86 / 94.78 ± 1.05 / 94.88 ± 0.95 | 97.50 ± 0.73 / 97.42 ± 0.75 / 96.90 ± 1.22 / 97.16 ± 0.93
C, P | 94.02 ± 1.44 / 94.17 ± 1.61 / 93.13 ± 1.17 / 93.65 ± 1.36 | 97.44 ± 0.77 / 97.45 ± 0.71 / 97.12 ± 1.34 / 97.28 ± 0.93
P, O | 88.69 ± 1.64 / 88.49 ± 1.34 / 87.45 ± 1.23 / 87.97 ± 1.28 | 94.98 ± 1.80 / 94.41 ± 1.85 / 94.35 ± 2.45 / 94.38 ± 2.11
F | 39.69 ± 4.03 / 39.58 ± 2.05 / 37.39 ± 3.56 / 38.45 ± 2.60 | 45.61 ± 6.16 / 49.87 ± 6.48 / 44.87 ± 6.99 / 47.24 ± 6.73
C | 57.53 ± 1.28 / 57.53 ± 5.04 / 50.15 ± 0.86 / 53.59 ± 1.47 | 65.27 ± 2.47 / 63.87 ± 4.07 / 60.81 ± 2.79 / 62.30 ± 3.31
P | 25.92 ± 2.32 / 27.33 ± 4.52 / 22.51 ± 2.65 / 24.69 ± 3.34 | 26.41 ± 4.15 / 29.97 ± 3.44 / 24.23 ± 3.77 / 26.80 ± 3.60
O | 61.42 ± 2.05 / 60.23 ± 3.01 / 55.27 ± 2.02 / 57.64 ± 2.42 | 67.80 ± 1.71 / 65.04 ± 1.61 / 62.97 ± 2.51 / 63.99 ± 1.96
Table 4. Classification metrics for the variants of the 1D-CNN model of person identification based on level 3 using 5-fold CV method validation data (result format: mean ± standard deviation).
Channels | Variant A: Avg. Accuracy / Macro Precision / Macro Recall / Macro F1 (%) | Variant B: Avg. Accuracy / Macro Precision / Macro Recall / Macro F1 (%)
all | 98.33 ± 0.18 / 98.29 ± 0.19 / 98.15 ± 0.25 / 98.22 ± 0.22 | 99.20 ± 0.16 / 99.07 ± 0.24 / 99.11 ± 0.19 / 99.09 ± 0.21
F, C | 89.40 ± 0.10 / 89.14 ± 0.60 / 86.73 ± 0.98 / 87.92 ± 0.74 | 93.71 ± 0.57 / 93.19 ± 0.78 / 92.53 ± 0.70 / 92.86 ± 0.74
F, O | 90.57 ± 1.04 / 90.74 ± 0.79 / 89.62 ± 1.05 / 90.18 ± 0.90 | 93.80 ± 0.96 / 93.83 ± 0.77 / 93.33 ± 1.16 / 93.58 ± 0.93
F, P | 92.58 ± 0.88 / 92.27 ± 0.49 / 92.05 ± 0.96 / 92.16 ± 0.65 | 95.84 ± 0.45 / 95.81 ± 0.80 / 95.31 ± 0.72 / 95.56 ± 0.76
C, O | 90.52 ± 0.64 / 91.18 ± 0.73 / 89.81 ± 1.10 / 90.49 ± 0.88 | 93.83 ± 0.79 / 94.17 ± 0.71 / 93.36 ± 0.81 / 93.76 ± 0.76
C, P | 93.14 ± 0.83 / 92.73 ± 1.18 / 92.87 ± 1.32 / 92.80 ± 1.25 | 95.68 ± 0.52 / 95.65 ± 0.59 / 95.57 ± 0.68 / 95.61 ± 0.63
P, O | 90.31 ± 0.71 / 91.10 ± 0.62 / 88.95 ± 0.70 / 90.01 ± 0.66 | 93.98 ± 1.01 / 94.24 ± 1.10 / 93.06 ± 1.33 / 93.65 ± 1.20
F | 53.41 ± 1.82 / 49.70 ± 1.76 / 47.92 ± 1.93 / 48.79 ± 1.84 | 59.86 ± 0.91 / 57.52 ± 0.57 / 55.21 ± 0.77 / 56.34 ± 0.66
C | 57.38 ± 1.78 / 57.60 ± 2.34 / 52.59 ± 1.19 / 54.98 ± 1.58 | 64.80 ± 0.92 / 64.23 ± 1.47 / 60.10 ± 1.02 / 62.10 ± 1.20
P | 48.02 ± 2.84 / 51.53 ± 2.69 / 44.25 ± 2.70 / 47.61 ± 2.69 | 54.13 ± 3.36 / 55.60 ± 3.36 / 49.83 ± 3.17 / 52.56 ± 3.26
O | 53.53 ± 1.34 / 56.26 ± 0.94 / 50.53 ± 2.29 / 53.24 ± 1.33 | 57.61 ± 0.81 / 59.87 ± 0.77 / 55.67 ± 0.74 / 57.69 ± 0.75
Table 5. Classification metrics for the variants of the 1D-CNN model of person identification based on level fusion (playing game sequence) using 5-fold CV method validation data (result format: mean ± standard deviation).
(A = Variant A, B = Variant B. Acc. = average accuracy; Prec./Rec./F1 = macro-averaged precision/recall/F1 score; all values in %.)

| Channels | Acc. (A) | Prec. (A) | Rec. (A) | F1 (A) | Acc. (B) | Prec. (B) | Rec. (B) | F1 (B) |
|---|---|---|---|---|---|---|---|---|
| all | 97.84 ± 0.18 | 97.63 ± 0.13 | 97.66 ± 0.31 | 97.64 ± 0.18 | 98.82 ± 0.29 | 98.65 ± 0.26 | 98.77 ± 0.30 | 98.71 ± 0.28 |
| F, P | 88.74 ± 0.71 | 88.70 ± 0.63 | 88.12 ± 0.64 | 88.41 ± 0.63 | 92.58 ± 0.49 | 92.27 ± 0.57 | 92.09 ± 0.46 | 92.18 ± 0.51 |
| C, O | 89.76 ± 0.87 | 89.70 ± 0.86 | 89.18 ± 1.19 | 89.44 ± 1.00 | 91.99 ± 0.67 | 91.81 ± 0.40 | 91.50 ± 0.70 | 91.65 ± 0.51 |
| C, P | 89.88 ± 1.06 | 89.45 ± 0.91 | 89.46 ± 1.09 | 89.45 ± 0.99 | 92.54 ± 0.59 | 92.11 ± 0.83 | 92.12 ± 0.86 | 92.11 ± 0.84 |
Table 6. Classification metrics for the variants of the 1D-CNN person-identification model based on all-task fusion (resting state + playing game sequence), computed on validation data from 5-fold cross-validation (format: mean ± standard deviation).
(A = Variant A, B = Variant B. Acc. = average accuracy; Prec./Rec./F1 = macro-averaged precision/recall/F1 score; all values in %.)

| Channels | Acc. (A) | Prec. (A) | Rec. (A) | F1 (A) | Acc. (B) | Prec. (B) | Rec. (B) | F1 (B) |
|---|---|---|---|---|---|---|---|---|
| all | 98.08 ± 0.30 | 98.06 ± 0.29 | 98.04 ± 0.32 | 98.05 ± 0.30 | 98.97 ± 0.12 | 98.91 ± 0.18 | 98.93 ± 0.10 | 98.92 ± 0.13 |
| F, P | 89.11 ± 0.58 | 89.30 ± 0.76 | 88.64 ± 0.70 | 88.97 ± 0.73 | 92.40 ± 0.49 | 92.39 ± 0.44 | 91.99 ± 0.60 | 92.19 ± 0.51 |
| C, O | 89.54 ± 0.59 | 89.54 ± 0.62 | 88.88 ± 0.58 | 89.21 ± 0.60 | 92.22 ± 0.86 | 92.15 ± 0.94 | 91.94 ± 0.80 | 92.04 ± 0.86 |
| C, P | 91.01 ± 0.77 | 90.75 ± 0.62 | 90.68 ± 0.73 | 90.71 ± 0.67 | 93.39 ± 0.42 | 93.17 ± 0.45 | 93.09 ± 0.40 | 93.13 ± 0.42 |
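The mean ± standard deviation entries in Tables 4–6 summarize the five per-fold validation scores of the 5-fold cross-validation. A minimal Python sketch of this aggregation follows; using the sample standard deviation (`ddof=1`) is an assumption, as the paper does not state which estimator was used.

```python
import numpy as np

def cv_summary(fold_scores):
    """Format a list of per-fold scores (in %) as 'mean ± std'."""
    scores = np.asarray(fold_scores, dtype=float)
    return f"{scores.mean():.2f} ± {scores.std(ddof=1):.2f}"

# e.g. five hypothetical validation accuracies from 5-fold CV
print(cv_summary([98.1, 98.3, 98.5, 98.2, 98.4]))  # → 98.30 ± 0.16
```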
Table 7. Classification metrics for the evaluation of the final person-identification model on test data, for individual tasks and task fusion.
| Channels | Task | Average Accuracy (%) | Macro Average Precision (%) | Macro Average Recall (%) | Macro Average F1 Score (%) |
|---|---|---|---|---|---|
| All | Resting state | 99.80 | 99.78 | 99.82 | 99.80 |
| All | Level 1 | 99.77 | 99.67 | 99.70 | 99.68 |
| All | Level 2 | 99.88 | 99.83 | 99.74 | 99.78 |
| All | Level 3 | 99.04 | 99.00 | 99.09 | 99.04 |
| All | Game | 98.79 | 98.79 | 98.72 | 98.75 |
| All | Task fusion | 98.75 | 98.77 | 98.68 | 98.72 |
| Reduced | Resting state (C, O) | 97.45 | 97.68 | 97.35 | 97.51 |
| Reduced | Level 1 (C, P) | 97.29 | 97.65 | 97.64 | 97.64 |
| Reduced | Level 2 (C, O) | 97.78 | 97.55 | 97.24 | 97.39 |
| Reduced | Level 3 (F, P) | 95.74 | 95.97 | 95.28 | 95.62 |
| Reduced | Game (F, P) | 93.33 | 93.17 | 93.17 | 93.17 |
| Reduced | Task fusion (C, P) | 93.84 | 93.68 | 93.39 | 93.53 |
Table 8. Comparison of state-of-the-art deep learning algorithms for EEG-based biometrics.
| Ref. | Paradigm | Database | No. of Subjects | No. of Channels | Segment Length | Classifier, Result |
|---|---|---|---|---|---|---|
| [37] | Resting state | Physionet | 109 | 14 (reduced) | 0.5 s | 2D-CNN, 99.32% |
| [38] | Resting state; opening and closing fists and feet, both physically and imaginarily | Physionet | 109 | 16 (reduced) | 1 s | 1D-CNN LSTM, 99.58% |
| [56] | Resting state | Physionet | 109 | 64 | 12 s | 1D-CNN, 99.81% |
| [57] | Watching film clips | DREAMER | 23 | 14 | 1 s | CNN, 94.01% |
| [58] | Signing subject signatures on a mobile phone screen | Own | 33 genuine and 25 forged users | 14 | - | BLSTM-NN, 98.78% |
| [59] | Watching affective elicited music videos | DEAP | 32 | 5 (reduced) | 1 s | CNN-GRU, 99.17% (CRR) |
| [60] | Eyes closed, eyes open, imagined motor speech, visual stimulation, mathematical calculation | Own | 45 | 19 | 5 s | 1D-CNN, 95.2% (eyes open) |
| [61] | Photic stimulation | Own | 16 | 16 | 3 s | 1D-CNN, 97.17% |
| [62] | Steady-state visual evoked potentials | Own | 8 | 9 | - | CNN, 96.78% |
| [63] | Auditory evoked potentials | Own | 20 | 2/1 (reduced) | 2 s | 1D-CNN LSTM, 99.53% (2 channels), 96.93% (1 channel) |
| Prop. | Resting state and serious game with escalating difficulty | Own | 21 | 8 | 1 s | 1D-CNN: Rest 99.80%, L1 99.77%, L2 99.88%, L3 99.04%, Game 98.79%, All 98.75% |
| Prop. | Resting state and serious game with escalating difficulty | Own | 21 | 4 (reduced) | 1 s | 1D-CNN: Rest 97.45%, L1 97.29%, L2 97.78%, L3 95.74%, Game 93.33%, All 93.84% |
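Most of the classifiers compared in Table 8, including the proposed one, are 1D-CNNs operating on raw multi-channel EEG segments (here 8 channels × 1 s). The core operation can be sketched as a single valid-padding 1D convolution with ReLU. This pure-NumPy fragment is illustrative only; `conv1d_relu`, the filter counts, and the 250 Hz sampling rate are assumptions, not the authors' architecture.

```python
import numpy as np

def conv1d_relu(x, w, b):
    """One 1D convolution layer with ReLU activation, valid padding.
    x: (channels, samples)          -- one EEG segment
    w: (filters, channels, kernel)  -- convolution weights
    b: (filters,)                   -- biases
    returns: (filters, samples - kernel + 1) feature maps"""
    n_filters, n_channels, kernel = w.shape
    out_len = x.shape[1] - kernel + 1
    out = np.empty((n_filters, out_len))
    for f in range(n_filters):
        for t in range(out_len):
            # each filter spans all channels and a short time window
            out[f, t] = np.sum(w[f] * x[:, t:t + kernel]) + b[f]
    return np.maximum(out, 0.0)  # ReLU

# toy example: 8-channel, 1 s segment at an assumed 250 Hz
x = np.random.randn(8, 250)
feat = conv1d_relu(x, np.random.randn(16, 8, 5), np.zeros(16))
print(feat.shape)  # (16, 246)
```

Stacking such layers with pooling and a final softmax over the 21 subjects yields the kind of end-to-end identification network benchmarked above.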
Kralikova, I.; Babusiak, B.; Smondrk, M. EEG-Based Person Identification during Escalating Cognitive Load. Sensors 2022, 22, 7154. https://doi.org/10.3390/s22197154
