Article

Human–Computer Interaction Multi-Task Modeling Based on Implicit Intent EEG Decoding

Xiu Miao and Wenjun Hou
1 School of Modern Post (School of Automation), Beijing University of Posts and Telecommunications, Beijing 100876, China
2 School of Architecture and Artistic Design, Inner Mongolia University of Science and Technology, Baotou 014010, China
3 School of Digital Media & Design Arts, Beijing University of Posts and Telecommunications, Beijing 100876, China
4 Beijing Key Laboratory of Network System and Network Culture, Beijing University of Posts and Telecommunications, Beijing 100876, China
5 Key Laboratory of Interactive Technology and Experience System of the Ministry of Culture and Tourism, Beijing University of Posts and Telecommunications, Beijing 100876, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(1), 368; https://doi.org/10.3390/app14010368
Submission received: 25 September 2023 / Revised: 26 October 2023 / Accepted: 28 December 2023 / Published: 30 December 2023

Abstract

In the short term, a fully autonomous level of machine intelligence cannot be achieved. Humans are still an important part of HCI systems, and intelligent systems should be able to “feel” and “predict” human intentions in order to achieve dynamic coordination between humans and machines. Intent recognition is therefore essential for improving the accuracy and efficiency of HCI systems. However, focusing only on explicit intent is far from enough: the process of human–computer interaction is full of vague and hidden implicit intent. Based on passive brain–computer interface (pBCI) technology, this paper proposes a method to integrate humans into HCI systems naturally, namely to establish an intent-based HCI model and automatically recognize implicit intent from human EEG signals. In view of the existing problems of few separable patterns and low efficiency in implicit intent recognition, this paper extracts multi-task intentions, conducts EEG experiments, and constructs algorithmic models to show that EEG can serve as the basis for judging human implicit intent. The CSP + SVM algorithm model can effectively improve the EEG decoding performance of implicit intent in HCI, and the effectiveness of the CSP algorithm for intention feature extraction is further verified through 3D space visualization. Decoding implicit intent information is significant for the study of intent-based HCI models, the development of HCI systems, and the improvement of human–machine collaboration efficiency.

1. Introduction

With the continuous maturation of 6G, cloud computing, big data, and artificial intelligence technologies, human–machine systems have evolved from mechanization and informatization to intelligence. Since full machine autonomy cannot be achieved in the short term [1,2,3], humans are still an important part of the system [4], and machines are required to report their progress to humans and allow them to make decisions in the event of unexpected situations. Intelligent human–machine systems emphasize real-time monitoring and automatically adapt to new environmental conditions and changing user needs in and around themselves in order to identify events that require action, decide how to react, design plans, and execute them [5,6]. Compared with the mechanization and informatization eras, the distinguishing feature of the intelligent human–machine system is the emphasis that the human state—whether physiological or mental—should be included in the system, and that the system should be able to “feel” and “predict” the user’s intentions so as to achieve natural, dynamic, and efficient collaboration between humans and machines.
The rapid development of brain–computer interfaces (BCIs) has expanded the communication channel between humans and machines and greatly promoted the development of natural human–computer interaction (HCI) and human–machine collaboration. On the one hand, active BCI technology realizes direct control of machines using human brain signals, which has always been the ultimate goal pursued by HCI researchers—to create a system that interacts through user intuition [7]. For example, Deng et al. (2022) developed a system that uses EEG signals to directly control a mechanical arm [8]. On the other hand, passive BCI (pBCI) technology makes full use of spontaneous human EEG to measure the user’s mental state, providing additional input for the system [9] and further ensuring the safety, efficiency, and naturalness of the interaction. At present, pBCI technology has been successfully applied to detecting human attention [10], emotion [11,12], fatigue [13], cognitive workload [14,15], and intention [16].
Intention recognition is one of the key technologies for realizing natural HCI, and it is very important for improving the accuracy and efficiency of HCI systems. In daily life, humans express explicit intent through facial expressions, voice, and gestures, and in HCI and HRI, intents such as “click” and “copy/paste” can also be easily understood by systems through the keyboard, mouse, touch screen, voice commands, etc. However, recognizing explicit intent alone is far from enough to understand humans’ real intentions. Especially in the intelligent era, intelligent systems emphasize dynamic collaboration between humans and machines: the machine needs to predict the user’s intention and take action in advance to ensure smooth interaction. In addition to explicit intent, HCI is filled with a large amount of implicit intent [17]. These intentions are more hidden and more difficult to identify, and they are hard to infer from facial expressions and body movements. Some studies have shown that intent recognition combined with user physiological signals such as eye movement [7,18,19] and EEG is more accurate [20]. However, most current studies focus on explicit intention and examine only limited types of interaction intent, such as motor intent (motor imagery, MI) [21], cognitive task (mental task) intent [22,23], visual browsing (visual search) intention [24], and click decision intent [25].
To address these problems, this paper uses pBCI technology to explore an algorithm model that can improve the performance of implicit intent EEG decoding in HCI. First, five typical interactive intention tasks of HCI are extracted to reflect the implicit intent modes. The multi-task experiment paradigm is used to obtain intent EEG data, and the common spatial pattern (CSP) algorithm is used to extract the mixed spatial filters between different intentions. Then, the intent features are input into machine learning classifiers to train a multi-classification intent recognition model, and the performance of the intent models and the effectiveness of the feature extraction algorithm are verified. This study provides an attempt to study the intent-based HCI model [24] from the perspective of neural data and provides theoretical models for practical application scenarios such as the interaction of disabled people with computers, the adaptive interface design of complex systems, and human–machine dynamic collaboration in intelligent systems.

2. Related Research

Intention refers to a person’s thoughts directed at achieving a certain goal, or the related thoughts a person has before beginning to produce a certain behavior [26]. In HCI, intention refers to the goals and expectations of the user when operating a computer system. In psychology, human intention is divided into explicit intent and implicit intent [27]. Existing research on implicit intent in the HCI field focuses on intentions during visual viewing or when making interactive behavioral decisions.

2.1. Research on Implicit Intent in HCI

Nearly 80% of the information humans obtain in daily life comes through the visual channel [28], and a large amount of information in HCI is also received visually. Therefore, most implicit intent research is based on analyzing the physiological signals generated during visual browsing—for example, using physiological signals to judge whether the user is browsing purposefully, whether the browsing involves transactional tasks [27], and which target the user prefers. Kang divided visual browsing intention into navigational and informational intention according to whether the user has a purpose when browsing: the former refers to browsing without a specific target, while the latter refers to browsing behavior that looks for a specific target with clear motivation [24].
Kang designed experiments for visual browsing behavior and explored users’ EEG feedback under navigational and informational intention. They proposed that the phase synchronization method can better reflect internal intent and intention conversion than the wave peak measurement method and can be used for task classification and cognitive detection in BCI [24]. Park et al. fused EEG and eye movement signals to model implicit intent during visual search and judge the user’s current intention state [29]. In addition, visual preference is one of the classical implicit intents widely studied in HCI. Khushaba studied users’ visual preferences for different shapes, colors, and materials from a neuroscience perspective and modeled users’ target preference decisions, which is beneficial to the development of both adaptive systems and BCI systems [30,31]. Understanding users’ preferences for targets in advance helps improve the efficiency of HCI.
At present, research on the decision intention of interactive behavior is widely applied in the field of complex systems, mainly to allow the intelligent system to take over in time when human error is likely, such as under fatigue or high load, so as to ensure the safe operation of the system. Taking the intelligent assisted driving system ADAS as an example, Wang et al. studied the neural characteristics under different driving intentions and provided a braking intention detection method for driving assistance systems from a neuroscience perspective [32]. In HCI, behavioral decision intention is reflected in the user’s click decisions. Slanzi et al. studied users’ willingness to click when using real websites, combined eye movement with EEG data to build a model, and inferred users’ willingness to click in the areas where fixations occurred [25]. Wang et al. (2021) studied EEG signals under aircraft left/right-turn and shoot tasks, extracted EEG features with the common spatial pattern (CSP) algorithm, extracted gaze position, gaze duration, saccades, and pupil diameter as intention features, used D-S theory to fuse the features at the decision level to predict pilot intention, and provided a theoretical basis for aircraft adaptive system design [33]. Liang et al. (2018), targeting the human–machine interface (HMI) of complex systems, used the multi-task interface paradigm to extract four typical interaction intentions of the operator, including generalized browsing and target search, and modeled the operator’s behavioral intention when interacting with the HMI based on eye movement data [34].

2.2. Research on Algorithm Model Construction

In terms of implicit intent feature extraction, the existing research mainly uses EEG phase synchronization features, frequency domain features, and EEG spatial characteristics:
The first type of feature extraction method is based on the phase synchronization of multi-channel EEG signals and extracts the phase locking value (PLV) of several synchronously responding electrode pairs for intention modeling. Kang et al. (2015) used the multi-channel EEG phase synchronization method and input the maximum PLV values in the theta, alpha, beta-1, and beta-2 bands under informational/navigational intention into three classifiers: SVM, GMM, and NB. SVM had the highest classification accuracy, with an average of 63.6%; the frequency bands with the largest PLV differences were theta and alpha, and the PLV of visual search intention with a target was significantly larger than without a target [24]. Park et al. (2014) also studied implicit intent from the perspective of multi-channel phase synchronization. They compared the magnitude and changes of EEG PLV values under two visual browsing intentions and found that the classification accuracy of integrating EEG and eye movement signals was about 5% higher than that of modeling on EEG data alone [29].
The second feature extraction method uses traditional EEG frequency-domain analysis to extract energy-related features in different frequency bands. Khushaba et al. used the fast Fourier transform to extract intention-related spectra, divided the data windows according to the shortest decision time, and used Shannon mutual information (SMI) as the modeling feature [30]. This study found that the theta bands in the frontal, parietal, and occipital regions were most associated with preference decisions, along with alpha activity in the frontal and temporal regions, and that clear and significant changes in brain waves appeared in the frontal, temporal, and occipital regions when consumers made preference decisions. Wang et al. (2019) also used EEG frequency-domain analysis: combining time-domain features with frequency-domain PSD features, their team explored drivers’ emergency braking, soft braking, and normal driving intentions and the differences among the three. They found that braking intention caused a significant 40–60 Hz PSD enhancement, with a larger increase and shorter duration for emergency braking; using an SVM classifier for braking intention recognition, the identification accuracy exceeded 74% [32]. Slanzi (2017) studied the physiological changes when users make click decisions from the perspective of multimodal physiological signal fusion and modeled click decisions based on physiological signals. They selected pupil-diameter-related data as eye movement features and 789 EEG feature values covering frequency-domain, time-domain, and complexity features, and finally selected 15 feature values with the randomized lasso method as classifier inputs. This study found that neural network and logit models performed well; the classification accuracy of the logit model reached 71.09% [25].
The third type of method uses the spatial pattern characteristics of EEG, finding binary-classification hybrid intention spatial filters through the CSP algorithm and projecting the original intention EEG signals through these spatial filters to obtain spatially filtered features. The CSP algorithm was originally a two-class feature extraction algorithm, but it can be extended to multi-class feature extraction. Based on D-S evidence theory, Wang et al. (2021) fused EEG CSP features and eye movement data for multi-task HCI intention identification; experiments showed that among four EEG feature extraction methods, CSP + SVM had the highest accuracy in identifying HCI intention, with an average accuracy of 76.84% [33]. In recent years, the CSP algorithm has been widely used in the BCI field with clear effect, especially in motor imagery (MI) BCI. Hu et al. (2022) addressed multi-task motor imagery EEG classification: they extracted subjects’ personalized rhythm characteristics through adaptive spectrum perception, used a one-versus-rest multi-spatial pattern approach to compute spatial-domain features, constructed spatial–temporal–frequency multi-level fusion features, and combined them with a convolutional neural network to achieve high-precision multi-task EEG classification [35]. Lu et al. (2023) improved the traditional CSP algorithm based on Tikhonov regularization and the L2 norm, and the identification accuracy was higher than that of other existing methods [36]. These studies show that the CSP algorithm is better than the two methods described above.
In terms of intention classifier selection, traditional machine learning classifiers can achieve good recognition performance. The support vector machine (SVM) has proven effective for small-sample data modeling and is widely used in the BCI field because of its good generalization performance, which makes it especially suitable for EEG data modeling. In addition, GMM, NB, neural networks, logit, and FLDA have also been applied to intention modeling.
The above intention models provide a basis for adaptive interface, BCI, and accessible HCI studies. From the current research situation, it can be seen that most studies perform binary classification modeling for single intention types, with few separable patterns, low recognition accuracy, and a lack of multi-classification recognition models for the implicit intentions of HCI. From the perspective of feature extraction, the classification performance of intention features extracted with traditional frequency-domain and phase-synchronization methods is limited, and the phase synchronization method is better suited to observing transitions between different intentions in the time domain. The CSP algorithm has better feature extraction ability than traditional methods, but it is mainly used in motion intention recognition. Therefore, this paper explores introducing the CSP feature extraction algorithm into implicit intention feature extraction and uses machine learning algorithms to automatically identify multi-class intentions, so as to finally obtain an implicit intention classification model with high discrimination and classification accuracy.

3. Methods

This paper used the multi-attribute task battery (MATB), a multi-task experimental method proposed by NASA [37], as the experimental paradigm. MATB provides a robust method to study the effects of various parameters on a participant’s multitasking behavior, such as decision making, performance level, visual behavior, and mental load. Because this paradigm was designed for military aviation, this paper adapts it to the typical cognitive tasks humans perform when interacting with general complex HCI interfaces and then redesigns the EEG trigger materials under the multi-task paradigm. After the intent EEG signals are induced, the original EEG signals are preprocessed and artifact components are removed to obtain clean EEG data. The CSP algorithm is then adopted to generate intent features, which are fed to machine learning classifiers for multi-type intention discrimination. For the specific algorithm model, see Figure 1.

3.1. Extract Typical HCI Implicit Intent

In order to obtain real operator interaction intent, this paper uses the HMI software (BAOTOU STEEL RARE EARTH STEEL PLATE COMPANY 2030 PL-TCM) of an industrial control system as the research subject from which to extract HCI tasks. An industrial control system is an automatic control system with computer technology at its core that integrates automation, digitalization, and information network technology [38]. Within it, the HMI software carries out real-time visualization of equipment and product data, emergency alarms, control of machine operation modes, and changes to production parameter models. The research team investigated the HMI software’s development and operation, software architecture, basic content and functions, and interaction processes. On the basis of these field investigations and post-interviews about operator cognitive abilities [39], and according to the functional division of the HMI main interface shown in Figure 2 and the operators’ routine interaction behaviors, we finally extracted five types of typical HCI tasks: ① Generalized browse task: daily production monitoring, performing aimless visual browsing behavior. ② Visual search task: visual positioning of specific targets such as equipment operation states, production data, abnormal targets, and early-warning information, performing purposeful visual browsing behavior. ③ Click control task: clicking buttons to change the running state of the equipment, performing click operation behavior. ④ Table inquiry task: tables are a common element in complex-system interfaces; this task requires the operator to find specific targets in a table, visually locate them, and report the data, performing visual target location and data-reporting behavior. ⑤ Complex dispose task: handling complex situations, which requires a higher level of cognitive participation than the above tasks. The experimental procedures were also designed according to these tasks. Figure 2 shows the intention task extraction procedure.

3.2. Feature Extraction

The basic principle of the CSP algorithm is to fuse the multi-channel signals to obtain optimal spatial filters that maximize the variance difference between the projected signals of different categories, finally obtaining the features with the maximum discrimination in the spatial domain. The traditional CSP algorithm finds a mixed spatial filter for two classes of signals and is therefore not directly suitable for multi-class feature extraction, so some improvements are needed. The improved algorithm is as follows (a code sketch is given after this list):
1. Calculate the mixed-space mean covariance matrix

First, calculate the covariance matrix of the multi-channel EEG signal of each intention type. Let $X_i$ ($N \times T$) be the multi-channel spatio-temporal signal matrix evoked under intention task $i$, where $N$ is the number of channels and $T$ is the number of time points collected per channel. After normalizing the data, the covariance matrix $R_i$ is calculated for each segment:

$$R_i = \frac{X_i X_i^{T}}{\operatorname{trace}(X_i X_i^{T})}, \quad i = 1, 2, 3, 4, 5$$

where $X^{T}$ denotes the transpose of the matrix $X$ and $\operatorname{trace}(X X^{T})$ is the sum of the elements on the diagonal of the matrix.

Then, by averaging over segments, we obtain the average covariance matrix $\bar{R}_i$ of each intention. The sum of the average covariance matrices of every two of the five intentions gives the covariance matrix $R$ of the corresponding mixed space; for the five-class problem, we therefore obtain a total of 10 mixed spaces. For the mixed space composed of the first and second intention types, the mean covariance is:

$$R = \bar{R}_1 + \bar{R}_2$$
2. Construct hybrid spatial filters for each intention pair

First, calculate the whitening matrix. Eigenvalue decomposition is applied to $R$:

$$R = U \lambda U^{T}$$

where $U$ is the eigenvector matrix and $\lambda$ is the diagonal matrix of the corresponding eigenvalues, arranged in descending order. The whitening transformation matrix $P$ is then:

$$P = \lambda^{-\frac{1}{2}} U^{T}$$

The whitening matrix is applied to the mean covariance matrices of the pair:

$$S_i = P \bar{R}_i P^{T}, \quad i = 1, 2$$

$S_1$ and $S_2$ share the same eigenvectors: the direction of the maximum eigenvalue of $S_1$ corresponds to the minimum eigenvalue of $S_2$. The common eigenvector matrix $B$ is found from:

$$S_1 = B \lambda_1 B^{T}$$

and

$$S_2 = B \lambda_2 B^{T}$$

with

$$\lambda_1 + \lambda_2 = I$$

Performing this decomposition for every pair of intention types yields the projection matrices; $W$ is the desired spatial filter:

$$W_i = B^{T} P, \quad i = 1, 2, \ldots, 10$$

At this point, ten pairwise spatial filters $W$ have been obtained.
3. Generate intention features

Since there are ten mixed spaces and each of the five intention types participates in four of them, each type of intention EEG signal corresponds to four spatial filters and is projected through each of them. The filtering process is calculated as follows. Suppose $W_1$ is the mixed spatial filter constructed from the first and second intention types $X_1$ and $X_2$; projecting $X_1$ and $X_2$ through $W_1$ gives the two intention feature matrices $Z$:

$$Z_1 = W_1 X_1, \quad Z_2 = W_1 X_2$$

Then the first $n$ rows and last $n$ rows of $Z$, which best represent the two intention types, are selected, and the log-normalized variance of each selected row is taken as the final feature:

$$f_i = \lg \frac{\operatorname{var}(Z_i)}{\sum_{k=1}^{2n} \operatorname{var}(Z_k)}$$
4. Integrate intent features by column

After the different spatial filters are applied to the original intention EEG data, the resulting intention features are concatenated column-wise. Finally, a total of 12 dimensions of CSP feature data are obtained for each type of intention (the value of n is 3 after repeated experiments), which is used for subsequent multi-intention, multi-classification modeling and recognition.
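To make the pairwise procedure above concrete, the following is a minimal NumPy sketch, not the authors' implementation: the epoch shapes, trial counts, and helper names (normalized_cov, csp_filter, log_var_features) are illustrative assumptions, and only the formulas for R_i, the whitening matrix P, the joint diagonalization, and the log-variance features follow the equations above.

```python
import itertools
import numpy as np

def normalized_cov(epoch):
    """Trace-normalized covariance of one (N channels x T samples) epoch (R_i)."""
    c = epoch @ epoch.T
    return c / np.trace(c)

def mean_cov(epochs):
    """Average covariance matrix over all epochs of one intention class."""
    return np.mean([normalized_cov(e) for e in epochs], axis=0)

def csp_filter(cov_a, cov_b):
    """Spatial filter W = B^T P for one intention pair (whitening + joint diagonalization)."""
    R = cov_a + cov_b
    evals, U = np.linalg.eigh(R)          # R = U diag(evals) U^T
    P = np.diag(evals ** -0.5) @ U.T      # whitening matrix
    S_a = P @ cov_a @ P.T
    evals_a, B = np.linalg.eigh(S_a)      # S_a = B diag(evals_a) B^T; S_b shares B
    order = np.argsort(evals_a)[::-1]     # sort eigenvalues in descending order
    return B[:, order].T @ P

def log_var_features(W, epoch, n=3):
    """Project one epoch and keep the log-normalized variance of the first/last n rows."""
    Z = W @ epoch
    rows = np.vstack([Z[:n], Z[-n:]])
    v = rows.var(axis=1)
    return np.log(v / v.sum())

# Placeholder data: 5 intention classes, 20 epochs each, 61 channels x 1000 samples.
rng = np.random.default_rng(0)
epochs_by_class = {c: [rng.standard_normal((61, 1000)) for _ in range(20)] for c in range(5)}

covs = {c: mean_cov(eps) for c, eps in epochs_by_class.items()}
filters = {pair: csp_filter(covs[pair[0]], covs[pair[1]])
           for pair in itertools.combinations(range(5), 2)}      # 10 mixed spaces

# Feature vector for one trial: concatenate the features from the 4 filters in which
# its class participates. Keeping 2n rows per filter gives 24 values here; the paper
# reports a 12-D configuration with n = 3, so the exact selection may differ.
trial = epochs_by_class[0][0]
features = np.concatenate([log_var_features(W, trial)
                           for pair, W in filters.items() if 0 in pair])
print(features.shape)
```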

3.3. Machine Learning Classifier

The principle of SVM is to find a partition hyperplane in the sample space, based on the training set, that separates the samples of different categories while maximizing the margin between the two point sets and the hyperplane. SVM uses supervised learning for binary classification; with “one-vs-one”, “one-vs-rest”, or “many-vs-many” strategies, it can also handle multi-classification problems.
The K-nearest neighbor (KNN) algorithm extends naturally to multi-class problems. Its basic principle is to use all samples of known categories as a reference, calculate the distance between an unknown sample and all known samples, select the K known samples nearest to the unknown sample, and assign the unknown sample to the category that holds the majority among these K neighbors (majority voting).
The naive Bayesian algorithm is one of the most widely used classification algorithms. The naive Bayes model uses probability statistics to classify sample data sets; the algorithm is simple and stable, few parameters need to be estimated, and its classification performance does not vary much when the data present different characteristics. According to the characteristics of the variables in this paper, the maximum-posterior-probability model Gaussian naive Bayes (GNB) is adopted. GNB assumes that, within each category, the conditional probability of each feature variable follows a Gaussian distribution. After the conditional probability of each feature dimension is calculated, the class with the maximum posterior probability is selected.
In this paper, the above three classical machine learning classifiers are used to model multi-task intention and compare the performance of each classifier.
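As a hedged illustration of how such a comparison could be set up, the scikit-learn sketch below trains an RBF-kernel SVM, a Euclidean-distance KNN, and a Gaussian naive Bayes model on placeholder CSP features; the feature array, labels, and default hyperparameters are assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 12))     # placeholder: 12-D CSP features per trial
y = rng.integers(0, 5, 200)            # placeholder: five intention labels

classifiers = {
    "SVM (RBF kernel)": SVC(kernel="rbf", C=1.0, gamma="scale"),
    "KNN (Euclidean)": KNeighborsClassifier(n_neighbors=5, metric="euclidean"),
    "Gaussian NB": GaussianNB(),
}
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=5).mean()   # 5-fold cross-validated accuracy
    print(f"{name}: {acc:.3f}")
```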

4. Experiment

4.1. Subjects

Twenty-two college students were selected as subjects. All subjects were male, aged 18–22 (SD = 3.01), right-handed, had normal or corrected-to-normal vision, and had experience using computers. The EEG experiments were scheduled daily at 9 a.m. and lasted for 3 weeks. The subjects were required to rest well on the day before the experiment, wash their hair, and blow it dry within two hours before the experiment. Among the 22 subjects, two had EEG recordings with too many artifacts and were excluded, so the EEG data of the remaining 20 subjects were used for the intention modeling work.

4.2. Experiment Setup

The experimental devices included a Brain Products EEG amplifier and a 64-channel cap. The international standard 10–20 system was used to collect data from each channel over the cerebral cortex. TP9 and TP10 were used as reference electrodes for all recordings; all impedances were kept below 10 kΩ (most electrodes below 5 kΩ), with a sampling rate of 1000 Hz. Two Lenovo computers were used for the experiment, one for data acquisition and recording and one for the experimental program interface. The computer display resolution was 1280 × 1024 pixels, and the distance between the subject’s eyes and the screen was about 60 cm. The experimental procedure was run in E-Prime 3.0, and the EEG data were automatically recorded by the software. Participants were instructed to avoid blinking and physical activity.

4.3. Procedures

The experiment program was designed using the multi-task experimental paradigm in E-Prime 3.0. The experiment consisted of an introduction to the procedure, a pre-experiment, and then the formal experiment. After confirming that they were ready, participants pressed the space bar to start. After 5 min of resting-state EEG collection, subjects performed a generalized browsing task, a visual search task, a click control task, a table inquiry task, and a complex disposal task in turn. In the multi-task experimental paradigm, the interface displays and runs all modules, and users complete specific tasks in different areas of the interface. Except for task 1, which is global browsing, the other tasks are completed in their respective areas according to the experimenter’s requirements. The specific experimental process is shown in Figure 2, where the areas marked in yellow correspond to each intention task. The experimental instructions for the current task were displayed before the start of each task, and after the subject was ready, the space bar was pressed to start the task. The experimental program marked the EEG signals before and after each task. There was a relaxation period at the end of each task; when ready, the subject pressed the space bar to go to the next task. The total experimental time for each participant was about 30 min, and five types of intention EEG data were recorded.

4.4. Preprocessing of the EEG Data

Preprocessing was performed using the EEGLAB toolbox in MATLAB R2013b. After electrode positioning, active electrode removal, bandpass filtering (0.5–45 Hz), data slicing, segment removal, and re-referencing to TP9 and TP10, 61 channels of data remained. Blink, ECG, and ocular drift artifacts were removed using the ICA algorithm, and extreme values were removed using ±100 μV as the boundary. The subject-wise cross-validation method was used to train the implicit intention recognition model. Data sets were normalized using the mapminmax function of MATLAB; the test set was normalized with the same scaling as the training set, yielding the data sets finally input to the classifiers.
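For readers who work in Python rather than MATLAB/EEGLAB, an approximately equivalent preprocessing chain could look like the MNE-Python sketch below; the file name, ICA component count, excluded component indices, and epoch window are placeholders, and the actual study used EEGLAB as described above.

```python
import mne
from mne.preprocessing import ICA

# Placeholder file name; BrainVision recordings load via read_raw_brainvision.
raw = mne.io.read_raw_brainvision("subject01.vhdr", preload=True)

raw.filter(l_freq=0.5, h_freq=45.0)                   # band-pass filter 0.5-45 Hz
raw.set_eeg_reference(ref_channels=["TP9", "TP10"])   # re-reference to TP9/TP10

# ICA-based removal of blink / cardiac / ocular-drift components.
ica = ICA(n_components=20, random_state=0)
ica.fit(raw)
ica.exclude = [0, 1]          # placeholder indices of artifact components
ica.apply(raw)

# Slice the continuous data into task epochs and drop extreme-amplitude segments.
events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=0.0, tmax=2.0,
                    baseline=None, reject=dict(eeg=100e-6), preload=True)
print(epochs)
```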

5. Results and Analysis

5.1. Classification Results

The feature data extracted by the CSP algorithm were input into the SVM, KNN, and GNB classifiers, respectively. An RBF-kernel SVM classifier was adopted, and a grid search over c and g (values from 10^−5 to 10^5, step 0.5) was performed with cross-validation on the training set to find the optimal values. The optimal SVM-RBF parameter combination was obtained using 5-fold cross-validation for each pair of c and g. In the KNN model, Euclidean distance was used to measure the distance between samples, and 5-fold cross-validation was used to evaluate training performance for k values from 3 to 11 to obtain the optimal k. The training performance and optimal parameters of the classifiers are shown in Table 1.
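A hedged sketch of this hyperparameter search using scikit-learn follows; the exponent grid mirrors the stated 10^−5 to 10^5 range with a step of 0.5, the k range 3–11 is used for KNN, the training data are placeholders, and min-max scaling is fit on the training set only, as in the paper.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train, y_train = rng.standard_normal((150, 12)), rng.integers(0, 5, 150)  # placeholders
X_test, y_test = rng.standard_normal((50, 12)), rng.integers(0, 5, 50)

# Normalize with training-set scaling only (mapminmax-style), then apply to the test set.
scaler = MinMaxScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# Grid of c and g values from 10^-5 to 10^5 (exponent step 0.5), 5-fold cross-validation.
grid = 10.0 ** np.arange(-5, 5.5, 0.5)
svm_search = GridSearchCV(SVC(kernel="rbf"), {"C": grid, "gamma": grid}, cv=5)
svm_search.fit(X_train, y_train)

# k from 3 to 11 for the Euclidean-distance KNN model, also with 5-fold CV.
knn_search = GridSearchCV(KNeighborsClassifier(metric="euclidean"),
                          {"n_neighbors": list(range(3, 12))}, cv=5)
knn_search.fit(X_train, y_train)

print(svm_search.best_params_, svm_search.best_estimator_.score(X_test, y_test))
print(knn_search.best_params_, knn_search.best_estimator_.score(X_test, y_test))
```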
The trained optimal parameters and models were used to classify the data in the test set. The test accuracies of the 20 subjects are shown in Table 2. As can be seen from Table 3, the average test accuracy of the SVM classifier is the highest, at 90.81 ± 9.42%. The KNN classifier is second, with a test accuracy of 80.87 ± 5.94%. Among the three classifiers, the GNB classification accuracy is the lowest, at only 76.66 ± 10.12%. The Friedman test was used to compare the three classifiers and assess performance differences. The results showed that the three classifiers differed significantly (p = 0.001, χ2 = 14, df = 2). A post hoc test showed that the SVM classifier performed significantly better than the other two classifiers, as shown in Figure 3.
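The Friedman comparison across the 20 subjects can be reproduced with SciPy as sketched below; the per-subject accuracy arrays are the Table 2 columns, and the post hoc step is shown as Bonferroni-corrected Wilcoxon signed-rank tests, which is one common choice and may differ from the authors' post hoc procedure.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# Per-subject test accuracies from Table 2 (S1-S20).
svm = np.array([87.27, 100, 84.2, 79.66, 76.34, 70.86, 92.31, 100, 92.24, 96.68,
                100, 85.96, 82.42, 100, 98.41, 98.62, 99.87, 100, 79, 92.31])
knn = np.array([81, 79.19, 84.2, 80.05, 80.85, 79.75, 82.44, 79.8, 69.21, 84.19,
                85.35, 62.52, 82.42, 87.6, 82.25, 87.38, 87.37, 80.25, 78.43, 83.24])
gnb = np.array([83.27, 78.95, 84.2, 84.03, 82.7, 79.51, 80.27, 79.6, 81.63, 57.4,
                83.29, 74.73, 43.33, 76.5, 78.89, 78.15, 73.85, 82.79, 67.79, 82.36])

stat, p = friedmanchisquare(svm, knn, gnb)         # overall test across the three classifiers
print(f"Friedman chi2 = {stat:.2f}, p = {p:.4f}")

# Post hoc pairwise comparisons (Bonferroni-corrected Wilcoxon signed-rank tests).
for name, (a, b) in {"SVM vs KNN": (svm, knn),
                     "SVM vs GNB": (svm, gnb),
                     "KNN vs GNB": (knn, gnb)}.items():
    w, p_pair = wilcoxon(a, b)
    print(f"{name}: corrected p = {min(p_pair * 3, 1.0):.4f}")
```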
In terms of the recognition accuracy of different interaction intentions, the SVM model has the highest recognition accuracy for each intention task, with an average task recognition accuracy of 90.5%. The recognition accuracy of task 1 is the highest, reaching 99.48%, followed by task 3 at 90.96%; the recognition accuracies of tasks 5, 4, and 2 are 89.81%, 87.62%, and 84.63%, respectively. The comparison of the recognition accuracy of the three classifiers is shown in Figure 4.
Among the five types of interactive intention, the recognition accuracy of the generalized browsing intention is the highest. The reason is that this intention imposes the smallest task load, and the user’s behavior is closest to resting-state EEG, so it differs most from the other intention types. It is followed by the click control, table inquiry, and complex disposal intents. The target search intention has the lowest average recognition accuracy across the three classifiers among the five types of interaction intention, indicating that this interaction task is not sufficiently distinguishable from the other tasks, especially from the generalized browsing intention, which is also a visual browsing intention. This underscores the relevance of the large body of binary classification studies on these two types of intention.

5.2. Effectiveness of the CSP Intention Feature

In order to verify the superiority of the common spatial pattern algorithm over other feature extraction methods for interactive intention features, this paper compared CSP features with power spectral density (PSD) features extracted by the fast Fourier transform, inputting each into the SVM classifier. The comparison of the test accuracy of the 20 subjects under the two algorithms is shown in Figure 5. The experiments show that CSP features are significantly better than traditional features for intention recognition, indicating that the CSP feature extraction method can suppress EEG components that are unrelated to intention and thereby optimize the classification effect.
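For reference, a PSD baseline of this kind could be computed with a Welch/FFT estimate as in the SciPy sketch below; the epoch shape, frequency bands, and nperseg choice are illustrative assumptions rather than the authors' exact configuration.

```python
import numpy as np
from scipy.signal import welch

def psd_band_features(epoch, fs=1000, bands=((4, 8), (8, 13), (13, 30), (30, 45))):
    """Average PSD per channel in each band for one (N channels x T samples) epoch."""
    freqs, pxx = welch(epoch, fs=fs, nperseg=512, axis=-1)   # FFT-based PSD estimate
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(pxx[:, mask].mean(axis=1))              # one value per channel and band
    return np.concatenate(feats)

epoch = np.random.default_rng(0).standard_normal((61, 1000))  # placeholder trial
print(psd_band_features(epoch).shape)                         # 61 channels x 4 bands = 244 values
```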
A paired-samples t-test showed a significant difference between CSP features and PSD features in SVM classification, t(19) = 23.851, p < 0.001, d = 0.99. The SVM accuracy of the intention EEG CSP features across the 20 subjects (M = 90.93, SD = 9.80) was significantly higher than that of the PSD features (M = 31.18, SD = 8.95), as shown in Table 4.
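The paired comparison can be checked with scipy.stats.ttest_rel as sketched below; the two accuracy vectors are placeholders standing in for the 20 per-subject values plotted in Figure 5, and Cohen's d is computed from the paired differences, which is one common convention and may differ from the effect size definition used by the authors.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
acc_csp = 90 + 10 * rng.random(20)     # placeholder per-subject CSP+SVM accuracies
acc_psd = 30 + 10 * rng.random(20)     # placeholder per-subject PSD+SVM accuracies

t, p = ttest_rel(acc_csp, acc_psd)     # paired-samples t-test, df = 19
diff = acc_csp - acc_psd
d = diff.mean() / diff.std(ddof=1)     # Cohen's d for paired samples
print(f"t(19) = {t:.3f}, p = {p:.3g}, d = {d:.2f}")
```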
In order to further show the effectiveness of the CSP feature matrix in representing multi-task intention signals, this paper reduces the high-dimensional features with the LDA dimensionality reduction algorithm and examines feature separability with scatter plots. LDA is a supervised dimension reduction algorithm; it was used to reduce the 12-dimensional CSP features and the frequency-point PSD features to a three-dimensional space for viewing. The EEG data of subject 19, which had low noise, good data quality, and a medium-level recognition rate across the three classifiers, were selected for the visual display of the features. As shown in Figure 6, the five colors represent the features constructed from the EEG signals of the five interaction types. It can be seen that the data points of the intention features extracted by the CSP algorithm form clear clusters, showing better spatial separability than the traditional PSD features, which further demonstrates the effectiveness of the CSP algorithm for intention feature extraction.
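A 3D scatter visualization of this kind could be produced as in the hedged sketch below: LDA reduces the 12-D features to three discriminant components (at most n_classes − 1 = 4 are available for five classes); the feature matrix and labels are placeholders, not subject 19's data.

```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
y = np.repeat(np.arange(5), 60)                              # five intention labels
X = rng.standard_normal((300, 12)) + y[:, None]              # placeholder 12-D CSP features

lda = LinearDiscriminantAnalysis(n_components=3)             # supervised reduction to 3-D
X3 = lda.fit_transform(X, y)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
for c in range(5):                                           # one color per intention class
    ax.scatter(*X3[y == c].T, label=f"Task {c + 1}", s=10)
ax.legend()
plt.show()
```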
Next, this paper analyzes the brain channel weights through the common spatial pattern projection matrix of each intention type. First, the energy values of the 61 channels of each mixed spatial filter are extracted by fast Fourier transform; then mapminmax is used to scale each row vector to [0, 1]. Finally, the MATLAB topoplot function is used to visualize the channel weights over the whole brain. Following these steps, the spatial patterns of the intentions can be observed from a macro perspective.
Table 5 shows the brain channel weight topographies of the common spatial pattern projection matrices for all subjects. Channels in the yellow regions have greater weights, indicating that they are more important for intention identification, while channels in the blue regions have smaller weights. The EEG responses of subjects performing different interaction intentions were found to be specific. The channels with higher weights are concentrated at electrode positions in the occipital and parietal regions of the cerebral cortex, indicating that these regions are more important for intention recognition and can be prioritized in the development of active BCI systems with timeliness requirements. In addition, the high weights are concentrated on a few electrodes, which provides a basis for improving the CSP algorithm through channel selection.
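A Python counterpart to the MATLAB topoplot step could use MNE's plot_topomap, as in the hedged sketch below; the channel subset, montage, and random weight vector are placeholders (the study used 61 channels), and a recent MNE version that accepts an Info object as the sensor layout is assumed.

```python
import mne
import numpy as np

# Illustrative subset of 10-20 channel names; not the full 61-channel layout.
ch_names = ["Fz", "Cz", "Pz", "Oz", "O1", "O2", "P3", "P4", "C3", "C4", "F3", "F4"]
info = mne.create_info(ch_names=ch_names, sfreq=1000.0, ch_types="eeg")
info.set_montage("standard_1020")

weights = np.random.default_rng(0).random(len(ch_names))                # placeholder filter energies
weights = (weights - weights.min()) / (weights.max() - weights.min())   # scale to [0, 1]

# High weights plotted warm, low weights cool, analogous to the yellow/blue maps in Table 5.
mne.viz.plot_topomap(weights, info, cmap="viridis", show=True)
```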

6. Discussion

There are many implicit intentions in human–computer interaction, so multi-intent pattern recognition is very important. As early as 2005, Palaniappan et al. proved the feasibility of using EEG to identify various human mental tasks such as letter composition, math, visual counting, and geometric figure rotation tasks. Palaniappan classified these cognitive patterns based on the band powers of four frequency bands, and a recognition accuracy of 78–97.5% was obtained by training the model with a neural network [23]. Lee et al. classified three cognitive tasks—rest, mental arithmetic, and mental rotation—by BN and achieved an average accuracy of 84.0% [22]. Anderson et al. used neural networks to classify five cognitive tasks (the above four tasks plus the resting state), with classification accuracy ranging from 38% to 71% [40]. However, the mental tasks used to induce EEG in these studies differ greatly in cognitive pattern, the classification problems are relatively simple, and the accuracies obtained are correspondingly high. The similarity between the classes of implicit intention in HCI is high, and the pattern recognition problem is more difficult. Most studies of implicit intention focus on the field of MI. Some studies, building on left/right motor imagery and combined with actual scenes, introduce more cognitive patterns to recognize. For example, Wang used pilots’ EEG and eye movement data to identify three intents—turn left, turn right, and shoot—and obtained an accuracy of over 90% (EEG accuracy 77.01%, eye movement accuracy 87.43%) [33]. That experiment combined the motor imagery paradigm with an actual scene and interactive operation. Liang conducted a visual interaction experiment based on the multi-task paradigm to collect eye movement feature data under the relevant behavioral intention states; an SVM model was used for classification prediction, and the eye movement feature components were selected in combination with difference analysis [34]. Although a variety of interaction intention types were designed in that cognitive pattern experiment, only eye movement data were used for intent modeling. This paper takes the industrial control interface of a complex-system human–machine interface as an example, uses the multi-task experiment paradigm, and redesigns the interactive task of each module. EEG signals under various intention cognitive patterns are obtained, and the model recognition accuracy is high. This proves that it is feasible to decode the implicit intention of HCI from EEG and verifies the validity of the intention-EEG induction experiment design.
It is worth mentioning that in this experiment, the recognition accuracy of the purposive visual browsing intention is the lowest, indicating that this intention is the least different and separable from the other intention types. This shows the necessity of studying visual browsing pattern recognition. Such basic research can be applied to many natural interaction scenarios, such as VR and AR devices supported by eye-movement interaction technology. Kang focused on visual browsing behavior, using EEG data to identify the browsing patterns of users with and without goals, but the classification accuracy was low (around 70%) [24]. Park also worked on using eye movement signals to interpret whether people have a specific goal when visually browsing [17]. Most studies combine eye movement data to study visual browsing behavior, whereas this paper uses only EEG signals to decode the interaction intent; this may also be why the accuracy of task 2 is the lowest among the intention tasks in this paper. The next research direction of our group is to use a multi-modal method fusing multiple physiological signals for intention recognition.
The experimental results show that SVM is superior to the other machine learning classifiers for intention recognition with EEG, both in recognition effect and in generalization ability. This is consistent with the conclusions of Kang’s [24] and Liang’s [34] research. Recently, deep learning has been widely used in HCI-related intention modeling and has achieved high accuracy. For example, Zhang et al. proved the validity of an improved CNN in three representative scenarios: intention recognition from motor imagery EEG, person identification, and neurological diagnosis [41]. As the data set expands, we can further test the performance of deep learning.
The results also show that the CSP algorithm is effective in the field of intention recognition and even better than other feature extraction methods. This conclusion is consistent with Wang’s [33] and Li’s [42] research. Studies using traditional EEG features such as PSD often obtain lower accuracy. Wang proposed a classification method based on PSD features to distinguish emergency and soft braking intentions from normal driving intentions using SVM [32]. Li’s study shows that, compared with modeling on raw EEG signals, CSP + SVM can better identify user preferences [42]. In this paper, CSP is applied to implicit intention feature extraction in HCI. In conclusion, both the feature space visualization and the classification performance demonstrate the superiority of the CSP algorithm for intention feature extraction. Many improved versions of the CSP algorithm have appeared, usually combining spatial (channel selection), frequency, and temporal characteristics for further improvement; this is also the direction in which the algorithm will be optimized in the later stages of this research.
The research content of this paper may be used in the future development of adaptive systems based on active BCI technology, which requires timeliness of the algorithm model. Scholars working on cognitive pattern classification also advocate using fewer electrodes to achieve the same or better results [22,23,40], so more economical and efficient methods need to be explored. Therefore, this paper discusses the channel weights in the last part of the results. It is found that the channels with higher weights are concentrated in the parietal and occipital lobes, which is similar to the conclusions of Anderson (2007) [40], Lee and Tan (2006) [22], and Palaniappan (2005) [23]. They used EEG signals from O1, O2, C3, C4, P3, and P4 to model the cognitive task the brain was performing, and the highest accuracy achieved exceeded 90% on a single subject. This illustrates the possibility of reducing the runtime overhead.

7. Conclusions

With the continuous development of brain–computer fusion technology, the communication channel between humans and computers has expanded, and human–machine systems have additional information input methods. Based on pBCI technology, this paper shows through EEG experiments that EEG signals can be used as the basis for judging the implicit intention of HCI. The comparative analysis of the experimental results shows that the intention feature extraction method based on the CSP algorithm is better than traditional frequency-domain features, which verifies that the CSP algorithm can not only achieve good results in MI but is also effective for implicit intent recognition in HCI. Compared with the traditional PSD feature, the CSP method achieves a better classification effect: it improves classification test performance and also effectively reduces the negative effect of subject specificity on classification. The SVM classifier outperforms the other machine learning classifiers in implicit intention multi-classification, and the CSP + SVM model can effectively improve the decoding performance of HCI implicit intent EEG, providing a reference for future intention-based HCI model research. The multi-task human–computer interaction experimental paradigm for complex systems proposed in this paper still needs further data accumulation and full verification. In addition, the study of the projection matrix of each intention type shows that the channel weights of each intention exhibit certain regularities, which will guide the next step of this study toward optimizing CSP features based on channel selection and joint band–channel selection.

Author Contributions

Methodology, X.M.; Validation, X.M.; Investigation, X.M.; Writing—original draft, X.M.; Writing—review & editing, W.H.; Visualization, X.M.; Supervision, W.H.; Project administration, W.H.; Funding acquisition, W.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by: (1) the Inner Mongolia Party Committee, Philosophy and social science planning project of Inner Mongolia Autonomous Region, CHINA, grant number 2022NDB244; (2) the Basic research funds for universities directly under the Inner Mongolia Autonomous Region, CHINA, grant number 2023XKJX026; (3) the Major project of Beijing Social Science Foundation, CHINA, No. 18ZDA08.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Inner Mongolia University of Science and Technology Ethics Committee, protocol code (2023) No. 10.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are openly available in BAIDU Cloud at https://pan.baidu.com/s/1w36uqJUXEkggoQ7mg5U9vA?pwd=dqiq (accessed on 27 December 2023).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Alliance of Industrial Internet. Industrial Internet White Paper. 2021. Available online: http://en.aii-alliance.org/index.php (accessed on 27 December 2023).
  2. Moore, A.; O’Reilly, T.; Nielsen, P.D.; Fall, K. Four Thought Leaders on Where the Industry Is Headed. IEEE Softw. 2016, 33, 36–39. [Google Scholar] [CrossRef]
  3. World Economic Forum. Digital Transformation of Industries; World Economic Forum: Cologny, Switzerland, 2016. [Google Scholar]
  4. Gil, M.; Albert, M.; Fons, J.; Pelechano, V. Designing human-in-the-loop autonomous Cyber-Physical Systems. Int. J. Hum.-Comput. Stud. 2019, 130, 21–39. [Google Scholar] [CrossRef]
  5. De Lemos, R.; Giese, H.; Müller, H.A.; Shaw, M.; Andersson, J.; Litoiu, M.; Schmerl, B.; Tamura, G.; Villegas, N.M.; Wuttke, J.; et al. Software Engineering for Self-Adaptive Systems: A Second Research Roadmap. In Software Engineering for Self-Adaptive Systems II: International Seminar, Dagstuhl Castle, Germany, October 24–29, 2010 Revised Selected and Invited Papers; de Lemos, R., Giese, H., Müller, H.A., Shaw, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 1–32. [Google Scholar]
  6. Miller, D.B.; Ju, W. Joint Cognition in Automated Driving: Combining Human and Machine Intelligence to Address Novel Problems. In Proceedings of the AAAI Spring Symposium Series, Stanford, CA, USA, 23–25 March 2015; Available online: https://www.aaai.org/ocs/index.php/SSS/SSS15/paper/view/10308 (accessed on 27 December 2023).
  7. Cutrell, E.; Guan, Z.W. What Are You Looking for? An Eye-Tracking Study of Information Usage in Web Search. In Proceedings of the Conference on Human Factors in Computing Systems, San Jose, CA, USA, 28 April–3 May 2007. [Google Scholar]
  8. Deng, X.; Xiao, L.F.; Yang, P.F.; Wang, J.; Zhang, J.H. Development of a robot arm control system using motor imagery electroencephalography and electrooculography. CAAI Trans. Intell. Syst. 2022, 17, 1163–1172. [Google Scholar]
  9. Wang, X.R. Research on Auxiliary Brain State Monitoring Methods in Brain-Computer Interfaces. Master’s Thesis, Yanshan University, Qinhuangdao, China, 2022. Available online: https://www.cnki.net (accessed on 27 December 2023).
  10. Yan, N.; Wang, J.; Wei, N.; Zong, L. Feature Exaction and Classification of Attention Related Electroencephalographic Signals Based on Sample Entropy. J. Xi’an Jiaotong Univ. 2007, 41, 1237–1241. [Google Scholar]
  11. Guo, F.; Yu, C.; Ding, Y. On Measuring Users’ Emotions in Interacting with Webs. J. Northeast. Univ. Nat. Sci. 2014, 35, 131. [Google Scholar]
  12. Hu, X.; Yu, J.; Song, M.; Yu, C.; Wang, F.; Sun, P.; Wang, D.; Zhang, D. EEG Correlates of Ten Positive Emotions. Front. Hum. Neurosci. 2017, 11, 26. [Google Scholar] [CrossRef]
  13. Xu, J.L.; Wang, P.; Mu, Z.D. Fatigue Driving Detection Based on Eye Movement and EEG Features. J. Chongqing Jiaotong Univ. Nat. Sci. 2021, 40, 7–11. [Google Scholar]
  14. Gevins, A.S.; Smith, M.E. Neurophysiological measures of cognitive workload during human-computer interaction. Theor. Issues Ergon. Sci. 2003, 4, 113–131. [Google Scholar] [CrossRef]
  15. Holm, A.; Lukander, K.; Korpela, J.; Sallinen, M.; Müller, K. Estimating Brain Load from the EEG. Sci. World J. 2009, 9, 639–651. [Google Scholar] [CrossRef]
  16. Xing, Y.; Lv, C.; Wang, H.; Wang, H.; Ai, Y.; Cao, D.; Velenis, E.; Wang, F.Y. Driver Lane Change Intention Inference for Intelligent Vehicles: Framework, Survey, and Challenges. IEEE Trans. Veh. Technol. 2019, 68, 4377–4390. [Google Scholar] [CrossRef]
  17. Park, H.; Lee, A.; Lee, M.; Chang, M.S.; Kwak, H.W. Using eye movement data to infer human behavioral intentions. Comput. Hum. Behav. 2016, 63, 796–804. [Google Scholar] [CrossRef]
  18. Jang, Y.M.; Mallipeddi, R.; Lee, M. Identification of human implicit visual search intention based on eye movement and pupillary analysis. User Model. User-Adapt. Interact. 2014, 24, 315–344. [Google Scholar] [CrossRef]
  19. Jang, Y.M.; Mallipeddi, R.; Lee, S.; Kwak, H.W.; Lee, M. Human intention recognition based on eyeball movement pattern and pupil size variation. Neurocomputing 2014, 128, 421–432. [Google Scholar] [CrossRef]
  20. Velasquez, J.; Weber, R.; Yasuda, H.; Aoki, T. Acquisition and Maintenance of Knowledge for Online Navigation Suggestions. IEICE Trans. Inf. Syst. 2005, E88D, 993–1003. [Google Scholar] [CrossRef]
  21. Pan, L.C.; Wang, K.; Xu, M.P.; Ni, G.J.; Ming, D. Review of Researches on Common Spacial Pattern and its Extended Algorithms for Movement Intention Decoding. Chin. J. Biomed. Eng. 2022, 41, 577–588. [Google Scholar]
  22. Lee, J.C.; Tan, D.S. Using a low-cost electroencephalograph for task classification in HCI research. In Proceedings of the 19th Annual ACM Symposium on User Interface Software and Technology, Montreux, Switzerland, 15–18 October 2006. [Google Scholar] [CrossRef]
  23. Palaniappan, R. Brain Computer Interface Design Using Band Powers Extracted During Mental Tasks. In Proceedings of the 2nd International IEEE EMBS Conference on Neural Engineering, Washington, DC, USA, 16–19 March 2005. [Google Scholar]
  24. Kang, J.S.; Park, U.; Gonuguntla, V.; Veluvolu, K.C.; Lee, M. Human implicit intent recognition based on the phase synchrony of EEG signals. Pattern Recognit. Lett. 2015, 66, 144–152. [Google Scholar] [CrossRef]
  25. Slanzi, G.; Balazs, J.A.; Velasquez, J.D. Combining eye tracking, pupil dilation and EEG analysis for predicting web users click intention. Inf. Fusion 2017, 35, 51–57. [Google Scholar] [CrossRef]
  26. Van-Horenbeke, F.A.; Peer, A. Activity, Plan, and Goal Recognition: A Review. Front. Robot. AI 2021, 8, 643010. [Google Scholar] [CrossRef]
  27. Jansen, B.; Booth, D.; Spink, A. Determining the informational, navigational and transactional intent of web queries. Inf. Process. Manag. 2008, 44, 1251–1266. [Google Scholar] [CrossRef]
  28. Zhou, S.; Yu, Q. Human-Computer Interaction Technology; Tsinghua University Press: Beijing, China, 2022. [Google Scholar]
  29. Park, U.; Mallipeddi, R.; Lee, M. Human Implicit Intent Discrimination Using EEG and Eye Movement. In Proceedings of the 21st International Conference on Neural Information Processing (ICONIP), Kuching, Malaysia, 3–6 November 2014. [Google Scholar]
  30. Khushaba, R.N.; Greenacre, L.; Kodagoda, S.; Louviere, J.J.; Burke, S.; Dissanayake, G. Choice modeling and the brain: A study on the Electroencephalogram (EEG) of preferences. Expert Syst. Appl. 2012, 39, 12378–12388. [Google Scholar] [CrossRef]
  31. Khushaba, R.N.; Wise, C.; Kodagoda, S.; Louviere, J.; Kahn, B.E.; Townsend, C. Consumer neuroscience: Assessing the brain response to marketing stimuli using electroencephalogram (EEG) and eye tracking. Expert Syst. Appl. 2013, 40, 3803–3812. [Google Scholar] [CrossRef]
  32. Wang, H.K.; Bi, L.Z.; Fei, W.J.; Wang, L. An EEG-Based Multi-Classification Method of Braking Intentions for Driver-Vehicle Interaction. In Proceedings of the 2019 IEEE International Conference on Real-Time Computing and Robotics (RCAR), Irkutsk, Russia, 4–9 August 2019; pp. 438–441. [Google Scholar]
  33. Wang, W.; Zhao, M.; Gao, H.; Zhu, S.; Qu, J. Human-computer interaction: Intention recognition based on EEG and eye tracking. Acta Aeronaut. Astronaut. Sin. 2021, 42, 324290. [Google Scholar]
  34. Liang, Y.Q.; Wang, W.; Qu, J.; Yang, J.; Liu, X.W. Human-Computer Interaction Behavior and Intention Prediction Model Based on Eye Movement Characteristics. Acta Electron. Sin. 2018, 46, 2993–3001. [Google Scholar]
  35. Hu, Y.; Liu, Y.; Cheng, C.C.; Geng, C.; Dai, B.; Peng, B.; Zhu, J.; Dai, Y.K. Multi-task motor imagery electroencephalogram classification based on adaptive time-frequency common spatial pattern combined with convolutional neural network. Chin. J. Biomed. Eng. 2022, 39, 1065–1073. [Google Scholar]
  36. Lu, Z.W.; Chen, Y.; Mo, Y.; Zhang, B.X. EEG channel selection method based on TRCSP and L2 norm. Electron. Meas. Technol. 2023, 46, 94–102. [Google Scholar] [CrossRef]
  37. NASA. Available online: https://matb.larc.nasa.gov/ (accessed on 27 December 2023).
  38. Chen, M.; Liang, N.M. The Path to Intelligent Manufacturing: Digitalized Factory; Liang, N., Ed.; China Machine Press: Beijing, China, 2016. [Google Scholar]
  39. Shi, J.; Tang, W.; Li, N.; Zhou, Y.; Zhou, T.; Chen, Z.; Yin, K. User Cognitive Abilities-Human Computer Interaction Tasks Model. In Intelligent Human Systems Integration 2021; Springer: Cham, Switzerland, 2021. [Google Scholar]
  40. Anderson, C.W. Classification of EEG Signals from Four Subjects During Five Mental Tasks. 2007. [Google Scholar]
  41. Zhang, X.; Yao, L.; Wang, X.; Zhang, W.; Zhang, S.; Liu, Y. Know Your Mind: Adaptive Cognitive Activity Recognition with Reinforced CNN. In Proceedings of the 2019 IEEE International Conference on Data Mining (ICDM), Beijing, China, 8–11 November 2019. [Google Scholar]
  42. Li, C.; He, F.; Qi, H.; Guo, X.; Chen, L.; Ming, D. A Study for ERP Classification of Food Preference Based on CSP and SVM. Chin. J. Biomed. Eng. 2022, 41, 266–272. [Google Scholar]
Figure 1. Multi-task experimental stimulation methods and the algorithmic model framework of this paper.
Figure 2. Interactive intention task extraction procedure: (a) field research; (b) functional and content analysis; (c) experimental prototyping.
Figure 3. Friedman test and post hoc test results. The blue line on the right represents SVM; the left and middle lines represent GNB and KNN, respectively.
Figure 4. The recognition of different types of interaction intentions by the model.
Figure 5. Recognition accuracy of the 20 subjects under different algorithms (units: %).
Figure 6. Three-dimensional spatial visualization of subject 19 intention features: Training set: (a) PSD feature; (b) CSP feature; testing set: (c) PSD feature; (d) CSP feature.
Table 1. Subject-wise cross-validation.

Subject | SVM (RBF kernel) %Acc | c | g | KNN %Acc | k | GaussianNB %Acc
S1 | 99.10 | 0.25 | 8 | 86.75 | 5 | 84.52
S2 | 99.08 | 0.25 | 8 | 85.77 | 4 | 82.11
S3 | 99.54 | 0.25 | 8 | 88.22 | 4 | 85.21
S4 | 99.09 | 0.125 | 4 | 85.87 | 4 | 85.14
S5 | 98.94 | 0.25 | 8 | 87.01 | 3 | 84.71
S6 | 99.08 | 32 | 0.0625 | 87.62 | 4 | 85.00
S7 | 99.42 | 0.25 | 8 | 85.19 | 7 | 81.56
S8 | 99.04 | 0.25 | 8 | 84.15 | 3 | 84.43
S9 | 99.34 | 0.5 | 8 | 87.36 | 4 | 85.09
S10 | 99.26 | 0.125 | 4 | 86.90 | 4 | 75.96
S11 | 99.03 | 0.25 | 8 | 86.85 | 3 | 85.11
S12 | 99.08 | 0.25 | 8 | 91.03 | 4 | 85.49
S13 | 98.13 | 1 | 8 | 87.30 | 3 | 85.74
S14 | 99.00 | 0.5 | 8 | 88.32 | 3 | 83.63
S15 | 99.03 | 2 | 2 | 87.64 | 3 | 81.62
S16 | 99.05 | 0.25 | 8 | 89.64 | 3 | 83.76
S17 | 99.06 | 0.125 | 2 | 87.60 | 5 | 82.17
S18 | 99.89 | 0.125 | 2 | 87.18 | 3 | 83.32
S19 | 99.12 | 2 | 0.25 | 88.79 | 3 | 92.50
S20 | 99.42 | 0.25 | 8 | 87.28 | 3 | 87.95
Mean | 99.135 ± 0.33 | | | 87.32 ± 1.51 | | 88.26 ± 1.11
Table 2. Test performance.

Subject | SVM %Acc | KNN %Acc | GNB %Acc
S1 | 87.27 | 81 | 83.27
S2 | 100 | 79.19 | 78.95
S3 | 84.2 | 84.2 | 84.2
S4 | 79.66 | 80.05 | 84.03
S5 | 76.34 | 80.85 | 82.7
S6 | 70.86 | 79.75 | 79.51
S7 | 92.31 | 82.44 | 80.27
S8 | 100 | 79.8 | 79.6
S9 | 92.24 | 69.21 | 81.63
S10 | 96.68 | 84.19 | 57.4
S11 | 100 | 85.35 | 83.29
S12 | 85.96 | 62.52 | 74.73
S13 | 82.42 | 82.42 | 43.33
S14 | 100 | 87.6 | 76.5
S15 | 98.41 | 82.25 | 78.89
S16 | 98.62 | 87.38 | 78.15
S17 | 99.87 | 87.37 | 73.85
S18 | 100 | 80.25 | 82.79
S19 | 79 | 78.43 | 67.79
S20 | 92.31 | 83.24 | 82.36
Table 3. Recognition results of three classifiers.

 | SVM | KNN | GNB
Recognition average (%) | 90.81 | 80.87 | 76.66
Standard deviation | 9.42 | 5.94 | 10.12
Table 4. Paired-samples t-test.

Pair | Mean | Std. | Standard Error of the Mean | 95% CI Lower Limit | 95% CI Upper Limit | t | df | p (two-tailed)
CSP-PSD | 59.75 | 11.20 | 2.51 | 54.50 | 69.99 | 23.85 | 19 | 0.00
Table 5. Whole-brain topography of channel weights of projection matrix for each category.

Subject Number | Task 1 Generalized Browsing | Task 2 Visual Search | Task 3 Click Control | Task 4 Table Inquire | Task 5 Complex Dispose
(Rows S1–S20 contain the per-subject topographic channel weight maps for the five tasks, shown as images in the original table.)
Yellow means the weight is close to 1; blue means the weight is close to 0.