Article

The Effects of VR and TP Visual Cues on Motor Imagery Subjects and Performance

1 School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650032, China
2 Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming 650032, China
3 College of Information Engineering, Engineering University of PAP, Xi’an 710018, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(11), 2381; https://doi.org/10.3390/electronics12112381
Submission received: 7 April 2023 / Revised: 18 May 2023 / Accepted: 23 May 2023 / Published: 24 May 2023

Abstract

This study objectively evaluated the effects of Virtual Reality Visual Cues (VRVCs) and Traditional Plane Visual Cues (TPVCs) on motor imagery (MI) subjects and Brain-Computer Interface (BCI) performance when building a classification model for MI-BCIs. Four metrics, namely, imagery stability, brain activation and connectivity, classification accuracy, and fatigue level, were used to evaluate the effects of TPVCs and VRVCs on subjects and MI-BCI performance. Nine male subjects performed four types of MI (left/right-hand grip strength) under VRVCs and TPVCs while EEG and fNIRS signals were acquired. FBCSP and HFD were used to extract features, and KNN was used to evaluate MI-BCI accuracy. Rt-DTW was used to evaluate MI stability. PSD topography and the brain functional network were used to assess brain activation and connectivity. Cognitive load and fNIRS mean features were used to evaluate fatigue. The mean classification accuracies of the four types of MI under TPVCs and VRVCs were 50.83% and 51.32%, respectively. However, MI was more stable under TPVCs. VRVCs enhanced the connectivity of the brain functional network during MI and increased the subjects’ fatigue level. This study’s head-mounted VRVCs increased the subjects’ cognitive load and fatigue level. By comparing the performance of an MI-BCI under VRVCs and TPVCs using multiple metrics, this study provides insights for the future integration of MI-BCIs with VR.

1. Introduction

Brain-Computer Interfaces (BCIs) are a novel technology that utilizes signals derived from the central neural activity of the user [1,2]. The primary goal of BCIs is to enable communication and control between the brain and external devices, ultimately enhancing the quality of life and productivity of individuals, particularly those with severe motor dysfunction. Motor imagery (MI) BCIs constitute a critical category of BCIs. Research findings demonstrate that MI can enhance event-related synchronization (ERS) [3] in the alpha and beta frequency bands of the sensorimotor cortex, thereby increasing synchronized neural activity in the brain regions responsible for controlling movement during the response to external events or tasks [4]. MI-BCIs require users to mentally simulate the movement of a specific body part, such as the hand or foot [5,6], without actually executing the physical movement, making MI a cognitive activity that is rarely performed in everyday life and that can be challenging to acquire and regulate. Nevertheless, MI-BCIs have demonstrated notable efficacy in enhancing motor performance and ameliorating motor impairments among individuals with movement disorders, thereby facilitating their therapeutic interventions and rehabilitation [7].
Visual cues are commonly used to guide subjects in performing MI when developing MI-BCI classification models. Two primary forms of visual cues are employed: Traditional Plane Visual Cues (TPVCs) and Virtual Reality Visual Cues (VRVCs). TPVCs are visual cues presented on a flat screen [8,9], such as a computer or tablet display. In contrast, VRVCs are cues presented within a virtual reality environment [10]. Research has shown that VRVCs can provide an immersive environment that makes viewers feel as though they are truly present in the scene. This immersive experience can increase their engagement and attention during the observation process, which can facilitate more efficient user MI training, reduce training cycles [11], and enhance the ease of use of MI-BCIs. However, the prolonged use of VRVCs may result in adverse effects such as eye fatigue and dizziness among users [12]. TPVCs are more commonly utilized, less expensive, and simpler [8,9]. Some users have grown accustomed to TPVCs and are more receptive to this approach. However, the visual information provided by TPVCs is relatively limited, which may result in reduced user immersion.
As mentioned earlier, the utilization of VRVCs and TPVCs in MI tasks has distinct effects on both subjects and MI-BCI performance. Previous research has investigated this impact by examining MI-BCI classification accuracy [13] and self-reported measures, such as satisfaction and cognitive load [14,15]; however, objective indicators have been lacking. This study aims to comprehensively evaluate the divergent effects of VRVC and TPVC MI tasks on subjects and MI-BCI performance by assessing imagery stability, brain activation and connectivity, and classification accuracy. EEG signals are utilized for these assessments, and functional near-infrared spectroscopy (fNIRS) signals are also acquired to gauge the subjects’ fatigue levels.
In this study, we acquired EEG and fNIRS signals during four types of MI of left-hand or right-hand grip strength under VRVCs and TPVCs. To evaluate the different effects of VRVC and TPVC MI tasks on subjects and MI-BCI performance, we used the Filter Bank Common Spatial Pattern (FBCSP) [16] and the Higuchi Fractal Dimension (HFD) [17] to extract EEG features, and K-Nearest Neighbor (KNN) was used to evaluate the classification accuracy of motor imagery EEG signals under VRVCs and TPVCs. We also used a random template-Dynamic Time Warping (rt-DTW)-based method to assess the stability of the subjects’ MI; Power Spectral Density (PSD) brain topography during MI and the functional brain network [18] in the resting state before and after the experiment to assess brain activation and connectivity; and a cognitive load scale together with mean fNIRS signal features [19] to assess the subjects’ level of fatigue. This study aims to provide valuable insights for the future integration of MI-BCIs with VR.

2. Materials and Methods

2.1. Subjects

This study involved 9 male participants between the ages of 23 and 26 years who were all right-handed and free from any physical or mental illnesses. Before the experiment, all participants provided informed consent, and the study received approval from the Medical Ethics Committee of Kunming University of Science and Technology.

2.2. Research Approach

Figure 1 shows the study design for comparing the impact of VRVCs and TPVCs on MI tasks and performance. Figure 1a shows the VRVC and TPVC MI tasks for the left-hand and right-hand grip strengths. Figure 1b shows the analysis of the motor area EEG used to assess the classification accuracy, brain activation and connectivity, and imagery stability of the subjects. Figure 1c shows the analysis of the frontal area fNIRS and cognitive load scale scores used to evaluate the level of fatigue in the subjects.

2.3. Experimental Design and Procedure

(1)
One trial timing sequence
Considering the inherent delay in the hemodynamic response, a block design [20] was used in this study to record the EEG and fNIRS signals. Figure 2a shows a schematic of the timing for a single trial. During the initial 0–3 s, the subject attained a state of wakeful relaxation to establish the baseline signal. Subsequently, during the 3–63 s interval, the subject was instructed to remain at rest so that the fNIRS baseline signal could be acquired. At 63–66 s, the subjects were presented with either TPVCs or VRVCs indicating which of the four MI tasks (left-hand or right-hand grip strength) to perform. During the 66–78 s interval, the subjects performed the prompted MI task 4 times; a flashing cross was presented before each repetition to prompt the subject to start, and each repetition lasted 3 s (see the epoch-slicing sketch at the end of this subsection). Finally, during the 78–88 s interval, the subjects rested and prepared for the commencement of the subsequent trial.
(2)
MI tasks, training, and experimental design
This experiment involved four types of MI tasks for both left and right hands, namely, imagining left-hand lower grip strength (LLS, corresponding to the actual grip strength of 9–14 kg), left-hand greater grip strength (LGS, corresponding to the actual grip strength of 23–27 kg), right-hand lower grip strength (RLS), and right-hand greater grip strength (RGS).
Before the experiment, the subjects performed the four actual grip tasks and then underwent the corresponding MI training until they could imagine the corresponding grip strength clearly and controllably [21]. During the experiment, the subjects sat in a comfortable chair, relaxed their bodies, avoided head and body movements as far as possible, and wore an EEG-fNIRS cap. Within the TPVC and VRVC contexts, the subjects were then prompted to mentally simulate grasping movements with their left or right hand at the prompted level of grip force. Each subject took part in two experimental phases, one under VRVCs and the other under TPVCs, using a VR head-mounted display (HTC Vive Pro) and a computer screen as visual cues, respectively, with at least one day between the two phases. Each experimental phase included 20 trials each of the LLS, LGS, RLS, and RGS MI tasks. After each task set, the participants completed the cognitive load scale [22] to assess their fatigue level.
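For illustration, the trial timing above can be translated directly into epoch extraction. The sketch below is a minimal example under stated assumptions: the variable names are ours, and the 128 Hz rate matches the downsampled EEG described in Section 2.5.1. It slices the four 3 s imagery repetitions out of one 88 s trial.

```python
import numpy as np

FS = 128          # sampling rate after downsampling (Hz); adjust for the raw 1000 Hz data
CUE_END = 66      # the visual cue ends 66 s after trial onset
REP_LEN = 3       # assumed length of each imagery repetition (s)
N_REPS = 4        # four repetitions per trial (the 66-78 s window)

def slice_mi_repetitions(trial_eeg):
    """Split one trial (channels x samples, t = 0 at trial onset) into the
    four 3 s motor-imagery repetitions of the 66-78 s window."""
    reps = []
    for r in range(N_REPS):
        start = int((CUE_END + r * REP_LEN) * FS)
        stop = int((CUE_END + (r + 1) * REP_LEN) * FS)
        reps.append(trial_eeg[:, start:stop])
    return np.stack(reps)          # shape: (4, n_channels, REP_LEN * FS)

# example with synthetic data: 9 channels, one 88 s trial at 128 Hz
trial = np.random.randn(9, 88 * FS)
print(slice_mi_repetitions(trial).shape)   # -> (4, 9, 384)
```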

2.4. Data Acquisition

Figure 2b shows a schematic diagram of the EEG electrodes and fNIRS channels. A 32-channel EEG recording system (NeuSen W, Neuracle, China) was used, with the reference electrode placed between Cz and CPz and the ground electrode placed between Fz and Fpz. The sampling rate was 1000 Hz, and the electrode impedance was lowered to below 10 kΩ prior to data collection. fNIRS signals were acquired (Nirsmart, Danyang Huichuang, China) using 8 emitter and receiver probes (10 fNIRS channels in total) placed around the frontal area electrodes (Fp1, Fpz, and Fp2) with a sampling rate of 20 Hz. The data collected from both devices were transmitted simultaneously to MATLAB using the TCP/IP protocol for storage.

2.5. Data Processing Method

2.5.1. Preprocessing

(1)
EEG preprocessing
To prepare the EEG data for analysis, several preprocessing steps were performed. First, the raw EEG data were downsampled from 1000 Hz to 128 Hz. Next, a 50 Hz notch filter was applied, followed by band-pass filtering in the 0.1–40 Hz range. Independent component analysis (ICA) was then used to remove artifacts, and a total of 9 channels (FC3, FC4, FCz, C3, C4, Cz, CP3, CP4, Pz) were selected for training the model (see the preprocessing sketch at the end of this subsection).
(2)
fNIRS preprocessing
The raw fNIRS data were preprocessed to remove physiological noise and baseline drift. Specifically, the data were band-pass-filtered in the 0.01–0.2 Hz range. Next, the optical density signal was converted into hemoglobin concentration using the Beer-Lambert law [23]. Finally, the mean values of the time series of the fNIRS channels around the forehead electrodes Fp1, Fpz, and Fp2 were calculated (Figure 2c).
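The following is a condensed sketch of both preprocessing chains, assuming MNE-Python and SciPy as the tooling (the paper does not name its software). The ICA component marked for exclusion is a placeholder, since artifact components are chosen by inspection, and the fNIRS function starts from HbO concentrations, i.e., after the Beer-Lambert conversion.

```python
import numpy as np
import mne
from scipy.signal import butter, filtfilt

MOTOR_CHS = ["FC3", "FC4", "FCz", "C3", "C4", "Cz", "CP3", "CP4", "Pz"]

def preprocess_eeg(raw):
    """EEG chain: 1000 Hz -> 128 Hz, 50 Hz notch, 0.1-40 Hz band-pass,
    ICA-based artifact removal, selection of the 9 motor-area channels."""
    raw = raw.copy().resample(128)
    raw.notch_filter(freqs=50)
    raw.filter(l_freq=0.1, h_freq=40)
    ica = mne.preprocessing.ICA(n_components=20, random_state=0).fit(raw)
    ica.exclude = [0]            # placeholder: artifact components chosen by inspection
    raw = ica.apply(raw)
    return raw.pick(MOTOR_CHS)

def preprocess_fnirs(hbo, fs=20.0):
    """fNIRS chain: 0.01-0.2 Hz band-pass on HbO concentrations
    (channels x samples), then the per-channel time-series mean."""
    b, a = butter(3, [0.01 / (fs / 2), 0.2 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, hbo, axis=1)
    return filtered, filtered.mean(axis=1)
```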

2.5.2. Classification of Left-Hand and Right-Hand Grip Imagery Based on FBCSP and HFD

(1)
FBCSP
FBCSP is based on the Common Spatial Pattern (CSP) algorithm and uses a sliding frequency window to divide the data into multiple frequency bands. In this study, a window width of 4 Hz and a step size of 4 Hz were used to slide between 8 Hz and 32 Hz. CSP spatial filtering was then applied to each sub-band, and the first three CSP feature components were selected as the features of that sub-band. Finally, mutual information was used to select sub-band features for classification (see the pipeline sketch at the end of this subsection).
(2)
HFD
HFD is a method for calculating the fractal dimension directly in the time domain without the need for phase space reconstruction [24]. The procedure is as follows: for a channel of EEG data $X(t)$, a new time series is constructed using Equation (1):

$$X_k^m = \left\{ X(m),\, X(m+k),\, \ldots,\, X\!\left(m + \left\lfloor \tfrac{N-m}{k} \right\rfloor k\right) \right\}, \quad m = 1, 2, \ldots, k \tag{1}$$

where $k = 1, 2, \ldots, k_{\max}$, $k_{\max}$ is usually set between 6 and 16, and $N$ is the length of $X(t)$. The curve length $L_m(k)$ is calculated using Equation (2) for the sequences $X_k^m$ with different $k$ values:

$$L_m(k) = \frac{1}{k} \left( \sum_{i=1}^{\left\lfloor \frac{N-m}{k} \right\rfloor} \left| A(m,i,k) \right| \right) \frac{N-1}{\left\lfloor \frac{N-m}{k} \right\rfloor k} \tag{2}$$

where $A(m,i,k) = X(m+ik) - X(m+(i-1)k)$. The mean value $L(k)$ of the $L_m(k)$ is calculated using Equation (3):

$$L(k) = \frac{1}{k} \sum_{m=1}^{k} L_m(k) \tag{3}$$

Then, a straight line is obtained by fitting $\ln(L(k))$ against $\ln(1/k)$ using the least squares method, and the slope of this line is the HFD feature, as in Equation (4):

$$HFD = \frac{\ln(L(k))}{\ln(1/k)} \tag{4}$$
Finally, Principal Component Analysis (PCA) is used to reduce the dimensionality of the combined HFD and FBCSP features, and K-Nearest Neighbor (KNN) is used to classify the four types of MI tasks (LLS, LGS, RLS, and RGS). The overall process is shown in Figure 3.
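A compact sketch of this feature-extraction and classification pipeline is given below. It is illustrative rather than a reproduction of the authors' implementation: the library choices (MNE-Python, scikit-learn), the value of k_max, the number of features kept by the mutual-information selector, the number of PCA components, and the KNN neighbor count are assumptions, and, for brevity, the CSP filters and feature selector are fit on all trials rather than nested inside the cross-validation.

```python
import numpy as np
from mne.decoding import CSP
from mne.filter import filter_data
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

FS = 128
BANDS = [(f, f + 4) for f in range(8, 32, 4)]        # 8-12, 12-16, ..., 28-32 Hz

def higuchi_fd(x, k_max=10):
    """Higuchi fractal dimension of a 1-D signal (Equations (1)-(4))."""
    N = len(x)
    ln_L, ln_inv_k = [], []
    for k in range(1, k_max + 1):
        Lk = []
        for m in range(1, k + 1):
            idx = np.arange(m - 1, N, k)              # X(m), X(m+k), ...
            n_int = (N - m) // k
            if n_int < 1:
                continue
            Lm = np.sum(np.abs(np.diff(x[idx]))) * (N - 1) / (n_int * k) / k
            Lk.append(Lm)
        ln_L.append(np.log(np.mean(Lk)))
        ln_inv_k.append(np.log(1.0 / k))
    return np.polyfit(ln_inv_k, ln_L, 1)[0]           # slope of the fit = HFD

def fbcsp_hfd_features(X, y):
    """X: (n_trials, n_channels, n_samples). Returns concatenated FBCSP + HFD features."""
    band_feats = []
    for lo, hi in BANDS:
        Xb = filter_data(X.astype(float), FS, lo, hi, verbose=False)
        csp = CSP(n_components=3, log=True)           # first 3 CSP features per sub-band
        band_feats.append(csp.fit_transform(Xb, y))
    fbcsp = SelectKBest(mutual_info_classif, k=8).fit_transform(
        np.hstack(band_feats), y)                     # mutual-information selection
    hfd = np.array([[higuchi_fd(tr[ch]) for ch in range(X.shape[1])] for tr in X])
    return np.hstack([fbcsp, hfd])

# example: 80 trials (20 per class), 9 channels, 3 s at 128 Hz
X = np.random.randn(80, 9, 3 * FS)
y = np.repeat([0, 1, 2, 3], 20)                       # LLS, LGS, RLS, RGS
feats = PCA(n_components=10).fit_transform(fbcsp_hfd_features(X, y))
acc = cross_val_score(KNeighborsClassifier(n_neighbors=5), feats, y, cv=5).mean()
print(f"4-class CV accuracy on noise: {acc:.2f}")
```

In a rigorous evaluation, the band-pass filtering, CSP fitting, and feature selection would be wrapped in a pipeline and refit within each cross-validation fold to avoid information leakage.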

2.5.3. Stability Analysis of Grip Force Imagination Based on rt-DTW

The rt-DTW algorithm, proposed by Ding P. et al., was used in this study to calculate the speed fluctuations of the subjects performing the MI tasks and to analyze the stability of their performance. The algorithm consists of three main parts: random template sorting, template calibration, and speed-fluctuation calculation.
(1)
Random template sorting
In the first part, the set of MI EEG trials for a given task and subject is denoted by $tmp$, and the $r$-th trial $tmp_r$ is randomly selected from $tmp$ as the initial template, where $1 \le r \le 20$. The DTW distance $d_j$ between $tmp_j$ and $tmp_r$ is then calculated for the $j$-th trial in $tmp$ using Equation (5), and the trial $tmp_k$ with the smallest distance from $tmp_r$ is obtained using Equation (6):

$$d_j = \frac{\mathrm{DTW}(tmp_j, tmp_r)}{\mathrm{length}(tmp_r)}, \quad 1 \le j \le 20, \; j \ne r \tag{5}$$

$$d_k = \min_{j}(d_j), \quad k \ne r \tag{6}$$
where DTW is used to stretch or compress $tmp_j$ and $tmp_r$ along the time axis to align them, as well as to find the shortest distance between the two sequences [25]. The length function calculates the number of sample points in $tmp_r$.
(2)
Template calibration
To calibrate the template, the data $tmp_k^{w(m,n)}$ are extracted from the $k$-th trial using a variable-length time window $w(m,n)$, where $m$ denotes the left-end moment of the window and $n$ denotes the right-end moment, with $0 \le m \le 0.5$ and $1 \le n \le 3$ (in seconds). Initially $m = 0$; within each cycle $m$ remains unchanged, and at the end of each cycle $m$ increases by 0.1, for a total of 6 cycles. Within each cycle, $n$ is increased from 1 to 3 with a step size of 0.1. The data within each window $w(m,n)$ are extracted, and the minimum normalized DTW distance between $tmp_k^{w(m,n)}$ and $tmp_r^{w(m,n)}$ is found using Equation (7). The window $w(m,n)$ with the minimum distance is selected, and the corresponding $tmp_k^{w(m,n)}$ is used as the model:

$$d_k^{w(m,n)} = \min\left( \frac{\mathrm{DTW}\left(tmp_k^{w(m,n)},\, tmp_r^{w(m,n)}\right)}{\mathrm{length}\left(tmp_k^{w(m,n)}\right)} \right) \tag{7}$$
(3)
Speed fluctuation calculation
For each trial in $tmp$, the data within the candidate time windows $w(m,n)$ are extracted, the DTW distance to the model is calculated, and the time window $w(x,y)$ with the smallest distance to the model is found. The parameters $m$, $n$, and the number of cycles are the same as those in the template calibration step. The ratio $s_i$ between the number of sampling points of the $i$-th trial data in $w(x,y)$ and the number of sampling points in the model is calculated using Equation (8). The closer $s_i$ is to 1, the closer $tmp_i^{w(x,y)}$ is to the model, and the closer the execution time of the $i$-th trial is to that of the model; the execution time is inversely proportional to the execution speed. Finally, the standard deviation $\sigma$ of the $s_i$ values is calculated using Equation (9) to obtain the speed fluctuation of the subject's MI performance. The smaller the speed fluctuation, the higher the stability of the subject's MI performance.

$$s_i = \mathrm{length}\left(tmp_i^{w(x,y)}\right) / \mathrm{length}(\mathrm{model}) \tag{8}$$

$$\sigma = \sqrt{\frac{1}{20} \sum_{i=1}^{20} \left(s_i - \bar{s}\right)^2} \tag{9}$$

where $\bar{s}$ is the mean of the $s_i$.
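A simplified sketch of the speed-fluctuation computation is shown below. It implements a textbook dynamic-programming DTW and Equations (8) and (9) directly; the random-template selection and full template-calibration loop are condensed into the assumption that a calibrated model sequence is already available, and the candidate-window grid follows the parameter ranges given above. The pure-Python DTW is O(nm) per comparison and therefore slow; a compiled DTW library could be substituted.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def speed_fluctuation(trials, model, fs=128, windows=None):
    """Speed-fluctuation sigma of Equations (8)-(9).
    trials: list of 1-D single-channel MI sequences (t = 0 at MI onset);
    model: the calibrated template sequence. For each trial, the candidate
    window w(m, n) with the smallest DTW distance to the model is kept, and
    s_i is the ratio of that window's length to the model length."""
    if windows is None:
        # w(m, n): m = 0..0.5 s in 0.1 s steps, n = 1..3 s in 0.1 s steps
        windows = [(m / 10, n / 10) for m in range(0, 6) for n in range(10, 31)]
    s = []
    for tr in trials:
        best = min(windows,
                   key=lambda w: dtw_distance(tr[int(w[0] * fs):int(w[1] * fs)], model))
        s.append((int(best[1] * fs) - int(best[0] * fs)) / len(model))
    s = np.asarray(s)
    return np.sqrt(np.mean((s - s.mean()) ** 2))      # sigma: std of the s_i

# example (coarse window grid to keep the pure-Python DTW fast)
trials = [np.random.randn(3 * 128) for _ in range(20)]
coarse = [(0.0, n / 10) for n in range(10, 31, 5)]
print(speed_fluctuation(trials, model=trials[0][:128], windows=coarse))
```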

2.5.4. Analysis of PSD Brain Topography and Brain Network in the Motor Area

(1)
Analysis of PSD Brain Topography in the Motor Area
The Power Spectral Density (PSD) describes the power distribution of a signal across different frequencies. In this study, the PSD features of each channel during the subjects' execution of the MI tasks were plotted on a topographic map to analyze the power of the dominant frequencies and thereby reflect brain activation in the motor area. A sliding time window with a length of 0.5 s and a step size of 0.5 s was used to extract the data, and the PSD features within the 8–30 Hz frequency band were calculated to assess the subjects' level of brain activation (a computational sketch is given at the end of this subsection).
(2)
Analysis of Brain Network in the Motor Area
To analyze the brain functional network of the motor area, we calculated the Pearson correlation coefficient between the time series of each pair of channels in the motor area, with channel indices ranging from 1 to 9 (covering the 9 electrodes of the motor area). The calculation was performed using Equation (10):

$$A_{XY} = \frac{n \sum_{i=1}^{n} x_i y_i - \sum_{i=1}^{n} x_i \sum_{i=1}^{n} y_i}{\sqrt{n \sum_{i=1}^{n} x_i^2 - \left( \sum_{i=1}^{n} x_i \right)^2} \sqrt{n \sum_{i=1}^{n} y_i^2 - \left( \sum_{i=1}^{n} y_i \right)^2}} \tag{10}$$

The variable $A_{XY}$ represents the degree of correlation between channels $X$ and $Y$, where $x_i$ and $y_i$ are the $i$-th sample points of channels $X$ and $Y$, respectively, and $n$ is the number of sampling points. To avoid self-connection edges in the network, we set the diagonal of the resulting correlation matrix $A_M$ to zero [26] and then took the absolute value of each $A_{XY}$. The form of $A_M$ is shown in Equation (11):

$$A_M = \begin{pmatrix} 0 & A_{12} & \cdots & A_{19} \\ A_{21} & 0 & \cdots & A_{29} \\ \vdots & \vdots & \ddots & \vdots \\ A_{91} & A_{92} & \cdots & 0 \end{pmatrix} \tag{11}$$

Subsequently, we applied a threshold to $A_M$ to obtain the binary correlation matrix $a_M$, where each $a_{XY}$ is either 0 or 1. The form of $a_M$ is shown in Equation (12):

$$a_M = \begin{pmatrix} 0 & a_{12} & \cdots & a_{19} \\ a_{21} & 0 & \cdots & a_{29} \\ \vdots & \vdots & \ddots & \vdots \\ a_{91} & a_{92} & \cdots & 0 \end{pmatrix} \tag{12}$$
Finally, we calculated the average degree and average clustering coefficient to analyze the characteristics of the brain functional network.
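The sketch below illustrates both computations of this subsection: the channel-wise 8–30 Hz PSD values that are mapped onto the topographies, and the thresholded correlation network with its mean degree and mean clustering coefficient. SciPy and NetworkX are assumed as tooling, and the threshold value is illustrative, since the exact value used in the study is not reported here.

```python
import numpy as np
import networkx as nx
from scipy.signal import welch

def band_psd(motor_eeg, fs=128, band=(8, 30)):
    """Mean PSD per channel in the 8-30 Hz band: the values plotted on the
    topographic maps (motor_eeg: 9 channels x n_samples)."""
    f, p = welch(motor_eeg, fs=fs, nperseg=int(0.5 * fs), axis=1)
    sel = (f >= band[0]) & (f <= band[1])
    return p[:, sel].mean(axis=1)

def functional_network_metrics(motor_eeg, threshold=0.5):
    """Pearson correlation matrix (Equation (10)), zeroed diagonal, absolute
    value, binarization (Equations (11)-(12)), then mean degree and mean
    clustering coefficient. The threshold here is an illustrative choice."""
    A = np.abs(np.corrcoef(motor_eeg))        # |A_XY| between all channel pairs
    np.fill_diagonal(A, 0.0)                  # remove self-connection edges
    aM = (A >= threshold).astype(int)         # binary adjacency matrix a_M
    G = nx.from_numpy_array(aM)
    mean_degree = np.mean([deg for _, deg in G.degree()])
    return mean_degree, nx.average_clustering(G)

# example with synthetic resting-state data: 9 motor-area channels, 60 s at 128 Hz
rest = np.random.randn(9, 60 * 128)
print(band_psd(rest).shape)                   # -> (9,)
print(functional_network_metrics(rest))
```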

2.5.5. Fatigue Detection Based on Mean HbO Features in fNIRS

fNIRS measures changes in the absorption of near-infrared light by brain tissue to indirectly reflect brain activity. In this study, we used the mean value of the oxygenated hemoglobin (HbO) feature to reflect the degree of fatigue in the MI tasks under two different visual cue conditions: TPVCs and VRVCs. To account for the delay in the hemodynamic response and individual differences in response time, we calculated the mean feature values for three different time windows (0–10 s, 1–11 s, and 2–12 s). In addition, we used the cognitive load scale to analyze changes in the cognitive load of the subjects.
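A minimal sketch of the HbO mean-feature extraction over the three analysis windows is given below, assuming the HbO epochs are already aligned to MI onset (function and variable names are illustrative):

```python
import numpy as np

WINDOWS = [(0, 10), (1, 11), (2, 12)]        # analysis windows in seconds from MI onset

def hbo_mean_features(hbo_epoch, fs=20):
    """hbo_epoch: (n_channels x n_samples) HbO time series aligned to MI onset.
    Returns the channel-wise mean HbO in the three analysis windows, which is
    compared between the TPVC and VRVC conditions as a fatigue indicator."""
    feats = []
    for t0, t1 in WINDOWS:
        seg = hbo_epoch[:, int(t0 * fs):int(t1 * fs)]
        feats.append(seg.mean(axis=1))
    return np.stack(feats)                   # shape: (3 windows, n_channels)

# example: 10 fNIRS channels, 12 s of HbO at 20 Hz
print(hbo_mean_features(np.random.randn(10, 12 * 20)).shape)   # -> (3, 10)
```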

3. Results

3.1. Classification Accuracy of MI under TPVCs and VRVCs

Table 1 shows the MI classification accuracies of the nine subjects under TPVCs and VRVCs. Under TPVCs, the mean, maximum, and minimum classification accuracies were 50.83%, 71.88%, and 31.25%, respectively. Under VRVCs, the mean, maximum, and minimum classification accuracies were 51.32%, 68.75%, and 37.50%, respectively. There was no significant difference in the subjects’ MI classification accuracies between TPVCs and VRVCs (t-test, p = 0.85).
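For reference, a paired comparison of the per-subject mean accuracies in Table 1 can be run with a few lines of SciPy. Note that the statistic reported above (p = 0.85) may have been computed at a different granularity (e.g., over cross-validation folds), so this is only a sanity-check sketch, not a reproduction of the paper's exact test.

```python
from scipy.stats import ttest_rel

tpvc = [53.44, 58.13, 58.13, 45.63, 39.69, 42.50, 57.50, 50.00, 52.50]   # Table 1, TPVCs
vrvc = [55.63, 49.38, 48.44, 51.56, 52.50, 59.38, 49.69, 42.50, 52.81]   # Table 1, VRVCs

t_stat, p_val = ttest_rel(tpvc, vrvc)   # paired t-test across the nine subjects
print(f"t = {t_stat:.2f}, p = {p_val:.2f}")
```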

3.2. Analysis of MI Speed Fluctuations in Subjects under TPVCs and VRVCs

Figure 4 shows the speed fluctuations of the subjects during MI under TPVCs and VRVCs. The standard deviation of the MI speed under TPVCs was between 0.50 and 0.55, while under VRVCs it was between 0.55 and 0.60. The standard deviation of the MI speed under TPVCs was significantly smaller than that under VRVCs (t-test, p < 0.05). Under the same visual cue condition, there was no significant difference in the speed standard deviation among the four MI tasks.

3.3. Brain Activation and Connectivity Analysis of MI in Subjects under TPVCs and VRVCs

Figure 5 shows the scalp topography of the MI PSD features for the subjects under TPVC and VRVC conditions, while Figure 6 shows the t-test p-values of the PSD features for different time periods of MI execution under TPVC and VRVC conditions. In Figure 5 and Figure 6, it can be observed that there were no significant differences in the MI PSD features between the TPVC and VRVC conditions within the initial 0.5 s after the start of the experiment. However, significant differences (p < 0.05) were observed in some subjects during the 0.5–2.5 s period.
Figure 7a,b show the binary matrices of the EEG correlation in the motor area between the resting state before/after the TPVC experiment, while Figure 7c,d show them before/after the VRVC experiment. The binary matrices of the EEG correlation between resting states before the TPVC and VRVC experiments are similar. However, compared to TPVCs, the binary matrix of the EEG correlation after the VRVC experiment shows a more noticeable change from the resting state. Figure 7e,f show the brain functional network parameters, and the mean degree and mean clustering coefficient of the brain functional network are significantly higher after the VRVC experiment than after the TPVC experiment (t-test, p < 0.05).

3.4. Analysis of MI Fatigue Level in Subjects under TPVCs and VRVCs

Figure 8 shows the concentration changes in HbO during the MI period, measured by the fNIRS channels surrounding the frontal area electrodes Fp1, Fpz, and Fp2 (as shown in Figure 2c), under the TPVC and VRVC conditions. In the TPVC condition, the HbO concentration in the frontal area during the MI period ranged between 0.35 and 0.80, with a mean between 0.50 and 0.60. In the VRVC condition, the HbO concentration in the frontal area during the MI period ranged between 0.25 and 0.70, with a mean between 0.40 and 0.50. The mean HbO concentration in the frontal area during the MI period in the VRVC condition was significantly lower than that in the TPVC condition (t-test, p < 0.05).
Table 2 shows the mean cognitive load scores reported by the nine subjects during the MI task. The average cognitive load score for the subjects under TPVCs during MI was 22.33, significantly lower than the average cognitive load score of 26.11 reported by the subjects under VRVCs during MI (t-test, p < 0.05).

4. Discussion

This study investigated the effects of VRVCs and TPVCs on MI performance and stability in subjects. The results show that, for both types of visual cues, the MI accuracy of all subjects was higher than the chance level (25%), indicating that the selected subjects were able to perform the corresponding MI tasks. There was no significant difference in the classification accuracy of MI between VRVCs and TPVCs (Table 1), possibly because the visual cues only prompted or guided the subjects to perform the MI task but had little impact on classification accuracy [13]. In addition, potential reasons for the relatively low classification accuracy might include the subjects' lack of familiarity with the BCI system, the absence of prior experience with BCI technology and motor imagery, or insufficient pre-experimental training that hindered the subjects' ability to perform the motor imagery task effectively. A questionnaire was therefore employed to characterize the subjects' familiarity with BCIs and the MI paradigm (Table 3).
According to the stability results of the subjects’ MI performance, it was found that, under the same visual cue condition, the speed fluctuations of the four types of MI tasks were similar, which may indicate that the execution stability of different MI tasks is similar under the same visual cue condition. Compared with VRVCs, TPVCs can reduce the speed fluctuations of subjects’ MI performance; that is, the time taken by subjects to perform MI is more similar, and the stability of the subjects performing the MI tasks is higher. This may be because, under TPVCs, subjects can concentrate more on the MI task itself without being distracted by the additional visual information provided by VRVCs. In addition, TPVCs are simpler and easier for subjects to understand and master, which may improve the stability of MI.
The lack of differences in PSD features within the initial 0.5 s after the start of the MI experiment between the TPVC and VRVC conditions may be attributed to the potential influence of the presented crosshair markers on the subjects’ PSD features, as well as the possibility that the subjects did not initiate the corresponding MI task within the 0.5 s timeframe. However, during the four time periods from 0.5 s to 2.5 s, significant differences in PSD features were observed in the majority of subjects, indicating variations in neural activation during MI between the TPVC and VRVC conditions. Nonetheless, these differences might be offset by a small number of subjects who did not exhibit significant differences, particularly when calculating mean values and when generating scalp topography plots. The subjects who did not show significant differences may be less familiar with MI-BCIs or the MI paradigm. In addition, the differences in brain functional networks between the TPVCs and VRVCs may be due to the fact that the subjects received and processed more VRVC information [21,26,27].
This study used fNIRS, which is less susceptible to electrical interference from the head-mounted VR device, to evaluate the level of fatigue during MI. The subjects believed that more mental and physical effort was required for MI under VRVCs [14,15], but they were more satisfied with their performance. They also perceived the weight and blurriness of the head-mounted VR display as factors contributing to fatigue and discomfort.
However, this study has some limitations. Firstly, additional metrics may be necessary to comprehensively assess the effects of different visual cues on MI-BCI performance. Additionally, further research is needed to explore the effects of different factors in different VR environments, such as the complexity and interactivity of virtual scenes, on MI-BCIs. Future studies can further address these issues to improve the effectiveness and feasibility of MI-BCIs.

5. Conclusions

This study indicates that TPVCs and VRVCs have different effects on subjects and MI-BCI performance when constructing MI-BCI classification models. Although there was no statistical difference in classification accuracy, the subjects showed more stable MI under TPVCs, while VRVCs provided more prompt information to the brain, which could have had a negative impact on the subjects’ MI performance and increased their cognitive load and fatigue. These findings contribute to providing insights and directions for combining MI-BCIs with VR.

Author Contributions

Conceptualization, J.Y. and P.D.; methodology, J.Y. and P.D.; software, J.Y.; validation, J.Y.; formal analysis, J.Y.; investigation, J.Y.; resources, J.Y.; data curation, S.Z.; writing—original draft preparation, J.Y.; writing—review and editing, Y.F. and A.G.; visualization, J.Y.; supervision, F.W. and Y.F.; project administration, Y.F.; funding acquisition, Y.F. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the National Natural Science Foundation of China (82172058, 81771926, 61763022, 62006246).

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki and approved by the Medical Ethics Committee of Kunming University of Science and Technology (approval code KMUST-MEC-056, approval date: 11 March 2021).

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Vaughan, T.M.; Heetderks, W.J.; Trejo, L.J.; Rymer, W.Z.; Weinrich, M.; Moore, M.M.; Kübler, A.; Dobkin, B.H.; Birbaumer, N.; Donchin, E.; et al. Brain-computer interface technology: A review of the Second International Meeting. IEEE Trans. Neural Syst. Rehabil. Eng. Publ. IEEE Eng. Med. Biol. Soc. 2003, 11, 94–109. [Google Scholar] [CrossRef] [PubMed]
  2. Abiri, R.; Borhani, S.; Sellers, E.W.; Jiang, Y.; Zhao, X. A comprehensive review of EEG-based brain-computer interface paradigms. J. Neural Eng. 2019, 16, 011001. [Google Scholar] [CrossRef] [PubMed]
  3. Velasquez-Martinez, L.; Caicedo-Acosta, J.; Castellanos-Dominguez, G. Entropy-based estimation of event-related de/synchronization in motor imagery using vector-quantized patterns. Entropy 2020, 22, 703. [Google Scholar] [CrossRef] [PubMed]
  4. Miladinović, A.; Barbaro, A.; Valvason, E.; Ajčević, M.; Accardo, A.; Battaglini, P.P.; Jarmolowska, J. Combined and singular effects of action observation and motor imagery paradigms on resting-state sensorimotor rhythms. In Proceedings of the XV Mediterranean Conference on Medical and Biological Engineering and Computing—MEDICON 2019: Proceedings of MEDICON 2019, Coimbra, Portugal, 26–28 September 2019; Springer: Berlin/Heidelberg, Germany, 2020; pp. 1129–1137. [Google Scholar]
  5. Neuper, C.; Scherer, R.; Reiner, M.; Pfurtscheller, G. Imagery of motor actions: Differential effects of kinesthetic and visual-motor mode of imagery in single-trial EEG. Cogn. Brain Res. 2005, 25, 668–677. [Google Scholar] [CrossRef] [PubMed]
  6. Malouin, F.; Richards, C.L.; Jackson, P.L.; Lafleur, M.F.; Durand, A.; Doyon, J. The Kinesthetic and Visual Imagery Questionnaire (KVIQ) for assessing motor imagery in persons with physical disabilities: A reliability and construct validity study. J. Neurol. Phys. Ther. 2007, 31, 20–29. [Google Scholar] [CrossRef] [PubMed]
  7. Page, S.J.; Levine, P.; Sisto, S.A.; Johnston, M.V. Mental Practice Combined with Physical Practice for Upper-Limb Motor Deficit in Subacute Stroke. Phys. Ther. 2001, 81, 1455–1462. [Google Scholar] [CrossRef] [PubMed]
  8. Huang, D.; Qian, K.; Oxenham, S.; Fei, D.Y.; Bai, O. Event-related desynchronization/synchronization-based brain-computer interface towards volitional cursor control in a 2D center-out paradigm. In Proceedings of the 2011 IEEE Symposium on Computational Intelligence, Cognitive Algorithms, Mind, and Brain (CCMB), Paris, France, 11–15 April 2011; pp. 1–8. [Google Scholar]
  9. Savić, A.M.; Malešević, N.B.; Popović, M.B. Motor imagery driven BCI with cue-based selection of FES induced grasps. In Proceedings of the Converging Clinical and Engineering Research on Neurorehabilitation, Toledo, Spain, 14–16 November 2012; Springer: Berlin/Heidelberg, Germany, 2013; pp. 513–516. [Google Scholar]
  10. Cho, H.S.; Park, K.S.; Kim, Y.; Kim, C.S.; Hahn, M. Effects of virtual reality display types on the brain computer interface system. In Universal Access in Human-Computer Interaction: Ambient Interaction, Pt 2, Proceedings, Proceedings of the 4th International Conference on Universal Access in Human-Computer Interaction Held at the HCI International 2007, Beijing, China, 22–27 July 2007; Stephanidis, C., Ed.; Lecture Notes in Computer Science; ICS FORTH, Human Computer Interaction Lab.: Crete, Greece, 2007; Volume 4555, pp. 633–639. [Google Scholar]
  11. Konglw, X.J.; Chen, L. Review of brain-computer interface technology based on virtual reality environment. J. Electron. Meas. Instrum. 2015, 29, 317. [Google Scholar]
  12. Lambooij, M.; Ijsselsteijn, W.; Fortuin, M.; Heynderickx, I. Visual Discomfort and Visual Fatigue of Stereoscopic Displays: A Review. J. Imaging Sci. Technol. 2009, 53, art00001. [Google Scholar] [CrossRef]
  13. Leeb, R.; Keinrath, C.; Friedman, D.; Guger, C.; Scherer, R.; Neuper, C.; Garau, M.; Antley, A.; Steed, A.; Slater, M.; et al. Walking by thinking: The brainwaves are crucial, not the muscles! Presence Teleoperators Virtual Environ. 2006, 15, 500–514. [Google Scholar] [CrossRef]
  14. Leeb, R.; Friedman, D.; Müller-Putz, G.R.; Scherer, R.; Slater, M.; Pfurtscheller, G. Self-paced (asynchronous) BCI control of a wheelchair in virtual environments: A case study with a tetraplegic. Comput. Intell. Neurosci. 2007, 2007, 079642. [Google Scholar] [CrossRef] [PubMed]
  15. Millan, J.D.R.; Rupp, R.; Mueller-Putz, G.R.; Murray-Smith, R.; Giugliemma, C.; Tangermann, M.; Vidaurre, C.; Cincotti, F.; Kubler, A.; Leeb, R.; et al. Combining brain-computer interfaces and assistive technologies: State-of-the-art and challenges. Front. Neurosci. 2010, 4, 161. [Google Scholar] [CrossRef] [PubMed]
  16. Ang, K.K.; Chin, Z.Y.; Zhang, H.; Guan, C. Filter Bank Common Spatial Pattern (FBCSP) in Brain-Computer Interface. In Proceedings of the 2008 IEEE International Joint Conference on Neural Networks (IJCNN), Hong Kong, China, 1–8 June 2008; Volume 1–8, pp. 2390–2397. [Google Scholar]
  17. Liu, J.; Yang, Q.; Yao, B.; Brown, R.; Yue, G. Linear correlation between fractal dimension of EEG signal and handgrip force. Biol. Cybern. 2005, 93, 131–140. [Google Scholar] [CrossRef] [PubMed]
  18. Ai, Q.; Chen, A.; Chen, K.; Liu, Q.; Zhou, T.; Xin, S.; Ji, Z. Feature extraction of four-class motor imagery EEG signals based on functional brain network. J. Neural Eng. 2019, 16, 026032. [Google Scholar] [CrossRef] [PubMed]
  19. Sargent, A.; Heiman-Patterson, T.; Feldman, S.; Shewokis, P.A.; Ayaz, H. Mental fatigue assessment in prolonged BCI use through EEG and fNIRS. In Neuroergonomics; Elsevier: Amsterdam, The Netherlands, 2018; pp. 315–316. [Google Scholar]
  20. Jeong, H.; Song, M.; Oh, S.; Kim, J.; Kim, J. Toward Comparison of Cortical Activation with Different Motor Learning Methods Using Event-Related Design: EEG-fNIRS Study. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; pp. 6339–6342. [Google Scholar]
  21. Tian, G.; Chen, J.; Ding, P.; Gong, A.; Wang, F.; Luo, J.; Dong, Y.; Zhao, L.; Dang, C.; Fu, Y. Execution, assessment and improvement methods of motor imagery for brain-computer interface. Sheng Xue Gong Cheng Xue Zhi 2021, 38, 434–446. [Google Scholar]
  22. Lyu, X.; Ding, P.; Li, S.; Dong, Y.; Su, L.; Zhao, L.; Gong, A.; Fu, Y. Human factors engineering of BCI: An evaluation for satisfaction of BCI based on motor imagery. Cogn. Neurodyn. 2023, 17, 105–118. [Google Scholar] [CrossRef] [PubMed]
  23. Shelishiyah, R.; Dharan, M.B.; Kumar, T.K.; Musaraf, R.; Beeta, T.D. Signal Processing for Hybrid BCI Signals. J. Phys. Conf. Ser. 2022, 2318, 012007. [Google Scholar] [CrossRef]
  24. Jianqiang, Q.; Xiangyu, K.; Shaolin, H.; Hongguang, M. Performance comparison of methods for estimating fractal dimension of time series. Comput. Eng. Appl. 2016, 52, 33–38. [Google Scholar]
  25. Müller, M. Information Retrieval for Music and Motion; Springer: Berlin/Heidelberg, Germany, 2007; Volume 2. [Google Scholar]
  26. Yu, H.; Ba, S.; Guo, Y.; Guo, L.; Xu, G. Effects of Motor Imagery Tasks on Brain Functional Networks Based on EEG Mu/Beta Rhythm. Brain Sci. 2022, 12, 194. [Google Scholar] [CrossRef] [PubMed]
  27. Zhang, T.; Wang, L.; Guo, M.; Xu, G. Effects of virtual reality visual experience on brain functional network. Sheng Xue Gong Cheng Xue Zhi 2020, 37, 251–261. [Google Scholar]
Figure 1. Study protocol for the effects of VRVCs and TPVCs on MI subjects and MI performance. (a) MI with TPVCs and VRVCs. (b) EEG-based analysis of classification accuracy, brain activation and connectivity, and imagery stability. (c) The analysis of fatigue level based on fNIRS and cognitive load scale.
Figure 2. Schematic diagram of timing and electrode distribution. (a) Schematic diagram of the timing for a single trial. (b) Schematic diagram of EEG electrode and fNIRS channel. (c) Schematic diagram of the fNIRS channel. (d) VR scene diagram.
Figure 3. FBCSP-HFD structure diagram.
Figure 4. MI speed fluctuations in subjects under TPVCs and VRVCs.
Figure 5. Characteristic brain topography of MI PSD in subjects under TPVCs and VRVCs. (a) Topography of LLS PSD feature of brain under TPVCs. (b) Topography of LGS PSD feature of brain under TPVCs. (c) Topography of RLS PSD feature of brain under TPVCs. (d) Topography of RGS PSD feature of brain under TPVCs. (e) Topography of LLS PSD feature of brain under VRVCs. (f) Topography of LGS PSD feature of brain under VRVCs. (g) Topography of RLS PSD feature of brain under VRVCs. (h) RGS PSD feature brain topography under VRVCs. The tip of the nose is on top of the map.
Figure 6. Characteristic brain topography of MI PSD in subjects under TPVCs and VRVCs. The t-test p-value results of MI PSD features under TPVC and VRVC conditions for 9 subjects across different time periods. (a) t-test p-value plot of LLS PSD features under TPVC and VRVC conditions. (b) t-test p-value plot of LGS PSD features under TPVC and VRVC conditions. (c) t-test p-value plot of RLS PSD features under TPVC and VRVC conditions. (d) t-test p-value plot of RGS PSD features under TPVC and VRVC conditions.
Figure 7. Binary matrices of EEG correlation in the motor area between the resting state before/after TPVC and VRVC experiments and related parameters in subjects. (a) Binary matrix of correlation before TPVC experiment in subjects. (b) Binary matrix of correlation after TPVC experiment in subjects. (c) Binary matrix of correlation before VRVC experiment in subjects. (d) Binary matrix of correlation after VRVC experiment in subjects. (e) Mean degree of functional brain network before/after TPVC and VRVC experiments in subjects. (f) Mean clustering coefficients of functional brain network before/after TPVC and VRVC experiments in subjects.
Figure 8. Changes in HbO concentrations of fNIRS channels around Fp1, Fpz, and Fp2 in the frontal area during MI in subjects under VRVCs and TPVCs. (a) Mean HbO changes in the 4 fNIRS channels around Fp1 under TPVCs. (b) Mean HbO changes in the 4 fNIRS channels around Fpz under TPVCs. (c) Mean HbO changes in the 4 fNIRS channels around Fp2 under TPVCs. (d) Mean HbO changes in the 4 fNIRS channels around Fp1 under VRVCs. (e) HbO change in the average of the 4 fNIRS channels around Fpz under VRVCs. (f) HbO change in the average of the 4 fNIRS channels around Fp2 under VRVCs. The yellow area in the figure indicates the HbO area enclosed by the maximum and minimum values of HbO in 20 trials, and the solid line indicates the mean HbO.
Table 1. Classification accuracy of four types of MI (LLS, LGS, RLS and RGS) under TPVCs and VRVCs.
Subject | TPVCs (Mean ± SD (%)) | VRVCs (Mean ± SD (%))
S1      | 53.44 ± 5.79 | 55.63 ± 6.39
S2      | 58.13 ± 5.93 | 49.38 ± 7.34
S3      | 58.13 ± 7.10 | 48.44 ± 4.72
S4      | 45.63 ± 7.82 | 51.56 ± 9.69
S5      | 39.69 ± 5.32 | 52.50 ± 3.84
S6      | 42.50 ± 6.46 | 59.38 ± 8.07
S7      | 57.50 ± 6.11 | 49.69 ± 4.02
S8      | 50.00 ± 6.61 | 42.50 ± 6.11
S9      | 52.50 ± 4.37 | 52.81 ± 5.20
Mean    | 50.83 ± 6.71 | 51.32 ± 6.20
Table 2. Cognitive load scale results.
Dimension        | MI under TPVCs (Mean) | MI under VRVCs (Mean)
Mental demands   | 4.56  | 4.78
Physical demands | 2.33  | 2.44
Temporal demands | 4.22  | 4.11
Performance level| 4.11  | 3.89
Effort level     | 4.11  | 5.89
Discomfort level | 3.00  | 5.00
Sum              | 22.33 | 26.11
Table 3. BCI Ability Assessment Scale for Subjects.
Description                      | S1 | S2 | S3 | S4 | S5 | S6 | S7 | S8 | S9
Familiarity with BCI             | 4  | 5  | 2  | 6  | 2  | 5  | 8  | 3  | 2
Experience with BCI              | 4  | 5  | 2  | 5  | 6  | 3  | 7  | 5  | 6
Experience with MI               | 4  | 4  | 1  | 3  | 2  | 3  | 5  | 5  | 2
Understanding of the MI Paradigm | 5  | 7  | 6  | 8  | 6  | 5  | 6  | 9  | 7
Higher scores indicate greater familiarity.