Article

Exploring the Relationship between Behavioral and Neurological Impairments Due to Mild Cognitive Impairment: Correlation Study between Virtual Kiosk Test and EEG-SSVEP

1 Department of Applied Artificial Intelligence, Seoul National University of Science and Technology, Seoul 01811, Republic of Korea
2 Department of Neurology, College of Medicine, Hanyang University, Seoul 04763, Republic of Korea
3 Graduate School of Technology and Innovation Management, Hanyang University, Seoul 04763, Republic of Korea
4 Department of Computer Science, Electrical Engineering and Mechatronics, ZHAW Zurich University of Applied Sciences, 8401 Winterthur, Switzerland
* Author to whom correspondence should be addressed.
Sensors 2024, 24(11), 3543; https://doi.org/10.3390/s24113543
Submission received: 17 April 2024 / Revised: 28 May 2024 / Accepted: 29 May 2024 / Published: 30 May 2024
(This article belongs to the Section Biomedical Sensors)

Abstract: Amnestic mild cognitive impairment (aMCI) is a transitional stage between normal aging and Alzheimer’s disease (AD), making early screening imperative for potential intervention and prevention of progression to AD. Therefore, there is a demand for research to identify effective and easy-to-use tools for aMCI screening. While behavioral tests in virtual reality environments have successfully captured behavioral features related to instrumental activities of daily living for aMCI screening, further investigations are necessary to establish connections between cognitive decline and neurological changes. Utilizing electroencephalography with steady-state visual evoked potentials, this study delved into the correlation between behavioral features recorded during virtual reality tests and neurological features obtained by measuring neural activity in the dorsal stream. As a result, this multimodal approach achieved an impressive screening accuracy of 98.38%.

1. Introduction

Amnestic mild cognitive impairment (aMCI), positioned between normal aging and Alzheimer’s disease (AD), is characterized by a gradual decline in multiple cognitive domains, such as memory, attention, and executive function [1,2,3,4,5]. Individuals suffering from aMCI face an increased—about 80% [6]—risk of progressing to AD, an irreversible neurodegenerative disorder [3,4,5]. With the growth of pharmacology including Lecanemab [7] and Levetiracetam [8], identifying aMCI at an early stage not only holds promise for decelerating the progression to AD but also presents the potential for a return to normal aging. Hence, precise screening of the early stages of aMCI becomes increasingly important.
In response to the urgent need for early detection of aMCI, a large variety of diagnostic tools such as virtual reality (VR), magnetic resonance imaging (MRI), and electroencephalography (EEG) have been used in recent years. For example, VR shows promise in providing features for aMCI screening through the assessment of instrumental activities of daily living (IADLs), which evaluates cognitive function during complex activities in everyday life [9,10,11,12,13,14,15]. Employing head-mounted displays (HMDs) and hand controllers allows for the easy tracking and recording of behavioral features such as eye and hand movements. Yet, there remain concerns about the feasibility in clinical settings [9,12]. Although MRI allows for the quantitative assessment of brain structure, it could be considered invasive and less suitable for early aMCI screening due to the loud noise [16], potential thermal damage from high-frequency energy [17,18], and the risk of side effects from MRI contrast agents [19]. EEG, on the other hand, can monitor real-time changes in brain activity with temporal accuracy to observe responses to stimuli, albeit with sensitivity to noise [20,21,22,23,24,25]. While various diagnostic tools are employed in clinical settings, each tool presents distinct advantages and limitations.
In recent years, many studies have investigated the integration of diverse diagnostic tools into multimodal models to overcome the above limitations [15,20,26,27]. In this context, combining VR data with the EEG steady-state visual evoked potential (SSVEP) emerges as a promising strategy to augment the applicability of VR within clinical settings. Behavioral impairments in aMCI are considered to stem from a compromised dorsal stream of the visual pathway, which governs behavioral responses to visual stimuli [28,29,30,31,32]. EEG-SSVEP is a powerful tool to quantify the impairment of the dorsal stream, as it allows us to capture harmonic responses in the brain’s reaction to periodic visual stimuli applied to a patient’s retina. Combining VR and EEG-SSVEP data therefore offers both behavioral and neurological insight, facilitating a holistic assessment of cognitive impairments caused by aMCI. To the best of our knowledge, the research conducted by Xue et al. [26] exhibited a state-of-the-art result in incorporating both VR and EEG for MCI detection. In their research, they combined data from a VR HMD with EEG time series data that were recorded during the same experiment. Their investigation, predicting MCI based on features extracted from both VR and EEG, yielded an impressive accuracy of 91.3%. Yet, despite achieving high classification performance, this approach has considerable limitations. First, the absence of consideration for the compromised-visual-pathway perspective resulted in a lack of insight into the correlation between VR and EEG. The absence of EEG-SSVEP, which allows variations in EEG responses to visual stimuli to be captured, represents a significant gap. Moreover, due to the high sensitivity of EEG sensors to noise [33], the simultaneous use of a VR HMD and EEG may entail the loss of meaningful EEG information. This, in turn, might reduce the classification accuracy.
Given these considerations, our study was designed to analyze both VR and EEG-SSVEP comprehensively while measuring VR features and EEG-SSVEP features independently.
The main contributions of this paper are threefold. First, we compared and analyzed the behavioral features recorded during a VR test (“virtual kiosk”) and the neurological features from EEG measurements. Here, we compared a healthy control group with aMCI patients. Second, we analyzed the cognitive relationship between behavioral and neurological features. Finally, we demonstrated the potential of multimodal learning by combining both behavioral and neurological features into a single model to assess aMCI.

2. Materials and Methods

2.1. Participants

We recruited a total of 52 participants, consisting of 24 healthy controls and 28 patients with aMCI, at Hanyang University Hospital from January 2022 to April 2024. The diagnosis of aMCI was confirmed by two experienced neurologists, with 18 and 22 years of expertise, following the diagnostic criteria established by Albert et al. [34]. Inclusion criteria for this study were individuals aged 50 and above who possessed the capability to perceive both visual and auditory stimuli. To exclude cases of aMCI resulting from other conditions, patients with concurrent depressive disorders, vascular dementia, or a history of brain surgery were not considered in our study. Based on these criteria, 4 out of the 28 patients with aMCI were excluded from the final analysis. Consequently, the final analysis included 24 healthy controls and 24 patients with aMCI. All participants provided written informed consent to partake in this research and underwent the Korean Mini-Mental State Examination (K-MMSE) to assess cognitive ability. As shown in Table 1, there were no significant differences in demographics (i.e., gender, age, and years of education) between healthy controls and aMCI patients except K-MMSE (p < 0.05). This study received approval from the Institutional Review Board in accordance with the Declaration of Helsinki (HYUH-2021-08-020-004).

2.2. Behavioral Features

Behavioral data were collected from a virtual kiosk test—a VR test we developed in a prior study for the assessment of IADLs [9]. The experiment was conducted on a high-performance laptop equipped with an Intel i7-12700H processor (Intel, Santa Clara, CA, USA), 16 GB RAM (Samsung, Suwon, Gyeonggi-do, Republic of Korea), and an NVIDIA GeForce RTX 3080 graphics card (NVIDIA, Santa Clara, CA, USA). To ensure a fully immersive VR experience, participants wore an HTC VIVE Pro Eye (HTC Vive, Taoyuan City, Taiwan) VR headset and held a hand controller in their dominant hand. For safety reasons, all participants remained seated on a chair during the experiments. Eye trackers embedded in the VR headset and two base stations allowed us to track participants’ eye and hand movements simultaneously.
As shown in Figure 1, the virtual kiosk test comprised the following six steps: (1) selecting a place to eat, (2) choosing a burger item, (3) selecting a side item, (4) choosing a drink item, (5) selecting a payment method, and (6) entering a four-digit password. Participants were asked to memorize the following instructions: “The place to eat is a restaurant. Please use the kiosk to order a shrimp burger, cheese sticks, and a Coca-Cola. Use a credit card as the payment method, and the password for payment is 6289”. Prior to the test, participants underwent two practice sessions to become familiar with the VR environment. Participants could halt the test in case of dizziness or cybersickness, but all participants completed the test without any interruptions.
During the test, we recorded hand movements, eye movements, and performance data. From these measured data, a total of six behavioral features were derived and utilized in this study. From the eye movement data, two features were extracted: scanpath length, representing the total distance covered by a participant’s gaze during the virtual kiosk test; and proportion of fixation duration, representing the percentage of time a participant focused on the target menu item out of all menu items. From the hand movement data, two additional features were derived: hand movement distance, referring to the total distance of hand trajectory during the virtual kiosk test; and hand movement speed, representing the average speed of hand movements during the test. Finally, from the performance data, we included time to completion, which is the total duration a participant took to complete all six steps of the virtual kiosk test, and number of errors, which represents the total errors made by a participant in the test, as the final two features. This approach allowed us to capture diverse aspects of participant behavior associated with various cognitive functions, such as perception and information processing.
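As an illustration, the two eye-movement features can be computed from raw gaze samples roughly as follows. This is a minimal sketch, not the study's actual pipeline: the array layout, function names, and the string encoding of fixation targets are our assumptions.

```python
import numpy as np

def scanpath_length(gaze_xyz):
    """Total distance covered by the gaze path: sum of Euclidean
    distances between consecutive gaze samples."""
    diffs = np.diff(np.asarray(gaze_xyz, dtype=float), axis=0)
    return float(np.sum(np.linalg.norm(diffs, axis=1)))

def fixation_proportion(fixation_targets, target_item):
    """Share of fixation samples that landed on the target menu item
    (hypothetical string labels per fixation sample)."""
    targets = list(fixation_targets)
    if not targets:
        return 0.0
    return sum(t == target_item for t in targets) / len(targets)

# Example: a unit-square gaze path has length 4; half of the fixations hit the target.
path = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0), (0, 0, 0)]
print(scanpath_length(path))                                              # 4.0
print(fixation_proportion(["shrimp_burger", "fries"], "shrimp_burger"))   # 0.5
```

Hand movement distance and speed would follow the same pattern, applied to the controller's position samples.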

2.3. Neurological Features

EEG-SSVEP signals were recorded using the Comet-Plus XL Lab EEG system (Natus Neurology, Middleton, WI, USA) at a sampling rate of 200 Hz. Electrode placement followed the international 10–20 system, incorporating Fp1, Fp2, F3, F4, F7, F8, C3, C4, T3, T4, T5, T6, P3, P4, O1, O2, Fz, Cz, and Pz. Measured data from each electrode were normalized to a zero mean during data processing. Artifacts were eliminated using an automatic and tunable removal algorithm [35] implemented in the spkit module in Python 3.9. In this study, we employed the package’s “elimination mode”, utilizing the db4 wavelet and a β value of 0.1.
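The per-channel normalization step can be sketched as follows; this is an illustrative, dependency-free fragment with names of our own choosing, and the artifact removal itself is left to the wavelet-based algorithm described above.

```python
import numpy as np

def normalize_channels(eeg):
    """Zero-mean each channel. Rows are channels, columns are samples,
    matching the per-electrode normalization described in the text."""
    eeg = np.asarray(eeg, dtype=float)
    return eeg - eeg.mean(axis=1, keepdims=True)

# The tunable wavelet artifact removal (db4 wavelet, beta = 0.1,
# "elimination mode" in spkit, per the text) would run on the
# normalized data afterwards; it is omitted here.
x = np.array([[1.0, 2.0, 3.0],
              [10.0, 10.0, 10.0]])
print(normalize_channels(x))  # each row now averages to zero
```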
To investigate the impact of visual stimulation on brain activity, intermittent photic stimulation (IPS) was employed to capture steady-state visual evoked potentials (EEG-SSVEP). Stimulation frequencies of 3, 5, 10, 12, 15, and 20 Hz were utilized, allowing the examination of neural responses and EEG activity specific to each frequency. The IPS session, lasting 120 s, consisted of alternating 10 s periods of photic stimulation at different frequencies and 10 s rest intervals (see Figure 2). Participants were instructed to lie down comfortably in a dimly lit room and close their eyes to induce a state of relaxation and minimize visual distraction while ensuring optimal diffusion of photic stimuli onto the retina [36].
In this study, we investigated the dorsal stream, a crucial element in visual processing for action, utilizing EEG-SSVEP. The dorsal stream’s response to visual stimuli is characterized by the ratio of signal power in the parietal and occipital lobes (POR). As a first step, this requires the harmonics to be averaged within the stimulation time range for each channel based on time–frequency data. The time–frequency data were obtained using the multi-taper method implemented in MNE-Python, with a sliding time window of 500 ms. The frequency range for the analysis was set from 1 to 50 Hz in steps of 1 Hz. Subsequently, an 8 Hz frequency smoothing was applied using three tapers. This process was performed using a Fast Fourier Transform with a Hanning taper. Second, channel averaging within each lobe was conducted, with power expressed in dB. Finally, responsiveness to visual stimuli in the dorsal stream was evaluated by computing the parietal-to-occipital ratio.
The equation for power analysis is as follows. In this formula, P represents the power of the harmonic response, x denotes the time–frequency data, e is the electrode index, f refers to the photic stimulation frequency, n indicates the harmonic number, and T is the length of the stimulation time range.

$$P_{e,f,n} = \frac{1}{T}\sum_{t=0}^{T} x_{e,\,f\cdot n}(t)$$
The equation for lobe power (LP) is as follows. In this formula, LP represents the power of the harmonic response for each lobe, calculated by averaging the power values of the electrodes within that lobe, and L denotes the set of electrodes belonging to each lobe.

$$LP_{L,f,n} = \frac{\sum_{e \in L} P_{e,f,n}}{|L|}$$
The equation for the lobe power ratio (LPR) is as follows. In this formula, LPR refers to the ratio of lobe power, which compares the harmonic response of each lobe, specifically evaluating the dorsal stream pathway from the occipital to the parietal lobes. P denotes the set of electrodes within the parietal lobe, while O represents the set within the occipital lobe. The photic stimulation frequencies f are 3, 5, 10, 12, 15, and 20 Hz, and n is the number of harmonics such that f·n < 50 Hz. The electrodes for the parietal lobe are P3, P4, C3, C4, Pz, and Cz, while the electrodes for the occipital lobe are O1 and O2.

$$LPR_{f,n} = \frac{LP_{P,f,n}}{LP_{O,f,n}}$$
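Assuming a time–frequency array of shape (channels, frequencies, time), the three power equations can be sketched as below. The function and variable names are ours, not from the study's code, and the synthetic example only checks the arithmetic.

```python
import numpy as np

PARIETAL = ["P3", "P4", "C3", "C4", "Pz", "Cz"]
OCCIPITAL = ["O1", "O2"]
CHANNELS = PARIETAL + OCCIPITAL
FREQS = np.arange(1, 51)  # 1-50 Hz in 1 Hz steps, as in the text

def electrode_power(tfr, f, n):
    """P_{e,f,n}: power at the n-th harmonic (f*n Hz) of stimulation
    frequency f, averaged over the stimulation time window."""
    fi = int(np.argmin(np.abs(FREQS - f * n)))  # nearest analysed frequency bin
    return {ch: float(tfr[c, fi, :].mean()) for c, ch in enumerate(CHANNELS)}

def lobe_power(p, lobe):
    """LP_{L,f,n}: electrode power averaged within one lobe."""
    return float(np.mean([p[ch] for ch in lobe]))

def lobe_power_ratio(tfr, f, n):
    """LPR_{f,n}: parietal-to-occipital power ratio (dorsal-stream responsiveness)."""
    p = electrode_power(tfr, f, n)
    return lobe_power(p, PARIETAL) / lobe_power(p, OCCIPITAL)

# Synthetic check: parietal channels twice as strong as occipital -> ratio 2.
tfr = np.ones((8, 50, 20))
tfr[:6] *= 2.0
print(lobe_power_ratio(tfr, f=10, n=1))  # 2.0
```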
The transmission of visual stimuli in the dorsal stream is quantified by the ratio of connectivity in the parietal and occipital lobes, the ratio of θ and β connectivity in the parietal lobe, and the ratio of α and β connectivity in the parietal lobe. As a first step, we used the weighted phase lag index (wPLI) [37] and the weighted clustering coefficient developed by Onnela [38] to measure the transmission of visual stimuli in each frequency band. As a second step, we computed the lobe connectivity by averaging connectivity values across channels within each lobe. As a third step, we computed the lobe connectivity ratio to assess the connectivity of the dorsal stream. Finally, we computed the band connectivity ratio based on the θ/β ratio (TBR) and the α/β ratio (ABR) for the parietal lobe.
The equation for connectivity in each EEG channel is as follows. In this formula, C represents the connectivity between each electrode and its neighboring electrodes, f denotes the photic stimulation frequency, b stands for the frequency band, w is the weight matrix of band b during f Hz photic stimulation obtained through wPLI, e is the electrode index, i and j are indices over the neighboring electrodes of e, and deg signifies the degree.

$$C_{e,f,b} = \frac{\sum_{i,j}\left(w_{f,b,e,i}\,w_{f,b,e,j}\,w_{f,b,i,j}\right)^{1/3}}{\deg(e)\left(\deg(e)-1\right)}$$
The equation for lobe connectivity (LC) is as follows. In this formula, LC represents the connectivity within a lobe, obtained by averaging the connectivity values of the electrodes within that lobe, and L denotes the set of electrodes within each lobe.

$$LC_{L,f,b} = \frac{\sum_{e \in L} C_{e,f,b}}{|L|}$$
The equation for the lobe connectivity ratio (LCR) is as follows. In this formula, LCR compares the connectivity between different lobes, specifically evaluating the dorsal stream pathway from the occipital to the parietal lobes. P denotes the set of electrodes within the parietal lobe, while O represents the set of electrodes within the occipital lobe.

$$LCR_{f,b} = \frac{LC_{P,f,b}}{LC_{O,f,b}}$$
The equation for the band connectivity ratio (BCR) is as follows. In this formula, BCR is the ratio of connectivity across frequency bands, used to analyze subtle processing differences in visual stimuli in the parietal lobe. It compares the θ and α bands, represented by b1, with the β band, represented by b2. The photic stimulation frequencies f are 3, 5, 10, 12, 15, and 20 Hz. The frequency bands b are defined as θ: 4–8 Hz, α: 8–12 Hz, β: 12–30 Hz, and γ: 30–50 Hz. The electrodes for the parietal lobe are P3, P4, C3, C4, Pz, and Cz, while the electrodes for the occipital lobe are O1 and O2.

$$BCR_{f,b_1,b_2} = \frac{LC_{P,f,b_1}}{LC_{P,f,b_2}}$$
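The per-channel clustering coefficient from the first connectivity equation can be sketched as follows; this is a direct transcription of the formula above into NumPy (names are ours), and the wPLI matrix itself is assumed to come from elsewhere.

```python
import numpy as np

def clustering_coefficient(w, e):
    """C_{e,f,b}: weighted clustering coefficient of electrode e for one
    wPLI weight matrix w (one frequency band at one stimulation frequency)."""
    n = w.shape[0]
    deg = int(np.count_nonzero(w[e]))  # number of neighbours of e
    if deg < 2:
        return 0.0
    total = 0.0
    for i in range(n):
        for j in range(n):
            if e not in (i, j) and i != j:
                total += (w[e, i] * w[e, j] * w[i, j]) ** (1.0 / 3.0)
    return total / (deg * (deg - 1))

def lobe_connectivity(w, lobe_idx):
    """LC_{L,f,b}: clustering coefficients averaged over a lobe's electrodes."""
    return float(np.mean([clustering_coefficient(w, e) for e in lobe_idx]))

# LCR and BCR are then plain ratios: LC(parietal)/LC(occipital) within one band,
# and LC_P(theta or alpha)/LC_P(beta) within the parietal lobe.
w = np.ones((3, 3))
np.fill_diagonal(w, 0)               # fully connected triangle, unit weights
print(clustering_coefficient(w, 0))  # 1.0
```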
To enhance understanding, we established three components for naming neurological features: (1) the PS frequency (3, 5, 10, 12, 15, and 20 Hz), (2) the target lobe (parietal lobe, POR), and (3) the specific frequency ranges used in different analysis methods, namely power and connectivity. For instance, in power analysis, ‘1H’ refers to the fundamental harmonic, while ‘2H’ represents the second harmonic in response to visual stimuli. In connectivity analysis, θ, α, β, and γ correspond to 4–8 Hz, 8–12 Hz, 12–30 Hz, and 30–50 Hz, respectively. Consequently, a total of 65 EEG-SSVEP features were calculated from measured data utilizing the above equations. This includes 17 lobe power ratio features, 24 lobe connectivity ratio features, and 24 band connectivity ratio features. By utilizing Benjamini–Hochberg (BH) correction and t-test filtering methods, eight statistically significant EEG-SSVEP features were selected. These features are described as follows: three lobe power ratio features (5PS-POR-2H, 12PS-POR-1H, and 15PS-POR-1H), two lobe connectivity ratio features (3PS-POR-α and 5PS-POR-γ), and three band connectivity ratio features (3PS-P-ABR, 10PS-P-ABR, and 15PS-P-TBR).
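The three-component naming scheme can be expressed as a tiny helper; this is purely illustrative, as the paper does not publish code.

```python
def feature_name(ps_freq, target, spec):
    """Compose a neurological feature name from the three components:
    photic stimulation frequency (Hz), target lobe or ratio, and the
    harmonic ('1H', '2H') or band-ratio tag ('TBR', 'ABR', a band letter)."""
    return f"{ps_freq}PS-{target}-{spec}"

print(feature_name(15, "POR", "1H"))  # 15PS-POR-1H
print(feature_name(3, "P", "ABR"))    # 3PS-P-ABR
```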

2.4. Data Analysis

We employed sequential statistical analysis to assess the differences between the healthy controls and patients with aMCI and validate the performance of behavioral and neurological features. The statistical analysis was conducted using the module ‘statsmodels’ in Python 3.9. First, the chi-squared test and independent samples t-test were conducted to compare demographics between healthy controls and patients with aMCI. Second, Kolmogorov–Smirnov tests were conducted to assess normality. Subsequently, when normality was not met, the Wilcoxon signed-rank test was applied. Third, Levene’s test was performed to verify equal variances after confirming normality. If equal variances were confirmed, the independent samples t-test was employed. Fourth, in cases where normality was confirmed but unequal variances were observed, Welch’s t-test was utilized. Finally, a Pearson correlation analysis was performed to investigate the relationship between behavioral and neurological features. For multiple comparisons of the aforementioned tests and correlation results, we implemented the BH correction [39]. This systematic and structured approach was employed to enhance the reliability of our study’s findings.
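The BH correction used throughout this pipeline can be sketched in plain NumPy. This is a standard step-up implementation for illustration only; the study itself used statsmodels, and the test selection (Kolmogorov–Smirnov, Levene, t-tests) is assumed to happen upstream.

```python
import numpy as np

def bh_correct(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up FDR procedure):
    sort p-values, scale by m/rank, then enforce monotonicity from the
    largest rank downwards and map back to the original order."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order] * m / np.arange(1, m + 1)
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]  # monotone step-up
    adjusted = np.empty(m)
    adjusted[order] = np.clip(ranked, 0.0, 1.0)
    return adjusted

print(bh_correct([0.01, 0.04, 0.03, 0.005]))  # [0.02 0.04 0.04 0.02]
```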

2.5. Multimodal Learning

In our study, we combined behavioral and neurological features and employed multimodal learning for the early detection of patients suffering from aMCI. All the multimodal learning and embedded methods were carried out using the scikit-learn module in Python 3.9. To identify the best model for behavioral and neurological features, we compared the following six machine learning classifiers: Support Vector Machine (SVM), Linear Discriminant Analysis (LDA), Naive Bayes Classifier (NB), Gaussian Process Classifier (GPC), k-Nearest Neighbors classifier (KNN), and Random Forest (RF) [40,41]. For the feature selection process, an embedded method using SVM with a linear kernel and L2 regularization with C = 0.05 was adopted to filter features with lower importance relative to the average importance across all features [42].
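The thresholding at the heart of the embedded method can be sketched as follows. The importances would in practice be the absolute coefficients of the linear-kernel SVM (C = 0.05) described above; here they are hypothetical numbers, and the keep-if-at-least-average rule is our reading of "filter features with lower importance relative to the average importance".

```python
import numpy as np

def embedded_select(importances, feature_names):
    """Keep features whose importance is at least the mean importance
    across all features; drop the rest."""
    imp = np.abs(np.asarray(importances, dtype=float))
    keep = imp >= imp.mean()
    return [name for name, k in zip(feature_names, keep) if k]

# Hypothetical importances: mean is ~0.22, so the third feature is dropped.
print(embedded_select([0.30, 0.35, 0.02],
                      ["time_to_completion", "15PS-P-TBR", "scanpath_length"]))
# ['time_to_completion', '15PS-P-TBR']
```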
In this study, the dataset consisted of 48 samples in total, with a 7:3 split between training and test data to robustly assess the classifier’s ability to generalize. As a result, 34 samples were allocated for training (17 from healthy controls, 17 from aMCI patients), while the remaining 14 samples were designated for testing (7 from healthy controls, 7 from aMCI patients). To assess overfitting and optimize the model, we used leave-one-out cross-validation (LOOCV), a validation method widely used in medical studies with small sample sizes of fewer than 50 participants [43,44]. This approach allowed us to validate the training data and optimize hyperparameters through grid search (see Table 2). The models underwent a rigorous assessment using various performance metrics, including accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC). These metrics were calculated as follows with TP (true positive), TN (true negative), FP (false positive), and FN (false negative).
$$Accuracy = \frac{TP+TN}{TP+TN+FP+FN}$$

$$Sensitivity = \frac{TP}{TP+FN}$$

$$Specificity = \frac{TN}{TN+FP}$$

$$AUC = \int_{0}^{1} Sensitivity\left((1-Specificity)^{-1}(x)\right)\,dx$$
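The first three metrics follow directly from confusion-matrix counts; a minimal sketch (the example counts are invented, not the study's results):

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts,
    per the equations above. AUC instead requires the full ROC curve."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Hypothetical test fold: 7 aMCI samples with one miss, 7 controls all correct.
print(classification_metrics(tp=6, tn=7, fp=0, fn=1))
# accuracy ~ 0.93, sensitivity ~ 0.86, specificity = 1.0
```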

3. Results

3.1. Differences in Behavioral Features between Healthy Controls and Amnestic Mild Cognitive Impairment Patients

Significant differences in behavioral features between healthy controls and aMCI patients were observed (see Table 3). For aMCI patients, we found several notable differences, including a prolonged time to completion (BH-corrected Wilcoxon signed-rank test, p < 0.05), an increased number of errors (BH-corrected Welch’s t-test, p < 0.05), an extended scanpath length (BH-corrected Wilcoxon signed-rank test, p > 0.05), a higher proportion of fixation duration (BH-corrected independent samples t-test, p < 0.05), a longer hand movement distance (BH-corrected Wilcoxon signed-rank test, p > 0.05), and a reduced hand movement speed (BH-corrected independent samples t-test, p < 0.05) compared to healthy controls. These findings resulted from a comprehensive analysis involving various statistical tests, namely the independent samples t-test, Welch’s t-test, and Wilcoxon signed-rank test, selected according to normality and variance characteristics. The reported p-values were corrected for multiple comparisons using the BH method.

3.2. Differences in Neurological Features between Healthy Controls and Amnestic Mild Cognitive Impairment Patients

After analyzing the EEG-SSVEP data using an independent samples t-test, aMCI patients showed significant differences in all eight neurological features (see Table 4). Specifically, there were significant differences discerned in 5PS-POR-2H (BH-corrected independent samples t-test, p < 0.01), 12PS-POR-1H (BH-corrected independent samples t-test, p < 0.001), and 15PS-POR-1H (BH-corrected independent samples t-test, p < 0.001) of the lobe power ratio; 3PS-POR-α (BH-corrected independent samples t-test, p < 0.0001) and 5PS-POR-γ (BH-corrected independent samples t-test, p < 0.05) in the lobe connectivity ratio; and 3PS-P-ABR (BH-corrected independent samples t-test, p < 0.05), 10PS-P-ABR (BH-corrected independent samples t-test, p < 0.05), and 15PS-P-TBR (BH-corrected independent samples t-test, p < 0.01) of the band connectivity ratio. All reported p-values were subjected to correction for multiple comparisons using the BH method.

3.3. Correlation between Behavioral and Neurological Features

In this study, we investigated the relationship between behavioral and neurological features by conducting a Pearson correlation analysis on six selected behavioral features and eight neurological features (see Figure 3). The Pearson correlation analysis revealed significant relations between four behavioral features obtained from the virtual kiosk test (i.e., proportion of fixation duration, hand movement speed, time to completion, and the number of errors) and two neurological features derived from EEG-SSVEP data (i.e., 5PS-POR-2H and 12PS-POR-1H). Notably, neurological features associated with the dorsal stream showed noteworthy correlations with behavioral features: the lobe power ratio 5PS-POR-2H correlated with time to completion (r = 0.29, BH-corrected p < 0.05) and the number of errors (r = 0.40, BH-corrected p < 0.05). Similarly, 12PS-POR-1H correlated with the proportion of fixation duration (r = −0.35, BH-corrected p < 0.05), hand movement speed (r = −0.40, BH-corrected p < 0.05), time to completion (r = 0.32, BH-corrected p < 0.05), and the number of errors (r = 0.35, BH-corrected p < 0.05).

3.4. Multivariate Statistical Analysis of Healthy Controls and Patients with Amnestic Mild Cognitive Impairment

To explore the potential differentiation between healthy controls and individuals with aMCI, we employed PCA as an unsupervised technique (refer to Figure 4). Before the application of the embedded method, Figure 4a demonstrates the overlap between these groups using VR and EEG-SSVEP features, indicating challenges in distinguishing aMCI. Conversely, as depicted in Figure 4b, the embedded method identified five features, encompassing behavioral measures, such as time to completion, and neurological parameters including 15PS-POR-1H, 3PS-POR-α, 3PS-P-ABR, and 15PS-P-TBR, which exhibit the potential to detect aMCI at an early stage.

3.5. Multimodal Learning Performance Using Both Behavioral and Neurological Features

In this study, we ultimately selected five features through the embedded feature selection method. Specifically, one was chosen from the behavioral features recorded during the VR test (i.e., time to completion) and four were selected from neurological features (i.e., 15PS-POR-1H, 3PS-POR-α, 3PS-P-ABR, and 15PS-P-TBR).
As illustrated in Table 5, the SVM showed the best performance of our six models, achieving an accuracy of 98.38%, sensitivity of 96.54%, specificity of 100%, and an AUC of 99.73%. To identify the contribution of each of the individual features to the SVM performance, we performed a feature importance analysis. Here, 15PS-P-TBR emerged as the most significant contributor, achieving a feature importance (FI) score of 0.35, followed by 3PS-POR-α (FI = 0.31), 15PS-POR-1H (FI = 0.30), 3PS-P-ABR (FI = 0.29), and time to completion (FI = 0.14).

3.6. Comparative Analysis of Early Amnestic Mild Cognitive Impairment Detection

We compared the classification outcomes of our unimodal models using VR and EEG-SSVEP features, respectively, to a multimodal model using both VR and EEG-SSVEP features. The multimodal model using the combined features exhibited the highest performance with 98.38% accuracy, 96.54% sensitivity, 100.00% specificity, and 99.38% AUC. The unimodal model using EEG-SSVEP features only (i.e., 15PS-POR-1H, 3PS-POR-α, 3PS-P-ABR, and 15PS-P-TBR) was ranked second with 93.33% accuracy, 85.71% sensitivity, 100% specificity, and 93.07% AUC. The unimodal model using VR features only (i.e., time to completion) achieved the lowest performance with 53.74% accuracy, 56.28% sensitivity, 51.52% specificity, and 21.92% AUC. These findings are illustrated in Table 6.

4. Discussion

The primary objective of this study was to integrate behavioral features derived from VR and neurological features collected from EEG-SSVEP to understand their relationship with cognitive decline and improve the accuracy of detection of aMCI. Looking at behavioral features, aMCI patients exhibited a significantly longer scanpath length, lower proportion of fixation duration, extended hand movement distance, slower movement speed, longer time to completion, and a higher number of errors compared to healthy controls. With respect to neurological features, aMCI patients showed reduced harmonic responses and band connectivity between the occipital and parietal lobes. A correlation analysis between behavioral and neurological data revealed that the features 5PS-POR-2H and 12PS-POR-1H, which reflect stimulation in the α range of the lobe power ratio assessing the response to visual stimuli, showed the strongest correlation with behavioral data including the proportion of fixation duration, hand movement speed, and the number of errors. Showcasing the aMCI early detection performance, an SVM using both VR and EEG-SSVEP features achieved remarkable outcomes: an accuracy of 98.38%, a sensitivity of 96.54%, a specificity of 100%, and an AUC of 99.73%. This SVM model outperformed models using either VR or EEG-SSVEP features alone, with the EEG-SSVEP model achieving an accuracy of 93.33% and the VR feature model attaining an accuracy of 53.74%. Despite the lower performance of the VR feature model, integrating the VR feature (i.e., time to completion) with EEG-SSVEP features improved the model’s performance by 5 percentage points. This finding suggests that the inclusion of a VR feature (i.e., time to completion) complements the information on behavioral deficits not captured by EEG-SSVEP features, facilitating a comprehensive assessment of the multifaceted cognitive impairments in aMCI patients.
These results underscore the potent capabilities of multimodal analysis, combining behavioral and neurological features, for the early detection of aMCI.
Our findings highlight the significance of understanding the relationship, with respect to cognitive decline, between the behavioral features obtained from a VR test environment and the neurological features derived from EEG-SSVEP data. Specifically, EEG-SSVEP features (i.e., 5PS-POR-2H, 12PS-POR-1H, and 15PS-POR-1H) computed from power analysis exhibited notable correlations with VR features (i.e., proportion of fixation duration, hand movement speed, time to completion, and the number of errors). These findings align well with previous research that investigated the impact of a compromised dorsal stream on both cognitive function [31,32] and behavioral changes [28,29,30]. For instance, some EEG-SSVEP features obtained from the α range between 8 Hz and 12 Hz, such as 5PS-POR-2H and 12PS-POR-1H, showed decreased power in patients with aMCI. These patients also showed an increased number of errors and slower hand movement speed. Prior studies have demonstrated that impairment of the occipital lobe, the primary hub for visual processing, impedes the effective activation of the pulvinar nucleus, a key facilitator of visual processing acceleration. This impediment leads to challenges in processing visual perception along the ventral stream and, subsequently, action processing along the dorsal stream, thereby exacerbating delays in both perceiving and responding to multiple objects [45,46,47]. Consequently, a compromised visual pathway to the primary visual cortex in the occipital lobe is associated with inadequate acquisition of recognition information for behavior in the dorsal stream, potentially leading to behavioral alterations [21,45,46,47,48,49].
This study is subject to several limitations, chiefly the constraints of collecting medical data, which resulted in a restricted sample size. Despite these limitations, it is noteworthy that we have found correlations between behavioral and neurological features and that they perform well when integrated. We have successfully applied multimodal learning to the detection of aMCI and compared the performance of six machine learning models popular in this field. Table 6 shows that a model that combines both behavioral and neurological data outperforms similar models that employ one type of data only. Although the necessity of integrating VR features may seem uncertain given the lower performance (53.74% accuracy) of the model using a single VR feature (i.e., time to completion) compared to the unimodal model using EEG-SSVEP features (93.33% accuracy), it is crucial to note that performance improves to 77.58% accuracy when employing all six VR features. Empirical analysis revealed that the VR feature ‘time to completion’ is the most effective for enhancing the overall performance of the final model integrating both VR and EEG-SSVEP features. Our integrated approach that combined five selected behavioral and neurological features yielded remarkable results, including an accuracy of 98.38%, a sensitivity of 96.54%, a specificity of 100%, and an AUC of 99.73%. Our findings indicate that EEG-SSVEP monitors the processing of visual stimuli in the dorsal stream, whereas VR measures behavior in terms of cognitive functioning. As both methods focus on complementary aspects, their combination into a single model is a powerful tool to monitor and predict behavioral changes. To summarize, our study demonstrated that behavioral changes associated with a compromised dorsal stream can be detected by combining VR and EEG-SSVEP features, leading to improved performance in early aMCI detection through multimodal learning.

5. Conclusions

The findings of this study demonstrate that early aMCI detection using both behavioral and neurological features is promising, with 98.38% accuracy, 96.54% sensitivity, 100.00% specificity, and 99.73% AUC. Furthermore, this study established a strong relationship between the behavioral features assessed in the virtual kiosk test and the neurological features measured through EEG-SSVEP, underscoring the importance of integrating multimodal features for the detection of aMCI. Specifically, observations from the virtual kiosk test, including the proportion of fixation duration, hand movement speed, and the number of errors, were found to be closely associated with stimulus responses within the α band range of the lobe power ratio, which reflects dorsal stream stimulus responses. These findings highlight the relationship between VR and EEG-SSVEP features, contributing to an understanding of how impaired neural responses in the dorsal stream may manifest as behavioral changes. Furthermore, EEG-SSVEP provided insights into the visual stimulus aspects of dorsal stream neural activity, while VR offered behavioral information based on cognitive processing; the two thus complement each other in capturing behavioral changes from different perspectives. This synergy resulted in superior performance through SVM-based multimodal learning. Therefore, our multimodal learning approach provides valuable insights into enhancing the performance of early aMCI detection by integrating diverse features.

Author Contributions

Conceptualization, D.K. and K.S.; methodology, D.K. and Y.K.; validation, J.P. and H.C.; formal analysis, D.K.; investigation, D.K.; resources, K.S.; data curation, D.K., Y.K., J.P. and H.C.; writing—original draft preparation, D.K.; writing—review and editing, K.S. and M.L.; visualization, D.K. and Y.K.; supervision, K.S.; project administration, K.S.; funding acquisition, H.R. and K.S. All authors have read and agreed to the published version of the manuscript.

Funding

This study was financially supported by the Seoul National University of Science and Technology.

Institutional Review Board Statement

This study was approved by the Institutional Review Board (IRB) of Hanyang University Hospital in accordance with the Declaration of Helsinki (HYUH-2021-08-020-004). Informed consent was obtained from each participant according to the recommendations of the Declaration of Helsinki.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Warpechowski, M.; Warpechowski, J.; Kulczyńska-Przybik, A.; Mroczko, B. Biomarkers of Activity-Dependent Plasticity and Persistent Enhancement of Synaptic Transmission in Alzheimer Disease: A Review of the Current Status. Med. Sci. Monit. 2023, 29, e938826. [Google Scholar] [CrossRef]
  2. Shukla, A.; Tiwari, R.; Tiwari, S. Review on Alzheimer Disease Detection Methods: Automatic Pipelines and Machine Learning Techniques. Sci 2023, 5, 13. [Google Scholar] [CrossRef]
  3. Wei, X.; Du, X.; Xie, Y.; Suo, X.; He, X.; Ding, H.; Zhang, Y.; Ji, Y.; Chai, C.; Liang, M.; et al. Mapping Cerebral Atrophic Trajectory from Amnestic Mild Cognitive Impairment to Alzheimer’s Disease. Cereb. Cortex 2022, 33, 1310–1327. [Google Scholar] [CrossRef]
  4. Tuena, C.; Pupillo, C.; Stramba-Badiale, C.; Stramba-Badiale, M.; Riva, G. Predictive Power of Gait and Gait-Related Cognitive Measures in Amnestic Mild Cognitive Impairment: A Machine Learning Analysis. Front. Hum. Neurosci. 2024, 17, 1328713. [Google Scholar] [CrossRef]
  5. Yoon, E.J.; Lee, J.-Y.; Kwak, S.; Kim, Y.K. Mild Behavioral Impairment Linked to Progression to Alzheimer’s Disease and Cortical Thinning in Amnestic Mild Cognitive Impairment. Front. Aging Neurosci. 2023, 14, 1051621. [Google Scholar] [CrossRef]
  6. Chong, A.; Ha, J.M.; Chung, J.Y.; Kim, H.; Choo, I.H. Modified RCTU Score: A Semi-Quantitative, Visual Tool for Predicting Alzheimer’s Conversion from aMCI. Brain Sci. 2024, 14, 132. [Google Scholar] [CrossRef]
  7. Zhang, Y.; Chen, H.; Li, R.; Sterling, K.; Song, W. Amyloid β-Based Therapy for Alzheimer’s Disease: Challenges, Successes and Future. Signal Transduct. Target. Ther. 2023, 8, 248. [Google Scholar] [CrossRef]
  8. Mohs, R.; Bakker, A.; Rosenzweig-Lipson, S.; Rosenblum, M.; Barton, R.L.; Albert, M.S.; Cohen, S.; Zeger, S.; Gallagher, M. The HOPE4MCI Study: A Randomized Double-blind Assessment of AGB101 for the Treatment of MCI Due to AD. Alzheimer’s Dement. Transl. Res. Clin. Interv. 2024, 10, e12446. [Google Scholar] [CrossRef]
  9. Kim, S.Y.; Park, J.; Choi, H.; Loeser, M.; Ryu, H.; Seo, K. Digital Marker for Early Detection of Mild Cognitive Impairment through Hand and Eye Movement Analysis in Virtual Reality Using Machine Learning: First Validation Study. J. Med. Internet Res. 2023, 25, e48093. [Google Scholar] [CrossRef]
  10. Javitt, D.C.; Martinez, A.; Sehatpour, P.; Beloborodova, A.; Habeck, C.; Gazes, Y.; Bermudez, D.; Razlighi, Q.R.; Devanand, D.P.; Stern, Y. Disruption of early visual processing in amyloid-positive healthy individuals and mild cognitive impairment. Alzheimer’s Res. Ther. 2023, 15, 42. [Google Scholar] [CrossRef]
  11. Elefante, C.; Brancati, G.E.; Baldacci, F.; Lattanzi, L.; Ceravolo, R.; Perugi, G. Mild Behavioral Impairment (MBI) and Late-Life Psychiatric Disorders: Differential Clinical Features and Outcomes. Int. Psychogeriatr. 2023, 35, 27–28. [Google Scholar] [CrossRef]
  12. Ryu, H.; Seo, K. The Illusion of Having a Large Virtual Body Biases Action-Specific Perception in Patients with Mild Cognitive Impairment. Sci. Rep. 2021, 11, 24058. [Google Scholar] [CrossRef] [PubMed]
  13. Kim, S.Y.; Park, H.; Kim, H.; Kim, J.; Seo, K. Technostress Causes Cognitive Overload in High-Stress People: Eye Tracking Analysis in a Virtual Kiosk Test. Inf. Process. Manag. 2022, 59, 103093. [Google Scholar] [CrossRef] [PubMed]
  14. Seo, K.; Kim, J.; Oh, D.H.; Ryu, H.; Choi, H. Virtual Daily Living Test to Screen for Mild Cognitive Impairment Using Kinematic Movement Analysis. PLoS ONE 2017, 12, e0181883. [Google Scholar] [CrossRef] [PubMed]
  15. Park, B.; Kim, S.Y.; Park, J.; Choi, H.; Ryu, H.; Seo, K. Integrating Biomarkers from Virtual Reality and Magnetic Resonance Imaging for Early Detection of Mild Cognitive Impairment Using a Multimodal Learning Approach: Validation Study. J. Med. Internet Res. 2024, 26, e54538. [Google Scholar] [CrossRef] [PubMed]
  16. Akbar, A.F.; Sayyid, Z.N.; Roberts, D.C.; Hua, J.; Paez, A.; Cao, D.; Lauer, A.M.; Ward, B.K. Acoustic Noise Levels in High-Field Magnetic Resonance Imaging Scanners. OTO Open 2023, 7, e79. [Google Scholar] [CrossRef] [PubMed]
  17. Almutairi, H.S.; Mutairi, F.M.; Al Motairi, M.D.; Salem Almutairi, N.; Al-Rashidi, M.H. MRI Safety: RF Burns-Causes and Prevention. J. Surv. Fish. Sci. 2023, 10, 82–85. [Google Scholar]
  18. Vaughn, H.; Declan, A.B.L. MRI-Induced Deep Tissue Burn Presenting to the Emergency Department. Am. J. Emerg. Med. 2022, 58, 352.e3–352.e4. [Google Scholar] [CrossRef] [PubMed]
  19. Saito, Y.; Kamagata, K.; Andica, C.; Uchida, W.; Takabayashi, K.; Yoshida, S.; Nakaya, M.; Tanaka, Y.; Kamiyo, S.; Sato, K.; et al. Reproducibility of Automated Calculation Technique for Diffusion Tensor Image Analysis along the Perivascular Space. Jpn. J. Radiol. 2023, 41, 947–954. [Google Scholar] [CrossRef]
  20. Kim, S.Y.; Park, B.; Kim, D.; Choi, H.; Park, J.; Ryu, H.; Seo, K. Early Screening of Mild Cognitive Impairment Using Multimodal VR-EP-EEG-MRI (VEEM) Biomarkers via Machine Learning. In Proceedings of the 2024 International Conference on Electronics, Information, and Communication (ICEIC), Taipei, Taiwan, 28–31 January 2024. [Google Scholar] [CrossRef]
  21. Yu, W.-Y.; Low, I.; Chen, C.; Fuh, J.-L.; Chen, L.-F. Brain Dynamics Altered by Photic Stimulation in Patients with Alzheimer’s Disease and Mild Cognitive Impairment. Entropy 2021, 23, 427. [Google Scholar] [CrossRef]
  22. Kim, D.; Park, J.S.; Choi, H.; Ryu, H.; Seo, K. Deep Learning Model for Early Screening of Patients with Alzheimer’s Disease and Mild Cognitive Impairment Using EEG-SSVEP. In Proceedings of the HCI Korea, Jeongseon-gun, Republic of Korea, 1–3 February 2023; pp. 781–787. [Google Scholar]
  23. Peksa, J.; Mamchur, D. State-of-the-Art on Brain-Computer Interface Technology. Sensors 2023, 23, 6001. [Google Scholar] [CrossRef] [PubMed]
  24. Khatun, S.; Morshed, B.I.; Bidelman, G.M. Monitoring Disease Severity of Mild Cognitive Impairment from Single-Channel EEG Data Using Regression Analysis. Sensors 2024, 24, 1054. [Google Scholar] [CrossRef] [PubMed]
  25. Kim, S.-E.; Shin, C.; Yim, J.; Seo, K.; Ryu, H.; Choi, H.; Park, J.; Min, B.-K. Resting-State Electroencephalographic Characteristics Related to Mild Cognitive Impairments. Front. Psychiatry 2023, 14, 1231861. [Google Scholar] [CrossRef] [PubMed]
  26. Xue, C.; Li, A.; Wu, R.; Chai, J.; Qiang, Y.; Zhao, J.; Yang, Q. VRNPT: A Neuropsychological Test Tool for Diagnosing Mild Cognitive Impairment Using Virtual Reality and EEG Signals. Int. J. Hum.-Comput. Interact. 2023, 1–19. [Google Scholar] [CrossRef]
  27. Lee, B.; Lee, T.; Jeon, H.; Lee, S.; Kim, K.; Cho, W.; Hwang, J.; Chae, Y.-W.; Jung, J.-M.; Kang, H.J.; et al. Synergy Through Integration of Wearable EEG and Virtual Reality for Mild Cognitive Impairment and Mild Dementia Screening. IEEE J. Biomed. Health Inform. 2022, 26, 2909–2919. [Google Scholar] [CrossRef] [PubMed]
  28. Yan, S.; Yang, X.; Yang, H.; Sun, Z. Decreased Coherence in the Model of the Dorsal Visual Pathway Associated with Alzheimer’s Disease. Sci. Rep. 2023, 13, 3495. [Google Scholar] [CrossRef]
  29. Ilardi, C.R.; Iavarone, A.; La Marra, M.; Iachini, T.; Chieffi, S. Hand Movements in Mild Cognitive Impairment: Clinical Implications and Insights for Future Research. J. Integr. Neurosci. 2022, 21, 67. [Google Scholar] [CrossRef]
  30. Antar, M.; Wang, L.; Tran, A.; White, A.; Williams, P.; Sylcott, B.; Mizelle, J.C.; Kim, S. Functional Connectivity Analysis of Visually Evoked ERPs for Mild Cognitive Impairment: Pilot Study. In Proceedings of the 2023 45th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Sydney, Australia, 24–27 July 2023. [Google Scholar] [CrossRef]
  31. Koike, M.; Noguchi-Shinohara, M.; Morise, H.; Kudo, K.; Tsuchimine, S.; Misaka, Y.; Komatsu, J.; Abe, C.; Horimoto, M.; Kitagawa, S.; et al. Abnormal Regional Brain Activities along the Dorsal Stream during Visuospatial Processing in Alzheimer’s Disease: A Magnetoencephalography Study. Res. Sq. 2022. [Google Scholar] [CrossRef]
  32. Wu, H.; Song, Y.; Yang, X.; Chen, S.; Ge, H.; Yan, Z.; Qi, W.; Yuan, Q.; Liang, X.; Lin, X.; et al. Functional and Structural Alterations of Dorsal Attention Network in Preclinical and Early-stage Alzheimer’s Disease. CNS Neurosci. Ther. 2023, 29, 1512–1524. [Google Scholar] [CrossRef]
  33. Alexandrovsky, D.; Putze, S.; Schwind, V.; Mekler, E.D.; Smeddinck, J.D.; Kahl, D.; Krüger, A.; Malaka, R. Evaluating User Experiences in Mixed Reality. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021. [Google Scholar] [CrossRef]
  34. Albert, M.S.; DeKosky, S.T.; Dickson, D.; Dubois, B.; Feldman, H.H.; Fox, N.C.; Gamst, A.; Holtzman, D.M.; Jagust, W.J.; Petersen, R.C.; et al. The Diagnosis of Mild Cognitive Impairment Due to Alzheimer’s Disease: Recommendations from the National Institute on Aging-Alzheimer’s Association Workgroups on Diagnostic Guidelines for Alzheimer’s Disease. Focus 2013, 11, 96–106. [Google Scholar] [CrossRef]
  35. Bajaj, N.; Requena Carrión, J.; Bellotti, F.; Berta, R.; De Gloria, A. Automatic and Tunable Algorithm for EEG Artifact Removal Using Wavelet Decomposition with Applications in Predictive Modeling during Auditory Tasks. Biomed. Signal Process. Control 2020, 55, 101624. [Google Scholar] [CrossRef]
  36. Kasteleijn-Nolst Trenité, D.; Rubboli, G.; Hirsch, E.; Martins da Silva, A.; Seri, S.; Wilkins, A.; Parra, J.; Covanis, A.; Elia, M.; Capovilla, G.; et al. Methodology of Photic Stimulation Revisited: Updated European Algorithm for Visual Stimulation in the EEG Laboratory. Epilepsia 2011, 53, 16–24. [Google Scholar] [CrossRef] [PubMed]
  37. Vinck, M.; Oostenveld, R.; van Wingerden, M.; Battaglia, F.; Pennartz, C.M.A. An Improved Index of Phase-Synchronization for Electrophysiological Data in the Presence of Volume-Conduction, Noise and Sample-Size Bias. NeuroImage 2011, 55, 1548–1565. [Google Scholar] [CrossRef] [PubMed]
  38. Saramäki, J.; Kivelä, M.; Onnela, J.-P.; Kaski, K.; Kertész, J. Generalizations of the Clustering Coefficient to Weighted Complex Networks. Phys. Rev. E 2007, 75, 027105. [Google Scholar] [CrossRef] [PubMed]
  39. Pudjihartono, N.; Fadason, T.; Kempa-Liehr, A.W.; O’Sullivan, J.M. A Review of Feature Selection Methods for Machine Learning-Based Disease Risk Prediction. Front. Bioinform. 2022, 2, 927312. [Google Scholar] [CrossRef] [PubMed]
  40. Chaddad, A.; Wu, Y.; Kateb, R.; Bouridane, A. Electroencephalography Signal Processing: A Comprehensive Review and Analysis of Methods and Techniques. Sensors 2023, 23, 6434. [Google Scholar] [CrossRef] [PubMed]
  41. Przybyszewski, A.W.; Śledzianowski, A.; Chudzik, A.; Szlufik, S.; Koziorowski, D. Machine Learning and Eye Movements Give Insights into Neurodegenerative Disease Mechanisms. Sensors 2023, 23, 2145. [Google Scholar] [CrossRef] [PubMed]
  42. ZhuParris, A.; de Goede, A.A.; Yocarini, I.E.; Kraaij, W.; Groeneveld, G.J.; Doll, R.J. Machine Learning Techniques for Developing Remotely Monitored Central Nervous System Biomarkers Using Wearable Sensors: A Narrative Literature Review. Sensors 2023, 23, 5243. [Google Scholar] [CrossRef]
  43. Li, Y.; Shao, Y.; Wang, J.; Liu, Y.; Yang, Y.; Wang, Z.; Xi, Q. Machine Learning Based on Functional and Structural Connectivity in Mild Cognitive Impairment. Magn. Reson. Imaging 2024, 109, 10–17. [Google Scholar] [CrossRef] [PubMed]
  44. Mancioppi, G.; Rovini, E.; Fiorini, L.; Zeghari, R.; Gros, A.; Manera, V.; Robert, P.; Cavallo, F. Mild Cognitive Impairment Identification Based on Motor and Cognitive Dual-Task Pooled Indices. PLoS ONE 2023, 18, e0287380. [Google Scholar] [CrossRef]
  45. Cortes, N.; Ladret, H.J.; Abbas-Farishta, R.; Casanova, C. The Pulvinar as a Hub of Visual Processing and Cortical Integration. Trends Neurosci. 2024, 47, 120–134. [Google Scholar] [CrossRef]
  46. Mahon, B.Z.; Almeida, J. Reciprocal Interactions between Parietal and Occipito-Temporal Representations Support Everyday Object-Directed Actions. Neuropsychologia 2024, 198, 108841. [Google Scholar] [CrossRef] [PubMed]
  47. Cortes, N.; de Souza, B.O.; Casanova, C. Pulvinar Modulates Synchrony across Visual Cortical Areas. Vision 2020, 4, 22. [Google Scholar] [CrossRef] [PubMed]
  48. Mahon, B.Z. Higher Order Visual Object Representations: A Functional Analysis of Their Role in Perception and Action. In APA Handbook of Neuropsychology: Neuroscience and Neuromethods; American Psychological Association: Washington, DC, USA, 2023; Volume 2, pp. 113–138. [Google Scholar] [CrossRef]
  49. Yamasaki, T.; Aso, T.; Kaseda, Y.; Mimori, Y.; Doi, H.; Matsuoka, N.; Takamiya, N.; Torii, T.; Takahashi, T.; Ohshita, T.; et al. Decreased Stimulus-Driven Connectivity of the Primary Visual Cortex during Visual Motion Stimulation in Amnestic Mild Cognitive Impairment: An fMRI Study. Neurosci. Lett. 2019, 711, 134402. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Six behavioral features collected from the virtual kiosk test.
Figure 2. Eight neurological features collected by EEG-SSVEP recording with intermittent photic stimulation.
Figure 3. The correlation between VR (vertical axis) and EEG-SSVEP (horizontal axis) features.
Figure 4. The PCA results between the HC and aMCI groups. (a) Results before applying the embedded method, comprising six VR and eight EEG-SSVEP features. (b) Results after applying the embedded method, using one VR and four EEG-SSVEP features.
Table 1. Demographic and neuropsychological test results of healthy controls and patients with amnestic mild cognitive impairment.
Characteristic                        Healthy Controls (n = 24)    aMCI 1 Patients (n = 24)    p-Value
                                      Mean       SD                Mean       SD
Demographics
  Gender (Male/Female)                10/14                        12/12                       0.56
  Age                                 68.42      9.92              71.6       7.59             0.22
  Years of education                  12.13      4.39              10.67      5.71             0.33
Neuropsychological test result
  K-MMSE 2                            28.30      1.40              27.01      2.38             <0.05
1 aMCI: amnestic mild cognitive impairment. 2 K-MMSE: Korean Mini-Mental State Examination.
Table 2. Overview of hyperparameter optimization outcomes obtained through grid search for improving the performance.
Classifier Models                 Hyperparameters
Support Vector Machine            kernel = linear; C = 0.05; probability = true
Linear Discriminant Analysis      solver = singular value decomposition; shrinkage = no shrinkage; tolerance = 1 × 10−4
Naive Bayes                       priors = none; variance smoothing = 1 × 10−9
Gaussian Process                  kernel = radial basis function (1.0)
K-Nearest Neighbor                k = 5; metric = Euclidean; weights = uniform
Random Forest                     number of estimators = 50; max depth = 20; min sample leaf = 4; min sample split = 5
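As a hedged illustration of the grid search behind Table 2, the sketch below tunes the SVM on toy data with scikit-learn's GridSearchCV. The searched value ranges are assumptions made for illustration; Table 2 reports only the selected hyperparameters.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Toy data standing in for the study's 48-participant feature matrix.
X, y = make_classification(n_samples=48, n_features=5, n_informative=3,
                           random_state=0)

# Candidate grid (assumed); grid search scores every combination by
# cross-validation and keeps the best-performing one.
param_grid = {"kernel": ["linear", "rbf"], "C": [0.01, 0.05, 0.1, 1.0]}
search = GridSearchCV(SVC(probability=True), param_grid, cv=5,
                      scoring="accuracy")
search.fit(X, y)
best = search.best_params_
```

The same pattern extends to the other five classifiers by swapping the estimator and its parameter grid.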
Table 3. Comparative statistical analysis of six behavioral features between healthy controls and aMCI patients.
Behavioral Features                         Healthy Controls (n = 24)    aMCI 1 Patients (n = 24)    p-Value
                                            Mean       SD                Mean       SD
Eye movement
  Scanpath length (m)                       30.59      37.55             52.60      57.42            >0.05 2
  Proportion of fixation duration (%)       56.31      13.98             45.76      16.42            <0.05 3
Hand movement
  Hand movement distance (m)                11.85      6.78              15.95      11.23            >0.05 2
  Hand movement speed (m/s)                 0.23       0.07              0.18       0.06             <0.05 3
Performance
  Time to completion (s)                    50.00      54.94             91.75      84.38            <0.05 2
  The number of errors                      1.75       1.65              3.50       2.90             <0.05 4
1 aMCI: amnestic mild cognitive impairment. 2 Wilcoxon signed-rank test p-value with Benjamini–Hochberg correction. 3 Independent sample t-test p-value with Benjamini–Hochberg correction. 4 Welch’s t-test p-value with Benjamini–Hochberg correction.
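The Benjamini–Hochberg correction referenced in the table footnotes can be sketched as a small step-up procedure over raw p-values. The p-values below are illustrative only, not those of the study.

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Benjamini–Hochberg adjusted p-values (step-up FDR procedure)."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    # Scale each sorted p-value by m / rank.
    ranked = p[order] * m / np.arange(1, m + 1)
    # Enforce monotonicity from the largest rank downward, then cap at 1.
    adj = np.clip(np.minimum.accumulate(ranked[::-1])[::-1], 0, 1)
    out = np.empty(m)
    out[order] = adj
    return out

raw = [0.001, 0.008, 0.039, 0.041, 0.20, 0.60]
adj = benjamini_hochberg(raw)
```

Adjusted values are never smaller than the raw ones, so the correction can only make a borderline comparison less significant, which guards the reported <0.05 thresholds against multiple testing.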
Table 4. Statistics from eight neurological features for healthy controls and patients with amnestic mild cognitive impairments.
Neurological Features 1           Healthy Controls (n = 24)    aMCI 2 Patients (n = 24)    p-Value 3
                                  Mean       SD                Mean       SD
Lobe Power Ratio
  5PS-POR-2H                      0.74       0.08              0.79       0.05             <0.01
  12PS-POR-1H                     0.72       0.06              0.82       0.09             <0.001
  15PS-POR-1H                     0.74       0.07              0.81       0.07             <0.001
Lobe Connectivity Ratio
  3PS-POR-α                       0.95       0.05              1.02       0.05             <0.0001
  5PS-POR-γ                       0.99       0.02              1.01       0.04             <0.05
Band Connectivity Ratio
  3PS-P-ABR                       0.98       0.08              1.07       0.14             <0.05
  10PS-P-ABR                      1.04       0.11              1.13       0.17             <0.05
  15PS-P-TBR                      0.92       0.11              1.04       0.11             <0.01
1 Hz frequency of photic stimulation-target lobe-specific frequency range. 2 aMCI: amnestic mild cognitive impairment. 3 Independent samples t-test p-value with Benjamini–Hochberg correction.
Table 5. Performance of the six classifiers using one behavioral feature and four neurological features.
Classifiers    Accuracy (%)        Sensitivity (%)     Specificity (%)     AUC 1 (%)
               Mean      SD        Mean      SD        Mean      SD        Mean      SD
SVM 2          98.38     2.86      96.54     6.12      100.00    0.00      99.73     0.99
KNN 3          97.78     3.55      97.40     6.54      98.11     4.48      99.51     1.26
NB 4           96.97     3.32      93.51     7.11      100.00    0.00      99.78     1.22
LDA 5          93.33     1.64      86.15     2.45      99.62     2.14      93.56     1.52
GPC 6          89.70     4.05      87.45     4.66      91.67     6.65      98.27     1.28
RF 7           76.16     3.29      87.45     4.66      66.29     5.74      90.69     3.09
1 AUC: area under the receiver operating characteristic curve. 2 SVM: Support Vector Machine. 3 KNN: K-Nearest Neighbors. 4 NB: Naive Bayes. 5 LDA: Linear Discriminant Analysis. 6 GPC: Gaussian Process. 7 RF: Random Forest.
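The reported metrics can be derived from classifier predictions as follows. The predictions here are hypothetical stand-ins chosen only to show how accuracy, sensitivity, specificity, and AUC are computed for a balanced 48-participant setting.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Hypothetical predicted probabilities: 24 HC (label 0) and 24 aMCI (label 1).
y_true = np.array([0] * 24 + [1] * 24)
y_score = np.concatenate([np.full(24, 0.2), np.full(24, 0.8)])
y_score[0] = 0.9                       # one false positive for illustration
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)           # recall on the aMCI (positive) class
specificity = tn / (tn + fp)           # recall on the healthy-control class
accuracy = (tp + tn) / (tp + tn + fp + fn)
auc = roc_auc_score(y_true, y_score)   # threshold-free ranking quality
```

Note that AUC uses the continuous scores rather than the thresholded predictions, which is why a model can reach 100% specificity at one threshold while its AUC stays below 100%.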
Table 6. Comparison of aMCI detection based on VR features only, EEG-SSVEP features only, or based on the combination of both features.
Features            Accuracy (%)        Sensitivity (%)     Specificity (%)     AUC 1 (%)
                    Mean      SD        Mean      SD        Mean      SD        Mean      SD
VR                  53.74     6.95      56.28     42.49     51.52     49.98     21.92     2.50
EEG-SSVEP           93.33     0.00      85.71     0.00      100.00    0.00      93.07     0.58
VR + EEG-SSVEP      98.38     2.86      96.54     6.12      100.00    0.00      99.73     0.99
1 AUC: area under the receiver operating characteristic curve.

Kim, D.; Kim, Y.; Park, J.; Choi, H.; Ryu, H.; Loeser, M.; Seo, K. Exploring the Relationship between Behavioral and Neurological Impairments Due to Mild Cognitive Impairment: Correlation Study between Virtual Kiosk Test and EEG-SSVEP. Sensors 2024, 24, 3543. https://doi.org/10.3390/s24113543
