Article

A Comparative Study on Feature Extraction Techniques for the Discrimination of Frontotemporal Dementia and Alzheimer’s Disease with Electroencephalography in Resting-State Adults

1 Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal 576104, Karnataka, India
2 Department of Information and Communication Technology, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, Karnataka, India
3 Artificial Intelligence and Cognitive Load Lab, the Applied Intelligence Research Centre, Technological University Dublin, D07 H6K8 Dublin, Ireland
* Author to whom correspondence should be addressed.
Brain Sci. 2024, 14(4), 335; https://doi.org/10.3390/brainsci14040335
Submission received: 20 February 2024 / Revised: 20 March 2024 / Accepted: 26 March 2024 / Published: 29 March 2024
(This article belongs to the Special Issue Deep into the Brain: Artificial Intelligence in Brain Diseases)

Abstract

Early-stage Alzheimer’s disease (AD) and frontotemporal dementia (FTD) share similar symptoms, complicating their diagnosis and the development of specific treatment strategies. Our study evaluated multiple feature extraction techniques for identifying AD and FTD biomarkers from electroencephalographic (EEG) signals. We developed an optimised machine learning architecture that integrates sliding windowing, feature extraction, and supervised learning to distinguish between AD and FTD patients, as well as from healthy controls (HCs). Our model, with a 90% overlap for sliding windowing, SVD entropy for feature extraction, and K-Nearest Neighbors (KNN) for supervised learning, achieved a mean F1-score and accuracy of 93% and 91%, 92.5% and 93%, and 91.5% and 91% for discriminating AD and HC, FTD and HC, and AD and FTD, respectively. The feature importance array, an explainable AI feature, highlighted the brain lobes that contributed to identifying and distinguishing AD and FTD biomarkers. This research introduces a novel framework for detecting and discriminating AD and FTD using EEG signals, addressing the need for accurate early-stage diagnostics. Furthermore, a comparative evaluation of sliding windowing, multiple feature extraction, and machine learning methods on AD/FTD detection and discrimination is documented.

1. Introduction

Alzheimer’s disease (AD) systematically destroys brain neurons over time [1]. This neurodegenerative disorder progressively leads to cognitive decline, notably in brain regions associated with memory. AD arises from various factors, including environmental influences, vascular diseases, head injuries, genetic predispositions, and, particularly, ageing. More than 50 million people are diagnosed with AD around the globe [2]; the disorder contributes significantly to disability and dependency among the elderly and ranks as the seventh leading cause of death. Similarly, frontotemporal dementia (FTD) is a neurodegenerative disorder that leads to communication difficulties and behavioural changes. These disorders progress through several stages: an asymptomatic early pre-clinical phase, a period of mild cognitive impairment, and finally, dementia [3]. As a consequence, diagnoses at an early stage are crucial. A diagnosis can be achieved by utilising physical exams, cerebrospinal fluid tests, cognitive and language tests, and urine and blood tests. Brain scans can also be adopted, such as Computed or Positron Emission Tomography (CT/PET) and Magnetic Resonance Imaging (MRI) techniques [3]. While brain scans offer detailed spatial resolution, they lack the temporal precision to capture dementia’s evolving symptoms. Electroencephalography (EEG), with its superior temporal resolution, can detect subtle brain activities essential for understanding the dynamic neural interactions in dementia. Moreover, EEG’s cost-effectiveness, accessibility, and ability to provide real-time brain activity monitoring make it suitable for broad screening and diagnostics. However, the volume of recorded EEG data and its inherent artefacts often make it difficult to identify critical biomarkers and, thus, to accurately diagnose neurodegenerative disorders. Different research approaches have appeared over the years to mitigate these technical issues. An abundance of EEG denoising pipelines is present in the literature, and previous studies have applied various techniques for extracting high-level features from EEG data, such as wavelet transforms [4], fractal dimensions [5], entropy-based features [6], and the Hurst exponent [7]. Similar techniques have been used for extracting features for detecting and diagnosing Parkinson’s disease [8], epilepsy [9,10], schizophrenia [11], and other neurological disorders. However, there is limited research on comparing and assessing the utility of such feature extraction techniques for discriminating between Alzheimer’s and frontotemporal dementia patients, as well as distinguishing them from healthy controls (HCs).
This study’s objectives are as follows:
  • To evaluate the effectiveness of sliding windowing, feature extraction techniques, machine learning models, and their pipelines to detect and discriminate AD, FTD, and HC biomarkers.
  • To identify brain regions affected by AD and FTD and verify if these regions align with standard medical tests.
Our Research Question (RQ):
RQ: How does the choice of sliding windowing, feature extraction measures, and machine learning models affect the detection and differentiation of AD and FTD biomarkers in EEG data?
We examined 50% and 90% overlaps for sliding windowing, multiple feature extraction techniques—Higuchi Fractal Dimension, Singular Value Decomposition (SVD) Entropy, Zero Crossing Rate, Detrended Fluctuation Analysis, and Hjorth parameters—to extract salient high-level features from EEG signals and supervised machine learning techniques—K-Nearest Neighbors (KNN), Random Forest (RF), XGBoost, and Extra Trees (ET)—to discriminate frontotemporal dementia, Alzheimer’s disease, and control groups.
The remainder of this paper includes a description of related work for AD and FTD detection (Section 2), and the introduction of a comparative design study for feature extraction from EEG data employing machine learning classification techniques (Section 3). Findings are subsequently presented and critically discussed in Section 4. The contribution to the body of knowledge is explicated in Section 5 by synthesising this research and delineating future avenues of work.

2. Related Work

Several investigations have been conducted for diagnosing Alzheimer’s disorder from biomarkers extracted from electroencephalographic data. Some researchers have employed Hjorth parameters, which are specific statistical properties of EEG data [12]. Others have employed entropy-based features [13], standard measures adopted within biomedicine that represent the degree of disorder of an EEG signal. Various research studies have been based upon the computation of EEG source localisations and the extraction of connectivity features of the cortical region [14] for identifying AD-induced brain network disruptions. Various feature extraction techniques from EEG data exist, and the resulting high-level features are often aggregated and analysed using machine learning. A recent study focused on classifying EEG data from a large dataset of 890 subjects across three categories: healthy controls, mild cognitive impairment subjects, and patients with Alzheimer’s [15]. Another study scrutinised standard EEG pre-processing techniques using exploratory analysis and highlighted their importance in identifying early AD indicators [16]. Further research has also emphasised meticulous pre-processing and feature extraction techniques, employing methods like Kolmogorov Complexity [17], Discrete Wavelet Transform [18], and Spectral Entropy.
Parallel efforts in FTD diagnosis are also noteworthy. A study presented an automatic FTD detection technique by employing Independent Component Analysis (ICA) in the pre-processing phase of the EEG data and a Light Gradient Boosting (LGB) classifier, achieving an 80.67% accuracy [19]. Another study emphasised the discovery of crucial biomarkers in differentiating FTD from other neurodegenerative diseases, focusing on serum and cerebrospinal fluid markers [20]. Similarly, various other forms of dementia, such as vascular dementia and mild cognitive impairment, have been contrasted using EEG data [21]. This approach combined the Wavelet Transform with ICA in the EEG pre-processing phase. Feature extraction techniques, including Spectral, Permutation, and Tsallis Entropy, were used to augment the original EEG data. The study employed machine learning to train supervised models using Support Vector Machines and the K-Nearest Neighbours learning algorithms, coupled with neighbourhood-preserving QR-decomposition for dimensionality reduction based on fuzzy logic. Similarly, researchers performed feature selection via an improved binary gravitation search approach, leading to a higher classification accuracy of patient groups [22].
Machine and deep learning-based applications have been widely adopted for solving supervised AD detection with EEG data analysis [18,23,24,25]. For example, Convolutional Neural Networks (CNNs) have been trained on functional brain connectivity features to detect AD and other neurological disorders automatically [26]. Similarly, a feed-forward neural network was trained on DNA methylation and gene expressions after employing dimensionality reduction techniques. Another study used convolutional auto-encoders to classify AD, mild cognitive impairment, and healthy control subjects utilising time–frequency high-level features generated from the application of Continuous Wavelet Transform [27].
Although deep learning has demonstrated a superior capacity to develop models that automatically learn and integrate salient features from EEG data for an improved classification accuracy [28], such models are often considered difficult to interpret and explain. Studies employing more straightforward learning methods, such as logistic regression, have shown that optimally pre-processed data can still lead to higher model performance, and complex learning strategies are often unnecessary [29]. This suggests that external but transparent and interpretable techniques can often help extract salient features from EEG data better than automated deep learning methods to classify neurodegenerative disorders. Along with the use of more transparent methods, a novel study focused on multimodal EEG and cerebrospinal fluid-related data to distinguish early-onset Alzheimer’s from FTD subjects by adopting microstates theory and spectral analyses [30]. In detail, EEG microstates are short time intervals of stable scalp potential fields. This study demonstrated how abnormalities associated with early-onset AD subjects could be detected by analysing the variation in EEG microstate duration and global field power peaks correlating with clinical severity and cerebrospinal fluid biomarkers. Another study extracted statistical features from EEG frequency bands and trained Decision Trees and Random Forests to discriminate Alzheimer’s and frontotemporal dementia subjects. These algorithms are not only more straightforward and interpretable than deep learning algorithms, but they also lead to models with remarkable accuracy [31].
Besides the transparency offered by simpler machine learning algorithms or the capacity of deep learning to deliver highly accurate predictive models, there is the technical problem of extracting salient features from large datasets with a reasonable computational complexity in computer memory and time [28,32,33]. Consequently, external techniques for extracting meaningful high-level features from EEG data are not only helpful for transparency and interpretability but are often required for dimensionality reduction and, thus, for a significant decrement in the computational resources required to train predictive models. In this direction, Fast Fourier Transforms [34] and Discrete Wavelet Transforms [35] have often proven helpful in extracting features from EEG data before training models with machine learning algorithms. Similarly, Multiway Array Decomposition [36], Principal Dynamic Mode (PDM) analysis [37], Singular Value Decomposition [9], and Principal Component Analysis [38] have all demonstrated utility in such endeavours.
In summary, the body of research on identifying Alzheimer’s disease and frontotemporal dementia using electroencephalographic data is extensive, along with the adoption of machine and deep learning to develop improved predictive models. However, the problem of evaluating the utility of the various ad hoc, interpretable techniques used to extract salient features and biomarkers from EEG signals associated with Alzheimer’s disease and frontotemporal dementia, features that often serve as inputs to the aforementioned learning techniques, remains largely unexplored.

3. Materials and Methods

This section introduces an empirical work that compares various feature extraction techniques from EEG data for discriminating subjects with Alzheimer’s disorder and frontotemporal dementia from healthy adults. Figure 1 shows a synthesis of such a novel pipeline that is divided into five phases: (A) a pre-processing pipeline to denoise EEG data and to segment it with a sliding window technique; (B) a phase where various feature extraction techniques for EEG data are contrasted, along with straightforward supervised learning algorithms for classifying frontotemporal dementia subjects from those having Alzheimer’s disorder; (C) a training phase for automatically aggregating the extracted features from the previous step towards predictive models; (D) an evaluation of such models with unseen testing data across various evaluation metrics; (E) an analytical phase for establishing the importance of EEG channels in the discrimination of Alzheimer’s and frontotemporal dementia subjects.

3.1. Dataset

A dataset published in the OpenNeuro repository (ds004504) [39] was selected. It includes EEG recordings from 88 subjects, of which 23 have frontotemporal dementia (FTD), 36 have Alzheimer’s disease (AD), and 29 are healthy control (HC) subjects. Participants were seated with eyes closed during the recordings (resting state) and were administered the Mini-Mental State Examination for cognitive and neuropsychological assessment. Recordings were acquired at AHEPA University Hospital in Thessaloniki, Greece, using the EEG2100 equipment from the Nihon Kohden Group. Nineteen scalp electrodes were used in line with the 10–20 international standard (Fp1, F7, F3, T3, C3, T5, P3, O1, Fz, Cz, Pz, Fp2, F4, F8, C4, T4, P4, T6, O2), with A1 and A2 used as references.
Recordings were sampled at 500 Hz, and the amplifier’s settings were tuned to 10 μV/mm, with a time constant of 0.3 s and a high-frequency filter of 70 Hz. Recording lengths averaged approximately 13.5 min for the Alzheimer’s subjects, 12 min for the frontotemporal subjects, and 13.8 min for the healthy controls. Overall, the dataset had 485.5, 276.5, and 402 min of recordings, respectively, for AD, FTD, and healthy controls.

3.2. Pre-Processing Phase

The pre-processing phase associated with the EEG signals was initially executed by the researchers who recorded the data [39]. This included the application of a Butterworth band-pass filter (0.5–45 Hz), the re-referencing of channels to the A1 and A2 electrodes, and artefact elimination using ICA. The EEG data were cleaned from noise using Artefact Subspace Reconstruction, a method available in the EEGLAB MATLAB toolbox. The RunICA algorithm was run to transform the nineteen EEG channels into independent components. Those containing ocular noise or jaw artefacts, identified via visual inspection, were zeroed, and the inverse Independent Component Analysis (ICA) transform was applied. We extended this pre-processing by using a sliding window technique to segment the EEG data into overlapping 1-second windows, applying two strategies of 50% and 90% overlap, consistent with methods used in similar studies [40,41,42,43]. The rationale for contrasting two distinct overlapping window strategies was to evaluate the capability to extract time-domain biomarkers from limited EEG data and, thus, the feature extraction’s efficacy in discerning AD and FTD’s key attributes. Note that a 50% overlap strategy is faster than a 90% overlap strategy because it produces fewer windows. Each window contained 500 data points (because of the 500 Hz sampling rate) and was processed separately by each of the selected feature extraction techniques, producing their different outputs. Each technique yielded 19 columns, the EEG channels, and N rows, the segmented overlapped EEG windows, for each subject. Subsequently, a concatenation of these individual tables was performed, leading to a final data shape of N (total windows) × window length (in seconds) × sampling rate (500 Hz) × 36 (number of subjects) columns and 19 rows (number of channels). The dataset was unbalanced across the target feature (class of subjects: AD, FTD, HC); thus, the Synthetic Minority Oversampling Technique (SMOTE) was applied. In detail, oversampling was performed for the 23 subjects with frontotemporal dementia (FTD) and the 29 healthy controls (HC) to match the 36 subjects with Alzheimer’s disease (AD).
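To make the segmentation step concrete, the following is a minimal sketch (not the authors’ code) of the 1-second sliding window with a configurable overlap; the function name and the synthetic recording are illustrative assumptions.

import numpy as np

def segment_windows(eeg, sfreq=500, win_sec=1.0, overlap=0.9):
    """Split an EEG recording of shape (n_channels, n_samples) into overlapping windows."""
    win_len = int(win_sec * sfreq)                 # 500 samples per 1-second window
    step = max(1, int(win_len * (1 - overlap)))    # 250 samples for 50% overlap, 50 for 90%
    n_channels, n_samples = eeg.shape
    windows = [
        eeg[:, start:start + win_len]
        for start in range(0, n_samples - win_len + 1, step)
    ]
    return np.stack(windows)                       # shape: (n_windows, n_channels, win_len)

# Example with a synthetic 19-channel, 60-second recording.
rng = np.random.default_rng(0)
dummy = rng.standard_normal((19, 60 * 500))
print(segment_windows(dummy, overlap=0.5).shape)   # (119, 19, 500)
print(segment_windows(dummy, overlap=0.9).shape)   # (591, 19, 500)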

3.3. Feature Extraction Techniques

As mentioned above, several techniques for feature extraction from EEG data were considered. These include the Singular Value Decomposition (SVD) Entropy, the Higuchi Fractal Dimension (HFD) based on fractal geometry, the Zero-Crossing Rate statistical indicator, the Detrended Fluctuation Analysis (DFA), and the Hjorth parameters. The PyEEG and Antropy Python libraries were used, and each technique is concisely detailed in the following parts of the text.
The Singular Value Decomposition (SVD) Entropy indicator is based on time series complexity. It evaluates the necessary number of orthogonal vectors for accurate data representation [44,45,46]. Mathematically,
$H_{SVD} = -\sum_{i=1}^{M} \bar{\sigma}_i \log_2(\bar{\sigma}_i)$
where $\bar{\sigma}_i$ are the embedding matrix’s normalised singular values. SVD Entropy correlates with the complexity of the underlying data.
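As an illustration of the formula above, the following is a minimal NumPy sketch of SVD Entropy computed over a time-delay embedding; the embedding order and delay are illustrative assumptions rather than the exact settings of the PyEEG/Antropy implementations used in this study.

import numpy as np

def svd_entropy(x, order=3, delay=1):
    """SVD Entropy of a 1-D signal via a time-delay embedding matrix."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    # Rows of the embedding matrix are delayed copies of the signal.
    emb = np.array([x[i * delay : i * delay + n] for i in range(order)]).T
    s = np.linalg.svd(emb, compute_uv=False)     # singular values
    s_norm = s / s.sum()                         # normalised singular values (sigma bar)
    return -np.sum(s_norm * np.log2(s_norm))

rng = np.random.default_rng(0)
print(svd_entropy(rng.standard_normal(500)))     # close to log2(3) ~ 1.58 for white noise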
The Higuchi Fractal Dimension (HFD) technique focuses on the non-parametric time series analysis based on generating new synthetic signals by a systematic procedure that sub-samples from the original data [47]. Mathematically,
$X_k^m = \left\{ X(m),\, X(m+k),\, X(m+2k),\, \ldots,\, X\left(m + \left\lfloor \frac{N-m}{k} \right\rfloor k\right) \right\}$
where $k$ is the interval length, and $m$ is the initial point. The time series is subsequently utilised to calculate the average curve length $L_m(k)$,
$L_m(k) = \frac{\left( \sum_{i=1}^{\lfloor (N-m)/k \rfloor} \left| X(m+ik) - X(m+(i-1)k) \right| \right) y}{k}$
where the term $y$ is a normalisation factor denoted as
$y = \frac{N-1}{\left\lfloor \frac{N-m}{k} \right\rfloor k}$
$L_m(k)$ adheres to a power law, defining the Fractal Dimension $D$. HFD’s applicability to non-stationary series differentiates it from methods like the Spectral and Hurst exponents [48].
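A minimal NumPy sketch of this procedure is given below, with an illustrative choice of the maximum interval length k_max; it estimates D as the slope of log L(k) versus log(1/k).

import numpy as np

def higuchi_fd(x, k_max=10):
    """Higuchi Fractal Dimension of a 1-D signal (k_max is an illustrative choice)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    log_L, log_inv_k = [], []
    for k in range(1, k_max + 1):
        Lm = []
        for m in range(k):                       # initial points m = 0, ..., k-1
            n_max = (N - 1 - m) // k             # number of usable increments
            if n_max < 1:
                continue
            idx = m + np.arange(n_max + 1) * k   # sub-sampled series X_k^m
            diffs = np.abs(np.diff(x[idx])).sum()
            norm = (N - 1) / (n_max * k)         # the normalisation term y
            Lm.append(diffs * norm / k)
        log_L.append(np.log(np.mean(Lm)))
        log_inv_k.append(np.log(1.0 / k))
    D, _ = np.polyfit(log_inv_k, log_L, 1)       # slope of the power-law fit
    return D

rng = np.random.default_rng(0)
print(higuchi_fd(rng.standard_normal(500)))      # close to 2 for white noise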
The Zero-Crossing Rate (ZCR) represents the rate of the change in the sign of a signal, essentially counting how many transitions of the zero amplitude exist within a time frame [49,50]. Mathematically,
$Z = \frac{\sum_{t=1}^{T-1} \mathbb{1}_{\mathbb{R}_{<0}}(s_t\, s_{t-1})}{T-1}$
where
  • $Z$ represents the Zero Crossing Rate.
  • $T$ is the total number of samples in the signal (or in a specified window/frame of the signal for localised analysis).
  • $s_t$ and $s_{t-1}$ are consecutive samples in the signal at times $t$ and $t-1$, respectively.
  • $\mathbb{1}_{\mathbb{R}_{<0}}(s_t s_{t-1})$ is an indicator function that evaluates to 1 if the product of $s_t$ and $s_{t-1}$ is less than 0 (indicating a zero crossing, where the sign of the signal changes between two consecutive samples), and 0 otherwise.
  • The denominator $(T-1)$ normalises the sum to account for the number of intervals between samples, providing a rate per sample interval.
The calculation starts by initialising a sum that will accumulate the total number of zero crossings. For each pair of consecutive samples ($s_t$ and $s_{t-1}$), starting from the second sample up to the last one, the product of these two samples is checked. The indicator function $\mathbb{1}_{\mathbb{R}_{<0}}(s_t s_{t-1})$ checks if the product of $s_t$ and $s_{t-1}$ is negative. This is a mathematical way of determining if the sign of the signal changes between these two samples:
  • If $s_t$ and $s_{t-1}$ have different signs, their product will be negative, indicating a zero crossing. The indicator function then contributes 1 to the sum.
  • If $s_t$ and $s_{t-1}$ have the same sign, their product will be positive (or zero if either sample is zero, depending on how zero values are treated), and the indicator function contributes 0 to the sum.
The sum of all instances where the indicator function equals 1 gives the total number of zero crossings. This sum is then normalised by dividing by $(T-1)$, which is the total number of intervals between consecutive samples in the signal or the window under consideration. The result of this division is the Zero Crossing Rate, $Z$, which gives a normalised measure of how frequently the signal crosses zero, thus providing insight into the signal’s properties, especially its frequency content and texture.
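The computation described above reduces to a few lines of NumPy; this sketch (an assumption, not the authors’ code) vectorises the indicator function as a sign comparison of consecutive samples.

import numpy as np

def zero_crossing_rate(s):
    """Zero Crossing Rate: fraction of consecutive-sample pairs with opposite signs."""
    s = np.asarray(s, dtype=float)
    crossings = (s[1:] * s[:-1]) < 0      # indicator of a sign change between s_t and s_{t-1}
    return crossings.sum() / (len(s) - 1)

rng = np.random.default_rng(0)
print(zero_crossing_rate(rng.standard_normal(500)))   # around 0.5 for zero-mean white noise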
The Detrended Fluctuation Analysis (DFA) focuses on non-stationary time series for persistent patterns and correlations. It involves integrating the series, segmenting it, detrending each segment, and calculating the fluctuation magnitude by contrasting the gradients of $\log(F(n))$ and $\log(n)$.
$F(n) = \sqrt{\frac{1}{N}\sum_{k=1}^{N} \left[ y(k) - y_n(k) \right]^2}$
where $y(k) - y_n(k)$ is the detrending step, with $y_n(k)$ denoting the local trend fitted within each segment of length $n$.
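A minimal sketch of DFA under these definitions follows; the segment sizes (scales) and the use of a linear local trend are illustrative assumptions, and the function returns the slope of log F(n) versus log n.

import numpy as np

def dfa_exponent(x, scales=(4, 8, 16, 32, 64)):
    """DFA scaling exponent: slope of log F(n) versus log n over the given scales."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                            # integrated (profile) series
    flucts = []
    for n in scales:
        n_seg = len(y) // n
        sq_residuals = 0.0
        for i in range(n_seg):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear trend y_n(k)
            sq_residuals += np.sum((seg - trend) ** 2)
        flucts.append(np.sqrt(sq_residuals / (n_seg * n))) # fluctuation F(n)
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha

rng = np.random.default_rng(0)
print(dfa_exponent(rng.standard_normal(500)))              # around 0.5 for uncorrelated noise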
The Hjorth parameters can be used to gauge insights into the characteristics of a signal, such as its regularity and frequency [46]. Three main parameters exist, namely, Activity, Mobility, and Complexity. Mathematically:
$Activity = \mathrm{var}(y(t))$
$Mobility = \sqrt{\dfrac{\mathrm{var}\left(\frac{dy(t)}{dt}\right)}{\mathrm{var}(y(t))}}$
$Complexity = \dfrac{Mobility\left(\frac{dy(t)}{dt}\right)}{Mobility(y(t))}$
Activity measures signal variance, mobility indicates signal frequency, and complexity assesses frequency variation. This research employs the average values of complexity and mobility derived from EEG signals.
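A minimal sketch of these three parameters, approximating the derivative with first-order differences, is shown below (an assumption about the discretisation, not the authors’ exact implementation).

import numpy as np

def hjorth_parameters(x):
    """Hjorth Activity, Mobility, and Complexity of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)                               # first derivative (finite differences)
    ddx = np.diff(dx)                             # second derivative
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / activity)
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility   # Mobility(dy/dt) / Mobility(y)
    return activity, mobility, complexity

rng = np.random.default_rng(0)
print(hjorth_parameters(rng.standard_normal(500)))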

3.4. Classification

The aforementioned techniques lead to features that are the input of various machine learning algorithms. A preliminary investigation of many learning algorithms was performed with the features extracted using the SVD Entropy technique and a 50% overlapping strategy among EEG windows. A 15-fold cross-validation strategy was employed, given the limited number of participants in the selected datasets. This preliminary training aimed to deliver an initial understanding of the capacity of each learning algorithm to fit the target feature (AD, FTD, HC) and minimise the computational power and time required for developing many models. The performance measures included the accuracy, precision, recall, F1-score averages, and area under the ROC curve (AUC) averages of the 15 surrogate models (Table 1).
We selected the four top-performing learning techniques based on their preliminary results (Table 1) to conduct a more focused comparative analysis, integrating sliding window techniques and feature extraction measures. This selection aimed to enhance the efficiency and depth of our model evaluation process by concentrating on the most promising algorithms. The four models are described in detail as follows:
  • K-Nearest Neighbors (KNN)—This technique can be used for supervised classification and regression, assuming that similar instances of a dataset cluster together. It is non-parametric and uses proximity to make classifications about the clustering of a new data point.
  • Random Forest (RF)—This technique constructs numerous decision trees during training. For supervised classification, it outputs the most frequent class predicted by the individual trees; for regression problems, it computes the average of their predictions. It incorporates randomness by sampling the data with replacement for each tree and by considering only a sub-set of features at each tree split, which helps prevent model overfitting.
  • XGBoost—It is an enhanced and efficient form of Gradient Boosting that integrates regularisation as a form of model complexity control to mitigate overfitting. It also includes system-level enhancements to improve efficiency and flexibility, forming a robust predictive ensemble learning technique.
  • Extra Trees (ET)—It is an ensemble of decision trees with additional randomness. It not only bootstraps data but also chooses random split points for features. This extra randomness may reduce variance and improve the generalisation of new data.
The above data-driven learning techniques were subsequently trained on the features extracted using the aforementioned techniques (Table 1). In these circumstances, the dataset was partitioned into training and test sets (80% and 20% of the original data, respectively). To circumvent the risk of data leakage at the subject level, stringent actions were taken to ensure that the training and testing sets were devoid of features extracted from identical subjects. This was meticulously verified by maintaining subject-specific annotations throughout the entirety of the feature extraction process. For instance, consider a scenario where the training set includes subjects 1 to 4: this process ensured that the EEG features associated with these subjects were not in the testing set. Any duplicated features spotted in the testing set were iteratively moved into the training data. This procedure aimed to guarantee the absence of subject data overlap between the sets while maintaining the 80:20 data distribution. Given the limited number of subjects, the training set was stratified using 15-fold cross-validation. Through this approach, we divided the training data into 15 folds, using 14 for training and one for validation at each iteration. The top-performing models were selected and further tested on the unbalanced test set (20% of the overall data). Note that these test data were not augmented with SMOTE but left in their original form. The above training mechanism was performed twice: once with the 50% and once with the 90% overlapping strategy among the EEG segmented windows. Three classification tasks were devised: one for discriminating AD patients from healthy controls, one for discriminating FTD patients from healthy controls, and one for discriminating AD from FTD patients.
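The sketch below illustrates this procedure with scikit-learn and imbalanced-learn; it is an assumed reconstruction, not the authors’ code, where X is the window-level feature matrix, y the class labels, and groups the subject identifiers (all hypothetical names). SMOTE is applied to the training portion only, so the test set keeps its original class distribution.

import numpy as np
from sklearn.model_selection import GroupShuffleSplit, StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from imblearn.over_sampling import SMOTE

def train_and_evaluate(X, y, groups, seed=0):
    # Subject-wise 80/20 split: all windows of a subject land in the same partition.
    splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=seed)
    train_idx, test_idx = next(splitter.split(X, y, groups))
    X_tr, y_tr = X[train_idx], y[train_idx]
    X_te, y_te = X[test_idx], y[test_idx]

    # Balance the training classes only; the test set stays unbalanced.
    X_tr, y_tr = SMOTE(random_state=seed).fit_resample(X_tr, y_tr)

    clf = KNeighborsClassifier(n_neighbors=6, metric="euclidean", weights="uniform")
    cv = StratifiedKFold(n_splits=15, shuffle=True, random_state=seed)
    cv_accuracy = cross_val_score(clf, X_tr, y_tr, cv=cv, scoring="accuracy")

    clf.fit(X_tr, y_tr)
    return cv_accuracy.mean(), clf.score(X_te, y_te)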
To further enhance the robustness of the designed multi-phase pipeline across the three classification tasks (Figure 1), the best-performing configuration (feature extraction technique/learning strategy) was repeated by employing five different seeds. This was aimed at generating different training and testing data five times. The averaged metrics from these five runs shaped the final results. Following the classification phase, we employed topographical brain mapping techniques to improve our predictive models’ interpretability. These maps were instrumental in pinpointing the cerebral regions most crucial for distinguishing between Alzheimer’s disease (AD) patients, frontotemporal dementia (FTD) patients, and healthy controls (HC). We selected the classification model with the highest accuracy and computed a feature importance array for each feature extraction technique, as detailed in Section 3.3. This array delineated the significance of features derived from each EEG channel in accurately predicting AD and FTD. Subsequently, we visualised these feature importance scores on topographic brain maps, illuminating the brain areas with elevated significance in the classification process.
The primary objective of this analytical step was to derive deeper insights into the specific brain regions that are most influential in the differentiation between AD patients, FTD patients, and healthy controls, thereby enhancing our understanding of the neurophysiological underpinnings of these disorders. This approach not only aids in validating the predictive models but also contributes to the broader field of neuroscientific research by identifying potential biomarkers and neuroanatomical correlates of these neurodegenerative diseases.
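The following sketch shows one way such channel-wise importance scores could be projected onto a scalp map with MNE-Python; it is an assumption rather than the authors’ code, and the older 10–20 labels T3/T4/T5/T6 used in the dataset are mapped to their modern equivalents T7/T8/P7/P8 so that MNE’s built-in montage resolves the electrode positions.

import numpy as np
import mne

# The 19 channels of Section 3.1, with T3/T4/T5/T6 renamed to T7/T8/P7/P8.
channels = ["Fp1", "F7", "F3", "T7", "C3", "P7", "P3", "O1", "Fz", "Cz",
            "Pz", "Fp2", "F4", "F8", "C4", "T8", "P4", "P8", "O2"]
info = mne.create_info(ch_names=channels, sfreq=500.0, ch_types="eeg")
info.set_montage("standard_1020")

importances = np.random.default_rng(0).random(19)   # placeholder for a model's feature importance array
mne.viz.plot_topomap(importances, info)             # draws the scalp map of channel importances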

3.5. Hyperparameter Tuning

Optimising model hyperparameters was systematically conducted utilising the GridSearchCV module in Python. The aim was to identify the most effective parameter configurations for each machine learning model examined. Summarised below are the optimal settings discovered for each model; a sketch of the tuning procedure follows the list.
  • K-Nearest Neighbors (KNN):
    (a) leaf_size: 30, indicating the smallest number of points a node can hold.
    (b) metric: Utilised euclidean to measure point distances.
    (c) n_neighbors: Set to 6, denoting the count of neighbours involved in decision-making.
    (d) p: Configured as 2, corresponding to the Euclidean distance.
    (e) weights: Applied as uniform, assigning equal weight to all neighbours.
  • Random Forest (RF):
    (a) criterion: gini, as the split quality metric.
    (b) n_estimators: 120, determining the forest’s tree quantity.
    (c) max_depth: None, allowing trees to grow unrestricted.
    (d) min_samples_split: 28, minimal samples to split an internal node.
    (e) min_samples_leaf: 10, the lowest number of samples for a tree’s leaf node.
  • XGBoost:
    (a) eta (learning_rate): 0.1, to control overfitting by moderating step size.
    (b) n_estimators: 280, defining the count of boosting stages to perform.
    (c) max_depth: 8, limiting the tree depth.
    (d) colsample_bytree: 1, determining the fraction of features selected for tree construction.
    (e) reg_alpha: 0.05, introducing L1 regularisation.
  • Extra Trees (ET):
    (a) criterion: gini, for evaluating splits.
    (b) n_estimators: 150, the tree count developed.
    (c) max_depth: None, implying no depth limitation.
    (d) min_samples_split: 30, the minimum required samples to split a node.
    (e) min_samples_leaf: 15, the smallest count of samples a leaf node must have.
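As referenced above, the sketch below shows the grid search for the KNN model as one example; the candidate grids are illustrative assumptions, while the commented best parameters match the values reported in the list.

from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier

param_grid = {
    "n_neighbors": [4, 6, 8, 10],
    "weights": ["uniform", "distance"],
    "metric": ["euclidean", "manhattan"],
    "leaf_size": [30, 50],
    "p": [2],
}
search = GridSearchCV(
    KNeighborsClassifier(),
    param_grid,
    cv=StratifiedKFold(n_splits=15, shuffle=True, random_state=0),
    scoring="f1_macro",
    n_jobs=-1,
)
# search.fit(X_train, y_train) explores all combinations; search.best_params_ would then hold
# the chosen settings, e.g. {'leaf_size': 30, 'metric': 'euclidean', 'n_neighbors': 6, 'p': 2, 'weights': 'uniform'}.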

3.6. Model Evaluation

Each trained model was evaluated using different evaluation metrics, listed below; a sketch of their computation with scikit-learn follows the list:
  • Sensitivity quantifies the capability of a model to correctly identify the two groups of subjects (having Alzheimer’s disease or healthy adults). It represents the rate of true positives (TP) over the actual positives. A higher sensitivity is synonymous with the robustness of a model in reducing mis-classifications of AD patients from healthy controls.
    $Sensitivity = \frac{TP}{TP + FN}$
  • Precision reflects the accuracy of a model in predicting positive outcomes, calculated as the fraction of true positives amongst all instances classified as positive by the model. A high precision implies a reduction in the occurrence of false positives.
    $Precision = \frac{TP}{TP + FP}$
  • Accuracy measures the overall correctness of a model’s classifications, expressed as the proportion of correct predictions over all predictions made.
    $Accuracy = \frac{TP + TN}{TP + TN + FP + FN}$
  • Area under the ROC curve represents a model’s ability to discriminate between AD patients and HC. It depicts the true positive rate against the false positive rate (on the y and x axes, respectively) at varying thresholds. ROC stands for Receiver Operating Characteristic and is expressed mathematically as follows:
    $TPR = \frac{TP}{TP + FN}$
    $FPR = \frac{FP}{FP + TN}$
    The AUC-ROC is an aggregate measure of the overall performance of a model, which is its capability to distinguish between positive and negative instances across all possible classification thresholds.
  • F1 score: The F1 score is a measure used to evaluate the performance of a model or a test, especially in cases where the balance between precision and recall is crucial. It is essentially a way to capture the balance between the importance of precision—how many of the items identified were relevant—and recall—how many of the relevant items were identified. This score is particularly valuable in situations where an uneven class distribution exists. The F1 score is the harmonic mean of precision and recall, providing a single metric that balances both concerns. The formula to calculate it is as follows:
    $F1 = \frac{2 \cdot Precision \cdot Recall}{Precision + Recall}$
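As referenced above, these metrics map directly onto scikit-learn’s metric functions; in this sketch, y_true, y_pred, and y_scores are placeholder names for the held-out labels, the predicted labels, and the predicted probabilities of the positive class.

from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)

def evaluate(y_true, y_pred, y_scores):
    """Collect the evaluation metrics of Section 3.6 for a binary classification task."""
    return {
        "sensitivity": recall_score(y_true, y_pred),    # TP / (TP + FN)
        "precision": precision_score(y_true, y_pred),   # TP / (TP + FP)
        "accuracy": accuracy_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
        "auc": roc_auc_score(y_true, y_scores),
    }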

4. Results and Discussion

Table 2 shows the performance metrics (from Section 3.6) of our models, evaluated using the original, unbalanced dataset. On the one hand, the results demonstrate that Alzheimer’s disease subjects could be discriminated with a superior precision (94–96%) compared to the frontotemporal dementia subjects and healthy controls. On the other hand, the discrimination of FTD subjects always had the lowest precision across comparisons (86–88%). Sensitivity scores were always above 90% across the three comparisons, along with the F1-scores.
The observed lower precision in discriminating frontotemporal dementia (FTD) subjects from healthy controls, as compared to discriminating Alzheimer’s disease (AD) subjects, can be attributed to several factors intrinsic to the nature of these neurological conditions and the characteristics of the EEG signals they produce. The discrepancy in outcomes using feature extraction techniques (SVD Entropy, Detrended Fluctuation Analysis, Zero Crossing Rate, Higuchi Fractal Dimensions, Hjorth parameters) and machine learning algorithms (XGBoost, Random Forest, Extra Trees) may stem from the following:
  • Overlap in EEG signal characteristics: FTD and healthy control EEG signals might share more similar characteristics than those observed between AD and healthy controls. FTD, particularly in its early stages, can manifest subtle EEG changes that are less distinct than those seen in AD, where more pronounced disruptions in brain activity patterns are common. This overlap makes it challenging for the applied feature extraction techniques to capture distinctive features that accurately differentiate FTD from healthy brain activity.
  • Sensitivity and specificity of features: The feature extraction techniques employed may have differing sensitivities and specificities to the pathological changes in brain activity characteristic of FTD versus AD. For instance, features that are highly sensitive to global cognitive decline and widespread neural network disruption in AD may not be as effective in detecting the more localised or less severe disruptions typical of FTD.
  • Stage of the disease: The stage of disease at the time of EEG recording could also impact the precision of discrimination. Early-stage FTD may produce very subtle EEG abnormalities that are difficult to distinguish from normal ageing processes, whereas AD-related changes, such as increased slow-wave activity, might be more evident and easier to detect even at earlier stages.
  • Technical and methodological limitations: The choice of window size for EEG analysis, preprocessing steps, and the specific parameters used in both feature extraction and machine learning algorithms could preferentially favour the detection of AD over FTD. Optimising these methodologies specifically for FTD might require adjustments to better capture the nuanced differences in EEG signals associated with FTD.
  • Variability within FTD spectrum: FTD encompasses a spectrum of disorders with heterogeneous clinical presentations, including behavioural variant FTD (bvFTD) and primary progressive aphasias. This variability contributes to a wider range of EEG signal manifestations, complicating the task of identifying a consistent set of features that distinguish FTD patients from healthy individuals across all subtypes.
Figure 2 shows the performance of models trained on balanced data with SMOTE in distinguishing Alzheimer’s disease subjects from healthy controls (top row), frontotemporal dementia subjects from healthy controls (middle row), and Alzheimer’s disease versus frontotemporal dementia subjects (bottom row) with the 50% and 90% overlapping windows strategy, over 5 runs. The details of these results are presented in Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8.
While consistent differences in accuracy across learning techniques have not been found, the Singular Value Decomposition Entropy technique consistently helped develop predictive models with the highest accuracy, regardless of the underlying machine learning technique. This can also be observed in Figure 3, where the average accuracy of each sliding window strategy is presented, grouped across the feature extraction measures. The same cannot be said for the other feature extraction techniques, which demonstrated no consistency across machine learning models and data overlapping strategies.
The higher accuracy and F1-score associated with SVD Entropy in this study can be attributed to the following functionalities that it exhibits:
  • The Singular Value Decomposition (SVD) process breaks down the EEG signal into matrices that ignore the noise and preserve the principal characteristics of the EEG signal.
  • Furthermore, SVD reduces the dimensionality of the data, abstracting it into a form that retains essential information while discarding redundancy. This abstraction makes it easier for machine learning models to process and learn from the data, enhancing predictive performance.
  • Additionally, assessing the entropy in the distribution of singular values obtained from SVD quantifies the randomness and complexity of the signal. This is crucial for EEG analysis, where the complexity of brain activity can provide insights into neurological conditions.
Concerning the overlapping strategies, using the 90% overlapping strategy across consecutive EEG windows clearly exhibited utility when discriminating subjects with neurodegenerative disorders (AD and FTD versus HC).
Following the classification phase, we employed a topographical brain mapping technique to improve the interpretability of our predictive models. These maps were instrumental in pinpointing the cerebral regions most crucial for distinguishing AD/FTD patients from HC and AD patients from FTD patients (Figure 4, Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9). We selected the classification model with the highest accuracy (90% overlap, SVD entropy, and a tree classifier) and computed a feature importance array for each feature extraction technique, as detailed in Section 3.3. This array delineated the significance of features derived from each EEG channel in accurately predicting disease. Subsequently, we visualised these feature importance scores on topographic brain maps, illuminating the brain areas with elevated significance in the classification process. The primary objective of this analytical step was to derive deeper insights into the specific brain regions that are most influential in the differentiation between AD/FTD patients and healthy controls, thereby enhancing our understanding of the neurophysiological underpinnings of Alzheimer’s disease and frontotemporal dementia. This approach not only aids in validating the predictive models but also contributes to the broader field of neuroscientific research by identifying potential biomarkers and neuroanatomical correlates of these neurodegenerative diseases.
Topographic maps indicated the importance of the occipital, frontal, and temporal lobes in distinguishing AD from HC (Figure 4 and Figure 5), highlighting specific EEG channels (O2, T5, O1, Fp2, Fp1, F7, F8, T3, T4). Similar regions were critical for differentiating FTD from HC (Figure 6 and Figure 7), but with a different order of significance (O2, Fp1, T3, O1, F7, Fp2, F3, T4, T5), with a notable emphasis on the frontal lobe suggesting its effectiveness in capturing FTD features from the frontal region. In contrast to this, when differentiating AD patients from FTD patients (Figure 8 and Figure 9), the topographic plots show the frontal and temporal regions, especially channels T3, Fp1, Fp2, F7, F8, T4, F3, Fz, and F4, as being more important than the occipital region. This finding highlights a key distinction from feature importance patterns in AD/FTD vs. HC comparisons, where occipital dominance was observed.
Topographic analyses show the occipital, temporal, and frontal regions’ involvement in distinguishing AD from HC. This aligns with empirical observations about AD’s impact areas. The frontal and temporal regions are primarily involved in differentiating AD from FTD, which is consistent with FTD’s primary impact areas.
While topographic maps highlight the occipital region in distinguishing FTD from HC, this may seem unexpected given FTD’s primary impact on frontal and temporal regions [51,52]. However, this aligns with the involvement of the occipital lobe in advanced FTD stages, where it shares degeneration patterns with AD [53,54]. This overlap and variability in the FTD presentation underscores the need for accurate differential diagnosis between these diseases [54].

5. Conclusions

Alzheimer’s disease and frontotemporal dementia, resulting from neuronal damage, impair cognitive functions. Effective denoising and feature extraction from complex, noisy EEG data are essential for their early detection, focusing on dimensionality reduction and key biomarker identification.
Previous research on Alzheimer’s and frontotemporal dementia used limited feature extraction methods without thorough comparison. This study addressed this by evaluating multiple techniques for distinguishing AD and FTD conditions and healthy controls using EEG data. We trained models on features from EEG windows with 50% and 90% overlap, employing classifiers like K-Nearest Neighbors, Random Forest, XGBoost, and Extra Trees. The findings reveal that an increased overlap in EEG windows enhances model accuracy, particularly highlighting SVD entropy’s effectiveness over other techniques. Our model accurately distinguishes AD from FTD, pinpointing critical features in frontal, temporal, and occipital regions. This advances early-stage diagnosis by highlighting distinct EEG patterns specific to each disease.
Future directions include expanding this pipeline’s validation across broader datasets and more diverse subject groups, including AD, FTD, and healthy controls, and extending its utility to other neurological disorders such as schizophrenia and Parkinson’s disease. A further investigation is needed to determine the optimal EEG window overlap for effective feature extraction.

Author Contributions

Conceptualization: U.L., A.V.C. and L.L.; Formal analysis: U.L.; Funding acquisition: L.L.; Investigation: U.L.; Methodology: U.L. and A.V.C.; Supervision: A.V.C. and L.L.; Validation: A.V.C. and L.L.; Visualization: U.L.; Writing—original draft: U.L.; Writing—review and editing: U.L., A.V.C. and L.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was conducted with the financial support of the Science Foundation Ireland Centre for Research Training in Digitally-Enhanced Reality (d-real) under Grant No. 18/CRT/6224. For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The EEG data utilised in our study were sourced from the OpenNeuro repository, which exclusively hosts datasets that have received ethical approval for public sharing and use. The dataset documentation confirms ethical approvals in accordance with the Declaration of Helsinki and approval by the Scientific and Ethics Committee of AHEPA University Hospital, Aristotle University of Thessaloniki, under protocol number 142/12-04-2023. Data and ethical approvals can be viewed online at https://openneuro.org/datasets/ds004504/versions/1.0.7 (accessed on 15 March 2023).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Breijyeh, Z.; Karaman, R. Comprehensive review on Alzheimer’s disease: Causes and treatment. Molecules 2020, 25, 5789. [Google Scholar] [CrossRef] [PubMed]
  2. World Health Organization Dementia. Available online: https://www.who.int/news-room/fact-sheets/detail/dementia (accessed on 20 May 2023).
  3. Sperling, R.A.; Aisen, P.S.; Beckett, L.A.; Bennett, D.A.; Craft, S.; Fagan, A.M.; Iwatsubo, T.; Jack, C.R.; Kaye, J.; Montine, T.J.; et al. Toward defining the preclinical stages of Alzheimer’s disease: Recommendations from the National Institute on Aging-Alzheimer’s Association workgroups on diagnostic guidelines for Alzheimer’s disease. Alzheimer’s Dement. 2011, 7, 280–292. [Google Scholar] [CrossRef] [PubMed]
  4. Oltu, B.; Akşahin, M.F.; Kibaroğlu, S. A novel electroencephalography based approach for Alzheimer’s disease and mild cognitive impairment detection. Biomed. Signal Process. Control 2021, 63, 102223. [Google Scholar] [CrossRef]
  5. Nobukawa, S.; Yamanishi, T.; Nishimura, H.; Wada, Y.; Kikuchi, M.; Takahashi, T. Atypical temporal-scale-specific fractal changes in Alzheimer’s disease EEG and their relevance to cognitive decline. Cogn. Neurodynamics 2019, 13, 1–11. [Google Scholar] [CrossRef] [PubMed]
  6. Maturana-Candelas, A.; Gómez, C.; Poza, J.; Pinto, N.; Hornero, R. EEG Characterization of the Alzheimer’s Disease Continuum by Means of Multiscale Entropies. Entropy 2019, 21, 544. [Google Scholar] [CrossRef] [PubMed]
  7. Amezquita-Sanchez, J.P.; Mammone, N.; Morabito, F.C.; Marino, S.; Adeli, H. A novel methodology for automated differential diagnosis of mild cognitive impairment and the Alzheimer’s disease using EEG signals. J. Neurosci. Methods 2019, 322, 88–95. [Google Scholar] [CrossRef] [PubMed]
  8. Lal, U.; Chikkankod, A.V.; Longo, L. Fractal dimensions and machine learning for detection of Parkinson’s disease in resting-state electroencephalography. Neural Comput. Appl. 2024. [Google Scholar] [CrossRef]
  9. Hinchliffe, C.; Yogarajah, M.; Elkommos, S.; Tang, H.; Abasolo, D. Entropy Measures of Electroencephalograms towards the Diagnosis of Psychogenic Non-Epileptic Seizures. Entropy 2022, 24, 1348. [Google Scholar] [CrossRef] [PubMed]
  10. Khan, I.M.; Khan, M.M.; Farooq, O. Epileptic Seizure Detection using EEG Signals. In Proceedings of the 2022 5th International Conference on Computing and Informatics (ICCI), New York, NY, USA, 19–21 October 2022; pp. 111–117. [Google Scholar] [CrossRef]
  11. Bagherzadeh, S.; Shahabi, M.S.; Shalbaf, A. Detection of schizophrenia using hybrid of deep learning and brain effective connectivity image from electroencephalogram signal. Comput. Biol. Med. 2022, 146, 105570. [Google Scholar] [CrossRef] [PubMed]
  12. Safi, M.S.; Safi, S.M.M. Early detection of Alzheimer’s disease from EEG signals using Hjorth parameters. Biomed. Signal Process. Control 2021, 65, 102338. [Google Scholar] [CrossRef]
  13. Şeker, M.; Özbek, Y.; Yener, G.; Özerdem, M.S. Complexity of EEG Dynamics for Early Diagnosis of Alzheimer’s Disease Using Permutation Entropy Neuromarker. Comput. Methods Programs Biomed. 2021, 206, 106116. [Google Scholar] [CrossRef] [PubMed]
  14. Li, R.; Nguyen, T.; Potter, T.; Zhang, Y. Dynamic cortical connectivity alterations associated with Alzheimer’s disease: An EEG and fNIRS integration study. NeuroImage Clin. 2019, 21, 101622. [Google Scholar] [CrossRef] [PubMed]
  15. Jiao, B.; Li, R.; Zhou, H.; Qing, K.; Liu, H.; Pan, H.; Lei, Y.; Fu, W.; Wang, X.; Xiao, X.; et al. Neural biomarker diagnosis and prediction to mild cognitive impairment and Alzheimer’s disease using EEG technology. Alzheimer’s Res. Ther. 2023, 15, 32. [Google Scholar] [CrossRef]
  16. Bairagi, V.K.; Elgandelwar, S.M. Early diagnosis of Alzheimer disease using EEG signals: The role of pre-processing. Int. J. Biomed. Eng. Technol. 2023, 41, 317–339. [Google Scholar] [CrossRef]
  17. Puri, D.; Nalbalwar, S.; Nandgaonkar, A.; Wagh, A. EEG-Based Diagnosis of Alzheimer’s Disease Using Kolmogorov Complexity. In Applied Information Processing Systems; Iyer, B., Ghosh, D., Balas, V.E., Eds.; Springer: Singapore, 2022; pp. 157–165. [Google Scholar]
  18. AlSharabi, K.; Bin Salamah, Y.; Abdurraqeeb, A.M.; Aljalal, M.; Alturki, F.A. EEG Signal Processing for Alzheimer’s Disorders Using Discrete Wavelet Transform and Machine Learning Approaches. IEEE Access 2022, 10, 89781–89797. [Google Scholar] [CrossRef]
  19. Miltiadous, A.; Tzimourta, K.D.; Aspiotis, V.; Afrantou, T.; Tsipouras, M.G.; Giannakeas, N.; Glavas, E.; Tzallas, A.T. Enhanced Alzheimer’s disease and Frontotemporal Dementia EEG Detection: Combining lightGBM Gradient Boosting with Complexity Features. In Proceedings of the 2023 IEEE 36th International Symposium on Computer-Based Medical Systems (CBMS), L’Aquila, Italy, 22–24 June 2023; pp. 876–881. [Google Scholar] [CrossRef]
  20. Gifford, A.; Praschan, N.; Newhouse, A.; Chemali, Z. Biomarkers in frontotemporal dementia: Current landscape and future directions. Biomarkers Neuropsychiatry 2023, 8, 100065. [Google Scholar] [CrossRef]
  21. Al-Qazzaz, N.K.; Ali, S.; Ahmad, S.A.; Escudero, J. Classification enhancement for post-stroke dementia using fuzzy neighborhood preserving analysis with QR-decomposition. In Proceedings of the 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Jeju, Republic of Korea, 11–15 July 2017; IEEE: Toulouse, France, 2017. [Google Scholar] [CrossRef]
  22. Al-Qazzaz, N.K.; Ali, S.H.B.M.; Ahmad, S.A.; Escudero, J. Optimal EEG Channel Selection for Vascular Dementia Identification Using Improved Binary Gravitation Search Algorithm. In Proceedings of the 2nd International Conference for Innovation in Biomedical Engineering and Life Sciences, Penang, Malaysia, 10–13 December 2017; Ibrahim, F., Usman, J., Ahmad, M.Y., Hamzah, N., Teh, S.J., Eds.; Springer: Singapore, 2018; pp. 125–130. [Google Scholar]
  23. Pirrone, D.; Weitschek, E.; Di Paolo, P.; De Salvo, S.; De Cola, M.C. EEG Signal Processing and Supervised Machine Learning to Early Diagnose Alzheimer’s Disease. Appl. Sci. 2022, 12, 5413. [Google Scholar] [CrossRef]
  24. Bi, X.; Wang, H. Early Alzheimer’s disease diagnosis based on EEG spectral images using deep learning. Neural Netw. 2019, 114, 119–135. [Google Scholar] [CrossRef] [PubMed]
  25. Longo, L.; Brcic, M.; Cabitza, F.; Choi, J.; Confalonieri, R.; Ser, J.D.; Guidotti, R.; Hayashi, Y.; Herrera, F.; Holzinger, A.; et al. Explainable Artificial Intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions. Inf. Fusion 2024, 106, 102301. [Google Scholar] [CrossRef]
  26. Alves, C.L.; Pineda, A.M.; Roster, K.; Thielemann, C.; Rodrigues, F.A. EEG functional connectivity and deep learning for automatic diagnosis of brain disorders: Alzheimer’s disease and schizophrenia. J. Phys. Complex. 2022, 3, 025001. [Google Scholar] [CrossRef]
  27. Venugopalan, J.; Tong, L.; Hassanzadeh, H.R.; Wang, M.D. Multimodal deep learning models for early detection of Alzheimer’s disease stage. Sci. Rep. 2021, 11, 3254. [Google Scholar] [CrossRef]
  28. Chikkankod, A.V.; Longo, L. On the Dimensionality and Utility of Convolutional Autoencoder’s Latent Space Trained with Topology-Preserving Spectral EEG Head-Maps. Mach. Learn. Knowl. Extr. 2022, 4, 1042–1064. [Google Scholar] [CrossRef]
  29. Chedid, N.; Tabbal, J.; Kabbara, A.; Allouch, S.; Hassan, M. The development of an automated machine learning pipeline for the detection of Alzheimer’s Disease. Sci. Rep. 2022, 12, 18137. [Google Scholar] [CrossRef] [PubMed]
  30. Lin, N.; Gao, J.; Mao, C.; Sun, H.; Lu, Q.; Cui, L. Differences in Multimodal Electroencephalogram and Clinical Correlations Between Early-Onset Alzheimer’s Disease and Frontotemporal Dementia. Front. Neurosci. 2021, 15, 687053. [Google Scholar] [CrossRef] [PubMed]
  31. Miltiadous, A.; Tzimourta, K.D.; Giannakeas, N.; Tsipouras, M.G.; Afrantou, T.; Ioannidis, P.; Tzallas, A.T. Alzheimer’s Disease and Frontotemporal Dementia: A Robust Classification Method of EEG Signals and a Comparison of Validation Methods. Diagnostics 2021, 11, 1437. [Google Scholar] [CrossRef] [PubMed]
  32. Krishnan, P.T.; Joseph Raj, A.N.; Balasubramanian, P.; Chen, Y. Schizophrenia detection using Multivariate Empirical Mode Decomposition and entropy measures from multichannel EEG signal. Biocybern. Biomed. Eng. 2020, 40, 1124–1139. [Google Scholar] [CrossRef]
  33. Raees, P.C.M.; Thomas, V. Automated detection of Alzheimer’s Disease using Deep Learning in MRI. J. Physics: Conf. Ser. 2021, 1921, 012024. [Google Scholar] [CrossRef]
  34. Özbek, Y.; Fide, E.; Yener, G.G. Resting-state EEG alpha/theta power ratio discriminates early-onset Alzheimer’s disease from healthy controls. Clin. Neurophysiol. 2021, 132, 2019–2031. [Google Scholar] [CrossRef] [PubMed]
  35. Cura, O.K.; Akan, A.; Yilmaz, G.C.; Ture, H.S. Detection of Alzheimer’s Dementia by Using Signal Decomposition and Machine Learning Methods. Int. J. Neural Syst. 2022, 32, 2250042. [Google Scholar] [CrossRef]
  36. Latchoumane, C.F.V.; Vialatte, F.B.; Solé-Casals, J.; Maurice, M.; Wimalaratna, S.R.; Hudson, N.; Jeong, J.; Cichocki, A. Multiway array decomposition analysis of EEGs in Alzheimer’s disease. J. Neurosci. Methods 2012, 207, 41–50. [Google Scholar] [CrossRef]
  37. Kang, Y.; Escudero, J.; Shin, D.; Ifeachor, E.; Marmarelis, V. Principal Dynamic Mode Analysis of EEG Data for Assisting the Diagnosis of Alzheimer’s Disease. IEEE J. Transl. Eng. Health Med. 2015, 3, 2401005. [Google Scholar] [CrossRef] [PubMed]
  38. Alessandrini, M.; Biagetti, G.; Crippa, P.; Falaschetti, L.; Luzzi, S.; Turchetti, C. EEG-Based Alzheimer’s Disease Recognition Using Robust-PCA and LSTM Recurrent Neural Network. Sensors 2022, 22, 3696. [Google Scholar] [CrossRef]
  39. Miltiadous, A.; Tzimourta, K.D.; Afrantou, T.; Ioannidis, P.; Grigoriadis, N.; Tsalikakis, D.G.; Angelidis, P.; Tsipouras, M.G.; Glavas, E.; Giannakeas, N.; et al. A dataset of 88 EEG recordings from: Alzheimer’s disease, Frontotemporal dementia and Healthy subjects. OpenNeuro 2023, 1, 88. [Google Scholar] [CrossRef]
  40. Saideepthi, P.; Chowdhury, A.; Gaur, P.; Pachori, R.B. Sliding Window along with EEGNet based Prediction of EEG Motor Imagery. IEEE Sensors J. 2023, 2023, 3270281. [Google Scholar] [CrossRef]
  41. Pratyusha, K.; Devi, K.S.; Ari, S. Motor Imagery based EEG Signal Classification using Multi-scale CNN Architecture. In Proceedings of the 2022 International Conference on Signal and Information Processing (IConSIP), Pune, India, 26–27 August 2022; pp. 1–5. [Google Scholar] [CrossRef]
  42. Hwang, J.; Park, S.; Chi, J. Improving Multi-Class Motor Imagery EEG Classification Using Overlapping Sliding Window and Deep Learning Model. Electronics 2023, 12, 1186. [Google Scholar] [CrossRef]
  43. Ruiz de Miras, J.; Ibáñez-Molina, A.; Soriano, M.; Iglesias-Parro, S. Schizophrenia classification using machine learning on resting state EEG signal. Biomed. Signal Process. Control 2023, 79, 104233. [Google Scholar] [CrossRef]
  44. Weng, X.; Perry, A.; Maroun, M.; Vuong, L.T. Singular Value Decomposition and Entropy Dimension of Fractals. arXiv 2022, arXiv:2211.12338. [Google Scholar]
  45. Roberts, S.J.; Penny, W.; Rezek, I. Temporal and spatial complexity measures for electroencephalogram based brain–computer interfacing. Med. Biol. Eng. Comput. 1999, 37, 93–98. [Google Scholar] [CrossRef] [PubMed]
  46. Bao, F.S.; Liu, X.; Zhang, C. PyEEG: An open source Python module for EEG/MEG feature extraction. Comput. Intell. Neurosci. 2011, 2011, 406391. [Google Scholar] [CrossRef] [PubMed]
  47. Shamsi, E.; Ahmadi-Pajouh, M.A.; Seifi Ala, T. Higuchi fractal dimension: An efficient approach to detection of brain entrainment to theta binaural beats. Biomed. Signal Process. Control 2021, 68, 102580. [Google Scholar] [CrossRef]
  48. la Torre, F.C.D.; González-Trejo, J.I.; Real-Ramírez, C.A.; Hoyos-Reyes, L.F. Fractal dimension algorithms and their application to time series associated with natural phenomena. J. Phys. Conf. Ser. 2013, 475, 012002. [Google Scholar] [CrossRef]
  49. Giannakopoulos, T.; Pikrakis, A. Chapter 4—Audio Features. In Introduction to Audio Analysis; Giannakopoulos, T., Pikrakis, A., Eds.; Academic Press: Oxford, UK, 2014; pp. 59–103. [Google Scholar] [CrossRef]
  50. Dwivedi, D.; Ganguly, A.; Haragopal, V. 6—Contrast between simple and complex classification algorithms. In Statistical Modeling in Machine Learning; Goswami, T., Sinha, G., Eds.; Academic Press: New York, NY, USA, 2023; pp. 93–110. [Google Scholar] [CrossRef]
  51. Convery, R.; Mead, S.; Rohrer, J.D. Review: Clinical, genetic and neuroimaging features of frontotemporal dementia. Neuropathol. Appl. Neurobiol. 2019, 45, 6–18. [Google Scholar] [CrossRef] [PubMed]
  52. Ahmed, R.M.; Bocchetta, M.; Todd, E.G.; Tse, N.Y.; Devenney, E.M.; Tu, S.; Caga, J.; Hodges, J.R.; Halliday, G.M.; Irish, M.; et al. Tackling clinical heterogeneity across the amyotrophic lateral sclerosis–frontotemporal dementia spectrum using a transdiagnostic approach. Brain Commun. 2021, 3, fcab257. [Google Scholar] [CrossRef] [PubMed]
  53. Agosta, F.; Sala, S.; Valsasina, P.; Meani, A.; Canu, E.; Magnani, G.; Cappa, S.F.; Scola, E.; Quatto, P.; Horsfield, M.A.; et al. Brain network connectivity assessed using graph theory in frontotemporal dementia. Neurology 2013, 81, 134–143. [Google Scholar] [CrossRef] [PubMed]
  54. Zhang, F.; Rakhimbekova, A.; Lashley, T.; Madl, T. Brain regions show different metabolic and protein arginine methylation phenotypes in frontotemporal dementias and Alzheimer’s disease. Prog. Neurobiol. 2023, 221, 102400. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The pipeline for the discrimination of subjects with frontotemporal dementia and Alzheimer’s disease.
Figure 2. Average accuracy scores (over five runs) grouped by feature extraction technique and machine learning technique for Alzheimer’s/healthy controls (top row), frontotemporal/healthy controls (middle row), and Alzheimer’s/frontotemporal (bottom row) with the 50% overlap EEG windows strategy (left) and the 90% strategy (right).
Figure 3. Average accuracies associated with the sliding window segmentation strategies of all the learning techniques grouped by feature extraction technique.
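For readers unfamiliar with the segmentation step, the 50% and 90% overlap strategies compared in Figures 2 and 3 can be expressed as a short windowing routine. The following is a minimal sketch, assuming a NumPy array of shape (channels, samples); the window length, sampling rate, and variable names are illustrative and are not taken from the study’s pipeline.

```python
import numpy as np

def sliding_windows(eeg, fs, win_sec, overlap):
    """Segment a (channels, samples) EEG array into overlapping windows.

    overlap is the fraction of shared samples between consecutive windows,
    e.g. 0.5 for the 50% strategy and 0.9 for the 90% strategy.
    """
    win_len = int(win_sec * fs)                    # samples per window
    step = max(1, int(win_len * (1 - overlap)))    # hop size between window starts
    starts = range(0, eeg.shape[1] - win_len + 1, step)
    return np.stack([eeg[:, s:s + win_len] for s in starts])

# Illustrative usage with synthetic data (19 channels, 60 s at 256 Hz).
rng = np.random.default_rng(0)
fake_eeg = rng.standard_normal((19, 60 * 256))
w50 = sliding_windows(fake_eeg, fs=256, win_sec=4, overlap=0.5)
w90 = sliding_windows(fake_eeg, fs=256, win_sec=4, overlap=0.9)
print(w50.shape, w90.shape)  # the 90% strategy yields roughly five times more windows
```

The higher overlap increases the number of training windows drawn from the same recordings, which is consistent with the larger window counts reported for the 90% strategy in Table 2.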
Figure 4. Feature importance map for discriminating Alzheimer’s disease subjects from healthy controls with the 50% window overlap.
Figure 5. Feature importance map for discriminating Alzheimer’s disease subjects from healthy controls with the 90% window overlap.
Figure 6. Feature importance map for discriminating frontotemporal dementia subjects from healthy controls with the 50% window overlap.
Figure 7. Feature importance map for discriminating frontotemporal dementia subjects from healthy controls with the 90% window overlap.
Figure 8. Feature importance map for discriminating Alzheimer’s disease subjects from frontotemporal dementia subjects with the 50% window overlap.
Figure 9. Feature importance map for discriminating Alzheimer’s disease subjects from frontotemporal dementia subjects with the 90% window overlap.
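The feature importance maps in Figures 4–9 relate each element of the feature vector back to an EEG channel and, hence, to a brain lobe. One generic way to obtain such a map is permutation importance on a fitted classifier; the sketch below illustrates this idea only and is not the exact procedure used to produce the figures. The montage, data shapes, and labels are synthetic placeholders.

```python
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.neighbors import KNeighborsClassifier

# Illustrative data: one scalar feature (e.g., SVD entropy) per channel per window.
channels = ["Fp1", "Fp2", "F3", "F4", "C3", "C4", "P3", "P4", "O1", "O2"]  # hypothetical montage
rng = np.random.default_rng(1)
X = rng.standard_normal((500, len(channels)))   # windows x channels feature matrix
y = rng.integers(0, 2, size=500)                # 0 = healthy control, 1 = patient (synthetic labels)

clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)

# Shuffle each channel's feature column and measure the drop in accuracy;
# larger drops indicate channels the classifier relies on more heavily.
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(channels, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.4f}")
```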
Table 1. Average performance scores associated with the models trained with a 15-fold cross-validation, with a 50% overlap strategy among EEG windows, using the SVD Entropy feature extraction technique.
Learning Tech. | Accuracy | Precision | Sensitivity | F1-Score | AUC
KNN | 84.70% | 81.37% | 85.90% | 83.58% | 91.82%
ET | 82.72% | 81.93% | 79.43% | 80.66% | 91.06%
RF | 82.22% | 81.33% | 78.94% | 80.11% | 90.42%
XGBoost | 81.66% | 78.87% | 81.41% | 80.12% | 89.96%
LGBM | 78.04% | 75.08% | 77.25% | 76.15% | 86.78%
GBC | 73.38% | 69.58% | 73.42% | 71.45% | 81.09%
LDA | 71.97% | 68.99% | 69.44% | 69.21% | 78.42%
Ridge | 71.86% | 68.81% | 69.48% | 69.14% | 0.00%
DT | 71.72% | 68.41% | 70.00% | 69.20% | 71.58%
LR | 71.55% | 68.23% | 69.78% | 69.00% | 78.03%
SVM | 69.81% | 66.05% | 69.90% | 67.63% | 0.00%
ADA | 69.26% | 65.40% | 68.48% | 66.90% | 75.67%
QDA | 66.79% | 59.22% | 86.10% | 70.17% | 79.84%
NB | 57.12% | 51.94% | 73.67% | 60.92% | 64.33%
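Table 1 ranks the learning techniques when SVD entropy is the extracted feature. For reference, a commonly used definition of SVD entropy, in the spirit of [44,45], embeds each window into a delay matrix, normalises its singular values, and takes their Shannon entropy. The sketch below implements this definition with NumPy; the embedding order and delay are illustrative choices rather than the parameters used in the study.

```python
import numpy as np

def svd_entropy(x, order=10, delay=1, normalize=True):
    """Shannon entropy of the normalised singular-value spectrum of a delay-embedded window."""
    x = np.asarray(x, dtype=float)
    n = x.size - (order - 1) * delay
    # Build the delay-embedding (trajectory) matrix of shape (n, order).
    emb = np.column_stack([x[i * delay: i * delay + n] for i in range(order)])
    s = np.linalg.svd(emb, compute_uv=False)
    s /= s.sum()                                   # normalise singular values to a distribution
    ent = -np.sum(s * np.log2(s + 1e-12))          # Shannon entropy in bits
    return ent / np.log2(order) if normalize else ent

rng = np.random.default_rng(2)
print(svd_entropy(rng.standard_normal(1024)))                  # noise-like signals approach 1
print(svd_entropy(np.sin(np.linspace(0, 20 * np.pi, 1024))))   # a pure tone yields a much lower value
```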
Table 2. Classification results associated with the model trained with the 90% overlap EEG window strategy on original (unbalanced) data using KNN classifier with SVD Entropy (Alzheimer’s disease subjects versus healthy controls in the top row, healthy controls versus frontotemporal dementia subjects in the middle row, and Alzheimer’s disease versus frontotemporal dementia subjects in bottom row).
Group | Precision | Sensitivity | F1-Score | Number of Windows
Alzheimer | 0.94 | 0.91 | 0.94 | 58,119
Healthy controls | 0.89 | 0.92 | 0.92 | 48,239
Accuracy |  |  | 0.91 | 106,358
Healthy controls | 0.96 | 0.92 | 0.94 | 48,095
Frontotemporal dementia | 0.88 | 0.94 | 0.91 | 33,200
Accuracy |  |  | 0.93 | 81,295
Alzheimer | 0.96 | 0.90 | 0.93 | 58,008
Frontotemporal dementia | 0.86 | 0.95 | 0.90 | 33,322
Accuracy |  |  | 0.91 | 91,330
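The per-group precision, sensitivity, and F1-scores in Table 2 correspond to a standard per-class evaluation of window-level predictions. The following sketch shows how such a report can be produced with scikit-learn; the synthetic features, labels, and hold-out split are placeholders and do not reproduce the study’s cross-validation protocol.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report

# Synthetic window-level features and labels (0 = healthy control, 1 = Alzheimer's disease).
rng = np.random.default_rng(3)
X = rng.standard_normal((2000, 19))
y = rng.integers(0, 2, size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)

# classification_report gives per-class precision, recall (sensitivity), F1, and support,
# i.e. the same quantities reported per group in Table 2.
print(classification_report(y_te, knn.predict(X_te), target_names=["Healthy controls", "Alzheimer"]))
```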
Table 3. Classification of Alzheimer’s disease from healthy adults with the 50% overlap strategy across the feature-extraction techniques for EEG data and the learning techniques.
Feature Extraction | Learning Techn. | Accuracy | Precision | Sensitivity | AUC
SVD | KNN | 84.68 | 81.37 | 85.93 | 91.82
 | ET | 82.72 | 81.93 | 79.43 | 91.06
 | RF | 82.22 | 81.33 | 78.94 | 90.42
 | XGB | 81.66 | 78.87 | 81.41 | 89.96
HFD | KNN | 81.51 | 80.56 | 78.10 | 90.21
 | ET | 80.32 | 78.69 | 77.65 | 88.90
 | RF | 80.02 | 77.92 | 78.11 | 88.42
 | XGB | 79.06 | 77.39 | 76.08 | 87.84
ZCR | ET | 80.37 | 77.85 | 79.09 | 88.89
 | KNN | 79.53 | 75.07 | 81.95 | 87.35
 | RF | 79.53 | 76.66 | 78.69 | 88.16
 | XGB | 78.63 | 74.00 | 81.29 | 87.51
DFA | ET | 80.54 | 78.25 | 79.01 | 88.98
 | KNN | 79.25 | 74.8 | 81.74 | 87.25
 | RF | 79.63 | 76.87 | 78.74 | 88.11
 | XGB | 78.64 | 74.08 | 81.32 | 87.34
Hjorth | RF | 76.67 | 76.22 | 70.84 | 84.29
 | ET | 76.36 | 76.38 | 69.54 | 84.19
 | XGB | 76.04 | 74.05 | 72.89 | 84.18
 | KNN | 74.98 | 71.98 | 73.67 | 81.90
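Tables 3–8 report a grid of feature-extraction technique by learning technique. Conceptually, producing such a grid reduces to a nested loop over extractors and classifiers evaluated with cross-validation, as in the sketch below; the extractors shown (variance and zero-crossing rate) and the classifier settings are simple stand-ins, and scoring is limited to accuracy for brevity.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier

rng = np.random.default_rng(4)
windows = rng.standard_normal((1000, 19, 512))   # windows x channels x samples (synthetic)
labels = rng.integers(0, 2, size=1000)

# Simple per-channel feature extractors used as stand-ins for the techniques in the tables.
extractors = {
    "variance": lambda w: w.var(axis=-1),
    "zero-crossing rate": lambda w: (np.diff(np.signbit(w).astype(np.int8), axis=-1) != 0).mean(axis=-1),
}
classifiers = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "ET": ExtraTreesClassifier(n_estimators=100, random_state=0),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
}

for feat_name, extract in extractors.items():
    X = extract(windows)   # (windows, channels) feature matrix
    for clf_name, clf in classifiers.items():
        acc = cross_val_score(clf, X, labels, cv=5, scoring="accuracy").mean()
        print(f"{feat_name:>20s} + {clf_name:<4s}: accuracy = {acc:.3f}")
```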
Table 4. Classification of Alzheimer’s disease from healthy adults with the 90% overlap strategy across the feature-extraction techniques for EEG data and the learning techniques.
Feature Extraction | Learning Techn. | Accuracy | Precision | Sensitivity | AUC
SVD | KNN | 94.72 | 94.66 | 93.42 | 96.64
 | ET | 90.24 | 89.72 | 88.59 | 96.64
 | RF | 89.10 | 88.41 | 87.38 | 95.81
 | XGB | 83.19 | 80.35 | 83.23 | 91.62
HFD | KNN | 89.43 | 86.87 | 90.30 | 95.79
 | ET | 85.76 | 83.82 | 84.92 | 93.77
 | RF | 85.07 | 82.67 | 84.78 | 93.13
 | XGB | 80.45 | 76.26 | 82.47 | 89.47
ZCR | KNN | 78.41 | 73.51 | 81.75 | 86.36
 | ET | 77.99 | 77.17 | 72.87 | 86.96
 | XGB | 67.57 | 65.09 | 60.99 | 74.21
 | RF | 77.10 | 76.59 | 71.08 | 85.74
DFA | KNN | 84.07 | 80.41 | 85.68 | 91.61
 | ET | 83.91 | 81.41 | 83.51 | 92.20
 | XGB | 79.86 | 74.98 | 83.28 | 88.82
 | RF | 83.27 | 80.28 | 83.57 | 91.62
Hjorth | KNN | 84.41 | 82.28 | 83.61 | 91.64
 | ET | 82.02 | 82.71 | 76.30 | 90.22
 | RF | 81.49 | 81.51 | 76.53 | 89.59
 | XGB | 77.68 | 75.90 | 74.41 | 86.01
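The HFD rows in Tables 3–8 refer to the Higuchi fractal dimension [47,48]. A compact NumPy sketch of the standard Higuchi algorithm is given below; the choice of k_max is illustrative and may differ from the value used in the study.

```python
import numpy as np

def higuchi_fd(x, k_max=10):
    """Estimate the Higuchi fractal dimension of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    n = x.size
    lk = []
    for k in range(1, k_max + 1):
        lengths_m = []
        for m in range(k):
            idx = np.arange(1, (n - m - 1) // k + 1)
            if idx.size == 0:
                continue
            # Curve length of the sub-series starting at offset m with step k.
            lm = np.sum(np.abs(x[m + idx * k] - x[m + (idx - 1) * k]))
            norm = (n - 1) / (idx.size * k)
            lengths_m.append(lm * norm / k)
        lk.append(np.mean(lengths_m))
    # The slope of log(L(k)) versus log(1/k) estimates the fractal dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.arange(1, k_max + 1)), np.log(lk), 1)
    return slope

print(higuchi_fd(np.arange(1024, dtype=float)))                    # a straight line gives ~1
print(higuchi_fd(np.random.default_rng(5).standard_normal(1024)))  # white noise gives ~2
```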
Table 5. Classification of frontotemporal dementia from healthy adults with the 50% overlap strategy across the feature-extraction techniques for EEG data and the learning techniques.
Feature Extraction | Learning Techn. | Accuracy | Precision | Sensitivity | AUC
SVD | ET | 87.57 | 85.52 | 83.61 | 94.74
 | KNN | 87.21 | 80.36 | 90.75 | 94.10
 | RF | 86.85 | 83.47 | 84.40 | 94.25
 | XGB | 86.01 | 81.49 | 84.90 | 93.70
HFD | ET | 84.11 | 82.40 | 77.56 | 91.72
 | RF | 83.86 | 80.92 | 79.04 | 91.43
 | XGB | 82.53 | 76.63 | 82.21 | 91.18
 | KNN | 81.84 | 73.29 | 87.23 | 90.36
ZCR | ET | 80.37 | 77.85 | 79.09 | 88.89
 | KNN | 79.53 | 75.07 | 81.95 | 87.35
 | RF | 79.53 | 76.66 | 78.69 | 88.16
 | XGB | 78.63 | 74.00 | 81.29 | 87.51
DFA | ET | 83.87 | 85.84 | 72.58 | 91.16
 | RF | 83.62 | 83.76 | 74.43 | 90.99
 | XGB | 82.34 | 78.16 | 78.92 | 90.47
 | KNN | 80.45 | 72.18 | 85.05 | 89.37
Hjorth | RF | 77.85 | 71.87 | 74.74 | 86.29
 | ET | 77.69 | 72.77 | 72.43 | 86.50
 | KNN | 75.10 | 65.64 | 81.82 | 83.83
 | XGB | 76.94 | 68.99 | 79.03 | 86.34
Table 6. Classification of frontotemporal dementia from healthy adults with the 90% overlap strategy across the feature-extraction techniques for EEG data and the learning techniques.
Feature Extraction | Learning Techn. | Accuracy | Precision | Sensitivity | AUC
SVD | KNN | 94.18 | 90.49 | 95.85 | 98.23
 | ET | 93.48 | 92.98 | 90.88 | 98.29
 | RF | 92.41 | 90.99 | 90.35 | 97.68
 | XGB | 87.81 | 83.51 | 87.36 | 95.14
HFD | KNN | 90.52 | 84.59 | 93.81 | 96.86
 | ET | 89.20 | 88.96 | 83.89 | 95.73
 | RF | 88.58 | 87.16 | 84.37 | 95.25
 | XGB | 84.24 | 78.58 | 84.28 | 92.65
ZCR | ET | 80.96 | 83.63 | 66.15 | 89.16
 | KNN | 80.81 | 73.38 | 82.89 | 88.97
 | RF | 80.22 | 81.85 | 66.03 | 87.94
 | XGB | 71.44 | 69.35 | 53.38 | 77.47
DFA | RF | 87.05 | 87.96 | 79.02 | 94.02
 | ET | 86.99 | 89.48 | 77.12 | 94.09
 | KNN | 85.17 | 77.71 | 89.17 | 93.17
 | XGB | 83.83 | 79.71 | 80.90 | 91.84
Hjorth | KNN | 84.80 | 76.85 | 89.72 | 93.08
 | ET | 84.39 | 81.03 | 80.53 | 92.59
 | RF | 83.62 | 79.12 | 81.21 | 91.82
 | XGB | 78.28 | 70.03 | 81.59 | 88.21
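The Hjorth rows refer to Hjorth’s activity, mobility, and complexity parameters, which follow directly from the variances of the signal and of its successive differences. The sketch below is a minimal per-channel implementation, using the discrete difference as a stand-in for the derivative.

```python
import numpy as np

def hjorth_parameters(x):
    """Return Hjorth activity, mobility, and complexity of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)                  # first derivative (discrete difference)
    ddx = np.diff(dx)                # second derivative
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / np.var(x))
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

rng = np.random.default_rng(6)
print(hjorth_parameters(rng.standard_normal(1024)))                 # broadband noise: high mobility
print(hjorth_parameters(np.sin(np.linspace(0, 8 * np.pi, 1024))))   # smooth oscillation: low mobility
```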
Table 7. Classification of Alzheimer’s disease from frontotemporal dementia adult subjects with the 50% overlap strategy across the feature-extraction techniques for EEG data and the learning techniques.
Feature Extraction | Learning Techn. | Accuracy | Precision | Sensitivity | AUC
SVD | ET | 85.50 | 85.45 | 92.43 | 94.28
 | RF | 84.96 | 82.15 | 74.88 | 91.82
 | KNN | 83.60 | 73.23 | 86.50 | 90.30
 | XGB | 81.58 | 73.15 | 77.94 | 89.21
HFD | ET | 84.43 | 81.49 | 73.78 | 91.58
 | RF | 83.84 | 78.83 | 75.74 | 90.97
 | XGB | 81.97 | 72.62 | 80.62 | 90.2
 | KNN | 81.51 | 80.56 | 78.10 | 90.21
ZCR | RF | 69.60 | 66.53 | 31.94 | 68.06
 | ET | 69.57 | 66.45 | 31.89 | 68.55
 | XGB | 69.38 | 66.75 | 30.45 | 68.86
 | KNN | 57.98 | 44.43 | 64.96 | 63.94
DFA | ET | 84.71 | 82.78 | 73.11 | 91.68
 | RF | 84.02 | 79.75 | 75.05 | 91.17
 | XGB | 82.13 | 73.12 | 80.35 | 90.23
 | KNN | 81.25 | 69.19 | 87.21 | 90.27
Hjorth | ET | 75.47 | 75.09 | 49.82 | 80.74
 | RF | 75.26 | 69.96 | 56.04 | 80.39
 | XGB | 71.48 | 59.95 | 65.04 | 78.52
 | KNN | 67.87 | 54.30 | 73.71 | 76.48
Table 8. Classification of Alzheimer’s disease from frontotemporal dementia adults with the 90% overlap strategy across the feature-extraction techniques for EEG data and the learning techniques.
Feature Extraction | Learning Techn. | Accuracy | Precision | Sensitivity | AUC
SVD | KNN | 91.93 | 85.03 | 94.37 | 97.48
 | ET | 92.37 | 93.45 | 84.93 | 97.74
 | RF | 91.42 | 90.81 | 84.96 | 97.07
 | XGB | 83.17 | 74.80 | 80.83 | 91.15
HFD | KNN | 90.88 | 88.24 | 82.90 | 97.11
 | ET | 89.67 | 88.51 | 82.16 | 95.95
 | RF | 88.93 | 86.09 | 82.85 | 95.32
 | XGB | 83.43 | 74.03 | 83.66 | 91.87
ZCR | RF | 79.72 | 85.79 | 52.79 | 86.41
 | KNN | 78.81 | 66.53 | 83.61 | 87.98
 | XGB | 70.50 | 70.68 | 31.81 | 71.57
 | ET | 80.81 | 86.63 | 55.63 | 87.74
DFA | RF | 87.52 | 84.81 | 79.96 | 94.30
 | KNN | 86.37 | 76.16 | 90.95 | 94.11
 | ET | 87.78 | 86.54 | 78.58 | 94.52
 | XGB | 83.23 | 74.14 | 82.66 | 91.55
Hjorth | RF | 82.36 | 81.16 | 66.92 | 89.38
 | ET | 80.30 | 68.44 | 84.86 | 89.53
 | XGB | 73.79 | 63.03 | 67.23 | 81.48
 | KNN | 67.87 | 54.30 | 73.71 | 76.48
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.