Article

Entropy Measures of Electroencephalograms towards the Diagnosis of Psychogenic Non-Epileptic Seizures

1 Centre for Biomedical Engineering, School of Mechanical Engineering Sciences, University of Surrey, Guildford GU2 7XH, UK
2 Department of Clinical and Experimental Epilepsy, Institute of Neurology, University College London, National Hospital for Neurology and Neurosurgery, University College London Hospitals, Epilepsy Society, London WC1E 6BT, UK
3 Neurosciences Research Centre, St George's University of London, London SW17 0RE, UK
4 Atkinson Morley Regional Neuroscience Centre, St George's Hospital, London SW17 0QT, UK
5 School of Neuroscience, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London WC2R 2LS, UK
6 Department of Computer Science, University of Surrey, Guildford GU2 7XH, UK
* Author to whom correspondence should be addressed.
Entropy 2022, 24(10), 1348; https://doi.org/10.3390/e24101348
Submission received: 31 May 2022 / Revised: 13 September 2022 / Accepted: 17 September 2022 / Published: 23 September 2022
(This article belongs to the Special Issue Entropy Algorithms for the Analysis of Biomedical Signals)

Abstract

Psychogenic non-epileptic seizures (PNES) may resemble epileptic seizures but are not caused by epileptic activity. However, the analysis of electroencephalogram (EEG) signals with entropy algorithms could help identify patterns that differentiate PNES and epilepsy. Furthermore, the use of machine learning could reduce the current diagnosis costs by automating classification. The current study extracted the approximate, sample, spectral, singular value decomposition, and Renyi entropies from the interictal EEGs and electrocardiograms (ECGs) of 48 PNES and 29 epilepsy subjects in the broad, delta, theta, alpha, beta, and gamma frequency bands. Each feature-band pair was classified by a support vector machine (SVM), k-nearest neighbour (kNN), random forest (RF), and gradient boosting machine (GBM). In most cases, the broad band returned higher accuracy, gamma returned the lowest, and combining the six bands together improved classifier performance. Renyi entropy was the best feature and returned high accuracy in every band. The highest balanced accuracy, 95.03%, was obtained by the kNN with Renyi entropy when all bands except broad were combined. This analysis showed that entropy measures can differentiate between interictal PNES and epilepsy with high accuracy, and the improved performance when bands are combined indicates that multi-band entropy features are an effective tool for diagnosing PNES from EEGs and ECGs.

1. Introduction

Psychogenic non-epileptic seizures (PNES) clinically resemble epileptic seizures but are not due to epileptic electrical brain activity [1]. Although the condition is almost as prevalent as multiple sclerosis [2,3], PNES is regularly misdiagnosed: people with PNES are not appropriately diagnosed for an average of seven years [4], and approximately 78% of patients were taking at least one anti-epileptic drug at the time of accurate diagnosis [5]. This has serious adverse effects for both patients and healthcare systems, through unnecessary visits to hospitals, medical tests, and treatments. In addition, since anti-epileptic drugs are not effective for PNES, these misdiagnosed patients will have endured the negative side effects of these expensive drugs without any significant benefit [3]. Furthermore, an estimated one in five referrals to epilepsy clinics actually have PNES [6], highlighting the difficulties in making an accurate diagnosis.
The current gold standard method of diagnosis is the recording of a seizure with video-electroencephalogram (EEG), from which a specialist assesses the semiology (the clinically observable features of the seizure) and visually inspects the EEG [7]. While this method is reliable, there are several shortcomings. Not all epileptic seizures are associated with qualitatively identifiable ictal EEG abnormalities [8], and EEG has a relatively poor ability to accurately identify a patient without epilepsy, with a sensitivity of 25–56% [9]. As a result, it can sometimes be difficult to differentiate between epileptic and psychogenic seizures. Therefore, given the need for in-patient admission and prolonged video-EEG recording, this diagnostic method is costly, inconvenient for the patient, and not accessible to all hospitals [3].
Entropy is a measure of the randomness and uncertainty of a signal, and higher entropy indicates a more complex or chaotic system [10]. It has a low computation cost and has been shown to be effective by previous researchers, making it a suitable option for machine learning. Entropy measures of the EEGs of PNES subjects have been of interest to previous researchers. Pyrzowski et al. [11] used interval analysis of interictal EEGs to compare 51 epilepsy subjects to 14 PNES and 14 headache (PNES and epilepsy free) subjects. The EEGs were theta-alpha filtered, and the zero crossing rates were histogram pooled and normalised across the segments and/or channels. From this, the relative counts of fixed-length intervals, several statistical measures, and the Shannon and minimum entropies were extracted. The researchers found that only the entropy measures significantly separated the epilepsy and non-epilepsy groups (headache patients included) without being affected by the presence of antiepileptic drugs, and that Shannon entropy was the better of the two entropies at this separation. Furthermore, for Shannon entropy, the optimum frequency band was 7–13 Hz, and performance was best for temporal-occipital channels, specifically T6 + T5.
One study [12] used 314 epileptic, non-epileptic, and healthy subjects. From these EEGs, 14 EEG features including spectral entropy were extracted using the empirical wavelet transform (EWT), singular spectrum empirical mode decomposition (SSEMD), and singular spectrum empirical wavelet transform (SSEWT). The researchers compared plots of the different types of seizures (e.g., focal non-epileptic, complex partial, interictal non-epileptic, normal controls, etc.) for these features and with each extraction method. For the EWT and SSEMD, the spectral entropies of the epileptic and non-epileptic groups overlap. The SSEWT method, however, shows that spectral entropy has some separation, with the normal subjects, focal non-epileptic, and tonic-clonic seizures isolated from the other classes. Complex partial and generalized non-epileptic seizures are also separated from the other seizure types but overlap each other.
Gasparini et al. [13] and Lo Giudice et al. [14] both used the entropy of the EEG as a control for comparison to the entropies of hidden layers in deep learning models. Gasparini et al. [13] extracted the Shannon and permutation entropies from the EEGs of six PNES subjects and ten healthy controls and found no statistical difference between the two classes for either measure. Lo Giudice et al. [14] used interictal EEG from 18 epilepsy and 18 PNES subjects and also found no statistical difference between the two classes for the permutation entropy of the signal.
ECG analysis has also been a focus of previous PNES research. Ponnusamy, Marques, and Reuber in 2011 [15] and 2012 [16], and Romigi et al. [17] all extracted the approximate entropy (ApEn) from the heart rate (RR interval data) as a part of extensive heart rate variability (HRV) analysis. Ponnusamy, Marques, and Reuber’s 2011 [15] study found that, interictally, ApEn was significantly lower in PNES than in healthy controls, but there was no statistical difference in interictal HRV ApEn between PNES and epilepsy subjects. Their 2012 study [16] found no statistical difference in ictal ApEn between PNES and epilepsy groups. However, epileptic subjects showed a decrease in ApEn during seizure activity, whereas PNES subjects did not. Romigi et al. [17], however, found that ApEn decreased in PNES subjects during seizure activity compared to at rest, before, and after the attack. Furthermore, there was no difference between subjects with PNES only and PNES subjects with comorbid epilepsy.
Single biomedical signal parameters have been shown to be insufficient as a differentiator for PNES and epilepsy [18]. Therefore, a potential tool to mitigate these problems is machine learning. Machine learning classifiers are mathematical algorithms that “learn” how to separate conditions by training on a set of data. The validity of this trained model is then tested using more data. When analysing biomedical signals, these data are typically comprised of one or more features extracted from the signal taken at different observations. This allows the classifiers to consider multiple factors with different types of information simultaneously. The model’s ability to separate the conditions is assessed using performance metrics such as accuracy (ability to predict both conditions correctly) [19].
Machine learning has been previously used to classify entropy measures extracted from the EEGs of PNES patients. For instance, from 2014–2018, a series of six papers published by the same group of researchers [20,21,22,23,24,25] used spectral entropy as one of 55 EEG features analysed by machine learning.
Ahmadi et al. [26] used EEGs from 20 epilepsy and 20 PNES subjects and compared the Shannon entropy, spectral entropy, Renyi entropy, Higuchi fractal dimension, Katz fractal dimension, and the EEG frequency bands with an imperialist competitive algorithm. They found that spectral entropy and Renyi entropy were the most important EEG features as they were always among the five best feature subsets. Furthermore, the classification accuracy decreased significantly when either or both were excluded from a subset. They also found that SVMs with a linear or RBF kernel were the best classifiers.
The same group did another study [10], this time with five epilepsy and five PNES subjects. They extracted the same EEG features from each frequency band, this time including the energy of the signal. The researchers found that beta was the best band for all features and gamma was the worst. The highest performing features differ for each band, making an overarching conclusion difficult.
Cura et al. [27] used synchrosqueezing to represent the time-frequency maps of 16 epilepsy and six PNES subjects. From these maps, 17 features were extracted: three flux, flatness, and energy concentration measures; two Renyi entropy measures; six statistical features; and five TF sub-band energy measures. The researchers used decision tree, SVM, RF, and RUSBoost classifiers to differentiate all 17 features. For the three-class problem (inter-PNES (non-seizure), PNES seizure, and epileptic seizure EEGs), the highest accuracy and precision and the lowest false discovery rate were reported by the RF, with 95.8%, 91.4%, and 8.6%, respectively. The highest sensitivity, 90.3%, was reported by the RUSBoost classifier. All classifiers except the SVM reported accuracy ≥ 93%, sensitivity ≥ 82%, precision ≥ 86%, and false discovery rates ≤ 14%. The researchers also compared the inter-PNES and PNES EEGs for PNES seizure detection. All accuracies were ≥ 90% (excluding the SVM for one patient), and the RF reported the highest of these.
This paper aims to assess the ability of six entropy metrics to differentially diagnose PNES and epilepsy by using these features individually as the inputs to four popular machine learning methods. This analysis compares the diagnostic power of each feature and each EEG frequency band for a large database of PNES and epilepsy EEG and electrocardiogram (ECG) recordings.

2. Materials and Methods

The data used in this analysis were collected routinely at St George's Hospital, London and consisted of interictal and preictal surface EEG recordings from 48 PNES and 29 epilepsy patients. The PNES subjects have an age range of 17–59 (mean 34.76 ± 10.55) and a male/female ratio of 14/34. The epilepsy subjects have an age range of 19–79 (mean 38.95 ± 13.93) and a male/female ratio of 18/11. Suitable cases were retrospectively identified from the video-EEG database of those attending for inpatient video-EEG monitoring from 2016 to 2019. The diagnosis of functional seizures was made according to International League Against Epilepsy diagnostic criteria [28] by at least two clinicians experienced in the diagnosis of epilepsy and was documented through video-EEG in all cases. The diagnosis of epileptic seizures was based upon EEG-confirmed ictal epileptiform activity during the recorded epileptic event during video-EEG monitoring. Exclusion criteria for both groups included cases with a dual diagnosis of both epileptic and functional non-epileptic seizures. The recordings were taken with Natus Networks with an EEG32 headbox. The EEG electrodes were placed according to the 10–20 system montage with Cz-Pz as the reference electrode. The ECG comprises two electrodes, ECG+ and ECG-, placed on the right and left mid-clavicular lines. The sampling frequencies were either 256, 512, or 1024 Hz, and bandpass filtering from 0.5 to 70 Hz was applied. The data were reviewed and clipped by experienced clinicians in the field, who selected awake time epochs when patients were still and at rest, without seizures or ictal/epileptiform manifestations, and with minimal noise. All clipped EEG data were de-identified and the video removed prior to the current analysis. Anonymised recordings were stored in EDF+ format.
The EEGs and ECGs were preprocessed using MNE-python [29]. The signals with a sampling rate of over 256 Hz were downsampled to this value and the common electrodes were selected: Fp1, F7, T3, T5, O1, F3, C3, P3, Fz, Cz, Fp2, F8, T4, T6, O2, F4, C4, P4, Fpz, Pz, ECG+, and ECG-. The EEGs were filtered using an FIR, Hamming window, bandpass filter with cutoff frequencies of 0.5 and 40 Hz. The ECGs were filtered using a Bessel IIR bandpass filter with cutoff frequencies of 0.25 and 40 Hz, the method for which was derived from [30,31]. Inspection of the time and frequency plots of the EEG showed no significant mains noise, so this was not specifically removed. The data were then segmented into ten-second non-overlapping epochs. To remove noise, epochs where the EEG amplitude did not exceed 1 µV were removed, and AutoReject [32] automatically removed epochs with noisy EEG. The remaining epochs were then visually inspected to exclude any epochs that contained flat EEG or ECG. The resulting 10,452 epochs were then baseline corrected using the average of each subject’s EEG. These EEG samples were then filtered into the frequency bands: delta 0.5–4 Hz, theta 4–8 Hz, alpha 8–13 Hz, beta 13–30 Hz, and gamma 30–40 Hz. The ECG channel was found by subtracting the values of the ECG+ lead from the ECG- lead. Baseline wander was then removed using a filter with a 0.05 Hz cutoff [33]. Entropy features were extracted from every band and every channel (including ECG), including the original broad band (0.5–40 Hz). The ECG filtering, however, was the same for each EEG frequency band analysed (0.25–40 Hz).
The entropy measures used in this analysis were: approximate, sample, spectral, singular value decomposition (SVD), Renyi, and wavelet entropy. These features were extracted from each channel in each sample, giving 21 input parameters per band per feature. The approximate and sample entropies were computed using EntropyHub [34], the spectral and SVD entropies were calculated using MNE-features [35], and the Renyi entropy was estimated using DIT [36].
Approximate entropy was introduced by Pincus [37] to define irregularity in sequences and time series data [38]. Formally, given $N$ data points from a time series $x(n) = x_1, x_2, \ldots, x_N$, the ApEn is calculated using two input parameters, a run length $m$ and a tolerance window $r$, which must be fixed [38]. To define $\mathrm{ApEn}(m, r, N)$, form the vector sequences $X_1, \ldots, X_{N-m+1}$ defined by $X_i = (x_i, x_{i+1}, \ldots, x_{i+m-1})$, where $i = 1, \ldots, N-m+1$. Then define the distance $d[X_i, X_j]$ between vectors $X_i$ and $X_j$ as the maximum distance in their respective scalar components. For each $i \leq N-m+1$, construct $C_i^m(r)$, defined as (the number of $X_j$ such that $d[X_i, X_j] \leq r$)$/(N-m+1)$. Next, define $\Phi^m(r)$ as the average value of $\ln C_i^m(r)$. The ApEn is then defined in Equation (1) [38], where $N$ is 2560 throughout this analysis.

$$\mathrm{ApEn}(m, r, N) = \Phi^m(r) - \Phi^{m+1}(r) \tag{1}$$
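As a concrete illustration, the definition above can be written out directly in NumPy. This is a didactic sketch, not the EntropyHub routine used in the study; the test signals and parameter values are arbitrary examples:

```python
import numpy as np

def approximate_entropy(x, m, r):
    """ApEn(m, r, N) following Pincus: Phi^m(r) - Phi^(m+1)(r).

    Uses the Chebyshev (maximum) distance and includes self-matches,
    as in the original definition.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)

    def phi(m):
        # Embed the series into overlapping vectors of length m.
        X = np.array([x[i:i + m] for i in range(N - m + 1)])
        # Pairwise Chebyshev distances between all vectors.
        d = np.max(np.abs(X[:, None, :] - X[None, :, :]), axis=2)
        # C_i^m(r): fraction of vectors within tolerance r of X_i.
        C = np.mean(d <= r, axis=1)
        return np.mean(np.log(C))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(0)
noise = rng.standard_normal(300)
regular = np.sin(np.linspace(0, 30 * np.pi, 300))
# A periodic signal is more predictable, so its ApEn should be lower.
ap_noise = approximate_entropy(noise, m=2, r=0.2 * np.std(noise))
ap_regular = approximate_entropy(regular, m=2, r=0.2 * np.std(regular))
```

Because self-matches are included, every $C_i^m(r)$ is strictly positive, so the logarithm is always defined.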
Nevertheless, to avoid the occurrence of ln(0) in the calculation of ApEn, the algorithm includes self-matching, leading to a discussion of bias in this entropy metric [39]. Sample entropy (SampEn) was introduced by Richman and Moorman [39] as an improvement upon ApEn, reducing the dependency on record length and avoiding self-matching. To define $\mathrm{SampEn}(m, r, N)$ of a time series $x(n) = x_1, x_2, \ldots, x_N$, with a run length $m$ and a tolerance window $r$, form the vector sequences $X_m(1), \ldots, X_m(N-m+1)$, defined by $X_m(i) = (x_i, x_{i+1}, \ldots, x_{i+m-1})$, where $i = 1, \ldots, N-m+1$. The distance $d[X_m(i), X_m(j)]$ between vectors $X_m(i)$ and $X_m(j)$ is then defined as the maximum absolute distance between their respective scalar components. For each $i \leq N-m$, construct $B_i^m(r)$, defined as (the number of $X_m(j)$, $j \neq i$, such that $d[X_m(i), X_m(j)] \leq r$)$/(N-m-1)$. Next, define $B^m(r)$ as the average value of $B_i^m(r)$. Then, increase the dimension to $m+1$ and calculate $A_i$ as the number of $X_{m+1}(j)$ within $r$ of $X_{m+1}(i)$, where $j$ ranges from 1 to $N-m$ ($j \neq i$). Define $A_i^m(r)$ as $A_i/(N-m-1)$ and $A^m(r)$ as the average value of $A_i^m(r)$. Therefore, $B^m(r)$ is the probability that two sequences will match for $m$ points, whereas $A^m(r)$ is the probability that two sequences will match for $m+1$ points. Sample entropy is then defined using Equation (2),

$$\mathrm{SampEn}(m, r) = \lim_{N \to \infty} \left\{ -\ln \frac{A^m(r)}{B^m(r)} \right\} \tag{2}$$

which is estimated by the statistic in Equation (3), where $N$ is 2560 throughout this analysis.

$$\mathrm{SampEn}(m, r, N) = -\ln \frac{A^m(r)}{B^m(r)} \tag{3}$$
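The statistic in Equation (3) can likewise be sketched in NumPy. This is an illustrative implementation, not the EntropyHub routine used in the study; for simplicity, it counts matched pairs over all available templates of each length rather than restricting both counts to exactly the same template indices:

```python
import numpy as np

def sample_entropy(x, m, r):
    """SampEn(m, r, N) = -ln(A^m(r) / B^m(r)), excluding self-matches."""
    x = np.asarray(x, dtype=float)
    N = len(x)

    def count_matches(length):
        # Template vectors of the given length.
        X = np.array([x[i:i + length] for i in range(N - length)])
        # Pairwise Chebyshev distances.
        d = np.max(np.abs(X[:, None, :] - X[None, :, :]), axis=2)
        # Count pairs within r, excluding the diagonal (self-matches).
        return (d <= r).sum() - len(X)

    B = count_matches(m)        # matches of length m
    A = count_matches(m + 1)    # matches of length m + 1
    return -np.log(A / B)

rng = np.random.default_rng(1)
noise = rng.standard_normal(400)
regular = np.sin(np.linspace(0, 40 * np.pi, 400))
# The periodic signal should again return the lower entropy.
se_noise = sample_entropy(noise, m=1, r=0.15 * np.std(noise))
se_regular = sample_entropy(regular, m=1, r=0.15 * np.std(regular))
```

Unlike ApEn, no self-matches are counted, so the ratio $A/B$ is an unbiased estimate of the conditional matching probability.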
Since both ApEn and SampEn are highly dependent on the input parameters, the run length $m$ and the tolerance window $r$, these values require selection. For both entropies, the recommended ranges are $m = 1$ or $2$ and $r$ between 0.1 and 0.25 times the standard deviation (SD) of the input time series $x(n)$ [39]. Therefore, the following parameter combinations were tested with a grid search: $m \in \{1, 2\}$, $r_{SD} \in \{0.1, 0.15, 0.2, 0.25\}$, where $r = r_{SD} \times$ the SD of the input time series. To avoid overfitting the data, a subset of ten patients per class was selected for this analysis. ApEn and SampEn were extracted from this subset using each combination of $m$ and $r$. These features were then input to a support vector machine (SVM) with a radial basis function (RBF) kernel and validated with 5-fold cross validation. The $m$ and $r$ combination that returned the highest average balanced accuracy from the classifiers was then selected as the input parameters for the analysis with the full dataset. The specifics of the machine learning aspects of this process are described below.
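The parameter search itself can be sketched with scikit-learn as follows. Here `extract_entropy_features` is a hypothetical stand-in that merely synthesises class-separated data; in the study, real ApEn/SampEn features were extracted from the ten-patient subset for each parameter pair:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)

def extract_entropy_features(m, r_sd):
    """Hypothetical stand-in: returns (n_epochs, n_channels) features
    and binary labels instead of real ApEn/SampEn values."""
    X = rng.standard_normal((120, 21)) + (m + r_sd)  # toy data
    y = np.repeat([0, 1], 60)
    X[y == 1] += 0.5                                 # separate the classes
    return X, y

best = None
for m in (1, 2):
    for r_sd in (0.1, 0.15, 0.2, 0.25):
        X, y = extract_entropy_features(m, r_sd)
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        # 5-fold cross-validated balanced accuracy, as in the study.
        score = cross_val_score(clf, X, y, cv=5,
                                scoring="balanced_accuracy").mean()
        if best is None or score > best[0]:
            best = (score, m, r_sd)

score, m, r_sd = best
```

The winning $(m, r_{SD})$ pair is then reused for feature extraction on the full dataset.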
Spectral entropy (SpecEn) is the Shannon entropy [40] of the power spectrum and is calculated using Equation (4), where $p_i$ is the probability distribution of the power spectrum of the time series, $i$ is one of the discrete states (assuming a bin width of one spectral unit), the sum of the $p_i$ is 1, and $\Omega$ is the number of discrete states [41].

$$\mathrm{SpecEn}(f) = -\frac{1}{\ln \Omega} \sum_{i=1}^{\Omega} p_i \ln p_i \tag{4}$$
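A minimal version of Equation (4), estimating the spectrum with Welch's method as MNE-features does, might look like the sketch below (the sampling rate and test signals are illustrative choices):

```python
import numpy as np
from scipy.signal import welch

def spectral_entropy(x, fs):
    """Normalised Shannon entropy of the Welch power spectrum (Eq. 4)."""
    freqs, psd = welch(x, fs=fs, nperseg=min(len(x), 256))
    p = psd / psd.sum()        # probability distribution over spectral bins
    p = p[p > 0]               # avoid log(0)
    return -np.sum(p * np.log(p)) / np.log(len(psd))

fs = 256
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
sine = np.sin(2 * np.pi * 10 * t)      # energy in one narrow band
noise = rng.standard_normal(len(t))    # energy spread across all bands
se_sine = spectral_entropy(sine, fs)
se_noise = spectral_entropy(noise, fs)
```

Dividing by $\ln \Omega$ normalises the result to $[0, 1]$, so a pure tone scores near 0 and white noise near 1.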
SVD entropy (SVDEn) was defined by Alter et al. [42]. SVD is a matrix orthogonalisation decomposition method, so for a time series $x(n) = x_1, x_2, \ldots, x_N$ the Hankel matrix $H_{m \times n}$ can be constructed as

$$H_{m \times n} = \begin{bmatrix} x_1 & x_2 & \cdots & x_n \\ x_2 & x_3 & \cdots & x_{n+1} \\ \vdots & \vdots & & \vdots \\ x_m & x_{m+1} & \cdots & x_N \end{bmatrix} \tag{5}$$

where $1 < n < N$, $m = N - n + 1$ [43]. The SVD of $H_{m \times n}$ can be defined as

$$H_{m \times n} = U \Sigma V^T = (u_1, u_2, \ldots, u_L) \operatorname{diag}(\sigma_1, \sigma_2, \ldots, \sigma_L)(v_1, v_2, \ldots, v_L)^T \tag{6}$$

where the left singular vectors $U_{m \times m}$ and right singular vectors $V_{n \times n}$ are orthogonal matrices, and $\Sigma_{m \times n}$ is a diagonal matrix composed of the singular values $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_L \geq 0$, $L = \min(m, n)$ [43]. In this space, the singular vectors of $H_{m \times n}$ satisfy the orthonormality condition $\langle u_k, u_l \rangle = \delta_{kl}$ for all $1 \leq k, l \leq L$ [42]. Let us define the normalised eigenvalues as

$$p_l = \sigma_l^2 \Big/ \sum_{k=1}^{L} \sigma_k^2 \tag{7}$$

which indicates the relative significance of the $l$th eigenvalue and eigenvector in terms of the fraction of the overall expression that they capture [42]. Then the SVD entropy of the dataset $X$ is as shown in Equation (8) [42]:

$$\mathrm{SVDEn} = -\frac{1}{\log_2 L} \sum_{k=1}^{L} p_k \log_2 p_k \tag{8}$$
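Equations (5)–(8) translate into a few lines of NumPy; the sketch below (with an arbitrary embedding width $n$) follows the definitions above rather than the MNE-features implementation used in the study:

```python
import numpy as np

def svd_entropy(x, n):
    """SVD entropy of a time series via a Hankel embedding (Eqs. 5-8)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    m = N - n + 1
    # Hankel matrix: row i holds x[i], x[i+1], ..., x[i+n-1]  (Eq. 5).
    H = np.array([x[i:i + n] for i in range(m)])
    # Singular values of H (Eq. 6); eigen-decomposition is not needed.
    sigma = np.linalg.svd(H, compute_uv=False)
    # Normalised "eigenvalues" (Eq. 7).
    p = sigma**2 / np.sum(sigma**2)
    p = p[p > 0]
    L = min(m, n)
    # Normalised entropy (Eq. 8), bounded to [0, 1].
    return -np.sum(p * np.log2(p)) / np.log2(L)

rng = np.random.default_rng(3)
val = svd_entropy(rng.standard_normal(200), n=10)
```

Because at most $L$ singular values are non-zero, the normalisation by $\log_2 L$ keeps the result in $[0, 1]$.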
Renyi entropy (REn) estimates the spectral complexity of a signal and is calculated using Equation (9), where the order $\alpha \geq 0$ and $\alpha \neq 1$, $p_i$ is the probability distribution of the time series, $i$ is one of the discrete states, and $\Omega$ is the number of discrete states [44]. For this analysis, $\alpha = 2$ to replicate [10] for ease of comparison with that study.

$$\mathrm{REn}(\alpha) = \frac{1}{1-\alpha} \log_2 \sum_{i=1}^{\Omega} p_i^\alpha \tag{9}$$
Wavelet entropy (WaveEn) is a measure of the degree of disorder associated with the multi-frequency signal response. The wavelet coefficients $C_{i,j}$ were found using wavelet decomposition, where $i$ is the time index and $j$ is the index of the different resolution levels. The energy for each time $i$ and level $j$ can be found using Equation (10) [45].

$$E_{i,j} = |C_{i,j}|^2 \tag{10}$$

The mean energy was then calculated using Equation (11),

$$E_j^{(k)} = \frac{1}{n} \sum_{i=k_0}^{k_0+t} E_{i,j} \tag{11}$$

where the index $k$ denotes successive time windows, which gives the time evolution; $k_0$ is the starting value of the time window ($k_0 = 1, 1+t, 1+2t, \ldots$); and $n$ is the number of wavelet coefficients in the time window for each resolution level [45]. The probability distribution for each level can be defined using Equation (12) [45],

$$p_j^{(k)} = \frac{E_j^{(k)}}{E_{tot}^{(k)}} \tag{12}$$

where $E_{tot}^{(k)} = \sum_j E_j^{(k)}$ is the total energy in window $k$. Following the definition of Shannon entropy [40], the time-varying wavelet entropy was found using Equation (13) [45]. More details can be found in [46].

$$\mathrm{WaveEn}^{(k)} = -\sum_j p_j^{(k)} \ln p_j^{(k)} \tag{13}$$
For this analysis, Morlet wavelets were used since they are commonly used in EEG research [47].
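A minimal version of this computation, using a hand-rolled Morlet convolution rather than a wavelet library and a single window spanning the whole epoch, might look like the following. The centre frequencies, wavelet width, and test signals are illustrative choices, not the study's settings:

```python
import numpy as np

def morlet(t, f, sigma=1.0):
    """Complex Morlet wavelet at centre frequency f (an illustrative form)."""
    return np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma**2))

def wavelet_entropy(x, fs, freqs):
    """Wavelet entropy over one epoch (Eqs. 10-13 with a single window)."""
    t = np.arange(-1, 1, 1 / fs)
    # Energy per level j: squared magnitude of the wavelet convolution (Eq. 10).
    E = np.array([
        np.sum(np.abs(np.convolve(x, morlet(t, f), mode="same"))**2)
        for f in freqs
    ])
    p = E / E.sum()                  # energy distribution across levels (Eq. 12)
    p = p[p > 0]
    return -np.sum(p * np.log(p))    # Shannon entropy of the levels (Eq. 13)

fs = 128
time = np.arange(0, 4, 1 / fs)
sine = np.sin(2 * np.pi * 10 * time)       # energy on one level only
rng = np.random.default_rng(5)
noise = rng.standard_normal(len(time))     # energy spread over all levels
levels = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
we_sine = wavelet_entropy(sine, fs, levels)
we_noise = wavelet_entropy(noise, fs, levels)
```

A narrowband signal concentrates its energy on one level and scores near zero, while broadband noise distributes energy across levels and scores near $\ln$ (number of levels).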
Once these features had been extracted from every channel for every epoch in every band, they were used to train and test four machine learning classifiers: SVM, k-nearest neighbours (kNN), random forest (RF), and gradient boosting machine (GBM). These models were implemented using the scikit-learn python package [48].
SVMs were introduced in [49] and classify by searching for an optimal hyperplane that separates the classes. If the data are separable, the hyperplane maximises a margin around itself that does not contain any data, creating boundaries for the classes. Otherwise, the algorithm establishes a penalty on the length of the margin for every observation that is on the wrong side. The SVM classifiers used in this analysis used an RBF kernel, which maps the data onto a non-linear plane. The RBF kernel between two patterns $x$ and $x'$ is calculated using Equation (14).

$$K(x, x') = \exp\left(-\gamma \, \|x - x'\|^2\right) \tag{14}$$

In this case, $\gamma$ was taken as 1/(number of features × variance of the data).
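Equation (14) and the $\gamma$ heuristic are easy to check numerically; the heuristic below corresponds to scikit-learn's `gamma="scale"` default, which we believe matches the description above (the toy data shapes are arbitrary):

```python
import numpy as np

def rbf_kernel(x, z, gamma):
    """Eq. (14): K(x, x') = exp(-gamma * ||x - x'||^2)."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

rng = np.random.default_rng(7)
X = rng.standard_normal((100, 21))   # e.g. 100 epochs x 21 channels
# gamma = 1 / (n_features * variance), i.e. scikit-learn's gamma="scale".
gamma = 1.0 / (X.shape[1] * X.var())
k_same = rbf_kernel(X[0], X[0], gamma)   # identical points: kernel = 1
k_far = rbf_kernel(X[0], X[1], gamma)    # distinct points: 0 < kernel < 1
```

The kernel value decays from 1 towards 0 as the squared distance grows, so $\gamma$ sets how quickly similarity falls off.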
The kNN algorithm is based on the idea that similar groups will cluster. The model is trained by ‘plotting’ observations based on their features, presumably with the classes clustering. The algorithm is tested by plotting an observation and classifying it based on the class of the nearest neighbours. The number of nearest neighbours, k , was individually selected by a grid search that tested 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, and 20 neighbours. This defined k as the value that returned the highest balanced accuracy with ten-fold cross validation.
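The grid search over $k$ can be sketched with scikit-learn as follows; the synthetic dataset stands in for one feature-band pair (21 channels), and the candidate values mirror those listed above:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for one feature-band pair (21 channels).
X, y = make_classification(n_samples=300, n_features=21, n_informative=8,
                           random_state=0)

best_k, best_score = None, -np.inf
for k in (2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 20):
    # Ten-fold cross-validated balanced accuracy, as in the study.
    score = cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y,
                            cv=10, scoring="balanced_accuracy").mean()
    if score > best_score:
        best_k, best_score = k, score
```

The selected `best_k` is then fixed for that feature-band pair before the final evaluation.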
RF was introduced by [50] and is based on randomised decision trees. Decision trees are flowchart-like structures that predict the value of a target variable by learning a series of simple decision rules based on the training data. RF uses an ensemble of trees, each with a different random subset of the features in a method called bootstrap aggregating, or bagging. This decreases the variance, compared to an individual decision tree, and reduces the risk of overfitting. The class was then taken as the average of the trees’ probabilistic predictions, whereas the original publication [50] let each tree vote for a single class.
GBMs are ensembles of weak learners, typically decision trees, and were introduced by [51,52]. GBMs are similar to gradient descents in a functional space. The model is built by adding a new tree with every iteration. The new tree is fitted to minimise the sum of the losses of the (now previous) model. For binary classification, a prediction is made based on the probability that the sample belongs to the positive class. This is found by applying the sigmoid function to the tree ensemble.
To classify the feature set, ten-fold cross validation was used to define the training and testing datasets. Since the classes in this dataset are imbalanced with more PNES data, the epilepsy data in the training set was oversampled using a synthetic minority over-sampling technique (SMOTE). The feature space was then reduced using principal component analysis (PCA), with a variance of 95%.
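The oversampling and dimensionality-reduction steps can be sketched as follows. The hand-rolled `smote` below only illustrates the interpolation idea behind SMOTE (the study would have used an established implementation), and the class sizes are toy values:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

def smote(X_min, n_new, k=5):
    """Minimal SMOTE sketch: each synthetic point is interpolated between
    a random minority sample and one of its k nearest minority neighbours."""
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nn = np.argsort(d)[1:k + 1]          # neighbours, excluding itself
        j = rng.choice(nn)
        lam = rng.random()                   # random point on the segment
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)

# Imbalanced toy training set: 80 "PNES" vs 30 "epilepsy" epochs.
X_maj = rng.standard_normal((80, 21))
X_min = rng.standard_normal((30, 21)) + 1.0
X_min_new = smote(X_min, n_new=len(X_maj) - len(X_min))
X_bal = np.vstack([X_maj, X_min, X_min_new])
y_bal = np.array([0] * 80 + [1] * 80)        # classes now balanced

# Reduce the feature space, keeping 95% of the variance.
X_red = PCA(n_components=0.95).fit_transform(X_bal)
```

Oversampling is applied only to the training folds so that no synthetic points leak into the test data.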
Precision, recall, and balanced accuracy were used to evaluate the classifiers’ predictions of test data. Since the dataset was imbalanced, these metrics were selected as they avoid inflated performance metrics on imbalanced datasets. Equations (15)–(17) show the calculations for these performance metrics.
$$\mathrm{precision} = \frac{TP}{TP + FP} \tag{15}$$

$$\mathrm{recall} = \frac{TP}{TP + FN} \tag{16}$$

$$\mathrm{balanced\ accuracy} = \frac{1}{2}\left(\frac{TP}{TP + FN} + \frac{TN}{TN + FP}\right) \tag{17}$$

where TP is the number of true positives, TN the number of true negatives, FP the number of false positives, and FN the number of false negatives. Here, PNES is the positive class and epilepsy is the negative class.
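Equations (15)–(17) amount to a few arithmetic operations on the confusion-matrix counts; the worked numbers below are invented purely for illustration:

```python
def precision_recall_balanced_acc(tp, tn, fp, fn):
    """Eqs. (15)-(17) from the counts of a binary confusion matrix."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)             # sensitivity of the positive class
    specificity = tn / (tn + fp)        # sensitivity of the negative class
    balanced_accuracy = 0.5 * (recall + specificity)
    return precision, recall, balanced_accuracy

# Example: 90 PNES epochs correctly found, 10 missed; 40 epilepsy epochs
# correctly found, 5 mistaken for PNES.
p, r, ba = precision_recall_balanced_acc(tp=90, tn=40, fp=5, fn=10)
```

Balanced accuracy averages the per-class recalls, so a classifier that ignores the minority class cannot score well even on imbalanced data.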
Permutation feature importance was also used to compare the EEG frequency bands. This was done by adapting the algorithm of [50] to permute multiple features together. A model $m$ was fitted using the training data, and then a reference score $s$ was defined using the validation data $D$. Each feature (channel) of the set (band) to be assessed, $f_{n:o}$, was then permuted (randomly shuffled) in order to corrupt the validation samples of that band, giving $\tilde{D}_{k,n:o}$. The score $\tilde{s}_{k,n:o}$ of model $m$ on this corrupted validation dataset was then computed. This process of permuting and scoring was repeated $K$ times with iteration index $k$. The importance $i_{n:o}$ of the feature set (band) $f_{n:o}$ is then defined using Equation (18).

$$i_{n:o} = s - \frac{1}{K} \sum_{k=1}^{K} \tilde{s}_{k,n:o} \tag{18}$$
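The grouped variant of Equation (18) can be sketched as follows; the model, column groupings, and toy labels are illustrative stand-ins for the study's classifiers and bands:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(0)

def band_importance(model, X_val, y_val, cols, K=10):
    """Grouped permutation importance (Eq. 18): shuffle all channels of one
    band together and measure the average drop in validation score."""
    s = balanced_accuracy_score(y_val, model.predict(X_val))
    corrupted_scores = []
    for _ in range(K):
        X_perm = X_val.copy()
        perm = rng.permutation(len(X_val))
        X_perm[:, cols] = X_val[perm][:, cols]   # corrupt the whole band
        corrupted_scores.append(
            balanced_accuracy_score(y_val, model.predict(X_perm)))
    return s - np.mean(corrupted_scores)

# Toy data: columns 0-2 are informative (a "useful band"), 3-5 pure noise.
X = rng.standard_normal((400, 6))
y = (X[:, :3].sum(axis=1) > 0).astype(int)
model = LogisticRegression().fit(X[:200], y[:200])
imp_useful = band_importance(model, X[200:], y[200:], cols=[0, 1, 2])
imp_noise = band_importance(model, X[200:], y[200:], cols=[3, 4, 5])
```

Shuffling all channels of a band together (rather than one at a time) captures the band's joint contribution, which is what the band-level comparison above requires.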

3. Results

The grid search to establish the ideal values for m and r S D found that the highest average accuracy across the bands was returned when m = 2 and r S D = 0.2 for ApEn and when m = 1 and r S D = 0.15 for SampEn. These parameters were then used to extract the ApEn and SampEn from the full dataset. The accuracies from these tests can be found in the Supplementary Materials.
Using the methods described, the balanced accuracies returned are reported in Table 1. Tables containing the precision and recall can be found in the Supplementary Materials.
Table 1 shows a range of balanced accuracies with only two instances returning below chance (50%). The highest accuracy was 94.68%, with 96.12% precision and 95.19% recall, which was obtained by Renyi entropy with a kNN classifier in the ‘all’ band. Generally, the lowest performing entropy measure was wavelet entropy, and the best was Renyi entropy. Overall, the lowest accuracies were obtained by the gamma band, and with all the EEG bands combined—the ‘all’ band—the highest accuracies were returned.
When comparing the entropy measures and the frequency bands, it is possible to group the measures into three different trends: Renyi entropy; sample, approximate, SVD, and spectral entropy; and wavelet entropy. Wavelet entropy was the measure returning the lowest accuracies with a mean of 53.24 ± 3.18%. This measure returned higher accuracies in the ‘all’ and theta bands, and the lowest accuracies in the alpha, beta, and gamma bands.
Sample, approximate, SVD, and spectral entropy returned higher accuracies in the ‘all’ and broad bands. The combined ‘all’ band improved the SVM, kNN, and GBM classifiers. The RF, however, only showed a slight increase. The SVM accuracy was significantly improved (over 12% increase, excluding spectral entropy) by the ‘all’ band for all these measures, as well as the kNN (over 9% increase, excluding spectral entropy). The delta, theta, alpha, and beta bands returned medium accuracies, and the gamma band returned a further drop in classifier performance. These measures typically outperformed wavelet entropy by a large margin, with means of 67.71 ± 7.29%, 68.97 ± 8.13%, 65.23 ± 7.44%, and 65.16 ± 5.93%, respectively.
The Renyi entropy was overall the highest performing entropy measure, with a mean of 82.48 ± 4.20%. In the broad band, the accuracies of this measure were only somewhat higher than those of the sample, approximate, SVD, and spectral entropies. However, the accuracies for Renyi entropy increased in the theta, alpha, beta, and gamma bands. In comparison, the accuracy for the other measures remained stable or decreased in these bands, especially gamma. The combination of ‘all’ bands improved the accuracy, especially for the SVM, which increased by 10.86%. As a result, most of the classifiers in the ‘all’ band were able to achieve over 90%.
The best classifiers were the kNN and RF; generally, the higher the overall accuracy for a band and/or feature, the bigger the difference between these two and the other two classifiers. Overall, RF was the better classifier. However, the kNN returned the highest single accuracy since it, along with the SVM, was greatly improved by combining all the bands, whereas RF and GBM were less affected. Furthermore, Table 1 shows that GBM was often the lowest performing classifier.
Since the combination of the bands performed well, a further experiment was conducted to establish which specific bands were contributing to the high accuracy. Using the same process as described above, each band was excluded from the full set and the remaining bands were used for classification. The ECG signal was also used as an input for each band. This experiment used the highest performing classifier, kNN, and the highest performing entropy metric, Renyi entropy, and the outcomes are summarised in Table 2. The importance of the band reported is the average permutation band importance over ten-fold cross validation.
Table 2 shows that removing a single band had a minor effect on the precision and recall, thus affecting the balanced accuracy but not significantly. Excluding broad and delta increased the accuracy to 95.03% and 94.93%, respectively, from 94.68% when all bands were used. However, excluding the others resulted in a loss of 0.60% or more. Therefore, the theta, alpha, beta, and gamma bands contain important information for Renyi entropy. The band importance from the permutation-based testing is congruent with these findings, with the broad and delta bands returning half the permutation importance of the other bands. These findings are congruent with the trend shown in Table 1 for the Renyi entropy, where broad and delta slightly underperformed compared to the other four non-combination bands.

4. Discussion

Spectral and wavelet entropy were both found by calculating the Shannon entropy of the frequency spectrum, where spectral entropy estimated the spectrum using Welch's method and wavelet entropy used Morlet wavelets. Despite these similarities, the resultant accuracies were significantly different, with spectral entropy outperforming wavelet entropy in every band and with every classifier. This suggests that Welch's method is more suitable for extracting the uncertainty in the frequency domain for this specific task. Furthermore, the spectral and wavelet entropies both returned the lowest accuracies, on average, of all the measures. Therefore, our results suggest that, for these data, measures of complexity in the time domain may be more effective than those in the frequency domain. The measure that returned the highest accuracy, Renyi entropy, is a variation of Shannon entropy applied directly to the time series. This further supports the effectiveness of temporal complexity measures, and further research should explore similar methods.
While the classifier performances for most entropy measures improved when all frequency bands were combined, the SVM and kNN generally improved more than the decision tree-based algorithms, especially the RF. Tree-based models do not need additional parameters to accommodate more inputs, so the extra information may have been under-exploited by these model types. Furthermore, the RF's ensemble of trees built on random subsamples of the feature set may have hindered its ability to consider the extra information. This could explain the limited improvement, and occasional degradation, of the RF when the bands were combined, despite its high performance in the non-combination bands. Therefore, feature selection methods, such as feature ranking, should be used with this classifier to potentially improve accuracy with larger feature sets.
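As a sketch of the suggested remedy, scikit-learn's `SelectFromModel` can rank a large combined feature set by RF impurity importance and discard the weaker features before classification. The dataset here is synthetic and the `"median"` threshold is an illustrative choice, not the study's procedure.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

# Synthetic stand-in for a large combined feature set: 40 features, 8 informative.
X, y = make_classification(n_samples=300, n_features=40, n_informative=8,
                           random_state=0)

# Rank features by RF impurity importance; keep those above the median importance.
selector = SelectFromModel(
    RandomForestClassifier(n_estimators=200, random_state=0),
    threshold="median",
).fit(X, y)
X_reduced = selector.transform(X)
print(X.shape, "->", X_reduced.shape)
```

Pruning the feature set this way lets the RF's random feature subsampling draw from mostly informative columns, which is one plausible way to address the degradation described above.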
A 2021 meta-analysis of resting-state EEGs for the diagnosis of epilepsy and PNES [53] found that comparing oscillations in the theta band may separate epilepsy and PNES. Reuber et al. [4] also found interictal slow rhythms in the theta band in nine out of 50 PNES patients. When considering only the delta, theta, alpha, beta, and gamma bands, the current analysis found that the theta band returned the highest balanced accuracy in 13 out of 24 instances (four classifiers × six entropy measures), indicating that a difference in theta oscillations could be reflected in the entropy. However, the beta band returned the highest accuracy in 8 of these 24 instances, especially for the spectral entropy; therefore, the beta band could also be of interest to future researchers.
Comparison with the literature is complicated by the variety of techniques used to analyse the EEGs of PNES patients. For instance, Pyrzowski et al. [11] extracted the entropy from pooled histograms of the zero-crossing rate, and the six-paper series [20,21,22,23,24,25] and Cura et al. [27] only used one or two entropy measures as part of a larger feature set, obscuring the influence of the entropy. Furthermore, [11,20,21,22,23,24,25] included non-PNES subjects within their cohorts. The papers that included the ECG [15,16,17] all analysed the entropy of heart rate data, a binary signal representing the R peaks, rather than the ECG signal itself. While these studies demonstrate the potential of entropy for this diagnostic task, these fundamental differences in method preclude direct comparison.
Gasparini et al. [13] and Lo Giudice et al. [14] both statistically analysed the entropy of the EEG signal. The authors of [13] found no differences between the Shannon or permutation entropies of PNES patients and healthy controls, and [14] found no difference in interictal permutation entropy between PNES and epilepsy subjects. Therefore, statistical analysis alone may not be sufficient to differentiate between these groups.
The studies published by Ahmadi et al. [10,26] detail the performance of similar entropy measures and classifiers in the frequency bands and used PNES-only and epilepsy-only groups, so an in-depth comparison with the current study is possible, although neither study used an ECG channel, only EEGs, and both only included the interictal state. The 2018 study [26] used an imperialist competitive algorithm to rank the individual feature-band pairs and listed the top five combinations of inputs for each classifier. They found that RF and decision trees were the weaker classifiers compared with SVM-Linear, SVM-RBF, and GBM, whereas the current analysis found that RF was overall the best classifier, with GBM underperforming. Ahmadi et al. (2018) also found that spectral and Renyi entropies were the most important features, compared with Shannon entropy, the Higuchi fractal dimension, and the Katz fractal dimension. The current study did not extract Shannon entropy or any fractal dimensions, so a direct comparison cannot be made. However, this analysis did find that Renyi entropy was a very high-performing metric in all bands, and spectral entropy was better than chance (50% accuracy) in all tests. Ahmadi et al. (2018) did not directly compare the frequency bands, though gamma is not listed among the features of any top-performing input. This is congruent with the current study, since gamma underperformed for most entropy measures, including spectral entropy. The outlier is Renyi entropy, which retained high accuracies in the gamma band in the current analysis. Furthermore, broad band Renyi entropy was listed by [26] in most of the top-performing combinations; by comparison, the current study found that, although Renyi entropy returned the highest broad band accuracies of all measures, the broad band returned lower accuracies than the other bands for this metric. In addition, the delta band was not noted as important by [26] for either entropy measure, in agreement with the findings of the current analysis.
The 2020 study by Ahmadi et al. [10] gave a clearer breakdown of the bands for the Shannon, spectral, and Renyi entropies, although only precision and recall values were reported, not accuracy, and the broad band was not analysed. In addition, the values reported for the delta and theta bands are identical, which is statistically unlikely and is not reflected in the ROC curves also given; therefore, the values reported in [10] for one of these bands may be incorrect. The delta, theta, and gamma bands for all entropy measures, and Shannon entropy in the alpha band, all returned performance metrics close to chance. The beta band, and the spectral and Renyi entropies in the alpha band, however, mostly returned around 70% precision and 60% recall. Their ROC analysis showed that the beta band outperformed the delta, theta, alpha, and gamma bands; the alpha band performed well, but much worse than beta; the delta and theta bands were close to random chance; and gamma distinctly underperformed for all measures. In the current analysis, the Renyi entropy likewise suggests that delta is among the bands least likely to help differentiate PNES from epilepsy, but disagrees for the theta, alpha, beta, and gamma bands, which all returned good and fairly similar accuracies. The trends reported by Ahmadi et al. (2020) were more similar to those of the sample, approximate, spectral, and SVD entropies in the current study, where gamma significantly underperformed. Spectral entropy also showed a slight increase in beta band accuracies, but only spectral entropy showed this, and delta performed on par with the other bands.
A limitation of our study is that the two classes are not age- or sex-matched. The ages are similar enough that significant influence is unlikely. However, the PNES group has considerably more females than males, whereas the epilepsy group has more males than females. This is due to PNES being more commonly diagnosed in females than males by a factor of 3:1 [54,55]. In previous studies [56,57,58], machine learning successfully separated EEG entropy measures of females and males; therefore, it is possible that the balanced accuracies were inflated by the disparity in sex between the two groups. To check that this disparity did not have a significant impact, the model that returned the highest accuracy (Renyi entropy with a kNN classifier, with the delta, theta, alpha, beta, and gamma bands input as separate features) was trained and tested again on a subset of subjects that was age- and sex-matched. This matched dataset comprised 50 subjects, with 11 females and 14 males in each class; the epilepsy group had a mean age of 39.16 ± 11.86 years and the PNES group 38.52 ± 10.96 years. The balanced accuracy, precision, and recall on the matched dataset were 95.40%, 97.10%, and 93.33%, respectively; that is, the balanced accuracy and precision increased slightly while the recall decreased slightly. Considering this outcome and the similarities in the literature, it is reasonable to conclude that the difference in sexes between the classes had a minor impact and that entropy measures are indeed powerful measures for differentially diagnosing PNES and epilepsy. Another limitation is that the data include both preictal (before seizure) and interictal (resting) recordings, so it is not possible to separate the impacts of these two types of data on the results. Finally, due to the small patient cohort, the current study used ten-fold cross-validation to assess the classifiers.
Therefore, samples from each subject were present in both the training and testing datasets. While this is a limitation, it demonstrates that this method is viable and, if trained on a larger population, could be beneficial in clinical contexts.
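The leakage described above can be made concrete: epoch-wise k-fold splitting lets epochs from the same subject fall into both the training and test folds, whereas a group-aware splitter such as scikit-learn's `GroupKFold` keeps each subject's epochs in a single fold. The subject and epoch counts below are hypothetical.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, GroupKFold

# Hypothetical setup: 6 epochs from each of 10 subjects, one label per subject.
subjects = np.repeat(np.arange(10), 6)
labels = np.repeat(np.arange(10) % 2, 6)
X = np.arange(60, dtype=float)[:, None]

# Epoch-wise CV (as in the study): a subject's epochs can land in both splits.
tr, te = next(StratifiedKFold(n_splits=10, shuffle=True,
                              random_state=0).split(X, labels))
print("subjects in both folds:", set(subjects[tr]) & set(subjects[te]))

# Subject-wise CV: GroupKFold confines each subject's epochs to one fold.
tr, te = next(GroupKFold(n_splits=5).split(X, labels, groups=subjects))
print("subjects in both folds:", set(subjects[tr]) & set(subjects[te]))  # set()
```

With a larger cohort, a subject-wise split of this kind would give an accuracy estimate that reflects performance on genuinely unseen patients.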

5. Conclusions

This study shows that the analysis of different frequency bands of the EEG, plus the ECG, with different entropy algorithms returns useful information for the classification of PNES. Furthermore, the bands providing the highest accuracy vary from entropy measure to entropy measure; therefore, combining bands for classification by machine learning algorithms can return higher accuracies. While this increases the computational cost, entropy measures are quick and inexpensive to compute, so the added computation is a small price for the improved performance. The current analysis found that the highest balanced accuracy, 95.03%, was returned by the delta, theta, alpha, beta, and gamma bands combined for the Renyi entropy when a kNN was used for classification. However, this high performance may have been affected by the use of epoch-wise ten-fold cross-validation. The kNN and RF classifiers returned the overall highest accuracies, with the GBM repeatedly underperforming compared to the others, and the SVM and kNN showed more improvement with the combination of the bands. Further work should explore the addition of other low-cost features to increase the performance and improve the robustness of the classifiers across patients.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/e24101348/s1, Table S1: Balanced accuracies of the approximate entropy of the small subset of data with the test values of m and r × SD. Average reports the average accuracy across the band. Ordered from the highest average balanced accuracy to the lowest in the Average column; Table S2: Balanced accuracies of the sample entropy of the small subset of data with the test values of m and r × SD. Average reports the average accuracy across the band. Ordered from the highest average balanced accuracy to the lowest in the Average column; Table S3: Precision of the entropy metrics for every classifier and EEG frequency band (ECG is included in every band). Bold values denote the highest precision amongst the classifiers for each EEG band and entropy measure; Table S4: Recalls of the entropy metrics for every classifier and EEG frequency band (ECG is included in every band). Bold values denote the highest recall amongst the classifiers for each EEG band and entropy measure.

Author Contributions

Conceptualisation, C.H. and D.A.; method, C.H.; formal analysis, C.H.; investigation, C.H.; resources, M.Y. and S.E.; data curation, M.Y. and S.E.; writing—original draft preparation, C.H.; writing—review and editing, C.H., D.A., M.Y., S.E. and H.T.; supervision, D.A.; co-supervision, M.Y. and H.T. All authors have read and agreed to the published version of the manuscript.

Funding

There was no specific funding for this study. M.Y. was supported by a Medical Research Council Clinical Academic Research Partnership award (MR/V037676/1).

Institutional Review Board Statement

The study was approved by the Ethics Committee of Fulham, London as part of a larger study on biomarkers in functional seizures (IRAS 231863, REC 18/LO/0328, 18 July 2018).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data used in this study were provided by St George’s Hospital and are not publicly available.

Acknowledgments

Thank you to the University of Surrey Doctoral College for funding the doctorate of C.H.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Brown, R.J.; Reuber, M. Psychological and Psychiatric Aspects of Psychogenic Non-Epileptic Seizures (PNES): A Systematic Review. Clin. Psychol. Rev. 2016, 45, 157–182. [Google Scholar] [CrossRef] [PubMed]
  2. Benbadis, S.R.; Allen Hauser, W. An Estimate of the Prevalence of Psychogenic Non-Epileptic Seizures. Seizure 2000, 9, 280–281. [Google Scholar] [CrossRef] [PubMed]
  3. Bayly, J.; Carino, J.; Petrovski, S.; Smit, M.; Fernando, D.A.; Vinton, A.; Yan, B.; Gubbi, J.R.; Palaniswami, M.S.; O’Brien, T.J. Time-Frequency Mapping of the Rhythmic Limb Movements Distinguishes Convulsive Epileptic from Psychogenic Nonepileptic Seizures. Epilepsia 2013, 54, 1402–1408. [Google Scholar] [CrossRef] [PubMed]
  4. Reuber, M.; Fernández, G.; Bauer, J.; Singh, D.D.; Elger, C.E. Interictal EEG Abnormalities in Patients with Psychogenic Nonepileptic Seizures. Epilepsia 2002, 43, 1013–1020. [Google Scholar] [CrossRef] [PubMed]
  5. Benbadis, S.R. How Many Patients with Pseudoseizures Receive Antiepileptic Drugs Prior to Diagnosis? Eur. Neurol. 1999, 41, 114–115. [Google Scholar] [CrossRef] [PubMed]
  6. Angus-Leppan, H. Diagnosing Epilepsy in Neurology Clinics: A Prospective Study. Seizure 2008, 17, 431–436. [Google Scholar] [CrossRef]
  7. Whitehead, K.; Kane, N.; Wardrope, A.; Kandler, R.; Reuber, M. Proposal for Best Practice in the Use of Video-EEG When Psychogenic Non-Epileptic Seizures Are a Possible Diagnosis. Clin. Neurophysiol. Pract. 2017, 2, 130–139. [Google Scholar] [CrossRef]
  8. Panayiotopoulos, C. The Epilepsies: Seizures, Syndromes and Management; Bladon Medical Publishing: Oxfordshire, UK, 2005. [Google Scholar]
  9. Benbadis, S.R.; Tatum, W.O. Overintepretation of EEGS and Misdiagnosis of Epilepsy. J. Clin. Neurophysiol. 2003, 20, 42–44. [Google Scholar] [CrossRef]
  10. Ahmadi, N.; Pei, Y.; Carrette, E.; Aldenkamp, A.P.; Pechenizkiy, M. EEG-Based Classification of Epilepsy and PNES: EEG Microstate and Functional Brain Network Features. Brain Inform. 2020, 7, 1–22. [Google Scholar] [CrossRef]
  11. Pyrzowski, J.; Sieminski, M.; Sarnowska, A.; Jedrzejczak, J.; Nyka, W.M. Interval Analysis of Interictal EEG: Pathology of the Alpha Rhythm in Focal Epilepsy. Sci. Rep. 2015, 5, 16230. [Google Scholar] [CrossRef] [Green Version]
  12. Harpale, V.K.; Bairagi, V.K. Effective Method for Epileptic and Nonepileptic Seizure Classification. In Brain Seizure Detection and Classification Using EEG Signals; Harpale, V.K., Bairagi, V.K., Eds.; Academic Press: Cambridge, MA, USA, 2022; pp. 125–145. ISBN 978-0-323-91120-7. [Google Scholar]
  13. Gasparini, S.; Campolo, M.; Ieracitano, C.; Mammone, N.; Ferlazzo, E.; Sueri, C.; Tripodi, G.G.; Aguglia, U.; Morabito, F.C. Information Theoretic-Based Interpretation of a Deep Neural Network Approach in Diagnosing Psychogenic Non-Epileptic Seizures. Entropy 2018, 20, 43. [Google Scholar] [CrossRef] [PubMed]
  14. Lo Giudice, M.; Varone, G.; Ieracitano, C.; Mammone, N.; Tripodi, G.G.; Ferlazzo, E.; Gasparini, S.; Aguglia, U.; Morabito, F.C. Permutation Entropy-Based Interpretability of Convolutional Neural Network Models for Interictal EEG Discrimination of Subjects with Epileptic Seizures vs. Psychogenic Non-Epileptic Seizures. Entropy 2022, 24, 102. [Google Scholar] [CrossRef] [PubMed]
  15. Ponnusamy, A.; Marques, J.L.B.; Reuber, M. Heart Rate Variability Measures as Biomarkers in Patients with Psychogenic Nonepileptic Seizures: Potential and Limitations. Epilepsy Behav. 2011, 22, 685–691. [Google Scholar] [CrossRef] [PubMed]
  16. Ponnusamy, A.; Marques, J.L.B.; Reuber, M. Comparison of Heart Rate Variability Parameters during Complex Partial Seizures and Psychogenic Nonepileptic Seizures. Epilepsia 2012, 53, 1314–1321. [Google Scholar] [CrossRef] [PubMed]
  17. Romigi, A.; Ricciardo Rizzo, G.; Izzi, F.; Guerrisi, M.; Caccamo, M.; Testa, F.; Centonze, D.; Mercuri, N.B.; Toschi, N. Heart Rate Variability Parameters During Psychogenic Non-Epileptic Seizures: Comparison Between Patients With Pure PNES and Comorbid Epilepsy. Front. Neurol. 2020, 11, 713. [Google Scholar] [CrossRef] [PubMed]
  18. Sundararajan, T.; Tesar, G.E.; Jimenez, X.F. Biomarkers in the Diagnosis and Study of Psychogenic Nonepileptic Seizures: A Systematic Review. Seizure 2016, 35, 11–22. [Google Scholar] [CrossRef]
  19. Xu, P.; Xiong, X.; Xue, Q.; Li, P.; Zhang, R.; Wang, Z.; Valdes-Sosa, P.A.; Wang, Y.; Yao, D. Differentiating between Psychogenic Nonepileptic Seizures and Epilepsy Based on Common Spatial Pattern of Weighted EEG Resting Networks. IEEE Trans. Biomed. Eng. 2014, 61, 1747–1755. [Google Scholar] [CrossRef]
  20. Pippa, E.; Zacharaki, E.; Mporas, I.; Megalooikonomou, V.; Tsirka, V.; Richardson, M.; Koutroumanidis, M. Classification of Epileptic and Non-Epileptic EEG Events. In Proceedings of the 4th International Conference on Wireless Mobile Communication and Healthcare Transforming Healthcare Through Innovations in Mobile and Wireless Technologies (MOBIHEALTH), Athens, Greece, 3–5 November 2014; pp. 87–90. [Google Scholar]
  21. Kanas, V.G.; Zacharaki, E.I.; Pippa, E.; Tsirka, V.; Koutroumanidis, M.; Megalooikonomou, V. Classification of Epileptic and Non-Epileptic Events Using Tensor Decomposition. In Proceedings of the 2015 IEEE 15th International Conference on Bioinformatics and Bioengineering (BIBE), Belgrade, Serbia, 2–4 November 2015; pp. 1–5. [Google Scholar]
  22. Pippa, E.; Zacharaki, E.I.; Mporas, I.; Tsirka, V.; Richardson, M.P.; Koutroumanidis, M.; Megalooikonomou, V. Improving Classification of Epileptic and Non-Epileptic EEG Events by Feature Selection. Neurocomputing 2016, 171, 576–585. [Google Scholar] [CrossRef]
  23. Pippa, E.; Kanas, V.G.; Zacharaki, E.I.; Tsirka, V.; Koutroumanidis, M.; Megalooikonomou, V. EEG-Based Classification of Epileptic and Non-Epileptic Events Using Multi-Array Decomposition. Int. J. Monit. Surveill. Technol. Res. 2017, 4, 1–15. [Google Scholar] [CrossRef]
  24. Pippa, E.; Zacharaki, E.I.; Koutroumanidis, M.; Megalooikonomou, V. Data Fusion for Paroxysmal Events’ Classification from EEG. J. Neurosci. Methods 2017, 275, 55–65. [Google Scholar] [CrossRef] [Green Version]
  25. Pippa, E.; Zacharaki, E.I.; Özdemir, A.T.; Barshan, B.; Megalooikonomou, V. Global vs Local Classification Models for Multi-Sensor Data Fusion. In Proceedings of the SETN ’18 Proceedings of the 10th Hellenic Conference on Artificial Intelligence, Patras, Greece, 9–12 July 2018. [Google Scholar]
  26. Ahmadi, N.; Carrette, E.; Aldenkamp, A.P.; Pechenizkiy, M. Finding Predictive EEG Complexity Features for Classification of Epileptic and Psychogenic Nonepileptic Seizures Using Imperialist Competitive Algorithm. In Proceedings of the 2018 IEEE 31st International Symposium on Computer-Based Medical Systems (CBMS), Karlstad, Sweden, 18–21 June 2018; pp. 164–169. [Google Scholar]
  27. Cura, O.K.; Yilmaz, G.C.; Türe, H.S.; Akan, A. Classification of Psychogenic Non-Epileptic Seizures Using Synchrosqueezing Transform of EEG Signals. Eur. Signal Process. Conf. 2021, 1172–1176. [Google Scholar] [CrossRef]
  28. Lafrance, W.C.; Baker, G.A.; Duncan, R.; Goldstein, L.H.; Reuber, M. Minimum Requirements for the Diagnosis of Psychogenic Nonepileptic Seizures: A Staged Approach: A Report from the International League Against Epilepsy Nonepileptic Seizures Task Force. Epilepsia 2013, 54, 2005–2018. [Google Scholar] [CrossRef] [PubMed]
  29. Gramfort, A. MEG and EEG Data Analysis with MNE-Python. Front. Neurosci. 2013, 7, 267. [Google Scholar] [CrossRef]
  30. Tan, L.; Jiang, J. Infinite Impulse Response Filter Design. In Digital Signal Processing; Elsevier: Amsterdam, The Netherlands, 2019; pp. 315–419. [Google Scholar]
  31. Hejjel, L.; Kellenyi, L. The Corner Frequencies of the ECG Amplifier for Heart Rate Variability Analysis. Physiol. Meas. 2005, 26, 39–47. [Google Scholar] [CrossRef] [PubMed]
  32. Jas, M.; Engemann, D.A.; Bekhti, Y.; Raimondo, F.; Gramfort, A. Autoreject: Automated Artifact Rejection for MEG and EEG Data. Neuroimage 2017, 159, 417–429. [Google Scholar] [CrossRef] [PubMed]
  33. Van Gent, P.; Farah, H.; van Nes, N.; van Arem, B. Heart Rate Analysis for Human Factors: Development and Validation of an Open Source Toolkit for Noisy Naturalistic Heart Rate Data. In Proceedings of the HUMANIST 2018 Conference, The Hague, The Netherlands, 13–14 June 2018. [Google Scholar]
  34. Flood, M.W.; Grimm, B. EntropyHub: An Open-Source Toolkit for Entropic Time Series Analysis. PLoS ONE 2021, 16, e0259448. [Google Scholar] [CrossRef]
  35. Schiratti, J.-B.; Le Douget, J.-E.; van Quyen, M.L.; Essid, S.; Gramfort, A. An Ensemble Learning Approach to Detect Epileptic Seizures from Long Intracranial EEG Recordings. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018. [Google Scholar]
  36. Dit: Discrete Information Theory Dit 1.2.3 Documentation. Available online: https://dit.readthedocs.io/en/latest/index.html (accessed on 12 April 2022).
  37. Pincus, S.M. Approximate Entropy as a Measure of System Complexity. Proc. Nati. Acad. Sci. USA 1991, 88, 2297–2301. [Google Scholar] [CrossRef]
  38. Pincus, S.M. Assessing Serial Irregularity and Its Implications for Health. Ann. N. Y. Acad. Sci. 2001, 954, 245–267. [Google Scholar] [CrossRef]
  39. Richman, J.S.; Moorman, J.R. Physiological Time-Series Analysis Using Approximate and Sample Entropy. Am. J. Physiol. Heart Circ. Physiol. 2000, 278, 2039–2049. [Google Scholar] [CrossRef]
  40. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 623–656. [Google Scholar] [CrossRef]
  41. Sleigh, J.W.; Steyn-Ross, D.A.; Steyn-Ross, M.L.; Grant, C.; Ludbrook, G. Cortical Entropy Changes with General Anaesthesia: Theory and Experiment. Physiol. Meas. 2004, 25, 921–934. [Google Scholar] [CrossRef] [PubMed]
  42. Alter, O.; Brown, P.O.; Botstein, D. Singular Value Decomposition for Genome-Wide Expression Data Processing and Modeling. Proc. Natl. Acad. Sci. USA 2000, 97, 10101–10106. [Google Scholar] [CrossRef] [PubMed]
  43. Li, Z.; Cui, Y.; Li, L.; Chen, R.; Dong, L.; Du, J. Hierarchical Amplitude-Aware Permutation Entropy-Based Fault Feature Extraction Method for Rolling Bearings. Entropy 2022, 24, 310. [Google Scholar] [CrossRef]
  44. Renyi, A. On Measures of Entropy and Information; University of California Press: Berkeley, CA, USA, 1961; Volume 4. [Google Scholar]
  45. Quian Quiroga, R.; Rosso, O.A.; Başar, E.; Schürmann, M. Wavelet Entropy in Event-Related Potentials: A New Method Shows Ordering of EEG Oscillations. Biol. Cybern. 2001, 84, 291–299. [Google Scholar] [CrossRef]
  46. Blanco, S.; Figliola, A.; Quiroga, R.Q.; Rosso, O.A.; Serrano, E. Time-Frequency Analysis of Electroencephalogram Series. III. Wavelet Packets and Information Cost Function. Phys. Rev. E 1998, 57, 932. [Google Scholar] [CrossRef]
  47. Faust, O.; Acharya, U.R.; Adeli, H.; Adeli, A. Wavelet-Based EEG Processing for Computer-Aided Seizure Detection and Epilepsy Diagnosis. Seizure 2015, 26, 56–64. [Google Scholar] [CrossRef] [PubMed]
  48. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-Learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  49. Cortes, C.; Vapnik, V. Support-Vector Networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  50. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  51. Friedman, J.H. Greedy Function Approximation: A Gradient Boosting Machine. Ann. Stat. 2001, 29, 1189–1232. [Google Scholar] [CrossRef]
  52. Freund, Y.; Schapire, R.E. A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting. J. Comput. Syst. Sci. 1997, 55, 119–139. [Google Scholar] [CrossRef] [Green Version]
  53. Faiman, I.; Smith, S.; Hodsoll, J.; Young, A.H.; Shotbolt, P. Resting-State EEG for the Diagnosis of Idiopathic Epilepsy and Psychogenic Nonepileptic Seizures: A Systematic Review. Epilepsy Behav. 2021, 121, 108047. [Google Scholar] [CrossRef] [PubMed]
  54. Sigurdardottir, K.R.; Olafsson, E. Incidence of Psychogenic Seizures in Adults: A Population-Based Study in Iceland. Epilepsia 1998, 39, 749–752. [Google Scholar] [CrossRef] [PubMed]
  55. Szaflarski, J.P.; Ficker, D.M.; Cahill, W.T.; Privitera, M.D. Four-Year Incidence of Psychogenic Nonepileptic Seizures in Adults in Hamilton County, OH. Neurology 2000, 55, 1561–1563. [Google Scholar] [CrossRef]
  56. Hu, J. An Approach to EEG-Based Gender Recognition Using Entropy Measurement Methods. Knowl.-Based Syst. 2018, 140, 134–141. [Google Scholar] [CrossRef]
  57. Wang, P.; Hu, J. A Hybrid Model for EEG-Based Gender Recognition. Cogn. Neurodyn. 2019, 13, 541–554. [Google Scholar] [CrossRef]
  58. Al-Qazzaz, N.K.; Ali, S.H.M.; Ahmad, S.A. Entropy-Based EEG Markers for Gender Identification of Vascular Dementia Patients. IFMBE Proc. 2021, 81, 121–128. [Google Scholar] [CrossRef]
Table 1. Balanced accuracies of the entropy metrics for every classifier and EEG frequency band (ECG is included in every band). Bold values denote the highest accuracy amongst the classifiers for each EEG band and entropy measure.
| Features | Classifier | All | Broad | Delta | Theta | Alpha | Beta | Gamma |
|---|---|---|---|---|---|---|---|---|
| Renyi entropy | SVM | 91.41% | 75.95% | 74.74% | 80.55% | 80.17% | 79.36% | 78.14% |
| | kNN | 94.68% | 83.17% | 80.23% | 87.73% | 88.29% | 89.41% | 87.83% |
| | RF | 92.75% | 83.29% | 81.13% | 88.11% | 87.38% | 89.17% | 87.18% |
| | GBM | 81.63% | 71.27% | 71.97% | 76.57% | 76.65% | 74.36% | 76.26% |
| Sample entropy (m = 1, r = 0.15 × SD) | SVM | 84.11% | 71.55% | 66.67% | 67.09% | 63.83% | 63.05% | 58.58% |
| | kNN | 86.64% | 77.61% | 64.61% | 66.49% | 60.29% | 65.85% | 59.87% |
| | RF | 79.92% | 77.76% | 67.96% | 69.70% | 62.75% | 66.46% | 62.00% |
| | GBM | 73.62% | 67.48% | 63.59% | 65.13% | 61.21% | 62.10% | 59.99% |
| Approximate entropy (m = 2, r = 0.2 × SD) | SVM | 85.66% | 73.04% | 64.97% | 67.79% | 67.59% | 62.20% | 58.50% |
| | kNN | 87.82% | 78.17% | 64.07% | 68.28% | 67.13% | 68.23% | 60.87% |
| | RF | 80.87% | 78.70% | 67.02% | 72.22% | 68.52% | 68.34% | 63.06% |
| | GBM | 74.18% | 68.60% | 63.30% | 65.54% | 62.95% | 62.58% | 60.83% |
| SVD entropy | SVM | 83.26% | 69.28% | 62.44% | 64.16% | 62.81% | 64.28% | 56.58% |
| | kNN | 82.37% | 72.22% | 59.49% | 62.48% | 61.14% | 61.82% | 53.30% |
| | RF | 76.72% | 74.84% | 64.55% | 66.16% | 63.80% | 65.82% | 55.88% |
| | GBM | 72.29% | 66.81% | 61.78% | 63.10% | 60.00% | 63.11% | 55.87% |
| Spectral entropy | SVM | 79.03% | 69.24% | 62.84% | 62.98% | 63.01% | 65.16% | 56.25% |
| | kNN | 77.34% | 72.92% | 61.15% | 60.40% | 62.65% | 65.20% | 54.10% |
| | RF | 72.24% | 74.79% | 65.35% | 63.62% | 65.17% | 68.36% | 58.67% |
| | GBM | 69.44% | 67.04% | 61.80% | 61.92% | 60.71% | 64.76% | 58.32% |
| Wavelet entropy | SVM | 58.30% | 54.72% | 54.22% | 60.62% | 50.88% | 50.94% | 50.19% |
| | kNN | 53.57% | 52.24% | 52.48% | 56.87% | 50.95% | 49.89% | 49.54% |
| | RF | 55.26% | 53.23% | 52.43% | 58.96% | 50.35% | 50.73% | 50.60% |
| | GBM | 57.05% | 52.36% | 52.14% | 59.01% | 51.07% | 50.72% | 51.48% |
Table 2. Precision, recall, and balanced accuracy of the kNN classifier trained and tested on Renyi entropy for all EEG frequency bands, excluding the corresponding band. ‘None’ denotes that all bands are included with no exclusions. Band Importance shows the permutation importance of the band. The ECG channel was included in all iterations.
| Band Excluded | Precision | Recall | Accuracy | Band Importance |
|---|---|---|---|---|
| Broad | 96.40% | 95.48% | 95.03% | 0.052 |
| Delta | 96.18% | 95.63% | 94.93% | 0.062 |
| Theta | 95.90% | 94.27% | 94.08% | 0.111 |
| Alpha | 95.65% | 94.59% | 94.03% | 0.132 |
| Beta | 95.73% | 94.54% | 94.07% | 0.128 |
| Gamma | 95.64% | 94.57% | 94.01% | 0.114 |
| None | 96.12% | 95.19% | 94.68% | – |