Article

Enhanced Epileptic Seizure Detection through Wavelet-Based Analysis of EEG Signal Processing

by Sebastián Urbina Fredes 1, Ali Dehghan Firoozabadi 1,*, Pablo Adasme 2, David Zabala-Blanco 3, Pablo Palacios Játiva 4 and Cesar Azurdia-Meza 5

1 Department of Electricity, Universidad Tecnológica Metropolitana, Santiago 7800002, Chile
2 Department of Electrical Engineering, Universidad de Santiago de Chile, Santiago 9170124, Chile
3 Department of Computing and Industries, Universidad Católica del Maule, Talca 3466706, Chile
4 Escuela de Informática y Telecomunicaciones, Universidad Diego Portales, Santiago 8370190, Chile
5 Department of Electrical Engineering, Universidad de Chile, Santiago 8370451, Chile
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(13), 5783; https://doi.org/10.3390/app14135783
Submission received: 19 May 2024 / Revised: 15 June 2024 / Accepted: 17 June 2024 / Published: 2 July 2024
(This article belongs to the Special Issue AI-Based Biomedical Signal Processing)

Abstract

Epilepsy affects millions of people worldwide, making timely seizure detection crucial for effective treatment and enhanced well-being. Electroencephalogram (EEG) analysis offers a non-intrusive solution, but its visual interpretation is error-prone and time-consuming. Many existing works focus solely on achieving competitive accuracy without considering processing speed or the computational complexity of their models. This study aimed to develop an automated technique for identifying epileptic seizures in EEG data. The effort is primarily focused on achieving high accuracy while operating exclusively within a narrow frequency band of the signal and minimizing computational complexity. In this article, a new automated approach is presented for seizure detection that combines signal processing and machine learning techniques. The proposed method comprises four stages: (1) preprocessing: a Savitzky–Golay filter removes background noise; (2) decomposition: the discrete wavelet transform (DWT) extracts the spontaneous alpha and beta frequency bands; (3) feature extraction: six features (mean, standard deviation, skewness, kurtosis, energy, and entropy) are computed for each frequency band; (4) classification: a support vector machine (SVM) classifies signals as normal or containing a seizure. The method was assessed using two publicly available EEG datasets. The highest achieved accuracy was 92.82% for the alpha band and 90.55% for the beta band, demonstrating adequate capability in both bands for accurate seizure detection. Furthermore, the low computational cost suggests a potentially valuable application in real-time assessment scenarios. The obtained results indicate its capacity as a valuable instrument for diagnosing epilepsy and monitoring patients. Further research is necessary for clinical validation and potential real-time deployment.

1. Introduction

Epilepsy is a neurological condition that impacts over 50 million individuals globally [1]. It is characterized by abnormal increases in brain electrical activity, leading to symptoms such as loss of attention, hallucinations, and convulsions [2]. The consequences of epileptic seizures can be severe, impacting physical health, mental well-being, and social relationships. These effects include loss of consciousness, potential injuries, and, in extreme cases, even the risk of sudden death. The acquisition of epileptic episodes through the electroencephalogram (EEG) provides real-time, cost-effective, and non-invasive information with exceptional spatio-temporal resolution [3]. Detecting seizures in EEG signals plays a critical role in diagnosing and treating epilepsy. Depending on the extent of brain area involvement during a seizure, epilepsy can be classified into two main types: focal seizures, which affect specific brain areas, and generalized seizures, which involve both sides of the brain.
The neuronal signals, originating from brain activity, are captured by electrodes positioned on the scalp’s surface [4]. These electrodes are placed based on the configuration proposed in the international 10–20 system of electrode placement [5], as depicted in Figure 1. Electrodes are identified by their locations: frontal polar (Fp), frontal (F), central (C), parietal (P), temporal (T), occipital (O), and auricular (A), with even numbers on the right hemisphere, odd numbers on the left hemisphere, and “z” denoting the midline. Additionally, the nasion (bridge of the nose), inion (back of the head), and auricular points are employed as reference landmarks.
EEG signals belong to the complex domain, wherein each channel associated with an electrode measures the sum of electrical impulses from the cerebral cortex, originating from the activity of billions of neurons proximal to the electrode [6]. EEG signals provide temporal, spatial, and spectral information. The frequency range of EEG signals typically varies between 0.5 and 100 Hz, delineating five principal bands associated with brain and body states, identified by the Greek letters delta (0.5–4 Hz), theta (4–8 Hz), alpha (8–13 Hz), beta (13–30 Hz), and gamma (30–100 Hz) [7]. During an epileptic seizure, hypersynchronization of nervous signals leads to a significant increase in signal amplitude across the bandwidth. This increase is observable in three phases: preictal, ictal, and interictal, corresponding to the periods preceding the seizure, during the seizure, and the period between two attacks, respectively [8]. Figure 2 shows an EEG signal with an epileptic seizure occurring in the interval 2996–3036 s, as annotated by neurophysiologists. The beginning and end of the seizure period were delineated, along with the respective preictal and postictal periods.
Classifying EEG signals presents a significant challenge due to the dynamic nature of the signals and the diverse seizure patterns observed across patients and recording sessions. Additionally, EEG data acquisition systems are susceptible to various forms of interference, such as muscle movements, blinking, and ambient background noise. While visual analysis of EEG signals can identify epileptic seizures, this manual process is slow, costly, and prone to human error. The increasing demand for more accurate and efficient diagnoses has prompted researchers to develop algorithms for the automatic detection of these events.
This study introduces a model for detecting epileptic seizures in EEGs based on the classification of alpha and beta brainwaves during intervals with and without seizures. A Savitzky–Golay filter is employed to increase the signal-to-noise ratio (SNR), and the alpha and beta waves are derived through the discrete wavelet transform (DWT). Features from every EEG signal epoch are then extracted through statistical analysis, and the epochs are classified as seizure or non-seizure using a linear support vector machine (SVM). Algorithm parameters are adjusted to maximize performance and precision in detecting epileptic seizures.
The key contributions of this work are: (1) the implementation of noise reduction techniques in the signals, which is crucial for training accurate models in EEG analysis; (2) signal decomposition, which significantly reduces data dimensionality and thus the computational cost of processing; and (3) a detailed exploration and comparison of alpha- and beta-band classification, which not only enables detection on individual channels but also extends the analysis to single frequency bands, enhancing its versatility.
The article is arranged into the following sections: Section 2 provides a thorough review of related works, Section 3 introduces the proposed approach, Section 4 presents the databases, results, performance, and discussion, and ultimately, Section 5 concludes by presenting findings and outlining future research directions.

2. Related Works

The field of epileptic seizure detection in EEG signals has seen significant progress in recent years, characterized by the introduction of novel techniques and approaches rooted in machine learning (ML). A comprehensive review of the literature unveils a diverse range of methodologies utilized in studying epileptic seizures in EEG signals, all united by the shared goal of enhancing diagnostic accuracy while streamlining analysis resources and time. In the following, a summary is presented of the latest advancements in this field, encompassing both recent developments and pertinent classical research.
Deep learning (DL) models have become widely used in EEG signal analysis due to their high accuracy. Convolutional neural networks (CNNs), in particular, are favored for their pooling layers that efficiently reduce data dimensionality and automate feature selection. This makes them a preferred choice among researchers. Despite their advantages, DL models are complex. They require substantial computational resources, involve numerous nonlinear functions, and need large datasets for effective generalization. Training these models can be time-consuming and involves careful tuning of parameters to avoid overfitting. Additionally, the inner workings of hidden layers can be opaque, posing challenges for implementation in simpler applications.
To mitigate some of these challenges, the authors of [9] used a one-dimensional CNN (1D-CNN) with a single convolutional layer, achieving an impressive accuracy of 99.4%. Similarly, the authors of [10] utilized time- and frequency-domain signals in a three-layer CNN, reaching 97.5% accuracy. The authors of [11] achieved 92.37% accuracy using 2D CNNs for single electrodes and 3D CNNs for images derived from multiple electrodes.
Recurrent neural networks (RNNs) have also proven effective in capturing the temporal dynamics of EEG signals. Long short-term memory (LSTM) networks and their variants, such as Bidirectional LSTM (BiLSTM), gated recurrent unit (GRU), and fully convolutional nested LSTM (FC-NLSTM), have shown remarkable accuracies of 99.3%, 100%, and 100%, respectively, as reported by [12,13,14]. The authors agree that these models, despite their high accuracy, require substantial computational resources.
Transfer learning, exemplified by [15] combining networks like ResNet and Inception, achieves 100% accuracy. However, optimizing these large models for computational efficiency remains challenging due to their complex hidden layers. Another high-performing model is proposed by [16], utilizing a recurrent topic-synchronized variational autoencoder (RTS-RCVAE) with an accuracy of 98.43%, albeit at significant computational expense.
Several studies have integrated DL techniques with signal processing, incorporating manual feature extraction. Many of these studies utilize the Wavelet Transform (WT) for signal decomposition. The objective of employing these techniques is to reduce signal dimensionality, extracting signal representations across different frequency bands and thereby reducing the volume of data processed by DL models. Models combining WT with DL have consistently attained high accuracy. For instance, [17] utilized multiresolution Wavelet Transform (WT) in conjunction with an artificial neural network (ANN), achieving an accuracy of 99.6%. The authors of [18] explored the continuous Wavelet Transform (CWT) for statistical feature extraction and classification using an LSTM, achieving 100% accuracy. Another model proposed by [19] employed Wavelet Packet Decomposition (WPD) and the Fast Fourier Transform (FFT), reaching an accuracy of 98.33% through CNN training. The approach of combining the Discrete Wavelet Transform (DWT) with LSTM training, proposed by [20], obtained 96.1% accuracy in classifying statistical features. In [21], the Tunable-Q Wavelet Transform (TQWT) was applied to signals to compute multiple linear and nonlinear features, including statistical, frequency, and nonlinear aspects such as fractal dimensions (FDs) and entropy. A classification method combining ML and DL, with a CNN-RNN-based DL model, achieved an accuracy of 99.71%. Regarding implementation costs associated with WT algorithms, all cases demonstrate efforts to reduce the volume of data input into complex DL models, either through signal decomposition or feature extraction.
In the field of traditional ML, various signal and data processing techniques have been integrated with lightweight classifiers. The binary SVM classifier is particularly favored for its creation of models where the decision function is a simple linear function with coefficients equal to the number of features being classified. Linear and nonlinear kernels such as RBF and Polynomial have been commonly employed in classifications, with the latter two incurring higher computational costs compared to the linear kernel.
Studies utilizing SVM-based feature classification techniques have employed signal decomposition methods in time-domain subbands, including DWT [22], Hilbert–Huang Transform (HHT) with Empirical Wavelet Transform (EWT) [23], Discrete Cosine Transform (DCT) [24], and Conditional Mutual Information Maximization (CMIM) feature extraction [25], achieving high accuracy levels of 95.6%, 100%, 97%, and 99.83%, respectively. In [26], the SVM approach was compared with algorithms like K-nearest neighbors (KNN) and Linear Discriminant Analysis (LDA) in the classification of third-order tensors obtained from HHT and WT, represented by Canonical Polyadic Decomposition (CPD) and Block Term Decomposition (BTD), achieving performance exceeding 98%.
In [27], an approach based on KNN classification of histograms of Multi-Level Local Patterns (MLP), obtained through Empirical Mode Decomposition (EMD) and Intrinsic Mode Functions (IMF), was used, yielding an accuracy of 98.67%. Finally, [28] applied a five-level EWT to decompose EEG signals, utilized time-frequency features in real time, and further post-processed the classification outcomes, achieving a mean accuracy of 99.88%.
ML algorithms, unlike DL models, necessitate manual feature extraction. Nonetheless, results indicate that these procedures do not significantly impair model performance. Furthermore, controlling the number of input features allows for the creation of simpler models with a small number of representative examples per class. Overall, these models, due to their low data volume requirements for training, better fulfill the computational cost reduction needs.

3. Methodology

In the first step of the proposed method, the channels undergo Savitzky–Golay filtering, which enhances the SNR without causing significant information loss. Subsequently, the signals are decomposed into their spontaneous alpha and beta bands using the DWT to decrease the dimensions and volume of the data. Features (mean, variance, skewness, kurtosis, entropy, and energy) are then extracted within 1 s epochs that overlap by half an epoch. A binary classifier is then trained using a linear-kernel SVM and applied to a set of test signals. Validation is carried out using cross-validation. A diagram illustrating the method is shown in Figure 3.

3.1. EEG Signals

EEG signals are an essential resource for understanding brain activity, as they reflect brain dynamics and interactions. Various brain activities can be localized across the cortex [29]. However, signal amplitudes recorded by channels in certain areas are often greater than in others. Additionally, activities originating from specific brain regions may not be measurable across the entire cortex. This makes EEG signals a valuable source of spatial information, as their multichannel approach allows simultaneous recording from different cortical areas.
Moreover, EEG signals record the potential difference over time [6], displaying variability in amplitude and frequency. Therefore, they also serve as valuable sources of temporal and spectral information [30]. Their variability is attributed to mental states, cognitive activities, attention levels, and other factors. EEG signal amplitudes typically range between 5 and 200 microvolts (µV), with frequencies typically fluctuating between 0.5 and 100 Hertz (Hz) [31]. EEG signals are non-stationary and yield large volumes of data due to their high dimensionality. Moreover, EEG signals frequently contain artifacts, including eye and muscle movements, as well as electrical interference [30]. Identifying and removing these artifacts is crucial for obtaining accurate records of brain activity. Despite these challenges, EEG studies are often insightful about brain activity due to their spatial, temporal, and spectral focus.

3.2. Signal Preprocessing

EEG signals are often contaminated by electrical interference and artifact noise which, if not removed, can lead to inaccurate analyses and models. Prefiltering is a fundamental preliminary task for obtaining clean and accurate EEG signals [30]. Its effectiveness largely depends on the type of filter and on the control of the filter parameters. Excessive filtering removes not only noise but potentially also valuable information from the signal. Controlling the amount of removed noise and the level of variation in the signal after prefiltering ensures a balance between a clean signal and minimal information loss. In this study, prefiltering is performed using the Savitzky–Golay filter, and the quality of the output signal is quantified using the SNR and the linear correlation coefficient.

Savitzky–Golay Filter

The digital Savitzky–Golay filter (SGF) is a finite impulse response (FIR) low-pass filter developed by A. Savitzky and M. J. Golay [32]. It is widely utilized in biomedical signal processing due to its advantages, including smoothing the signal without significant alteration and enhancing the SNR while retaining high-frequency information. The filter fits a polynomial of order $N$, in the least-squares sense, to a moving window of the signal $x[n]$; the smoothed output can then be computed as a convolution with a fixed set of coefficients. The window, centered at the term $n = 0$, has a frame length of $2M + 1$, where $M$ is the number of samples to the right or left of $n$. The local polynomial takes the form:
$$p(n) = \sum_{k=0}^{N} a_k n^k,$$
where $a_k$ is the coefficient of the $k$-th polynomial term [32]. The mean square error (MSE) between the polynomial and the input signal is expressed as:
$$\epsilon_N = \sum_{n=-M}^{M} \left( p(n) - x[n] \right)^2,$$
where $\epsilon_N$ denotes the MSE. The coefficients $a_k$ are estimated by minimizing this error:
$$\frac{\partial \epsilon_N}{\partial a_i} = 0.$$
The performance of the filter largely depends on the choice of $M$ and $N$; a higher polynomial order has a greater impact on the high-frequency response of the filter.
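The experiments in this article were carried out in MATLAB (Section 4.1). Purely as an illustration of the smoothing step, the following minimal Python/SciPy sketch applies an SGF with the order and frame length selected later in Section 4.3 and reports the SNR gain on a synthetic, noise-contaminated test signal (the toy signal and noise level are assumptions, not data from this study):

```python
import numpy as np
from scipy.signal import savgol_filter

def snr_db(clean, noisy):
    """SNR of `noisy` relative to the reference `clean` signal, in dB."""
    noise = noisy - clean
    return 10 * np.log10(np.sum(clean**2) / np.sum(noise**2))

fs = 256                                     # sampling rate used by both datasets (Hz)
t = np.arange(0, 4, 1 / fs)
clean = 50e-6 * np.sin(2 * np.pi * 10 * t)   # toy 10 Hz "alpha-like" component
noisy = clean + 20e-6 * np.random.randn(t.size)

# Savitzky-Golay smoothing: polynomial order 22, frame length 35 (SGF 22-35)
filtered = savgol_filter(noisy, window_length=35, polyorder=22)

print(f"SNR before filtering: {snr_db(clean, noisy):.2f} dB")
print(f"SNR after  filtering: {snr_db(clean, filtered):.2f} dB")
```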

3.3. Discrete Wavelet Transform (DWT)

The DWT represents a signal at multiple resolutions through scaled and shifted coefficients. Here, the DWT is employed to reduce the dimensionality of the datasets and the computational cost, while also providing information across different frequency bands. The DWT utilizes a function known as the mother wavelet [33], denoted $\psi(t)$, along with a scaling function, denoted $\varphi(t)$. Considering a signal $x[n]$, the wavelet coefficients, denoted $X(a,b)$, are determined by:
$$X(a,b) = \sum_{n} x[n] \, \psi_{a,b}[n],$$
where $\psi_{a,b}[n]$ corresponds to the wavelet function with scale $a$ and displacement $b$. The wavelet function takes the form:
$$\psi_{a,b}[n] = \frac{1}{\sqrt{a}} \, \psi\!\left(\frac{n-b}{a}\right).$$
The function $\psi(\cdot)$ can be one of the wavelet families such as Gabor, Mexican Hat, Haar, Daubechies, Morlet, and Symmlet, among others. The dyadic DWT is a special case of the DWT [34], in which $a = 2^{j}$ and $b = k\,2^{j}$; therefore:
$$\psi_{j,k}(t) = 2^{-j/2} \, \psi(2^{-j} t - k), \quad j,k \in \mathbb{Z}.$$
Substituting this dyadic wavelet function into the expression for the wavelet coefficients yields:
$$X(j,k) = \sum_{n} x[n] \, 2^{-j/2} \, \psi[2^{-j} n - k],$$
where $j$ represents the scale level and $k$ the displacement at scale $j$. The wavelet coefficients are split into two parts: low-frequency approximation coefficients $A_i$ and high-frequency detail coefficients $D_i$ [35].
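As a hedged illustration of this decomposition (the article's own filter bank, shown in Figure 6, combines specific coefficient nodes and is not reproduced here), the sketch below uses PyWavelets to perform a five-level dyadic decomposition with the Daubechies-8 wavelet at 256 Hz; the mapping of detail levels to approximate frequency bands follows the standard dyadic arithmetic and is only an approximation of the alpha and beta extraction described in Section 4.4:

```python
import numpy as np
import pywt

fs = 256                      # sampling rate (Hz)
x = np.random.randn(10 * fs)  # placeholder for one filtered EEG channel

# 5-level dyadic decomposition with the Daubechies-8 mother wavelet.
# wavedec returns [A5, D5, D4, D3, D2, D1]; at fs = 256 Hz the detail
# bands cover approximately D1: 64-128 Hz, D2: 32-64 Hz, D3: 16-32 Hz,
# D4: 8-16 Hz, D5: 4-8 Hz, and A5: 0-4 Hz.
coeffs = pywt.wavedec(x, 'db8', level=5)
A5, D5, D4, D3, D2, D1 = coeffs

beta_like = D3    # ~16-32 Hz, close to the beta band (13-30 Hz)
alpha_like = D4   # ~8-16 Hz, close to the alpha band (8-13 Hz)

print(len(x), len(alpha_like), len(beta_like))   # sample counts shrink by ~2 per level
```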

3.4. Feature Functions

Features are calculated over epochs composed of N samples. They are then stored as vectors in a feature matrix, which serves as the training database for an ML model. The features such as mean, variance, skewness, kurtosis, entropy, and energy are calculated for labeled segments as either seizure or non-seizure. By incorporating these features, EEG signal analysis becomes more nuanced and informative, allowing for the detection of subtle patterns and abnormalities associated with epileptic seizures. Furthermore, the utilization of half-epoch overlap ensures that interactions between successive epochs are adequately captured, enhancing the robustness of the feature extraction process.
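A possible implementation of this feature extraction is sketched below in Python; the Shannon entropy over a normalized energy distribution is one plausible reading of the entropy feature, since the article does not spell out its exact estimator:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def epoch_features(band, fs_band, epoch_s=1.0, overlap=0.5):
    """Six features per epoch (mean, variance, skewness, kurtosis, entropy, energy)
    computed over sliding windows with half-epoch overlap."""
    n = int(epoch_s * fs_band)
    step = int(n * (1 - overlap))
    feats = []
    for start in range(0, len(band) - n + 1, step):
        w = band[start:start + n]
        energy = np.sum(w**2)
        p = w**2 / energy if energy > 0 else np.full(n, 1.0 / n)   # normalized energy distribution
        entropy = -np.sum(p * np.log2(p + 1e-12))                  # Shannon entropy (assumed definition)
        feats.append([w.mean(), w.var(), skew(w), kurtosis(w), entropy, energy])
    return np.asarray(feats)

# e.g. alpha-band coefficients at 16 samples/s, beta-band at 32 samples/s:
# X_alpha = epoch_features(alpha_like, fs_band=16)
```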

3.5. Support Vector Machine (SVM)

SVM is a supervised binary classification model based on constructing a hyperplane $H$ that maximizes the margin between data points from distinct classes [36]. The hyperplane is defined as:
$$H: \; \mathbf{w} \cdot \mathbf{x}_i + b = 0,$$
where $\mathbf{w}$ represents the weight vector, $\mathbf{x}_i$ denotes the input feature vector of the $i$-th observation, and $b$ is the bias coefficient. Depending on its class, each observation is labeled as $1$ or $-1$, forming a label vector $y_i$. The margin between the nearest opposing observations is maximized, which is written as a minimization problem of the form:
$$\min_{\mathbf{w},b} \; \frac{1}{2} (\mathbf{w} \cdot \mathbf{w}), \quad \text{s.t.} \quad y_i(\mathbf{w} \cdot \mathbf{x}_i + b) - 1 \geq 0.$$
This is known as the primal problem and can be solved as a Lagrange optimization problem, such that:
$$L_p(\mathbf{w}, b, \alpha) = \frac{1}{2} (\mathbf{w} \cdot \mathbf{w}) - \sum_{i=1}^{n} \alpha_i \left[ y_i(\mathbf{w} \cdot \mathbf{x}_i + b) - 1 \right],$$
where $\alpha_i$ denotes the Lagrange coefficients and $n$ the total number of vectors. Solving this problem is complex given its dependency on $\mathbf{w}$, $b$, and $\alpha$. By applying the Karush–Kuhn–Tucker (KKT) conditions, we reach a dual problem that depends exclusively on $\alpha$, as expressed below:
$$L_d(\alpha) = -\frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_i y_i (\mathbf{x}_i \cdot \mathbf{x}_j) y_j \alpha_j + \sum_{i=1}^{n} \alpha_i.$$
Once the coefficients $\alpha_i$ are obtained, the weight vector can be calculated through the expression [36]:
$$\mathbf{w} = \sum_{i=1}^{n} \alpha_i y_i \mathbf{x}_i,$$
and the bias coefficient is obtained from the previously identified support vectors by:
$$b = \frac{1}{N_{sv}} \sum_{i \in SV} \left( y_i - \mathbf{w} \cdot \mathbf{x}_i \right),$$
where $SV$ is the set of support vectors and $N_{sv}$ their number. The classifier determines which of the two classes a new observation $\mathbf{x}_k$ belongs to by evaluating it in the model and taking the sign of the result. Ultimately, the decision function of the classifier, denoted $y_k$, is expressed as:
$$y_k = \operatorname{sign}(\mathbf{w} \cdot \mathbf{x}_k + b).$$
For cases where the sets cannot be completely separated, a penalty parameter $C$ and slack variables $\xi_i$ are introduced. The primal problem is then expressed as:
$$\min_{\mathbf{w},b,\xi} \; \frac{1}{2} (\mathbf{w} \cdot \mathbf{w}) + C \sum_{i=1}^{n} \xi_i, \quad \text{s.t.} \quad y_i(\mathbf{w} \cdot \mathbf{x}_i + b) - 1 + \xi_i \geq 0, \;\; \xi_i \geq 0.$$
Expressed in primal Lagrangian form with the KKT conditions, the procedure remains identical to the previous case, with the addition of a single new constraint, $0 \leq \alpha_i \leq C$.
In instances where separation of the set is not feasible, a kernel function is utilized to project the feature space into a higher dimension. The weight vector is then redefined as:
$$\mathbf{w}_k = \sum_{i=1}^{n} \alpha_i y_i K(\mathbf{x}_i, \mathbf{x}_j),$$
where $K(\mathbf{x}_i, \mathbf{x}_j)$ denotes the kernel function [36]. Commonly used kernel functions are the linear, perceptron, polynomial, radial basis function (Gaussian), and sigmoid kernels.
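The following sketch illustrates the linear-kernel case with scikit-learn on placeholder features; for a linear kernel the learned decision function reduces to the sign of $\mathbf{w} \cdot \mathbf{x} + b$, as derived above (the feature matrix and labels are synthetic stand-ins, not data from this study):

```python
import numpy as np
from sklearn.svm import SVC

# X: feature matrix (n_epochs x 6), y: labels (+1 seizure, -1 non-seizure);
# random placeholders stand in for the real feature sets.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = np.where(X[:, 5] + 0.5 * rng.normal(size=200) > 0, 1, -1)

clf = SVC(kernel='linear', C=1.0)   # linear kernel, penalty parameter C
clf.fit(X, y)

# For a linear kernel the decision function is sign(w . x + b)
w, b = clf.coef_[0], clf.intercept_[0]
x_new = X[:5]
print(np.sign(x_new @ w + b))       # manual evaluation
print(clf.predict(x_new))           # equivalent library call
```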

4. Results and Discussion

4.1. Analysis of EEG Signals

The EEG signal analysis experiments conducted in this study were executed using MATLAB software, version 2023b. The experiments were run on a Lenovo ThinkPad T430 laptop (Lenovo, Beijing, China) equipped with an Intel Core i5 processor and 16 GB of RAM, running 64-bit Windows 10. Twenty EEG files were selected from each of the Helsinki Hospital and Boston Hospital databases. The signals contained periods labeled with and without epileptic seizures. The EEG montage configurations used in recording each dataset differed, leading to the creation of a custom SVM feature classification model for each database. Due to the spatial nature inherent in EEG signals, certain channels demonstrated superior performance relative to others in the experiments. The preprocessing of signals, signal decomposition, feature extraction, and feature classification were performed on EEG signal intervals demarcated as seizure or non-seizure.

4.2. Datasets

The EEG seizure record databases considered in this study are publicly available and were recorded at the Helsinki University Hospital and Boston Children’s Hospital. EEG signals were labeled by neurophysiologists as either indicative of seizure or non-seizure activity, marking the commencement and conclusion of epileptic episodes. Both databases were sampled at a frequency of 256 Hz, adhering to the electrode placement guidelines of the international 10–20 system.

4.2.1. A Dataset of Neonatal EEG Recordings with Seizure Annotations

The EEG recordings in this database were acquired between 2010 and 2014 at the neonatal intensive care unit (NICU) of Helsinki University Hospital [37]. The recordings were obtained from 79 newborn patients aged 32 to 45 weeks. The recordings have durations ranging from 64 to 96 min and utilize 19 channels.

4.2.2. CHB-MIT Scalp EEG Database

The database was obtained in 2010 at Children’s Hospital Boston (CHB-MIT) [38]. EEG signals were recorded from 22 patients with intractable seizures, aged between 1.5 and 22 years. The duration of the EEG signals was standardized to 60 min for the entire dataset, utilizing 23 channels. A European Data Format (EDF) file from the CHB-MIT dataset, along with its signals, is depicted in Figure 4.
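Both datasets are distributed as EDF files, which can be loaded with standard tooling. The snippet below uses MNE-Python as one possible reader (the file name is taken from Table 2, and MNE itself is an assumption, since the article performed this step in MATLAB):

```python
import mne

# Path to one CHB-MIT recording; any EDF from either dataset loads the same way.
raw = mne.io.read_raw_edf("chb01_03.edf", preload=True, verbose=False)

print(raw.info["sfreq"])    # 256.0 Hz for both datasets
print(raw.ch_names[:5])     # channel (signal) labels
data = raw.get_data()       # shape: (n_channels, n_samples), values in volts
```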

4.3. EEG Signal Filtering with SGF

The parameters of the Savitzky–Golay filter were chosen experimentally through the following steps. Initially, a test signal was deliberately contaminated with white Gaussian noise (WGN) at levels of −15, −10, −5, 0, 5, 10, and 15 dB. The signals were then filtered with all possible combinations of filter parameters, with order and frame length ranging from 3 to 41. The filter parameters were selected based on their capacity to elevate the SNR of a signal contaminated with −15 dB WGN to a level of −3 dB or better. Moreover, these parameters were chosen to ensure a cutoff frequency of at least 60 Hz, which is at least twice the highest fundamental frequency measured in the beta band (30 Hz); this choice adheres to the Nyquist criterion for signal sampling. Finally, the correlation between the test signal before filtering and the filtered output was computed for each parameter configuration identified in the previous step. These tests revealed that the filter with polynomial order 22 and frame length 35 (SGF 22–35) achieved the best performance in terms of SNR improvement. The outcomes regarding performance are shown in Table 1. The cutoff frequency of the filter was estimated by constructing spectrograms of the signal contaminated with WGN at an SNR of 15 dB and of the signal at the filter’s output. The spectrograms in Figure 5 illustrate how the filter’s cutoff frequency fluctuates between 60 and 70 Hz.
Finally, the filtering process applied to the original signal without added noise demonstrated its efficacy, with the SGF 22–35 filter achieving the highest correlation coefficient of 0.98181, which suggests substantial similarity between both signals.
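A condensed sketch of this parameter search is shown below, assuming a Python/SciPy re-implementation; the clean reference signal here is a synthetic placeholder, and ranking by output SNR and correlation mirrors, but does not reproduce, the selection procedure described above:

```python
import numpy as np
from scipy.signal import savgol_filter

def add_wgn(x, snr_db):
    """Contaminate x with white Gaussian noise at a prescribed SNR (dB)."""
    p_signal = np.mean(x**2)
    p_noise = p_signal / (10 ** (snr_db / 10))
    return x + np.sqrt(p_noise) * np.random.randn(x.size)

def out_snr(clean, filtered):
    return 10 * np.log10(np.sum(clean**2) / np.sum((filtered - clean)**2))

# x_test: a clean reference EEG segment (synthetic placeholder here)
x_test = np.sin(2 * np.pi * 10 * np.arange(0, 4, 1 / 256))
noisy = add_wgn(x_test, -15)                          # worst-case contamination

best = None
for frame in range(5, 42, 2):                         # odd frame lengths 5..41
    for order in range(3, frame):                     # order must stay below frame length
        y = savgol_filter(noisy, frame, order)
        score = (out_snr(x_test, y), np.corrcoef(x_test, y)[0, 1])
        if best is None or score > best[0]:
            best = (score, order, frame)

print("best (SNR dB, corr), order, frame:", best)
```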

4.4. Wavelet Signal Decomposition

The filtered signals underwent wavelet transformation based on the Mallat algorithm [39]. Figure 6 illustrates the decomposition diagram of the employed filter bank. The signal was filtered using high-pass filters (HPF) to calculate detail coefficients (D) and low-pass filters (LPF) to compute approximation coefficients (A). Additionally, to reduce dimensionality at the output of each filter, the signal was downsampled by a factor of 2. In this process, the Daubechies 8 (db-8) wavelet function was chosen due to its effective balance between time and frequency localization.
The signals passed through the wavelet filter bank, where wavelet coefficients were constructed in the alpha (8–13 Hz) bands, associated with states of conscious relaxation [8], and beta (13–32 Hz) bands, attributed to motor functions [6]. In the alpha band, we utilized coefficients A7 and A11 at decomposition levels 4 and 5, while for the beta band, we used coefficients D9, D11, A6, A8, and A10 at decomposition levels 3 and 4. The number of samples per each 1 s epoch was reduced from 256 to 32 for the beta band and to 16 for the alpha band.
Delta and theta bands were not employed because these bands typically appear in infants, while in adults they only appear during sleep or indicate the presence of other disorders when present during wakefulness [7]. Moreover, the gamma band, related to cognitive brain functions, is not used in this work as it requires the combination of several wavelet coefficient levels 2, 3, and 4, with level 2 operating at 64 samples per second. Our focus is on reducing the computational complexity, which is why the alpha band at 16 samples per second with two wavelet coefficients and the beta band at 32 samples per second with five wavelet coefficients better meet this requirement.
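The sample-rate and band arithmetic behind this choice follows directly from dyadic downsampling at 256 Hz, as the short computation below illustrates (the printed bands are the standard dyadic approximations, not the exact pass-bands of the filter bank in Figure 6):

```python
fs = 256  # Hz

# Approximate frequency span and sample rate of each dyadic decomposition level
for level in range(1, 6):
    f_high = fs / 2 ** level
    f_low = fs / 2 ** (level + 1)
    rate = fs / 2 ** level
    print(f"level {level}: detail band ~{f_low:.0f}-{f_high:.0f} Hz, "
          f"{rate:.0f} samples/s after downsampling")

# Level 3 (~16-32 Hz) and level 4 (~8-16 Hz) bracket the beta and alpha bands,
# which is why the beta representation keeps 32 samples/s and the alpha one 16.
```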

4.5. Feature Extraction

From the alpha and beta wavelet coefficients, intervals labeled as seizure and non-seizure were selected. Mean, variance, skewness, kurtosis, entropy, and energy metrics were computed for the sample sets within one-second epochs. Additionally, intermediate features were extracted with a half-epoch overlap between consecutive epochs, so that interactions between successive epochs were also captured.

4.6. SVM-Based Feature Classification

Feature sets were created for the seizure and non-seizure classes, ensuring balanced data representation to mitigate class imbalance. These sets were utilized to train a binary SVM classifier with a linear kernel and a penalty parameter C. The linear kernel was chosen for its computational simplicity, as it only requires the dot product, unlike more complex kernels such as the RBF or polynomial kernels. Additionally, the dataset exhibited a quasi-separable distribution between the two classes, with the sets being reasonably well-defined on both sides of the hyperplane, despite some overlap in the middle. The value of C was determined through cross-validation training on one-fifth of the total dataset, considering a loss threshold between 1% and 5% of the total classified features during validation. Higher C values reduced error rates but also led to SVM models prone to overfitting, resulting in diminished generalization capability.
Models were trained separately for the alpha and beta bands, which significantly reduces the amount of computation and processing time and allows the performance of each band to be assessed independently. Specifically, the processing time for the alpha band was shorter than for the beta band because the alpha band was constructed with 16 samples per second, whereas the beta band was constructed with 32 samples per second.
Model performance was evaluated on a separate test dataset. The class of each test feature vector was determined by applying the SVM’s decision function, classifying a negative output as non-seizure and a positive output as seizure. Three SVM models were generated: the first for the CHB-MIT dataset, the second for the Helsinki hospital dataset without pre-filtering, and the third for the pre-filtered Helsinki hospital dataset. The aim of models 2 and 3 was to compare the performance of models trained on unfiltered EEG signals and on signals filtered with the Savitzky–Golay filter (frame length 35, order 22), in order to evaluate how effectively this setup enhances model performance. The filter parameters were chosen through extensive trial and error, since improper Savitzky–Golay settings risk information loss and could result in poorer performance than models trained on unfiltered signals.
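A minimal sketch of this C selection using five-fold cross-validation on one-fifth of the data is given below; the candidate C grid and the synthetic placeholder data are assumptions for illustration only:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# X, y: balanced feature matrix and labels built as in Section 4.6 (placeholders here)
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))
y = rng.choice([-1, 1], size=500)

subset = slice(0, len(X) // 5)          # one-fifth of the data, as described above
for C in [0.01, 0.1, 1, 10, 100]:
    scores = cross_val_score(SVC(kernel='linear', C=C), X[subset], y[subset], cv=5)
    print(f"C={C:<6} mean CV accuracy: {scores.mean():.3f}")
```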

4.7. Model Classification Performance

The evaluation of the generated models involved analyzing sensitivity, specificity, and accuracy metrics. Sensitivity (recall) quantifies the model's capacity to detect true positives ($TP$) in relation to all actual positives ($TP + FN$), where $FN$ denotes the false negatives; it evaluates the model's proficiency in detecting positive cases. The mathematical expression for recall is:
$$\text{Sensitivity (recall)} = \frac{TP}{TP + FN} \times 100\%.$$
Specificity quantifies the model's ability to identify true negatives ($TN$) among all actual negatives ($TN + FP$), where $FP$ represents the false positives, thereby assessing its capability to reduce false alarms in negative case identification. The specificity is given by:
$$\text{Specificity} = \frac{TN}{TN + FP} \times 100\%.$$
Accuracy denotes the proportion of correct predictions (both positive and negative) made by the model relative to all predictions made ($TP + TN + FP + FN$) and represents an overall measure of the model's performance. The equation for accuracy is:
$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \times 100\%.$$
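For completeness, the sketch below computes the three metrics from a confusion matrix with scikit-learn, using the same +1/−1 labeling convention as the classifier above (the toy label vectors are illustrative only):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([ 1, 1, 1, -1, -1, -1, -1,  1])
y_pred = np.array([ 1, 1, -1, -1, -1,  1, -1,  1])

# With labels ordered [-1, 1], the confusion matrix is [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[-1, 1]).ravel()

sensitivity = tp / (tp + fn) * 100
specificity = tn / (tn + fp) * 100
accuracy = (tp + tn) / (tp + tn + fp + fn) * 100
print(f"Sen={sensitivity:.1f}%  Spec={specificity:.1f}%  Acc={accuracy:.1f}%")
```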
In Table 2, the results of calculating sensitivity (Sen), specificity (Spec), and accuracy (Acc) are presented for the three classifier models across the alpha and beta frequency bands. For each EDF file, the table indicates the signal (channel) number that exhibited the best classification performance; this primarily stems from the fact that certain seizures occur in specific channels and may not be detected throughout the entire cerebral cortex. The number of seizures recorded in each EDF file by neurophysiologists is also reported. The model trained using data from the CHB-MIT dataset achieved an average accuracy of 90.3% for the alpha band and 89.7% for the beta band. The classification results are detailed in Table 2. A graphical comparison of the accuracy achieved by the CHB-MIT model for the alpha and beta bands is depicted in Figure 7.
The model achieved high accuracy rates for each frequency band separately, demonstrating the algorithm's capability to detect seizures in EEG signal representations with significantly reduced dimensionality and data volume, even from a single channel.
In relation to the classification models developed for the Helsinki Hospital database, the accuracy achieved by the unprocessed model was 73.35% in the alpha band and 70.13% in the beta band. Conversely, the average accuracy achieved by the preprocessed model was 92.82% in the alpha band and 90.55% in the beta band. These results underscore the importance of preprocessing in EEG signals.
Table 3 and Table 4 show the performance obtained for the models constructed with unfiltered and prefiltered databases, respectively.
A visual comparison of the performance from the Helsinki hospital models for the alpha and beta bands is presented in Figure 8. It is observed that models constructed with prefiltered data exhibit significantly superior performance compared to those built with unfiltered data. Thus, it can be concluded that prefiltering notably improves the reliability of EEG signals.
Regarding processing speed, analyzing a 3600 s signal in the alpha band took an average of 70 s, indicating a processing time of 0.019444 s per second of signal. For the beta band, analyzing a signal of the same length took an average of 250 s, resulting in a processing time of 0.069444 s per second of signal. This difference is expected given that the alpha band uses 16 samples per second and two wavelet coefficients, while the beta band uses 32 samples per second and five wavelet coefficients. This demonstrates that, in this context, alpha-band analysis is significantly more computationally efficient than beta-band analysis. Furthermore, given the algorithm's high processing speed in the alpha band and the efforts to minimize computations, there is clear potential for deploying this model in low-cost devices or simple applications.
The obtained results represent the culmination of multiple experiments. The proper selection of training data, together with their quality and quantity, leads to superior models. These results provide some key insights into performance in classifying EEG signals: the sensitivity, specificity, and accuracy metrics demonstrate the model's ability to accurately detect and categorize seizures. Models constructed with prefiltered data consistently demonstrate superior performance compared to those built with unfiltered data, as evidenced by higher average accuracy for both the alpha and beta bands. This highlights the importance of prefiltering the signals beforehand to enhance the quality of the analyses.

4.8. Comparison with Other Works

To evaluate the proposed approach, its performance is compared with other models in the current state of the art. The comparative results are shown in Table 5. These results demonstrate that state-of-the-art models generally exhibit excellent detection accuracy. However, it is clear that as researchers strive to improve accuracy, the complexity of the models also increases. Many studies do not consider this factor, fail to mention it, or address it only superficially. Since the primary focus of these works is on achieving the highest accuracy regardless of computational cost, discussions about implementing these algorithms on low-cost devices or in simple applications are often omitted.
To highlight our model's ability to reduce computational cost, Table 5 includes the operational time per second of signal where such data were provided. For studies that did not supply this information, it is marked as "not available" (N/A); in most of these cases the computational complexity is high, and the issue is not addressed because the focus is placed solely on accuracy. The studies [14,24,26] examined the algorithm's processing time, achieving excellent speed results. In this regard, our approach ranked second among the fastest, surpassed only by [14].
These findings encourage further exploration of the model’s efficiency in real-time scenarios. The remaining studies either do not provide processing speed data for their models or only briefly mention computational cost without further elaboration.
Drawing from these findings, it becomes evident that the proposed model demonstrates competitive performance compared to other methods. Further enhancements could be achieved by improving dataset quality, such as employing appropriate feature selection methods for training or considering new feature functions. Additionally, exploring better SVM models by varying the amount of training data, adjusting the slack parameter C, or utilizing kernel functions could lead to improvements. Despite exhibiting slightly lower performance than some of the reviewed models, the proposed model showcases its effectiveness with a satisfactory accuracy of 92.82% for the alpha band and 90.55% for the beta band. These findings suggest that the model could be utilized in epileptic seizure detection tasks, potentially extending its operation to real-time measurement devices, given the model's low computational complexity.

5. Conclusions

Epilepsy is a disorder in which early detection and treatment are crucial for improving the lives of millions of individuals. Thanks to continuous advancements in mathematical modeling and ML, it is now possible to swiftly and affordably detect this condition through EEG signal analysis. This non-invasive approach enhances the quality of life of patients by offering quick and accurate diagnoses. The methodology proposed in this study has demonstrated satisfactory precision and reliability in identifying seizures within EEG signals, achieving a best accuracy of 92.82% for the alpha band. These results suggest that the model could be effectively utilized in epilepsy detection tasks, with the added advantage of its low computational complexity, enabling operation in resource-constrained computing environments. Additionally, the spectral signal analysis employed in this model, which focuses on specific brain activity frequency bands, can be scaled to detect various neurological disorders and general brain activities. Exploring future research opportunities could lead to further breakthroughs in signal processing techniques such as image analysis or spectral analysis. These approaches, along with traditional and deep learning models, have the potential to be extended beyond EEG signals, broadening their impact across different types of data.

Author Contributions

Conceptualization, S.U.F., A.D.F. and P.A.; methodology, S.U.F. and A.D.F.; software, S.U.F., A.D.F., P.A., D.Z.-B. and P.P.J.; validation, S.U.F., A.D.F. and P.A.; formal analysis, S.U.F., D.Z.-B., P.P.J. and C.A.-M.; investigation, S.U.F., A.D.F. and P.A.; resources, S.U.F. and A.D.F.; data curation, S.U.F. and A.D.F.; writing—original draft preparation, S.U.F., A.D.F. and P.A.; writing—review and editing, S.U.F., A.D.F., P.A., D.Z.-B., P.P.J. and C.A.-M.; visualization, S.U.F., A.D.F., P.A., D.Z.-B., P.P.J. and C.A.-M.; supervision, A.D.F.; project administration, A.D.F.; funding acquisition, A.D.F. All authors have read and agreed to the published version of the manuscript.

Funding

The authors acknowledge the financial support of the following projects: ANID/FONDECYT Iniciación No. 11230129; the Competition for Research Regular Projects, year 2021, code LPR21-02, Universidad Tecnológica Metropolitana; and cost center No. 02030402-999, Department of Electricity.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

The authors acknowledge the financial support for the Projects: DICYT Regular No. 062313AS and ANID/FONDECYT Iniciación No. 11240799.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ANN: Artificial Neural Network
BTD: Block Term Decomposition
CAE: Convolutional Autoencoder
CHB-MIT: Children’s Hospital Boston database
CMIM: Conditional Mutual Information Maximization
CNN: Convolutional Neural Network
CPD: Canonical Polyadic Decomposition
CWT: Continuous Wavelet Transform
DCT: Discrete Cosine Transform
DL: Deep Learning
DWT: Discrete Wavelet Transform
EEG: Electroencephalogram
EMD: Empirical Mode Decomposition
EWT: Empirical Wavelet Transform
FBM: Fractional Brownian Motion
FC-NLSTM: Fully Convolutional Nested LSTM
FD: Fractal Dimensions
FGN: Fractional Gaussian Noise
FIR: Finite Impulse Response
GRU: Gated Recurrent Unit
HE: Hurst Exponent
HHT: Hilbert–Huang Transform
IMF: Intrinsic Mode Functions
KKT: Karush–Kuhn–Tucker
KNN: K-Nearest Neighbors
LB: Longitudinal Bipolar montage
LDA: Linear Discriminant Analysis
LSTM: Long Short-Term Memory
ML: Machine Learning
MSE: Mean Square Error
NICU: Neonatal Intensive Care Unit
PCA: Principal Component Analysis
RBF: Radial Basis Function
RNN: Recurrent Neural Network
RTS-RCVAE: Recurrent Topic-Synchronized Variational Autoencoder
SGF: Savitzky–Golay Filter
SNR: Signal-to-Noise Ratio
SVM: Support Vector Machine
TQWT: Tunable-Q Wavelet Transform
WGN: White Gaussian Noise
WPD: Wavelet Packet Decomposition
WT: Wavelet Transform

References

  1. World Health Organization. Epilepsy. Available online: https://www.who.int/health-topics/epilepsy (accessed on 9 March 2024).
  2. Fisher, R. ILAE official report: A practical clinical definition of epilepsy. Epilepsia 2014, 55, 475–482. [Google Scholar] [CrossRef] [PubMed]
  3. Mosquera, E. Analysis and Identification of Evoked Potentials in the Electroencephalogram. Bachelor’s Thesis, Universidad de Sevilla, Sevilla, Spain, 2019. [Google Scholar]
  4. Arriola, J. Mathematical Representation of Brain Waves. Master’s Thesis, National University of the South, Bahía Blanca, Argentina, 2016. [Google Scholar]
  5. Rojas, G.; Alvarez, C.; Montoya, C.E.; De la Iglesia-Vaya, M.; Cisternas, J.E.; Gálvez, M. Study of Resting-State Functional Connectivity Networks Using EEG Electrodes Position as Seed. Front. Neurosci. 2018, 12, 301197. [Google Scholar] [CrossRef] [PubMed]
  6. De La Fuente, C. Eliminación de Artefactos Cardíacos en Señales de Electroencefalograma Mediante Filtro Adaptativo. Bachelor’s Thesis, Universidad Politécnica de Madrid, Madrid, Spain, 2020. [Google Scholar]
  7. Gómez, J. Estudio Comparativo de Técnicas de Caracterización y Clasificación Automática de Emociones a Partir de Señales del Cerebro. Bachelor’s Thesis, Universidad de Nariño, San Juan de Pasto, Colombia, 2018. [Google Scholar]
  8. Bermúdez, A. Técnicas de Procesamiento de EEG para Detección de Eventos. Master’s Thesis, Universidad Nacional de la Plata, Buenos Aires, Argentina, 2013. [Google Scholar]
  9. Chowdhury, T.; Hossain, A.; Fattah, S.; Shahnaz, C. Seizure and Non-Seizure EEG Signals Detection Using 1-D Convolutional Neural Network Architecture of Deep Learning Algorithm. In Proceedings of the 1st International Conference on Advances in Science, Engineering, and Robotics Technology (ICASERT), Dhaka, Bangladesh, 3–5 May 2019. [Google Scholar]
  10. Zhou, M.; Tian, C.; Cao, R.; Wang, B.; Niu, Y.; Hu, T.; Guo, H.; Xiang, J. Epileptic Seizure Detection Based on EEG Signals and CNN. Front. Neuroinform. 2018, 12, 95. [Google Scholar] [CrossRef] [PubMed]
  11. Wei, X. Automatic seizure detection using three-dimensional CNN based on multi-channel EEG. BMC Med. Inform. Decis. 2018, 18, 71–80. [Google Scholar] [CrossRef] [PubMed]
  12. Omar, A. Optimizing epileptic seizure recognition performance with feature scaling and dropout layers. Neural Comput. Appl. 2023, 36, 2835–2852. [Google Scholar] [CrossRef]
  13. Hussein, R. Optimized deep neural network architecture for robust detection of epileptic seizures using EEG signals. Clin. Neurophysiol. 2019, 130, 25–37. [Google Scholar] [CrossRef] [PubMed]
  14. Li, Y. Automatic Seizure Detection using Fully Convolutional Nested LSTM. Int. J. Neural Syst. 2020, 30, 2050019. [Google Scholar] [CrossRef] [PubMed]
  15. Lebal, A. Epilepsy-Net: Attention-based 1D-inception network model for epilepsy detection using one-channel and multi-channel EEG signals. Multimed. Tools Appl. 2023, 82, 17391–17413. [Google Scholar] [CrossRef]
  16. He, P. Unsupervised feature learning based on autoencoder for epileptic seizures prediction. Appl. Intell. 2023, 53, 20766–20784. [Google Scholar] [CrossRef]
  17. Guo, L. Automatic epileptic seizure detection in EEGs based on line length feature and artificial neural networks. J. Neurosci. Methods 2010, 191, 101–109. [Google Scholar] [CrossRef]
  18. Khan, S. Robust Epileptic Seizure Detection Using Long Short-Term Memory and Feature Fusion of Compressed Time–Frequency EEG Images. Sensors 2023, 23, 9572. [Google Scholar] [CrossRef]
  19. Tian, X. Deep Multi-View Feature Learning for EEG-Based Epileptic Seizure Detection. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 1962–1972. [Google Scholar] [CrossRef] [PubMed]
  20. Najafi, T. A Classification Model of EEG Signals Based on RNN-LSTM for Diagnosing Focal and Generalized Epilepsy. Sensors 2022, 22, 7269. [Google Scholar] [CrossRef] [PubMed]
  21. Malekzadeh, A. Epileptic Seizures Detection in EEG Signals Using Fusion Handcrafted and Deep Learning Features. Sensors 2021, 21, 7710. [Google Scholar] [CrossRef] [PubMed]
  22. Selvathi, D.; Meera, V.K. Realization of epileptic seizure detection in EEG signal using wavelet transform and SVM classifier. In Proceedings of the International Conference on Signal Processing and Communication (ICSPC), Surfers Paradise, Australia, 28–29 July 2017; pp. 18–22. [Google Scholar]
  23. Anand, S.; Jaiswal, S.; Ghosh, P.K. Automatic Focal Epileptic Seizure Detection in EEG Signals. In Proceedings of the 2017 IEEE International WIE Conference on Electrical and Computer Engineering (WIECON-ECE), Uttarakhand, India, 18–19 December 2017; pp. 103–107. [Google Scholar]
  24. Gupta, A. A Novel Signal Modeling Approach for Classification of Seizure and Seizure-Free EEG Signals. IEEE Trans. Neural Syst. Rehabil. Eng. 2018, 26, 925–935. [Google Scholar] [CrossRef] [PubMed]
  25. Zabihi, M. Patient-specific epileptic seizure detection in long-term EEG recording in paediatric patients with intractable seizures. In Proceedings of the IET Intelligent Signal Processing Conference, London, UK, 2–3 December 2013. [Google Scholar]
  26. Aldana, Y. Nonconvulsive Epileptic Seizure Detection in Scalp EEG Using Multiway Data Analysis. IEEE J. Biomed. Health Inform. 2019, 23, 660–671. [Google Scholar] [CrossRef] [PubMed]
  27. Kumar, T.S.; Kanhangad, V.; Pachori, R.B. Classification of Seizure and Seizure-free EEG Signals using Multi-Level Local Patterns. In Proceedings of the 19th International Conference on Digital Signal Processing, Hong Kong, China, 21–23 February 2014; pp. 646–650. [Google Scholar]
  28. Zeng, J. Automatic detection of epileptic seizure events using the time-frequency features and machine learning. Biomed. Signal Process. Control 2021, 69, 102916. [Google Scholar] [CrossRef]
  29. Blanco, S. Desarrollo de un Sistema para Análisis de Señales Electroencefalográficas. Bachelor’s Thesis, Universidad Politécnica de Madrid, Madrid, Spain, 2017. [Google Scholar]
  30. Luna de Luís, M. Procesamiento Digital de Señales de EEG para Clasificación de Trastornos Psicóticos Mediante Aprendizaje de Máquinas. Bachelor’s Thesis, Universidad de Chile, Santiago, Chile, 2021. [Google Scholar]
  31. Sánchez, A. Sonorization of EEG Signals Based on Musical Structures. Bachelor’s Thesis, University of the Andes, Bogotá, Colombia, 2012. [Google Scholar]
  32. Schafer, R. What Is a Savitzky-Golay Filter? [Lecture Notes]. IEEE Signal Process. Mag. 2011, 28, 111–117. [Google Scholar] [CrossRef]
  33. Shensa, M.J. The Discrete Wavelet Transform: Wedding the à trous and Mallat algorithms. IEEE Trans. Signal Process. 1992, 40, 2464–2482. [Google Scholar] [CrossRef]
  34. Castro, L.R.; Castro, S.M. Wavelets y sus aplicaciones. In Proceedings of the 1st Congreso Argentino de Ciencias de la Computación, Bahía Blanca, Argentina, 1995; pp. 195–204. [Google Scholar]
  35. Ocak, H. Automatic detection of epileptic seizures in EEG using Discrete Wavelet Transform and approximate entropy. Expert Syst. Appl. 2009, 36, 2027–2036. [Google Scholar] [CrossRef]
  36. Carmona, E.J. Tutorial Sobre Máquinas de Vectores de Soporte (SVM); National Distance Education University: Madrid, Spain, 2016. [Google Scholar]
  37. Stevenson, N. A dataset of neonatal EEG recordings with seizure annotations. Sci. Data 2019, 6, 190039. [Google Scholar] [CrossRef] [PubMed]
  38. Guttag, J. CHB-MIT Scalp EEG Database (Version 1.0.0). PhysioNet. 2010. Available online: https://physionet.org/content/chbmit/1.0.0/ (accessed on 15 March 2024).
  39. Mallat, S. A theory for multiresolution signal decomposition: The wavelet representation. IEEE Trans. Pattern Anal. Mach. Intell. 1989, 11, 674–693. [Google Scholar] [CrossRef]
Figure 1. International 10–20 System.
Figure 2. EEG signal with epileptic seizure interval.
Figure 3. Schematic of the proposed method for epileptic seizure detection.
Figure 4. EDF file with its signals in the time domain.
Figure 5. Comparative spectrograms of the signal with 15 dB noise, before and after SGF filtering. (a) Spectrogram of the noisy signal (SNR = 15 dB). (b) Spectrogram of the filtered signal with filter SGF 22–35.
Figure 6. Wavelet decomposition based on the Mallat algorithm.
Figure 7. EEG classification accuracy results for CHB-MIT model.
Figure 8. EEG classification accuracy results for Helsinki filtered and unfiltered signals.
Table 1. Results obtained from comparing the SGF 22–35 filter with other SGF filters. Values are the output signal SNR (dB) at each WGN contamination level.

| Filter | WGN −15 dB | −10 dB | −5 dB | 0 dB | 5 dB | 10 dB | 15 dB |
|---|---|---|---|---|---|---|---|
| SGF 5–15 | −5.3 | −3.63 | −0.75 | 3.32 | 7.97 | 12.8 | 17.79 |
| SGF 13–31 | −4.64 | −3.16 | −0.46 | 3.47 | 8.07 | 12.87 | 17.85 |
| SGF 22–35 | −2.84 | −1.76 | 0.40 | 3.90 | 8.29 | 13.01 | 17.97 |
Table 2. Performance metrics for the CHB-MIT model.

| EDF File | Signal | Seizures | Sen α | Sen β | Spec α | Spec β | Acc α | Acc β |
|---|---|---|---|---|---|---|---|---|
| chb01_03.edf | 5 | 1 | 100 | 100 | 100 | 100 | 100 | 100 |
| chb01_04.edf | 22 | 1 | 100 | 100 | 100 | 100 | 100 | 100 |
| chb02_16.edf | 16 | 1 | 100 | 100 | 100 | 100 | 100 | 100 |
| chb02_19.edf | 5 | 1 | 100 | 100 | 50 | 50 | 66.7 | 66.7 |
| chb03_01.edf | 18 | 1 | 100 | 100 | 100 | 100 | 100 | 100 |
| chb03_02.edf | 18 | 1 | 100 | 100 | 100 | 100 | 100 | 100 |
| chb04_05.edf | 20 | 1 | 100 | 100 | 66.7 | 100 | 75 | 100 |
| chb04_08.edf | 14 | 1 | 100 | 100 | 100 | 100 | 100 | 100 |
| chb05_06.edf | 11 | 1 | 50 | 100 | 100 | 100 | 66.7 | 100 |
| chb05_13.edf | 18 | 1 | 100 | 100 | 100 | 50 | 100 | 66.7 |
| chb06_01.edf | 18 | 3 | 100 | 66.67 | 50 | 50 | 80 | 60 |
| chb06_04.edf | 11 | 2 | 100 | 100 | 100 | 100 | 100 | 100 |
| chb07_12.edf | 6 | 1 | 100 | 100 | 100 | 100 | 100 | 100 |
| chb07_13.edf | 15 | 1 | 100 | 100 | 100 | 100 | 100 | 100 |
| chb08_02.edf | 6 | 1 | 100 | 100 | 100 | 100 | 100 | 100 |
| chb08_05.edf | 20 | 1 | 100 | 100 | 100 | 100 | 100 | 100 |
| chb09_06.edf | 7 | 1 | 100 | 100 | 50 | 33.33 | 66.7 | 50 |
| chb09_08.edf | 23 | 2 | 100 | 100 | 100 | 100 | 100 | 100 |
| chb10_12.edf | 7 | 1 | 100 | 100 | 100 | 100 | 100 | 100 |
| chb10_20.edf | 10 | 1 | 100 | 100 | 0 | 0 | 50 | 50 |
| Mean | – | – | 97.5 | 98.3 | 85.83 | 84.17 | 90.26 | 89.67 |
Table 3. Performance metrics for the Helsinki unfiltered signals.

| EDF File | Signal | Seizures | Sen α | Sen β | Spec α | Spec β | Acc α | Acc β |
|---|---|---|---|---|---|---|---|---|
| eeg1.edf | 7 | 44 | 59.09 | 72.73 | 97.8 | 95.56 | 78.65 | 84.27 |
| eeg2.edf | 10 | 2 | 50 | 50 | 100 | 100 | 80 | 80 |
| eeg3.edf | 4 | 0 | – | – | 100 | 0 | 100 | 0 |
| eeg4.edf | 15 | 9 | 77.78 | 77.78 | 80 | 90 | 78.95 | 84.21 |
| eeg5.edf | 13 | 5 | 80 | 60 | 100 | 100 | 90.91 | 81.82 |
| eeg6.edf | 4 | 4 | 75 | 75 | 80 | 80 | 77.78 | 77.78 |
| eeg7.edf | 1 | 20 | 75 | 80 | 85.71 | 80.95 | 80.49 | 80.49 |
| eeg8.edf | 2 | 3 | 66.67 | 66.67 | 50 | 50 | 57.14 | 57.14 |
| eeg9.edf | 16 | 7 | 71.43 | 71.43 | 100 | 100 | 86.67 | 86.67 |
| eeg10.edf | 1 | 0 | – | – | 0 | 0 | 0 | 0 |
| eeg11.edf | 1 | 4 | 75 | 50 | 0 | 40 | 33.33 | 44.44 |
| eeg12.edf | 4 | 1 | 100 | 100 | 100 | 100 | 100 | 100 |
| eeg13.edf | 7 | 6 | 83.33 | 66.67 | 42.86 | 71.43 | 61.54 | 69.23 |
| eeg14.edf | 8 | 30 | 80 | 86.67 | 90.32 | 96.77 | 85.25 | 91.8 |
| eeg15.edf | 8 | 21 | 80.95 | 76.19 | 90.91 | 95.45 | 86.05 | 86.05 |
| eeg16.edf | 19 | 43 | 58.14 | 69.77 | 100 | 97.73 | 79.31 | 83.91 |
| eeg17.edf | 10 | 3 | 66.67 | 66.67 | 25 | 25 | 42.86 | 42.86 |
| eeg18.edf | 9 | 0 | – | – | 100 | 100 | 100 | 100 |
| eeg19.edf | 11 | 2 | 75 | 66.67 | 61.54 | 76.92 | 68 | 72 |
| eeg20.edf | 2 | 22 | 77.27 | 72.73 | 82.61 | 86.96 | 80 | 80 |
| Mean | – | – | 73.61 | 71.11 | 74.34 | 74.34 | 73.35 | 70.13 |
Table 4. Performance metrics for the Helsinki filtered signals.

| EDF File | Signal | Seizures | Sen α | Sen β | Spec α | Spec β | Acc α | Acc β |
|---|---|---|---|---|---|---|---|---|
| eeg1.edf | 7 | 44 | 93.18 | 90.9 | 97.8 | 95.56 | 96 | 93.26 |
| eeg2.edf | 10 | 2 | 100 | 100 | 100 | 100 | 100 | 100 |
| eeg3.edf | 4 | 0 | – | – | 100 | 100 | 100 | 100 |
| eeg4.edf | 15 | 9 | 100 | 88.9 | 100 | 100 | 100 | 94.74 |
| eeg5.edf | 13 | 5 | 100 | 100 | 100 | 100 | 100 | 100 |
| eeg6.edf | 4 | 4 | 100 | 100 | 100 | 100 | 100 | 100 |
| eeg7.edf | 1 | 20 | 90 | 90 | 81 | 90.48 | 85 | 90.24 |
| eeg8.edf | 2 | 3 | 100 | 100 | 100 | 100 | 100 | 100 |
| eeg9.edf | 16 | 7 | 57.14 | 71.4 | 100 | 100 | 80 | 86.67 |
| eeg10.edf | 1 | 0 | – | – | 100 | 100 | 100 | 100 |
| eeg11.edf | 1 | 4 | 100 | 75 | 80 | 80 | 89 | 77.78 |
| eeg12.edf | 4 | 1 | 100 | 0 | 100 | 100 | 100 | 66.67 |
| eeg13.edf | 7 | 6 | 100 | 100 | 71.4 | 71.43 | 85 | 84.62 |
| eeg14.edf | 8 | 30 | 93.33 | 90 | 96.8 | 96.77 | 95 | 93.44 |
| eeg15.edf | 8 | 21 | 95.24 | 90.5 | 90.9 | 95.45 | 93 | 93.02 |
| eeg16.edf | 19 | 43 | 93.02 | 90.7 | 97.7 | 97.73 | 95 | 94.25 |
| eeg17.edf | 10 | 3 | 100 | 100 | 25 | 25 | 57 | 57.14 |
| eeg18.edf | 9 | 0 | – | – | 100 | 100 | 100 | 100 |
| eeg19.edf | 11 | 2 | 91.67 | 83.3 | 84.6 | 92.31 | 88 | 88 |
| eeg20.edf | 2 | 22 | 94.46 | 90.9 | 91.3 | 91.3 | 93 | 91.11 |
| Mean | – | – | 94.65 | 86 | 90.8 | 91.8 | 92.82 | 90.55 |
Table 5. Comparative table of state-of-the-art approaches for epilepsy seizure detection.

| Method | Accuracy (%) | Operational Time (s) | Database |
|---|---|---|---|
| 1-D CNN + Butterworth filter [9] | 99.4 | N/A | Bonn |
| CNN [10] | 97.5 | N/A | Freiburg |
| 3-D CNN [11] | 92.37 | N/A | Xinjiang |
| 1-D CNN + LSTM + PCA [12] | 99.3 | N/A | Not available |
| LSTM [13] | 100.0 | N/A | Bonn |
| FC-NLSTM [14] | 100.0 | 0.00158 | Bonn/Freiburg |
| CNN + LSTM [15] | 100 | N/A | Bonn |
| RTS-RCVAE [16] | 98.43 | N/A | Bonn |
| WT + ANN [17] | 99.6 | N/A | Bonn |
| CWT + CAE + LSTM [18] | 100 | N/A | Bonn |
| FFT + WPD + CNN [19] | 98.33 | N/A | CHB-MIT |
| LB + DWT + LSTM [20] | 96.1 | N/A | HCTM |
| TQWT + CNN + LSTM [21] | 99.71 | N/A | Bonn/Freiburg |
| DWT + SVM [22] | 95.6 | N/A | CHB-MIT |
| HHT + EWT + SVM [23] | 100 | N/A | Bonn |
| DCT + SVM [24] | 97 | 0.02444 | Bonn |
| CMIM + SVM [25] | 99.83 | N/A | CHB-MIT |
| HHT + WT + KNN + SVM + LDA [26] | 98 | 0.37 | CIREN |
| EMD + IMF + KNN [27] | 98.67 | N/A | Bonn |
| Kurtosis-based channel + EWT [28] | 99.88 | N/A | CHB-MIT |
| SGF + DWT + SVM (this work) | 92.82 | 0.019444 | Helsinki |