Article

Implementation of Machine Learning and Deep Learning Techniques for the Detection of Epileptic Seizures Using Intracranial Electroencephalography

1 Faculty of Electrical Engineering, Warsaw University of Technology, Pl. Politechniki 1, 00-661 Warsaw, Poland
2 1st Military Clinical Hospital with Outpatient Clinic, Municipal Non-Profit Healthcare Facility in Lublin Neurosurgery Department, ul. Kościuszki 30, 19-300 Ełk, Poland
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(15), 8747; https://doi.org/10.3390/app13158747
Submission received: 26 June 2023 / Revised: 20 July 2023 / Accepted: 25 July 2023 / Published: 28 July 2023
(This article belongs to the Special Issue Artificial Intelligence (AI) in Neuroscience)

Abstract

The diagnosis of epilepsy primarily relies on the visual and subjective assessment of the patient’s electroencephalographic (EEG) or intracranial electroencephalographic (iEEG) signals. Neurophysiologists, based on their experience, look for characteristic discharges such as spikes and multi-spikes. One of the main challenges in epilepsy research is developing an automated system capable of detecting epileptic seizures with high sensitivity and precision. Moreover, there is an ongoing search for universal features in iEEG signals that can be easily interpreted by neurophysiologists. This article explores the possibilities, issues, and challenges associated with utilizing artificial intelligence for seizure detection using the publicly available iEEG database. The study presents standard approaches for analyzing iEEG signals, including chaos theory, energy in different frequency bands (alpha, beta, gamma, theta, and delta), wavelet transform, empirical mode decomposition, and machine learning techniques such as support vector machines. It also discusses modern deep learning algorithms such as convolutional neural networks (CNN) and long short-term memory (LSTM) networks. Our goal was to gather and comprehensively compare various artificial intelligence techniques, including both traditional machine learning methods and deep learning techniques, which are most commonly used in the field of seizure detection. Detection results were tested on a separate dataset, demonstrating classification accuracy, sensitivity, precision, and specificity of seizure detection. The best results for seizure detection were obtained with features related to iEEG signal energy (accuracy of 0.97, precision of 0.96, sensitivity of 0.99, and specificity of 0.96), as well as features related to chaos, Lyapunov exponents, and fractal dimension (accuracy, precision, sensitivity, and specificity all equal to 0.95). The application of CNN and LSTM networks yielded significantly better results (CNN: Accuracy of 0.99, precision of 0.98, sensitivity of 1, and specificity of 0.99; LSTM: Accuracy of 0.98, precision of 0.96, sensitivity of 1, and specificity of 0.99). Additionally, the use of the gradient-weighted class activation mapping algorithm identified iEEG signal fragments that played a significant role in seizure detection.

1. Introduction

Epilepsy is one of the most common neurological disorders, affecting millions of people worldwide. This condition is characterized by recurrent epileptic seizures, which can have different symptoms and severity and can impact the quality of life of the patient and their surroundings [1]. Many patients experience difficulties in daily activities such as driving, professional work, or education. Epilepsy also leads to social isolation, which can affect the patient’s well-being and mental health [2]. The issue of epilepsy is also significant from a social and economic standpoint [3]. The costs of epilepsy treatment are high, and this condition can result in work disability, impacting productivity and social development.
The treatment of epilepsy is a complex process that depends on various factors, such as the type and severity of epilepsy, the patient’s age, the presence of other medical conditions, and the response to medications [4]. Unfortunately, despite advancements in medicine, some patients experience difficulties in controlling epileptic seizures. One reason for this may be the insufficient effectiveness of antiepileptic drugs in certain patients [5]. While many antiepileptic drugs are available, finding the proper medication or dosage for a particular patient is not always possible. Furthermore, some medications may cause side effects that make adherence to therapy challenging. Drug resistance occurs in approximately 30% of patients with epilepsy and is associated with various factors, including the type of epilepsy, duration of the disease, number and frequency of seizures, and the presence of other medical conditions [6].
Electroencephalographic (EEG) and intracranial electroencephalographic (iEEG) signals are used in the diagnosis of epilepsy, as well as in the prediction and detection of epileptic seizures [7,8]. EEG is a non-invasive method of measuring the brain’s electrical activity, while iEEG is a more invasive method that involves placing electrodes inside the skull. In both cases, recording the brain’s electrical activity enables the analysis of neuronal changes and the determination of when epileptic seizures occur. With iEEG, due to the more precise recording of neuronal activity, it is possible to achieve a more accurate localization of the brain region where epileptic seizures occur [9,10].
Algorithms have been developed using EEG and iEEG signals for the detection and prediction of epileptic seizures [11]. These algorithms utilize various signal analysis methods, such as frequency analysis and time–frequency analysis, and artificial intelligence techniques, including neural networks and machine learning algorithms [12,13,14,15,16]. In practice, these algorithms can be employed in implanted medical devices, such as neurostimulators, which utilize iEEG for seizure detection and deliver electrical impulses to suppress seizures [17]. Another application involves the use of EEG signals in portable devices, such as watches or bands, which enable continuous EEG signal recording and alert the patient to an upcoming seizure [18]. Thus, the utilization of EEG and iEEG signals for seizure detection and prediction has the potential to significantly improve the quality of life for epilepsy patients and reduce the costs associated with treatment and healthcare.
EEG signals recorded during epileptic seizures vary for each individual due to their unique anatomical and physiological brain characteristics, as well as the type and location of the epilepsy [19,20]. During an epileptic seizure, there are rapid changes in the activity of brain neurons, leading to characteristic alterations in the EEG signal. However, different individuals may have different brain regions involved in the seizure, resulting in variations in the EEG signal. Nevertheless, it is important to note that there are certain similarities in the EEG signal during epileptic seizures that allow for the general identification of characteristic patterns [21]. For example, during a seizure, there is often a sharp increase in activity in high frequencies (above 20 Hz) known as sharp wave or sharp wave-ripple complexes, which are among the most distinctive EEG patterns during an epileptic seizure [22,23,24]. It is worth mentioning that the analysis of EEG signals requires expertise and knowledge from a specialist who can interpret and decipher the characteristic EEG patterns during seizures [25]. Therefore, EEG signal analysis is one of the diagnostic tools employed in diagnosing and treating epilepsy.
In their comprehensive review, Supriya et al. [26] provided an insightful overview of existing techniques in the field of automated epilepsy detection. These techniques employ diverse methods for analyzing EEG signals, including the time domain, frequency domain, time–frequency domain, and non-linear approaches. In another review paper, Alotaiby et al. [27] categorized seizure detection and prediction algorithms into time-domain methods, frequency-domain methods, wavelet-based methods, and methods based on empirical mode decomposition. Sharmila et al. [28] emphasized the variability in pattern recognition techniques required for detecting epileptic seizures across different EEG datasets, owing to the distinct characteristics exhibited under diverse conditions. Parvez et al. [29] present generic approaches for seizure detection, with a focus on feature extraction from both ictal and interictal signals. They use established transformations and decompositions to extract statistical features from the high-frequency coefficients of the signals. In their study, Panda et al. [30] utilized frequency bands, including delta, theta, alpha, beta, and gamma, for feature extraction in the classification of EEG signals. Ocak [31] suggests that seizure onset frequencies predominantly fall within the gamma frequency range (typically between 30 and 100 Hz). Furthermore, Mohseni et al. [32] demonstrated the successful detection of epileptic seizures in all cases using only EEG signal variance. They compared the traditional variance-based method with various methods based on nonlinear time series analysis, entropy, logistic regression, discrete wavelet transform, and time–frequency distributions. Remarkably, the variance-based method outperforms the other methods, achieving the best result of 100% when applied to the same database. The studies conducted by Polat et al. [33] and Emami et al. [34] employ wavelet and Fourier transformations for feature extraction and classification in the detection of seizures within EEG signals. Emami et al. explored the application of image-based seizure detection by utilizing a convolutional neural network on long-term EEG data, including epileptic seizures. The EEG data are filtered, segmented into short segments, transformed into EEG images, and classified by the convolutional neural network as either “seizure” or “non-seizure”. In the study conducted by Wei et al. [35], a novel three-dimensional convolutional neural network structure for automatic seizure detection was proposed. This network takes multi-channel EEG signals as inputs to provide an effective detection system. Furthermore, Zhou et al. [36] utilized a convolutional neural network for differentiating ictal, preictal, and interictal segments in the detection of epileptic seizures. Instead of manual feature extraction, raw EEG signals are directly used as inputs. The performance of time and frequency domain signals in detecting epileptic signals is compared based on the intracranial Freiburg and scalp CHB-MIT databases, in order to explore their potential. In their work, Ma et al. [37] introduced transformers for seizure detection (TSD), a deep learning architecture based on the transformer model. The TSD leverages an encoder-decoder structure and attention mechanisms applied to recorded brain signals. Sun et al. [38] showcased the capabilities of the transformer network in computing attention between input signal channels for seizure detection. 
They proposed a comprehensive model that combines convolutional and transformer layers, effectively eliminating the need for feature engineering or format transformation of the original multi-channel time series. In the study conducted by Ke et al. [39], a novel convolutional transformer model composed of two branches was presented. One branch focuses on extracting time-domain features from multiple inputs of channel-exchanged EEG signals, while the other handles frequency-domain representations.

Motivation and the Aim of the Article

Efforts are continuously being made to develop effective and efficient detection methods that can be successfully applied in vagus nerve stimulation (VNS) systems [40]. Stimulators can be configured to automatically trigger therapy upon seizure detection [41]. Based on the analysis of iEEG signals, the stimulator can recognize characteristic seizure patterns and deliver the appropriate therapy, such as brain stimulation, to interrupt or mitigate the intensity of the seizure. Monitoring the frequency of seizures in a patient is also an important factor and can assist doctors in determining the appropriate therapy [42]. On the other hand, even experienced neurophysiologists who are skilled in interpreting iEEG recordings often have doubts about which signal fragments can be considered seizure-related. It is not uncommon for situations to arise where two neurophysiologists disagree on the identification of signal fragments associated with an epileptic seizure. It is expected that developing modern methods of processing and analyzing iEEG signals will help address this issue and identify signals that can be useful in diagnosis. Therefore, a crucial element is to compare multiple feature extraction methods and identify the best ones to interpret and understand the characteristics and morphology of seizure signals. Furthermore, the application of modern deep learning methods and the use of explainability techniques can pinpoint the signal elements that contribute the most to seizure detection.
Considering that well-recorded and correctly labeled signals are best suited for feature comparison, the authors decided to utilize iEEG signals in their research. To enable the application of deep learning techniques, the signals in the database were divided into shorter windows, resulting in a large number of training and testing examples. Subsequently, typical machine learning techniques (including feature extraction and classification) were compared with deep learning techniques (CNN and LSTM). For the feature extraction task, multiple methods were employed, such as spectral analysis, autocorrelation, energy, chaos-related features, the attractor dimension, Lyapunov exponents, the correlation dimension of the attractor, Sevcik’s fractal dimension, wavelet analysis, higher-order statistics, and empirical mode decomposition. The well-known and commonly used method of support vector machines was employed for the classification of signals with epileptic seizures. Our goal was to gather and comprehensively compare various artificial intelligence techniques, including both traditional machine learning methods and deep learning techniques, which are commonly used in the field of seizure detection. Then, by using evaluation measures such as accuracy, sensitivity, precision, and specificity, the potential usefulness of individual features was indicated. By employing the gradient-weighted class activation mapping (Grad-CAM) technique, signal fragments that contribute the most to the detection of epileptic seizures using CNN were identified.
Figure 1 illustrates the schematic of the conducted research described in the article. Within these studies, a standard machine learning approach was applied, which included feature extraction, as well as a deep learning approach. In the context of this task, CNN and LSTM networks were used.

2. Materials

There are several publicly available databases related to epileptic seizures and EEG/iEEG signals, which have been described in works [43,44,45,46,47,48]. However, the lack of precise descriptions in these databases proved to be the biggest challenge, particularly regarding the onset and offset times of seizures and information about the electrodes on which the seizures occurred. In our research, we decided to use the well-known and widely used data source provided by Andrzejak et al. [49]. We chose it because of the high quality of the signals and their clear association with seizures, which was crucial for our study: it gave us access to reliable, high-quality signal patterns precisely linked to seizure activity. For the present study, iEEGs from five patients were selected, all of whom had achieved complete seizure control after the resection of one of the hippocampal formations. In our experiments, we used segments recorded within the epileptogenic zone (set D), whereas set E exclusively contained seizure activity; the segments in set E were selected from all recording sites exhibiting ictal activity. All iEEG signals were recorded with the same amplifier and, after 12-bit analog-to-digital conversion, were written at a sampling rate of 173.61 Hz. Each of the sets D and E, consisting of 100 single-channel iEEG segments of 23.6 s duration, was then divided into non-overlapping 2 s windows. In this way, 1100 examples of iEEG recordings without seizures (F—free) and 1100 examples of iEEG recordings during seizures (S—seizure) were created.
Figure 2 depicts an example of an iEEG signal recorded when no epileptic seizure was observed (F—blue color) and an example of an iEEG signal recorded during an epileptic seizure (S—red color). Notably, the amplitude of the signal recorded during the epileptic seizure is much higher, even exceeding 1000 µV, whereas signals recorded during normal brain activity typically remain in the range of hundreds of µV. The examples were randomly divided into a training set and a testing set in a 9:1 ratio. As a result, we obtained a dataset of 1980 examples used for training the classifiers and a set of 220 examples used for testing the classifiers.
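As an illustration of this preprocessing step, the following Python sketch splits such segments into non-overlapping 2 s windows (a sketch only: the authors' pipeline was implemented in Matlab, and the array names as well as the assumption of 4097 samples per 23.6 s segment are illustrative).

```python
import numpy as np

FS = 173.61        # sampling rate of the iEEG recordings [Hz]
WIN_SECONDS = 2    # non-overlapping analysis window length [s]

def segment_to_windows(segment, fs=FS, win_s=WIN_SECONDS):
    """Split one 23.6 s single-channel iEEG segment into non-overlapping 2 s windows."""
    segment = np.asarray(segment, dtype=float)
    win = int(fs * win_s)                    # ~347 samples per 2 s window
    n_windows = len(segment) // win          # 11 full windows per 23.6 s segment
    return segment[: n_windows * win].reshape(n_windows, win)

# Hypothetical inputs: set_d (seizure-free) and set_e (seizure), 100 segments each.
# windows_f = np.vstack([segment_to_windows(s) for s in set_d])   # 1100 x 347, class F
# windows_s = np.vstack([segment_to_windows(s) for s in set_e])   # 1100 x 347, class S
# A random 9:1 split of the 2200 windows then yields 1980 training and 220 test examples.
```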

3. Methods

This section presents the methods used: feature extraction, classification, CNN, LSTM, and the evaluation measures for seizure detection systems. All the parameters of each method are also provided. The experiments were conducted using the Matlab 2023a software package. Some of the functions used in the experiments were built into Matlab and its toolboxes, while others were implemented by the authors.

3.1. Features of EEG Signals

Feature extraction methods from EEG signals are of great importance in machine learning techniques for seizure detection for several reasons [50]. Firstly, they enable dimensionality reduction of the data, which facilitates signal analysis and processing [51]. Secondly, they allow for the identification of relevant information related to seizures, such as specific patterns and signal characteristics [52,53]. These features can serve as the basis for classification and seizure detection by machine learning models [54,55]. Extracted features can also aid in identifying unique patterns associated with different types of seizures, contributing to effective diagnosis.

3.1.1. Average Power in the Time Domain

In the case of a seizure-free state, the energy and power of the iEEG signal are usually lower than during a seizure. Under normal conditions, the brain exhibits regular electrical activities, resulting in lower energy of the iEEG signal. During an epileptic seizure, the iEEG signal often shows an increase in energy. This is caused by intense and abnormal electrical discharges in the brain that characterize a seizure [56]. These abnormal discharges lead to increased neuronal activity in the brain, resulting in higher energy of the iEEG signal [57]. This increase in energy can be observed across different frequencies, such as delta, theta, alpha, beta, and gamma, depending on the type of seizure [58].
To calculate the energy of the signal in the specific frequency bands (delta, theta, alpha, beta, and gamma), digital filters can be used. For this purpose, filters are designed for each frequency range. For example, for alpha waves, a frequency range of 8–12 Hz can be selected, for beta waves 12–35 Hz, for gamma waves 35–100 Hz, for theta waves 4–8 Hz, and for delta waves 0.5–4 Hz. An example of an iEEG signal recorded during a seizure, after passing through the filters responsible for the alpha, beta, gamma, theta, and delta bands, is presented in Figure 3. Applying signal filtration enables the computation of features for each frequency band. The average power in the time domain was then calculated for each second of the signal and each channel [59]:
P_x = \frac{1}{N} \sum_{n=0}^{N-1} x[n]^2
where N is the number of samples in the window and x[n] is the value of the nth sample. The calculated values have a unit of µV². The average power in the time domain is typically calculated for time windows of equal length and is proportional to the signal energy.
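As an illustration of this feature, the sketch below filters a 2 s window into the listed bands and computes the average power of each (a Python sketch under stated assumptions: the authors worked in Matlab, the Butterworth design and filter order are illustrative, and the gamma band is capped just below the Nyquist frequency of the 173.61 Hz recordings).

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 173.61  # sampling rate [Hz]
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 35), "gamma": (35, 85)}   # gamma capped below FS/2

def band_powers(window, fs=FS, order=4):
    """Average power (µV^2) of a 2 s iEEG window in each frequency band."""
    feats = {}
    for name, (lo, hi) in BANDS.items():
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
        filtered = filtfilt(b, a, window)        # zero-phase band-pass filtering
        feats[name] = np.mean(filtered ** 2)     # P_x = (1/N) * sum x[n]^2
    return feats
```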

3.1.2. Higher-Order Statistics for Wavelet Transform

Wavelet transform can be useful in the detection of epileptic seizures due to its properties in analyzing signals in both time and frequency domains [60,61,62]. Seizures often exhibit sudden and short-lived changes in brain activity, which may be easier to detect using a method that allows for precise temporal localization. This is important because epileptic seizures can sometimes have subtle signal changes that are challenging to detect. By employing wavelet transform, there is a greater chance of identifying these low-amplitude changes with specific shapes. The Mallat pyramid is a popular tool for wavelet decomposition [63]. One of the main reasons for the popularity of this method is its effectiveness and versatility in signal analysis. The Mallat pyramid is characterized by high computational efficiency. It utilizes wavelet components of lengths equal to powers of two, which accelerates computations and reduces computational complexity [64]. As a result, signal decomposition using the Mallat pyramid is fast and efficient, which is crucial for analyzing large datasets or real-time applications.
The fundamental step of wavelet transformation is decomposing the signal into a series of approximations and details representing different time and frequency scales [65]. This process can be recursively repeated on successive approximations, creating a wavelet tree. The choice of an appropriate wavelet function for signal analysis can be somewhat complex since there are many different wavelet functions to choose from, such as the Morlet wavelet, Haar wavelet, Daubechies wavelet, and Coiflet wavelet, among others [66]. However, the final selection of the wavelet may depend on individual preferences and the characteristics of the signal. Therefore, it is important to conduct experiments and compare different wavelets to find the one that best reflects the features of recordings during epileptic seizures. During the research, a series of experiments were conducted, and based on visual assessment, the Daubechies 4 (Db4) wavelet was selected.
Figure 4 presents an example of the decomposition of the iEEG signal using the Mallat pyramid and the Daubechies 4 (Db4) wavelet. The decomposition was performed at four levels.
During wavelet analysis, we calculate features that help us describe and interpret the transformed signal [67]. To determine these features, we can utilize energy, which represents the total power of the signal. Higher energy indicates a greater concentration of energy in a specific frequency range [68]. Variance measures the spread of values within a range. Higher variance may indicate greater amplitude variation in a particular frequency range. Skewness provides information about the asymmetry of the value distribution [69]. Positive skewness means the tail of the distribution is shifted to the right, while negative skewness indicates a shift to the left. Kurtosis measures the “peakedness” of the value distribution [70]. Higher kurtosis represents a sharper peak, while lower kurtosis indicates a flatter distribution. Entropy represents the level of disorder or complexity in a signal. Higher entropy signifies greater randomness and a lack of structure [70].
Variance can be calculated as [71]:
\mathrm{var}(x) = \frac{1}{N-1} \sum_{n=0}^{N-1} \left( x[n] - \bar{x} \right)^2
Skewness [72]:
\mathrm{skewness}(x) = \frac{\frac{1}{N} \sum_{n=0}^{N-1} \left( x[n] - \bar{x} \right)^3}{\left( \frac{1}{N} \sum_{n=0}^{N-1} \left( x[n] - \bar{x} \right)^2 \right)^{3/2}}
Kurtosis [73]:
\mathrm{kurtosis}(x) = \frac{\frac{1}{N} \sum_{n=0}^{N-1} \left( x[n] - \bar{x} \right)^4}{\left( \frac{1}{N} \sum_{n=0}^{N-1} \left( x[n] - \bar{x} \right)^2 \right)^{2}}
where N represents the number of samples in the discrete signal, x[n] is the value of the signal at the nth sample, and \bar{x} is the mean value of the signal.
Entropy can be calculated according to the formula [74]:
H(x) = -\sum_{i=1}^{M} p(x_i) \log_2 p(x_i)
where x_i is the ith value of the signal, p(x_i) is the probability of occurrence of x_i (estimated from the probability distribution of the signal), and M is the number of distinct values (e.g., histogram bins) over which the distribution is estimated.
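One possible implementation of these wavelet-domain features is sketched below in Python with the PyWavelets package (the authors worked in Matlab; the number of histogram bins used for the entropy estimate and the feature names are assumptions of this example).

```python
import numpy as np
import pywt
from scipy.stats import skew, kurtosis

def dwt_features(window, wavelet="db4", level=4, bins=32):
    """Energy, variance, skewness, kurtosis, and entropy of each Db4 decomposition level."""
    coeffs = pywt.wavedec(window, wavelet, level=level)   # [cA4, cD4, cD3, cD2, cD1]
    names = [f"a{level}"] + [f"d{l}" for l in range(level, 0, -1)]
    feats = {}
    for name, c in zip(names, coeffs):
        hist, _ = np.histogram(c, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        feats[f"energy({name})"] = np.sum(c ** 2)
        feats[f"variance({name})"] = np.var(c, ddof=1)
        feats[f"skewness({name})"] = skew(c)
        feats[f"kurtosis({name})"] = kurtosis(c, fisher=False)   # non-excess kurtosis
        feats[f"entropy({name})"] = -np.sum(p * np.log2(p))      # Shannon entropy of binned values
    return feats
```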

3.1.3. Spectral Analysis and Autocorrelation

Spectral analysis and autocorrelation can be used in the detection of epileptic seizures by analyzing electroencephalographic (EEG) signals and identifying characteristic features related to epileptic seizures [75,76]. Techniques such as Fourier transform can be applied to extract spectral information from EEG signals (Figure 5). Spectral analysis enables the detection of patterns associated with seizures, such as increased power in low frequencies (e.g., slow waves) or abrupt frequency changes.
We can express the formula for the Fourier transform as [77]:
X[k] = \sum_{n=0}^{N-1} x[n] \, e^{-i 2 \pi k n / N}
where X[k] represents the discrete Fourier transform of signal x for frequency component k, x[n] is the value of signal x at time n, N is the length of signal x, and i is the imaginary unit.
Autocorrelation allows for the analysis of similarity between shifted copies of an EEG signal (Figure 6). Regular patterns in EEG signals, such as periodic oscillations, can be associated with epileptic seizures. Autocorrelation analysis can aid in identifying these patterns and detecting seizures. Autocorrelation can be defined as [78]:
R_{xx}[k] = \sum_{n=0}^{N-1} x[n] \, x[n-k]
where R_{xx}[k] represents the autocorrelation for shift k, x[n] is the value of signal x at time n, and k is the (temporal) shift between copies of signal x. Autocorrelation measures the similarity between signal x and its version delayed by a lag of k. Summing the products of the sample at time n and the shifted sample at time n − k, for n ranging from 0 to N − 1, yields the autocorrelation value R_{xx}[k].
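For illustration, the magnitude spectrum and the autocorrelation sequence of a single window can be computed as follows (a minimal Python sketch; the features actually derived from these curves in the study may differ).

```python
import numpy as np

def spectrum_and_autocorrelation(window):
    """Magnitude spectrum (DFT) and autocorrelation (non-negative lags) of one iEEG window."""
    x = np.asarray(window, dtype=float)
    N = len(x)
    # X[k] = sum_n x[n] * exp(-i*2*pi*k*n/N); keep the magnitude of the one-sided spectrum
    spectrum = np.abs(np.fft.rfft(x))
    # R_xx[k] = sum_n x[n] * x[n-k]; np.correlate returns lags -(N-1)..(N-1)
    autocorr = np.correlate(x, x, mode="full")[N - 1:]
    return spectrum, autocorr
```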

3.1.4. Lyapunov Exponents and Fractal Dimension

Epileptic seizures are irregular and nonlinear phenomena. Chaos theory provides tools for describing and analyzing such irregular and nonlinear processes. Lyapunov exponents are one of the tools in chaos theory that allow for measuring the sensitivity of a system to small initial changes. In the case of epileptic seizures, changes in the dynamics of EEG signals can lead to variations in the values of Lyapunov exponents, indicating the presence of irregularities and nonlinearities characteristic of seizures.
Lyapunov exponents allow for assessing the sensitivity of a dynamic system’s trajectories to small initial perturbations and serve as indicators of chaos in the system. The first step is the proper processing and preparation of signals, including noise removal and, optionally, value normalization. Then, we construct the trajectories of the dynamic system in phase space. This is the step where input data are transformed into trajectories that will serve as the basis for further analysis. This process can be accomplished using techniques such as time delay embedding or phase space reconstruction. Time delay embedding involves creating trajectories by delaying consecutive samples of the time signal. This means that each sample of the signal is extended by several subsequent samples, forming a multidimensional vector. The time delay is controlled by a parameter τ, which determines the number of samples by which the time is shifted. The reconstructed time delay vector Y_i(d) in the lagged phase space is given by [78]:
Y_i(d) = \left[ x(i), \, x(i+\tau), \, x(i+2\tau), \, \ldots, \, x(i+(d-1)\tau) \right]
where x(i) represents the original time series data at time i, τ is the time delay (lag), and d is the embedding dimension.
The lagged phase space representation allows us to capture the underlying dynamics and dependencies of the system by creating a multidimensional representation of the time series. It should be noted that the appropriate embedding dimension, d, and time delay, τ, need to be determined for the discrete signal x. To determine the optimal time delay, τ, we can utilize the autocorrelation function or mutual information. On the other hand, to calculate the optimal embedding dimension, d, we can use the method proposed by Cao [79]. The concept of trajectory construction is based on the idea that the dynamic properties of a system are visible in the phase space (Figure 7 and Figure 8). Figure 7 presents the trajectory for a segment of the iEEG signal during a seizure.
Figure 8, on the other hand, presents the trajectory for a segment of the iEEG signal when no seizure was detected. These presented phase trajectories enable us to visually capture changes in the analyzed signals. When observing the phase trajectory of an iEEG signal recorded during a seizure, it becomes apparent that it is more regular and organized. In contrast, the phase trajectory for the non-seizure signal is less organized and exhibits fewer distinct structures. Similar relationships can also be observed in other examples.
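A minimal Python sketch of the time-delay embedding used to build such phase-space trajectories is given below (the embedding dimension and delay shown are placeholder values; in the study they were determined with the autocorrelation/mutual information and Cao's method).

```python
import numpy as np

def embed(x, d, tau):
    """Time-delay embedding: row i is the vector [x(i), x(i+tau), ..., x(i+(d-1)*tau)]."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (d - 1) * tau
    if n <= 0:
        raise ValueError("signal too short for this combination of d and tau")
    return np.column_stack([x[j * tau: j * tau + n] for j in range(d)])

# Example (illustrative parameters): a 3-dimensional trajectory such as those
# plotted in Figures 7 and 8 could be obtained with
# Y = embed(window, d=3, tau=6)
```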
Lyapunov exponents are measures of the local exponential rates of divergence or convergence of nearby trajectories in a dynamical system. They provide information about the sensitivity of the system to initial conditions and quantify the degree of chaos or complexity present in the system. The Lyapunov exponents can be calculated using the formula [78]:
\lambda_i = \log_e f\!\left( Y_i(d) \right)
where f(Y_i(d)) is the rate of divergence of two neighboring trajectories at point Y_i(d). The standard Lyapunov exponent λ is computed as the mathematical average of the local Lyapunov exponents along each dimension of the attractor [78]:
\lambda = \lim_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} \lambda_i
The number of standard Lyapunov exponents is equal to the embedding dimension of the attractor. For the system to be chaotic, the trajectories must diverge along at least the last dimension of the attractor, which implies that at least one standard Lyapunov exponent must be positive. Several algorithms have been proposed for computing the Lyapunov exponent from discrete signals, with the most commonly used ones being the Wolf algorithm or the Rosenstein algorithm [80].
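As an illustration of this step, the sketch below estimates the largest Lyapunov exponent with a simplified Rosenstein-style procedure (all parameter values are assumptions chosen for a single 2 s window; the article cites the Wolf and Rosenstein algorithms but does not specify its exact settings).

```python
import numpy as np

def embed(x, d, tau):
    n = len(x) - (d - 1) * tau
    return np.column_stack([x[j * tau: j * tau + n] for j in range(d)])

def largest_lyapunov(x, d=5, tau=6, fs=173.61, min_sep=20, horizon=30):
    """Rosenstein-style estimate of the largest Lyapunov exponent (in 1/s)."""
    Y = embed(np.asarray(x, dtype=float), d, tau)
    n = len(Y)
    dist = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=2)
    # exclude temporally close points when searching for the nearest neighbour
    for i in range(n):
        dist[i, max(0, i - min_sep): i + min_sep + 1] = np.inf
    nn = dist.argmin(axis=1)
    # mean log-divergence of neighbouring trajectory pairs as a function of time
    divergence = []
    for k in range(1, horizon):
        idx = np.arange(n - k)
        ok = nn[idx] + k < n
        d_k = np.linalg.norm(Y[idx[ok] + k] - Y[nn[idx[ok]] + k], axis=1)
        d_k = d_k[d_k > 0]
        divergence.append(np.log(d_k).mean())
    # the exponent is the slope of the divergence curve, converted from per-sample to per-second
    slope = np.polyfit(np.arange(1, horizon), divergence, 1)[0]
    return slope * fs
```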
The correlation dimension of an attractor is a measure that describes the complexity of the geometric structure of a nonlinear dynamical system’s attractor. It measures how much independent spatial information is contained within the attractor. A higher correlation dimension indicates a more complex structure of the attractor, while a lower correlation dimension indicates a less complicated structure. The formula for calculating the correlation dimension of an attractor for a discrete signal x is based on the “box-counting” method (the Grassberger–Procaccia method [81]). The correlation dimension D of the attractor can be estimated using the following formula [78]:
D = \lim_{\epsilon \to 0} \frac{\log N(\epsilon)}{\log (1/\epsilon)}
where ϵ is the grid size (radius) and N(ϵ) is the number of spheres of size ϵ that cover the attractor.
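A compact Grassberger–Procaccia-style estimate is sketched below (the embedding parameters and the range of radii over which the slope is fitted are illustrative assumptions; in practice the scaling region should be inspected).

```python
import numpy as np

def correlation_dimension(x, d=5, tau=6, n_radii=12):
    """Estimate the correlation dimension of the reconstructed attractor."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (d - 1) * tau
    Y = np.column_stack([x[j * tau: j * tau + n] for j in range(d)])
    # pairwise distances between points on the reconstructed attractor
    dists = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=2)
    dists = dists[np.triu_indices(n, k=1)]
    dists = dists[dists > 0]
    radii = np.logspace(np.log10(np.percentile(dists, 5)),
                        np.log10(np.percentile(dists, 50)), n_radii)
    # correlation sum C(eps): fraction of pairs closer than eps
    C = np.array([(dists < r).mean() for r in radii])
    # the dimension is the slope of log C(eps) versus log eps
    return np.polyfit(np.log(radii), np.log(C), 1)[0]
```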
The fractal dimension is a measure of signal complexity and irregularity; thus, it can help in detecting these irregularities that may be associated with the presence of epileptic seizures. Comparing the fractal dimension of the signal during seizures and normal brain activity can provide diagnostic information. Several methods have been proposed for calculating the fractal dimension from a discrete signal. An interesting method of calculating the fractal dimension was proposed by Sevcik [82]. The signal is normalized so that its values are in a unitary square. The normalized values of the abscissae x * and the ordinates y * are given by the following formulas [82]:
x_i^* = \frac{x_i}{x_{max}}
y_i^* = \frac{y_i - y_{min}}{y_{max} - y_{min}}
where i is the sample number. The fractal dimension is calculated from the equation [82]:
D_s = 1 + \frac{\log L}{\log (2 N')}
where L is the length of the curve in a unitary square, and N’ = N − 1.
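Sevcik's estimate is simple to compute directly from a signal window; a Python sketch is given below (it assumes, as in the original method, that the abscissae are the sample indices mapped onto the unit interval).

```python
import numpy as np

def sevcik_fractal_dimension(y):
    """Sevcik fractal dimension of a waveform normalised to the unit square."""
    y = np.asarray(y, dtype=float)
    N = len(y)
    x_star = np.arange(N) / (N - 1)                  # x*_i = x_i / x_max
    y_star = (y - y.min()) / (y.max() - y.min())     # y*_i = (y_i - y_min) / (y_max - y_min)
    # L: length of the normalised curve; N' = N - 1
    L = np.sum(np.hypot(np.diff(x_star), np.diff(y_star)))
    return 1 + np.log(L) / np.log(2 * (N - 1))
```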

3.1.5. Empirical Mode Decomposition

Empirical mode decomposition (EMD) is a signal analysis technique that involves decomposing a signal into components of different frequencies and amplitudes, called intrinsic mode functions (IMFs) [83]. This method was introduced by Huang and his collaborators in 1998 and is particularly useful in analyzing nonstationary and nonlinear signals [84]. EMD is an adaptive decomposition method that adjusts to the signal’s properties over time, enabling the analysis of nonstationary signals such as EEG signals [85]. In the case of epileptic seizures, these signals often exhibit frequency and amplitude variability. EMD can help extract signal components that correspond to different aspects of an epileptic seizure. EMD allows for signal analysis at different time scales, enabling the identification of both low-frequency and high-frequency signal components (Figure 9). In the case of epileptic seizures, there can be both slow changes at low frequencies and rapid changes at high frequencies. Multiscale analysis using EMD can reveal these different aspects of an epileptic seizure [85].
The main assumption of the EMD method is that any signal can be decomposed into IMF components that satisfy two criteria: The number of extrema (maxima and minima) and the number of zero crossings must be equal or differ by at most one, and the mean of the upper and lower envelopes must be approximately zero at every point.
The process of signal decomposition using EMD consists of several steps [86]:
  • Identifying all local maxima and minima in the signal.
  • Calculating the mean of the upper and lower envelopes interpolated through the local maxima and minima.
  • Generating a candidate IMF component by taking the difference between the input signal and the calculated envelope mean.
  • Checking if the IMF component meets the IMF criteria. If so, it is considered the first IMF component. If not, the process is repeated for the remaining signal after removing this component.
  • Repeating steps 2–4 for the remaining signal until obtaining all IMF components.
After applying EMD, we obtain a signal decomposed into IMF components that represent different frequencies and amplitudes present in the original signal. These IMF components can be analyzed separately to obtain more detailed information about the signal’s characteristics at different time scales. When extracting features from an EEG signal, we proceed similarly to extracting features for wavelet transform, but we calculate the features for each successive IMF.
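The sketch below uses the PyEMD package as one possible implementation of the sifting procedure and then computes per-IMF statistics analogous to the wavelet features (the authors' Matlab implementation, the number of IMFs retained, and the histogram-based entropy estimate are assumptions of this example).

```python
import numpy as np
from PyEMD import EMD          # pip install EMD-signal
from scipy.stats import skew, kurtosis

def emd_features(window, max_imfs=5, bins=32):
    """Variance, skewness, kurtosis, and entropy of the first few intrinsic mode functions."""
    imfs = EMD()(np.asarray(window, dtype=float))[:max_imfs]
    feats = {}
    for i, imf in enumerate(imfs, start=1):
        hist, _ = np.histogram(imf, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        feats[f"variance(imf{i})"] = np.var(imf, ddof=1)
        feats[f"skewness(imf{i})"] = skew(imf)
        feats[f"kurtosis(imf{i})"] = kurtosis(imf, fisher=False)
        feats[f"entropy(imf{i})"] = -np.sum(p * np.log2(p))
    return feats
```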

3.1.6. Method of Calculation and Specification of Features

Features of iEEG signals were calculated for two-second windows. The compilation of features, their labels, and the calculation method are presented in Table 1.

3.2. Machine Learning

In machine learning, there are many different types of classifiers that are used for solving classification problems. Among popular classifiers are logistic regression, support vector machines, decision trees, random forest, K-nearest neighbors (KNN), naive Bayes classifiers, and neural networks [15,87,88,89,90,91]. The support vector machine (SVM) is one of the popular machine learning algorithms used for both classification and regression tasks. SVM finds optimal separating hyperplanes for different classes in a high-dimensional space [92]. In the case of the SVM, the kernel is one of the key parameters. The kernel defines a similarity function between data in the feature space [93]. Common types of kernels include linear, polynomial, and radial basis functions (RBFs) [94]. The choice of an appropriate kernel depends on the nature of the data and its separability. Handling nonlinear data using the RBF kernel is very popular [95]. The RBF function can model complex nonlinear relationships between data. Therefore, SVM with an RBF kernel has the ability to flexibly adapt to diverse data and exhibit good generalization capability [96]. During the training of SVM, the optimization process is limited to handling only a few training samples called support vectors. Consequently, even though SVM can operate on a large number of training data, the training process is computationally efficient. These advantages make SVM, especially with the RBF kernel, a popular and effective tool in the field of machine learning, particularly for nonlinear and high-dimensional data [97,98,99,100].
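In scikit-learn terms, such a classifier can be sketched as follows (a sketch only: the regularisation parameter C and the kernel width γ are left at library defaults, since the article does not report the hyperparameters used in its Matlab implementation).

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# X_train, X_test: feature matrices (one row of extracted features per 2 s window)
# y_train, y_test: labels, 0 = seizure-free (F), 1 = seizure (S)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
# clf.fit(X_train, y_train)
# print(accuracy_score(y_test, clf.predict(X_test)))
```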

3.3. Deep Learning

Deep learning, as one of the branches of artificial intelligence, can yield better results than traditional machine learning methods in the case of epileptic seizure detection [101,102]. There are several reasons why deep learning can be beneficial in this context [103,104,105]:
  • Hierarchical data representation: Deep learning allows for the automatic creation of multi-level data representations. As a result, neural networks can detect complex patterns and structures in EEG signals that may be difficult to identify using traditional methods.
  • Feature extraction: Deep learning can autonomously extract relevant features from input data, eliminating the need for manual feature engineering. In the case of epileptic seizure detection, neural networks can automatically identify characteristic patterns in EEG signals that are associated with seizures.
  • Utilization of larger datasets: Deep learning requires a large amount of training data. In the context of epileptic seizure detection, the availability of a large EEG database containing both seizure and non-seizure signals enables the training of more advanced neural networks. A larger amount of training data can contribute to improving classification effectiveness.
  • Adaptability: Neural networks can be flexible and adapt to changing conditions. In the case of epileptic seizure detection, EEG signals may undergo changes over time, and seizures can manifest in different forms. Deep learning allows the model to adapt to these changes and adjust to new patterns.
However, it is important to note that the effectiveness of deep learning depends on the appropriate selection of network architecture, parameter optimization, and the quality of available training data.
Deep learning techniques have been rapidly advancing in recent times for several reasons. In recent years, data collection has become easier and more prevalent, especially in areas such as image processing, natural language processing, and biomedical data analysis [106,107]. Large datasets are crucial for effective deep learning as deep models are capable of extracting meaningful features and patterns from such data. The increase in computational power and the availability of advanced hardware, such as graphics processing units (GPUs) and tensor processing units (TPUs), enable accelerated training of neural networks [108]. This allows for the exploration of larger and more complex models, contributing to the development of deep learning techniques. Many advanced neural network architectures have been developed, such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs), which possess unique abilities for pattern recognition, sequence processing, or generating new data [109]. These advanced architectures drive the advancement of deep learning techniques and enable the solution of more complex problems. Knowledge about optimization, regularization, weight initialization, normalization, and other aspects of deep learning is continually evolving. This leads to the creation of increasingly efficient and effective deep learning techniques.
Convolutional layers are the main component of CNNs [110]. Each convolutional layer consists of a set of filters (known as convolutional kernels) that are applied to the input data. These filters perform convolution operations, which involve multiplying the signal values in the input window by their corresponding filter weights and summing the results. This generates a feature map that contains information about detected patterns in the data. After the convolution operation, the results are passed through an activation function, such as a rectified linear unit (ReLU) [111]. Activation functions introduce non-linearity to the network, allowing for the modeling of more complex relationships between features. Pooling layers are used to reduce the dimensionality of the data. Max-pooling layers are commonly used, which select the maximum value within the input window and pass it forward [112]. This allows for information reduction while preserving the most important features. After processing the data through convolutional and pooling layers, the results are transformed into a vector and passed to a fully connected layer. This layer consists of a set of neurons that are connected to neurons in the previous layer. These neurons compute weighted sums of inputs and apply an activation function. The final classification results are compared with the expected labels using a loss function. The goal of the network is to minimize this function, which is achieved using optimization algorithms such as stochastic gradient descent (SGD) or adaptive moment estimation (Adam) [113]. During training, the network weights are updated to minimize the loss function.
Creating CNN layers for discrete signals requires considering the specific characteristics of the signals and analysis goals. Depending on the specific problem, layer parameters such as filter size, stride, padding, and activation functions can be adjusted to achieve the best results. It is also important to appropriately customize the architecture of the entire network, including other types of layers such as a fully connected layer, to ensure proper processing and classification of discrete signals.
Convolutional neural networks are used for the detection of epileptic seizures in EEG signals for several reasons [114]. CNNs are capable of automatically extracting relevant features from EEG signals, eliminating the need for manual feature engineering by humans. By utilizing convolutional layers, the network can learn to recognize characteristic patterns and shapes of waves that are important for seizure identification [115]. These patterns can be characterized by changes in amplitude or sequences of waves. CNNs create a hierarchical representation of the data, allowing for the modeling of complex relationships between the features of EEG signals. Convolutional layers extract low-level features, such as wave shapes, while subsequent higher-level layers integrate these features into more global patterns. This enables more advanced seizure recognition. As a result, the use of CNNs for seizure detection in EEG signals provides an automatic and objective approach that can be used for the rapid identification of epileptic seizures, offering hope for improving the effectiveness of algorithms.
In searching for the best CNN structure, the influence of the number of convolutional layers (in the range of 2–5) was investigated. Moreover, the influence of the number of filters (values: 3–10) was checked. Next, the influence of filter sizes (2, 4, 8, 16, 32, 64, and 128) was investigated. At this stage, the knowledge of iEEG signal processing and analysis methods was not taken into account. The choice of the network structure resulted from an automatic search over combinations of the number of layers, the number of filters, and the filter size. Finally, we opted for a relatively simple CNN structure with 3 convolutional layers. A ReLU layer was applied after each convolutional layer. The last convolutional layer and the last ReLU layer contain 128 filters. The structure, along with the basic features and parameters of the layers, is listed in Table 2. During the selection of the best parameters, different optimizers (Adam, SGD), InitialLearnRate values (0.0001, 0.001, 0.01), and L2Regularization values (0.01, 0.001, 0.0001) were also checked.
To train the CNN, the Adam optimization algorithm was used. The initial learning rate parameter was set to 0.001. This parameter determines how quickly the network adjusts its weights during training. The maximum number of training epochs was set to 50. An epoch represents one pass through the entire training dataset. In this case, the network will be trained for a maximum of 50 epochs, meaning that each training sample will be used no more than 50 times. The option of shuffling the training data before each epoch was applied. This means that the training data will be randomly shuffled after each epoch, contributing to better network generalization. A separate set of training data was used for validation. The validation data are used to assess the quality of the network during training. The validation set was created as a random subset of the training data. Approximately 1/10 of the training data were selected to form this validation set.
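A minimal PyTorch sketch of a three-block 1D CNN of this kind is shown below. The exact filter counts, kernel sizes, and pooling configuration come from Table 2 and are not reproduced here, so the values in the sketch are placeholders; only the training settings stated in the text (Adam, initial learning rate 0.001, up to 50 epochs, shuffled data, a validation split of about 1/10) are taken from the article.

```python
import torch
import torch.nn as nn

class SeizureCNN(nn.Module):
    """Three convolution+ReLU blocks followed by a linear classifier (placeholder sizes)."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=8, padding="same"), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=8, padding="same"), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=8, padding="same"), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):                      # x: (batch, 1, n_samples)
        return self.classifier(self.features(x).flatten(1))

model = SeizureCNN()
# weight_decay stands in for the L2 regularization values that were searched
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
loss_fn = nn.CrossEntropyLoss()
# Training loop (not shown): up to 50 epochs over shuffled mini-batches of 2 s windows,
# with roughly 1/10 of the training windows held out for validation.
```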
LSTM networks are capable of effectively modeling sequential data, which is crucial in the detection of epileptic seizures [116]. Brain signals recorded during seizures often exhibit a characteristic sequence of changes that can only be detected through the analysis of time series. LSTM networks have internal memory that allows them to store information about previous states and utilize that information to predict future changes. The fundamental component of an LSTM network is the memory cell [117]. The memory cell stores an internal state that can be updated and read by gates. The gates in an LSTM network control the flow of information, determining which information should be retained and which should be discarded. Through these gates, LSTM networks can focus on relevant information while ignoring the noise and irrelevant details. This ability makes them effective in analyzing sequential data, such as electroencephalographic (EEG) signals used in the detection of epileptic seizures [118].
The application of LSTM networks in epileptic seizure detection involves training the network on EEG data that represent brain activity during seizures and normal activity. An LSTM network can learn to recognize patterns that characterize epileptic seizures and distinguish them from normal activity. Once trained, the LSTM network can be used to analyze real-time EEG signals. Based on the current input data, the network can make predictions about whether a given signal indicates the presence of a pattern characteristic of an epileptic seizure.
In searching for the best LSTM structure, the influence of the number of hidden units (in the range of 5–30) was investigated. The structure, as well as the basic features and parameters of the layers, is presented in Table 3. During the selection of the best parameters, different optimizers (Adam, SGD), InitialLearnRate values (0.0001, 0.001, 0.01), and L2Regularization values (0.01, 0.001, 0.0001) were also checked.
For the LSTM network, the Adam optimization algorithm was applied, with an initial learning rate of 0.001. The maximum number of epochs was set to 50, and the options of shuffling the training data and using a validation set were employed, similar to the CNN network training. The validation set was created as a random subset of the training data. Approximately 1/10 of the training data were selected to form this validation set.
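Analogously, a PyTorch sketch of an LSTM classifier for the 2 s windows is given below (the hidden size is a placeholder taken from the 5–30 range that was searched; the exact structure is given in Table 3).

```python
import torch
import torch.nn as nn

class SeizureLSTM(nn.Module):
    """Single-layer LSTM over the raw sample sequence, classified from the final hidden state."""
    def __init__(self, hidden_size=20, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, n_classes)

    def forward(self, x):                      # x: (batch, n_samples, 1)
        _, (h_n, _) = self.lstm(x)             # final hidden state summarises the window
        return self.classifier(h_n[-1])

model = SeizureLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
# Training mirrors the CNN: up to 50 epochs, shuffled data, ~1/10 of the training set for validation.
```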

3.4. Evaluation of the Effectiveness of Seizure Detection

The division into a training set and a test set is a crucial element in training and evaluating classifiers [119]. It allows for assessing the effectiveness of the classifier on new, unknown data. To effectively train a classifier, we need a sufficient amount of data. The data should be representative of the classification problem and contain diverse examples from all the classes that the classifier is intended to recognize. Typically, the data are divided into two sets: The training set and the test set. Usually, the majority of the data are allocated to the training set (70–80%), while a smaller portion is allocated to the test set (20–30%) [120]. It is important for the division to be random and maintain the class proportions to prevent introducing biased associations. The training set is used to train the classifier. The classifier analyzes the training examples and adjusts the weights or parameters of its structure to learn to recognize patterns and classify the data. After the training is completed, the classifier is tested on unknown data from the test set. The classifier analyzes these test examples and predicts their classes. The predicted classes are compared with the actual classes to evaluate the effectiveness of the classifier.
A confusion matrix (Table 4) is a tool used to evaluate the effectiveness of classification in binary or multiclass problems [121]. It is a table that presents the number of correctly and incorrectly classified examples for each class. The confusion matrix is a useful tool for visualizing and analyzing classification results in the context of epileptic seizure detection, allowing the identification of types of classification errors and the assessment of classifier effectiveness.
The evaluation measures of classification quality can be calculated based on the confusion matrix, whose elements are defined as follows:
  • True Negative (TN): The number of cases correctly classified as non-seizure periods.
  • False Positive (FP): The number of cases incorrectly classified as epileptic seizures when they are non-seizure periods (Type I error).
  • False Negative (FN): The number of cases incorrectly classified as non-seizure periods when they are epileptic seizures (Type II error).
  • True Positive (TP): The number of cases correctly classified as epileptic seizures.
Based on the comparison between predicted and actual classes, various measures of classification quality can be calculated, such as accuracy, sensitivity, specificity, precision, etc. These measures help assess the effectiveness of the classifier and understand how well it performs on new, unknown data [116,120]. A brief explanation of these metrics is as follows:
  • Accuracy is a general measure of classifier effectiveness, determining the ratio of the number of correctly classified cases (both epileptic seizures and non-seizure periods) to the total number of cases. A higher accuracy value indicates that the classifier performs well overall in classification [78,122].
\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}
  • Precision is a measure of the classifier’s ability to correctly identify epileptic seizures among all signals classified as seizures. Numerically, it is the ratio of the number of correctly classified epileptic seizures to the sum of correctly classified epileptic seizures and other signals incorrectly classified as seizures. A higher precision indicates that the classifier has a lower tendency for false positive classification errors [123].
\mathrm{Precision} = \frac{TP}{TP + FP}
  • Sensitivity is a measure of the classifier’s ability to correctly detect epileptic seizures. It is numerically defined as the ratio of the number of correctly detected seizures to the total number of seizures in the test data. A higher sensitivity value indicates that the classifier has a greater ability to detect epileptic seizures, thereby minimizing the false negatives [124].
\mathrm{Sensitivity} = \frac{TP}{TP + FN}
  • Specificity is a measure of the classifier’s ability to correctly classify non-seizure signals. Numerically, it is the ratio of the number of correctly classified non-seizure signals to the sum of correctly classified non-seizure signals and signals incorrectly classified as seizures. A higher specificity indicates that the classifier has a lower tendency for false positive classification errors [125].
\mathrm{Specificity} = \frac{TN}{TN + FP}
Evaluating the accuracy of epileptic seizure detection requires understanding these measures and their interpretation. Sensitivity and precision are particularly important in the case of epileptic seizure detection because we typically aim for maximum seizure detection (high sensitivity) while minimizing the number of false alarms (high precision).
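For completeness, the four measures defined above can be computed directly from binary predictions, as in the short Python sketch below (label 1 denotes a seizure window; this simply mirrors the formulas rather than the authors' code).

```python
import numpy as np

def detection_metrics(y_true, y_pred):
    """Accuracy, precision, sensitivity, and specificity from binary labels (1 = seizure)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": tp / (tp + fp),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```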

4. Results and Discussion

Experiments were conducted in which classifiers were trained for each of the discussed features separately using the training data. Classification was then performed on the test data. The quality of classification was evaluated using the measures: Accuracy, precision, sensitivity, and specificity. In the classification task, an SVM classifier with an RBF kernel was used. The results of classification quality (accuracy, precision, sensitivity, and specificity) for each feature are presented in Table 5. The obtained results indicate that features such as autocorrelation, the spectrum, and features related to signal energy and variance allow for achieving very good classification accuracy. For the spectrum and autocorrelation features, the results of accuracy, precision, sensitivity, and specificity at levels of 0.97, 0.96, 0.98, and 0.96, respectively, are very promising in the context of epileptic seizure detection.
An accuracy result of 0.97 means that the detection algorithm correctly classified 97% of all samples, including epileptic seizures and other EEG signals. This indicates a high degree of overall classification correctness. A precision of 0.96 means that 96% of the cases classified as epileptic seizures were correct. A high precision score indicates a low percentage of false positive results. A sensitivity of 0.98 means that the algorithm correctly identified 98% of all actual epileptic seizure cases. A higher sensitivity means fewer epileptic seizures will be missed. A specificity of 0.96 means that the algorithm correctly identified 96% of cases that were not epileptic seizures. A higher specificity means fewer cases other than epileptic seizures will be incorrectly classified as positive. Features with accuracy results below 0.6 (below 60%) are lyapExp (0.49), dim (0.51), and skewness (cd1) (0.50). These three features did not fulfill their purpose and showed low accuracy in epileptic seizure classification.
In subsequent experiments, the features were grouped as follows: Features related to signal energy in the frequency bands; variance, skewness, kurtosis, and entropy calculated for the wavelet transform details; features related to measures of chaos; variance, skewness, kurtosis, and entropy calculated for the IMFs; and the spectrum and autocorrelation. Apart from the spectrum and autocorrelation (0.97), the best results were obtained for variance and the measures related to chaos (Table 6).
In the next stage, a CNN was trained using all the training data (iEEG signals). Then, the trained network was used to classify the data from the test set. Multiple runs of the CNN and LSTM networks confirmed that the networks learned effectively and achieved excellent results. To confirm this observation, we conducted ten runs of the CNN and LSTM networks, recording the accuracy, sensitivity, precision, and specificity values. The results of these runs were highly consistent. For the CNN, the most frequently recurring results were an accuracy of 0.99, a precision of 0.98, a sensitivity of 1.00, and a specificity of 0.98. For the LSTM network, the most frequently recurring results were an accuracy of 0.98, a precision of 0.96, a sensitivity of 1.00, and a specificity of 0.96. The obtained classification results are reported in Table 7, and the corresponding confusion matrices are shown in Table 8 and Table 9.
The CNN achieved an accuracy of 0.99, indicating that the algorithm correctly classified 99% of all samples, including both epileptic seizures and other EEG signals. The precision is 0.98, meaning that 98% of the cases classified as epileptic seizures by the CNN were correct. The sensitivity achieved a value of 1.00, indicating that the CNN correctly identified all actual cases of epileptic seizures. The specificity is 0.98, indicating that the CNN correctly identified 98% of cases that were not epileptic seizures. On the other hand, the LSTM network achieved an accuracy of 0.98, with a precision of 0.96. The sensitivity achieved a value of 1.00, while the specificity is 0.98. In summary, the results of epileptic seizure detection for the CNN and LSTM networks are very promising.
The obtained detection accuracy can be compared with other works on seizure detection. A comprehensive comparison and summary of seizure detection accuracy results was presented by Liu et al. [126]. The reported accuracies vary depending on the database used and the detection method applied, ranging from 0.905 to 1. For example, for the CHB-MIT database and the CNN approach proposed by Wei et al. [127], the original CNN achieves a sensitivity of 70.7% and a specificity of 92.3% for epileptic EEG classification. In contrast, a remarkable 100% detection accuracy was achieved for the Bonn database using the GRP-DNet algorithm introduced by Zeng et al. [128].
The obtained results should also be compared with the application of transformer-based networks for epileptic seizure detection. The model proposed by Ma et al. [37] achieved an AUROC of 92.1% when tested on Temple University’s publicly available electroencephalogram (EEG) seizure corpus dataset (TUH). In their article, Sun et al. [38] reported a remarkable event-based sensitivity of 97.5% for the SWEC-ETHZ iEEG dataset, while achieving an event-based sensitivity of 98.1% for the TJU-HH iEEG dataset. In the study conducted by Ke et al. [39], experiments were performed on two EEG datasets, demonstrating that the model provides state-of-the-art performance. Specifically, on the CHB-MIT dataset, the model achieves an average sensitivity of 96.02% and an average specificity of 97.94%, surpassing other existing methods by significant margins.
The good results of accuracy, precision, sensitivity, and specificity can be used, to some extent, to evaluate the usefulness of specific features and to compare algorithms. However, it is necessary to critically examine how such research and solutions can be applied in medical practice. This stage of analysis is, in our opinion, often overlooked in scientific publications. In our assessment, high metric values alone do not demonstrate practical utility, nor do they objectively reflect the potential of the developed detection system.
An important aspect is the way data are collected and selected for the experiments. In the case of our research material, the signals were recorded from the surface of the brain, and it should be noted that they contain significantly more diagnostic information than EEG signals recorded from the scalp. However, in practice, this entails a substantial increase in the costs of the recording itself and significant involvement of medical personnel. Although the number of examples is considerable, they represent recordings from only five patients. Therefore, the recorded signals do not represent all possible signals recorded for a much larger population of individuals. This strongly calls into question the direct translation of sensitivity, precision, specificity, and accuracy measures to the broader population. It should be remembered that there are many factors causing epilepsy, and recorded EEG signals may vary.
Furthermore, the recorded signals cover only a narrow time window, and we do not know the duration of the recordings or the basis on which they were selected. We also lack complete information on how the recordings were labeled, i.e., the criteria experts used to assign a signal fragment as a seizure or non-seizure event; it should be noted that neurophysiologists often do not fully agree in this regard. Therefore, the problem of seizure detection should be approached not only through the lens of evaluation coefficients such as accuracy, precision, sensitivity, and specificity but also within the broader context of data collection and organization. Although scalp EEG signals may seem simpler to acquire, they come with their own difficulties. As mentioned earlier, they are often heavily affected by physiological and technical artifacts. There is also the challenge of selecting the EEG channels that capture the changes characteristic of seizures, since each patient has epileptic foci located in different regions and may exhibit different morphological changes in the EEG signal during a seizure. Seizure detection from scalp EEG therefore appears to be a considerably more challenging problem, and results published in the literature must be read critically.
Deep learning methods, including CNN and LSTM networks, are powerful tools for medical analysis and for diagnosing various diseases. However, their operation is often difficult for humans to comprehend because these networks learn from vast amounts of data and complex patterns. Doctors want to understand why a network made a specific diagnosis or decision in order to trust its results. When analyzing medical outcomes, there is a need to confirm that the network interprets the data correctly, i.e., that it recognizes the important features and pathologies and takes the relevant information into account when making decisions. Examining how a CNN operates allows verification of whether its decisions align with medical knowledge. Analyzing the network's behavior can help doctors identify which signal or image features are relevant for diagnosing a specific disease; by inspecting the weights and activations of specific neurons, they can see which regions are particularly important for the diagnostic decision. CNNs can also detect subtle patterns or dependencies in the data that escape the human eye, providing doctors with new knowledge, and such analyses can suggest improvements or modifications to the diagnostic process.
Gradient-weighted class activation mapping (Grad-CAM) is an interpretability method used to gain insight into the decision-making process of a deep neural network [129]. Grad-CAM, an extension of the class activation mapping (CAM) technique, assesses the significance of individual neurons in the network's prediction by examining the gradients of the target class propagated through the network. By computing the gradient of a differentiable output, such as the class score, with respect to the convolutional features in a selected layer, Grad-CAM determines the importance of each neuron. These gradients are aggregated across the spatial and temporal dimensions to obtain weights representing the importance of each feature map, and the weights are then used to linearly combine the activation maps, identifying the features that contribute most to the network's prediction. To explain the functioning of the network, Grad-CAM was applied to the CNN trained for epileptic seizure detection. Figure 10 presents the results of the Grad-CAM algorithm applied to EEG signals containing epileptic seizures. Higher values, highlighted in magenta on the graphs, indicate signal fragments that are more useful for seizure detection. The charts show that the fragments displaying sharp, high-amplitude changes (spikes) are the most significant in the context of seizure detection.
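As an illustration of the mechanics described above, the sketch below implements 1-D Grad-CAM for a PyTorch convolutional model. The hook-based implementation, model interface, and layer choice are assumptions; only the gradient-weighting steps follow the Grad-CAM formulation [129].

```python
# Minimal 1-D Grad-CAM sketch (hypothetical model/layer names).
import torch

def grad_cam_1d(model, conv_layer, x, target_class):
    """x: tensor of shape (1, 1, T); returns a relevance curve over the conv layer's time axis."""
    feats, grads = {}, {}
    h1 = conv_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    h2 = conv_layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))
    try:
        score = model(x)[0, target_class]        # class score for the seizure class
        model.zero_grad()
        score.backward()                         # gradients of the score w.r.t. conv features
    finally:
        h1.remove(); h2.remove()
    a, g = feats["a"][0], grads["a"][0]          # (C, T') activations and gradients
    w = g.mean(dim=1, keepdim=True)              # channel weights = time-averaged gradients
    cam = torch.relu((w * a).sum(dim=0))         # weighted sum over channels, then ReLU
    return (cam / (cam.max() + 1e-8)).detach()   # normalize to [0, 1] for plotting
```

The returned curve can be upsampled to the input length and overlaid on the iEEG trace, analogous to the magenta highlighting in Figure 10.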
The results obtained with Grad-CAM indicate the segments of the signal corresponding to epileptic discharges. Grad-CAM uses the network's activations and gradients to generate maps that highlight the regions of the signal most important for classification. EEG signals contain many types of changes, but epileptic discharges manifest as rapid, abrupt, high-amplitude changes resembling characteristic spikes, and the Grad-CAM analysis identifies the regions responsible for these changes as significant. Consequently, the algorithm allows the signal regions that contribute most to the detection of epileptic discharges to be identified, which can be valuable in the analysis and diagnosis of such cases. These results can also be valuable in scientific research, contributing to a better understanding of the characteristics of epileptic discharges and thereby aiding the development of new diagnostic and therapeutic methods. In the future, it is worth considering larger and more diverse iEEG/EEG datasets, including signals from a significantly larger number of patients. The application of Grad-CAM to EEG signals with epileptic discharges can thus provide additional information to healthcare professionals, assisting in the diagnosis, treatment, and study of this disease.

5. Conclusions

The best results for seizure detection were obtained with features related to iEEG signal energy, as well as features related to chaos, the Lyapunov exponent, and the fractal dimension. The application of CNN and LSTM networks yielded significantly better results (CNN: accuracy of 0.99, precision of 0.98, sensitivity of 1.00, and specificity of 0.98; LSTM: accuracy of 0.98, precision of 0.96, sensitivity of 1.00, and specificity of 0.96). The results indicate that even CNN and LSTM networks with a simple structure are capable of handling the problem of epileptic seizure detection. The use of the gradient-weighted class activation mapping algorithm identified iEEG signal fragments that played a significant role in seizure detection. The study was conducted on iEEG signals that contained well-described and relatively easy-to-detect cases of epileptic seizures. To assess the detection quality more accurately, future experiments should be repeated on a much larger dataset containing a greater number of examples recorded from a larger number of patients. In the future, the research should also be expanded to explore the possibility of using shorter iEEG signal windows.

Author Contributions

Conceptualization, A.M., A.R. and M.K.; methodology, M.K., A.R. and A.M.; software, M.K.; validation, M.K. and A.M.; formal analysis, A.M.; investigation, A.M. and M.K.; resources, M.K.; data curation, A.M., A.R. and M.K.; writing—original draft preparation, A.M. and M.K.; writing—review and editing, A.M. and M.K.; visualization, M.K.; supervision, A.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable; a publicly available database was used.

Informed Consent Statement

Not applicable; a publicly available database was used.

Data Availability Statement

Data sharing not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Milligan, T.A. Epilepsy: A Clinical Overview. Am. J. Med. 2021, 134, 840–847. [Google Scholar] [CrossRef] [PubMed]
  2. Birbeck, G.L.; Hays, R.D.; Cui, X.; Vickrey, B.G. Seizure Reduction and Quality of Life Improvements in People with Epilepsy. Epilepsia 2002, 43, 535–538. [Google Scholar] [CrossRef] [PubMed]
  3. Thomas, S.V.; Bindu, V.B. Psychosocial and Economic Problems of Parents of Children with Epilepsy. Seizure 1999, 8, 66–69. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Mann, C.; Maltseva, M.; von Podewils, F.; Knake, S.; Kovac, S.; Rosenow, F.; Strzelczyk, A. Supply Problems of Antiseizure Medication Are Common among Epilepsy Patients in Germany. Epilepsy Behav. 2023, 138, 108988. [Google Scholar] [CrossRef]
  5. Samara, Q.A.; Ifraitekh, A.S.; Al Jayyousi, O.; Sawan, S.; Hazaimeh, E.; Jbarah, O.F. Use of Antiepileptic Drugs as Prophylaxis against Posttraumatic Seizures in the Pediatric Population: A Systematic Review and Meta-Analysis. Neurosurg. Rev. 2023, 46, 49. [Google Scholar] [CrossRef]
  6. Sunaga, Y.; Takayama, Y.; Yokosako, S.; Mizuno, T.; Kouno, M.; Tashiro, M.; Iwasaki, M.; Sasaki, M. Drug-Resistant Temporal Lobe Epilepsy Due to Middle Fossa Meningoencephalocele in a Child: A Surgical Case Report. Brain Dev. 2023, 45, 82–86. [Google Scholar] [CrossRef]
  7. Tatum, W.O.; Rubboli, G.; Kaplan, P.W.; Mirsatari, S.M.; Radhakrishnan, K.; Gloss, D.; Caboclo, L.O.; Drislane, F.W.; Koutroumanidis, M.; Schomer, D.L.; et al. Clinical Utility of EEG in Diagnosing and Monitoring Epilepsy in Adults. Clin. Neurophysiol. 2018, 129, 1056–1082. [Google Scholar] [CrossRef]
  8. Bernabei, J.M.; Sinha, N.; Arnold, T.C.; Conrad, E.; Ong, I.; Pattnaik, A.R.; Stein, J.M.; Shinohara, R.T.; Lucas, T.H.; Bassett, D.S.; et al. Normative Intracranial EEG Maps Epileptogenic Tissues in Focal Epilepsy. Brain 2022, 145, 1949–1961. [Google Scholar] [CrossRef]
  9. Jin, B.; So, N.K.; Wang, S. Advances of Intracranial Electroencephalography in Localizing the Epileptogenic Zone. Neurosci. Bull. 2016, 32, 493–500. [Google Scholar] [CrossRef] [Green Version]
  10. Kołodziej, M.; Majkowski, A.; Rak, R.J.; Rysz, A. Detection of Spikes with Defined Parameters in the ECoG Signal. IEEE Trans. Instrum. Meas. 2019, 68, 1045–1052. [Google Scholar] [CrossRef]
  11. Rasheed, K.; Qayyum, A.; Qadir, J.; Sivathamboo, S.; Kwan, P.; Kuhlmann, L.; O’Brien, T.; Razi, A. Machine Learning for Predicting Epileptic Seizures Using EEG Signals: A Review. IEEE Rev. Biomed. Eng. 2021, 14, 139–155. [Google Scholar] [CrossRef] [PubMed]
  12. Tzallas, A.T.; Tsipouras, M.G.; Fotiadis, D.I. Epileptic Seizure Detection in EEGs Using Time–Frequency Analysis. IEEE Trans. Inf. Technol. Biomed. 2009, 13, 703–710. [Google Scholar] [CrossRef] [PubMed]
  13. Siddiqui, M.K.; Morales-Menendez, R.; Huang, X.; Hussain, N. A Review of Epileptic Seizure Detection Using Machine Learning Classifiers. Brain Inform. 2020, 7, 5. [Google Scholar] [CrossRef] [PubMed]
  14. Vidyaratne, L.S.; Iftekharuddin, K.M. Real-Time Epileptic Seizure Detection Using EEG. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 2146–2156. [Google Scholar] [CrossRef] [PubMed]
  15. Ahmad, I.; Wang, X.; Zhu, M.; Wang, C.; Pi, Y.; Khan, J.A.; Khan, S.; Samuel, O.W.; Chen, S.; Li, G. EEG-Based Epileptic Seizure Detection via Machine/Deep Learning Approaches: A Systematic Review. Comput. Intell. Neurosci. 2022, 2022, e6486570. [Google Scholar] [CrossRef]
  16. Farooq, M.S.; Zulfiqar, A.; Riaz, S. Epileptic Seizure Detection Using Machine Learning: Taxonomy, Opportunities, and Challenges. Diagnostics 2023, 13, 1058. [Google Scholar] [CrossRef]
  17. Haneef, Z.; Skrehot, H.C. Neurostimulation in Generalized Epilepsy: A Systematic Review and Meta-Analysis. Epilepsia 2023, 64, 811–820. [Google Scholar] [CrossRef]
  18. Beniczky, S.; Karoly, P.; Nurse, E.; Ryvlin, P.; Cook, M. Machine Learning and Wearable Devices of the Future. Epilepsia 2021, 62, S116–S124. [Google Scholar] [CrossRef]
  19. Moeller, F.; LeVan, P.; Muhle, H.; Stephani, U.; Dubeau, F.; Siniatchkin, M.; Gotman, J. Absence Seizures: Individual Patterns Revealed by EEG-FMRI. Epilepsia 2010, 51, 2000–2010. [Google Scholar] [CrossRef] [Green Version]
  20. Noachtar, S.; Rémi, J. The Role of EEG in Epilepsy: A Critical Review. Epilepsy Behav. 2009, 15, 22–33. [Google Scholar] [CrossRef]
  21. Rodriguez Ruiz, A.; Vlachy, J.; Lee, J.W.; Gilmore, E.J.; Ayer, T.; Haider, H.A.; Gaspard, N.; Ehrenberg, J.A.; Tolchin, B.; Fantaneanu, T.A.; et al. Association of Periodic and Rhythmic Electroencephalographic Patterns with Seizures in Critically Ill Patients. JAMA Neurol. 2017, 74, 181–188. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  22. Toda, Y.; Kobayashi, K.; Hayashi, Y.; Inoue, T.; Oka, M.; Endo, F.; Yoshinaga, H.; Ohtsuka, Y. High-Frequency EEG Activity in Epileptic Encephalopathy with Suppression-Burst. Brain Dev. 2015, 37, 230–236. [Google Scholar] [CrossRef] [PubMed]
  23. Salami, P.; Peled, N.; Nadalin, J.K.; Martinet, L.-E.; Kramer, M.A.; Lee, J.W.; Cash, S.S. Seizure Onset Location Shapes Dynamics of Initiation. Clin. Neurophysiol. 2020, 131, 1782–1797. [Google Scholar] [CrossRef] [PubMed]
  24. Gulyás, A.I.; Freund, T.T. Generation of Physiological and Pathological High Frequency Oscillations: The Role of Perisomatic Inhibition in Sharp-Wave Ripple and Interictal Spike Generation. Curr. Opin. Neurobiol. 2015, 31, 26–32. [Google Scholar] [CrossRef]
  25. Dümpelmann, M.; Elger, C.E. Automatic Detection of Epileptiform Spikes in the Electrocorticogram: A Comparison of Two Algorithms. Seizure 1998, 7, 145–152. [Google Scholar] [CrossRef]
  26. Supriya, S.; Siuly, S.; Wang, H.; Zhang, Y. Automated Epilepsy Detection Techniques from Electroencephalogram Signals: A Review Study. Health Inf. Sci. Syst. 2020, 8, 33. [Google Scholar] [CrossRef]
  27. Alotaiby, T.N.; Alshebeili, S.A.; Alshawi, T.; Ahmad, I.; Abd El-Samie, F.E. EEG Seizure Detection and Prediction Algorithms: A Survey. EURASIP J. Adv. Signal Process. 2014, 2014, 183. [Google Scholar] [CrossRef] [Green Version]
  28. Sharmila, A.; Geethanjali, P. A Review on the Pattern Detection Methods for Epilepsy Seizure Detection from EEG Signals. Biomed. Eng./Biomed. Tech. 2019, 64, 507–517. [Google Scholar] [CrossRef]
  29. Parvez, M.Z.; Paul, M. Epileptic Seizure Detection by Analyzing EEG Signals Using Different Transformation Techniques. Neurocomputing 2014, 145, 190–200. [Google Scholar] [CrossRef]
  30. Panda, R.; Khobragade, P.S.; Jambhule, P.D.; Jengthe, S.N.; Pal, P.R.; Gandhi, T.K. Classification of EEG Signal Using Wavelet Transform and Support Vector Machine for Epileptic Seizure Diction. In Proceedings of the 2010 International Conference on Systems in Medicine and Biology, Kharagpur, India, 16–18 December 2010; pp. 405–408. [Google Scholar] [CrossRef]
  31. Ocak, H. Optimal Classification of Epileptic Seizures in EEG Using Wavelet Analysis and Genetic Algorithm. Signal Process. 2008, 88, 1858–1867. [Google Scholar] [CrossRef]
  32. Mohseni, H.R.; Maghsoudi, A.; Shamsollahi, M.B. Seizure Detection in EEG Signals: A Comparison of Different Approaches. In Proceedings of the 2006 International Conference of the IEEE Engineering in Medicine and Biology Society, New York, NY, USA, 30 August–30 September 2006; pp. 6724–6727. [Google Scholar]
  33. Polat, K.; Güneş, S. Classification of Epileptiform EEG Using a Hybrid System Based on Decision Tree Classifier and Fast Fourier Transform. Appl. Math. Comput. 2007, 187, 1017–1026. [Google Scholar] [CrossRef]
  34. Emami, A.; Kunii, N.; Matsuo, T.; Shinozaki, T.; Kawai, K.; Takahashi, H. Seizure Detection by Convolutional Neural Network-Based Analysis of Scalp Electroencephalography Plot Images. NeuroImage Clin. 2019, 22, 101684. [Google Scholar] [CrossRef]
  35. Wei, X.; Zhou, L.; Chen, Z.; Zhang, L.; Zhou, Y. Automatic Seizure Detection Using Three-Dimensional CNN Based on Multi-Channel EEG. BMC Med. Inform. Decis. Mak. 2018, 18, 111. [Google Scholar] [CrossRef] [Green Version]
  36. Zhou, M.; Tian, C.; Cao, R.; Wang, B.; Niu, Y.; Hu, T.; Guo, H.; Xiang, J. Epileptic Seizure Detection Based on EEG Signals and CNN. Front. Neuroinform. 2018, 12, 95. [Google Scholar] [CrossRef] [Green Version]
  37. Ma, Y.; Liu, C.; Ma, M.S.; Yang, Y.; Truong, N.D.; Kothur, K.; Nikpour, A.; Kavehei, O. TSD: Transformers for Seizure Detection. bioRxiv 2023. [CrossRef]
  38. Sun, Y.; Jin, W.; Si, X.; Zhang, X.; Cao, J.; Wang, L.; Yin, S.; Ming, D. Continuous Seizure Detection Based on Transformer and Long-Term IEEG. IEEE J. Biomed. Health Inform. 2022, 26, 5418–5427. [Google Scholar] [CrossRef]
  39. Ke, N.; Lin, T.; Lin, Z.; Zhou, X.-H.; Ji, T. Convolutional Transformer Networks for Epileptic Seizure Detection. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, Association for Computing Machinery, New York, NY, USA, 17 October 2022; pp. 4109–4113. [Google Scholar]
  40. Schachter, S.C.; Saper, C.B. Vagus Nerve Stimulation. Epilepsia 1998, 39, 677–686. [Google Scholar] [CrossRef]
  41. González, H.F.J.; Yengo-Kahn, A.; Englot, D.J. Vagus Nerve Stimulation for the Treatment of Epilepsy. Neurosurg. Clin. 2019, 30, 219–230. [Google Scholar] [CrossRef]
  42. Stefan, H.; Kreiselmeyer, G.; Kerling, F.; Kurzbuch, K.; Rauch, C.; Heers, M.; Kasper, B.S.; Hammen, T.; Rzonsa, M.; Pauli, E.; et al. Transcutaneous Vagus Nerve Stimulation (t-VNS) in Pharmacoresistant Epilepsies: A Proof of Concept Trial. Epilepsia 2012, 53, e115–e118. [Google Scholar] [CrossRef]
  43. Schulze-Bonhage, A.; Feldwisch-Drentrup, H.; Ihle, M. The Role of High-Quality EEG Databases in the Improvement and Assessment of Seizure Prediction Methods. Epilepsy Behav. 2011, 22, S88–S93. [Google Scholar] [CrossRef]
  44. Wong, S.; Simmons, A.; Rivera-Villicana, J.; Barnett, S.; Sivathamboo, S.; Perucca, P.; Ge, Z.; Kwan, P.; Kuhlmann, L.; Vasa, R.; et al. EEG Datasets for Seizure Detection and Prediction—A Review. Epilepsia Open 2023, 8, 252–267. [Google Scholar] [CrossRef] [PubMed]
  45. Ihle, M.; Feldwisch-Drentrup, H.; Teixeira, C.A.; Witon, A.; Schelter, B.; Timmer, J.; Schulze-Bonhage, A. EPILEPSIAE—A European Epilepsy Database. Comput. Methods Programs Biomed. 2012, 106, 127–138. [Google Scholar] [CrossRef] [PubMed]
  46. Handa, P.; Mathur, M.; Goel, N. Open and Free EEG Datasets for Epilepsy Diagnosis. arXiv 2021, arXiv:2108.01030. [Google Scholar]
  47. Andrzejak, R.G.; Schindler, K.; Rummel, C. Nonrandomness, Nonlinear Dependence, and Nonstationarity of Electroencephalographic Recordings from Epilepsy Patients. Phys. Rev. E Stat. Nonlinear Soft Matter Phys. 2012, 86, 046206. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  48. American Epilepsy Society Seizure Prediction Challenge. Available online: https://kaggle.com/competitions/seizure-prediction (accessed on 19 June 2023).
  49. Andrzejak, R.G.; Lehnertz, K.; Rieke, C.; Mormann, F.; David, P.; Elger, C.E. Indications of Nonlinear Deterministic and Finite-Dimensional Structures in Time Series of Brain Electrical Activity: Dependence on Recording Region and Brain State. Phys. Rev. E 2001, 64, 061907. [Google Scholar] [CrossRef] [Green Version]
  50. Torse, D.; Desai, V.; Khanai, R. A Review on Seizure Detection Systems with Emphasis on Multi-Domain Feature Extraction and Classification Using Machine Learning. BRAIN Broad Res. Artif. Intell. Neurosci. 2017, 8, 109–129. [Google Scholar]
  51. Wang, L.; Xue, W.; Li, Y.; Luo, M.; Huang, J.; Cui, W.; Huang, C. Automatic Epileptic Seizure Detection in EEG Signals Using Multi-Domain Feature Extraction and Nonlinear Analysis. Entropy 2017, 19, 222. [Google Scholar] [CrossRef] [Green Version]
  52. Samiee, K.; Kovács, P.; Gabbouj, M. Epileptic Seizure Detection in Long-Term EEG Records Using Sparse Rational Decomposition and Local Gabor Binary Patterns Feature Extraction. Knowl.-Based Syst. 2017, 118, 228–240. [Google Scholar] [CrossRef]
  53. Atal, D.K.; Singh, M. A Hybrid Feature Extraction and Machine Learning Approaches for Epileptic Seizure Detection. Multidimens. Syst. Signal Process. 2020, 31, 503–525. [Google Scholar] [CrossRef]
  54. Behara, D.S.T.; Kumar, A.; Swami, P.; Panigrahi, B.K.; Gandhi, T.K. Detection of Epileptic Seizure Patterns in EEG through Fragmented Feature Extraction. In Proceedings of the 2016 3rd International Conference on Computing for Sustainable Global Development (INDIACom), New Delhi, India, 16–18 March 2016; pp. 2539–2542. [Google Scholar]
  55. Liu, Y.; Zhou, W.; Yuan, Q.; Chen, S. Automatic Seizure Detection Using Wavelet Transform and SVM in Long-Term Intracranial EEG. IEEE Trans. Neural Syst. Rehabil. Eng. 2012, 20, 749–755. [Google Scholar] [CrossRef]
  56. Burrello, A.; Cavigelli, L.; Schindler, K.; Benini, L.; Rahimi, A. Laelaps: An Energy-Efficient Seizure Detection Algorithm from Long-Term Human IEEG Recordings without False Alarms. In Proceedings of the 2019 Design, Automation & Test in Europe Conference & Exhibition (DATE), Grenoble, France, 9–13 March 2019; pp. 752–757. [Google Scholar]
  57. Balaji, S.S.; Parhi, K.K. Seizure Onset Zone Identification From IEEG: A Review. IEEE Access 2022, 10, 62535–62547. [Google Scholar] [CrossRef]
  58. Sharma, A. Epileptic Seizure Prediction Using Power Analysis in Beta Band of EEG Signals. In Proceedings of the 2015 International Conference on Soft Computing Techniques and Implementations (ICSCTI), Faridabad, India, 8–10 October 2015; pp. 117–121. [Google Scholar]
  59. Schwartz, M. Distribution of the Time-Average Power of a Gaussian Process. IEEE Trans. Inf. Theory 1970, 16, 17–26. [Google Scholar] [CrossRef]
  60. Pattnaik, S.; Rout, N.; Sabut, S. Machine Learning Approach for Epileptic Seizure Detection Using the Tunable-Q Wavelet Transform Based Time–Frequency Features. Int. J. Inf. Tecnol. 2022, 14, 3495–3505. [Google Scholar] [CrossRef]
  61. Yousefi, M.R.; Golnejad, S.; Hosseini, M.M. Comparison of EEG Based Epilepsy Diagnosis Using Neural Networks and Wavelet Transform. arXiv 2022. [Google Scholar] [CrossRef]
  62. Shen, M.; Wen, P.; Song, B.; Li, Y. An EEG Based Real-Time Epilepsy Seizure Detection Approach Using Discrete Wavelet Transform and Machine Learning Methods. Biomed. Signal Process. Control 2022, 77, 103820. [Google Scholar] [CrossRef]
  63. Onufriienko, D.; Taranenko, Y. Filtering and Compression of Signals by the Method of Discrete Wavelet Decomposition into One-Dimensional Series. Cybern. Syst. Anal. 2023, 59, 331–338. [Google Scholar] [CrossRef]
  64. Jing, J.; Pang, X.; Pan, Z.; Fan, F.; Meng, Z. Classification and Identification of Epileptic EEG Signals Based on Signal Enhancement. Biomed. Signal Process. Control 2022, 71, 103248. [Google Scholar] [CrossRef]
  65. Ďuriš, V.; Semenov, V.I.; Chumarov, S.G. Wavelets and Digital Filters Designed and Synthesized in the Time and Frequency Domains. Math. Biosci. Eng. 2022, 19, 3056–3068. [Google Scholar] [CrossRef]
  66. Rafiuddin, N.; Khan, Y.U.; Farooq, O. A Novel Wavelet Approach for Multiclass IEEG Signal Classification in Automated Diagnosis of Epilepsy. IEEE Trans. Instrum. Meas. 2022, 71, 1–10. [Google Scholar] [CrossRef]
  67. Mathew, J.; Sivakumaran, N.; Karthick, P.A. Automated Detection of Seizure Types from the Higher-Order Moments of Maximal Overlap Wavelet Distribution. Diagnostics 2023, 13, 621. [Google Scholar] [CrossRef]
  68. Sharma, R.; Pachori, R.B.; Sircar, P. Seizures Classification Based on Higher Order Statistics and Deep Neural Network. Biomed. Signal Process. Control 2020, 59, 101921. [Google Scholar] [CrossRef]
  69. Qatmh, M.; Bonny, T.; Nasir, N.; Al-Shabi, M.; Al-Shammaa, A. Detection of Epileptic Seizure Using Discrete Wavelet Transform on Gamma Band and Artificial Neural Network. In Proceedings of the 2021 14th International Conference on Developments in eSystems Engineering (DeSE), Sharjah, United Arab Emirates, 7–10 December 2021; pp. 401–406. [Google Scholar]
  70. Boonyakitanont, P.; Lek-uthai, A.; Chomtho, K.; Songsiri, J. A Review of Feature Extraction and Performance Evaluation in Epileptic Seizure Detection Using EEG. Biomed. Signal Process. Control 2020, 57, 101702. [Google Scholar] [CrossRef] [Green Version]
  71. Yao, K. A Formula to Calculate the Variance of Uncertain Variable. Soft Comput. 2015, 19, 2947–2953. [Google Scholar] [CrossRef]
  72. Doane, D.P.; Seward, L.E. Measuring Skewness: A Forgotten Statistic? J. Stat. Educ. 2011, 19, 6–9. [Google Scholar] [CrossRef]
  73. Chissom, B.S. Interpretation of the Kurtosis Statistic. Am. Stat. 1970, 24, 19–22. [Google Scholar] [CrossRef]
  74. Abásolo, D.; Hornero, R.; Espino, P.; Álvarez, D.; Poza, J. Entropy Analysis of the EEG Background Activity in Alzheimer’s Disease Patients. Physiol. Meas. 2006, 27, 241. [Google Scholar] [CrossRef] [Green Version]
  75. Boashash, B.; Mesbah, M. A Time-Frequency Approach for Newborn Seizure Detection. IEEE Eng. Med. Biol. Mag. 2001, 20, 54–64. [Google Scholar] [CrossRef]
  76. Jiang, T.; Zhu, J.; Hu, D.; Gao, W.; Gao, F.; Cao, J. Early Seizure Detection in Childhood Focal Epilepsy with Electroencephalogram Feature Fusion on Deep Autoencoder Learning and Channel Correlations. Multidimens. Syst. Signal Process. 2022, 33, 1273–1293. [Google Scholar] [CrossRef]
  77. Lai, E. Practical Digital Signal Processing; Newnes: Oxford, UK, 2003; ISBN 9780750657983. [Google Scholar] [CrossRef]
  78. Automated EEG-Based Diagnosis of Neurological Disorders: Inventing the Future of Neurology. Available online: https://www.routledge.com/Automated-EEG-Based-Diagnosis-of-Neurological-Disorders-Inventing-the-Future/Adeli-Ghosh-Dastidar/p/book/9781138118201 (accessed on 19 June 2023).
  79. Cao, L. Practical Method for Determining the Minimum Embedding Dimension of a Scalar Time Series. Phys. D Nonlinear Phenom. 1997, 110, 43–50. [Google Scholar] [CrossRef]
  80. Cignetti, F.; Decker, L.M.; Stergiou, N. Sensitivity of the Wolf’s and Rosenstein’s Algorithms to Evaluate Local Dynamic Stability from Small Gait Data Sets. Ann. Biomed. Eng. 2012, 40, 1122–1130. [Google Scholar] [CrossRef]
  81. Liebovitch, L.S.; Toth, T. A Fast Algorithm to Determine Fractal Dimensions by Box Counting. Phys. Lett. A 1989, 141, 386–390. [Google Scholar] [CrossRef]
  82. Sevcik, C. A Procedure to Estimate the Fractal Dimension of Waveforms. arXiv 2010, arXiv:1003.5266. [Google Scholar]
  83. Huang, N.E. Review of Empirical Mode Decomposition. In Wavelet Applications VIII; SPIE: Bellingham, WA, USA, 2001; Volume 4391, pp. 71–80. [Google Scholar]
  84. Mean-Optimized Mode Decomposition: An Improved EMD Approach for Non-Stationary Signal Processing. Available online: https://www.sciencedirect.com/science/article/pii/S0019057820302573 (accessed on 19 June 2023).
  85. Karabiber Cura, O.; Kocaaslan Atli, S.; Türe, H.S.; Akan, A. Epileptic Seizure Classifications Using Empirical Mode Decomposition and Its Derivative. BioMed Eng. Online 2020, 19, 10. [Google Scholar] [CrossRef] [Green Version]
  86. Lei, Y.; Lin, J.; He, Z.; Zuo, M.J. A Review on Empirical Mode Decomposition in Fault Diagnosis of Rotating Machinery. Mech. Syst. Signal Process. 2013, 35, 108–126. [Google Scholar] [CrossRef]
  87. Sha’abani, M.N.A.H.; Fuad, N.; Jamal, N.; Ismail, M.F. KNN and SVM Classification for EEG: A Review. In ECCE2019, Proceedings of the 5th International Conference on Electrical, Control & Computer Engineering, Kuantan, Pahang, Malaysia, 29 July 2019; Kasruddin Nasir, A.N., Ahmad, M.A., Najib, M.S., Abdul Wahab, Y., Othman, N.A., Abd Ghani, N.M., Irawan, A., Khatun, S., Raja Ismail, R.M.T., Saari, M.M., et al., Eds.; Springer: Singapore, 2020; pp. 555–565. [Google Scholar]
  88. Hosseini, M.-P.; Hosseini, A.; Ahi, K. A Review on Machine Learning for EEG Signal Processing in Bioengineering. IEEE Rev. Biomed. Eng. 2021, 14, 204–218. [Google Scholar] [CrossRef]
  89. Unnisa, Z.; Zia, S.; Butt, U.M.; Letchmunan, S.; Ilyas, S. Ensemble Usage for Classification of EEG Signals A Review with Comparison. In Proceedings of the Augmented Cognition. Theoretical and Technological Approaches; Schmorrow, D.D., Fidopiastis, C.M., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 189–208. [Google Scholar]
  90. Si, Y. Machine Learning Applications for Electroencephalograph Signals in Epilepsy: A Quick Review. Acta Epileptol. 2020, 2, 5. [Google Scholar] [CrossRef]
  91. Cho, G.; Yim, J.; Choi, Y.; Ko, J.; Lee, S.-H. Review of Machine Learning Algorithms for Diagnosing Mental Illness. Psychiatry Investig. 2019, 16, 262–269. [Google Scholar] [CrossRef] [Green Version]
  92. El Morr, C.; Jammal, M.; Ali-Hassan, H.; El-Hallak, W. Support Vector Machine. In Machine Learning for Practical Decision Making: A Multidisciplinary Perspective with Applications from Healthcare, Engineering and Business Analytics; El Morr, C., Jammal, M., Ali-Hassan, H., EI-Hallak, W., Eds.; International Series in Operations Research & Management Science; Springer International Publishing: Cham, Switzerland, 2022; pp. 385–411. ISBN 978-3-031-16990-8. [Google Scholar]
  93. Ghosh, S.; Dasgupta, A.; Swetapadma, A. A Study on Support Vector Machine Based Linear and Non-Linear Pattern Classification. In Proceedings of the 2019 International Conference on Intelligent Sustainable Systems (ICISS), Palladam, Tamilnadu, India, 21–22 February 2019; pp. 24–28. [Google Scholar]
  94. Kuyoro, A.O.; Alimi, S.; Awodele, O. Comparative Analysis of the Performance of Various Support Vector Machine Kernels. In Proceedings of the 2022 5th Information Technology for Education and Development (ITED), Changsha, China, 4–6 November 2022; pp. 1–7. [Google Scholar]
  95. Jayasumana, S.; Hartley, R.; Salzmann, M.; Li, H.; Harandi, M. Kernel Methods on Riemannian Manifolds with Gaussian RBF Kernels. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 2464–2477. [Google Scholar] [CrossRef] [Green Version]
  96. Han, S.; Qubo, C.; Meng, H. Parameter Selection in SVM with RBF Kernel Function. In Proceedings of the World Automation Congress 2012, Puerto Vallarta, Mexico, 24–28 June 2012; pp. 1–4. [Google Scholar]
  97. Li, X.; Chen, X.; Yan, Y.; Wei, W.; Wang, Z.J. Classification of EEG Signals Using a Multiple Kernel Learning Support Vector Machine. Sensors 2014, 14, 12784–12802. [Google Scholar] [CrossRef] [Green Version]
  98. Zhiwei, L.; Minfen, S. Classification of Mental Task EEG Signals Using Wavelet Packet Entropy and SVM. In Proceedings of the 2007 8th International Conference on Electronic Measurement and Instruments, Xi’an, China, 16–18 August 2007; pp. 3-906–3-909. [Google Scholar]
  99. Fu, K.; Qu, J.; Chai, Y.; Dong, Y. Classification of Seizure Based on the Time-Frequency Image of EEG Signals Using HHT and SVM. Biomed. Signal Process. Control 2014, 13, 15–22. [Google Scholar] [CrossRef]
  100. Nandy, A.; Alahe, M.A.; Nasim Uddin, S.M.; Alam, S.; Nahid, A.-A.; Awal, M.A. Feature Extraction and Classification of EEG Signals for Seizure Detection. In Proceedings of the 2019 International Conference on Robotics, Electrical and Signal Processing Techniques (ICREST), Dhaka, Bangladesh, 10–12 January 2019; pp. 480–485. [Google Scholar]
  101. Shrestha, A.; Mahmood, A. Review of Deep Learning Algorithms and Architectures. IEEE Access 2019, 7, 53040–53065. [Google Scholar] [CrossRef]
  102. Shoeibi, A.; Khodatars, M.; Ghassemi, N.; Jafari, M.; Moridian, P.; Alizadehsani, R.; Panahiazar, M.; Khozeimeh, F.; Zare, A.; Hosseini-Nejad, H.; et al. Epileptic Seizures Detection Using Deep Learning Techniques: A Review. Int. J. Environ. Res. Public Health 2021, 18, 5780. [Google Scholar] [CrossRef] [PubMed]
  103. Roy, Y.; Banville, H.; Albuquerque, I.; Gramfort, A.; Falk, T.H.; Faubert, J. Deep Learning-Based Electroencephalography Analysis: A Systematic Review. J. Neural Eng. 2019, 16, 051001. [Google Scholar] [CrossRef] [PubMed]
  104. Merlin Praveena, D.; Angelin Sarah, D.; Thomas George, S. Deep Learning Techniques for EEG Signal Applications—A Review. IETE J. Res. 2022, 68, 3030–3037. [Google Scholar] [CrossRef]
  105. Antoniades, A.; Spyrou, L.; Took, C.C.; Sanei, S. Deep Learning for Epileptic Intracranial EEG Data. In Proceedings of the 2016 IEEE 26th International Workshop on Machine Learning for Signal Processing (MLSP), Vietri sul Mare, Italy, 13–16 September 2016; pp. 1–6. [Google Scholar]
  106. Otter, D.W.; Medina, J.R.; Kalita, J.K. A Survey of the Usages of Deep Learning for Natural Language Processing. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 604–624. [Google Scholar] [CrossRef] [Green Version]
  107. Miotto, R.; Wang, F.; Wang, S.; Jiang, X.; Dudley, J.T. Deep Learning for Healthcare: Review, Opportunities and Challenges. Brief. Bioinform. 2018, 19, 1236–1246. [Google Scholar] [CrossRef]
  108. Huang, G.; Bai, Y.; Liu, L.; Wang, Y.; Yu, B.; Ding, Y.; Xie, Y. ALCOP: Automatic Load-Compute Pipelining in Deep Learning Compiler for AI-GPUs. arXiv 2023. [Google Scholar] [CrossRef]
  109. Xu, Y.; Yang, J.; Sawan, M. Multichannel Synthetic Preictal EEG Signals to Enhance the Prediction of Epileptic Seizures. IEEE Trans. Biomed. Eng. 2022, 69, 3516–3525. [Google Scholar] [CrossRef]
  110. Ajit, A.; Acharya, K.; Samanta, A. A Review of Convolutional Neural Networks. In Proceedings of the 2020 International Conference on Emerging Trends in Information Technology and Engineering (ic-ETITE), Vellore, India, 24–25 February 2020; pp. 1–5. [Google Scholar]
  111. Dubey, A.K.; Jain, V. Comparative Study of Convolution Neural Network’s Relu and Leaky-Relu Activation Functions. In Applications of Computing, Automation and Wireless Systems in Electrical Engineering; Mishra, S., Sood, Y.R., Tomar, A., Eds.; Springer: Singapore, 2019; pp. 873–880. [Google Scholar]
  112. Mehedi Shamrat, F.M.J.; Jubair, M.A.; Billah, M.M.; Chakraborty, S.; Alauddin, M.; Ranjan, R. A Deep Learning Approach for Face Detection Using Max Pooling. In Proceedings of the 2021 5th International Conference on Trends in Electronics and Informatics (ICOEI), Tirunelveli, India, 3–5 June 2021; pp. 760–764. [Google Scholar]
  113. Netrapalli, P. Stochastic Gradient Descent and Its Variants in Machine Learning. J. Indian Inst. Sci. 2019, 99, 201–213. [Google Scholar] [CrossRef]
  114. Hartmann, M.; Koren, J.; Baumgartner, C.; Duun-Henriksen, J.; Gritsch, G.; Kluge, T.; Perko, H.; Fürbass, F. Seizure Detection with Deep Neural Networks for Review of Two-Channel Electroencephalogram. Epilepsia 2022. [Google Scholar] [CrossRef]
  115. Zhang, Y.; Guo, Y.; Yang, P.; Chen, W.; Lo, B. Epilepsy Seizure Prediction on EEG Using Common Spatial Pattern and Convolutional Neural Network. IEEE J. Biomed. Health Inform. 2020, 24, 465–474. [Google Scholar] [CrossRef]
  116. Hu, X.; Yuan, S.; Xu, F.; Leng, Y.; Yuan, K.; Yuan, Q. Scalp EEG Classification Using Deep Bi-LSTM Network for Seizure Detection. Comput. Biol. Med. 2020, 124, 103919. [Google Scholar] [CrossRef]
  117. Yu, Y.; Si, X.; Hu, C.; Zhang, J. A Review of Recurrent Neural Networks: LSTM Cells and Network Architectures. Neural Comput. 2019, 31, 1235–1270. [Google Scholar] [CrossRef]
  118. Xu, G.; Ren, T.; Chen, Y.; Che, W. A One-Dimensional CNN-LSTM Model for Epileptic Seizure Recognition Using EEG Signal Analysis. Front. Neurosci. 2020, 14, 578126. [Google Scholar] [CrossRef]
  119. Abdelhameed, A.; Bayoumi, M. A Deep Learning Approach for Automatic Seizure Detection in Children with Epilepsy. Front. Comput. Neurosci. 2021, 15, 650050. [Google Scholar] [CrossRef]
  120. Pisano, F.; Sias, G.; Fanni, A.; Cannas, B.; Dourado, A.; Pisano, B.; Teixeira, C.A. Convolutional Neural Network for Seizure Detection of Nocturnal Frontal Lobe Epilepsy. Complexity 2020, 2020, e4825767. [Google Scholar] [CrossRef]
  121. Vanabelle, P.; De Handschutter, P.; El Tahry, R.; Benjelloun, M.; Boukhebouze, M. Epileptic Seizure Detection Using EEG Signals and Extreme Gradient Boosting. J. Biomed. Res. 2020, 34, 228–239. [Google Scholar] [CrossRef]
  122. Olokodana, I.L.; Mohanty, S.P.; Kougianos, E.; Olokodana, O.O. Real-Time Automatic Seizure Detection Using Ordinary Kriging Method in an Edge-IoMT Computing Paradigm. SN Comput. Sci. 2020, 1, 258. [Google Scholar] [CrossRef]
  123. Alharthi, M.K.; Moria, K.M.; Alghazzawi, D.M.; Tayeb, H.O. Epileptic Disorder Detection of Seizures Using EEG Signals. Sensors 2022, 22, 6592. [Google Scholar] [CrossRef]
  124. Siddiqui, M.K.; Huang, X.; Morales-Menendez, R.; Hussain, N.; Khatoon, K. Machine Learning Based Novel Cost-Sensitive Seizure Detection Classifier for Imbalanced EEG Data Sets. Int. J. Interact. Des. Manuf. 2020, 14, 1491–1509. [Google Scholar] [CrossRef]
  125. Thuwajit, P.; Rangpong, P.; Sawangjai, P.; Autthasan, P.; Chaisaen, R.; Banluesombatkul, N.; Boonchit, P.; Tatsaringkansakul, N.; Sudhawiyangkul, T.; Wilaiprasitporn, T. EEGWaveNet: Multiscale CNN-Based Spatiotemporal Feature Extraction for EEG Seizure Detection. IEEE Trans. Ind. Inform. 2022, 18, 5547–5557. [Google Scholar] [CrossRef]
  126. Liu, G.; Xiao, R.; Xu, L.; Cai, J. Minireview of Epilepsy Detection Techniques Based on Electroencephalogram Signals. Front. Syst. Neurosci. 2021, 15, 685387. [Google Scholar] [CrossRef] [PubMed]
  127. Wei, Z.; Zou, J.; Zhang, J.; Xu, J. Automatic Epileptic EEG Detection Using Convolutional Neural Network with Improvements in Time-Domain. Biomed. Signal Process. Control 2019, 53, 101551. [Google Scholar] [CrossRef]
  128. Zeng, M.; Zhang, X.; Zhao, C.; Lu, X.; Meng, Q. GRP-DNet: A Gray Recurrence Plot-Based Densely Connected Convolutional Network for Classification of Epileptiform EEG. J. Neurosci. Methods 2021, 347, 108953. [Google Scholar] [CrossRef]
  129. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. Int. J. Comput. Vis. 2020, 128, 336–359. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Diagram of the conducted experiments within the research.
Figure 2. An iEEG signal recorded during an epileptic seizure (S) and during a non-seizure (F).
Figure 3. An iEEG signal and the result of signal filtering in the alpha, beta, gamma, theta, and delta bands.
Figure 4. An iEEG signal and its wavelet decomposition into four details and an approximation.
Figure 5. Comparison of the averaged spectra for iEEG signals recorded during seizures (S) and during the seizure-free period (F).
Figure 6. Comparison of the averaged autocorrelations for iEEG signals recorded during seizures (S) and during the seizure-free period (F).
Figure 7. A phase trajectory for a fragment of an iEEG signal recorded during an epileptic seizure.
Figure 8. A phase trajectory for a fragment of an iEEG signal recorded during a non-seizure period.
Figure 9. An iEEG signal recorded during a seizure decomposed into IMFs and a residual.
Figure 10. Results of the Grad-CAM algorithm applied to different fragments of EEG signals containing epileptic seizures.
Table 1. The compilation of features, their labels, and the calculation method.
Features | Description and Calculation Parameters
energy (signal), energy (delta), energy (theta), energy (alpha), energy (beta), energy (gamma) | Energy was calculated according to Equation (1) for the EEG signal filtered with a 4th-order Butterworth bandpass filter in the delta, theta, alpha, beta, and gamma bands.
var (cd1), var (cd2), var (cd3), var (cd4) | Variance was calculated according to Equation (2) for the successive details (cd1, cd2, cd3, cd4) of the wavelet transform obtained using the Mallat pyramid. The db5 wavelet was used, and the decomposition was performed at 4 levels.
skewness (cd1), skewness (cd2), skewness (cd3), skewness (cd4) | Skewness was calculated according to Equation (3) for the successive details (cd1, cd2, cd3, cd4) of the wavelet transform obtained using the Mallat pyramid. The db5 wavelet was used, and the decomposition was performed at 4 levels.
kurtosis (cd1), kurtosis (cd2), kurtosis (cd3), kurtosis (cd4) | Kurtosis was calculated according to Equation (4) for the successive details (cd1, cd2, cd3, cd4) of the wavelet transform obtained using the Mallat pyramid. The db5 wavelet was used, and the decomposition was performed at 4 levels.
entropy (signal), entropy (cd1), entropy (cd2), entropy (cd3), entropy (cd4) | Entropy was calculated according to Equation (5) for the signal and the successive details (cd1, cd2, cd3, cd4) of the wavelet transform obtained using the Mallat pyramid. The db5 wavelet was used, and the decomposition was performed at 4 levels.
lag (signal), dim (signal) | The lag and dim parameters were calculated for the EEG signal. These are the fundamental parameters required to construct the attractor. Autocorrelation was used to calculate the lag parameter, and the Cao method to calculate the dim parameter.
lyapExp (signal) | The largest Lyapunov exponent was calculated for the EEG signal. The lag and dim parameters were used to construct the attractor.
corDim (signal) | The correlation dimension of the attractor was calculated in the attractor space constructed using the lag and dim parameters for the EEG signal. Equation (11) was used for the calculations.
fractalDimension (signal) | The fractal dimension was calculated for the EEG signal. Equation (14) was used for the calculations.
var (imf1), var (imf2), var (imf3) | The variance of the successive components imf1, imf2, imf3 of the EMD decomposition was calculated according to Equation (2). The EMD decomposition was performed using the interpolation method for envelope construction, which utilizes piecewise-cubic Hermite interpolating polynomials.
skewness (imf1), skewness (imf2), skewness (imf3) | Skewness for the successive components imf1, imf2, imf3 of the EMD decomposition was calculated according to Equation (3).
kurtosis (imf1), kurtosis (imf2), kurtosis (imf3) | Kurtosis for the successive components imf1, imf2, imf3 of the EMD decomposition was calculated according to Equation (4).
entropy (imf1), entropy (imf2), entropy (imf3) | Entropy for the successive components imf1, imf2, imf3 of the EMD decomposition was calculated according to Equation (5).
spectrum (signal) | The spectrum of the EEG signal was calculated using the discrete Fourier transform. The spectral resolution was 0.5 Hz. The amplitudes of the spectral peaks in the range of 0–86 Hz were selected.
autocorrelation (signal) | The autocorrelation of the signal was calculated using Equation (7). Coefficients corresponding to signal shifts in the range of 1–348 samples were selected.
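To make the feature definitions in Table 1 more tangible, the sketch below shows how the band-energy and wavelet-detail statistics could be computed with SciPy and PyWavelets. The 4th-order Butterworth filter, db5 wavelet, and four decomposition levels follow the table; the sampling rate, the band limits, and the libraries themselves are assumptions.

```python
# Hypothetical feature-extraction sketch for one iEEG epoch.
import numpy as np
import pywt
from scipy.signal import butter, sosfiltfilt
from scipy.stats import skew, kurtosis

FS = 173.61  # sampling rate in Hz (assumed; substitute the dataset's actual value)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}  # commonly used limits (assumed)

def band_energies(x, fs=FS):
    """Energy of the signal filtered with a 4th-order Butterworth band-pass filter."""
    out = {"signal": float(np.sum(x ** 2))}
    for name, (lo, hi) in BANDS.items():
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        out[name] = float(np.sum(sosfiltfilt(sos, x) ** 2))
    return out

def wavelet_detail_stats(x, wavelet="db5", level=4):
    """Variance, skewness, and kurtosis of the wavelet details cd1..cd4."""
    coeffs = pywt.wavedec(x, wavelet, level=level)   # [cA4, cD4, cD3, cD2, cD1]
    details = coeffs[:0:-1]                          # reorder to cd1, cd2, cd3, cd4
    return {f"cd{i + 1}": (np.var(d), skew(d), kurtosis(d))
            for i, d in enumerate(details)}

x = np.random.default_rng(0).normal(size=4096)       # stand-in iEEG epoch
print(band_energies(x)["beta"], wavelet_detail_stats(x)["cd1"][0])
```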
Table 2. Structure of the CNN used.
Layer | Description and Parameters
sequenceInputLayer | Sequence input with 1 dimension
convolution1dLayer | 1-D convolution, 32 filters of width 3, stride 1, padding "causal"
reluLayer | ReLU activation function layer
layerNormalizationLayer | Layer normalization
convolution1dLayer | 1-D convolution, 64 filters of width 3, stride 1, padding "causal"
reluLayer | ReLU activation function layer
layerNormalizationLayer | Layer normalization
convolution1dLayer | 1-D convolution, 128 filters of width 3, stride 1, padding "causal"
reluLayer | ReLU activation function layer
layerNormalizationLayer | Layer normalization
globalAveragePooling1dLayer | 1-D global average pooling
fullyConnectedLayer (numClasses) | Fully connected layer
softmaxLayer | Softmax layer
classificationLayer | Classification layer
Table 3. Structure of the LSTM network used.
Layer | Description and Parameters
Sequence Input | Sequence input with 1 dimension
LSTM | LSTM with 20 hidden units
Fully Connected | Fully connected layer with 2 outputs
Softmax | Softmax layer
Classification Output | crossentropyex
Table 4. Interpretation of a confusion matrix.
 | Predicted No Seizure | Predicted Seizure
Actual No Seizure | True Negative | False Positive
Actual Seizure | False Negative | True Positive
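As a worked example of this layout, the counts reported for the CNN in Table 8 (TN = 108, FP = 2, FN = 0, TP = 110) reproduce the measures listed in Table 7:

```python
# Deriving the four reported measures from the confusion-matrix counts of Table 8.
tn, fp, fn, tp = 108, 2, 0, 110
accuracy    = (tp + tn) / (tp + tn + fp + fn)   # 218/220 ~ 0.99
precision   = tp / (tp + fp)                    # 110/112 ~ 0.98
sensitivity = tp / (tp + fn)                    # 110/110 = 1.00
specificity = tn / (tn + fp)                    # 108/110 ~ 0.98
print(round(accuracy, 2), round(precision, 2), sensitivity, round(specificity, 2))
```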
Table 5. The results of classification quality for the test data calculated for each feature individually.
Feature | Accuracy | Precision | Sensitivity | Specificity
energy (signal) | 0.92 | 0.87 | 0.99 | 0.85
energy (delta) | 0.85 | 0.82 | 0.89 | 0.80
energy (theta) | 0.92 | 0.92 | 0.92 | 0.92
energy (alpha) | 0.92 | 0.91 | 0.93 | 0.91
energy (beta) | 0.95 | 0.95 | 0.96 | 0.95
energy (gamma) | 0.89 | 0.89 | 0.88 | 0.89
var (cd1) | 0.91 | 0.92 | 0.90 | 0.92
var (cd2) | 0.95 | 0.91 | 0.99 | 0.90
var (cd3) | 0.96 | 0.95 | 0.97 | 0.95
var (cd4) | 0.92 | 0.92 | 0.93 | 0.92
skewness (cd1) | 0.50 | 0.50 | 0.58 | 0.43
skewness (cd2) | 0.45 | 0.46 | 0.53 | 0.37
skewness (cd3) | 0.58 | 0.60 | 0.51 | 0.65
skewness (cd4) | 0.51 | 0.51 | 0.38 | 0.64
kurtosis (cd1) | 0.73 | 0.70 | 0.82 | 0.65
kurtosis (cd2) | 0.58 | 0.56 | 0.69 | 0.46
kurtosis (cd3) | 0.53 | 0.52 | 0.66 | 0.39
kurtosis (cd4) | 0.54 | 0.55 | 0.45 | 0.63
entropy (cd1) | 0.75 | 0.76 | 0.71 | 0.78
entropy (cd2) | 0.81 | 0.81 | 0.83 | 0.80
entropy (cd3) | 0.74 | 0.68 | 0.90 | 0.58
entropy (cd4) | 0.67 | 0.62 | 0.86 | 0.47
entropy (signal) | 0.64 | 0.61 | 0.77 | 0.51
lyapExp (signal) | 0.49 | 0.49 | 0.56 | 0.42
lag (signal) | 0.65 | 0.67 | 0.60 | 0.70
dim (signal) | 0.51 | 0.50 | 0.99 | 0.03
corDim (signal) | 0.90 | 0.86 | 0.96 | 0.85
fractalDimension (signal) | 0.65 | 0.65 | 0.65 | 0.65
var (imf1) | 0.94 | 0.92 | 0.96 | 0.92
var (imf2) | 0.90 | 0.89 | 0.91 | 0.89
var (imf3) | 0.79 | 0.80 | 0.78 | 0.80
skewness (imf1) | 0.55 | 0.55 | 0.50 | 0.59
skewness (imf2) | 0.56 | 0.58 | 0.44 | 0.68
skewness (imf3) | 0.49 | 0.49 | 0.60 | 0.38
kurtosis (imf1) | 0.69 | 0.71 | 0.64 | 0.75
kurtosis (imf2) | 0.62 | 0.63 | 0.60 | 0.65
kurtosis (imf3) | 0.54 | 0.54 | 0.50 | 0.58
entropy (imf1) | 0.89 | 0.94 | 0.83 | 0.95
entropy (imf2) | 0.86 | 0.93 | 0.79 | 0.94
entropy (imf3) | 0.75 | 0.79 | 0.70 | 0.81
spectrum (signal) | 0.97 | 0.96 | 0.98 | 0.96
autocorrelation (signal) | 0.97 | 0.96 | 0.98 | 0.96
Table 6. The results of classification quality for grouped features.
Feature group | Accuracy | Precision | Sensitivity | Specificity
energy (signal), energy (delta), energy (theta), energy (alpha), energy (beta) | 0.97 | 0.96 | 0.99 | 0.95
var (cd1), var (cd2), var (cd3), var (cd4) | 0.97 | 0.96 | 0.99 | 0.95
skewness (cd1), skewness (cd2), skewness (cd3), skewness (cd4) | 0.54 | 0.53 | 0.59 | 0.48
kurtosis (cd1), kurtosis (cd2), kurtosis (cd3), kurtosis (cd4) | 0.77 | 0.72 | 0.89 | 0.65
entropy (signal), entropy (cd1), entropy (cd2), entropy (cd3), entropy (cd4) | 0.83 | 0.88 | 0.77 | 0.89
lyapExp, lag, dim, corDim, fractalDimension | 0.95 | 0.95 | 0.95 | 0.95
var (imf1), var (imf2), var (imf3) | 0.95 | 0.93 | 0.96 | 0.93
skewness (imf1), skewness (imf2), skewness (imf3) | 0.56 | 0.57 | 0.51 | 0.62
kurtosis (imf1), kurtosis (imf2), kurtosis (imf3) | 0.75 | 0.81 | 0.65 | 0.85
entropy (imf1), entropy (imf2), entropy (imf3) | 0.87 | 0.93 | 0.80 | 0.94
spectrum | 0.97 | 0.96 | 0.98 | 0.96
autocorrelation | 0.97 | 0.96 | 0.98 | 0.96
Table 7. The results of classification quality for CNN and LSTM.
Network | Accuracy | Precision | Sensitivity | Specificity
CNN | 0.99 | 0.98 | 1.00 | 0.98
LSTM | 0.98 | 0.96 | 1.00 | 0.96
Table 8. Confusion matrix for CNN.
 | Predicted No Seizure | Predicted Seizure
Actual No Seizure | 108 | 2
Actual Seizure | 0 | 110
Table 9. Confusion matrix for LSTM network.
 | Predicted No Seizure | Predicted Seizure
Actual No Seizure | 106 | 4
Actual Seizure | 0 | 110