Proceeding Paper

Comparison of Different Machine Learning Algorithms to Classify Epilepsy Seizure from EEG Signals †

Department of Information Technology, Vishwakarma Institute of Technology, Pune 411037, Maharashtra, India
* Author to whom correspondence should be addressed.
Presented at the International Conference on Recent Advances in Science and Engineering, Dubai, United Arab Emirates, 4–5 October 2023.
Eng. Proc. 2023, 59(1), 166; https://doi.org/10.3390/engproc2023059166
Published: 16 January 2024
(This article belongs to the Proceedings of Eng. Proc., 2023, RAiSE-2023)

Abstract

Recurrent seizures are a symptom of epilepsy, a disease of the central nervous system. These seizures typically last from a few seconds to a few minutes. There are very few ways to record seizures, and one of them is electroencephalography (EEG). EEG systems mainly consist of scalp electrodes that record the brain's electrical activity. These EEG data are often complex signals containing noise and artifacts. Accurate classification of epileptic seizures is a major challenge, as manual seizure identification is a laborious and challenging endeavor for neurologists. An automated method for seizure detection and categorization is therefore required. In this paper, we propose a machine learning model that predicts the behavior of these signals and classifies seizures. The Epileptic Seizure Recognition Data Set from the UCI Machine Learning Repository was used in this work. Several classifiers, such as XGBoost, Extra Tree Classifier, and Random Forest, were evaluated using measures such as the F1 score, recall, and precision. The results indicate that Random Forest produced the best F1 score of 0.943, with XGBoost slightly lower at 0.933. Moreover, Random Forest achieved the highest accuracy of 0.977.

1. Introduction

In recent years, the brain–computer interface, together with intelligent signal decomposition, has played a major role in fields such as medicine and defense services [1]. A skilled approach to feature extraction from EEG signals is desirable, as it makes building brain–computer interfaces easier. Communication between neurons in the brain takes place through electrical signals, and the brain activity of living beings is based on these signals. A failure in this communication can cause a brain seizure [2]. Electroencephalogram (EEG) data reveal the brain's electrical activity; epilepsy, for instance, can be identified by examining EEG waves [3]. Seizures are characterized by abnormal brain activity brought on by an epileptic disturbance that interferes with normal brain or body function and affects the central nervous system. Epilepsy is a neurological condition characterized by repeated seizures: short bursts (typically lasting less than a few minutes) of uncontrollable movement that can affect one or all body parts and are occasionally accompanied by loss of consciousness and of control over bowel or bladder function. Seizures result from excessive electrical discharges in a group of brain cells, and such discharges can occur in many areas of the brain. The muscle jerks and the duration of episodes differ from seizure to seizure, and the frequency of seizures also varies. Epilepsy is caused by a progressive neurobiological process known as "epileptogenesis". Its signs include disorientation, strange behavior, and loss of awareness [4]. These symptoms can lead to injury from falls or tongue-biting.
A WHO survey notes that almost 7 million people worldwide have seizures, making epilepsy one of the most common neurological diseases [5]. It can be difficult to predict when someone will experience a seizure; an epileptic seizure is therefore a dangerous disorder, as its onset cannot be anticipated. For this reason, EEG signals play a major role in the detection of epileptic disorders. Electroencephalography (EEG) is useful for diagnosing and conducting a thorough investigation of the brain during an epileptic seizure event. The study of epilepsy recovery strategies involves utilizing EEG, a technique that examines non-Gaussian and non-stationary signals representing electrical activity in the brain. These signals are employed to detect various types of brain disorders [6].
In [7], the authors proposed a fusion approach using deep learning: a multi-modal model for each data stream, with ensemble fusion applied to the outputs. EEG waves are recorded using metal electrodes attached to the scalp; several channels collect brain activity through these electrodes. The physical placement of the electrodes follows the International 10–20 system, as shown in Figure 1.
Although many models exist in earlier papers, the detection of seizures can still be improved. In this research, we build on previous work and explore the viability of employing machine learning methods for automatically classifying seizures. The Epileptic Seizure Recognition dataset, originating from the UCI Machine Learning Repository, is used in this model. Several classification models, such as XGBoost, Extra Tree Classifier, and Random Forest, were evaluated using measures such as the F1 score, recall, and precision. The F1 score was chosen as the main evaluation factor because the dataset is imbalanced. The detailed methodology is explained further in the paper.

2. Materials and Method

2.1. Materials

The points addressed in this section are the dataset used, data pre-processing techniques, and machine learning classification algorithms that were explored.
Before proceeding to further analysis and classification, one thing is crucial in any machine learning research: there is no substitute for good, clean data. This research focuses on detecting epileptic seizures from multichannel electroencephalogram signals. EEG signals often contain various noises and artifacts, and to detect the occurrence of seizures, the collected data need to be clean, as they reflect brain activity and important electrical impulses [8]. The EEG records continuous electrical impulses in the form of signals, including the abnormal electrical activity that leads to seizures. Therefore, the first step is always to find the correct dataset [9].
In this paper, we studied various epileptic seizure databases released by research institutes around the world. We included six datasets in our analysis, considering the different factors that our research primarily needs. Table 1 contains an overview of the publicly available datasets. Although multiple databases are widely available, we focused on a few parameters needed in our study, such as the sampling rate, one of the essential parameters for EEG signals. Because the human brain functions constantly, the generated voltages fluctuate continuously; EEG systems, on the other hand, produce data samples by taking discrete snapshots of this ongoing activity, much like a camera would. Different EEG systems have different sampling rates, indicating the number of samples they can capture per second. Like oscillations, the sampling rate is measured in samples per second, also called Hertz (Hz). For instance, EEG equipment with a 250 Hz sampling rate records 250 samples per second. Hence, the higher the sampling rate, the better the precision of the signals. The highest frequency in an EEG signal that may be studied is equal to half of the sampling rate.
For example, if your data were collected at 256 Hz, only frequencies up to 256/2 = 128 Hz should be considered. More conservative guidance suggests staying with frequencies up to roughly one-third of the sampling rate (for example, 256/3 ≈ 85.3 Hz). Since the brain typically generates lower frequencies (for example, delta between 1 and 4 Hz and gamma between 25 and 80 Hz), results can remain clear even with EEG sampling rates as low as roughly 100 Hz.
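The two rules of thumb above (the Nyquist limit and the more conservative one-third rule) can be captured in a tiny helper; the function name is ours, not from the paper:

```python
def usable_band(sampling_rate_hz):
    """Return (Nyquist limit, conservative one-third limit) in Hz
    for a given EEG sampling rate."""
    nyquist = sampling_rate_hz / 2.0
    conservative = sampling_rate_hz / 3.0
    return nyquist, conservative

nyq, safe = usable_band(256)
print(nyq, round(safe, 1))  # 128.0 85.3
```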
This model utilized the Epileptic Seizure Recognition Data Set sourced from the UCI Machine Learning Repository. In the original dataset, there were 100 files per folder, each representing a distinct subject. Each file contains a recording of brain activity lasting 23.6 s, sampled into a time series of 4097 data points, each indicating the EEG value at a particular moment in time. The data therefore cover a total of 500 individuals, each with 4097 data points collected over 23.6 s. In the UCI version, each recording's 4097 data points were divided into 23 chunks of 178 data points, each chunk corresponding to 1 s of EEG recording, with the chunks shuffled. Therefore, 23 × 500 = 11,500 rows of informational elements make up the data, with 178 data points per row and the last column serving as the output label. The dataset comprises 2300 instances of EEG signals with seizures and 9200 instances without seizures. Due to this imbalanced nature of the dataset, the F1 score, recall, and precision were chosen as the performance metrics. Table 2 gives a description of the dataset.
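In the UCI version of this dataset, the output column takes the values 1–5, and only class 1 marks a seizure; the binary framing used here (2300 seizure vs. 9200 non-seizure rows) is consistent with collapsing classes 2–5 into "non-seizure". A minimal sketch of that relabeling step:

```python
def binarize_labels(labels):
    """Map the UCI dataset's five output classes to binary:
    class 1 (seizure) -> 1, classes 2-5 (non-seizure) -> 0."""
    return [1 if y == 1 else 0 for y in labels]

# Toy label column; the real dataset has 11,500 rows.
print(binarize_labels([1, 2, 3, 4, 5, 1]))  # [1, 0, 0, 0, 0, 1]
```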

2.2. Pre-Processing

The proposed model’s progression is illustrated in Figure 2. The initial stage involves data pre-processing, wherein standardization is carried out. Subsequently, classification algorithms are employed to distinguish between epileptic seizure and non-seizure conditions.
To avoid overfitting, any machine learning model must go through a crucial step in which the data are split into two subsets: one for training and one for testing [10]. Hold-out validation is the technique in which the data are divided into such a train–test split. After trying different combinations of train and test sizes, this system exhibited improved performance with a 70–30 split, meaning 70% of the data were allocated for training and 30% for testing.
Variables measured at different scales do not contribute equally to the model fitting and learning functions and may even introduce bias. The idea is therefore to standardize each feature individually to zero mean and unit standard deviation (mean = 0 and standard deviation = 1).
Standardization: $z = \frac{x - \mu}{\sigma}$
Mean: $\mu = \frac{1}{N} \sum_{i=1}^{N} x_i$
Standard deviation: $\sigma = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (x_i - \mu)^2}$
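The three formulas above can be sketched in plain Python (in practice a library routine such as scikit-learn's StandardScaler would be used, fitting μ and σ on the training split only to avoid leakage into the test set):

```python
import math

def standardize(column):
    """Z-score a feature column: subtract the mean, divide by the
    population standard deviation."""
    n = len(column)
    mu = sum(column) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in column) / n)
    return [(x - mu) / sigma for x in column]

z = standardize([2.0, 4.0, 6.0])
# mean is 4.0; the standardized values are symmetric around 0
```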
For classification, we have used algorithms such as k-Nearest Neighbor (KNN), Naive Bayes, Random Forest, Gradient Boost, Extreme Gradient Boost (XGB) and Extra Tree Classifier (ETC). For all these classifiers, a train-test split ratio of 70–30 was chosen.
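The 70–30 hold-out split described above can be illustrated as follows (a toy sketch with made-up data; in practice a library routine such as scikit-learn's train_test_split would be used):

```python
import random

def holdout_split(rows, labels, test_frac=0.30, seed=42):
    """Hold-out validation: shuffle indices once, then carve off
    the last test_frac of them as the test set."""
    idx = list(range(len(rows)))
    random.Random(seed).shuffle(idx)
    n_test = round(len(idx) * test_frac)
    train, test = idx[:-n_test], idx[-n_test:]
    return ([rows[i] for i in train], [labels[i] for i in train],
            [rows[i] for i in test], [labels[i] for i in test])

X = [[i] for i in range(10)]
y = list(range(10))
X_train, y_train, X_test, y_test = holdout_split(X, y)
print(len(X_train), len(X_test))  # 7 3
```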

2.3. Machine Learning Classification Models

2.3.1. K-Nearest Neighbor

KNN is a supervised machine learning algorithm capable of serving both classification and regression tasks. The outcome of KNN is a class membership: the class to which an object belongs is determined by a majority vote of its k nearest neighbors [11]. The positive integer k is typically small and usually odd. The KNN model was fitted to the training and validation sets with k = 3. The distance metric used was the Minkowski distance of order 2, with the remaining parameters left at their defaults. The Minkowski distance between two points $X = (x_1, x_2, \ldots, x_n)$ and $Y = (y_1, y_2, \ldots, y_n)$ is given by:
$D = \left( \sum_{i=1}^{n} |x_i - y_i|^p \right)^{1/p}$
As an order of p = 2 is used in the algorithm, the Minkowski distance reduces to the Euclidean distance.
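The neighbor-vote rule with a Minkowski metric can be sketched in a few lines of plain Python (a toy illustration with made-up points, not the scikit-learn implementation used in the study):

```python
from collections import Counter

def minkowski(x, y, p=2):
    """Minkowski distance of order p; p = 2 gives the Euclidean distance."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

def knn_predict(train_X, train_y, query, k=3, p=2):
    """Majority vote among the k nearest training points."""
    dists = sorted(zip((minkowski(x, query, p) for x in train_X), train_y))
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

X = [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]]
y = [0, 0, 0, 1, 1, 1]
print(knn_predict(X, y, [0.5, 0.5], k=3))  # 0
print(knn_predict(X, y, [5.5, 5.5], k=3))  # 1
```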

2.3.2. Naive Bayes

Naive Bayes relies on the principles of Bayes' theorem and hence utilizes probabilities for the dependent variable. It assumes that every independent variable makes an independent and equal contribution to the outcome [12]. The parameters of NB were kept at their defaults.
$P(A \mid B) = \frac{P(B \mid A) \, P(A)}{P(B)}$
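A small numeric illustration of the theorem (the likelihoods below are made up; only the 0.2 prior echoes the dataset's 20% seizure prevalence):

```python
def bayes_posterior(p_b_given_a, p_a, p_b):
    """Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

p_a = 0.2                          # prior P(seizure), dataset prevalence
p_b_given_a = 0.9                  # illustrative likelihood P(pattern | seizure)
p_b = 0.9 * 0.2 + 0.1 * 0.8        # total probability of observing the pattern
print(round(bayes_posterior(p_b_given_a, p_a, p_b), 3))  # 0.692
```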

2.3.3. Random Forest

Random Forest combines the results of decision trees to give better accuracy. Parallel trees are bagged (bootstrap-aggregated), making Random Forest a bagging ensemble learning algorithm with decision trees as base learners. It constructs and integrates various decision trees to improve accuracy. The names "random" and "forest" refer to the random selection of predictors and the usage of a variety of decision trees in prediction and decision-making. Through these features, Random Forest helps prevent overfitting and strengthens the model. The model uses 100 estimators with Gini impurity as the criterion [13].
$Gini\ Impurity = \sum_{i=1}^{C} f_i (1 - f_i)$
where $f_i$ is the relative frequency of label i at the node and C is the number of distinct labels.
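The impurity formula above is straightforward to compute directly (a minimal sketch, not scikit-learn's internal implementation):

```python
def gini_impurity(labels):
    """Gini impurity of a node: sum over classes of f_i * (1 - f_i)."""
    n = len(labels)
    impurity = 0.0
    for c in set(labels):
        f = labels.count(c) / n
        impurity += f * (1 - f)
    return impurity

print(gini_impurity([0, 0, 1, 1]))  # 0.5  (maximally mixed binary node)
print(gini_impurity([1, 1, 1, 1]))  # 0.0  (pure node)
```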

2.3.4. Gradient Boost

Gradient Boost is a widely used boosting algorithm that uses Classification and Regression Trees as base learners. It consists of N trees, of which the first is trained on the original data. The next tree is then given the residuals (errors) of the first tree along with the dataset, and this pattern continues until all N trees forming the ensemble are trained. The base learners (trees) are trained sequentially, that is, one after the other. The model uses 100 boosting rounds, and a learning rate of 1.0 was used to increase the learning capability of the model. These two hyperparameters strongly influence the performance of the gradient boost algorithm [14]. Other parameters, max depth and random state, were 3 and 69, respectively. This algorithm proved to be one of the best among all in predicting seizures. The model keeps updating the predictions of previous models in the next model. Assume a gradient boost model with M stages. At each stage m, with $1 \le m \le M$, there is an imperfect model $F_m$, and the algorithm adds a new estimator $h_m(x)$ such that
$F_{m+1}(x_i) = F_m(x_i) + h_m(x_i) = y_i$
In this way, the gradient boost model corrects the errors of its predecessor model in the successor.
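The residual-fitting loop can be illustrated with depth-1 stumps on one-dimensional toy data (a sketch only; the paper's model uses CART trees of max depth 3 over 100 rounds):

```python
def fit_stump(X, residuals):
    """Fit a depth-1 regression stump on 1-D inputs: pick the split that
    minimizes squared error, predicting the residual mean on each side."""
    best = None
    for s in sorted(set(X)):
        left = [r for x, r in zip(X, residuals) if x <= s]
        right = [r for x, r in zip(X, residuals) if x > s]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, s, lm, rm)
    _, s, lm, rm = best
    return lambda x: lm if x <= s else rm

def gradient_boost(X, y, n_trees=5, lr=1.0):
    """Each stage fits a stump to the current residuals y - F_m(x)."""
    pred = [sum(y) / len(y)] * len(X)        # F_0: the global mean
    for _ in range(n_trees):
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        stump = fit_stump(X, residuals)
        pred = [p + lr * stump(x) for p, x in zip(pred, X)]
    return pred

print(gradient_boost([1, 2, 3, 4], [1.0, 1.0, 3.0, 3.0]))  # [1.0, 1.0, 3.0, 3.0]
```

The first stump splits at x = 2 and corrects the residuals exactly; later stages see zero residuals and add nothing, which is the predecessor-correction behavior described above.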

2.3.5. Extreme Gradient Boost

Extreme Gradient Boost is an ensemble modelling technique of the boosting type. It is faster than gradient boosting and applies regularization techniques, namely L1 (Lasso) and L2 (Ridge) regularization. Weights are assigned to independent variables and passed to a decision tree; the weights of wrongly classified variables are then increased before being passed to the next decision tree, improving the accuracy of the model. It minimizes bias, and the penalization of trees is conducted very cleverly. It has gained much popularity in recent times, being the choice of many data scientists [15]. The algorithm uses Newton–Raphson updates in function space, unlike gradient boosting, which uses gradient descent in function space.
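The Newton-style update can be made concrete: in the standard XGBoost formulation, the optimal weight of a leaf is $w^* = -G/(H + \lambda)$, where G and H are the sums of the first and second derivatives of the loss over the examples in the leaf and λ is the L2 penalty. A sketch of that published formula (the gradient values below are illustrative, not from the paper):

```python
def optimal_leaf_weight(gradients, hessians, reg_lambda=1.0):
    """XGBoost-style second-order leaf weight: w* = -G / (H + lambda).
    The L2 term reg_lambda shrinks the weight toward zero."""
    G = sum(gradients)
    H = sum(hessians)
    return -G / (H + reg_lambda)

# For squared-error loss, g_i = pred_i - y_i and h_i = 1.
print(optimal_leaf_weight([-1.0, -1.0, -1.0], [1.0, 1.0, 1.0]))  # 0.75
```

Without regularization the leaf would predict the mean residual (1.0); the λ = 1 penalty shrinks it to 0.75, which is how XGBoost "penalizes trees".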

2.3.6. Extra Tree Classifier

The Extra Tree Classifier is quite similar to the Random Forest classifier, with the construction of the decision trees in the forest being the main difference. Multiple de-correlated decision trees are formed using calculations based on the Gini index or on information gain and entropy. This extremely randomized tree classifier is another example of an ensemble learning technique. Here, 100 estimators were used [16], with entropy as the criterion and max features set to 1.0. The minimum leaf samples were 3 and the minimum sample split was 20. This model does not use bootstrapping, which means the samples for the trees are drawn without replacement, unlike Random Forest, which uses bootstrapping. Another difference is that Extra Trees uses random splits at the nodes, whereas Random Forest uses the best splits.
$Entropy(S) = -\sum_{i=1}^{c} p_i \log_2 p_i$
In the above formula, c stands for the number of unique class labels and $p_i$ stands for the proportion of rows with output label i.
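The entropy criterion used by the Extra Tree Classifier can be computed directly (a minimal sketch of the formula, not the library's implementation):

```python
import math

def entropy(labels):
    """Shannon entropy of a node: -sum over classes of p_i * log2(p_i)."""
    n = len(labels)
    h = 0.0
    for c in set(labels):
        p = labels.count(c) / n
        h -= p * math.log2(p)
    return h

print(entropy([0, 0, 1, 1]))  # 1.0  (one full bit for a balanced binary node)
print(entropy([1, 1, 1, 1]))  # 0.0  (pure node carries no uncertainty)
```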

2.4. Performance Metrics

After analyzing the data, it was found that they are biased towards the baseline class: the prevalence of the seizure class is just 20% of the whole dataset, meaning the dataset is heavily imbalanced. Hence it would not be wise to use accuracy as a performance indicator. Therefore, the F1 score is used to evaluate the classification models [17]. The F1 score is the harmonic mean of precision and recall, providing a more balanced summary of the model's performance.
$F1\ Score = \frac{2 \times Precision \times Recall}{Precision + Recall}$
The F1 score in terms of components of confusion matrix can be given as follows:
$F1\ Score = \frac{2TP}{2TP + FP + FN}$
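Both forms of the F1 score can be checked against each other from raw confusion-matrix counts (the counts below are made up for illustration):

```python
def scores(tp, fp, fn, tn):
    """Precision, recall, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    # Equivalent closed form: f1 == 2*tp / (2*tp + fp + fn)
    return precision, recall, f1

p, r, f1 = scores(tp=80, fp=10, fn=20, tn=90)
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.889 0.8 0.842
```

Note that the true-negative count tn never enters the F1 score, which is exactly why F1 suits an imbalanced dataset where the majority (non-seizure) class would otherwise dominate accuracy.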

3. Results and Discussion

EEG signals are complex in nature and difficult to extract; they often contain a lot of noise, and the limited availability of EEG data makes them difficult for researchers to study. Numerous datasets exist in various forms, but extracting the EEG signals is often a complicated task. The training data were fed to several classifiers for the classification of seizures. Dealing with large amounts of data proved to be one of the major challenges [18].
To assess the overall performance of the classifiers, various classification models were evaluated on the UCI dataset. To validate the proposed method, 70% of the dataset was allocated for training, while the remaining 30% was utilized for testing. As the dataset was imbalanced, the F1 score, rather than accuracy, was calculated as the primary performance metric for all classifiers. Figure 3 shows that Random Forest (RF) and Extreme Gradient Boost (XGB) achieved the highest F1 scores for both training and testing, while the other classifiers, such as KNN, Naive Bayes, and Gradient Boost, achieved lower F1 scores. The accuracy, F1 score, precision, and recall of all classifiers are summarized in Table 3. For this binary classification, confusion matrices were plotted for visualization purposes.
Examining the classifier outputs by F1 score, the lowest value of 0.768 was attributed to the KNN model, whereas Gradient Boost reached a higher F1 score of 0.889. Random Forest outperformed all models with an F1 score of 0.943. The accuracy study of the specified models showed that Random Forest performed best, with a maximum accuracy of 0.977, while Extreme Gradient Boost and Extra Tree Classifier achieved slightly lower accuracies of 0.974 and 0.973, respectively. Other performance metrics, such as recall, precision, and specificity, are shown for all classifiers in Table 3.

4. Conclusions

EEG data are often complex signals containing noise and artifacts, and accurate classification of epileptic seizures is a major challenge, as manual seizure identification is a laborious and challenging endeavor for neurologists [19]. This work aimed to develop a potent machine learning model to classify epileptic seizure activity. The study utilized the Epileptic Seizure Recognition Data Set sourced from the UCI Machine Learning Repository. Standardization was applied during the data pre-processing phase.
Seizure and non-seizure activity were classified using several models, including Random Forest, K-Nearest Neighbor, Naive Bayes, Gradient Boost, Extra Tree Classifier, and Extreme Gradient Boost [20]. To validate these models, 70% of the dataset was employed for training, while the remaining 30% was utilized for testing. As the dataset was imbalanced, the F1 score, rather than accuracy, was calculated as the primary performance metric for all classifiers. The results show that Random Forest and Extreme Gradient Boost achieved the highest F1 scores of 0.943 and 0.933, respectively. Random Forest also gave the highest accuracy, 0.977. On average, the algorithms achieved an accuracy of about 96%.

Author Contributions

Conceptualization, V.L.; methodology, C.K. and V.L.; validation, M.K.; investigation, R.M.; resources, S.L.; data curation, S.L.; writing—original draft preparation, C.K. and R.M.; writing—review and editing, M.K. and C.K.; supervision, P.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All the data used are made available in the present work.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Subrahmanya, S.V.; Shetty, D.K.; Patil, V.; Hameed, B.M.; Paul, R.; Smriti, K.; Naik, N.; Somani, B.K. The role of Data Science in healthcare advancements: Applications, benefits, and future prospects. Ir. J. Med. Sci. (1971-) 2021, 191, 1473–1483. [Google Scholar] [CrossRef] [PubMed]
  2. Randhawa, P.; Shanthagiri, V.; Kumar, A. Recognition of violent activity response using machine learning methods with wearable sensors. J. Adv. Res. Dyn. Control Syst. 2019, 11, 592–601. [Google Scholar] [CrossRef]
  3. Rakhade, S.N.; Jensen, F.E. Epileptogenesis in the immature brain: Emerging mechanisms. Nat. Rev. Neurol. 2009, 5, 380. [Google Scholar] [CrossRef] [PubMed]
  4. Mardini, W.; Yassein, M.M.B.; Al-Rawashdeh, R.; Aljawarneh, S.; Khamayseh, Y.; Meqdadi, O. Enhanced Detection of Epileptic Seizure Using EEG Signals in Combination with Machine Learning Classifiers. IEEE Access 2020, 8, 24046–24055. [Google Scholar] [CrossRef]
  5. Almustafa, K.M. Classification of epileptic seizure dataset using different machine learning algorithms. Inform. Med. Unlocked 2020, 21, 100444. [Google Scholar] [CrossRef]
  6. Natu, M.; Bachute, M.; Gite, S.; Kotecha, K.; Vidyarthi, A. Review on Epileptic Seizure Prediction: Machine Learning and Deep Learning Approaches. Comput. Math. Methods Med. 2022, 2022, 7751263. [Google Scholar] [CrossRef] [PubMed]
  7. Moldovan, D. Crow Search Algorithm Based Ensemble of Machine Learning Classifiers for Epileptic Seizures Detection. In Proceedings of the 2020 International Conference on e-Health and Bioengineering (EHB), Iasi, Romania, 29–30 October 2020; pp. 1–4. [Google Scholar] [CrossRef]
  8. Masum, M.; Shahriar, H.; Haddad, H.M. Epileptic Seizure Detection for Imbalanced Datasets Using an Integrated Machine Learning Approach. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 5416–5419. [Google Scholar] [CrossRef]
  9. Zeljković, V.; Valev, V.; Tameze, C.; Bojic, M. Pre-Ictal phase detection algorithm based on one dimensional EEG signals and two dimensional formed images analysis. In Proceedings of the 2013 International Conference on High Performance Computing & Simulation (HPCS), Helsinki, Finland, 1–5 July 2013; pp. 607–614. [Google Scholar] [CrossRef]
  10. Ferrell, S.; Mathew, V.; Refford, M.; Tchiong, V.; Ahsan, T.; Obeid, I.; Picone, J. The Temple University Hospital EEG corpus. Electrode Location and labels. Inst. Signal Inf. Process. Rep. 2020, 1, 1–9. [Google Scholar]
  11. Guerrero, M.C.; Parada, J.S.; Espitia, H.E. Principal Components Analysis of EEG Signals for Epileptic Patient Identification. Computation 2021, 9, 133. [Google Scholar] [CrossRef]
  12. Vandecasteele, K.; De Cooman, T.; Chatzichristos, C.; Cleeren, E.; Swinnen, L.; Macea Ortiz, J.; Van Huffel, S.; Dümpelmann, M.; Schulze-Bonhage, A.; De Vos, M.; et al. The power of ECG in multimodal patient-specific seizure monitoring: Added value to an EEG-based detector using limited channels. Epilepsia 2021, 62, 2333–2343. [Google Scholar] [CrossRef] [PubMed]
  13. Rasheed, K.; Qayyum, A.; Qadir, J.; Sivathamboo, S.; Kwan, P.; Kuhlmann, L.; O’Brien, T.; Razi, A. Machine Learning for Predicting Epileptic Seizures Using EEG Signals: A Review. IEEE Rev. Biomed. Eng. 2020, 14, 139–155. [Google Scholar] [CrossRef] [PubMed]
  14. Moctezuma, L.A.; Molinas, M. EEG Channel-Selection Method for Epileptic-Seizure Classification Based on Multi-Objective Optimization. Front. Neurosci. 2020, 14, 593. [Google Scholar] [CrossRef] [PubMed]
  15. Siddiqui, M.K.; Morales-Menendez, R.; Huang, X.; Hussain, N. A review of epileptic seizure detection using machine learning classifiers. Brain Inform. 2020, 7, 5. [Google Scholar] [CrossRef] [PubMed]
  16. Amin, H.U.; Mumtaz, W.; Subhani, A.R.; Saad, M.N.; Malik, A.S. Classification of EEG Signals Based on Pattern Recognition Approach. Front. Comput. Neurosci. 2017, 11, 103. [Google Scholar] [CrossRef] [PubMed]
  17. Kołodziej, M.; Majkowski, A.; Rak, R.J.; Świderski, B.; Rysz, A. System for automatic heart rate calculation in epileptic seizures. Australas. Phys. Eng. Sci. Med. 2017, 40, 555–564. [Google Scholar] [CrossRef] [PubMed]
  18. Lasefr, Z.; Ayyalasomayajula, S.S.V.N.R.; Elleithy, K. Epilepsy seizure detection using EEG signals. In Proceedings of the 2017 IEEE 8th Annual Ubiquitous Computing, Electronics and Mobile Communication Conference (UEMCON), New York, NY, USA, 19–21 October 2017; pp. 162–167. [Google Scholar] [CrossRef]
  19. Wang, Y.; Li, Z.; Feng, L.; Zheng, C.; Zhang, W. Automatic Detection of Epilepsy and Seizure Using Multiclass Sparse Extreme Learning Machine Classification. Comput. Math. Methods Med. 2017, 2017, 6849360. [Google Scholar] [CrossRef] [PubMed]
  20. Ubeyli, E.D. Statistics over features: EEG signals analysis. Comput. Biol. Med. 2009, 39, 733–741. [Google Scholar] [CrossRef] [PubMed]
Figure 1. International 10–20 system for EEG electrodes.
Figure 2. Flowchart of proposed model.
Figure 3. F1 score graph of all classifiers for training and testing set.
Table 1. Comparing Different datasets.
Sr No | Dataset Name | Year of Release | Sampling Rate | No. of Channels | Duration of Recordings | Processed
1 | Epileptic EEG Dataset, Mendeley Data | 2021 | 500 Hz | 21 | - | No (MAT and EDF files)
2 | Pediatric EEG dataset | 2021 | 2000 Hz | 52 | - | No (contains EDF files)
3 | Siena Scalp EEG Database | 2020 | 512 Hz | 34 | 128 h total | No (contains EDF files)
4 | The Bonn-Barcelona micro- and macro-EEG database | 2020 | - | 16 | 32 s each | No (contains MATLAB files)
Table 2. Description of the dataset.
Class Name | Class Label | Number of Instances
Epileptic Seizure | 1 | 2300
Non-Seizure | 0 | 9200
Table 3. Result analysis of Classifiers.
Methods | F1 Score | Recall | Specificity | Precision | Accuracy
RF | 0.943 | 0.928 | 0.990 | 0.958 | 0.977
XGB | 0.933 | 0.897 | 0.993 | 0.972 | 0.974
ETC | 0.930 | 0.894 | 0.993 | 0.969 | 0.973
NB | 0.899 | 0.887 | 0.979 | 0.912 | 0.960
GB | 0.889 | 0.861 | 0.981 | 0.920 | 0.957
KNN | 0.768 | 0.625 | 0.999 | 0.995 | 0.924
Kunekar, P.; Kumawat, C.; Lande, V.; Lokhande, S.; Mandhana, R.; Kshirsagar, M. Comparison of Different Machine Learning Algorithms to Classify Epilepsy Seizure from EEG Signals. Eng. Proc. 2023, 59, 166. https://doi.org/10.3390/engproc2023059166
