Novel Machine Learning-Based Brain Attention Detection Systems
Abstract
1. Introduction
2. Preliminaries of Enhanced EEG-Based Attention Detection
2.1. Preprocessing for EEG Signal Improvement
2.2. Various Machine Learning Algorithms
- Support vector machine (SVM) [15] is used for both classification and regression tasks. However, it is not particularly effective on datasets with imbalanced class distributions, noise, or overlapping class samples.
- ResNet (residual neural network) [16] lets weight layers learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Extensive empirical evidence indicates that residual networks are easier to optimize and can gain accuracy from considerably increased depth. Consequently, residual learning simplifies the training of networks substantially deeper than those used previously.
- GoogLeNet is a form of convolutional neural network that is based on the Inception model. It features Inception modules, which allow the network to select from different sizes of convolutional filters in each block. The Inception network arranges these modules in layers, sometimes using max-pooling layers with a stride of 2 to reduce the grid resolution by half [17].
- Decision Tree (DT) is a supervised learning method used in many fields. Models where the target variable can take on discrete values are referred to as classification trees; in these structures, leaves represent the class labels, while branches symbolize the conjunctions of features that lead to those labels [18].
- Random Forest (RF) builds a stronger classifier than a single decision tree by growing a collection of trees. It is an ensemble learning approach composed of many decision trees generated through random feature selection and bootstrapping [19]. The ensemble's prediction is obtained by aggregating the individual trees, typically by majority vote, which reduces the variance and overfitting of any single tree.
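As a minimal sketch of the majority-vote nearest-neighbour rule behind KNN, which appears in the result tables with k = 5: the code below is pure Python on toy 2-D points of our own invention, not the paper's EEG features.

```python
from collections import Counter
import math

def knn_predict(train_X, train_y, query, k=5):
    """Classify `query` by majority vote among its k nearest training points."""
    # Sort training samples by Euclidean distance to the query point.
    dists = sorted(
        (math.dist(x, query), label) for x, label in zip(train_X, train_y)
    )
    # Majority vote among the k closest labels.
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D "focus" vs. "unfocus" feature points (illustrative only).
X = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.25), (0.9, 0.8), (0.8, 0.9), (0.85, 0.95)]
y = ["focus", "focus", "focus", "unfocus", "unfocus", "unfocus"]

print(knn_predict(X, y, (0.2, 0.2), k=5))  # query near the "focus" cluster
```

Because KNN stores the training set and defers all computation to prediction time, its training cost is essentially zero, which is consistent with the near-zero training time reported for KNN in the performance table.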
2.3. Various Feature Reduction Methods
- Analysis of variance (ANOVA) compares means across various groups and is grounded in data variance analysis. This method is frequently applied in feature selection to enhance the processes of inference and decision-making. Earlier studies have incorporated this approach [28].
- The feature importance method (FI) evaluates and quantifies the significance of features within a machine learning model, helping users understand the role particular features play in the model's predictive accuracy [29].
- Linear correlation coefficient (LCC) is a statistical measure employed to quantify the strength and direction of the linear relationship between two variables, providing valuable insights into how one variable tends to increase or decrease in a consistent manner relative to changes in the other variable [30].
- Principal component analysis (PCA) is a linear method for reducing dimensionality by transforming a large number of variables into a smaller subset that still captures most of the significant information from the original dataset. This transformation helps in reducing computational complexity and improving the interpretability of the data [31,32].
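To illustrate how ANOVA-style filtering ranks features, the sketch below computes a one-way ANOVA F-statistic per feature in pure Python; a feature whose per-class means differ strongly relative to its within-class spread gets a large F. The two toy features and their values are our own illustration, not the paper's data.

```python
def anova_f(groups):
    """One-way ANOVA F-statistic for one feature split into per-class groups."""
    k = len(groups)                                # number of classes
    n = sum(len(g) for g in groups)                # total samples
    grand = sum(sum(g) for g in groups) / n        # grand mean
    means = [sum(g) / len(g) for g in groups]      # per-class means
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Feature A separates the two classes well; feature B barely does.
feat_a = {"focus": [1.0, 1.1, 0.9], "unfocus": [3.0, 3.2, 2.9]}
feat_b = {"focus": [1.0, 2.0, 3.0], "unfocus": [1.1, 2.1, 3.1]}

f_a = anova_f(list(feat_a.values()))
f_b = anova_f(list(feat_b.values()))
print(f_a > f_b)  # the discriminative feature gets the larger F-score
```

In filter-based selection, features are then kept or discarded by thresholding these F-scores (equivalently, their p-values), before any classifier is trained.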
2.4. Performance Measures
3. Performance Evaluations
3.1. Result Comparisons for ML Algorithms
3.2. Result Comparisons for Feature Reduction
Model | Original (%) | ANOVA (%) | Feat. Imp. (%) | LCC (%) | PCA (%) |
---|---|---|---|---|---|
KNN | 99.85% | 99.78% | 99.55% | 99.57% | 99.85% |
GoogLeNet | 99.83% | 99.72% | 99.65% | 99.71% | 99.78% |
ResNet-18 | 99.78% | 99.89% | 99.35% | 99.41% | 99.81% |
SVM | 99.69% | 96.30% | 93.55% | 93.48% | 96.71% |
RF | 99.23% | 99.25% | 98.84% | 98.74% | 98.67% |
DT | 93.59% | 93.61% | 93.18% | 92.51% | 88.27% |
4. Robust Optimization for Enhanced EEG Brain Attention Detection System Design
Performance Results for Optimized EEG Brain Attention Detection
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Amzica, F.; Lopes da Silva, F.H. Cellular Substrates of Brain Rhythms. In Niedermeyer’s Electroencephalography: Basic Principles, Clinical Applications, and Related Fields; Schomer, D.L., Lopes da Silva, F.H., Eds.; Oxford University Press: Oxford, UK, 2017; pp. 20–62. [Google Scholar]
- Goldman, L.; Schafer, A.I. Cecil Medicine; Elsevier: Amsterdam, The Netherlands, 2011. [Google Scholar]
- Deschamps, A.; Ben Abdallah, A.; Jacobsohn, E.; Saha, T.; Djaiani, G.; El-Gabalawy, R.; Overbeek, C.; Palermo, J.; Courbe, A.; Cloutier, I.; et al. Electroencephalography-Guided Anesthesia and Delirium in Older Adults After Cardiac Surgery: The ENGAGES-Canada Randomized Clinical Trial. JAMA 2024, 332, 112–123. [Google Scholar] [CrossRef] [PubMed]
- Orovas, C.; Sapounidis, T.; Volioti, C.; Keramopoulos, E. EEG in Education: A Scoping Review of Hardware, Software, and Methodological Aspects. Sensors 2025, 25, 182. [Google Scholar] [CrossRef]
- Dhande, S.; Kamble, A.; Gundewar, S.; Naresh Babu, N.; Kumar, P. Overview of Machine Learning for Bioengineering EEG Signal Processing. In Proceedings of the 2024 Second International Conference on Intelligent Cyber Physical Systems and Internet of Things (ICoICI), Coimbatore, India, 20–30 August 2024; pp. 611–616. [Google Scholar]
- Li, M.; Qiu, M.; Kong, W.; Zhu, L.; Ding, Y. Fusion Graph Representation of EEG for Emotion Recognition. Sensors 2023, 23, 1404. [Google Scholar] [CrossRef] [PubMed]
- Atilla, F.; Alimardani, M. EEG-based Classification of Drivers Attention using Convolutional Neural Network. In Proceedings of the 2021 IEEE 2nd International Conference on Human-Machine Systems (ICHMS), Magdeburg, Germany, 8–10 September 2021; pp. 1–4. [Google Scholar]
- Aci, C.I.; Kaya, M.; Mishchenko, Y. Distinguishing mental attention states of humans via an EEG-based passive BCI using machine learning methods. Expert Syst. Appl. 2019, 134, 153–166. [Google Scholar] [CrossRef]
- Pei, Y.; Luo, Z.; Yan, Y.; Yan, H.; Jiang, J.; Li, W.; Xie, L.; Yin, E. Data Augmentation: Using Channel-Level Recombination to Improve Classification Performance for Motor Imagery EEG. Front. Hum. Neurosci. 2021, 15, 645952. [Google Scholar] [CrossRef]
- Jasper, H.H. Report of the committee on methods of clinical examination in electroencephalography: 1957. Electroencephalogr. Clin. Neurophysiol. 1958, 10, 370–375. [Google Scholar]
- Butterworth, S. On the Theory of Filter Amplifiers. Exp. Wirel. Wirel. Eng. 1930, 7, 536–541. [Google Scholar]
- Goyal, D.; Pabla, B. Condition based maintenance of machine tools—A review. CIRP J. Manuf. Sci. Technol. 2015, 10, 24–35. [Google Scholar] [CrossRef]
- Lieber, C.; Mahadevan-Jansen, A. Automated Method for Subtraction of Fluorescence from Biological Raman Spectra. Appl. Spectrosc. 2003, 57, 1363–1367. [Google Scholar] [CrossRef]
- Alarfaj, F.K.; Malik, I.; Khan, H.U.; Almusallam, N.; Ramzan, M.; Ahmed, M. Credit Card Fraud Detection Using State-of-the-Art Machine Learning and Deep Learning Algorithms. IEEE Access 2022, 10, 39700–39715. [Google Scholar] [CrossRef]
- Zhao, J.; Lui, H.; McLean, D.; Zeng, H. Automated Autofluorescence Background Subtraction Algorithm for Biomedical Raman Spectroscopy. Appl. Spectrosc. 2007, 61, 1225–1232. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going Deeper with Convolutions. arXiv 2014, arXiv:1409.4842. [Google Scholar]
- Studer, M.; Ritschard, G.; Gabadinho, A.; Müller, N.S. Discrepancy Analysis of State Sequences. Sociol. Methods Res. 2011, 40, 471–510. [Google Scholar] [CrossRef]
- Randhawa, K.; Loo, C.K.; Seera, M.; Lim, C.P.; Nandi, A.K. Credit Card Fraud Detection Using AdaBoost and Majority Voting. IEEE Access 2018, 6, 14277–14284. [Google Scholar] [CrossRef]
- Maimon, O.; Rokach, L. Data Mining and Knowledge Discovery Handbook; Springer: New York, NY, USA, 2010. [Google Scholar]
- Taghizadeh-Mehrjardi, R.; Nabiollahi, K.; Minasny, B.; Triantafilis, J. Comparing data mining classifiers to predict spatial distribution of USDA-family soil groups in Baneh region, Iran. Geoderma 2015, 253–254, 67–77. [Google Scholar] [CrossRef]
- Mao, A.; Mohri, M.; Zhong, Y. Cross-Entropy Loss Functions: Theoretical Analysis and Applications. arXiv 2023, arXiv:cs.LG/2304.07288. [Google Scholar]
- Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2017, arXiv:cs.LG/1412.6980. [Google Scholar]
- Loshchilov, I.; Hutter, F. SGDR: Stochastic Gradient Descent with Warm Restarts. arXiv 2016, arXiv:1608.03983. [Google Scholar]
- Akogul, S. A Novel Approach to Increase the Efficiency of Filter-Based Feature Selection Methods in High-Dimensional Datasets With Strong Correlation Structure. IEEE Access 2023, 11, 115025–115032. [Google Scholar] [CrossRef]
- Chandrashekar, G.; Sahin, F. A survey on feature selection methods. Comput. Electr. Eng. 2014, 40, 16–28. [Google Scholar] [CrossRef]
- Chuang, L.Y.; Chang, H.W.; Tu, C.J.; Yang, C.H. Improved binary PSO for feature selection using gene expression data. Comput. Biol. Chem. 2008, 32, 29–38. [Google Scholar] [CrossRef]
- Wu, C.; Yan, Y.; Cao, Q.; Fei, F.; Yang, D.; Lu, X.; Xu, B.; Zeng, H.; Song, A. sEMG Measurement Position and Feature Optimization Strategy for Gesture Recognition Based on ANOVA and Neural Networks. IEEE Access 2020, 8, 56290–56299. [Google Scholar] [CrossRef]
- Casalicchio, G.; Molnar, C.; Bischl, B. Visualizing the Feature Importance for Black Box Models. In Proceedings of the Machine Learning and Knowledge Discovery in Databases; Berlingerio, M., Bonchi, F., Gärtner, T., Hurley, N., Ifrim, G., Eds.; Springer: Cham, Switzerland, 2019; pp. 655–670. [Google Scholar]
- Biesiada, J.; Duch, W. Feature Selection for High-Dimensional Data—A Pearson Redundancy Based Filter; Springer: Berlin/Heidelberg, Germany, 2007; pp. 242–249. [Google Scholar]
- Evgeniou, T.; Pontil, M. Support Vector Machines: Theory and Applications. In Machine Learning and Its Applications: Advanced Lectures; Springer: Berlin/Heidelberg, Germany, 2001; pp. 249–257. [Google Scholar]
- Jolliffe, I.T.; Cadima, J. Principal component analysis: A review and recent developments. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 2016, 374, 20150202. [Google Scholar] [CrossRef] [PubMed]
- Minka, T. Automatic Choice of Dimensionality for PCA; Technical Report 514; MIT Media Lab Vision and Modeling Group: Cambridge, MA, USA, 2001; pp. 577–583. [Google Scholar]
- Feng, X.; Kim, S.K. Novel Machine Learning Based Credit Card Fraud Detection Systems. Mathematics 2024, 12, 1869. [Google Scholar] [CrossRef]
- Kim, S.K. Combined Bivariate Performance Measure. IEEE Trans. Instrum. Meas. 2024, 73, 1009404. [Google Scholar] [CrossRef]
- Fawcett, T. An introduction to ROC analysis. Pattern Recognit. Lett. 2006, 27, 861–874. [Google Scholar] [CrossRef]
- Serra, C.; Rodriguez, M.C.; Delclos, G.L.; Plana, M.; López, L.I.G.; Benavides, F.G. Criteria and methods used for the assessment of fitness for work: A systematic review. Occup. Environ. Med. 2007, 64, 304–312. [Google Scholar] [CrossRef]
- Zhang, D.; Cao, D.; Chen, H. Deep learning decoding of mental state in non-invasive brain computer interface. In AIIPCC ’19, Proceedings of the International Conference on Artificial Intelligence, Information Processing and Cloud Computing, Sanya, China, 26–28 July 2019; Association for Computing Machinery: New York, NY, USA, 2019. [Google Scholar]
- Sravanth Kumar, R.; Srinivas, K.K.; Peddi, A.; Harsha Vardhini, P. Artificial Intelligence based Human Attention Detection through Brain Computer Interface for Health Care Monitoring. In Proceedings of the 2021 IEEE International Conference on Biomedical Engineering, Computer and Information Technology for Health (BECITHCON), Dhaka, Bangladesh, 4–5 December 2021; pp. 42–45. [Google Scholar]
- Al-Nafjan, A.; Aldayel, M. Predict Students’ Attention in Online Learning Using EEG Data. Sustainability 2022, 14, 6553. [Google Scholar] [CrossRef]
- Khare, S.K.; Bajaj, V.; Sengur, A.; Sinha, G. 10—Classification of mental states from rational dilation wavelet transform and bagged tree classifier using EEG signals. In Artificial Intelligence-Based Brain-Computer Interface; Bajaj, V., Sinha, G., Eds.; Academic Press: Cambridge, MA, USA, 2022; pp. 217–235. [Google Scholar]
- Suwida, K.; Hidayati, S.C.; Sarno, R. Application of Machine Learning Algorithm for Mental State Attention Classification Based on Electroencephalogram Signals. In Proceedings of the 2023 International Conference on Computer Science, Information Technology and Engineering (ICCoSITE), Jakarta, Indonesia, 16 February 2023; pp. 354–358. [Google Scholar]
- Wang, Y.; Nahon, R.; Tartaglione, E.; Mozharovskyi, P.; Nguyen, V.T. Optimized preprocessing and Tiny ML for Attention State Classification. In Proceedings of the 2023 IEEE Statistical Signal Processing Workshop (SSP), Hanoi, Vietnam, 2–5 July 2023; pp. 695–699. [Google Scholar]
- Khare, S.K.; Bajaj, V.; Gaikwad, N.B.; Sinha, G.R. Ensemble Wavelet Decomposition-Based Detection of Mental States Using Electroencephalography Signals. Sensors 2023, 23, 7860. [Google Scholar] [CrossRef] [PubMed]
- Velaga, N.; Singh, D. The Potential of 1D-CNN for EEG Mental Attention State Detection. Commun. Comput. Inf. Sci. (CCIS) 2024, 2128, 173–185. [Google Scholar]
Hyperparameter | Value |
---|---|
Loss Function | Cross Entropy [22] |
Optimizer | Adam [23] |
Learning Rate Scheduler | Cosine Annealing [24] |
Batch Size | 32 |
Initial Learning Rate | 0.001 |
Epochs | 50 |
Model | Hyperparameter Setting |
---|---|
Decision Tree | criterion = ‘entropy’ |
Random Forest | criterion = ‘entropy’ |
KNN | n_neighbors = 5 |
SVM | kernel = ‘rbf’, probability = True |
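The classical-model settings in the table read like scikit-learn arguments, so, assuming that library (the paper does not name its implementation), the four models could be instantiated as sketched below; every parameter not listed in the table is left at scikit-learn defaults, which is our assumption, not a claim about the paper.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Settings taken from the hyperparameter table; all other parameters are
# scikit-learn defaults (an assumption for illustration).
models = {
    "DT": DecisionTreeClassifier(criterion="entropy"),
    "RF": RandomForestClassifier(criterion="entropy"),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf", probability=True),
}

# Tiny sanity check on separable toy data (not the paper's EEG features).
X = [[0, 0], [0, 1], [1, 0], [1, 1], [0.5, 0.5],
     [5, 5], [5, 6], [6, 5], [6, 6], [5.5, 5.5]]
y = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
for name, model in models.items():
    model.fit(X, y)
    print(name, model.predict([[0.5, 0.5], [5.5, 5.5]]))
```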
Method | Feature Reduction Setup |
---|---|
ANOVA | p-values |
Feature Importance | importance scores of all features |
LCC | |
PCA | n_components = ‘mle’ [33] |
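The mle setting for PCA matches scikit-learn's option for choosing the dimensionality by Minka's MLE [33]. The projection step itself can be sketched with NumPy (our choice of library; the data below are synthetic): center the features, eigendecompose their covariance matrix, and keep the directions of largest variance.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project X (rows = samples, columns = features) onto its top components."""
    Xc = X - X.mean(axis=0)                 # center each feature
    cov = np.cov(Xc, rowvar=False)          # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]       # sort descending by variance
    components = eigvecs[:, order[:n_components]]
    explained = eigvals[order[:n_components]] / eigvals.sum()
    return Xc @ components, explained

rng = np.random.default_rng(0)
# Two informative dimensions plus two near-constant noise dimensions.
base = rng.normal(size=(100, 2))
X = np.hstack([base, 0.01 * rng.normal(size=(100, 2))])

reduced, explained = pca_reduce(X, n_components=2)
print(reduced.shape, explained.sum())  # two components retain nearly all variance
```

The MLE criterion automates the choice of `n_components` by comparing such explained-variance profiles across candidate dimensionalities.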
Model | Accuracy | Precision | Recall | F1-Score | Training Time [min] |
---|---|---|---|---|---|
KNN | 99.85% | 99.81% | 99.89% | 0.999 | ≈0.00 |
GoogLeNet | 99.83% | 99.89% | 99.78% | 0.998 | 60.08 |
ResNet-18 | 99.78% | 99.70% | 99.85% | 0.998 | 48.70 |
SVM | 96.69% | 96.43% | 96.98% | 0.967 | 0.150 |
RF | 99.24% | 99.26% | 99.22% | 0.992 | 0.586 |
DT | 93.59% | 93.46% | 93.77% | 0.936 | 0.131 |
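The accuracy, precision, recall, and F1 values above follow the standard confusion-matrix definitions; a minimal sketch (the counts below are illustrative, not taken from the paper's experiments):

```python
def binary_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)   # fraction of correct predictions
    precision = tp / (tp + fp)                   # correctness of positive calls
    recall = tp / (tp + fn)                      # coverage of actual positives
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# Illustrative counts only.
acc, prec, rec, f1 = binary_metrics(tp=95, fp=5, fn=5, tn=95)
print(f"acc={acc:.2%} prec={prec:.2%} rec={rec:.2%} f1={f1:.3f}")
```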
Previous Research | ML Model | States | Accuracy (%) |
---|---|---|---|
C. I. Aci (2019) [8] | SVM | focus, unfocus, drowsy | 91.72 |
D. Zhang (2019) [38] | CNN | focus, unfocus, drowsy | 96.40 |
R. Sravanth Kumar (2021) [39] | KNN | focus, unfocus | 97.50 |
A. Al-Nafjan (2022) [40] | Random Forest | focus, unfocus | 96.00 |
S. K. Khare (2022) [41] | Bagged Tree | focus, unfocus, drowsy | 91.77 |
K. Suwida (2023) [42] | XGBoost | focus, unfocus, drowsy | 98.00 |
Y. Wang (2023) [43] | SVM | focus, unfocus, drowsy | 99.80 |
S. K. Khare (2023) [44] | Optimizable Ensemble | focus, unfocus, drowsy | 97.80 |
N. Velaga (2024) [45] | CNN | focus, unfocus, drowsy | 98.47 |
Our Solution (2024) | KNN | focus, unfocus | 99.56 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Wang, J.; Kim, S.-K. Novel Machine Learning-Based Brain Attention Detection Systems. Information 2025, 16, 25. https://doi.org/10.3390/info16010025