ChMinMaxPat: Investigations on Violence and Stress Detection Using EEG Signals
Abstract
1. Introduction
1.1. Literature Review
1.2. Literature Gaps
- Violence is an event that deeply affects the brain [25,26], and the brain’s reaction to violence should therefore be detectable [25]. For this purpose, we used an EEG violence detection dataset to capture the brain’s reactions to violent stimuli. Although various EEG signal classification methods exist in the literature, research on EEG-based violence detection remains scarce.
- Most EEG signal classification models have been validated on only a single dataset [30].
- To our knowledge, there has been no feature engineering research aiming to discover the differences in brain activation between violence and stress [31].
1.3. Motivation and Our Model
1.4. Novelties and Contributions
- Novelties:
- A new channel-based FE function, ChMinMaxPat, has been introduced.
- An SOXFE model built on the proposed ChMinMaxPat has been presented and applied to EEG signal datasets for violence and stress detection.
- Two EEG signal datasets, one for violence detection and one for stress detection, have been used for testing, demonstrating the overall high classification ability of the presented model.
- Contributions:
- The presented ChMinMaxPat-based model achieved over 99% classification accuracy for violence detection using both 10-fold and leave-one-record-out (LORO) cross-validation (CV). In this regard, we have introduced a highly accurate feature engineering model, contributing to the field of feature engineering.
- We extracted interpretable results by deploying the DLob symbolic language, and using the generated DLob string, a connectome diagram related to violence was created.
2. Materials and Methods
2.1. Material
2.1.1. The EEG Violence Detection Dataset
2.1.2. The EEG Stress Detection Dataset
2.2. Method
- ChMinMaxPat-based feature extraction (FE);
- CWNCA-based feature selection;
- Classification utilizing tkNN;
- IMV-based information fusion;
- Generation of explainable results with DLob.
2.2.1. The Proposed ChMinMaxPat-Based Feature Extraction
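The full ChMinMaxPat settings are listed in the method parameter table later in this article: channel-wise minimum/maximum identity generation, distance to the average value, transition tables, channel-based coding, and histogram extraction, yielding 15 feature vectors of lengths 196, 1176, and 2352 for 14 channels. The snippet below is only a minimal sketch of the min/max identity idea under our own simplifying assumption that the (minimum-channel, maximum-channel) index pair at each time sample is histogrammed into a single 14 × 14 = 196-bin vector; it is not the authors’ implementation and reproduces the flavor of just one of the 15 vectors.

```python
import numpy as np

def minmax_identity_histogram(eeg, num_channels=14):
    """Sketch of one channel-based min/max feature vector (length 196 = 14 x 14).

    eeg: array of shape (num_channels, num_samples).
    Assumption (ours, not the paper's): the (argmin, argmax) channel pair at each
    time sample is treated as an identity and accumulated into a 14 x 14 histogram.
    """
    min_idx = np.argmin(eeg, axis=0)       # channel with the smallest value per sample
    max_idx = np.argmax(eeg, axis=0)       # channel with the largest value per sample
    hist = np.zeros((num_channels, num_channels))
    for lo, hi in zip(min_idx, max_idx):
        hist[lo, hi] += 1                  # count each (min-channel, max-channel) pair
    return hist.ravel() / len(min_idx)     # normalized 196-dimensional feature vector

# Usage with a synthetic 14-channel segment (e.g., 15 s at 128 Hz)
rng = np.random.default_rng(0)
segment = rng.standard_normal((14, 15 * 128))
print(minmax_identity_histogram(segment).shape)  # (196,)
```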
2.2.2. Feature Selection
2.2.3. Classification
2.2.4. Information Fusion
2.2.5. Generation of Explainable Results
3. Results
3.1. Classification Results
3.2. Explainable Results
4. Discussion
- Our findings were as follows:
- A new ChMinMaxPat-based SOXFE model was presented for classifying EEG signals.
- The recommended ChMinMaxPat-based SOXFE model was tested on EEG data for both violence and stress detection.
- The proposed model achieved over 99% accuracy for violence detection and over 70% accuracy for stress detection.
- ChMinMaxPat extracts 15 feature vectors, and the most informative features are selected using CWNCA.
- The tkNN classifier, an advanced version of k-nearest neighbors, outperformed nine other classifiers.
- Information fusion using IMV improved the accuracy, with the voted outcomes outperforming the individual tkNN outcomes.
- The DLob symbolic language was used to explain the brain activity patterns during violence and stress detection.
- Our model performed well in the 10-fold CV (99.86%) and LORO CV (99.31%) tests for violence detection. For stress detection, it achieved 92.86% with 10-fold CV and 73.30% with leave-one-subject-out (LOSO) CV (equivalent to LORO CV here, since each participant contributed a single record). In this regard, the presented ChMinMaxPat-based SOXFE model demonstrated a general classification ability, as it was applied to two separate EEG signal datasets. The model achieved over 99% accuracy in violence detection but lower accuracy in stress detection under LOSO CV, where the accuracy dropped to around 70%. This difference suggests that some of the extracted features are effective across multiple tasks, while others are more task-specific. The high performance in violence detection indicates that the extracted features capture neural patterns strongly associated with processing violent stimuli. In contrast, the moderate performance in stress detection indicates that these features may not fully capture the neural characteristics of stress responses, or that stress-related EEG patterns are more variable and less distinct. Moreover, stress has multiple sources and is therefore more complex to detect than violence.
- The DLob arrays/strings generated for both tasks (EEG stress and violence detection) provide insights into the brain regions activated during each type of stimulus. The DLob arrays for violence detection (Table A1) show higher activation of the temporal, parietal, and occipital lobes in addition to the frontal lobes. This widespread activation reflects the complex processing involved in perceiving and responding to intense stimuli, including sensory integration, spatial awareness, and emotional regulation. In contrast, the DLob arrays for stress detection (Table A2) show predominant activation in the frontal lobes, particularly the right frontal lobe (represented by the FR symbol), which is associated with emotional regulation and cognitive processing under stress. The lower activation in the other lobes suggests that stress responses are more internally focused cognitive processes, driven by internal states rather than external sensory processing. Additionally, the resulting cortical connectome maps indicate that distinct DLob-based patterns can be generated for each condition. These differences highlight that the neural patterns, and hence the features extracted by the presented ChMinMaxPat-based model, are influenced by the specific nature of the task. Features that effectively capture the neural response to violent stimuli do not generalize fully to stress detection because the underlying neural mechanisms differ.
- Violence triggered more activity in the temporal, parietal, and occipital lobes compared to stress.
- Connectome diagrams were created to visualize brain activity during violence and stress, highlighting different neural responses.
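Because each DLob string is just a sequence of two-letter lobe symbols (FL, FR, TL, TR, PL, PR, OL, OR), the lobe-activation counts and the symbol-transition connectome discussed above can be recomputed directly from the appendix strings. The sketch below, applied to the first DLob string of Table A1, is our own minimal illustration of that bookkeeping (tokenization, per-lobe counts, information entropy, and transition counts); it is not the authors’ XAI code.

```python
from collections import Counter
from itertools import pairwise  # Python 3.10+
import math

def tokenize(dlob_string):
    """Split a DLob string into its two-letter lobe symbols."""
    return [dlob_string[i:i + 2] for i in range(0, len(dlob_string), 2)]

def information_entropy(tokens):
    """Shannon entropy (bits) of the symbol distribution."""
    counts, total = Counter(tokens), len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def transition_counts(tokens):
    """Symbol-to-symbol transition counts, the basis of a connectome-style diagram."""
    return Counter(pairwise(tokens))

# First DLob string of Table A1 (EEG violence detection)
tokens = tokenize("FRPRFRFRFRFLFRTRFRORFRFLFRTLPLFRFRFRFROLFRTR")
print(Counter(tokens))                           # per-lobe activation counts (FR-dominant)
print(round(information_entropy(tokens), 3))     # information entropy of the string
print(transition_counts(tokens).most_common(3))  # most frequent transitions (edges)
```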
- Our research had the following advantages:
- The recommended ChMinMaxPat-based SOXFE model achieves over 99% accuracy for violence detection and over 70% for stress detection;
- It provides clear results using the DLob symbolic language, showing how the brain reacts to violence and stress;
- The introduced ChMinMaxPat feature extractor generates 15 feature vectors, and CWNCA selects the most informative features, improving both accuracy and speed;
- The model self-organizes during classification and information fusion, boosting accuracy without complicated manual tuning;
- Unlike deep learning, the recommended SOXFE model has linear time complexity;
- It works well with different EEG datasets, making it flexible for various tasks;
- Its explainable results and connectome diagrams help with understanding brain activity, which is useful in clinical and forensic settings.
- Our study had the following limitations:
- The model was tested on relatively small datasets, which may affect how well it performs on larger, more diverse datasets;
- While the presented SOXFE model performs very well for violence detection, its accuracy for stress detection is lower, around 70% with LOSO CV.
- Future work consists of the following:
- The recommended ChMinMaxPat-based SOXFE model is planned to be tested on larger, more diverse datasets to improve its robustness by capturing the variability of EEG across individuals and tasks;
- Feature extraction methods tailored to capturing task-specific neural patterns will be developed to increase the model’s performance across various tasks;
- Validation of the model on new tasks and datasets will be conducted to gain further insight into the generalizability of the features extracted by ChMinMaxPat;
- Advanced feature selection techniques prioritizing generalizable features are planned to be implemented to improve the model’s adaptability across tasks;
- The integration of EEG with other physiological signals or behavioral data will be explored in future research to potentially enhance the accuracy for task-specific applications;
- The model will be tested in hospitals and clinics to support the detection of mental health conditions;
- Real-time monitoring of brain activity will be enabled;
- Next-generation DLob-based EEG translation applications are being planned to broaden the model’s utility in EEG interpretation, with a DLob dictionary potentially being created.
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A
Table A1. The generated DLob strings for the EEG violence detection dataset.
No | DLob String |
---|---|
1 | FRPRFRFRFRFLFRTRFRORFRFLFRTLPLFRFRFRFROLFRTR |
2 | FLFRFRFRFLTRFLTRFLORTLOLFLFLOLTRFLORFLFLPRORFLOLFLOLFRORFLFLFLFRFLTLFLPRFLPLFRPLFLFLFLPLTLFRPLFRFLFRORFRPRPR |
3 | FRFLFRFRFRPRFRFLFRORTLPLFRTRFRTLFROLFRTRPLOLFRPLPLFR |
4 | FLFRFLFRFLFRFLFRFLFLFLFLTLFRORFRFLFROLOLFLFRTRFLTLFROLPLORFROLFRFLFLFLOLTLFLFRFRFRFROLFRPLFRPLFRPRFRFLFROLFLFRFLFLFRFLPLFRFRPRFRFLOLFLOR |
5 | FRFRFRFRFLFLFRFRTLFROLORPLFLORPRTLOLPLOLTLPLOLFLORFR |
6 | FRTRFROLFRFRFRTLFRFLFRPRFRORTLFLTLFLFRORFRFLFRFRFROLFRFLFRFRFRFLPLFRFRFLFRFRORFRFRTLFLFRFRPLFRTLTLTRFRFRTLOLFRPROROLPLFRFRFLPLTRFRPLFRORPLTLFLPLPLFLPRORFRTRTLTLPLFLFRFRFRFLORTLFRPLFROLORORTLFRFRFLPROLORFLTLFLTLFR |
7 | FLFLFRFLFRPRFLFLFRTRFRFRFRFRFRTLTLFLFRTRFRFRFLTRFRFLFRPRFLOLFLOLFRFLFRFRFROLFLPLFLTRFRFRFROLFRTRFLFRFRFLFLORFRORFRFRFRTLFRPR |
8 | PRFRPLFRTRFRTLFROLFRFLFRORFRTLFRPLFRFLFRTRFLORFRPRFRTRFROLFLFLFRFLOLFLFROROL |
9 | TRFLPRFLPLTLTLFLPRTLOLTLFLTLPLFLTRTLFLTRFLOLTLFLORTLTRFLORFLORTRTLTLOLTROLFLTRFRORFRTRPLFLFLPRFRPRORPLORPLPROROLORFLTLOLORFLFLTLORFRPLFRTLTRTRFLOLFLTLFLPLPLFLTRPLFLPROLFLFLOLFL |
10 | TRFRPRFRORFROLFRPLFRFLFRTLFRPLFRPRFRORFRFLFRPLOLTLPLTRFLTRFRTLFRFLOLFLFROLFLPRFLFLOLPLFLFLFR |
11 | FRFRFRTLFRFLFRTRFRFLFROLFRPLFRTLFRFLFRPRFRFLPLFLFRFL |
12 | FRFRFRFRFRPLPLOLFLFLFRFRFROLFROLTLPL |
13 | FRFRFLFRFLFRPRFRFLFRPLFRFLFRORFRTLFROLOLFLPRFLFRFLOLFLFRFLFLFLPRFLOLOROLFLPLFLPLFLFLOLFRFRPLORPLFLPLFLPLFROLFLFRPLPLFLOLFROLPROLFRPR |
14 | TLFLFRFRFLFRPRFLPRTLFROLORFRTLFROLFRFLFRFRTRTRFLORFRFLFRTLFRFRFLPRFRFRPLPLFRTRFRTRFRORFLFLFRPRORPLTLPRFRPLFLTLTLFRFRPLPRPLORPLFLPLFROLFLPRFRTLFRTLFLFLFRFLFRFRFLFRORPRFRORFRFRFLFRTLPRFROLOLORFRFRFLORFRFRFRFRORPLFRPLOLFRFRFROLTRTLPRFRFLFLTLFRPLFRFRTLOROLFLFRFLOLOLFRFLFRPLORFRTLFLOLPLOLTRFLFRFLFLOLTRPLFLFLPLFRFLOLFLOLFRPRFRPLTLFL |
15 | PLFRTLFRFRFLFLFLFRFRFLFRFRTRPRTLFRTRFRPRORFRPLTLFLPLFRFLFRFRFRTLFRPRFRFLTLFLFRFRTRTLFLTROLFRFLTRFROLFLFLFRFRFRFRFRORFRFRFRTROLFLPRFRFROLPLFRFRTRFLOLFRORTLTLFRTRTLFRPRFLPRORFLFRFROLFRTLFLFRFRTRFRFLFRFLFRTLFLFRPLFLFRORFROLFROLFROLFRFRFRPRTLFLPRFRTLOLPRFRFRFR |
Table A2. The generated DLob strings for the EEG stress detection dataset.
No | DLob String |
---|---|
1 | FRFLFRFRFROLFRTRFRFRFRFRFRFLFRTRFROROLFLFRORFRFRFRPRFRFLFRTRFROLFRFRFRTLFLFRFRPLFRPLFLPRFRFRFRFLFRPRFLTRFLFLFLTRFLFRFLFLORFLFRFLFLPROLFLOLFLTRFLFRTLFLFLFLFRFRFRTRPRFLFRFLOLOLFRFLFROLTRFLPLORFLFRFRFLFRFLORFLFLFLFLFLOLOLFRFRFRPRPRFLORFLTRFRFRFLFLFLTLOLTLORFLOLFLPLFRFRTLPRFLOLFRORFRFLPRTRFLTRFLFRPLOLOLFLPLTLPROLFRFLFRFLFRFLFLFLPLORTLPRFLFRORTRFLTLFRFROLFLTLORFRFLFRFLOLFLTRFLTLPRTLTRORPRFRTROLFLFRFLFLFRFLPLTLFLFRPRFRTRFRFRFLORTRFRORFLFRORFR |
2 | FLFRFLFRFLFRORFLFLFLPLFRFRFRFLFRFLTRFRFLOLFLFLTRTLFRPRFRFLPLFRFRFRPRTRORTRPLORFRPLFLFLTRFLPLTLFLFLORFLPRFRFLPLTLFRFLOROLFLORFLOLORFLORFLPRFLFLFLTLFLTRFLFLFROLFLFLFLFRFRFLTRPRFRPLFLFLFLFRFROLFRFRTLFLOLFLFRTRFLFRFLFRTROLTLFRFLTLPRPLFLFROLTLFLORPLFRFLFRPRTRFRFLORTRFLTLTLFRFLORFRFRFLTLTRPLFRFLFRFLOLOLFLFRTROLFROLTRORTRFRTRPLTRTROLFLPRFLFLORFRFLFRTRPRPRPRTRFRFLFRFRFLFRPLORFLPRFRPLFRFRFROLFLFRFLORFRTRTRFLFLFRORTLOLPRFRFRFLFRFLFLFR |
3 | FRFLFRFRFRFRTRFROLORFLTLFLFLFLFLFRTRFLFLFRFRFRTRFRFRFRTRFRFLFRFLFRFRFRPRFRFRFRPLPRTRFLFLFRFLFRFLFLFRFROLFRFRFRFLFRPRFLTRFLPRFLOLFRFLFLTLTRFLFLORFRPRFRFRFRFRORPRFLPLFLOLFRPLFLORFLFRFLFLFLFRFLFLPRFLFLFLFLFRPRTLFLOLTLFRTRFLTLPRFLPRTRFLPLTLFLFRPLPLPRORFRFRFLOLTRORPRFLTRFLFLFLFLFRFLFRFRFR |
4 | FLFRFLFRFLFRFLFRFLFLFLFLPLFRFRFRFRFLFLTRFLFRFLFRFLOLPRORTRFLFLFLFRFLFRFRFLFRFLTRFRFLOLFRORFRFLPRPRPRORORFRTRTLFRPLORFRFRFRFRTLPRFRFLFLTRFRTRFLOROLPROLOLFLTRFLFRORTROROLFRFL |
5 | FRFRFRFRFLFLFRFRFLFLFRFLFRFRPRTRFRFROLORFLFRFLPRFLTRFRTRFROLPRFRFRFLTLFLFLOROLFRFLOLFLFLFRFLFLFRFLFRFRFLFRORORFLFRFRFLFRTRFRFRFRFLFRFLFRFLFRTRFRFLFRFRFRFLPRFRPRTLTLOLFLFRFRFRORTRFLOLFRFRPLFRFLPRFRFRFRFLFLFRFLFROLFRFR |
6 | FRFLFRTLFRFLFRORFRFLFRFRFROLTRFLPLPLTRTLFLPLFLFLFRFLFRFRFRFLFRFLFRFRFRPRPROLFLTLFRTRFRFLFRORFLFRFLFRFLFRFRTLFRPLFLFRFROLFRPRFROLTRPLFRFLFRORORPLPRTLFLFRFROLFLTLFLFLFRTLFLPLFRFRFLFROLFRFRPLORFLFRFROLPLFLOLORFLFRPRTRFROLPRFRFRFRTRFRFLFLPRFLFRFRFL |
7 | FLFRFLFRFRFRFLFLFRFRFLFRFLFLORFLPLFRFLFRFRPRFLTRFRFRFRFRFLFRFRFRFRFRFLPLFRPLTLFLFLFRFRTRFLFLFRFRTRFLFRFLFRTRTLFRORFRFLFLFLFLFRPRFLFLFRTRFRFRFRFRORPRFLFLFRFLFRFRFLFRFRFRTLFLFRFLFLOLFRFRFRORFRTRFLFLOROLPRFLFLTLORFLFROLFLFRFRFROLORFLTRFRFLFROLFLPRFRPRFRTRFLOROLFRFLFRFRORFLOLFRFRFRFRFROLFLFRFRFLORORFLFLFRFLFRTLFLFLFLFLFRFLFROLFLFLTRFLFRFRFRFRFROLFLFLFLFLFRPRFLTRFRFLFRTRFRFLTRORFLOLFRFRFRFLFRFRTLFRFRFLFLORFRPLFRPRFRFLFLFRTLFLFRFLTRTLFLOLFRTRTRFRFRFLFLFRFRORFLTLFRFLFRFRFRPLFRORFRFLFLPLFRORFRTRFLFLFROROLORFRTLFROLFLTLFRFRFRFLFLORFRFLTLFLFRORFRTRFRPLTRFRPRFRFRPRFLFLFRFR |
8 | TRFRFLFRFLFRPRFRPLFRPRFRTRFRFLFROLFRFLFRORFRFRFRTLFRFRFLFLFROLFRFLFRFRFRFRORFRFRFLFRFLFRFRFRFLPRFLFLFROLTRFRFLTRTRFLORFRPLFRFRFLFLFLFRFLFRORTLFRFRTRFLORTRFLORTRFRFLFLORFLPRFLFL |
9 | TRFLTRFLFRPLFLPRPRTLTRORTRFLTRFRFLTLFRPROLFRTRPLFLFLPLFLTRPRTROLFLFLFLPLPROLPRFLFRFLFLFRTRTLOLTRFRTRFLFRFRORFRPLFLFRFRFLPLFLFRTLTLTLTLFLFRORFLFRFLOLFLPRFLFLFLFLFRFRFRFLFRFLPRFRFRTRFRFRFLPLFLOLFRPLFRFLPRFLFRFLFLTRFLOLOLFLFRFRPLFRFLFLFLORFLFLFLFLPLFRFLFRFRFRFROLFRFLFRPRFRFLORFR |
10 | FLFRFLFRFLFRTRFRPRFRTRFRPRFROLFRTRFRFLFRPLFROLORFRFLFRFRFRFRFLFLORFRTLFRFRFRFRFRFLFLFLFLFLFRFLFLFRFRPLFRTLFRFRFRORFRFLFRFRTRFLFRFLFRTRFLTRFRFRFRFLFLFRPRFRORTLFRFLFLFLFL |
11 | FRTLFRFLFRFLFRFLFRFLFRTLFRFRFRFLFLTLFRFLFRFRFRTLFRTRFROLFRFRFRPRFROLFRFLFRFLFRFRFRPLFRPRFLPRFROLFRFLFRFLPLFLFRFLFLPRFRORPLFRFRFRPLPRFLFRFRFRORFLFLFROLTRFLFLFRFLPLFLPRFRFRTRORPLOLPRPRFLFLFLFLORPRFRFRFRFRORFRFLFLTRFRPLFLTLOLFRFRFR |
12 | FRFRFLFLFRFRFLFLFRFLFRFRFRFROLFLFRFLFRFRFRFRFRFLFRFLFRTRFRFRPRFLOLORFROR |
13 | FLFLFLFRFLFRFLFRFLFRORFRFRFRTLFRFLFRTLFLFRFLFLFRPRFRFRFRPLFRFLFLTRFRFLFRFLFLORFRFLFLFLFRFRFROLFRFLTLPRFRPLFRTRFLTRFRFLFRFLFLFLFLORFLPRFRFLFRTLFLPLFRFLOL |
14 | TRFLFLFRTLFRFLFRFLFRTRFRTRORFRTLORFRPRTLTRFLFLFLPRFRFRFRFRFRTRFRPRFLOLFRFRFRFLFRFLFLTRTLPLFRFRFLFLFRTLFRFLFLFRFLORTLTLFRTRFRFLFLPLTLTRFLFLPLTRPLORFRFLFRFRFLFROLFRFRFLFLFLFLOLORFLFRFLFRFLFRFLFLORFRFRFRFRORFRFRFRORTRFRPLFRFRFRFLFRFLFLFLFRFLTLFLFRFRTRFLFLFRFLPRFRFRPLOLFRFLFRFRFRFRFLFRFRFLFRFRFLFRFRFLFLFLFRFRFROLTRFRPRTRPROLFRPRFRFRFLFRFRFRFRFRFLFLFRPLFRFRFLTLFRFRTLORFRPRFRFLFRFRORFRFRFRFLFLFRPLFLFROLFRORTRFRFLFRFRPROLFRPLFRFLTLPLFRPRFRPLFLPRFRFRPRPLFRTRFRFRFRFRFRTRFRFLFLFRFRFRFLTRFLFLFLFRFLFLFRFLFLFLORFLORFRFRFROLFLFRFLORFRFLFRTLPLFR |
15 | TRFLTRFLFLFRFRTLFLFRTRFRPRFRFLFLFLFRTRFLFRFRTRFRTRTLFLFLTRPLFROLFLFLTRFRTRFROLORFRFRPRTLFLFRPRFLTLFRFRFRFLFRFLFRORFRFRFLPLFLFLFLFLFLFRFRTRORORFRORFRFLFLFLFRFLFRFRFRFRTLFRFRFRFRFRFLFLPLFRFRFLFRFLFLOLFROLFRORFRORFRTLFRTLFRFLFRFLFRPLFRFRFLTLFRPRFRFLORFLFRFLFRFRORFLFRFRFRFRFRPRFRFLFRTRFRFRFLFLFLFLFLFRFLTRFRTRFRTLFLFRFLTRPRFLTLFLFRFLFRFRFLFRFRFRFLFRFLFRFLPRFRPRFRFLFRPLFRFRFROLTRFRFLOLFRFLFRFLFRFRFLFRORFROLFLPRFLFRFRTRPLFRPLFROLFRPLFRFLFRFRTLFLFLFLFRFLFRFRPROLFROLFRFLFRFLFRFRFLTRFRTLFLTLFLTLTLFRFRFRFROLFLFRORTLFRFRFLOROLFRPLFROLFRFROLFRFLFRFLFRFRFLFRFRFLFRFRFRFRFRFLFLFLFLFRTRPRFRPLFLFRFRFRFROLFLFLOLFRPRPLFRFLFLFLFLFRORFRFLFLFRFRTRFLFRFRPLFLFRORTLORFRFLFRFLFRPLTLORFRFLFL |
References
- Davis-Stewart, T. Stress Detection: Stress Detection Framework for Mission-Critical Application: Addressing Cybersecurity Analysts Using Facial Expression Recognition. J. Clin. Res. Case Stud. 2024, 2, 1–12. [Google Scholar] [CrossRef]
- Stephenson, M.D.; Schram, B.; Canetti, E.F.; Orr, R. Effects of acute stress on psychophysiology in armed tactical occupations: A narrative review. Int. J. Environ. Res. Public Health 2022, 19, 1802. [Google Scholar] [CrossRef] [PubMed]
- Crivatu, I.M.; Horvath, M.A.; Massey, K. The impacts of working with victims of sexual violence: A rapid evidence assessment. Trauma Violence Abus. 2023, 24, 56–71. [Google Scholar] [CrossRef] [PubMed]
- Bhatt, P.; Sethi, A.; Tasgaonkar, V.; Shroff, J.; Pendharkar, I.; Desai, A.; Sinha, P.; Deshpande, A.; Joshi, G.; Rahate, A. Machine learning for cognitive behavioral analysis: Datasets, methods, paradigms, and research directions. Brain Inform. 2023, 10, 18. [Google Scholar] [CrossRef]
- Othmani, A.; Brahem, B.; Haddou, Y. Machine learning-based approaches for post-traumatic stress disorder diagnosis using video and EEG sensors: A review. IEEE Sens. J. 2023, 23, 24135–24151. [Google Scholar] [CrossRef]
- Sharma, N.; Gedeon, T. Objective measures, sensors and computational techniques for stress recognition and classification: A survey. Comput. Methods Programs Biomed. 2012, 108, 1287–1301. [Google Scholar] [CrossRef]
- Baird, A.; Triantafyllopoulos, A.; Zänkert, S.; Ottl, S.; Christ, L.; Stappen, L.; Konzok, J.; Sturmbauer, S.; Meßner, E.-M.; Kudielka, B.M. An evaluation of speech-based recognition of emotional and physiological markers of stress. Front. Comput. Sci. 2021, 3, 750284. [Google Scholar] [CrossRef]
- Sharma, S.; Singh, G.; Sharma, M. A comprehensive review and analysis of supervised-learning and soft computing techniques for stress diagnosis in humans. Comput. Biol. Med. 2021, 134, 104450. [Google Scholar] [CrossRef]
- Anderson, G.S.; Di Nota, P.M.; Groll, D.; Carleton, R.N. Peer support and crisis-focused psychological interventions designed to mitigate post-traumatic stress injuries among public safety and frontline healthcare personnel: A systematic review. Int. J. Environ. Res. Public Health 2020, 17, 7645. [Google Scholar] [CrossRef]
- Jain, M.; Bhanodia, P.; Sethi, K.K. Artificial Intelligent Model for Riot and Violence Detection that Largely Affect Societal Health and Local Healthcare System. In Industry 4.0 and Healthcare: Impact of Artificial Intelligence; Springer: Singapore, 2023; pp. 113–131. [Google Scholar]
- Romero-Martínez, A.; Lila, M.; Moya-Albiol, L. Sympathetic nervous system predominance in intimate partner violence perpetrators after coping with acute stress. J. Interpers. Violence 2022, 37, NP10148–NP10169. [Google Scholar] [CrossRef]
- Constantin, M.G.; Ştefan, L.-D.; Ionescu, B.; Demarty, C.-H.; Sjöberg, M.; Schedl, M.; Gravier, G. Affect in multimedia: Benchmarking violent scenes detection. IEEE Trans. Affect. Comput. 2020, 13, 347–366. [Google Scholar] [CrossRef]
- Barrington, G.; Ferguson, C.J. Stress and violence in video games: Their influence on aggression. Trends Psychol. 2022, 30, 497–512. [Google Scholar] [CrossRef]
- Pradhan, R.K.; Kumar, U. Emotion, Well-Being, and Resilience: Theoretical Perspectives and Practical Applications; CRC Press: Boca Raton, FL, USA, 2021. [Google Scholar]
- Alarfaj, A.; Hakami, N.A.; Hosnimahmoud, H. Predicting Violence-Induced Stress in an Arabic Social Media Forum. Intell. Autom. Soft Comput. 2023, 35, 1423–1439. [Google Scholar] [CrossRef]
- Partila, P.; Tovarek, J.; Rozhon, J.; Jalowiczor, J. Human stress detection from the speech in danger situation. In Proceedings of the Mobile Multimedia/Image Processing, Security, and Applications 2019, Baltimore, MD, USA, 15 April 2019; pp. 179–185. [Google Scholar]
- Yange, T.S.; Egbunu, C.O.; Onyekware, O.; Rufai, M.A.; Godwin, C. Violence detection in ranches using computer vision and convolution neural network. J. Comput. Sci. Inf. Technol. 2021, 7, 94–104. [Google Scholar] [CrossRef]
- Shindhe, D.; Govindraj, S.; Omkar, S. Real-time Violence Activity Detection Using Deep Neural Networks in a CCTV camera. In Proceedings of the 2021 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT), Bangalore, India, 9–11 July 2021; pp. 1–6. [Google Scholar]
- Shahbazi, Z.; Byun, Y.-C. Early life stress detection using physiological signals and machine learning pipelines. Biology 2023, 12, 91. [Google Scholar] [CrossRef]
- Kumari, K.; Singh, J.P.; Dwivedi, Y.K.; Rana, N.P. Multi-modal aggression identification using convolutional neural network and binary particle swarm optimization. Future Gener. Comput. Syst. 2021, 118, 187–197. [Google Scholar] [CrossRef]
- Jaafar, N.; Lachiri, Z. Multimodal fusion methods with deep neural networks and meta-information for aggression detection in surveillance. Expert Syst. Appl. 2023, 211, 118523. [Google Scholar] [CrossRef]
- Rendón-Segador, F.J.; Álvarez-García, J.A.; Enríquez, F.; Deniz, O. Violencenet: Dense multi-head self-attention with bidirectional convolutional lstm for detecting violence. Electronics 2021, 10, 1601. [Google Scholar] [CrossRef]
- Anwar, A.; Kanjo, E.; Anderez, D.O. Deepsafety: Multi-level audio-text feature extraction and fusion approach for violence detection in conversations. arXiv 2022, arXiv:2206.11822. [Google Scholar]
- Singh, S.; Dewangan, S.; Krishna, G.S.; Tyagi, V.; Reddy, S.; Medi, P.R. Video vision transformers for violence detection. arXiv 2022, arXiv:2209.03561. [Google Scholar]
- Hummer, T.A. Media violence effects on brain development: What neuroimaging has revealed and what lies ahead. Am. Behav. Sci. 2015, 59, 1790–1806. [Google Scholar] [CrossRef]
- Siann, G. Accounting for Aggression: Perspectives on Aggression and Violence; Taylor & Francis: London, UK, 1985. [Google Scholar]
- Morabito, F.C.; Ieracitano, C.; Mammone, N. An explainable Artificial Intelligence approach to study MCI to AD conversion via HD-EEG processing. Clin. EEG Neurosci. 2023, 54, 51–60. [Google Scholar] [CrossRef] [PubMed]
- Ahmad, I.; Yao, C.; Li, L.; Chen, Y.; Liu, Z.; Ullah, I.; Shabaz, M.; Wang, X.; Huang, K.; Li, G. An efficient feature selection and explainable classification method for EEG-based epileptic seizure detection. J. Inf. Secur. Appl. 2024, 80, 103654. [Google Scholar] [CrossRef]
- Kiani, M.; Andreu-Perez, J.; Hagras, H.; Rigato, S.; Filippetti, M.L. Towards understanding human functional brain development with explainable artificial intelligence: Challenges and perspectives. IEEE Comput. Intell. Mag. 2022, 17, 16–33. [Google Scholar] [CrossRef]
- Subasi, A. EEG signal classification using wavelet feature extraction and a mixture of expert model. Expert Syst. Appl. 2007, 32, 1084–1093. [Google Scholar] [CrossRef]
- Siever, L.J. Neurobiology of aggression and violence. Am. J. Psychiatry 2008, 165, 429–442. [Google Scholar] [CrossRef]
- Tuncer, T.; Dogan, S.; Tasci, I.; Baygin, M.; Barua, P.D.; Acharya, U.R. Lobish: Symbolic language for interpreting electroencephalogram signals in language detection using channel-based transformation and pattern. Diagnostics 2024, 14, 1987. [Google Scholar] [CrossRef]
- Dogan, A.; Akay, M.; Barua, P.D.; Baygin, M.; Dogan, S.; Tuncer, T.; Dogru, A.H.; Acharya, U.R. PrimePatNet87: Prime pattern and tunable q-factor wavelet transform techniques for automated accurate EEG emotion recognition. Comput. Biol. Med. 2021, 138, 104867. [Google Scholar] [CrossRef]
- Zhang, Z.; Schwartz, S.; Wagner, L.; Miller, W. A greedy algorithm for aligning DNA sequences. J. Comput. Biol. 2000, 7, 203–214. [Google Scholar] [CrossRef]
- Goldberger, J.; Hinton, G.E.; Roweis, S.; Salakhutdinov, R.R. Neighbourhood components analysis. Adv. Neural Inf. Process. Syst. 2004, 17, 513–520. [Google Scholar]
- Maillo, J.; Ramírez, S.; Triguero, I.; Herrera, F. kNN-IS: An Iterative Spark-based design of the k-Nearest Neighbors classifier for big data. Knowl.-Based Syst. 2017, 117, 3–15. [Google Scholar] [CrossRef]
- Shah, S.J.H.; Albishri, A.; Kang, S.S.; Lee, Y.; Sponheim, S.R.; Shim, M. ETSNet: A deep neural network for EEG-based temporal–spatial pattern recognition in psychiatric disorder and emotional distress classification. Comput. Biol. Med. 2023, 158, 106857. [Google Scholar] [CrossRef] [PubMed]
- Tahira, M.; Vyas, P. EEG based mental stress detection using deep learning techniques. In Proceedings of the 2023 International Conference on Distributed Computing and Electrical Circuits and Electronics (ICDCECE), Ballari, India, 29–30 April 2023; pp. 1–7. [Google Scholar]
- Tuncer, T.; Dogan, S.; Baygin, M.; Tasci, I.; Mungen, B.; Tasci, B.; Barua, P.D.; Acharya, U. TTPat and CWINCA-based explainable feature engineering model using Directed Lobish: A new EEG artifact classification model. Knowl.-Based Syst. 2024, 305, 112555. [Google Scholar] [CrossRef]
- Tuncer, T.; Dogan, S.; Tasci, I.; Tasci, B.; Hajiyeva, R. TATPat based explainable EEG model for neonatal seizure detection. Sci. Rep. 2024, 14, 26688. [Google Scholar] [CrossRef]
- Cambay, V.Y.; Tasci, I.; Tasci, G.; Hajiyeva, R.; Dogan, S.; Tuncer, T. QuadTPat: Quadruple Transition Pattern-based explainable feature engineering model for stress detection using EEG signals. Sci. Rep. 2024, 14, 27320. [Google Scholar] [CrossRef]
The EEG violence detection dataset.
No | Class | Number of EEGs | Number of Records | Number of Participants |
---|---|---|---|---|
0 | Control | 442 | 38 | 8 |
1 | Violence | 286 | 47 | 6 |
 | Total | 728 | 85 | 14 |
The EEG stress detection dataset.
No | Class | Number of EEGs | Number of Records | Number of Participants |
---|---|---|---|---|
1 | Stress | 1757 | 150 | 150 |
2 | Control | 1882 | 160 | 160 |
 | Total | 3667 | 310 | 310 |
No | Channel | Symbol | Meaning |
---|---|---|---|
1–4 | AF3, F7, F3, FC5 | FL | This represents logical thinking, planning, and decision-making. It is associated with speech production, Broca’s area, which is essential for language articulation and processing. It is involved in sequential thought processes, working memory, and the regulation of voluntary movements. This region also contributes to problem-solving abilities and attention control, facilitating complex cognitive tasks. The left frontal lobe plays a significant role in executive functions, emotional regulation, and the integration of information from other brain regions to form coherent responses. |
5 | T7 | TL | Language comprehension and production are represented here, including Wernicke’s area, vital for understanding spoken and written language. Involved in verbal memory and the processing of semantic information, it contributes to the recognition of words and sentences. This area is essential for meaningful communication and language-related learning processes. The left temporal lobe also participates in auditory processing, allowing for the discrimination of sounds and the interpretation of complex auditory stimuli, including language nuances and emotional tone. |
6 | P7 | PL | Language and mathematical processing are represented in this region. It plays a role in understanding and producing speech, reading, writing, and numerical computations. The left parietal lobe integrates sensory information related to language and assists in tasks requiring attention to detail, logical reasoning, and spatial orientation in linguistic contexts. It is involved in the processing of tactile sensory information, helping to perceive and interpret touch sensations, and contributes to proprioception, the sense of body position in space. |
7 | O1 | OL | Visual information processing from the right visual field is represented here. Involved in recognizing letters, words, and other visual stimuli related to language, the left occipital lobe plays a crucial role in visual perception and interpretation. It contributes to tasks such as reading and the visual recognition of symbols, aiding in language comprehension. This region is essential for processing fine visual details and is involved in the analysis of shape, color, and motion, which are critical for accurate visual recognition and interaction with the environment. |
8 | O2 | OR | Visual information processing from the left visual field is represented here. Involved in recognizing faces, scenes, and the spatial orientation of objects, the right occipital lobe contributes to visual–spatial processing. It aids in depth perception and interpreting visual motion, essential for navigating the environment and recognizing visual patterns. This area is crucial for holistic visual processing, enabling the recognition of complex images and the perception of visual scenes as a whole. |
9 | P8 | PR | It plays a role in recognizing patterns, shapes, and object positions. The right parietal lobe is essential for processing non-verbal cues, spatial orientation, and integrating sensory information to form a coherent perception of the surroundings. It facilitates activities like map reading, spatial reasoning, and the understanding of spatial relationships between objects, which are critical for movement coordination and environmental interaction. |
10 | T8 | TR | Sound processing, including aspects of language and music, is represented. Involved in recognizing faces and objects, the right temporal lobe contributes to the processing of auditory information, particularly the nuances of tone, pitch, and rhythm in music. It is important for memory associated with visual and auditory stimuli and plays a role in interpreting the emotional content of sounds. This region is also involved in the recognition of complex patterns and the retrieval of non-verbal memories. |
11–14 | FC6, F4, F8, AF4 | FR | The right frontal lobe is associated with divergent thinking, emotional expression, and interpreting social cues. It contributes to empathy, understanding others’ emotions, and creative problem-solving. This area is involved in the regulation of behavior, the processing of emotional responses, and the management of attention and motivation. |
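The table above fixes a one-to-one mapping from the 14 EEG channels to the eight DLob lobe symbols. The snippet below simply encodes that mapping and shows how an ordered sequence of channel names can be translated into a DLob string; which channel is attributed to each window or feature is an assumption made here for illustration only, not the paper’s exact channel-based transformation rule.

```python
# Channel-to-DLob-symbol mapping, taken directly from the table above.
CHANNEL_TO_SYMBOL = {
    "AF3": "FL", "F7": "FL", "F3": "FL", "FC5": "FL",
    "T7": "TL", "P7": "PL", "O1": "OL", "O2": "OR",
    "P8": "PR", "T8": "TR",
    "FC6": "FR", "F4": "FR", "F8": "FR", "AF4": "FR",
}

def channels_to_dlob(channel_sequence):
    """Translate an ordered sequence of channel names into a DLob string.

    How channels are selected per window/feature is our assumption, not the paper's rule.
    """
    return "".join(CHANNEL_TO_SYMBOL[ch] for ch in channel_sequence)

# Hypothetical example: the dominant channel of four consecutive windows
print(channels_to_dlob(["AF4", "P8", "F4", "T7"]))  # -> FRPRFRTL
```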
Method | Parameters |
---|---|
ChMinMaxPat | Input: channel values of each data point. Identity generation: minimum and maximum. Distance computation: distance to the average value. Feature vector generation methods: transition table, channel-based coding, and histogram extraction and concatenation. Number of feature vectors generated: 15. Feature vector lengths (with 14 channels): 196 for f1–f6 and f8–f13; 1176 for f7 and f14; 2352 for f15. |
CWNCA | Threshold value: 0.99. A variable number of features is selected depending on the dataset used. |
tkNN | k: 1–10. Distance metrics: city block, Euclidean, cosine. Weights: inverse, equal. Number of outcomes generated: 118. Final outcome selection method: maximum classification accuracy. |
IMV | Loop range: from 3 to the number of outputs. Sorting criterion: classification accuracy. Sorting order: descending. Voting function: mode. |
Greedy | Selection of the most accurate outcome |
XAI generation | Number of DLob symbols used: 8. Entropy: information entropy. Connectome generation method: symbol transition. |
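The IMV and greedy steps in the table above amount to the following: sort the tkNN outcomes by accuracy in descending order, apply mode (majority) voting to the top t outcomes for every t from 3 up to the number of outcomes, and keep the single most accurate result among all original and voted outcomes. The snippet below is a minimal sketch of that fusion loop under exactly these listed settings; it is not the authors’ released code.

```python
import numpy as np

def imv_with_greedy(outcomes, labels, start=3):
    """Iterative majority voting (IMV) followed by greedy selection.

    outcomes: list of 1D integer prediction arrays (e.g., tkNN outputs).
    labels:   1D array of true labels.
    Returns the most accurate prediction vector among all inputs and voted results.
    """
    labels = np.asarray(labels)
    accuracy = lambda pred: float(np.mean(pred == labels))

    # Sort outcomes by classification accuracy, descending (per the IMV settings above).
    ranked = sorted(outcomes, key=accuracy, reverse=True)

    candidates = list(ranked)
    for t in range(start, len(ranked) + 1):
        stacked = np.vstack(ranked[:t])
        # Mode (majority vote) across the top-t outcomes for each sample.
        voted = np.array([np.bincount(col).argmax() for col in stacked.T])
        candidates.append(voted)

    # Greedy step: keep the single most accurate candidate.
    return max(candidates, key=accuracy)

# Tiny usage example with three noisy prediction vectors
y = np.array([0, 1, 1, 0, 1, 0])
preds = [np.array([0, 1, 1, 0, 1, 1]),
         np.array([0, 1, 0, 0, 1, 0]),
         np.array([1, 1, 1, 0, 1, 0])]
best = imv_with_greedy(preds, y)
print(best, float(np.mean(best == y)))
```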
Metric | EEG Violence Detection: 10-Fold CV | EEG Violence Detection: LORO CV | EEG Stress Detection: 10-Fold CV | EEG Stress Detection: LORO CV |
---|---|---|---|---|
Accuracy | 99.86 | 99.31 | 92.86 | 73.30 |
Sensitivity | 99.65 | 99.65 | 92.32 | 69.86 |
Specificity | 100 | 99.10 | 93.36 | 76.57 |
Geometric mean | 99.82 | 99.37 | 92.84 | 73.14 |
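For reference, the four metrics in the table follow the standard binary-classification definitions. The snippet below computes them from raw confusion-matrix counts; the example counts are chosen by us so that the outputs reproduce the violence detection 10-fold CV row above (one misclassified violence EEG out of 286 and no misclassified controls out of 442), purely as a consistency illustration, and are not taken from the paper’s confusion matrices.

```python
import math

def binary_metrics(tp, fn, fp, tn):
    """Accuracy, sensitivity, specificity, and geometric mean (as percentages)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)                  # recall on the positive class
    specificity = tn / (tn + fp)                  # recall on the negative class
    geometric_mean = math.sqrt(sensitivity * specificity)
    return {name: round(100 * value, 2) for name, value in [
        ("accuracy", accuracy), ("sensitivity", sensitivity),
        ("specificity", specificity), ("geometric mean", geometric_mean)]}

# 286 violence EEGs (positive class) and 442 control EEGs (negative class)
print(binary_metrics(tp=285, fn=1, fp=0, tn=442))
# {'accuracy': 99.86, 'sensitivity': 99.65, 'specificity': 100.0, 'geometric mean': 99.82}
```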
Study | Method | Dataset | Accuracy (%) |
---|---|---|---|
Shah et al. [37] | EEG temporal–spatial network | 1. IDD dataset (14 participants); 2. SEED dataset (15 participants) | 1. 5-fold CV: 99.57; 2. 60:40 split: 98.50 |
Tahira and Vyas [38] | CNN | Physionet EEG dataset | 10-fold CV: 99.20 |
Our method | ChMinMaxPat-based SOXFE model | 1. Collected data (286 violence, 442 control EEGs); 2. Collected data (1757 stress, 1882 control EEGs) | Violence dataset: 1. 10-fold CV: 99.86; 2. LORO CV: 99.31. Stress dataset: 1. 10-fold CV: 92.86; 2. LORO CV: 73.30 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).