Personalization of Affective Models Using Classical Machine Learning: A Feasibility Study
Abstract
1. Introduction
2. Prior Work and Background
3. Methods
3.1. Emognition Dataset
3.2. Data Preprocessing and Arrangement
3.3. Feature Extraction and Selection
3.4. Model Selection and Evaluation
4. Results
5. Discussion
6. Conclusions
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
| Actual \ Prompt | Amusement | Anger | Awe | Disgust | Enthusiasm | Fear | Liking | Neutral | Sadness | Surprise |
|---|---|---|---|---|---|---|---|---|---|---|
| Anger | 0 | 0 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 |
| Disgust | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Sadness | 398 | 213 | 135 | 227 | 87 | 111 | 128 | 1812 | 9 | 159 |
| Neutral | 6766 | 6752 | 6822 | 3339 | 7036 | 6993 | 6541 | 5468 | 7126 | 966 |
| Surprise | 0 | 37 | 0 | 0 | 0 | 116 | 1 | 0 | 0 | 0 |
| Happiness | 40 | 198 | 0 | 495 | 29 | 0 | 6 | 0 | 0 | 1832 |
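The table above cross-tabulates, for each eliciting video prompt (columns), the number of frames assigned to each detected facial expression (rows). As a minimal illustration of how such a cross-tabulation can be inspected — our own sketch, not code from the paper — the snippet below loads the counts with pandas and column-normalizes them so that each prompt's expression distribution sums to 1:

```python
import numpy as np
import pandas as pd

# Frame counts copied from the table above: rows are detected
# expressions ("Actual"), columns are the eliciting video prompts.
prompts = ["Amusement", "Anger", "Awe", "Disgust", "Enthusiasm",
           "Fear", "Liking", "Neutral", "Sadness", "Surprise"]
expressions = ["Anger", "Disgust", "Sadness", "Neutral", "Surprise", "Happiness"]
counts = np.array([
    [0,    0,    0,    0,    0,    3,    0,    0,    0,    0],
    [0,    0,    0,    0,    0,    0,    0,    0,    0,    0],
    [398,  213,  135,  227,  87,   111,  128,  1812, 9,    159],
    [6766, 6752, 6822, 3339, 7036, 6993, 6541, 5468, 7126, 966],
    [0,    37,   0,    0,    0,    116,  1,    0,    0,    0],
    [40,   198,  0,    495,  29,   0,    6,    0,    0,    1832],
])
table = pd.DataFrame(counts, index=expressions, columns=prompts)

# Normalize each prompt column to proportions; dividing a DataFrame by
# a column-indexed Series aligns on columns.
proportions = table / table.sum(axis=0)
print(proportions.round(3))
```

Viewed as proportions, the dominance of neutral expressions under every prompt except Surprise is immediately apparent.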
| Subject | Personalized KNN F1 | Personalized RF F1 | Personalized DNN F1 | Generic KNN F1 | Generic RF F1 | Generic DNN F1 |
|---|---|---|---|---|---|---|
| No. 22 | 93.1% | 95.3% | 88.2% | 86.9% | 91.4% | 76.7% |
| No. 25 | 89.3% | 91.5% | 83.4% | 88.0% | 92.0% | 81.2% |
| No. 28 | 96.6% | 97.8% | 93.7% | 88.5% | 92.7% | 82.9% |
| No. 29 | 83.6% | 86.3% | 78.6% | 90.0% | 92.0% | 84.4% |
| No. 32 | 93.4% | 95.1% | 91.8% | 92.6% | 95.0% | 88.0% |
| No. 39 | 87.2% | 90.0% | 83.8% | 92.1% | 94.0% | 87.0% |
| No. 40 | 99.6% | 99.9% | 97.2% | 88.3% | 91.0% | 78.4% |
| No. 42 | 93.2% | 94.7% | 89.9% | 87.2% | 91.6% | 78.0% |
| No. 45 | 82.5% | 86.3% | 73.3% | 82.2% | 87.1% | 63.9% |
| No. 48 | 86.3% | 89.7% | 84.1% | 89.7% | 91.0% | 83.7% |
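The left half of this table evaluates personalized models, fit to a single subject's own data, and the right half generic models, fit across subjects; the precise training protocol is described in Section 3.4. Purely as a hypothetical sketch of the two regimes — using scikit-learn with a random forest standing in for the full model set, and assuming a leave-one-subject-out convention for the generic case, which may differ from the authors' exact split:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

def personalized_f1(X, y, subject_ids, target, seed=0):
    """Fit and evaluate a model on one subject's own data only."""
    m = subject_ids == target
    X_tr, X_te, y_tr, y_te = train_test_split(
        X[m], y[m], test_size=0.2, stratify=y[m], random_state=seed)
    clf = RandomForestClassifier(random_state=seed).fit(X_tr, y_tr)
    return f1_score(y_te, clf.predict(X_te), average="weighted")

def generic_f1(X, y, subject_ids, target, seed=0):
    """Fit on all other subjects, evaluate on the held-out subject."""
    train, test = subject_ids != target, subject_ids == target
    clf = RandomForestClassifier(random_state=seed).fit(X[train], y[train])
    return f1_score(y[test], clf.predict(X[test]), average="weighted")
```

Here `X` would hold per-frame pose and facial-landmark features, `y` the emotion labels, and `subject_ids` the subject each frame belongs to; all three names, and the weighted-F1 choice, are our assumptions for illustration.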
| Model | Data Type | Dataset | Metric | Classifier | Value |
|---|---|---|---|---|---|
| Ours-Generic | Pose/Facial Landmarks | Emognition | F1 score | KNN | 88.50% |
| Ours-Generic | Pose/Facial Landmarks | Emognition | F1 score | RF | 91.78% |
| Ours-Generic | Pose/Facial Landmarks | Emognition | F1 score | DNN | 80.42% |
| Ours-Personalized | Pose/Facial Landmarks | Emognition | F1 score | KNN | 90.48% |
| Ours-Personalized | Pose/Facial Landmarks | Emognition | F1 score | RF | 92.66% |
| Ours-Personalized | Pose/Facial Landmarks | Emognition | F1 score | DNN | 86.40% |
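The summary scores in this table are consistent, to within 0.1 percentage points, with unweighted means of the ten per-subject scores in the previous table. The short numpy check below (our illustration, not the authors' code) performs that averaging:

```python
import numpy as np

# Per-subject F1 scores (%) copied from the preceding table.
f1 = {
    ("Personalized", "KNN"): [93.1, 89.3, 96.6, 83.6, 93.4, 87.2, 99.6, 93.2, 82.5, 86.3],
    ("Personalized", "RF"):  [95.3, 91.5, 97.8, 86.3, 95.1, 90.0, 99.9, 94.7, 86.3, 89.7],
    ("Personalized", "DNN"): [88.2, 83.4, 93.7, 78.6, 91.8, 83.8, 97.2, 89.9, 73.3, 84.1],
    ("Generic", "KNN"):      [86.9, 88.0, 88.5, 90.0, 92.6, 92.1, 88.3, 87.2, 82.2, 89.7],
    ("Generic", "RF"):       [91.4, 92.0, 92.7, 92.0, 95.0, 94.0, 91.0, 91.6, 87.1, 91.0],
    ("Generic", "DNN"):      [76.7, 81.2, 82.9, 84.4, 88.0, 87.0, 78.4, 78.0, 63.9, 83.7],
}
for (regime, model), scores in f1.items():
    print(f"{regime:12s} {model:3s} mean F1 = {np.mean(scores):.2f}%")
```

Running this reproduces the personalized entries (90.48%, 92.66%, 86.40%) and the generic RF and DNN entries exactly, which suggests the table reports simple averages over the ten subjects shown.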
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Cite as: Kargarandehkordi, A.; Kaisti, M.; Washington, P. Personalization of Affective Models Using Classical Machine Learning: A Feasibility Study. Appl. Sci. 2024, 14, 1337. https://doi.org/10.3390/app14041337