Comparing Synchronicity in Body Movement among Jazz Musicians with Their Emotions
Abstract
1. Introduction
- We developed a high-performing system for real-time estimation of multi-person pose synchronization, detecting body synchronization across diverse visual inputs. It leverages Lightweight OpenPose [23] for efficient pose estimation, running at 5–6 frames per second on a regular CPU. From pre-recorded rehearsal videos of jazz musicians, we extract 17 body-synchronization metrics covering arm, leg, and head movements; these metrics serve as features for our deep learning model. The system incorporates a robust synchronization metric that enables accurate detection across varying pose orientations.
- To assess the relationship between facial emotions and team entanglement, we compute Pearson correlations between facial emotions and the various body-synchrony scores. We also run a regression analysis on the time-series data, using the body-synchrony scores as predictors and facial emotions as dependent variables. This lets us estimate the impact of body synchrony on facial emotions, providing deeper insight into the link between team dynamics and emotional expression.
- We propose a machine learning pipeline that predicts the collective emotions of jazz musicians from body-synchrony scores, achieving accurate and interpretable results.
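The pipeline sketched in the bullets above can be illustrated end to end. The following is a minimal, hypothetical sketch (not the authors' implementation): it scores per-frame synchrony between two players as the mean cosine similarity of corresponding body-part vectors, then correlates that synchrony series with a facial-emotion series using Pearson's r. The keypoint names and all numbers are toy data.

```python
import math

def cosine(u, v):
    """Cosine similarity between two 2-D limb vectors."""
    dot = u[0] * v[0] + u[1] * v[1]
    nu, nv = math.hypot(*u), math.hypot(*v)
    return dot / (nu * nv) if nu and nv else 0.0

def synchrony(frames_a, frames_b):
    """Per-frame synchrony: mean cosine similarity over shared body parts."""
    scores = []
    for fa, fb in zip(frames_a, frames_b):
        sims = [cosine(fa[part], fb[part]) for part in fa if part in fb]
        scores.append(sum(sims) / len(sims))
    return scores

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy example: two musicians, three frames, two body-part vectors each.
a = [{"r_upper_arm": (1, 0), "neck_nose": (0, 1)},
     {"r_upper_arm": (1, 1), "neck_nose": (0, 1)},
     {"r_upper_arm": (0, 1), "neck_nose": (1, 1)}]
b = [{"r_upper_arm": (1, 0), "neck_nose": (0, 1)}] * 3

sync = synchrony(a, b)                  # drifts apart over the three frames
emotion = [0.9, 0.6, 0.3]               # hypothetical per-frame emotion scores
r = pearson(sync, emotion)              # strongly positive on this toy data
```

In the full pipeline such synchrony series, one per body-part vector, become the feature columns for the regression and deep learning models described later.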
2. Related Work
2.1. Emotions
2.2. Facial Emotion Recognition (FER)
3. Methodology
3.1. Extracting FER Time Series Data
3.2. Team Entanglement
3.3. Real-Time Estimation of Multi-Person Pose Synchronization
3.3.1. Pose Estimation
3.3.2. Synchronization Calculation
3.3.3. Distance
3.4. Data Extraction and Pre-Processing
4. Results
4.1. Correlation Analysis
4.2. Regression Analysis
4.3. Deep Learning Model
5. Discussion
6. Limitations
7. Future Work and Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Usman, M.; Latif, S.; Qadir, J. Using deep autoencoders for facial expression recognition. In Proceedings of the 13th International Conference on Emerging Technologies (ICET), Islamabad, Pakistan, 27–28 December 2017; pp. 1–6.
- Guo, R.; Li, S.; He, L.; Gao, W.; Qi, H.; Owens, G. Pervasive and unobtrusive emotion sensing for human mental health. In Proceedings of the 7th International Conference on Pervasive Computing Technologies for Healthcare and Workshops, Venice, Italy, 5–8 May 2013; pp. 436–439.
- De Nadai, S.; D’Incà, M.; Parodi, F.; Benza, M.; Trotta, A.; Zero, E.; Zero, L.; Sacile, R. Enhancing safety of transport by road by on-line monitoring of driver emotions. In Proceedings of the 11th System of Systems Engineering Conference (SoSE), Kongsberg, Norway, 12–16 June 2016; pp. 1–4.
- Verschuere, B.; Crombez, G.; Koster, E.H.W.; Uzieblo, K. Psychopathy and Physiological Detection of Concealed Information: A review. Psychol. Belg. 2006, 46, 99–116.
- Goldenberg, A.; Garcia, D.; Suri, G.; Halperin, E.; Gross, J. The Psychology of Collective Emotions. OSF Prepr. 2017.
- Kerkeni, L.; Serrestou, Y.; Raoof, K.; Cléder, C.; Mahjoub, M.; Mbarki, M. Automatic Speech Emotion Recognition Using Machine Learning; IntechOpen: London, UK, 2019.
- Ali, M.; Mosa, A.H.; Machot, F.A.; Kyamakya, K. Emotion Recognition Involving Physiological and Speech Signals: A Comprehensive Review. In Recent Advances in Nonlinear Dynamics and Synchronization. Studies in Systems, Decision and Control; Springer: Cham, Switzerland, 2018; Volume 109.
- Czarnocki, J. Will new definitions of emotion recognition and biometric data hamper the objectives of the proposed AI Act? In Proceedings of the International Conference of the Biometrics Special Interest Group (BIOSIG), Darmstadt, Germany, 15–17 September 2021; pp. 1–4.
- Galesic, M.; Barkoczi, D.; Berdahl, A.M.; Biro, D.; Carbone, G.; Giannoccaro, I.; Goldstone, R.L.; Gonzalez, C.; Kandler, A.; Kao, A.B.; et al. Beyond collective intelligence: Collective adaptation. J. R. Soc. Interface 2023, 20, 20220736.
- Li, S.; Deng, W. Deep Facial Expression Recognition: A Survey. IEEE Trans. Affect. Comput. 2022, 13, 1195–1215.
- Schindler, K.; Van Gool, L.; de Gelder, B. Recognizing emotions expressed by body pose: A biologically inspired neural model. Neural Netw. 2008, 21, 1238–1246.
- Yang, Z.; Kay, A.; Li, Y.; Cross, W.; Luo, J. Pose-based Body Language Recognition for Emotion and Psychiatric Symptom Interpretation. In Proceedings of the 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; pp. 294–301.
- Chartrand, T.L.; Bargh, J.A. The chameleon effect: The perception-behavior link and social interaction. J. Personal. Soc. Psychol. 1999, 76, 893–910.
- Chu, D.A. Athletic training issues in synchronized swimming. Clin. Sport. Med. 1999, 18, 437–445.
- Kramer, R. Sequential effects in Olympic synchronized diving scores. R. Soc. Open Sci. 2017, 4, 160812.
- Zhou, Z.; Xu, A.; Yatani, K. SyncUp: Vision-based practice support for synchronized dancing. Proc. ACM Interact. Mobile Wearable Ubiquitous Technol. 2021, 5, 143.
- Balconi, M.; Cassioli, F.; Fronda, G.; Venutelli, M. Cooperative leadership in hyperscanning. Brain and body synchrony during manager-employee interactions. Neuropsychol. Trends 2019, 26, 23–44.
- Ravreby, I.; Yeshurun, Y. Liking as a balance between synchronization, complexity, and novelty. Sci. Rep. 2022, 12, 3181.
- Yun, K.; Watanabe, K.; Shimojo, S. Interpersonal body and neural synchronization as a marker of implicit social interaction. Sci. Rep. 2012, 2, 959.
- Gloor, P.; Zylka, M.; Fronzetti, A.; Makai, M. ‘Entanglement’—A new dynamic metric to measure team flow. Soc. Netw. 2022, 70, 100–111.
- Glowinski, D.; Camurri, A.; Volpe, G.; Dael, N.; Scherer, K. Technique for automatic emotion recognition by body gesture analysis. In Proceedings of the Computer Vision and Pattern Recognition Workshops, Anchorage, AK, USA, 23–28 June 2008.
- Van Delden, J. Real-Time Estimation of Multi-Person Pose Synchronization Using OpenPose. Master’s Thesis, Department of Informatics, Technical University of Munich, Munich, Germany, 2022.
- Osokin, D. Real-time 2D Multi-Person Pose Estimation on CPU: Lightweight OpenPose. arXiv 2018, arXiv:1811.12004.
- Wibawa, A.P.; Utama, A.B.P.; Elmunsyah, H.; Pujianto, U.; Dwiyanto, F.A.; Hernandez, L. Time-series analysis with smoothed Convolutional Neural Network. J. Big Data 2022, 9, 44.
- Colombetti, G. From affect programs to dynamical discrete emotions. Philos. Psychol. 2009, 22, 407–425.
- Ekman, P.; Friesen, W.V. Constants across cultures in the face and emotion. J. Personal. Soc. Psychol. 1971, 17, 124–129.
- Plutchik, R. The nature of emotions: Human emotions have deep evolutionary roots, a fact that may explain their complexity and provide tools for clinical practice. Am. Sci. 2001, 89, 344–350.
- Posner, J.; Russell, J.; Peterson, B. The circumplex model of affect: An integrative approach to affective neuroscience, cognitive development, and psychopathology. Dev. Psychopathol. 2005, 17, 715–734.
- Ambady, N.; Weisbuch, M. Nonverbal behavior. In Handbook of Social Psychology, 5th ed.; Wiley: Hoboken, NJ, USA, 2010; Volume 1, pp. 464–497.
- Rule, N.; Ambady, N. First Impressions of the Face: Predicting Success. Soc. Personal. Psychol. Compass 2010, 4, 506–516.
- Purves, D.; Augustine, G.; Fitzpatrick, D.; Katz, L.; LaMantia, A.; McNamara, J.; Williams, S. Neuroscience, 2nd ed.; Sinauer Associates: Sunderland, MA, USA, 2001.
- Li, M.; Zhang, W.; Hu, B.; Kang, J.; Wang, Y.; Lu, S. Automatic Assessment of Depression and Anxiety through Encoding Pupil-wave from HCI in VR Scenes. ACM Trans. Multimed. Comput. Commun. Appl. 2022.
- Roessler, J.; Gloor, P. Measuring happiness increases happiness. J. Comput. Soc. Sci. 2020, 4, 123–146.
- Kahou, S.E.; Pal, C.; Bouthillier, X.; Froumenty, P.; Gülçehre, Ç.; Memisevic, R.; Vincent, P.; Courville, A.; Bengio, Y.; Ferrari, R.C.; et al. Combining modality specific deep neural networks for emotion recognition in video. In Proceedings of the 15th ACM on International Conference on Multimodal Interaction, Sydney, Australia, 9–13 December 2013; pp. 543–550.
- Khan, A.; Lawo, M. Recognizing Emotion from Blood Volume Pulse and Skin Conductance Sensor Using Machine Learning Algorithms; Springer: Cham, Switzerland, 2016.
- Mehta, D.; Siddiqui, M.; Javaid, A. Recognition of Emotion Intensities Using Machine Learning Algorithms: A Comparative Study. Sensors 2019, 19, 1897.
- Happy, S.; George, A.; Routray, A. Realtime facial expression classification system using local binary patterns. In Proceedings of the 4th International Conference on Intelligent Human Computer Interaction, Kharagpur, India, 27–29 December 2012; IEEE: New York, NY, USA, 2012; pp. 1–5.
- Ghimire, D.; Lee, J. Geometric Feature-based facial expression recognition in image sequences using multi-class Adaboost and Support Vector Machines. Sensors 2013, 13, 7714–7734.
- Jung, H.; Lee, S.; Yim, J.; Park, S.; Kim, J. Joint fine-tuning in deep neural networks for facial expression recognition. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 2983–2991.
- Jain, D.; Zhang, Z.; Huang, K. Multiangle Optimal Pattern-based Deep Learning for Automatic Facial Expression Recognition. Pattern Recognit. Lett. 2017, 139, 157–165.
- Bhave, A.; Renold, F.; Gloor, P. Using Plants as Biosensors to Measure the Emotions of Jazz Musicians. In Handbook of Social Computing; Edward Elgar Publishing: Cheltenham, UK, 2023.
- Page, P.; Kilian, K.; Donner, M. Enhancing Quality of Virtual Meetings through Facial and Vocal Emotion Recognition; COINs Seminar Paper, Summer Semester; University of Cologne: Cologne, Germany, 2021.
- Elkins, A.N.; Muth, E.R.; Hoover, A.W.; Walker, A.D.; Carpenter, T.L.; Switzer, F.S. Physiological compliance and team performance. Appl. Ergon. 2009, 40, 997–1003.
- Stevens, R.; Gorman, J.; Amazeen, P.; Likens, A.; Galloway, T. The organizational neurodynamics of teams. Nonlinear Dyn. Psychol. Life Sci. 2013, 17, 67–86.
- Bakker, A. Flow among music teachers and their students: The crossover of peak experiences. J. Vocat. Behav. 2005, 66, 26–44.
- Toshev, A.; Szegedy, C. DeepPose: Human pose estimation via deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1653–1660.
- Kocabas, M.; Karagoz, S.; Akbas, E. MultiPoseNet: Fast multi-person pose estimation using pose residual network. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 417–433.
- Dang, Q.; Yin, J.; Wang, B.; Zheng, W. Deep learning based 2D human pose estimation: A survey. Tsinghua Sci. Technol. 2019, 24, 663–676.
- Jin, S.; Ma, X.; Han, Z.; Wu, Y.; Yang, W.; Liu, W.; Qian, C.; Ouyang, W. Towards multi-person pose tracking: Bottom-up and top-down methods. ICCV PoseTrack Workshop 2017, 2, 7.
- Andriluka, M.; Pishchulin, L.; Gehler, P.; Schiele, B. 2D human pose estimation: New benchmark and state of the art analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 3686–3693.
- Li, M.; Zhou, Z.; Li, J.; Liu, X. Bottom-up pose estimation of multiple person with bounding box constraint. In Proceedings of the 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 20–24 August 2018; IEEE: New York, NY, USA, 2018; pp. 115–120.
| Body Part | Keypoint Vector |
|---|---|
| Right Shoulder Section | neck → r_sho |
| Left Shoulder Section | neck → l_sho |
| Right Upper Arm | r_sho → r_elb |
| Right Lower Arm | r_elb → r_wri |
| Left Upper Arm | l_sho → l_elb |
| Left Lower Arm | l_elb → l_wri |
| Right Upper Bodyline | neck → r_hip |
| Right Upper Leg | r_hip → r_knee |
| Right Lower Leg | r_knee → r_ank |
| Left Upper Bodyline | neck → l_hip |
| Left Upper Leg | l_hip → l_knee |
| Left Lower Leg | l_knee → l_ank |
| Neck Section | neck → nose |
| Right Nose to Eye Section | nose → r_eye |
| Right Eye to Ear Section | r_eye → r_ear |
| Left Nose to Eye Section | nose → l_eye |
| Left Eye to Ear Section | l_eye → l_ear |
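In code, the 17 keypoint vectors from the table above can be represented as ordered (start, end) keypoint pairs, with each body-part vector formed by coordinate differences between the two keypoints. This is an illustrative sketch only; the keypoint naming follows the table, but the data structure and the toy pose are assumptions, not the authors' implementation.

```python
# The 17 body-part vectors from the table, as (start, end) keypoint pairs.
KEYPOINT_VECTORS = [
    ("neck", "r_sho"), ("neck", "l_sho"),
    ("r_sho", "r_elb"), ("r_elb", "r_wri"),
    ("l_sho", "l_elb"), ("l_elb", "l_wri"),
    ("neck", "r_hip"), ("r_hip", "r_knee"), ("r_knee", "r_ank"),
    ("neck", "l_hip"), ("l_hip", "l_knee"), ("l_knee", "l_ank"),
    ("neck", "nose"),
    ("nose", "r_eye"), ("r_eye", "r_ear"),
    ("nose", "l_eye"), ("l_eye", "l_ear"),
]

def limb_vectors(keypoints):
    """Turn a {name: (x, y)} keypoint dict into body-part vectors,
    skipping any vector whose endpoints were not detected."""
    vecs = {}
    for start, end in KEYPOINT_VECTORS:
        if start in keypoints and end in keypoints:
            (x0, y0), (x1, y1) = keypoints[start], keypoints[end]
            vecs[(start, end)] = (x1 - x0, y1 - y0)
    return vecs

# Hypothetical partial detection: only three keypoints visible.
pose = {"neck": (0.5, 0.4), "r_sho": (0.4, 0.4), "nose": (0.5, 0.2)}
v = limb_vectors(pose)   # yields only the two fully-detected vectors
```

Skipping undetected endpoints mirrors the practical need to handle occlusion: a synchrony score can then be averaged over whichever body-part vectors both musicians have in a given frame.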
| Model | B (Unstandardized) | Std. Error | Beta (Standardized) | t | Sig. |
|---|---|---|---|---|---|
| (Constant) | 0.008 | 0.000 | | 25.629 | |
| synchrony_r_knee_to_r_ank | −0.004 | 0.000 | −0.371 | −32.068 | |
| synchrony_neck_to_l_hip | 0.010 | 0.002 | 0.456 | 5.473 | |
| synchrony_neck_to_r_sho | 0.033 | 0.001 | 1.451 | 32.711 | |
| synchrony_neck_to_r_hip | −0.022 | 0.001 | −1.118 | −15.128 | |
| synchrony_r_sho_to_r_elb | 0.020 | 0.001 | 0.823 | 20.699 | |
| synchrony_neck_to_nose | −0.020 | 0.001 | −0.789 | −20.238 | |
| synchrony_neck_to_l_sho | −0.014 | 0.001 | −0.695 | −11.820 | |
| synchrony_r_eye_to_r_ear | −0.012 | 0.001 | −0.417 | −16.156 | |
| synchrony_nose_to_l_eye | 0.012 | 0.001 | 0.414 | 10.885 | |
| synchrony_l_sho_to_l_elb | −0.012 | 0.001 | −0.567 | −9.294 | |
| synchrony_l_knee_to_l_ank | −0.002 | 0.000 | −0.115 | −9.702 | |
| synchrony_nose_to_r_eye | 0.005 | 0.001 | 0.226 | 5.763 | |
| synchrony_r_hip_to_r_knee | 0.003 | 0.000 | 0.123 | 5.084 | |
| synchrony_l_hip_to_l_knee | −0.003 | 0.001 | −0.136 | −5.071 | |
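A coefficient table like the one above is standard multiple-regression output: the t statistic is the unstandardized coefficient B divided by its standard error, and the standardized beta rescales B by the ratio of predictor to outcome standard deviations. The sketch below illustrates these relationships for simple (one-predictor) OLS on synthetic data; the variable names and data are assumptions, not the paper's.

```python
import math
import random

def ols_simple(x, y):
    """Simple OLS fit y = a + b*x. Returns the slope b, its standard
    error, t = b / SE, and the standardized beta = b * sd(x) / sd(y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    s2 = sum(e * e for e in resid) / (n - 2)   # residual variance
    se = math.sqrt(s2 / sxx)                   # standard error of the slope
    t = b / se                                 # t statistic, as in the table
    sdx = math.sqrt(sxx / (n - 1))
    sdy = math.sqrt(sum((yi - my) ** 2 for yi in y) / (n - 1))
    beta = b * sdx / sdy                       # standardized coefficient
    return b, se, t, beta

random.seed(0)
x = [random.random() for _ in range(200)]                    # synchrony scores
y = [0.01 + 0.02 * xi + random.gauss(0, 0.005) for xi in x]  # emotion series
b, se, t, beta = ols_simple(x, y)
```

With a small standard error relative to B, the t statistic grows large, which is why the strongest predictors in the table above (e.g. synchrony_neck_to_r_sho) pair sizable coefficients with |t| in the double digits.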
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Bhave, A.; van Delden, J.; Gloor, P.A.; Renold, F.K. Comparing Synchronicity in Body Movement among Jazz Musicians with Their Emotions. Sensors 2023, 23, 6789. https://doi.org/10.3390/s23156789