Article

Collaborative Analysis of Learners’ Emotional States Based on Cross-Modal Higher-Order Reasoning

1 College of Computer, Guangdong University of Technology, Guangzhou 511400, China
2 College of Automation, Guangdong University of Technology, Guangzhou 511400, China
3 Center of Campus Network & Modern Educational Technology, Guangdong University of Technology, Guangzhou 510006, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(13), 5513; https://doi.org/10.3390/app14135513
Submission received: 23 May 2024 / Revised: 18 June 2024 / Accepted: 20 June 2024 / Published: 25 June 2024

Abstract

Emotion is a significant factor in education and teaching, closely intertwined with learners’ cognitive processing. Analyzing learners’ emotions from cross-modal data supports personalized guidance in intelligent educational environments. Currently, however, data scarcity and environmental noise produce data imbalances that leave emotional information incomplete or missing. This study therefore proposes a collaborative analysis model based on attention mechanisms. The model extracts features from each type of data using modality-specific tools and processes the resulting feature vectors in parallel with multi-head attention. A cross-modal attention collaborative interaction module then enables effective interaction among visual, auditory, and textual information, substantially improving the comprehensive understanding and analysis of cross-modal data. Empirical results demonstrate that the model improves the accuracy and robustness of emotion recognition on cross-modal data.
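The cross-modal attention interaction described above can be illustrated with a minimal sketch: features from one modality (e.g., visual) act as queries that attend over features from another modality (e.g., text), producing a fused representation. This is a generic scaled dot-product cross-attention example in numpy, not the authors' implementation; all function and variable names here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(query_feats, key_feats):
    """Scaled dot-product attention where queries come from one modality
    and keys/values from another, so each query token aggregates
    information from the other modality."""
    d_k = query_feats.shape[-1]
    scores = query_feats @ key_feats.T / np.sqrt(d_k)   # (nq, nk) similarity
    weights = softmax(scores, axis=-1)                  # rows sum to 1
    fused = weights @ key_feats                         # (nq, d) fused features
    return fused, weights

# toy example: 4 visual tokens attending over 6 text tokens, feature dim 8
rng = np.random.default_rng(0)
visual = rng.standard_normal((4, 8))
text = rng.standard_normal((6, 8))
fused, weights = cross_modal_attention(visual, text)
print(fused.shape)                          # (4, 8)
print(np.allclose(weights.sum(axis=-1), 1)) # True
```

In a multi-head variant, this computation is repeated over several learned projections of the inputs and the per-head outputs are concatenated, which is the parallel processing of feature vectors the abstract refers to.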
Keywords: cross-modal fusion; learner emotion recognition; attention mechanism; personalized tutoring

Wu, W.; Zhao, J.; Shen, X.; Feng, G. Collaborative Analysis of Learners’ Emotional States Based on Cross-Modal Higher-Order Reasoning. Appl. Sci. 2024, 14, 5513. https://doi.org/10.3390/app14135513


