CDEA: Causality-Driven Dialogue Emotion Analysis via LLM
Abstract
1. Introduction
- We propose a dialogue emotion analysis method based on explicit reasoning over emotional causes. By constructing a clear reasoning path, the method addresses the lack of explicit reasoning-path information in current approaches, enabling accurate identification of emotional causes and establishing causal symmetry between emotional causes and emotion categories.
- In addition, we leverage the rich knowledge embedded in GPT-4 and its strong generalization ability by constructing instructions that combine historical content, emotional causes, and experience examples (a schematic sketch of such an instruction follows this list). This not only compensates for the lack of commonsense support in current cause-aware methods but also strengthens the accuracy and flexibility of emotional reasoning through the symmetry mechanism.
- We conducted extensive experiments on three benchmark datasets to validate the model’s effectiveness in constructing explicit reasoning paths and exploiting LLM commonsense reasoning. The results show that our model outperforms existing baselines in accuracy and reliability, further highlighting the close connection between the emotion causal reasoning model and the concept of symmetry.
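To make the instruction format concrete, the sketch below shows one way such a prompt could be assembled from dialogue history, detected emotional cause sentences, and retrieved experience examples. The template wording, field names, and function signature are illustrative assumptions rather than the exact prompt used in the paper.

```python
def build_instruction(history, target_utterance, cause_sentences, demonstrations):
    """Assemble an emotion-analysis instruction from dialogue history, detected
    emotional causes, and retrieved experience examples.

    The dict keys ('speaker', 'text', 'context', 'label') and the template
    wording are hypothetical; only the overall structure mirrors the paper.
    """
    history_block = "\n".join(f"{turn['speaker']}: {turn['text']}" for turn in history)
    cause_block = "\n".join(f"- {c}" for c in cause_sentences)
    demo_block = "\n\n".join(
        f"Dialogue: {d['context']}\nEmotion: {d['label']}" for d in demonstrations
    )
    return (
        "You are an emotion analysis assistant.\n\n"
        f"### Experience examples\n{demo_block}\n\n"
        f"### Dialogue history\n{history_block}\n\n"
        f"### Emotional cause sentences\n{cause_block}\n\n"
        f"### Target utterance\n{target_utterance}\n\n"
        "Identify the emotion category of the target utterance."
    )
```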
2. Related Work
2.1. Sentiment Analysis Techniques
2.2. Conversational Sentiment Analysis Techniques
3. Method
3.1. Task Definition and Model Overview
3.2. Sentiment Cause Sentence Acquisition
3.3. Dynamic Retrieval of Experience Examples
- Speaker information removal: To prevent speaker identity bias, all speaker metadata are removed, ensuring that the retrieved examples are selected purely based on textual content rather than specific speakers’ emotional tendencies.
- Sentiment category balancing: Since some emotion categories (e.g., Neutral) are more frequent than others, we apply category balancing techniques to ensure that all sentiment classes are uniformly distributed within the retrieved example pool. This prevents the model from over-relying on dominant categories during retrieval.
- Text normalization: To reduce variability in sentence structure, we perform basic text preprocessing, such as lowercasing, punctuation normalization, and stop-word removal. A minimal sketch of these preprocessing steps follows this list.
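The sketch below illustrates the three preprocessing steps above, assuming a hypothetical example schema with 'speaker', 'text', and 'label' fields; the stop-word list and the per-class sampling strategy are illustrative, not the exact ones used in the paper.

```python
import random
import re
from collections import defaultdict

# Toy stop-word list; a full list (e.g., from NLTK) would be used in practice.
STOP_WORDS = {"a", "an", "the", "is", "are", "to", "of", "and", "in", "it"}

def normalize(text: str) -> str:
    """Lowercase, normalize punctuation, and drop stop words."""
    text = text.lower()
    text = re.sub(r"[^\w\s']", " ", text)  # replace punctuation with spaces
    tokens = [t for t in text.split() if t not in STOP_WORDS]
    return " ".join(tokens)

def build_retrieval_pool(examples, per_class, seed=0):
    """Strip speaker metadata, normalize text, and balance emotion classes."""
    by_label = defaultdict(list)
    for ex in examples:
        # The 'speaker' field is intentionally dropped to avoid identity bias.
        by_label[ex["label"]].append({"text": normalize(ex["text"]), "label": ex["label"]})
    rng = random.Random(seed)
    pool = []
    for label, items in by_label.items():
        rng.shuffle(items)
        pool.extend(items[:per_class])  # keep the same number of examples per class
    return pool
```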
3.4. Prompt Instruction Construction
3.5. Training and Loss Functions
4. Experiments
4.1. Setup
4.1.1. Models and Datasets
- COSMIC [36]: the first model that takes into account different categories of commonsense knowledge in a conversational sentiment analysis task and utilizes them to update conversational states.
- DAG-ERC [34]: models the conversation structure as a directed acyclic graph, modeling both distant and proximate information interactions in a conversation.
- DialogueCRN [9]: attempts to model intuitive retrieval and conscious reasoning processes by designing a multi-round reasoning module that iteratively performs the process of extracting and integrating emotional cues.
- SKAIG [49]: uses the structure of a connectivity graph to enrich edge representations with commonsense knowledge and enriches the representation of target utterances with past and future contextual information.
- CauAIN [10]: takes commonsense knowledge as the cause of emotion generation in dialogs and utilizes attentional mechanisms to update deeper representations of the target utterance in relation to emotion.
- ERCMC [50]: uses the generated pseudo-future contexts in combination with historical contexts to improve emotion recognition in conversation.
- UniMSE [51]: a unified framework that formulates multimodal sentiment analysis and emotion recognition in conversation as a single generative task, fusing multimodal features with a pre-trained text-to-text model so that the two tasks share knowledge.
- InstructERC [52]: a dialogue emotion recognition model that uses large language models to improve recognition accuracy, enhancing its understanding of emotions with two auxiliary tasks: speaker identification and emotion prediction.
- Ref. [53]: uses commonsense knowledge to complement the contextual information contained in utterances and enrich the extracted conversation information.
- CKERC [54]: an emotion recognition in conversation (ERC) model that improves recognition accuracy by combining large language models (LLMs) with commonsense knowledge.
4.1.2. Implementation Details
4.2. Results and Analysis
4.2.1. Overall Results
4.2.2. Ablation Study
- w/o Inter-Path: in the emotion cause detection module, the other-oriented reasoning-path information provided by the structured machine commonsense graph is not used, and other-cause sentences are identified only by semantic similarity; the self-oriented reasoning-path information is still used to identify self-cause sentences consistent with the causal relationship.
- w/o Intra-Path: in the emotion cause detection module, the self-oriented reasoning-path information provided by the structured machine commonsense graph is not used, and self-cause sentences are identified only by semantic similarity; the other-oriented reasoning-path information is still used to identify other-cause sentences consistent with the causal relationship.
- w/o Inf-Path: no reasoning-path information from the structured machine commonsense graph is used in the emotion cause detection module; emotion cause sentences are identified only by the semantic similarity between historical utterances and the target utterance.
- w/o Exper Demonstration: experience examples are removed from the LLM input, i.e., the examples dynamically selected from the experience database according to the contextual semantics of the target utterance are no longer included when constructing the instruction.
- w/o Label Paraphrasing: the auxiliary task of generating semantic interpretations of emotion categories is removed, and the LLM is fine-tuned only on the main dialogue emotion analysis task.
- w/o LoRA: LoRA is not used to fine-tune the LLM; full-parameter fine-tuning is applied instead (a LoRA configuration sketch follows this list).
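For reference, the sketch below shows a typical LoRA setup with the Hugging Face peft library; the rank, scaling factor, dropout, target modules, and base checkpoint name are assumptions for illustration, not the paper's reported settings. In the w/o LoRA ablation this wrapping step is skipped and all base-model parameters are updated.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Hypothetical base checkpoint and hyperparameters; substitute the ones actually used.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
lora_cfg = LoraConfig(
    r=16,                                 # low-rank dimension
    lora_alpha=32,                        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()        # only the LoRA adapters are trainable
```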
4.2.3. Hyperparameter Study
4.2.4. Comparative Experiments with Different LLMs in Different Supervised Scenarios
4.3. Case Study
4.4. Module Time Consumption Analysis
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Zhou, H.; Huang, M.; Zhang, T.; Zhu, X.; Liu, B. Emotional chatting machine: Emotional conversation generation with internal and external memory. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; Volume 32. [Google Scholar]
- Kumar, A.; Dogra, P.; Dabas, V. Emotion analysis of Twitter using opinion mining. In Proceedings of the 2015 Eighth International Conference on Contemporary Computing (IC3), Noida, India, 20–22 August 2015; IEEE: New York, NY, USA, 2015; pp. 285–290. [Google Scholar]
- Pujol, F.A.; Mora, H.; Martínez, A. Emotion Recognition to Improve E-Healthcare Systems in Smart Cities. In Proceedings of the Research & Innovation Forum 2019: Technology, Innovation, Education, and Their Social Impact, Athens, Greece, 10–12 April 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 245–254. [Google Scholar]
- Poria, S.; Majumder, N.; Mihalcea, R.; Hovy, E. Emotion recognition in conversation: Research challenges, datasets, and recent advances. IEEE Access 2019, 7, 100943–100953. [Google Scholar]
- König, A.; Francis, L.E.; Malhotra, A.; Hoey, J. Defining affective identities in elderly nursing home residents for the design of an emotionally intelligent cognitive assistant. In Proceedings of the 10th EAI International Conference on Pervasive Computing Technologies for Healthcare, Cancun, Mexico, 16–19 May 2016; pp. 206–210. [Google Scholar]
- Strapparava, C.; Valitutti, A. WordNet-Affect: An affective extension of WordNet. In Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004), Lisbon, Portugal, 26–28 May 2004. [Google Scholar]
- Mohammad, S.M.; Turney, P.D. Crowdsourcing a word—Emotion association lexicon. Comput. Intell. 2013, 29, 436–465. [Google Scholar]
- Lian, Z.; Sun, L.; Xu, M.; Sun, H.; Xu, K.; Wen, Z.; Chen, S.; Liu, B.; Tao, J. Explainable multimodal emotion reasoning. arXiv 2023, arXiv:2306.15401. [Google Scholar]
- Hu, D.; Wei, L.; Huai, X. Dialoguecrn: Contextual reasoning networks for emotion recognition in conversations. arXiv 2021, arXiv:2106.01978. [Google Scholar]
- Zhao, W.; Zhao, Y.; Lu, X. CauAIN: Causal Aware Interaction Network for Emotion Recognition in Conversations. In Proceedings of the IJCAI, Vienna, Austria, 23–29 July 2022; pp. 4524–4530. [Google Scholar]
- Schachter, S.; Singer, J. Cognitive, Social, and Physiological Determinants of Emotional State. Psychol. Rev. 1962, 69, 379–399. [Google Scholar] [CrossRef]
- Scherer, K.R. Appraisal Processes in Emotion: Theory, Methods, Research; Oxford University Press: New York, NY, USA, 2001. [Google Scholar]
- Majumder, N.; Poria, S.; Hazarika, D.; Mihalcea, R.; Gelbukh, A.; Cambria, E. Dialoguernn: An attentive rnn for emotion detection in conversations. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 6818–6825. [Google Scholar]
- Zhang, D.; Chen, X.; Xu, S.; Xu, B. Knowledge aware emotion recognition in textual conversations via multi-task incremental transformer. In Proceedings of the 28th International Conference on Computational Linguistics, Barcelona, Spain, 13–18 September 2020; pp. 4429–4440. [Google Scholar]
- Jiao, W.; Yang, H.; King, I.; Lyu, M.R. Higru: Hierarchical gated recurrent units for utterance-level emotion recognition. arXiv 2019, arXiv:1904.04446. [Google Scholar]
- Ma, H.; Wang, J.; Qian, L.; Lin, H. HAN-ReGRU: Hierarchical attention network with residual gated recurrent unit for emotion recognition in conversation. Neural Comput. Appl. 2021, 33, 2685–2703. [Google Scholar]
- Achiam, J.; Adler, S.; Agarwal, S.; Ahmad, L.; Akkaya, I.; Aleman, F.L.; Almeida, D.; Altenschmidt, J.; Altman, S.; Anadkat, S.; et al. Gpt-4 technical report. arXiv 2023, arXiv:2303.08774. [Google Scholar]
- Bhaumik, A.; Strzalkowski, T. Towards a Generative Approach for Emotion Detection and Reasoning. arXiv 2024, arXiv:2408.04906. [Google Scholar]
- Xie, S.M.; Raghunathan, A.; Liang, P.; Ma, T. An explanation of in-context learning as implicit bayesian inference. arXiv 2021, arXiv:2111.02080. [Google Scholar]
- Reddy, G.R.; Reddy, M.S.; Stanlywit, M.; Khaleel, S. Emotion detection from text and analysis of future work: A survey. Riv. Ital. Filos. Anal. Jr. 2023, 14, 59–73. [Google Scholar]
- Zhou, Z.H. A brief introduction to weakly supervised learning. Natl. Sci. Rev. 2018, 5, 44–53. [Google Scholar] [CrossRef]
- Sujadi, C.C.; Sibaroni, Y.; Ihsan, A.F. Analysis content type and emotion of the presidential election users tweets using agglomerative hierarchical clustering. Sink. J. Dan Penelit. Tek. Inform. 2023, 7, 1230–1237. [Google Scholar] [CrossRef]
- Mahesh, B. Machine Learning Algorithms—A Review. Int. J. Sci. Res. (IJSR) 2020, 9, 381–386. [Google Scholar] [CrossRef]
- Rafath, M.A.H.; Mim, F.T.Z.; Rahman, M.S. An analytical study on music listener emotion through logistic regression. World Acad. J. Eng. Sci. 2021, 8, 15–20. [Google Scholar]
- Bengio, Y.; Ducharme, R.; Vincent, P.; Janvin, C. A Neural Probabilistic Language Model. J. Mach. Learn. Res. 2003, 3, 1137–1155. [Google Scholar]
- Sarzyńska-Wawer, J.; Wawer, A.; Pawlak, A.; Szymanowska, J.; Stefaniak, I.; Jarkiewicz, M.; Okruszek, L. Detecting formal thought disorder by deep contextualized word representations. Psychiatry Res. 2021, 304, 114135. [Google Scholar] [CrossRef]
- Radford, A.; Narasimhan, K. Improving Language Understanding by Generative Pre-Training. OpenAI Technical Report. 2018. Available online: https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf (accessed on 19 January 2025).
- Devlin, J.; Chang, M.-W.; Lee, K.; Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), Minneapolis, MN, USA, 2–7 June 2019; Volume 1, pp. 4171–4186. [Google Scholar]
- Wan, B.; Wu, P.; Yeo, C.K.; Li, G. Emotion-cognitive reasoning integrated BERT for sentiment analysis of online public opinions on emergencies. Inf. Process. Manag. 2024, 61, 103609. [Google Scholar] [CrossRef]
- Abu Farha, I.; Magdy, W. A Comparative Study of Effective Approaches for Arabic Sentiment Analysis. Inf. Process. Manag. 2021, 58, 102438. [Google Scholar] [CrossRef]
- Bello, A.; Ng, S.C.; Leung, M.F. A BERT framework to sentiment analysis of tweets. Sensors 2023, 23, 506. [Google Scholar] [CrossRef]
- Ghosal, D.; Majumder, N.; Poria, S.; Chhaya, N.; Gelbukh, A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv 2019, arXiv:1908.11540. [Google Scholar]
- Ishiwatari, T.; Yasuda, Y.; Miyazaki, T.; Goto, J. Relation-Aware Graph Attention Networks with Relational Position Encodings for Emotion Recognition in Conversations. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online, 16–20 November 2020; pp. 7360–7370. [Google Scholar]
- Shen, W.; Wu, S.; Yang, Y.; Quan, X. Directed acyclic graph network for conversational emotion recognition. arXiv 2021, arXiv:2105.12907. [Google Scholar]
- Zhong, P.; Wang, D.; Miao, C. Knowledge-enriched transformer for emotion detection in textual conversations. arXiv 2019, arXiv:1909.10681. [Google Scholar]
- Ghosal, D.; Majumder, N.; Gelbukh, A.; Mihalcea, R.; Poria, S. Cosmic: Commonsense knowledge for emotion identification in conversations. arXiv 2020, arXiv:2010.02795. [Google Scholar]
- Bosselut, A.; Rashkin, H.; Sap, M.; Malaviya, C.; Celikyilmaz, A.; Choi, Y. COMET: Commonsense transformers for automatic knowledge graph construction. arXiv 2019, arXiv:1906.05317. [Google Scholar]
- Sap, M.; Le Bras, R.; Allaway, E.; Bhagavatula, C.; Lourie, N.; Rashkin, H.; Roof, B.; Smith, N.A.; Choi, Y. ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; AAAI Press: Honolulu, HI, USA, 2019; Volume 33, pp. 3027–3035. [Google Scholar]
- Luo, M.; Xu, X.; Liu, Y.; Pasupat, P.; Kazemi, M. In-context learning with retrieved demonstrations for language models: A survey. arXiv 2024, arXiv:2401.11624. [Google Scholar]
- Zahiri, S.M.; Choi, J.D. Emotion detection on TV show transcripts with sequence-based convolutional neural networks. In Proceedings of the AAAI Workshops, New Orleans, LA, USA, 2–7 February 2018; Volume 18, pp. 44–52. [Google Scholar]
- Zhang, T.; Kishore, V.; Wu, F.; Weinberger, K.Q.; Artzi, Y. Bertscore: Evaluating text generation with bert. arXiv 2019, arXiv:1904.09675. [Google Scholar]
- Lian, Z.; Liu, B.; Tao, J. CTNet: Conversational Transformer Network for Emotion Recognition. IEEE/ACM Trans. Audio Speech Lang. Process. 2021, 29, 985–1000. [Google Scholar]
- Baccianella, S.; Esuli, A.; Sebastiani, F. SentiWordNet 3.0: An Enhanced Lexical Resource for Sentiment Analysis and Opinion Mining. In Proceedings of the 7th International Conference on Language Resources and Evaluation (LREC 2010), Valletta, Malta, 17–23 May 2010; European Language Resources Association (ELRA): Valletta, Malta, 2010; pp. 2200–2204. [Google Scholar]
- Busso, C.; Bulut, M.; Lee, C.C.; Kazemzadeh, A.; Mower, E.; Kim, S.; Chang, J.N.; Lee, S.; Narayanan, S.S. IEMOCAP: Interactive Emotional Dyadic Motion Capture Database. Lang. Resour. Eval. 2008, 42, 335–359. [Google Scholar]
- Poria, S.; Hazarika, D.; Majumder, N.; Naik, G.; Cambria, E.; Mihalcea, R. MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversations. arXiv 2018, arXiv:1810.02508. [Google Scholar]
- Li, Y.; Su, H.; Shen, X.; Li, W.; Cao, Z.; Niu, S. DailyDialog: A Manually Labelled Multi-Turn Dialogue Dataset. arXiv 2017, arXiv:1710.03957. [Google Scholar]
- Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.A.; Lacroix, T.; Rozière, B.; Goyal, N.; Hambro, E.; Azhar, F.; et al. LLaMA: Open and Efficient Foundation Language Models. arXiv 2023, arXiv:2302.13971. [Google Scholar]
- Hu, E.J.; Shen, Y.; Wallis, P.; Allen-Zhu, Z.; Li, Y.; Wang, S.; Wang, L.; Chen, W. LoRA: Low-Rank Adaptation of Large Language Models. arXiv 2021, arXiv:2106.09685. [Google Scholar]
- Li, J.; Lin, Z.; Fu, P.; Wang, W. Past, Present, and Future: Conversational Emotion Recognition through Structural Modeling of Psychological Knowledge. In Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2021, Punta Cana, Dominican Republic, 16–20 November 2021; pp. 1204–1214. [Google Scholar]
- Wei, Y.; Liu, S.; Yan, H.; Ye, W.; Mo, T.; Wan, G. Exploiting Pseudo Future Contexts for Emotion Recognition in Conversations. arXiv 2023, arXiv:2306.15376. [Google Scholar]
- Lei, S.; Dong, G.; Wang, X.; Wang, K.; Wang, S. InstructERC: Reforming Emotion Recognition in Conversation with a Retrieval Multi-Task LLMs Framework. arXiv 2023, arXiv:2309.11911. [Google Scholar]
- Hu, D.; Bao, Y.; Wei, L.; Zhou, W.; Hu, S. Supervised Adversarial Contrastive Learning for Emotion Recognition in Conversations. arXiv 2023, arXiv:2306.01505. [Google Scholar]
- Yang, Z.; Li, X.; Cheng, Y.; Zhang, T.; Wang, X. Emotion Recognition in Conversation Based on a Dynamic Complementary Graph Convolutional Network. IEEE Trans. Affect. Comput. 2024, 15, 1567–1579. [Google Scholar]
- Fu, Y. CKERC: Joint Large Language Models with Commonsense Knowledge for Emotion Recognition in Conversation. arXiv 2024, arXiv:2403.07260. [Google Scholar]
- Du, Z.; Qian, Y.; Liu, X.; Ding, M.; Qiu, J.; Yang, Z.; Tang, J. GLM: General Language Model Pretraining with Autoregressive Blank Infilling. arXiv 2021, arXiv:2103.10360. [Google Scholar]
Sentence (Event) | X Pays Y a Compliment |
---|---|---
Self-Reasoning Path | xEffect | be acknowledged
 | xReact | feel good
 | xWant | chat with Y
Other-Reasoning Path | oEffect | smile
 | oReact | feel flattered
 | oWant | compliment X back
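The self- and other-reasoning paths in the table correspond to ATOMIC relations about the speaker (xEffect, xReact, xWant) and the listener (oEffect, oReact, oWant). The sketch below shows one way such inferences could be generated with a COMET-style sequence-to-sequence model; the checkpoint name, input format, and decoding settings are assumptions rather than the paper's exact pipeline.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Placeholder checkpoint name; substitute the actual COMET-ATOMIC model used.
MODEL_NAME = "comet-atomic-2020-bart"
tok = AutoTokenizer.from_pretrained(MODEL_NAME)
comet = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

SELF_RELATIONS = ["xEffect", "xReact", "xWant"]
OTHER_RELATIONS = ["oEffect", "oReact", "oWant"]

def reasoning_path(event: str, relations) -> dict:
    """Generate a commonsense inference for each ATOMIC relation of an event."""
    path = {}
    for rel in relations:
        prompt = f"{event} {rel} [GEN]"  # COMET-style query format (assumed)
        ids = tok(prompt, return_tensors="pt").input_ids
        out = comet.generate(ids, num_beams=5, max_new_tokens=16)
        path[rel] = tok.decode(out[0], skip_special_tokens=True).strip()
    return path

event = "PersonX pays PersonY a compliment"
self_path = reasoning_path(event, SELF_RELATIONS)    # e.g., {"xReact": "feel good", ...}
other_path = reasoning_path(event, OTHER_RELATIONS)  # e.g., {"oReact": "feel flattered", ...}
```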
Dataset | Dialogs (Train) | Dialogs (Dev) | Dialogs (Test) | Sentences (Train) | Sentences (Dev) | Sentences (Test)
---|---|---|---|---|---|---
IEMOCAP | 108 | 12 | 31 | 5163 | 647 | 1623
MELD | 1039 | 114 | 280 | 9989 | 1109 | 2610
DailyDialog | 11,118 | 1000 | 1000 | 87,170 | 8069 | 7740
Dataset | Classes | Sentiment Category |
---|---|---|
IEMOCAP | 6 | happy, sad, neutral, angry, excited, frustrated |
MELD | 7 | anger, disgust, fear, joy, neutral, sadness, surprise |
DailyDialog | 7 | anger, disgust, fear, joy, neutral, sadness, surprise |
Model | Metric | Notes
---|---|---
COSMIC | weighted F1 | -
DAG-ERC | weighted F1 | -
DialogueCRN | weighted F1 | w/o neutral category sentences
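Assuming the evaluation conventions implied by the result tables (weighted-F1 and accuracy on IEMOCAP, weighted- and micro-F1 on MELD, macro- and micro-F1 excluding the neutral class on DailyDialog), the metrics could be computed as in the sketch below; the function and label handling are illustrative, not an official evaluation script.

```python
from sklearn.metrics import accuracy_score, f1_score

def evaluate(y_true, y_pred, dataset):
    """Compute the per-dataset metrics assumed from the result tables."""
    if dataset == "DailyDialog":
        # DailyDialog is conventionally scored without the dominant neutral class.
        labels = sorted(set(y_true) - {"neutral"})
        return {
            "macro_f1": f1_score(y_true, y_pred, labels=labels, average="macro"),
            "micro_f1": f1_score(y_true, y_pred, labels=labels, average="micro"),
        }
    if dataset == "IEMOCAP":
        return {
            "weighted_f1": f1_score(y_true, y_pred, average="weighted"),
            "acc": accuracy_score(y_true, y_pred),
        }
    # MELD
    return {
        "weighted_f1": f1_score(y_true, y_pred, average="weighted"),
        "micro_f1": f1_score(y_true, y_pred, average="micro"),
    }
```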
Model | IEMOCAP: Weighted-F1 | IEMOCAP: Acc | MELD: Weighted-F1 | MELD: Micro-F1 | DailyDialog: Macro-F1 | DailyDialog: Micro-F1
---|---|---|---|---|---|---
COSMIC [36] | 65.28 | 64.25 | 65.21 | 65.13 | 51.05 | 58.48
DAG-ERC [34] | 67.10 | 66.47 | 63.37 | - | - | 58.25
DialogueCRN [9] | 66.20 | 67.01 | 58.39 | 58.26 | - | 55.46
SKAIG [49] | 66.96 | - | 65.18 | - | 51.95 | 59.75
CauAIN [10] | 64.29 | 63.84 | 65.15 | 64.85 | 53.85 | 58.21
ERCMC [50] | 66.07 | 65.58 | 65.64 | - | 52.11 | 59.92
UniMSE [51] | 70.66 | 70.56 | 65.51 | 65.09 | - | -
InstructERC [52] | 71.39 | 71.43 | 69.15 | 68.96 | - | -
Ref. [53] | 68.31 | - | 66.25 | - | - | 60.21
CKERC [54] | 72.40 | - | 69.27 | - | - | -
CDEA | 66.92 | 66.44 | 65.73 | 66.59 | 54.29 | 60.44
CDEA + LLaMA | 73.26 | 72.25 | 69.34 | 69.61 | 63.25 | 64.59
Model | IEMOCAP | MELD | DailyDialog
---|---|---|---
CDEA | 66.92 | 65.73 | 60.44
w/o Inter-Path | 65.91 | 65.24 | 59.69
w/o Intra-Path | 65.96 | 63.53 | 59.33
w/o Inf-Path | 65.17 | 65.38 | 59.04
Model | IEMOCAP | MELD | DailyDialog
---|---|---|---
CDEA + LLaMA | 73.26 | 69.34 | 64.59
w/o Exper Demonstration | 70.65 | 67.29 | 63.68
w/o Label Paraphrasing | 70.55 | 67.41 | 63.13
w/o LoRA | 70.23 | 63.88 | 63.52
Contents of the Dialog |
---|
Joey: Oh, yeah, yeah, sure. We live in the building by the uh sidewalk. (neutral) Chandler: You know it? (surprise) Joey: Hey, look, since we are neighbors and all, what do you say we uh, get together for a drink? (neutral) Chandler: Oh, sure, they love us over there. (neutral) Joey: Ben! Ben! Ben! (neutral) |

Model | Prediction
---|---
DialogueCRN | surprise ✕
CKERC | joy ✕
CDEA | joy ✕
CDEA + LLaMA | neutral ✓
Module | Phase | Average Time (ms/sentence) |
---|---|---|
Dialogue History Preprocessing | Sentiment Cause Sentence Acquisition | 42 |
Sentiment Cause Detection (Self and Other) | Sentiment Cause Sentence Acquisition | 185 |
Experience Example Retrieval | Dynamic Retrieval of Experience Examples | 160 |
BART-Based Experience Refinement | Dynamic Retrieval of Experience Examples | 225 |
Prompt Construction | Prompt Instruction Construction and Fine-tuning | 105 |
LLM Inference | Prompt Instruction Construction and Fine-tuning | 3492 |
Total System Runtime | - | 4209 |