Article

Prompt Language Learner with Trigger Generation for Dialogue Relation Extraction

1 Department of Computer Science and Engineering, Korea University, 145, Anam-ro, Seongbuk-gu, Seoul 02841, Republic of Korea
2 Human-Inspired AI Research, 145, Anam-ro, Seongbuk-gu, Seoul 02841, Republic of Korea
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2023, 13(22), 12414; https://doi.org/10.3390/app132212414
Submission received: 10 October 2023 / Revised: 7 November 2023 / Accepted: 14 November 2023 / Published: 16 November 2023
(This article belongs to the Special Issue Natural Language Processing (NLP) and Applications)

Abstract

Dialogue relation extraction identifies semantic relations between entity pairs in dialogues. This research explores a methodology that pairs prompt-based fine-tuning with a trigger-generation approach. Capitalizing on the intrinsic knowledge of pre-trained language models, the strategy employs triggers that decisively highlight the relation between entities. In particular, diverging from the extractive methods of earlier research, our study generates triggers rather than extracting them from the input. The dialogue-based relation extraction (DialogRE) benchmark dataset features multi-utterance, colloquial exchanges among multiple speakers, making it critical to capture meaningful clues for inferring relational facts. On this benchmark, empirical results reveal significant performance gains in few-shot scenarios, where available examples are notably limited. Nevertheless, the scarcity of ground-truth triggers for training suggests room for further refinement of the trigger-generation module, especially when ample examples are present. Overall, combining prompt-based learning with trigger generation yields pronounced improvements in both full-shot and few-shot settings. Specifically, integrating a carefully crafted manual initialization method with the prompt-based model, one that considers prior distributional insights and relation-class semantics, substantially surpasses the baseline. However, further advances in trigger generation are warranted, especially in data-abundant contexts, to maximize performance gains.
Keywords: dialogue relation extraction; information extraction; trigger generation; prompt-based learning
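To make the abstract's idea concrete, the following is an illustrative sketch (not the authors' released code) of how a prompt-based relation-extraction input might be assembled: the dialogue is wrapped in a cloze-style template, a generated trigger span is inserted as a clue, and a verbalizer maps relation classes to words the masked language model can predict. All names, the template wording, and the verbalizer entries here are hypothetical.

```python
# Hypothetical sketch of prompt construction for dialogue relation
# extraction with a generated trigger. Template and names are
# illustrative assumptions, not the paper's exact implementation.

def build_prompt(dialogue: str, head: str, tail: str, trigger: str) -> str:
    """Wrap a dialogue in a cloze-style template whose [MASK] slot
    a pre-trained language model fills with a relation verbalizer.
    The trigger span is surfaced explicitly as a relational clue."""
    return (
        f"{dialogue} "
        f"Clue: {trigger}. "
        f"The relation between {head} and {tail} is [MASK]."
    )

# Verbalizer: map each relation class to a single word whose
# masked-LM probability scores that class (hypothetical entries).
VERBALIZER = {
    "per:friends": "friend",
    "per:title": "title",
    "per:siblings": "sibling",
}

dialogue = "S1: Hey Pheebs! S2: Hey Joey, my friend!"
prompt = build_prompt(dialogue, "S1", "S2", "my friend")
```

At inference, the model's distribution over the [MASK] position is restricted to the verbalizer words, and the highest-scoring word indicates the predicted relation class; generating (rather than extracting) the trigger lets the clue be phrased even when no single span in the dialogue states the relation.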

Share and Cite

MDPI and ACS Style

Kim, J.; Kim, G.; Son, J.; Lim, H. Prompt Language Learner with Trigger Generation for Dialogue Relation Extraction. Appl. Sci. 2023, 13, 12414. https://doi.org/10.3390/app132212414


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.