Article

A New Model for Emotion-Driven Behavior Extraction from Text

Yawei Sun, Saike He, Xu Han and Ruihua Zhang

1 Key Laboratory of Trustworthy Distributed Computing and Service (BUPT), Ministry of Education, Beijing University of Posts and Telecommunications, Beijing 100876, China
2 School of Computer Science (National Pilot Software Engineering School), Beijing University of Posts and Telecommunications, Beijing 100876, China
3 State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
4 Institute of Scientific and Technical Information of China, Beijing 100038, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(15), 8700; https://doi.org/10.3390/app13158700
Submission received: 11 June 2023 / Revised: 25 July 2023 / Accepted: 26 July 2023 / Published: 27 July 2023
(This article belongs to the Special Issue Machine Learning and AI in Intelligent Data Mining and Analysis)

Abstract

Emotion analysis is currently a popular research direction in the field of natural language processing. However, existing research focuses primarily on tasks such as emotion classification, emotion extraction, and emotion cause analysis, and there are few investigations into the relationship between emotions and their impacts. To address this gap, this paper introduces the emotion-driven behavior extraction (EDBE) task, which separately extracts emotions and behaviors and then filters the emotion-driven behaviors described in text. EDBE comprises three sub-tasks: emotion extraction, behavior extraction, and emotion–behavior pair filtering. To facilitate research in this domain, we have created a new dataset, which is accessible to the research community. To address the EDBE task, we propose a pipeline approach that incorporates the causal relationship between emotions and the behaviors they drive. Additionally, we adopt the prompt paradigm to improve the model's representation of cause-and-effect relationships. In comparison to state-of-the-art methods, our approach achieves improvements of 1.32% at the clause level and 1.55% at the span level on our newly curated dataset in terms of the F1 score. These results underscore the effectiveness and superiority of our approach relative to existing methods.

1. Introduction

Emotion analysis of text has emerged as a pivotal topic in the field of natural language processing (NLP) [1]. While initial research efforts have focused mainly on identifying expressions of emotion, such as emotion classification [2,3,4] and emotion information extraction [5], there has been a growing interest among researchers in understanding the underlying causes of emotions. Consequently, the field of emotion cause analysis (ECA) has gained prominence as a burgeoning area of research.
ECA primarily aims to extract the causes that trigger emotional expressions in text. However, most existing studies in this domain have investigated the causes of individual emotions while overlooking the influence of these emotions on corresponding behaviors. It is crucial to note that emotions significantly impact people's actions and behaviors [6], and these emotion-driven behaviors can have broader effects on others and on society as a whole. Therefore, it is imperative to extend the research focus beyond the causes of emotions alone and explore the intricate relationships between emotions and behaviors.
The impact of emotions on various aspects of human behavior has been extensively studied across different fields. Within the realms of cognitive psychology and neuroscience, research has highlighted the distinct effects of emotions on behavior [7,8,9,10], attention [11], memory [12,13], and growth [14]. Insights from sociology and law have linked emotions to negative behavior, such as risky driving [15], destructive behavior in children [16], and criminal activities [17]. Additionally, research in the field of sociology has demonstrated the impact of emotion regulation on individual strain [18] and leadership effectiveness [19]. Moreover, in the realm of computer science, the integration of emotional characteristics has shown promise in enhancing the performance of recommendation systems [20,21]. Collectively, these findings underscore the relevance and significance of research in this field.
The studies above show that research on emotions and their impacts has received considerable attention across disciplines. In the field of NLP, however, corresponding research is still lacking: most work on emotion analysis has investigated emotions and their causes rather than the consequences and outcomes associated with emotions, leaving a noticeable gap in understanding the effects and implications of emotions.
In this paper, we present a novel task called emotion-driven behavior extraction (EDBE), which aims to automatically extract behavior driven by emotions from text. Here, “behavior” refers to the description of a certain action or conduct within the text. The ultimate goal of this research is to efficiently extract the expressed emotions and their resulting impact information from massive amounts of text, which holds potential value for subsequent sociological analysis. To address the EDBE task, we create a publicly available annotated dataset. Furthermore, we propose a two-step pipeline approach. The first step involves extracting emotions and behaviors from the text, which is a well-established task in NLP. The second step involves matching emotions with the behaviors they drive, posing a more significant challenge due to the complex nature of the relationships between emotions and behaviors.
The main novelties of this work can be summarized as follows.
  • Introducing a novel task: We introduce a novel task called emotion-driven behavior extraction (EDBE), which shifts the focus from ECA to understanding the impact of emotions on behaviors. This task holds significant practical value as it provides insights into the relationships between emotions and behaviors. Furthermore, this work makes a significant theoretical contribution to the field of emotion analysis of text.
  • Construction of a dedicated corpus: To facilitate research on the EDBE task, we construct a new corpus specifically tailored for this purpose. The corpus is carefully annotated and serves as a valuable resource for training and evaluating models for emotion-driven behavior extraction. This contribution to the construction of a dedicated corpus addresses the need for a reliable dataset in the field of EDBE and enhances the effectiveness of future research endeavors.
  • Pipeline method with causal relationship: We propose a pipeline approach for the EDBE task that incorporates the causal relationship between emotions and the behaviors they drive. This is achieved by introducing a prompt paradigm, which enhances the model’s ability to capture cause-and-effect relationships between emotions and behaviors. This novel methodology represents an advancement in the field of emotion-driven behavior extraction by explicitly considering the causal linkages between emotions and behaviors, thereby improving the accuracy and interpretability of the results.

2. Related Works

Emotion analysis of text has garnered significant attention in recent years. Related work in this field falls into three main areas: (1) emotion analysis; (2) important methods in emotion analysis, such as relation extraction; and (3) the use of the prompt paradigm.

2.1. Emotion Analysis

Emotion analysis of text concerns the emotional information expressed in text and encompasses a wide range of tasks, including emotion classification, emotion detection, and emotion prediction. These tasks help us understand emotions and leverage them effectively in various domains.
One prominent task is ECA, which explores the relationship between emotions and their underlying causes, providing insights into emotional states and human motivations. Several works have extended the ECA task from different perspectives. For instance, Gui et al. [22] introduced a Chinese emotion cause dataset derived from SINA city news, which served as an open dataset for the literature on ECA. Building upon this, subsequent studies focused on emotion cause extraction (ECE), extracting emotions and their corresponding causes simultaneously [23] and determining the causal relationship between emotions and causes under different contexts [24]. Other tasks, such as extracting the emotion–cause pair [25] and emotion–cause pair extraction in conversations [26], have also been proposed, leading to the emergence of emotion–cause pair extraction (ECPE) as a significant task that involves the extraction of emotions and causes as well as filtering emotion–cause pairs.
However, existing works mainly operate at the clause level, which may yield imprecise semantic information, and they tend to focus on individuals' emotional experiences rather than the consequences of emotions, which are often crucial for downstream tasks.

2.2. Relation Extraction Methods

Relation extraction is a classic task in the field of NLP. Its objective is to extract the relationship between two entities from a given text, and the extracted results are usually represented as triplets of the form (entity1, relation, entity2). For example, from the sentence "Biden is the president of the United States", we can extract the triplet (Biden, PresidentOf, United States), where "PresidentOf" is a relation type.
Relation extraction also plays a crucial role in emotion analysis, as many emotion analysis tasks involve identifying relationships between entities. Several methods have been proposed for relation extraction. For example, Eberts and Ulges [27] introduced a span-based joint entity and relation extraction model with lightweight reasoning on BERT embeddings for relation classification. Cheng et al. [28] redefined the emotion–cause pair extraction task as a unified sequence labeling problem and designed a unified target-oriented sequence-to-sequence model. Other approaches include utilizing graph convolution networks to encode structural information [29], using a unified label space for entity relation extraction [30], incorporating a span representation space to consider interrelations between spans [31], and using independent encoders for entities and relations [32]. However, these methods often underperform on relation extraction tasks that involve causal relationships within specific contexts.

2.3. Prompt Paradigm

The prompt paradigm, which originates from models such as Generative Pre-trained Transformer 3 (GPT-3), has emerged as a powerful approach to a wide range of tasks by transforming fine-tuning tasks into prompt-tuning tasks. This paradigm has been extensively validated on NLP tasks, particularly those involving few-shot or zero-shot learning. For instance, if we want to extract names from a piece of text, we simply need to add the following sentence at the end (or beginning) of the text: "Please extract names from the above text". By doing so, we can obtain the mentioned names from the text.
In recent years, researchers have adopted the prompt paradigm to enhance the learning of emotional information. For instance, Mao et al. [33] used prompt-based classification models for emotion detection tasks, while Zhao et al. [34] extended prompt-based learning to multimodal emotion recognition. In the domain of ECA, Zheng et al. [35] proposed a universal prompt that addresses multiple tasks, including ECE, ECPE, and cause–emotion relation classification (CCRC). This approach demonstrates the versatility and effectiveness of the prompt paradigm in capturing the underlying relationships between emotions and their causes.
Furthermore, the prompt paradigm has been successfully applied to improving relation extraction algorithms. For example, Chen et al. [36] introduced prompt tuning with synergistic optimization for relation extraction, injecting latent knowledge contained in relation labels into prompt construction. This integration of the prompt paradigm with relation extraction techniques has shown promising results in capturing and extracting meaningful relations between entities in text.
These studies highlight the significant role of the prompt paradigm in semantic analyses of text, which enables effective learning and improved performance in various tasks.

3. Task Definition

This section introduces the proposed task of emotion-driven behavior extraction (EDBE), which comprises three sub-tasks: emotion extraction, behavior extraction, and emotion–behavior pair filtering.
To illustrate the proposed task, we compared it with two related tasks, namely ECE and ECPE, as shown in Figure 1. The ECE task aims to identify cause clauses corresponding to a given emotion keyword, such as “happy”. In the example depicted in Figure 1, the cause clauses related to the keyword “happy” are “a policeman visited the old man with the lost money” and “told them that the thief was caught”. In the ECPE task, the goal is to extract both the emotion clause and its corresponding cause clause. In contrast, the EDBE task aims to extract the emotion and the behavior driven by that emotion. Specifically, the task involves extracting all behaviors, such as “visited the old man with the lost money”, “told them that the thief was caught”, “thanked the policeman”, and “deposited the money in the bank”, along with the emotion “happy” in the document. Emotion–behavior pairs are then established by matching the keyword “happy” with the behaviors it drives, resulting in identified emotion-driven behaviors.
As shown in Figure 1, the first sub-task focuses on identifying emotion spans in the document, which correspond to text fragments expressing emotions. The second sub-task involves identifying behavior spans in the document, which represent actions or events driven by emotions. Finally, the third sub-task aims to identify emotion-driven behaviors, which are pairs of emotional spans and the corresponding behavior spans that are causally related.
The ultimate objective of these sub-tasks is to extract emotion-driven behaviors from the text. The following subsections provide detailed explanations of the design of EDBE and the sub-tasks of emotion extraction, behavior extraction, and emotion–behavior pair filtering.

3.1. Emotion Extraction

In the emotion extraction task, we define emotions based on Ekman’s taxonomy [37], which includes the following basic emotions: happiness, sadness, fear, anger, disgust, and surprise (https://www.w3.org/TR/emotion-voc/xml#big6 (accessed on 10 June 2023)). These categories have been commonly used in previous work on Chinese emotion analysis [22].
Given a document D = {x_1, x_2, …, x_t}, where t is the number of tokens in the document, we define an emotion span as a continuous sequence of tokens (x_i, x_{i+1}, …, x_j), where i ∈ [0, t] and j ∈ [i, t], expressing any of the six basic emotions. An emotion span typically corresponds to a keyword indicating a particular emotion. For example, in Figure 1, the span "happy" is an emotion span that indicates the emotion of the citizen as happiness. Therefore, we label this span as an emotion span with the classification of happiness.

3.2. Behavior Extraction

Behavior refers to specific actions mentioned in the document that are performed by an agent based on their will, intention, or initiative. In the proposed EDBE task, a behavior span is defined as a continuous sequence of tokens (x_m, x_{m+1}, …, x_n), where m ∈ [0, t] and n ∈ [m, t], representing a specific behavior.
In Figure 1, the phrases "visited the old man with the lost money" and "told them that the thief was caught" depict actions initiated by the policeman, while "thanked the policeman" and "deposited the money in the bank" describe subsequent actions; all of these are behavior spans that need to be extracted.
It is important to note that while both the ECE and ECPE tasks identify the causes of emotions at the clause level, EDBE operates at the span level. This finer granularity in information extraction results in a more semantically precise output but also presents a greater challenge to the task [38,39].

3.3. Emotion–Behavior Pair Filtering

Once we have extracted emotions and behaviors, the next step is to identify behaviors that are influenced by specific emotions. These behaviors are referred to as emotion-driven behaviors in the EDBE task. In the example illustrated in Figure 1, the behaviors “thanked the policeman” and “deposited the money in the bank” are identified as emotion-driven behaviors resulting from the emotion “happy”. Conversely, “visited the old man with the lost money” and “told them that the thief was caught” cannot be associated with any emotion span and therefore do not qualify as emotion-driven behaviors.
To identify emotion-driven behaviors, we need to match emotions with their corresponding driven behaviors, thereby filtering the emotion–behavior pairs.

4. Dataset Construction

This section presents the construction process of the new dataset for the EDBE task.

4.1. Data Collection and Preprocessing

The raw corpus used in this study consists of 50,849 Chinese social news articles released by Sun et al. [40].
Upon observation, the raw corpus contains tokens such as “\u3000” and “ ” that represent spaces. To ensure the subsequent data processing and annotation are not affected by these symbols, we replace them with the space token (“ ”). Next, we employ regular expressions to segment the articles into sentences and remove duplicate sentences. To ensure an adequate amount of semantic information in the dataset, sentences with a length of fewer than 20 tokens are excluded.
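The preprocessing described above can be summarized in a short script. The following Python sketch is illustrative only: the sentence-splitting regular expression and the use of character count as a proxy for token count are assumptions, not the exact implementation used to build the dataset.

```python
import re

MIN_TOKENS = 20  # sentences shorter than this are discarded, per the description above

def preprocess(articles):
    """Normalize space tokens, segment articles into sentences, and drop duplicates and short sentences."""
    sentences, seen = [], set()
    for article in articles:
        # Replace full-width space tokens such as "\u3000" with an ordinary space.
        text = article.replace("\u3000", " ")
        # Assumed segmentation rule: split after common Chinese end-of-sentence punctuation.
        for sent in re.split(r"(?<=[。！？])", text):
            sent = sent.strip()
            if len(sent) < MIN_TOKENS:   # character count, used here as a proxy for token count
                continue
            if sent in seen:             # remove duplicate sentences
                continue
            seen.add(sent)
            sentences.append(sent)
    return sentences
```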

4.2. Data Annotation

After preprocessing, each sentence is further divided into clauses, and the data annotation is performed at the clause level. The data annotation process is described as follows.
For emotion spans, we initially pre-annotate the sentences using keyword matching. Subsequently, annotators review the pre-annotation results. The review process involves several tasks, including removing keywords that do not express emotions in the given context, refining keywords to enhance their comprehensiveness in semantics, and identifying new keywords that were not previously listed. Examples are provided for each of these tasks. For instance, the term "头疼" ("headache") has two primary meanings: physical pain in the head and the emotion of sadness. In this task, physical states are not considered emotions; thus, the term "头疼" ("headache"), referring to head pain, will not be annotated as an emotion span. Similarly, the emotions expressed by "激动" ("excited") and "情绪很激动" ("emotional") are not identical. Hence, manual determination of the emotion span boundary is necessary.
For behavior spans, acts are annotated based on their representation of subjective willingness. For example, “visited the old man with the lost money” is considered a behavior span. Additionally, spans indicating something that someone wants to do but has not done yet, such as “want to do sth.”, are also considered behavior spans. However, spans such as “aroused the anger of many citizens” and “see sb. doing sth.” are not categorized as behavior spans.
Regarding the definition of emotion-driven behaviors, this sub-task is treated as a relation classification task. In Figure 1, for example, "thanked the policeman" and "deposited the money in the bank" are influenced by the emotion "happy", while "visited the old man with the lost money" and "told them the thief was caught" are not. Therefore, we obtain two emotion–behavior pairs: "happy"–"thanked the policeman" and "happy"–"deposited the money in the bank".
During the annotation process, two annotators work independently and a third annotator reviews the results. The final dataset includes instances that contain at least one emotion span and one behavior span, resulting in a total of 3599 instances.

4.3. Dataset Statistics

The dataset used in this study consists of 3599 instances, encompassing a total of 15,213 clauses. Table 1 provides a breakdown of the number of emotion spans, behavior spans, and emotion–behavior pairs in the dataset. It is worth noting that the number of behavior spans is considerably higher than the number of emotion–behavior pairs. This is because there exist behaviors that are not influenced by emotions or emotions that cannot be directly inferred from the text. Additionally, it is important to recognize the many-to-many relationships between emotions and behaviors. Specifically, each emotion may give rise to multiple behaviors, and multiple emotions may influence a single behavior.
To further explore the distribution of emotion–behavior pairs in the dataset, Table 2 presents the number of pairs for each instance. The table reveals that the majority of instances (66.38%) contain only one emotion–behavior pair. Instances with two pairs account for 19.89%, while instances with three pairs make up 3.28%. Instances with four or more pairs are relatively rare, representing only 1.50% of the dataset. Additionally, it is worth noting that 8.89% of the instances do not include any emotion–behavior pairs, indicating the absence of emotions driving behaviors.
Furthermore, Table 3 presents insights into the distribution of emotion types across the instances in the dataset. The table displays the number of instances corresponding to each emotion type. The findings indicate that negative emotions are more prevalent than positive emotions in the dataset. Fear, sadness, and anger are the most prominent emotions, accounting for 29.62%, 19.52%, and 19.37% of the emotion spans, respectively. Disgust follows closely at 18.54%.

5. Methodology

The proposed method consists of three main components: an emotion extractor, a behavior extractor, and an emotion–behavior pair filter. Each component includes an encoder and a decoder. Figure 2 provides an overview of the method.

5.1. Encoder

The encoder’s role is to convert tokens into their vector representations. We employ Bidirectional Encoder Representations from Transformers (BERT) [41], a powerful text representation model, for this purpose. The BERT encoder architecture is illustrated in Figure 3a.
To use BERT, we first obtain token embeddings, as in other NLP models. BERT appends a sentence separator ([SEP]) at the end of each sentence and a classifier token ([CLS]) at the beginning of the first sentence. The representation of [CLS] serves as the representation of the entire sentence.
Given a document D = {x_1, x_2, …, x_d}, the input representation for the document is:

D_input = {[CLS], x_1, x_2, …, x_d, [SEP]}        (1)

where d is the number of tokens in D.
BERT employs a multi-layer bidirectional transformer encoder, primarily using stacked multi-head attention [42]. An attention function maps a query and a set of key–value pairs to an output. It can be computed as follows:
Attention(Q, K, V) = softmax(QK^T / √d_k) V        (2)
where Q, K, and V are matrices containing queries, keys, and values, respectively.
Multi-head attention consists of multiple attention heads, enabling the model to attend to different positional information from various representation perspectives. Multi-head attention can be calculated as follows:
MultiHead(Q, K, V) = Concat(head_1, …, head_h) W^O        (3)

head_i = Attention(QW_i^Q, KW_i^K, VW_i^V), i ∈ [1, h]        (4)

where h is the number of heads, and W_i^Q, W_i^K, and W_i^V are the parameter matrices of head_i.
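For illustration, Equations (2)–(4) can be written out directly in code. The following NumPy sketch shows only the bare computation; an actual BERT layer additionally applies attention masking, dropout, and per-layer learned projections.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Equation (2): scaled dot-product attention.
    d_k = K.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d_k)) @ V

def multi_head_attention(Q, K, V, W_Q, W_K, W_V, W_O):
    # Equations (3) and (4): h independent heads, concatenated and projected by W_O.
    heads = [attention(Q @ W_Q[i], K @ W_K[i], V @ W_V[i]) for i in range(len(W_Q))]
    return np.concatenate(heads, axis=-1) @ W_O
```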
We denote the encoding operation as BERT, and the representation can be obtained as:

Rep = BERT(D_input)        (5)

where Rep = {r_[CLS], r_1, r_2, …, r_d, r_[SEP]} represents the token representations of D_input, and r_[CLS] serves as the representation of D_input for text classification tasks.
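A minimal sketch of this encoding step is shown below. It uses the Hugging Face transformers library purely for illustration, whereas our experiments use bert-base-chinese through PaddleNLP (Section 6.1); the example sentence stands in for a document D.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
bert = BertModel.from_pretrained("bert-base-chinese")

document = "民警将失款送还老人，并告知小偷已被抓获。"  # stand-in for a document D
inputs = tokenizer(document, return_tensors="pt")       # adds [CLS] and [SEP] automatically

with torch.no_grad():
    outputs = bert(**inputs)

rep = outputs.last_hidden_state   # token representations {r_[CLS], r_1, ..., r_[SEP]}
r_cls = rep[:, 0, :]              # the [CLS] vector used as the sequence-level representation
```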

5.2. Emotion and Behavior Extractors

In this paper, we treat emotion extraction and behavior extraction as sequence labeling tasks, where the goal is to assign the appropriate label to each token. We use the BIO labeling scheme, where B indicates the beginning of a span, I indicates the inside of a span, and O indicates tokens outside of any span.
The input tokens and the encoder have been described in the previous section. For the emotion and behavior extractors, we employ the BiLSTM-CRF model [43] as the decoder. The structure of the BiLSTM-CRF decoder is shown in Figure 3b.
By feeding the encoded representation Rep into the decoder, we obtain the predicted label for each input token:

Y_ext = CRF(BiLSTM(Rep))        (6)

where Y_ext = {y_[CLS], y_1, y_2, …, y_d, y_[SEP]} represents the predicted labels for the tokens.

Next, we extract the emotion spans E and behavior spans B from the predicted labels:

E = {s_e^1, s_e^2, …, s_e^m}        (7)

B = {s_b^1, s_b^2, …, s_b^n}        (8)
where m is the number of emotion spans and n is the number of behavior spans in document D.
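Under the BIO scheme, the spans in E and B are recovered from the predicted label sequence by grouping each B tag with the I tags that follow it. The sketch below shows this grouping; the tag names B-EMO/I-EMO and the example are illustrative placeholders rather than the label set used in the dataset.

```python
def bio_to_spans(tokens, labels, tag="EMO"):
    """Group B-/I- labels into (start, end, text) spans over token positions."""
    spans, start = [], None
    for i, label in enumerate(labels + ["O"]):           # sentinel "O" closes a trailing span
        if label == f"B-{tag}":
            if start is not None:                         # a new span begins; close the previous one
                spans.append((start, i, "".join(tokens[start:i])))
            start = i
        elif label != f"I-{tag}" and start is not None:   # span ends on "O" or a different tag
            spans.append((start, i, "".join(tokens[start:i])))
            start = None
    return spans

# Example: the emotion span "高兴" ("happy") in "市民很高兴" ("the citizen is very happy")
tokens = ["市", "民", "很", "高", "兴"]
labels = ["O", "O", "O", "B-EMO", "I-EMO"]
print(bio_to_spans(tokens, labels))  # [(3, 5, '高兴')]
```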

5.3. Emotion–Behavior Pair Filter

To extract emotion–behavior pairs, we adopt the prompt paradigm, which converts the task into a binary classification problem. The approach involves selecting an emotion span s_e from E and a behavior span s_b from B, assuming a causal relationship between them. We generate a prompt by applying the selected spans to a template such as "the person is s_e, so this person s_b". This template is denoted as τ.

However, determining the causal relationship between emotion and behavior solely based on the spans can be challenging without considering contextual information and background knowledge. For example, it is difficult to judge the logical correctness of the statement "he is happy, so he thanked the policeman" based solely on the sentence itself. To address this, we incorporate the document D into the generated prompt, resulting in a more comprehensive understanding. This operation is referred to as Prompt, and the resulting text is denoted as P:

P = Prompt(D, s_e, s_b, τ)        (9)
Figure 4 provides an example illustrating the steps involved in generating P.
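A minimal sketch of the Prompt operation in Equation (9) is given below. The template string follows the English gloss above, and the plain concatenation of the document with the filled-in template is an assumption for illustration.

```python
def build_prompt(document, emotion_span, behavior_span,
                 template="the person is {e}, so this person {b}"):
    """Equation (9): combine the document D with the filled-in causal template."""
    hypothesis = template.format(e=emotion_span, b=behavior_span)
    return document + " " + hypothesis   # P, later wrapped as {[CLS], P, [SEP]}

# One candidate pair from the running example in Figure 1:
P = build_prompt(
    "A policeman visited the old man with the lost money ... The citizen was very happy.",
    "happy",
    "thanked the policeman",
)
```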
Subsequently, we construct the input representation for P as P_input = {[CLS], P, [SEP]} and feed it into the encoder:

Rep_P = BERT(P_input)        (10)

where Rep_P = {r_[CLS], r_1, r_2, …, r_p, r_[SEP]} represents the token representations of P, and p is the number of tokens in P.

Emotion–behavior pair filtering is considered a text classification task, and we utilize r_[CLS] as the input for the decoder. Finally, we feed r_[CLS] into a fully connected (FC) layer as the decoder, where the output is a binary classification indicating whether there is a causal relationship between the selected emotion span s_e and behavior span s_b. The structure of the FC decoder is shown in Figure 3c. To obtain the final result of the emotion–behavior pair filter, we apply the argmax function to determine the index of the output from the FC decoder:

Y_filter = argmax(FC(r_[CLS]))        (11)

If Y_filter is 1, this indicates that s_b is driven by s_e, implying a causal relationship between them. Otherwise, no causal relationship is identified.
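The decision step itself is a small classification head over the [CLS] representation. The PyTorch sketch below is illustrative; the hidden size of 768 (that of bert-base-chinese) and the module structure are assumptions.

```python
import torch
import torch.nn as nn

class PairFilter(nn.Module):
    """FC decoder over r_[CLS]; output index 1 means the behavior is driven by the emotion."""
    def __init__(self, hidden_size=768):
        super().__init__()
        self.fc = nn.Linear(hidden_size, 2)

    def forward(self, r_cls):                  # r_cls: (batch, hidden_size)
        logits = self.fc(r_cls)                # (batch, 2)
        return torch.argmax(logits, dim=-1)    # Equation (11): 1 = causal, 0 = not causal

# During training, the logits (rather than the argmax) would be fed to a cross-entropy loss.
```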
Overall, the proposed methodology consists of three parts: an emotion extractor, a behavior extractor, and an emotion–behavior pair filter. Each part includes an encoder and a decoder. The encoders convert input tokens into their vector representations, while the decoders generate the predicted labels for each token or perform binary classification for the emotion–behavior pair filter.

6. Experiments

6.1. Experimental Settings

For our experiments, we randomly split the dataset into three groups: 80% of the data for the training set, 10% for the validation set, and the remaining 10% for the test set.
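A straightforward way to produce such a split is sketched below; the fixed random seed is an assumption added for reproducibility of the illustration.

```python
import random

def split_dataset(instances, seed=42):
    """Randomly split instances into 80% train, 10% validation, and 10% test."""
    items = list(instances)
    random.Random(seed).shuffle(items)
    n_train = int(0.8 * len(items))
    n_val = int(0.1 * len(items))
    return items[:n_train], items[n_train:n_train + n_val], items[n_train + n_val:]
```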
We implemented our experiments using the PaddlePaddle (https://www.paddlepaddle.org.cn (accessed on 10 June 2023)) framework and ran them on a Tesla P100-DGXS GPU. To ensure a fair comparison with existing models, we used bert-base-chinese (https://github.com/PaddlePaddle/PaddleNLP (accessed on 10 June 2023)) as the encoder. We optimized all trainable parameters in our model using the Adam optimizer. To prevent overfitting, we applied dropout regularization and early stopping. Table 4 presents the hyperparameters used in our experiments.
The proposed method was compared with existing models at both the clause and span levels. At the clause level, we compared our method with existing approaches for the ECPE task; at the span level, we compared it with existing approaches for the relation extraction task.
Precision (P), recall (R), and the F1 score are commonly used evaluation metrics to measure the effectiveness of the evaluated methods in classification tasks.
Precision (P) is the ratio of true positive predictions to the total number of positive predictions made by the model. It represents the accuracy of positive predictions and measures how many of the predicted positive instances are actually correct.
Recall (R), also known as sensitivity or true positive rate, is the ratio of true positive predictions to the total number of actual positive instances in the dataset. It indicates the model’s ability to identify all the relevant positive instances correctly.
The F1 score is the harmonic mean of precision and recall, which provides a balanced measure between them. It is calculated as follows:

F1 = 2 × P × R / (P + R)        (12)
In our evaluation, we utilized these metrics to assess the performance of the various methods. The model that obtained the highest F1 score on the validation set was selected for further evaluation on the test set.
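At the span level, a predicted span is counted as a true positive only if it exactly matches an annotated span; under this (assumed) exact-match convention, the three metrics can be computed as follows.

```python
def precision_recall_f1(predicted, gold):
    """Exact-match precision, recall, and F1 over sets of (start, end, label) spans."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)                               # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```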

6.2. Evaluation at the Clause Level

We evaluated our method at the clause level to compare it with existing ECPE approaches, which are similar to our EDBE task. In the ECPE task, data are processed at the clause level, where any clause containing an emotion or cause is considered an emotion or cause clause. Similarly, in the proposed EDBE task, we consider any clause containing an emotion or behavior as an emotion or behavior clause.
Table 5 presents the evaluation results of the proposed method and the existing ECPE approaches on the EDBE task at the clause level. Our method outperforms the baselines on most metrics, particularly in the extraction of emotions. In the emotion extraction and behavior extraction sub-tasks, it surpasses the best existing methods by at least 6.90% and 6.91% in F1, respectively. This enhancement can be attributed to the fact that our data are annotated at the span level and our method also makes its predictions at that level: we extract emotion spans and behavior spans from the text and classify each clause as an emotion or behavior clause based on the presence of these spans. Processing information at this finer granularity allows the model to capture semantics and contextual information more effectively, leading to better performance.

6.3. Evaluation at the Span Level

In this section, we compare our approach with several strong relation extraction approaches. The results are presented in Table 6. At the span level, the predicted emotion and behavior spans must align with the annotated spans, making this task similar to conventional relation extraction.
Table 6 shows the results of existing relation extraction approaches and our approach for the EDBE task at the span level. The results of our method are highlighted in bold. We compared our method with three state-of-the-art relation extraction approaches, including PL-Marker [31], PURE [32], and UniRE [30].
In the span-level evaluation, our approach achieves competitive results compared to existing relation extraction approaches. It achieves the best F1 scores in the behavior extraction and E–B pair extraction sub-tasks, while matching the strongest baseline (PL-Marker) on emotion extraction.
Regarding the emotion extraction sub-task, our method did not demonstrate a significant improvement. This can be attributed to the task’s similarity to standard entity extraction tasks, where the extracted emotion spans tend to be small in size. Consequently, achieving substantial advancements in a well-established task such as emotion extraction is challenging.
Conversely, in the behavior extraction sub-task, we observed a noticeable improvement of 2.83% in the F1 score. This improvement can be attributed to the longer length of behavior spans and the dispersed nature of their semantic information, which make this sub-task more challenging to address.
In the E–B (emotion–behavior) pair extraction sub-task, we also achieved a considerable improvement. This improvement is attributed to the utilization of prompts, which provided additional semantic information to the model, aiding in achieving better results.
Overall, our method shows significant improvements over existing approaches at both the clause and span levels. The ability to capture fine-grained information and leverage semantic and contextual cues enables our model to achieve better performance in emotion, behavior, and E–B pair extraction tasks.

6.4. Ablation Study

In this section, we detail the results of an ablation study to analyze the importance of different components in generating P as defined in Equation (9). The components we examined include the document D, the emotion s_e, the behavior s_b, and the template τ. Each component plays a unique role in the feature learning process, and we aim to assess their individual contributions to the overall performance of our model.
To achieve this, we performed ablation experiments at both the clause and span levels. We removed D and τ from P and replaced s_e with the placeholder "a certain emotion". Additionally, we replaced s_b with the clause c_b in which s_b occurs. We evaluated the performance of the model using precision, recall, and F1 score metrics, and the results are presented in Table 7.
The results of our ablation study demonstrate the significant impact of each component on the feature learning process and validate the effectiveness of all four components. In particular, removing any component leads to a notable decrease in performance. For example, when D is removed from the prompt, the precision, recall, and F1 score at the clause level drop by 21.35%, 10.19%, and 16.69%, respectively, and the span-level metrics decrease by 18.00%, 11.74%, and 14.13%, respectively. Similarly, removing τ results in decreases of 2.20%, 1.74%, and 1.85%, respectively, in the clause-level metrics, as well as decreases of 2.11%, 1.74%, and 1.90%, respectively, in the span-level metrics. These findings underscore the crucial role of both D and τ in the feature learning process.
Furthermore, we observed a notable performance difference between the original template and the modified one wherein s_b is replaced by its corresponding clause, c_b. Specifically, the precision is relatively high, while the recall is comparatively low. This indicates that including a clause as a behavior in the template adds additional information and makes the model more stringent during training, thereby impacting its overall performance. These results suggest that our span-level dataset for the EDBE task has an advantage over clause-level datasets such as ECE and ECPE, as EDBE can capture more nuanced information at the span level, leading to improved performance [23,38,39].
In summary, our ablation study provides valuable insights into the impact of different components on the feature learning process. The results highlight the significance of all four components and demonstrate that removing any component significantly affects the overall performance of the model.

7. Conclusions

Emotions and their impact on behavior have been extensively studied in many fields. However, the relationships between emotions and behaviors in textual data have received relatively little attention in the field of NLP. To address this gap, this paper introduces the EDBE task along with a novel dataset and a pipeline approach. Our study aims to deepen the understanding of how emotions influence behaviors in text.
Our proposed approach exhibits strong performance in extracting emotion-driven behaviors at both the clause and span levels. The evaluation results, including precision, recall, and F1 score metrics, validate the effectiveness of our model in identifying behaviors influenced by emotions.
However, our work also has certain limitations. Firstly, in terms of task definition, we focused solely on emotions and their impacts without considering an analysis of the causes of emotions. This limitation indicates a potential avenue for future research to explore the causal factors behind emotions. Secondly, the construction of the dataset was based on a relatively small sample size, which may introduce limitations in terms of generalizability. It is also worth noting that the dataset construction process involved human annotation, and there may be inherent errors resulting from annotation mistakes. Lastly, the pipeline method we proposed for the task may encounter challenges related to error accumulation among different sub-tasks. These limitations provide opportunities for further refinement and improvement in future studies.
For future research, several promising directions warrant exploration. Firstly, incorporating advanced techniques, such as leveraging large-scale language models, holds potential for further improving our model’s performance. Additionally, expanding the dataset used for EDBE and incorporating ECA can provide a more comprehensive understanding of the impact of emotions on behaviors. Furthermore, investigating the integration of contextual information and exploring the transferability of our model across different domains would be valuable avenues for future exploration. By considering broader contexts and adapting the model to diverse domains, we can enhance the robustness and applicability of our approach.
In summary, this study advances the field of emotion analysis in text mining by introducing the EDBE task along with a comprehensive methodology. Through the successful extraction of emotion-driven behaviors, we have enhanced our understanding of the intricate interplay between emotions and behaviors. The proposed pipeline approach not only lays the groundwork for further research but also holds promising potential for real-world applications in areas such as sentiment analysis, mental health, and social sciences.

Author Contributions

Conceptualization, Y.S.; methodology, Y.S., S.H. and X.H.; software, Y.S.; validation, Y.S.; formal analysis, Y.S. and R.Z.; investigation, Y.S. and X.H.; resources, Y.S. and X.H.; data curation, Y.S.; writing—original draft preparation, Y.S.; writing—review and editing, Y.S., S.H., R.Z. and X.H.; visualization, Y.S.; supervision, Y.S.; project administration, Y.S.; funding acquisition, Y.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant No. 71974187.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The dataset proposed in this paper will be released at https://github.com/yaweisun/edbe_dataset/ (accessed on 10 June 2023).

Acknowledgments

Thanks to AI Studio (https://aistudio.baidu.com/ (accessed on 10 June 2023)) for the computational support.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Liu, B. Sentiment Analysis: Mining Opinions, Sentiments, and Emotions; Cambridge University Press: Cambridge, UK, 2020.
2. Beck, D.; Cohn, T.; Specia, L. Joint emotion analysis via multi-task Gaussian processes. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), Doha, Qatar, 25–29 October 2014; pp. 1798–1803.
3. Xu, J.; Xu, R.; Lu, Q.; Wang, X. Coarse-to-fine sentence-level emotion classification based on the intra-sentence features and sentential context. In Proceedings of the 21st ACM International Conference on Information and Knowledge Management (CIKM 2012), Maui, HI, USA, 29 October–2 November 2012; pp. 2455–2458.
4. Gao, W.; Li, S.; Lee, S.Y.M.; Zhou, G.; Huang, C.R. Joint learning on sentiment and emotion classification. In Proceedings of the 22nd ACM International Conference on Information & Knowledge Management (CIKM 2013), San Francisco, CA, USA, 27 October–1 November 2013; pp. 1505–1508.
5. Das, D.; Bandyopadhyay, S. Finding emotion holder from Bengali blog texts—an unsupervised syntactic approach. In Proceedings of the 24th Pacific Asia Conference on Language, Information and Computation (PACLIC 2010), Tohoku, Japan, 4–7 November 2010; pp. 621–628.
6. Khatoon, S.; Rehman, V. Negative emotions in consumer brand relationship: A review and future research agenda. Int. J. Consum. Stud. 2021, 45, 719–749.
7. Bradley, M.M.; Codispoti, M.; Cuthbert, B.N.; Lang, P.J. Emotion and motivation I: Defensive and appetitive reactions in picture processing. Emotion 2001, 1, 276.
8. Zemack-Rugar, Y.; Bettman, J.R.; Fitzsimons, G.J. The effects of nonconsciously priming emotion concepts on behavior. J. Personal. Soc. Psychol. 2007, 93, 927.
9. Mancini, C.; Falciati, L.; Maioli, C.; Mirabella, G. Happy facial expressions impair inhibitory control with respect to fearful facial expressions but only when task-relevant. Emotion 2022, 22, 142.
10. Calbi, M.; Montalti, M.; Pederzani, C.; Arcuri, E.; Umiltà, M.A.; Gallese, V.; Mirabella, G. Emotional body postures affect inhibitory control only when task-relevant. Front. Psychol. 2022, 13, 6857.
11. Schupp, H.T.; Stockburger, J.; Codispoti, M.; Junghöfer, M.; Weike, A.I.; Hamm, A.O. Selective visual attention to emotion. J. Neurosci. 2007, 27, 1082–1089.
12. Laird, J.D.; Wagener, J.J.; Halal, M.; Szegda, M. Remembering what you feel: Effects of emotion on memory. J. Personal. Soc. Psychol. 1982, 42, 646.
13. Mirabella, G.; Grassi, M.; Mezzarobba, S.; Bernardis, P. Angry and happy expressions affect forward gait initiation only when task relevant. Emotion 2022, 23, 387–399.
14. Field, T. The effects of mother's physical and emotional unavailability on emotion regulation. Monogr. Soc. Res. Child Dev. 1994, 59, 208–227.
15. Li, J.; Zhou, Y.; Ge, Y.; Qu, W. Sensation seeking predicts risky driving behavior: The mediating role of difficulties in emotion regulation. Risk Anal. 2022.
16. Hunnikin, L.M.; Wells, A.E.; Ash, D.P.; Van Goozen, S.H. The nature and extent of emotion recognition and empathy impairments in children showing disruptive behaviour referred into a crime prevention programme. Eur. Child Adolesc. Psychiatry 2020, 29, 363–371.
17. Moore, K.E.; Tull, M.T.; Gratz, K.L. Borderline personality disorder symptoms and criminal justice system involvement: The roles of emotion-driven difficulties controlling impulsive behaviors and physical aggression. Compr. Psychiatry 2017, 76, 26–35.
18. Coté, S. A social interaction model of the effects of emotion regulation on work strain. Acad. Manag. Rev. 2005, 30, 509–530.
19. Johnson, S.K. I second that emotion: Effects of emotional contagion and affect at work on leader and follower outcomes. Leadersh. Q. 2008, 19, 1–19.
20. Polignano, M.; Narducci, F.; de Gemmis, M.; Semeraro, G. Towards emotion-aware recommender systems: An affective coherence model based on emotion-driven behaviors. Expert Syst. Appl. 2021, 170, 114382.
21. Moscato, V.; Picariello, A.; Sperli, G. An emotional recommender system for music. IEEE Intell. Syst. 2020, 36, 57–68.
22. Gui, L.; Wu, D.; Xu, R.; Lu, Q.; Zhou, Y. Event-Driven Emotion Cause Extraction with Corpus Construction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP 2016), Austin, TX, USA, 1–5 November 2016; pp. 1639–1649.
23. Xia, R.; Ding, Z. Emotion-Cause Pair Extraction: A New Task to Emotion Analysis in Texts. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019), Florence, Italy, 28 July–2 August 2019; pp. 1003–1012.
24. Chen, X.; Li, Q.; Wang, J. Conditional causal relationships between emotions and causes in texts. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), Online, 16–20 November 2020; pp. 3111–3121.
25. Bi, H.; Liu, P. ECSP: A new task for emotion-cause span-pair extraction and classification. arXiv 2020, arXiv:2003.03507.
26. Li, W.; Li, Y.; Pandelea, V.; Ge, M.; Zhu, L.; Cambria, E. ECPEC: Emotion-cause pair extraction in conversations. IEEE Trans. Affect. Comput. 2022, 1–12.
27. Eberts, M.; Ulges, A. Span-based joint entity and relation extraction with transformer pre-training. arXiv 2019, arXiv:1909.07755.
28. Cheng, Z.; Jiang, Z.; Yin, Y.; Li, N.; Gu, Q. A Unified Target-Oriented Sequence-to-Sequence Model for Emotion-Cause Pair Extraction. IEEE/ACM Trans. Audio Speech Lang. Process. 2021, 29, 2779–2791.
29. Fan, C.; Yuan, C.; Du, J.; Gui, L.; Yang, M.; Xu, R. Transition-based directed graph construction for emotion-cause pair extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020), Online, 5–10 July 2020; pp. 3707–3717.
30. Wang, Y.; Sun, C.; Wu, Y.; Zhou, H.; Li, L.; Yan, J. UniRE: A Unified Label Space for Entity Relation Extraction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics (ACL 2021), Virtual Event, 1–6 August 2021.
31. Ye, D.; Lin, Y.; Li, P.; Sun, M. Packed Levitated Marker for Entity and Relation Extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), Dublin, Ireland, 22–27 May 2022; pp. 4904–4917.
32. Zhong, Z.; Chen, D. A Frustratingly Easy Approach for Entity and Relation Extraction. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL 2021), Online, 6–11 June 2021.
33. Mao, R.; Liu, Q.; He, K.; Li, W.; Cambria, E. The biases of pre-trained language models: An empirical study on prompt-based sentiment analysis and emotion detection. IEEE Trans. Affect. Comput. 2022, 1–11.
34. Zhao, J.; Li, R.; Jin, Q.; Wang, X.; Li, H. MEmoBERT: Pre-training model with prompt-based learning for multimodal emotion recognition. In Proceedings of the ICASSP 2022–2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2022), Singapore, 23–27 May 2022; pp. 4703–4707.
35. Zheng, X.; Liu, Z.; Zhang, Z.; Wang, Z.; Wang, J. UECA-Prompt: Universal Prompt for Emotion Cause Analysis. In Proceedings of the 29th International Conference on Computational Linguistics (COLING 2022), Gyeongju, Republic of Korea, 12–17 October 2022; pp. 7031–7041.
36. Chen, X.; Zhang, N.; Xie, X.; Deng, S.; Yao, Y.; Tan, C.; Huang, F.; Si, L.; Chen, H. KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In Proceedings of the ACM Web Conference 2022 (WWW 2022), Lyon, France, 25–29 April 2022; pp. 2778–2788.
37. Ekman, P. Universals and cultural differences in facial expressions of emotion. In Nebraska Symposium on Motivation; University of Nebraska Press: Lincoln, NE, USA, 1971.
38. Li, X.; Gao, W.; Feng, S.; Wang, D.; Joty, S. Span-level Emotion Cause Analysis with Neural Sequence Tagging. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management (CIKM 2021), Virtual Event, 1–5 November 2021; pp. 3227–3231.
39. Li, X.; Gao, W.; Feng, S.; Wang, D.; Joty, S. Span-level emotion cause analysis by BERT-based graph attention network. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management (CIKM 2021), Virtual Event, 1–5 November 2021; pp. 3221–3226.
40. Sun, M.; Li, J.; Guo, Z.; Yu, Z.; Zheng, Y.; Si, X.; Liu, Z. THUCTC: An Efficient Chinese Text Classifier. Available online: https://github.com/thunlp/THUCTC (accessed on 10 June 2023).
41. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv 2018, arXiv:1810.04805.
42. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30, 5998–6008.
43. Huang, Z.; Xu, W.; Yu, K. Bidirectional LSTM-CRF models for sequence tagging. arXiv 2015, arXiv:1508.01991.
44. Ding, Z.; Xia, R.; Yu, J. ECPE-2D: Emotion-cause pair extraction based on joint two-dimensional representation, interaction and prediction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020), Online, 5–10 July 2020; pp. 3161–3170.
45. Ding, Z.; Xia, R.; Yu, J. End-to-end emotion-cause pair extraction based on sliding window multi-label learning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), Online, 16–20 November 2020; pp. 3574–3583.
46. Cheng, Z.; Jiang, Z.; Yin, Y.; Wang, C.; Ge, S.; Gu, Q. A Consistent Dual-MRC Framework for Emotion-Cause Pair Extraction. ACM Trans. Inf. Syst. (TOIS) 2022, 41, 1–27.
Figure 1. Comparison of ECE, ECPE, and EDBE tasks with an example.
Figure 2. Overview of the proposed method. It consists of three main parts: an emotion extractor, a behavior extractor, and an emotion–behavior pair filter.
Figure 3. Structures of the encoder and decoders in the proposed method: (a) BERT encoder; (b) Bi-LSTM decoder for emotion and behavior extractors; and (c) FC decoder for the emotion–behavior filter.
Figure 4. An example that shows the steps of generating the prompt (P).
Table 1. Details of the dataset.
Item | Number
Instances | 3599
Clauses | 15,213
Emotion Spans | 4089
Behavior Spans | 7772
Emotion–Behavior Pairs | 4337
Table 2. Distribution of the number of emotion–behavior pairs in each instance.
Pairs | Number | Percentage
0 | 320 | 8.89%
1 | 2389 | 66.38%
2 | 716 | 19.89%
3 | 118 | 3.28%
≥4 | 54 | 1.50%
Table 3. Distribution of emotion types.
Emotion | Number | Percentage
Fear | 1211 | 29.62%
Sadness | 798 | 19.52%
Anger | 792 | 19.37%
Disgust | 758 | 18.54%
Happiness | 390 | 9.54%
Surprise | 139 | 3.40%
Table 4. Hyperparameters used in the experiments.
Hyperparameter | Value
batch size | 16
learning rate | 0.00001
dropout | 0.2
early stopping | 50
Table 5. Results of existing ECPE approaches and our approach on the EDBE task at the clause level.
Model | Emotion Extraction P / R / F1 (%) | Behavior Extraction P / R / F1 (%) | E–B * Pair Extraction P / R / F1 (%)
ECPE [23] | 84.67 / 78.95 / 81.71 | 84.43 / 75.78 / 79.87 | 79.59 / 79.80 / 79.39
ECPE-2D [44] | 94.60 / 89.22 / 91.83 | 85.68 / 79.32 / 82.38 | 84.95 / 84.52 / 84.73
ECPE-MLL [45] | 92.96 / 84.96 / 88.78 | 84.94 / 77.41 / 81.00 | 83.60 / 80.20 / 81.87
CD-MRC [46] | 96.15 / 25.06 / 39.76 | 84.73 / 15.10 / 25.63 | 85.71 / 29.01 / 43.35
UTOS [28] | 84.36 / 86.51 / 85.42 | 84.41 / 89.57 / 86.91 | 81.08 / 83.97 / 82.50
Ours | 98.47 / 98.98 / 98.73 | 91.72 / 96.01 / 93.82 | 83.26 / 89.05 / 86.05
* E–B is the abbreviation of emotion–behavior.
Table 6. Results of existing relation extraction approaches and our approach on the EDBE task at the span level.
Model | Emotion Extraction P / R / F1 (%) | Behavior Extraction P / R / F1 (%) | E–B Pair Extraction P / R / F1 (%)
PL-Marker [31] | 95.59 / 97.74 / 96.65 | 81.89 / 54.15 / 65.19 | 71.05 / 47.01 / 56.59
PURE [32] | 95.31 / 96.74 / 96.02 | 78.92 / 55.51 / 65.18 | 66.20 / 47.26 / 55.15
UniRE [30] | 94.12 / 96.73 / 95.40 | 77.81 / 81.31 / 79.52 | 70.11 / 76.06 / 72.97
Ours | 95.81 / 97.49 / 96.65 | 79.24 / 85.71 / 82.35 | 72.09 / 77.11 / 74.52
Table 7. Results of the ablation study on our template.
Template | Clause Level P / R / F1 (%) | Span Level P / R / F1 (%)
Ours | 83.26 / 89.05 / 86.05 | 72.09 / 77.11 / 74.52
Without D | 61.91 / 78.86 / 69.36 | 53.91 / 68.66 / 60.39
Without τ | 81.06 / 87.31 / 84.07 | 69.98 / 75.37 / 72.57
Without s_e | 49.44 / 88.81 / 63.52 | 42.24 / 75.87 / 54.27
With c_b | 84.26 / 82.59 / 83.42 | 73.10 / 71.64 / 72.36

