Article

Fusion Network for Aspect-Level Sentiment Classification Based on Graph Neural Networks—Enhanced Syntactics and Semantics

School of Computer Science, Qufu Normal University, Rizhao 276800, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(17), 7524; https://doi.org/10.3390/app14177524
Submission received: 15 June 2024 / Revised: 6 August 2024 / Accepted: 19 August 2024 / Published: 26 August 2024

Abstract

Aspect-level sentiment classification (ALSC) aims to correctly identify the aspects of a statement and their corresponding sentiment polarities. Recently, several works have combined the syntactic structure and semantic information of sentences for more effective analysis, and combining sentence knowledge with graph neural networks has also proven effective at ALSC. However, there are still limitations on how to effectively fuse syntactic structure and semantic information when dealing with complex sentence structures and informal expressions. To address these problems, we propose an ALSC fusion network that combines graph neural networks with a simultaneous consideration of syntactic structure and semantic information. Specifically, our model is composed of a syntactic attention module and a semantic enhancement module. First, the syntactic attention module builds a dependency parse tree with the aspect term as the root, so that the model focuses better on the words closely related to the aspect terms, and captures the syntactic structure through a graph attention network. In addition, the semantic enhancement module generates an adjacency matrix through self-attention, which is processed by a graph convolutional network to obtain the semantic details. Lastly, the extracted features are merged to achieve sentiment classification. As verified by experiments, the proposed model can effectively improve ALSC performance.

1. Introduction

ALSC is an entity-oriented, fine-grained form of sentiment classification [1]. “Entity-oriented” refers to the sentiment classification of a specific entity or object, while “fine-grained” refers to a more detailed and specific classification of sentiment, which more accurately captures and expresses the differences and nuances of sentiment in a text. Specifically, it involves analyzing the sentiment polarity of all aspect terms (of which there may be more than one) within a sentence [2]. While traditional sentiment classification tasks usually classify only an entire sentence in terms of its sentiment, ALSC classifies the sentiment of every aspect of a text, which in turn provides a more nuanced understanding of the different aspects. Take “The food is outstanding, but the surrounding in this restaurant is the pits” from the SemEval-2014 Restaurant dataset as an example. In this sentence, the aspect words are “food” and “surrounding”. The two aspects express two different sentiment polarities, with “food” receiving a positive opinion and “surrounding” a negative one. As shown in Figure 1, ALSC separately classifies the two aspects of the sentence, labeling “surrounding” as negative and “food” as positive, whereas traditional sentiment classification labels the entire sentence as negative. Therefore, analyzing aspect-level sentiment polarity is essential for understanding public feedback and improving products and services.
Numerous studies have shown that establishing connections between aspect terms and their corresponding syntactic structures and semantic information is necessary for solving aspect-level sentiment classification tasks [3,4,5]. Intuitively, sentence dependencies (syntactic structure) emphasize the modifier and modified relationships among the individual words of a sentence, and understanding the syntactic roles and associations between words helps integrate dependencies into aspect polarity classification tasks to better capture the sentiment words associated with the aspects. Dependency relations are now broadly adopted by many advanced ALSC methods [6,7]. It has been shown that graph attention networks (GATs) help in ALSC tasks [8,9], and combining dependencies with GAT can greatly boost sentiment classification performance [10]. Many approaches have represented dependencies as adjacency matrices and then used graph neural networks (e.g., GATs [11] or graph convolutional networks (GCNs) [12]) to encode them. Some methods have utilized the topology of dependencies for modeling [13], as well as distance methods based on dependency parse trees [14], which are effective at capturing the complex relationships between sentiment elements. In addition, semantic information [15] has also been used in sub-tasks such as feature extraction and sentence classification to optimize the judgment of sentiment polarity by identifying and capturing the complex relationships and semantic expressions in sentences. In linguistics, semantic information covers both the literal meaning of words and their context-dependent meaning. In recent years, many studies [16,17,18] have combined syntactic structure and semantic information to analyze sentence sentiment, an approach that can comprehensively capture the sentiment-related aspects of the sentence and reduce information loss.
Meanwhile, attention mechanisms have also been applied extensively to graph neural networks [19,20] and ALSC tasks, as they can productively focus on the most relevant parts of a text, enabling a deeper understanding of sentiment polarity. Some current approaches [21,22] incorporate external knowledge to enhance model performance, combining it skillfully with attention or syntax.
In conjunction with the ALSC task, we propose an ALSC fusion network based on graph neural networks. The network aims to improve the accuracy of sentiment analysis by fusing syntactic dependencies and rich semantic information. Reviewing related work, we found that although some studies have incorporated syntactic structure and semantic information into graph neural networks, they tend to overlook the importance of effectively fusing the aspect-related syntactic structure and semantic information in complex situations. Specifically, our model has two modules: the syntactic attention module and the semantic enhancement module. First, we reshape the common dependency tree to construct a tree that regards the aspect terms as its roots, then discard unnecessary relations and apply a GAT to learn the dependencies most relevant to each aspect. Subsequently, a GCN is applied to extract the semantic information, with the attention matrix produced by self-attention acting as the adjacency matrix to deeply mine the semantic information in the text. The two sets of features are then effectively fused for sentiment classification through multi-head cross-attention. To validate the model's performance, we experimented on public datasets, and the results were strong.
Our main contributions are shown below:
  • An aspect-level sentiment classification fusion network with enhanced syntax and semantics combined with graph neural networks is proposed, which integrates syntactic structure and semantic information.
  • The model integrates the semantic GCN and relational GAT to associate dependencies with semantic features through multi-head cross-attention, to strengthen sentence representations.
  • Experiments are conducted on three public datasets. The results show that our proposed model is effective.

2. Related Work

ALSC aims to determine the sentiment polarity of aspects extracted from a given context. Early sentiment classification methods were dominated by the classical text classification approach [23], which does not do a good job of mining the intrinsic connection between aspects and context. In recent years, as neural networks have continued to evolve [24,25], attention-based networks have been used extensively in ALSC. Tang et al. [26] proposed a deep memory network (MemNet) utilizing attention, which predicts sentiment polarity by capturing the relevance of contextual words with respect to the aspect. Ma et al. [27] proposed the Interactive Attention Network (IAN) to construct an interactive representation between the target and context to better grasp the complex relationship between the target aspects and the text. Huang et al. [28] proposed the Attention-over-Attention network (AOA), which captures the relevance of opinion words to aspects by learning attention from aspects and text in both directions. Chen et al. [29] trained attention scores through reinforcement learning and attention-based regularization, treating them as syntactic distances to generate aspect-specific discrete opinion trees. The BERT [30] pre-trained model, built on the Transformer [31], relies on the attention mechanism alone and is used extensively in a variety of natural language processing (NLP) domains. However, these methods may introduce noise while ignoring resources such as dependencies and semantic information, which can help the model understand the meaning and contextual relationships of the text more deeply, thus improving the accuracy and efficiency of NLP tasks.
Syntactic structure-based methods can effectively analyze the structure of sentences and inter-word modification relations to obtain detailed syntactic information. Early work relied on artificially defined syntax rules. In recent years, GNNs that encode dependency parse trees have been widely used and exhibit good results. Gu et al. [32] constructed a GCN that fuses multiple pieces of external knowledge for sentiment classification (SC). Sun et al. [7] combined BiLSTM and GNNs to recognize the sentiment polarity of specific aspects of a sentence through dependency tree-enhanced embeddings. Zhang et al. [33] integrated a GCN with the sentence dependency tree to understand dependency relationships. Nevertheless, these dependency-based methods fail to recognize the significance of the aspect words within sentences. To address this issue, Wang et al. [10] proposed R-GAT, which reshapes and prunes the ordinary dependency parse tree into a novel aspect-oriented tree and encodes it with a relational GAT for sentiment classification. Li et al. [16] aimed to minimize errors in dependency parsing by designing a module with extensive syntax knowledge.
In SC tasks, understanding semantic information is as important as understanding the dependencies between words [34]. Lipenkova [35] utilized general linguistic knowledge to improve classification accuracy. Zhang et al. [36] proposed a new target-guided structured attention network for identifying several semantic fragments in a sentence, which are then fused with the extracted segments for classification. Ma et al. [37] augmented the LSTM with hierarchical attention over targets and sentences and incorporated semantic knowledge into the recurrent encoder for sentiment classification.
The syntactic structure and the semantic information jointly affect the prediction of sentiment polarity, and considering both at the same time helps us understand sentences better. Xu et al. [38] fused syntactic, semantic, and public information through a three-channel GCN and optimized the fusion using a multi-head attention mechanism. Xin et al. [17] fused syntactic and semantic features using a multilayer GAT by introducing constituent trees and aspect-aware attention mechanisms. Wang et al. [39] combined a relational graph attention network to encode syntactic information with a semantic graph convolutional network to capture semantics, enhanced the semantic capacity through regularization, and fused the features through a gating mechanism. Zhang et al. [18] generated matrices through aspect-aware attention and self-attention mechanisms, combined them with syntactic mask matrices, and enhanced node representations using GCN to fuse syntactic and semantic information. Han et al. [40] combined affective knowledge with inter-aspect dependencies, modeled interactions using GCN and a multi-head self-attention mechanism, and incorporated the features through a gating mechanism. Wu et al. [41] integrated lexical and syntactic knowledge into opinion induction trees through reinforcement learning and attention mechanisms, and then analyzed them by applying graph neural networks. These studies validate the necessity and effectiveness of fusing syntactic and semantic information, further demonstrating the value of this approach in improving the performance of sentiment polarity prediction.
Although these models are effective, they do not take into account that complex sentence structures and informal expressions may cause the loss of utterance information, which reduces performance. In sentiment classification tasks, both semantic understanding and word dependencies are important. Therefore, we consider both types of information together and encode them using graph neural networks to capture contextual sentiment features. The features are then fused by multi-head cross-attention to gain a comprehensive understanding of the expressed sentiment.

3. Methodology

In this section, we describe our model, which comprises the syntactic attention module and the semantic enhancement module. The model first takes sentences from the three datasets as input and extracts the aspect words using BERT for the sequence labeling task. Then, BERT is utilized to obtain context-rich vector representations from the sentences and the identified aspect terms, and the obtained vector representations are used as input to the next step. The syntactic attention module (SA) reconstructs the ordinary dependency parse tree to generate a new tree in which the aspect terms act as the root, so as to better guide the model toward words that are closely related to the aspects, and encodes the dependencies through GAT, mapping them to vector representations to obtain dependency features. The semantic enhancement module (SE) extracts the semantic information in the sentences in depth by using a self-attention mechanism to generate an attention matrix that serves as the adjacency matrix, which is then encoded by the GCN. Then, the two sets of features are fused through a multi-head cross-attention mechanism, which learns the interaction between them, and the result is used for sentiment classification. The structure of our model is depicted in Figure 2. We discuss each module in further detail below. Specifically, let a sentence be $S = \{w_1, w_2, \cdots, w_n\}$, denoting a sentence containing $n$ words, and let aspect $A = \{w_{a+1}, \cdots, w_{a+m}\}$ denote a mentioned aspect of the sentence made up of $m$ words.

3.1. BERT

BERT [30] can contextually encode input text and learn rich sentence information, achieving strong results on many tasks. We use BERT to extract aspect features and hidden sentence representations.
The sequence labeling task is first performed using BERT to extract aspects and capture the aspect-aware contextual representation of the sentence. We use the IOB labeling method [42], which consists of three labels: I, O, and B. B marks the start of an aspect, I marks the inside of an aspect, and O marks non-aspect words; for example, in the aspect term “battery life”, “battery” is tagged B and “life” is tagged I. The sentence sequence $S = \{w_1, w_2, \ldots, w_n\}$ corresponds to the label sequence $N = \{b_1, b_2, \ldots, b_n\}$, $b_i \in \{\mathrm{I}, \mathrm{O}, \mathrm{B}\}$. The corresponding hidden states are obtained by feeding “[CLS] + sentence + [SEP]” into BERT. The accuracy of capturing aspects is improved by using a linear layer to classify the sequence of hidden states and identify whether each word is an aspect word or not.
In addition, to capture global features from sentences and aspect terms and obtain the aspect-aware hidden representation of a sentence, we construct “[CLS] + sentence + [SEP] + aspect + [SEP]” as input. This process produces the aspect-aware context-hidden representation H, allowing our model to efficiently extract and understand the input sentence.
$$H = \mathrm{BERT}(\mathrm{Sentence}),$$
We also thoroughly consider how a word relates to all of its associated subwords, providing a more comprehensive understanding of the sentence. Subsequently, the hidden representation $H$ of the sentence is fed into the syntactic attention module and the semantic enhancement module, respectively.
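To make the input construction concrete, the following is a minimal sketch of how the aspect-aware representation H can be obtained, assuming the HuggingFace Transformers API and the bert-base-uncased checkpoint named in Section 4.2; the variable names are illustrative, not the authors' released code.

```python
# Sketch: obtaining the aspect-aware hidden representation H with BERT.
# Assumes the HuggingFace Transformers API; names are illustrative.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

sentence = "The food is outstanding, but the surrounding in this restaurant is the pits"
aspect = "food"

# Passing a sentence pair yields "[CLS] sentence [SEP] aspect [SEP]" automatically.
inputs = tokenizer(sentence, aspect, return_tensors="pt")
with torch.no_grad():
    out = bert(**inputs)

H = out.last_hidden_state  # (1, seq_len, 768): aspect-aware context representation
```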

3.2. Syntax Attention Module (SA)

In NLP, dependencies are syntactic relationships between words that allow one to better comprehend the structure and meaning of a sentence. Aspect terms refer to the entities in a sentence toward which sentiment or a point of view is expressed. Our model incorporates an aspect-oriented dependency representation to capture the association between aspect terms and sentiment polarity. Wang et al. [10] proposed a tree reconstruction algorithm, according to which we reshape the original dependency tree by setting the aspect as the root node and pruning the tree to remove unnecessary relations. For an input sentence, we first apply the dependency parser to obtain its tree. Then, the aspects are designated as the root nodes. Next, the words that have direct dependencies with the aspect terms are set as child nodes, preserving the original dependencies between them. Finally, other dependencies in the sentence that are unrelated to the aspect terms are discarded to minimize the effect of irrelevant node relationships on model performance. For words that do not have direct dependencies with the aspect, a dummy relationship n:con (n connections) is established between the aspect and each respective node, where n denotes the distance between the two nodes. If a sentence involves multiple aspects, we build a unique tree for every aspect. Figure 3 demonstrates the reconstructed dependency parse tree of a sentence containing one aspect word. Figure 4 demonstrates an example of the reconstructed dependency trees of a sentence containing two aspect words, with a unique tree constructed for each aspect.
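The reshaping step can be sketched as a breadth-first traversal from the aspect, following the R-GAT reconstruction [10]; the function below is a simplified illustration under that assumption, not the authors' implementation, and the input format (head, dependent, label) is assumed to come from a dependency parser.

```python
# Sketch of the aspect-oriented tree reshaping: the aspect becomes the root,
# its direct dependents keep their labels, and all other words receive the
# dummy relation "n:con" where n is their distance from the aspect.
from collections import deque

def reshape_tree(deps, aspect_idx, n_words):
    # Undirected adjacency with original dependency labels.
    neighbors = {i: [] for i in range(n_words)}
    for head, dep, label in deps:
        neighbors[head].append((dep, label))
        neighbors[dep].append((head, label))

    relations = {}                 # word index -> relation label to the aspect root
    dist = {aspect_idx: 0}
    queue = deque([aspect_idx])
    while queue:
        node = queue.popleft()
        for nxt, label in neighbors[node]:
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                relations[nxt] = label if dist[nxt] == 1 else f"{dist[nxt]}:con"
                queue.append(nxt)
    return relations
```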
In GAT, each node computes attentional weights with respect to its neighboring nodes. These weights are then used for a weighted summation of the neighboring nodes' features, so each node collects the feature information of its surrounding nodes and aggregates it to renew its own representation. However, this process does not take the dependency labels into account and may lose important sentence information. Wang et al. [10] extended GAT with aspect-oriented dependencies, which inspired us to encode dependency labels using R-GAT. Specifically, GAT aggregates representations of adjacent nodes along the dependencies between them, and in our model these weights are determined by the dependencies between nodes. This process captures the dependencies among nodes and enhances their representations. For sentence S, the reconstructed aspect-oriented dependency parse tree preserves the important dependencies, which are mapped to vector representations. The final dependency representation is calculated as follows.
$$\alpha_{ij} = \sigma\left(\mathrm{ReLU}\left(r_{ij} W_{m1} + b_{m1}\right) W_{m2} + b_{m2}\right),$$
$$\beta_{ij} = \frac{\exp(\alpha_{ij})}{\sum_{j=1}^{N_i} \exp(\alpha_{ij})},$$
$$H_{sa} = \Big\Vert_{m=1}^{M} \sum_{j \in N_i} \beta_{ij} W_m h_j,$$
In the above equations, $r_{ij}$ is the relational embedding from node $i$ to node $j$, $W_{m1}$ is a learned weight matrix that maps $r_{ij}$ to the hidden layer of the GAT, $W_{m2}$ is a learned weight matrix, $b_{m1}$ and $b_{m2}$ are bias vectors, $h_j$ is the feature corresponding to word $j$, $M$ is the number of relational heads in the relational GAT, and $H_{sa}$ is the feature extracted by the syntactic attention module. By combining the reconstructed dependencies rooted in aspect terms and processing them with GAT, we capture the dependencies across words and enhance the representation of nodes.
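As a concrete reading of the equations above, the following PyTorch sketch implements a single relational attention head; the dimension names (d_r, d_h, d_hidden) are illustrative assumptions, and the full module would run M such heads and concatenate their outputs to form $H_{sa}$.

```python
# Sketch: one relational attention head, where attention weights are computed
# from the relation embeddings r_ij rather than from node features alone.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationalAttentionHead(nn.Module):
    def __init__(self, d_r, d_h, d_hidden):
        super().__init__()
        self.fc1 = nn.Linear(d_r, d_hidden)         # W_m1, b_m1
        self.fc2 = nn.Linear(d_hidden, 1)           # W_m2, b_m2
        self.w_m = nn.Linear(d_h, d_h, bias=False)  # W_m

    def forward(self, r, h, mask):
        # r: (n, n, d_r) relation embeddings; h: (n, d_h) word features;
        # mask: (n, n), 1 where j is a neighbor of i in the reshaped tree.
        alpha = torch.sigmoid(self.fc2(F.relu(self.fc1(r)))).squeeze(-1)  # (n, n)
        alpha = alpha.masked_fill(mask == 0, float("-inf"))
        beta = F.softmax(alpha, dim=-1)             # normalize over neighbors N_i
        return beta @ self.w_m(h)                   # (n, d_h) aggregated features
```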

3.3. Semantic Enhancement Module (SE)

Semantic knowledge is the key to breaking out of the vocabulary barrier and digging deeper to find the semantic information contained in words [43]. Vaswani et al. [31] proposed the self-attention mechanism, highlighting its important role in capturing semantic information. Building on this insight, we make full use of the self-attention mechanism to capture the semantically related terms of each word in a sentence and improve the comprehension of the text. The self-attention mechanism can learn the different levels of importance of each element of the sentence. We use self-attention to calculate the attention weights between words and generate an attention matrix $A$, where each element $A_{ij}$ stands for the attention weight between word $i$ and word $j$. We then use the attention matrix as the adjacency matrix: in a GCN, the adjacency matrix describes the connection relationships between nodes, and in the SE these connections are determined by the attention weights computed by self-attention, which better captures the semantic information. The attention score matrix $A$ used as the adjacency matrix can be denoted as:
$$A = \mathrm{softmax}\left(\frac{(X W^Q)(Y W^K)^T}{\sqrt{d_k}}\right),$$
In the above equation, $W^Q$ and $W^K$ are learned weight matrices, $X$ and $Y$ represent the feature representations of the input sequence, and $d_k$ is the dimension of the node features. Our semantic enhancement module uses a single self-attention head to obtain the attention score matrix for the sentence. In each layer of graph convolution, the representation of a node is updated according to the nodes that surround it and the edges between them. Our model uses a 2-layer graph convolutional network, using the adjacency matrix $A$ to define the connectivity between nodes and fusing the information from neighboring nodes through weighted summation or other aggregation functions. Information transfer and aggregation of node features are performed through the graph convolution layers. Each graph convolution layer is calculated as:
$$H^{(l+1)} = \mathrm{ReLU}\left(\frac{A H^{(l)} W^{(l)}}{E + 1}\right),$$
where $A$ is the adjacency matrix, $H^{(l)}$ is the matrix of node features at layer $l$, $W^{(l)}$ is the weight matrix at layer $l$, and $E$ is the degree matrix. In this way, the model can aggregate the features of each node and its neighbors layer by layer and learn a higher-level representation. The semantic enhancement module obtains the final representation $H_{se} = \{h_{se}^{1}, h_{se}^{2}, \ldots, h_{se}^{m}\}$, denoting the hidden representations of all aspect nodes.
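Putting the two equations together, a minimal PyTorch sketch of the semantic enhancement module might look as follows, assuming a single self-attention head and two graph convolution layers as described; the class and variable names are illustrative, not the authors' code.

```python
# Sketch: self-attention builds the adjacency matrix A; a 2-layer GCN encodes it.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticEnhancement(nn.Module):
    def __init__(self, d_model, num_layers=2):
        super().__init__()
        self.w_q = nn.Linear(d_model, d_model, bias=False)  # W^Q
        self.w_k = nn.Linear(d_model, d_model, bias=False)  # W^K
        self.gcn = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(num_layers))

    def forward(self, h):
        # h: (n, d_model) hidden representations from BERT.
        scores = self.w_q(h) @ self.w_k(h).T / h.size(-1) ** 0.5
        a = torch.softmax(scores, dim=-1)            # attention matrix A as adjacency
        degree = a.sum(dim=-1, keepdim=True)         # E, the degree of each node
        for layer in self.gcn:
            h = F.relu(a @ layer(h) / (degree + 1))  # one graph convolution layer
        return h                                     # H_se
```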

3.4. Multi-Head Cross-Attention

Syntactic structure and semantic information collectively influence the prediction of sentiment polarity. When the syntactic structure changes, the semantic meaning also changes to some extent, so considering both simultaneously helps in better understanding the sentence. Therefore, we analyze the interrelationships between the different features and utilize multi-head cross-attention to fuse them, jointly improving model performance. Multi-head cross-attention allows the model to simultaneously learn interactions between features in several different representation subspaces. We use the syntactic feature vectors as the query $Q$ and the semantic feature vectors as the key $K$ and value $V$. These vectors are partitioned into multiple heads $\mathrm{head}_i$, each of which learns a different weight distribution from the outputs of the SA and the SE and performs the attention computation independently. The outputs of all heads are merged to obtain a final feature representation that fuses the information from both modules. The attention output is computed as follows:
$$Q = H_{sa} W^Q,$$
$$K = H_{se} W^K,$$
$$V = H_{se} W^V,$$
$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{Q K^T}{\sqrt{d_k}}\right) V,$$
where $W^V$ is a learned weight matrix, and $H_{sa}$ and $H_{se}$ are the outputs of the SA and the SE, respectively. The multi-head cross-attention mechanism is executed independently on multiple heads to obtain multiple attention outputs. The outputs from all heads are concatenated and transformed through a linear layer to obtain the final fused feature vector.
$$\mathrm{MultiHead} = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_n) W^O,$$
Each head $\mathrm{head}_i$ is computed from the attention described above, and $W^O$ is a trainable weight matrix. Ultimately, the output is used for the subsequent sentiment polarity classification, allowing the two modules to effectively integrate the syntactic structure and semantic information and jointly enhance sentiment classification performance.
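For illustration, this fusion step can be approximated with PyTorch's built-in multi-head attention, using the SA output as the query and the SE output as the key and value; the sequence length and head count below are assumptions for the sketch, with 768 matching BERT's hidden size.

```python
# Sketch: multi-head cross-attention fusing syntactic and semantic features.
import torch
import torch.nn as nn

d_model, n_heads = 768, 6  # dimensions illustrative; 768 matches BERT hidden size
cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

h_sa = torch.randn(1, 20, d_model)  # syntactic attention module output (placeholder)
h_se = torch.randn(1, 20, d_model)  # semantic enhancement module output (placeholder)

# Syntactic features query the semantic features; the per-head outputs are
# concatenated and linearly projected internally (the W^O step).
fused, _ = cross_attn(query=h_sa, key=h_se, value=h_se)  # (1, 20, d_model)
```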

3.5. Loss Function

To efficiently handle our multi-class classification task, our model is optimized with the cross-entropy loss:
$$\mathrm{Loss} = -\sum_{p \in D} \sum_{q=1}^{C} y_{pq} \log \hat{y}_{pq},$$
where $D$ is the set of data samples and $C$ is the number of categories (three here: {neutral: 0, positive: 1, negative: −1}). $\hat{y}_{pq}$ denotes the predicted probability that sample $p$ belongs to the $q$th category, and $y_{pq}$ is the true probability of the $q$th category for sample $p$.
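In PyTorch, this objective corresponds to the standard cross-entropy loss; a minimal sketch follows, with the negative polarity label (−1) remapped to a non-negative class index as PyTorch requires.

```python
# Sketch: cross-entropy over the three polarity classes.
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()  # computes -sum_p sum_q y_pq * log(y_hat_pq)

logits = torch.randn(16, 3)            # classifier outputs for a batch of 16
# PyTorch needs non-negative class indices, so negative polarity (-1) is
# remapped to index 2: {neutral: 0, positive: 1, negative: 2}.
labels = torch.randint(0, 3, (16,))
loss = criterion(logits, labels)
```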

4. Experiment

We first describe the three publicly available datasets used in the experiments. Then, we describe the parameter settings, comparison metrics, and baseline methods used for comparison, and present and analyze the experimental results. Finally, we conduct ablation experiments and a case study to validate our model.

4.1. Datasets

We use the Restaurant and Laptop datasets from SemEval-2014 Task 4 [1], in addition to the Twitter dataset. These datasets were also used in most previous frontier studies, which makes subsequent experimental comparisons easier. The Restaurant dataset is made up of reviews from the restaurant domain. The Laptop dataset contains sentences related to laptops and specific viewpoint targets. The Twitter dataset consists of data from the Twitter social networking site, including news, event opinions, and other data. These datasets all contain three sentiment polarities: positive, neutral, and negative, where {neutral: 0, positive: 1, negative: −1}. Table 1 displays statistics for the three datasets.

4.2. Experimental Parameters

Our experiments used an NVIDIA GeForce RTX 2080 GPU, the Linux operating system, and an environment of Python 3.7.13, PyTorch 1.11.0, and Transformers 4.1.1. We perform the syntactic analysis task with the Biaffine Parser [44] to identify dependencies in sentences. In our experiments, we use pre-trained 300-dimensional GloVe vectors [45]. For our BERT-based model, we utilize the bert-base-uncased version of BERT, leveraging the hidden states from the last layer as word features. Our model is trained for 30 epochs with a batch size of 16, using the Adam optimizer with a learning rate of 5 × 10−5.
GAT is a type of GNN that uses an attention mechanism to process a task. It can focus on specific regions through multiple attention heads, so we examine the effect of the number of attention heads on the model, as illustrated in Figure 5. The model performs best on the three datasets when six attention heads are used. If the number of attention heads exceeds six, overfitting problems may arise; when only one attention head is used, the worst results are obtained. Therefore, we set this parameter to 6 for subsequent experiments.
To study the influence of the number of GCN layers, we evaluate on the three datasets. The experimental results are shown in Figure 6. Evidently, our model performs best when using two layers of GCN. On the one hand, fewer layers limit the propagation ability of the node representations and make it difficult to capture long-distance relationships. On the other hand, excessive layers may lead to information redundancy and make the model unstable. Therefore, we set this parameter to 2 for subsequent experiments.

4.3. Comparison Models

To fully evaluate our model, we compare it with several advanced baseline models.
  • IAN [27] (Interactive Attention Network) generates aspect and sentence representations through two LSTMs and an interactive attention mechanism for interactive learning between context and target.
  • AOA [28] (Attention-over-Attention) jointly models aspects and sentences and automatically attends to the significant parts of a sentence.
  • ASGCN [33] uses multi-layer GCNs to encode and integrate sentence dependency trees to gain aspect-oriented features and solve the multi-word dependency problem in sentiment classification.
  • CDT [7] learns sentence feature representations via BiLSTM, enhances the embeddings through convolution over the dependency parse trees of sentences, and learns aspect representations with syntactic information.
  • R-GAT [10] defines aspect-oriented dependency structures by recasting and trimming the common dependency parse tree. The new tree is then encoded using R-GAT.
  • R-GAT + BERT [10] is the R-GAT model using pre-trained BERT instead of BiLSTM as the encoder.
  • DualGCN [16] integrates syntactic and semantic information for sentiment categorization and utilizes orthogonal and differential regularizers to process semantically related terms.
  • DualGCN + BERT [16] is the DualGCN model using the pre-trained BERT instead of BiLSTM as the encoder.
  • dotGCN [29] regards aspect-to-context attention scores as syntactic distances, constructs aspect-specific tree models, and simplifies complex structures.
  • RAG-TCGCN [38] fuses syntactic, semantic, and public information through a three-channel GCN, optimizes fusion using a multi-head attention mechanism, and reduces information loss through a residual attention gating mechanism.
  • SSEGCN [18] generates sentence attention score matrices through aspect-aware attention mechanisms and self-attention mechanisms, combines them with syntactic mask matrices, and utilizes GCNs to enhance node representations, fusing syntactic and semantic information for ALSC tasks.
  • SSEMGAT [17] fuses syntactic and semantic features using a multilayer GAT by introducing constituent trees and aspect-aware attention mechanisms.
  • GMF-SKIA [40] combines affective knowledge and inter-aspect dependencies to model interactions using GCNs and multi-head self-attention mechanisms, and fuses features through gating mechanisms.
  • LSOIT [41] integrates semantic knowledge, phrase structure, and dependencies via reinforcement learning and attention and then applies GNNs for analysis.

4.4. Comparative Indicators

Here, we use Accuracy and the Macro-F1 score for model assessment. Accuracy is a straightforward metric that represents the percentage of samples predicted correctly. The Macro-F1 score averages the F1 score over all classes, treating every class equally regardless of class frequency.
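Both metrics can be computed with scikit-learn, as in the brief sketch below; the label values follow the {neutral: 0, positive: 1, negative: −1} scheme used for the datasets, and the sample arrays are illustrative.

```python
# Sketch: the two evaluation metrics via scikit-learn.
from sklearn.metrics import accuracy_score, f1_score

y_true = [1, 0, -1, 1, -1, 0]   # gold polarities (illustrative)
y_pred = [1, 0, 1, 1, -1, 0]    # model predictions (illustrative)

acc = accuracy_score(y_true, y_pred)                  # fraction predicted correctly
macro_f1 = f1_score(y_true, y_pred, average="macro")  # unweighted mean of per-class F1
```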

4.5. Results and Analysis

To evaluate our model, we use Accuracy and the Macro-F1 score as the main metrics for comparison with existing baseline models. The principal findings of the experiment are listed in Table 2. Our model performs well on the Laptop, Restaurant, and Twitter datasets, with Accuracies of 81.80, 87.26, and 78.35, and Macro-F1 scores of 78.63, 83.08, and 77.27, respectively. The results show that, compared to other recently published methods, the model proposed in this paper provides a performance improvement in most cases. Our proposed model can integrate semantic and syntactic information.
Among the attention-based methods (e.g., IAN and AOA), which can automatically capture useful features in a sentence, AOA performs better than IAN because it jointly models the interactions between aspects and context sentences and automatically attends to the important parts of the sentence. In contrast, our model fuses syntactic structure and semantic information, which improves the ability to integrate information when dealing with complex sentence structures and avoids the noise that the attention mechanism may introduce in complex contexts.
The syntax-based methods (e.g., CDT, ASGCN, and R-GAT) are more effective than the attention-based models (e.g., AOA, IAN) because syntactic information is introduced into the model. R-GAT performs a little better than ASGCN and CDT because it better captures the link between dependencies and aspects and encodes the dependency tree using a relational GAT. Although syntax-based methods offer a significant performance improvement over attention-based methods, they neglect the semantic information between words. There are also methods that utilize the attention mechanism rather than syntactic structures to capture knowledge characteristics, such as dotGCN, which uses attention and reinforcement learning to capture aspect and sentiment polarity pairs and can achieve good results.
Nevertheless, when dealing with complex sentences, using only syntactic structures leads to poor performance. Methods that consider both syntactic and semantic features (e.g., DualGCN, RAG-TCGCN, SSEGCN, SSEMGAT, GMF-SKIA, and LSOIT) are able to capture sentence information more comprehensively, which helps the model understand sentiment polarity more accurately. DualGCN performs a little better than RAG-TCGCN and GMF-SKIA because it utilizes a dual graph convolutional network to capture both local and global information with a more effective multi-view fusion mechanism. LSOIT performs better than SSEMGAT due to its knowledge integration through reinforcement learning and attention mechanisms. SSEGCN performs better than SSEMGAT on the Restaurant and Laptop datasets due to its aspect-aware attention mechanisms and its use of GCN to fuse sentence information.
The addition of semantic knowledge can help the model better comprehend what words mean and dig into the deeper features of sentences. Dependencies can capture the relationships between modifiers and modified words, regardless of the distance between aspect terms and opinion words. Thus, from a linguistic compositional perspective, combining the two kinds of knowledge can improve the capture of affective information.
Judging from the experimental results, the methods using pre-trained BERT models have better overall performance (e.g., R-GAT + BERT, DualGCN + BERT), which demonstrates the superiority of BERT in feature representation. With the help of BERT, our model obtains better results than other comparable models in terms of Accuracy and Macro-F1 score; on the Restaurant dataset, our Accuracy is only 0.05 lower than that of the state-of-the-art SSEGCN model. On the Laptop dataset, the Accuracy of our model is comparable to that of the state-of-the-art SSEGCN model, indicating that our model performs effectively on this dataset, and the Macro-F1 score is 0.53 higher than that of the state-of-the-art DualGCN + BERT model. On the Twitter dataset, our model outperforms the most advanced dotGCN by 0.24 in Accuracy and 0.27 in Macro-F1. Our model incorporates two kinds of sentence knowledge to enhance sentiment classification performance and helps the model focus on aspect-related sentiment words, which is also helpful for sentiment classification. It is further shown that combining syntactic structure and semantic information with graph neural networks can enhance sentiment classification performance. These results demonstrate the usefulness of our model.

4.6. Ablation Experiment

Our model includes the SA and the SE. To further determine whether the integration of syntactic structure and semantic information enhances textual representation, and to evaluate the contribution of each module, we perform ablation experiments on the three datasets, evaluating the impact of each module separately to document their relative importance; the results are presented in Table 3. We removed the semantic enhancement module and the syntactic attention module, respectively. The SA utilizes aspect-oriented dependencies to capture syntactic information about utterances, and this rich syntactic knowledge can improve the analysis results. The SE builds an adjacency matrix through the self-attention layer to obtain rich semantic information. Here, “w/o” means “without”. Performance is optimal when both modules are used simultaneously. Without either module, the Accuracy and the Macro-F1 score decrease to varying degrees, degrading the model performance. This illustrates that each module has its own unique role and is essential to our complete model. Overall, deleting the SA resulted in the largest decreases in Accuracy and Macro-F1 on Restaurant14, with drops of 2.29 and 4.10, respectively. The decreases after deleting the SE were not as large as those after deleting the SA, showing that the dependency information has a particularly significant impact on the model. In other words, the use of aspect-oriented dependencies greatly improves model performance, which proves the validity of the model. The model's use of dependencies and syntactic structure enables it to better focus on aspect-related words, which in turn improves the performance of ALSC tasks.

4.7. Case Study

To visually demonstrate the model's capabilities, we further validate our approach with a case study. We analyze a few randomly selected sentences from the Restaurant dataset; Table 4 exhibits the results, where the bold part of each sentence is the aspect word, × marks a wrong prediction, and √ marks a correct prediction. The first sentence has a neutral sentiment polarity. The sentiment of this type of sentence is not particularly obvious, and a model may misinterpret certain words in the sentence and thus misjudge the neutral sentiment. For example, in the first sentence, the word “but” is used progressively, complementing and emphasizing what has been mentioned before, but it may be mistaken for a negative attitude. In the third sentence, the AOA model does not catch the key phrase “are not”, leading to a prediction error: “terrible” is a pejorative word, but it is preceded by the negation “not”, so the overall sentiment is positive. DualGCN integrates syntactic knowledge and semantic information for sentiment classification and utilizes orthogonal and differential regularizers to process semantically related items; it outperforms the predictions of R-GAT and AOA but still suffers from prediction errors. In contrast, our model focuses on the syntactic structure and semantic associations of the words and obtains the correct sentiment classification results after comprehensive consideration. These examples emphasize the importance of recognizing the overall utterance information in a sentence for the ALSC task; our model can effectively capture comprehensive utterance information, utilizing the complementary nature of syntactic structure and semantic information to accurately predict sentiment polarity.

5. Conclusions

This study presents a fusion network for ALSC. Specifically, our model integrates syntactic structure and semantic information through the SA and the SE, which are combined with GNNs to enhance sentence information, thus capturing more comprehensive utterance information. Our syntactic attention module reshapes the dependency parse tree with aspect terms as root nodes, enabling the model to attend more closely to words that are linked to aspect terms, and captures syntactic structure using GAT. The semantic enhancement module obtains the adjacency matrix through self-attention and encodes the semantic information related to aspect terms with the GCN. The capture of sentiment information is enhanced by combining syntactic structure and semantic information through multi-head cross-attention. Experiments on three publicly available datasets demonstrate that our model achieves strong results, proving that our method is effective and feasible.

Author Contributions

Conceptualization, M.L. and Y.L.; methodology, M.L.; software, W.Z. and M.L.; validation, M.L., Y.L. and W.Z.; formal analysis, M.L.; investigation, M.L.; resources, M.L.; data curation, M.L.; writing—original draft preparation, M.L.; writing—review and editing, M.L., Y.L. and W.Z.; visualization, M.L.; supervision, Y.L.; project administration, W.Z.; funding acquisition, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Shandong Province Teaching Reform Project (project no. S2018Z022).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Pontiki, M.; Galanis, D.; Pavlopoulos, J.; Papageorgiou, H.; Androutsopoulos, I.; Manandhar, S. Semeval-2014 task 4: Aspect based sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), Dublin, Ireland, 23–24 August 2014; pp. 27–35. [Google Scholar]
  2. Tan, K.L.; Lee, C.P.; Lim, K.M. A survey of sentiment analysis: Approaches, datasets, and future research. Appl. Sci. 2023, 13, 4550. [Google Scholar] [CrossRef]
  3. Zhang, W.; Li, X.; Deng, Y.; Bing, L.; Lam, W. A survey on aspect-based sentiment analysis: Tasks, methods, and challenges. IEEE Trans. Knowl. Data Eng. 2023, 35, 11019–11038. [Google Scholar]
  4. Socher, R.; Huval, B.; Manning, C.D.; Ng, A.Y. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, Jeju Island, Republic of Korea, 12–14 July 2012; pp. 1201–1211. [Google Scholar]
  5. Huang, Z.; Liu, H.; Zhu, J.; Min, J. Customer sentiment recognition in conversation based on contextual semantic and affective interaction information. Appl. Sci. 2023, 13, 7807. [Google Scholar] [CrossRef]
  6. Zhao, G.; Luo, Y.; Chen, Q.; Qian, X. Aspect-based sentiment analysis via multitask learning for online reviews. Knowl.-Based Syst. 2023, 264, 110326. [Google Scholar]
  7. Sun, K.; Zhang, R.; Mensah, S.; Mao, Y.; Liu, X. Aspect-level sentiment analysis via convolution over dependency tree. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, Hong Kong, China, 3–7 November 2019; pp. 5679–5688. [Google Scholar]
  8. Zhai, S.; Chai, Y.; Wang, H.; Gou, D. Aspect-level sentiment joint detection based on graph attention network. Adv. Nat. Comput. 2022, 89, 760–768. [Google Scholar]
  9. Wu, H.; Zhang, Z.; Shi, S.; Wu, Q.; Song, H. Phrase dependency relational graph attention network for aspect-based sentiment analysis. Knowl. Based Syst. 2021, 236, 107736. [Google Scholar]
  10. Wang, K.; Shen, W.; Yang, Y.; Quan, X.; Wang, R. Relational graph attention network for aspect-based sentiment analysis. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, 5–10 July 2020; pp. 3229–3238. [Google Scholar]
  11. Bai, X.; Liu, P.; Zhang, Y. Investigating typed syntactic dependencies for targeted sentiment classification using graph attention neural network. IEEE/ACM Trans. Audio Speech Lang. Process. 2021, 29, 503–514. [Google Scholar]
  12. Liang, B.; Su, H.; Gui, L.; Cambria, E.; Xu, R. Aspect-based sentiment analysis via affective knowledge enhanced graph convolutional networks. Knowl. Based Syst. 2022, 235, 107643. [Google Scholar]
  13. Tang, H.; Ji, D.; Li, C.; Zhou, Q. Dependency graph enhanced dual-transformer structure for aspect-based sentiment classification. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, 5–10 July 2020; pp. 6578–6588. [Google Scholar]
  14. Phan, M.H.; Ogunbona, P. Modelling context and syntactical features for aspect-based sentiment analysis. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, 5–10 July 2020; pp. 3211–3220. [Google Scholar]
  15. Agarwal, B.; Mittal, N. Semantic orientation-based approach for sentiment analysis. In Prominent Feature Extraction for Sentiment Analysis; Springer: Berlin/Heidelberg, Germany, 2016; pp. 77–88. [Google Scholar] [CrossRef]
  16. Li, R.; Chen, H.; Feng, F.; Ma, Z.; Wang, X.; Hovy, E. Dual graph convolutional networks for aspect-based sentiment analysis. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Online, 1–6 August 2021; pp. 6319–6329. [Google Scholar]
  17. Xin, X.; Wumaier, A.; Kadeer, Z.; He, J. Ssemgat: Syntactic and semantic enhanced multi-layer graph attention network for aspect-level sentiment analysis. Appl. Sci. 2023, 13, 5085. [Google Scholar] [CrossRef]
  18. Zhang, Z.; Zhou, Z.; Wang, Y. Ssegcn: Syntactic and semantic enhanced graph convolutional network for aspect-based sentiment analysis. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Seattle, WA, USA, 10–15 July 2022; pp. 4916–4925. [Google Scholar]
  19. Fan, F.; Feng, Y.; Zhao, D. Multi-grained attention network for aspect-level sentiment classification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, 31 October–4 November 2018; pp. 3433–3442. [Google Scholar]
  20. Yadav, R.K.; Jiao, L.; Goodwin, M.; Granmo, O.C. Positionless aspect based sentiment analysis using attention mechanism. Knowl.-Based Syst. 2021, 226, 107136. [Google Scholar]
  21. Zhao, F.; Wu, Z.; Dai, X. Attention transfer network for aspect-level sentiment classification. In Proceedings of the 28th International Conference on Computational Linguistics, Barcelona, Spain, 8–13 December 2020; pp. 811–821. [Google Scholar]
  22. Liang, Y.; Meng, F.; Zhang, J.; Chen, Y.; Xu, J.; Zhou, J. An iterative multi-knowledge transfer network for aspect-based sentiment analysis. In Proceedings of the Findings of the Association for Computational Linguistics: Empirical Methods in Natural Language Processing 2021, Punta Cana, Dominican Republic, 7–11 November 2021; pp. 1768–1780. [Google Scholar]
  23. Kiritchenko, S.; Zhu, X.; Cherry, C.; Mohammad, S. Nrc-Canada-2014: Detecting aspects and sentiment in customer reviews. In Proceedings of the 8th International Workshop on Semantic Evaluation, Dublin, Ireland, 23–24 August 2014; pp. 437–442. [Google Scholar]
  24. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105. [Google Scholar]
  25. Wang, Y.; Huang, M.; Zhao, L.; Zhu, X. Attention-based lstm for aspect-level sentiment classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, TX, USA, 1–5 November 2016; pp. 606–615. [Google Scholar]
  26. Tang, D.; Qin, B.; Liu, T. Aspect level sentiment classification with deep memory network. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, TX, USA, 1–5 November 2016; pp. 214–224. [Google Scholar]
  27. Ma, D.; Li, S.; Zhang, X.; Wang, H. Interactive attention networks for aspect-level sentiment classification. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, Melbourne, VIC, Australia, 19–25 August 2017; pp. 4068–4074. [Google Scholar]
  28. Huang, B.; Ou, Y.; Carley, K.M. Aspect level sentiment classification with attention-over-attention neural networks. In Proceedings of the Social, Cultural, and Behavioral Modeling: 11th International Conference, SBP-BRiMS 2018, Washington, DC, USA, 10–13 July 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 197–206. [Google Scholar]
  29. Chen, C.; Teng, Z.; Wang, Z.; Zhang, Y. Discrete opinion tree induction for aspect-based sentiment analysis. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland, 22–27 May 2022; pp. 2051–2064. [Google Scholar]
  30. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the NAACL-HLT 2019, Minneapolis, MN, USA, 2–7 June 2019; pp. 4171–4186. [Google Scholar]
  31. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is all you need. In Proceedings of the 31st Conference on Neural Information Processing Systems, Red Hook, NY, USA, 4–9 December 2017; pp. 5998–6008. [Google Scholar]
  32. Gu, T.; Zhao, H.; He, Z.; Li, M.; Ying, D. Integrating external knowledge into aspect-based sentiment analysis using graph neural network. Knowl. Based Syst. 2022, 259, 110025. [Google Scholar]
  33. Zhang, C.; Li, Q.; Song, D. Aspect-based sentiment classification with aspect-specific graph convolutional networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, Hong Kong, China, 3–7 November 2019; pp. 4568–4578. [Google Scholar]
  34. Ha, H.; Han, H.; Mun, S.; Bae, S.; Lee, J.; Lee, K. An improved study of multilevel semantic network visualization for analyzing sentiment word of movie review data. Appl. Sci. 2019, 9, 2419. [Google Scholar] [CrossRef]
  35. Lipenkova, J. A system for fine-grained aspect-based sentiment analysis of Chinese. In Proceedings of the ACL-IJCNLP 2015 System Demonstrations, Beijing, China, 26–31 July 2015; pp. 55–60. [Google Scholar]
  36. Zhang, J.; Chen, C.; Liu, P.; He, C.; Leung, W.K. Target-guided structured attention network for target-dependent sentiment analysis. Trans. Assoc. Comput. Linguist. 2020, 8, 172–182. [Google Scholar]
  37. Ma, Y.; Peng, H.; Cambria, E. Targeted aspect-based sentiment analysisvia embedding commonsense knowledge into an attentive lstm. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018. [Google Scholar]
  38. Xu, H.; Liu, S.; Wang, W.; Deng, L. Rag-tcgcn: Aspect sentiment analysis based on residual attention gating and three-channel graph convolutional networks. Appl. Sci. 2022, 12, 12108. [Google Scholar] [CrossRef]
  39. Pengcheng, W.; Linping, T.; Mingwei, T.; Liuxuan, W.; Yangsheng, X.; Mingfeng, Z. Incorporating syntax and semantics with dual graph neural networks for aspect-level sentiment analysis. Eng. Appl. Artif. Intell. 2024, 133, 108101. [Google Scholar]
  40. Han, Y.; Zhou, X.; Wang, G.; Feng, Y.; Zhao, H.; Wang, J. Fusing sentiment knowledge and inter-aspect dependency based on gated mechanism for aspect-level sentiment classification. Neurocomputing 2023, 551, 126462. [Google Scholar]
  41. Wu, H.; Zhou, D.; Sun, C.; Zhang, Z.; Ding, Y.; Chen, Y. Lsoit: Lexicon and syntax enhanced opinion induction tree for aspect-based sentiment analysis. Expert Syst. Appl. 2024, 235, 121137. [Google Scholar]
  42. Kato, T.; Abe, K.; Ouchi, H.; Miyawaki, S.; Inui, K. Embeddings of label components for sequence labeling: A case study of fine-grained named entity recognition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, 5–10 July 2020; pp. 222–229. [Google Scholar]
  43. Niu, Y.; Ruobing, X.; Liu, Z.; Sun, M. Improved word representation learning with sememes. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Vancouver, BC, Canada, 30 July–4 August 2017; pp. 2049–2058. [Google Scholar]
  44. Dozat, T.; Manning, C.D. Deep biaffine attention for neural dependency parsing. In Proceedings of the 5th International Conference on Learning Representations, Toulon, France, 24–26 April 2017. [Google Scholar]
  45. Pennington, J.; Socher, R.; Manning, C. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, Doha, Qatar, 25–29 October 2014; pp. 1532–1543. [Google Scholar]
Figure 1. Aspect-level sentiment classification (ALSC) and sentiment classification (SC). The top sentence is the ALSC task and below it is the SC task.
Figure 2. The architecture of the model we propose is illustrated, including the syntactic attention module (SA) and the semantic enhancement module (SE).
Figure 3. Dependency parse tree for sentences containing an aspect word with the aspect word “game” as the root node.
Figure 4. The sentence contains two aspects, “food” and “surrounding”, each of which has a unique dependency parse tree with the root node of the aspects.
Figure 5. The number of attention heads in GAT.
Figure 6. The number of GCN layers.
Table 1. The data of the three experimental datasets.

| Datasets | Posi. Train | Posi. Test | Nega. Train | Nega. Test | Neut. Train | Neut. Test |
|---|---|---|---|---|---|---|
| Restaurant | 2164 | 728 | 807 | 196 | 637 | 196 |
| Laptop | 994 | 341 | 870 | 128 | 464 | 169 |
| Twitter | 1561 | 173 | 3127 | 346 | 1560 | 173 |
Table 2. The results of comparative experiments. “Accu.” is Accuracy. The best experimental results are shown in bold.

| Model | Restaurant Accu. | Restaurant Macro-F1 | Laptop Accu. | Laptop Macro-F1 | Twitter Accu. | Twitter Macro-F1 |
|---|---|---|---|---|---|---|
| IAN, 2017 [27] | 78.60 | - | 72.10 | - | - | - |
| AOA, 2018 [28] | 81.20 | - | 74.50 | - | - | - |
| ASGCN, 2019 [33] | 80.77 | 72.02 | 75.55 | 71.05 | 72.15 | 70.40 |
| CDT, 2019 [7] | 82.30 | 74.02 | 77.19 | 72.99 | 74.66 | 73.66 |
| R-GAT, 2020 [10] | 83.30 | 76.08 | 77.42 | 73.76 | 75.57 | 73.82 |
| R-GAT + BERT, 2020 [10] | 86.60 | 81.35 | 78.21 | 74.07 | 76.15 | 74.88 |
| DualGCN, 2021 [16] | 84.27 | 78.08 | 78.48 | 74.74 | 75.92 | 74.29 |
| DualGCN + BERT, 2021 [16] | 87.13 | 81.86 | **81.80** | 78.10 | 77.40 | 76.02 |
| dotGCN, 2022 [29] | 86.16 | 80.49 | 81.03 | 78.10 | 78.11 | 77.00 |
| RAG-TCGCN, 2022 [38] | 84.09 | 77.02 | 78.80 | 75.04 | 76.66 | 75.41 |
| SSEGCN, 2022 [18] | **87.31** | 81.09 | 81.01 | 77.96 | 77.40 | 76.02 |
| SSEMGAT, 2023 [17] | 86.42 | 79.70 | 80.06 | 76.78 | 76.81 | 76.10 |
| GMF-SKIA, 2023 [40] | 86.25 | 79.81 | 81.19 | 77.94 | - | - |
| LSOIT, 2024 [41] | 86.88 | 82.27 | 81.41 | 77.16 | 77.75 | 76.94 |
| Ours | 87.26 | **83.08** | **81.80** | **78.63** | **78.35** | **77.27** |
Table 3. Ablation experiment results.

| Model | Laptop Accu. | Laptop Macro-F1 | Restaurant Accu. | Restaurant Macro-F1 | Twitter Accu. | Twitter Macro-F1 |
|---|---|---|---|---|---|---|
| w/o SA | 79.05 | 75.71 | 84.97 | 78.98 | 75.85 | 74.32 |
| w/o SE | 80.56 | 77.00 | 86.88 | 81.16 | 76.59 | 75.11 |
| Ours | 81.80 | 78.63 | 87.26 | 83.08 | 78.35 | 77.27 |
Table 4. The results of the case study experiment. “Neut.” is Neutral, “Nega.” is Negative, and “Posi.” is Positive. Bolded are the aspect words in the sentences. “√” indicates a correct result for sentiment categorization and “×” indicates an incorrect result.

| Sentence | Actual | Ours | DualGCN | R-GAT | AOA |
|---|---|---|---|---|---|
| It was enjoyable, but none of the **scents** amazed me. | Neut. | Neut. √ | Nega. × | Nega. × | Posi. × |
| The **waiter** poured water on my hand and walked away. | Nega. | Nega. √ | Neut. × | Nega. √ | Nega. √ |
| The **prices** are not terrible. | Posi. | Posi. √ | Posi. √ | Posi. √ | Nega. × |
| Fantastic **ambiance**, but the **staff** was absolutely terrible! | Posi.; Nega. | Posi. √; Nega. √ | Posi. √; Nega. √ | Posi. √; Nega. √ | Nega. ×; Nega. √ |
| Despite the high **prices**, **dining** here is consistently satisfying. | Nega.; Posi. | Nega. √; Posi. √ | Nega. √; Posi. √ | Nega. √; Nega. × | Nega. √; Posi. √ |