A Cross-Domain Generative Data Augmentation Framework for Aspect-Based Sentiment Analysis
Abstract
1. Introduction
- We propose CDGDA, a cross-domain generative data augmentation framework that generates new fine-grained in-domain data by using a generation model to transfer knowledge from abundant coarse-grained, out-of-domain sentiment analysis datasets.
- We incorporate a cross-entropy filter into CDGDA to improve output quality by reducing duplicate generated data.
- We evaluate CDGDA on three public datasets; the experiments demonstrate that our framework outperforms the baseline methods.
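As a rough illustration of the filtering idea (not the paper's exact procedure), a cross-entropy filter can score each generated sentence by its average per-token cross-entropy under a language model, then discard duplicates and improbable outputs. The toy unigram scorer, the `max_ce` threshold, and the function names below are illustrative assumptions; in CDGDA the score would come from the generation model itself.

```python
import math

def avg_cross_entropy(sentence, unigram_probs, eps=1e-6):
    # Average negative log-probability per token under a (toy) unigram LM.
    # Unseen tokens fall back to a small probability `eps`.
    tokens = sentence.lower().split()
    nll = sum(-math.log(unigram_probs.get(t, eps)) for t in tokens)
    return nll / max(len(tokens), 1)

def filter_generated(candidates, unigram_probs, max_ce=8.0):
    # Drop exact duplicates (after whitespace/case normalization) and
    # sentences whose cross-entropy exceeds the threshold, i.e. outputs
    # the language model considers unlikely.
    kept, seen = [], set()
    for sent in candidates:
        norm = " ".join(sent.lower().split())
        if norm in seen:
            continue
        if avg_cross_entropy(sent, unigram_probs) <= max_ce:
            seen.add(norm)
            kept.append(sent)
    return kept
```

With a tiny probability table, a repeated sentence and a gibberish sentence are both removed while the first well-formed candidate survives.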
2. Related Work
2.1. Language Models
- Left-to-Right: GPT, GPT-2, GPT-3 [18]
2.2. Aspect-Based Sentiment Analysis
2.3. Domain Adaptation
2.4. Data Augmentation
3. Methodology
3.1. Task Formulation
3.2. Sentence Generation
3.3. Model Architecture
Algorithm 1 Sentence generation
Require: the in-domain dataset; the out-of-domain dataset; the number of aspect terms in the in-domain training set; the number of sentences; the generation model; the model parameters
Ensure: X: sentences for data augmentation
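The generation loop of Algorithm 1 can be sketched roughly as follows. The prompt template, the aspect/label pairing scheme, and the `model` callable's interface are assumptions made for illustration, not the paper's exact design:

```python
from typing import Callable, Iterable, List, Tuple

def cross_domain_generate(
    out_domain: Iterable[Tuple[str, str]],   # (coarse-grained sentence, sentiment label)
    in_domain_aspects: List[str],            # aspect terms drawn from the target domain
    model: Callable[[str], str],             # fine-tuned generation model (assumed interface)
    n_per_source: int = 1,
) -> List[str]:
    # For each out-of-domain sentence, build a prompt that pairs a
    # target-domain aspect term with the coarse sentiment label, then let
    # the generator produce a new fine-grained, in-domain sentence.
    augmented = []
    for i, (sentence, label) in enumerate(out_domain):
        for k in range(n_per_source):
            aspect = in_domain_aspects[(i + k) % len(in_domain_aspects)]
            prompt = f"aspect: {aspect} | sentiment: {label} | context: {sentence}"
            augmented.append(model(prompt))
    return augmented
```

Substituting an identity function for `model` shows the prompts the generator would receive; in practice the output of this loop would then pass through the cross-entropy filter before being added to the training set.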
4. Experiment
4.1. Datasets
4.2. Experimental Setting
4.3. Compared Methods
4.4. Main Results
4.5. Effects of Hyperparameter and Data Volume
4.6. Ablation Study
4.7. Case Study
5. Discussion and Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Pontiki, M.; Galanis, D.; Pavlopoulos, J.; Papageorgiou, H.; Androutsopoulos, I.; Manandhar, S. SemEval-2014 Task 4: Aspect Based Sentiment Analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), Dublin, Ireland, 23–24 August 2014; Association for Computational Linguistics: Dublin, Ireland, 2014; pp. 27–35.
- Zhao, J.; Liu, K.; Xu, L. Book Review: Sentiment Analysis: Mining Opinions, Sentiments, and Emotions. Comput. Linguist. 2016, 42, 595–598.
- Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, MN, USA, 2–7 June 2019; Association for Computational Linguistics: Minneapolis, MN, USA, 2019; pp. 4171–4186.
- Chen, Y.; Zhang, Z.; Zhou, G.; Sun, X.; Chen, K. Span-based dual-decoder framework for aspect sentiment triplet extraction. Neurocomputing 2022, 492, 211–221.
- Lewis, M.; Liu, Y.; Goyal, N.; Ghazvininejad, M.; Mohamed, A.; Levy, O.; Stoyanov, V.; Zettlemoyer, L. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, 5–10 July 2020; pp. 7871–7880.
- Raffel, C.; Shazeer, N.; Roberts, A.; Lee, K.; Narang, S.; Matena, M.; Zhou, Y.; Li, W.; Liu, P.J. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. J. Mach. Learn. Res. 2020, 21, 5485–5551.
- Liu, P.; Yuan, W.; Fu, J.; Jiang, Z.; Hayashi, H.; Neubig, G. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Comput. Surv. 2023, 55, 1–35.
- Yan, H.; Dai, J.; Ji, T.; Qiu, X.; Zhang, Z. A Unified Generative Framework for Aspect-based Sentiment Analysis. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Virtual Event, 1–6 August 2021; pp. 2416–2429.
- Zhang, W.; Deng, Y.; Li, X.; Yuan, Y.; Bing, L.; Lam, W. Aspect Sentiment Quad Prediction as Paraphrase Generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Online and Punta Cana, Dominican Republic, 7–11 November 2021; pp. 9209–9219.
- Mao, Y.; Shen, Y.; Yang, J.; Zhu, X.; Cai, L. Seq2Path: Generating Sentiment Tuples as Paths of a Tree. In Proceedings of the Findings of the Association for Computational Linguistics: ACL 2022, Dublin, Ireland, 22–27 May 2022; Association for Computational Linguistics: Dublin, Ireland, 2022; pp. 2215–2225.
- Chen, D.Z.; Faulkner, A.; Badyal, S. Unsupervised Data Augmentation for Aspect Based Sentiment Analysis. In Proceedings of the 29th International Conference on Computational Linguistics, Gyeongju, Republic of Korea, 12–17 October 2022; International Committee on Computational Linguistics: Gyeongju, Republic of Korea, 2022; pp. 6746–6751.
- Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780.
- Sutskever, I.; Vinyals, O.; Le, Q.V. Sequence to Sequence Learning with Neural Networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems, Volume 2, Montreal, QC, Canada, 8–13 December 2014; MIT Press: Cambridge, MA, USA, 2014; pp. 3104–3112.
- Kuchaiev, O.; Ginsburg, B. Factorization Tricks for LSTM Networks. arXiv 2017, arXiv:1703.10722.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention Is All You Need. arXiv 2017, arXiv:1706.03762.
- Zhang, F.; Zhang, M.; Liu, S.; Sun, Y.; Duan, N. Enhancing RDF Verbalization with Descriptive and Relational Knowledge. ACM Trans. Asian Low-Resour. Lang. Inf. Process. 2023, 6, 1–18.
- Mengge, X.; Yu, B.; Zhang, Z.; Liu, T.; Zhang, Y.; Wang, B. Coarse-to-Fine Pre-training for Named Entity Recognition. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online, 19–20 November 2020; pp. 6345–6354.
- Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. Language models are few-shot learners. Adv. Neural Inf. Process. Syst. 2020, 33, 1877–1901.
- Liu, Y.; Ott, M.; Goyal, N.; Du, J.; Joshi, M.; Chen, D.; Levy, O.; Lewis, M.; Zettlemoyer, L.; Stoyanov, V. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv 2019, arXiv:1907.11692.
- Dong, L.; Yang, N.; Wang, W.; Wei, F.; Liu, X.; Wang, Y.; Gao, J.; Zhou, M.; Hon, H.W. Unified Language Model Pre-training for Natural Language Understanding and Generation. arXiv 2019, arXiv:1905.03197.
- Bao, H.; Dong, L.; Wei, F.; Wang, W.; Yang, N.; Liu, X.; Wang, Y.; Piao, S.; Gao, J.; Zhou, M.; et al. UNILMv2: Pseudo-Masked Language Models for Unified Language Model Pre-Training. arXiv 2020, arXiv:2002.12804.
- Song, K.; Tan, X.; Qin, T.; Lu, J.; Liu, T.Y. MASS: Masked Sequence to Sequence Pre-training for Language Generation. arXiv 2019, arXiv:1905.02450.
- Gao, T.; Fisch, A.; Chen, D. Making Pre-trained Language Models Better Few-shot Learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Online, 1–6 August 2021; pp. 3816–3830.
- Zhang, W.; Li, X.; Deng, Y.; Bing, L.; Lam, W. A Survey on Aspect-Based Sentiment Analysis: Tasks, Methods, and Challenges. IEEE Trans. Knowl. Data Eng. 2022, 1–20.
- Liu, P.; Joty, S.; Meng, H. Fine-grained Opinion Mining with Recurrent Neural Networks and Word Embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, 17–21 September 2015; Association for Computational Linguistics: Lisbon, Portugal, 2015; pp. 1433–1443.
- Jiang, L.; Yu, M.; Zhou, M.; Liu, X.; Zhao, T. Target-dependent Twitter Sentiment Classification. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Portland, OR, USA, 19–24 June 2011; Association for Computational Linguistics: Portland, OR, USA, 2011; pp. 151–160.
- Chen, S.; Liu, J.; Wang, Y.; Zhang, W.; Chi, Z. Synchronous Double-channel Recurrent Network for Aspect-Opinion Pair Extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, 5–10 July 2020; pp. 6515–6524.
- Peng, H.; Xu, L.; Bing, L.; Huang, F.; Lu, W.; Si, L. Knowing What, How and Why: A Near Complete Solution for Aspect-Based Sentiment Analysis. In Proceedings of the 34th AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 8600–8607.
- Cai, H.; Xia, R.; Yu, J. Aspect-Category-Opinion-Sentiment Quadruple Extraction with Implicit Aspects and Opinions. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Virtual Event, 1–6 August 2021; pp. 340–350.
- Ding, Y.; Yu, J.; Jiang, J. Recurrent Neural Networks with Auxiliary Labels for Cross-Domain Opinion Target Extraction. In Proceedings of the 31st AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; Association for the Advancement of Artificial Intelligence: San Francisco, CA, USA, 2017; Volume 31.
- Chen, Z.; Qian, T. Bridge-Based Active Domain Adaptation for Aspect Term Extraction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Virtual Event, 1–6 August 2021; pp. 317–327.
- Ben-David, E.; Oved, N.; Reichart, R. PADA: Example-based Prompt Learning for on-the-fly Adaptation to Unseen Domains. Trans. Assoc. Comput. Linguist. 2022, 10, 414–433.
- Fadaee, M.; Bisazza, A.; Monz, C. Data Augmentation for Low-Resource Neural Machine Translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Vancouver, BC, Canada, 30 July–4 August 2017; Association for Computational Linguistics: Vancouver, BC, Canada, 2017; pp. 567–573.
- Li, G.; Wang, H.; Ding, Y.; Zhou, K.; Yan, X. Data augmentation for aspect-based sentiment analysis. Int. J. Mach. Learn. Cybern. 2023, 14, 125–133.
- Coulombe, C. Text Data Augmentation Made Simple by Leveraging NLP Cloud APIs. arXiv 2018, arXiv:1812.04718.
- Wei, J.; Zou, K. EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, 3–7 November 2019; Association for Computational Linguistics: Hong Kong, China, 2019; pp. 6382–6388.
- Wang, B.; Ding, L.; Zhong, Q.; Li, X.; Tao, D. A Contrastive Cross-Channel Data Augmentation Framework for Aspect-Based Sentiment Analysis. In Proceedings of the 29th International Conference on Computational Linguistics, Gyeongju, Republic of Korea, 12–17 October 2022; International Committee on Computational Linguistics: Gyeongju, Republic of Korea, 2022; pp. 6691–6704.
- Li, J.; Yu, J.; Xia, R. Generative Cross-Domain Data Augmentation for Aspect and Opinion Co-Extraction. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Seattle, WA, USA, 10–15 July 2022; Association for Computational Linguistics: Seattle, WA, USA, 2022; pp. 4219–4229.
- Wang, Y.; Xu, C.; Sun, Q.; Hu, H.; Tao, C.; Geng, X.; Jiang, D. PromDA: Prompt-based Data Augmentation for Low-Resource NLU Tasks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland, 22–27 May 2022; Association for Computational Linguistics: Dublin, Ireland, 2022; pp. 4242–4255.
- Dong, L.; Wei, F.; Tan, C.; Tang, D.; Zhou, M.; Xu, K. Adaptive Recursive Neural Network for Target-dependent Twitter Sentiment Classification. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, Baltimore, MD, USA, 22–27 June 2014; Association for Computational Linguistics: Baltimore, MD, USA, 2014; pp. 49–54.
- He, R.; McAuley, J. Ups and Downs: Modeling the Visual Evolution of Fashion Trends with One-Class Collaborative Filtering. In Proceedings of the 25th International Conference on World Wide Web, Montreal, QC, Canada, 11–15 April 2016; pp. 507–517.
- Rosenthal, S.; Farra, N.; Nakov, P. SemEval-2017 Task 4: Sentiment Analysis in Twitter. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), Vancouver, BC, Canada, 3–4 August 2017; Association for Computational Linguistics: Vancouver, BC, Canada, 2017; pp. 502–518.
- Hu, E.J.; Shen, Y.; Wallis, P.; Allen-Zhu, Z.; Li, Y.; Wang, S.; Wang, L.; Chen, W. LoRA: Low-Rank Adaptation of Large Language Models. arXiv 2021, arXiv:2106.09685.
- Loshchilov, I.; Hutter, F. Fixing Weight Decay Regularization in Adam. arXiv 2017, arXiv:1711.05101.
- Chen, P.; Sun, Z.; Bing, L.; Yang, W. Recurrent Attention Network on Memory for Aspect Sentiment Analysis. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark, 7–11 September 2017; Association for Computational Linguistics: Copenhagen, Denmark, 2017; pp. 452–461.
- Fan, F.; Feng, Y.; Zhao, D. Multi-grained Attention Network for Aspect-Level Sentiment Classification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, 31 October–4 November 2018; Association for Computational Linguistics: Brussels, Belgium, 2018; pp. 3433–3442.
- Li, X.; Bing, L.; Lam, W.; Shi, B. Transformation Networks for Target-Oriented Sentiment Classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, Melbourne, VIC, Australia, 15–20 July 2018; Association for Computational Linguistics: Melbourne, VIC, Australia, 2018; pp. 946–956.
- Sun, K.; Zhang, R.; Mensah, S.; Mao, Y.; Liu, X. Aspect-Level Sentiment Analysis Via Convolution over Dependency Tree. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, 3–7 November 2019; Association for Computational Linguistics: Hong Kong, China, 2019; pp. 5679–5688.
- Wang, K.; Shen, W.; Yang, Y.; Quan, X.; Wang, R. Relational Graph Attention Network for Aspect-based Sentiment Analysis. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, 5–10 July 2020; pp. 3229–3238.
- Li, R.; Chen, H.; Feng, F.; Ma, Z.; Wang, X.; Hovy, E. Dual Graph Convolutional Networks for Aspect-based Sentiment Analysis. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Virtual Event, 1–6 August 2021; pp. 6319–6329.
- Sennrich, R.; Haddow, B.; Birch, A. Improving Neural Machine Translation Models with Monolingual Data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, Berlin, Germany, 7–12 August 2016; Association for Computational Linguistics: Berlin, Germany, 2016; pp. 86–96.
- Wu, X.; Lv, S.; Zang, L.; Han, J.; Hu, S. Conditional BERT Contextual Augmentation. In Proceedings of the Computational Science–ICCS 2019: 19th International Conference, Faro, Portugal, 12–14 June 2019; Proceedings, Part IV 19. Springer: Berlin/Heidelberg, Germany, 2019; pp. 84–95.
- Li, Z.; Zou, Y.; Zhang, C.; Zhang, Q.; Wei, Z. Learning Implicit Sentiment in Aspect-based Sentiment Analysis with Supervised Contrastive Pre-Training. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Virtual Event/Punta Cana, Dominican Republic, 7–11 November 2021; pp. 246–256.
Dataset | Split | # Sentences | # Positive | # Neutral | # Negative
---|---|---|---|---|---
Restaurant | Train | 1980 | 2164 | 637 | 807
Restaurant | Test | 599 | 727 | 196 | 196
Laptop | Train | 1454 | 976 | 455 | 851
Laptop | Test | 409 | 337 | 167 | 128
Twitter | Train | 6051 | 1507 | 3016 | 1528
Twitter | Test | 677 | 172 | 336 | 169
Twitter17 (o) | Aug | 15,000 | 5000 | 5000 | 5000
Dataset | Split | # Sentences | Score 1 | Score 2 | Score 3 | Score 4 | Score 5
---|---|---|---|---|---|---|---
Yelp (o) | Aug | 5000 | 1000 | 1000 | 1000 | 1000 | 1000
AMZElec (o) | Aug | 15,000 | 3000 | 3000 | 3000 | 3000 | 3000
Dataset | Examples
---|---
Restaurant | Not only was the food outstanding, but the little ‘perks’ were great. #food #Positive #perks #Positive Nevertheless the food itself is pretty good. #food #Positive
Laptop | The software that comes with this machine is greatly welcomed compared to what Windows comes with. #software #Positive #Windows #Negative They also use two totally different operating systems. #operating systems #Neutral
Twitter | We are having an awesome Day, No problem we love madonna too. #madonna #Positive harry potter the last of all will be amazon! #harry potter #Positive
Yelp | a tad overpriced for the quality of the food. service just ok. probably wont be returning. #2.0 Cool place to chill out during the afternoon thunder storms. The frozen Irish coffee was amazing #5.0
Amazon Electronics | pros love it. It allows to attach and reattach the anti static cablecons none that I can think of #5.0 Also, the tech support is not very good. Read the forums I would just buy another product, and I am ready to junk my unit. #1.0
Twitter17 | We may not win Academy Awards but we will be the freaking Peoples Choice dammit #0 Im actually excited to record AW on Ps4 tomorrow, it been a while #2
Category | Method | Restaurant Acc | Restaurant F1 | Laptop Acc | Laptop F1 | Twitter Acc | Twitter F1
---|---|---|---|---|---|---|---
LSTM | TNET [47] | 80.69 | 71.27 | 76.54 | 71.75 | 74.90 | 73.60
Syn. | CDT [48] | 82.30 | 74.02 | 77.19 | 72.99 | 74.66 | 73.66
 | RGAT [49] | 83.30 | 76.08 | 77.42 | 73.76 | 75.57 | 73.82
 | DualGCN [50] | 84.27 | 78.08 | 78.48 | 74.74 | 75.92 | 74.29
Att. | RAM [45] | 80.23 | 70.80 | 74.49 | 71.35 | 69.36 | 67.30
 | MGAN [46] | 81.25 | 71.94 | 75.39 | 72.47 | 72.54 | 70.81
 | BERT-base [3] | 86.31 | 80.22 | 79.66 | 76.11 | 76.50 | 75.23
Syn. and Att. | RGAT + BERT [49] | 86.60 | 81.35 | 78.21 | 74.07 | 76.15 | 74.88
 | DualGCN + BERT [50] | 87.13 | 81.16 | 81.80 | 78.10 | 77.40 | 76.02
Aug. | CBERT [52] | 86.27 | 80.00 | 79.83 | 76.12 | 76.44 | 75.36
 | BT + BERT [51] | 86.47 | 79.63 | 79.59 | 75.79 | 76.26 | 75.16
Aug. and Gen. | CDA [37] | 86.93 | 81.23 | 80.61 | 77.11 | 77.55 | 76.53
 | CDGDA (ours) | 87.67 | 83.05 | 81.65 | 78.64 | 78.14 | 77.38
Model | Restaurant Acc | Restaurant F1 | Laptop Acc | Laptop F1 | Twitter Acc | Twitter F1
---|---|---|---|---|---|---
CDGDA | 87.67 | 83.05 | 81.65 | 78.64 | 78.14 | 77.38
w/o | 86.68 | 81.85 | 80.54 | 76.77 | 77.55 | 76.69
w/o | 86.51 | 81.16 | 80.22 | 77.10 | 76.66 | 75.24
w/o Filter | 86.42 | 81.39 | 80.85 | 78.43 | 76.37 | 75.54
w/o and | 86.31 | 80.22 | 79.66 | 76.11 | 76.50 | 75.23
Type | Example
---|---
Source | This sushi tasted too salty. #Negative
Out-of-domain | Great food. They have a huge eclectic selection, which is a major plus these days. It is pricey but securely worth it.
CDGDA | Food tastes bad and pricey. #Negative The bacos and samosas are plentiful and taste good. #Positive The cream cheeses are ordinary. #Neutral Fish tastes good, but a little pricey, but the takeout menu is worth it. #Positive
Source | I love the size of the keyboard. #Positive
Out-of-domain | Decent when the screens on its brightest, but not good when dim. Bubble soft card is useless. The battery runs out fast.
CDGDA | The volume control is very easy to use. #Positive The computer’s assistance is useless. #Negative The battery is rated at 6.5 h but the battery life is 4.5 h. #Negative The machine loads very fast. #Positive
Datasets | CDGDA
---|---
Restaurant | The lobster teriyaki and naan. #Positive Coffee was good, but not great. #Positive
Laptop | The firewire connection is the only way to connect. #Positive i want to install software, #Positive
Twitter | larry king’s life is a ninja, #Negative a new version of the windows 7 family is now available. #Negative
Xue, J.; Li, Y.; Li, Z.; Cui, Y.; Zhang, S.; Wang, S. A Cross-Domain Generative Data Augmentation Framework for Aspect-Based Sentiment Analysis. Electronics 2023, 12, 2949. https://doi.org/10.3390/electronics12132949