Context-Aware Embedding Techniques for Addressing Meaning Conflation Deficiency in Morphologically Rich Languages Word Embedding: A Systematic Review and Meta Analysis
Abstract
1. Introduction
2. Related Work
2.1. Word Embedding (WE)
2.1.1. Word2Vec
2.1.2. GloVe
2.1.3. FastText
2.2. Contextual Word Embedding (CWE)
2.2.1. Embeddings from Language Models (ELMo)
2.2.2. T5
2.2.3. GPT
2.2.4. ELECTRA
2.2.5. BERT
2.3. Context-Aware Word Embedding (CAWE)
2.3.1. PolyLM
2.3.2. XLNet
2.3.3. DistilBERT
2.3.4. RoBERTa
2.3.5. GBCN
2.3.6. DeBERTa
2.3.7. ALBERT
3. Materials and Methods
- RQ1: What salient features of a low-resource, morphologically rich language, such as Sesotho sa Leboa, are required to develop a Context-Aware Word Embedding (CAWE) model?
- RQ2: What are the existing word embedding methods?
3.1. Research Strategy
3.2. Inclusion Criteria and Exclusion Criteria
3.3. Data Synthesis and Statistical Analysis
4. Results
4.1. Meta-Analysis Summary
4.2. Publication Bias and Meta-Regression
4.3. Descriptive Statistics of Primary Studies
5. Conclusions
- The high τ2, I2, and H2 values in the meta-analysis indicate substantial heterogeneity in effect sizes: the efficacy of context-aware embeddings in addressing Meaning Conflation Deficiency varies considerably across the primary studies.
- The trim-and-fill analysis suggests possible publication bias, yet the adjusted effect size remains statistically non-significant. Context-aware embeddings may therefore be useful, but their effect on Meaning Conflation Deficiency is neither consistent nor certain across the literature.
- The substantial standard error and non-significant p-value underscore the uncertainty of the effect-size estimates; further research is required to obtain more precise and reliable estimates.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Ansell, A.; Bravo-Marquez, F.; Pfahringer, B. PolyLM: Learning about polysemy through language modeling. In Proceedings of the EACL 2021—16th Conference of the European Chapter of the Association for Computational Linguistics, Kyiv, Ukraine, 19–23 April 2021; Association for Computational Linguistics: Stroudsburg, PA, USA, 2021; pp. 563–574. [Google Scholar] [CrossRef]
- Pilehvar, M.T. On the Importance of Distinguishing Word Meaning Representations: A Case Study on Reverse Dictionary Mapping. In Proceedings of the NAACL-HLT, Minneapolis, MN, USA, 2–7 June 2019; Association for Computational Linguistics: Stroudsburg, PA, USA, 2019; pp. 2151–2156. [Google Scholar] [CrossRef]
- Cer, D.; Yang, Y.; Kong, S.; Hua, N.; Limtiaco, N.; John, R.S.; Constant, N.; Guajardo-Céspedes, M.; Yuan, S.; Tar, C.; et al. Universal sentence encoder for English. In Proceedings of the EMNLP 2018—Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, 2–4 November 2018; pp. 169–174. [Google Scholar] [CrossRef]
- Masethe, H.D.; Masethe, M.A.; Ojo, S.O.; Giunchiglia, F.; Owolawi, P.A. Word Sense Disambiguation for Morphologically Rich Low-Resourced Languages: A Systematic Literature Review and Meta-Analysis. Information 2024, 15, 540. [Google Scholar] [CrossRef]
- Ansell, A. Contextualised Approaches to Embedding Word Senses. The University of Waikato. 2020. Available online: http://researchcommons.waikato.ac.nz/ (accessed on 21 August 2024).
- Pilehvar, M.T.; Collier, N. De-conflated semantic representations. In Proceedings of the EMNLP 2016—Conference on Empirical Methods in Natural Language Processing, Austin, TX, USA, 1–5 November 2016; Association for Computational Linguistics: Stroudsburg, PA, USA, 2016; pp. 1680–1690. [Google Scholar] [CrossRef]
- Zhang, T.; Ye, W.; Xi, X.; Zhang, L.; Zhang, S.; Zhao, W. Leveraging human prior knowledge to learn sense representations. Front. Artif. Intell. Appl. 2020, 325, 2306–2313. [Google Scholar] [CrossRef]
- Yang, X.; Mao, K. Learning multi-prototype word embedding from single-prototype word embedding with integrated knowledge. Expert Syst. Appl. 2016, 56, 291–299. [Google Scholar] [CrossRef]
- Won, H.; Lee, H.; Kang, S. Multi-prototype Morpheme Embedding for Text Classification. In Proceedings of the SMA 2020: The 9th International Conference on Smart Media and Applications, Jeju, Republic of Korea, 17–19 September 2020; pp. 295–300. [Google Scholar] [CrossRef]
- Li, N.; Bouraoui, Z.; Camacho-Collados, J.; Espinosa-anke, L.; Gu, Q.; Schockaert, S. Modelling General Properties of Nouns by Selectively Averaging Contextualised Embeddings. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21), Montreal, QC, Canada, 19–25 August 2021. [Google Scholar] [CrossRef]
- Biesialska, M.; Costa-Jussà, M.R. Refinement of unsupervised cross-lingual word embeddings. Front. Artif. Intell. Appl. 2020, 325, 1978–1981. [Google Scholar] [CrossRef]
- da Silva, J.R.; Caseli, H.d.M. Generating Sense Embeddings for Syntactic and Semantic Analogy for Portuguese. arXiv 2020. [Google Scholar] [CrossRef]
- Rodrigues da Silva, J.; Caseli, H.d.M. Sense representations for Portuguese: Experiments with sense embeddings and deep neural language models. Lang. Resour. Eval. 2021, 55, 901–924. [Google Scholar] [CrossRef]
- Ilie, V.I.; Truica, C.O.; Apostol, E.S.; Paschke, A. Context-Aware Misinformation Detection: A Benchmark of Deep Learning Architectures Using Word Embeddings. IEEE Access 2021, 9, 162122–162146. [Google Scholar] [CrossRef]
- Vusak, E.; Kuzina, V.; Jovic, A. A Survey of Word Embedding Algorithms for Textual Data Information Extraction. In Proceedings of the 2021 44th International Convention on Information, Communication and Electronic Technology, MIPRO 2021, Opatija, Croatia, 27 September–1 October 2021; pp. 181–186. [Google Scholar] [CrossRef]
- Hu, R.; Li, S.; Liang, S. Diachronic Sense Modeling with Deep Contextualized Word Embeddings: An Ecological View. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, 28 July–2 August 2019; pp. 3899–3908. [Google Scholar] [CrossRef]
- Katsarou, S.; Rodríguez-Gálvez, B.; Shanahan, J. Measuring Gender Bias in Contextualized Embeddings. Comput. Sci. Math. Forum 2022, 3, 3. [Google Scholar] [CrossRef]
- Balakrishnan, V.; Shi, Z.; Law, C.L.; Lim, R.; Teh, L.L.; Fan, Y.; Periasamy, J. A Comprehensive Analysis of Transformer-Deep Neural Network Models in Twitter Disaster Detection. Mathematics 2022, 10, 4664. [Google Scholar] [CrossRef]
- Loureiro, D.; Jorge, A.M. LIAAD at SemDeep-5 challenge: Word-in-Context (WiC). In Proceedings of the 5th Workshop on Semantic Deep Learning, SemDeep 2019, Macau, China, 12 August 2019; Association for Computational Linguistics: Stroudsburg, PA, USA, 2019; pp. 51–55. Available online: https://aclanthology.org/W19-5801/ (accessed on 10 September 2024).
- Li, X.; Lei, Y.; Ji, S. BERT- and BiLSTM-Based Sentiment Analysis of Online Chinese Buzzwords. Future Internet 2022, 14, 332. [Google Scholar] [CrossRef]
- Yang, Z.; Dai, Z.; Yang, Y.; Carbonell, J.; Salakhutdinov, R.; Le, Q.V. XLNet: Generalized autoregressive pretraining for language understanding. Adv. Neural Inf. Process. Syst. 2019, 32, 1–11. Available online: https://proceedings.neurips.cc/paper/2019/hash/dc6a7e655d7e5840e66733e9ee67cc69-Abstract.html (accessed on 10 September 2024).
- Kavatagi, S.; Rachh, R. A Context Aware Embedding for the Detection of Hate Speech in Social Media Networks. In Proceedings of the 2021 International Conference on Smart Generation Computing, Communication and Networking (SMART GENCON), IEEE, Pune, India, 29–30 October 2021; pp. 1–4. [Google Scholar] [CrossRef]
- Loureiro, D.; Mário Jorge, A.; Camacho-Collados, J. LMMS reloaded: Transformer-based sense embeddings for disambiguation and beyond. Artif. Intell. 2022, 305, 103661. [Google Scholar] [CrossRef]
- Li, X.; Fu, X.; Xu, G.; Yang, Y.; Wang, J.; Jin, L.; Liu, Q.; Xiang, T. Enhancing BERT Representation With Context-Aware Embedding for Aspect-Based Sentiment Analysis. IEEE Access 2020, 8, 46868–46876. [Google Scholar] [CrossRef]
- Liu, J.; Zhang, Z.; Lu, X. Aspect Sentiment Classification via Local Context- Focused Syntax Based on DeBERTa. In Proceedings of the 2024 4th International Conference on Computer Communication and Artificial Intelligence (CCAI), IEEE, Xi’an, China, 24–26 May 2024; pp. 297–302. [Google Scholar] [CrossRef]
- Martin, C.; Yang, H.; Hsu, W. KDDIE at SemEval-2022 Task 11: Using DeBERTa for Named Entity Recognition. In Proceedings of the SemEval 2022—16th International Workshop on Semantic Evaluation, Seattle, WA, USA, 14–15 July 2022; Association for Computational Linguistics: Stroudsburg, PA, USA, 2022; pp. 1531–1535. [Google Scholar] [CrossRef]
- Kumar, N.; Kumar, S. Enhancing Abstractive Text Summarisation Using Seq2Seq Models: A Context-Aware Approach. In Proceedings of the 2024 International Conference on Automation and Computation (AUTOCOM), IEEE, Dehradun, India, 14–16 March 2024; pp. 490–496. [Google Scholar] [CrossRef]
- Alessio, I.D.; Quaglieri, A.; Burrai, J.; Pizzo, A.; Aitella, U.; Lausi, G.; Tagliaferri, G.; Cordellieri, P.; Cricenti, C.; Mari, E.; et al. Behavioral sciences ‘Leading through Crisis’: A Systematic Review of Institutional Decision-Makers in Emergency Contexts. Behav. Sci. 2024, 14, 481. [Google Scholar] [CrossRef]
- Necula, S.C.; Dumitriu, F.; Greavu-Șerban, V. A Systematic Literature Review on Using Natural Language Processing in Software Requirements Engineering. Electronics 2024, 13, 2055. [Google Scholar] [CrossRef]
- Hladek, D.; Stas, J.; Pleva, M.; Ondas, S.; Kovacs, L. Survey of the Word Sense Disambiguation and Challenges for the Slovak Language. In Proceedings of the 17th IEEE International Symposium on Computational Intelligence and Informatics, IEEE, Budapest, Hungary, 17–19 November 2016; pp. 225–230. [Google Scholar] [CrossRef]
- Thompson, R.C.; Joseph, S.; Adeliyi, T.T. A Systematic Literature Review and Meta-Analysis of Studies on Online Fake News Detection. Information 2022, 13, 527. [Google Scholar] [CrossRef]
- Bowring, A.; Telschow, F.J.E.; Schwartzman, A.; Nichols, T.E. Confidence Sets for Cohen’s d effect size images. NeuroImage 2021, 226, 117477. [Google Scholar] [CrossRef]
- Elkahky, A.; Webster, K.; Andor, D.; Pitler, E. A Challenge Set and Methods for Noun-Verb Ambiguity. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, 31 October–4 November 2018; Association for Computational Linguistics: Brussels, Belgium, 2018; pp. 2562–2572. [Google Scholar] [CrossRef]
- Aksoy, M.; Yanık, S.; Amasyali, M.F. A comparative analysis of text representation, classification and clustering methods over real project proposals. Int. J. Intell. Comput. Cybern. 2023, 16, 6. [Google Scholar] [CrossRef]
- Elagbry, H.E.; Attia, S.; Abdel-Rahman, A.; Abdel-Ate, A.; Girgis, S. A Contextual Word Embedding for Arabic Sarcasm Detection with Random Forests. In Proceedings of the Sixth Arabic Natural Language Processing Workshop, Online, 19 April 2021; pp. 340–344. Available online: https://aclanthology.org/2021.wanlp-1.43 (accessed on 10 September 2024).
- Sinha, A.; Shen, Z.; Song, Y.; Ma, H.; Eide, D.; Hsu, B.-J.P.; Wang, K. An Overview of Microsoft Academic Service (MAS) and Applications. In Proceedings of the WWW’15: 24th International World Wide Web Conference, Florence, Italy, 18–22 May 2015; ACM Digital Library: Florence, Italy, 2015; pp. 243–246. [Google Scholar] [CrossRef]
- Chaimae, A.; Yacine, E.Y.; Rybinski, M.; Montes, J.F.A. BERT for Arabic Named Entity Recognition. In Proceedings of the 2020 International Symposium on Advanced Electrical and Communication Technologies (ISAECT), Marrakech, Morocco, 25–27 November 2020; pp. 1–6. [Google Scholar] [CrossRef]
- Kuling, G.; Curpen, B.; Martel, A.L. BI-RADS BERT and Using Section Segmentation to Understand Radiology Reports. J. Imaging 2022, 8, 131. [Google Scholar] [CrossRef]
- Gani, M.O.; Ayyasamy, R.K.; Fui, Y.T. Bloom’s Taxonomy-based exam question classification: The outcome of CNN and optimal pre-trained word embedding technique. Educ. Inf. Technol. 2023, 28, 15893–15914. [Google Scholar] [CrossRef]
- Campagne, R.V.L.; van Ommen, D.; Rademaker, M.; Teurlings, T.; Frasincar, F. DCWEB-SOBA: Deep Contextual Word Embeddings-Based Semi-automatic Ontology Building for Aspect-Based Sentiment Classification. In Proceedings of the Semantic Web: 19th International Conference, Hersonissos, Greece, 29 May–2 June 2022; ACM Digital Library: Hersonissos, Greece, 2022; pp. 183–199. [Google Scholar] [CrossRef]
- Gedela, R.T.; Baruah, U.; Soni, B. Deep Contextualised Text Representation and Learning for Sarcasm Detection. Arab. J. Sci. Eng. 2024, 49, 3719–3734. [Google Scholar] [CrossRef]
- Zhang, F.; Gao, W.; Fang, Y. News title classification based on sentence-LDA model and word embedding. In Proceedings of the 2019 International Conference on Machine Learning, Big Data and Business Intelligence (MLBDBI), Taiyuan, China, 8–10 November 2019; pp. 237–240. [Google Scholar] [CrossRef]
- Mehedi, K.; Fahim, H.; Moontaha, M.; Rahman, M.; Rhythm, E.R. Comparative Analysis of Traditional and Contextual Embedding for Bangla Sarcasm Detection in Natural Language Processing. In Proceedings of the 2023 IEEE International Conference on Communication, Networks and Satellite (COMNETSAT), Malang, Indonesia, 23–25 November 2023; pp. 293–299. [Google Scholar] [CrossRef]
- Zhao, C. Multi-Feature Fusion Machine Translation Quality Evaluation Based on LSTM Neural Network. In Proceedings of the 2022 6th Asian Conference on Artificial Intelligence Technology (ACAIT), Changzhou, China, 4–6 November 2022. [Google Scholar] [CrossRef]
- Roman, M.; Shahid, A.; Uddin, M.I.; Hua, Q.; Maqsood, S. Exploiting Contextual Word Embedding of Authorship and Title of Articles for Discovering Citation Intent Classification. Complexity 2021, 2021, 1–13. [Google Scholar] [CrossRef]
- Elkaref, N.; Abu-Elkheir, M. GUCT at Arabic Hate Speech 2022: Towards a Better Isotropy for Hatespeech Detection. In Proceedings of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools with Shared Tasks on Qur’an QA and Fine-Grained Hate Speech Detection, Online, 20–25 June 2022; European Language Resources Association: Marseille, France, 2022. Available online: https://aclanthology.org/2022.osact-1.27/ (accessed on 30 July 2024).
- Liu, F.; Lu, H.; Neubig, G. Handling homographs in neural machine translation. arXiv 2017, arXiv:1708.06510. [Google Scholar]
- Hailu, B.M.; Assabie, A.; Yenewondim, B.S. Semantic Role Labeling for Amharic Text Using Multiple Embeddings and Deep Neural Network. IEEE Access 2023, 11, 33274–33295. [Google Scholar] [CrossRef]
- Harnmetta, P.; Samanchuen, T. Sentiment Analysis of Thai Stock Reviews Using Transformer Models. In Proceedings of the 2022 19th International Joint Conference on Computer Science and Software Engineering (JCSSE), Bangkok, Thailand, 22–25 June 2022. [Google Scholar] [CrossRef]
- Walker, N.; Peng, Y.-T.; Cakmak, M. Neural Semantic Parsing with Anonymization for Command Understanding in General-Purpose Service Robots. In Proceedings of the RoboCup 2019: Robot World Cup XXIII, Sydney, NSW, Australia, 2–8 July 2019; ACM: Berlin/Heidelberg, Germany, 2019; pp. 337–350. [Google Scholar] [CrossRef]
- Hang, G.; Liu, J. Big-Data Based English-Chinese Corpus Collection and Mining and Machine Translation Framework. In Proceedings of the 2021 Fifth International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), Palladam, India, 11–13 November 2021; pp. 418–421. [Google Scholar] [CrossRef]
- Agarwal, N.; Sikka, G.; Awasthi, L.K. Web Service Clustering Technique based on Contextual Word Embedding for Service Representation. In Proceedings of the International Conference on Technological Advancements and Innovations (ICTAI), Tashkent, Uzbekistan, 10–12 November 2021. [Google Scholar] [CrossRef]
- Kumar, A.; Albuquerque, V.H.C. Sentiment Analysis Using XLM-R Transformer and Zero-shot Transfer Learning on Resource-poor Indian Language. ACM Trans. Asian Low Resour. Lang. Inf. Process. 2021, 20, 1–13. [Google Scholar] [CrossRef]
- Karnysheva, P.S.A. TUE at SemEval-2020 Task 1: Detecting Semantic Change by Clustering Contextual Word Embeddings. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, Barcelona: International Committee for Computational Linguistics, Online, 12–13 December 2020; pp. 232–238. [Google Scholar] [CrossRef]
- Tran, O.T.; Phung, A.C.; Ngo, B.X. Using Convolution Neural Network with BERT for Stance Detection in Vietnamese. In Proceedings of the 13th Conference on Language Resources and Evaluation (LREC 2022), Marseille, France, 20–25 June 2022; European Language Resources Association (ELRA): Marseille, France, 2022; pp. 7220–7225. Available online: https://aclanthology.org/2022.lrec-1.783.pdf (accessed on 10 September 2024).
- Alibadi, Z.; Du, M.; Vidal, J.M. Using pre-trained embeddings to detect the intent of an email. In Proceedings of the 7th ACIS International Conference on Applied Computing and Information Technology, Honolulu, HI, USA, 29–31 May 2019; Association for Computing Machinery: Honolulu, HI, USA, 2019. [Google Scholar] [CrossRef]
| Author | Approach | Model | F1 Score (%) | Accuracy (%) |
|---|---|---|---|---|
| [26] | WE | FastText | 89 | |
| [27] | CWE | ELMo | 89 | |
| [23] | CAWE | DistilBERT | 84 | 85 |
| [24] | CAWE | GBCN | 88 | 93 |
| | CWE | ELECTRA | 84 | |
| Database | Results | Search Phrase | Notes |
|---|---|---|---|
| Scopus | 269 | “Meaning conflation deficiency” OR “context-aware word embedding” OR “sense conflation” OR “context conflation” OR “entity conflation” OR “contextual word embedding” | Extensive database with a broad scope. |
| Springer | 10 | “Meaning conflation deficiency” OR “context-aware word embedding” OR “sense conflation” OR “context conflation” OR “entity conflation” OR “contextual word embedding” | Focused on information technology and computing. |
| IEEE Xplore | 35 | “Meaning conflation deficiency” OR “context-aware word embedding” OR “sense conflation” OR “context conflation” OR “entity conflation” OR “contextual word embedding” | Abundant in publications on engineering and technology. |
| Google Scholar | 56 | “Meaning conflation deficiency” OR “context-aware word embedding” OR “sense conflation” OR “context conflation” OR “entity conflation” OR “contextual word embedding” | Offers a quick and easy method for searching academic publications in general. |
| Criteria | Decision |
|---|---|
| The predetermined keywords appear throughout the document, or at the very least in the title, keywords, and abstract. | Inclusion |
| Publications released in 2014 or later | Inclusion |
| Research articles written in English | Inclusion |
| Research articles without the MCD-selected approaches (WE, CWE, and CAWE) | Exclusion |
| Research articles without reported accuracy | Exclusion |
| Research articles without a corpus or dataset | Exclusion |
| Articles not written in English; reports published prior to 2014; case reports and series; editorial letters; commentary; opinions; conference abstracts; and dissertations | Exclusion |
Meta-Analysis Summary: Random-Effects Model; Method: DerSimonian–Laird
Heterogeneity: τ2 = 8.8724; I2 = 87.65%; H2 = 8.10

| Study (n = 25) | Ref. | Effect Size | 95% CI Lower | 95% CI Upper | Weight (%) |
|---|---|---|---|---|---|
| Elkahky et al., 2018 | [33] | −10.339 | −12.330 | −8.349 | 4.11 |
| Aksoy et al., 2023 | [34] | −7.913 | −9.991 | −5.836 | 4.07 |
| Elagbry et al., 2021 | [35] | −10.091 | −12.809 | −7.373 | 3.77 |
| Sinha et al., 2015 | [36] | −18.286 | −20.296 | −16.275 | 4.10 |
| Younas et al., 2023 | [29] | −9.194 | −11.333 | −7.055 | 4.04 |
| Chaimae et al., 2020 | [37] | −13.569 | −15.624 | −11.515 | 4.08 |
| Kuling et al., 2022 | [38] | −10.463 | −12.443 | −8.483 | 4.11 |
| Gani et al., 2023 | [39] | −7.983 | −10.097 | −5.869 | 4.06 |
| van Lookeren Campagne et al., 2022 | [40] | −15.562 | −17.726 | −13.397 | 4.03 |
| Gedela et al., 2024 | [41] | −13.050 | −15.060 | −11.039 | 4.10 |
| Gao et al., 2019 | [42] | −4.353 | −7.230 | −1.476 | 3.69 |
| Iqbal et al., 2023 | [43] | −9.102 | −11.092 | −7.112 | 4.11 |
| Zhao, 2022 | [44] | −15.555 | −17.760 | −13.350 | 4.01 |
| Roman et al., 2021 | [45] | −8.931 | −11.381 | −6.481 | 3.90 |
| Elkaref et al., 2022 | [46] | −9.390 | −11.568 | −7.213 | 4.03 |
| Liu et al., 2018 | [47] | −11.298 | −13.787 | −8.809 | 3.88 |
| Hailu et al., 2023 | [48] | −9.261 | −11.272 | −7.251 | 4.10 |
| Harnmetta et al., 2022 | [49] | −6.309 | −8.343 | −4.275 | 4.09 |
| Walker et al., 2019 | [50] | −7.386 | −9.464 | −5.308 | 4.07 |
| Szarkowska et al., 2021 | [51] | −12.655 | −14.698 | −10.611 | 4.09 |
| Agarwal et al., 2021 | [52] | −8.036 | −10.836 | −5.235 | 3.73 |
| Kumar & Albuquerque, 2021 | [53] | −10.371 | −12.881 | −7.862 | 3.87 |
| Karnysheva & Schwarz, 2020 | [54] | −14.840 | −17.350 | −12.331 | 3.87 |
| Tran et al., 2022 | [55] | −9.603 | −11.851 | −7.354 | 3.99 |
| θ (pooled) | | −10.631 | −11.881 | −9.381 | |

Test of θ = 0: z = −16.66, Prob > |z| = 0.0000. Test of homogeneity: Q = χ2(31) = 194.41, Prob > Q = 0.000. Alibadi et al., 2019 [56]: effect size −11.563, 95% CI [−13.629, −9.498], weight 4.08%.
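The pooled estimate θ and the heterogeneity statistics reported above (τ2, I2, H2, Q) follow from the standard DerSimonian–Laird random-effects formulas. A minimal sketch, using made-up effect sizes and standard errors rather than the values extracted in this review:

```python
import numpy as np

# Hypothetical per-study effect sizes and standard errors (illustrative only).
effects = np.array([-10.3, -7.9, -18.3, -13.6, -4.4, -15.6])
se = np.array([1.00, 1.10, 1.00, 1.05, 1.40, 1.10])

# Fixed-effect (inverse-variance) pooled estimate.
w = 1.0 / se**2
theta_fe = np.sum(w * effects) / np.sum(w)

# Cochran's Q and the DerSimonian–Laird between-study variance tau^2.
k = len(effects)
Q = np.sum(w * (effects - theta_fe) ** 2)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)

# Heterogeneity indices: H^2 = Q/df, I^2 = share of variance beyond sampling error.
H2 = Q / (k - 1)
I2 = max(0.0, 100.0 * (Q - (k - 1)) / Q)

# Random-effects weights, pooled theta, and its 95% confidence interval.
w_re = 1.0 / (se**2 + tau2)
theta_re = np.sum(w_re * effects) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
ci = (theta_re - 1.96 * se_re, theta_re + 1.96 * se_re)
```

With heterogeneous inputs like these, τ2 is well above zero and I2 is large, mirroring the pattern the review reports.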
| Parameter | Coefficient | Std. Err. | z | p > \|z\| | 95% CI Lower | 95% CI Upper |
|---|---|---|---|---|---|---|
| Year | 0.4624064 | 0.3034431 | 1.52 | 0.128 | −0.1323312 | 1.057144 |
| Constant | −945.1428 | 613.2503 | −1.54 | 0.123 | −1106.102 | 256.8058 |
| Parameter | Coefficient | Std. Err. | z | p > \|z\| | 95% CI Lower | 95% CI Upper |
|---|---|---|---|---|---|---|
| Corpus | −1.05 × 10^−7 | 3.29 × 10^−8 | −3.20 | 0.001 | −1.69 × 10^−7 | −4.07 × 10^−8 |
| Constant | −10.2262 | 0.5565426 | −18.37 | 0.000 | −11.317 | −9.135397 |
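The meta-regression coefficients in the tables above come from inverse-variance weighted least squares with a study-level moderator (publication year or corpus size). A minimal sketch with hypothetical study data (the effect sizes, years, and resulting coefficients are illustrative, not the reported ones; a full random-effects meta-regression would also add τ2 to each study's variance before weighting):

```python
import numpy as np

# Hypothetical study-level data: effect size, standard error, publication year.
effects = np.array([-10.3, -7.9, -18.3, -13.6, -4.4, -15.6, -9.2, -11.3])
se = np.array([1.00, 1.05, 1.00, 1.05, 1.45, 1.10, 1.10, 1.25])
year = np.array([2018, 2023, 2015, 2020, 2019, 2022, 2023, 2021], dtype=float)

# Inverse-variance weights for each study.
w = 1.0 / se**2

# Design matrix with intercept (Constant) and moderator (Year).
X = np.column_stack([np.ones_like(year), year])

# Weighted least squares: solve (X' W X) beta = X' W y.
XtWX = X.T @ (w[:, None] * X)
XtWy = X.T @ (w * effects)
beta = np.linalg.solve(XtWX, XtWy)

# Coefficient covariance, standard errors, and z statistics.
cov = np.linalg.inv(XtWX)
se_beta = np.sqrt(np.diag(cov))
z = beta / se_beta
```

`beta[1]` plays the role of the Year coefficient in the table above; comparing `z` against a normal reference gives the reported p-values.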
| Studies | Coefficient | 95% CI Lower | 95% CI Upper |
|---|---|---|---|
| Observed (n = 25) | −10.747 | −11.185 | −10.310 |
| Observed + Imputed (25 + 3 = 28) | −10.747 | −11.185 | −10.310 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Masethe, M.A.; Masethe, H.D.; Ojo, S.O. Context-Aware Embedding Techniques for Addressing Meaning Conflation Deficiency in Morphologically Rich Languages Word Embedding: A Systematic Review and Meta Analysis. Computers 2024, 13, 271. https://doi.org/10.3390/computers13100271