Advancements in Natural Language Processing, Semantic Networks, and Sentiment Analysis

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 31 July 2024

Special Issue Editors

Dr. Silvia García-Méndez
Guest Editor
Information Technologies Group - atlanTTic, University of Vigo, 36310 Vigo, Spain
Interests: artificial intelligence; computational linguistics; machine learning; natural language processing

Dr. Enrique Costa-Montenegro
Guest Editor
Department of Telematics Engineering, University of Vigo, 36310 Vigo, Spain
Interests: artificial intelligence; natural language processing; P2P networks; recommender systems; personal devices and mobile services

Dr. Francisco De Arriba-Pérez
Guest Editor
Information Technologies Group, atlanTTic, University of Vigo, 36310 Vigo, Spain
Interests: artificial intelligence; natural language processing; computing systems design; real-time systems; machine learning

Special Issue Information

Dear Colleagues,

Recent advances in deep learning models and the availability of multi-modal data online have motivated the development of new natural language processing techniques; pre-trained language models and large language models are representative examples. Accordingly, this Special Issue on "Advancements in Natural Language Processing, Semantic Networks, and Sentiment Analysis" welcomes contributions on these advanced techniques, with particular attention to the management of semantic knowledge (e.g., sentiment analysis and emotion detection applications) in multidisciplinary use cases of artificial intelligence (e.g., smart health services). It provides an opportunity to advance the generative artificial intelligence literature for academia, industry, and the general public. The call is therefore open to both theoretical and practical research to inspire innovation in this field. Recommended topics include, but are not limited to, the following:

  • advanced sentiment analysis and emotion detection techniques;
  • applications of generative artificial intelligence (e.g., pre-trained language models and large language models);
  • machine learning models in batch and streaming operations;
  • the study of semantic knowledge management and representation (e.g., semantic networks).

Dr. Silvia García-Méndez
Dr. Enrique Costa-Montenegro
Dr. Francisco De Arriba-Pérez
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • emotion detection
  • large language models
  • machine learning
  • natural language processing
  • pre-trained language models
  • semantics and pragmatics
  • sentiment analysis

Published Papers (5 papers)


Research

23 pages, 971 KiB  
Article
A Survey of Adversarial Attacks: An Open Issue for Deep Learning Sentiment Analysis Models
by Monserrat Vázquez-Hernández, Luis Alberto Morales-Rosales, Ignacio Algredo-Badillo, Sofía Isabel Fernández-Gregorio, Héctor Rodríguez-Rangel and María-Luisa Córdoba-Tlaxcalteco
Appl. Sci. 2024, 14(11), 4614; https://doi.org/10.3390/app14114614 - 27 May 2024
Abstract
In recent years, the use of deep learning models for deploying sentiment analysis systems has become a widespread topic due to their processing capacity and superior results on large volumes of information. However, after several years’ research, previous works have demonstrated that deep learning models are vulnerable to strategically modified inputs called adversarial examples. Adversarial examples are generated by performing perturbations on data input that are imperceptible to humans but that can fool deep learning models’ understanding of the inputs and lead to false predictions being generated. In this work, we collect, select, summarize, discuss, and comprehensively analyze research works to generate textual adversarial examples. There are already a number of reviews in the existing literature concerning attacks on deep learning models for text applications; in contrast to previous works, however, we review works mainly oriented to sentiment analysis tasks. Further, we cover the related information concerning generation of adversarial examples to make this work self-contained. Finally, we draw on the reviewed literature to discuss adversarial example design in the context of sentiment analysis tasks.
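To make the attack idea in this abstract concrete, here is a minimal sketch of a greedy synonym-substitution attack against a toy lexicon-based sentiment classifier. Everything here (the classifier, polarity scores, and synonym table) is invented for illustration and is not from the surveyed paper; real attacks target neural models and constrain swaps to be semantically imperceptible.

```python
POS_WORDS = {"great": 2.0, "good": 1.0}
NEG_WORDS = {"awful": -2.0, "bad": -1.0}

def sentiment(tokens):
    """Toy lexicon classifier: positive iff the summed polarity is > 0."""
    return sum(POS_WORDS.get(t, 0.0) + NEG_WORDS.get(t, 0.0) for t in tokens)

# Hypothetical synonym table: the swaps read naturally to a human but are
# out-of-vocabulary for the toy classifier above.
SYNONYMS = {"great": ["superb"], "good": ["decent"]}

def greedy_attack(tokens):
    """Greedily swap words for synonyms until the positive prediction flips."""
    tokens = list(tokens)
    for i, tok in enumerate(tokens):
        if sentiment(tokens) <= 0:  # prediction already flipped
            break
        for alt in SYNONYMS.get(tok, []):
            cand = tokens[:i] + [alt] + tokens[i + 1:]
            if sentiment(cand) < sentiment(tokens):  # keep score-reducing swaps
                tokens = cand
    return tokens

original = ["a", "great", "good", "movie"]
adversarial = greedy_attack(original)
```

The human-readable meaning is preserved, yet the classifier's score drops from positive to non-positive, which is exactly the failure mode the survey catalogs.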

13 pages, 509 KiB  
Article
Knowledge Graph Completion Using a Pre-Trained Language Model Based on Categorical Information and Multi-Layer Residual Attention
by Qiang Rao, Tiejun Wang, Xiaoran Guo, Kaijie Wang and Yue Yan
Appl. Sci. 2024, 14(11), 4453; https://doi.org/10.3390/app14114453 - 23 May 2024
Abstract
Knowledge graph completion (KGC) utilizes known knowledge graph triples to infer and predict missing knowledge, making it one of the research hotspots in the field of knowledge graphs. There are still limitations in generating high-quality entity embeddings and fully understanding the contextual information of entities and relationships. To overcome these challenges, this paper introduces a novel pre-trained language model-based method for knowledge graph completion that significantly enhances the quality of entity embeddings by integrating entity categorical information with textual descriptions. Additionally, this method employs an innovative multi-layer residual attention network in combination with PLMs, deepening the understanding of the joint contextual information of entities and relationships. Experimental results on the FB15k-237 and WN18RR datasets demonstrate that our proposed model significantly outperforms existing baseline models in link prediction tasks.
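One plausible way to realize the fusion the abstract describes is to serialize each triple as text, prefixing every entity with its category tag before feeding the sequence to a BERT-style encoder. The separator scheme and field names below are assumptions for illustration, not the paper's actual input format.

```python
def build_triple_input(head, relation, tail, categories, descriptions):
    """Serialize a (head, relation, tail) triple for a BERT-style encoder,
    prefixing each entity with its category tag and appending its description."""
    def entity_text(e):
        return f"[{categories[e]}] {e}: {descriptions[e]}"
    return f"[CLS] {entity_text(head)} [SEP] {relation} [SEP] {entity_text(tail)} [SEP]"

categories = {"Paris": "city", "France": "country"}
descriptions = {"Paris": "the capital of France", "France": "a country in Europe"}

text = build_triple_input("Paris", "capital_of", "France", categories, descriptions)
```

The encoder's score for this sequence would then rank candidate tails in link prediction; the category tag gives the model a type signal that a bare entity name lacks.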

18 pages, 996 KiB  
Article
REACT: Relation Extraction Method Based on Entity Attention Network and Cascade Binary Tagging Framework
by Lingqi Kong and Shengquan Liu
Appl. Sci. 2024, 14(7), 2981; https://doi.org/10.3390/app14072981 - 2 Apr 2024
Abstract
With the development of the Internet, vast amounts of text information are being generated constantly. Methods for extracting the valuable parts from this information have become an important research field. Relation extraction aims to identify entities and the relations between them from text, helping computers better understand textual information. Currently, the field of relation extraction faces various challenges, particularly in addressing the relation overlapping problem. The main difficulties are as follows: (1) Traditional methods of relation extraction have limitations and lack the ability to handle the relation overlapping problem, requiring a redesign. (2) Relation extraction models are easily disturbed by noise from words with weak relevance to the relation extraction task, leading to difficulties in correctly identifying entities and their relations. In this paper, we propose the Relation extraction method based on the Entity Attention network and Cascade binary Tagging framework (REACT). We decompose the relation extraction task into two subtasks: head entity identification and tail entity and relation identification. REACT first identifies the head entity and then identifies all possible tail entities that can be paired with the head entity, as well as all possible relations. With this architecture, the model can handle the relation overlapping problem. In order to reduce the interference of words in the text that are not related to the head entity or relation extraction task and improve the accuracy of identifying the tail entities and relations, we designed an entity attention network. To demonstrate the effectiveness of REACT, we construct a high-quality Chinese dataset and conduct a large number of experiments on this dataset. The experimental results fully confirm the effectiveness of REACT, showing its significant advantages in handling the relation overlapping problem compared to current other methods.
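The cascade decomposition described here can be sketched as a two-stage decode: first recover head-entity spans from start/end tag probabilities, then, conditioned on each head, decode tail spans per relation. The thresholds, score arrays, and sentence below are toy values, not REACT's actual outputs.

```python
def decode_spans(start_probs, end_probs, thresh=0.5):
    """Pair each start position above threshold with the nearest end at or after it."""
    starts = [i for i, p in enumerate(start_probs) if p > thresh]
    ends = [i for i, p in enumerate(end_probs) if p > thresh]
    spans = []
    for s in starts:
        later = [e for e in ends if e >= s]
        if later:
            spans.append((s, later[0]))
    return spans

def extract(tokens, head_start, head_end, tail_scores):
    """Cascade decode: head spans first, then (relation, tail) pairs per head.
    tail_scores maps a head span to {relation: (tail_start_probs, tail_end_probs)}."""
    triples = []
    for hs, he in decode_spans(head_start, head_end):
        head = " ".join(tokens[hs:he + 1])
        for rel, (ts, te) in tail_scores.get((hs, he), {}).items():
            for s, e in decode_spans(ts, te):
                triples.append((head, rel, " ".join(tokens[s:e + 1])))
    return triples

# Toy example: one head participating in two relations with the same tail,
# i.e. the overlapping case the cascade architecture is built to handle.
tokens = ["Paris", "is", "in", "France"]
head_start = [0.9, 0.0, 0.0, 0.1]
head_end = [0.8, 0.0, 0.0, 0.2]
tail_scores = {(0, 0): {
    "located_in": ([0.0, 0.0, 0.0, 0.9], [0.0, 0.0, 0.0, 0.9]),
    "capital_of": ([0.0, 0.0, 0.0, 0.8], [0.0, 0.0, 0.0, 0.8]),
}}
triples = extract(tokens, head_start, head_end, tail_scores)
```

Because each relation gets its own tail tagger conditioned on the head, one head can emit several triples, which is what defeats the overlapping problem that single-pass taggers struggle with.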

15 pages, 557 KiB  
Article
Prefix Data Augmentation for Contrastive Learning of Unsupervised Sentence Embedding
by Chunchun Wang and Shu Lv
Appl. Sci. 2024, 14(7), 2880; https://doi.org/10.3390/app14072880 - 29 Mar 2024
Abstract
This paper presents prefix data augmentation (Prd) as an innovative method for enhancing sentence embedding learning through unsupervised contrastive learning. The framework, dubbed PrdSimCSE, uses Prd to create both positive and negative sample pairs. By appending positive and negative prefixes to a sentence, the basis for contrastive learning is formed, outperforming the baseline unsupervised SimCSE. PrdSimCSE is positioned within a probabilistic framework that expands the semantic similarity event space and generates superior negative samples, contributing to more accurate semantic similarity estimations. The model’s efficacy is validated on standard semantic similarity tasks, showing a notable improvement over that of existing unsupervised models, specifically a 1.08% enhancement in performance on BERTbase. Through detailed experiments, the effectiveness of positive and negative prefixes in data augmentation and their impact on the learning model are explored, and the broader implications of prefix data augmentation are discussed for unsupervised sentence embedding learning.
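The contrastive setup outlined here can be illustrated with the standard InfoNCE objective: a prefixed copy of a sentence serves as the positive view, a contradicting prefix as a hard negative. The specific prefix strings below are invented examples (the paper's actual prefixes may differ), and the real model encodes views with BERT rather than any toy embedding.

```python
import math

POS_PREFIX = "indeed , "           # assumed meaning-preserving prefix
NEG_PREFIX = "it is false that "   # assumed meaning-reversing prefix

def augment(sentence):
    """Build (anchor, positive, hard-negative) views of one sentence."""
    return sentence, POS_PREFIX + sentence, NEG_PREFIX + sentence

def info_nce(sim_pos, sim_negs, tau=0.05):
    """InfoNCE loss: -log( exp(s+/tau) / (exp(s+/tau) + sum_j exp(s-_j/tau)) ).
    sim_pos is the anchor-positive cosine similarity; sim_negs are the
    anchor-negative similarities; tau is the temperature."""
    num = math.exp(sim_pos / tau)
    den = num + sum(math.exp(s / tau) for s in sim_negs)
    return -math.log(num / den)

anchor, positive, negative = augment("the movie was fun")
```

Training pushes the anchor embedding toward its prefixed positive and away from the negated view, so the loss shrinks as the anchor-positive similarity grows relative to the negatives.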

16 pages, 2169 KiB  
Article
Causal Reinforcement Learning for Knowledge Graph Reasoning
by Dezhi Li, Yunjun Lu, Jianping Wu, Wenlu Zhou and Guangjun Zeng
Appl. Sci. 2024, 14(6), 2498; https://doi.org/10.3390/app14062498 - 15 Mar 2024
Cited by 1
Abstract
Knowledge graph reasoning can deduce new facts and relationships, which is an important research direction of knowledge graphs. Most of the existing methods are based on end-to-end reasoning which cannot effectively use the knowledge graph, so consequently the performance of the method still needs to be improved. Therefore, we combine causal inference with reinforcement learning and propose a new framework for knowledge graph reasoning. By combining the counterfactual method in causal inference, our method can obtain more information as prior knowledge and integrate it into the control strategy in the reinforcement model. The proposed method mainly includes the steps of relationship importance identification, reinforcement learning framework design, policy network design, and the training and testing of the causal reinforcement learning model. Specifically, a prior knowledge table is first constructed to indicate which relationship is more important for the problem to be queried; secondly, the state space, optimization, action space, state transition, and reward are designed, respectively; then, a standard value is set and compared with the weight value of each candidate edge, and an action strategy is selected according to the comparison result, through prior knowledge or the neural network; finally, the parameters of the reinforcement learning model are determined through training and testing. We used four datasets to compare our method to the baseline method and conducted ablation experiments. On the NELL-995 and FB15k-237 datasets, the experimental results show that the MAP scores of our method are 87.8 and 45.2, respectively, achieving the optimal performance.
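The action-selection rule sketched in this abstract (compare each candidate edge's prior-knowledge weight against a standard value, and fall back to the policy network when the prior is not decisive) can be illustrated as follows. The prior table, threshold, and "policy" scores are all invented for the sketch; the paper's actual prior comes from counterfactual analysis and its policy from a trained network.

```python
# Hypothetical prior-knowledge table: (query relation, candidate edge) -> weight.
PRIOR = {("capital_of", "located_in"): 0.9, ("capital_of", "borders"): 0.2}
STANDARD = 0.5  # assumed standard value separating "trust the prior" from "ask the network"

def choose_edge(query_relation, candidate_edges, policy_scores):
    """Pick the next edge for a query: follow the prior if it is decisive,
    otherwise defer to the policy network's scores."""
    weighted = [(PRIOR.get((query_relation, r), 0.0), r) for r in candidate_edges]
    best_prior, best_edge = max(weighted)
    if best_prior >= STANDARD:  # prior knowledge wins the comparison
        return best_edge
    # no decisive prior: let the (toy) policy network decide
    return max(candidate_edges, key=lambda r: policy_scores[r])
```

For a queried relation covered by the prior table, the agent walks the edge the counterfactual analysis marked as important; for anything else it behaves like an ordinary RL walker.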
