Applications of Information Extraction, Knowledge Graphs, and Large Language Models

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 30 November 2024

Special Issue Editors


Dr. Junwen Duan
Guest Editor
School of Computer Science and Engineering, Central South University, Changsha 410073, China
Interests: information extraction; text mining; knowledge graph

Dr. Fangfang Li
Guest Editor
School of Computer Science and Engineering, Central South University, Changsha 410073, China
Interests: text mining; information extraction; knowledge graph

Dr. Tudor Groza
Guest Editor
Rare Care Centre, Perth Children's Hospital, Nedlands, WA 6009, Australia
Interests: natural language processing; knowledge graphs; ontologies

Special Issue Information

Dear Colleagues,

Information extraction (IE), knowledge graphs (KGs), and large language models (LLMs) have emerged as powerful tools for organizing, analyzing, and harnessing the potential of vast amounts of data. This Special Issue aims to explore the synergies and applications of IE, KGs, and LLMs, showcasing their collective impact on information management, knowledge representation, and decision-making processes.

Information extraction involves automatically identifying and extracting structured information from unstructured or semi-structured data sources, such as text documents, websites, social media posts, and scientific literature. Knowledge graphs provide a powerful framework for representing and organizing knowledge, enabling efficient navigation, querying, and inference over interconnected entities and their relationships. Large language models, such as GPT-3.5, have pushed the boundaries of natural language understanding and generation, demonstrating remarkable capabilities in tasks such as text completion, translation, summarization, and question answering.
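To make the pipeline described above concrete, the following is a minimal, illustrative sketch in Python: a hand-written regular expression stands in for an extraction model, and a small adjacency-list class stands in for a graph store. Both the pattern and the `KnowledgeGraph` class are invented here for illustration and do not correspond to any specific library or system.

```python
# Illustrative sketch: pattern-based information extraction feeding a tiny
# in-memory knowledge graph of (subject, relation, object) triples.
import re
from collections import defaultdict

def extract_triples(text):
    """Extract (subject, 'located_in', object) triples from sentences of
    the form 'X is located in Y.' using a simple regular expression."""
    pattern = re.compile(r"(\w[\w ]*?) is located in (\w[\w ]*?)\.")
    return [(s.strip(), "located_in", o.strip()) for s, o in pattern.findall(text)]

class KnowledgeGraph:
    """Adjacency-list knowledge graph supporting insertion and simple queries."""
    def __init__(self):
        self.edges = defaultdict(list)  # subject -> [(relation, object), ...]

    def add(self, subj, rel, obj):
        self.edges[subj].append((rel, obj))

    def query(self, subj, rel):
        """Return all objects linked to `subj` by relation `rel`."""
        return [o for r, o in self.edges[subj] if r == rel]

text = ("Central South University is located in Changsha. "
        "Changsha is located in Hunan.")
kg = KnowledgeGraph()
for triple in extract_triples(text):
    kg.add(*triple)

print(kg.query("Central South University", "located_in"))  # ['Changsha']
```

In a realistic system, the pattern-matching step would be replaced by a trained extraction model or an LLM prompt, and the in-memory graph by a persistent triple store with querying and inference support.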

This Special Issue invites original research papers and reviews that showcase combined applications, methodologies, and advances in information extraction, knowledge graphs, and large language models. We welcome submissions on topics including, but not limited to, the following:

  1. Knowledge Graph Construction: techniques and methodologies for constructing knowledge graphs from diverse data sources, incorporating the outputs of large language models for improved entity recognition, relation extraction, and ontology design.
  2. Semantic Search and Recommendation Systems: leveraging the power of large language models and knowledge graphs to enhance search engines and recommendation systems, enabling more accurate and context-aware information retrieval and personalized recommendations.
  3. Natural Language Processing (NLP) with Large Language Models: exploring the integration of large language models, such as GPT-3.5, with knowledge graphs for tasks such as question answering, sentiment analysis, summarization, and named entity recognition.
  4. Knowledge Graphs in Healthcare and Life Sciences: harnessing the potential of information extraction, large language models, and knowledge graphs to facilitate biomedical data integration, clinical decision support systems, drug discovery, and personalized medicine.
  5. Industry Applications and Ethical Considerations: real-world case studies demonstrating the adoption and impact of combined IE, KG, and LLM technologies in domains such as finance, e-commerce, manufacturing, transportation, and energy. Papers addressing the ethical implications and responsible use of large language models in knowledge extraction and representation are also encouraged.

We welcome papers that showcase innovative approaches, novel algorithms, and practical implementations advancing the state of the art in information extraction, knowledge graphs, and large language models. We particularly encourage papers that demonstrate the successful deployment of these technologies in real-world scenarios and their impact on decision making, knowledge discovery, and information management.

Dr. Junwen Duan
Dr. Fangfang Li
Dr. Tudor Groza
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website; once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • information extraction
  • knowledge graphs
  • large language model
  • natural language processing
  • healthcare applications
  • industry applications
  • data integration

Published Papers (1 paper)


Research

20 pages, 4194 KiB  
Article
Do Large Language Models Show Human-like Biases? Exploring Confidence–Competence Gap in AI
by Aniket Kumar Singh, Bishal Lamichhane, Suman Devkota, Uttam Dhakal and Chandra Dhakal
Information 2024, 15(2), 92; https://doi.org/10.3390/info15020092 - 06 Feb 2024
Viewed by 1567
Abstract
This study investigates self-assessment tendencies in Large Language Models (LLMs), examining if patterns resemble human cognitive biases like the Dunning–Kruger effect. LLMs, including GPT, BARD, Claude, and LLaMA, are evaluated using confidence scores on reasoning tasks. The models provide self-assessed confidence levels before and after responding to different questions. The results show cases where high confidence does not correlate with correctness, suggesting overconfidence. Conversely, low confidence despite accurate responses indicates potential underestimation. The confidence scores vary across problem categories and difficulties, with confidence decreasing for complex queries. GPT-4 displays consistent confidence, while LLaMA and Claude demonstrate more variation. Some of these patterns resemble the Dunning–Kruger effect, where incompetence leads to inflated self-evaluations. While not conclusively evident, these observations parallel this phenomenon and provide a foundation to further explore the alignment of competence and confidence in LLMs. As LLMs continue to expand their societal roles, further research into their self-assessment mechanisms is warranted to fully understand their capabilities and limitations.