Search Results (1,002)

Search Parameters:
Keywords = ChatGPT-4

18 pages, 414 KB  
Article
Harnessing Self-Control and AI: Understanding ChatGPT’s Impact on Academic Wellbeing
by Metin Besalti
Behav. Sci. 2025, 15(9), 1181; https://doi.org/10.3390/bs15091181 - 29 Aug 2025
Abstract
The rapid integration of generative AI, particularly ChatGPT, into academic settings has prompted urgent questions regarding its impact on students’ psychological and academic outcomes. Although generative AI holds considerable potential to transform educational practices, its effects on individual traits such as self-control and academic wellbeing remain insufficiently explored. This study addresses this gap through a sequential two-phase design. In the first phase, the ChatGPT Usage Scale was adapted and validated for a Turkish university student population (N = 413). Using confirmatory factor analysis and item response theory, the scale was confirmed as a psychometrically valid and reliable one-factor instrument. In the second phase, a separate sample (N = 449) was used to examine the relationships between ChatGPT usage, self-control, and academic wellbeing through a mediation model. The findings revealed that higher ChatGPT usage was significantly associated with lower levels of both self-control and academic wellbeing. Additionally, mediation analysis demonstrated that self-control partially mediates the negative relationship between ChatGPT usage and academic wellbeing. The study concludes that while generative AI tools are valuable, their integration into education presents a double-edged sword, highlighting the critical need to foster students’ self-regulatory skills to ensure they can harness these tools responsibly without compromising their academic and psychological health. Full article
(This article belongs to the Special Issue Artificial Intelligence and Educational Psychology)
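The mediation analysis named in the abstract above (ChatGPT usage → self-control → academic wellbeing) can be sketched with two regressions: path a (usage predicting self-control) and path b (self-control predicting wellbeing, controlling for usage), with the indirect effect a·b. The variable names and toy data below are illustrative assumptions, not the study's data:

```python
from statistics import mean

def cov(u, v):
    """Sample covariance of two equal-length sequences."""
    mu, mv = mean(u), mean(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (len(u) - 1)

def mediation(x, m, y):
    """Simple-mediation sketch: a = slope of M on X; b = slope of Y on M
    controlling for X (two-predictor OLS via covariance algebra)."""
    a = cov(x, m) / cov(x, x)                         # X -> M path
    det = cov(m, m) * cov(x, x) - cov(x, m) ** 2      # normal-equation determinant
    b = (cov(m, y) * cov(x, x) - cov(x, y) * cov(x, m)) / det        # M -> Y given X
    c_prime = (cov(x, y) * cov(m, m) - cov(m, y) * cov(x, m)) / det  # direct X -> Y given M
    return a, b, a * b, c_prime                       # a, b, indirect, direct

# Hypothetical scores: usage (x), self-control (m), wellbeing (y)
x = [1, 2, 3, 4]
m = [2.1, 3.9, 6.2, 7.8]   # roughly 2 * x
y = m                       # wellbeing driven entirely by m in this toy example
a, b, indirect, direct = mediation(x, m, y)
```

In practice these paths would be estimated with a mediation package and bootstrapped confidence intervals rather than the bare point estimates shown here.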
20 pages, 538 KB  
Article
An Analysis of Students’ Attitudes Toward Artificial Intelligence—ChatGPT, in Particular—in Relation to Personality Traits, Coping Strategies, and Personal Values
by Simona Maria Glaveanu and Roxana Maier
Behav. Sci. 2025, 15(9), 1179; https://doi.org/10.3390/bs15091179 - 29 Aug 2025
Abstract
The general objective of this research was to investigate the attitudes of Bucharest students toward artificial intelligence (AI)—in particular, ChatGPT—in relation to their personality traits, coping strategies, and personal values, in order to identify psychosocial approaches that help students engage effectively with this AI product. As no instrument had been validated and calibrated on Romanian students, the scale constructed by Acosta-Enriquez et al. in 2024 was adapted for students from Bucharest (N = 508). Following item analysis, the adapted scale was reduced to 16 items; following exploratory factor analysis (EFA; 0.81 < α < 0.91), the three-factor structure (cognitive, affective, and behavioral components), which explains 53% of the variation in Bucharest students’ attitudes toward ChatGPT, was retained in light of the confirmatory factor analysis (CFA) results (χ2(79) = 218.345, p < 0.001; CMIN/DF = 2.486; CFI = 0.911; TLI = 0.900; RMSEA = 0.058, 90% CI: 0.050–0.065). The study showed that 85.53% of the research subjects had used ChatGPT at least once, of whom 24.11% hold a positive/open attitude toward ChatGPT, and that there are correlations (p < 0.01; 0.23 < r2 < 0.50) between students’ attitudes toward ChatGPT and several personality traits, coping strategies, and personal values. It also shows that the three components of attitude toward ChatGPT (cognitive, affective, and behavioral) are correlated with a series of personality traits, coping strategies, and personal values. Although the general objective was achieved and the adapted scale has adequate psychometric qualities, the authors propose expanding the sample in future studies so that the scale can be validated for the Romanian population as a whole. Finally, the research proposes several concrete approaches for helping students engage effectively with this AI product, recognizing, beyond the ethical challenges, the benefits of technology in the evolution of education. Full article
(This article belongs to the Special Issue Artificial Intelligence and Educational Psychology)
11 pages, 200 KB  
Article
Liver Cysts and Artificial Intelligence: Is AI Really a Patient-Friendly Support?
by Enrico Spalice, Chiara D’Alterio, Maria Lanzone, Immacolata Iannone, Cristina De Padua, Matteo De Pastena and Alessandro Coppola
Surgeries 2025, 6(3), 73; https://doi.org/10.3390/surgeries6030073 - 29 Aug 2025
Abstract
Background: With the advancement of AI-powered online tools, patients are increasingly turning to AI for guidance on healthcare-related issues. Methods: Acting as patients, we posed eight direct questions concerning a common clinical condition—liver cysts—to four AI chatbots: ChatGPT, Perplexity, Copilot, and Gemini. The responses were collected and compared both among the chatbots and with the current literature, including the most recent guidelines. Results: Overall, the responses from the four chatbots were generally consistent with the literature, with only a few inaccuracies noted. For questions addressing “grey areas” in clinical research, all chatbots provided generalized answers. ChatGPT, Copilot, and Gemini highlighted the lack of conclusive evidence in the literature, while Perplexity offered speculative correlations not supported by data. Importantly, all chatbots recommended consulting a healthcare professional. While Perplexity, Copilot, and Gemini included references in their responses, not all cited sources were academic or of medium/high evidence quality. An analysis of Flesch Readability Ease Scores and Estimated Reading Grade Levels indicated that ChatGPT and Gemini provided the most readable and comprehensible responses. Conclusions: The integration of chatbots into real-world healthcare scenarios requires thorough testing to prevent potentially serious consequences from misuse. While undeniably innovative, this technology presents significant risks if implemented improperly. Full article
5 pages, 970 KB  
Proceeding Paper
Application of Artificial Intelligence in Graphical Discrimination of Structural Cracks
by Ren-Jwo Tsay
Eng. Proc. 2025, 108(1), 4; https://doi.org/10.3390/engproc2025108004 - 28 Aug 2025
Viewed by 18
Abstract
For the detection of structural cracks, engineers need to conduct measurements on-site. However, structural cracks are widely distributed and are often difficult to access for measurement. Therefore, photography is commonly used for crack detection. Although cracks can be observed in photographs, their location and size need to be clearly defined. Artificial intelligence (AI) and databases are used in graphic processing methods to perform structural crack analysis, with Python 3.12 as the main tool. Using Python, an AI image analysis program was developed for crack analysis and evaluation. ChatGPT 3.5 was also used for the analysis of crack length and width. Using AI considerably increased the reliability of detecting and measuring structural cracks. Full article
35 pages, 890 KB  
Article
Assessing the Accuracy and Completeness of AI-Generated Dental Responses: An Evaluation of the Chat-GPT Model
by Ahmad A. Othman, Abdulwadood J. Sharqawi, Ahmed A. MohammedAziz, Wafaa A. Ali, Amjad A. Alatiyyah and Mahir A. Mirah
Healthcare 2025, 13(17), 2144; https://doi.org/10.3390/healthcare13172144 - 28 Aug 2025
Viewed by 142
Abstract
Background: The rapid advancement of artificial intelligence (AI) in healthcare has opened new opportunities, yet the clinical validation of AI tools in dentistry remains limited. Objectives: This study aimed to assess the performance of ChatGPT in generating accurate and complete responses to academic dental questions across multiple specialties, comparing the capabilities of GPT-4 and GPT-3.5 models. Methodology: A panel of academic specialists from eight dental specialties collaboratively developed 48 clinical questions, classified by consensus as easy, medium, or hard, and as requiring either binary (yes/no) or descriptive responses. Each question was sequentially entered into both GPT-4 and GPT-3.5 models, with instructions to provide guideline-based answers. The AI-generated responses were independently evaluated by the specialists for accuracy (6-point Likert scale) and completeness (3-point Likert scale). Descriptive and inferential statistics were applied, including Mann–Whitney U and Kruskal–Wallis tests, with significance set at p < 0.05. Results: GPT-4 consistently outperformed GPT-3.5 in both evaluation domains. The median accuracy score was 6.0 for GPT-4 and 5.0 for GPT-3.5 (p = 0.02), while the median completeness score was 3.0 for GPT-4 and 2.0 for GPT-3.5 (p < 0.001). GPT-4 demonstrated significantly higher overall accuracy (5.29 ± 1.1) and completeness (2.44 ± 0.71) compared to GPT-3.5 (4.5 ± 1.7 and 1.69 ± 0.62, respectively; p = 0.024 and <0.001). When stratified by specialty, notable improvements with GPT-4 were observed in Periodontology, Endodontics, Implantology, and Oral Surgery, particularly in completeness scores. Conclusions: In academic dental settings, GPT-4 provided more accurate and complete responses than GPT-3.5. Despite both models showing potential, their clinical application should remain supervised by human experts. Full article
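The Mann–Whitney U test applied in the study above compares two independent groups of ordinal scores such as Likert ratings. A minimal count-based sketch, using hypothetical rating vectors rather than the paper's data:

```python
def mann_whitney_u(sample_a, sample_b):
    """Count-based Mann-Whitney U: for every (a, b) pair, a win for
    sample_a adds 1 and a tie adds 0.5; return the smaller of U_a and U_b."""
    u_a = sum(1.0 if a > b else 0.5 if a == b else 0.0
              for a in sample_a for b in sample_b)
    u_b = len(sample_a) * len(sample_b) - u_a
    return min(u_a, u_b)

# Hypothetical 6-point accuracy ratings for two model versions
gpt4_scores = [6, 6, 5, 6, 5]
gpt35_scores = [5, 4, 5, 3, 4]
u = mann_whitney_u(gpt4_scores, gpt35_scores)
```

A statistics library such as SciPy would additionally compute the p-value with a tie-corrected normal approximation; the raw U statistic is the same pair-counting idea shown here.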
20 pages, 1318 KB  
Article
The ChatGPT Effect: Investigating Shifting Discourse Patterns, Sentiment, and Benefit–Challenge Framing in AI Mental Health Support
by Sanguk Lee, Minjin (MJ) Rheu and Jie Zhuang
Behav. Sci. 2025, 15(9), 1172; https://doi.org/10.3390/bs15091172 - 28 Aug 2025
Viewed by 192
Abstract
AI has the potential to enhance mental health by scaling support. However, its implementation brings uncertainties and challenges that require careful review to ensure safety. This study examined evolving public views on AI mental health support by analyzing relevant Reddit posts (n = 517). Following the release of ChatGPT in 2022, discussions about AI in the context of mental health surged, with a noticeable shift in preference toward large language models (LLMs) over conventional therapy chatbots. Users appreciated AI for its emotional support, companionship, and accessibility, while also expressing concerns about adverse effects and lack of conversational depth and emotional connection. Distinct patterns in how benefits and challenges were discussed emerged between experienced and non-experienced AI users, as well as between AI-focused and mental health-focused communities. AI-experienced users acknowledged both the benefits and limitations, whereas AI communities emphasized the positives and mental health communities highlighted the lack of conversational depth. These findings underscore the need for tailored communication strategies to set realistic expectations about the utility of AI in mental healthcare among different stakeholders. This research provides insights into developing ethical AI systems that complement traditional care while addressing current limitations. Full article
(This article belongs to the Special Issue Promoting Health Behaviors in the New Media Era)
7 pages, 1467 KB  
Proceeding Paper
Opportunities and Challenges of Big Models in Middle School Mathematics Teaching
by Yuyang Sun and Jiancheng Zou
Eng. Proc. 2025, 103(1), 20; https://doi.org/10.3390/engproc2025103020 - 27 Aug 2025
Viewed by 83
Abstract
The influence of large language models (LLMs) has permeated education as well. We explored the opportunities and challenges of LLMs in mathematics teaching. In mathematics education, the generative nature of LLMs is better suited to teachers, who can judge the generated mathematical content, than to students, who may lack such discernment. Additionally, we combined programming languages with LLMs, using geometric models as an example, to integrate mathematics and visual representation in a new way. Through a comparison of problem-solving between ChatGPT and MathGPT and an analysis of their logical reasoning, teachers can employ large models as auxiliary tools to enhance the quality of mathematics teaching. Full article
29 pages, 10074 KB  
Article
Framework for LLM-Enabled Construction Robot Task Planning: Knowledge Base Preparation and Robot–LLM Dialogue for Interior Wall Painting
by Kyungki Kim, Prashnna Ghimire and Pei-Chi Huang
Robotics 2025, 14(9), 117; https://doi.org/10.3390/robotics14090117 - 27 Aug 2025
Viewed by 320
Abstract
Task planning for a construction robot requires systematically integrating diverse elements, such as building components, construction processes, user input, and robot software. Conventional robot programming complicates this by requiring precise entity naming, relationship definitions, unstructured language interpretation, and accurate action selection. Existing research has focused on isolated components, such as natural language processing, hardcoded data linkages, or BIM data extraction. We introduce a novel framework using an LLM as the cognitive core for autonomous construction robots, encompassing both data preparation and task planning phases. Leveraging OpenAI’s ChatGPT-4, we demonstrate how LLMs can process structured BIM data and unstructured human inputs to generate robot instructions. A prototype tested in a simulated environment with a mobile painting robot adaptively executed tasks through real-time dialogues with ChatGPT-4, reducing reliance on hardcoded logic. Results suggest that LLMs can serve as the cognitive core for construction robots, with potential for extension to more complex operations. Full article
(This article belongs to the Section AI in Robotics)
12 pages, 842 KB  
Article
Developing a Local Generative AI Teaching Assistant System: Utilizing Retrieval-Augmented Generation Technology to Enhance the Campus Learning Environment
by Jing-Wen Wu and Ming-Hseng Tseng
Electronics 2025, 14(17), 3402; https://doi.org/10.3390/electronics14173402 - 27 Aug 2025
Viewed by 191
Abstract
The rapid advancement of AI technologies and the emergence of large language models (LLMs) such as ChatGPT have facilitated the integration of intelligent question-answering systems into education. However, students often hesitate to ask questions, which negatively affects learning outcomes. To address this issue, this study proposes a closed, locally deployed generative AI teaching assistant system that enables instructors to upload course PDFs to generate customized Q&A platforms. The system is based on a Retrieval-Augmented Generation (RAG) architecture and was developed through a comparative evaluation of components, including open-source large language models, embedding models, and vector databases to determine the optimal setup. The implementation integrates RAG with responsive web technologies and is evaluated using a standardized test question bank. Experimental results demonstrate that the system achieves an average answer accuracy of up to 86%, indicating a strong performance in an educational context. These findings suggest the feasibility of the system as an effective, privacy-preserving AI teaching aid, offering a scalable technical solution to improve digital learning in on-premise environments. Full article
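The RAG architecture this abstract describes follows a standard pattern: embed document chunks, retrieve the chunks most similar to the question, and prepend them to the LLM prompt. A minimal sketch in which a toy bag-of-words embedding stands in for the real embedding model and vector database (all chunk text and function names are hypothetical):

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; a real system would use a sentence-embedding model."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(u[w] * v[w] for w in u)
    norm = math.sqrt(sum(c * c for c in u.values())) * math.sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

def retrieve(question, chunks, k=2):
    """Rank course-material chunks by similarity to the question, keep the top k."""
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def build_prompt(question, chunks):
    """Assemble the retrieved context and the question into one LLM prompt."""
    context = "\n".join(retrieve(question, chunks))
    return f"Answer using only this course material:\n{context}\n\nQuestion: {question}"

chunks = [
    "Gradient descent updates weights in the direction of the negative gradient.",
    "The midterm exam covers chapters one through four.",
    "Backpropagation computes gradients layer by layer.",
]
prompt = build_prompt("How does gradient descent update weights?", chunks)
```

In the deployed system, `embed` would call the chosen open-source embedding model and `retrieve` would query the vector database; the prompt-assembly step is what RAG adds over a plain LLM call.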
9 pages, 209 KB  
Proceeding Paper
AI Detection in Academia: How Indian Universities Can Safeguard Academic Integrity
by Akash Gupta, Harsh Mahaseth and Arushi Bajpai
Eng. Proc. 2025, 107(1), 26; https://doi.org/10.3390/engproc2025107026 - 26 Aug 2025
Viewed by 587
Abstract
In recent times, the use of Artificial Intelligence (AI) technologies like ChatGPT-4o within the education sector has become an undisputed fact. AI has transformed the education sector, offering tools that enhance student research and writing. However, the use of AI raises concerns with respect to academic integrity, originality, and authenticity. Indian Universities regulate traditional plagiarism with anti-plagiarism detection systems. Some Indian Universities have also subscribed to AI plagiarism detection systems, but not all of them have subscribed to AI plagiarism detection. The majority of Indian Universities are not sufficiently prepared to identify AI-generated content that is contextually relevant and original, thus bypassing these traditional checks. This study stresses the urgent need for the University Grants Commission (UGC) to introduce advanced AI detection systems across Indian universities. Unlike regular plagiarism checkers, these tools can identify unique writing patterns that suggest AI-generated content. Without such measures, universities risk students using AI to complete assignments and research dishonestly. Through this research, the authors will examine the ethical concerns surrounding AI in academia and highlight the importance of clear guidelines to ensure responsible use. Colleges and universities need proper policies to regulate AI-generated work in student submissions. This study will compare how India and other countries handle AI detection in education, elaborating on the challenges of dealing with AI-generated content. The paper will propose a structured framework for Indian universities, including the use of AI detection tools, ethical guidelines, and awareness programmes to help students use AI responsibly while maintaining academic integrity in a changing educational system. Full article
20 pages, 592 KB  
Review
The Temporal Evolution of Large Language Model Performance: A Comparative Analysis of Past and Current Outputs in Scientific and Medical Research
by Ishith Seth, Gianluca Marcaccini, Bryan Lim, Jennifer Novo, Stephen Bacchi, Roberto Cuomo, Richard J. Ross and Warren M. Rozen
Informatics 2025, 12(3), 86; https://doi.org/10.3390/informatics12030086 - 26 Aug 2025
Viewed by 274
Abstract
Background: Large language models (LLMs) such as ChatGPT have evolved rapidly, with notable improvements in coherence, factual accuracy, and contextual relevance. However, their academic and clinical applicability remains under scrutiny. This study evaluates the temporal performance evolution of LLMs by comparing earlier model outputs (GPT-3.5 and GPT-4.0) with ChatGPT-4.5 across three domains: aesthetic surgery counseling, an academic discussion base of thumb arthritis, and a systematic literature review. Methods: We replicated the methodologies of three previously published studies using identical prompts in ChatGPT-4.5. Each output was assessed against its predecessor using a nine-domain Likert-based rubric measuring factual accuracy, completeness, reference quality, clarity, clinical insight, scientific reasoning, bias avoidance, utility, and interactivity. Expert reviewers in plastic and reconstructive surgery independently scored and compared model outputs across versions. Results: ChatGPT-4.5 outperformed earlier versions across all domains. Reference quality improved most significantly (a score increase of +4.5), followed by factual accuracy (+2.5), scientific reasoning (+2.5), and utility (+2.5). In aesthetic surgery counseling, GPT-3.5 produced generic responses lacking clinical detail, whereas ChatGPT-4.5 offered tailored, structured, and psychologically sensitive advice. In academic writing, ChatGPT-4.5 eliminated reference hallucination, correctly applied evidence hierarchies, and demonstrated advanced reasoning. In the literature review, recall remained suboptimal, but precision, citation accuracy, and contextual depth improved substantially. Conclusion: ChatGPT-4.5 represents a major step forward in LLM capability, particularly in generating trustworthy academic and clinical content. While not yet suitable as a standalone decision-making tool, its outputs now support research planning and early-stage manuscript preparation. 
Persistent limitations include information recall and interpretive flexibility. Continued validation is essential to ensure ethical, effective use in scientific workflows. Full article
10 pages, 208 KB  
Article
Evaluating the Competence of AI Chatbots in Answering Patient-Oriented Frequently Asked Questions on Orthognathic Surgery
by Ezgi Yüceer-Çetiner, Dilara Kazan, Mobin Nesiri and Selçuk Basa
Healthcare 2025, 13(17), 2114; https://doi.org/10.3390/healthcare13172114 - 26 Aug 2025
Viewed by 247
Abstract
Objectives: This study aimed to evaluate the performance of three widely used artificial intelligence (AI) chatbots—ChatGPT-4, Gemini 2.5 Pro, and Claude Sonnet 4—in answering patient-oriented frequently asked questions (FAQs) related to orthognathic surgery. Given the increasing reliance on AI tools in healthcare, it is essential to evaluate their performance to provide accurate, empathetic, readable, and clinically appropriate information. Methods: Twenty FAQs in Turkish about orthognathic surgery were presented to each chatbot. The responses were evaluated by three oral and maxillofacial surgeons using a modified Global Quality Score (GQS), binary clinical appropriateness judgment, and a five-point empathy rating scale. The evaluation process was conducted in a double-blind manner. The Ateşman Readability Formula was applied to each response using an automated Python-based script. Comparative statistical analyses—including ANOVA, Kruskal–Wallis, and post hoc tests—were used to determine significant differences in performance among chatbots. Results: Gemini outperformed both GPT-4 and Claude in GQS, empathy, and clinical appropriateness (p < 0.001). GPT-4 demonstrated the highest readability scores (p < 0.001) but frequently lacked empathetic tone and safety-oriented guidance. Claude showed moderate performance, balancing ethical caution with limited linguistic clarity. A moderate positive correlation was found between empathy and perceived response quality (r = 0.454; p = 0.044). Conclusions: AI chatbots vary significantly in their ability to support surgical patient education. While GPT-4 offers superior readability, Gemini provides the most balanced and clinically reliable responses. These findings underscore the importance of context-specific chatbot selection and continuous clinical oversight to ensure safe and ethical AI-driven communication. Full article
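The Ateşman Readability Formula mentioned in the Methods above is the Turkish adaptation of Flesch Reading Ease: 198.825 − 40.175 × (syllables per word) − 2.610 × (words per sentence). A sketch of an automated scorer along the lines the authors describe (the sample sentence is hypothetical; the syllable counter relies on the fact that in Turkish a word's syllable count equals its vowel count):

```python
import re

TURKISH_VOWELS = set("aeıioöuüAEIİOÖUÜ")

def atesman_score(text):
    """Ateşman readability: 198.825 - 40.175*(syllables/word) - 2.610*(words/sentence).
    Syllables are counted as vowels, which is accurate for Turkish."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    syllables = sum(1 for ch in text if ch in TURKISH_VOWELS)
    return (198.825
            - 40.175 * (syllables / len(words))
            - 2.610 * (len(words) / len(sentences)))

text = "Ameliyat güvenlidir. İyileşme birkaç hafta sürer."
score = atesman_score(text)
```

Higher scores mean easier text, on the same 0–100 scale as the English Flesch formula; a production script would also need more careful sentence splitting around abbreviations.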
20 pages, 902 KB  
Review
Pulmonary and Immune Dysfunction in Pediatric Long COVID: A Case Study Evaluating the Utility of ChatGPT-4 for Analyzing Scientific Articles
by Susanna R. Var, Nicole Maeser, Jeffrey Blake, Elise Zahs, Nathan Deep, Zoey Vasilakos, Jennifer McKay, Sether Johnson, Phoebe Strell, Allison Chang, Holly Korthas, Venkatramana Krishna, Manojkumar Narayanan, Tuhinur Arju, Dilmareth E. Natera-Rodriguez, Alex Roman, Sam J. Schulz, Anala Shetty, Mayuresh Vernekar, Madison A. Waldron, Kennedy Person, Maxim Cheeran, Ling Li and Walter C. Low
J. Clin. Med. 2025, 14(17), 6011; https://doi.org/10.3390/jcm14176011 - 25 Aug 2025
Viewed by 375
Abstract
Coronavirus disease 2019 (COVID-19) in adults is well characterized and associated with multisystem dysfunction. A subset of patients develop post-acute sequelae of SARS-CoV-2 infection (PASC, or long COVID), marked by persistent and fluctuating organ system abnormalities. In children, distinct clinical and pathophysiological features of COVID-19 and long COVID are increasingly recognized, though knowledge remains limited relative to adults. The exponential expansion of the COVID-19 literature has made comprehensive appraisal by individual researchers increasingly unfeasible, highlighting the need for new approaches to evidence synthesis. Large language models (LLMs) such as the Generative Pre-trained Transformer (GPT) can process vast amounts of text, offering potential utility in this domain. Earlier versions of GPT, however, have been prone to generating fabricated references or misrepresentations of primary data. To evaluate the potential of more advanced models, we systematically applied GPT-4 to summarize studies on pediatric long COVID published between January 2022 and January 2025. Articles were identified in PubMed, and full-text PDFs were retrieved from publishers. GPT-4-generated summaries were cross-checked against the results sections of the original reports to ensure accuracy before incorporation into a structured review framework. This methodology demonstrates how LLMs may augment traditional literature review by improving efficiency and coverage in rapidly evolving fields, provided that outputs are subjected to rigorous human verification. Full article
(This article belongs to the Section Epidemiology & Public Health)
29 pages, 848 KB  
Article
Applying Additional Auxiliary Context Using Large Language Model for Metaphor Detection
by Takuya Hayashi and Minoru Sasaki
Big Data Cogn. Comput. 2025, 9(9), 218; https://doi.org/10.3390/bdcc9090218 - 25 Aug 2025
Viewed by 228
Abstract
Metaphor detection is challenging in natural language processing (NLP) because it requires recognizing nuanced semantic shifts beyond literal meaning, and conventional models often falter when contextual cues are limited. We propose a method to enhance metaphor detection by augmenting input sentences with auxiliary context generated by ChatGPT. In our approach, ChatGPT produces semantically relevant sentences that are inserted before, after, or on both sides of a target sentence, allowing us to analyze the impact of context position and length on classification. Experiments on three benchmark datasets (MOH-X, VUA_All, VUA_Verb) show that this context-enriched input consistently outperforms the no-context baseline across accuracy, precision, recall, and F1-score, with the MOH-X dataset achieving the largest F1 gain. These improvements are statistically significant based on two-tailed t-tests. Our findings demonstrate that generative models can effectively enrich context for metaphor understanding, highlighting context placement and quantity as critical factors. Finally, we outline future directions, including advanced prompt engineering, optimizing context lengths, and extending this approach to multilingual metaphor detection. Full article
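The augmentation step this abstract describes, inserting generated sentences before, after, or on both sides of the target, can be sketched independently of the model call. Here `generate_context` is a hypothetical stand-in for the actual ChatGPT request, whose prompts the abstract does not give:

```python
def generate_context(sentence):
    """Placeholder for a ChatGPT call that returns semantically related
    sentences; here it just echoes a canned marker string."""
    return f"(related context for: {sentence})"

def augment(sentence, position="both"):
    """Insert auxiliary context before, after, or on both sides of the target."""
    ctx = generate_context(sentence)
    if position == "before":
        return f"{ctx} {sentence}"
    if position == "after":
        return f"{sentence} {ctx}"
    if position == "both":
        return f"{ctx} {sentence} {ctx}"
    raise ValueError(f"unknown position: {position}")

target = "He devoured the book in one sitting."
augmented = augment(target, position="before")
```

The augmented string, not the bare sentence, is what would be fed to the metaphor classifier, which is how the paper varies context position and length as experimental factors.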
28 pages, 2551 KB  
Article
Artificial Intelligence in Education (AIEd): Publication Patterns, Keywords, and Research Focuses
by Weijing Zhu, Luxi Wei and Yinghong Qin
Information 2025, 16(9), 725; https://doi.org/10.3390/info16090725 - 25 Aug 2025
Viewed by 436
Abstract
Since the advent of generative AI, research on AI in Education (AIEd) has experienced explosive growth. This study systematically explores publication dynamics, keyword evolution, and research focuses in AIEd by analyzing 2952 papers from the Web of Science (1990–2024). Using bibliometric methods, 2800 English publications were screened, with analyses conducted via VOSviewer v1.6.20 and Python v3.11.5. Findings show a surge in publications post-2020, reaching 612 in 2023 and 1216 by November 2024. The US and China are leading contributors, with the University of London and the University of California system as core institutions. Keywords evolved from “AI” and “machine learning” (2018–2020) to “ChatGPT” and “ethics” (post-2022), reflecting dual focuses on technological applications and ethical considerations. Notably, 68% of highly cited papers address ethical controversies, while higher education and medical education emerge as primary application domains, involving personalized learning and intelligent tutoring systems. Cross-disciplinary research is evident, with education studies comprising the largest category. The study reveals AIEd’s shift toward socio-technical integration, highlighting generative AI’s transformative role yet identifying gaps in ethical governance and K-12 research. These insights inform policymakers, journals, and institutions, advocating for enhanced interdisciplinary collaboration and long-term impact research to balance innovation with educational ethics. Full article
(This article belongs to the Special Issue AI Technology-Enhanced Learning and Teaching)