Search Results (1,166)

Search Parameters:
Keywords = large language models (LLM)

34 pages, 2491 KB  
Article
Simulating Public Opinion: Comparing Distributional and Individual-Level Predictions from LLMs and Random Forests
by Fernando Miranda and Pedro Paulo Balbi
Entropy 2025, 27(9), 923; https://doi.org/10.3390/e27090923 - 2 Sep 2025
Abstract
Understanding and modeling the flow of information in human societies is essential for capturing phenomena such as polarization, opinion formation, and misinformation diffusion. Traditional agent-based models often rely on simplified behavioral rules that fail to capture the nuanced and context-sensitive nature of human decision-making. In this study, we explore the potential of Large Language Models (LLMs) as data-driven, high-fidelity agents capable of simulating individual opinions under varying informational conditions. Conditioning LLMs on real survey data from the 2020 American National Election Studies (ANES), we investigate their ability to predict individual-level responses across a spectrum of political and social issues in a zero-shot setting, without any training on the survey outcomes. Using Jensen–Shannon distance to quantify divergence in opinion distributions and F1-score to measure predictive accuracy, we compare LLM-generated simulations to those produced by a supervised Random Forest model. While performance at the individual level is comparable, LLMs consistently produce aggregate opinion distributions closer to the empirical ground truth. These findings suggest that LLMs offer a promising new method for simulating complex opinion dynamics and modeling the probabilistic structure of belief systems in computational social science. Full article
(This article belongs to the Section Multidisciplinary Applications)
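The two evaluation measures named in the abstract above (Jensen–Shannon distance between opinion distributions and F1-score for individual predictions) can be reproduced with standard libraries; a minimal sketch on made-up data, not the ANES survey responses, might look like this:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon  # Jensen-Shannon *distance* (square root of the divergence)
from sklearn.metrics import f1_score

# Hypothetical aggregate opinion shares over four answer options.
empirical = np.array([0.10, 0.25, 0.40, 0.25])   # ground-truth survey distribution
simulated = np.array([0.12, 0.22, 0.41, 0.25])   # distribution produced by the simulated agents

print(f"Jensen-Shannon distance: {jensenshannon(empirical, simulated, base=2):.4f}")

# Hypothetical individual-level predictions vs. true responses (answer-option indices).
y_true = [2, 1, 3, 0, 2, 2, 1, 3]
y_pred = [2, 1, 3, 1, 2, 0, 1, 3]
print(f"Macro F1: {f1_score(y_true, y_pred, average='macro'):.4f}")
```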
43 pages, 859 KB  
Review
ChatGPT’s Expanding Horizons and Transformative Impact Across Domains: A Critical Review of Capabilities, Challenges, and Future Directions
by Taiwo Raphael Feyijimi, John Ogbeleakhu Aliu, Ayodeji Emmanuel Oke and Douglas Omoregie Aghimien
Computers 2025, 14(9), 366; https://doi.org/10.3390/computers14090366 - 2 Sep 2025
Abstract
The rapid proliferation of Chat Generative Pre-trained Transformer (ChatGPT) marks a pivotal moment in artificial intelligence, eliciting responses from academic shock to industrial awe. As these technologies advance from passive tools toward proactive, agentic systems, their transformative potential and inherent risks are magnified globally. This paper presents a comprehensive, critical review of ChatGPT’s impact across five key domains: natural language understanding (NLU), content generation, knowledge discovery, education, and engineering. While ChatGPT demonstrates profound capabilities, significant challenges remain in factual accuracy, bias, and the inherent opacity of its reasoning—a core issue termed the “Black Box Conundrum”. To analyze these evolving dynamics and the implications of this shift toward autonomous agency, this review introduces a series of conceptual frameworks, each specifically designed to illuminate the complex interactions and trade-offs within these domains: the “Specialization vs. Generalization” tension in NLU; the “Quality–Scalability–Ethics Trilemma” in content creation; the “Pedagogical Adaptation Imperative” in education; and the emergence of “Human–LLM Cognitive Symbiosis” in engineering. The analysis reveals an urgent need for proactive adaptation across sectors. Educational paradigms must shift to cultivate higher-order cognitive skills, while professional practices (including practices within the education sector) must evolve to treat AI as a cognitive partner, leveraging techniques like Retrieval-Augmented Generation (RAG) and sophisticated prompt engineering. Ultimately, this paper argues for an overarching “Ethical–Technical Co-evolution Imperative”, charting a forward-looking research agenda that intertwines technological innovation with vigorous ethical and methodological standards to ensure responsible AI development and integration. Ultimately, the analysis reveals that the challenges of factual accuracy, bias, and opacity are interconnected and acutely magnified by the emergence of agentic systems, demanding a unified, proactive approach to adaptation across all sectors. Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)
17 pages, 1303 KB  
Article
LLMs in Wind Turbine Gearbox Failure Prediction
by Yoke Wang Tan and James Carroll
Energies 2025, 18(17), 4659; https://doi.org/10.3390/en18174659 - 2 Sep 2025
Abstract
Predictive maintenance strategies in wind turbine operations have risen in popularity with the growth of renewable electricity demand. The capacity of the strategy to predict system health, especially for the wind turbine gearboxes, is critical in reducing wind turbine operation and maintenance cost. Driven by the emergence of the application of large language models (LLMs) in diverse domains, this work explores the potential of LLMs in the development of wind turbine gearbox prognosis. A comparative analysis is designed to investigate the capability of two state-of-the-art LLMs—GPT-4o and DeepSeek-V3—in proposing machine learning (ML) pipelines to classify gearbox conditions based on a labelled SCADA dataset. The LLMs were prompted with the context of the task and detailed information about the SCADA dataset investigated. The outputs generated by the LLMs were evaluated in terms of pipeline quality and prediction performance using the confusion matrix. Baseline ML models were developed and fine-tuned as benchmarks using Python 3.12 libraries. Among the baseline models, the random forest and XGBoost models achieved the highest cross-validated average F1-scores. The results have shown that the ML pipeline proposed by DeepSeek-V3 was significantly better than both GPT-4o and baseline models in terms of data analytical scope and prediction accuracy. Full article
(This article belongs to the Special Issue Renewable Energy System Forecasting and Maintenance Management)
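The baseline setup described above (random forest and XGBoost compared by cross-validated F1-scores) follows a common scikit-learn pattern; the sketch below uses synthetic placeholder features rather than the authors' SCADA data or tuning:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))          # placeholder SCADA-style features
y = rng.integers(0, 2, size=500)        # placeholder gearbox condition labels (0 = healthy, 1 = faulty)

for name, model in [
    ("random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("XGBoost", XGBClassifier(n_estimators=200, eval_metric="logloss", random_state=0)),
]:
    scores = cross_val_score(model, X, y, cv=5, scoring="f1")
    print(f"{name}: mean CV F1 = {scores.mean():.3f}")
```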
26 pages, 3949 KB  
Article
An AI-Based Risk Analysis Framework Using Large Language Models for Web Log Security
by Hoseong Jeong and Inwhee Joe
Electronics 2025, 14(17), 3512; https://doi.org/10.3390/electronics14173512 - 2 Sep 2025
Abstract
Web log data analysis is essential for monitoring and securing modern software systems. However, traditional manual analysis methods struggle to cope with the rapidly growing volumes and complexity of log data, resulting in inefficiencies and potential security risks. To address these challenges, this paper proposes an AI-driven log analysis framework utilizing advanced natural language processing techniques from large language models (LLMs), specifically ChatGPT. The framework aims to automate log data normalization, anomaly detection, and risk assessment, enabling the real-time identification and mitigation of security threats. Our objectives include reducing dependency on human analysis, enhancing the accuracy and speed of threat detection, and providing a scalable solution suitable for diverse web service environments. Through extensive experimentation with realistic log scenarios, we demonstrate the effectiveness of the proposed framework in swiftly identifying and responding to web-based security threats, ultimately improving both security posture and operational efficiency. Full article
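As an illustration of the kind of LLM call such a framework would rely on, the sketch below sends one access-log line to a chat model and asks for a normalized record plus a risk label; the prompt, output schema, and model name are assumptions, not the authors' implementation:

```python
# Illustrative only: prompting a chat LLM to normalize a raw web-log line and assign a risk level.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

log_line = '203.0.113.7 - - [02/Sep/2025:10:15:32 +0000] "GET /admin/../../etc/passwd HTTP/1.1" 404 512'

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {"role": "system",
         "content": "You are a security analyst. Return JSON with fields: "
                    "method, path, status, risk (low/medium/high), reason."},
        {"role": "user", "content": f"Analyze this access-log entry:\n{log_line}"},
    ],
)
print(response.choices[0].message.content)
```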
33 pages, 1985 KB  
Article
Future Skills in the GenAI Era: A Labor Market Classification System Using Kolmogorov–Arnold Networks and Explainable AI
by Dimitrios Christos Kavargyris, Konstantinos Georgiou, Eleanna Papaioannou, Theodoros Moysiadis, Nikolaos Mittas and Lefteris Angelis
Algorithms 2025, 18(9), 554; https://doi.org/10.3390/a18090554 - 2 Sep 2025
Abstract
Generative Artificial Intelligence (GenAI) is widely recognized for its profound impact on labor market demand, supply, and skill dynamics. However, due to its transformative nature, GenAI increasingly overlaps with traditional AI roles, blurring boundaries and intensifying the need to reassess workforce competencies. To address this challenge, this paper introduces KANVAS (Kolmogorov–Arnold Network Versatile Algorithmic Solution)—a framework based on Kolmogorov–Arnold Networks (KANs), which utilize B-spline-based, compact, and interpretable neural units—to distinguish between traditional AI roles and emerging GenAI-related positions. The aim of the study is to develop a reliable and interpretable labor market classification system that differentiates these roles using explainable machine learning. Unlike prior studies that emphasize predictive performance, our work is the first to employ KANs as an explanatory tool for labor classification, to reveal how GenAI-related and European Skills, Competences, Qualifications, and Occupations (ESCO)-aligned skills differentially contribute to distinguishing modern from traditional AI job roles. Using raw job vacancy data from two labor market platforms, KANVAS implements a hybrid pipeline combining a state-of-the-art Large Language Model (LLM) with Explainable AI (XAI) techniques, including Shapley Additive Explanations (SHAP), to enhance model transparency. The framework achieves approximately 80% classification consistency between traditional and GenAI-aligned roles, while also identifying the most influential skills contributing to each category. Our findings indicate that GenAI positions prioritize competencies such as prompt engineering and LLM integration, whereas traditional roles emphasize statistical modeling and legacy toolkits. By surfacing these distinctions, the framework offers actionable insights for curriculum design, targeted reskilling programs, and workforce policy development. Overall, KANVAS contributes a novel, interpretable approach to understanding how GenAI reshapes job roles and skill requirements in a rapidly evolving labor market. Finally, the open-source implementation of KANVAS is flexible and well-suited for HR managers and relevant stakeholders. Full article
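The SHAP step mentioned above can be illustrated on a generic skills-vs-role classifier; in the sketch below a gradient-boosted tree model stands in for the paper's KAN-based model, and the skill features and labels are invented:

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
feature_names = ["prompt_engineering", "llm_integration", "statistical_modeling", "sql", "docker"]
X = rng.integers(0, 2, size=(300, len(feature_names)))   # binary "skill present" features
y = (X[:, 0] | X[:, 1]) & rng.integers(0, 2, size=300)   # toy rule: GenAI role only if GenAI skills present

model = GradientBoostingClassifier(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Mean absolute SHAP value per feature, a rough global importance for the GenAI-role class.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```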
26 pages, 13537 KB  
Article
GeoJapan Fusion Framework: A Large Multimodal Model for Regional Remote Sensing Recognition
by Yaozong Gan, Guang Li, Ren Togo, Keisuke Maeda, Takahiro Ogawa and Miki Haseyama
Remote Sens. 2025, 17(17), 3044; https://doi.org/10.3390/rs17173044 - 1 Sep 2025
Abstract
Recent advances in large multimodal models (LMMs) have opened new opportunities for multitask recognition from remote sensing images. However, existing approaches still face challenges in effectively recognizing the complex geospatial characteristics of regions such as Japan, where its location along the seismic belt leads to highly diverse urban environments and cityscapes that differ from those in other regions. To overcome these challenges, we propose the GeoJapan Fusion Framework (GFF), a multimodal architecture that integrates a large language model (LLM) and a vision–language model (VLM) and strengthens multimodal alignment ability through an in-context learning mechanism to support multitask recognition for Japanese remote sensing images. The GFF also incorporates a cross-modal feature fusion mechanism with low-rank adaptation (LoRA) to enhance representation alignment and enable efficient model adaptation. To facilitate the construction of the GFF, we construct the GeoJapan dataset, which comprises a substantial collection of high-quality Japanese remote sensing images, designed to facilitate multitask recognition using LMMs. We conducted extensive experiments and compared our method with state-of-the-art LMMs. The experimental results demonstrate that GFF outperforms previous approaches across multiple tasks, demonstrating its promising ability for multimodal multitask remote sensing recognition. Full article
(This article belongs to the Special Issue Remote Sensing Image Classification: Theory and Application)
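The low-rank adaptation (LoRA) component mentioned above is commonly implemented with the PEFT library; the sketch below shows the general pattern, with a placeholder base checkpoint and adapter settings rather than the GFF configuration:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")  # placeholder checkpoint

lora_cfg = LoraConfig(
    r=8,                                   # low-rank dimension
    lora_alpha=16,                         # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the LoRA adapter weights remain trainable
```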
32 pages, 2361 KB  
Article
Exploring the Use and Misuse of Large Language Models
by Hezekiah Paul D. Valdez, Faranak Abri, Jade Webb and Thomas H. Austin
Information 2025, 16(9), 758; https://doi.org/10.3390/info16090758 - 1 Sep 2025
Abstract
Language modeling has evolved from simple rule-based systems into complex assistants capable of tackling a multitude of tasks. State-of-the-art large language models (LLMs) are capable of scoring highly on proficiency benchmarks, and as a result have been deployed across industries to increase productivity and convenience. However, the prolific nature of such tools has provided threat actors with the ability to leverage them for attack development. Our paper describes the current state of LLMs, their availability, and their role in benevolent and malicious applications. In addition, we propose how an LLM can be combined with text-to-speech (TTS) voice cloning to create a framework capable of carrying out social engineering attacks. Our case study analyzes the realism of two different open-source TTS models, Tortoise TTS and Coqui XTTS-v2, by calculating similarity scores between generated and real audio samples from four participants. Our results demonstrate that Tortoise is able to generate realistic voice clone audios for native English speaking males, which indicates that easily accessible resources can be leveraged to create deceptive social engineering attacks. As such tools become more advanced, defenses such as awareness, detection, and red teaming may not be able to keep up with dangerously equipped adversaries. Full article
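The similarity scoring described above is typically done by comparing speaker embeddings of real and cloned audio; the sketch below uses resemblyzer's VoiceEncoder as an assumed embedding model and cosine similarity as the score, which may differ from the authors' exact procedure:

```python
import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

encoder = VoiceEncoder()
real = encoder.embed_utterance(preprocess_wav("real_sample.wav"))      # placeholder file names
clone = encoder.embed_utterance(preprocess_wav("cloned_sample.wav"))

# Cosine similarity between the two speaker embeddings; closer to 1.0 means more similar voices.
similarity = float(np.dot(real, clone) / (np.linalg.norm(real) * np.linalg.norm(clone)))
print(f"speaker similarity: {similarity:.3f}")
```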
19 pages, 272 KB  
Review
Artificial Intelligence in the Diagnosis of Pediatric Rare Diseases: From Real-World Data Toward a Personalized Medicine Approach
by Nikola Ilić and Adrijan Sarajlija
J. Pers. Med. 2025, 15(9), 407; https://doi.org/10.3390/jpm15090407 - 1 Sep 2025
Abstract
Background: Artificial intelligence (AI) is increasingly applied in the diagnosis of pediatric rare diseases, enhancing the speed, accuracy, and accessibility of genetic interpretation. These advances support the ongoing shift toward personalized medicine in clinical genetics. Objective: This review examines current applications of AI in pediatric rare disease diagnostics, with a particular focus on real-world data integration and implications for individualized care. Methods: A narrative review was conducted covering AI tools for variant prioritization, phenotype–genotype correlations, large language models (LLMs), and ethical considerations. The literature was identified through PubMed, Scopus, and Web of Science up to July 2025, with priority given to studies published in the last seven years. Results: AI platforms provide support for genomic interpretation, particularly within structured diagnostic workflows. Tools integrating Human Phenotype Ontology (HPO)-based inputs and LLMs facilitate phenotype matching and enable reverse phenotyping. The use of real-world data enhances the applicability of AI in complex and heterogeneous clinical scenarios. However, major challenges persist, including data standardization, model interpretability, workflow integration, and algorithmic bias. Conclusions: AI has the potential to advance earlier and more personalized diagnostics for children with rare diseases. Achieving this requires multidisciplinary collaboration and careful attention to clinical, technical, and ethical considerations. Full article
26 pages, 2040 KB  
Article
Enhancing Software Usability Through LLMs: A Prompting and Fine-Tuning Framework for Analyzing Negative User Feedback
by Nahed Alsaleh, Reem Alnanih and Nahed Alowidi
Computers 2025, 14(9), 363; https://doi.org/10.3390/computers14090363 - 1 Sep 2025
Abstract
In today’s competitive digital landscape, application usability plays a critical role in user satisfaction and retention. Negative user reviews offer valuable insights into real-world usability issues, yet traditional analysis methods often fall short in scalability and contextual understanding. This paper proposes an intelligent framework that utilizes large language models (LLMs), including GPT-4, Gemini, and BLOOM, to automate the extraction of actionable usability recommendations from negative app reviews. By applying prompting and fine-tuning techniques, the framework transforms unstructured feedback into meaningful suggestions aligned with three core usability dimensions: correctness, completeness, and satisfaction. A manually annotated dataset of Instagram negative reviews was used to evaluate model performance. Results show that GPT-4 consistently outperformed other models, achieving BLEU scores up to 0.64, ROUGE scores up to 0.80, and METEOR scores up to 0.90—demonstrating high semantic accuracy and contextual relevance in generated recommendations. Gemini and BLOOM, while improved through fine-tuning, showed significantly lower performance. This study also introduces a practical, web-based tool that enables real-time review analysis and recommendation generation, supporting data-driven, user-centered software development. These findings illustrate the potential of LLM-based frameworks to enhance software usability analysis and accelerate feedback-driven design processes. Full article
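The BLEU and ROUGE scores reported above can be computed with standard packages; the sketch below scores a single toy reference/candidate pair rather than the annotated Instagram-review dataset:

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

reference = "add an option to disable autoplay for videos in the feed"
candidate = "provide a setting to turn off video autoplay in the feed"

bleu = sentence_bleu(
    [reference.split()], candidate.split(),
    smoothing_function=SmoothingFunction().method1,
)
rouge = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True).score(reference, candidate)

print(f"BLEU:    {bleu:.3f}")
print(f"ROUGE-1: {rouge['rouge1'].fmeasure:.3f}, ROUGE-L: {rouge['rougeL'].fmeasure:.3f}")
# METEOR can be computed analogously with nltk.translate.meteor_score on tokenized inputs.
```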
17 pages, 634 KB  
Perspective
Challenges of Implementing LLMs in Clinical Practice: Perspectives
by Yaara Artsi, Vera Sorin, Benjamin S. Glicksberg, Panagiotis Korfiatis, Robert Freeman, Girish N. Nadkarni and Eyal Klang
J. Clin. Med. 2025, 14(17), 6169; https://doi.org/10.3390/jcm14176169 - 1 Sep 2025
Abstract
Large language models (LLMs) have the potential to transform healthcare by assisting in documentation, diagnosis, patient communication, and medical education. However, their integration into clinical practice remains a challenge. This perspective explores the barriers to implementation by synthesizing recent evidence across five challenge domains: workflow misalignment and diagnostic safety, bias and equity, regulatory and legal governance, technical vulnerabilities such as hallucinations or data poisoning, and the preservation of patient trust and human connection. While the perspective focuses on barriers, LLM capabilities and mitigation strategies are advancing rapidly, raising the likelihood of near-term clinical impact. Drawing on recent empirical studies, we propose a framework for understanding the key technical, ethical, and practical challenges associated with deploying LLMs in clinical environments and provide directions for future research, governance, and responsible deployment. Full article
(This article belongs to the Section Clinical Research Methods)
16 pages, 4762 KB  
Article
ACR: Adaptive Confidence Re-Scoring for Reliable Answer Selection Among Multiple Candidates
by Eunhye Jeong and Yong Suk Choi
Appl. Sci. 2025, 15(17), 9587; https://doi.org/10.3390/app15179587 - 30 Aug 2025
Viewed by 203
Abstract
With the improved reasoning capabilities of large language models (LLMs), their applications have rapidly expanded across a wide range of tasks. In recent question answering tasks, performance gains have been achieved through Self-Consistency, where LLMs generate multiple reasoning paths and determine the final answer via majority voting. However, this approach can fail when the correct answer is generated but does not appear frequently enough to be selected, highlighting its vulnerability to inconsistent generations. To address this, we propose Adaptive Confidence Re-scoring (ACR)—a method that adaptively evaluates and re-scores candidate answers to select the most trustworthy one when LLMs fail to generate consistent reasoning. Experiments on arithmetic and logical reasoning benchmarks show that ACR maintains or improves answer accuracy while significantly reducing inference cost. Compared to existing verification methods such as FOBAR, ACR reduces the number of inference calls by up to 95%, while improving inference efficiency—measured as accuracy gain per inference call—by a factor of 2× to 17×, depending on the dataset and model. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
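The Self-Consistency baseline that ACR extends is straightforward to sketch: sample several answers and take the plurality vote, with re-scoring reserved for low-agreement cases. The generator below is a stub standing in for an LLM, and the re-scoring hook is only a placeholder, not the ACR method itself:

```python
import random
from collections import Counter
from typing import Callable, List

def self_consistency(sample_answer: Callable[[str], str], question: str, n_samples: int = 10) -> str:
    """Return the most frequent answer over n_samples independent generations."""
    answers: List[str] = [sample_answer(question) for _ in range(n_samples)]
    best, freq = Counter(answers).most_common(1)[0]
    if freq / n_samples < 0.5:
        # Low agreement: this is where an adaptive re-scoring step (as in ACR) would
        # re-rank the candidate answers instead of trusting the plurality vote.
        pass
    return best

# Usage with a stub generator that pretends to be an LLM sampling reasoning paths.
stub = lambda q: random.choice(["42", "42", "41"])
print(self_consistency(stub, "What is 6 * 7?"))
```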
24 pages, 2159 KB  
Article
Agentic RAG-Driven Multi-Omics Analysis for PI3K/AKT Pathway Deregulation in Precision Medicine
by Micheal Olaolu Arowolo, Sulaiman Olaniyi Abdulsalam, Rafiu Mope Isiaka, Kingsley Theophilus Igulu, Bukola Fatimah Balogun, Mihail Popescu and Dong Xu
Algorithms 2025, 18(9), 545; https://doi.org/10.3390/a18090545 - 30 Aug 2025
Viewed by 158
Abstract
The phosphoinositide 3-kinase (PI3K)/AKT signaling pathway is a crucial regulator of cellular metabolism, proliferation, and survival. It is frequently dysregulated in metabolic, cardiovascular, and neoplastic disorders. Despite the advancements in multi-omics technology, existing methods often fail to provide real-time, pathway-specific insights for precision medicine and drug repurposing. We offer Agentic RAG-Driven Multi-Omics Analysis (ARMOA), an autonomous, hypothesis-driven system that integrates retrieval-augmented generation (RAG), large language models (LLMs), and agentic AI to thoroughly analyze genomic, transcriptomic, proteomic, and metabolomic data. Through the use of graph neural networks (GNNs) to model complex interactions within the PI3K/AKT pathway, ARMOA enables the discovery of novel biomarkers, probable candidates for drug repurposing, and customized therapy responses to address the complexities of PI3K/AKT dysregulation in disease states. ARMOA dynamically gathers and synthesizes knowledge from multiple sources, including KEGG, TCGA, and DrugBank, to guarantee context-aware insights. Through adaptive reasoning, it gradually enhances predictions, achieving 91% accuracy in external testing and 92% accuracy in cross-validation. Case studies in breast cancer and type 2 diabetes demonstrate that ARMOA can identify synergistic drug combinations with high clinical relevance and predict therapeutic outcomes specific to each patient. The framework’s interpretability and scalability are greatly enhanced by its use of multi-omics data fusion and real-time hypothesis creation. ARMOA provides a cutting-edge example for precision medicine by integrating multi-omics data, clinical judgment, and AI agents. Its ability to provide valuable insights on its own makes it a powerful tool for advancing biomedical research and treatment development. Full article
(This article belongs to the Special Issue Advanced Algorithms for Biomedical Data Analysis)
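The retrieval-augmented generation component described above can be illustrated with a small embed-retrieve-prompt loop; the corpus snippets, embedding model, and prompt below are assumptions and do not reproduce ARMOA's agents or its KEGG/TCGA/DrugBank sources:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

corpus = [
    "PIK3CA activating mutations are frequent in hormone-receptor-positive breast cancer.",
    "AKT inhibitors have been evaluated in combination with endocrine therapy.",
    "Metformin modulates insulin signaling upstream of PI3K.",
]
query = "Which drugs could be repurposed for PI3K/AKT-driven tumors?"

model = SentenceTransformer("all-MiniLM-L6-v2")       # placeholder embedding model
doc_emb = model.encode(corpus, normalize_embeddings=True)
q_emb = model.encode([query], normalize_embeddings=True)[0]

top = np.argsort(doc_emb @ q_emb)[::-1][:2]            # two most similar snippets
context = "\n".join(corpus[i] for i in top)
prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."
print(prompt)  # this assembled prompt would then be passed to the LLM component
```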
28 pages, 1711 KB  
Article
Identifying Literary Microgenres and Writing Style Differences in Romanian Novels with ReaderBench and Large Language Models
by Aura Cristina Udrea, Stefan Ruseti, Vlad Pojoga, Stefan Baghiu, Andrei Terian and Mihai Dascalu
Future Internet 2025, 17(9), 397; https://doi.org/10.3390/fi17090397 - 30 Aug 2025
Viewed by 97
Abstract
Recent developments in natural language processing, particularly large language models (LLMs), create new opportunities for literary analysis in underexplored languages like Romanian. This study investigates stylistic heterogeneity and genre blending in 175 late 19th- and early 20th-century Romanian novels, each classified by literary historians into one of 17 genres. Our findings reveal that most novels do not adhere to a single genre label but instead combine elements of multiple (micro)genres, challenging traditional single-label classification approaches. We employed a dual computational methodology combining an analysis with Romanian-tailored linguistic features with general-purpose LLMs. ReaderBench, a Romanian-specific framework, was utilized to extract surface, syntactic, semantic, and discourse features, capturing fine-grained linguistic patterns. Alternatively, we prompted two LLMs (Llama3.3 70B and DeepSeek-R1 70B) to predict genres at the paragraph level, leveraging their ability to detect contextual and thematic coherence across multiple narrative scales. Statistical analyses using Kruskal–Wallis and Mann–Whitney tests identified genre-defining features at both novel and chapter levels. The integration of these complementary approaches enhances microgenre detection beyond traditional classification capabilities. ReaderBench provides quantifiable linguistic evidence, while LLMs capture broader contextual patterns; together, they provide a multi-layered perspective on literary genre that reflects the complex and heterogeneous character of fictional texts. Our results argue that both language-specific and general-purpose computational tools can effectively detect stylistic diversity in Romanian fiction, opening new avenues for computational literary analysis in limited-resourced languages. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) and Natural Language Processing (NLP))
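The statistical tests named above (Kruskal–Wallis across genre groups, Mann–Whitney for pairwise comparison) are available in SciPy; the feature values in the sketch below are invented rather than ReaderBench output:

```python
from scipy.stats import kruskal, mannwhitneyu

# Hypothetical per-novel values of one linguistic feature (e.g., average sentence length) by genre.
rural   = [18.2, 17.9, 19.4, 18.8, 20.1]
urban   = [22.5, 21.8, 23.0, 22.1, 24.2]
mystery = [20.3, 19.7, 21.1, 20.8, 19.9]

h_stat, p_all = kruskal(rural, urban, mystery)
u_stat, p_pair = mannwhitneyu(rural, urban, alternative="two-sided")

print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_all:.4f}")
print(f"Mann-Whitney (rural vs. urban): U = {u_stat:.1f}, p = {p_pair:.4f}")
```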
28 pages, 5304 KB  
Review
Reinforcement Learning in Medical Imaging: Taxonomy, LLMs, and Clinical Challenges
by A. B. M. Kamrul Islam Riad, Md. Abdul Barek, Hossain Shahriar, Guillermo Francia and Sheikh Iqbal Ahamed
Future Internet 2025, 17(9), 396; https://doi.org/10.3390/fi17090396 - 30 Aug 2025
Viewed by 122
Abstract
Reinforcement learning (RL) is being used more in medical imaging for segmentation, detection, registration, and classification. This survey provides a comprehensive overview of RL techniques applied in this domain, categorizing the literature based on clinical task, imaging modality, learning paradigm, and algorithmic design. We introduce a unified taxonomy that supports reproducibility, highlights design guidance, and identifies underexplored intersections. Furthermore, we examine the integration of Large Language Models (LLMs) for automation and interpretability, and discuss privacy-preserving extensions using Differential Privacy (DP) and Federated Learning (FL). Finally, we address deployment challenges and outline future research directions toward trustworthy and scalable medical RL systems. Full article
45 pages, 5338 KB  
Article
AccessiLearnAI: An Accessibility-First, AI-Powered E-Learning Platform for Inclusive Education
by George Alex Stelea, Dan Robu and Florin Sandu
Educ. Sci. 2025, 15(9), 1125; https://doi.org/10.3390/educsci15091125 - 29 Aug 2025
Viewed by 115
Abstract
Online education has become an important channel for extensive, inclusive and flexible learning experiences. However, significant gaps persist in providing truly accessible, personalized and adaptable e-learning environments, especially for students with disabilities, varied language backgrounds, or limited bandwidth. This paper presents AccessiLearnAI, an AI-driven platform, which converges accessibility-first design, multi-format content delivery, advanced personalization, and Progressive Web App (PWA) offline capabilities. Our solution is compliant with semantic HTML5 and ARIA standards, and incorporates features such as automatic alt-text generation for images using Large Language Models (LLMs), real-time functionality for summarization, translation, and text-to-speech capabilities. The platform, built on top of a modular MVC and microservices-based architecture, also integrates robust security, GDPR-aligned data protection, and a human-in-the-loop to ensure the accuracy and reliability of AI-generated outputs. Early evaluations indicate that AccessiLearnAI improves engagement and learning outcomes across multiple ranges of users, suggesting that responsible AI and universal design can successfully coexist to bring equity through digital education. Full article