Search Results (9,889)

Search Parameters:
Keywords = AI model

23 pages, 1045 KB  
Article
Drivers of AI–Sustainability: The Roles of Financial Wealth, Human Capital, and Renewable Energy
by Guangpeng Chen and Anthony David
Sustainability 2025, 17(21), 9920; https://doi.org/10.3390/su17219920 - 6 Nov 2025
Abstract
Artificial Intelligence (AI) is increasingly central to sustainable development, yet its advancement varies across G7 economies. This study employs Method of Moments Quantile Regression (MMQR) to examine how Financial Technology (FinTech), Economic Growth (EG), Human Capital (HC), and Renewable Energy Consumption (RENC) influence AI development in G7 countries from 2000 to 2022. By analyzing heterogeneous effects across quantiles, the study captures stage-specific drivers often overlooked in average-based models. Results indicate that FinTech and human capital significantly promote AI adoption in lower and middle quantiles, enhancing digital inclusion and innovation capacity, while RENC becomes relevant primarily at advanced stages of AI adoption. Economic growth exhibits negative or inconsistent effects, suggesting that GDP expansion alone is insufficient for technological transformation without alignment to supportive policies and institutional contexts. The lack of long-run cointegration further highlights the dominance of short- and medium-term dynamics in shaping the AI–sustainability nexus. These findings provide actionable insights for policymakers, emphasizing targeted FinTech development, skill-building initiatives, and renewable-powered AI solutions to foster sustainable and inclusive AI adoption. Overall, the study demonstrates how financial, human, and environmental factors jointly drive AI development, offering a mechanism-based perspective on technology-driven sustainable development in advanced economies. Full article
23 pages, 2298 KB  
Article
Balancing Forecast Accuracy and Emissions for Hourly Wind Power at Dumat Al-Jandal: Sustainable AI for Zero-Carbon Transitions
by Haytham Elmousalami, Felix Kin Peng Hui and Aljawharah A. Alnaser
Sustainability 2025, 17(21), 9908; https://doi.org/10.3390/su17219908 - 6 Nov 2025
Abstract
This paper develops a Sustainable Artificial Intelligence-Driven Wind Power Forecasting System (SAI-WPFS) to enhance the integration of renewable energy while minimizing the environmental footprint of deep learning computations. Although deep learning models such as CNN, LSTM, and GRU have achieved high accuracy in wind power forecasting, existing research rarely considers the computational energy cost and associated carbon emissions, creating a gap between predictive performance and sustainability objectives. Moreover, limited studies have addressed the need for a balanced framework that jointly evaluates forecast precision and eco-efficiency in the context of large-scale renewable deployment. Using real-time data from the Dumat Al-Jandal Wind Farm, Saudi Arabia’s first utility-scale wind project, this study evaluates multiple deep learning architectures, including CNN-LSTM-AM and GRU, under a dual assessment framework combining accuracy metrics (MAE, RMSE, R2) and carbon efficiency indicators (CO2 emissions per computational hour). Results show that the CNN-LSTM-AM model achieves the highest forecasting accuracy (MAE = 29.37, RMSE = 144.99, R2 = 0.74), while the GRU model offers the best trade-off between performance and emissions (320 g CO2/h). These findings demonstrate the feasibility of integrating sustainable AI into wind energy forecasting, aligning technical innovation with Saudi Vision 2030 goals for zero-carbon cities and carbon-efficient energy systems. Full article
(This article belongs to the Special Issue Sustainable Energy Systems and Applications)
15 pages, 3358 KB  
Article
Using Two X-Ray Images to Create a Parameterized Scoliotic Spine Model and Analyze Disk Stress Adjacent to Spinal Fixation—A Finite Element Analysis
by Te-Han Wang, Po-Hsing Chou and Chen-Sheng Chen
Bioengineering 2025, 12(11), 1212; https://doi.org/10.3390/bioengineering12111212 - 6 Nov 2025
Abstract
Posterior instrumentation is used to treat severe adolescent idiopathic scoliosis (AIS) with a Cobb angle greater than 40 degrees. Clinical studies indicate that AIS patients may develop adjacent segment degeneration (ASD) post-surgery. However, there is limited research on the biomechanical effects on adjacent segments after surgery, and straightforward methods for creating finite element (FE) models that reflect vertebral deformation are lacking. Therefore, this study aims to use biplanar X-ray images to establish a case-specific, parameterized FE model reflecting coronal plane vertebral deformation and employ FE analysis to compare pre- and postoperative changes in the range of motion (ROM), endplate stress, and intervertebral disk stress of adjacent segments. We developed an FE model from biplanar X-ray images of a patient with AIS, using ANSYS software to establish pre- and postoperative models. The shape of the preoperative model was validated using computed tomography (CT) reconstruction. A flexion moment was applied to C7 of the spine model to achieve the same forward bending angle in the pre- and postoperative models. This study successfully developed a case-specific parameterized FE model based on X-ray images. The differences between Cobb angle and thoracolumbar kyphosis angle measurements in X-ray images and CT reconstructions were 6.5 and 5.4 mm. This FE model was used to analyze biomechanical effects on motion segments adjacent to the fixation site, revealing a decrease in maximum endplate and disk stress in the cranial segment and an increase in stress in the caudal segment. Full article
(This article belongs to the Special Issue Spine Biomechanics)
37 pages, 8157 KB  
Review
Toward Reliable Interfacial Bond Characterization Between Polymeric Cementitious Composites (PCCs) and Concrete: Testing Standards, Methodologies, and Advanced NDT–AI Hybrid Approaches
by Dongchan Kim and Min Ook Kim
Buildings 2025, 15(21), 4008; https://doi.org/10.3390/buildings15214008 - 6 Nov 2025
Abstract
The evaluation of interfacial bonds between polymeric cementitious composites (PCCs) and concrete is considered a critical factor in determining structural safety, durability, and service life in the repair and strengthening of old concrete structures. Conventional evaluations of interfacial bond strength have primarily relied on destructive testing methods, such as the pull-off and slant shear tests. However, these methods inherently possess fundamental limitations, including localized damage, non-uniform stress distribution, and uncertainty in result interpretation. This review aims to provide a comprehensive overview of existing standards and methods for assessing interfacial bond strength. For this purpose, the evaluation methods and results for the interfacial bond strength between cementitious composites such as PCCs and concrete were systematically reviewed. It further examines the characteristics and sources of error of the representative destructive method (the pull-off test), highlighting its inherent limitations. Furthermore, this study conducted an in-depth analysis of a hybrid evaluation strategy combining non-destructive testing (NDT) and artificial intelligence (AI) to overcome the limitations of conventional interfacial bond strength assessment methods and minimize prediction errors. The results demonstrated that the NDT–AI hybrid approach, based on an ANN–BFGS model, achieved the highest accuracy in bond strength prediction and was identified as the optimal method for quantitatively and non-destructively evaluating interfacial bond behavior. Full article
(This article belongs to the Section Building Materials, and Repair & Renovation)
12 pages, 1559 KB  
Article
TCEPVDB: Artificial Intelligence-Based Proteome-Wide Screening of Antigens and Linear T-Cell Epitopes in the Poxviruses and the Development of a Repository
by Mansi Dutt, Anuj Kumar, Ali Toloue Ostadgavahi, David J. Kelvin and Gustavo Sganzerla Martinez
Proteomes 2025, 13(4), 58; https://doi.org/10.3390/proteomes13040058 - 6 Nov 2025
Abstract
Background: Poxviruses constitute a family of large dsDNA viruses that can infect a plethora of species, including humans. Historically, poxviruses have caused a health burden in multiple outbreaks. The large genome of poxviruses favors reverse vaccinology approaches that can determine potential antigens and epitopes. Here, we propose the modeling of a user-friendly database containing the predicted antigens and epitopes of a large cohort of poxvirus proteomes using the existing PoxiPred method for reverse vaccinology of poxviruses. Methods: In the present study, we obtained the whole proteomes of as many as 37 distinct poxviruses. We utilized each proteome to predict both antigenic proteins and T-cell epitopes of poxviruses with the aid of an Artificial Intelligence method, namely the PoxiPred method. Results: In total, we predicted 3966 proteins as potential antigen targets. Of note, we considered that each of these proteins may exist as a set of proteoforms. Subsets of these proteins constituted a comprehensive repository of 54,291 linear T-cell epitopes. We combined the outcome of the predictions in the format of a web tool that delivers a database of antigens and epitopes of poxviruses. We also developed a comprehensive repository dedicated to providing end-users with access to AI-based screened antigens and T-cell epitopes of poxviruses in a user-friendly manner. These antigens and epitopes can be utilized to design experiments for the development of effective vaccines against a plethora of poxviruses. Conclusions: The TCEPVDB repository, already deployed to the web under an open-source coding philosophy, is free to use, requires no login, and stores no information from its users. Full article
19 pages, 374 KB  
Article
Large Language Models to Support Socially Responsible Solar Energy Siting in Utah
by Uliana Moshina, Izabelle P. Chick, Juliet E. Carlisle and Daniel P. Ames
Solar 2025, 5(4), 52; https://doi.org/10.3390/solar5040052 - 6 Nov 2025
Abstract
This study investigates the efficacy of large language models (LLMs) in supporting responsible and optimized geographic site selection for large-scale solar energy farms. Using Microsoft Bing (predecessor to Copilot), Google Bard (predecessor to Gemini), and ChatGPT, we evaluated their capability to address complex technical and social considerations fundamental to solar farm development. Employing a series of guided queries, we explored the LLMs’ “understanding” of social impact, geographic suitability, and other critical factors. We tested varied prompts, incorporating context from existing research, to assess the models’ ability to use external knowledge sources. Our findings demonstrate that LLMs, when meticulously guided through increasingly detailed and contextualized inquiries, can yield valuable insights. We discovered that (1) structured questioning is key; (2) characterization outperforms suggestion; and (3) harnessing expert knowledge requires specific effort. However, limitations remain. We encountered dead ends due to prompt restrictions and limited access to research for some models. Additionally, none could independently suggest the “best” site. Overall, this study reveals the potential of LLMs for geographic solar farm site selection, and our results can inform future adaptation of geospatial AI queries for similarly complex geographic problems. Full article
32 pages, 1709 KB  
Review
The Role of Artificial Intelligence in Bathing Water Quality Assessment: Trends, Challenges, and Opportunities
by M Usman Saeed Khan, Ashenafi Yohannes Battamo, Rajendran Ravindar and M Salauddin
Water 2025, 17(21), 3176; https://doi.org/10.3390/w17213176 - 6 Nov 2025
Abstract
Bathing water quality (BWQ) monitoring and prediction are essential to safeguard public health by informing bathers about the risk of exposure to faecal indicator bacteria (FIBs). Traditional monitoring approaches, such as manual sampling and laboratory analysis, while effective, are often constrained by delayed reporting, limited spatial and temporal coverage, and high operational costs. The integration of artificial intelligence (AI), particularly machine learning (ML), with automated data sources such as environmental sensors and satellite imagery has offered novel predictive and real-time monitoring opportunities in BWQ assessment. This systematic literature review synthesises current research on the application of AI in BWQ assessment, focusing on predictive modelling techniques and remote sensing approaches. Following the PRISMA methodology, 63 relevant studies are reviewed. The review identifies dominant modelling techniques such as Artificial Neural Networks (ANN), Deep Learning (DL), Decision Tree (DT), Random Forest (RF), Multiple Linear Regression (MLR), Support Vector Machine (SVM), and Hybrid and Ensemble Boosting algorithms. The integration of AI with remote sensing platforms such as Google Earth Engine (GEE) has improved the spatial and temporal resolution of BWQ monitoring systems. The performance of modelling approaches varied depending on data availability, model flexibility, and integration with alternative data sources such as remote sensing. Notable research gaps include short-term faecal pollution prediction, incomplete datasets for key environmental variables, data scarcity, and the interpretability of complex AI models. Emerging trends point towards the potential of near-real-time modelling, Internet of Things (IoT) integration, standardised data protocols, global data sharing, the development of explainable AI models, and the integration of remote sensing and cloud-based systems. Future research should prioritise these areas while promoting the integration of AI-driven BWQ systems into public health monitoring and environmental management through multidisciplinary collaboration. Full article
12 pages, 860 KB  
Review
From Data to Decisions: Harnessing Multi-Agent Systems for Safer, Smarter, and More Personalized Perioperative Care
by Jamie Kim, Briana Lui, Peter A. Goldstein, John E. Rubin, Robert S. White and Rohan Jotwani
J. Pers. Med. 2025, 15(11), 540; https://doi.org/10.3390/jpm15110540 - 6 Nov 2025
Abstract
Background/Objectives: Artificial intelligence (AI) is increasingly applied across the perioperative continuum, with potential benefits in efficiency, personalization, and patient safety. Unfortunately, most such tools are developed in isolation, limiting their clinical utility. Multi-Agent Systems for Healthcare (MASH), in which autonomous AI agents coordinate tasks across multiple domains, may provide the necessary framework for integrated perioperative care. This critical review synthesizes current AI applications in anesthesiology and considers their integration within a MASH architecture. This is the first review to advance MASH as a conceptual and practical framework for anesthesiology, uniquely contributing to the AI discourse by proposing its potential to unify isolated innovations into adaptive and collaborative systems. Methods: A critical review was conducted using PubMed and Google Search to identify peer-reviewed studies published between 2015 and 2025. The search strategy combined controlled vocabulary and free-text terms for AI, anesthesiology, perioperative care, critical care, and pain management. Results were filtered for randomized controlled trials and clinical trials. Data were extracted and organized by perioperative phase. Results: The 16 studies (6 from database search, 10 from prior work) included in this review demonstrated AI applications across the perioperative timeline. Preoperatively, predictive models such as POTTER improved surgical risk stratification. Intraoperative trials evaluated systems like SmartPilot and Navigator, enhancing anesthetic dosing and physiologic stability. In critical care, algorithms including NAVOY Sepsis and VentAI supported early detection of sepsis and optimized ventilatory management. In pain medicine, AI assisted with opioid risk assessment and individualized pain-control regimens. While these trials demonstrated clinical utility, most applications remain domain-specific and unconnected from one another. 
Conclusions: AI has broad potential to improve perioperative care, but its impact depends on coordinated deployment. MASH offers a unifying framework to integrate diverse agents into adaptive networks, enabling more personalized anesthetic care that is safer and more efficient. Full article
(This article belongs to the Special Issue AI and Precision Medicine: Innovations and Applications)
15 pages, 1506 KB  
Review
Computational Chemistry Advances in the Development of PARP1 Inhibitors for Breast Cancer Therapy
by Charmy Twala, Penny Govender and Krishna Govender
Pharmaceuticals 2025, 18(11), 1679; https://doi.org/10.3390/ph18111679 - 6 Nov 2025
Abstract
Poly (ADP-ribose) polymerase 1 (PARP1) is an important enzyme that plays a central role in the DNA damage response, facilitating repair of single-stranded DNA breaks via the base excision repair (BER) pathway and thus preserving genomic integrity. Its therapeutic relevance is heightened in breast cancer, particularly in BRCA1- or BRCA2-mutant cancers, where compromised homologous recombination repair (HRR) creates a synthetic lethal dependency on PARP1-mediated repair. This review comprehensively discusses recent advances in computational chemistry for the discovery of PARP1 inhibitors, focusing on their application in breast cancer therapy. Techniques such as molecular docking, molecular dynamics (MD) simulations, quantitative structure–activity relationship (QSAR) modeling, density functional theory (DFT), time-dependent DFT (TD-DFT), and machine learning (ML)-aided virtual screening have revolutionized the discovery of inhibitors. Some of the most prominent examples are Olaparib (IC50 = 5 nM), Rucaparib (IC50 = 7 nM), and Talazoparib (IC50 = 1 nM), which were optimized with docking scores between −9.0 and −9.3 kcal/mol and validated by in vitro and in vivo assays, achieving 60–80% inhibition of tumor growth in BRCA-mutated models and up to a 21-month improvement in progression-free survival in clinical trials of BRCA-mutated breast and ovarian cancer patients. These strategies enable site-specific hopping into the PARP1 nicotinamide-binding pocket to enhance inhibitor affinity and specificity and reduce off-target activity. Employing computation and experimental verification in a hybrid strategy has brought next-generation inhibitors to the clinic with accelerated development, higher efficacy, and personalized treatment for breast cancer patients. Future approaches, including AI-aided generative models and multi-omics integration, promise to further refine inhibitor design, paving the way for precision oncology. Full article
16 pages, 960 KB  
Article
Validation of a Dermatology-Focused Multimodal Large Language Model in Classification of Pigmented Skin Lesions
by Joshua Mijares, Neil Jairath, Andrew Zhang and Syril Keena T. Que
Diagnostics 2025, 15(21), 2808; https://doi.org/10.3390/diagnostics15212808 - 6 Nov 2025
Abstract
Background: Artificial intelligence (AI) has shown significant promise in augmenting diagnostic capabilities across medical specialties. Recent advancements in generative AI allow for synthesis and interpretation of complex clinical data including imaging and patient history to assess disease risk. Objective: To evaluate the diagnostic performance of a dermatology-trained multimodal large language model (DermFlow, Delaware, USA) in assessing malignancy risk of pigmented skin lesions. Methods: This retrospective study utilized data from 59 patients with 68 biopsy-proven pigmented skin lesions seen at Indiana University clinics from February 2023 to May 2025. De-identified patient histories and clinical images were input into DermFlow, and clinical images only were input into Claude Sonnet 4 (Claude) to generate differential diagnoses. Clinician pre-operative diagnoses were extracted from the clinical note. Assessments were compared to histopathologic diagnoses (gold standard). Results: Among 68 clinically concerning pigmented lesions, DermFlow achieved 47.1% top diagnosis accuracy and 92.6% any-diagnosis accuracy, with F1 = 0.948, sensitivity 93.9%, and specificity 89.5% (balanced accuracy 91.7%). Claude had 8.8% top diagnosis and 73.5% any-diagnosis accuracy, F1 = 0.816, sensitivity 81.6%, specificity 52.6% (balanced accuracy 67.1%). Clinicians achieved 38.2% top diagnosis and 72.1% any-diagnosis accuracy, F1 = 0.776, sensitivity 67.3%, specificity 84.2% (balanced accuracy 75.8%). DermFlow recommended biopsy in 95.6% of cases vs. 82.4% for Claude, with multiple pairwise differences favoring DermFlow (p < 0.05). Conclusions: DermFlow demonstrated comparable or superior diagnostic performance to clinicians and superior performance to Claude in evaluating pigmented skin lesions. 
Although additional data must be gathered to further validate the model in real clinical settings, these initial findings suggest potential utility for dermatology-trained AI models in clinical practice, particularly in settings with limited dermatologist availability. Full article
(This article belongs to the Special Issue AI in Dermatology)
52 pages, 1636 KB  
Article
Strategic Complexity and Behavioral Distortion: Retail Investing Under Large Language Model Augmentation
by Dmitrii Gimmelberg and Iveta Ludviga
Int. J. Financial Stud. 2025, 13(4), 210; https://doi.org/10.3390/ijfs13040210 - 6 Nov 2025
Abstract
This conceptual article introduces Perceived Cognitive Assistance (PCA)—a novel psychological construct capturing how interactive support from Large Language Models (LLMs) alters investors’ perception of their cognitive capacity to execute complex trading strategies. PCA formalizes a behavioral shift: LLM-empowered retail investors may transition from intuitive heuristics to institutional-grade strategies—sometimes without adequate comprehension. This empowerment–distortion duality forms the theoretical contribution’s core. To empirically validate this model, this article outlines a five-step research agenda including psychological diagnostics, trading behavior analysis, market efficiency tests, and a Behavioral Shift Index (BSI). One agenda component—a dual-agent simulation framework—enables causal benchmarking in post-LLM environments. This simulation includes two contributions: (1) the Virtual Trader, a cognitively degraded benchmark approximating bounded human reasoning, and (2) the Digital Persona, a psychologically emulated agent grounded in behaviorally plausible logic. These components offer methods for isolating LLM assistance’s cognitive uplift and evaluating behavioral implications under controlled conditions. This article contributes by specifying a testable link from established decision frameworks (Theory of Planned Behavior, Technology Acceptance Model, and Risk-as-Feelings) to two estimators: a moderated regression for individual decisions (Equation (1)) and a composite Behavioral Shift Index derived from trading logs (Equation (2)). We state directional, falsifiable predictions for the regression coefficients and for index dynamics, and we outline an identification and robustness plan—versioned, time-locked, and auditable—to be executed in the subsequent empirical phase. The result is a clear operational pathway from theory to measurement and testing, prior to empirical implementation. 
No empirical results are reported here; the contribution is the operational, falsifiable architecture and its implementation plan, to be executed in a separate preregistered study. Full article
(This article belongs to the Special Issue Advances in Behavioural Finance and Economics 2nd Edition)
24 pages, 332 KB  
Article
Advancing Translational Science Through AI-Enhanced Teacher Learning for Early Language and Composing
by JeanMarie Farrow, Michael James Farrow and Chenyi Zhang
Educ. Sci. 2025, 15(11), 1496; https://doi.org/10.3390/educsci15111496 - 5 Nov 2025
Abstract
Composing in early childhood classrooms offers a critical opportunity to strengthen children’s language skills, yet many teachers feel underprepared to provide this instruction. This study examines whether an AI-enhanced digital platform (L4C) can serve as a sustainable, community-based professional development model that bridges theory and practice. Twenty-nine teachers in the southeastern United States engaged with L4C, a professional learning model designed to integrate principles from the Science of Literacy, Learning, and Instruction into a cohesive platform that links teachers’ content and pedagogical knowledge-building with lesson planning and reflective practice. Data sources included surveys, pre- and post-lesson plans, and AI usage logs from the lesson planning tool. Findings showed that teachers initially reported significant barriers to composing instruction and sought professional learning responsive to their classroom needs. After using L4C, teachers demonstrated notable growth in their knowledge of language components and the quality of their composing lesson designs. Teachers evaluated the platform positively, particularly valuing the linked videos and scripted lesson tools for making theoretical concepts actionable. These findings suggest that AI-driven platforms like L4C can advance teacher learning in practical, individualized, and contextually relevant ways, offering a promising pathway for professional development in early literacy instruction. Full article
22 pages, 1071 KB  
Article
Development and Validation of a Questionnaire to Evaluate AI-Generated Summaries for Radiologists: ELEGANCE (Expert-Led Evaluation of Generative AI Competence and ExcelleNCE)
by Yuriy A. Vasilev, Anton V. Vladzymyrskyy, Olga V. Omelyanskaya, Yulya A. Alymova, Dina A. Akhmedzyanova, Yuliya F. Shumskaya, Maria R. Kodenko, Ivan A. Blokhin and Roman V. Reshetnikov
AI 2025, 6(11), 287; https://doi.org/10.3390/ai6110287 - 5 Nov 2025
Abstract
Background/Objectives: Large language models (LLMs) are increasingly considered for use in radiology, including the summarization of patient medical records to support radiologists in processing large volumes of data under time constraints. This task requires not only accuracy and completeness but also clinical applicability. Automatic metrics and general-purpose questionnaires fail to capture these dimensions, and no standardized tool currently exists for the expert evaluation of LLM-generated summaries in radiology. Here, we aimed to develop and validate such a tool. Methods: Items for the questionnaire were formulated and refined through focus group testing with radiologists. Validation was performed on 132 LLM-generated summaries of 44 patient records, each independently assessed by radiologists. Criterion validity was evaluated through known-group differentiation and construct validity through confirmatory factor analysis. Results: The resulting seven-item instrument, ELEGANCE (Expert-Led Evaluation of Generative AI Competence and Excellence), demonstrated excellent internal consistency (Cronbach’s α = 0.95). It encompasses seven dimensions: relevance, completeness, applicability, falsification, satisfaction, structure, and correctness of language and terminology. Confirmatory factor analysis supported a two-factor structure (content and form), with strong fit indices (RMSEA = 0.079, CFI = 0.989, TLI = 0.982, SRMR = 0.029). Criterion validity was confirmed by significant between-group differences (p < 0.001). Conclusions: ELEGANCE is the first validated tool for expert evaluation of LLM-generated medical record summaries for radiologists, providing a standardized framework to ensure quality and clinical utility.
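As a rough illustration of the internal-consistency statistic reported above, Cronbach’s α can be computed directly from an (evaluators × items) matrix of ratings. The sketch below is a minimal NumPy version; the rating data and the 5-point scale are hypothetical — only the seven-item structure comes from the abstract.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_raters, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 5-point ratings from 6 radiologists on 7 questionnaire items
scores = np.array([
    [5, 4, 5, 5, 4, 5, 5],
    [4, 4, 4, 5, 4, 4, 4],
    [3, 3, 2, 3, 3, 3, 3],
    [5, 5, 5, 4, 5, 5, 5],
    [2, 2, 3, 2, 2, 2, 3],
    [4, 5, 4, 4, 4, 5, 4],
])
print(round(cronbach_alpha(scores), 3))  # ≈ 0.972 for this toy matrix
```

Values near 1 indicate that the items move together across raters, which is what a figure like α = 0.95 reports for the actual instrument.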
20 pages, 1812 KB  
Article
Open-Data-Driven Unity Digital Twin Pipeline: Automatic Terrain and Building Generation with Unity-Native Evaluation
by Donghyun Woo, Hyunbin Choi, Ruben D. Espejo Jr., Joongrock Kim and Sunjin Yu
Appl. Sci. 2025, 15(21), 11801; https://doi.org/10.3390/app152111801 - 5 Nov 2025
Abstract
The creation of simulation-ready digital twins for real-world simulations is hindered by two key challenges: the lack of widely consistent, application-ready open access terrain data and the inadequacy of conventional evaluation metrics to predict practical, in-engine performance. This paper addresses these challenges by presenting an end-to-end, open-data pipeline that generates simulation-ready terrain and procedural 3D objects for the Unity engine. A central finding of this work is that the architecturally advanced Swin2SR transformer exhibits severe statistical instability when applied to Digital Elevation Model (DEM) data. We analyze this instability and introduce a lightweight, computationally efficient stabilization technique adapted from climate science—quantile mapping (qmap)—as a diagnostic remedy which restores the model’s physical plausibility without retraining. To overcome the limitations of pixel-based metrics, we validate our pipeline using a three-axis evaluation framework that integrates data-level self-consistency with application-centric usability metrics measured directly within Unity. Experimental results demonstrate that qmap stabilization dramatically reduces Swin2SR’s large error (a 45% reduction in macro RMSE from 47.4 m to 26.1 m). The complete pipeline, using a robust SwinIR model, delivers excellent in-engine performance, achieving a median object grounding error of 0.30 m and real-time frame rates (≈100 FPS). This study provides a reproducible workflow and underscores a crucial insight for applying AI in scientific domains: domain-specific stabilization and application-centric evaluation are indispensable for the reliable deployment of large-scale vision models.
(This article belongs to the Special Issue Augmented and Virtual Reality for Smart Applications)
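The quantile-mapping stabilization described above amounts to an empirical transfer function that remaps the model’s output distribution onto a reference distribution. The sketch below is a minimal NumPy version under stated assumptions: the `quantile_map` helper, the synthetic terrain heights, and the bias/spread distortion are all illustrative, not the authors’ implementation.

```python
import numpy as np

def quantile_map(pred: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Empirical quantile mapping: remap `pred` so its distribution
    matches the reference distribution `ref`."""
    q = np.linspace(0.0, 1.0, 101)       # quantile levels 0%..100%
    pred_q = np.quantile(pred, q)        # transfer-function knots (input side)
    ref_q = np.quantile(ref, q)          # transfer-function knots (output side)
    # Each predicted value is located among the predicted quantiles and
    # replaced by the reference value at the same quantile level.
    return np.interp(pred, pred_q, ref_q)

# Hypothetical example: super-resolved DEM heights with spurious bias and spread
rng = np.random.default_rng(0)
ref = rng.normal(500.0, 30.0, 5000)                  # plausible heights (m)
pred = ref * 1.4 - 180.0 + rng.normal(0.0, 5.0, 5000)  # unstable model output
fixed = quantile_map(pred, ref)
print(round(fixed.mean(), 1), round(fixed.std(), 1))
```

Because `np.interp` clamps values beyond the outermost knots, extreme outliers are pulled back into the reference range — the "restores physical plausibility without retraining" behavior the abstract describes, here in toy form.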
19 pages, 2457 KB  
Article
A Logic Tensor Network-Based Neurosymbolic Framework for Explainable Diabetes Prediction
by Semanto Mondal, Antonino Ferraro, Fabiano Pecorelli and Giuseppe De Pietro
Appl. Sci. 2025, 15(21), 11806; https://doi.org/10.3390/app152111806 - 5 Nov 2025
Abstract
Neurosymbolic AI is an emerging paradigm that combines neural network learning capabilities with the structured reasoning capacity of symbolic systems. Although machine learning has achieved cutting-edge outcomes in diverse fields, including healthcare, agriculture, and environmental science, it has potential limitations. Machine learning and neural models excel at identifying intricate data patterns, yet they often lack transparency, depend on large labelled datasets, and face challenges with logical reasoning and tasks that require explainability. These challenges reduce their reliability in high-stakes applications such as healthcare. To address these limitations, we propose a hybrid framework that integrates symbolic knowledge expressed in First-Order Logic into neural learning via a Logic Tensor Network (LTN). In this framework, expert-defined medical rules are embedded as logical axioms with learnable thresholds. As a result, the model gains predictive power, interpretability, and explainability through reasoning over the logical rules. We have utilized this neurosymbolic method for predicting diabetes by employing the Pima Indians Diabetes Dataset. Our experimental setup evaluates the LTN-based model against several conventional methods, including Support Vector Machines (SVM), Logistic Regression (LR), K-Nearest Neighbors (K-NN), Random Forest Classifiers (RF), Naive Bayes (NB), and a Standalone Neural Network (NN). The findings demonstrate that the neurosymbolic framework not only surpasses traditional models in predictive accuracy but also offers improved explainability and robustness. Notably, the LTN-based neurosymbolic framework achieves an excellent balance between recall and precision, along with a higher AUC-ROC score. These results underscore its potential for trustworthy medical diagnostics. This work highlights how integrating symbolic reasoning with data-driven models can bridge the gap between explainability, interpretability, and performance, offering a promising direction for AI systems in domains where both accuracy and explainability are critical.
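The core idea above — an expert rule embedded as a logical axiom with a learnable threshold — can be sketched with ordinary fuzzy-logic operators of the kind LTNs build on. Everything in the sketch below is a hypothetical illustration (the `high` predicate, the glucose threshold, the Reichenbach implication, and the sample values), not the paper’s actual model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Fuzzy predicate High(glucose): truth degree rises smoothly past a
# threshold; in an LTN the threshold would be a learnable parameter.
def high(x, threshold, steepness=0.1):
    return sigmoid(steepness * (x - threshold))

# Reichenbach fuzzy implication: A -> B  =  1 - a + a*b
def implies(a, b):
    return 1.0 - a + a * b

# Axiom "forall patients: High(glucose) -> Diabetic(patient)",
# aggregated over the batch with the mean (one common choice of aggregator).
glucose = np.array([90.0, 150.0, 200.0, 110.0])      # hypothetical readings
p_diabetic = np.array([0.1, 0.7, 0.9, 0.2])          # model's predicted degrees
sat = implies(high(glucose, threshold=140.0), p_diabetic).mean()
print(round(sat, 3))  # ≈ 0.909: degree to which the axiom holds on this batch
```

During training, a term like `1 - sat` would be added to the loss, so the network is pushed toward predictions that satisfy the expert rule while still fitting the data.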