Search Results (1,179)

Search Parameters:
Keywords = neural language models

21 pages, 2365 KB  
Systematic Review
Artificial Intelligence in Endodontic Education: A Systematic Review with Frequentist and Bayesian Meta-Analysis of Student-Based Evidence
by Carlos M. Ardila, Eliana Pineda-Vélez and Anny M. Vivares-Builes
Dent. J. 2025, 13(11), 489; https://doi.org/10.3390/dj13110489 - 23 Oct 2025
Abstract
Background/Objectives: Artificial intelligence (AI) is entering dental curricula, yet its educational value in endodontics remains unclear. This review synthesized student-based evidence on AI in endodontics, primarily comparing AI vs. students on diagnostic tasks as an educational endpoint and secondarily considering assessment tasks relevant to training. Methods: PubMed/MEDLINE, Embase, Scopus, and Web of Science were searched in July 2025. Eligible studies involved dental students using AI in endodontic tasks or applied AI to student-generated outputs. For diagnostic comparisons we performed random-effects meta-analysis and a complementary Bayesian random-effects model with weakly informative priors. Risk of bias used QUADAS-2; certainty used GRADE. Results: Five studies met inclusion. Two provided complete mean–SD data for the primary meta-analysis and one contributed to a sensitivity model after SD imputation; two were summarized narratively (AUC/F1 only). Pooled effects favored AI: Hedges g = 1.48 (95% CI 0.60–2.36; I² ≈ 84%); sensitivity (k = 3) g = 1.45 (95% CI 0.77–2.14; I² ≈ 77%). Across the two LLM studies with analyzable means/SDs, the pooled mean difference in accuracy was approximately +20 percentage points (AI − students). Bayesian analyses yielded posterior means near 1.5 with 95% credible intervals excluding 0 and P(μ > 0) ≈ 1.00. Educational outcomes were sparsely and inconsistently reported. Conclusions: Student-based evidence indicates that AI likely outperforms dental students on endodontic diagnostic tasks, supporting its use as an adjunct for formative tutoring, objective feedback, and more consistent assessment. Full article
(This article belongs to the Special Issue Dental Education: Innovation and Challenge)
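The pooled effect above is reported as Hedges g, a small-sample-corrected standardized mean difference. As a rough illustration of the quantity behind those numbers (a sketch, not the authors' code; the means, SDs, and group sizes below are invented), a single AI-vs-students comparison converts to Hedges g like so:

```python
import math

def hedges_g(mean_ai, mean_students, sd_ai, sd_students, n_ai, n_students):
    """Small-sample-corrected standardized mean difference (Hedges g)."""
    df = n_ai + n_students - 2
    # Pooled standard deviation across the two groups
    sd_pooled = math.sqrt(((n_ai - 1) * sd_ai**2
                           + (n_students - 1) * sd_students**2) / df)
    d = (mean_ai - mean_students) / sd_pooled  # Cohen's d
    j = 1 - 3 / (4 * df - 1)                   # Hedges' correction factor
    return j * d

# Hypothetical accuracy scores: AI 80% vs. students 60%, SD 10, n = 30 per group
print(round(hedges_g(80, 60, 10, 10, 30, 30), 3))
```

Pooling several such g values under a random-effects model (as in the review) additionally weights each study by the inverse of its within- and between-study variance; that step is omitted here.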

22 pages, 5639 KB  
Article
A Globally Optimal Alternative to MLP
by Zheng Li, Jerry Cheng and Huanying Helen Gu
Information 2025, 16(10), 921; https://doi.org/10.3390/info16100921 - 21 Oct 2025
Abstract
In deep learning, achieving the global minimum poses a significant challenge, even for relatively simple architectures such as Multi-Layer Perceptrons (MLPs). To address this challenge, we visualized model states at both local and global optima, thereby identifying the factors that impede the transition of models from local to global minima when employing conventional model training methodologies. Based on these insights, we propose the Lagrange Regressor (LReg), a framework that is mathematically equivalent to MLPs. Rather than updating parameters via optimization techniques, LReg employs a Mesh-Refinement–Coarsening (discrete) process to ensure the convergence of the model’s loss function to the global minimum. LReg achieves faster convergence and overcomes the inherent limitations of neural networks in fitting multi-frequency functions. Experiments conducted on large-scale benchmarks including ImageNet-1K (image classification), GLUE (natural language understanding), and WikiText (language modeling) show that LReg consistently enhances the performance of pre-trained models, significantly lowers test loss, and scales effectively to big data scenarios. These results underscore LReg’s potential as a scalable, optimization-free alternative for deep learning in large and complex datasets, aligning closely with the goals of innovative big data analytics. Full article
33 pages, 4831 KB  
Article
A General-Purpose Knowledge Retention Metric for Evaluating Distillation Models Across Architectures and Tasks
by Arjay Alba and Jocelyn Villaverde
AI 2025, 6(10), 273; https://doi.org/10.3390/ai6100273 - 21 Oct 2025
Abstract
Background: Knowledge distillation (KD) compresses deep neural networks by transferring knowledge from a high-capacity teacher model to a lightweight student model. However, conventional evaluation metrics such as accuracy, mAP, IoU, or RMSE focus mainly on task performance and overlook how effectively the student internalizes the teacher’s knowledge. Methods: This study introduces the Knowledge Retention Score (KRS), a composite metric that integrates intermediate feature similarity and output agreement into a single interpretable score to quantify knowledge retention. KRS was primarily validated in computer vision (CV) through 36 experiments covering image classification, object detection, and semantic segmentation using diverse datasets and eight representative KD methods. Supplementary experiments were conducted in natural language processing (NLP) using transformer-based models on SST-2, and in time series regression with convolutional teacher–student pairs. Results: Across all domains, KRS correlated strongly with standard performance metrics while revealing internal retention dynamics that conventional evaluations often overlook. By reporting feature similarity and output agreement separately alongside the composite score, KRS provides transparent and interpretable insights into knowledge transfer. Conclusions: KRS offers a stable diagnostic tool and a complementary evaluation metric for KD research. Its generality across domains demonstrates its potential as a standardized framework for assessing knowledge retention beyond task-specific performance measures. Full article
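The abstract describes KRS as a composite of intermediate feature similarity and output agreement reported alongside its two components. The paper's exact formulation is not reproduced here; the following is a minimal sketch under assumed choices (cosine similarity over flattened features, top-1 prediction agreement, and a hypothetical equal weighting `alpha`):

```python
import numpy as np

def knowledge_retention_score(feat_teacher, feat_student,
                              out_teacher, out_student, alpha=0.5):
    """Illustrative composite: alpha * feature similarity
    + (1 - alpha) * output agreement."""
    # Cosine similarity between flattened intermediate features
    ft, fs = feat_teacher.ravel(), feat_student.ravel()
    feat_sim = float(np.dot(ft, fs)
                     / (np.linalg.norm(ft) * np.linalg.norm(fs)))
    # Fraction of samples where teacher and student predict the same class
    agreement = float(np.mean(out_teacher.argmax(axis=1)
                              == out_student.argmax(axis=1)))
    return alpha * feat_sim + (1 - alpha) * agreement
```

A perfectly distilled student (identical features and predictions) scores 1.0; reporting `feat_sim` and `agreement` separately, as the paper does for its components, shows whether a low score comes from internal representations or from outputs.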

7 pages, 1456 KB  
Proceeding Paper
Towards a More Natural Urdu: A Comprehensive Approach to Text-to-Speech and Voice Cloning
by Muhammad Ramiz Saud, Muhammad Romail Imran and Raja Hashim Ali
Eng. Proc. 2025, 87(1), 112; https://doi.org/10.3390/engproc2025087112 - 20 Oct 2025
Abstract
This paper introduces a comprehensive approach to building natural-sounding Urdu Text-to-Speech (TTS) and voice cloning systems, addressing the lack of computational resources for Urdu. We developed a large-scale dataset of over 100 h of Urdu speech, carefully cleaned and phonetically aligned through an automated transcription pipeline to preserve linguistic accuracy. The dataset was then used to fine-tune Tacotron2, a neural network model originally trained for English, with modifications tailored to Urdu’s phonological and morphological features. To further enhance naturalness, we integrated voice cloning techniques that capture regional accents and produce personalized speech outputs. Model performance was evaluated through mean opinion score (MOS), word error rate (WER), and speaker similarity, showing substantial improvements compared to previous Urdu systems. The results demonstrate clear progress toward natural and intelligible Urdu speech synthesis, while also revealing challenges such as handling dialectal variation and preventing model overfitting. This work contributes an essential resource and methodology for advancing Urdu natural language processing (NLP), with promising applications in education, accessibility, entertainment, and assistive technologies. Full article
(This article belongs to the Proceedings of The 5th International Electronic Conference on Applied Sciences)

27 pages, 1960 KB  
Review
AI and Machine Learning in Biology: From Genes to Proteins
by Zaw Myo Hein, Dhanyashri Guruparan, Blaire Okunsai, Che Mohd Nasril Che Mohd Nassir, Muhammad Danial Che Ramli and Suresh Kumar
Biology 2025, 14(10), 1453; https://doi.org/10.3390/biology14101453 - 20 Oct 2025
Abstract
Artificial intelligence (AI) and machine learning (ML), especially deep learning, have profoundly transformed biology by enabling precise interpretation of complex genomic and proteomic data. This review presents a comprehensive overview of cutting-edge AI methodologies spanning from foundational neural networks to advanced transformer architectures and large language models (LLMs). These tools have revolutionized our ability to predict gene function, identify genetic variants, and accurately determine protein structures and interactions, exemplified by landmark milestones such as AlphaFold and DeepBind. We elaborate on the synergistic integration of genomics and protein structure prediction through AI, highlighting recent breakthroughs in generative models capable of designing novel proteins and genomic sequences at unprecedented scale and accuracy. Furthermore, the fusion of multi-omics data using graph neural networks and hybrid AI frameworks has provided nuanced insights into cellular heterogeneity and disease mechanisms, propelling personalized medicine and drug discovery. This review also discusses ongoing challenges including data quality, model interpretability, ethical concerns, and computational demands. By synthesizing current progress and emerging frontiers, we provide insights to guide researchers in harnessing AI’s transformative power across the biological spectrum from genes to functional proteins. Full article
(This article belongs to the Special Issue Artificial Intelligence Research for Complex Biological Systems)

14 pages, 586 KB  
Article
Complex Table Question Answering with Multiple Cells Recall Based on Extended Cell Semantic Matching
by Hainan Chen and Dongqi Shen
Big Data Cogn. Comput. 2025, 9(10), 265; https://doi.org/10.3390/bdcc9100265 - 20 Oct 2025
Abstract
Tables, as a form of structured or semi-structured data, are widely found in documents, reports, and data manuals. Table-based question answering (TableQA) plays a key role in table document analysis and understanding. Existing approaches to TableQA can be broadly categorized into content-matching methods and end-to-end generation methods based on encoder–decoder deep neural networks. Content-matching methods return one or more table cells as answers, thereby preserving the original data and making them more suitable for downstream tasks. End-to-end methods, especially those leveraging large language models (LLMs), have achieved strong performance on various benchmarks. However, the variability in LLM-generated expressions and their heavy reliance on prompt engineering limit their applicability where answer fidelity to the source table is critical. In this work, we propose CBCM (Cell-by-Cell semantic Matching), a fine-grained cell-level matching method that extends the traditional row- and column-matching paradigm to improve accuracy and applicability in TableQA. Furthermore, based on the public IM-TQA dataset, we construct a new benchmark, IM-TQA-X, specifically designed for the multi-row and multi-column cell recall task, a scenario underexplored in existing state-of-the-art content-matching methods. Experimental results show that CBCM improves overall accuracy by 2.5% over the latest row- and column-matching method RGCNRCI (Relational Graph Convolutional Networks based Row and Column Intersection), and boosts accuracy in the multi-row and multi-column recall task from 4.3% to 34%. Full article
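CBCM's core idea of scoring each cell against the question, rather than whole rows or columns, can be caricatured with embedding cosine similarity. Everything below (the function name, the threshold, the toy vectors) is invented for illustration; the paper's actual matching model is not shown:

```python
import numpy as np

def recall_cells(question_vec, cell_vecs, threshold=0.8):
    """Return indices of cells whose embedding is similar enough
    to the question embedding (hypothetical cell-by-cell matcher)."""
    sims = cell_vecs @ question_vec / (
        np.linalg.norm(cell_vecs, axis=1) * np.linalg.norm(question_vec))
    return [i for i, s in enumerate(sims) if s >= threshold]

# Toy 2-D "embeddings": cells 0 and 2 point roughly along the question
question = np.array([1.0, 0.0])
cells = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
print(recall_cells(question, cells))
```

Because the score is computed per cell, the matcher can return several cells from different rows and columns at once, which is exactly the multi-cell recall scenario the IM-TQA-X benchmark targets.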

20 pages, 11103 KB  
Data Descriptor
VitralColor-12: A Synthetic Twelve-Color Segmentation Dataset from GPT-Generated Stained-Glass Images
by Martín Montes Rivera, Carlos Guerrero-Mendez, Daniela Lopez-Betancur, Tonatiuh Saucedo-Anaya, Manuel Sánchez-Cárdenas and Salvador Gómez-Jiménez
Data 2025, 10(10), 165; https://doi.org/10.3390/data10100165 - 18 Oct 2025
Abstract
The segmentation and classification of color are crucial stages in image processing, computer vision, and pattern recognition, as they significantly impact the results. The diverse, hand-labeled datasets in the literature are applied for monochromatic or color segmentation in specific domains. On the other hand, synthetic datasets are generated using statistics, artificial intelligence algorithms, or generative artificial intelligence (AI). The latter includes Large Language Models (LLMs), Generative Adversarial Neural Networks (GANs), and Variational Autoencoders (VAEs), among others. In this work, we propose VitralColor-12, a synthetic dataset for color classification and segmentation, comprising twelve colors: black, blue, brown, cyan, gray, green, orange, pink, purple, red, white, and yellow. VitralColor-12 addresses the limitations of color segmentation and classification datasets by leveraging the capabilities of LLMs, including adaptability, variability, copyright-free content, and lower-cost data—properties that are desirable in image datasets. VitralColor-12 includes pixel-level classification and segmentation maps, making the dataset broadly applicable and highly variable for a range of computer vision applications. VitralColor-12 utilizes GPT-5 and DALL·E 3 for generating stained-glass images. These images simplify the annotation process, since stained-glass images have isolated colors with distinct boundaries within the steel structure, which provide easy regions to label with a single color per region. Once we obtain the images, we use at least one hand-labeled centroid per color to automatically cluster all pixels based on Euclidean distance and morphological operations, including erosion and dilation. This process enables us to automatically label a classification dataset and generate segmentation maps.
Our dataset comprises 910 images, organized into 70 generated images and 12 pixel segmentation maps—one for each color—which include 9,509,524 labeled pixels, 1,794,758 of which are unique. These annotated pixels are represented by RGB, HSL, CIELAB, and YCbCr values, enabling a detailed color analysis. Moreover, VitralColor-12 offers features that address gaps in public resources, such as violin diagrams of color frequency across images, per-color channel histograms, 3D color maps, descriptive statistics, and standardized metrics such as ΔE76, ΔE94, and CIELAB Chromaticity, which demonstrate the dataset’s distribution, applicability, and realistic perceptual structure, including warm, neutral, and cold colors, as well as the high contrast between black and white, offering meaningful perceptual clusters and reinforcing its utility for color segmentation and classification. Full article
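The ΔE76 metric reported with the dataset is simply Euclidean distance between two colors in CIELAB space. A minimal version (the sample L*a*b* values below are arbitrary):

```python
def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two CIELAB colors,
    each given as an (L*, a*, b*) triple."""
    return sum((a - b) ** 2 for a, b in zip(lab1, lab2)) ** 0.5

# Two colors with the same lightness, differing only in a* and b*
print(delta_e76((50.0, 0.0, 0.0), (50.0, 3.0, 4.0)))  # → 5.0
```

Later formulas such as ΔE94 add perceptual weighting terms on top of this distance; CIE76 is the unweighted baseline.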

25 pages, 4152 KB  
Systematic Review
Mapping the AI Landscape in Project Management Context: A Systematic Literature Review
by Masoom Khalil, Alencar Bravo, Darli Vieira and Marly Monteiro de Carvalho
Systems 2025, 13(10), 913; https://doi.org/10.3390/systems13100913 - 17 Oct 2025
Abstract
The purpose of this research is to systematically map and analyze the use of AI technologies in project management, identifying themes, research gaps, and practical implications. This study conducts a systematic literature review (SLR) that combines bibliometric analysis with qualitative content evaluation to explore the present landscape of AI in project management. The search covered literature published until November 2024, ensuring inclusion of the most recent developments. Studies were included if they examined AI methods applied to project management contexts and were published in peer-reviewed English journals as articles, review articles, or early access publications; studies unrelated to project management or lacking methodological clarity were excluded. It follows a structured coding protocol informed by inductive and deductive reasoning, using NVivo (version 12) and Biblioshiny (version 4.3.0) software. From the entire set of 1064 records retrieved from Scopus and Web of Science, 27 publications met the final inclusion criteria for qualitative synthesis. Bibliometric clusters were derived from the entire set of 885 screened records, while thematic coding was applied to the 27 included studies. This review highlights the use of Artificial Neural Networks (ANN), Case-Based Reasoning (CBR), Digital Twins (DTs), and Large Language Models (LLMs) as central to recent progress. Bibliometric mapping identified several major thematic clusters. For this study, we chose those that show a clear link between artificial intelligence (AI) and project management (PM), such as expert systems, intelligent systems, and optimization algorithms. These clusters highlight the increasing influence of AI in improving project planning, decision-making, and resource management. Further studies investigate generative AI and the convergence of AI with blockchain and Internet of Things (IoT) systems, suggesting changes in project delivery approaches. 
Although adoption is increasing, key implementation issues persist. These include limited empirical evidence, inadequate attention to later project stages, and concerns about data quality, transparency, and workforce adaptation. This review improves understanding of AI’s role in project contexts and outlines areas for further research. For practitioners, the findings emphasize AI’s ability in cost prediction, scheduling, and risk assessment, while also emphasizing the importance of strong data governance and workforce training. This review is limited to English-language, peer-reviewed research indexed in Scopus and Web of Science, potentially excluding relevant grey literature or non-English contributions. This review was not registered and received no external funding. Full article
(This article belongs to the Special Issue Project Management of Complex Systems (Manufacturing and Services))

14 pages, 630 KB  
Article
Disease-Specific Prediction of Missense Variant Pathogenicity with DNA Language Models and Graph Neural Networks
by Mohamed Ghadie, Sameer Sardaar and Yannis Trakadis
Bioengineering 2025, 12(10), 1098; https://doi.org/10.3390/bioengineering12101098 - 13 Oct 2025
Abstract
Accurate prediction of the impact of genetic variants on human health is of paramount importance to clinical genetics and precision medicine. Recent machine learning (ML) studies have tried to predict variant pathogenicity with different levels of success. However, most missense variants identified on a clinical basis are still classified as variants of uncertain significance (VUS). Our approach allows for the interpretation of a variant for a specific disease and, thus, for the integration of disease-specific domain knowledge. We utilize a comprehensive knowledge graph, with 11 types of interconnected biomedical entities at diverse biomolecular and clinical levels, to classify missense variants from ClinVar. We use BioBERT to generate embeddings of biomedical features for each node in the graph, as well as DNA language models to embed variant features directly from genomic sequence. Next, we train a two-stage architecture consisting of a graph convolutional neural network to encode biological relationships. A neural network is then used as the classifier to predict disease-specific pathogenicity of variants, essentially predicting edges between variant and disease nodes. We compare performance across different versions of our model, obtaining prediction-balanced accuracies as high as 85.6% (sensitivity: 90.5%; NPV: 89.8%) and discuss how our work can inform future studies in this area. Full article
(This article belongs to the Special Issue AI-Driven Approaches to Diseases Detection and Diagnosis)

14 pages, 2107 KB  
Article
Agricultural Knowledge-Enhanced Deep Learning for Joint Intent Detection and Slot Filling
by Mingtang Liu, Shanshan Wu, Wenlong Tian, Shuo Lei and Jiahao Miao
Appl. Sci. 2025, 15(20), 10932; https://doi.org/10.3390/app152010932 - 11 Oct 2025
Abstract
Intent detection and slot filling are fundamental components for constructing intelligent question-answering systems in agricultural domains. Existing approaches show notable limitations in semantic feature extraction and achieve relatively low accuracy when processing domain-specific agricultural queries with complex terminology and contextual dependencies. To address these challenges, this paper proposes an agricultural knowledge-enhanced deep learning approach that integrates agricultural domain knowledge and terminology with advanced neural architectures. The method integrates HanLP-based agricultural terminology processing with BERT contextual encoding, TextCNN feature extraction, and attention-based fusion. Experimental validation on a curated domain-specific agricultural dataset of 8041 melon cultivation queries shows that the proposed model achieves an accuracy of 79.6%, recall of 80.1%, and F1-score of 79.8%, a significant improvement (7–22% performance gains) over baseline methods including TextRNN, TextRCNN, TextCNN, and BERT-TextCNN. The results indicate significant potential for advancing intelligent agricultural advisory systems and domain-specific natural language understanding, particularly for precision agriculture applications. Full article
(This article belongs to the Section Agricultural Science and Technology)

20 pages, 1358 KB  
Review
Artificial Intelligence in the Diagnosis and Management of Atrial Fibrillation
by Otilia Țica, Asgher Champsi, Jinming Duan and Ovidiu Țica
Diagnostics 2025, 15(20), 2561; https://doi.org/10.3390/diagnostics15202561 - 11 Oct 2025
Abstract
Artificial intelligence (AI) has increasingly become a transformative tool in cardiology, particularly in diagnosing and managing atrial fibrillation (AF), the most prevalent cardiac arrhythmia. This review aims to critically assess and synthesize current AI methodologies and their clinical relevance in AF diagnosis, risk prediction, and therapeutic guidance. It systematically evaluates recent advancements in AI methodologies, including machine learning, deep learning, and natural language processing, for AF detection, risk stratification, and therapeutic decision-making. AI-driven tools have demonstrated superior accuracy and efficiency in interpreting electrocardiograms (ECGs), continuous monitoring via wearable devices, and predicting AF onset and progression compared to traditional clinical approaches. Deep learning algorithms, notably convolutional neural networks (CNNs) and recurrent neural networks (RNNs), have revolutionized ECG analysis, identifying subtle waveform features predictive of AF development. Additionally, AI models significantly enhance clinical decision-making by personalizing anticoagulation therapy, optimizing rhythm versus rate-control strategies, and predicting procedural outcomes for catheter ablation. Despite considerable potential, practical adoption of AI in clinical practice is constrained by challenges including data privacy, explainability, and integration into clinical workflows. Addressing these challenges through robust validation studies, transparent algorithm development, and interdisciplinary collaborations will be crucial. In conclusion, AI represents a paradigm shift in AF management, promising improvements in diagnostic precision, personalized care, and patient outcomes. This review highlights the growing clinical importance of AI in AF care and provides a consolidated perspective on current applications, limitations, and future directions. Full article

15 pages, 606 KB  
Systematic Review
Artificial Intelligence for Risk–Benefit Assessment in Hepatopancreatobiliary Oncologic Surgery: A Systematic Review of Current Applications and Future Directions on Behalf of TROGSS—The Robotic Global Surgical Society
by Aman Goyal, Michail Koutentakis, Jason Park, Christian A. Macias, Isaac Ballard, Shen Hong Law, Abhirami Babu, Ehlena Chien Ai Lau, Mathew Mendoza, Susana V. J. Acosta, Adel Abou-Mrad, Luigi Marano and Rodolfo J. Oviedo
Cancers 2025, 17(20), 3292; https://doi.org/10.3390/cancers17203292 - 11 Oct 2025
Abstract
Background: Hepatopancreatobiliary (HPB) surgery is among the most complex domains in oncologic care, where decisions entail significant risk–benefit considerations. Artificial intelligence (AI) has emerged as a promising tool for improving individualized decision-making through enhanced risk stratification, complication prediction, and survival modeling. However, its role in HPB oncologic surgery has not been comprehensively assessed. Methods: This systematic review was conducted in accordance with PRISMA guidelines and registered with PROSPERO ID: CRD420251114173. A comprehensive search across six databases was performed through 30 May 2025. Eligible studies evaluated AI applications in risk–benefit assessment in HPB cancer surgery. Inclusion criteria encompassed peer-reviewed, English-language studies involving human subjects. Two independent reviewers conducted study selection, data extraction, and quality appraisal. Results: Thirteen studies published between 2020 and 2024 met the inclusion criteria. Most studies employed retrospective designs with sample sizes ranging from small institutional cohorts to large national databases. AI models were developed for cancer risk prediction (n = 9), postoperative complication modeling (n = 4), and survival prediction (n = 3). Common algorithms included Random Forest, XGBoost, Decision Trees, Artificial Neural Networks, and Transformer-based models. While internal performance metrics were generally favorable, external validation was reported in only five studies, and calibration metrics were often lacking. Integration into clinical workflows was described in just two studies. No study addressed cost-effectiveness or patient perspectives. Overall risk of bias was moderate to high, primarily due to retrospective designs and incomplete reporting. Conclusions: AI demonstrates early promise in augmenting risk–benefit assessment for HPB oncologic surgery, particularly in predictive modeling. However, its clinical utility remains limited by methodological weaknesses and a lack of real-world integration. Future research should focus on prospective, multicenter validation, standardized reporting, clinical implementation, cost-effectiveness analysis, and the incorporation of patient-centered outcomes. Full article

18 pages, 1540 KB  
Review
From Fractal Geometry to Fractal Cognition: Experimental Tools and Future Directions for Studying Recursive Hierarchical Embedding
by Mauricio J. D. Martins
Fractal Fract. 2025, 9(10), 654; https://doi.org/10.3390/fractalfract9100654 - 10 Oct 2025
Abstract
The study of fractals has a long history in mathematics and signal analysis, providing formal tools to describe self-similar structures and scale-invariant phenomena. In recent years, cognitive science has developed a set of powerful theoretical and experimental tools capable of probing the representations that enable humans to extend hierarchical structures beyond given input and to generate fractal-like patterns across multiple domains, including language, music, vision, and action. These paradigms target recursive hierarchical embedding (RHE), a generative capacity that supports the production and recognition of self-similar structures at multiple scales. This article reviews the theoretical framework of RHE, surveys empirical methods for measuring it across behavioral and neural domains, and highlights their potential for cross-domain comparisons and developmental research. It also examines applications in linguistic, musical, visual, and motor domains, summarizing key findings and their theoretical implications. Despite these advances, the computational and biological mechanisms underlying RHE remain poorly understood. Addressing this gap will require linking cognitive models with algorithmic architectures and leveraging the large-scale behavioral and neuroimaging datasets generated by these paradigms for fractal analyses. Integrating theory, empirical tools, and computational modelling offers a roadmap for uncovering the mechanisms that give rise to recursive generativity in the human mind. Full article
(This article belongs to the Special Issue Fractal Dynamics of Complex Systems in Society and Behavioral Science)
15 pages, 1797 KB  
Article
Exploring AI’s Potential in Papilledema Diagnosis to Support Dermatological Treatment Decisions in Rural Healthcare
by Jonathan Shapiro, Mor Atlas, Naomi Fridman, Itay Cohen, Ziad Khamaysi, Mahdi Awwad, Naomi Silverstein, Tom Kozlovsky and Idit Maharshak
Diagnostics 2025, 15(19), 2547; https://doi.org/10.3390/diagnostics15192547 - 9 Oct 2025
Abstract
Background: Papilledema, an ophthalmic finding associated with increased intracranial pressure, is often induced by dermatological medications, including corticosteroids, isotretinoin, and tetracyclines. Early detection is crucial for preventing irreversible optic nerve damage, but access to ophthalmologic expertise is often limited in rural settings. Artificial intelligence (AI) may enable the automated and accurate detection of papilledema from fundus images, thereby supporting timely diagnosis and management. Objective: The primary objective of this study was to explore the diagnostic capability of ChatGPT-4o, a general large language model with multimodal input, in identifying papilledema from fundus photographs. For context, its performance was compared with a ResNet-based convolutional neural network (CNN) specifically fine-tuned for ophthalmic imaging, as well as with the assessments of two human ophthalmologists. The focus was on applications relevant to dermatological care in resource-limited environments. Methods: A dataset of 1094 fundus images (295 papilledema, 799 normal) was preprocessed and partitioned into a training set and a test set. The ResNet model was fine-tuned using discriminative learning rates and a one-cycle learning rate policy. GPT-4o and two human evaluators (a senior ophthalmologist and an ophthalmology resident) independently assessed the test images. Diagnostic metrics, including sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), accuracy, and Cohen’s Kappa, were calculated for each evaluator. Results: GPT-4o, when applied to papilledema detection, achieved an overall accuracy of 85.9% with substantial agreement beyond chance (Cohen’s Kappa = 0.72), but lower specificity (78.9%) and positive predictive value (73.7%) compared to benchmark models.
For context, the ResNet model, fine-tuned for ophthalmic imaging, reached near-perfect accuracy (99.5%, Kappa = 0.99), while two human ophthalmologists achieved accuracies of 96.0% (Kappa ≈ 0.92). Conclusions: This study explored the capability of GPT-4o, a large language model with multimodal input, for detecting papilledema from fundus photographs. GPT-4o achieved moderate diagnostic accuracy and substantial agreement with the ground truth, but it underperformed compared to both a domain-specific ResNet model and human ophthalmologists. These findings underscore the distinction between generalist large language models and specialized diagnostic AI: while GPT-4o is not optimized for ophthalmic imaging, its accessibility, adaptability, and rapid evolution highlight its potential as a future adjunct in clinical screening, particularly in underserved settings. These findings also underscore the need for validation on external datasets and real-world clinical environments before such tools can be broadly implemented. Full article
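All of the diagnostic metrics named in the Methods can be derived from a single 2x2 confusion matrix. The sketch below (illustrative only; `diagnostic_metrics` is a hypothetical helper, and the counts in the usage note are made-up numbers, not the study's data) shows the standard definitions, including Cohen's Kappa as observed agreement corrected for the agreement expected from the marginals.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic metrics plus Cohen's kappa vs. ground truth.

    tp/fp/fn/tn are counts of true/false positives and negatives.
    """
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)   # recall on diseased cases
    specificity = tn / (tn + fp)   # recall on healthy cases
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    accuracy = (tp + tn) / n
    # Cohen's kappa: observed agreement p_o corrected for the chance
    # agreement p_e expected from the row/column marginals.
    p_o = accuracy
    p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (p_o - p_e) / (1 - p_e)
    return dict(sensitivity=sensitivity, specificity=specificity,
                ppv=ppv, npv=npv, accuracy=accuracy, kappa=kappa)
```

For example, a balanced split of 40/10/10/40 gives 0.8 for all four rates but a kappa of only 0.6, which is why kappa is reported alongside raw accuracy in studies like this one.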
(This article belongs to the Special Issue AI in Dermatology)
30 pages, 5986 KB  
Article
Attention-Aware Graph Neural Network Modeling for AIS Reception Area Prediction
by Ambroise Renaud, Clément Iphar and Aldo Napoli
Sensors 2025, 25(19), 6259; https://doi.org/10.3390/s25196259 - 9 Oct 2025
Abstract
Accurately predicting the reception area of the Automatic Identification System (AIS) is critical for ship tracking and anomaly detection, as errors in signal interpretation may lead to incorrect vessel localization and behavior analysis. However, traditional propagation models, whether deterministic, empirical, or semi-empirical, face limitations in dynamic environments due to their reliance on detailed atmospheric and terrain inputs. To address these challenges, we propose a data-driven approach based on graph neural networks (GNNs) to model AIS reception as a function of environmental and geographic variables. Inspired by the attention mechanisms that power transformers in large language models, our framework employs GraphSAGE (SAmple and aggreGatE) convolutions to aggregate neighborhood features, combines layer outputs through Jumping Knowledge (JK) with Bidirectional Long Short-Term Memory (BiLSTM)-derived attention coefficients, and integrates an attentional pooling module at the graph-level readout. Trained on real-world AIS data enriched with terrain and meteorological features, the model captures both local and long-range reception patterns, outperforming classical baselines—including ITU-R P.2001 and XGBoost—in F1-score and accuracy. This work illustrates the value of deep learning and AIS sensor networks for detecting positioning anomalies in ship tracking and highlights the potential of data-driven approaches in modeling sensor reception. Full article
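Two of the architectural ingredients named in this abstract, neighborhood mean-aggregation (GraphSAGE-style convolution) and an attention-weighted graph-level readout, can be sketched in plain NumPy. This is a minimal illustration under assumed dense shapes, not the authors' implementation; `sage_layer` and `attention_readout` are hypothetical names, and the BiLSTM-derived Jumping Knowledge weighting is omitted for brevity.

```python
import numpy as np

def sage_layer(x, adj, w_self, w_neigh):
    """One GraphSAGE-style convolution: mean-aggregate each node's
    neighbors, transform self and neighbor features separately, sum,
    then apply ReLU. `adj` is a dense {0,1} adjacency matrix."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)  # avoid /0 for isolated nodes
    neigh_mean = (adj @ x) / deg
    return np.maximum(x @ w_self + neigh_mean @ w_neigh, 0.0)

def attention_readout(h, a):
    """Graph-level attentional pooling: softmax scores over nodes,
    then an attention-weighted sum of node embeddings."""
    scores = h @ a
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ h
```

With identity weight matrices on a 3-node fully connected graph, each node's output mixes its own feature with the mean of its two neighbors, and the readout collapses the node embeddings into one graph-level vector whose attention weights sum to 1. A production model would stack several such layers and learn `w_self`, `w_neigh`, and `a` by gradient descent.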
(This article belongs to the Special Issue Transformer Applications in Target Tracking)