Search Results (303)

Search Parameters:
Keywords = deep learning in radiology

16 pages, 8390 KB  
Article
An Adaptive Deep Learning Framework for Multi-Label Chest X-Ray Diagnosis Using a Hybrid CNN–Transformer Architecture and Class-Wise Ensemble Fusion
by Chi-Feng Hsieh, Hsu-Hsia Peng, Yu-Hsiang Tsai, Chia-Ching Chang, Cheng-Hsuan Juan, Hsian-He Hsu and Chun-Jung Juan
Diagnostics 2026, 16(8), 1227; https://doi.org/10.3390/diagnostics16081227 - 20 Apr 2026
Viewed by 259
Abstract
Background/Objectives: To develop and externally evaluate a deep learning framework for multi-label thoracic disease classification on chest radiographs using hybrid convolutional neural network (CNN)–transformer architectures, hierarchical scalar-weighted fusion, and ensemble strategies. Methods: This retrospective, multi-center study utilized publicly available datasets: NIH ChestX-ray14 (112,120 images; 30,805 patients) for model development and internal testing, and CheXpert (223,415 images) plus ChestX-Det10 (3578 images) for external validation. Nine CNN–transformer hybrids were systematically benchmarked, and the proposed model incorporated multi-scale DenseNet121 features, scalar-weighted fusion, positional encodings, and cross-attention. Four post hoc ensemble methods were explored, including a class-wise Top-3 Grid Search. Performance was evaluated using AUROC as the primary metric, along with precision, recall, F1-score, accuracy, specificity, positive predictive value, and negative predictive value. Statistical comparisons were performed using bootstrapped resampling and appropriate parametric or non-parametric tests. Results: On the NIH internal test set, the proposed hybrid model achieved a mean AUROC of 0.8495, which was significantly higher than that of the DenseNet121 baseline (0.8441, p = 0.032). The Top-3 Grid Search ensemble further improved internal performance, achieving a mean AUROC of 0.8577 (p < 0.00001). On external validation, the ensemble consistently outperformed DenseNet121, achieving mean AUROCs of 0.6500 on CheXpert (p < 0.001) and 0.6592 on ChestX-Det10 (p < 0.001). Per-class analysis revealed significant improvements for clinically important conditions such as cardiomegaly, mass, and pneumothorax. Grad-CAM visualizations demonstrated the strong alignment of predicted abnormalities with radiologically relevant regions. Conclusions: This CNN–transformer framework, particularly when combined with class-wise ensemble strategies, provided modest but statistically significant improvements in multi-label chest X-ray classification. External validation suggested partial generalizability across datasets, although performance remained moderate under domain shift.
(This article belongs to the Special Issue Artificial Intelligence in Diagnostic Imaging)
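The class-wise Top-3 Grid Search that drives the ensemble gains is described only at a high level in the abstract. As a rough illustration of the general idea — for each label, take the three best single models by AUROC and grid-search convex weights over their predicted probabilities — here is a minimal sketch; the function name, grid step, and array shapes are assumptions, not the authors' code:

```python
import numpy as np
from itertools import product
from sklearn.metrics import roc_auc_score

def classwise_top3_ensemble(probs, y_true, step=0.1):
    """For each label, select the 3 best single models by AUROC, then
    grid-search convex weights over their probabilities.
    probs: (n_models, n_samples, n_labels) predicted probabilities.
    y_true: (n_samples, n_labels) binary ground truth.
    Returns fused probabilities of shape (n_samples, n_labels)."""
    n_models, n_samples, n_labels = probs.shape
    fused = np.zeros((n_samples, n_labels))
    grid = np.arange(0.0, 1.0 + 1e-9, step)
    for c in range(n_labels):
        aucs = [roc_auc_score(y_true[:, c], probs[m, :, c]) for m in range(n_models)]
        top3 = np.argsort(aucs)[-3:]          # indices of the 3 best models
        best_auc, best_w = -1.0, None
        for w1, w2 in product(grid, grid):    # w3 fixed by w1 + w2 + w3 = 1
            w3 = 1.0 - w1 - w2
            if w3 < -1e-9:
                continue
            p = (w1 * probs[top3[0], :, c] + w2 * probs[top3[1], :, c]
                 + w3 * probs[top3[2], :, c])
            auc = roc_auc_score(y_true[:, c], p)
            if auc > best_auc:
                best_auc, best_w = auc, (w1, w2, w3)
        w1, w2, w3 = best_w
        fused[:, c] = (w1 * probs[top3[0], :, c] + w2 * probs[top3[1], :, c]
                       + w3 * probs[top3[2], :, c])
    return fused
```

Because the grid contains the pure single-model corners, the fused per-class AUROC can never fall below the best single model on the data it is tuned on; the paper's external-validation drop reflects applying such tuned weights under domain shift.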

14 pages, 3690 KB  
Article
Enhancing Reliable Prostate Lesion Detection: Integrating Multi-Expert Annotations and Tailored nnU-Net Ensemble Learning Strategies
by Rafal Jozwiak, Michal Gonet, Jan Mycka, Ihor Mykhalevych, Dariusz S. Radomski, Krzysztof Tupikowski, Tomasz Lorenc, Joanna Dolowy and Anna Zacharzewska-Gondek
Appl. Sci. 2026, 16(8), 3932; https://doi.org/10.3390/app16083932 - 18 Apr 2026
Viewed by 275
Abstract
Accurate detection of prostate cancer suspicious areas in biparametric MRI (bpMRI) remains challenging because of severe lesion-to-background imbalance, limited lesion contrast, and inter-reader variability in lesion delineation. Unlike prior approaches that collapse inter-reader disagreement into a single consensus label, this study makes three contributions: (1) an adapted nnU-Net framework with prostate-centered preprocessing to reduce voxel-level class imbalance; (2) a class-imbalance-aware composite loss combining Dice, binary cross-entropy, and tailored focal loss to improve sensitivity to small and low-contrast lesions; and (3) a multi-expert learning strategy that preserves reader-specific annotations as separate supervision targets and aggregates predictions at the ensemble level. The method was developed on a single-center dataset of 378 bpMRI studies independently annotated by three board-certified radiologists. Of these, 323 studies were used for model development with patient-level 5-fold cross-validation, and 55 studies were reserved as a fixed independent test set. Compared with our previously published U-Net baseline, the proposed consensus-based nnU-Net improved Average Precision (AP) from 0.69 to 0.75, AUROC from 0.92 to 0.96, and the PI-CAI score from 0.81 to 0.85 on the independent test set. In addition, the multi-expert approach further improved AP to 0.81 versus 0.76 (+6.6%, p < 0.01), AUROC to 0.99 versus 0.95 (+4.2%, p < 0.01), and the PI-CAI score to 0.90 versus 0.86 (+4.7%). These findings demonstrate that explicitly preserving expert disagreement as a training signal, combined with anatomically targeted preprocessing and tailored loss design, substantially improves prostate lesion detection in bpMRI, providing a strong basis for future multi-center external validation.
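The composite loss — Dice plus binary cross-entropy plus a tailored focal term — can be sketched as follows. The relative weights, `gamma`, and `alpha` below are illustrative placeholders; the abstract does not give the paper's tuned values or the exact focal tailoring:

```python
import numpy as np

def composite_loss(pred, target, w_dice=1.0, w_bce=1.0, w_focal=1.0,
                   gamma=2.0, alpha=0.75, eps=1e-7):
    """Dice + binary cross-entropy + focal loss on voxel probabilities.
    pred, target: flat arrays of predicted probabilities / binary labels.
    Weights, gamma, and alpha are illustrative assumptions."""
    pred = np.clip(pred, eps, 1.0 - eps)
    # Soft Dice loss: 1 - 2*sum(P*T) / (sum(P) + sum(T))
    dice = 1.0 - (2.0 * np.sum(pred * target) + eps) / (np.sum(pred) + np.sum(target) + eps)
    # Binary cross-entropy, averaged over voxels
    bce = -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    # Focal loss: (1 - p_t)^gamma down-weights easy voxels; alpha
    # up-weights the rare lesion class against the abundant background
    pt = np.where(target == 1, pred, 1 - pred)
    at = np.where(target == 1, alpha, 1 - alpha)
    focal = np.mean(-at * (1 - pt) ** gamma * np.log(pt))
    return w_dice * dice + w_bce * bce + w_focal * focal
```

The Dice term targets overlap directly, BCE stabilizes voxel-wise gradients, and the focal term concentrates learning on small, low-contrast lesions, which is the class-imbalance rationale the abstract states.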

17 pages, 1040 KB  
Systematic Review
Artificial Intelligence vs. Human Experts in Temporomandibular Joint MRI Interpretation: A Systematic Review
by Marijus Leketas, Inesa Stonkutė, Miglė Miškinytė and Dominykas Afanasjevas
Healthcare 2026, 14(8), 1066; https://doi.org/10.3390/healthcare14081066 - 17 Apr 2026
Viewed by 249
Abstract
Background: Magnetic resonance imaging (MRI) is the reference standard for evaluating temporomandibular joint (TMJ) disorders, particularly for assessing disc position, joint effusion, and degenerative changes. With increasing imaging demands and advances in deep learning, artificial intelligence (AI) has emerged as a potential adjunct to expert interpretation. This systematic review aimed to compare the diagnostic performance of AI-based models with that of human experts in TMJ MRI analysis. Methods: This review was conducted in accordance with the PRISMA 2020 guidelines and prospectively registered in PROSPERO (CRD420251174127). A systematic search of PubMed/MEDLINE, ScienceDirect, Wiley Online Library, and Springer Nature Link was performed for studies published between 2020 and 2026. Eligible studies included human participants undergoing TMJ MRI and evaluated AI, machine learning, or deep learning models against human expert interpretation. Extracted outcomes included sensitivity, specificity, accuracy, area under the receiver operating characteristic curve (AUC), and agreement metrics. Risk of bias was assessed using QUADAS-2. Due to substantial heterogeneity, a narrative synthesis was conducted. Results: Five retrospective diagnostic accuracy studies were included, comprising sample sizes ranging from 118 to 1474 patients. Target conditions included anterior disc displacement, joint effusion, osteoarthritis, and disc perforation. AI models demonstrated strong discriminative performance, with reported AUC values ranging from 0.79 to 0.98. In direct comparisons, AI achieved diagnostic accuracy comparable to experienced radiologists. AI systems frequently demonstrated higher specificity and similar overall accuracy, whereas human experts often showed higher sensitivity. In osteoarthritis assessment, AI performance approached expert level and exceeded that of less experienced readers. All studies were retrospective and predominantly single-center, with heterogeneous reference standards and limited external validation. Conclusions: AI achieves diagnostic performance comparable to experienced clinicians in TMJ MRI interpretation and shows promise as a decision-support tool. Nevertheless, it should be regarded as complementary to, rather than a replacement for, expert radiological assessment pending further rigorous validation.
(This article belongs to the Special Issue Dental Research and Innovation: Shaping the Future of Oral Health)

23 pages, 1399 KB  
Review
Bibliometric Analysis of Artificial Intelligence in Pediatric Radiology and Medical Imaging: A Focus on Deep Learning Applications
by Ahmad Tijjani Garba, Aminu Bashir Suleiman, Wenze Du, Ahmed Ibrahim Mahmud, Harisu Abdullahi Shehu, Huseyin Kusetogullari and Md. Haidar Sharif
Bioengineering 2026, 13(4), 461; https://doi.org/10.3390/bioengineering13040461 - 14 Apr 2026
Viewed by 469
Abstract
This study presents the first dedicated bibliometric analysis of artificial intelligence (AI) and deep learning applications in pediatric radiology and medical imaging, mapping the intellectual structure of a rapidly evolving field. A total of 2688 articles and conference proceedings published between 2005 and 2025 were retrieved from the Web of Science Core Collection and analyzed using Bibliometrix R and VOSviewer. The findings reveal exponential growth in publications, from 7 papers in 2005 to 559 in 2025, with journal articles dominating the corpus (85.9%). The most-cited contributions, led by Kermany et al. (2018) with 2886 citations, are predominantly technical feasibility studies rather than clinical outcome trials, indicating a field that has advanced methodologically but remains in early stages of clinical translation. Thematic mapping identifies convolutional neural networks, pneumonia, and transfer learning as Motor Themes representing methodological maturity in chest imaging, while neuroimaging and image segmentation clusters occupy Niche Themes, reflecting insular development with limited cross-field connectivity. Geographic analysis reveals concentrated co-authorship along US–China and US–Europe corridors, with African, Latin American, and Southeast Asian institutions largely absent from knowledge production networks. Eight of the ten most productive affiliations are North American, highlighting structural inequities that risk producing AI tools optimized for high-resource settings rather than the global pediatric population. This analysis provides an empirical foundation for reorienting the field toward clinical validation, geographic inclusion, and methodological integration across isolated research communities.
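As a quick check on the "exponential growth" claim, the reported counts of 7 papers in 2005 and 559 in 2025 imply a compound annual growth rate of roughly 24–25% over the 20-year span:

```python
# Implied compound annual growth rate (CAGR) from the reported
# publication counts: 7 papers in 2005 rising to 559 in 2025.
n_2005, n_2025, years = 7, 559, 20
cagr = (n_2025 / n_2005) ** (1 / years) - 1
print(f"{cagr:.1%}")  # 24.5%
```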

20 pages, 903 KB  
Article
International Multicenter Validation of an Expanded AI Diagnostic System for 18 Pathologies in Thoracic and Musculoskeletal Radiography
by Jean-Laurent Sultan, Pauline Beaumel, Maria Dementjeva, Hugo Declercq, Ilana Sultan, Julia Reinas and Maria Dolores Durán Vila
Diagnostics 2026, 16(8), 1137; https://doi.org/10.3390/diagnostics16081137 - 10 Apr 2026
Viewed by 444
Abstract
Background: Conventional radiography faces high error rates (3–10%) due to heavy clinical workloads. While AI has emerged as a supportive tool, there is an evidence gap regarding the clinical utility of integrated AI systems in detecting both skeletal and thoracic abnormalities. Objectives: This large-scale, international multicenter study aims to validate the performance of a unified radiographic AI suite across an expanded diagnostic scope while confirming its continued robustness. Methods: A retrospective performance evaluation was conducted using 21,581 adult and pediatric X-rays collected from 20 countries. The reference standard was established through independent review by two expert readers, with adjudication by a third radiologist in cases of discordance. Diagnostic metrics, including Area Under the Curve (AUC), sensitivity, and specificity, were calculated for all 18 pathologies. Subgroup analysis was performed by patients’ age, sex, and country of acquisition. Results: For the nine findings within the expanded scope, AUC values exceeded 96.1%, with sensitivity and specificity ranging from 94.5 to 98.8% and from 86.6 to 96.1%, respectively. Similarly, for the nine historically validated findings, AUCs remained above 96.1%, with sensitivity and specificity ranging from 94.5 to 97.8% and from 84.6 to 89.4%, respectively. Consistency was maintained across subgroups. Conclusions: The results confirm the potential of deep learning to transition from narrow, task-specific tools to a unified, high-performance diagnostic system.
(This article belongs to the Special Issue AI in Radiology and Nuclear Medicine: Challenges and Opportunities)

21 pages, 2333 KB  
Systematic Review
Artificial-Intelligence-Based Radiologic, Histopathologic, and Molecular Models for the Diagnosis and Classification of Malignant Salivary Gland Tumors: A Systematic Review and Functional Meta-Synthesis
by Carlos M. Ardila, Eliana Pineda-Vélez, Anny M. Vivares-Builes and Alejandro I. Díaz-Laclaustra
Med. Sci. 2026, 14(2), 183; https://doi.org/10.3390/medsci14020183 - 5 Apr 2026
Viewed by 446
Abstract
Background/Objectives: Malignant salivary gland tumors (MSGTs) are rare, biologically heterogeneous neoplasms in which histopathologic diagnosis and classification are challenging and subject to interobserver variability. Artificial intelligence (AI) approaches using radiologic, histopathologic, and molecular data, including radiomics, deep learning, and biomarker-based models, have been proposed as adjunctive diagnostic tools. This systematic review aimed to identify and critically appraise AI/ML models across radiologic, histopathologic, and molecular domains for distinct diagnostic tasks in MSGTs, and to integrate their diagnostic roles through a functional meta-synthesis. Methods: We conducted a PRISMA 2020-compliant systematic review. Embase, PubMed/MEDLINE, and Scopus were searched from inception to February 2026. Eligible studies developed or validated AI/ML diagnostic or classification models in human salivary gland tumor cohorts and reported extractable performance metrics. Results: From 1265 records, eight studies (1922 participants) met the inclusion criteria, spanning CT/MRI radiomics or deep learning (n = 4), whole-slide histopathology deep learning (n = 3), and DNA methylation-based classification (n = 1). External validation was reported in two CT-based benign–malignant discrimination studies, with AUCs of 0.890 (95% CI 0.844–0.937) and 0.745 (95% CI 0.699–0.791). Heterogeneity in model construction, outcome definitions, and validation strategies precluded meta-analysis. Risk of bias was frequently high in QUADAS-2/PROBAST assessments, driven by retrospective sampling, limited blinding, and analysis-related concerns, while calibration and utility were rarely assessed. Conclusions: AI/ML models for MSGTs demonstrate promising diagnostic performance, particularly for preoperative benign–malignant discrimination, but the current evidence base is limited by heterogeneity, predominantly internal validation, and high risk of bias. The functional meta-synthesis identified three convergent diagnostic domains: malignancy discrimination, histopathologic subtype classification, and molecular/epigenetic taxonomy refinement.
(This article belongs to the Section Translational Medicine)

29 pages, 3941 KB  
Article
Explainable Deep Learning for Thoracic Radiographic Diagnosis: A COVID-19 Case Study Toward Clinically Meaningful Evaluation
by Divine Nicholas-Omoregbe, Olamilekan Shobayo, Obinna Okoyeigbo, Mansi Khurana and Reza Saatchi
Electronics 2026, 15(7), 1443; https://doi.org/10.3390/electronics15071443 - 30 Mar 2026
Viewed by 396
Abstract
COVID-19 still poses a global public health challenge, exerting pressure on radiology services. Chest X-ray (CXR) imaging is widely used for respiratory assessment due to its accessibility and cost-effectiveness. However, its interpretation is often challenging because of subtle radiographic features and inter-observer variability. Although recent deep learning (DL) approaches have shown strong performance in automated CXR classification, their black-box nature limits interpretability. This study proposes an explainable deep learning framework for COVID-19 detection from chest X-ray images. The framework incorporates anatomically guided preprocessing, including lung-region isolation, contrast-limited adaptive histogram equalization (CLAHE), bone suppression, and feature enhancement. A novel four-channel input representation was constructed by combining lung-isolated soft-tissue images with frequency-domain opacity maps, vessel enhancement maps, and texture-based features. Classification was performed using a modified Xception-based convolutional neural network, while Gradient-weighted Class Activation Mapping (Grad-CAM) was employed to provide visual explanations and enhance interpretability. The framework was evaluated on the publicly available COVID-19 Radiography Database, achieving an accuracy of 95.3%, an AUC of 0.983, and a Matthews Correlation Coefficient of approximately 0.83. Threshold optimisation improved sensitivity, reducing missed COVID-19 cases while maintaining high overall performance. Explainability analysis showed that model attention was primarily focused on clinically relevant lung regions.
(This article belongs to the Special Issue Image Processing Based on Convolution Neural Network: 2nd Edition)
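The four-channel input can be sketched with NumPy alone. The lung isolation, bone suppression, and CLAHE steps are omitted here, and the filters below (an FFT low-pass for the opacity map, gradient magnitude for vessels, local variance for texture) are simple stand-ins for the authors' enhancement pipeline, not their implementation:

```python
import numpy as np

def four_channel_input(img):
    """Assemble a 4-channel representation from a grayscale CXR,
    loosely following the paper's recipe with placeholder filters.
    img: 2D float array in [0, 1], assumed already lung-isolated."""
    soft = img                                  # channel 1: soft-tissue image
    # Channel 2: frequency-domain opacity map — keep only low spatial
    # frequencies of the FFT to retain the coarse opacity pattern.
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    keep = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) <= (min(h, w) / 8) ** 2
    opacity = np.abs(np.fft.ifft2(np.fft.ifftshift(f * keep)))
    # Channel 3: gradient magnitude as a crude vessel/edge enhancement map.
    gy, gx = np.gradient(img)
    vessels = np.hypot(gx, gy)
    # Channel 4: local variance in a 5x5 window as a simple texture feature.
    k = 5
    pad = np.pad(img, k // 2, mode="reflect")
    win = np.lib.stride_tricks.sliding_window_view(pad, (k, k))
    texture = win.var(axis=(-1, -2))
    # Normalize each channel to [0, 1] and stack to (H, W, 4).
    chans = [soft, opacity, vessels, texture]
    chans = [(c - c.min()) / (c.max() - c.min() + 1e-8) for c in chans]
    return np.stack(chans, axis=-1)
```

The stacked array would then feed the first convolutional layer of the (modified) Xception backbone in place of a single grayscale channel.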

28 pages, 2379 KB  
Article
Decision-Aware Vision Mamba with Context-Guided Slot Mixing for Chest X-Ray Screening and Culture-Based Hierarchical Tuberculosis Classification
by Wangsu Jeon, Hyeonung Jang, Hongchang Lee, Chanho Park, Jiwon Lyu and Seongjun Choi
Sensors 2026, 26(7), 2100; https://doi.org/10.3390/s26072100 - 27 Mar 2026
Viewed by 713
Abstract
Distinguishing Active from Inactive Tuberculosis (TB) on Chest X-rays presents a clinical challenge due to overlapping radiological signs. This study introduces Vision Mamba CGSM, a deep learning framework integrating a State Space Model (SSM) backbone with a Context-Guided Slot Mixing (CGSM) module. The SSM captures global anatomical context, while the CGSM module isolates subtle pathological features by applying localized spatial attention. We validated the model using a hierarchical diagnostic scheme covering Normal, Pneumonia, Active TB, and Inactive TB. Experimental evaluations demonstrate an accuracy of 92.96% and a Youden Index of 79.55% on the independent test set. In the specific binary classification of Active vs. Inactive TB, the model recorded a specificity of 97.04%, outperforming standard baseline architectures including ResNet152 and ViT-B. Additional validations on external datasets confirm the consistent generalization of the proposed feature extraction mechanism.
(This article belongs to the Section Sensing and Imaging)
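The reported Youden Index is the summary statistic J = sensitivity + specificity − 1, usually taken at the threshold that maximizes it. A minimal sketch of that computation (the function name and sweep-all-thresholds strategy are illustrative, not the authors' code; `y_true` must contain both classes):

```python
import numpy as np

def youden_index(y_true, scores):
    """Maximum Youden's J = sensitivity + specificity - 1 over all
    decision thresholds, plus the threshold attaining it."""
    best_j, best_t = -1.0, None
    for t in np.unique(scores):
        pred = scores >= t
        tp = np.sum(pred & (y_true == 1))
        tn = np.sum(~pred & (y_true == 0))
        fn = np.sum(~pred & (y_true == 1))
        fp = np.sum(pred & (y_true == 0))
        sens = tp / (tp + fn)       # true-positive rate
        spec = tn / (tn + fp)       # true-negative rate
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_t = j, t
    return best_j, best_t
```

J ranges from 0 (no better than chance) to 1 (perfect separation), so the paper's 79.55% summarizes sensitivity and specificity in one balanced number.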

18 pages, 2686 KB  
Article
Externally Validated Deep Learning Analysis of Chest Radiographs for Differentiating COVID-19 and Viral Pneumonia
by Michael Masoomi, Latifa Al-Kandari, Haytam Ramzy and Mahday Abass Hamza
Diagnostics 2026, 16(7), 995; https://doi.org/10.3390/diagnostics16070995 - 26 Mar 2026
Viewed by 438
Abstract
Background/Objective: Chest radiography (CXR) is routinely used in the evaluation of respiratory disease; however, differentiating COVID-19 from other viral pneumonias on CXR remains challenging due to substantial radiographic overlap. In this study, a deep learning-based CXR classification model using a ResNet-50 architecture was developed to categorize images as normal, COVID-19, or non-COVID viral pneumonia, with emphasis on bias mitigation and external validation. Methods: Model training and internal validation were performed using harmonized publicly available datasets with patient-level stratified five-fold cross-validation, while generalizability was evaluated using an independent real-world institutional dataset from Adan Hospital, Kuwait, which was excluded from all training, validation, and hyperparameter tuning stages. Results: On the public validation dataset (n = 847), the model achieved an overall accuracy of 96.8% with balanced class-wise performance, whereas performance on the independent institutional dataset (n = 320) decreased to 93.7%, consistent with expected domain shift. Calibration analyses demonstrated well-aligned probabilistic estimates on validation data and acceptable calibration on institutional data. Negative predictive values remained high for normal and COVID-19 classes across datasets. Exploratory decision curve analysis demonstrated net benefit patterns for COVID-19 predictions under hypothetical threshold assumptions. Conclusions: These findings indicate that, when developed with explicit bias-mitigation strategies and evaluated using independent institutional data, deep learning-based CXR analysis may provide supportive, non-diagnostic decision signals for radiology triage workflows; however, prospective multicenter validation is required prior to clinical adoption.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

43 pages, 6336 KB  
Systematic Review
A Systematic Literature Review of You Only Look Once Architectures (v1–v12) in Healthcare Systems
by Ozgur Koray Sahingoz, Gozde Karatas Baydogmus and Emin Kugu
Diagnostics 2026, 16(6), 935; https://doi.org/10.3390/diagnostics16060935 - 22 Mar 2026
Viewed by 770
Abstract
Background/Objectives: The integration of deep learning and computer vision into healthcare has improved medical diagnosis and image analysis. Among object detection algorithms, the YOLO family has attracted substantial attention due to its ability to analyze images in real time with reported improvements in detection performance across multiple studies. This systematic review examines the evolution of YOLO algorithms for diagnostic applications in healthcare from YOLOv1 to YOLOv12. Methods: Peer-reviewed scientific articles published up to 1 January 2026 were retrieved from major scientific databases in accordance with PRISMA 2020 guidelines. The included studies applied YOLO models to medical imaging tasks, including disease and lesion detection and support for clinical procedures. Performance was synthesized using reported metrics such as average precision, accuracy, inference time, and computational efficiency. Results: The reviewed literature suggests progressive architectural refinements associated with reported improvements in diagnostic performance. YOLOv5 and YOLOv8 are the most frequently used architectures in diagnostic settings, reflecting a favorable trade-off between accuracy and computational complexity. YOLO-based methods have demonstrated strong performance across radiological, pathological, ophthalmological, and endoscopic applications. Conclusions: YOLO models have matured into robust and optimized solutions for medical image analysis; however, challenges remain in interpretability, cross-institution generalization, and deployment on edge devices. Future work on explainable YOLO-based diagnostics and energy-efficient model design will be particularly valuable.

46 pages, 3952 KB  
Article
A Hybrid Particle Swarm–Genetic Algorithm Framework for U-Net Hyperparameter Optimization in High-Precision Brain Tumor MRI Segmentation
by Shoffan Saifullah, Rafał Dreżewski, Anton Yudhana, Radius Tanone and Andiko Putro Suryotomo
Appl. Sci. 2026, 16(6), 3041; https://doi.org/10.3390/app16063041 - 21 Mar 2026
Viewed by 391
Abstract
Accurate and robust brain tumor segmentation remains a critical challenge in medical image analysis due to high inter-patient variability, complex tumor morphology, and modality-specific noise in MRI scans. This study proposes PSO-GA-U-Net, a novel hybrid deep learning framework that integrates Particle Swarm Optimization (PSO) and Genetic Algorithms (GAs) to optimize the U-Net architecture, enhancing segmentation performance and generalization. PSO dynamically tunes the learning rate to accommodate modality-specific variations, while the GA adaptively regulates dropout to improve feature diversity and reduce overfitting. The model was evaluated on three benchmark datasets—FBTS, BraTS 2021, and BraTS 2018—using five-fold cross-validation. PSO-GA-U-Net achieves Dice Similarity Coefficients (DSC) of 0.9587, 0.9406, and 0.9480 and Jaccard Index (JI) scores of 0.9209, 0.8881, and 0.9024, respectively, consistently outperforming state-of-the-art models in both overlap accuracy and boundary delineation. Statistical tests confirm that these improvements are significant across folds (p < 0.05). Visual heatmaps further illustrate the model’s ability to preserve structural integrity across tumor types and modalities. These results indicate that metaheuristic-guided deep learning offers a promising and clinically applicable solution for automatic tumor segmentation in radiological workflows.
(This article belongs to the Special Issue Advanced Techniques and Applications in Magnetic Resonance Imaging)
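The division of labor — PSO tuning the learning rate, a GA regulating dropout — can be illustrated on a toy surrogate objective. Everything below is an assumption for illustration: the quadratic `surrogate_val_loss` stands in for training a U-Net fold and scoring Dice, and the PSO/GA hyperparameters are generic textbook values, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(42)

def surrogate_val_loss(lr, dropout):
    """Toy stand-in for validation loss, minimized at lr = 1e-3 and
    dropout = 0.3. A real run would train the U-Net and measure Dice."""
    return (np.log10(lr) + 3.0) ** 2 + 4.0 * (dropout - 0.3) ** 2

n, iters = 10, 40
pos = rng.uniform(-5.0, -1.0, n)      # PSO particles over log10(learning rate)
vel = np.zeros(n)
pbest = pos.copy()
pop = rng.uniform(0.0, 0.8, n)        # GA population over the dropout rate
best_d = pop[0]
pbest_f = np.array([surrogate_val_loss(10**p, best_d) for p in pos])
gbest = pbest[np.argmin(pbest_f)]

for _ in range(iters):
    # PSO step: inertia + cognitive + social pull on log10(lr).
    r1, r2 = rng.random(n), rng.random(n)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -6.0, 0.0)
    f = np.array([surrogate_val_loss(10**p, best_d) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]
    # GA step: keep the best half by fitness, refill with mutated copies.
    g = np.array([surrogate_val_loss(10**gbest, d) for d in pop])
    parents = pop[np.argsort(g)[: n // 2]]
    children = np.clip(parents + rng.normal(0.0, 0.05, n // 2), 0.0, 0.9)
    pop = np.concatenate([parents, children])
    best_d = parents[0]

best_lr, best_dropout = 10.0**gbest, best_d
```

On this surrogate the alternating loop settles near the planted optimum (lr close to 1e-3, dropout close to 0.3); in the paper the same search is driven by fold-wise segmentation performance instead.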

25 pages, 2531 KB  
Article
FedIHRAS: A Privacy-Preserving Federated Learning Framework for Multi-Institutional Collaborative Radiological Analysis with Integrated Explainability and Automated Clinical Reporting
by André Luiz Marques Serrano, Gabriel Arquelau Pimenta Rodrigues, Guilherme Dantas Bispo, Vinícius Pereira Gonçalves, Geraldo Pereira Rocha Filho, Maria Gabriela Mendonça Peixoto, Rodrigo Bonacin and Rodolfo Ipolito Meneguette
Biomedicines 2026, 14(3), 713; https://doi.org/10.3390/biomedicines14030713 - 19 Mar 2026
Viewed by 508
Abstract
Background/Objectives: Federated learning has emerged as a promising paradigm for enabling collaborative artificial intelligence in healthcare while preserving data privacy. However, most existing frameworks focus on isolated tasks and lack integrated pipelines that combine classification, segmentation, explainability, and automated clinical reporting. Methods: This study proposes FedIHRAS, a privacy-preserving federated learning framework designed for multi-institutional radiological analysis. The system integrates multi-task deep learning modules, including pathology classification using a modified ResNet-50 backbone, anatomical segmentation, explainability through Grad-CAM, and automated report generation supported by semantic aggregation using SNOMED CT. The framework employs confidence-weighted aggregation, differential privacy mechanisms, and secure aggregation protocols to ensure privacy and robustness across heterogeneous institutional datasets. Results: Experimental evaluation was conducted across four large-scale chest X-ray datasets representing simulated institutional nodes, totaling approximately 874,000 images. FedIHRAS achieved high diagnostic performance with strong cross-institutional generalization and demonstrated improved robustness under non-IID data distributions. Additional experiments showed favorable communication efficiency, effective privacy–utility trade-offs, and strong agreement with expert radiologist assessments. Conclusion: The proposed FedIHRAS framework demonstrates that federated learning can support scalable, privacy-preserving, and clinically meaningful radiological AI systems. By integrating multi-task learning, explainability, and automated reporting within a unified federated architecture, the framework addresses key limitations of existing approaches and contributes to the development of collaborative AI in healthcare.
(This article belongs to the Special Issue Imaging Technology for Human Diseases)
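The FedIHRAS abstract describes confidence-weighted aggregation of model updates across institutional nodes but gives no implementation details. As a minimal sketch (the weighting scheme below is an assumption, not the paper's exact method), per-client parameter vectors can be combined by normalizing confidence scores and taking a weighted average, in the spirit of FedAvg with non-uniform client weights:

```python
import numpy as np

def confidence_weighted_aggregate(client_weights, client_confidences):
    """Aggregate per-client parameter vectors into a global model.

    Each client's contribution is scaled by a confidence score
    (e.g., derived from local validation performance), normalized
    so the weights sum to one.
    """
    conf = np.asarray(client_confidences, dtype=float)
    conf = conf / conf.sum()  # normalize confidences to a convex combination
    stacked = np.stack([np.asarray(w, dtype=float) for w in client_weights])
    return (conf[:, None] * stacked).sum(axis=0)

# Three simulated institutional nodes sharing a 4-parameter model
clients = [np.array([1.0, 2.0, 3.0, 4.0]),
           np.array([2.0, 2.0, 2.0, 2.0]),
           np.array([0.0, 4.0, 0.0, 4.0])]
global_params = confidence_weighted_aggregate(clients, [0.5, 0.3, 0.2])
# global_params is the per-parameter weighted mean: [1.1, 2.4, 2.1, 3.4]
```

In a real federated round this averaging would run on the server over encrypted or securely aggregated updates; differential-privacy noise, as the abstract mentions, would be added on the client side before transmission.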

20 pages, 1034 KB  
Review
The Evolving Landscape of COPD Typization
by Alberto Fantin, Nadia Castaldo, Giulia Sartori, Claudia di Chiara, Filippo Patrucco, Giuseppe Morana, Vincenzo Patruno and Ernesto Crisafulli
Medicina 2026, 62(3), 564; https://doi.org/10.3390/medicina62030564 - 18 Mar 2026
Viewed by 731
Abstract
Chronic obstructive pulmonary disease (COPD) represents an escalating global health challenge characterized by profound clinical and biological heterogeneity. Conventional diagnostic paradigms, primarily reliant on spirometric criteria and broad phenotypic labels, often fail to capture the complex molecular mechanisms that underpin the disease, limiting effective precision medicine. This narrative review synthesizes the evolving landscape of COPD characterization, analyzing the integration of biomarkers, advanced quantitative imaging, and multi-omics technologies. Key developments highlighted include the clinical validation of biologics targeting type 2 inflammation, which reinforce the paradigm shift from generic symptomatic management toward the identification of specific treatable traits. We further explore the role of artificial intelligence and deep learning in enhancing radiological precision and body composition analysis. Ultimately, this work proposes a transition toward a GETomics (Genetics, Environment, and Time) framework as a fundamental prerequisite for transcending the limitations of traditional classification systems and delivering truly personalized care in the 21st century. Full article
(This article belongs to the Special Issue New Trends in Chronic Obstructive Pulmonary Disease (COPD))

16 pages, 3201 KB  
Systematic Review
Artificial Intelligence in ALK-Rearranged NSCLC: Forecasting Response and Resistance
by Andreas Koulouris, Christos Tsagkaris, Konstantinos Kalaitzidis, Georgios Tsakonas and Giannis Mountzios
Cancers 2026, 18(6), 973; https://doi.org/10.3390/cancers18060973 - 18 Mar 2026
Viewed by 655
Abstract
Background/Objectives: The management and prognosis of ALK-rearranged non-small-cell lung cancer have substantially improved over the past decade. However, challenges remain in timely molecular identification, prediction of treatment response, and understanding resistance mechanisms. This systematic review evaluates and synthesizes the evidence on artificial intelligence (AI) approaches leveraging imaging, pathology, molecular, and clinical data in this setting. Methods: A systematic search was conducted for peer-reviewed studies published between 2020 and 2025. Eligible studies involved human subjects and applied AI, machine learning, or deep learning methods to predict ALK status or treatment-related outcomes using imaging, pathology, molecular, or multimodal data. Study selection followed the PRISMA 2020 guidelines. Data were extracted on study design, data modality, AI methodology, clinical objectives, and performance metrics. Bibliometric co-occurrence analysis was performed to characterize thematic patterns and temporal trends. Results: Thirteen studies met the inclusion criteria, most of which were retrospective and single-center. AI approaches were applied to radiologic, pathologic, molecular, or multimodal data. Models predicting ALK status reported area under the curve values ranging from 0.73 to 0.99, while prognostic and treatment-response models reported moderate to high discriminative performance. Bibliometric analysis identified two dominant research themes focused on molecular characterization and computational methodology, with a recent shift toward treatment-specific and integrative analyses. External validation and clinical implementation remained limited across studies. Conclusions: AI shows promising potential to support diagnosis, prognostication, and treatment assessment in ALK-rearranged lung cancer. However, methodological heterogeneity, limited external validation, and a lack of prospective studies currently constrain clinical translation. 
Full article
(This article belongs to the Special Issue ALK in Cancer: Lessons from the Future (2nd Edition))
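The ALK-prediction studies above report discrimination as area under the ROC curve (AUC, 0.73–0.99). For reference, AUC equals the Mann-Whitney probability that a randomly chosen positive case is scored above a randomly chosen negative one; a dependency-free sketch (toy labels and scores, not data from any reviewed study):

```python
def auc_score(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs in which the positive
    case receives the higher score; ties count as 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: model scores for ALK status (1 = rearranged)
y = [1, 1, 1, 0, 0, 0]
s = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
print(auc_score(y, s))  # 8 of 9 pairs ranked correctly -> 0.888...
```

In practice libraries such as scikit-learn (`roc_auc_score`) compute this directly; the pairwise form above simply makes the probabilistic interpretation of the reported 0.73–0.99 range explicit.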

22 pages, 4393 KB  
Article
An Adaptive Attention 3D U-Net for High-Fidelity MRI-to-CT Synthesis: Bridging the Anatomical Gap with CBAM
by Chaima Bensebihi, Nacer Eddine Benzebouchi, Nawel Zemmal, Abdallah Namoun, Aida Chefrour and Siham Amrouch
Diagnostics 2026, 16(6), 875; https://doi.org/10.3390/diagnostics16060875 - 16 Mar 2026
Viewed by 516
Abstract
Background: The generation of synthetic CT images from MRI scans represents a crucial step toward enabling MRI-only clinical workflows and supporting multi-modal integration in medical imaging, particularly in radiotherapy planning. Despite significant advancements in deep learning models, many current methods still struggle to reconstruct high-density structures, especially bone, and exhibit limited accuracy in density values. This shortcoming is largely attributed to the passage of excessive or noisy features through skip connections in the traditional U-Net architecture, which degrade the quality of information transmitted to the decoder, negatively impacting the clarity of anatomical boundaries and the pixel-wise accuracy of the resulting synthetic image. Methods: In this work, we propose an enhanced 3D U-Net architecture in which the Convolutional Block Attention Module (CBAM) is systematically integrated within each skip connection. The CBAM sequentially applies channel and spatial attention to adaptively reweight encoder feature maps before fusion with the decoder, thereby emphasizing anatomically relevant structures while suppressing irrelevant feature propagation. The model was trained and evaluated on the SynthRAD2023 (Task 1—Brain) MRI–CT dataset. To rigorously assess the contribution of the attention mechanism, a dedicated ablation study was conducted comparing three variants: 3D U-Net with Squeeze-and-Excitation (SE), Coordinate Attention (CA), and the proposed CBAM module. Performance was evaluated using Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Normalized Cross-Correlation (NCC). Results: The ablation study demonstrated that the CBAM-enhanced model consistently outperformed both SE- and CA-based variants across all quantitative metrics. Specifically, the proposed method achieved an MAE of 38.2 ± 5.4 HU and an RMSE of 51.0 ± 12.0 HU, representing the lowest reconstruction errors among the evaluated models. In addition, it obtained a PSNR of 29.45 ± 2.10 dB, SSIM of 0.940 ± 0.031, and NCC of 0.967 ± 0.015, indicating superior structural preservation and strong voxel-wise correspondence between synthesized and reference CT volumes. These results confirm that the sequential integration of channel and spatial attention provides a statistically and practically meaningful improvement for high-fidelity MRI-to-CT synthesis. Conclusions: Generating high-resolution brain CT images from brain MRI scans using a 3D U-Net network enhanced with a CBAM module can contribute to supporting the clinical workflow by providing additional diagnostic data without the need for extra radiological examinations, thereby enhancing diagnostic efficiency and reducing radiation exposure. This technique helps reduce patient exposure to radiation and improves accessibility in resource-limited settings. Furthermore, this method is valuable for retrospective studies, surgical planning, and image-guided therapy, where complete multi-modal data may not always be available. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
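The MRI-to-CT study above reports MAE, RMSE, PSNR, and NCC between synthesized and reference CT volumes. As a rough NumPy reference for these metrics (SSIM is usually taken from a library such as scikit-image and is omitted here; the volume shape and HU noise level below are illustrative, not from the paper):

```python
import numpy as np

def mae(a, b):
    """Mean Absolute Error between two volumes (e.g., in HU)."""
    return np.abs(a - b).mean()

def rmse(a, b):
    """Root Mean Square Error between two volumes."""
    return np.sqrt(((a - b) ** 2).mean())

def psnr(a, b, data_range):
    """Peak Signal-to-Noise Ratio in dB for a given dynamic range."""
    return 20 * np.log10(data_range) - 10 * np.log10(((a - b) ** 2).mean())

def ncc(a, b):
    """Normalized cross-correlation (Pearson) between two volumes."""
    az = (a - a.mean()) / a.std()
    bz = (b - b.mean()) / b.std()
    return (az * bz).mean()

rng = np.random.default_rng(0)
ref = rng.normal(0.0, 300.0, size=(8, 8, 8))        # reference CT values in HU
syn = ref + rng.normal(0.0, 40.0, size=ref.shape)   # synthetic CT with residual error
print(mae(ref, syn), rmse(ref, syn), psnr(ref, syn, 2000.0), ncc(ref, syn))
```

Note that RMSE is always at least as large as MAE, which is consistent with the reported 38.2 HU MAE versus 51.0 HU RMSE; the gap reflects how heavy-tailed the voxel-wise errors are.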
