Search Results (797)

Search Parameters:
Keywords = machine learning in medical imaging

70 pages, 4767 KB  
Review
Advancements in Breast Cancer Detection: A Review of Global Trends, Risk Factors, Imaging Modalities, Machine Learning, and Deep Learning Approaches
by Md. Atiqur Rahman, M. Saddam Hossain Khan, Yutaka Watanobe, Jarin Tasnim Prioty, Tasfia Tahsin Annita, Samura Rahman, Md. Shakil Hossain, Saddit Ahmed Aitijjo, Rafsun Islam Taskin, Victor Dhrubo, Abubokor Hanip and Touhid Bhuiyan
BioMedInformatics 2025, 5(3), 46; https://doi.org/10.3390/biomedinformatics5030046 - 20 Aug 2025
Viewed by 632
Abstract
Breast cancer remains a critical global health challenge, with over 2.1 million new cases annually. This review systematically evaluates recent advancements (2022–2024) in machine and deep learning approaches for breast cancer detection and risk management. Our analysis demonstrates that deep learning models achieve 90–99% accuracy across imaging modalities, with convolutional neural networks showing particular promise in mammography (99.96% accuracy) and ultrasound (100% accuracy) applications. Tabular data models using XGBoost achieve comparable performance (99.12% accuracy) for risk prediction. The study confirms that lifestyle modifications (dietary changes, BMI management, and alcohol reduction) significantly mitigate breast cancer risk. Key findings include the following: (1) hybrid models combining imaging and clinical data enhance early detection, (2) thermal imaging achieves high diagnostic accuracy (97–100% in optimized models) while offering a cost-effective, less hazardous screening option, (3) challenges persist in data variability and model interpretability. These results highlight the need for integrated diagnostic systems combining technological innovations with preventive strategies. The review underscores AI’s transformative potential in breast cancer diagnosis while emphasizing the continued importance of risk factor management. Future research should prioritize multi-modal data integration and clinically interpretable models.
(This article belongs to the Section Imaging Informatics)

14 pages, 2224 KB  
Article
Evaluation of Transfer Learning Efficacy for Surgical Suture Quality Classification on Limited Datasets
by Roman Ishchenko, Maksim Solopov, Andrey Popandopulo, Elizaveta Chechekhina, Viktor Turchin, Fedor Popivnenko, Aleksandr Ermak, Konstantyn Ladyk, Anton Konyashin, Kirill Golubitskiy, Aleksei Burtsev and Dmitry Filimonov
J. Imaging 2025, 11(8), 266; https://doi.org/10.3390/jimaging11080266 - 8 Aug 2025
Viewed by 328
Abstract
This study evaluates the effectiveness of transfer learning with pre-trained convolutional neural networks (CNNs) for the automated binary classification of surgical suture quality (high-quality/low-quality) using photographs of three suture types: interrupted open vascular sutures (IOVS), continuous over-and-over open sutures (COOS), and interrupted laparoscopic sutures (ILS). To address the challenge of limited medical data, eight state-of-the-art CNN architectures—EfficientNetB0, ResNet50V2, MobileNetV3Large, VGG16, VGG19, InceptionV3, Xception, and DenseNet121—were trained and validated on small datasets (100–190 images per type) using 5-fold cross-validation. Performance was assessed using the F1-score, AUC-ROC, and a custom weighted stability-aware score (Scoreadj). The results demonstrate that transfer learning achieves robust classification (F1 > 0.90 for IOVS/ILS, 0.79 for COOS) despite data scarcity. ResNet50V2, DenseNet121, and Xception were more stable by Scoreadj, with ResNet50V2 achieving the highest AUC-ROC (0.959 ± 0.008) for IOVS internal view classification. GradCAM visualizations confirmed model focus on clinically relevant features (e.g., stitch uniformity, tissue apposition). These findings validate transfer learning as a powerful approach for developing objective, automated surgical skill assessment tools, reducing reliance on subjective expert evaluations while maintaining accuracy in resource-constrained settings.
(This article belongs to the Special Issue Advances in Machine Learning for Medical Imaging Applications)
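The transfer recipe the abstract above describes (pre-train on plentiful source data, then fine-tune on a scarce target set) can be sketched framework-free. The following is a minimal, illustrative sketch only: it substitutes a logistic-regression model trained on synthetic 2-D blobs for the study's CNNs and suture photographs, and all data and parameters are assumptions, not the paper's code.

```python
import math
import random

random.seed(0)

def make_data(n, shift):
    # Two Gaussian blobs per class; `shift` nudges the target domain so it
    # resembles, but does not match, the source domain.
    data = []
    for _ in range(n):
        y = random.choice([0, 1])
        x1 = random.gauss(2.0 * y - 1.0 + shift, 0.8)
        x2 = random.gauss(1.0 - 2.0 * y, 0.8)
        data.append(((x1, x2), y))
    return data

def train(data, w, b, epochs=100, lr=0.1):
    # Logistic regression by stochastic gradient descent.
    for _ in range(epochs):
        for (x1, x2), y in data:
            p = 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
            g = p - y
            w[0] -= lr * g * x1
            w[1] -= lr * g * x2
            b -= lr * g
    return w, b

def accuracy(data, w, b):
    return sum((w[0] * x1 + w[1] * x2 + b > 0) == (y == 1)
               for (x1, x2), y in data) / len(data)

source = make_data(500, shift=0.0)       # plentiful source data ("pre-training")
target_train = make_data(10, shift=0.3)  # scarce target data
target_test = make_data(300, shift=0.3)

# Transfer: start fine-tuning from the source-trained weights.
w, b = train(source, [0.0, 0.0], 0.0)
w, b = train(target_train, w, b, epochs=50)
acc_transfer = accuracy(target_test, w, b)

# Baseline: train from scratch on the small target set alone.
w0, b0 = train(target_train, [0.0, 0.0], 0.0, epochs=50)
acc_scratch = accuracy(target_test, w0, b0)

print("transfer:", acc_transfer, "scratch:", acc_scratch)
```

The design point mirrors the paper's: when the target dataset is tiny, initializing from weights learned on a related, data-rich task typically preserves accuracy that training from scratch cannot reach reliably.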

24 pages, 624 KB  
Review
Integrating Artificial Intelligence into Perinatal Care Pathways: A Scoping Review of Reviews of Applications, Outcomes, and Equity
by Rabie Adel El Arab, Omayma Abdulaziz Al Moosa, Zahraa Albahrani, Israa Alkhalil, Joel Somerville and Fuad Abuadas
Nurs. Rep. 2025, 15(8), 281; https://doi.org/10.3390/nursrep15080281 - 31 Jul 2025
Viewed by 504
Abstract
Background: Artificial intelligence (AI) and machine learning (ML) have been reshaping maternal, fetal, neonatal, and reproductive healthcare by enhancing risk prediction, diagnostic accuracy, and operational efficiency across the perinatal continuum. However, no comprehensive synthesis has yet been published. Objective: To conduct a scoping review of reviews of AI/ML applications spanning reproductive, prenatal, postpartum, neonatal, and early child-development care. Methods: We searched PubMed, Embase, the Cochrane Library, Web of Science, and Scopus through April 2025. Two reviewers independently screened records, extracted data, and assessed methodological quality using AMSTAR 2 for systematic reviews, ROBIS for bias assessment, SANRA for narrative reviews, and JBI guidance for scoping reviews. Results: Thirty-nine reviews met our inclusion criteria. In preconception and fertility treatment, convolutional neural network-based platforms can identify viable embryos and key sperm parameters with over 90 percent accuracy, and machine-learning models can personalize follicle-stimulating hormone regimens to boost mature oocyte yield while reducing overall medication use. Digital sexual-health chatbots have enhanced patient education, pre-exposure prophylaxis adherence, and safer sexual behaviors, although data-privacy safeguards and bias mitigation remain priorities. During pregnancy, advanced deep-learning models can segment fetal anatomy on ultrasound images with more than 90 percent overlap compared to expert annotations and can detect anomalies with sensitivity exceeding 93 percent. Predictive biometric tools can estimate gestational age within one week with accuracy and fetal weight within approximately 190 g. In the postpartum period, AI-driven decision-support systems and conversational agents can facilitate early screening for depression and can guide follow-up care. Wearable sensors enable remote monitoring of maternal blood pressure and heart rate to support timely clinical intervention. Within neonatal care, the Heart Rate Observation (HeRO) system has reduced mortality among very low-birth-weight infants by roughly 20 percent, and additional AI models can predict neonatal sepsis, retinopathy of prematurity, and necrotizing enterocolitis with area-under-the-curve values above 0.80. From an operational standpoint, automated ultrasound workflows deliver biometric measurements at about 14 milliseconds per frame, and dynamic scheduling in IVF laboratories lowers staff workload and per-cycle costs. Home-monitoring platforms for pregnant women are associated with 7–11 percent reductions in maternal mortality and preeclampsia incidence. Despite these advances, most evidence derives from retrospective, single-center studies with limited external validation. Low-resource settings, especially in Sub-Saharan Africa, remain under-represented, and few AI solutions are fully embedded in electronic health records. Conclusions: AI holds transformative promise for perinatal care but will require prospective multicenter validation, equity-centered design, robust governance, transparent fairness audits, and seamless electronic health record integration to translate these innovations into routine practice and improve maternal and neonatal outcomes.

21 pages, 22884 KB  
Data Descriptor
An Open-Source Clinical Case Dataset for Medical Image Classification and Multimodal AI Applications
by Mauro Nievas Offidani, Facundo Roffet, María Carolina González Galtier, Miguel Massiris and Claudio Delrieux
Data 2025, 10(8), 123; https://doi.org/10.3390/data10080123 - 31 Jul 2025
Viewed by 734
Abstract
High-quality, openly accessible clinical datasets remain a significant bottleneck in advancing both research and clinical applications within medical artificial intelligence. Case reports, often rich in multimodal clinical data, represent an underutilized resource for developing medical AI applications. We present an enhanced version of MultiCaRe, a dataset derived from open-access case reports on PubMed Central. This new version addresses the limitations identified in the previous release and incorporates newly added clinical cases and images (totaling 93,816 and 130,791, respectively), along with a refined hierarchical taxonomy featuring over 140 categories. Image labels have been meticulously curated using a combination of manual and machine learning-based label generation and validation, ensuring a higher quality for image classification tasks and the fine-tuning of multimodal models. To facilitate its use, we also provide a Python package for dataset manipulation, pretrained models for medical image classification, and two dedicated websites. The updated MultiCaRe dataset expands the resources available for multimodal AI research in medicine. Its scale, quality, and accessibility make it a valuable tool for developing medical AI systems, as well as for educational purposes in clinical and computational fields.

40 pages, 3463 KB  
Review
Machine Learning-Powered Smart Healthcare Systems in the Era of Big Data: Applications, Diagnostic Insights, Challenges, and Ethical Implications
by Sita Rani, Raman Kumar, B. S. Panda, Rajender Kumar, Nafaa Farhan Muften, Mayada Ahmed Abass and Jasmina Lozanović
Diagnostics 2025, 15(15), 1914; https://doi.org/10.3390/diagnostics15151914 - 30 Jul 2025
Viewed by 1157
Abstract
Healthcare data are growing rapidly, and patients increasingly seek customized, effective healthcare services. Big data and machine learning (ML)-enabled smart healthcare systems hold revolutionary potential. Unlike previous reviews that separately address AI or big data, this work synthesizes their convergence through real-world case studies, cross-domain ML applications, and a critical discussion of ethical integration in smart diagnostics. The review focuses on the role of big data analysis and ML in enabling better diagnosis, improved operational efficiency, and individualized patient care. It explores the principal challenges of data heterogeneity, privacy, and computational complexity, as well as advanced methods such as federated learning (FL) and edge computing. Applications in real-world settings, such as disease prediction, medical imaging, drug discovery, and remote monitoring, illustrate how ML methods such as deep learning (DL) and natural language processing (NLP) enhance clinical decision-making. A comparison of ML models highlights their value in dealing with large and heterogeneous healthcare datasets. In addition, nascent technologies such as wearables and the Internet of Medical Things (IoMT) are examined for their role in supporting real-time, data-driven delivery of healthcare. The paper emphasizes the pragmatic application of intelligent systems by highlighting case studies that report up to 95% diagnostic accuracy and cost savings. The review ends with future directions toward scalable, ethical, and interpretable AI-powered healthcare systems. It bridges the gap between ML algorithms and smart diagnostics, offering critical perspectives for clinicians, data scientists, and policymakers.
(This article belongs to the Special Issue Machine-Learning-Based Disease Diagnosis and Prediction)

36 pages, 4309 KB  
Review
Deep Learning Techniques for Prostate Cancer Analysis and Detection: Survey of the State of the Art
by Olushola Olawuyi and Serestina Viriri
J. Imaging 2025, 11(8), 254; https://doi.org/10.3390/jimaging11080254 - 28 Jul 2025
Viewed by 832
Abstract
The human interpretation of medical images, especially for the detection of prostate cancer, has traditionally been a time-consuming and challenging process. Manual examination is not only slow but also prone to error, carrying the risk of unnecessary biopsies due to the inherent limitations of human visual interpretation. With technical advancements and the rapid growth of computing resources, machine learning (ML) and deep learning (DL) models have been applied experimentally to medical image analysis, particularly lesion detection. Although several state-of-the-art models have shown promising results, challenges remain when analysing prostate lesion images due to the distinctive and complex nature of medical images. This study offers an elaborate review of the techniques used to diagnose prostate cancer from medical images. The goal is to provide a comprehensive and valuable resource that helps researchers develop accurate and autonomous models for effectively detecting prostate cancer. The paper is structured as follows: first, we outline the issues with prostate lesion detection; we then review methods for analysing prostate lesion images and classification approaches; next, we examine convolutional neural network (CNN) architectures and explore their applications in DL for image-based prostate cancer diagnosis; finally, we provide an overview of prostate cancer datasets and evaluation metrics in deep learning. In conclusion, this review analyses key findings, highlights the challenges in prostate lesion detection, and evaluates the effectiveness and limitations of current deep learning techniques.
(This article belongs to the Section Medical Imaging)

25 pages, 2887 KB  
Article
Federated Learning Based on an Internet of Medical Things Framework for a Secure Brain Tumor Diagnostic System: A Capsule Networks Application
by Roman Rodriguez-Aguilar, Jose-Antonio Marmolejo-Saucedo and Utku Köse
Mathematics 2025, 13(15), 2393; https://doi.org/10.3390/math13152393 - 25 Jul 2025
Viewed by 402
Abstract
Artificial intelligence (AI) has already played a significant role in the healthcare sector, particularly in image-based medical diagnosis. Deep learning models have produced satisfactory and useful results for accurate decision-making. Among the various types of medical images, magnetic resonance imaging (MRI) is frequently utilized in deep learning applications to analyze detailed structures and organs in the body, using advanced intelligent software. However, challenges related to performance and data privacy often arise when using medical data from patients and healthcare institutions. To address these issues, new approaches have emerged, such as federated learning. This technique ensures the secure exchange of sensitive patient and institutional data. It enables machine learning or deep learning algorithms to establish a client–server relationship, whereby specific parameters are securely shared between models while maintaining the integrity of the learning tasks being executed. Federated learning has been successfully applied in medical settings, including diagnostic applications involving medical images such as MRI data. This research introduces an analytical intelligence system based on an Internet of Medical Things (IoMT) framework that employs federated learning to provide a safe and effective diagnostic solution for brain tumor identification. By utilizing specific brain MRI datasets, the model enables multiple local capsule networks (CapsNet) to achieve improved classification results. The average accuracy rate of the CapsNet model exceeds 97%. The precision rate indicates that the CapsNet model performs well in accurately predicting true classes. Additionally, the recall findings suggest that this model is effective in detecting the target classes of meningiomas, pituitary tumors, and gliomas. The integration of these components into an analytical intelligence system that supports the work of healthcare personnel is the main contribution of this work. Evaluations have shown that this approach is effective for diagnosing brain tumors while ensuring data privacy and security. Moreover, it represents a valuable tool for enhancing the efficiency of the medical diagnostic process.
(This article belongs to the Special Issue Innovations in Optimization and Operations Research)
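The client–server parameter sharing described above follows the federated-averaging pattern: each client trains locally on private data and only model parameters travel to the server. A minimal sketch, assuming a one-parameter least-squares model as a stand-in for the paper's CapsNet and three synthetic "hospital" datasets:

```python
# Minimal federated-averaging round: each client trains locally and only
# model parameters (never raw data) are sent to the server for averaging.

def local_update(weights, data, lr=0.1):
    # Toy "training": one gradient step of least squares y ≈ w * x per sample.
    w = weights
    for x, y in data:
        w -= lr * (w * x - y) * x
    return w

def fed_avg(client_weights, client_sizes):
    # Server: weighted average of client parameters, proportional to data size.
    total = sum(client_sizes)
    return sum(w * n for w, n in zip(client_weights, client_sizes)) / total

# Three hospitals hold private datasets drawn from the same rule y = 2x.
clients = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(0.5, 1.0), (1.5, 3.0), (3.0, 6.0)],
    [(2.5, 5.0)],
]

global_w = 0.0
for round_no in range(50):
    local = [local_update(global_w, data) for data in clients]
    global_w = fed_avg(local, [len(d) for d in clients])

print(round(global_w, 2))  # converges toward 2.0
```

The same loop structure scales to deep networks: `local_update` becomes several epochs of SGD on the client's images, and `fed_avg` averages full weight tensors.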

23 pages, 13834 KB  
Article
Using Shapley Values to Explain the Decisions of Convolutional Neural Networks in Glaucoma Diagnosis
by Jose Sigut, Francisco Fumero and Tinguaro Díaz-Alemán
Algorithms 2025, 18(8), 464; https://doi.org/10.3390/a18080464 - 25 Jul 2025
Viewed by 298
Abstract
This work aims to leverage Shapley values to explain the decisions of convolutional neural networks trained to predict glaucoma. Although Shapley values offer a mathematically sound approach rooted in game theory, they require evaluating all possible combinations of features, which can be computationally intensive. To address this challenge, we introduce a novel strategy that discretizes the input by dividing the image into standard regions or sectors of interest, significantly reducing the number of features while maintaining clinical relevance. Moreover, applying Shapley values in a machine learning context necessitates the ability to selectively exclude features to evaluate their combinations. To achieve this, we propose a method involving the occlusion of specific sectors and re-training only the non-convolutional portion of the models. Despite achieving strong predictive performance, our findings reveal limited alignment with medical expectations, particularly the unexpected dominance of the background sector in the model’s decision-making process. This highlights potential concerns regarding the interpretability of convolutional neural network-based glaucoma diagnostics. Full article
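With the image reduced to a handful of sectors, exact Shapley values become tractable by enumerating every coalition. A sketch under stated assumptions: the sector names and the additive-plus-interaction value function below are illustrative stand-ins for the paper's re-trained, sector-occluded networks.

```python
from itertools import combinations
from math import factorial

def shapley(players, value):
    """Exact Shapley values: each player's marginal contribution averaged
    over all coalitions, weighted by the number of orderings."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for coal in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[p] += weight * (value(set(coal) | {p}) - value(set(coal)))
    return phi

# Toy "model confidence" as a function of which sectors are visible.
# Scores are illustrative, not outputs of the paper's networks.
scores = {"disc": 0.5, "cup": 0.3, "background": 0.1}

def value(coalition):
    # Assume additive contributions plus a small disc-cup interaction.
    v = sum(scores[s] for s in coalition)
    if {"disc", "cup"} <= coalition:
        v += 0.1
    return v

phi = shapley(list(scores), value)
print(phi)
```

The efficiency property (attributions sum to the grand-coalition payoff) makes this a useful sanity check: here the disc–cup interaction is split equally between the two sectors, so `phi["disc"]` ends up at 0.55.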

27 pages, 2617 KB  
Article
Monte Carlo Gradient Boosted Trees for Cancer Staging: A Machine Learning Approach
by Audrey Eley, Thu Thu Hlaing, Daniel Breininger, Zarindokht Helforoush and Nezamoddin N. Kachouie
Cancers 2025, 17(15), 2452; https://doi.org/10.3390/cancers17152452 - 24 Jul 2025
Viewed by 478
Abstract
Machine learning algorithms are commonly employed for the classification and interpretation of high-dimensional data. The classification task is often broken into two separate procedures, with different methods applied to achieve accurate results and produce interpretable outcomes: first, an effective subset of the high-dimensional features must be extracted, and then the selected subset is used to train a classifier. Gradient Boosted Trees (GBT) are ensemble models that, particularly due to their robustness, ability to model complex nonlinear interactions, and feature interpretability, are well suited to complex applications. XGBoost (eXtreme Gradient Boosting) is a high-performance implementation of GBT that incorporates regularization, parallel computation, and efficient tree pruning, making it an efficient, interpretable, and scalable classifier with potential applications to medical data analysis. In this study, a Monte Carlo Gradient Boosted Trees (MCGBT) model is proposed for both feature reduction and classification. The proposed MCGBT method was applied to a lung cancer dataset for feature identification and classification. The dataset contains 107 radiomic features, which are quantitative imaging biomarkers extracted from CT scans. A reduced set of 12 radiomic features was identified, and patients were classified into different cancer stages. A cancer staging accuracy of 90.3% across 100 independent runs was achieved, on par with that obtained using the full set of 107 features, enabling lean and deployable classifiers.
(This article belongs to the Section Cancer Informatics and Big Data)
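The Monte Carlo idea above (repeated randomized runs, keeping the features that are selected consistently) can be illustrated without the actual learner. In this stdlib-only stand-in, a crude between-class mean-difference score replaces XGBoost's feature importances, and the synthetic "radiomics" are assumptions for demonstration only:

```python
import random

random.seed(1)

def make_patient(stage):
    # Synthetic "radiomics": features 0 and 1 track stage, the rest are noise.
    f = [random.gauss(stage, 0.5), random.gauss(2 * stage, 1.0)]
    f += [random.gauss(0, 1) for _ in range(4)]
    return f, stage

data = [make_patient(random.choice([0, 1])) for _ in range(200)]

def separation(sample, j):
    # |mean difference| between classes for feature j: a crude importance proxy.
    a = [f[j] for f, y in sample if y == 0]
    b = [f[j] for f, y in sample if y == 1]
    return abs(sum(b) / len(b) - sum(a) / len(a))

n_features, n_runs, top_k = 6, 100, 2
counts = [0] * n_features
for _ in range(n_runs):
    boot = [random.choice(data) for _ in range(len(data))]  # bootstrap resample
    ranked = sorted(range(n_features), key=lambda j: -separation(boot, j))
    for j in ranked[:top_k]:
        counts[j] += 1

# Keep only features chosen in at least 80% of the Monte Carlo runs.
stable = [j for j in range(n_features) if counts[j] >= 0.8 * n_runs]
print("selection counts:", counts)
print("stable features:", stable)
```

The selection-frequency threshold is what makes the reduced set robust: a feature that ranks highly only by chance in one resample rarely survives a hundred runs.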

17 pages, 382 KB  
Review
Physics-Informed Neural Networks: A Review of Methodological Evolution, Theoretical Foundations, and Interdisciplinary Frontiers Toward Next-Generation Scientific Computing
by Zhiyuan Ren, Shijie Zhou, Dong Liu and Qihe Liu
Appl. Sci. 2025, 15(14), 8092; https://doi.org/10.3390/app15148092 - 21 Jul 2025
Viewed by 2495
Abstract
Physics-informed neural networks (PINNs) have emerged as a transformative methodology integrating deep learning with scientific computing. This review establishes a three-dimensional analytical framework to systematically decode PINNs’ development through methodological innovation, theoretical breakthroughs, and cross-disciplinary convergence. The contributions are threefold. First, we identify the co-evolutionary path of algorithmic architectures from adaptive optimization (neural tangent kernel-guided weighting achieving 230% convergence acceleration in Navier-Stokes solutions) to hybrid numerical-deep learning integration (5× speedup via domain decomposition). Second, we construct bidirectional theory-application mappings in which convergence analysis (operator approximation theory) and generalization guarantees (Bayesian-physical hybrid frameworks) directly inform engineering implementations, as validated by a 72% cost reduction compared to FEM in high-dimensional spaces (p < 0.01, n = 15 benchmarks). Third, we pioneer cross-domain knowledge transfer through application-specific architectures: TFE-PINN for turbulent flows (5.12 ± 0.87% error in NASA hypersonic tests), ReconPINN for medical imaging (SSIM = +0.18 ± 0.04 on multi-institutional MRI), and SeisPINN for seismic systems (0.52 ± 0.18 km localization accuracy). We further present a technological roadmap highlighting three critical directions for PINN 2.0: neuro-symbolic approaches, federated physics learning, and quantum-accelerated optimization. This work provides methodological guidelines and theoretical foundations for next-generation scientific machine learning systems.

20 pages, 1606 KB  
Article
Brain Tumour Segmentation Using Choquet Integrals and Coalition Game
by Makhlouf Derdour, Mohammed El Bachir Yahiaoui, Moustafa Sadek Kahil, Mohamed Gasmi and Mohamed Chahine Ghanem
Information 2025, 16(7), 615; https://doi.org/10.3390/info16070615 - 17 Jul 2025
Viewed by 376
Abstract
Artificial Intelligence (AI) and computer-aided diagnosis (CAD) have revolutionised various aspects of modern life, particularly in the medical domain. These technologies enable efficient solutions for complex challenges, such as accurately segmenting brain tumour regions, which significantly aid medical professionals in monitoring and treating patients. This research focuses on segmenting glioma brain tumour lesions in MRI images by analysing them at the pixel level. The aim is to develop a deep learning-based approach that enables ensemble learning to achieve precise and consistent segmentation of brain tumours. While many studies have explored ensemble learning techniques in this area, most rely on aggregation functions like the Weighted Arithmetic Mean (WAM) without accounting for the interdependencies between classifier subsets. To address this limitation, the Choquet integral is employed for ensemble learning, along with a novel evaluation framework for fuzzy measures. This framework integrates coalition game theory, information theory, and Lambda fuzzy approximation. Three distinct fuzzy measure sets are computed using different weighting strategies informed by these theories. Based on these measures, three Choquet integrals are calculated for segmenting different components of brain lesions, and their outputs are subsequently combined. The BraTS-2020 online validation dataset is used to validate the proposed approach. Results demonstrate superior performance compared with several recent methods, achieving Dice Similarity Coefficients of 0.896, 0.851, and 0.792 and 95% Hausdorff distances of 5.96 mm, 6.65 mm, and 20.74 mm for the whole tumour, tumour core, and enhancing tumour core, respectively.
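Unlike a weighted mean, the Choquet integral aggregates classifier outputs with respect to a fuzzy measure, so overlapping or complementary classifier subsets can carry non-additive weight. A minimal sketch using a λ-fuzzy (Sugeno) measure: the per-classifier densities and pixel scores below are assumptions for illustration, not the paper's learned measures.

```python
def solve_lambda(densities):
    """Sugeno lambda-measure parameter: the non-trivial root of
    1 + L = prod(1 + L * g_i), found by bisection."""
    def f(L):
        p = 1.0
        for g in densities:
            p *= 1.0 + L * g
        return p - (1.0 + L)
    # Root is positive when densities sum to < 1, in (-1, 0) when they sum to > 1.
    lo, hi = (1e-9, 1e6) if sum(densities) < 1 else (-1 + 1e-9, -1e-9)
    for _ in range(200):
        mid = (lo + hi) / 2
        if (f(mid) < 0) == (f(lo) < 0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def choquet(scores, densities):
    """Choquet integral of classifier scores w.r.t. the lambda-fuzzy measure:
    visit scores in descending order, weighting each drop by the measure gain."""
    lam = solve_lambda(list(densities.values()))
    total, g_coal = 0.0, 0.0
    for name in sorted(scores, key=scores.get, reverse=True):
        g_new = g_coal + densities[name] + lam * g_coal * densities[name]
        total += scores[name] * (g_new - g_coal)
        g_coal = g_new
    return total

# Toy fusion: three segmentation networks score one pixel as "tumour".
scores = {"net_a": 0.9, "net_b": 0.6, "net_c": 0.2}
densities = {"net_a": 0.3, "net_b": 0.4, "net_c": 0.2}  # individual reliabilities

fused = choquet(scores, densities)
print(round(fused, 4))
```

Because the densities sum to 0.9 < 1, λ comes out positive and coalitions are worth slightly more than their parts, modelling classifiers that reinforce one another; the fused score always stays within the range of the individual scores.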

22 pages, 2320 KB  
Review
Use of Radiomics in Characterizing Tumor Hypoxia
by Mohan Huang, Helen K. W. Law and Shing Yau Tam
Int. J. Mol. Sci. 2025, 26(14), 6679; https://doi.org/10.3390/ijms26146679 - 11 Jul 2025
Viewed by 741
Abstract
Tumor hypoxia involves limited oxygen supply within the tumor microenvironment and is closely associated with aggressiveness, metastasis, and resistance to common cancer treatment modalities such as chemotherapy and radiotherapy. Traditional methodologies for hypoxia assessment, such as the use of invasive probes and clinical biomarkers, are generally not very suitable for routine clinical applications. Radiomics provides a non-invasive approach to hypoxia assessment by extracting quantitative features from medical images. Thus, radiomics is important in diagnosis and the formulation of a treatment strategy for tumor hypoxia. This article discusses the various imaging techniques used for the assessment of tumor hypoxia including magnetic resonance imaging (MRI), positron emission tomography (PET), and computed tomography (CT). It introduces the use of radiomics with machine learning and deep learning for extracting quantitative features, along with its possible clinical use in hypoxic tumors. This article further summarizes the key challenges hindering the clinical translation of radiomics, including the lack of imaging standardization and the limited availability of hypoxia-labeled datasets. It also highlights the potential of integrating radiomics with multi-omics to enhance hypoxia visualization and guide personalized cancer treatment.
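The feature-extraction step radiomics rests on can be illustrated with first-order statistics computed over a segmented region. A minimal sketch: the 4×4 "CT slice" and mask are toy data, and real pipelines (e.g., PyRadiomics) compute many more feature families (shape, texture, wavelet) than the four shown here.

```python
import math

def first_order_features(image, mask):
    """First-order radiomic features over the masked region of a 2-D image."""
    vals = [image[i][j] for i in range(len(image))
            for j in range(len(image[0])) if mask[i][j]]
    n = len(vals)
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / n
    # Intensity entropy over a coarse 8-bin histogram.
    lo, hi = min(vals), max(vals)
    width = (hi - lo) / 8 or 1.0
    hist = [0] * 8
    for v in vals:
        hist[min(int((v - lo) / width), 7)] += 1
    entropy = -sum(c / n * math.log2(c / n) for c in hist if c)
    return {"mean": mean, "variance": var, "entropy": entropy,
            "energy": sum(v * v for v in vals)}

# A toy "CT slice" with a 2x2 lesion region marked in the mask.
image = [[10, 10, 10, 10],
         [10, 50, 60, 10],
         [10, 55, 65, 10],
         [10, 10, 10, 10]]
mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]

feats = first_order_features(image, mask)
print(feats)
```

Vectors of such features, extracted per lesion, are what the machine learning and deep learning models discussed above consume in place of raw images.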

14 pages, 1106 KB  
Article
Metastatic Melanoma Prognosis Prediction Using a TC Radiomic-Based Machine Learning Model: A Preliminary Study
by Antonino Guerrisi, Maria Teresa Maccallini, Italia Falcone, Alessandro Valenti, Ludovica Miseo, Sara Ungania, Vincenzo Dolcetti, Fabio Valenti, Marianna Cerro, Flora Desiderio, Fabio Calabrò, Virginia Ferraresi and Michelangelo Russillo
Cancers 2025, 17(14), 2304; https://doi.org/10.3390/cancers17142304 - 10 Jul 2025
Viewed by 446
Abstract
Background/Objective: The approach to the clinical management of metastatic melanoma patients is undergoing a significant transformation. The availability of large amounts of data from medical images has made Artificial Intelligence (AI) applications an innovative, cutting-edge solution that could revolutionize the surveillance and management of these patients. In this study, we developed and validated a machine learning model based on radiomic data extracted from computed tomography (CT) scans of patients with metastatic melanoma (MM), designed to accurately predict prognosis and identify potential key factors associated with it. Methods: We used radiomic pipelines to extract quantitative features related to lesion texture, morphology, and intensity from high-quality CT images. We retrospectively collected a cohort of 58 patients with metastatic melanoma, from which a total of 60 CT series were used for model training and 70 independent CT series were employed for external testing. Model performance was evaluated using metrics such as sensitivity, specificity, and the area under the curve (AUC), with particularly favorable results compared to traditional methods. Results: The model achieved a ROC-AUC of 82% in the internal test and showed good predictive ability regarding lesion outcome. Conclusions: Although the cohort size was limited and the data were collected retrospectively from a single institution, the findings provide a promising basis for further validation in larger and more diverse patient populations. This approach could directly support clinical decision-making by providing accurate and personalized prognostic information.
(This article belongs to the Special Issue Radiomics and Imaging in Cancer Analysis)
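The abstract above reports performance as a ROC-AUC. As a reminder of what that metric measures, here is a minimal stdlib-only sketch computing it via the rank-sum (Mann-Whitney) identity; the risk scores and outcome labels below are invented for illustration and are not data from the study:

```python
def roc_auc(scores, labels):
    """ROC-AUC via the Mann-Whitney identity: the probability that a
    randomly chosen positive case scores higher than a randomly chosen
    negative one (ties count as half a win)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative label")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical radiomic risk scores for eight lesions (higher = worse prognosis)
scores = [0.91, 0.80, 0.74, 0.62, 0.55, 0.43, 0.30, 0.12]
labels = [1,    1,    0,    1,    0,    0,    1,    0]   # 1 = progression
print(round(roc_auc(scores, labels), 3))                 # → 0.75
```

An AUC of 0.5 corresponds to random ranking and 1.0 to a perfect one, which is why the study's 0.82 is read as good discriminative ability.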
17 pages, 1653 KB  
Article
Establishing a Highly Accurate Circulating Tumor Cell Image Recognition System for Human Lung Cancer by Pre-Training on Lung Cancer Cell Lines
by Hiroki Matsumiya, Kenji Terabayashi, Yusuke Kishi, Yuki Yoshino, Masataka Mori, Masatoshi Kanayama, Rintaro Oyama, Yukiko Nemoto, Natsumasa Nishizawa, Yohei Honda, Taiji Kuwata, Masaru Takenaka, Yasuhiro Chikaishi, Kazue Yoneda, Koji Kuroda, Takashi Ohnaga, Tohru Sasaki and Fumihiro Tanaka
Cancers 2025, 17(14), 2289; https://doi.org/10.3390/cancers17142289 - 9 Jul 2025
Viewed by 541
Abstract
Background/Objectives: Circulating tumor cells (CTCs) are important biomarkers for predicting prognosis and evaluating treatment efficacy in cancer. We developed the "CTC-Chip" system based on microfluidics, enabling highly sensitive CTC detection and prognostic assessment in lung cancer and malignant pleural mesothelioma. However, the final identification and enumeration of CTCs require manual intervention, which is time-consuming, prone to human error, and necessitates the involvement of experienced medical professionals. Medical image recognition using machine learning can reduce workload and improve automation. However, CTCs are rare in clinical samples, limiting the training data available to construct a robust CTC image recognition system. In this study, we established a highly accurate artificial intelligence-based CTC recognition system by pre-training convolutional neural networks using images from lung cancer cell lines. Methods: We performed transfer learning of convolutional neural networks. Initially, the models were pre-trained using images obtained from lung cancer cell lines. The models' accuracy was then improved by training with a limited number of clinical CTC images. Results: Transfer learning significantly improved the CTC classification accuracy to an average of 99.51%, compared to 96.96% for a model trained solely on cell-line images (p < 0.05). This approach showed notable efficacy when clinical training images were limited, achieving statistically significant accuracy improvements with as few as 17 clinical CTC images (p < 0.05). Conclusions: Overall, our findings demonstrate that pre-training with cancer cell lines enables rapid and highly accurate automated CTC recognition even with limited clinical data, significantly enhancing clinical applicability and potential utility across diverse cancer diagnostic workflows. Full article
(This article belongs to the Section Cancer Biomarkers)
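The two-stage scheme described above — pre-train on abundant cell-line images, then fine-tune on a handful of clinical CTC images — can be illustrated with a toy warm-start example. The sketch below substitutes a plain logistic regression on synthetic two-feature data for a CNN on microscopy images; every dataset, threshold, and sample here is invented, and only the workflow (pre-train, then fine-tune from the pre-trained weights) mirrors the paper:

```python
import math

def train(X, y, w=None, epochs=200, lr=0.5):
    """Logistic regression via SGD. Passing pre-trained weights `w`
    warm-starts training (the transfer-learning step); omitting it
    trains from scratch."""
    n = len(X[0])
    w = list(w) if w is not None else [0.0] * (n + 1)  # last entry = bias
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + w[-1]
            z = max(min(z, 35.0), -35.0)               # avoid exp overflow
            g = 1.0 / (1.0 + math.exp(-z)) - yi        # dLoss/dz
            for j in range(n):
                w[j] -= lr * g * xi[j]
            w[-1] -= lr * g
    return w

def accuracy(w, X, y):
    hits = sum((sum(wj * xj for wj, xj in zip(w, xi)) + w[-1] > 0) == bool(yi)
               for xi, yi in zip(X, y))
    return hits / len(y)

# "Cell line" stage: plentiful synthetic samples, rule y = 1 iff x0 + x1 > 1
src_X = [(i / 10, j / 10) for i in range(11) for j in range(11)]
src_y = [1 if a + b > 1 else 0 for a, b in src_X]
w_pre = train(src_X, src_y)

# "Clinical" stage: only four labelled samples, same underlying rule
clin_X = [(0.2, 0.9), (0.9, 0.4), (0.1, 0.3), (0.5, 0.2)]
clin_y = [1, 1, 0, 0]
w_ft = train(clin_X, clin_y, w=w_pre, epochs=20, lr=0.1)  # fine-tune

test_X = [(0.8, 0.8), (0.9, 0.9), (0.7, 0.6), (0.1, 0.1), (0.2, 0.3), (0.4, 0.1)]
test_y = [1, 1, 1, 0, 0, 0]
print(accuracy(w_ft, test_X, test_y))
```

The key design choice mirrored here is that fine-tuning starts from `w_pre` rather than from zeros, so the scarce "clinical" samples only need to nudge an already-useful decision boundary.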
34 pages, 947 KB  
Review
Multimodal Artificial Intelligence in Medical Diagnostics
by Bassem Jandoubi and Moulay A. Akhloufi
Information 2025, 16(7), 591; https://doi.org/10.3390/info16070591 - 9 Jul 2025
Viewed by 2151
Abstract
The integration of artificial intelligence into healthcare has advanced rapidly in recent years, with multimodal approaches emerging as promising tools for improving diagnostic accuracy and clinical decision making. These approaches combine heterogeneous data sources such as medical images, electronic health records, physiological signals, and clinical notes to better capture the complexity of disease processes. Despite this progress, only a limited number of studies offer a unified view of multimodal AI applications in medicine. In this review, we provide a comprehensive and up-to-date analysis of machine learning and deep learning-based multimodal architectures, fusion strategies, and their performance across a range of diagnostic tasks. We begin by summarizing publicly available datasets and examining the preprocessing pipelines required for harmonizing heterogeneous medical data. We then categorize key fusion strategies used to integrate information from multiple modalities and survey representative model architectures, from hybrid designs and transformer-based vision-language models to optimization-driven and EHR-centric frameworks. Finally, we highlight open challenges in existing work. Our analysis shows that multimodal approaches tend to outperform unimodal systems in diagnostic performance, robustness, and generalization. This review provides a unified view of the field and opens up future research directions aimed at building clinically usable, interpretable, and scalable multimodal diagnostic systems. Full article
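Two of the fusion strategies such reviews typically categorize — early (feature-level) and late (decision-level) fusion — can be sketched in a few lines. The feature values and per-modality probabilities below are invented placeholders, not data from any cited study:

```python
def early_fusion(image_feats, ehr_feats):
    """Early (feature-level) fusion: concatenate per-modality feature
    vectors into one input for a single downstream classifier."""
    return list(image_feats) + list(ehr_feats)

def late_fusion(probs_by_modality, weights=None):
    """Late (decision-level) fusion: combine each modality head's
    predicted probability, here by a (weighted) average."""
    if weights is None:
        weights = [1.0] * len(probs_by_modality)
    total = sum(weights)
    return sum(p * w for p, w in zip(probs_by_modality, weights)) / total

# Hypothetical case: two imaging features plus [age, smoker] from the EHR
fused = early_fusion([0.12, 0.80], [63.0, 1.0])
print(fused)                          # [0.12, 0.8, 63.0, 1.0]
print(late_fusion([0.9, 0.6, 0.75])) # average of three modality heads
```

Early fusion lets one model learn cross-modal interactions but requires aligned, jointly available inputs; late fusion degrades more gracefully when a modality is missing, which is one reason reviews treat the choice of fusion stage as a central design axis.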