Search Results (504)

Search Parameters:
Keywords = Computer-assisted diagnosis

23 pages, 1668 KB  
Article
Brain Stroke Classification Using CT Scans with Transformer-Based Models and Explainable AI
by Shomukh Qari and Maha A. Thafar
Diagnostics 2025, 15(19), 2486; https://doi.org/10.3390/diagnostics15192486 - 29 Sep 2025
Abstract
Background & Objective: Stroke remains a leading cause of mortality and long-term disability worldwide, demanding rapid and accurate diagnosis to improve patient outcomes. Computed tomography (CT) scans are widely used in emergency settings due to their speed, availability, and cost-effectiveness. This study proposes an artificial intelligence (AI)-based framework for multiclass stroke classification (ischemic, hemorrhagic, and no stroke) using CT scan images from the Ministry of Health of the Republic of Turkey. Methods: We adopted MaxViT, a state-of-the-art Vision Transformer (ViT)-based architecture, as the primary deep learning model for stroke classification. Additional transformer variants, including Vision Transformer (ViT), Transformer-in-Transformer (TNT), and ConvNeXt, were evaluated for comparison. To improve model generalization and handle class imbalance, classical data augmentation techniques were applied. Furthermore, explainable AI (XAI) was integrated using Grad-CAM++ to provide visual insights into model decisions. Results: The MaxViT model with augmentation achieved the highest performance, reaching an accuracy and F1-score of 98.00%, outperforming the baseline Vision Transformer and other evaluated models. Grad-CAM++ visualizations confirmed that the proposed framework effectively identified stroke-related regions, enhancing transparency and clinical trust. Conclusions: This research contributes to the development of a trustworthy AI-assisted diagnostic tool for stroke, facilitating its integration into clinical practice and improving access to timely and optimal stroke diagnosis in emergency departments. Full article
(This article belongs to the Special Issue 3rd Edition: AI/ML-Based Medical Image Processing and Analysis)
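The accuracy and F1-score figures reported above can be reproduced from raw predictions with a short metric routine. This is a minimal, illustrative sketch; the class names and toy labels are invented, not taken from the study:

```python
def accuracy_and_macro_f1(y_true, y_pred, classes):
    """Compute overall accuracy and macro-averaged F1 for a multiclass task."""
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return acc, sum(f1s) / len(f1s)

# Toy labels for the study's three classes (ischemic, hemorrhagic, no stroke).
y_true = ["isch", "isch", "hem", "hem", "none", "none"]
y_pred = ["isch", "isch", "hem", "none", "none", "none"]
acc, macro_f1 = accuracy_and_macro_f1(y_true, y_pred, ["isch", "hem", "none"])
```

Macro averaging weights each class equally, which matters under the class imbalance the authors address with augmentation.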

19 pages, 3282 KB  
Review
Generational Leaps in Intrapartum Fetal Surveillance
by Lawrence D. Devoe
Diagnostics 2025, 15(19), 2482; https://doi.org/10.3390/diagnostics15192482 - 28 Sep 2025
Abstract
Background/Objectives: Electronic fetal monitoring (EFM) has been used for intrapartum fetal surveillance for over 50 years. Despite numerous trials comparing EFM with standard fetal heart rate (FHR) auscultation, it remains contentious whether continuous monitoring with standard interpretation has reliably improved perinatal outcomes, specifically lower rates of perinatal morbidity and mortality. This review examines previous attempts to improve fetal monitoring and presents future directions for novel intrapartum fetal surveillance systems. Methods: We conducted a chronological review of EFM developments, including ancillary methods such as fetal ECG analysis, automated systems for FHR analysis, and artificial intelligence applications. We analyzed the evolution from visual interpretation to intelligent systems and evaluated the performance of various automated monitoring platforms. Results: Various ancillary methods developed to improve EFM accuracy for predicting fetal compromise have shown limited success. Only a limited number of studies demonstrated that adding fetal ECG analysis to visual FHR pattern interpretation resulted in better fetal outcomes. Automated systems for FHR analysis have not consistently enhanced intrapartum fetal surveillance. However, novel approaches such as the Fetal Reserve Index (FRI) show promise by incorporating clinical risk factors with traditional FHR patterns to provide higher-level risk assessment and prognosis. Conclusions: The shortcomings of visual interpretation of FHR patterns persist despite technological advances. Future intelligent intrapartum surveillance systems must combine conventional fetal monitoring with comprehensive risk assessment that incorporates maternal, fetal, and obstetric factors. The integration of artificial intelligence with contextualized metrics like the FRI represents the most promising direction for improving intrapartum fetal surveillance and clinical outcomes. Full article
(This article belongs to the Special Issue Game-Changing Concepts in Reproductive Health)

33 pages, 978 KB  
Article
An Interpretable Clinical Decision Support System Aims to Stage Age-Related Macular Degeneration Using Deep Learning and Imaging Biomarkers
by Ekaterina A. Lopukhova, Ernest S. Yusupov, Rada R. Ibragimova, Gulnaz M. Idrisova, Timur R. Mukhamadeev, Elizaveta P. Grakhova and Ruslan V. Kutluyarov
Appl. Sci. 2025, 15(18), 10197; https://doi.org/10.3390/app151810197 - 18 Sep 2025
Abstract
The use of intelligent clinical decision support systems (CDSS) has the potential to improve the accuracy and speed of diagnoses significantly. These systems can analyze a patient’s medical data and generate comprehensive reports that help specialists better understand and evaluate the current clinical scenario. This capability is particularly important when dealing with medical images, as the heavy workload on healthcare professionals can hinder their ability to notice critical biomarkers, which may be difficult to detect with the naked eye due to stress and fatigue. Implementing a CDSS that uses computer vision (CV) techniques can alleviate this challenge. However, one of the main obstacles to the widespread use of CV and intelligent analysis methods in medical diagnostics is the lack of a clear understanding among diagnosticians of how these systems operate. A better understanding of their functioning and of the reliability of the identified biomarkers will enable medical professionals to more effectively address clinical problems. Additionally, it is essential to tailor the training process of machine learning models to medical data, which are often imbalanced due to varying probabilities of disease detection. Neglecting this factor can compromise the quality of the developed CDSS. This article presents the development of a CDSS module focused on diagnosing age-related macular degeneration. Unlike traditional methods that classify diseases or their stages based on optical coherence tomography (OCT) images, the proposed CDSS provides a more sophisticated and accurate analysis of biomarkers detected through a deep neural network. This approach combines interpretative reasoning with highly accurate models, although these models can be complex to describe. To address the issue of class imbalance, an algorithm was developed to optimally select biomarkers, taking into account both their statistical and clinical significance. 
As a result, the algorithm prioritizes the selection of classes that ensure high model accuracy while maintaining clinically relevant responses generated by the CDSS module. The results indicate that the overall accuracy of staging age-related macular degeneration increased by 63.3% compared with traditional methods of direct stage classification using a similar machine learning model. This improvement suggests that the CDSS module can significantly enhance disease diagnosis, particularly in situations with class imbalance in the original dataset. To improve interpretability, the process of determining the most likely disease stage was organized into two steps. At each step, the diagnostician could visually access information explaining the reasoning behind the intelligent diagnosis, thereby assisting experts in understanding the basis for clinical decision-making. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
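The abstract describes selecting biomarkers by balancing statistical and clinical significance. The following is only a hypothetical sketch of that idea as a weighted ranking; the biomarker names, scores, and the 50/50 weighting are assumptions, not the authors' algorithm:

```python
def select_biomarkers(biomarkers, weight_stat=0.5, top_k=3):
    """Rank candidate biomarkers by a weighted blend of statistical and
    clinical significance scores (both assumed pre-normalized to [0, 1])."""
    def combined(b):
        return weight_stat * b["stat"] + (1 - weight_stat) * b["clinical"]
    return [b["name"] for b in sorted(biomarkers, key=combined, reverse=True)[:top_k]]

# Hypothetical OCT biomarker candidates with invented scores.
candidates = [
    {"name": "drusen_volume",  "stat": 0.9, "clinical": 0.6},
    {"name": "srf_presence",   "stat": 0.7, "clinical": 0.9},
    {"name": "rpe_atrophy",    "stat": 0.5, "clinical": 0.8},
    {"name": "vitreous_noise", "stat": 0.8, "clinical": 0.1},
]
top = select_biomarkers(candidates, weight_stat=0.5, top_k=2)
```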

19 pages, 895 KB  
Article
Checking Medical Process Conformance by Exploiting LLMs
by Giorgio Leonardi, Stefania Montani and Manuel Striani
Appl. Sci. 2025, 15(18), 10184; https://doi.org/10.3390/app151810184 - 18 Sep 2025
Abstract
Clinical guidelines, which represent the normative process models for healthcare organizations, are typically available in a textual, unstructured form. This issue hampers the application of classical conformance-checking algorithms to the medical domain, which take as input a formalized, computer-interpretable description of the process. In this paper, (i) we propose overcoming this problem by taking advantage of a Large Language Model (LLM), in order to extract normative rules from textual guidelines; (ii) we then check and quantify the conformance of the patient event log with respect to such rules. Additionally, (iii) we adopt the approach as a means for evaluating the quality of the models mined by different process discovery algorithms from the event log, by comparing their conformance to the rules. We have tested our work in the domain of stroke. As regards conformance checking, we have verified the compliance of four hospitals in Northern Italy with a general rule for diagnosis timing and with two rules that refer to thrombolysis treatment, and have identified some issues related to other rules, which involve the availability of magnetic resonance instruments. As regards process model discovery evaluation, we have assessed the superiority of Heuristic Miner with respect to other mining algorithms on our dataset. It is worth noting that the easy extraction of rules in our LLM-assisted approach would make it quickly applicable to other fields as well. Full article
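The rule-checking step (ii) can be illustrated with a toy event log: each extracted rule becomes a predicate over a patient trace, and fitness is the fraction of rules satisfied. The specific rules and time limits below are hypothetical, chosen only to echo the diagnosis-timing and thrombolysis rules mentioned in the abstract:

```python
def check_conformance(trace, rules):
    """Return the fraction of rules a single patient trace satisfies.
    A trace is a list of (activity, timestamp_minutes) events; each rule
    is a boolean predicate over the trace."""
    return sum(1 for rule in rules if rule(trace)) / len(rules)

def within_minutes(activity_a, activity_b, limit):
    """Rule factory: activity_b must occur no later than `limit` minutes
    after activity_a (both must be present in the trace)."""
    def rule(trace):
        times = dict(trace)
        if activity_a not in times or activity_b not in times:
            return False
        return times[activity_b] - times[activity_a] <= limit
    return rule

# Hypothetical rules: CT within 25 min of admission, thrombolysis within 60 min.
rules = [within_minutes("admission", "ct_scan", 25),
         within_minutes("admission", "thrombolysis", 60)]
trace = [("admission", 0), ("ct_scan", 20), ("thrombolysis", 75)]
fitness = check_conformance(trace, rules)  # this trace violates the second rule
```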

17 pages, 3106 KB  
Article
Weakly Supervised Gland Segmentation Based on Hierarchical Attention Fusion and Pixel Affinity Learning
by Yanli Liu, Mengchen Lin, Xiaoqian Sang, Guidong Bao and Yunfeng Wu
Bioengineering 2025, 12(9), 992; https://doi.org/10.3390/bioengineering12090992 - 18 Sep 2025
Abstract
Precise segmentation of glands in histopathological images is essential for the diagnosis of colorectal cancer, as the changes in gland morphology are associated with pathological progression. Conventional computer-assisted methods rely on dense pixel-level annotations, which are costly and labor-intensive to obtain. The present study proposes a two-stage weakly supervised segmentation framework named Multi-Level Attention and Affinity (MAA). The MAA framework utilizes the image-level labels and combines the Multi-Level Attention Fusion (MAF) and Affinity Refinement (AR) modules. The MAF module extracts the hierarchical features from multiple transformer layers to grasp global semantic context, and generates more comprehensive initial class activation maps. By modeling inter-pixel semantic consistency, the AR module refines pseudo-labels, which can sharpen the boundary delineation and reduce label noise. The experiments on the GlaS dataset showed that the proposed MAA framework achieves an Intersection over Union (IoU) of 81.99% and a Dice coefficient of 90.10%, which outperformed the state-of-the-art Online Easy Example Mining (OEEM) method with an improvement of 4.43% in IoU. Such experimental results demonstrated the effectiveness of integrating hierarchical attention mechanisms with affinity-guided refinement for annotation-efficient and robust gland segmentation. Full article
(This article belongs to the Special Issue Recent Progress in Biomedical Image Processing and Analysis)
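The IoU and Dice figures quoted above are standard overlap metrics between a predicted and a ground-truth mask; a minimal sketch on flat binary masks (the toy masks are invented):

```python
def iou_and_dice(mask_a, mask_b):
    """IoU and Dice coefficient for two flat binary masks (lists of 0/1)."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    union = sum(a | b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    iou = inter / union if union else 1.0
    dice = 2 * inter / total if total else 1.0
    return iou, dice

pred = [1, 1, 1, 0, 0, 1]  # predicted gland pixels
gt   = [1, 1, 0, 0, 1, 1]  # ground-truth gland pixels
iou, dice = iou_and_dice(pred, gt)
```

Note that Dice is always at least as large as IoU for the same pair of masks, consistent with the 90.10% vs. 81.99% figures reported.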

21 pages, 3296 KB  
Article
Image Sensor-Driven 3D Modeling of Complex Biological Surfaces for Preoperative Planning of Hemangioma Treatment
by Janis Peksa, Dmytro Kukharenko, Andrii Perekrest and Dmytro Mamchur
Sensors 2025, 25(18), 5781; https://doi.org/10.3390/s25185781 - 17 Sep 2025
Abstract
The advancement of science and technology has elevated the practice of surgery, where computer systems now perform the majority of calculations required for successful interventions. This technological progress can be leveraged to foster surgical improvements by developing and implementing novel computer models for the preoperative planning of surgical treatments. Such systems enable surgeons to select optimal treatment tactics and dosages of operative interventions tailored to individual patients. Currently, there is no consensus on the use of expectant management for hemangiomas, as the most effective therapeutic strategy often depends on the tumor’s type and location, with early treatment being critical in some cases. Accurate diagnosis and effective treatment necessitate precise determination of the tumor’s type, growth characteristics, structure, and location. Surgical removal is better suited to small formations in locations that are not critical from a cosmetic perspective (for males, for example, the back and legs). This paper presents a method for creating a three-dimensional (3D) model of hemangioma using polynomial approximation and spline modeling to assist surgeons. The development of the mathematical model, the software implementation, and a comprehensive error analysis are explained in this work. The resulting model demonstrated an average approximation error of 5.6%, and a discriminant analysis confirmed the significance of five key parameters for successful resection. The proposed system offers a robust and economically viable tool for improving the accuracy and outcomes of hemangioma surgery. Full article
(This article belongs to the Section Sensing and Imaging)
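The polynomial-approximation step can be sketched as an ordinary least-squares fit via the normal equations, with the paper's average approximation error interpreted as a mean relative error over sample points. The profile data below are invented, and the actual model fits a full 3D surface rather than a 1D profile:

```python
def polyfit(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations,
    solved with Gaussian elimination and partial pivoting."""
    n = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        b[col], b[pivot] = b[pivot], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        coeffs[r] = (b[r] - sum(A[r][c] * coeffs[c] for c in range(r + 1, n))) / A[r][r]
    return coeffs  # coeffs[i] multiplies x**i

def mean_relative_error(xs, ys, coeffs):
    """Average |fit - y| / |y| over the sample points, as a percentage."""
    def evaluate(x):
        return sum(c * x ** i for i, c in enumerate(coeffs))
    return 100 * sum(abs(evaluate(x) - y) / abs(y) for x, y in zip(xs, ys)) / len(xs)

# Toy surface profile: heights (mm) sampled along one axis.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0, 2.9, 4.1, 5.2, 5.9]
coeffs = polyfit(xs, ys, 1)
err = mean_relative_error(xs, ys, coeffs)
```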

22 pages, 3733 KB  
Article
AI-Assisted Fusion Technique for Orthodontic Diagnosis Between Cone-Beam Computed Tomography and Face Scan Data
by Than Trong Khanh Dat, Jang-Hoon Ahn, Hyunkyo Lim and Jonghun Yoon
Bioengineering 2025, 12(9), 975; https://doi.org/10.3390/bioengineering12090975 - 14 Sep 2025
Abstract
This study presents a deep learning-based approach that integrates cone-beam computed tomography (CBCT) with facial scan data, aiming to enhance diagnostic accuracy and treatment planning in medical imaging, particularly in cosmetic surgery and orthodontics. The method combines facial mesh detection with the iterative closest point (ICP) algorithm to address common challenges such as differences in data acquisition times and extraneous details in facial scans. By leveraging a deep learning model, the system achieves more precise facial mesh detection, thereby enabling highly accurate initial alignment. Experimental results demonstrate average registration errors of approximately 0.3 mm (inlier RMSE), even when CBCT and facial scans are acquired independently. These results should be regarded as preliminary, representing a feasibility study rather than conclusive evidence of clinical accuracy. Nevertheless, the approach demonstrates consistent performance across different scan orientations, suggesting potential for future clinical application. Furthermore, the deep learning framework effectively handles diverse and complex facial geometries, thereby improving the reliability of the alignment process. This integration not only enhances the precision of 3D facial recognition but also improves the efficiency of clinical workflows. Future developments will aim to reduce processing time and enable simultaneous data capture to further improve accuracy and operational efficiency. Overall, this approach provides a powerful tool for practitioners, contributing to improved diagnostic outcomes and optimized treatment strategies in medical imaging. Full article
(This article belongs to the Section Biosignal Processing)
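The ICP algorithm mentioned above alternates nearest-neighbor matching with a rigid-alignment update. Below is a heavily simplified, translation-only sketch in 2D; the point sets are invented, and a real ICP iteration also estimates rotation:

```python
def icp_translation_step(source, target):
    """One simplified ICP iteration restricted to translation: match each
    source point to its nearest target point, then shift the source by the
    mean residual of those correspondences."""
    def nearest(p):
        return min(target, key=lambda q: (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)
    matches = [(p, nearest(p)) for p in source]
    dx = sum(q[0] - p[0] for p, q in matches) / len(matches)
    dy = sum(q[1] - p[1] for p, q in matches) / len(matches)
    moved = [(p[0] + dx, p[1] + dy) for p in source]
    return moved, (dx, dy)

# A scan displaced by a known offset is recovered in a single step.
target = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
source = [(0.3, -0.2), (1.3, -0.2), (0.3, 0.8)]
aligned, shift = icp_translation_step(source, target)
```

In the paper, the deep-learning facial-mesh detection supplies the coarse initial alignment, so the subsequent ICP refinement starts close to the solution, as in this toy case.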

17 pages, 2250 KB  
Article
Automated Computer-Assisted Diagnosis of Pleural Effusion in Chest X-Rays via Deep Learning
by Ya-Yun Huang, Yu-Ching Lin, Sung-Hsin Tsai, Tsun-Kuang Chi, Tsung-Yi Chen, Shih-Wei Chung, Kuo-Chen Li, Wei-Chen Tu, Patricia Angela R. Abu and Chih-Cheng Chen
Diagnostics 2025, 15(18), 2322; https://doi.org/10.3390/diagnostics15182322 - 13 Sep 2025
Abstract
Background/Objectives: Pleural effusion is a common pulmonary condition that, if left untreated, may lead to respiratory distress and severe complications. Chest X-ray (CXR) imaging is routinely used by physicians to identify signs of pleural effusion. However, manually examining large volumes of CXR images on a daily basis can require substantial time and effort. To address this issue, this study proposes an automated pleural effusion detection system for CXR images. Methods: The proposed system integrates image cropping, image enhancement, and the EfficientNet-B0 deep learning model to assist in detecting pleural effusion, a task that is often challenging due to subtle symptom presentation. Image cropping was applied to extract the region from the heart to the costophrenic angle as the target area. Subsequently, image enhancement techniques were employed to emphasize pleural effusion features, thereby improving the model’s learning efficiency. Finally, EfficientNet-B0 was used to train and classify pleural effusion cases based on processed images. Results: In the experimental results, the proposed image enhancement approach improved the model’s recognition accuracy by approximately 4.33% compared with the non-enhanced method, confirming that enhancement effectively supports subsequent model learning. Ultimately, the proposed system achieved an accuracy of 93.27%, representing a substantial improvement of 21.30% over the 77.00% reported in previous studies, highlighting its significant advancement in pleural effusion detection. Conclusions: This system can serve as an assistive diagnostic tool for physicians, providing standardized detection results, reducing the workload associated with manual interpretation, and improving the overall efficiency of pulmonary care. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
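The image-enhancement stage can be illustrated with the simplest contrast operation, a linear min-max stretch that spreads a narrow intensity band over the full 8-bit range. This is a generic sketch, not necessarily the enhancement method the authors used:

```python
def minmax_stretch(pixels, out_min=0, out_max=255):
    """Linear contrast stretch: map the observed intensity range onto
    [out_min, out_max] so low-contrast features become more separable."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return [out_min] * len(pixels)
    scale = (out_max - out_min) / (hi - lo)
    return [round(out_min + (p - lo) * scale) for p in pixels]

# A narrow band of CXR intensities (toy values) stretched to 0..255.
region = [100, 110, 120, 130, 140]
enhanced = minmax_stretch(region)
```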

11 pages, 4595 KB  
Article
Computed Tomography of Neoplastic Infiltrating Renal Masses in Patients Without a Previous History of Cancer
by Carlos Nicolau, Andreu Ivars, Carmen Sebastia, Clara Bassaganyas, María Fresno, Leonardo Rodríguez, Josep Puig, Marc Comas-Cufí and Blanca Paño
Cancers 2025, 17(17), 2936; https://doi.org/10.3390/cancers17172936 - 8 Sep 2025
Abstract
Background/Objectives: Infiltrative renal masses, characterized by ill-defined margins and parenchymal invasion without forming a discrete mass, present a diagnostic challenge, particularly in patients without a prior history of malignancy. Differentiating among the most common malignant etiologies—renal cell carcinoma (RCC), urothelial carcinoma (UC), and lymphoma—is essential for guiding appropriate treatment. This study aimed to evaluate whether specific computed tomography (CT) features can assist in the differential diagnosis of these lesions. Methods: A retrospective review was conducted on 68 patients with infiltrative renal masses presented at a tertiary hospital’s oncologic urology committee between 2018 and 2022. Patients with prior malignancy or signs of infection were excluded. All cases underwent contrast-enhanced CT within three months of diagnosis and had histopathological confirmation. Imaging features such as necrosis, collecting system involvement, lymphadenopathy, and others were assessed and statistically analyzed. Results: RCC was the most frequent diagnosis (68%), followed by UC (18%) and lymphoma (7.4%). Significant differences were observed in imaging features: necrosis was more common in RCC (87%) than in UC (25%) and lymphoma (20%), p < 0.001; collecting system involvement was universal in UC (100%) and less common in RCC (65%) and lymphoma (40%), p = 0.009; and lymphadenopathy was more frequent in lymphoma (80%) than in UC (67%) and RCC (35%), p = 0.038. Tumor size also varied significantly, with lymphomas presenting the largest median size (11 cm), followed by RCCs (8.2 cm) and UCs (5 cm), p < 0.001. Conclusions: CT imaging features, particularly necrosis, collecting system involvement, and lymphadenopathy, can aid in distinguishing among RCC, UC, and lymphoma in patients with infiltrative renal masses and no prior cancer history. These findings may support more accurate diagnoses and inform tailored therapeutic strategies. Full article
(This article belongs to the Section Methods and Technologies Development)

15 pages, 367 KB  
Study Protocol
The CORTEX Project: A Pre–Post Randomized Controlled Feasibility Trial Evaluating the Efficacy of a Computerized Cognitive Remediation Therapy Program for Adult Inpatients with Anorexia Nervosa
by Giada Pietrabissa, Davide Maria Cammisuli, Gloria Marchesi, Giada Rapelli, Federico Brusa, Gianluigi Luxardi, Giovanna Celia, Alessandro Chinello, Chiara Cappelletti, Simone Raineri, Luigi Enrico Zappa, Stefania Landi, Francesco Monaco, Ernesta Panarello, Stefania Palermo, Sara Mirone, Francesca Tessitore, Mauro Cozzolino, Leonardo Mendolicchio and Gianluca Castelnuovo
J. Pers. Med. 2025, 15(9), 430; https://doi.org/10.3390/jpm15090430 - 8 Sep 2025
Abstract
Background/Objectives: Anorexia nervosa (AN) is marked by cognitive deficits, particularly reduced mental flexibility and weak central coherence, which may sustain the core psychopathological symptoms. While cognitive remediation therapy (CRT) has shown efficacy in improving these cognitive processes in AN, evidence on computer-based CRT remains limited. This study aims to evaluate the feasibility and efficacy of integrating computer-assisted cognitive remediation therapy (CA-CRT) into standard nutritional rehabilitation (treatment as usual, TAU) to improve the targeted cognitive and psychological parameters among inpatients with AN in a more personalized and scalable way. Methods: A multicenter randomized controlled trial (RCT) will be conducted. At least 54 participants with a diagnosis of AN will be recruited at each site and randomized into either the experimental or control group after initial screening. The intervention will last five weeks and include 15 individual CA-CRT sessions alongside 10 individual CR sessions, delivered in addition to standard care. The primary and secondary outcomes will be assessed at the end of the intervention to evaluate the changes in cognitive flexibility, central coherence, and psychological functioning. Results: Participants receiving CA-CRT are expected to develop more flexible and integrated thinking styles and achieve greater improvements in clinical outcomes compared to those receiving standard care alone, supporting a more personalized therapeutic approach. Conclusions: These findings would underscore the feasibility and clinical value of incorporating CA-CRT into standard inpatient treatment for AN. By specifically targeting cognitive inflexibility and poor central coherence in a scalable, individualized format, CA-CRT may enhance treatment effectiveness and support the development of patient-centered interventions tailored to the cognitive profiles of individuals with AN. Full article

33 pages, 4897 KB  
Review
Recent Advances in Sensor Fusion Monitoring and Control Strategies in Laser Powder Bed Fusion: A Review
by Alexandra Papatheodorou, Nikolaos Papadimitriou, Emmanuel Stathatos, Panorios Benardos and George-Christopher Vosniakos
Machines 2025, 13(9), 820; https://doi.org/10.3390/machines13090820 - 6 Sep 2025
Abstract
Laser Powder Bed Fusion (LPBF) has emerged as a leading additive manufacturing (AM) process for producing complex metal components. Despite its advantages, the inherent LPBF process complexity leads to challenges in achieving consistent quality and repeatability. To address these concerns, recent research efforts have focused on sensor fusion techniques for process monitoring, and on developing more elaborate control strategies. Sensor fusion combines information from multiple in situ sensors to provide more comprehensive insights into process characteristics such as melt pool behavior, spatter formation, and layer integrity. By leveraging multimodal data sources, sensor fusion enhances the detection and diagnosis of process anomalies in real-time. Closed-loop control systems may utilize this fused information to adjust key process parameters–such as laser power, focal depth, and scanning speed–to mitigate defect formation during the build process. This review focuses on the current state-of-the-art in sensor fusion monitoring and control strategies for LPBF. In terms of sensor fusion, recent advances extend beyond CNN-based approaches to include graph-based, attention, and transformer architectures. Among these, feature-level integration has shown the best balance between accuracy and computational cost. However, the limited volume of available experimental data, class-imbalance issues and lack of standardization still hinder further progress. In terms of control, a trend away from purely physics-based towards Machine Learning (ML)-assisted and hybrid strategies can be observed. These strategies show promise for more adaptive and effective quality enhancement. The biggest challenge is the broader validation on more complex part geometries and under realistic conditions using commercial LPBF systems. Full article
(This article belongs to the Special Issue In Situ Monitoring of Manufacturing Processes)
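Feature-level integration, noted above as the best accuracy/cost trade-off, amounts to normalizing each modality's features separately and concatenating them into one vector for a downstream classifier. A minimal sketch with invented sensor values:

```python
def zscore(features):
    """Standardize one sensor's feature vector (assumed non-constant)."""
    mean = sum(features) / len(features)
    std = (sum((f - mean) ** 2 for f in features) / len(features)) ** 0.5
    return [(f - mean) / std for f in features]

def feature_level_fusion(*sensor_features):
    """Feature-level fusion: normalize each modality on its own scale,
    then concatenate into a single feature vector."""
    fused = []
    for features in sensor_features:
        fused.extend(zscore(features))
    return fused

# Hypothetical per-layer features from a photodiode and a thermal camera;
# per-modality normalization keeps the thermal magnitudes from dominating.
photodiode = [0.2, 0.4, 0.6]
thermal = [900.0, 950.0, 1000.0]
fused = feature_level_fusion(photodiode, thermal)
```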

15 pages, 2272 KB  
Article
Prediction of Germline BRCA Mutations in High-Risk Breast Cancer Patients Using Machine Learning with Multiparametric Breast MRI Features
by Hyeonji Park, Kyu Ran Cho, SeungJae Lee, Doohyun Cho, Kyong Hwa Park, Yoon Sang Cho and Sung Eun Song
Sensors 2025, 25(17), 5500; https://doi.org/10.3390/s25175500 - 4 Sep 2025
Abstract
The identification of germline BRCA1/2 (BRCA) mutations plays an important role in the treatment planning of high-risk breast cancer patients, but genetic testing may be costly or unavailable. The multiparametric breast MRI (mpMRI) features offer noninvasive imaging biomarkers that could support BRCA mutation prediction. In this study, we investigate whether mpMRI features can predict BRCA mutation status in high-risk breast cancer patients. We collected data from 231 consecutive patients (82 BRCA-positive, 149 BRCA-negative) who underwent BRCA mutation testing and preoperative MRI between 2013 and 2019. We used the mpMRI features, including computer-aided diagnosis (CAD)-derived kinetic features, morphologic features, and apparent diffusion coefficient (ADC) values from diffusion-weighted imaging (DWI). In the univariate analysis, higher CAD-derived washout component and peak enhancement, larger tumor size and angio-volume, peritumoral edema on T2-weighted imaging, axillary adenopathy, and minimal or mild background parenchymal enhancement (BPE) were significantly associated with BRCA mutation, while ADC values showed no significant differences. In the multivariate analysis, three significant predictors were washout component ≥ 19.5% (odds ratio [OR] = 3.89, p < 0.001), minimal or mild BPE (OR = 2.57, p = 0.004), and tumor size ≥ 2.5 cm (OR = 2.41, p = 0.004). Using these predictors, we compared the predictive performance of 13 ML models through 30 repeated runs and achieved the highest performance (AUC = 0.72). In conclusion, ML models integrating mpMRI features demonstrated good performance for predicting BRCA mutations in high-risk patients. This noninvasive approach may aid personalized treatment planning and genetic counseling. Full article
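The three reported odds ratios can be combined multiplicatively on the odds scale to sketch a simple risk score. The baseline probability below is a hypothetical prior, not a figure from the paper, so the output is purely illustrative:

```python
# Odds ratios for the three significant predictors reported in the study.
ODDS_RATIOS = {"washout_ge_19_5pct": 3.89,
               "minimal_or_mild_bpe": 2.57,
               "tumor_ge_2_5cm": 2.41}

def mutation_probability(findings, baseline_prob=0.2):
    """Combine predictor odds ratios multiplicatively on the odds scale.
    `baseline_prob` is an assumed prior, not taken from the paper."""
    odds = baseline_prob / (1 - baseline_prob)
    for name, present in findings.items():
        if present:
            odds *= ODDS_RATIOS[name]
    return odds / (1 + odds)

p = mutation_probability({"washout_ge_19_5pct": True,
                          "minimal_or_mild_bpe": False,
                          "tumor_ge_2_5cm": True})
```

Multiplying odds ratios assumes the predictors act independently on the odds scale, which is roughly what a logistic model with those three terms implies.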

16 pages, 715 KB  
Systematic Review
Artificial Intelligence in Computed Tomography Radiology: A Systematic Review on Risk Reduction Potential
by Sandra Coelho, Aléxia Fernandes, Marco Freitas and Ricardo J. Fernandes
Appl. Sci. 2025, 15(17), 9659; https://doi.org/10.3390/app15179659 - 2 Sep 2025
Abstract
Artificial intelligence (AI) has emerged as a transformative technology in radiology, offering enhanced diagnostic accuracy, improved workflow efficiency, and potential risk mitigation. However, its effectiveness in reducing clinical and occupational risks in radiology departments remains underexplored. This systematic review evaluated the current literature on AI applications in computed tomography (CT) radiology and their contributions to risk reduction. Following the PRISMA 2020 guidelines, a systematic search was conducted in PubMed, Scopus and Web of Science for studies published between 2021 and 2025 (the databases were last accessed on 15 April 2025). Thirty-four studies were included based on their relevance to AI in radiology and reported outcomes. Extracted data included study type, geographic region, AI application and type, role in clinical workflow, use cases, sensitivity and specificity. The majority of studies addressed triage (61.8%) and computer-aided detection (32.4%). AI was most frequently applied in chest imaging (47.1%) and brain haemorrhage detection (29.4%). The mean reported sensitivity was 89.0% and the mean specificity 93.3%. AI tools demonstrated advantages in image interpretation, automated patient positioning, prioritisation and measurement standardisation. Reported benefits included reduced cognitive workload, improved triage efficiency, decreased manual annotation and shorter exposure times. AI systems in CT radiology show strong potential to enhance diagnostic consistency and reduce occupational risks. The evidence supports the integration of AI-based tools to assist diagnosis, lower human workload and improve overall safety in radiology departments.
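The pooled sensitivity and specificity figures above follow directly from confusion-matrix counts. A small sketch of the computation, using hypothetical counts chosen only to reproduce the reported means:

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple:
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical triage tool flagging haemorrhage on 1000 CT studies
# (counts are illustrative, not taken from any included study):
sens, spec = sensitivity_specificity(tp=89, fn=11, tn=840, fp=60)
```

Here `sens` is 0.89 and `spec` is about 0.933, matching the pooled means reported in the review; in a triage setting, the false-negative count (`fn`) is the clinically critical quantity, since those are missed urgent cases.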

19 pages, 17084 KB  
Article
SPADE: Superpixel Adjacency Driven Embedding for Three-Class Melanoma Segmentation
by Pablo Ordóñez, Ying Xie, Xinyue Zhang, Chloe Yixin Xie, Santiago Acosta and Issac Guitierrez
Algorithms 2025, 18(9), 551; https://doi.org/10.3390/a18090551 - 2 Sep 2025
Abstract
The accurate segmentation of pigmented skin lesions is a critical prerequisite for reliable melanoma detection, yet approximately 30% of lesions exhibit fuzzy or poorly defined borders. This ambiguity makes the definition of a single contour unreliable and limits the effectiveness of computer-assisted diagnosis (CAD) systems. While clinical assessment based on the ABCDE criteria (asymmetry, border, color, diameter, and evolution), dermoscopic imaging, and scoring systems remains the standard, these methods are inherently subjective and vary with clinician experience. We address this challenge by reframing segmentation into three distinct regions: background, border, and lesion core. These regions are delineated using superpixels generated via the Simple Linear Iterative Clustering (SLIC) algorithm, which provides meaningful structural units for analysis. Our contributions are fourfold: (1) redefining lesion borders as regions rather than sharp lines; (2) generating superpixel-level embeddings with a transformer-based autoencoder; (3) incorporating these embeddings as features for superpixel classification; and (4) integrating neighborhood information to construct enhanced feature vectors. Unlike pixel-level algorithms that often overlook boundary context, our pipeline fuses global class information with local spatial relationships, significantly improving precision and recall in challenging border regions. An evaluation on the HAM10000 melanoma dataset demonstrates that our superpixel–region adjacency graph (RAG)–transformer pipeline achieves exceptional performance (100% F1 score, accuracy, and precision) in classifying background, border, and lesion core superpixels. By transforming raw dermoscopic images into region-based structured representations, the proposed method generates more informative inputs for downstream deep learning models. This strategy not only advances melanoma analysis but also provides a generalizable framework for other medical image segmentation and classification tasks.
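The neighborhood-enhancement step described above can be sketched in a toy form: build a region adjacency graph from a superpixel label grid, then augment each region's embedding with the mean of its neighbors' embeddings. This is a simplified illustration under stated assumptions (4-connectivity, mean aggregation, 1-D embeddings), not the paper's actual pipeline:

```python
from collections import defaultdict

def region_adjacency(labels):
    """Build a region adjacency graph from a 2D superpixel label grid:
    two regions are adjacent if any of their pixels are 4-connected."""
    adj = defaultdict(set)
    h, w = len(labels), len(labels[0])
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):  # right and down neighbors
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and labels[ny][nx] != labels[y][x]:
                    adj[labels[y][x]].add(labels[ny][nx])
                    adj[labels[ny][nx]].add(labels[y][x])
    return adj

def neighborhood_features(embeddings, adj):
    """Concatenate each region's embedding with the mean of its
    neighbors' embeddings (zeros if the region has no neighbors)."""
    enhanced = {}
    for region, emb in embeddings.items():
        neighbors = [embeddings[n] for n in adj[region]]
        if neighbors:
            mean = [sum(vals) / len(neighbors) for vals in zip(*neighbors)]
        else:
            mean = [0.0] * len(emb)
        enhanced[region] = list(emb) + mean
    return enhanced

# Tiny 3x3 grid with three superpixel regions and scalar embeddings.
labels = [[0, 0, 1],
          [0, 2, 1],
          [2, 2, 1]]
adj = region_adjacency(labels)
feats = neighborhood_features({0: [1.0], 1: [2.0], 2: [3.0]}, adj)
```

The enhanced vector for each superpixel thus carries both local appearance (its own embedding) and boundary context (its neighborhood average), which is the intuition behind classifying border superpixels more reliably.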

15 pages, 6891 KB  
Article
Artificial Intelligence-Assisted Biparametric MRI for Detecting Prostate Cancer—A Comparative Multireader Multicase Accuracy Study
by Daniel Nißler, Sabrina Reimers-Kipping, Maja Ingwersen, Frank Berger, Felix Niekrenz, Bernhard Theis, Fabian Hielscher, Philipp Franken, Nikolaus Gaßler, Marc-Oliver Grimm, Ulf Teichgräber and Tobias Franiel
J. Clin. Med. 2025, 14(17), 6111; https://doi.org/10.3390/jcm14176111 - 29 Aug 2025
Abstract
Objectives: To evaluate the diagnostic accuracy of AI-assisted biparametric MRI (AI-bpMRI) in detecting prostate cancer (PCa) as a possible replacement for multiparametric MRI (mpMRI) depending on readers’ experience. Methods: This fully crossed, multireader multicase, single-centre, consecutive study retrospectively included men with suspected PCa. Three radiologists with different levels of experience independently scored each participant’s biparametric (bp) MRI, mpMRI, and AI-bpMRI according to the PI-RADS V2.1 classification. The AI-assisted image processing was based on a sequential deep learning network. Histopathological findings were used as a reference. The study evaluated the mean areas under the receiver operating characteristic curves (AUCs) using the jackknife method for covariance. AUCs were tested for non-inferiority of AI-bpMRI to mpMRI (non-inferiority margin: −0.05). Results: A total of 105 men (mean age 66 ± 7 years) were evaluated. AI-bpMRI was non-inferior to mpMRI in detecting both Gleason score (GS) ≥ 3 + 4 PCa (AUC difference: 0.03 [95% CI: −0.03, 0.08], p = 0.37) and GS ≥ 3 + 3 PCa (AUC difference: 0.04 [95% CI: −0.01, 0.09], p = 0.14) and was superior to bpMRI in detecting GS ≥ 3 + 3 PCa (AUC difference: 0.07 [95% CI: 0.02, 0.12], p = 0.004). The benefit of AI-bpMRI was greatest for the readers with low or medium experience (AUC difference in detecting GS ≥ 3 + 4 compared to mpMRI: 0.06 [95% CI: −0.03, 0.14], p = 0.19 and 0.06 [95% CI: −0.03, 0.14], p = 0.19, respectively). Conclusions: This study indicates that AI-bpMRI detects PCa with a diagnostic accuracy comparable to that of mpMRI.
(This article belongs to the Section Nuclear Medicine & Radiology)
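The non-inferiority logic used in this study reduces to a simple margin check: AI-bpMRI is declared non-inferior when the lower bound of the 95% CI for the AUC difference (AI-bpMRI minus mpMRI) stays above the −0.05 margin. A minimal sketch applying that rule to the CI bounds reported above:

```python
def non_inferior(ci_lower: float, margin: float = -0.05) -> bool:
    """Declare non-inferiority when the lower bound of the 95% CI for
    the AUC difference (new test minus reference) exceeds the margin."""
    return ci_lower > margin

# CI lower bounds reported in the abstract:
gs34_ok = non_inferior(-0.03)  # GS >= 3+4: AUC diff 0.03 [-0.03, 0.08]
gs33_ok = non_inferior(-0.01)  # GS >= 3+3: AUC diff 0.04 [-0.01, 0.09]
```

Both comparisons clear the margin, which is why the study concludes non-inferiority even though neither AUC difference is itself statistically significant; a hypothetical CI reaching down to, say, −0.07 would fail the check.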
