Search Results (301)

Search Parameters:
Keywords = brain tumor segmentation

13 pages, 1736 KB  
Article
Automatic Brain Tumor Segmentation in 2D Intra-Operative Ultrasound Images Using Magnetic Resonance Imaging Tumor Annotations
by Mathilde Gajda Faanes, Ragnhild Holden Helland, Ole Solheim, Sébastien Muller and Ingerid Reinertsen
J. Imaging 2025, 11(10), 365; https://doi.org/10.3390/jimaging11100365 - 16 Oct 2025
Viewed by 226
Abstract
Automatic segmentation of brain tumors in intra-operative ultrasound (iUS) images could facilitate localization of tumor tissue during resection surgery, but the lack of large annotated datasets limits current models' performance. In this paper, we investigated the use of tumor annotations in magnetic resonance imaging (MRI) scans, which are more accessible than annotations in iUS images, for training deep learning models for iUS brain tumor segmentation. We used 180 annotated MRI scans with corresponding unannotated iUS images, and 29 annotated iUS images. Image registration was performed to transfer the MRI annotations to the corresponding iUS images before training the nnU-Net model with different configurations of data and label origins. A model trained with only MRI-derived tumor annotations performed similarly to models trained with only iUS annotations or with both, and to expert annotations, indicating that MRI tumor annotations can substitute for iUS tumor annotations when training a deep learning model for automatic brain tumor segmentation in iUS images. The best model obtained an average Dice score of 0.62 ± 0.31, compared to 0.67 ± 0.25 for an expert neurosurgeon; performance was similar on larger tumors but lower for the models on smaller tumors. In addition, removing smaller tumors from the training sets improved the results.
(This article belongs to the Special Issue Progress and Challenges in Biomedical Image Analysis—2nd Edition)
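
As a reference for the headline metric above, here is a minimal sketch of the Dice score on binary masks; the array shapes and toy masks are illustrative, not taken from the paper.

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Toy example: two overlapping square masks on a 2D ultrasound-sized slice.
pred = np.zeros((128, 128), dtype=bool); pred[40:80, 40:80] = True
truth = np.zeros((128, 128), dtype=bool); truth[50:90, 50:90] = True
print(f"Dice = {dice_score(pred, truth):.3f}")   # 0.562 for this toy pair
```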

27 pages, 3948 KB  
Article
Fully Automated Segmentation of Cervical Spinal Cord in Sagittal MR Images Using Swin-Unet Architectures
by Rukiye Polattimur, Emre Dandıl, Mehmet Süleyman Yıldırım and Utku Şenol
J. Clin. Med. 2025, 14(19), 6994; https://doi.org/10.3390/jcm14196994 - 2 Oct 2025
Viewed by 490
Abstract
Background/Objectives: The spinal cord is a critical component of the central nervous system that transmits neural signals between the brain and the body’s peripheral regions through its nerve roots. Despite being partially protected by the vertebral column, the spinal cord remains highly vulnerable to trauma, tumors, infections, and degenerative or inflammatory disorders. These conditions can disrupt neural conduction, resulting in severe functional impairments, such as paralysis, motor deficits, and sensory loss. Therefore, accurate and comprehensive spinal cord segmentation is essential for characterizing its structural features and evaluating neural integrity. Methods: In this study, we propose a fully automated method for segmentation of the cervical spinal cord in sagittal magnetic resonance (MR) images. This method facilitates rapid clinical evaluation and supports early diagnosis. Our approach uses a Swin-Unet architecture, which integrates vision transformer blocks into the U-Net framework. This enables the model to capture both local anatomical details and global contextual information. This design improves the delineation of the thin, curved, low-contrast cervical cord, resulting in more precise and robust segmentation. Results: In experimental studies, the proposed Swin-Unet model (SWU1), which uses transformer blocks in the encoder layer, achieved Dice Similarity Coefficient (DSC) and Hausdorff Distance 95 (HD95) scores of 0.9526 and 1.0707 mm, respectively, for cervical spinal cord segmentation. These results confirm that the model can consistently deliver precise, pixel-level delineations that are structurally accurate, which supports its reliability for clinical assessment. Conclusions: The attention-enhanced Swin-Unet architecture demonstrated high accuracy in segmenting thin and complex anatomical structures, such as the cervical spinal cord. Its ability to generalize with limited data highlights its potential for integration into clinical workflows to support diagnosis, monitoring, and treatment planning.
(This article belongs to the Special Issue Artificial Intelligence and Deep Learning in Medical Imaging)
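
For readers reproducing the DSC/HD95 evaluation, a sketch of a 95th-percentile Hausdorff distance on binary masks follows; the surface extraction and voxel-spacing handling shown are one common convention, not necessarily the paper's exact implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, binary_erosion

def hd95(pred: np.ndarray, truth: np.ndarray, spacing=(1.0, 1.0)) -> float:
    """95th-percentile symmetric Hausdorff distance between binary masks.
    Assumes both masks are non-empty."""
    def surface(mask):
        return mask & ~binary_erosion(mask)          # boundary voxels only
    sp, st = surface(pred.astype(bool)), surface(truth.astype(bool))
    # Distance from each surface voxel to the nearest surface of the other mask.
    dt_truth = distance_transform_edt(~st, sampling=spacing)
    dt_pred = distance_transform_edt(~sp, sampling=spacing)
    d = np.concatenate([dt_truth[sp], dt_pred[st]])
    return float(np.percentile(d, 95))
```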

26 pages, 10666 KB  
Article
FALS-YOLO: An Efficient and Lightweight Method for Automatic Brain Tumor Detection and Segmentation
by Liyan Sun, Linxuan Zheng and Yi Xin
Sensors 2025, 25(19), 5993; https://doi.org/10.3390/s25195993 - 28 Sep 2025
Viewed by 648
Abstract
Brain tumors are highly malignant diseases that severely threaten the nervous system and patients’ lives. MRI is a core technology for brain tumor diagnosis and treatment due to its high resolution and non-invasiveness. However, existing YOLO-based models face challenges in brain tumor MRI image detection and segmentation, such as insufficient multi-scale feature extraction and high computational resource consumption. This paper proposes an improved lightweight brain tumor detection and instance segmentation model named FALS-YOLO, based on YOLOv8n-Seg and integrating three key modules: FLRDown, AdaSimAM, and LSCSHN. FLRDown enhances multi-scale tumor perception, AdaSimAM suppresses noise and improves feature fusion, and LSCSHN achieves high-precision segmentation with reduced parameters and computational burden. Experiments on the tumor-otak dataset show that FALS-YOLO achieves Precision (B) of 0.892, Recall (B) of 0.858, and mAP@0.5 (B) of 0.912 in detection, and Precision (M) of 0.899, Recall (M) of 0.863, and mAP@0.5 (M) of 0.917 in segmentation, outperforming YOLOv5n-Seg, YOLOv8n-Seg, YOLOv9s-Seg, YOLOv10n-Seg and YOLOv11n-Seg. Compared with YOLOv8n-Seg, FALS-YOLO reduces parameters by 31.95%, computational cost by 20.00%, and model size by 32.31%. It provides an efficient, accurate, and practical solution for the automatic detection and instance segmentation of brain tumors in resource-limited environments.
(This article belongs to the Special Issue Emerging MRI Techniques for Enhanced Disease Diagnosis and Monitoring)
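
FALS-YOLO's custom modules are not public, but the Ultralytics API it builds on (YOLOv8n-Seg) can be exercised as below; the dataset YAML name, image path, and training hyperparameters are assumptions.

```python
from ultralytics import YOLO

# Baseline YOLOv8n-Seg; the FLRDown/AdaSimAM/LSCSHN modules described in the
# paper are custom and would be added via a modified model YAML.
model = YOLO("yolov8n-seg.pt")

# Hypothetical dataset config: tumor-otak assumed converted to the Ultralytics
# segmentation format, with splits and class names listed in the YAML.
model.train(data="tumor_otak.yaml", epochs=100, imgsz=640)

results = model.predict("mri_slice.png")   # inference on a single image
masks = results[0].masks                   # instance masks, if any detected
```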

13 pages, 1587 KB  
Article
Glioma Grading by Integrating Radiomic Features from Peritumoral Edema in Fused MRI Images and Automated Machine Learning
by Amir Khorasani
J. Imaging 2025, 11(10), 336; https://doi.org/10.3390/jimaging11100336 - 27 Sep 2025
Viewed by 430
Abstract
We aimed to investigate the utility of peritumoral edema-derived radiomic features from standard magnetic resonance imaging (MRI) weightings and fused MRI sequences for enhancing the performance of machine learning-based glioma grading. The present study utilized the Multimodal Brain Tumor Segmentation Challenge 2023 (BraTS 2023) dataset. Laplacian Re-decomposition (LRD) was employed to fuse multimodal MRI sequences. The fused image quality was evaluated using entropy, standard deviation (STD), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM) metrics. A comprehensive set of radiomic features was subsequently extracted from peritumoral edema regions using PyRadiomics. The Boruta algorithm was applied for feature selection, and an optimized classification pipeline was developed using the Tree-based Pipeline Optimization Tool (TPOT). Model performance for glioma grade classification was evaluated based on accuracy, precision, recall, F1-score, and area under the curve (AUC). Analysis of fused image quality metrics confirmed that the LRD method produces high-quality fused images. From 851 radiomic features extracted from peritumoral edema regions, the Boruta algorithm selected different sets of informative features in both standard MRI and fused images. Subsequent TPOT automated machine learning optimization identified a fine-tuned Stochastic Gradient Descent (SGD) classifier, trained on features from T1Gd+FLAIR fused images, as the top-performing model. This model achieved superior performance in glioma grade classification (Accuracy = 0.96, Precision = 1.0, Recall = 0.94, F1-Score = 0.96, AUC = 1.0). Radiomic features derived from peritumoral edema in fused MRI images using the LRD method demonstrated distinct, grade-specific patterns and can serve as the basis for a non-invasive, accurate, and rapid glioma grading method.
(This article belongs to the Topic Machine Learning and Deep Learning in Medical Imaging)
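
A sketch of the PyRadiomics extraction step named above; the file names are hypothetical, and the extractor uses default settings rather than the paper's configuration.

```python
import SimpleITK as sitk
from radiomics import featureextractor

# Default extractor with all feature classes enabled; the paper's exact
# PyRadiomics parameters are not given, so this setup is illustrative.
extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.enableAllFeatures()

image = sitk.ReadImage("t1gd_flair_fused.nii.gz")   # hypothetical fused volume
mask = sitk.ReadImage("edema_mask.nii.gz")          # peritumoral edema label

features = extractor.execute(image, mask)
radiomic = {k: v for k, v in features.items() if not k.startswith("diagnostics")}
print(f"{len(radiomic)} radiomic features extracted")
```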

28 pages, 2869 KB  
Article
Enhancing Medical Image Segmentation and Classification Using a Fuzzy-Driven Method
by Akmal Abduvaitov, Abror Shavkatovich Buriboev, Djamshid Sultanov, Shavkat Buriboev, Ozod Yusupov, Kilichov Jasur and Andrew Jaeyong Choi
Sensors 2025, 25(18), 5931; https://doi.org/10.3390/s25185931 - 22 Sep 2025
Viewed by 785
Abstract
Automated analysis for tumor segmentation and illness classification is hampered by the noise, low contrast, and ambiguity that are common in medical images. This work introduces a new 12-step fuzzy-based enhancement pipeline that uses fuzzy entropy, fuzzy standard deviation, and histogram spread functions to enhance image quality in CT, MRI, and X-ray modalities. The pipeline produces three improved versions per dataset, lowering BRISQUE scores from 28.8 to 21.7 (KiTS19), 30.3 to 23.4 (BraTS2020), and 26.8 to 22.1 (Chest X-ray). It is tested on KiTS19 (CT) for kidney tumor segmentation, BraTS2020 (MRI) for brain tumor segmentation, and Chest X-ray Pneumonia for classification, with a classic CNN trained on the original and CLAHE-filtered datasets serving as the baseline. A Concatenated CNN (CCNN) uses the improved datasets to achieve a Dice coefficient of 99.60% (KiTS19, +2.40% over baseline), segmentation accuracy of 0.983 (KiTS19) and 0.981 (BraTS2020) versus 0.959 and 0.943 (CLAHE), and classification accuracy of 0.974 (Chest X-ray) versus 0.917 (CLAHE). These outcomes demonstrate how well the pipeline improves image quality and increases segmentation/classification accuracy, offering a scalable and interpretable foundation for clinical diagnostics.
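
The 12-step pipeline itself is not reproduced here; as a generic stand-in for fuzzy contrast enhancement, the classic fuzzy INT operator can be sketched as follows.

```python
import numpy as np

def fuzzy_intensify(img: np.ndarray, passes: int = 1) -> np.ndarray:
    """Classic fuzzy INT contrast operator (a generic stand-in, not the
    paper's 12-step pipeline): map intensities to [0, 1] memberships, then
    repeatedly push memberships away from the crossover point 0.5."""
    mu = (img - img.min()) / (img.max() - img.min() + 1e-8)
    for _ in range(passes):
        mu = np.where(mu <= 0.5, 2 * mu**2, 1 - 2 * (1 - mu) ** 2)
    return (mu * 255).astype(np.uint8)

slice_ = (np.random.rand(256, 256) * 255).astype(np.uint8)  # stand-in image
enhanced = fuzzy_intensify(slice_, passes=2)
```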

30 pages, 3101 KB  
Review
Artificial Intelligence in the Diagnosis and Treatment of Brain Gliomas
by Kyriacos Evangelou, Ioannis Kotsantis, Aristotelis Kalyvas, Anastasios Kyriazoglou, Panagiota Economopoulou, Georgios Velonakis, Maria Gavra, Amanda Psyrri, Efstathios J. Boviatsis and Lampis C. Stavrinou
Biomedicines 2025, 13(9), 2285; https://doi.org/10.3390/biomedicines13092285 - 17 Sep 2025
Viewed by 1000
Abstract
Brain gliomas are highly infiltrative and heterogeneous tumors whose early and accurate detection and therapeutic management are challenging. Artificial intelligence (AI) has the potential to redefine the landscape of neuro-oncology: through deep learning-driven radiomics and radiogenomics, it can enhance glioma detection, imaging segmentation, and non-invasive molecular characterization beyond conventional diagnostic modalities. AI algorithms have been shown to predict genotypic and phenotypic glioma traits with remarkable accuracy and facilitate patient-tailored therapeutic decision-making. Such algorithms can be incorporated into surgical planning to optimize resection extent while preserving eloquent cortical structures through preoperative imaging fusion and intraoperative augmented reality-assisted navigation. Beyond resection, AI may assist in radiotherapy dose distribution optimization, ensuring maximal tumor control while minimizing collateral damage to surrounding tissue. AI-guided molecular profiling and treatment response prediction models can facilitate individualized chemotherapy regimen tailoring, especially for glioblastomas with MGMT promoter methylation. Applications in immunotherapy are emerging, with research focusing on AI to identify tumor microenvironment signatures predictive of immune checkpoint inhibition responsiveness. AI-integrated prognostic models incorporating radiomic, histopathologic, and clinical variables can additionally improve survival stratification and recurrence risk prediction, refining follow-up strategies in high-risk patients. However, data heterogeneity, algorithmic transparency concerns, and regulatory challenges hamper AI implementation in neuro-oncology despite its transformative potential. For clinical translation, it is therefore imperative to develop interpretable AI frameworks, integrate multimodal datasets, and validate models robustly on external data. Future research should prioritize the creation of generalizable AI models, combine larger and more diverse datasets, and integrate multimodal imaging and molecular data to overcome these obstacles and revolutionize AI-assisted, patient-specific glioma management.
(This article belongs to the Special Issue Mechanisms and Novel Therapeutic Approaches for Gliomas)

25 pages, 2304 KB  
Article
From Anatomy to Genomics Using a Multi-Task Deep Learning Approach for Comprehensive Glioma Profiling
by Akmalbek Abdusalomov, Sabina Umirzakova, Obidjon Bekmirzaev, Adilbek Dauletov, Abror Buriboev, Alpamis Kutlimuratov, Akhram Nishanov, Rashid Nasimov and Ryumduck Oh
Bioengineering 2025, 12(9), 979; https://doi.org/10.3390/bioengineering12090979 - 15 Sep 2025
Viewed by 731
Abstract
Background: Gliomas are among the most complex and lethal primary brain tumors, necessitating precise evaluation of both anatomical subregions and molecular alterations for effective clinical management. Methods: To address the disconnected nature of current bioimage analysis pipelines, in which MRI-based anatomical segmentation and molecular biomarker prediction are performed as separate tasks, we present Molecular-Genomic and Multi-Task Net (MGMT-Net), a single deep learning framework that processes multi-modal MRI data end to end. MGMT-Net incorporates a novel Cross-Modality Attention Fusion (CMAF) module that dynamically integrates diverse imaging sequences and pairs it with a hybrid Transformer–Convolutional Neural Network (CNN) encoder to capture both global context and local anatomical detail. This architecture supports dual-task decoders, enabling concurrent voxel-wise tumor delineation and subject-level classification of key genomic markers, including the IDH gene mutation, the 1p/19q co-deletion, and the TERT gene promoter mutation. Results: Extensive validation on the Brain Tumor Segmentation (BraTS 2024) dataset and the combined Cancer Genome Atlas/Erasmus Glioma Database (TCGA/EGD) datasets demonstrated high segmentation accuracy and robust biomarker classification performance, with strong generalizability across external institutional cohorts. Ablation studies further confirmed the importance of each architectural component in achieving overall robustness. Conclusions: MGMT-Net presents a scalable and clinically relevant solution that bridges radiological imaging and genomic insights, potentially reducing diagnostic latency and enhancing precision in neuro-oncology decision-making. By integrating spatial and genetic analysis within a single model, this work represents a significant step toward comprehensive, AI-driven glioma assessment.
(This article belongs to the Special Issue Mathematical Models for Medical Diagnosis and Testing)
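
A toy PyTorch sketch of the dual-task decoder idea (a voxel-wise segmentation head plus a subject-level biomarker head on shared encoder features); the channel counts and marker ordering are assumptions, and the CMAF module and hybrid encoder are not reproduced.

```python
import torch
import torch.nn as nn

class DualTaskHeads(nn.Module):
    """Illustrative dual-task decoders on a shared 3D feature map."""
    def __init__(self, channels: int = 64, n_classes: int = 4, n_markers: int = 3):
        super().__init__()
        self.seg_head = nn.Conv3d(channels, n_classes, kernel_size=1)
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(channels, n_markers))   # e.g., IDH, 1p/19q, TERT logits

    def forward(self, feats: torch.Tensor):
        return self.seg_head(feats), self.cls_head(feats)

feats = torch.randn(1, 64, 32, 32, 32)        # toy shared encoder output
seg_logits, marker_logits = DualTaskHeads()(feats)
```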

21 pages, 4721 KB  
Article
Automated Brain Tumor MRI Segmentation Using ARU-Net with Residual-Attention Modules
by Erdal Özbay and Feyza Altunbey Özbay
Diagnostics 2025, 15(18), 2326; https://doi.org/10.3390/diagnostics15182326 - 13 Sep 2025
Viewed by 701
Abstract
Background/Objectives: Accurate segmentation of brain tumors in Magnetic Resonance Imaging (MRI) scans is critical for diagnosis and treatment planning due to their life-threatening nature. This study aims to develop a robust and automated method capable of precisely delineating heterogeneous tumor regions while improving segmentation accuracy and generalization. Methods: We propose Attention Res-UNet (ARU-Net), a novel Deep Learning (DL) architecture integrating residual connections, Adaptive Channel Attention (ACA), and Dimensional-space Triplet Attention (DTA) modules. The encoding module efficiently extracts and refines relevant feature information by applying ACA to the lower layers of convolutional and residual blocks. The DTA is applied to the upper layers of the decoding module, decoupling channel weights to better extract and fuse multi-scale features and enhancing both performance and efficiency. Input MRI images are pre-processed using Contrast Limited Adaptive Histogram Equalization (CLAHE) for contrast enhancement, denoising filters, and Linear Kuwahara filtering to preserve edges while smoothing homogeneous regions. The network is trained using categorical cross-entropy loss with the Adam optimizer on the BTMRII dataset, and comparative experiments are conducted against baseline U-Net, DenseNet121, and Xception models. Performance is evaluated using accuracy, precision, recall, F1-score, Dice Similarity Coefficient (DSC), and Intersection over Union (IoU) metrics. Results: Baseline U-Net showed significant performance gains after adding residual connections and ACA modules, with DSC improving by approximately 3.3%, accuracy by 3.2%, IoU by 7.7%, and F1-score by 3.3%. ARU-Net further enhanced segmentation performance, achieving 98.3% accuracy, 98.1% DSC, 96.3% IoU, and a superior F1-score, representing additional improvements of 1.1–2.0% over the U-Net + Residual + ACA variant. Visualizations confirmed smoother boundaries and more precise tumor contours across all six tumor classes, highlighting ARU-Net’s ability to capture heterogeneous tumor structures and fine structural details more effectively than both baseline U-Net and other conventional DL models. Conclusions: ARU-Net, combined with an effective pre-processing strategy, provides a highly reliable and precise solution for automated brain tumor segmentation. Its improvements across multiple evaluation metrics over U-Net and other conventional models highlight its potential for clinical application and contribute novel insights to medical image analysis research.
(This article belongs to the Special Issue Advances in Functional and Structural MR Image Analysis)
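
A sketch of the contrast-enhancement part of the pre-processing using OpenCV's CLAHE; the denoising choice and parameters are illustrative, and the Kuwahara step is left to third-party packages such as pykuwahara.

```python
import cv2
import numpy as np

def preprocess(slice_u8: np.ndarray) -> np.ndarray:
    """CLAHE contrast enhancement plus non-local means denoising,
    approximating (not reproducing) the paper's pre-processing chain."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(slice_u8)                  # local contrast boost
    return cv2.fastNlMeansDenoising(enhanced, h=10)   # structure-preserving denoise

img = (np.random.rand(256, 256) * 255).astype(np.uint8)  # stand-in MRI slice
out = preprocess(img)
```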

22 pages, 3585 KB  
Article
A Novel 3D U-Net–Vision Transformer Hybrid with Multi-Scale Fusion for Precision Multimodal Brain Tumor Segmentation in 3D MRI
by Fathia Ghribi and Fayçal Hamdaoui
Electronics 2025, 14(18), 3604; https://doi.org/10.3390/electronics14183604 - 11 Sep 2025
Viewed by 848
Abstract
In recent years, segmentation for medical applications using Magnetic Resonance Imaging (MRI) has received increasing attention and remains a major challenge for researchers; in particular, brain tumor segmentation from MRI is crucial for accurate diagnosis, treatment planning, and patient monitoring. With the rapid development of deep learning methods, significant improvements have been made in medical image segmentation. Convolutional Neural Networks (CNNs), such as U-Net, have shown excellent performance in capturing local spatial features. However, these models cannot explicitly capture long-range dependencies. Vision Transformers (ViTs) have therefore emerged as an alternative segmentation method, as they can exploit long-range correlations through the self-attention mechanism (MSA). Despite their effectiveness, ViTs require large annotated datasets and may compromise fine-grained spatial details. To address these problems, we propose a novel hybrid approach for brain tumor segmentation that combines a 3D U-Net with a 3D Vision Transformer (ViT3D), aiming to jointly exploit local feature extraction and global context modeling. Additionally, we developed an effective fusion method that uses upsampling and convolutional refinement to improve multi-scale feature integration. Unlike traditional fusion approaches, our method explicitly refines spatial details while maintaining global dependencies, improving the quality of tumor border delineation. We evaluated our approach on the BraTS 2020 dataset, achieving a global accuracy of 99.56%, an average Dice similarity coefficient (DSC) of 77.43% (the mean across the three tumor subregions), with individual Dice scores of 84.35% for WT, 80.97% for TC, and 66.97% for ET, and an average Intersection over Union (IoU) of 71.69%. These extensive experimental results demonstrate that our model not only localizes tumors with high accuracy and robustness but also outperforms a selection of current state-of-the-art methods, including U-Net, SwinUnet, M-Unet, and others.
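
A minimal PyTorch sketch of the upsample-and-refine fusion idea described above; the channel sizes and normalization choices are assumptions rather than the authors' exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionBlock(nn.Module):
    """ViT features on a coarse grid are upsampled to the CNN resolution,
    concatenated with CNN features, and refined by a convolution."""
    def __init__(self, cnn_ch: int = 32, vit_ch: int = 96, out_ch: int = 32):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv3d(cnn_ch + vit_ch, out_ch, 3, padding=1),
            nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True))

    def forward(self, cnn_feat, vit_feat):
        vit_up = F.interpolate(vit_feat, size=cnn_feat.shape[2:],
                               mode="trilinear", align_corners=False)
        return self.refine(torch.cat([cnn_feat, vit_up], dim=1))

cnn_feat = torch.randn(1, 32, 64, 64, 64)   # local features from the 3D U-Net
vit_feat = torch.randn(1, 96, 8, 8, 8)      # global features from ViT3D (toy)
fused = FusionBlock()(cnn_feat, vit_feat)
```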

18 pages, 2228 KB  
Article
Artificial Intelligence-Based MRI Segmentation for the Differential Diagnosis of Single Brain Metastasis and Glioblastoma
by Daniela Pomohaci, Emilia-Adriana Marciuc, Bogdan-Ionuț Dobrovăț, Mihaela-Roxana Popescu, Ana-Cristina Istrate, Oriana-Maria Onicescu (Oniciuc), Sabina-Ioana Chirica, Costin Chirica and Danisia Haba
Diagnostics 2025, 15(17), 2248; https://doi.org/10.3390/diagnostics15172248 - 5 Sep 2025
Viewed by 1817
Abstract
Background/Objectives: Glioblastomas (GBMs) and brain metastases (BMs) are both frequent brain lesions. Distinguishing between them is crucial for suitable therapeutic and follow-up decisions, but this distinction is difficult to achieve, as it requires clinical, radiological, and histopathological correlation. Non-invasive AI analysis of conventional and advanced MRI techniques can help overcome this issue. Methods: We retrospectively selected 78 patients with confirmed GBM (39) and single BM (39), with conventional MRI investigations consisting of T2W FLAIR and CE T1W acquisitions. The MRI images (DICOM) were evaluated by an AI segmentation tool, comparatively evaluating tumor heterogeneity and peripheral edema. Results: We found that GBMs are less edematous than BMs (p = 0.04) but have more internal necrosis (p = 0.002). Of the BM primary cancer molecular subtypes, NSCLC showed the highest grade of edema (p = 0.01). Compared with the ellipsoidal method of volume calculation, the AI tool obtained greater values when measuring lesions of the occipital and temporal lobes (p = 0.01). Conclusions: Although extremely useful in radiomics analysis, automated segmentation applied alone could effectively differentiate GBM and BM on conventional MRI by calculating the ratio between their variable components (solid, necrotic, and peripheral edema). Further studies with a broader set of participants are necessary to evaluate the efficacy of automated segmentation.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
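
For context on the ellipsoidal volume method used as the comparator, a small sketch of the standard ellipsoid formula (with the common ABC/2 clinical shorthand) follows; the example diameters are illustrative.

```python
import math

def ellipsoid_volume_ml(a_mm: float, b_mm: float, c_mm: float) -> float:
    """Ellipsoidal lesion volume from three orthogonal diameters (mm):
    V = (pi/6) * A * B * C, often approximated clinically as ABC/2."""
    return math.pi / 6 * a_mm * b_mm * c_mm / 1000.0   # mm^3 -> mL

# A 30 x 25 x 20 mm lesion: exact ellipsoid vs the ABC/2 shorthand.
print(ellipsoid_volume_ml(30, 25, 20))   # ~7.85 mL
print(30 * 25 * 20 / 2 / 1000.0)         # 7.5 mL
```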

27 pages, 6135 KB  
Article
A Unified Deep Learning Framework for Robust Multi-Class Tumor Classification in Skin and Brain MRI
by Mohamed A. Sayedelahl, Ahmed G. Gad, Reham M. Essa, Zakaria G. Hussein and Amr A. Abohany
Technologies 2025, 13(9), 401; https://doi.org/10.3390/technologies13090401 - 3 Sep 2025
Viewed by 1125
Abstract
Early detection of cancer is critical for effective treatment, particularly for aggressive malignancies like skin cancer and brain tumors. This research presents an integrated deep learning approach combining augmentation, segmentation, and classification techniques to identify diverse tumor types in skin lesions and brain MRI scans. Our method employs a fine-tuned InceptionV3 convolutional neural network trained on a multi-modal dataset comprising dermatoscopy images from the Human Against Machine archive and brain MRI scans from the ISIC 2023 repository. To address class imbalance, we implement advanced preprocessing and Generative Adversarial Network (GAN)-based augmentation. The model achieves 97% accuracy in classifying images across ten categories: seven skin cancer types, multiple brain tumor variants, and an “undefined” class. These results suggest clinical applicability for multi-cancer detection.
(This article belongs to the Special Issue Application of Artificial Intelligence in Medical Image Analysis)
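
A hedged Keras sketch of fine-tuning InceptionV3 for a ten-class problem; the input size, head design, and optimizer settings are assumptions, and the GAN augmentation stage is omitted.

```python
import tensorflow as tf

# Illustrative transfer-learning setup: frozen ImageNet backbone plus a
# small classification head for the ten categories described in the paper.
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False                        # freeze backbone for stage one

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 tumor/lesion classes
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```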

38 pages, 13994 KB  
Article
Post-Heuristic Cancer Segmentation Refinement over MRI Images and Deep Learning Models
by Panagiotis Christakakis and Eftychios Protopapadakis
AI 2025, 6(9), 212; https://doi.org/10.3390/ai6090212 - 2 Sep 2025
Viewed by 997
Abstract
Lately, deep learning methods have greatly improved the accuracy of brain-tumor segmentation, yet slice-wise inconsistencies still limit reliable use in clinical practice. While volume-aware 3D convolutional networks achieve high accuracy, their memory footprint and inference time may limit clinical adoption. This study proposes a resource-conscious pipeline for lower-grade-glioma delineation in axial FLAIR MRI that combines a 2D Attention U-Net with a guided post-processing refinement step. Two segmentation backbones, a vanilla U-Net and an Attention U-Net, are trained on 110 TCGA-LGG axial FLAIR patient volumes under various loss functions and activation functions. The Attention U-Net, optimized with Dice loss, delivers the strongest baseline, achieving a mean Intersection-over-Union (mIoU) of 0.857. To mitigate slice-wise inconsistencies inherent to 2D models, a White-Area Overlap (WAO) voting mechanism quantifies the tumor footprint shared by neighboring slices. The WAO curve is smoothed with a Gaussian filter to locate its peak, after which a percentile-based heuristic selectively relabels the most ambiguous softmax pixels. Cohort-level analysis shows that removing merely 0.1–0.3% of ambiguous low-confidence pixels lifts the post-processing mIoU above the baseline while improving segmentation for two-thirds of patients. The proposed refinement strategy holds great potential for further improvement, offering a practical route for integrating deep learning segmentation into routine clinical workflows with minimal computational overhead.
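
A sketch of the WAO voting signal with Gaussian smoothing; the percentile-based relabeling step is omitted and the toy masks are random, so this shows only the shape of the heuristic, not the paper's full pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def wao_curve(masks: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """White-Area Overlap per slice: tumor pixels shared with neighboring
    slices, Gaussian-smoothed. `masks` is a (slices, H, W) binary stack."""
    overlap = np.zeros(len(masks))
    for i in range(1, len(masks) - 1):
        overlap[i] = (masks[i] & (masks[i - 1] | masks[i + 1])).sum()
    return gaussian_filter1d(overlap, sigma)

masks = np.random.rand(40, 128, 128) > 0.7   # toy per-slice predictions
curve = wao_curve(masks)
peak = int(np.argmax(curve))                 # peak locates the core tumor slices
```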

24 pages, 2959 KB  
Article
From Detection to Diagnosis: An Advanced Transfer Learning Pipeline Using YOLO11 with Morphological Post-Processing for Brain Tumor Analysis for MRI Images
by Ikram Chourib
J. Imaging 2025, 11(8), 282; https://doi.org/10.3390/jimaging11080282 - 21 Aug 2025
Viewed by 1393
Abstract
Accurate and timely detection of brain tumors from magnetic resonance imaging (MRI) scans is critical for improving patient outcomes and informing therapeutic decision-making. However, the complex heterogeneity of tumor morphology, scarcity of annotated medical data, and computational demands of deep learning models present substantial challenges for developing reliable automated diagnostic systems. In this study, we propose a robust and scalable deep learning framework for brain tumor detection and classification, built upon an enhanced YOLO-v11 architecture combined with a two-stage transfer learning strategy. The first stage involves training a base model on a large, diverse MRI dataset. Upon achieving a mean Average Precision (mAP) exceeding 90%, this model is designated the Brain Tumor Detection Model (BTDM). In the second stage, the BTDM is fine-tuned on a structurally similar but smaller dataset to form the Brain Tumor Detection and Segmentation (BTDS) model, effectively leveraging domain transfer to maintain performance despite limited data. The model is further optimized through domain-specific data augmentation, including geometric transformations, to improve generalization and robustness. Experimental evaluations on publicly available datasets show that the framework achieves high mAP@0.5 scores (up to 93.5% for the BTDM and 91% for the BTDS) and consistently outperforms existing state-of-the-art methods across multiple tumor types, including glioma, meningioma, and pituitary tumors. In addition, a post-processing module enhances interpretability by generating segmentation masks and extracting clinically relevant metrics such as tumor size and severity level. These results underscore the potential of our approach as a high-performance, interpretable, and deployable clinical decision-support tool, contributing to the advancement of intelligent real-time neuro-oncological diagnostics.
(This article belongs to the Topic Machine Learning and Deep Learning in Medical Imaging)
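
The paper's post-processing module is not public; a generic morphological clean-up of a binary tumor mask, from which simple size metrics could be derived, might look like the sketch below (the kernel size is an assumption).

```python
import cv2
import numpy as np

def refine_mask(mask_u8: np.ndarray) -> np.ndarray:
    """Generic morphological clean-up: opening removes speckle, closing
    fills small holes. Not the paper's exact post-processing rules."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    opened = cv2.morphologyEx(mask_u8, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)

mask = (np.random.rand(256, 256) > 0.5).astype(np.uint8) * 255  # toy mask
cleaned = refine_mask(mask)
area_px = int(np.count_nonzero(cleaned))    # simple tumor-size proxy
```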

19 pages, 2017 KB  
Article
Segmentation of Brain Tumors Using a Multi-Modal Segment Anything Model (MSAM) with Missing Modality Adaptation
by Jiezhen Xing and Jicong Zhang
Bioengineering 2025, 12(8), 871; https://doi.org/10.3390/bioengineering12080871 - 12 Aug 2025
Viewed by 1462
Abstract
This paper presents a novel multi-modal segment anything model (MSAM) for glioma tumor segmentation using structural MRI images and diffusion tensor imaging data. We designed a multimodal feature fusion block to effectively integrate features from different data modalities, thereby improving the accuracy of brain tumor segmentation, and a missing-modality training method to address the absence of modalities in real clinical scenarios. To evaluate the effectiveness of MSAM, a series of experiments were conducted comparing its performance with U-Net across various modality combinations. The results demonstrate that MSAM consistently outperforms U-Net in terms of both Dice Similarity Coefficient and 95% Hausdorff Distance, particularly when structural modality data are used alone. Through feature visualization and missing-modality training, we show that MSAM can effectively adapt to missing data, providing robust segmentation even when key modalities are absent. Additionally, segmentation accuracy is influenced by tumor region size, with smaller regions presenting more challenges. These findings underscore the potential of MSAM in clinical applications where incomplete data or varying tumor sizes are prevalent.
(This article belongs to the Special Issue Artificial Intelligence-Based Medical Imaging Processing)
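
One common way to implement missing-modality training is modality dropout, sketched below; the authors' exact adaptation scheme may differ, and the channel layout (one modality per channel) is an assumption.

```python
import torch

def modality_dropout(x: torch.Tensor, p: float = 0.25,
                     training: bool = True) -> torch.Tensor:
    """Zero out each input modality (channel) with probability p during
    training so the model learns to segment when modalities are absent."""
    if not training:
        return x
    keep = (torch.rand(x.shape[0], x.shape[1], 1, 1, 1,
                       device=x.device) > p).to(x.dtype)
    return x * keep

batch = torch.randn(2, 4, 64, 64, 64)   # toy batch: 4 MRI/DTI modalities
augmented = modality_dropout(batch)
```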

24 pages, 948 KB  
Review
A Review on Deep Learning Methods for Glioma Segmentation, Limitations, and Future Perspectives
by Cecilia Diana-Albelda, Álvaro García-Martín and Jesus Bescos
J. Imaging 2025, 11(8), 269; https://doi.org/10.3390/jimaging11080269 - 11 Aug 2025
Viewed by 1372
Abstract
Accurate and automated segmentation of gliomas from Magnetic Resonance Imaging (MRI) is crucial for effective diagnosis, treatment planning, and patient monitoring. However, the aggressive nature and morphological complexity of these tumors pose significant challenges that call for advanced segmentation techniques. This review provides a comprehensive analysis of Deep Learning (DL) methods for glioma segmentation, with a specific focus on bridging the gap between research performance and practical clinical deployment. We evaluate over 80 state-of-the-art models published up to 2025, categorizing them into CNN-based, Pure Transformer, and Hybrid CNN-Transformer architectures. The primary objective of this paper is to critically assess these models not only on their segmentation accuracy but also on their computational efficiency and suitability for real-world medical environments, incorporating hardware resource considerations. We present a comparison of model performance on the BraTS benchmark datasets and introduce a suitability analysis for top-performing models based on their robustness, efficiency, and completeness of tumor region delineation. By identifying current trends, limitations, and key trade-offs, this review offers future research directions aimed at optimizing the balance between technical performance and clinical usability to improve diagnostic outcomes for glioma patients.
(This article belongs to the Section Medical Imaging)
