Search Results (134)

Search Parameters:
Keywords = Brain Tumor Segmentation Challenge

46 pages, 3952 KB  
Article
A Hybrid Particle Swarm–Genetic Algorithm Framework for U-Net Hyperparameter Optimization in High-Precision Brain Tumor MRI Segmentation
by Shoffan Saifullah, Rafał Dreżewski, Anton Yudhana, Radius Tanone and Andiko Putro Suryotomo
Appl. Sci. 2026, 16(6), 3041; https://doi.org/10.3390/app16063041 - 21 Mar 2026
Abstract
Accurate and robust brain tumor segmentation remains a critical challenge in medical image analysis due to high inter-patient variability, complex tumor morphology, and modality-specific noise in MRI scans. This study proposes PSO-GA-U-Net, a novel hybrid deep learning framework that integrates Particle Swarm Optimization (PSO) and Genetic Algorithms (GAs) to optimize the U-Net architecture, enhancing segmentation performance and generalization. PSO dynamically tunes the learning rate to accommodate modality-specific variations, while the GA adaptively regulates dropout to improve feature diversity and reduce overfitting. The model was evaluated on three benchmark datasets—FBTS, BraTS 2021, and BraTS 2018—using five-fold cross-validation. PSO-GA-U-Net achieves Dice Similarity Coefficients (DSC) of 0.9587, 0.9406, and 0.9480 and Jaccard Index (JI) scores of 0.9209, 0.8881, and 0.9024, respectively, consistently outperforming state-of-the-art models in both overlap accuracy and boundary delineation. Statistical tests confirm that these improvements are significant across folds (p<0.05). Visual heatmaps further illustrate the model’s ability to preserve structural integrity across tumor types and modalities. These results indicate that metaheuristic-guided deep learning offers a promising and clinically applicable solution for automatic tumor segmentation in radiological workflows. Full article
(This article belongs to the Special Issue Advanced Techniques and Applications in Magnetic Resonance Imaging)
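The abstract above reports results as Dice Similarity Coefficients (DSC) and Jaccard Index (JI) scores. Both are standard overlap measures between a predicted and a reference binary mask; a minimal NumPy sketch (the function names are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred, target):
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom else 1.0

def jaccard_index(pred, target):
    """JI = |A ∩ B| / |A ∪ B|; related to DSC by JI = DSC / (2 - DSC)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return intersection / union if union else 1.0

# Toy 2D "segmentation": 2 overlapping pixels, 3 foreground pixels each
pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_coefficient(pred, target))  # 2*2 / (3+3) = 0.666...
print(jaccard_index(pred, target))     # 2 / 4 = 0.5
```

The identity JI = DSC / (2 − DSC) makes it easy to sanity-check reported score pairs such as those in the abstract.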

20 pages, 1689 KB  
Article
Optimization-Driven Multimodal Brain Tumor Segmentation Using α-Expansion Graph Cuts
by Roaa Soloh, Bilal Nakhal and Abdallah El Chakik
Computation 2026, 14(3), 70; https://doi.org/10.3390/computation14030070 - 15 Mar 2026
Abstract
Precise segmentation of brain tumors from multimodal MRI scans is essential for accurate neuro-oncological diagnosis and treatment planning. To address this challenge, we propose a label-free optimization-driven segmentation framework based on the α-expansion graph cut algorithm, offering improved computational efficiency and interpretability compared to deep learning alternatives. The method relies on structured optimization and handcrafted features, including local intensity patches, entropy-based texture descriptors, and statistical moments, to compute voxel-wise unary potentials via gradient-boosted decision trees (XGBoost). These are integrated with spatially adaptive pairwise terms within a graph model optimized through α-expansion. Evaluation on 146 BraTS validation volumes demonstrates reliable whole-tumor overlap, with a mean Dice score of 0.855 ± 0.184 and a 95% Hausdorff distance of 18.66 mm. Bootstrap analysis confirms the statistical stability of these results. The low computational overhead and modular design make the method particularly suitable for transparent and resource-constrained clinical deployment scenarios. Full article
(This article belongs to the Section Computational Biology)
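The α-expansion graph cut described above minimizes a discrete energy combining per-voxel unary potentials (here, from XGBoost) with neighborhood pairwise terms. Running α-expansion itself requires a max-flow solver, but the objective it minimizes can be sketched directly; the plain Potts pairwise term below is an illustrative stand-in for the paper's spatially adaptive terms, and all names are hypothetical:

```python
import numpy as np

def segmentation_energy(labels, unary, smoothness=1.0):
    """Energy of a labeling on a 2D grid:
    E(x) = sum_i theta_i(x_i) + lam * sum_{(i,j) adjacent} [x_i != x_j].
    unary[r, c, l] is the cost of assigning label l to pixel (r, c);
    the pairwise part is a Potts penalty on 4-connected neighbors."""
    rows, cols = labels.shape
    data_term = unary[np.arange(rows)[:, None], np.arange(cols)[None, :], labels].sum()
    # Count label disagreements across horizontal and vertical neighbor pairs
    pairwise = (labels[:, 1:] != labels[:, :-1]).sum() + (labels[1:, :] != labels[:-1, :]).sum()
    return data_term + smoothness * pairwise

# Toy example: 2x2 grid, 2 labels
unary = np.array([[[0.1, 0.9], [0.2, 0.8]],
                  [[0.7, 0.3], [0.9, 0.1]]])
labels = np.array([[0, 0], [1, 1]])
print(segmentation_energy(labels, unary, smoothness=0.5))  # 0.7 + 0.5*2 = 1.7
```

α-expansion repeatedly sweeps over labels α, solving one binary max-flow problem per sweep that lets any pixel either keep its label or switch to α, guaranteeing each move can only decrease this energy.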

24 pages, 3704 KB  
Article
Source-Free Active Domain Adaptation for Brain Tumor Segmentation via Mamba and Region-Level Uncertainty
by Haowen Zheng, Che Wang, Yudan Zhou, Congbo Cai and Zhong Chen
Brain Sci. 2026, 16(3), 300; https://doi.org/10.3390/brainsci16030300 - 8 Mar 2026
Abstract
Background/Objectives: Accurate brain tumor segmentation from MRI is crucial for diagnosis but faces challenges like domain shifts across medical centers, data privacy constraints, and high annotation costs. While source-free active domain adaptation (SFADA) emerges as a promising solution to these issues, existing approaches often overlook the inherent structural complexity in tumor regions. Methods: We propose a novel SFADA framework composed of two major contributions. First, we introduce a Region-level Uncertainty-Guided Sample Selection (RUGS) strategy, enabling the identification of the most informative target-domain samples in a single inference pass. Second, we present the Source-Free Active Domain Adaptation Network (SFADA-Net), a Mamba-driven segmentation model equipped with a dual-path multi-kernel convolution module for enhanced local feature interaction and a structure-aware prompted Mamba module for capturing global spatial relationships. Results: Extensive evaluations across one source domain dataset (BraTS-2021) and three target domain datasets (BraTS-SSA, BraTS-PED, and BraTS-MEN 2023) demonstrate the superior adaptability of the proposed method, achieving consistently high segmentation accuracy across domains. With only 5% annotation budget, our framework consistently outperforms state-of-the-art segmentation and domain adaptation methods, achieving robust segmentation accuracy across diverse domains and approaching the performance of fully supervised learning. Conclusions: The proposed method achieves superior accuracy in brain tumor region segmentation and precise boundary delineation under a limited annotation budget. It effectively mitigates domain shift while fully complying with data privacy regulations. Consequently, our framework relieves manual annotation bottlenecks and accelerates the cross-center deployment of accurate diagnostic tools, facilitating the clinical application of domain adaptation. Full article

38 pages, 15512 KB  
Article
Improving Brain Tumor Detection by Cortical Surface and Vessels Segmentation Through RGB-to-HSI Transfer Learning
by Guillermo Vazquez, Alberto Martín-Pérez, Angel Perez-Nuñez, Alfonso Lagares, Eduardo Juarez and Cesar Sanz
Cancers 2026, 18(5), 857; https://doi.org/10.3390/cancers18050857 - 6 Mar 2026
Abstract
Background: Accurate in vivo brain tumor detection using hyperspectral imaging (HSI), a non-invasive technique that captures spectral information beyond the visible range, is challenging due to the complexity of biological tissues and the difficulty in distinguishing malignant from healthy areas. Conventional neural-network-based methods often misclassify tumor tissue as blood vessels, largely due to high vascularization and the scarcity of annotated data. Method: To address this issue, this work proposes an underexplored approach that decomposes the problem into two tasks: (1) segmentation of the brain cortical surface and its blood vessels, and (2) segmentation of biological tissues within the segmented craniotomy site. The cortical segmentation task is addressed independently of the segmentation model used in the second stage. To achieve this, a set of pseudo-labels is generated from RGB and HSI captures acquired during in vivo brain surgeries. These pseudo-labels support a multimodal training strategy that leverages both imaging domains, yielding a model capable of segmenting the craniotomy site and the blood vessels contained in it. The model is further refined on HSI using weakly supervised fine-tuning with sparse ground truth annotations. Results: The final segmentation map combines cortical and tissue segmentation outputs, considering only cortex pixels not overlapped by vessels as potential tumor regions. This simplifies the HSI tissue segmentation task, reframing it as a binary segmentation of healthy vs. other tissues, while still enabling a comprehensive multiclass output. Conclusions: The proposed method achieves up to a 15.48% increase in F1 score for the tumor class, while segmenting the brain cortex with a mean Dice similarity coefficient (DSC) of 92.08% and accurately detecting 95.42% of labeled blood vessel samples in the HSI dataset. Full article

30 pages, 8409 KB  
Article
SCAG-Net: Automated Brain Tumor Prediction from MRI Using Cuttlefish-Optimized Attention-Based Graph Networks
by Vijay Govindarajan, Ashit Kumar Dutta, Amr Yousef, Mohd Anjum, Ali Elrashidi and Sana Shahab
Diagnostics 2026, 16(4), 565; https://doi.org/10.3390/diagnostics16040565 - 13 Feb 2026
Abstract
Background/Objectives: Earlier, more accurate, and more consistent brain tumor recognition requires automated systems that minimize diagnostic delays and human error. Such systems provide a platform for handling large volumes of medical images, speeding up clinical decision-making. However, existing systems struggle with the high variability in tumor location, size, and shape, which complicates segmentation. In addition, glioma-related tumors infiltrate brain tissue, making it challenging to identify the exact tumor region. Method: These difficulties are addressed by applying a Swin-UNet with cuttlefish-optimized attention-based Graph Neural Networks (SCAG-Net), improving overall brain tumor recognition accuracy. This integrated approach targets infiltrative gliomas, tumor variability, and feature redundancy while improving diagnostic efficiency. First, the collected MRI images are processed with the Swin-UNet to robustly identify the tumor region and minimize prediction error. Features of that region are then selected with the cuttlefish algorithm, which removes redundant features and speeds up classification. The selected features are finally processed by the attention-based graph network, which handles structural and heterogeneous information across multiple layers, improving classification accuracy over existing methods. Results: Evaluated on the public BRATS 2018, BRATS 2019, BRATS 2020, and Figshare datasets, the proposed SCAG-Net achieved a Dice coefficient of 0.989, an Intersection over Union of 0.969, and a classification accuracy of 0.992, surpassing recent benchmark models by margins of 1.0% to 1.8% with statistically significant differences (p < 0.05). These findings present a statistically validated, computationally efficient, clinically deployable framework. Conclusions: The proposed SCAG-Net framework significantly improves brain tumor recognition by addressing tumor heterogeneity and infiltrative gliomas in MRI images, providing a robust, efficient, and clinically deployable solution that supports accurate and rapid diagnosis while maintaining expert-level performance. Full article

17 pages, 1286 KB  
Article
Brain Tumor Segmentation with Contextual Transformer-Based U-Net
by Shakhnoza Muksimova, Jushkin Baltaev and Young Im Cho
Electronics 2026, 15(4), 782; https://doi.org/10.3390/electronics15040782 - 12 Feb 2026
Abstract
Segmentation of brain tumors from magnetic resonance imaging (MRI) scans is an important challenge in medicine, with a direct impact on correct diagnosis, efficient treatment planning, and patient prognosis. We present the Contextual Transformer U-Net (CT-UNet), a novel deep learning approach that significantly increases the accuracy and speed of brain tumor segmentation. CT-UNet embeds Transformer blocks in a U-Net layout to extract the most important contextual information across different MRI sequences, substantially refining the delineation of tumor regions. We tested CT-UNet on the Brain Tumor Segmentation (BraTS) challenge dataset, which covers a wide variety of tumor types, localizations, and progression stages, and evaluated the model with the Dice coefficient, sensitivity, specificity, precision, and Hausdorff distance. Our experiments show that CT-UNet has a substantial advantage over classical segmentation models: its Dice coefficient of 0.92 reflects state-of-the-art tumor localization in both extent and form. CT-UNet also achieves high sensitivity (0.90) and specificity (0.94), discriminating tumor from non-tumor tissue reliably. Spatial accuracy is likewise improved, as shown by the model's 7.5 mm Hausdorff distance, meaning it closely follows the given tumor boundaries. By employing dynamic modality fusion and incorporating the Transformer mechanism into the established U-Net architecture, this work raises the bar for brain tumor segmentation. CT-UNet not only speeds up the radiologist's workflow but also facilitates more targeted therapeutic strategies that may result in better patient care and prognosis. Beyond that, this work aims to provide a basis for future studies incorporating deep learning methods into routine clinical settings, so that healthcare providers can benefit from both technical and clinical advantages. Full article

12 pages, 781 KB  
Proceeding Paper
Bayesian Optimization-Driven U-Net Architecture Tuning for Brain Tumor Segmentation
by Shoffan Saifullah and Rafał Dreżewski
Eng. Proc. 2026, 124(1), 22; https://doi.org/10.3390/engproc2026124022 - 9 Feb 2026
Abstract
Precise brain tumor segmentation from magnetic resonance imaging (MRI) scans is critical for clinical diagnosis and treatment planning. However, determining an optimal deep learning architecture for such tasks remains a challenge due to the vast hyperparameter space and structural variations. This paper presents a novel approach that integrates Bayesian Optimization (BO) to automatically tune the U-Net architecture for effective brain tumor segmentation. The proposed BO-UNet framework searches over encoder, bottleneck, and decoder configurations using a Gaussian Process-based surrogate model, guided by a fitness function derived from Dice Similarity Coefficient (DSC) and Jaccard Index (JI). Experiments were conducted on two benchmark datasets: the Figshare Brain Tumor Segmentation (FBTS) dataset and the BraTS 2021 dataset (focused on Whole Tumor segmentation). The best-discovered architecture [64, 64, 64, 256, 64, 128, 256] achieved notable performance: on the FBTS dataset, it reached 0.9503 DSC and 0.9054 JI; on BraTS 2021, it obtained 0.9261 DSC and 0.8631 JI, outperforming several state-of-the-art methods. Convergence and segmentation-map evolution confirm that BO effectively guided the architectural search process. These findings demonstrate the potential of BO-driven deep learning in medical imaging, opening new avenues for architecture-level optimization with minimal manual intervention. Full article
(This article belongs to the Proceedings of The 6th International Electronic Conference on Applied Sciences)
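The BO loop described above — a Gaussian-process surrogate plus an acquisition function proposing the next configuration — can be sketched on a 1-D toy problem. The quadratic `fitness` below stands in for the paper's DSC/JI-derived fitness (which requires training a U-Net per candidate); the RBF kernel, its length scale, and the expected-improvement acquisition are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def rbf_kernel(a, b, length=0.3):
    """Squared-exponential covariance between two sets of 1-D points."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-6):
    """Gaussian-process posterior mean and std at the query points."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_query)
    Kss = rbf_kernel(x_query, x_query)
    mean = Ks.T @ np.linalg.solve(K, y_train)
    var = np.clip(np.diag(Kss - Ks.T @ np.linalg.solve(K, Ks)), 1e-12, None)
    return mean, np.sqrt(var)

def expected_improvement(mean, std, best):
    """EI acquisition for maximization; Gaussian CDF via math.erf."""
    from math import erf, sqrt
    z = (mean - best) / std
    cdf = np.array([0.5 * (1 + erf(zi / sqrt(2))) for zi in z])
    pdf = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    return (mean - best) * cdf + std * pdf

# Toy fitness standing in for (DSC + JI)/2 of a trained U-Net, peak at 0.6
def fitness(x):
    return -(x - 0.6) ** 2 + 1.0

grid = np.linspace(0, 1, 101)
x_obs = np.array([0.1, 0.9])
y_obs = fitness(x_obs)
for _ in range(5):  # five BO iterations: fit surrogate, pick argmax-EI point
    mean, std = gp_posterior(x_obs, y_obs, grid)
    nxt = grid[np.argmax(expected_improvement(mean, std, y_obs.max()))]
    x_obs = np.append(x_obs, nxt)
    y_obs = np.append(y_obs, fitness(nxt))
print(x_obs[np.argmax(y_obs)])  # should approach 0.6
```

The same loop generalizes to the paper's integer search space of encoder/bottleneck/decoder channel counts by encoding each architecture as a vector.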

36 pages, 1319 KB  
Review
A Review of U-Net Based Deep Learning Frameworks for MRI-Based Brain Tumor Segmentation
by Ayse Bastug Koc and Devrim Akgun
Diagnostics 2026, 16(4), 506; https://doi.org/10.3390/diagnostics16040506 - 7 Feb 2026
Abstract
Automated segmentation of brain tumors from Magnetic Resonance Imaging (MRI) images is helpful for clinical diagnosis, surgical planning, and post-treatment monitoring. In recent years, the U-Net architecture has emerged as one of the most popular deep learning solutions. This article presents a review of 35 studies published between 2019 and 2025 focusing on U-Net-based brain tumor segmentation. The primary focus of this review is an in-depth analysis of commonly used U-Net architectures. The transformation of original 2D and 3D models into more advanced variants is examined in detail. Results from a wide range of studies are synthesized, and standard evaluation criteria are summarized along with benchmark datasets such as the BRATS competition to validate the effectiveness of these models. Additionally, the paper overviews recent developments in the field, identifies fundamental challenges, and provides insight into future directions, including improving model efficiency and generalization, combining multimodal data, and advancing clinical applications. This review serves as a guide for researchers examining the impact of the U-Net architecture on brain tumor segmentation. Full article
(This article belongs to the Special Issue 3rd Edition: AI/ML-Based Medical Image Processing and Analysis)

20 pages, 2026 KB  
Article
Unified Adult–Pediatric Glioma Segmentation via Synergistic MAE Pretraining and Boundary-Aware Refinement
by Moldir Zharylkassynova, Jaepil Ko and Kyungjoo Cheoi
Electronics 2026, 15(2), 329; https://doi.org/10.3390/electronics15020329 - 12 Jan 2026
Abstract
Accurate brain tumor segmentation in both adult and pediatric populations remains a challenge due to substantial differences in brain anatomy, tumor distribution, and subregion size. This study proposes a unified segmentation framework based on nnU-Net, integrating encoder-level self-supervised pretraining with a lightweight, boundary-aware decoder. The encoder is initialized using a large-scale 3D masked autoencoder pretrained on brain MRI, while the decoder is trained with a hybrid loss function that combines region-overlap and boundary-sensitive terms. A harmonized training and evaluation protocol is applied to both the BraTS-GLI (adult) and BraTS-PED (pediatric) cohorts, enabling fair cross-cohort comparison against baseline and advanced nnU-Net variants. The proposed method improves mean Dice scores from 0.76 to 0.90 for adults and from 0.64 to 0.78 for pediatric cases, while reducing HD95 from 4.42 to 2.24 mm and from 9.03 to 6.23 mm, respectively. These results demonstrate that combining encoder-level pretraining with decoder-side boundary supervision significantly enhances segmentation accuracy across age groups without adding inference-time computational overhead. Full article
(This article belongs to the Special Issue AI-Driven Medical Image/Video Processing)
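HD95, the boundary metric whose reduction the abstract above reports, is the 95th-percentile Hausdorff distance between two surfaces. A brute-force NumPy sketch on small point sets (real masks would first be reduced to boundary-voxel coordinates; the function name is illustrative):

```python
import numpy as np

def hd95(points_a, points_b):
    """95th-percentile (robust) Hausdorff distance between two point sets.
    Clipping away the largest 5% of surface-to-surface distances makes
    the metric less sensitive to single outlier voxels than the plain
    (maximum) Hausdorff distance."""
    # Pairwise Euclidean distances, shape (len(a), len(b))
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)   # each point in A to its nearest point in B
    b_to_a = d.min(axis=0)   # each point in B to its nearest point in A
    return max(np.percentile(a_to_b, 95), np.percentile(b_to_a, 95))

# Toy contours: two unit squares offset by 1 pixel along x
a = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
b = a + np.array([1.0, 0.0])
print(hd95(a, b))  # 1.0
```

For production use, KD-tree nearest-neighbor queries replace the quadratic distance matrix, but the definition is the same.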

20 pages, 1768 KB  
Article
Towards Patient Anatomy-Based Simulation of Net Cerebrospinal Fluid Flow in the Intracranial Compartment
by Edgaras Misiulis, Algis Džiugys, Alina Barkauskienė, Aidanas Preikšaitis, Vytenis Ratkūnas, Gediminas Skarbalius, Robertas Navakas, Tomas Iešmantas, Robertas Alzbutas, Saulius Lukoševičius, Mindaugas Šerpytis, Indrė Lapinskienė, Jewel Sengupta and Vytautas Petkus
Appl. Sci. 2026, 16(2), 611; https://doi.org/10.3390/app16020611 - 7 Jan 2026
Abstract
Biophysics-based, patient-specific modeling remains challenging for clinical translation, particularly for cerebrospinal fluid (CSF) flow where anatomical detail and computational cost are tightly coupled. We present a computational framework for steady net CSF redistribution in an MRI-derived cranial CSF domain reconstructed from T2-weighted imaging, including the ventricular system, cranial subarachnoid space, and periarterial pathways, to the extent resolvable by clinical MRI. Cranial CSF spaces were segmented in 3D Slicer and a steady Darcy formulation with prescribed CSF production/absorption was solved in COMSOL Multiphysics®. Geometrical and flow descriptors were quantified using region-based projection operations. We assessed discretization cost–accuracy trade-offs by comparing first- and second-order finite elements. First-order elements produced a 1.4% difference in transmantle pressure and a <10% difference in element-wise mass-weighted velocity metric for 90% of elements, while reducing computation time by 75% (20 to 5 min) and peak memory usage five-fold (150 to 30 GB). This proof-of-concept framework provides a computationally tractable baseline for studying steady net CSF pathway redistribution and sensitivity to boundary assumptions, and may support future patient-specific investigations in pathological conditions such as subarachnoid hemorrhage, hydrocephalus and brain tumors. Full article
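The steady Darcy formulation used above relates flow to the pressure gradient through a permeability. A one-segment sketch of that relation, with illustrative CSF-like numbers that are not taken from the paper:

```python
def darcy_pressure_drop(flow_rate, length, area, permeability, viscosity):
    """Pressure drop over a 1D porous segment from Darcy's law:
    Q = -(kappa * A / mu) * (dp / dx)  =>  dp = Q * mu * L / (kappa * A).
    All quantities in SI units."""
    return flow_rate * viscosity * length / (permeability * area)

# Illustrative numbers only (not the paper's): CSF production ~0.35 mL/min,
# water-like viscosity at body temperature, assumed permeability and geometry
Q = 0.35e-6 / 60            # m^3/s
mu = 0.7e-3                 # Pa*s
dp = darcy_pressure_drop(Q, length=0.1, area=1e-4,
                         permeability=1e-10, viscosity=mu)
print(dp)  # pressure drop in Pa
```

The full model solves the same law over a 3D MRI-derived domain with prescribed production/absorption boundary conditions, but each finite element enforces exactly this linear flux–gradient relation.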

30 pages, 3535 KB  
Article
PRA-Unet: Parallel Residual Attention U-Net for Real-Time Segmentation of Brain Tumors
by Ali Zakaria Lebani, Medjeded Merati and Saïd Mahmoudi
Information 2026, 17(1), 14; https://doi.org/10.3390/info17010014 - 23 Dec 2025
Abstract
With the increasing prevalence of brain tumors, it becomes crucial to ensure fast and reliable segmentation in MRI scans. Medical professionals struggle with manual tumor segmentation due to its exhausting and time-consuming nature. Automated segmentation speeds up decision-making and diagnosis; however, achieving an optimal balance between accuracy and computational cost remains a significant challenge. In many cases, current methods trade speed for accuracy, or vice versa, consuming substantial computing power and making them difficult to use on devices with limited resources. To address this issue, we present PRA-UNet, a lightweight deep learning model optimized for fast and accurate 2D brain tumor segmentation. Using a single 2D input, the architecture processes four types of MRI scans (FLAIR, T1, T1c, and T2). The encoder uses inverted residual blocks and bottleneck residual blocks to capture features at different scales effectively. The Convolutional Block Attention Module (CBAM) and the Spatial Attention Module (SAM) improve the bridge and skip connections by refining feature maps, making it easier to detect and localize brain tumors. The decoder uses depthwise separable convolutions, which significantly reduce computational costs without degrading accuracy. On the BraTS2020 dataset, PRA-UNet achieves a Dice score of 95.71%, an accuracy of 99.61%, and a processing speed of 60 ms per image, enabling real-time analysis. PRA-UNet outperforms other models in segmentation while requiring less computing power, suggesting it could be suitable for deployment on lightweight edge devices in clinical settings. Its speed and reliability enable radiologists to diagnose tumors quickly and accurately, enhancing practical medical applications. Full article
(This article belongs to the Special Issue Feature Papers in Information in 2024–2025)

20 pages, 6322 KB  
Article
MAEM-ResUNet: Accurate Glioma Segmentation in Brain MRI via Symmetric Multi-Directional Mamba and Dual-Attention Modules
by Deguo Yang, Boming Yang and Jie Yan
Symmetry 2026, 18(1), 1; https://doi.org/10.3390/sym18010001 - 19 Dec 2025
Abstract
Gliomas are among the most common and aggressive malignant brain tumors. Their irregular morphology and fuzzy boundaries pose substantial challenges for automatic segmentation in MRI. Accurate delineation of tumor subregions is crucial for treatment planning and outcome assessment. This study proposes MAEM-ResUNet, an extension of the ResUNet architecture that integrates three key modules: a multi-scale adaptive attention module for joint channel–spatial feature selection, a symmetric multi-directional Mamba block for long-range context modeling, and an adaptive edge attention module for boundary refinement. Experimental results on the BraTS2020 and BraTS2021 datasets demonstrate that MAEM-ResUNet outperforms mainstream methods. On BraTS2020, it achieves an average Dice Similarity Coefficient of 91.19% and an average Hausdorff Distance (HD) of 5.27 mm; on BraTS2021, the average Dice coefficient is 89.67% and the average HD is 5.87 mm, both showing improvements compared to other mainstream models. Meanwhile, ablation experiments confirm the synergistic effect of the three modules, which significantly enhances the accuracy of glioma segmentation and the precision of boundary localization. Full article

24 pages, 596 KB  
Article
Deep Learning-Based Fusion of Multimodal MRI Features for Brain Tumor Detection
by Bakhita Salman, Eithar Yassin, Deepak Ganta and Hermes Luna
Appl. Sci. 2025, 15(24), 13155; https://doi.org/10.3390/app152413155 - 15 Dec 2025
Abstract
Despite advances in deep learning, brain tumor detection from MRI continues to face major challenges, including the limited robustness of single-modality models, the computational burden of transformer-based architectures, opaque fusion strategies, and the lack of efficient binary screening tools. To address these issues, we propose a lightweight multimodal CNN framework that integrates T1, T2, and FLAIR MRI sequences using modality-specific encoders and a channel-wise fusion module (concatenation followed by a 1 × 1 convolution). The pipeline incorporates U-Net-based segmentation for tumor-focused patch extraction, improving localization and reducing irrelevant background. Evaluated on the BraTS 2020 dataset (7500 slices; 70/15/15 patient-level split), the proposed model achieves 93.8% accuracy, 94.1% F1-score, and 19 ms inference time. It outperforms all single-modality ablations by up to 5% and achieves competitive or superior performance to transformer-based baselines while using over 98% fewer parameters. Grad-CAM and LIME visualizations further confirm clinically meaningful tumor-region activation. Overall, this efficient and interpretable multimodal framework advances scalable brain tumor screening and supports integration into real-time clinical workflows. Full article
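The fusion module described above — concatenation followed by a 1 × 1 convolution — is, at each pixel, just a linear map over the stacked channels. A minimal NumPy sketch (shapes and names are illustrative, not the paper's implementation):

```python
import numpy as np

def fuse_modalities(features, weights, bias):
    """Channel-wise fusion: concatenate per-modality feature maps along
    the channel axis, then apply a 1x1 convolution. A 1x1 conv is a
    linear map over channels applied independently at each pixel, i.e.
    an einsum over the channel dimension. Each feature map has shape
    (C, H, W); `weights` has shape (C_out, C_total)."""
    stacked = np.concatenate(features, axis=0)              # (C_total, H, W)
    return np.einsum('oc,chw->ohw', weights, stacked) + bias[:, None, None]

# Toy example: three "modalities" (T1, T2, FLAIR) with 2 channels each
rng = np.random.default_rng(0)
t1, t2, flair = (rng.standard_normal((2, 4, 4)) for _ in range(3))
w = rng.standard_normal((4, 6))     # 6 input channels -> 4 fused channels
b = np.zeros(4)
fused = fuse_modalities([t1, t2, flair], w, b)
print(fused.shape)  # (4, 4, 4)
```

In a trained network `weights` and `bias` are learned, so the model learns how much each modality's channels contribute to each fused channel.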

30 pages, 3530 KB  
Article
Prompt-Driven Multimodal Segmentation with Dynamic Fusion for Adaptive and Robust Medical Imaging with Applications to Cancer Diagnosis
by Shatha Abed Alsaedi, Hossam Magdy Balaha, Mohamed Farsi, Majed Alwateer, Moustafa M. Aboelnaga, Mohamed Shehata, Mahmoud Badawy and Mostafa A. Elhosseini
Cancers 2025, 17(22), 3691; https://doi.org/10.3390/cancers17223691 - 18 Nov 2025
Abstract
Background/Objectives: Medical image segmentation is a crucial task for diagnosis, treatment planning, and monitoring of cancer; however, it remains one of the hardest problems for Artificial Intelligence (AI)-based clinical applications. Deep-learning models have shown near-perfect results for narrow tasks such as single-organ Computed Tomography (CT) segmentation, yet they fail to deliver in practical settings where cross-modality robustness and multi-organ delineation are essential (e.g., liver Dice dropping to 0.88 ± 0.15 in combined CT-MR scenarios). That fragility exposes two structural gaps: (i) rigid task-specific architectures that cannot adapt to varied clinical instructions, and (ii) the assumption that one universal loss function is best for all cancer imaging applications. Methods: A novel multimodal segmentation framework is proposed that combines natural language prompts and high-fidelity imaging features through Feature-wise Linear Modulation (FiLM) and Conditional Batch Normalization, enabling a single model to adapt dynamically across modalities, organs, and pathologies. Unlike preceding systems, the proposed approach is prompt-driven, context-aware, and end-to-end trainable, ensuring alignment between computational adaptability and clinical decision-making. Results: Extensive evaluation on the Brain Tumor Dataset (cancer-relevant neuroimaging) and the CHAOS multi-organ challenge yields two key insights: (1) Dice loss remains optimal for single-organ tasks, while (2) Jaccard (IoU) loss outperforms it in multi-organ, cross-modality cancer segmentation. This offers empirical evidence that the optimal loss function is task- and context-dependent rather than universal. Conclusions: The framework's design principles directly address documented workflow requirements, and its capabilities may connect algorithmic innovation with clinical utility once validated through prospective clinical trials. Full article

23 pages, 2140 KB  
Article
Radiomic-Based Machine Learning for Differentiating Brain Metastases Recurrence from Radiation Necrosis Post-Gamma Knife Radiosurgery: A Feasibility Study
by Mateus Blasques Frade, Paola Critelli, Eleonora Trifiletti, Giuseppe Ripepi and Antonio Pontoriero
Int. J. Transl. Med. 2025, 5(4), 50; https://doi.org/10.3390/ijtm5040050 - 24 Oct 2025
Cited by 1
Abstract
Background: Radiation therapy is a key treatment modality for brain metastases, but post-treatment imaging often presents diagnostic challenges, particularly in distinguishing tumor recurrence from radiation-induced changes such as necrosis. Advanced imaging techniques and artificial intelligence (AI)-based radiomic analyses have emerged as tools to aid lesion characterization. The objective of this study was to assess the capacity of machine learning algorithms to distinguish between brain metastases recurrence and radiation necrosis. Methods: The research was conducted in two phases and used publicly available MRI data from patients treated with Gamma Knife radiosurgery. In the first phase, 30 cases of local recurrence of brain metastases and 30 cases of radiation-induced necrosis were considered. Image segmentation and radiomic feature extraction were performed on these data using MatRadiomics_1_5_3, a MATLAB-based framework integrating PyRadiomics. Features were then selected using point-biserial correlation. In the second phase, classification was performed using a Support Vector Machine model with repeated stratified cross-validation. Results: The model achieved 83% accuracy on the test set in distinguishing metastases from necrosis. Conclusions: The results of this feasibility study demonstrate the potential of radiomics and AI to improve diagnostic accuracy and personalized care in neuro-oncology. Full article
