Neural Network Models for Prostate Zones Segmentation in Magnetic Resonance Imaging
Abstract
1. Introduction
2. Related Work
3. Materials and Methods
3.1. Data Acquisition
3.2. Neural Network Architecture and Training
3.2.1. Ensemble Learning
- Average: this method computes the mean of the outputs of the individual classifiers and selects the class with the highest average value; it is typically used when each classifier produces numerical outputs (a short code sketch of the averaging and voting rules follows this list).
- Weighted Average: unlike plain averaging, this approach assigns each classifier's output a weight reflecting its importance; the weights are chosen to minimize the discrepancy between the ensemble output and the true output and are often derived from an error correlation matrix.
- Nash Vote: each classifier assigns a value between zero and one to every candidate output, and the candidate with the highest combined support across classifiers is selected.
- Dynamically Weighted Average: the weights are not static; they are adjusted on the fly according to the confidence of each classifier's output.
- Weighted Average with Data-Dependent Weights: this variant of the weighted average partitions the input space and computes a different set of weights for each partition, for example with the FSL algorithm.
- Majority Vote: each classifier casts a vote for the class with its highest output, and the class receiving the most votes is selected.
- Winner Takes All (WTA): the class with the single highest output across all classifiers is selected.
- Bayesian Combination: a probabilistic approach that estimates the belief that a sample belongs to a particular class.
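To make a few of these rules concrete, the following is a minimal sketch of how per-pixel softmax outputs from several segmentation networks could be fused with the average, weighted-average, and majority-vote rules. It assumes NumPy arrays of per-class probabilities; the shapes, weights, and function names are illustrative and not taken from the paper's implementation.

```python
# Minimal sketch of three combination rules from the list above, assuming each
# classifier returns per-class softmax probability maps. Shapes and weights are
# illustrative only.
import numpy as np

def average_vote(probs):
    """probs: (n_classifiers, n_classes, H, W) softmax maps -> class map (H, W)."""
    return probs.mean(axis=0).argmax(axis=0)

def weighted_average_vote(probs, weights):
    """Weighted average; weights: (n_classifiers,) importance of each model."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                  # normalize the weights
    fused = np.tensordot(w, probs, axes=(0, 0))      # -> (n_classes, H, W)
    return fused.argmax(axis=0)

def majority_vote(probs):
    """Each classifier votes for its argmax class; ties go to the lowest class id."""
    votes = probs.argmax(axis=1)                     # (n_classifiers, H, W)
    n_classes = probs.shape[1]
    counts = np.stack([(votes == c).sum(axis=0) for c in range(n_classes)])
    return counts.argmax(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    p = rng.random((3, 3, 4, 4))                     # 3 models, 3 classes, 4x4 image
    p /= p.sum(axis=1, keepdims=True)                # make valid per-pixel distributions
    print(average_vote(p))
    print(weighted_average_vote(p, [0.5, 0.3, 0.2]))
    print(majority_vote(p))
```

In the zonal-segmentation setting of this work, the class channels would correspond to background, central gland (CG), and peripheral zone (PZ).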
Attention-Res-U-Net (Att-R-Net)
Vanilla-Net
V-Net
3.2.2. Meta-Net
3.2.3. YOLO-V8 Architecture
3.3. Implementation Details
4. Results
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Chahal, E.S.; Patel, A.; Gupta, A.; Purwar, A.; G, D. Unet Based Xception Model for Prostate Cancer Segmentation from MRI Images. Multimed. Tools Appl. 2022, 81, 37333–37349.
- Yang, X.; Liu, C.; Wang, Z.; Yang, J.; Le Min, H.; Wang, L.; Cheng, K.-T.T. Co-Trained Convolutional Neural Networks for Automated Detection of Prostate Cancer in Multi-Parametric MRI. Med. Image Anal. 2017, 42, 212–227.
- Oh, W.K.; Hurwitz, M.; D’Amico, A.V.; Richie, J.P.; Kantoff, P.W. Biology of Prostate Cancer. In Holland-Frei Cancer Medicine, 6th ed.; National Library of Medicine: Rockville, MD, USA, 2003.
- Adler, D.; Lindstrot, A.; Ellinger, J.; Rogenhofer, S.; Buettner, R.; Perner, S.; Wernert, N. The Peripheral Zone of the Prostate Is More Prone to Tumor Development than the Transitional Zone: Is the ETS Family the Key? Mol. Med. Rep. 2012, 5, 313–316.
- Holder, K.G.; Galvan, B.; Knight, A.S.; Ha, F.; Collins, R.; Weaver, P.E.; Brandi, L.; de Riese, W.T. Possible Clinical Implications of Prostate Capsule Thickness and Glandular Epithelial Cell Density in Benign Prostate Hyperplasia. Investig. Clin. Urol. 2021, 62, 423–429.
- Sato, S.; Kimura, T.; Onuma, H.; Egawa, S.; Takahashi, H. Transition Zone Prostate Cancer Is Associated with Better Clinical Outcomes than Peripheral Zone Cancer. BJUI Compass 2021, 2, 169–177.
- Wu, C.; Montagne, S.; Hamzaoui, D.; Ayache, N.; Delingette, H.; Renard-Penna, R. Automatic Segmentation of Prostate Zonal Anatomy on MRI: A Systematic Review of the Literature. Insights Imaging 2022, 13, 202.
- Korsager, A.S.; Fortunati, V.; van der Lijn, F.; Carl, J.; Niessen, W.; Østergaard, L.R.; van Walsum, T. The Use of Atlas Registration and Graph Cuts for Prostate Segmentation in Magnetic Resonance Images. Med. Phys. 2015, 42, 1614–1624.
- Nai, Y.; Teo, B.; Tan, N.; Chua, K.; Wong, C.; O’Doherty, S.; Stephenson, M.; Schaefferkoetter, J.; Thian, Y.; Chiong, E.; et al. Evaluation of Multimodal Algorithms for the Segmentation of Multiparametric MRI Prostate Images. Comput. Math. Methods Med. 2020, 2020, 8861035.
- Zaridis, D.I.; Mylona, E.; Tachos, N.; Kalantzopoulos, C.N.; Marias, K.; Tsiknakis, M.; Matsopoulos, G.K.; Koutsouris, D.D.; Fotiadis, D.I. ResQu-Net: Effective Prostate’s Peripheral Zone Segmentation Leveraging the Representational Power of Attention-Based Mechanisms. Biomed. Signal Process. Control 2024, 93, 106187.
- Zavala-Romero, O.; Breto, A.L.; Xu, I.R.; Chang, Y.-C.C.; Gautney, N.; Pra, A.D.; Abramowitz, M.; Pollack, A.; Stoyanova, R. Segmentation of Prostate and Prostate Zones Using Deep Learning. Strahlenther. Onkol. 2020, 196, 932–942.
- Cuocolo, R.; Comelli, A.; Stefano, A.; Benfante, V.; Dahiya, N.; Stanzione, A.; Castaldo, A.; De Lucia, D.R.; Yezzi, A.; Imbriaco, M. Deep Learning Whole-Gland and Zonal Prostate Segmentation on a Public MRI Dataset. J. Magn. Reson. Imaging 2021, 54, 452–459.
- Kou, W.; Marshall, H.; Chiu, B. Boundary-Aware Semantic Clustering Network for Segmentation of Prostate Zones from T2-Weighted MRI. Phys. Med. Biol. 2024, 69, 175009.
- Mitura, J.; Jóźwiak, R.; Mycka, J.; Mykhalevych, I.; Gonet, M.; Sobecki, P.; Lorenc, T.; Tupikowski, K. Ensemble Deep Learning Models for Segmentation of Prostate Zonal Anatomy and Pathologically Suspicious Areas. In Medical Image Understanding and Analysis; Yap, M.H., Kendrick, C., Behera, A., Cootes, T., Zwiggelaar, R., Eds.; Springer Nature: Cham, Switzerland, 2024; pp. 217–231.
- Xu, L.; Zhang, G.; Zhang, D.; Zhang, J.; Zhang, X.; Bai, X.; Chen, L.; Peng, Q.; Jin, R.; Mao, L.; et al. Development and Clinical Utility Analysis of a Prostate Zonal Segmentation Model on T2-Weighted Imaging: A Multicenter Study. Insights Imaging 2023, 14, 44.
- Yan, Y.; Liu, R.; Chen, H.; Zhang, L.; Zhang, Q. CCT-Unet: A U-Shaped Network Based on Convolution Coupled Transformer for Segmentation of Peripheral and Transition Zones in Prostate MRI. IEEE J. Biomed. Health Inform. 2023, 27, 4341–4351.
- Jimenez-Pastor, A.; Lopez-Gonzalez, R.; Fos-Guarinos, B.; Garcia-Castro, F.; Wittenberg, M.; Torregrosa-Andrés, A.; Marti-Bonmati, L.; Garcia-Fontes, M.; Duarte, P.; Gambini, J.P.; et al. Automated Prostate Multi-Regional Segmentation in Magnetic Resonance Using Fully Convolutional Neural Networks. Eur. Radiol. 2023, 33, 5087–5096.
- Kaneko, M.; Cacciamani, G.E.; Yang, Y.; Magoulianitis, V.; Xue, J.; Yang, J.; Liu, J.; Lenon, M.S.L.; Mohamed, P.; Hwang, D.H.; et al. MP09-05 Automated Prostate Gland and Prostate Zones Segmentation Using a Novel MRI-Based Machine Learning Framework and Creation of Software Interface for Users Annotation. J. Urol. 2023, 209, e105.
- Baldeon-Calisto, M.; Wei, Z.; Abudalou, S.; Yilmaz, Y.; Gage, K.; Pow-Sang, J.; Balagurunathan, Y. A Multi-Object Deep Neural Network Architecture to Detect Prostate Anatomy in T2-Weighted MRI: Performance Evaluation. Front. Nucl. Med. 2023, 2, 1083245.
- Aldoj, N.; Biavati, F.; Michallek, F.; Stober, S.; Dewey, M. Automatic Prostate and Prostate Zones Segmentation of Magnetic Resonance Images Using DenseNet-like U-Net. Sci. Rep. 2020, 10, 14315.
- Zabihollahy, F.; Schieda, N.; Krishna Jeyaraj, S.; Ukwatta, E. Automated Segmentation of Prostate Zonal Anatomy on T2-Weighted (T2W) and Apparent Diffusion Coefficient (ADC) Map MR Images Using U-Nets. Med. Phys. 2019, 46, 3078–3090.
- Adams, L.C.; Makowski, M.R.; Engel, G.; Rattunde, M.; Busch, F.; Asbach, P.; Niehues, S.M.; Vinayahalingam, S.; van Ginneken, B.; Litjens, G.; et al. Prostate158—An Expert-Annotated 3T MRI Dataset and Algorithm for Prostate Cancer Detection. Comput. Biol. Med. 2022, 148, 105817.
- Chen, P. MetaNet: Network Analysis for Omics Data. Available online: https://github.com/Asa12138/MetaNet (accessed on 6 January 2025).
- Huang, F.; Xie, G.; Xiao, R. Research on Ensemble Learning. In Proceedings of the 2009 International Conference on Artificial Intelligence and Computational Intelligence, Shanghai, China, 7–8 November 2009; Volume 3, pp. 249–252.
- Dietterich, T.G. Machine Learning Research: Four Current Directions. AI Mag. 1997, 18, 97–136.
- Zhou, Z.-H.; Wu, J.; Tang, W. Ensembling Neural Networks: Many Could Be Better than All. Artif. Intell. 2002, 137, 239–263.
- Verikas, A.; Lipnickas, A.; Malmqvist, K.; Bacauskiene, M.; Gelzinis, A. Soft Combination of Neural Classifiers: A Comparative Study. Pattern Recognit. Lett. 1999, 20, 429–444.
- Zhang, C.; Song, Y.; Liu, S.; Lill, S.; Wang, C.; Tang, Z.; You, Y.; Gao, Y.; Klistorner, A.; Barnett, M.; et al. MS-GAN: GAN-Based Semantic Segmentation of Multiple Sclerosis Lesions in Brain Magnetic Resonance Imaging; IEEE: Piscataway, NJ, USA, 2018.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015; Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 234–241.
- Oktay, O.; Schlemper, J.; Le Folgoc, L.; Lee, M.; Heinrich, M.; Misawa, K.; Mori, K.; McDonagh, S.; Hammerla, N.Y.; Kainz, B.; et al. Attention U-Net: Learning Where to Look for the Pancreas. arXiv 2018, arXiv:1804.03999.
- Mnih, V.; Heess, N.; Graves, A.; Kavukcuoglu, K. Recurrent Models of Visual Attention. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2014; Volume 27.
- Abraham, N.; Khan, N. A Novel Focal Tversky Loss Function with Improved Attention U-Net for Lesion Segmentation. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; pp. 683–687.
- Chen, H.; Wang, Y.; Guo, J.; Tao, D. VanillaNet: The Power of Minimalism in Deep Learning. In Proceedings of the Advances in Neural Information Processing Systems; Oh, A., Naumann, T., Globerson, A., Saenko, K., Hardt, M., Levine, S., Eds.; Curran Associates, Inc.: Newry, UK, 2023; Volume 36, pp. 7050–7064.
- Milletari, F.; Navab, N.; Ahmadi, S.-A. V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 565–571.
- Çiçek, Ö.; Abdulkadir, A.; Lienkamp, S.S.; Brox, T.; Ronneberger, O. 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2016; Ourselin, S., Joskowicz, L., Sabuncu, M.R., Unal, G., Wells, W., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 424–432.
- Buscema, M. MetaNet: The Theory of Independent Judges. Subst. Use Misuse 1998, 33, 439–461.
- Buscema, M. Self-Reflexive Networks. Subst. Use Misuse 1998, 33, 409–438.
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
- Zhao, L.; Li, S. Object Detection Algorithm Based on Improved YOLOv3. Electronics 2020, 9, 537.
- Hussain, M. YOLO-v1 to YOLO-v8, the Rise of YOLO and Its Complementary Nature toward Digital Manufacturing and Industrial Defect Detection. Machines 2023, 11, 677.
- Jocher, G.; Chaurasia, A.; Qiu, J. YOLO by Ultralytics. Available online: https://github.com/ultralytics/ultralytics (accessed on 12 January 2023).
- Sahafi, A.; Koulaouzidis, A.; Lalinia, M. Polypoid Lesion Segmentation Using YOLO-V8 Network in Wireless Video Capsule Endoscopy Images. Diagnostics 2024, 14, 474.
- Guo, J.; Lou, H.; Chen, H.; Liu, H.; Gu, J.; Bi, L.; Duan, X. A New Detection Algorithm for Alien Intrusion on Highway. Sci. Rep. 2023, 13, 10667.
- Jocher, G.; Chaurasia, A.; Stoken, A.; Borovec, J.; NanoCode012; Kwon, Y.; Michael, K.; Xie, T.; Fang, J.; Yifu, Z.; et al. Ultralytics/Yolov5: V7.0—YOLOv5 SOTA Realtime Instance Segmentation. 2022. Available online: https://ui.adsabs.harvard.edu/abs/2022zndo...3908559J/abstract (accessed on 6 January 2025).
- Thapa, S.; Lomholt, M.A.; Krog, J.; Cherstvy, A.G.; Metzler, R. Bayesian Analysis of Single-Particle Tracking Data Using the Nested-Sampling Algorithm: Maximum-Likelihood Model Selection Applied to Stochastic-Diffusivity Data. Phys. Chem. Chem. Phys. 2018, 20, 29018–29037.
| Study | Network | Dataset | CG DSC | PZ DSC |
|---|---|---|---|---|
| [12] | ENet | PROSTATEx (204 MRIs) | 87% | 71% |
| [13] | BASC-Net | NCI-ISBI 2013 (80 MRIs) | 88.6% | 79.9% |
| | | Prostate158 (102 MRIs) | 89.2% | 80.5% |
| [14] | nnU-Net | Private dataset (607 MRIs) | 86.5% | 73.6% |
| [15] | 3D U-Net | 223 patients, including an internal group of 93 and two external datasets (ETDpub, n = 141; ETDpri, n = 59) | 86.9% | 76.9% |
| [16] | CCT-Unet | ProstateX and Huashan datasets (240 MRIs) | 87.49% | 80.39% |
| [17] | U-Net-based model | In-house dataset containing 243 T2W MRIs | 85% | 72% |
| [18] | Two-stage Green Learning method | A dataset containing 119 MRIs | 81% | 62% |
| [19] | 2D–3D convolutional neural network ensemble (PPZ-SegNet) | Cancer Imaging Archive (training: 150; test: 283 MRIs) | Not reported | 62% |
| [20] | Dense U-Net | A dataset containing 141 MRIs | Not reported | 78.1% |
| [21] | U-Net | A dataset containing 225 MRIs (T2W) | 93.75% | 86.78% |
| | | A dataset containing 225 MRIs (ADC) | 89.89% | 86.1% |
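The studies above are compared using the Dice similarity coefficient (DSC), and the result tables that follow additionally report the intersection over union (IoU). As a point of reference, the snippet below is a minimal sketch of how these two overlap metrics can be computed for a single binary zone mask; the example masks and function names are illustrative and not drawn from the authors' code.

```python
# Illustrative computation of the two overlap metrics reported in the tables
# (DSC and IoU) for one pair of binary masks; the example masks are made up.
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """DSC = 2|A n B| / (|A| + |B|) for boolean masks of equal shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou_score(pred, target, eps=1e-7):
    """IoU = |A n B| / |A u B| for boolean masks of equal shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

if __name__ == "__main__":
    gt = np.zeros((8, 8), dtype=bool); gt[2:6, 2:6] = True   # ground-truth zone mask
    pr = np.zeros((8, 8), dtype=bool); pr[3:7, 2:6] = True   # predicted zone mask
    print(f"DSC = {dice_coefficient(pr, gt):.3f}, IoU = {iou_score(pr, gt):.3f}")
```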
| Models (validation set) | CG IoU | CG DSC | PZ IoU | PZ DSC |
|---|---|---|---|---|
| Att-R-Net fold 1 | 78.4% (72–81%) | 87.9% (83–89%) | 54.5% (42–66%) | 70.5% (59–79%) |
| Att-R-Net fold 2 | 73.2% (60–77%) | 84.5% (75–87%) | 54% (38–66%) | 70.1% (55–80%) |
| Att-R-Net fold 3 | 73.3% (65–77%) | 84.6% (78–87%) | 58.3% (42–66%) | 73.6% (59–79%) |
| Att-R-Net fold 4 | 75.3% (70–79%) | 85.9% (82–88%) | 58.2% (43–68%) | 73.6% (61–81%) |
| Att-R-Net fold 5 | 70.8% (60–77%) | 82.9% (75–87%) | 58.1% (48–64%) | 73.5% (65–78%) |
| Vanilla-Net fold 1 | 78.5% (75–82%) | 87.6% (85–90%) | 55.7% (48–60%) | 71.5% (65–75%) |
| Vanilla-Net fold 2 | 78.3% (74–81%) | 87.8% (85–89%) | 57.9% (45–61%) | 73.3% (62–76%) |
| Vanilla-Net fold 3 | 77.4% (74–81%) | 87.3% (85–89%) | 54.3% (47–62%) | 70.3% (64–77%) |
| Vanilla-Net fold 4 | 79.3% (74–81%) | 88.2% (85–89%) | 56.4% (51–63%) | 72.1% (68–77%) |
| Vanilla-Net fold 5 | 78.5% (73–81%) | 87.9% (84–89%) | 58.4% (46–62%) | 73.8% (63–77%) |
| V-Net fold 1 | 76.7% (71–81%) | 86.8% (83–90%) | 56.1% (46–65%) | 71.9% (63–79%) |
| V-Net fold 2 | 76.2% (73–80%) | 86.3% (84–89%) | 57.3% (50–62%) | 72.9% (67–77%) |
| V-Net fold 3 | 76.9% (72–81%) | 86.9% (83–89%) | 56.7% (49–66%) | 72.3% (66–79%) |
| V-Net fold 4 | 77.7% (72–81%) | 87.4% (84–90%) | 57.3% (49–64%) | 72.8% (66–78%) |
| V-Net fold 5 | 76.8% (70–79%) | 86.9% (82–88%) | 58.6% (44–62%) | 73.9% (61–76%) |
| Ensemble | 80.4% (76–83%) | 89.1% (86–90%) | 60.6% (52–69%) | 75.5% (69–82%) |
| Models (test set) | CG IoU | CG DSC | PZ IoU | PZ DSC |
|---|---|---|---|---|
| Att-R-Net fold 1 | 78.9% (66–82%) | 88.2% (79–90%) | 49.4% (39–59%) | 66.1% (56–74%) |
| Att-R-Net fold 2 | 70.6% (62–79%) | 82.7% (77–88%) | 47.3% (39–60%) | 64.1% (56–75%) |
| Att-R-Net fold 3 | 74.5% (64–80%) | 85.3% (78–89%) | 49.8% (36–60%) | 66.4% (56–75%) |
| Att-R-Net fold 4 | 76.5% (67–81%) | 86.7% (80–89%) | 52.8% (44–62%) | 69.1% (61–76%) |
| Att-R-Net fold 5 | 71.7% (62–78%) | 83.5% (76–88%) | 49.8% (35–60%) | 66.5% (52–75%) |
| Vanilla-Net fold 1 | 77.9% (67–82%) | 87.6% (80–90%) | 52.8% (44–58%) | 69.1% (61–73%) |
| Vanilla-Net fold 2 | 78.6% (71–83%) | 88% (83–90%) | 53.6% (45–62%) | 69.8% (62–77%) |
| Vanilla-Net fold 3 | 78.2% (69–81%) | 87.8% (82–89%) | 50.8% (43–59%) | 67.4% (60–74%) |
| Vanilla-Net fold 4 | 77.5% (67–83%) | 87.3% (80–90%) | 50.1% (42–58%) | 66.7% (59–73%) |
| Vanilla-Net fold 5 | 77.6% (66–82%) | 87.4% (80–90%) | 52.7% (45–62%) | 69% (62–77%) |
| V-Net fold 1 | 75% (63–81%) | 85.7% (77–89%) | 48.2% (43–59%) | 65.1% (60–74%) |
| V-Net fold 2 | 77% (70–81%) | 87% (82–89%) | 50.5% (41–59%) | 67.1% (59–74%) |
| V-Net fold 3 | 75.7% (67–82%) | 86.1% (80–90%) | 48.2% (39–59%) | 65% (56–74%) |
| V-Net fold 4 | 76.7% (68–80%) | 86.8% (81–89%) | 52% (44–58%) | 68.4% (61–73%) |
| V-Net fold 5 | 77.1% (68–81%) | 87% (81–89%) | 50.2% (45–57%) | 66.9% (62–73%) |
| Ensemble | 79.3% (72–85%) | 88.4% (83–92%) | 54.5% (46–63%) | 70.5% (63–77%) |
| Models (validation set) | CG IoU | CG DSC | PZ IoU | PZ DSC |
|---|---|---|---|---|
| Att-R-Net | 75% (67–79%) | 85% (80–88%) | 56% (51–67%) | 72% (58–80%) |
| Vanilla-Net | 79% (76–83%) | 88% (86–90%) | 58% (49–63%) | 73% (66–78%) |
| V-Net | 78% (75–83%) | 88% (86–91%) | 57% (49–64%) | 72% (66–78%) |
| Att-R-Net + Vanilla-Net | 80% (74–83%) | 89% (85–90%) | 58% (47–67%) | 74% (64–80%) |
| Att-R-Net + V-Net | 79% (73–82%) | 88% (85–90%) | 56% (46–67%) | 72% (63–80%) |
| Vanilla-Net + V-Net | 80% (76–83%) | 89% (87–91%) | 58% (52–65%) | 73% (68–79%) |
| Att-R-Net + Vanilla-Net + V-Net | 80% (76–83%) | 89% (86–91%) | 58% (51–67%) | 73% (67–80%) |
| Models (test set) | CG IoU | CG DSC | PZ IoU | PZ DSC |
|---|---|---|---|---|
| Att-R-Net | 74% (68–82%) | 85% (81–90%) | 49% (42–62%) | 65% (59–77%) |
| Vanilla-Net | 78% (72–84%) | 88% (84–91%) | 54% (45–62%) | 70% (62–77%) |
| V-Net | 78% (71–83%) | 87% (83–91%) | 52% (45–60%) | 69% (62–75%) |
| Att-R-Net + Vanilla-Net | 79% (72–85%) | 88% (84–92%) | 51% (44–62%) | 68% (61–77%) |
| Att-R-Net + V-Net | 78% (70–85%) | 88% (82–92%) | 51% (44–62%) | 68% (44–62%) |
| Vanilla-Net + V-Net | 78% (72–85%) | 88% (84–92%) | 54% (47–62%) | 71% (64–77%) |
| Att-R-Net + Vanilla-Net + V-Net | 79% (71–85%) | 88% (83–92%) | 53% (46–62%) | 69% (63–77%) |
| Model (validation set) | CG IoU | CG DSC | PZ IoU | PZ DSC |
|---|---|---|---|---|
| YOLO-V8 | 81% (70–85%) | 91% (84–93%) | 60% (55–69%) | 74% (67–81%) |
| Model (test set) | CG IoU | CG DSC | PZ IoU | PZ DSC |
|---|---|---|---|---|
| YOLO-V8 | 80% (71–84%) | 89% (83–91%) | 58% (52–66%) | 73% (68–79%) |
| Models (validation set) | CG IoU | CG DSC | PZ IoU | PZ DSC |
|---|---|---|---|---|
| Ensemble Model | 80.4% (76–83%) | 89.1% (86–90%) | 60.6% (52–69%) | 75.5% (69–82%) |
| Meta-Net (Vanilla-Net + V-Net) | 80% (76–83%) | 89% (87–91%) | 58% (52–65%) | 73% (68–79%) |
| YOLO-V8 | 81% (70–85%) | 91% (84–93%) | 60% (55–69%) | 74% (67–81%) |
| Models (test set) | CG IoU | CG DSC | PZ IoU | PZ DSC |
|---|---|---|---|---|
| Ensemble Model | 79.3% (72–85%) | 88.4% (83–92%) | 54.5% (46–63%) | 70.5% (63–77%) |
| Meta-Net (Vanilla-Net + V-Net) | 78% (72–85%) | 88% (84–92%) | 54% (47–62%) | 71% (64–77%) |
| YOLO-V8 | 80% (71–84%) | 89% (83–91%) | 58% (52–66%) | 73% (68–79%) |