From Mammogram Analysis to Clinical Integration with Deep Learning in Breast Cancer Diagnosis
Abstract
1. Introduction
- Summarizes state-of-the-art methods for detection, segmentation, and classification;
- Evaluates DL-based mammography workflows and reported technical performance;
- Reviews prospective deployments and trials of AI-supported screening;
- Compares methods and highlights their limitations;
- Discusses emerging directions such as explainability and multi-task learning.
2. Methods and Materials
2.1. Research Questions
2.2. PRISMA
2.3. Data Sources and Search Strategy
2.4. Exclusion and Inclusion Criteria
- Focused on prognosis, meta-analyses, or clinical decision-making without reporting primary algorithmic results on mammograms;
- Lacked sufficient scientific novelty or methodological rigor.
2.5. Risk of Bias Within Studies
3. Results
3.1. Key Vision Tasks in Mammogram Interpretation
3.1.1. Detection
3.1.2. Segmentation
3.1.3. Classification
3.2. Overview of Publicly Available Datasets
3.3. Clinical Integration
3.4. Limitations and Challenges
4. Discussion
5. Limitations of the Study
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
| AI | Artificial Intelligence |
| AUC | Area Under the ROC Curve |
| CADx | Computer-Aided Diagnosis |
| CC | Craniocaudal |
| CE | Conformité Européenne (EU regulatory mark) |
| CNN | Convolutional Neural Network |
| DETR | DEtection TRansformer |
| DICOM | Digital Imaging and Communications in Medicine |
| DL | Deep Learning |
| DTL | Deep Transfer Learning |
| FDA | U.S. Food and Drug Administration |
| FFDM | Full-Field Digital Mammography |
| FPPI | False Positives Per Image |
| GAN | Generative Adversarial Network |
| IoU | Intersection over Union |
| LBP | Local Binary Patterns |
| mAP | Mean Average Precision |
| mIoU | Mean Intersection over Union |
| MLO | Mediolateral Oblique |
| PACS | Picture Archiving and Communication System |
| RIS | Radiology Information System |
| ROI | Region of Interest |
| SVM | Support Vector Machine |
| TPR | True Positive Rate |
| ViT | Vision Transformer |
| VLM | Vision Language Model |
References
- World Health Organization. Breast Cancer 2021. Available online: https://www.iarc.who.int/featured-news/breast-cancer-awareness-month-2021/ (accessed on 17 March 2025).
- Myers, E.R.; Moorman, P.; Gierisch, J.M.; Havrilesky, L.J.; Grimm, L.J.; Ghate, S.; Davidson, B.; Montgomery, R.C.; Crowley, M.J.; McCrory, D.C.; et al. Benefits and harms of breast cancer screening: A systematic review. JAMA 2015, 314, 1615–1634. [Google Scholar] [CrossRef]
- Independent UK Panel on Breast Cancer Screening. The benefits and harms of breast cancer screening: An independent review. Lancet 2012, 380, 1778–1786. [Google Scholar] [CrossRef]
- American Cancer Society. Types of Breast Cancer. 2025. Available online: https://www.cancer.org/cancer/types/breast-cancer/about/types-of-breast-cancer.html (accessed on 28 August 2025).
- American College of Radiology (ACR). ACR BI-RADS® Atlas: Breast Imaging Reporting and Data System; American College of Radiology: Reston, VA, USA, 2013. [Google Scholar]
- Lowry, K.P.; Coley, R.Y.; Miglioretti, D.L.; Kerlikowske, K.; Henderson, L.M.; Onega, T.; Sprague, B.L.; Lee, J.M.; Herschorn, S.; Tosteson, A.N.A.; et al. Screening Performance of Digital Breast Tomosynthesis vs Digital Mammography in Community Practice by Patient Age, Screening Round, and Breast Density. JAMA Netw. Open 2020, 3, e2011792. [Google Scholar] [CrossRef]
- Freeman, K.; Geppert, J.; Stinton, C.; Todkill, D.; Johnson, S.; Clarke, A.; Taylor-Phillips, S. Use of artificial intelligence for image analysis in breast cancer screening programmes: Systematic review of test accuracy. BMJ 2021, 374, n1872. [Google Scholar] [CrossRef]
- Yoon, J.H.; Strand, F.; Baltzer, P.A.; Conant, E.F.; Gilbert, F.J.; Lehman, C.D.; Morris, E.A.; Mullen, L.A.; Nishikawa, R.M.; Sharma, N.; et al. Standalone AI for Breast Cancer Detection at Screening Digital Mammography and Digital Breast Tomosynthesis: A Systematic Review and Meta-Analysis. Radiology 2023, 307, e222639. [Google Scholar] [CrossRef]
- Lei, Y.M.; Yin, M.; Yu, M.H.; Yu, J.; Zeng, S.E.; Lv, W.Z.; Li, J.; Ye, H.R.; Cui, X.W.; Dietrich, C.F. Artificial intelligence in medical imaging of the breast. Front. Oncol. 2021, 11, 600557. [Google Scholar] [CrossRef]
- Al-Karawi, D.; Al-Zaidi, S.; Helael, K.A.; Obeidat, N.; Mouhsen, A.M.; Ajam, T.; Alshalabi, B.A.; Salman, M.; Ahmed, M.H. A review of artificial intelligence in breast imaging. Tomography 2024, 10, 705–726. [Google Scholar] [CrossRef]
- Branco, P.E.S.C.; Franco, A.H.S.; Oliveira, A.P.d.; Carneiro, I.M.C.; Carvalho, L.M.C.d.; Souza, J.I.N.d.; Leandro, D.R.; Cândido, E.B. Artificial intelligence in mammography: A systematic review of the external validation. Rev. Bras. Ginecol. Obs. 2024, 46, e-rbgo71. [Google Scholar] [CrossRef]
- Díaz, O.; Rodríguez-Ruíz, A.; Sechopoulos, I. Artificial Intelligence for breast cancer detection: Technology, challenges, and prospects. Eur. J. Radiol. 2024, 175, 111457. [Google Scholar] [CrossRef]
- Schopf, C.M.; Ramwala, O.A.; Lowry, K.P.; Hofvind, S.; Marinovich, M.L.; Houssami, N.; Elmore, J.G.; Dontchos, B.N.; Lee, J.M.; Lee, C.I. Artificial Intelligence-Driven Mammography-Based Future Breast Cancer Risk Prediction: A Systematic Review. J. Am. Coll. Radiol. 2024, 21, 319–328. [Google Scholar] [CrossRef]
- Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef]
- McKinney, S.M.; Sieniek, M.; Godbole, V.; Godwin, J.; Antropova, N.; Ashrafian, H.; Back, T.; Chesus, M.; Corrado, G.S.; Darzi, A.; et al. International evaluation of an AI system for breast cancer screening. Nature 2020, 577, 89–94. [Google Scholar] [CrossRef]
- Yan, Y.; Conze, P.H.; Lamard, M.; Quellec, G.; Cochener, B.; Coatrieux, G. Towards improved breast mass detection using dual-view mammogram matching. Med. Image Anal. 2021, 71, 102083. [Google Scholar] [CrossRef]
- Cao, H.; Pu, S.; Tan, W.; Tong, J. Breast mass detection in digital mammography based on anchor-free architecture. Comput. Methods Programs Biomed. 2021, 205, 106033. [Google Scholar] [CrossRef]
- Zhang, L.; Li, Y.; Chen, H.; Wu, W.; Chen, K.; Wang, S. Anchor-free YOLOv3 for mass detection in mammogram. Expert Syst. Appl. 2022, 191, 116273. [Google Scholar] [CrossRef]
- Aly, G.H.; Marey, M.; El-Sayed, S.A.; Tolba, M.F. YOLO-Based Breast Masses Detection and Classification in Full-Field Digital Mammograms. Comput. Methods Programs Biomed. 2021, 200, 105823. [Google Scholar] [CrossRef]
- Chen, X.; Zhang, K.; Abdoli, N.; Gilley, P.W.; Wang, X.; Liu, H.; Zheng, B.; Qiu, Y. Transformers improve breast cancer diagnosis from unregistered multi-view mammograms. Diagnostics 2022, 12, 1549. [Google Scholar] [CrossRef]
- Park, S.; Lee, K.H.; Ko, B.; Kim, N. Unsupervised anomaly detection with generative adversarial networks in mammography. Sci. Rep. 2023, 13, 2925. [Google Scholar] [CrossRef]
- Agarwal, R.; Díaz, O.; Yap, M.H.; Lladó, X.; Martí, R. Deep learning for mass detection in Full Field Digital Mammograms. Comput. Biol. Med. 2020, 121, 103774. [Google Scholar] [CrossRef]
- Manalı, D.; Demirel, H.; Eleyan, A. Deep Learning Based Breast Cancer Detection Using Decision Fusion. Computers 2024, 13, 294. [Google Scholar] [CrossRef]
- Baccouche, A.; Garcia-Zapirain, B.; Zheng, Y.; Elmaghraby, A.S. Early Detection and Classification of Abnormality in Prior Mammograms Using Image-to-Image Translation and YOLO Techniques. Comput. Methods Programs Biomed. 2022, 221, 106884. [Google Scholar] [CrossRef]
- Baccouche, A.; Garcia-Zapirain, B.; Castillo-Olea, C.; Elmaghraby, A.S. Breast Lesions Detection and Classification via YOLO-Based Fusion Models. Comput. Mater. Contin. 2021, 69, 1407–1427. [Google Scholar] [CrossRef]
- Betancourt Tarifa, A.S.; Marrocco, C.; Molinara, M.; Tortorella, F.; Bria, A. Transformer-based mass detection in digital mammograms. J. Ambient Intell. Humaniz. Comput. 2023, 14, 2723–2737. [Google Scholar] [CrossRef]
- Ghantasala, G.P.; Unhelkar, B.; Chakrabarti, P.; Vidyullatha, P.; Pyla, M. Improving Breast Cancer Diagnosis through Multi-Class Segmentation using Attention UNet Model. In Proceedings of the 2024 Second International Conference on Advances in Information Technology (ICAIT), Chikkamagaluru, India, 24–27 July 2024; Volume 1, pp. 1–7. [Google Scholar]
- Identifying Women With Dense Breasts at High Risk for Interval Cancer. Ann. Intern. Med. 2015, 162, 673–681. [CrossRef] [PubMed]
- Reeves, R.A.; Kaufman, T. Mammography. In StatPearls [Internet], updated 2023 July 24 ed.; StatPearls Publishing: Treasure Island, FL, USA, 2023; Available online: https://www.ncbi.nlm.nih.gov/books/NBK559310/ (accessed on 4 July 2025).
- Müjdat Tiryaki, V. Mass segmentation and classification from film mammograms using cascaded deep transfer learning. Biomed. Signal Process. Control 2023, 84, 104819. [Google Scholar] [CrossRef]
- Aliniya, P.; Nicolescu, M.; Nicolescu, M.; Bebis, G. Improved Loss Function for Mass Segmentation in Mammography Images Using Density and Mass Size. J. Imaging 2024, 10, 20. [Google Scholar] [CrossRef]
- Fu, X.; Cao, H.; Hu, H.; Lian, B.; Wang, Y.; Huang, Q.; Wu, Y. Attention-Based Active Learning Framework for Segmentation of Breast Cancer in Mammograms. Appl. Sci. 2023, 13, 852. [Google Scholar] [CrossRef]
- Hithesh, M.; Puttanna, V.K. From Pixels to Prognosis: Exploring from UNet to Segment Anything in Mammogram Image Processing for Tumor Segmentation. In Proceedings of the 2023 4th International Conference on Intelligent Technologies (CONIT), Bangalore, India, 21–23 June 2024; pp. 1–7. [Google Scholar]
- Ahmed, S.T.; Barua, S.; Fahim-Ul-Islam, M.; Chakrabarty, A. CoAtNet-Lite: Advancing Mammogram Mass Detection Through Lightweight CNN-Transformer Fusion with Attention Mapping. In Proceedings of the 2024 6th International Conference on Electrical Engineering and Information & Communication Technology (ICEEICT), Dhaka, Bangladesh, 2–4 May 2024; pp. 143–148. [Google Scholar]
- Demil, S.; Bouzar-Benlabiod, L.; Paillet, G. Cost Efficient Mammogram Segmentation and Classification with NeuroMem® Chip for Breast Cancer Detection. In Proceedings of the 2023 IEEE 24th International Conference on Information Reuse and Integration for Data Science (IRI), Bellevue, WA, USA, 4–6 August 2023; pp. 273–278. [Google Scholar]
- Ali, M.; Hu, H.; Muhammad, T.; Qureshi, M.A.; Mahmood, T. Deep Learning and Shape-Driven Combined Approach for Breast Cancer Tumor Segmentation. In Proceedings of the 2025 6th International Conference on Advancements in Computational Sciences (ICACS), Lahore, Pakistan, 18–19 February 2025; pp. 1–6. [Google Scholar]
- Masood, A.; Naseem, U.; Kim, J. Multi-Level swin transformer enabled automatic segmentation and classification of breast metastases. In Proceedings of the 2023 45th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Sydney, Australia, 24–27 July 2023; pp. 1–4. [Google Scholar]
- Farrag, A.; Gad, G.; Fadlullah, Z.M.; Fouda, M.M. Mammogram tumor segmentation with preserved local resolution: An explainable AI system. In Proceedings of the GLOBECOM 2023–2023 IEEE Global Communications Conference, Kuala Lumpur, Malaysia, 4–8 December 2023; pp. 314–319. [Google Scholar]
- Nour, A.; Boufama, B. Hybrid Deep Learning and Active Contour Approach for Enhanced Breast Lesion Segmentation and Classification in Mammograms. Intell.-Based Med. 2025, 11, 100224. [Google Scholar] [CrossRef]
- Bentaher, N.; Kabbadj, Y.; Salah, M.B. Enhancing breast masses detection and segmentation: A novel u-net-based approach. In Proceedings of the 2023 10th International Conference on Wireless Networks and Mobile Communications (WINCOM), Istanbul, Turkey, 26–28 October 2023; pp. 1–6. [Google Scholar]
- M’Rabet, S.; Fnaiech, A.; Sahli, H. Heightened breast cancer segmentation in mammogram images. In Proceedings of the 2024 International Conference on Control, Automation and Diagnosis (ICCAD), Paris, France, 15–17 May 2024; pp. 1–6. [Google Scholar]
- Patil, B.; Vishwanath, P.; Priyanka, K.; Husseyn, M.; Parthiban, K. Convolutional Neural Network-Regularized Extreme Learning Machine with Hyperbolic Secant for Breast Cancer Segmentation and Classification. In Proceedings of the 2025 3rd International Conference on Integrated Circuits and Communication Systems (ICICACS), Raichur, India, 21–22 February 2025; pp. 1–6. [Google Scholar]
- Elkorany, A.S.; Elsharkawy, Z.F. Efficient breast cancer mammograms diagnosis using three deep neural networks and term variance. Sci. Rep. 2023, 13, 2663. [Google Scholar] [CrossRef]
- Ayana, B.Y.; Kumar, A.; Kim, J.; Kim, S.W. Vision-transformer-based transfer learning for mammogram classification. Diagnostics 2023, 13, 178. [Google Scholar] [CrossRef]
- Yamazaki, A.; Ishida, T. Two-view mammogram synthesis from single-view data using generative adversarial networks. Appl. Sci. 2022, 12, 12206. [Google Scholar] [CrossRef]
- Lamprou, C.; Katsikari, K.; Rahmani, N.; Hadjileontiadis, L.J.; Seghier, M.; Alshehhi, A. StethoNet: Robust Breast Cancer Mammography Classification Framework. IEEE Access 2024, 12, 144890–144903. [Google Scholar] [CrossRef]
- Wu, N.; Phang, J.; Park, J.; Shen, Y.; Huang, Z.; Zorin, M.; Jastrzębski, S.; Févry, T.; Katsnelson, J.; Kim, E.; et al. Deep neural networks improve radiologists’ performance in breast cancer screening. IEEE Trans. Med. Imaging 2020, 39, 1184–1194. [Google Scholar] [CrossRef]
- Ahmed, S.; Elazab, N.; El-Gayar, M.M.; Elmogy, M.; Fouda, Y.M. Multi-Scale Vision Transformer with Optimized Feature Fusion for Mammographic Breast Cancer Classification. Diagnostics 2025, 15, 1361. [Google Scholar] [CrossRef]
- Manigrasso, F.; Milazzo, R.; Russo, A.S.; Lamberti, F.; Strand, F.; Pagnani, A.; Morra, L. Mammography classification with multi-view deep learning techniques: Investigating graph and transformer-based architectures. Med. Image Anal. 2025, 99, 103320. [Google Scholar] [CrossRef]
- Hussain, S.; Teevno, M.A.; Naseem, U.; Avalos, D.B.A.; Cardona-Huerta, S.; Tamez-Peña, J.G. Multiview Multimodal Feature Fusion for Breast Cancer Classification Using Deep Learning. IEEE Access 2025, 13, 9265–9275. [Google Scholar] [CrossRef]
- Su, Y.; Liu, Q.; Xie, W.; Hu, P. YOLO-LOGO: A transformer-based YOLO segmentation model for breast mass detection and segmentation in digital mammograms. Comput. Methods Programs Biomed. 2022, 221, 106903. [Google Scholar] [CrossRef] [PubMed]
- Kamran, S.A.; Hossain, K.F.; Tavakkoli, A.; Bebis, G.; Baker, S. Swin-sftnet: Spatial feature expansion and aggregation using swin transformer for whole breast micro-mass segmentation. In Proceedings of the 2023 IEEE 20th International Symposium on Biomedical Imaging (ISBI), Cartagena, Colombia, 18–21 April 2023; pp. 1–5. [Google Scholar]
- Carriero, A.; Groenhoff, L.; Vologina, E.; Basile, P.; Albera, M. Deep Learning in Breast Cancer Imaging: State of the Art and Recent Advancements in Early 2024. Diagnostics 2024, 14, 848. [Google Scholar] [CrossRef]
- Prinzi, F.; Insalaco, M.; Orlando, A.; Gaglio, S.; Vitabile, S. A yolo-based model for breast cancer detection in mammograms. Cogn. Comput. 2024, 16, 107–120. [Google Scholar] [CrossRef]
- Guevara Lopez, M.A.; Posada, N.; Moura, D.; Pollán, R.; Franco-Valiente, J.; Ortega, C.; Del Solar, M.; Díaz-Herrero, G.; Ramos, I.; Loureiro, J.; et al. BCDR: A Breast Cancer Digital Repository. In Proceedings of the 15th International Conference on Experimental Mechanics, Porto, Portugal, 22–27 July 2012; pp. 1065–1066. [Google Scholar]
- Pattanaik, R.K.; Mishra, S.; Siddique, M.; Gopikrishna, T.; Satapathy, S. Breast Cancer Classification from Mammogram Images Using Extreme Learning Machine-Based DenseNet121 Model. J. Sens. 2022, 2022, 2731364. [Google Scholar] [CrossRef]
- Wang, Y.; Wang, Z.; Feng, Y.; Zhang, L. WDCCNet: Weighted double-classifier constraint neural network for mammographic image classification. IEEE Trans. Med. Imaging 2021, 41, 559–570. [Google Scholar] [CrossRef]
- Petrini, D.G.; Shimizu, C.; Roela, R.A.; Valente, G.V.; Folgueira, M.A.A.K.; Kim, H.Y. Breast cancer diagnosis in two-view mammography using end-to-end trained efficientnet-based convolutional network. IEEE Access 2022, 10, 77723–77731. [Google Scholar] [CrossRef]
- Ayana, G.; Park, J.; Choe, S.w. Patchless multi-stage transfer learning for improved mammographic breast mass classification. Cancers 2022, 14, 1280. [Google Scholar] [CrossRef]
- Dada, E.G.; Oyewola, D.O.; Misra, S. Computer-aided diagnosis of breast cancer from mammogram images using deep learning algorithms. J. Electr. Syst. Inf. Technol. 2024, 11, 38. [Google Scholar] [CrossRef]
- Lopez, E.; Grassucci, E.; Valleriani, M.; Comminiello, D. Multi-view hypercomplex learning for breast cancer screening. arXiv 2022, arXiv:2204.05798. [Google Scholar] [CrossRef]
- Dehghan Rouzi, M.; Moshiri, B.; Khoshnevisan, M.; Akhaee, M.A.; Jaryani, F.; Salehi Nasab, S.; Lee, M. Breast cancer detection with an ensemble of deep learning networks using a consensus-adaptive weighting method. J. Imaging 2023, 9, 247. [Google Scholar] [CrossRef]
- Khan, S.K.; Kanamarlapudi, A.; Singh, A.R. RM-DenseNet: An Enhanced DenseNet Framework with Residual Model for Breast Cancer Classification Using Mammographic Images. In Proceedings of the 2024 2nd International Conference on Advancement in Computation & Computer Technologies (InCACCT), Gharuan, India, 2–3 May 2024; pp. 711–715. [Google Scholar]
- Shen, T.; Wang, J.; Gou, C.; Wang, F. Hierarchical Fused Model with Deep Learning and Type-2 Fuzzy Learning for Breast Cancer Diagnosis. IEEE Trans. Fuzzy Syst. 2020, 28, 3204–3218. [Google Scholar] [CrossRef]
- Leung, C.; Nguyen, H. A Novel Deep Learning Approach for Breast Cancer Detection on Screening Mammography. In Proceedings of the 2023 IEEE 23rd International Conference on Bioinformatics and Bioengineering, BIBE 2023, Dayton, OH, USA, 4–6 December 2023; pp. 277–284. [Google Scholar] [CrossRef]
- Pi, J.; Qi, Y.; Lou, M.; Li, X.; Wang, Y.; Xu, C.; Ma, Y. FS-UNet: Mass segmentation in mammograms using an encoder-decoder architecture with feature strengthening. Comput. Biol. Med. 2021, 137, 104800. [Google Scholar] [CrossRef]
- Huynh, H.N.; Tran, A.T.; Tran, T.N. Region-of-Interest Optimization for Deep-Learning-Based Breast Cancer Detection in Mammograms. Appl. Sci. 2023, 13, 6894. [Google Scholar] [CrossRef]
- Mohammed, A.D.; Ekmekci, D. Breast Cancer Diagnosis Using YOLO-Based Multiscale Parallel CNN and Flattened Threshold Swish. Appl. Sci. 2024, 14, 2680. [Google Scholar] [CrossRef]
- Rahman, M.M.; Jahangir, M.Z.B.; Rahman, A.; Akter, M.; Nasim, M.A.A.; Gupta, K.D.; George, R. Breast Cancer Detection and Localizing the Mass Area Using Deep Learning. Big Data Cogn. Comput. 2024, 8, 80. [Google Scholar] [CrossRef]
- Jiang, J.; Peng, J.; Hu, C.; Jian, W.; Wang, X.; Liu, W. Breast cancer detection and classification in mammogram using a three-stage deep learning framework based on PAA algorithm. Artif. Intell. Med. 2022, 134, 102419. [Google Scholar] [CrossRef] [PubMed]
- Bhatti, H.M.A.; Li, J.; Siddeeq, S.; Rehman, A.; Manzoor, A. Multi-detection and Segmentation of Breast Lesions Based on Mask RCNN-FPN. In Proceedings of the 2020 IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2020, Seoul, Republic of Korea, 16–19 December 2020; pp. 2698–2704. [Google Scholar] [CrossRef]
- Suckling, J.; Parker, J.; Dance, D.; Astley, S.; Hutt, I.; Boggis, C.; Ricketts, I.; Stamatakis, E.; Cerneaz, N.; Kok, S.; et al. The Mammographic Image Analysis Society Digital Mammogram Database; University of Essex: Colchester, UK, 1994. [Google Scholar] [CrossRef]
- Heath, M.; Bowyer, K.; Kopans, D.; Kegelmeyer, P.; Moore, R.; Chang, K.; Munishkumaran, S. Current Status of the Digital Database for Screening Mammography. In Digital Mammography: Nijmegen, 1998; Karssemeijer, N., Thijssen, M., Hendriks, J., van Erning, L., Eds.; Springer: Dordrecht, The Netherlands, 1998; pp. 457–460. [Google Scholar] [CrossRef]
- Lee, R.S.; Gimenez, F.; Hoogi, A.; Miyake, K.K.; Gorovoy, M.; Rubin, D.L. A curated mammography data set for use in computer-aided detection and diagnosis research (CBIS-DDSM). Sci. Data 2017, 4, 1–9. [Google Scholar] [CrossRef]
- Moreira, I.C.; Amaral, I.; Domingues, I.; Cardoso, A.; Cardoso, M.J.; Cardoso, J.S. INbreast: Toward a Full-field Digital Mammographic Database. Acad. Radiol. 2012, 19, 236–248. [Google Scholar] [CrossRef]
- Halling-Brown, M.D.; Warren, L.M.; Ward, D.; Lewis, E.; Mackenzie, A.; Wallis, M.G.; Wilkinson, L.S.; Given-Wilson, R.M.; McAvinchey, R.; Young, K.C. OPTIMAM Mammography Image Database: A Large-Scale Resource of Mammography Images and Clinical Data. Radiol. Artif. Intell. 2021, 3, e200103. [Google Scholar] [CrossRef] [PubMed]
- Nguyen, H.T.; Nguyen, H.Q.; Pham, H.H.; Lam, K.; Le, L.T.; Dao, M.; Vu, V. VinDr-Mammo: A large-scale benchmark dataset for computer-aided diagnosis in full-field digital mammography. Sci. Data 2023, 10, 277. [Google Scholar] [CrossRef]
- Cui, C.; Li, L.; Cai, H.; Fan, Z.; Zhang, L.; Dan, T.; Li, J.; Wang, J. The Chinese Mammography Database (CMMD): An Online Mammography Database with Biopsy Confirmed Types for Machine Diagnosis of Breast. The Cancer Imaging Archive. 2021. Available online: https://doi.org/10.7937/tcia.eqde-4b16 (accessed on 4 July 2025).
- Oza, P.; Oza, U.; Oza, R.; Sharma, P.; Patel, S.; Kumar, P.; Gohel, B. Digital mammography dataset for breast cancer diagnosis research (DMID) with breast mass segmentation analysis. Biomed. Eng. Lett. 2024, 14, 317–330. [Google Scholar] [CrossRef] [PubMed]
- Dembrower, K.; Wåhlin, E.; Liu, Y.; Olsson, M.; Eklund, M.; Lång, K.; Tsakok, A.; Strand, F. Effect of artificial intelligence-based triaging of breast cancer screening mammograms on cancer detection and radiologist workload: A retrospective simulation study. Lancet Digit. Health 2020, 2, e468–e474. [Google Scholar] [CrossRef]
- Dembrower, K.; Crippa, A.; Colón, E.; Eklund, M.; Strand, F.; Consortium, S.T. Artificial intelligence for breast cancer detection in screening mammography in Sweden: A prospective, population-based, paired-reader, non-inferiority study. Lancet Digit. Health 2023, 5, e703–e711. [Google Scholar] [CrossRef]
- van Nijnatten, T.J.A.; Payne, N.R.; Hickman, S.E.; Ashrafian, H.; Gilbert, F.J. Overview of trials on artificial intelligence algorithms in breast cancer screening—A roadmap for international evaluation and implementation. Eur. J. Radiol. 2023, 167, 111087. [Google Scholar] [CrossRef] [PubMed]
- Pedemonte, S.; Tsue, T.; Mombourquette, B.; Vu, Y.N.T.; Matthews, T.; Hoil, R.M.; Shah, M.; Ghare, N.; Zingman-Daniels, N.; Holley, S.; et al. A deep learning algorithm for reducing false positives in screening mammography. arXiv 2022, arXiv:2204.06671. [Google Scholar] [CrossRef]
- Kyono, T.; Gilbert, F.J.; van der Schaar, M. MAMMO: A Deep Learning Solution for Facilitating Radiologist-Machine Collaboration in Breast Cancer Diagnosis. arXiv 2018, arXiv:1811.02661. [Google Scholar] [CrossRef]
- Eisemann, N.; Bunk, S.; Mukama, T.; Baltus, H.; Elsner, S.A.; Gomille, T.; Hecht, G.; Heywang-Köbrunner, S.; Rathmann, R.; Siegmann-Luz, K.; et al. Nationwide real-world implementation of AI for cancer detection in population-based mammography screening. Nat. Med. 2025, 31, 917–924. [Google Scholar] [CrossRef]
- Yu, H.; Yi, S.; Niu, K.; Zhuo, M.; Li, B. UMIT: Unifying Medical Imaging Tasks via Vision-Language Models. arXiv 2025, arXiv:2503.15892. [Google Scholar] [CrossRef]
- Ma, J.; He, Y.; Li, F.; Han, L.; You, C.; Wang, B. Segment Anything in Medical Images. Nat. Commun. 2024, 15, 654. [Google Scholar] [CrossRef] [PubMed]
- Lauritzen, A.D.; Lillholm, M.; Lynge, E.; Nielsen, M.; Karssemeijer, N.; Vejborg, I. Early Indicators of the Impact of Using AI in Mammography Screening for Breast Cancer. Radiology 2024, 311, e232479. [Google Scholar] [CrossRef]
- Verboom, S.D.; Kroes, J.; Pires, S.; Broeders, M.J.; Sechopoulos, I. Hybrid radiologist/AI mammography screening with certainty-based triage: A simulation study. Radiology 2024, 316, e242594. [Google Scholar] [CrossRef]
- Ji, J.; Hou, Y.; Chen, X.; Pan, Y.; Xiang, Y. Vision-Language Model for Generating Textual Descriptions From Clinical Images: Model Development and Validation Study. JMIR Form. Res. 2024, 8, e32690. [Google Scholar] [CrossRef]
- Rieke, N.; Hancox, J.; Li, W.; Milletari, F.; Roth, H.R.; Albarqouni, S.; Bakas, S.; Galtier, M.N.; Landman, B.A.; Maier-Hein, K.; et al. The future of digital health with federated learning. NPJ Digit. Med. 2020, 3, 119. [Google Scholar] [CrossRef] [PubMed]
| RQ | Research Question |
|---|---|
| RQ1 | What are the current deep learning techniques used in mammographic breast cancer diagnosis? |
| RQ2 | What are the types of problems solved? |
| RQ3 | Which methods/models are used, and how have they evolved? |
| RQ4 | What are the recent trends for detection, segmentation, and classification tasks? |
| RQ5 | What are the existing limitations and challenges? |
| RQ6 | Which datasets are available for mammography images? |
| RQ7 | What are the existing multimodal learning approaches? |
| Task Type | Number of Papers |
|---|---|
| Detection | 19 |
| Segmentation | 18 |
| Classification | 14 |
| Inclusion Criteria | Exclusion Criteria |
|---|---|
| Studies written in English only | Studies written in other languages |
| Published between 1 January 2020 and 1 June 2025 | Published before 1 January 2020 |
| Focused on mammography images for breast cancer diagnosis and detection | Focused on mammography images for breast cancer prognosis, meta-analysis, and clinical decision-making |
| Used deep learning or machine learning methods | No deep learning or machine learning used |
| Focused exclusively on mammography | Focused on other imaging modalities |
| Published in peer-reviewed sources (IEEE, Scopus, etc.) | Preprints, retracted papers, review papers, dissertations |
| Outcome Domain | Definition and Metrics Collected |
|---|---|
| Lesion Detection Performance | Sensitivity; specificity; precision; false positives per image (FPPI); AUC. |
| Segmentation Accuracy | Dice Similarity Coefficient (DSC); Intersection over Union (IoU); pixel-level precision and recall. |
| Classification Accuracy | Accuracy; F1-score; AUC; confusion matrix-derived metrics (TP, TN, FP, FN). |
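The metrics in the table above follow their standard definitions. As a purely illustrative sketch (not code from any reviewed study), Dice, IoU, sensitivity, and specificity for binary pixel masks or labels can be computed as:

```python
def dice_iou(pred, truth):
    """Dice Similarity Coefficient and IoU for two equal-length binary masks
    (flat sequences of 0/1 pixel labels)."""
    tp = sum(p and t for p, t in zip(pred, truth))       # overlapping pixels
    fp = sum(p and not t for p, t in zip(pred, truth))   # spurious pixels
    fn = sum(t and not p for p, t in zip(pred, truth))   # missed pixels
    denom = tp + fp + fn
    dice = 2 * tp / (2 * tp + fp + fn) if denom else 1.0
    iou = tp / denom if denom else 1.0                   # IoU = TP / (TP+FP+FN)
    return dice, iou

def sensitivity_specificity(pred, truth):
    """Sensitivity (TPR) and specificity from binary predictions/labels."""
    tp = sum(p and t for p, t in zip(pred, truth))
    tn = sum((not p) and (not t) for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    fn = sum(t and not p for p, t in zip(pred, truth))
    return tp / (tp + fn), tn / (tn + fp)
```

Note that Dice and IoU are monotonically related (Dice = 2·IoU/(1+IoU)), which is why studies often report only one of the two.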
| Domain | Low Risk (%) | Unclear Risk (%) | High Risk (%) |
|---|---|---|---|
| Dataset Representativeness | 18 (38.30%) | 23 (48.94%) | 6 (12.77%) |
| Index Test (DL model) | 35 (74.47%) | 9 (19.15%) | 3 (6.38%) |
| Reference Standard | 30 (63.83%) | 12 (25.53%) | 5 (10.64%) |
| Flow and Timing | 41 (87.23%) | 5 (10.64%) | 1 (2.13%) |
| Applicability Concerns | Low (55.32%) | Moderate (31.91%) | High (12.77%) |
| Study | Dataset Representativeness | Index Test | Reference Standard | Flow and Timing | Overall Risk |
|---|---|---|---|---|---|
| [15] | Low | Low | Low | Low | Low |
| [16] | Low | Unclear | Low | Low | Moderate |
| [17] | Low | Low | Unclear | Low | Moderate |
| [18] | Low | Low | Unclear | Low | Moderate |
| [19] | Low | Unclear | Low | Low | Moderate |
| [20] | Unclear | Low | Low | Low | Moderate |
| [21] | Low | Low | Low | Low | Low |
| [22] | Unclear | Low | Low | Low | Moderate |
| [23] | Low | Low | Unclear | Low | Moderate |
| [24] | Low | Low | Low | Low | Low |
| [25] | Low | Unclear | Unclear | Low | High |
| [26] | Low | Low | Low | Low | Low |
| [27] | Low | Low | Low | Low | Low |
| [28] | Unclear | Low | Unclear | Low | High |
| [29] | Low | Unclear | Low | Low | Moderate |
| [30] | Low | Unclear | Low | Low | Moderate |
| [31] | Low | Low | Unclear | Low | Moderate |
| [32] | Low | Low | Low | Low | Low |
| [33] | Low | Low | Low | Low | Low |
| [34] | Low | Unclear | Low | Low | Moderate |
| [35] | Low | Low | Low | Low | Low |
| [36] | Low | Low | Low | Low | Low |
| [37] | Unclear | Low | Low | Low | Moderate |
| [38] | Low | Low | Unclear | Low | Moderate |
| [39] | Low | Unclear | Low | Low | Moderate |
| [40] | Low | Low | Unclear | Low | Moderate |
| [41] | Low | Unclear | Low | Low | Moderate |
| [42] | Low | Low | Low | Low | Low |
| [43] | Low | Low | Low | Low | Low |
| [44] | Low | Low | Unclear | Low | Moderate |
| [45] | Low | Low | Low | Low | Low |
| [46] | Unclear | Low | Low | Low | Moderate |
| [47] | Low | Low | Low | Low | Low |
| [48] | Low | Low | Low | Low | Low |
| [49] | Low | Low | Unclear | Low | Moderate |
| [50] | Low | Unclear | Low | Low | Moderate |
| Approach | Advantage | Disadvantage | Task | Papers |
|---|---|---|---|---|
| Ensemble Faster R-CNN + CNN | Higher accuracy than radiologists; reduces errors; works across countries | Needs large diverse data; high computational cost; interpretability limited | Detection | McKinney et al. [15] |
| YOLOv3-based fusion model | Accurate, fast, multi-class detection | Lower calcification sensitivity; needs tuning; may miss small lesions | Detection | Baccouche et al. [25] |
| Faster R-CNN (InceptionV2) | High sensitivity, low false positives, robust across datasets | Needs large annotation; slower than YOLO; misses subtle lesions | Detection | Agarwal et al. [22] |
| YOLOv3 with anchor tuning | Fast, high sensitivity, accurate for mass sizes | Lower accuracy for tiny calcifications; needs tuning/augmentation | Detection | Aly et al. [19] |
| Dual-view Siamese network (YOLOv3 + Siamese CNN) | Improves detection by using both views; fewer false positives | Needs paired images; more complex; matching masses is harder | Detection | Yan et al. [16] |
| YOLO + image-to-image translation (Pix2Pix, CycleGAN) | High accuracy for masses; enables early prediction; multi-lesion types | Lower accuracy for prior calcifications; needs paired data; complex | Detection | Baccouche et al. [24] |
| Anchor-free BMassDNet | High recall, low false positives; handles various mass sizes | Needs mask labels; cannot detect all lesion types; longer training | Detection | Cao et al. [17] |
| Anchor-free YOLOv3 (GIoU, focal loss) | Fewer false anchors; better for small masses | Higher false positives than 2-stage; complex loss tuning | Detection | Zhang et al. [18] |
| Decision fusion (CNN, ResNet50 + SVM, LBP + SVM) | Very high accuracy; robust; avoids overfitting | Computationally heavier; needs careful tuning | Detection | Manalı et al. [23] |
| YOLO-LOGO (YOLOv5 + Transformer) | Fast, detects and segments masses, high TPR | Lower precision for small masses; complex; not end-to-end | Detection/Segmentation | Su et al. [51] |
| Transformer-based CNN (TransM) | High accuracy; models global/local features | Needs more data/memory; complex | Detection | Chen et al. [20] |
| Swin Transformer mass detector | Higher sensitivity; outperforms CNNs; improved with fusion | High compute needs; longer training; harder to tune | Detection | Betancourt et al. [26] |
| Swin-SFTNet (Swin Transformer U-Net) | Best for micro-mass segmentation; robust to small/irregular shapes | Higher complexity; more demanding; less public validation | Detection/Segmentation | Kamran et al. [52] |
| StyleGAN2-based anomaly detector | No annotation needed; detects anomalies; high sensitivity | Lower specificity; less accurate than supervised; CC view only | Detection | Park et al. [21] |
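Detection studies such as those above typically report sensitivity (TPR) at a fixed number of false positives per image (FPPI), with a prediction counted as a true positive when its IoU with a ground-truth box exceeds a threshold. The following is a minimal, hedged sketch of that bookkeeping (greedy matching; the reviewed studies' exact protocols may differ):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def tpr_fppi(preds_per_image, gts_per_image, iou_thr=0.5):
    """Greedy one-to-one matching of predictions to ground-truth boxes.
    Returns (sensitivity, false positives per image)."""
    tp = fp = n_gt = 0
    for preds, gts in zip(preds_per_image, gts_per_image):
        n_gt += len(gts)
        unmatched = list(gts)  # each GT box may be matched at most once
        for p in preds:
            hit = next((g for g in unmatched if box_iou(p, g) >= iou_thr), None)
            if hit is not None:
                unmatched.remove(hit)
                tp += 1
            else:
                fp += 1
    return (tp / n_gt if n_gt else 0.0, fp / len(preds_per_image))
```

Sweeping a confidence threshold over the predictions and re-running this matching yields the FROC curve from which the per-study operating points are drawn.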
| Approach | Advantage | Disadvantage | Task | Papers |
|---|---|---|---|---|
| U-Net family (U-Net, U-Net++, Attention U-Net, ResNet-U-Net) | Strong spatial localization; easily extensible (attention, residual, multi-scale features); good for medical images | May struggle with dense tissue; sensitive to class imbalance; performance varies with architecture depth | Segmentation | Tiryaki et al. [30], Hithesh et al. [33], Ghantasala et al. [27], Bentaher et al. [40], Nour and Boufama [39], Aliniya et al. [31] |
| SegNet family (SegNet, Enhanced SegNet) | Efficient inference; fast for clinical deployment; lower computational requirements | Lower accuracy on complex shapes and dense tissue compared to transformer models | Segmentation | M’Rabet [41], Hithesh et al. [33] |
| Transformer-based models (Swin Transformer, CoAtNet) | Excellent global context modeling; high accuracy and generalizability; powerful multi-scale feature fusion | High computational cost; complex training and architecture | Segmentation | Masood et al. [37], Ahmed et al. [34] |
| DeepLab family (DeepLabv3+) | Large receptive field; multi-scale context with fine-detail preservation; flexible dilation rates | Higher complexity; requires careful tuning for dilation rates; slower inference | Segmentation | Farrag et al. [38] |
| Hybrid CNN + traditional (e.g., U-Net + Active Contour Model) | Precise boundary refinement; robust on irregular, low-contrast masses | More complex and slower inference; integration of components adds overhead | Segmentation | Nour and Boufama [39] |
| Segment Anything Model (SAM) | Strong generic segmentation pretraining; flexible for multiple domains | Underperforms without domain-specific fine-tuning; low accuracy for mammograms out of the box | Segmentation | Hithesh et al. [33] |
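The segmentation models above are typically scored with the Dice similarity coefficient (reported as DSC or Dice in Section 3's results). For reference, a minimal pure-Python sketch of the metric on binary masks (the function name and list-of-lists mask format are illustrative assumptions):

```python
def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks (nested lists of 0/1).

    Dice = 2*TP / (2*TP + FP + FN); two empty masks count as a perfect match.
    """
    tp = fp = fn = 0
    for row_p, row_t in zip(pred, truth):
        for p, t in zip(row_p, row_t):
            if p and t:
                tp += 1        # predicted lesion pixel that is truly lesion
            elif p:
                fp += 1        # predicted lesion pixel on background
            elif t:
                fn += 1        # missed lesion pixel
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 1.0
```

Unlike plain pixel accuracy, Dice ignores the (usually dominant) true-negative background, which is why it is preferred for small, sparse lesion masks.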
| Approach | Advantage | Disadvantage | Task | Papers |
|---|---|---|---|---|
| CNN-based models (single view or ROI-based CNNs) | Strong local feature learning; well-established; good for small regions or patches | Limited global context; high preprocessing needs; struggles with scale/rotation variance | Classification | Elkorany et al. [43], Wu et al. [47] |
| Transformer-based models (ViT, Swin, PVT) | Excellent global context modeling; better at handling full images; holistic attention | High computational cost; complex to train; requires large datasets | Classification | Ayana et al. [44], Bermudez et al. [49], Ahmed et al. [48] |
| Ensemble CNN frameworks | Combines strengths of multiple CNNs; better generalization; robust to variations in input | Increased computational complexity; feature fusion design is non-trivial | Classification | StethoNet [46], Elkorany et al. [43] |
| Two-stream/multi-view CNN or transformer models | Leverages complementary views (CC, MLO, left-right); mirrors radiologist workflow | More complex architecture; higher memory and compute requirements | Classification | Wu et al. [47], Bermudez et al. [49], Ahmed et al. [48] |
| GAN-augmented classifiers | Generates missing views (e.g., CC from MLO); enhances data completeness; aids low-resource settings | Synthetic views may lack fine details; challenging to train and validate | Classification | Yamazaki and Ishida [45] |
| Multimodal models (image + metadata fusion) | Combines imaging and clinical data; improves diagnostic accuracy; more holistic view | More complex training pipeline; requires comprehensive data collection | Classification | Hussain et al. [50] |
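Ensemble frameworks such as StethoNet [46] and the decision-fusion approach of Manalı et al. [23] combine several classifiers' outputs. The simplest fusion rule, soft voting, averages per-class probabilities; a pure-Python sketch under assumed interfaces (the exact fusion used in those papers may differ):

```python
def soft_vote(prob_lists, weights=None):
    """Fuse per-class probability vectors from several classifiers.

    prob_lists: one probability vector per model, all the same length.
    Returns (predicted class index, fused probability vector).
    """
    n = len(prob_lists)
    weights = weights or [1.0 / n] * n          # default: equal weights
    n_classes = len(prob_lists[0])
    fused = [sum(w * probs[c] for w, probs in zip(weights, prob_lists))
             for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: fused[c]), fused
```

Averaging tends to cancel out uncorrelated errors of the individual models, which is the intuition behind the "better generalization" advantage listed above; the cost is running every member model at inference time.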
| Author, Year | Target Variable | Architecture | Pre-Processing | Dataset | Output | Result |
|---|---|---|---|---|---|---|
| Wang et al. (2021) [57] | Breast Cancer Classification | DenseNet-121 | Augmentation | INbreast | Benign/Malignant | Acc.: 96.2% |
| Petrini et al. (2022) [58] | Breast Cancer Diagnosis | EfficientNet | Transfer Learning | CBIS-DDSM | Normal, Benign, Malignant | Acc.: 85.13%, AUC: 93% |
| Ayana et al. (2022) [59] | Mass Classification | MSTL | Transfer Learning | DDSM | Benign/Malignant | AUC: 100% |
| Dada et al. (2024) [60] | Breast Cancer Detection | EfficientNet | Resize | MaMaTT2 | Benign, Malignant, Normal | Acc.: 98.29% |
| Lopez et al. (2022) [61] | Breast Cancer Classification | PHResNet | Pre-Training | CBIS-DDSM | Multi-Class | AUC: 84% |
| Dehghan et al. (2023) [62] | Breast Cancer Detection | Ensemble CNN | Transfer Learning | INbreast, DDSM | Benign/Malignant | F2: 95.48% |
| Khan et al. (2024) [63] | Breast Cancer Classification | RM-DenseNet | Not Specified | Not Specified | Benign/Malignant | Acc.: 96.50% |
| Shen et al. (2020) [64] | Segmentation, Classification | ResU-segNet | Augmentation, Features | INbreast, Private | Tumor, Malignancy | DSC: 90%, AUC: 98% |
| Leung et al. (2023) [65] | Segmentation | U-Net | Augmentation | CBIS-DDSM | Tumor Regions | Dice: 64.59% |
| Pi et al. (2021) [66] | Segmentation | FS-UNet | Not Specified | CBIS-DDSM | Tumor Regions | Dice: 84.19% |
| Huynh et al. (2023) [67] | ROI Optimization | YOLOX | Windowing | Multiple datasets | Binary | AUC: 98% |
| Mohammed et al. (2024) [68] | Detection | YOLO, CNN | Augmentation | CBIS-DDSM | Tumor Regions | mAP: 91.15% |
| Rahman et al. (2024) [69] | Detection | YOLO + U-Net | Augmentation | MIAS | Tumor Regions | AUC: 98.6% |
| Jiang et al. (2022) [70] | Detection | EfficientNet-B3 | Post-Processing | CBIS-DDSM, MIAS | Tumor Regions | AUC: 96% |
| Bhatti et al. (2020) [71] | Detection, Segmentation | Mask R-CNN-FPN | Augmentation | DDSM, INbreast | Multi-Class | mAP: 84% |
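Most results in the table above are reported as AUC. It can be computed without plotting a ROC curve via its probabilistic (Mann-Whitney) interpretation: the chance that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal pure-Python sketch (O(P·N) for clarity; production code sorts instead):

```python
def auc_score(labels, scores):
    """AUC as P(score of random positive > score of random negative).

    labels: 0/1 ground truth; scores: model outputs (any real scale).
    Ties between a positive and a negative count as half a win.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Because AUC depends only on the ranking of scores, it is invariant to the decision threshold, which makes it a fairer cross-study comparison than accuracy when class prevalence differs between datasets.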
| Year | Name | Typical Tasks | Sample Size (Patients/Exams/Images) | Modality/Formats | Annotations | Access/Licensing | Train/Test Split Notes |
|---|---|---|---|---|---|---|---|
| 1994 | Mammographic Image Analysis Society (MIAS) [72] | Tumor detection, classification | 161 patients/—/322 images | Digitized film (PGM; 1024×1024 from 200 µm resampled) | Radiologist truth-markings; elliptical lesion outlines; benign/malignant; tissue type | Public for research; MIAS licence (research-only); no DUA | No official split |
| 1997 | Digital Database for Screening Mammography (DDSM) [73] | Tumor detection, classification | —/2620 exams/∼10,480 images | Digitized film (LJPEG; often converted to TIFF/DICOM) | ROI contours; BI–RADS descriptors; lesion type; pathologically verified labels | Public download (legacy tooling); no DUA (license unspecified) | No official split; CBIS–DDSM provides standardized splits |
| 2011 | INbreast [75] | Detection, segmentation, classification | 115 patients/—/410 images | FFDM (DICOM) | Precise lesion contours (XML); BI–RADS density & assessment | Freely available to researchers; no explicit license, no DUA stated | No official split |
| 2017 | CBIS–DDSM [74] | Detection, segmentation, classification | 1566 participants/2620 studies/10,239 images | Digitized film curated to DICOM | Updated ROI segmentations & boxes; curated labels | Public via TCIA; CC BY 3.0; no DUA | Predefined train/test splits (mass & calcifications) |
| 2020 | OPTIMAM (OMI–DB) [76] | Detection, classification, risk modeling | 173,319 women/—/>2.5M images | FFDM (DICOM) + rich clinical metadata | Extensive clinical labels; lesion–level annotations available in subsets | Restricted: apply to Data Access Committee; DUA required; fees may apply | No public split; varies per approved study protocol |
| 2020 | CSAW (cohort & public subsets) [80] | Detection, segmentation, risk | Full cohort: multi–million images; CSAW–CC subset: 8723 participants (873 cancer/7850 controls); CSAW–S segmentation: 172 patients | FFDM (DICOM; PNG masks in subsets) | Pixel–level tumor annotations (CSAW–S/CC); masking labels (CSAW–M) | CSAW–CC: Restricted on request; CSAW–S: Restricted with CC BY–NC–ND 4.0 terms | Subsets documented; no single official split across CSAW ecosystem |
| 2021 | VinDr–Mammo [77] | Detection, classification | —/5000 exams/20,000 images | FFDM (DICOM) | Breast–level BI–RADS assessment; lesion bounding boxes; double read with arbitration | Restricted on PhysioNet (credentialed access; DUA); citation required | Predefined split: 4000 train/1000 test |
| 2021 | CMMD [78] | Detection, classification, subtype prediction | 1775 patients/—/5202 images | Mammography (collected TIFF; released as DICOM) + Clinical data (XLSX) | Biopsy–confirmed benign/malignant; molecular subtypes for subset | Public via TCIA; CC BY 4.0; no DUA | No official split |
| 2022 | DMID [79] | Detection, segmentation, classification | —/—/510 images | Digital mammography (DICOM & TIFF) | Mass segmentation masks; benign/malignant; BI–RADS density; abnormality type | Public for research/education (publisher’s terms); no DUA | No official split |
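Most datasets in the table ship with no official train/test split, and naive image-level splitting leaks information when multiple views (CC, MLO, left/right) of the same patient land in both partitions. A defensible default is a deterministic patient-level split; the sketch below hashes a patient identifier so the assignment is reproducible across runs (field names and the 20% fraction are illustrative assumptions):

```python
import hashlib

def patient_split(records, test_fraction=0.2):
    """Deterministic patient-level split of image records.

    Every record carries a "patient_id"; all images of one patient land in
    the same partition, preventing train/test leakage across views.
    """
    train, test = [], []
    for rec in records:
        # SHA-256 of the patient ID gives a stable pseudo-random bucket 0-99.
        digest = hashlib.sha256(rec["patient_id"].encode()).hexdigest()
        bucket = int(digest, 16) % 100
        (test if bucket < test_fraction * 100 else train).append(rec)
    return train, test
```

Hashing rather than random shuffling means the split survives dataset regeneration and library upgrades, which matters when later studies want to reproduce reported numbers on datasets that lack a predefined split.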
| Challenge | Corresponding Articles |
|---|---|
| Breast Tissue Density Obscuring Masses | [15,16,17,18,19] |
| Generalization Across Populations and Devices | [15,20,21] |
| Integration of Multi-View and Temporal Data | [15,16,20] |
| Anchor Design and Localization Precision | [17,18,19] |
| Limited Annotated Datasets and Class Imbalance | [21,22,23,24] |
| Real-Time Detection Constraints and Computational Cost | [17,18,19,25] |
| Interpretability and Clinical Acceptance | [15,21,26] |
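The class-imbalance challenge above is commonly attacked with the focal loss (used, e.g., in Zhang et al.'s anchor-free detector [18]), which down-weights abundant easy negatives so that rare lesion examples dominate the gradient. A per-example pure-Python sketch with the commonly cited defaults γ = 2, α = 0.25 (signature and defaults are illustrative):

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for one prediction.

    p: predicted probability of the positive class; y: label in {0, 1}.
    The (1 - p_t)**gamma factor shrinks the loss of well-classified
    examples, so hard, rare positives contribute most of the gradient.
    """
    p_t = p if y == 1 else 1.0 - p          # probability of the true class
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(max(p_t, 1e-12))
```

With γ = 0 and α = 1 the expression reduces to plain cross-entropy, so the formulation strictly generalizes the standard loss.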
| Challenge | Corresponding Articles |
|---|---|
| Breast Tissue Density and Lesion Visibility | [27,28,29,30,31] |
| Class Imbalance and Multi-label Complexity | [27,32,33] |
| Computational Complexity and Model Efficiency | [34,35,36,37] |
| Interpretability and Clinical Trust | [34,38,39] |
| Small Dataset Sizes and Generalization Limits | [30,32,37] |
| Weak Boundary Detection and Irregular Lesion Shapes | [31,36,39,40] |
| Model Adaptability and Real-Time Deployment | [35,36,41,42] |
| Challenge | Corresponding Articles |
|---|---|
| Overfitting on Small Datasets and Limited External Validation | [43,44,45] |
| Poor Generalization Across Devices and Populations | [44,46,47] |
| Loss of Global Context in CNN-Based Models | [44,47,48] |
| View Integration and Multi-Image Fusion Complexity | [45,47,49] |
| Lack of Interpretability in Deep Learning Pipelines | [43,46,50] |
| Incomplete View Availability in Clinical Data | [45] |
| Integration of Clinical Metadata with Imaging Data | [46,48,49,50] |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Abdikenov, B.; Zhaksylyk, T.; Imasheva, A.; Rakishev, D. From Mammogram Analysis to Clinical Integration with Deep Learning in Breast Cancer Diagnosis. Informatics 2025, 12, 106. https://doi.org/10.3390/informatics12040106

