Enhancing Breast Lesion Detection in Mammograms via Transfer Learning
Abstract
1. Introduction
- Systematic evaluation of advanced deep learning object detectors, including Cascade R-CNN, YOLOv12 (S, M, L, and X variants), RTMDet-X, and RT-DETR-X, on four mammography datasets (CBIS-DDSM, VinDr-Mammo, EMBED, and INbreast) for detecting masses and calcifications.
- Standardized preprocessing pipeline with contrast-limited adaptive histogram equalization (CLAHE), breast cropping, and data augmentation techniques like rotations and scaling to enhance model robustness.
- Transfer learning assessment by fine-tuning pretrained models on dataset pairs (e.g., CBIS-DDSM+VinDr-Mammo) and testing on held-out datasets (e.g., INbreast), leveraging additional medical imaging datasets (e.g., VinDr-CXR) to improve generalization.
2. Related Work
2.1. YOLO-Based Methods
2.2. Transformer-Based Methods
2.3. Unsupervised and Ensemble Methods
2.4. Transfer Learning
2.5. Comprehensive Overviews
3. Materials and Methods
3.1. Datasets
3.2. Preprocessing and Augmentation
3.2.1. CLAHE Algorithm
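As a concrete illustration of the step this subsection describes, the NumPy sketch below performs the clipped histogram equalization that CLAHE applies within each tile. It is a deliberate simplification: full CLAHE (e.g., OpenCV's `cv2.createCLAHE`) also interpolates bilinearly between neighboring tile mappings to avoid block artifacts, and the clip limit here is an example value, not the one used in our pipeline.

```python
import numpy as np

def clip_limited_equalize(tile: np.ndarray, clip_limit: float = 0.01) -> np.ndarray:
    """Equalize one 8-bit tile, clipping histogram bins above `clip_limit`
    (as a fraction of the tile's pixel count) and redistributing the
    excess uniformly: the per-tile core of CLAHE."""
    hist, _ = np.histogram(tile, bins=256, range=(0, 256))
    max_count = max(1, int(clip_limit * tile.size))
    excess = np.maximum(hist - max_count, 0).sum()
    hist = np.minimum(hist, max_count) + excess // 256   # redistribute clipped excess
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * 255.0
    return cdf[tile.astype(np.intp)].astype(np.uint8)
```

Applied tile by tile over a mammogram, this boosts local contrast in dense tissue, while the clip limit keeps near-uniform background regions from being over-amplified.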
3.2.2. Cropping Algorithm
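A minimal version of the breast-cropping step can be written as a threshold-and-bounding-box pass; the function name and the threshold/margin values below are illustrative, not taken from the paper's code.

```python
import numpy as np

def crop_breast(img: np.ndarray, thresh: int = 10, margin: int = 16):
    """Crop to the bounding box of pixels brighter than `thresh`
    (the breast region; mammogram background is near-black),
    padded by `margin` pixels and clipped to the image bounds.
    Returns the crop and its (x0, y0, x1, y1) box in the original image."""
    ys, xs = np.nonzero(img > thresh)
    if ys.size == 0:                       # blank image: nothing to crop
        return img, (0, 0, img.shape[1], img.shape[0])
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    y0, x0 = max(0, y0 - margin), max(0, x0 - margin)
    y1, x1 = min(img.shape[0], y1 + margin), min(img.shape[1], x1 + margin)
    return img[y0:y1, x0:x1], (x0, y0, x1, y1)
```

Cropping away the black background before resizing preserves more effective resolution for the breast region itself; the returned crop box is also what the bounding-box annotations must be re-expressed against.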
3.2.3. Augmentation
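For geometric augmentations such as rotation, the lesion boxes must be transformed together with the image. A common recipe, sketched below under assumed names, rotates the four corners of an axis-aligned box about the image center and re-boxes them with min/max:

```python
import math

def rotate_box(box, angle_deg, img_w, img_h):
    """Axis-aligned box (x1, y1, x2, y2) after rotating the image by
    `angle_deg` about its center: rotate the four corners, then take
    the min/max to re-box, clipped to the image bounds."""
    x1, y1, x2, y2 = box
    cx, cy = img_w / 2.0, img_h / 2.0
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    corners = [(x1, y1), (x2, y1), (x2, y2), (x1, y2)]
    rot = [((x - cx) * cos_a - (y - cy) * sin_a + cx,
            (x - cx) * sin_a + (y - cy) * cos_a + cy) for x, y in corners]
    xs, ys = zip(*rot)
    return (max(0.0, min(xs)), max(0.0, min(ys)),
            min(float(img_w), max(xs)), min(float(img_h), max(ys)))
```

Note that re-boxing rotated corners slightly enlarges the box for non-right angles, which is generally acceptable for detection labels.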
3.3. Transfer Learning Strategy
3.3.1. Cross-Dataset Transfer Learning
- Training on INbreast + CBIS-DDSM, testing on VinDr-Mammo.
- Training on INbreast + VinDr-Mammo, testing on CBIS-DDSM.
- Training on CBIS-DDSM + VinDr-Mammo, testing on INbreast.
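The three protocols above form a leave-one-dataset-out scheme and can be enumerated mechanically (illustrative helper, not from the paper's code):

```python
def leave_one_out_splits(names):
    """For each dataset, train on the remaining ones and hold it out for
    testing: the cross-dataset protocols listed above."""
    return [{"train": [d for d in names if d != held_out], "test": held_out}
            for held_out in names]

datasets = ["INbreast", "CBIS-DDSM", "VinDr-Mammo"]
splits = leave_one_out_splits(datasets)
```

Each entry yields one train/test configuration; the held-out dataset is never seen during fine-tuning, so the test score measures cross-domain generalization.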
3.3.2. Transfer Learning from Medical Imaging Datasets
3.4. Deep Learning Models
- Cascade R-CNN: A multi-stage extension of the two-stage detector family. It refines proposals through a cascade of detection heads with progressively higher IoU thresholds, improving localization quality. Our implementation uses a ResNet-50 backbone with FPN, pretrained on COCO, and is fine-tuned on mammography data.
- YOLOv12 (S/M/L/X) [40]: A family of single-stage detectors based on the YOLO architecture. Each variant (Small, Medium, Large, X-Large) scales the network depth and width. YOLOv12 processes the entire image at once and predicts bounding boxes and class probabilities directly. We use a pretrained backbone and train end-to-end with mosaic augmentation disabled (since we supply our own augmentations). YOLOv12-L was selected as the primary model for most experiments because it balances high detection accuracy with computational efficiency. Preliminary experiments showed that YOLOv12-L consistently outperformed the smaller variants (S, M) and closely matched or exceeded larger models (e.g., YOLOv12-X, RTMDet-X) in mass detection across datasets (e.g., mAP@50 of 0.963 on INbreast), while requiring fewer computational resources than YOLOv12-X or Cascade R-CNN X101. Its single-stage architecture and optimized feature extraction make it well suited to mammography’s high-resolution images and subtle lesions.
- RT-DETR-X: A real-time variant of the DETR framework. Like DETR, it uses a transformer encoder–decoder and a set-based prediction loss to eliminate postprocessing steps such as NMS. The “X” version is the largest variant in the family, scaling up the backbone and hybrid encoder for higher accuracy while remaining real-time. We use the publicly available RT-DETR-X code and fine-tune it on mammograms, leveraging its end-to-end global reasoning.
- RTMDet-X: A recent one-stage detector optimized for real-time performance. It unifies the detection heads across model scales and uses dynamic soft label assignment for better label matching. RTMDet pairs an efficient CSPNeXt backbone (a CSP-style design with ConvNeXt-inspired large-kernel blocks) with this unified head to improve both accuracy and latency. We evaluate the RTMDet-X variant, pretrained on COCO and then fine-tuned on mammography data.
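To make the cascade idea concrete: each Cascade R-CNN stage labels a proposal positive only if its IoU with a ground-truth box reaches that stage's threshold, and the thresholds tighten stage by stage. The sketch below uses the 0.5/0.6/0.7 schedule from the original Cascade R-CNN paper; the exact values in our configuration may differ.

```python
def iou(a, b):
    """Intersection-over-Union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def positive_at_stage(proposal, gt, stage_thresholds=(0.5, 0.6, 0.7)):
    """Whether a proposal counts as positive at each cascade stage,
    using progressively stricter IoU thresholds."""
    return [iou(proposal, gt) >= t for t in stage_thresholds]
```

A proposal with IoU 0.67 against its ground-truth box is positive for the first two stages but not the third, which is what drives each later stage to learn from better-localized boxes.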
3.5. Evaluation Metrics
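The detection metrics reported in Section 4 reduce to counts of matched and unmatched boxes; a prediction is conventionally counted as a true positive when its IoU with a ground-truth box reaches 0.5, the threshold behind mAP@50-style scores. A minimal sketch:

```python
def detection_metrics(tp: int, fp: int, fn: int):
    """Precision, recall, and F1 from matched-detection counts, where a
    detection matches a ground-truth box at IoU >= 0.5 (the usual
    convention for mAP@50)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

mAP additionally averages precision over the recall curve (and, for mAP@50, fixes the IoU threshold at 0.5), so it rewards ranking confident, correct detections ahead of false alarms.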
3.6. Implementation Details
- Batch size: 4 (two images per GPU on two GPUs), adjusted for memory constraints.
- Optimizer: AdamW with weight decay.
- Learning rate schedule: cosine annealing.
- Loss function: Combined classification and box regression losses, standard for object detection.
- Epochs: 100, early stopping applied with 10-epoch patience based on validation mAP.
- Input size: Images resized to 640 × 640 pixels after preprocessing (CLAHE, cropping). For YOLO-format annotations, bounding box coordinates are normalized to cropped dimensions.
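Two of the bullets above can be made concrete with a short sketch (the helper names are ours, not from the released code): the cosine-annealed learning rate at a given training step, and the conversion of a pixel-space box to normalized YOLO format relative to the cropped image dimensions.

```python
import math

def cosine_lr(step: int, total_steps: int, lr_max: float, lr_min: float = 0.0) -> float:
    """Cosine-annealed learning rate: lr_max at step 0, lr_min at the end."""
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * step / total_steps))

def to_yolo(box, crop_w: float, crop_h: float):
    """Pixel-space (x1, y1, x2, y2) -> YOLO (x_center, y_center, width,
    height), each normalized by the *cropped* image dimensions."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2 / crop_w, (y1 + y2) / 2 / crop_h,
            (x2 - x1) / crop_w, (y2 - y1) / crop_h)
```

Because the boxes are normalized to the crop rather than the original mammogram, the same labels remain valid after the 640 × 640 resize.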
4. Results
4.1. Impact of Preprocessing
4.2. Performance on Individual Datasets
4.3. Comparison Across Architectures
4.4. Cross-Dataset Transfer Learning
Visualization of Model Performance
4.5. Transfer Learning from Medical Imaging
4.6. Comparison with State-of-the-Art Methods
5. Discussion
5.1. Limitations of This Study
5.2. Clinical Interpretation
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Definition |
---|---|
BI-RADS | Breast Imaging–Reporting and Data System |
CAD | Computer-Aided Detection |
CLAHE | Contrast Limited Adaptive Histogram Equalization |
CNN | Convolutional Neural Network |
CXR | Chest X-ray |
DBT | Digital Breast Tomosynthesis |
DDSM | Digital Database for Screening Mammography |
DETR | DEtection TRansformer |
FFDM | Full-Field Digital Mammography |
FPN | Feature Pyramid Network |
IoU | Intersection over Union |
mAP | Mean Average Precision |
NMS | Non-Maximum Suppression |
ROC | Receiver Operating Characteristic |
ROI | Region of Interest |
SVM | Support Vector Machine |
TL | Transfer Learning |
YOLO | You Only Look Once |
References
- World Health Organization. Breast Cancer. 2023. Available online: https://www.who.int/news-room/fact-sheets/detail/breast-cancer (accessed on 14 June 2025).
- Lee, R.S.; Gimenez, F.; Hoogi, A.; Miyake, K.K.; Gorovoy, M.; Rubin, D.L. A curated mammography data set for use in computer-aided detection and diagnosis research. Sci. Data 2017, 4, 170177. [Google Scholar] [CrossRef] [PubMed]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
- Cai, Z.; Vasconcelos, N. Cascade R-CNN: High Quality Object Detection and Instance Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 1483–1498. [Google Scholar] [CrossRef] [PubMed]
- Lyu, C.; Zhang, W.; Huang, H.; Zhou, Y.; Wang, Y.; Liu, Y.; Zhang, S.; Chen, K. RTMDet: An Empirical Study of Designing Real-Time Object Detectors. arXiv 2022, arXiv:2212.07784. [Google Scholar] [CrossRef]
- Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-end object detection with transformers. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; Springer: Cham, Switzerland, 2020; pp. 213–229. [Google Scholar]
- Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft coco: Common objects in context. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; Springer: Cham, Switzerland, 2014; pp. 740–755. [Google Scholar]
- Nguyen, H.T.; Nguyen, H.Q.; Pham, H.H.; Lam, K.; Le, L.T.; Dao, M.; Vu, V. VinDr-Mammo: A large-scale benchmark dataset for computer-aided diagnosis in full-field digital mammography. Sci. Data 2023, 10, 277. [Google Scholar] [CrossRef] [PubMed]
- Jeong, J.J.; Vey, B.L.; Bhimireddy, A.; Kim, T.; Santos, T.; Correa, R.; Dutt, R.; Mosunjac, M.; Oprea-Ilies, G.; Smith, G.; et al. The EMory BrEast imaging Dataset (EMBED): A racially diverse, granular dataset of 3.4 million screening and diagnostic mammographic images. Radiol. Artif. Intell. 2023, 5, e220047. [Google Scholar] [CrossRef]
- Moreira, I.C.; Amaral, I.; Domingues, I.; Cardoso, A.; Cardoso, M.J.; Cardoso, J.S. Inbreast: Toward a full-field digital mammographic database. Acad. Radiol. 2012, 19, 236–248. [Google Scholar] [CrossRef]
- Ribli, D.; Horváth, A.; Unger, Z.; Pollner, P.; Csabai, I. Detecting and classifying lesions in mammograms with deep learning. Sci. Rep. 2018, 8, 4165. [Google Scholar] [CrossRef]
- Al-Masni, M.A.; Al-Antari, M.A.; Park, J.M.; Gi, G.; Kim, T.Y.; Rivera, P.; Valarezo, E.; Choi, M.T.; Han, S.M.; Kim, T.S. Simultaneous detection and classification of breast masses in digital mammograms via a deep learning YOLO-based CAD system. Comput. Methods Programs Biomed. 2018, 157, 85–94. [Google Scholar] [CrossRef]
- Zhang, L.; Li, Y.; Chen, H.; Wu, W.; Chen, K.; Wang, S. Anchor-free YOLOv3 for mass detection in mammogram. Expert Syst. Appl. 2022, 191, 116273. [Google Scholar] [CrossRef]
- Aly, G.H.; Marey, M.; El-Sayed, S.A.; Tolba, M.F. YOLO-Based Breast Masses Detection and Classification in Full-Field Digital Mammograms. Comput. Methods Programs Biomed. 2021, 200, 105823. [Google Scholar] [CrossRef]
- Baccouche, A.; Garcia-Zapirain, B.; Castillo-Olea, C.; Elmaghraby, A.S. Breast Lesions Detection and Classification via YOLO-Based Fusion Models. Comput. Mater. Contin. 2021, 69, 1407–1427. [Google Scholar] [CrossRef]
- Su, Y.; Liu, Q.; Xie, W.; Hu, P. YOLO-LOGO: A transformer-based YOLO segmentation model for breast mass detection and segmentation in digital mammograms. Comput. Methods Programs Biomed. 2022, 221, 106903. [Google Scholar] [CrossRef]
- Chen, X.; Zhang, K.; Abdoli, N.; Gilley, P.W.; Wang, X.; Liu, H.; Zheng, B.; Qiu, Y. Transformers improve breast cancer diagnosis from unregistered multi-view mammograms. Diagnostics 2022, 12, 1549. [Google Scholar] [CrossRef]
- Betancourt Tarifa, A.S.; Marrocco, C.; Molinara, M.; Tortorella, F.; Bria, A. Transformer-based mass detection in digital mammograms. J. Ambient. Intell. Humaniz. Comput. 2023, 14, 2723–2737. [Google Scholar] [CrossRef]
- Kamran, S.A.; Hossain, K.F.; Tavakkoli, A.; Bebis, G.; Baker, S. Swin-sftnet: Spatial feature expansion and aggregation using swin transformer for whole breast micro-mass segmentation. In Proceedings of the 2023 IEEE 20th International Symposium on Biomedical Imaging (ISBI), Cartagena, Colombia, 17–21 April 2023; IEEE: New York, NY, USA, 2023; pp. 1–5. [Google Scholar]
- Zhao, Y.; Lv, W.; Xu, S.; Wei, J.; Wang, G.; Dang, Q.; Liu, Y.; Chen, J. DETRs Beat YOLOs on Real-time Object Detection. arXiv 2024, arXiv:2304.08069. [Google Scholar] [CrossRef]
- Abdikenov, B.; Zhaksylyk, T.; Imasheva, A.; Orazayev, Y.; Karibekov, T. Innovative Multi-View Strategies for AI-Assisted Breast Cancer Detection in Mammography. J. Imaging 2025, 11, 247. [Google Scholar] [CrossRef]
- Park, S.; Lee, K.H.; Ko, B.; Kim, N. Unsupervised anomaly detection with generative adversarial networks in mammography. Sci. Rep. 2023, 13, 2925. [Google Scholar] [CrossRef]
- McKinney, S.M.; Sieniek, M.; Godbole, V.; Godwin, J.; Antropova, N.; Ashrafian, H.; Back, T.; Chesus, M.; Corrado, G.S.; Darzi, A.; et al. International evaluation of an AI system for breast cancer screening. Nature 2020, 577, 89–94. [Google Scholar] [CrossRef]
- Manalı, D.; Demirel, H.; Eleyan, A. Deep Learning Based Breast Cancer Detection Using Decision Fusion. Computers 2024, 13, 294. [Google Scholar] [CrossRef]
- Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2009, 22, 1345–1359. [Google Scholar] [CrossRef]
- Agarwal, R.; Díaz, O.; Yap, M.H.; Lladó, X.; Martí, R. Deep learning for mass detection in Full Field Digital Mammograms. Comput. Biol. Med. 2020, 121, 103774. [Google Scholar] [CrossRef]
- Carriero, A.; Groenhoff, L.; Vologina, E.; Basile, P.; Albera, M. Deep Learning in Breast Cancer Imaging: State of the Art and Recent Advancements in Early 2024. Diagnostics 2024, 14, 848. [Google Scholar] [CrossRef]
- Nguyen, H.Q.; Lam, K.; Le, L.T.; Pham, H.H.; Tran, D.Q.; Nguyen, D.B.; Le, D.D.; Pham, C.M.; Tong, H.T.; Dinh, D.H.; et al. VinDr-CXR: An open dataset of chest X-rays with radiologist’s annotations. Sci. Data 2022, 9, 429. [Google Scholar] [CrossRef]
- Nguyen, H.T.; Pham, H.H.; Nguyen, N.T.; Nguyen, H.Q.; Huynh, T.Q.; Dao, M.; Vu, V. VinDr-SpineXR: A deep learning framework for spinal lesions detection and classification from radiographs. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Strasbourg, France, 27 September–1 October 2021; Springer: Cham, Switzerland, 2021; pp. 291–301. [Google Scholar]
- Stein, A.; Wu, C.; Carr, C.; Shih, G.; Dulkowski, J.; kalpathy; Chen, L.; Prevedello, L.; Kohli, M.; McDonald, M.; et al. RSNA Pneumonia Detection Challenge. Kaggle. 2018. Available online: https://kaggle.com/competitions/rsna-pneumonia-detection-challenge (accessed on 10 July 2025).
- Liu, J.; Lian, J.; Yu, Y. ChestX-Det10: Chest X-ray Dataset on Detection of Thoracic Abnormalities. arXiv 2020, arXiv:2006.10550. [Google Scholar] [CrossRef]
- Abedeen, I.; Rahman, M.A.; Prottyasha, F.Z.; Ahmed, T.; Chowdhury, T.M.; Shatabda, S. FracAtlas: A Dataset for Fracture Classification, Localization and Segmentation of Musculoskeletal Radiographs. Sci. Data 2023, 10, 521. [Google Scholar] [CrossRef] [PubMed]
- Cohen, J.P.; Morrison, P.; Dao, L. COVID-19 image data collection. arXiv 2020, arXiv:2003.11597. [Google Scholar] [CrossRef] [PubMed]
- Bradski, G. The OpenCV Library. Dr. Dobb’s J. Softw. Tools 2000, 25, 120–123. [Google Scholar]
- Al-Juboori, R.A.L. Contrast enhancement of the mammographic image using retinex with CLAHE methods. Iraqi J. Sci. 2017, 58, 327–336. [Google Scholar]
- Pisano, E.D.; Zong, S.; Hemminger, B.M.; DeLuca, M.; Johnston, R.E.; Muller, K.; Braeuning, M.P.; Pizer, S.M. Contrast limited adaptive histogram equalization image processing to improve the detection of simulated spiculations in dense mammograms. J. Digit. Imaging 1998, 11, 193. [Google Scholar] [CrossRef]
- Boss, R.S.C.; Thangavel, K.; Daniel, D.A.P. Automatic Mammogram image Breast Region Extraction and Removal of Pectoral Muscle. arXiv 2013, arXiv:1307.7474. [Google Scholar] [CrossRef]
- Zhou, K.; Li, W.; Zhao, D. Deep learning-based breast region extraction of mammographic images combining pre-processing methods and semantic segmentation supported by Deeplab v3+. Technol. Health Care 2022, 30, 173–190. [Google Scholar] [CrossRef] [PubMed]
- Oza, P.; Sharma, P.; Patel, S.; Adedoyin, F.; Bruno, A. Image augmentation techniques for mammogram analysis. J. Imaging 2022, 8, 141. [Google Scholar] [CrossRef] [PubMed]
- Tian, Y.; Ye, Q.; Doermann, D. YOLOv12: Attention-Centric Real-Time Object Detectors. arXiv 2025, arXiv:2502.12524. [Google Scholar] [CrossRef]
- Al-Antari, M.A.; Al-Masni, M.A.; Kim, T.S. Deep Learning Computer-Aided Diagnosis for Breast Lesion in Digital Mammogram. Adv. Exp. Med. Biol. 2020, 1213, 59–72. [Google Scholar] [CrossRef]
- Ribeiro, R.F.; Gomes-Fonseca, J.; Torres, H.R.; Oliveira, B.; Vilhena, E.; Morais, P.; Vilaca, J.L. Deep learning methods for lesion detection on mammography images: A comparative analysis. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Glasgow, UK, 11–15 July 2022; pp. 3526–3529. [Google Scholar] [CrossRef]
- Karaca Aydemir, B.K.; Telatar, Z.; Güney, S.; Dengiz, B. Detecting and classifying breast masses via YOLO-based deep learning. Neural Comput. Appl. 2025, 37, 11555–11582. [Google Scholar] [CrossRef]
- Cao, Z.; Yang, J.; Gao, F.; Chen, D. Automated abnormalities detection in mammography using deep learning. Complex Intell. Syst. 2024, 10, 567–578. [Google Scholar] [CrossRef]
Dataset | Images | Patients | Annotations | Modality |
---|---|---|---|---|
INbreast | 410 | 115 | Masses, calcifications (bounding boxes) | FFDM |
CBIS-DDSM | 3318 | 1566 | Masses (891), calcifications (753) (bounding boxes) | Digitized film |
VinDr-Mammo | 20,000 | 5000 | Masses, calcifications (bounding boxes), BI-RADS | FFDM |
EMBED | ∼3,400,000 | 110,000 | 60,000 lesions (masses, calcifications) | FFDM, DBT |
Dataset | Images | Region | Task | Modality |
---|---|---|---|---|
VinDr-CXR | 18,000 | Chest | Disease classification and detection | X-ray |
VinDr-SpineXR | 10,469 | Spine | Spinal lesion classification and localization | X-ray |
RSNA Pneumonia | 26,684 | Chest | Pneumonia detection (bounding boxes) | X-ray |
ChestX-Det10 | 3543 | Chest | Multi-disease detection (bounding boxes) | X-ray |
FracAtlas | 4083 | Musculoskeletal | Fracture localization | X-ray |
COVID-19 Image Data Collection | 800+ | Chest | COVID-19 classification and severity scoring | X-ray, CT |
Dataset | Type | Precision | Recall | mAP@50 (95% CI) | F1 |
---|---|---|---|---|---|
INbreast | Raw | 0.832 | 0.553 | 0.754 (0.732–0.776) | 0.664 |
INbreast | CLAHE/cropped | 0.987 | 0.857 | 0.963 (0.941–0.985) | 0.917 |
CBIS-DDSM | Raw | 0.764 | 0.342 | 0.471 (0.449–0.493) | 0.471 |
CBIS-DDSM | CLAHE/cropped | 0.615 | 0.552 | 0.566 (0.544–0.588) | 0.582 |
VinDr-Mammo | Raw | 0.574 | 0.398 | 0.438 (0.416–0.460) | 0.472 |
VinDr-Mammo | CLAHE/cropped | 0.735 | 0.513 | 0.590 (0.568–0.612) | 0.604 |
Class | Model | Precision | Recall | mAP@50 | F1 |
---|---|---|---|---|---|
Mass | YOLOv12-L | 0.987 | 0.857 | 0.963 | 0.917 |
Calcification | YOLOv12-L | 0.051 | 0.034 | 0.006 | 0.041 |
Mass and calcification | YOLOv12-L | 0.772 | 0.474 | 0.497 | 0.590 |
Mass and calcification | YOLOv12-S | 0.68 | 0.53 | 0.554 | 0.596 |
Class | Model | Precision | Recall | mAP@50 | F1 |
---|---|---|---|---|---|
Mass | YOLOv12-L | 0.615 | 0.552 | 0.566 | 0.582 |
Calcification | YOLOv12-L | 0.001 | 0.143 | 0.001 | 0.002 |
Mass and calcification | YOLOv12-L | 0.753 | 0.263 | 0.238 | 0.391 |
Class | Model | Precision | Recall | mAP@50 | F1 |
---|---|---|---|---|---|
Mass | YOLOv12-L | 0.735 | 0.513 | 0.59 | 0.604 |
Calcification | YOLOv12-L | 0.003 | 0.488 | 0.014 | 0.006 |
Mass and calcification | YOLOv12-L | 0.672 | 0.28 | 0.22 | 0.395 |
Class | Model | Precision | Recall | mAP@50 | F1 |
---|---|---|---|---|---|
1 class | YOLOv12-L | 0.156 | 0.257 | 0.126 | 0.194 |
1 class + VinDr-Mammo (1 class) | YOLOv12-L | 0.512 | 0.294 | 0.306 | 0.373 |
Class | Model | Precision | Recall | mAP@50 | F1 |
---|---|---|---|---|---|
Mass | RTMDet-X | 0.736 | 0.659 | 0.688 | 0.695 |
Mass | RT-DETR-X | 0.721 | 0.613 | 0.626 | 0.662 |
Mass | CASCADE R-CNN X101 | 0.687 | 0.603 | 0.614 | 0.642 |
Mass | YOLOv12-X | 0.616 | 0.518 | 0.552 | 0.563 |
Mass | YOLOv12-L | 0.719 | 0.59 | 0.634 | 0.648 |
Calcification | RT-DETR-X | 0.193 | 0.319 | 0.116 | 0.241 |
Calcification | YOLOv12-L | 0.303 | 0.101 | 0.096 | 0.152 |
Mass and calcification | RT-DETR-X | 0.555 | 0.487 | 0.467 | 0.519 |
Mass and calcification | YOLOv12-L | 0.376 | 0.315 | 0.288 | 0.343 |
Model | Params (M) | FLOPs (G) | Inference Time (ms) | Memory (GB) |
---|---|---|---|---|
YOLOv12-L | 46.7 | 68.5 | 29 | 4.7 |
YOLOv12-X | 68.2 | 95.3 | 42 | 6.2 |
Cascade R-CNN X101 | 85.4 | 110.2 | 55 | 7.8 |
RTMDet-X | 53.9 | 82.1 | 36 | 5.3 |
RT-DETR-X | 62.3 | 89.7 | 48 | 6.5 |
Dataset | Precision | Recall | mAP@50 | F1 |
---|---|---|---|---|
INbreast | 0.987 | 0.857 | 0.963 | 0.917 |
VinDr-Mammo + CBIS-DDSM | 0.626 | 0.54 | 0.555 | 0.579 |
VinDr-Mammo + CBIS-DDSM on INbreast | 0.969 | 0.999 | 0.995 | 0.984 |
CBIS-DDSM | 0.615 | 0.552 | 0.566 | 0.582 |
INbreast + VinDr-Mammo | 0.626 | 0.54 | 0.555 | 0.579 |
INbreast + VinDr-Mammo on CBIS-DDSM | 0.523 | 0.5 | 0.447 | 0.511 |
VinDr-Mammo | 0.735 | 0.513 | 0.59 | 0.604 |
CBIS-DDSM + INbreast | 0.561 | 0.542 | 0.519 | 0.551 |
CBIS-DDSM + INbreast on VinDr-Mammo | 0.602 | 0.504 | 0.5 | 0.548 |
Model | Precision | Recall | mAP@50 | F1 |
---|---|---|---|---|
YOLOv12-L | 0.719 | 0.590 | 0.634 | 0.648 |
YOLOv12-L with TL | 0.700 | 0.644 | 0.678 | 0.671 |
YOLOv12-X | 0.616 | 0.518 | 0.552 | 0.563 |
YOLOv12-X with TL | 0.719 | 0.607 | 0.647 | 0.658 |
Cascade R-CNN X101 | 0.687 | 0.603 | 0.614 | 0.642 |
Cascade R-CNN X101 with TL | 0.719 | 0.627 | 0.646 | 0.670 |
RTMDet-X | 0.736 | 0.659 | 0.688 | 0.695 |
RTMDet-X with TL | 0.754 | 0.667 | 0.697 | 0.708 |
RT-DETR-X | 0.721 | 0.613 | 0.626 | 0.662 |
RT-DETR-X with TL | 0.661 | 0.675 | 0.656 | 0.668 |
Reference | Year | Metric (mAP/AP, F1) | Dataset | Method |
---|---|---|---|---|
Al-Antari et al. [41] | 2020 | F1: 0.988 (detection) | INbreast | YOLOv3 |
Su et al. [16] | 2022 | mAP: 0.650, F1: 0.745 | CBIS-DDSM | YOLO-LOGO (YOLOv5L6) |
Ribeiro et al. [42] | 2022 | mAP: 0.694, F1: 0.712 | CBIS-DDSM | YOLOv5 |
Karaca Aydemir et al. [43] | 2025 | mAP: 0.843; F1: 0.812 | INbreast (TL from CBIS-DDSM/VinDr-Mammo) | YOLOv5-CAD (YOLOv5) |
Cao et al. [44] | 2024 | mAP@50: 0.873 | INbreast | YOLOv8 with TL
Current study | 2025 | mAP@50: 0.995, F1: 0.984 (mass) | INbreast | YOLOv12-L with TL
Current study | 2025 | mAP@50: 0.697, F1: 0.708 (combined mass) | Combined dataset | RTMDet-X with TL
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Abdikenov, B.; Rakishev, D.; Orazayev, Y.; Zhaksylyk, T. Enhancing Breast Lesion Detection in Mammograms via Transfer Learning. J. Imaging 2025, 11, 314. https://doi.org/10.3390/jimaging11090314