Fast Segmentation of Metastatic Foci in H&E Whole-Slide Images for Breast Cancer Diagnosis
Abstract
1. Introduction
- We present an efficient and robust deep learning model for the segmentation of breast cancer in H&E-stained WSIs. The experimental results show that the proposed method significantly outperforms the baseline approaches on this task;
- Our framework is demonstrated to be capable of detecting tiny metastatic foci, such as micro-metastases and isolated tumor cells (ITCs), which are extremely difficult to find by visual inspection of H&E-stained WSIs. In comparison, the baseline approaches tend to fail to detect these tiny foci;
- By leveraging the efficiency of a tile-based data structure and a modified fully convolutional network, the proposed method analyzes gigapixel WSIs notably faster than the baseline approaches, completing whole-slide analysis in 2.4 min on four NVIDIA GeForce GTX 1080 Ti GPUs and 9.6 min on a single GTX 1080 Ti.
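The tile-based processing mentioned in the last point can be sketched as below. This is a minimal illustration, not the authors' implementation: the `slide` object with `dimensions` and `read_tile` is a hypothetical stand-in for a WSI reader (e.g. OpenSlide), and `model` stands in for the trained segmentation network.

```python
import numpy as np

def segment_wsi(slide, model, tile=512):
    """Tile-based whole-slide segmentation sketch.

    `slide` is assumed to expose `.dimensions` -> (width, height) and
    `.read_tile(x, y, size)` -> a (size, size) image array; `model`
    maps a tile to a binary mask of the same shape. Border remainders
    smaller than a full tile are skipped for brevity.
    """
    width, height = slide.dimensions
    mask = np.zeros((height, width), dtype=np.uint8)
    # Process the slide tile by tile so only one tile is in memory
    # at a time, rather than the full gigapixel image.
    for y in range(0, height - tile + 1, tile):
        for x in range(0, width - tile + 1, tile):
            mask[y:y + tile, x:x + tile] = model(slide.read_tile(x, y, tile))
    return mask
```

In practice the per-tile predictions would also be stitched back at the WSI's native resolution and the tile loop distributed across GPUs, which is where the reported 4x speedup from four cards comes from.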
2. Related Works
3. Materials and Methods
3.1. The Dataset
3.2. Transfer Learning, Boosting Learning, Boosted Data Augmentation, and Focusing Sampling
3.2.1. Transfer Learning
3.2.2. Boosting Learning
3.2.3. Boosted Data Augmentation
3.2.4. Focusing Sampling
3.3. Whole-Slide Image Processing
3.4. The Proposed Modified Fully Convolutional Network
3.5. Implementation Details
4. Results
4.1. Evaluation Metrics
4.2. Quantitative Evaluation with Statistical Analysis
4.3. Run Time Analysis
5. Discussion and Significance of the Work
5.1. Discussion
5.2. Significance of the Work
6. Conclusions and Future Directions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Anderson, B.O.; Cazap, E.; El Saghir, N.S.; Yip, C.H.; Khaled, H.M.; Otero, I.V.; Adebamowo, C.A.; Badwe, R.A.; Harford, J.B. Optimisation of breast cancer management in low-resource and middle-resource countries: Executive summary of the Breast Health Global Initiative consensus, 2010. Lancet Oncol. 2011, 12, 387–398.
- Siegel, R.L.; Miller, K.D.; Jemal, A. Cancer statistics, 2020. CA Cancer J. Clin. 2020, 70, 7–30.
- Bandi, P.; Geessink, O.; Manson, Q.; van Dijk, M.; Balkenhol, M.; Hermsen, M.; Bejnordi, B.E.; Lee, B.; Paeng, K.; Zhong, A.; et al. From detection of individual metastases to classification of lymph node status at the patient level: The CAMELYON17 challenge. IEEE Trans. Med. Imaging 2018, 38, 550–560.
- Amin, M.B.; Greene, F.L.; Edge, S.B.; Compton, C.C.; Gershenwald, J.E.; Brookland, R.K.; Meyer, L.; Gress, D.M.; Byrd, D.R.; Winchester, D.P. The eighth edition AJCC cancer staging manual: Continuing to build a bridge from a population-based to a more “personalized” approach to cancer staging. CA Cancer J. Clin. 2017, 67, 93–99.
- Edge, S.B.; Compton, C.C. The American Joint Committee on Cancer: The 7th edition of the AJCC cancer staging manual and the future of TNM. Ann. Surg. Oncol. 2010, 17, 1471–1474.
- Apple, S.K. Sentinel lymph node in breast cancer: Review article from a pathologist’s point of view. J. Pathol. Transl. Med. 2016, 50, 83.
- Lee, H.S.; Kim, M.A.; Yang, H.K.; Lee, B.L.; Kim, W.H. Prognostic implication of isolated tumor cells and micrometastases in regional lymph nodes of gastric cancer. World J. Gastroenterol. 2005, 11, 5920.
- Qaiser, T.; Tsang, Y.-W.; Taniyama, D.; Sakamoto, N.; Nakane, K.; Epstein, D.; Rajpoot, N. Fast and accurate tumor segmentation of histology images using persistent homology and deep convolutional features. Med. Image Anal. 2019, 55, 1–14.
- Dihge, L.; Vallon-Christersson, J.; Hegardt, C.; Saal, L.H.; Hakkinen, J.; Larsson, C.; Ehinger, A.; Loman, N.; Malmberg, M.; Bendahl, P.-O.; et al. Prediction of lymph node metastasis in breast cancer by gene expression and clinicopathological models: Development and validation within a population-based cohort. Clin. Cancer Res. 2019, 25, 6368–6381.
- Shinden, Y.; Ueo, H.; Tobo, T.; Gamachi, A.; Utou, M.; Komatsu, H.; Nambara, S.; Saito, T.; Ueda, M.; Hirata, H.; et al. Rapid diagnosis of lymph node metastasis in breast cancer using a new fluorescent method with γ-glutamyl hydroxymethyl rhodamine green. Sci. Rep. 2016, 6, 1–7.
- Bengio, Y.; Courville, A.; Vincent, P. Representation learning: A review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1798–1828.
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
- Yu, K.H.; Zhang, C.; Berry, G.J.; Altman, R.B.; Ré, C.; Rubin, D.L.; Snyder, M. Predicting non-small cell lung cancer prognosis by fully automated microscopic pathology image features. Nat. Commun. 2016, 7, 12474.
- Coudray, N.; Ocampo, P.S.; Sakellaropoulos, T.; Narula, N.; Snuderl, M.; Fenyö, D.; Moreira, A.L.; Razavian, N.; Tsirigos, A. Classification and mutation prediction from non–small cell lung cancer histopathology images using deep learning. Nat. Med. 2018, 24, 1559–1567.
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
- Falk, T.; Mai, D.; Bensch, R.; Çiçek, Ö.; Abdulkadir, A.; Marrakchi, Y.; Böhm, A.; Deubner, J.; Jäckel, Z.; Seiwald, K.; et al. U-Net: Deep learning for cell counting, detection, and morphometry. Nat. Methods 2019, 16, 67–70.
- Shelhamer, E.; Long, J.; Darrell, T. Fully convolutional networks for semantic segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 640–651.
- Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818.
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1251–1258.
- Guo, Z.; Liu, H.; Ni, H.; Wang, X.; Su, M.; Guo, W.; Wang, K.; Jiang, T.; Qian, Y. A fast and refined cancer regions segmentation framework in whole-slide breast pathological images. Sci. Rep. 2019, 9, 1–10.
- Priego-Torres, B.M.; Sanchez-Morillo, D.; Fernandez-Granero, M.A.; Garcia-Rojo, M. Automatic segmentation of whole-slide H&E stained breast histopathology images using a deep convolutional neural network architecture. Expert Syst. Appl. 2020, 151, 113387.
- Bejnordi, B.E.; Balkenhol, M.; Litjens, G.; Holland, R.; Bult, P.; Karssemeijer, N.; van der Laak, J.A.W.M. Automated detection of DCIS in whole-slide H&E stained breast histopathology images. IEEE Trans. Med. Imaging 2016, 35, 2141–2150.
- Huang, W.C.; Chung, P.-C.; Tsai, H.-W.; Chow, N.-H.; Juang, Y.-Z.; Tsai, H.-H.; Lin, S.-H.; Wang, C.-H. Automatic HCC detection using convolutional network with multi-magnification input images. In Proceedings of the 2019 IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), Hsinchu, Taiwan, 18–20 March 2019; pp. 194–198.
- Celik, Y.; Talo, M.; Yildirim, O.; Karabatak, M.; Acharya, U.R. Automated invasive ductal carcinoma detection based using deep transfer learning with whole-slide images. Pattern Recognit. Lett. 2020, 133, 232–239.
- Gecer, B.; Aksoy, S.; Mercan, E.; Shapiro, L.G.; Weaver, D.L.; Elmore, J.G. Detection and classification of cancer in whole slide breast histopathology images using deep convolutional networks. Pattern Recognit. 2018, 84, 345–356.
- Lin, H.; Chen, H.; Graham, S.; Dou, Q.; Rajpoot, N.; Heng, P.A. Fast ScanNet: Fast and dense analysis of multi-gigapixel whole-slide images for cancer metastasis detection. IEEE Trans. Med. Imaging 2019, 38, 1948–1958.
- Bejnordi, B.E.; Veta, M.; van Diest, P.J.; van Ginneken, B.; Karssemeijer, N.; Litjens, G.; van der Laak, J.A.W.M. Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. JAMA 2017, 318, 2199–2210.
- Litjens, G.; Bandi, P.; Bejnordi, B.E.; Geessink, O.; Balkenhol, M.; Bult, P.; Halilovic, A.; Hermsen, M.; van de Loo, R.; Vogels, R.; et al. 1399 H&E-stained sentinel lymph node sections of breast cancer patients: The CAMELYON dataset. GigaScience 2018, 7, giy065.
- Wang, D.; Khosla, A.; Gargeya, R.; Irshad, H.; Beck, A.H. Deep learning for identifying metastatic breast cancer. arXiv 2016, arXiv:1606.05718.
- Alzubaidi, L.; Al-Amidie, M.; Al-Asadi, A.; Humaidi, A.J.; Al-Shamma, O.; Fadhel, M.A.; Zhang, J.; Santamaría, J.; Duan, Y. Novel transfer learning approach for medical imaging with limited labeled data. Cancers 2021, 13, 1590.
- Becker, C.; Christoudias, C.M.; Fua, P. Domain adaptation for microscopy imaging. IEEE Trans. Med. Imaging 2014, 34, 1125–1139.
- Bermúdez-Chacón, R.; Becker, C.; Salzmann, M.; Fua, P. Scalable unsupervised domain adaptation for electron microscopy. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Athens, Greece, 17–21 October 2016; Springer: Cham, Switzerland, 2016; pp. 326–334.
- Spanhol, F.A.; Oliveira, L.S.; Cavalin, P.R.; Petitjean, C.; Heutte, L. Deep features for breast cancer histopathological image classification. In Proceedings of the 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Banff, AB, Canada, 5–8 October 2017; pp. 1868–1873.
- Li, Z.; Zhang, J.; Tan, T.; Teng, X.; Sun, X.; Zhao, H.; Liu, L.; Xiao, Y.; Lee, B.; Li, Y.; et al. Deep learning methods for lung cancer segmentation in whole-slide histopathology images: The ACDC@LungHP Challenge 2019. IEEE J. Biomed. Health Inform. 2020, 25, 429–440.
- Signaevsky, M.; Prastawa, M.; Farrell, K.; Tabish, N.; Baldwin, E.; Han, N.; Iida, M.A.; Koll, J.; Bryce, C.; Purohit, D.; et al. Artificial intelligence in neuropathology: Deep learning-based assessment of tauopathy. Lab. Investig. 2019, 99, 1019.
- Zhu, R.; Sui, D.; Qin, H.; Hao, A. An extended type cell detection and counting method based on FCN. In Proceedings of the 2017 IEEE 17th International Conference on Bioinformatics and Bioengineering (BIBE), Washington, DC, USA, 23–25 October 2017; pp. 51–56.
- Naylor, P.; Laé, M.; Reyal, F.; Walter, T. Segmentation of nuclei in histopathology images by deep regression of the distance map. IEEE Trans. Med. Imaging 2018, 38, 448–459.
- Gupta, D.; Jhunjhunu wala, R.; Juston, M.; Jaled, M.C. Image Segmentation Keras: Implementation of Segnet, FCN, UNet, PSPNet and Other Models in Keras. Available online: https://github.com/divamgupta/image-segmentation-keras (accessed on 15 October 2020).
| Metric | Method | Mean | Std. Deviation | Std. Error | 95% C.I. Lower Bound | 95% C.I. Upper Bound |
|---|---|---|---|---|---|---|
| Precision | Proposed method | 0.892 | 0.163 | 0.047 | 0.787 | 0.995 |
| Precision | U-Net [16] | 0.486 | 0.116 | 0.033 | 0.411 | 0.559 |
| Precision | SegNet [15] | 0.548 | 0.091 | 0.026 | 0.489 | 0.605 |
| Precision | FCN [17] | 0.552 | 0.062 | 0.018 | 0.512 | 0.590 |
| Precision | Deeplabv3+ [18] with MobileNet [19] | 0.643 | 0.262 | 0.075 | 0.476 | 0.809 |
| Precision | Deeplabv3+ [18] with ResNet [20] | 0.613 | 0.354 | 0.102 | 0.388 | 0.838 |
| Precision | Deeplabv3+ [18] with Xception [21] | 0.753 | 0.286 | 0.082 | 0.571 | 0.935 |
| Recall | Proposed method | 0.837 | 0.169 | 0.049 | 0.729 | 0.945 |
| Recall | U-Net [16] | 0.643 | 0.022 | 0.006 | 0.628 | 0.656 |
| Recall | SegNet [15] | 0.588 | 0.028 | 0.008 | 0.570 | 0.606 |
| Recall | FCN [17] | 0.500 | 0.082 | 0.023 | 0.448 | 0.551 |
| Recall | Deeplabv3+ [18] with MobileNet [19] | 0.682 | 0.277 | 0.080 | 0.506 | 0.858 |
| Recall | Deeplabv3+ [18] with ResNet [20] | 0.440 | 0.261 | 0.075 | 0.274 | 0.606 |
| Recall | Deeplabv3+ [18] with Xception [21] | 0.584 | 0.290 | 0.083 | 0.399 | 0.768 |
| F1-score | Proposed method | 0.844 | 0.127 | 0.036 | 0.763 | 0.925 |
| F1-score | U-Net [16] | 0.564 | 0.095 | 0.027 | 0.503 | 0.624 |
| F1-score | SegNet [15] | 0.562 | 0.124 | 0.036 | 0.383 | 0.581 |
| F1-score | FCN [17] | 0.510 | 0.078 | 0.022 | 0.400 | 0.531 |
| F1-score | Deeplabv3+ [18] with MobileNet [19] | 0.640 | 0.241 | 0.069 | 0.487 | 0.794 |
| F1-score | Deeplabv3+ [18] with ResNet [20] | 0.480 | 0.262 | 0.075 | 0.313 | 0.646 |
| F1-score | Deeplabv3+ [18] with Xception [21] | 0.621 | 0.259 | 0.047 | 0.456 | 0.786 |
| mIoU | Proposed method | 0.749 | 0.188 | 0.054 | 0.629 | 0.868 |
| mIoU | U-Net [16] | 0.473 | 0.114 | 0.331 | 0.400 | 0.546 |
| mIoU | SegNet [15] | 0.380 | 0.129 | 0.037 | 0.298 | 0.462 |
| mIoU | FCN [17] | 0.363 | 0.086 | 0.025 | 0.308 | 0.418 |
| mIoU | Deeplabv3+ [18] with MobileNet [19] | 0.504 | 0.229 | 0.066 | 0.358 | 0.650 |
| mIoU | Deeplabv3+ [18] with ResNet [20] | 0.344 | 0.213 | 0.031 | 0.208 | 0.480 |
| mIoU | Deeplabv3+ [18] with Xception [21] | 0.487 | 0.251 | 0.072 | 0.327 | 0.647 |
| mIoU | v3_DCNN-1280 * [22] | 0.685 | - | - | - | - |
| mIoU | Xception-65 * [23] | 0.645 | - | - | - | - |
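The per-slide scores in the table above can be computed from binary masks as sketched below. This is a generic illustration rather than the authors' evaluation code; in particular, it reports the foreground IoU only, whereas the paper's mIoU may average over classes.

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Precision, recall, F1-score, and IoU for boolean masks.

    `pred` and `truth` are numpy boolean arrays of the same shape,
    where True marks pixels labeled as metastatic tissue.
    """
    tp = np.logical_and(pred, truth).sum()   # true positives
    fp = np.logical_and(pred, ~truth).sum()  # false positives
    fn = np.logical_and(~pred, truth).sum()  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return precision, recall, f1, iou
```

With per-slide scores in hand, the means, standard errors, and 95% confidence intervals in the table follow from standard descriptive statistics over the test slides.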
LSD Multiple Comparisons ((I) Method = Proposed method in all rows)

| Dependent Variable | (J) Method | Mean Difference (I–J) | Std. Error | Sig. | 95% C.I. Lower Bound | 95% C.I. Upper Bound |
|---|---|---|---|---|---|---|
| Precision | U-Net [16] | 0.405 * | 0.088 | <0.001 | 0.311 | 0.499 |
| Precision | SegNet [15] | 0.344 * | 0.088 | <0.001 | 0.250 | 0.438 |
| Precision | FCN [17] | 0.339 * | 0.088 | <0.001 | 0.245 | 0.434 |
| Precision | Deeplabv3+ [18] with MobileNet [19] | 0.248 * | 0.088 | <0.001 | 0.163 | 0.516 |
| Precision | Deeplabv3+ [18] with ResNet [20] | 0.248 * | 0.088 | <0.001 | 0.072 | 0.424 |
| Precision | Deeplabv3+ [18] with Xception [21] | 0.278 * | 0.088 | <0.001 | 0.102 | 0.454 |
| Recall | U-Net [16] | 0.195 * | 0.079 | <0.001 | 0.116 | 0.273 |
| Recall | SegNet [15] | 0.249 * | 0.079 | <0.001 | 0.170 | 0.328 |
| Recall | FCN [17] | 0.337 * | 0.079 | <0.001 | 0.258 | 0.416 |
| Recall | Deeplabv3+ [18] with MobileNet [19] | 0.155 * | 0.079 | <0.001 | 0.350 | 0.313 |
| Recall | Deeplabv3+ [18] with ResNet [20] | 0.397 * | 0.079 | <0.001 | 0.239 | 0.556 |
| Recall | Deeplabv3+ [18] with Xception [21] | 0.253 * | 0.079 | <0.001 | 0.094 | 0.411 |
| F1-score | U-Net [16] | 0.280 * | 0.075 | <0.001 | 0.190 | 0.369 |
| F1-score | SegNet [15] | 0.382 * | 0.075 | <0.001 | 0.292 | 0.471 |
| F1-score | FCN [17] | 0.339 * | 0.075 | <0.001 | 0.304 | 0.482 |
| F1-score | Deeplabv3+ [18] with MobileNet [19] | 0.203 * | 0.075 | <0.001 | 0.052 | 0.354 |
| F1-score | Deeplabv3+ [18] with ResNet [20] | 0.364 * | 0.075 | <0.001 | 0.213 | 0.515 |
| F1-score | Deeplabv3+ [18] with Xception [21] | 0.222 * | 0.075 | <0.001 | 0.234 | 0.235 |
| mIoU | U-Net [16] | 0.275 * | 0.055 | <0.001 | 0.165 | 0.386 |
| mIoU | SegNet [15] | 0.369 * | 0.055 | <0.001 | 0.258 | 0.480 |
| mIoU | FCN [17] | 0.385 * | 0.055 | <0.001 | 0.275 | 0.496 |
| mIoU | Deeplabv3+ [18] with MobileNet [19] | 0.245 * | 0.074 | <0.001 | 0.096 | 0.393 |
| mIoU | Deeplabv3+ [18] with ResNet [20] | 0.405 * | 0.074 | <0.001 | 0.256 | 0.553 |
| mIoU | Deeplabv3+ [18] with Xception [21] | 0.261 * | 0.074 | <0.001 | 0.113 | 0.410 |
| Method | CPU | RAM | GPU | Inference Time per WSI (min) |
|---|---|---|---|---|
| Proposed method (with 4 GPUs) | Intel Xeon Gold 6134 CPU @ 3.20 GHz × 16 | 128 GB | 4 × GeForce GTX 1080 Ti | 2.4 |
| Proposed method (with 1 GPU) | Intel Xeon CPU E5-2650 v2 @ 2.60 GHz × 16 | 32 GB | 1 × GeForce GTX 1080 Ti | 9.6 |
| U-Net [16] | Intel Xeon CPU E5-2650 v2 @ 2.60 GHz × 16 | 32 GB | 1 × GeForce GTX 1080 Ti | 44 |
| SegNet [15] | Intel Xeon CPU E5-2650 v2 @ 2.60 GHz × 16 | 32 GB | 1 × GeForce GTX 1080 Ti | 43 |
| FCN [17] | Intel Xeon CPU E5-2650 v2 @ 2.60 GHz × 16 | 32 GB | 1 × GeForce GTX 1080 Ti | 48 |
| Deeplabv3+ [18] with MobileNet [19] | Intel Xeon CPU E5-2650 v2 @ 2.60 GHz × 16 | 32 GB | 1 × GeForce GTX 1080 Ti | 17.2 |
| Deeplabv3+ [18] with ResNet [20] | Intel Xeon CPU E5-2650 v2 @ 2.60 GHz × 16 | 32 GB | 1 × GeForce GTX 1080 Ti | 18.2 |
| Deeplabv3+ [18] with Xception [21] | Intel Xeon CPU E5-2650 v2 @ 2.60 GHz × 16 | 32 GB | 1 × GeForce GTX 1080 Ti | 17.8 |
| v3_DCNN-1280 [22] | - | - | 1 × GeForce GTX 1080 Ti | 13.8 |
| Xception-65 [23] | Intel Xeon CPU E5-2698 v4 @ 2.2 GHz | 256 GB | 4 × Tesla V100 Tensor Core | 398.2 * |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Khalil, M.-A.; Lee, Y.-C.; Lien, H.-C.; Jeng, Y.-M.; Wang, C.-W. Fast Segmentation of Metastatic Foci in H&E Whole-Slide Images for Breast Cancer Diagnosis. Diagnostics 2022, 12, 990. https://doi.org/10.3390/diagnostics12040990