Cross-Domain Approach for Automated Thyroid Classification Using Diff-Quick Images
Abstract
1. Introduction
2. Deep Learning Models for Histopathological Image Classification
3. Cross-Domain Approach for Automated Thyroid Classification
3.1. Constructing the Database of Diff-Quick Stained Images
3.2. Pre-Processing Diff-Quick Stained Images
3.3. Cross-Domain Approach for Diff-Quick Images Classification
1. The initial convolutional layer’s feature maps were increased from 8 to 16 to enhance the early-stage capture of subtle, small-scale cellular details typical of thyroid cytology images.
2. Instead of two initial single-layer convolutions, the second convolutional operation was replaced by a more expressive three-layer convolutional module (three stacked convolutional layers), enabling the network to capture richer spatial and structural patterns earlier in the network. In addition, the deeper convolutional modules operating at higher channel counts (128 and 256 filters) and the associated bottleneck layers were removed to reduce model complexity and redundancy. Specifically, the revised network retains only two multi-layer convolutional modules (from 16 to 32 channels and from 32 to 64 channels, respectively).
3. After these modules, a single standard convolutional layer with 128 filters is introduced, rather than further complex convolutional blocks or bottleneck convolutions. The five max-pooling layers were preserved, maintaining the downsampling necessary to aggregate spatial features efficiently and yielding a compact final feature map with 128 channels.
4. Lastly, the flatten and fully connected layers were resized to match the new feature-map dimensions, ensuring the model’s final output remains consistent.
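The four modifications above can be sketched as a PyTorch module. This is a minimal illustration, not the authors' implementation: the kernel sizes (3×3), the input resolution (256×256), and the three-class output (matching the B4/B5/B6 categories reported later) are assumptions.

```python
import torch
import torch.nn as nn


class ModifiedDCNSketch(nn.Module):
    """Hypothetical sketch of the modified network described in Section 3.3.

    Kernel sizes, input resolution, and class count are assumptions,
    not values taken from the paper.
    """

    def __init__(self, num_classes: int = 3):
        super().__init__()

        def conv_module(cin: int, cout: int, layers: int = 3) -> nn.Sequential:
            # Multi-layer convolutional module: stacked 3x3 convs + ReLU.
            mods = []
            for i in range(layers):
                mods += [nn.Conv2d(cin if i == 0 else cout, cout, 3, padding=1),
                         nn.ReLU(inplace=True)]
            return nn.Sequential(*mods)

        self.features = nn.Sequential(
            # (1) Initial conv widened from 8 to 16 feature maps.
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                        # pool 1
            # (2) Two retained multi-layer modules: 16->32 and 32->64 channels.
            conv_module(16, 32), nn.MaxPool2d(2),   # pool 2
            conv_module(32, 64), nn.MaxPool2d(2),   # pool 3
            nn.MaxPool2d(2),                        # pool 4
            # (3) Single standard 128-filter conv instead of deeper blocks.
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                        # pool 5 (five pools total)
        )
        self.classifier = nn.Sequential(
            # (4) Flatten + FC resized to the new final feature map
            # (256 / 2^5 = 8, so 128 x 8 x 8 features for a 256x256 input).
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


model = ModifiedDCNSketch()
logits = model(torch.zeros(1, 3, 256, 256))
print(logits.shape)  # torch.Size([1, 3])
```

Note that the placement of the fourth pooling layer between the second module and the 128-filter conv is one plausible arrangement consistent with the text; the paper only fixes the total count at five.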
4. Results
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Cibas, E.S.; Ali, S.Z. The 2017 Bethesda system for reporting thyroid cytopathology. Thyroid 2017, 27, 1341–1346. [Google Scholar] [CrossRef]
- Pusztaszeri, M.; Brimo, F.; Wang, C.; Sekhon, H.; Al-Nourhji, O.; Fischer, G.; Zeman-Pocrnich, C. The Bethesda System for Reporting Thyroid Cytopathology: Summary Guidelines of the Canadian Society of Cytopathology. 2019. Available online: https://cytopathology.ca/wp-content/uploads/2019/07/Bethesda-system-memo.pdf (accessed on 15 April 2025).
- Kharya, S.; Soni, S. Weighted naive bayes classifier: A predictive model for breast cancer detection. Int. J. Comput. Appl. 2016, 133, 32–37. [Google Scholar] [CrossRef]
- Pietikäinen, M. Image analysis with local binary patterns. In Proceedings of the Image Analysis: 14th Scandinavian Conference, SCIA 2005, Joensuu, Finland, 19–22 June 2005; Proceedings 14. Springer: Berlin/Heidelberg, Germany, 2005; pp. 115–118. [Google Scholar]
- De Siqueira, F.R.; Schwartz, W.R.; Pedrini, H. Multi-scale gray level co-occurrence matrices for texture description. Neurocomputing 2013, 120, 336–345. [Google Scholar] [CrossRef]
- Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571. [Google Scholar]
- Spanhol, F.A.; Oliveira, L.S.; Petitjean, C.; Heutte, L. A Dataset for Breast Cancer Histopathological Image Classification. IEEE Trans. Biomed. Eng. 2016, 63, 1455–1462. [Google Scholar] [CrossRef]
- Aresta, G.; Araújo, T.; Kwok, S.; Chennamsetty, S.S.; Safwan, M.; Alex, V.; Marami, B.; Prastawa, M.; Chan, M.; Donovan, M.; et al. Bach: Grand challenge on breast cancer histology images. Med. Image Anal. 2019, 56, 122–139. [Google Scholar] [CrossRef]
- Caicedo, J.C.; Lazebnik, S. Active Object Localization with Deep Reinforcement Learning. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015. [Google Scholar]
- Long, N.T.; Huong, T.T.; Bao, N.N.; Binh, H.T.T.; Le Nguyen, P.; Nguyen, K. Q-learning-based distributed multi-charging algorithm for large-scale WRSNs. Nonlinear Theory Its Appl. IEICE 2023, 14, 18–34. [Google Scholar] [CrossRef]
- Kazerouni, A.; Aghdam, E.K.; Heidari, M.; Azad, R.; Fayyaz, M.; Hacihaliloglu, I.; Merhof, D. Diffusion Models for Medical Image Analysis: A Comprehensive Survey. arXiv 2022, arXiv:2211.07804. [Google Scholar]
- Shin, H.C.; Roth, H.R.; Gao, M.; Lu, L.; Xu, Z.; Nogues, I.; Yao, J.; Mollura, D.; Summers, R.M. Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Trans. Med Imaging 2016, 35, 1285–1298. [Google Scholar] [CrossRef]
- Kim, H.E.; Cosa-Linan, A.; Santhanam, N.; Jannesari, M.; Maros, M.E.; Ganslandt, T. Transfer learning for medical image classification: A literature review. BMC Med. Imaging 2022, 22, 69. [Google Scholar] [CrossRef]
- Tao, S.; Guo, Y.; Zhu, C.; Chen, H.; Zhang, Y.; Yang, J.; Liu, J. Highly efficient follicular segmentation in thyroid cytopathological whole slide image. In Precision Health and Medicine; Springer: Berlin/Heidelberg, Germany, 2019; pp. 149–157. [Google Scholar]
- Slabaugh, G.; Beltran, L.; Rizvi, H.; Deloukas, P.; Marouli, E. Applications of machine and deep learning to thyroid cytology and histopathology: A review. Front. Oncol. 2023, 13, 958310. [Google Scholar] [CrossRef]
- Dov, D.; Kovalsky, S.Z.; Feng, Q.; Assaad, S.; Cohen, J.; Bell, J.; Henao, R.; Carin, L.; Range, D.E. Use of machine learning–based software for the screening of thyroid cytopathology whole slide images. Arch. Pathol. Lab. Med. 2022, 146, 872–878. [Google Scholar] [CrossRef]
- Ozturk, T.; Talo, M.; Yildirim, E.A.; Baloglu, U.B.; Yildirim, O.; Acharya, U.R. Automated detection of COVID-19 cases using deep neural networks with X-ray images. Comput. Biol. Med. 2020, 121, 103792. [Google Scholar] [CrossRef]
- Yang, H.; Chen, C.; Jiang, M.; Liu, Q.; Cao, J.; Heng, P.A.; Dou, Q. Dltta: Dynamic learning rate for test-time adaptation on cross-domain medical images. IEEE Trans. Med. Imaging 2022, 41, 3575–3586. [Google Scholar] [CrossRef]
- Zhang, B.; Manoochehri, H.; Ho, M.M.; Fooladgar, F.; Chong, Y.; Knudsen, B.S.; Sirohi, D.; Tasdizen, T. CLASS-M: Adaptive stain separation-based contrastive learning with pseudo-labeling for histopathological image classification. arXiv 2023, arXiv:2312.06978. [Google Scholar]
- Sengupta, S.; Brown, D.E. Automatic Report Generation for Histopathology images using pre-trained Vision Transformers. arXiv 2023, arXiv:2311.06176. [Google Scholar]
- Dolezal, J.M.; Kochanny, S.; Dyer, E.; Ramesh, S.; Srisuwananukorn, A.; Sacco, M.; Howard, F.M.; Li, A.; Mohan, P.; Pearson, A.T. Slideflow: Deep learning for digital histopathology with real-time whole-slide visualization. BMC Bioinform. 2024, 25, 134. [Google Scholar] [CrossRef]
- Xie, L.; Li, C.; Wang, Z.; Zhang, X.; Chen, B.; Shen, Q.; Wu, Z. Shisrcnet: Super-resolution and classification network for low-resolution breast cancer histopathology image. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Vancouver, BC, Canada, 8–12 October 2023; Lecture Notes in Computer Science. Springer: Cham, Switzerland, 2023. [Google Scholar]
- Silverman, J.F.; Frable, W.J. The use of the Diff-Quik stain in the immediate interpretation of fine-needle aspiration biopsies. Diagn. Cytopathol. 1990, 6, 366–369. [Google Scholar] [CrossRef]
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
| Method | Dataset | Accuracy |
|---|---|---|
| Without Adaptation [18] | Circus image | 63.15% |
| ResNet [19] | TCGA | 72.11% |
| ResNet (Freeze) [19] | TCGA | 73.17% |
| Vision Transformer (Freeze) [19] | TCGA | 73.17% |
| Vision Transformer [19] | TCGA | 73.50% |
| HIPT-BERT [20] | GTEx | 82.06% |
| FixMatch [19] | TCGA | 83.34% |
| MixMatch [19] | TCGA | 88.35% |
| Slideflow [21] | TCGA | 90.20% |
| CLASS-M [19] | TCGA | 92.13% |
| SHISRCNet [22] | BreakHis | 97.82% |
| Model | B4 Precision | B4 Recall | B4 F1 | B5 Precision | B5 Recall | B5 F1 | B6 Precision | B6 Recall | B6 F1 |
|---|---|---|---|---|---|---|---|---|---|
| L-DCN | 0.62 | 0.67 | 0.64 | 0.60 | 0.72 | 0.65 | 0.90 | 0.44 | 0.60 |
| C-DCN | 0.70 | 0.68 | 0.69 | 0.65 | 0.70 | 0.67 | 0.65 | 0.70 | 0.67 |
| L-EDCN | 0.75 | 0.72 | 0.73 | 0.70 | 0.75 | 0.72 | 0.80 | 0.70 | 0.75 |
| C-EDCN | 0.83 | 0.79 | 0.81 | 0.80 | 0.82 | 0.81 | 0.83 | 0.81 | 0.82 |
| L-SHISRCNet | 0.70 | 0.70 | 0.70 | 0.65 | 0.72 | 0.68 | 0.75 | 0.60 | 0.67 |
| C-SHISRCNet | 0.72 | 0.70 | 0.71 | 0.68 | 0.72 | 0.70 | 0.70 | 0.72 | 0.71 |
| L-ResNet18 | 0.75 | 0.91 | 0.82 | 0.62 | 0.74 | 0.68 | 0.83 | 0.62 | 0.71 |
| C-ResNet18 | 0.80 | 0.94 | 0.86 | 0.64 | 0.84 | 0.73 | 0.88 | 0.59 | 0.71 |
| L-MobileNetV2 | 0.70 | 0.45 | 0.55 | 0.50 | 0.72 | 0.59 | 0.78 | 0.65 | 0.71 |
| C-MobileNetV2 | 0.74 | 0.50 | 0.59 | 0.54 | 0.71 | 0.61 | 0.80 | 0.70 | 0.74 |
Share and Cite
Do, T.-H.; Le, H.; Dang, M.-H.H.; Nguyen, V.-D.; Do, P. Cross-Domain Approach for Automated Thyroid Classification Using Diff-Quick Images. Mathematics 2025, 13, 2191. https://doi.org/10.3390/math13132191