CellGAN: Generative Adversarial Networks for Cellular Microscopy Image Recognition with Integrated Feature Completion Mechanism
Abstract
1. Introduction
- Constructing a feature enhancement generator that incorporates a Transformer to supplement long-range semantic information;
- Introducing bilinear interpolation for feature completion in the self-attention module to reduce computational complexity and enhance inference speed;
- Employing two-dimensional positional encoding in the self-attention mechanism to supplement positional information and facilitate position recovery;
- Integrating segmentation loss and discriminator error for a generator accuracy correction strategy to optimize segmentation accuracy continuously.
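The second and third contributions above can be sketched together: keys and values are bilinearly downsampled to a small grid so the attention score matrix shrinks from (hw × hw) to (hw × k²), and a 2D sinusoidal encoding injects row/column position before attention. The following NumPy sketch illustrates the idea only; the function names, the k × k grid size, and the sinusoidal scheme are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def pos_encoding_2d(h, w, d):
    """2D sinusoidal positional encoding (assumed scheme): half the
    channels encode the row index, the other half the column index."""
    assert d % 4 == 0
    pe = np.zeros((h, w, d))
    d_half = d // 2
    div = np.exp(np.arange(0, d_half, 2) * (-np.log(10000.0) / d_half))
    ys = np.arange(h)[:, None] * div[None, :]
    xs = np.arange(w)[:, None] * div[None, :]
    pe[:, :, 0:d_half:2] = np.sin(ys)[:, None, :]
    pe[:, :, 1:d_half:2] = np.cos(ys)[:, None, :]
    pe[:, :, d_half::2] = np.sin(xs)[None, :, :]
    pe[:, :, d_half + 1::2] = np.cos(xs)[None, :, :]
    return pe

def bilinear_resize(x, out_h, out_w):
    """Bilinear resampling of a (h, w, c) feature map."""
    h, w, _ = x.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None, None]; wx = (xs - x0)[None, :, None]
    top = x[y0][:, x0] * (1 - wx) + x[y0][:, x1] * wx
    bot = x[y1][:, x0] * (1 - wx) + x[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def efficient_attention(feat, k=8):
    """Self-attention whose keys/values are bilinearly downsampled to a
    k x k grid: scores cost O(hw * k^2) instead of O((hw)^2)."""
    h, w, c = feat.shape
    feat = feat + pos_encoding_2d(h, w, c)      # inject 2D position
    q = feat.reshape(h * w, c)                  # full-resolution queries
    kv = bilinear_resize(feat, k, k).reshape(k * k, c)
    scores = q @ kv.T / np.sqrt(c)              # (hw, k^2) score matrix
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)     # row-wise softmax
    return (attn @ kv).reshape(h, w, c)

out = efficient_attention(np.random.default_rng(0).normal(size=(32, 32, 64)))
print(out.shape)  # -> (32, 32, 64)
```

With a 32 × 32 feature map and an 8 × 8 key/value grid, each query attends over 64 positions instead of 1024, which is the source of the claimed complexity and inference-speed gains.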
2. Related Work
3. Methods
3.1. Feature Completion Mechanism
3.1.1. Transformer Block Feature Completion
3.1.2. Self-Attention Mechanism Enhancement Strategy
3.2. Accuracy Rectification Strategy
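The rectification strategy combines a segmentation loss with the discriminator's error signal, so the generator is penalized both for pixel-level mistakes and for producing masks the discriminator can tell apart from ground truth. A rough numerical illustration follows; the soft-Dice formulation, the non-saturating adversarial term, the scalar discriminator score, and the λ = 0.1 weight are all assumptions made for the sketch, not the paper's exact objective:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss on probability maps (a common segmentation loss)."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def generator_loss(pred, target, disc_score_fake, lam=0.1):
    """Rectified generator objective: segmentation error plus the
    discriminator's error signal. A fooled discriminator (score near 1
    on the generated mask) contributes a small adversarial term."""
    adv = -np.log(disc_score_fake + 1e-12)
    return dice_loss(pred, target) + lam * adv

pred = np.full((4, 4), 0.9)   # generated probability mask
target = np.ones((4, 4))      # ground-truth mask
print(round(generator_loss(pred, target, disc_score_fake=0.8), 4))  # -> 0.0749
```

In training, the two terms pull in the same direction: the Dice term corrects per-pixel errors while the adversarial term pushes predicted masks toward the global shape statistics of real annotations.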
4. Results
4.1. Experimental Environment
4.2. Datasets
4.3. Evaluation Metrics
4.4. Experimental Results
5. Discussion
5.1. Model Comparative Analysis
5.2. Ablation Experiment
5.3. Model Generalization Analysis
5.4. Model Interpretability Analysis
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Lei, X.; Lei, Y.; Li, J.K.; Du, W.X.; Li, R.G.; Yang, J.; Li, J.; Li, F.; Tan, H.B. Immune cells within the tumor microenvironment: Biological functions and roles in cancer immunotherapy. Cancer Lett. 2020, 470, 126–133. [Google Scholar] [CrossRef]
- Poole, J.J.; Mostaço-Guidolin, L.B. Optical Microscopy and the Extracellular Matrix Structure: A Review. Cells 2021, 10, 1760. [Google Scholar] [CrossRef]
- Magazzù, A.; Marcuello, C. Investigation of Soft Matter Nanomechanics by Atomic Force Microscopy and Optical Tweezers: A Comprehensive Review. Nanomaterials 2023, 13, 963. [Google Scholar] [CrossRef]
- Chen, J.; Sasaki, H.; Lai, H.; Su, Y.; Liu, J.; Wu, Y.; Zhovmer, A.; Combs, C.A.; Rey-Suarez, I.; Chang, H.Y.; et al. Three-dimensional residual channel attention networks denoise and sharpen fluorescence microscopy image volumes. Nat. Methods 2021, 18, 678–687. [Google Scholar] [CrossRef]
- Palla, G.; Fischer, D.S.; Regev, A.; Theis, F.J. Spatial components of molecular tissue biology. Nat. Biotechnol. 2022, 40, 308–318. [Google Scholar] [CrossRef]
- Seo, H.; Badiei Khuzani, M.; Vasudevan, V.; Huang, C.; Ren, H.; Xiao, R.; Jia, X.; Xing, L. Machine learning techniques for biomedical image segmentation: An overview of technical aspects and introduction to state-of-art applications. Med. Phys. 2020, 47, e148–e167. [Google Scholar] [CrossRef]
- Kumar, S.N.; Fred, A.L.; Varghese, P.S. An Overview of Segmentation Algorithms for the Analysis of Anomalies on Medical Images. J. Intell. Syst. 2020, 29, 612–625. [Google Scholar] [CrossRef]
- Bannon, D.; Moen, E.; Schwartz, M.; Borba, E.; Kudo, T.; Greenwald, N.; Vijayakumar, V.; Chang, B.; Pao, E.; Osterman, E.; et al. DeepCell Kiosk: Scaling deep learning–enabled cellular image analysis with Kubernetes. Nat. Methods 2021, 18, 43–45. [Google Scholar] [CrossRef]
- Stringer, C.; Wang, T.; Michaelos, M.; Pachitariu, M. Cellpose: A generalist algorithm for cellular segmentation. Nat. Methods 2021, 18, 100–106. [Google Scholar] [CrossRef]
- Mondal, A.K.; Agarwal, A.; Dolz, J.; Desrosiers, C. Revisiting CycleGAN for semi-supervised segmentation. arXiv 2019, arXiv:1908.11569. [Google Scholar] [CrossRef]
- Goncalves, J.P.; Pinto, F.A.; Queiroz, D.M.; Villar, F.M.; Barbedo, J.G.; Del Ponte, E.M. Deep learning architectures for semantic segmentation and automatic estimation of severity of foliar symptoms caused by diseases or pests. Biosyst. Eng. 2021, 210, 129–142. [Google Scholar] [CrossRef]
- Tong, S.; Zhang, J.; Li, W.; Wang, Y.; Kang, F. An image-based system for locating pruning points in apple trees using instance segmentation and RGB-D images. Biosyst. Eng. 2023, 236, 277–286. [Google Scholar] [CrossRef]
- Qian, L.; Zhou, X.; Li, Y.; Hu, Z. Unet#: A Unet-like redesigning skip connections for medical image segmentation. arXiv 2022, arXiv:2205.11759. [Google Scholar] [CrossRef]
- Eissa, M.M.; Napoleon, S.A.; Ashour, A.S. DeepLab V3+ Based Semantic Segmentation of COVID-19 Lesions in Computed Tomography Images. J. Eng. Res. 2022, 6, 184–191. [Google Scholar] [CrossRef]
- Bousias Alexakis, E.; Armenakis, C. Evaluation of UNet and UNet++ architectures in high resolution image change detection applications. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, 43, 1507–1514. [Google Scholar] [CrossRef]
- Liu, Z.; Yuan, H. An Res-Unet method for pulmonary artery segmentation of CT images. J. Phys. Conf. Ser. 2021, 1924, 012018. [Google Scholar] [CrossRef]
- Luc, P.; Couprie, C.; Chintala, S.; Verbeek, J. Semantic Segmentation using Adversarial Networks. arXiv 2016, arXiv:1611.08408. [Google Scholar] [CrossRef]
- Ramwala, O.A.; Dhakecha, S.A.; Ganjoo, A.; Visiya, D.; Sarvaiya, J.N. Leveraging Adversarial Training for Efficient Retinal Vessel Segmentation. In Proceedings of the 13th International Conference on Electronics, Computers and Artificial Intelligence (ECAI), Pitesti, Romania, 1–3 July 2021; pp. 1–6. [Google Scholar] [CrossRef]
- Tato, A.; Nkambou, R. Improving Adam Optimizer. 2018. Available online: https://openreview.net/pdf?id=HJfpZq1DM (accessed on 7 July 2024).
- Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232. [Google Scholar] [CrossRef]
- You, A.; Kim, J.K.; Ryu, I.H.; Yoo, T.K. Application of generative adversarial networks (GAN) for ophthalmology image domains: A survey. Eye Vis. 2022, 9, 6. [Google Scholar] [CrossRef]
- Chen, Z.; Wei, J.; Zeng, X.; Xu, L. Retinal vessel segmentation based on task-driven generative adversarial network. IET Image Process. 2020, 14, 4599–4605. [Google Scholar] [CrossRef]
- Ding, Y.; Zhang, Z.; Zhao, X.; Hong, D.; Cai, W.; Yang, N.; Wang, B. Multi-scale receptive fields: Graph attention neural network for hyperspectral image classification. Expert Syst. Appl. 2023, 223, 119858. [Google Scholar] [CrossRef]
- Guo, X.; Chen, C.; Lu, Y.; Meng, K.; Chen, H.; Zhou, K.; Wang, Z.; Xiao, R. Retinal vessel segmentation combined with generative adversarial networks and Dense U-Net. IEEE Access 2020, 8, 194551–194560. [Google Scholar] [CrossRef]
- Li, X.; Chen, H.; Qi, X.; Dou, Q.; Fu, C.W.; Heng, P.A. H-DenseUNet: Hybrid Densely Connected UNet for Liver and Tumor Segmentation From CT Volumes. IEEE Trans. Med. Imaging 2018, 37, 2663–2674. [Google Scholar] [CrossRef]
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar] [CrossRef]
- Schlemper, J.; Oktay, O.; Schaap, M.; Heinrich, M.; Kainz, B.; Glocker, B.; Rueckert, D. Attention gated networks: Learning to leverage salient regions in medical images. Med. Image Anal. 2019, 53, 197–207. [Google Scholar] [CrossRef]
- Gao, Y.; Huang, R.; Yang, Y.; Zhang, J.; Shao, K.; Tao, C.; Chen, Y.; Metaxas, D.N.; Li, H.; Chen, M. Focusnetv2: Imbalanced large and small organ segmentation with adversarial shape constraint for head and neck Ct images. Med. Image Anal. 2021, 67, 101831. [Google Scholar] [CrossRef]
- Yu, F.; Koltun, V. Multi-scale context aggregation by dilated convolutions. arXiv 2015, arXiv:1511.07122. [Google Scholar] [CrossRef]
- Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid scene parsing network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2881–2890. [Google Scholar] [CrossRef]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30. Available online: https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf (accessed on 7 July 2024).
- Wolf, T.; Debut, L.; Sanh, V.; Chaumond, J.; Delangue, C.; Moi, A.; Cistac, P.; Rault, T.; Louf, R.; Funtowicz, M.; et al. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, Online, 16–20 November 2020. [Google Scholar] [CrossRef]
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar] [CrossRef]
- Xiao, X.; Lian, S.; Luo, Z.; Li, S. Weighted res-unet for high-quality retina vessel segmentation. In Proceedings of the 2018 9th International Conference on Information Technology in Medicine and Education (ITME), Hangzhou, China, 19–21 October 2018; IEEE: New York, NY, USA; pp. 327–331. [Google Scholar] [CrossRef]
- Liu, Y.; Sangineto, E.; Bi, W.; Sebe, N.; Lepri, B.; Nadai, M. Efficient training of visual transformers with small datasets. Adv. Neural Inf. Process. Syst. 2021, 34, 23818–23830. [Google Scholar]
- Shi, J.; Wang, Y.; Yu, Z.; Li, G.; Hong, X.; Wang, F.; Gong, Y. Exploiting multi-scale parallel self-attention and local variation via dual-branch transformer-cnn structure for face super-resolution. IEEE Trans. Multimed. 2023, 26, 2608–2620. [Google Scholar] [CrossRef]
- Cao, H.; Wang, Y.; Chen, J.; Jiang, D.; Zhang, X.; Tian, Q.; Wang, M. Swin-unet: Unet-like pure transformer for medical image segmentation. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer Nature: Cham, Switzerland; pp. 205–218. [Google Scholar] [CrossRef]
- Cao, Y.H.; Yu, H.; Wu, J. Training vision transformers with only 2040 images. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer Nature: Cham, Switzerland; pp. 220–237. [Google Scholar] [CrossRef]
- Salehi, A.W.; Khan, S.; Gupta, G.; Alabduallah, B.I.; Almjally, A.; Alsolai, H.; Siddiqui, T.; Mellit, A. A study of CNN and transfer learning in medical imaging: Advantages, challenges, future scope. Sustainability 2023, 15, 5930. [Google Scholar] [CrossRef]
- Wang, S.; Li, B.Z.; Khabsa, M.; Fang, H.; Ma, H. Linformer: Self-attention with linear complexity. arXiv 2020, arXiv:2006.04768. [Google Scholar] [CrossRef]
- Zhou, R.G.; Wan, C. Quantum image scaling based on bilinear interpolation with decimals scaling ratio. Int. J. Theor. Phys. 2021, 60, 2115–2144. [Google Scholar] [CrossRef]
- Sirinukunwattana, K.; Raza, S.E.; Tsang, Y.W.; Snead, D.R.; Cree, I.A.; Rajpoot, N.M. Locality sensitive deep learning for detection and classification of nuclei in routine colon cancer histology images. IEEE Trans. Med. Imaging 2016, 35, 1196–1206. [Google Scholar] [CrossRef] [PubMed]
- Sirinukunwattana, K.; Pluim, J.P.; Chen, H.; Qi, X.; Heng, P.A.; Guo, Y.B.; Wang, L.Y.; Matuszewski, B.J.; Bruni, E.; Sanchez, U.; et al. Gland segmentation in colon histology images: The glas challenge contest. Med. Image Anal. 2017, 35, 489–502. [Google Scholar] [CrossRef]
- Phoulady, H.A.; Mouton, P.R. A New Cervical Cytology Dataset for Nucleus Detection and Image Classification (Cervix93) and Methods for Cervical Nucleus Detection. arXiv 2018, arXiv:1811.09651. [Google Scholar] [CrossRef]
- Gao, Y.; Zhou, M.; Metaxas, D.N. UTNet: A hybrid transformer architecture for medical image segmentation. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, 27 September–1 October 2021; Proceedings, Part III 24. Springer International Publishing: Berlin/Heidelberg, Germany, 2021; pp. 61–71. [Google Scholar] [CrossRef]
- Huff, D.T.; Weisman, A.J.; Jeraj, R. Interpretation and visualization techniques for deep learning models in medical imaging. Phys. Med. Biol. 2021, 66, 04TR01. [Google Scholar] [CrossRef]
Dataset | Training Set IoU | Validation Set IoU
---|---|---
MoNuSeg | 89.11% | 77.50%
GlandCell | 94.11% | 83.52%
Nucleus | 92.06% | 86.08%
RiceCell | 91.50% | 93.21%
Model | IoU% ↑ | Dice% ↑ | Acc% ↑
---|---|---|---
UNet [13] | 89.69 | 94.43 | 95.77
DeepLabV3+ [14] | 90.06 | 94.67 | 95.98
UNet++ [15] | 88.32 | 93.52 | 94.32
ResUNet [16] | 90.24 | 94.82 | 96.23
SwinUnet [37] | 79.28 | 87.95 | 89.46
UTNet [45] | 91.00 | 95.29 | 96.13
CellGAN | 93.21 | 96.49 | 97.32
Scheme | UNet | Transformer | Improved MHSA | GAN | IoU | Dice
---|---|---|---|---|---|---
1 | √ | | | | 89.69 | 94.43
2 | | √ | | | 70.26 | 82.53
3 | √ | √ | | | 91.00 | 95.29
4 | √ | | | √ | 89.94 | 94.71
5 | √ | √ | √ | | 91.30 | 95.45
6 | √ | √ | | √ | 92.49 | 96.10
7 | √ | √ | √ | √ | 93.21 | 96.49
Model | MoNuSeg IoU ↑ | MoNuSeg Dice ↑ | GlandCell IoU ↑ | GlandCell Dice ↑ | Nucleus IoU ↑ | Nucleus Dice ↑
---|---|---|---|---|---|---
DeepLabV3+ [14] | 45.80 | 61.82 | 78.41 | 87.55 | 63.20 | 77.02
UNet++ [15] | 47.01 | 63.06 | 80.58 | 88.90 | 63.11 | 76.64
ResUNet [16] | 40.98 | 57.94 | 80.49 | 88.99 | 62.65 | 76.57
SwinUnet [37] | 62.28 | 76.66 | 79.29 | 87.95 | 85.40 | 91.81
UTNet [45] | 18.34 | 30.41 | 83.97 | 91.09 | 65.90 | 79.00
CellGAN | 63.72 | 77.84 | 83.72 | 91.14 | 86.10 | 92.53
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Liao, X.; Yi, W. CellGAN: Generative Adversarial Networks for Cellular Microscopy Image Recognition with Integrated Feature Completion Mechanism. Appl. Sci. 2024, 14, 6266. https://doi.org/10.3390/app14146266