A Deep Learning Model for Cervical Optical Coherence Tomography Image Classification
Abstract
1. Introduction
2. Materials and Methods
2.1. Data Collection
2.2. OCT Image Processing
2.3. Model Design
2.3.1. Model Architecture
2.3.2. Backbone Network
2.3.3. Texture Encoding
2.3.4. Deeply Supervised Classification
2.3.5. OCT Volume Label Prediction
2.4. Experiment Setups
2.4.1. Comparison Experiment
2.4.2. Data Preprocessing for Model Training
2.4.3. Parameter Settings
2.4.4. Evaluation Metrics
2.5. Statistical Analysis
3. Results
3.1. Machine–Machine Comparison with Baseline Models
3.2. Human–Machine Comparison with Medical Experts
3.3. Interpretability Study
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A
Appendix A.1. Extraction of Cervical OCT Images for Training
Appendix A.2. Cross-Shaped Threshold Voting
Appendix A.3. Evaluation Metrics
References
Dataset | Hospital | Number | Age | HPV Results | TCT Results
---|---|---|---|---|---
Multi-center | The Third Affiliated Hospital of Zhengzhou University | 350 | 38.67 ± 9.86 | Positive: 281; Negative: 31; Untested: 38 | Positive: 215; Negative: 63; Untested: 72
 | Liaoning Cancer Hospital and Institute | 227 | 44.08 ± 8.38 | Positive: 138; Negative: 84; Untested: 5 | Positive: 69; Negative: 134; Untested: 24
 | Puyang Oilfield General Hospital | 59 | 43.04 ± 8.06 | Positive: 39; Negative: 3; Untested: 17 | Positive: 39; Negative: 12; Untested: 8
 | Luohe Central Hospital | 57 | 40.37 ± 10.05 | Positive: 49; Negative: 8; Untested: 0 | Positive: 36; Negative: 21; Untested: 0
 | Zhengzhou Jinshui District General Hospital | 40 | 39.03 ± 12.47 | Positive: 36; Negative: 2; Untested: 2 | Positive: 9; Negative: 27; Untested: 4
 | Overall | 733 | 40.85 ± 9.79 | Positive: 543; Negative: 128; Untested: 62 | Positive: 368; Negative: 257; Untested: 108
Renmin | Renmin Hospital of Wuhan University | 113 | 49.68 ± 11.77 | Positive: 98; Negative: 12; Untested: 3 | Positive: 20; Negative: 74; Untested: 19
Huaxi | West China Hospital of Sichuan University | 132 | 39.02 ± 9.09 | Positive: 116; Negative: 7; Untested: 9 | Positive: 16; Negative: 112; Untested: 4
Dataset | Size | MI | CY | EP | HSIL | CC | Total
---|---|---|---|---|---|---|---
Multi-center | # Patients | 239 | 126 | 99 | 161 | 74 | 699
 | # Volumes | 363 | 195 | 153 | 166 | 379 | 1256
 | # Images | 3172 | 2464 | 2067 | 5539 | 731 | 13,973
Renmin | # Patients | 20 | 18 | 10 | 36 | 21 | 105
 | # Volumes | 241 | 216 | 121 | 432 | 254 | 1264
Huaxi | # Patients | 40 | 27 | 17 | 39 | 2 | 125
 | # Volumes | 301 | 216 | 148 | 87 | 8 | 760
Metric | Backbone | B-TE | B-FPN | B-F-T | B-F-C | Ours
---|---|---|---|---|---|---
Acc5 | 84.56 ± 4.02% | 86.10 ± 3.41% | 85.77 ± 3.92% | 85.94 ± 3.13% | 87.54 ± 3.82% | 88.67 ± 2.94%
Acc2 | 92.39 ± 2.04% | 93.72 ± 1.59% | 92.97 ± 2.80% | 93.38 ± 1.38% | 94.52 ± 1.73% | 94.89 ± 1.52%
Sens | 89.56 ± 5.46% | 91.96 ± 2.84% | 91.11 ± 4.35% | 91.35 ± 4.36% | 93.06 ± 3.65% | 92.70 ± 3.85%
Spec | 93.98 ± 2.43% | 94.88 ± 2.93% | 94.05 ± 3.49% | 94.49 ± 2.82% | 95.42 ± 2.62% | 96.28 ± 2.23%
PPV | 90.84 ± 5.22% | 92.55 ± 5.17% | 91.03 ± 6.47% | 92.14 ± 4.63% | 92.97 ± 4.76% | 94.21 ± 4.65%
NPV | 92.96 ± 2.26% | 94.13 ± 2.62% | 93.93 ± 2.79% | 94.02 ± 2.76% | 95.24 ± 2.33% | 95.06 ± 2.30%
AUC | 97.09 ± 1.50% | 98.04 ± 1.09% | 97.95 ± 1.32% | 98.12 ± 0.71% | 98.72 ± 0.56% | 98.80 ± 0.50%
Values are percentages; 95% confidence intervals are given in parentheses.

Dataset | Reader | Acc2 (%) | Sens (%) | Spec (%) | PPV (%) | NPV (%) | F1-Score (%)
---|---|---|---|---|---|---|---
Renmin | Investigator 1 | 65.71 (55.81–74.70) | 47.06 (32.93–61.54) | 83.33 (70.71–92.08) | 72.73 (54.48–86.70) | 62.50 (50.30–73.64) | 57.14 (45.88–67.89)
 | Investigator 2 | 75.24 (65.86–83.14) | 56.86 (42.25–70.65) | 92.59 (82.11–97.94) | 87.88 (71.80–96.60) | 69.44 (57.47–79.76) | 69.05 (58.02–78.69)
 | Investigator 3 | 69.52 (59.78–78.13) | 50.98 (36.60–65.25) | 87.04 (75.10–94.63) | 78.79 (61.09–91.02) | 65.28 (53.14–76.12) | 61.91 (50.66–72.29)
 | Investigator 4 | 66.67 (56.80–75.57) | 33.33 (20.76–47.92) | 98.15 (90.11–99.95) | 94.44 (72.71–99.86) | 60.92 (49.87–71.21) | 49.28 (37.02–61.59)
 | Investigator 5 | 65.74 (55.81–74.70) | 70.59 (56.17–82.51) | 61.11 (46.88–74.08) | 63.16 (49.34–75.55) | 68.75 (53.75–81.34) | 66.67 (56.95–75.45)
 | Avg. (95% CI) | 68.57 (58.78–77.28) | 51.77 (37.34–65.98) | 84.44 (72.01–92.87) | 79.40 (58.40–88.69) | 65.38 (52.64–75.97) | 60.81 (50.41–71.85)
 | Ours | 81.91 (73.19–88.74) | 82.35 (69.13–91.60) | 81.48 (68.57–90.75) | 80.77 (67.47–90.37) | 83.02 (70.20–91.93) | 81.55 (72.70–88.51)
Huaxi | Investigator 1 | 85.60 (78.20–91.24) | 82.50 (67.22–92.66) | 87.06 (78.02–93.36) | 75.00 (59.66–86.81) | 91.36 (83.00–96.45) | 78.57 (68.26–86.78)
 | Investigator 2 | 87.20 (80.05–92.50) | 77.50 (61.55–89.16) | 91.76 (83.77–96.62) | 81.58 (65.67–92.26) | 89.66 (81.27–95.16) | 82.67 (68.84–87.80)
 | Investigator 3 | 92.00 (85.78–96.10) | 80.00 (64.35–90.95) | 97.65 (91.76–99.71) | 94.12 (80.32–99.28) | 91.21 (83.41–96.13) | 86.49 (76.55–93.32)
 | Investigator 4 | 76.80 (68.41–83.88) | 55.00 (38.49–70.74) | 87.06 (78.02–93.36) | 66.67 (48.17–82.04) | 80.43 (70.85–87.97) | 60.27 (48.14–71.55)
 | Investigator 5 | 78.40 (70.15–85.26) | 67.50 (50.87–81.43) | 83.53 (73.91–90.69) | 65.85 (49.41–79.92) | 84.52 (74.99–91.49) | 66.67 (55.32–76.76)
 | Avg. (95% CI) | 84.00 (76.38–89.94) | 72.50 (56.11–85.40) | 89.41 (80.85–95.04) | 76.64 (59.76–88.56) | 87.44 (78.50–93.52) | 74.93 (63.21–83.58)
 | Ours | 89.60 (82.87–94.35) | 87.50 (73.20–95.81) | 90.59 (82.29–95.85) | 79.55 (66.60–91.61) | 93.90 (86.34–97.99) | 84.34 (74.71–91.39)
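The human–machine comparison table above reports each metric with a 95% confidence interval; for binomial proportions such as accuracy, sensitivity, and specificity, these are conventionally exact Clopper–Pearson intervals (which is assumed here, not confirmed for every cell). A minimal, standard-library-only Python sketch computes the interval by bisecting the binomial CDF rather than calling a library inverse-beta routine; the function names and the fixed 60-step bisection are illustrative choices, not from the paper:

```python
from math import comb


def binom_cdf(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))


def clopper_pearson(k: int, n: int, alpha: float = 0.05) -> tuple[float, float]:
    """Exact two-sided Clopper-Pearson confidence interval for a proportion k/n."""

    def solve(target: float, cdf_k: int) -> float:
        # binom_cdf(cdf_k, n, p) decreases monotonically in p, so bisect for
        # the p at which it equals `target`.
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2
            if binom_cdf(cdf_k, n, mid) > target:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # Boundary conventions: the interval is [0, ...] when k = 0 and [..., 1]
    # when k = n; otherwise each bound solves a one-sided tail equation.
    lower = 0.0 if k == 0 else solve(1 - alpha / 2, k - 1)
    upper = 1.0 if k == n else solve(alpha / 2, k)
    return lower, upper
```

As a usage example with hypothetical counts, `clopper_pearson(86, 105)` brackets the observed proportion 86/105 ≈ 0.819 with an exact 95% interval of roughly 0.73 to 0.89, matching the scale of the per-reader intervals in the table.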
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zuo, X.; Liu, J.; Hu, M.; He, Y.; Hong, L. A Deep Learning Model for Cervical Optical Coherence Tomography Image Classification. Diagnostics 2024, 14, 2009. https://doi.org/10.3390/diagnostics14182009