Automated Classification of the Tympanic Membrane Using a Convolutional Neural Network
Abstract
1. Introduction
2. Materials and Methods
2.1. Datasets
2.2. Deep Neural Network Model
3. Results
3.1. Accuracy
3.2. Class Activation Map
4. Discussion
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
Label (n) | Side Accuracy | Perforation Accuracy
---|---|---
Left normal TM (663) | 96.5% | 89.1%
Right normal TM (773) | 99.2% | 91.7%
Left perforated TM (181) | 95.6% | 91.1%
Right perforated TM (201) | 99.5% | 94.5%
Overall (1818) | 97.9% | 91.0%
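The overall figures are consistent with case-weighted averages of the per-class accuracies; for perforation detection, for example, (663 × 89.1% + 773 × 91.7% + 181 × 91.1% + 201 × 94.5%) / 1818 ≈ 91.0%.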
Task | AUROC | Accuracy | Sensitivity | Specificity | PPV | NPV
---|---|---|---|---|---|---
Side | 0.98 | 97.9% | 99.3% | 96.3% | 96.9% | 99.1%
Perforation | 0.92 | 91.0% | 90.5% | 92.9% | 98.0% | 72.3%
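For reference, accuracy, sensitivity, specificity, PPV (positive predictive value), and NPV (negative predictive value) follow the standard confusion-matrix definitions, with TP, TN, FP, and FN denoting true/false positives and negatives:

$$
\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad
\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad
\text{Specificity} = \frac{TN}{TN + FP},
$$

$$
\text{PPV} = \frac{TP}{TP + FP}, \qquad
\text{NPV} = \frac{TN}{TN + FN}.
$$

Unlike sensitivity and specificity, PPV and NPV depend on the class balance of the test set, which should be kept in mind when comparing the two tasks.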
Class activation rates by anatomical region and class (LN = left normal TM; LP = left TM with perforation; RN = right normal TM; RP = right TM with perforation; AS, AI, PS, PI = anterosuperior, anteroinferior, posterosuperior, posteroinferior quadrants; Overall, Side, and TM Finding denote activation rates for the overall, side, and TM-finding classifications):

Region | Quadrant / Landmark | LN Overall | LN Side | LN TM Finding | LP Overall | LP Side | LP TM Finding | RN Overall | RN Side | RN TM Finding | RP Overall | RP Side | RP TM Finding
---|---|---|---|---|---|---|---|---|---|---|---|---|---
Activation rate | | 8.7% | 8.7% | 8.6% | 94.8% | 90.6% | 99.4% | 52.5% | 94.3% | 10.6% | 60.3% | 21.6% | 98.5%
External auditory canal | AS | 19.1% | 1.5% | 1.8% | 42.3% | 30.4% | 50.3% | 46.5% | 47.3% | 1.4% | 41.9% | 8.0% | 42.3%
External auditory canal | AI | 52.2% | 5.6% | 3.5% | 51.6% | 58.0% | 40.3% | 86.8% | 87.8% | 3.2% | 59.8% | 15.6% | 56.2%
External auditory canal | PS | 44.3% | 6.8% | 0.9% | 74.3% | 75.1% | 66.3% | 87.4% | 90.8% | 0.9% | 53.5% | 8.0% | 56.2%
External auditory canal | PI | 36.5% | 4.4% | 2.0% | 63.3% | 64.6% | 55.8% | 56.8% | 57.6% | 2.1% | 48.5% | 6.5% | 51.7%
Tympanic membrane | AS | 20.9% | 1.1% | 2.6% | 31.2% | 44.2% | 15.5% | 22.7% | 22.5% | 1.3% | 17.4% | 3.5% | 17.4%
Tympanic membrane | AI | 16.5% | 1.2% | 1.7% | 17.5% | 22.1% | 11.6% | 23.6% | 21.5% | 3.2% | 13.3% | 2.5% | 13.4%
Tympanic membrane | PS | 33.0% | * 5.3% (17.56) | 0.5% | 42.6% | 56.9% | 40.3% | 74.2% | 77.1% | 0.8% | 36.1% | 7.5% | 35.8%
Tympanic membrane | PI | 24.3% | * 2.3% (37.87) | 2.0% | 29.4% | 43.6% | 12.7% | 34.5% | 34.3% | 1.9% | 16.2% | 3.5% | 15.9%
Tympanic annulus | AS | 20.9% | 2.0% | 1.7% | 24.2% | 12.2% | 33.7% | 48.8% | 50.2% | 1.0% | 24.5% | 1.0% | 28.4%
Tympanic annulus | AI | 25.2% | 3.2% | 1.2% | 30.3% | 20.4% | 37.0% | 59.4% | 59.9% | 2.5% | 32.0% | 1.5% | 36.8%
Tympanic annulus | PS | 44.3% | 6.9% | 0.8% | 25.7% | 32.0% | 16.6% | 85.9% | 89.0% | 1.2% | 13.3% | 3.5% | 12.4%
Tympanic annulus | PI | 21.7% | 2.9% | 0.9% | 14.9% | 7.7% | 20.4% | 46.9% | 48.3% | 0.9% | 28.6% | 0% | 34.3%
Malleus | Short process | 45.2% | 7.2% | 0.6% | 49.3% | 61.9% | 31.5% | 89.9% | 93.1% | 1.2% | 43.2% | 10.1% | 41.8%
Malleus | Handle | 27.8% | 4.1% | 0.8% | 19.8% | 16.0% | 22.1% | 72.9% | 75.0% | 1.4% | 16.2% | 1.5% | 17.9%
Cone of light | | 46.1% | 6.5% | 1.5% | 2.0% | 2.8% | 1.1% | 74.8% | 77.7% | 0.8% | 4.1% | 1.5% | 3.5%
Umbo | | 22.6% | 3.3% | 0.6% | 7.0% | 6.1% | 7.2% | 60.3% | 62.7% | 0.5% | 9.1% | 0.5% | 10.4%
Perforation margin | | 0% | 0% | 0% | 43.4% | 19.9% | * 63.0% (9.03) | 0% | 0% | 0% | 53.9% | 1.5% | 63.2%
Middle ear | | 0% | 0% | 0% | 61.8% | 39.2% | * 78.5% (3.55) | 0% | 0% | 0% | 75.9% | 10.1% | * 81.1% (37.88)
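As a reference for Section 3.2, the sketch below shows how a class activation map of the kind tabulated above can be computed, following the global-average-pooling formulation of Zhou et al. (2016). This is a minimal NumPy sketch, not the authors' implementation; `feature_maps` and `class_weights` are hypothetical stand-ins for the last convolutional layer's activations and the output-layer weights of a trained network:

```python
import numpy as np

def class_activation_map(feature_maps: np.ndarray, class_weights: np.ndarray) -> np.ndarray:
    """Compute a class activation map (CAM), following Zhou et al. (2016).

    feature_maps : (K, H, W) activations of the last conv layer for one image.
    class_weights: (K,) output-layer weights for the class of interest, i.e.,
                   the weights applied to the K global-average-pooled features.
    """
    # Weighted sum of the K feature maps yields a coarse localization map.
    cam = np.tensordot(class_weights, feature_maps, axes=([0], [0]))  # (H, W)
    cam = np.maximum(cam, 0.0)   # keep only regions contributing positively
    if cam.max() > 0:
        cam /= cam.max()         # normalize to [0, 1] for overlaying
    return cam

# Hypothetical example: 512 feature maps of spatial size 7x7.
rng = np.random.default_rng(0)
cam = class_activation_map(rng.standard_normal((512, 7, 7)),
                           rng.standard_normal(512))
print(cam.shape)  # (7, 7); upsample to the otoscopic image size before overlay
```

In practice the map is upsampled to the input resolution and overlaid on the otoscopic image; per-region activation rates such as those in the table can then be obtained by checking which annotated anatomical landmarks fall inside the highlighted area.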