
Development of a Deep-Learning-Based Artificial Intelligence Tool for Differential Diagnosis between Dry and Neovascular Age-Related Macular Degeneration

1 Department of Information and Statistics, Chungbuk National University, Chungdae-ro 1, Seowon-gu, Cheongju-si, Chungbuk 28644, Korea
2 College of Pharmacy and Medical Research Center, Chungbuk National University, 194-31 Osongsaengmyeong 1-ro, Osong-eup, Heungdeok-gu, Cheongju-si, Chungbuk 28160, Korea
3 Department of Ophthalmology, Ulsan University Hospital, College of Medicine, University of Ulsan, 877, Bangeojinsunhwando-ro, Dong-gu, Ulsan 44033, Korea
* Authors to whom correspondence should be addressed.
These authors contributed equally to this study.
Diagnostics 2020, 10(5), 261; https://doi.org/10.3390/diagnostics10050261
Submission received: 24 March 2020 / Revised: 26 April 2020 / Accepted: 26 April 2020 / Published: 28 April 2020
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

Abstract

The use of deep-learning-based artificial intelligence (AI) is emerging in ophthalmology, and AI-mediated differential diagnosis between neovascular age-related macular degeneration (AMD) and dry AMD is a promising methodology for precise treatment strategies and prognosis. Here, we developed deep learning algorithms and predicted diseases using 399 fundus images. Based on feature extraction and classification with fully connected layers, we applied the Visual Geometry Group with 16 layers (VGG16) model of convolutional neural networks to classify new images. Image-data augmentation in our model was performed using Keras ImageDataGenerator, and five-fold cross-validation was used to validate the model. The prediction and validation results obtained using the AI AMD diagnosis model showed relevant performance and suitability, as well as better diagnostic accuracy than manual review by first-year residents. These results suggest the efficacy of this tool for early differential diagnosis of AMD in situations involving shortages of ophthalmology specialists and other medical devices.

1. Introduction

Deep-learning-based artificial intelligence (AI) tools have been adopted by medical experts for disease diagnosis and detection. Furthermore, AI-based diagnosis can be used to augment human analysis in pathology and radiology [1]. AI-based tools have been developed for cancer diagnosis using pathology slides, for preliminary radiology reports and chest X-ray interpretation, and for detection of cardiac dysfunction using electrocardiograms [2,3,4]. Image extraction and analytical algorithms in AI diagnosis are drawing the attention of medical specialists. For example, the use of deep learning tools to analyze photographs of lesions represents a potential methodology for diagnosing several retinal diseases in ophthalmology. Therefore, deep learning AI tools might be useful to ophthalmologists for predicting and treating diabetic retinopathy, age-related macular degeneration (AMD), floaters, and retinitis pigmentosa. Recently, the Food and Drug Administration approved IDx-DR, an AI-based diagnostic system, for the detection of diabetic retinopathy [5]. In diabetic retinopathy, blood vessels become blocked and irregular in diameter [6], which induces fluid leakage and hemorrhaging associated with vision damage. Additionally, angiogenesis is considered to be a pathogenic process in diabetic retinopathy [7]. These pathologies represent potential photographic sources for the development of AI diagnosis tools.
AMD is a degenerative eye disease and the leading cause of irreversible vision loss in the elderly [8]. It is a complex, multifactorial disease, and its pathogenesis is not fully understood [9]. Choroidal neovascularization (CNV), vascular leakage, and hemorrhaging are the hallmarks of neovascular AMD (nAMD) [10]. Detection of AMD in its early stages is important for a good prognosis, and the differential diagnosis between dry AMD (dAMD) and nAMD is also critical for appropriate treatment and reduction of disease severity [11]. However, a shortage of ophthalmologists and medical devices for diagnosis represents a potential challenge for the timely detection of diseases.
Ophthalmologists can diagnose AMD through eye examinations, such as fundus photography, optical coherence tomography (OCT), fluorescein angiography (FA), and indocyanine green angiography (ICGA) [12], with multimodal imaging also potentially necessary for accurate AMD diagnosis and treatment. However, for diagnostic screening purposes, it is difficult to access all of these types of imaging equipment. Fundus photography has the limitation of providing only two-dimensional retinal information. However, it is an inexpensive and relatively simple device-based diagnostic tool that is easy to operate. Additionally, images can be saved and used at a later time by different clinicians and researchers. Furthermore, fundus photography results in higher patient compliance due to its short test times and non-invasiveness.
Fundus photographs record the appearance of patient retinas, allowing the clinician to detect retinal changes and review the findings with a colleague [13]. AMD-related leakage of fluid and blood can be observed by fundus photography, which is also capable of detecting drusen, mottled appearance, and hemorrhagic detachment. Therefore, fundus photography might be useful for diagnosing AMD in routine eye examinations.
In this study, we explored the viability of fundus photography for the development of a deep-learning-based AI diagnostic tool and demonstrated the performance of the proposed AI tool for differentially diagnosing AMD (control vs. dAMD vs. nAMD). Additionally, we compared the diagnostic accuracy of the AI tool with that of ophthalmology residents for AMD.

2. Materials and Methods

2.1. Ethical Approval

The study protocol was reviewed and approved by the Institutional Human Experimentation Committee Review Board of Ulsan University Hospital, Ulsan, Republic of Korea (UUH 2019-12-006, 31 December 2019). The study was conducted in accordance with the ethical standards set forth in the 1964 Declaration of Helsinki.

2.2. Subjects

To select the patient groups (nAMD and dAMD), we retrospectively reviewed the medical records of patients aged >50 years who had been diagnosed with nAMD or dAMD between March 1, 2015, and July 31, 2019, at the Department of Ophthalmology of Ulsan University Hospital, Ulsan, Republic of Korea. All subjects (399 eyes of 378 patients) underwent a complete ophthalmic examination that included best-corrected visual acuity assessment, non-contact tonometry (CT-1P; Topcon Corporation, Tokyo, Japan), and swept-source OCT (DRI OCT-1 Atlantis; Topcon Corporation, Tokyo, Japan). CNV or polypoidal vascular lesions were detected via FA and ICGA (Heidelberg Retina Angiograph Spectralis; Heidelberg Engineering, Heidelberg, Germany). Patients who had undergone previous retinal surgery for conditions such as epiretinal membrane, macular hole, vitreous hemorrhage, and rhegmatogenous retinal detachment (RRD) were excluded. Subjects were also excluded if they had pre-existing ocular diseases known to affect retinal pathophysiology (such as glaucoma, uveitis, diabetic retinopathy, and retinal vascular disease), severe media opacity, or high myopia (axial length ≥ 26.5 mm). To select normal controls, the medical records of patients who had been diagnosed with and surgically treated for various retinal diseases (macular hole, epiretinal membrane, or RRD) were also reviewed. A normal control was defined based on the absence of lesions, including drusen, according to fundus photography and OCT in the unaffected eye.

2.3. Imaging Equipment

We used two fundus photography systems (TRC-NW8, Topcon Corporation, Tokyo, Japan, and Daytona, Optos, Inc., Marlborough, MA, USA). The TRC-NW8 retinal camera provides high-quality 16.2-megapixel images, with a 45° field of central macular view. Daytona provides ultra-widefield fundus digital images at 200° of the retina in a single pass. All retinal images were reviewed by a retinal specialist (JKM) to ensure that the photographs were of sufficiently high quality to adequately visualize the retina.

2.4. Convolutional Neural Network (CNN) Modeling

Convolutional neural network (CNN) techniques have recently shown noticeable advances in various fields, including computer vision and image analysis. We used this method to classify macular degeneration in macular images. As the deep learning model for classification, we used a modified Visual Geometry Group with 16 layers (VGG16) model [14], a top performer in the ILSVRC-2014 competition. VGG16 has a very simple architecture that uses only 3 × 3 convolutional layers and 2 × 2 pooling layers (Figure 1). We initialized the VGG16 model with weights pre-trained on the ImageNet dataset (http://www.image-net.org/) and then trained the convolutional layers and fully connected layers with macular images. The macular image dataset was divided into two sets, with 30% of the images in each group placed into the test set and the remaining images used for the training set [15]. Training was performed using multiple iterations with a learning rate of 0.000001 and Nadam optimization.
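The two building blocks named above, 3 × 3 convolutions and 2 × 2 max pooling, are simple enough to illustrate directly. The following is a minimal NumPy sketch of these operations on a toy single-channel image, not the authors' model code (CNN "convolution" layers compute a sliding-window correlation, as done here):

```python
import numpy as np

def conv3x3(image, kernel):
    """'Valid' sliding-window correlation of a single-channel image with a 3x3 kernel."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)
    return out

def maxpool2x2(feature_map):
    """2x2 max pooling with stride 2, halving each spatial dimension."""
    h, w = feature_map.shape
    return (feature_map[:h - h % 2, :w - w % 2]
            .reshape(h // 2, 2, w // 2, 2)
            .max(axis=(1, 3)))

img = np.arange(36, dtype=float).reshape(6, 6)   # toy 6x6 "image"
feat = conv3x3(img, np.ones((3, 3)) / 9.0)       # 6x6 -> 4x4 (averaging kernel)
pooled = maxpool2x2(feat)                        # 4x4 -> 2x2
```

VGG16 stacks many such layers (with learned kernels and multiple channels); the repeated conv-then-pool pattern is what progressively shrinks the 244 × 244 input into compact feature maps.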
Class activation map (CAM) visualization was performed to identify areas displaying the greatest effect of macular degeneration. CAM extracts feature maps of the final convolutional layer (Conv5_3) of the model trained using macular images and computes the weights of the feature maps to represent the heatmap in the image.
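The CAM computation described above amounts to a weighted sum of the final convolutional layer's feature maps. The sketch below uses hypothetical 14 × 14 activations standing in for Conv5_3 and random illustrative weights; it is not the authors' code:

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Weighted sum of final-conv-layer feature maps for one target class.

    feature_maps: (K, H, W) activations; class_weights: (K,) weights linking
    each map to the class. The result is clipped to positive evidence and
    normalized to [0, 1] so it can be rendered as a heatmap over the image.
    """
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # -> (H, W)
    cam = np.maximum(cam, 0.0)
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

rng = np.random.default_rng(0)
maps = rng.random((512, 14, 14))      # hypothetical Conv5_3 activations
weights = rng.random(512)             # hypothetical class weights
heatmap = class_activation_map(maps, weights)
```

In practice the heatmap is then upsampled to the fundus-image resolution and overlaid, as in Figure 5.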

2.5. Preprocessing

Each original image had a resolution of 913 × 837 pixels with a 24-bit RGB channel. We first identified the appropriate coordinates for cropping the images to ensure that they were centered on the macula. The coordinates were then adjusted to eliminate unnecessary information, such as the black margin area. All images were cropped based on the fixed and adjusted coordinates, with the cropped images having a resolution of 500 × 500 pixels. All of the cropped images were then resized to 244 × 244 pixels as input images for the deep learning model. Preprocessed images were generated using various methods with Keras ImageDataGenerator (https://keras.io/) during training.
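The crop-and-resize steps can be sketched as follows, assuming hypothetical macula-center coordinates and nearest-neighbour resampling (the interpolation method is not specified in the text; this is illustrative, not the authors' pipeline):

```python
import numpy as np

def crop_and_resize(image, center_y, center_x, crop=500, out=244):
    """Crop a square window around (center_y, center_x), then resize.

    Assumes the center lies far enough from the borders for a full crop.
    Resizing uses nearest-neighbour sampling: each output pixel maps back
    to one source pixel of the 500x500 crop.
    """
    half = crop // 2
    cropped = image[center_y - half:center_y + half,
                    center_x - half:center_x + half]
    idx_y = np.arange(out) * crop // out
    idx_x = np.arange(out) * crop // out
    return cropped[np.ix_(idx_y, idx_x)]

# Toy 837x913 RGB image with a marker at an assumed macula center.
img = np.zeros((837, 913, 3), dtype=np.uint8)
img[418, 456] = 255
small = crop_and_resize(img, center_y=418, center_x=456)
```

The marker pixel at the crop center survives the resize, which is a quick sanity check that the image stays centered on the macula.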

2.6. Cross-Validation of Artificial Intelligence (AI)-Based Diagnosis

Cross-validation is a useful technique for evaluating the performance of deep learning models. In cross-validation, the dataset is randomly divided into training and test sets, with the training set used to build a model and the test set used to assess the performance of the model by measuring accuracy.
In k-fold cross-validation, the dataset is divided randomly into k subsets of equal size, with one used as a test set and the others for training. The cross-validation is performed k times to allow for the use of all subsets exactly once as a test set. Model performance is determined according to the average of model evaluation scores calculated across k test subsets. Here, we evaluated the performance of the proposed CNN model using 5-fold cross-validation, with performance determined according to the average accuracy of five cross-validations for each class comparison.
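The k-fold procedure above is essentially index bookkeeping. The sketch below shows one way to generate the splits for 399 samples; the shuffling and partitioning details are illustrative assumptions, not the study's exact split routine:

```python
import random

def k_fold_indices(n_samples, k=5, seed=42):
    """Yield (train_indices, test_indices) pairs; each sample is tested once."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)          # random assignment to folds
    folds = [idx[i::k] for i in range(k)]     # k near-equal-sized subsets
    for i in range(k):
        test = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test

splits = list(k_fold_indices(399, k=5))
```

Model accuracy is then computed on each test fold and averaged across the five folds, as reported in Tables 2, 3, and 5.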

2.7. Comparative Analysis of Accuracy Values of the AI Diagnosis Tool and Residents in Ophthalmology

To compare the performance of AI diagnosis with that of clinical reviewers, two residents at our hospital evaluated the fundus images used to develop the tool. Reviewer 1 was a first-year resident and reviewer 2 a fourth-year resident in ophthalmology. For 3-class classification, control, dAMD, and nAMD fundus photos were randomly displayed on a computer screen for 20 s each, and the two reviewers interpreted the fundus findings as Normal, dAMD, or nAMD. For 2-class classification, comparisons were divided into three groups (Normal vs. dAMD, Normal vs. nAMD, and dAMD vs. nAMD) under the same time constraint, with the fundus photos again randomly displayed for 20 s each. The two reviewers read the retinal findings as Normal or dAMD in the Normal–dAMD group, as Normal or nAMD in the Normal–nAMD group, and as dAMD or nAMD in the dAMD–nAMD group. Accuracy values for each reviewer were calculated and presented accordingly.

3. Results

3.1. Fundus Image Collection

Eyes (n = 142) from 126 patients were diagnosed with nAMD, and fundus images were collected. Fundus examination of eyes with nAMD can reveal one or more features, such as subretinal and/or intraretinal fluid, subretinal hemorrhage, retinal pigment epithelial detachment, and intraretinal exudate in the macular area (Figure 2). Based on the AMD categories of the Age-Related Eye Disease Study [16], drusen types corresponding to categories 2 and 3 were defined as dAMD (132 eyes from 127 patients) through fundus photography and OCT (Figure 3a,b). Furthermore, images of 125 eyes from 125 patients were collected as controls (Figure 3c,d).

3.2. Data Augmentation

We performed several iterative learning steps using Nadam optimization, and the loss value of the model was recorded for each iteration. The model with the lowest loss value recorded during training was adopted and used. The images were processed using Keras ImageDataGenerator, with the center of the macula located at the center of the image. Images were generated by randomly shifting the original up, down, left, and right, flipping it, and applying rotation and zooming, according to a previous report [17] (Figure 4).
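Each ImageDataGenerator transform named above produces a modified copy of a training image. A few of them (horizontal flip, a one-pixel width shift, and a 90° rotation standing in for the generator's small random rotations) can be illustrated directly in NumPy on a toy 4 × 4 "image"; this is a sketch of the transform types, not the Keras implementation:

```python
import numpy as np

img = np.arange(1, 17, dtype=float).reshape(4, 4)  # toy 4x4 image

flipped = img[:, ::-1]              # horizontal flip
shifted = np.zeros_like(img)        # width shift by one pixel,
shifted[:, 1:] = img[:, :-1]        # padding the vacated column with zeros
rotated = np.rot90(img)             # rotation (here a full 90 degrees)
```

In the actual pipeline, these randomized variants multiply the effective size of the training set, which helps the model generalize from only 399 images.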

3.3. Validation of the Deep-Learning-Based Diagnostic Tool

CAM visualization showed that the convolutional neural network (CNN) successfully identified areas of degeneration (Figure 5). These represent the areas of each image that were most important to the trained CNN when classifying it as AMD. In the case of dAMD, drusen, the yellow dot-like deposits characteristic of dAMD, were correctly identified. For nAMD, areas involving degeneration and bleeding were identified, with pathological changes, such as elevation, observed in the center of the macula. Accordingly, we were able to identify macular morphological changes characteristic of nAMD (Figure 5).
We achieved 90.86% accuracy with preprocessing for three-class classification. Table 1 shows a comparison of accuracy between a preprocessed (w-Pre) model and a non-preprocessed (w/o-Pre) model. The results indicated that the w-Pre model performed better in terms of accuracy, except for the comparison of the control with dAMD, and that preprocessing of fundus images improved classification. Table 2 and Table 3 show the detailed results for each fold. Therefore, we used the modified VGG16 model trained with preprocessed data.
Table 2 shows the accuracies obtained for each fold of cross-validation and their averages, with the accuracies for two-class classification higher than those for three-class classification. Table 4 shows measurements of sensitivity, specificity, positive predictive value, and negative predictive value, revealing a similar pattern of two-class classification outperforming three-class classification. Additionally, we performed the same validation using ultra-widefield images (Table 5). With ultra-widefield images, the tool showed accuracies of 0.7584 for normal vs. dAMD, 0.9099 for normal vs. nAMD, and 0.7601 for dAMD vs. nAMD in two-class classification, and of 0.7321 in three-class classification. We also evaluated the performance of medical reviewers and compared it with the outcomes of AI diagnosis (Table 6). The results showed that the AI tool outperformed both a first- and a fourth-year resident in accurately differentiating between AMD types and a control for both three-class and two-class classifications.
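The Table 4 metrics follow directly from confusion-matrix counts. The function below shows the standard definitions for a two-class comparison, applied to illustrative counts rather than the study's data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic metrics from confusion-matrix counts.

    tp/fp/fn/tn: true positives, false positives, false negatives,
    true negatives for the positive class (e.g., nAMD vs. normal).
    """
    return {
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv":         tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
    }

# Illustrative counts for a hypothetical 100-eye test fold.
m = diagnostic_metrics(tp=45, fp=3, fn=5, tn=47)
```

For the three-class model, the same formulas are applied per class in a one-vs-rest fashion, which is why Table 4 lists three rows of per-class metrics for the Control–dAMD–nAMD model.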

4. Discussion

Recent advances in deep learning techniques have increased the focus of medical specialists on potential application of AI-based diagnostic tools. Given the image-extraction features of deep learning algorithms, these techniques are potentially suitable for analyzing photographs from eye examinations. AMD is a leading cause of vision loss, and early detection is important for a good prognosis [18]. Furthermore, differential diagnosis between dAMD and nAMD is critical for suitable treatment [19]. However, based on the shortage of ophthalmologists and medical devices, early diagnosis of dAMD and nAMD is challenging. In this study, we developed a deep-learning-based diagnostic tool to detect and differentiate between dAMD and nAMD using fundus photographs. To the best of our knowledge, this represents the first development and application of an AI tool for differential diagnosis of AMD type.
Five-fold cross-validation revealed that our AI model showed high accuracy (>0.9) for both three-class and two-class classification, with accuracy comparable or superior to the diagnoses of medical reviewers (fourth- and first-year residents). Additionally, differential diagnosis using ultra-widefield images from AMD patients revealed overall accuracies lower than those obtained using conventional fundus images. Unlike conventional fundus images, the ultra-widefield images were not formal photos and included unnecessary information (eyelids, light bleeds, differing pixel dimensions, etc.), making manual preprocessing of the images necessary. These results suggested that ultra-widefield images were not appropriate for use with our deep learning tool.
Our AI tool detected features of AMD, such as drusen, bleeding, and elevation of the center of the macula. Specifically, bleeding and degeneration of the center of the macula are markers used for nAMD diagnosis [20]. Multimodal imaging (e.g., OCT) is generally necessary for accurate AMD diagnosis and prognostic prediction. Limited availability and reliability of diagnostic imaging equipment can result in poor diagnosis and prognosis, especially in low-income countries. Therefore, we developed an AI tool for AMD diagnosis that uses only conventional fundus photographs and demonstrated the efficacy of the tool for differential diagnosis between dAMD and nAMD. Our findings support this AI tool as a cost-effective methodology that addresses possible shortages of eye specialists and medical devices required for accurate AMD diagnosis.

Author Contributions

J.Y. and J.K.M. were responsible for study conception and design; T.-Y.H. and K.M.K. performed the experiments and data analyses; S.M.G., J.H.K., and H.K.M. participated in discussion of the findings. All authors have read and agreed to the published version of the manuscript.

Acknowledgments

This work was supported by a National Research Foundation of Korea (NRF) grant, funded by the Korean government (MSIT) (No. 2017R1C1B5017929), and a research grant from the Chungbuk National University in 2019 and 2020.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Levine, A.B.; Schlosser, C.; Grewal, J.; Coope, R.; Jones, S.J.M.; Yip, S. Rise of the Machines: Advances in Deep Learning for Cancer Diagnosis. Trends Cancer 2019, 5, 157–169.
  2. Coudray, N.; Ocampo, P.S.; Sakellaropoulos, T.; Narula, N.; Snuderl, M.; Fenyo, D.; Moreira, A.L.; Razavian, N.; Tsirigos, A. Classification and mutation prediction from non-small cell lung cancer histopathology images using deep learning. Nat. Med. 2018, 24, 1559–1567.
  3. Mathews, S.M.; Kambhamettu, C.; Barner, K.E. A novel application of deep learning for single-lead ECG classification. Comput. Biol. Med. 2018, 99, 53–62.
  4. Baltruschat, I.M.; Nickisch, H.; Grass, M.; Knopp, T.; Saalbach, A. Comparison of Deep Learning Approaches for Multi-Label Chest X-Ray Classification. Sci. Rep. 2019, 9, 6381.
  5. Abramoff, M.D.; Lavin, P.T.; Birch, M.; Shah, N.; Folk, J.C. Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. NPJ Digit. Med. 2018, 1, 39.
  6. Hall, A. Recognising and managing diabetic retinopathy. Commun. Eye Health 2011, 24, 5–9.
  7. Tremolada, G.; Del Turco, C.; Lattanzio, R.; Maestroni, S.; Maestroni, A.; Bandello, F.; Zerbini, G. The role of angiogenesis in the development of proliferative diabetic retinopathy: Impact of intravitreal anti-VEGF treatment. Exp. Diabetes Res. 2012, 2012, 728325.
  8. Evans, J.R. Risk factors for age-related macular degeneration. Prog. Retin. Eye Res. 2001, 20, 227–253.
  9. Javadzadeh, A.; Ghorbanihaghjo, A.; Bahreini, E.; Rashtchizadeh, N.; Argani, H.; Alizadeh, S. Plasma oxidized LDL and thiol-containing molecules in patients with exudative age-related macular degeneration. Mol. Vis. 2010, 16, 2578–2584.
  10. Wang, J.J.; Rochtchina, E.; Lee, A.J.; Chia, E.M.; Smith, W.; Cumming, R.G.; Mitchell, P. Ten-year incidence and progression of age-related maculopathy: The Blue Mountains Eye Study. Ophthalmology 2007, 114, 92–98.
  11. Cook, H.L.; Patel, P.J.; Tufail, A. Age-related macular degeneration: Diagnosis and management. Br. Med. Bull. 2008, 85, 127–149.
  12. Talks, J.; Koshy, Z.; Chatzinikolas, K. Use of optical coherence tomography, fluorescein angiography and indocyanine green angiography in a screening clinic for wet age-related macular degeneration. Br. J. Ophthalmol. 2007, 91, 600–601.
  13. Bernardes, R.; Serranho, P.; Lobo, C. Digital ocular fundus imaging: A review. Ophthalmologica 2011, 226, 161–181.
  14. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556.
  15. Lee, C.S.; Baughman, D.M.; Lee, A.Y. Deep learning is effective for the classification of OCT images of normal versus Age-related Macular Degeneration. Ophthalmol. Retina 2017, 1, 322–327.
  16. Age-Related Eye Disease Study Research Group. A randomized, placebo-controlled, clinical trial of high-dose supplementation with vitamins C and E, beta carotene, and zinc for age-related macular degeneration and vision loss: AREDS report no. 8. Arch. Ophthalmol. 2001, 119, 1417–1436.
  17. Tan, J.H.; Bhandary, S.V.; Sivaprasad, S.; Hagiwara, Y.; Bagchi, A.; Raghavendra, U.; Rao, A.K.; Raju, B.; Shetty, N.S.; Gertych, A.; et al. Age-related macular degeneration detection using deep convolutional neural network. Future Gener. Comput. Syst. 2018, 87, 127–135.
  18. Ly, A.; Yapp, M.; Nivison-Smith, L.; Assaad, N.; Hennessy, M.; Kalloniatis, M. Developing prognostic biomarkers in intermediate age-related macular degeneration: Their clinical use in predicting progression. Clin. Exp. Optom. 2018, 101, 172–181.
  19. Handa, J.T.; Bowes Rickman, C.; Dick, A.D.; Gorin, M.B.; Miller, J.W.; Toth, C.A.; Ueffing, M.; Zarbin, M.; Farrer, L.A. A systems biology approach towards understanding and treating non-neovascular age-related macular degeneration. Nat. Commun. 2019, 10, 3347.
  20. Campochiaro, P.A.; Marcus, D.M.; Awh, C.C.; Regillo, C.; Adamis, A.P.; Bantseev, V.; Chiang, Y.; Ehrlich, J.S.; Erickson, S.; Hanley, W.D.; et al. The Port Delivery System with Ranibizumab for Neovascular Age-Related Macular Degeneration: Results from the Randomized Phase 2 Ladder Clinical Trial. Ophthalmology 2019, 126, 1141–1154.
Figure 1. The proposed convolutional neural network (CNN) architecture (a modified Visual Geometry Group with 16 layers (VGG16) model). The CNN with the modified VGG16 model used 3 × 3 convolutional layers and 2 × 2 pooling layers. Convolutional layers and fully connected layers were trained with macular images.
Figure 2. Multimodal images of neovascular age-related macular degeneration in a 61-year-old man. (a) Fundus photography shows subretinal fluid, exudation, and hemorrhage; (b) Optical coherence tomography (OCT) B-scan revealed non-uniform hyper-reflective formations above the retinal pigment epithelium and the presence of intraretinal cysts and subretinal fluid; (c) Fluorescein angiography (FA) demonstrates aspects of a well-defined (white arrow) and an irregular (yellow arrow) hyper-fluorescent lesion; (d) Indocyanine green angiography (ICGA) shows staining of the type 2 choroidal neovascularization (CNV) (white arrow); (e) An OCT angiography image (with the neovascular network) overlaid on the ICGA image.
Figure 3. Fundus photography and optical coherence tomography of dry age-related macular degeneration (dAMD) and control retinas. (a) Numerous soft, yellow drusen in the right eye of a 78-year-old woman; (b) The corresponding OCT image shows multiple deposits accumulating under the retinal pigment epithelium. (c) Normal control fundus photography in the right eye of a 66-year-old man. (d) The corresponding OCT image of the control.
Figure 4. Image preprocessing. Eye images were preprocessed by Keras ImageDataGenerator. Original images were cropped and resized to 244 × 244 pixels. The training dataset images were generated using various methods, including width shift, height shift, rotation, zoom, horizontal flip, and vertical flip.
Figure 5. Examples of class activation map (CAM) visualization. CAM visualization of normal, dry age-related macular degeneration (dAMD), and neovascular age-related macular degeneration (nAMD) retinas. CAM extracts the feature map of the last convolution layer (Conv5_3) and shows a heatmap within the image describing the calculated weight of the feature map. (a) dAMD fundus images show drusen (arrow), and (d) heatmap images show drusen identified by the artificial intelligence (AI) tool; (b) Normal fundus images have no drusen, and (e) heatmap images of normal controls show that the AI tool identified the contour of the fovea according to the absence of drusen; (c) nAMD fundus images show bleeding and degenerated areas (green arrows), and (f) heatmap images show identified drusen and other features of degeneration and bleeding; (g–i) Representative images of dAMD, a normal control, and nAMD, respectively; (l) Heatmap images of nAMD show that the AI tool identified pathological changes in the macula, such as elevation of the center; (j) There was no heatmap at the center of dAMD; however, the AI tool detected drusen instead; (k) Heatmap image showing AI identification of the center of the macula in a control, with no degenerated area.
Table 1. Comparison of outcomes according to preprocessing.

| Average Accuracy | Control–dAMD–nAMD (3-Class) | Control–dAMD (2-Class) | Control–nAMD (2-Class) | dAMD–nAMD (2-Class) |
|---|---|---|---|---|
| w-Pre | 0.9086 | 0.9192 | 0.9813 | 0.9132 |
| w/o-Pre | 0.8559 | 0.9264 | 0.9808 | 0.9063 |

Data represent calculated accuracy values.
Table 2. Results obtained using five-fold cross-validation with preprocessing.

| Folds | Normal–dAMD–nAMD (3-Class) | Normal–dAMD (2-Class) | Normal–nAMD (2-Class) | dAMD–nAMD (2-Class) |
|---|---|---|---|---|
| Fold 1 | 0.9756 | 0.8846 | 1.0000 | 0.9231 |
| Fold 2 | 0.8864 | 1.0000 | 1.0000 | 0.8929 |
| Fold 3 | 0.9535 | 0.9259 | 1.0000 | 1.0000 |
| Fold 4 | 0.9318 | 0.9286 | 1.0000 | 0.9643 |
| Fold 5 | 0.7955 | 0.8571 | 0.9063 | 0.7857 |
| Average | 0.9086 | 0.9192 | 0.9813 | 0.9132 |

Data represent calculated accuracy values.
Table 3. Results obtained using five-fold cross-validation without preprocessing.

| Folds | Normal–dAMD–nAMD (3-Class) | Normal–dAMD (2-Class) | Normal–nAMD (2-Class) | dAMD–nAMD (2-Class) |
|---|---|---|---|---|
| Fold 1 | 0.8049 | 0.8846 | 0.9667 | 0.9615 |
| Fold 2 | 0.8409 | 0.9286 | 1.0000 | 0.8571 |
| Fold 3 | 0.8837 | 0.9259 | 1.0000 | 0.9626 |
| Fold 4 | 0.8864 | 0.9643 | 1.0000 | 0.8214 |
| Fold 5 | 0.8636 | 0.9286 | 0.9375 | 0.9286 |
| Average | 0.8559 | 0.9264 | 0.9808 | 0.9063 |

Data represent calculated accuracy values.
Table 4. Average classification results for each model.

| Model | Accuracy | Sensitivity | Specificity | PPV | NPV |
|---|---|---|---|---|---|
| 3-class: Control–dAMD–nAMD | 0.9086 | 0.9046 | 1.0000 | 1.0000 | 0.9349 |
| | | 0.8605 | 0.9394 | 0.8303 | 0.9500 |
| | | 0.9571 | 0.9329 | 0.8750 | 0.9786 |
| 2-class: Control–dAMD | 0.9192 | 0.9252 | 0.9167 | 0.8788 | 0.9492 |
| 2-class: Control–nAMD | 0.9813 | 0.9684 | 1.0000 | 1.0000 | 0.9625 |
| 2-class: dAMD–nAMD | 0.9132 | 0.8795 | 0.9448 | 0.9318 | 0.8992 |

Data represent calculated accuracy values; the three rows of the 3-class model give per-class sensitivity, specificity, PPV, and NPV. PPV, positive predictive value; NPV, negative predictive value.
Table 5. Results obtained using five-fold cross-validation and ultra-widefield images.

| Folds | Normal–dAMD–nAMD (3-Class) | Normal–dAMD (2-Class) | Normal–nAMD (2-Class) | dAMD–nAMD (2-Class) |
|---|---|---|---|---|
| Fold 1 | 0.7885 | 0.6071 | 0.8636 | 0.8636 |
| Fold 2 | 0.7885 | 0.7143 | 0.8182 | 0.7727 |
| Fold 3 | 0.6481 | 0.8571 | 0.9130 | 0.6957 |
| Fold 4 | 0.7500 | 0.7857 | 0.9545 | 0.7727 |
| Fold 5 | 0.6852 | 0.8276 | 1.0000 | 0.6957 |
| Average | 0.7321 | 0.7584 | 0.9099 | 0.7601 |

Data represent calculated accuracy values.
Table 6. Comparison of differential diagnosis of AMD type between first- and fourth-year residents.

| Folds | 3-Class (R1) | 3-Class (R2) | Normal–dAMD (R1) | Normal–dAMD (R2) | Normal–nAMD (R1) | Normal–nAMD (R2) | dAMD–nAMD (R1) | dAMD–nAMD (R2) |
|---|---|---|---|---|---|---|---|---|
| Fold 1 | 0.7317 | 0.9024 | 0.9615 | 0.9231 | 0.9667 | 1.0000 | 0.6923 | 0.9231 |
| Fold 2 | 0.7045 | 0.9091 | 0.9643 | 0.8929 | 0.9375 | 0.9062 | 0.8519 | 0.9259 |
| Fold 3 | 0.6977 | 0.8140 | 0.8519 | 0.9259 | 0.9375 | 0.8750 | 0.8148 | 0.9630 |
| Fold 4 | 0.7955 | 0.7273 | 0.9643 | 0.9286 | 0.9062 | 0.9688 | 0.7500 | 0.9286 |
| Fold 5 | 0.7143 | 0.8000 | 0.7500 | 0.9643 | 0.8750 | 0.9062 | 0.7143 | 0.7143 |
| Average | 0.7287 | 0.8306 | 0.8984 | 0.9270 | 0.9246 | 0.9312 | 0.7647 | 0.8910 |

Reviewers 1 (R1) and 2 (R2) represent a first- and fourth-year resident, respectively. Data represent calculated accuracy values. The 3-class columns are Normal–dAMD–nAMD; the remaining columns are 2-class comparisons.

Citation: Heo, T.-Y.; Kim, K.M.; Min, H.K.; Gu, S.M.; Kim, J.H.; Yun, J.; Min, J.K. Development of a Deep-Learning-Based Artificial Intelligence Tool for Differential Diagnosis between Dry and Neovascular Age-Related Macular Degeneration. Diagnostics 2020, 10, 261. https://doi.org/10.3390/diagnostics10050261