Article

Predicting Keratoconus Progression and Need for Corneal Crosslinking Using Deep Learning

1 Department of Ophthalmology, School of Medicine, Keio University, Tokyo 160-8582, Japan
2 Department of Technology and Design Thinking for Medicine, Graduate School of Biomedical Sciences, Hiroshima University, Hiroshima 734-8551, Japan
3 Tsubota Laboratory, Inc., Tokyo 160-0016, Japan
* Author to whom correspondence should be addressed.
J. Clin. Med. 2021, 10(4), 844; https://doi.org/10.3390/jcm10040844
Submission received: 17 January 2021 / Revised: 7 February 2021 / Accepted: 14 February 2021 / Published: 18 February 2021
(This article belongs to the Special Issue Innovations in Keratoconus Diagnosis and Management)

Abstract

We aimed to predict keratoconus progression and the need for corneal crosslinking (CXL) using deep learning (DL). Two hundred and seventy-four corneal tomography images taken with the Pentacam HR® (Oculus, Wetzlar, Germany) from 158 keratoconus patients were examined. All patients were examined two times or more and divided into two groups: the progression group and the non-progression group. An axial map of the frontal corneal plane, a pachymetry map, and a combination of these two maps obtained at the initial examination were assessed together with the patients’ age. A convolutional neural network was trained on these data. Ninety eyes showed progression and 184 eyes showed no progression. The axial map, the pachymetry map, and their combination, each combined with patients’ age, showed mean AUC values of 0.783, 0.784, and 0.814 (95% confidence intervals: 0.721–0.845, 0.722–0.846, and 0.755–0.872), with sensitivities of 87.8%, 77.8%, and 77.8% (79.2–93.7, 67.8–85.9, and 67.8–85.9) and specificities of 59.8%, 65.8%, and 69.6% (52.3–66.9, 58.4–72.6, and 62.4–76.1), respectively. Using the proposed DL neural network model, keratoconus progression can be predicted from corneal tomography maps combined with patients’ age.

1. Introduction

The first human study of corneal crosslinking (CXL) to halt the progression of keratoconus/keratectasia, a condition thought at the time to be incurable, was reported by Wollensak et al. [1] in 2003. Patients with this condition sometimes had to endure pain when wearing contact lenses, with the sudden occurrence of acute hydrops as an additional complication. In some cases, keratoplasty has become necessary as the disease progressed.
The primary purpose of CXL is to halt the progression of keratoconus. The best candidate for CXL is an individual with keratoconus or post-refractive surgery ectasia who has recently shown disease progression. However, there are at present no definitive criteria for predicting keratoconus progression. The parameters that must be considered are changes in refraction (including astigmatism), uncorrected and best spectacle-corrected visual acuities, and corneal shape and thickness (according to corneal topography or tomography) [2,3,4,5,6].
Widely accepted indications for CXL include an increase of 1.00 diopter (D) or more in the steepest keratometry measurement, an increase of 1.00 D or more in the manifest cylinder, and an increase of 0.50 D or more in the manifest refraction spherical equivalent within 12 months [7]. It may take several months to years to determine whether a patient meets the clinical criteria for CXL. In some patients, however, the disease may worsen rapidly during the follow-up period, even while they await CXL [8]. Therefore, a method for predicting progression and the need for CXL in keratoconus cases at the first examination is required.
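To make these thresholds concrete, the short sketch below encodes them as a simple rule-based check. The function name, parameter names, and the use of an absolute change for the spherical equivalent over a 12-month interval are our own illustrative assumptions, not part of the study's method.

```python
# Illustrative only: rule-based check of the widely accepted 12-month CXL
# indications cited above [7]. Names and sign conventions are hypothetical.
def meets_cxl_indication(delta_kmax_d: float,
                         delta_cylinder_d: float,
                         delta_mrse_d: float) -> bool:
    """Return True if any commonly cited 12-month progression criterion is met.

    delta_kmax_d     -- change in steepest keratometry over 12 months (D)
    delta_cylinder_d -- change in manifest cylinder over 12 months (D)
    delta_mrse_d     -- change in manifest refraction spherical equivalent (D)
    """
    return (delta_kmax_d >= 1.00
            or delta_cylinder_d >= 1.00
            or abs(delta_mrse_d) >= 0.50)
```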
Artificial intelligence (AI) has been called the fourth industrial revolution in mankind’s history, and deep learning (DL) is a class of state-of-the-art machine learning techniques that has sparked tremendous global interest in recent years [9]. In ophthalmology, DL methods have been developed for the diagnosis of diabetic retinopathy, glaucoma, age-related macular degeneration, and retinopathy of prematurity using fundus photographs and/or optical coherence tomography (OCT) [10,11,12,13,14,15,16]. For corneal diseases, DL can predict the likelihood of the need for future keratoplasty treatment [17]. Recently, DL has been used for the detection and staging of keratoconus [18,19,20]; however, the ability of DL to predict progression, namely to support the decision on the indication for CXL, has not been reported.
In the present work, we aimed to determine the need for CXL to halt keratoconus progression using DL. To our knowledge, this is the first attempt to determine the indication for CXL using DL.

2. Materials and Methods

This study followed the ethical standards of the Declaration of Helsinki and the study protocol was approved by the Institutional Review Board of the Keio University School of Medicine.
We retrospectively analyzed, using DL, the axial and pachymetry maps combined with the patients’ age obtained at each patient’s initial visit. Two hundred and seventy-four eyes of 158 patients with keratoconus (112 males and 46 females; mean age, 27.8 ± 11.7 years), who visited the Department of Ophthalmology, Keio University School of Medicine from January 2009 to August 2018 at least twice, were included in the present study and retrospectively examined (Supplementary Material Table S1). Tomography images of these eyes were taken with the Pentacam HR® instrument (Oculus, Wetzlar, Germany) by trained certified ophthalmic technicians at the first visit (Figure S1). Keratoconus was diagnosed based on corneal tomography, i.e., ectasia screening using the CASIA® device (Tomey, Aichi, Japan) and/or topographic keratoconus classification using the Pentacam HR instrument. Eyes with pellucid marginal degeneration, keratectasia after laser refractive corneal surgery, previous acute hydrops, or other ocular surface diseases were excluded. The patients were then examined two or more times at varying intervals. The mean period between the initial and final examinations was 2.60 ± 2.09 years (range, 6 weeks to 8.6 years).
CXL treatment was applied to eyes with recently active keratoconus that showed significant progression based on the aforementioned criteria, as judged by corneal specialists. Eyes that underwent CXL were assigned to the progression group; eyes that did not undergo CXL were placed in the non-progression group (Figure 1).
We created an AI model to predict keratoconus progression from an axial map (Axial), a pachymetry map (Pachy), and their side-by-side combination (Both), taken at the first visit using a Pentacam HR instrument; the assessments also incorporated the patients’ age (Figure 2).
The K-fold (K = 5) cross-validation method [21,22] was used in this study. The original sample was randomly partitioned into K subsamples. K − 1 subsamples were used as training data after data augmentation, and the remaining subsample was retained as validation data for testing the model. The cross-validation process was repeated K times, with each subsample used exactly once as the validation data. All images were resized to 224 × 224 pixels.
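As a concrete illustration, the sketch below sets up the five-fold split and the resizing step. It assumes the images are already loaded as a NumPy array; scikit-learn's KFold is used here for convenience and is not necessarily the splitting utility used by the authors, and the data augmentation applied to the training folds is omitted.

```python
# Minimal sketch of 5-fold cross-validation and image resizing (assumptions:
# images as a NumPy array of shape (N, H, W, 3); augmentation omitted).
import numpy as np
import tensorflow as tf
from sklearn.model_selection import KFold

def five_fold_splits(n_samples, k=5, seed=0):
    """Yield (train_idx, val_idx) index pairs over K randomly partitioned subsamples."""
    kf = KFold(n_splits=k, shuffle=True, random_state=seed)
    yield from kf.split(np.arange(n_samples))

def resize_batch(images):
    """Resize corneal maps to the 224 x 224 input size expected by VGG-16."""
    return tf.image.resize(images, (224, 224)).numpy()
```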
The deep neural network model was constructed based on the Visual Geometry Group-16 (VGG-16) network [23,24,25]. The five blocks of the VGG-16, each consisting of convolutional layers with rectified linear unit activation and a max pooling layer [26,27,28], were used in this neural network. The weights of blocks 1–4 were taken from a model pretrained on ImageNet. This method, called fine-tuning, has been used in various studies [29].
After the five convolutional blocks, a global average pooling layer is applied, so that spatial information is removed from the extracted features. After the global average pooling layer, we combined the standardized age information. The ratio of the amount of age information to the amount of image information is referred to here as the parameter ratio, described below. The extracted features were then compressed by passing through fully connected layers. The last fully connected layer, with a Softmax activation function, evaluated the probability of each class (i.e., the progression group and the non-progression group). The number of units in the hidden layer (n_dim) is described below.
We used momentum stochastic gradient descent (momentum term = 0.9) [30,31] as the optimizer. The learning rate of the optimizer is described below (Figure 3).
The images were compressed by the five blocks of the VGG-16 network and a global average pooling layer. Afterwards, standardized age information was combined according to the “parameter ratio”. The extracted features were then compressed by passing through fully connected layers, and the last fully connected layer, with a Softmax activation function, evaluated the probability of each class (i.e., the progression group and the non-progression group). VGG-16: Visual Geometry Group-16; n_dim: the number of units in the hidden layer.
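The sketch below is one possible tf.keras realization of this architecture, under stated assumptions: it reads “fine-tuning” as keeping the ImageNet weights of blocks 1–4 fixed while retraining block 5, and it interprets the “parameter ratio” as the width of the age branch relative to the image-feature width. These interpretations, and all variable names, are illustrative rather than the authors' exact implementation.

```python
# Sketch of the VGG-16-based model with an age input (tf.keras). Assumptions:
# blocks 1-4 frozen with ImageNet weights, block 5 retrained; the scalar age is
# expanded to a vector whose width is parameter_ratio x the image-feature width.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

def build_model(n_dim=128, parameter_ratio=0.5, learning_rate=1e-3):
    base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    for layer in base.layers:
        layer.trainable = layer.name.startswith("block5")   # fine-tune block 5 only

    x = layers.GlobalAveragePooling2D()(base.output)         # removes spatial information

    age_in = layers.Input(shape=(1,), name="standardized_age")
    age = layers.Dense(int(parameter_ratio * x.shape[-1]), activation="relu")(age_in)

    h = layers.Concatenate()([x, age])
    h = layers.Dense(n_dim, activation="relu")(h)             # hidden fully connected layer
    out = layers.Dense(2, activation="softmax")(h)            # progression vs. non-progression

    model = Model(inputs=[base.input, age_in], outputs=out)
    model.compile(
        optimizer=tf.keras.optimizers.SGD(learning_rate=learning_rate, momentum=0.9),
        loss="categorical_crossentropy",
        metrics=[tf.keras.metrics.AUC(name="auc")],
    )
    return model
```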
The parameter ratio, n_dim, and learning rate were chosen from a uniform distribution from 0.2 to 0.8, an exponential distribution from 2^6 to 2^8, and a logarithmic distribution from 10^−4 to 10^−2, respectively. The performance of our approach was evaluated using the k-fold cross-validation method 10 times. The parameters yielding the highest area under the curve (AUC) were used. The prediction model was developed and trained using Python TensorFlow (https://www.tensorflow.org/ (accessed on 15 February 2021)). We used Optuna (https://optuna.readthedocs.io/en/stable/index.html (accessed on 15 February 2021)) for hyperparameter tuning. The training and analysis codes are provided in Dataset S1.
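A minimal Optuna search over these ranges might look like the sketch below; the objective simply returns the mean cross-validated AUC for a trial's parameters. The helper cross_validated_auc is hypothetical, standing in for a training loop built from the pieces above.

```python
# Hyperparameter search sketch with Optuna, following the distributions above.
# cross_validated_auc is a hypothetical helper that trains the model with 5-fold
# cross-validation and returns the mean validation AUC.
import optuna

def objective(trial):
    parameter_ratio = trial.suggest_float("parameter_ratio", 0.2, 0.8)
    n_dim = trial.suggest_int("n_dim", 2**6, 2**8, log=True)          # 64 to 256
    learning_rate = trial.suggest_float("learning_rate", 1e-4, 1e-2, log=True)
    return cross_validated_auc(parameter_ratio, n_dim, learning_rate)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=10)
print(study.best_params)
```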
The performance metrics were the AUC, sensitivity, and specificity. The receiver operating characteristic (ROC) curve and the AUC were calculated using the neural network’s output as the probability that a given image belonged to the progression group, together with the actual progression information. Using the Youden index [32] on the ROC curve, we defined the optimal cutoff value and the corresponding sensitivity and specificity.
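The sketch below shows this evaluation with scikit-learn: the ROC curve and AUC are computed from the predicted progression probabilities, and the cutoff maximizing the Youden index (sensitivity + specificity − 1) is selected. Variable names are illustrative.

```python
# ROC/AUC evaluation and Youden-index cutoff selection (scikit-learn).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def evaluate(y_true, y_prob):
    """y_prob: predicted probability that an eye belongs to the progression group."""
    fpr, tpr, thresholds = roc_curve(y_true, y_prob)
    auc = roc_auc_score(y_true, y_prob)
    best = np.argmax(tpr - fpr)            # Youden index J = sensitivity + specificity - 1
    return auc, thresholds[best], tpr[best], 1.0 - fpr[best]  # AUC, cutoff, sens., spec.
```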
We compared patient age between groups with Welch’s t-test and the male–female ratio with Fisher’s exact test; p < 0.05 was considered statistically significant. Statistical analysis was performed using Python SciPy (https://www.scipy.org/ (accessed on 15 February 2021)) and Python Statsmodels (http://www.statsmodels.org/stable/index.html (accessed on 15 February 2021)).
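These two comparisons can be reproduced with SciPy as follows; the group arrays and counts are placeholders.

```python
# Group comparisons described above: Welch's t-test for age and Fisher's exact
# test for the male-female ratio (SciPy). Inputs are placeholders.
from scipy import stats

def compare_groups(ages_prog, ages_nonprog, females_prog, males_prog,
                   females_nonprog, males_nonprog):
    _, p_age = stats.ttest_ind(ages_prog, ages_nonprog, equal_var=False)  # Welch's t-test
    _, p_sex = stats.fisher_exact([[females_prog, males_prog],
                                   [females_nonprog, males_nonprog]])
    return p_age, p_sex
```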
For the AUC analysis, the AUC was assumed to be normally distributed and the 95% confidence interval (CI) was calculated with the following formula:
$$95\%\ \mathrm{CI} = \mathrm{AUC} \pm Z(0.975)\,\mathrm{SE}(\mathrm{AUC})$$
$$Z(x) = \frac{1}{\sqrt{2\pi}} \exp\!\left(-\frac{x^{2}}{2}\right)$$
$$\mathrm{SE}(\mathrm{AUC}) = \sqrt{\frac{\mathrm{AUC}(1-\mathrm{AUC}) + (n_{p}-1)(Q_{1}-\mathrm{AUC}^{2}) + (n_{N}-1)(Q_{2}-\mathrm{AUC}^{2})}{n_{p}\,n_{N}}}$$
$$Q_{1} = \frac{\mathrm{AUC}}{2-\mathrm{AUC}}, \qquad Q_{2} = \frac{2\,\mathrm{AUC}^{2}}{1+\mathrm{AUC}}$$
where $n_{p}$ and $n_{N}$ denote the numbers of eyes in the progression and non-progression groups, respectively.
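Written out as code, the interval computation looks like the sketch below; with the “Both” model’s AUC of 0.814 and group sizes of 90 and 184 eyes it reproduces the interval reported in Table 2. The value 1.96 is used here as the two-sided 95% normal critical value.

```python
# 95% CI for the AUC via the standard-error formula above (Hanley-McNeil form).
import math

def auc_confidence_interval(auc, n_p, n_n, z=1.96):
    q1 = auc / (2.0 - auc)
    q2 = 2.0 * auc ** 2 / (1.0 + auc)
    se = math.sqrt((auc * (1.0 - auc)
                    + (n_p - 1) * (q1 - auc ** 2)
                    + (n_n - 1) * (q2 - auc ** 2)) / (n_p * n_n))
    return auc - z * se, auc + z * se

# Example: the "Both" model with 90 progression and 184 non-progression eyes.
print(auc_confidence_interval(0.814, 90, 184))  # approximately (0.755, 0.872)
```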

3. Results

3.1. Background

Ninety eyes showed progression and were included in the progression group; the other 184 eyes did not show progression and were placed in the non-progression group. The background information of both groups is listed in Table 1.

3.2. Evaluation of Keratoconus Progression

The predictive performance of keratoconus progression is shown in Table 2 and Figure 4.

4. Discussion

In this study, we developed a new method to predict the progression of keratoconus using DL via an AI platform. When the possibility for keratoconus progression was combined with patients’ age, the AUC values were 0.783 (0.721–0.845) with the axial map, 0.784 (0.722–0.846) with the corneal pachymetry map, and 0.814 (0.755–0.872) using both maps.
The enrolled patients were significantly younger in the progression group than in the non-progression group. Age is an inevitable factor in keratoconus progression, partly because keratoconus is a disorder whose progression depends on the patient’s age; progression tends to slow during middle age, whereas young-onset keratoconus has been shown to progress much faster [33,34]. We previously investigated keratoconus patients who were followed up twice or more after the initial visit and found that the patients’ age was the factor most relevant to keratoconus progression, followed by Rmin (the minimum sagittal curvature evaluated by Pentacam HR) of the corneal frontal plane [35]. The difference in age between the progression and non-progression groups was therefore unavoidable.
Considering that a young age is relevant to keratoconus progression, we applied patients’ age to corneal tomography data to predict progression using our DL approach. We used three types of maps: an axial map of the corneal frontal plane, a pachymetry map, and a combination of the two; the three map types showed similar AUC, sensitivity, and specificity values. This demonstrates the clinical versatility of DL for predicting the progression of keratoconus, as the axial map of the corneal frontal plane is usually displayed by every corneal topography/tomography device.
An AUC value of 0.78–0.81 is not a perfect indicator for progression prediction. For keratoconus specialists, who empirically determine the indication for CXL by considering the clinical stage of keratoconus and the patient’s age, this diagnostic performance may not be sufficient; however, it could serve as an indicator for non-specialists, such as family practitioners, general ophthalmologists/optometrists, or specialists in other fields of ophthalmology, to help them decide whether patients should be referred to corneal specialists trained in CXL.
In the present investigation, the corneal tomography images were resized to 224 × 224 pixels. This degree of compression is likely adequate for the analysis of corneal tomography/topography, because the task requires analysis of relatively large areas of the image, unlike the detection of tiny hemorrhages in the assessment of diabetic retinopathy. This property accelerates computation and allows the method to be applied on devices with lower specifications, which could be a considerable advantage for clinical use.
To the best of our knowledge, this is the first study to propose predicting keratoconus progression using DL from corneal topographic data obtained at the patient’s first visit; it has some limitations. These include the relatively small number of participants and the variation in the follow-up period used to determine the indication for CXL (from 6 weeks to 8.6 years). Reassessment of all cases followed up for more than 2 years may provide a more accurate representation.
We excluded eyes with pellucid marginal degeneration from the present study. In this condition, protrusion and thinning of the inferior cornea occur after the third or fourth decade of life and continue to progress even after middle age. This was done partly because we considered this condition to differ from typical keratoconus, and partly because the number of patients with pellucid marginal degeneration was too small (less than 10% of the total). The mechanisms underlying the delayed onset and progression of protrusion in eyes with pellucid marginal degeneration have not been clarified; DL applied to a large number of keratoconus cases may help elucidate these questions.

5. Conclusions

We attempted to predict, using DL, the exacerbation of keratoconus that would subsequently require CXL, and showed that the axial map or the pachymetry map, combined with the patients’ age, is a useful indicator of the need for CXL, with an AUC of approximately 0.8. The prediction of keratoconus progression using AI-based DL is expected to help ophthalmologists/optometrists, especially non-specialists, decide when to refer patients to corneal specialists for CXL treatment.

Supplementary Materials

The following are available online at https://www.mdpi.com/2077-0383/10/4/844/s1, Figure S1: Axial map of the frontal plane and pachymetry map of Pentacam HR, Table S1: Anonymous list of enrolled cases.

Author Contributions

Conceptualization, N.K., H.M., and H.T. (Hitoshi Tabuchi); methodology, N.K., H.M., M.T., and H.T. (Hitoshi Tabuchi); software, H.M., M.T., and H.T. (Hitoshi Tabuchi); validation, N.K., H.M., M.T., C.S., and H.T. (Hitoshi Tabuchi); formal analysis, H.M. and M.T.; investigation, N.K., H.M., M.T., and H.T. (Hitoshi Tabuchi); resources, K.N. and H.T. (Hitoshi Tabuchi); data curation, N.K., M.T., C.S., and H.T. (Hidemasa Torii); writing—original draft preparation, N.K. and H.M.; writing—review and editing, M.T., C.S., and H.T. (Hidemasa Torii); visualization, N.K., H.M., and M.T.; supervision, K.N., H.T. (Hidemasa Torii), and K.T.; project administration, K.N. and K.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board of Keio University School of Medicine (protocol code 20190222; date of approval, 2 March 2020).

Informed Consent Statement

Patient consent was waived owing to the retrospective study design; an opt-out procedure was used instead.

Data Availability Statement

The data presented in this study are available in Supplementary Material.

Acknowledgments

The authors wish to thank Sachiko Masui at the Department of Ophthalmology, Keio University School of Medicine, for her help with data collection and storage.

Conflicts of Interest

The authors declare no conflict of interest. Outside the submitted work, Kazuo Tsubota reports his position as CEO of Tsubota Laboratory, Inc., Tokyo, Japan, a company producing a keratoconus treatment-related device. Tsubota Laboratory, Inc. provided support in the form of salary for KT, but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. This does not alter our adherence to Journal of Clinical Medicine’s policies on sharing data and materials.

References

  1. Wollensak, G.; Spoerl, E.; Seiler, T. Riboflavin/ultraviolet-a–induced collagen crosslinking for the treatment of keratoconus. Am. J. Ophthalmol. 2003, 135, 620–627. [Google Scholar] [CrossRef]
  2. Khattak, A.; Nakhli, F.R.; Cheema, H.R. Corneal collagen crosslinking for progressive keratoconus in Saudi Arabia: One-year controlled clinical trial analysis. Saudi J. Ophthalmol. 2015, 29, 249–254. [Google Scholar] [CrossRef] [Green Version]
  3. Alhayek, A.; Lu, P.-R. Corneal collagen crosslinking in keratoconus and other eye disease. Int. J. Ophthalmol. 2015, 8, 407–418. [Google Scholar]
  4. Asri, D.; Touboul, D.; Fournié, P.; Malet, F.; Garra, C.; Gallois, A.; Malecaze, F.; Colin, J. Corneal collagen crosslinking in progressive keratoconus: Multicenter results from the French National Reference Center for Keratoconus. J. Cataract. Refract. Surg. 2011, 37, 2137–2143. [Google Scholar] [CrossRef] [PubMed]
  5. Wittig-Silva, C.; Whiting, M.; Lamoureux, E.; Sullivan, L.J.; Lindsay, R.G.; Snibson, G.R. A Randomized Controlled Trial of Corneal Collagen Cross-linking in Progressive Keratoconus: Preliminary Results. J. Refract. Surg. 2008, 24, S720–S725. [Google Scholar] [CrossRef]
  6. Hersh, P.S.; Stulting, R.D.; Muller, D.; Durrie, D.S.; Rajpal, R.K.; Binder, P.S.; Donnenfeld, E.D.; Hardten, D.; Price, F.; Schanzlin, D.; et al. United States Multicenter Clinical Trial of Corneal Collagen Crosslinking for Keratoconus Treatment. Ophthalmology 2017, 124, 1259–1270. [Google Scholar] [CrossRef]
  7. Wisse, R.P.L.; Simons, R.W.P.; Van der Vossen, M.J.B.; Muijzer, M.B.; Soeters, N.; Nuijts, R.M.M.A.; Godefrooij, D.A. Clinical evaluation and validation of the Dutch Crosslinking for Keratoconus score. JAMA Ophthalmol. 2019, 137, 610–616. [Google Scholar] [CrossRef]
  8. Romano, V.; Vinciguerra, R.; Arbabi, E.M.; Hicks, N.; Rosetta, P.; Vinciguerra, P.; Kaye, S.B. Progression of Keratoconus in Patients While Awaiting Corneal Cross-linking: A Prospective Clinical Study. J. Refract. Surg. 2018, 34, 177–180. [Google Scholar] [CrossRef] [Green Version]
  9. Ting, D.S.W.; Pasquale, L.R.; Peng, L.; Campbell, J.P.; Lee, A.Y.; Raman, R.; Tan, G.S.W.; Schmetterer, L.; A Keane, P.; Wong, T.Y. Artificial intelligence and deep learning in ophthalmology. Br. J. Ophthalmol. 2018, 103, 167–175. [Google Scholar] [CrossRef] [Green Version]
  10. Du, X.-L.; Li, W.-B.; Hu, B.-J. Application of artificial intelligence in ophthalmology. Int. J. Ophthalmol. 2018, 11, 1555–1561. [Google Scholar] [CrossRef] [PubMed]
  11. Xu, J.; Xue, K.; Zhang, K. Current status and future trends of clinical diagnoses via image-based deep learning. Theranostics 2019, 9, 7556–7565. [Google Scholar] [CrossRef]
  12. Tong, Y.; Lu, W.; Yu, Y.; Shen, Y. Application of machine learning in ophthalmic imaging modalities. Eye Vis. 2020, 7, 1–15. [Google Scholar] [CrossRef] [Green Version]
  13. Lim, G.; Bellemo, V.; Xie, Y.; Lee, X.Q.; Yip, M.Y.T.; Ting, D.S.W. Different fundus imaging modalities and technical factors in AI screening for diabetic retinopathy: A review. Eye Vis. 2020, 7, 1–13. [Google Scholar] [CrossRef] [Green Version]
  14. Devalla, S.K.; Liang, Z.; Pham, T.H.; Boote, C.; Strouthidis, N.G.; Thiery, A.H.; A Girard, M.J. Glaucoma management in the era of artificial intelligence. Br. J. Ophthalmol. 2019, 104, 301–311. [Google Scholar] [CrossRef]
  15. Grzybowski, A.; Brona, P.; Lim, G.; Ruamviboonsuk, P.; Tan, G.S.W.; Abramoff, M.; Ting, D.S.W. Artificial intelligence for diabetic retinopathy screening: A review. Eye 2020, 34, 451–460. [Google Scholar] [CrossRef]
  16. Wong, T.Y.; Sabanayagam, C. Strategies to Tackle the Global Burden of Diabetic Retinopathy: From Epidemiology to Artificial Intelligence. Ophthalmology 2019, 243, 9–20. [Google Scholar] [CrossRef]
  17. Yousefi, S.; Takahashi, H.; Hayashi, T.; Tampo, H.; Inoda, S.; Arai, Y.; Tabuchi, H.; Asbell, P. Predicting the likelihood of need for future keratoplasty intervention using artificial intelligence. Ocul. Surf. 2020, 18, 320–325. [Google Scholar] [CrossRef]
  18. Yousefi, S.; Yousefi, E.; Takahashi, H.; Hayashi, T.; Tampo, H.; Inoda, S.; Arai, Y.; Asbell, P. Keratoconus severity identification using unsupervised machine learning. PLoS ONE 2018, 13, e0205998. [Google Scholar] [CrossRef]
  19. Klyce, S.D. The Future of Keratoconus Screening with Artificial Intelligence. Ophthalmology 2018, 125, 1872–1873. [Google Scholar] [CrossRef] [Green Version]
  20. Kamiya, K.; Ayatsuka, Y.; Kato, Y.; Fujimura, F.; Takahashi, M.; Shoji, N.; Mori, Y.; Miyata, K. Keratoconus detection using deep learning of colour-coded maps with anterior segment optical coherence tomography: A diagnostic accuracy study. BMJ Open 2019, 9, e031313. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  21. Mosteller, F.; Tukey, J.W. Data analysis, including statistics. In Handbook of Social Psychology: Vol. 2. Research Methods; Lindzey, G., Aronson, E., Eds.; Addison-Wesley Pub. Co: Boston, MA, USA, 1968; pp. 80–203. [Google Scholar]
  22. Kohavi, R. A study of cross-validation and bootstrap for accuracy estimation and model selection. In Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, IJCAI 95, Montréal, QC, Canada, 20–25 August 1995; pp. 1137–1145. [Google Scholar]
  23. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  24. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Li, F.F. Imagenet: A Large-Scale Hierarchical Image Database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
  25. Lee, C.Y.; Xie, S.; Gallagher, P.; Zhang, Z.; Tu, Z. Deeply-supervised nets. In Proceedings of the 18th International Conference on Artificial Intelligence and Statistics, San Diego, CA, USA, 9–12 May 2015; Volume 38, pp. 562–570. [Google Scholar]
  26. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015, 115, 211–252. [Google Scholar] [CrossRef] [Green Version]
  27. Glorot, X.; Bordes, A.; Bengio, Y. Deep sparse rectifier neural networks. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics, Lauderdale, FL, USA, 11–13 April 2011; Volume 15, pp. 315–323. [Google Scholar]
  28. Scherer, D.; Müller, A.; Behnke, S. Evaluation of Pooling Operations in Convolutional Architectures for Object Recognition. In Proceedings of the Artificial Neural Networks—ICANN 2010—20th International Conference, Thessaloniki, Greece, 15–18 September 2010; pp. 92–101. [Google Scholar]
  29. Agrawal, P.; Girshick, R.; Malik, J. Analyzing the Performance of Multilayer Neural Networks for Object Recognition. In Proceedings of the Constructive Side-Channel Analysis and Secure Design, Paris, France, 13–15 April 2014; Springer International Publishing: Graz, Austria, 2014; pp. 329–344. [Google Scholar]
  30. Qian, N. On the momentum term in gradient descent learning algorithms. Neural Netw. 1999, 12, 145–151. [Google Scholar] [CrossRef]
  31. Nesterov, Y. A method for unconstrained convex minimization problem with the rate of convergence O (1/k^2). Proc. USSR Acad. Sci. 1983, 269, 543–547. [Google Scholar]
  32. Youden, W.J. Index for rating diagnostic tests. Cancer 1950, 3, 32–35. [Google Scholar] [CrossRef]
  33. Perez-Straziota, C.; Gaster, R.N.; Rabinowitz, Y.S. Corneal cross-linking for pediatric keratoconus review. Cornea 2018, 37, 802–809. [Google Scholar] [CrossRef]
  34. Olivo-Payne, A.; Abdala-Figuerola, A.; Hernandez-Bogantes, E.; Pedro-Aguilar, L.; Chan, E.; Godefrooij, D. Optimal management of pediatric keratoconus: Challenges and solutions. Clin. Ophthalmol. 2019, 13, 1183–1191. [Google Scholar] [CrossRef] [Green Version]
  35. Kato, N.; Negishi, K.; Sakai, C.; Tsubota, K. Baseline factors predicting the need for corneal crosslinking in patients with keratoconus. PLoS ONE 2020, 15, e0231439. [Google Scholar] [CrossRef] [Green Version]
Figure 1. The flowchart representing the selection of enrolled patients. CXL: corneal crosslinking.
Figure 2. Keratoconus progression prediction using an axial map, a pachymetry map, and a combined image of the two, with respect to patients’ age.
Figure 3. Overall architecture of the neural network model used in this study. VGG: Visual Geometry Group.
Figure 4. Receiver operating characteristic (ROC) curve of the probability for cross-linking (CXL) among keratoconus eyes. Axial map (Axial), pachymetry map (Pachy), and a combination of the two maps (Both) reveal a similar ROC curve of probability for CXL.
Table 1. Patients’ background.

                          Progression Group   Non-Progression Group   p-Value
Age, years (mean ± SD)    21.0 ± 5.9          31.5 ± 12.4             p < 0.01
Gender (female ratio)     24/90               55/184                  p = 0.67

SD: standard deviation.
Table 2. Area under the curve (AUC), sensitivity, and specificity outcomes obtained by assessment using an axial map, a corneal pachymetry map, and a combination of the two, with or without respect to patients’ age.

         AUC (95% CI)           Sensitivity, % (n/N) (95% CI)   Specificity, % (n/N) (95% CI)
Axial    0.783 (0.721–0.845)    87.8% (79/90) (79.2–93.7)       59.8% (110/184) (52.3–66.9)
Pachy    0.784 (0.722–0.846)    77.8% (70/90) (67.8–85.9)       65.8% (121/184) (58.4–72.6)
Both     0.814 (0.755–0.872)    77.8% (70/90) (67.8–85.9)       69.6% (128/184) (62.4–76.1)

AUC, area under the curve; CI, confidence interval.

