Article

Recurrent and Concurrent Prediction of Longitudinal Progression of Stargardt Atrophy and Geographic Atrophy towards Comparative Performance on Optical Coherence Tomography as on Fundus Autofluorescence

1
Doheny Image Analysis Laboratory, Doheny Eye Institute, 150 North Orange Grove Blvd, Pasadena, CA 91103, USA
2
School of Medicine, Case Western Reserve University, Cleveland, OH 44106, USA
3
Department of Computer Science, The University of California, Los Angeles, CA 90095, USA
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2024, 14(17), 7773; https://doi.org/10.3390/app14177773
Submission received: 29 July 2024 / Revised: 24 August 2024 / Accepted: 27 August 2024 / Published: 3 September 2024

Abstract

Stargardt atrophy and geographic atrophy (GA) represent pivotal endpoints in FDA-approved clinical trials. Predicting atrophy progression is crucial for evaluating drug efficacy. Fundus autofluorescence (FAF), the standard 2D imaging modality in these trials, has limitations in patient comfort. In contrast, spectral-domain optical coherence tomography (SD-OCT), a 3D imaging modality, is more patient-friendly but suffers from lower image quality. This study has two primary objectives: (1) to develop an efficient predictive model for generating future FAF images and predicting future Stargardt atrophic (as well as GA) regions and (2) to develop an efficient predictive model using advanced 3D OCT features at the ellipsoid zone (EZ) to generate future enface EZ maps and predict future Stargardt atrophic regions on OCT with performance comparable to that on FAF. To achieve these goals, we propose two deep neural networks (termed ReConNet and ReConNet-Ensemble) that integrate recurrent learning units (long short-term memory, LSTM) with a convolutional neural network (CNN) encoder-decoder architecture, together with concurrent learning units formed by ensemble/multiple recurrent learning channels. The ReConNet, which incorporates LSTM connections with a CNN, was developed for the first goal on longitudinal FAF. The ReConNet-Ensemble, which incorporates multiple recurrent learning channels based on enhanced EZ enface maps to capture higher-order inherent OCT EZ features, was developed for the second goal on longitudinal OCT. Using FAF images at months 0, 6, and 12 to predict atrophy at month 18, the ReConNet achieved mean (±standard deviation, SD) and median Dice coefficients of 0.895 (±0.086) and 0.922 for Stargardt atrophy and 0.864 (±0.113) and 0.893 for GA. Using SD-OCT images at months 0 and 6 to predict atrophy at month 12, the ReConNet-Ensemble achieved mean and median Dice coefficients of 0.882 (±0.101) and 0.906 for Stargardt atrophy.
The prediction performance on OCT images is comparable to that on FAF. These results underscore the potential of SD-OCT for efficient and practical assessment of atrophy progression in clinical trials and retina clinics, complementing or surpassing the widely used FAF imaging technique.

1. Introduction

Stargardt disease (also called juvenile macular dystrophy) is the most common cause of macular degeneration in children and young adults. It results in retinal damage and progressive vision loss and can lead to eventual blindness [1,2,3,4,5,6,7,8,9,10,11,12,13,14]. Age-related macular degeneration (AMD) is the leading cause of blindness in people aged 65 and older in the Western world, with many of these patients eventually losing vision due to the development of macular neovascularization or non-neovascular disease, i.e., geographic atrophy (GA). Stargardt atrophy and GA represent the endpoints of Stargardt disease and non-neovascular AMD, respectively, in Food and Drug Administration (FDA)-approved clinical trials. Atrophic formation in juvenile and age-related macular dystrophy has been the most severe cause of vision loss and blindness. Until 2023, there were no proven effective treatments for macular atrophy. Recently, two complement inhibitors, pegcetacoplan (Apellis Pharmaceuticals, Waltham, MA, USA) and avacincaptad pegol (Iveric Bio, Parsippany-Troy Hills, NJ, USA), demonstrated positive Phase 3 clinical trial results in treating AMD atrophy and were cleared for clinical use by the FDA. As such, detecting macular atrophy, predicting its expected progression, and selecting the optimal therapy are topics of pressing need and critical importance.
Fundus autofluorescence (FAF) is a commonly used imaging method for evaluating Stargardt disease and AMD [15]. With FAF, lipofuscin build-up can be assessed on the produced image of the retina. While FAF imaging has been widely used for diagnosing and predicting macular atrophy progression, it has notable limitations. FAF provides 2D imaging with restricted depth penetration and specificity. Additionally, the bright blue excitation light used in FAF can cause patient discomfort. In contrast, spectral-domain optical coherence tomography (SD-OCT) is a 3D imaging modality that excels in delivering high-resolution, cross-sectional images of the macula, making it ideal for detailed tissue analysis of the retinal layers [16,17]. OCT typically operates at wavelengths around 850 nm or 1050 nm, which generally enhances patient comfort. Hence, beyond FAF, it is important to develop effective approaches for diagnosing and predicting macular atrophy progression on OCT.
The convolutional neural network (CNN) architecture has been widely used for automated semantic segmentation of Stargardt disease and GA [18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37]. However, current approaches to automated analysis of macular atrophy generally do not incorporate longitudinal data, focusing instead on one instant in time. One way to incorporate longitudinal data is through recurrent neural networks, such as the long short-term memory (LSTM) architecture, a modification of the recurrent neural network [38,39,40,41,42,43,44,45]. The LSTM architecture has been used in medical image analyses for the detection of cardiomegaly, consolidation, pleural effusion, and hiatus hernia on X-ray and for Alzheimer's disease diagnosis [46,47]. In the field of ophthalmology, LSTMs have been successfully used for assessing glaucoma progression, predicting diabetic retinopathy, and predicting late AMD progression. However, in these cases, the LSTM architecture was used for prediction via classification, not for prediction via segmentation of images. Additionally, it is important to be able to predict (or generate) future images (FAF or OCT enface) as well, to assist clinicians in treatment decision-making by visualizing both the predicted future atrophy regions and the associated generated images. However, no reported studies can generate or predict future images for both FAF and OCT enface. Moreover, the retinal layers in volumetric SD-OCT images contain rich information beyond the generally used image features (original retinal layer mean intensity and thickness features).
To take advantage of the rich 3D retinal layer information in SD-OCT images beyond 2D FAF images, ensemble CNNs, an extension of CNNs, can be used: several neural networks each handle a separate input of novel retinal-layer-based features and work together to concurrently predict atrophy progression by combining the CNN features from the ensemble networks [18,29,30].
To our knowledge, only one Stargardt disease progression prediction study has been reported so far; it used a self-attended U-Net to predict atrophy progression at twelve months and is from our own group [25]. The limitation of that algorithm is that it used only baseline FAF images as input, without additional longitudinal information. Other work in this area has largely focused on GA prediction. In Lee et al., CNNs were used to extract features from color fundus photographs, and these features were subsequently fed into an LSTM. However, they predicted the 2- and 5-year risk of progression to late AMD rather than generating prediction masks of regions of atrophy [45]. In Banerjee et al.'s study, an RNN model was used with OCT and demographic data as input to predict progression to exudative AMD within certain timeframes, but again did not attempt to generate prediction masks of atrophy [48]. In Zhang et al.'s study, a bi-directional LSTM prediction module was used with a 3D U-Net refinement module to predict locations of future GA growth on OCT imaging [49]. However, as with Lee et al., each timepoint was sent to a separate model, which does not concurrently take advantage of ensemble features from the rich 3D OCT retinal layer information.
Thus, in this study, we first develop an efficient predictive model for generating future FAF images and predicting future Stargardt atrophic (as well as GA) regions, and then develop an efficient ensemble predictive model that concurrently utilizes advanced 3D OCT features at the ellipsoid zone (EZ) retinal layer (where the atrophy occurs) to generate future enface EZ maps and predict future Stargardt atrophic regions on OCT with performance comparable to that on FAF. To achieve these goals, we propose a set of novel deep convolutional neural networks enhanced with recurrent units (LSTM in this project) and the concurrent learning of ensemble network units (termed ReConNet) for predicting the future progression of atrophy in Stargardt disease and GA based on longitudinal patient data.
Our major contributions include the following: (1) The ReConNet (recurrent and concurrent neural network) architecture incorporates LSTM connections with encoder and decoder pathways for use on longitudinal FAF data. (2) On OCT, the ReConNet further takes advantage of enhanced OCT enface EZ feature maps in ensemble networks (termed ReConNet-Ensemble) to capture higher-order inherent EZ retinal layer features associated with atrophy for enhanced algorithm performance. (3) The ReConNet can be used for the generation or prediction of both future FAF images and future Stargardt atrophy and geographic atrophy. (4) While OCT suffers from lower image quality, our prediction performance on OCT images with ensemble EZ features is comparable to that on FAF with the available Stargardt disease data.

2. Methods

2.1. Imaging Dataset and Ground Truth

The Stargardt dataset consisted of 141 eyes from 100 patients from the ProgStar study. Each eye had FAF imaging (Spectralis HRA+OCT 1.11.2.0; Heidelberg Engineering, Heidelberg, Germany) [1,2,3,4] performed at baseline (zero months) and at six-, twelve-, and eighteen-month follow-ups. Of these eyes, 71 additionally had SD-OCT imaging (Spectralis HRA+OCT; Heidelberg Engineering) performed at the initial baseline, six-month, and twelve-month visits. The GA dataset consisted of 60 eyes from 60 patients obtained from Doheny Eye Institute clinics that had FAF imaging performed at the initial baseline visit and at six-, twelve-, and eighteen-month follow-ups.
The ground truth masks of both GA and Stargardt atrophic regions for all FAF images were manually delineated by certified graders at the Doheny Image Reading Center. We used 85% of the images along with the ground truth for training the deep learning algorithms and the remaining 15% for testing. For the GA dataset, the field of view was 30° with pixel dimensions of 768 × 868. For the Stargardt disease dataset, the field of view was 20° with pixel dimensions of 512 × 512. The FAF images were resized to a standard size of 256 × 256. The SD-OCT volumes had scan protocols with 496 × 1024 × 49 (depth × A-scans × B-scans) pixels or 496 × 512 × 49 pixels. All B-scans were resized to 1024 × 496 pixels. After segmentation, enface feature maps were generated from each volume with dimensions of 1024 × 49 pixels. We resized the enface feature maps to 512 × 512 pixels when registering them to the initial-visit FAF images. They were further resized to 256 × 256 pixels when assembled for input to the neural network because of limited memory. All right (OD) eye scans and images were horizontally flipped to match the left (OS) eye scans and images, ensuring consistency throughout the analysis. As described in our previous work, all Stargardt dataset FAF images and SD-OCT-derived feature maps were registered to the initial baseline visit through feature-based image registration [50]. All GA dataset FAF images were similarly registered to their initial visit images. For Stargardt FAF images, contrast-limited adaptive histogram equalization was performed as part of pre-processing.
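As a rough illustration of the resizing and eye-flipping steps described above, a minimal NumPy sketch might look as follows (the nearest-neighbor resize and the `laterality` argument are simplifications for illustration, not the authors' actual pipeline):

```python
import numpy as np

def preprocess_faf(image: np.ndarray, laterality: str, size: int = 256) -> np.ndarray:
    """Resize a FAF image to size x size (nearest-neighbor for simplicity)
    and flip right (OD) eyes horizontally so all images match OS orientation."""
    h, w = image.shape
    # nearest-neighbor resize via index sampling
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = image[rows][:, cols]
    if laterality == "OD":
        resized = resized[:, ::-1]  # horizontal flip to match OS orientation
    return resized

img = np.random.rand(768, 868)  # GA FAF pixel dimensions from the text
out = preprocess_faf(img, "OD")
print(out.shape)
```

In practice, an interpolating resize (e.g., bilinear) would typically be used instead of index sampling.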
The ground truths utilized in this study for both GA and Stargardt atrophy were based on FAF images, as, thus far, FAF has been the standard imaging modality for macular atrophy assessment in clinical trial studies. GA and Stargardt atrophy lesions on FAF images were graded using the semiautomated software tool RegionFinder 2.6.6.0 (Heidelberg Engineering, Heidelberg, Germany). Images were initially graded by a certified reading center grader, and the grading was subsequently reviewed by a senior grader. A senior investigator (SS) resolved discrepancies between the two graders.

2.2. Neural Network Structure

To achieve the two primary goals, we designed two network structures (ReConNet and ReConNet-Ensemble) for macular atrophy prediction based on multiple patient visits: (1) prediction of future GA and Stargardt atrophy regions using longitudinal FAF images (ReConNet); (2) prediction of future Stargardt atrophy regions using longitudinal SD-OCT images (ReConNet-Ensemble). Additionally, we investigated the feasibility of predicting the interval growth of GA and Stargardt atrophy regions using longitudinal FAF images (ReConNet-Interval). All three algorithms were implemented using the open-source deep learning framework Keras.
The first neural network, ReConNet, was used for the longitudinal prediction of GA and Stargardt atrophy progression on FAF images. Its architecture has an encoding pathway and a decoding pathway, with concatenations linking the output of each encoding pathway convolution to the input of the corresponding decoding pathway convolution. The term ReConNet was used for both FAF and OCT to allow flexibility in the future to include additional ensemble network inputs for concurrent learning. Such a deep learning architecture can be used for semantic segmentation with a relatively small training set of images [18]. To incorporate longitudinal data, the neural network in this study had multiple encoding pathways, one for each timestep. At each layer of the encoding pathway, the outputs of the encoding pathway convolutions were combined using a 2D convolutional LSTM layer. The output of this layer was concatenated with the input of the corresponding decoding pathway convolution. Note that the overall network architectures for ReConNet and ReConNet-Interval were similar; the difference was that the input for ReConNet was the original FAF image (and related ground truth), while for ReConNet-Interval it was the FAF image with the region of atrophy set to zero (and the related interval ground truth), reflecting the growth regions between any two adjacent patient visits. A schematic of the neural network used in this study is depicted in Figure 1.
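The described pattern, per-timestep encoders whose outputs are fused across visits by convolutional LSTM layers feeding a decoder, can be sketched in Keras (the framework named above). This is a minimal two-level illustration; the filter counts, depth, and layer choices are assumptions, not the published configuration:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_recurrent_unet(timesteps=3, size=256, base_filters=16):
    """Sketch of a ConvLSTM-linked encoder-decoder: each timestep (visit) is
    encoded by a small CNN; encoder outputs at each depth are fused across
    time by a ConvLSTM2D layer whose result feeds the decoder via skips."""
    inp = keras.Input((timesteps, size, size, 1))
    # shared per-timestep encoder (TimeDistributed applies it to each visit)
    e1 = layers.TimeDistributed(
        layers.Conv2D(base_filters, 3, padding="same", activation="relu"))(inp)
    p1 = layers.TimeDistributed(layers.MaxPooling2D(2))(e1)
    e2 = layers.TimeDistributed(
        layers.Conv2D(base_filters * 2, 3, padding="same", activation="relu"))(p1)
    # fuse the time dimension at each encoder depth with convolutional LSTMs
    f1 = layers.ConvLSTM2D(base_filters, 3, padding="same")(e1)
    f2 = layers.ConvLSTM2D(base_filters * 2, 3, padding="same")(e2)
    # decoder: upsample the deep fused features, concatenate the shallow ones
    u1 = layers.UpSampling2D(2)(f2)
    d1 = layers.Conv2D(base_filters, 3, padding="same", activation="relu")(
        layers.Concatenate()([u1, f1]))
    out = layers.Conv2D(1, 1, activation="sigmoid")(d1)  # atrophy probability map
    return keras.Model(inp, out)

model = build_recurrent_unet()
print(model.output_shape)
```

The real network would use more encoder/decoder levels and separate image- and label-generation heads, but the ConvLSTM fusion at each depth is the structural idea.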
The second neural network structure, ReConNet-Ensemble, was used for the longitudinal prediction of Stargardt atrophy progression on OCT images. The ensemble inputs included multiple enhanced enface feature maps derived from the OCT data surrounding the EZ in depth (the z-direction), the region of the retina most affected by Stargardt atrophy. These feature maps extended beyond traditional mean intensity maps, additionally including the minimum intensity, maximum intensity, median intensity, standard deviation, skewness, kurtosis, gray-level entropy, and thickness of the EZ, similar to our previous work [18]. The network structure of ReConNet-Ensemble was similar to that previously described by our team [18]; the difference is that the previous network performed prediction using the baseline alone, while ReConNet-Ensemble in this project included longitudinal data from three patient visits (months 0, 6, and 12), reflecting dynamic changes of atrophy over time. In the ReConNet-Ensemble neural network, multiple of the above-described neural networks take in the multiple OCT-derived enface feature maps as input, allowing for the incorporation of three-dimensional OCT information at the EZ. In our previous work, we found significant improvement in the prediction of Stargardt atrophy when incorporating these advanced feature maps compared to using mean intensity alone. The logit layers of the individual neural networks were combined through averaging, with the combined result subsequently sent through a softmax function to predict probability maps of atrophy. The ReConNet-Ensemble neural network is depicted in Figure 2.
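The logit-averaging fusion step at the end of the ensemble can be illustrated with a short NumPy sketch (the number of feature channels follows the text; the map shapes and two-class setup are assumptions for illustration):

```python
import numpy as np

def ensemble_predict(logit_maps):
    """Average the per-feature-map logits, then softmax over the class axis
    to obtain a probability map of atrophy (sketch of the fusion step)."""
    mean_logits = np.mean(np.stack(logit_maps), axis=0)  # (H, W, n_classes)
    # numerically stable softmax over the last (class) axis
    e = np.exp(mean_logits - mean_logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# nine feature channels (mean, min, max, median, SD, skewness, kurtosis,
# gray-level entropy, thickness), each producing a 2-class logit map
maps = [np.random.randn(256, 256, 2) for _ in range(9)]
probs = ensemble_predict(maps)
```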
Note that the ReConNet neural networks can predict both future images and atrophy. They are optimized by different loss functions depending on whether they are used for image generation or for segmentation. For future image generation (or prediction), the ground truth is the original image at that timestep. The cosine similarity loss is designed to optimize the performance of the CNN model by maximizing the similarity between the generated future image and the original image at that timestep. The cosine similarity loss is expressed as follows:
$$L_{cos} = \frac{\sum_{x \in \Omega} p_l(x)\, g_l(x)}{\sqrt{\sum_{x \in \Omega} p_l^2(x)}\, \sqrt{\sum_{x \in \Omega} g_l^2(x)}}$$
The Dice loss is designed to optimize the performance of the CNN model by maximizing the overlap between the predicted regions of interest and the manually annotated regions (i.e., ground truth). Thus, for the segmentation of atrophic regions, the Dice loss is used as follows:
$$L_{Dice} = 1 - \frac{2 \sum_{x \in \Omega} p_l(x)\, g_l(x)}{\sum_{x \in \Omega} g_l^2(x) + \sum_{x \in \Omega} p_l^2(x)}$$
where $g_l(x)$ is 1 if the pixel $x \in \Omega$ belongs to class $l$ and 0 otherwise, and $p_l(x)$ is the predicted probability that pixel $x$ belongs to class $l$.
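A direct NumPy transcription of the two losses might look as follows (illustrative only; in practice these would be written with the deep learning framework's tensor operations so they are differentiable):

```python
import numpy as np

def cosine_similarity(p, g, eps=1e-8):
    """Cosine similarity between a generated image p and the original g;
    training would maximize this (or minimize its negation)."""
    num = np.sum(p * g)
    den = np.sqrt(np.sum(p ** 2)) * np.sqrt(np.sum(g ** 2)) + eps
    return num / den

def dice_loss(p, g, eps=1e-8):
    """L_Dice = 1 - 2*sum(p*g) / (sum(g^2) + sum(p^2))."""
    num = 2.0 * np.sum(p * g)
    den = np.sum(g ** 2) + np.sum(p ** 2) + eps
    return 1.0 - num / den

g = np.zeros((4, 4))
g[:2] = 1.0  # toy ground-truth mask
```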

2.3. Atrophy Prediction

2.3.1. Prediction of Future GA and Stargardt Atrophy Regions Using Longitudinal FAF Images (ReConNet)

The prediction of the progression of geographic and Stargardt atrophy using longitudinal FAF images was trained in two steps (an initial step, ReConNet1, and a final step, ReConNet2; see Figure 3 for details). The initial step consisted of the prediction/generation of the FAF image and the atrophy regions (size and location) on the FAF image. The input for the neural network was the zero-month, six-month, and twelve-month FAF images, each paired with the manually graded atrophy label for that visit. The outputs were the predicted/generated FAF image and atrophy at eighteen months. In the second step, the generated/predicted FAF image and label were used in conjunction with the zero-month, six-month, and twelve-month images in the neural network for enhanced atrophy prediction. An example of this process with the input and output at each step is shown in Figure 3.
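The two-step scheme can be expressed structurally as follows (the `step1_model` and `step2_model` interfaces are hypothetical placeholders standing in for ReConNet1 and ReConNet2, not the actual models):

```python
import numpy as np

def two_step_predict(step1_model, step2_model, images, labels):
    """images/labels: lists for months 0, 6, 12. step1_model returns a
    generated month-18 image and an initial atrophy mask; step2_model takes
    the four visits (three real plus one generated) and returns the refined
    mask. Both model arguments are placeholders for this sketch."""
    gen_img, init_mask = step1_model(images, labels)
    final_mask = step2_model(images + [gen_img], labels + [init_mask])
    return gen_img, init_mask, final_mask

# toy stand-ins so the pipeline runs end to end
s1 = lambda ims, lbs: (np.mean(ims, axis=0),
                       (np.mean(lbs, axis=0) > 0.5).astype(float))
s2 = lambda ims, lbs: (np.mean(lbs, axis=0) > 0.5).astype(float)
ims = [np.random.rand(256, 256) for _ in range(3)]
lbs = [np.zeros((256, 256)) for _ in range(3)]
gen, init, final = two_step_predict(s1, s2, ims, lbs)
```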

2.3.2. Prediction of Progression of Stargardt Atrophy Using Longitudinal SD-OCT Images (ReConNet-Ensemble)

With the ensemble neural network for Stargardt atrophy progression using longitudinal SD-OCT images, a similar two-step process (ReConNet1-Ensemble and ReConNet2-Ensemble) was performed. Note that for the OCT data, we only had three patient visits (months 0, 6, and 12). Instead of using a single FAF image input for each patient visit, in step 1, various enhanced feature maps derived from the zero-month and six-month SD-OCT scans (each again paired with the manually graded atrophy label for that visit) were sent through the ensemble network, producing both a predicted region of atrophy and a predicted enface image. As mentioned, the enhanced features capture rich 3D atrophy information beyond the mean-intensity-associated features shown in our previous paper [18]. In step 2, rather than attempting to predict the individual twelve-month OCT-derived feature maps, the predicted enface image was used as a stand-in for the twelve-month feature maps and sent through the neural network again in conjunction with the zero-month and six-month feature maps.
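The enhanced enface feature maps can be illustrated by collapsing the EZ slab of an OCT volume into per-A-scan statistics. The sketch below is a simplified, unoptimized version; the boundary arrays, toy dimensions, and the bin count used for the gray-level entropy are assumptions for illustration:

```python
import numpy as np

def ez_enface_features(volume, ez_top, ez_bottom):
    """Collapse the EZ slab of an OCT volume (depth x A-scans x B-scans, with
    intensities in [0, 1)) into per-A-scan enface statistics. ez_top/ez_bottom
    hold the segmented EZ boundary depths per (A-scan, B-scan)."""
    depth, n_a, n_b = volume.shape
    names = ("mean", "min", "max", "median", "sd",
             "skew", "kurtosis", "entropy", "thickness")
    feats = {k: np.zeros((n_a, n_b)) for k in names}
    for a in range(n_a):
        for b in range(n_b):
            col = volume[ez_top[a, b]:ez_bottom[a, b], a, b]
            z = (col - col.mean()) / col.std()  # standardized column
            hist, _ = np.histogram(col, bins=16, range=(0.0, 1.0))
            p = hist / hist.sum()
            p = p[p > 0]
            feats["mean"][a, b] = col.mean()
            feats["min"][a, b] = col.min()
            feats["max"][a, b] = col.max()
            feats["median"][a, b] = np.median(col)
            feats["sd"][a, b] = col.std()
            feats["skew"][a, b] = np.mean(z ** 3)
            feats["kurtosis"][a, b] = np.mean(z ** 4) - 3.0  # excess kurtosis
            feats["entropy"][a, b] = -np.sum(p * np.log2(p))  # gray-level entropy
            feats["thickness"][a, b] = ez_bottom[a, b] - ez_top[a, b]
    return feats

vol = np.random.rand(96, 16, 8)  # tiny toy volume
top = np.full((16, 8), 30)
bot = np.full((16, 8), 50)
f = ez_enface_features(vol, top, bot)
```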

2.3.3. Prediction of Interval Growth of GA and Stargardt Atrophy Regions Using Longitudinal FAF Images (ReConNet-Interval)

In this task, the interval growth of GA and Stargardt atrophy between visits was examined. In the zero-month, six-month, and twelve-month FAF images, the areas labeled as atrophy by graders were set to zero, and interval growth labels were paired with each visit. A placeholder label created by inverting the twelve-month atrophy region ground truth was paired with the twelve-month FAF image. The neural network, ReConNet-Interval, was trained to identify only the interval growth of the region of atrophy. Note that, unlike ReConNet and ReConNet-Ensemble, which were performed in two steps (where the generated/predicted image and label were used in conjunction with the zero-month, six-month, and twelve-month images for enhanced atrophy prediction), ReConNet-Interval can only be performed for the first step, as the second step would require an additional visit after 18 months to obtain the interval. Hence, the performance of the one-step ReConNet1-Interval may not be comparable with the enhanced two-step atrophy prediction results from ReConNet2 and ReConNet2-Ensemble.
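The construction of the interval-growth inputs, i.e., zeroing the current atrophy region and labeling only the newly atrophic pixels at the next visit, can be sketched as follows (function and variable names are illustrative, not from the authors' code):

```python
import numpy as np

def make_interval_input(faf, atrophy_mask, next_mask):
    """Zero out the current atrophy region in the FAF image and compute the
    interval-growth label as the pixels that are newly atrophic at the next
    visit (present in next_mask but not in atrophy_mask)."""
    masked_img = faf * (1 - atrophy_mask)
    interval_label = np.clip(next_mask - atrophy_mask, 0, 1)
    return masked_img, interval_label

faf = np.random.rand(256, 256)
m0 = np.zeros((256, 256)); m0[100:120, 100:120] = 1  # atrophy at this visit
m1 = np.zeros((256, 256)); m1[95:125, 95:125] = 1    # atrophy at next visit
img, growth = make_interval_input(faf, m0, m1)
```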

2.4. Performance Evaluation

Four metrics were used to evaluate the performance of the neural networks: pixel-wise accuracy, Dice coefficient, sensitivity, and specificity. Pixel-wise accuracy measures all correctly identified pixels in the image:
$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$
where TP denotes true positives, TN true negatives, FP false positives, and FN false negatives. Sensitivity describes how much atrophy is correctly labeled compared to the total amount of atrophy:
$$\text{Sensitivity} = \frac{TP}{TP + FN}$$
Specificity describes how much non-atrophied tissue is correctly labeled compared to the total amount of non-atrophied tissues:
$$\text{Specificity} = \frac{TN}{TN + FP}$$
The Dice coefficient of two sets A and B describes the spatial overlap between the two sets:
$$\text{Dice} = \frac{2\,|A \cap B|}{|A| + |B|}$$
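All four metrics reduce to the confusion-matrix counts and can be computed for a pair of binary masks as follows (a straightforward NumPy sketch; the handling of empty masks is a choice of this illustration):

```python
import numpy as np

def evaluate(pred, gt):
    """Pixel-wise accuracy, Dice coefficient, sensitivity, and specificity
    for a predicted binary mask against a ground-truth binary mask."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)
    tn = np.sum(~pred & ~gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        # for binary masks, 2|A∩B|/(|A|+|B|) equals 2TP/(2TP+FP+FN)
        "dice": 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0,
        "sensitivity": tp / (tp + fn) if (tp + fn) else 1.0,
        "specificity": tn / (tn + fp) if (tn + fp) else 1.0,
    }

gt = np.zeros((8, 8)); gt[:4] = 1
m = evaluate(gt, gt)  # perfect prediction: all four metrics equal 1
```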

3. Results

Neural network predictions were evaluated using the pixel-wise accuracy, Dice coefficient, sensitivity, and specificity. The initial and final prediction results for ReConNet are shown in Table 1. The initial and final results for ReConNet-Ensemble are shown in Table 2. The results for ReConNet-Interval are shown in Table 3.
When comparing the performance of the neural network with and without the inclusion of a predicted eighteen-month FAF image, the Wilcoxon signed-rank test was used due to the non-normal distribution of the data, resulting in p-values less than 0.05 for all metrics and all neural network configurations for both Stargardt atrophy and GA, except for specificity in one case. Examples of inputs, output predicted regions of atrophy, and ground truths for ReConNet, ReConNet-Ensemble, and ReConNet-Interval are shown in Figure 4, Figure 5 and Figure 6, respectively. In Figure 4, the ReConNet model is applied to longitudinal FAF images to generate future images and predict the full extent of atrophic regions for both Stargardt atrophy and GA. In Figure 5, the ReConNet-Ensemble model combines features from multiple longitudinal OCT enface maps to generate future images and predict the full extent of Stargardt atrophic regions. Figure 6 demonstrates the use of the ReConNet-Interval model to predict the growth interval, rather than the entire atrophic regions, for both Stargardt atrophy and GA. Each of these algorithms has distinct advantages depending on the imaging modality used (2D FAF with ReConNet or 3D OCT with ReConNet-Ensemble) and the specific treatment goals. For instance, ReConNet-Interval is suited for scenarios where the focus is on controlling the rate of atrophic growth, while ReConNet is useful for assessing whether early atrophic changes have been addressed beyond merely suppressing growth.
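A paired comparison of this kind can be reproduced with SciPy's implementation of the test (the per-eye Dice scores below are hypothetical numbers for illustration, not the study's data):

```python
import numpy as np
from scipy.stats import wilcoxon

# hypothetical per-eye Dice scores for the one-step (ReConNet1) and
# two-step (ReConNet2) configurations, paired by eye
dice_step1 = np.array([0.55, 0.60, 0.48, 0.62, 0.57, 0.51, 0.66, 0.59])
dice_step2 = np.array([0.90, 0.93, 0.88, 0.92, 0.91, 0.89, 0.94, 0.90])

# Wilcoxon signed-rank test: paired, non-parametric, no normality assumption
stat, p = wilcoxon(dice_step1, dice_step2)
print(f"statistic={stat}, p={p:.4f}")
```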

4. Discussion

This paper represents the first attempt to use longitudinal imaging data to predict future regions of macular atrophy in the setting of Stargardt disease using recurrent (LSTM) and concurrent neural network architectures (ReConNet and ReConNet-Ensemble). The Stargardt atrophy progression prediction performance on OCT images is comparable to that on FAF. These results underscore the potential of SD-OCT for the efficient and practical assessment of atrophy progression in clinical trials and retina clinics, complementing or surpassing the widely used FAF imaging technique. Additionally, the ReConNet was also applied to GA progression prediction on FAF, demonstrating its potential for general macular atrophy progression prediction.
For the prediction of atrophy growth on FAF using ReConNet, as shown in Table 1, the initial ReConNet1 predictions achieved a median Dice coefficient of 0.568 for eighteen-month predictions of atrophy in the setting of Stargardt atrophy using baseline zero-month, six-month, and twelve-month data. When these initial generated/predicted eighteen-month images and labels were used in conjunction with the prior visits of longitudinal data, a median Dice coefficient of 0.922 was achieved for ReConNet2, showing significant improvement (62% improvement, p < 0.05). For GA, median Dice coefficients of 0.867 for ReConNet1 and 0.893 for ReConNet2 were achieved, with more modest enhancement, probably due to the already good performance of ReConNet1 and the smaller size of the GA dataset (60 eyes) compared with the Stargardt atrophy dataset (141 eyes), which limits the room for improvement.
For the prediction of Stargardt atrophy growth on OCT, the ensemble neural network with various OCT feature maps produced a similar pattern of performance. Note that the ensemble neural network was only applied to Stargardt atrophy prediction (using months 0 and 6 to predict month 12, since further longitudinal data were not available). Additionally, longitudinal OCT data for GA patients were not accessible in this study. Using initial zero-month and six-month data, the ensemble neural network achieved a median Dice coefficient of 0.742 for twelve-month predictions in the setting of Stargardt disease (71 eyes) for the initial ReConNet1-Ensemble, and using the initial prediction in conjunction with the prior visits of longitudinal data resulted in a median Dice coefficient of 0.906 for ReConNet2-Ensemble, again showing significant improvement (22% improvement, p < 0.05). Similar patterns were seen for pixel accuracy, sensitivity, and specificity, with the only exception being specificity for the prediction of Stargardt atrophy when using FAF images. In comparison to our previous work, we achieved higher Dice coefficients for atrophy prediction through the inclusion of longitudinal data.
It is worth noting that the Dice coefficients for predicting Stargardt atrophy using OCT with ReConNet-Ensemble (Table 2 and Figure 5) are comparable to those using FAF with ReConNet (Table 1 and Figure 4). This finding is critically important. FAF has usually been considered the best imaging modality in terms of image quality and has served as a main imaging modality for clinical trials of new atrophy drugs. However, the FAF image acquisition process is very uncomfortable for patients, and FAF is a two-dimensional imaging modality. OCT imaging is much more tolerable for patients, is a three-dimensional imaging modality, and is becoming the most popular imaging modality in major retina clinics. Hence, our novel recurrent and concurrent neural networks based on multiple advanced enface OCT maps, which reflect higher-order inherent retinal layer features associated with atrophy, provide a new, more efficient, and practical way to predict atrophy progression in clinical trials and clinics.
When predicting only the interval growth of the atrophic lesions on FAF using ReConNet-Interval, a median Dice coefficient of 0.557 was achieved in the setting of Stargardt atrophy. For GA, a median Dice coefficient of 0.601 was achieved. As mentioned, ReConNet-Interval can only be performed for the first step, as the second step would require an additional visit after 18 months to obtain the interval, which was not available for this project. Based on our investigation of the two-step scheme for ReConNet and ReConNet-Ensemble, we can reasonably expect that the median Dice coefficients for the interval growth of Stargardt and geographic atrophic lesions with ReConNet2-Interval would both be greater than 0.80.
In Figure 4, the initial and final predictions of the ReConNet neural network can be compared to the input FAF images and ground truths. Of interest in these images is the improved detail of the final prediction compared to the initial prediction. This indicates that, in predicting the FAF image, the neural network may capture features important to the growth of atrophy that are not captured by segmentation alone. In Figure 5, we show a set of comparisons of the initial and final predictions based on the ensemble neural network alongside the input OCT feature maps and their corresponding ground truths. Once again, there is an improved final prediction after the incorporation of an initially predicted image and label. This indicates that the benefit of incorporating a predicted image in the final prediction is consistently present across imaging modalities.
In Figure 6, it is possible to compare the predicted interval growth to the input-edited FAF images. It can be seen that the predicted interval growth tends to underestimate the ground truth. This is also apparent in the high specificity and relatively low sensitivity.
While these results are promising, this study is not without limitations. First, while our ReConNet architecture performs well with a limited sample size, the dataset sizes for both Stargardt disease and GA are relatively small, particularly for the GA dataset. Second, the longitudinal patient visits are limited, especially for the evaluation of interval growth. Either more longitudinal patient visits or denser visits should be included for ReConNet-Interval to achieve reasonably good atrophy progression prediction by applying the two steps of ReConNet-Interval (i.e., ReConNet1-Interval and ReConNet2-Interval), similar to ReConNet and ReConNet-Ensemble. Third, the registration process may introduce some errors due to the relatively low image resolution in one dimension, resulting in slightly sheared, rotated, or translated registered OCT enface maps in comparison to the corresponding baseline FAF images. Fourth, while the Dice coefficient has been widely used as a standard metric for the evaluation of macular atrophy [51,52], the investigation of additional new metrics, such as those used in [53], may be useful and can be a future direction of investigation.

5. Conclusions

In this study, we reported multiple methods of incorporating longitudinal data in the generation of future images and the prediction of future regions of atrophy in Stargardt disease and AMD. There is high agreement with the manual gradings when predicting the whole region of atrophy, which is further improved by cycling the initial predictions back in and incorporating them into the original longitudinal input data. The prediction performance on OCT images is comparable to that using FAF, which opens a new, more efficient, and practical avenue for the assessment of atrophy progression in clinical trials and retina clinics, beyond the widely used FAF. These results are highly encouraging for high-performance interval growth prediction when more frequent or longer-term longitudinal data become available in our clinics, which is a pressing task for our next step in this ongoing research.

Author Contributions

Z.M. developed and coded the algorithms studied, performed the experiments, and was the primary author of the manuscript text. Z.C.W. curated the data, coded some of the algorithms, and was another primary author of the manuscript text. E.X. and S.X. assisted with image registration and ground truth refinement. I.M. assisted with image registration. S.R.S. provided clinical background and edited and wrote a portion of the manuscript text. Z.J.H. conceived the idea of the development and edited and wrote a portion of the manuscript text. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Eye Institute of the National Institutes of Health under Award Numbers R21EY029839 and R21EY030619.

Institutional Review Board Statement

All data (both Stargardt and GA data) used in this project were de-identified in accordance with the Health Insurance Portability and Accountability Act (HIPAA) Safe Harbor. The study in this paper was conducted in accordance with the Declaration of Helsinki. The Stargardt dataset was approved by the Institutional Review Board of each of the participating institutions: the Cole Eye Institute; Eberhard Karls University Eye Hospital; the Greater Baltimore Medical Centre; the Moorfields Eye Hospital; the Moran Eye Centre; the Retina Foundation of the Southwest; the Scheie Eye Institute; Université de Paris 06; and the Wilmer Eye Institute. The GA data were approved by the Institutional Review Board of the University of California, Los Angeles.

Informed Consent Statement

For Stargardt data, written informed consent was obtained from all participants and their legal guardians for study participation and the publication of identifying images. For GA data, informed consent was obtained from all participants, and research was conducted in accordance with the relevant guidelines and regulations.

Data Availability Statement

The datasets used in this study are not publicly available to protect patients’ privacy and to not violate informed consent. The datasets are available from the corresponding author upon reasonable request. The code generated during the study is accessible from the corresponding author upon reasonable request and subject to the regulations of the institute.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Strauss, R.W.; Ho, A.; Muñoz, B.; Cideciyan, A.V.; Sahel, J.A.; Sunness, J.S.; Birch, D.G.; Bernstein, P.S.; Michaelides, M.; Traboulsi, E.I.; et al. The natural history of the progression of atrophy secondary to Stargardt disease (ProgStar) studies: Design and baseline characteristics: ProgStar report no. 1. Ophthalmology 2016, 123, 817–828. [Google Scholar] [CrossRef]
  2. Schönbach, E.M.; Wolfson, Y.; Strauss, R.W.; Ibrahim, M.A.; Kong, X.; Muñoz, B.; Birch, D.G.; Cideciyan, A.V.; Hahn, G.A.; Nittala, M.; et al. Macular sensitivity measured with microperimetry in Stargardt disease in the progression of atrophy secondary to Stargardt disease (ProgStar) study: Report no.7. JAMA Ophthalmol. 2017, 135, 696–703. [Google Scholar] [CrossRef] [PubMed]
  3. Strauss, R.W.; Muñoz, B.; Ho, A.; Jha, A.; Michaelides, M.; Mohand-Said, S.; Cideciyan, A.V.; Birch, D.; Hariri, A.H.; Nittala, M.G.; et al. Incidence of atrophic lesions in Stargardt disease in the progression of atrophy secondary to Stargardt disease (ProgStar) study: Report no. 5. JAMA Ophthalmol. 2017, 135, 687–695. [Google Scholar] [CrossRef]
  4. Strauss, R.W.; Muñoz, B.; Ho, A.; Jha, A.; Michaelides, M.; Cideciyan, A.V.; Audo, I.; Birch, D.G.; Hariri, A.H.; Nittala, M.G.; et al. Progression of Stargardt disease as determined by fundus autofluorescence in the retrospective progression of Stargardt disease study (ProgStar report no. 9). JAMA Ophthalmol. 2017, 135, 1232–1241. [Google Scholar] [CrossRef] [PubMed]
  5. Ma, L.; Kaufman, Y.; Zhang, J.; Washington, I. C20-D3-vitamin A slows lipofuscin accumulation and electrophysiological retinal degeneration in a mouse model of Stargardt disease. J. Biol. Chem. 2010, 286, 7966–7974. [Google Scholar] [CrossRef]
  6. Kong, J.; Kim, S.R.; Binley, K.; Pata, I.; Doi, K.; Mannik, J.; Zernant-Rajang, J.; Kan, O.; Iqball, S.; Naylor, S.; et al. Correction of the disease phenotype in the mouse model of Stargardt disease by lentiviral gene therapy. Gene Ther. 2008, 15, 1311–1320. [Google Scholar] [CrossRef]
  7. Binley, K.; Widdowson, P.; Loader, J.; Kelleher, M.; Iqball, S.; Ferrige, G.; de Belin, J.; Carlucci, M.; Angell-Manning, D.; Hurst, F.; et al. Transduction of photoreceptors with equine infectious anemia virus lentiviral vectors: Safety and biodistribution of StarGen for Stargardt disease. Investig. Ophtalmol. Vis. Sci. 2013, 54, 4061–4071. [Google Scholar] [CrossRef] [PubMed]
  8. Mukherjee, N.; Schuman, S. Diagnosis and management of Stargardt disease. EyeNet Magazine 2014. [Google Scholar]
  9. Kong, X.; Ho, A.; Munoz, B.; West, S.; Strauss, R.W.; Jha, A.; Ervin, A.; Buzas, J.; Singh, M.; Hu, Z.; et al. Reproducibility of measurements of retinal structural parameters using optical coherence tomography in Stargardt disease. Transl. Vis. Sci. Technol. 2019, 8, 46. [Google Scholar] [CrossRef] [PubMed]
  10. Bressler, N.M.; Bressler, S.B.; Congdon, N.G.; Ferris, F.L., 3rd; Friedman, D.S.; Klein, R.; Lindblad, A.S.; Milton, R.C.; Seddon, J.M.; Age-Related Eye Disease Study Research Group. Potential public health impact of Age-Related Eye Disease Study results: AREDS report no. 11. Arch. Ophthalmol. 2003, 121, 1621–1624. [Google Scholar] [PubMed] [PubMed Central]
  11. Davis, M.D.; Gangnon, R.E.; Lee, L.-Y.; Hubbard, L.D.; Klein, B.E.; Klein, R.; Ferris, F.L.; Bressler, S.B.; Milton, R.C.; Age-Related Eye Disease Study Group. The age-related eye disease study severity scale for age-related macular degeneration: AREDS report no. 17. Arch. Ophthalmol. 2005, 123, 1484–1498. [Google Scholar] [PubMed]
  12. Ferris, F.L.; Davis, M.D.; Clemons, T.E.; Lee, L.Y.; Chew, E.Y.; Lindblad, A.S.; Milton, R.C.; Bressler, S.B.; Klein, R.; Age-Related Eye Disease Study (AREDS) Research Group. A simplified severity scale for age-related macular degeneration: AREDS report no. 18. Arch. Ophthalmol. 2005, 123, 1570–1574. [Google Scholar]
  13. Klein, R.; Klein, B.E.; Jensen, S.C.; Meuer, S.M. The five-year incidence and progression of age-related maculopathy: The Beaver Dam Eye Study. Ophthalmology 1997, 104, 7–21. [Google Scholar] [CrossRef]
  14. Blair, C.J. Geographic atrophy of the retinal pigment epithelium: A manifestation of senile macular degeneration. Arch. Ophthalmol. 1975, 93, 19–25. [Google Scholar] [CrossRef] [PubMed]
  15. Schmitz-Valckenberg, S.; Holz, F.; Bird, A.; Spaide, R. Fundus autofluorescence imaging: Review and perspectives. Retina 2008, 28, 385–409. [Google Scholar] [CrossRef]
  16. Huang, D.; Swanson, E.A.; Lin, C.P.; Schuman, J.S.; Stinson, W.G.; Chang, W.; Hee, M.R.; Flotte, T.; Gregory, K.; Puliafito, C.A.; et al. Optical coherence tomography. Science 1991, 254, 1178–1181. [Google Scholar] [CrossRef]
  17. Fujimoto, J.G.; Bouma, B.; Tearney, G.J.; Boppart, S.A.; Pitris, C.; Southern, J.F.; Brezinski, M.E. New technology for high-speed and high-resolution optical coherence tomography. Ann. N. Y. Acad. Sci. 1998, 838, 96–107. [Google Scholar] [CrossRef] [PubMed]
  18. Mishra, Z.; Wang, Z.; Sadda, S.R.; Hu, Z. Using Ensemble OCT-Derived Features beyond Intensity Features for Enhanced Stargardt Atrophy Prediction with Deep Learning. Appl. Sci. 2023, 13, 8555. [Google Scholar] [CrossRef] [PubMed]
  19. Mishra, Z.; Wang, Z.; Sadda, S.R.; Hu, Z. Automatic Segmentation in Multiple OCT Layers for Stargardt Disease Characterization via Deep Learning. Transl. Vis. Sci. Technol. 2021, 10, 24. [Google Scholar] [CrossRef]
  20. Kugelman, J.; Alonso-Caneiro, D.; Chen, Y.; Arunachalam, S.; Huang, D.; Vallis, N.; Collins, M.J.; Chen, F.K. Retinal boundary segmentation in Stargardt disease optical coherence tomography images using automated deep learning. Transl. Vis. Sci. Technol. 2020, 9, 12. [Google Scholar] [CrossRef]
  21. Charng, J.; Xiao, D.; Mehdizadeh, M.; Attia, M.S.; Arunachalam, S.; Lamey, T.M.; Thompson, J.A.; McLaren, T.L.; De Roach, J.N.; Mackey, D.A.; et al. Deep learning segmentation of hyperautofluorescent fleck lesions in Stargardt disease. Sci. Rep. 2020, 10, 16491. [Google Scholar] [CrossRef]
  22. Schmitz-Valckenberg, S.; Brinkmann, C.K.; Alten, F.; Herrmann, P.; Stratmann, N.K.; Göbel, A.P.; Fleckenstein, M.; Diller, M.; Jaffe, G.J.; Holz, F.G. Semiautomated image processing method for identification and quantification of geographic atrophy in age-related macular degeneration. Investig. Ophthalmol. Vis. Sci. 2011, 52, 7640–7646. [Google Scholar]
  23. Chen, Q.; de Sisternes, L.; Leng, T.; Zheng, L.; Kutzscher, L.; Rubin, D.L. Semi-automatic geographic atrophy segmentation for SD-OCT images. Biomed. Opt. Express 2013, 4, 2729–2750. [Google Scholar] [PubMed]
  24. Wang, S.; Wang, Z.; Vejalla, S.; Ganegoda, A.; Nittala, M.; Sadda, S.; Hu, Z. Reverse engineering for reconstructing baseline features of dry age-related macular degeneration in optical coherence tomography. Sci. Rep. 2022, 12, 22620. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  25. Wang, Z.; Sadda, S.R.; Lee, A.; Hu, Z. Automated segmentation and feature discovery of age-related macular degeneration and Stargardt disease via self-attended neural networks. Sci. Rep. 2022, 12, 14565. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  26. Hu, Z.; Wu, X.; Hariri, A.; Sadda, S. Multiple Layer Segmentation and Analysis in Three-Dimensional Spectral-Domain Optical Coherence Tomography Volume Scans. J. Biomed. Opt. 2013, 18, 076006. [Google Scholar] [CrossRef] [PubMed]
  27. Hu, Z.; Medioni, G.G.; Hernandez, M.; Hariri, A.; Wu, X.; Sadda, S.R. Segmentation of the Geographic Atrophy in Spectral-Domain Optical Coherence Tomography and Fundus Autofluorescence Images. Investig. Ophthalmol. Vis. Sci. 2013, 54, 8375–8383. [Google Scholar] [PubMed]
  28. Wang, Z.; Sadda, S.R.; Hu, Z. Deep learning for automated screening and semantic segmentation of age-related and juvenile atrophic macular degeneration. In Proceedings of the Medical Imaging 2019: Computer-Aided Diagnosis, San Diego, CA, USA, 16–21 February 2019; International Society for Optics and Photonics: San Diego, CA, USA, 2019; Volume 10950, p. 109501Q. [Google Scholar] [CrossRef]
  29. Hu, Z.; Wang, Z.; Sadda, S. Automated segmentation of geographic atrophy using deep convolutional neural networks. In Proceedings of the SPIE Medical Imaging 2018: Computer-Aided Diagnosis, Houston, TX, USA, 10–15 February 2018; Volume 10575, p. 1057511. [Google Scholar] [CrossRef]
  30. Saha, S.; Wang, Z.; Sadda, S.; Kanagasingam, Y.; Hu, Z. Visualizing and understanding inherent features in SD-OCT for the progression of age-related macular degeneration using deconvolutional neural networks. Appl. AI Lett. 2020, 1, e16. [Google Scholar] [CrossRef] [PubMed]
  31. Schmidt-Erfurth, U.; Bogunovic, H.; Grechenig, C.; Bui, P.; Fabianska, M.; Waldstein, S.; Reiter, G.S. Role of Deep Learning-Quantified Hyperreflective Foci for the Prediction of Geographic Atrophy Progression. Am. J. Ophthalmol. 2020, 216, 257–270. [Google Scholar] [CrossRef]
  32. Ramsey, D.; Sunness, J.; Malviya, P.; Applegate, C.; Hager, G.; Handa, J. Automated image alignment and segmentation to follow progression of geographic atrophy in age-related macular degeneration. Retina 2014, 34, 1296–1307. [Google Scholar] [PubMed]
  33. Liefers, B.; Colijn, J.M.; González-Gonzalo, C.; Verzijden, T.; Wang, J.J.; Joachim, N.; Mitchell, P.; Hoyng, C.B.; van Ginneken, B.; Klaver, C.C.W.; et al. A Deep Learning Model for Segmentation of Geographic Atrophy to Study Its Long-Term Natural History. Ophthalmology 2020, 127, 1086–1096. [Google Scholar] [CrossRef]
  34. Chu, Z.; Wang, L.; Zhou, X.; Shi, Y.; Cheng, Y.; Laiginhas, R.; Zhou, H.; Shen, M.; Zhang, Q.; de Sisternes, L.; et al. Automatic geographic atrophy segmentation using optical attenuation in OCT scans with deep learning. Biomed. Opt. Express 2022, 13, 1328–1343. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  35. Pramil, V.; de Sisternes, L.; Omlor, L.; Lewis, W.; Sheikh, H.; Chu, Z.; Manivannan, N.; Durbin, M.; Wang, R.K.; Rosenfeld, P.J.; et al. A Deep Learning Model for Automated Segmentation of Geographic Atrophy Imaged Using Swept-Source OCT. Ophthalmol. Retina 2023, 7, 127–141. [Google Scholar] [CrossRef] [PubMed]
  36. Kalra, G.; Cetin, H.; Whitney, J.; Yordi, S.; Cakir, Y.; McConville, C.; Whitmore, V.; Bonnay, M.; Lunasco, L.; Sassine, A.; et al. Machine Learning-Based Automated Detection and Quantification of Geographic Atrophy and Hypertransmission Defects Using Spectral Domain Optical Coherence Tomography. J. Pers. Med. 2023, 13, 37. [Google Scholar] [CrossRef] [PubMed]
  37. Ji, Z.; Chen, Q.; Niu, S.; Leng, T.; Rubin, D.L. Beyond Retinal Layers: A Deep Voting Model for Automated Geographic Atrophy Segmentation in SD-OCT Images. Transl. Vis. Sci. Technol. 2018, 7, 1. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  38. Manaswi, N.K. RNN and LSTM. In Deep Learning with Applications Using Python; Apress: Berkeley, CA, USA, 2018. [Google Scholar] [CrossRef]
  39. Calin, O. Deep Learning Architectures; Springer Nature: Cham, Switzerland, 2020; 555p, ISBN 978-3-030-36720-6. [Google Scholar]
  40. Graves, A.; Schmidhuber, J. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. In Advances in Neural Information Processing Systems 21, Proceedings of the Twenty-Second Annual Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 8–11 December 2008; MIT Press: Cambridge, MA, USA, 2009; pp. 545–552. [Google Scholar]
  41. Kugelman, J.; Alonso-Caneiro, D.; Read, S.A.; Vincent, S.J.; Collins, M.J. Automatic segmentation of OCT retinal boundaries using recurrent neural networks and graph search. Biomed. Opt. Express 2018, 9, 5759–5777. [Google Scholar] [CrossRef]
  42. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
  43. Monner, D.; Reggia, J.A. A generalized LSTM-like training algorithm for second-order recurrent neural networks. Neural Netw. 2012, 25, 70–83. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  44. Dixit, A.; Yohannan, J.; Boland, M.V. Assessing Glaucoma Progression Using Machine Learning Trained on Longitudinal Visual Field and Clinical Data. Ophthalmology 2021, 128, 1016–1026. [Google Scholar] [CrossRef]
  45. Lee, J.; Wanyan, T.; Chen, Q.; Keenan, T.D.L.; Glicksberg, B.S.; Chew, E.Y.; Lu, Z.; Wang, F. Predicting Age-related Macular Degeneration Progression with Longitudinal Fundus Images Using Deep Learning. In Machine Learning in Medical Imaging: 13th International Workshop, MLMI 2022, Held in Conjunction with MICCAI 2022, Singapore, 18 September 2022, Proceedings; Lian, C., Cao, X., Rekik, I., Xu, X., Cui, Z., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2022; Volume 13583. [Google Scholar] [CrossRef]
  46. Santeramo, R.; Withey, S.; Montana, G. Longitudinal detection of radiological abnormalities with time-modulated LSTM. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: 4th International Workshop, DLMIA 2018, and 8th International Workshop, ML-CDS 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, 20 September 2018, Proceedings; Springer International Publishing: Cham, Switzerland, 2018; pp. 326–333. [Google Scholar]
  47. Hong, X.; Lin, R.; Yang, C.; Zeng, N.; Cai, C.; Gou, J.; Yang, J. Predicting Alzheimer’s disease using LSTM. IEEE Access 2019, 7, 80893–80901. [Google Scholar] [CrossRef]
  48. Banerjee, I.; de Sisternes, L.; Hallak, J.A.; Leng, T.; Osborne, A.; Rosenfeld, P.J.; Gregori, G.; Durbin, M. Prediction of age-related macular degeneration disease using a sequential deep learning approach on longitudinal SD-OCT imaging biomarkers. Sci. Rep. 2020, 10, 15434. [Google Scholar] [CrossRef]
  49. Zhang, Y.; Zhang, X.; Ji, Z.; Niu, S.; Leng, T.; Rubin, D.L.; Yuan, S.; Chen, Q. An integrated time adaptive geographic atrophy prediction model for SD-OCT images. Med. Image Anal. 2021, 68, 101893. [Google Scholar] [CrossRef] [PubMed]
  50. Hernandez, M.; Medioni, G.G.; Hu, Z.; Sadda, S.R. Multimodal registration of multiple retinal images based on line structures. In Proceedings of the 2015 IEEE Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 5–9 January 2015; pp. 907–914. [Google Scholar]
  51. Sadda, J.; Abdelfattah, N.S.; Sadda, S.R.; Hu, Z. Inter-Grader Repeatability of Geographic Atrophy Measurements from Infrared Reflectance Images. Investig. Ophthalmol. Vis. Sci. 2018, 59, 3245. [Google Scholar]
  52. Abdelfattah, N.S.; Sadda, J.; Wang, Z.; Hu, Z.; Sadda, S. Near-Infrared Reflectance Imaging for Quantification of Atrophy Associated with Age-Related Macular Degeneration. Am. J. Ophthalmol. 2020, 212, 169–174. [Google Scholar] [CrossRef] [PubMed]
  53. Stojanov, D. Phylogenicity of B.1.1.7 surface glycoprotein, novel distance function and first report of V90T missense mutation in SARS-CoV-2 surface glycoprotein. Meta Gene 2021, 30, 100967. [Google Scholar] [CrossRef] [PubMed]
Figure 1. A schematic of the recurrent neural network architecture. The first T to T + N columns from the top blue squares to the bottom purple squares represent multiple encoding pathways, one for each timestep. At each layer of the encoding pathway, the outputs of the encoding pathway convolutions are combined using a 2D convolutional-LSTM layer. The output of this layer is concatenated with the input of the corresponding decoding pathway convolution, as shown in the right three columns.
Figure 2. The ensemble neural network structure. The OCT feature maps include traditional mean intensity maps and additional minimum intensity, maximum intensity, median intensity, standard deviation, skewness, kurtosis, gray-level entropy, and thickness of the EZ.
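The enface feature maps listed in the Figure 2 caption can each be obtained by collapsing the OCT intensities within the EZ band along the depth axis. The sketch below is a minimal NumPy illustration under our own assumptions: the sub-volume is already restricted to the EZ band, the histogram binning for gray-level entropy is arbitrary, and the EZ thickness map (which requires the segmented EZ boundaries) is omitted.

```python
import numpy as np

def ez_feature_maps(volume: np.ndarray, bins: int = 16) -> dict:
    """Per-pixel enface statistics of an OCT sub-volume within the EZ band.

    volume: array of shape (rows, cols, depth); axis 2 runs through depth.
    Returns a dict of 2D (rows, cols) feature maps.
    """
    mu = volume.mean(axis=2, keepdims=True)
    sigma = volume.std(axis=2, keepdims=True)
    # Standardized values; guard against zero variance (flat A-scans).
    z = (volume - mu) / np.where(sigma == 0, 1.0, sigma)
    maps = {
        "mean": mu[..., 0],
        "min": volume.min(axis=2),
        "max": volume.max(axis=2),
        "median": np.median(volume, axis=2),
        "std": sigma[..., 0],
        "skewness": (z ** 3).mean(axis=2),
        "kurtosis": (z ** 4).mean(axis=2) - 3.0,  # excess kurtosis
    }

    def gray_level_entropy(ascan: np.ndarray) -> float:
        # Shannon entropy of the A-scan's gray-level histogram.
        hist, _ = np.histogram(ascan, bins=bins)
        p = hist[hist > 0] / ascan.size
        return float(-(p * np.log2(p)).sum())

    maps["entropy"] = np.apply_along_axis(gray_level_entropy, 2, volume)
    return maps
```

Each resulting 2D map can then serve as one input channel of the ensemble network, in the spirit of the multiple recurrent learning channels described for ReConNet-Ensemble.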
Figure 3. A schematic of the prediction algorithm, where i in the subscript indicates the initial prediction results from ReConNet1 and f in the subscript indicates the final prediction results from ReConNet2.
Figure 4. Example results of ReConNet. Input FAF images and labels, initial prediction, final prediction, and ground truth comparison for ReConNet with Stargardt atrophy (Top) and GA (Bottom).
Figure 5. Example results after ReConNet-Ensemble. Input OCT feature maps, initial prediction, final prediction, and ground truth comparison for ensemble ReConNet with Stargardt atrophy. Input labels are not pictured.
Figure 6. Example results after ReConNet1-Interval. (Top) Input modified FAF images, labels, interval growth prediction, and ground truth comparison with Stargardt atrophy. (Middle and Bottom) Input-modified FAF images, labels, interval growth prediction, and ground truth comparison with GA. Eighteen-month FAF images are shown for reference.
Table 1. Results for ReConNet (Median/Mean (SD)).

| | | Accuracy | Dice Coefficient | Sensitivity | Specificity |
|---|---|---|---|---|---|
| Stargardt Atrophy | ReConNet1-Initial | 0.919/0.904 (0.072) | 0.568/0.577 (0.163) | 0.406/0.432 (0.163) | 0.998/0.996 (0.008) |
| | ReConNet2-Final | 0.980/0.973 (0.021) | 0.922/0.895 (0.086) | 0.876/0.840 (0.120) | 0.998/0.996 (0.007) |
| | p-value | <0.001 | <0.001 | <0.001 | 0.020 |
| GA | ReConNet1-Initial | 0.901/0.896 (0.060) | 0.867/0.827 (0.129) | 0.840/0.839 (0.107) | 0.971/0.938 (0.089) |
| | ReConNet2-Final | 0.928/0.919 (0.042) | 0.893/0.864 (0.113) | 0.964/0.945 (0.062) | 0.920/0.901 (0.077) |
| | p-value | <0.001 | <0.001 | <0.001 | <0.001 |
Table 2. Results for ReConNet-Ensemble (Median/Mean (SD)).

| | | Accuracy | Dice Coefficient | Sensitivity | Specificity |
|---|---|---|---|---|---|
| Stargardt Atrophy | ReConNet1-Ensemble-Initial | 0.955/0.920 (0.093) | 0.742/0.662 (0.238) | 0.691/0.630 (0.292) | 0.992/0.977 (0.032) |
| | ReConNet2-Ensemble-Final | 0.980/0.968 (0.033) | 0.906/0.882 (0.101) | 0.912/0.894 (0.086) | 0.991/0.983 (0.029) |
| | p-value | <0.001 | <0.001 | <0.001 | 0.140 |
Table 3. Results for ReConNet-Interval (Median/Mean (SD)).

| | | Accuracy | Dice Coefficient | Sensitivity | Specificity |
|---|---|---|---|---|---|
| Stargardt Atrophy | ReConNet1-Interval | 0.988/0.985 (0.009) | 0.559/0.557 (0.094) | 0.676/0.673 (0.145) | 0.993/0.991 (0.006) |
| GA | ReConNet1-Interval | 0.968/0.959 (0.025) | 0.601/0.612 (0.089) | 0.718/0.711 (0.155) | 0.981/0.975 (0.020) |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
