Article

Automated Cone Cell Identification on Adaptive Optics Scanning Laser Ophthalmoscope Images Based on TV-L1 Optical Flow Registration and K-Means Clustering

1 Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
2 Department of Biomedical Engineering, University of Science and Technology of China, Hefei 230041, China
3 Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(5), 2259; https://doi.org/10.3390/app11052259
Submission received: 15 January 2021 / Revised: 10 February 2021 / Accepted: 26 February 2021 / Published: 4 March 2021

Abstract

Cone cell identification is essential for diagnosing and studying eye diseases. In this paper, we propose an automated cone cell identification method that involves TV-L1 optical flow estimation and K-means clustering. The proposed algorithm consists of the following steps: image denoising based on TV-L1 optical flow registration, bias field correction, cone cell identification based on K-means clustering, duplicate identification removal, identification based on threshold segmentation, and merging of closed identified cone cells. Compared with manually labeled ground-truth images, the proposed method shows high effectiveness, with precision, recall, and F1 scores of 93.10%, 94.97%, and 94.03%, respectively. The method's performance is further evaluated on adaptive optics scanning laser ophthalmoscope images obtained from a healthy subject with low cone cell density and from subjects with either diabetic retinopathy or acute zonal occult outer retinopathy. The evaluation results demonstrate that the proposed method can accurately identify cone cells in subjects with healthy retinas and with retinal diseases.

1. Introduction

High-resolution in vivo retinal imaging facilitates the diagnosis and study of retinal diseases. Nevertheless, ocular aberrations limit the optical resolution of retinal imaging systems. To solve this problem, adaptive optics, which was originally developed to correct atmospheric imaging aberrations [1], has been applied to retinal imaging [2,3,4,5,6,7,8]. By integrating adaptive optics, a scanning laser ophthalmoscope, which is a widely used retinal imaging system, can achieve in vivo retinal imaging at the cellular level [4,9,10,11]. Thus, cone cells (one type of photoreceptor found in the retina) can be identified in adaptive optics scanning laser ophthalmoscope (AO-SLO) images. Although cone cells can be identified manually, doing so is time-consuming and labor-intensive. Thus, automated and semi-automated methods for cone cell identification have been devised [12,13,14,15,16,17,18,19,20,21,22,23,24,25].
Automated cone cell identification mainly comprises image denoising and target identification. Reliable denoising of AO-SLO images often relies on averaging multiple AO-SLO images. However, a directly averaged AO-SLO image is blurred by eye motion. To correct eye motion artifacts, the AO-SLO images must first be registered. Although hardware-based registration is effective [3,4,26,27,28,29,30], software-based registration is more commonly used because of its lower cost [31,32,33,34,35,36]. Among software-based methods, optical flow estimation provides high performance [36]. However, this registration method requires preprocessing that increases its complexity. To avoid preprocessing, we adopt TV-L1 optical flow [37] to achieve direct software-based AO-SLO image registration. For target identification, we use K-means clustering [38], which, despite being a common unsupervised learning algorithm, has not previously been applied to cone cell identification.
To verify the effectiveness of the proposed method, we compared it with manual labeling and computed three evaluation measures: recall, precision, and F1 score. We further evaluated the identification performance of the proposed method on AO-SLO images from a healthy subject with low cone cell density and from subjects with either diabetic retinopathy or acute zonal occult outer retinopathy, to the best of our knowledge for the first time.

2. Proposed Cone Cell Identification Method

Figure 1 shows the flowchart of the proposed cone cell identification method, which comprises six steps: (1) image denoising, (2) bias field correction, (3) cone cell identification based on K-means clustering [38], (4) duplicate identification removal, (5) identification based on threshold segmentation, and (6) merging of closed identified cone cells. First, AO-SLO images are denoised by averaging multiple images registered by TV-L1 optical flow estimation [37]. Second, bias field correction [39] is applied to the denoised image, and the corrected image is isotropically magnified four times via bicubic interpolation. Third, cone cells are roughly identified by K-means clustering [38]. Fourth, duplicate identification results are removed. Fifth, a threshold is calculated by using the identification results to segment the corrected image. The segmentation results are used to generate the cone cell identification results. Finally, the identification outcome is obtained by merging closed identified cone cells [16].

2.1. AO-SLO Image Denoising

The human eye has a limited resilience to light exposure. Therefore, to prevent optical damage, the illumination used for AO-SLO imaging should be kept at low power. However, low light levels might lead to a low signal-to-noise ratio in AO-SLO images. To increase this ratio, multiple registered AO-SLO images can be averaged. To this end, we adopt TV-L1 optical flow estimation [37] for image registration and then average the registered images after screening. The flowchart of image denoising based on registered images is shown in Figure 2. First, the middle image in a time sequence is selected as the reference for the registration of the other images. Second, TV-L1 optical flow registration is applied to each image to be registered. Third, the image registration is evaluated: registered images whose structural similarity index [40] with respect to the reference image is below 0.5 under masks are discarded. If the number of remaining registered images is below one-fifth of the number of acquired images, the algorithm selects the image next closest to the middle of the time sequence as the new reference image. When the number of successfully registered images is sufficient, they are averaged along with the reference image under masks to obtain the denoised AO-SLO image.
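The screening-and-averaging step above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the TV-L1 registration is assumed to have been run already, and a simplified single-window structural similarity stands in for the masked windowed index of [40].

```python
import numpy as np

def global_ssim(a, b):
    """Simplified global structural similarity: one window over the whole
    image, not the full windowed SSIM of Wang et al. [40]."""
    c1, c2 = (0.01 * 255) ** 2, (0.03 * 255) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))

def screen_and_average(reference, registered, threshold=0.5, min_fraction=0.2):
    """Keep registered frames whose similarity to the reference is at least
    the threshold, then average the survivors together with the reference.
    Returns None when fewer than min_fraction of the frames survive, in
    which case the caller should pick a new reference frame, as described
    in the text."""
    kept = [r for r in registered if global_ssim(reference, r) >= threshold]
    if len(kept) < min_fraction * len(registered):
        return None
    return np.mean([reference] + kept, axis=0)
```

A well-registered frame scores close to 1 on this index, while an unrelated frame scores near 0, so the 0.5 cutoff separates them cleanly.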
Figure 3 shows an example of image denoising on a representative AO-SLO image patch using the proposed method. Note that image noise is markedly reduced after denoising, indicating the high effectiveness of the method.

2.2. Bias Field Correction

To mitigate intensity differences across the cone cells in an AO-SLO image and improve the identification accuracy, we apply bias field correction [39] to denoised AO-SLO images. First, an intensity bias field image is created by applying a Gaussian filter to the denoised image:
Bias field image = Gaussian filter(Denoised image)
The denoised image is corrected by extracting the bias field image [39]:
Bias field corrected image(x, y) = Mean(Bias field image) × Denoised image(x, y) / Bias field image(x, y)
Figure 4 illustrates bias field correction performed on a representative AO-SLO image. The intensity of the cone cells is more uniform after bias field correction. To accurately identify the cone cells, we also isotropically magnify the corrected image four times via bicubic interpolation after bias field correction and before identification.
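The two formulas above can be sketched in a few lines with SciPy's Gaussian filter; the filter width below is an illustrative value, not one reported in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_bias_field(denoised, sigma=15.0):
    """Estimate the bias field with a wide Gaussian filter and divide it
    out, rescaling by the field's mean so overall brightness is kept.
    sigma is an assumed illustrative value."""
    bias = gaussian_filter(denoised.astype(float), sigma)
    return bias.mean() * denoised / bias
```

Dividing by the smoothed image removes slow illumination trends while leaving the cell-scale structure, which varies faster than the filter width, almost untouched.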

2.3. Identification Based on K-Means Clustering

K-means clustering [38] is the most time-consuming operation in the proposed cone cell identification method. To reduce its computational time, we divide the bias-field-corrected image into image patches before identification. We then identify the cone cells on each patch separately and join the identification results afterward. To accurately identify cone cells near the edges of the image patches, we add extra borders of 10% of the patch length before identification. These borders only support identification within the patches; cone cells identified inside a border are discarded from the final results.
For identification on each image patch, we first apply histogram equalization. Second, the pixels are divided into three clusters by K-means according to their intensity. Third, we generate a mask for the cluster that contains the largest number of pixels, since this cluster corresponds to either the cone cells or the background. Fourth, we extract the contours of this mask by using function findContours in the Python implementation of OpenCV. Finally, we obtain the centroid of the area inside each contour to identify the cone cells.
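A minimal sketch of this per-patch procedure is given below. It substitutes a plain 1-D K-means and SciPy connected-component labeling for OpenCV's kmeans and findContours, and, as a simplification, keeps the brightest cluster rather than deciding between cells and background as the text describes.

```python
import numpy as np
from scipy import ndimage

def kmeans_1d(values, k=3, iters=20, seed=0):
    """Plain 1-D K-means on pixel intensities (a stand-in for cv2.kmeans)."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        labels = np.abs(values[:, None] - centers[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

def identify_cones(patch, k=3):
    """Cluster pixels by intensity and return the centroids of connected
    regions in the brightest cluster (a simplification of the paper's
    largest-cluster rule)."""
    # Histogram equalization via the cumulative distribution
    flat = patch.ravel().astype(int)
    hist = np.bincount(flat, minlength=256)
    cdf = hist.cumsum() / flat.size
    eq = (cdf[flat] * 255).reshape(patch.shape)
    labels, centers = kmeans_1d(eq.ravel(), k)
    mask = (labels == centers.argmax()).reshape(patch.shape)
    lbl, n = ndimage.label(mask)
    return ndimage.center_of_mass(mask, lbl, range(1, n + 1))
```

On a synthetic patch with two bright blobs on a dark background, this returns one centroid per blob.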
Figure 5 shows an example of cone cell identification based on K-means clustering on a representative image patch. The cone cells are roughly identified. Nevertheless, overidentification errors occur, as some cone cells are identified more than once.

2.4. Duplicate Identification Removal

To correct most of these overidentification errors, we remove duplicate identification results with low intensities. Specifically, we first divide the identification results into groups such that the maximum pairwise distance within each group is below a threshold. Second, within each group, we keep only the identification result with the highest intensity and remove the likely duplicates. Thus, exactly one identification result is preserved per group.
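A greedy variant of this step can be sketched as follows; the paper groups detections by maximum pairwise distance, whereas this sketch simply keeps a detection only if it lies farther than the threshold from every brighter detection already kept.

```python
import numpy as np

def remove_duplicates(points, intensities, dist_thresh=3.0):
    """Greedy duplicate removal: visit detections from brightest to
    dimmest and keep one only if it is farther than dist_thresh from
    every detection already kept. Returns the kept indices.
    A simplified stand-in for the grouping described in the text."""
    pts = np.asarray(points, float)
    order = np.argsort(intensities)[::-1]
    kept = []
    for i in order:
        if all(np.hypot(*(pts[i] - pts[j])) > dist_thresh for j in kept):
            kept.append(i)
    return sorted(kept)
```

For example, two detections 1.4 pixels apart collapse to the brighter one, while a distant detection survives.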
Figure 6 shows an example of duplicate identification result removal using the image patch shown in Figure 5. The overidentification shown in Figure 5 is corrected after duplicate identification removal.

2.5. Identification Based on Threshold Segmentation

To further improve the accuracy of cone cell identification at high-intensity locations, we apply threshold segmentation to the bias-field-corrected image (Section 2.2). The threshold is determined as follows: (1) we generate a mask that includes the identification results obtained in Section 2.4 and their surroundings whose locations are within 4 pixels from them; (2) the mean value of the bias-field-corrected image under the mask is calculated; (3) this mean value is set as the threshold for segmentation. After segmenting the bias-field-corrected image using this threshold, we extract the contours of the segmentation results by using function findContours of OpenCV and obtain the centroid of the area inside each contour to identify the cone cells.
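The threshold computation in steps (1)–(3) can be sketched as below; using a square (Chebyshev) neighborhood for "within 4 pixels" is an assumption, since the text does not specify the distance metric.

```python
import numpy as np

def detection_threshold(image, points, radius=4):
    """Mean intensity of the bias-field-corrected image within `radius`
    pixels of any detection (square neighborhood assumed), used as the
    segmentation threshold."""
    mask = np.zeros(image.shape, dtype=bool)
    for y, x in points:
        mask[max(0, y - radius):y + radius + 1,
             max(0, x - radius):x + radius + 1] = True
    return image[mask].mean()
```

The image is then binarized against this threshold, and centroids of the resulting regions give the refined identifications, as in the clustering step.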
Figure 7 shows an example of threshold segmentation applied to a representative image patch. The cone cells are more accurately identified after threshold segmentation.

2.6. Merging of Closed Identification Results

After threshold segmentation, some cone cells are identified more than once. To mitigate duplicate identification, we merge closed identified cone cells by using the method in [16], which combines several closed identification results into one. Specifically, morphological dilation is applied to the identification results (function morphologyEx of OpenCV), and the centroid of each resulting region is obtained. Then, closed identified cone cells are combined, yielding a refined identification near the middle of the previously identified closed cone cells.
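This merging step can be sketched as follows, with SciPy's binary dilation standing in for OpenCV's morphologyEx; the number of dilation iterations is an illustrative choice.

```python
import numpy as np
from scipy import ndimage

def merge_close_detections(points, shape, dilate_iters=2):
    """Merge nearby detections: paint each detection as a single pixel,
    dilate so nearby marks fuse into one connected region, then take one
    centroid per region."""
    marks = np.zeros(shape, dtype=bool)
    for y, x in points:
        marks[y, x] = True
    fused = ndimage.binary_dilation(marks, iterations=dilate_iters)
    lbl, n = ndimage.label(fused)
    return ndimage.center_of_mass(fused, lbl, range(1, n + 1))
```

Two marks two pixels apart fuse into one region whose centroid lies midway between them, while an isolated mark is returned unchanged.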
Figure 8 shows an example of merging closed identified cone cells on a representative image patch. Closed cone cells are merged, thus improving the identification results.

3. Results

An AO-SLO system with an acquisition rate of 30 Hz was used for imaging the posterior part of the eye of human subjects. The field of view and frame size are 1.5° and 512 × 449 pixels, respectively. Thus, a transverse region of 445 × 445 µm was scanned, assuming an effective focal length of 17 mm for the eye. More details of the AO-SLO system can be found in [41]. Before imaging, the pupil was dilated with 1% tropicamide and 2.5% phenylephrine hydrochloride to increase its diameter to 6–8 mm. Throughout the procedure, light exposure was maintained within the maximum permissible exposure limits specified by the American National Standards Institute [42].
The automated processing of a 100 × 100-pixel image required 8.72 s for denoising based on TV-L1 optical flow registration, 0.01 s for bias field correction, 0.52 s for identification based on K-means clustering, 0.01 s for duplicate identification removal, 1.56 s for identification based on threshold segmentation, and 1.65 s for merging of closed identification results. These computational times were obtained on a system equipped with an Intel Core i5-9400 CPU at 2.90 GHz, an NVIDIA GeForce GTX 1660 Ti graphics card, and 16.0 GB of memory. The algorithms were implemented in 64-bit Python: the numerical calculations mainly used the NumPy and SciPy libraries, and the computer vision algorithms, especially K-means, mainly used the OpenCV library.
To verify the effectiveness of the proposed method for cone cell identification, we acquired five retinal images around the foveola of healthy subjects (n = 5). The proposed method successfully identified the cone cells from the five subjects. The identification on three representative images is shown in Figure 9. By regarding manual cone cell identification as the ground truth, the overall precision, recall, and F1 score of the identification are listed in Table 1. The proposed method provides high identification accuracy near the foveola of the healthy subjects.
We further evaluated the performance of the proposed method on different types of AO-SLO images. Figure 10 shows three examples of AO-SLO images [43,44] and their corresponding cone cell identification results. The first example (Figure 10(a1,b1)) shows an AO-SLO image of a healthy eye with a cone cell density below that of the examples shown in Figure 9. The examples in Figure 10(a2,b2) and Figure 10(a3,b3) show AO-SLO images of an eye with diabetic retinopathy [43] and an eye with acute zonal occult outer retinopathy [44], respectively, along with the corresponding cone cell identification results. The proposed method accurately identifies cone cells in AO-SLO images of eyes with low cone cell density, diabetic retinopathy, and acute zonal occult outer retinopathy.

4. Discussion

One future direction for AO-SLO image denoising is to apply deep learning-based image registration and then average the registered images. The deep learning registration model could be trained on natural images and applied to AO-SLO images. For automated cone cell identification, other unsupervised clustering methods, which are known for their high accuracy in image segmentation, could be applied to AO-SLO images in an attempt to obtain highly accurate cone cell identification results in some pathological cases.

5. Conclusions

We propose an automated cone cell identification method that uses TV-L1 optical flow registration and K-means clustering identification on AO-SLO images. The proposed method successfully achieves denoising based on TV-L1 optical flow registration, bias field correction, identification based on K-means clustering, and merging of closed identified cone cells, as verified experimentally. To evaluate the performance of the proposed method, we compared its automated cone cell identification with manual labeling. The proposed method achieves precision, recall, and F1 scores of 93.10%, 94.97%, and 94.03%, respectively. Furthermore, the proposed method exhibits high-performance cone cell identification on AO-SLO images of eyes with low cone cell density, diabetic retinopathy, and acute zonal occult outer retinopathy.

Author Contributions

Conceptualization, Y.C.; methodology, Y.C., and Y.H.; software, Y.C.; validation, Y.C.; formal analysis, Y.C.; investigation, Y.C.; resources, Y.H., G.S.; data curation, Y.H.; writing—original draft preparation, Y.C.; writing—review and editing, J.W., W.L., X.Z., and Y.H.; visualization, Y.C.; supervision, Y.C.; project administration, L.X. and X.Z.; funding acquisition, G.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Natural Science Foundation of Jiangsu Province (BK20200214); National Key R&D Program of China (2017YFB0403701); Jiangsu Province Key R&D Program (BE2019682, BE2018667); National Natural Science Foundation of China (61605210, 61675226, 61378090); Youth Innovation Promotion Association of Chinese Academy of Sciences (2019320); Frontier Science Research Project of the Chinese Academy of Sciences (QYZDB-SSW-JSC03); Strategic Priority Research Program of the Chinese Academy of Sciences (XDB02060000).

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Review Board of Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data are not publicly available due to them containing information that could compromise research participant privacy.

Acknowledgments

The authors wish to thank the anonymous reviewers for their valuable suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Babcock, H.W. The Possibility of Compensating Astronomical Seeing. Publ. Astron. Soc. Pac. 1953, 65, 229. [Google Scholar] [CrossRef]
  2. Roorda, A.; Romero-Borja, F.; Donnelly, W.J., III; Queener, H.; Hebert, T.J.; Campbell, M.C. Adaptive Optics Scanning Laser Ophthalmoscopy. Opt. Express 2002, 10, 405–412. [Google Scholar] [CrossRef]
  3. Burns, S.A.; Tumbar, R.; Elsner, A.E.; Ferguson, D.; Hammer, D.X. Large-Field-of-View, Modular, Stabilized, Adaptive-Optics-Based Scanning Laser Ophthalmoscope. J. Opt. Soc. Am. 2007, 24, 1313–1326. [Google Scholar] [CrossRef] [Green Version]
  4. Ferguson, R.D.; Zhong, Z.; Hammer, D.X.; Mujat, M.; Patel, A.H.; Deng, C.; Zou, W.; Burns, S.A. Adaptive Optics Scanning Laser Ophthalmoscope with Integrated Wide-Field Retinal Imaging and Tracking. J. Opt. Soc. Am. 2010, 27, A265–A277. [Google Scholar] [CrossRef]
  5. Dubra, A.; Sulai, Y. Reflective Afocal Broadband Adaptive Optics Scanning Ophthalmoscope. Biomed. Opt. Express 2011, 2, 1757–1768. [Google Scholar] [CrossRef] [Green Version]
  6. Shemonski, N.D.; South, F.A.; Liu, Y.-Z.; Adie, S.G.; Carney, P.S.; Boppart, S.A. Computational High-Resolution Optical Imaging of the Living Human Retina. Nat. Photon. 2015, 9, 440–443. [Google Scholar] [CrossRef] [Green Version]
  7. Lu, J.; Gu, B.; Wang, X.; Zhang, Y. High Speed Adaptive Optics Ophthalmoscopy with an Anamorphic Point Spread Function. Opt. Express 2018, 26, 14356–14374. [Google Scholar] [CrossRef] [PubMed]
  8. Mozaffari, S.; Jaedicke, V.; LaRocca, F.; Tiruveedhula, P.; Roorda, A. Versatile Multi-Detector Scheme for Adaptive Optics Scanning Laser Ophthalmoscopy. Biomed. Opt. Express 2018, 9, 5477–5488. [Google Scholar] [CrossRef] [PubMed]
  9. Dreher, A.W.; Bille, J.F.; Weinreb, R.N. Active Optical Depth Resolution Improvement of the Laser Tomographic Scanner. Appl. Opt. 1989, 28, 804–808. [Google Scholar] [CrossRef] [PubMed]
  10. Liang, J.; Williams, D.R.; Miller, D.T. Supernormal Vision and High-Resolution Retinal Imaging through Adaptive Optics. J. Opt. Soc. Am. A 1997, 14, 2884–2892. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  11. Hofer, H.; Chen, L.; Yoon, G.Y.; Singer, B.; Yamauchi, Y.; Williams, D.R. Improvement in Retinal Image Quality with Dynamic Correction of the Eye’s Aberrations. Opt. Express 2001, 8, 631–643. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  12. Turpin, A.; Morrow, P.; Scotney, B.; Anderson, R.; Wolsley, C. Automated Identification of Photoreceptor Cones Using Multi-Scale Modelling and Normalized Cross-Correlation. In Proceedings of the International Conference on Image Analysis and Processing, Ravenna, Italy, 14–16 September 2011; pp. 494–503. [Google Scholar]
  13. Cunefare, D.; Cooper, R.F.; Higgins, B.; Katz, D.F.; Dubra, A.; Carroll, J.; Farsiu, S. Automatic Detection of Cone Photoreceptors in Split Detector Adaptive Optics Scanning Light Ophthalmoscope Images. Biomed. Opt. Express 2016, 7, 2036–2050. [Google Scholar] [CrossRef] [Green Version]
  14. Bukowska, D.M.; Chew, A.L.; Huynh, E.; Kashani, I.; Wan, S.L.; Wan, P.M.; Chen, F.K. Semi-automated Identification of Cones in the Human Retina Using Circle Hough Transform. Biomed. Opt. Express 2015, 6, 4676–4693. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Chiu, S.J.; Lokhnygina, Y.; Dubis, A.M.; Dubra, A.; Carroll, J.; Izatt, J.A.; Farsiu, S. Automatic Cone Photoreceptor Segmentation Using Graph Theory and Dynamic Programming. Biomed. Opt. Express 2013, 4, 924–937. [Google Scholar] [CrossRef] [Green Version]
  16. Li, K.Y.; Roorda, A. Automated Identification of Cone Photoreceptors in Adaptive Optics Retinal Images. J. Opt. Soc. Am. A 2007, 24, 1358–1363. [Google Scholar] [CrossRef]
  17. Liu, J.; Jung, H.; Dubra, A.; Tam, J. Automated Photoreceptor Cell Identification on Nonconfocal Adaptive Optics Images Using Multiscale Circular Voting. Investig. Opthalmol. Vis. Sci. 2017, 58, 4477–4489. [Google Scholar] [CrossRef] [Green Version]
  18. Chen, Y.; He, Y.; Wang, J.; Li, W.; Xing, L.; Gao, F.; Shi, G. Automated Superpixels-Based Identification and Mosaicking of Cone Photoreceptor Cells for Adaptive Optics Scanning Laser Ophthalmoscope. Chin. Opt. Lett. 2020, 18, 101701. [Google Scholar] [CrossRef]
  19. Chen, Y.; He, Y.; Wang, J.; Li, W.; Xing, L.; Gao, F.; Shi, G. Automated Cone Photoreceptor Cell Segmentation and Identification in Adaptive Optics Scanning Laser Ophthalmoscope Images Using Morphological Processing and Watershed Algorithm. IEEE Access 2020, 8, 105786–105792. [Google Scholar] [CrossRef]
  20. Cunefare, D.; Langlo, C.S.; Patterson, E.J.; Blau, S.; Dubra, A.; Carroll, J.; Farsiu, S. Deep Learning Based Detection of Cone Photoreceptors with Multimodal Adaptive Optics Scanning Light Ophthalmoscope Images of Achromatopsia. Biomed. Opt. Express 2018, 9, 3740–3756. [Google Scholar] [CrossRef] [PubMed]
  21. Cunefare, D.; Huckenpahler, A.L.; Patterson, E.J.; Dubra, A.; Carroll, J.; Farsiu, S. RAC-CNN: Multimodal Deep Learning Based Automatic Detection and Classification of Rod and Cone Photoreceptors in Adaptive Optics Scanning Light Ophthalmoscope Images. Biomed. Opt. Express 2019, 10, 3815–3832. [Google Scholar] [CrossRef]
  22. Hamwood, J.; Alonso-Caneiro, D.; Sampson, D.M.; Collins, M.J.; Chen, F.K. Automatic Detection of Cone Photoreceptors with Fully Convolutional Networks. Transl. Vis. Sci. Technol. 2019, 8, 10. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  23. Cunefare, D.; Fang, L.; Cooper, R.F.; Dubra, A.; Carroll, J.; Farsiu, S. Open Source Software for Automatic Detection of Cone Photoreceptors in Adaptive Optics Ophthalmoscopy Using Convolutional Neural Networks. Sci. Rep. 2017, 7, 6620. [Google Scholar] [CrossRef] [PubMed]
  24. Davidson, B.; Kalitzeos, A.; Carroll, J.; Dubra, A.; Ourselin, S.; Michaelides, M.; Bergeles, C. Automatic Cone Photoreceptor Localisation in Healthy and Stargardt Afflicted Retinas Using Deep Learning. Sci. Rep. 2018, 8, 1–13. [Google Scholar] [CrossRef]
  25. Bergeles, C.; Dubis, A.M.; Davidson, B.; Kasilian, M.; Kalitzeos, A.; Carroll, J.; Dubra, A.; Michaelides, M.; Ourselin, S. Unsupervised Identification of Cone Photoreceptors in Non-confocal Adaptive Optics Scanning Light Ophthalmoscope Images. Biomed. Opt. Express 2017, 8, 3081–3094. [Google Scholar] [CrossRef] [PubMed]
  26. Hammer, D.X.; Ferguson, R.D.; Magill, J.C.; White, M.A.; Elsner, A.E.; Webb, R.H. Image Stabilization for Scanning Laser Ophthalmoscopy. Opt. Express 2002, 10, 1542–1549. [Google Scholar] [CrossRef] [Green Version]
  27. Ferguson, R.D.; Hammer, D.X.; Burns, S.A.; Elsner, A.E. Retinal Hemodynamic Imaging with the TSLO. Investig. Ophthalmol. Vis. Sci. 2004, 45, 1137. [Google Scholar]
  28. Ferguson, R.D.; Hammer, D.X.; Elsner, A.E.; Webb, R.H.; Burns, S.A.; Weiter, J.J. Wide-Field Retinal Hemodynamic Imaging with the Tracking Scanning Laser Ophthalmoscope. Opt. Express 2004, 12, 5198–5208. [Google Scholar] [CrossRef]
  29. Hammer, D.X.; Ferguson, R.D.; Bigelow, C.E.; Iftimia, N.V.; Ustun, T.E.; Burns, S.A. Adaptive Optics Scanning Laser Ophthalmoscope for Stabilized Retinal Imaging. Opt. Express 2006, 14, 3354–3367. [Google Scholar] [CrossRef] [Green Version]
  30. Yang, Q.; Zhang, J.; Nozato, K.; Saito, K.; Williams, D.R.; Roorda, A.; Rossi, E.A. Closed-Loop Optical Stabilization and Digital Image Registration in Adaptive Optics Scanning Light Ophthalmoscopy. Biomed. Opt. Express 2014, 5, 3174–3191. [Google Scholar] [CrossRef] [Green Version]
  31. Mujat, M.; Patel, A.; Iftimia, N.; Akula, J.D.; Fulton, A.B.; Ferguson, R.D. High-Resolution Retinal Imaging: Enhancement Techniques. In Proceedings of the Ophthalmic Technologies XXV, SPIE BiOS, San Francisco, CA, USA, 11 March 2015; p. 930703. [Google Scholar]
  32. Vogel, C.R.; Arathorn, D.W.; Roorda, A.; Parker, A. Retinal Motion Estimation in Adaptive Optics Scanning Laser Ophthalmoscopy. Opt. Express 2006, 14, 487–497. [Google Scholar] [CrossRef]
  33. Sheehy, C.K.; Yang, Q.; Arathorn, D.W.; Tiruveedhula, P.; De Boer, J.F.; Roorda, A. High-Speed, Image-Based Eye Tracking with a Scanning Laser Ophthalmoscope. Biomed. Opt. Express 2012, 3, 2611–2622. [Google Scholar] [CrossRef] [Green Version]
  34. Chen, H.; He, Y.; Wei, L.; Yang, J.; Li, X.; Shi, G.; Zhang, Y. Polynomial Transformation Model for Frame-to-Frame Registration in an Adaptive Optics Confocal Scanning Laser Ophthalmoscope. Biomed. Opt. Express 2019, 10, 4589–4606. [Google Scholar] [CrossRef]
  35. Li, H.; Lu, J.; Shi, G.; Zhang, Y. Tracking Features in Retinal Images of Adaptive Optics Confocal Scanning Laser Ophthalmoscope Using KLT-SIFT Algorithm. Biomed. Opt. Express 2010, 1, 31–40. [Google Scholar] [CrossRef] [Green Version]
  36. Chen, Y.; He, Y.; Wang, J.; Li, W.; Xing, L.; Gao, F.; Shi, H.G. Automated Optical Flow Based Registration for Adaptive Optics Scanning Laser Ophthalmoscope. IEEE Photon J. 2019, 12, 1–9. [Google Scholar] [CrossRef]
  37. Pérez, J.S.; Meinhardt-Llopis, E.; Facciolo, G. TV-L1 Optical Flow Estimation. Image Process. Line 2013, 3, 137–150. [Google Scholar] [CrossRef] [Green Version]
  38. Hartigan, J.A.; Manchek, A.W. Algorithm AS 136: A K-Means Clustering Algorithm. J. R. Stat. Soc. Ser. C Appl. Stat. 1979, 28, 100–108. [Google Scholar] [CrossRef]
  39. Zang, P.; Liu, G.; Zhang, M.; Dongye, C.; Wang, J.; Pechauer, A.D.; Hwang, T.S.; Wilson, D.J.; Huang, D.; Li, D.; et al. Automated Motion Correction Using Parallel-Strip Registration for Wide-Field en Face OCT Angiogram. Biomed. Opt. Express 2016, 7, 2823–2836. [Google Scholar] [CrossRef] [Green Version]
  40. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  41. Wang, Y.; He, Y.; Wei, L.; Li, X.; Yang, J.; Zhou, H.; Zhang, Y.; Wang, Y. Bimorph Deformable Mirror Based Adaptive Optics Scanning Laser Ophthalmoscope for Retina Imaging in Vivo. Chin. Opt. Lett. 2017, 15, 121102. [Google Scholar] [CrossRef]
  42. American National Standards Institute. American National Standard for Safe Use of Lasers; Laser Institute of America: Orlando, FL, USA, 2007. [Google Scholar]
  43. Burns, S.A.; Elsner, A.E.; Sapoznik, K.A.; Warner, R.L.; Gast, T.J. Adaptive Optics Imaging of the Human Retina. Prog. Retin. Eye Res. 2019, 68, 1–30. [Google Scholar] [CrossRef] [PubMed]
  44. Merino, D.; Duncan, J.L.; Tiruveedhula, P.; Roorda, A. Observation of Cone and Rod Photoreceptors in Normal Subjects and Patients Using a New Generation Adaptive Optics Scanning Laser Ophthalmoscope. Biomed. Opt. Express 2011, 2, 2189–2201. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. Flowchart of proposed method for automated cone cell identification.
Figure 2. Flowchart of adaptive optics scanning laser ophthalmoscope (AO-SLO) image denoising.
Figure 3. AO-SLO image (a) before and (b) after denoising using proposed method based on TV-L1 optical flow registration.
Figure 4. Bias field correction. (a) Denoised AO-SLO image patch. (b) Bias field image. (c) Bias-field-corrected image.
Figure 5. Cone cell identification based on K-means clustering. (a) Denoised AO-SLO image patch. (b) Identified cone cells marked in red on denoised AO-SLO image patch.
Figure 6. Duplicate identification removal. Identified cone cells are marked in red on the denoised AO-SLO image patch (a) before and (b) after duplicate identification removal.
Figure 7. Cone cell identification based on threshold segmentation. (a) Denoised AO-SLO image patch. Identification results from (b) the method in Section 2.4 (red dots), (c) threshold segmentation (green dots), and (d) both methods on the denoised AO-SLO image patch.
Figure 8. Merging of closed identified cone cells. (a) Denoised AO-SLO image patch. Identification results (b) before and (c) after merging (green dots) on the denoised AO-SLO image patch.
Figure 9. Cone cell identification by the proposed method. (a) Input AO-SLO images and (b) corresponding identification results for (1) healthy subject 1, (2) healthy subject 2, and (3) healthy subject 3.
Figure 10. Cone cell identification using the proposed method. (a) Input AO-SLO images of healthy eye with low cone cell density (1), eye with diabetic retinopathy (2), and eye with acute zonal occult outer retinopathy (3). (b) Corresponding cone cell identification results.
Table 1. Performance measures of cone cell identification.
Measure      Description                                            Value
Precision    Percentage of actual cells among the identified cells  93.10%
Recall       Percentage of actual cells that were identified        94.97%
F1 score     2 × Precision × Recall / (Precision + Recall)          94.03%
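As a quick consistency check, the reported F1 score follows directly from the listed precision and recall:

```python
# Precision and recall reported in Table 1
precision, recall = 93.10, 94.97
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))  # 94.03
```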
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Chen, Y.; He, Y.; Wang, J.; Li, W.; Xing, L.; Zhang, X.; Shi, G. Automated Cone Cell Identification on Adaptive Optics Scanning Laser Ophthalmoscope Images Based on TV-L1 Optical Flow Registration and K-Means Clustering. Appl. Sci. 2021, 11, 2259. https://doi.org/10.3390/app11052259

