Review

Artificial-Intelligence-Enhanced Analysis of In Vivo Confocal Microscopy in Corneal Diseases: A Review

by Katarzyna Kryszan 1,2,*, Adam Wylęgała 1,3, Magdalena Kijonka 1,2, Patrycja Potrawa 2, Mateusz Walasz 2, Edward Wylęgała 1,2 and Bogusława Orzechowska-Wylęgała 4

1 Chair and Clinical Department of Ophthalmology, School of Medicine in Zabrze, Medical University of Silesia in Katowice, District Railway Hospital, 40-760 Katowice, Poland
2 Department of Ophthalmology, District Railway Hospital in Katowice, 40-760 Katowice, Poland
3 Health Promotion and Obesity Management, Pathophysiology Department, Medical University of Silesia in Katowice, 40-752 Katowice, Poland
4 Department of Pediatric Otolaryngology, Head and Neck Surgery, Chair of Pediatric Surgery, Medical University of Silesia, 40-760 Katowice, Poland
* Author to whom correspondence should be addressed.
Diagnostics 2024, 14(7), 694; https://doi.org/10.3390/diagnostics14070694
Submission received: 8 February 2024 / Revised: 13 March 2024 / Accepted: 22 March 2024 / Published: 26 March 2024
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Image Analysis—2nd Edition)

Abstract

Artificial intelligence (AI) has seen significant progress in medical diagnostics, particularly in image and video analysis. This review focuses on the application of AI to the analysis of in vivo confocal microscopy (IVCM) images of corneal diseases. The cornea, as an exposed and delicate part of the body, necessitates precise diagnosis of various conditions. Convolutional neural networks (CNNs), a key component of deep learning, are a powerful tool for image data analysis. This review highlights AI applications in diagnosing keratitis, dry eye disease, and diabetic corneal neuropathy. It discusses the potential of AI in detecting infectious agents, analyzing corneal nerve morphology, and identifying the subtle changes in nerve fiber characteristics in diabetic corneal neuropathy. Challenges remain, however, including limited datasets, overfitting, low-quality images, and unrepresentative training data; this review explores augmentation techniques and the importance of feature engineering to address them. Further obstacles persist, such as the "black-box" nature of AI models and the resulting need for explainable AI (XAI). Expanding datasets, fostering collaborative efforts, and developing user-friendly AI tools are crucial for enhancing the acceptance and integration of AI into clinical practice.

1. Introduction

Artificial intelligence (AI) is increasingly entering medicine all over the world. AI algorithms were first approved for healthcare use in 1995, when Neuromedical Systems, Inc. (NSI) (Suffern, NY, USA) developed the PAPNET® Testing System to rescreen cervical smears. According to data from 19 October 2023, the Food and Drug Administration (FDA) has since approved approximately 700 AI algorithms for medical purposes [1]. These algorithms mostly relate to image and video analysis, as a great deal of information that the human eye may miss can be extracted from an image using computer vision methods.
The cornea is a transparent part of the anterior segment of the eye that provides two-thirds of the eye’s focusing power and enables clear vision. However, as the first barrier against harmful environmental factors, it can be exposed to physical, chemical, and biological damage.
In vivo confocal microscopy (IVCM) captures cross-sectional images of the cornea with a thickness of several micrometers, which facilitates a noninvasive examination of every layer. Although artificial intelligence (AI) models have proven beneficial in various ophthalmological applications [2,3], the development of deep-learning-based systems in anterior eye segment diagnostics, especially when using IVCM images, still faces many challenges. A regular analysis of scientific reports and studies is crucial to enhance the awareness and understanding of AI, as it enables subsequent investigators to improve the performance, explainability, repeatability, and safety of their studies.
The aim of this review is to summarize and present the AI-assisted IVCM devices that have been developed over the last few years for keratitis, dry eye disease, and diabetic corneal neuropathy diagnostics.

2. Convolutional Neural Network Architecture

As is well known, convolutional neural networks (CNNs) are the most useful deep learning networks for image data analysis. Rather than being told by a human which part of an image matters for classification, the algorithm learns this by itself from a large database of examples [4]. One of the goals of artificial intelligence is to enable machines to observe the world in a way similar to humans, which is made possible by neural networks. Neural networks are mathematical structures inspired by the neurons found in the human brain, and their most common application is image processing [5]. A CNN model takes an input image as an array of pixels, processes it, and finally classifies it into a certain category [6].
A general CNN model consists of four components: the convolutional layer, pooling layer, activation function layer, and fully connected layer [7]. In the convolutional layer, the main mathematical operation performed is convolution: a transformation of two functions that produces a third one expressing how the shape of the first function is modified by the second [8]. An activation function (e.g., the rectified linear unit, ReLU, or the sigmoid function) is then applied after each convolution operation; this step enables the network to find nonlinear relationships between the features in the image [9]. The pooling layer reduces the dimensions of the feature arrays, which speeds up computation [10,11]. The fully connected layer represents the global information of the input object and ultimately identifies which class the image belongs to [12]. At this stage, an activation function applied to the last fully connected layer is used for the multiclass classification task. The most common is the SoftMax function, which normalizes the output vector of the last fully connected layer into probabilities of the target classes (each value ranging from 0 to 1) [13].
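To make these four components concrete, the following is a minimal sketch in PyTorch; the grayscale input, layer sizes, and two-class output are illustrative assumptions, not taken from any of the reviewed studies.

```python
import torch
import torch.nn as nn

class MinimalCNN(nn.Module):
    """Toy CNN with the four canonical components described above."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            # Convolutional layer: slides learned filters over the image.
            nn.Conv2d(in_channels=1, out_channels=16, kernel_size=3, padding=1),
            nn.ReLU(),                    # activation: adds nonlinearity
            nn.MaxPool2d(kernel_size=2),  # pooling: halves spatial dimensions
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),
        )
        # Fully connected layer: aggregates global information and maps it
        # to one score (logit) per class.
        self.classifier = nn.Linear(32 * 32 * 32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)              # (B, 32, 32, 32) for 128x128 input
        x = torch.flatten(x, 1)
        logits = self.classifier(x)
        # SoftMax normalizes the logits into class probabilities in [0, 1].
        return torch.softmax(logits, dim=1)

probs = MinimalCNN()(torch.randn(1, 1, 128, 128))  # one grayscale 128x128 frame
```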

3. Artificial Intelligence Issues

Because very few medical centers collect confocal microscopy data, there are still no publicly available datasets that can be used to train a CNN. Deep learning methods achieve high accuracy only when large amounts of training data are available [14,15,16]. Although certain augmentation methods can improve model performance [17,18], models still benefit from having as large a dataset as possible.

3.1. Low Quality of Images

A classification process can be distorted by low-quality images [19,20,21]. Usually, bad images are excluded from a dataset so that they do not influence the final performance. Qu et al. [22] proposed a deep learning model to count and analyze the morphology of abnormal corneal endothelial cells at high noise levels and in poor-quality IVCM images. Given the limited dataset sizes discussed above, it is important to recover as much information as possible from low-quality images rather than simply discarding them.

3.2. Overfitting

The next common issue is overfitting. This means that the model works well only on the training data, while generalization at test time is inaccurate [23,24]. It may result from the model taking too many parameters into account relative to the size of the training set. Hence, it is better for the model to be less complex: although it will achieve worse results on the training set, it will generalize better and classify new data more correctly. Another strategy to address overfitting is using a hold-out or a cross-validation dataset [25,26]. Hold out means splitting a dataset into a "train" and a "test" set [27]. A common split uses 80% of the data for training and 20% for testing; however, among the articles analyzed in this review, 3:1 [28] and 10:1 [29] ratios have also been used. Cross-validation, on the other hand, divides the dataset randomly into "k" groups; one group is used as the test set and the rest as the training set [30]. The data are then repartitioned multiple times until each group has been used as the test set [31].
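As a minimal sketch, both strategies can be expressed in a few lines with scikit-learn; the 80%/20% split and k = 5 below are illustrative values rather than recommendations.

```python
import numpy as np
from sklearn.model_selection import train_test_split, KFold

X = np.random.rand(100, 64)        # stand-in for 100 flattened IVCM images
y = np.random.randint(0, 2, 100)   # stand-in binary labels

# Hold out: a single 80%/20% train/test split.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Cross-validation: k = 5 random groups; each group serves once as the test set.
kfold = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, test_idx) in enumerate(kfold.split(X)):
    X_tr, X_te = X[train_idx], X[test_idx]
    # ... train on X_tr and evaluate on X_te for this fold ...
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test samples")
```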
If we cannot collect more diverse data, we can sometimes generate them ourselves. Although this sounds risky and even dishonest, it is a common practice in AI called augmentation. Data augmentation prevents the overfitting problem described above [32]. There is a great deal of room for it, especially in image processing: we can slightly rotate an image, shift it, change its colors, or make other more or less subtle changes that give the model a significant amount of new data [33,34,35].
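A sketch of such an augmentation pipeline using torchvision follows; the specific transforms and their ranges are illustrative assumptions.

```python
from torchvision import transforms

# Each epoch sees a slightly different version of every training image,
# effectively enlarging the dataset without new acquisitions.
augment = transforms.Compose([
    transforms.RandomRotation(degrees=10),                       # slight rotation
    transforms.RandomAffine(degrees=0, translate=(0.05, 0.05)),  # small shift
    transforms.ColorJitter(brightness=0.2, contrast=0.2),        # subtle intensity change
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ToTensor(),
])
# Usage: pass `augment` as the `transform` argument of an image Dataset.
```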

3.3. Unrepresentative Training Set

An unrepresentative training dataset is an issue similar to the overfitting problem. It generally means that the training data do not contain enough diversity within a class to properly train the model (e.g., the different phenotypes of corneal bacterial infection in IVCM images). It is therefore recommended to collect the variant features that are least represented in the training data [36,37]. This is where the process called feature engineering plays a significant role: it consists of feature selection (choosing the most useful among the available features) and feature discovery (combining existing features to obtain more useful ones) [38,39].
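A small sketch of both steps with scikit-learn, using hypothetical morphological features (the feature names and labels are invented for illustration):

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(0)
length = rng.random(200)             # hypothetical filament length feature
branches = rng.integers(0, 5, 200)   # hypothetical branch count feature
noise = rng.random(200)              # an uninformative feature
y = (length + 0.1 * branches > 0.6).astype(int)  # toy labels

# Feature discovery: combine existing features into a potentially more useful one.
density = length * branches          # e.g., a "branching density" feature
X = np.column_stack([length, branches, noise, density])

# Feature selection: keep the k features most associated with the label.
selector = SelectKBest(score_func=f_classif, k=2).fit(X, y)
print("selected feature indices:", selector.get_support(indices=True))
```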

3.4. Limited Dataset Size

N-shot learning (NSL) proves advantageous in scenarios involving challenging images, especially when training data are limited. In this context, a "shot" denotes a single example available for training, and "N" is the number of such examples. NSL is broadly defined and has the following subfields: zero-shot learning, one-shot learning, and few-shot learning. The basic idea of zero-shot learning is to use the model's existing knowledge (usually auxiliary descriptions of attributes such as appearance, proportions, or functionality) to classify new data that have never been encountered before [40]. One-shot learning allows a model to learn from a single data example. Few-shot learning is similar to one-shot learning except that there is more than one training example to learn from. In the few-shot learning approach, which usually means N-way-K-shot classification (where N stands for the number of classes and K for the number of examples from each class), the main task is to classify the "Q" query images among the N classes.
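For concreteness, the following sketch shows how one N-way-K-shot episode might be sampled from a labeled image pool; the array shapes and class labels are hypothetical.

```python
import numpy as np

def sample_episode(images, labels, n_way=3, k_shot=5, q_queries=2, seed=None):
    """Sample one N-way-K-shot episode: K support examples for each of
    N classes, plus Q query images per class to be classified."""
    rng = np.random.default_rng(seed)
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support, query = [], []
    for c in classes:
        idx = rng.permutation(np.where(labels == c)[0])
        support.append(images[idx[:k_shot]])                   # K labeled examples
        query.append(images[idx[k_shot:k_shot + q_queries]])   # Q images to classify
    return classes, support, query

images = np.random.rand(90, 64)   # 90 stand-in "images" over 3 classes
labels = np.repeat([0, 1, 2], 30)
classes, support, query = sample_episode(images, labels, seed=0)
```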

3.5. “Black-Box” Problem

One of the significant obstacles faced by scientists implementing AI-based solutions is the lack of social acceptance and trust. The term "black-box" in the context of AI means that the system provides no information about how it arrived at its results [41]. In response to the "black-box" problem, the explainable AI (XAI) approach was established. XAI can be understood as methods that enable humans to understand the output produced by machine learning algorithms. For example, saliency maps indicate the areas of an image that have the greatest impact on the prediction, which increases the model's usefulness and users' understanding. A saliency map is an image segmentation method that analyzes every pixel and assigns it a relevance score for the output classification [42,43].
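One common way to build such a map is from input gradients; below is a minimal PyTorch sketch under that assumption, where `model` stands for any trained image classifier.

```python
import torch

def saliency_map(model, image):
    """Per-pixel relevance: gradient magnitude of the predicted class
    score with respect to the input pixels."""
    model.eval()
    image = image.clone().requires_grad_(True)  # track gradients w.r.t. pixels
    scores = model(image)                       # shape (1, num_classes)
    top_class = scores.argmax(dim=1).item()
    scores[0, top_class].backward()             # backpropagate the top class score
    # A pixel's relevance reflects how strongly it influences the prediction.
    return image.grad.abs().max(dim=1).values   # (1, H, W) relevance map

# Usage: saliency = saliency_map(trained_cnn, ivcm_image)  # overlay on the image
```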

4. Methods and Materials

The articles were collected through PubMed, and the appropriate publications were analyzed in this review. All available studies that focused on artificial intelligence in confocal microscopy were included. The main purpose of this review was to present the clinical usefulness of applications analyzing IVCM images in ocular surface disorders such as keratitis, dry eye disease, and diabetic corneal neuropathy. Criteria and a search strategy were established. All articles were found in the PubMed database. The search keywords included "artificial intelligence/AI", "confocal microscopy/IVCM", "deep learning/DL", and "machine learning/ML". Artificial intelligence has developed significantly over the last 5 years, and, in the field of ophthalmology, 2020 was a breakthrough year in terms of the number of research publications related to machine learning. The main timeline was therefore set from 2018 to 2023, but we also strived to use the most recent papers. The reason for this was the desire to share the latest knowledge and scientific reports, which—while often building on mechanisms discovered by their predecessors—are now being upgraded with new methods that improve their effectiveness and efficiency. Only original research articles written in English were included; reviews, editorials, opinions, single case reports, and ex vivo studies were excluded. The reference lists of the remaining studies were also checked and served as supplementary literature for the review. Publishers were scrutinized, with preference given to peer-reviewed academic journals and reputable websites. Our primary objective was to mitigate potential biases and to specifically address the risk of content omissions and unnecessary overlap. Each article was individually assessed for coherence, completeness, and scope through a rigorous analysis of results, including performance outcomes, limitations, and future solutions. Constructive feedback was provided by all authors.
We excluded articles in which no artificial intelligence networks were mentioned. Studies whose full text could not be retrieved, even after searching the available medical databases, were also excluded. A total of 34 articles were included in the final manuscript. Following this initial phase, the manuscript underwent iterative refinement. The revised versions were again reviewed by all authors, with subsequent amendments overseen by the first author. The final iterations were then circulated to the senior author for evaluation and ultimate approval.
A PRISMA flow diagram was incorporated to visually depict the systematic inclusion of articles (see Figure 1 for the article selection process in detail).

5. Evaluation of Individual Disease Articles

This section is divided into three separate parts, each of which reviews articles analyzing a different disease—keratitis, dry eye disease, or diabetic corneal neuropathy.

5.1. Keratitis

Infectious keratitis is caused by microorganisms such as bacteria, fungi, protozoa, and viruses [44]. The most common precipitating factor is a disruption of the corneal epithelium, which serves as an excellent passage for microorganisms. Once the cornea is penetrated, inflammation of the anterior chamber begins, accompanied by acute and severe pain. Without early and proper treatment, it may lead to subsequent vision loss, infection of the posterior segment of the eye, and the need for surgery [45,46]. Microbial culture remains the gold standard in the diagnosis of keratitis [47]. However, IVCM may serve as an additional useful diagnostic tool and may help in implementing empirical treatment as soon as possible.

5.1.1. Fungal Keratitis

Mycotic keratitis is one of the most severe inflammations of the cornea. It is observed worldwide, with increased frequency in tropical and subtropical areas [48]. Certain risk factors can be defined, such as agricultural work, the use of corticosteroids, and the use of contact lenses, as well as systemic conditions like diabetes mellitus or immunosuppression. In IVCM images, fungi usually appear as linear, branching, hyperreflective filaments. Filament diameters vary between 1.5 and 7.8 μm, and the total length can reach up to 400 μm [49].
Figure 2 presents confocal images with mycotic keratitis characteristics.

5.1.2. Bacterial Keratitis

The severity of bacterial keratitis varies depending on geography, climate, national development, and access to medical care. The main risk factors include ocular surface diseases, contact lens wear, systemic immunosuppression, prior corneal surgery, use of topical steroids, and trauma [50]. Although bacteria, except for Nocardia spp., are often too small to be detected as visible structures by confocal microscopy [51], certain findings may indicate bacterial infection, such as an abundance of polymorphonuclear neutrophils or a lack of atypical elements [52].
Figure 3 shows confocal images with bacterial keratitis characteristics.

5.1.3. Acanthamoeba Keratitis

Acanthamoeba is a group of protozoa that live in the form of cysts and trophozoites. They can be found in many water sources, both potable and nonpotable. The majority of cases are connected with the use of contact lenses [53,54]. In IVCM images, they are usually found as highly reflective oval cysts surrounded by a low-refractile wall that has a clear boundary and a dark ring outside [55].
Figure 4 shows confocal images with Acanthamoeba keratitis characteristics.

5.1.4. Viral Keratitis

Although many viruses have been shown to cause keratitis, herpes viruses are the prevailing etiological cause of viral keratitis [56]. The herpes simplex virus can affect all layers of the cornea [57]. IVCM findings include the presence of hyperreflective and irregular epithelial cells; Langerhans cells within the basal epithelium layer [58]; and a decreased number of sub-basal nerves with increased tortuosity. Changes in nerve characteristics have also been observed in herpes zoster virus infection [59]. Mangan et al. [60] presented a patient with irregular corneal epithelial cells, scattered inflammatory cells and cell debris, and activated dendritic cells in the sub-basal epithelial area, with a marked decrease in the sub-basal corneal nerve plexus, during COVID-19 infection. Adenoviruses can present as cell clusters in the basal epithelial layer with increased Langerhans cell presence and hyperreflective areas in the anterior stroma [61].
Figure 5 shows confocal images with viral keratitis characteristics.
A manual analysis of IVCM keratitis images requires specialized staff, as well as a significant amount of time to properly examine each case. AI is increasingly contributing to speeding up and improving the overall accuracy of diagnostics. Below, the selected articles will be discussed.
Essalat et al. [28] tested eight deep learning models based on convolutional neural networks (CNNs) to provide automated support for the diagnosis of infectious keratitis on confocal microscopy. The dataset was divided into four groups: Acanthamoeba keratitis, fungal keratitis, nonspecific keratitis, and healthy eyes. The best model (DenseNet161) achieved an accuracy, precision, recall, and F1 score of 93.55%, 92.52%, 94.77%, and 96.93%, respectively; these performance metrics are commonly used to describe the performance of medical devices [62]. The authors emphasized that their proposed algorithm can help ophthalmologists provide faster and more reliable diagnoses by implementing a saliency map that highlights infectious areas in the IVCM images.
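For reference, with TP, FP, TN, and FN denoting the true/false positive and negative counts, these standard metrics are defined as Accuracy = (TP + TN)/(TP + TN + FP + FN), Precision = TP/(TP + FP), Recall = TP/(TP + FN), and F1 = 2 × (Precision × Recall)/(Precision + Recall).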
Alisa Lincke et al. [63] proposed an AI-based decision support system for the automated diagnosis of Acanthamoeba keratitis (AK). They used ResNet101V2 with transfer learning. Despite the low sensitivity of the AK diagnosis (only 16.6% of AK cases were correctly predicted by the model), the proposed system reduced the time needed to sort and analyze IVCM images by dividing them into healthy and unhealthy ones.
Xuelian Wu et al. [64] compared the automatic hyphae detection and quantitative evaluation of confocal images with corneal smear results. The accuracy of their proposed technology was better than current corneal smear examinations (p  <  0.05).
Shanshan Liang et al. [65] used a two-stream convolutional network—GoogLeNet and VGGNet—to diagnose fungal keratitis. The main stream extracts the important parts (i.e., groups of pixels) of the input image, while the second stream discriminates between the background and the intensified pixels that form the hyphae. The dual-stream structure gives users more influence over the segmentation results and provides better performance than single-stream networks. The features extracted by each stream were integrated to perform the final prediction. The proposed model achieved an accuracy, precision, sensitivity, specificity, and F1 score of 97.73%, 98.68%, 97.02%, 98.54%, and 97.84%, respectively.
Jian Lv et al. [66] also based their approach on images of patients with fungal infection confirmed via fungal culture. On the testing dataset, their ResNet101 CNN model showed an accuracy of 96.26%, a specificity of 98.34%, and a sensitivity of 91.86%. Interestingly, after diabetic patients were added to the training set, the accuracy decreased to 93.64%, the specificity increased to 98.89%, and the sensitivity decreased to 82.56%. The reason for this may lie in the reduction in nerve fibers in the corneal tissue of diabetic patients [67]. Despite the worse results, this setting is more realistic because diabetic patients are common in the clinic. In their next work [68], the authors compared the performance of ophthalmologists assisted by a black-box AI model and by an explainable AI (XAI) model in diagnosing fungal keratitis. The explainable model presented histograms showing the model's predicted probabilities of positive and negative fungal keratitis. Overall, XAI-assisted diagnostics performed better than AI-assisted diagnostics, and both tools produced better performance than work conducted without assistance. This effect was more evident for inexperienced doctors than for experienced ones. Another interesting observation was that, although the diagnosis time with the XAI-assisted device was longer than without assistance, the difference was not statistically significant (p = 0.092). This shows that people involved in science and medicine not only want to make their work easier and more efficient with new tools but also wish to understand how these tools work.
Certain attempts were made not only to detect fungal hyphae but also to recognize their species. Ningning Tang et al. [69] designed an automated method to distinguish the Fusarium and Aspergillus genera. To cope with overfitting, they used transfer learning to improve their model's generalization ability. Transfer learning is an approach to machine learning that involves using the knowledge acquired while solving one task and applying it to another, relatively similar, task [70]. In this study, the dataset labels were determined according to microbiological culture results—a distinction that humans cannot, in actuality, make from the images. The models were valid in their judgments, with an area under the curve (AUC) of 88.7% for Fusarium and 82.7% for Aspergillus.
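A minimal sketch of this strategy in PyTorch, using an ImageNet-pretrained ResNet as the source task; the frozen backbone and the two-class head (e.g., Fusarium vs. Aspergillus) are illustrative choices, not the cited study's exact configuration.

```python
import torch.nn as nn
from torchvision import models

# Start from weights learned on a large source task (ImageNet).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

# Freeze the pretrained feature extractor so its knowledge is retained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a new head for the target
# task, e.g., Fusarium vs. Aspergillus (hypothetical two classes).
model.fc = nn.Linear(model.fc.in_features, 2)
# Only model.fc is now trainable on the small IVCM dataset.
```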
Ningning Tang et al.'s [71] project was based on dual hybrid systems aimed at the automated identification of corneal layers in IVCM images. They developed two base classifiers: a CNN, used to analyze the pixel information, and a k-nearest neighbor (KNN) classifier, used to analyze the scanning depth information. Two hybrid strategies (a weighted voting method and the LightGBM algorithm) then combined the outputs of the two base classifiers. The weighted voting method achieved the best classification result, and both hybrid approaches performed better than the CNN or the KNN alone.
Zhi Liu et al. [29] trained a novel CNN that uses data augmentation and image fusion to detect fungal keratitis. In this work, normal images were augmented by image flipping, which increased the number of corneal images from 219 to 876. A novel SCS method, based on contrast stretching (CS), was used to preprocess the original images to highlight important features without information loss. Then, fusion was conducted. The authors improved on the basic mean fusion (MF) method, which matches images of the same size and takes their average [72]. In histogram mean fusion (HMF), an image histogram representing the grayscale of the image is created, and the grayscale of the SCS-preprocessed image is matched to the original image. In this way, the merged image has the same gray level as the original, while the distinction of key structures in the SCS-preprocessed image is preserved. Combining CNNs—AlexNet and VGGNet—with histogram matching fusion (HMF) achieved accuracies of 99.95% and 99.89%, respectively, compared with 99.35% and 99.14% for the traditional AlexNet and VGGNet.
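The general idea can be sketched with scikit-image as follows; this is a simplified stand-in for the authors' SCS/HMF pipeline (contrast stretching, histogram matching, then mean fusion), not their exact method.

```python
import numpy as np
from skimage import exposure

def hmf_fusion(image: np.ndarray) -> np.ndarray:
    """Fuse an image with its contrast-stretched version after histogram
    matching, so the result keeps the original gray level."""
    # Contrast stretching: spread the 2nd-98th percentile range to full scale.
    p2, p98 = np.percentile(image, (2, 98))
    stretched = exposure.rescale_intensity(image, in_range=(p2, p98))
    # Histogram matching: give the stretched copy the original's gray levels.
    matched = exposure.match_histograms(stretched, image)
    # Mean fusion: pixel-wise average of the two same-size images.
    return (image.astype(float) + matched.astype(float)) / 2.0
```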
Fan Xu et al. [73] proposed a deep transfer learning model, Inception-ResNet, for the detection of activated dendritic cells and inflammatory cells with high accuracy. The dataset included patients with keratitis, dry eye disease, and pterygium. The model's accuracy was similar to that of an experienced ophthalmologist and better than that of a beginner ophthalmologist.
Yulin Yan et al. [74] worked on an automatic mechanism for the fast recognition of corneal layers in in vivo confocal microscopy images, additionally differentiating them as normal or abnormal. The abnormal images included cases of epithelial cell edema, enlarged interstitial spaces, inflammatory cells, nerve fiber tortuosity or thinning, increased Langerhans cell numbers, stromal swelling, scarring, pathogen infiltration (Acanthamoeba, fungi), neovascularization, and endothelial cell swelling or deposits. A comparison between humans and the machine showed that the model was as accurate as an experienced ophthalmologist and about 237 times faster than a human. At the same time, the accuracy of inexperienced doctors in IVCM image recognition could be significantly improved when using the model and may even approach that of specialists. The authors paid attention to dataset volume and encouraged readers to develop interhospital cooperation to expand the databases for more detailed analyses.
Table 1 summarizes the articles described above.

5.2. Dry Eye Disease

Dry eye disease (DED) is a consequence of insufficient ocular surface moisture provided by the tear film. It may be caused by an incorrect composition of the tear film layers or by excessive evaporation. It is often associated with autoimmune diseases (e.g., rheumatoid arthritis, Sjögren's syndrome [75,76,77], Graves–Basedow disease [78], and graft-versus-host disease [79]), as well as dermatological (e.g., pemphigoid [80] and rosacea [81]) and neurological diseases (e.g., Parkinson's disease [82] and Bell's palsy [83]). It may also occur postoperatively (e.g., after laser refractive procedures [84] or cataract surgery [85]), after the use of drugs (e.g., antihistamines, antidepressants, and contraceptives), and in cigarette smokers [86].
In dry eye syndrome, the following IVCM image features have been reported [87]: decreased corneal superficial epithelial cell density, increased corneal anterior keratocyte density, increased inflammatory dendritic cell density, a decreased number of sub-basal nerves, and increased nerve tortuosity.
Figure 6 shows confocal images of dry eye disease characteristics.
Shanshan Wei et al. [88] designed a deep learning model called CNS-Net to analyze sub-basal nerve morphology. It measured the average density and the maximum length of the nerve fibers with high accuracy, producing an AUC of 96%. The model can also analyze 32 images per second, a feat practically impossible for a human, and, compared with an ophthalmologist, it ensured that nerve fibers were not missed.
Dalan Jing et al. [89] used the previously mentioned CNS-Net model to study the relationships between corneal sub-basal nerve parameters and corneal aberrations in dry eye disease. Ocular surface irritation pain was found to be positively correlated with anterior corneal aberration. In their next study [90], the same algorithm measured sub-basal nerve parameters to investigate the association between oval cells, Langerhans cells (LCs), and dry eye disease. In contrast to studies [91,92] showing a decrease in nerve length, a greater corneal peripheral nerve maximum length and average density were observed in the presence of LCs and bright, oval cells. This suggests that changes in corneal nerve density and nerve number are related to the stage of dry eye disease; it can therefore be concluded that an increased length and number of nerves occur in mild and intermediate dry eye conditions.
Gairik Kundu et al. [93] investigated corneal nerve characteristics using confocal microscopy in patients presenting with ocular surface pain. Orthoptic-related issues and systemic diseases were also included in the artificial intelligence algorithm, which operated on a random forest (RF) classifier. One of the key advantages of random forests is their ability to handle large and complex datasets. Microneuromas were identified by the RF model as the parameter with the highest importance and could be a possible cause of the pain.
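A sketch of how such importances are read from a random forest in scikit-learn; the clinical feature names and labels below are hypothetical placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((150, 3))                 # hypothetical per-patient features
feature_names = ["microneuromas", "nerve_density", "tortuosity"]
y = (X[:, 0] > 0.5).astype(int)          # toy "ocular pain" label

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
# Importance scores rank which features drive the model's predictions.
for name, score in zip(feature_names, rf.feature_importances_):
    print(f"{name}: {score:.3f}")
```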
Objective tools for determining the tortuosity level of the sub-basal nerve plexus have also been designed; tortuosity grading delivers information about corneal nerve reconstruction—a vicious circle of degeneration and regeneration processes. In Yitian Zhao et al.'s [94] 2020 work, CS-NET, based on the Retinex model [95], was used for image quality enhancement, and the tortuosity level was then graded with a linear support vector machine. With their AI model, Baikai Ma et al. [96] suggested that corneal nerve tortuosity is a potential biomarker for corneal neurobiology in dry eye disease. Fernández, I. et al. [97] investigated post-LASIK dry eye syndrome and noted increased nerve tortuosity compared with the control group.
Ye-Ye Zhang et al. [98] and Sachiko Maruoka et al. [99] investigated meibomian glands. Meibomian gland dysfunction (MGD) can lead to a decreased or decomposed tear film lipid layer and inflammation, which causes dry eye disease [100]. Zhang et al. [98] trained three types of convolutional neural networks to differentiate meibomian gland appearances. Among them, the DenseNet169 network showed the highest accuracy: 97.3% in obstructive MGD (OMGD), 98.6% in atrophic MGD (AMGD), and 98% in healthy controls, which was better than an ophthalmologist's accuracy of 91%. Maruoka et al.'s [99] DenseNet-201-based model achieved an area under the curve of 0.966, a sensitivity of 94.2%, and a specificity of 82.1% for diagnosing obstructive MGD; an ensemble of various DL models achieved values of 0.981, 92.1%, and 98.8%, respectively.
Harry Levine et al. [101] presented an automated algorithm for the detection of dendritic cells in the IVCM images of central corneas. Despite obtaining slightly worse algorithm results compared to the manual counts, the authors suggested that the further development of this algorithm can improve generalizability and performance.
Md Asif Khan Setu et al. [102] analyzed both dendritic cells and corneal nerve fibers using U-Net and Mask R-CNN architectures. The proposed model was able to segment nerve fibers, define nerve tortuosity, quantify total nerve density, and pinpoint branch points, all in combination with dendritic cell detection (an objective tool created to differentiate the severity of ocular surface disorders).
Table 2 summarizes the articles described above.

5.3. Diabetic Corneal Neuropathy

Diabetic corneal neuropathy is one of the most common ocular complications of diabetes. Its cause is high blood glucose levels, which lead to the formation of glycation end products that, in turn, produce changes in the nerves [103]. IVCM images can show shortening of corneal nerve fibers, reduced nerve fiber density, diluted nerve fiber branch density, and increased nerve fiber tortuosity [104].
Figure 7 shows confocal images with diabetic corneal neuropathy characteristics.
The first steps in automated nerve fiber detection were taken by Dabbah et al. in 2010 [105] and 2011 [106], as well as Petropoulos et al. (2014) [107], who relied on 2D Gabor filters and Gaussian envelopes. The Gabor filter is a linear filter used for texture analysis: it detects regular, line-like features in a localized area around the point of analysis. These methods were then improved by Xin Chen et al. [108], achieving a sensitivity of 0.917 and an accuracy of 0.913. A related method [109], called the corneal nerve fiber fractal dimension, was subsequently used for automated measurements of corneal nerve complexity.
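A brief sketch of Gabor filtering for line-like structures using OpenCV; the kernel parameters are illustrative, not those of the cited studies.

```python
import cv2
import numpy as np

image = np.random.rand(128, 128).astype(np.float32)  # stand-in IVCM frame

# Apply a bank of Gabor filters at several orientations; nerve fibers respond
# strongly when the filter orientation matches the fiber direction.
responses = []
for theta in np.arange(0, np.pi, np.pi / 8):         # 8 orientations
    kernel = cv2.getGaborKernel(
        ksize=(15, 15), sigma=3.0, theta=theta,      # Gaussian envelope + angle
        lambd=8.0, gamma=0.5, psi=0)                 # wavelength, aspect, phase
    responses.append(cv2.filter2D(image, cv2.CV_32F, kernel))

fiber_map = np.max(responses, axis=0)  # strongest response across orientations
```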
Wei Tang et al. [110] proposed a multiscale feature guidance neural network (MLFGNet) for automatic corneal nerve fiber segmentation in IVCM images. In the literature, multiscale feature fusion has been found to improve the detection accuracy of all kinds of objects, including objects with a relatively small scale [111] (in this case, e.g., thin nerve fibers). This novel deep learning instrument aggregates information from high-level features down to low-level features, reducing the information gap between different levels. The model even captured the curvilinear structure of nerve fibers, which the other methods did not.
Tooba Salahouddin et al. [112] proposed a model based on the U-Net network and an adaptive neuro-fuzzy inference system to differentiate diabetic peripheral neuropathy, allowing each class to be compared with the others. The classification of the presence of diabetic peripheral neuropathy in diabetic patients achieved a sensitivity of 92% and a specificity of 80%. Furthermore, the model detected corneal nerve damage in patients considered to have no diabetic peripheral neuropathy according to their Toronto Clinical Neuropathy Score. The model could thus be considered a more sensitive approach for early small nerve pathologies.
Yanda Meng et al. [113] modified their previous AI-based algorithm [114] to classify patients with prediabetes and diabetes into those with or without peripheral neuropathy. There was no need for expert annotation, and the algorithm had a sensitivity of 91% and a specificity of 93% in detecting nerve fiber disorders. This type of algorithm prototype could serve as a rapid, automated screening tool through which to provide neuropathy detection.
The algorithm prepared by Williams et al. [115] was trained on 1698 corneal confocal microscopy images. It was then tested on 2137 images from healthy controls, participants with certain nondiabetic conditions (e.g., keratoconus and pseudoexfoliation syndrome), and diabetic participants. The algorithm, which is based on the U-Net network, identified the total nerve fiber length, branch points, and tail points, as well as the number and length of nerve segments. It was compared with the widely used and validated automated image analysis software ACCMetrics (Version 2.0, Early Neuropathy Assessment [ENA] group, University of Manchester, Manchester, UK) and was found to perform better on the analyzed parameters.
Erdost Yıldız et al. [116] compared GAN-based algorithms with U-Net algorithms for the automatic segmentation of corneal sub-basal nerves in IVCM images. Interestingly, the authors added noise to the images to simulate everyday challenges in ophthalmology clinics, as well as lowered image quality, to verify whether the algorithms could still work properly.
NerveStitcher, designed by Guangxu Li et al. [117], is a novel stitching framework based on a convolutional neural network and a graph convolutional neural network. It enables the merging of multiple images with overlapping fields of view into a larger, mosaic-like image with a wider field of view, providing an opportunity to retrace nerve length and morphology over larger areas.
An interesting direction taken by Abdulhakim Elbita et al. [118] was the idea of creating 3D corneal layer models that could provide a better visualization of the anatomy as well as a more accurate identification of diseased locations.
Table 3 summarizes the articles described above.

6. Conclusions

It is clear that artificial intelligence will become a permanent fixture in medicine, and existing algorithms will be continuously improved upon to meet the most pressing needs of doctors and patients. To ensure better clinical compliance, AI devices should be interpretable and understandable to clinicians, as they should support their diagnostic and therapeutic decisions.
A manual analysis of IVCM images is time-consuming, even when performed by an experienced ophthalmology specialist. Automation of this process is necessary to address shortages of qualified staff, speed up diagnostics, and reduce treatment costs. Moreover, it is worth following the latest studies and solutions to explore the performance of combined deep learning methods.
Comparing the results of analyses performed via a computer with the work of a human may be controversial, but we should look at it in a different way. AI is not supposed to replace humans; rather, its goal should be to help humans achieve better results at work and to be more effective with less energy wasted. DL models can quickly filter a large amount of data and exclude images of healthy cases: this allows them to provide ophthalmologists with the images that most likely represent infectious structures (along with the model’s diagnosis and confidence level). This process can significantly reduce the workload of ophthalmologists and IVCM technicians. It can also serve as a second independent opinion that supports less-experienced doctors and improves their self-assurance in diagnoses.
CNNs are the most useful and comprehensive deep learning networks for image data analysis, despite the many challenges and difficulties faced by artificial intelligence algorithms. Some of these aforementioned issues have already been resolved, but we should still strive for solutions that are explainable and clear to us in terms of algorithmic decision interpretation. Explainable systems could teach IVCM analysis skills in places where there is a shortage of well-qualified medical staff. Explanatory maps can quickly help indicate the most important features that can guide further diagnostics, thereby reducing test time and associated costs.
Combining IVCM with other examination methods, such as slit-lamp microscopy images or optical coherence tomography (OCT) scan results, could create a comprehensive diagnostic device for better disease management.
Despite the increasing amount of research on artificial intelligence in diagnostics of the anterior segment of the eye, there is still a lack of research on bacterial keratitis. The cause may be that most bacteria are below the resolution that IVCM images can capture. In addition, it is not possible to discriminate bacterial species in IVCM images. There are also no studies directly related to analyzing viral diseases in IVCM images. This may be because they are perceived as neurotrophic inflammation and are diagnosed on the basis of nerve parameter changes (neurotrophic keratopathy) instead.
Because dry eye disease is mostly secondary in etiology, a multidisciplinary approach is required, and, consequently, so is the patient's willingness to engage in the diagnostic and therapeutic process. This can be a problem for elderly, sick people, and we should meet this challenge by shortening the time needed for an accurate diagnosis through the implementation of AI-powered devices in our work.
Recently, the Food and Drug Administration (FDA) approved the first autonomous AI-based DL algorithm to screen for diabetic retinopathy. Thanks to the automated detection and characterization of corneal nerve fibers, screening devices that can detect early neuropathy are on the horizon. Such a device could lead to the effective prevention of advanced diabetic neuropathy complications by allowing patients to receive early professional treatment. It is worth mentioning that further exploration of cost-effective models is needed to assess their influence on health economics.
It is crucial to design high-quality enhancements of the captured images (i.e., contrast intensification) such that the algorithm does not overlook thin and faint nerve fibers, which are the first fibers affected in diabetic corneal neuropathy. Moreover, diabetic corneal neuropathy should always be considered alongside systemic complications, and blood glucose tests should be used for better disease management.

7. Future Directions

It is crucial to constantly improve the performance of AI-supported devices to render their predictions even more accurate and to enable them to work with various types of data. New and advanced algorithms should be utilized to preserve dataset size, which means analyzing, rather than discarding, the unfocused and poor-quality images often captured by the IVCM method. AI devices can be deployed in hospitals that have little clinical experience or shortages of qualified staff. We encourage others to purchase IVCM devices, even without experienced IVCM interpreters, in order to explore their usefulness in automatic image analysis.
For all the diseases presented, a common limitation was that the studies did not include measurements of parameter change over time but instead looked at a single time point. This may be a clue for future researchers: analyzing model interpretations during disease progression. In the future, we would like artificial intelligence not only to recognize a disease but also to determine its severity and treatment response.
It is crucial to conduct multicenter studies encompassing larger and more diverse ethnic groups in order to establish a clinically reliable algorithm that could be used anywhere in the world.
With increasing model generalization, intelligent screening for corneal diseases will become possible. That said, we should be concerned not only with detecting pathologies but also with their grading and treatment. In the future, AI may give us an answer regarding the necessity of treatment, as well as its expected effectiveness and side effects.

Author Contributions

Conceptualization, K.K. and E.W.; methodology, A.W. and M.W.; validation, K.K., P.P. and M.K.; formal analysis, K.K. and A.W.; investigation, K.K.; resources, K.K. and A.W.; writing—original draft preparation, K.K.; writing—review and editing, K.K. and M.W.; supervision, E.W. and B.O.-W.; project administration, E.W.; funding acquisition, E.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding. The APC was funded by the Medical University of Silesia in Katowice.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. FDA. Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices. Available online: https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices (accessed on 10 October 2023).
  2. Popescu Patoni, S.I.; Muşat, A.A.M.; Patoni, C.; Popescu, M.N.; Munteanu, M.; Costache, I.B.; Pîrvulescu, R.A.; Mușat, O. Artificial intelligence in ophthalmology. Rom. J. Ophthalmol. 2023, 67, 207–213. [Google Scholar] [CrossRef]
  3. Jin, K.; Ye, J. Artificial intelligence and deep learning in ophthalmology: Current status and future perspectives. Adv. Ophthalmol. Pract. Res. 2022, 2, 100078. [Google Scholar] [CrossRef]
  4. Nagendran, M.; Chen, Y.; A Lovejoy, C.; Gordon, A.C.; Komorowski, M.; Harvey, H.; Topol, E.J.; Ioannidis, J.P.A.; Collins, G.S.; Maruthappu, M. Artificial intelligence versus clinicians: Systematic review of design, reporting standards, and claims of deep learning studies. BMJ 2020, 368, m689. [Google Scholar] [CrossRef]
  5. Valueva, M.; Nagornov, N.; Lyakhov, P.; Valuev, G.; Chervyakov, N. Application of the residue number system to reduce hardware costs of the convolutional neural network implementation. Math. Comput. Simul. 2020, 177, 232–243. [Google Scholar] [CrossRef]
  6. Raghav, P. Understanding of Convolutional Neural Network (CNN)—Deep Learning. Available online: https://medium.com/@RaghavPrabhu/understanding-of-convolutional-neural-network-cnn-deep-learning-99760835f148 (accessed on 4 March 2018).
  7. Goswami, S.I.A.K.; Mishra, S.P.; Asopa, P. Conceptual Understanding of Convolutional Neural Network—A Deep Learning Approach. Procedia Comput. Sci. 2018, 132, 679–688. [Google Scholar]
  8. Eric, W. Convolution. From MathWorld—A Wolfram Web Resource. Available online: https://mathworld.wolfram.com/Convolution.html (accessed on 1 January 2022).
  9. Eckle, K.; Schmidt-Hieber, J. A comparison of deep networks with ReLU activation function and linear spline-type methods. Neural Netw. 2019, 110, 232–242. [Google Scholar] [CrossRef]
  10. Brownlee, J. A Gentle Introduction to Pooling Layers for Convolutional Neural Networks. Available online: https://machinelearningmastery.com/pooling-layers-for-convolutional-neural-networks/ (accessed on 5 July 2019).
  11. Nirthika, R.; Manivannan, S.; Ramanan, A.; Wang, R. Pooling in convolutional neural networks for medical image analysis: A survey and an empirical study. Neural Comput. Appl. 2022, 34, 5321–5347. [Google Scholar] [CrossRef]
  12. Zhuo, Z.; Zhou, Z. Low Dimensional Discriminative Representation of Fully Connected Layer Features Using Extended LargeVis Method for High-Resolution Remote Sensing Image Retrieval. Sensors 2020, 20, 4718. [Google Scholar] [CrossRef]
  13. Yamashita, R.; Nishio, M.; Do, R.K.G.; Togashi, K. Convolutional neural networks: An overview and application in radiology. Insights Imaging 2018, 9, 611–629. [Google Scholar] [CrossRef]
  14. Tsuneki, M. Deep learning models in medical image analysis. J. Oral Biosci. 2022, 64, 312–320. [Google Scholar] [CrossRef]
  15. Dorfman, E. How Much Data Is Required for Machine Learning? Available online: https://postindustria.com/how-much-data-is-required-for-machine-learning/ (accessed on 25 March 2022).
  16. Smolic, H. How Much Data Is Needed For Machine Learning? Available online: https://graphite-note.com/how-much-data-is-needed-for-machine-learning (accessed on 15 December 2022).
  17. Avanzo, M.; Wei, L.; Stancanello, J.; Vallières, M.; Rao, A.; Morin, O.; Mattonen, S.A.; El Naqa, I. Machine and deep learning methods for radiomics. Med. Phys. 2020, 47, e185–e202. [Google Scholar] [CrossRef]
  18. Hao, R.; Namdar, K.; Liu, L.; Haider, M.A.; Khalvati, F. A Comprehensive Study of Data Augmentation Strategies for Prostate Cancer Detection in Diffusion-Weighted MRI Using Convolutional Neural Networks. J. Digit. Imaging 2021, 34, 862–876. [Google Scholar] [CrossRef]
  19. Kabir, M.M.; Ohi, A.Q.; Rahman, M.S.; Mridha, M.F. An Evolution of CNN Object Classifiers on Low-Resolution Images. arXiv 2021, arXiv:2101.00686. [Google Scholar] [CrossRef]
  20. Michał, K.; Bogusław, C. Impact of Low Resolution on Image Recognition with Deep Neural Networks: An Experimental Study. Int. J. Appl. Math. Comput. Sci. 2018, 28, 735–744. [Google Scholar] [CrossRef]
  21. Cai, D.; Chen, K.; Qian, Y.; Kämäräinen, J.-K. Convolutional low-resolution fine-grained classification. Pattern Recognit. Lett. 2019, 119, 166–171. [Google Scholar] [CrossRef]
  22. Qu, J.; Qin, X.; Peng, R.; Xiao, G.; Gu, S.; Wang, H.; Hong, J. Assessing abnormal corneal endothelial cells from in vivo confocal microscopy images using a fully automated deep learning system. Eye Vis. 2023, 10, 20. [Google Scholar] [CrossRef]
  23. Hosseini, M.; Powell, M.; Collins, J.; Callahan-Flintoft, C.; Jones, W.; Bowman, H.; Wyble, B. I tried a bunch of things: The dangers of unexpected overfitting in classification of brain data. Neurosci. Biobehav. Rev. 2020, 119, 456–467. [Google Scholar] [CrossRef]
  24. Demšar, J.; Zupan, B. Hands-on training about overfitting. PLOS Comput. Biol. 2021, 17, e1008671. [Google Scholar] [CrossRef]
  25. Eertink, J.J.; Heymans, M.W.; Zwezerijnen, G.J.C.; Zijlstra, J.M.; de Vet, H.C.W.; Boellaard, R. External validation: A simulation study to compare cross-validation versus holdout or external testing to assess the performance of clinical prediction models using PET data from DLBCL patients. EJNMMI Res. 2022, 12, 58. [Google Scholar] [CrossRef]
  26. Eche, T.; Schwartz, L.H.; Mokrane, F.Z.; Dercle, L. Toward Generalizability in the Deployment of Artificial Intelligence in Ra-diology: Role of Computation Stress Testing to Overcome Underspecification. Radiol Artif. Intell. 2021, 3, e210097. [Google Scholar] [CrossRef]
  27. Ting, D.S.J.; Ho, C.S.; Deshmukh, R.; Said, D.G.; Dua, H.S. Infectious keratitis: An update on epidemiology, causative microorganisms, risk factors, and antimicrobial resistance. Eye 2021, 35, 1084–1101. [Google Scholar] [CrossRef]
  28. Essalat, M.; Abolhosseini, M.; Le, T.H.; Moshtaghion, S.M.; Kanavi, M.R. Interpretable deep learning for diagnosis of fungal and acanthamoeba keratitis using in vivo confocal microscopy images. Sci. Rep. 2023, 13, 8953. [Google Scholar] [CrossRef]
  29. Liu, Z.; Cao, Y.; Li, Y.; Xiao, X.; Qiu, Q.; Yang, M.; Zhao, Y.; Cui, L. Automatic diagnosis of fungal keratitis using data augmentation and image fusion with deep convolutional neural network. Comput. Methods Programs Biomed. 2020, 187, 105019. [Google Scholar] [CrossRef]
  30. Tougui, I.; Jilbab, A.; El Mhamdi, J. Impact of the Choice of Cross-Validation Techniques on the Results of Machine Learning-Based Diagnostic Applications. Healthc. Inform. Res. 2021, 27, 189–199. [Google Scholar] [CrossRef]
  31. Bradshaw, T.J.; Huemann, Z.; Hu, J.; Rahmim, A. A Guide to Cross-Validation for Artificial Intelligence in Medical Imaging. Radiol. Artif. Intell. 2023, 5, e220232. [Google Scholar] [CrossRef]
  32. Chlap, P.; Min, H.; Vandenberg, N.; Dowling, J.; Holloway, L.; Haworth, A. A review of medical image data augmentation techniques for deep learning applications. J. Med. Imaging Radiat. Oncol. 2021, 65, 545–563. [Google Scholar] [CrossRef]
  33. Shorten, C.; Khoshgoftaar, T.M.; Furht, B. Text Data Augmentation for Deep Learning. J. Big Data 2021, 8, 101. [Google Scholar] [CrossRef]
  34. Shorten, C.; Khoshgoftaar, T.M. A survey on Image Data Augmentation for Deep Learning. J. Big Data 2019, 6, 1–48. [Google Scholar] [CrossRef]
  35. Kebaili, A.; Lapuyade-Lahorgue, J.; Ruan, S. Deep Learning Approaches for Data Augmentation in Medical Imaging: A Review. J. Imaging 2023, 9, 81. [Google Scholar] [CrossRef]
  36. Nakagawa, K.; Moukheiber, L.; Celi, L.A.; Patel, M.; Mahmood, F.; Gondim, D.; Hogarth, M.; Levenson, R. AI in Pathology: What could possibly go wrong? Semin. Diagn. Pathol. 2023, 40, 100–101. [Google Scholar] [CrossRef]
  37. de Hond, A.A.H.; Leeuwenberg, A.M.; Hooft, L.; Kant, I.M.J.; Nijman, S.W.J.; van Os, H.J.A.; Aardoom, J.J.; Debray, T.P.A.; Schuit, E.; van Smeden, M.; et al. Guidelines and quality criteria for artificial intelli-gence-based prediction model in healthcare: A scoping review. NPJ Digit. Med. 2022, 5, 2. [Google Scholar] [CrossRef]
  38. Géron, A. Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intel-ligent Systems, 3rd ed.; O’Reilly Media: Sebastopol, CA, USA, 2023; ISBN 978-83-832-2424-4. [Google Scholar]
  39. Pillai, M.; Adapa, K.; Shumway, J.W.; Dooley, J.; Das, S.K.; Chera, B.S.; Mazur, L. Feature Engineering for Interpretable Machine Learning for Quality Assurance in Radiation Oncology. IOS 2021, 29, 460–464. [Google Scholar] [CrossRef]
  40. Ye, Z.; Yang, G.; Jin, X.; Liu, Y.; Huang, K. Rebalanced Zero-Shot Learning. IEEE Trans. Image Process. 2023, 32, 4185–4198. [Google Scholar] [CrossRef]
  41. London, A.J. Artificial Intelligence and Black-Box Medical Decisions: Accuracy versus Explainability. Hast. Cent. Rep. 2019, 49, 15–21. [Google Scholar] [CrossRef]
  42. Ayhan, M.S.; Kümmerle, L.B.; Kühlewein, L.; Inhoffen, W.; Aliyeva, G.; Ziemssen, F.; Berens, P. Clinical validation of saliency maps for understanding deep neural networks in ophthalmology. Med. Image Anal. 2022, 77, 102364. [Google Scholar] [CrossRef]
  43. Maity, A. Improvised Salient Object Detection and Manipulation. arXiv 2015. [Google Scholar] [CrossRef]
  44. Stapleton, F. The epidemiology of infectious keratitis. Ocul. Surf. 2023, 8, 351–363. [Google Scholar]
  45. Donovan, C.; Arenas, E.; Ayyala, R.S.; E Margo, C.; Espana, E.M. Fungal keratitis: Mechanisms of infection and management strategies. Surv. Ophthalmol. 2022, 67, 758–769. [Google Scholar] [CrossRef]
  46. Brown, L.; Leck, A.K.; Gichangi, M.; Burton, M.J.; Denning, D.W. The global incidence and diagnosis of fungal keratitis. Lancet Infect. Dis. 2020, 21, e49–e57. [Google Scholar] [CrossRef]
  47. Zemba, M.; Dumitrescu, O.-M.; Dimirache, A.-E.; Branisteanu, D.C.; Balta, F.; Burcea, M.; Moraru, A.D.; Gradinaru, S. Diagnostic methods for the etiological assessment of infectious corneal pathology (Review). Exp. Ther. Med. 2022, 23, 137. [Google Scholar] [CrossRef]
  48. Thomas, P.A.; Kaliamurthy, J. Mycotic keratitis: Epidemiology, diagnosis and management. Clin. Microbiol. Infect. 2013, 19, 210–220. [Google Scholar] [CrossRef] [PubMed]
  49. Bakken, I.M.; Jackson, C.J.; Utheim, T.P.; Villani, E.; Hamrah, P.; Kheirkhah, A.; Nielsen, E.; Hau, S.; Lagali, N.S. The use of in vivo confocal microscopy in fungal keratitis—Progress and challenges. Ocul. Surf. 2022, 24, 103–118. [Google Scholar] [CrossRef]
  50. Ting, D.S.J.; Cairns, J.; Gopal, B.P.; Ho, C.S.; Krstic, L.; Elsahn, A.; Lister, M.; Said, D.G.; Dua, H.S. Risk Factors, Clinical Outcomes, and Prognostic Factors of Bacterial Keratitis: The Nottingham Infectious Keratitis Study. Front. Med. 2021, 8, 715118. [Google Scholar] [CrossRef] [PubMed]
  51. Hoffman, J.J.; Dart, J.K.G.; De, S.K.; Carnt, N.; Cleary, G.; Hau, S. Comparison of culture, confocal microscopy and PCR in routine hospital use for microbial keratitis diagnosis. Eye 2022, 36, 2172–2178. [Google Scholar] [CrossRef] [PubMed]
  52. Wang, Y.E.; Tepelus, T.C.; Vickers, L.A.; Baghdasaryan, E.; Gui, W.; Huang, P.; Irvine, J.A.; Sadda, S.; Hsu, H.Y.; Lee, O.L. Role of in vivo confocal microscopy in the diagnosis of infectious keratitis. Int. Ophthalmol. 2019, 39, 2865–2874. [Google Scholar] [CrossRef] [PubMed]
  53. Curro-Tafili, K.; Verbraak, F.D.; de Vries, R.; van Nispen, R.M.A.; Ghyczy, E.A.E. Diagnosing and monitoring the characteristics of Acanthamoeba keratitis using slit scanning and laser scanning in vivo confocal microscopy. Ophthalmic Physiol. Opt. 2023, 44, 131–152. [Google Scholar] [CrossRef] [PubMed]
  54. Zhang, Y.; Xu, X.; Wei, Z.; Cao, K.; Zhang, Z.; Liang, Q. The global epidemiology and clinical diagnosis of Acanthamoeba keratitis. J. Infect. Public Health 2023, 16, 841–852. [Google Scholar] [CrossRef] [PubMed]
  55. Li, S.; Bian, J.; Wang, Y.; Wang, S.; Wang, X.; Shi, W. Clinical features and serial changes of Acanthamoeba keratitis: An in vivo confocal microscopy study. Eye 2019, 34, 327–334. [Google Scholar] [CrossRef] [PubMed]
  56. Koganti, R.; Yadavalli, T.; Naqvi, R.A.; Shukla, D.; Naqvi, A.R. Pathobiology and treatment of viral keratitis. Exp. Eye Res. 2021, 205, 108483. [Google Scholar] [CrossRef]
  57. Chaloulis, S.K.; Mousteris, G.; Tsaousis, K.T. Incidence and Risk Factors of Bilateral Herpetic Keratitis: 2022 Update. Trop. Med. Infect. Dis. 2022, 7, 92. [Google Scholar] [CrossRef]
58. Poon, S.H.L.; Wong, W.H.L.; Lo, A.C.Y.; Yuan, H.; Chen, C.-F.; Jhanji, V.; Chan, Y.K.; Shih, K.C. A systematic review on advances in diagnostics for herpes simplex keratitis. Surv. Ophthalmol. 2021, 66, 514–530. [Google Scholar] [CrossRef]
  59. Mok, E.; Kam, K.W.; Young, A.L. Corneal nerve changes in herpes zoster ophthalmicus: A prospective longitudinal in vivo confocal microscopy study. Eye 2023, 37, 3033–3040. [Google Scholar] [CrossRef]
  60. Mangan, M.S.; Yildiz-Tas, A.; Yildiz, M.B.; Yildiz, E.; Sahin, A. In Vivo confocal microscopy findings after COVID-19 infection. Ocul. Immunol. Inflamm. 2021, 30, 1866–1868. [Google Scholar] [CrossRef]
  61. Subaşı, S.; Yüksel, N.; Toprak, M.; Tuğan, B.Y. In Vivo Confocal Microscopy Analysis of the Corneal Layers in Adenoviral Epidemic Keratoconjunctivitis. Turk. J. Ophthalmol. 2018, 48, 276–280. [Google Scholar] [CrossRef] [PubMed]
  62. Kalpathy-Cramer, J.; Patel, J.B.; Bridge, C.; Chang, K. Basic Artificial Intelligence Techniques. Radiol. Clin. North Am. 2021, 59, 941–954. [Google Scholar] [CrossRef] [PubMed]
  63. Lincke, A.; Roth, J.; Macedo, A.F.; Bergman, P.; Löwe, W.; Lagali, N.S. AI-Based Decision-Support System for Diagnosing Acanthamoeba Keratitis Using In Vivo Confocal Microscopy Images. Transl. Vis. Sci. Technol. 2023, 12, 29. [Google Scholar] [CrossRef] [PubMed]
  64. Wu, X.; Tao, Y.; Qiu, Q.; Wu, X. Application of image recognition-based automatic hyphae detection in fungal keratitis. Australas. Phys. Eng. Sci. Med. 2018, 41, 95–103. [Google Scholar] [CrossRef] [PubMed]
  65. Hu, Y.; Soltoggio, A.; Lock, R.; Carter, S. A Structure-Aware Convolutional Neural Network for Automatic Diagnosis of Fungal Keratitis with In Vivo Confocal Microscopy Images. Neural Netw. 2019, 109, 31–42. [Google Scholar] [CrossRef] [PubMed]
  66. Lv, J.; Zhang, K.; Chen, Q.; Chen, Q.; Huang, W.; Cui, L.; Li, M.; Li, J.; Chen, L.; Shen, C.; et al. Deep learning-based automated diagnosis of fungal keratitis with in vivo confocal microscopy images. Ann. Transl. Med. 2020, 8, 706. [Google Scholar] [CrossRef] [PubMed]
  67. Alam, U.; Anson, M.; Meng, Y.; Preston, F.; Kirthi, V.; Jackson, T.L.; Nderitu, P.; Cuthbertson, D.J.; Malik, R.A.; Zheng, Y.; et al. Artificial Intelligence and Corneal Confocal Microscopy: The Start of a Beautiful Relationship. J. Clin. Med. 2022, 11, 6199. [Google Scholar] [CrossRef]
  68. Xu, F.; Jiang, L.; He, W.; Huang, G.; Hong, Y.; Tang, F.; Lv, J.; Lin, Y.; Qin, Y.; Lan, R.; et al. The Clinical Value of Explainable Deep Learning for Diagnosing Fungal Keratitis Using in vivo Confocal Microscopy Images. Front. Med. 2021, 8, 797616. [Google Scholar] [CrossRef]
  69. Tang, N.; Huang, G.; Lei, D.; Jiang, L.; Chen, Q.; He, W.; Tang, F.; Hong, Y.; Lv, J.; Qin, Y.; et al. An artificial intelligence approach to classify pathogenic fungal genera of fungal keratitis using corneal confocal microscopy images. Int. Ophthalmol. 2023, 43, 2203–2214. [Google Scholar] [CrossRef]
  70. Zhuang, F.; Qi, Z.; Duan, K.; Xi, D.; Zhu, Y.; Zhu, H.; Xiong, H.; He, Q. A Comprehensive Survey on Transfer Learning. Proc. IEEE 2021, 109, 43–76. [Google Scholar] [CrossRef]
  71. Tang, N.; Huang, G.; Lei, D.; Jiang, L.; Chen, Q.; He, W.; Tang, F.; Hong, Y.; Lv, J.; Qin, Y.; et al. A Hybrid System for Automatic Identification of Corneal Layers on In Vivo Confocal Microscopy Images. Transl. Vis. Sci. Technol. 2023, 12, 8. [Google Scholar] [CrossRef]
  72. Almasri, M.M.; Alajlan, A.M. Artificial Intelligence-Based Multimodal Medical Image Fusion Using Hybrid S2 Optimal CNN. Electronics 2022, 11, 2124. [Google Scholar] [CrossRef]
  73. Xu, F.; Qin, Y.; He, W.; Huang, G.; Lv, J.; Xie, X.; Diao, C.; Tang, F.; Jiang, L.; Lan, R.; et al. A deep transfer learning framework for the automated assessment of corneal inflammation on in vivo confocal microscopy images. PLoS ONE 2021, 16, e0252653. [Google Scholar] [CrossRef]
  74. Yan, Y.; Jiang, W.; Zhou, Y.; Yu, Y.; Huang, L.; Wan, S.; Zheng, H.; Tian, M.; Wu, H.; Huang, L.; et al. Evaluation of a computer-aided diagnostic model for corneal diseases by analyzing in vivo confocal microscopy images. Front. Med. 2023, 10, 1164188. [Google Scholar] [CrossRef]
  75. Akpek, E.K.; Bunya, V.Y.; Saldanha, I.J. Sjögren’s Syndrome: More Than Just Dry Eye. Cornea 2019, 38, 658–661. [Google Scholar] [CrossRef]
  76. Caban, M.; Omulecki, W.; Latecka-Krajewska, B. Dry eye in Sjögren’s syndrome—Characteristics and therapy. Eur. J. Ophthalmol. 2022, 32, 3174–3184. [Google Scholar] [CrossRef]
77. Yu, K.; Bunya, V.; Maguire, M.; Asbell, P.; Ying, G.S.; Dry Eye Assessment and Management Study Research Group. Systemic Conditions Associated with Severity of Dry Eye Signs and Symptoms in the Dry Eye Assessment and Management Study. Ophthalmology 2021, 128, 1384–1392. [Google Scholar] [CrossRef]
  78. Sikder, S.; Gire, A.; Selter, J.H. The relationship between Graves’ ophthalmopathy and dry eye syndrome. Clin. Ophthalmol. 2014, 9, 57–62. [Google Scholar] [CrossRef]
  79. Shetty, R.; Dua, H.S.; Tong, L.; Kundu, G.; Khamar, P.; Gorimanipalli, B.; D’souza, S. Role of in vivo confocal microscopy in dry eye disease and eye pain. Indian J. Ophthalmol. 2023, 71, 1099–1104. [Google Scholar] [CrossRef]
  80. Stan, C.; Diaconu, E.; Hopirca, L.; Petra, N.; Rednic, A.; Stan, C. Ocular cicatricial pemphigoid. Rom. J. Ophthalmol. 2020, 64, 226–230. [Google Scholar] [CrossRef]
  81. Sobolewska, B.; Schaller, M.; Zierhut, M. Rosacea and Dry Eye Disease. Ocul. Immunol. Inflamm. 2022, 30, 570–579. [Google Scholar] [CrossRef] [PubMed]
  82. Ungureanu, L.; Chaudhuri, K.R.; Diaconu, S.; Falup-Pecurariu, C. Dry eye in Parkinson’s disease: A narrative review. Front. Neurol. 2023, 14, 1236366. [Google Scholar] [CrossRef] [PubMed]
  83. Tiemstra, J.D.; Khatkhate, N. Bell’s palsy: Diagnosis and management. Am. Fam. Physician 2007, 76, 997–1002. [Google Scholar] [PubMed]
  84. Sambhi, R.-D.S.; Mather, R.; Malvankar-Mehta, M.S. Dry eye after refractive surgery: A meta-analysis. Can. J. Ophthalmol. 2020, 55, 99–106. [Google Scholar] [CrossRef] [PubMed]
  85. Kato, K.; Miyake, K.; Hirano, K.; Kondo, M. Management of Postoperative Inflammation and Dry Eye After Cataract Surgery. Cornea 2019, 38, S25–S33. [Google Scholar] [CrossRef] [PubMed]
  86. Tariq, M.; Amin, H.; Ahmed, B.; Ali, U.; Mohiuddin, A. Association of dry eye disease with smoking: A systematic review and meta-analysis. Indian J. Ophthalmol. 2022, 70, 1892–1904. [Google Scholar] [CrossRef] [PubMed]
87. Matsumoto, Y.; Ibrahim, O.M.A. Application of In Vivo Confocal Microscopy in Dry Eye Disease. Investig. Ophthalmol. Vis. Sci. 2018, 59, DES41–DES47. [Google Scholar] [CrossRef]
  88. Wei, S.; Shi, F.; Wang, Y.; Chou, Y.; Li, X. A Deep Learning Model for Automated Sub-Basal Corneal Nerve Segmentation and Evaluation Using In Vivo Confocal Microscopy. Transl. Vis. Sci. Technol. 2020, 9, 32. [Google Scholar] [CrossRef]
  89. Jing, D.; Liu, Y.; Chou, Y.; Jiang, X.; Ren, X.; Yang, L.; Su, J.; Li, X. Change patterns in the corneal sub-basal nerve and corneal aberrations in patients with dry eye disease: An artificial intelligence analysis. Exp. Eye Res. 2021, 215, 108851. [Google Scholar] [CrossRef]
  90. Jing, D.; Jiang, X.; Chou, Y.; Wei, S.; Hao, R.; Su, J.; Li, X. In vivo Confocal Microscopic Evaluation of Previously Neglected Oval Cells in Corneal Nerve Vortex: An Inflammatory Indicator of Dry Eye Disease. Front. Med. 2022, 9, 906219. [Google Scholar] [CrossRef] [PubMed]
  91. Chiang, J.C.B.; Tran, V.; Wolffsohn, J.S. The impact of dry eye disease on corneal nerve parameters: A systematic review and meta-analysis. Ophthalmic Physiol. Opt. 2023, 43, 1079–1091. [Google Scholar] [CrossRef] [PubMed]
  92. Fang, W.; Lin, Z.-X.; Yang, H.-Q.; Zhao, L.; Liu, D.-C.; Pan, Z.-Q. Changes in corneal nerve morphology and function in patients with dry eyes having type 2 diabetes. World J. Clin. Cases 2022, 10, 3014–3026. [Google Scholar] [CrossRef] [PubMed]
  93. Kundu, G.; Shetty, R.; D’souza, S.; Khamar, P.; Nuijts, R.M.M.A.; Sethu, S.; Roy, A.S. A novel combination of corneal confocal microscopy, clinical features and artificial intelligence for evaluation of ocular surface pain. PLoS ONE 2022, 17, e0277086. [Google Scholar] [CrossRef] [PubMed]
  94. Zhao, Y.; Zhang, J.; Pereira, E.; Zheng, Y.; Su, P.; Xie, J.; Zhao, Y.; Shi, Y.; Qi, H.; Liu, J.; et al. Automated Tortuosity Analysis of Nerve Fibers in Corneal Confocal Microscopy. IEEE Trans. Med. Imaging 2020, 39, 2725–2737. [Google Scholar] [CrossRef] [PubMed]
  95. Lecca, M.; Gianini, G.; Serapioni, R.P. Mathematical insights into the Original Retinex Algorithm for Image Enhancement. J. Opt. Soc. Am. A 2022, 39, 2063–2072. [Google Scholar] [CrossRef] [PubMed]
  96. Ma, B.; Xie, J.; Yang, T.; Su, P.; Liu, R.; Sun, T.; Zhou, Y.; Wang, H.; Feng, X.; Ma, S.; et al. Quantification of Increased Corneal Subbasal Nerve Tortuosity in Dry Eye Disease and Its Correlation With Clinical Parameters. Transl. Vis. Sci. Technol. 2021, 10, 26. [Google Scholar] [CrossRef] [PubMed]
97. Fernández, I.; Vázquez, A.; Calonge, M.; Maldonado, M.J.; de la Mata, A.; López-Miguel, A. New Method for the Automated Assessment of Corneal Nerve Tortuosity Using Confocal Microscopy Imaging. Appl. Sci. 2022, 12, 10450. [Google Scholar] [CrossRef]
  98. Zhang, Y.-Y.; Zhao, H.; Lin, J.-Y.; Wu, S.-N.; Liu, X.-W.; Zhang, H.-D.; Shao, Y.; Yang, W.-F. Artificial Intelligence to Detect Meibomian Gland Dysfunction From in-vivo Laser Confocal Microscopy. Front. Med. 2021, 8, 774344. [Google Scholar] [CrossRef]
  99. Maruoka, S.; Tabuchi, H.; Nagasato, D.; Masumoto, H.; Chikama, T.; Kawai, A.C.; Oishi, N.C.; Maruyama, T.; Kato, Y.; Hayashi, T.; et al. Deep Neural Network-Based Method for Detecting Obstructive Meibomian Gland Dysfunction With in Vivo Laser Confocal Microscopy. Cornea 2020, 39, 720–725. [Google Scholar] [CrossRef]
  100. Sim, R.; Yong, K.; Liu, Y.-C.; Tong, L. In Vivo Confocal Microscopy in Different Types of Dry Eye and Meibomian Gland Dysfunction. J. Clin. Med. 2022, 11, 2349. [Google Scholar] [CrossRef] [PubMed]
  101. Levine, H.; Tovar, A.; Cohen, A.K.; Cabrera, K.; Locatelli, E.; Galor, A.; Feuer, W.; O’Brien, R.; Goldhagen, B.E. Automated identification and quantification of activated dendritic cells in central cornea using artificial intelligence. Ocul. Surf. 2023, 29, 480–485. [Google Scholar] [CrossRef] [PubMed]
  102. Setu, A.K.; Schmidt, S.; Musial, G.; Stern, M.E.; Steven, P. Segmentation and Evaluation of Corneal Nerves and Dendritic Cells From In Vivo Confocal Microscopy Images Using Deep Learning. Transl. Vis. Sci. Technol. 2022, 11, 24. [Google Scholar] [CrossRef] [PubMed]
  103. So, W.Z.; Wong, N.S.Q.; Tan, H.C.; Lin, M.T.Y.; Lee, I.X.Y.; Mehta, J.S.; Liu, Y.-C. Diabetic corneal neuropathy as a surrogate marker for diabetic peripheral neuropathy. Neural Regen. Res. 2022, 17, 2172–2178. [Google Scholar] [PubMed]
  104. Mansoor, H.; Tan, H.C.; Lin, M.T.-Y.; Mehta, J.S.; Liu, Y.-C. Diabetic Corneal Neuropathy. J. Clin. Med. 2020, 9, 3956. [Google Scholar] [CrossRef] [PubMed]
  105. Dabbah, M.A.; Graham, J.; Petropoulos, I.; Tavakoli, M.; Malik, R.A. Dual-model automatic detection of nerve-fibres in corneal confocal microscopy images. Med. Image Comput. Comput. Assist Interv. 2010, 13 Pt 1, 300–307. [Google Scholar] [CrossRef] [PubMed]
  106. Dabbah, M.; Graham, J.; Petropoulos, I.; Tavakoli, M.; Malik, R. Automatic analysis of diabetic peripheral neuropathy using multi-scale quantitative morphology of nerve fibres in corneal confocal microscopy imaging. Med. Image Anal. 2011, 15, 738–747. [Google Scholar] [CrossRef]
107. Petropoulos, I.N.; Alam, U.; Fadavi, H.; Marshall, A.; Asghar, O.; Dabbah, M.A.; Chen, X.; Graham, J.; Ponirakis, G.; Boulton, A.J.M.; et al. Rapid Automated Diagnosis of Diabetic Peripheral Neuropathy With In Vivo Corneal Confocal Microscopy. Investig. Ophthalmol. Vis. Sci. 2014, 55, 2071–2078. [Google Scholar] [CrossRef]
  108. Chen, X.; Graham, J.; Dabbah, M.A.; Petropoulos, I.N.; Tavakoli, M.; Malik, R.A. An Automatic Tool for Quantification of Nerve Fibers in Corneal Confocal Microscopy Images. IEEE Trans. Biomed. Eng. 2017, 64, 786–794. [Google Scholar] [CrossRef]
109. Chen, X.; Graham, J.; Petropoulos, I.N.; Ponirakis, G.; Asghar, O.; Alam, U.; Marshall, A.; Ferdousi, M.; Azmi, S.; Efron, N.; et al. Corneal Nerve Fractal Dimension: A Novel Corneal Nerve Metric for the Diagnosis of Diabetic Sensorimotor Polyneuropathy. Investig. Ophthalmol. Vis. Sci. 2018, 59, 1113–1118. [Google Scholar] [CrossRef]
  110. Tang, W.; Chen, X.; Yuan, J.; Meng, Q.; Shi, F.; Xiang, D.; Chen, Z.; Zhu, W. Multi-scale and local feature guidance network for corneal nerve fiber segmentation. Phys. Med. Biol. 2023, 68, 095026. [Google Scholar] [CrossRef] [PubMed]
  111. Huang, L.; Chen, C.; Yun, J.; Sun, Y.; Tian, J.; Hao, Z.; Yu, H.; Ma, H. Multi-Scale Feature Fusion Convolutional Neural Network for Indoor Small Target Detection. Front. Neurorobotics 2022, 16, 881021. [Google Scholar] [CrossRef] [PubMed]
112. Salahouddin, T.; Petropoulos, I.N.; Ferdousi, M.; Ponirakis, G.; Asghar, O.; Alam, U.; Kamran, S.; Mahfoud, Z.R.; Efron, N.; Malik, R.A.; et al. Artificial Intelligence-Based Classification of Diabetic Peripheral Neuropathy From Corneal Confocal Microscopy Images. Diabetes Care 2021, 44, e151–e153. [Google Scholar] [CrossRef] [PubMed]
  113. Meng, Y.; Preston, F.G.; Ferdousi, M.; Azmi, S.; Petropoulos, I.N.; Kaye, S.; Malik, R.A.; Alam, U.; Zheng, Y. Artificial Intelligence Based Analysis of Corneal Confocal Microscopy Images for Diagnosing Peripheral Neuropathy: A Binary Classification Model. J. Clin. Med. 2023, 12, 1284. [Google Scholar] [CrossRef] [PubMed]
  114. Preston, F.G.; Meng, Y.; Burgess, J.; Ferdousi, M.; Azmi, S.; Petropoulos, I.N.; Kaye, S.; Malik, R.A.; Zheng, Y.; Alam, U. Artificial intelligence utilising corneal confocal microscopy for the diagnosis of peripheral neuropathy in diabetes mellitus and prediabetes. Diabetologia 2022, 65, 457–466. [Google Scholar] [CrossRef] [PubMed]
  115. Williams, B.M.; Borroni, D.; Liu, R.; Zhao, Y.; Zhang, J.; Lim, J.; Ma, B.; Romano, V.; Qi, H.; Ferdousi, M.; et al. An artificial intelligence-based deep learning algorithm for the diagnosis of diabetic neuropathy using corneal confocal microscopy: A development and validation study. Diabetologia 2019, 63, 419–430. [Google Scholar] [CrossRef] [PubMed]
  116. Yildiz, E.; Arslan, A.T.; Tas, A.Y.; Acer, A.F.; Demir, S.; Sahin, A.; Barkana, D.E. Generative Adversarial Network Based Automatic Segmentation of Corneal Subbasal Nerves on In Vivo Confocal Microscopy Images. Transl. Vis. Sci. Technol. 2021, 10, 33. [Google Scholar] [CrossRef] [PubMed]
  117. Li, G.; Li, T.; Li, F.; Zhang, C. NerveStitcher: Corneal confocal microscope images stitching with neural networks. Comput. Biol. Med. 2022, 151, 106303. [Google Scholar] [CrossRef]
  118. Elbita, A.; Qahwaji, R.; Ipson, S.; Sharif, M.S.; Ghanchi, F. Preparation of 2D sequences of corneal images for 3D model building. Comput. Methods Programs Biomed. 2014, 114, 194–205. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the literature search history.
Figure 2. In vivo confocal microscopy images of fungal keratitis. (A,B) Bunches of hyper-reflective, linear structures with acute angle branching. Resolution 400 × 400 µm.
Figure 3. In vivo confocal microscopy images of bacterial keratitis. (A,B) No atypical organisms such as Acanthamoeba, fungal filaments, or yeasts. (A) A significant influx of leukocytes. (B) “Dendritiform” cells and keratocyte activation. Resolution 400 × 400 µm.
Figure 4. In vivo confocal microscopy images of Acanthamoeba keratitis. (A,B) Highly reflective oval cysts with a low-refractile wall that has a clear boundary and a dark ring outside. Resolution 400 × 400 µm.
Figure 5. In vivo confocal microscopy images of herpes zoster keratitis. (A) Highly reflective keratic precipitates. (B) A decrease in nerve length and the presence of dendritic inflammatory cells. Resolution 400 × 400 µm.
Figure 6. In vivo confocal microscopy images of dry eye disease. (A) An increased density of highly reflective keratocytes. (B) A decreased density of corneal epithelial cells. Resolution 400 × 400 µm.
Figure 7. In vivo confocal microscopy images of diabetic corneal neuropathy (a patient with type 1 diabetes). (A) Decreased nerve fiber length and density. (B) Increased nerve fiber tortuosity. Resolution 400 × 400 µm.
Table 1. Summary of the different DL systems for the detection of keratitis using IVCM. The reported performance parameters depend on each study's choice of metrics and reflect the best-performing proposed approach; additional techniques and novelties are also noted.

| Authors | Year | Dataset | Artificial Intelligence Method | Results | Additional Techniques and Novelties |
|---|---|---|---|---|---|
| Essalat et al. [28] | 2023 | 4001 images | CNN (DenseNet161) | Accuracy 93.55%; precision 92.52%; recall 94.77%; F1 score 96.93% | Saliency maps |
| Lincke et al. [63] | 2023 | 68,970 images | CNN (ResNet101V2) | Healthy vs. diseased: accuracy 95% | Transfer learning |
| Wu et al. [64] | 2017 | 82 patients | Adaptive robust binary pattern | Accuracy superior to corneal smear examination (p < 0.05); sensitivity 89.29%; specificity 95.65%; AUC 0.946 | Support vector machine |
| Liang et al. [55] | 2023 | 7278 images | SACNN (GoogLeNet and VGGNet) | Accuracy 97.73%; precision 98.68%; sensitivity 97.02%; specificity 98.54%; F1 score 97.84% | Two-stream convolutional network |
| Lv et al. [66] | 2020 | 2088 images | CNN (ResNet) | Accuracy 96.26%; specificity 98.34%; sensitivity 91.86%; AUC 0.9875 | – |
| Lv et al. [67] | 2021 | 1089 images | CNN (ResNet) | Accuracy 96.5%; sensitivity 93.6%; specificity 98.2%; AUC 0.983 | Grad-CAM and guided Grad-CAM to generate explanation maps and pixel explanations |
| Tang et al. [71] | 2023 | 3364 images | CNN (ResNet) | Fusarium: AUC 0.887; Aspergillus: AUC 0.827 | Decision tree classifier and CNN-based classifier; Grad-CAM and guided Grad-CAM to generate explanation maps and pixel explanations |
| Tang et al. [69] | 2023 | 7957 images | CNN (Inception-ResNet V2) and k-nearest neighbor | Precision 90.96%; recall 91.45%; F1 score 91.11%; AUC 0.9841 | Two classifiers (CNN- and KNN-based) and two hybrid fusion strategies (weighted voting and LightGBM) |
| Liu et al. [29] | 2020 | 1213 images | CNN (AlexNet and VGGNet) | Accuracy 99.95%; sensitivity 99.90%; specificity 100% | Sub-area contrast stretching algorithm and histogram matching fusion algorithm |
| Xu et al. [68] | 2021 | 3453 images | CNN (Inception-ResNet V2) | Activated dendritic cells: accuracy 93.19%, sensitivity 81.71%, specificity 95.17%, G-mean 88.72%, AUC 0.9646. Inflammatory cells: accuracy 97.67%, sensitivity 91.74%, specificity 99.31%, G-mean 95.45%, AUC 0.9901 | Transfer learning |
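Most of the CNN-based systems in Table 1 share a common transfer-learning recipe: an ImageNet-pretrained backbone (ResNet, Inception-ResNet, VGG, or DenseNet) is adapted to IVCM frames by replacing and retraining the classification head. The minimal sketch below illustrates that general pattern in PyTorch; it is not the code of any cited study, and the dataset path, image size, class count, and hyperparameters are placeholder assumptions.

```python
# A minimal, illustrative transfer-learning sketch in the spirit of the CNN
# systems in Table 1 (NOT any cited study's code). Dataset layout, image size,
# and hyperparameters are placeholder assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets

# Load an ImageNet-pretrained backbone and replace the classification head
# with a two-class output (e.g., keratitis vs. healthy).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                        # freeze the pretrained features
model.fc = nn.Linear(model.fc.in_features, 2)      # new, trainable head

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),   # IVCM frames are grayscale
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Hypothetical directory layout: ivcm/train/<class_name>/*.png
train_set = datasets.ImageFolder("ivcm/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

model.train()
for images, labels in loader:                      # one epoch shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

Freezing the backbone and training only the new head, as above, is the simplest variant; several of the cited studies instead fine-tune all layers at a lower learning rate once the head has converged.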
Table 2. Summary of the different DL systems for the detection of dry eye disease using IVCM images. The reported performance parameters depend on each study's choice of metrics and reflect the best-performing proposed approach; additional techniques and novelties are also noted.

| Authors | Year | Dataset | Artificial Intelligence Method | Results | Additional Techniques and Novelties |
|---|---|---|---|---|---|
| Yan et al. [74] | 2023 | 19,612 images | CNN (ResNet50) | Internal test: accuracy 91.4%, 95.7%, 96.7%, and 95% for recognizing each corneal layer; accuracy 96.1%, 93.2%, 94.5%, and 95.9% for normal/abnormal image recognition per layer. External test: accuracy 96.0%, 96.5%, 96.6%, and 96.4% for layer recognition; accuracy 98.3%, 97.2%, 94.0%, and 98.2% for normal/abnormal recognition per layer | – |
| Wei et al. [88] | 2020 | 5221 images | CNN (ResNet34) | AUC 0.96 | CNS-Net established |
| Jing et al. [90] | 2022 | ~2290 images | CNN (CNS-Net) | Corneal nerve morphology (average density and maximum length) was significantly correlated with the corneal intrinsic aberrations | Corneal sub-basal nerve morphology and corneal intrinsic aberrations investigated with CNS-Net |
| Kundu et al. [93] | 2022 | 120 images | CCMetrics for nerve fiber characteristics and a random forest classifier | AUC 0.736; accuracy 86%; F1 score 85.9%; precision 85.6%; recall 86.3% | Correlation analysis between clinical symptoms and imaging parameters of ocular surface pain |
| Zhao et al. [94] | 2020 | 322 images | CS-NET; infinite perimeter active contour with hybrid region information | Accuracy 81.8% on the first dataset; 87.5% on the second dataset | Retinex model, an advanced exponential curvature estimation method, and a linear support vector machine |
| Ma et al. [96] | 2021 | 1501 images | kNN-DOWA; infinite perimeter active contour with hybrid region information | Tortuosity was higher in patients with DED than in healthy volunteers (p < 0.001); positively correlated with the ocular surface disease index (r = 0.418, p = 0.003) and negatively with tear breakup time (r = −0.398, p = 0.007); no correlation with visual analog scale scores, corneal fluorescein staining scores, or the Schirmer I test | – |
| Fernández et al. [97] | 2022 | 43 images | Watershed algorithm | Tortuosity index significantly higher in post-LASIK patients with ocular pain than in controls, whereas manual measurements detected no significant differences; tortuosity positively correlated with the ocular surface disease index (OSDI) and a numeric rating scale (NRS) for pain | – |
| Zhang et al. [98] | 2021 | 8311 images | CNN (DenseNet169) | OMGD: AUC 97.3%, sensitivity 88.8%, specificity 95.4%. AMGD: AUC 98.6%, sensitivity 89.4%, specificity 98.4% | – |
| Maruoka et al. [99] | 2020 | 380 images | CNNs (DenseNet-201, VGG16, DenseNet-169, and InceptionV3) | Single DL model: AUC 0.966, sensitivity 94.2%, specificity 82.1%. Ensemble DL model (VGG16 + DenseNet-169 + DenseNet-201 + InceptionV3): AUC 0.981, sensitivity 92.1%, specificity 98.8% | Transfer learning |
| Levine et al. [101] | 2023 | 173 images | CNNs (CSPDarknet53 and YOLOv3) | Mean number of aDCs in the central cornea: 0.83 ± 1.33 cells/image (automated) vs. 1.03 ± 1.65 cells/image (manual) | Transfer learning |
| Setu et al. [102] | 2022 | 1219 images | CNN (U-Net) and Mask R-CNN | CNF model: sensitivity 86.1%, specificity 90.1%. DC model: precision 89.37%, recall 94.43%, F1 score 91.83% | – |

Abbreviations: CNS-Net—corneal nerve segmentation network; LASIK—laser-assisted in situ keratomileusis; aDCs—activated dendritic cells; CNFs—corneal nerve fibers; DCs—dendritic cells; DED—dry eye disease; OMGD—obstructive meibomian gland dysfunction; AMGD—atrophic meibomian gland dysfunction.
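The performance parameters quoted throughout Tables 1–3 (accuracy, sensitivity/recall, specificity, precision, F1 score, and AUC) all derive from the same confusion-matrix counts. As a quick reference, the sketch below computes them for a made-up set of predictions; the label and probability arrays are arbitrary illustrations, not data from any cited study.

```python
# Deriving the performance parameters reported in Tables 1-3 from predictions.
# The arrays below are made-up illustrations.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                    # ground-truth labels
y_prob = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.35, 0.8, 0.6])   # predicted P(disease)
y_pred = (y_prob >= 0.5).astype(int)                           # threshold at 0.5

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)                                   # a.k.a. recall
specificity = tn / (tn + fp)
precision   = tp / (tp + fp)
f1          = 2 * precision * sensitivity / (precision + sensitivity)
auc         = roc_auc_score(y_true, y_prob)                    # threshold-independent

print(f"Acc {accuracy:.2f}  Sens {sensitivity:.2f}  Spec {specificity:.2f}  "
      f"Prec {precision:.2f}  F1 {f1:.2f}  AUC {auc:.2f}")
```

Note that every metric except the AUC depends on the 0.5 decision threshold chosen here, which is one reason the threshold-independent AUC is reported so consistently across the cited studies.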
Table 3. Summary of the different DL systems for the detection of diabetic corneal neuropathy using IVCM. The reported performance parameters depend on each study's choice of metrics and reflect the best-performing proposed approach; additional techniques and novelties are also noted.

| Authors | Year | Dataset | Artificial Intelligence Method | Results | Additional Techniques and Novelties |
|---|---|---|---|---|---|
| Dabbah et al. [105] | 2010 | 525 images | 2D Gabor wavelet with a Gaussian envelope | Automatic analysis consistent with manual analysis (r = 0.92) | – |
| Dabbah et al. [106] | 2011 | 521 images | 2D Gabor wavelet with a Gaussian envelope | Lowest equal error rate of 15.44% | – |
| Petropoulos et al. [107] | 2014 | 186 patients | 2D Gabor wavelet with a Gaussian envelope | Manual and automated analyses highly correlated: CNFD (r = 0.9, p < 0.0001), CNFL (r = 0.89, p < 0.0001), CNBD (r = 0.75, p < 0.0001) | – |
| Chen et al. [108] | 2017 | 888 images | 2D Gabor wavelet with a Gaussian envelope and dual-tree complex wavelet transforms | Nerve fiber detection: sensitivity 91.7%, specificity 91.3% | – |
| Chen et al. [109] | 2018 | 176 patients | 2D Gabor wavelet with a Gaussian envelope and dual-tree complex wavelet transforms | Comparable AUCs for identifying DSPN: 0.77 (automated CNFD), 0.74 (automated CNFL), 0.69 (automated CNBD), 0.74 (automated ACNFrD) | – |
| Tang et al. [110] | 2023 | 524 images | CNN (MLFGNet) | Dice coefficients of 89.33%, 89.41%, and 88.29% | Multiscale progressive guidance module, local feature-guided attention module, and multiscale deep supervision module |
| Salahouddin et al. [112] | 2021 | 108 patients | CNN (U-Net) | DPN vs. control subjects: AUC 0.86, sensitivity 84%, specificity 71%. DPN vs. DPN+: AUC 0.95, sensitivity 92%, specificity 80%. Control subjects vs. DPN+: AUC 1.0, sensitivity 100%, specificity 95% | – |
| Meng et al. [113] | 2023 | 279 patients | CNN (ResNet50) | Sensitivity 91%; specificity 93%; AUC 0.95 | Grad-CAM and guided Grad-CAM to generate explanation maps and pixel explanations |
| Meng et al. [114] | 2022 | 228 patients | CNN (ResNet50) | HV: recall 100%, precision 83%, F1 score 91%. PN−: recall 85%, precision 92%, F1 score 88%. PN+: recall 83%, precision 100%, F1 score 91% | Grad-CAM and guided Grad-CAM to generate explanation maps and pixel explanations; occlusion sensitivity |
| Williams et al. [115] | 2020 | 1698 images | CNN (U-Net) | Intraclass correlations: total corneal nerve fiber length 0.933; mean length per segment 0.656; number of branch points 0.891 | – |
| Yıldız et al. [116] | 2021 | 510 images | CNN (U-Net) and GAN | U-Net: AUC 0.8934. GAN: AUC 0.9439 | – |
| Li et al. [117] | 2022 | 30 image sets | CNN (VGGNet) | The stitching method evaluated the corneal nerves of patients more accurately and reliably than single images | – |
| Elbita et al. [118] | 2014 | 356 images | Back-propagation neural network | Accuracy 99.4% | DCT filter, Gaussian smoothing, contrast standardization, and Otsu's thresholding |

Abbreviations: CNFD—corneal nerve fiber density; CNFL—corneal nerve fiber length; CNBD—corneal nerve fiber branch density; DSPN—diabetic sensorimotor polyneuropathy; ACNFrD—automated corneal nerve fiber fractal dimension; DPN—diabetic peripheral neuropathy; HV—healthy volunteers; PN—peripheral neuropathy; GAN—generative adversarial network.
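Once a segmentation network such as the U-Net variants in Table 3 has produced a binary nerve mask, morphological metrics such as the CNFL follow from elementary image processing. The sketch below shows one plausible way to approximate the CNFL by skeletonizing the mask; it is illustrative only, and the pixel size and frame area are placeholder values for a nominal 400 × 400 µm IVCM frame rather than parameters taken from any cited tool.

```python
# Illustrative sketch (not from any cited paper): approximating CNFL once a
# CNN has produced a binary nerve mask. Pixel size and frame area are
# placeholder assumptions for a nominal 400 x 400 um IVCM frame.
import numpy as np
from skimage.morphology import skeletonize

def cnfl_mm_per_mm2(nerve_mask: np.ndarray,
                    pixel_size_um: float = 400 / 384,
                    frame_area_mm2: float = 0.16) -> float:
    """Approximate CNFL (mm of nerve per mm^2 of cornea) from a binary mask."""
    skeleton = skeletonize(nerve_mask.astype(bool))    # thin fibers to 1-px lines
    length_mm = skeleton.sum() * pixel_size_um / 1000  # pixel count -> millimeters
    return length_mm / frame_area_mm2

# Toy example: a single straight "nerve" across a 384 x 384 frame.
mask = np.zeros((384, 384), dtype=bool)
mask[190:193, :] = True                                # 3-px-thick horizontal fiber
print(f"CNFL ~ {cnfl_mm_per_mm2(mask):.1f} mm/mm^2")
```

Counting skeleton pixels slightly underestimates the length of diagonal segments; a production tool would typically trace the skeleton and sum the Euclidean distances between neighboring pixels instead.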