Article

Effective Fundus Image Decomposition for the Detection of Red Lesions and Hard Exudates to Aid in the Diagnosis of Diabetic Retinopathy

by Roberto Romero-Oraá 1,2,*, María García 1,2, Javier Oraá-Pérez 1, María I. López-Gálvez 1,2,3,4 and Roberto Hornero 1,2,5

1 Biomedical Engineering Group, Universidad de Valladolid, 47011 Valladolid, Spain
2 Centro de Investigación Biomédica en Red de Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), 28029 Madrid, Spain
3 Department of Ophthalmology, Hospital Clínico Universitario de Valladolid, 47003 Valladolid, Spain
4 Instituto Universitario de Oftalmobiología Aplicada (IOBA), Universidad de Valladolid, 47011 Valladolid, Spain
5 Instituto de Investigación en Matemáticas (IMUVA), Universidad de Valladolid, 47011 Valladolid, Spain
* Author to whom correspondence should be addressed.
Sensors 2020, 20(22), 6549; https://doi.org/10.3390/s20226549
Submission received: 29 September 2020 / Revised: 7 November 2020 / Accepted: 13 November 2020 / Published: 16 November 2020
(This article belongs to the Section Sensing and Imaging)

Abstract

Diabetic retinopathy (DR) is characterized by the presence of red lesions (RLs), such as microaneurysms and hemorrhages, and bright lesions, such as exudates (EXs). Early DR diagnosis is paramount to prevent serious sight damage. Computer-assisted diagnostic systems are based on the detection of those lesions through the analysis of fundus images. In this paper, a novel method is proposed for the automatic detection of RLs and EXs. As the main contribution, the fundus image was decomposed into various layers, including the lesion candidates, the reflective features of the retina, and the choroidal vasculature visible in tigroid retinas. We used a proprietary database containing 564 images, randomly divided into a training set and a test set, and the public database DiaretDB1 to verify the robustness of the algorithm. Lesion detection results were computed per pixel and per image. Using the proprietary database, 88.34% per-image accuracy (ACC_i), 91.07% per-pixel positive predictive value (PPV_p), and 85.25% per-pixel sensitivity (SE_p) were reached for the detection of RLs. Using the public database, 90.16% ACC_i, 96.26% PPV_p, and 84.79% SE_p were obtained. As for the detection of EXs, 95.41% ACC_i, 96.01% PPV_p, and 89.42% SE_p were reached with the proprietary database. Using the public database, 91.80% ACC_i, 98.59% PPV_p, and 91.65% SE_p were obtained. The proposed method could be useful to aid in the diagnosis of DR, reducing the workload of specialists and improving the care of diabetic patients.


1. Introduction

Diabetic retinopathy (DR) is a microvascular complication of diabetes mellitus [1,2]. While it is asymptomatic in its initial stage, it progressively leads to vision loss [2]. With the rising incidence of diabetes, DR has become the main cause of blindness and visual impairment in the working-age population [1]. However, it has been proven that serious sight damage can be prevented through early, accurate diagnosis and proper eye care [1,3]. Therefore, it is important to carry out regular DR examinations based on the analysis of color fundus images to detect the characteristic signs of DR: red lesions (RLs), such as microaneurysms (MAs) and hemorrhages (HEs), and bright lesions, such as exudates (EXs) [2,4]. MAs, the earliest visible sign of DR, are caused by leakages from tiny blood vessels. They appear as reddish, small, circular dots. HEs are produced by retinal ischemia and the rupture of damaged, fragile retinal vessels. They generally look like larger red spots with irregular shapes. EXs are fluid deposits of lipoproteins and other proteins leaking through abnormal retinal blood vessels. They appear as yellowish, bright patches of varied shapes and sizes with sharp edges [2,4]. Although manual reading of retinal images has proven effective in patient care, it demands considerable effort, time, and cost [5]. In this context, computer-assisted diagnostic systems (CADSs) are designed to relieve the workload of specialists, reduce health costs, and hasten diagnosis [1,4].
Several studies have been developed in the past twenty years to detect DR-related lesions in fundus images [6]. Some approaches focused on detecting MAs alone and can be divided into four groups [4]: mathematical morphology-based [7], region growing-based [8], wavelet-based [9], and hybrid approaches [10,11]. Other methods focused exclusively on detecting HEs and can be divided into two categories [4]: mathematical morphology [12,13] and pixel classification [14]. However, MAs and HEs are typically detected together, because they frequently look similar and their distinction is not necessary to determine the presence of DR [1]. Accordingly, several works detect all RLs together. Among the recent studies, Seoud et al. [15] proposed a technique based on dynamic shape features, which represent the evolution of the shape during image flooding. Other authors divided the image into superpixels to detect RLs [16,17]. Srivastava et al. [18] proposed applying filters on patches of different sizes and combining the results using multiple kernel learning. Deep learning has also been employed in some studies, achieving successful results in the detection of RLs [19,20,21].
Numerous methods have also been proposed to detect EXs. Generally, they can be divided into three groups: clustering-based [22,23,24]; mathematical morphology, thresholding, and region growing-based [25,26,27]; and pixel classification-based [28,29]. Among the most recent studies, an unsupervised approach based on the ant colony optimization algorithm has been proposed [30]. Theera-Umpon et al. [31] proposed an EX detection method exploring other supervised learning techniques, such as support vector machines and neural networks. Superpixel approaches can also be found in the literature [32]. In this context, other authors have employed deep learning together with some additional operations [33,34,35], and Guo et al. [36] introduced a novel top-k loss for EX segmentation, which addresses class imbalance and loss imbalance by focusing on the hard-to-classify pixels.
None of the previous studies have individually considered other structures of the retina beyond the optic disc (OD), the fovea, and the vasculature. We hypothesize that the reflective features of the retina and the choroidal vasculature visible in tigroid retinas can also be useful for the detection of retinal lesions. In this study, we propose a novel method to detect RLs and EXs where the image is decomposed into several layers providing, separately, information on different structures of the retina. Among these layers, the lesion candidates, the reflective features, and the choroidal vessels were included, which is the main contribution of this work. This decomposition was based on human perception. Hence, the layers were directly separated using color and spatial information.

2. Retinal Image Databases

In this study, both a proprietary dataset and a public database were employed. The proprietary dataset was divided into a training set, used for the development of the method, and an independent test set, used to evaluate the performance. We also used the public database DiaretDB1 [37] to verify the robustness and generalization ability of the proposed method.
The proprietary dataset consisted of 564 retinal images provided by the Instituto de Oftalmobiología Aplicada (IOBA) of the University of Valladolid (Valladolid, Spain) and the Hospital Clínico Universitario de Valladolid (Valladolid, Spain). All subjects gave their informed consent to participate in the study. Our research was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee at the Hospital Clínico Universitario de Valladolid. The images were captured using a Topcon TRC-NW400 automatic retinal camera (Topcon Medical Systems, Inc., Oakland, NJ, USA) at a 45-degree field of view (FOV). All images had a resolution of 1956 × 1934 pixels and were captured using the two-field protocol adopted by the National Service Framework for Diabetes in the United Kingdom for DR screening [38]. This way, two images were captured per eye: one fovea-centered and one OD-centered. It should be noted that, for some patients, fewer than four images were available. All the RLs and EXs in this database were manually annotated in detail by an ophthalmologist. While 270 out of 564 fundus images showed DR signs, the remaining 294 lacked any type of lesion. Among the 270 pathological images, 183 showed EXs, 239 showed RLs, and 152 included both EXs and RLs. The database was randomly divided into two balanced sets. The training set (281 images) allowed us to develop the method and optimize its parameters. The test set (283 images) was used to evaluate the performance of the method. It should be noted that all the images of the same patient were included in the same set.
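This patient-level constraint can be reproduced with a group-aware random split. The following minimal sketch assumes each image carries a patient identifier; the function name and the roughly 50/50 proportion are illustrative, not the exact procedure used here:

```python
# Sketch of a patient-level random split: all images of one patient land
# in the same set. `patient_ids` (one id per image) is an assumed input.
from sklearn.model_selection import GroupShuffleSplit

def split_by_patient(image_paths, patient_ids, seed=0):
    """Return (train_idx, test_idx) with no patient shared across sets."""
    splitter = GroupShuffleSplit(n_splits=1, test_size=0.5, random_state=seed)
    train_idx, test_idx = next(splitter.split(image_paths, groups=patient_ids))
    return train_idx, test_idx
```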
The DiaretDB1 database is composed of 89 images captured at the Kuopio University Hospital and divided into a training set (28 images) and a test set (61 images) [37]. They were captured at a 50-degree FOV and have a resolution of 1500 × 1152 pixels. All the images were fovea-centered, and the ground truth of the lesions was roughly annotated by four experts using circles, ellipses, and polygons. The DiaretDB1 database was used to verify the robustness of the proposed method and to compare our results with those obtained in previous studies. To perform this comparison, only the 61 images in the test set of DiaretDB1 were used.

3. Methods

3.1. Overview

The proposed method comprises several steps. First, we applied a preprocessing stage to normalize the image appearance and enhance the retinal structures. Second, we obtained an estimation of the retinal background, the segmentation of the vasculature, and the location of the OD and the fovea. Third, the image was decomposed into several layers. Then, multiple lesion candidates were segmented. Next, several features were extracted using the obtained layers, and feature selection was performed using the fast correlation-based filter (FCBF) technique [39]. Finally, a multilayer perceptron (MLP) was used to distinguish the true lesions from the rest of the candidates [40]. All the steps are described in the following subsections. The diagram in Figure 1 shows the global structure of the proposed method.

3.2. Preprocessing

The appearance of fundus images is strongly affected by image quality and the intrinsic features of the patient [10,15]. A preprocessing stage is required to normalize the input images and ease subsequent processing. In this study, we applied our method in [17], which consists of five sequential operations: bright border artifact removal, background extension, illumination and color equalization, denoising, and contrast enhancement. This notably highlighted the retinal landmarks and achieved both intra-image and inter-image normalization. The result of this stage, I_prep, can be seen in Figure 2. From this stage, we also obtained the diameter of the FOV, D, which allowed us to define relative sizes for subsequent operations.
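For illustration, the sketch below approximates three of these operations (illumination equalization, denoising, and contrast enhancement) with standard OpenCV primitives. The filters and parameters are assumptions for demonstration, not the published settings of [17], and the artifact removal and background extension steps are omitted:

```python
# Approximate preprocessing sketch: illumination/color equalization,
# denoising, and contrast enhancement. Parameters are illustrative only.
import cv2
import numpy as np

def preprocess(bgr):
    # Illumination equalization: divide each channel by a heavily
    # smoothed copy that approximates the illumination field.
    smooth = cv2.GaussianBlur(bgr.astype(np.float32), (0, 0), sigmaX=51)
    equalized = cv2.normalize(bgr / (smooth + 1e-6), None, 0, 255,
                              cv2.NORM_MINMAX).astype(np.uint8)
    # Denoising with a small median filter.
    denoised = cv2.medianBlur(equalized, 3)
    # Contrast enhancement with CLAHE on the luminance channel.
    lab = cv2.cvtColor(denoised, cv2.COLOR_BGR2LAB)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab[..., 0] = clahe.apply(lab[..., 0])
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
```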

3.3. Background Extraction

In order to detect the visible lesions in fundus images, it is important to separate the background from the foreground. The foreground carries the main information and is composed of pixels that are dark or bright relative to the background [41]. In this stage, we estimated the background of the fundus image, I_bg, using our method in [42]. Additionally, we eliminated the dark pixels in I_prep to obtain the image I_bg_bri and eliminated the bright pixels to obtain the image I_bg_dark. The images obtained in this stage can be seen in Figure 3. For this figure, we selected a more illustrative example than the image in Figure 2.
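As a rough illustration of this idea, the following sketch estimates the background with a large median filter and replaces dark or bright foreground pixels with the background estimate. The actual method follows [42], so the window size and tolerance here are assumptions:

```python
# Illustrative background extraction: I_bg is a large-scale smoothing of
# I_prep; I_bg_bri keeps bright structures (dark pixels replaced by the
# background) and I_bg_dark keeps dark structures (bright pixels replaced).
import numpy as np
from scipy.ndimage import median_filter

def background_images(i_prep, win=61, tol=10):
    i_bg = median_filter(i_prep, size=(win, win, 1))
    dark = (i_prep.astype(int) < i_bg.astype(int) - tol).any(axis=2)
    bright = (i_prep.astype(int) > i_bg.astype(int) + tol).any(axis=2)
    i_bg_bri = i_prep.copy()
    i_bg_bri[dark] = i_bg[dark]       # remove dark pixels -> I_bg_bri
    i_bg_dark = i_prep.copy()
    i_bg_dark[bright] = i_bg[bright]  # remove bright pixels -> I_bg_dark
    return i_bg, i_bg_bri, i_bg_dark
```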

3.4. Detection of the Main Anatomical Structures

The fovea, the OD, and the vascular network are the most important landmarks in the fundus image [43]. Since the fovea and the blood vessels appear dark, they can be confused with some RLs. In the same way, parts inside the OD can potentially be classified as EXs [43]. Therefore, the prior detection of these structures is useful for the automatic detection of retinal lesions. The aim of this stage was to segment the vasculature and locate the OD and fovea centers. For this task, we applied our methods in [42], which have proven robust across four different databases. These methods work properly with different types of FOV and do not require a standardized imaging protocol. The vasculature segmentation, M_vess, was based on vessel centerline detection, region growing, and morphology. The OD and fovea locations were based on various saliency maps, applying morphological operations, spatial filters, and template matching [42]. The OD was modeled as a circle with radius R_OD = D/12 pixels [44].

3.5. Red Lesion Candidate Segmentation

In order to detect RLs, the fundus image was decomposed into several layers. Each layer represented a different structure of the retina and provided useful information for the detection of retinal lesions. Since RLs appear as dark regions, we calculated the complement of the subtraction I_bg_dark − I_bg in order to select the dark pixels, obtaining I_dark. In this image, the color difference between the dark pixels and the background was highlighted, while the rest of the pixels were left black (see Figure 4a).
After preprocessing, inter- and intra-image variability was reduced. Hence, the color of the visible structures in I_dark was always consistent, so they could be directly separated using color and spatial information. The structures of interest in I_dark for RL detection are the blood vessels, the choroidal atrophy, and the remaining dark pixels, which are the RL candidates.
As expected, the color of the vasculature in I_dark is very similar to that of RLs. Therefore, we eliminated the pixels belonging to blood vessels using the mask M_vess obtained in Section 3.4. The result, I_dark2, can be seen in Figure 4b. There is another type of structure that is easily distinguishable in I_dark: the underlying choroidal vasculature visible in tigroid retinas [45]. It is caused by the lack of pigment in the retinal pigment epithelium and is common in aged or myopic patients [45]. It has pink tones in I_dark and can hinder the detection of RLs due to its high contrast against the background. Separating the choroidal vasculature is useful to classify the RLs in the image, since images featuring very marked choroidal vessels tend to present false positives [17]. For this reason, the next step was to separate the layer corresponding to the choroidal vessels in I_dark2. For this task, we used the color information in the Hue-Saturation-Value (HSV) color space. This color space was designed to approximate eye perception and is useful to replicate human interpretation [46]. It decouples the brightness component from the color-carrying information. Using HSV, we analyzed the pixels belonging to the choroidal vessels in the training set and empirically selected the ranges of values that represented those pixels for each channel. In this work, these ranges were determined to be H = [0.75, 0.1] (wrapping around the ends of the hue axis), S = [0.2, 1.0], and V = [0.05, 1.0]. Then, we segmented the pixels in I_dark2 that fell within the selected ranges, obtaining the image L_chor_dark, which represented the layer of choroidal atrophy (see Figure 4c). In the same way, we selected the HSV color ranges associated with RLs. For this task, we analyzed all the pixels belonging to RLs in the images of the training set annotated by the ophthalmologist. The selected ranges in this work were H = [0.1, 0.45], S = [0.1, 1.0], and V = [0.2, 1.0]. Then, we segmented the pixels in I_dark2 that fell within the selected ranges, obtaining the layer L_rl_cand, associated with potential RLs (see Figure 4d). Finally, the mask of potential RL candidates, M_rl_cand, was obtained by binarizing the layer L_rl_cand. The layers of interest obtained in this phase, L_chor_dark (Figure 4c) and L_rl_cand (Figure 4d), were also useful for lesion candidate classification in later stages.
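Since the hue range for the choroidal vessels wraps around the ends of the H axis, an implementation must treat hue circularly. A minimal sketch of this range-based layer extraction, using the ranges above (the library choice and helper names are assumptions):

```python
# Sketch of HSV range segmentation with circular hue handling.
import numpy as np
from skimage.color import rgb2hsv

def in_range(channel, lo, hi, circular=False):
    if circular and lo > hi:        # range wraps around 1.0 -> 0.0
        return (channel >= lo) | (channel <= hi)
    return (channel >= lo) & (channel <= hi)

def hsv_layer(rgb, h_range, s_range, v_range):
    hsv = rgb2hsv(rgb)
    mask = (in_range(hsv[..., 0], *h_range, circular=True)
            & in_range(hsv[..., 1], *s_range)
            & in_range(hsv[..., 2], *v_range))
    layer = rgb.copy()
    layer[~mask] = 0                # keep only the selected pixels
    return layer, mask

# Choroidal layer and RL-candidate layer from I_dark2:
# l_chor_dark, _ = hsv_layer(i_dark2, (0.75, 0.1), (0.2, 1.0), (0.05, 1.0))
# l_rl_cand, m_rl_cand = hsv_layer(i_dark2, (0.1, 0.45), (0.1, 1.0), (0.2, 1.0))
```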

3.6. Exudate Candidate Segmentation

EXs appear as bright regions in contrast with the background. Therefore, we subtracted the image I_bg from the image I_bg_bri in order to select the bright pixels. The obtained image, I_bri, enhanced the color difference of the bright pixels with respect to the retinal background, while the rest of the pixels were left black (see Figure 5a).
Following the same idea as for the segmentation of RLs, I_bri was divided into different layers of information. The proposed method relied on the idea that the color of the pixels in I_bri is constant for each type of structure. This way, we separated the different structures of interest in I_bri using color and spatial information, again employing the HSV color space [46]. The structures of interest in I_bri are the choroidal vasculature, the reflective features, and the EX candidates.
Often, parts of the tigroid retina appear as bright pixels in contrast to the retinal background and show red tones in I_bri (see Figure 5a). We analyzed the pixels in I_bri belonging to choroidal vessels in tigroid retinas using the training set and empirically selected the HSV ranges representing those pixels. In this study, those ranges were H = [0.75, 0.15], S = [0, 1.0], and V = [0, 1.0]. Then, the layer of choroidal vessels, L_chor_bri, was obtained by segmenting the pixels in I_bri that fell within the selected ranges (see Figure 5b). Other structures visible in I_bri were the reflective features caused by the nerve fiber layer [47]. They are very common in the retinas of young patients and cannot be considered abnormalities. Most of these marks are concentrated along the widest vessels [47] and tend to appear green and blue in I_bri (see Figure 5a). Due to their color, they can be confused with EXs. Therefore, separating the reflective features is useful to classify the true EXs in the image. In order to separate reflective features from lesions, we selected the pixels associated with reflective features in the training set using the HSV color space. The selected ranges were H = [0.25, 0.85], S = [0, 1.0], and V = [0, 1.0], obtaining the image I_bm1. On the other hand, we selected the pixels surrounding the main vessels in the vascular network. For this task, we first performed a morphological opening on the mask M_vess to roughly remove the thin vessels, using a disk of radius R_OD/10 pixels. Second, a morphological dilation was performed using a disk of radius D/60 pixels, obtaining the image I_bm2. Finally, the reflective features layer, L_bm, was obtained by multiplying I_bm1 and I_bm2 to select the bright marks surrounding the vasculature (see Figure 5c). Using the same idea, the layer of potential EXs was also extracted from I_bri. We used the HSV color space and, for each component, selected the ranges of values within which the EXs lay. For this task, we analyzed all the pixels belonging to EXs in the images of the training set annotated by the ophthalmologist. In this work, the selected ranges were H = [0.15, 0.45], S = [0.1, 1.0], and V = [0.1, 1.0]. Thus, we obtained the layer L_ex_cand (see Figure 5d). Finally, we binarized L_ex_cand to obtain the binary mask of potential EX candidates, M_ex_cand. The layers of interest obtained in this phase, L_chor_bri (Figure 5b), L_bm (Figure 5c), and L_ex_cand (Figure 5d), were also useful for lesion candidate classification in later stages.
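The vessel-proximity part of this layer can be sketched with standard morphology. The disk radii (R_OD/10 and D/60) follow the text; the surrounding code is an assumption:

```python
# Sketch of the vessel-surroundings mask I_bm2: opening removes thin
# vessels, dilation grows the remaining main vessels, and the product with
# the color mask I_bm1 yields the reflective-feature layer L_bm.
import numpy as np
from skimage.morphology import disk, binary_opening, binary_dilation

def reflective_layer(i_bm1, m_vess, r_od, d_fov):
    main_vessels = binary_opening(m_vess, disk(max(1, round(r_od / 10))))
    i_bm2 = binary_dilation(main_vessels, disk(max(1, round(d_fov / 60))))
    return i_bm1 * i_bm2[..., np.newaxis]  # keep bright marks near vessels
```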

3.7. Red Lesion Classification

Once the RL candidates were obtained, we used an MLP to separate the true RLs from non-RL candidates. This type of neural network has been used in previous studies for the automatic detection of RLs [17,48]. This stage comprises three steps:

3.7.1. Feature Extraction

For each candidate region in M_rl_cand, a set of features was extracted using the previously obtained layers. We included 100 features, as specified in Table 1. As can be seen, most of the features are directly extracted from the decomposed layers.

3.7.2. Feature Selection

Reducing the number of features to a set of relevant, low-correlated ones decreases classification errors and simplifies the structure of the classifier [40]. For this task, the FCBF method was applied [39]. FCBF is classifier-independent and computes symmetrical uncertainty to find the most relevant and non-redundant features for a given problem [39]. The 24 selected features are also specified in Table 1. Features of different types were selected, including shape, distance, intensity, and variability around the candidates in different layers.
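A compact sketch of FCBF over discretized feature columns is shown below; the value of the relevance threshold delta and the prior discretization of continuous features (e.g., equal-width binning) are assumptions of this sketch:

```python
# Minimal FCBF sketch [39]: rank features by symmetrical uncertainty (SU)
# with the class, then drop features more correlated with an already
# selected feature than with the class.
import numpy as np
from collections import Counter

def entropy(values):
    p = np.array(list(Counter(values).values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

def symmetrical_uncertainty(x, y):
    hx, hy = entropy(x), entropy(y)
    if hx + hy == 0:
        return 0.0
    mutual_info = hx + hy - entropy(list(zip(x, y)))
    return 2.0 * mutual_info / (hx + hy)

def fcbf(features, labels, delta=0.0):
    """features: list of discretized columns. Returns selected indices."""
    su_class = [symmetrical_uncertainty(f, labels) for f in features]
    order = [i for i in np.argsort(su_class)[::-1] if su_class[i] > delta]
    selected = []
    while order:
        best = order.pop(0)
        selected.append(best)
        # Drop features more correlated with `best` than with the class.
        order = [j for j in order
                 if symmetrical_uncertainty(features[best], features[j])
                 < su_class[j]]
    return selected
```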

3.7.3. Multilayer Perceptron Neural Network

An MLP consists of several fully connected layers of neurons. It maps a set of input variables onto a set of output variables using a nonlinear function [40]. Since a single hidden layer of neurons is capable of universal approximation, a 3-layer MLP (input-hidden-output) was used [49]. The number of neurons in the input layer was the number of selected features. We used a single neuron in the output layer, since our problem was dichotomous. The number of hidden units was experimentally optimized during the training stage. The activation function used in the hidden layer was the hyperbolic tangent sigmoid (tanh), which accelerates the learning process of the network [40]. The logistic sigmoid was used as the activation function in the output neuron. We used the scaled conjugate gradient backpropagation method as the learning function, and the cross-entropy error function was minimized during training [17,40]. In addition, we used a regularization parameter, experimentally optimized during training, to avoid overfitting and improve generalization [40].
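A close approximation of this configuration can be sketched with scikit-learn. The tanh hidden layer, logistic output (automatic for binary problems), cross-entropy loss, and L2 regularization (alpha) match the description; however, scikit-learn does not provide scaled conjugate gradient, so the 'lbfgs' solver stands in for it here, which is an assumption:

```python
# Sketch of the described 3-layer MLP (input-hidden-output).
from sklearn.neural_network import MLPClassifier

def build_mlp(n_hidden, reg):
    return MLPClassifier(hidden_layer_sizes=(n_hidden,),
                         activation='tanh',   # hyperbolic tangent sigmoid
                         solver='lbfgs',      # stand-in for SCG backprop
                         alpha=reg,           # regularization parameter
                         max_iter=1000)
```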

3.8. Exudate Classification

After EX candidate segmentation, we used an MLP to detect the true EXs. This type of network has also been used in previous studies for the automatic detection of EXs [50]. This classification stage comprises three steps:

3.8.1. Feature Extraction

For each candidate region in M_ex_cand, a set of features was extracted using the layers obtained in the previous stages. It should be noted that the same set of 100 features used for RL candidate classification was also used in the case of EXs. They are listed in Table 1.

3.8.2. Feature Selection

As for the RL classification, we used the FCBF method to select a reduced number of features. The 34 features selected in this stage are specified in Table 1. Again, features of different types were selected, including shape, distance, intensity, and variability around the candidates in different layers.

3.8.3. Multilayer Perceptron Neural Network

For this step, EX candidates were classified using an MLP with the same configuration as the one used for RL candidate classification. However, a joint classification for both types of lesions would not be possible in our approach, since the selected features were different. Moreover, it is interesting to separately classify the lesions from a clinical point of view. They often appear at different times and have implications in determining the severity of the disease. In this case, the number of neurons in the input layer was the number of selected features in the previous step. The number of neurons in the hidden layer and regularization parameter were also experimentally optimized during the training process.

3.9. Performance Assessment for Lesion Detection

All optimal values for the parameters of the proposed method were obtained using the 281 images from the training set of the proprietary database. We obtained the final results using the test set of the proprietary database (283 images) and the test set of the DiaretDB1 database (61 images). For this purpose, two criteria were considered:

3.9.1. Pixel-Based Criterion

A lesion was considered correctly detected when at least one of its pixels was identified [17]. Based on this criterion, we calculated the pixel-based positive predictive value (PPV_p) and sensitivity (SE_p).

3.9.2. Image-Based Criterion

An image was considered pathological when a minimum number of pixels were detected as lesions [17]. In this work, this minimum value was set to 30 pixels [17,51]. Automatic detections covering fewer pixels were interpreted as noise, since they represent a very small fraction of the image (less than 0.001%). Based on the image-based criterion, the average sensitivity (SE_i), specificity (SP_i), and accuracy (ACC_i) were computed.
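Both criteria are straightforward to compute from binary masks. In the sketch below, a ground-truth lesion counts as detected when at least one of its pixels is marked, and a detected connected region counts as a false positive when it overlaps no lesion; this region bookkeeping is our interpretation of the criterion:

```python
# Sketch of both evaluation criteria over per-image binary masks.
import numpy as np
from scipy.ndimage import label

def pixel_based_counts(detected, truth):
    """A lesion is a TP if >= 1 of its pixels is detected; a detected
    connected region is an FP if it overlaps no ground-truth lesion."""
    lesion_lbl, n_lesions = label(truth)
    region_lbl, n_regions = label(detected)
    tp = np.unique(lesion_lbl[detected & (lesion_lbl > 0)]).size
    fn = n_lesions - tp
    fp = n_regions - np.unique(region_lbl[truth & (region_lbl > 0)]).size
    return tp, fp, fn   # SE_p = tp/(tp+fn), PPV_p = tp/(tp+fp)

def image_based_metrics(detections, truths, min_pixels=30):
    """SE_i, SP_i, ACC_i; an image is pathological if >= 30 lesion pixels."""
    pred = np.array([d.sum() >= min_pixels for d in detections])
    true = np.array([t.any() for t in truths])
    tp, tn = (pred & true).sum(), (~pred & ~true).sum()
    fp, fn = (pred & ~true).sum(), (~pred & true).sum()
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / pred.size
```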

4. Results

4.1. Red Lesion Detection

We extracted 4889 RL candidates from the training set. Only 2029 of them were true RLs. We randomly selected another 2029 non-RL regions to balance the two classes. The features extracted over the RL candidates were normalized (mean = 0, standard deviation = 1) to improve the classification results [40].

4.1.1. MLP Configuration on the Training Set

We experimented with the number of hidden neurons in the range [1:1:100] and varied the regularization parameter in the range [0:0.1:1]. For this task, we applied 10-fold cross-validation exclusively on the training set of the proprietary database, which helps guard against overfitting when selecting hyperparameters. The chosen values for those parameters were 51 and 0.5, respectively, since they maximized the average accuracy over the validation folds (see Figure 6).
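This sweep can be reproduced with a standard grid search; the pipeline below keeps feature standardization inside each fold so no test information leaks into the scaling. Mapping the paper's regularization parameter onto scikit-learn's alpha, and the solver choice, are assumptions:

```python
# Sketch of the hyperparameter sweep with 10-fold cross-validation on the
# training set only. The full grid (100 x 11 settings) is exhaustive and
# correspondingly slow.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import GridSearchCV

pipeline = make_pipeline(StandardScaler(),
                         MLPClassifier(activation='tanh', solver='lbfgs',
                                       max_iter=1000))
param_grid = {
    'mlpclassifier__hidden_layer_sizes': [(n,) for n in range(1, 101)],
    'mlpclassifier__alpha': np.linspace(0.0, 1.0, 11),
}
search = GridSearchCV(pipeline, param_grid, cv=10, scoring='accuracy')
# search.fit(X_train, y_train)   # balanced RL candidates from the training set
# search.best_params_            # here: 51 hidden neurons, alpha = 0.5
```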

4.1.2. Red Lesion Detection on the Test Set

The results for RL detection in terms of the pixel-based criterion and image-based criterion using both the proprietary database and the public database can be seen in Table 2. These results were obtained by applying the criteria in Section 3.9 and show the performance of the complete algorithm.

4.2. Exudate Detection

We extracted 4782 EX candidates from the training set. Only 2072 of them were true EXs. Thus, we randomly selected another 2072 non-EX regions to balance the two classes. The features extracted over the EX candidates were also normalized (mean = 0, standard deviation = 1) to improve the classification results [40].

4.2.1. MLP Configuration on the Training Set

As for the RL classification, we varied the number of hidden neurons in the range [1:1:100] and the regularization parameter in the range [0:0.1:1]. We applied 10-fold cross-validation exclusively on the training set of the proprietary database to control overfitting. The chosen values for those parameters were 55 and 0.4, respectively, since they maximized the average accuracy over the validation folds (see Figure 7).

4.2.2. Exudate Detection on the Test Set

The results for EX detection in terms of pixel-based criterion and image-based criterion using both the proprietary database and the public database are presented in Table 3. These results were obtained by applying the criteria in Section 3.9 and show the performance of the complete algorithm.

5. Discussion

In this study, we have proposed automatic methods for the detection of DR-related retinal lesions. The fundus image was decomposed into various layers representing different structures of the retina, which is the main contribution of this paper. Among these layers, the lesion candidates, the choroidal vasculature visible in tigroid retinas, and the reflective features were included, all of which proved useful for the classification of retinal lesions. To the best of our knowledge, no previous studies have decomposed the image by separating relevant retinal structures, such as the choroidal vessels and the reflective features, to detect RLs and EXs.
The proposed method for retinal lesion detection was evaluated on a set of 283 fundus images. Among them, 120 showed RLs, 92 showed EXs, and 76 images showed both types of lesions. The proprietary database was very heterogeneous, showing variations in color, luminosity, contrast, and quality among images. In the same way, variable lesions in terms of appearance and size could be found. The method was also evaluated on the test set of the public database DiaretDB1, composed of 61 images. The results for the detection of RLs and EXs were measured using a pixel-based criterion and an image-based criterion. Results can be seen in Table 2 and Table 3.
All of these results can be compared with those obtained in previous studies according to the image-based criterion, as shown in Table 4 and Table 5. However, comparisons should be made with caution, since the databases and the performance measures vary among studies. We found four methods evaluated on the DiaretDB1 database that allow a direct comparison with the proposed method for RL detection. Jaafar et al. [52] obtained a high SE_i = 98.80%, yet an SP_i = 86.20% lower than ours (SP_i = 91.67%). In addition, they tested their method using the database DiaretDB0 together with DiaretDB1. Roychowdhury et al. [53] obtained SP_i = 93.73%, but their SE_i = 75.50% was low. In [16], SP_i = 91.67% was obtained. However, our value of SE_i (88.00%) improves their SE_i (83.30%). Table 4 also shows that the proposed method achieves better results than our previous work [17]. Regarding EX detection, we also found several methods assessed using the DiaretDB1 database. Walter et al. [27] obtained SE_i = 86.00% and SP_i = 69.00%. In [54], values of SE_i = 92.00% and SP_i = 68.00% were obtained. Liu et al. [55] achieved SE_i = 83.00% and SP_i = 75.00%. The method proposed in [32] showed SE_i = 88.00% and SP_i = 95.00%. Kaur and Mittal [59] obtained SE_i = 91.00% and SP_i = 94.00%. Finally, the work of Adem [33] presented high values of SE_i = 99.20% and SP_i = 97.97%. The value of SE_i achieved with our method (95.00%) is higher than those obtained in previous studies, with only one exception [33]. It should be noted that the test set in [33] was composed of images from the DiaretDB0 and DRIMDB databases in addition to DiaretDB1. Moreover, the training set of the DiaretDB1 database was used in the training phase, and the appearance of these images is similar to the ones in the test set of the same database. Therefore, this could make obtaining good results easier. When comparing our results with previous approaches, it should be noted that the proposed method was developed using only images from the proprietary database. This dataset differs from the public database DiaretDB1 in several aspects. Firstly, the images in our database have a higher resolution. Secondly, they were captured using a different protocol. Thirdly, the FOV in the images of the proprietary database is 45°, while the FOV in the images of DiaretDB1 is 50°. In addition, the DiaretDB1 images were selected considering different quality criteria [37]. In spite of these differences, the results on the test set of DiaretDB1 prove the robustness of the proposed method.
Our study also has some limitations that should be mentioned. First, segmenting the blood vessels as an independent stage has some drawbacks. When an RL is detected as part of the vascular network, it is discarded as a possible lesion, regardless of the later stages. To avoid this problem, the elimination of the blood vessel segments could be addressed in the last stage using supervised classification. Second, some parameters of the method were empirically set. However, the image normalization achieved in the preprocessing stage makes the values of those parameters effective for any input image. Furthermore, the values of these parameters are not critical, and performance is not significantly affected as long as they stay around the selected values: we noticed that small deviations hardly changed the output. This is especially true for the saturation (S) channel of HSV. Specifically, the hue (H) and value (V) thresholds could deviate within [−0.03, 0.03] while producing a similar segmentation, and the saturation thresholds could deviate within [−0.05, 0.05] while producing almost the same result. Third, the classification stage is based on a set of handcrafted features. Even though they have proven to represent lesions adequately with a moderate database size, we intend to explore deep-learning-based approaches in future studies. These methods will also require a larger database for proper training.

6. Conclusions

In this work, we have proposed a method for the detection of RLs and EXs in fundus images. The method was developed based on the cues ophthalmologists use to identify the different structures of the retina. This allowed us to study the relationship between the lesions and other structures, such as the choroidal vessels and the reflective features. The layers extracted using our decomposition method have proven useful to detect DR-related retinal lesions. Our results suggest that the proposed method could be used as part of an automatic DR screening system. Thus, it could be a diagnostic aid for the early detection of DR, reducing the workload of specialists and improving the management of diabetic patients.

Author Contributions

Conceptualization, R.R.-O., M.G., and R.H.; data curation, J.O.-P. and M.I.L.-G.; formal analysis, R.R.-O. and M.G.; funding acquisition, R.R.-O. and R.H.; investigation, R.R.-O.; methodology, R.R.-O.; project administration, R.H.; software, R.R.-O.; supervision, M.G.; validation, J.O.-P. and M.I.L.-G.; visualization, R.R.-O.; writing—original draft, R.R.-O.; writing—review and editing, M.G., J.O.-P., M.I.L.-G., and R.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Ministerio de Ciencia, Innovación y Universidades [DPI2017-84280-R, PGC2018-098214-A-I00]; the European Regional Development Fund [DPI2017-84280-R, PGC2018-098214-A-I00, 0378_AD_EEGWA_2_P]; the European Commission [0378_AD_EEGWA_2_P]; Centro de Investigación Biomédica en Red de Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN) [CB19/01/00012]; Junta de Castilla y León (EDU/1100/2017); and the European Social Fund (EDU/1100/2017).

Conflicts of Interest

The authors declare no conflict of interest.

Ethical Approval

This study was approved by the Ethics Committee of the Hospital Clínico Universitario de Valladolid, Valladolid, Spain.

References

1. Abramoff, M.D.; Garvin, M.K.; Sonka, M. Retinal imaging and image analysis. IEEE Rev. Biomed. Eng. 2010, 3, 169–208.
2. Kar, S.S.; Maity, S.P. Automatic Detection of Retinal Lesions for Screening of Diabetic Retinopathy. IEEE Trans. Biomed. Eng. 2018, 65, 608–618.
3. Das, T.; Raman, R.; Ramasamy, K.; Rani, P.K. Telemedicine in diabetic retinopathy: Current status and future directions. Middle East Afr. J. Ophthalmol. 2015, 22, 174–178.
4. Mookiah, M.R.K.; Acharya, U.R.; Chua, C.K.; Lim, C.M.; Ng, E.Y.K.; Laude, A. Computer-aided diagnosis of diabetic retinopathy: A review. Comput. Biol. Med. 2013, 43, 2136–2155.
5. Niemeijer, M.; Abràmoff, M.D.; van Ginneken, B. Image structure clustering for image quality verification of color retina images in diabetic retinopathy screening. Med. Image Anal. 2006, 10, 888–898.
6. Xiao, D.; Bhuiyan, A.; Frost, S.; Vignarajan, J.; Tay-Kearney, M.L.; Kanagasingam, Y. Major automatic diabetic retinopathy screening systems and related core algorithms: A review. Mach. Vis. Appl. 2019, 30, 423–446.
7. Abràmoff, M.D.; Folk, J.C.; Han, D.P.; Walker, J.D.; Williams, D.F.; Russell, S.R.; Massin, P.; Cochener, B.; Gain, P.; Tang, L.; et al. Automated analysis of retinal images for detection of referable diabetic retinopathy. JAMA Ophthalmol. 2013, 131, 351–357.
8. Fleming, A.D.; Philip, S.; Goatman, K.A.; Olson, J.A.; Sharp, P.F. Automated microaneurysm detection using local contrast normalization and local vessel detection. IEEE Trans. Med. Imaging 2006, 25, 1223–1232.
9. Quellec, G.; Lamard, M.; Josselin, P.M.; Cazuguel, G.; Cochener, B.; Roux, C. Optimal Wavelet Transform for the Detection of Microaneurysms in Retina Photographs. IEEE Trans. Med. Imaging 2008, 27, 1230–1241.
10. Wu, B.; Zhu, W.; Shi, F.; Zhu, S.; Chen, X. Automatic detection of microaneurysms in retinal fundus images. Comput. Med. Imaging Graph. 2017, 55, 106–112.
11. Lazar, I.; Hajdu, A. Retinal Microaneurysm Detection Through Local Rotating Cross-Section Profile Analysis. IEEE Trans. Med. Imaging 2013, 32, 400–407.
12. Hatanaka, Y.; Nakagawa, T.; Hayashi, Y.; Kakogawa, M.; Sawada, A.; Kawase, K.; Hara, T.; Fujita, H. Improvement of automatic hemorrhage detection methods using brightness correction on fundus images. In Proceedings of Medical Imaging 2008: Computer-Aided Diagnosis; Giger, M.L., Karssemeijer, N., Eds.; SPIE: San Diego, CA, USA, 2008; Volume 6915.
13. Fleming, A.D.; Goatman, K.A.; Williams, G.J.; Philip, S.; Sharp, P.F.; Olson, J.A. Automated detection of blot haemorrhages as a sign of referable diabetic retinopathy. In Proceedings of the Medical Image Understanding and Analysis (MIUA 2008), Dundee, Scotland, UK, 2–3 July 2008; pp. 235–239.
14. Zhang, X.; Chutatape, O. A SVM approach for detection of hemorrhages in background diabetic retinopathy. In Proceedings of the 2005 IEEE International Joint Conference on Neural Networks, Montreal, QC, Canada, 31 July–4 August 2005; pp. 2435–2440.
15. Seoud, L.; Hurtut, T.; Chelbi, J.; Cheriet, F.; Langlois, J.M.P. Red Lesion Detection Using Dynamic Shape Features for Diabetic Retinopathy Screening. IEEE Trans. Med. Imaging 2016, 35, 1116–1126.
16. Zhou, W.; Wu, C.; Chen, D.; Wang, Z.; Yi, Y.; Du, W. Automated Detection of Red Lesions Using Superpixel Multichannel Multifeature. Comput. Math. Methods Med. 2017, 2017, 1–13.
17. Romero-Oraá, R.; Jiménez-García, J.; García, M.; López-Gálvez, M.I.; Oraá-Pérez, J.; Hornero, R. Entropy rate superpixel classification for automatic red lesion detection in fundus images. Entropy 2019, 21, 417.
18. Srivastava, R.; Duan, L.; Wong, D.W.K.; Liu, J.; Wong, T.Y. Detecting retinal microaneurysms and hemorrhages with robustness to the presence of blood vessels. Comput. Methods Programs Biomed. 2017, 138, 83–91.
19. Orlando, J.I.; Prokofyeva, E.; del Fresno, M.; Blaschko, M.B. An ensemble deep learning based approach for red lesion detection in fundus images. Comput. Methods Programs Biomed. 2018, 153, 115–127.
20. Abràmoff, M.D.; Lou, Y.; Erginay, A.; Clarida, W.; Amelon, R.; Folk, J.C.; Niemeijer, M. Improved Automated Detection of Diabetic Retinopathy on a Publicly Available Dataset Through Integration of Deep Learning. Investig. Ophthalmol. Vis. Sci. 2016, 57, 5200.
21. Lam, C.; Yu, C.; Huang, L.; Rubin, D. Retinal lesion detection with deep learning using image patches. Investig. Ophthalmol. Vis. Sci. 2018, 59, 590–596.
22. Osareh, A.; Shadgar, B.; Markham, R. A computational-intelligence-based approach for detection of exudates in diabetic retinopathy images. IEEE Trans. Inf. Technol. Biomed. 2009, 13, 535–545.
23. Sopharak, A.; Uyyanonvara, B.; Barman, S. Automatic exudate detection for diabetic retinopathy screening. ScienceAsia 2009, 35, 80–88.
24. Hsu, W.; Pallawala, P.M.D.S.; Lee, M.L.; Eong, K.-G.A. The role of domain knowledge in the detection of retinal hard exudates. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), Kauai, HI, USA, 8–14 December 2001.
25. Imani, E.; Pourreza, H.R. A novel method for retinal exudate segmentation using signal separation algorithm. Comput. Methods Programs Biomed. 2016, 133, 195–205.
26. Sánchez, C.I.; García, M.; Mayo, A.; López, M.I.; Hornero, R. Retinal image analysis based on mixture models to detect hard exudates. Med. Image Anal. 2009, 13, 650–658.
27. Walter, T.; Klein, J.-C.; Massin, P.; Erginay, A. A contribution of image processing to the diagnosis of diabetic retinopathy—detection of exudates in color fundus images of the human retina. IEEE Trans. Med. Imaging 2002, 21, 1236–1243.
28. Sánchez, C.I.; Hornero, R.; López, M.I.; Aboy, M.; Poza, J.; Abásolo, D. A novel automatic image processing algorithm for detection of hard exudates based on retinal image analysis. Med. Eng. Phys. 2008, 30, 350–357.
29. Niemeijer, M.; van Ginneken, B.; Russell, S.R.; Suttorp-Schulten, M.S.A.; Abràmoff, M.D. Automated detection and differentiation of drusen, exudates, and cotton-wool spots in digital color fundus photographs for diabetic retinopathy diagnosis. Investig. Ophthalmol. Vis. Sci. 2007, 48, 2260–2267.
30. Pereira, C.; Gonçalves, L.; Ferreira, M. Exudate segmentation in fundus images using an ant colony optimization approach. Inf. Sci. 2015, 296, 14–24.
31. Theera-Umpon, N.; Poonkasem, I.; Auephanwiriyakul, S.; Patikulsila, D. Hard exudate detection in retinal fundus images using supervised learning. Neural Comput. Appl. 2019, 32, 13079–13096.
32. Zhou, W.; Wu, C.; Yi, Y.; Du, W. Automatic Detection of Exudates in Digital Color Fundus Images Using Superpixel Multi-Feature Classification. IEEE Access 2017, 5, 17077–17088.
33. Adem, K. Exudate detection for diabetic retinopathy with circular Hough transformation and convolutional neural networks. Expert Syst. Appl. 2018, 114, 289–295.
34. Prentašić, P.; Lončarić, S. Detection of exudates in fundus photographs using deep neural networks and anatomical landmark detection fusion. Comput. Methods Programs Biomed. 2016, 137, 281–292.
35. Khojasteh, P.; Passos Júnior, L.A.; Carvalho, T.; Rezende, E.; Aliahmad, B.; Papa, J.P.; Kumar, D.K. Exudate detection in fundus images using deeply-learnable features. Comput. Biol. Med. 2019, 104, 62–69.
36. Guo, S.; Wang, K.; Kang, H.; Liu, T.; Gao, Y.; Li, T. Bin loss for hard exudates segmentation in fundus images. Neurocomputing 2019, 392, 314–324.
37. Kauppi, T.; Kalesnykiene, V.; Kamarainen, J.-K.; Lensu, L.; Sorri, I.; Raninen, A.; Voutilainen, R.; Uusitalo, H.; Kalviainen, H.; Pietila, J. DIARETDB1 diabetic retinopathy database and evaluation protocol. In Proceedings of the British Machine Vision Conference, Coventry, UK, 10–13 September 2007.
38. National Service Framework for Diabetes: Standards; Department of Health: London, UK, 2002.
39. Yu, L.; Liu, H. Efficient Feature Selection via Analysis of Relevance and Redundancy. J. Mach. Learn. Res. 2004, 5, 1205–1224.
40. Bishop, C. Neural Networks for Pattern Recognition, 1st ed.; Oxford University Press: New York, NY, USA, 1995; ISBN 9780198538646.
41. Foracchia, M.; Grisan, E.; Ruggeri, A. Luminosity and contrast normalization in retinal images. Med. Image Anal. 2005, 9, 179–190.
42. Romero-Oraá, R.; García, M.; Oraá-Pérez, J.; López, M.I.; Hornero, R. A robust method for the automatic location of the optic disc and the fovea in fundus images. Comput. Methods Programs Biomed. 2020, 196, 105599.
43. Niemeijer, M.; Abràmoff, M.D.; van Ginneken, B. Fast detection of the optic disc and fovea in color fundus photographs. Med. Image Anal. 2009, 13, 859–870.
44. Hsiao, H.K.; Liu, C.C.; Yu, C.Y.; Kuo, S.W.; Yu, S.S. A novel optic disc detection scheme on retinal images. Expert Syst. Appl. 2012, 39, 10600–10606.
45. Lyu, X.; Li, H.; Zhen, Y.; Ji, X.; Zhang, S. Deep tessellated retinal image detection using Convolutional Neural Networks. Proc. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. 2017, 676–680.
46. Gonzalez, R.C.; Woods, R.E. Digital Image Processing; Dorling Kindersley: London, UK, 2009; ISBN 8131726959.
47. Giancardo, L.; Chaum, E.; Karnowski, T.P.; Meriaudeau, F.; Tobin, K.W.; Li, Y. Bright retinal lesions detection using color fundus images containing reflective features. IFMBE Proc. 2009, 25, 292–295.
48. García, M.; Sánchez, C.I.; López, M.I.; Hornero, R. Automatic Detection of Red Lesions in Retinal Images Using a Multilayer Perceptron Neural Network. In Proceedings of the 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vancouver, BC, Canada, 20–25 August 2008; pp. 5425–5428.
49. Huang, G.-B.; Chen, Y.-Q.; Babri, H.A. Classification ability of single hidden layer feedforward neural networks. IEEE Trans. Neural Netw. 2000, 11, 799–801.
50. García, M.; Sánchez, C.I.; López, M.I.; Abásolo, D.; Hornero, R. Neural network based detection of hard exudates in retinal images. Comput. Methods Programs Biomed. 2009, 93, 9–19.
51. García, M.; López, M.I.; Álvarez, D.; Hornero, R. Assessment of four neural network based classifiers to automatically detect red lesions in retinal images. Med. Eng. Phys. 2010, 32, 1085–1093.
52. Jaafar, H.F.; Nandi, A.K.; Al-Nuaimy, W. Automated detection of red lesions from digital colour fundus photographs. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS), Boston, MA, USA, 30 August–3 September 2011; pp. 6232–6235.
53. Roychowdhury, S.; Koozekanani, D.D.; Parhi, K.K. Screening fundus images for diabetic retinopathy. In Proceedings of the Conference Record—Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 4–7 November 2012; pp. 1641–1645.
54. Harangi, B.; Hajdu, A. Automatic exudate detection by fusing multiple active contours and regionwise classification. Comput. Biol. Med. 2014, 54, 156–171.
55. Liu, Q.; Zou, B.; Chen, J.; Ke, W.; Yue, K.; Chen, Z.; Zhao, G. A location-to-segmentation strategy for automatic exudate segmentation in colour retinal fundus images. Comput. Med. Imaging Graph. 2016, 55, 78–86.
56. Niemeijer, M.; van Ginneken, B.; Staal, J.; Suttorp-Schulten, M.S.A.; Abràmoff, M.D. Automatic detection of red lesions in digital color fundus photographs. IEEE Trans. Med. Imaging 2005, 24, 584–592.
57. Grisan, E.; Ruggeri, A. A hierarchical Bayesian classification for non-vascular lesions detection in fundus images. In Proceedings of the 3rd European Medical and Biological Engineering Conference, Prague, Czech Republic, 20–25 November 2005; Volume 11.
58. Sánchez, C.I.; Niemeijer, M.; Dumitrescu, A.V.; Suttorp-Schulten, M.S.A.; Abràmoff, M.D.; van Ginneken, B. Evaluation of a Computer-Aided Diagnosis System for Diabetic Retinopathy Screening on Public Data. Investig. Ophthalmol. Vis. Sci. 2011, 52, 4866.
59. Kaur, J.; Mittal, D. A generalized method for the segmentation of exudates from pathological retinal fundus images. Biocybern. Biomed. Eng. 2018, 38, 27–53.
Figure 1. Diagram of the proposed method. (1) Preprocessing. (2) Retinal background extraction. (3) Vessel segmentation, optic disc location, and fovea location. (4) Layer decomposition. (5) Feature extraction and selection. (6) Multilayer perceptron (MLP) classification.
Figure 2. Preprocessing stage. (a) Original image. (b) Preprocessed image, I_prep.
Figure 3. Background extraction stage. (a) Preprocessed image. (b) Estimated background, I_bg. (c) Estimated background preserving dark structures, I_bg_dark. (d) Estimated background preserving bright structures, I_bg_bri.
Figure 4. Red lesion candidate segmentation. (a) Image I_dark. (b) Image I_dark2. (c) Image L_chor_dark. (d) Image L_rl_cand. These images are shown with enhanced contrast for easier readability.
Figure 5. Exudate candidate segmentation. (a) Image I_bri. (b) Image L_chor_bri. (c) Image L_bm. (d) Image L_ex_cand. These images are shown with enhanced contrast for easier readability.
Figure 6. Average accuracy for RL classification over the validation set, obtained during MLP training while varying the number of hidden neurons and the regularization parameter.
Figure 7. Average accuracy for EX classification over the validation set, obtained during MLP training while varying the number of hidden neurons and the regularization parameter.
Table 1. Extracted features for lesion classification.

| Num. | Description | Selected for Red Lesion (RL) Detection | Selected for Exudate (EX) Detection |
|---|---|---|---|
| 1 | Area of the region | - | - |
| 2 | Width of the bounding box (smallest rectangle containing the region) | - | - |
| 3 | Height of the bounding box | - | - |
| 4 | Area of the smallest convex hull (smallest convex polygon that can contain the region) | - | - |
| 5 | Eccentricity of the ellipse that has the same second moments as the region | 5 | 5 |
| 6 | Number of holes in the region | - | - |
| 7 | Ratio of pixels in the region to pixels in the total bounding box | - | 7 |
| 8 | Length of the major axis of the ellipse that has the same normalized second central moments as the region | - | - |
| 9 | Length of the minor axis of the ellipse that has the same normalized second central moments as the region | - | - |
| 10 | Distance around the boundary of the region (perimeter length) | - | - |
| 11 | Proportion of the pixels in the convex hull that are also in the region (solidity) | 11 | - |
| 12–14 | Mean of the pixels inside the region computed in the Red-Green-Blue (RGB) channels of the image I_prep | 13 | - |
| 15–17 | Median of the pixels inside the region computed in the RGB channels of the image I_prep | - | 17 |
| 18–20 | Standard deviation of the pixels inside the region computed in the RGB channels of the image I_prep | 18, 19 | 18–20 |
| 21–23 | Entropy of the pixels inside the region computed in the RGB channels of the image I_prep | 22, 23 | 21–23 |
| 24–26 | Mean of the pixels inside the region computed in the Hue-Saturation-Value (HSV) channels of the image L_rl_cand/L_ex_cand | 24, 26 | 26 |
| 27–29 | Median of the pixels inside the region computed in the HSV channels of the image L_rl_cand/L_ex_cand | 28, 29 | 27, 29 |
| 30–32 | Standard deviation of the pixels inside the region computed in the HSV channels of the image L_rl_cand/L_ex_cand | 32 | 30, 32 |
| 33–35 | Entropy of the pixels inside the region computed in the HSV channels of the image L_rl_cand/L_ex_cand | 35 | 34, 35 |
| 36–38 | Mean of the pixels inside a circle with radius R_OD centered on the region, computed in the HSV channels of the image L_rl_cand | - | - |
| 39–41 | Median of the pixels inside a circle with radius R_OD centered on the region, computed in the HSV channels of the image L_rl_cand | - | - |
| 42–44 | Standard deviation of the pixels inside a circle with radius R_OD centered on the region, computed in the HSV channels of the image L_rl_cand | 44 | 42 |
| 45–47 | Entropy of the pixels inside a circle with radius R_OD centered on the region, computed in the HSV channels of the image L_rl_cand | - | - |
| 48–50 | Mean of the pixels inside a circle with radius R_OD centered on the region, computed in the HSV channels of the image L_chor_dark | - | - |
| 51–53 | Median of the pixels inside a circle with radius R_OD centered on the region, computed in the HSV channels of the image L_chor_dark | - | - |
| 54–56 | Standard deviation of the pixels inside a circle with radius R_OD centered on the region, computed in the HSV channels of the image L_chor_dark | - | - |
| 57–59 | Entropy of the pixels inside a circle with radius R_OD centered on the region, computed in the HSV channels of the image L_chor_dark | 59 | 57 |
| 60–62 | Mean of the pixels inside a circle with radius R_OD centered on the region, computed in the HSV channels of the image L_chor_bri | - | 62 |
| 63–65 | Median of the pixels inside a circle with radius R_OD centered on the region, computed in the HSV channels of the image L_chor_bri | 63–65 | 64, 65 |
| 66–68 | Standard deviation of the pixels inside a circle with radius R_OD centered on the region, computed in the HSV channels of the image L_chor_bri | 66 | - |
| 69–71 | Entropy of the pixels inside a circle with radius R_OD centered on the region, computed in the HSV channels of the image L_chor_bri | - | - |
| 72–74 | Mean of the pixels inside a circle with radius R_OD centered on the region, computed in the HSV channels of the image L_ex_cand | - | 73, 74 |
| 75–77 | Median of the pixels inside a circle with radius R_OD centered on the region, computed in the HSV channels of the image L_ex_cand | - | - |
| 78–80 | Standard deviation of the pixels inside a circle with radius R_OD centered on the region, computed in the HSV channels of the image L_ex_cand | - | 78–80 |
| 81–83 | Entropy of the pixels inside a circle with radius R_OD centered on the region, computed in the HSV channels of the image L_ex_cand | - | 83 |
| 84–86 | Mean of the pixels inside a circle with radius R_OD centered on the region, computed in the HSV channels of the image L_bm | - | - |
| 87–89 | Median of the pixels inside a circle with radius R_OD centered on the region, computed in the HSV channels of the image L_bm | - | - |
| 90–92 | Standard deviation of the pixels inside a circle with radius R_OD centered on the region, computed in the HSV channels of the image L_bm | 90 | 91 |
| 93–95 | Entropy of the pixels inside a circle with radius R_OD centered on the region, computed in the HSV channels of the image L_bm | - | 93 |
| 96 | Mean of all the pixels in the V channel of the image L_bm | 96 | 96 |
| 97 | Mean of the pixels on the border of the region after applying the Prewitt operator to the image I_prep | 97 | 97 |
| 98 | Mean of the pixels inside the region in the result of applying multiscale line operator filters | 98 | 98 |
| 99 | Distance to the center of the optic disc (OD) | - | 99 |
| 100 | Distance to the center of the fovea | 100 | 100 |
Table 2. Results for the detection of red lesions. SE_p and PPV_p follow the pixel-based criterion; SE_i, SP_i, and ACC_i follow the image-based criterion.

| Database | SE_p (%) | PPV_p (%) | SE_i (%) | SP_i (%) | ACC_i (%) |
|---|---|---|---|---|---|
| Proprietary | 82.25 | 91.07 | 85.00 | 90.80 | 88.34 |
| DiaretDB1 | 84.79 | 96.25 | 88.00 | 91.67 | 90.16 |
Table 3. Results for the detection of exudates. SE_p and PPV_p follow the pixel-based criterion; SE_i, SP_i, and ACC_i follow the image-based criterion.

| Database | SE_p (%) | PPV_p (%) | SE_i (%) | SP_i (%) | ACC_i (%) |
|---|---|---|---|---|---|
| Proprietary | 89.42 | 96.01 | 88.04 | 98.95 | 95.41 |
| DiaretDB1 | 91.65 | 98.59 | 95.00 | 90.24 | 91.80 |
Table 4. Comparison of some methods for red lesion detection. Nb.: number of test images.

| Method | Database | Nb. | SE_i (%) | SP_i (%) |
|---|---|---|---|---|
| Jaafar et al., 2011 [52] | DiaretDB1 | 219 | 98.80 | 86.20 |
| Roychowdhury et al., 2012 [53] | DiaretDB1 | 89 | 75.50 | 93.73 |
| Zhou et al., 2017a [16] | DiaretDB1 | 89 | 83.30 | 97.30 |
| Romero-Oraá et al., 2019 [17] | DiaretDB1 | 89 | 84.00 | 88.89 |
| García et al., 2010 [51] | Private | 115 | 100 | 56.00 |
| Niemeijer et al., 2005 [56] | Private | 100 | 100 | 87.00 |
| Grisan and Ruggeri, 2005 [57] | Private | 260 | 71.00 | 99.00 |
| Seoud et al., 2016 [15] | Messidor | 1200 | 83.30 | 97.30 |
| Orlando et al., 2018 [19] | Messidor | 1200 | 91.10 | 50.00 |
| Sánchez et al., 2011 [58] | Messidor | 1200 | 92.20 | 50.00 |
| Proposed method | DiaretDB1 | 89 | 88.00 | 91.67 |
Table 5. Comparison of some methods for exudate detection. Nb.: number of test images.

| Method | Database | Nb. | SE_i (%) | SP_i (%) |
|---|---|---|---|---|
| Walter et al., 2002 [27] | DiaretDB1 | 89 | 86.00 | 69.00 |
| Harangi and Hajdu, 2014 [54] | DiaretDB1 | 89 | 92.00 | 68.00 |
| Liu et al., 2016 [55] | DiaretDB1 | 89 | 83.00 | 75.00 |
| Zhou et al., 2017b [32] | DiaretDB1 | 89 | 88.00 | 95.00 |
| Kaur and Mittal, 2018 [59] | DiaretDB1 | 89 | 91.00 | 94.00 |
| Adem, 2018 [33] | DiaretDB1 | 89 | 99.20 | 97.97 |
| Proposed method | DiaretDB1 | 89 | 95.00 | 90.24 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

