Article

Hybrid Methods for Fundus Image Analysis for Diagnosis of Diabetic Retinopathy Development Stages Based on Fusion Features

by
Mohammed Alshahrani
1,*,
Mohammed Al-Jabbar
1,*,
Ebrahim Mohammed Senan
2,*,
Ibrahim Abdulrab Ahmed
1 and
Jamil Abdulhamid Mohammed Saif
3
1
Computer Department, Applied College, Najran University, Najran 66462, Saudi Arabia
2
Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana’a, Yemen
3
Computer and Information Systems Department, Applied College, University of Bisha, Bisha 67714, Saudi Arabia
*
Authors to whom correspondence should be addressed.
Diagnostics 2023, 13(17), 2783; https://doi.org/10.3390/diagnostics13172783
Submission received: 19 June 2023 / Revised: 22 August 2023 / Accepted: 24 August 2023 / Published: 28 August 2023
(This article belongs to the Special Issue Artificial Intelligence in Eye Disease, 3rd Edition)

Abstract
Diabetic retinopathy (DR) is a complication of diabetes that damages the delicate blood vessels of the retina and can lead to blindness. Ophthalmologists diagnose DR by imaging and examining the fundus, a process that takes a long time and requires skilled doctors to determine the stage of the disease. Automatic techniques using artificial intelligence therefore play an important role in analyzing fundus images to detect the stages of DR development. However, diagnosis with artificial intelligence techniques is a difficult, multi-stage task, and extracting representative features is essential for reaching satisfactory results. Convolutional neural network (CNN) models play an important and distinct role in extracting features with high accuracy. In this study, fundus images were used to detect the developmental stages of DR with two proposed methods, each comprising two systems. The first proposed method uses GoogLeNet with SVM and ResNet-18 with SVM. The second method uses feed-forward neural networks (FFNN) based on hybrid features extracted first by GoogLeNet combined with a fuzzy color histogram (FCH), gray-level co-occurrence matrix (GLCM), and local binary pattern (LBP), and then by ResNet-18 combined with FCH, GLCM, and LBP. All the proposed methods obtained superior results. The FFNN with the hybrid features of ResNet-18, FCH, GLCM, and LBP obtained 99.7% accuracy, 99.6% precision, 99.6% sensitivity, 100% specificity, and a 99.86% AUC.

1. Introduction

Diabetic retinopathy (DR), a retinal disease caused by diabetes, damages the retina and can lead to blindness. In DR, blood glucose rises above its normal level, causing fluid and blood to leak into the retina [1]. Proliferative diabetic retinopathy (PDR) is the more advanced stage of DR. It is characterized by the growth of new blood vessels in the retina; these new vessels are fragile and can leak blood or fluid, which can damage the retina and lead to vision loss. Non-proliferative diabetic retinopathy (NPDR) is the earliest stage of DR. It is characterized by small blood vessel changes in the retina. NPDR may not cause any symptoms, but it can lead to more serious complications if left untreated. NPDR is divided into mild, moderate, and severe stages.
Mild NPDR: This stage is characterized by the presence of microaneurysms, small bulges in the blood vessels of the retina. Microaneurysms may not cause any symptoms, but they can be a sign of early damage to the retina. Circular red dot marks appear at the end of each microaneurysm (MA) [2].
Moderate NPDR: This stage is characterized by the presence of microaneurysms as well as other changes in the blood vessels of the retina, such as hemorrhages and exudates. Hemorrhages are small spots of blood that leak from the blood vessels, while exudates are small areas of fluid that leak from the blood vessels. Moderate NPDR can cause some vision loss, such as blurred vision or dark spots in the field of vision. In the moderate stage, the red dots of the MAs expand into deeper layers, and flame hemorrhages occur in the retina.
Severe NPDR: This stage is characterized by the presence of many microaneurysms, hemorrhages, and exudates. Severe NPDR can cause significant vision loss, such as tunnel vision or complete blindness. Neovascularization occurs, with new vessels growing on the retina’s inner surface [3].
Figure 1 shows the stages of DR development with the biomarkers appearing at each stage. According to World Health Organization (WHO) reports, more than 422 million people were diagnosed with diabetes in 2014, and more than a third of them (35%) were affected by DR due to damage to the delicate blood vessels of the retina [4]. The number of people with DR is expected to increase considerably, to 592 million by 2025 [5]. The prevalence of DR also differs between groups of diabetics according to their classification: estimates in the United States indicate that 86% of people with type 1 diabetes and 40% of those with type 2 suffer from DR [6]. Vision loss varies from one person with diabetes to another according to the development stage of DR. About 10% of people with diabetes who do not yet have DR will develop some stage of NPDR, and patients with severe NPDR have a 75% risk of progressing to PDR. The stages of DR therefore underpin the international clinical DR disease severity scale [7].
Treatment options vary among the stages of DR. Diabetics without DR or with mild (stage 1) NPDR need only regular check-ups. Patients with moderate or severe NPDR require treatment with scatter laser photocoagulation or vitrectomy. Determining the stage of DR is therefore important for providing appropriate treatment. The diagnosis of NPDR involves fundus imaging and analysis by an ophthalmologist. In the PDR stage, new abnormal blood vessels form; these are fragile and can burst and bleed, leading to blindness. The best prognosis for effective treatment is in the NPDR stages, so regular fundus examination of diabetic patients is an effective clinical method for detecting abnormal blood vessels [8].
However, DR examinations require clinical knowledge, highly qualified and experienced ophthalmologists, and time to analyze fundus images and detect DR in its early stages. The proportion of diabetic patients worldwide increased from 4.7% to 8.5% between 1980 and 2014 [9,10]. Meanwhile, skilled ophthalmologists are rare and unevenly distributed; in 2012, there were 210,730 ophthalmologists globally (i.e., three for every 100,000 people) [11]. The gap between ophthalmologists and diabetics is especially wide in developing countries. Thus, automated diagnosis using artificial intelligence techniques is necessary for the detection of DR. In the literature, many methods have been implemented to detect the evolution of DR with CNN models and machine learning. The main and most difficult task lies in feature extraction: representative features must be extracted from fundus images to diagnose DR with maximum accuracy and effectiveness at reduced computational cost. In the proposed work, features are extracted by CNN models and classified by a machine learning algorithm. Features were extracted from retinal images using a hybrid approach that combines CNN features with handcrafted features from FCH, GLCM, and LBP. The CNN models learn features specific to DR, while the handcrafted algorithms extract general image features; combining the two yields features that represent each stage of DR and a model that classifies DR with high accuracy. This hybrid approach is novel, constitutes one of the main contributions of this study, and is a promising new method for DR detection and classification.
The main contributions of this work are as follows:
  • Enhancement of fundus images with average and Laplacian filters and merging the filters’ outputs to obtain an improved image.
  • Diagnosing fundus images using a hybrid technique that combines CNN models with the SVM algorithm.
  • Applying an FFNN based on the hybrid features of GoogLeNet combined with handcrafted features, and of ResNet-18 combined with handcrafted features.
The remainder of the paper is organized as follows. Section 2 summarizes several relevant previous studies on DR detection and classification. Section 3 describes the methodology and materials used to analyze fundus images and identify the stages of DR development. Section 4 presents the evaluation results achieved by the proposed systems. Section 5 compares the proposed systems’ performance with other state-of-the-art methods. Section 6 concludes the paper and discusses future work.

2. Related Work

We reviewed several studies that used fundus images for early DR detection; many researchers have applied various methods and achieved promising results. Our study distinguishes itself from previous works by extracting features through CNN models and combining them with handcrafted features, a novel and powerful way to represent the most critical characteristics of each stage of DR development, and by using a hybrid technique that combines CNN models with an SVM algorithm. The proposed methods yield superior results for detecting the developmental stages of DR.
Liu et al. proposed three models to improve fundus imaging diagnosis of DR. Base models such as EfficientNetB4, NASNetLarge, and InceptionResNetV2 were selected and trained with cross-entropy loss enhancement, and their outputs were used to train hybrid models. The models achieved an accuracy between 85.44% and 86.34% [12]. Qummar et al. introduced an ensemble of five CNN models for fundus imaging diagnosis by encoding rich features; the models obtained good results in distinguishing the stages of DR [13]. Gao et al. presented the Inception-V3 model for DR data set diagnostics; they standardized all fundus images through a pipeline, proposed a modification to the model, and evaluated its performance against other CNN models. The model achieved high performance in the diagnosis of DR [14]. Gayathri et al. developed a CNN model to extract features and diagnose them with machine learning; the J48 algorithm based on these features outperformed the other machine learning classifiers [15]. Wan et al. presented CNN models to diagnose a set of fundus images and overcome the challenges of segmentation, classification, and detection of DR. They trained and tested on the data set provided by Kaggle and achieved good results in the diagnosis of DR images [16]. Frank et al. introduced an improved hybrid system (IDx-DR-EU-2.1; IDx) for detecting DR before it leads to blindness; the system classifies DR and achieved superior results compared with the reference standard [17]. Romany et al. presented AlexNet for the diagnosis of DR. The model includes an image enhancement mechanism, segmentation of a region of interest based on connected-component analysis, feature extraction by linear discriminant analysis, and diagnosis by SVM. AlexNet showed better results with FC7 features, achieving an accuracy of 97.93%, while with PCA features it achieved an accuracy of 95.2% [18]. Shanthi et al. developed a CNN model to classify the Messidor data set: convolutional layers extracted features, and the ReLU layer optimized them. The model yielded good results in distinguishing the stages of DR development [19]. Tao et al. presented a CNN model to diagnose 13,673 images divided into six classes. The system segmented a region of interest to detect DR stages and evaluated a deep learning model on the DDR data set [20]. Martinez et al. presented ResNet50 to extract the most important representative features without image optimization. The ResNet50 model was evaluated on the MESSIDOR data set and achieved good results for each class [21]. Hemanth et al. presented a hybrid system of CNN and image processing for DR diagnosis; the hybrid method was evaluated on 400 images from the MESSIDOR data set and achieved good results for classifying fundus images [22]. Lifeng et al. presented a CNN model for analyzing fundus images of microaneurysms to detect PDR. The system classified fundus images as normal or abnormal by semantic segmentation, dividing image pixels based on aneurysm features [23]. Chenrui et al. designed a source-free transfer learning methodology for DR diagnosis with two modules, Collaborative Consistency and Target Generation: the target generation unit trains on data, and the collaborative consistency unit improves the methodology through the target unit’s images. The method was evaluated on the APTOS 2019 data set and achieved good results for diagnosis using fundus images [24].

3. Materials and Methods

This section describes the methods and materials used in the study to analyze and diagnose DR stages using fundus images. The study proposes two methods: first, diagnosis of fundus images using a hybrid technique that combines CNN models and the SVM algorithm; second, analysis of fundus images using an FFNN based on hybrid features extracted by the CNN models, reduced in dimensionality with the PCA algorithm, and combined with handcrafted features (Figure 2).

3.1. Data Set Description

The data set has been presented to researchers interested in computer-aided diagnosis of DR and is available on Kaggle [25]. The data set consists of 35,126 fundus images in 24-bit RGB color space with a resolution of 3500 × 3000 pixels. It has been classified by many experts and trainers into five classes: 25,810 normal images (73.48%), 2443 mild NPDR images (6.95%), 5292 moderate NPDR images (15.06%), 873 severe NPDR images (2.48%), and 708 proliferative PDR images (2.02%). Table 1 describes the interpretation of the biomarkers of each class. Because the normal class contains 25,810 images, 73.48% of the data set, it would bias the overall accuracy when evaluating the proposed systems. Considering the vast difference between the normal class and the DR-stage classes, we kept only 10% of the normal images, so the normal class contains 2581 images (21.69%). The resulting data set contains 11,897 images distributed as follows: 2581 normal images (21.69%), 2443 mild NPDR (20.53%), 5292 moderate NPDR (44.48%), 873 severe NPDR (7.43%), and 708 proliferative PDR (5.95%). Figure 3a shows randomly selected samples representing all classes.

3.2. Pre-Processing

3.2.1. Improvement of DR Data Set Images

Fundus images contain some artifacts due to the movement of the patient’s eye while taking the images; they also have poor contrast between the microvasculature and their surrounding parts. Noise leads to deterioration in the performance of the systems.
Pre-processing techniques are essential to remove the noise and increase the contrast of the microvasculature. The green channel of the RGB color system provides minute details of the microvasculature and other details of the retina. First, the average color of each RGB channel is calculated; color constancy is then applied to re-scale the images. Finally, two complementary filters are applied: the average filter, to remove noise and increase the contrast of the microvasculature, and the Laplacian filter, to reveal the edges of the microvasculature [26].
First, the average filter is set to a 4 × 4 pixel kernel. The average of the 16 pixels in the kernel is calculated and replaces the target pixel in the output image, and the process is repeated for every pixel in the image, as shown in Equation (1).
$$S_x = \frac{1}{L} \sum_{i=0}^{L-1} z_{x-i} \qquad (1)$$
where $S_x$ is the filtered output, $z_{x-i}$ are the input pixels within the filter window, and $L$ is the number of pixels in the filter.
Secondly, the Laplacian filter enhances the images by revealing the edges of the microvasculature and black spots. Equation (2) shows how the filtering mechanism works on the pixels of the retinal fundus image.
$$\nabla^2 f(x, y) = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} \qquad (2)$$
where $\nabla^2 f$ is the Laplacian, a second-order differential operator, and $x$, $y$ are the pixel coordinates in the matrix.
Finally, the two enhanced images are merged by means of the average filter and Laplacian to show the microvasculature more clearly and obtain the improved fundus images as in Equation (3).
$$\text{Final enhanced} = S_x - \nabla^2 f(x, y) \qquad (3)$$
Figure 3b shows a set of fundus images after the enhancement process. The same images in Figure 3a are displayed in Figure 3b after enhancement.
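The following sketch illustrates the enhancement pipeline described above in Python (the study itself was implemented in MATLAB). The 4 × 4 kernel follows the text; the grey-world color-constancy scaling and the subtraction used to merge the two filter outputs are assumptions, since the paper does not publish its code.

```python
# Illustrative sketch of the enhancement pipeline (Equations (1)-(3)).
# Assumes a fundus image loaded as an H x W x 3 uint8 RGB array.
import numpy as np
from scipy import ndimage

def enhance_fundus(rgb: np.ndarray) -> np.ndarray:
    img = rgb.astype(np.float64)
    # Grey-world color constancy: re-scale each channel to the common mean.
    channel_means = img.reshape(-1, 3).mean(axis=0)
    img *= channel_means.mean() / channel_means
    gray = img.mean(axis=2)                          # collapse to one channel
    smoothed = ndimage.uniform_filter(gray, size=4)  # 4 x 4 average filter, Eq. (1)
    edges = ndimage.laplace(smoothed)                # Laplacian edge response, Eq. (2)
    enhanced = smoothed - edges                      # assumed merge step, Eq. (3)
    return np.clip(enhanced, 0, 255).astype(np.uint8)
```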

3.2.2. Data Augmentation Method

The retinal fundus image data set comprises five classes with an imbalanced distribution: the moderate class constitutes 44.48% of the data set, while the proliferative class represents only 5.95%. This class imbalance could bias accuracy toward the majority class, a common challenge. Furthermore, preventing overfitting in deep learning models necessitates a substantial data set. To address these issues, a data augmentation technique was employed, which artificially augments images [27] by applying operations such as rotation, flipping, and shifting to images within the same data set, effectively increasing the size of each class by a different amount. This process rectifies the imbalance and yields a balanced data set. Specifically, the normal and mild classes were augmented by three images per image, the moderate class by one image per image, the severe class by 11 images per image, and the proliferative class by 13 images per image. Consequently, a balanced data set was achieved, as shown in Table 2.
Given CNN models’ requirement for extensive training data, the data augmentation process significantly enhances the models’ performance and improves the efficiency of diabetic retinopathy diagnosis.
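As a minimal sketch of this scheme, the snippet below generates a class-dependent number of extra copies per image using simple rotations, flips, and shifts. The per-class multipliers come from the text; the specific transforms are assumptions, since the paper does not list its augmentation parameters.

```python
import numpy as np

# Extra images generated per original image, per class (from the text).
MULTIPLIER = {"normal": 3, "mild": 3, "moderate": 1, "severe": 11, "proliferative": 13}

def augment(image: np.ndarray, n_copies: int) -> list:
    """Generate n_copies variants by cycling through simple transforms."""
    transforms = [
        lambda im: np.rot90(im, k=1),               # 90-degree rotation
        lambda im: np.rot90(im, k=2),               # 180-degree rotation
        lambda im: np.fliplr(im),                   # horizontal flip
        lambda im: np.flipud(im),                   # vertical flip
        lambda im: np.roll(im, shift=10, axis=0),   # small vertical shift
        lambda im: np.roll(im, shift=10, axis=1),   # small horizontal shift
    ]
    return [transforms[i % len(transforms)](image) for i in range(n_copies)]

severe_copies = augment(np.zeros((224, 224, 3), dtype=np.uint8), MULTIPLIER["severe"])
print(len(severe_copies))  # 11 augmented variants of one severe-class image
```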

3.3. Hybrid Techniques

The hybrid techniques consist of CNN models and the SVM algorithm. The first block comprises the GoogLeNet and ResNet-18 models, which extract features; the dimensionality of the features is then reduced with the PCA algorithm [28]. The SVM receives the features and classifies them with high accuracy and efficiency.
The proposed method has several advantages: it achieves promising results at a low computational cost compared with applying pre-trained CNN models end to end. The hybrid technique requires inexpensive computational resources, whereas full CNN models require expensive ones.

3.3.1. Deep Feature Extraction

The importance of CNN models lies in the many successive layers that form their core. Each layer in a CNN has a specific task, and its operation is integrated with the previous and subsequent layers. CNN layers extract features and train the model to classify new test samples. CNN models have the advantage of extracting high-resolution features at many levels across various layers [29]: one layer extracts color features, another geometric features, another texture and shape features, and so on.
ResNet-18 has 18 deep layers: convolutional layers organized into five blocks, ReLU activations, an average pooling layer, and a fully connected layer that maps each input image, represented by a feature vector, to one of the five classes; finally, a softmax activation function with five neurons produces the class scores. ResNet-18 contains layers with different numbers of neurons and more than 11.5 million parameters.
GoogLeNet has 27 layers, including pooling layers, and contains 7 million parameters.
Here, we will briefly discuss the layers used in the proposed system as follows:
Convolutional Layers: A CNN contains multiple convolutional layers, each with a specific task. Convolutional layers are the basis of CNNs, from which the networks take their name. They extract features accurately and efficiently, controlled by three parameters, each with a specific role: filter size, zero padding, and stride (p-step). The filter size varies from layer to layer and determines the region of the input image x(n) over which the filter f(n) is convolved, as shown in Equation (4). Zero padding preserves the size of the processed image relative to the original, and the filter moves across the image according to the stride value [30].
$$z(n) = (x * f)(n) = \int x(a)\, f(n - a)\, da \qquad (4)$$
where $f(n)$ is the filter, $x(n)$ is the input image, $*$ is the convolution operator, and $z(n)$ is the output image.
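To make the parameters concrete, here is a discrete counterpart of Equation (4) written out in plain numpy, with filter size, zero padding, and stride appearing explicitly. (Like most CNN frameworks, this computes cross-correlation; true convolution would flip the filter first.)

```python
import numpy as np

def conv2d(x: np.ndarray, f: np.ndarray, stride: int = 1, pad: int = 0) -> np.ndarray:
    x = np.pad(x, pad)                      # zero padding around the image
    k = f.shape[0]                          # filter size (assumed square)
    out_h = (x.shape[0] - k) // stride + 1
    out_w = (x.shape[1] - k) // stride + 1
    z = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = x[i*stride:i*stride + k, j*stride:j*stride + k]
            z[i, j] = np.sum(window * f)    # slide the filter over the image
    return z

edge_filter = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]])  # Laplacian kernel
print(conv2d(np.random.rand(8, 8), edge_filter, stride=1, pad=1).shape)  # (8, 8)
```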
Pooling layers are essential layers in CNN models. They reduce the dimensions of images and speed up computations. This is important because millions of parameters pass through convolutional layers, which can cause computational problems. Pooling layers solve this problem by reducing the image dimensions using average pooling and max pooling. Average pooling selects a specific part of the image according to the filter size, calculates its average, and then replaces the selected group with a single value (average of all values), as in Equation (5). Max pooling selects a specific part of the image according to the filter size, finds the maximum value, and then replaces the specified group with a single value (maximum value in the given group), as in Equation (6).
$$y(i, j) = \frac{1}{k^2} \sum_{m,n=1}^{k} f\big((i-1)S + m,\ (j-1)S + n\big) \qquad (5)$$
$$y(i, j) = \max_{m,n=1,\dots,k} f\big((i-1)S + m,\ (j-1)S + n\big) \qquad (6)$$
where $f$ denotes the pixels of the pooling window; $m$, $n$ index positions within the window; $k$ is the window size (the number of pixels selected in each group is $k^2$); and $S$ is the stride.
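A compact sketch of both pooling operations, assuming the window size equals the stride (S = k) and the input dimensions are divisible by k:

```python
import numpy as np

def pool(x: np.ndarray, k: int, mode: str = "avg") -> np.ndarray:
    h, w = x.shape[0] // k, x.shape[1] // k
    windows = x[:h*k, :w*k].reshape(h, k, w, k)   # split into k x k windows
    if mode == "avg":
        return windows.mean(axis=(1, 3))          # Equation (5)
    return windows.max(axis=(1, 3))               # Equation (6)

x = np.arange(16, dtype=float).reshape(4, 4)
print(pool(x, 2, "avg"))   # each output value is the mean of a 2 x 2 window
print(pool(x, 2, "max"))   # each output value is the max of a 2 x 2 window
```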

3.3.2. PCA Algorithm

PCA is a statistical method for reducing the dimensionality of data. It finds a set of principal components that explain the most variance in the data, ranked in order of decreasing variance, and reduces the dimensionality by keeping the first k components, where k is the desired feature dimension. In the context of DR, the features extracted by CNNs are high-dimensional, which makes classifiers difficult to train on them; PCA therefore reduces the dimensionality of the CNN features, making it easier for the models to learn the relationships between the features and the DR stage. Reducing the dimensionality in this way has three benefits: it improves accuracy by removing noise and irrelevant features, it shortens training time by removing redundant features and shrinking the feature vector, and it improves interpretability by limiting the features considered to the most predictive ones.
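A minimal sketch of this step with scikit-learn, using random stand-in data shaped like the feature matrices described later (11,897 images, 4096 deep features reduced to 1024):

```python
import numpy as np
from sklearn.decomposition import PCA

deep_features = np.random.rand(11897, 4096)   # stand-in for CNN feature vectors
pca = PCA(n_components=1024)                  # keep the first 1024 components
reduced = pca.fit_transform(deep_features)
print(reduced.shape)                          # (11897, 1024)
print(pca.explained_variance_ratio_.sum())    # variance retained by 1024 components
```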

3.3.3. SVM Classifier

In this section, the features extracted by the CNN models are classified by the SVM algorithm with high accuracy.
The supervised SVM algorithm solves many classification and regression problems. The SVM algorithm plots all data set points in an n-dimensional space (n is the number of features), where each coordinate represents a specific feature value. The algorithm considers many candidate separating hyperplanes and chooses the one that maximizes the margin between the data points. Its aim is to separate the features (data points) into classes by a hyperplane so that any new data point can be assigned to its appropriate class with high efficiency. The hyperplane is chosen using support vectors: points near or on the hyperplane that define the maximum margin separating the classes. The two types of SVM are linear and nonlinear [31].
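The classification stage could be sketched as follows with scikit-learn; the RBF kernel and C value are assumptions, since the paper states only that linear and nonlinear SVMs exist without naming its configuration:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((1000, 1024))       # stand-in for PCA-reduced feature vectors
y = rng.integers(0, 5, size=1000)  # five DR classes

# 80/20 split, mirroring Section 4.2.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=0)
svm = SVC(kernel="rbf", C=1.0)     # nonlinear SVM with a max-margin hyperplane
svm.fit(X_train, y_train)
print("test accuracy:", svm.score(X_test, y_test))
```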
Figure 4 shows a methodology consisting of two blocks: Firstly, GoogLeNet, and ResNet-18 models to extract features and reduce dimensionality using the PCA algorithm; secondly, the SVM for classification [32].

3.4. Features Combined with CNN and Handcrafted Features

This section discusses a hybrid method that fuses features extracted by CNN models with handcrafted features. All the features are combined and fed to the FFNN algorithm. The proposed method is characterized by novelty, high diagnostic accuracy, and low computational cost, requiring only mid-range computer hardware. It gives a highly representative description of each image in the DR data set, and thus promising results are achieved in assigning each image to its own class (DR severity).
Handcrafted features such as color, shape, texture, and geometry are important features to represent each image. Thus, combining features of CNN models with handcrafted features yields highly representative feature vectors for each image.
The methodology is as follows: All images are enhanced with the average and Laplacian filters. The enhanced fundus images are fed to the GoogLeNet and ResNet-18 models to extract features, which are stored in feature vectors. The feature vectors produced by the CNN models consist of 4096 features per fundus image [33]. Due to the high dimensionality, PCA is applied to reduce each vector to 1024 features, so the feature matrix becomes 11,897 × 1024.
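One way to reproduce the deep-feature step is shown below with PyTorch/torchvision (the study used MATLAB, so this sketch is an assumption, not the authors’ code). Note that torchvision’s ResNet-18 yields 512-dimensional pooled features rather than the 4096 stated in the text, so the feature dimension should be treated as model- and layer-dependent.

```python
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Identity()           # drop the classifier; keep pooled features
model.eval()

with torch.no_grad():
    batch = torch.rand(8, 3, 224, 224)   # stand-in for preprocessed fundus images
    features = model(batch)
print(features.shape)                    # torch.Size([8, 512])
```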
The essential texture and color features are extracted by the traditional algorithms fuzzy color histogram (FCH), gray-level co-occurrence matrix (GLCM), and local binary pattern (LBP) and merged into one feature vector [34]. The FCH algorithm extracts 16 color features for each fundus image, the GLCM algorithm extracts 13 texture features, and the LBP algorithm extracts 203 features that describe the binary texture. All the features are combined, so each feature vector has 232 features and the feature matrix size is 11,897 × 232.
The FCH method is selected because the microvasculature responds to the RGB color channels, especially the green channel, which provides fine details of the microvasculature [35]. FCH is a powerful algorithm for extracting color features based on the fuzzy membership of each color channel. The GLCM algorithm measures each central (target) pixel against its neighbors based on their distance and angle, capturing spatial and texture relationships: pixels with similar values indicate smooth textures, while pixels with differing values indicate rough textures [36]. The LBP efficiently extracts surface texture features: each central pixel is compared with its 24 neighbors through an operator applied to a 5 × 5 pixel neighborhood [37].
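A sketch of this stage with scikit-image is given below. The GLCM statistics and the 26-bin uniform LBP histogram follow the text; the fuzzy color histogram is approximated by a plain 16-bin hue histogram, since no standard library implements FCH directly.

```python
import numpy as np
from skimage.color import rgb2hsv
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

def handcrafted_features(rgb: np.ndarray) -> np.ndarray:
    gray = rgb.mean(axis=2).astype(np.uint8)
    # GLCM texture statistics (a subset of the 13 features mentioned).
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256, normed=True)
    glcm_feats = [graycoprops(glcm, p)[0, 0]
                  for p in ("contrast", "homogeneity", "energy", "correlation")]
    # LBP histogram over a 24-neighbour, radius-2 uniform pattern.
    lbp = local_binary_pattern(gray, P=24, R=2, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=26, range=(0, 26), density=True)
    # Stand-in for FCH: a 16-bin histogram of the hue channel.
    hue = rgb2hsv(rgb / 255.0)[..., 0]
    color_hist, _ = np.histogram(hue, bins=16, range=(0, 1), density=True)
    return np.concatenate([color_hist, glcm_feats, lbp_hist])

print(handcrafted_features(np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)).shape)
```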
The features extracted by GoogLeNet are fused with the handcrafted features into feature vectors, so that each image has 1256 features, and the FFNN classifier is fed the resulting 11,897 × 1256 feature matrix for classification.
Likewise, the features extracted by ResNet-18 are combined with the features collected by the traditional algorithms (FCH, GLCM, and LBP) into 1256-feature vectors, and the FFNN classifier is fed the resulting 11,897 × 1256 feature matrix.
The FFNN algorithm is a powerful tool for classifying and predicting from medical images; here, it was used to diagnose fundus images for the detection of DR. The network had 1256 input units, 15 hidden layers, and five output neurons, classifying each image into its appropriate DR class, as shown in Figure 5. The value of each neuron is computed from the values of the neurons in the previous layer weighted by their associated connections. The algorithm works iteratively, updating the weights in each iteration to minimize the error between the actual and expected outputs.
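The fusion and classification stages could be sketched as follows; scikit-learn’s MLPClassifier stands in for the FFNN, the 15 hidden layers mirror the text, and the layer width of 64 neurons is an assumption:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
cnn_feats = rng.random((1000, 1024))    # stand-in PCA-reduced CNN features
hand_feats = rng.random((1000, 232))    # stand-in FCH + GLCM + LBP features
X = np.hstack([cnn_feats, hand_feats])  # fused vectors: 1256 features per image
y = rng.integers(0, 5, size=1000)       # five DR classes

ffnn = MLPClassifier(hidden_layer_sizes=(64,) * 15, max_iter=300, random_state=0)
ffnn.fit(X, y)
print(ffnn.predict(X[:5]))              # predicted DR class for five images
```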
Figure 6 illustrates the methodology of the proposed method, which shows the integration of the features of GoogLeNet and ResNet-18 reduced by the PCA, then combined with handcrafted features. Finally, all the features are provided to the FFNN classifier.

4. Experimental Result

4.1. Evaluation Metrics

This study used two methods to diagnose the DR data set. The first method used a hybrid technique of CNN with SVM. The second used hybrid features extracted by CNN models (GoogLeNet and ResNet-18) together with handcrafted features. All the proposed systems produced a confusion matrix, a table that shows the performance of a classification model and contains all the data set samples from the testing phase. The performance of the systems was evaluated using Equations (7)–(11), whose inputs were obtained from the confusion matrix [38].
$$\text{Accuracy} = \frac{TN + TP}{TN + TP + FN + FP} \times 100\% \qquad (7)$$
$$\text{Precision} = \frac{TP}{TP + FP} \times 100\% \qquad (8)$$
$$\text{Sensitivity} = \frac{TP}{TP + FN} \times 100\% \qquad (9)$$
$$\text{Specificity} = \frac{TN}{TN + FP} \times 100\% \qquad (10)$$
$$\text{AUC} = \frac{\text{True Positive Rate}}{\text{False Positive Rate}} = \frac{\text{Sensitivity}}{\text{Specificity}} \times 100\% \qquad (11)$$
where TP is a DR retinal fundus sample correctly classified as a DR. TN is normal retinal fundus images correctly classified as non-DR. FP is a normal retinal fundus sample classified as DR. FN is a DR retinal fundus image classified as non-DR.
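As a worked sketch, the function below evaluates Equations (7)–(10) from the four confusion-matrix counts; the counts themselves are hypothetical:

```python
def metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn) * 100,
        "precision":   tp / (tp + fp) * 100,
        "sensitivity": tp / (tp + fn) * 100,
        "specificity": tn / (tn + fp) * 100,
    }

print(metrics(tp=950, tn=480, fp=5, fn=10))  # hypothetical test-phase counts
```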

4.2. Splitting Data Set

This work aims to detect the stages of DR development by various hybrid methods combining deep learning models and machine learning techniques, based on extracting a mixture of color and texture features. The data set contains 11,897 images divided into five classes: one class of normal fundus images and four classes representing the stages of DR development. The data set was divided into 80% for training and validation and 20% for testing. Table 3 describes the distribution of all five classes. All proposed methods were run on a PC with a 6th-generation Intel® Core i5 processor, 12 GB of RAM, and a 4 GB GPU. The systems were implemented in the MATLAB 2018b environment.
Table 4 describes the training time of the fundus image data set for diabetic retinopathy in all the proposed systems.

4.3. Results of CNN Models and Hybrid Techniques

This section discusses the results obtained by the pre-trained GoogLeNet and ResNet-18 models and by the hybrid systems of CNN and SVM. Table 5 shows the performance results of pre-trained GoogLeNet and ResNet-18 for image diagnostics for the detection of DR [39]. The two models achieved good results in diagnosing the data set for the detection of DR developmental stages: GoogLeNet attained 92.56% accuracy, 92.6% precision, 91.8% sensitivity, 98.2% specificity, and 97.42% AUC, while ResNet-18 obtained 91.47% accuracy, 91.38% precision, 90.2% sensitivity, 97.8% specificity, and 96.58% AUC.
Table 5 also shows the results of the GoogLeNet + SVM and ResNet-18 + SVM systems for diagnosing retinal images for the detection of DR. The GoogLeNet + SVM network attained 98.8% accuracy, 97.6% precision, 97.8% sensitivity, 100% specificity, and 98.92% AUC. In comparison, the ResNet-18 + SVM network obtained 98.9% accuracy, 98.8% precision, 98.2% sensitivity, 100% specificity, and 99.21% AUC.
In the hybrid systems of CNN and SVM, the GoogLeNet and ResNet-18 models extract features and store them in feature vectors, and the system reduces the feature dimensions with the PCA algorithm.
The hybrid systems attained good results in detecting the evolution of the DR stages. GoogLeNet + SVM attained 98.8% accuracy and 98.92% AUC; at the class level, the network reached an accuracy of 98.1% for normal, 99.2% for mild, 99.8% for moderate, 96.6% for severe, and 95.1% for proliferative (Figure 7).
ResNet-18 + SVM attained 98.9% accuracy and 99.21% AUC; at the class level, it reached an accuracy of 98.8% for normal, 98% for mild, 100% for moderate, 97.7% for severe, and 96.5% for proliferative (Figure 8).

4.4. Results of Hybrid Features of CNN and Handcrafted Features

This section evaluates the performance of the FFNN with hybrid features of CNN models and handcrafted features for diagnosing retinal fundus images to detect DR [40]. The hybrid feature vectors are fed to the FFNN classifier, which achieves superior results; the FFNN configuration was tuned via trial and error for the best performance. This section reviews various FFNN performance assessment tools for diagnosing the retinopathy data set.
The confusion matrix is the gold standard for evaluating system performance. In this study, the performance of the FFNN was evaluated based on the features of the CNN models and the handcrafted features.
Table 6 shows the implementation of the FFNN classifier based on the hybrid features of fundus image diagnostics for the detection of DR developmental stages. The network reached promising results due to the hybrid and diverse features of several algorithms.
Two configurations were tested, GoogLeNet-handcrafted-FFNN and ResNet-18-handcrafted-FFNN, and both achieved promising results. GoogLeNet-handcrafted-FFNN attained 99.6% accuracy, 99.4% precision, 99.2% sensitivity, 100% specificity, and 99.78% AUC. By contrast, ResNet-18-handcrafted-FFNN attained 99.7% accuracy, 99.6% precision, 99.6% sensitivity, 100% specificity, and 99.86% AUC.
Figure 9 describes the AUC and confusion matrix generated by GoogLeNet-handcrafted-FFNN, which attained an overall 99.6% accuracy and 99.78% AUC. It also attained superior accuracy for diagnosing each stage of DR development, where the network attained an accuracy of 99.8%, 99.4%, 99.8%, 100%, and 97.2% for the diagnoses of class 1 (normal), class 2 (mild), class 3 (moderate), class 4 (severe), and class 5 (proliferative) stages, respectively.
Figure 10 describes the AUC and confusion matrix generated by ResNet-18-handcrafted-FFNN, which attained an overall 99.7% accuracy and a 99.86% AUC. It also achieved superior accuracy in diagnosing each stage of DR development: the system attained an accuracy of 99.6%, 99.2%, 100%, 100%, and 98.6% for the diagnoses of class 1 (normal), class 2 (mild), class 3 (moderate), class 4 (severe), and class 5 (proliferative) stages, respectively.
In Figure 10b, each color represents one class in the data set.

5. Discussing the Performance of the Systems

In this study, two methods were discussed; each method has two systems (two experiments), and each experiment has different methods and materials. The study aimed to accurately diagnose the retinal fundus for detecting the developmental stages of DR. Since fundus images contain noise and low microvascular contrast, all images were enhanced. The imbalance and limited size of the data set led to poor system performance and biased accuracy toward the majority class, so data augmentation was used to balance the data set by increasing the number of images in each class by a different amount [41].
GoogLeNet and ResNet-18 were chosen because they produce features that combine well with handcrafted features.
The first method is a hybrid system of CNN (GoogLeNet and ResNet-18) and SVM: GoogLeNet + SVM and ResNet-18 + SVM reached accuracies of 98.8% and 98.9%, respectively.
The second proposed method uses an FFNN based on CNN features combined with handcrafted features. The features were extracted by the CNN models and then reduced by PCA; the resulting features were combined with the handcrafted features in two configurations, GoogLeNet-handcrafted-FFNN and ResNet-18-handcrafted-FFNN, which attained accuracies of 99.6% and 99.7%, respectively.
Table 7 describes the results of all methods for fundus image diagnostics for the detection of DR developmental stages. The table shows the overall accuracy of each technique and the accuracy of the diagnosis at the level of each stage of DR. For normal fundus images, the GoogLeNet-handcrafted-FFNN model attained 99.8%. GoogLeNet-handcrafted-FFNN reached 99.4% for the diagnosis of the mild stage. For the moderate stage, ResNet-18 + SVM and ResNet-18-handcrafted-FFNN attained 100%. FFNN reached 100% accuracy in diagnosing the severe stage based on the hybrid features. Finally, the ResNet-18-handcrafted-FFNN attained 98.6% accuracy for proliferative stage diagnosis.
Figure 11 presents the results of evaluating the proposed methods in this study to diagnose fundus images to detect stages of DR.
Table 8 compares the performance of all the methods proposed in this study. The ResNet-18-handcrafted-FFNN model outperforms the other proposed systems in all measures: accuracy, precision, sensitivity, and specificity.
Table 9 shows the results of previous studies related to diagnosing a fundus image data set for the detection of DR and compares them with the results of the proposed systems. Our proposed system is better than previous approaches in all measures. The previous systems attained an accuracy of between 65.2% and 97%, while the proposed system attained 99.7% accuracy. The previous systems attained a sensitivity between 64.2% and 98.48%, while the proposed system attained 99.6% sensitivity. The previous systems attained a specificity of between 66.2% and 98%, while the proposed system attained 100% specificity. The previous systems attained an AUC of between 92.72% and 97.3%, while the proposed system attained 99.86% AUC.

6. Conclusions

The rapid spread of diabetes and the growing number of people with diabetes worldwide require many highly trained ophthalmologists. DR develops through several stages, from mild to moderate to severe NPDR and finally PDR. If the initial stages are not diagnosed early, the disease progresses to PDR, which causes severe vision impairment and blindness. Treating DR in its early stages requires highly qualified experts and takes a long time, so systems using artificial intelligence have been developed to overcome the shortcomings of manual diagnosis. In this study, two proposed methods were developed, each with two experiments. The first method uses hybrid systems of CNN and SVM, which reached promising results in fundus imaging diagnostics for detecting the stages of DR. The second proposed method is an FFNN with CNN and handcrafted features. The ResNet-18-handcrafted-FFNN achieved an excellent performance, with 99.7% accuracy, 99.6% precision, 99.6% sensitivity, 100% specificity, and 99.86% AUC.
The limitations encountered include an unbalanced data set and an insufficient number of images for training; these limitations were mitigated by applying data augmentation.
In future work, the features of multiple CNN models will be combined and classified with ANN, FFNN, random forest, decision tree, and AdaBoost classifiers.

Author Contributions

Conceptualization, M.A., E.M.S., M.A.-J. and I.A.A.; methodology, M.A., E.M.S., M.A.-J. and J.A.M.S.; software, E.M.S., M.A. and M.A.-J.; validation, I.A.A., M.A.-J., E.M.S., M.A. and J.A.M.S.; formal analysis, M.A., I.A.A., M.A.-J. and E.M.S.; investigation, M.A.-J., E.M.S., M.A. and J.A.M.S.; resources, M.A., E.M.S. and M.A.-J.; data curation, E.M.S., M.A., I.A.A., J.A.M.S. and M.A.-J.; writing—original draft preparation, E.M.S.; writing—review and editing, M.A., M.A.-J. and J.A.M.S.; visualization, E.M.S., M.A.-J., I.A.A., M.A. and J.A.M.S.; supervision, M.A., M.A.-J. and E.M.S.; project administration, M.A., E.M.S. and M.A.-J.; funding acquisition, M.A. and M.A.-J. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been funded by the Deanship of Scientific Research at Najran University, Kingdom of Saudi Arabia, through a grant code (NU/DRP/SEHRC/12/5).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data supporting the evaluation results of this study come from a data set available online at https://www.kaggle.com/competitions/diabetic-retinopathy-detection/data (accessed on 18 October 2022).

Acknowledgments

The authors are thankful to the Deanship of Scientific Research at Najran University for funding this work, under the General Research Funding program grant code (NU/DRP/SEHRC/12/5).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Duker, J.S.; Waheed, N.K.; Goldman, D. Handbook of Retinal OCT: Optical Coherence Tomography E-Book; Elsevier Health Sciences: London, UK, 2021; Available online: https://books.google.co.in/books?hl=en&lr=&id (accessed on 18 October 2022).
  2. Wu, Z.; Shi, G.; Chen, Y.; Shi, F.; Chen, X.; Coatrieux, G.; Li, S. Coarse-to-fine classification for diabetic retinopathy grading using convolutional neural network. Artif. Intell. Med. 2020, 108, 101936. [Google Scholar] [CrossRef] [PubMed]
  3. Haneda, S.; Yamashita, H. International clinical diabetic retinopathy disease severity scale. Nihon Rinsho. Jpn. J. Clin. Med. 2010, 68, 228–235. Available online: https://europepmc.org/article/med/21661159 (accessed on 18 October 2022).
  4. Roglic, G. WHO Global report on diabetes: A summary. Int. J. Noncommun. Dis. 2016, 1, 3. Available online: https://www.ijncd.org/article.asp?issn=2468-8827 (accessed on 18 October 2022). [CrossRef]
  5. Jan, S.; Ahmad, I.; Karim, S.; Hussain, Z.L.; Rehman, M.; Shah, M.A. Status of diabetic retinopathy and its presentation patterns in diabetics at ophthalmology clinics. JPMI J. Postgrad. Med. Inst. 2018, 32, 2143. Available online: https://66.219.22.243/index.php/jpmi/article/view/2143 (accessed on 18 October 2022).
  6. Tang, J.; Kern, T.S. Inflammation in diabetic retinopathy. Prog. Retin. Eye Res. 2011, 30, 343–358. [Google Scholar] [CrossRef] [PubMed]
  7. Yang, Y.; Li, T.; Li, W.; Wu, H.; Fan, W.; Zhang, W. Lesion detection and grading of diabetic retinopathy via two-stages deep convolutional neural networks. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Quebec City, QC, Canada, 11–13 September 2017; Springer: Cham, Switzerland, 2017; pp. 533–540. [Google Scholar] [CrossRef]
  8. Wu, L.; Fernandez-Loaiza, P.; Sauma, J.; Hernandez-Bogantes, E.; Masis, M. Classification of diabetic retinopathy and diabetic macular edema. World J. Diabetes 2013, 4, 290. Available online: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3874488/ (accessed on 18 October 2022). [CrossRef]
  9. Louis, D.N.; Perry, A.; Reifenberger, G.; Von Deimling, A.; Figarella-Branger, D.; Cavenee, W.K.; Ellison, D.W. The 2016 World Health Organization classification of tumors of the central nervous system: A summary. Acta Neuropathol. 2016, 131, 803–820. [Google Scholar] [CrossRef]
  10. Zang, P.; Gao, L.; Hormel, T.T.; Wang, J.; You, Q.; Hwang, T.S.; Jia, Y. DcardNet: Diabetic retinopathy classification at multiple levels based on structural and angiographic optical coherence tomography. IEEE Trans. Biomed. Eng. 2020, 68, 1859–1870. Available online: https://ieeexplore.ieee.org/abstract/document/9207828 (accessed on 18 October 2022). [CrossRef]
  11. Resnikoff, S.; Felch, W.; Gauthier, T.M.; Spivey, B. The number of ophthalmologists in practice and training worldwide: A growing gap despite more than 200,000 practitioners. Br. J. Ophthalmol. 2012, 96, 783–787. Available online: https://bjo.bmj.com/content/96/6/783.short (accessed on 18 October 2022). [CrossRef] [PubMed]
  12. Liu, H.; Yue, K.L.; Cheng, S.; Pan, C.; Sun, J.; Li, W. Hybrid model structure for diabetic retinopathy classification. J. Healthc. Eng. 2020, 2020, 8840174. Available online: https://www.hindawi.com/journals/jhe/2020/8840174/ (accessed on 18 October 2022). [CrossRef]
  13. Qummar, S.; Khan, F.G.; Shah, S.; Khan, A.; Shamshirband, S.; Rehman, Z.U.; Jadoon, W. A deep learning ensemble approach for diabetic retinopathy detection. IEEE Access 2019, 7, 150530–150539. Available online: https://ieeexplore.ieee.org/abstract/document/8869883/ (accessed on 18 October 2022). [CrossRef]
  14. Gao, Z.; Li, J.; Guo, J.; Chen, Y.; Yi, Z.; Zhong, J. Diagnosis of diabetic retinopathy using deep neural networks. IEEE Access 2018, 7, 3360–3370. Available online: https://ieeexplore.ieee.org/abstract/document/8581492/ (accessed on 18 October 2022). [CrossRef]
  15. Gayathri, S.; Gopi, V.P.; Palanisamy, P. A lightweight CNN for Diabetic Retinopathy classification from fundus images. Biomed. Signal Process. Control 2020, 62, 102115. Available online: https://www.sciencedirect.com/science/article/pii/S1746809420302676 (accessed on 18 October 2022).
  16. Wan, S.; Liang, Y.; Zhang, Y. Deep convolutional neural networks for diabetic retinopathy detection by image classification. Comput. Electr. Eng. 2018, 72, 274–282. Available online: https://www.sciencedirect.com/science/article/pii/S0045790618302556 (accessed on 18 October 2022). [CrossRef]
  17. Verbraak, F.D.; Abramoff, M.D.; Bausch, G.C.; Klaver, C.; Nijpels, G.; Schlingemann, R.O.; van der Heijden, A.A. Diagnostic accuracy of a device for the automated detection of diabetic retinopathy in a primary care setting. Diabetes Care 2019, 42, 651–656. Available online: https://diabetesjournals.org/care/article-abstract/42/4/651/36147 (accessed on 18 October 2022). [CrossRef] [PubMed]
  18. Mansour, R.F. Deep-learning-based automatic computer-aided diagnosis system for diabetic retinopathy. Biomed. Eng. Lett. 2018, 8, 41–57. [Google Scholar] [CrossRef]
  19. Shanthi, T.; Sabeenian, R.S. Modified Alexnet architecture for classification of diabetic retinopathy images. Comput. Electr. Eng. 2019, 76, 56–64. Available online: https://www.sciencedirect.com/science/article/pii/S0045790618334190 (accessed on 18 October 2022). [CrossRef]
  20. Li, T.; Gao, Y.; Wang, K.; Guo, S.; Liu, H.; Kang, H. Diagnostic assessment of deep learning algorithms for diabetic retinopathy screening. Inf. Sci. 2019, 501, 511–522. Available online: https://www.sciencedirect.com/science/article/pii/S0020025519305377 (accessed on 18 October 2022). [CrossRef]
  21. Martinez-Murcia, F.J.; Ortiz, A.; Ramírez, J.; Górriz, J.M.; Cruz, R. Deep residual transfer learning for automatic diagnosis and grading of diabetic retinopathy. Neurocomputing 2021, 452, 424–434. Available online: https://www.sciencedirect.com/science/article/pii/S0925231220316520 (accessed on 18 October 2022). [CrossRef]
  22. Hemanth, D.J.; Deperlioglu, O.; Kose, U. An enhanced diabetic retinopathy detection and classification approach using deep convolutional neural network. Neural Comput. Appl. 2020, 32, 707–721. [Google Scholar] [CrossRef]
  23. Qiao, L.; Zhu, Y.; Zhou, H. Diabetic retinopathy detection using prognosis of microaneurysm and early diagnosis system for non-proliferative diabetic retinopathy based on deep learning algorithms. IEEE Access 2020, 8, 104292–104302. Available online: https://ieeexplore.ieee.org/abstract/document/9091167/ (accessed on 18 October 2022). [CrossRef]
  24. Zhang, C.; Lei, T.; Chen, P. Diabetic retinopathy grading by a source-free transfer learning approach. Biomed. Signal Process. Control 2022, 73, 103423. [Google Scholar] [CrossRef]
  25. Diabetic Retinopathy Detection|Kaggle. Available online: https://www.kaggle.com/competitions/diabetic-retinopathy-detection/data (accessed on 30 March 2022).
  26. Ahmed, I.A.; Senan, E.M.; Rassem, T.H.; Ali, M.A.; Shatnawi, H.S.A.; Alwazer, S.M.; Alshahrani, M. Eye Tracking-Based Diagnosis and Early Detection of Autism Spectrum Disorder Using Machine Learning and Deep Learning Techniques. Electronics 2022, 11, 530. [Google Scholar] [CrossRef]
  27. Senan, E.M.; Jadhav, M.E.; Rassem, T.H.; Aljaloud, A.S.; Mohammed, B.A.; Al-Mekhlafi, Z.G. Early Diagnosis of Brain Tumour MRI Images Using Hybrid Techniques between Deep and Machine Learning. Comput. Math. Methods Med. 2022, 2022, 8330833. [Google Scholar] [CrossRef] [PubMed]
  28. Olayah, F.; Senan, E.M.; Ahmed, I.A.; Awaji, B. AI Techniques of Dermoscopy Image Analysis for the Early Detection of Skin Lesions Based on Combined CNN Features. Diagnostics 2023, 13, 1314. [Google Scholar] [CrossRef]
  29. Bodapati, J.D.; Shaik, N.S.; Naralasetti, V. Deep convolution feature aggregation: An application to diabetic retinopathy severity level prediction. Signal Image Video Process. 2021, 15, 923–930. [Google Scholar] [CrossRef]
  30. Kandel, I.; Castelli, M. Transfer Learning with Convolutional Neural Networks for Diabetic Retinopathy Image Classification. A Review. Appl. Sci. 2020, 10, 2021. [Google Scholar] [CrossRef]
  31. Ebrahimi, P.; Salamzadeh, A.; Soleimani, M.; Khansari, S.M.; Zarea, H.; Fekete-Farkas, M. Startups and Consumer Purchase Behavior: Application of Support Vector Machine Algorithm. Big Data Cogn. Comput. 2022, 6, 34. [Google Scholar] [CrossRef]
  32. Abunadi, I.; Senan, E.M. Multi-Method Diagnosis of Blood Microscopic Sample for Early Detection of Acute Lymphoblastic Leukemia Based on Deep Learning and Hybrid Techniques. Sensors 2022, 22, 1629. [Google Scholar] [CrossRef]
  33. Atteia, G.; Abdel Samee, N.; El-Kenawy, E.-S.M.; Ibrahim, A. CNN-Hyperparameter Optimization for Diabetic Maculopathy Diagnosis in Optical Coherence Tomography and Fundus Retinography. Mathematics 2022, 10, 3274. [Google Scholar] [CrossRef]
  34. Abunadi, I.; Senan, E.M. Deep Learning and Machine Learning Techniques of Diagnosis Dermoscopy Images for Early Detection of Skin Diseases. Electronics 2021, 10, 3158. [Google Scholar] [CrossRef]
  35. Senan, E.M.; Jadhav, M.E.; Kadam, A. Classification of PH2 images for early detection of skin diseases. In Proceedings of the 2021 6th International Conference for Convergence in Technology (I2CT), Maharashtra, India, 2–4 April 2021; pp. 1–7. [Google Scholar] [CrossRef]
  36. Mujeeb Rahman, K.K.; Nasor, M.; Imran, A. Automatic Screening of Diabetic Retinopathy Using Fundus Images and Machine Learning Algorithms. Diagnostics 2022, 12, 2262. [Google Scholar] [CrossRef]
  37. Khalid, A.; Senan, E.M.; Al-Wagih, K.; Ali Al-Azzam, M.M.; Alkhraisha, Z.M. Hybrid Techniques of X-ray Analysis to Predict Knee Osteoarthritis Grades Based on Fusion Features of CNN and Handcrafted. Diagnostics 2023, 13, 1609. [Google Scholar] [CrossRef] [PubMed]
  38. Senan, E.M.; Abunadi, I.; Jadhav, M.E.; Fati, S.M. Score and Correlation Coefficient-Based Feature Selection for Predicting Heart Failure Diagnosis by Using Machine Learning Algorithms. Comput. Math. Methods Med. 2021, 2021. [Google Scholar] [CrossRef] [PubMed]
  39. Kalbhor, M.; Shinde, S.; Popescu, D.E.; Hemanth, D.J. Hybridization of Deep Learning Pre-Trained Models with Machine Learning Classifiers and Fuzzy Min–Max Neural Network for Cervical Cancer Diagnosis. Diagnostics 2023, 13, 1363. [Google Scholar] [CrossRef]
  40. Mohammed, B.A.; Senan, E.M.; Rassem, T.H.; Makbol, N.M.; Alanazi, A.A.; Al-Mekhlafi, Z.G.; Almurayziq, T.S.; Ghaleb, F.A. Multi-Method Analysis of Medical Records and MRI Images for Early Diagnosis of Dementia and Alzheimer’s Disease Based on Deep Learning and Hybrid Methods. Electronics 2021, 10, 2860. [Google Scholar] [CrossRef]
  41. Zhang, X.; Kim, Y.; Chung, Y.-C.; Yoon, S.; Rhee, S.-Y.; Kim, Y.S. A Wrapped Approach Using Unlabeled Data for Diabetic Retinopathy Diagnosis. Appl. Sci. 2023, 13, 1901. [Google Scholar] [CrossRef]
Figure 1. Stages of DR development with the appearance of biomarkers.
Figure 2. Methodology of color fundus image diagnostics for detection of DR developmental stages.
Figure 3. A set of fundus image samples (a) before enhancement (b) after enhancement.
Figure 4. A methodology for fundus imaging diagnostics to diagnose the developmental stages of DR using a hybrid system.
Figure 5. The basic structure of the FFNN network for classifying DR.
Figure 6. Methodology of hybrid feature extraction and diagnosis by FFNN.
Figure 7. Hybrid technique performance results between GoogLeNet and SVM (a) confusion matrix (b) AUC.
Figure 8. Hybrid technique performance results between ResNet-18 and SVM (a) confusion matrix (b) AUC.
Figure 9. GoogLeNet-handcrafted-FFNN performance results (a) confusion matrix (b) AUC.
Figure 10. ResNet-18-handcrafted-FFNN performance results (a) confusion matrix (b) AUC.
Figure 11. Display of the performance of the proposed methods in this study for diagnosing the developmental stages of DR.
Table 1. Classification of DR with appearance of biomarkers.

Stages of DR | Lesion Detection | No. of Images
Normal | Normal; no abnormalities noticed | 25,810
Mild NPDR | Slight appearance of microaneurysms | 2443
Moderate NPDR | Microvascular aneurysms in amounts greater than in mild NPDR and less than in severe NPDR | 5292
Severe NPDR | Spotted macular bleeding in the four quadrants; microvascular abnormalities in at least one quadrant; blood vessel protrusion in one of the quadrants | 873
PDR | Pre-retinal hemorrhage; neovascularization | 708
Table 2. Augmentation method for balancing the DR data set during the training stage.

Class Name | Normal | Mild | Moderate | Severe | Proliferative
Before augmentation | 1652 | 1563 | 3387 | 558 | 453
After augmentation | 6608 | 6252 | 6774 | 6696 | 6342
Table 3. Splitting of retinal fundus images during all stages for all classes.

Classes | Training (80% of 80%) | Validation (20% of 80%) | Testing (20%)
Normal | 1652 | 413 | 516
Mild | 1563 | 391 | 489
Moderate | 3387 | 847 | 1058
Severe | 558 | 140 | 175
Proliferative | 453 | 113 | 142
Table 4. Training time of the proposed systems.

Techniques | Feature Extraction Method | Training Time | Testing Time
CNN | GoogLeNet | 320 min 54 s | 13 min 49 s
CNN | ResNet-18 | 280 min 39 s | 11 min 8 s
Hybrid | GoogLeNet + SVM | 5 min 26 s | 1 min 52 s
Hybrid | ResNet-18 + SVM | 4 min 9 s | 1 min 14 s
FFNN | GoogLeNet and handcrafted | 11 min 18 s | 2 min 42 s
FFNN | ResNet-18 and handcrafted | 9 min 31 s | 2 min 17 s
Table 5. CNN and hybrid models performance results for detection of DR stages.

Measure | GoogLeNet | ResNet-18 | GoogLeNet + SVM | ResNet-18 + SVM
Accuracy % | 92.56 | 91.47 | 98.8 | 98.9
Precision % | 92.6 | 91.38 | 97.6 | 98.8
Sensitivity % | 91.8 | 90.2 | 97.8 | 98.2
Specificity % | 98.2 | 97.8 | 100 | 100
AUC % | 97.42 | 96.58 | 98.92 | 99.21
Table 6. Performance of FFNN based on hybrid features to detect DR stages.

Measure | GoogLeNet-Handcrafted-FFNN | ResNet-18-Handcrafted-FFNN
Accuracy % | 99.6 | 99.7
Precision % | 99.4 | 99.6
Sensitivity % | 99.2 | 99.6
Specificity % | 100 | 100
AUC % | 99.78 | 99.86
Table 7. Performance of the proposed methods in this study to diagnose fundus images to reveal the developmental stages of DR.

Method | Normal | Mild | Moderate | Severe | Proliferative | Accuracy %
Hybrid: GoogLeNet + SVM | 98.1 | 99.2 | 99.8 | 96.6 | 95.1 | 98.8
Hybrid: ResNet-18 + SVM | 98.8 | 98 | 100 | 97.7 | 96.5 | 98.9
FFNN: GoogLeNet, FCH, GLCM and LBP | 99.8 | 99.4 | 99.8 | 100 | 97.2 | 99.6
FFNN: ResNet-18, FCH, GLCM and LBP | 99.6 | 99.2 | 100 | 100 | 98.6 | 99.7
Table 8. Performance of all proposed methods.

Measure | GoogLeNet | ResNet-18 | GoogLeNet + SVM | ResNet-18 + SVM | GoogLeNet-Handcrafted-FFNN | ResNet-18-Handcrafted-FFNN
Accuracy % | 92.56 | 91.47 | 98.8 | 98.9 | 99.6 | 99.7
Precision % | 92.6 | 91.38 | 97.6 | 98.8 | 99.4 | 99.6
Sensitivity % | 91.8 | 90.2 | 97.8 | 98.2 | 99.2 | 99.6
Specificity % | 98.2 | 97.8 | 100 | 100 | 100 | 100
AUC % | 97.42 | 96.58 | 98.92 | 99.21 | 99.78 | 99.86
Table 9. Comparison of the performance results of proposed systems with relevant previous studies.

Previous Studies | Accuracy % | Sensitivity % | Specificity % | AUC %
Liu et al. [12] | 85.44 | 98.48 | 71.82 | -
Qummar et al. [13] | 65.2 | 64.2 | 66.2 | -
Gao et al. [14] | 85.5 | 94 | 93.01 | -
Wan et al. [16] | 93.36 | 77.66 | 93.45 | 92.72
Romany et al. [18] | 95.26 | 96 | 93 | -
Shanthi et al. [19] | 96.6 | - | - | -
Martinez et al. [21] | 95.5 | 98.3 | 94.5 | 97.3
Hemanth et al. [22] | 97 | 94 | 98 | -
Proposed model | 99.7 | 99.6 | 100 | 99.86