Article

Automatic Detection of Diabetic Hypertensive Retinopathy in Fundus Images Using Transfer Learning

1. Department of Computer Science and Engineering, School of Computer Science and Engineering, Lovely Professional University, Jalandhar-Delhi G.T. Road, Phagwara 144411, Punjab, India
2. Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
3. PRINCE Laboratory Research, ISITcom, Hammam Sousse, University of Sousse, Sousse 4000, Tunisia
4. Radiological Sciences Department, College of Applied Medical Sciences, King Khalid University, Abha 61421, Saudi Arabia
5. BioImaging Unit, Space Research Centre, University of Leicester, Michael Atiyah Building, Leicester LE1 7RH, UK
6. Electrical Engineering Department, College of Engineering, King Khalid University, Abha 61421, Saudi Arabia
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(8), 4695; https://doi.org/10.3390/app13084695
Submission received: 8 February 2023 / Revised: 28 March 2023 / Accepted: 29 March 2023 / Published: 7 April 2023
(This article belongs to the Special Issue Biomedical Imaging: From Methods to Applications)

Abstract

Diabetic retinopathy (DR) is a complication of diabetes that affects the eyes. It occurs when high blood sugar levels damage the blood vessels in the retina, the light-sensitive tissue at the back of the eye. There is therefore a need to detect DR in its early stages to reduce the risk of blindness. Transfer learning is a machine learning technique in which a pre-trained model is used as the starting point for a new task, and it has been applied to diabetic retinopathy classification with promising results. Pre-trained models, such as convolutional neural networks (CNNs), can be fine-tuned on a new dataset of retinal images to classify diabetic retinopathy. This manuscript aims to develop an automated scheme for diagnosing and grading DR and hypertensive retinopathy (HR). The retinal image classification has been performed in three phases: preprocessing, segmentation and feature extraction. A pre-processing methodology has been proposed for reducing the noise in retinal images, applying A-CLAHE, DnCNN and Wiener filter techniques for image enhancement. After pre-processing, blood vessel segmentation in retinal images has been performed utilizing OTSU thresholding and mathematical morphology. Feature extraction and classification have been performed using transfer learning models, and the segmented images have then been classified using a Modified ResNet-101 architecture. The quality of the enhanced images has been evaluated using the peak signal-to-noise ratio (PSNR) and shows better results than the existing literature. The network is trained on more than 6000 images from the MESSIDOR and ODIR datasets and achieves a classification accuracy of 98.72%.

1. Introduction

The high blood sugar levels in diabetes can damage the small blood vessels in the retina, and when this is combined with the high blood pressure in hypertension, it can further damage the blood vessels, leading to bleeding and fluid leakage in the retina. This can cause vision problems, such as blurred vision, dark spots or floaters, and even vision loss if left untreated. Management of diabetic hypertensive retinopathy involves controlling both diabetes and hypertension through lifestyle changes and medication. Regular eye exams with an eye doctor (ophthalmologist) are also important to monitor any changes in the retina and to detect any issues early on. Treatment may include laser therapy, injections or surgery, depending on the severity of the condition. It is important for individuals with diabetes and hypertension to work closely with their healthcare team to manage their conditions and reduce the risk of complications, including diabetic hypertensive retinopathy [1].
Computer-aided diagnosis (CAD) has evolved as a major research problem for medical imaging and radiological diagnosis. The field of medical imaging uses image information to evaluate and analyze abnormalities at an early stage. In CAD systems, low-contrast medical images are improved with diverse imaging techniques designed to interpret the prominent portions of the image that are relevant for the diagnosis of diseases [2]. CAD systems are computer systems designed to detect or diagnose diseases by providing clinical experts with a second opinion. They improve overall performance in the detection of diseases and help doctors detect various medical abnormalities at an early stage. CAD systems are specifically intended to improve disease diagnostic accuracy and reduce the computational complexity of interpreting image information. When clinicians and computer-aided algorithms are given the same role, better diagnostic performance is achieved. CAD systems are also useful in fundus image analysis, as regular screening and prompt treatment reduce the likelihood of severe eye complications. This research aims to provide a CAD scheme to help ophthalmologists distinguish various grades of DR and HR with a second opinion [3].
Automatic extraction and convolutional neural networks (CNNs) are two concepts in the field of artificial intelligence and machine learning that are often used together in image and video processing tasks. Automatic extraction, also known as feature extraction, refers to the process of identifying and selecting relevant features from raw data. In image and video processing tasks, these features could include edges, textures, shapes, or other patterns that can help distinguish one object from another. Convolutional neural networks, on the other hand, are a type of deep learning algorithm that are commonly used in image recognition tasks. They are designed to automatically learn and extract features from images by passing them through a series of convolutional layers. These layers consist of a set of filters or kernels that convolve with the input image, detecting features at different scales and orientations [4]. The output of each convolutional layer is then passed through a pooling layer, which downsamples the feature maps to reduce the number of parameters and prevent overfitting. The final output is a set of high-level features that can be used to classify or label the input image.
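The convolution-then-pooling pipeline described above can be sketched in a few lines of NumPy. The vertical-edge kernel and the toy image below are illustrative assumptions, not part of any particular network; the point is only the conv → ReLU → max-pool flow.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation (the 'convolution' used in CNNs)
    of a grayscale image with a single filter kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Downsample a feature map by taking the max of each size x size block."""
    h, w = fmap.shape[0] // size * size, fmap.shape[1] // size * size
    f = fmap[:h, :w]
    return f.reshape(h // size, size, w // size, size).max(axis=(1, 3))

# A vertical-edge filter applied to a toy "image" with a bright right half
image = np.zeros((6, 6)); image[:, 3:] = 1.0
edge_kernel = np.array([[-1.0, 0.0, 1.0],
                        [-1.0, 0.0, 1.0],
                        [-1.0, 0.0, 1.0]])
features = max_pool(np.maximum(conv2d(image, edge_kernel), 0))  # conv -> ReLU -> pool
```

The pooled feature map responds only where the filter's edge pattern is present, which is exactly the "detecting features at different scales and orientations" behavior described in the text.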
Since a pre-trained network consists of various filters, each layer yields diverse features, and a large classifier is necessary to handle them. A deep network classifier is usually made up of two hidden layers with 4096 neurons each; in this manner, the classifier has millions of weight parameters, and training such an independent classifier is a challenging task. A network that can extract an optimal number of features is therefore required, along with an adaptive classifier structure to deal with the extracted characteristics. Diverse feature extraction procedures have been applied for image classification. A feature is a property of an object that is defined by one or more functions, and features can also help to reduce the amount of redundant data in retinal images. Texture features are useful in image analysis for accurate classification, as they provide more information about the spatial arrangement of the pixels in terms of color and intensity. Furthermore, vascular abnormalities are the most common signs of DR. Since the texture properties of fundus images are retrieved for microaneurysm (MA) detection, the automated algorithm's accuracy is improved [5].
Diabetic retinopathy typically occurs in patients with a history of diabetes of around 5 years or more. DR is a major cause of blindness in diabetic individuals; it is caused by diabetes mellitus affecting the retinal microvasculature. Therefore, there is a need to detect DR in the early stages to reduce the risk of blindness in working-age groups [6]. DR can be detected from fundus images of the retina. Detecting morphological changes, which include microaneurysms, hard exudates, soft exudates (cotton wool spots), hemorrhages, the macula, the optic disk, the optic nerve head and an increase in the blood vessels in fundus images, is still a tedious task. These morphological changes can be detected either by time-consuming manual inspection or by computer-aided diagnosis (CAD), which can help the ophthalmologist identify the problem. Hypertensive retinopathy (HR) occurs due to hypertension (high blood pressure), which affects the blood vessels. The early symptoms of HR include arteriovenous (AV) nicking, tortuosity and bifurcation changes in the blood vessels [6].
This paper focuses on the development of a computationally capable system for grading the severity of DR and HR retinal anomalies. The developed methodologies and models can correctly discern retinal physiological landmarks, diagnose pathological symptoms and effectively characterize DR and HR severity classes. The retinal image classification is performed using three phases that include preprocessing, segmentation and feature extraction techniques. The pre-processing methodology is proposed for reducing the noise in retinal images. A-CLAHE, DNCNN and Wiener filter techniques are applied for the enhancement of images. After pre-processing, blood vessel segmentation in retinal images is performed utilizing OTSU thresholding and mathematical morphology. Feature extraction and classification are performed using transfer learning models. The segmented images are then classified using Modified ResNet 101 architecture.
The relevant work of numerous authors on diabetic retinopathy and hypertensive retinopathy detection based on transfer learning is described in Section 2. Section 3 discusses the methodology in three phases, namely pre-processing, segmentation and feature extraction. In the pre-processing phase, a novel hybrid technique is implemented for denoising the images. An efficient method for the segmentation of blood vessels is discussed after enhancing the images. The ResNet-101 model, a pre-trained network architecture, is used to classify the segmented images. Section 4 describes the results and discussion of the proposed techniques, followed by the conclusion and future scope in Section 5.

2. Related Work

Ophthalmologists’ manual diagnosis of DR takes a long time, requires a great deal of work and is prone to misdiagnosis. As a result, employing a CAD system can help to avoid misdiagnosis while also saving money, time and effort. Deep learning (DL) has arisen and gained recognition in a variety of sectors during the previous decade, including medical image analysis. DL beats all other image analysis algorithms in the vast majority of scenarios by accurately detecting features from input data for classification or segmentation. DL approaches do not necessitate the extraction of hand-crafted features, but they still require a large amount of data for training [3]. Machine learning, on the other hand, necessitates the extraction of handcrafted features but does not require a vast amount of training data. Machine learning approaches for DR detection must first extract the features, as described in [5]. Then, as in [6], DR lesions are extracted for classification. CNN is a DL approach to image processing that is frequently utilized [7] and highly effective [8]. Related work based on CNNs is reviewed in the following paragraphs.
A deep learning approach was proposed for the classification and severity grading of DR in retinal images. ResNet-50 was used for feature maps and random forest was used for classification [9]. The approach was implemented on the MESSIDOR 2 and EYEPACS datasets and achieved an accuracy of 96% for MESSIDOR and 75.09% for EYEPACS [10].
Another approach applied maximum principal curvature to extract the blood vessels. Following that, AHE and mathematical morphology were used to improve the result and eradicate incorrectly segmented regions. A convolutional neural network (CNN) was employed to train the classifier for categorization. The CNN's classification design consisted of a combination of squeeze-and-excitation and bottleneck layers, as well as a convolution and pooling layer architecture, for classification between the two classes. The work was implemented on the DIARETDB1 dataset and achieved an accuracy/precision of 98.7%/97.2%, respectively [11].
Fully automated diagnosis systems outperformed manual techniques in terms of misdiagnosis, while also saving time, effort and money, according to the current study. No-DR, mild, moderate, severe and proliferative DR were the five stages of the proposed method for DR imaging. The system was made up of two TL-based models. The first model (CNN512), which fed the complete image into the CNN model, categorized it into one of the five DR stages. On the DDR and APTOS Kaggle 2019 public datasets, it achieved accuracies of 88.6% and 84.1%, respectively. On the DDR dataset, the second model used an adapted YOLOv3 model to detect and localize DR lesions, achieving a 0.216 mAP in lesion localization. Finally, both proposed structures, CNN512 and YOLOv3, were merged to recognize DR images and target DR lesions, achieving an accuracy of 89%, a sensitivity of 89% and a specificity of 97.3% [12].
The HR (DenseHyper) system was developed based on a trained features layer (TF-L) and dense feature transform layer (DFT-L) for deep residual learning (DRL) approaches to detect HR. The DenseHyper system consisted of various dense architectures that combined TF-L with a CNN for learning features from various lesions and creating specialized features using DFT-L. The performance was evaluated on the DRHAGIS, DRIVE, DIARET DB, DR1 and DR2 datasets and achieved an average sensitivity/specificity/accuracy of 93%/95%/95% and an AUC of 0.96 [13].
A deep CNN was used in this work for the detection and classification of DR into five stages. Three convolutional layers and a fully connected layer were included in the suggested technique. As a result, the diabetic retinopathy categorization based on convolutional neural networks was more trustworthy and accurate. This saves time with regard to classification. As a result, the condition can be treated sooner, lowering the chance of vision loss. The proposed technique was implemented on the Kaggle dataset and achieved an accuracy of 94.44% [14].
A multi-tasking deep learning approach was proposed for detecting the severity of DR. This multitasking model consists of one classification model and one regression model, each with its own loss function. Given that a higher severity level usually follows a lower severity level, the classification and regression models were combined to account for this dependence. After training the regression and classification models separately, the characteristics obtained by these two models were combined and fed into a multilayer perceptron network to categorize the five stages of DR. A modified DenseNet was developed to implement this approach. The experimental results were evaluated on the APTOS and EYEPACS datasets, achieving accuracies of 85% and 82%, respectively. To benchmark the proposed work, an Xception network trained through transfer learning was also developed [15].
VeriSee DR software was developed based on deep learning image assessment for testing the accuracy of grading the severity of DR. The EYEPACS dataset provided 5649 color fundus images, and another 31,612 color fundus images were utilized to train the VeriSee model. VeriSee had AUCs of 0.955, 0.955 and 0.984 in identifying any DR, referable DR and proliferative DR (PDR), respectively. VeriSee demonstrated better sensitivity in identifying any DR and PDR (92.2% and 90.9%, respectively) than internal physicians (64.3% and 20.6%, respectively) (p < 0.001 for both) [16]. The literature survey in tabular form is depicted in Table 1.

3. Proposed Method

The proposed methodology is described in the following section. It is divided into three phases, particularly image pre-processing, segmentation and feature extraction and classification.

3.1. Image Preprocessing

Original images are mainly exposed to diverse types of noise that can degrade the quality of the image. For the removal of unwanted induced signals, pre-processing of retinopathy images is required. The RGB component is initially separated from the original images for conversion of the RGB image into a gray image. The image is then partitioned into foreground and background using the edge-distance function before using adaptive CLAHE (contrast limited adaptive histogram equalization). Using the following function, the enhanced image is obtained:
F_m = (I_p(x, y) − µ_m) / σ_m
In the abovementioned Equation (1), Ip(x,y) signifies the image pixels, µm represents mean and σm represents standard deviation. The algorithm for pre-processing is as follows (Algorithm 1):
Algorithm 1: Algorithm of Pre-processing phase
Input: Acquire dataset (Dataset D)
1. For all images in D:
2. Split the image into R, G, B channels.
3. Apply adaptive CLAHE (A-CLAHE) on the individual channels:
   i. Obtain the contrast-enhanced image for R as normalR.
   ii. Obtain the contrast-enhanced image for G as normalG.
   iii. Obtain the contrast-enhanced image for B as normalB.
4. Apply the denoising network DnCNN on the different channels:
   i. Obtain the denoised image for R as Rd.
   ii. Obtain the denoised image for G as Gd.
   iii. Obtain the denoised image for B as Bd.
5. Apply local contrast and the Wiener filter on the three channels:
   i. Obtain the local contrast image for R as lcmR.
   ii. Obtain the local contrast image for G as lcmG.
   iii. Obtain the local contrast image for B as lcmB.
   iv. Obtain the enhanced image by applying the Wiener filter for R as lcmmR.
   v. Obtain the enhanced image by applying the Wiener filter for G as lcmmG.
   vi. Obtain the enhanced image by applying the Wiener filter for B as lcmmB.
6. Concatenate the enhanced RGB channels as lcmRGB = (lcmR, lcmG, lcmB).
7. Concatenate the A-CLAHE values for all three channels as normalRGB = (normalR, normalG, normalB).
8. Concatenate the values of the denoising network for all three channels as denoisedRGB.
9. Calculate the PSNR of the denoised image, the enhanced image and normalRGB with respect to the original image.
10. Save the denoised image in a specified folder.
Output: Image-enhanced and denoised dataset: Dataset 1
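As a minimal sketch of the normalization in Equation (1), assuming it is applied per image region (the region values below are purely illustrative):

```python
import numpy as np

def normalize_region(pixels):
    """F_m = (I_p(x, y) - mu_m) / sigma_m for one image region (Equation (1))."""
    mu = pixels.mean()       # mean of the region
    sigma = pixels.std()     # standard deviation of the region
    if sigma == 0:           # flat region: nothing to normalize
        return np.zeros_like(pixels, dtype=float)
    return (pixels - mu) / sigma

region = np.array([[10.0, 20.0],
                   [30.0, 40.0]])
fm = normalize_region(region)   # zero mean, unit variance
```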

3.1.1. Acquire Dataset

Five types of fundus images, classified as normal, mild diabetic, moderate diabetic, severe diabetic and hypertensive, are considered for this work. All images are taken from the publicly available MESSIDOR and ODIR databases, and the categories in the datasets are approximately balanced.
The acquired images cannot be used directly for screening and diagnosis, as they carry imperfections in terms of noise, illumination, and so on. Inter- and intra-image pixel variations exist for various reasons, such as poor camera contrast and incorrect positioning of the lens. These variations in retinal images can affect the performance of the detection of the various abnormalities present in DR and HR. Image pre-processing and enhancement eliminate the unwanted variations present in the image without significantly affecting the image data. Figure 1 shows the diverse steps involved in pre-processing.
The pre-processing technique first acquires the dataset. The implementation of the proposed work is performed on the MESSIDOR [17] and ODIR [18] datasets. Then, adaptive CLAHE is applied to increase the contrast of the color channels, a pre-trained DnCNN model is applied to handle Gaussian denoising with unknown noise levels, and the Wiener filter is used for better image smoothing. The following sections explain each of these methods in detail.

3.1.2. Adaptive CLAHE

The distribution of intensity levels in an image is referred to as the image's histogram. Plain histogram equalization can amplify noise in a medical image; CLAHE can be used to restrict this noise. CLAHE is a method for adaptive contrast improvement based on AHE, which calculates the histogram for a pixel's context area. In the display range, the intensity of the pixel is transformed into a value proportional to the pixel intensity range in the local intensity histogram. CLAHE has two parameters, block size (N) and clip limit, which mainly control image quality but are usually set heuristically by users [19]. We found that choosing the clip limit for optimal improvement using CLAHE is very critical.
The bin size in the local histogram has a significant impact on the selection of the clip level. In the proposed technique, the clip limit (CL) value is calculated automatically from the inputs: the maximum bin height is taken from the local sub-image histogram, and the clipped pixels are redistributed equally across the gray levels. The algorithm for adaptive CLAHE is shown below (Algorithm 2):
Algorithm 2: Algorithm of Adaptive CLAHE
Input: Original image I
Output: CLAHE-enhanced image Ig
1. Resize I to A × B and partition it into tiles I = {i}.
2. Hi = Histogram(i).
3. CL(i) = MCL · Mavg, where Mavg = ΣMClip/Mgray with Mgray = |{ig}| gray levels, MCL = I normalized to (0, 1), and ΣMClip = Σ pixels in the (x, y) tile dimension.
4. Clipping:
   If (Hi > CL) then clip histogram Hi
   else Hi = Hi + CL
   end if
5. ΣMCL = Mcp.
6. Apply CLAHE(i).
7. Evaluate Ig using bilinear interpolation.
Output: CLAHE-enhanced image Ig
The enhanced image's PSNR value after applying adaptive CLAHE is about 35, which is significantly improved in comparison to prior methods. In addition, the DnCNN network, a valuable pre-trained model for image denoising, is applied for enhancing the retinal images and reducing the noise, and the PSNR value of the denoised images is significantly improved. After applying DnCNN, a Wiener filter is applied, and the PSNR value obtained is around 60, which is significantly higher than most existing approaches.
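The clip-and-redistribute step at the heart of A-CLAHE (step 4 of Algorithm 2) can be sketched as follows. Equal redistribution of the clipped excess is one common choice; the example histogram is purely illustrative.

```python
import numpy as np

def clip_histogram(hist, clip_limit):
    """Clip a tile histogram at clip_limit and redistribute the clipped
    excess equally over all gray levels (the core of the CLAHE clip step)."""
    hist = hist.astype(float).copy()
    excess = np.sum(np.maximum(hist - clip_limit, 0))  # mass above the limit
    hist = np.minimum(hist, clip_limit)                # cut the peaks
    hist += excess / hist.size                         # spread the excess evenly
    return hist

hist = np.array([50, 5, 5, 0])                  # one dominant bin, as in a flat tile
clipped = clip_histogram(hist, clip_limit=20)
```

Note that the total pixel count is preserved; only the peaks that would over-amplify noise are flattened, which is why CLAHE limits the contrast gain of near-uniform regions.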

3.1.3. DNCNN

A noisy image is fed into the DnCNN network. To train a residual mapping R(m) ≈ n, the DnCNN network uses the residual learning formulation, where n is the noise and the residual formulation is d = m − n. The DnCNN model is used for denoising the images [20]. The optimization task arises because the difference between the noisy input m and the noise n is approximately equal to the denoised image d. The optimization problem is expressed as minimizing the loss l:
l(p) = (1/n) Σ_{i=1…n} ‖R(m_i; p) − (m_i − d_i)‖²
where the batch size is n, and the DnCNN network has p trainable parameters. The L2 norm is used in the initial implementation of this work. As a result, the loss function reflects the Euclidean distance between the clean images and the expected outputs [21].
The DnCNN network has 17 convolutional layers, as shown in Figure 2. Figure 3 illustrates the workflow of the DnCNN network for denoising the images. Firstly, the original image is fed to a Conv + ReLU input layer. Then, Conv + BN + ReLU layers are applied from the second to the penultimate layer. Finally, a convolutional layer is applied for the output. A total of 64 kernels of size 3 × 3 are utilized for all layers except the output layer [22]. A stride of 1 and “same” padding are used in all layers to ensure that the spatial dimensions of the inputs and outputs match [23]. One kernel of size 3 × 3 is used for the output layer (Figure 2 and Figure 3).
The pseudocode for the proposed DNCNN model is shown as follows (Algorithm 3):
Algorithm 3: Algorithm of the proposed DnCNN
Pseudocode for DnCNN
1. Let the network have M layers.
2. For an n-channel image of size W × H, the input is I = W × H × n. First layer: Conv (3 × 3 × n) + ReLU (max(0, ·)) + zero padding, where n = number of image channels.
3. For layers n = 2 : M − 1: Conv (3 × 3 × j), batch normalization, ReLU + zero padding.
4. Output layer: a = Conv (3 × 3 × j) + zero padding.
5. Output I′ = a − R(a), with loss function C(θ) = ½ ‖x̂ − x‖², where θ = [W, b].
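The residual formulation above, d = m − R(m) with the loss of Equation (2), can be sketched without any deep learning framework. The "oracle" below is a hypothetical stand-in for a trained R(m; p), used only to show how the pieces fit together.

```python
import numpy as np

def denoise_residual(noisy, residual_net):
    """DnCNN-style residual denoising: the network predicts the noise n,
    and the clean estimate is d = m - R(m)."""
    return noisy - residual_net(noisy)

def residual_loss(residual_pred, noisy, clean):
    """Loss l(p): mean squared error between the predicted residual
    R(m; p) and the true residual m - d, per Equation (2)."""
    return np.mean((residual_pred - (noisy - clean)) ** 2)

# Toy check with a hypothetical oracle that predicts the noise exactly
clean = np.ones((4, 4))
noise = 0.1 * np.ones((4, 4))
noisy = clean + noise
oracle = lambda m: m - clean          # stands in for a trained R(m; p)
denoised = denoise_residual(noisy, oracle)
loss = residual_loss(oracle(noisy), noisy, clean)
```

Learning the residual (the noise) rather than the clean image directly is what makes the mapping easy to fit: for a good denoiser, R(m) is small and close to zero-mean.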

3.1.4. Applying Wiener Filter

As the final step of pre-processing, we used a Wiener filter, a low-pass filtering approach based on statistical estimation of the neighborhood pixels [24]. The Wiener filter calculates the local mean and variance of the neighborhood. Weaker smoothing is applied where the local variance is large, whereas stronger smoothing is applied where it is small. In this phase, the Wiener filter's default parameters were used [25]. The output of the pre-processing step is the denoised image D. The pseudocode for pre-processing is described as follows (Algorithm 4):
Algorithm 4: Algorithm of Pre-processing with the Wiener Filter
Pseudocode for pre-processing
Input: Image (I)
Output: Denoised image D
1. ∀ In = zeroes(size(Is,t)), w = zeroes(S = size(Is,t))
2. For i = 1 : length(In)
3. I1 = {In(:, :, 1); In(:, :, 2); In(:, :, 3)}
4. Apply A-CLAHE
5. Ciq = {I1(:, :, 1); I1(:, :, 2); I1(:, :, 3)}
6. Apply the DnCNN algorithm
7. Dn = {I1(:, :, 1); I1(:, :, 2); I1(:, :, 3)}
8. Apply the Wiener filter
9. W = {Dn(:, :, 1); Dn(:, :, 2); Dn(:, :, 3)}
10. Apply local contrast
11. LC = {Dn(:, :, 1); Dn(:, :, 2); Dn(:, :, 3)}
12. Concatenate Ergb = cat(LC(:, :, 1); LC(:, :, 2); LC(:, :, 3))
13. Concatenate Ciq = cat(Ciq(:, :, 1); Ciq(:, :, 2); Ciq(:, :, 3))
14. Concatenate D = cat(Dn(:, :, 1); Dn(:, :, 2); Dn(:, :, 3))
15. Evaluate PSNR
16. Save the denoised image D
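A variance-adaptive Wiener filter of the kind described above can be sketched in NumPy. The 3 × 3 window and the wiener2-style noise estimate (mean of the local variances) are assumptions for illustration, not necessarily the exact defaults used in this work.

```python
import numpy as np

def local_wiener(img, win=3, noise_var=None):
    """Adaptive Wiener filter: estimate the local mean and variance in a
    win x win neighbourhood and smooth more where the variance is small."""
    pad = win // 2
    p = np.pad(img.astype(float), pad, mode='reflect')
    # local sums and sums of squares via shifted additions (box filter)
    s = np.zeros_like(img, dtype=float)
    s2 = np.zeros_like(img, dtype=float)
    for dy in range(win):
        for dx in range(win):
            w = p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            s += w
            s2 += w * w
    mean = s / win**2
    var = np.maximum(s2 / win**2 - mean**2, 0)
    if noise_var is None:
        noise_var = var.mean()              # wiener2-style noise estimate
    # gain -> 0 in flat regions (strong smoothing), -> 1 on strong edges
    gain = np.maximum(var - noise_var, 0) / np.maximum(var, 1e-12)
    return mean + gain * (img - mean)

flat = np.full((5, 5), 7.0)
out = local_wiener(flat)                    # a flat image passes through unchanged
```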
After enhancing the retinal images, the evaluation is performed by calculating the peak signal-to-noise ratio. PSNR is evaluated as:
PSNR = 10 · log10(MAX_I² / MSE)
where MSE specifies the mean squared error, which is evaluated as:
MSE = (1/(a·b)) Σ_{i=0…a−1} Σ_{j=0…b−1} [I(i, j) − R(i, j)]²
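The PSNR and MSE formulas above translate directly into code (MAX_I = 255 is assumed for 8-bit images):

```python
import numpy as np

def psnr(original, restored, max_val=255.0):
    """PSNR = 10 * log10(MAX^2 / MSE) between an original image I
    and a restored image R."""
    mse = np.mean((original.astype(float) - restored.astype(float)) ** 2)
    if mse == 0:
        return float('inf')                 # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

a = np.zeros((8, 8))
b = a + 5.0                                  # uniform error of 5 -> MSE = 25
# psnr(a, b) = 10 * log10(255^2 / 25) ~= 34.15 dB
```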

3.2. Segmentation

In every image analysis, the segmentation stage is crucial; if the segmentation is not performed correctly, the rest of the image analysis will be complicated. Furthermore, segmenting the optic disc (OD) in a retinal image is critical in the diagnosis of certain ophthalmological abnormalities. Segmentation is a method of categorizing data based on pixel similarity: pixels in neighboring regions differ in grayscale, color and intensity. When it comes to the automatic classification of disease, segmentation accuracy is crucial, as any inaccuracy in the segmentation stage leads to the misclassification of pathological signs [26].
In recent years, researchers have used a variety of approaches for segmenting the retinal vasculature. The methods can be divided into two groups, namely supervised and unsupervised methods. Unsupervised approaches include algorithms such as morphological processing, model-based methods, matched filtering and vessel tracking. However, unsupervised approaches are limited by their reliance on handcrafted features for the representation and segmentation of vessels. As a result, supervised approaches outperform unsupervised ones in terms of segmentation accuracy.
To filter unwanted noise and geometrical objects based on vessel structure, the researchers used a modified version of Otsu's approach [27]. Otsu's method is typically used to determine a threshold for distinguishing vessel and non-vessel pixels on a local or global scale across the entire image. Applying Otsu thresholding to the entire image at once produces poor results, which is why a modified version of Otsu's thresholding is applied; in this way, both thin and thick vessels become more visible. A single improved image is obtained, which is then subjected to further local thresholding. In local thresholding, vessel-based thresholding is employed to define a new threshold that depends on the vessel locality. To suppress noise more effectively for vessels in the vicinity of wide vessels, we added an offset to the global threshold [28].
We set a threshold lower than the global one for places away from wide vessels, by subtracting an offset from it, to recover small or thin vessels from the low-intensity background. Threshold values of sigma are set for enhancing thick and thin vessels in the retinal images. Global Otsu thresholding is applied to obtain the binary image. Then, the thick and thin vessels are fused to obtain the improved image [29], and morphological operations are performed for the effective segmentation of blood vessels. The proposed methodology for the segmentation of blood vessels is shown in Figure 4.

3.2.1. OTSU Thresholding

The ideal threshold is the one that produces the minimum possible segmentation error. Otsu is a technique that can be utilized to obtain optimal thresholding outcomes. Otsu thresholding provides several advantages over other procedures, including computational speed, good capabilities when combined with other methods for performance improvement and consistent performance. Based on the assumption that the image pixels form two classes (a bimodal histogram), Otsu thresholding determines the appropriate image threshold. The Otsu technique [30] searches exhaustively by minimizing the within-class variance. The image is segmented by pixel intensity into two parts, C0 and C1 (foreground and background), where C0 spans levels 0 to p and C1 spans p + 1 to 255. As mentioned in [31], let σ_wc², σ_B² and σ_TM² be the within-class, between-class and total variance, respectively. To achieve optimal thresholding, σ_wc² should be minimized. The pseudocode of OTSU thresholding is described as follows (Algorithm 5):
Algorithm 5: Algorithm of OTSU Thresholding
Input: Image I
1. For image I, take Ig = I(:, :, 2) with gray levels [P + 1, …, 255], such that P + 1 is the darkest level and 255 the lightest.
2. For each gray level i, let n_i be the group of pixels at that level and N = Σ n_i the total number of pixels.
3. Hist(Ig) = n_i / N, with Hist(Ig) ≥ 0 and Σ_i Hist(Ig) = 1.
4. Partition the levels into two classes, C0 = [P + 1, …, p*] and C1 = [p* + 1, …, 255].
5. Background weight: W_b = P(C0) = Σ_{i ∈ C0} Hist(Ig).
6. Foreground weight: W_f = P(C1) = Σ_{i ∈ C1} Hist(Ig) = 1 − W_b.
7. Calculate the mean of both C0 and C1:
8. µ_b = Σ_{i ∈ C0} i · Hist(Ig) / W_b,  µ_f = Σ_{i ∈ C1} i · Hist(Ig) / W_f.
9. Total mean: µ_TM = W_b · µ_b + W_f · µ_f.
10. Class variance: σ_b² = Σ_{i ∈ C0} (i − µ_b)² · Hist(Ig) / W_b.
11. σ_f² = Σ_{i ∈ C1} (i − µ_f)² · Hist(Ig) / W_f.
12. Within-class variance: σ_wc² = W_b · σ_b² + W_f · σ_f².
13. Between-class variance: σ_B² = W_b · (µ_b − µ_TM)² + W_f · (µ_f − µ_TM)² = W_b · W_f · (µ_b − µ_f)².
14. σ_TM² = σ_wc² + σ_B².
15. Select the threshold p* that maximizes σ_B² (equivalently, minimizes σ_wc²).
To exclude unconnected non-vessel pixels, we employed pixel/area-based thresholding. Noise causes certain small, isolated regions in the segmentation results, and these regions are sometimes mistakenly identified as vessels. Based on the connectivity of the retinal vessels, we removed disconnected regions of 30 pixels or fewer, which were judged to be non-vessels or part of the background noise. After that, mathematical morphology is applied for effective vascular segmentation.
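A minimal NumPy implementation of the exhaustive Otsu search in Algorithm 5, selecting the threshold that maximizes the between-class variance; the bimodal test image is illustrative.

```python
import numpy as np

def otsu_threshold(gray):
    """Exhaustively search the threshold that maximizes the between-class
    variance sigma_B^2 = W_b * W_f * (mu_b - mu_f)^2 (Algorithm 5)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(255):
        w_b = prob[:t + 1].sum()            # background weight W_b
        w_f = 1.0 - w_b                     # foreground weight W_f
        if w_b == 0 or w_f == 0:
            continue                        # one class is empty
        mu_b = (levels[:t + 1] * prob[:t + 1]).sum() / w_b
        mu_f = (levels[t + 1:] * prob[t + 1:]).sum() / w_f
        var_between = w_b * w_f * (mu_b - mu_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal image: dark background at level 10, bright "vessels" at level 200
img = np.full((10, 10), 10, dtype=np.uint8)
img[:, 5:] = 200
t = otsu_threshold(img)
binary = img > t
```

Maximizing σ_B² is equivalent to minimizing σ_wc², since their sum σ_TM² is fixed for a given image.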

3.2.2. Mathematical Morphology

Mathematical morphology borrows its name from the biological study of form and structure. To preserve the image's original geometry, morphological approaches are used to extract various components of the image, such as boundaries and regions. The image is subjected to several algebraic operations, such as erosion, dilation and top-hat. To understand which regions to segment, the morphological technique is used for feature extraction from the retinal images [32]. Four measures are used to determine the effectiveness of segmentation; the following formulas are used to calculate the sensitivity and specificity of the proposed model.
Sensitivity is defined as the proportion of accurately detected objects to the total number of objects (P), and is evaluated as:
Sen = TP/P
Specificity, given in Equation (6), is defined as the ratio of correctly detected non-objects to the total number of non-objects (N):
Spec = TN/N

3.3. Classification of Fundus Images

The classification of segmented images is performed using the ResNet model, a pre-trained network architecture. ResNet is one of the most extensively used pre-trained models in the CNN category. To avoid wasting processing capacity on the same activity, the notion of transfer learning (TL) is used. TL eliminates the need to develop a model from scratch because the network has already been trained with a set of features on larger datasets. TL reuses pre-trained networks that have already learned to perform a related task. To tackle the difficulty of training deep neural networks, ResNet is built on residual blocks with shortcut connections that bypass layers. Common variants have 18, 34, 50, 101 or 152 layers. Conv1, Conv2x, Conv3x, Conv4x, Conv5x and a fully connected (FC) layer are the six modules that make up ResNet-101 [33]. Conv1 is the network’s first convolutional layer (CL). The FC layer is responsible for learning and adjusting weights for better training. The ResNet-101 model is used for feature extraction and classification of DR and HR (Figure 5).
The proposed network architecture is shown alongside the architecture of ResNet. More than 10 million similar images were used to train the pre-trained model. The network was trained on the dataset, with the previous model’s FC layer replaced by the newly proposed connected layer. The network is trained and tested in an 80:20 ratio. To ensure unbiased sampling, the model draws random retinal images from the dataset.
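The 80:20 random split described above can be sketched as follows (illustrative; the function name and the fixed seed are ours, the seed added only for reproducibility):

```python
import random

def split_80_20(image_paths, seed=42):
    """Randomly shuffle the dataset and split it 80:20 into train/test.

    image_paths: list of image identifiers. Shuffling before splitting
    gives the unbiased random sampling the text describes.
    """
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    cut = int(0.8 * len(paths))
    return paths[:cut], paths[cut:]
```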
Recent research has found that network depth improves classification accuracy [34]. However, as the network grows deeper, accuracy saturates and performance begins to degrade rapidly [35]. This is the problem that the ResNet framework solves. Shortcut connections are introduced into the deep network every three convolutional layers. These shortcut connections perform identity mapping without adding parameters or computational complexity, making network optimization easier during training. As a result, ResNet allows deeper networks to attain higher accuracy than shallower networks in image classification tasks [36].
MATLAB is used to implement the proposed model for retinal image classification. The fundus images are taken from publicly accessible datasets of around 6000 images. The ResNet-101 transfer learning model, currently one of the best pre-trained models for classifying various types of medical images, was used to train and test the proposed model. To extract features from images, the CL, pool5 and FC layers are employed, and the FC layer is enhanced in this study by adjusting the weights according to the dataset for improved training. Each image yields a total of 2048 unique features.

3.4. Evaluation Metrics

Several metrics are commonly used to assess the accuracy and robustness of classification models; they are summarized in Table 2. The accuracy of a model is defined as the proportion of correct predictions made by the model out of the total number of predictions. This can be mathematically expressed as:
Accuracy = (TP + TN)/(TP + FP + FN + TN),
where TP, TN, FP and FN represent the number of true positive, true negative, false positive and false negative predictions, respectively.
Another commonly used metric is precision, which is defined as the ratio of true positive predictions to the total number of predicted positive observations. This can be expressed as:
Precision = TP/(TP + FP)
The recall metric, on the other hand, is defined as the ratio of true positive predictions to all observations in the actual class, which can be expressed as:
Recall = TP/(TP + FN)
The F1 score, which is the harmonic mean of precision and recall, is a widely used metric for evaluating the performance of artificial intelligence models. It can be calculated as:
F1 score = 2 ∗ (Recall ∗ Precision)/(Recall + Precision)
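The four formulas above can be collected into one helper (an illustrative sketch; the function name is ours):

```python
def classification_metrics(tp, tn, fp, fn):
    """Compute accuracy, precision, recall and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1
```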
Finally, the confusion matrix is a tabular summary of prediction outcomes (TP, FP, FN and TN) from which the above metrics are derived, and it provides an intuitive view of the performance of a predictive model.

4. Results and Discussion

Diabetic retinal images are difficult to screen and identify in the early stages, and the patient’s eyesight may deteriorate as the condition advances. A retinal model with three phases is presented in this manuscript. The retinal images are first enhanced to reduce image noise. Then, segmentation of blood vessels is performed. The proposed model, based on the ResNet-101 architecture, is then trained on the segmented images. The pool5 layer of the network extracts 2048 features from each image, and the fully connected layer is trained on the dataset; its weights adjust dynamically in response to the output. The proposed approach correctly classified retinal images as normal, mild diabetic, moderate diabetic, severe diabetic and hypertensive with 98.72 percent accuracy. The proposed model can be applied to other medical datasets in the future.
Evaluation of pre-processing is performed on MESSIDOR and ODIR (ocular disease intelligent recognition) datasets based on the PSNR ratio. The output of image enhancement on retinal images is shown in Figure 6. The comparison of the proposed pre-processing technique with the existing techniques is described in Figure 7.
The enhanced image’s PSNR value after applying adaptive CLAHE is about 35, a significant improvement over prior methods. In addition, the DNCNN network is applied to enhance the retinal images and reduce noise; DNCNN is a valuable pre-trained model for image denoising. The PSNR of the denoised images improved significantly: after applying DNCNN, the PSNR obtained is around 60, which is higher than most existing approaches. Figure 7 depicts the performance of the proposed enhancement methods against existing methods.
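As a reference for how these values are computed, the standard PSNR definition can be sketched as follows (a pure-Python illustration, not the authors' code):

```python
import math

def psnr(original, enhanced, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size images.

    original/enhanced: 2D lists of pixel intensities; a higher PSNR means
    the enhanced image deviates less from the reference.
    """
    h, w = len(original), len(original[0])
    # mean squared error over all pixels
    mse = sum((original[y][x] - enhanced[y][x]) ** 2
              for y in range(h) for x in range(w)) / (h * w)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)
```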
Figure 8 illustrates the segmented images of blood vasculature. Samples correctly predicted as positive are referred to as true positives (TP), and negative samples correctly predicted as negative are true negatives (TN). The misclassified samples are represented by the false negatives (FN) and false positives (FP).
Image enhancement is carried out to reduce noise. Noise such as salt-and-pepper and Gaussian noise is frequently present in the original images, degrading image quality and causing poor segmentation. To assure segmentation quality, pre-processing is performed using A-CLAHE, DNCNN and Wiener filters. DNCNN is a model pre-trained to remove noise in medical images. In the experiment, the PSNR value for the image processed with DNCNN was the highest. Figure 9 depicts the analysis of blood vessel segmentation compared with existing methods based on sensitivity and specificity.
The network is trained using the dataset, and the previous model’s FC layer is replaced with the newly proposed connected layer. The network is trained and tested in an 80:20 ratio. To ensure unbiased sampling, the model draws random retinal images from the dataset. The confusion matrices for the MESSIDOR and ODIR datasets are depicted in Figure 10. In comparison to earlier methods, the proposed model achieves 98.72 percent accuracy, as illustrated in Figure 11 and Figure 12. Figure 13 shows how the proposed method improves upon existing state-of-the-art models in an accuracy comparison analysis.

5. Conclusions and Future Scope

The interest in automatic screening and diagnosis of DR is motivated by the very large diabetic and hypertensive population and the prevalence of retinopathy cases among them. Early screening and diagnosis are crucial in decreasing the risk of vision loss. This paper emphasizes effective DR and HR grading schemes. Different methods have been developed to provide ophthalmologists with a mass screening solution, tracking the disease’s course and minimizing the risk of vision loss. In this paper, the proposed pseudocode and algorithms have been implemented and evaluated on the MESSIDOR and ODIR datasets, and the performance is analyzed both visually and graphically. For validation of the proposed algorithms, PSNR is used for image enhancement, sensitivity and specificity for image segmentation, and accuracy for the classification of DR and HR, which is performed using transfer learning. The proposed approach achieves an accuracy of 98.72%. Future work will focus on diagnosing various retinal diseases such as glaucoma, AMD and RoP; moreover, multiscale classification of HR can be developed to assist ophthalmologists in the correct diagnosis of HR.

Author Contributions

Conceptualization, methodology, writing—original draft, results analysis, D.N.; data collection, data analysis, writing—review and editing, results analysis, N.A.; methodology, writing—review and editing, design and presentation, references, B.O.S. and M.S.A.; methodology, writing—review and editing, M.A.; methodology, writing—review and editing, H.M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was financially supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R321), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University (KKU) for funding this research through the Research Group Program Under the Grant Number: (R.G.P.2/451/44).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets used during the current study are available from the corresponding author upon reasonable request.

Acknowledgments

This research was financially supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R321), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University (KKU) for funding this research through the Research Group Program Under the Grant Number:(R.G.P.2/451/44).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kaur, M.; Kamra, A. Detection of retinal abnormalities in fundus image using transfer learning networks. Soft Comput. 2023, 27, 3411–3425. [Google Scholar] [CrossRef]
  2. Alyoubi, W.L.; Shalash, W.M.; Abulkhair, M.F. Diabetic retinopathy detection through deep learning techniques: A review. Inform. Med. Unlocked 2020, 20, 100377. [Google Scholar] [CrossRef]
  3. Deng, L.; Yu, D. Deep learning: Methods and applications. Found. Trends Signal Process. 2013, 7, 197–387. [Google Scholar] [CrossRef] [Green Version]
  4. Vega, R.; Sanchez-Ante, G.; Falcon-Morales, L.E.; Sossa, H.; Guevara, E. Retinal vessel extraction using Lattice Neural Networks with dendritic processing. Comput. Biol. Med. 2015, 58, 20–30. [Google Scholar] [CrossRef] [PubMed]
  5. Kumar, K.; Samal, D.; Suraj. Automated Retinal Vessel Segmentation Based on Morphological Preprocessing and 2D-Gabor Wavelets. Adv. Intell. Syst. Comput. 2020, 1082, 411–423. [Google Scholar] [CrossRef] [Green Version]
  6. Sikder, N.; Masud, M.; Bairagi, A.K.; Arif, A.S.M.; Al Nahid, A.; Alhumyani, H.A. Severity classification of diabetic retinopathy using an ensemble learning algorithm through analyzing retinal images. Symmetry 2021, 13, 670. [Google Scholar] [CrossRef]
  7. Bakator, M.; Radosav, D. Deep learning and medical diagnosis: A review of literature. Multimodal Technol. Interact. 2018, 2, 47. [Google Scholar] [CrossRef] [Green Version]
  8. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; Van Der Laak, J.A.; Van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88. [Google Scholar] [CrossRef] [Green Version]
  9. Mansour, R.F. Deep-learning-based automatic computer-aided diagnosis system for diabetic retinopathy. Biomed. Eng. Lett. 2018, 8, 41–57. [Google Scholar] [CrossRef]
  10. Yaqoob, M.K.; Ali, S.F.; Bilal, M.; Hanif, M.S.; Al-Saggaf, U.M. Resnet based deep features and random forest classifier for diabetic retinopathy detection. Sensors 2021, 21, 3883. [Google Scholar] [CrossRef]
  11. Das, S.; Kharbanda, K.; Suchetha, M.; Raman, R.; Dhas, E. Deep learning architecture based on segmented fundus image features for classification of diabetic retinopathy. Biomed. Signal Process. Control 2021, 68, 102600. [Google Scholar] [CrossRef]
  12. Alyoubi, W.L.; Abulkhair, M.F.; Shalash, W.M. Diabetic retinopathy fundus image classification and lesions localization system using deep learning. Sensors 2021, 21, 3704. [Google Scholar] [CrossRef] [PubMed]
  13. Abbas, Q.; Ibrahim, M.E.A. DenseHyper: An automatic recognition system for detection of hypertensive retinopathy using dense features transform and deep-residual learning. Multimed. Tools Appl. 2020, 79, 31595–31623. [Google Scholar] [CrossRef]
  14. Raja Kumar, R.; Pandian, R.; Prem Jacob, T.; Pravin, A.; Indumathi, P. Detection of Diabetic Retinopathy Using Deep Convolutional Neural Networks. In Computational Vision and Bio-Inspired Computing, Proceedings of the International Conference On Computational Vision and Bio Inspired Computing, Coimbatore, India, 25–26 September 2019; Springer: Singapore, 2021; Volume 1318, pp. 415–430. [Google Scholar] [CrossRef]
  15. Majumder, S.; Kehtarnavaz, N. Multitasking Deep Learning Model for Detection of Five Stages of Diabetic Retinopathy. IEEE Access 2021, 9, 123220–123230. [Google Scholar] [CrossRef]
  16. Hsieh, Y.T.; Chuang, L.M.; Jiang, Y.D.; Chang, T.J.; Yang, C.M.; Yang, C.H.; Chan, L.W.; Kao, T.Y.; Chen, T.C.; Lin, H.C.; et al. Application of deep learning image assessment software VeriSeeTM for diabetic retinopathy screening. J. Formos. Med. Assoc. 2021, 120, 165–171. [Google Scholar] [CrossRef] [PubMed]
  17. Decenciere, E.; Zhang, X.; Cazuguel, G.; Lay, B.; Cochener, B.; Trone, C.; Gain, P.; Ordonez, R.; Massin, P.; Erginay, A.; et al. Feedback on a publicly distributed image database: The messidor database. Image Anal. Stereol. 2014, 33, 231–234. [Google Scholar] [CrossRef] [Green Version]
  18. Larxel. Ocular Disease Recognition. Available online: https://odir2019.grand-challenge.org/ (accessed on 1 April 2022).
  19. BahadarKhan, K.; Khaliq, A.A.; Shahid, M. A morphological hessian based approach for retinal blood vessels segmentation and denoising using region based otsu thresholding. PLoS ONE 2016, 11, e0158996. [Google Scholar] [CrossRef] [Green Version]
  20. Ansari, M.A.; Mahraj, S.K. A Robust Method for Identification of Paper Currency Using Otsu’s Thresholding. In Proceedings of the 2018 International Conference on Smart Computing and Electronic Enterprise (ICSCEE), Shah Alam, Malaysia, 11–12 July 2018. [Google Scholar] [CrossRef]
  21. Amin, J.; Sharif, M.; Rehman, A.; Raza, M.; Mufti, M.R. Diabetic retinopathy detection and classification using hybrid feature set. Microsc. Res. Tech. 2018, 81, 990–996. [Google Scholar] [CrossRef]
  22. Subramani, B.; Veluchamy, M. Fuzzy contextual inference system for medical image enhancement. Meas. J. Int. Meas. Confed. 2019, 148, 106967. [Google Scholar] [CrossRef]
  23. Sonali; Sahu, S.; Singh, A.K.; Ghrera, S.P.; Elhoseny, M. An approach for de-noising and contrast enhancement of retinal fundus image using CLAHE. Opt. Laser Technol. 2019, 110, 87–98. [Google Scholar] [CrossRef]
  24. Pal, M.N.; Banerjee, M. Evaluation of Effectiveness of Image Enhancement Techniques with Application to Retinal Fundus images. In Proceedings of the 2020 4th International Conference on Computational Intelligence and Networks (CINE), Kolkata, India, 27–29 February 2020. [Google Scholar] [CrossRef]
  25. Palanisamy, G.; Ponnusamy, P.; Gopi, V.P. An improved luminosity and contrast enhancement framework for feature preservation in color fundus images. Signal Image Video Process. 2019, 13, 719–726. [Google Scholar] [CrossRef]
  26. Kumar, Y.; Gupta, B. Retinal image blood vessel classification using hybrid deep learning in cataract diseased fundus images. Biomed. Signal Process. Control 2023, 84, 104776. [Google Scholar] [CrossRef]
  27. Wiharto; Palgunadi, Y.S. Blood Vessels segmentation in retinal fundus image using hybrid method of Frangi Filter, Otsu thresholding and morphology. Int. J. Adv. Comput. Sci. Appl. 2019, 10, 417–422. [Google Scholar] [CrossRef] [Green Version]
  28. Khan, K.B.; Khaliq, A.A.; Jalil, A.; Shahid, M. A robust technique based on VLM and Frangi filter for retinal vessel extraction and denoising. PLoS ONE 2018, 13, e0192203. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  29. Tamim, N.; Elshrkawey, M.; Azim, G.A.; Nassar, H. Retinal blood vessel segmentation using hybrid features and multi-layer perceptron neural networks. Symmetry 2020, 12, 894. [Google Scholar] [CrossRef]
  30. Uysal, E.; Güraksin, G.E. Computer-aided retinal vessel segmentation in retinal images: Convolutional neural networks. Multimed. Tools Appl. 2020, 80, 3505–3528. [Google Scholar] [CrossRef]
  31. Liu, H.; Teng, L.; Fan, L.; Sun, Y.; Li, H. A new ultra-wide-field fundus dataset to diabetic retinopathy grading using hybrid preprocessing methods. Comput. Biol. Med. 2023, 157, 106750. [Google Scholar] [CrossRef]
  32. Raja Sankari, V.M.; Snekhalatha, U.; Chandrasekaran, A.; Baskaran, P. Automated diagnosis of retinopathy of prematurity from retinal images of preterm infants using hybrid deep learning techniques. Biomed. Signal Process. Control 2023, 85, 104883. [Google Scholar] [CrossRef]
  33. Zulaikha Beevi, S. Multi-Level severity classification for diabetic retinopathy based on hybrid optimization enabled deep learning. Biomed. Signal Process. Control 2023, 84, 104736. [Google Scholar] [CrossRef]
  34. Saranya, P.; Pranati, R.; Patro, S.S. Detection and classification of red lesions from retinal images for diabetic retinopathy detection using deep learning models. Multimed Tools Appl. 2023. [Google Scholar] [CrossRef]
  35. Fayyaz, A.M.; Sharif, M.I.; Azam, S.; Karim, A.; El-Den, J. Analysis of Diabetic Retinopathy (DR) Based on the Deep Learning. Information 2023, 14, 30. [Google Scholar] [CrossRef]
  36. Chavan, S.; Choubey, N. An automated diabetic retinopathy of severity grade classification using transfer learning and fine-tuning for fundus images. Multimed Tools Appl. 2023. [Google Scholar] [CrossRef]
Figure 1. Pre-processing methodology.
Figure 2. Architecture of DNCNN.
Figure 3. Proposed DNCNN workflow for image enhancement.
Figure 4. Proposed methodology for blood vessel segmentation.
Figure 5. The proposed ResNet-101 architecture.
Figure 6. Output of image enhancement in retinal images.
Figure 7. Graph depicting the performance of pre-processing with other methods.
Figure 8. Segmented images (original, enhanced, segmented image).
Figure 9. Blood vessel segmentation analysis.
Figure 10. Confusion matrix of MESSIDOR and ODIR datasets.
Figure 11. Accuracy of MESSIDOR and ODIR datasets.
Figure 12. Loss of MESSIDOR and ODIR datasets.
Figure 13. Proposed model compared to existing approaches.
Table 1. Related work for the classification of DR and HR.

Author/Year | Dataset | Technique | Classes | Performance
[10]/2021 | MESSIDOR, EYEPACS | ResNet-50, Random Forest | MESSIDOR: 2; EYEPACS: 5 | 96% for MESSIDOR and 75.09% for EYEPACS
[11]/2021 | DIARETDB1 | Maximum Principal Curvature, Adaptive Histogram Equalization, CNN | 2 | acc/prec of 98.7/97.2
[12]/2021 | DDR, APTOS | CNN512, YOLOv3 | 5 | acc/sen/spec of 89/89/97.3
[13]/2020 | DRHAGIS, DRIVE, DIARETDB, DR1, DR2 | DenseHyper | 2 | sen/spec/acc/AUC of 93/95/95/0.96
[14]/2021 | Kaggle | CNN | 5 | acc of 94.44
[15]/2021 | APTOS, EYEPACS | Modified DenseNet, Xception | 5 | acc of 85 for APTOS and 82 for EYEPACS
[16]/2021 | EYEPACS | ResNet, Modified Inception v4 | 5 | sen/spec/acc of 92.2/89.5/90.7
Table 2. Several metrics.

Prediction \ Actual | Positive: 0 | Negative: 1
Positive: 0 | True Positive | False Positive
Negative: 1 | False Negative | True Negative
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Nagpal, D.; Alsubaie, N.; Soufiene, B.O.; Alqahtani, M.S.; Abbas, M.; Almohiy, H.M. Automatic Detection of Diabetic Hypertensive Retinopathy in Fundus Images Using Transfer Learning. Appl. Sci. 2023, 13, 4695. https://doi.org/10.3390/app13084695


