CAD-EYE: An Automated System for Multi-Eye Disease Classification Using Feature Fusion with Deep Learning Models and Fluorescence Imaging for Enhanced Interpretability
Abstract
1. Introduction
1.1. Research Motivation
- Even with sophisticated image processing technologies, accurately delineating the features that distinguish normal cases, diabetic retinopathy, hypertensive retinopathy, glaucoma, and contrast remains challenging, because the features associated with these eye diseases are difficult to locate and extract precisely.
- No publicly available dataset combines images of multiple eye diseases such as hypertensive retinopathy, diabetic retinopathy, glaucoma, and contrast. This makes it hard to build an automated system that classifies different eye diseases.
1.2. Research Contribution
- In this work, the researchers compiled a substantial dataset of 10,000 images sourced from reputable online platforms and supplemented with private datasets from previous studies. This large dataset was essential in enabling the model to achieve high classification accuracy.
- This work is the first to introduce a Fluorescence Imaging Simulation algorithm into multi-eye-disease classification research. Together with the proposed feature fusion model, this image processing algorithm allowed the system to achieve higher accuracy than previously reported systems while classifying four different eye diseases.
- In this work, a feature fusion technique combines the features of two DL models to build the CAD-EYE system, yielding a multi-layered model that successfully solves the classification problem.
- Additional custom layers, including dense layers placed after the feature fusion step to refine the features extracted from both models, are added to the CAD-EYE design so that the model can classify several eye-related diseases. Convolutional neural network (CNN) models (MobileNet, EfficientNet) extract the features associated with eye disorders, and these features are then combined through the feature fusion approach.
- The authors claim that this is the first automated system to surpass current approaches in identifying five different eye classes (normal, diabetic retinopathy, hypertensive retinopathy, glaucoma, and contrast-related eye disorders), as illustrated in Figure 1.
- The proposed system outperformed the approaches reported in the available research, achieving an accuracy of 98%.
1.3. Paper Organization
2. Literature Survey
3. Proposed Approach
3.1. Data Collection and Preprocessing
3.2. Data Augmentation
4. Proposed Architecture
5. Recognition of Eye Diseases
6. XGBoost Classifier
7. Results
7.1. Experiment 1
7.2. Experiment 2
7.3. Experiment 3
8. State-of-the-Art Comparison
9. Discussion
10. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- World Health Organization. Blindness and Vision Impairment. 2018. Available online: https://www.who.int/news-room/fact-sheets/detail/blindness-and-visual-impairment (accessed on 22 September 2022).
- Sabanayagam, C.; Banu, R.; Chee, M.L.; Lee, R.; Wang, Y.X.; Tan, G.; Jonas, J.B.; Lamoureux, E.L.; Cheng, C.Y.; Klein, B.E.; et al. Incidence and progression of diabetic retinopathy: A systematic review. Lancet Diabetes Endocrinol. 2019, 7, 140–149. [Google Scholar] [CrossRef] [PubMed]
- Orujov, F.; Maskeliūnas, R.; Damaševičius, R.; Wei, W. Fuzzy based image edge detection algorithm for blood vessel detection in retinal images. Appl. Soft Comput. 2020, 94, 106452. [Google Scholar] [CrossRef]
- Cho, N.H.; Shaw, J.E.; Karuranga, S.; Huang, Y.; da Rocha Fernandes, J.D.; Ohlrogge, A.; Malanda, B. IDF Diabetes Atlas: Global estimates of diabetes prevalence for 2017 and projections for 2045. Diabetes Res. Clin. Pract. 2018, 138, 271–281. [Google Scholar] [CrossRef] [PubMed]
- Jain, A.; Jalui, A.; Jasani, J.; Lahoti, Y.; Karani, R. Deep learning for detection and severity classification of diabetic retinopathy. In Proceedings of the 2019 1st International Conference on Innovations in Information and Communication Technology (ICIICT), Chennai, India, 25–26 April 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–6. [Google Scholar]
- Walker, H.K.; Hall, W.D.; Hurst, J.W. Clinical Methods: The History, Physical, and Laboratory Examinations; Butterworths: Boston, MA, USA, 1990. [Google Scholar]
- Mishra, C.; Tripathy, K. Fundus Camera. 2022 [Updated 25 August 2023]. In StatPearls; StatPearls Publishing: Treasure Island, FL, USA, 2024. [Google Scholar]
- Decencière, E.; Zhang, X.; Cazuguel, G.; Lay, B.; Cochener, B.; Trone, C.; Gain, P.; Ordonez, R.; Massin, P.; Erginay, A.; et al. Feedback on a publicly distributed image database: The Messidor database. Image Anal. Stereol. 2014, 33, 231–234. [Google Scholar] [CrossRef]
- Naveed, K.; Abdullah, F.; Madni, H.A.; Khan, M.A.; Khan, T.M.; Naqvi, S.S. Towards automated eye diagnosis: An improved retinal vessel segmentation framework using ensemble block matching 3D filter. Diagnostics 2021, 11, 114. [Google Scholar] [CrossRef]
- Gulshan, V.; Peng, L.; Coram, M.; Stumpe, M.C.; Wu, D.; Narayanaswamy, A.; Venugopalan, S.; Widner, K.; Madams, T.; Cuadros, J.; et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 2016, 316, 2402–2410. [Google Scholar] [CrossRef]
- Decenciere, E.; Cazuguel, G.; Zhang, X.; Thibault, G.; Klein, J.C.; Meyer, F.; Marcotegui, B.; Quellec, G.; Lamard, M.; Danno, R.; et al. TeleOphta: Machine learning and image processing methods for teleophthalmology. Irbm 2013, 34, 196–203. [Google Scholar] [CrossRef]
- Wong, T.; Cheung, C.; Larsen, M.; Sharma, S.; Simo, R. Diabetic retinopathy. Nat. Rev. Dis. Primers 2016, 2, 16012. [Google Scholar] [CrossRef]
- Zhang, X.; Saaddine, J.B.; Chou, C.F.; Cotch, M.F.; Cheng, Y.J.; Geiss, L.S.; Gregg, E.W.; Albright, A.L.; Klein, B.E.; Klein, R. Prevalence of diabetic retinopathy in the United States, 2005–2008. JAMA 2010, 304, 649–656. [Google Scholar] [CrossRef]
- Teo, Z.L.; Tham, Y.C.; Yu, M.; Chee, M.L.; Rim, T.H.; Cheung, N.; Bikbov, M.M.; Wang, Y.X.; Tang, Y.; Lu, Y.; et al. Global prevalence of diabetic retinopathy and projection of burden through 2045: Systematic review and meta-analysis. Ophthalmology 2021, 128, 1580–1591. [Google Scholar] [CrossRef]
- Raman, R.; Srinivasan, S.; Virmani, S.; Sivaprasad, S.; Rao, C.; Rajalakshmi, R. Fundus photograph-based deep learning algorithms in detecting diabetic retinopathy. Eye 2019, 33, 97–109. [Google Scholar] [CrossRef] [PubMed]
- Chakrabarti, R.; Harper, C.A.; Keeffe, J.E. Diabetic retinopathy management guidelines. Expert Rev. Ophthalmol. 2012, 7, 417–439. [Google Scholar] [CrossRef]
- Topaloglu, I. Deep learning based convolutional neural network structured new image classification approach for eye disease identification. Sci. Iran. 2023, 30, 1731–1742. [Google Scholar] [CrossRef]
- Choudhary, A.; Ahlawat, S.; Urooj, S.; Pathak, N.; Lay-Ekuakille, A.; Sharma, N. A deep learning-based framework for retinal disease classification. Healthcare 2023, 11, 212. [Google Scholar] [CrossRef]
- Thomas, S.; Hodge, W.; Malvankar-Mehta, M. The cost-effectiveness analysis of teleglaucoma screening device. PLoS ONE 2015, 10, e0137913. [Google Scholar] [CrossRef]
- Harasymowycz, P.; Birt, C.; Gooi, P.; Heckler, L.; Hutnik, C.; Jinapriya, D.; Shuba, L.; Yan, D.; Day, R. Medical management of glaucoma in the 21st century from a Canadian perspective. J. Ophthalmol. 2016, 2016, 6509809. [Google Scholar] [CrossRef]
- Weinreb, R.N.; Leung, C.K.; Crowston, J.G.; Medeiros, F.A.; Friedman, D.S.; Wiggs, J.L.; Martin, K.R. Primary open-angle glaucoma. Nat. Rev. Dis. Primers 2016, 2, 16067. [Google Scholar] [CrossRef]
- Tham, Y.C.; Li, X.; Wong, T.Y.; Quigley, H.A.; Aung, T.; Cheng, C.Y. Global prevalence of glaucoma and projections of glaucoma burden through 2040: A systematic review and meta-analysis. Ophthalmology 2014, 121, 2081–2090. [Google Scholar] [CrossRef]
- Allison, K.; Patel, D.; Alabi, O. Epidemiology of glaucoma: The past, present, and predictions for the future. Cureus 2020, 12, e11686. [Google Scholar] [CrossRef]
- Akter, N.; Fletcher, J.; Perry, S.; Simunovic, M.P.; Briggs, N.; Roy, M. Glaucoma diagnosis using multi-feature analysis and a deep learning technique. Sci. Rep. 2022, 12, 8064. [Google Scholar] [CrossRef]
- Greenfield, D.S.; Weinreb, R.N. Role of optic nerve imaging in glaucoma clinical practice and clinical trials. Am. J. Ophthalmol. 2008, 145, 598–603. [Google Scholar] [CrossRef]
- Michelessi, M.; Lucenteforte, E.; Oddone, F.; Brazzelli, M.; Parravano, M.; Franchi, S.; Ng, S.M.; Virgili, G. Optic nerve head and fibre layer imaging for diagnosing glaucoma. Cochrane Database Syst. Rev. 2015, 11, CD008803. [Google Scholar]
- Antón López, A.; Nolivos, K.; Pazos López, M.; Fatti, G.; Ayala, M.E.; Martínez-Prats, E.; Peral, O.; Poposki, V.; Tsiroukis, E.; Morilla-Grasa, A.; et al. Diagnostic accuracy and detection rate of glaucoma screening with optic disk photos, optical coherence tomography images, and telemedicine. J. Clin. Med. 2021, 11, 216. [Google Scholar] [CrossRef] [PubMed]
- Kanse, S.S.; Yadav, D.M. Retinal fundus image for glaucoma detection: A review and study. J. Intell. Syst. 2019, 28, 43–56. [Google Scholar] [CrossRef]
- Shinde, R. Glaucoma detection in retinal fundus images using U-Net and supervised machine learning algorithms. Intell.-Based Med. 2021, 5, 100038. [Google Scholar] [CrossRef]
- Yalçin, N.; Alver, S.; Uluhatun, N. Classification of retinal images with deep learning for early detection of diabetic retinopathy disease. In Proceedings of the 2018 26th Signal Processing and Communications Applications Conference (SIU), Izmir, Turkey, 2–5 May 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–4. [Google Scholar]
- Chakrabarty, N. A deep learning method for the detection of diabetic retinopathy. In Proceedings of the 2018 5th IEEE Uttar Pradesh Section International Conference on Electrical, Electronics and Computer Engineering (UPCON), Gorakhpur, India, 2–4 November 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–5. [Google Scholar]
- Ting, D.S.W.; Pasquale, L.R.; Peng, L.; Campbell, J.P.; Lee, A.Y.; Raman, R.; Tan, G.S.W.; Schmetterer, L.; Keane, P.A.; Wong, T.Y. Artificial intelligence and deep learning in ophthalmology. Br. J. Ophthalmol. 2019, 103, 167–175. [Google Scholar] [CrossRef]
- Ramakrishnan, P.; Sivagurunathan, P.; Sathish Kumar, N. Fruit classification based on convolutional neural network. Int. J. Control Autom. 2019. [Google Scholar]
- Feng, X.; Yang, J.; Laine, A.F.; Angelini, E.D. Discriminative localization in CNNs for weakly-supervised segmentation of pulmonary nodules. In Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI 2017: 20th International Conference, Quebec City, QC, Canada, 11–13 September 2017, Proceedings, Part III 20; Springer: Berlin/Heidelberg, Germany, 2017; pp. 568–576. [Google Scholar]
- Fu, H.; Cheng, J.; Xu, Y.; Zhang, C.; Wong, D.W.K.; Liu, J.; Cao, X. Disc-aware ensemble network for glaucoma screening from fundus image. IEEE Trans. Med. Imaging 2018, 37, 2493–2501. [Google Scholar] [CrossRef]
- Fumero, F.; Alayón, S.; Sanchez, J.L.; Sigut, J.; Gonzalez-Hernandez, M. RIM-ONE: An open retinal image database for optic nerve evaluation. In Proceedings of the 2011 24th International Symposium on Computer-Based Medical Systems (CBMS), Bristol, UK, 27–30 June 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 1–6. [Google Scholar]
- Verma, K.; Deep, P.; Ramakrishnan, A. Detection and classification of diabetic retinopathy using retinal images. In Proceedings of the 2011 Annual IEEE India Conference, Hyderabad, India, 16–18 December 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 1–6. [Google Scholar]
- Gondal, W.M.; Köhler, J.M.; Grzeszick, R.; Fink, G.A.; Hirsch, M. Weakly-supervised localization of diabetic retinopathy lesions in retinal fundus images. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 2069–2073. [Google Scholar]
- Bock, R.; Meier, J.; Michelson, G.; Nyúl, L.G.; Hornegger, J. Classifying glaucoma with image-based features from fundus photographs. In Proceedings of the Pattern Recognition: 29th DAGM Symposium, Heidelberg, Germany, 12–14 September 2007; Proceedings 29; Springer: Berlin/Heidelberg, Germany, 2007; pp. 355–364. [Google Scholar]
- Malik, S.; Kanwal, N.; Asghar, M.N.; Sadiq, M.A.A.; Karamat, I.; Fleury, M. Data driven approach for eye disease classification with machine learning. Appl. Sci. 2019, 9, 2789. [Google Scholar] [CrossRef]
- Abbas, Q. Glaucoma-deep: Detection of glaucoma eye disease on retinal fundus images using deep learning. Int. J. Adv. Comput. Sci. Appl. 2017, 8, 41–45. [Google Scholar] [CrossRef]
- Jain, L.; Murthy, H.S.; Patel, C.; Bansal, D. Retinal eye disease detection using deep learning. In Proceedings of the 2018 Fourteenth International Conference on Information Processing (ICINPRO), Bengaluru, India, 21–23 December 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–6. [Google Scholar]
- Metin, B.; Karasulu, B. Derin Öğrenme Modellerini Kullanarak İnsan Retinasının Optik Koherans Tomografi Görüntülerinden Hastalık Tespiti [Disease Detection from Optical Coherence Tomography Images of the Human Retina Using Deep Learning Models]. Veri Bilimi 2022, 5, 9–19. [Google Scholar]
- Sarki, R.; Ahmed, K.; Wang, H.; Zhang, Y.; Wang, K. Convolutional neural network for multi-class classification of diabetic eye disease. EAI Endorsed Trans. Scalable Inf. Syst. 2021, 9, e5. [Google Scholar] [CrossRef]
- Umer, M.J.; Sharif, M.; Raza, M.; Kadry, S. A deep feature fusion and selection-based retinal eye disease detection from oct images. Expert Syst. 2023, 40, e13232. [Google Scholar] [CrossRef]
- Gargeya, R.; Leng, T. Automated identification of diabetic retinopathy using deep learning. Ophthalmology 2017, 124, 962–969. [Google Scholar] [CrossRef] [PubMed]
- Sajid, M.Z.; Hamid, M.F.; Youssef, A.; Yasmin, J.; Perumal, G.; Qureshi, I.; Naqi, S.M.; Abbas, Q. DR-NASNet: Automated System to Detect and Classify Diabetic Retinopathy Severity Using Improved Pretrained NASNet Model. Diagnostics 2023, 13, 2645. [Google Scholar] [CrossRef] [PubMed]
- Sajid, M.Z.; Qureshi, I.; Youssef, A.; Khan, N.A. FAS-Incept-HR: A fully automated system based on optimized inception model for hypertensive retinopathy classification. Multimed. Tools Appl. 2024, 83, 14281–14303. [Google Scholar] [CrossRef]
- Sajid, M.Z.; Qureshi, I.; Abbas, Q.; Albathan, M.; Shaheed, K.; Youssef, A.; Ferdous, S.; Hussain, A. Mobile-Hr: An ophthalmologic-based classification system for diagnosis of hypertensive retinopathy using optimized MobileNet architecture. Diagnostics 2023, 13, 1439. [Google Scholar] [CrossRef]
- Hockwin, O. Cataract classification. Doc. Ophthalmol. 1995, 88, 263–275. [Google Scholar] [CrossRef]
- Dong, Y.; Zhang, Q.; Qiao, Z.; Yang, J.J. Classification of cataract fundus image based on deep learning. In Proceedings of the 2017 IEEE International Conference on Imaging Systems and Techniques (IST), Beijing, China, 18–20 October 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–5. [Google Scholar]
- Nakayama, L.F.; Ribeiro, L.Z.; Dychiao, R.G.; Zamora, Y.F.; Regatieri, C.V.; Celi, L.A.; Silva, P.; Sobrin, L.; Belfort, R., Jr. Artificial intelligence in uveitis: A comprehensive review. Surv. Ophthalmol. 2023, 68, 669–677. [Google Scholar] [CrossRef]
- Kaggle. Eyepacs, Aptos, Messidor Diabetic Retinopathy. 2024. Available online: https://www.kaggle.com/datasets/ascanipek/eyepacs-aptos-messidor-diabetic-retinopathy (accessed on 15 August 2024).
- Eye Disease Dataset. 2019. Kaggle. Available online: https://www.kaggle.com/datasets/kondwani/eye-disease-dataset (accessed on 15 August 2024).
- Dataset for Different Eye Disease. 2022. Kaggle. Available online: https://www.kaggle.com/datasets/dhirajmwagh1111/dataset-for-different-eye-disease (accessed on 15 August 2024).
- Eye Diseases Classification. 2022. Kaggle. Available online: https://www.kaggle.com/datasets/gunavenkatdoddi/eye-diseases-classification (accessed on 15 August 2024).
- Papers with Code—DIARETDB1 Dataset, n.d. Available online: https://paperswithcode.com/dataset/diaretdb1 (accessed on 15 August 2024).
- Wahab Sait, A.R. Artificial Intelligence-Driven Eye Disease Classification Model. Appl. Sci. 2023, 13, 11437. [Google Scholar] [CrossRef]
Reference | Methodology | Datasets | Models |
---|---|---|---|
[29] | Classification using a convolutional neural network | MESSIDOR, DIARETDB, STARE | CNN |
[30] | Image preprocessing followed by a convolutional neural network to predict whether the patient is diabetic | High-Resolution Fundus (HRF) | CNN |
[31] | Gray-level co-occurrence matrices (GLCMs) with classifiers such as Random Forest, Support Vector Machine (SVM), gradient boosting, AdaBoost, Naïve Bayes, and Gaussian classifiers | MESSIDOR | GLCM |
[35] | Disc-aware ensemble network | SCES, SINDI | CNN |
[37] | Random Forest classification using features such as the perimeter and area of the blood vessels and hemorrhages | STARE | Filters and Random Forest |
[43] | CNN architecture | DiaretDB1 | CNN |
[44] | Deep belief network (DBN) and convolutional neural network (CNN) | DRIONS-DB, sjchoi86-HRF | CNN |
[45] | Automated multiclass DED classification framework | MESSIDOR | CNN |
[46] | Retinal illness classification based on feature selection utilizing Modified-AlexNet and ResNet-50 networks | - | AlexNet, ResNet50 |
[47] | DR severity detection and classification method based on an enhanced pretrained NASNet model | APTOS-2019, PAK-DR (Private) | NASNet |
[48] | InceptionV3 model to detect and classify HR eye disease | PAK-HR (Private) | InceptionV3 |
[49] | A novel model built on the pretrained MobileNet architecture with added dense blocks to make the network more efficient | PAK-HR (Private), DRIVE, DiaRetDB0 | MobileNet |
Step | Explanation |
---|---|
Input Image Acquisition | Load the RGB image of the eye fundus from the dataset. Images are typically captured at high resolution in full color, for instance, 1125 × 1264 pixels. |
Channel Separation | Separate the RGB image into its three color channels: red, green, and blue. Each channel is treated as an individual grayscale image representing the intensity of that color. |
Green Channel Enhancement | Strengthen the green channel by scaling its intensity values with the formula G′ = α × G, where G is the original green channel and α is the scaling factor, often set to 2.0 to amplify the green intensity. |
Image Reconstruction | Reconstruct the RGB image by combining the original red and blue channels with the enhanced green channel: I′ = merge(R, G′, B), where R and B are the original red and blue channels. |
Blue Channel Suppression | Set the blue channel to B = 0. Removing the blue component makes the simulated green fluorescence more apparent. |
Colormap Application (Optional) | Apply a colormap, for example, ‘HOT’, to emphasize the fluorescence further; it shifts the image palette toward the warm end so that bright green areas stand out. |
Output Image | Save or display the final processed image. Its shifted color scheme accentuates the green fluorescence and therefore pinpoints the areas of interest. |
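The table maps onto a few lines of image processing code. Below is a minimal Python/OpenCV sketch of the simulation; the function name, the file handling, and the use of OpenCV’s COLORMAP_HOT as a stand-in for the ‘HOT’ colormap are illustrative assumptions rather than the authors’ released implementation.

```python
import cv2
import numpy as np

def simulate_fluorescence(image_path, alpha=2.0, apply_colormap=False):
    # Step 1: load the fundus image (OpenCV reads it in BGR channel order).
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)

    # Step 2: separate the color channels.
    b, g, r = cv2.split(img)

    # Step 3: enhance the green channel, G' = alpha * G, clipped to the 8-bit range.
    g_enhanced = np.clip(g.astype(np.float32) * alpha, 0, 255).astype(np.uint8)

    # Step 5: suppress the blue channel (B = 0) so the green fluorescence dominates.
    b_suppressed = np.zeros_like(b)

    # Step 4: reconstruct the image, logically I' = merge(R, G', B).
    out = cv2.merge([b_suppressed, g_enhanced, r])

    # Step 6 (optional): apply a warm colormap to emphasize the brightest regions.
    if apply_colormap:
        gray = cv2.cvtColor(out, cv2.COLOR_BGR2GRAY)
        out = cv2.applyColorMap(gray, cv2.COLORMAP_HOT)

    # Step 7: return the processed image for saving or display.
    return out

# Example usage (hypothetical file names):
# result = simulate_fluorescence("fundus.png", apply_colormap=True)
# cv2.imwrite("fundus_fluorescence.png", result)
```

Because OpenCV stores images in BGR order, the merge call lists the suppressed blue channel first; the operation is still the I′ = merge(R, G′, B) reconstruction from the table.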
Ref. | Datasets | Normal | Diabetic Retinopathy | Hypertensive Retinopathy | Glaucoma | Contrast | Total |
---|---|---|---|---|---|---|---|
[53] | Eyepacs, Aptos, Messidor | 23,125 | 23,125 | - | - | - | 46,250 |
[54] | Eye disease dataset | - | - | - | 50 | 100 | 150 |
[55] | Eye disease classification | 250 | 250 | - | 250 | 250 | 1000 |
[56] | Dataset for different eye diseases | 1637 | - | - | 1637 | 1637 | 4101 |
[57] | DiaRetDB1 | 100 | 100 | - | - | - | 200 |
Private | PAK-HR | 3000 | - | 3000 | - | - | 6000 |
Private | DR-Insight | 1000 | - | 4000 | - | - | 5000 |
Private | Imam-HR | 1130 | - | 2040 | - | - | 3170 |
Total | - | 30,242 | 23,475 | 9040 | 1937 | 1987 | 65,871 |
Step | Description |
---|---|
1 | Import the necessary Python libraries (albumentations, torchvision, and torch). |
2 | Write the get_autoaugment_transform() function, which sets up the AutoAugment policy and other augmentation settings. |
3 | In the main code: a. Load the dataset to be augmented. b. Once the dataset has been loaded, apply the previously defined AutoAugment transformation. |
4 | Create the DataLoader with torch.utils.data. During training, the DataLoader generates shuffled batches of the augmented dataset. |
5 | Define a basic CNN model class (SimpleCNN) that subclasses nn.Module. Configure the optimizer (e.g., SGD or Adam) and the loss function (e.g., CrossEntropyLoss) for training, and specify hyperparameters such as the batch size, the learning rate, and the number of training epochs. |
6 | Repeat the following steps to train the model for the given number of epochs: a. Put the model in training mode. b. Call the DataLoader to generate batches of augmented images along with their labels. c. Run the forward pass of the model. d. Compute the loss between the reference labels and the predicted outputs using the defined loss function. e. Compute the gradients of the model’s parameters with respect to the loss in a backward pass. f. Update the model’s parameters with the optimizer configured in Step 5, using the computed gradients. g. Optionally, evaluate the model on a separate validation dataset during training. |
7 | Finally, evaluate the trained model on a held-out test dataset. |
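These steps translate almost line for line into PyTorch. The sketch below uses torchvision’s built-in AutoAugment (the table also lists albumentations, which offers comparable transforms); the dataset path, image size, class count, and hyperparameter values are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from torchvision.transforms import AutoAugment, AutoAugmentPolicy

# Step 2: the AutoAugment policy plus basic resizing and tensor conversion.
def get_autoaugment_transform():
    return transforms.Compose([
        transforms.Resize((224, 224)),
        AutoAugment(policy=AutoAugmentPolicy.IMAGENET),
        transforms.ToTensor(),
    ])

# Step 3: load an image-folder dataset; the transform is applied on the fly.
dataset = datasets.ImageFolder("data/train", transform=get_autoaugment_transform())

# Step 4: the DataLoader yields shuffled batches of augmented images.
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Step 5: a minimal CNN with the optimizer and loss named in the table.
class SimpleCNN(nn.Module):
    def __init__(self, num_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SimpleCNN()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Step 6: train for a given number of epochs over the augmented batches.
for epoch in range(10):
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
# Step 7: evaluate on a held-out test set (omitted for brevity).
```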
Step | Operation | Explanation |
---|---|---|
1 | Load Pretrained Models | Extract features from images using EfficientNetB0 and MobileNetV2. |
2 | Freeze Layers | Freeze the pretrained weights so that both backbone models become non-trainable. |
3 | Global Average Pooling | Apply global average pooling to both the MobileNetV2 features and the EfficientNetB0 features. |
4 | Concatenation | Concatenate the pooled features. |
5 | Dense Layers | Xdense1 = Dense(128, activation = ‘relu’)(XConcat): apply a dense layer with ReLU activation. Xdense2 = Dense(numclasses, activation = ‘softmax’)(Xdense1): the final dense layer for classification. |
6 | Fusion Model Definition | Combine the features extracted from the MobileNetV2 and EfficientNetB0 models to define the trainable fusion model. |
7 | Compile the Model | Compile the model with the compile function, specifying the evaluation metric (accuracy) and the optimizer (Adam). |
8 | Train the Model | Train the model for 10 epochs with the fit function, passing in the training, validation, and test data arrays. |
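In Keras, the fusion model described in the table can be sketched as follows. The pooling, concatenation, Dense(128, ReLU), softmax head, Adam optimizer, accuracy metric, and 10 training epochs follow the table directly; the 224 × 224 × 3 input shape, ImageNet weights, and categorical cross-entropy loss are assumptions the table leaves open.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import MobileNetV2, EfficientNetB0

NUM_CLASSES = 5          # normal, DR, HR, glaucoma, contrast
INPUT_SHAPE = (224, 224, 3)

# Steps 1-2: load ImageNet-pretrained backbones and freeze their weights.
mobilenet = MobileNetV2(include_top=False, weights="imagenet", input_shape=INPUT_SHAPE)
efficientnet = EfficientNetB0(include_top=False, weights="imagenet", input_shape=INPUT_SHAPE)
mobilenet.trainable = False
efficientnet.trainable = False

# In practice each backbone's own preprocess_input should be applied to the
# raw pixels; it is omitted here to keep the sketch short.
inputs = layers.Input(shape=INPUT_SHAPE)

# Step 3: global average pooling over each backbone's feature maps.
feat_mobile = layers.GlobalAveragePooling2D()(mobilenet(inputs))
feat_efficient = layers.GlobalAveragePooling2D()(efficientnet(inputs))

# Step 4: concatenate the pooled feature vectors (the feature fusion step).
x_concat = layers.Concatenate()([feat_mobile, feat_efficient])

# Step 5: dense refinement layer and softmax classification head.
x_dense1 = layers.Dense(128, activation="relu")(x_concat)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x_dense1)

# Steps 6-7: define and compile the fusion model.
model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Step 8: model.fit(train_ds, validation_data=val_ds, epochs=10)
```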
Symbol | Description |
---|---|
B | mini-batch |
x | activation values in the mini-batch |
µB | mini-batch mean |
σ²B | mini-batch variance |
ϵ | constant added for numerical stability |
β | learnable parameter (shift) |
γ | learnable parameter (scale) |
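For completeness, these symbols are the notation of standard batch normalization (Ioffe and Szegedy), which the table does not spell out. With m denoting the number of activations in the mini-batch B, the operation is:

```latex
\mu_B = \frac{1}{m}\sum_{i=1}^{m} x_i, \qquad
\sigma_B^2 = \frac{1}{m}\sum_{i=1}^{m} (x_i - \mu_B)^2, \qquad
\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \qquad
y_i = \gamma \hat{x}_i + \beta
```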
Step | Description | Input | Output |
---|---|---|---|
1 | Initialize the XGBoost model | - | XGBoost model with hyperparameters (η, λ, τ) |
2 | Train the XGBoost model on normal and abnormal samples | Training data: X = (x1, a1), (x2, a2), …, (xm, am); labels: a ∈ {0, 1} | Trained XGBoost model |
3 | Use depth-wise Conv2D instead of Conv2D | Feature map x = a1, a2, a3, …, an | Modified feature map |
4 | Build the classifier based on XGBoost | Trained XGBoost model and ensemble of decision trees | - |
5 | Allocate class labels to the testing samples | Testing data: Xtest = xtest1, xtest2, xtest3, …, xtestk | Predicted class labels for Xtest |
6 | Output the identification of normal retinographic samples and disease classes | Predicted class labels | Recognition results for diabetic retinopathy, hypertensive retinopathy, glaucoma, and contrast samples |
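Steps 1, 2, 4, and 5 can be sketched with the xgboost Python package as below (Step 3, the depth-wise Conv2D substitution, belongs to the CNN feature extractor rather than the classifier itself). The feature file names and hyperparameter values are hypothetical, the mapping of η and λ to learning_rate and reg_lambda is an assumption, and τ is omitted because the table does not define it.

```python
import numpy as np
from xgboost import XGBClassifier

# Assumed inputs: X holds one row of fused CNN features per image, and the
# labels are integers (0 = normal, 1 = DR, 2 = HR, 3 = glaucoma, 4 = contrast).
X_train = np.load("features_train.npy")   # hypothetical file names
y_train = np.load("labels_train.npy")
X_test = np.load("features_test.npy")

# Steps 1-2: initialize the model with learning-rate (eta) and L2
# regularization (lambda) hyperparameters, then train on the labeled samples.
clf = XGBClassifier(
    n_estimators=200,        # size of the decision-tree ensemble (Step 4)
    learning_rate=0.1,       # eta
    reg_lambda=1.0,          # lambda
    objective="multi:softmax",
)
clf.fit(X_train, y_train)

# Step 5: allocate class labels to the testing samples.
y_pred = clf.predict(X_test)
```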
Models | Sensitivity (SE) | Specificity (SP) | Accuracy | F1-Score |
---|---|---|---|---|
VGG16 | 68% | 64% | 69% | 68.8% |
VGG19 | 70.9% | 70.8% | 71.1% | 71% |
InceptionV3 | 73.2% | 74.5% | 75% | 74.7% |
ResNet50 | 83.5% | 82% | 80.2% | 80.8% |
Xception | 81.7% | 80.4% | 81.2% | 81% |
MobileNet | 89.5% | 88.9% | 89.4% | 89% |
DenseNet-169 | 89.9% | 90% | 90% | 90% |
EfficientNet | 88% | 79.8% | 88% | 88% |
CAD-EYE | 97.3% | 97.9% | 98% | 98% |
Model | Dataset | Sensitivity (SE) | Specificity (SP) | F1-Score | Recall | Accuracy |
---|---|---|---|---|---|---|
CAD-EYE | EDC | 99.50 | 99.68 | 99.95 | 99.98 | 1.0 |