Automatic Diabetic Foot Ulcer Recognition Using Multi-Level Thermographic Image Data
Abstract
1. Introduction
2. Background
3. Data and Augmentation
4. Methodology
4.1. Machine-Learning Approaches
- Support Vector Machines: A support vector machine can be used for both regression and classification problems [28]. For classification, it seeks a hyperplane that separates the two classes with the largest possible margin.
- Random Forest: A random forest is an ensemble of decision trees, each trained on a different subset of the provided dataset, whose predictions are combined to improve accuracy [29]. To produce the final output, the forest collects the predictions of all its trees and aggregates them into a single prediction (a majority vote for classification). Using a larger number of trees also helps to reduce overfitting.
- kNN: The k-nearest neighbor algorithm assigns a test sample to the class held by the majority of its nearest neighbors [30]. Nearness is defined by a distance metric, typically the Euclidean or the absolute (Manhattan) distance.
- Naive Bayes: A naive Bayes classifier categorizes data using concepts from probability theory [31]. It relies on Bayes' theorem, whose key insight is that the probability of an event can be updated whenever new evidence becomes available; the classifier additionally assumes that the features are conditionally independent given the class.
- XGBoost: Extreme gradient boosting is built on supervised machine learning, decision trees, ensemble learning, and gradient boosting [32].
- AdaBoost: Adaptive boosting (AdaBoost) is an ensemble method in machine learning [33]. Its most common base learner is a decision tree with a single split, i.e., only one level deep, also known as a decision stump. The algorithm first builds a model in which all input data points are weighted equally, and the weights of misclassified points are increased in subsequent rounds.
- Bagging: A bagging classifier is an ensemble meta-estimator that fits base classifiers on random subsets of the original dataset and then aggregates their individual predictions (by voting or averaging) to produce the final prediction [34]. A sketch showing how this bank of classifiers can be fitted is given after this list.
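The following minimal sketch (an illustration, not the authors' exact pipeline) shows how such a bank of classifiers could be fitted and evaluated with scikit-learn and the xgboost package; the feature matrix `X` and labels `y` are random placeholders standing in for the handcrafted features described in Section 4.2.

```python
# Minimal sketch: fitting the seven classifiers discussed above on
# pre-extracted feature vectors. X (n_samples x n_features) and y
# (0 = control, 1 = diabetic) are placeholders for the real features.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import (RandomForestClassifier, AdaBoostClassifier,
                              BaggingClassifier)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score
from xgboost import XGBClassifier  # requires the xgboost package

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))        # placeholder feature matrix
y = rng.integers(0, 2, size=200)      # placeholder labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                           stratify=y, random_state=0)

models = {
    "SVM": SVC(kernel="rbf"),                        # large-margin hyperplane
    "RF": RandomForestClassifier(n_estimators=200),  # many trees, majority vote
    "kNN": KNeighborsClassifier(n_neighbors=5),      # Euclidean distance by default
    "NB": GaussianNB(),                              # Bayes' theorem, independent features
    "XGBoost": XGBClassifier(eval_metric="logloss"), # gradient-boosted trees
    "AdaBoost": AdaBoostClassifier(),                # boosted decision stumps
    "Bagging": BaggingClassifier(n_estimators=50),   # aggregated base classifiers
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(f"{name}: acc={accuracy_score(y_te, pred):.3f}, "
          f"F1={f1_score(y_te, pred):.3f}")
```

Stratified splitting is used here only to keep the diabetic/control class ratio intact in the illustrative train/test split; it is not necessarily the protocol used in the study.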
4.2. Feature Extraction
- Local Binary Patterns (LBP): The LBP operator labels each pixel of an image by thresholding its 3 × 3 neighborhood against the center value [35]. Each neighboring pixel is assigned a binary value, 1 if it is greater than or equal to the center pixel and 0 otherwise, and the resulting bit pattern becomes the pixel's label.
- Histogram of Oriented Gradients (HOG): HOG characterizes the appearance of an object by the distribution of local gradient orientations [36]. Each orientation vote is weighted by the gradient magnitude, and the resulting local histograms are concatenated into the feature descriptor.
- Gabor Filters: Gabor filters are linear filters that measure whether an image contains specific frequency content, at specific orientations, within a region of interest, and are widely used for texture analysis [37]. Many vision researchers consider their frequency and orientation representations similar to those of the human visual system.
- Gray Level Cooccurrence Matrix (GLCM): A GLCM records how often pairs of pixel intensities cooccur at a specified distance and angle [38], and is computed to extract texture features from images. The GLCM features used in this study are cluster prominence, cluster shade, dissimilarity, energy, entropy, homogeneity, and maximum probability. A minimal feature-extraction sketch follows this list.
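As a rough illustration of the feature-extraction step (assuming scikit-image is used; the parameter values below are illustrative, not the authors' settings), the sketch computes LBP, HOG, Gabor, and GLCM descriptors for a single grayscale image and concatenates them into one feature vector.

```python
# Minimal sketch of extracting the four feature types above from one
# grayscale image; a sample image stands in for a thermogram.
import numpy as np
from skimage import data
from skimage.feature import (local_binary_pattern, hog,
                             graycomatrix, graycoprops)
from skimage.filters import gabor

img = data.camera()  # placeholder 8-bit grayscale image

# LBP: threshold a circular neighborhood against the center pixel,
# then summarize the label image as a histogram.
lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
lbp_hist, _ = np.histogram(lbp, bins=np.arange(11), density=True)

# HOG: magnitude-weighted orientation histograms over local cells.
hog_vec = hog(img, orientations=9, pixels_per_cell=(16, 16),
              cells_per_block=(2, 2))

# Gabor: band-pass response at a given frequency/orientation; the mean
# and variance of the response are common texture descriptors.
real, _ = gabor(img, frequency=0.2, theta=0)
gabor_feats = [real.mean(), real.var()]

# GLCM: cooccurrence counts of intensity pairs at distance 1, angle 0.
glcm = graycomatrix(img, distances=[1], angles=[0],
                    levels=256, symmetric=True, normed=True)
glcm_feats = [graycoprops(glcm, p)[0, 0]
              for p in ("dissimilarity", "energy", "homogeneity")]
p = glcm[:, :, 0, 0]
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))  # entropy and maximum
max_prob = p.max()                               # probability from the matrix

feature_vector = np.concatenate([lbp_hist, hog_vec, gabor_feats,
                                 glcm_feats, [entropy, max_prob]])
```

Cluster prominence and cluster shade are not built into graycoprops, but they can likewise be computed directly from the normalized cooccurrence matrix, as done above for entropy and maximum probability.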
4.3. Deep-Learning Approaches
- ResNet50: This model [39] was originally trained on the ImageNet database [40] for 1000 classes. ResNet50 performs well on vision tasks and offers a good trade-off between efficiency and complexity, which is why it has become a default choice for deep transfer learning in computer vision. It has 48 convolutional layers, one max-pooling layer, and one average-pooling layer. For diabetic foot ulcer recognition, the parameters of the last three layers were updated via transfer learning by training the network on the thermogram image data. The input images were resized to 224 × 224 to match the ResNet input resolution, and the output was restricted to a single neuron giving the probability that a sample belongs to the diabetic group.
- DenseNet121: DenseNet was developed to obtain the benefits of a deep network while keeping the number of parameters small [41]. It improves accuracy by mitigating the vanishing-gradient problem. DenseNet121 has 120 convolutional layers and 4 average-pooling layers. To use it with the diabetic foot thermogram data, the parameters of the last two layers were updated via training. The input images were likewise resized to 224 × 224.
- Proposed CNN Model: In addition to the pre-trained networks, a custom CNN model was developed specifically for DF classification. The DFTNet model was adopted as the base model [9], with some major differences: this study used an input volume of 180 × 80 × 3, whereas DFTNet used 227 × 227 × 3. The learning rate was set to 0.001 with the Adam optimizer, and the batch size was set to 64. The network architecture is shown in Figure 4 and its detailed operation in Figure 5. Illustrative sketches of the transfer-learning setup and of the custom-model training configuration are given after this list.
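A minimal transfer-learning sketch in PyTorch follows (the framework and the exact layer-freezing choices are assumptions; the paper only states that the last layers were retrained). It mirrors the ResNet50 setup described above: ImageNet-pretrained weights, 224 × 224 input, a single sigmoid output neuron for the diabetic-class probability, and training restricted to the final layers.

```python
# Transfer-learning sketch (requires torchvision >= 0.13 for the weights API).
import torch
import torch.nn as nn
from torchvision import models, transforms

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Freeze every parameter, then unfreeze only the last residual stage;
# the freshly created classifier head below is trainable by default.
for param in model.parameters():
    param.requires_grad = False
for param in model.layer4.parameters():
    param.requires_grad = True

# Replace the 1000-class ImageNet head with one sigmoid output neuron.
model.fc = nn.Sequential(nn.Linear(model.fc.in_features, 1), nn.Sigmoid())

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),   # match the ResNet input resolution
    transforms.ToTensor(),
])

criterion = nn.BCELoss()
optimizer = torch.optim.Adam(
    filter(lambda p: p.requires_grad, model.parameters()), lr=1e-3)

# One illustrative training step on a dummy batch of thermogram tensors.
images = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, 1)).float()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```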
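For the custom model, only the training configuration is reproducible from the text; the layer stack in the following skeleton is a hypothetical placeholder (the real architecture is given in Figure 4), while the 180 × 80 × 3 input, the Adam optimizer, the learning rate of 0.001, and the batch size of 64 follow the description above.

```python
# Hypothetical training-configuration skeleton for the custom CNN.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class SmallDfuCnn(nn.Module):
    def __init__(self):
        super().__init__()
        # Placeholder feature extractor, NOT the architecture of Figure 4.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 45 * 20, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),  # probability of the diabetic class
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Dummy dataset of crops resized to 180 x 80 (height x width), 3 channels.
images = torch.rand(256, 3, 180, 80)
labels = torch.randint(0, 2, (256, 1)).float()
loader = DataLoader(TensorDataset(images, labels), batch_size=64, shuffle=True)

model = SmallDfuCnn()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # as stated in the text
criterion = nn.BCELoss()

for batch_images, batch_labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(batch_images), batch_labels)
    loss.backward()
    optimizer.step()
```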
5. Results and Discussion
5.1. Results of Machine-Learning Approaches
5.1.1. Classification of Combined Data (Image+Patch)
5.1.2. Full-Image Thermogram-Based Classification Results
5.1.3. Thermogram Patch-Based Classification Results
5.2. Results of Deep-Learning Approaches
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Wang, S.; Zhang, K.; Chao, L.; Li, D.; Tian, X.; Bao, H.; Chen, G.; Xia, Y. Exploring the Utility of Radar and Satellite-Sensed Precipitation and Their Dynamic Bias Correction for Integrated Prediction of Flood and Landslide Hazards. J. Hydrol. 2021, 603, 126964.
2. Iversen, M.M.; Tell, G.S.; Riise, T.; Hanestad, B.R.; Østbye, T.; Graue, M.; Midthjell, K. History of Foot Ulcer Increases Mortality Among Individuals with Diabetes: Ten-Year Follow-up of the Nord-Trøndelag Health Study, Norway. Diabetes Care 2009, 32, 2193–2199.
3. Hernandez-Contreras, D.A.; Peregrina-Barreto, H.; Rangel-Magdaleno, J.D.J.; Renero-Carrillo, F.J. Plantar Thermogram Database for the Study of Diabetic Foot Complications. IEEE Access 2019, 7, 161296–161307.
4. Ring, F. Thermal Imaging Today and Its Relevance to Diabetes. J. Diabetes Sci. Technol. 2010, 4, 857–862.
5. Bagavathiappan, S.; Philip, J.; Jayakumar, T.; Raj, B.; Rao, P.N.S.; Varalakshmi, M.; Mohan, V. Correlation between Plantar Foot Temperature and Diabetic Neuropathy: A Case Study by Using an Infrared Thermal Imaging Technique. J. Diabetes Sci. Technol. 2010, 4, 1386–1392.
6. Hernandez-Contreras, D.; Peregrina-Barreto, H.; Rangel-Magdaleno, J.; Gonzalez-Bernal, J. Narrative Review: Diabetic Foot and Infrared Thermography. Infrared Phys. Technol. 2016, 78, 105–117.
7. Liu, C.; van der Heijden, F.; Klein, M.E.; van Baal, J.G.; Bus, S.A.; van Netten, J.J. Infrared Dermal Thermography on Diabetic Feet Soles to Predict Ulcerations: A Case Study. Adv. Biomed. Clin. Diagn. Syst. XI 2013, 8572, 102–110.
8. Faust, O.; Rajendra Acharya, U.; Ng, E.Y.K.; Hong, T.J.; Yu, W. Application of Infrared Thermography in Computer Aided Diagnosis. Infrared Phys. Technol. 2014, 66, 160–175.
9. Cruz-Vega, I.; Hernandez-Contreras, D.; Peregrina-Barreto, H.; Rangel-Magdaleno, J.d.J.; Ramirez-Cortes, J.M. Deep Learning Classification for Diabetic Foot Thermograms. Sensors 2020, 20, 1762.
10. Adam, M.; Ng, E.Y.K.; Oh, S.L.; Heng, M.L.; Hagiwara, Y.; Tan, J.H.; Tong, J.W.K.; Acharya, U.R. Automated Detection of Diabetic Foot with and without Neuropathy Using Double Density-Dual Tree-Complex Wavelet Transform on Foot Thermograms. Infrared Phys. Technol. 2018, 92, 270–279.
11. Goyal, M.; Reeves, N.D.; Davison, A.K.; Rajbhandari, S.; Spragg, J.; Yap, M.H. DFUNet: Convolutional Neural Networks for Diabetic Foot Ulcer Classification. IEEE Trans. Emerg. Top. Comput. Intell. 2018, 4, 728–739.
12. Nanda, R.; Nath, A.; Patel, S.; Mohapatra, E. Machine Learning Algorithm to Evaluate Risk Factors of Diabetic Foot Ulcers and Its Severity. Med. Biol. Eng. Comput. 2022, 60, 2349–2357.
13. Cao, C.; Qiu, Y.; Wang, Z.; Ou, J.; Wang, J.; Hounye, A.H.; Hou, M.; Zhou, Q.; Zhang, J. Nested Segmentation and Multi-Level Classification of Diabetic Foot Ulcer Based on Mask R-CNN. Multimed. Tools Appl. 2022, 82, 18887–18906.
14. Fraiwan, L.; AlKhodari, M.; Ninan, J.; Mustafa, B.; Saleh, A.; Ghazal, M. Diabetic Foot Ulcer Mobile Detection System Using Smart Phone Thermal Camera: A Feasibility Study. Biomed. Eng. Online 2017, 16, 117.
15. Xu, Y.; Han, K.; Zhou, Y.; Wu, J.; Xie, X.; Xiang, W. Classification of Diabetic Foot Ulcers Using Class Knowledge Banks. Front. Bioeng. Biotechnol. 2022, 9, 811028.
16. Ai, L.; Yang, M.; Xie, Z. ACTNet: Asymmetric Convolutional Transformer Network for Diabetic Foot Ulcers Classification. Phys. Eng. Sci. Med. 2022, 45, 1175–1181.
17. Goyal, M.; Reeves, N.D.; Rajbhandari, S.; Yap, M.H. Robust Methods for Real-Time Diabetic Foot Ulcer Detection and Localization on Mobile Devices. IEEE J. Biomed. Health Inform. 2019, 23, 1730–1741.
18. Santos, F.; Santos, E.; Vogado, L.H.; Ito, M.; Bianchi, A.; Tavares, J.M.; Veras, R. DFU-VGG, a Novel and Improved VGG-19 Network for Diabetic Foot Ulcer Classification. In Proceedings of the 2022 29th International Conference on Systems, Signals and Image Processing (IWSSIP), Sofia, Bulgaria, 1–3 June 2022.
19. Al-Garaawi, N.; Ebsim, R.; Alharan, A.F.H.; Yap, M.H. Diabetic Foot Ulcer Classification Using Mapped Binary Patterns and Convolutional Neural Networks. Comput. Biol. Med. 2022, 140, 105055.
20. Tulloch, J.; Zamani, R.; Akrami, M. Machine Learning in the Prevention, Diagnosis and Management of Diabetic Foot Ulcers: A Systematic Review. IEEE Access 2020, 8, 198977–199000.
21. Niri, R.; Lucas, Y.; Treuillet, S.; Douzi, H. Smartphone-Based Thermal Imaging System for Diabetic Foot Ulcer Assessment. In Journées d'Etude sur la TéléSanté; Sorbonne Universités: Paris, France, 2019.
22. Liu, C.; van Netten, J.J.; van Baal, J.G.; Bus, S.A.; van der Heijden, F. Automatic Detection of Diabetic Foot Complications with Infrared Thermography by Asymmetric Analysis. J. Biomed. Opt. 2015, 20, 026003.
23. Vardasca, R.; Magalhaes, C.; Seixas, A.; Carvalho, R.; Mendes, J. Diabetic Foot Monitoring Using Dynamic Thermography and AI Classifiers. In Proceedings of the 3rd Quantitative InfraRed Thermography Asia Conference (QIRT), Tokyo, Japan, 1–5 July 2019.
24. Adam, M.; Ng, E.Y.K.; Oh, S.L.; Heng, M.L.; Hagiwara, Y.; Tan, J.H.; Tong, J.W.K.; Acharya, U.R. Automated Characterization of Diabetic Foot Using Nonlinear Features Extracted from Thermograms. Infrared Phys. Technol. 2018, 89, 325–337.
25. Van Netten, J.J.; Van Baal, J.G.; Liu, C.; Van Der Heijden, F.; Bus, S.A. Infrared Thermal Imaging for Automated Detection of Diabetic Foot Complications. J. Diabetes Sci. Technol. 2013, 7, 1122–1129.
26. Van Netten, J.J.; Prijs, M.; Van Baal, J.G.; Liu, C.; Van Der Heijden, F.; Bus, S.A. Diagnostic Values for Skin Temperature Assessment to Detect Diabetes-Related Foot Complications. Diabetes Technol. Ther. 2014, 16, 714–721.
27. Khosa, I.; Pasero, E. Defect Detection in Food Ingredients Using Multilayer Perceptron Neural Network; IEEE: Piscataway, NJ, USA, 2014; ISBN 9781479928057.
28. Schölkopf, B. SVMs-A Practical Consequence of Learning Theory. IEEE Intell. Syst. Their Appl. 1998, 13, 18–21.
29. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32.
30. Baskett, F.; Shustek, L.J. An Algorithm for Finding Nearest Neighbors. IEEE Trans. Comput. 1975, C-24, 1000–1006.
31. Cichosz, P. Naïve Bayes Classifier. In Data Mining Algorithms; Wiley: Hoboken, NJ, USA, 2015; pp. 118–133.
32. Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, NY, USA, 13–17 August 2016.
33. Rätsch, G.; Onoda, T.; Müller, K.R. Soft Margins for AdaBoost. Mach. Learn. 2001, 42, 287–320.
34. Breiman, L. Bagging Predictors. Mach. Learn. 1996, 24, 123–140.
35. Pietikäinen, M. Image Analysis with Local Binary Patterns. Lect. Notes Comput. Sci. 2005, 3540, 115–118.
36. Dalal, N.; Triggs, B. Histograms of Oriented Gradients for Human Detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), San Diego, CA, USA, 20–25 June 2005; Volume I, pp. 886–893.
37. Fogel, I.; Sagi, D. Gabor Filters as Texture Discriminator. Biol. Cybern. 1989, 61, 103–113.
38. Haralick, R.M.; Dinstein, I.; Shanmugam, K. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621.
39. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016.
40. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; Fei-Fei, L. ImageNet: A Large-Scale Hierarchical Image Database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255.
41. Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
| Category | Diabetic Group | Control Group | Total |
|---|---|---|---|
| No. of cases | 122 | 45 | 167 |
| Original full images | 244 | 90 | 334 |
| Images after augmentation | 500 | 500 | 1000 |
| Original patches | 976 | 360 | 1336 |
| Patches after augmentation | 1500 | 1500 | 3000 |
| Evaluation Metric | Features | SVM | RF | XGBoost | Naive Bayes | AdaBoost | kNN | Bagging |
|---|---|---|---|---|---|---|---|---|
| Sensitivity | HOG | 0.71 | 0.71 | 0.71 | 0.64 | 0.57 | 0.67 | 0.71 |
| | Gabor | 0.71 | 0.14 | 0.16 | 0.71 | 0.28 | 0.14 | 0.71 |
| | GLCM | 0.26 | 0.28 | 0.36 | 0.14 | 0.14 | 0.36 | 0.4 |
| | LBP | 0.07 | 0.14 | 0.14 | 0.07 | 0.07 | 0.14 | 0.19 |
| | Combined | 0.53 | 0.51 | 0.51 | 0.46 | 0.64 | 0.57 | 0.55 |
| Specificity | HOG | 0.85 | 0.9 | 0.9 | 0.85 | 0.87 | 0.7 | 0.87 |
| | Gabor | 0.9 | 0.77 | 0.87 | 0.47 | 0.8 | 0.9 | 0.87 |
| | GLCM | 0.95 | 0.75 | 0.75 | 0.85 | 0.9 | 0.75 | 0.77 |
| | LBP | 0.9 | 0.78 | 0.78 | 0.8 | 0.85 | 0.73 | 0.88 |
| | Combined | 0.89 | 0.83 | 0.82 | 0.94 | 0.79 | 0.81 | 0.83 |
| Accuracy | HOG | 0.71 | 0.65 | 0.69 | 0.66 | 0.59 | 0.7 | 0.73 |
| | Gabor | 0.68 | 0.61 | 0.65 | 0.54 | 0.67 | 0.7 | 0.73 |
| | GLCM | 0.7 | 0.63 | 0.65 | 0.67 | 0.6 | 0.65 | 0.67 |
| | LBP | 0.69 | 0.59 | 0.6 | 0.61 | 0.65 | 0.57 | 0.59 |
| | Combined | 0.78 | 0.73 | 0.72 | 0.79 | 0.75 | 0.73 | 0.74 |
| F1-score | HOG | 0.67 | 0.71 | 0.71 | 0.62 | 0.59 | 0.6 | 0.69 |
| | Gabor | 0.7 | 0.16 | 0.1 | 0.44 | 0.3 | 0.2 | 0.69 |
| | GLCM | 0.34 | 0.29 | 0.34 | 0.18 | 0.2 | 0.34 | 0.36 |
| | LBP | 0.1 | 0.16 | 0.19 | 0.08 | 0.09 | 0.15 | 0.17 |
| | Combined | 0.59 | 0.54 | 0.53 | 0.57 | 0.61 | 0.57 | 0.57 |
| Models | Sensitivity | Specificity | Accuracy | F1-Score |
|---|---|---|---|---|
| SVM | 0.345 | 0.968 | 0.811 | 0.479 |
| RF | 0.559 | 0.948 | 0.850 | 0.652 |
| XGB | 0.630 | 0.932 | 0.856 | 0.688 |
| NB | 0.392 | 0.9 | 0.772 | 0.464 |
| ADA | 0.630 | 0.896 | 0.829 | 0.650 |
| kNN | 0.642 | 0.916 | 0.847 | 0.679 |
| BAG | 0.619 | 0.908 | 0.835 | 0.654 |
| Models | Sensitivity | Specificity | Accuracy | F1-Score |
|---|---|---|---|---|
| SVM | 0.530 | 0.886 | 0.776 | 0.594 |
| RF | 0.506 | 0.827 | 0.727 | 0.535 |
| XGB | 0.506 | 0.821 | 0.723 | 0.531 |
| NB | 0.457 | 0.935 | 0.787 | 0.571 |
| ADA | 0.638 | 0.794 | 0.746 | 0.609 |
| kNN | 0.566 | 0.805 | 0.731 | 0.566 |
| BAG | 0.554 | 0.827 | 0.742 | 0.571 |
| Level | Models | Sensitivity | Specificity | Accuracy | F1-Score | AUC |
|---|---|---|---|---|---|---|
| Image Level | ResNet50 | 0.778 | 0.571 | 0.627 | 0.875 | 0.623 |
| | DenseNet121 | 0.5 | 0.542 | 0.521 | 0.511 | 0.52 |
| | Custom Model | 0.97 | 0.958 | 0.97 | 0.891 | 0.976 |
| Patch Level | ResNet50 | 0.565 | 0.913 | 0.832 | 0.874 | 0.834 |
| | DenseNet121 | 0.661 | 0.822 | 0.791 | 0.8 | 0.788 |
| | Custom Model | 0.839 | 0.889 | 0.93 | 0.891 | 0.932 |
| Image + Patch Level | ResNet50 | 0.697 | 0.902 | 0.881 | 0.75 | 0.879 |
| | DenseNet121 | 0.75 | 0.875 | 0.843 | 0.8 | 0.841 |
| | Custom Model | 0.839 | 0.889 | 0.933 | 0.891 | 0.938 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).