Article

YOLO-Based Deep Learning Model for Pressure Ulcer Detection and Classification

1 Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakaka 72388, Saudi Arabia
2 School of Computer Science, SCS, Taylor’s University, Subang Jaya 47500, Malaysia
* Author to whom correspondence should be addressed.
Healthcare 2023, 11(9), 1222; https://doi.org/10.3390/healthcare11091222
Submission received: 12 March 2023 / Revised: 15 April 2023 / Accepted: 22 April 2023 / Published: 25 April 2023

Abstract

Pressure ulcers are significant healthcare concerns affecting millions of people worldwide, particularly those with limited mobility. Early detection and classification of pressure ulcers are crucial in preventing their progression and reducing associated morbidity and mortality. In this work, we present a novel approach that uses YOLOv5, an advanced and robust object detection model, to detect and classify pressure ulcers into four stages as well as non-pressure ulcers. We also utilize data augmentation techniques to expand our dataset and strengthen the robustness of our model. Our approach shows promising results, achieving an overall mean average precision (mAP50) of 76.9% and class-specific mAP50 values ranging from 66% to 99.5%. Compared to previous studies that primarily utilize CNN-based algorithms, our approach provides a more efficient and accurate solution for the detection and classification of pressure ulcers. The successful implementation of our approach has the potential to improve the early detection and treatment of pressure ulcers, resulting in better patient outcomes and reduced healthcare costs.

1. Introduction

Pressure ulcers, sometimes known as bed sores, are localized injuries to the skin and underlying tissues brought on by persistent pressure or friction against the skin [1]. They are serious health concerns, especially for older individuals and those with disabilities who spend a lot of time in bed or sitting down. However, neonates and pediatric patients are also susceptible to pressure ulcers, albeit at a lower rate [2]. Pressure ulcers are becoming more prevalent, particularly among elderly individuals, who are more susceptible due to decreased mobility, sensory perception, and skin integrity [3,4]. Additionally, extended hospital stays for chronic illnesses, neurological conditions, complex organ failure, cancer, radiation treatments, and prolonged stays in the intensive care unit (ICU) [5], as well as long hospital stays during pandemics such as COVID-19 [6,7,8], increase the risk of developing pressure ulcers. Moreover, the prevalence of chronic diseases, such as obesity, diabetes, and cardiovascular disease, is rising, and many of these diseases also increase the risk of developing pressure ulcers [9,10]. Sedentary lifestyles have led to more pressure ulcers, especially among older adults and those with disabilities [11,12,13,14]. The most commonly affected areas include the buttocks, head, shoulders, sacrum, coccyx, elbows, heels, hips, and ears. Figure 1 shows some of the areas where pressure ulcers are likely to develop.
Patients who develop pressure ulcers may experience considerable effects, including pain, discomfort, and reduced mobility. In severe cases, they can lead to serious infections, hospitalization, and even death [15]. For the effective care and treatment of pressure ulcers, early and correct identification is essential [16]. However, classifying pressure ulcers into different stages can be challenging, especially for inexperienced healthcare providers [17,18]. As a result, there is a need for a more efficient and accurate method for classifying pressure ulcers that can improve patient outcomes and reduce healthcare costs. The current methods for classifying pressure ulcers into stages typically involve a visual assessment of the wound and a consideration of its depth and extent. There are several different classification systems in use, but the most widely used is the National Pressure Ulcer Advisory Panel (NPUAP) staging system [1]. Table 1 summarizes the NPUAP staging system classification of pressure ulcers into four stages based on the severity of tissue damage, ranging from stage I (least severe) to stage IV (most severe). Figure 2 provides a visual representation of the different stages.
Doctors and nurses typically classify pressure ulcers through visual inspection and manual palpation. During the assessment, they examine the wound site, look for signs of redness, blanching, and tissue loss, and take note of any other signs of infection or tissue damage. The healthcare provider may also use a probe or another instrument to assess the depth of the wound and evaluate any underlying tissue damage [19]. Accurate pressure ulcer classification is crucial, but healthcare providers can misclassify them due to various reasons. These include a lack of training and experience, limited information, observer bias, and variability in wound appearance. Factors such as the patient’s skin color, age, and overall health can also influence the visual appearance of a pressure ulcer, making it difficult to classify the wound without additional information [20]. Pressure ulcer classification requires a comprehensive assessment, considering various factors [21]. Deep learning, a subtype of machine learning inspired by the human brain, has shown potential in image classification by automatically learning complex patterns [22]. Computer vision and deep learning techniques have been widely adopted in various domains, including medical imaging, security, and image classification [23,24,25,26]. These techniques can detect and classify pressure ulcers by identifying visual features, such as color intensity, texture, depth, border, undermining, and tunneling, enabling earlier treatment and prevention of complications. Table 2 outlines the detectable features of pressure ulcers at each stage, which can be utilized by computer vision techniques, such as convolutional neural networks (CNNs) for accurate and efficient detection and classification.
Hence, we conclude that pressure ulcers are serious medical conditions that require accurate and timely detection and classification. While current methods rely on manual inspection, there has been increasing interest in developing automated approaches that can provide more accurate and efficient results. Various computer vision techniques, such as CNNs and manual image processing, have been explored for this task. However, previous studies have limitations, such as small dataset sizes, reliance on limited feature extraction methods, and lack of generalizability. Therefore, a more comprehensive and accurate solution to pressure ulcer detection and classification is needed [27,28,29,30,31,32,33].
To address this need, we propose a YOLO-based model for the detection and classification of pressure ulcers in medical images. Our method leverages the benefits of deep learning, such as its ability to automatically learn features, and employs data augmentation techniques to improve the model’s performance. We also use a dataset compiled from multiple sources with manually labeled images, which provides a more comprehensive and diverse set of training examples. Our study aims to accurately and efficiently detect and classify pressure ulcers, potentially leading to better patient outcomes and reduced healthcare costs.
Therefore, our work highlights the importance of image processing, computer vision, machine learning, and deep learning in the context of pressure ulcer detection and classification. By using a more comprehensive dataset and a more advanced deep learning model, we aim to address some of the limitations of previous studies and provide a more accurate and efficient solution to this critical healthcare problem. Our main contributions are as follows. We:
  • Developed a YOLO-based deep learning model for the detection and classification of pressure ulcers in medical images, which can accurately classify pressure ulcers into four stages based on severity.
  • Created a new dataset with the stage-wise classification of pressure ulcers, as no such dataset was previously available, by manually labeling images with bounding boxes and polygonal regions of interest using YOLO.
  • Added a large number of images of non-pressure ulcers, including surgical wounds and burns, to the dataset to make it more representative of real-world scenarios.
  • Utilized data augmentation techniques, such as rotation and flipping, to increase the dataset size and improve the model’s performance.
  • Compared the performance of the proposed model to other cutting-edge deep learning models as well as to conventional pressure ulcer detection and classification methods, indicating the model’s advantage in terms of accuracy and effectiveness.
  • Demonstrated the potential of the proposed model to aid in the timely diagnosis, treatment, and prevention of pressure ulcers, potentially improving patient outcomes and reducing healthcare costs.
  • Contributed to the field by providing a new dataset and an accurate and efficient solution for the detection and classification of pressure ulcers, which has the potential to advance the state-of-the-art in this important area of medical image analysis.
In Section 2, we will first provide a comprehensive literature review of existing methods for pressure ulcer detection and classification, highlighting their strengths and limitations. In Section 3, we will present the materials and methods used in our study, including details of the dataset and image labeling process, as well as a description of the YOLO-based deep learning model and data augmentation techniques employed. Section 4 will report and discuss the results of our training and evaluation, including a comparison of our model’s performance with that of previous studies. In Section 5, we will provide an in-depth analysis of the model’s strengths and limitations, and discuss potential directions for future research. Finally, in Section 6, we will conclude our work with a summary of our key findings and contributions, as well as a discussion of the potential impact of our model on clinical practice and patient outcomes.

2. Literature Review

In recent years, a significant amount of research has been devoted to the use of deep learning in medical imaging. The literature review for this study primarily focuses on deep learning techniques applied to various medical imaging tasks, including object detection, image segmentation, and image classification. Figure 3 illustrates a hierarchy of tasks and algorithms for analyzing medical images using deep learning.
As seen in Figure 3, one area of research has been the development of convolutional neural networks (CNNs) for object detection in medical images. Researchers have applied CNNs to detect various structures in medical images, such as tumors, organs, and blood vessels. For example, Salama et al. and Agnes et al. [34,35] used CNNs to detect breast tumors in mammography images, Gao et al. and Monkam et al. [36,37] used CNNs to detect lung nodules in CT scans, and Li et al. and Ting et al. [38,39] used CNNs to detect retinal abnormalities in eye images. These studies have demonstrated the potential of CNNs for accurate and efficient object detection in medical images. Another method for identifying and segmenting liver tumors in multi-phase CT images is the phase attention Mask R-CNN proposed by [40]. This method selectively extracts features from each phase using an attention network at each scale and outperforms other methods for segmenting liver tumors in terms of segmentation accuracy. Several other notable studies have explored the application of R-CNN, Fast R-CNN, and Faster R-CNN in the automatic detection and segmentation of medical images, including works by [41,42,43,44,45,46].
Deep learning is being increasingly used for image segmentation in medical imaging. Researchers have developed deep learning algorithms for segmenting different parts of the body in medical photographs, such as the brain in MRI images [47], the lungs in CT scans [48], and the retina in eye images [49].
In addition to object detection and image segmentation, researchers have employed deep learning to classify medical images. The main focus of this line of work has been on creating algorithms for classifying medical images into specified categories, such as normal and abnormal, or benign and malignant. For instance, Esteva et al. [50] used a deep convolutional neural network to classify images of skin lesions, Shu et al. [51] used region-based pooling networks to classify mammography images, and Dansana et al. [52] used a deep learning system to classify chest X-ray images as normal or abnormal. Hence, an extensive body of literature demonstrates that deep learning can accurately categorize diseases in medical images.
Since our research focus is also mainly on object detection and classification in pressure ulcer images, we outline several noteworthy publications that have used deep learning algorithms for detection and classification tasks in medical images in Table 3.
YOLO (you only look once) is a recent deep learning approach used for object detection in medical images [60]. Compared to traditional CNN-based detectors, YOLO is designed to be fast and efficient, making it well-suited for real-time object detection in medical images. YOLO divides the image into a grid and predicts bounding boxes and class probabilities for each cell. For example, in the case of lung cancer screening, the bounding boxes would represent the locations of lung nodules, and the class probabilities would indicate whether each nodule is cancerous. This can aid radiologists in the early detection and diagnosis of lung cancer, resulting in better patient outcomes. In medical imaging, YOLO has been applied to the detection of various structures, including tumors, organs, and blood vessels. For example, researchers have used YOLO to detect lung nodules in CT scans and retinal abnormalities in eye images; Boonrod et al. [61] used YOLO to detect abnormal cervical vertebrae in X-ray images, achieving higher accuracy and a faster processing time compared to traditional object detection methods. Similarly, Wojaczek et al. [62] used YOLO to detect the location and shape of prostate cancer in magnetic resonance images. Compared to traditional CNNs, YOLO has been shown to be faster and more efficient, while still achieving similar or even better performance in object detection tasks. Furthermore, YOLO is designed to be easy to train and implement, making it accessible to researchers and practitioners who may have limited experience with deep learning [63,64,65].
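To make the grid-based prediction concrete, the following minimal sketch (illustrative only, not the authors’ code) decodes one cell of a YOLO-style output tensor, in which an S × S grid of cells each predicts B boxes with objectness scores plus C class probabilities; the grid size, box count, and random values are assumptions chosen for demonstration:

```python
# Minimal sketch of how a YOLO-style head encodes predictions: an S x S grid
# where each cell predicts B boxes (x, y, w, h, objectness) plus C class
# probabilities. Values are illustrative stand-ins for real network output.
import numpy as np

S, B, C = 7, 2, 5                       # grid size, boxes per cell, classes (4 PU stages + non-PU)
pred = np.random.rand(S, S, B * 5 + C)  # stand-in for the network's output tensor

cell = pred[3, 4]                       # predictions for one grid cell
boxes = cell[:B * 5].reshape(B, 5)      # each row: x, y, w, h, objectness
class_probs = cell[B * 5:]              # class distribution shared by the cell

# Confidence for class k of box b ~ objectness * class probability
scores = boxes[:, 4:5] * class_probs    # shape (B, C)
best_box, best_class = np.unravel_index(scores.argmax(), scores.shape)
print(f"cell (3,4): box {best_box} -> class {best_class}, score {scores.max():.3f}")
```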
Various studies have explored the use of automated image analysis, deep learning, vision, and machine learning techniques for pressure ulcer detection, classification, and segmentation. These studies have demonstrated the feasibility of recognizing complicated structures in biomedical images with high accuracy, including the use of convolutional neural networks for tissue classification and segmentation, as well as the simultaneous segmentation and classification of stage 1–4 pressure injuries. However, these studies have their limitations, including the need for labeled datasets and the requirement for the manual selection of parameters. The authors of [66] proposed a system that utilized a LiDAR sensor and deep learning models for automatically assessing pressure injuries, achieving satisfactory accuracy with U-Net outperforming Mask R-CNN; Zahia et al. [67] used CNNs for automatic tissue classification in pressure injuries, achieving an overall classification accuracy of 92.01%; Liu et al. [68] developed a system using deep learning algorithms to identify pressure ulcers and achieved high accuracy with the Inception-ResNet-v2 model; Fergus et al. [69] used a faster region-based convolutional neural network and a mobile platform to classify pressure ulcers; Swerdlow et al. [70] applied the Mask-R-CNN algorithm for simultaneous segmentation and classification of stage 1–4 pressure injuries; Elmogy et al. [71] proposed a tissue classification system for pressure ulcers using a 3D-CNN. Table 4 summarizes these studies.
In this study, YOLO is employed for the detection and classification of pressure ulcers into four stages, and non-pressure ulcer images are also considered. Augmentation techniques are used to enhance the quality and quantity of the available dataset. By collecting images from various sources, the study aims to overcome the limitations of previous studies and provide a more accurate and robust system for pressure ulcer detection and classification. This study’s results will be important for medical professionals and caregivers to make informed decisions on the prevention and treatment of pressure ulcers.

3. Methodology

3.1. Description of the YOLO Model

YOLOv5 is a deep neural network architecture used for object detection that was developed by Ultralytics [60,72,73]. It improves on previous versions through a number of architectural refinements and optimizations. The YOLOv5 architecture consists of a backbone network (BN) and a detection head (DH): the BN is in charge of feature extraction, while the DH is responsible for the actual object detection and classification.
The backbone network of YOLOv5 is a CSP (cross stage partial) variant of the Darknet backbone used in YOLOv3. The model family (YOLOv5s through YOLOv5x) is scaled via depth and width multipliers, which balance model size and performance for a given compute budget.
The detection head of YOLOv5 is similar to that of YOLOv4, as shown in Figure 4, but with some important modifications. It uses spatial pyramid pooling (SPP) to capture features at different scales and a new anchor system that is better suited for detecting small objects.
Overall, YOLOv5 is designed to be fast, accurate, and easy to use. It achieves state-of-the-art performance on a number of object detection benchmarks while maintaining a small model size.
We selected YOLOv5s for pressure ulcer detection and classification due to its high accuracy in object detection, real-time performance, and flexibility in training on custom datasets. Additionally, YOLOv5 has been successfully applied in several other medical imaging tasks, including lung nodule detection and diabetic retinopathy detection, which suggests that it could be effective for pressure ulcer detection and classification as well.
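As an illustration of how such a model is used at inference time, the sketch below loads a fine-tuned YOLOv5s checkpoint through the Ultralytics YOLOv5 torch.hub interface; the checkpoint name best.pt and the image file are hypothetical placeholders rather than artifacts released with this study:

```python
# Sketch: inference with a fine-tuned YOLOv5s checkpoint via torch.hub.
# "best.pt" and the image path are hypothetical placeholders.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")
model.conf = 0.25  # confidence threshold below which detections are dropped

results = model("pressure_ulcer_example.jpg")  # hypothetical test image
results.print()                # summary: image size and per-class counts
df = results.pandas().xyxy[0]  # one row per detection
print(df[["xmin", "ymin", "xmax", "ymax", "confidence", "name"]])
```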

3.2. Dataset Used for Training and Testing

In order to develop a highly accurate deep learning-based model for pressure ulcer classification into stages 1, 2, 3, and 4, as well as non-pressure ulcers, a diverse and comprehensive dataset was collected for both training and evaluation. This dataset was designed to include two distinct sets of pressure ulcer images, as well as images of other wound types, such as burns, abdominal wounds, and diabetic foot ulcers, all of which were sourced from the Medetec image database [75].
To increase the size and diversity of the dataset, an additional 200 images covering all four stages and non-pressure ulcers were collected from various online sources, including Google Images. To further enhance the dataset, data augmentation techniques were applied, including image rotation, flipping, and resizing, to generate additional images from the existing data. The careful selection and augmentation of this diverse dataset enables the deep learning model to be robust to variations in input data and to generalize well to new, unseen pressure ulcer images. Figure 5 illustrates the class distribution of the dataset after the inclusion of these additional images and data augmentation.
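The paper does not name the augmentation tooling used; as one plausible sketch, the pipeline below applies the stated transformations (rotation, flipping, resizing) with the albumentations library, which also keeps YOLO-format bounding boxes consistent with the transformed image. File names, box coordinates, and parameter values are illustrative assumptions:

```python
# Illustrative augmentation pipeline; albumentations is an assumption, not
# necessarily the library used by the authors.
import albumentations as A
import cv2

transform = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.Rotate(limit=15, p=0.5),        # small rotations preserve wound appearance
        A.Resize(height=640, width=640),  # YOLOv5's default input resolution
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

image = cv2.imread("stage3_example.jpg")  # hypothetical dataset image
bboxes = [[0.48, 0.52, 0.30, 0.25]]       # YOLO format: x_center, y_center, w, h (normalized)
labels = ["Stage 3"]

augmented = transform(image=image, bboxes=bboxes, class_labels=labels)
aug_image, aug_bboxes = augmented["image"], augmented["bboxes"]
```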
The labeling process of the dataset was conducted in collaboration with a medical domain expert to ensure accuracy and consistency in determining the stages of pressure ulcers. The images were labeled with polygonal bounding boxes to precisely outline the areas of interest for the deep learning model.
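For readers unfamiliar with YOLO-style annotation, the sketch below shows one way the resulting labels and dataset configuration could be laid out for YOLOv5 training; the directory structure, file names, and class ordering are assumptions rather than the authors’ actual layout:

```python
# Sketch of an assumed YOLOv5 dataset layout: a data YAML naming the five
# classes, plus one .txt label file per image with a
# "class x_center y_center width height" line per annotated region
# (polygonal outlines reduce to their bounding boxes for detection).
from pathlib import Path

data_yaml = """\
train: dataset/images/train
val: dataset/images/val
nc: 5
names: [NonPU, Stage 1, Stage 2, Stage 3, Stage 4]
"""
Path("pu.yaml").write_text(data_yaml)

# Example label file for an image containing one Stage 2 ulcer (class id 2):
Path("dataset/labels/train").mkdir(parents=True, exist_ok=True)
Path("dataset/labels/train/img_0001.txt").write_text("2 0.51 0.47 0.22 0.18\n")
```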

4. Results

We conducted our experiments on a Colab GPU with YOLOv5 version 7.0-114-g3c0a6e6, Python version 3.8.10, and Torch version 1.13.1+cu116. We trained the model using the following hyperparameters: a learning rate (lr0) of 0.01, momentum of 0.937, weight decay of 0.0005, and a batch size of 18. We used the stochastic gradient descent (SGD) optimizer for 500 epochs with a patience of 100, and saved the best model weights.
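For reproducibility, a run with these settings could be launched as sketched below, assuming a cloned Ultralytics YOLOv5 repository and the hypothetical dataset configuration pu.yaml; in that repository, lr0, momentum, and weight decay are read from a hyperparameter YAML whose stock values match those reported above:

```python
# Sketch of a YOLOv5 training run matching the reported settings. Equivalent
# CLI form (from a cloned ultralytics/yolov5 checkout):
#   python train.py --img 640 --batch-size 18 --epochs 500 --patience 100 \
#       --optimizer SGD --data pu.yaml --weights yolov5s.pt
import train  # train.py from the ultralytics/yolov5 repository

train.run(
    imgsz=640,             # input image size
    batch_size=18,
    epochs=500,
    patience=100,          # stop early if no val improvement for 100 epochs
    optimizer="SGD",
    data="pu.yaml",        # hypothetical dataset config (see Section 3.2)
    weights="yolov5s.pt",  # start from COCO-pretrained YOLOv5s weights
)
```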
Based on the YOLOv5s model we trained, we achieved good results in terms of the overall mAP and individual class performance. The model achieved an overall mAP50 of 0.769 and mAP50-95 of 0.398 on the validation set. This means that the model was able to accurately detect and classify pressure ulcers with a high degree of confidence.
Figure 6 shows the loss values for the box loss, object loss, and class loss at each epoch during the training process. The box loss represents the difference between the predicted and ground-truth bounding box coordinates, the object loss represents the confidence score for each object detected in an image, and the class loss represents the probability of each detected object belonging to a specific class.
The goal of training an object detection model is to minimize the total loss, which is a combination of box loss, object loss, and class loss. The loss values should exhibit a decreasing trend as the training progresses, indicating an improvement in the model’s ability to detect different stages of pressure ulcers in the images.
Moreover, from the metric curves in Figure 6, it appears that the precision, recall, and mean average precision (mAP) all increase with training epochs. This indicates that the model improves over time and becomes more accurate at identifying the correct stages of pressure ulcers in the images.
From Table 5, it can be observed that the model attained high precision and recall scores for the non-pressure ulcer (NonPU) and Stage 1 categories, suggesting that the model accurately detected and classified these classes. However, the recall score for Stage 2 was relatively low at 0.164, indicating that the model may have missed some instances of this category, possibly due to the small number of images available for this class.
For Stage 3 and Stage 4, our model showed promising results in detecting and classifying pressure ulcers, achieving mAP50 values of 0.749 and 0.729, respectively, indicating a high level of accuracy in identifying and localizing pressure ulcers in the images. In Stage 3, the model identified 26 instances of pressure ulcers with a precision (P) of 0.659 and recall (R) of 0.692, demonstrating its ability to detect and classify pressure ulcers even in challenging image conditions. In Stage 4, the model identified 22 instances of pressure ulcers with a precision (P) of 0.615 and recall (R) of 0.864, indicating its capability to correctly identify and classify a significant number of pressure ulcers.
Overall, the results of our YOLOv5s model suggest that it performed well in accurately detecting and classifying pressure ulcers in the images we used for training and validation. Hence, we can infer that these results demonstrate the potential of the YOLOv5 model for detecting and classifying pressure ulcers in medical images, which could have significant implications for improving patient care and outcomes.
In addition to the metrics presented in Table 5, we also generated additional evaluation metrics to further analyze the performance of our YOLOv5 model. The precision confidence curve, recall confidence curve, precision–recall (PR) curves, F1 score curves, and confusion matrices can be found in Figure 7, Figure 8, Figure 9, Figure 10, and Figure 11, respectively. These evaluation metrics provide a more detailed understanding of the model’s ability to accurately detect and classify pressure ulcers in the images. The PR curves and F1 score curves show the trade-off between precision and recall for different decision thresholds, while the confusion matrices provide information on the number of true positives, true negatives, false positives, and false negatives for each class.
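As a brief illustration of how the F1 curve in Figure 10 relates to the precision and recall confidence curves, the sketch below combines illustrative per-threshold precision and recall arrays (stand-ins for the real validation curves) into an F1 curve and locates its peak:

```python
# Minimal sketch of deriving an F1 curve from per-confidence-threshold
# precision and recall; the two input curves are illustrative, not real data.
import numpy as np

conf = np.linspace(0.0, 1.0, 101)
precision = conf ** 0.5   # illustrative: precision rises with confidence
recall = 1.0 - conf ** 2  # illustrative: recall falls with confidence

f1 = 2 * precision * recall / (precision + recall + 1e-16)
best = conf[f1.argmax()]
print(f"best F1 {f1.max():.3f} at confidence threshold {best:.2f}")
```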

5. Discussion

While there have been numerous studies on pressure ulcer detection and classification using deep learning, YOLOv5 has not been commonly used in this domain. In fact, to the best of our knowledge, no prior study has employed YOLOv5 for this task; only one notable study has used YOLOv4 for pressure ulcer detection and classification. Additionally, distinguishing between pressure and non-pressure ulcers using deep learning has not been widely explored. Hence, our work makes a novel contribution by utilizing YOLOv5 for pressure ulcer classification and by simultaneously distinguishing between four stages of pressure ulcers and non-pressure ulcer images. Furthermore, it is worth noting that for object detection tasks, accuracy alone may not be a sufficient metric, as it does not account for false positives and false negatives. Instead, mean average precision (mAP) is often used to evaluate the performance of object detection models. The mAP takes precision and recall into account at different intersection over union (IoU) thresholds, providing a more comprehensive evaluation of the model’s performance. In our study, we report a mAP50 of 0.769, which indicates that our model performs well in detecting and classifying pressure ulcers into different stages.
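For readers unfamiliar with these metrics, the sketch below shows the intersection-over-union computation underlying mAP50: a predicted box counts as a true positive only when its IoU with a ground-truth box is at least 0.5, while mAP50-95 averages precision over IoU thresholds from 0.5 to 0.95. The box coordinates are illustrative:

```python
# Sketch of the IoU test that decides true positives at the mAP50 threshold.
def iou(box_a, box_b):
    """Boxes as (x1, y1, x2, y2) in pixels; returns intersection over union."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Illustrative boxes: a predicted ulcer region vs. the labeled ground truth.
pred, truth = (120, 80, 260, 210), (110, 90, 250, 220)
print(f"IoU = {iou(pred, truth):.2f}")  # >= 0.5 counts as a hit for mAP50
```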
In Table 6, we present a comparative analysis of our findings against three benchmark studies. It is evident from the table that our model was trained on a dataset with a larger number and greater variety of classes than the benchmark models. Although Liu et al. [68] achieved higher recall and precision scores than our model, it is important to note that the two models were trained and evaluated on different tasks: our model was trained to detect and classify pressure ulcers into five classes, whereas the CNN-based model was trained to classify images as either erythema or non-erythema and tissue as necrotic or non-necrotic. A direct comparison of the two models’ performance metrics is therefore not necessarily fair, and because the performance indicators employed across these studies differ, their results are difficult to compare directly. Our model outperformed [29,69], as demonstrated in the table, with superior metrics and a significantly higher mAP value. These results demonstrate the potential of our model for accurate pressure ulcer detection and classification. Although promising, further improvements can be made by fine-tuning the hyperparameters and optimizing the model to enhance its performance.

6. Conclusions

Our study demonstrates the effectiveness of a deep learning-based approach for the automated detection and classification of pressure ulcers. By utilizing the YOLOv5 model, we achieved high accuracy in identifying pressure ulcers of different stages and non-pressure ulcers. Our proposed model outperformed existing CNN-based methods, showcasing the superiority of YOLOv5 for this task.
We also developed a diverse dataset of pressure ulcers and non-pressure ulcers, which we augmented to enhance the model’s robustness to variations in input data. Our model achieved high precision and recall scores for non-pressure ulcers and Stage 1 pressure ulcers, which are easier to identify due to their distinct features. However, the recall score for Stage 2 pressure ulcers was relatively low due to the limited number of images available for this class. To improve the model’s performance, we recommend collecting more images of Stage 2 pressure ulcers.
Our study’s findings demonstrate the potential of deep learning-based systems for automated pressure ulcer detection and classification, offering a promising solution for earlier intervention and improved patient outcomes. This technology could also reduce the workload of healthcare professionals, allowing them to focus on other essential tasks. To further advance this field, we plan to investigate a tailored YOLOv5 model trained on a larger and more diverse dataset, including images from different populations, races, and age groups. Additionally, we will explore the use of transfer learning by leveraging pre-trained models on other medical imaging datasets.

Author Contributions

Data curation, B.A.; funding acquisition, B.A.; investigation, F.A.; methodology, B.A.; project administration, M.H.; supervision, N.Z.J.; validation, F.A.; writing—original draft, F.A.; writing—review and editing, N.Z.J. and M.H. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to the Deputyship for Research & Innovation, the Ministry of Education in Saudi Arabia for funding this research work through project number 223202.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia for funding this research work through the project number 223202.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AUC: Area Under Curve
CNN: Convolutional Neural Network
COVID: Coronavirus Disease
CT: Computed Tomography
DCNN: Deep Convolutional Neural Network
DSC: Dice Similarity Coefficient
ICU: Intensive Care Unit
mAP: Mean Average Precision
MRI: Magnetic Resonance Imaging
NPUAP: National Pressure Ulcer Advisory Panel
PAD: Probabilistic Anatomical Distance
PAN: Path Aggregation Network
PR Curve: Precision–Recall Curve
PU: Pressure Ulcer
R-CNN: Region-based Convolutional Neural Network
SPP: Spatial Pyramid Pooling
YOLO: You Only Look Once

References

1. Edsberg, L.; Black, J.; Goldberg, M.; McNichol, L.; Moore, L.; Sieggreen, M. Revised national pressure ulcer advisory panel pressure injury staging system: Revised pressure injury staging system. J. Wound Ostomy Cont. Nurs. 2016, 43, 585.
2. Mallick, A.; Bhandari, M.; Basumatary, B.; Gupta, S.; Arora, K.; Sahani, A.; et al. Risk factors for developing pressure ulcers in neonates and novel ideas for developing neonatal antipressure ulcers solutions. J. Clin. Neonatol. 2023, 12, 27.
3. Kwong, E.; Pang, S.; Aboo, G.; Law, S. Pressure ulcer development in older residents in nursing homes: Influencing factors. J. Adv. Nurs. 2009, 65, 2608–2620.
4. Rasero, L.; Simonetti, M.; Falciani, F.; Fabbri, C.; Collini, F.; Dal Molin, A. Pressure ulcers in older adults: A prevalence study. Adv. Skin Wound Care 2015, 28, 461–464.
5. Shahin, E.; Dassen, T.; Halfens, R. Pressure ulcer prevalence and incidence in intensive care patients: A literature review. Nurs. Crit. Care 2008, 13, 71–79.
6. Moore, Z.; Patton, D.; Avsar, P.; McEvoy, N.; Curley, G.; Budri, A.; Nugent, L.; Walsh, S.; O’Connor, T. Prevention of pressure ulcers among individuals cared for in the prone position: Lessons for the COVID-19 emergency. J. Wound Care 2020, 29, 312–320.
7. Gefen, A.; Ousey, K. COVID-19: Pressure ulcers, pain and the cytokine storm. J. Wound Care 2020, 29, 540–542.
8. Perrillat, A.; Foletti, J.; Lacagne, A.; Guyot, L.; Graillon, N. Facial pressure ulcers in COVID-19 patients undergoing prone positioning: How to prevent an underestimated epidemic? J. Stomatol. Oral Maxillofac. Surg. 2020, 121, 442–444.
9. Cai, S.; Rahman, M.; Intrator, O. Obesity and pressure ulcers among nursing home residents. Med. Care 2013, 51, 478.
10. Ness, S.; Hickling, D.; Bell, J.; Collins, P. The pressures of obesity: The relationship between obesity, malnutrition and pressure injuries in hospital inpatients. Clin. Nutr. 2018, 37, 1569–1574.
11. Jackson, J.; Carlson, M.; Rubayi, S.; Scott, M.; Atkins, M.; Blanche, E.; Saunders-Newton, C.; Mielke, S.; Wolfe, M.; Clark, F. Qualitative study of principles pertaining to lifestyle and pressure ulcer risk in adults with spinal cord injury. Disabil. Rehabil. 2010, 32, 567–578.
12. Ghaisas, S.; Pyatak, E.; Blanche, E.; Blanchard, J.; Clark, F.; Group, P. Lifestyle changes and pressure ulcer prevention in adults with spinal cord injury in the pressure ulcer prevention study lifestyle intervention. Am. J. Occup. Ther. 2015, 69, 6901290020p1–6901290020p10.
13. Morita, T.; Yamada, T.; Watanabe, T.; Nagahori, E. Lifestyle risk factors for pressure ulcers in community-based patients with spinal cord injuries in Japan. Spinal Cord 2015, 53, 476–481.
14. Clark, F.; Rubayi, S.; Jackson, J.; Uhles-Tanaka, D.; Scott, M.; Atkins, M.; Gross, K.; Carlson, M. The role of daily activities in pressure ulcer development. Adv. Skin Wound Care 2001, 14, 52–54.
15. Marcusso, R.; Van Weyenbergh, J.; Moura, J.; Dahy, F.; de Moura Brasil Matos, A.; Haziot, M.; Vidal, J.; Fonseca, L.; Smid, J.; Assone, T.; et al. Dichotomy in fatal outcomes in a large cohort of people living with HTLV-1 in São Paulo, Brazil. Pathogens 2019, 9, 25.
16. Haavisto, E.; Stolt, M.; Puukka, P.; Korhonen, T.; Kielo-Viljamaa, E. Consistent practices in pressure ulcer prevention based on international care guidelines: A cross-sectional study. Int. Wound J. 2022, 19, 1141–1157.
17. LeBlanc, K.; Woo, K.; Bassett, K.; Botros, M. Professionals’ knowledge, attitudes, and practices related to pressure injuries in Canada. Adv. Skin Wound Care 2019, 32, 228–233.
18. Bates-Jensen, B.; McCreath, H.; Harputlu, D.; Patlan, A. Reliability of the Bates-Jensen wound assessment tool for pressure injury assessment: The pressure ulcer detection study. Wound Repair Regen. 2019, 27, 386–395.
19. Alves, P.; Eberhardt, T.; Soares, R.; Pinto, M.; Pinto, C.; Vales, L.; Morais, J.; Oliveira, I. Differential diagnosis in pressure ulcers and medical devices. Ceska A Slov. Neurol. A Neurochir. 2017, 80, S29–S35.
20. Boyko, T.V.; Longaker, M.T.; Yang, G.P. Review of the current management of pressure ulcers. Adv. Wound Care 2018, 7, 57–67.
21. Horup, M.; Soegaard, K.; Kjølhede, T.; Fremmelevholm, A.; Kidholm, K. Static overlays for pressure ulcer prevention: A hospital-based health technology assessment. Br. J. Nurs. 2020, 29, S24–S28.
22. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
23. Bengio, Y.; Lecun, Y.; Hinton, G. Deep learning for AI. Commun. ACM 2021, 64, 58–65.
24. Litjens, G.; Kooi, T.; Bejnordi, B.; Setio, A.; Ciompi, F.; Ghafoorian, M.; Van Der Laak, J.; Van Ginneken, B.; Sánchez, C. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88.
25. Humayun, M.; Khalil, M.; Almuayqil, S.; Jhanjhi, N. Framework for detecting breast cancer risk presence using deep learning. Electronics 2023, 12, 403.
26. Attaullah, M.; Ali, M.; Almufareh, M.; Ahmad, M.; Hussain, L.; Jhanjhi, N.; Humayun, M. Initial stage COVID-19 detection system based on patients’ symptoms and chest X-ray images. Appl. Artif. Intell. 2022, 36, 2055398.
27. Santos, C.; Almeida, M.; Lucena, A. The Nursing Diagnosis of risk for pressure ulcer: Content validation. Rev. Lat.-Am. Enferm. 2016, 24.
28. Ashfaq, F.; Ghoniem, R.; Jhanjhi, N.; Khan, N.; Algarni, A. Using Dual Attention BiLSTM to predict vehicle lane changing maneuvers on highway dataset. Systems 2023, 11, 196.
29. Lau, C.; Yu, K.; Yip, T.; Luk, L.; Wai, A.; Sit, T.; Wong, J.; Ho, J. An artificial intelligence-enabled smartphone app for real-time pressure injury assessment. Front. Med. Technol. 2022, 4, 905074.
30. Chang, C.; Christian, M.; Chang, D.; Lai, F.; Liu, T.; Chen, Y.; Chen, W. Deep learning approach based on superpixel segmentation assisted labeling for automatic pressure ulcer diagnosis. PLoS ONE 2022, 17, e0264139.
31. Cicceri, G.; De Vita, F.; Bruneo, D.; Merlino, G.; Puliafito, A. A deep learning approach for pressure ulcer prevention using wearable computing. Hum.-Centric Comput. Inf. Sci. 2020, 10, 5.
32. Lustig, M.; Schwartz, D.; Bryant, R.; Gefen, A. A machine learning algorithm for early detection of heel deep tissue injuries based on a daily history of sub-epidermal moisture measurements. Int. Wound J. 2022, 19, 1339–1348.
33. Danilovich, I.; Moshkin, V.; Reimche, A.; Tevelevich, M.; Mikhaylovskiy, N. Video monitoring over anti-decubitus protocol execution with a deep neural network to prevent pressure ulcer. In Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Virtual, 1–5 November 2021; pp. 1384–1387.
34. Salama, W.; Aly, M. Deep learning in mammography images segmentation and classification: Automated CNN approach. Alex. Eng. J. 2021, 60, 4701–4709.
35. Agnes, S.; Anitha, J.; Pandian, S.I.A.; Peter, J. Classification of mammogram images using multiscale all convolutional neural network (MA-CNN). J. Med. Syst. 2020, 44, 1–9.
36. Gao, J.; Jiang, Q.; Zhou, B.; Chen, D. Convolutional neural networks for computer-aided detection or diagnosis in medical image analysis: An overview. Math. Biosci. Eng. 2019, 16, 6536–6561.
37. Monkam, P.; Qi, S.; Ma, H.; Gao, W.; Yao, Y.; Qian, W. Detection and classification of pulmonary nodules using convolutional neural networks: A survey. IEEE Access 2019, 7, 78075–78091.
38. Li, F.; Chen, H.; Liu, Z.; Zhang, X.; Wu, Z. Fully automated detection of retinal disorders by image-based deep learning. Graefe’s Arch. Clin. Exp. Ophthalmol. 2019, 257, 495–505.
39. Ting, D.; Peng, L.; Varadarajan, A.; Keane, P.; Burlina, P.; Chiang, M.; Schmetterer, L.; Pasquale, L.; Bressler, N.; Webster, D.; et al. Deep learning in ophthalmology: The technical and clinical considerations. Prog. Retin. Eye Res. 2019, 72, 100759.
40. Hasegawa, R.; Iwamoto, Y.; Xianhua, H.; Lin, L.; Hu, H.; Xiujun, C.; Chen, Y. Automatic detection and segmentation of liver tumors in multi-phase CT images by phase attention Mask R-CNN. In Proceedings of the 2021 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 10–12 January 2021; pp. 1–5.
41. Johnson, J. Adapting Mask-RCNN for automatic nucleus segmentation. arXiv 2018, arXiv:1805.00500.
42. Yang, S.; Fang, B.; Tang, W.; Wu, X.; Qian, J.; Yang, W. Faster R-CNN based microscopic cell detection. In Proceedings of the 2017 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC), Shenzhen, China, 15–17 December 2017; pp. 345–350.
43. Zhu, G.; Piao, Z.; Kim, S. Tooth detection and segmentation with Mask R-CNN. In Proceedings of the 2020 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), Fukuoka, Japan, 19–21 February 2020; pp. 070–072.
44. Huang, X.; Sun, W.; Tseng, T.; Li, C.; Qian, W. Fast and fully-automated detection and segmentation of pulmonary nodules in thoracic CT scans using deep convolutional neural networks. Comput. Med. Imaging Graph. 2019, 74, 25–36.
45. Tang, W.; Zou, D.; Yang, S.; Shi, J.; Dan, J.; Song, G. A two-stage approach for automatic liver segmentation with Faster R-CNN and DeepLab. Neural Comput. Appl. 2020, 32, 6769–6778.
46. Akselrod-Ballin, A.; Karlinsky, L.; Hazan, A.; Bakalo, R.; Horesh, A.; Shoshan, Y.; Barkan, E. Deep learning for automatic detection of abnormal findings in breast mammography. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: Third International Workshop, DLMIA 2017, and 7th International Workshop, ML-CDS 2017, Held in Conjunction with MICCAI 2017, Québec City, QC, Canada, September 14, Proceedings 3; Springer International Publishing: Cham, Switzerland, 2017; pp. 321–329.
47. Kim, M.; Yun, J.; Cho, Y.; Shin, K.; Jang, R.; Bae, H.; Kim, N. Deep learning in medical imaging. Neurospine 2019, 16, 657.
48. Song, Y.; Zheng, S.; Li, L.; Zhang, X.; Zhang, X.; Huang, Z.; Chen, J.; Wang, R.; Zhao, H.; Chong, Y.; et al. Deep learning enables accurate diagnosis of novel coronavirus (COVID-19) with CT images. IEEE/ACM Trans. Comput. Biol. Bioinform. 2021, 18, 2775–2780.
49. Quellec, G.; Charriere, K.; Boudi, Y.; Cochener, B.; Lamard, M. Deep image mining for diabetic retinopathy screening. Med. Image Anal. 2017, 39, 178–193.
50. Esteva, A.; Kuprel, B.; Novoa, R.; Ko, J.; Swetter, S.; Blau, H.; Thrun, S. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017, 542, 115–118.
51. Shu, X.; Zhang, L.; Wang, Z.; Lv, Q.; Yi, Z. Deep neural networks with region-based pooling structures for mammographic image classification. IEEE Trans. Med. Imaging 2020, 39, 2246–2255.
52. Dansana, D.; Kumar, R.; Bhattacharjee, A.; Hemanth, D.; Gupta, D.; Khanna, A.; Castillo, O. Early diagnosis of COVID-19-affected patients based on X-ray and computed tomography images using deep learning algorithm. Soft Comput. 2020, 27, 2635–2643.
53. Su, Y.; Li, D.; Chen, X. Lung nodule detection based on Faster R-CNN framework. Comput. Methods Programs Biomed. 2021, 200, 105866.
54. Chiao, J.; Chen, K.; Liao, K.; Hsieh, P.; Zhang, G.; Huang, T. Detection and classification of breast tumors using Mask R-CNN on sonograms. Medicine 2019, 98, e15200.
55. Kumar, K.; Prasad, A.; Metan, J. A hybrid deep CNN-Cov-19-Res-Net transfer learning architype for an enhanced brain tumor detection and classification scheme in medical image processing. Biomed. Signal Process. Control 2022, 76, 103631.
56. Hardalaç, F.; Uysal, F.; Peker, O.; Çiçeklidağ, M.; Tolunay, T.; Tokgöz, N.; Kutbay, U.; Demirciler, B.; Mert, F. Fracture detection in wrist X-ray images using deep learning-based object detection models. Sensors 2022, 22, 1285.
57. Sun, X.; Wang, D.; Chen, Q.; Ni, J.; Chen, S.; Liu, X.; Cao, Y.; Liu, B. MAF-Net: Multi-branch anchor-free detector for polyp localization and classification in colonoscopy. In Proceedings of the International Conference on Medical Imaging with Deep Learning, Zurich, Switzerland, 6–8 July 2022; pp. 1162–1172.
58. Tanwar, S.; Vijayalakshmi, S.; Sabharwal, M.; Kaur, M.; AlZubi, A.; Lee, H. Detection and classification of colorectal polyp using deep learning. BioMed Res. Int. 2022, 2022, 2805607.
59. Abunasser, B.; AL-Hiealy, M.; Zaqout, I.; Abu-Naser, S. Breast cancer detection and classification using deep learning Xception algorithm. Int. J. Adv. Comput. Sci. Appl. 2022, 13, 223–228.
60. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
61. Boonrod, A.; Boonrod, A.; Meethawolgul, A.; Twinprai, P. Diagnostic accuracy of deep learning for evaluation of C-spine injury from lateral neck radiographs. Heliyon 2022, 8, e10372.
62. Wojaczek, A.; Kalaydina, R.; Gasmallah, M.; Szewczuk, M.; Zulkernine, F. Computer vision for detecting and measuring multicellular tumor spheroids of prostate cancer. In Proceedings of the 2019 IEEE Symposium Series on Computational Intelligence (SSCI), Xiamen, China, 6–9 December 2019; pp. 563–569.
63. Bharati, P.; Pramanik, A. Deep learning techniques—R-CNN to Mask R-CNN: A survey. In Computational Intelligence in Pattern Recognition, Proceedings of the CIPR 2019; Springer: Singapore, 2020; pp. 657–668.
64. Humayun, M.; Ashfaq, F.; Jhanjhi, N.; Alsadun, M. Traffic management: Multi-scale vehicle detection in varying weather conditions using YOLOv4 and spatial pyramid pooling network. Electronics 2022, 11, 2748.
65. Rani, E. LittleYOLO-SPP: A delicate real-time vehicle detection algorithm. Optik 2021, 225, 165818.
66. Liu, T.; Wang, H.; Christian, M.; Chang, C.; Lai, F.; Tai, H. Automatic segmentation and measurement of pressure injuries using deep learning models and a LiDAR camera. Sci. Rep. 2023, 13, 680.
67. Zahia, S.; Sierra-Sosa, D.; Garcia-Zapirain, B.; Elmaghraby, A. Tissue classification and segmentation of pressure injuries using convolutional neural networks. Comput. Methods Programs Biomed. 2018, 159, 51–58.
68. Liu, T.; Christian, M.; Chu, Y.; Chen, Y.; Chang, C.; Lai, F.; Tai, H. A pressure ulcers assessment system for diagnosis and decision making using convolutional neural networks. J. Formos. Med. Assoc. 2022, 121, 2227–2236.
69. Fergus, P.; Chalmers, C.; Henderson, W.; Roberts, D.; Waraich, A. Pressure ulcer categorisation using deep learning: A clinical trial to evaluate model performance. arXiv 2022, arXiv:2203.06248.
70. Swerdlow, M.; Guler, O.; Yaakov, R.; Armstrong, D. Simultaneous segmentation and classification of pressure injury image data using Mask-R-CNN. Comput. Math. Methods Med. 2023, 2023, 3858997.
71. Elmogy, M.; García-Zapirain, B.; Burns, C.; Elmaghraby, A.; El-Baz, A. Tissues classification for pressure ulcer images based on 3D convolutional neural network. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 3139–3143.
72. Wong, G.; Ong, Y.; Tong, J.; Bai, H.; Zhou, Y. YOLOv5: A state-of-the-art object detection model. arXiv 2021, arXiv:2104.08545.
73. Lin, T.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125.
74. Zhu, L.; Geng, X.; Li, Z.; Liu, C. Improving YOLOv5 with attention mechanism for detecting boulders from planetary images. Remote Sens. 2021, 13, 3776.
75. Medetec Image Databases. Available online: http://www.medetec.co.uk/files/medetec-image-databases.html (accessed on 18 February 2023).
Figure 1. Positions and areas of the body at risk for pressure ulcers.
Figure 2. The stages of pressure ulcers.
Figure 3. Applications of deep learning algorithms for medical images.
Figure 4. Architecture diagram for YOLOv5, adapted from [74]. YOLOv5 introduced a new architecture that includes a scaled YOLOv3 backbone and a novel neck design, which consists of SPP and PAN modules.
Figure 5. Bar chart showing the distribution of pressure ulcer stages and non-pressure ulcers in the dataset.
Figure 6. The training results.
Figure 7. Precision confidence curve.
Figure 8. Recall confidence curve.
Figure 9. Precision–recall curve.
Figure 10. F1 score curve.
Figure 11. Confusion matrices.
Table 1. The staging of pressure ulcers according to NPUAP.

| Stage | Name | Description | General Features |
| --- | --- | --- | --- |
| 1 | Non-blanchable erythema | Skin is intact but may appear red or discolored. It may feel warmer or cooler to the touch than the surrounding skin. | Discoloration, warmth or coolness, pain, itching |
| 2 | Partial-thickness skin loss | Skin that has lost part of its thickness, appearing as a small, open sore with no slough and a red or pink wound bed. | Abrasion, blistering, partial-thickness loss, shallow open ulcer, red or pink wound bed |
| 3 | Full-thickness skin loss | Loss of tissue in its entirety, with possible subcutaneous fat visibility, but no exposed bone, tendon, or muscle. | Deep open crater, full-thickness loss, visible subcutaneous fat; may have slough or necrotic tissue |
| 4 | Full-thickness skin loss with exposed bone, tendon, or muscle | A complete loss of tissue with exposed bone, muscle, or tendons. | Visible bone, tendon, or muscle; full-thickness loss; necrotic or slough tissue |
Table 2. Computer vision detectable features for pressure ulcer stages.

| Features | Stage 1 | Stage 2 | Stage 3 | Stage 4 |
| --- | --- | --- | --- | --- |
| Color | Skin appears reddened | Pink wound in center and discoloration around the sore | Pus or a greenish fluid from the sore | Dark purple or black color in the area |
| Texture | Affected skin may be different from the surrounding healthy skin | The ulcer area may appear broken or damaged | | The presence of eschar, firm, or mushy texture in the area |
| Border | The affected skin may have a clear border separating it from the surrounding healthy skin | | Can involve undermining or tunneling, where the ulcer extends into the surrounding tissue | |
| Depth | | Involves partial-thickness loss of skin | Full-thickness tissue loss, which can be deeper than stage-2 ulcers | A total loss of tissue with exposed bone, muscle, or tendons |
Table 3. Brief overview of deep learning algorithms for item recognition and classification in medical images.

| Study | Task | Method | Dataset | Limitations |
| --- | --- | --- | --- | --- |
| [53] | Detection of lung nodules | Faster R-CNN | LIDC-IDRI | Does not include small benign nodules; smaller dataset |
| [54] | Detection and classification of breast tumors | Mask R-CNN | Ultrasound images from China Medical University Hospital | Small dataset size limits generalizability |
| [55] | Brain tumor detection and classification using medical images | Hybrid DCNN with ResNet 152 | BraTS MRI image dataset | N/A |
| [56] | Fracture detection in wrist X-ray images | 10 different object detection models, including RetinaNet, Faster R-CNN, and RegNet | Clinical dataset collected from Gazi University Hospital | Single clinical dataset |
| [57] | Detect and classify adenomatous polyps | Multi-branch CNN with attention module | 6059 images | Not tested in a clinical setting |
| [58] | Using colonoscopy images to detect and categorize colorectal polyps | CNN to enhance and filter the images, with a single-shot multi-box detector (SSD) to identify and categorize anomalies | 27,508 whole-slide images (WSI) | Dataset limited to images observed under white light imaging (WLI) |
| [59] | Detection and classification of breast cancer stages | CNN (Xception) | Collection of ultrasound images (USIs) of breast cancer patients | N/A |
Table 4. Summary of studies on pressure ulcer stage detection and classification using deep learning.

| Study | Methodology | Results |
| --- | --- | --- |
| [66] | A system for automatic segmentation and measurement using deep learning models and a LiDAR camera | Achieved acceptable accuracy and a 26.2% mean relative error, with U-Net outperforming Mask R-CNN |
| [67] | Employed CNNs for automatic tissue classification in pressure injuries | Achieved an overall classification accuracy of 92.01%, with high precision for granulation and necrotic tissue and lower precision for slough tissue |
| [68] | Diagnosed pressure ulcers using deep learning algorithms | Achieved high accuracy, classifying erythema and non-erythema wounds with an accuracy of about 98.5% and necrotic tissue with an accuracy of about 97% |
| [69] | Categorized and documented pressure ulcers using a faster region-based CNN and a mobile platform | Achieved 45 false positives with a mean average precision of 0.6796, recall of 0.6997, and F1 score of 0.6786 using a confidence score threshold of 0.75 |
| [70] | Simultaneous segmentation and classification of pressure injuries using the Mask-R-CNN algorithm | Achieved strong F1 scores and Dice coefficients for stages 1–4, resulting in a total classification accuracy of 92.6% and segmentation accuracy of 93.1% |
| [71] | Tissue classification system for pressure ulcers using a 3D CNN | Achieved an average of 95% AUC, 92% DSC, and 10% PAD |
Table 5. Performance evaluation of the pressure ulcer detection model on different stages.

| Class | P | R | mAP@50 | mAP@50-95 |
| --- | --- | --- | --- | --- |
| all | 0.781 | 0.685 | 0.769 | 0.398 |
| NonPU | 0.725 | 0.706 | 0.711 | 0.279 |
| Stage 1 | 0.908 | 1 | 0.995 | 0.501 |
| Stage 2 | 1 | 0.164 | 0.66 | 0.336 |
| Stage 3 | 0.659 | 0.692 | 0.749 | 0.375 |
| Stage 4 | 0.615 | 0.864 | 0.729 | 0.497 |
Table 6. Comparison between our model and benchmark models.

| Study | Model | Task | Images | Classes | mAP | Precision | Recall | F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| [68] | CNN | Erythema classification | 528 | 2 | N/A | 97.7% | 95.5% | 98.5% |
| [68] | CNN | Necrotic tissue classification | 528 | 2 | N/A | 96.7% | 96.7% | 97% |
| [69] | Faster R-CNN (evaluated at mAP@0.5 and mAP@0.75) | Pressure ulcer stage classification | 2166 | 6 | 70.9% | 77.4% | 64% | 69% |
| [29] | YOLOv4 | Pressure ulcer stage classification | 2166 | 6 | 63.2% | N/A | N/A | N/A |
| Our Model | YOLOv5s | Pressure ulcer detection and classification into four stages and non-PU | 1000+ | 5 | 76.9% | 78.1% | 68.5% | 73.2% |