Article

Diagnosis of Pressure Ulcer Stage Using On-Device AI

1 Division of Biomedical Engineering, Jeonbuk National University, Jeonju 54896, Republic of Korea
2 Research Institute of Clinical Medicine of Jeonbuk National University-Biomedical Research Institute of Jeonbuk National University Hospital, Jeonju 54907, Republic of Korea
3 Department of Laboratory Medicine, Jeonbuk National University Medical School and Hospital, Jeonju 54907, Republic of Korea
4 Division of Electronics and Information Engineering, Jeonbuk National University, Jeonju 54896, Republic of Korea
* Authors to whom correspondence should be addressed.
These authors have contributed equally to this work.
Appl. Sci. 2024, 14(16), 7124; https://doi.org/10.3390/app14167124
Submission received: 19 July 2024 / Revised: 8 August 2024 / Accepted: 12 August 2024 / Published: 14 August 2024

Abstract

Pressure ulcers are a serious healthcare concern, especially for elderly patients with reduced mobility. Severe pressure ulcers are accompanied by pain, degrading patients’ quality of life. Speedy and accurate detection and classification of pressure ulcers are therefore vital for timely treatment. Conventional visual examination requires professional expertise to diagnose pressure ulcer severity, which is difficult for lay carers in domiciliary settings. In this study, we present a mobile healthcare platform incorporating a lightweight deep learning model that accurately detects pressure ulcer regions and classifies pressure ulcers into six severities: stages 1–4, deep tissue pressure injury, and unstageable. YOLOv8 models were trained and tested using 2800 annotated pressure ulcer images. Among the five tested YOLOv8 models, the YOLOv8m model exhibited the most promising detection performance, with an overall classification accuracy of 84.6% and a mAP@50 of 90.8%. A mobile application (app) was also developed around the trained YOLOv8m model; it returns a diagnostic result within approximately 3 s. Accordingly, the proposed on-device AI app can contribute to the early diagnosis and systematic management of pressure ulcers.

1. Introduction

Pressure ulcers are localized injuries to the skin, subcutaneous fat, and muscle tissue caused by obstructed blood circulation and poor nutrition in the affected area [1,2]. They often occur in areas exposed to prolonged pressure, such as the sacrum, coccyx, ischial tuberosity, and the entire pelvis [3]. They are particularly common in elderly patients with reduced mobility who spend a long time in bed [4,5]. Pressure ulcers can lead to social isolation, discomfort, pain, severe infection, and even death [6,7,8]. In addition, approximately 3 million people are treated for pressure ulcers in the United States each year, at a cost of $17.8 billion [9]. In Europe, the estimated prevalence of pressure ulcers is 10.8% [10]. The burden of pressure ulcers continues to grow as the global population ages [11]. Therefore, diagnosing pressure ulcers early and providing effective treatment based on a precise diagnostic result is essential to prevent progression to serious illness and to alleviate the future burden of pressure ulcers.
The National Pressure Ulcer Advisory Panel (NPUAP) staging criteria are widely used for consistent assessment of pressure ulcer severity [1]. Pressure ulcers are classified into six categories, namely stages 1–4, deep tissue pressure injury (DTPI), and unstageable, based on wound size, redness, tissue loss, degree of inflammation, etc. The severity of pressure ulcers must be correctly identified for effective care [12]. Traditionally, the severity of pressure ulcers, including the degree of tissue damage and infection, is diagnosed through visual examination and manual palpation by medical professionals [13,14]. However, this diagnostic process is labor-intensive, time-consuming, and observer-dependent [15,16]. Moreover, patients who suffer from pressure ulcers have difficulty visiting a hospital due to decreased mobility, and it is hard for a caregiver to monitor the progress of pressure ulcers in domiciliary settings.
Deep learning has been broadly applied in various biomedical research fields, including bio-signal investigation, medical image analysis, and biomolecular structure prediction [17,18,19,20,21,22]. Several deep learning models have also been adopted for diagnosing pressure ulcers [23]. Cicceri et al. obtained positional information with wearable inertial sensors and applied a simple deep neural network (DNN) to predict six patient postures and warn of possible exposure to pressure ulcers [24]. Slough, granulation, and necrotic eschar tissue regions in pressure ulcers have been segmented using a shallow CNN model [25], a 3D convolutional neural network (CNN) [26], and U-Net, DeepLabV3, PSPNet, FPN, and Mask R-CNN [27]. Chino et al. estimated the pressure ulcer region using a U-Net-based CNN model [28]. Pandey et al. introduced the MobileNetV2 model to detect the pressure ulcer boundary in infrared thermal images [29]. Liu et al. used the Inception-ResNet-v2 model to identify the degree of erythema and necrosis [30], and the same group tested Mask R-CNN and U-Net for segmentation and area measurement of pressure ulcers with a LiDAR camera [31]. Kim et al. applied the SE-ResNeXt101 architecture to classify pressure ulcer stages from photographs [32]. You Only Look Once (YOLO) models have been applied to detect pressure ulcer regions and identify the pressure ulcer class [33,34]. Fergus et al. utilized Faster R-CNN to classify pressure ulcers into six stages (stages 1–4, DTPI, unstageable) [35]. However, accurately recognizing the detailed severity and affected area of pressure ulcers on mobile devices has not yet been attempted (Table 1).
This study proposes a lightweight deep learning-based mobile healthcare platform for simultaneous boundary detection and classification of pressure ulcers in general camera images. Pressure ulcer images were collected from several open sources, and boundaries and classes were manually annotated in more than 3000 images. The YOLOv8 models [36], which combine low computational cost with high accuracy, are employed to delineate the affected area of pressure ulcers and sort them into six classes according to the NPUAP staging criteria. Finally, a user-friendly mobile application (app) is developed with Android Studio, offering a convenient way to monitor pressure ulcer states on the user’s smartphone or tablet. In summary, the novelty of this study is the development of an on-device AI system applying the lightweight YOLOv8 model to detect pressure ulcers: (1) we established a pressure ulcer dataset from various open sources and manually annotated it; (2) we applied YOLOv8 models and evaluated their performance in accurately estimating pressure ulcer lesions and severity; (3) we designed a mobile app embedded with the trained YOLOv8 model to help users easily check their pressure ulcer stage using their own smart devices.
The remainder of the paper is organized as follows. Section 2 introduces materials and methods used in this study, including the dataset, YOLOv8 models, and mobile app design. Section 3 presents the results of the YOLOv8 models and mobile app development. Section 4 discusses our model’s performance and future studies. Section 5 concludes our work.

2. Materials and Methods

2.1. Dataset

The pressure ulcer dataset was gathered from multiple open sources, namely the Medetec pressure ulcer dataset [37], Roboflow, and Google (see the Data Availability Statement). The collected dataset contained pressure ulcer images acquired in clinical settings from patients of various ages (a few infants and mostly elderly patients) and body sites (e.g., buttocks, foot, and cheek). Labels and bounding boxes were manually annotated for the collected dataset. Each pressure ulcer image was assigned one of six labels (stages 1–4, DTPI, unstageable) based on the NPUAP staging criteria (Table 2) [1]. Bounding boxes for detecting pressure ulcer areas were generated using a polygonal tool.
The initial annotated dataset consisted of 1830 objects in total: 149 tags for stage 1, 400 for stage 2, 478 for stage 3, 167 for stage 4, 300 for DTPI, and 336 for unstageable. Since the label distribution in the original dataset was unbalanced, some data were augmented through flipping, rotating, and scaling. The augmentation was conducted to reduce overfitting and improve object detection performance [35]. After augmentation, the pressure ulcer dataset comprised 2800 objects in total: 477 tags for stage 1, 400 for stage 2, 478 for stage 3, 459 for stage 4, 486 for DTPI, and 500 for unstageable. The original collected images differed in size (560 × 368–40 × 640). All images were resized to 640 × 640 pixels to match the input resolution of the YOLOv8 models pre-trained on the COCO dataset (640 × 640 pixels, 200 k images). The pre-trained YOLOv8 models were fine-tuned on the pressure ulcer dataset (640 × 640 pixels, 2.8 k images). The pressure ulcer dataset was randomly split into training (70%), validation (20%), and test (10%) sets.

2.2. YOLOv8 Model Description

YOLO was first introduced by Redmon et al. [38]. It was the first model to use a one-stage object detection approach that simultaneously predicts bounding boxes and object classes. Compared to conventional two-stage object detectors such as R-CNN, YOLO models have shown excellent performance with short detection times [36].
YOLOv8, released by Ultralytics in 2023, is the latest version of YOLO [39,40]. Compared to its predecessors, it offers the best object detection performance [36]. In particular, the model takes an anchor-free approach, directly predicting the center point and size of the bounding box containing each object. This reduces model complexity by eliminating the need to manually define the size and number of anchor boxes (Figure 1).
The YOLOv8 structure contains a feature pyramid network (FPN), just as YOLOv5 does. The FPN uses a bottom-up structure with lateral connections to detect objects effectively at multiple scales. However, there are a few differences between the two models. The kernel size of the first convolution layer in the backbone and neck was changed to 3 × 3 (k = 3, Figure 1). The C3 module in YOLOv5 was replaced with the C2f module in YOLOv8 (Figure 1); the C2f module reduces the number of blocks in the initial stage of the backbone network and the number of output channels in the final stage, yielding a lighter model with high accuracy. In the head, the objectness branch of YOLOv5 was removed, and YOLOv8 uses separate heads to process the classification and regression tasks individually (Figure 1). These split heads allow each branch to focus on its own task, enhancing overall object detection accuracy. Overall, the advanced backbone, neck, and head architectures improve feature extraction and object detection accuracy. There are five YOLOv8 models (8n, 8s, 8m, 8l, and 8x), differing in model size (depth (d), width (w), ratio (r)) and number of trainable parameters (Figure 1 and Table 3).
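As a minimal illustration of the anchor-free encoding, a predicted center/size box can be converted to corner coordinates as follows (a generic sketch, not YOLOv8's internal implementation):

```python
def cxcywh_to_xyxy(cx, cy, w, h):
    """Convert a center/size box prediction to (x1, y1, x2, y2) corners."""
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

# A box centered at (320, 320) with width 100 and height 50
print(cxcywh_to_xyxy(320, 320, 100, 50))  # (270.0, 295.0, 370.0, 345.0)
```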
The YOLOv8 models were trained on a desktop computer (NVIDIA GeForce RTX 3090 Ti GPU, AMD Ryzen 5950X CPU, 128 GB RAM) with Python 3.10.9 and PyTorch 1.12.1+cu113. The hyperparameters used in this study were as follows: a learning rate of 0.01, a batch size of 16, 200 epochs, the AdamW optimizer, and no dropout. The five YOLOv8 models were trained, and for each model, the weights that achieved the highest mean average precision (mAP) on the validation dataset during the 200 training epochs were saved. After training, the performances of the five saved YOLOv8 models were compared.
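With the Ultralytics Python API, the training configuration above might look roughly as follows; this is a hedged sketch, the dataset file "pressure_ulcer.yaml" is a hypothetical placeholder, and the `ultralytics` package must be installed separately:

```python
# Sketch of the training setup described above. The dataset config name is
# a hypothetical placeholder, not an artifact released with this study.
train_args = dict(
    data="pressure_ulcer.yaml",  # hypothetical dataset config (paths + 6 class names)
    imgsz=640,                   # input resolution used in this study
    epochs=200,
    batch=16,
    lr0=0.01,                    # initial learning rate
    optimizer="AdamW",
    dropout=0.0,                 # no dropout
)

def launch_training(args):
    """Fine-tune a COCO-pretrained YOLOv8m on the pressure ulcer dataset."""
    from ultralytics import YOLO  # deferred import: requires `pip install ultralytics`
    model = YOLO("yolov8m.pt")    # COCO-pretrained weights
    return model.train(**args)    # Ultralytics keeps the best-mAP checkpoint
```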

2.3. Mobile App Design

The mobile app developed in this study was designed to run the trained YOLOv8 model on the application processor of a mobile device. The trained YOLOv8m.pt model was first exported to TorchScript, which uses just-in-time (JIT) compilation to enable efficient execution on mobile devices. The TorchScript model was then optimized using the “torch.utils.mobile_optimizer.optimize_for_mobile” utility, which includes fusing convolution and batch-normalization layers, hoisting convolution packed parameters, and automatic GPU transfer [41]. Finally, the optimized model was converted to the .pth format for mobile deployment. The app was developed using Android Studio, the official integrated development environment (IDE) for Android app development, and written in Kotlin, the language generally used by Android mobile app developers. The app was designed with two primary functions: first, it enables users to diagnose all six types of pressure ulcers using on-device AI; second, it provides guidelines based on the diagnostic result. The mobile app comprises log-in, home, results, and information pages.
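One common TorchScript-to-mobile export path can be sketched with a stand-in module as follows (a minimal sketch: `TinyNet` replaces the trained YOLOv8m network, which is not bundled here, and the lite-interpreter file name is illustrative):

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# Stand-in model: in the real pipeline this would be the trained YOLOv8m network.
class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.bn = torch.nn.BatchNorm2d(8)

    def forward(self, x):
        return torch.relu(self.bn(self.conv(x)))

model = TinyNet().eval()                # eval mode enables conv+BN fusion
scripted = torch.jit.script(model)      # JIT-compile to TorchScript
mobile = optimize_for_mobile(scripted)  # fuse layers, pack conv parameters
mobile._save_for_lite_interpreter("tinynet.ptl")  # artifact loaded on-device
```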

3. Results

3.1. Pressure Ulcer Detection Performance

Five trained YOLOv8 models predicted both the class (stage 1–4, DTPI, unstageable) and the regions of pressure ulcers. Table 4 shows a comparison of the overall object detection performance of the five trained YOLOv8 models using the test dataset. To compare the object detection performance, five metrics were calculated: accuracy, precision, recall, F1 score, and mean average precision (mAP).
Accuracy, precision, recall, and F1 score are the most common classification evaluation metrics [42]. Accuracy is the proportion of correctly classified objects:
Accuracy = (TP + TN) / (TP + TN + FP + FN)
TP, TN, FP, and FN denote the number of true positives, true negatives, false positives, and false negatives of each label, respectively.
Precision denotes the ratio of the number of positives which are correctly classified (TP) to all positive predictions (TP + FP):
Precision = TP / (TP + FP)
Recall is defined as the rate of true positives (TP) among all actual positives (TP + FN):
Recall = TP / (TP + FN)
F1 score represents the harmonic mean of precision and recall, balancing both FP and FN:
F1 score = 2 × (Precision × Recall) / (Precision + Recall)
mAP is a standard metric to comprehensively examine the performance of a multi-class object detection model [34,35]. AP is the area under the precision–recall curve for each class, and mAP averages the AP values over all classes:
mAP = (1/N) × Σ_{i=1}^{N} AP_i
where N is the total number of classes and AP_i represents the AP for a given class i. Here, mAP@50 is the mean of the AP values calculated at an Intersection over Union (IoU) > 0.5, and mAP@50-95 is the average of the mAP values computed at IoU thresholds ranging from 0.5 to 0.95. IoU is defined as the overlap ratio between the predicted bounding box and the ground-truth object region:
IoU = Area(Predicted bounding box ∩ Ground-truth region) / Area(Predicted bounding box ∪ Ground-truth region)
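The metrics defined above can be computed directly in plain Python; the counts and boxes below are illustrative values, not the paper's results:

```python
def precision(tp, fp):
    """TP / (TP + FP): fraction of positive predictions that are correct."""
    return tp / (tp + fp)

def recall(tp, fn):
    """TP / (TP + FN): fraction of actual positives that are found."""
    return tp / (tp + fn)

def f1_score(p, r):
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def mean_ap(ap_per_class):
    """mAP: the per-class AP values averaged over all N classes."""
    return sum(ap_per_class) / len(ap_per_class)

# Illustrative example: two partially overlapping 10 x 10 boxes
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.1429
```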
Among the five tested YOLOv8 models, the YOLOv8m model recorded the highest values on all metrics except precision (Table 4). The YOLOv8m model showed strong performance on the test dataset, with an overall accuracy of 0.846, recall of 0.891, and mAP@50 of 0.908. This implies that the YOLOv8m model can classify and detect pressure ulcers with high confidence. Accordingly, the YOLOv8m model was chosen as the best option for diagnosing pressure ulcer stage and was deployed on mobile devices.
Figure 2 shows the values of the box loss (box_loss), class loss (cls_loss), and distribution focal loss (dfl_loss) at each epoch of the YOLOv8m model training process. The box_loss measures the difference between the estimated and ground-truth bounding box coordinates. The cls_loss measures the error between the predicted class probabilities and the ground-truth class. The dfl_loss provides a more informative and accurate estimation of the bounding box. The total loss, a combination of these losses, decreased on both the training and validation datasets during training. Accordingly, the precision, recall, and mAP metrics gradually improved with increasing epochs.
Figure 3 shows the confusion matrix, providing a graphical representation of the detailed classification results of the YOLOv8m model. The trained YOLOv8m model predicted the detailed type of pressure ulcers in the test dataset with high scores.
From this confusion matrix, we calculated all metrics for all six classes (Table 5). Although the metrics varied across pressure ulcer types, the results were good overall: the YOLOv8m model achieved nearly 90% precision, recall, F1 score, and mAP@50.
Examples of output images from the trained YOLOv8m model are shown in Figure 4. Both the bounding box outlining each pressure ulcer area and the corresponding stage with its classification probability are displayed simultaneously. Individual pressure ulcers such as stage 4, DTPI, and unstageable were successfully detected in test images containing a single class (Figure 4a–c). When two or more stages of pressure ulcers were present in a test image, the trained YOLOv8m model was also able to enclose each ulcer region and identify its stage (Figure 4d–f). Hence, the trained YOLOv8m model provided reasonable predictions of the affected area and severity of individual pressure ulcers.

3.2. Pressure Ulcer Checker Mobile App

Based on the trained YOLOv8m model, the ‘Pressure Ulcer Checker’ mobile app was developed. The main components of this mobile app are registration, home, detection results, and instruction pages (Figure 5).
When a user runs the mobile app for the first time, the log-in page appears for user identification (Figure 5a). If the user has not yet registered, they can sign up by pressing the ‘Join the membership’ button on the log-in page. After successfully logging in, the user sees the home page, which has a red and a blue button (Figure 5b). Pressing the red button (camera) launches the rear camera of the mobile device so the user can photograph the pressure ulcer directly within the app. Pressing the blue button (gallery) lets the user access the device’s photo gallery and select the photo to be inspected. The user can also check the inspection history, including the pressure ulcer photo, inspection date, and severity, in the ‘Latest Results’ section of the home page (Figure 5b). After the user presses either button, the inspection is conducted by the trained YOLOv8m model using the provided photo. When the inspection finishes, the diagnostic results are displayed on the screen, including the bounding boxes and the class of each pressure ulcer with its probability (Figure 5c). The detection results obtained from the mobile app (Figure 5c) are the same as those obtained from the desktop computer (Figure 4e). When the user chooses the ‘next’ button in Figure 5c, the mobile app offers instructions on how to prevent pressure ulcers and information on the diagnosed stage (Figure 5d). The instructions and information about pressure ulcers were informed by previous studies [1,2,30].
The ‘Pressure Ulcer Checker’ mobile app was tested on several Android phones: the Samsung Galaxy S23 (Qualcomm Snapdragon 8 Gen 2, 8 GB RAM, 256 GB storage), S22 (Qualcomm Snapdragon 8 Gen 1, 8 GB RAM, 256 GB storage), and S21 (Exynos 2100, 8 GB RAM, 256 GB storage). Inspecting one image took approximately 2.82 ± 0.13 s, 3.16 ± 0.21 s, and 3.18 ± 0.17 s on the S23, S22, and S21, respectively. In addition, we derived the requirements for properly running the developed mobile app (Table 6). First, the operating system (OS) should be Android 10 or newer for smooth operation alongside various system and third-party applications. Second, the minimum application processor (AP) requirement is a 64-bit ARMv8-A processor, following the application development guidelines provided by Google Play. Third, 4 GB of RAM is proposed as the minimum requirement; RAM usage was monitored using the Profiler tool in Android Studio, and 4 GB was necessary to run the OS, fundamental applications, the trained YOLOv8m model, and the deep learning framework comfortably. Finally, 1 GB or more of storage is required for installing the mobile app (241 MB), storing enough photos of pressure ulcers, and saving diagnostic result log files.

4. Discussion

This study presents an on-device AI healthcare platform that helps users conveniently detect pressure ulcer lesion regions, predict pressure ulcer classes, and check pressure ulcer states. To the best of our knowledge, no prior study has developed an on-device pressure ulcer staging system with comparable performance. The YOLOv8 models used in this study are the newest versions built by Ultralytics [39]. They have been extensively applied in various object detection research fields because of their high accuracy and fast prediction times [43,44,45,46]; thus, we tested all five YOLOv8 models (8n, 8s, 8m, 8l, and 8x). From 8n to 8x, the model size and the number of trainable parameters increase [40]. With a sufficiently large dataset (e.g., the COCO dataset with 200 k annotated images), object detection performance has been shown to improve with increasing model size [40]. However, the YOLOv8m model performed best in this study (Table 4) because each YOLOv8 model was fine-tuned on a relatively small pressure ulcer dataset (about 3 k annotated images). For this reason, the trained YOLOv8m model was embedded in the mobile app. It successfully generated bounding boxes along with pressure ulcer classes and confidence scores in both the desktop (Figure 4) and smartphone settings (Figure 5c).
Compared with the other classes, classification performance was relatively low for stage 2 (Table 5) because the trained YOLOv8m model mainly misclassified stage 2 (actual class) as the background (no detection) class (Figure 3). Representative failure cases for stage 2 are shown in Figure 6. The ground-truth labels contain multiple stage 2 regions (Figure 6b,e), but the YOLOv8m model detected one large stage 2 region instead of multiple small regions (Figure 6c,f). This implies that more magnified or higher-resolution images could enhance the model’s pressure ulcer detection performance.
Several studies have inferred both the areas and the classes of pressure ulcers using the Faster R-CNN [35], YOLOv4 [33], and YOLOv5s [34] models. Table 7 presents a comparative analysis of our model (YOLOv8m) against these three previously published studies. All studies utilized thousands of pressure ulcer images to train each model. Unlike references [33,34], we annotated all types of pressure ulcers following the NPUAP staging system [1]. Moreover, our YOLOv8m model outperforms the previous models on all five metrics (Table 7). Faster R-CNN requires a pre-defined size and number of anchor boxes for object detection [35], whereas the YOLOv8m model is anchor-free, directly predicting the center point and size of the bounding box containing the object; this reduces model complexity and improves the accuracy of inferring the pressure ulcer region. Compared to the previous YOLO models [33,34], YOLOv8 uses separate heads to estimate bounding boxes and classes, which also enhances object detection performance. As a result, the YOLOv8m model achieved excellent pressure ulcer detection results in this study (Table 5 and Table 7).
We also created a user-friendly mobile app built around the lightweight YOLOv8m model (Figure 5). It allows users to easily monitor pressure ulcers anytime and anywhere using their own mobile devices. It requires neither a client–server system [35], nor a network connection (e.g., 4G/5G, Wi-Fi, or Bluetooth), nor extra hardware such as an inertial sensor [24] or thermal camera [29]. The mobile app additionally provides information about the pressure ulcer based on the diagnostic result (Figure 5d). Consequently, this on-device AI app can help non-experts understand pressure ulcer status and support decision-making by medical experts. Ultimately, severe infection and hospital admission can be prevented through early diagnosis and timely treatment.
Although the developed platform showed superior performance, model robustness, reliability, and generalizability can be further improved by refining lightweight deep learning models, image processing, and data augmentation techniques, and by collecting more diverse pressure ulcer images acquired from various populations and camera recording conditions. In this study, we tested pressure ulcer detection performance using publicly available datasets, so we plan to evaluate the practical usability and effectiveness of the on-device AI healthcare platform in various clinical settings.

5. Conclusions

We proposed an on-device AI mobile app to automatically detect and classify pressure ulcers. To establish the object detection model, pressure ulcer images were gathered from diverse open sources, augmented, and annotated by hand. A total of 2800 pressure ulcer images were used to train, validate, and test five YOLOv8 models (8n, 8s, 8m, 8l, and 8x). Their pressure ulcer detection performance was compared using five metrics: accuracy, precision, recall, F1 score, and mAP. The YOLOv8m model showed the highest detection performance, with an accuracy of 0.846, precision of 0.897, recall of 0.891, F1 score of 0.894, and mAP@50 of 0.908, and it outperformed models from previous studies. This YOLOv8m model was then implemented in an Android app developed with Android Studio and the Kotlin programming language and tested on several Android phones (Samsung Galaxy S21, S22, and S23). The on-device AI mobile app can precisely determine pressure ulcer areas and stages in about 3 s. The proposed system should be useful for the early diagnosis of and recovery from pressure ulcers.

Author Contributions

Y.C.: investigation, methodology, software, writing—original draft, writing—review and editing; J.H.K.: data curation, investigation, methodology, visualization, writing—original draft, writing—review and editing; H.W.S.: data curation, investigation, methodology, validation, writing—original draft, writing—review and editing; C.H.: software, validation, visualization; S.Y.L.: funding acquisition, methodology, project administration, validation, writing—review and editing; T.G.: conceptualization, funding acquisition, resources, supervision, writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Bio&Medical Technology Development Program of the National Research Foundation (NRF) funded by the Korean government (MSIT) (No. RS-2023-00236157).

Institutional Review Board Statement

Ethical review and approval were waived for this study due to the usage of a public open dataset.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available datasets were analyzed in this study. These datasets can be found in the Medetec Wound Database (https://www.medetec.co.uk/files/medetec-image-databases.html) (accessed on 7 August 2024) and at Roboflow (https://universe.roboflow.com/stage2-n7xya/pressure-ulcer-sxitf) (accessed on 7 August 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Edsberg, L.E.; Black, J.M.; Goldberg, M.; McNichol, L.; Moore, L.; Sieggreen, M. Revised national pressure ulcer advisory panel pressure injury staging system: Revised pressure injury staging system. J. Wound Ostomy Cont. Nurs. 2016, 43, 585. [Google Scholar] [CrossRef] [PubMed]
  2. Mervis, J.S.; Phillips, T.J. Pressure ulcers: Pathophysiology, epidemiology, risk factors, and presentation. J. Am. Acad. Dermatol. 2019, 81, 881–890. [Google Scholar] [CrossRef]
  3. Rasero, L.; Simonetti, M.; Falciani, F.; Fabbri, C.; Collini, F.; Dal Molin, A. Pressure ulcers in older adults: A prevalence study. Adv. Ski. Wound Care 2015, 28, 461–464. [Google Scholar] [CrossRef]
  4. Jaul, E.; Barron, J.; Rosenzweig, J.P.; Menczel, J. An overview of co-morbidities and the development of pressure ulcers among older adults. BMC Geriatr. 2018, 18, 305. [Google Scholar] [CrossRef] [PubMed]
  5. Mallick, A.N.; Bhandari, M.; Basumatary, B.; Gupta, S.; Arora, K.; Sahani, A.K. Risk factors for developing pressure ulcers in neonates and novel ideas for developing neonatal antipressure ulcers solutions. J. Clin. Neonatol. 2023, 12, 27–33. [Google Scholar] [CrossRef]
  6. Thomas, D.R. Prevention and treatment of pressure ulcers. J. Am. Med. Dir. Assoc. 2006, 7, 46–59. [Google Scholar] [CrossRef]
7. Gorecki, C.; Brown, J.M.; Nelson, E.A.; Briggs, M.; Schoonhoven, L.; Dealey, C.; Defloor, T.; Nixon, J.; European Quality of Life Pressure Ulcer Project Group. Impact of pressure ulcers on quality of life in older patients: A systematic review. J. Am. Geriatr. Soc. 2009, 57, 1175–1183.
8. Thomas, D.R.; Goode, P.S.; Tarquine, P.H.; Allman, R.M. Hospital-acquired pressure ulcers and risk of death. J. Am. Geriatr. Soc. 1996, 44, 1435–1440.
9. Hajhosseini, B.; Longaker, M.T.; Gurtner, G.C. Pressure injury. Ann. Surg. 2020, 271, 671–679.
10. Moore, Z.; Avsar, P.; Conaty, L.; Moore, D.H.; Patton, D.; O’Connor, T. The prevalence of pressure ulcers in Europe, what does the European data tell us: A systematic review. J. Wound Care 2019, 28, 710–719.
11. Zhang, X.; Zhu, N.; Li, Z.; Xie, X.; Liu, T.; Ouyang, G. The global burden of decubitus ulcers from 1990 to 2019. Sci. Rep. 2021, 11, 21750.
12. Haavisto, E.; Stolt, M.; Puukka, P.; Korhonen, T.; Kielo-Viljamaa, E. Consistent practices in pressure ulcer prevention based on international care guidelines: A cross-sectional study. Int. Wound J. 2022, 19, 1141–1157.
13. Ankrom, M.A.; Bennett, R.G.; Sprigle, S.; Langemo, D.; Black, J.M.; Berlowitz, D.R.; Lyder, C.H. Pressure-related deep tissue injury under intact skin and the current pressure ulcer staging systems. Adv. Skin Wound Care 2005, 18, 35–42.
14. Nixon, J.; Thorpe, H.; Barrow, H.; Phillips, A.; Andrea Nelson, E.; Mason, S.A.; Cullum, N. Reliability of pressure ulcer classification and diagnosis. J. Adv. Nurs. 2005, 50, 613–623.
15. Nancy, G.A.; Kalpana, R.; Nandhini, S. A study on pressure ulcer: Influencing factors and diagnostic techniques. Int. J. Low. Extrem. Wounds 2022, 21, 254–263.
16. Stausberg, J.; Lehmann, N.; Kröger, K.; Maier, I.; Niebel, W. Reliability and validity of pressure ulcer diagnosis and grading: An image-based survey. Int. J. Nurs. Stud. 2007, 44, 1316–1323.
17. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
18. Wainberg, M.; Merico, D.; Delong, A.; Frey, B.J. Deep learning in biomedicine. Nat. Biotechnol. 2018, 36, 829–838.
19. Esteva, A.; Chou, K.; Yeung, S.; Naik, N.; Madani, A.; Mottaghi, A.; Liu, Y.; Topol, E.; Dean, J.; Socher, R. Deep learning-enabled medical computer vision. NPJ Digit. Med. 2021, 4, 5.
20. Bote-Curiel, L.; Munoz-Romero, S.; Gerrero-Curieses, A.; Rojo-Álvarez, J.L. Deep learning and big data in healthcare: A double review for critical beginners. Appl. Sci. 2019, 9, 2331.
21. Tang, D.; Chen, J.; Ren, L.; Wang, X.; Li, D.; Zhang, H. Reviewing CAM-based deep explainable methods in healthcare. Appl. Sci. 2024, 14, 4124.
22. Rippon, M.G.; Fleming, L.; Chen, T.; Rogers, A.A.; Ousey, K. Artificial intelligence in wound care: Diagnosis, assessment and treatment of hard-to-heal wounds: A narrative review. J. Wound Care 2024, 33, 229–242.
23. Dweekat, O.Y.; Lam, S.S.; McGrath, L. Machine learning techniques, applications, and potential future opportunities in pressure injuries (bedsores) management: A systematic review. Int. J. Environ. Res. Public Health 2023, 20, 796.
24. Cicceri, G.; De Vita, F.; Bruneo, D.; Merlino, G.; Puliafito, A. A deep learning approach for pressure ulcer prevention using wearable computing. Hum.-Centric Comput. Inf. Sci. 2020, 10, 5.
25. Zahia, S.; Sierra-Sosa, D.; Garcia-Zapirain, B.; Elmaghraby, A. Tissue classification and segmentation of pressure injuries using convolutional neural networks. Comput. Meth. Programs Biomed. 2018, 159, 51–58.
26. Elmogy, M.; García-Zapirain, B.; Burns, C.; Elmaghraby, A.; El-Baz, A. Tissues classification for pressure ulcer images based on 3D convolutional neural network. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018.
27. Chang, C.W.; Christian, M.; Chang, D.H.; Lai, F.; Liu, T.J.; Chen, Y.S.; Chen, W.J. Deep learning approach based on superpixel segmentation assisted labeling for automatic pressure ulcer diagnosis. PLoS ONE 2022, 17, e0264139.
28. Chino, D.Y.; Scabora, L.C.; Cazzolato, M.T.; Jorge, A.E.; Traina, C., Jr.; Traina, A.J. Segmenting skin ulcers and measuring the wound area using deep convolutional networks. Comput. Meth. Programs Biomed. 2020, 191, 105376.
29. Pandey, B.; Joshi, D.; Arora, A.S.; Upadhyay, N.; Chhabra, H. A deep learning approach for automated detection and segmentation of pressure ulcers using infrared-based thermal imaging. IEEE Sens. J. 2022, 22, 14762–14768.
30. Liu, T.J.; Christian, M.; Chu, Y.-C.; Chen, Y.-C.; Chang, C.-W.; Lai, F.; Tai, H.-C. A pressure ulcers assessment system for diagnosis and decision making using convolutional neural networks. J. Formos. Med. Assoc. 2022, 121, 2227–2236.
31. Liu, T.J.; Wang, H.; Christian, M.; Chang, C.-W.; Lai, F.; Tai, H.-C. Automatic segmentation and measurement of pressure injuries using deep learning models and a LiDAR camera. Sci. Rep. 2023, 13, 680.
32. Kim, J.; Lee, C.; Choi, S.; Sung, D.-I.; Seo, J.; Lee, Y.N.; Lee, J.H.; Han, E.J.; Kim, A.Y.; Park, H.S. Augmented decision-making in wound care: Evaluating the clinical utility of a deep-learning model for pressure injury staging. Int. J. Med. Inform. 2023, 180, 105266.
33. Lau, C.H.; Yu, K.H.-O.; Yip, T.F.; Luk, L.Y.F.; Wai, A.K.C.; Sit, T.-Y.; Wong, J.Y.-H.; Ho, J.W.K. An artificial intelligence-enabled smartphone app for real-time pressure injury assessment. Front. Med. Technol. 2022, 4, 905074.
34. Aldughayfiq, B.; Ashfaq, F.; Jhanjhi, N.; Humayun, M. YOLO-based deep learning model for pressure ulcer detection and classification. Healthcare 2023, 11, 1222.
35. Fergus, P.; Chalmers, C.; Henderson, W.; Roberts, D.; Waraich, A. Pressure ulcer categorization and reporting in domiciliary settings using deep learning and mobile devices: A clinical trial to evaluate end-to-end performance. IEEE Access 2023, 11, 65138–65152.
36. Terven, J.; Córdova-Esparza, D.-M.; Romero-González, J.-A. A comprehensive review of YOLO architectures in computer vision: From YOLOv1 to YOLOv8 and YOLO-NAS. Mach. Learn. Knowl. Extr. 2023, 5, 1680–1716.
37. Medetec Wound Database: Stock Pictures of Wounds. Available online: https://www.medetec.co.uk/files/medetec-image-databases.html (accessed on 7 August 2024).
38. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016.
39. YOLO: A Brief History. Available online: https://docs.ultralytics.com (accessed on 16 July 2024).
40. Ultralytics GitHub Repository. Available online: https://github.com/ultralytics/ultralytics?tab=readme-ov-file (accessed on 16 July 2024).
41. PyTorch Mobile Optimizer. Available online: https://pytorch.org/docs/stable/mobile_optimizer.html (accessed on 7 August 2024).
42. Yacouby, R.; Axman, D. Probabilistic extension of precision, recall, and F1 score for more thorough evaluation of classification models. In Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems, Online, 20 November 2020.
43. Sharma, N.; Baral, S.; Paing, M.P.; Chawuthai, R. Parking time violation tracking using YOLOv8 and tracking algorithms. Sensors 2023, 23, 5843.
44. Yang, G.; Wang, J.; Nie, Z.; Yang, H.; Yu, S. A lightweight YOLOv8 tomato detection algorithm combining feature enhancement and attention. Agronomy 2023, 13, 1824.
45. Xiao, B.; Nguyen, M.; Yan, W.Q. Fruit ripeness identification using YOLOv8 model. Multimed. Tools Appl. 2024, 83, 28039–28056.
46. Chabi Adjobo, E.; Sanda Mahama, A.T.; Gouton, P.; Tossa, J. Automatic localization of five relevant dermoscopic structures based on YOLOv8 for diagnosis improvement. J. Imaging 2023, 9, 148.
Figure 1. YOLOv8 model architecture. d, w, and r indicate the depth multiple, width multiple, and ratio of each module, respectively. k, s, and p denote kernel size, stride, and padding number, respectively.
Figure 2. YOLOv8m training results.
Figure 3. Confusion matrix of pressure ulcer classification by the trained YOLOv8m model.
Figure 4. Representative pressure ulcer detection results by the trained YOLOv8m model. (ac) Single stage detection in each image. (df) Multiple stage detections in each image.
Figure 5. Development of a mobile app for detecting pressure ulcers. (a) Log-in page. (b) App home page. (c) Detection results after inspection. (d) Provision of instructions and information according to the results.
Figure 6. Failure cases of detecting stage 2. (a,d) Original images. (b,e) Ground-truth labels. (c,f) Prediction results.
Table 1. Summary of the existing studies on identifying pressure ulcers applying deep learning models.
| Reference | Model | Task | Description | Limitation |
|---|---|---|---|---|
| [24] | DNN | Classification | Monitored six postures of patients using inertial sensors and a DNN | Cannot recognize pressure ulcer region and stage |
| [30] | Inception-ResNet-v2 | Classification | Classified the severity of erythema and necrosis, achieving over 97% classification accuracy | Cannot recognize pressure ulcer region and stage |
| [32] | SE-ResNeXt101 | Classification | Categorized pressure ulcers into seven classes (stage 1–4, DTPI, unstageable, others), achieving a 71.5% F1 score | Cannot estimate pressure ulcer region |
| [25,26] | CNN [25], 3D CNN [26] | Segmentation | Segmented pressure ulcer RGB images into slough, granulation, and necrotic eschar tissue regions, achieving 92% overall classification accuracy [25] and 95% AUC [26] | Cannot predict detailed pressure ulcer stage |
| [27] | Five CNN models | Segmentation | Applied U-Net, DeepLabV3, PSPNet, FPN, and Mask R-CNN for tissue segmentation, achieving 99.6% tissue classification accuracy | Cannot predict detailed pressure ulcer stage |
| [28] | U-Net | Segmentation | Segmented pressure ulcer wounds, achieving a 90% F1 score | Cannot predict detailed pressure ulcer stage |
| [31] | Mask R-CNN, U-Net | Segmentation | Segmented pressure ulcer regions and measured their areas via a LiDAR camera and deep learning models, achieving a 26.2% mean relative error | Cannot predict detailed pressure ulcer stage |
| [29] | MobileNetV2 | Boundary detection | Detected pressure ulcer regions by displaying bounding boxes over them, achieving 75.9% mAP@50 | Cannot predict detailed pressure ulcer stage |
| [33,34,35] | YOLOv4 [33], YOLOv5 [34], Faster R-CNN [35] | Boundary detection, classification | Simultaneously predicted pressure ulcer boundaries and classes, achieving 63.2% accuracy [33], 73.2% F1 score [34], and 69.6% F1 score [35] | Relatively low detection performance and portability |
Table 2. Features of each pressure ulcer label with representative images.
| Label | Image | Features |
|---|---|---|
| Stage 1 | Applsci 14 07124 i001 | Local non-chronic erythema; no skin damage |
| Stage 2 | Applsci 14 07124 i002 | Thin, open, pink wound; not only the epidermis but also part of the dermis is damaged; intact or damaged serous blisters |
| Stage 3 | Applsci 14 07124 i003 | The epidermis, dermis, and subcutaneous tissue are damaged; the depth of tissue damage can be measured; can involve tunneling or undermining |
| Stage 4 | Applsci 14 07124 i004 | Full-thickness skin damage with exposure of bones, ligaments, and muscles; partial growth of floating tissue or dry crust at the base of the wound |
| DTPI | Applsci 14 07124 i005 | Partial skin discoloration to purple or reddish brown without skin damage; blood-filled blisters or deep bruises |
| Unstageable | Applsci 14 07124 i006 | Full-thickness skin damage with the wound bed covered by necrotic tissue; the extent of tissue damage is unknown |
Table 3. Comparison of YOLOv8 architectures (8n–8x).
| Model | Depth Multiple (d) | Width Multiple (w) | Ratio (r) | Trainable Parameters |
|---|---|---|---|---|
| YOLOv8n | 0.33 | 0.25 | 2.0 | 3.2 M |
| YOLOv8s | 0.33 | 0.50 | 2.0 | 28.6 M |
| YOLOv8m | 0.67 | 0.75 | 1.5 | 78.9 M |
| YOLOv8l | 1.00 | 1.00 | 1.0 | 165.2 M |
| YOLOv8x | 1.00 | 1.25 | 1.0 | 157.8 M |
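To make the scaling factors in Table 3 concrete, the following Python sketch shows how a depth multiple d shrinks a block's repeat count and how a width multiple w shrinks its channel count. The helper names and the rounding-to-a-multiple-of-8 convention are illustrative assumptions, not the Ultralytics implementation itself.

```python
import math

def scale_depth(n_repeats: int, d: float) -> int:
    """Scale a module's repeat count by the depth multiple d (at least 1 repeat)."""
    return max(round(n_repeats * d), 1)

def scale_width(channels: int, w: float, divisor: int = 8) -> int:
    """Scale a channel count by the width multiple w, rounded up to a multiple of 8."""
    return math.ceil(channels * w / divisor) * divisor

# Example: a base block with 3 repeats and 512 channels
print(scale_depth(3, 0.33))    # YOLOv8n (d = 0.33) -> 1 repeat
print(scale_depth(3, 0.67))    # YOLOv8m (d = 0.67) -> 2 repeats
print(scale_width(512, 0.25))  # YOLOv8n (w = 0.25) -> 128 channels
print(scale_width(512, 0.75))  # YOLOv8m (w = 0.75) -> 384 channels
```

This is why the parameter counts in Table 3 grow so quickly between variants: the width multiple scales every convolution's channel dimension, and parameters grow roughly with the square of the channel count.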
Table 4. Overall object detection performance for the test dataset according to YOLOv8 models (8n–8x).
| Model | Accuracy | Precision | Recall | F1 Score | mAP@50 | mAP@50-95 |
|---|---|---|---|---|---|---|
| YOLOv8n | 0.810 | 0.910 | 0.870 | 0.889 | 0.898 | 0.674 |
| YOLOv8s | 0.814 | 0.922 | 0.845 | 0.882 | 0.891 | 0.670 |
| YOLOv8m | 0.846 | 0.897 | 0.891 | 0.894 | 0.908 | 0.685 |
| YOLOv8l | 0.789 | 0.895 | 0.861 | 0.878 | 0.892 | 0.671 |
| YOLOv8x | 0.796 | 0.905 | 0.859 | 0.881 | 0.900 | 0.680 |
Table 5. Detailed performance evaluation of the trained YOLOv8m model for pressure ulcer detection.
Table 5. Detailed performance evaluation of the trained YOLOv8m model for pressure ulcer detection.
LabelPrecisionRecallF1 ScoremAP@50mAP@50-95
Overall0.8970.8910.8940.9080.685
Stage 10.9590.8570.9050.9000.630
Stage 20.7940.8430.8180.8390.564
Stage 30.8740.8430.8580.8740.645
Stage 40.9380.9570.9470.9750.922
DTPI0.9050.9260.9150.9360.664
Unstageable0.9110.9180.9140.9240.682
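The F1 scores reported in Tables 4 and 5 are the harmonic mean of precision and recall. A minimal Python check reproduces the tabulated values from the per-class precision and recall:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall: F1 = 2PR / (P + R)."""
    return 2 * precision * recall / (precision + recall)

# Overall row for the trained YOLOv8m model (Table 5)
print(round(f1_score(0.897, 0.891), 3))  # 0.894

# Stage 2, the weakest class (Table 5)
print(round(f1_score(0.794, 0.843), 3))  # 0.818
```

Because the harmonic mean is dominated by the smaller operand, the low stage 2 precision (0.794) pulls its F1 well below the other classes even though its recall is comparable.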
Table 6. Minimum requirements for the mobile app.
| Item | Requirement |
|---|---|
| Operating system | Android 10 or newer |
| Application processor | ARMv8-A 64-bit or higher performance |
| RAM | 4 GB or more |
| Storage | 1 GB or more |
Table 7. Comparison of the detection performance between our study and recent studies predicting both the area and type of pressure ulcers at the same time.
| Model | Dataset Size | Classes | Accuracy | Precision | Recall | F1 Score | mAP@50 |
|---|---|---|---|---|---|---|---|
| Faster R-CNN [35] | 5084 | 6 (stage 1–4, DTPI, unstageable) | N/A | 0.776 | 0.641 | 0.696 | N/A |
| YOLOv4 [33] | 1432 | 6 (stage 1–4, unstageable, others) | 0.632 | N/A | N/A | N/A | N/A |
| YOLOv5s [34] | 1000+ | 5 (stage 1–4, non-pressure ulcer) | N/A | 0.781 | 0.685 | 0.732 | 0.769 |
| YOLOv8m (our study) | 2800 | 6 (stage 1–4, DTPI, unstageable) | 0.846 | 0.897 | 0.891 | 0.894 | 0.908 |

Chang, Y.; Kim, J.H.; Shin, H.W.; Ha, C.; Lee, S.Y.; Go, T. Diagnosis of Pressure Ulcer Stage Using On-Device AI. Appl. Sci. 2024, 14, 7124. https://doi.org/10.3390/app14167124