Article

Enhanced 2D Hand Pose Estimation for Gloved Medical Applications: A Preliminary Model

Adam W. Kiefer, Dominic Willoughby, Ryan P. MacPherson, Robert Hubal and Stephen F. Eckel
1 Department of Exercise and Sport Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
2 Human Movement Science Curriculum, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
3 Renaissance Computing Institute, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
4 Eshelman School of Pharmacy, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
* Author to whom correspondence should be addressed.
Sensors 2024, 24(18), 6005; https://doi.org/10.3390/s24186005
Submission received: 29 July 2024 / Revised: 6 September 2024 / Accepted: 10 September 2024 / Published: 17 September 2024
(This article belongs to the Special Issue Wearable Sensors for Continuous Health Monitoring and Analysis)

Abstract

(1) Background: As digital health technology evolves, the role of accurate medical-gloved hand tracking is becoming more important for the assessment and training of practitioners to reduce procedural errors in clinical settings. (2) Method: This study utilized computer vision for hand pose estimation to model skeletal hand movements during in situ aseptic drug compounding procedures. High-definition video cameras recorded hand movements while practitioners wore medical gloves of different colors. Hand poses were manually annotated, and machine learning models were developed and trained using the DeepLabCut interface via an 80/20 training/testing split. (3) Results: The developed model achieved an average root mean square error (RMSE) of 5.89 pixels across the training data set and 10.06 pixels across the test set. When excluding keypoints with a confidence value below 60%, the test set RMSE improved to 7.48 pixels, reflecting high accuracy in hand pose tracking. (4) Conclusions: The developed hand pose estimation model effectively tracks hand movements across both controlled and in situ drug compounding contexts, offering a first-of-its-kind medical glove hand tracking method. This model holds potential for enhancing clinical training and ensuring procedural safety, particularly in tasks requiring high precision such as drug compounding.

1. Introduction

As the field of digital health evolves, sensor technology is already playing a pivotal role in advancing critical assessments and the development of monitoring applications in medical diagnostics, treatment, safety, and training. The evolution of sensor technologies, including inertial measurement units, flex sensors, and computer vision, has opened new avenues for enhancing these capabilities while offering unprecedented precision and efficiency in behavioral and object tracking. Such technological advances offer promise for revolutionizing safety and training in clinical environments through enhanced performance tracking and monitoring systems, aiming to reduce the risk of procedural errors.
One primary objective of these applications is to track performers’ hands to quantify both the overall motion trajectories and the specific kinematics of each hand and finger as tasks are performed. This tracking has been accomplished previously through sensors attached to the devices used in tasks, serving as proxies for hand trajectories [1], or with retroreflective markers in camera-based motion capture systems [2]. These methods have significantly enhanced insights into performance levels and have enabled objective measurements of skilled operations. For example, sensors have been used in healthcare-related studies to differentiate between the skills of expert and non-expert practitioners [2,3,4,5,6,7,8]. One such study used sensor-based accelerometry and retroreflective motion capture to reveal distinct differences in behavioral patterns between expert and novice providers during simulated laryngoscopy and intubation [5]. Electromagnetic sensors have also been used to index differences in motion parameters among experts and novices in separate studies involving simulated internal jugular cannulation [6], radiological “pin-pull” maneuvers [8], and large vessel patch anastomosis [3]. Furthermore, recent explorations into training applications have shown that virtual reality simulations, augmented with real-time hand tracking, enable surgeons to practice complex procedures safely, without the associated risks of live operations [9]. However, these techniques often face challenges when deployed in live clinical settings, especially with the sterile environment’s demand for non-invasive sensors compatible with medical gloves. Such limitations highlight the need for innovative approaches tailored to the unique demands of medical applications.
Beyond enhancing surgical and medical procedure technique, sensor technologies also have the potential to play a crucial role in procedural safety, such as contamination prevention during drug compounding. For example, the 2012 meningitis outbreak caused by contaminated injections compounded at the New England Compounding Center (NECC) resulted in significant morbidity and mortality [10], highlighting the catastrophic consequences of inadequate procedural oversight and underscoring the inherent risks associated with compounding. This incident has led to stricter regulatory oversight and an increased demand for technological interventions to ensure patient safety. Advanced tracking technologies, particularly those capable of monitoring hand movements, are key to ensuring adherence to aseptic technique for preventing contamination [11]. To date, traditional monitoring methods, such as human observation or video recording, have failed to capture the intricate details of hand movements that can differentiate between proper and improper aseptic technique. There is likely significant potential for enhancing safety and compliance through greater precision in both the tracking and monitoring of these behaviors.
Despite the utility of sensor-based hand tracking across these various contexts, such methods are not without their limitations. For example, even with their small sizes, sensors often add weight or alter the shape of medical tools. Retroreflective markers, typically several millimeters in diameter and placed on medical devices or the individual’s hands, also introduce several challenges, including pre-procedure preparation time, potential contamination during the procedure leading to tracking errors, and added complexity to in situ medical assessments [4]. Alternatives, such as tracking systems integrated into gloves, may reduce the tactile feedback critical for precision movements and often limit tracking utility in simulation scenarios, detracting from the generalizability of the data collected.
A recent letter to the editor emphasized the importance of motion-tracking technologies to enhance training and assessment in clinical settings, highlighting the necessity for methods that can accurately measure and analyze hand movements [12,13]. Advancements in computer vision technologies have introduced a compelling solution for clinical skill assessment and the training of complex medical procedures: hand pose estimation. This technique determines the position and orientation of the hands and their skeletal components through computational methods, marking a significant advancement from traditional motion capture technologies. This method operates without external markers or sensors, thereby allowing for the real-time capture of dynamic hand movements without adding burden to the performer. Specifically, by overcoming the challenges presented by earlier methods, hand pose estimation offers a tool for accurate modeling and analysis during in situ performance.
Hand pose estimation via computer vision and external cameras offers contactless, non-invasive communication between the human performer and the tracking technology through a variety of camera types and configurations [14]. These include the integration of additional sensor-based data such as red-green-blue (RGB) 2-dimensional (2D) video, time-of-flight sensors, and infrared sensors to extract information pertinent to hand configurations. Time-of-flight hand detection primarily uses depth sensors to provide 3-dimensional (3D) information without being affected by lighting or color changes. This enables the segmentation and recognition of regions of interest by analyzing depth fields and is especially useful in environments where visual information alone is not sufficient [15]. Other recognition techniques, such as those based on appearances or motion, focus on extracting features from image sequences to identify and analyze hand poses. Appearance-based methods rely on features from 2D images, such as pixel intensities, to model hand shapes and motions without prior segmentation [16]. This facilitates real-time processing and accommodates variance in user characteristics (e.g., medical glove color). Conversely, motion-based recognition tracks hand movements across video frames using algorithmic approaches. However, challenges such as lighting variations and dynamic backgrounds can reduce accuracy [17].
As technology has improved, alternatives to these approaches have become more viable. Specifically, skeleton-based recognition in hand pose estimation has shown promise over the last several years [18,19,20]. This method focuses on identifying model parameters that enhance the detection of complex hand features, including joint orientations, distances between joints, and the angles and trajectories of these joints. It leverages geometric attributes and translates features and correlations of data into a structured skeletal model of the hand. These types of models can effectively classify hand poses by examining the skeletal data that describe the relative positions of hand joints. Techniques used in skeleton-based recognition typically involve complex deep learning models, such as convolutional neural networks, to process the positional data of hand skeleton joints to accurately model the hand’s structure [19,20,21].
While advancements in skeleton-based hand pose estimation offer the potential to transform medical assessment and training applications, several challenges remain to be addressed. Current approaches struggle with achieving the accuracy and latency required for real-time or near real-time applications. Additionally, the sterile environment of clinical procedures imposes constraints, including the fact that performers typically wear a variety of different colored medical gloves, which challenge existing algorithms to accommodate. However, there has been notable work in related surgical domains. Hein et al. [22] proposed a hand pose estimation approach during surgical tool manipulation, utilizing a PVNet [23] model for object tracking in combination with the MANO hand model [24]. This hybrid solution was compared to both a HandObjectNet-based approach [25] and a hybrid PVNet–HandObjectNet model [23]. The MANO hand model, which captures low-dimensional hand shape variations rather than individual hand keypoints, contrasts with the HandObjectNet and PVNet–HandObjectNet approaches that integrate tool and hand pose together into a unified framework. Their evaluation indicated an average tool vertex error of 13.8 mm, though no specific hand pose error was reported. Similarly, Doughty et al. [26] developed a hybrid model, HMD-EgoPose, using an egocentric perspective via a Microsoft Hololens to track object-hand poses, and reduced the average tool’s 3D vertex error to 11.0 mm.
While pose estimation techniques are increasingly used to track and assess medical task performance, they lack the precision needed for detailed hand kinematics analysis. One study explored hand kinematics in a medical context, employing a YOLOv5s [27] architecture combined with EfficientNet B3 and FPN-EfficientNet B1 modules [28]. Müller et al. [29] achieved a hand skeletal key point accuracy of 10.0 pixels, even with diverse medical glove colors, bloody gloves, and varied surgical tool manipulations. Though promising, this work relied on egocentric views, which are not always practical in situ. Notably, no current published algorithm effectively tracks multi-color gloved hand position in medical procedures from an exocentric perspective. Prior to the development of the current project, a search of existing exocentric perspective hand position models was conducted to identify an existing solution. The most well-known of these was MediaPipe Hands [30], a powerful computer-vision recognition tool that has been built to accurately recognize the position of hands. The long-term goal of our research team is to develop digital health applications for safety and protocol adherence, and this requires successful tracking of medical-gloved hands as part of the medical procedure workflow. Unfortunately, when we attempted to utilize MediaPipe Hands for gloved-hand tracking, it failed to detect any keypoints on gloved hands (i.e., it returned “NULL” values for all keypoints). This limitation has been noted in previous studies [28]. No other existing hand-tracking models are capable of tracking gloved hands from an exocentric perspective, which has necessitated an important advancement in the domain of medical procedure applications: the development of an exocentric hand-tracking model performant for medical-gloved hands of varying colors.
The purpose of the current manuscript is thus to examine a novel computer-vision-based hand kinematics estimation method designed for use with gloved hands of various colors in medical settings, a crucial advancement for maintaining sterility while achieving high precision and efficiency. This method is intended to serve as a foundational component of a larger safety and adherence tracking platform for drug compounding, promising reductions in procedural errors and further enhancements to clinical training and safety protocols.

2. Materials and Methods

2.1. Data Collection

Video data were collected in two separate locations. The first collection took place in a classroom laboratory at a major southeastern U.S. school of pharmacy, where videos were recorded of students practicing aseptic drug compounding procedures in a laminar airflow workstation (LAFW). For this collection, three commercially available Logitech BRIO webcams (Logitech, Lausanne, Switzerland) were affixed to the interior of the vent hood in strategic positions that limited airflow obstruction while optimizing hand visibility. All cameras were pointed at the center of the working area, with perspectives from directly overhead (top-down), the upper left corner, and the right side of the hood (see Figure 1). During this collection, initial video recordings were taken of a student wearing latex gloves as they slowly rotated their hands about all axes to provide clear examples of hand position at a variety of angles. This was repeated twice each for blue, purple, and tan medical glove colors, for a total of six recordings. For the second set of recordings at this location, videos were captured during a series of completed procedures in which each student performed two repetitions of compounding a drug using aseptic technique with all three glove colors, for a total of six trials. Each session recording lasted roughly seven minutes and was recorded at 1080p resolution and 30 frames/second, resulting in approximately 25,000 frames of data per session. The second data collection was conducted in situ at the department of pharmacy of a major southeastern U.S. medical center. For this collection, only a single camera was placed in the upper left corner of the vent hood. A custom script automatically started each recording when motion was detected in the video frame and stopped it once motion was no longer present. The camera was left in place for four days, and videos that did not contain a complete procedure from start to finish were discarded. A total of 237 videos were recorded, from which a subset of eight videos was selected that included eight unique hand morphologies (eight separate people recorded) and eight distinct procedures. Each of the final eight videos ranged from four to nine minutes in length. Portions of these videos in which the subject stepped away from the hood and out of frame were manually removed.
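The motion-triggered capture script mentioned above was not published with the article; the following is a minimal sketch of how such a recorder could be implemented with OpenCV frame differencing. The motion threshold, idle timeout, camera index, and file naming are illustrative assumptions, not the authors' parameters.

```python
# Sketch of a motion-triggered recorder (assumed implementation, not the authors' script).
import time
import cv2

MOTION_THRESHOLD = 5000   # assumed: minimum number of changed pixels to count as "motion"
IDLE_TIMEOUT_S = 30       # assumed: seconds without motion before a recording is closed

cap = cv2.VideoCapture(0)                          # camera index is an assumption
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)            # request 1080p, matching the study's resolution
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
writer, prev_gray, last_motion = None, None, 0.0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    if prev_gray is not None:
        diff = cv2.absdiff(prev_gray, gray)
        changed = cv2.countNonZero(cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1])
        if changed > MOTION_THRESHOLD:
            last_motion = time.time()
            if writer is None:                     # motion appeared: start a new recording
                name = time.strftime("procedure_%Y%m%d_%H%M%S.mp4")
                h, w = frame.shape[:2]
                writer = cv2.VideoWriter(name, fourcc, 30.0, (w, h))
        elif writer is not None and time.time() - last_motion > IDLE_TIMEOUT_S:
            writer.release()                       # motion ended: close the recording
            writer = None
    if writer is not None:
        writer.write(frame)
    prev_gray = gray

cap.release()
```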
Camera selection was based on two considerations: (1) visibility of each hand and (2) ease of application. The three camera positions shown in Figure 1 were assessed against these criteria. The overhead camera provided the clearest visibility of each hand in almost every circumstance, but it could disrupt the direction of airflow in both vertical-airflow and horizontal-airflow (i.e., LAFW) hoods. The lower right camera position was initially considered because it minimized airflow disruption; however, images from this camera were quickly discarded for poor visibility due to consistent occlusions. Figure 2 shows a still-frame image from each of the three camera perspectives. Figure 2A shows a typical image from the lower-right camera view; the frequency with which the right hand blocked the entire view of the left hand severely limited the utility of this view for the purposes of the current project. The upper left camera position (Figure 2B) offered the best balance of consistent visibility of both hands and minimal airflow disruption. In this position, the camera only blocked airflow on the extreme left side, which typically did not interfere with use of the vent hood, while still capturing both left- and right-hand positions with minimal occlusion by other objects in the scene. The overhead camera position (Figure 2C) caused the greatest obstruction to airflow and was not considered a viable option for broader application.

2.2. Data Processing

Video recordings were split into discrete images using a custom script (Python Ver. 3.7.11). A subset of approximately 700 images was saved from each video, equating to approximately 1 in every 12 frames. This approach kept the images in the subset distinct from one another and representative of the behavior in the video without overwhelming the human labelers, as all markers were labeled manually. Images were then separated into groups of 50 and uploaded to Amazon Web Services (AWS, Amazon.com, Inc., Seattle, WA, USA), where research staff labeled them manually through the Amazon SageMaker Ground Truth data set labeling platform; each group of 50 images constituted a single labeling job assigned to an individual team member. In the labeling job setup, images were treated as video frames, and the keypoint task type was selected. Twenty-two keypoints were identified for each hand, as shown in Figure 3. Markers included medial and lateral wrist markers (identified based on anatomical position), the center of mass of the hand, and points on all metacarpophalangeal joints, interphalangeal joints, and fingertips, numbered 1–4 from proximal to distal (1–3 for thumbs).
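The frame-subsampling step described at the start of this paragraph is a simple scripting task; a sketch of an assumed implementation is shown below (it is not the authors' Python 3.7 script, and the input/output paths and file naming are placeholders; only the 1-in-12 sampling rate follows the text).

```python
# Sketch of frame subsampling: keep roughly 1 in every 12 frames of a video as images.
from pathlib import Path
import cv2

def extract_frames(video_path: str, out_dir: str, every_nth: int = 12) -> int:
    """Save every `every_nth` frame of the video as a PNG and return how many were saved."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved = frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % every_nth == 0:
            cv2.imwrite(str(out / f"{Path(video_path).stem}_{frame_idx:06d}.png"), frame)
            saved += 1
        frame_idx += 1
    cap.release()
    return saved

# Hypothetical usage: extract_frames("session_01.mp4", "frames/session_01")
```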
When onboarding new labelers, a set of 10 "gold standard" images was provided as an initial practice job to familiarize them with the AWS platform and to verify the accuracy of labeling across research staff. Upon completion of a labeling job, the annotation data were compiled into a .json file containing the x and y coordinate locations of each labeled point, as well as the identification of each labeled image. This process ensured that recreations of the labels could be used to train future machine learning models. Of note, not every label could be identified in each image: labelers were instructed to label only keypoints they could clearly see and to exclude keypoints occluded by other objects or other areas of the hand. The total count of annotated images was just under 1500. Given the number of workers available and the labor-intensive nature of this task, 1500 images represented an attainable balance between a manageable workload and sufficient volume for this proof-of-concept model development.

2.3. Model Development

DeepLabCut is a computer vision machine learning program originally designed for tracking the position and movement of animals [31]. The program itself provides a full-service, start-to-finish user interface that allows individuals to take a series of video frames or individual images, annotate the desired keypoints, and train a machine learning model to automatically identify those keypoints on future unlabeled images. For the purposes of this project, AWS was utilized for marker identification due to its versatility and generalizability to other machine learning platforms. Deployment of this program was done through a Linux-based Ubuntu 20.04 EC2 environment on AWS.
Another custom Python script was developed to translate the AWS-generated .json annotation files into the appropriate file type for import into the DeepLabCut Project Manager GUI 2.3.9. This script took the individual per-image .json files and collated them into a single monolithic .csv file with a row for every image. The columns were categorized by four rows of metadata: "scorer", "individuals", "body parts", and "coords". The scorer row allowed research staff to track who labeled each image. The individuals row represented each instance of an object, corresponding to the left and right hands, while the body parts row contained the names of each keypoint on a single hand. Finally, the coords row contained either an "x" or a "y" to indicate the x and y coordinates, respectively, for the keypoint identified by the metadata above. Each cell in this monolithic .csv file therefore contained either the x or y coordinate, in pixels, for the image identified by the row and the keypoint identified by the metadata tags in the first four rows of that column. DeepLabCut has a built-in function to convert .csv files into .h5 files, which is the file type the DeepLabCut GUI reads.
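As a rough illustration of this collation step, the sketch below builds the four-row header (scorer, individuals, body parts, coords) as a pandas MultiIndex. The .json field names, directory layout, and scorer label are assumptions; only the header structure and the 22 per-hand keypoint names (taken from Table 1) follow the text.

```python
# Sketch of collating per-image .json annotations into a single DeepLabCut-style .csv.
import json
from pathlib import Path
import pandas as pd

SCORER = "lab_staff"   # assumed scorer label; the real files track individual coders
HANDS = ["left", "right"]
FINGERS = [f"{f}{i}" for f in ("Index", "Middle", "Ring", "Pinky") for i in range(1, 5)]
KEYPOINTS = ["Center", *FINGERS, "Thumb1", "Thumb2", "Thumb3", "WristLat", "WristMed"]  # 22 per hand

rows = {}
for jf in sorted(Path("annotations").glob("*.json")):       # assumed directory of per-image files
    ann = json.loads(jf.read_text())
    image_id = ann["image"]                                  # assumed key holding the image file name
    row = {}
    for kp in ann.get("keypoints", []):                      # assumed list of {"hand", "label", "x", "y"}
        for coord in ("x", "y"):
            row[(SCORER, kp["hand"], kp["label"], coord)] = kp[coord]
    rows[image_id] = row

df = pd.DataFrame.from_dict(rows, orient="index")
df.columns = pd.MultiIndex.from_tuples(df.columns, names=["scorer", "individuals", "bodyparts", "coords"])
# Complete and order the columns so every hand/keypoint/coordinate combination exists;
# keypoints that were occluded (never labeled) remain NaN.
full_columns = pd.MultiIndex.from_product([[SCORER], HANDS, KEYPOINTS, ["x", "y"]],
                                          names=["scorer", "individuals", "bodyparts", "coords"])
df = df.reindex(columns=full_columns)
df.to_csv("CollectedData_" + SCORER + ".csv")
# DeepLabCut's built-in converter can then turn this .csv into the .h5 file the GUI reads.
```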
In the DeepLabCut GUI, projects are defined by a .yaml configuration file, which allows users to define keypoints, the skeleton model, individuals, and other similarly basic information. Once appropriate configurations are set, the monolithic .csv file is loaded into the GUI. Under the “Label Frames” tab, the “Check labels” button was used to verify that the annotations from the .json files were translated correctly and displayed correctly in the DeepLabCut GUI. Images were separated into standard 80% to 20% training/testing groups, and training was initiated.
DeepLabCut comes pre-loaded with three main machine learning architectures: ResNet, MobileNet, and EfficientNet. As the purpose of this project was to develop a model that balanced being computationally lightweight with optimized accuracy in estimating hand position, we began by investigating MobileNet to maximize speed. In exchange for speed, MobileNet was not able to manage large images, nor could it provide high-resolution results on our data set. Based on MobileNet's initial failure, it was determined that both MobileNet and EfficientNet would be unable to achieve usable accuracy without additional frame processing. Thus, a ResNet architecture became the primary focus. ResNet had the highest resolution, but at its most complex instance of 152 layers, it took hours to complete a single batch of training, making it unsuited for developing a model that was fast to train without sacrificing accuracy. However, scaling back the complexity of the architecture allowed for greater speed. The architecture selected for this project was ResNet_50, a version of ResNet with 50 discrete layers in the training structure. At this level of complexity, ResNet was able to provide high-resolution results in an acceptable timeframe, satisfying both criteria. Prior to the actual model training, this architecture allowed for some mild image pre-processing, which included removing color saturation and slightly boosting the contrast of each image. Training iterations were set to 50,000, and "checkpoints" were saved every 1000 iterations; this safeguard allowed training to resume from any of those 50 save points in the case of technical issues (e.g., the computer shutting down or program glitches). The learning rate scheduler was set to the DeepLabCut-recommended defaults. Training took an average of four days to complete. Once training was complete, DeepLabCut provided a series of model snapshots and a .pickle file, which contained the final version of the model.
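Although the authors worked through the DeepLabCut Project Manager GUI, equivalent steps can also be expressed with DeepLabCut's Python API. The sketch below is an approximation of that workflow under stated assumptions: the project path is hypothetical, and multi-individual projects (left and right hands) may require the multi-animal variants of these functions.

```python
# Approximate Python-API equivalent of the GUI workflow described above (a sketch).
import deeplabcut

config_path = "/path/to/project/config.yaml"   # hypothetical project location

# Verify that the imported annotations display correctly ("Check labels" in the GUI).
deeplabcut.check_labels(config_path)

# Build the 80/20 train/test split (controlled by the TrainingFraction setting in
# config.yaml) with a ResNet-50 backbone, then train for 50,000 iterations,
# saving a snapshot ("checkpoint") every 1000 iterations.
deeplabcut.create_training_dataset(config_path, net_type="resnet_50")
deeplabcut.train_network(config_path, maxiters=50000, saveiters=1000, displayiters=1000)
```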
The evaluation tab of the DeepLabCut GUI allows the user to apply the model to a set of pre-labeled images that were not used to train the initial model. These images are presented without their labels, and the model generates its predictions independently. This process allows for a quantitative analysis of the model's accuracy relative to the pre-labeled keypoint locations, enabling assessment of model generalizability. Data produced by this process include an RMSE value for each individual keypoint in each image, an overall summary RMSE value across all validation images, and test error in whole pixels with and without a confidence cutoff. Prediction confidence percentage is a built-in value provided by the DeepLabCut model evaluation tools. For every point inferred by a provided model, DeepLabCut provides three output layers that account for vector field and intensity mapping: the x and y coordinates of a pixel and the calculated probability, on a scale of 0 to 1, that the pixel defined by those two values is the location of the keypoint in question [31]. The pixel location with the highest confidence value is reported; the x and y coordinates are used to generate inferred annotations, and the confidence value is reported for evaluation purposes. The prediction confidence cutoff was set to the default value of 60%.
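Conceptually, the reported metric reduces to the RMSE of the Euclidean distance between inferred and ground-truth keypoints, optionally restricted to predictions at or above the confidence cutoff. The short sketch below illustrates that calculation with toy numbers; it is not DeepLabCut's internal evaluation code (which is invoked through the GUI or deeplabcut.evaluate_network), and the array names are assumptions.

```python
# Conceptual re-implementation of the reported error metric.
import numpy as np

def keypoint_rmse(pred_xy, true_xy, confidence, cutoff=None):
    """pred_xy, true_xy: (N, 2) arrays of pixel coordinates; confidence: (N,) values in [0, 1]."""
    err = np.linalg.norm(pred_xy - true_xy, axis=1)   # per-keypoint Euclidean error (pixels)
    keep = ~np.isnan(err)                             # skip keypoints with no ground-truth label
    if cutoff is not None:
        keep &= confidence >= cutoff                  # apply the prediction-confidence cutoff
    return float(np.sqrt(np.mean(err[keep] ** 2)))

# Toy example (not study data): the low-confidence outlier is dropped by the 0.6 cutoff.
pred = np.array([[100.0, 200.0], [50.0, 60.0], [10.0, 10.0]])
true = np.array([[104.0, 197.0], [55.0, 60.0], [30.0, 25.0]])
conf = np.array([0.98, 0.95, 0.40])
print(keypoint_rmse(pred, true, conf))               # 15.0 pixels over all points
print(keypoint_rmse(pred, true, conf, cutoff=0.6))   # 5.0 pixels after the cutoff
```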

3. Results

The final data set included 1483 images, 1186 (i.e., 80%) of which were used for training and 297 (i.e., 20%) of which were used as a hold-out test set. After 50,000 training iterations, overall evaluation values were provided. Inferred keypoint locations were compared to manually coded "ground truth" locations, and RMSE in absolute pixels was reported. Within the training set, across all images and keypoints, the reported error was 5.89 pixels, indicating that, on average, the model-generated keypoint location was 5.89 pixels away from the human-coded location. When the model predicted keypoint locations for the test set (images that were not used for training), the RMSE averaged 10.06 pixels. Each reported pixel location is also accompanied by a confidence value from 0 to 1. When points with a confidence value below 0.6 were excluded (i.e., points identified with less than 60% confidence were not considered), the RMSE was 4.67 pixels for the training set and 7.48 pixels for the test set. Table 1 lists average RMSE and confidence values for each keypoint across all 1483 images in this data set. Notably, pixel error relates to metric error differently depending on the distance of the object from the camera. In the present study, 1 pixel equates to 0.42 mm at the farthest distance. When considering the view in Figure 3, however, the hand is approximately 10 cm from the hood surface, which changes the conversion to 1 pixel equating to 0.54 mm. Using that reference image, the error was between 2.12 and 4.19 mm.
Importantly, a dependent t-test indicated no difference in RMSE for the left hand compared to the right hand when excluding keypoints with less than 60% confidence, t(21) = −1.934, p = 0.067 (M = 5.04 ± 1.202 and M = 5.364 ± 1.112 for the left- and right-hand keypoints, respectively). The left and right hands also exhibited a high average confidence score, with the left hand at an average of 95.1% and the right at an average of 91.9% (p = 0.008). Of particular note, the average RMSE values were inflated slightly due to specific keypoint markers underperforming. Specifically, the hand Center, Thumb 1, WristLat, and WristMed reported abnormally high RMSE values and lower average confidence values. On the left hand, those four markers were the only points to have less than 95% of the identified markers above the 60% confidence cutoff (hand Center: 94.55%, Thumb1: 94.79%, WristLat: 77.21%, WristMed: 88.35%) and had average confidence values below 95% (hand Center: 93%, Thumb1: 94%, WristLat: 77%, WristMed: 86%). The right hand had slightly different results. The markers with fewer than 95% of markers above a 60% confidence value were hand Center (74.39%), Middle1 (93.14%), Ring1 (83.21%), Ring2 (92.87%), Ring3 (94.09%), Pinky1 (82.89%), Pinky2 (90.18%), Pinky3 (93.77%), WristLat (83.15%), and WristMed (81.56%). Additionally, there were more markers with an average confidence value below 95% than on the left side: hand Center (77%), Index1 (94%), Middle1 (92%), Ring1 (84%), Ring2 (93%), Ring3 (93%), Pinky1 (83%), Pinky2 (90%), Pinky3 (93%), WristLat (82%), and WristMed (81%). When the four highlighted markers (hand Center, Thumb1, WristLat, WristMed) were factored out of average values, the average RMSE of points over 60% confidence dropped to 4.55 pixels on the left hand and 5.02 on the right hand.
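The dependent t-test reported above can be recomputed from the per-keypoint RMSE values in the "Avg RMSE > 0.6 Confidence" columns of Table 1. The sketch below uses SciPy; the variable names are ours and the values are transcribed from Table 1 (marker order: Center, Index1–4, Middle1–4, Ring1–4, Pinky1–4, Thumb1–3, WristLat, WristMed), so the output should closely match the reported t(21) = −1.934, p = 0.067.

```python
# Paired t-test of left- vs right-hand per-keypoint RMSE (values > 0.6 confidence, Table 1).
from scipy.stats import ttest_rel

left_rmse = [6.289, 5.157, 4.802, 4.648, 4.440, 4.776, 3.945, 4.923, 5.047,
             4.308, 4.012, 4.731, 4.068, 4.372, 4.165, 4.985, 4.072, 5.962,
             4.693, 4.822, 7.991, 8.559]
right_rmse = [6.667, 6.149, 4.796, 4.577, 4.249, 5.831, 4.629, 4.695, 4.352,
              5.711, 4.820, 5.939, 4.323, 5.774, 4.719, 7.039, 4.158, 5.679,
              4.568, 4.088, 7.724, 7.512]

t_stat, p_value = ttest_rel(left_rmse, right_rmse)   # df = 22 - 1 = 21
print(f"t(21) = {t_stat:.3f}, p = {p_value:.3f}")
```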

4. Discussion

The current study evaluated the accuracy of a first-of-its-kind hand pose estimation model for medical-gloved clinical applications using a combination of controlled and in situ data collections. The developed model performed well, as indicated by RMSE scores of less than 5.5 pixels, outperforming previously reported values by nearly 5 pixels [29]. Specifically, given the size of each image, an inferred point 4–5 pixels from the ground-truth keypoint location provides a resolution of the hand that is accurate enough to predict keypoint positions within a drug-compounding task context. Figure 4 shows the RMSE range (white circles) around the ground-truth keypoint locations (black crosses), highlighting that, even at the average error, the keypoints remain highly representative of overall hand behavior. These results are additionally important given the inability of other models to infer points on gloved hands: when these same images were input to MediaPipe Hands, the annotation file was returned empty (a 0% recognition rate), whereas the new model achieved an average error of less than 5 pixels and recognized gloved hands with relatively high accuracy. This represents a significant advancement in the domain. While this is an important first step, further empirical research is needed to determine whether this level of accuracy is appropriate for the practical needs of a given medical application.
Also of importance is that, of the 22 keypoints on each hand, only four had confidence scores that were notably worse than the others. These were the hand Center, Thumb1, WristLat, and WristMed markers, which share a feature that reduces accurate identification: their locations do not always correspond to easy-to-identify morphological landmarks when gloves are worn. This makes it difficult for human coders to maintain reliability in position coding, which, in turn, leads to machine learning architectures training on less reliable data and producing less consistent labeling. The primary issue is that when a human coder manually labels featureless keypoints such as these, coding error increases due to inconsistencies in labeling the same pixel location within and between labelers. This contrasts with the remaining 18 keypoints, which map to specific, high-contrast anatomical landmarks that allow more accurate indexing of both manually labeled and inferred points. For example, Index4, the tip of the index finger, is easily identifiable in each image, as are points related to the interphalangeal joints, because of the higher-contrast, landmark-based cues at those locations. The hand Center marker, by contrast, represents the center of mass of the hand and sits on the smooth, featureless surface of the gloved palm or back of the hand. It is, therefore, very difficult to annotate such a landmark-less location reliably, and Figure 5 highlights this issue: two frames taken from a video in which the individual's hand did not move have been overlaid to visualize the inconsistency of those keypoints. Manually labeling the same point without the context provided by sequential images is challenging, and as shown in Figure 4, there was a small amount of error in this process. This error was likely exacerbated by the fact that sequential video frames may or may not have been presented sequentially to human coders. Additionally, up to 200 images were coded at one time, and thus attentional fatigue may have impacted the quality of some coding.
This issue, with certain keypoints having poorer confidence scores, has the potential to influence model development as well. For example, even if a machine learning model was given precise data, most machine learning models would struggle to identify a consistent spot on a featureless plane, especially with changing dimensions (e.g., different hands or different angles). Furthermore, when the challenges of labeling the ground-truth markers are passed on to the training data informing the development of a machine learning model, the model must attempt to find a set of rules for accurately identifying a challenging keypoint in the face of higher variability in input. This likely explains the marked reduction in performance for those specified markers.
It is also important to note that model accuracy for the right hand was worse than for the left hand. While the difference in average RMSE after the 60% cutoff was not statistically significant (p = 0.067), more right-hand markers performed below the 95% threshold than left-hand markers. On the left hand, only the four aforementioned "trouble keypoints" fell below the 95% threshold for both average confidence value and percentage of points. On the right hand, there were ten such markers, although the six that were not among the four trouble keypoints were not far below the 95% threshold. The reduction in right-hand performance may be explained by the position of the camera for most of the images provided for training. Figure 2B shows the image viewed by the camera when placed in the upper left corner of the LAFW. Because the camera is on the left side of the volume, the left hand is far more visible and therefore easier to predict, while the right hand is occluded more frequently and is simply farther from the camera, making it more difficult to infer points. Furthermore, for the left hand, the difference between the overhead view (Figure 2C) and the upper left view is much smaller than the corresponding change in the apparent shape of the right hand between those two camera views. Another potential explanation, therefore, is the inclusion of more disparate training images for the right hand and more similar images for the left hand. The overhead images were included because they provide the clearest image of both hands in a scene, but the upper left camera view was chosen as a fixed position that could remain consistent for both biological safety cabinets (vertical airflow) and LAFWs (horizontal airflow) without blocking the required ventilation. This is an important consideration in balancing model performance against the feasibility of video integration for in situ assessments. Furthermore, placing the camera high on the lateral wall gave the best view of the far hand, while placing the camera lower would risk occlusion of the near hand when objects are placed on that side of the vent hood. Regardless of these considerations, the average RMSE values for all points excluding the four "trouble keypoints" were still within an acceptable range of error for the proposed aseptic technique task. An additional pixel of average error on the right hand would not impede adequate detection of hand position, pose, and interaction with other objects in the scene during drug compounding, although it remains to be seen whether this holds for other higher-precision medical tasks.
Despite these results, the study does face a few limitations that may impact the generalizability and application of these findings. First, as discussed, the variability in camera positioning may have introduced a bias in the hand visibility and likely affected the accuracy specific to right-hand pose estimation. This was potentially due to differences in both occlusion and distance from the camera. It also makes it difficult to project the model’s effectiveness across different camera setups while also indicating a potential issue in achieving uniform accuracy across both hands. Similarly, while manual labeling was necessary for the development of the model, reliance on this type of labeling can introduce human error. Therefore, future iterations of this project would benefit from a hybrid of more operationalized manual coding combined with the inclusion of calculated virtual markers for the trouble keypoints, as well as the repositioning of the camera to capture both the left and right hands equally. If a team of coders was deployed to label the keypoints manually for a new iteration, more careful guidelines for placement of markers on difficult areas of the hand (i.e., Center keypoints) would potentially improve the input to the model training and may result in a higher inference accuracy during model training. Finally, it would likely be beneficial in future iterations to remove the four poor-performing keypoints from manual coding entirely and use a heuristics-based approach to identifying those points after inference has been completed on the other, more stable markers. This would likely yield greater success and therefore lower average RMSE values. As it stands, while the resultant model is strong for applications such as drug compounding or relatively larger scale movements, such as during laryngoscopy and intubation, further examination is needed to determine the feasibility of the current model for more fine-grained movements such as those relevant to exacting surgical techniques.
In addition to the variations in shape, presentation, and visibility of hands, it is important to consider the color values of the gloves worn across the tested images. The image pre-processing performed by our selected ResNet_50 architecture is insufficient to completely negate the effects of differently colored gloves. Because the color values of a light tan glove and a purple glove differ, removing color saturation and slightly adjusting contrast does not make the two colors appear similar. Achieving that would require considerable distortion of the image, rendering unrecognizable the contextual details that may play a role in hand detection. Additionally, the amount of manual labor required to color-match the images to one another would be considerable: in addition to labeling all 44 markers, coders would be required to adjust the contrast, saturation, brightness, and exposure of each image to fit a uniform value across all images, dramatically increasing the required manual coding time. The simpler solution is to include an array of glove colors in the original training set to offset appearance differences between colors. Doing so only slightly increases the number of images required for a reliable training set and does not require any steps beyond the manual marker labeling already required.
Despite these limitations, the development and successful implementation of an advanced hand pose estimation model, which is performant across different medical glove colors, directly addresses the challenges highlighted in the previous literature [17]. As shown in Figure 6 (see Supplemental Materials for complete video), the model we have developed is robust enough to provide accurate predictions even when there are occlusions present in the image. Moreover, this model has the potential to support technology that can revolutionize the way medical professionals train and successfully perform procedures by providing a more nuanced understanding of hand movements. This is crucial for tasks such as drug compounding and surgical operations, and the facilitation of more accurate and real-time tracking of hand poses in these settings will allow the resultant models to contribute to minimizing procedural errors and enhancing overall clinical safety. Ultimately, as computer vision-based tracking continues to evolve, the future of digital health will begin to take shape through the elucidation of new performance metrics and training applications, with hand tracking at the center of this important field.

Supplementary Materials

The following supporting information can be downloaded at: https://zenodo.org/records/11099114. Video S1: Side-by-side unlabeled and model-inferred keypoints of the preliminary medical-gloved hand pose model.

Author Contributions

Conceptualization, A.W.K., R.P.M., R.H. and S.F.E.; methodology, A.W.K., R.P.M., R.H. and S.F.E.; validation, A.W.K., R.P.M., R.H. and S.F.E.; formal analysis, D.W. and R.P.M.; writing—original draft preparation, A.W.K., D.W. and R.P.M.; writing—review and editing, R.H. and S.F.E.; supervision, A.W.K.; project administration, S.F.E.; funding acquisition, S.F.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by an intramural grant from the Eshelman Institute of Innovation (UNC-CH) in partnership with QleanAir (Solna, Sweden).

Institutional Review Board Statement

Not applicable. This study was IRB-exempt.

Informed Consent Statement

Not applicable due to IRB-exempt status of the study.

Data Availability Statement

Access to the data is restricted to safeguard proprietary information related to the trained models and their integration into future intellectual property claims. Data are available upon request and subject to permission for the sole purpose of peer review.

Acknowledgments

The authors would like to acknowledge Anastasia Borodai, Colin Cabelka, Megan Earnhart, Emma Meyer, Telieha Middleton, Lexi Rudolph, Grace Waters, and Jenny Zhou for their assistance with the labeling of keypoints for training and testing images used for the development of the reported model.

Conflicts of Interest

A.W.K., R.P.M., R.H., and S.F.E. are co-inventors on PCT/US2023/029079, “Device for assessment of hands-on aseptic technique”, which utilizes the present model as part of its digital workflow. This intellectual property is licensed to Assure Technologies, Inc., of which S.F.E. is a co-founder and holds equity.

References

  1. Heiliger, C.; Andrade, D.; Geister, C.; Winkler, A.; Ahmed, K.; Deodati, A.; Treuenstätt, V.H.E.V.; Werner, J.; Eursch, A.; Karcz, K.; et al. Tracking and evaluating motion skills in laparoscopy with inertial sensors. Surg. Endosc. 2023, 37, 5274–5284. [Google Scholar] [CrossRef] [PubMed]
  2. Kerrey, B.T.; Boyd, S.D.; Geis, G.L.; MacPherson, R.P.; Cooper, E.; Kiefer, A.W. Developing a Profile of Procedural Expertise: A Simulation Study of Tracheal Intubation Using 3-Dimensional Motion Capture. Simul. Healthc. 2020, 15, 251–258. [Google Scholar] [CrossRef] [PubMed]
  3. Genovese, B.; Yin, S.; Sareh, S.; Devirgilio, M.; Mukdad, L.; Davis, J.; Santos, V.J.; Benharash, P. Surgical Hand Tracking in Open Surgery Using a Versatile Motion Sensing System: Are We There Yet? Am. Surg. 2016, 82, 872–875. [Google Scholar] [CrossRef] [PubMed]
  4. Deng, Z.; Xiang, N.; Pan, J. State of the Art in Immersive Interactive Technologies for Surgery Simulation: A Review and Prospective. Bioengineering 2023, 10, 1346. [Google Scholar] [CrossRef] [PubMed]
  5. Van Hove, P.D.; Tuijthof, G.J.M.; Verdaasdonk, E.G.G.; Stassen, L.P.S.; Dankelman, J. Objective assessment of technical surgical skills. Br. J. Surg. 2010, 97, 972–987. [Google Scholar] [CrossRef] [PubMed]
  6. Clinkard, D.; Holden, M.; Ungi, T.; Messenger, D.; Davison, C.; Fichtinger, G.; McGraw, R. The Development and Validation of Hand Motion Analysis to Evaluate Competency in Central Line Catheterization. Acad. Emerg. Med. 2015, 22, 212–218. [Google Scholar] [CrossRef]
  7. Weinstein, J.L.; El-Gabalawy, F.; Sarwar, A.; DeBacker, S.S.; Faintuch, S.; Berkowitz, S.J.; Bulman, J.C.; Palmer, M.R.; Matyal, R.; Mahmood, F.; et al. Analysis of Kinematic Differences in Hand Motion between Novice and Experienced Operators in IR: A Pilot Study. J. Vasc. Interv. Radiol. 2021, 32, 226–234. [Google Scholar] [CrossRef] [PubMed]
  8. Nagayo, Y.; Saito, T.; Oyama, H. A Novel Suture Training System for Open Surgery Replicating Procedures Performed by Experts Using Augmented Reality. J. Med. Syst. 2021, 45, 1–9. [Google Scholar] [CrossRef] [PubMed]
  9. Feasibility of Tracking in Open Surgical Simulation. Available online: https://www.ijohs.com/read/article/webpdf/contents-1669131551342-87e055cc-de77-4aed-b154-f680142856b2 (accessed on 23 April 2024).
  10. New England Compounding Center Meningitis Outbreak-Wikipedia. Available online: https://en.wikipedia.org/wiki/New_England_Compounding_Center_meningitis_outbreak (accessed on 23 April 2024).
  11. Cabelka, C. Novel Technologies for the Evaluation of Sterile Compounding Technique. 2022. Available online: https://cdr.lib.unc.edu/concern/honors_theses/k0698k02w (accessed on 23 April 2024).
  12. Oudah, M.; Al-Naji, A.; Chahl, J. Hand Gesture Recognition Based on Computer Vision: A Review of Techniques. J. Imaging 2020, 6, 73. [Google Scholar] [CrossRef] [PubMed]
  13. Corvetto, M.A.; Altermatt, F.R. Tracking Motion Devices as Assessment Tools in Anesthesia Procedures: Have We Been Using Them Well? Can. J. Emerg. Med. 2017, 19, 412–413. [Google Scholar] [CrossRef]
  14. Kaur, H.; Rani, J. A review: Study of various techniques of Hand gesture recognition. In Proceedings of the 1st IEEE International Conference on Power Electronics, Intelligent Control and Energy Systems, ICPEICES 2016, Delhi, India, 4–6 July 2016. [Google Scholar] [CrossRef]
  15. Molina, J.; Pajuelo, J.A.; Martínez, J.M. Real-time Motion-based Hand Gestures Recognition from Time-of-Flight Video. J. Signal Process. Syst. 2017, 86, 17–25. [Google Scholar] [CrossRef]
  16. Zhou, Y.; Jiang, G.; Lin, Y. A novel finger and hand pose estimation technique for real-time hand gesture recognition. Pattern Recognit 2016, 49, 102–114. [Google Scholar] [CrossRef]
  17. Stergiopoulou, E.; Sgouropoulos, K.; Nikolaou, N.; Papamarkos, N.; Mitianoudis, N. Real time hand detection in a complex background. Eng. Appl. Artif. Intell. 2014, 35, 54–70. [Google Scholar] [CrossRef]
  18. Devineau, G.; Moutarde, F.; Xi, W.; Yang, J. Deep learning for hand gesture recognition on skeletal data. In Proceedings of the 13th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2018, Xi’an, China, 15–19 May 2018; pp. 106–113. [Google Scholar] [CrossRef]
  19. Guo, F.; He, Z.; Zhang, S.; Zhao, X.; Fang, J.; Tan, J. Normalized edge convolutional networks for skeleton-based hand gesture recognition. Pattern Recognit. 2021, 118, 108044. [Google Scholar] [CrossRef]
  20. Núñez, J.C.; Cabido, R.; Pantrigo, J.J.; Montemayor, A.S.; Vélez, J.F. Convolutional Neural Networks and Long Short-Term Memory for skeleton-based human activity and hand gesture recognition. Pattern Recognit. 2018, 76, 80–94. [Google Scholar] [CrossRef]
  21. Zhong, E.; Del-Blanco, C.R.; Berjón, D.; Jaureguizar, F.; García, N. Real-Time Monocular Skeleton-Based Hand Gesture Recognition Using 3D-Jointsformer. Sensors 2023, 23, 7066. [Google Scholar] [CrossRef] [PubMed]
  22. Hein, J.; Seibold, M.; Bogo, F.; Farshad, M.; Pollefeys, M.; Fürnstahl, P.; Navab, N. Towards markerless surgical tool and hand pose estimation. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 799–808. [Google Scholar] [CrossRef] [PubMed]
  23. Peng, S.; Liu, Y.; Huang, Q.; Zhou, X.; Bao, H. Pvnet: Pixel-wise voting network for 6dof pose estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 4561–4570. [Google Scholar]
  24. Romero, J.; Tzionas, D.; Black, M.J. Embodied hands: Modeling and capturing hands and bodies together. ACM Trans. Graph (ToG) 2017, 36, 245. [Google Scholar] [CrossRef]
  25. Hasson, Y.; Tekin, B.; Bogo, F.; Laptev, I.; Pollefeys, M.; Schmid, C. Leveraging photometric consistency over time for sparsely supervised hand-object reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
  26. Doughty, M.; Ghugre, N.R. HMD-EgoPose: Head-mounted display-based egocentric marker-less tool and hand pose estimation for augmented surgical guidance. Int. J. Comput. Assist. Radiol. Surg. 2022, 17, 2253–2262. [Google Scholar] [CrossRef] [PubMed]
  27. Jocher, G.R. ultralytics/yolov5. GitHub. 2022. Available online: https://github.com/ultralytics/yolov5 (accessed on 30 August 2024).
  28. Tan, M.; Le, Q. EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the 36th International Conference on Machine Learning. Proceedings of Machine Learning Research, Long Beach, CA, USA, 9–15 June 2019; Volume 97, pp. 6105–6114. [Google Scholar]
  29. Müller, L.R.; Petersen, J.; Yamlahi, A.; Wise, P.; Adler, T.J.; Seitel, A.; Kowalewski, K.F.; Müller, B.; Kenngott, H.; Nickel, F.; et al. Robust hand tracking for surgical telestration. Int. J. Comput. Assist. Radiol. Surg. 2022, 17, 1477–1486. [Google Scholar] [CrossRef] [PubMed]
  30. Zhang, F.; Bazarevsky, V.; Vakunov, A.; Tkachenka, A.; Sung, G.; Chang, C.L.; Grundmann, M. MediaPipe Hands: On-device Real-time Hand Tracking. arXiv 2020, arXiv:2006.10214. [Google Scholar]
  31. Lauer, J.; Zhou, M.; Ye, S.; Menegas, W.; Schneider, S.; Nath, T.; Rahman, M.M.; Di Santo, V.; Soberanes, D.; Feng, G.; et al. Multi-animal pose estimation, identification and tracking with DeepLabCut. Nat. Methods 2022, 19, 496–504. [Google Scholar] [CrossRef] [PubMed]
Figure 1. A schematic of camera placement for both data collections. For the initial training data collection, the left (L) and overhead (O) camera views were used, while only the L camera was used for the in situ testing data collection. The right (R) camera was also used during the initial training data collection; however, no frames from this perspective were coded for inclusion in the training data set.
Figure 2. Comparisons between camera views: (A) camera position in the lower right corner of the LAFW, (B) camera position in the upper left of the LAFW, (C) overhead camera position within the LAFW.
Figure 3. Twenty-two keypoints identified and labeled on the right hand. Labels are mirrored on the contralateral hand and follow an identical naming convention.
Figure 4. Representation of average inference error in pixels. The center of each black cross indicates the ground-truth marker location, while the white circle area indicates average RMSE in pixels. Note, while the image is magnified for visibility, the circles are to scale relative to an RMSE based on the full 1920 × 1080 pixel image.
Figure 5. Error in manual labeling shown by overlaying two discrete images of the right hand from two different video frames, with the lighter and darker circles indicating independent labeling efforts of the same keypoint by the same human coder.
Figure 6. Video frame with annotations when the left hand occludes the right.
Table 1. RMSE and confidence percentage for all keypoints of the left and right hands.
| Marker | Left Avg RMSE (pixels) | Left Avg Confidence (0–1) | Left Avg RMSE > 0.6 Confidence | Left %N > 0.6 Confidence | Right Avg RMSE (pixels) | Right Avg Confidence (0–1) | Right Avg RMSE > 0.6 Confidence | Right %N > 0.6 Confidence |
|---|---|---|---|---|---|---|---|---|
| Center | 7.480 | 0.93 | 6.289 | 94.55 | 13.114 | 0.77 | 6.667 | 74.39 |
| Index1 | 5.481 | 0.97 | 5.157 | 98.17 | 6.691 | 0.94 | 6.149 | 95.90 |
| Index2 | 5.070 | 0.98 | 4.802 | 98.80 | 5.665 | 0.98 | 4.796 | 98.30 |
| Index3 | 5.786 | 0.97 | 4.648 | 97.61 | 5.698 | 0.97 | 4.577 | 97.00 |
| Index4 | 5.102 | 0.96 | 4.440 | 96.63 | 5.281 | 0.97 | 4.249 | 96.47 |
| Middle1 | 5.206 | 0.98 | 4.776 | 98.39 | 7.504 | 0.92 | 5.831 | 93.14 |
| Middle2 | 4.429 | 0.98 | 3.945 | 98.25 | 5.583 | 0.95 | 4.629 | 95.95 |
| Middle3 | 5.649 | 0.95 | 4.923 | 96.54 | 6.831 | 0.95 | 4.695 | 95.24 |
| Middle4 | 5.747 | 0.95 | 5.047 | 95.37 | 6.820 | 0.96 | 4.352 | 95.24 |
| Ring1 | 4.553 | 0.97 | 4.308 | 97.67 | 8.387 | 0.84 | 5.711 | 83.21 |
| Ring2 | 5.316 | 0.98 | 4.012 | 98.44 | 7.059 | 0.93 | 4.820 | 92.87 |
| Ring3 | 5.166 | 0.95 | 4.731 | 96.15 | 9.351 | 0.93 | 5.939 | 94.09 |
| Ring4 | 4.892 | 0.96 | 4.068 | 95.31 | 8.113 | 0.95 | 4.323 | 95.56 |
| Pinky1 | 4.999 | 0.97 | 4.372 | 97.15 | 13.876 | 0.83 | 5.774 | 82.89 |
| Pinky2 | 4.325 | 0.97 | 4.165 | 97.47 | 9.020 | 0.90 | 4.719 | 90.18 |
| Pinky3 | 5.546 | 0.95 | 4.985 | 96.71 | 9.057 | 0.93 | 7.039 | 93.77 |
| Pinky4 | 5.893 | 0.96 | 4.072 | 97.08 | 5.812 | 0.95 | 4.158 | 95.52 |
| Thumb1 | 7.348 | 0.94 | 5.962 | 94.79 | 6.878 | 0.95 | 5.679 | 95.92 |
| Thumb2 | 5.046 | 0.98 | 4.693 | 98.52 | 4.837 | 0.98 | 4.568 | 98.80 |
| Thumb3 | 6.186 | 0.97 | 4.822 | 96.87 | 5.155 | 0.98 | 4.088 | 97.79 |
| WristLat | 12.086 | 0.77 | 7.991 | 77.21 | 9.784 | 0.82 | 7.724 | 83.15 |
| WristMed | 11.346 | 0.86 | 8.559 | 88.35 | 10.920 | 0.81 | 7.512 | 81.56 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
