Article

Estimation of Shoulder Joint Rotation Angle Using Tablet Device and Pose Estimation Artificial Intelligence Model

Department of Orthopaedic Surgery, Kobe University Graduate School of Medicine, Kobe 650-0017, Japan
* Author to whom correspondence should be addressed.
Sensors 2024, 24(9), 2912; https://doi.org/10.3390/s24092912
Submission received: 12 March 2024 / Revised: 29 April 2024 / Accepted: 30 April 2024 / Published: 2 May 2024

Abstract

Traditionally, joint angles have been measured with a goniometer, but the complexity of shoulder motion makes these measurements intricate. The rotation angle of the shoulder is particularly difficult to measure from an upright, forward-facing position because of its complicated basic and movement axes. In this study, we attempted to estimate the shoulder joint internal/external rotation angle using a combination of a pose estimation artificial intelligence (AI) model and a machine learning model. Videos of the right shoulder of 10 healthy volunteers (10 males; mean age, 37.7 years; mean height, 168.3 cm; mean weight, 72.7 kg; mean BMI, 25.6) were recorded and processed into 10,608 images. Parameters were created from the coordinates measured by the pose estimation AI and used to train the machine learning models, with the values measured by a smartphone angle measurement application serving as the ground truth. Model performance was compared between linear regression and Light GBM. With linear regression, a correlation coefficient of 0.971 and a mean absolute error (MAE) of 5.778 were achieved; with Light GBM, the correlation coefficient was 0.999 and the MAE was 0.945. This method enables the estimation of internal and external rotation angles from a directly facing position and is considered valuable for analyzing motor movements during sports and rehabilitation.

1. Introduction

The shoulder is composed of three bones (clavicle, scapula, and humerus) and four joints (glenohumeral, sternoclavicular, acromioclavicular, and scapulothoracic joints). The glenohumeral joint is the primary shoulder joint, located between the head of the humerus and the glenoid fossa of the shoulder girdle. It is a ball-and-socket joint in which only 25% of the humeral head fits into the glenoid fossa, forming a very shallow articulation. The shoulder is the most flexible joint in the body and also one of the most unstable [1,2]. The normal range of motion of the shoulder is 150 to 180 degrees of forward flexion, 40 to 60 degrees of extension, 150 to 180 degrees of abduction, 60 to 90 degrees of external rotation, and 50 to 70 degrees of internal rotation [3,4,5]. A basic axis and a movement axis are defined for each movement, and both must be known when measuring angles.
Motion analysis of the shoulder joint is an important element in sports medicine and rehabilitation [6]. The gold standard for shoulder joint angle measurement is the universal goniometer, but its use depends on the evaluator's knowledge and skill [7]. Other measurement methods using visual inspection, inclinometers, smartphone applications, or markers have been reported. However, the reliability of visual inspection (ICC: 0.15–0.92), inclinometers (ICC: 0.5–0.95), and smartphone applications (ICC: 0.38–0.99) varies from report to report, and analysis during exercise is difficult [8,9,10,11,12]. Motion capture systems using markers and high-speed cameras are constrained by cost and environment [13]. Therefore, an angle measurement method that is both accurate and inexpensive would be useful.
With the widespread adoption of smartphones, most physicians now carry one, and advances in smartphone cameras have made it possible to record high-quality video. The use of smartphones for medical photography is therefore becoming increasingly popular and appears to be accepted by many patients [14]. If angles can be estimated from videos taken with a smartphone, no special equipment or additional cost is required.
Recent advances in AI have led to the development of various pose estimation AI models. For example, a driver monitoring system (DMS) can detect a driver's posture and facial expressions to reduce the risk of accidents [15,16]. Pose estimation AI models include MediaPipe, YOLOv8, and PoseNet. YOLOv8, the most recent model in the YOLO series, serves as a general model for visual understanding and outperforms previous versions in both accuracy and speed [17]. PoseNet is a real-time pose estimation model developed by Google that detects 17 body parts and uses deep learning techniques to estimate human posture in real time from both photos and videos [18]. MediaPipe, also developed by Google, is an open-source, cross-platform framework that specializes in real-time media processing, especially video and image analysis. MediaPipe can track 33 landmarks on the human body and measure their position coordinates [18]. By detecting bounding boxes for relatively rigid body parts, it uses a minimally sufficient number of landmarks for the face, hands, and feet to estimate the rotation, size, and position of the region of interest for subsequent models. Compared with other pose estimation methods, MediaPipe has demonstrated superior accuracy, although it is not perfect [19]. Integrating MediaPipe with machine learning models has been reported to improve accuracy in assessing shoulder abduction angles [20].
In this study, we attempted to develop a highly accurate, inexpensive method of angle estimation that does not require special equipment, assuming that high accuracy could be achieved by combining MediaPipe with machine learning. The rotational movement of the shoulder joint was selected as the target motion. The basic axis of shoulder rotation is a line perpendicular to the frontal plane passing through the elbow, and the movement axis is the ulna; the shoulder rotation angle is therefore difficult to measure from a face-to-face position. In addition, although shoulder angles measured using pose estimation AI have been reported, rotation angles have not [21,22]. We examined the possibility of estimating the rotation angle of the shoulder joint by detecting coordinates from video using MediaPipe and combining them with machine learning.

2. Materials and Methods

2.1. Participants

To evaluate the range of motion of the shoulder rotation angle, 10 healthy adult volunteers were enrolled (10 males; mean age, 37.7 years; mean height, 168.3 cm; mean weight, 72.7 kg; mean BMI, 25.6). They were instructed to perform external and internal rotation of the shoulder joint in a standing position facing forward, with the right upper extremity hanging at the side and the elbow joint flexed to 90 degrees. This study was approved by the Kobe University Ethics Committee (approval number: B210009), and informed consent was obtained from all participants.

2.2. Angle Measurement Application

In this study, a smartphone angle measurement application, Measure Angles-Bubble Level (ver. 3.99.90, JRSoftWorx), was used as the source of the true angle, and its reliability was examined. To evaluate the reliability of the application, lines at 50°, 30°, 0°, −30°, and −50° were marked on a tabletop using a protractor; internal rotation angles were recorded as negative values (e.g., 30° of internal rotation is indicated as −30°). We validated the accuracy of the app both with the smartphone placed on the tabletop and with it fixed to the forearm. The smartphone was fixed to the subject's forearm with a band, and with the application active, the forearm was moved over the line for each angle and the angle displayed by the application was recorded. The protocol was repeated 30 times for each mark, and the mean absolute error (MAE) was examined (Figure 1).
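The validation arithmetic is simply the MAE between the app's readings and the protractor marks, pooled over all repetitions. A minimal Python sketch follows; the readings here are simulated placeholders, not the actual recorded values:

```python
import numpy as np

marks = [50, 30, 0, -30, -50]  # protractor lines on the tabletop (degrees)

# Hypothetical stand-in for the 30 recorded app readings per mark;
# in the actual protocol these came from the application display.
rng = np.random.default_rng(0)
readings = {m: rng.normal(m, 1.0, 30) for m in marks}

# Mean absolute error pooled over all marks and repetitions
errors = np.concatenate([np.abs(readings[m] - m) for m in marks])
print(f"MAE: {errors.mean():.2f} deg")
```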

2.3. Data Acquisition and Image Processing by MediaPipe

The smartphone was attached to the volunteer's forearm, and a tablet device (iPhone 14 Pro, Apple, Cupertino, CA, USA) was placed at a height of 150 cm from the floor, 2 m away from the subject (Figure 2). All video recording was performed by a designated examiner (S.T.). The recording mode was 1080p HD at 30 fps, with each angle recorded for approximately 2 s. The captured video files were processed with the MediaPipe Pose Python library to obtain each landmark coordinate (x, y, z). The landmark coordinates were normalized to between 0.0 and 1.0 by the image width (x) and height (y). Because the distance between the subject and the camera was not used as a parameter in this study, the z coordinate was discarded. Of the 33 landmarks detectable by MediaPipe, we used the coordinates of the bilateral shoulder joints, right elbow joint, right wrist joint, and right hip joint (Figure 3). Vector calculations were performed using these coordinates, and angle and distance parameters were calculated.
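As a rough illustration of this step, the following sketch (not the authors' code) extracts the five landmarks used in the study from a video with the MediaPipe Pose Python API and OpenCV; the video file name is a placeholder:

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
USED = [
    mp_pose.PoseLandmark.LEFT_SHOULDER,
    mp_pose.PoseLandmark.RIGHT_SHOULDER,
    mp_pose.PoseLandmark.RIGHT_ELBOW,
    mp_pose.PoseLandmark.RIGHT_WRIST,
    mp_pose.PoseLandmark.RIGHT_HIP,
]

rows = []
cap = cv2.VideoCapture("shoulder_rotation.mp4")  # hypothetical file name
with mp_pose.Pose(static_image_mode=False) as pose:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.pose_landmarks is None:
            continue  # skip frames where no person is detected
        lm = result.pose_landmarks.landmark
        # x and y are already normalized to 0.0-1.0 by image width/height;
        # the z coordinate is discarded, as in the study.
        rows.append([(lm[i].x, lm[i].y) for i in USED])
cap.release()
```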

2.4. Machine Learning (ML)

In this study, we compared models created by machine learning with five different methods (linear regression, ElasticNet, SVM, random forest regression, and Light GBM). Traditional linear regression and ElasticNet were adopted as the basic regression methods. SVM is an algorithm that performs classification or regression by determining a boundary or hyperplane separating two classes of data. Random forest regression is an ensemble learning method that uses multiple decision trees to obtain high prediction performance. Light GBM is based on the decision tree algorithm and requires less training time and memory than traditional methods [9]. Figure 4 shows the workflow of this study. Ten volunteers were filmed at every 10 degrees of rotation angle from −50 to 50 degrees, yielding 11 types of data; the internal rotation direction was defined as negative values and the external rotation direction as positive values. A total of 10,608 images were obtained from the video data. The videos were randomly divided into training data for machine learning (80%) and test data used for angle estimation (20%). After determining the optimal parameters for each ML algorithm from the training data, the correlation coefficient and MAE were determined as performance indicators to compare the accuracy of the models.
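A minimal sketch of this five-model comparison, assuming the parameters and true angles have already been assembled into arrays (the .npy file names are placeholders), could look as follows; hyperparameter tuning is omitted for brevity:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, ElasticNet
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from lightgbm import LGBMRegressor

# X: one row of Table 1 parameters per image; y: true angle from the app.
X, y = np.load("params.npy"), np.load("angles.npy")
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.8, random_state=0)

models = {
    "Linear regression": LinearRegression(),
    "ElasticNet": ElasticNet(),
    "SVM": SVR(),
    "Random forest regression": RandomForestRegressor(),
    "Light GBM": LGBMRegressor(),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    r = np.corrcoef(y_te, pred)[0, 1]            # correlation coefficient
    mae = mean_absolute_error(y_te, pred)
    print(f"{name}: r = {r:.3f}, MAE = {mae:.3f}")
```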
Another method of assessing the quality of a regression analysis is the residual plot, which displays the differences (residuals) between the predicted and actual values. For each ML model, the actual value (true angle) was plotted on the x-axis and the residual (actual angle − predicted angle) on the y-axis. A residual close to zero indicates that the model adequately captures the data. In this study, residual plots for linear regression, ElasticNet, SVM, random forest regression, and Light GBM were created to evaluate the accuracy of each model.
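Continuing the previous sketch, a residual plot of the kind described here can be drawn with matplotlib (shown for the Light GBM model as an example):

```python
import matplotlib.pyplot as plt

pred = models["Light GBM"].predict(X_te)      # fitted in the sketch above
plt.scatter(y_te, y_te - pred, s=4, alpha=0.4)
plt.axhline(0.0, linestyle="--", color="gray")
plt.xlabel("True angle (deg)")
plt.ylabel("Residual: actual - predicted (deg)")
plt.title("Residual plot (Light GBM)")
plt.show()
```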
Feature importance and SHAP (Shapley additive explanations) values were used to visualize the parameters important for estimating the shoulder rotation angle. Feature importance values were normalized by the total over all features in each tree, and the overall importance of a feature was obtained by dividing by the total number of trees in the model. In addition, the contribution of each feature to the prediction was evaluated using SHAP values. Based on game theory, the SHAP value is defined as the contribution of each feature to the model's prediction. SHAP values increase the interpretability of a model and are especially beneficial for complex models [23]. All ML analyses were performed using the scikit-learn v1.0.2 library.
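A sketch of the SHAP analysis for the tree-based model, continuing from the fitted models above and using the shap library's TreeExplainer; the feature name list simply mirrors Table 1:

```python
import shap

feature_names = ["norm_elbow_size", "norm_shoulder_size", "norm_forearm_distance",
                 "norm_uparm_distance", "elbow_angle", "shoulder_angle", "trunk_angle"]

lgbm = models["Light GBM"]                 # fitted in the earlier sketch
explainer = shap.TreeExplainer(lgbm)
shap_values = explainer.shap_values(X_te)  # one SHAP value per feature per sample
# Beeswarm summary: features ranked on the y-axis, signed impact on the x-axis
shap.summary_plot(shap_values, X_te, feature_names=feature_names)

# LightGBM's built-in split-based importance, for comparison
print(dict(zip(feature_names, lgbm.feature_importances_)))
```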

2.5. Parameters

Vectors were defined using the coordinates of the bilateral shoulder joints, the right elbow joint, the right wrist joint, and the right hip joint, as recognized by MediaPipe, and several parameters for use in machine learning were established (Figure 5).
The parameters used in the analysis are described in detail below (Table 1), and a computation sketch follows the list.
  • norm_elbow_size: The value obtained by dividing the cross product of the vector from the right shoulder to the right elbow and the vector from the right elbow to the right wrist joint by the square of the length of the vector from the right shoulder joint to the right hip joint.
  • norm_shoulder_size: The value obtained by dividing the cross product of the vector from the right shoulder to the right elbow and the vector from the left shoulder to the right shoulder by the square of the length of the vector from the right shoulder joint to the right hip joint (per Table 1).
  • norm_forearm_distance: The value obtained by dividing the length of the vector from the right elbow to the right wrist by the length of the vector from the right shoulder joint to the right hip joint.
  • norm_uparm_distance: The value obtained by dividing the distance of the vector from the right shoulder to the right elbow by the distance of the vector from the right shoulder joint to the right hip joint.
  • elbow_angle: The angle formed by the right shoulder, the right elbow, and the right wrist.
  • shoulder_angle: The angle formed by the right elbow, the right shoulder, and the right wrist.
  • trunk_angle: The angle formed by the left shoulder, the right shoulder, and the right hip.
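A minimal sketch of these parameter computations, assuming each landmark is a 2D NumPy point from MediaPipe; the vector letters (a: upper arm, b: forearm, c: trunk, d: shoulder line) are our reading of Table 1 and Figure 5:

```python
import numpy as np

def cross2d(u, v):
    """Signed parallelogram area spanned by 2D vectors u and v."""
    return u[0] * v[1] - u[1] * v[0]

def angle_at(p, vertex, q):
    """Angle in degrees at `vertex`, formed by points p-vertex-q."""
    u, v = p - vertex, q - vertex
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def parameters(l_sh, r_sh, r_el, r_wr, r_hip):
    a = r_el - r_sh        # upper arm
    b = r_wr - r_el        # forearm
    c = r_hip - r_sh       # trunk (the normalizer)
    d = r_sh - l_sh        # shoulder line
    c2 = np.dot(c, c)      # squared trunk length
    return {
        "norm_elbow_size": cross2d(a, b) / c2,
        "norm_shoulder_size": cross2d(a, d) / c2,
        "norm_forearm_distance": np.linalg.norm(b) / np.sqrt(c2),
        "norm_uparm_distance": np.linalg.norm(a) / np.sqrt(c2),
        "elbow_angle": angle_at(r_sh, r_el, r_wr),
        "shoulder_angle": angle_at(r_el, r_sh, r_wr),
        "trunk_angle": angle_at(l_sh, r_sh, r_hip),
    }
```

Keeping the cross product signed preserves the rotation direction, which matches the convention of negative values for internal rotation.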

3. Results

3.1. Validation for Application

To evaluate the reliability of the angle measurement application, MAE was calculated using the angle data obtained. The MAE was 0.96 when the smartphone was placed on the tabletop and 0.91 when it was fixed to the forearm. Therefore, we used the app as the source of the true value.

3.2. Estimation of Shoulder Rotation Angle

Machine learning was performed using training data randomly selected from the total of 10,608 acquired images. An example of an angle estimation scene is shown in Figure 6; the estimated angle is displayed above the volunteer's head. Comparing the models, both the correlation coefficient and the MAE were best for Light GBM (Table 2). The hyperparameters used in model training are summarized in Table 3. For each ML model, the actual values (true angles) were plotted on the x-axis and the residuals (actual angle − predicted angle) on the y-axis (Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11). The mean and standard deviation of the residuals of the linear regression and Light GBM models at each angle are summarized in Table 4 and Table 5. A heat map showing the correlation of each parameter is shown in Figure 12. According to the heat map, norm_elbow_size and norm_shoulder_size showed a strong correlation with the angle. Furthermore, norm_elbow_size scored highest in the feature importance evaluation, indicating that it has the greatest impact on the models estimating the shoulder rotation angle. The heat map results are consistent with the feature importance and SHAP value results. The evaluation of SHAP values showed that norm_elbow_size had high scores in both the positive and negative directions (Figure 13).

4. Discussion

Shoulder pain impairs patients' quality of life and ability to work [24]. Therefore, shoulder motion analysis is important in medical practice [6]. In this study, the combination of machine learning and MediaPipe made it possible to estimate the rotation angle of the shoulder from a forward-facing position. Normally, a goniometer is used to measure the angle of the shoulder joint, but it has limitations, such as the difficulty of measuring during exercise, and the shoulder rotation angle is difficult to measure from a facing posture. In previous studies, there have been some reports of angle estimation using pose estimation AI, but none of shoulder joint rotation angles [19,20,21]. The method in this study is useful for angle measurement in clinical practice because it allows real-time evaluation at little cost.
Among the machine learning methods, linear regression and Light GBM achieved correlation coefficients of 0.972 and 0.997, respectively, indicating high accuracy. The MAE of linear regression was 6.056, an error of more than 5 degrees, but this is expected to improve as the number of cases increases. Linear regression defines a linear relationship between variables x and y with the equation y = a + bx and allows the value of the dependent variable y to be estimated from the independent variable x [25]. In this study, the linear regression model estimated the rotation angle with high accuracy, presumably because the parameter settings (especially norm_elbow_size) have a linear relationship with the rotation angle. Previous reports have examined posture estimation using a combination of MediaPipe and Light GBM, but none have addressed the shoulder rotation angle [26,27]. In the present study, Light GBM likewise estimated angles with high accuracy, with a correlation coefficient of 0.997 and an MAE of 1.464. It is a cost-effective method for estimating angles, as it requires minimal computation and no special equipment.
Residual plots are essential for evaluating regression models because they clarify violations of assumptions and pinpoint outliers. In the Light GBM plot in this study, the residuals are spread across all angles with no systematic deviations, although horizontal bands within the residuals may suggest consistent prediction errors at certain angular values. The linear regression plot likewise shows no systematic pattern, but the narrower spread of the Light GBM residuals indicates lower variance, consistent with its superior accuracy. In both models, the residuals are centered around zero, suggesting that the predictions are generally reliable.
SHAP values explain the effect of a feature taking a certain value compared with the prediction that would be made if that feature took some baseline value. The features listed on the y-axis are those the model considers when making estimates, ordered by importance: higher-ranked features have a greater impact on the output of the angle estimation model. norm_elbow_size appears to be among the most impactful features, as it is located at the top of the y-axis. Features with SHAP values to the right (positive impact) increase the model's output, while those to the left (negative impact) decrease it. Figure 13 shows that higher values of norm_elbow_size (pink/red, right side) correspond to higher predicted values, and lower values (blue, left side) to lower predicted values. norm_elbow_size is the area of the parallelogram formed by the upper arm and the forearm, normalized by the square of the trunk length. Since the trunk length does not change with movement, normalization by the trunk accommodates changes in the distance between the camera and the subject. Furthermore, the area of the parallelogram increases as the rotation angle increases from a position facing directly forward (Figure 14); the correlation is negative for internal rotation and positive for external rotation because the internal rotation direction is defined as negative. The correlation between the parallelogram and the rotation angle would therefore change if the camera and the subject were not directly facing each other. Some previous reports have changed the camera position to estimate the shoulder abduction angle, and it has been reported that, with appropriate parameters set during machine learning, the angle can still be estimated with high accuracy [21].
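As a toy geometric check of this reasoning (our simplification, not the authors' derivation): viewed from the front with the elbow flexed 90 degrees, the forearm points toward the camera at 0 degrees of rotation, so its projection onto the image plane scales with the sine of the rotation angle, and the projected parallelogram area grows with the rotation:

```python
import numpy as np

upper_arm = np.array([0.0, 1.0])  # projected upper arm, hanging vertically
for deg in (10, 50):
    theta = np.radians(deg)
    # projected forearm length in the image plane ~ sin(rotation angle)
    forearm = np.array([np.sin(theta), 0.0])
    area = abs(upper_arm[0] * forearm[1] - upper_arm[1] * forearm[0])
    print(f"ER {deg} deg: projected parallelogram area = {area:.3f}")
# prints ~0.174 at ER 10 deg and ~0.766 at ER 50 deg, i.e., the area
# increases with external rotation, as illustrated in Figure 14
```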
The estimation model developed in this study can display angles in real time without complicated devices, a feature not seen in previous studies and considered innovative. With a tablet device, real-time examination and measurement can be performed remotely, expanding the potential for applications in telemedicine and rehabilitation. The prospect of analyzing the movements of the thumb and other fingers from various positions with a smartphone is also expected to contribute to efficient motion analysis and to applications in telemedicine and rehabilitation.
There are several limitations to this study. First, the maximum external rotation angle with the arm at the side is defined as 60 degrees, and the maximum internal rotation as 80 degrees [28]; however, some subjects had difficulty actively reaching 60 degrees of external rotation, so the rotation angle was limited to 50 degrees. Second, the application was used as the source of the true angle, and the reported accuracy of such apps varies across studies [10]. Moreover, both the reliability assessment of the app and the evaluation of the rotation angle were performed in 10-degree increments, whereas actual rehabilitation and sports settings may require finer angle evaluation. Third, only the rotation angle with the arm at the side was evaluated. The shoulder is a joint with complex three-dimensional motion, including flexion-extension and abduction-adduction, and the angles of more complex motions must also be estimable. Future research on angle estimation models built from data on more complex motions is needed.

5. Conclusions

This study demonstrated the potential of estimating shoulder joint rotation angles by combining MediaPipe and machine learning. The machine learning models were trained on images from videos of 10 volunteers and evaluated using correlation coefficients and MAE. Both linear regression and Light GBM estimated shoulder rotation angles with high accuracy. This method allows real-time estimation of shoulder rotation angles without special equipment, which previous methods have not achieved. Estimation is possible with only a tablet device and is expected to be useful in remote areas.
In conclusion, the combination of MediaPipe and ML enabled highly accurate estimation of shoulder rotation angles in real time from videos taken with a smartphone from a forward-facing position.

Author Contributions

Conceptualization, S.T. (Shunsaku Takigami); methodology, A.I.; software, A.I.; validation, S.T. (Shuya Tanaka), T.F., T.K., Y.E. and K.Y.; formal analysis, M.K.; investigation, M.K.; resources, K.Y., H.N., M.K. and A.I.; data curation, S.T. (Shuya Tanaka); writing—original draft preparation, S.T. (Shuya Tanaka); writing—review and editing, A.I. and Y.M.; visualization, M.K.; supervision, A.I. and Y.M.; project administration, R.K.; funding acquisition, R.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Kobe University Review Board (approval number: B210009; approval date: 21 April 2021).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the patients to publish this paper.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author. The data are not publicly available because of confidentiality concerns.

Conflicts of Interest

The authors declare no conflicts of interest. The sponsors had no role in the design, execution, interpretation, or writing of the study.

References

  1. Kishore, D.M.; Bindu, S.; Manjunath, N.K. Estimation of Yoga Postures Using Machine Learning Techniques. Int. J. Yoga. 2022, 15, 137–143. [Google Scholar] [CrossRef] [PubMed]
  2. Bakhsh, W.; Nicandri, G. Anatomy and Physical Examination of the Shoulder. Sports Med. Arthrosc. Rev. 2018, 26, e10–e22. [Google Scholar]
  3. Dutton, M. Orthopaedic Examination, Evaluation, and Intervention, 2nd ed.; McGraw Hill: Philadelphia, PA, USA, 2008. [Google Scholar]
  4. Hoppenfield, S. Physical Examination of the Spine and Extremities; Prentice-Hall: Norwalk, CT, USA, 1976. [Google Scholar]
  5. Norkin, C.; White, J. Measurement of Joint Motion: A Guide to Goniometry, 4th ed.; F.A. Davis: Philadelphia, PA, USA, 2009. [Google Scholar]
  6. Longo, U.G.; De Salvatore, S.; Carnevale, A. Optical Motion Capture Systems for 3D Kinematic Analysis in Patients with Shoulder Disorders. Int. J. Environ. Res. Public Health 2022, 19, 12033. [Google Scholar] [CrossRef] [PubMed]
  7. Michael, J.M.; McHugh, M.P.; Johnson, C.P. Reliability of shoulder range of motion comparing a goniometer to a digital level. Physiother. Theory Pract. 2010, 26, 327–333. [Google Scholar]
  8. Terwee, C.B.; de Winter, A.F.; Scholten, R.J. Interobserver reproducibility of the visual estimation of the range of motion of the shoulder. Arch. Phys. Med. Rehabil. 2005, 86, 1356–1361. [Google Scholar] [CrossRef] [PubMed]
  9. Tozzo, M.C.; Ansanello, W.; Martins, J. Inclinometer Reliability for Shoulder Ranges of Motion in Individuals With Subacromial Impingement Syndrome. J. Manip. Physiol. Ther. 2021, 44, 236–243. [Google Scholar] [CrossRef]
  10. Keogh, J.W.L.; Cox, A.; Anderson, S. Reliability and validity of clinically accessible smartphone applications to measure joint range of motion: A systematic review. J. Manip. Physiol. Ther. 2021, 44, 236–243. [Google Scholar] [CrossRef]
  11. Werner, B.C.; Holzgrefe, R.E.; Griffin, J.W. Validation of an innovative method of shoulder range-of-motion measurement using a smartphone clinometer application. J. Shoulder. Elb. Surg. 2014, 23, e275–e282. [Google Scholar] [CrossRef]
  12. Hwang, S.; Ardebol, J.; Ghayyad, K. Remote visual estimation of shoulder range of motion has generally high interobserver reliability but limited accuracy. JSES. Int. 2023, 7, 2528–2533. [Google Scholar] [CrossRef]
  13. Yahya, M.; Shah, J.A.; Yusof, Z.M. Motion capture sensing techniques used in human upper limb motion: A review. Sens. Rev. 2019, 39, 504–511. [Google Scholar] [CrossRef]
  14. Hsieh, C.; Yun, D.; Bhatia, A.C. Patient perception on the usage of smartphones for medical photography and reference in dermatology. Dermatol. Surg. 2015, 41, 149–154. [Google Scholar] [CrossRef]
  15. Piercy, T.; Herrmann, G.; Cangelosi, A. Using skeletal position to estimate human error rates in telemanipulator operators. Front. Robot. AI 2024, 10, 1287417. [Google Scholar] [CrossRef]
  16. Kim, D.; Park, H.; Kim, T. Real-time driver monitoring system with facial landmark-based eye closure detection and head pose recognition. Sci. Rep. 2023, 13, 18264. [Google Scholar] [CrossRef]
  17. Dong, C.; Du, G. An enhanced real-time human pose estimation method based on a modified YOLOv8 framework. Sci. Rep. 2024, 14, 8012. [Google Scholar] [CrossRef]
  18. Siddiqui, H.U.R.; Saleem, A.A.; Raza, M.A. Empowering Lower Limb Disorder Identification through PoseNet and Artificial Intelligence. Diagnostics 2023, 13, 2881. [Google Scholar] [CrossRef] [PubMed]
  19. Bazarevsky, V.; Grishchenko, I.; Raveendran, K.; Zhu, T.; Zhang, F.; Grundmann, M. BlazePose: On-device Real-time Body Pose tracking. arXiv 2020, arXiv:2006.10204. [Google Scholar]
  20. Halder, A.M.; Kuhl, S.G.; Zobitz, M.E.; Larson, D.; An, K.N. Effects of the glenoid labrum and glenohumeral abduction on the stability of the shoulder joint through concavity-compression: An in vitro study. J. Bone. Jt. Surg. Am. 2001, 83, 1062–1069. [Google Scholar] [CrossRef]
  21. Kusunose, M.; Inui, A.; Nishimoto, H. Measurement of Shoulder Abduction Angle with Posture Estimation Artificial Intelligence Model. Sensors 2023, 23, 6445. [Google Scholar] [CrossRef] [PubMed]
  22. Wang, X.M.; Smith, D.T.; Zhu, Q. A webcam-based machine learning approach for three-dimensional range of motion evaluation. PLoS ONE 2023, 18, e0293178. [Google Scholar] [CrossRef]
  23. Mittal, N.; Sabo, A.; Deshpande, A. Feasibility of video-based joint hypermobility assessment in individuals with suspected Ehlers-Danlos syndromes/generalized hypermobility spectrum disorders: A single-site observational study protocol. BMJ Open 2022, 12, e068098. [Google Scholar] [CrossRef]
  24. Ruopsa, N.; Vastamaki, H.; Ristolainen, L. Convergent Validity of Thoracic Outlet Syndrome Index (TOSI). Phys. Act. Health. 2022, 6, 16–25. [Google Scholar] [CrossRef]
  25. Lundberg, S.M.; Lee, S. A unified approach to interpreting model predictions. In Proceedings of the Advances in Neural Information Processing Systems 30 (NIPS), Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  26. Schneider, A.; Hommel, G.; Blettner, M. Linear regression analysis: Part 14 of a series on evaluation of scientific publications. Dtsch. Arztebl. Int. 2010, 107, 776–782. [Google Scholar] [PubMed]
  27. Shin, J.; Matsuoka, A.; Hasan, M.A.M. American Sign Language Alphabet Recognition by Extracting Feature from Hand Pose Estimation. Sensors 2021, 21, 5856. [Google Scholar] [CrossRef] [PubMed]
  28. American Academy of Orthopaedic Surgeons. Joint Motion: Method of Measuring and Recording; Churchill Livingstone: Chicago, IL, USA, 1965. [Google Scholar]
Figure 1. Validation of the angle measurement app.
Figure 2. (a) A smartphone attached to the forearm. (b) The camera position for recording.
Figure 3. MediaPipe landmarks.
Figure 4. The workflow of the data acquisition and ML. A total of 10,608 images were captured at 30 fps over 2 s from 10 participants with 11 different shoulder rotation angles ranging from −50 to 50°. The captured images were randomly separated and 80% were assigned as training data to fine-tune the model parameters and 20% as test data to evaluate the efficacy of each ML model. Training data were used to identify the best hyperparameters for each ML algorithm, and MAE and correlation coefficient were used as indicators to estimate and compare model accuracy.
Figure 5. Landmarks and vectors used in this study.
Figure 6. Actual angle estimation images.
Figure 7. The residuals (actual angle − predicted angle) of the linear regression model plotted against the actual angles for the test data.
Figure 8. The residuals (actual angle − predicted angle) of the ElasticNet model plotted against the actual angles for the test data.
Figure 9. The residuals (actual angle − predicted angle) of the SVM model plotted against the actual angles for the test data.
Figure 10. The residuals (actual angle − predicted angle) of the random forest regression model plotted against the actual angles for the test data.
Figure 11. The residuals (actual angle − predicted angle) of the Light GBM model plotted against the actual angles for the test data.
Figure 12. Heat map for each parameter. Warm colors show a positive correlation and cold colors show a negative correlation.
Figure 13. Feature importance and SHAP value in Light GBM.
Figure 14. The area of the parallelogram increases as the angle of rotation increases ((a) ER 10°; (b) ER 50°).
Table 1. List of parameters used in this study.

Name                     Definition
norm_elbow_size          (a × b) / c²
norm_shoulder_size       (a × d) / c²
norm_forearm_distance    b / c
norm_uparm_distance      a / c
elbow_angle              ∠①-②-③
shoulder_angle           ∠②-①-③
trunk_angle              ∠⑤-①-④
Table 2. The accuracies of each ML model are summarized.

Model                      Correlation coefficient   MAE
Linear regression          0.972                     6.056
ElasticNet                 0.972                     5.935
SVM                        0.989                     2.468
Random forest regression   0.994                     2.063
Light GBM                  0.997                     1.464
Table 3. Summary of representative hyperparameters of each model.

Linear regression          Penalty: L2; C: 1.0; solver: lbfgs
ElasticNet                 Alpha: 1 × 10⁻⁵; L1-ratio: 0.889; fit intercept: True
SVM                        C: 10.0; gamma: 0.0046
Random forest regression   Criterion: squared error; max depth: 6; number of estimators: 10
Light GBM                  Objective: mean absolute error; learning rate: 0.076; max depth: 8
Table 4. The mean and standard deviation of the residuals at each angle for the linear regression model.

True angle (°)   Residual mean (°)   Residual SD (°)
−50              −6.44               6.27
−40              −1.32               5.61
−30               0.39               6.10
−20               2.46               5.42
−10               2.55               6.51
0                 0.63               6.25
10               −3.31               8.72
20               −4.81               6.65
30               −1.63               4.38
40                3.26               5.77
50                8.50               5.57
Table 5. The mean and standard deviation of the residuals at each angle for the Light GBM model.

True angle (°)   Residual mean (°)   Residual SD (°)
−50              −1.88               3.47
−40              −0.59               3.47
−30               0.31               3.23
−20               0.14               2.71
−10               0.31               3.34
0                 1.05               4.09
10                0.68               4.53
20               −1.19               5.96
30               −0.44               5.19
40                0.62               4.28
50                3.33               6.57

