Article

Real-Time Monitoring and Assessment of Rehabilitation Exercises for Low Back Pain through Interactive Dashboard Pose Analysis Using Streamlit—A Pilot Study

by Dilliraj Ekambaram and Vijayakumar Ponnusamy *
Department of Electronics and Communication Engineering, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, India
* Author to whom correspondence should be addressed.
Electronics 2024, 13(18), 3782; https://doi.org/10.3390/electronics13183782
Submission received: 5 August 2024 / Revised: 10 September 2024 / Accepted: 18 September 2024 / Published: 23 September 2024
(This article belongs to the Section Bioelectronics)

Abstract

In the modern era, AI-driven algorithms have significantly influenced medical diagnosis and therapy. In this pilot study, we propose using Streamlit 1.38.0 to create an interactive dashboard, PoAna .v1 (Pose Analysis), as a new approach to remotely monitored rehabilitation. In real time, our system accurately tracks and evaluates individualized rehabilitation exercises for patients suffering from low back pain, offering exercise visualization and guidance, real-time feedback and monitoring, and personalized exercise plans. The dashboard proved effective for tracking rehabilitation progress. We recruited 32 individuals for this pilot study and monitored each individual's overall performance for one week. Of the participants, 18.75% engaged in rehabilitative exercises less frequently than twice daily; 81.25% did so at least three times daily. The proposed Long Short-Term Memory (LSTM) architecture achieved a training accuracy of 98.8% and a testing accuracy of 99.7%, with an average 10-fold cross-validation accuracy of 98.54%. Pre- and post-test assessments show a significant difference in pain levels (t = 12.175, p < 0.05). The proposed system's usability score of 79.375 indicates that the PoAna .v1 web application provides a user-friendly environment. So far, our research suggests that the Streamlit 1.38.0-based dashboard improves patients' engagement, adherence, and success with exercise. Future research aims to add features that further improve the complete care of low back pain (LBP) and to validate the effectiveness of this intervention in larger patient cohorts.

1. Introduction

LBP frequently co-occurs with other musculoskeletal disorders such as rheumatoid arthritis, carpal tunnel syndrome, osteoarthritis, ligament sprains, and bone fractures. Notably, the prevalence of LBP has significantly impacted working personnel in low-income and middle-income countries [1]. According to Ferreira et al. [2], by 2050, LBP will be the primary cause of years lived with disability (YLD). Hospitals and rehabilitation centers are the usual settings for rehabilitation programs that involve professional monitoring and direction [3,4]. Self-management is strongly recommended as a first-line therapy option. The personalized care system represents a modern, Society 5.0 implementation of medicine [5]. Personalized healthcare systems alleviate the limitations of traditional therapy systems. Understanding visual situations, the primary objective of computer vision, requires a variety of tasks, such as recognition, detection, and captioning [6]. A standard method for action recognition involves representing human stances in videos as skeleton data [7].
Artificial Intelligence (AI)-based personalized physiotherapy systems are an impactful technology for people [8]. The Sussex Community National Health Service (NHS) organization prescribed the exercises for LBP rehabilitation [9]. Self-rehabilitation faces issues when analyzing the prescribed exercise poses, and poorly executed poses may cause severe muscle strain [10]. Extracting skeletal data with pose estimation techniques (OpenPose, MediaPipe) enables precise monitoring of exercise form [11]. Recent research utilizing Microsoft Kinect depth-sensing cameras [12] has demonstrated that artificial intelligence (AI) may offer immediate feedback during squat exercises [13,14], thereby greatly enhancing the precision of the workout movements and increasing user involvement. Building upon prior efforts, this system provides accurate and real-time monitoring to guarantee users execute exercises accurately and safely. We have specifically developed dedicated AI-powered exercise regimens to target persistent low back pain (LBP), a prevalent and incapacitating ailment. Such technologies have improved users' quality of life by providing customized, artificial intelligence-directed workout regimens. The selfBACK project has established the efficacy of artificial intelligence (AI) in encouraging physical exercise through a self-management application [15,16,17]. These results indicate that this method has the potential to improve compliance with exercise regimens by allowing users to independently manage their rehabilitation with less medical involvement. Comparative studies of AI applications and conventional physical therapy have shown that AI can be as effective as, if not more effective than, a therapist at directing exercises such as squats.
We selected Streamlit 1.38.0 for this project because of its ability to efficiently create and implement interactive web applications [18]. It integrates effortlessly with Python (version 3.10.5), enabling us to construct a user-friendly interface capable of efficiently visualizing and interacting with deep learning models developed using Keras 3.3.2 or PyTorch 2.4. The proposed framework is well-suited for applications that necessitate real-time data processing and user engagement. For instance, our self-rehabilitation tool offers prompt feedback on exercise performance. Furthermore, Streamlit 1.38.0 simplifies the deployment process, allowing us to focus on the fundamental features of the program without requiring considerable front-end coding [19,20,21]. The key challenge in this technique is the loss of tracking joints due to poor lighting conditions, the occlusion of body parts, and more [22,23,24]. This proposed web application, equipped with an interactive dashboard, enables users to independently monitor their own movement and receive visual feedback on the correctness of their exercise. An LSTM classifies exercise poses and provides feedback on their correctness. Furthermore, the therapist can evaluate the participants' overall performance and provide comments to help the user enhance their exercise performance. The key points from our work are summarized below: (1) To analyze motor function during LBP exercise therapy, our model uses standard RGB videos. (2) Deep learning-based RGB data fusion allowed for an accurate algorithm for motor function assessment. (3) Our model significantly outperformed SOTA approaches in evaluating the functions of LBP exercise. (4) We created a web health app that provides assessment results, recommends various self-exercises for patients, and monitors their recovery. We conducted a statistical analysis to validate the significance of the proposed framework.
The structure of this research provides a comprehensive understanding of "PoAna .v1", a real-time interactive dashboard for LBP exercise rehabilitation analysis and feedback generation. Section 1 introduces the basic understanding of AI for healthcare sectors in the rehabilitation of LBP, highlighting the different self-rehabilitation techniques equipped with AI and the software developed for self-rehabilitation and telerehabilitation web applications. Section 2 describes the materials and methods, including the deep learning models, feature collection and preprocessing, and a comparison of various web application frameworks. Section 3 details the experiments used to evaluate overall performance, the statistical analysis of the pre- and post-pain assessments, the comparison of performance metrics with other methods, and the system usability score for the PoAna .v1 web dashboard for self-rehabilitation. Section 4 discusses the real-time performance of the proposed system and deliberates on its effectiveness relative to other approaches. Finally, Section 5 concludes the paper by outlining the limitations of the proposed approach and future directions for further implementations.

2. Materials and Methods

We included participants in this study if they had experienced musculoskeletal pain, such as shoulder, wrist, elbow, and hip pain, for at least 3 weeks. We utilized the pain assessment scales (PAS) questionnaire [25] to recruit the participants. We conducted the recruitment process among the volunteer participants. Initially, 38 people showed interest in participating in this study. We excluded four participants because they did not meet our inclusion criteria (two individuals had recently undergone surgery; one was an alcoholic, and another was on medication for high blood sugar). Two people dropped out of the study due to their poor responsiveness to continuing work on our web application. Figure 1 depicts the flow of participants in this study.
We are trialing the PoAna .v1 app intervention on a general group, rather than a subgroup with specific pain characteristics, to improve the quality of life for all people with non-specific chronic LBP, regardless of demographic factors such as symptom duration. We recruited three different age groups of participants with muscle strain, sharp pain in the upper and lower limbs, stiffness, limited mobility of motor functions, and radiated pain in the upper and lower body parts. The gender ratio of the overall participants is 11 females to 21 males. The mean age of participants is 37.38 ± 15.19 years, and the mean BMI is 25.83 ± 4.18 kg/m². Table 1 displays the three different age groups of participants, their anthropometric parameters, and pain level in the pre-pain assessment.

2.1. Proposed System Flow Diagram

We captured self-rehabilitation exercise poses using a webcam, mobile camera, and standard RGB camera. The captured video feed served as the primary input for the MediaPipe framework, which extracts the key landmarks on the human body joints from the video feed. The data collection for this study consisted of 96 video feeds captured from four distinct field-of-view angles. The videos ranged in duration from 10 to 30 s and documented nine different postures held by sixteen individuals during low back pain recovery exercises. For model training, the extracted frames were evenly divided into 4800 images per class. We estimated the key points on the body using the landmarks. These key points determine the angles for various body joint movements in the video feed. The LSTM model trains on the various body joint angles and generates feedback on exercise performance using the same key points. If the system determines that the calculated angles fall within the accepted range for the exercise, it provides feedback indicating that the exercise was performed correctly; otherwise, it indicates that the exercise was not performed correctly. Table 2 tabulates the accepted angle ranges for the important body joints.
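The feedback rule described above can be illustrated with a minimal Python sketch, shown below. It checks computed joint angles against the accepted ranges of Table 2 for the arm-raise pose (left-arm variant); the function and variable names are hypothetical and do not reproduce the authors' implementation.

```python
# Illustrative sketch: compare computed joint angles with the accepted
# ranges of Table 2 and report whether the pose is performed correctly.

# Arm-raise (left-arm variant) targets from Table 2: (target angle, tolerance).
ARM_RAISE_LEFT = {
    "left_elbow":    (175.0, 2.0),
    "left_shoulder": (177.0, 2.0),
}

def check_pose(angles: dict, accepted: dict) -> tuple[bool, dict]:
    """Return overall correctness and the per-joint deviation from the target."""
    deviations = {}
    correct = True
    for joint, (target, tol) in accepted.items():
        deviation = angles.get(joint, float("nan")) - target
        deviations[joint] = deviation
        if not abs(deviation) <= tol:      # a missing joint (NaN) also fails
            correct = False
    return correct, deviations

# Hypothetical angles computed from the current frame.
frame_angles = {"left_elbow": 176.1, "left_shoulder": 178.3}
is_correct, deviation = check_pose(frame_angles, ARM_RAISE_LEFT)
print("performed correctly" if is_correct else "not performed correctly", deviation)
```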
The Logitech C920 external webcam, selected for its wide availability, cost-effectiveness, and dependable performance, records video at 1080p resolution (1920 × 1080 pixels) and 30 frames per second; this video quality was well suited to our investigation. The laptop's integrated camera captures images at a resolution of 720p (1280 × 720 pixels) at 30 frames per second. Although its resolution was lower, it was sufficient for our requirements, allowing us to evaluate the resilience of our techniques across various image qualities. We stored the feedback and the calculated angles in a CSV file for future reference and analysis. "PoAna .v1", also known as Pose Analysis, integrates the entire process into a sleek web application. The app allows participants to access daily performance feedback, view a detailed analysis of their exercises, and track their progress. Figure 2 illustrates the proposed system process flow.

2.2. Feature Collection and Processing

We compute each joint angle from three MediaPipe landmarks as
$rad = \operatorname{arctan2}(c_y - b_y,\ c_x - b_x) - \operatorname{arctan2}(a_y - b_y,\ a_x - b_x)$, (1)
$angle = \lvert rad \rvert \times \frac{180}{\pi}$, (2)
where $a$, $b$, and $c$ are body-joint landmarks whose $x$ and $y$ coordinates are extracted from each frame of the video sequence. The difference between the arctangents of the two segments meeting at joint $b$ gives the value $rad$ in radians; taking its absolute value and multiplying by $180/\pi$ yields the joint angle in degrees. Detailed information about the tracked skeletal joints, the set of derived characteristics, and the activity class labels can be seen in Table 3.
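As a worked illustration of Equations (1) and (2), the Python sketch below computes the angle at joint $b$ from three landmarks; the coordinates are hypothetical values in MediaPipe's normalized image space, and the final wrap to [0°, 180°] is a common convention rather than part of Equation (2).

```python
# Joint angle at b formed by landmarks a-b-c, following Equations (1) and (2).
import numpy as np

def joint_angle(a, b, c):
    """a, b, c are (x, y) pairs; returns the angle at b in degrees."""
    rad = np.arctan2(c[1] - b[1], c[0] - b[0]) - np.arctan2(a[1] - b[1], a[0] - b[0])
    angle = abs(rad) * 180.0 / np.pi                    # Equation (2)
    return 360.0 - angle if angle > 180.0 else angle    # keep the result in [0, 180]

# Example: left elbow angle from shoulder (a), elbow (b), and wrist (c) landmarks.
shoulder, elbow, wrist = (0.55, 0.30), (0.60, 0.45), (0.63, 0.60)
print(round(joint_angle(shoulder, elbow, wrist), 1))    # ~172.9, a nearly straight arm
```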
Algorithm 1 depicts the collection of exercise poses and the export of key points.
Algorithm 1: Pose Data Collection and Keypoint Export
Input:
  V: video file path
  A: set of actions
  N: number of sequences
  L: length of each sequence (number of frames)
  D: base path for saving keypoints
Output:
  Keypoint data stored in numpy arrays
Initialize variables and models:
  cap ← VideoCapture(V)
  holistic ← Holistic(min_detection_confidence = 0.5, min_tracking_confidence = 0.5)
Loop through actions and sequences:
  for each action a ∈ A do
    for each sequence s ∈ {0, 1, …, N − 1} do
      for each frame f ∈ {0, 1, …, L − 1} do
        Read frame: ret, frame ← cap.read()
        Make detections: image, results ← mediapipe_detection(frame, holistic)
        Draw landmarks: draw_styled_landmarks(image, results)
        Apply wait logic:
          if f = 0 then
            display "STARTING COLLECTION" and "Collecting frames for video number s"
            waitKey(2000 ms)
          else
            display "Collecting frames for video number s"
          end if
        Export keypoints:
          keypoints ← extract_keypoints(results)
          npy_path ← os.path.join(D, a, str(s), str(f))
          np.save(npy_path, keypoints)
      end for
    end for
  end for
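The collection loop of Algorithm 1 can be reproduced with the MediaPipe Holistic API as sketched below; the action subset, the sequence and frame counts, and the extract_keypoints() helper are assumptions standing in for the authors' exact routines, and the landmark-drawing step is omitted.

```python
# Sketch of Algorithm 1: capture frames, run MediaPipe Holistic, save keypoints.
import os
import cv2
import numpy as np
import mediapipe as mp

ACTIONS = ["Arm_raise", "Bridge_pose"]     # subset of the class labels in Table 3
N_SEQUENCES, SEQ_LENGTH = 30, 30           # assumed values
DATA_PATH = "keypoint_data"

def extract_keypoints(results):
    """Flatten the 33 pose landmarks (x, y, z, visibility) into one vector."""
    if results.pose_landmarks:
        return np.array([[lm.x, lm.y, lm.z, lm.visibility]
                         for lm in results.pose_landmarks.landmark]).flatten()
    return np.zeros(33 * 4)

cap = cv2.VideoCapture(0)                  # webcam, or a recorded video file path
with mp.solutions.holistic.Holistic(min_detection_confidence=0.5,
                                    min_tracking_confidence=0.5) as holistic:
    for action in ACTIONS:
        for seq in range(N_SEQUENCES):
            for frame_num in range(SEQ_LENGTH):
                ret, frame = cap.read()
                if not ret:
                    break
                results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
                if frame_num == 0:
                    cv2.waitKey(2000)      # pause before each new sequence
                keypoints = extract_keypoints(results)
                out_dir = os.path.join(DATA_PATH, action, str(seq))
                os.makedirs(out_dir, exist_ok=True)
                np.save(os.path.join(out_dir, str(frame_num)), keypoints)
cap.release()
```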
To identify the exercise positions, we gathered ten angle features, at the left and right elbow, shoulder, hip, knee, and wrist, computed from twelve key points obtained through MediaPipe 0.10.15, as shown in Figure 3.
We created feature vectors after carefully selecting relevant features for each action. We save the ten distinct angular characteristics as a dataset in a numpy array (.npy) format. In the third step, we feed the dataset containing the features recovered from the series of frames to the LSTM, enabling it to recognize rehabilitation exercise poses for non-specific chronic back pain.
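The saved .npy windows can then be assembled into training arrays for the LSTM as sketched below; the directory layout and sequence counts follow the collection sketch above and are assumptions rather than the authors' exact configuration.

```python
# Load per-frame feature files into (samples, frames, features) arrays.
import os
import numpy as np
from tensorflow.keras.utils import to_categorical

ACTIONS = ["Arm_raise", "Bridge_pose", "Cat_cow", "Child_pose",
           "Knee_hug", "Knee_roll", "Lumbar_flexion", "Side_bend"]
DATA_PATH, N_SEQUENCES, SEQ_LENGTH = "keypoint_data", 30, 30

sequences, labels = [], []
for label, action in enumerate(ACTIONS):
    for seq in range(N_SEQUENCES):
        window = [np.load(os.path.join(DATA_PATH, action, str(seq), f"{frame}.npy"))
                  for frame in range(SEQ_LENGTH)]
        sequences.append(window)
        labels.append(label)

X = np.array(sequences)        # shape: (samples, SEQ_LENGTH, features per frame)
y = to_categorical(labels)     # one-hot class labels for the eight poses
print(X.shape, y.shape)
```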

2.3. Comparison of Various Web App Frameworks with Streamlit

In this subsection, we compare various web application frameworks. The most common frameworks used for data science and machine learning web applications [20,21] are Streamlit, Dash, Flask, and Voila. Table 4 shows the comparison of various web app frameworks for data science and machine learning environments.
We chose Streamlit 1.38.0 because it greatly reduced development time, made it easy to integrate machine learning models, enabled fast interface creation, and offered good compatibility with Python.
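As a small illustration of this low development overhead, the sketch below builds a working dashboard page in a few lines of Streamlit; the CSV column names are hypothetical and do not correspond to the PoAna .v1 code.

```python
# Minimal Streamlit dashboard: upload a session log and plot angle deviations.
import streamlit as st
import pandas as pd

st.title("Exercise session summary")
uploaded = st.file_uploader("Upload the per-session angle log (CSV)", type="csv")
if uploaded is not None:
    log = pd.read_csv(uploaded)        # assumed columns: exercise, angle_deviation
    exercise = st.selectbox("Exercise", sorted(log["exercise"].unique()))
    st.line_chart(log[log["exercise"] == exercise]["angle_deviation"])
    st.metric("Frames analysed", len(log))
```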

2.4. Classification Model and Web Application

In this subsection, we describe the lightweight classification model used to classify each action and indicate whether it was performed correctly, and the Streamlit-based web application, "PoAna .v1", that we developed for users to analyze their exercise poses for low back pain rehabilitation. The task at hand necessitates a complete understanding of the activity rather than frame-by-frame details, making the temporal dimension of the data a critical component. The architecture consists of one 64-unit LSTM block followed by a dense layer and a softmax output layer. Figure 4 shows a simplified representation of the suggested model's design.
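A Keras sketch of this architecture is given below; the 64-unit LSTM block and softmax output follow the description above, while the intermediate dense width, optimizer, and input dimensions are assumptions.

```python
# One 64-unit LSTM block, a dense layer, and a softmax classifier over 8 poses.
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

SEQ_LENGTH, N_FEATURES, N_CLASSES = 30, 10, 8   # frames per window, angle features, poses

model = Sequential([
    Input(shape=(SEQ_LENGTH, N_FEATURES)),
    LSTM(64),
    Dense(32, activation="relu"),
    Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```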
Pose estimation is a widely used AI approach that determines a person's position and orientation in space from an image. Chae et al. [11] developed a convolutional LSTM-based mobile app that requires a Microsoft Kinect sensor to capture human motion and classify the squat exercise. Bijalwan et al. [22] propose a system that incorporates 3D joints and skeletal data using Kinect v2 sensor hardware; this hardware requirement creates economic barriers for remote users. Francisco and Rodrigues [26] developed computer-vision-based (OpenPose) tracking of a patient's body joints to recognize and validate recovery exercises. The OpenPose technique overlays 18 body-joint key points, which may not accurately predict the exercise poses, and they detected only three rehabilitation exercise poses. Rangari et al. [27] proposed a system to classify the activity and estimate its pose; improper plank positions remain difficult to categorize in this system. Zheng et al. [28] utilize the OpenPose model for multimodal posture action classification, but achieving adequate accuracy and resilience when estimating the pose in real time remains difficult. Cai [29] analyzes real-time fitness motions, but the method can only monitor the movement of a single limb at a time; more recent systems have improved the ability to track multiple limbs simultaneously. For traditional rehabilitation exercises (the eight-section brocade) in China, Qiu et al. [30] implemented gradient-weighted class activation mapping (Grad-CAM) and presented visual matching results between correct and incorrect poses. Our experimental evaluation uses a Dell laptop (Texas, United States) with a built-in NVIDIA GeForce MX330 GPU, an 11th-generation Intel i5 CPU, and 4 GB of RAM.
Physiotherapists, physical therapy experts, and researchers worked together to develop the PoAna .v1 web application using Streamlit. The PoAna .v1 web application features five tabs on its front end: home, accounts, experts, comments, and contact. The web application takes further action based on the users’ requests. Figure 5 illustrates the web application’s architectural flow.
This web application contains various features that can benefit users and experts. In particular, the app allows users to visualize the exercises they perform in real time and provides real-time feedback indicating whether the exercise was performed correctly. Additionally, the user can view a graphical representation of angle deviations during exercise execution. Users can perform various operations in the PoAna .v1 web application, as depicted in Figure 6. Expert operations are shown in Figure 7.
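The sketch below shows how such a five-tab front end can be laid out with Streamlit's st.tabs; the tab names follow Figure 5, but the page contents are placeholders rather than the deployed PoAna .v1 implementation.

```python
# Skeleton of a five-tab dashboard front end (placeholder contents).
import streamlit as st

st.set_page_config(page_title="PoAna .v1", layout="wide")
home, accounts, experts, comments, contact = st.tabs(
    ["Home", "Accounts", "Experts", "Comments", "Contact"])

with home:
    st.header("Supported rehabilitation exercises")
    st.write("Arm raise, bridge pose, cat-cow, child pose, knee hug, "
             "knee roll, lumbar flexion, side bend.")
with accounts:
    if st.button("Start task"):
        st.info("Webcam analysis would start here and stream real-time feedback.")
with experts:
    st.header("Participant performance overview")
with comments:
    st.text_area("Feedback for the participant")
with contact:
    st.write("Department of ECE, SRM Institute of Science and Technology")
```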

3. Results

This study conducts several assessments to verify the web dashboard's effectiveness for LBP self-rehabilitation: it compares the overall performance among three age groups, examines the clinical outcome measure of the pain assessment scale, evaluates the system usability score (SUS), assesses the accuracy of the model predictions, and compares the effectiveness of SOTA methods.

3.1. Participants’ Overall Performance Analysis

We assessed the overall performance based on how long it took the user to complete an exercise. Table 5 shows the duration required to complete each exercise per the Sussex Community NHS Foundation Trust guidance for mechanical LBP.
The participants are free to perform the exercises mentioned above within the time duration specified in the table and continue an exercise only as long as they can perform it without pain. We compute the accuracy of exercise time duration (A) as the duration of the exercise performed correctly (PC) minus the duration not performed correctly (NPC), mathematically expressed as Equation (3).
$A = PC - NPC$ (3)
We measure the performance of an exercise (P) by considering the accuracy of its time duration. We measure performance within the range of −100 to 100. The performance calculation is based on the following fixed conditions:
  • If the ‘A’ value is greater than or equal to 30 s, the performance is regarded as 100%. The exception is lumbar flexion (LF), whose target duration is 15 s rather than 30 s, so its condition is A ≥ 15.
  • The performance is regarded as −100% if the ‘A’ value is less than or equal to −30 s (−15 s for LF).
  • If the ‘A’ value lies between −30 and 30 inclusive, the performance is calculated as (A/30) × 100, which expresses the exercise performance as a percentage.
Then, finally, we evaluate the overall performance (OP) of the user per session as Equation (4).
$OP = \frac{\sum_{n=1}^{8} P_n}{8}$ (4)
where ‘P’ is the performance of an exercise pose by the user. The graphical representation of various exercise performances is shown in Figure 8.
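A small Python sketch of Equations (3) and (4) under the conditions above is given below; the session durations are hypothetical, and scaling the intermediate case by the 15 s target for lumbar flexion is an assumption, since the (A/30) × 100 rule is stated only for the 30 s exercises.

```python
# Per-exercise performance (Equation (3) plus the fixed conditions) and
# overall session performance (Equation (4)).
def exercise_performance(pc: float, npc: float, target: float = 30.0) -> float:
    """Performance (%) of one exercise; target is 15 s for lumbar flexion."""
    a = pc - npc                           # Equation (3)
    if a >= target:
        return 100.0
    if a <= -target:
        return -100.0
    return (a / target) * 100.0            # assumed to scale with the 15 s LF target

def overall_performance(performances: list[float]) -> float:
    """Equation (4): mean performance over the eight exercises in a session."""
    return sum(performances) / len(performances)

# Hypothetical session: (seconds performed correctly, seconds not, target duration).
session = [(32, 2, 30), (25, 5, 30), (28, 1, 30), (18, 10, 30),
           (30, 0, 30), (22, 6, 30), (14, 1, 15), (27, 4, 30)]
scores = [exercise_performance(pc, npc, target) for pc, npc, target in session]
print([round(s, 1) for s in scores], round(overall_performance(scores), 1))
```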

3.2. Statistical Analysis of Pain Assessment

The null hypothesis of this research study is H0: there is no difference in pain levels before and after treatment, and the alternate hypothesis is H1: there is a significant difference in pain levels before and after treatment. Initially, we collected the pre-pain assessment score before the participants started their self-rehabilitation sessions. The mean pre-pain assessment score is 6.3390 with a standard deviation of 1.9746. The mean post-pain assessment score is 3.0859; the SD is 1.8575. Figure 9 depicts the overall mean score of the pre- and post-pain assessments.
We performed a t-test with the sample of 32 participants. The t-statistic was 12.175, and the achieved p-value was p = 0.000002. Since p < 0.05, we reject the null hypothesis: there is a significant difference between the pre-pain and post-pain assessments. Outliers are marked with a separate symbol in Figure 9. Our system therefore helps the participants improve their exercise performance.
We divided the participants in this study into three age groups: 17–24, 25–55, and 55 and above. To analyze the variation in parameter distribution among these groups, we conducted statistical tests for each parameter within each age group. Within each age group, we utilized a Wilcoxon signed-rank test and a paired t-test to compare the parameters. Implementing this method enabled us to detect notable variations in the distribution of parameters among the different age groups. Table 6 displays the results.
The majority of measurements in the 17–24 age range showed significant variations, except for a few such as Q4, Q7, Q8, and Q10. In almost all measures, the age range of 25–55 had the most persistent and substantial differences. The 55+ age group had fewer significant differences, suggesting a lesser degree of significant diversity in characteristics. The paired t-test can assess the difference between the pre-pain and post-pain assessment scores within the same age group. Equation (5) mathematically represents it.
$t = \frac{\bar{d}}{s_d / \sqrt{n}}$ (5)
Here, $\bar{d}$ is the mean difference between the paired observations, $s_d$ is the standard deviation of the differences, and $n$ is the number of paired observations. To compare the parameter distribution across the different age groups, we employed the Kruskal–Wallis test, expressed as Equation (6).
$H = \frac{12}{N(N+1)} \sum_{i=1}^{k} \frac{R_i^2}{n_i} - 3(N+1)$ (6)
where $N$ is the total number of observations, $R_i$ is the sum of ranks for group $i$, $n_i$ is the number of observations in group $i$, and $k$ is the number of groups used for assessment. A p-value of 1.0 from the Kruskal–Wallis test suggests that there is no statistically significant difference in the parameter distributions among the various age groups.
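The sketch below shows how these tests can be run with scipy.stats; the score arrays and the age-group split are illustrative placeholders, not the study data.

```python
# Paired t-test (Equation (5)), Wilcoxon signed-rank test, and Kruskal-Wallis
# test (Equation (6)) on synthetic pre/post pain scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.normal(6.3, 2.0, 32).clip(0, 10)      # pre-pain scores on a 0-10 scale
post = rng.normal(3.1, 1.9, 32).clip(0, 10)     # post-pain scores

t_stat, p_paired = stats.ttest_rel(pre, post)            # paired t-test
w_stat, p_wilcoxon = stats.wilcoxon(pre, post)           # non-parametric alternative

# Kruskal-Wallis across the three age groups, here applied to post-pain scores.
g_young, g_mid, g_old = post[:10], post[10:27], post[27:]
h_stat, p_kw = stats.kruskal(g_young, g_mid, g_old)

print(f"paired t: t={t_stat:.2f}, p={p_paired:.4g}")
print(f"Wilcoxon: W={w_stat:.1f}, p={p_wilcoxon:.4g}")
print(f"Kruskal-Wallis: H={h_stat:.2f}, p={p_kw:.4g}")
```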

3.3. System Usability Score Assessment

We evaluated our PoAna .v1 web application's system usability score for an interactive patient dashboard [31]. Participants completed the SUS after finishing one week of sessions. Each item contributes a score from 0 to 4, where 0 indicates strong disagreement and 4 indicates strong agreement, and the scores for each question are arranged as specified by the scale. Figure 10 displays the distribution of SUSs from all 32 participants.
In the assessment, our system achieved a mean SUS of 79.375, suggesting that the web application can deliver good usability; future studies with more diverse participants are needed to confirm this.
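A minimal sketch of the SUS computation follows, assuming each of the ten items has already been converted to a 0–4 contribution as described above, so a participant's score is the item sum scaled by 2.5 onto the 0–100 range; the responses shown are hypothetical.

```python
# SUS from ten per-item contributions in [0, 4]: sum * 2.5 gives a 0-100 score.
import numpy as np

def sus_score(item_contributions):
    return float(np.sum(item_contributions) * 2.5)

responses = [                            # hypothetical answers from three participants
    [3, 3, 4, 3, 3, 3, 4, 3, 3, 3],
    [4, 3, 3, 4, 3, 3, 3, 3, 4, 3],
    [2, 3, 3, 3, 2, 3, 3, 3, 3, 3],
]
scores = [sus_score(r) for r in responses]
print(scores, "mean:", np.mean(scores))  # [80.0, 82.5, 70.0] mean: 77.5
```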

3.4. Model Performance Metrics and Comparison of SOTA Methods

The system makes a prediction after each sequence of 30 frames of the test video and generates the corresponding feedback. Figure 11 shows the accuracy and loss performance on the split training and test data. We computed the model's performance metrics from the generated confusion matrices.
Maintaining posture synchronization in real-time pose-guided matching is challenging because different body components of the stance tend to overlap and, thus, fail to match. Figure 12 compares our proposed system with other SOTA models.
Most deep learning architectures use all of the image features, such as background, human clothing variations, and lighting conditions. The attention mechanism in human action recognition uses spatial and temporal features effectively to provide a high degree of accuracy in deep learning models [32,33]. However, such models face challenges including heightened computational complexity, elevated resource requirements, potential overfitting, and intricate implementation and tuning; these factors make them highly sensitive to appearance features and prone to misclassifying exercise poses. Skeleton-based approaches, which represent human actions as sequences of key points or joints, offer a robust alternative by focusing on motion dynamics rather than appearance data. Our proposed model has the advantage of reduced dimensionality, which provides a low computational cost compared to existing models and allows real-time processing on less powerful hardware. The proposed system's real-time processing speed is nonetheless limited; it processes the input video at eight frames per second (FPS). Table 7 illustrates how our model compares in terms of performance metrics with existing approaches.
We measure the proposed model's performance using metrics such as accuracy, precision, recall, F1-score, and stratified K-fold cross-validation to ensure the effective classification of rehabilitation exercises. Evaluating these quantitative measures captures the trade-off between correct and incorrect predictions and offers valuable insight into the model's performance. Equations (7)–(10) represent these metrics symbolically.
$\mathrm{Accuracy} = \frac{\sum_{i=1}^{N} TP_i}{\sum_{i=1}^{N} (TP_i + FP_i + FN_i)}$ (7)
$\mathrm{Precision}_i = \frac{TP_i}{TP_i + FP_i}$ (8)
$\mathrm{Recall}_i = \frac{TP_i}{TP_i + FN_i}$ (9)
$F1\text{-}score_i = \frac{2 \times \mathrm{Precision}_i \times \mathrm{Recall}_i}{\mathrm{Precision}_i + \mathrm{Recall}_i}$ (10)
We validate the effectiveness of the model’s performance by evaluating the stratified K-fold cross-validation accuracy. The process starts with partitioning the dataset. We partition the total number of samples (N) into K folds. In our study, we use K = 10 folds for the cross-validation. The average accuracy of cross-validation is determined using the following mathematical expression in Equation (11),
$\mathrm{Accuracy}_{CV} = \frac{1}{K} \sum_{i=1}^{K} \mathrm{Accuracy}_i$ (11)
The cross-validation accuracy evaluation reduces the variance compared to the train-test split evaluation and minimizes potential bias in the model performance estimate.
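A scikit-learn sketch of this 10-fold stratified evaluation is given below; build_model() is assumed to return the compiled LSTM from Section 2.4, and the epoch count is an assumption.

```python
# Stratified 10-fold cross-validation reporting the metrics of Equations (7)-(11).
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def cross_validate(X, y_int, build_model, k=10, epochs=50):
    """X: (samples, frames, features); y_int: integer class labels."""
    skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=42)
    n_classes = int(y_int.max()) + 1
    acc, prf = [], []
    for train_idx, test_idx in skf.split(X, y_int):
        model = build_model()                                  # fresh model per fold
        model.fit(X[train_idx], np.eye(n_classes)[y_int[train_idx]],
                  epochs=epochs, verbose=0)
        y_pred = np.argmax(model.predict(X[test_idx], verbose=0), axis=1)
        acc.append(accuracy_score(y_int[test_idx], y_pred))    # per-fold accuracy
        prf.append(precision_recall_fscore_support(            # Equations (8)-(10)
            y_int[test_idx], y_pred, average="macro", zero_division=0)[:3])
    precision, recall, f1 = np.mean(prf, axis=0)
    return {"cv_accuracy": float(np.mean(acc)),                # Equation (11)
            "precision": precision, "recall": recall, "f1": f1}

# Usage (with X and integer labels built as in Section 2.2):
# results = cross_validate(X, y_int, build_model)
```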

4. Discussion

During the preprocessing stage, we adjust the dimensions of images to keep the input size constant. Through experimentation with several approaches, including maintaining the aspect ratio and using interpolation algorithms, we devised a scaling technique that effectively balances feature retention and input dimension standardization. On the numerical pain scale, the results of the proposed system show a reduction in participants' pain levels of 3.3 points. Most of the users improved their performance by exercising correctly. The framework's deep learning model, which takes RGB frames as input, achieves a good cross-validation accuracy of 98.45%. The framework performs well on a CPU, with a model size of 77.29 KB and a computational cost of 1206 FLOPs, which in turn supports web-based action classification and feedback generation. The SUS also indicates that the system offers good real-time usability. Our system predicts the exercise pose across a range of lighting conditions.
RGB videos offer a cost-effective solution for video-based pose analysis applications. Technologies such as depth cameras and motion sensors provide more precise, comprehensive spatial positioning and three-dimensional movement, but these techniques are more expensive and demand processing power beyond a typical CPU. With CPU processing, our proposed system achieves reasonable accuracy in classifying exercise poses. However, the real-time processing speed of the proposed system is not yet sufficient. We utilized an optimized algorithm that includes data processing, code profiling, and asynchronous processing to reduce the computational complexity. Despite its many benefits, the system has some limitations; the web application achieves a processing rate of around eight frames per second (FPS) in a CPU environment, whereas on GPU systems it is expected to reach 30 FPS and perform well. Users' occlusion of body parts during exercise may result in inaccurate exercise pose predictions. Future investigations are required to develop lightweight adaptive deep-learning models and implement parallel processing that achieve high prediction accuracy with less time spent processing video feeds in real time.

5. Conclusions

The primary objective of this work is to analyze the significant difference in pain between pre- and post-treatment. The important features of this work are that it shows the overall performance of the exercises performed by each individual, that experts can visualize the sessions attended and compare the performance of each participant, and that in every session participants receive real-time feedback on the correctness of an exercise. Effective classification of exercise poses can be carried out with the proposed deep learning model on a CPU, and feedback on exercise poses enables users to perform exercises with precision. Some limitations, such as progress tracking, periodic updates of the exercise database, and processing time, still need to be addressed for adopting Society 5.0 in healthcare systems [38,39,40,41]. Technology-driven healthcare for smart cities is the future of system integration, and it requires keeping users highly motivated in their self-rehabilitation, for example through the integration of gaming technology [42,43,44]. To ensure the model's generalizability, we plan to apply the lightweight adaptive learning algorithm to a diverse range of populations of different age groups. We conclude that, for the healthcare of Society 5.0, our proposed system provides a step forward: a user-friendly self-rehabilitation tool to improve quality of life.

Author Contributions

Conceptualization, D.E. and V.P.; methodology, D.E. and V.P.; software, D.E.; validation, D.E. and V.P.; formal analysis, V.P.; investigation, D.E. and V.P.; data curation, D.E.; writing—original draft preparation, D.E. and V.P.; writing—review and editing, D.E.; visualization, D.E. and V.P.; supervision, V.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The datasets presented in this article are not readily available because the data are part of an ongoing study. Requests to access the datasets should be directed to the corresponding author.

Acknowledgments

The authors would like to thank the reviewers for their valuable suggestions which helped in improving the quality of this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Sandal, L.F.; Stochkendahl, M.J.; Svendsen, M.J.; Wood, K.; Øverås, C.K.; Nordstoga, A.L.; Villumsen, M.; Rasmussen, C.D.N.; Nicholl, B.; Cooper, K.; et al. An app-delivered self-management program for people with Low back pain: Protocol for the selfBACK randomized controlled trial. JMIR Res. Protoc. 2019, 8, e14720. [Google Scholar] [CrossRef] [PubMed]
  2. Ferreira, M.L.; de Luca, K.; Haile, L.M.; Steinmetz, J.D.; Culbreth, G.T.; Cross, M.; A Kopec, J.; Ferreira, P.H.; Blyth, F.M.; Buchbinder, R.; et al. Global, regional, and national burden of low back pain, 1990–2020, its attributable risk factors, and projections to 2050: A systematic analysis of the Global Burden of Disease Study 2021. Lancet Rheumatol. 2023, 5, e316–e329. [Google Scholar] [CrossRef] [PubMed]
  3. Kim, D.-W.; Park, J.E.; Kim, M.-J.; Byun, S.H.; Jung, C.I.; Jeong, H.M.; Woo, S.R.; Lee, K.H.; Lee, M.H.; Jung, J.-W.; et al. Automatic assessment of upper extremity function and mobile application for self-administered stroke rehabilitation. IEEE Trans. Neural Syst. Rehabil. Eng. 2024, 32, 652–661. [Google Scholar] [CrossRef] [PubMed]
  4. Liao, P.-H.; Chu, W. Exploring the impact of an instructional web-based healthcare app for relieving back pain from spinal compression fractures: An observational study. Multimed. Tools Appl. 2023, 83, 33295–33311. [Google Scholar] [CrossRef]
  5. Cunha, B.; Ferreira, R.; Sousa, A.S.P. Home-based rehabilitation of the shoulder using auxiliary systems and artificial intelligence: An overview. Sensors 2023, 23, 7100. [Google Scholar] [CrossRef]
  6. Yuan, H.; Chan, S.; Creagh, A.P.; Tong, C.; Acquah, A.; Clifton, D.A.; Doherty, A. Self-supervised learning for human activity recognition using 700,000 person-days of wearable data. npj Digit. Med. 2024, 7, 91. [Google Scholar] [CrossRef] [PubMed]
  7. Kulkarni, P.; Gawai, S.; Bhabad, S.; Patil, A.; Choudhari, S. Yoga pose recognition using deep learning. In Proceedings of the 2024 International Conference on Emerging Smart Computing and Informatics (ESCI), Maharashtra, India, 5–7 March 2024; pp. 1–6. [Google Scholar]
  8. Wang, K.; Peng, L.; You, M.; Deng, Q.; Li, J. Multicomponent supervised tele-rehabilitation versus home-based self-rehabilitation management after anterior cruciate ligament reconstruction: A study protocol for a randomized controlled trial. J. Orthop. Surg. Res. 2024, 19, 381. [Google Scholar] [CrossRef]
  9. Mishra, P.K.; Mihailidis, A.; Khan, S.S. Skeletal video anomaly detection using deep learning: Survey, challenges, and future directions. IEEE Trans. Emerg. Top. Comput. Intell. 2024, 8, 1073–1085. [Google Scholar] [CrossRef]
  10. Nhs.uk. Mechanical Low Back Pain. Available online: https://www.sussexcommunity.nhs.uk/patients-and-visitors/resources/patient-resources/mechanical-low-back-pain/rehabilitation-exercises (accessed on 22 July 2024).
  11. Chae, H.J.; Kim, J.B.; Park, G.; O’Sullivan, D.M.; Seo, J.; Park, J.J. An artificial intelligence exercise coaching mobile app: Development and randomized controlled trial to verify its effectiveness in posture correction. Interact. J. Med. Res. 2023, 12, e37604. [Google Scholar] [CrossRef]
  12. Ettefagh, A.; Roshan Fekr, A. Enhancing automated lower limb rehabilitation exercise task recognition through multi-sensor data fusion in tele-rehabilitation. BioMed. Eng. OnLine 2024, 23, 35. [Google Scholar] [CrossRef]
  13. Luna, A.; Casertano, L.; Timmerberg, J.; O’neil, M.; Machowsky, J.; Leu, C.-S.; Lin, J.; Fang, Z.; Douglas, W.; Agrawal, S. Artificial intelligence application versus physical therapist for squat evaluation: A randomized controlled trial. Sci. Rep. 2021, 11, 18109. [Google Scholar] [CrossRef] [PubMed]
  14. Park, J.; Chung, S.Y.; Park, J.H. Real-Time Exercise Feedback through a Convolutional Neural Network: A Machine Learning-Based Motion-Detecting Mobile Exercise Coaching Application. Yonsei Med. J. 2022, 63, S34–S42. [Google Scholar] [CrossRef] [PubMed]
  15. Marcuzzi, A.; Nordstoga, A.L.; Bach, K.; Aasdahl, L.; Nilsen, T.I.L.; Bardal, E.M.; Boldermo, N.; Bertheussen, G.F.; Marchand, G.H.; Gismervik, S.; et al. Effect of an Artificial Intelligence-Based Self-Management App on Musculoskeletal Health in Patients With Neck and/or Low Back Pain Referred to Specialist Care: A Randomized Clinical Trial. JAMA Netw. 2023, 6, e2320400. [Google Scholar] [CrossRef]
  16. Rasmussen, C.D.N.; Svendsen, M.J.; Wood, K.; Nicholl, B.I.; Mair, F.S.; Sandal, L.F.; Mork, P.J.; Søgaard, K.; Bach, K.; Stochkendahl, M.J. App-delivered self-management intervention trial selfback for people with low back pain: Protocol for implementation and process evaluation. JMIR Res. Protoc. 2020, 9, e20308. [Google Scholar] [CrossRef]
  17. Rughani, G.; Nilsen, T.I.L.; Wood, K.; Mair, F.S.; Hartvigsen, J.; Mork, P.J.; Nicholl, B.I. The selfBACK artificial intelligence-based smartphone app can improve low back pain outcome even in patients with high levels of depression or stress. Eur. J. Pain 2023, 27, 568–579. [Google Scholar] [CrossRef] [PubMed]
  18. Khorasani, M.; Abdou, M.; Hernández Fernández, J. Web Application Development with Streamlit: Develop and Deploy Secure and Scalable Web Applications to the Cloud Using a Pure Python Framework; Apress: Berkeley, CA, USA, 2022. [Google Scholar]
  19. Kitagawa, K.; Maki, S.; Furuya, T.; Maruyama, J.; Toki, Y.; Ohtori, S. Development of a web application for predicting Asia Impairment Scale at discharge in spinal cord injury patients: A machine learning approach. N. Am. Spine Soc. J. (NASSJ) 2024, 18, 100345. [Google Scholar] [CrossRef]
  20. Joshitha, K.L.; Madhanraj, P.; Roshan, B.R.; Prakash, G.; Ram, V.S.M. AI-FIT COACH-Revolutionizing Personal Fitness with Pose Detection, Correction and Smart Guidance. In Proceedings of the 2024 International Conference on Communication, Computing and Internet of Things (IC3IoT), Chennai, India, 17–18 April 2024; pp. 1–5. [Google Scholar] [CrossRef]
  21. Luangaphirom, T.; Lueprasert, S.; Kaewvichit, P.; Boonphotsiri, S.; Burapasikarin, T.; Siriborvornratanakul, T. Real-time weight training counting and correction using MediaPipe. Adv. Comp. Int. 2024, 4, 3. [Google Scholar] [CrossRef]
  22. Bijalwan, V.; Semwal, V.B.; Singh, G.; Mandal, T.K. HDL-PSR: Modelling spatio-temporal features using hybrid deep learning approach for post-stroke rehabilitation. Neural Process. Lett. 2023, 55, 279–298. [Google Scholar] [CrossRef]
  23. Deoskar, A.; Parab, S.; Patil, S. Personalized physio-care system using Ai. In Handbook of Research on Artificial Intelligence and Soft Computing Techniques in Personalized Healthcare Services; Apple Academic Press: Burlington, ON, Canada, 2023; pp. 229–258. [Google Scholar]
  24. Chen, W.; Lim, L.J.R.; Lim, R.Q.R.; Yi, Z.; Huang, J.; He, J.; Yang, G.; Liu, B. Artificial intelligence powered advancements in upper extremity joint MRI: A review. Heliyon 2024, 10, e28731. [Google Scholar] [CrossRef]
  25. Pain Assessment Scales. Maineddc.org. Available online: https://www.maineddc.org/images/PDFs/Pain_Assessment_Scales.pdf (accessed on 22 July 2024).
  26. Francisco, J.A.; Rodrigues, P.S. Computer vision based on a modular neural network for automatic assessment of physical therapy rehabilitation activities. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 2174–2183. [Google Scholar] [CrossRef]
  27. Rangari, T.; Kumar, S.; Roy, P.P.; Dogra, D.P.; Kim, B.G. Video based exercise recognition and correct pose detection. Multimed. Tools Appl. 2022, 81, 30267–30282. [Google Scholar] [CrossRef]
  28. Zheng, H.; Zhang, H.; Zhang, H. Design of teaching system of physical yoga course in colleges and universities based on computer network. Secur. Commun. Netw. 2022, 2022, 6591194. [Google Scholar] [CrossRef]
  29. Cai, H. Application of intelligent real-time image processing in fitness motion detection under internet of things. J. Supercomput. 2022, 78, 7788–7804. [Google Scholar] [CrossRef]
  30. Qiu, Y.; Wang, J.; Jin, Z.; Chen, H.; Zhang, M.; Guo, L. Pose-guided matching based on deep learning for assessing quality of action on rehabilitation training. Biomed. Signal Process. Control 2022, 72, 103323. [Google Scholar] [CrossRef]
  31. Brooke, J. SUS-A Quick and Dirty Usability Scale. Ahrq.gov. Available online: https://digital.ahrq.gov/sites/default/files/docs/survey/systemusabilityscale%2528sus%2529_comp%255B1%255D.pdf (accessed on 22 July 2024).
  32. Shakhnoza, M.; Sabina, U.; Sevara, M.; Cho, Y.-I. Novel Video Surveillance-Based Fire and Smoke Classification Using Attentional Feature Map in Capsule Networks. Sensors 2022, 22, 98. [Google Scholar] [CrossRef]
  33. Muksimova, S.; Mardieva, S.; Cho, Y.-I. Deep Encoder-Decoder Network-Based Wildfire Segmentation Using Drone Images in Real-Time. Remote Sens. 2022, 14, 6302. [Google Scholar] [CrossRef]
  34. Sharma, A.; Singh, R. ConvST-LSTM-Net: Convolutional Spatiotemporal LSTM Networks for Skeleton-Based Human Action Recognition; Springer: Berlin/Heidelberg, Germany, 2023. [Google Scholar]
  35. Srivastava, R.P.; Umrao, L.S.; Yadav, R.S. Real-time yoga pose classification with 3-D Pose Estimation Model with LSTM. Multimed. Tools Appl. 2023, 83, 33019–33030. [Google Scholar] [CrossRef]
  36. Thoutam, V.A.; Srivastava, A.; Badal, T.; Mishra, V.K.; Sinha, G.R.; Sakalle, A.; Bhardwaj, H.; Raj, M. Yoga pose estimation and feedback generation using Deep Learning. Comput. Intell. Neurosci. 2022, 2022, 4311350. [Google Scholar] [CrossRef]
  37. Swain, D.; Satapathy, S.; Acharya, B.; Shukla, M.; Gerogiannis, V.C.; Kanavos, A.; Giakovis, D. Deep Learning Models for Yoga Pose Monitoring. Algorithms 2021, 15, 403. [Google Scholar] [CrossRef]
  38. Tuppad, A.; Patil, S.D. Super-Smart Healthcare System in Society 5.0. In Society 5.0: Smart Future Towards Enhancing the Quality of Society; Springer Nature: Singapore, 2022; pp. 209–227. [Google Scholar]
  39. Dlamini, Z.; Miya, T.V.; Hull, R.; Molefi, T.; Khanyile, R.; de Vasconcellos, J.F. Society 5.0: Realizing next-generation healthcare. In Society 5.0 and Next Generation Healthcare; Springer: Cham, Switzerland, 2023; pp. 1–30. [Google Scholar]
  40. Ciasullo, M.V.; Orciuoli, F.; Douglas, A.; Palumbo, R. Putting Health 4.0 at the service of Society 5.0: Exploratory insights from a pilot study. Socioecon. Plann. Sci. 2022, 80, 101163. [Google Scholar] [CrossRef]
  41. The Future Healthcare Technologies: A Roadmap to Society 5.0. Springerprofessional.de. Available online: https://www.springerprofessional.de/en/the-future-healthcare-technologies-a-roadmap-to-society-5-0/20208258 (accessed on 22 July 2024).
  42. Ekambaram, D.; Ponnusamy, V. AI-assisted physical therapy for post-injury rehabilitation: Current state of the art. IEIE Trans. Smart Process. Comput. 2023, 12, 234–242. [Google Scholar] [CrossRef]
  43. Gajardo Sánchez, A.D.; Murillo-Zamorano, L.R.; López-Sánchez, J.; Bueno-Muñoz, C. Gamification in Health Care Management: Systematic Review of the Literature and Research Agenda. SAGE Open 2023, 13, 21582440231218834. [Google Scholar] [CrossRef]
  44. Likarenko, Y. Gamification in Healthcare: Use Cases, Trends, and Challenges. Uptech. Team. Available online: https://www.uptech.team/blog/gamification-in-healthcare (accessed on 22 July 2024).
Figure 1. The flow of participants using our proposed self-rehabilitation web application.
Figure 2. Process flow of classification model and feedback generation.
Figure 3. Found through MediaPipe: (a) 33 key points of skeletal information, and (b) 12 key points that are considered for the collection of angle features.
Figure 4. Process flow structure of the LSTM network architecture.
Figure 5. PoAna .v1 web application’s architecture.
Figure 6. The operations of PoAna .v1, the application depicted here, are as follows: (a) the home page screen displays the exercises that can be analyzed using this app; (b) once the user logs in, they can select the start task button when they are ready to perform tasks; and (c) consumers can read the instructions before performing their exercises. We conducted this research in Tamil Nadu, India. We display the instructions in Tamil to ensure users comprehend and complete the exercise flawlessly; (d) Additionally, individuals can visualize the feedback on their performance, regardless of correctness or incorrectness.
Figure 7. (a) The page for experts is displayed, providing general information about the participant’s average age, weight, and BMI; (b) a comparison of the daily sessions each user attends is displayed; (c) experts can visualize the performance analysis of users by selecting the users in this window; (d) specialists can see the specific exercise comparison of various users in a single plot.
Figure 8. Graphical representation of a comparison of different age groups’ exercise performance.
Figure 9. Pre-pain and Post-pain assessments of each question score and mean score.
Figure 10. Distribution of SUSs for 32 participants.
Figure 11. Performance metrics of the classification model: (a) Training accuracy and loss vs epochs; (b) Testing accuracy and loss vs epochs; (c) Confusion matrix for training data; (d) Confusion matrix for testing data.
Figure 12. Accuracy comparison of SOTA models in [11,22,26,27,28,29,30] with the proposed system.
Table 1. Participants’ anthropometric parameters.
Parameter | Group 1 | Group 2 | Group 3
Age | More than 55 | Between ≥25 and ≤55 | Between ≥17 and <25
Gender | Male: 4; Female: 1 | Male: 11; Female: 6 | Male: 6; Female: 4
Mean Height (cm) | 156.8 ± 3.03 | 160 ± 6.97 | 162.4 ± 5.81
Mean Weight (kg) | 64.8 ± 14.11 | 69.66 ± 7.85 | 59.56 ± 7.79
BMI (kg/m²) | 26.26 ± 5.19 | 27.34 ± 3.70 | 22.56 ± 2.76
LBP level: No pain | 0 | 0 | 0
LBP level: Mild pain | 0 | 1 | 2
LBP level: Moderate pain | 1 | 10 | 7
LBP level: Severe pain | 4 | 7 | 0
Table 2. Details of the accepted ranges of angles to show the feedback on various exercises.
Name of the Exercise | Accepted Range of Angles for Important Key Points | Sample Performed Correctly
Arm raise | Left Elbow Angle = 175° ± 2°; Left Shoulder Angle = 177° ± 2° (or) Right Elbow Angle = 175° ± 2°; Right Shoulder Angle = 177° ± 2° | (sample image)
Bridge pose | Left Knee Angle = 79° ± 3°; Right Knee Angle = 85° ± 3°; Left Hip Angle = 158° ± 3°; Right Hip Angle = 130° ± 3° | (sample image)
Cat-cow pose | Left Shoulder Angle = 50° ± 3°; Right Shoulder Angle = 40° ± 3°; Left Hip Angle = 78° ± 5°; Right Hip Angle = 92° ± 10° | (sample image)
Child pose | Left Elbow Angle = 166° ± 2°; Right Elbow Angle = 150° ± 6°; Left Shoulder Angle = 11° ± 4°; Right Shoulder Angle = 10° ± 4°; Left Hip Angle = 37° ± 6°; Right Hip Angle = 32° ± 10°; Left Knee Angle = 56° ± 8°; Right Knee Angle = 62° ± 10° | (sample image)
Knee hug | Left Elbow Angle = 165° ± 10°; Right Elbow Angle = 165° ± 10°; Left Shoulder Angle = 10° ± 5°; Right Shoulder Angle = 10° ± 5°; Left Hip Angle = 50° ± 5°; Right Hip Angle = 50° ± 5°; Left Knee Angle = 120° ± 8°; Right Knee Angle = 120° ± 8° | (sample image)
Knee roll | Right-side roll: Left Knee Angle = 85° ± 6°; Right Knee Angle = 84° ± 5°; Left Hip Angle = 131° ± 8°; Right Hip Angle = 120° ± 13°. Left-side roll: Left Knee Angle = 20° ± 22°; Right Knee Angle = 17° ± 24°; Left Hip Angle = 164° ± 24°; Right Hip Angle = 150° ± 23° | (sample images)
Lumbar flexion | Left Elbow Angle = 170° ± 10°; Right Elbow Angle = 150° ± 10°; Left Shoulder Angle = 40° ± 10°; Right Shoulder Angle = 30° ± 10°; Left Hip Angle = 80° ± 15°; Right Hip Angle = 85° ± 15°; Left Knee Angle = 5° ± 2°; Right Knee Angle = 5° ± 2° | (sample image)
Side bend | Right side bend: Left Hip Angle = 167° ± 4°; Right Hip Angle = 155° ± 5°; Left Shoulder Angle = 15° ± 2°; Right Shoulder Angle = 32° ± 5°. Left side bend: Left Hip Angle = 146° ± 5°; Right Hip Angle = 164° ± 2°; Left Shoulder Angle = 60° ± 10°; Right Shoulder Angle = 15° ± 2° | (sample images)
Table 3. Detailed information about the tracked skeletal joints, the set of derived characteristics, and the activity class labels.
S. No. | Parameters | Description
1 | Tracked skeletal joints | Left_shoulder, right_shoulder, left_elbow, right_elbow, left_wrist, right_wrist, left_hip, right_hip, left_knee, right_knee, left_ankle, right_ankle
2 | Set of derived characteristics | Left elbow angle (LE_angle), Left shoulder angle (LS_angle), Left knee angle (LK_angle), Left wrist angle (LW_angle), Right shoulder angle (RS_angle), Right elbow angle (RE_angle), Right knee angle (RK_angle), Right hip angle (RH_angle), Right wrist angle (RW_angle)
3 | Activity class labels | Arm_raise, Bridge_pose, Cat_cow, Child_pose, Knee_hug, Knee_roll, Lumbar_flexion, Side_bend
Table 4. Comparison of various web application frameworks’ features for data science and machine learning environments.
Features | Streamlit | Dash | Flask | Voila
Design complexity | For prototyping, design with minimal code. | Compared to Streamlit, the design prototype requires more initial setup, but it is simple to use. | For routing and template creation, detailed coding is required. | This framework contains limited tools for design prototyping.
Integration with Python | Seamless and built specifically for Python. | Seamless and built specifically for Python. | Seamless, general-purpose, and flexible for various applications. | Suitable only for a limited number of applications.
Customization | Focuses on rapid deployment and simplicity. | Offers more flexibility with design and interaction. | Complete control over app design but requires more coding. | Not as flexible as Streamlit.
Performance | Allows for quick deployment and handles small and data-intensive applications. | Handles complex and data-driven apps well. | Highly efficient and capable of handling heavy loads. | Works well for interactive visualizations but necessitates optimization for large-scale applications.
Use cases | Rapid prototyping, simple dashboards, data exploration, and model demos. | Complex dashboards, data visualization, and enterprise apps requiring interactivity. | Full-scale web applications, APIs, and intricate data-driven websites. | Similar to the Streamlit 1.38.0 framework.
Table 5. Overall performance assessment procedure for LBP exercises.
Name of the Exercise | Procedure | Sets Required to Complete the Task | Time Duration (s)
Arm raise | Arrange yourself with your feet shoulder width apart, your arms by your sides, and your palms facing inward. Position your arm vertically in front of you while maintaining its straightness. Stretch to the maximum extent to align your arm with your head before descending to the initial position. | 3–5 sets | 30 s
Bridge pose | Lie on your back, with your knees flexed and your feet supported on the floor. Contract your lower abdominal muscles and exhale through your buttocks. To create a ‘Bridge’ with your body, elevate your buttocks off the floor or bed. | 3–5 sets | 30 s
Cat-cow pose | To begin, flex the mid-back upwards towards the ceiling. Adjust the position by rotating the pelvis in the opposite direction. Next, it is necessary to enhance the arch in the lower back. | 3–5 sets | 30 s
Child pose | Begin by assuming a kneeling posture. Next, lower your upper body to the ground until your forehead touches the floor. Position your arm above your head and flatten your palms on the floor. | 3–5 sets | 30 s
Knee hug pose | Position yourself on your back, flex, and elevate your knees. Next, use your hands to pull your knees back towards your chest. | 3–5 sets | 30 s
Knee roll | Reclining on your back with your knees flexed, gradually lower your legs to one side and maintain this position for 5 s. Return to the central position and alternate sides. You should feel your back elongate opposite your legs’ lateral tilt. | 3–5 sets | 30 s
Lumbar flexion pose | Place your feet shoulder-width apart. Proceed gradually by rolling or bending forward and moving your hands down your legs, towards your feet, until you perceive a sensation of elongation in your lumbar region. | 3–5 sets | 15 s
Side bend | Stand with your feet spaced shoulder width apart and your hands resting beside you. Gradually glide one hand down your leg, extending to your knee. | 3–5 sets | 30 s
Table 6. Statistical test analysis of variation in parameter distribution among different age groups.
Parameters | 17–24 Age Group (p-Value, Significance) | 25–55 Age Group (p-Value, Significance) | More than 55 Age Group (p-Value, Significance)
Question 1 | 6.714784 × 10⁻³, True | 7.629395 × 10⁻⁶, True | 6.250000 × 10⁻², False
Question 2 | 1.734969 × 10⁻², True | 3.814697 × 10⁻⁶, True | 2.548148 × 10⁻², True
Question 3 | 1.693843 × 10⁻³, True | 4.341638 × 10⁻⁶, True | 6.250000 × 10⁻², False
Question 4 | 1.307971 × 10⁻¹, False | 1.917252 × 10⁻³, True | 3.122981 × 10⁻², True
Question 5 | 1.694735 × 10⁻², True | 1.355520 × 10⁻³, True | 3.410942 × 10⁻², True
Question 6 | 3.906250 × 10⁻², True | 7.123216 × 10⁻⁴, True | 6.250000 × 10⁻², False
Question 7 | 7.812500 × 10⁻², False | 6.133083 × 10⁻⁴, True | 6.250000 × 10⁻², False
Question 8 | 1.953125 × 10⁻¹, False | 4.766092 × 10⁻⁴, True | 8.970902 × 10⁻³, True
Question 9 | 2.343750 × 10⁻², True | 3.557261 × 10⁻⁴, True | 1.893498 × 10⁻², True
Question 10 | 1.108281 × 10⁻¹, False | 1.411438 × 10⁻³, True | 8.966252 × 10⁻², False
Question 11 | 1.775592 × 10⁻², True | 3.814697 × 10⁻⁶, True | 4.005739 × 10⁻², True
Question 12 | 7.812500 × 10⁻³, True | 3.814697 × 10⁻⁶, True | 6.250000 × 10⁻², False
Question 13 | 2.343750 × 10⁻², True | 1.092991 × 10⁻⁵, True | 1.778078 × 10⁻¹, False
Question 14 | 1.694735 × 10⁻², True | 1.740782 × 10⁻⁴, True | 1.347019 × 10⁻¹, False
Question 15 | 5.148875 × 10⁻⁴, True | 1.596259 × 10⁻⁷, True | 4.742066 × 10⁻², True
Question 16 | 2.770785 × 10⁻², True | 9.536743 × 10⁻⁵, True | 6.250000 × 10⁻², False
Question 17 | 7.812500 × 10⁻³, True | 1.859963 × 10⁻⁴, True | 3.267792 × 10⁻², True
Question 18 | 1.775592 × 10⁻², True | 1.144409 × 10⁻⁵, True | 6.250000 × 10⁻², False
Question 19 | 2.343750 × 10⁻², True | 6.485782 × 10⁻⁷, True | 2.047587 × 10⁻², True
Question 20 | 2.343750 × 10⁻², True | 1.883172 × 10⁻⁴, True | 6.250000 × 10⁻², False
Table 7. Comparison of performance metrics of our proposed model with those of existing models.
Performance Metrics | CV_MNN [26] | Rang_LSTM [27] | Conv ST_LSTM [34] | RPS_LSTM [35] | MLP [36] | CNN_LSTM [37] | Proposed Model
Accuracy | 46.2 | 75.6 | 52.7 | 95.1 | 29.9 | 73.4 | 99.6
Precision | 46.0 | 75.3 | 52.3 | 95 | 29.6 | 73.2 | 99.2
Recall | 46.1 | 75.3 | 52.3 | 95.1 | 29.6 | 73.2 | 99.2
F1-score | 46.1 | 75.5 | 52.6 | 95.1 | 29.9 | 73.4 | 99.6
10-fold cross-validation accuracy | 44.6 | 74.3 | 50.9 | 93.8 | 25.6 | 71.8 | 98.5
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
