Article

Machine Learning Derived Lifting Techniques and Pain Self-Efficacy in People with Chronic Low Back Pain

1 School of Science, Computing and Engineering Technologies, Swinburne University of Technology, Hawthorn, VIC 3122, Australia
2 School of Health Sciences, Swinburne University of Technology, Hawthorn, VIC 3122, Australia
3 School of Kinesiology, Shanghai University of Sports, Shanghai 200438, China
4 Centre for Health, Exercise and Sports Medicine, Department of Physiotherapy, The University of Melbourne, Melbourne, VIC 3010, Australia
* Author to whom correspondence should be addressed.
Sensors 2022, 22(17), 6694; https://doi.org/10.3390/s22176694
Submission received: 21 July 2022 / Revised: 16 August 2022 / Accepted: 31 August 2022 / Published: 4 September 2022
(This article belongs to the Special Issue Wearable Sensors Applied in Movement Analysis)

Abstract:
This paper proposes an innovative methodology for determining how many lifting techniques people with chronic low back pain (CLBP) demonstrate, using camera data collected from 115 participants. The system employs a feature extraction algorithm to calculate the knee, trunk and hip range of motion in the sagittal plane; two clustering approaches (Ward's method, and a combination of K-means and ensemble clustering) as classification algorithms; and a Bayesian neural network to validate the results of both clustering approaches. The classification results and effect sizes show that Ward clustering is the optimal method: precision and recall are above 90% for all clusters, and the overall accuracy of the Bayesian neural network is 97.9%. The statistical analysis reported a significant difference in the range of motion of the knee, hip and trunk between clusters, F(9, 1136) = 195.67, p < 0.0001. The results of this study suggest that there are four different lifting techniques in people with CLBP. Additionally, although the clusters demonstrated similar pain levels, the cluster that uses the least trunk and the most knee movement demonstrates the lowest pain self-efficacy.

1. Introduction

Chronic low back pain (CLBP) is a multifactorial condition that is the leading cause of activity limitations and work absenteeism, affecting 540 million people worldwide [1]. Adaptation in trunk muscle control is commonly observed in people with CLBP, which is associated with changes in trunk muscle properties [2] and delayed reaction time in response to external perturbations [3]. These adaptations could be reflected in trunk and lower limb movement variability, especially during lifting [4].
Lifting is a complex activity that requires coordination of the lower limbs (e.g., hip and knee) and the trunk [4]. In simplistic terms, lifting techniques could be classified as a stoop lift (i.e., lifting with a flexed back) or a leg lift (i.e., lifting with hips and knees bent and back straight). Although lifting with the legs was traditionally considered the safer technique, this has been disputed in several studies [5,6]. Lifting movements can vary considerably between individuals depending on factors such as hamstring tightness and movement speed, both of which have been demonstrated in people with CLBP or identified as risk factors for CLBP [7,8,9]. Therefore, dichotomous classification of lifting techniques may not be appropriate in people with CLBP. It is currently unknown how many different lifting techniques people with CLBP would demonstrate. This information may guide clinicians in identifying and individualizing target areas for rehabilitation for people with CLBP.
Moreover, it is well established that CLBP is associated with changes in psychosocial domain such as pain self-efficacy [10]. Pain self-efficacy is defined as the belief in one’s ability to perform painful or perceived painful tasks or movements to achieve a desirable outcome. Pain self-efficacy is typically measured using a Pain Self-Efficacy Questionnaire (PSEQ) [11]. In people with CLBP, low pain self-efficacy is associated with higher pain intensity, disability, and fear-avoidance beliefs [12,13,14]. Therefore, pain self-efficacy is an important attribute to be assessed in people with CLBP.
Recent technology using the computational intelligence technique for classification [15] may assist with the identification of different lifting techniques in people with CLBP. In principle, there are three main steps for activity recognition: (i) data capture by appropriate sensor; (ii) segmentation of the captured data and feature extraction; (iii) recognition of the activity using appropriate classification/identification techniques.
Machine learning, a category of artificial intelligence, is widely used for classification. In general, there are two types of machine learning: supervised and unsupervised. In supervised machine learning, the data set is labelled so that each input is assigned a pre-set correct output [16]. By contrast, unsupervised machine learning techniques utilize unlabelled data sets to identify patterns, which are then clustered into different groups [16]. In medical and health applications, clustering algorithms have been applied to cluster patient records to identify trends in health care [17,18], detect sets of co-expressed genes [19], categorize patients from medical records [20], and identify patient subgroups from their symptoms [21]. It is unknown whether different movement patterns could be identified in people with CLBP using unsupervised machine learning techniques or clustering algorithms.
Thus, this study aims to present an innovative methodology for identifying different lifting movement patterns in people with CLBP using unsupervised machine learning techniques and range of motion. The main contribution of this paper is therefore the novel application of unsupervised machine learning techniques to lifting movement pattern classification in people with CLBP. We hypothesized that people with CLBP would demonstrate a variety of lifting techniques when clustered on integrated trunk, hip and knee movement.

2. Materials and Methods

The components for the camera-based cluster classification system introduced in this paper are presented in Figure 1.

2.1. Participants

One hundred and fifteen males and females (n_females = 57) with CLBP, aged 25 to 60 years, were recruited from a large physiotherapy clinic in Melbourne (VIC, Australia). This study was approved by the University of Melbourne Behavioural and Social Sciences Human Ethics Sub-Committee. Participants were included in the study if they had reported pain between the gluteal fold and the twelfth thoracic vertebra (T12) level, with or without leg pain, that had persisted for >3 months. Participants were excluded from the study if they demonstrated overt neurological signs, such as muscle weakness and loss of lower limb reflexes, had undergone spine or lower limb surgery, had been diagnosed with active inflammatory conditions such as rheumatoid arthritis, had been diagnosed with cancer, or did not comprehend written or verbal English. The participants in this study had not received any physiotherapy intervention during their assessment of lifting technique. All participants completed assessments of pain self-efficacy using the PSEQ [10].

2.2. Data Collection

The lifting task protocol has been published previously [4,22]. In summary, participants began the test standing upright, barefooted, with their arms by their sides. Participants were instructed to bend down and lift an 8 kg weight (i.e., an average weight of groceries [23]) placed between their feet with both hands from the ground up to the level of their abdomen. Participants were instructed to utilize a lifting technique of their choosing. Eight lifting trials were performed, with the first 2 trials serving as practice trials, hence excluded from data analysis. The sequence of consecutive actions during the lifting task is summarized and presented in Figure 2.
Kinematic data were collected using non-reflective markers placed on the participants’ skin to mark the head, trunk, pelvis, upper and lower limbs [22]. A 12-camera motion analysis system (Optitrack Flex 13, NaturalPoint, Corvallis, OR, USA) with 120 Hz sampling rate was utilized to provide the three-dimensional recording of anatomical landmarks. Kinematic data were grouped, named, cleaned and gap-filled using Optitrack Motive software (NaturalPoint, Corvallis, OR, USA). The data were then passed through to a custom-written analysis pipeline of Visual3D v5.01.6 (C-Motion, Inc., Germantown, MD, USA). Angular displacement and velocity data of different joints in all planes were derived using custom-written software (LabVIEW 2009, National Instruments).

2.3. Pre-Processing and Features Extraction

The angular displacements of the trunk, hip and knee joints during lifting were used for analysis and served as input to the machine learning algorithms.
Joint range of motion was chosen to reduce the complex data to efficient features. The joint range is calculated as the difference between the maximum and minimum values of that joint's angular displacement:
Range of motion (ROM) = Max(θ) − Min(θ)
where Max(θ) and Min(θ) are the maximum and minimum values of the joint's angular displacement, respectively.
One perspective on inter-joint coordination in manual lifting is a distal-to-proximal pattern of extension of the knee, hip, and lumbar vertebral joints [24]. In addition, the movements of the knee, hip and lumbar spine play an important role in achieving the lifting task and generating different types of lifting techniques. From the collected data, the range of motion of the trunk, hip and knee in the sagittal plane was extracted for further analysis. A between-side average value was used for the knee and hip, as there were no statistically significant ROM differences between the left and right sides.
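As a minimal illustration, the ROM feature extraction and the between-side averaging described above can be sketched in Python (the study's own pipeline used Matlab and Visual3D; the function names here are illustrative):

```python
def range_of_motion(angles):
    """ROM of one joint: maximum minus minimum angular displacement (Eq. 1)."""
    return max(angles) - min(angles)

def lift_features(trunk, left_hip, right_hip, left_knee, right_knee):
    """Build the three-feature vector [trunk ROM, hip ROM, knee ROM] used for
    clustering. Hip and knee use the between-side average, as in the paper."""
    hip = (range_of_motion(left_hip) + range_of_motion(right_hip)) / 2
    knee = (range_of_motion(left_knee) + range_of_motion(right_knee)) / 2
    return [range_of_motion(trunk), hip, knee]
```

Each lifting trial thus reduces to a single three-dimensional point, which keeps the clustering step simple and interpretable.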

2.4. Partitional Clustering

Clustering is one of the common techniques used to generate homogeneous groups of objects [25]. From the provided data points, all points that are similar and closely resemble each other are placed into the same cluster [26]. Partitional clustering is one of the most popular clustering approaches [27,28,29]; in partitional clustering, data points are separated into a predetermined number of clusters without a hierarchical structure.
The K-means clustering algorithm is the most commonly used partitional clustering method [30]. In the K-means algorithm, h clusters are generated so that the distance between the points within a cluster and its own centroid is less than the distance to the centroids of the other clusters. The algorithm begins with the selection of h points as centroids. All points are then allocated to the closest centroid, resulting in the formation of h clusters. The average of each cluster's points is then used to generate a new centroid, and these new centroids define the new clusters. When the centroids remain unchanged between iterations, the K-means algorithm terminates. K-means clustering has certain advantages, such as low space and time complexity, and optimal results for equal-sized, globular and well-separated clusters [30]. However, the algorithm is sensitive to outliers and noise, and a poor initial choice of centroids in the partitioning process can produce increasingly poor results. In this study, the K-means clustering algorithm was implemented using the kmeans function in Matlab.
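The assignment and update steps described above can be sketched as a minimal Lloyd's-algorithm implementation in Python (illustrative only; the study used Matlab's kmeans, and the data layout here is hypothetical):

```python
import random

def kmeans(points, h, iters=100, seed=0):
    """Minimal Lloyd's algorithm: pick h initial centroids at random, then
    alternate (1) assigning each point to its nearest centroid and
    (2) recomputing each centroid as its cluster mean, until the centroids
    no longer change. Points are tuples of floats."""
    rng = random.Random(seed)
    centroids = rng.sample(points, h)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: index of the nearest centroid for each point.
        labels = [min(range(h),
                      key=lambda c: sum((p - q) ** 2
                                        for p, q in zip(pt, centroids[c])))
                  for pt in points]
        # Update step: each centroid becomes the mean of its assigned points.
        new = []
        for c in range(h):
            members = [pt for pt, l in zip(points, labels) if l == c]
            new.append(tuple(sum(x) / len(members) for x in zip(*members))
                       if members else centroids[c])
        if new == centroids:  # converged: centroids unchanged
            break
        centroids = new
    return labels, centroids
```

The sensitivity to initialization noted above is visible here: a different `seed` can change which local optimum the loop settles into, which is exactly why the study reran K-means many times to build base clusterings.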

2.5. Ensemble Clustering

In recent years, ensemble clustering has been widely used to improve the robustness and quality of the results of clustering algorithms [31,32,33,34]. In ensemble clustering, multiple results from different clustering algorithms are combined into final clusters without retrieving the features or base-algorithm information. The only requirement is the base clustering information rather than the data itself, which is useful for dealing with privacy concerns and knowledge reuse [35].
The ensemble clustering algorithm consists of two main stages: diversity and the consensus function. In the diversity stage, the data set is processed with a single clustering algorithm under several initializations, or with multiple standard clustering algorithms; the results of this stage are referred to as base clusterings. Afterwards, the consensus function combines the base clustering results to produce the final consensus solution. There are different approaches to the consensus function, such as constructing a co-association matrix or hypergraph partitioning. A key advantage of the co-association matrix is that the number of clusters in the consensus partition does not need to be specified, whereas hypergraph partitioning requires this information [30]. In this research, however, the cluster number is investigated explicitly and used as an input to hypergraph partitioning, so a co-association matrix would be unsuitable. Thus, hypergraph partitioning was chosen as the consensus function in this research.
In a hypergraph, there are two main components: hyperedges, which represent clusters, and vertices, which represent the corresponding samples or points. A clustering is represented as a label vector ϑ_t. The consensus function combines r label vectors ϑ_1, ϑ_2, …, ϑ_r, where r is the number of clusterings, into a single label vector ϑ. The consensus function T: V^(v×r) → V^v maps the set of clusterings to an integrated clustering, T: {ϑ_t | t ∈ {1, 2, …, r}} → ϑ. The label vector ϑ_t is represented by a binary membership indicator matrix L_t, in which each cluster is specified by a column. If a row corresponds to an object with a known label, its entry in L_t for that cluster equals 1; if the label is unknown, all entries of the row equal 0. The matrix L = (L_1 L_2 … L_r), with v vertices and a = Σ_{e=1..r} g_e hyperedges, serves as the hypergraph adjacency matrix.
There are three algorithms in hypergraph methods: Cluster-based Similarity Partitioning Algorithm (CSPA), HyperGraph Partitioning Algorithm (HGPA) and Meta-Clustering Algorithm (MCLA).
In the Cluster-based Similarity Partitioning Algorithm, a clustering can be used to generate a measure of pairwise similarity because it captures the relationships among the objects that reside within the same cluster. The fraction of the clusterings in which two objects occur in the same cluster is given by the entries of B, which can be computed in one sparse matrix multiplication, B = (1/r) L Lᵀ, where L is the indicator matrix and Lᵀ its transpose [35]. This similarity matrix is then used to re-cluster the objects with any suitable similarity-based clustering technique.
The HyperGraph Partitioning Algorithm (HGPA) partitions the hypergraph by cutting the smallest possible number of hyperedges. All hyperedges are weighted equally, as are all vertices. The partitions are created using the minimal cut technique, which divides the data into unconnected components of roughly equal size. For these partitions, Han et al. (1997) employed the HMETIS hypergraph partitioning package [36]. In contrast to CSPA, which considers local piecewise similarity, HGPA considers only the comparatively global links between items across partitions. Furthermore, HMETIS has a tendency to produce a final partition in which all clusters are nearly the same size.
MCLA integration deals with cluster correspondence: it finds and consolidates groups of clusters, transforming them into meta-clusters. Constructing the meta-graph, computing the meta-clusters, and computing the clusters of the objects are the three key steps of this method for finding the final clusters. The hyperedges J_e, e = 1, 2, …, a, are the meta-vertices, and the edge weights of the graph are proportional to the similarity between vertices. Matching labels can be identified by partitioning the meta-graph into o balanced meta-clusters, with each vertex weighted proportionally to the size of the cluster it belongs to. Balancing ensures that the sum of vertex weights is approximately equal within each meta-cluster. To cluster the J vectors, the graph partitioning package METIS is used at this stage. Each vertex in the meta-graph represents a distinct cluster label; as a result, a meta-cluster denotes a collection of corresponding labels. For each of the o meta-clusters, the hyperedges are collapsed into a single meta-hyperedge. An object is assigned to the meta-cluster with the highest entry in its association vector, with ties broken in an ad hoc manner.
For this research, ensemble clustering is used to improve the robustness and quality of the results of the K-means clustering algorithm. At the diversity stage, the K-means algorithm with various initial choices of centroids is applied to the data set to create the base clusterings. Then, at the consensus function stage, the CSPA, HGPA, and MCLA algorithms are each used separately to produce the final results. The ROM of the trunk, hip and knee were passed as features through K-means with squared Euclidean distance and random initial centroids in Matlab 50 times to form the base clusterings for the ensemble. Before passing these results to ensemble clustering, duplicated K-means results were removed to increase the quality and diversity of the base clusterings. From the obtained base clusterings, the ensemble step applied CSPA, HGPA, and MCLA as consensus functions using the Python ClusterEnsembles package.
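To illustrate the ensemble idea, the following Python sketch builds a CSPA-style co-association matrix from several base labelings and derives a toy consensus by linking points whose co-occurrence fraction exceeds a threshold. This is only a simplified stand-in for the ClusterEnsembles package used in the study, which instead re-clusters via hypergraph partitioning:

```python
def co_association(base_labelings):
    """CSPA-style similarity: entry (i, j) is the fraction of base clusterings
    in which points i and j fall in the same cluster (B = (1/r) L Lᵀ)."""
    n = len(base_labelings[0])
    r = len(base_labelings)
    return [[sum(lab[i] == lab[j] for lab in base_labelings) / r
             for j in range(n)] for i in range(n)]

def consensus_by_threshold(sim, tau=0.5):
    """Toy consensus: connected components of the graph linking points whose
    similarity exceeds tau (a stand-in for re-clustering the matrix)."""
    n = len(sim)
    labels = [-1] * n
    cluster = 0
    for s in range(n):
        if labels[s] != -1:
            continue
        stack = [s]
        while stack:
            i = stack.pop()
            if labels[i] != -1:
                continue
            labels[i] = cluster
            stack.extend(j for j in range(n)
                         if labels[j] == -1 and sim[i][j] > tau)
        cluster += 1
    return labels
```

Pairs that most base clusterings agree on end up with a similarity near 1 and are merged, so individual unstable K-means runs have little influence on the consensus.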

2.6. Clustering—Wards

Besides partitioning clusters, the other widely utilized clustering algorithm is Hierarchical Clustering. The Hierarchical Clustering algorithm produces clusters in a hierarchical tree-like structure or a dendrogram [26,37].
In the beginning, each data point is assigned to its own unique cluster. A new cluster can then be formed after each iteration by combining two data points, allocating a data point to an existing cluster, or merging two clusters [38]; the merge is chosen once the similarity or dissimilarity between every pair of objects (data points or clusters) has been evaluated. There are four common kinds of linkage techniques, but research has shown that Ward's linkage is the most effective technique for dealing with noisy data compared to the other three [26,38,39].
The linkage technique developed by Ward in 1963 uses the incremental sum of squares, i.e., the growth in the total within-cluster sum of squares as a consequence of merging two clusters [26]. The within-cluster sum of squares is the sum of the squared distances between all objects in the cluster and the cluster's centroid [38]. The sum-of-squares metric is equivalent to the distance metric d_EF shown below:
d²_EF = (2 n_E n_F / (n_E + n_F)) ‖ȳ_E − ȳ_F‖²
where ‖·‖ is the Euclidean distance, ȳ_E and ȳ_F are the centroids of clusters E and F, respectively, and n_E and n_F are the numbers of elements in clusters E and F. In some research studies, Ward's linkage is defined without the factor of 2 in Equation (2). The main purpose of this factor is to ensure that the distance between two singleton clusters equals the Euclidean distance between them.
To calculate the distance from a cluster D to a new cluster C formed by merging clusters E and F, the updated equation is:
d²_DC = ((n_E + n_D)/(n_C + n_D)) d²_DE + ((n_F + n_D)/(n_C + n_D)) d²_DF − (n_D/(n_C + n_D)) d²_EF
where d_DE is the distance between clusters D and E, d_DF the distance between clusters D and F, d_EF the distance between clusters E and F, and n_E, n_F, n_C and n_D are the numbers of elements in clusters E, F, C and D, respectively.
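The two Ward formulas can be checked against each other in a few lines of Python: merging two singletons and measuring the Ward distance directly from the merged centroid should agree with the Lance–Williams update. This is a sketch for illustration only; the study used Matlab's linkage and cluster functions:

```python
def ward_distance(yE, yF, nE, nF):
    """Squared Ward distance between clusters E and F (Eq. 2):
    d² = 2·nE·nF/(nE+nF) · ‖centroid_E − centroid_F‖²."""
    sq = sum((a - b) ** 2 for a, b in zip(yE, yF))
    return 2 * nE * nF / (nE + nF) * sq

def ward_update(d2_DE, d2_DF, d2_EF, nD, nE, nF):
    """Lance-Williams update (Eq. 3): squared distance from cluster D to the
    cluster C formed by merging E and F."""
    nC = nE + nF
    return ((nE + nD) * d2_DE + (nF + nD) * d2_DF - nD * d2_EF) / (nC + nD)
```

The update form matters in practice: it lets a hierarchical clustering maintain only a distance matrix between current clusters, never revisiting the raw points after each merge.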
After the hierarchical cluster is formed, a cut point is determined which can be at any position in the tree so that a full description of the clusters (final output) can be extracted [26]. In this study, the ROM of the trunk, hip and knee were passed through Ward clustering with Euclidean distance using the linkage and cluster function in Matlab.

2.7. Determining Optimal Number of Cluster

In partitional clustering such as K-means, the number of clusters h to be formed must be defined in advance, and choosing the correct optimal number of clusters for a given data set is challenging. This question has no definitive answer: the ideal number of clusters depends on the method used to measure similarity and on the partitioning parameters, and is therefore highly subjective. A basic and popular strategy is to examine the dendrogram produced by hierarchical clustering to see whether it suggests a specific number of clusters, but this strategy is equally subjective. More systematic approaches fall into two groups: direct methods and statistical testing methods. A direct method optimizes a criterion, such as the within-cluster sum of squares or the average silhouette; the corresponding procedures are known as the elbow and silhouette methods. The silhouette method analyses the average distance between clusters, while the elbow method analyses the total within-cluster sum of squares (WSS) for different numbers of clusters. In this research, both the elbow and silhouette methods were used to determine the optimal number of clusters for K-means and Ward clustering, using the evalclusters function from Matlab and the KElbowVisualizer function from Yellowbrick.
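A small Python sketch of the two criteria (illustrative only; the study used Matlab's evalclusters and Yellowbrick's KElbowVisualizer): the elbow method tracks the total WSS as the cluster count varies, and the silhouette of a point compares its mean within-cluster distance a with the smallest mean distance b to another cluster:

```python
import math

def wss(points, labels, centroids):
    """Total within-cluster sum of squares: the quantity the elbow method
    plots against the number of clusters."""
    return sum(sum((p - q) ** 2 for p, q in zip(pt, centroids[l]))
               for pt, l in zip(points, labels))

def silhouette(i, points, labels):
    """Silhouette of point i: (b − a) / max(a, b), where a is the mean distance
    to points of the same cluster and b the smallest mean distance to points
    of any other cluster."""
    own = [math.dist(points[i], p) for j, p in enumerate(points)
           if labels[j] == labels[i] and j != i]
    a = sum(own) / len(own)
    others = {}
    for j, p in enumerate(points):
        if labels[j] != labels[i]:
            others.setdefault(labels[j], []).append(math.dist(points[i], p))
    b = min(sum(d) / len(d) for d in others.values())
    return (b - a) / max(a, b)
```

A silhouette near 1 means the point sits firmly inside its cluster; averaging it over all points for each candidate h gives the curve the silhouette method inspects.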

2.8. Machine Learning—Classification

A Bayesian neural network [15,40] operates a feed-forward structure with three layers, formed by:
z_k(x, w) = f(b_k + Σ_{i=1..l} w_ki f(b_i + Σ_{j=1..m} w_ij x_j))
where f(·) is the transfer function (in this paper, the hyperbolic tangent), m is the number of input nodes (j = 1, …, m), l is the number of hidden nodes (i = 1, …, l), q is the number of outputs (k = 1, …, q), w_ij is the weight from input unit x_j to hidden unit y_i, w_ki is the weight from hidden unit y_i to output z_k, and b_i and b_k are biases.
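The forward pass of this three-layer structure can be sketched in a few lines of Python (the weights and biases here are illustrative placeholders, not the trained network):

```python
import math

def forward(x, W_in, b_in, W_out, b_out):
    """Forward pass of the three-layer feed-forward network:
    z_k = f(b_k + Σ_i w_ki · f(b_i + Σ_j w_ij · x_j)), with f = tanh.
    W_in is l×m (hidden×input), W_out is q×l (output×hidden)."""
    hidden = [math.tanh(bi + sum(wij * xj for wij, xj in zip(row, x)))
              for row, bi in zip(W_in, b_in)]
    return [math.tanh(bk + sum(wki * yi for wki, yi in zip(row, hidden)))
            for row, bk in zip(W_out, b_out)]
```

With all weights and biases at zero, every output is tanh(0) = 0, which is a convenient sanity check when wiring up the layer dimensions.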
A Bayesian regularization structure is suggested to improve the generalization capability of the neural network even when the available data are noisy and/or finite [41]. In Bayesian learning, the probability distribution of the network parameters is estimated, so the trained network can deliver the best generalization. In particular, all of the available data can be used for training, which makes this kind of neural network appropriate for applications with small data sets.
The most probable model in the Bayesian framework, given the training data S, is acquired automatically. Based on a Gaussian probability distribution over weight values, applying Bayes' theorem gives the posterior distribution of the weights w in network H:
p(w | S, H) = p(S | w, H) p(w | H) / p(S | H)
where p(S | w, H) is the likelihood, which contains the knowledge about the weights gained from observation; the prior distribution p(w | H) contains the background knowledge about the weights; and p(S | H) is the evidence for network H.
For the MLP neural network described in Figure 3, the cost function G(w) is minimized to obtain the most probable value of the neural network weights, w^MP:
G(w) = β K_S(w) + α K_W(w)
where the hyper-parameters are symbolized by α and β, the ratio α/β controls the effective complexity of the network structure, K_S(w) is the error function, and K_W(w) is the total squared weight function, calculated as:
K_W(w) = (1/2) ‖w‖²
By updating the cost function with these hyper-parameters, excessively large weights, which can lead to poor generalization on new test cases, are avoided. Consequently, a separate validation set is not compulsory during the neural network training process.
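As a small illustration of the regularized cost, the following Python sketch evaluates G(w) from a list of errors and weights, taking K_S as the sum of squared errors for simplicity (the study's Matlab implementation used the mean squared error):

```python
def cost(errors, weights, alpha, beta):
    """Regularized cost G(w) = β·K_S(w) + α·K_W(w), where K_S is taken here
    as the sum of squared errors and K_W = 0.5·‖w‖² penalizes large weights."""
    K_S = sum(e ** 2 for e in errors)
    K_W = 0.5 * sum(w ** 2 for w in weights)
    return beta * K_S + alpha * K_W
```

Raising α relative to β shifts the optimum toward smaller weights, which is exactly how the ratio α/β controls the effective complexity of the network.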
To update the hyper-parameters, the Bayesian regularization algorithm is used:
α^MP = γ / (2 K_W(w^MP));  β^MP = (N − γ) / (2 K_S(w^MP))
where γ = c − 2α^MP tr(H^MP)⁻¹ is the effective number of parameters, c is the total number of parameters in the network, N is the total number of errors, and H is the Hessian matrix of G(w) at the minimum point w^MP.
The Bayesian framework can estimate and evaluate the log evidence of model H_i:
ln p(S | H_i) = −α^MP K_W^MP − β^MP K_S^MP − (1/2) ln|A| + (W/2) ln α^MP + (N/2) ln β^MP + ln M! + 2 ln M + (1/2) ln(2/γ) + (1/2) ln(2/(N − γ))
where W is the number of network parameters, M is the number of hidden nodes, and A is the Hessian matrix of the cost function. The optimal structure is determined from the log-evidence value: the structure with the highest value is chosen.
To measure multi-class classification performance, the familiar performance metrics precision, recall and accuracy can be used:
Recall = TP_O / (TP_O + FN_O)
Precision = TP_O / (TP_O + FP_O)
Accuracy = (number of correctly classified data) / (total number of data)
where O denotes one of the classes; TP_O (true positives of O) is the number of inputs of class O classified as O; FP_O (false positives of O) is the number of inputs not of class O classified as O; and FN_O (false negatives of O) is the number of inputs of class O classified as not O.
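These one-vs-rest metrics can be computed directly from label lists, as in this Python sketch:

```python
def multiclass_metrics(true, pred):
    """Per-class one-vs-rest precision and recall, plus overall accuracy,
    computed from parallel lists of true and predicted labels."""
    classes = sorted(set(true))
    out = {}
    for o in classes:
        tp = sum(t == o and p == o for t, p in zip(true, pred))
        fp = sum(t != o and p == o for t, p in zip(true, pred))
        fn = sum(t == o and p != o for t, p in zip(true, pred))
        out[o] = {"precision": tp / (tp + fp) if tp + fp else 0.0,
                  "recall": tp / (tp + fn) if tp + fn else 0.0}
    accuracy = sum(t == p for t, p in zip(true, pred)) / len(true)
    return out, accuracy
```

Reporting precision and recall per cluster, as Table 3 does, guards against an accuracy figure that is inflated by one dominant cluster.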
The clustering algorithm's output is combined with the original input data set to create a new data set for Bayesian neural network classification. The data set is separated into two sets: the first contains 50% of the trials and is used for training, and the second contains the remainder for testing. To train the neural network classifier, the Levenberg–Marquardt algorithm with Bayesian regularization is implemented, and the mean squared error function is selected as the error function [41]. In this study, the cluster results (Ward clustering, and the combinations of K-means with CSPA, HGPA and MCLA) and the clustering features (ROM of trunk, hip and knee) were passed to the Bayesian neural network with a maximum number of epochs of 600, a maximum mu of 1 × 10^100, and a minimum performance gradient of 1 × 10^−20, using a Matlab script. Fifty percent of the data set (239 trials) was used for training, and the other half (234 trials) for testing.

2.9. Statistical Analysis

Besides using the Bayesian neural network to choose the better unsupervised machine learning algorithm for classifying lifting technique, partial eta squared (partial η²) was used to measure the effect size of the different algorithms from a statistical point of view.
A one-way multivariate analysis of variance (MANOVA) was used to examine the trunk, hip, and knee ROM differences between clusters. Tukey's Honestly Significant Difference test was conducted to analyse significant group differences. Least significant difference analyses were performed on patient-reported outcome measures such as the Pain Self-Efficacy Questionnaire (PSEQ). All analyses were conducted with a significance level of 0.05 using IBM SPSS version 28.0.1 (SPSS Inc., Chicago, IL, USA).

3. Results

Four-hundred and seventy-three lifting trials were included in this study. The participants’ demographic information is summarized in Table 1.
The results of the elbow and silhouette methods for the Ward clustering algorithm are shown in Figure 4 and Figure 5, and for the K-means clustering algorithm in Figure 6 and Figure 7. The elbow method suggests that the optimal number of clusters is two for both Ward and K-means, while the silhouette method suggests four. In this study, each cluster represents a lifting technique that people with CLBP use. Conventionally, lifting techniques are classified as two techniques, a stoop lift or a leg lift, and the main objective of this research is to identify how many lifting techniques people with CLBP perform beyond these two. As a result, the optimal number of clusters for both K-means and Ward clustering was set to four.
The descriptive statistics pertaining to the range of motion of the trunk, hip and knee for each cluster between different unsupervised machine learning methods are summarized in Table 2.
Classification of four clusters by applying the Bayesian neural network as a classifier on the test set of different methods is summarized in Table 3. Additionally, the effect sizes of different unsupervised machine learning are shown in Table 4. For trunk, hip and knee features, Ward clustering provided the highest effect compared to the combination of K-means and ensemble clustering. The Bayesian neural network and effect size calculation result suggests that Ward clustering is the optimal algorithm for this study.
The optimum number of hidden nodes of the Bayesian neural network training versus log evidence is plotted and represented in Figure 8. Based on the plot, the training model with nine hidden nodes is determined as the best classification indication.
Cluster 2 was the most common, which constituted 40.80% of all self-selected techniques. The least chosen was Cluster 1 (6.76% of all data analysed). The descriptive statistics pertaining to PSEQ and pain level for each cluster are summarized in Table 5.
The result of the hierarchical tree is represented using a dendrogram shown in Figure 9. Four clusters were created. A 3D scatter plot was generated to visualize the clustering algorithm’s output in three-dimensional space, and it is shown in Figure 10.
There were significant differences in ROM between clusters across body regions (F (9, 1136) = 195.67, p < 0.0001). The post hoc tests, summarized in Figure 11, revealed significant differences between all clusters for trunk, hip and knee ROM. This shows that the four clusters are statistically distinct.
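The study's omnibus test was multivariate with post hoc comparisons; as a simpler, hedged sketch, a univariate one-way ANOVA on a single joint illustrates the kind of between-cluster test involved. Here trunk ROM groups are simulated from the Table 2 (Ward) means and SDs; the group sizes are arbitrary.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
# (mean, SD) of trunk ROM for the four Ward clusters, per Table 2.
params = [(34.19, 6.84), (41.88, 7.84), (54.00, 9.26), (48.04, 9.73)]
trunk_by_cluster = [rng.normal(m, s, 50) for m, s in params]

# One-way ANOVA: does trunk ROM differ across the four clusters?
F, p = f_oneway(*trunk_by_cluster)
```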
Clusters 2 and 4 are hip-dominant with more knee movement than trunk movement. Cluster 3 is hip-dominant with more trunk movement than knee movement. Cluster 1 is knee-dominant with more hip movement than trunk movement.
The post hoc test revealed differences in PSEQ between the clusters, summarized in Figure 12. The mean PSEQ score of cluster 1 is statistically significantly different from those of the other clusters (p < 0.05), whereas the mean PSEQ scores of clusters 2, 3 and 4 do not differ significantly from one another (p > 0.05 for each pair). From a clinical point of view, a PSEQ score greater than 40 indicates minimal impairment and a confident patient, while a score below 40 indicates impairment and low confidence. By this criterion, only cluster 1 (mean PSEQ score 35.56; Table 5) differs from the other clusters.

4. Discussion

To the best of our knowledge, this is the first study to utilize unsupervised and supervised machine learning algorithms to classify different movement strategies during a dynamic task using a motion capture camera. Besides the stoop lift (represented by cluster 3) and the leg lift (represented by cluster 1), the study found two additional lifting techniques in people with CLBP, which is in agreement with our study hypothesis. In clusters 2 and 4, people with CLBP tend to lift mainly with the hip, with additional support from the knee. Although the Ward clustering method differs from the combination of K-means and ensemble clustering, the descriptive statistics of trunk, hip and knee ROM for each cluster are similar between methods. Both approaches suggest four different lifting techniques: hip dominance with more trunk movement (cluster 3 in Ward clustering, cluster 3 in K-means and CSPA, cluster 1 in K-means and HGPA, and cluster 3 in K-means and MCLA); knee dominance with more hip movement (cluster 1 in Ward clustering, cluster 2 in K-means and CSPA, cluster 4 in K-means and HGPA, and cluster 4 in K-means and MCLA); hip dominance with more knee movement than the trunk (cluster 4 in Ward clustering, cluster 4 in K-means and CSPA, cluster 2 in K-means and HGPA, and cluster 1 in K-means and MCLA); and hip dominance with much more knee movement than the trunk (cluster 2 in Ward clustering, cluster 1 in K-means and CSPA, cluster 3 in K-means and HGPA, and cluster 2 in K-means and MCLA). Additionally, the study suggested that Ward clustering was the optimal clustering method for the current data set.
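Because cluster numbering is arbitrary (the same technique receives different cluster labels under Ward, CSPA, HGPA and MCLA), agreement between two methods' partitions can be quantified with a label-permutation-invariant measure such as the adjusted Rand index [39]. A toy illustration with scikit-learn; the label sequences are invented for the example.

```python
from sklearn.metrics import adjusted_rand_score

# Two labelings of the same eight lifts: identical partition, permuted names.
ward_labels = [3, 3, 1, 1, 4, 4, 2, 2]
hgpa_labels = [1, 1, 4, 4, 2, 2, 3, 3]

# ARI = 1.0 for identical partitions regardless of label names, ~0 for random ones.
ari = adjusted_rand_score(ward_labels, hgpa_labels)
```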
The algorithm in this study achieved high recall and precision values for all lifting clusters. This indicates that most of the data were classified correctly and that the model performs well; there is therefore potential to classify new data with this model. In previous research, hierarchical clustering techniques have been applied in psychiatry. Paykel and Rassaby [42] applied hierarchical clustering to classify suicide attempters; the result helped investigate causes and guided improvements in therapy. Additionally, Kurz et al. [43] used Ward's hierarchical agglomerative method to classify suicidal behaviour, reporting an accuracy of 95.7% (Ward's method classified 96% of all cases) [43]. Their clusters were found to have implications for clinical interpretation, therapy and prognostication. This demonstrates that hierarchical clustering can successfully provide valuable information for further application.
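The per-cluster recall and precision reported in Table 3 can be computed as sketched below; the true and predicted labels are illustrative stand-ins, not the study's actual Bayesian neural network test-set output.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [1, 1, 2, 2, 3, 3, 4, 4]
y_pred = [1, 1, 2, 2, 3, 4, 4, 4]   # one cluster-3 lift misclassified as cluster 4
clusters = [1, 2, 3, 4]

# average=None returns one score per cluster, in the order given by `labels`.
recall = recall_score(y_true, y_pred, labels=clusters, average=None)
precision = precision_score(y_true, y_pred, labels=clusters, average=None)
accuracy = accuracy_score(y_true, y_pred)   # overall fraction correct
```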
In this study, several variables might impact lifting technique in people with CLBP, such as height, weight, age, duration of pain and lumbar muscle strength. Therefore, a multivariate analysis of covariance (MANCOVA) was conducted as a further analysis to determine whether trunk, hip and knee ROM differed between clusters whilst controlling for these variables. The MANCOVA indicated a statistically significant difference between clusters in combined trunk, hip and knee ROM after controlling for height, weight, age, duration of pain and lumbar muscle strength. In other words, these covariates did not alter the significant differences found in our initial MANOVA analysis.
Lifting is an important risk factor associated with work-related CLBP [44]. People with CLBP utilize different lifting techniques compared to people without CLBP, in particular less trunk ROM and increased knee ROM, a pattern identified as one of several lifting-technique phenotypes in this study [45]. Identification of lifting techniques in people with CLBP is potentially important in rehabilitation. Physiotherapists and manual handling advisors often encourage CLBP patients to lift with a straight back, which is already the most common lifting technique identified in this study [46]. It is unclear whether the straight-back lifting technique is the cause or the result of the motor adaptation of CLBP. Therefore, it is perhaps unsurprising that the general advice to keep the back straight during lifting did not prevent future low back pain [46]. Identifying lifting techniques and their impact on the lumbar spine can help clinicians to guide LBP patients in adopting a lifting technique that imposes the least amount of force on the lumbar spine. This information can help clinicians in directing effective ergonomic interventions in people with occupational LBP. Additionally, identifying lifting techniques in people with CLBP may help physiotherapists direct and prioritize rehabilitation towards appropriate areas of the body that may be associated with a painful lifting technique (e.g., trunk, hip, and knee). This may result in positive changes in pain and CLBP-related disability (i.e., precision medicine).
An important finding of this study is that despite the majority of the clusters demonstrating similar pain levels, cluster 1 demonstrated the lowest pain self-efficacy; this cluster used the least trunk movement and the most knee movement during lifting (i.e., lifting with a 'straight' back). This finding is consistent with previous literature showing that pain self-efficacy is positively correlated with lumbar ROM during lifting in people with CLBP [47]. The lower self-efficacy in this group may manifest as reluctance to bend the lumbar spine during lifting, which in a past study has been associated with higher lumbar extension load [48]. This, in turn, may sensitize lumbar tissues, resulting in pain perpetuation. Thus, observation of a patient's lifting technique and pain self-efficacy may be key clinical assessments to define this group and develop appropriate multicomponent interventions, such as education and exercise [49].
One should interpret these study results with caution. One limitation of this study is the small sample size, and the reader should be cautious when generalizing the results to larger samples. However, in previous studies with samples of n = 236 [42] and n = 486 [43], this method still clustered participants accurately. Another limitation is that clustering was performed on individual repetitions instead of mean joint angles for each participant, as this study aimed to identify and capture as many different lifting techniques as people with CLBP could demonstrate. This means that we could not account for within-participant lifting technique variability (i.e., variation between lifting repetitions performed by a single participant). Pain alters movement variability within and between participants due to differences in muscle recruitment strategies, psychological features (e.g., fear-avoidant behaviour) and muscle function (e.g., strength, flexibility), to name a few [50,51]. As a result, a small number of CLBP participants in this study (n = 17, or 15% of total participants) employed more than one lifting technique (i.e., belonged to more than one lifting cluster). The clinical implication of this limitation is currently unclear and should be evaluated in future studies. Third, the experiment only focused on trunk, hip and knee ROM in the sagittal plane. As such, we were unable to capture movement of the trunk, hip and knee in the coronal and axial planes. However, the symmetrical lifting task mainly involves movements of the lumbar spine, hip and knee in the sagittal plane. Future studies should explore clustering techniques involving movements in all planes.
It is currently unknown if lifting technique clusters in healthy people are different from those in people with CLBP. Future studies should aim to compare lifting techniques between healthy people and people with CLBP; this information will guide the assessment and rehabilitation of movement in people with CLBP. A validation of this novel methodology from a clinical point of view can also be conducted in future studies.

5. Conclusions

To the best of our knowledge, this research is the first study introducing an innovative methodology for classifying different movement strategies during lifting tasks in people with CLBP using unsupervised and supervised machine learning techniques. The optimal unsupervised machine learning technique, based on Ward's clustering, accurately differentiated four distinct movement groups in people with CLBP instead of the two lifting techniques reported in current state-of-the-art research. The output of the clustering (four clusters) was validated by supervised machine learning using a Bayesian neural network with an accuracy of 97.9%. This promising technique could aid in more precise assessment and rehabilitation of people with CLBP.

Author Contributions

Conceptualization, T.C.P., A.P. and R.C.; methodology, T.C.P., A.P. and R.C.; software, T.C.P. and R.C.; validation, T.C.P., A.P. and R.C.; formal analysis, T.C.P.; investigation, T.C.P., A.P., J.F., A.B., H.T.N. and R.C.; resources, A.P., J.F., A.B., H.T.N. and R.C.; data curation, A.P., J.F. and A.B.; writing—original draft preparation, T.C.P., A.P., J.F. and R.C.; writing—review and editing, T.C.P., A.P., J.F., A.B., H.T.N. and R.C.; visualization, A.P. and R.C.; supervision, A.P., H.T.N. and R.C.; project administration, T.C.P., A.P., H.T.N. and R.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was approved by the University of Melbourne Behavioural and Social Sciences Human Ethics Sub-Committee (reference number 1749 845, approved on 8 August 2017).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

For third-party data, restrictions apply to the availability of these data. Data were obtained from the University of Melbourne and are available from Pranata, A. et al. or at https://www.sciencedirect.com/science/article/pii/S0021929018301131 (accessed on 26 July 2020) with the permission of the University of Melbourne.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Wu, A.; March, L.; Zheng, X.; Huang, J.; Wang, X.; Zhao, J.; Blyth, F.M.; Smith, E.; Buchbinder, R.; Hoy, D. Global low back pain prevalence and years lived with disability from 1990 to 2017: Estimates from the Global Burden of Disease Study 2017. Ann. Transl. Med. 2020, 8, 299.
2. van Dieën, J.H.; Reeves, N.P.; Kawchuk, G.; van Dillen, L.R.; Hodges, P.W. Motor Control Changes in Low Back Pain: Divergence in Presentations and Mechanisms. J. Orthop. Sports Phys. Ther. 2019, 49, 370–379.
3. Prins, M.; Griffioen, M.; Veeger, T.T.J.; Kiers, H.; Meijer, O.G.; van der Wurff, P.; Bruijn, S.M.; van Dieën, J.H. Evidence of splinting in low back pain?: A systematic review of perturbation studies. Eur. Spine J. 2018, 27, 40–59.
4. Pranata, A.; Perraton, L.; El-Ansary, D.; Clark, R.; Mentiplay, B.; Fortin, K.; Long, B.; Brandham, R.; Bryant, A. Trunk and lower limb coordination during lifting in people with and without chronic low back pain. J. Biomech. 2018, 71, 257–263.
5. Bazrgari, B.; Shirazi-Adl, A.; Arjmand, N. Analysis of squat and stoop dynamic liftings: Muscle forces and internal spinal loads. Eur. Spine J. 2007, 16, 687–699.
6. van Dieën, J.H.; Hoozemans, M.J.; Toussaint, H.M. Stoop or squat: A review of biomechanical studies on lifting technique. Clin. Biomech. 1999, 14, 685–696.
7. Zawadka, M.; Skublewska-Paszkowska, M.; Gawda, P.; Lukasik, E.; Smolka, J.; Jablonski, M. What factors can affect lumbopelvic flexion-extension motion in the sagittal plane?: A literature review. Hum. Mov. Sci. 2018, 58, 205–218.
8. Laird, R.A.; Gilbert, J.; Kent, P.; Keating, J.L. Comparing lumbo-pelvic kinematics in people with and without back pain: A systematic review and meta-analysis. BMC Musculoskelet. Disord. 2014, 15, 229.
9. Sadler, S.G.; Spink, M.J.; Ho, A.; de Jonge, X.J.; Chuter, V.H. Restriction in lateral bending range of motion, lumbar lordosis, and hamstring flexibility predicts the development of low back pain: A systematic review of prospective cohort studies. BMC Musculoskelet. Disord. 2017, 18, 179.
10. Nicholas, M.K. The pain self-efficacy questionnaire: Taking pain into account. Eur. J. Pain 2005, 11, 153–163.
11. Ferrari, S.; Vanti, C.; Pellizzer, M.; Dozza, L.; Monticone, M.; Pillastrini, P. Is there a relationship between self-efficacy, disability, pain and sociodemographic characteristics in chronic low back pain? A multicenter retrospective analysis. Arch. Physiother. 2019, 9, 9.
12. Levin, J.B.; Lofland, K.R.; Cassisi, J.E.; Poreh, A.M.; Blonsky, E.R. The relationship between self-efficacy and disability in chronic low back pain patients. Int. J. Rehabil. Health 1996, 2, 19–28.
13. Karasawa, Y.; Yamada, K.; Iseki, M.; Yamaguchi, M.; Murakami, Y.; Tamagawa, T.; Kadowaki, F.; Hamaoka, S.; Ishii, T.; Kawai, A.; et al. Association between change in self-efficacy and reduction in disability among patients with chronic pain. PLoS ONE 2019, 14, e0215404.
14. de Moraes Vieira, É.B.; de Góes Salvetti, M.; Damiani, L.P.; de Mattos Pimenta, C.A. Self-Efficacy and Fear Avoidance Beliefs in Chronic Low Back Pain Patients: Coexistence and Associated Factors. Pain Manag. Nurs. 2014, 15, 593–602.
15. Rifai, C.; Naik, G.R.; Tuan Nghia, N.; Sai Ho, L.; Tran, Y.; Craig, A.; Nguyen, H.T. Driver Fatigue Classification with Independent Component by Entropy Rate Bound Minimization Analysis in an EEG-Based System. IEEE J. Biomed. Health Inform. 2017, 21, 715–724.
16. Bell, J. Machine Learning: Hands-On for Developers and Technical Professionals; Wiley: Hoboken, NJ, USA, 2014.
17. McCallum, A.; Nigam, K.; Ungar, L. Efficient clustering of high-dimensional data sets with application to reference matching. In Proceedings of the International Conference on Knowledge Discovery and Data Mining, Boston, MA, USA, 20–23 August 2000; ACM: New York, NY, USA, 2000; pp. 169–178.
18. Fiorini, L.; Cavallo, F.; Dario, P.; Eavis, A.; Caleb-Solly, P. Unsupervised Machine Learning for Developing Personalised Behaviour Models Using Activity Data. Sensors 2017, 17, 1034.
19. Pagnuco, I.A.; Pastore, J.I.; Abras, G.; Brun, M.; Ballarin, V.L. Analysis of genetic association using hierarchical clustering and cluster validation indices. Genomics 2017, 109, 438–445.
20. Kuizhi, M.; Jinye, P.; Ling, G.; Zheng, N.; Jianping, F. Hierarchical classification of large-scale patient records for automatic treatment stratification. IEEE J. Biomed. Health Inform. 2015, 19, 1234–1245.
21. Hamid, J.S.; Meaney, C.; Crowcroft, N.S.; Granerod, J.; Beyene, J. Cluster analysis for identifying sub-groups and selecting potential discriminatory variables in human encephalitis. BMC Infect. Dis. 2010, 10, 364.
22. Farragher, J.B.; Pranata, A.; Williams, G.; El-Ansary, D.; Parry, S.M.; Kasza, J.; Bryant, A. Effects of lumbar extensor muscle strengthening and neuromuscular control retraining on disability in patients with chronic low back pain: A protocol for a randomised controlled trial. BMJ Open 2019, 9, e028259.
23. Silvetti, A.; Mari, S.; Ranavolo, A.; Forzano, F.; Iavicoli, S.; Conte, C.; Draicchio, F. Kinematic and electromyographic assessment of manual handling on a supermarket green-grocery shelf. Work 2015, 51, 261–271.
24. Burgess-Limerick, R.; Abernethy, B.; Neal, R.J.; Kippers, V. Self-Selected Manual Lifting Technique: Functional Consequences of the Interjoint Coordination. Hum. Factors 1995, 37, 395–411.
25. Aggarwal, C.C. An Introduction to Cluster Analysis. In Data Clustering; Chapman and Hall: London, UK, 2014; pp. 1–27.
26. Xu, R.; Wunsch, D. Clustering; IEEE Press: Piscataway, NJ, USA, 2015.
27. Everitt, B.S.; Landau, S.; Leese, M.; Stahl, D. An Introduction to Classification and Clustering. In Cluster Analysis; John Wiley & Sons, Ltd.: Chichester, UK, 2011; pp. 1–13.
28. Hansen, P.; Jaumard, B. Cluster analysis and mathematical programming. Math. Program. 1997, 79, 191–215.
29. Omran, M.G.; Engelbrecht, A.P.; Salman, A. An overview of clustering methods. Intell. Data Anal. 2007, 11, 583–605.
30. Golalipour, K.; Akbari, E.; Hamidi, S.S.; Lee, M.; Enayatifar, R. From clustering to clustering ensemble selection: A review. Eng. Appl. Artif. Intell. 2021, 104, 104388.
31. Fred, A.L.N.; Jain, A.K. Combining multiple clusterings using evidence accumulation. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 835–850.
32. Berikov, V. Weighted ensemble of algorithms for complex data clustering. Pattern Recognit. Lett. 2014, 38, 99–106.
33. Li, F.; Qian, Y.; Wang, J.; Dang, C.; Jing, L. Clustering ensemble based on sample's stability. Artif. Intell. 2019, 273, 37–55.
34. Zhou, P.; Du, L.; Liu, X.; Shen, Y.-D.; Fan, M.; Li, X. Self-Paced Clustering Ensemble. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 1497–1511.
35. Strehl, A.; Ghosh, J. Cluster ensembles—A knowledge reuse framework for combining multiple partitions. J. Mach. Learn. Res. 2003, 3, 583–617.
36. Han, E.; Karypis, G.; Kumar, V.; Mobasher, B. Clustering Based on Association Rule Hypergraphs; University of Minnesota Digital Conservancy: Minneapolis, MN, USA, 1997.
37. Rajendran, S. What Is Hierarchical Clustering? An Introduction to Hierarchical Clustering. Available online: https://www.mygreatlearning.com/blog/hierarchical-clustering/ (accessed on 16 January 2021).
38. Anselin, L. Cluster Analysis Hierarchical Clustering Methods. Available online: https://geodacenter.github.io/workbook/7bh_clusters_2a/lab7bh.html (accessed on 15 January 2021).
39. Hubert, L.; Arabie, P. Comparing partitions. J. Classif. 1985, 2, 193–218.
40. Mezzetti, M.; Borzelli, D.; D'Avella, A. A Bayesian approach to model individual differences and to partition individuals: Case studies in growth and learning curves. Stat. Methods Appl. 2022, 1–27.
41. Bishop, C.M. Neural Networks for Pattern Recognition; Oxford University Press: Oxford, UK, 1995.
42. Paykel, E.S.; Rassaby, E. Classification of Suicide Attempters by Cluster Analysis. Br. J. Psychiatry 1978, 133, 45–52.
43. Kurz, A.; Möller, H.J.; Baindl, G.; Bürk, F.; Torhorst, A.; Wächtler, C.; Lauter, H. Classification of parasuicide by cluster analysis. Types of suicidal behaviour, therapeutic and prognostic implications. Br. J. Psychiatry 1987, 150, 520–525.
44. Coenen, P.; Gouttebarge, V.; van der Burght, A.S.A.M.; van Dieën, J.H.; Frings-Dresen, M.H.W.; van der Beek, A.J.; Burdorf, A. The effect of lifting during work on low back pain: A health impact assessment based on a meta-analysis. Occup. Environ. Med. 2014, 71, 871–877.
45. Nolan, D.; O'Sullivan, K.; Newton, C.; Singh, G.; Smith, B. Are there differences in lifting technique between those with and without low back pain? A systematic review. Physiotherapy 2020, 107, e76.
46. Nolan, D.; O'Sullivan, K.; Stephenson, J.; O'Sullivan, P.; Lucock, M. What do physiotherapists and manual handling advisors consider the safest lifting posture, and do back beliefs influence their choice? Musculoskelet. Sci. Pract. 2018, 33, 35–40.
47. La Touche, R.; Grande-Alonso, M.; Arnes-Prieto, P.; Paris-Alemany, A. How Does Self-Efficacy Influence Pain Perception, Postural Stability and Range of Motion in Individuals with Chronic Low Back Pain? Pain Physician 2019, 22, E1–E13.
48. Kingma, I.; Faber, G.S.; Bakker, A.J.; van Dieen, J.H. Can Low Back Loading During Lifting Be Reduced by Placing One Leg Beside the Object to Be Lifted? Phys. Ther. 2006, 86, 1091–1105.
49. Martinez-Calderon, J.; Flores-Cortes, M.; Morales-Asencio, J.M.; Fernandez-Sanchez, M.; Luque-Suarez, A. Which interventions enhance pain self-efficacy in people with chronic musculoskeletal pain? A systematic review with meta-analysis of randomized controlled trials, including over 12,000 participants. J. Orthop. Sports Phys. Ther. 2020, 50, 418–430.
50. Hodges, P.W.; Smeets, R.J. Interaction between pain, movement, and physical activity: Short-term benefits, long-term consequences, and targets for treatment. Clin. J. Pain 2015, 31, 97–107.
51. Hodges, P.W.; Tucker, K. Moving differently in pain: A new theory to explain the adaptation to pain. Pain 2011, 152 (Suppl. 3), S90–S98.
Figure 1. The components for the camera-based cluster classification system.
Figure 2. Sequence of consecutive actions during lifting.
Figure 3. Artificial neural network (ANN) structure (weights and biases are highlighted in red dotted circle).
Figure 4. Silhouette score results for Ward clustering algorithm.
Figure 5. Elbow method results for Ward clustering algorithm (blue line indicates distortion score, and orange dashed line indicates the time to train the clustering model).
Figure 6. Silhouette score results for K-means clustering algorithm.
Figure 7. Elbow method results for K-means clustering algorithm (blue line indicates distortion score, and orange dashed line indicates the time to train the clustering model).
Figure 8. Evidence framework of Bayesian inference (optimum number of hidden nodes of 9, indicated by red dashed line).
Figure 9. Dendrogram of output hierarchical tree (red dashed line indicates a cut point to form four clusters).
Figure 10. Three-dimensional scatter plot of output of cluster algorithm.
Figure 11. Estimated marginal means of trunk, hip and knee ROM between clusters.
Figure 12. Estimated marginal means of PSEQ between clusters (red dashed line indicates threshold value for PSEQ; below red line suggests clinically significant finding or low pain self-efficacy).
Table 1. Descriptive participants' demographic information.
Variable (units): Mean (SD)
Age (years): 45.4 (11.6)
Height (cm): 173.4 (11.1)
Weight (kg): 79.6 (17.6)
BMI (kg/m2): 26.3 (5.4)
Pain level (VAS out of 100): 45.8 (19.9)
Duration of pain (months): 89.2 (113.3)
ODI (%): 31.5 (14.4)
PSEQ (out of 60): 45.2 (9.9)
BMI, body mass index; ODI, Oswestry Disability Index; PSEQ, Pain Self-Efficacy Questionnaire; VAS, Visual Analogue Scale; SD, standard deviation.
Table 2. Descriptive statistics (mean (SD)) of trunk, hip and knee range of motion for each cluster between methods.
Ward Clustering
  Cluster 1: Trunk 34.19 (6.84); Hip 114.58 (9.74); Knee 136.24 (12.35)
  Cluster 2: Trunk 41.88 (7.84); Hip 107.37 (9.27); Knee 96.39 (12.66)
  Cluster 3: Trunk 54.00 (9.26); Hip 83.92 (13.02); Knee 26.70 (10.36)
  Cluster 4: Trunk 48.04 (9.73); Hip 93.54 (9.61); Knee 67.29 (13.72)
K-means + CSPA
  Cluster 1: Trunk 34.19 (6.84); Hip 105.20 (9.67); Knee 88.75 (7.46)
  Cluster 2: Trunk 33.01 (7.35); Hip 108.60 (11.55); Knee 115.13 (15.31)
  Cluster 3: Trunk 53.17 (9.09); Hip 86.44 (13.84); Knee 31.72 (16.95)
  Cluster 4: Trunk 47.28 (9.82); Hip 93.91 (10.35); Knee 64.57 (10.53)
K-means + HGPA
  Cluster 1: Trunk 52.80 (9.89); Hip 84.26 (13.06); Knee 30.92 (14.99)
  Cluster 2: Trunk 47.65 (9.24); Hip 96.07 (9.26); Knee 65.36 (10.90)
  Cluster 3: Trunk 44.92 (8.93); Hip 105.23 (9.61); Knee 89.02 (7.81)
  Cluster 4: Trunk 38.11 (7.38); Hip 108.59 (11.62); Knee 115.08 (15.45)
K-means + MCLA
  Cluster 1: Trunk 47.52 (9.52); Hip 95.24 (10.68); Knee 65.59 (11.18)
  Cluster 2: Trunk 42.58 (8.59); Hip 105.79 (10.47); Knee 95.49 (10.07)
  Cluster 3: Trunk 53.90 (9.16); Hip 84.23 (12.98); Knee 27.18 (10.60)
  Cluster 4: Trunk 36.09 (7.05); Hip 111.80 (9.95); Knee 129.71 (13.84)
Table 3. Classification results (recall, precision and accuracy) of Bayesian neural network between each cluster on test set.
Ward Clustering (accuracy 97.9%)
  Cluster 1: Recall 93.8%; Precision 93.8%
  Cluster 2: Recall 99.0%; Precision 96.9%
  Cluster 3: Recall 98.0%; Precision 100%
  Cluster 4: Recall 97.2%; Precision 98.6%
K-means + CSPA (accuracy 93.6%)
  Cluster 1: Recall 93.1%; Precision 88.5%
  Cluster 2: Recall 88.1%; Precision 94.5%
  Cluster 3: Recall 94.9%; Precision 98.2%
  Cluster 4: Recall 98.3%; Precision 93.5%
K-means + HGPA (accuracy 94.9%)
  Cluster 1: Recall 98.3%; Precision 96.7%
  Cluster 2: Recall 89.8%; Precision 98.1%
  Cluster 3: Recall 96.6%; Precision 89.1%
  Cluster 4: Recall 94.9%; Precision 96.6%
K-means + MCLA (accuracy 97.0%)
  Cluster 1: Recall 94.2%; Precision 98.5%
  Cluster 2: Recall 100%; Precision 94.8%
  Cluster 3: Recall 98.1%; Precision 98.1%
  Cluster 4: Recall 91.7%; Precision 100%
Table 4. Partial eta squared effect size result between different methods.
Ward Clustering: Trunk 0.015; Hip 0.078; Knee 0.122
K-means + CSPA: Trunk 0.002; Hip 0.013; Knee 0.029
K-means + HGPA: Trunk 0.007; Hip 0.010; Knee 0.043
K-means + MCLA: Trunk 0.001; Hip 0.003; Knee 0.058
Table 5. Descriptive statistics (mean (SD)) of PSEQ and pain level for each cluster.
PSEQ: Cluster 1 35.56 (8.45); Cluster 2 43.89 (9.95); Cluster 3 46.13 (10.53); Cluster 4 45.57 (9.5)
Pain: Cluster 1 51.03 (10.22); Cluster 2 50.33 (20.40); Cluster 3 47.73 (21.21); Cluster 4 48.17 (19.27)
SD, standard deviation.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
