Article

Automatic Post-Stroke Severity Assessment Using Novel Unsupervised Consensus Learning for Wearable and Camera-Based Sensor Datasets

Department of Electrical, Computer, and Biomedical Engineering, Faculty of Engineering and Architectural Science, Toronto Metropolitan University, Toronto, ON M5B 2K3, Canada
*
Author to whom correspondence should be addressed.
Sensors 2023, 23(12), 5513; https://doi.org/10.3390/s23125513
Submission received: 21 April 2023 / Revised: 30 May 2023 / Accepted: 2 June 2023 / Published: 12 June 2023

Abstract
Stroke survivors often suffer from movement impairments that significantly affect their daily activities. The advancements in sensor technology and IoT have provided opportunities to automate the assessment and rehabilitation process for stroke survivors. This paper aims to provide a smart post-stroke severity assessment using AI-driven models. In the absence of labelled data and expert assessment, there is a research gap in providing virtual assessment, especially for unlabelled data. Inspired by the advances in consensus learning, in this paper, we propose a consensus clustering algorithm, PSA-MNMF, that combines various clusterings into one united clustering, i.e., cluster consensus, to produce more stable and robust results compared to individual clustering. This paper is the first to investigate severity level using unsupervised learning and trunk displacement features in the frequency domain for post-stroke smart assessment. Two different methods of data collection from the U-limb datasets—the camera-based method (Vicon) and wearable sensor-based technology (Xsens)—were used. The trunk displacement method labelled each cluster based on the compensatory movements that stroke survivors employed for their daily activities. The proposed method uses the position and acceleration data in the frequency domain. Experimental results have demonstrated that the proposed clustering method that uses the post-stroke assessment approach improved evaluation metrics such as accuracy and F-score. These findings can lead to a more effective and automated stroke rehabilitation process that is suitable for clinical settings, thus improving the quality of life for stroke survivors.

1. Introduction

Stroke is the third major cause of disability. As reported in terms of disability-adjusted life years lost (DALYs), 143 million cases of disability worldwide in 2019 were due to stroke. Nearly 60% of post-stroke patients with upper-limb hemiparesis in the severe stage experience chronic major functional impairment [1]. Rehabilitation is highly recommended for stroke survivors as one of the most effective treatments for recovering motor function in the affected parts of their bodies [2,3]. The first step in a rehabilitation process is the assessment of the affected body part for rehabilitation planning and strategy. Traditionally, such assessments are self-report-based or conducted according to an expert decision. The Fugl-Meyer Assessment (FMA) is one such assessment, in which clinicians measure sensorimotor impairment and functional movements of the body associated with the range of motion, muscles, and joints. It also measures levels of severity in stroke survivors. FMA-UE refers to the FMA score for the Upper Extremities, which consists of 33 tasks scored between 0 and 2 points [2]. If a patient performs the required task fully or partially, or is unable to perform the task, the assigned point value will be 2, 1, or 0, respectively. The sum of the points from all tasks is the FMA score assigned to the patient, which ranges between 0 and 66. These measures are calculated manually using a checklist approach, based on the clinician’s opinion; however, this is a time-consuming and subjective process [4,5]. Moreover, the visual assessment involves some degree of uncertainty from various sources, such as assessor subjectivity [6] or motion irregularity [7,8,9]. Defining the patients’ movements accurately with comparable performance indicators will positively impact the targeted planning of rehabilitation strategies [10,11].
Therefore, automating this assessment will benefit both the rehabilitation process and the strategy and planning for regaining movement, which in turn would facilitate the stroke assessment process [9]. Automating assessment scores can be achieved by utilizing motion capture systems. Non-visual tracking systems, such as wearable sensors, or visual-based tracking systems, such as camera-based systems, can be implemented to automate stroke assessment. In the field of non-visual tracking, several sensor technologies, such as inertial sensory systems (IMUs), ultrasonic localization systems (UMSs), electromagnetic measurement systems (EMSs), and glove-based wearable sensors, are employed for motion capture [12,13]. Visual-based tracking systems use devices such as optical or camera-based sensors. The camera-based techniques consist of marker-based and markerless tracking systems. The marker-based method can use optical-passive or optical-active procedures. The optical-passive procedures include systems such as Vicon (Motion System Ltd., Oxford, UK), which uses optical, infrared cameras to track retroreflective markers. The optical-active system, by contrast, utilizes LED markers that radiate light that specific cameras can trace. In other words, the optical-passive procedures only reflect the light coming towards the markers, whereas the optical-active procedures produce the light that is then gathered using camera-based techniques [13]. The Vicon system, an optical-passive system, produces the 3D positions of objects by combining several cameras’ 2D positions, which are derived from the reflective markers placed on the body [13]. Several features, such as segment orientation, joint angle, and acceleration, can be derived from the Vicon camera system [13].
The wearable sensors or inertial measurement systems (IMSs), such as Xsens (Xsens MVN Awinda, Xsens Technologies, the Netherlands), fuse multiple inertial sensors, including a 3D gyroscope, a 3D accelerometer, and a 3D magnetometer, to collect data such as position, linear and angular acceleration, velocity, etc. MEMS motion sensors are utilized to develop a sensor fusion structure [14]. Several studies have focused on using wearable sensors and camera-based systems with machine learning technology for activity recognition [15,16,17,18,19,20,21,22,23], measurement classification [24,25,26,27,28,29,30,31,32,33,34], and clinical assessment simulation [35,36,37,38,39,40,41,42,43]. However, these studies primarily use supervised learning to automatically predict assessment scores. In clinical assessment simulations, a few studies have used unsupervised learning [1,44,45], but only to determine outliers or homogeneous movements, then utilized regression classifiers to predict scores. No study has investigated severity levels using unsupervised learning and trunk displacement features in the frequency domain for post-stroke smart assessment. In this paper, the compensatory movements, or trunk displacement, have been used to label each cluster and then been compared with the ground truth FMA score achieved by clinician experts. The clustering of data using two different datasets (camera and wearable) in the frequency domain using position and linear acceleration features is an additional novelty in, and contribution made by, this paper.
The main contributions of the reported research are summarized below:
  • The function of the affected hand in post-stroke patients (level of severity) was investigated using unsupervised learning.
  • The general movements categorized as activities of daily living, such as holding a cup and drinking, eating apples, answering the phone, etc., were utilized.
  • For the first time, position data in the frequency domain was used in addition to the acceleration data.
  • The novel labeling method for each cluster using trunk displacement is one of the main contributions made by this study.
  • In the study, the proposed method investigated not only wearable datasets but also camera-based datasets.
This paper is organized as follows. Section 2 describes the related works. Clustering analysis and consensus learning are discussed in Section 3 and Section 4, respectively. The proposed assessment model using consensus-based clustering is demonstrated in Section 5. The materials and methods, preprocessing, and proposed labeling method are described in Section 6. The data preprocessing is shown in Section 7. The experimental results are presented in Section 8. Furthermore, a discussion of the results is offered in Section 9. Finally, the conclusions and future work are presented in Section 10.

2. Related Work

Supervised learning predicts severity levels by modelling various connections between patient attributes and the outcomes of interest, and has been investigated the most within the literature [46,47]. The studies developed on the automated assessment of the motor function of the upper extremity are divided into three types, including those on activity recognition [15,16,17,18,19,20,21,22,23], measurement classification [3,24,25,26,27,28,29,30,31,32,33,34], and clinical assessment simulation [1,35,36,37,38,39,40,41,42,43]. Additionally, the assessments have been automated by employing various sensors. Wearable sensor-based systems are widely used (such as IMUs or accelerometers [15,16,17,18,19,20,21,22,23], barometers [48,49], flex sensors [21], EMG sensors [24,25], pressure sensors, etc.) [21,26,27,28,29,30,31,32], as are camera-based systems [5,33,34,35,36], to automate the stroke assessment test by reducing barriers and the obligations of experts or physiotherapists [1]. Wearable and camera-based sensors have been used for clinical treatment and rehabilitation [12,47].

2.1. Wearable Sensors

The authors in [1] chose 23 stroke patients with FMA-UE scores of less than 30, as found in severe stroke patients, and used unsupervised learning to determine the homogeneous movements, outlier movements, and all moving components. A study by [26] used the Xsens wearable system and 17 sensors to define the correlation between the FMA score and body movements while patients performed daily activities. Two IMU sensors were developed in [27] to assess the outcomes of rehabilitation treatments. However, these two studies did not implement machine learning techniques. Additionally, Refs. [28,29] estimated the functional ability scale using accelerometer sensors attached to the arm, upper arm, and hand. The Random Forest technique used the acceleration data to predict FMA scores. In addition to the accelerometer, flex sensors were attached to the body in [21] to monitor patient movements, and the Extreme Learning Machine (ELM) was applied to predict FMA scores [21]. Researchers employed a rule-based classification [37] to estimate each patient’s FMA score using accelerometer sensors affixed to the upper arm and forearm. A summary of the work conducted using wearable sensors is presented in Table 1. In the literature, the assessment type has been investigated based on three categories: clinical emulation [1,19,20,21,22,23,30,38,39,40], movement classification [3,17,18,25,41,42,43,50,51,52,53,54], and activity recognition [15,16,24,44,48,49,55,56,57,58,59]. As this study focused on clinical emulation, Table 1 describes only the summary of this category. Several studies deployed individual accelerometers [19,20,22,37], and some studies combined IMUs with different sensors such as flex sensors [60,61]. While some studies used healthy participants [20,54,62], others used stroke patients as participants [1,22,23,38,39,40].
Support Vector Regression (SVR) was the most used classifier [21,22,40]; it was primarily used in regression problems for clinical assessments where participants were given clinical scores [1,19,20,21,22,30,37,38,40]. A few researchers also employed the Random Forest (RF) technique [19,20,30]. In a few studies, unsupervised machine learning methods utilized the k-means [32] and DBScan [1] algorithms. The work in [32] investigated the Functional Ability Scale (FAS) assessment test using K-means clustering and demonstrated the correlation of the clustering results with the movement quality examined by the FAS. The authors in [45] used hierarchical clustering to define the level of severity based on FMA scores. No study has directly investigated the severity levels of strokes using unsupervised AI-driven models without available labelling.

2.2. Camera-Based Sensors

The summary of studies developed using the camera-based sensors is described in Table 2. The Kinect camera is one of the cameras used in motion capture research [63,64]. The Kinect camera is also combined with the Myo armband [9,65], force-sensing and resistor-sensing [5,35], pressure sensors or gloves sensors [34]. Different supervised machine learning techniques have been used, such as Artificial Neural Networks [33], SVM [34,66], and rule-based classification techniques [5,66]. The Vicon camera (Vicon Motion System Ltd., Oxford, UK) is also used to capture human motion, and the kinematics data can be derived using the Nexus software. However, no study has investigated severity levels of strokes using unsupervised learning and trunk displacement features in the frequency domain in post-stroke smart assessment. In this study, the compensatory movements or trunk displacements were used to label each cluster and then compared with the ground truth FMA scores achieved by clinician experts. The clustering of data using two different datasets (camera and wearable) in the frequency domain using position and linear acceleration features is an additional novelty in, and contribution made by, this paper.

3. Clustering Analysis

Clustering methods fall into different categories based on their properties: for example, hard, soft, distance-based, and density-based clustering. Eight baseline clustering methods are employed: the Fuzzy C-means, K-means, the Self-Organizing Map (SOM), Gaussian Mixture Models, DBScan, and hierarchical, spectral, and OPTICS clustering.

3.1. K-Means Clustering

The K-means method [67] aims to minimize the sum of squared distances between each data point and the centroids of the assigned cluster, as defined in Equation (1).
$$\arg\min_{C,\,\mu} \sum_{i=1}^{N} \sum_{j=1}^{K} [C]_{ij}\, \lVert x_i - \mu_j \rVert^2 \tag{1}$$
Here, N is the number of data points, K is the number of clusters, $[C]_{ij}$ is the (i,j)-th element of the assignment matrix C (it equals 1 if data point i belongs to cluster j, and 0 otherwise), $\mu_j$ is the centroid of the j-th cluster (i.e., the average of all data points assigned to cluster j), and $\lVert x_i - \mu_j \rVert^2$ is the squared Euclidean distance between data point $x_i$ and centroid $\mu_j$.
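As a concrete illustration of Equation (1), the objective can be minimized with Lloyd's algorithm, alternating an assignment step and a centroid-update step. The following is a minimal NumPy sketch; the function name and the farthest-point seeding are our own choices, not part of the paper:

```python
import numpy as np

def kmeans(X, k, iters=100):
    """Lloyd's algorithm for the K-means objective in Eq. (1)."""
    # Farthest-point seeding (our choice; any reasonable seeding works).
    mu = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - m, axis=1) for m in mu], axis=0)
        mu.append(X[d.argmax()])
    mu = np.array(mu)
    for _ in range(iters):
        # Assignment step: each point goes to its nearest centroid.
        d = np.linalg.norm(X[:, None, :] - mu[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: each centroid becomes the mean of its points.
        new_mu = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                           else mu[j] for j in range(k)])
        if np.allclose(new_mu, mu):
            break
        mu = new_mu
    return labels, mu
```

On two well-separated groups of points, the returned labels recover the groups regardless of which integer is assigned to which group.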

3.2. Fuzzy C-Means Clustering

The cluster center “c” is randomly selected and the probability of cluster membership for an ith data point to the jth cluster is calculated thus [67]:
$$\mu_{ij} = \frac{1}{\sum_{k=1}^{c} \left( d_{ij} / d_{ik} \right)^{2/(m-1)}} \tag{2}$$
m is the fuzziness parameter or factor, c denotes the cluster number, and dij indicates the Euclidean distance between the jth cluster center and ith data point. μ i j represents the degree of membership of the ith data to the jth cluster center. After allocating data points to the jth cluster, the center of each cluster is defined thus:
$$v_j = \frac{\sum_{i=1}^{n} \mu_{ij}^{m}\, x_i}{\sum_{i=1}^{n} \mu_{ij}^{m}}, \qquad j = 1, 2, \ldots, c \tag{3}$$
This iteration continues until the following equation is minimal:
$$J(U, V) = \sum_{i=1}^{n} \sum_{j=1}^{c} \mu_{ij}^{m}\, \lVert x_i - v_j \rVert^2 \tag{4}$$
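The three updates above can be sketched in a few lines of NumPy; the initialization scheme and stopping tolerance below are our own assumptions:

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=300):
    """Fuzzy C-means: alternate the membership and centre updates
    until the centres stop moving (which also stops J(U, V) improving)."""
    # Farthest-point seeding of the centres (our choice).
    V = [X[0]]
    for _ in range(c - 1):
        d = np.min([np.linalg.norm(X - v, axis=1) for v in V], axis=0)
        V.append(X[d.argmax()])
    V = np.array(V)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        # Membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1)).
        U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1))).sum(axis=2)
        # Centre update: weighted mean with weights u_ij^m.
        Um = U ** m
        V_new = (Um.T @ X) / Um.sum(axis=0)[:, None]
        if np.allclose(V_new, V, atol=1e-8):
            break
        V = V_new
    return U, V
```

Each row of U sums to 1, and for well-separated data the memberships become nearly binary.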

3.3. SOM Clustering

Let X = {x1, x2, …, xn} be the set of input vectors and W = {w1, w2, …, wm} be the set of weight vectors for the nodes in the grid. The SOM clustering tries to minimize the objective function J(W) (Equation (5)) and update the weights of the nodes in the grid (Equation (6)):
$$J(W) = \sum_{i} \lVert x_i - w_{c_i} \rVert^2 \tag{5}$$
$$w_i(t+1) = w_i(t) + \eta(t)\, h(c_i, t)\, \left( x_i - w_i(t) \right) \tag{6}$$
Here, t is the iteration number, η(t) is the learning rate at iteration t, and h(ci, t) is the neighbourhood function that determines the influence of the input vector xi on the weight vector wi based on the distance between ci and the winning node, and ci is the index of the winning node in the grid for the input vector xi [68,69,70].
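A minimal sketch of the update rule in Equation (6) for a one-dimensional grid follows; the learning-rate and neighbourhood schedules, grid size, and initialization are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def som_train(X, m=4, iters=300, seed=0):
    """1-D SOM: present inputs one at a time and pull the winning node
    and its grid neighbours toward each input, per Eq. (6)."""
    rng = np.random.default_rng(seed)
    # Initialize node weights at randomly chosen data points (our choice).
    W = X[rng.choice(len(X), m, replace=False)].astype(float).copy()
    for t in range(iters):
        x = X[rng.integers(len(X))]
        eta = 0.5 * (1 - t / iters)                 # decaying learning rate
        c = np.linalg.norm(W - x, axis=1).argmin()  # winning node index c_i
        # Gaussian neighbourhood h(c_i, t) over grid distance, shrinking over time.
        width = max(1e-3, 1 - t / iters)
        h = np.exp(-((np.arange(m) - c) ** 2) / (2 * width))
        W += eta * h[:, None] * (x - W)             # Eq. (6)
    return W
```

After training, the nodes spread over the input distribution, so every input is close to some node.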

3.4. Hierarchical Clustering

In hierarchical clustering, the relationship between each cluster is based on hierarchy, and a dendrogram is the output of the included clusters. Initially, this process considers the full dataset as a cluster and then the hierarchical levelling is developed based on a distance matrix. The distance between each pair of data points is calculated and then the closest pair is selected and merged. This process is repeated iteratively until all clusters have been merged/split. The objective of optimization in hierarchical clustering is to find the optimal hierarchy of nested clusters that best represents the underlying structure of the data. The quality of the clustering solution can be evaluated using external or internal validation metrics such as the adjusted Rand index, the silhouette score, or the cophenetic correlation coefficient [71].
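The iterative merge process described above can be sketched directly. This naive agglomerative version uses single linkage (our choice of linkage criterion) and recomputes distances at every step, so it is meant only to illustrate the procedure:

```python
import numpy as np

def agglomerative(X, k):
    """Agglomerative clustering: start with one cluster per point and
    repeatedly merge the closest pair until k clusters remain."""
    clusters = [[i] for i in range(len(X))]
    while len(clusters) > k:
        best, pair = np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Single linkage: distance between the closest members.
                d = min(np.linalg.norm(X[i] - X[j])
                        for i in clusters[a] for j in clusters[b])
                if d < best:
                    best, pair = d, (a, b)
        a, b = pair
        clusters[a] += clusters[b]
        del clusters[b]
    labels = np.empty(len(X), dtype=int)
    for c, members in enumerate(clusters):
        labels[members] = c
    return labels
```

Stopping at k clusters corresponds to cutting the dendrogram at the level that leaves k branches.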

3.5. Spectral Clustering

Spectral clustering is a graph partitioning problem that identifies node neighbourhoods and the edges connecting them based on graph theory. This method involves transforming the data into a new representation using the eigenvalues and eigenvectors of a matrix derived from the data. The process begins by constructing a similarity graph or matrix that captures the pairwise similarity or dissimilarity among each pair of data points. The graph can be constructed in different ways depending on the problem. Still, the most common approach is to use a Gaussian kernel function to measure the similarity between two data points based on their distance. Once the similarity matrix is constructed, the eigenvectors and eigenvalues of the matrix are computed using techniques from linear algebra. The k eigenvectors corresponding to the k smallest eigenvalues are then used to represent the data in a lower-dimensional space. This is conducted by treating the eigenvectors as new coordinates for the data points. Spectral clustering can be computationally expensive and requires the choice of several parameters such as the number of clusters and the similarity measure [72].
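For the two-cluster case, the pipeline above reduces to thresholding the eigenvector of the second-smallest Laplacian eigenvalue (the Fiedler vector). A minimal NumPy sketch, with the Gaussian-kernel bandwidth `sigma` as a free parameter of this illustration:

```python
import numpy as np

def spectral_bipartition(X, sigma=1.0):
    """Spectral clustering sketch for k = 2: Gaussian-kernel similarity
    matrix, graph Laplacian, then a split on the Fiedler vector's sign."""
    # Pairwise squared distances and Gaussian-kernel similarities.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    S = np.exp(-d2 / (2 * sigma ** 2))
    D = np.diag(S.sum(axis=1))
    L = D - S                                   # unnormalised graph Laplacian
    vals, vecs = np.linalg.eigh(L)              # eigenvalues in ascending order
    fiedler = vecs[:, 1]                        # second-smallest eigenvalue's vector
    return (fiedler > 0).astype(int)
```

For k > 2 one would instead cluster the rows of the k leading eigenvectors, e.g. with K-means, as the section describes.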

3.6. Gaussian Mixture Models Clustering

Gaussian Mixture Models (GMM) clustering is a soft clustering method based on probability density estimates that applies the Expectation-Maximization algorithm and creates ellipsoid-shaped clusters. A GMM is composed of several Gaussian distributions, each with its own mean (center) and covariance. The mixing probability, which defines the weight of each Gaussian component, is also determined for GMM clustering; this enables the method to deliver a numerical membership probability for each cluster, in contrast to hard clustering methods such as the K-means method. Given a dataset X = {x₁, x₂, ..., xN} and a GMM with K Gaussian components, the goal is to find the optimal values of the parameters θ = {w₁, ..., wK, μ₁, ..., μK, Σ₁, ..., ΣK}, where wᵢ is the weight of the i-th component, μᵢ is its mean vector, and Σᵢ is its covariance matrix. The optimization problem aims to maximize the log-likelihood of the observed data, as depicted below:
$$LL(\theta) = \sum_{i} \log \left[ \sum_{k} w_k\, \mathcal{N}(x_i \mid \mu_k, \Sigma_k) \right] \tag{7}$$
Here, $\mathcal{N}(x_i \mid \mu_k, \Sigma_k)$ is the Gaussian probability density function with mean $\mu_k$ and covariance matrix $\Sigma_k$ evaluated at data point $x_i$. The Expectation-Maximization (EM) algorithm is used to find the optimal values of these parameters. The algorithm alternates between the E-step and the M-step. In the E-step, the posterior probabilities $\gamma_{ik}$ of each data point $x_i$ belonging to each component k are computed thus:
$$\gamma_{ik} = \frac{w_k\, \mathcal{N}(x_i \mid \mu_k, \Sigma_k)}{\sum_{l} w_l\, \mathcal{N}(x_i \mid \mu_l, \Sigma_l)} \tag{8}$$
Here, the denominator $\sum_{l} w_l\, \mathcal{N}(x_i \mid \mu_l, \Sigma_l)$ is the total probability of data point $x_i$ across all components.
In the M-step, the parameters are updated as follows:
$$w_k = \frac{\sum_{i} \gamma_{ik}}{N} \tag{9}$$
$$\mu_k = \frac{\sum_{i} \gamma_{ik}\, x_i}{\sum_{i} \gamma_{ik}} \tag{10}$$
$$\Sigma_k = \frac{\sum_{i} \gamma_{ik}\, (x_i - \mu_k)(x_i - \mu_k)^T}{\sum_{i} \gamma_{ik}} \tag{11}$$
where N is the total number of data points. The EM algorithm iteratively updates the parameters until convergence is reached. The final set of parameters represents the optimal solution to the GMM clustering problem [73,74].
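The E- and M-step updates above can be sketched as follows. The farthest-point initialization of the means and the small covariance regularizer are our own additions for numerical robustness, not part of the paper:

```python
import numpy as np

def gmm_em(X, K, iters=100):
    """EM for a GMM: E-step responsibilities, then M-step updates of
    the weights, means, and covariances."""
    n, d = X.shape
    # Greedy farthest-point initialisation of the means (our choice).
    mu = [X[0]]
    for _ in range(K - 1):
        dist = np.min([np.linalg.norm(X - m, axis=1) for m in mu], axis=0)
        mu.append(X[dist.argmax()])
    mu = np.array(mu)
    w = np.full(K, 1.0 / K)
    cov = np.array([np.eye(d)] * K)
    for _ in range(iters):
        # E-step: responsibilities gamma_ik, normalised over components.
        dens = np.empty((n, K))
        for k in range(K):
            diff = X - mu[k]
            inv = np.linalg.inv(cov[k])
            norm = 1.0 / np.sqrt(((2 * np.pi) ** d) * np.linalg.det(cov[k]))
            dens[:, k] = w[k] * norm * np.exp(-0.5 * np.sum(diff @ inv * diff, axis=1))
        gamma = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means, covariances.
        Nk = gamma.sum(axis=0)
        w = Nk / n
        mu = (gamma.T @ X) / Nk[:, None]
        for k in range(K):
            diff = X - mu[k]
            cov[k] = (gamma[:, k, None] * diff).T @ diff / Nk[k] + 1e-6 * np.eye(d)
    return w, mu, cov, gamma.argmax(axis=1)
```

A hard clustering is obtained at the end by assigning each point to the component with the largest responsibility.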

3.7. DBScan Clustering

The DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm does not require specifying the number of clusters in advance, and can handle clusters of arbitrary shape. DBSCAN defines two parameters: the minimum number of points (MinPts) required to form a dense region and a distance threshold (eps) that determines the size of the neighborhood around each data point. Let D be the dataset of n data points {x1, x2, ..., xn} and let eps and MinPts be the distance threshold and minimum number of points, respectively. The DBSCAN algorithm optimizes the clustering by finding the following sets of points:
  • Core points: A point x in D is a core point if it has at least MinPts points in its eps-neighborhood, including itself.
  • Border points: A point y in D is a border point if it is not a core point but has at least one core point within its eps-neighborhood.
  • Noise points: A point z in D is a noise point if it is neither a core nor a border point.
The DBSCAN algorithm groups the core points and their border points into clusters. Two core points belong to the same cluster if they are directly or indirectly reachable from each other through a series of core points. DBSCAN tries to maximize the number of points assigned to a cluster while minimizing the number of noise points. The parameters eps and MinPts are tuned to find the optimal clustering [1,75].
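The three point types defined above can be computed directly from the pairwise distance matrix; a small NumPy sketch (the function name is ours, and the eps-neighbourhood includes the point itself, as in the definition):

```python
import numpy as np

def dbscan_point_types(X, eps, min_pts):
    """Classify each point as core, border, or noise per the definitions above."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    neighbours = d <= eps                       # includes the point itself
    # Core: at least min_pts points in the eps-neighbourhood.
    core = neighbours.sum(axis=1) >= min_pts
    # Border: not core, but at least one core point within eps.
    border = ~core & (neighbours & core[None, :]).any(axis=1)
    # Noise: neither core nor border.
    noise = ~core & ~border
    return core, border, noise
```

The full algorithm would then link core points that are mutually reachable and attach border points to their nearest core point's cluster.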

3.8. OPTICS Clustering

OPTICS, which stands for ‘ordering points to identify the clustering structure’, is another density-based clustering method similar to DBScan. However, the reachability plot in OPTICS is an advancement over DBScan. The reachability distance of a point q from a core object p is the maximum of the core distance of p and the distance between p and q. OPTICS uses a priority queue data structure to efficiently order the points based on their densities and to avoid computing pairwise distances between all points, which can be computationally expensive; instead, the priority queue allows the algorithm to consider only the points relevant to the point currently being processed. Another optimization used in OPTICS clustering is a data structure called a core distance tree, which stores the core distances of all points in the dataset. The core distance of a point is the minimum distance at which the point can be considered part of a dense region. The core distance tree efficiently computes the reachability distances between points in the dataset [76].
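The two distances described above can be expressed compactly. This sketch uses the convention that a point's neighbourhood includes the point itself (consistent with the DBSCAN definitions in the previous subsection):

```python
import numpy as np

def core_distance(X, p, eps, min_pts):
    """Core distance of point p: distance to its min_pts-th nearest
    neighbour (counting p itself), i.e. the smallest radius at which
    p becomes a core point; infinite if that radius exceeds eps."""
    d = np.sort(np.linalg.norm(X - X[p], axis=1))
    return d[min_pts - 1] if d[min_pts - 1] <= eps else np.inf

def reachability_distance(X, p, q, eps, min_pts):
    """Reachability of q from core object p: max(core-dist(p), dist(p, q))."""
    cd = core_distance(X, p, eps, min_pts)
    return max(cd, np.linalg.norm(X[p] - X[q]))
```

The reachability plot is then the sequence of these reachability distances in the order OPTICS visits the points; valleys in the plot correspond to clusters.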

4. The Consensus Solvers

Consensus clustering combines multiple clusterings of the same data set into a single consensus clustering solution, in order to obtain a more stable and robust solution that captures the common structure across the different clusterings.
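A common representation of this agreement is the co-association matrix, which averages the per-clustering connectivity matrices; a minimal sketch (integer-coded label vectors are assumed):

```python
import numpy as np

def coassociation_matrix(labelings):
    """Co-association (consensus) matrix: entry (i, j) is the fraction of
    the input clusterings that place points i and j in the same cluster."""
    labelings = np.asarray(labelings)           # shape (T, n): T clusterings
    T, n = labelings.shape
    CM = np.zeros((n, n))
    for lab in labelings:
        # Connectivity matrix of one clustering: 1 where labels agree.
        CM += (lab[:, None] == lab[None, :]).astype(float)
    return CM / T
```

The consensus solvers in the following subsections differ in how they turn this pairwise-agreement information into a single final clustering.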

4.1. Meta-Clustering Algorithm (MCLA) Consensus Solver

The Meta-Clustering Algorithm (MCLA) is centered around clustering clusters and provides confidence estimates for object membership within clusters. The performance of MCLA depends on the choice of clustering algorithms, the combination method, and the quality of the individual clusterings. Its first goal is to transform the provided cluster label vectors into a hypergraph representation appropriate for subsequent analysis. Any set of clusterings can be mapped to a hypergraph composed of vertices and hyperedges. The hyperedge is a matrix of binary membership indicators. Each column represents a cluster, and the rows corresponding to objects with unknown labels are populated with zeros in the indicator matrix while 1 denotes an object with a known label. As a result, each cluster can be mapped to a hyperedge, and the set of clusterings can be represented as a hypergraph. For the MCLA, clusters are represented as hyperedges. The MCLA aims to group and merge related hyperedges, assigning objects to the resulting collapsed hyperedge based on their strongest participation. A graph-based clustering approach achieves the determination of related hyperedges for collapsing.
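The binary membership-indicator construction described above can be sketched as follows; encoding unknown labels as -1 is an assumption of this sketch, not the paper's notation:

```python
import numpy as np

def hypergraph_indicator(labelings):
    """Hyperedge indicator matrix: one column per cluster across all
    clusterings, one row per object; an entry of 1 marks membership in
    that cluster. Objects with unknown labels (-1) get all-zero entries."""
    cols = []
    for lab in np.asarray(labelings):
        for c in np.unique(lab[lab >= 0]):
            cols.append((lab == c).astype(int))
    return np.column_stack(cols)
```

MCLA then clusters the columns of this matrix (the hyperedges), collapses each group of related hyperedges, and assigns every object to the collapsed hyperedge in which it participates most strongly.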

4.2. HyperGraph Partitioning Algorithm (HGPA) Consensus Solver

The HyperGraph Partitioning Algorithm (HGPA) directly repartitions the data by leveraging the existing clusters as indicators of strong associations. The problem can be expressed as partitioning a hypergraph by removing the smallest number of hyperedges. In the HGPA, the objective is to partition the hypergraph into a set of l disjoint subgraphs (called partitions) such that some criterion is optimized. The HGPA works by constructing a weighted hypergraph in which each vertex represents a hyperedge and the weight of each vertex is proportional to the cost of assigning it to a specific partition [77].

4.3. Cluster-based Similarity Partitioning Algorithm (CSPA) Consensus Solver

The CSPA partitions a given dataset into distinct clusters by analyzing the similarities between the objects in the dataset. The algorithm operates by building a similarity matrix that evaluates the similarities between pairs of objects in the dataset. The choice of similarity measure can be any distance metric or appropriate similarity measure for the data. After constructing the similarity matrix, the algorithm initially groups clusters by clustering objects that display high pairwise similarities. Subsequently, this initial set of clusters is improved using a hierarchical clustering algorithm that iteratively combines pairs of clusters with high similarities until the desired number of clusters is obtained. When merging the clusters, the algorithm leverages a cluster similarity measure to determine which pairs of clusters should be combined in situations where the weights are proportionate to the sizes of the clusters [77].

4.4. Hybrid Bipartite Graph Formulation (HBGF) Consensus Solver

The Hybrid Bipartite Graph Formulation (HBGF) creates a bipartite graph of vertices and edges. An edge only connects an instance vertex to a cluster vertex if that instance belongs to that cluster. New cluster vertices are added if a fresh clustering is incorporated into the ensemble, with these vertices linked to the instances they contain. Instances are illustrated by round vertices and clusters by diamond-shaped vertices. Each of the graph’s edges possesses a single weight, and any edges with zero weight are excluded. The bipartite graph is partitioned into non-overlapping clusters using a partitioning algorithm such as spectral clustering or the k-means method. The goal of the partitioning algorithm is to minimize the number of cut edges between clusters while balancing the weights of the vertices [78]. Figure 1 presents the HBGF consensus clustering using two individual clusterings: K-means and Fuzzy C-means clustering.

5. The Proposed Post-Stroke Severity Assessment Model Using the Modified NMF Consensus Solver (PSA-MNMF)

The proposed consensus clustering algorithm, PSA-MNMF, combines various clusterings into one united clustering, i.e., a cluster consensus, to produce more stable and robust results than any individual clustering method. We developed a modified nonnegative matrix factorization [79] (MNMF) as an enhanced consensus solver that factorizes the consensus matrix into two nonnegative matrices representing the underlying structure of the data. In our algorithm, once the MNMF phase is completed, an exhaustive search is executed to find the optimal combinations of baseline clusterings for the consensus, yielding robust results. Each of these methodologies is described next.

5.1. The Modified Nonnegative Matrix Factorization (MNMF) Consensus Solver

Assume $A = \{a_1, \ldots, a_n\}$ is a data set containing n data points, and let $B = \{b_1, b_2, \ldots, b_T\}$ be a set of T partitions of A. The set of clusters in the t-th partition $b_t$ is $D^t = \{D_1^t, D_2^t, \ldots, D_m^t\}$, where m denotes the number of clusters in partition $b_t$; the number of clusters m can differ for each partition, and $A = \bigcup_{h=1}^{m} D_h^t$. Now, the distance between two partitions $b_1$ and $b_2$ can be defined based on the MNMF solver as follows:
$$f(b_1, b_2) = \sum_{i,j=1}^{n} f_{ij}(b_1, b_2) \tag{12}$$
For each element,
$$f_{ij}(b_1, b_2) = \begin{cases} 1, & (i,j) \in D_k(b_1) \text{ and } (i,j) \notin D_k(b_2) \\ 1, & (i,j) \notin D_k(b_1) \text{ and } (i,j) \in D_k(b_2) \\ 0, & \text{otherwise} \end{cases} \tag{13}$$
where $(i,j) \in D_k(b_t)$ denotes that data points i and j belong to the same cluster in partition $b_t$, and $(i,j) \notin D_k(b_t)$ denotes that they belong to different clusters. Then, one approach can be to define the connectivity matrix as follows:
$$CM_{ij}(b_t) = \begin{cases} 1, & (i,j) \in D_k(b_t) \\ 0, & (i,j) \notin D_k(b_t) \end{cases} \tag{14}$$
The fij (b1,b2) values are defined according to the connectivity matrix as follows:
$$f_{ij}(b_1, b_2) = \left| CM_{ij}(b_1) - CM_{ij}(b_2) \right| = \left[ CM_{ij}(b_1) - CM_{ij}(b_2) \right]^2 \tag{15}$$
The value of $f_{ij}(b_1, b_2)$ is 0 or 1. The consensus clustering $b^*$ can be derived thus:
$$\min_{b^*} \frac{1}{T} \sum_{t=1}^{T} f(b_t, b^*) = \frac{1}{T} \sum_{t=1}^{T} \sum_{i,k=1}^{n} \left[ CM_{ik}(b_t) - CM_{ik}(b^*) \right]^2 \tag{16}$$
Assuming that the solution of the optimization problem above is $U_{ik} = CM_{ik}(b^*)$, the average consensus association between i and k is as follows:
$$\widetilde{CM}_{ik} = \frac{1}{T} \sum_{t=1}^{T} CM_{ik}(b_t) \tag{17}$$
The average squared difference from the consensus association is as follows:
$$\Delta CM^2 = \frac{1}{T} \sum_{t} \sum_{i,k} \left[ CM_{ik}(b_t) - \widetilde{CM}_{ik} \right]^2 \tag{18}$$
The smaller $\Delta CM^2$ is, the closer the partitions are to each other. $\Delta CM^2$ is constant with respect to the consensus partition, and therefore,
$$G = \frac{1}{T} \sum_{t} \sum_{i,k} \left( CM_{ik}(b_t) - \widetilde{CM}_{ik} + \widetilde{CM}_{ik} - U_{ik} \right)^2 = \Delta CM^2 + \sum_{i,k} \left( \widetilde{CM}_{ik} - U_{ik} \right)^2 = \Delta CM^2 + \lVert \widetilde{CM} - U \rVert^2 \tag{19}$$
Then, the optimization problem for consensus clustering is expressed as follows:
$$\min_{U} \lVert \widetilde{CM} - U \rVert^2 \tag{20}$$
This norm is called the Frobenius norm, and the formulation indicates that consensus clustering amounts to approximating the average association matrix with the connectivity matrix of a single clustering. Now, the next task is to define an NMF clustering solver. The nonnegative matrix factorization (NMF) solver factorizes nonnegative data A into two nonnegative matrices, P and Q (A ≈ PQ). For the NMF formulation, a clustering indicator matrix N ∈ {0,1}ⁿˣᵏ can represent a clustering solution; by definition, each row of this matrix contains exactly one “1”, and the remaining elements are zeros.
$$U = NN^T, \quad \text{i.e.,} \quad U_{ik} = (NN^T)_{ik}$$
If i and k belong to the same cluster, then $(NN^T)_{ik} = 1$; otherwise, $(NN^T)_{ik} = 0$. With the constraint $U = NN^T$, the consensus clustering is optimized as follows:
$$\min_{N} \lVert \widetilde{CM} - NN^T \rVert^2 \tag{21}$$
Note that $(N^T N)_{jl} = 0$ when j ≠ l, since no data point belongs to two clusters, and that $(N^T N)_{jj} = |C_j| = r_j$, where $C_j$ denotes cluster j; thus, $N^T N$ contains only the non-zero values $r_j$ on its diagonal. Let $L = \mathrm{diag}(N^T N) = \mathrm{diag}(r_1, r_2, \ldots, r_k)$, so that $N^T N = L$. Then, the optimization problem for this case can be written as follows:
$$\min_{N^T N = L,\; N \geq 0} \lVert \widetilde{CM} - NN^T \rVert^2 \tag{22}$$
This optimization is easier to solve than Equation (21). However, it requires the cluster sizes to be defined; therefore, the cluster sizes should be eliminated:
$$\widetilde{N} = N (N^T N)^{-\frac{1}{2}} \tag{23}$$
$$NN^T = \widetilde{N} L \widetilde{N}^T, \qquad \widetilde{N}^T \widetilde{N} = (N^T N)^{-\frac{1}{2}} N^T N (N^T N)^{-\frac{1}{2}} = I$$
Then, the optimization equation is as depicted:

$$\min_{\tilde{N}^T \tilde{N} = I,\; L \text{ diagonal},\; \tilde{N} \ge 0} \| \widetilde{CM} - \tilde{N} L \tilde{N}^T \|^2$$
Both $\tilde{N}$ and $L$ are obtained as solutions to this problem. This formulation demonstrates that consensus clustering can be expressed as a symmetric nonnegative matrix factorization. The optimization equation of the MNMF is as follows:

$$\min_{\tilde{N} \ge 0,\; L \ge 0} \| \widetilde{CM} - \tilde{N} L \tilde{N}^T \|^2$$
Given $\tilde{N}^T \tilde{N} = I$, at each iteration the factors are updated as follows:

$$N_{oi} \leftarrow N_{oi} \sqrt{ \frac{ (\widetilde{CM}\, N L)_{oi} }{ (N N^T \widetilde{CM}\, N L)_{oi} } }$$

$$L_{ie} \leftarrow L_{ie} \sqrt{ \frac{ (N^T \widetilde{CM}\, N)_{ie} }{ (N^T N\, L\, N^T N)_{ie} } }$$
In conclusion, the above presents the formulation of the algorithm of [78]. In [78], $L$ was restricted to being a diagonal matrix; in the formulation above, $L$ is not restricted to being diagonal. However, if $L$ becomes diagonal at some iteration, it remains diagonal in all subsequent iterations.
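As an illustration, the multiplicative updates above can be sketched in a few lines of NumPy. This is a minimal sketch under our own naming (the function `mnmf_consensus`, its defaults, and the random initialization are not from the paper), assuming $\widetilde{CM}$ is a symmetric nonnegative consensus matrix:

```python
import numpy as np

def mnmf_consensus(cm, k, n_iter=200, eps=1e-9, seed=0):
    """Approximate cm ~= N @ L @ N.T with N >= 0, L >= 0 via the
    multiplicative updates of the symmetric-NMF consensus solver."""
    rng = np.random.default_rng(seed)
    n = cm.shape[0]
    N = rng.random((n, k))          # nonnegative factor, n x k
    L = np.diag(rng.random(k))      # initialized diagonal, k x k
    for _ in range(n_iter):
        # N <- N * sqrt( (CM N L) / (N N^T CM N L) )
        num = cm @ N @ L
        den = N @ (N.T @ cm @ N @ L) + eps
        N *= np.sqrt(num / den)
        # L <- L * sqrt( (N^T CM N) / (N^T N L N^T N) )
        num_L = N.T @ cm @ N
        den_L = (N.T @ N) @ L @ (N.T @ N) + eps
        L *= np.sqrt(num_L / den_L)
    labels = N.argmax(axis=1)       # hard cluster assignment per point
    return N, L, labels
```

Note that because the off-diagonal entries of L start at zero, the elementwise update keeps them at zero, matching the remark above that a diagonal L remains diagonal.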

5.2. Exhaustive Search

An exhaustive search phase was conducted in this study to evaluate every potential combination of baseline clustering methods within the consensus solver. The model ran 100 times to reduce the variance, and on each run the 10 best-performing combinations (quantified by a chosen assessment measure, such as F-score) were recorded. The combination that appeared most frequently among these top-10 lists was selected and reported in the results.
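The search over combinations can be sketched as follows. This is a hypothetical helper (names are ours, not from the paper); `score_fn` stands in for whatever assessment measure, such as F-score, is used to rank a combination of baseline clusterings:

```python
from itertools import combinations

def exhaustive_search(label_sets, score_fn, top_k=10):
    """Score every subset (size >= 2) of baseline clustering results and
    return the top_k (score, combination) pairs, best first.
    label_sets: dict mapping method name -> label vector.
    score_fn:   callable taking a list of label vectors -> quality score."""
    scored = []
    names = sorted(label_sets)
    for r in range(2, len(names) + 1):
        for combo in combinations(names, r):
            score = score_fn([label_sets[m] for m in combo])
            scored.append((score, combo))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:top_k]
```

In the paper's setup, this loop would be wrapped in the 100 repeated runs, with the most frequent top-10 combination reported.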

5.3. THE PSA-MNMF Consensus Clustering Algorithm

The PSA-MNMF consensus clustering algorithm takes as input the data set and a set of clustering algorithms used to generate the individual clusterings. First, the baseline clustering algorithms are applied to the input data set to generate a set of individual clusterings. Each clustering algorithm may produce a different result due to its unique assumptions, parameters, and randomness. A consensus matrix is then constructed from the similarity scores between the individual clusterings; it is a square matrix representing the degree of agreement between each pair of data points in the input data set. The consensus matrix is then factorized into two nonnegative matrices, W and H, using the MNMF. The factorization can be expressed as C ≈ W·H, where C is the consensus matrix, W is the cluster assignment matrix, and H is the consensus centroid matrix. The rows of the consensus centroid matrix are used as input to a chosen clustering algorithm to obtain a final consensus clustering solution; this algorithm may differ from the individual baseline clustering algorithms. Finally, each data point is assigned to the corresponding consensus cluster. Algorithm 1 summarizes the proposed PSA-MNMF algorithm and Figure 2 describes the consensus part of the proposed method.
Algorithm 1: PSA-MNMF
Input: Dataset A = {a1, …, an}; a set of partitions B = {b1, b2, …, bt} such that each partition bt consists of a set of clusters Dt = {d1t, d2t, …, dkt} produced by a selected clustering methodology.
Output: The set H of heterogeneous clusterings comprising the 10 combinations with the highest F-scores (or performance metrics α) that appeared across all 100 runs of the exhaustive search method.
 Initialization: 
Calculate X-cluster = {the results of each baseline clustering}.
Initialize H = {}. Define the connectivity matrix CM as follows:

$$CM_{ij}(b_t) = \begin{cases} 1 & (i,j) \in D_k(b_t) \\ 0 & (i,j) \notin D_k(b_t) \end{cases}$$

Define a matrix $N_{n \times k}$ such that each row contains exactly one "1" and all remaining entries are zero. Calculate $NN^T$: if $i$ and $k$ belong to the same cluster, $(NN^T)_{ik} = 1$; otherwise it equals zero. Define L as $L = N^T N$.
  Begin 
Step 1: The optimization equation for PSA-MNMF is calculated using $\min_{\tilde{N},\, L \ge 0} \| \widetilde{CM} - \tilde{N} L \tilde{N}^T \|^2$, where $\tilde{N}^T \tilde{N} = I$.
Step 2: At each iteration, N is updated using $N_{oi} \leftarrow N_{oi} \sqrt{ (\widetilde{CM}\, N L)_{oi} / (N N^T \widetilde{CM}\, N L)_{oi} }$.
Step 3: At each iteration, L is updated using $L_{ie} \leftarrow L_{ie} \sqrt{ (N^T \widetilde{CM}\, N)_{ie} / (N^T N\, L\, N^T N)_{ie} }$.
Step 4: Steps 1–3 are repeated 100 times.
Step 5: The exhaustive search finds the best performance metric from among the top 10 recorded combinations.
Step 6: The final consensus clustering solution assigns each data point in the input data set to a consensus cluster.
Step 7: The algorithm returns H and the performance metrics α (including F-score, accuracy, precision, and recall).
End
The proposed MNMF-based consensus clustering has several advantages over traditional consensus clustering methods. The MNMF can handle high-dimensional data sets and extract meaningful features that capture common structures across different clusterings. Moreover, the MNMF can provide a more interpretable clustering solution as it explicitly separates the cluster assignments from the consensus centroids. The PSA-MNMF is the first contribution toward post-stroke severity assessment that provides robust results using the position data and acceleration data in the frequency domain.
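A common way to realize the consensus matrix described above is the co-association matrix, in which entry (i, j) is the fraction of base partitions that place points i and j in the same cluster. A minimal sketch (the helper name is ours, not from the paper):

```python
import numpy as np

def coassociation_matrix(partitions):
    """Build the n x n consensus matrix from T base partitions.
    partitions: iterable of T label vectors, each of length n."""
    P = np.asarray(partitions)                  # shape (T, n)
    T, n = P.shape
    cm = np.zeros((n, n))
    for labels in P:
        # 1 where points i and j share a cluster in this partition
        cm += (labels[:, None] == labels[None, :]).astype(float)
    return cm / T                               # average agreement
```

The resulting symmetric nonnegative matrix is exactly the kind of input the MNMF factorization step expects.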

6. Data, Materials, and Methods

Generally, post-stroke motion datasets are rare among open-source datasets. The U-limb datasets, published in 2021, consist of 65 post-stroke and 91 healthy subjects whose data were collected in different clinical settings using the same protocol [80]. In this study, to deploy an unsupervised learning method, the data collected from stroke patients using wearable sensors and camera-based sensors were taken from the U-limb datasets. A research group at the University of Zurich (UZH) [81] deployed a 17-IMU sensor system, collected using the Xsens suite (Awinda, Xsens Technologies B.V., Enschede, The Netherlands), for 20 stroke patients. Each IMU included a 3D magnetometer, a 3D accelerometer, and a 3D gyroscope. Table 3 describes the participants’ characteristics. The FMA-UE score for the patients, 46.00 ± 10.16, fell in the moderate and mild categories according to [45]. Both affected and non-affected hands were used for this study. The dataset selected for this research is open source. The mean age of the participants was 61.00 ± 10.69 years; there were 5 females and 15 males. Eleven right hands and nine left hands were affected, and only one person was known to be left-hand dominant. The four grasping-action activities chosen for this research are described in Table 3. The wearable sensor-based dataset is referred to as dataset-1 and the camera-based dataset as dataset-2.
The Hannover Medical School (MHH) research group collected position data from healthy participants and stroke patients using motion capture technology. The system comprised 12 Vicon MX cameras (Vicon Motion Systems Ltd., Oxford, UK) operated with Version 1.8.5 of the Nexus software. Twenty-one passive markers were attached to the upper body (thorax, upper arm, and forearm) to capture arm movements. The number of stroke patients participating in this research was 20—12 male and 6 female—and their mean age was 49.88 ± 16.92 years. The FMA-UE score for this group was 17.75 ± 2.05; since this is below 29, it falls within the severe category [45]. The study captured only the affected hand of each stroke patient. Twenty healthy participants were also included, 12 of them male. The mean age of the healthy group was 46.77 ± 15.25 years. The dominant hand was tested in healthy participants, and 2 of the participants were left-handed. Each participant repeated the four tasks three times, the same as in the sensor dataset. This research group was selected for our study because the same experimental protocol had been employed as with the UZH group. Table 4 presents the characteristics of both the wearable and camera-based systems.

7. Data Preprocessing

The camera-based position data were collected at a 200 Hz sampling rate. The position time series of the camera data was filtered using a low-pass second-order Butterworth filter with a cut-off frequency of 20 Hz to attenuate high-frequency noise components not produced by human movement. The wearable sensor data were collected at a sampling frequency of 60 Hz, and a low-pass second-order Butterworth filter with a cut-off frequency of 10 Hz was applied. This section separately describes the wearable sensor dataset (dataset-1) and the camera-based dataset (dataset-2). Figure 3 describes the general preprocessing steps used to obtain the frequency-domain position data for the camera system and wearable sensor datasets.
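The filtering step can be reproduced with SciPy. The sketch below uses a zero-phase (forward–backward) second-order Butterworth filter, which is one common implementation choice; the paper does not state whether zero-phase filtering was used:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass(signal, cutoff_hz, fs_hz, order=2):
    """Low-pass Butterworth filter (e.g., 20 Hz cutoff at fs = 200 Hz
    for the camera data, 10 Hz at fs = 60 Hz for the wearable data)."""
    b, a = butter(order, cutoff_hz / (fs_hz / 2), btype="low")
    return filtfilt(b, a, signal)  # filtfilt applies zero-phase filtering
```

For example, `lowpass(position_xyz[:, 0], 20, 200)` would denoise one camera position channel.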

7.1. Wearable Sensors (Dataset 1)

The 3D positions (x, y, z) of five major upper limb segments—the hand, shoulder, upper arm, forearm, and sternum (T8)—were selected for this research. Therefore, 5 segments with 3D predictor variables (i.e., 15 features) were used for each side of the body. The linear acceleration data were derived from the position data, and both the position and acceleration data were tested in the frequency domain. Following an earlier study [83], the formula below (Equation (30)) was used to reduce the 3D signal to 1D at each time step and make it independent of sensor orientation. Then, the mean value was derived for the acceleration data. In the equation below, X, Y, and Z are the three acceleration components at each step.
$$\text{Acceleration at each step} = \sqrt{X^2 + Y^2 + Z^2}$$
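A sketch of this step in NumPy, assuming the acceleration is obtained by numerically differentiating the position twice (the paper does not specify the differentiation scheme, so this is an illustrative assumption):

```python
import numpy as np

def acceleration_magnitude(pos, fs_hz):
    """Derive linear acceleration from a 3D position series (n, 3) and
    reduce it to an orientation-independent 1D magnitude per sample."""
    dt = 1.0 / fs_hz
    vel = np.gradient(pos, dt, axis=0)     # (n, 3) velocity
    acc = np.gradient(vel, dt, axis=0)     # (n, 3) acceleration
    return np.sqrt((acc ** 2).sum(axis=1)) # sqrt(X^2 + Y^2 + Z^2)
```

The mean of this 1D magnitude over a movement then gives the per-step acceleration feature described above.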

7.2. Camera-Based Sensors (Dataset-2)

The 3D positions from 11 markers at the wrist, ulnar bone, humerus bone, scapula, and trunk were collected from the camera-based dataset. Nine markers with 3 dimensions (x, y, z) were used for feature selection; therefore, 27 features were used for the camera dataset. The 4 markers on the trunk were used to define the trunk displacements. As with the wearable sensor dataset, the acceleration was derived from the position data using the same formula, and the linear acceleration and position data in the frequency domain were tested.

7.3. Trunk Displacement Measurement

Measurements were made according to an earlier study [84], using T8 from the wearable sensors and the average of the 4 trunk sensors from the camera-based system. Trunk displacement was specified by differences in the position and orientation of the sensor located at the sternum [85]. The mean of the first 10 data points was subtracted from the position data at each step in all x, y, and z directions. The following equation was used to obtain one value for each step:
$$\text{Trunk Displacement} = TD_x + TD_y + TD_z$$
Here, $TD_x$ is the trunk displacement in the x direction (forward) at each step, and $TD_y$ and $TD_z$ are the trunk displacements in the y and z directions, respectively. According to the literature [85,86,87,88], trunk movements are compensatory movements in stroke patients while performing tasks. The label for each cluster was assigned according to trunk displacement. For the camera-based dataset, the 4 markers located on the trunk were selected; the displacement of each marker was calculated according to the above-mentioned method, and the average over these 4 markers was used as the final trunk displacement to label each cluster.
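The trunk displacement computation can be sketched as follows (the helper name is ours; for the camera-based dataset this would be applied to each of the 4 trunk markers and the results averaged):

```python
import numpy as np

def trunk_displacement(trunk_pos):
    """Per-sample trunk displacement from a (n, 3) position series:
    subtract the mean of the first 10 samples (rest posture) in each
    axis, then sum the per-axis displacements (TDx + TDy + TDz)."""
    baseline = trunk_pos[:10].mean(axis=0)   # (3,) resting position
    td = trunk_pos - baseline                # (n, 3) displacement
    return td.sum(axis=1)                    # (n,) 1D series
```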

7.4. Data Labeling

For stroke patients, extreme trunk displacement is a common motor compensation [89]; stroke survivors use trunk displacement as a compensatory movement in activities of daily living [82,90]. Therefore, in a novel labelling method, trunk displacement was used to label each cluster: the greater the displacement, the more severe the stroke level. The cluster with the lowest average displacement was labelled as the healthiest or mildest level. This trunk-displacement labelling method is a novel contribution of this paper. Each labelling result derived from each clustering was compared with the ground-truth FMA score for each patient.
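The labelling rule can be sketched as follows: clusters are ranked by their mean trunk displacement, with the lowest-displacement cluster labelled the mildest (the function name and integer rank encoding are ours, for illustration only):

```python
import numpy as np

def label_clusters_by_displacement(cluster_ids, trunk_disp):
    """Map cluster ids to severity ranks: rank 0 = lowest mean trunk
    displacement (mildest/healthiest), higher ranks = more severe."""
    ids = np.asarray(cluster_ids)
    disp = np.asarray(trunk_disp, dtype=float)
    clusters = np.unique(ids)
    means = np.array([disp[ids == c].mean() for c in clusters])
    order = clusters[np.argsort(means)]        # mild -> severe
    severity = {c: rank for rank, c in enumerate(order)}
    return np.array([severity[c] for c in ids])
```

The resulting severity labels can then be compared against FMA-based ground truth to compute accuracy, precision, recall, and F-score.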
The visualization summary of step-by-step methods employed in this paper is presented in Figure 4.

8. Experimental Analysis and Results

This section provides the experimental results and analysis of the proposed PSA-MNMF algorithm as well as of the baseline individual and consensus methods. Eight baseline clustering methods were employed: Fuzzy C-means, K-means, Self-Organizing Map (SOM), Gaussian Mixture Model, DBSCAN, hierarchical, spectral, and OPTICS clustering. The results of the MCLA consensus solver are also reported for comparison with the proposed PSA-MNMF. Accuracy, recall, precision, and F-score are reported. In this section, we used two datasets: the wearable sensor-based dataset-1 and the camera-based dataset-2. We investigated the clustering results using combinations of position and acceleration in the frequency domain. In this paper, for the number of clusters k, k = 2 denotes ‘severe’ and ‘non-severe’, and k = 3 denotes ‘severe,’ ‘mild,’ and ‘non-severe.’
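Most of the baseline methods are available in scikit-learn. The sketch below runs them on a feature matrix X; the hyperparameters shown are illustrative defaults, not the paper's settings, and Fuzzy C-means and SOM require additional packages, so they are omitted here:

```python
from sklearn.cluster import (KMeans, DBSCAN, AgglomerativeClustering,
                             SpectralClustering, OPTICS)
from sklearn.mixture import GaussianMixture

def run_baselines(X, k):
    """Run scikit-learn baseline clusterings and collect one label
    vector per method (density-based methods ignore k by design)."""
    return {
        "kmeans": KMeans(n_clusters=k, n_init=10,
                         random_state=0).fit_predict(X),
        "gmm": GaussianMixture(n_components=k,
                               random_state=0).fit_predict(X),
        "hierarchical": AgglomerativeClustering(n_clusters=k).fit_predict(X),
        "spectral": SpectralClustering(n_clusters=k,
                                       random_state=0).fit_predict(X),
        "dbscan": DBSCAN(eps=0.5, min_samples=5).fit_predict(X),
        "optics": OPTICS(min_samples=5).fit_predict(X),
    }
```

The returned label vectors are exactly the individual partitions that the consensus step combines.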

8.1. The Averaged Normalized Mutual Information (ANMI)

If no prior information is available regarding the relative importance of the individual groupings, an appropriate objective for the consensus solution is to identify a clustering that maximizes the information shared with the original clusterings. Thus, to justify the choice of the NMF as the baseline solver for our proposed model, we conducted a comparative analysis using Averaged Normalized Mutual Information (ANMI) across five consensus clustering solvers: HGPA, MCLA, HBGF, CSPA, and NMF [77,91,92]. Mutual information, a symmetric measure that quantifies the statistical information shared between two distributions [93], serves as a reliable indicator of the information shared between a pair of clusterings. Averaged normalized mutual information measures the amount of information that two splits (clusters and class labels) share, regardless of the number of clusters, indicating how much can be inferred about one split when the other is known. Since the measure is normalized, values closer to 1 indicate better consensus clustering performance. The ANMI results of the NMF, MCLA, CSPA, HGPA, and HBGF solvers are reported in Figure 5 and Figure 6 for dataset-1 and dataset-2, respectively. The best baseline consensus solution for both dataset-1 and dataset-2 was the NMF solver. Thus, the NMF solver was selected as the base consensus model of our proposed method, with the modified objective described in Section 5.
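ANMI can be computed directly from scikit-learn's NMI implementation; a minimal sketch (the helper name is ours):

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def anmi(consensus_labels, base_partitions):
    """Averaged Normalized Mutual Information between one consensus
    partition and all base partitions; closer to 1 = more shared
    information with the original clusterings."""
    return float(np.mean([
        normalized_mutual_info_score(consensus_labels, p)
        for p in base_partitions
    ]))
```

Because NMI is invariant to label permutations, a consensus partition that matches every base partition up to relabelling scores an ANMI of 1.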

8.1.1. Performance Evaluation: Wearable Sensors (Dataset-1)

For k = 2 and k = 3, each algorithm ran 100 times to reduce the variance of the results. For k = 2, the computational time was about 0.38 s for each clustering run and approximately 95.35 s for the consensus clustering run; for k = 3, it was 0.215 s per clustering run and approximately 82.94 s for the consensus clustering run. The cut-off frequency for the filter was set to 10 Hz for both k = 2 and k = 3. Table 5, Table 6 and Table 7 present, for k = 2, the dataset-1 results for the position data, the acceleration data, and the merged acceleration and position data in the frequency domain, respectively. Similarly, the results for k = 3 are shown in Table 8, Table 9 and Table 10. Table 5 and Table 8 present the performance results for the position data of the wearable sensors in the frequency domain. Table 6 and Table 9 present the results for the acceleration data of the wearable sensors in the frequency domain. Table 7 and Table 10 show the merged acceleration and position data in the frequency domain for the wearable sensor data.
As shown in Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10, using a two-level and three-level assessment, the proposed PSA-MNMF showed the best performance compared to the baseline clustering methods. Figure 7 demonstrates the comparisons of position, acceleration, and merging of position and acceleration in frequency domains between the proposed PSA-MNMF and the MCLA consensus methods for k = 2 and k = 3. We can note that the proposed PSA-MNMF outperformed the MCLA by achieving higher accuracy, precision, recall, and F-score, making it suitable for clinical settings.

8.1.2. Performance Evaluation: Camera-Based Data (Dataset 2)

For the camera-based dataset-2, all 8 algorithms were implemented and compared for k = 2 and k = 3 (where k is the number of clusters). Each clustering ran 100 times to reduce the variance of the results. For k = 2, the computational time was about 0.33 s for each clustering run and approximately 88.08 s for the consensus clustering run; for k = 3, it was about 0.416 s per clustering run and approximately 73.88 s for the consensus clustering run. Table 11, Table 12 and Table 13 present the k = 2 results for dataset-2 on the position data, the acceleration data, and the merged acceleration and position data in the frequency domain, respectively. As shown in Table 11, Table 12, Table 13, Table 14, Table 15 and Table 16, using a two-level and three-level assessment, the proposed PSA-MNMF demonstrated the best performance compared to the baseline clustering methods. We can note from Figure 8 that the proposed PSA-MNMF outperformed the MCLA by achieving higher accuracy, precision, recall, and F-score, thus making it suitable for clinical settings.

9. Discussion

This study aimed to determine the severity levels of strokes, evaluate the functions of the affected hands in post-stroke patients, and ultimately automate post-stroke assessment. Analyzing a stroke dataset requires advanced sensors capable of capturing stroke movements. One of the strengths of this study was that it integrated two different sensor technologies—wearable sensor-based and camera-based—whose data were collected using the same protocol and the same tasks. Both datasets focused on the upper limb in stroke patients and healthy participants while they performed daily living activities. We have proposed a method to estimate stroke survivors’ severity levels—more specifically, to examine the functionality levels of affected hands in stroke patients—by deploying unsupervised learning for the first time to cluster the severity levels of stroke patients. Most studies utilize one to three clustering methods; to make our results more robust, 8 clustering methods were implemented in this research. An innovative approach, the PSA-MNMF clustering method, which combines all 8 clustering methods, was proposed. This method was compared with the individual clusterings as well as with other consensus clustering algorithms such as MCLA. The proposed consensus clustering method offered more robust and consistent results compared to individual clustering and enhanced the performance measurements compared to other methods. In addition, it must be noted that, according to the literature, trunk displacement is one of the compensatory movements most frequently exhibited by stroke patients. With this knowledge, we proposed a novel labelling approach in which trunk displacement was used to label the severity level of each patient. The frequency-domain position and acceleration data, as well as their merged datasets, were used as part of a unique approach for both datasets in this study.
All of the included clustering and consensus clustering models ran 100 times to reduce the variance. The exhaustive search method was applied to find the best combination of clusterings, and the best 10 results among all possible combinations were selected and reported. The 2-cluster as well as the 3-cluster results were reported for both datasets. The clustering results obtained using the trunk displacement method were compared with the FMA-UE scores reported in the open-source datasets using the standard method, i.e., examination by clinical experts. Then, accuracy, precision, recall, and F-score were reported by comparing the ground truths derived from the FMA-UE scores with the clustering labels. The results indicated that for all 12 models (for dataset-1 with 2 clusters: position in the frequency domain, acceleration in the frequency domain, and the merged position and acceleration data in the frequency domain; and similarly for 3 clusters and for dataset-2), the PSA-MNMF algorithm presented the highest performance measurements in the form of higher accuracy, precision, recall, and F-score when compared to the individual clusterings as well as to the MCLA solver. After the proposed PSA-MNMF method, the MCLA solver, which was the ensemble clustering of all the individual methods, demonstrated the highest performance compared to the individual clustering results. The results derived from dataset-1 (which used wearable sensors) showed higher performance compared to the camera-based system; this could be because the camera-based dataset appeared to contain more noise than the wearable sensor dataset and required more preprocessing. Additionally, the combination of the two features, i.e., position in the frequency domain and acceleration in the frequency domain, did not show any significant changes.
The consensus clustering results agreed with the hypothesis that combining several clusterings would enhance the final output. The proposed PSA-MNMF consensus solver presented better results compared to other solvers such as MCLA. Additionally, using the trunk displacement feature to define each cluster proved promising. Furthermore, a notable aspect of this study was that it included multiple data collection methods, specifically one using wearable sensors and one using camera systems. This choice showcases the versatility and wide-ranging applicability of our proposed method, serves as a key strength, and contributes to the overall richness and depth of this study. In conclusion, from a clinical perspective, the primary contribution of this study is to highlight the importance of transitioning from traditional clinical assessments to artificial intelligence (AI)-driven, sensor-based systems for diverse assessments, specifically those focusing on functional abilities. Furthermore, the study emphasizes the potential of such systems in ultimately evaluating the quality of life of post-stroke patients. This shift in assessment methodologies has the potential to enhance the accuracy, efficiency, and overall understanding of patients’ functional capabilities and well-being, leading to improved care and better outcomes in post-stroke rehabilitation.

10. Conclusions and Future Directions

In this paper, motion capture data on stroke patients and healthy subjects were collected from two different clinical universities. The Xsens and Vicon camera datasets were selected for this study since both captured upper-limb motion using a shared protocol. However, including more datasets with similar protocols and the same data collection technology (such as wearable sensors) could further enhance the automated assessment of strokes, as well as healthcare and rehabilitation planning. Additionally, it must be noted that 4 similar tasks were selected from each motion capture method in this study; using identical tasks would further enhance the clustering results. The experimental results indicated that consensus clustering enhances the cluster output when using the novel trunk-displacement labelling method proposed in this study. Future work could investigate limited sensors or markers and compare the results against using all upper-limb sensors or markers. Additionally, semi-supervised learning could be examined to automate assessment. Developing an open dataset of stroke patients performing FMA-UE tasks could also improve the results when comparing semi-supervised learning with unsupervised clustering using FMA scores. In this study, whole actions were considered when computing the mean acceleration; however, examining smaller movement segments is a promising direction for future exploration. In addition, for large collected datasets, gender-based and age-based analyses could be investigated.

Author Contributions

Conceptualization, R.K. and N.R.; methodology, N.R. and R.K.; software, N.R.; formal analysis, R.K. and N.R.; investigation, N.R.; resources,; writing—original draft preparation, N.R.; writing—review and editing, R.K. and F.M.; supervision, R.K. and F.M.; visualization, N.R.; funding acquisition, R.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received funding from Toronto Metropolitan University’s Faculty of Engineering and Architectural Science.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The open-source camera-based data are available (see the MDPI Research Data Policies section) at https://dataverse.harvard.edu/file.xhtml?persistentId=doi:10.7910/DVN/FU3QZ9/PILNU7&version=4.0, and the open wearable sensor dataset (Xsens dataset) is available at https://zenodo.org/record/3713449#.Y9RC0nbMI2x (accessed on 12 March 2021).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Oubre, B.; Daneault, J.-F.; Jung, H.-T.; Whritenour, K.; Miranda, J.G.V.; Park, J.; Ryu, T.; Kim, Y.; Lee, S.I. Estimating Upper-Limb Impairment Level in Stroke Survivors Using Wearable Inertial Sensors and a Minimally-Burdensome Motor Task. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 601–611. [Google Scholar] [CrossRef] [PubMed]
  2. Singer, B.; Garcia-Vega, J. The Fugl-Meyer Upper Extremity Scale. J. Physiother. 2016, 63, 53. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Lee, S.I.; Adans-Dester, C.P.; Grimaldi, M.; Dowling, A.V.; Horak, P.C.; Black-Schaffer, R.M.; Bonato, P.; Gwin, J.T. Enabling Stroke Rehabilitation in Home and Community Settings: A Wearable Sensor-Based Approach for Upper-Limb Motor Training. IEEE J. Transl. Eng. Health Med. 2018, 6, 2100411. [Google Scholar] [CrossRef] [PubMed]
  4. Psychometric Comparisons of 2 Versions of the Fugl-Meyer Motor Scale and 2 Versions of the Stroke Rehabilitation Assessment of Movement. Available online: https://journals.sagepub.com/doi/epdf/10.1177/1545968308315999 (accessed on 22 January 2023).
  5. Lee, S.; Lee, Y.-S.; Kim, J. Automated Evaluation of Upper-Limb Motor Function Impairment Using Fugl-Meyer Assessment. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 26, 125–134. [Google Scholar] [CrossRef]
  6. Tanaka, K.; Yano, H. Errors of Visual Judgement in Precision Measurements. Ergonomics 1984, 27, 767–780. [Google Scholar] [CrossRef]
  7. van Beers, R.J.; Haggard, P.; Wolpert, D.M. The Role of Execution Noise in Movement Variability. J. Neurophysiol. 2004, 91, 1050–1063. [Google Scholar] [CrossRef]
  8. Harbourne, R.T.; Stergiou, N. Movement Variability and the Use of Nonlinear Tools: Principles to Guide Physical Therapist Practice. Phys. Ther. 2009, 89, 267–282. [Google Scholar] [CrossRef] [Green Version]
  9. Razfar, N.; Kashef, R.; Mohammadi, F. A Comprehensive Overview on IoT-Based Smart Stroke Rehabilitation Using the Advances of Wearable Technology. In Proceedings of the 2021 IEEE 23rd Int Conf on High Performance Computing & Communications; Cloud & Big Data Systems & Application (HPCC/DSS/SmartCity/DependSys), Haikou, China, 20–22 December 2021; pp. 1359–1366. [Google Scholar]
  10. Sacco, R.L.; Adams, R.; Albers, G.; Alberts, M.J.; Benavente, O.; Furie, K.; Goldstein, L.B.; Gorelick, P.; Halperin, J.; Harbaugh, R. Guidelines for Prevention of Stroke in Patients with Ischemic Stroke or Transient Ischemic Attack: A Statement for Healthcare Professionals from the American Heart Association/American Stroke Association Council on Stroke: Co-Sponsored by the Council on Cardiovascular Radiology and Intervention: The American Academy of Neurology Affirms the Value of This Guideline. Stroke 2006, 37, 577–617. [Google Scholar] [CrossRef]
  11. Billinger, S.A.; Arena, R.; Bernhardt, J.; Eng, J.J.; Franklin, B.A.; Johnson, C.M.; MacKay-Lyons, M.; Macko, R.F.; Mead, G.E.; Roth, E.J. Physical Activity and Exercise Recommendations for Stroke Survivors: A Statement for Healthcare Professionals from the American Heart Association/American Stroke Association. Stroke 2014, 45, 2532–2553. [Google Scholar] [CrossRef] [Green Version]
  12. Simbaña, E.D.O.; Baeza, P.S.-H.; Huete, A.J.; Balaguer, C. Review of Automated Systems for Upper Limbs Functional Assessment in Neurorehabilitation. IEEE Access 2019, 7, 32352–32367. [Google Scholar] [CrossRef]
  13. Zhou, H.; Hu, H. Human Motion Tracking for Rehabilitation—A Survey. Biomed. Signal Process. Control 2008, 3, 1–18. [Google Scholar] [CrossRef]
  14. Paulich, M.; Schepers, M.; Rudigkeit, N.; Bellusci, G. Xsens MTw Awinda: Miniature Wireless Inertial-Magnetic Motion Tracker for Highly Accurate 3D Kinematic Applications; Xsens: Enschede, The Netherlands, 2018. [Google Scholar]
  15. Chae, S.H.; Kim, Y.; Lee, K.-S.; Park, H.-S. Development and Clinical Evaluation of a Web-Based Upper Limb Home Rehabilitation System Using a Smartwatch and Machine Learning Model for Chronic Stroke Survivors: Prospective Comparative Study. JMIR mHealth uHealth 2020, 8, e17216. [Google Scholar] [CrossRef]
  16. Panwar, M.; Biswas, D.; Bajaj, H.; Jöbges, M.; Turk, R.; Maharatna, K.; Acharyya, A. Rehab-Net: Deep Learning Framework for Arm Movement Classification Using Wearable Sensors for Stroke Rehabilitation. IEEE Trans. Biomed. Eng. 2019, 66, 3026–3037. [Google Scholar] [CrossRef]
  17. Liu, X.; Rajan, S.; Ramasarma, N.; Bonato, P.; Lee, S.I. The Use of a Finger-Worn Accelerometer for Monitoring of Hand Use in Ambulatory Settings. IEEE J. Biomed. Health Inform. 2019, 23, 599–606. [Google Scholar] [CrossRef]
  18. Kaku, A.; Parnandi, A.; Venkatesan, A.; Pandit, N.; Schambra, H.; Fernandez-Granda, C. Towards Data-Driven Stroke Rehabilitation via Wearable Sensors and Deep Learning. In Proceedings of the 5th Machine Learning for Healthcare Conference, PMLR, City, Country, 18 September 2020; pp. 143–171. [Google Scholar]
  19. Sapienza, S.; Adans-Dester, C.; O’Brien, A.; Vergara-Diaz, G.; Lee, S.; Patel, S.; Black-Schaffer, R.; Zafonte, R.; Bonato, P.; Meagher, C.; et al. Using a Minimum Set of Wearable Sensors to Assess Quality of Movement in Stroke Survivors. In Proceedings of the 2017 IEEE/ACM International Conference on Connected Health: Applications, Systems and Engineering Technologies (CHASE), Philadelphia, PA, USA, 17–19 July 2017; pp. 284–285. [Google Scholar]
  20. Adans-Dester, C.; Hankov, N.; O’Brien, A.; Vergara-Diaz, G.; Black-Schaffer, R.; Zafonte, R.; Dy, J.; Lee, S.I.; Bonato, P. Enabling Precision Rehabilitation Interventions Using Wearable Sensors and Machine Learning to Track Motor Recovery. NPJ Digit. Med. 2020, 3, 121. [Google Scholar] [CrossRef] [PubMed]
  21. Yu, L.; Xiong, D.; Guo, L.; Wang, J. A Remote Quantitative Fugl-Meyer Assessment Framework for Stroke Patients Based on Wearable Sensor Networks. Comput. Methods Programs Biomed. 2016, 128, 100–110. [Google Scholar] [CrossRef] [PubMed]
  22. Lucas, A.; Hermiz, J.; Labuzetta, J.; Arabadzhi, Y.; Karanjia, N.; Gilja, V. Use of Accelerometry for Long Term Monitoring of Stroke Patients. IEEE J. Transl. Eng. Health Med. 2019, 7, 1–10. [Google Scholar] [CrossRef] [PubMed]
  23. Chen, X.; Guan, Y.; Shi, J.Q.; Du, X.-L.; Eyre, J. Designing Compact Features for Remote Stroke Rehabilitation Monitoring Using Wearable Accelerometers. CCF Trans. Pervasive Comput. Interact. 2022, 5, 206–225. [Google Scholar] [CrossRef]
  24. Meng, L.; Zhang, A.; Chen, C.; Wang, X.; Jiang, X.; Tao, L.; Fan, J.; Wu, X.; Dai, C.; Zhang, Y. Exploration of Human Activity Recognition Using a Single Sensor for Stroke Survivors and Able-Bodied People. Sensors 2021, 21, 799. [Google Scholar] [CrossRef]
25. Jiang, Y.; Qin, Y.; Kim, I.; Wang, Y. Towards an IoT-Based Upper Limb Rehabilitation Assessment System. In Proceedings of the 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Jeju, Republic of Korea, 11–15 July 2017; pp. 2414–2417. [Google Scholar]
  26. van Meulen, F.B.; Reenalda, J.; Buurke, J.H.; Veltink, P.H. Assessment of Daily-Life Reaching Performance after Stroke. Ann. Biomed. Eng. 2015, 43, 478–486. [Google Scholar] [CrossRef]
  27. Li, H.-T.; Huang, J.-J.; Pan, C.-W.; Chi, H.-I.; Pan, M.-C. Inertial Sensing Based Assessment Methods to Quantify the Effectiveness of Post-Stroke Rehabilitation. Sensors 2015, 15, 16196–16209. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  28. Patel, S.; Hughes, R.; Hester, T.; Stein, J.; Akay, M.; Dy, J.G.; Bonato, P. A Novel Approach to Monitor Rehabilitation Outcomes in Stroke Survivors Using Wearable Technology. Proc. IEEE 2010, 98, 450–461. [Google Scholar] [CrossRef]
  29. Del Din, S.; Patel, S.; Cobelli, C.; Bonato, P. Estimating Fugl-Meyer Clinical Scores in Stroke Survivors Using Wearable Sensors. In Proceedings of the 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Boston, MA, USA, 30 August–3 September 2011; pp. 5839–5842. [Google Scholar]
  30. Chaeibakhsh, S.; Phillips, E.; Buchanan, A.; Wade, E. Upper Extremity Post-Stroke Motion Quality Estimation with Decision Trees and Bagging Forests. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 4585–4588. [Google Scholar]
  31. Wang, J.; Yu, L.; Wang, J.; Guo, L.; Gu, X.; Fang, Q. Automated Fugl-Meyer Assessment Using SVR Model. In Proceedings of the 2014 IEEE International Symposium on Bioelectronics and Bioinformatics (IEEE ISBB 2014), Chung Li, Taiwan, 11–14 April 2014; pp. 1–4. [Google Scholar]
  32. Lee, S.I.; Jung, H.-T.; Park, J.; Jeong, J.; Ryu, T.; Kim, Y.; dos Santos, V.S.; Miranda, J.G.V.; Daneault, J.-F. Towards the Ambulatory Assessment of Movement Quality in Stroke Survivors Using a Wrist-Worn Inertial Sensor. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; pp. 2825–2828. [Google Scholar]
  33. Kim, W.-S.; Cho, S.; Baek, D.; Bang, H.; Paik, N.-J. Upper Extremity Functional Evaluation by Fugl-Meyer Assessment Scoring Using Depth-Sensing Camera in Hemiplegic Stroke Patients. PLoS ONE 2016, 11, e0158640. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  34. Otten, P.; Kim, J.; Son, S.H. A Framework to Automate Assessment of Upper-Limb Motor Function Impairment: A Feasibility Study. Sensors 2015, 15, 20097–20114. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  35. Lee, S.-H.; Song, M.; Kim, J. Towards Clinically Relevant Automatic Assessment of Upper-Limb Motor Function Impairment. In Proceedings of the 2016 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI), Las Vegas, NV, USA, 24–27 February 2016; pp. 148–151. [Google Scholar]
  36. Olesh, E.V.; Yakovenko, S.; Gritsenko, V. Automated Assessment of Upper Extremity Movement Impairment Due to Stroke. PLoS ONE 2014, 9, e104487. [Google Scholar] [CrossRef] [PubMed]
  37. Zhi, T.; Meng, C.; Fu, L. Design of Intelligent Rehabilitation Evaluation Scale for Stroke Patients Based on Genetic Algorithm and Extreme Learning Machine. J. Sens. 2022, 2022, 9323152. [Google Scholar] [CrossRef]
  38. Bochniewicz, E.M.; Emmer, G.; McLeod, A.; Barth, J.; Dromerick, A.W.; Lum, P. Measuring Functional Arm Movement after Stroke Using a Single Wrist-Worn Sensor and Machine Learning. J. Stroke Cerebrovasc. Dis. 2017, 26, 2880–2887. [Google Scholar] [CrossRef]
  39. Lee, T.K.M.; Leo, K.-H.; Sanei, S.; Chew, E.; Zhao, L. Triaxial Rehabilitative Data Analysis Incorporating Matching Pursuit. In Proceedings of the 2017 25th European Signal Processing Conference (EUSIPCO), Kos, Greece, 28 August–2 September 2017; pp. 434–438. [Google Scholar]
  40. Park, E.; Lee, K.; Han, T.; Nam, H.S. Automatic Grading of Stroke Symptoms for Rapid Assessment Using Optimized Machine Learning and 4-Limb Kinematics: Clinical Validation Study. J. Med. Internet Res. 2020, 22, e20641. [Google Scholar] [CrossRef]
  41. Bisio, I.; Garibotto, C.; Lavagetto, F.; Sciarrone, A. When EHealth Meets IoT: A Smart Wireless System for Post-Stroke Home Rehabilitation. IEEE Wirel. Commun. 2019, 26, 24–29. [Google Scholar] [CrossRef]
  42. Mannini, A.; Trojaniello, D.; Cereatti, A.; Sabatini, A.M. A Machine Learning Framework for Gait Classification Using Inertial Sensors: Application to Elderly, Post-Stroke and Huntington’s Disease Patients. Sensors 2016, 16, 134. [Google Scholar] [CrossRef] [Green Version]
  43. Butt, A.H.; Zambrana, C.; Idelsohn-Zielonka, S.; Claramunt-Molet, M.; Ugartemendia-Etxarri, A.; Rovini, E.; Moschetti, A.; Molleja, C.; Martin, C.; Salleras, E.O.; et al. Assessment of Purposeful Movements for Post-Stroke Patients in Activites of Daily Living with Wearable Sensor Device. In Proceedings of the 2019 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB), Siena, Italy, 9–11 July 2019; pp. 1–8. [Google Scholar]
  44. Bobin, M.; Amroun, H.; Boukalle, M.; Anastassova, M.; Ammi, M. Smart Cup to Monitor Stroke Patients Activities during Everyday Life. In Proceedings of the 2018 IEEE International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData), Halifax, NS, Canada, 30 July–3 August 2018; pp. 189–195. [Google Scholar]
  45. Woytowicz, E.J.; Rietschel, J.C.; Goodman, R.N.; Conroy, S.S.; Sorkin, J.D.; Whitall, J.; McCombe Waller, S. Determining Levels of Upper Extremity Movement Impairment by Applying a Cluster Analysis to the Fugl-Meyer Assessment of the Upper Extremity in Chronic Stroke. Arch. Phys. Med. Rehabil. 2017, 98, 456–462. [Google Scholar] [CrossRef] [Green Version]
  46. Jiang, F.; Jiang, Y.; Zhi, H.; Dong, Y.; Li, H.; Ma, S.; Wang, Y.; Dong, Q.; Shen, H.; Wang, Y. Artificial Intelligence in Healthcare: Past, Present and Future. Stroke Vasc. Neurol. 2017, 2. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  47. Boukhennoufa, I.; Zhai, X.; Utti, V.; Jackson, J.; McDonald-Maier, K.D. Wearable Sensors and Machine Learning in Post-Stroke Rehabilitation Assessment: A Systematic Review. Biomed. Signal Process. Control 2022, 71, 103197. [Google Scholar] [CrossRef]
  48. O’Brien, M.K.; Shawen, N.; Mummidisetty, C.K.; Kaur, S.; Bo, X.; Poellabauer, C.; Kording, K.; Jayaraman, A. Activity Recognition for Persons with Stroke Using Mobile Phone Technology: Toward Improved Performance in a Home Setting. J. Med. Internet Res. 2017, 19, e7385. [Google Scholar] [CrossRef] [PubMed]
  49. Massé, F.; Gonzenbach, R.R.; Arami, A.; Paraschiv-Ionescu, A.; Luft, A.R.; Aminian, K. Improving Activity Recognition Using a Wearable Barometric Pressure Sensor in Mobility-Impaired Stroke Patients. J. Neuroeng. Rehabil. 2015, 12, 72. [Google Scholar] [CrossRef] [Green Version]
  50. Miller, A.; Quinn, L.; Duff, S.V.; Wade, E. Comparison of Machine Learning Approaches for Classifying Upper Extremity Tasks in Individuals Post-Stroke. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. 2020, 2020, 4330–4336. [Google Scholar] [CrossRef] [PubMed]
  51. Hsu, W.-C.; Sugiarto, T.; Lin, Y.-J.; Yang, F.-C.; Lin, Z.-Y.; Sun, C.-T.; Hsu, C.-L.; Chou, K.-N. Multiple-Wearable-Sensor-Based Gait Classification and Analysis in Patients with Neurological Disorders. Sensors 2018, 18, 3397. [Google Scholar] [CrossRef] [Green Version]
  52. Wang, F.-C.; Chen, S.-F.; Lin, C.-H.; Shih, C.-J.; Lin, A.-C.; Yuan, W.; Li, Y.-C.; Kuo, T.-Y. Detection and Classification of Stroke Gaits by Deep Neural Networks Employing Inertial Measurement Units. Sensors 2021, 21, 1864. [Google Scholar] [CrossRef]
  53. Derungs, A.; Schuster-Amft, C.; Amft, O. Wearable Motion Sensors and Digital Biomarkers in Stroke Rehabilitation. Curr. Dir. Biomed. Eng. 2020, 6, 229–232. [Google Scholar] [CrossRef]
  54. Balestra, N.; Sharma, G.; Riek, L.M.; Busza, A. Automatic Identification of Upper Extremity Rehabilitation Exercise Type and Dose Using Body-Worn Sensors and Machine Learning: A Pilot Study. DIB 2021, 5, 158–166. [Google Scholar] [CrossRef]
  55. Yang, G.; Deng, J.; Pang, G.; Zhang, H.; Li, J.; Deng, B.; Pang, Z.; Xu, J.; Jiang, M.; Liljeberg, P.; et al. An IoT-Enabled Stroke Rehabilitation System Based on Smart Wearable Armband and Machine Learning. IEEE J. Transl. Eng. Health Med. 2018, 6, 1–10. [Google Scholar] [CrossRef] [PubMed]
  56. Capela, N.A.; Lemaire, E.D.; Baddour, N. Feature Selection for Wearable Smartphone-Based Human Activity Recognition with Able Bodied, Elderly, and Stroke Patients. PLoS ONE 2015, 10, e0124414. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  57. Tran, T.; Chang, L.-C.; Almubark, I.; Bochniewicz, E.M.; Shu, L.; Lum, P.; Dromerick, A. Robust Classification of Functional and Nonfunctional Arm Movement after Stroke Using a Single Wrist-Worn Sensor Device. In Proceedings of the 2018 IEEE International Conference on Big Data (Big Data), Seattle, WA, USA, 10–13 December 2018. [Google Scholar] [CrossRef]
  58. Chen, P.-W.; Baune, N.; Zwir, I.; Wang, J.; Swamidass, V.; Wong, A. Measuring Activities of Daily Living in Stroke Patients with Motion Machine Learning Algorithms: A Pilot Study. Int. J. Environ. Res. Public Health 2021, 18, 1634. [Google Scholar] [CrossRef] [PubMed]
  59. Boukhennoufa, I.; Zhai, X.; McDonald-Maier, K.D.; Utti, V.; Jackson, J. Improving the Activity Recognition Using GMAF and Transfer Learning in Post-Stroke Rehabilitation Assessment. In Proceedings of the 2021 IEEE 19th World Symposium on Applied Machine Intelligence and Informatics (SAMI), Herl’any, Slovakia, 21–23 January 2021; pp. 000391–000398. [Google Scholar]
  60. Hosseini, Z.-S.; Peyrovi, H.; Gohari, M. The Effect of Early Passive Range of Motion Exercise on Motor Function of People with Stroke: A Randomized Controlled Trial. J. Caring Sci. 2019, 8, 39–44. [Google Scholar] [CrossRef]
  61. Taub, E.; Crago, J.E.; Uswatte, G. Constraint-Induced Movement Therapy: A New Approach to Treatment in Physical Rehabilitation. Rehabil. Psychol. 1998, 43, 152–170. [Google Scholar] [CrossRef]
  62. Stinear, C.M.; Lang, C.E.; Zeiler, S.; Byblow, W.D. Advances and Challenges in Stroke Rehabilitation. Lancet Neurol. 2020, 19, 348–360. [Google Scholar] [CrossRef]
  63. Lun, R.; Zhao, W. A Survey of Applications and Human Motion Recognition with Microsoft Kinect. Int. J. Patt. Recogn. Artif. Intell. 2015, 29, 1555008. [Google Scholar] [CrossRef] [Green Version]
  64. Gu, Y.; Do, H.; Ou, Y.; Sheng, W. Human Gesture Recognition through a Kinect Sensor. In Proceedings of the 2012 IEEE International Conference on Robotics and Biomimetics (ROBIO), Guangzhou, China, 11–14 December 2012; pp. 1379–1384. [Google Scholar]
  65. Mohamed, A.A.; Awad, M.I.; Maged, S.A.; Gaber, M.M. Automated Upper Limb Motor Functions Assessment System Using One-Class Support Vector Machine. In Proceedings of the 2021 16th International Conference on Computer Engineering and Systems (ICCES), Cairo, Egypt, 15–16 December 2021; pp. 1–6. [Google Scholar]
  66. Seo, N.J.; Crocher, V.; Spaho, E.; Ewert, C.R.; Fathi, M.F.; Hur, P.; Lum, S.A.; Humanitzki, E.M.; Kelly, A.L.; Ramakrishnan, V. Capturing Upper Limb Gross Motor Categories Using the Kinect® Sensor. Am. J. Occup. Ther. 2019, 73, 7304205090p1–7304205090p10. [Google Scholar] [CrossRef]
  67. Bezdek, J.C. Pattern Recognition with Fuzzy Objective Function Algorithms; Springer Science & Business Media: New York, NY, USA, 2013; ISBN 978-1-4757-0450-1. [Google Scholar]
  68. Kuruba Manjunath, Y.S.; Kashef, R.F. Distributed Clustering Using Multi-Tier Hierarchical Overlay Super-Peer Peer-to-Peer Network Architecture for Efficient Customer Segmentation. Electron. Commer. Res. Appl. 2021, 47, 101040. [Google Scholar] [CrossRef]
  69. Close, L.; Kashef, R. Combining Artificial Immune System and Clustering Analysis: A Stock Market Anomaly Detection Model. JILSA 2020, 12, 83–108. [Google Scholar] [CrossRef]
  70. Li, M.; Kashef, R.; Ibrahim, A. Multi-Level Clustering-Based Outlier’s Detection (MCOD) Using Self-Organizing Maps. Big Data Cogn. Comput. 2020, 4, 24. [Google Scholar] [CrossRef]
  71. Ariza Colpas, P.; Vicario, E.; De-La-Hoz-Franco, E.; Pineres-Melo, M.; Oviedo-Carrascal, A.; Patara, F. Unsupervised Human Activity Recognition Using the Clustering Approach: A Review. Sensors 2020, 20, 2702. [Google Scholar] [CrossRef] [PubMed]
  72. Jia, H.; Ding, S.; Xu, X.; Nie, R. The Latest Research Progress on Spectral Clustering. Neural Comput. Appl. 2014, 24, 1477–1486. [Google Scholar] [CrossRef]
  73. Patel, E.; Kushwaha, D.S. Clustering Cloud Workloads: K-Means vs Gaussian Mixture Model. Procedia Comput. Sci. 2020, 171, 158–167. [Google Scholar] [CrossRef]
  74. Li, G.; Wang, Z.; Zhang, Q.; Sun, J. Offline and Online Objective Reduction via Gaussian Mixture Model Clustering. IEEE Trans. Evol. Comput. 2022, 27, 341–354. [Google Scholar] [CrossRef]
  75. Kashef, R.; Warraich, M. Homogeneous vs. Heterogeneous Distributed Data Clustering: A Taxonomy. In Data Management and Analysis: Case Studies in Education, Healthcare and Beyond; Alhajj, R., Moshirpour, M., Far, B., Eds.; Studies in Big Data; Springer International Publishing: Cham, Switzerland, 2020; pp. 51–66. ISBN 978-3-030-32587-9. [Google Scholar]
76. Ankerst, M.; Breunig, M.M.; Kriegel, H.-P.; Sander, J. OPTICS: Ordering Points to Identify the Clustering Structure. ACM SIGMOD Rec. 1999, 28, 49–60. [Google Scholar]
  77. Ghosh, J.; Acharya, A. Cluster Ensembles. WIREs Data Min. Knowl. Discov. 2011, 1, 305–315. [Google Scholar] [CrossRef]
  78. Fern, X.Z.; Brodley, C.E. Solving Cluster Ensemble Problems by Bipartite Graph Partitioning. In Proceedings of the Twenty-First International Conference on Machine Learning, Banff, AB, Canada, 4 July 2004; Association for Computing Machinery: New York, NY, USA, 2004; p. 36. [Google Scholar]
  79. Li, T.; Ding, C.; Jordan, M.I. Solving Consensus and Semi-Supervised Clustering Problems Using Nonnegative Matrix Factorization. In Proceedings of the Seventh IEEE International Conference on Data Mining (ICDM 2007), Omaha, NE, USA, 28–31 October 2007; pp. 577–582. [Google Scholar]
  80. Averta, G.; Barontini, F.; Catrambone, V.; Haddadin, S.; Handjaras, G.; Held, J.P.O.; Hu, T.; Jakubowitz, E.; Kanzler, C.M.; Kühn, J.; et al. U-Limb: A Multi-Modal, Multi-Center Database on Arm Motion Control in Healthy and Post-Stroke Conditions. GigaScience 2021, 10, giab043. [Google Scholar] [CrossRef]
  81. Schwarz, A.; Bhagubai, M.M.C.; Wolterink, G.; Held, J.P.O.; Luft, A.R.; Veltink, P.H. Assessment of Upper Limb Movement Impairments after Stroke Using Wearable Inertial Sensing. Sensors 2020, 20, E4770. [Google Scholar] [CrossRef]
  82. Schwarz, A.; Bhagubai, M.M.C.; Nies, S.H.G.; Held, J.P.O.; Veltink, P.H.; Buurke, J.H.; Luft, A.R. Characterization of Stroke-Related Upper Limb Motor Impairments across Various Upper Limb Activities by Use of Kinematic Core Set Measures. J. NeuroEng. Rehabil. 2022, 19, 2. [Google Scholar] [CrossRef]
  83. Rey, V.F.; Hevesi, P.; Kovalenko, O.; Lukowicz, P. Let There Be IMU Data: Generating Training Data for Wearable, Motion Sensor Based Activity Recognition from Monocular RGB Videos. In Proceedings of the Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers, London, UK, 9 September 2019; ACM: London, UK, 2019; pp. 699–708. [Google Scholar]
  84. Schwarz, A.; Veerbeek, J.M.; Held, J.P.O.; Buurke, J.H.; Luft, A.R. Measures of Interjoint Coordination Post-Stroke Across Different Upper Limb Movement Tasks. Front. Bioeng. Biotechnol. 2021, 8, 620805. [Google Scholar] [CrossRef]
  85. Subramanian, S.K.; Yamanaka, J.; Chilingaryan, G.; Levin, M.F. Validity of Movement Pattern Kinematics as Measures of Arm Motor Impairment Poststroke. Stroke 2010, 41, 2303–2308. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  86. Brouwer, N.P.; Yeung, T.; Bobbert, M.F.; Besier, T.F. 3D Trunk Orientation Measured Using Inertial Measurement Units during Anatomical and Dynamic Sports Motions. Scand. J. Med. Sci. Sport. 2021, 31, 358–370. [Google Scholar] [CrossRef] [PubMed]
  87. Cai, S.; Li, G.; Zhang, X.; Huang, S.; Zheng, H.; Ma, K.; Xie, L. Detecting Compensatory Movements of Stroke Survivors Using Pressure Distribution Data and Machine Learning Algorithms. J. Neuroeng. Rehabil. 2019, 16, 131. [Google Scholar] [CrossRef] [PubMed]
  88. Jayasinghe, S.A.L.; Wang, R.; Gebara, R.; Biswas, S.; Ranganathan, R. Compensatory Trunk Movements in Naturalistic Reaching and Manipulation Tasks in Chronic Stroke Survivors. J. Appl. Biomech. 2021, 37, 215–223. [Google Scholar] [CrossRef] [PubMed]
  89. Levin, M.F.; Liebermann, D.G.; Parmet, Y.; Berman, S. Compensatory Versus Noncompensatory Shoulder Movements Used for Reaching in Stroke. Neurorehabil. Neural Repair. 2016, 30, 635–646. [Google Scholar] [CrossRef] [Green Version]
  90. Lo Presti, D.; Zaltieri, M.; Bravi, M.; Morrone, M.; Caponero, M.A.; Schena, E.; Sterzi, S.; Massaroni, C. A Wearable System Composed of FBG-Based Soft Sensors for Trunk Compensatory Movements Detection in Post-Stroke Hemiplegic Patients. Sensors 2022, 22, 1386. [Google Scholar] [CrossRef]
  91. He, Z.; Xu, X.; Deng, S. K-ANMI: A Mutual Information Based Clustering Algorithm for Categorical Data. Inf. Fusion 2008, 9, 223–233. [Google Scholar] [CrossRef] [Green Version]
  92. Sano, T.; Migita, T.; Takahashi, N. A Damped Newton Algorithm for Nonnegative Matrix Factorization Based on Alpha-Divergence. In Proceedings of the 2019 6th International Conference on Systems and Informatics (ICSAI), Shanghai, China, 2–4 November 2019; pp. 463–468. [Google Scholar]
  93. Csiszar, I.; Shields, P. Information Theory and Statistics: A Tutorial. Found. Trends Commun. Inf. Theory 2004, 1, 417–528. [Google Scholar] [CrossRef] [Green Version]
Figure 1. The HBGF consensus clustering bipartite graph.
Figure 2. Overview of the proposed PSA-MNMF consensus clustering method.
Figure 3. The preprocessing procedure.
Figure 4. The step-by-step method for processing acceleration data and implementing PSA-MNMF consensus clustering.
Figure 5. The ANMI: position and acceleration in the frequency domain (Dataset-1) for k = 2 and k = 3.
Figure 6. The ANMI: position and acceleration in the frequency domain (Dataset-2) for k = 2 and k = 3.
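Figures 5 and 6 report the average normalized mutual information (ANMI) between a consensus partition and the ensemble of base clusterings it summarizes. The paper does not include code for this; the sketch below shows one common way to compute ANMI with scikit-learn (the function name `anmi` is ours, not the authors'):

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def anmi(consensus_labels, base_clusterings):
    """Average NMI between one consensus partition and each base partition."""
    scores = [normalized_mutual_info_score(consensus_labels, labels)
              for labels in base_clusterings]
    return float(np.mean(scores))

# Toy ensemble: three base partitions of six samples.
base = [
    [0, 0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1, 1],
    [1, 1, 1, 0, 0, 0],  # same partition as the first, labels permuted
]
consensus = [0, 0, 0, 1, 1, 1]
score = anmi(consensus, base)  # high: two of three base partitions agree exactly
```

Because NMI is invariant to label permutation, the third base partition contributes a perfect score even though its cluster IDs are swapped; a good consensus partition maximizes this average over the whole ensemble.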
Figure 7. Dataset-1: PSA-MNMF vs. MCLA for k = 2 and k = 3.
Figure 8. Dataset-2: PSA-MNMF vs. MCLA for k = 2 and k = 3.
Table 1. Summary of the studies that used wearable sensors to predict FMA scores.
References | Assessment Tests | Sensors | Result Types | Features | Machine Learning | Purpose
Meulen et al. (2015) [26] | Compared with FMA | 17 IMUs (Xsens system attached to the body) | Correlation | Hand position relative to the trunk and pelvic region; quantitative analysis of arm and trunk (distance) | NA | To assess arm movements and compare them with FMA scores
Li et al. (2015) [27] | Compared with the Wolf Motor Function Test | 2 IMUs attached to the arm and wrist | Correlation | Acceleration and gyroscope data | NA | To evaluate motion quality before and after rehabilitation tasks
Del Din et al. (2011) [29] | Compared with FMA | Accelerometers attached to the hand, forearm, upper finger, thumb, and sternum | Prediction | Acceleration | Random Forest | To estimate FMA scores
Yu et al. (2016) [21] | Compared with FMA | 2 accelerometers and 7 flex sensors | Prediction | AMP, MEAN, RMS, JERK, ApEn 1 | ELM and SVM to map the result to FMA | To predict FMA scores
Chaeibakhsh et al. (2016) [30] | Compared with FMA | 5 APDM Opal motion monitoring sensors (APDM Inc., OR, USA) on the sternum, bilateral dorsal wrists, and bilateral upper arms proximal to the elbow | Prediction | Accelerometer and gyroscope data, RMSE, entropy, and dominant frequency | Decision tree and Bootstrap Aggregation Forest | To estimate FMA scores
Wang et al. (2014) [31] | Compared with FMA | 2 accelerometers attached to the elbow and shoulder | Estimation | Acceleration | Support Vector Regression | To estimate the FMA scores of shoulder and elbow movements
Oubre et al. (2020) [1] | Compared with FMA | 9 IMUs (MTw Awinda, Xsens, Netherlands) on the wrist and sternum | Estimation | Mean velocity, time duration, travel distance | DBSCAN and regression model | To estimate FMA scores
Lee et al. (2018) [32] | Compared with the Functional Ability Scale | 9 IMUs (MTw Awinda, Xsens, Netherlands) attached to the wrist | Correlation | Velocity | K-means clustering | Utilizes kinematic characteristics of voluntary limb movements; focuses on the quality of movement in stroke survivors
Patel et al. (2010) [28] | Compared with FAS | Accelerometers attached to the hand, forearm, and upper arm | Prediction | Acceleration | Random Forest | To estimate FAS scores
Adans-Dester et al. (2020) [20] | FMA and FAS | IMU | Upper limb | Displacement, velocity, acceleration, and jerk | Random Forest | Different ADL tasks to evaluate FAS and FMA scoring
1 AMP: amplitude of sensor data; MEAN: mean value of sensor data; RMS: root mean square value of sensor data; JERK: root mean square value of the derivative of sensor data; ApEn: approximate entropy of sensor data.
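The footnote above defines the statistical features used in [21]. As an illustration only (not the authors' code), a minimal NumPy sketch of these five features could look as follows; treating AMP as peak-to-peak amplitude and JERK as the RMS of the first derivative, and the function names themselves, are our assumptions:

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate entropy (ApEn) of a 1-D signal; r defaults to 0.2 * std."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x)
    def phi(m):
        # Embed the signal into overlapping windows of length m.
        emb = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of windows.
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = np.mean(dist <= r, axis=1)  # fraction of windows within tolerance r
        return np.mean(np.log(c))
    return phi(m) - phi(m + 1)

def sensor_features(signal, fs):
    """AMP, MEAN, RMS, JERK, and ApEn, as defined in the Table 1 footnote."""
    signal = np.asarray(signal, dtype=float)
    jerk = np.diff(signal) * fs               # derivative of the sensor data
    return {
        "AMP": float(np.max(signal) - np.min(signal)),
        "MEAN": float(np.mean(signal)),
        "RMS": float(np.sqrt(np.mean(signal ** 2))),
        "JERK": float(np.sqrt(np.mean(jerk ** 2))),  # RMS of the derivative
        "ApEn": float(approximate_entropy(signal)),
    }

# Example on a synthetic 2 s accelerometer trace sampled at 50 Hz.
t = np.linspace(0, 2, 100)
feats = sensor_features(np.sin(2 * np.pi * 1.5 * t), fs=50)
```

A regular signal such as this sine wave yields a low ApEn, while noisier, less predictable movement traces yield higher values.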
Table 2. Summary of the studies that used camera-based sensors to predict FMA scores.
References | Assessment Tests | Sensors | Result Types | Features | Machine Learning | Purpose
Kim et al. (2016) [33] | FMA | Kinect camera data | Correlation | Positions, angles, and the distance between two joints (for instance, hand–shoulder, hand–head, and elbow–head) | Artificial Neural Network (ANN) | To develop an FMA tool using a Kinect camera and classify the 6 FMA upper-extremity tasks
Mohamed et al. (2021) [65] | FMA | Kinect V1 and Myo armband | N/A | EMG and position data | SVM | Predicting FMA scores
Seo et al. (2019) [66] | Mallet Clinical Rating Scale | Kinect V1 | N/A | Velocity | Rule-based classification | Classifying the Mallet clinical rating scale
Lee et al. (2017) [5] | FMA | Kinect with force- and resistance-sensing hand sensors | Correlation | Acceleration data | Rule-based classifier | Classifying FMA tasks
Otten et al. (2015) [34] | FMA | Kinect camera; pressure sensor (FSR 400 series, Interlink Electronics, Westlake Village, CA, USA); glove sensor (Shimmer IMU, Shimmer, Dublin); glove sensor (DG5-VHand glove 3.0, DGTech, Bazzano, Italy) | Prediction | Kinematic features such as finger flexion and extension, joint angle, and supination and pronation of the hand | SVM-Linear, SVM-Kernel, BNN | To predict 24 of the 33 FMA tasks with scores of 0, 1, or 2
Lee et al. (2016) [35] | FMA | Kinect v2 and force sensing | Prediction | Joint motion (abduction and adduction, flexion and extension, etc.) | Fuzzy-logic classification | To classify movements for FMA prediction; healthy subjects were used
Olesh et al. (2014) [36] | Arm movements from FMA and the Action Research Arm Test | A low-cost motion capture device (the Impulse motion capture system) | Estimation | Joint angle | Principal Component Analysis (PCA) | Comparing quantitative scores with the qualitative clinical scores generated by clinicians
Table 3. The activities of daily living tasks selected for this study [82].
Step 1 | Step 2 | Step 3
Reaching distally and grasping a glass | Drinking for 3 s | Placing it back in the initial position
Reaching distally and grasping a phone | Moving it to the subject's ear for 3 s | Placing it back in the initial position
Reaching distally and grasping a small cup | Drinking for 3 s | Placing it back in the initial position
Reaching distally and grasping an apple | Pretending to bite | Placing it back in the initial position
Table 4. Characteristics of the subjects for both the wearable sensor and camera-based systems.
Characteristics | Mean (SD)/Count (Camera Sensor) | Mean (SD)/Count (Wearable Sensor)
Age | 46.77 ± 15.25 | 61.00 ± 10.69
Sex | 6 female; 12 male | 5 female; 15 male
FMMA-UE | 17.75 ± 2.05 | 46.00 ± 10.16
Affected Hand | 12 right; 8 left | 11 right; 9 left
Table 5. Accuracy, precision, recall, and f-score—Dataset 1 (position in the frequency domain with k = 2).
k = 2 | Fuzzy | K-Means | SOM | Gaussian Mixture | DBSCAN | Hierarchical | Spectral | OPTICS | PSA-MNMF
Accuracy | 60.4% ± 0.0001 | 60.7% ± 0.003 | 60.7% ± 0.036 | 58.8% ± 0.0001 | 29.4% ± 0.0001 | 71.5% ± 0.0002 | 59.2% ± 0.001 | 65.5% ± 0.0001 | 75% ± 0.0001
P | 65.8% ± 0.0002 | 65.9% ± 0.003 | 66.3% ± 0.024 | 65.6% ± 0.0001 | 16.2% ± 0.0001 | 69.1% ± 0.0002 | 65.9% ± 0.004 | 55.1% ± 0.0001 | 73.9% ± 0.0001
R | 60.4% ± 0.0001 | 60.7% ± 0.003 | 60.7% ± 0.036 | 58.8% ± 0.0001 | 29.4% ± 0.0001 | 71.5% ± 0.0002 | 59.2% ± 0.001 | 65.5% ± 0.0001 | 75% ± 0.0001
F-score | 61.8% ± 0.0001 | 62.1% ± 0.003 | 61.9% ± 0.034 | 60.4% ± 0.0001 | 14.6% ± 0.0001 | 68.7% ± 0.0001 | 60.7% ± 0.001 | 57% ± 0.0001 | 74% ± 0.0001
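Tables 5–15 score unsupervised partitions with classification metrics, which requires first mapping each cluster to a severity label. The paper does not spell out its mapping procedure; a common approach, shown in the sketch below under the assumption that the number of clusters equals the number of labels (the function name `clustering_scores` is ours), uses the Hungarian algorithm to find the cluster-to-label assignment with maximum overlap:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import precision_recall_fscore_support

def clustering_scores(y_true, cluster_ids):
    """Map cluster IDs to ground-truth labels with the Hungarian algorithm,
    then score the aligned assignment as if it were a classification.
    Assumes the number of clusters equals the number of labels."""
    y_true = np.asarray(y_true)
    cluster_ids = np.asarray(cluster_ids)
    clusters = np.unique(cluster_ids)
    labels = np.unique(y_true)
    # Contingency matrix: how often each cluster co-occurs with each label.
    cont = np.array([[np.sum((cluster_ids == c) & (y_true == lab))
                      for lab in labels] for c in clusters])
    row, col = linear_sum_assignment(-cont)  # maximise total overlap
    mapping = {clusters[r]: labels[c] for r, c in zip(row, col)}
    y_pred = np.array([mapping[c] for c in cluster_ids])
    acc = float(np.mean(y_pred == y_true))
    p, r, f, _ = precision_recall_fscore_support(
        y_true, y_pred, average="weighted", zero_division=0)
    return {"accuracy": acc, "precision": float(p),
            "recall": float(r), "f_score": float(f)}

# A partition that is perfect up to a label permutation scores accuracy 1.0.
scores = clustering_scores([0, 0, 0, 1, 1, 1], [1, 1, 1, 0, 0, 0])
```

With weighted averaging, recall equals accuracy on this aligned prediction, which is consistent with the identical Accuracy and R rows in the tables.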
Table 6. Accuracy, precision, recall, and f-score—Dataset 1 (acceleration in the frequency domain for k = 2).
k = 2 | Fuzzy | K-Means | SOM | Gaussian Mixture | DBSCAN | Hierarchical | Spectral | OPTICS | PSA-MNMF
Accuracy | 52.2% ± 0.0001 | 52.2% ± 0.0002 | 58.6% ± 0.039 | 57.1% ± 0.0001 | 65% ± 0.0001 | 43.4% ± 0.0001 | 59.7% ± 0.041 | 65% ± 0.0001 | 69.6% ± 0.002
P | 70.1% ± 0.0001 | 70.1% ± 0.0001 | 68.7% ± 0.008 | 67.4% ± 0.0001 | 67% ± 0.0001 | 69% ± 0.0001 | 69.1% ± 0.005 | 48.2% ± 0.0001 | 70.4% ± 0.001
R | 52.2% ± 0.0002 | 52.2% ± 0.0001 | 58.6% ± 0.039 | 57.1% ± 0.0001 | 65% ± 0.0001 | 43.4% ± 0.0001 | 59.7% ± 0.041 | 65% ± 0.0001 | 69.6% ± 0.001
F-score | 52.2% ± 0.0001 | 52.1% ± 0.0001 | 59.7% ± 0.041 | 58.4% ± 0.0001 | 67% ± 0.0001 | 38.4% ± 0.0001 | 60.9% ± 0.042 | 54.6% ± 0.0001 | 68.6% ± 0.001
Table 7. Accuracy, precision, recall, and f-score—Dataset 1 (merging of position and acceleration in the frequency domain for k = 2).
k = 2 | Fuzzy | K-Means | SOM | Gaussian Mixture | DBSCAN | Hierarchical | Spectral | OPTICS | PSA-MNMF
Accuracy | 58.1% ± 0.011 | 54.4% ± 0.002 | 61.5% ± 0.033 | 55.5% ± 0.0001 | 35.2% ± 0.0001 | 73.3% ± 0.0001 | 61.2% ± 0.004 | 42.5% ± 0.0001 | 74.3% ± 0.001
P | 70.1% ± 0.001 | 70.1% ± 0.004 | 70.4% ± 0.011 | 65.7% ± 0.0001 | 49.6% ± 0.0001 | 75.3% ± 0.0001 | 70.9% ± 0.003 | 47.8% ± 0.0001 | 76.4% ± 0.002
R | 58% ± 0.011 | 54.4% ± 0.002 | 61.5% ± 0.033 | 55.5% ± 0.0001 | 35.2% ± 0.0001 | 73.3% ± 0.0001 | 61.2% ± 0.004 | 42.5% ± 0.0001 | 74.3% ± 0.0019
F-score | 59% ± 0.013 | 54.8% ± 0.003 | 62.7% ± 0.035 | 56.9% ± 0.0001 | 32.2% ± 0.0001 | 73.7% ± 0.0001 | 62.5% ± 0.005 | 44.5% ± 0.0001 | 74.1% ± 0.001
Table 8. Accuracy, precision, recall, and f-score—Dataset 1 (position in the frequency domain with k = 3).
k = 3 | Fuzzy | K-Means | SOM | Gaussian Mixture | DBSCAN | Hierarchical | Spectral | OPTICS | PSA-MNMF
Accuracy | 48.1% ± 0.016 | 46.9% ± 0.005 | 47.8% ± 0.036 | 44.9% ± 0.0001 | 13.1% ± 0.0001 | 44.9% ± 0.0001 | 48.8% ± 0.009 | 57.5% ± 0.0001 | 72.1% ± 0.002
P | 60% ± 0.016 | 61.3% ± 0.004 | 57.5% ± 0.026 | 61.3% ± 0.0001 | 16.9% ± 0.0001 | 63.9% ± 0.0001 | 56% ± 0.003 | 48.5% ± 0.0001 | 79.8% ± 0.001
R | 48.1% ± 0.016 | 46.9% ± 0.005 | 47.8% ± 0.036 | 44.9% ± 0.0001 | 13.1% ± 0.0001 | 44.9% ± 0.0001 | 48.8% ± 0.009 | 57.5% ± 0.0001 | 72.1% ± 0.002
F-score | 51.1% ± 0.012 | 50% ± 0.005 | 50.3% ± 0.034 | 47.1% ± 0.0001 | 14.1% ± 0.0001 | 46.4% ± 0.0001 | 51.3% ± 0.008 | 44.3% ± 0.0001 | 72.3% ± 0.001
Table 9. Accuracy, precision, recall, and f-score—Dataset 1 (acceleration in the frequency domain for k = 3).
k = 3 | Fuzzy | K-Means | SOM | Gaussian Mixture | DBSCAN | Hierarchical | Spectral | OPTICS | PSA-MNMF
Accuracy | 46.2% ± 0.0001 | 46.5% ± 0.007 | 45% ± 0.026 | 48% ± 0.0001 | 22.1% ± 0.0001 | 43.6% ± 0.0001 | 38% ± 0.0002 | 51.5% ± 0.0001 | 67.30% ± 0.007
P | 67.4% ± 0.0001 | 67.7% ± 0.004 | 56.6% ± 0.072 | 50% ± 0.0001 | 65.8% ± 0.0001 | 69.5% ± 0.0001 | 68.6% ± 0.0002 | 38.1% ± 0.0001 | 70.30% ± 0.03
R | 46.2% ± 0.0001 | 46.5% ± 0.007 | 45% ± 0.026 | 48% ± 0.0001 | 22.1% ± 0.0001 | 43.6% ± 0.0001 | 38% ± 0.0002 | 51.5% ± 0.0001 | 67.3% ± 0.007
F-score | 47.3% ± 0.0001 | 47.5% ± 0.008 | 45.7% ± 0.03 | 48.6% ± 0.0001 | 12.8% ± 0.0001 | 43% ± 0.0001 | 34.3% ± 0.0002 | 41.9% ± 0.0001 | 66% ± 0.007
Table 10. Accuracy, precision, recall, and f-score—Dataset 1 (merging position and acceleration in the frequency domain for k = 3).
k = 3 | Fuzzy | K-Means | SOM | Gaussian Mixture | DBSCAN | Hierarchical | Spectral | OPTICS | PSA-MNMF
Accuracy | 46.2% ± 0.0001 | 46.2% ± 0.005 | 46.6% ± 0.03 | 48.7% ± 0.003 | 22.6% ± 0.0001 | 46.9% ± 0.0001 | 40.3% ± 0.0001 | 46.5% ± 0.0001 | 67.9% ± 0.04
P | 67.7% ± 0.0001 | 68.8% ± 0.007 | 57.5% ± 0.062 | 53.6% ± 0.006 | 27.2% ± 0.0001 | 70.3% ± 0.0002 | 67.5% ± 0.0002 | 35% ± 0.0001 | 70.80% ± 0.053
R | 46.2% ± 0.0001 | 46.2% ± 0.005 | 46.6% ± 0.03 | 48.7% ± 0.003 | 22.6% ± 0.0001 | 46.9% ± 0.0002 | 40.3% ± 0.0001 | 46.5% ± 0.0001 | 67.9% ± 0.01
F-score | 46.4% ± 0.0001 | 46% ± 0.007 | 47.3% ± 0.033 | 49.8% ± 0.003 | 14.8% ± 0.0001 | 45.4% ± 0.0001 | 37.6% ± 0.0001 | 39.2% ± 0.0001 | 66.20% ± 0.03
Table 11. Accuracy, precision, recall, and f-score—Dataset 2 (position in the frequency domain for k = 2).
k = 2 | Fuzzy | K-Means | SOM | Gaussian Mixture | DBSCAN | Hierarchical | Spectral | OPTICS | PSA-MNMF
Accuracy | 50.6% ± 0.021 | 49.1% ± 0.001 | 47.6% ± 0.065 | 56.9% ± 0.001 | 35.9% ± 0.0001 | 40.8% ± 0.0001 | 48.5% ± 0.0001 | 45.2% ± 0.0002 | 60.1% ± 0.001
P | 50.7% ± 0.022 | 49.1% ± 0.001 | 47.6% ± 0.067 | 57% ± 0.001 | 35.7% ± 0.0001 | 40.8% ± 0.0002 | 48.5% ± 0.0001 | 25.9% ± 0.0010 | 65.6% ± 0.003
R | 50.6% ± 0.021 | 49.1% ± 0.001 | 47.6% ± 0.065 | 56.9% ± 0.001 | 35.9% ± 0.0001 | 40.8% ± 0.0001 | 48.5% ± 0.0002 | 45.2% ± 0.0001 | 60.1% ± 0.001
F-score | 50.1% ± 0.017 | 48.9% ± 0.001 | 47.3% ± 0.065 | 56.7% ± 0.001 | 35.7% ± 0.0001 | 40.7% ± 0.0001 | 48.4% ± 0.0002 | 31.4% ± 0.0002 | 59.8% ± 0.001
Table 12. Accuracy, precision, recall and f-score—Dataset 2 (acceleration in the frequency domain for k = 2).
k = 2 | Fuzzy | K-Means | SOM | Gaussian Mixture | DBSCAN | Hierarchical | Spectral | OPTICS | PSA-MNMF
Accuracy | 39% ± 0.0001 | 38.9% ± 0.003 | 38.4% ± 0.035 | 36.2% ± 0.0001 | 46.2% ± 0.0001 | 30.5% ± 0.0001 | 49.4% ± 0.0001 | 48.1% ± 0.0001 | 55.3% ± 0.0008
P | 38.4% ± 0.0001 | 38.4% ± 0.003 | 38.1% ± 0.034 | 36.2% ± 0.0002 | 45% ± 0.0001 | 26.6% ± 0.0002 | 24.8% ± 0.0001 | 24.4% ± 0.0002 | 64.1% ± 0.0007
R | 39% ± 0.0001 | 38.9% ± 0.003 | 38.4% ± 0.035 | 36.2% ± 0.0001 | 46.2% ± 0.0001 | 30.5% ± 0.0001 | 49.4% ± 0.0001 | 48.1% ± 0.0001 | 55.4% ± 0.0008
F-score | 38.2% ± 0.0001 | 38.2% ± 0.003 | 38% ± 0.033 | 36.2% ± 0.0001 | 43.1% ± 0.0001 | 27.5% ± 0.0001 | 33% ± 0.0001 | 32.4% ± 0.0001 | 55.5% ± 0.0008
Table 13. Accuracy, precision, recall, and f-score—Dataset 2 (merging position and acceleration in the frequency domain for k = 2).

| k = 2 | Fuzzy | K-Means | SOM | Gaussian Mixture | DBSCAN | Hierarchical | Spectral | OPTICS | PSA-MNMF |
|---|---|---|---|---|---|---|---|---|---|
| Accuracy | 38.7% ± 0.0001 | 39% ± 0.002 | 37.9% ± 0.046 | 37.5% ± 0.0001 | 47.4% ± 0.0001 | 31.5% ± 0.0002 | 49.4% ± 0.0001 | 56.1% ± 0.0001 | 56.6% ± 0.009 |
| P | 38.2% ± 0.0001 | 38.5% ± 0.002 | 37.6% ± 0.046 | 37.5% ± 0.0002 | 47.1% ± 0.0002 | 28.4% ± 0.0001 | 24.8% ± 0.0002 | 51.9% ± 0.0001 | 57.1% ± 0.0019 |
| R | 38.7% ± 0.0002 | 39% ± 0.002 | 37.9% ± 0.046 | 37.5% ± 0.0001 | 47.4% ± 0.0002 | 31.5% ± 0.0001 | 49.4% ± 0.0001 | 56.1% ± 0.0001 | 56.6% ± 0.001 |
| F-score | 38% ± 0.0001 | 38.3% ± 0.002 | 37.5% ± 0.046 | 37.5% ± 0.0001 | 46.3% ± 0.0002 | 29% ± 0.0002 | 33% ± 0.0002 | 47.4% ± 0.0001 | 55.7% ± 0.002 |
Table 14. Accuracy, precision, recall, and f-score—Dataset 2 (position in the frequency domain for k = 3).

| k = 3 | Fuzzy | K-Means | SOM | Gaussian Mixture | DBSCAN | Hierarchical | Spectral | OPTICS | PSA-MNMF |
|---|---|---|---|---|---|---|---|---|---|
| Accuracy | 50.6% ± 0.019 | 49.1% ± 0.001 | 47.7% ± 0.051 | 56.8% ± 0.002 | 35.9% ± 0.0 | 40.8% ± 0.0 | 48.5% ± 0.013 | 45.2% ± 0.0 | 59.41% ± 0.004 |
| P | 50.6% ± 0.02 | 49.1% ± 0.001 | 47.7% ± 0.052 | 56.9% ± 0.002 | 35.7% ± 0.0 | 40.8% ± 0.0 | 48.5% ± 0.021 | 25.9% ± 0.0 | 75.3% ± 0.0001 |
| R | 50.6% ± 0.019 | 49.1% ± 0.001 | 47.7% ± 0.051 | 56.8% ± 0.002 | 35.9% ± 0.0 | 40.8% ± 0.0 | 48.5% ± 0.013 | 45.2% ± 0.0 | 59.41% ± 0.004 |
| F-score | 50.1% ± 0.016 | 49% ± 0.001 | 47.5% ± 0.05 | 56.6% ± 0.001 | 35.7% ± 0.0 | 40.7% ± 0.0 | 48.4% ± 0.016 | 31.4% ± 0.0 | 58.4% ± 0.0001 |
Table 15. Accuracy, precision, recall, and f-score—Dataset 2 (acceleration in the frequency domain for k = 3).

| k = 3 | Fuzzy | K-Means | SOM | Gaussian Mixture | DBSCAN | Hierarchical | Spectral | OPTICS | PSA-MNMF |
|---|---|---|---|---|---|---|---|---|---|
| Accuracy | 27% ± 0.009 | 26.4% ± 0.01 | 29.1% ± 0.043 | 26.6% ± 0.022 | 23.8% ± 0.0001 | 30.3% ± 0.0001 | 48.9% ± 0.0001 | 14% ± 0.0001 | 54.4% ± 0.0001 |
| P | 51% ± 0.007 | 51.1% ± 0.004 | 47.4% ± 0.06 | 50.8% ± 0.009 | 24.3% ± 0.0001 | 58.5% ± 0.0001 | 24.8% ± 0.0001 | 39.9% ± 0.0001 | 68.1% ± 0.0001 |
| R | 27% ± 0.009 | 26.4% ± 0.01 | 29.1% ± 0.043 | 26.6% ± 0.022 | 23.8% ± 0.0001 | 30.3% ± 0.0001 | 48.9% ± 0.0001 | 14% ± 0.0001 | 54.4% ± 0.0002 |
| F-score | 33.9% ± 0.008 | 33.2% ± 0.007 | 35.8% ± 0.047 | 33.7% ± 0.018 | 24.1% ± 0.0001 | 32.3% ± 0.0001 | 32.9% ± 0.0001 | 17% ± 0.0001 | 55% ± 0.0002 |
Table 16. Accuracy, precision, recall, and f-score—Dataset 2 (merging of position and acceleration in the frequency domain for k = 3).

| k = 3 | Fuzzy | K-Means | SOM | Gaussian Mixture | DBSCAN | Hierarchical | Spectral | OPTICS | PSA-MNMF |
|---|---|---|---|---|---|---|---|---|---|
| Accuracy | 27.9% ± 0.017 | 26% ± 0.016 | 28.7% ± 0.04 | 33.1% ± 0.002 | 32% ± 0.0001 | 30% ± 0.0001 | 48.6% ± 0.0001 | 43.4% ± 0.0001 | 54.2% ± 0.009 |
| P | 49.4% ± 0.015 | 50.6% ± 0.009 | 47.7% ± 0.052 | 46.8% ± 0.004 | 25.6% ± 0.0001 | 58.7% ± 0.0001 | 24.6% ± 0.0001 | 33.3% ± 0.0001 | 70.9% ± 0.015 |
| R | 27.9% ± 0.017 | 26% ± 0.016 | 28.7% ± 0.04 | 33.1% ± 0.002 | 32% ± 0.0001 | 30% ± 0.0001 | 48.6% ± 0.0001 | 43.4% ± 0.0001 | 54.4% ± 0.07 |
| F-score | 34.4% ± 0.014 | 32.8% ± 0.013 | 35.7% ± 0.043 | 38.4% ± 0.003 | 28.5% ± 0.0001 | 32.4% ± 0.0001 | 32.6% ± 0.0001 | 31.5% ± 0.0001 | 54.1% ± 0.002 |
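As a point of reference for the metrics reported in the tables above, the following is a minimal Python sketch of how accuracy, precision, recall, and f-score can be computed for an unsupervised clustering evaluated against ground-truth labels. It is not the authors' code: cluster IDs are arbitrary, so this sketch assumes each cluster is mapped to the majority ground-truth class among its members before the standard weighted-average metrics are computed (the same procedure applies for k = 2 and k = 3). The arrays `y_true` and `y_cluster` below are toy illustrative values, not data from the U-limb datasets.

```python
import numpy as np

def cluster_scores(y_true, y_cluster):
    """Map each cluster to its majority ground-truth class, then return
    (accuracy, precision, recall, f-score) with class-weighted averaging."""
    y_true = np.asarray(y_true)
    y_cluster = np.asarray(y_cluster)

    # Relabel: each cluster ID becomes the most frequent true class inside it.
    y_pred = np.empty_like(y_true)
    for c in np.unique(y_cluster):
        members = y_cluster == c
        y_pred[members] = np.bincount(y_true[members]).argmax()

    acc = float(np.mean(y_pred == y_true))
    p = r = f = 0.0
    for k in np.unique(y_true):
        tp = np.sum((y_pred == k) & (y_true == k))
        pk = tp / max(np.sum(y_pred == k), 1)   # precision for class k
        rk = tp / np.sum(y_true == k)           # recall for class k
        fk = 2 * pk * rk / (pk + rk) if pk + rk else 0.0
        w = np.mean(y_true == k)                # class weight (support fraction)
        p += w * pk
        r += w * rk
        f += w * fk
    return acc, p, r, f

# Toy example: k = 3 clusters vs. hypothetical severity labels 0/1/2.
y_true = np.array([0] * 5 + [1] * 5 + [2] * 5)
y_cluster = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 0])
print(cluster_scores(y_true, y_cluster))  # all four metrics are 0.8 here
```

Majority-vote relabelling is one common convention for scoring clusterings against labels; an alternative is optimal one-to-one assignment via the Hungarian algorithm, which can give slightly different (never higher) accuracy.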
Razfar, N.; Kashef, R.; Mohammadi, F. Automatic Post-Stroke Severity Assessment Using Novel Unsupervised Consensus Learning for Wearable and Camera-Based Sensor Datasets. Sensors 2023, 23, 5513. https://doi.org/10.3390/s23125513

