Article

Analysis of Upper-Limb and Trunk Kinematic Variability: Accuracy and Reliability of an RGB-D Sensor

1 Institute of Intelligent Industrial Systems and Technologies for Advanced Manufacturing (STIIMA), Italian Council of National Research (CNR), Via Alfonso Corti 12, 20133 Milan, Italy
2 Politecnico di Milano, 20133 Milano, Italy
3 Electronics, Information and Bioengineering Department, Politecnico di Milano, 20133 Milano, Italy
* Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2020, 4(2), 14; https://doi.org/10.3390/mti4020014
Submission received: 2 March 2020 / Revised: 1 April 2020 / Accepted: 23 April 2020 / Published: 26 April 2020

Abstract

In the field of motion analysis, the gold standard devices are marker-based tracking systems. Despite being very accurate, their cost, stringent working environment requirements, and long preparation time make them unsuitable for small clinics as well as for other scenarios such as industrial applications. Since human-centered approaches are being promoted even outside clinical environments, the need for easy-to-use solutions to track human motion is topical. In this context, cost-effective devices, such as RGB-Depth (RGB-D) cameras, have been proposed, aiming at a user-centered evaluation in rehabilitation or of workers in industrial environments. In this paper, we aimed to compare marker-based systems and RGB-D cameras for tracking human motion. We used a Vicon system (Vicon Motion Systems, Oxford, UK) as the gold standard for the analysis of the accuracy and reliability of the Kinect V2 (Microsoft, Redmond, WA, USA) in a variety of gestures in the upper-limb workspace, targeting rehabilitation and working applications. The comparison was performed on a group of 15 healthy adult subjects. Each subject had to perform two types of upper-limb movements (point-to-point and exploration) in three workspace sectors (central, right, and left) that might be explored in rehabilitation and industrial working scenarios. The protocol was conceived to test a wide range of the field of view of the RGB-D device. Our results, detailed in the paper, suggest that RGB-D sensors are adequate for tracking the upper limb for biomechanical assessments, even though relevant limitations can be found in the accuracy and reliability of some specific degrees of freedom and gestures with respect to marker-based systems.

1. Introduction

Human motion analysis (HMA) is a domain in the biomechanics field aiming to quantitatively describe human movement, which is traditionally acquired by means of motion tracking technologies. Applications of HMA span many areas, ranging from clinics and motor rehabilitation to industrial and entertainment fields [1]. In the area of human motor rehabilitation, the data obtained from motion tracking are used for medical treatment plans and evaluations [2]. The most accurate standard systems in this field are marker-based optoelectronic tracking equipment. By placing markers on anatomical landmarks on the subject’s body, such systems allow a precise reconstruction of kinematic parameters. Many studies describe a variety of uses of optoelectronic tracking equipment [3,4,5,6,7,8,9]. This high precision, however, is achieved with time-consuming marker-positioning procedures, which constrain usage to controlled environments such as laboratories, and comes at a high cost. These issues limit the use of such systems to scientific research laboratories and hospitals, preventing their use in small clinics, home rehabilitation, and industrial applications, whose environments are less controllable. Because of these limitations, a need for systems requiring faster setup times, more flexible acquisition environments, and lower costs has arisen [1]. During the last two decades, many new instruments have been developed that are able to provide refined kinematic and/or dynamic data of human motion in three dimensions (3D), such as wearable sensors or optical motion acquisition systems, some of which have been tested in clinical settings and rehabilitation scenarios [10,11]. The development of inertial sensors and RGB-depth (RGB-D) sensors has allowed the implementation of devices that meet these requirements. RGB-D sensors manage this through a sensor fusion mechanism and allow the reconstruction of human movement through video frame segmentation coupled with depth analysis [12]. A popular example of an RGB-D device is Microsoft’s Kinect V2. It features a quick set-up time, acquires large ranges of motion, and provides good motion tracking capabilities [13,14]. Some studies confirmed a good reliability of devices such as the Kinect and Kinect V2 in recording the kinematics of the upper limb when compared to marker-based systems [15,16]. Applications of low-cost RGB-D systems such as the Kinect to the field of rehabilitation and health include a variety of studies, including biomechanical assessments [17,18,19,20,21,22,23,24,25,26,27], translation of quantitative clinical scales [28,29,30], post-stroke patients’ assessments [31,32,33,34], validation for usage in home environments [35,36,37], and assisted living [38].
Only recently has human motion tracking acquired visibility in industrial contexts. Recent roadmaps [39,40] suggested the need for human-centered approaches and solutions for improving workers’ health and injury prevention in the workplace, and for motion tracking applicable to assessing users in working environments [41]. Thus, in recent years, a growing interest in motion tracking technologies has arisen, since they have become widely available and applicable outside of standard clinical environments. A first field of application is injury prevention: it has been suggested that, as most manual labor in industry requires elaborate movements in unmonitored environments, this could lead to injuries and work-related disorders [42]. Secondly, motion tracking systems could provide data on the stress and strain on the body and in this way improve workstation safety [42], help avoid unnecessary work risks, and ensure safe and successful human–robot interactions [43,44]. Moreover, tracking technologies have been used to provide designers with data that help them improve products, making them easier to use and more comfortable for people [45,46]. An offline use of tracking is found in a recent work [47] that presented a method for changing a manipulator’s behavior in volumes where there is a higher probability of finding a human cooperator; occupied volumes were computed based on pre-recorded data of human motion.
Another important aspect of both industrial and rehabilitative motion capture deals with possibly obtrusive settings, such as when interacting with a robot. In some human–robot interactions, the machine is required to co-operate with or mimic operators; motion tracking is essential in order to avoid harmful or unwanted interactions [48]. Some works have exploited this concept: for instance, methods of motion tracking integrated in the context of human–robot interaction have been proposed [49].
Lastly, it is noteworthy that many recent laboratory studies used bio-inspired approaches based on tracking systems to assess human–robot interaction when testing devices for assisting manual work [50]. These recent works aim to translate biomedical technologies into industrial environments.
Given the major application of human motion tracking in clinical settings and the growing interest in human-centered approaches even outside of the biomedical field, some studies were performed in order to assess the usability of RGB-D devices and to evaluate their accuracy and reliability with respect to gold standard systems [16,18,51,52,53]. Comparing a marker-based system with the Kinect [16,18], the authors showed good sensitivity of the RGB-D sensor, even though they concentrated on a small sample of subjects and only on reaching movements. In one study [51], the authors showed a high correlation between the gait parameters registered with a marker-based system and with the Kinect device, concluding that the Kinect can be used to track gait parameters, although for its use in a clinical environment they suggested that further improvements in its sensitivity are needed. The accuracy of the Kinect V2 in full-body acquisitions, concentrating on walking and balance control, was also assessed [52], concluding that the Kinect V2 could achieve a good level of precision for motion analysis. Another work [53] concentrated only on some degrees of freedom of the upper limb; the authors tested the accuracy of the sensor and found that the RGB-D sensor could approximate the results of the marker-based system, even though the range of movements analyzed was quite limited. A comprehensive assessment [54] analyzed in detail the performance of the Kinect with respect to the Vicon system. The authors concluded that the system is suitable for motion tracking, but the analyzed degrees of freedom were associated with specific gestures conceived to emphasize each specific motor primitive, a favorable condition for tracking. In general, we found only a limited number of studies assessing the validity of RGB-D sensors for upper-limb movement in multiple workspace sectors resembling the variety of possible tasks during daily-life or working activities. In particular, we noted that previous studies in this field concentrated mainly on paradigmatic rehabilitation tasks, gait, or functional movements, rather than mapping the variety of upper-limb gestures typical of working and real-life applications. Consequently, given that RGB-D sensors can promote a human-centered perspective in the context of motor evaluation, in this pilot study we implemented a comparison between the Kinect V2 (as a well-known RGB-D sensor widely used for motion tracking) and an optoelectronic marker-based Vicon system (as the gold standard). The aim was to evaluate the accuracy and reliability of upper-limb and trunk movement tracking in an extended simulation of gestures typical of working environments, addressing in particular novel applications related to the scaling-up of rehabilitation assessments to the industrial field.

2. Materials and Methods

2.1. Participants

Fifteen young adults were recruited (8 females, 7 males; age range 19–35 years; mean age 24 ± 3 years; mean height 1.72 ± 0.10 m; mean weight 63 ± 12 kg). Participants had no musculoskeletal impairments affecting their performance. The studies involving human participants were reviewed and approved by the CNR Ethical Committee (Rome, Italy, protocol number 0044338/2018). The participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.

2.2. Equipment

The tests were performed in the motion acquisition laboratory of the Italian Council of National Research (CNR) in Lecco, Italy. The laboratory was equipped with:
  • One Vicon Vero system, composed of 10 infrared cameras, with a set of reflective markers for motion tracking. In this experimental condition, 34 markers were used (25 for the upper-limb model, 9 for the target);
  • One Kinect V2.0 device to track the human body in space. The Kinect uses an RGB camera for frame acquisition at a 30 Hz sampling frequency and a time-of-flight infrared camera for depth sensing; exhaustive details on the Kinect systems can be found in previous works [55,56]. The Kinect was mounted on an easel at about 2.5 m from the recorded scene for best tracking [52];
  • Two general-purpose computers: the first one connected to the Vicon system and containing the software for the acquisition and pre-processing of the tracking data, and the second one running custom-made software written in C#, which communicated directly with the Kinect V2 device and generated a file containing the 25 points of interest composing the Kinect SDK skeleton;
  • One 60 cm diameter circular target with 9 points of interest named N, NE, E, SE, S, SW, W, NW, and O. This target was used as a reference for the subjects to execute point-to-point and workspace exploration movements [57].
A schematic of the set-up is portrayed in Figure 1.

2.3. Movement Selection

Each subject was asked to execute two types of movements with the right arm. The selected tasks were point-to-point and hand exploration movements [57], commonly adopted in the context of motor rehabilitation. These gestures are extrapolated from daily-life activities and, at the same time, allow exploring the available upper-limb workspace up to its limits.
The point-to-point movement (Figure 2, top row) is a paradigmatic movement used by clinicians to test the motor capabilities of patients. It was chosen for its wide use in rehabilitation contexts [28]. The movement involves multi-joint coordination and is often used in daily-life activities. The subject started from a predefined resting position with the upper arm leaning along the body, reached toward a specific point on the circular target, and then returned to the starting position. The first movement, performed by each subject, was reaching towards the ‘O’ point, then back to the resting position. Afterwards, the subject pointed to ‘NE’, returned to the resting position, and continued the motion pattern clockwise until reaching the ‘N’ point and moving back.
The hand exploration task (Figure 2, lower row) is another gesture conceived to simulate movements on a working surface and object displacement for distal limb coordination. Again, the circular target was used as a reference to drive the subject’s movement. The objective of each participant was to move the hand radially across the target, starting from the resting position and going to different points on the circumference. The resting position for this movement was upright with the hand pointing towards the ‘O’ point. The first movement was towards ‘NE’, then back to ‘O’, and the motion pattern proceeded clockwise until the subject reached ‘N’ and finally returned to ‘O’.

2.4. Experimental Set-Up

The equipment was placed in the acquisition volume of the Vicon system. The subject faced the Kinect frontally. In order to test the field of view of the device, the circular target was placed in three different positions: on the right of the subject; on the left; and in front of the participant, horizontally. Regardless of the position of the target, movements were always performed with the right upper limb. For the right and left sectors, the target was placed on a tripod and adjusted so that the ‘O’ point was at the subject’s shoulder height; in the central sector, the target was placed horizontally on a table. The workspace sectors are depicted in Figure 2.
Two general-purpose computers—operating Vicon and Kinect, respectively—were positioned outside of the acquisition volume so as not to interfere with the acquisition. The only objects present in the workspace were a table, the target, the two tripods, and the Kinect.

2.5. Acquisition

Before each recording session, the subject was equipped in accordance with the upper limb model designed for the Vicon system. Five markers were placed on the trunk, one on each shoulder, three on each upper arm, two on each elbow, and four on each forearm and wrist (Figure 3), for a total of 25 markers [53]. Meanwhile, the Kinect V2 warmed up for about 20 min.
The datasets were acquired in the previously described environment with both the Kinect and the Vicon. Since the software for the two systems ran on different computers, data acquisition was not synchronized during the recordings. We chose to always start the recording with the Vicon and afterwards start the Kinect data stream. At the signal of the operator, the subject started the movement.
The two different acquisition systems provided specific data structures. The Kinect dataset consisted of the Microsoft SDK 2.0 skeleton 3D data from 25 anatomical points (Figure 3). The custom-made software used to interact with the Kinect provided a file readable with MATLAB (Mathworks, Natick, MA, USA). This file contained the 3D coordinates of the SDK skeleton’s points with respect to the Kinect’s reference system.
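For illustration only, the following MATLAB sketch shows how such a file could be loaded and reshaped, under the assumption (ours, not stated in the paper) that the export contains one row per frame with 25 joints × 3 coordinates; the file name and layout are hypothetical:

    % Hypothetical export: one row per frame, 25 joints x 3 coordinates (x, y, z)
    % expressed in the Kinect reference system, i.e., 75 columns per row.
    raw = readmatrix('kinect_skeleton.csv');   % assumed file name and format
    nJoints = 25;
    skel = reshape(raw', 3, nJoints, []);      % 3 x 25 x nFrames
    skel = permute(skel, [3 2 1]);             % nFrames x 25 joints x 3 coordinates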
The dataset registered with the Vicon system needed preprocessing in the Nexus virtual environment (Vicon software), including marker tracking and labelling. Afterwards, using the upper limb model, the software could accurately reconstruct the positions of the glenohumeral joint center (shoulder center of rotation), the humeroulnar joint center (elbow center of rotation), and the radiocarpal joint center (wrist joint center) for both left and right upper limbs, as well as estimate the position of the trunk [58]. This tracking allowed the estimation of 11 degrees of freedom (details in Section 2.6). After preprocessing, the data were saved in c3d file format and imported into MATLAB.
In order to allow test–retest comparison, each acquisition was repeated twice; the second trial was performed about 2 min after the first, similarly to a previous analogous study [27].

2.6. Data Analysis

All computed angular data (Vicon and Kinect) were filtered with a low-pass Butterworth filter at 5 Hz [59]. Then, since the two systems acquired data at different sampling frequencies (100 Hz Vicon; 30 Hz Kinect) and acquisition times (the Vicon started about 1 s before the Kinect and ended about 1 s after it), the two datasets were aligned. The first step was the detection of movement phases: each acquisition was subdivided into sub-movements (nine sub-movements for the forward phases of the arm point-to-point task; eight forward phases and eight backward phases for workspace exploration). The segmentation into movement phases was achieved as follows. The curvilinear abscissa and velocity profile of the wrist marker (end effector) were computed. Then, the onset and offset of each phase were detected by means of an algorithm based on thresholding and local-minimum detection on the velocity profile. After onset alignment, the data from each phase (O->NE, NE->O…) were resampled so that each phase was 100 samples long, to allow inter-variable, inter-sector, and inter-subject comparisons.
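As an illustrative MATLAB sketch of this chain (not the exact code used in the study): the filter order, the 10% velocity threshold, and all variable names are our assumptions.

    % Zero-lag low-pass filtering of one angular trace at 5 Hz.
    fs = 100;                                 % Vicon sampling frequency [Hz]
    [b, a] = butter(4, 5/(fs/2));             % 4th order is an assumption
    angle_f = filtfilt(b, a, angle_raw);      % zero-phase Butterworth filtering

    % Wrist (end-effector) speed profile for phase segmentation.
    vel = sqrt(sum(diff(wrist).^2, 2)) * fs;  % wrist is nFrames x 3 [mm]

    % Phase onsets/offsets via thresholding of the speed profile.
    thr = 0.1 * max(vel);                     % illustrative 10% threshold
    moving = vel > thr;
    onsets  = find(diff([0; moving]) == 1);
    offsets = find(diff([moving; 0]) == -1);

    % Time-normalize each phase to 100 samples for pooled comparisons.
    for k = 1:numel(onsets)
        seg = angle_f(onsets(k):offsets(k));
        phase{k} = interp1(linspace(0, 1, numel(seg)), seg, linspace(0, 1, 100));
    end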
In order to compare the two systems, the positions of the centers of rotation of the right upper limb were used to compute a set of 11 variables, namely the rotational degrees of freedom (DoF), which described the upper-limb motion. The following degrees of freedom were considered (Figure 4): shoulder elevation, shoulder rotation along the vertical axis, shoulder internal–external rotation, elbow flexion–extension, hand flexion–extension, hand pronation–supination, hand deviation, scapular elevation, trunk torsion, trunk anterior–posterior flexion, and trunk medial–lateral flexion. The direction of the arrows in Figure 4 indicates ‘positive’ angles. In order to avoid problems of reference system registration, all variables were computed relative to a subject-specific reference system.
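For illustration, the sketch below shows how two of the segment unit vectors and one such DoF (elbow extension; see Equation (A4) in Appendix A) could be derived frame-by-frame from the reconstructed joint centers; the variable names are ours, not from the paper:

    % Joint-center trajectories (nFrames x 3): glenohumeral (shoulder),
    % humeroulnar (elbow), and radiocarpal (wrist) centers of rotation.
    a_vec  = elbow - shoulder;                   % upper-arm segment vector
    a_hat  = a_vec ./ vecnorm(a_vec, 2, 2);      % unit vector, frame by frame
    af_vec = wrist - elbow;                      % forearm segment vector
    af_hat = af_vec ./ vecnorm(af_vec, 2, 2);
    elbow_ext = acosd(sum(a_hat .* af_hat, 2));  % elbow extension [deg], Eq. (A4)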

2.7. Outcome Measures and Statistical Analysis

For measuring Kinect accuracy, the chosen comparison metric was the angular distance (ad) computed, phase-by-phase, between the corresponding degrees of freedom obtained with the Vicon and Kinect systems. The implicit assumption was that the Vicon was the gold standard. First, directional pie-charts illustrating the ad for each degree of freedom (DoF) considered were generated, accounting for directions and for each of the sectors. For point-to-point movements, the directional pie-charts had 8 cardinal directions, referring to the forward phases of the movements; for exploration movements, we considered also the backward phases, and thus the resolution of the pie-charts was doubled. A heatmap was designed to show the magnitude of the ad between Vicon and Kinect.
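The paper does not report a closed-form definition of ad; as a hedged sketch, one plausible reading is the mean absolute difference between the two time-normalized traces of a phase:

    % theta_vicon, theta_kinect: 1 x 100 time-normalized traces of one DoF
    % in one movement phase [deg]; ad is one scalar per DoF and phase.
    ad = mean(abs(theta_vicon - theta_kinect));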
Then, all the ad data were pooled into three matrices, one matrix per sector, each containing the ad for each DoF and computed phase. These three matrices were used as input for a two-way ANOVA test, with DoF (11 levels) and sector (3 levels: central, right, and left) as factors, in order to see how each DoF was tracked with the Kinect with respect to the Vicon and to quantify the performance of the RGB-D sensor in each degree of freedom and sector. The p-value (p) determined the probability of obtaining test results at least as extreme as the results actually observed during the test, assuming that the null hypothesis is correct. The tests were repeated separately for point-to-point and exploration movements. Post-hoc tests (MATLAB function ‘multcompare’) were also implemented to determine which degrees of freedom and/or sectors differed from the others. The post-hoc test used Tukey’s honest significant difference (HSD) criterion, which compares the variables of interest two at a time under the assumption of equal variance and statistical independence, offering a pairwise confidence interval of similarity. The level of significance for all statistical tests was 0.05.
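A minimal MATLAB sketch of this statistical step, implemented here with ‘anovan’ (MATLAB’s n-way ANOVA) applied to two grouping variables; the pooled vector and label names are illustrative assumptions:

    % ad_all: vector of pooled angular distances; dof_label and sector_label:
    % grouping variables with 11 and 3 levels, respectively.
    [p, tbl, stats] = anovan(ad_all, {dof_label, sector_label}, ...
                             'varnames', {'DoF', 'Sector'});
    % Tukey HSD post-hoc comparisons, one call per factor.
    c_dof    = multcompare(stats, 'Dimension', 1, 'CType', 'hsd');
    c_sector = multcompare(stats, 'Dimension', 2, 'CType', 'hsd');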
For measuring Kinect repeatability, the chosen comparison metric was the angular distance (ad) computed between the corresponding degrees of freedom, phase-by-phase, obtained with the Kinect system in the test and re-test trials.
To show this, directional heatmap pie-charts depicting the ad between Kinect performance in the test and retest conditions were plotted. As in the accuracy analysis, the pie-charts for point-to-point movements had eight cardinal directions, referring to the forward phases, while for exploration movements the backward phases were also considered and the resolution of the pie-charts was doubled. A heatmap was designed to show the magnitude of the ad between test and retest.
In order to provide a metric for the reliability of the statistical results, a test-retest analysis was also performed, taking advantage of the fact that each acquisition was repeated twice. To quantify reliability through the test-retest datasets, the intraclass correlation coefficient (ICC) between the datasets was chosen [54]. Post-hoc tests (Tukey’s HSD, via the MATLAB function ‘multcompare’, as described above) were also implemented to determine which degrees of freedom and/or sectors differed from the others. The level of significance for all statistical tests was set to 0.05.
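The paper does not specify which ICC form was used; as one hedged example, a one-way random-effects ICC(1,1) on the test-retest data could be computed as follows:

    function r = icc_oneway(X)
    % One-way random-effects ICC(1,1). X is n x k: n observations (e.g., DoF/phase
    % combinations) by k repeated sessions (here k = 2: test and retest).
    [n, k] = size(X);
    grand = mean(X(:));
    MSB = k * sum((mean(X, 2) - grand).^2) / (n - 1);     % between-rows mean square
    MSW = sum(sum((X - mean(X, 2)).^2)) / (n * (k - 1));  % within-rows mean square
    r = (MSB - MSW) / (MSB + (k - 1) * MSW);
    end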

3. Results

3.1. Marker-Based System vs. RGB-D Sensor

In this section, Kinect accuracy with respect to the marker-based system and the statistical results obtained for all subjects are presented.
A series of visual representations of the circular target illustrates, through heatmaps, the performance of the Kinect against the Vicon system for each specific DoF, in each sector, and with respect to the directionality of the target. The mean accuracy of the Kinect device (across the two repetitions), averaged across subjects, obtained using the protocol established in this study, is reported. The averaged error (all subjects) for point-to-point movements is reported for the central (Figure 5a), right (Figure 5b), and left (Figure 5c) sectors; the corresponding results for exploration movements are reported for the central (Figure 6a), right (Figure 6b), and left (Figure 6c) sectors.
The results presented in this section are related to the test dataset (the comparison between test and retest is presented in Section 3.2). The two-way ANOVA showed that, on average, seven DoF presented an average error lower than 10°. Two DoF presented an average error in the range of 10°–20°, and one had ad > 20° for both executed movements (point-to-point and exploration). Furthermore, one DoF had an error greater than 20° and a different mean between point-to-point and exploration movements.
A more in-depth analysis of the arm point-to-point movement revealed that trunk torsion, trunk antero-posterior flexion, trunk medio-lateral flexion, shoulder elevation, shoulder rotation along the vertical axis, elbow extension, and scapular elevation were tracked with an error lower than or equal to 5°. The hand deviation and hand flexion–extension angles were tracked with an error between 5° and 15°. Shoulder internal–external rotation showed a mean error of about 20°. Lastly, the hand pronation–supination angle had a mean error greater than 20°. Furthermore, from the analysis of the point-to-point movement, no statistical difference was found between the mean of shoulder elevation and the means of shoulder rotation along the vertical axis (p = 0.96), scapular elevation (p = 0.99), trunk torsion (p = 0.78), trunk medio-lateral flexion (p = 1), elbow extension (p = 0.59), and trunk antero-posterior flexion (p = 0.61), while shoulder internal–external rotation, hand pronation–supination, hand flexion–extension, and hand deviation had statistically different means from one another and from the other angles (p < 0.001). Considering the comparison between sectors, we found that the average error in the right sector was smaller than in the other two sectors, ranging between 8° and 9°, and was statistically different from the left and central sectors (p = 0.012 and p = 0.0053, respectively). The left and central sectors had similar means and were not statistically different (p = 0.96); their mean error varied between 9° and 10°.
The same test performed for the exploration movement provided similar results regarding the average variations between the DoF, but with different statistical significance. We could identify some groups. Shoulder elevation, shoulder rotation along the vertical axis, and elbow extension were grouped together; in fact, no statistical difference was found with respect to shoulder elevation (shoulder rotation along the vertical axis, p = 1; elbow extension, p = 0.99), while these DoF showed statistical differences from the rest of the variables (p < 0.01 for all three). Hand pronation–supination and hand flexion–extension were not statistically different from each other (p = 0.87) but were statistically different from all the others (p < 0.001). Scapular elevation, trunk torsion, and trunk medio-lateral flexion were not statistically different (p = 0.94 with respect to trunk torsion); meanwhile, trunk antero-posterior flexion had a mean not statistically different only with respect to trunk medio-lateral flexion (p = 0.99), being statistically different with respect to trunk torsion (p = 0.05) and scapular elevation (p ≤ 0.001). Lastly, shoulder internal–external rotation and hand pronation–supination had statistically different means when compared with each other (p < 0.0001) and with the others (p < 0.001). In the case of exploration, the inter-sector error was distributed differently: all sectors had similar mean errors, central (11.13°), right (11.39°), and left (11.18°), with no statistical differences between right and left (p = 0.79), right and central (p = 0.68), or central and left (p = 0.98).

3.2. RGB-D Sensor Reliability

In this section, we describe the test-retest as a measure of reliability of the RGB-D sensor. An overview of the results is available in Table 1 and Table 2.
A series of visual representations of the circular target illustrates through heatmaps the repeatability of the Kinect for each specific DoF in each sector with respect to the directionality of the target. First, the mean angular distance for point-to-point movements in the central sector is reported (Figure 7a), followed by the right (Figure 7b) and left (Figure 7c) sectors. The same results are also reported for exploration for the central (Figure 8a), right (Figure 8b), and left (Figure 8c) sectors.
For a more general view of reproducibility, the intraclass correlation coefficient between the datasets was also computed separately for each sector and each movement. The ICC for the point-to-point movement was 0.81 (p = 0) for the right sector, 0.82 (p = 0) for the central sector, and 0.73 (p < 0.001) for the left sector. Meanwhile, for the exploration movement, the ICC was 0.84 (p = 0) for the right sector, 0.75 (p = 0) for the central sector, and 0.62 (p < 0.001) for the left sector.
Figure 9 and Figure 10 portray visual representations of the statistical analysis of point-to-point and exploration movements, respectively. Furthermore, statistical results between the test and retest datasets are provided. As seen in Figure 9, and as reported with the ICC, for the point-to-point movement consistency between the results was found both in mean differences and in significance. Figure 10 shows that consistency between the degrees of freedom for the right and central sectors was preserved; meanwhile, the left sector showed a slight difference, although not statistically significant with respect to the other sectors.

4. Discussion

4.1. Summary of the Results

The RGB-D device was in general able to track upper-limb and trunk motion for the majority of the considered articular angles. However, the ad found in shoulder internal–external rotation and forearm pronation–supination was not negligible. If one can accept this error, the Kinect V2 is useful for tracking the glenohumeral joint center motion, the humeroulnar joint center motion, and the trunk movements, especially in the right and central sectors while performing the arm point-to-point movement. On the contrary, our results recommend more caution when recording exploration movements, since the mean error per sector was slightly higher than for the point-to-point movement.

4.1.1. Degrees of Freedom

Most degrees of freedom were tracked with an error below 10°. This result is consistent with previous studies [18], in which the variables describing shoulder elevation and elbow extension were tracked with an error of around 5°, but in a more constrained scenario (mono-directional reaching movements). A test of accuracy on healthy people, on indexes related to rehabilitation, revealed that most clinical parameters presented absolute agreement and no systematic bias between RGB-D and marker-based systems [52]. Other similar investigations [53] found that most of the extracted parameters were tracked with an RMSE below 10°, which is comparable to the results obtained through the methodologies illustrated in this work. Other studies reported even more precise results for shoulder tracking, but focused on a restricted range of postures [20]. The only degrees of freedom presenting low accuracy were the ones related to the hand. This finding suggests that one should be careful when registering data with RGB-D cameras in contexts of high motion variability or unfavorable postures. In fact, in the current study, two angles presented a very high error: shoulder internal–external rotation and hand pronation–supination. The first is critical since it is based on the projection of the forearm on the transversal plane of the upper arm. This projection highly depends on the angle of elbow extension: when the angle between the forearm and the upper arm approaches 0°, the shoulder internal–external rotation cannot be computed with high reliability, since the arm is approaching a kinematic singularity. Moreover, in such configurations, small displacements in joint center reconstruction can produce unpredictable variations in the extracted angle. In this study, this condition is stressed, since the limb is often in a fully extended, singular posture (especially in exploration movements). This is most likely the reason for finding, in general, worse results with respect to previous works [20,54]. It is likely that, in a wide exploration of the workspace, the tracking is worse than in more constrained scenarios.
Since the morphology of the reconstructed skeletons differs between the Vicon and the Kinect V2 [54], the pronation–supination angle is critical too. Using the Kinect SDK skeleton, this variable was computed using the angular variation of a unit vector starting from the wrist and pointing towards the thumb; the angular variation of this vector around the axis of the forearm could be assumed to be the hand pronation–supination. A critical issue that made the extraction of this angle difficult was the poor quality of the thumb tracking with the Kinect. Our results are in partial accordance with the findings of a comprehensive previous study [54], revealing that fine movements related to the hand could benefit from ad-hoc hand models, which were not implemented in this study. Moreover, in the assessment of body extremities (hands and feet), previous works also found lower signal-to-noise ratios [52], leading to poor tracking. Furthermore, the movements we analyzed were very demanding for reliable hand pronation–supination computation.
In a more general discussion, a relevant finding was that the tracking performance appears to be better for movements with wide ranges of motion (point-to-point movements, for example, were tracked better than hand exploration movements). Arguably, Microsoft’s built-in algorithms make it possible to better reconstruct wider movements that have a higher articular range. These results should be taken into account when tracking people in environments with high tracking precision requirements, or where high accuracy is required in fine movements.

4.1.2. Sectors

So far, RGB-D sensors such as the Kinect V2 have been tested mainly considering the performance of the sensor in tracking gestures conceived to emphasize specific degrees of freedom [54], or during rehabilitation-oriented tasks [18,33,36,51]. In this study, our novel contribution was to characterize the performance of the RGB-D sensor in upper-limb movements in a context of continuous variability, which is naturally found in daily-life tasks and working activities [57].
The accuracy in tracking workspace sectors differed between the two analyzed movements. For the point-to-point movement, there was a statistically significant difference for the left and central sectors, in which the tracking performance of the Kinect was poor with respect to the right sector. This deterioration in tracking performance was probably due to the fact that, even when the target was positioned to the left or in front of the subject, the movement was performed with the right arm. Consequently, during the execution of the tasks in these conditions, part of the subject’s body was partially hidden by the arm for significant periods of time, causing the Kinect device to estimate the positions of the occluded points and probably introducing further error. This experimental condition is common to RGB-D devices and should be considered one of the main limitations to the adoption of such sensors with respect to marker-based systems. Depending on the context, the use of ad-hoc algorithms might be considered to solve this issue.
For the exploration movement, we found that the performances in the left and central sectors were comparable, while for the right sector they were slightly worse. However, the ICC for the right and central sectors proved to be higher, and thus these two sectors provided more consistent and reliable data. This suggests that the central sector might be the most appropriate one in which to register exploration movements, in order to have the highest reliability and lowest error. These results indicate that, in the presence of obstruction, the use of RGB-D devices can be more critical. However, we underline that the overall difference is limited to less than 3°, even when performances were not statistically repeatable.
Our results might be of interest to the scientific community also for estimating more reliable configurations for accurate gesture recognition in several fields of application [60], and in particular for the use of the upper limb in the contexts of assisted living, clinics, and working environments [38].

4.1.3. RGB-D Sensor: Reliability

The test-retest analysis allowed us to characterize the reliability of the RGB-D sensor. We found that repeatability was quite high in both point-to-point and exploration movements, except for the hand pronation–supination degree of freedom. Our results are generally in line with previous findings. In fact, preliminary studies on healthy people reported that RGB-D sensors offer the same repeatability as marker-based systems [16]. Similar results were found in a comprehensive rehabilitation-oriented study analyzing whole-body movement, and especially walking; the authors concluded that the repeatability analysis yielded rather similar results for both the Kinect V2 and the Vicon [52]. The results of the current study agree with previous ones showing acceptable reliability and sensitivity across sessions for many parameters measured by the Kinect, both in healthy subjects and in stroke patients [61]. All these studies considered gestures strongly coupled with degrees of freedom: the slightly more cautious results achieved in this study are probably related to the choice of a more demanding protocol (choice of gestures, variability, and configurations) that stressed the tracking capability of the sensor on generic movements. No remarkable differences were found in repeatability across sectors, even though, as expected, the left sector showed slightly worse repeatability, probably due to obstruction.

4.1.4. Applications in Real Scenarios

As highlighted in the literature, RGB-D devices can register upper-limb gestures and have already been used in rehabilitative applications for physical assessment or training, such as bi-dimensional movements in the sagittal plane [28], tele-rehabilitation scenarios [62], or even the three-dimensional range of motion of the upper limb [63]. Their use in this domain is a great advantage for clinicians and patients alike. For example, ranges of motion provide sufficiently accurate approximations of the joint positions, giving clinicians an objective indicator of the well-being of patients. From the patient’s point of view, it might make home-based rehabilitative treatments more popular.
On the other hand, industrial applications are oriented towards the monitoring of workplace physical occupation, safety, and injury prevention. Movements on a workbench include many simple gestures, such as reaching for objects, and more complex ones, such as manipulating items on the table or interacting with machines using simple and controlled movements such as hand-over gestures [48,49,50]. In more general industrial contexts, RGB-D sensing has already been used to study workers’ safety in working environments. In this context, we mention that, in the framework of the recently started European H2020 research project “Mindbot” [64], RGB-D technologies will be used to monitor workers during interaction with collaborative robots. Moreover, the newly released Azure Kinect [65] enhances the tracking capability of the Kinect V2 and provides a more evolved tool to foster the concepts proposed in this paper related to people and workers in living and working environments.

4.2. Limitations

The first limitation of this work is the limited number of workspace sectors. Real application scenarios might require the use of other sectors, as shown in other upper-limb studies [57] simulating assembly scenarios, which were excluded from this study due to limitations of the Kinect tracking algorithms. Also, the set of movements could be expanded to mimic real scenarios, including other upper-limb gestures useful in industry [47,66,67] and the tracking of other body segments such as the legs [52,63]. Including a total-body model would allow inspection of a wider number of degrees of freedom and comparison of performances on full 3D movements. Moreover, other RGB-D solutions could be considered, in terms of both the adopted sensors and the employed algorithms. For this study, we decided to refer to a well-documented, widely diffused commercial solution, even though further developments of the proposed concepts should involve the use of other sensors or other algorithms for skeleton segmentation.
In fact, only conditions in which the subjects themselves partially obstructed the tracking were considered; in real scenarios, the environment might include further limitations and occlusions that were not considered in this study. Lastly, in order to support a more solid statistical analysis, a larger group of subjects might be enrolled.

5. Conclusions

In this paper, we compared a commercial RGB-D sensor with a gold standard optoelectronic marker-based system. We found that the RGB-D sensor, with its embedded human tracking algorithm, could properly approximate the DoF computed with the marker-based system. However, we also found that the performance of the RGB-D sensor is not adequate for the detection of some DoF, and that ‘wider’ movements, such as point-to-point, are tracked with greater accuracy than exploration movements. Even with these limitations, we can conclude that the RGB-D sensor is a potential candidate for motion analysis in rehabilitative and industrial environments when marker-based systems cannot be employed.

Author Contributions

Conceptualization, A.S. and R.M.M.; Methodology, A.S. and R.M.M.; Software, R.M.M.; Validation, A.S. and R.M.M.; Formal analysis, A.S. and R.M.M.; Investigation, R.M.M. and A.S.; Resources, A.S., R.M.M., L.M.T., M.S., and P.C.; Data curation, R.M.M. and A.S.; Writing—original draft preparation, A.S., R.M.M., P.C., M.S., and L.M.T.; Writing—review and editing, R.M.M., P.C., M.S., and L.M.T.; Visualization, R.M.M.; Supervision, P.C.; Project administration, A.S., M.S., and L.M.T.; Funding acquisition, A.S., M.S., and L.M.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

$\text{Shoulder elevation} = \arccos\left[\hat{a} \cdot (-\hat{t})\right]$ (A1)
$\text{Shoulder rotation} = \arccos\left[\hat{a}_p \cdot (-\hat{f})\right] - 90^{\circ}$ (A2)
$\text{Shoulder internal/external rotation} = \arccos\left(\widehat{af}_p \cdot \hat{n}_2\right) - 90^{\circ}$ (A3)
$\text{Elbow extension} = \arccos\left(\hat{a} \cdot \widehat{af}\right)$ (A4)
$\text{Hand deviation} = \arccos\left(\hat{u}_{hand,\Phi} \cdot \hat{w}_2\right) - 90^{\circ}$ (A5)
$\text{Hand flexion/extension} = \arccos\left(\hat{u}_{hand,\Pi} \cdot \hat{w}_1\right) - 90^{\circ}$ (A6)
$\text{Hand pronation/supination} = \arccos\left(\hat{u}_{hand2,\Omega} \cdot \hat{w}_1\right) - 90^{\circ}$ (A7)
$\text{Scapular elevation} = \arccos\left(\hat{s}_0 \cdot \hat{s}'\right)$ (A8)
$\text{Trunk torsion} = \arccos\left[\hat{s} \cdot (-\hat{f})\right] - 90^{\circ}$ (A9)
$\text{Trunk antero/posterior flexion} = \arccos\left(\hat{t}_{sag} \cdot \hat{f}\right)$ (A10)
$\text{Trunk medio/lateral flexion} = \arccos\left[\hat{t}_{front} \cdot (-\hat{s})\right] - 90^{\circ}$ (A11)
Note: In Equation (A2), $\hat{a}_p$ is the arm vector projected on the plane orthogonal to the vector $\hat{t}$; in Equation (A3), $\widehat{af}_p$ is the forearm vector projected on the plane orthogonal to the vector $\hat{a}$; in (A5), $\hat{u}_{hand,\Phi}$ is the projection of $\hat{h}_1$ on the plane orthogonal to the vector $\hat{w}_1$; in (A6), $\hat{u}_{hand,\Pi}$ is the projection of $\hat{h}_1$ on the plane orthogonal to the vector $\hat{w}_2$; in (A7), $\hat{u}_{hand2,\Omega}$ is the projection of $\hat{h}_2$ on the plane orthogonal to the vector $\widehat{af}$; in (A8), $\hat{s}_0$ is the shoulder vector in the initial position and $\hat{s}'$ is the vector connecting the trunk to the right glenohumeral joint center projected on the plane orthogonal to the vector $\hat{f}$; in (A9), $\hat{f}$ represents the direction orthogonal to the back of the subject in the first frame; in (A10), $\hat{t}_{sag}$ is the projection of the vector $\hat{t}$ on the sagittal plane, orthogonal to the vector $\hat{s}$ in the first frame; in (A11), $\hat{t}_{front}$ is the projection of the vector $\hat{t}$ on the frontal plane, orthogonal to the vector $\hat{f}$ in the first frame.
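Several of the equations above (A2, A3, A5–A7, and A9–A11) share the same construction: a vector is projected on the plane orthogonal to a second vector, and the angle between the projection and a reference vector is then taken. A small MATLAB helper illustrating that operation (a sketch with our own naming, not code from the paper):

    function ang = projected_angle(u, n, ref)
    % Project unit vector u on the plane orthogonal to unit vector n, then
    % return the angle [deg] between the projection and unit vector ref.
    u_p = u - dot(u, n) * n;     % remove the component of u along n
    u_p = u_p / norm(u_p);       % re-normalize the in-plane component
    ang = acosd(dot(u_p, ref));  % subtract 90 deg where the equation requires it
    end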

References

  1. Colyer, S.; Evans, M.; Cosker, D.; Salo, A.I.T. A Review of the Evolution of Vision-Based Motion Analysis and the Integration of Advanced Computer Vision Methods Towards Developing a Markerless System. Sports Med. Open 2018, 4, 24. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Zhou, H.; Hu, H. Human motion tracking for rehabilitation—A survey. Biomed. Signal Process. Control. 2008, 3, 1–18. [Google Scholar] [CrossRef]
  3. Lu, T.W.; O’Connor, J. Bone position estimation from skin marker co-ordinates using global optimisation with joint constraints. J. Biomech. 1999, 32, 129–134. [Google Scholar] [CrossRef]
  4. Roux, E.; Bouilland, S.; Godillon-Maquinghen, A.-P.; Bouttens, D. Evaluation of the GO method within the upper limb kinematics analysis. J. Biomech. 2002, 35, 1279–1283. [Google Scholar] [CrossRef] [Green Version]
  5. Petuskey, K.; Bagley, A.; Abdala, E.; James, M.A.; Rab, G. Upper extremity kinematics during functional activities: Three-dimensional studies in a normal pediatric population. Gait Posture 2007, 25, 573–579. [Google Scholar] [CrossRef]
  6. Pontonnier, C.; Dumont, G. Inverse dynamics method using optimization techniques for the estimation of muscles forces involved in the elbow motion. Int. J. Interact. Des. Manuf. (IJIDeM) 2009, 3, 227–236. [Google Scholar] [CrossRef]
  7. Nussbaum, M.A.; Zhang, X. Heuristics for locating upper extremity joint centres from a reduced set of surface markers. Hum. Mov. Sci. 2000, 19, 797–816. [Google Scholar] [CrossRef]
  8. Cappozzo, A.; Catani, F.; Della Croce, U.; Leardini, A. Position and orientation in space of bones during movement: Anatomical frame definition and determination. Clin. Biomech. 1995, 10, 171–178. [Google Scholar] [CrossRef]
  9. Cappozzo, A.; Catani, F.; Leardini, A.; Benedetti, M.G.; Della Croce, U. Position and orientation in space of bones during movement: Experimental artefacts. Clin. Biomech. 1996, 11, 90–100. [Google Scholar] [CrossRef]
  10. Carpinella, I.; Cattaneo, D.; Ferrarin, M. Quantitative assessment of upper limb motor function in Multiple Sclerosis using an instrumented Action Research Arm Test. J. Neuroeng. Rehabil. 2014, 11, 67. [Google Scholar] [CrossRef]
  11. Carpinella, I.; Lencioni, T.; Bowman, T.; Bertoni, R.; Turolla, A.; Ferrarin, M.; Jonsdottir, J. Planar robotic training versus arm-specific physiotherapy: Effects on arm function and motor strategies in post-stroke subjects. Gait Posture 2019, 74, 7. [Google Scholar] [CrossRef]
  12. Dorazio, T.; Marani, R.; Renò, V.; Cicirelli, G. Recent trends in gesture recognition: How depth data has improved classical approaches. Image Vis. Comput. 2016, 52, 56–72. [Google Scholar] [CrossRef]
  13. Gesto-Diaz, M.; Tombari, F.; Rodríguez-Gonzálvez, P.; González-Aguilera, D. Analysis and Evaluation Between the First and the Second Generation of RGB-D Sensors. IEEE Sens. J. 2015, 15, 6507–6516. [Google Scholar] [CrossRef]
  14. Pagliari, D.; Pinto, L. Calibration of Kinect for Xbox One and Comparison between the Two Generations of Microsoft Sensors. Sensors 2015, 15, 27569–27589. [Google Scholar] [CrossRef] [Green Version]
  15. Kutlu, M.; Freeman, C.; Spraggs, M. Functional electrical stimulation for home-based upper-limb stroke rehabilitation. Curr. Dir. Biomed. Eng. 2017, 3, 25–29. [Google Scholar]
  16. Bonnechère, B.; Jansen, B.; Salvia, P.; Bouzahouene, H.; Omelina, Ľ.; Moiseev, F.; Sholukha, V.; Cornelis, J.; Rooze, M.; Jan, S.V.S. Validity and reliability of the Kinect within functional assessment activities: Comparison with standard stereophotogrammetry. Gait Posture 2014, 39, 593–598. [Google Scholar] [CrossRef]
  17. Cruz, L.M.V.; Lucio, D.; Velho, L. Kinect and RGBD Images: Challenges and Applications. In Proceedings of the 2012 25th SIBGRAPI Conference on Graphics, Patterns and Images Tutorials, Ouro Preto, Brasil, 22–25 August 2012; pp. 36–49. [Google Scholar]
  18. Scano, A.; Caimmi, M.; Malosio, M.; Tosatti, L.M. Using Kinect for upper-limb functional evaluation in home rehabilitation: A comparison with a 3D stereoscopic passive marker system. In Proceedings of the 5th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics, Ouro Preto, Brasil, 12–15 August 2014; pp. 561–566. [Google Scholar]
  19. Kurillo, G.; Chen, A.; Bajcsy, R.; Han, J.J. Evaluation of upper extremity reachable workspace using Kinect camera. Technol. Heal. Care 2013, 21, 641–656. [Google Scholar] [CrossRef] [Green Version]
  20. Lee, S.H.; Yoon, C.; Chung, S.G.; Kim, H.C.; Kwak, Y.; Park, H.-W.; Kim, K. Measurement of Shoulder Range of Motion in Patients with Adhesive Capsulitis Using a Kinect. PLoS ONE 2015, 10, e0129398. [Google Scholar] [CrossRef]
  21. Huber, M.E.; Seitz, A.; Leeser, M.; Sternad, D. Validity and reliability of Kinect skeleton for measuring shoulder joint angles: A feasibility study. Physiotherapy 2015, 101, 389–393. [Google Scholar] [CrossRef] [Green Version]
  22. Clark, R.; Pua, Y.-H.; Oliveira, C.C.; Bower, K.J.; Thilarajah, S.; McGaw, R.; Hasanki, K.; Mentiplay, B.F. Reliability and concurrent validity of the Microsoft Xbox One Kinect for assessment of standing balance and postural control. Gait Posture 2015, 42, 210–213. [Google Scholar] [CrossRef]
  23. Yahya, M.; Shah, J.; Kadir, K.; Warsi, A.; Khan, S.; Nasir, H. Accurate Shoulder Joint Angle Estimation Using Single RGB camera for Rehabilitation. In Proceedings of the 2019 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), Auckland, New Zealand, 20–23 May 2019; pp. 1–6. [Google Scholar]
  24. Scano, A.; Chiavenna, A.; Malosio, M.; Tosatti, L.M. Kinect V2 Performance Assessment in Daily-Life Gestures: Cohort Study on Healthy Subjects for a Reference Database for Automated Instrumental Evaluations on Neurological Patients. Appl. Bionics Biomech. 2017, 2017, 1–16. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Chen, Y.C.; Lee, H.J.; Lin, K.H. Measurement of body joint angles for physical therapy based on mean shift tracking using two low cost Kinect images. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; Volume 2015, pp. 703–706. [Google Scholar]
  26. Scano, A.; Caimmi, M.; Chiavenna, A.; Malosio, M.; Tosatti, L.M. A Kinect-Based Biomechanical Assessment of Neurological Patients’ Motor Performances for Domestic Rehabilitation. Adv. Med Technol. Clin. Pract. 2016, 252–279. [Google Scholar] [CrossRef]
  27. Yang, Y.; Pu, F.; Li, Y.; Li, S.; Fan, Y.; Li, D. Reliability and Validity of Kinect RGB-D Sensor for Assessing Standing Balance. IEEE Sens. J. 2014, 14, 1633–1638. [Google Scholar] [CrossRef]
  28. Scano, A.; Chiavenna, A.; Malosio, M.; Tosatti, L.M.; Molteni, F. Kinect V2 implementation and testing of the reaching performance scale for motor evaluation of patients with neurological impairment. Med. Eng. Phys. 2018, 56, 54–58. [Google Scholar] [CrossRef] [PubMed]
  29. Kim, W.-S.; Cho, S.; Baek, D.; Bang, H.; Paik, N.-J. Upper Extremity Functional Evaluation by Fugl-Meyer Assessment Scoring Using Depth-Sensing Camera in Hemiplegic Stroke Patients. PLoS ONE 2016, 11, e0158640. [Google Scholar] [CrossRef] [PubMed]
  30. Fernández-Baena, A.; Susin, A.; Lligadas, X. Biomechanical Validation of Upper-Body and Lower-Body Joint Movements of Kinect Motion Capture Data for Rehabilitation Treatments. In Proceedings of the 2012 Fourth International Conference on Intelligent Networking and Collaborative Systems, Bucharest, Romania, 19–21 September 2012; pp. 656–661. [Google Scholar]
  31. Okuyama, K.; Kawakami, M.; Tsuchimoto, S.; Ogura, M.; Okada, K.; Mizuno, K.; Ushiba, J.; Liu, M. Depth Sensor–Based Assessment of Reachable Work Space for Visualizing and Quantifying Paretic Upper Extremity Motor Function in People with Stroke. Phys. Ther. 2020. [Google Scholar] [CrossRef]
  32. Latorre, J.; Colomer, C.; Alcañiz, M.; Llorens, R. Gait analysis with the Kinect v2: Normative study with healthy individuals and comprehensive study of its sensitivity, validity, and reliability in individuals with stroke. J. Neuroeng. Rehabil. 2019, 16, 11–97. [Google Scholar] [CrossRef] [Green Version]
  33. Scano, A.; Caimmi, M.; Chiavenna, A.; Malosio, M.; Tosatti, L.M. Kinect One-based biomechanical assessment of upper-limb performance compared to clinical scales in post-stroke patients. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; Volume 2015, pp. 5720–5723. [Google Scholar]
  34. Scano, A.; Molteni, F.; Tosatti, L.M. Low-Cost Tracking Systems Allow Fine Biomechanical Evaluation of Upper-Limb Daily-Life Gestures in Healthy People and Post-Stroke Patients. Sensors 2019, 19, 1224. [Google Scholar] [CrossRef] [Green Version]
  35. Gu, Y.; Pandit, S.; Saraee, E.; Nordahl, T.; Ellis, T.; Betke, M. Home-Based Physical Therapy with an Interactive Computer Vision System. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), Seoul, Korea, 27–28 October 2019; pp. 2619–2628. [Google Scholar]
  36. Cameirão, M.; Smailagic, A.; Miao, G.; Siewiorek, D. Coaching or gaming? Implications of strategy choice for home based stroke rehabilitation. J. Neuroeng. Rehabil. 2016, 13, 18. [Google Scholar] [CrossRef] [Green Version]
  37. Vieira, Á.; Gabriel, J.; Melo, C.; Machado, J. Kinect system in home-based cardiovascular rehabilitation. Proc. Inst. Mech. Eng. Part H J. Eng. Med. 2016, 231, 40–47. [Google Scholar] [CrossRef]
  38. Mosca, N.; Renó, V.; Marani, R.; Nitti, M.; D’Orazio, T.; Stella, E. Human Walking Behavior detection with a RGB-D Sensors Network for Ambient Assisted Living Applications. In Proceedings of the AI* AAL@ AI* IA, Bari, Italy, 14–17 November 2017; pp. 17–29. [Google Scholar]
  39. Terkaj, W.; Tolio, T. The Italian Flagship Project: Factories of the Future. In Factories of the Future; Springer Science and Business Media LLC: Berlin, Germany, 2019; pp. 3–35. [Google Scholar]
  40. Santos, C.; Mehrsai, A.; Barros, A.C.; Araújo, M.; Ares, E. Towards Industry 4.0: An overview of European strategic roadmaps. Procedia Manuf. 2017, 13, 972–979. [Google Scholar] [CrossRef]
  41. Geiselhart, F.; Otto, M.; Rukzio, E. On the Use of Multi-Depth-Camera Based Motion Tracking Systems in Production Planning Environments. Procedia CIRP 2016, 41, 759–764. [Google Scholar] [CrossRef] [Green Version]
  42. Duffy, V.G. A methodology for assessing industrial workstations using optical motion capture integrated with digital human models. Occup. Ergon. 2007, 7, 11–25. [Google Scholar]
  43. Ramey, A.; González-Pacheco, V.; Salichs, M.A. Integration of a low-cost RGB-D sensor in a social robot for gesture recognition. In Proceedings of the 6th international conference on Multimodal interfaces—ICMI ’04, Alicante, Spain, 14–18 November 2011; pp. 229–230. [Google Scholar]
  44. Basso, F.; Munaro, M.; Michieletto, S.; Pagello, E.; Menegatti, E. Fast and Robust Multi-people Tracking from RGB-D Data for a Mobile Robot. In Advances in Intelligent Systems and Computing; Springer Science and Business Media LLC: Berlin, Germany, 2013; Volume 193, pp. 265–276. [Google Scholar]
  45. Colombo, G.; Regazzoni, D.; Rizzi, C. Markerless Motion Capture Integrated with Human Modeling for Virtual Ergonomics. Lect. Notes Comput. Sci. 2013, V, 314–323. [Google Scholar] [CrossRef]
  46. Bachynskyi, M.; Oulasvirta, A.; Palmas, G.; Weinkauf, T. Is motion capture-based biomechanical simulation valid for hci studies? study and implications. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Toronto, ON, Canada, 26 April–1 May 2014; pp. 3215–3224. [Google Scholar]
  47. Pellegrinelli, S.; Moro, F.L.; Pedrocchi, N.; Tosatti, L.M.; Tolio, T. A probabilistic approach to workspace sharing for human–robot cooperation in assembly tasks. CIRP Ann. 2016, 65, 57–60. [Google Scholar] [CrossRef]
48. Glasauer, S.; Huber, M.; Basili, P.; Knoll, A.; Brandt, T. Interacting in time and space: Investigating human-human and human-robot joint action. In Proceedings of the 19th International Symposium on Robot and Human Interactive Communication (RO-MAN), Viareggio, Italy, 13–15 September 2010; pp. 252–257.
49. Field, M.; Stirling, D.; Naghdy, F.; Pan, Z. Motion capture in robotics review. In Proceedings of the IEEE International Conference on Control and Automation, Christchurch, New Zealand, 9–11 December 2009; pp. 1697–1702.
50. Kim, S.; Nussbaum, M.A.; Esfahani, M.I.M.; Alemi, M.M.; Alabdulkarim, S.A.; Rashedi, E. Assessing the influence of a passive, upper extremity exoskeletal vest for tasks requiring arm elevation: Part I—“Expected” effects on discomfort, shoulder muscle activity, and work task performance. Appl. Ergon. 2018, 70, 315–322.
51. Pfister, A.; West, A.M.; Bronner, S.; Noah, J.A. Comparative abilities of Microsoft Kinect and Vicon 3D motion capture for gait analysis. J. Med. Eng. Technol. 2014, 38, 274–280.
52. Otte, K.; Kayser, B.; Mansow-Model, S.; Verrel, J.; Paul, F.; Brandt, A.U.; Schmitz-Hübsch, T. Accuracy and Reliability of the Kinect Version 2 for Clinical Measurement of Motor Function. PLoS ONE 2016, 11, e0166532.
53. Cai, L.; Ma, Y.; Xiong, S.; Zhang, Y. Validity and Reliability of Upper Limb Functional Assessment Using the Microsoft Kinect V2 Sensor. Appl. Bionics Biomech. 2019, 2019, 7175240.
54. Galna, B.; Barry, G.; Jackson, D.; Mhiripiri, D.; Olivier, P.; Rochester, L. Accuracy of the Microsoft Kinect sensor for measuring movement in people with Parkinson’s disease. Gait Posture 2014, 39, 1062–1068.
55. Sarbolandi, H.; Lefloch, D.; Kolb, A. Kinect range sensing: Structured-light versus Time-of-Flight Kinect. Comput. Vis. Image Underst. 2015, 139, 1–20.
56. Mahmoudzadeh, A.; Golroo, A.; Jahanshahi, M.R.; Yeganeh, S.F. Estimating Pavement Roughness by Fusing Color and Depth Data Obtained from an Inexpensive RGB-D Sensor. Sensors 2019, 19, 1655.
57. Scano, A.; Dardari, L.; Molteni, F.; Giberti, H.; Tosatti, L.M.; D’Avella, A. A Comprehensive Spatial Mapping of Muscle Synergies in Highly Variable Upper-Limb Movements of Healthy Subjects. Front. Physiol. 2019, 10, 1231.
58. Moore, K.L.; Dalley, A.F.; Agur, A.M. Clinically Oriented Anatomy, 7th ed.; Taylor, C., Ed.; Lippincott Williams & Wilkins: Philadelphia, PA, USA, 2013.
59. Sinclair, J.; Taylor, P.; Hobbs, S.J. Digital Filtering of Three-Dimensional Lower Extremity Kinematics: An Assessment. J. Hum. Kinet. 2013, 39, 25–36.
60. Cicirelli, G.; Attolico, C.; Guaragnella, C.; D’Orazio, T. A Kinect-Based Gesture Recognition Approach for a Natural Human Robot Interface. Int. J. Adv. Robot. Syst. 2015, 12, 22.
61. Mobini, A.; Behzadipour, S.; Foumani, M.S. Test-retest reliability of Kinect’s measurements for the evaluation of upper body recovery of stroke patients. Biomed. Eng. Online 2015, 14, 75.
62. Capecci, M.; Ceravolo, M.G.; Ferracuti, F.; Iarlori, S.; Longhi, S.; Romeo, L.; Russi, S.N.; Verdini, F. Accuracy evaluation of the Kinect v2 sensor during dynamic movements in a rehabilitation scenario. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 5409–5412.
63. Bonnechère, B.; Sholukha, V.; Omelina, L.; Jansen, B.; Jan, S.V.S. Three-dimensional functional evaluation of the shoulder complex using the Kinect™ sensor. In Proceedings of the 4th Workshop on ICTs for improving Patients Rehabilitation Research Techniques, Lisbon, Portugal, 13–14 October 2016; pp. 5–8.
  64. Cordis European Commission. Available online: https://cordis.europa.eu/project/id/847926 (accessed on 15 February 2020).
  65. Microsoft Azure. Available online: https://azure.microsoft.com/it-it/services/kinect-dk/ (accessed on 2 February 2020).
66. Jiang, S.; Liu, P.; Fu, D.; Xue, Y.; Luo, W.; Wang, M. A low-cost rapid upper limb assessment method in manual assembly line based on somatosensory interaction technology. In Proceedings of the 2017 5th International Conference on Computer-Aided Design, Manufacturing, Modeling and Simulation (CDMMS 2017), Busan, Korea, 22–23 April 2017.
67. Miguez, S.A.; Hallbeck, M.S.; Vink, P. Work Movements: Balance Between Freedom and Guidance on an Assembly Task in a Furniture Manufacturer. In Advances in Intelligent Systems and Computing; Springer Science and Business Media LLC: Berlin, Germany, 2016; Volume 491, pp. 503–511.
Figure 1. Graphical schematic of the experimental set-up.
Figure 2. Graphical representation of the workspace sectors and movements. Top row: point-to-point movements in the (a) central, (b) right, and (c) left sectors. Bottom row: exploration movements in the (d) central, (e) right, and (f) left sectors.
Figure 3. Kinect V2 tracked joints (a) and the location of the markers in the upper-limb Vicon system (b).
Figure 4. Graphical representation of the degrees of freedom computed with Vicon and Kinect V2: 1. Shoulder elevation, 2. Shoulder rotation along the vertical axis, 3. Shoulder internal–external rotation, 4. Elbow flexion–extension, 5. Hand flexion–extension, 6. Hand pronation–supination, 7. Hand deviation, 8. Scapular elevation, 9. Trunk torsion, 10. Trunk anterior–posterior flexion, 11. Trunk medial–lateral flexion, a. Hand reference vectors, b. Subject reference system. The direction of the arrows indicates ‘positive’ angles. The procedures for computation and the definition of projection planes are presented in Appendix A.
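The exact angle definitions and projection planes are detailed in Appendix A. As a flavor of the general approach, the sketch below computes one representative degree of freedom, elbow flexion–extension (item 4 above), as the 3D angle between the upper-arm and forearm segment vectors. The joint coordinates, the NumPy implementation, and the example pose are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def elbow_flexion_angle(shoulder: np.ndarray, elbow: np.ndarray, wrist: np.ndarray) -> float:
    """Elbow flexion-extension as the angle (deg) between the upper-arm and
    forearm vectors, given 3D joint positions from either the Kinect V2
    skeleton or a marker-based reconstruction."""
    upper_arm = shoulder - elbow  # vector from elbow to shoulder
    forearm = wrist - elbow       # vector from elbow to wrist
    cos_angle = np.dot(upper_arm, forearm) / (
        np.linalg.norm(upper_arm) * np.linalg.norm(forearm)
    )
    # Clip to guard against floating-point drift outside [-1, 1]
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

# Hypothetical joint positions (m) with the arm in an L-shaped pose: ~90 deg
shoulder = np.array([0.0, 1.4, 0.0])
elbow = np.array([0.0, 1.1, 0.0])
wrist = np.array([0.0, 1.1, 0.3])
print(f"Elbow flexion-extension: {elbow_flexion_angle(shoulder, elbow, wrist):.1f} deg")
```

The same segment-vector idea, combined with the projection planes of Appendix A, extends to the other degrees of freedom in Figure 4.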
Figure 5. Pie-charts of the Kinect V2–Vicon angular distance (ad) in the point-to-point movements, as an average of the test and re-test trials. In order from top to bottom, the considered sectors were (a) central sector, (b) right sector, (c) left sector. The directional pie-charts had 8 cardinal directions, referring to the forward phases of the movements. The graphical representation adopted a heatmap to depict the magnitude of the ad.
Figure 6. Pie-charts of the Kinect V2–Vicon angular distance (ad) in the exploration movements, as an average of the test and re-test trials. In order from top to bottom, the considered sectors were (a) central sector, (b) right sector, (c) left sector. The directional pie-charts had 16 cardinal directions. The graphical representation adopted a heatmap to depict the magnitude of the ad.
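The angular distance (ad) shown in Figures 5 and 6 compares the joint-angle profiles reconstructed by the two systems for the same movement. Below is a minimal sketch of one plausible formulation, assuming ad is the mean absolute difference between the two angle trajectories after resampling onto a common normalized time base; the function name and the 101-sample grid are illustrative choices, not taken from the paper.

```python
import numpy as np

def angular_distance(theta_kinect: np.ndarray, theta_vicon: np.ndarray,
                     n_samples: int = 101) -> float:
    """Mean absolute difference (deg) between two joint-angle time series,
    after linear resampling of both onto a normalized 0..1 time base."""
    t_common = np.linspace(0.0, 1.0, n_samples)
    theta_k = np.interp(t_common, np.linspace(0.0, 1.0, len(theta_kinect)), theta_kinect)
    theta_v = np.interp(t_common, np.linspace(0.0, 1.0, len(theta_vicon)), theta_vicon)
    return float(np.mean(np.abs(theta_k - theta_v)))

# Toy trajectories at different sampling rates (~30 Hz Kinect vs. ~100 Hz Vicon)
kinect_angles = 90.0 + 10.0 * np.sin(np.linspace(0.0, np.pi, 30))
vicon_angles = 92.0 + 10.0 * np.sin(np.linspace(0.0, np.pi, 100))
print(f"ad = {angular_distance(kinect_angles, vicon_angles):.2f} deg")
```

Resampling to a shared time base is what allows the two devices, which record at different rates, to be compared sample by sample.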
Figure 7. Pie-charts of the Kinect V2 angular distance (ad) across repetitions in the point-to-point movements. In order from top to bottom, the considered sectors were (a) central sector, (b) right sector, (c) left sector. For point-to-point movements, the directional pie-charts had eight cardinal directions, referring to the forward phases of the movements. The graphical representation adopted a heatmap to depict the magnitude of the ad.
Figure 8. Pie-charts of the Kinect V2 angular distance (ad) across repetitions in the exploration movements. In order from top to bottom, the considered sectors were (a) central sector, (b) right sector, (c) left sector. For exploration movements, the backward phases were also considered; thus, the pie-charts comprised 16 directions. The graphical representation adopted a heatmap to depict the magnitude of the ad.
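Figures 7 and 8 group movements into 8 or 16 cardinal directions before averaging the ad within each slice. The paper does not spell out the binning rule; the following is a simple sketch under the assumption that bins are centered on the cardinal directions and that the direction is taken from the hand's horizontal-plane displacement.

```python
import numpy as np

def direction_bin(start: np.ndarray, end: np.ndarray, n_bins: int = 8) -> int:
    """Assign a movement to one of n_bins equally spaced cardinal directions,
    using the horizontal-plane (x, y) displacement of the hand."""
    dx, dy = (end - start)[:2]
    azimuth = np.arctan2(dy, dx) % (2.0 * np.pi)  # angle in [0, 2*pi)
    bin_width = 2.0 * np.pi / n_bins
    # Shift by half a bin so that each bin is centered on its direction
    return int(((azimuth + bin_width / 2.0) // bin_width) % n_bins)

# A 45-degree reach falls into bin 1 of 8 (bin 0 being the "east" direction)
print(direction_bin(np.zeros(3), np.array([1.0, 1.0, 0.0]), n_bins=8))  # -> 1
```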
Figure 9. Point-to-point movements: two-way ANOVA test results, for test and re-test datasets.
Figure 10. Exploration movements: two-way ANOVA test results, for test and re-test datasets.
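Figures 9 and 10 report two-way ANOVA results on the test and re-test datasets. As a sketch of how such an analysis can be reproduced, the snippet below runs a two-way ANOVA with statsmodels, assuming (illustratively) that the angular distance is the dependent variable and that degree of freedom and workspace sector are the two factors; the data frame is made up.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format data: one row per trial, angular distance (deg)
# as the dependent variable, degree of freedom and sector as factors.
df = pd.DataFrame({
    "ad": [3.9, 4.0, 3.7, 3.8,       # shoulder elevation
           19.0, 18.5, 18.0, 18.3,   # shoulder internal-external rotation
           5.2, 5.3, 5.0, 5.1],      # elbow extension
    "dof": ["shoulder_elev"] * 4 + ["shoulder_rot"] * 4 + ["elbow_ext"] * 4,
    "sector": ["central", "central", "right", "right"] * 3,
})

# Two-way ANOVA: main effects of both factors plus their interaction
model = ols("ad ~ C(dof) + C(sector) + C(dof):C(sector)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```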
Table 1. Mean angular distance by degree of freedom and by sector (test and retest), and ICC: point-to-point movements.

| Degree of Freedom | Test Mean (deg) | Retest Mean (deg) |
|---|---|---|
| Shoulder elevation | 3.96 | 3.82 |
| Shoulder rotation along the vertical axis | 4.73 | 4.88 |
| Shoulder internal–external rotation | 19.00 | 18.18 |
| Elbow extension | 5.20 | 5.36 |
| Hand deviation angle | 11.25 | 11.60 |
| Hand flexion–extension angle | 8.10 | 7.89 |
| Hand pronation angle | 36.40 | 38.59 |
| Scapular elevation | 3.64 | 3.70 |
| Trunk torsion | 3.02 | 3.01 |
| Trunk anterior–posterior flexion | 2.41 | 2.61 |
| Trunk medial–lateral flexion | 4.06 | 4.22 |

| Sector | Test Mean (deg) | Retest Mean (deg) | ICC |
|---|---|---|---|
| Right | 8.59 | 8.77 | 0.81 |
| Central | 9.55 | 9.73 | 0.82 |
| Left | 9.60 | 9.82 | 0.73 |
Table 2. Mean angular distance by degree of freedom and by sector (test and retest), and ICC: exploration movements.

| Degree of Freedom | Test Mean (deg) | Retest Mean (deg) |
|---|---|---|
| Shoulder elevation | 7.09 | 6.92 |
| Shoulder rotation along the vertical axis | 7.44 | 8.67 |
| Shoulder internal–external rotation | 25.63 | 26.93 |
| Elbow extension | 7.56 | 7.43 |
| Hand deviation angle | 10.70 | 10.47 |
| Hand flexion–extension angle | 9.69 | 9.70 |
| Hand pronation angle | 43.28 | 44.06 |
| Scapular elevation | 4.60 | 4.67 |
| Trunk torsion | 3.70 | 4.16 |
| Trunk anterior–posterior flexion | 1.71 | 1.67 |
| Trunk medial–lateral flexion | 2.16 | 2.17 |

| Sector | Test Mean (deg) | Retest Mean (deg) | ICC |
|---|---|---|---|
| Right | 11.39 | 11.81 | 0.84 |
| Central | 11.13 | 11.45 | 0.75 |
| Left | 11.18 | 11.34 | 0.62 |
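The ICC columns in Tables 1 and 2 summarize test–retest reliability per sector. The paper does not state which ICC form was computed; a common choice for test–retest designs is the two-way random-effects, single-measure ICC(2,1) of Shrout and Fleiss (1979), sketched below in NumPy with made-up data.

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """Two-way random-effects, single-measure ICC(2,1) (Shrout & Fleiss, 1979).
    ratings: (n_subjects, k_sessions) array, e.g. test/retest values (k = 2)."""
    n, k = ratings.shape
    grand_mean = ratings.mean()

    # Sum-of-squares decomposition of the two-way layout
    ss_total = ((ratings - grand_mean) ** 2).sum()
    ss_rows = k * ((ratings.mean(axis=1) - grand_mean) ** 2).sum()  # subjects
    ss_cols = n * ((ratings.mean(axis=0) - grand_mean) ** 2).sum()  # sessions
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return float((ms_rows - ms_error) /
                 (ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n))

# Toy per-subject test/retest angular distances (deg)
data = np.array([[8.1, 8.4], [9.0, 9.3], [7.6, 7.5], [10.2, 10.8]])
print(f"ICC(2,1) = {icc_2_1(data):.2f}")
```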
