1. Introduction
Human disabilities affect millions of people worldwide and can result from congenital conditions, accidents, or aging. Among these, motor disorders specifically impair movement and sensory perception, posing significant challenges that call for continuous research into new technologies able to improve quality of life. Mobility, independence, and the ability to perform activities of daily living (ADLs) are critical factors that greatly influence the quality of life of individuals with motor impairments [1]. Among sensorimotor impairments, spinal cord injuries (SCIs) represent a significant disability, affecting 250,000 to 500,000 people per year [2]. These injuries disrupt signal transmission between the brain and the body, causing permanent changes in motor, sensory, and autonomic functions below the injury level. SCIs are classified by injury level (cervical, thoracic, lumbar, sacral) and completeness (complete or incomplete). Higher injury levels are associated with more severe motor impairments, resulting in greater challenges in leading an independent life and in controlling assistive devices designed to support activities of daily living. Although loss of mobility is the most apparent consequence, SCIs also cause critical complications that affect other body functions. Statistics indicate that 20–30% of people with SCI experience clinically significant depression (https://www.who.int/publications/i/item/9789241564588, accessed on 9 February 2025), highlighting the need for interdisciplinary care that addresses both physical and emotional well-being.
Due to their great impact, spinal cord injuries are a key focus of assistive robotics, whose objective is to improve independence and quality of life [3]. Human–machine interfaces (HMIs) are essential for integrating human cognitive and physical abilities with machine functionalities, facilitating communication between users and external systems [4]. A subset of these systems, known as body–machine interfaces (BoMIs), exploits residual human signals to generate control inputs for external devices such as prosthetic limbs, powered wheelchairs, robotic arms, manipulators, or displays. While BoMIs can rely on various human signals (such as muscle contractions, body movements, eye gaze, or tongue gestures), a particular subgroup, brain–computer interfaces (BCIs), focuses exclusively on brain signals. This study, however, emphasizes BoMIs that exploit users' residual body movements [5]. By mapping such motions to external devices, BoMIs can expand the functional capabilities of users to compensate for lost sensorimotor functions [6]. Body movements, coming from users' residual active body parts, are control inputs that provide useful information about users' intent and can be integrated into the control loop, with the great advantage of being a non-invasive solution. Despite promising advancements, several challenges and limitations remain. A key issue is designing interfaces that are both intuitive and adaptable to different levels of impairment. Overcoming this issue is important to promote the use of assistive devices and to reduce the abandonment rate. BoMIs employ several sensor technologies, including infrared cameras [7], inertial measurement units (IMUs) [8], EMG sensors [9], or hybrid combinations [10], to effectively capture user signals.
In general, the residual movements exploited by BoMIs are voluntary or involuntary controllable motions that remain in individuals with partial or severe motor impairments. When designing a BoMI, it is crucial to consider the ability to discriminate between voluntary and involuntary motions. For instance, Kirchner et al. [11] explain how the use of electroencephalographic (EEG) signals in such interfaces holds great potential as an indicator of voluntary movements. Indeed, the readiness potential (RP) is observed only before voluntary movements, not involuntary ones, and can therefore be used as a discriminator. As a disadvantage, daily use of EEG can become tiresome for users. Other methods exploit signal processing techniques, such as filters [12]. Involuntary movements are also present in individuals with motor disorders, including those with spinal cord injuries. One of the most common involuntary movements in SCI subjects is associated with sleep-related periodic leg movements [13]. Involuntary contractions are characterized by brief episodes of high-intensity activity (e.g., muscle spasms) or by long-lasting motor unit activity at low firing rates. The spasms can affect both legs and hands [14].
Even in severe cases, the residual movements often exceed the number of variables needed to control devices such as powered wheelchairs or computers [15]. Leveraging these movements to control assistive devices allows users to regain functional independence and offers several advantages in assistive technology, as users rely on existing motor skills rather than learning entirely new control paradigms. This approach reduces cognitive load and accelerates the adoption of assistive devices, leading to lower abandonment rates. Additionally, residual motions can be used in a rehabilitative context to accelerate physical rehabilitation using the preserved muscles, promoting neuromuscular recovery, supporting muscle tone, and preventing atrophy [15].
Despite their potential, using residual movements presents challenges, particularly the variability in motor function among individuals. This necessitates personalized calibration and adaptive algorithms for optimal control. Additionally, muscle fatigue and signal degradation over time can affect the long-term usability of these systems [16].
This study investigates a specific control approach for assistive robotic devices, outlined in [17], which leverages the user's residual motions. Although this control strategy has only recently been introduced and shows promise for individuals with various motor impairments, our focus is on adapting it for people with spinal cord injuries (SCIs). To achieve this, we first revisit the key aspects of the control algorithm and outline the modifications implemented to tailor it specifically to SCI users. The primary objective of our study is to conduct a statistical analysis involving 12 unimpaired subjects performing two tasks, in order to evaluate the system's accuracy, usability, and time performance before and after targeted training sessions. As a secondary goal, we test the same experimental protocol in a pilot study with two SCI subjects. This provides a broader perspective on the system's functionality with the intended population and assesses whether these users can complete the tasks within the desired maximum time. The pilot study will assist us in refining the control strategy and training sessions for future investigations specifically targeting the SCI population.
2. Method
Control Algorithm
Let us consider a human model and an assistive robot (e.g., prosthesis, avatar, supernumerary limb) as part of a unified kinematic chain, with $n_h$ human joints and $n_r$ robotic joints. The user's intent is defined in terms of reaching a desired goal $x_g$ with the robotic end-effector, whose pose $x_{ee}$ depends on the joint configuration $q = [q_h^\top, q_r^\top]^\top$, where $q_h$ and $q_r$ represent the human and robotic joint variables, respectively. The reaching error is defined as

$$e_r = x_g - x_{ee}(q).$$
Humans often attempt to reach the goal without leveraging the assistance provided by the robotic device. This results in unnatural body movements, which can lead to fatigue for the user [18]. To mitigate this, building on the work presented in Legrand et al. [19], the study in Feder et al. [17] introduces a reference frame, $\Sigma_b$, attached to a selected body segment (e.g., the shoulder). The target pose of this frame, when the user's body is in a relaxed configuration, is denoted as $x_b^*$, which makes it possible to evaluate the user's relative displacement from their relaxed configuration. The displacement error, expressed in terms of both translation and orientation, is defined as

$$e_b = x_b^* - x_b(q).$$
As outlined in [17], the human and robotic Jacobian matrices for the reaching task are denoted as $J^r_h$ and $J^r_r$, respectively. Similarly, for the body motions, we have $J^b_h$ and $J^b_r$. The overall system kinematics can then be expressed as

$$\begin{bmatrix} \dot{e}_r \\ \dot{e}_b \end{bmatrix} = -\begin{bmatrix} J^r_h & J^r_r \\ J^b_h & J^b_r \end{bmatrix} \begin{bmatrix} \dot{q}_h \\ \dot{q}_r \end{bmatrix}. \qquad (1)$$
The control objective is to compute $\dot{q}_r$ such that the error $e = [e_r^\top, e_b^\top]^\top$ asymptotically approaches zero. This is particularly challenging because the desired target $x_g$, and consequently $e_r$, is unknown to the robotic controller. To address this, the solution proposed in [17] integrates both the human and robotic components into a unified dynamic system with state $x$, control input $u = \dot{q}_r$, and measurable output $y$. This results in the following time-variant dynamic system:

$$\dot{x}(t) = A(t)\,x(t) + B(t)\,u(t), \qquad y(t) = C(t)\,x(t). \qquad (2)$$
The solution proposed in [17] involves implementing an asymptotic observer to estimate $x$, and hence the unmeasurable reaching error $e_r$. This estimate is then used in a linear quadratic Gaussian (LQG) regulator, whose gain matrix $K$ is iteratively recomputed during the motion. This control framework is adaptable to a variety of human–robot systems. Indeed, the approach translates user-performed residual motions into the robotic commands necessary for reaching targets, so that even slight user motions are sufficient to activate the controller. Once the user observes the robot beginning to move toward the desired target, they can return to their resting position. As highlighted in [17], residual motions are significantly more pronounced when the control is not activated, leaving the user fully responsible for the reaching task, compared to when the control is active, with the robotic device assisting the user in completing the task. This system effectively leverages the user's residual motions without actively amplifying them, as excessive residual motion could lead to pain and poor posture over time. Unlike traditional joint-to-joint mapping approaches, which require significant mental effort from the user to actively control each robotic joint individually, this approach allows the user to command multiple robotic degrees of freedom (DoFs) simultaneously by providing only a slight motion intention.
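For illustration, the following minimal Python sketch shows one discrete-time step of an observer-plus-LQG loop of this kind. It is a sketch under stated assumptions (a simple Euler discretization of the time-variant model (2), a precomputed observer gain, the state stacking the two errors, and generic weight matrices), not the implementation of [17]:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def lqg_step(A, B, C, L_obs, Q, R, x_hat, u_prev, y_meas, dt):
    """One illustrative discrete-time step of the observer + LQG loop.

    A, B, C : matrices of the time-variant model (2) at the current instant
    L_obs   : observer gain (assumed precomputed here)
    Q, R    : LQR weight matrices, tuned to the task and user's abilities
    x_hat   : current state estimate, assumed to stack [e_r; e_b]
    """
    # Euler discretization of the continuous-time model.
    Ad = np.eye(A.shape[0]) + A * dt
    Bd = B * dt

    # Observer: predict, then correct with the measurable output y.
    x_pred = Ad @ x_hat + Bd @ u_prev
    x_hat = x_pred + L_obs @ (y_meas - C @ x_pred)

    # The gain K is recomputed at every step because A and B vary in time.
    P = solve_discrete_are(Ad, Bd, Q, R)
    K = np.linalg.solve(R + Bd.T @ P @ Bd, Bd.T @ P @ Ad)

    # Robot joint velocity command driving the estimated errors to zero.
    u = -K @ x_hat
    return x_hat, u
```

Slowing the regulator's response to brief error transients, as discussed later for SCI users, amounts to increasing the penalty encoded in R relative to Q.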
In this study, we focus on the application of this control to disconnected human–robot systems. In such scenarios, users immerse themselves in the remote environment of the robotic avatar, moving their bodies and controlling the robotic end-effector as if they were physically embodied in the avatar. For these cases, $J^r_h$ in (1) is zero, as human joints do not actively participate in the reaching task, unlike in connected human–robot systems. Additionally, $J^b_r$ is also null, because robotic joints do not influence the pose of $\Sigma_b$. In [17], the frame $\Sigma_b$ is placed along the user's right shoulder. Its pose, as well as all other human joint values, can be directly measured using inertial measurement unit (IMU) sensors placed along the user's upper body. For this purpose, XSens MVN sensors can be employed, providing the pose of all joints in the user's biomechanical model, denoted as $q_h$. In particular, the key sensors for this control strategy are those placed on the pelvis, trunk, and shoulder. During the control loop, the error $e_b$ can be directly measured as the difference between the pose of the right shoulder relative to the human pelvis reference frame at the starting time, $t_0$, and its pose at the current time. This formulation incorporates within this term all residual motions of the user's upper body up to the shoulder. Using the joint angles and body measurements, the Jacobian matrices can also be computed at each time step. The results presented in [17] demonstrate that, with this experimental setup, the user can incline and rotate their trunk and right shoulder to control virtual prosthetic arms with 3 DoFs, 4 DoFs, and 7 DoFs attached to their digital twin, as well as a physical robot avatar system, to reach the target. The study highlights that it is possible to estimate the reaching error, and consequently the goal pose, based on the user's motion intentions. This enables robotic devices to reach the intended target without requiring the user to explicitly specify it, allowing the user to focus on the desired goal rather than directly controlling each robotic joint individually. Since the user plays a critical role in guiding control, the framework's effectiveness largely depends on their residual motion capabilities. This necessitates tailoring the control system to the user by fine-tuning the weight matrices in the controller to match task requirements and the user's motion abilities. Another key factor is the choice of the frame $\Sigma_b$: its placement can be adjusted to optimize the control strategy, aligning it more closely with the user's individual motion capabilities. While the control approach presented in [17] demonstrates promising adaptability across various systems and user conditions, it lacks validation through multi-subject tests that comprehensively assess its usability and feasibility. In this study, individuals with spinal cord injuries are the target population, with a focus on their residual movement capabilities, which can affect both the tuning of the weight matrices and the placement of the $\Sigma_b$ frame. Our goal is to adapt the control algorithm for robotic avatar systems to meet the specific needs of this population. We first perform multi-subject tests with unimpaired participants to evaluate the system's usability, accuracy, and performance in navigation and reaching tasks before and after tailored training sessions. Secondly, we describe a pilot study with two SCI subjects, conducted to test the training experimental protocol with them and to verify whether they were able to perform the proposed tasks.
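To illustrate how the displacement error $e_b$ can be obtained from the motion-capture stream, the sketch below computes a 6D translation-plus-rotation error of a tracked segment (the shoulder in [17], the upper arm in our adaptation) relative to the pelvis frame, against the reference pose recorded at $t_0$. All function and variable names are illustrative and not part of the XSens API:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def displacement_error(p_seg, q_seg, p_pelvis, q_pelvis, p_ref, q_ref):
    """Displacement error e_b of a tracked body segment, expressed in the
    pelvis frame, relative to the relaxed reference pose.

    p_* are 3D positions, q_* are quaternions (x, y, z, w) from the
    biomechanical model; p_ref, q_ref are the segment's pose in the
    pelvis frame captured at the starting time t0.
    """
    # Current segment pose expressed relative to the pelvis frame.
    R_pelvis = R.from_quat(q_pelvis)
    p_rel = R_pelvis.inv().apply(p_seg - p_pelvis)
    R_rel = R_pelvis.inv() * R.from_quat(q_seg)

    # Translational component of the error.
    e_pos = p_ref - p_rel

    # Orientation component as a rotation vector (axis * angle).
    e_rot = (R.from_quat(q_ref) * R_rel.inv()).as_rotvec()

    return np.concatenate([e_pos, e_rot])  # 6D error [translation; rotation]
```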
3. Experiments
To evaluate the effectiveness and functionality of the previously introduced control algorithm, we test its application in a humanoid robotic avatar scenario, where the users control the avatar to reach a predefined pose, $x_g$, within a virtual environment. This section outlines the experimental protocol, detailing the setup, involved subjects, and conducted experiments.
3.1. Experimental Setup
We performed all experiments in a virtual environment developed using the Robot Operating System (ROS) Noetic framework, integrated with the Gazebo simulation environment. The experimental setup relies on two main technologies: MVN XSens 2021.2 inertial measurement units (IMUs) (https://www.movella.com/products/motion-capture/xsens-mvn-awinda, accessed on 9 February 2025) and the two-wheeled, self-balancing humanoid robot Alter-Ego (https://softbots.iit.it/service-robots, accessed on 9 February 2025). The robot's kinematics is used to define the matrix $J^r_r$, the Jacobian of the entire robotic chain from the base to the right end-effector, which accounts for both the wheels and the five right-arm joints.
The experiments focus on using upper-body movements as control inputs, tracked via nine XSens IMU sensors. Users look at the robotic avatar on a screen and adopt its perspective, treating the robotic avatar as an extension of their own body (see Figure 1).
3.2. Tailoring the System to the SCI Context
The primary objective is to determine whether the robotic joint controller enables users to achieve the target $x_g$ while minimizing their own motions, i.e., bringing $x_b$ back to $x_b^*$. In terms of error coordinates, this corresponds to stabilizing $e_r$ and $e_b$ at zero. This study aims to assess how intuitively users interact with the system, focusing on how quickly and effectively they learn to control it with minimal prior training. Key performance indicators include target accuracy, task completion time, and success rates.
Given the focus on the SCI context, we made several changes to the control algorithm. These include reducing sensor requirements and introducing a dead-band zone around the target posture to mitigate involuntary reflex motions, a common challenge for SCI subjects, thereby enhancing controller stability. Additionally, accurately fine-tuning the weight matrices allows us to adjust the control system's response to the user's motions. This helps prevent the system from reacting to involuntary movements, such as muscle spasms, which SCI subjects may experience. By appropriately adjusting the gains, we can slow down the control system's response, ensuring that brief, rapid involuntary motions, like spasms, do not cause significant changes in the robot's movements. Instead, the control algorithm is more effectively triggered by sustained, intentional motions. The same tuning is applied for all 12 unimpaired subjects and the two SCI participants. Additionally, as many SCI subjects retain partial control of the upper arm, we aimed to maximize the use of the available DoFs. Unlike Feder et al. [17], where the frame $\Sigma_b$ was aligned with the user's shoulder, we opted to shift it to the next IMU sensor, placed on the right upper arm. Other sensors along the arm (e.g., wrist and hand) are required by the XSens system but do not actively contribute to the control algorithm. Finally, extensive empirical tuning of the control gains was required to optimize the algorithm's response to motion requests.
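A minimal sketch of the dead-band idea is given below, with illustrative threshold values rather than the ones actually used in the study; error components inside the band are zeroed so that brief involuntary motions produce no robot command:

```python
import numpy as np

def apply_dead_band(e_b, pos_thresh=0.02, rot_thresh=0.05):
    """Zero out the displacement error inside a dead-band around the
    target posture, so that small involuntary motions (e.g., brief
    spasms) do not trigger robot commands.

    e_b : 6D error [translation (m); rotation (rad)], as measured.
    The threshold values here are illustrative only.
    """
    e = e_b.copy()
    if np.linalg.norm(e[:3]) < pos_thresh:
        e[:3] = 0.0
    if np.linalg.norm(e[3:]) < rot_thresh:
        e[3:] = 0.0
    return e
```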
3.3. Experimental Protocol
The experimental protocol, summarized in Figure 2, consists of multiple phases and lasts up to 90 min. Although real-world experiments can be performed using the physical Alter-Ego humanoid robot, we conducted this study in a simulated environment with the robot's virtual twin. This choice ensures a safer, more cost-effective testing process, accelerates development, and provides versatility for various applications, as discussed in Augenstein et al. [20]. Furthermore, the simulation code can be directly deployed on the real robot without modifications.
To meet SCI subjects' needs, we conducted the experiments for all subjects (including the calibration phase) in a seated position.
The protocol begins with sensor placement and calibration, followed by a one-minute free exploration (FE). During this phase, participants freely control the robot in the virtual environment using upper-body movements, learning that the system allows simultaneous control of the robot’s base and right arm. The study then advances to a test phase with two assessments, referred to as Test 1 and Test 2, focusing on navigation and reaching tasks.
Test 1 (Figure 3a): Participants are given 5 min to guide the robot to three targets. The first two targets are on the floor, requiring navigation using the robot's base, while the third target is accessed using the robot's end-effector. The optimal strategy involves aligning the robot's base with the floor targets and using the end-effector to touch the center of the final target.
Test 2 (Figure 3b): This more complex scenario evaluates the participants' ability to interact with everyday objects. Here, they must use the robot's end-effector to reach a red glass placed on a table.
As evidenced by the metrics discussed in Section 3.5, Test 1 focuses on providing quantitative measures for system analysis, whereas Test 2 offers a comprehensive qualitative overview.
After the initial tests, participants proceed to a training session designed to teach control strategies through five progressive sub-phases (Figure 4), promoting step-by-step learning.
The first sub-phase introduces control of the robot's base rotation around the vertical Z-axis using the participant's right arm. By rotating their arm along its longitudinal axis, participants learn to rotate the robot base clockwise (−90°, right) and counterclockwise (+90°, left), as shown in Figure 5a. In the figure, the red arrow represents the residual motions performed by the user, while the yellow arrow indicates the corresponding robotic motions. As observed, at the end of each training sub-phase, once the user is satisfied with the robot's achieved position, they return to their target posture, bringing the robotic joint velocities to zero.
The second sub-phase focuses on controlling the robot's forward motion along the X-axis. As depicted in Figure 5b, participants move their right upper arm forward to guide the robot along a linear trajectory.
The third sub-phase introduces control of the robot's end-effector. By moving their right upper arm along the Z-axis and Y-axis, participants can raise the robot's arm, as shown in Figure 5c.
The fourth step combines the strategies learned in the second and third phases. Participants are required to move the robot forward, return to the initial N-pose, and then raise the robot's end-effector. The final training phase integrates all strategies from the first three steps. Participants move the robot forward while simultaneously turning it to the right, then return to the N-pose to raise the robot's arm. All training steps conclude with participants resetting to their initial position.
After training, participants repeat the same test sessions (Test 1 and Test 2) to evaluate the training’s effectiveness.
Finally, participants complete three questionnaires for qualitative feedback. The National Aeronautics and Space Administration Task Load Index (NASA TLX) [21] questionnaire measures perceived workload, the System Usability Scale (SUS) [22] evaluates usability and intuitiveness, and the Negative Attitude toward Robots Scale (NARS) [23] gathers insights on participants' overall experience with robots.
3.4. Subjects
This study was conducted in collaboration with the Spinal Cord Unit of the Santa Corona Hospital of Pietra Ligure (Savona, Italy). The study was approved by the Institutional Review Board (code CE DIBRIS protocol N. 2022/52, approved on 22 September 2022) and conformed to the ethical standards of the 1964 Declaration of Helsinki. Each subject provided written informed consent to participate in the study and to publish individual data. The study involved 12 unimpaired subjects (age: 26.9 ± 2.6 y.o., 3 females), with the only exclusion criterion being the presence of sensorimotor, neurological, or psychiatric impairments. All participants were naive to the control algorithm to avoid biases from prior knowledge. To evaluate the potential application of the control algorithm in assistive devices, a pilot test was conducted with two individuals with spinal cord injuries, providing feedback from potential end-users. The SCI subjects were classified based on injury level (cervical, thoracic, lumbar, sacral) and completeness (complete or incomplete). The American Spinal Injury Association (ASIA) provides a standardized international classification system [24], which grades the severity of the injury from A (complete) to E (normal). The first participant (SCI 1) was a 35-year-old tetraplegic male (C7 lesion, ASIA B), and the second (SCI 2) was a 34-year-old paraplegic male (L1 lesion, ASIA C). Both subjects followed the same experimental protocol as the unimpaired participants. The inclusion of the tetraplegic subject is motivated by the potential of this control system to exploit minimal residual upper-body motions for operating the external device, which can significantly assist in performing ADLs. The paraplegic subject is also a valuable participant for testing the new control algorithm, as individuals with paraplegia often face fatigue-related challenges when performing continuous activities requiring extensive movements. Using this control system can help reduce the effort and fatigue associated with such tasks.
3.5. Metrics and Statistical Analysis
As previously described, in Test 1 the robot is required to reach three targets, providing data for calculating three metrics to assess participants' performance: distance accuracy, touch accuracy, and normalized time.
The sign test, a nonparametric statistical test, was conducted in MATLAB (MathWorks, Natick, MA, USA) to evaluate differences in performance metrics between the pre-training and post-training phases. Specifically, the analysis focused on the results from Test 1, examining whether performance improvements across all three metrics were statistically significant after the training sessions.
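For reproducibility, the paired sign test can be reproduced in a few lines of Python. The sketch below, based on SciPy's binomtest, illustrates the procedure; it is not the script used for the analysis:

```python
import numpy as np
from scipy.stats import binomtest

def sign_test(pre, post):
    """Paired two-sided sign test, analogous to MATLAB's signtest.

    Counts how many subjects improved (post > pre), drops ties (they
    carry no sign information), and tests the count of positive signs
    against a fair coin (p = 0.5). Assumes at least one non-tied pair.
    """
    diff = np.asarray(post) - np.asarray(pre)
    diff = diff[diff != 0]
    n_pos = int(np.sum(diff > 0))
    return binomtest(n_pos, n=len(diff), p=0.5).pvalue

# Usage with hypothetical pre/post accuracy values for 12 subjects:
# p = sign_test(pre=[0.41, 0.52, ...], post=[0.68, 0.61, ...])
```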
4. Results
This section is divided into four subsections: the first presents the statistical analysis of the data from the 12 unimpaired subjects in Test 1; the second focuses on the success rates of the 12 unimpaired subjects in both tests (Test 1 and Test 2); the third examines users' feedback from the questionnaires; and the last is devoted to kinematics and algorithm validation.
4.1. Statistical Analysis
Here, we report the statistical analysis related to the unimpaired subjects. We provide global graphs that also include the indices for the two SCI subjects; however, they were excluded from the statistical analysis due to the limited sample size and the differences in their injury levels, which prevent meaningful statistical comparisons. For clarity, throughout the analysis their data are presented for visual inspection only: the tetraplegic subject is identified by a red dot and the paraplegic subject by a green one.
Figure 6a–c show the box-plots for the three metrics before and after training. Distance accuracy significantly increased ($p<0.05$), suggesting that participants became more precise in navigating the robot toward the targets. Similarly, touch accuracy showed a significant improvement ($p<0.05$), indicating an enhanced ability to align the robot base or end-effector with the target centers after training. In general, an increase in successful touches across subjects can be observed, highlighting a tendency to improve after the training session. However, the normalized time metric did not exhibit a statistically significant change ($p>0.05$), suggesting that, while participants became more accurate in reaching targets, time efficiency did not notably improve, potentially due to a cautious approach in achieving higher precision. It is possible that users prioritized accuracy over speed, especially after becoming more familiar with the system.
For the third target, accuracy was analyzed using the minimum distances of both the robot base and the end-effector. Figure 7a shows the distribution of the accuracy between the base and the third target, while Figure 7b shows the distribution of the accuracy in terms of end-effector distance from the third target. After training, one outlier is identified, corresponding to subject S05. The outlier was identified as the data point falling outside the lower whisker of the box-plot, meaning it is more than 1.5 times the interquartile range below the lower quartile. In our test, this indicates that the minimum distance between the end-effector and the third target was much larger than that achieved by the other subjects, suggesting poorer performance in controlling the end-effector. This results in a null success rate for this subject in reaching the third target, as further clarified in the next subsection. However, since the overall performance of this outlier still shows improvement between pre- and post-training, except for the end-effector accuracy metric (Figure 7b), we decided to include this subject in the subsequent analyses.
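The criterion above is the standard box-plot (Tukey) rule; a short illustrative sketch of how such points can be flagged is given below (the function name is ours, not the analysis script):

```python
import numpy as np

def iqr_outliers(values):
    """Flag box-plot outliers: points more than 1.5 * IQR below the
    lower quartile or above the upper quartile (Tukey's rule)."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [v for v in values if v < lower or v > upper]
```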
The increased accuracy for the third target when using the robot base ($p<0.05$) and the end-effector ($p<0.05$) confirms that participants improved their control and spatial awareness, even in more complex tasks. For the two SCI participants (red dot for SCI 1 and green dot for SCI 2), we observe that SCI 2 performed worse than SCI 1. This can be attributed to the fact that the control system was primarily tailored to a tetraplegic injury level. Consequently, it was more customized for SCI 1, allowing a better response to the smaller input signals typically associated with tetraplegic subjects. The results for SCI 1 are favorable, frequently showing performance above 50% and notable improvements between the pre-training and post-training phases. In contrast, SCI 2 often demonstrates metrics below 50%, which can be explained by the system's design being more suited to tetraplegic subjects. Despite this, the system remains promising, as one of the primary objectives is to assess improvements between pre-training and post-training phases, which are also evident for SCI 2.
4.2. Success Rates
In addition to the statistical analysis conducted on Test 1, we also examined the success rates for both Test 1 and Test 2, considering only the 12 unimpaired subjects. Figure 8a shows the success rate histogram for Test 1, in which the two bars for each target represent the percentage of successful touches before and after training, respectively. The success rate analysis further supports the positive impact of training. For instance, all 12 participants were able to reach the first target both before and after training. However, the second and third targets presented greater challenges initially, with pre-training success rates of 41.67% and 16.67%, respectively. Post-training, the success rates increased markedly, to 100% for the second target and 75% for the third, indicating an improvement in user proficiency over time. Following the identification of an outlier in the previous subsection, it is evident from this analysis that the outlier is part of the 25% who failed to reach the third target after training.
For Test 2, we evaluated the success rate of the 12 unimpaired subjects in reaching the red glass on the table. As depicted in Figure 8b, the percentage of successful touches improved from 42% to 83%, further emphasizing the effectiveness of the training sessions. As for the two SCI participants, due to the limited sample size we do not report any statistical analysis, but we can highlight that, in Test 1, SCI 1 successfully reached all three targets both before and after training. On the other hand, SCI 2 successfully reached only the first target both before and after training, while the second and third targets were never reached. In Test 2, SCI 1 failed to reach the glass both before and after training, whereas SCI 2 did not reach the glass before training but successfully reached it afterwards.
4.3. Questionnaires
Results from the NASA TLX survey for the 12 unimpaired subjects (Figure 9a) can be summarized as follows: mental effort (median: 12, range: 3–17), physical fatigue (median: 6.5, range: 1–12), temporal demand (median: 10, range: 1–14), effort (median: 13, range: 2–16), frustration (median: 10, range: 4–13), and performance (median: 15.5, range: 7–20). Figure 9b compares the median values for each question across the unimpaired subjects with the answers given by the two SCI participants, identified as before by the red and green colors, respectively.
From the SUS, a usability score of 53.86 out of 100 is obtained (SD: 16.10, min: 25, max: 75). Figure 10a reports the maximum, minimum, and median values for each question of the questionnaire over all 12 unimpaired subjects. In Figure 10b, the median values of the unimpaired subjects are shown together with the answers of the two individuals with spinal cord injuries.
The NARS questionnaire results, divided into the three sub-scales, are the following:
Sub-scale S1: Attitude toward interactions with robots has a median value of 1 (mean: 1.33, SD: 0.52).
Sub-scale S2: Attitude toward social influence of robots has a median value of 2 (mean: 2.20, SD: 0.45).
Sub-scale S3: Attitude toward emotional aspects in robot interaction has a median of 3 (mean: 2.67, SD: 1.53).
Results from the NASA TLX and SUS questionnaires provide important insights into the cognitive and physical demands of the system. The NASA TLX survey reveals a high level of mental effort (median: 12) and effort (median: 13), with moderate levels of physical fatigue (median: 6.5) and temporal demand (median: 10), indicating that the system requires significant cognitive engagement while the physical strain remains relatively manageable for most participants. Notably, frustration levels are moderate (median: 10), which may be due to the initial challenges in controlling the robot reported by participants. The self-assessed performance score is very high for 11 out of 12 subjects (median: 15.5). The System Usability Scale (SUS) score of 53.86 out of 100 indicates that the system's usability is below average. Scores in the range of 50–60 are often classified as "marginal", implying that the system is acceptable for some users but not optimal. This finding underscores the need for further refinement to enhance ease of use, reduce cognitive load, and improve efficiency.
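For reference, the SUS score reported above follows the standard Brooke scoring scheme, sketched below with hypothetical responses:

```python
def sus_score(responses):
    """Standard SUS scoring: 10 items rated 1-5; odd-numbered items
    contribute (rating - 1), even-numbered items (5 - rating); the sum
    is scaled by 2.5 to map onto a 0-100 range."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Example: an all-neutral response pattern yields the midpoint score.
print(sus_score([3] * 10))  # -> 50.0
```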
Analyzing the results of the two SCI subjects (Figure 9b and Figure 10b) and comparing them with those of the unimpaired subjects (Figure 9a and Figure 10a), both the NASA TLX and SUS responses indicate a generally lower perceived workload, as well as greater perceived usability and intuitiveness of the system. This provides valuable insights for a future application of the system in the field of spinal cord injuries.
The NARS questionnaire results reveal a generally positive attitude toward robot interactions. Participants expressed a very favorable perception in terms of general interaction (S1, median: 1) and social influence (S2, median: 2). However, attitudes toward emotional aspects of interacting with robots (S3) were more neutral and varied, as indicated by a higher standard deviation.
4.4. Kinematics and Algorithm Validation
Important insights can also be drawn from outcomes related to kinematics and the functioning of the algorithm. As previously mentioned, the goal of the control algorithm presented in [17] is to generate commands for the robotic joints ($\dot{q}_r$) to reach a desired target, $x_g$, known only to the human.
In this section, we briefly describe the validation of the algorithm. Given the ultimate focus on SCI subjects, the tetraplegic participant, SCI 1, was selected as the primary subject for this analysis.
For Test 1, since the position of $x_g$ is predefined in the virtual environment for the three targets (denoted as $x_{g_1}$, $x_{g_2}$, and $x_{g_3}$), a straightforward comparison with the actual positions can be made. Figure 11a shows the position achieved by the robot's base in the x and y coordinates before training (blue) and after training (magenta), compared to the desired positions for the first two targets, $x_{g_1}$ and $x_{g_2}$. Figure 11b illustrates the comparison between the actual and desired end-effector positions for the third target, $x_{g_3}$, along all three coordinates, both pre- and post-training. As observed, the participant successfully reached all targets even during the pre-training phase. However, the time required to achieve these targets was significantly reduced in the post-training phase.
To evaluate the effectiveness of the training sessions, we analyze the residual motions performed by the subject, SCI 1. Figure 12 illustrates the pose of the user's frame, $\Sigma_b$, positioned on the right upper arm, in terms of both position and orientation, before and after the training sessions for the entire Test 1. As shown, the movements performed post-training are significantly smoother and less pronounced than those observed during the pre-training session. Focusing specifically on the translational motion, we observe that, after training, the movement remains smooth, slight, and consistent throughout the task, without sudden changes that could indicate a muscle spasm or an involuntary motion at any given moment. However, as highlighted in the discussion section, further improvements are needed to enhance the control's ability to distinguish between voluntary and involuntary movements. Regarding the orientation, more pronounced fluctuations can be observed after training. This is due to variations in the orientation component of the user's frame, $\Sigma_b$, particularly when the last of the three targets is reached. Specifically, at this stage, the user must raise their upper arm, as shown in the third sub-phase of the training sessions in Figure 5c. This motion results in fluctuations in the orientation range.
5. Discussion
All 12 unimpaired subjects and the two subjects with SCI completed the entire experimental protocol, demonstrating the overall robustness and accessibility of the system. The primary goal of the data analysis was to assess whether a general improvement could be observed between pre-training and post-training tests, in order to evaluate the effectiveness of the training and the usability of the system. Interestingly, SCI participants achieved performances comparable to those of unimpaired subjects, especially in terms of time, touch accuracy, task completion, and success rate, highlighting the potential of this robotic system to support users with different levels of motor function. Their performance and success rates were evaluated individually, with the results analyzed separately. This approach allows us to conclude that, for these two subjects, the results appear promising, as improvements between the pre-training and post-training phases are evident, even though SCI 2's performance appears poor compared to the others.
The three main performance metrics (distance accuracy, normalized time, and touch accuracy) provided valuable insights into the impact of training on users' interactions with the system.
Statistical analysis revealed that participants became significantly more precise in navigating the robot toward the targets following the training sessions. Similarly, improvements were observed in the third metric, touch accuracy. However, the time metric did not show a statistically significant change, indicating that participants prioritized achieving higher precision over reducing the time to reach the targets.
Additionally, the success rates achieved during both tests demonstrated marked improvements post-training. Nevertheless, the fact that some participants did not reach the final target can be attributed to the limited maximum time available for completing the task.
Regarding the questionnaires, participants overall had a positive attitude toward robots in the first two NARS sub-scales (S1 and S2), with a slight decrease in the second. The third sub-scale (S3) revealed more distributed opinions: this variability suggests that, while some participants were comfortable engaging with robots on an emotional level, others were less so, reflecting diverse attitudes toward the social and affective dimensions of human–robot interaction. Regarding the algorithm validation and system kinematics, the results confirm the overall correct functioning of the control algorithm, as it effectively translates the user's residual motions into corresponding robot movements. This process generally leads to the convergence of both errors, $e_r$ and $e_b$, toward zero.
To illustrate this, our analysis focused on the tetraplegic subject, who successfully reached all targets both before and after the training sessions. The results show a significant reduction in the time required to complete the tasks post-training, demonstrating the effectiveness of the training steps for the subject.
Additionally, the analysis of residual motions highlights the subject’s learning process. During the pre-training phase, the subject tended to explore the task by performing more pronounced movements. However, after learning the strategy, post-training movements became minimal. The subject relied primarily on small displacements of the right arm, reserving arm orientation adjustments for the final phase of the task, specifically when the robot’s arm needed to be raised.
Overall, the results from the pilot study involving SCI subjects highlight the potential of this control strategy. Although additional SCI participants are needed to fully assess the learning process during training sessions, this pilot study suggests promising effectiveness. Furthermore, involving more SCI subjects will require a more detailed investigation to enable the system to better discriminate between voluntary and involuntary motions. The two SCI participants in this study did not exhibit significant involuntary motions, and the fine-tuning of the weight matrices we adopted was sufficient for the control to function correctly, preventing the system from responding excessively to potential muscle spasms. However, other subjects may present more pronounced involuntary motions or long-lasting motor unit activities, for which additional corrective measures will need to be implemented in the control strategy. Based on the questionnaires, the two SCI subjects also confirmed the system's usability. Furthermore, after training, we observed how slight and consistent motions by the users enabled the robot avatar to reach the desired goal, demonstrating how residual motions can be effectively used to control the BoMI system. A challenge for SCI users in controlling the BoMI system is the variability in injury levels: even among individuals with the same injury level, the range of possible motions can differ significantly. Although there is room for improvement, this control strategy shows promise in adapting to different types of users, thanks to the flexibility of placing the $\Sigma_b$ frame according to the specific residual motions available to each individual.
6. Conclusions
The findings of this study highlight several promising applications for the proposed system. The results demonstrate that the new control algorithm effectively leverages residual upper-body motions to control an external device with a higher number of degrees of freedom (DoFs). Moreover, the algorithm accurately estimates the target to be reached, as evidenced by the convergence of both errors, $e_r$ and $e_b$. Post-training analyses also reveal that residual motions become less pronounced, indicating that subjects learn to perform tasks more efficiently over time. These preliminary results suggest that this control strategy can assist the SCI population in operating assistive devices, such as a humanoid avatar, by leveraging their residual motion capabilities. For instance, a slight motion of the upper arm can be sufficient to command the robot to move toward an object the user intends to grasp.
The learning process observed is encouraging, with indices computed after the training sessions often outperforming those obtained beforehand. While the perceived workload is generally moderate, there is room for improvement in system usability.
Future studies could explore adapting the control framework by positioning the user's reference frame, $\Sigma_b$, on alternative body parts, such as the shoulder, depending on the subject's characteristics and level of impairment. This approach could expand the applicability of the system to a broader range of individuals with varying degrees of injury.
Finally, experiments with the real robot Alter-Ego and additional participants are necessary to further validate the system and strengthen statistical robustness. Although the virtual environment has provided the opportunity to test the experimental protocol across different scenarios and setups, we aim to test the same protocol with a real robot. Since the control algorithm and the robot's kinematics remain unchanged, success rates and performance similar to those observed in the virtual case can be expected. However, a different perception by the subjects, particularly in terms of usability and questionnaire responses, can be anticipated. From a perceptual standpoint, a real robot physically reaching the desired object, as opposed to a simulated one, may enhance the perceived potential of this system for real-world applications within the SCI context.
To sum up, the results confirm the effectiveness of the proposed approach and suggest that we are on the right path to enhancing assistive technologies for individuals with SCI.